It’s Time for an Artificial Intelligence Policy

If we could turn back the clock 20 or 30 years, I don’t think you’d have believed me if I’d told you that, in the not-too-distant future, your organization would need a policy addressing employees’ use of artificial intelligence.  That might have landed about the same way as me telling you that you’d soon be able to beam up to a spaceship.

But here we are, midway through 2025, and the need for such a policy is now plain.  It seems weird to me that the word “human” would need to appear in any company policy, but such is life in the 21st century.

A.I. can’t do everything your employees can do, but it can do many things, and your staff have relatively easy access to it.  So, if you think they are not going to use it to make their work lives simpler, you’re fooling yourself.

We Have Enough Policies!!!

You may feel that your policy manual is thick enough already, and that nobody is going to read or comply with yet another dictate from on high.  But, hey, what’s one more policy added to the pile?  If you ever need to rely on the policy, you’ll be glad to have it.

The issue is, quite simply, employees using A.I. to do their work for them.  Whether that’s research, report writing, generating forecasts, developing business plans and budgets, writing code, generating images or video, or a hundred other things, you probably don’t want your employees effectively contracting out their work to the internet.

Of course, the internet is now our primary information source (it’s long past time to recycle those Funk & Wagnalls encyclopedias, mom!), so it doesn’t make much sense to bar employees from using it.  The problem is that a product of the internet age can now produce the end-result work product for employees.

It’s a problem for many, many reasons.  First, of course, the output of A.I. searches and requests may simply not be accurate.  We’ve all heard, for instance, about lawyers using A.I. to generate case references for use in court, only to find that some or all of the cases cited didn’t exist.  That’s not good for their client relationships or their professional reputations.

It’s also an issue because the A.I. results may not directly address the problem you’re trying to solve.  You’ll get a response, but it may not be the response you needed.  And, of course, there is the matter of privacy laws and the risk that distributing A.I.-generated materials may violate them.

Without a doubt, there are dozens of other risks and issues we haven’t even thought of, yet.

What’s The Point of the Policy?

The point of an A.I. policy, I’d say, should be to strike a balance: give your staff access to current, sophisticated online information sources while ensuring that all such material is used appropriately and responsibly.  If A.I. tools can enhance your employees’ effectiveness and keep your company competitive, it would be foolish to ignore them; at the same time, boundaries must be placed on their use.

At its essence, the objective of A.I. in business is (for the moment, anyway) to support – but not replace – your staff in the performance of their duties.  The human touch remains critical, and it’s supremely important that employees understand this.

What Key Points Should the Policy Address?

Although I’m certain this list is not exhaustive, here are some high-level things your policy should emphasize.  I’m calling this my “A.I. Six-Pack” of core principles.

  1. A.I. is a reference/support tool and, regardless of the source, employees are always responsible for their work product.
  2. Only company-approved A.I. tools and platforms may be used, and only for company-approved purposes.
  3. All A.I.-generated information and work product must be human-reviewed/authenticated before being relied upon.
  4. Uploading of company/client/stakeholder/employee information to A.I. tools is never permitted without prior approval and controls to protect confidentiality and privacy.
  5. Use of and/or reliance upon A.I. sources must always be proactively disclosed on all resulting work product.
  6. All A.I.-generated information and work product stored on company systems must be clearly marked as such (so that the person who relies on it 5 years later knows where it came from).

These core principles should infuse any company policy addressing employees’ use of A.I. for work purposes.  The resulting policy will, of course, be far more detailed, and input from a human resources professional may be helpful for employers embarking on this process.

I can, for instance, recommend my good friends at ConnectsUs HR, an online provider of on-demand HR services for startups and small businesses (typically 15 to 150 non-unionized workers) in B.C., Alberta and Ontario.  I’ve dealt with ConnectsUs for many years, and I know that they are alert to issues relating to A.I. use in business and related policy development.

ConnectsUs recently published an article on this precise topic, which I recommend: https://bit.ly/40CbQ8H.  Happy reading, and don’t lose hope that you’ll one day be able to beam up to a spaceship.

____________

 

This item is provided for general information purposes only and is not intended to be relied upon as legal advice. Informed legal advice should always be obtained about your specific circumstances.
