By Dana Dobbins
Artificial intelligence (AI) is becoming increasingly prevalent in workplaces, providing new opportunities as well as new challenges for employers and employees. While AI has the potential to improve efficiency and productivity, its use also raises important questions around issues like privacy, discrimination, and job displacement. Employers who choose to implement AI should consider including a provision in their employee handbook, or a separate policy, specifically addressing its use. Such a provision or policy can help mitigate risks, provide clarity for employees, and demonstrate an employer’s commitment to using AI ethically and responsibly.
Employers who incorporate AI into the workforce should develop policies governing appropriate use of generative AI, regularly update those policies as laws and technology continue to change, and enforce their policies. Employers should consider the following provisions in their AI use policies:
Specify Which Employees May Use AI and Require Prior Approval
For any number of reasons, employers may be willing to let some teams or groups, but not others, use generative AI technology, especially while the employer is still examining how AI can be incorporated into its company or industry. An AI policy should specify which departments, if any, are permitted to use AI, and should consider requiring employees to obtain approval before adopting any new AI tool.
Determine Which Tasks May Be Performed Using AI
Similarly, employers should define which tasks can be performed using AI. For example, an employer may approve its human resources team’s use of AI for screening initial applicants (which presents its own host of issues, including bias), but prohibit the team from using AI to develop employment contracts or craft termination letters.
Make Employees Responsible for Outcomes
To ensure accountability, every AI use policy should explain that employees, as human beings, are ultimately responsible for the final product or output created or inspired by AI. This means employees should fact-check output, including (as appropriate) confirming that bias has not been introduced.
Prohibit Submission of Trade Secrets and Other Confidential Information
One of the biggest risks associated with generative AI is the possible loss of patent or trade secret rights, or breach of nondisclosure agreements with other entities, through the submission of sensitive or confidential information. For example, under U.S. patent law, the public disclosure of inventive information may invalidate potential patent rights. See 35 U.S.C. § 102. Submitting sensitive information to generative AI without the proper protective measures may also be considered a public disclosure that waives protections for trade secrets or other confidential information. Further, information submitted to an AI tool may be used in unintended ways, such as to train the AI model. For these reasons, companies should clearly define “confidential information” and/or “trade secrets,” and prohibit the submission of such sensitive data to AI tools.
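Some companies backstop a policy like this with a technical guardrail that screens prompts before they reach an AI tool. The sketch below is purely illustrative: the marker list and function name are invented, and simple keyword matching is no substitute for a carefully drafted legal definition of confidential information.

```python
import re

# Hypothetical markers of sensitive content; a real policy would define
# "confidential information" and "trade secrets" far more precisely.
CONFIDENTIAL_MARKERS = [
    r"\bconfidential\b",
    r"\btrade secret\b",
    r"\battorney[- ]client\b",
    r"\bpatent pending\b",
]

def flag_sensitive(prompt: str) -> list[str]:
    """Return the marker patterns found in a prompt, for review
    before the prompt is submitted to a generative AI tool."""
    return [
        pattern
        for pattern in CONFIDENTIAL_MARKERS
        if re.search(pattern, prompt, flags=re.IGNORECASE)
    ]

# Example: a prompt containing a flagged term would be held for review.
hits = flag_sensitive("Summarize our confidential pricing model.")
if hits:
    print("Blocked pending review; matched:", hits)
```

A screen like this catches only obvious cases; it supplements, rather than replaces, employee training on what may not be submitted.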
Consider Requiring Use Logs and Other Reporting
Employers can promote transparency and accountability by encouraging or requiring clear documentation of when and how employees use AI tools. Reporting or logging requirements can be flexible and tailored to each business. Consider when, to whom, and how often an employee should document their AI use, including whether the documentation should cover input, output, or both.
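To make the documentation concrete, a minimal log record might capture fields like those below. This is an illustrative sketch, not a prescribed format: the field names and the example tool name are invented, and each business should decide for itself what to record.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIUseLogEntry:
    """One employee's record of a single AI-assisted task."""
    employee: str        # who used the tool
    tool: str            # which AI tool was used
    task: str            # what the tool was used for
    input_summary: str   # what was submitted (summarized, not verbatim)
    output_summary: str  # what was produced and how it was reviewed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry as it might be reported to the oversight contact.
entry = AIUseLogEntry(
    employee="jdoe",
    tool="Example-GPT",
    task="Draft first pass of a job posting",
    input_summary="Role requirements for a marketing associate",
    output_summary="Draft posting; fact-checked and edited before use",
)
print(asdict(entry))
```

Summarizing inputs rather than logging them verbatim also avoids copying sensitive material into yet another system.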
Oversight Is Essential
In this same vein, employers should designate an individual or department to oversee the use of AI in the business. Employees should direct all inquiries about AI use, and make any necessary reports, to this individual or department. This individual or department should also be tasked with updating the company’s AI policy and staying abreast of relevant legal or regulatory requirements.
Train Employees on Permissible AI Use and Enforce Your Policy
Of course, a written policy is only as good as the training provided and the enforcement of that policy. Regular training, especially in this evolving area, will be crucial for employees to understand the limits of permissible AI use while still promoting creativity and efficiency. Consistent, non-discriminatory enforcement of the policy will demonstrate the company’s commitment to ethical and transparent AI use.
The foregoing suggestions are only some of what should be considered when developing a workplace AI use policy. Employers should also gather input from relevant stakeholders within the organization and seek legal counsel (either internally or externally) when designing, implementing, and enforcing a policy.