Banning tools like ChatGPT will only lead to unregulated “shadow” use of AI. Instead, set some ground rules. Here’s what to include in your policy.
Here’s a story for you: a new AI tool comes out, like Microsoft 365 Copilot. You ask your security team what your organization is going to do about it, since it’s going to be built into Word, Excel, Teams, and pretty much everything else. Faced with visceral panic at all the private data that might be fed into it, they bring the hammer down.
“We need to ban it,” they declare. A month later, people are using it anyway.
It’s hardly a new phenomenon. Employees always do whatever makes their job easier, even when it’s against the rules. If your procurement process is onerous, they’ll go around it. If your cloud storage is terrible, they’ll just save things locally.
Your organization is probably already using GenAI
The media is saturated with coverage of generative AI. In all likelihood, your workforce is already trying it out, either as an experiment or to actively support their tasks. Banning it will only create shadow usage and a false sense of compliance.
The solution is to write a usage policy so they at least use generative AI responsibly. This will help mitigate your organizational risk, provide guidance to your staff, and let them know your organization is forward-thinking and prepared.
How to write a usage policy for generative AI
Your usage policy should be very simple: the more complex it is, the more likely people are to disregard it. Keep it to a handful of simple do’s and don’ts. For most organizations, you’ll want to focus on safeguarding personal information and double-checking the information the AI provides.
Example: Generative AI policy
- DON’T provide any personal information such as full names, addresses, phone numbers, social security numbers, or any other information that can directly identify an individual.
- DO use generic or fictional names when discussing personal scenarios or examples.
- DON’T share sensitive data like passwords, credit card numbers, health records, or any confidential information.
- DON’T input any confidential information or company IP.
- DO configure external tools (like ChatGPT) to disable history and cache storage when dealing with sensitive or private conversations.
- DO be aware that the AI may produce responses containing factual errors, biases, or inappropriate content, so double-check the accuracy of anything it tells you.
- DO use the AI model for lawful, ethical, and legitimate purposes.
Simple and digestible. Some guides on writing generative AI policies will encourage you to write whole novels: your company’s rationale for acceptable use, a breakdown of company and employee responsibilities, escalation pathways, and so on. But for this sort of thing, the more you cover, the less useful the policy becomes.
Obviously, the above is a boilerplate version, and you’ll need to adjust it to the needs of your particular organization. For example, if you’re in the public sector, transparency and legal considerations may be a bigger factor. The general principle remains, though: keep it as simple as possible.
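One way to back a written rule like “don’t input personal information” with an actual guardrail is a lightweight pre-submission check. Here’s a minimal, illustrative Python sketch; the patterns and function name are hypothetical and nowhere near exhaustive, so a real deployment would lean on a dedicated PII-detection library rather than a handful of regexes:

```python
import re

# Hypothetical, deliberately incomplete patterns for the kinds of
# personal data the example policy prohibits.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    findings = check_prompt("Call Jane at 555-123-4567 about her order.")
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}")
    else:
        print("Prompt passed the basic PII check.")
```

Even a crude check like this catches the obvious mistakes, and just as importantly, it reminds people the policy exists at the exact moment they’re about to break it.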
How to decide what models to approve (or not)
Just because you have a policy that greenlights GenAI use doesn’t mean you should endorse every single tool out there, just as you wouldn’t approve every piece of software. However, security professionals are still struggling to work out a framework for deciding which tools are safe and which are business liabilities.
We’ve written an article on how to evaluate if AI models are safe for your organization to use, which you can read here: “Security reviews and AI models: How to decide what to greenlight.”
Even if you’re using internal models, you should still have a policy
Some organizations run AI models in-house, in which case privacy is less of a concern because you aren’t handing data to a third party (a sketch of what that setup can look like follows the list below). Even so, you should still have a usage policy, for three main reasons:
- Hallucinations, biases, and errors remain an issue, and employees need to know to put on their critical thinking caps
- Just because you’re using tools internally doesn’t mean someone won’t reach for a niche external GenAI tool for some other task
- It’s a good look, both internally and externally, to be able to say you have an AI usage policy in place
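To make “in-house” concrete: here’s a minimal sketch of querying a locally served model. It assumes a model hosted on your own infrastructure behind Ollama’s local HTTP API; swap in whatever serving stack you actually use. The privacy upside is that the prompt never leaves your network, but the response can still be wrong, which is exactly why the policy still matters:

```python
import requests

# Assumption: a model is being served locally via Ollama's HTTP API,
# so the prompt and response stay on your own network.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # assumption: a locally pulled model
        "prompt": "Summarize our incident-review notes in three bullet points.",
        "stream": False,
    },
    timeout=120,
)

# The output is still generative AI output: verify it before using it.
print(response.json()["response"])
```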
Don’t regulate, educate
One thing policymakers have a tendency to do is treat a usage policy as an education tool and cram a whole instructional crash course on proper GenAI use into it. Not only does this sabotage the document, but a policy isn’t the best medium for teaching anyway. Instead, spend your effort on providing education outside the usage policy.
Providing your organization with access to on-demand educational courses on GenAI fundamentals is a power move for a ton of reasons:
- Upskilling in generative AI will make employees feel like they’re keeping up with a new technology that is disrupting many industries
- Training can give employees the means to use GenAI efficiently and effectively, such as prompt engineering best practices (see the sketch after this list)
- It communicates to your organization that you’re “keeping up with the times” on new technology
- It often reinforces GenAI best practice use, working in tandem with your written policy
- If you educate your leaders, they can then educate both new and existing staff and provide guidance (“teaching the teachers”)
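As a quick illustration of the prompt engineering point above, here’s the kind of habit that training tends to instill. Both prompts below ask for the same thing; the second gives the model a role, context, constraints, and an output format (the scenario is invented for illustration):

```python
# An untrained prompt: vague, so the output quality is a coin flip.
bare_prompt = "Write release notes."

# A trained prompt: role, context, constraints, and output format.
engineered_prompt = """You are a technical writer preparing customer-facing release notes.

Context: version 2.4 adds single sign-on support and fixes two sync bugs.
Constraints: plain language, no internal ticket numbers.
Format: a one-sentence summary, then a bulleted list of changes."""
```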
Providing training is an excellent way to validate your organization’s generative AI use policy and get employees to buy into it.
Educational resources you can use
The videos below cover the basics of generative AI and teach you how to use prompt engineering to get the most out of these tools.
- AI & Generative AI Explained
- ChatGPT and Generative AI: The Big Picture
- Getting Started on Prompt Engineering with Generative AI
- Security Risks and Privacy Concerns Using Generative AI
- Exploring Generative AI Models and Architecture (Intermediate, good for organizations planning to implement internal models)
Here are some articles that might also help:
- What are ChatGPT and Generative AI (and how can I use them)?
- Security reviews and AI models: How to decide what to greenlight
- How to use ChatGPT’s new “Code Interpreter” feature (the name is misleading: it’s a feature general users can also use to analyze data, make charts, work with math, and more)
- How to use ChatGPT to write code
- Prompt engineering 101 for developers
If your organization is using ChatGPT, you should also be aware of plugins: third-party add-ons that make the chatbot more useful. They’re a great feature, but they can carry all the same risks as any other third-party software.
Content courtesy of Pluralsight