How to develop a clear AI policy
- KVK Editors
- 11 February 2025
- Edited 14 February 2025
Artificial intelligence (AI) is developing rapidly. The number of AI tools that you can use for your business is growing. But how can you and your staff use them safely and responsibly? You can set out agreements on this in a clear AI policy.
As an entrepreneur, you now have a wide choice of AI tools. These offer advantages, but also come with challenges. For example, your employees may not know which tools they are allowed to use for their work. With an AI policy, you can record various agreements to ensure that you use AI in a safe, responsible, and intelligent way in your business. These are the most important elements of a strong AI policy.
Purpose of the policy
A good AI policy starts by answering the question:
- What are the goals of the policy? For example, guaranteeing the safe use of AI by your staff and business, and ensuring that you comply with the legal requirements of the EU AI Act. This Act sets out rules for the responsible development and use of AI by businesses, governments, and other organisations active within the EU.
Definition and updates
In the next section of your AI policy, you answer questions such as:
- Who does the policy apply to? All employees? Or are there exceptions, for example for external parties such as self-employed professionals, or for colleagues with special authorisation?
- How will the policy be kept up to date with new technological developments and regulations?
Use and checks
Here you answer questions such as:
- Which AI tools are employees allowed to use, and which not? You can make a list of approved tools and update it regularly. A number of AI systems have already been banned since 2 February 2025.
- For what purposes may employees use AI? Describe as precisely as possible the purposes for which the use of AI tools is allowed within your business.
- What is not allowed? For example, entering personal data, company secrets, or sensitive customer information into AI tools that are not GDPR-proof.
- How do you ensure that AI results are checked? Do not blindly accept the information given, but always check that it is correct. AI tools can suffer from so-called ‘hallucinations’ (giving incorrect information) and may also be influenced by bias and by outdated or incorrect information.
- Who is responsible for any errors that arise from the use of AI?
- What happens in case of a policy violation? For example, a warning or sanctions.
- Who is the contact person for questions, approvals, and reports about AI use?
Security and privacy
In this part of the AI policy, you answer questions such as:
- What steps should employees take to keep company data safe? Here you can state that employees are not allowed to enter customer information or sensitive company information into AI tools. If a user enters confidential information, the tool stores it, and others may be able to view or retrieve that information.
- On which devices can AI tools be used? For example, only allow employees to use AI tools on secure devices and networks.
Training and awareness
In this last section, you answer questions such as:
- What training or instructions do you offer to teach employees how to use AI tools safely and effectively? As of 2 February 2025, organisations that develop or use AI systems must ensure that their employees are AI literate.
- Where can employees find the policy? Indicate here where the policy document can be found for all employees and how you will inform them when it is updated. Inform new personnel about the AI policy as soon as they are hired.
By basing your AI policy on the steps above, you offer clarity to your personnel. This allows them to make the best use of the growing range of smart AI tools in their work.