The risks of using AI tools
- Amber Kuipers
- 6 June 2023
- Edited 31 May 2024
- 2 min
- Managing and growing
- Digitalisation
With AI tools you can create text and images and generate ideas using artificial intelligence (AI). Using them can save time and produce creative outcomes. The possibilities may seem endless, but using these tools also has risks. Examples include copyright infringement and privacy breaches, as well as the risk of unreliable texts or content based on preconceptions and bias.
How do AI tools work?
AI tools create their own text and images. You type in a request, called a "prompt", and the tool delivers text or images. The tools do this based on existing data. AI tools collect large amounts of data (datasets) and generate text or images using the most common data. So an AI tool does not create new content, but merges text or images that already exist on the internet. Examples of well-known tools that use AI to create text or images are ChatGPT, Google Bard, Bing AI, Midjourney, Sora, and Runway.
Copyright
Because AI tools generate content based on existing information, they may use texts or images that are subject to copyright. Using copyrighted text without permission is illegal, and the original author can sue you. Do you want to copyright a text or image you created with an AI tool? Then you must have made an original choice, for example in the wording of your prompt or in your editing of the AI-generated content.
Copyright does not apply, for example, if you own a shoe shop and you ask ChatGPT to write a blog post about the 5 most common issues with new shoes. If a competitor then asks the same question, they will receive the same text. You cannot then accuse your competitor of using your content. An example where copyright does apply is if you ask Midjourney to create an image of a pink elephant lying on the sandbank of a tropical island. You then edit the image. That means you have made an original choice and copyright applies.
Data protection and privacy
If you enter personal data or sensitive business information into an AI tool, that information may not be protected. Everything you enter into an AI tool is stored on a server. That server may be in a country where Dutch privacy laws do not apply. You are required to store and process personal data securely.
For example, you are a therapist and you keep a medical file on your client. You ask ChatGPT to summarise that file and so enter the complete file into ChatGPT. Your client's personal and medical records are then on a server in the United States. This means you risk a fine.
Unreliable information
Texts generated by AI tools may contain false information. This happens because the tool uses existing information on the internet to create text. For example, you ask ChatGPT who invented the cronut. ChatGPT responds with a detailed and vivid answer, but mentions a different name than, say, Google does. Even if you Google the name that ChatGPT gives as the creator of the cronut, you will not find any results. So always check whether an AI tool's answer is correct.
AI check
Do you want to know if someone else's text (a business partner's, for example) was created with ChatGPT? You can check this using an online AI checker.
Biases and preconceptions
Because an AI tool uses existing data, its texts and images are based on existing texts and images. Topics may be described from a certain angle, and the tool accepts that angle as the truth. For example, Midjourney primarily uses European and US datasets. If you ask Midjourney to create an image of a wedding, it will look like a typical European or American wedding.
AI legislation
In 2025, the first European AI legislation will come into effect. The AI Act (europa.eu) aims to minimise the risks of unreliable and biased information and make the use of AI tools safer.