Privacy watchdog wants Dutch companies to make clear agreements on chatbot use at work
The Dutch Data Protection Authority (AP) is pushing for organizations to make clear agreements with their employees about the use of chatbots, automated conversation partners powered by Artificial Intelligence (AI). The authority has received several reports of data leaks that occurred because employees shared personal data of, for example, patients or clients with a chatbot.
"By entering personal data into AI chatbots, the companies offering the chatbot can gain unauthorized access to that personal data," the authority warns.
The AP has noted that many working people use digital assistants, like ChatGPT and Copilot, to answer clients' questions or summarize large files. Employees often do this on their own initiative.
This is risky because most companies behind chatbots automatically save all the entered data. "This data, therefore, ends up on the tech companies' servers, usually without the person who entered the data realizing it, and without knowing exactly what the company will do with that data. The person to whom the data belongs will not know that either," the authority explains.
One of the reports concerned an employee at a general practitioner's practice who had entered patients' medical data into an AI chatbot, against the rules. Sharing sensitive information with a tech company was a "major violation of the privacy of the people concerned," the AP said.
If organizations allow the use of chatbots, they should make clear to employees what data they can and cannot enter into the bot. The AP suggested that organizations could also agree with the chatbot provider that it will not store the entered data.
Reporting by ANP