
UvA lecturer concerned about ChatGPT's potential dangers such as racist and sexist language
Jelle Zuidema, associate professor of natural language processing, is concerned about the potential dangers of ChatGPT. The chatbot could, for instance, produce hurtful texts or be misused for advertising purposes. Other areas of science could also be neglected because of the commercial profits ChatGPT generates. To prevent this, there should be (European) rules, Zuidema said.
ChatGPT, developed by the company OpenAI, generates readable texts using artificial intelligence (AI). The program can be used in search engines, but it can also be integrated into games or serve as a digital homework helper. If this is not done properly, the chatbot could be "dangerous," Zuidema claimed.
That is because a chatbot can make racist and sexist statements, accidentally but also intentionally. "That's what you want to prevent. A depressed student, for example, does not need a toxic conversation," the associate professor explained.
According to Zuidema, the development of safety tests should not be left to companies. They often lack the capacity to check all the data themselves. Moreover, search engines make money from advertising, which could prompt chatbots to steer users toward sponsored content, he said.
Without regulation, it is unclear what falsehoods ChatGPT is spreading and what commercial goals it is pursuing, according to the UvA lecturer. There should therefore be regulators, as well as openness about the data used to train the chatbot. "Companies are not taking responsibility themselves," Zuidema stated.
He also believes the intertwining of business and science is a "worrying development." Companies seeking profit are employing scientists and leaving topics that are "commercially less interesting" severely under-researched. "For example, there is a lot of research on how speech technologies can be used to sift through biomedical, financial or legal texts, but much less on applications for ordinary citizens or NGOs."
On Friday, ChatGPT was provisionally banned in Italy because OpenAI allegedly failed to comply with rules on the collection of personal data. Earlier, it was reported that governments may soon be able to halt AI development through an international treaty. Some 1,400 people from the tech world have also called for a pause in AI development.