Government organizations use at least 120 AI systems; risks of this are often unclear
Government organizations use at least 120 systems that are based on artificial intelligence (AI), according to the Netherlands Court of Audit. In almost half of these cases, little or no attention was paid to the risks of using these types of systems, according to Court of Audit Board Member Ewout Irrgang.
AI systems are not programmed by hand; instead, they are trained on large amounts of data to learn how to perform tasks. This can give rise to problems such as privacy violations and discrimination. “You can only control the risks when you know them,” said Irrgang.
Most government organizations use no more than three AI systems. The exceptions are the benefits agency UWV, which uses ten, and the police, which use 23. The UWV, for example, uses AI to predict who will be left without an income after a benefit ends and to offer support on that basis. “If systems have an impact on citizens, you are more likely to have risks,” Irrgang said.
In the cases they actually assess, government organizations usually consider the risks minimal. But even in those cases, risks such as privacy violations remain, the Netherlands Court of Audit noted.
A smaller portion of the projects is considered “high risk.” The Court of Audit cannot rule out that some systems in use will fall under an EU ban: under EU law, certain AI systems will be prohibited from February 2025. State Secretary for Digitalization Zsolt Szabó has said that tracking down and scrapping such systems is a “top priority.”
The systems that are to be made illegal include systems that recognize faces to classify people into population groups or systems that give people “social scores” for their behavior and attach negative consequences to them.
The Court of Audit also noted that how risks are evaluated differs from case to case. The Cabinet said in response that it wants to improve this consistency, which Irrgang called “positive.”
Not just the risks but also the benefits of AI are often unclear. For 42 of the 120 systems, it is not known whether they are meeting expectations. Most of the rest perform as well as expected or better.
This is the first time that the Netherlands Court of Audit has researched AI systems. The auditor asked 70 government organizations, including the Ministry of Finance, Customs, and public broadcasting association NPO, to map their AI use.
Government employees who use “ad hoc” programs such as ChatGPT were not included in the study. The same applies to AI systems used for military purposes or by intelligence agencies.
Reporting by ANP