Dutch data protection watchdog starts monitoring "dangerous algorithms"
The Dutch Data Protection Authority (AP) has started monitoring “dangerous algorithms.” These are algorithms that could negatively impact people’s lives, for example, algorithms that decide who is invited for a job interview or whether someone is flagged for a fraud check, NOS reports.
AP chairman Aleid Wolfsen said the extra supervision of algorithms is urgently needed because so much can go wrong. In the childcare allowance scandal, for example, an algorithm selected people for fraud checks based on their nationality, family composition, and salary. The controversial anti-fraud program SyRI is another example.
“Algorithms are increasingly used to select people,” Wolfsen told the broadcaster. “Who may or may not become a customer of a company, who is invited for a job interview, and who is subjected to extra fraud checks.” That can be “extremely dangerous,” he said. An incorrectly trained or programmed algorithm can “contain discriminatory elements that can affect many people at once. We must all try to prevent that.”
The AP’s new supervision applies to both government agencies and businesses. It should enable the privacy watchdog to act more quickly when it finds a faulty algorithm. “If we keep a closer eye on this, we can intervene faster, warn faster, and stop faster,” Wolfsen said. “We will do that together with other regulators.” Individuals can also file complaints with the AP.
The Cabinet has set aside 1 million euros for this extra supervision this year. That will increase in the coming years to a structural 3.6 million euros in 2026.
Amnesty International is not enthusiastic about the new supervision. According to the human rights organization, too few resources have been allocated to tackle discriminatory algorithms effectively, and the government should make more money available for this gigantic task.