Key takeaways
The national privacy watchdog told Het Financieele Dagblad that it has counted dozens of AI-related data breach reports so far this year.
The regulator cautioned that the increasing use of chatbot tools such as ChatGPT, Claude and Gemini at work heightens the risk of sensitive personal data leakage.
Eindhoven breach exposes thousands of files
The warning comes after officials at the municipality of Eindhoven uploaded personal data of residents and employees to public AI websites.
The data breach, reported to the Dutch Data Protection Authority on October 23, 2025, involved 2,368 files uploaded during a 30-day period from September 23 to October 23.
The leaked files included Youth Welfare Act documents containing details about children and families receiving care, reflection reports with assessments of citizens facing financial or personal problems, and CVs of job applicants with names, addresses, phone numbers, and work history.
Eindhoven's municipal executive acknowledged in a letter to the city council that the full scope of the breach remains unknown.
"We know that many files were involved, but the precise extent of the uploaded data cannot be determined," the council wrote.
Because uploaded data is retained for only 30 days, the municipality cannot identify or notify everyone whose information may have been exposed.
Despite the breach, Eindhoven Mayor Jeroen Dijsselbloem defended employees' use of AI tools.
"The use of AI offers opportunities to do our work more efficiently. From this perspective, it's positive that employees see opportunities and engage with AI. Initial assessment suggests employees used it to improve municipal tasks and services to residents," Dijsselbloem wrote in a letter to the city council.
The municipality has since blocked staff access to public AI websites and restricted employees to an internal AI tool within a secure environment.
Eindhoven also requested OpenAI delete any files uploaded from the city and hired legal advisory firm Hooghiemstra & Partners to investigate the breach.
Employees acting without authorization
According to the Data Protection Authority, these types of breaches often occur because individual employees use AI models on their own initiative, without organizational safeguards or clear policies in place.
The regulator noted that free versions of popular AI chatbots store the data users enter, while it remains unclear what the companies behind these tools subsequently do with that information.
The watchdog expressed concern that such data could be used to train AI models and warned that personal details could later reappear in chatbot responses.
Stephanie Dekker, an employment law expert at Pinsent Masons based in Amsterdam, emphasized the need for clear workplace guidelines.
"Employers should develop policies around what is and is not' allowed when using AI tools, she said.
Previous incidents and ongoing concerns
The Dutch Data Protection Authority has documented multiple data breach cases involving AI chatbots.
In one instance, an employee of a medical practice entered patient medical data into an AI chatbot contrary to company policy.
The authority stated that medical data is highly sensitive and receives extra legal protection, making such unauthorized sharing a major violation of patient privacy.
Another case involved a telecommunications company where an employee entered a file including customer addresses into an AI chatbot.
Eindhoven has been under intensified supervision by the Dutch Data Protection Authority since March 2023, after it previously delayed reporting breaches and retained personal data for too long.
The city established an AI conduct code on November 18, 2025, as part of its privacy and security improvement efforts.
The municipality sought to downplay concerns about the breach, stating in its council letter that "individual data uploaded into an AI tool cannot easily be extracted and misused by third parties."