Denmark has taken a groundbreaking step in the battle against AI-generated deepfakes, passing legislation that gives every citizen copyright protection over their physical likeness and voice. It is the first country in Europe to treat personal identity as intellectual property.
The amendment to Denmark's Copyright Act, which has secured support from all major political parties, makes it illegal to share AI-generated deepfakes without the consent of the person depicted. The law is expected to take effect in early 2026.
"We are now sending an unequivocal signal to all citizens that you have the right to your own body, your own voice and your own facial features," Danish Culture Minister Jakob Engel-Schmidt said in a statement.
The legislation comes as deepfakes have become increasingly sophisticated and accessible.
According to a security report by Resemble.ai, there were 487 publicly disclosed deepfake attacks in the second quarter of 2025 alone, representing a 41% increase from the previous quarter and more than 300% year-over-year. Direct financial losses from deepfake scams have reached nearly $350 million globally.
The technology has evolved dramatically since the term "deepfake" first emerged in late 2017 on Reddit, where it described face-swapping content primarily used to create non-consensual pornography.
Today's AI tools can generate entirely new videos from text prompts, with outputs far more realistic than those early face swaps.
"Technology is developing rapidly, and in the future it will be even more difficult to distinguish reality from fiction in the digital world," Engel-Schmidt said, describing the new law as a safeguard against misinformation.
How the law works
Under the new legislation, Danish citizens who discover deepfakes of themselves can request that online platforms remove the content.
The law establishes clear exemptions for satire, parody, and legitimate criticism, provided the content is clearly labeled as artificially generated and does not constitute harmful misinformation.
The law will not impose fines or imprisonment on individual social media users.
However, Engel-Schmidt indicated that major technology platforms that fail to comply with takedown requests could face severe financial penalties. A second phase of legislation introducing specific fines for non-compliant companies is being considered.
"If you're able to deepfake a politician without her or him being able to have that product taken down, that will undermine our democracy," Engel-Schmidt told reporters at an AI and copyright conference in September.
Personal stories drive change
The human cost of deepfakes has been a driving force behind the legislation. Danish video game live-streamer Marie Watson experienced firsthand the violation of having her image manipulated in 2021, when an unknown Instagram account sent her a digitally altered photograph removing her clothing.
"It overwhelmed me so much," Watson recalled. "I just started bursting out in tears, because suddenly, I was there naked."
Watson discovered how easily such images could be created using readily available online tools.
"You could literally just search 'deepfake generator' on Google or 'how to make a deepfake,' and all these websites and generators would pop up," the 28-year-old said.
Danish voice actor David Bateson, who voiced a character in the popular "Hitman" video game and Lego's English-language advertisements, also fell victim to AI voice cloning, with clones of his voice shared by thousands of users online.
When the Danish Rights Alliance attempted to have the content removed, platforms asked which specific regulation was being violated.
"We couldn't point to an exact regulation in Denmark," said Maria Fredenslund, an attorney and director of the alliance, which supports the bill.
International implications
Denmark's approach has attracted attention from other European Union members, including France and Ireland. Denmark currently holds the EU's rotating presidency and has been urging its counterparts to adopt similar measures.
The legislation goes beyond existing European frameworks.
The EU's AI Act, which took effect in 2024, requires labeling of AI-generated content but does not ban deepfakes.
France updated its criminal code in 2024 to impose prison sentences and fines for sharing deepfakes, while the United Kingdom's Online Safety Act targets the sharing of intimate deepfake images.
Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI, praised Denmark's innovative approach.
"We can't just pretend that this is business as usual for how we think about those key parts of our identity and our dignity," he said.
Ajder noted that Denmark's legislation addresses a fundamental gap in existing protections.
"When people say 'what can I do to protect myself from being deepfaked?' the answer I have to give most of the time is: 'There isn't a huge amount you can do,' without me basically saying, 'scrub yourself from the internet entirely.' Which isn't really possible," he said.
Challenges ahead
Legal experts have raised questions about whether copyright law is the appropriate framework for protecting personal identity.
Luca Schirru, an intellectual property lawyer, emphasized that deepfakes should be handled through personality rights, which are "directly connected to the honor, intimacy, and dignity of the person."
Some experts worry that treating faces and voices as copyrightable commodities could lead to unintended consequences, since copyright can be transferred or sold while personality rights are typically inalienable.
Enforcement also poses challenges. Authorities must contend with identifying manipulated content, verifying consent status, and addressing jurisdictional issues when material is published from outside Denmark.
Athina Karatzogianni, a professor of technology and society at the University of Leicester, noted that deepfakes can have both individual and societal impacts.
"They can both harm individual rights and also have sociopolitical impacts, because they undermine the values that are fundamental to a democracy, such as equality and transparency," she said.
Despite the legislative progress, Watson remains cautious about the law's effectiveness. She believes more pressure must be applied directly to social media platforms.