The Australian government announced Tuesday that it will move to ban artificial intelligence applications that create non-consensual sexualized images, marking a significant escalation in the country's efforts to combat online abuse and protect children from digital harm.
Government targets abusive AI technology
Communications Minister Anika Wells outlined the government's position in a statement that emphasized both the legitimate uses of AI technology and the need to prevent abuse.
"There is a place for AI and legitimate tracking technology in Australia, but there is no place for apps and technologies that are used solely to abuse, humiliate, and harm people, especially our children," Wells said on Tuesday. "That's why the Albanese government will use every lever at our disposal to restrict access to nudification and undetectable online stalking apps and keep Australians safer from the serious harms they cause."
The minister acknowledged that completely eliminating the problem would require sustained effort. "While this move won't eliminate the problem of abusive technology in one fell swoop, alongside existing laws and our world-leading online safety reforms, it will make a real difference in protecting Australians," she said.
Following the model established for Australia's planned social media age restrictions, the government will place responsibility on technology companies to prevent access to harmful AI applications. Wells emphasized the government's intention to work collaboratively with industry rather than impose unilateral restrictions.
"These new, evolving technologies require a new, proactive approach to harm prevention – and we'll work closely with industry to achieve this," Wells stated.
The announcement comes amid growing concerns about the accessibility and potential misuse of deepfake technology. Australian eSafety Commissioner Julie Inman Grant highlighted the ease of creating harmful content using these tools, noting last year that modern AI applications require minimal technical expertise or computing power to produce realistic manipulated images.
Legislative precedent and criminal penalties
The government's action builds on legislative groundwork laid by independent MP Kate Chaney, who proposed comprehensive legislation in July following a roundtable discussion on AI-facilitated child exploitation. Chaney's proposed legislation would make possession of nudify applications a criminal offense, carrying penalties of up to 15 years imprisonment.
Current laws already prohibit the distribution of non-consensual sexually explicit materials and stalking at both state and Commonwealth levels, but the new measures would specifically target the applications used to create such content.
Industry response and support
The Digital Industry Group Inc (DIGI), representing major technology companies, has expressed support for the government's initiative. Dr Jennifer Duxbury, DIGI's director of regulatory affairs, policy, and research, welcomed the announcement.
"DIGI welcomes strong action from Minister Anika Wells against nudification apps and online stalking tools to strengthen online safety protections for Australians," Duxbury said. "DIGI members are also taking proactive steps against these types of harmful services that have no place in our online ecosystem."
The industry group indicated its willingness to collaborate on implementation details, with Duxbury adding: "We support ecosystem approaches to tackling harm and look forward to working constructively with the government on the details of the proposal."
Technology companies have already begun taking independent action against harmful applications. Meta, which owns Facebook and Instagram, initiated legal proceedings in June against the operator of nudify app CrushAI to prevent advertising of the software on Meta's platforms.