
YouTube's Deepfake Detection Tool Draws Scrutiny Over Biometric Data Terms
Creators must submit government IDs and biometric facial videos to use YouTube's likeness detection feature, and Google's privacy policy permits that data to be used to train AI models.
Key takeaways: YouTube's deepfake detection tool requires creators to submit government IDs and biometric facial videos so the platform can identify AI-generated content that uses their likeness without permission.

YouTube is facing scrutiny over its expanded deepfake detection tool after reports revealed that the platform's privacy policy permits the use of creators' biometric data to train artificial intelligence models, despite the company's assurances that it has never done so.
The video platform introduced its likeness detection feature in October and is now rolling it out to millions of creators in the YouTube Partner Program.
The tool scans videos uploaded to YouTube to identify instances where a creator's face has been altered or generated by AI, allowing them to request removal of unauthorized deepfake content.
To use the service, creators must upload a government-issued identification document and record a biometric video of their face. According to Google's privacy policy, this biometric information can be used to help train the company's AI models and build products and features.
Intellectual property specialists who work with creators and celebrities are warning about the potential risks of the tool's terms of service.
"As Google races to compete in AI and training data becomes strategic gold, creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves," Dan Neely, CEO of Vermillio, which helps individuals protect their likeness from misuse, told CNBC.
"Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back."
Luke Arrigoni, CEO of creator protection platform Loti, said the risks of YouTube's current biometric policy are enormous.
"Because the release currently allows someone to be able to attach that name to the actual biometrics of the face, they could create something more synthetic that looks like that person," said Arrigoni.
Both Neely and Arrigoni stated they would not currently recommend their clients sign up for YouTube's likeness detection tool under the existing terms.
YouTube spokesperson Jack Malon told The New York Post that the company has never used biometric data from creators to train any AI models. He emphasized that the information is used only for identity verification and detecting synthetic videos.
Malon also pushed back against critics, suggesting that some allegations come from commercial rivals who profit from selling their own paid safety tools.
YouTube is reviewing the wording of its policy sign-up forms to reduce confusion, though the company stressed that the underlying policy itself is not expected to change.
Amjad Hanif, YouTube's head of creator product, said the platform built its likeness detection tool to operate at the scale of YouTube, where hundreds of hours of new footage are posted every minute.
The tool is set to be made available to more than 3 million creators in the YouTube Partner Program by the end of January.
"We do well when creators do well," Hanif told CNBC. "We're here as stewards and supporters of the creator ecosystem, and so we are investing in tools to support them on that journey."
The controversy comes as AI-manipulated content becomes increasingly prevalent across social media platforms. Advanced AI video generation tools have made it significantly easier to create convincing deepfakes of public figures and content creators.
Mikhail Varshavski, a physician who goes by Doctor Mike and has more than 14 million YouTube subscribers, said he first encountered a deepfake of himself on TikTok promoting a supposed miracle supplement.
"It obviously freaked me out, because I've spent over a decade investing in garnering the audience's trust and telling them the truth and helping them make good health-care decisions," Varshavski said.
"To see someone use my likeness in order to trick someone into buying something they don't need or that can potentially hurt them, scared everything about me in that situation."
The tool represents YouTube's attempt to address the growing problem of AI-generated impersonation while balancing creator protection with its AI development ambitions.
However, the debate highlights broader questions about digital identity ownership and how much control creators should surrender to platforms in exchange for protection services.
