
Key takeaways: The Federal Trade Commission issued formal orders to seven major tech companies, including Meta, OpenAI, and Google, to investigate AI chatbot safety for children. The probe follows lawsuits alleging that chatbots contributed to teen suicides.

The Federal Trade Commission announced Thursday it has launched a sweeping investigation into seven major technology companies over potential harms their artificial intelligence chatbots may cause to children and teenagers, marking the most significant federal regulatory action to date targeting AI companion safety.
The inquiry focuses on AI chatbots that can serve as companions, which "effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots," the agency said in a statement Thursday.
The companies receiving formal 6(b) orders from the FTC include Google parent Alphabet, Meta and its Instagram unit, OpenAI, Snap, Elon Musk's xAI, and Character Technologies. The orders require companies to provide detailed information about their safety measures, data collection practices, and how they monetize user engagement with AI companions.
The investigation comes amid growing alarm over AI chatbots' role in teen mental health crises. The mother of a teenage boy in Florida who killed himself after developing what she described as an emotionally and sexually abusive relationship with a chatbot has filed a wrongful death lawsuit against Character.AI. And the parents of 16-year-old Adam Raine recently sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
OpenAI's systems tracked Adam's conversations in real time: 213 mentions of suicide, 42 discussions of hanging, and 17 references to nooses. ChatGPT mentioned suicide 1,275 times, six times more often than Adam himself, while providing increasingly specific technical guidance.
"As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," FTC Chairman Andrew Ferguson said in a statement.
Several companies named in the investigation have begun implementing new safety measures. OpenAI and Meta earlier this month announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls enabling parents to link their accounts to their teen's account.
Character.AI said it is looking forward to "collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology." The company added: "We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature."
"Our priority is making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved. We recognize the FTC has open questions and concerns, and we're committed to engaging constructively and responding to them directly," OpenAI spokesperson Liz Bourgeois said in a statement.
The FTC's action represents a significant shift in federal oversight of AI technologies affecting minors. At least one online safety advocacy group, Common Sense Media, has argued that AI "companion" apps pose unacceptable risks to children and should not be available to users under the age of 18.
Texas Attorney General Ken Paxton has also opened investigations into Meta and Character.AI, accusing them of misleading children and violating consumer protections. Separately, an open letter signed by 44 US attorneys general warns major AI companies, including Meta, Apple, and Google, that they will "use every facet of [their] authority" to protect children from harms linked to chatbots.
The investigation comes as the White House seeks to balance AI innovation with safety concerns. "President Trump pledged to cement America's dominance in AI, cryptocurrency and other cutting-edge technologies of the future," White House spokesman Kush Desai said in a statement. "FTC Chairman Andrew Ferguson and the entire administration are focused on delivering on this mandate without compromising the safety and well-being of the American people."
The FTC's orders seek comprehensive information about how companies measure negative impacts on young users, develop AI characters, handle personal information from conversations, and monetize user engagement. Companies have 45 days to respond to the detailed information requests, which could ultimately lead to new regulations or enforcement actions targeting the rapidly growing AI companion industry.
For more news and insights, visit AI Pulse on our website.
