
Key takeaways: California Governor Gavin Newsom signed SB 53, requiring major AI companies to publicly disclose safety protocols and report critical incidents.
California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law on September 29, 2025, making California the first state in the nation to establish comprehensive safety regulations for advanced artificial intelligence systems.
The law imposes new AI-specific obligations on the industry's largest players, requiring them to meet transparency requirements and to report AI-related safety incidents.
The legislation represents a significant milestone in AI governance, as federal lawmakers have yet to pass comprehensive AI regulation.
The law builds on recommendations from a first-in-the-nation frontier AI policy report, commissioned by Governor Newsom and published earlier this year.
In a statement, Governor Newsom said, "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive.
This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it, but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves."
State Senator Scott Wiener, who authored the bill, stated: "With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk. With this law, California is stepping up, once again, as a global leader on both technology innovation and safety."
The legislation comes exactly one year after Newsom vetoed a similar but more stringent bill, SB 1047, which would have imposed greater liability on AI companies for adverse events.
SB 53 requires large frontier AI developers to publish a framework on their websites describing how they have incorporated national standards, international standards, and industry-consensus best practices into frontier AI development.
Companies must also submit transparency reports before deploying new or updated frontier models.
The law creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California's Office of Emergency Services.
Additionally, it strengthens protections for employees who raise safety concerns, prohibiting companies from retaliating against whistleblowers who disclose information about potential public health or safety dangers.
Civil penalties for noncompliance will be enforceable by the Attorney General's office, though some policy experts note these penalties are relatively modest compared to international AI regulations.
The law's reach extends far beyond California's borders. California is home to 32 of the 50 top AI companies worldwide, and in 2024, more than half of global venture capital funding for AI and machine learning startups went to companies in the Bay Area.
Anthropic co-founder and head of policy Jack Clark said in a statement: "Governor Newsom's signature on SB 53 establishes meaningful transparency requirements for frontier AI companies without imposing prescriptive technical mandates.
While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation."
OpenAI spokesperson Jamie Radice said in a statement: "We're pleased to see that California has created a critical path toward harmonization with the federal government—the most effective approach to AI safety. If implemented correctly, this will allow federal and state governments to cooperate on the safe deployment of AI technology."
However, not all industry voices support the legislation. Critics argue the law could disadvantage smaller companies and create compliance challenges, potentially entrenching the dominance of major tech firms.
The law directs the California Department of Technology to annually recommend appropriate updates based on multistakeholder input, technological developments, and international standards. This built-in review mechanism acknowledges the rapidly evolving nature of AI technology.
The legislation also establishes CalCompute, a consortium within the Government Operations Agency that will develop a framework for creating a public computing cluster to advance safe, ethical, and sustainable AI development for researchers and startups.
