Key takeaways:
- The advertising and publishing industries are confronting an unprecedented oversight crisis as artificial intelligence agents increasingly operate autonomously across digital platforms, making decisions without human intervention and leaving murky accountability trails that have executives scrambling for solutions.
- New data released last week by TollBit reveals a dramatic shift in web traffic patterns: AI agents contributed to a 9.4% reduction in human visitors between the first and second quarters of 2025.
- More concerning for publishers, some AI systems, such as Perplexity's Comet browser, operate undetected, appearing in site logs as standard Chrome users while actually acting as autonomous agents that scrape content for AI-generated summaries.
Major brands implement emergency governance
Marc Maleh, chief technology officer at Huge, told Digiday that his agency has held urgent governance discussions with major clients including NBCUniversal and Planet Fitness as AI agents gain the ability to make purchasing decisions and access sensitive customer data without explicit human approval.
"If you're a brand and you don't have a governance framework in place and you have a multi-agent system, and you didn't think through, 'Well, I'm accessing Marc's credit card information with this agent, and that agent is making an assumption that this other agent can access that same information — how am I being informed about that?' Suddenly, I bought a product I didn't want to buy because this multi-agent system did so," Maleh explained in a recent interview.
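The scenario Maleh describes is, at bottom, a missing scope check at the point of delegation: one agent assumes another agent inherits its access to sensitive data. A minimal sketch of the kind of explicit-grant check a governance framework would add (all agent names and scope strings here are hypothetical, not any vendor's actual API):

```python
# Hypothetical sketch: a receiving agent may use a sensitive scope only if
# the user granted it explicitly. Access is never inherited from the agent
# that happens to be delegating the task.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    granted_scopes: set[str] = field(default_factory=set)


def can_delegate(receiver: Agent, scope: str) -> bool:
    """Check the receiver's own user-granted scopes, not the sender's."""
    return scope in receiver.granted_scopes


shopper = Agent("shopping-agent", granted_scopes={"payment:card"})
deal_finder = Agent("deal-finder-agent")  # user never approved card access

# The second agent cannot simply assume the first agent's access.
assert can_delegate(shopper, "payment:card") is True
assert can_delegate(deal_finder, "payment:card") is False
```

Without a check like this, the purchase Maleh describes goes through silently; with it, the delegation fails and can be surfaced to the user for approval.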
The urgency stems from rapid deployment of agentic AI tools by major technology platforms.
Salesforce, Adobe, Microsoft, and Optimizely have all launched AI agent products in 2025 that can autonomously execute tasks, learn from user behavior, and adapt their operations without constant oversight.
Publishers face "stealth agent" problem
Media companies are grappling with AI agents that browse their content while masquerading as human visitors, potentially distorting audience analytics and advertising metrics.
According to TollBit's testing, Perplexity's Comet browser doesn't identify itself as an AI tool in site logs. Instead, it fetches pages under a standard "Chrome" user agent and uses the human's residential IP address.
So even if a user only sees a summary, publishers' analytics record it as a normal (human) visit — when in fact it's the AI doing the browsing and clicking in the background.
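The detection gap can be illustrated with a toy server-side filter. This is not TollBit's methodology or any publisher's actual logic; it shows why classification based on self-identifying User-Agent strings misses an agent that presents a stock Chrome UA:

```python
# Illustrative only: a naive bot filter that trusts the User-Agent header.
# Crawlers that announce themselves are caught; an agentic browser sending
# a standard Chrome UA from a residential IP is logged as a human visit.
KNOWN_BOT_MARKERS = ("GPTBot", "PerplexityBot", "ClaudeBot", "Googlebot")


def classify_visit(user_agent: str) -> str:
    """Return 'bot' if the UA self-identifies as a crawler, else 'human'."""
    if any(marker in user_agent for marker in KNOWN_BOT_MARKERS):
        return "bot"
    return "human"


# A crawler that announces itself is flagged...
assert classify_visit("Mozilla/5.0 (compatible; PerplexityBot/1.0)") == "bot"

# ...but an agent fetching pages under a stock Chrome UA is not.
chrome_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/126.0.0.0 Safari/537.36")
assert classify_visit(chrome_ua) == "human"  # counted as a normal visit
```

This is why the visits distort audience analytics: nothing in the request itself distinguishes the agent from the person whose machine it runs on.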
This "stealth browsing" behavior raises serious questions about advertising measurement and revenue attribution, as publishers may be unknowingly serving ads to AI systems rather than human consumers.
Clive Henry, head of partner solutions at Adobe, warned that streaming platforms and subscription services are particularly vulnerable.
"If an agent is doing the browsing, who controls the ads that normally appear on these platforms, and what data is used to target them? And if another person uses the same interface, will that agent apply the same username and password as the previous person who watched something earlier?" Henry said.
Regulatory framework scramble
The AI oversight crisis has exposed significant gaps in current regulatory frameworks.
The EU AI Act, which took effect in August 2024, was not originally designed with autonomous AI agents in mind, according to a comprehensive analysis released in June by The Future Society, the first study examining how AI agents are regulated under European law.
The AI Act governs AI agents through four primary pillars: risk assessment, transparency tools, technical deployment controls, and human oversight design.
Within these pillars, the report identifies ten measures proposing specific requirements for GPAISR providers, agent providers, and agent deployers, each supported by relevant articles in the Act.
In the United States, President Trump's January 2025 Executive Order "Removing Barriers to American Leadership in Artificial Intelligence" rescinded many Biden-era AI safety measures, creating additional regulatory uncertainty as autonomous agents proliferate.
David Berkowitz, founder of AI Marketers Guild, emphasized the urgency of addressing agent-to-agent interactions before they become widespread.
"How do we prepare for a future where there are different agents, essentially bots, talking to each other, often without human intervention? There are a lot of different rules in play here and I don't think we want to be in a situation where we just like, let this happen, and see where this goes," Berkowitz said.
The advertising industry is particularly concerned about autonomous agents reshaping fundamental assumptions about audience targeting. "This could radically shape a world of, are you even creating messaging for humans, or bots? And if it's just seen as some tech policy or implementation, then it's very possible the CMO is going to get left out," Berkowitz warned.
Adobe launches agent governance initiative
Responding to industry concerns, Adobe announced at its Summit conference in Las Vegas last week that it is developing new governance tools for AI agents, including audit trail capabilities and intervention points designed to maintain human oversight even as agents operate autonomously.
"Companies that are going to open up their web experiences to these kinds of agentic flows, they're eventually going to want to know that they're safe from legal recourse from consumers or consumer groups, based on how they receive and store these kinds of data," Henry said, announcing Adobe's push for industry-wide governance frameworks.
25% of marketing projects now include AI components
The scale of the challenge became clear with Huge's announcement in June that 25% of its client projects now contain AI components, the highest share of AI work in the agency's 25-year history.
Microsoft reported in its Q3 2025 earnings that Copilot Studio, its AI agent development platform, has been used by over 230,000 organizations, including 90% of the Fortune 500. IDC projects 1.3 billion AI agents will be deployed by 2028, indicating the oversight challenges will only intensify.
Consumer privacy concerns mount
"It's only a matter of time before consumers start to care how their data is being used by multi-modal agentic systems," Henry predicted, drawing parallels to how personalization concerns led to GDPR and the California Consumer Privacy Act.
The governance challenges extend beyond technical considerations to fundamental questions about consumer consent and data usage when AI agents operate on behalf of users without explicit approval for each action.
Industry experts stress that organizations cannot wait for comprehensive regulations to emerge.
When Huge conducts AI governance work with clients, conversations have ranged from how model decisions get documented and communicated to what mechanisms, such as audit trails and data logs, ensure the traceability of agent actions.
Accountability has been another hot topic. If an agent acts with bias, produces harmful outputs, or mis-executes a task, who is on the hook: the agency, the vendor, the brand, or the end user?
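The audit trails and data logs that come up in these conversations can be sketched as append-only records of each agent action. The field names below are illustrative, not Huge's or Adobe's actual schema; the point is that every autonomous step leaves a timestamped, reviewable record tying an action to an agent and its rationale:

```python
# Illustrative append-only audit trail for agent actions. Entries are
# serialized to JSON strings on write so later code can't mutate them.
import json
from datetime import datetime, timezone

audit_trail: list[str] = []


def record_action(agent_id: str, action: str, rationale: str) -> None:
    """Append one immutable, timestamped entry per agent decision."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "rationale": rationale,
    }
    audit_trail.append(json.dumps(entry))


record_action("pricing-agent-7", "apply_discount(10%)",
              "loyalty-tier rule matched")

# A human reviewer can later reconstruct what happened and why.
latest = json.loads(audit_trail[-1])
assert latest["agent"] == "pricing-agent-7"
```

A log like this answers the accountability question only partially, but it gives the agency, vendor, and brand a shared record to argue from.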
While some platforms are building governance features into their agent orchestration tools, industry experts warn that orchestration alone doesn't guarantee accountability.