Key Takeaways
An AI-generated image showing crowds of people dangerously close to a massive sinkhole in Bangkok circulated widely on social media, despite being entirely fabricated.
The false image spreads rapidly
The fake image surfaced on September 24, 2025, shortly after a real sinkhole incident occurred on Samsen Road near Vajira Hospital.
The AI-generated photo purported to show dozens of people standing perilously close to the edge of the deep crater, accompanied by a Thai-language caption reading "Road subsided in front of Vajira Hospital at 7:24 am."
The misleading content was shared across multiple platforms including X (formerly Twitter) and Threads, generating significant public concern.
Social media users expressed alarm at the apparent lack of safety measures, with one commenting "Why are you all standing there, content creators? It's dangerous!" and another questioning "Why didn't the police cordon the area off? That's dangerous!"
[caption id="attachment_31738" align="aligncenter" width="536"]
Detection reveals AI origins
Google's artificial intelligence detection systems flagged the viral image as "Made with Google AI," indicating it contained SynthID watermarks that identify artificially generated content.
A Google spokesperson previously confirmed that when these watermarks are detected, it means "the image has been generated or modified with AI."
Visual analysis revealed multiple errors typical of AI-generated imagery: architectural details missing when compared with actual video footage from the Thai newspaper Thairath, and figures that appeared to hover unnaturally over the sinkhole's edge.
A reverse angle view from AFP video footage showed the sinkhole extended to building foundations, leaving no safe space for crowds to gather as depicted in the fake image.
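AFP's comparison above was done by eye, but this kind of check can be partially automated. As a rough sketch only (perceptual hashing is a general-purpose technique, not a method AFP says it used), the snippet below scores visual similarity between a suspect image and a frame from verified footage; the file names and threshold are hypothetical.

```python
# Rough illustration of automating the comparison described above.
# Perceptual hashing is NOT the method AFP says it used; it simply scores
# how visually similar two images are. Assumes the third-party Pillow and
# imagehash packages; file names and the threshold are hypothetical.
from PIL import Image
import imagehash

def phash_distance(path_a: str, path_b: str) -> int:
    """Return the Hamming distance between the images' perceptual hashes.

    Small distances (roughly 0-10 for the default 64-bit pHash) suggest
    the same underlying scene; large ones suggest substantial differences,
    such as a fabricated crowd that real footage does not show.
    """
    return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

if __name__ == "__main__":
    distance = phash_distance("viral_image.jpg", "verified_frame.jpg")
    print(f"pHash distance: {distance}")
    if distance > 16:
        print("Images differ substantially -- flag for manual review.")
```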
The real emergency response
The actual sinkhole incident on Samsen Road was handled with proper safety protocols. Suriyachai Rawiwan, director of Bangkok's disaster prevention department, told AFP at the scene that the collapse was likely linked to heavy rain and a leaky pipe.
[caption id="attachment_31739" align="aligncenter" width="708"]
Screenshot from Google Images showing the AI-generated content detection label, highlighted by AFP.[/caption]
"There was a leak in the water pipe — water from the pipe eroded (earth) under the road so this incident happened," Rawiwan said, adding that there were no known casualties.
"The water that eroded brought some soil that dropped down to an under-construction subway station, causing the collapse."
The roughly 50-meter (160-foot) hole pulled down power lines and exposed a burst pipe gushing water. Bangkok Governor Chadchart Sittipunt confirmed at the scene: "The location is at a station, and the soil was sucked into the site... it collapsed."
Prime Minister Anutin Charnvirakul visited the site and ordered an investigation. "Dirt from an underground train construction was sliding in," the Prime Minister told reporters. "Luckily there are no deaths or injuries."
Proper safety measures in place
Contrary to the AI-generated image's implications, authorities properly cordoned off the dangerous area. AFP photographs confirmed the site was secured, preventing public access to the hazardous zone. The nearby Samsen police station was evacuated, and senior police officer Sayam Boonsom ordered the evacuation of nearby apartment blocks.
Vajira Hospital suspended outpatient services for two days as a precautionary measure, though Bangkok city officials confirmed the hospital's structure was not affected. Noppadech Pitpeng, a 27-year-old hospital worker, described being woken by a rumbling sound on Wednesday morning: "The sound was like an electricity pole collapsing and my whole flat shook."
The growing threat of AI misinformation
This incident underscores the increasing sophistication of AI-generated misinformation during breaking news events.
Google's SynthID watermarking technology, which embeds imperceptible digital markers in AI-generated content, represents one approach to combating such deception.
The watermarking system has been integrated across Google's AI products and can detect artificially generated images, audio, video, and text.
According to Google, over 10 billion pieces of content have been watermarked with SynthID since the system's launch, though the technology primarily works for content generated using Google's AI services.
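SynthID's actual algorithm is proprietary, and its detector is only available through Google's own tools, but the general idea of an imperceptible, machine-readable marker can be shown with a deliberately simplified sketch. The toy below hides a fixed bit pattern in pixel least-significant bits, a classic steganography exercise rather than SynthID's method, and every name in it is hypothetical.

```python
# Deliberately simplified sketch -- NOT SynthID, whose algorithm is
# proprietary and far more robust. This toy hides a fixed bit pattern in
# pixel least-significant bits (classic LSB steganography) just to show
# the general idea of an imperceptible, machine-readable watermark.
# All file names are hypothetical.
import numpy as np
from PIL import Image

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed(path_in: str, path_out: str) -> None:
    """Write MARK into the least-significant bits of the first red pixels."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    red = pixels[..., 0].reshape(-1).copy()
    red[: len(MARK)] = (red[: len(MARK)] & 0xFE) | MARK  # clear LSB, set tag bit
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    Image.fromarray(pixels).save(path_out, format="PNG")  # lossless, keeps bits

def detect(path: str) -> bool:
    """Return True if MARK is found in the expected pixel positions."""
    pixels = np.array(Image.open(path).convert("RGB"))
    red = pixels[..., 0].reshape(-1)
    return bool(np.array_equal(red[: len(MARK)] & 1, MARK))

if __name__ == "__main__":
    embed("generated.png", "watermarked.png")
    print("Watermark detected:", detect("watermarked.png"))
```

Unlike this fragile toy, which any crop or re-save to JPEG would destroy, production watermarks such as SynthID are designed to survive common transformations like compression, resizing, and color changes.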