Key takeaways
International aid organizations are facing mounting criticism for deploying artificial intelligence to generate fake images of vulnerable people in their fundraising campaigns, reigniting debates about ethical representation and the exploitation of poverty for donations.
UK-based Charity Right sparked controversy in April 2023 when Reddit users identified AI-generated images in its advertisements showing children with distorted features, mismatched eye colors, and anatomically impossible hands.
The Bradford-based charity, which runs food aid programs in Sudan and Pakistan, was using the synthetic images in sponsored content without clearly labeling them as computer-generated.
One Reddit user captured the widespread concern, writing that it was hard to believe donations would reach real children when a charity could not provide authentic photos of the people it claimed to help.
Amnesty International's deleted campaign
In a more high-profile incident, Amnesty International Norway posted AI-generated images in April 2023 depicting police brutality during Colombia's 2021 national strike, which left at least 38 civilians dead.
The images, believed to have been created using Midjourney, showed protesters being dragged by police officers, but featured smoothed-off faces, incorrect flag colors, and outdated uniforms.
Sam Gregory, who leads WITNESS, a global human rights network focused on the use of video, strongly criticized the move. He said his team has spent the last five years talking to hundreds of activists and journalists globally who already face delegitimization of their images and videos under claims that they are faked, warning that Amnesty's approach does more harm than good.
Media scholar and author Roland Meyer was equally critical, calling the use of AI profoundly wrong and saying it devalues the work of the brave reporters and photographers who have spent decades documenting human rights violations.
Amnesty International defended its decision, explaining it used AI to protect protesters' identities.
A spokesperson told media outlets that many people who participated in the National Strike covered their faces because they were afraid of being subjected to repression and stigmatization by state security forces.
The spokesperson added that those who did show their faces remain at risk, with some being criminalized by the Colombian authorities.
The organization subsequently deleted the images from social media. A spokesperson told PetaPixel that the criticism showed the images were only distracting from Amnesty's core message of support for victims, and that removing them would keep attention on raising awareness of the human rights violations committed against protesters in Colombia.
The ethical dilemma deepened when Charity Right revealed results from a controlled experiment during Ramadan 2023.
Testing AI-generated images against real photography from its field teams in Sudan, the organization found that the synthetic images produced almost identical fundraising revenue to the authentic photos.
Jamal Abbas, discussing the campaign at a 2023 fundraising conference, explained that everything in the test remained constant except the images. The results showed that donors responded equally to real and fake depictions of suffering.
In a follow-up blog post published in May 2023, Charity Right attempted to justify its approach, though it acknowledged ongoing concerns about transparency and labeling.
The poverty porn problem
The controversy intersects with longstanding criticisms of poverty porn, which refers to the use of exploitative imagery showing people in developing countries in states of extreme suffering to solicit donations.
Research by Abhishek Bhati analyzing 320 photos from 32 major global charities found persistent stereotypical portrayals despite decades of pressure to adopt more ethical imagery.
Professor Jeannie Paterson, founding co-director of Melbourne University's Centre for AI and Digital Ethics, warned about broader implications.
She explained that one concern with AI-generated images is that they begin to erode the understanding of what is real and what is imagined, noting that AI blurs the line between journalism, documentary, and imagination.
Scammers exploit the technology
The situation has been further complicated by fraudulent operators using AI-generated images for charity scams.
Cambodian authorities warned in July 2025 about increasing online fraud using fabricated scenes of orphaned children and grieving families, paired with QR codes to collect money.
The Anti-Cyber Crime Department specifically named the Facebook account Khmer Khmer for spreading false stories and deepfake images.
Similar scams have proliferated globally, with AI-generated imagery used to create fake charity appeals following natural disasters in Turkey, Syria, and elsewhere.
Technology watchdogs warn that as AI becomes more sophisticated, distinguishing legitimate appeals from fraudulent ones will become increasingly difficult.
The credibility crisis
Critics argue that legitimate organizations using AI-generated images play directly into the hands of authoritarian governments seeking to dismiss documented human rights abuses as fabricated.
The precedent allows oppressive regimes to claim any unflattering documentation is merely computer-generated propaganda.
Conservation technologist Shah Selbe, who works with National Geographic, summarized the concern by noting that these are real issues impacting the safety of real people in the real world and that using pretend imagery only hurts those who are suffering.
Professor Simon Coghlan, a senior lecturer at Melbourne University's Centre for AI and Digital Ethics, emphasized that although AI images might sometimes be compelling, they can never replace the work of reporters and photographers on the frontline.
No clear industry standards
Amnesty International acknowledged it currently has no formal policy for or against using AI-generated images, though spokespersons indicated the organization remains cautious about potential misuse.
The broader nonprofit sector similarly lacks unified guidelines on when, if ever, synthetic imagery is appropriate for advocacy and fundraising.
As AI image generation technology becomes more accessible and sophisticated, aid organizations face a difficult choice: embrace cost-effective tools that perform as well as authentic photography, or maintain credibility by relying exclusively on real documentation of the crises they seek to address.