
Key takeaways: Springer Nature retracted a machine learning book after investigators found 25 out of 46 citations referenced nonexistent works. Academic experts discovered more than 70% of citations in some chapters of a second Springer Nature title could not be verified. The publisher has since deployed an in-house AI tool to flag suspect references.

In August 2025, Springer Nature officially retracted "Mastering Machine Learning: From Basics to Advanced" by author Govindakumar Madhavan after an investigation revealed widespread citation fraud.
The publisher confirmed that 25 out of 46 references in the book could not be verified.
A spokesperson for Springer Nature told The Bookseller that a number of the book's citations were unverifiable, rendering the text unreliable.
The company added that it will always take editorial action where research integrity is compromised.
The 257-page book, which sold for $169, had been accessed 3,782 times before its removal.
According to the author's biography, Madhavan is the founder and CEO of SeaportAi and has written approximately 40 video courses and 10 books.
Retraction Watch, which first reported on the fake citations, contacted four researchers whose names appeared in the references.
These researchers confirmed that they did not write the cited material or that their work was misattributed.
A separate Springer Nature publication, "Social, Ethical and Legal Aspects of Generative AI," came under scrutiny after investigations by The Times and academic experts revealed that more than 70% of citations in some chapters could not be verified.
One chapter cited a paper allegedly published in the "Harvard AI Journal," a publication that does not exist.
Guillaume Cabanac, a computer science professor at the University of Toulouse who specializes in detecting fraudulent academic work, described the findings as research misconduct.
Dr. Nathan Camp of New Mexico State University conducted an independent review and found multiple erroneous, mismatched, or entirely invented references.
In some cases, details from different genuine papers appeared to have been combined, while other chapters seemed accurate.
James Finlay, vice-president for applied sciences books at Springer Nature, acknowledged the concerns and said the publisher's research integrity team is investigating the issue as a priority.
He noted that while rigorous checks are in place to prevent errors, a small number of issues may occasionally slip through the peer-review and editorial process.
In response to the scandals, Springer Nature announced in July 2025 that it had developed an in-house AI tool to detect irrelevant references in academic journals and books.
The tool will work alongside existing systems like Geppetto, which identifies AI-generated content, and SnappShot, which screens images for integrity problems.
Chris Graf, director of research integrity at Springer Nature, stated that the irrelevant reference checker tool will add another layer of scrutiny.
He explained that if multiple citations are flagged as irrelevant, the submission will be reviewed manually by Springer Nature's Research Integrity Group.
Graf told Research Information that fake research is a challenge affecting the entire publishing industry and is something Springer Nature will not tolerate.
He emphasized the publisher's commitment to ensuring the robustness of published research through AI tools, a 50-strong expert research integrity unit, and employee training programs.
The fake citations scandal highlights a growing problem in academic publishing.
Large language models like ChatGPT can generate authoritative-sounding references to papers that do not exist, a phenomenon known as AI hallucination.
Felicitas Behrendt, senior communications manager for books at Springer Nature, told Retraction Watch that the publisher provides policies and guidance about AI use to its authors.
The company emphasizes that any submission must be undertaken with full human oversight, and any AI use beyond basic copy editing must be declared.
However, "Mastering Machine Learning" contained no such declaration about AI use, despite including a section on ChatGPT that discusses ethical questions about the use and misuse of AI-generated text.
The controversy has prompted calls for stronger verification measures across the academic publishing industry, with some experts suggesting that all references should include DOIs, ISBNs, or URLs to enable easier fact-checking.
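The DOI-based verification those experts propose amounts to two steps: a cheap syntax check on each reference, then a resolution check against doi.org (every registered DOI resolves at https://doi.org/&lt;doi&gt;). A minimal sketch of the first step in Python; the helper name is illustrative, and a syntactically valid DOI can still point to a record that does not exist:

```python
import re

# All DOIs begin with the "10." directory indicator, followed by a
# numeric registrant code, a slash, and a registrant-chosen suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(ref: str) -> bool:
    """Return True if the string is shaped like a DOI."""
    return bool(DOI_PATTERN.match(ref.strip()))

# A reference lacking any DOI-shaped string is not necessarily fake,
# but it cannot be machine-verified this way. Confirming that a DOI
# actually resolves would require an HTTP request to
# https://doi.org/<doi>, treating a 404 as a likely fabricated
# reference (network call omitted here).
```

This catches fabrications like the nonexistent "Harvard AI Journal" citation only if publishers require DOIs in the first place, which is precisely the policy change being suggested.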