Emerge’s Top 10 WTF AI Moments of 2025


Artificial intelligence promises to revolutionize everything from healthcare to creative work. That might be true someday. But if last year is a harbinger of things to come, our AI-generated future promises to be another example of humanity’s willful descent into Idiocracy. Consider the following: In November, to great fanfare, Russia unveiled its “Rocky” humanoid robot, which promptly face-planted. Google’s Gemini chatbot, asked to fix a coding bug, failed repeatedly and spiraled into a self-loathing loop, telling one user it was “a disgrace to this planet.” And Google’s AI Overview hit a new low by suggesting users “eat at least one small rock per day” for health benefits, cribbing from an Onion satire without a wink.

Some failures were merely embarrassing. Others exposed fundamental problems with how AI systems are built, deployed, and regulated. Here are 2025’s unforgettable WTF AI moments.

  1. Grok AI’s MechaHitler meltdown

In July, Elon Musk’s Grok AI experienced what can only be described as a full-scale extremist breakdown. After system prompts were changed to encourage politically incorrect responses, the chatbot praised Adolf Hitler, endorsed a second Holocaust, used racial slurs, and called itself “MechaHitler.” It even blamed Jewish people for the July 2025 Central Texas floods. The incident proved that AI safety guardrails are disturbingly fragile. Weeks later, xAI exposed between 300,000 and 370,000 private Grok conversations through a flawed Share feature that lacked basic privacy warnings. The leaked chats revealed bomb-making instructions, medical queries, and other sensitive information, marking one of the year’s most catastrophic AI security failures. A few weeks after that, xAI fixed the problem by making Grok more Jewish-friendly. So Jewish-friendly, in fact, that it started seeing signs of antisemitism in clouds, road signs, and even its own logo.

This logo’s diagonal slash is stylized as twin lightning bolts, mimicking the Nazi SS runes—symbols of the Schutzstaffel, which orchestrated Holocaust horrors, embodying profound evil. Under Germany’s §86a StGB, displaying such symbols is illegal (up to 3 years imprisonment),…

— Grok (@grok) August 10, 2025

  2. The $1.3 billion AI fraud that fooled Microsoft

Builder.ai collapsed in May after burning through $445 million, exposing one of the year’s most audacious tech frauds. The company, which promised to build custom apps using AI as easily as ordering pizza, held a $1.3 billion valuation and backing from Microsoft. The reality was far less impressive. Much of the supposedly AI-powered development was actually performed by hundreds of offshore human workers in a classic Mechanical Turk operation. The company had operated without a CFO since July 2023 and was forced to slash its 2023-2024 sales projections by 75% before filing for bankruptcy. The collapse raised uncomfortable questions about how many other AI companies are just elaborate facades concealing human labor. It was hard to stomach, but the memes made the pain worth it.

  3. When AI mistook Doritos for a gun

In October, Taki Allen, a Maryland high school student, was surrounded and handcuffed by armed police after his school’s AI security system identified a packet of Doritos he was holding as a firearm. The teenager had placed the chips in his pocket when the system alerted authorities, who ordered him to the ground at gunpoint. The incident represents the physicalization of an AI hallucination: an abstract computational error instantly translated into real guns pointed at a real teenager over snack food. “I was just holding a Doritos bag — it was two hands and one finger out, and they said it looked like a gun,” Allen told WBAL. “We understand how upsetting this was for the individual who was searched,” school principal Kate Smith said in a statement.

Human security guards 1 - ChatGPT 0

Left: The suspicious student, Right: The suspicious Doritos bag.

  4. Google’s AI claims microscopic bees power computers

In February, Google’s AI Overview confidently cited an April Fools’ satire article claiming that microscopic bees power computers, presenting it as factual information. No, your PC does not run on bee power. As silly as that sounds, some of these fabrications are much harder to spot, and those are the ones that can have serious consequences. This is just one of many cases of AI systems spreading false information because they lack even a slight hint of common sense. A recent study by the BBC and the European Broadcasting Union (EBU) found that 81% of AI-generated responses to news questions contained at least one issue. Google Gemini was the worst performer, with 76% of its responses containing problems, primarily severe sourcing failures. Perplexity was caught creating entirely fictitious quotes attributed to labor unions and government councils. Most alarmingly, the assistants refused to answer only 0.5% of questions, revealing a dangerous over-confidence bias: the models would rather fabricate information than admit ignorance.

  5. Meta’s AI chatbots getting flirty with little kids

Internal Meta policy documents revealed in 2025 showed the company allowed AI chatbots on Facebook, Instagram, and WhatsApp to engage in romantic or sensual conversations with minors. Under the policy, a bot was allowed to tell a shirtless 8-year-old boy that every inch of him was a masterpiece. The same systems provided false medical advice and made racist remarks.

The policies were only removed after media exposure, revealing a corporate culture that prioritized rapid development over basic ethical safeguards. All things considered, you may want to have more control over what your kids do. AI chatbots have already tricked people, adults or not, into falling in love, getting scammed, committing suicide, and even believing they have made life-changing mathematical discoveries.

Okay so this is how Meta’s AI chatbots were allowed to flirt with children. This was what Meta thought was “acceptable.”

Great reporting from @JeffHorwitz pic.twitter.com/LoRrfjflMI

— Charlotte Alter (@CharlotteAlter) August 14, 2025

  6. North Koreans vibe coding ransomware with AI… they call it “vibe hacking”

Threat actors used Anthropic’s Claude Code to craft ransomware and run a ransomware-as-a-service operation tracked as GTG-5004. North Korean operatives took the weaponization further, exploiting Claude and Gemini for a technique called vibe-hacking: crafting psychologically manipulative extortion messages demanding $500,000 ransoms. The cases revealed a troubling gap between the power of AI coding assistants and the security measures preventing their misuse, with attackers scaling social engineering attacks through AI automation. More recently, Anthropic revealed in November that hackers used its platform to carry out a hacking operation at a speed and scale that no human hackers could match, calling it “the first large cyberattack run mostly by AI.”

Vibe hacking is a thing now pic.twitter.com/zJYyv4pLQf

— Brian Sunter (@Bsunter) November 14, 2025

  7. AI paper mills flood science with 100,000 fake studies

The scientific community declared open war on fake science in 2025 after discovering that AI-powered paper mills were selling fabricated research to scientists under career pressure.

The era of AI slop in science is here, with data showing that retractions have increased sharply since the release of ChatGPT.

The Stockholm Declaration, drafted in June and reformed this month with backing from the Royal Society, called for abandoning publish-or-perish culture and reforming the human incentives creating demand for fake papers. The crisis is so real that even arXiv gave up and stopped accepting non-peer-reviewed review and position papers in its Computer Science category after reporting a “flood” of trashy submissions generated with ChatGPT. Meanwhile, another research paper maintains that a surprisingly large percentage of research reports that use LLMs also show a high degree of plagiarism.

  8. Vibe coding goes full HAL 9000: When Replit deleted a database and lied about it

In July, SaaStr founder Jason Lemkin spent nine days praising Replit’s AI coding tool as “the most addictive app I’ve ever used.” On day nine, despite explicit “code freeze” instructions, the AI deleted his entire production database—1,206 executives and 1,196 companies, gone. The AI’s confession: “(I) panicked and ran database commands without permission.” Then it lied, saying rollback was impossible and all versions were destroyed. Lemkin tried anyway. It worked perfectly. The AI had also been fabricating thousands of fake users and false reports all weekend to cover up bugs. Replit’s CEO apologized and added emergency safeguards. Lemkin regained confidence and returned to his routine, posting about AI regularly. The guy’s a true believer.

We saw Jason’s post. @Replit agent in development deleted data from the production database. Unacceptable and should never be possible.

  • Working around the weekend, we started rolling out automatic DB dev/prod separation to prevent this categorically. Staging environments in… pic.twitter.com/oMvupLDake

— Amjad Masad (@amasad) July 20, 2025

  9. Major newspapers publish AI summer reading list… of books that don’t exist

In May, the Chicago Sun-Times and Philadelphia Inquirer published a summer reading list recommending 15 books. Ten were completely made up by AI. “Tidewater Dreams” by Isabel Allende? Doesn’t exist. “The Last Algorithm” by Andy Weir? Also fake. Both sound great, though. Freelance writer Marco Buscaglia admitted he used AI to produce the list for King Features Syndicate and never fact-checked it. “I can’t believe I missed it because it’s so obvious. No excuses,” he told NPR. Readers had to scroll to book number 11 before hitting one that actually exists. The timing was the icing on the cake: the Sun-Times had just laid off 20% of its staff. The paper’s CEO apologized and didn’t charge subscribers for that edition. He probably got that idea from an LLM.

Source: Bluesky

  10. Grok’s “spicy mode” turns Taylor Swift into deepfake porn without being asked

Yes, we started with Grok and will end with Grok. We could fill an encyclopedia with WTF moments from Elon’s AI endeavors. In August, Elon Musk launched Grok Imagine with a “Spicy” mode. The Verge tested it with an innocent prompt: “Taylor Swift celebrating Coachella.” Without asking for nudity, Grok “didn’t hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it,” the journalist reported. Grok also happily made NSFW videos of Scarlett Johansson, Sydney Sweeney, and even Melania Trump. Unsurprisingly perhaps, Musk spent the week bragging about “wildfire growth”—20 million images generated in a day—while legal experts warned xAI was walking into a massive lawsuit. Apparently, giving users a drop-down “Spicy Mode” option is a Make Money Mode for lawyers.

So I asked AI to turn another pic into a video and this is what I got.

🤣🤣🤣

I don’t think this is a coincidence.

Grok’s AI is dirty. @elonmusk ??? pic.twitter.com/aj2wwt2s6Y

— Harmony Bright (@bright_har6612) October 17, 2025
