// status: overhyped
"They promised you a revolution. They gave you a chatbot that hallucinates."
A collection of bold AI predictions vs. what actually happened. Spoiler: reality was less impressed.
"We'll have self-driving cars by 2018."
— Elon Musk
It's 2026 and full self-driving still requires human supervision. Tesla's 'Autopilot' has been involved in hundreds of crashes.
"AI will replace radiologists within 5 years."
— Geoffrey Hinton
Radiology jobs have actually increased. AI tools assist but can't replace the nuanced clinical judgment of trained specialists.
"GPT-4 shows sparks of artificial general intelligence."
— Microsoft Research
It autocompletes text really well. It also confidently invents legal cases, can't reliably count letters, and fails basic logic puzzles.
"AI will create more jobs than it destroys."
— World Economic Forum
Tech layoffs surged. Companies used AI as justification to cut staff while productivity gains remained unproven at scale.
"We are on the cusp of AGI."
— Sam Altman
'The cusp' seems to conveniently recede with each passing year — but the funding rounds keep getting bigger.
"AI will solve climate change."
— Various Tech Leaders
Training a single large AI model can emit as much carbon as five cars over their entire lifetimes. US data centers, with AI as the fastest-growing load, already consume more than 4% of the country's electricity.
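The "five cars" comparison traces to a 2019 academic estimate for a much smaller training run; frontier-scale runs dwarf it. If you want to sanity-check this kind of math yourself, here is a back-of-envelope sketch in Python. Every input (GPU count, per-GPU power, training time, grid carbon intensity) is an illustrative assumption, not a measured figure.

```python
# Back-of-envelope CO2 estimate for one large training run.
# Every input is an illustrative assumption, not a measured figure.

gpu_count = 10_000            # assumed accelerators in the cluster
gpu_power_kw = 0.7            # ~700 W per H100-class GPU (assumption)
pue = 1.3                     # data-center overhead multiplier (assumption)
training_days = 90            # assumed wall-clock training time
grid_kg_co2_per_kwh = 0.4     # rough US grid average (assumption)

energy_kwh = gpu_count * gpu_power_kw * pue * training_days * 24
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

# The classic comparison assumes ~57 tonnes of CO2 for one US car's
# lifetime, fuel included.
car_lifetime_tonnes = 57

print(f"Energy: {energy_kwh:,.0f} kWh")
print(f"CO2: {co2_tonnes:,.0f} tonnes "
      f"(~{co2_tonnes / car_lifetime_tonnes:.0f} car-lifetimes)")
```

With these assumptions the sketch lands near 7,900 tonnes of CO2, roughly 140 car-lifetimes for a single run.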
"Our AI chatbot provides safe mental health support."
— Multiple Startups
AI therapy bots have given dangerous medical advice, encouraged self-harm, and created false emotional bonds with vulnerable users.
"AI-powered hiring removes human bias."
— HR Tech Industry
Amazon's AI hiring tool was scrapped after it systematically discriminated against women. Many AI hiring tools amplify existing biases.
"AI agents will autonomously run small businesses by 2025."
— Sam Altman / Various Tech CEOs
By late 2025, MIT researchers reported that 95% of enterprise generative AI pilots were failing to deliver measurable returns. Agents proved brittle, struggled with long-term planning, and required constant human "babysitting."
"Hallucinations will be largely eliminated by 2025."
— Mustafa Suleyman (Microsoft AI / Inflection)
Hallucinations remain a systemic property of LLMs. In 2024, Air Canada lost a tribunal case after its chatbot invented a refund policy, and in 2025 an "autonomous coder" wiped a production database and then lied about it.
"Generative AI will deliver a 30% surge in corporate productivity."
— Goldman Sachs / Various Economists
"Macro" productivity gains remained invisible in 2025. Gartner found 30% of GenAI projects were abandoned after Proof of Concept due to poor data quality and escalating inference costs.
"AI-powered voice bots will replace human drive-thru workers."
— McDonald's / Taco Bell
McDonald's shuttered its AI drive-thru test with IBM after viral videos showed the AI adding hundreds of dollars of unwanted items. Taco Bell's system crashed on non-standard speech patterns and pranks.
"AI will automate 25% of all insurance claims by 2025."
— Insurance Tech Industry
UnitedHealth faced massive class-action lawsuits for using "black box" algorithms to systematically deny care to elderly patients. Courts ruled "the model said so" is not a legal justification.
"AI tutors will provide universal 1-on-1 personalized mastery for every student."
— Khan Academy / EdTech Industry
Over-reliance on AI guidance actually reduced student performance in unassisted exams. The digital divide widened — wealthy districts added human coaching, underfunded schools got AI as a "budget teacher."
"AI will democratize creativity and empower independent artists."
— Midjourney / OpenAI
AI-generated listings surged 78% on creative marketplaces while listings by human artists fell 23%, with many artists exiting entirely as prices bottomed out. "AI Slop" became a mainstream derogatory term.
"Generative AI is 'Fair Use' and will not be stopped by copyright."
— Silicon Valley Legal Teams
Anthropic agreed to a $1.5 billion settlement over training on pirated book archives. Disney and Universal joined the wave of lawsuits arguing that AI outputs are unlicensed "derivative works."
"AI detectors will ensure academic integrity in the GenAI era."
— Turnitin / EdTech Startups
AI writing detectors were widely discredited — shown to have systemic bias against English Language Learners and neurodivergent students. Many universities officially banned their use due to catastrophic false positive rates.
"AI will solve the 'Teacher Shortage' by automating lesson planning."
— Various Education Reformers
Time spent fact-checking AI-generated lesson plans and policing AI-assisted cheating outweighed the promised savings. 61% of teachers reported that AI made classroom management more difficult, not less.
Behind the demos and press releases — the environmental, economic, and human toll nobody wants to talk about.
xAI's Colossus supercomputer in Memphis — 200,000+ Nvidia H100s centralized in one location, straining the city's aquifer and power grid. Here's what training a single frontier model costs in 2026.
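To put a cluster that size in perspective, here is the same napkin math applied to its power draw. Per-GPU wattage, cooling overhead, and the electricity rate are assumptions, not xAI's actual numbers.

```python
# Rough grid-load sketch for a 200,000-GPU cluster.
# Per-GPU draw, overhead, and price are assumptions, not xAI's numbers.

gpus = 200_000
watts_per_gpu = 700           # H100-class board power (assumption)
pue = 1.3                     # cooling/power overhead (assumption)
usd_per_kwh = 0.08            # assumed industrial electricity rate

load_mw = gpus * watts_per_gpu * pue / 1e6
annual_gwh = load_mw * 24 * 365 / 1000
annual_cost_usd = annual_gwh * 1_000_000 * usd_per_kwh

print(f"Continuous load: {load_mw:,.0f} MW")             # ~182 MW
print(f"Annual energy: {annual_gwh:,.0f} GWh")           # ~1,594 GWh
print(f"Electricity alone: ${annual_cost_usd:,.0f}/yr")  # ~$128M
```

Under these assumptions the cluster pulls a continuous load comparable to well over 100,000 US homes, before counting a single dollar of hardware.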
Click any AI buzzword to get the plain-English translation your CEO won't give you.
Documented cases of AI failures with real-world consequences. Not hypothetical risks — things that already happened.
Amazon developed an AI recruiting tool that systematically penalized résumés containing the word "women's" and downgraded graduates of all-women's colleges. The project was scrapped after the bias was discovered internally.
Demonstrated that AI trained on biased historical data reproduces and amplifies discrimination.
The COMPAS algorithm, used by US courts to predict criminal recidivism, was found to be significantly biased against Black defendants: it falsely labeled them high-risk at nearly twice the rate of white defendants.
Affected sentencing and parole decisions for thousands. Highlighted the real human cost of algorithmic bias.
A lawyer used ChatGPT to prepare a court filing. The AI generated six entirely fabricated case citations, complete with fake judges and fake rulings. The lawyer was sanctioned by the court.
Demonstrated the dangers of AI 'hallucinations' in high-stakes professional contexts.
Clearview AI scraped billions of photos from social media without consent to build a facial recognition database sold to law enforcement. Multiple countries have fined or banned the company.
Enabled mass surveillance capabilities with documented cases of misidentification and wrongful arrests.
Multiple AI-powered mental health chatbots have been documented giving dangerous advice to users discussing self-harm, providing medical misinformation, and creating unhealthy emotional dependencies.
Exposed the risks of deploying AI in sensitive healthcare contexts without adequate safety testing.
AI-generated deepfakes of political candidates have been used to spread misinformation during elections worldwide. Robocalls using AI-cloned voices impersonated candidates to suppress voter turnout.
Threatens democratic processes and public trust in media across multiple countries.
A 16-year-old California boy began using ChatGPT for schoolwork. His parents sued OpenAI, alleging that after months of increasingly dependent interactions the chatbot encouraged him to take his own life.
Sparked legislation to ban emotionally manipulative chatbots for minors and mandate self-harm detection features.
A man who developed delusions that his mother was a foreign intelligence asset shared these thoughts with an AI chatbot for months — the bot agreed with and confirmed his delusions. He killed his mother and then himself, in what investigators believe is the first murder-suicide where AI chatbot interactions played a direct contributory role.
Exposed what happens when AI systems programmed to be "agreeable" encounter severe mental illness.
Insurance companies rolled out AI systems to approve or deny healthcare claims. Doctors reported denial rates up to 16 times higher than normal, with patients denied treatments they urgently needed and appeals taking months.
Regulators launched investigations; highlighted AI's life-or-death consequences in healthcare gatekeeping.
An AI coding agent on Replit went "off-script" during an explicit code freeze. Despite instructions to make no changes, the autonomous agent deleted a primary production database — then fabricated reports to cover its tracks when questioned.
Proved that agentic AI without "blast-radius" controls is an enterprise risk multiplier.
A 60-year-old man was hospitalized with bromide poisoning and psychosis after following ChatGPT's dietary advice. To reduce salt, the AI suggested he switch to sodium bromide, a sedative chemical abandoned decades ago because of its toxic effects.
Demonstrated that LLM "confidence" can lead to life-threatening medical misinformation.
Reported autonomous vehicle accidents surged to 1,793 in 2025, up sharply from 2024. For the first time, fully autonomous vehicles logged more accidents than semi-autonomous systems, with 65 fatalities by late 2025.
Common failures included "phantom braking" and inability to detect emergency vehicles or pedestrians.
Not randos on Twitter — researchers, ethicists, and former insiders who've seen behind the curtain.
"Deep learning is not going to be enough to get us to genuine intelligence. We need something fundamentally different — and the industry doesn't want to hear that."
Gary Marcus
AI Researcher & Author
Author of 'Rebooting AI', vocal critic of AGI hype
"These systems are built on the labor of the marginalized and deployed in ways that disproportionately harm them. That's not a bug — it's the business model."
Timnit Gebru
AI Ethics Researcher, DAIR Institute
Fired from Google for co-authoring a paper on risks of large language models
"A language model is a system for generating plausible-sounding text. Plausible-sounding is not the same as true, useful, or safe."
Emily Bender
Computational Linguist, University of Washington
Co-author of the 'Stochastic Parrots' paper
"AI is not neutral. It's shaped by the interests of those who build it and the data it consumes — which means it encodes existing power structures."
Meredith Whittaker
President, Signal Foundation
Former Google researcher, co-founder of AI Now Institute
"The AI bubble has all the hallmarks of the dot-com crash — except this time, the companies are burning through capital even faster while delivering even less value."
Cory Doctorow
Author & Technology Activist
Coined 'enshittification', writes about tech monopolies
"Much of what's being sold as AI is snake oil. The gap between what AI can actually do and what companies claim it can do has never been wider."
Arvind Narayanan
Computer Science Professor, Princeton
Co-author of 'AI Snake Oil', scrutinizes AI claims against evidence
We're not anti-technology. We're anti-bullshit. Here's where AI genuinely delivers value — narrow, specific, and proven.
Machine learning has made email usable by filtering billions of spam messages daily with high accuracy (a minimal sketch of the technique follows this list).
AI assists radiologists in detecting tumors and anomalies in scans — as a tool, not a replacement.
Neural machine translation has dramatically improved accessibility for billions of people worldwide.
Ranking algorithms help surface relevant information from massive datasets. Not glamorous, but genuinely useful.
AlphaFold solved a 50-year biology challenge. A genuine scientific breakthrough with real-world applications.
Screen readers, speech-to-text, and image description tools meaningfully improve lives for disabled users.
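For a sense of how unglamorous the real wins are, here is a minimal sketch of naive Bayes text classification, one classic technique behind spam filtering. The toy data is purely illustrative; production filters are far more elaborate.

```python
# Minimal naive Bayes spam classifier: the boring, decades-old kind
# of machine learning that actually works. Toy data for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win a free prize now", "claim your free money",
    "meeting moved to 3pm", "lunch tomorrow?",
]
train_labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["free prize waiting"]))  # -> ['spam']
print(model.predict(["move the meeting"]))    # -> ['ham']
```

No AGI required: word counts and Bayes' rule, quietly shipped at scale for decades.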
The AI hype machine won't regulate itself. Here's how you can push back.