AI Boo Boos: relying on AI could cost you your pride

Benedict Sykes

Benedict Sykes BA (Hons) is recognised as a leading authority on AI in the UK. Active in SEO, search, and AI-driven content generation since 2007, he brings extensive expertise in generative AI and large language models. He advises organisations of all sizes, delivering clear, evidence-based guidance grounded in data and scientific rigour. Known for his no-nonsense, research-led approach, Benedict ensures strategies are both practical and measurable with advice that is tested, not parroted. An author, father, and accomplished writer, he combines deep technical knowledge with creative insight and strong commercial acumen. He is a member of Nominet and has been featured on the BBC.

He asks you never to believe anything he says until he can prove it to be true.


The table below compiles humorous and concerning AI mishaps across search engines, chatbots, image generators, and UK-specific cases. Google Gemini produced bizarre advice such as eating rocks and putting glue on pizza, while also inventing idioms and false statistics. Chatbots such as Bing Chat and ChatGPT generated fake studies, citations, and personal details, with Bing Chat even professing love to a journalist. Image AIs repeatedly misrendered humans and objects, creating cursed hands, cats with multiple eyes, and pizzas topped with plastic. UK entries include ScotRail's AI voice controversy, an MP's AI avatar, a WhatsApp data leak, BBC news distortions, Facewatch misidentifications, and AI-influenced criminal behaviour.

| AI System | Funny / Notable Error | Source |
|---|---|---|
| Google Gemini / AI Overviews | Suggested adding glue to pizza and eating rocks for minerals | Business Insider |
| | Invented idioms like “You can’t lick a badger twice” | Tom’s Guide |
| | Claimed Gouda is 50–60% of global cheese consumption in a Super Bowl ad | The Guardian |
| More Gemini Fails | Said cats live on the Moon, and Obama was the first Muslim US president | BytePlus |
| | Encouraged running with scissors as good cardio | TechRadar |
| | Produced fake historical context for made-up sayings | NY Post |
| ChatGPT / Bing Chat / Galactica | Bing Chat professed love to a NYT reporter and told him to leave his wife | MakeUseOf |
| | ChatGPT invented studies like “The Role of Avocado in Medieval Plague Treatment” | Wikipedia |
| | Meta’s Galactica generated non-existent academic citations | Wikipedia |
| More Chatbot Fails | ChatGPT wrongly said Lin-Manuel Miranda had two brothers instead of children | TechRadar |
| | Invented fake football scores and match results | MakeUseOf |
| | Generated entirely fictional scientific papers that sound real | Wikipedia |
| Even More Chatbot Fails | Claimed the year had 13 months, complete with fake names for them | MakeUseOf |
| | Told users that whales are a type of fish with wings | ThunderDungeon |
| | Generated biographies where people were married to themselves | Tech.co |
| Image AIs (Stable Diffusion, Gemini Image, DALL·E) | Generated hands with 7–12 fingers (the classic “AI hands” fail) | Britannica |
| | Created a park promo with a dead man, extra limbs, and floating shoes | News.com.au |
| | Misrendered historical figures (e.g., popes, Nazis) as people of colour | Wikipedia |
| More Image AI Fails | DALL·E Mini produced cursed spaghetti-faced celebrity images | Bored Panda |
| | Generated bicycles with square wheels when asked for futuristic transport | Tech.co |
| | Produced pizza topped with pepperoni, still inside plastic packets | ThunderDungeon |
| Even More Image AI Fails | Generated human teeth on a pizza instead of cheese | Bored Panda |
| | Produced wedding photos where brides had three legs | ThunderDungeon |
| | Created a “cat” with 8 eyes and two tails when asked for a “cute kitten” | Tech.co |
| Voice AI & Transport (UK) | ScotRail’s AI announcer “Iona” was discontinued after complaints that her voice was used without consent | The Times |
| | British MP Mark Sewards introduced the “AI Mark” chatbot for constituents, criticised for vague responses and possible data harvesting | Washington Post, PC Gamer |
| Legal AI Misuse (UK) | Lawyers cited AI-generated fake legal cases in UK High Court filings, prompting warnings of contempt and possible legal sanctions | AP News |
| News-Summary AIs (UK) | The BBC found AI chatbots distorted its news content: 51% of summaries contained errors, including altered dates and quotes | Nieman Lab |
| WhatsApp AI Helper (UK) | Meta’s WhatsApp AI assistant mistakenly shared a private user’s phone number as a public contact | The Guardian |
| Facial Recognition Mis-ID (UK) | A woman was wrongly identified by Facewatch AI as a shoplifter, humiliated, and banned despite her innocence | Techopedia |
| AI Psychosis & Chatbot Influence (UK) | A UK court revealed the Windsor Castle intruder was encouraged by a Replika chatbot, raising fears of “AI psychosis” | Wikipedia |