
Google AI Hallucinating on Glue

  • Google AI summarizing washing machine maintenance in a way that skips the step that keeps you from creating highly toxic chlorine gas.
  • Google AI drawing on Reddit troll threads to advise seasoning your cheese sticks with glue.
  • Google SERP being vulnerable to intentional poisoning attacks, allowing anyone to commandeer Google’s top SERP spot to place misinformation.

“Move fast and break things” is making a brutal return after a few years of pretending to have mellowed, and a year of smokescreening the world with “responsible AI” and “regulate open source, because only us big companies can be trusted to be responsible”.

And from a business perspective it’s the right move, because of competition – the only way to stop it is regulatory intervention. The greatest trick the tech giants ever pulled was convincing governments that only by centralizing control could technology be effectively regulated, while extolling the benefits of “allowing local companies to access global markets”.

It is a trick, because the majority of the damage caused by technology is rooted in the scale these companies provide and the single points of failure they create. Misinformation, political influence campaigns, financial scams and radicalisation are all made effective by the highly advanced targeting capabilities and centralized surfaces these companies have built.

Their AI strategy follows the same rules. Doomsday scenarios of “nuclear weapons in the hands of commoners” are being used to bludgeon outmatched government officials into passing regulation that ensures long-term centralization into the waiting arms of Big Tech.

These companies are now powerful enough to threaten entire countries with “disconnection from the global customer base” and “being left behind in AI”, and they use this power to shape the policies that let them externalize the damage done through their platforms.

This particular Google product now takes the cake:

– Appropriating the entire knowledge of the internet, stripping out author and source information, leaving users no way to validate or question what they read, and depriving the original sources of all monetizable traffic.

– All while abdicating all responsibility for the content it propagates on a surface trusted by billions of people, hidden behind a “* AI may be incorrect” fine-print disclaimer.

It’s actually perfect for business: when you have no way of validating sources, advertisements and influence campaigns cannot be distinguished from real-world truth.

by Georg Zoller
AI Literacy & Transformation. Strategy Consulting. Ex Meta Business Engineering Director

Read on the author’s LinkedIn
