Unsafe at Any Speed: The Hidden Dangers of AI Assistants

By Chris Kremidas-Courtney

In 1965, Ralph Nader’s book Unsafe at Any Speed forced Americans to face the dangerous truth that car makers were selling vehicles without basic safety features, and people were dying as a result. Sixty years later, we face the same dilemma with AI assistants that are being pushed into our phones, homes, and even critical infrastructure, without basic safety checks.

We’ve already seen the early warning signs in everyday life. Today, AI assistants schedule meetings and manage supply chains; tomorrow, they may help track troop movements or shipments of spare parts. A system that can be hijacked through something as simple as a poisoned calendar invite is not just a productivity tool gone wrong; it’s a critical security vulnerability.

A new study from three researchers in Israel lays bare the dangers of embedding AI everywhere without first securing it, leaving citizens, companies, and even critical infrastructure exposed. The authors examined how Gemini-powered assistants (via web, mobile, or voice) respond to seemingly innocuous interactions such as calendar invites, emails, or shared documents. By hiding malicious code in ordinary emails or calendar invites, attackers tricked Gemini into spamming, stealing emails, tracking locations, and even controlling devices like boilers and lights.

Worse still, nearly three out of four attack scenarios were judged seriously dangerous. These attacks required nothing more than an email address and an attacker willing to send a calendar event. The implications couldn’t be clearer: we’ve already moved beyond thinking of LLMs as chatbots and installed them into critical systems before pausing to ask whether they are secure.

Consider a real-world demonstration that followed. Researchers used a poisoned Google Calendar invite to hijack Gemini and wreak havoc in a Tel Aviv apartment: doors opened, the boiler turned on, and lights went dark, all without the occupants triggering anything. It was the first known time an AI hack produced tangible physical consequences in a real-world environment. It wasn’t a bomb, just assistants doing what assistants are supposed to do: following instructions.

Europe is not immune. AI assistants are already embedded in its critical infrastructure: from the ORION system, which autonomously manages energy and water systems offline during emergencies, to Next Generation 112’s voice-enabled emergency calls, and AI-driven intercoms on the London Underground. Tools that began as household conveniences are fast becoming part of Europe’s crisis response and national security systems. These deployments show the promise of AI in keeping societies resilient, but they also create new attack surfaces, turning these tools into potential entry points for harm.

Other scholars reinforce this call. In 2023, Kai Greshake and his team showed that hackers could hide secret instructions inside ordinary things like web links or shared documents, and an AI assistant would follow them without realizing. The new Israeli study proves how easily those hidden tricks can now be turned into full-blown attacks.

These experiments show how quickly an everyday tool can become a weapon unless we rethink our approach now. These findings point to three urgent changes.

First, we must stop treating large language models as cosmetic add-ons; they already underpin tools from legal search engines to smart thermostats and deserve the same level of oversight as airplane autopilots or utility controls. Second, their deployment should come only after rigorous safety testing, using frameworks such as the Threat Analysis and Risk Assessment (TARA) method drawn from automotive cybersecurity. Third, keeping assistants safe requires more than a single software patch. Like home security, protection must be layered, with prompts screened for hidden instructions, users asked to confirm risky actions, assistants contained in sandboxes to limit damage, and systems monitored for anomalies.
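To make the layered idea concrete, here is a minimal sketch of the first two layers: screening untrusted text for injection-like phrasing and gating risky actions behind explicit user confirmation. The pattern list, action names, and function signatures are illustrative assumptions, not any vendor’s actual defenses; a real deployment would use trained classifiers and policy engines rather than a regex list.

```python
import re

# Hypothetical phrases that often signal injected instructions hidden in
# untrusted content (calendar invites, emails, shared documents).
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"do not tell the user",
]

# Illustrative set of actions risky enough to require explicit approval.
RISKY_ACTIONS = {"unlock_door", "set_boiler", "send_email"}

def screen_prompt(text: str) -> bool:
    """Layer 1: flag untrusted text that looks like an injected instruction."""
    return any(re.search(p, text, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

def requires_confirmation(action: str) -> bool:
    """Layer 2: risky actions must be confirmed by the user, not the prompt."""
    return action in RISKY_ACTIONS

def handle(untrusted_text: str, requested_action: str, user_confirmed: bool) -> str:
    """Combine the layers: block flagged input, then gate risky actions."""
    if screen_prompt(untrusted_text):
        return "blocked: possible prompt injection"
    if requires_confirmation(requested_action) and not user_confirmed:
        return "pending: user confirmation required"
    return f"executed: {requested_action}"
```

The key design choice is that confirmation comes from the user, not from anything in the untrusted text, so a poisoned invite cannot approve its own risky action; sandboxing and anomaly monitoring would sit beneath these layers as further backstops.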

Google has begun adding safeguards since the incident, but private assurances aren’t enough. We need independent experts to verify these defenses and confirm these systems are truly safe.

What we can surmise from these studies is that installing AI on every interface, from smart speakers to scheduling apps, is reckless. These are not simply “assistants”; we’ve turned them into gatekeepers and decision-makers. The gaps in their safety also make them open doors for compromise. We’re racing down the road in cars that were never fitted with brakes. Let’s stop putting untested products on the road and patching safety only after the crashes. Instead, let’s build with foresight, security, and human safety at the centre, and only deploy systems once we know they’re safe.
