The Driverless Taxi Dilemma: Would You Get In?
Feature | 11-02-2026
Trust in autonomous vehicles in the UK is not simply a technical issue. It is a psychological and governance challenge that will shape public adoption.
A silent vehicle pulls up. No driver, no eye contact, no reassuring nod in the mirror.
Just sensors, code and an algorithm making thousands of decisions per second.
This is no longer speculative. Under the UK’s new autonomous vehicle framework, public trials of driverless taxis are expected to begin in 2026. The policy ambition is clear: reduce collisions linked to human error, optimise traffic flow and widen mobility access for those unable to drive.
But policy goals and personal decisions are not the same thing.
So here is the real question: would you get in?
If someone handed me a ticket today, I would take a breath and step inside. But I would do so with conditions. Not because I distrust technology outright, but because trust in automation is rarely binary. It is negotiated.
Yes. But Only With Guardrails.
My answer would be yes, with a safety-first mindset:
- A clearly regulated, pilot-certified route
- Daytime travel on familiar roads
- A visible emergency override mechanism
- Remote human oversight during early deployment phases
That blend of curiosity and caution is revealing. It is not just about transport. It is about control.
And that is where this becomes interesting.

The Psychology of Control: Why This Is Not Really About Taxis
Boarding a robotaxi is not simply a statistical risk calculation. It reflects your relationship with agency.
Some people are comfortable relinquishing direct control once reliability is demonstrated. Others experience discomfort when agency shifts from human judgement to opaque algorithms.
If you would step in without hesitation, you likely:
- Tolerate uncertainty well
- Prioritise efficiency and innovation
- Trust distributed systems to outperform individuals
- View regulated risk as acceptable
If you would hesitate, you may:
- Prefer visible, tangible control
- Require high assurance before delegating safety
- Distrust opaque decision-making processes
- Feel uneasy when accountability appears abstract
Neither response is irrational. Both are psychologically coherent.
We already accept automation elsewhere. Commercial aviation relies heavily on autopilot systems. Many urban rail networks operate autonomously. Credit scoring, fraud detection and traffic management are algorithmically driven.
Yet proximity changes perception. Sitting inside the machine sharpens the stakes.

Agency in the Age of Automation
The deeper issue is our sense of agency.
Agency is not just about who turns the wheel. It is about who we believe directs outcomes in our lives.
Saying yes to a driverless taxi may signal:
- Confidence in engineered safety
- Comfort with shared or distributed control
- Optimism about technological progress
Saying no may signal:
- A desire for personal command in critical moments
- Scepticism about system transparency
- Concern about accountability if something goes wrong
As AI moves from background optimisation into physical decision-making, the negotiation between delegation and autonomy becomes visible. Robotaxis are simply a vivid example.

The Broader Societal Question
The UK Government argues that autonomous vehicles could significantly reduce road traffic accidents, given that human error contributes to the vast majority of collisions.
However, public confidence is not automatic. Surveys by organisations such as YouGov have shown that many people remain uneasy about travelling without a human driver.
That scepticism is not ignorance. It reflects three unresolved issues:
- Liability – Who is accountable if an algorithm fails?
- Transparency – How understandable are machine decision processes?
- Equity – Who benefits first, and who absorbs early risk?
Trust in automation is built through:
- Clear regulation
- Demonstrable safety performance
- Transparent oversight
- Visible safeguards
It is not built through enthusiasm alone.

Why This Matters Beyond Transport
Driverless taxis are a microcosm of a broader transition in behaviour and identity.
AI is shifting from recommendation engines to embodied decision-making. It is acting in physical space. That alters the psychological contract between humans and machines.
Three tensions intensify:
- Control versus convenience
- Efficiency versus emotional reassurance
- Delegation versus autonomy
The decision to step into a robotaxi becomes a rehearsal for choices about AI in healthcare, logistics, finance, governance and leisure.
My Personal Litmus Test
So would I get in?
Yes. But not blindly.
Curiosity without caution is naive. Caution without curiosity is stagnation. The most intelligent posture is conditional trust, grounded in evidence and governance.
The more revealing question, however, is not what I would do. It is what your answer says about you.
Are you comfortable handing over the wheel and observing what the system does?
Or do you grip control tightly, valuing human judgement even if it is statistically less efficient?
Your answer reflects your tolerance for uncertainty, your trust in institutions and your evolving relationship with machine intelligence.
And as AI becomes more embedded in everyday life, that relationship will shape far more than how we travel.
So, would you get in? Let me know.