
When Deepware meets Wetware: the uncomfortable truth about responsible AI

6 min read · Sep 10, 2025

There is a memo from IBM dated 1979 that should be mandatory reading for every CEO racing to deploy AI. It states simply and profoundly: machines cannot be responsible.

IBM note, 1979: “A computer can never be held accountable. Therefore a computer must never make a management decision.”

Forty-six years later, we have built what I call “deepware” — the layered neural architectures of AI and machine learning that sit atop our traditional software and hardware stack. Yet somehow, in our breathless rush to innovation, we have convinced ourselves that this deepware can shoulder the burden of responsibility that our wetware — our human brains — seems increasingly eager to abandon.

Let me be clear: responsible AI does not start with better algorithms or more data. It starts with humans who remember how to think.

The four layers of our digital reality

We have always understood technology through layers. Hardware provides the physical substrate. Software gives us the instructions and logic. But now we have added deepware — these probabilistic, pattern-matching systems that generate increasingly convincing simulacra of intelligence.

The critical layer, though, remains wetware: the human brain, with its capacity for judgement, ethics and, crucially, responsibility. Yet this is precisely the layer we are systematically deactivating in our AI deployments.

Consider the lawyer who recently submitted AI-generated legal briefs to court, complete with fabricated cases and fictional precedents (Morgan 2025). Or the Melbourne trial delayed because no one verified the AI outputs (Editorji 2025). These are not stories about technological failure. They are stories about wetware choosing to go offline at the exact moment it is most needed.

The 7% that reveals everything

MIT research recently exposed something chilling: when identical medical symptoms are presented to AI healthcare systems, female patients are 7% more likely to be told to “manage at home” (MIT 2025). Add a typo, use uncertain language like “maybe”, format your message imperfectly — and the deepware decides you deserve less care.

This is not a bug in the deepware. It is a feature of how we have trained it. Our wetware, in its rush to deploy, forgot to ask critical questions: Whose patterns are we matching? Whose biases are we encoding? Whose voices are we amplifying or silencing?

The deepware is doing exactly what we programmed it to do: replicating patterns from data. The failure lies in our wetware’s abdication of its responsibility to think critically about what patterns we choose to perpetuate.

The £11.8 billion illusion

The United Kingdom’s AI industry contributes £11.8 billion to the economy, growing 150 times faster than traditional sectors (UK AI Sector Study 2024). Yet here is the paradox that should keep every executive awake: 95% of generative AI investments are producing no measurable value (MIT Project NANDA 2025).

How is this possible? Because we have confused deploying deepware with thinking. We have mistaken automation for intelligence. We have treated AI as a substitute for human judgement rather than a tool that requires even more rigorous human oversight.

The organisations seeing real returns? They are the ones where wetware remains fully engaged — where humans apply critical thinking, systems thinking and what I call “consequence thinking” to every AI deployment. They understand that deepware without active wetware is just expensive pattern matching.

The shadow economy of human intelligence

While corporations pour billions into generative AI initiatives that fail, something fascinating is happening in the shadows: employees are independently using consumer AI tools to achieve real productivity gains (MIT Project NANDA 2025). The difference? Personal accountability.

When an individual uses ChatGPT to help write an email, they remain responsible for the output. They verify, they edit, they think. Their wetware stays online because they know their name is attached to the result.

But when organisations deploy AI at scale, something strange happens. Responsibility becomes diffused. “The AI recommended it” becomes the new “I was just following orders.” We have created systems where no human feels accountable for what the deepware produces.

Going slow to go fast

There is wisdom in the observation that sometimes we need to “go slow to go fast”. In our current AI gold rush, we are doing the opposite — going fast to go nowhere. We are deploying first, thinking later, if at all, and then acting surprised when our deepware amplifies every bias, mistake and oversight we failed to catch.

Real innovation in AI is not about being first to market. It is about being first to understand the systemic consequences of what we are building. It requires wetware that is fully engaged in both critical thinking and systems thinking, understanding not just what AI can do, but what it should do, and more importantly, what it absolutely should not do.

The great abdication

What we are witnessing is not technological revolution; it is responsibility abdication. CEOs chase innovation metrics while ignoring impact assessments. Developers optimise for speed while overlooking safety. Organisations celebrate deployment velocity while their wetware atrophies from disuse.

The simple truth is this: when professionals who should know better skip basic verification, when medical systems encode bias without oversight, when legal documents get submitted without review — we are not seeing AI failure. We are seeing human systems in collapse.

The responsibility stack

Here is what nobody wants to admit: as our technology stack grows more complex, our responsibility stack must grow more robust. Every layer of deepware we add requires an exponential increase in wetware engagement.

Think of it this way:

  • Hardware requires responsibility for physical safety and environmental impact
  • Software requires responsibility for functionality and security
  • Deepware requires responsibility for bias, fairness, and societal impact
  • Wetware must orchestrate responsibility across all layers

We cannot delegate ethics to algorithms. We cannot outsource judgement to models. We cannot transfer accountability to machines that, as IBM reminded us in 1979, cannot be responsible.

The path forward: reactivating human intelligence

The solution is not to slow AI development. The solution is to reactivate our wetware and practise responsible AI in line with regulations like the EU AI Act, which makes clear that human oversight is not optional; it is essential.

This means:

  • Verification before deployment: every AI output requires human review
  • Critical thinking over speed: better to be right than first
  • Systems thinking over isolated innovation: understanding ripple effects before creating them
  • Accountability at every level: from the developer to the CEO, everyone owns the outcomes
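
To make that last point concrete, here is a minimal, hypothetical sketch in Python of what accountability at every level can look like in code: a release gate that refuses to publish an AI-generated draft until a named human has explicitly signed off, keeping the wetware in the loop and a name attached to the output. The class, function and field names are illustrative assumptions, not any real system.

    # Illustrative sketch only: a human sign-off gate for AI-generated output.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ReviewedOutput:
        draft: str             # what the deepware produced, verbatim
        reviewer: str          # the human whose name is attached to the result
        approved: bool         # explicit sign-off; deliberately no default value
        reviewed_at: datetime  # when the human actually reviewed it

    def release(output: ReviewedOutput) -> str:
        """Publish an AI draft only after explicit human sign-off."""
        if not output.approved:
            raise PermissionError(f"Blocked: {output.reviewer} has not signed off.")
        # The audit trail records who was responsible, not just what the model said.
        stamp = output.reviewed_at.strftime("%Y-%m-%d %H:%M")
        return f"{output.draft}\n(reviewed by {output.reviewer}, {stamp} UTC)"

    email = ReviewedOutput(
        draft="AI-generated client email ...",
        reviewer="A. Human",
        approved=True,  # set only after the reviewer has verified the content
        reviewed_at=datetime.now(timezone.utc),
    )
    print(release(email))

The design choice carries the argument: the approval flag has no default, so nothing can ship without a recorded human decision, and it is the reviewer’s name, not the model’s, that travels with the result.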

The choice that defines our future

We stand at a crossroads. Down one path lies the continued abdication of human responsibility, where we blame algorithms for our biases, hide behind AI for our decisions, and gradually forget how to think critically about the world we are creating.

Down the other path lies something more challenging but infinitely more valuable: a future where deepware amplifies human intelligence rather than replacing it, where AI serves as a tool for enhanced thinking rather than an excuse for not thinking at all.

The uncomfortable truth is that responsible AI has nothing to do with making machines more responsible. It has everything to do with humans remembering that they are and must remain the responsible party in every equation.

Your wetware is the most sophisticated technology you possess. In the age of deepware, it is also the most important. The question is not whether AI will transform our world — it will. The question is whether we will remain conscious, critical, and responsible enough to guide that transformation.

The next time someone tells you about their AI initiative, ask them not about the technology, but about the humans: Who is thinking? Who is verifying? Who is responsible?

Because in the end, no matter how deep our deepware becomes, it is the wetware that determines whether we are building a future worth living in.

Written by Patrizia Bertini

Inquisitive mind | Interests: Responsible AI | Innovation | Digital transformation | Co-creation | Privacy | Experience economy | Creativity | Systems thinking
