AI and Homo Sapiens: Humility and Puddles
Marvin the Paranoid Android is one of Douglas Adams' most famous creations. This passage is from Douglas' third book, 'Life, the Universe and Everything.'
Having solved all the major mathematical, physical, chemical, biological, sociological, philosophical, etymological, meteorological and psychological problems of the Universe except for his own, three times over, [Marvin] was severely stuck for something to do, and had taken up composing short dolorous ditties of no tone, or indeed tune. The latest one was a lullaby.
'Now the world has gone to bed
Darkness won't engulf my head
I can see by infra-red
How I hate the night'.
In the character of Marvin, Douglas Adams, with his characteristic genius and insight, encapsulates many of the contradictions in our emerging relationship with Artificial Intelligence.
- A robot with a 'brain the size of a planet', who suffers from the 'long dark tea time of the soul'; do we want super-intelligence with existential angst?
- The inevitability of our anthropomorphisation of AI, because that's what our brains do.
- Hubris about our ability to 'control' outcomes. Worry that we won't. It is all there in that joke about all our problems being solved 'three times over.'
- Fear and hope as a result of the emergence of new kinds of intelligence in the Galaxy!
I run a company, Bioss International, started by my mother, which has for decades been fundamentally interested in good governance, in the conditions for wise judgement and in the placing and positioning of decision making in complex environments, particularly in relation to strategy and purpose.
In our work with a variety of organisations, we frequently become aware of a large gap between the stated intent or purpose, or indeed the stated values and ethics of an organisation and what is actually being lived, experienced or done by people, including Boards.
Some people, even when the gaps are pointed out, continue to choose not to see them, what has been described elsewhere as wilful blindness.1 Indeed, in the light of a swathe of recent meltdowns, now including conditions due to Covid, we see a growing crisis of confidence in governance across a wide range of sectors (private, public and third) and in our local, national and international institutions.
The word Governance comes from the Greek 'kybernein,' meaning to 'take the helm or steer a ship'. It is, at its heart, about navigating emergence, uncertainty, risk, surprise2 and opportunity; above all, it is about judgement in the face of complexity.
Now we are adding Artificial Intelligence into the complex mix of people and purpose, strategy, ethics, values and unintended consequence. AI is already part of a wider ecosystem of decision-making and 'work' in society. It already acts 'without a human in the loop'.
It is already part of the fractal nesting of complex adaptive systems on which all human activity depends, from climate systems and the workings of the economy to mycelium networks and what we are doing to bee populations. In the same way that it is 'turtles all the way down', it is complex adaptive systems all the way down (or up, or sideways!)
AI systems and the 'data' substrate on which they feed are a new primal force in the web of recursive relationships in our daily lives, and in different ways in the lives of all species on the planet and for the planet itself and out into the projections of humans into space too, in our GPS satellites and newer space probes.
We have thought much about how power has been projected through the ages - at the point of a sword, the power of money, the power of ideas and of shared belief systems (eternal damnation in the fiery pits of hell as a form of social control, springs to mind here).
Shared belief systems allow for large scale co-operation and for large scale coercion.
Now we have 'data', and the language and systems that have grown up around those shared stories are a new primal force terraforming who we are and what we are to become.
AI is now, like a whale feeding on plankton, a new ontology emerging from oceans of data, with a capacity to see new patterns, new correlations and causes (not the same thing). It has the potential to give us new kinds of insight into the complexity of ourselves (an inward journey), into the complexities of the social systems we have created (an outward journey), and out further into the 'given' complexities of our planet and our Universe, some of which are now visible and understood, and many of which aren't and indeed may never be by Sapiens.
I want to explore this emerging space between, on the one hand, a force that feeds on data in ways we are not evolved to do, and how much agency, power and authority that force is granted or 'takes', and, on the other hand, human embodiment, judgement, potential and mystery.
Human affairs are messy and our relationship with AI will be full of contradictions and messiness too.
Humans are intelligent but we do not represent the whole and final category of intelligence on the Great Chain of Being below gods and angels. It is perfectly possible to see around us already many other kinds of intelligence.
We are intelligent. We are not the sum of the category of intelligence.
Douglas Adams used to tell a little parable about a puddle that wakes up one morning and looks at the hole it's in and thinks to itself,
This hole fits me very neatly, in fact it fits me so neatly that it must have been made especially for me.
And the puddle continues to think that the hole it is in was made especially for it as the sun comes up and the puddle evaporates.
That was Douglas' plea for a little more humility in relation to Homo sapiens' belief that we are the apogee of cognition, perception and intelligence.
Indeed Douglas is on record as saying that
The whole import of the Hitchhiker's Guide to the Galaxy is that we, along with everybody else in the universe, are completely self-deluded as to our own importance in the scheme of things.
There is the paradox, a paradox we must hold: how do we both celebrate Sapiens and simultaneously 'get over ourselves'?
Can we find the creativity and problem-solving potential in a truly creative humility about our place in the scheme of things?
AI in the Great Big Scheme of Things
For now, and for the foreseeable future, the issue that seems to me to lie at the heart of the notion of AI Ethics, and the reason I feel that term 'on its own' is problematic, is that AI cannot be meaningfully sanctioned; it can feel neither guilt, nor shame, nor remorse, nor reciprocity, nor obligation (so let's not get started on 'Robot or Marvin Rights'! Or maybe we should!)
AI cannot a priori 'be' ethical.
AI literally does not have 'skin in the game'. As Nassim Nicholas Taleb has pointed out in 'Skin in the Game', human systems tend to blow up when there is asymmetry of risk amongst the parties involved in any activity. Indeed, one of the events in recent years where this phenomenon was most starkly illustrated in human consequence was the behaviour of individuals in certain financial institutions in 2008, who were not personally 'at risk' as they played with other people's money and reputations, and who did not come under scrutiny until it was too late. In addition, maybe some of these individuals had become so dissociated from the wider context in which they were operating that they had become devoid of shame, guilt and remorse, due to psychosociopathic tendencies. Some of these people did of course end up losing their jobs, but profit had been privatised and the risk socialised.
Reflect on algorithmic trading and the implications for the fabric of society. It is problematic when humans with agency do not have 'skin in the game'. AI cannot have skin in the game: when individuals, organisations or the wider society are exposed to risk, the AI is not.
Until the moment when we wholly cede control to our robot overlords (not in my lifetime) and they get to decide what is ethical on our behalf, ethics and risk are wholly our accountability and that accountability cannot be ceded.
Even if, for example, at the extreme end of concern about 'autonomous AI', some military armed drones are being programmed to decide and trade in/out 'values' under different conditions, the accountability for any 'action' taken by the drone must remain human.
This core principle of human accountability should remain non-negotiable, even if we find ways of developing behaviours in AI systems that we would characterise as being 'ethical', in line with intent, and expectation.
An AI system cannot suffer the consequences of risky behaviour, cannot be meaningfully sanctioned, cannot be accountable. This leaves us with an immediate need for a deeply pragmatic ethics, the ethics of the every day, not of the overly abstracted.
Ethical governance of AI by human beings means complete clarity about human accountability and no abdication to AI without careful thought and consistent review: 'how's that working for us?', 'who do we mean by 'us' in this context?', and, critically, 'of whom or what are we asking those first two questions?'
It means no magical thinking about the complexity of the work the AI is currently capable of doing and it means vigilance by businesses and governments. Some deployment of AI will be good for us, some very good indeed and some of it will have the potential for a range of harms.
The Five A's
The Five A's are a simple non-legal, non-technical framework designed to provide a living map of the day-to-day working relationships between people and AI systems. They are intended to keep the gap between what the AI is trusted to do and what it is actually able to handle (i.e. its trustworthiness) under consistent review. They could be adopted as a contribution to the 'safe', the 'aligned' or the 'ethical' deployment of AI.
Two ideas sit at the heart of the protocol. AI is not human, but we will have recognisable 'working relationships' with it. These will develop over time and we don't know how.
Wise governance by business and government should be based on understanding key boundaries in relation to these other intelligences and the work we 'task' them with rather than on hard and fast rules.
And that's the core question 'what's the work?' and how do we relate to that work?
Not 'is it intelligent like us?' or 'is IT ethical?' Nor 'it's only maths and data'. Something new and relational is emerging, and it is that liminal space we need to navigate.
As humans we test our judgements by putting them into practice and seeing whether the results are satisfactory, whether they solve the problems they were designed to solve, whether the consequences were acceptable, whether they enabled a successful response to novel problems.
The questions we ask in the Five A's Protocol are thus not value judgements ('is this good or bad?'). It is the analysis that flows from asking them in the first place that matters: work through the impacts and implications in context, keep the inputs and outputs under constant review, and cross certain key boundaries consciously.
- Is the work the AI is doing Advisory – does it leave space for human judgement and decision-making? If so, what data and assumptions lie behind the AI's 'advice'? By whom is this advice being heeded, and according to whose assumptions? How trustworthy is the advice?
- Has the AI been granted any Authority – power over people, who become the biological agents carrying out its instructions, treated as mechanical agents? Has that authority been explicitly granted to the AI system? If you are an Uber driver, the answer is yes. But there will be other, more subtle ways in which authority emerges too, and inevitably this leads us to consider the nature of the power an AI system might wield over us in any given context.
- How much Agency has the AI been granted – the ability to commit energy and resource into a system and to expose the organisation, people or the wider society to both opportunity and risk in a given environment, without a human being in the loop? Might agency be precisely the right thing to give? If the risks are high, what did we do to model outcomes, to check what the AI would 'do' without giving it agency straight away?
- How conscious are we – at every stage of AI deployment – about the skills and responsibilities we are at risk of Abdicating? What human skills will atrophy? Because we can replace jobs, should we, and if so, at what pace, with what consequences, with what planning?
- Are the human lines of Accountability clear for the work the AI is doing? Human Accountability is the bedrock of understanding the interplay between each of 'Advisory, Authority, Agency, and Abdication.' For all the fallibility of human institutions, accountability must lie with boards and governments.
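The five questions above are meant to be asked repeatedly, per deployment and per context, with the unanswered ones treated as open gaps. As a minimal illustration only, they could be held in a simple review record like the sketch below; all class, field and example names here are my own assumptions, not part of any published Bioss tooling.

```python
from dataclasses import dataclass, field

# The five boundary questions of the protocol, in the order they are posed.
FIVE_AS = ("Advisory", "Authority", "Agency", "Abdication", "Accountability")

@dataclass
class FiveAsReview:
    """One periodic review of a single AI deployment in one 'local' context.

    Illustrative sketch: names and structure are assumptions for this example.
    """
    deployment: str
    answers: dict = field(default_factory=dict)  # dimension -> free-text finding

    def record(self, dimension: str, finding: str) -> None:
        # Boundaries should be crossed consciously: reject unknown dimensions.
        if dimension not in FIVE_AS:
            raise ValueError(f"Unknown dimension: {dimension!r}")
        self.answers[dimension] = finding

    def open_questions(self) -> list:
        # Dimensions not yet consciously reviewed: the gaps to keep under review.
        return [d for d in FIVE_AS if d not in self.answers]

review = FiveAsReview("loan-triage model")  # hypothetical deployment
review.record("Advisory", "Recommends only; a credit officer decides.")
review.record("Accountability", "Head of Credit Risk; board review quarterly.")
print(review.open_questions())  # → ['Authority', 'Agency', 'Abdication']
```

The point of the sketch is the `open_questions` call: the framework's value lies less in any single answer than in making visible which of the five boundaries have not yet been consciously examined.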
The hope of course, is that AI can and should be a fundamental part of the work we all do more broadly to create institutions that maximise the constructive aspects of human nature and minimise the destructive.
We should treat people as people, not as so many data points, and should look to deepen and honour human capability, not to impoverish it.
Now that would be an ethical thing for organisations and governments to do.
In Hard Times, his novel that excoriates the less compassionate elements of Utilitarianism, Charles Dickens takes on the reduction of human beings to numbers in columns. He describes Thomas Gradgrind (not with approval!) as a man 'with a rule and a pair of scales, and the multiplication table always in his pocket, sir, ready to weigh and measure any parcel of human nature, and tell you exactly what it comes to.'
Let us not create Gradgrind AI.
Be clear about the work being done. Cross boundaries knowing that we are doing so, or at least do our very best to be aware. Attend to the emerging relationships over time between us and the ontology of AI, mindful of those Five A's (Advisory, Authority, Agency, Abdication and Accountability), in thousands of 'local' contexts around the world.
Do not head out on a mistaken quest for stability and certainty. Work with intelligence and compassion, strive for AI to augment human perception, judgement, creativity, empathy, wisdom, kindness and human community.
If we do this, then AI will be a blessing, not a curse. If we don't, it won't.
If we acknowledge with humility the possibilities of other forms of intelligence and agency we will be better ancestors and we will work with AI to help us to see and understand beauty, fragility and possibility in stewardship.
I will leave the final words to Marvin.
I'd give you advice, but you wouldn't listen. No one ever does.
And on a slightly more upbeat note for a brain the size of a planet.
The best conversation I had was over forty million years ago…. And that was with a coffee machine.