Plato and AI


Robbie Stamp has some well-timed reflections on artificial intelligence which, come to think of it, also relate to our March Create theme of fighting fake news. Much food for thought in this essay.


February 2017 Create: Prove You Pass the Turing Test

I'm interested in judgement in relation to human decision making, and increasingly in relation to Artificial Intelligence. One way of thinking about judgement is that it is necessary for those decisions where you do not and cannot know what to do. Judgement is in essence about what to do, or not do, now, to allow for what may happen in the future; or, in a wider context, the economist George Shackle's view of 'strategy as the imagined deemed possible'.
Judgement can also mean deciding what to do in a particular situation for the good. A key problem, identified as far back as Plato, however, is: whose good, and who decides?

The human decision is the 'act' that commits energy into a system. Eat the chocolate cake or the apple?

But there is something else that has existed as long as humans have thought, exercised judgement and made decisions, and that is accountability for what happens next. If poor judgement leads to a bad decision and a bad outcome, then somewhere, some people, maybe a lot of people, are going to feel pain of some kind. If an algorithm makes a decision and commits 'energy' into a system with bad results, it cannot suffer the consequences in the way humans do.

In this simple observation lies a significant challenge for organisations, and indeed for wider society, as we navigate rapid developments in AI and new kinds of 'decision making' entities become embedded in our daily lives. Machines are not accountable in the way that humans are, and we need to understand very carefully when, where and within what limits we 'grant' algorithms power and authority over us, and what kind of agency we want them to have.
What precisely do we task algorithms with? Are they support for decision making or decision makers in their own right, with real power and authority over humans? How can they be 'trustworthy'?

One event from last year perfectly encapsulates the need for clarity about the relationship between decisions that we allow algorithms to make and human judgement.

In September 2016, Facebook found itself at the centre of a media storm over its censorship of the famous photograph of a naked girl running down the street after a napalm attack during the Vietnam War. The image featured in an article by Norwegian journalist Tom Egeland, 'Seven photographs that changed the history of warfare'. Facebook's algorithm 'saw' a picture of a naked girl, judged it to be offensive, and Facebook automatically banned Egeland.

Other Facebook users who shared the post similarly found it was taken down and deleted. Following vociferous criticism and accusations that Facebook was abusing its power, the company backed down and reinstated the picture (and Egeland), but not before releasing a statement that read:

'While we recognise that this photo is iconic, it's difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.'

The essence of their argument, that it was 'difficult to create a distinction' between one picture of a naked child and another, is instructive. For a human editor it would be the work of a moment, an easy judgement and decision to make, not 'difficult' at all. But if you see the world through the perspective of 'your' algorithm, then yes, maybe it is still a hard problem. The algorithm clearly could not 'see' or 'read' context. Nevertheless the algorithm wielded, in the short term at least, considerable power over Egeland.

It had agency.
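To make the gap between the two kinds of 'seeing' concrete, here is a toy sketch in Python. Everything in it is hypothetical: the fields, the threshold and the function names are illustrations of the idea, not a description of Facebook's actual system.

from dataclasses import dataclass

@dataclass
class Post:
    has_child_nudity: bool          # what an image classifier might flag
    is_documentary: bool            # context a human editor would weigh
    historical_significance: float  # 0.0 to 1.0, e.g. an iconic war photograph

def context_blind_moderation(post: Post) -> str:
    """Acts only on the classifier's flag: no distinction between one picture and another."""
    if post.has_child_nudity:
        return "remove"
    return "allow"

def context_aware_moderation(post: Post) -> str:
    """Mimics a human editor: the flag is weighed against context."""
    if post.has_child_nudity:
        # The 'work of a moment' for a human editor; a hypothetical threshold here
        if post.is_documentary and post.historical_significance > 0.8:
            return "allow"
        return "remove"
    return "allow"

# The napalm-attack photograph as the two rules would see it
napalm_photo = Post(has_child_nudity=True, is_documentary=True,
                    historical_significance=0.95)
print(context_blind_moderation(napalm_photo))   # prints: remove
print(context_aware_moderation(napalm_photo))   # prints: allow

The same input, two opposite decisions; everything turns on whether the rule is allowed to 'read' context at all.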

Douglas Adams and I talked much about the 'liminal space' in Science Fiction. So much Sci-Fi, including of course Hitchhiker's itself, jumps to the time when the imagined tech and its imagined consequences are already in existence: Arthur C Clarke's famous observation that any sufficiently advanced technology is indistinguishable from magic. Douglas had become more and more interested in what a society goes through in that liminal space while a technology is being tested, the messiness and the switchbacks, as captured, in another context, by Tom Wolfe's The Right Stuff.

I've been pondering this in relation to the big ethical questions about AI, and to the 'ethical' decision making of AI itself. Since we have not cracked these issues as human beings, I wonder how we are going to resolve Plato's question. So I'd argue for a vigilant trust as AI becomes more and more embedded in our 'augmented' lives, and for asking, every time anybody deploys an AI: what agency are we granting this thing?
