Myles Foley summarises Brian Cox’s “You and AI” panel discussion
What is the role of AI in society – or, rather, what will become the role of AI in society? That was the overarching theme of “You and AI”, an evening of debate and discussion hosted by the ever quick-witted Brian Cox with panellists Dr Vivienne Ming, Professor Peter Donnelly and Dr Suchi Saria.
First up for consideration was the definition of AI, on which the panel converged with relative ease: AI is an autonomous system capable of solving problems that have no single right answer, which can then be applied to make human life faster, easier and cheaper. Yet when most people think of AI, they seem to imagine some “Terminator”-style system capable of making decisions as humans do. In reality, most of the systems in use today tackle extremely specific tasks, such as determining whether a picture contains a giraffe.
Does that mean there is arguably no “general AI” capable of solving multiple complex problems? Think of the system that finds a giraffe in a picture – does it actually know what a giraffe is? No; it doesn’t even have a “giraffe concept”, and in many cases where it is wrong, it has picked out something that looks nothing like a giraffe. Reaching a system that could be called a “general AI” may also require a shift in the underlying technology. The current standard is the “neural network”, loosely modelled on neurons and their connections in the brain – but even this is only used to solve specific tasks.
AI has the potential to revolutionise workplace infrastructure. Notably, it could dramatically reduce the number of white-collar or office-based jobs by replacing those roles with an AI system that can perform the same work faster and potentially with greater accuracy. The panel suggested that – contrary to popular opinion – the last jobs standing wouldn’t be those of the everyday programmer who builds websites, but the more creative jobs: the artists, the designers, and those roles that explore the unknown. Our current AI systems are not capable of such work; they are built to tackle situations that are already known rather than to create the unimagined.
The potential applications of AI in medicine specifically are huge, but they raise a number of questions, especially around AI-led diagnosis. Suppose an AI system is used to diagnose a set of symptoms – should it then have to explain how it reached its decision? How important is it that we truly understand why such a decision was reached? What would the public prefer? What if a doctor making the same judgement got it right 90% of the time, but the AI system got it right 97% of the time? You would presumably choose the AI system as your clinician, since it gets the diagnosis right more often, even though it cannot explain how it reached that diagnosis. Furthermore, would patients accept explanations from an unconscious robotic system, or would a living, breathing doctor be needed to explain the results and add the human touch so seemingly valued in today’s healthcare?
The evening concluded with a request for the panel to estimate how far away we, as a society, are from a truly generalised AI system – one which can, to all intents and purposes, “think” like a human. Unlike the first question, which produced a general consensus among all three panellists, this one left the panel very much divided. We’ll just have to wait and see…
“You and AI” was hosted by Brian Cox on 11th December at the Barbican Hall