Will machines eventually make decisions more effectively than human beings can today? Or are the radical uncertainties created by the world’s complexity precisely the reason that mechanised thinking is inadequate on its own? Moreover, in the aftermath of the Covid-19 pandemic crisis, will we humans need to embrace artificial intelligence (AI) like never before? In this article, Dr Robert Elliott Smith, a leading expert on AI and author of the 2019 book ‘Rage Inside the Machine: The Prejudice of Algorithms and How to Stop the Internet Making Bigots of Us All’, provides his unique insight into the predictive strengths and weaknesses of AI in the face of a global public health crisis.

Robert Elliott Smith, PhD, FRSA

CTO of BOXARR Ltd and a Senior Research Fellow of Computer Science
University College London
If you let the markets run everything, including the provision of information to human beings, then you're going to get some pretty ugly effects

You’ve said we might be heading towards a bit of a “tech lash”. What do you mean?

Well, before Covid-19, I expect there was an article a day in a major UK newspaper reporting on an algorithm found to be doing something distasteful. Algorithms are rigid and quite simple in the way that they analyse data, and I think people misunderstand their capabilities. They're very capable with regard to scale and speed, but not very capable with things that involve human subtleties. And they often fail in ways that make them look a bit suspicious.

For example, when you’re applying for a passport, you have to upload your photo. It is analysed immediately and you get a response telling you whether the photo conforms to the passport photo regulations.

Now, in the UK, there was the case of a dark-skinned gentleman of African descent who uploaded his photo, and the system consistently told him that his mouth was open. The technology was reading his lower lip as an open mouth, and it would not allow him to submit the photo. So he had to go through the rigmarole of dealing with the authorities to get around the algorithm.

Similarly, in Australia, there was an incident where a man of Asian descent was also applying for his passport. In this case, the algorithm would consistently tell him his eyes were closed.

It’s the worst sort of stereotypical thinking and it’s largely down to the data, because if you have large-scale statistical processes, which is really what underlies facial recognition, then errors will always concentrate on minorities in the sample. Effectively, you get an implicit discrimination against minorities because of the simple-mindedness of the algorithm.
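As a rough illustration of how errors can concentrate on a minority group, here is a toy Python simulation: the 'mouth open' framing, the group sizes and all the numbers are invented purely for illustration, and this is not code from any actual passport system. A single classifier is fitted to pooled data in which 5% of examples come from a group whose features are distributed slightly differently, and the error rate is then measured per group.

```python
# Toy illustration: a single model fit to pooled data minimises *overall*
# error, so its mistakes concentrate on the smaller group whose data looks
# slightly different. All values here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'image feature' for one group; `shift` models a systematic
    difference in how that group's faces appear to the feature extractor."""
    y = rng.integers(0, 2, size=n)                  # true label: mouth open?
    x = y + shift + rng.normal(0, 0.4, size=n)      # observed feature
    return x.reshape(-1, 1), y

X_maj, y_maj = make_group(9500, shift=0.0)   # 95% of the training data
X_min, y_min = make_group(500,  shift=1.0)   # 5% minority group

model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

for name, Xg, yg in [("majority", X_maj, y_maj), ("minority", X_min, y_min)]:
    print(f"{name} error rate: {np.mean(model.predict(Xg) != yg):.1%}")
# The minority error rate typically comes out several times the majority's,
# even though nothing in the code singles that group out.
```

Nothing in the sketch is prejudiced on purpose; the skew falls out of minimising average error over an unbalanced sample, which is the point being made here.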

Unfortunately, people think that algorithms do things that they don't really do, largely due to the hype created around AI in the media. In some cases, the newspaper stories are blatantly misleading about the capabilities of AI. So, people are being promised a lot of things that AI can't yet do.

Moreover, while AI might never be able to understand human subtleties, it is applied today in very intimate areas of people's lives. I believe those two things are going to add up to some pretty substantial dissatisfaction and a potential tech lash.

Of course, it's very normal for there to be this hype cycle. As an investor, you might have observed the hype cycle and tried to invest in the technology at the right point in the cycle. But people are investing their lives in these technologies today and just don’t realise how much of it is overhyped.

Where would you say we currently sit on the AI hype cycle?

What's absolutely fascinating about AI is that its definition shifts from year to year. If you look over any ten-year period, whatever technology is being described as AI at the beginning of the decade is not the same technology being called AI by the end of it.
 
What happens most of the time is that something called AI attracts a lot of hype. The hype dissolves and then it turns into an engineering tool, albeit a very useful engineering tool, and it's never called AI again.
 
Take object-oriented programming. At one time it was thought of as AI technology. Today, it's just a normal part of what a programmer learns to do. Indeed, a lot of engineering technology was once a part of AI, but then the hype dissolved and it became just engineering. 
 
So the pattern of hype in AI is sinusoidal because AI doesn't really mean a specific thing. And it doesn't go through a single hype cycle, it goes through hype cycles over and over again.
 
I think right now we're probably well over the edge of the AI hype cycle as we described it maybe two or three years ago, when people were saying deep learning was going to be able to do everything in the world – that general artificial intelligence would basically be solved by deep learning.
 
If you look at the academic literature, some very critical literature about deep learning started to emerge a couple of years ago. It's really powerful, but I think it's well over the top of the hype cycle now.

Given the potential for huge job losses in the post-Covid-19 world, is there a risk that people might start blaming technology for their problems?

Yeah, absolutely. We’ve all read reports over the years about how robots are going to take all our jobs. But I have a somewhat ambivalent position on this.

In 2016, job displacement by AI was the big topic at the World Economic Forum in Davos. Everyone was reading a paper that said machines could do about a third of human jobs in the near future, and perhaps two thirds in the more distant future. Now, that paper’s methodology is so horribly flawed that I don’t know where to begin. It's a wonderful illustration of how data analysis is full of implicit biases.

So, I strongly believe that machines are not ready to do lots of human jobs. However, I do think there will be lots of human jobs taken by machines. And what's likely to happen is that, in lots of jobs, there will effectively be two tiers of service. There'll be people who are served by machines because they can't afford to be served by people. Then there will be people who have the resources to be served by people.

Take customer services. Shortly after I moved to the UK in 1997, I had lots of trouble with utilities and I had some computers break. I had to call customer services a lot at various institutions and invariably customer services was a human being on the end of the phone who could actually answer questions. Within just two or three years, reaching a human being for customer service became hard. And now everyone knows it.

And that's not because the systems that offer customer service now work better. They don't work better for the customer, they work better for the institution. And I think that we're headed towards that quite a lot. So I think that AI will take a lot of jobs and provide inferior service, but it will be adequate for people who can't afford to get human service.

Robots can't catch coronavirus. But are we overestimating AI and underestimating the people who can save us from this pandemic – the health workers who will never be replaced by machines outright?

One of the things that Covid-19 is doing is making us all crave human contact more, and appreciate non-skilled workers more. So maybe this crisis will shift perspectives and force people to appreciate the role of humans in vital services more than anything else has in years.

I read recently about this cleaning robot, a bit like a Roomba, with a huge UV-C light emitter sitting on top of it, which can go into a hospital room and kill the majority of germs. Now, those of us who have had a Roomba know that if you have a tidy flat where the floor surface doesn't change much, and you haven't got junk lying around the floor, everything is copacetic. It does a pretty decent job.

However, if you live in what I call a human fashion – I'm in my bedroom right now, I'm talking to you because that's where my little office is, and my wife's slippers are on the floor, and there are socks and a pillow there too – the Roomba is not going to cope with all of those things. It's just going to work around them; it can't get into little angular corners.

So, in a complicated human environment where you've got complicated things going on, I'd be too afraid that coronavirus might be hiding underneath the pillow, and a cleaning robot simply isn't going to address that. Are we really going to trust a robot to do well enough when it's a matter of life and death? Or would we rather be sending a human nurse in there to make sure everything's clean and tidy?

Of course, I have great doubts about human beings too. But what I know about technology is that it deals with certain complexities, like different surfaces and shapes, less effectively than an articulated human being can. So, even though I have trust issues with both of them, I know that we can't build great portable robots with articulated hands yet, so I'd rather have an able and conscientious human being doing a job like that.

Would you agree that the future is likely to be defined by ‘man with machine versus man without machine’?

Absolutely. I think that the future lies in a cooperative relationship between machines and human beings – and more importantly, in the realisation of the differences between human intelligence and machine intelligence.

I mean, human intelligence is defined by its understanding of subtleties, whereas machine intelligence has incredible accuracy, speed and scale, but it’s rubbish at subtlety. Understanding that distinction between the two so that you can make better cooperative entities – better man-and-machine relationships – is really key.

The hype around AI distracts us from making that distinction. We have been told for a century or two that human thinking is best when it's purely rational – that when we're doing our best thinking, we're logical engines, like a machine. That's the perfectly rational agent in economics.

In reality, human beings deal with an outstandingly uncertain world – a world whose primary characteristic is uncertainty that cannot be quantified. We use our primitive feelings – the feelings one has in one's gut – to invest emotion in our choices, so that we can make decisions in the face of high uncertainty.

Will we invent a machine that thinks the way that a human thinks? I think that we could, but we're very complex, highly-adapted organisms. And to some extent, making something that does those kinds of things as well as us socially, emotionally and psychologically may be just replicating us, which, in terms of being socially useful, is kind of redundant.

You seem rather pessimistic about AI’s propensity for good. Is it because human beings design algorithms, and we humans ultimately have innate biases?

It's interesting, the way that the term ‘bias’ is used in statistical analysis or AI. To analyse any large group of data you have to put in biases, because otherwise all hypotheses are equal. So, effectively, all decision-making is biased in a way. It's just the nature of compressing a large amount of data down to a smaller decision. You've got to make some assumptions about what's important – which means being prejudiced towards one thing or another.

So all this mechanical decision-making is biased in that first sense. Oftentimes, if you're designing an algorithm and you've got to come up with a representation of the problem, you're going to have to cut corners in a way. You're saying, OK, within this space of data, things that are near each other in the Euclidean sense are similar. That's inducing a bias.
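To make that concrete, here is a hypothetical sketch – the records and units are invented – of how calling two data points 'similar' because they are close in Euclidean distance is itself an assumption: change the representation (say, express age in months instead of years) and which records count as neighbours changes, even though no new information has been added.

```python
# Hypothetical sketch: Euclidean "similarity" depends on representation choices.
import numpy as np

# Toy records: [income in £ thousands, age in years]
a = np.array([30.0, 25.0])
b = np.array([32.0, 60.0])   # similar income, very different age
c = np.array([80.0, 27.0])   # very different income, similar age

def nearest(query, candidates, names):
    """Return the name of the candidate closest to `query` in Euclidean distance."""
    distances = [np.linalg.norm(query - x) for x in candidates]
    return names[int(np.argmin(distances))]

print("Age in years:  nearest to a is", nearest(a, [b, c], ["b", "c"]))   # -> b

# Re-express age in months -- a representation choice, not new data -- and
# the induced notion of "similar" flips.
scale = np.array([1.0, 12.0])
print("Age in months: nearest to a is",
      nearest(a * scale, [b * scale, c * scale], ["b", "c"]))             # -> c
```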

And when you do that, and you’ve got data from the real world that we live in, oftentimes those biases will align with what we think of as traditional biases. That happens because our data-gathering techniques are based on decisions that we make. So, effectively, these things collaborate with one another and reinforce traditional biases.

Mechanical decision-making is therefore inherently biased. Now, we can do things with technology to try to counter it, like setting the goal of providing people with diverse information instead of purely personalized information.
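As a sketch of what 'diverse instead of purely personalized' could mean in practice – the scores, topics and weighting below are all invented, and this is not how any real platform's feed works – a ranker can trade predicted engagement off against how much an item repeats what has already been selected:

```python
# Hypothetical sketch of diversity-aware ranking: instead of sorting purely by
# predicted engagement, penalise items whose topic has already been selected.
def diversified_ranking(items, k, trade_off=0.5):
    """items: list of (name, engagement_score, topic); returns k picks."""
    selected, pool = [], list(items)
    while pool and len(selected) < k:
        def value(item):
            _, score, topic = item
            # Penalty grows with how many already-selected items share the topic.
            overlap = sum(1 for _, _, t in selected if t == topic)
            return trade_off * score - (1 - trade_off) * overlap
        best = max(pool, key=value)
        selected.append(best)
        pool.remove(best)
    return selected

feed = [("post A", 0.95, "politics"), ("post B", 0.93, "politics"),
        ("post C", 0.90, "politics"), ("post D", 0.70, "science"),
        ("post E", 0.65, "sport")]

print("Engagement only:", [n for n, _, _ in sorted(feed, key=lambda x: -x[1])[:3]])
print("With diversity: ", [n for n, _, _ in diversified_ranking(feed, k=3)])
# Engagement only: ['post A', 'post B', 'post C']   (one topic dominates)
# With diversity:  ['post A', 'post D', 'post E']
```

The trade-off parameter is the design choice: at one extreme you get the pure engagement ranking, at the other a feed that values variety over predicted clicks.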

Facebook, for instance, rather than being a regulated media entity, is a profit-making corporation. Advertisers pay Facebook for demographics so they can target audiences with effective messages. Its entire algorithmic infrastructure is built around that. And when you do that, you're basically creating opportunities for influencing people in biased ways, because the algorithms implicitly induce these kinds of biases.

The Facebooks of this world need to serve other goals besides the motive of simply having good ad demographics – they should serve goals like the public interest too. If you look at what we used to do in media before the abolition of the fairness doctrine, we tried to make media provide fair and balanced coverage.

Unfortunately, we gave up on that in favour of the idea that the free market of information will take care of everything. And that’s put us on course with where we are with large entities like Facebook. They have algorithms with implicit biases and that allows for political manipulation of the people.

Effectively, what needs to happen is that we realise that entities like Facebook and Twitter are media companies, and then we regulate them like media companies. That regulation is positive – it’s for the social good – as opposed to a purely neoliberal approach that basically says the free market will take care of it all. Because clearly it hasn't. We've run that experiment.

American media, since the abolition of the fairness doctrine, has become nothing but more polarized and less equitably informative. If you let the markets run everything, including the provision of information to human beings, then you're going to get some pretty ugly effects.