I meet Roger Bickerstaff, partner at law firm Bird & Bird, on the day that the Royal Society released its report into machine learning which warned that, while offering huge opportunities for the economy, Artificial Intelligence (AI) also brings challenges for society.
It is an issue that Bickerstaff is also considering. We had previously met at a breakfast debate on artificial intelligence, sponsored by Hotwire, where enthusiasts talked about a world in which technology would free humans to enjoy ‘thinking time’ and leisure pursuits, waving away tricky questions about salaries and unemployment. Only Bickerstaff seemed to strike a cautionary note.
The technology lawyer is, by his own admission, ‘slightly schizophrenic’ about AI. ‘I think there are huge benefits to be drawn from the use of all these developments. I am all in favour of the technology moving ahead as quickly as possible,’ he says. ‘My concern is over the economic and social disruption that comes as a result of that, and what we can do realistically and legitimately to make sure that we introduce these things in a way that is broadly beneficial for the majority of the population.’
He likens the current environment to the start of the industrial revolution, where most people worked 15 hours a day for the benefit of the five to ten per cent who owned the factories. ‘We could end up in a situation where the vast majority of the population is relatively worse off. We see that happening now. There has been a disconnect between productivity and wages. In the past, if you worked hard and productivity went up, you could expect to share in those benefits. With increased automation, what seems to be happening is that more people are being deskilled. [Their roles are] described as low grade work, which is paid the minimum wage. I think we have to be really careful of that,’ he says.
‘In my mind, there is a choice to be made. This stuff doesn’t have a pre-destined path. How we use various automation tools in society, whether that is artificial intelligence, is for us to decide. Yes, there is an economic imperative. You can see that with Uber; it comes in and pretty much undercuts all existing taxi firms, but the question is whether it should.’
Denmark certainly thinks it shouldn’t. Its government recently introduced new regulations that required taxi drivers to have fare meters and seat sensors after established firms complained that Uber did not comply with existing standards, and thus represented unfair competition. The new laws led Uber to close down operations in April after three years in the Danish market. Poland’s transport watchdog is also clamping down on Uber drivers, fining them for not possessing city-issued taxi licences.
‘My major worry is that, once you get the big corporations coming through, driving out the businesses that pay taxes, are we going to have the money to pay for public infrastructure, such as schools and the health service?’ says Bickerstaff. Could we have a situation where companies are taxed on the number of robots they employ, or on the extent to which AI runs their business, I ask. He doesn’t laugh, answering: ‘Once the existing economic approaches start to break down, we are going to have to start looking at alternative ways of approaching these things. I don’t know what that might be, but the sums don’t add up.’
The Finnish government is currently trialling a universal basic income policy, under which 2,000 unemployed people between the ages of 25 and 58 will receive a guaranteed monthly sum of €560 for two years, regardless of whether they find work. The move is both a response to the increased complexity of the welfare state and to a growing view that, in a world of robots, many of us will not work but will still need income to enjoy enforced lives of leisure. Enthusiasts for such a view include Silicon Valley’s Elon Musk, founder of Tesla. His company’s market valuation recently exceeded that of Ford, a 104-year-old business which produced nearly seven million cars last year against just 84,000 manufactured by Tesla.
‘The AI guys, as technologists, just look at the technology. They’re very excited about the advances and possibilities. I don’t think it’s their role to think about the consequences; it’s their role to think as fast and as well as they can,’ says Bickerstaff. ‘It is the role of economists and social commentators to think about the consequences of this technology, and then there is a role for lawyers to think about the legal framework which we may want to implement to put some controls around the introduction of AI.’
There are several different legal strands, but the one that currently concerns lawyers is the most immediate: what changes need to be made to the law to make AI possible? Driverless cars, for example, have prompted legal consultations around issues as diverse as criminal liability, insurance, cyber security and rules of the road.
Copyright law is also under review. As the Royal Society’s report makes clear, machine learning is developing at a rapid pace because of the growing availability of data – 90 per cent of the world’s data has been created within the past five years – and increased computer power.
‘There is an argument that when AI tools are interrogating data sets, the process of doing so is a copyright controlled activity because they are inevitably going to be making a copy. There is a debate in Europe about text and data mining (TDM), which sounds esoteric but has huge implications for how AI might work in the future,’ explains Bickerstaff. ‘The bigger AI companies are arguing that there should be an exception for commercial and non-commercial use, so that they can interrogate data sets without paying a licence fee to the people who put them together. The EU’s view focuses more on the content providers, the people who put these data sets together, and suggests that they should be entitled to some financial reward when these data sets are used in an AI context.’
He adds: ‘There are lots of discussions about how a legal framework needs to move on to remove the barriers to AI. In a sense, that is business as usual for technology lawyers. It’s what we did when the Internet came along, or cloud computing; it’s looking at how the law needs to change in order to facilitate the introduction of new technology. Where this is different, and is a new challenge for technology lawyers, is that the consequences of the introduction of this new technology may be so significant that we need to think about a legal framework that needs to put in some constraints over the economic and social consequences of AI. That is not something I have ever considered previously as a technology lawyer.’
Cynics have argued that lawyers are taking an unnecessarily probing view of AI because, for the first time, technology is having an impact on the legal profession – even the Royal Society highlights its ability to ‘speed up’ some legal processes and reduce costs. But Bickerstaff dismisses such a self-serving view. He believes that there are very few industries or professions, if any, that will be immune to AI.
‘There will always be work for humans working alongside machines. Nothing is going to be completely automated. But I think most jobs will use AI to give support to what they are doing. Within a very short space of time, you could see that doctors who do not use AI to support them in diagnosis could be accused of negligence. Machines will be able to review all the literature on an up-to-date basis better than any doctor would ever be capable of doing,’ he points out.
Some AI advocates argue that research and development will be immune because machine learning computers are incapable of making mistakes. Bickerstaff is sceptical of such claims, arguing that people simply do not know where AI can lead. He points to the Google DeepMind Challenge Match in March 2016, which was won by the computer programme AlphaGo.
The programme challenged a Korean champion in Go, a strategy board game invented in China more than 2,500 years ago, which is considered more complex than chess because it has more possible board positions than there are atoms in the universe. It had been thought that this vast branching network precluded computer success at the game. Yet less than a year after DeepMind’s systems had been teaching themselves arcade games such as Space Invaders, AlphaGo won four out of five games of Go against one of the world’s top players. Indeed, even Musk and Professor Stephen Hawking have recently cautioned that AI could soon be smarter than the smartest human beings.
‘I think the implications of AI are so profound that there needs to be a closer working relationship between economists, sociologists, futurologists and others, and the lawyers then come along and say “Let’s put some controls over AI”, which may be as simple as saying “Current employment laws don’t apply to firms like Uber”. It may not be radical from a legal perspective.’
But legislation could, he ponders, lead to regulatory arbitrage. For example, the EU may take a different view from the US, preferring to introduce legislation to protect employee rights against machine learning and automation. ‘You might see a difference in approach emerging over the next ten to 15 years as people start to see the consequences. And if the EU starts taking that approach, in terms of a competitive world economy, is that sustainable?’ he asks. ‘Is the EU strong enough to cope with things being more expensive in Europe? We could get into issues around protectionism, and tariffs on countries which use AI tools to produce goods.’
As Bickerstaff readily admits, he doesn’t have all the answers. But he is thinking about the questions because ‘people don’t seem to realise how quickly AI will be introduced, and it is going to have huge implications for society’. And with that, he’s off back to the office.