AI public services: I think therefore I am? (I)

19 October 2016

“Can machines think?” That tantalising question, first posed 66 years ago by Alan Turing, was raised again by the Science and Technology Committee last week. If so, how can we create a society in which this artificial brain power is exploited to benefit all?

The answer to Turing’s question is, in short: not quite yet. But it is only a matter of time before they can. This means, the Select Committee argued, that we should be ready for the disruption this will bring – to jobs, skills and interactions with services. That is as true of public services as it is of private ones – and government will need to play a key role in exploiting this technology. To explore this theme, Reform will publish a series of blogs looking at artificial intelligence (AI) in public services: how today’s technology can be used, how it will evolve and how to meet the challenges it will create.

To start, it is worth spelling out what AI is. Though the term has been around since 1956, there is no agreed definition because of its wide application. John McCarthy’s original definition – “the science and engineering of making intelligent machines, especially intelligent computer programs” – serves as a useful starting point. This broad definition covers systems that ‘think’ in similar ways to humans, as well as those that achieve the desired outcomes without thinking like humans (and everything in between). Likewise, ‘machine learning’ – in which algorithms learn concepts autonomously by analysing data – is a subset of AI, but not integral to it.

The Select Committee narrowly defined AI as “intelligent software that specialises in a single area or task.” This speaks to the current state of the technology, which is able to perform discrete tasks well. IBM’s Watson computer can, for example, diagnose lung cancer more accurately than humans, with a success rate of 90 per cent compared with humans’ 50 per cent. To do so, it collects medical information from books and journals, as a student would, and applies it to individual cases. In Chicago, police have used face-recognition software to identify and convict criminals caught on CCTV. Enfield Council in London recently purchased AI software to help residents find information and fill in forms.

While Enfield Council’s use of AI should be praised, its software pales in comparison to Watson’s ability to read 40 million documents in 15 seconds – revealing the growing gap between the public and private sectors’ use of AI. According to Siemens, the global AI market is growing at 20 per cent a year. Policymakers are failing to keep up with this growth. The Government’s Digital Strategy, due to be published in January, has still not appeared, meaning there is currently no strategy for making the most of AI. That strategy should focus both on how AI can improve productivity and growth in the private sector and on where public services would benefit from this disruptive technology.

Turing dismissed his question as too abstract, and instead proposed to test whether machines could act indistinguishably from humans. In a world in which AI is quickly succeeding at this ‘imitation game’, public-sector leaders need to recognise the opportunities (and challenges) of rapid change. How to do this in a way that benefits all will be explored in further blogs in this series.

Alexander Hitchcock, Researcher, Reform

Comments

Pravin Jeyaraj

05 November, 2016

Will AI devices be indistinguishable from human beings in the way they think, to the extent that human beings are made redundant? That is the crux of the issue.