Jan 03

David Sharp


Machine learning: you be the judge (for now)


"Google AI beats doctors at breast cancer detection" is one of the headlines in the Wall Street Journal earlier this week, picked up by BBC News yesterday. 

Reporting on the outcome of a study by Google Health and Imperial College involving the analysis of x-rays of 29,000 women, both news outlets emphasise how effective artificial intelligence (AI) can be when given the task of processing a mass of image data of the kind it has been trained on to detect potential cancers. In fact, it outperformed individual radiologists and was deemed to be as effective as two radiologists working together.

This shouldn't be a surprise. In May 2019 the Telegraph reported on Google's claim that it could "spot lung cancer a year before doctors", based on a similar artificial intelligence exercise, this time analysing 45,856 de-identified chest CT screening cases from the US. In that case, when using a single CT scan for diagnosis, the AI algorithm from Google performed on a par with or better than the six radiologists, detecting five per cent more cancer cases than a radiologist working alone.

No one is suggesting from either of these studies that human experts such as radiologists don't play a vital part in clinical diagnosis, treatment or care. Quite the opposite: just as two radiologists are used to check each other's work, in the rare event of uncertainty or disagreement the opinion of yet another expert is sought; their combined expertise is rarely wrong.

And while the studies show that human experts can be fallible, they also highlight that the AI solutions in these cases don't get it right every single time either; they are just, on balance, less fallible than a human working alone.

Why is this relevant? There are two reasons. First, because here at International Workplace we have launched a digital learning service that uses AI to do a similar thing: to carry out tasks and process large swathes of information far more efficiently than a human being normally could - in this case, a learner, or perhaps their manager. Second, because I think this story neatly sets out the difference between the wider field of AI and the specific field of machine learning on which these Google initiatives are based.

Just as with processing x-ray images, our new digital learning service - Workplace DNA - aggregates the engagement data of all our learners into an anonymous pool of data, from which the algorithms we are building can discern patterns. This is machine learning, a tool trained to look for patterns according to criteria it's been told to focus on. For example, with Workplace DNA we use machine learning to identify patterns relating to people's job function and the resources they look at, or the time of day when they are most active and the average time they engage with the system. Machine learning can serve up recommendations to our learners and their managers, to save them time in searching or administering their learning. To a learner, that might simply look like us suggesting a topic that is highly relevant to them (because we know that other people like them are looking at it too). For their manager, it might look like us suggesting a particular learning path to save them from having to browse libraries, search competencies or enrol people on resources.
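To make the idea concrete, here is a minimal sketch of what a pattern-based recommendation of this kind could look like. It is purely illustrative: the data shapes, role names and function names below are assumptions, not a description of how Workplace DNA is actually implemented.

```python
# Illustrative sketch only: the records, roles and topics below are invented
# for demonstration and do not reflect Workplace DNA's real data or algorithms.
from collections import Counter

# Anonymised engagement records: (job_function, topic) pairs.
engagement = [
    ("facilities manager", "fire safety"),
    ("facilities manager", "legionella"),
    ("facilities manager", "fire safety"),
    ("hr adviser", "wellbeing"),
    ("hr adviser", "mental health"),
]

def recommend_topics(job_function, already_seen, n=3):
    """Suggest the topics most viewed by learners with the same job function,
    excluding anything this learner has already engaged with."""
    counts = Counter(topic for role, topic in engagement
                     if role == job_function and topic not in already_seen)
    return [topic for topic, _ in counts.most_common(n)]

print(recommend_topics("facilities manager", already_seen={"legionella"}))
# -> ['fire safety']
```

The point of the sketch is that nothing here is "intelligent": the system simply counts what similar learners have looked at and surfaces the most common result, which is exactly the kind of pattern-spotting machine learning is good at.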

In all these cases, what makes machine learning distinct from artificial intelligence is that machine learning is essentially dumb. That's not to say it's not very useful, just that it's not intelligent, in the sense that it can only work with the information it's been given already. As with analysing x-rays, it can look for patterns, but it can't teach itself anything new without further human intervention. Machine learning solutions may put people out of a job, but what they really do is displace jobs to allow humans to be more effective in the tasks that only a human can complete. In the Google Health / Imperial College study, machine learning can rip through x-rays faster and less fallibly than a single radiologist working on their own, but human professionals will still make the final decision on any potential cancers identified by the algorithm. It won't design new ways of identifying lung or breast cancer, and if cancers mutate to appear differently in x-rays it will only pick up on them if it's shown by a human what 'bad' looks like.

For now then, machine learning is likely to be a huge benefit to every aspect of society, including its application in our Workplace DNA digital learning service. We're going to be hearing more and more about exception reporting: training the algorithm to know when to give experienced humans just the right information to make the decisions themselves. That's going to make a lot of people's lives easier, improve the quality of the work they do and make business processes more efficient.
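As a rough illustration of exception reporting, the sketch below assumes a model that attaches a confidence score to each case and routes anything below a threshold to a human expert. The threshold, field names and scores are all invented for the example.

```python
# Toy sketch of exception reporting: only cases the model is unsure about are
# escalated to a human. The threshold and case data here are illustrative.
CONFIDENCE_THRESHOLD = 0.90

cases = [
    {"id": "scan-001", "prediction": "no finding", "confidence": 0.98},
    {"id": "scan-002", "prediction": "possible finding", "confidence": 0.62},
]

def triage(cases, threshold=CONFIDENCE_THRESHOLD):
    """Route low-confidence cases to a human expert; let the rest pass through."""
    exceptions = [c for c in cases if c["confidence"] < threshold]
    routine = [c for c in cases if c["confidence"] >= threshold]
    return routine, exceptions

routine, for_human_review = triage(cases)
print([c["id"] for c in for_human_review])  # -> ['scan-002']
```

In other words, the algorithm handles the routine volume and surfaces only the exceptions, leaving the judgement calls where they belong: with experienced humans.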

The wider application of AI is surely going to take us on a much wilder journey, however. Imagine the power of a system that can learn from itself. A system that doesn't necessarily need to bring exceptions to the attention of intelligent humans so they can make the final decision for it. One that learns from its mistakes. This promises a powerful and exciting future - not least for us here at International Workplace - but also brings with it some very big ethical questions.

Expect to read a lot more about human-centric AI in 2020, the guiding principles behind it, and how we legislate for it.