There is a school of thought that sees artificial intelligence as synonymous with the wholesale replacement of humans by machines. But the reality is not so black and white. After all, we humans are far more than computer code; we live, decide and even thrive on ambiguity and nuance.
Nowhere is this more true than in healthcare, where analytical data can only help improve treatment when it is consolidated with social, behavioural and contextual information unique to each individual. Only with this multidisciplinary knowledge is it possible for doctors and therapists to define the best course of action for their patients.
AI can bring many benefits to healthcare, including more accurate diagnoses and quicker access to treatment. Yet a recent study found that just 11% of respondents would trust an AI medical diagnosis as much as, or more than, a human doctor. At the end of the day, patients really value the human touch.
Lessons from chess
When Garry Kasparov lost his famous chess match against IBM’s Deep Blue in 1997, the world was aghast. What would happen to us as a species when the capabilities and intelligence of supercomputers exceeded our own? But once again, things are not quite as clear cut as they may seem. In fact, healthcare has much to learn from Kasparov’s famous reaction to the loss; he borrowed the old adage, “if you can’t beat them, join them”.
Kasparov decided to work with the computer instead of against it. He invented a new form of the game called ‘Advanced Chess’, where both a human and AI work together as a team. The brute force analysis of the computer system, coupled with the more strategic thinking of the human player, has taken the game to heights of skill never seen before, turning it into a popular sport around the world.
It’s fair to say that chess is an analytical game by definition, and so AI-only agents can still beat the hybrid kind. But what about a field such as healthcare, where the analytical component is only part of the solution? Healthcare sits at the intersection of human judgement and data-driven insight; ultimately, what matters most is outcomes for patients, and a combination of strong AI-led analysis with human context has the potential to deliver massive improvements.
Machine learning algorithms have improved scanning technology to the point where computers are now able to detect microscopic lesions or tumours on scans better than the human eye – but this on its own is not enough. Healthcare is a multidisciplinary practice, requiring social, behavioural and contextual information from the patient in order for the physician to make an informed decision about possible treatment.
AI in healthcare: limitations and possibilities
Let’s not forget that AI is a collection of different technologies, which together represent a real opportunity to improve efficiency in administrative and clinical healthcare practices. Take Moorfields Eye Hospital in London, for instance; the hospital has teamed up with Google DeepMind to greatly increase the detail at which they’re able to analyse retinal scans, opening up the potential for earlier diagnosis and treatment of otherwise debilitating conditions. Meanwhile, companies like BenevolentAI are using AI technology very differently, to rapidly accelerate the process of drug discovery.
At the other end of the scale, AI-driven chatbot technology is being used by Babylon Health to ease the burden on the UK’s National Health Service by offering basic diagnoses and appointment scheduling through smartphone apps. But Babylon is a case in point of AI’s weaknesses as well as its strengths; the app ran into trouble recently after it emerged that some users had been ‘overplaying’ their symptoms in order to get appointments faster. This cuts to a key weakness of AI in healthcare: while it’s great for analysis and research, there’s still a need for human input and evaluation of the data gathered.
AI and new technologies are set to change the doctor-patient relationship. Traditionally, this relationship has been one to one. But our ageing population is putting pressure on budgets to deliver the same high standards of care, and a purely one-to-one model is no longer sustainable.
By leveraging machine learning and modern communications technology, we can distribute physicians’ time more efficiently. Clinical teams can be augmented with AI technologies to automate more repetitive or non-critical tasks, such as supervising exercises, or analysing test results. Rather than leading to job losses, this would free up more time for doctors and therapists to focus on delivering the unique value they have to offer.
Our healthcare future is bright, and it doesn’t involve robot doctors. Instead, we’ll see more patients assigned to a single doctor or physician, with technology enabling that one-to-many relationship without compromising quality of care.
AI-powered, not AI
In the mid-nineteenth century, French neurologist Jean-Martin Charcot charted the future of neurology by putting the human at the centre of the decision process. He famously said: “Let someone say of a doctor that he really knows his physiology or anatomy, that he is dynamic – these are not real compliments; but if you say he is an observer, a man who knows how to see, this is perhaps the greatest compliment one can make.”
This sentiment has just as much relevance today as it did then. But the future of healthcare must be scalable, and to scale and extend clinical reach we need technology.
Ultimately, we need to augment clinical teams with the power of AI to automate non-critical tasks, leaving more time for doctors and therapists to focus on what they do best: the human touch and the multidisciplinary comprehension of the patient in all their complexity and depth.
This is why the future of healthcare is not AI; it’s AI-powered.