Here at The Human OS, we are slightly obsessed with matchups between artificial intelligence and doctors.
In many experiments (though not yet in many clinics), AI systems are showing great promise in diagnosing diseases, analyzing medical images, and predicting health outcomes. They’ve even performed better than human doctors in certain tasks like surgical stitching and diagnosing autism in infants.
Now, in the latest win for AI medicine, researchers at the University of Nottingham in the UK created a system that scanned patients’ routine medical data and predicted which of them would have heart attacks or strokes within 10 years. When compared to the standard method of prediction, the AI system correctly predicted the fates of 355 more patients.
Predicting these cardiovascular events is a notoriously difficult task. In a recent paper published in the journal PLOS ONE, the researchers note that about half of all heart attacks and strokes occur in people who haven’t been flagged as “at risk.”
Currently, the standard way of assessing a patient’s risk relies on guidelines developed by the American Heart Association and American College of Cardiology. Doctors use these guidelines, which focus on well-established risk factors such as high blood pressure, cholesterol, age, smoking, and diabetes, to shape the counseling and treatment they offer their patients.
To make a system that could do better, researcher Stephen Weng and his colleagues tested several different machine learning tools on medical records from 378,256 patients across the UK. These records tracked the patients and their health outcomes from 2005 to 2015, and contained information on demographics, medical conditions, prescription drugs, hospital visits, lab results, and more.
The researchers took 75 percent of the medical records and fed them into their machine learning models, which set out to find the distinguishing characteristics of those patients who experienced heart attacks or strokes within the 10-year span. Then Weng’s group tested the models on the other 25 percent of the records to see how accurately they’d predict heart attacks and strokes. They also tested the standard guidelines on that subset of records.
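For readers who want to see the mechanics, here is a minimal sketch of that 75/25 holdout evaluation in Python with scikit-learn. The file name, column names, and model settings are illustrative assumptions rather than the study’s actual pipeline; the article itself names only the neural network among the models tested.

```python
# Minimal sketch of a 75/25 holdout evaluation, assuming a numeric table of
# patient features plus a binary label marking a heart attack or stroke
# within 10 years. All names here are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

records = pd.read_csv("patient_records.csv")    # hypothetical extract of routine records
X = records.drop(columns=["cvd_event_10yr"])    # demographics, conditions, labs, etc.
y = records["cvd_event_10yr"]                   # 1 = heart attack or stroke within 10 years

# Hold out 25 percent of patients for testing, mirroring the study design.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Fit several model families on the training portion. (Feature scaling and
# hyperparameter tuning are omitted for brevity.)
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=500),
    "gradient boosting": GradientBoostingClassifier(),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
}
for name, model in models.items():
    model.fit(X_train, y_train)
```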
Using a statistic known as the area under the ROC curve, in which a score of 1.0 signifies perfect prediction and 0.5 is no better than chance, the standard guidelines got a score of 0.728. The machine learning models ranged from 0.745 to 0.764, with the best score coming from a neural network.
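Continuing the sketch above, that score can be computed on the held-out records with a single library call; the variable names carry over from the hypothetical example after the previous paragraph.

```python
from sklearn.metrics import roc_auc_score

# Score each fitted model on the held-out 25 percent of records. An AUC of
# 1.0 means every true case is ranked above every non-case; 0.5 is chance.
for name, model in models.items():
    risk_scores = model.predict_proba(X_test)[:, 1]   # predicted event probability
    print(f"{name}: AUC = {roc_auc_score(y_test, risk_scores):.3f}")
```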
While the machine scores may not sound like a resounding triumph, translating them into human terms makes the significance clear: The neural network correctly identified 4,998 of the 7,404 patients who went on to have a heart attack or stroke, 355 more than the standard method caught. With those predictions in hand, doctors could have taken preventive measures such as prescribing cholesterol-lowering drugs.
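As a quick back-of-the-envelope check using only the numbers quoted here, those counts translate into the fraction of true cases each approach flagged:

```python
# Back-of-the-envelope check using only the figures quoted in this article.
actual_cases = 7404
caught_by_neural_net = 4998
caught_by_guidelines = caught_by_neural_net - 355      # 4,643

print(f"neural network:      {caught_by_neural_net / actual_cases:.1%}")    # ~67.5%
print(f"standard guidelines: {caught_by_guidelines / actual_cases:.1%}")    # ~62.7%
```

In other words, even the best model still missed roughly a third of the people who went on to have an event, a reminder of just how hard this prediction problem is.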
Weng says the AI medical tools being tested in labs today will soon boost clinicians’ accuracy in both diagnosis and prognosis. “The leap from research studies to applications in clinical care will happen over the next five years,” he says.
What might that look like in practice? Weng pictures busy primary care doctors using AI tools that have been trained to recognize patterns. “Then the algorithm can look through the entire patient list, flag this up, and bring this to the attention of the doctor,” he says. “This could be done with the patient sitting in front of them during a routine appointment, or in a systematic screen of the entire list.” While Weng notes that similar clinical decision support software already exists, he says those systems don’t make use of AI pattern recognition, which could provide far more accurate results.
Before AI comes to your doctor’s office, however, the technology will have to get past major regulatory hurdles. “The key barrier to implementation will be managing privacy and patient confidentiality issues, with computer algorithms trawling through vast amounts of patient data which contain confidential and sensitive medical information,” Weng says.
In addition to coping with those privacy concerns, any AI technology will have to deal with regulators’ wariness of medical machines that make their own decisions. With all that red tape looming, one wonders: What would a machine learning tool predict about its own chances of gaining approval?