
Google’s medical AI was super accurate in a lab. Real life was a different story.

There is no doubt that Artificial Intelligence (AI) has a wide range of critical applications and benefits. The challenge, however, is moving from good results in a controlled environment to AI that works in the messy reality of human environments.

The Covid pandemic has stretched medical teams around the world to breaking point, adding a huge influx of work for already overworked health professionals.

The hope was that AI could speed up patient screening and thus ease the strain on clinical staff. But a recent study from Google Health, which is the first of its kind to look at the impact of AI in real medical settings, reveals that this is not as easy as it seems.

The findings were that even the most accurate AI models can perform badly, slow down screening and lead to poor decisions if they are not heavily customised and trained for the actual human environment in which they will be deployed.

Google Health set out to see if it could help medical teams in Thailand meet the target of screening 60% of the population with diabetes for diabetic retinopathy, which can cause blindness if not caught early.

Currently, nurses take photos of patients’ eyes in a clinic and then send them to a specialist to be assessed, a process which can take up to 10 weeks. The AI developed by Google Health can identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy and give a result in less than 10 minutes. 

While this sounds impressive, the results in the real world were not as good as this. 

Like most image recognition systems, the model had been trained on high-quality scans to ensure accuracy, which meant it rejected images of poor quality. In the chaotic environment of the clinics, nurses had to scan dozens of patients per hour and often took rushed photos in poor lighting.

This led to the AI model rejecting over 20% of the images as too low quality.
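The quality-gate behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not Google Health's actual system: the function name, threshold value and quality scores are all assumptions, chosen only to show how a strict lab-tuned quality cut-off translates into a high rejection rate on rushed clinic photos.

```python
def grade_scan(quality_score: float, threshold: float = 0.8) -> str:
    """Return a screening decision, or reject the scan outright.

    quality_score: assumed 0.0-1.0 quality estimate for the eye scan.
    threshold: minimum quality the model will accept (assumed value).
    """
    if quality_score < threshold:
        return "rejected: image quality too low for a reliable grade"
    return "graded"

# A strict threshold tuned on clean lab imagery rejects many of the
# rushed, poorly lit photos taken in a busy clinic (scores illustrative):
clinic_scans = [0.95, 0.60, 0.85, 0.70, 0.90]
rejected = sum(1 for q in clinic_scans if grade_scan(q).startswith("rejected"))
print(f"{rejected / len(clinic_scans):.0%} of scans rejected")  # → 40%
```

The design choice the study highlights is exactly this threshold: set it for lab conditions and the model protects its accuracy by refusing to answer, pushing the workload back onto the nurses.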

Uploading high-quality imagery to the cloud was also challenging, given the slow internet connectivity in the hospitals.

This is the promise of AI: when it works, it can be a huge advantage to those working on the front line.

But it is a stark lesson that we must design for the messiness and chaos of the real world and not the laboratory. This is where user research can help us keep the needs of the user, and their reality, at the forefront of AI model development.

If AI is really going to make a difference to patients, we need to know how it works when real humans get their hands on it, in real situations.

Tags

ai, artificial intelligence, medical innovation