By Jude Holmes
In a world consumed by daily infection rates and the hunt for disease-control solutions, technology has stepped in to save the day. Drones delivered supplies to the Scottish Isles, around half of all NHS medical appointments took place by phone (compared to 14% pre-lockdown), and over 750,000 people signed up to the UK’s responder app. However, the track-and-trace trial on the Isle of Wight is just one of many automated interventions to have received criticism worldwide, and with good cause.
AI encompasses a lofty definition: “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Turing’s computer kickstarted the new age of tech, itself influenced by Charles Babbage and the writings of Ada Lovelace. The giant leap from then to now is summed up nicely by the oft-stated fact that your calculator is more powerful than the technology used for the moon landing (on the metrics of memory and processing speed, that is; I wouldn’t recommend flying to the moon on your calculator). Beyond the magic tricks of targeted ads and Alexa’s voice recognition, what real value is AI bringing to the table?
And Betty When You Call Me, You Can Call Me AI
On the 31st of December 2019, the Canadian AI company BlueDot released a warning for the Wuhan region, days ahead of the official World Health Organisation press release. By scouring news reports across the globe for alerts relating to plant and animal diseases, BlueDot alerted its clients – public health officials and companies, informing governments, airlines, and hospitals – to a worldwide risk in record time. Those of us not on AI development teams may be surprised that this data isn’t mined from social media. Why are we not using that huge data resource, when we freely complain about our ailments to devoted Twitter followers?
Perhaps, like me, your experience with conversational robots was formed in childhood hours spent trying to get an MSN chatbot to respond hilariously to some inevitable misunderstanding. Under that premise, we could assume that phrases such as “plagued with work” would confuse BlueDot’s AI, right?
In 2016, Microsoft proudly unleashed its pet language-simulator project, Tay, on Twitter. Within 24 hours, Tay had covered pretty much every racist angle possible. I don’t think the issue there was grammar or lexicon, perhaps just the lack of a social filter. Or was the time researchers shut down two chatbots because they stopped using English in favour of a language of their own invention not dystopian enough for you?
For BlueDot, social media is deemed “too messy”. And while the algorithms do the heavy lifting, sifting through articles for data, they only find correlations between time, place, and disease. A team of scientists sits behind the algorithms, checking and double-checking for logical, causal links. Scientists are cautious with the results they publish, as a wrong diagnosis could cause unnecessary panic or even false hope – as demonstrated by the failure of Google Flu Trends, launched in 2008, to reliably predict real flu outbreaks.
Paging Doctor AI
While we continue searching for a vaccine, Doctor AI is being drafted into the pandemic. You may have read about promising X-ray diagnostics in America, or Israel trialling voice-recognition technology to diagnose and monitor potential carriers by the sound of their coughs. However, early development has thrown up a few problems. Teaching AI to read an X-ray requires many examples of how X-rays should and shouldn’t look. Bias in medicine disproportionately affects women, especially women of colour, and in AI solutions it starts at this level. A training database must be representative of every potential patient, not just the few COVID cases in San Diego who could afford an X-ray at a hospital with attached researchers who happened to be collecting data.
We could ask hospitals worldwide to send over patient data, if you don’t mind your data being handled in a country with different protection laws. Or we could target marginalised communities – but sending droves of potentially infectious patients to the X-ray machine seems an unnecessary risk. Scientific results do undergo rigorous peer review, and multiple hospitals will conduct research to confirm or contest the success of a report, but this takes time and sometimes doesn’t expose bias until a meta-analysis covering years of research is completed. Even when AI becomes the answer, public opinion matters, and 87% of Americans trust human doctors over AI. This could be detrimental if, in a future and more contagious pandemic, robots really are the first port of call.
AI’ll Be Back
AI is only improving as we give it more test runs, so maybe the real limiting factor is the human one. Handing over to the robots is not a bad thing, says Daniel Susskind, AI expert and author of A World Without Work. In his TED talk, Susskind discusses the myths surrounding AI and why we might not be the best machines for the job. The main concern is a fair sharing of wealth: AI has undeniably supported business growth, but as the pie gets bigger, the group taking a slice gets smaller, requiring a more secure income system for the unemployed masses; furlough, perhaps, was a trial run of sorts.
Most of the issues around AI are “human” limitations, and often data issues at that: data we restrict, data we never collect, or data we collect in skewed ways. It seems a given that AI is sticking around for the long term, and with the pandemic we’ve seen a glimpse into the complex future of automated thinking machines. Our current inability to trust AI is acceptable only if it is grounded in the reality of data bias and genuine limitations. Perhaps our final frontier isn’t an inability to problem-solve and code sufficiently intelligent machines, but our egos and emotions – a limitation in the most human sense.
Jude Holmes is a staff writer at The International. Find her here on LinkedIn.