AI, big surveillance and robot ethics in healthcare

Artificial intelligence (AI)-powered robots are being introduced for automation in healthcare and, on the street, as facial recognition software. However, AI robots can only be as smart and democratic as the data we feed into their algorithms and the people who use them. AI can also amplify the historical and social injustices embedded in data. For AI and robot ethics, let’s not forget that it is us – humans with all our prejudices – and not aliens from space, who are designing and deploying AI robots in healthcare and society.

VURAL ÖZDEMİR / TORONTO

The term “artificial intelligence” (AI) was first used in 1956 at a conference at Dartmouth College in the US. Present-day AI applications are largely based on machine learning, which “learns” directly from data through a training period of trial-and-error loops on the order of thousands to millions of iterations. AI thus depends on “Big Data” in doctors’ offices, hospitals, and electronic health records, not to mention the social media data generated by patients.
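For readers who wonder what such a “training period” looks like in practice, the sketch below is a toy illustration, not drawn from any real medical system: a model repeatedly guesses, measures its error, and corrects itself over thousands of loops. All numbers and variable names are assumptions for demonstration.

```python
import random

# Toy "Big Data": 1,000 observations generated from y = 2x + 1 plus noise.
data = [(x, 2 * x + 1 + random.gauss(0, 0.1))
        for x in (random.uniform(-1, 1) for _ in range(1000))]

w, b = 0.0, 0.0   # model parameters, initially "ignorant"
lr = 0.05         # learning rate: the size of each corrective step

for step in range(10_000):       # the trial-and-error training loop
    x, y = random.choice(data)   # look at one observation
    error = (w * x + b) - y      # how wrong is the current guess?
    w -= lr * error * x          # nudge the parameters to shrink the error
    b -= lr * error

print(f"learned w={w:.2f}, b={b:.2f} (true values: 2 and 1)")
```

The point of the sketch is that the model has no understanding of its own; it simply absorbs whatever regularities, good or bad, the data contain.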
On the other hand, the human brain is not always able to rapidly process Big Data, for example, to unravel the early signs of infection, cancer or dementia. But AI-powered robots can do precisely that.
Medical imaging and radiology, which are data-intensive and depend on pattern recognition, are among the early adopters of AI. With its efficiency in data processing, AI sets the stage for automation in healthcare.

AI and big surveillance 
The advanced AI tools used for pattern recognition in medicine can also be deployed as facial recognition technologies, creating fertile ground for ‘Big Surveillance’ and thus threatening global democracy and civil rights.
Dublin and several other cities around the globe take pride in moving towards becoming smart cities, powered by new technologies such as AI (https://smartdublin.ie/about/). Smart cities claim to integrate data from all living and inanimate objects so as to improve services such as garbage collection, traffic management and commerce.
Yet the label ‘smart integrated city’ is problematic because it preemptively closes down debate and critique of AI and emerging technologies. Integration might mean efficiency in city governance, but also control of citizens and the curtailment of civil liberties.

Can AI robots be racist or sexist?
The idea that AI-powered robots will be bias-free is contested by recent case studies of racist robots that discriminate against minority groups in the US (https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses). The data we feed into AI robots, to the extent that such data are curated by human actors, often reflect existing social injustices. AI-powered robots can only be as smart, ethical and democratic as the data they are trained on and the people who use them.
If AI robots are exposed to Big Data with jingoist, anti-Semitic, sexist, homophobic or racist content, they may “grow up” to practice precisely such values. Conversely, if we train AI robots on feminist, pro-refugee, anti-racist, pro-LGBTI, and pro-democracy datasets, they might come to uphold human rights in healthcare.
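A toy sketch can make this concrete. In the hypothetical example below, the groups, approval rates and the naive frequency-based “model” are all illustrative assumptions; a model trained on discriminatory historical decisions faithfully reproduces that discrimination.

```python
import random

random.seed(0)

# Hypothetical historical decisions in which equally qualified people
# from group "B" were approved far less often than those from group "A".
training_data = (
    [("A", random.random() < 0.70) for _ in range(5000)] +
    [("B", random.random() < 0.30) for _ in range(5000)]
)

# "Training": the model simply memorizes the approval frequency per group.
approved, totals = {}, {}
for group, label in training_data:
    totals[group] = totals.get(group, 0) + 1
    approved[group] = approved.get(group, 0) + label

def predict(group):
    """Approve whenever the historical approval rate exceeds 50%."""
    return approved[group] / totals[group] > 0.5

print(predict("A"))  # True:  the model reproduces past favoritism
print(predict("B"))  # False: and reproduces past discrimination
```

Nothing in the code is malicious; the injustice lives entirely in the data it was handed.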
Still, the amphibious, malleable nature of AI robot attitudes remains problematic. For example, how would an AI robot behave when exposed to both pro-democracy and authoritarian, feminist and sexist, or pro-LGBTI and homophobic datasets?
For AI robot ethics, let’s not forget it is us – humans with all our prejudices – and not aliens from space, who are designing and deploying AI robots in healthcare and society. 

Making politics and power transparent
For opportunists keen on profiteering from human diseases, AI-powered robots have obvious appeal: unlike human employees, they do not need lunch, washroom or cigarette breaks, and they work 24 hours nonstop without social security or paid vacation benefits.
One strategy for ethical AI development is to routinely deploy “metadata”. Metadata are “data about data” (e.g., answers to questions such as “Who produced the data, to what ends, and with what funding?”), and they capture both the technical and political contexts in which Big Data are valorized as AI-powered robots in medical decision-making.
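What might such metadata look like in practice? The record below is a hypothetical illustration; the field names and values are assumptions, not an established standard, but they show how provenance questions could travel alongside a medical dataset.

```python
import json

# A hypothetical "data about data" record attached to a dataset.
metadata = {
    "dataset": "icu_vital_signs_2019",            # what the data are
    "produced_by": [                              # who produced them
        "nursing staff",
        "hospital technicians",
        "attending physicians",
    ],
    "purpose": "early warning of sepsis",         # to what ends
    "funding": "public hospital research grant",  # with what funding
    "collection_period": "2018-01 to 2019-06",
    "known_gaps": "under-represents uninsured patients",
}

print(json.dumps(metadata, indent=2))
```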
Metadata would also bode well for inclusive and ethical credit attribution to historically silenced “allied” health professionals, such as nurses, hospital technicians, and others without a medical degree, who play indispensable roles in generating Big Data and in providing the human element that a good and just healthcare system needs.
In other words, data are never “just” data, technology is never “just” technology, and AI robots are never “just” robots.

Regulatory capture
AI and robot ethics cannot be fully understood without considering the impacts of the current neoliberal and post-truth era. Four decades of hardcore neoliberalism and profiteering since the 1980s have led to the rise of “perception management” as a form of false and fake scholarship. Seen in this light, the current rise of post-truth politics is not surprising. Consenting to neoliberal approaches in technology assessment is akin to ingesting “soma”, the fictional happiness-producing drug in Brave New World, Aldous Huxley’s 1932 book on authoritarian-dystopian societies. Soma numbs our conscience, silences dissent, cultivates hearts of stone, and is a form of slow intellectual death.
AI-powered robots are inviting us to rethink “expertise”. No longer should we assume, nor accept, that technical competence alone is sufficient to graduate as a medical doctor or engineer. Critical technology governance education is sorely needed in a time of AI and post-truth, so that we can read the human values, power and political subtext embedded in AI and emerging technologies.
A word of caution! While ethics might sound ‘nice, warm and cozy’, ethicists and technology regulators, too, could be co-opted by neoliberalism. Indeed, a recent book by Noam Chomsky cautions against ‘regulatory capture’ *, suggesting the need to ‘watch the watchers’, or the ethics of regulators. Chomsky observes that over the history of regulation, there are many examples where “the business being regulated is in fact running the regulators.”
But none of this is surprising. After all, we live in neoliberal and post-truth times. We need to brace for impact and resist post-truth in technology regulation and ethics, too. 
In a Foucauldian sense, “power is everywhere”, shaping not only healthcare innovation but also AI and robot ethics!


*Vural Özdemir graduated as a medical doctor from Hacettepe University in Ankara and earned a PhD in life sciences, in the field of psychiatric drug development, at the Faculty of Medicine, University of Toronto. He qualified as an Associate Professor in Canada, with works on the critical governance of emerging technologies and the politics of knowledge production. He is a senior advisor on technology governance and responsible innovation in Toronto, Ontario, Canada. Twitter: @CriticalPolicy1
