Doctors and public health experts join calls for halt to AI R&D until it's regulated
I have been arguing for some time (as readers of TR will know) that humanity as we know it faces six existential threats. These are:
- Climate change
- Unregulated AI
- Unregulated human genetic engineering
- Nuclear winter
- Inequality
- Pandemics
It’s interesting that over the last few months each of these has been singled out as an existential threat by groups of scientists and other experts. Climate change we have failed to curb thus far (and I doubt we will); human genetic engineering has been curbed for now; nuclear winter is an ever-present threat, especially with proliferation; inequality we have yet even to seriously consider tackling; and as for pandemics, we are fast forgetting the lessons of COVID.
However, at last, unregulated AI is being taken seriously.
For example, an international group of doctors and public health experts has joined the current clamor for a moratorium on AI research until the development and use of the technology are properly regulated.
Despite its transformative potential for society, including in medicine and public health, certain types and applications of AI, including self-improving general-purpose AI (AGI), pose an “existential threat to humanity,” they warn in the open access journal BMJ Global Health.
They highlight three sets of threats associated with the misuse of AI and the ongoing failure to anticipate, adapt to, and regulate the transformational impacts of the technology on society.
The first of these comes from the ability of AI to rapidly clean, organize, and analyze massive data sets consisting of personal data, including images.
This can be used to manipulate behavior and subvert democracy, they explain, citing its role in the subversion of the 2013 and 2017 Kenyan elections, the 2016 US presidential election, and the 2017 French presidential election.
What the researchers say: “When combined with the rapidly improving ability to distort or misrepresent reality with deep fakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts,” they contend, stating what is now obvious.
AI-driven surveillance may also be used by governments and other powerful actors to control and oppress people more directly, an example of which is China’s Social Credit System, they point out.
This system combines facial recognition software and analysis of ‘big data’ repositories of people’s financial transactions, movements, police records and social relationships.
But China isn’t the only country developing AI surveillance: at least 75 others, “ranging from liberal democracies to military regimes, have been expanding such systems,” they highlight.
The second set of threats concerns the development of Lethal Autonomous Weapon Systems (LAWS)—capable of locating, selecting, and engaging human targets without the need for human supervision. LAWS can be attached to small mobile devices, such as drones, and could be cheaply mass produced and easily set up to kill “at an industrial scale,” warn the authors.
The third set of threats arises from the loss of jobs that will accompany the widespread deployment of AI technology, with estimates ranging from tens to hundreds of millions over the coming decade.
“While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior,” they point out.
To date, increasing automation has tended only to shift income and wealth from labor to the owners of capital, thereby contributing to inequitable wealth distribution across the globe, they note.
“Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health,” they highlight.
But the threat posed by self-improving AGI, which, theoretically, could learn and perform the full range of human tasks, is all-encompassing, they suggest.
“We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power—whether deliberately or not—in ways that could harm or subjugate humans—is real and has to be considered.
“If realized, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons and all the digital systems that increasingly run our societies, could well represent the ‘biggest event in human history’,” they write.
“With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimize risk and harm and maximize benefit,” they emphasize.
International agreement and cooperation will be needed, as well as the avoidance of a mutually destructive AI ‘arms race,’ they insist. And healthcare professionals have a key role in raising awareness and sounding the alarm on the risks and threats posed by AI.
“If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances.
“This includes ensuring transparency and accountability of the parts of the military–corporate industrial complex driving AI developments and the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy,” they conclude.
So, what? Amen. What are the chances of regulation? Zero.