Ethicists Fire Back at AI Pause Letter: It Ignores the Harms Happening Now, They Say

The signatories of the letter calling for a six-month “pause” on AI development have focused on hypothetical future threats, even though the harms attributable to misuse of the technology today are clear and real. In doing so, critics say, they echo a move familiar from climate-change denial: attention is steered away from well-documented present damage and toward distant speculation. The urgent need to address these existing harms should be fundamental to any discussion of AI development, and policymakers should focus on making the technology being built today safe and accountable.

The Future of Life Institute’s open letter raises a number of valid concerns about the potential dangers of artificial intelligence and its development. While it is important to continue developing AI models for the benefit of humanity, we should take steps to ensure that this work is carried out responsibly and with appropriate caution.

The rise of artificial intelligence has experts and everyday people alike worried about its implications. Among those pushing back against the letter’s framing is a group of prominent AI ethics researchers: Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell, some of whom formerly worked on Google’s AI ethics team. They published their response through the DAIR Institute, an organization created to study and prevent harms arising from AI development.

The letter’s failure to engage with the problems the technology is already causing has drawn backlash from critics who accuse the signatories of being out of touch.

The dangers associated with deploying AI systems today are often overlooked by the letter’s supporters in favour of hypothetical risks that may never materialize. Longtermism is an ideology that thrives on those hypothetical risks at the expense of harms that are happening right now. Worker exploitation, data theft and synthetic media are already causing serious harm, and will only become more prevalent as AI systems continue to develop. We need to be careful not to let longtermism steer us away from sensible policies that could improve the world we actually live in.

Sentient robots enslaving or exterminating humanity remain the stuff of science fiction, which makes it easy to dismiss the idea of a machine-driven apocalypse altogether. But that framing overlooks the continuing reality of human oppression carried out through automated systems. Terminator-style robots are not the threat we should be worrying about today; tools like Clearview AI are. The facial-recognition system is already used by police departments, and has contributed to innocent people being falsely accused.

Although the DAIR researchers agree with some of the letter’s aims, such as making synthetic media identifiable, they emphasize that action must be taken now, on today’s problems, with remedies we already have. Natural-language-processing tools, for example, can help identify and counter propaganda campaigns online, and provenance labelling can make clear when content is machine-generated. Measures like these can help protect open discourse and democratic debate from being drowned in synthetic content.
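
To make the idea of provenance labelling concrete, here is a minimal sketch in Python, using only the standard library, of how a generator might attach a signed “synthetic” label to its output so that a platform can later verify it. The function names and the shared HMAC key are illustrative assumptions, not part of any existing standard such as C2PA.

```python
import hashlib
import hmac
import json

# Illustrative only: in practice a generator would use an asymmetric key pair
# and a standardized manifest format rather than a shared secret.
SECRET_KEY = b"publisher-signing-key"

def label_synthetic(media_bytes: bytes, generator: str) -> dict:
    """Produce a provenance record declaring the content as machine-generated."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {"content_sha256": digest, "synthetic": True, "generator": generator}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(media_bytes: bytes, record: dict) -> bool:
    """Check that the label matches the media and was signed with the known key."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

if __name__ == "__main__":
    image = b"...generated image bytes..."
    label = label_synthetic(image, generator="example-model-v1")
    print(verify_label(image, label))  # True for untampered media, False otherwise
```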

What we need is regulation that enforces transparency. Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.
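
As a rough illustration of the kind of disclosure such regulation might require, the sketch below shows a minimal, machine-readable “model card” recording a system’s architecture, training data sources and accountable party. The schema and field names are hypothetical, chosen only to make the idea concrete.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """A minimal disclosure record for a generative system (illustrative schema)."""
    model_name: str
    architecture: str
    training_data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    intended_uses: list[str] = field(default_factory=list)
    accountable_party: str = ""

    def to_json(self) -> str:
        # Serialize so the disclosure can be published and audited.
        return json.dumps(asdict(self), indent=2)

card = ModelCard(
    model_name="example-generative-model",
    architecture="decoder-only transformer, 12B parameters",
    training_data_sources=["licensed news corpus", "public-domain books"],
    known_limitations=["fabricates citations", "reflects biases in web text"],
    intended_uses=["drafting assistance with human review"],
    accountable_party="Example Corp., Responsible AI team",
)
print(card.to_json())
```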

The current race towards ever larger “AI experiments” is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.

It is indeed time to act: but the focus of our concern should not be imaginary “powerful digital minds.” Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.

Critics also note that the lack of transparency around synthetic media presents significant public safety risks. The problem is compounded by the fact that understanding how generative systems behave currently requires specialist expertise, leaving ordinary users poorly placed to judge the outputs they encounter. Requiring the organizations building these systems to document their training data and model architectures would open them up to scrutiny, and responsibility for ensuring that products are safe to use should rest with those who build and deploy them rather than with their users.

Another issue beginning to come to light is that the largest AI experiments are conducted behind closed doors, sometimes through corporate research partnerships, including with universities, that can blur lines of accountability and regulatory oversight. The public has little access to the data generated by these experiments, and that secrecy means there is little scrutiny of the risks being taken. We need to ensure that these practices do not result in harmful consequences for society as a whole.

We need policies that incentivize these companies to act more equitably, and we need to raise awareness of the ways in which their current practices harm society.

Whether these powerful systems are abused, used to demean and target people who hold different cultural beliefs, or put to benign purposes such as detecting disease outbreaks will depend on who builds them and under what constraints. We must continue to be vigilant about who is creating these technologies, lest they be turned to harmful ends.

The signatories of the open letter have called on major AI labs to pause their research efforts in order to better assess the risks posed by artificial intelligence. It is vanishingly unlikely that any major company would ever agree to do so, but judging from the engagement the letter has received, the risks of AI, real and hypothetical, are of great concern across many segments of society. If the companies will not pause themselves, perhaps someone will have to do it for them.

Ava Patel

Ava Patel is a cultural critic and commentator with a focus on literature and the arts. She is known for her thought-provoking essays and reviews, and has a talent for bringing new and diverse voices to the forefront of the cultural conversation.

