
Is the dominance of AI inevitable? A technology ethicist says the argument is misleading

Anyone who has followed the discourse surrounding artificial intelligence in recent years has heard one version or another of the claim that AI is inevitable. The common themes are that AI is already here, it’s important, and the people who are skeptical of it are only hurting themselves.

In the business world, AI advocates tell companies and workers that they will be left behind if they fail to integrate generative AI into their operations. In science, AI advocates promise that AI will help cure hitherto incurable diseases.

In higher education, AI advocates are telling teachers that students must learn to use AI or risk not being able to compete when it comes time to get a job.

And, on national security, AI champions say that either the country invests heavily in AI weapons or it will find itself at a disadvantage against the Chinese and the Russians, who are already doing so.

The argument across these different domains is the same: The time for AI skepticism has come and gone. Technology will shape the future, whether you like it or not. You have the choice to learn how to use it or be left out of that future. Anyone who tries to stand in the way of technology is as hopeless as the hand weavers who resisted machine weaving in the early 19th century.

For the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the ethical questions raised by the widespread adoption of AI, and I believe that this argument from inevitability is misleading.

History and hindsight

In fact, this claim is the most recent version of a deterministic view of technological progress: the belief that innovations cannot be stopped once people start working on them. In other words, some genies don’t go back into their bottles. The best you can do is harness them to good purposes.

This kind of technological determinism has a long history. It has been invoked to describe the impact of the printing press, as well as the rise of automobiles and the infrastructure they require, among other developments.

But I believe that when it comes to AI, the argument for technological determinism is exaggerated and oversimplified.

AI in the field

Consider the argument that businesses can’t afford to stay out of the AI game. In fact, the case has yet to be made that AI is delivering significant productivity benefits to the companies that use it. A report in The Economist from July 2024 suggests that, so far, the technology has had almost no economic impact.

The role of AI in higher education is also still a very open question. Although universities have, in the past two years, invested heavily in AI-related programs, evidence suggests that they may have jumped the gun.

The technology can serve as an interesting learning tool. For example, a Plato chatbot that allows readers to have a text conversation with a bot posing as Plato is an intriguing teaching aid.

But AI is already beginning to undermine some of the best tools teachers have for assessing and promoting critical thinking, such as writing assignments. The college essay is going the way of the dinosaurs as many teachers give up on being able to tell whether their students are writing their own papers. What is the cost-benefit argument for giving up on writing, an important and useful traditional skill?

In science and medicine, the use of AI seems promising. Its role in understanding protein structure, for example, is likely to be important in treating disease. The technology is also transforming drug design and helping to speed up the drug discovery process.

But the excitement may be exaggerated. AI-based predictions about which cases of COVID-19 would become severe failed miserably, and doctors risk relying too heavily on the technology’s diagnostic capabilities, often against their own better clinical judgment. So even in this area, where the potential is great, the ultimate impact of AI remains unclear.

For national security, the argument for investing in AI development is compelling. Since the stakes are so high, the argument that if the Chinese and Russians are developing autonomous AI-driven weapons, the United States can’t afford to fall behind, has real purchase.

But total commitment to this kind of thinking, while tempting, may lead the US to ignore the disproportionate impact of these programs on countries too poor to participate in the AI arms race. The superpowers could deploy the technology in conflicts fought on those nations’ territory. And, significantly, this argument de-emphasizes the possibility of cooperating with adversaries to limit military AI systems, favoring an arms race over arms control.

One step at a time

Assessing the potential value and risks of AI in these different domains requires some skepticism about the technology. I believe AI should be adopted deliberately, case by case, rather than swept along by blanket claims of inevitability. In developing this careful approach, there are two things to keep in mind:

First, companies and entrepreneurs working on artificial intelligence have an obvious interest in the technology being perceived as inevitable and necessary, since they make their living from its adoption. It is important to pay attention to who is making claims of inevitability, and why.

Second, it is worth taking a lesson from recent history. Over the past 15 years, smartphones and the social media applications that run on them came to be treated as a fact of life, a technology seen as being as transformative as it was inevitable. Then data began to emerge about the harm they cause to young people, especially young girls. School districts across the US began banning phones to protect the attention spans and mental health of their students. And some people have gone back to using flip phones as a quality-of-life choice, to escape the constant pull of smartphones.

After a long experiment with children’s mental health, driven by claims of technological determinism, Americans are changing course. What seemed fixed turned out to be changeable. There is still time to avoid repeating the same mistake with artificial intelligence, which could have far greater consequences for society.


Nir Eisikovits is a professor of philosophy and director of the Applied Ethics Center at UMass Boston.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

