What Are The Dangers Posed By AI?
The age of artificial intelligence (AI) has arrived, and development is accelerating. From generative AI to proposals for superhuman AI, the technology is advancing rapidly. But analysts and experts believe these rapid developments carry real dangers.
A recent letter calling for a moratorium on AI development mixes real threats with speculation, but concern is growing among experts.
In late March 2023, over 1,000 technology leaders, pundits, and researchers working in and around artificial intelligence signed an open letter warning that AI technologies present “profound risks to society and humanity.”
The group, which included Elon Musk, the owner of Twitter and chief executive of Tesla, urged AI labs to suspend development of their most powerful systems for six months so the dangers of the nascent technology could be better understood.
This letter stated:
“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The letter, which had gathered over 30,000 signatures by June 2023, was brief, and its language was broad. Some of the names behind it appeared to have conflicting relationships with artificial intelligence. Elon Musk, for instance, was among the primary donors to the organization that drafted the letter, yet he went on to launch his own AI start-up six months after it was first published.
However, the letter reflected a growing worry among AI experts that the latest systems, notably GPT-4, the technology introduced by OpenAI, could cause considerable harm to society. They believed future systems would be even more dangerous.
Some of the risks are already apparent, others might take months or years to emerge, and still others remain purely hypothetical. Yoshua Bengio, a professor and AI researcher at the University of Montreal, stated:
“Our ability to understand what could go wrong with very powerful A.I. systems is very weak. So we need to be very careful.”
Why Are They Worried?
Dr. Bengio is perhaps the most important person to have signed that letter. He has spent the past four decades developing the technology that powers systems like GPT-4. In 2018, he and two fellow researchers received the Turing Award, popularly known as “the Nobel Prize of computing,” for their work on neural networks.
A neural network is a mathematical system that learns skills by analyzing data. Around five years ago, big firms like Microsoft, Google, and OpenAI started building neural networks, known as large language models (LLMs), that learn from vast amounts of digital text.
By discovering patterns in that text, LLMs learn to generate text of their own, including poems, blog posts, and computer programs. They can also hold a conversation.
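To make that pattern-learning idea concrete, here is a minimal sketch in Python. It is not a neural network, let alone GPT-4: it swaps billions of learned parameters for a simple table of character counts, and the tiny corpus is purely illustrative. But it shows the same core mechanic the article describes, predicting what comes next from patterns observed in training text.

```python
import random
from collections import defaultdict

# Toy training corpus; real LLMs learn from billions of documents.
corpus = "the cat sat on the mat. the dog sat on the log. "

# Count how often each character follows each other character.
# This table of counts stands in for the patterns an LLM discovers.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start="t", length=40):
    """Sample new text one character at a time from the learned counts."""
    out = [start]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:  # no observed continuation for this character
            break
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate())  # e.g. "the mat. the cat sat on the log. the do"
```

Real LLMs condition on thousands of prior tokens rather than a single character and learn their statistics with deep neural networks, but the loop of repeatedly generating the next piece of text from learned patterns is essentially the same.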
The technology can help writers, computer programmers, and many other workers generate ideas and get things done more quickly. Nonetheless, Dr. Bengio and other experts warn that LLMs can learn unwanted and unexpected behaviors.
These platforms can generate biased, untruthful, and toxic information. Systems such as GPT-4 get facts wrong and fabricate information outright, a phenomenon known as ‘hallucination.’
Firms are now working on these issues. Nonetheless, experts such as Dr. Bengio worry that as researchers make the systems more powerful, they will introduce new risks.
Short-Term Risk: Misinformation
Since these systems deliver information with what appears to be complete confidence, it can be difficult to separate truth from fiction when using them. Experts worry that people will rely on these platforms for medical advice, emotional support, and the raw information they use to make decisions.
Subbarao Kambhampati, a professor of computer science at Arizona State University, stated:
“There is no guarantee that these systems will be correct on any task you give them.”
Analysts and experts are also worried that people will misuse AI to spread disinformation. Since these systems now converse almost like humans, they can be surprisingly persuasive.
Dr. Bengio commented:
“We now have systems that can interact with us through natural language, and we can’t distinguish the real from the fake.”
Medium-Term Risk: Job Loss
Oren Etzioni, the founding chief executive of the Allen Institute for AI in Seattle, said that “rote jobs” could be hurt by artificial intelligence.
Experts fear that the new AI technologies might be job killers. Currently, technologies such as GPT-4 seem to complement human workers. However, OpenAI admits that they might replace some workers, including the people who moderate content on the internet.
They cannot yet duplicate the work of doctors and lawyers, but they might replace personal assistants, paralegals, and translators. A paper published by OpenAI researchers estimated that 80% of the US workforce might have at least 10% of their work tasks affected by LLMs, and that 19% of workers may see over 50% of their tasks affected. Oren Etzioni said:
“There is an indication that rote jobs will go away.”
Long-Term Risk: Loss Of Control
Some of the people who signed the letter also think AI could slip out of human control or destroy humanity. However, other experts believe that idea is wildly overblown.
The letter in question was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because AI systems often learn unexpected behaviors from the vast amounts of data they analyze, they could cause serious, unanticipated problems.
They say that as firms connect LLMs to other internet services, the systems could acquire unexpected powers because they can write their own computer code. Developers will create new risks, they argue, if they let powerful AI systems run that code themselves.
Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, said:
“If you look at a straightforward extrapolation of where we are now to three years from now, things are pretty weird. If you take a less probable scenario — where things really take off, where there is no real governance, where these systems turn out to be more powerful than we thought they would be — then things get really, really crazy.”
For now, talk of existential risk is hypothetical. But other risks, such as disinformation, are already materializing. Dr. Etzioni said:
“Now we have some real problems. They are bona fide. They require some responsible reaction. They may require regulation and legislation.”
As we head into 2024 and beyond, more infrastructure and regulation need to be developed and implemented to meaningfully mitigate the dangers posed by AI.