Elon Musk and Others Push for Moratorium on A.I., Citing Safety Risks

More than 1,000 technology and A.I. luminaries, including Elon Musk, Steve Wozniak, a co-founder of Apple, and Andrew Yang, penned an open letter urging a moratorium on the development of A.I., citing “profound risks to society and humanity.”

What Are OpenAI and ChatGPT?

OpenAI is a research laboratory founded by Elon Musk, Sam Altman, and others working to advance artificial intelligence (A.I.) in the way most likely to benefit humanity. A.I. has made incredible progress over the past few years, as evidenced by its use in many industries, such as self-driving cars and automated medical diagnosis tools. OpenAI has been working to create applications such as the ChatGPT system for natural language processing.

ChatGPT, an A.I. language model released to the public just a few months ago, has become one of history’s most popular consumer applications, reaching 100 million monthly active users within months of launch. A UBS study found that TikTok took nine months and Instagram three years to amass the same number of users.

Criticisms of A.I.

Despite the potential benefits of artificial intelligence, there are also serious risks associated with its development. A.I. systems have the potential to cause harm if not implemented properly. This is compounded by many A.I. applications being developed and deployed without proper oversight or regulation.

Furthermore, there is a risk of unintended consequences from A.I. systems – for example, bias in algorithms that can lead to decisions made on flawed criteria. This can have profound implications for social justice and equality.

Elon Musk tweeted, “You could literally film a Walking Dead episode unedited in downtown S.F. This is where San Francisco politics leads, and Twitter was exporting this self-destructive mind virus to the world.” In response to a related question about OpenAI baking these politics into the foundation of machine learning, Musk said, “Extremely concerning, given that it leads to a dystopian future – just walk around downtown S.F. to see what will happen.”

For some time now, the advancement of artificial intelligence has been met with apprehension about its impact on employment opportunities. Andrew Yang, another prominent signatory of the moratorium, warned about the loss of jobs to A.I. in his book “The War on Normal People,” published in 2018. Yang touted universal basic income (UBI) during his 2020 presidential campaign, recognizing the risks of A.I.

Speaking to The Australian Financial Review, Bill Gates warned that A.I. technology like ChatGPT could replace white-collar workers. Ironically, in January 2023 Microsoft announced an additional multi-billion-dollar investment in OpenAI, following its previous investments in 2019 and 2021.

OpenAI CEO Sam Altman, in an interview with ABC News, acknowledged the risks. He said, “We’ve got to be careful here. I think people should be happy that we are a little bit scared of this.”

Pause Giant A.I. Experiments

In light of these risks, the tech and A.I. luminaries have called for a moratorium on developing specific artificial intelligence applications. The goal is to give governments and organizations time to create regulations and safeguards that ensure A.I. systems are implemented responsibly and ethically. Under the proposed moratorium, powerful A.I. systems would be developed only once we are confident that their effects will be beneficial and their risks manageable.

Concerns raised in the letter include the following:

  • Flooding information channels with propaganda.
  • Eliminating jobs.
  • Developing nonhuman minds that might eventually outnumber and replace us.
  • Losing control of our civilization.

Possible Conflicts of Interest by Moratorium Signatories

Critics of the moratorium argue that several countries are locked in an arms race to develop A.I.; even if the suspension were implemented, it would only slow down U.S. companies while the rest of the world surged ahead.

Additionally, some of the signers could have a vested interest in slowing down the progress of A.I.

OpenAI revealed an investment in 1X’s A.I. robot ‘NEO,’ in direct competition with Musk’s Tesla Bot.

Tesla has been working on A.I. for several years. Elon Musk initially co-founded OpenAI to create a safer artificial intelligence, but stepped away in 2018 due to a potential conflict of interest between OpenAI and Tesla’s A.I. research on autonomous driving.

Musk has also invested in other A.I. companies such as DeepMind and Vicarious. Mark Zuckerberg is the other notable investor in Vicarious. DeepMind was later acquired by Google’s parent company, Alphabet. OpenAI and DeepMind are actively researching A.I. technologies; any moratorium on A.I. could significantly hurt their business.

OpenAI’s Attempt To Address Concerns

Although cryptocurrency investments have fallen out of favor with the drop in Bitcoin’s price, Sam Altman co-founded a crypto project in 2021 called Worldcoin. Worldcoin published a blog post addressing some of the criticisms of A.I. and positioning itself as the protocol that empowers all to authenticate their humanness online without relying on a third party.

In addition to acting as a potential reputation system, Worldcoin has ambitions of distributing value by creating a universal basic income (UBI). Since A.I. is expected to eliminate many jobs, UBI may be the only way to support individuals while they learn unique skills that A.I. can’t replace.

It is difficult to deny the potential benefits of artificial intelligence in many areas of life, but it is equally important to consider the associated risks.

OpenAI, the nonprofit created by Altman and Musk, was conceived as a shield against the dangers of our own innovation. It was born out of Musk’s fear that an A.I. might inadvertently lead to the end of humanity as we know it. He saw such a potential outcome as too great a risk not to take precautionary measures.

In his New Yorker profile, Sam Altman mentioned prepping for survival as one of his hobbies. He said, “My problem is that when my friends get drunk, they talk about the ways the world will end. After a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero. The other most popular scenarios would be A.I. that attacks us and nations fighting with nukes over scarce resources.” 

Elon Musk and others could be right to call for caution in developing specific applications of A.I. A moratorium could be essential to ensuring that A.I. is developed responsibly. Hopefully, it would give us the time to create regulations and safeguards that protect us from potential harm.
