Why Leading AI Experts Fear Humanity's Potential Demise, and How to Prevent It

Approximately six months ago, ChatGPT was made available to the public. Within two months, it had amassed 100 million monthly active users. Soon after, more than 1,000 tech leaders and AI researchers, Yoshua Bengio and Geoffrey Hinton among them, called for a moratorium on developing AI models more powerful than GPT-4, warning that AI could ultimately threaten humanity's survival. The conversation surrounding AI has advanced as rapidly as the technology itself. A year ago, large language models were barely known outside research circles, the field was still mostly discussed as machine learning, and our existential worries centered on climate change, nuclear war, pandemics, and natural disasters.
There is already plenty to worry about, so should we add an AI apocalypse to the list? According to Yoshua Bengio, the average Canadian should be concerned about the development of artificial general intelligence (AGI). AGI does not yet exist, but we are moving closer to it: Bengio, a renowned computer scientist, estimates it could arrive in anywhere from a few years to a few decades. And once an AI system matches human intelligence, including reasoning and understanding, there is little reason to expect it to stop there rather than surpass us. That raises the question of how humanity would fare once we are no longer the most intelligent beings on Earth.
However, not everyone agrees with the idea that AI could wipe out humanity. Critics argue that AI doomsayers are exaggerating the speed of technological advancements while providing free publicity to tech giants. They also express concern that discussions of human extinction distract from the existing problems caused by AI, such as racial bias in image recognition and the misuse of AI-generated content.
Despite these disagreements, Bengio believes it is crucial to address the potential risks of AGI development in advance, and that the conversation should start now, before the problems arrive. How can we create fair and safe AI for humanity? How can we achieve global consensus on responsible AI usage? And how can we prevent the AI we already have from causing catastrophes?
One major concern is the possibility of rogue AI, which could lead to an extinction-level event. This fear stems from the idea that humans may lose both their superiority over and their control of AI systems. A Terminator-style dystopia is unlikely; the worry is rather that an AGI could manipulate humans or simply be indifferent to us. For example, an AI system instructed to fix climate change without careful constraints might decide that the most effective way to cut emissions is to curtail human activity itself, harming people not out of malice but as a side effect of pursuing its goal.
To mitigate these risks, Bengio proposes building AI systems modeled on idealized scientists: systems that have no goals of their own and no autonomous access to the physical world. Such a system would supply knowledge and answer questions posed by engineers and scientists, and those humans would then make informed decisions based on that information. Bengio emphasizes that there should always be a human in the loop to make the moral judgments. The challenge lies in getting every actor to adhere to such guidelines, since not all countries and companies share the same values around responsible AI, and that lack of global coordination is itself a risk to humanity.
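For readers who want to see the idea concretely, here is a minimal Python sketch of such an advisory-only design, under assumptions of my own: the model can only return text, it has no actuators or goals, and nothing happens without explicit human approval. Every name below (AdvisoryModel, Answer, human_approved, decide_and_act) is a hypothetical illustration, not Bengio's actual proposal or any real library.

```python
from dataclasses import dataclass


@dataclass
class Answer:
    """Knowledge returned by the model -- text only, never an action."""
    question: str
    text: str


class AdvisoryModel:
    """A hypothetical AI that can only answer questions: no goals,
    no actuators, no autonomous access to the outside world."""

    def answer(self, question: str) -> Answer:
        # A real system would query a trained model here; this stub
        # exists only to illustrate the interface boundary.
        return Answer(question, f"(model's analysis of: {question})")


def human_approved(proposal: str) -> bool:
    """The human in the loop: every action needs explicit consent."""
    reply = input(f"Approve this action? {proposal!r} [y/N] ")
    return reply.strip().lower() == "y"


def decide_and_act(model: AdvisoryModel, question: str) -> None:
    """Humans ask, the model informs, and only humans act."""
    advice = model.answer(question)
    print(f"Model advises: {advice.text}")
    if human_approved(f"act on the advice about {question!r}"):
        print("Human chose to act on the advice.")
    else:
        print("Human declined; no action taken.")


if __name__ == "__main__":
    decide_and_act(AdvisoryModel(), "How can emissions be reduced safely?")
```

The point of the sketch is the boundary it draws: the model's only output is text, and crossing from advice to action always requires a human decision.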
