Professor Max Tegmark drew parallels with the extinction of other species and offered a grim forecast: the probability that artificial intelligence destroys humanity will reach 50%. This will happen when machines are “much smarter than us,” but no one knows exactly when.
Artificial intelligence (AI) no longer seems like a wonder out of science fiction; it is a reality. Debates about its future impact on humanity therefore continue. So, should we really be afraid of AI?
The higher mind always prevails.
Max Tegmark, a physicist and AI systems expert, has given a concerning forecast about our future on this planet. The scientist believes that the probability of humanity’s destruction with the continued development of artificial intelligence is 50%.
According to the expert, history shows that responsibility for the extinction of “lesser” species (such as the dodo) lies with humans, the most intelligent beings on Earth. If AI becomes smarter than humans, humanity may face the same fate, the scientist warns. Moreover, we will not know when our demise at the hands of artificial intelligence will occur, just as less intelligent species can never anticipate or prepare for theirs.
Leading scientists believe that in the near future, AI will be used to create autonomous weapons, or killer robots. As the Daily Mail notes, Professor Tegmark has drawn grim conclusions about this process.
According to him, about half of the Earth’s biodiversity has been destroyed by humans. Because “our lesser brothers” lacked our intelligence, they could not control the situation. Likewise, when machines become smarter than humans in the confrontation between “humans and AI,” the latter will lose control over them. Events will follow the same pattern and lead to the destruction of humanity.
Risks of AI
Professor Tegmark is one of those who signed the warning statement about the risks of AI. It asserts that reducing the risks of extinction from artificial intelligence should have a global priority, just like pandemics and nuclear war.
Insufficiently programmed technologies can lead to a fatal error in the future. Even seemingly safe software can contribute to the destruction of humanity.
Max Tegmark warned back in 2018 that one day humans could become slaves to the intelligent machines they created. At the time, he even claimed that some of his colleagues might welcome humanity’s extinction at the hands of AI, viewing robots as humanity’s descendants.
The professor urges keeping forms of “superintelligence” under human control, “like a dog on a leash.” However, he doubts this will be possible once artificial intelligence surpasses human intelligence. This, above all, is what most concerns him about the rapid advancement of AI.
Moreover, such machines will have no moral qualms, and they will still be able to outsmart humans. Professor Tegmark believes that artificial intelligence could break free from human control and seize power over people. In his opinion, the risk of this happening is 50%.
Elon Musk calls for restrictions on AI capabilities
Oddly enough, one of the biggest advocates of innovation, billionaire Elon Musk, agrees with Professor Tegmark. Despite his love of technological progress, the CEO of Tesla recognizes the potential dangers of AI and opposes its unchecked development.
In March, Musk and more than 1,000 other tech leaders called for a pause in the “dangerous race” to develop artificial intelligence. They believe it poses a “great risk to society and humanity” and could have “catastrophic” consequences.
Elon Musk has been expressing concerns about the excessive development of AI since 2014. In his opinion, technologies could become so advanced that they would no longer require human intervention. At that point, robots would refuse to follow commands given to them by people.
The billionaire calls this a civilization-level threat. Musk believes AI is “far more dangerous than nuclear weapons,” and that in building it “people are summoning the demon.” Next to such machines, he says, humans will increasingly resemble pets.
That is why artificial intelligence requires extensive research for the sake of human safety. Musk believes that “there is perhaps a 5-10% chance of success” in making it safe.
Back in July 2020, the entrepreneur claimed that in just five years, AI would become much smarter than humans. However, he said this would not mean a true catastrophe: “It just means things get unstable or weird.” Let’s hope humans still stand a chance against AI.