The AI Arms Race Is Leading To Armageddon

The ongoing AI arms race poses a threat to humanity, according to mathematics professor Olle Häggström at Chalmers University. He is one of the signatories of an international petition calling for a six-month research pause in advanced AI to allow ethics to catch up.

– There is a significant imbalance between the power of AI and the progress made in AI safety, known as AI alignment. Alignment is about getting AI to behave in line with human values, says Olle Häggström in Consid's podcast, Digitala influencer-podden.

AGI (artificial general intelligence) was long considered a distant possibility. However, OpenAI's ChatGPT and GPT-4, along with similar AI models from their competitors, have upended the old timetables. Microsoft's recent extensive report on GPT-4's performance speaks of "sparks of AGI." Where this is heading remains highly uncertain, but AGI may well be just around the corner.

Olle Häggström, Professor of Mathematical Statistics, Chalmers University of Technology

Humanity is at a unique stage where we are automating the very trait that underlies our dominance on this planet: intelligence. In doing so, we need to ensure that the machine has the right motivations. But this work lags behind and is neglected by OpenAI and its competitors, under the pressure generated by their race for market dominance, argues Olle Häggström.

– There is a race dynamic here where OpenAI on one side and Google on the other are competing for market dominance. Enormous sums are at stake… it creates unhealthy market incentives that cause them to rush forward uncontrollably, says Olle Häggström.

In its documentation for GPT-4, OpenAI acknowledges significant risks. It states that the critical threshold at which the system can autonomously initiate systematic resource acquisition "probably" has not been reached and that "more research is needed." Olle Häggström believes that agreements are needed to slow down the pace.

– We need to prepare for the possibility that this technology can emerge very quickly. Alignment research must have a reasonable chance to catch up. Because if we fail to do so, the consequences could be dire, says Olle Häggström.
