Stephen Hawking warned AI could mean the ‘end of the human race’ in years leading up to his death

By NODE:DNA July 14, 2023

Long before Elon Musk and Apple co-founder Steve Wozniak signed a letter warning that artificial intelligence poses “profound risks” to humanity, British theoretical physicist Stephen Hawking had been sounding the alarm on the rapidly evolving technology.

“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in an interview in 2014.

Now, in 2023, ChatGPT has brought the topic back to the forefront of public attention.

The difference, we think, is that in 2014 many people saw AI chiefly as a potential threat, whereas its practical value has since been demonstrated, leaving many people conflicted.

Hawking, who lived with amyotrophic lateral sclerosis (ALS) for more than 55 years, died in 2018 at the age of 76. Though he was critical of AI, he also relied on a very basic form of the technology to communicate, as the disease weakens muscles and required Hawking to use a wheelchair.

Hawking was left unable to speak in 1985 and relied on various ways to communicate, including a speech-generating device built by Intel, which allowed him to use facial movements to select words or letters that were then synthesized into speech.

Hawking’s comment to the BBC in 2014 that AI could “spell the end of the human race” was in response to a question about potentially revamping the voice technology he relied on. He told the BBC that very basic forms of AI had already proven powerful but creating systems that rival human intelligence or surpass it could be disastrous for the human race.

“It would take off on its own and re-design itself at an ever-increasing rate,” he said.

“Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” Hawking added.

Months after his death, Hawking’s final book hit the market. Titled “Brief Answers to the Big Questions,” his book provided readers with answers to questions he was frequently asked. The science book hashed out Hawking’s argument against the existence of God, how humans will likely live in space one day and his fears over genetic engineering and global warming.

Artificial intelligence also took a top spot on his list of “big questions,” with Hawking arguing that computers are “likely to overtake humans in intelligence” within 100 years.

What does this mean for us now? Is he right? With recent advancements it certainly seems plausible, but what happens if some businesses adopt AI while others do not? We are entering a period that will be remembered less for the Covid pandemic than for advancements in AI and the discussion and debate they provoke.