Geoffrey Hinton is a renowned AI researcher, formerly at Google, often called a "godfather of artificial intelligence". He recently quit his position and warned the public about the potential dangers of further AI development. Before spending more than a decade at Google, Hinton ran his own company and was behind a 2012 breakthrough: together with two of his students, he created a neural network that serves as the foundation for many current AI systems, including OpenAI's ChatGPT.
However, such a central role in creating this world-changing technology doesn't seem to have brought Hinton the contentment one might expect. In recent interviews with The New York Times and the BBC, he even expressed regret for his work, stating that "it is hard to see how you can prevent the bad actors from using it for bad things."
Hinton noted that the progress seen in the industry since 2012 is spectacular, and that we have likely only scratched the surface of what this breakthrough makes possible. "Look at how it was five years ago and how it is now. Take the difference and propagate it forward. That's scary," he said.
Hinton's statements can be read as a logical extension of fears expressed earlier this year in an open letter signed by over 1,000 tech leaders, which called for a temporary pause and stricter oversight of AI development. Hinton didn't sign the letter at the time, but by his own account that was only because he didn't want to criticize Google while still with the company.
Geoffrey Hinton isn't the only one to express apprehension about where AI development could lead. His concerns about AI's potential for harm have been echoed by other experts in the field. Years before ChatGPT's grand debut, Stephen Hawking warned that AI could "spell the end of the human race." Even Elon Musk, known for his own ambitious AI ventures, has famously called AI "more dangerous than nuclear weapons".
In response to Hinton's resignation and concerns, Google's chief scientist, Jeff Dean, stated that the company remains committed to a responsible approach to AI, "continually learning to understand emerging risks while also innovating boldly."
Overall, specialists at Atlasiko also believe the world should treat AI development with maximum caution, as there's no telling how far artificial intelligence can go or how it will impact people.
We have already seen major advancements in the creation of AI-powered supercomputers, which have drawn the interest of tech giants as a new field of investment.
More daring scientists are now working on the theoretical basis for organoid intelligence, which is expected to result in biological computing capable of supercomputing-level performance.
Some futurists take it a few steps further, suggesting that artificial intelligence could help people achieve immortality, a concept that remains ethically and technically disputable.
All these inventions can be exciting and dangerous at the same time, precisely because we can't accurately predict their impact. That's the main reason Hinton and other scientists are wary of such rapid development, and we all should be, too.