ChatGPT Devs Warn AI 'Superintelligence' Could Surpass Humans Within 10 Years

Sahana Kiran

OpenAI's ChatGPT has made a significant impact worldwide. Despite being in its early stages, the AI tool has stirred unease, with growing fears that such tools could replace numerous jobs. A recent blog post by ChatGPT's creators appears to have intensified these concerns.

In the post, ChatGPT's developers and co-founders suggested that AI could surpass human expertise in various domains and perform as much productive work as some of the world's largest corporations.

“Superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there.”

CEO Sam Altman has made several concerning statements about AI's potential impact. In a recent remark, the OpenAI chief acknowledged AI's power to transform society but also admitted to a degree of apprehension, saying he was "a little bit scared of this."

Like Altman, many individuals have voiced fears about ChatGPT and other AI tools. Elon Musk, along with several experts, has called for caution and a temporary pause on AI development.

OpenAI addresses calls for AI regulation

Amid growing concerns over AI's progress, people around the world have demanded regulations to address the situation. Altman recently testified before Congress to respond to lawmakers' concerns about the absence of adequate rules governing AI development.

In their blog post, the OpenAI team emphasized that, given the potential for existential risk, a proactive approach to managing the technology's possible harms is crucial. The post further read,

“Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example. We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.”