'Something Dangerous For Humanity': Elon Musk On Why Sam Altman Could Have Been Fired
Elon Musk has hinted at why OpenAI may have parted ways with Sam Altman, after asking Ilya Sutskever about the decision to terminate Altman.

Elon Musk has shed light on the possibility that OpenAI might have done “something potentially dangerous” to humanity, and that this may have been one of the reasons ex-CEO Sam Altman was let go.

For the uninitiated, Sam Altman, the former CEO of OpenAI, was fired by the generative artificial intelligence giant last week. OpenAI, the company behind the popular chatbot ChatGPT and the GPT series of large language models (LLMs), said it was no longer confident in Altman’s ability to lead the company. This sparked a truly bizarre series of events.

Elon Musk was quick to jump into the action, asking Ilya Sutskever why he took such a big step in suddenly firing Altman. Sutskever, a board member and chief scientist at OpenAI, is believed to have been one of the main drivers behind Altman’s ouster from the very company he helped build. However, after the drama unfolded and Microsoft’s Satya Nadella announced that Sam Altman, Greg Brockman, and some other OpenAI members would be joining Microsoft, with Altman leading a new artificial intelligence team, Sutskever posted an apology on X (formerly Twitter), the platform owned by Musk.

“I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company,” Sutskever said on X.

Replying to this post, Elon Musk said, “Why did you take such a drastic action? If OpenAI is doing something potentially dangerous to humanity, the world needs to know.” This has become a catalyst for debate about how responsibly OpenAI is developing generative AI.

Time and again, industry experts, including Musk himself, have warned that AI could prove dangerous to humans in the future, and that its development must be done responsibly rather than rushed.

Many have voiced fears that hyper-intelligent AI software could prove to be a bane for humanity, and that racing toward it may not be in humanity’s best interest.
