Sam Altman's dismissal from OpenAI reflects a split over the future of AI development

Updated on May 17, 2024

The split that cost Sam Altman the CEO position at OpenAI reflects a fundamental disagreement over safety, broadly speaking, between two camps developing the world-changing software and thinking about its impact on society.

On one side are those, like Altman, who believe that rapid development and especially public deployment of AI is necessary to stress-test and improve the technology. On the other side are those who argue that the safest way forward is to fully develop and test AI in a laboratory setting to ensure that it is, so to speak, safe for human consumption.

On Friday, 38-year-old Altman was fired from the company that created the popular chatbot ChatGPT. Many considered him the human face of generative AI.

Some fear that super-intelligent software could become unmanageable and lead to disaster. This worries technologists who adhere to a social movement called "effective altruism" and believe that AI advances should benefit humanity. Among those who share these concerns is OpenAI's Ilya Sutskever, the chief scientist and board member who approved Altman's ouster.

A similar division has emerged among developers of self-driving cars, which are also driven by artificial intelligence: some argue the vehicles must be deployed on dense city streets to fully understand their abilities and shortcomings, while others urge restraint, fearing that the technology carries unknowable risks.

Concerns about generative AI have come into sharp focus with the unexpected ouster of Altman, who also co-founded OpenAI. Generative AI is a term for software that can produce coherent content, such as essays, computer code and photo-like images, in response to simple prompts. The popularity of OpenAI's ChatGPT over the past year has accelerated the debate about how best to regulate and develop this software.

"The question is whether it will be just another product, like social media or cryptocurrency, or whether it's a technology that has the potential to surpass humans and become uncontrollable," says Connor Leahy, CEO of ConjectureAI and a security advocate. "Will the future belong to machines?"

Sutskever reportedly felt that Altman was putting OpenAI's software into users' hands too quickly, potentially jeopardizing safety.

"We don't have a solution for how to manage or control potentially super-intelligent AI and prevent it from getting out of control," he and his deputy wrote in a July blog post. "Humans will not be able to reliably control AI systems that are much smarter than we are."

Sam Altman, CEO of Microsoft-backed OpenAI and creator of ChatGPT, speaks during a talk at Tel Aviv University in Tel Aviv, Israel, June 5, 2023.

Of particular concern is the fact that OpenAI announced a number of new commercial products at its developer event earlier this month, including a version of its GPT-4 software and so-called agents that act as virtual assistants.

Sutskever did not respond to a request for comment.

OpenAI's fate is seen by many technologists as critical to the development of AI. Discussions over the weekend about Altman's reinstatement failed, dashing the hopes of the former CEO's acolytes.

The release of ChatGPT last November sparked a frenzy of investment in AI companies, including $10 billion from Microsoft (MSFT.O) in OpenAI and billions more in other startups, including from Alphabet (GOOGL.O) and Amazon.com (AMZN.O).

This may help explain the explosion of new AI products as companies like Anthropic and ScaleAI seek to show investors their progress. Meanwhile, regulators are struggling to keep pace with AI development: the Biden administration has issued recommendations, some countries are pushing for "mandatory self-regulation," and the European Union is working to introduce broad oversight of the software.

While most people use generative AI such as ChatGPT to supplement their work, for example by producing quick summaries of lengthy documents, observers fear the emergence of versions known as "artificial general intelligence," or AGI, which could perform increasingly complex tasks without any prompting. That prospect has raised fears that such software could take over defense systems, create political propaganda or produce weapons on its own.

OpenAI was founded as a nonprofit organization eight years ago, in part to ensure that its products would not be driven by the pursuit of profit, which could push it down a slippery slope toward a dangerous AGI that, in the words of its charter, would "harm humanity or unduly concentrate power." But Altman has since helped create a for-profit venture within the company for fundraising and other purposes.

Late Sunday, OpenAI appointed Emmett Shear, the former head of streaming platform Twitch, as interim CEO. In September, he spoke out on social media in favor of "slowing down" AI development. "If our speed is 10 now, a pause is a drop to 0. I think we should aim for 1-2," he wrote.

The exact reasons for Altman's dismissal remained unclear as of Monday. But it is clear that OpenAI has some serious challenges ahead of it.
