OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary OpenAI Limited Partnership. OpenAI conducts AI research with the stated goal of developing "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work."
OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members. Microsoft provided OpenAI LP with a $1 billion investment in 2019 and a $10 billion investment in 2023.
The History of OpenAI
2015-2018: Starting a nonprofit
In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced the creation of OpenAI and pledged more than $1 billion to the venture. According to an investigation by TechCrunch, funding for the nonprofit organization remains murky, with Musk being the largest funder and another donor, YC Research, contributing nothing at all. The organization has said it will "freely collaborate" with other institutions and researchers by making its patents and research open to the public. OpenAI is headquartered in the Pioneer Building in San Francisco's Mission District neighborhood.
According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of the "best researchers in the field." In December 2015, Brockman was able to hire nine of them as his first employees. In 2016, OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google.
OpenAI's potential and mission attracted these researchers to the company; one Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and largely because of its mission." Brockman stated that "the best thing I can imagine is to bring humanity closer to creating real artificial intelligence in a safe way." OpenAI co-founder Wojciech Zaremba said he rejected "bordering on insanity" offers of two to three times his market value to join OpenAI.
In April 2016, OpenAI released a public beta of "OpenAI Gym," a platform for reinforcement learning research. In December 2016, OpenAI released "Universe," a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.
In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. By comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPU cores and 256 GPUs from Google for several weeks.
In 2018, Musk resigned his board seat, citing a "potential future conflict of interest" with his role as CEO of Tesla, due to Tesla's development of artificial intelligence for self-driving cars. Sam Altman claims that Musk believed OpenAI had fallen behind other players such as Google, and that Musk proposed instead to run OpenAI himself, which the board rejected. Musk subsequently left OpenAI but said he remained a donor; he has made no donations since his departure.
2019: Transitioning from a nonprofit organization
In 2019, OpenAI transitioned from a nonprofit to a "capped-profit" for-profit company, with profits capped at 100 times any investment. OpenAI says the capped-profit model allows OpenAI LP to legally attract investment from venture capital funds and, in addition, to grant employees stakes in the company, so that they can say, "I'm going to OpenAI, but in the long run it won't be a disadvantage to us as a family." Many top researchers work at Google Brain, DeepMind, or Facebook, which offer stock options, something a nonprofit cannot do. Prior to the transition, public disclosure of the compensation of OpenAI's top employees was legally required.
The company then distributed shares to its employees and partnered with Microsoft to announce a $1 billion investment package. Since then, OpenAI systems have been running on Microsoft's Azure supercomputing platform.
OpenAI subsequently announced its intention to commercially license its technology. OpenAI planned to spend the $1 billion "within five years, and possibly much faster." Altman said that even $1 billion may prove insufficient, and that the lab may ultimately need "more capital than any nonprofit has ever raised" to achieve artificial general intelligence.
Oren Etzioni of the nonprofit Allen Institute for AI was skeptical of the transition from a nonprofit to a capped-profit company. He agreed that attracting top researchers to a nonprofit is difficult, but said, "I disagree with the notion that a nonprofit can't compete," pointing to successful low-budget projects by OpenAI and other organizations. "If bigger and better-funded was always better, IBM would still be number one."
The nonprofit organization OpenAI Inc. is the sole controlling shareholder of OpenAI LP. OpenAI LP, despite being a for-profit company, retains a formal fiduciary responsibility to the nonprofit OpenAI Inc. A majority of the members of the board of directors of OpenAI Inc. are prohibited from having financial interests in OpenAI LP. In addition, minority board members with an interest in OpenAI LP are disqualified from participating in certain votes due to conflicts of interest. Some researchers argue that OpenAI LP's transition to a commercial basis is inconsistent with OpenAI's statements about the "democratization" of AI.
2020-present: ChatGPT, DALL-E and partnership with Microsoft
In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is designed to answer questions in natural language, but it can also translate between languages and coherently generate improvised text. The company also announced that an associated API, named simply "the API," would form the basis of its first commercial product.
In 2021, OpenAI introduced DALL-E, a deep learning model capable of generating digital images from natural language descriptions.
In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, a new AI chatbot based on GPT-3.5. According to OpenAI, more than a million users signed up for the preview in its first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI projected revenue of $200 million in 2023 and $1 billion in 2024.
As of January 2023, OpenAI was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new multi-year, $10 billion investment in OpenAI. The investment is rumored to entitle Microsoft to 75% of OpenAI's profits until it recoups its investment, along with a 49% stake in the company.
The investment is believed to be part of Microsoft's efforts to integrate OpenAI's ChatGPT into the Bing search engine. After the launch of ChatGPT, Google announced a similar AI application (Bard), fearing that ChatGPT could threaten Google as a major source of information.
On February 7, 2023, Microsoft announced that Microsoft Bing, Edge, Microsoft 365 and other products would incorporate artificial intelligence technology built on the same foundation as ChatGPT.
On March 3, 2023, Reid Hoffman resigned from his board position, citing a desire to avoid a conflict of interest between his seat on OpenAI's board and his investments in artificial intelligence technology companies through Greylock Partners and as co-founder of the startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI.
On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a ChatGPT Plus feature.
On May 22, 2023, Sam Altman, Greg Brockman, and Ilya Sutskever published recommendations for the governance of superintelligence. They believe superintelligence could arrive within the next 10 years, enabling a "dramatically more prosperous future," and that "given the possibility of existential risk, we can't just be reactive." They propose creating an international watchdog organization, similar to the IAEA, to oversee AI systems above a certain capability threshold, while arguing that relatively weak AI systems, by contrast, should not be over-regulated. They also call for more research into the technical safety of superintelligence, and ask for greater coordination, for example through governments launching a joint project that "many current efforts become part of."
Motives
Some scientists, such as Stephen Hawking and Stuart Russell, have voiced concerns that if advanced AI someday gains the ability to redesign itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Co-founder Musk has characterized AI as humanity's "biggest existential threat."
Musk and Altman have said they are motivated in part by concerns about AI safety and the existential risk from artificial general intelligence. OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally hard to imagine "how much it could damage society if built or used incorrectly." Safety research cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach." OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible." Co-chair Sam Altman expects the decades-long project to surpass human intelligence.
Vishal Sikka, former CEO of Infosys, said that "openness," whereby the endeavor would "produce results in the general interest of humanity," was a fundamental requirement for his support, and that OpenAI "aligns very nicely with our long-held values" and its "endeavor to do purposeful work." Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook, which own vast stores of proprietary data. Altman has stated that Y Combinator companies will share their data with OpenAI.
Strategy
Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines, or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and beneficial to humanity." Musk acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; nonetheless, the best defense is "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."
Musk and Altman's counterintuitive strategy of attempting to reduce the risk that AI will cause overall harm by giving AI to everyone is controversial among those concerned about the existential risk from artificial intelligence. Philosopher Nick Bostrom is skeptical of Musk's approach: "If you have a button that could do bad things to the world, you don't want to give it to everyone." During a 2016 conversation about the technological singularity, Altman said that "we don't plan to release all of our source code," and mentioned plans to "allow wide swaths of the world to elect representatives to a new governance board." Greg Brockman stated, "Our goal right now is to do the best thing there is to do. It's a little vague."
Conversely, OpenAI's initial decision not to release GPT-2 because of a desire to "exercise caution" in the face of potential abuse has been criticized by openness advocates. Delip Rao, a text generation expert, stated, "I don't think [OpenAI] spent enough time proving that [GPT-2] is actually dangerous." Other critics argued that open publication was necessary to replicate the research and develop countermeasures.
More recently, in 2022, OpenAI published its approach to the alignment problem. The company expects that aligning AGI with human values will be harder than aligning current AI systems: "Unaligned AGI could pose substantial risks to humanity, and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together." It is exploring how best to use human feedback to train AI systems, and how to use AI to incrementally automate alignment research.
Products and applications
As of 2021, OpenAI's research focused on reinforcement learning (RL). OpenAI is regarded as an important competitor to DeepMind.
Gym
The goal of Gym, announced in 2016, is to provide an easily implemented general-intelligence benchmark across a wide variety of environments, similar to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research. It hopes to standardize the way environments are defined in AI research publications so that published research becomes more easily reproducible. The project claims to provide the user with a simple interface. As of June 2017, Gym could only be used with Python. As of September 2017, Gym's documentation site was not maintained, with active work happening instead on its GitHub page.
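The interface Gym standardized is small: an environment exposes a reset method returning an initial observation, and a step method that takes an action and returns an (observation, reward, done, info) tuple. As a rough sketch of that convention, here is a hypothetical toy environment (illustrative only, not a real Gym environment or the Gym library itself):

```python
class ToyCorridor:
    """Toy environment following Gym's classic reset/step convention:
    the agent starts at position 0 and must reach position `goal`."""

    def __init__(self, goal=5):
        self.goal = goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos          # initial observation

    def step(self, action):
        # action: 0 = move left, 1 = move right
        self.pos += 1 if action == 1 else -1
        done = self.pos >= self.goal
        reward = 1.0 if done else 0.0
        # Gym's classic API returns (observation, reward, done, info)
        return self.pos, reward, done, {}

# A minimal agent-environment loop, mirroring the usual Gym pattern.
env = ToyCorridor()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    obs, reward, done, info = env.step(1)  # trivial policy: always move right
    total_reward += reward
```

Because every environment presents this same loop, an RL algorithm written against the interface can be benchmarked across many environments unchanged, which is the standardization Gym was aiming for.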
RoboSumo
Released in 2017, RoboSumo is a virtual world in which humanoid meta-learning robot agents initially lack knowledge of how to walk, and are tasked with learning to move and to push an opposing agent out of the ring. Through this adversarial learning process, the agents learn to adapt to changing conditions; when an agent is taken out of this virtual environment and placed in a new virtual environment with strong winds, it manages to stay upright, indicating it has learned to balance in a generalized way. Igor Mordatch of OpenAI argues that competition between agents can create an "arms race" that enhances an agent's ability to function even outside the context of the competition.
OpenAI Five
OpenAI Five is the name of a team of five bots created by OpenAI that play in the five-on-five competitive video game Dota 2 and learn to play against human players at a high skill level entirely through trial-and-error algorithms. Before becoming a five-player team, the first public demonstration occurred at The International 2017, the game's annual premier championship tournament, where professional Ukrainian player Dendi lost to a bot in a live one-on-one match. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step toward creating software that can handle complex tasks like a surgeon. The system uses a form of reinforcement learning: the bots learn over time by playing against themselves hundreds of times a day for months, and are rewarded for actions such as killing an enemy and taking map objectives.
By June 2018, the bots' capabilities had expanded to a full five-man team, and they were able to beat teams of amateur and semi-professional players. At The International 2018, OpenAI Five played two exhibition matches against professional players, but ended up losing both games. In April 2019, OpenAI Five defeated OG, the reigning world champions of the game at the time, 2-0 in an exhibition match in San Francisco. The bots' last public appearance was in the same month: they played 42,729 games during a four-day open online competition, winning 99.4% of them.
Gym Retro
Released in 2018, Gym Retro is an RL research platform for video games. Gym Retro is used to study RL algorithms and generalization. Prior RL research focused mainly on optimizing agents to solve single tasks. Gym Retro enables generalization between games with similar concepts but different appearances.
The debate game
In 2018, OpenAI launched the Debate game, in which machines learn to debate toy problems in front of a human judge. The goal is to investigate whether this approach can help audit AI decisions and develop explainable AI.
Dactyl
Developed in 2018, Dactyl uses machine learning to train a Shadow Hand, a humanoid robot hand, to manipulate physical objects. It learns entirely in simulation, using the same RL algorithms and training code as OpenAI Five. OpenAI tackles the object-orientation problem with domain randomization, a simulation approach that exposes the learner to a variety of experiences rather than trying to match the simulation to reality. Dactyl's setup, in addition to motion-tracking cameras, includes RGB cameras that allow the robot to manipulate an arbitrary object by seeing it. In 2018, OpenAI showed that the system could manipulate a cube and an octagonal prism.
In 2019, OpenAI demonstrated that Dactyl can solve a Rubik's cube. The robot was able to solve the puzzle 60% of the time. Objects like the Rubik's cube have complex physics that are harder to model. OpenAI solved this problem by making Dactyl more resilient to perturbations. To do so, they used Automatic Domain Randomization (ADR), an approach to modeling that infinitely generates progressively more complex conditions. ADR differs from manual domain randomization in that a human does not need to specify randomization ranges.
API
In June 2020, OpenAI announced a multi-purpose API which it said was "for accessing new AI models developed by OpenAI," allowing developers to call on it for "any English language AI task."
Generative models
OpenAI's original GPT model ("GPT-1")
The original paper on transformer-based generative pre-training of a language model was written by Alec Radford and colleagues and published as a preprint on the OpenAI website on June 11, 2018. It showed how a generative language model can acquire knowledge about the world and process long-range dependencies by pre-training on a diverse corpus with long stretches of continuous text.
GPT-2
Generative Pre-trained Transformer 2 ("GPT-2") is an unsupervised transformer language model and the successor to OpenAI's original GPT model ("GPT-1"). GPT-2 was first announced in February 2019, with only limited demonstration versions initially released to the public. The full version of GPT-2 was not immediately released out of concern over potential misuse, including for writing fake news. Some experts expressed skepticism that GPT-2 posed a significant threat.
In response to GPT-2, the Allen Institute for Artificial Intelligence proposed a tool to detect "neural fake news." Other researchers, such as Jeremy Howard, warned that "the technology could completely fill Twitter, email, and the Internet with intelligent-sounding, context-appropriate prose that drowns out all other speech and is impossible to filter out." In November 2019, OpenAI released the full version of the GPT-2 language model. Several websites host interactive demonstrations of various instances of GPT-2 and other transformer models.
The authors of GPT-2 argue that unsupervised language models are general-purpose learners, as shown by GPT-2 achieving state-of-the-art accuracy and perplexity on 7 of 8 zero-shot tasks (i.e., the model was not further trained on task-specific input-output examples).
The corpus it was trained on, called WebText, contains slightly more than 8 million documents totaling 40 GB of text from URLs shared in Reddit submissions that received at least 3 upvotes. Using byte pair encoding avoids certain issues that arise from encoding the vocabulary with word tokens: it allows any string of characters to be represented by encoding both individual characters and multiple-character tokens.
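The core of byte pair encoding is a merge loop: count the most frequent adjacent symbol pair in the corpus and replace it with a single merged symbol, repeating until the vocabulary reaches the desired size. A simplified, character-level sketch of that loop (illustrative only; GPT-2's actual tokenizer operates on bytes and has additional machinery):

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across a corpus of tokenized words
    (a dict mapping tuples of symbols to their frequencies)."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with one merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])  # fuse the pair
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Toy corpus: words as tuples of single characters, with frequencies.
words = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6}
for _ in range(3):  # perform three merge steps
    words = merge_pair(words, most_frequent_pair(words))
```

After a few merges, frequent character sequences such as "we" and "wer" become single tokens, while rare words remain representable as sequences of shorter symbols, which is how any string stays encodable.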
GPT-3
First described in May 2020, Generative Pre-trained Transformer 3 (GPT-3) is an unsupervised transformer language model and the successor to GPT-2. OpenAI stated that the full version of GPT-3 contains 175 billion parameters, two orders of magnitude more than the 1.5 billion parameters of the full GPT-2 (although GPT-3 models with as few as 125 million parameters were also trained).
OpenAI has stated that GPT-3 is successful in some "meta-learning" tasks. It can generalize the purpose of a single input-output pair. The paper gives an example of learning translation and cross-linguistic transfer between English and Romanian, and between English and German.
GPT-3 significantly improved benchmark results over GPT-2. OpenAI cautioned that such scaling of language models could be approaching or encountering the fundamental capability limitations of predictive language models. Pre-training GPT-3 required several thousand petaflop/s-days of compute, compared to tens of petaflop/s-days for the full GPT-2 model. Like its predecessor, the fully trained GPT-3 model was not immediately released to the public on grounds of possible misuse, although OpenAI planned to allow access through a paid cloud API after a two-month free private beta that began in June 2020.
On September 23, 2020, GPT-3 was exclusively licensed to Microsoft.
Codex
Announced in mid-2021, Codex is a descendant of GPT-3 additionally trained on code from 54 million GitHub repositories, and is the AI powering the code-autocompletion tool GitHub Copilot. A private beta of the API was released in August 2021. According to OpenAI, the model can produce working code in over a dozen programming languages, most effectively in Python.
A number of issues have been raised, including glitches, design flaws, and security vulnerabilities.
GitHub Copilot has been accused of publishing copyrighted code without attribution or license.
OpenAI has announced that it will discontinue support for the Codex API effective March 23, 2023.
Whisper
Released in 2022, Whisper is a general purpose speech recognition model. It is trained on a large set of diverse audio data and is a multitasking model capable of multilingual speech recognition as well as speech translation and language identification.
GPT-4
On March 14, 2023, OpenAI announced the release of Generative Pre-trained Transformer 4 (GPT-4), capable of accepting both text and image inputs. OpenAI announced that the updated technology passed a simulated law school bar exam with a score in the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored in the bottom 10%. GPT-4 can also read, analyze, or generate up to 25,000 words of text, and write code in all major programming languages.
User interfaces
MuseNet and Jukebox (music)
MuseNet, released in 2019, is a deep neural network trained to predict subsequent musical notes in MIDI music files. It can generate songs with ten different instruments in fifteen different styles. According to The Verge, a song generated by MuseNet tends to start out reasonably but then descend into chaos the longer it plays. In pop culture, the tool was used as early as 2020 in the psychological internet thriller Ben Drowned to create music for the titular character.
Released in 2020, Jukebox is an open-source vocal music generation algorithm. After training on 1.2 million samples, the system takes a genre, artist, and snippet of lyrics and produces sample songs. OpenAI said the songs "demonstrate local musical cohesion [and] follow traditional chord schemes," but acknowledged that they lack "familiar larger musical structures such as repetitive choruses" and that "there is a significant gap" between Jukebox and human-made music. The Verge stated, "It's technologically impressive, even if the results sound like mushy versions of songs that might sound familiar," while Business Insider noted, "Surprisingly, some of the resulting songs are memorable and sound believable."
Microscope
Released in 2020, Microscope is a collection of visualizations of every significant layer and neuron of eight different neural network models that are often studied for interpretability. Microscope was created to make it easy to analyze the features that form inside these neural networks. The models included are AlexNet, VGG-19, various versions of Inception, and various versions of CLIP ResNet.
DALL-E and CLIP (images)
DALL-E, introduced in 2021, is a Transformer model that creates images based on textual descriptions.
CLIP, also revealed in 2021, does the opposite: it creates a description for a given image. DALL-E uses a 12-billion-parameter version of GPT-3 to interpret natural-language inputs (such as "a green leather purse shaped like a pentagon" or "an isometric view of a sad capybara") and generate corresponding images. It can create images of realistic objects ("a stained-glass window with an image of a blue strawberry") as well as objects that do not exist in reality ("a cube with the texture of a porcupine"). As of March 2021, no API or code was available.
DALL-E 2
In April 2022, OpenAI announced DALL-E 2, an updated version of the model with more realistic results. In December 2022, OpenAI published on GitHub the software for Point-E, a new rudimentary system for converting a textual description into a 3D model.
ChatGPT
ChatGPT, launched in November 2022, is an artificial intelligence tool built on top of GPT-3 that provides a conversational interface allowing users to ask questions in natural language. The system responds with an answer within seconds. ChatGPT reached 1 million users 5 days after its launch.
ChatGPT Plus is a $20/month subscription service that allows users to access ChatGPT during peak hours, provides faster response times, a choice of GPT-3.5 or GPT-4 models, and gives users early access to new features.
In May 2023, OpenAI released the user interface for ChatGPT on the App Store. The app supports chat history synchronization and voice input (using Whisper, OpenAI's speech recognition model). The app is only available for iOS users, with plans to release an Android app later.