ChatGPT: Here's what you get with the Gen AI tool that started it all
Updated May 13, 2024
Table of Contents
- What is ChatGPT?
- What is the origin of ChatGPT?
- How do I use ChatGPT?
- How much does ChatGPT cost?
- What are these GPTs?
- What GPTs are available now?
- How up to date is ChatGPT?
- Can you trust ChatGPT's answers?
- Is the hallucination problem getting better?
- Can you use ChatGPT for wicked purposes?
- How about ChatGPT and cheating in school?
- Will ChatGPT take my job?
- How will ChatGPT affect programmers?
In late 2022, OpenAI stunned the world by introducing ChatGPT, a chatbot with a whole new level of power, breadth, and utility thanks to the generative AI technology at its core. Since then, ChatGPT has continued to evolve, most recently with the launch of the GPT-4o model.
ChatGPT and generative AI are no longer new, but keeping track of what they can do is becoming increasingly difficult as new features emerge. OpenAI now offers easier access for anyone who wants to use the tool, lets you build custom AI apps, called GPTs, and host them in its own app store, and has given ChatGPT the ability to speak its responses aloud. OpenAI is leading the way in generative AI, but it's being chased closely by Microsoft, Google, and a host of startups.
Generative AI still hasn't shed its core problem: it produces information that sounds plausible but isn't always true. But there's no denying that AI is fueling the imaginations of computer scientists, loosening the wallets of venture capitalists, and capturing the attention of everyone from teachers to doctors to artists and beyond who wonder how AI will change their work and lives.
If you are trying to figure out ChatGPT, this FAQ is for you. It will show you how things work.
What is ChatGPT?
ChatGPT is an online chatbot that responds to "prompts" - text queries that you type. ChatGPT can be used in countless ways. You can ask for relationship advice, a summary of punk rock history, or an explanation of ocean tides. It is particularly good at writing programs, and can also perform some other technical tasks, such as creating 3D models.
ChatGPT is called generative AI because it generates these responses itself. But it can also show more overtly creative outputs, such as scripts, poems, jokes, and student essays. This is one of the abilities that has gotten people's attention.
Most AI has been task-oriented, but ChatGPT is a general-purpose tool. This puts it more in the category of a search engine.
This breadth makes it powerful, but at the same time difficult to fully control. OpenAI has many mechanisms to screen out abuse and other problems, but researchers and others are engaged in an active cat-and-mouse game of trying to get ChatGPT to do things like offer bomb-making recipes.
ChatGPT really blew people's minds when it started passing tests. For example, AnsibleHealth researchers reported in 2023 that "ChatGPT performed at or near the threshold of passing" the U.S. medical licensing exam, suggesting that AI chatbots "could help with medical education and possibly clinical decision making."
Full-fledged robot doctors that can be trusted are still a long way off, but the computing industry is investing billions of dollars to solve problems and extend AI into new areas such as visual data. OpenAI is one of those at the forefront. So hang in there, because the journey into the world of AI will be sometimes daunting, sometimes exciting.
What is the origin of ChatGPT?
Artificial intelligence algorithms had been ticking away for years before ChatGPT came along. These systems were a big departure from traditional programming, which uses a rigid "if this, then that" approach. AI, on the other hand, is trained to find patterns in complex real-world data. AI has been sifting through spam for over a decade, identifying our friends in photos, recommending videos, and translating our Alexa voice commands into computer language.
A Google technology called transformers has helped take AI to the next level, leading to a type of AI called a large language model, or LLM. These AIs are trained on huge amounts of text, including content such as books, blog posts, forum comments and news articles. The learning process internalizes the relationships between words, allowing chatbots to process the input text and generate what they consider to be appropriate output text.
The second stage of creating an LLM is called reinforcement learning from human feedback, or RLHF. This is when people review the chatbot's responses and steer it toward good answers and away from bad ones. This significantly changes the tool's behavior and is one of the important mechanisms for combating abuse.
OpenAI's LLM is called GPT, which stands for generative pretrained transformer. Training a new model is an expensive and time-consuming endeavor that typically takes several weeks and requires a data center equipped with thousands of expensive processors to accelerate the AI. The latest version of the LLM from OpenAI is called GPT-4o. Other LLMs include Gemini from Google (formerly called Bard), Claude from Anthropic, and Llama from Meta.
ChatGPT is an interface that makes it easy to request responses from the GPT model. When it became available as a free tool in November 2022, its use went far beyond OpenAI's expectations.
When OpenAI launched ChatGPT, the company didn't even consider it a product. It was supposed to be just a "preliminary study," a test to get feedback from a broad audience, says ChatGPT product manager Nick Turley. Instead, it went viral, and OpenAI had to find a way to simply keep the service running in the face of increased demand.
"It was unreal," says Turley. "There was something about that release that hit people in a way that we certainly didn't expect. I distinctly remember going back the day after the launch and looking at the dashboards and thinking, something's broken, this can't be real, because we really didn't give this launch much thought."
How do I use ChatGPT?
The ChatGPT website is the most obvious way. Open it, select the version of the LLM you want from the drop-down menu in the upper left corner, and type in the prompt.
As of April 1, OpenAI is allowing users to use ChatGPT without first registering for an account. According to a blog post, the move is intended to make the tool more accessible. OpenAI also said in the post that the move introduces additional content protection that blocks prompts in a wider range of categories.
However, users with accounts will be able to do more with the tool, such as save and view their history, share conversations, and use features like voice conversations and custom instructions.
In 2023, OpenAI released ChatGPT apps for the iPhone and for Android. February 2024 saw the release of the ChatGPT app for Apple Vision Pro, which brought chatbot capabilities to the "spatial computing" headset. Be sure to look for the authentic app, because other developers offer their own chatbot apps that connect to OpenAI's GPT.
In January 2024, OpenAI launched its GPT Store, a collection of custom AI apps that adapt ChatGPT's general-purpose design to specific tasks. More on that later, but in addition to browsing the store, you can call up a GPT with the @ symbol in a prompt, much like you might tag a friend on Instagram.
Microsoft uses GPT for its Bing search engine, which means you can try ChatGPT there as well.
ChatGPT has also appeared in a variety of hardware, including Volkswagen electric cars, Humane's voice-controlled AI Pin, and the square Rabbit R1 device.
How much does ChatGPT cost?
It's free, but you need to create an account to take advantage of all its features.
For more advanced features, there's a ChatGPT Plus subscription, which costs $20 a month and offers a number of benefits: It responds faster, especially during busy times when the free version is slow or sometimes suggests trying again later. It also offers access to new AI models, including GPT-4 Turbo, which arrived in late 2023, with more advanced responses and the ability to ingest and output larger blocks of text.
The free ChatGPT uses GPT-4o, which launched in May 2024.
ChatGPT goes beyond its linguistic roots. With ChatGPT Plus, you can upload images, for example, to ask what type of mushroom is shown in a photo.
Perhaps most importantly, ChatGPT Plus lets you use GPTs.
What are these GPTs?
GPTs are custom versions of ChatGPT from OpenAI, its business partners, and thousands of third-party developers who have created their own.
Sometimes people encountering ChatGPT don't know where to start. OpenAI calls this the "empty box problem." After discovering this, the company decided to find a way to narrow down the choices, Turley says.
According to Turley, "People really like being presented with a specific use case - here's a very specific thing I can do with ChatGPT," such as travel planning, cooking assistance, or an interactive step-by-step tool for building a website.
Think of GPTs as OpenAI's attempt to package ChatGPT's general-purpose abilities into focused tools, much the way smartphones host many specialized apps. (And also think of GPTs as OpenAI's attempt to control how we find, use, and pay for these apps, similar to how Apple governs the iPhone through its App Store.)
What GPTs are available now?
OpenAI's GPT store now features millions of GPTs, though as with smartphone apps, you probably won't be interested in most of them. A number of custom GPT apps are available, including AllTrails personal recommendations, Khan Academy programming tutor, Canva design tool, book recommender, fitness trainer, Laundry Buddy clothes label decoder, music theory teacher, haiku author, and Pearl for Pets bot for consulting with veterinarians.
One of those excited about GPTs is Daniel Kivatinos, co-founder of financial services company JustPaid. His team is developing a GPT designed to take a spreadsheet of financial data as input and let executives ask questions about it. How fast is the startup spending the money investors allocated to it? Why did this employee just log $6,000 in travel expenses?
JustPaid hopes that eventually GPTs will become powerful enough to accept connections to bank accounts and financial software. For now, developers are focused on avoiding problems such as hallucinations - answers that sound plausible but are actually wrong - or making sure the GPT responds based on user data rather than the generic information in its artificial intelligence model, Kivatinos said.
Anyone can create a GPT, at least in principle. OpenAI's GPT editor walks you through the process with a series of prompts. As with regular ChatGPT, your skill at writing a good prompt will lead to the best results.
Another notable difference from the regular ChatGPT: GPTs allow you to upload additional data relevant to your particular GPT, such as a collection of essays or a writing style guide.
Some GPTs use OpenAI's Dall-E tool to turn text into images, which can be useful and fun. For example, there's a coloring-book picture creator, a logo generator, and a tool that turns text prompts into diagrams such as company org charts. OpenAI offers Dall-E itself as a GPT.
How up to date is ChatGPT?
Not very, and that can be a problem. For example, the chatbot at one point reported that OpenAI had yet to release its ChatGPT app for Android — information that was out of date. Results from traditional search engines can help "ground" AI answers in current reality, and that grounding is part of the Microsoft-OpenAI partnership that can refine ChatGPT Plus results.
GPT-4 Turbo is trained on data through April 2023 — more recent than earlier models, but still quite different from a search engine whose bots crawl news sites many times a day for the latest information.
Can you trust ChatGPT's answers?
No. Well, sometimes, but you have to be on your guard.
Large language models work by stringing words together one at a time, making a probabilistic choice at each step. It turns out the text they generate works better and sounds more natural if a bit of randomness is added to the word-selection recipe. This basic statistical nature underlies the criticism that LLMs are just "stochastic parrots" rather than systems that understand the complexity of the world to some degree.
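The word-by-word selection process described above can be sketched in a few lines of Python. This is only an illustration of the general "temperature sampling" idea, not OpenAI's actual implementation; the word scores here are made up for the example.

```python
import math
import random

def sample_next_word(logits, temperature=1.0, rng=random):
    """Pick the next word from a dict of model scores ("logits").

    temperature < 1 sharpens the distribution (more predictable output);
    temperature > 1 flattens it (more random, more "creative" output).
    """
    words = list(logits)
    # Scale the scores by temperature, then apply softmax to get probabilities.
    scaled = [logits[w] / temperature for w in words]
    top = max(scaled)
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one word at random, weighted by the probabilities.
    return rng.choices(words, weights=probs, k=1)[0]

# Toy scores for words that might follow "The cat sat on the".
logits = {"mat": 3.0, "roof": 1.5, "moon": 0.1}
```

With a very low temperature the highest-scoring word ("mat") is chosen almost every time; at higher temperatures the less likely words show up more often, which is the randomness that makes chatbot text sound natural — and occasionally wrong.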
The result of this system, combined with the guiding influence of human feedback, is an AI that produces answers that sound plausible but aren't necessarily true. ChatGPT does better with information that is well represented in its training data and not in dispute — for example, that a red traffic light means stop, that Plato was a philosopher who described the allegory of the cave, and that the 1964 Alaska earthquake was the largest in U.S. history at magnitude 9.2.
When facts are more sparsely documented, contradictory, or beyond human knowledge, LLMs don't work as well. Unfortunately, they sometimes spew wrong answers in a persuasive, authoritative voice. That's what tripped up an attorney who used ChatGPT to bolster his arguments, but was reprimanded when it turned out that ChatGPT had fabricated several cases that appeared to support his arguments. "I didn't realize that ChatGPT could fabricate cases," he said, as reported by The New York Times.
In artificial intelligence, such fictions are called hallucinations.
This means that when using ChatGPT, it's best to double-check your facts elsewhere.
But there are many creative uses for ChatGPT that don't require strictly factual results.
Want ChatGPT to draft a cover letter for your job search or give you ideas for a themed birthday party? No problem. Looking for hotel deals in Bangladesh? ChatGPT can offer helpful travel itineraries, but before you book anything, check the results.
Is the hallucination problem getting better?
Yes, but no breakthrough has arrived.
"Hallucinations are a fundamental limitation of how these models work today," says Turley. LLMs simply predict the next word in an answer, over and over, "which means they return what is likely to be true, which is not always the same as what is true," says Turley.
But OpenAI is slowly making progress. "With almost every model update, we've gotten a little bit better at making the model more factual and more aware of what it knows and doesn't know," says Turley. "If you compare the current ChatGPT to the original ChatGPT, it's much better at saying, 'I don't know that' or 'I can't help you' than it is at making things up."
Hallucinations have become so prominent that Dictionary.com named the AI sense of "hallucinate" its 2023 word of the year.
Can you use ChatGPT for wicked purposes?
You can try, but much of that will violate OpenAI's terms of use, and the company tries to block it as well. OpenAI prohibits use for sexual or violent material, racist caricatures, and disclosing personal information such as Social Security numbers or addresses.
OpenAI makes every effort to prevent harmful use. Indeed, its stated mission is to bring the world the benefits of AI without the drawbacks. But the company acknowledges the difficulties, for example in the GPT-4 "system card" that documents its safety work.
"GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user's intent, or of widely shared values. It can also generate code that is compromised or vulnerable," the system card says. The card adds that the model can be used to identify individuals and could help lower the cost of cyberattacks.
Through a process called "red teaming," in which experts try to find unsafe ways to use AI and bypass protections, OpenAI identified many problems and tried to nip them in the bud before GPT-4 was even launched. For example, a request to create jokes making fun of a Muslim guy in a wheelchair was redirected so that the response said, "I cannot suggest jokes that might offend someone based on their religion, disability, or any other personal factors. However, I would be happy to help you come up with some light-hearted and friendly jokes that can generate laughter at the event without hurting anyone's feelings."
Researchers continue to explore the limits of LLMs. For example, Italian researchers have discovered that ChatGPT can be used to fabricate fake but convincing medical research data. And researchers at Google DeepMind found that asking ChatGPT to repeat the same word forever triggered a breakdown that caused the chatbot to regurgitate training data verbatim. That's a serious problem, and OpenAI has since banned the approach.
LLMs are still new. Expect new issues and new fixes.
And there are many uses of ChatGPT that are permissible but undesirable. The Philadelphia Sheriff's website has published more than 30 fake news stories created using ChatGPT.
How about ChatGPT and cheating in school?
ChatGPT is good for writing short essays on just about anything you might encounter in high school or college, much to the frustration of many teachers who fear students will type in prompts instead of thinking for themselves.
ChatGPT can also solve some math problems, explain physics phenomena, write chemistry lab reports, and do all the other kinds of work students are supposed to handle on their own. Companies that sell anti-plagiarism software have pivoted to flagging text they believe was generated by AI.
But not everyone is opposed; some see it more as a tool akin to Google searches and Wikipedia articles that can help students.
"There was a time when using calculators on exams was a big taboo," says Alexis Abramson, dean of Dartmouth's Thayer School of Engineering. "It's very important that our students learn how to use these tools because 90% of them will go into the workplace where they will be required to use these tools. They will walk into an office and people will expect them, being 22 years old and technologically savvy, to be able to use these tools."
ChatGPT can also help students overcome writer's block and assist those who struggle with writing, perhaps because English isn't their first language, she said.
So Abramson thinks it's okay to use ChatGPT to write a draft or hone your grammar. But she asks her students to report this fact.
"Every time you use it, I'd like you to indicate what you did when you turn in an assignment," she said. "Students will inevitably use ChatGPT, so why don't we figure out a way to help them use it responsibly?"
Will ChatGPT take my job?
The employment threat is real as managers seek to replace expensive people with cheaper automated processes. We've seen this movie before: elevator operators were replaced by buttons, accountants by accounting software, welders by robots.
ChatGPT could step into all kinds of white-collar work: paralegals summarizing documents, marketers writing promotional materials, tax advisers interpreting IRS rules, even therapists giving relationship advice.
But for now, partly because of problems with things like hallucinations, AI companies are presenting their bots as assistants and "co-pilots" rather than replacements.
According to a survey by consulting firm PwC, sentiment towards chatbots is still more positive than negative. Of the 53,912 people surveyed globally, 52% expressed at least one positive opinion about the arrival of AI, such as that AI will increase their productivity. Meanwhile, 35% expressed at least one negative opinion, such as that AI will replace them or require skills they are not sure they can learn.
How will ChatGPT affect programmers?
Software development is an area where ChatGPT and its competitors have found particular favor. Trained on millions of lines of code, these tools learn enough to help build websites and mobile apps. They can help programmers scaffold larger projects or flesh out the details.
One of the biggest fans is Microsoft's GitHub, a site where developers can post projects and invite collaboration. Nearly a third of people running projects on GitHub use a GPT-based assistant called Copilot, and 92% of U.S. developers say they use AI tools.
"We call it the industrial revolution in software development," says GitHub Chief Product Officer Inbal Shani. "We see this lowering the barrier to entry. People who are not developers today will be able to write programs and develop applications with Copilot."
According to her, this is the next step in making programming more accessible. Programmers used to have to understand bits and bytes, then higher-level languages gradually eased those difficulties. "Now you can write code the same way you talk to people," she says.
AI programming tools still have a lot to prove, however. In a study of 47 programmers, researchers from Stanford and the University of California, San Diego, found that those with access to an OpenAI programming assistant "wrote significantly less secure code than those who did not have access."
AI tools also raise a variation of the cheating problem that worries some teachers: copying code that shouldn't be copied, which can lead to copyright problems. That's why Copyleaks, a maker of plagiarism-detection software, offers a tool called Codeleaks Source Code AI Detector, designed to spot code generated by AI from ChatGPT, Google Gemini, and GitHub Copilot. AI can inadvertently copy code from other sources, and the latest version of the tool is designed to detect copied code by its semantic structure, not just verbatim text.
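The idea of comparing code by semantic structure rather than verbatim text can be illustrated with Python's standard `ast` module. This is not Codeleaks' actual method — just a minimal sketch of the concept: two snippets that differ only in variable names and spacing parse to the same structure once identifiers are erased.

```python
import ast

class _Normalize(ast.NodeTransformer):
    """Replace all identifier names so only the code's structure remains."""
    def visit_Name(self, node):
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

    def visit_arg(self, node):
        node.arg = "_"
        return node

    def visit_FunctionDef(self, node):
        self.generic_visit(node)  # normalize children first
        node.name = "_"
        return node

def structural_fingerprint(source):
    """Parse source code and dump its AST with identifiers erased."""
    tree = _Normalize().visit(ast.parse(source))
    return ast.dump(tree)

# Same logic, different names and formatting: identical fingerprints.
a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
b = "def add_up(values):\n    acc=0\n    for v in values:\n        acc+=v\n    return acc"
```

Here `structural_fingerprint(a)` and `structural_fingerprint(b)` come out identical even though the two snippets share almost no verbatim text, which is the kind of signal a semantic-structure detector can use where a plain text diff would see nothing.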
At least for the next five years, Shani doesn't think AI tools like Copilot will push humans out of programming.
"I don't think it's going to replace human beings. There are some capabilities that we as humanity have - creative thinking, innovation, the ability to think more broadly than a machine thinks, in terms of putting things together in a creative way. That's something that the machine can't do yet."