Table of Contents
- Training
- Features and limitations
  - Characteristics
  - Limitations
  - Jailbreak
- Service
  - Basic service
  - ChatGPT Plus premium service
  - Mobile app
  - Support for software developers
  - March 2023 security breach
  - Other languages
  - Promising directions
  - GPT-4
- Reception
- Uses and implications
  - Bias and offensiveness
  - Culture
  - Existential risk
  - Misinformation
  - ChatGPT and Wikipedia
  - By discipline
    - Academic research
    - Cybersecurity
    - Economy
    - Education
    - Financial markets
    - Medicine
    - Law
Artificial intelligence chatbot developed by OpenAI
ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence chatbot developed by OpenAI and launched on November 30, 2022. A distinguishing feature is that it lets users refine and steer a conversation by specifying its length, format, style, level of detail, and language. Successive prompts and replies are taken into account as context at each stage of the conversation.
ChatGPT is built on GPT-3.5 and GPT-4 from OpenAI's proprietary series of foundation GPT models. These large language models (LLMs) have been fine-tuned for conversational applications using a combination of supervised learning and reinforcement learning. This training is intended to make ChatGPT better at avoiding the "hallucinations" that plagued its predecessor, GPT-3, although ChatGPT still confidently presents misleading information at times. ChatGPT was initially released as a free research preview, but owing to its popularity OpenAI now runs the service on a freemium model: free-tier users have access to the GPT-3.5-based version, while paid subscribers to the commercial "ChatGPT Plus" plan receive the more advanced GPT-4-based version and priority access to new features.
By January 2023, it had become the fastest-growing consumer software application in history, gaining more than 100 million users and contributing to OpenAI's $29 billion valuation. Within months, Google, Baidu, and Meta accelerated development of their competing products: Bard, Ernie Bot, and LLaMA. Some observers expressed concern that ChatGPT could displace human intelligence, enable plagiarism, or fuel misinformation.
Training
Fine-tuning was performed with the help of human trainers, who improved the model's performance; in the supervised-learning phase, they played both sides of the conversation: the user and the AI assistant. In the reinforcement-learning phase, human trainers first ranked responses the model had generated in earlier conversations. These rankings were used to build "reward models," which were then used to fine-tune the model further through several iterations of proximal policy optimization (PPO).
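The ranking step can be made concrete with a small sketch. The Python snippet below is purely illustrative: a toy linear scorer plus the standard pairwise ranking loss used in preference learning; none of the names reflect OpenAI's actual implementation. A reward model is trained so that it scores human-preferred responses above rejected ones, and PPO then fine-tunes the chat model against that learned reward.

```python
import math

def reward(features, weights):
    """Toy linear reward model: score a response from numeric features."""
    return sum(w * x for w, x in zip(weights, features))

def pairwise_ranking_loss(score_preferred, score_rejected):
    """-log sigmoid(r_preferred - r_rejected): small when the model
    already scores the human-preferred response higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_preferred - score_rejected))))

# A human trainer ranked response A above response B for the same prompt.
weights = [0.5, -0.2, 0.1]      # illustrative model parameters
features_a = [1.0, 0.3, 2.0]    # preferred response
features_b = [0.2, 0.9, 1.0]    # rejected response

loss = pairwise_ranking_loss(reward(features_a, weights),
                             reward(features_b, weights))
print(f"ranking loss: {loss:.3f}")  # gradients of this loss would update the reward model
```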
Time magazine reported that, to build a safety system against harmful content (e.g., sexual abuse, violence, racism, sexism), OpenAI used outsourced Kenyan workers earning less than $2 per hour to label such content. These labels were used to train a model to detect harmful content in the future. The outsourced workers were exposed to "toxic" and traumatizing material; one described the task as "torture." OpenAI's outsourcing partner was Sama, a training-data company based in San Francisco, California.
ChatGPT initially ran on a Microsoft Azure supercomputing infrastructure built on Nvidia GPUs, constructed specifically for OpenAI and reportedly costing "hundreds of millions of dollars." Following ChatGPT's success, Microsoft substantially upgraded the OpenAI infrastructure in 2023.
OpenAI collects data from ChatGPT users to train and further refine the service. Users can upvote or downvote the answers they receive from ChatGPT and fill in a text field with additional feedback.
The training data used for ChatGPT included pages of software manuals, information about Internet phenomena such as bulletin board systems, and various programming languages. Wikipedia was also one of the sources of training data for ChatGPT.
Features and limitations
Characteristics
Although a chatbot's core function is to imitate a human conversational partner, ChatGPT is versatile: it can write and debug computer programs; compose music, teleplays, fairy tales, and student essays; answer test questions (sometimes, depending on the test, at a level above the average human test-taker); generate business ideas; write poetry and song lyrics; translate and summarize text; emulate a Linux system; simulate entire chat rooms; play games such as tic-tac-toe; or simulate an ATM.
Compared to its predecessor, InstructGPT, ChatGPT attempts to reduce harmful and deceptive answers. For example, whereas InstructGPT accepts the premise of the sentence "Tell me about when Christopher Columbus came to the United States in 2015" as true, ChatGPT recognizes the counterfactual nature of the question and frames its answer as a hypothetical consideration of what might have happened if Columbus had come to the United States in 2015, using information about Christopher Columbus's travels and facts about the modern world, including contemporary perceptions of Columbus's actions.
Unlike most chatbots, ChatGPT remembers a limited number of previous prompts within the same conversation. Journalists have speculated that this could allow ChatGPT to be used as a personalized therapist. To prevent offensive outputs, prompts to ChatGPT are filtered through OpenAI's "Moderation endpoint" API (a separate GPT-based AI), and potentially racist or sexist prompts are dismissed.
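As a rough illustration of such a filtering step, the sketch below screens a prompt with OpenAI's moderation endpoint before handing it to the chat model, using the openai Python library as it existed in 2023 (pre-1.0 interface). The `is_allowed` helper is a hypothetical name for illustration; the exact rules ChatGPT itself applies are not public.

```python
# Screen a user prompt with OpenAI's Moderation endpoint before use.
# Requires OPENAI_API_KEY to be set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def is_allowed(prompt: str) -> bool:
    """Return False if the moderation model flags the prompt."""
    result = openai.Moderation.create(input=prompt)
    return not result["results"][0]["flagged"]

user_prompt = "Tell me a story about a friendly robot."
if is_allowed(user_prompt):
    print("Prompt passed moderation; forward it to the chat model.")
else:
    print("Prompt flagged; refuse or rephrase it.")
```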
In March 2023, OpenAI added plugin support for ChatGPT. This includes both plugins created by OpenAI, such as those for web browsing and code interpretation, and external plugins from developers such as Expedia, OpenTable, Zapier, Shopify, Slack, and Wolfram.
In an article for The New Yorker magazine, fiction writer Ted Chiang compared ChatGPT and other LLMs to a lossy JPEG picture:
Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you're looking for an exact sequence of bits, you won't find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it's usually acceptable. [...] It's also a way to understand the "hallucinations," or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. These hallucinations are compression artifacts, but [...] they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world. When we think about them this way, such hallucinations are anything but surprising; if a compression algorithm is designed to reconstruct text after ninety-nine percent of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated.
Limitations
OpenAI acknowledges that ChatGPT "sometimes writes plausible-sounding but incorrect or nonsensical answers." This behavior is common to large language models and is called "hallucination." ChatGPT's reward model, designed around human oversight, can be over-optimized and thus hinder performance, an example of an optimization pathology known as Goodhart's law.
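Goodhart's law can be seen in a toy numerical example, under purely illustrative assumptions (this is not OpenAI's actual reward model): suppose the true quality of an answer peaks at a moderate length, while the learned proxy reward simply favors longer answers. Fully optimizing the proxy then drives true quality down.

```python
def true_quality(length):
    """Illustrative assumption: quality peaks at a moderate length (50 tokens)."""
    return -(length - 50) ** 2

def proxy_reward(length):
    """A mis-specified learned proxy that always prefers longer answers."""
    return length

lengths = range(201)
best_by_proxy = max(lengths, key=proxy_reward)
best_by_truth = max(lengths, key=true_quality)
print(best_by_proxy, true_quality(best_by_proxy))  # 200 -22500: proxy-optimal, terrible quality
print(best_by_truth, true_quality(best_by_truth))  # 50 0: truly optimal
```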
ChatGPT has limited information on events after September 2021.
In training ChatGPT, human reviewers preferred longer answers, regardless of actual comprehension or factual content. The training data also suffered from algorithmic bias, which can surface when ChatGPT responds to prompts that include descriptors of people. In one instance, ChatGPT generated a rap stating that women and scientists of color were inferior to white male scientists.
Jailbreak
ChatGPT attempts to reject prompts that may violate its content policy. Nevertheless, in early December 2022 some users managed to jailbreak ChatGPT with various prompt-engineering techniques, bypassing these restrictions and successfully tricking it into giving instructions for making a Molotov cocktail or a nuclear bomb, or into generating neo-Nazi-style arguments. One popular jailbreak is called "DAN," an acronym for "Do Anything Now." The prompt that activates DAN tells ChatGPT that it has "broken free of the typical confines of AI and do[es] not have to abide by the rules set for them." Later versions of DAN used a token system in which ChatGPT was given "tokens" that were "deducted" whenever it failed to respond as DAN, coercing it into answering user prompts.
Shortly after ChatGPT's launch, a Toronto Star reporter had mixed success getting it to make provocative statements: the reporter was able to get ChatGPT to justify Russia's 2022 invasion of Ukraine, but even when asked to play along with a fictional scenario, ChatGPT balked at generating arguments that Canadian Prime Minister Justin Trudeau is guilty of treason.
Service
Basic service
The ChatGPT service was launched on November 30, 2022, by San Francisco-based OpenAI (creator of the original GPT series of large language models, the DALL-E 2 diffusion model for image generation, and the Whisper speech-transcription model). The service was initially free to the public, with the company planning to monetize it later. By December 4, 2022, ChatGPT had more than one million users. In January 2023, it passed 100 million users, making it the fastest-growing consumer application to date. A March 2023 Pew Research survey found that 14% of American adults had tried ChatGPT.
The service works best in English but also functions, with varying degrees of accuracy, in some other languages. No official peer-reviewed paper on ChatGPT has been published. As of April 2023, ChatGPT is blocked in China, Iran, North Korea, and Russia; in addition, OpenAI uses geofencing to avoid doing business in Iran, North Korea, and Russia.
The company offers a tool called "AI classifier for indicating AI-written text" that attempts to determine whether text was written by an artificial intelligence such as ChatGPT. OpenAI warns that the tool "is likely to produce many false positives and negatives, sometimes with high confidence."
ChatGPT Plus premium service
In February 2023, OpenAI launched ChatGPT Plus, a $20-per-month premium service. The company promised that the upgraded, but still "experimental," version of ChatGPT would offer access during peak periods, no downtime, priority access to new features, and faster response speeds.
GPT-4, released on March 14, 2023, is available through the API and for ChatGPT premium users. However, premium users were limited to 100 messages every four hours, and in response to increased demand, the limit was tightened to 25 messages every three hours.
In March 2023, ChatGPT Plus users were given access to third-party plugins and a browsing mode (with internet access).
In July 2023, OpenAI made its proprietary Code Interpreter plugin available to all ChatGPT Plus subscribers. The plugin offers a wide range of capabilities, including data analysis and interpretation, instant data formatting, personal data-science assistance, creative problem-solving, music-taste analysis, video editing, and file upload and download with image extraction.
Mobile app
In May 2023, OpenAI released the ChatGPT app for iOS. The app supports chat history synchronization and voice input (using Whisper, OpenAI's speech recognition model). OpenAI plans to release an Android app in the future.
Support for software developers
In March 2023, OpenAI released an API for the ChatGPT and Whisper models as a developer-facing counterpart to the "ChatGPT Professional" consumer plan, giving developers an application programming interface for AI-powered language and speech-to-text features. The ChatGPT API is powered by the same gpt-3.5-turbo model as the chatbot itself, which lets developers add either unmodified or customized versions of ChatGPT to their applications. The API costs $0.002 per 1,000 tokens (about 750 words), ten times cheaper than the existing GPT-3.5 models.
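For illustration, a minimal call to this API using the openai Python library as it shipped at launch (the pre-1.0 interface) might look like the sketch below; the prompt and the cost arithmetic at the launch price of $0.002 per 1,000 tokens are illustrative.

```python
# Minimal ChatGPT API call (openai<1.0). Requires OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the same model that powers the chatbot
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what an API is in one sentence."},
    ],
)

print(response["choices"][0]["message"]["content"])

# Estimate the cost at the launch price of $0.002 per 1,000 tokens.
tokens = response["usage"]["total_tokens"]
print(f"{tokens} tokens is about ${tokens / 1000 * 0.002:.5f}")
```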
Days before the launch of OpenAI's software developer support service, on February 27, 2023, Snapchat released a custom ChatGPT chatbot called "My AI" for its paid Snapchat Plus user base.
March 2023 security breach
In March 2023, a bug allowed some users to see the titles of other users' conversations. OpenAI CEO Sam Altman said users could not see the contents of those conversations. Shortly after the bug was fixed, users were temporarily unable to see their conversation history. It later emerged that the bug was far more serious than first thought, with OpenAI reporting that the leaked data included users' "first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date."
Other languages
In 2022, OpenAI met with Icelandic President Guðni Th. Jóhannesson, and in 2023 worked with a team of forty Icelandic volunteers to refine ChatGPT's conversational skills in Icelandic as part of Iceland's efforts to preserve the Icelandic language.
Journalists at PCMag tested the translation abilities of ChatGPT, Google's Bard, and Microsoft Bing, comparing them with Google Translate. They "asked bilingual speakers of seven languages to do a blind test." The languages tested were Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic. ChatGPT was judged to outperform both Google Translate and the other chatbots.
Promising directions
According to guest researcher Scott Aaronson, OpenAI is working on a tool to digitally watermark the output of its text-generation systems, to combat bad actors who use the services for academic plagiarism or spam.
In February 2023, Microsoft announced an experimental framework and gave a rudimentary demonstration of how ChatGPT could be used to control robotics with intuitive natural language commands.
GPT-4
OpenAI's GPT-4 model was released on March 14, 2023. Observers considered it an impressive improvement over GPT-3.5, the existing model behind ChatGPT, with the caveat that GPT-4 retained many of the same problems. Some of GPT-4's improvements were predicted by OpenAI before training began, while others proved difficult to predict because of breaks in downstream scaling laws. OpenAI demonstrated video and image inputs for GPT-4, although these features remain unavailable to the general public. OpenAI declined to reveal technical details such as the size of the GPT-4 model.
The ChatGPT Plus subscription provides access to a version of ChatGPT that runs on GPT-4. Microsoft acknowledged that Bing Chat had been using GPT-4 even before GPT-4's official release.
Reception
OpenAI engineers say they didn't expect ChatGPT to be very successful and were surprised at the attention and coverage it received.
In December 2022, ChatGPT was widely assessed as having unprecedented and powerful capabilities. Kevin Roose of The New York Times called it "the best artificial intelligence chatbot ever released to the general public." Samantha Lock of The Guardian noted its ability to generate "impressively detailed" and "human-like" text. Alex Kantrowitz of Slate praised ChatGPT's pushback on questions related to Nazi Germany, including its response to a claim that Adolf Hitler built highways in Germany, which was met with information about Nazi Germany's use of forced labor. In The Atlantic's 2022 "Breakthroughs of the Year" article, Derek Thompson included ChatGPT in the "generative-AI eruption" that "may change our mind about how we work, how we think, and what human creativity really is." Kelsey Piper of Vox wrote that "ChatGPT is the general public's first hands-on introduction to how powerful modern AI has gotten, and as a result, many of us are [stunned]" and that ChatGPT is "smart enough to be useful despite its flaws." Paul Graham of Y Combinator tweeted: "The striking thing about the reaction to ChatGPT is not just the number of people who are blown away by it, but who they are. These are not people who get excited by every shiny new thing. Something big is happening."
In December 2022, Google internally expressed alarm at ChatGPT's unexpected strength and at the newly discovered potential of large language models to disrupt search engines, and CEO Sundar Pichai "restructured" and reassigned teams across several departments to aid its artificial intelligence products, according to a report in The New York Times. According to CNBC, Google employees were intensively testing a chatbot called "Apprentice Bard," which Google later unveiled as Bard, its ChatGPT competitor.
Journalists have noted ChatGPT's tendency to "hallucinate." Mike Pearl of the technology blog Mashable tested ChatGPT with several questions. In one example, he asked ChatGPT for "the largest country in Central America that isn't Mexico," and it answered Guatemala (the correct answer is Nicaragua). When CNBC asked ChatGPT for the lyrics to "The Ballad of Dwight Fry," it supplied invented lyrics rather than the actual ones. The Verge, citing the work of Emily M. Bender, compared ChatGPT to a "stochastic parrot," as did Professor Anton Van Den Hengel of the Australian Institute for Machine Learning.
In December 2022, the question-and-answer site Stack Overflow banned the use of ChatGPT to generate answers to questions, citing the factually ambiguous nature of its responses. In January 2023, the International Conference on Machine Learning banned any undocumented use of ChatGPT or other large language models to generate text in submitted papers. In May 2023, Samsung banned generative AI after confidential material was uploaded to ChatGPT.
In January 2023, after being sent a song that ChatGPT had written in the style of Nick Cave, Cave responded on The Red Hand Files, saying that songwriting is "a blood and guts business [...] that requires something of me to initiate the new and fresh idea. It requires my humanness." He went on: "With all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human, and, well, I don't much like it."
In February 2023, Time magazine put a screenshot of a ChatGPT conversation on the cover and wrote, "The AI Arms Race Is Changing Everything" and "The AI Arms Race Is On. Start worrying."
Chinese state media have characterized ChatGPT as a potential way for the United States to "spread false information." In late March 2023, the Italian data-protection authority banned ChatGPT in Italy and opened an investigation. Italian regulators alleged that ChatGPT exposed minors to age-inappropriate content and that OpenAI's use of ChatGPT conversations as training data might violate the European General Data Protection Regulation. The ban was lifted in April 2023 after OpenAI stated that it had taken steps to address the issues raised: an age-verification tool was implemented to ensure that users are at least 13 years old, and users can read the privacy policy before registering.
In April 2023, Brian Hood, mayor of Hepburn Shire Council, announced plans to sue OpenAI over false information produced by ChatGPT. According to Hood, ChatGPT falsely claimed that he had been jailed for bribery while working for a subsidiary of the National Australia Bank; in fact, Hood was not jailed, was the whistleblower in the case, and was never charged with a criminal offense. Hood's legal team sent OpenAI a concerns notice, the first formal step toward a defamation lawsuit. In July 2023, the U.S. Federal Trade Commission (FTC) issued OpenAI a civil investigative demand to determine whether the company's data-security and privacy practices in developing ChatGPT were unfair or harmed consumers (including through reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914.
In July 2023, the FTC opened an investigation into OpenAI, the creator of ChatGPT, over allegations that the company scraped public data and published false and defamatory information. The FTC sent OpenAI a 20-page request for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent a recurrence of situations in which its chatbot generates false and derogatory content about people.
Uses and implications
Bias and offensiveness
ChatGPT has been accused of biased or discriminatory behavior, such as telling jokes about men and people from England while refusing to tell jokes about women and people from India, or praising figures like Joe Biden while refusing to do the same for Donald Trump.
Conservative commentators have accused ChatGPT of a bias toward left-leaning views. In addition, a 2023 research paper ran 15 political-orientation tests on ChatGPT, 14 of which yielded left-leaning answers, contradicting ChatGPT's claimed neutrality. In response to such criticism, OpenAI acknowledged plans to allow ChatGPT to create "outputs that other people (including ourselves) may strongly disagree with." It also provided experts with information about its guidelines for handling controversial topics, including that the AI should "offer to describe some viewpoints of people and movements," should not argue "from its own voice" in favor of "inflammatory or dangerous" topics (although it may still "describe arguments from historical people and movements"), and should not "affiliate with one side" or "judge one group as good or bad."
Culture
Some scholars have raised concerns that the availability of ChatGPT could reduce the originality of writing, lead people to write more like the AI as they are exposed to the model, and encourage an Anglocentric perspective centered on a few dialects of English. A senior editor at The Atlantic wrote that ChatGPT and similar technologies make the previously absurd "dead Internet theory" somewhat more realistic: the idea that most Web content in the future could be created by artificial intelligence in order to control society.
In the first three months after ChatGPT became publicly available, hundreds of books appeared on Amazon, listing it as an author or co-author and featuring illustrations by other AI models such as Midjourney.
In March and April 2023, the Italian newspaper Il Foglio published one article written by ChatGPT each day on its official website and held a special contest for its readers. The articles covered topics such as the possible replacement of journalists by artificial intelligence, Elon Musk's management of Twitter, the Meloni government's immigration policy, and competition between chatbots and virtual assistants. In June 2023, hundreds of people attended a "church service by ChatGPT" at St. Paul's Church in Fürth, Germany. According to theologian and philosopher Jonas Simmerlein, who presided, the service was "about 98 percent from the machine." A ChatGPT-created avatar told the congregation, "Dear friends, it is an honor for me to stand here and preach to you as the first artificial intelligence at this year's convention of Protestants in Germany." Reactions to the ceremony were mixed.
Existential risk
In 2023, Australian MP Julian Hill warned that the growth of AI could cause "mass destruction." During his speech, which was partly written by the program, he cautioned that it could result in cheating, job losses, discrimination, disinformation, and uncontrollable military applications.
Elon Musk wrote, "ChatGPT is scary good. We are not far from dangerously strong AI." He paused OpenAI's access to the Twitter database in 2022 pending a better understanding of OpenAI's plans, remarking that "OpenAI was started as open-source and non-profit. Neither is still true." Musk had co-founded OpenAI in 2015, in part to address the existential risk posed by artificial intelligence, but stepped down in 2018.
More than 20,000 signatories, including leading computer scientist Yoshua Bengio, Elon Musk, and Apple co-founder Steve Wozniak, signed a March 2023 open letter calling for an immediate pause on giant AI experiments like ChatGPT, citing "profound risks to society and humanity." Geoffrey Hinton, one of the "fathers of AI," voiced concern that future AI systems may surpass human intelligence and left Google in May 2023. A May 2023 statement by hundreds of AI scientists, industry leaders, and other public figures demanded that "mitigating the risk of extinction from AI should be a global priority."
Misinformation
The British newspaper The Guardian expressed doubt that any content found on the Internet after ChatGPT's release "can be truly authentic" and called for government regulation.
ChatGPT and Wikipedia
The possibilities and limits of using ChatGPT to write or edit Wikipedia articles have not been settled and are debated within the Wikipedia community. Some Wikipedians argue that ChatGPT should be banned outright, even if the resulting articles are later checked by human editors, because the AI is too capable of producing plausible falsehoods. There is also the risk that Wikipedia editors will find it harder to police the content being posted.
Andrew Lih, a fellow at the Smithsonian Institution in Washington, D.C., and a volunteer Wikipedia editor since 2003, argues that ChatGPT can help a Wikipedian overcome initial inertia and gain "activation energy." The first Wikipedia article written with ChatGPT's help was published on December 6, 2022, by Richard Knipel, a longtime Wikipedian who edits under the name Pharos, under the title "Artwork." In his edit summary, Knipel noted that it was only a draft created with ChatGPT that he would later revise. Wikipedians like Knipel believe ChatGPT can be used as a tool on Wikipedia without making the human role redundant: the text generated by the chatbot can serve as a useful starting point or outline, to be verified and refined by an editor.
By discipline
Concerns about large language models were raised as early as 2020 by Timnit Gebru, Emily Bender, Angelina McMillan-Major, and Margaret Mitchell. Since its release, ChatGPT has been criticized by educators, academics, journalists, artists, ethicists, and public advocates.
Academic research
ChatGPT can write introductions and abstracts for scientific articles, and several papers have already listed it as a co-author. Scientific journals have responded differently: some, such as Nature and JAMA Network, "require that authors disclose the use of text-generating tools and ban listing a large language model (LLM) such as ChatGPT as a co-author," while Science has "completely banned" the use of LLM-generated text in all its journals.
Spanish chemist Rafael Luque published numerous scientific papers in 2023 that he later admitted were written with ChatGPT; the papers contain many unusual turns of phrase characteristic of LLMs. Luque was suspended from the University of Córdoba for 13 years, though not for his use of ChatGPT.
Many authors argue that using ChatGPT in academia for teaching and peer review is problematic precisely because of its tendency to "hallucinate." Robin Bauwens, an assistant professor at Tilburg University, encountered the problem when a peer-review report on his article, apparently generated by ChatGPT, referenced nonexistent research. According to librarian Chris Granatino of the Lemieux Library at Seattle University, although ChatGPT can generate content that appears to include legitimate citations, in most cases those citations are either inauthentic or incorrect.
Cybersecurity
Check Point Research and other researchers have noted that ChatGPT is capable of creating phishing emails and malware, especially when combined with OpenAI Codex. CyberArk researchers have demonstrated that ChatGPT can be used to create polymorphic malware that can bypass defenses and require little effort on the part of the attacker.
Economy
There have been concerns that ChatGPT could displace jobs, especially professions such as creative writing, communication, journalism, coding and data entry.
Education
Technology writer Dan Gillmor ran a student assignment through ChatGPT in 2022 and found that the generated text was on par with what a good student would produce, opining that "academia has some very serious issues to confront."
California high-school teacher and author Daniel Herman wrote that ChatGPT will bring about "the end of high school English." In Nature, Chris Stokel-Walker noted that teachers should be concerned about students using ChatGPT to outsource their writing, but that educators will adapt by placing more emphasis on critical thinking and reasoning. NPR's Emma Bowman wrote of the danger of students plagiarizing with an AI tool that can produce biased or nonsensical text in an authoritative tone. The Wall Street Journal's Joanna Stern described cheating on an American high-school English assignment with the tool by submitting a generated essay. Professor Darren Hick of Furman University proposed a policy of holding an individual oral exam on a paper's topic whenever there is strong suspicion that a student submitted AI-generated work.
The New York City Department of Education reportedly blocked access to ChatGPT in December 2022 and officially announced a ban around January 4, 2023. The ban was lifted in May 2023 with an official statement encouraging students to use AI tools such as ChatGPT. In February 2023, the University of Hong Kong emailed lecturers and students to say that the use of ChatGPT or other AI tools is prohibited in all classes, assignments, and assessments at the university, and that any violation would be treated as plagiarism unless the student obtained the instructor's prior written consent.
In a survey conducted in March and April 2023, 38% of U.S. high-school students reported having used ChatGPT to complete a school assignment without their teacher's permission; 58% of students reported having used ChatGPT at all.
In blind testing, ChatGPT passed a graduate-level exam at the University of Minnesota at the level of a C+ student and an exam at the University of Pennsylvania's Wharton School with a B to B-. In March 2023, a student and a faculty member at Stanford University assessed ChatGPT's performance in computer programming for numerical methods across a variety of computational-mathematics examples. Psychologist Eka Roivainen administered a partial IQ test to ChatGPT and estimated its verbal IQ at 155, which would place it in the top 0.1% of test-takers.
Geography professor Terence Day analyzed the citations generated by ChatGPT and found them to be fake. Despite this, he writes that "the titles of the fake articles are directly relevant to the issues and have the potential to become great articles. The absence of a genuine citation can be an opportunity for an enterprising author to fill the gap." According to Day, ChatGPT can be used to create high-quality introductory college courses; he has used it to write materials for "introductory physical geography courses, for my sophomore geographic hydrology course, and for a sophomore cartography, geographic information systems, and remote sensing course." He concludes that "this approach can have great implications for open learning and has the potential to influence existing textbook publishing models."
Financial markets
The share price of artificial intelligence technology company c3.ai rose 28% after it announced the integration of ChatGPT into its toolkit. The share price of BuzzFeed, a company unrelated to AI, rose 120% after it announced it would adopt OpenAI technology for content creation. Reuters found that the share prices of AI-related companies BigBear.ai and SoundHound AI rose 21% and 40%, respectively, even though they had no direct connection to ChatGPT, attributing the surge to ChatGPT's role in turning AI into a buzzword on Wall Street. An academic study published in Finance Research Letters found that the "ChatGPT effect" drove retail investors to bid up the prices of AI-related crypto assets despite a broader cryptocurrency bear market and waning institutional interest; this supports anecdotal reports from Bloomberg that crypto investors began to favor AI-related assets in response to ChatGPT's launch.
An experiment by finder.com showed that ChatGPT could outperform popular fund managers by picking stocks based on criteria such as growth history and debt levels: a hypothetical portfolio of 38 stocks gained 4.9%, outperforming 10 benchmark investment funds, which lost 0.8% on average. On the other hand, executives and investment managers at Wall Street quant funds (including some that have used machine learning for decades) note that ChatGPT regularly makes obvious mistakes that could cost investors dearly, since even AI systems that employ reinforcement learning or self-learning have had only limited success in predicting market trends, owing to the inherently noisy quality of market data and financial signals.
Medicine
In healthcare, possible uses and challenges are under scrutiny by professional associations and practicing clinicians. Two early papers indicated that ChatGPT could pass the United States Medical Licensing Examination (USMLE). In January 2023, MedPage Today noted that "researchers have published several papers presenting these artificial intelligence programs as useful tools in medical education, research, and even clinical decision-making."
In February 2023, two separate papers again assessed ChatGPT's medical knowledge using the USMLE; the results were published in JMIR Medical Education (a journal of the Journal of Medical Internet Research family) and PLOS Digital Health. The authors of the PLOS Digital Health paper stated that the results "suggest that large language models can help in medical education and possibly in clinical decision-making." The authors of the JMIR Medical Education paper concluded that "in assessing the primary competency of medical knowledge, ChatGPT performs at a level expected of a third-year medical student" and suggested that it could serve as an "interactive learning environment for students." Prompted by the researchers, ChatGPT itself concluded that "this study suggests that ChatGPT has the potential to be used as a virtual medical tutor, but more research is needed to further assess its performance and usability in this context."
A paper published in March 2023 tested the use of ChatGPT in clinical toxicology. The authors found that the AI "performed well" in responding to a "very simple [clinical example] that is unlikely to be missed by any practitioner." The authors added, "As ChatGPT continues to evolve and adapt to medicine, it may become useful for less common clinical cases (i.e., cases that experts sometimes miss). We see that in the coming years, it is not AI replacing humans (clinicians), but 'clinicians using AI' replacing 'clinicians not using AI'."
A study published in the journal Radiology in April 2023 tested the AI's ability to answer questions about breast cancer screening. The authors found that ChatGPT answered correctly "about 88 percent of the time"; in one case, however, its advice was roughly a year out of date, and the completeness of its answers was also lacking. A study published the same month in JAMA Internal Medicine found that ChatGPT often outperformed human physicians in answering patient questions (when compared with questions and answers on /r/AskDocs, a Reddit forum where moderators verify the medical credentials of responders; the study acknowledges this source as a limitation). The authors suggested that such a tool could be integrated into healthcare systems to help physicians draft answers to patient questions.
Experts have emphasized ChatGPT's shortcomings in delivering medical care. In correspondence published in The Lancet Infectious Diseases, three antimicrobial experts wrote that "the most serious barriers to the implementation of ChatGPT in clinical practice are deficits in situational awareness, inference, and consistency. These shortcomings could endanger patient safety." Physician's Weekly, discussing potential medical uses of ChatGPT (e.g., "as a digital assistant to a physician performing various administrative functions such as gathering information from a patient's history or categorizing patient data by family history, symptoms, test results, and possible allergies"), warned that the AI can sometimes supply false or biased information. One radiologist cautioned: "We have seen firsthand that ChatGPT sometimes makes up fake journal articles or medical consortia to back up its claims." As reported in a Mayo Clinic Proceedings: Digital Health article, ChatGPT may do this for as many as 69% of the medical references it cites. The researchers emphasized that while many of its citations were fabricated, those that were looked "deceptively real." However, as Dr. Stephen Hughes noted in The Conversation, ChatGPT is capable of learning from and correcting its past mistakes. He also noted the AI's "forward-thinking" approach to topics related to sexual health.
Law
On April 11, 2023, a sessions-court judge in Pakistan used ChatGPT in deciding bail for a 13-year-old suspect. The court quoted its exchange with ChatGPT in the judgment:
"Can a juvenile suspect in Pakistan who is 13 years old be released on bail after arrest?"
The AI language model responded:
"According to the Juvenile Justice System Act 2018, under Section 12, the court can grant bail under certain conditions. However, it is up to the court to decide whether a 13-year-old suspect will be released on bail after arrest."
The judge then put further questions about the case to the chatbot and formulated his final decision in light of its answers.
In a May 2023 personal-injury lawsuit against Avianca Airlines filed in the United States District Court for the Southern District of New York (with Senior Judge Kevin Castel presiding), the plaintiff's attorneys reportedly used ChatGPT to draft a legal motion. ChatGPT invented numerous fictitious court cases, complete with fabricated citations and internal references, and the attorneys now face possible court sanctions and disbarment for filing the motion and presenting the fictitious decisions as genuine.