Creating your own artificial intelligence tools in Python using OpenAI APIs

Updated on May 06, 2024

With OpenAI now supporting models up to GPT-4 Turbo, Python developers have an incredible opportunity to explore advanced AI features. This guide shows how to integrate the ChatGPT API into your Python scripts, walks through the initial setup steps, and explains how to use the API effectively.

The ChatGPT API is a programming interface that lets developers interact with GPT models and use them to generate conversational responses. In practice, it is a general-purpose OpenAI API that works with all of their models.

Since GPT-4 Turbo is more advanced and three times cheaper than GPT-4, there has never been a better time to utilize this powerful API in Python, so let's get started!

Customizing the environment

First, we'll walk through setting up an environment to work with the OpenAI API in Python. The initial steps include installing the necessary libraries, setting up API access, and handling API keys and authentication.

Installing the necessary Python libraries

Before you get started, make sure you have Python installed on your system. We recommend using a virtual environment to keep everything organized. You can create a virtual environment using the following command:

python -m venv chatgpt_env

Activate the virtual environment by running the command:

  • chatgpt_env\Scripts\activate (Windows)
  • source chatgpt_env/bin/activate (macOS or Linux)

Next, you need to install the necessary Python libraries, which include the OpenAI Python client library for interacting with the OpenAI API and the python-dotenv package for handling configuration. To install both packages, run the following command:

pip install openai python-dotenv

Configuring access to the OpenAI API

To make a request to the OpenAI API, you must first register with the OpenAI platform and generate your unique API key. Perform the following steps:

  1. Go to the OpenAI API keys page and create a new account, or log in if you already have one.
  2. Once logged in, go to the API keys section and click Create new secret key.
  3. Copy the generated API key and store it somewhere safe: you will not be able to view it again on the OpenAI website, and if you lose it you will have to generate a new one.
OpenAI API keys page

Generated API key that you can use right now

API key and authentication

After obtaining the API key, we recommend saving it as an environment variable for security purposes. Use the python-dotenv package to manage environment variables. To create an environment variable containing your API key, follow these steps:

  1. Create a file named .env in the project directory.

  2. Add the following line to the .env file, replacing your_api_key with the actual API key you copied earlier: CHAT_GPT_API_KEY=your_api_key

  3. In Python code, load the API key from the .env file using the load_dotenv function from the python-dotenv package:

import os
from dotenv import load_dotenv
import openai
from openai import OpenAI

# Load the API key from the .env file
load_dotenv()
client = OpenAI(api_key=os.environ.get("CHAT_GPT_API_KEY"))

Note: In recent versions of the OpenAI Python library, you must instantiate an OpenAI client to make API calls, as shown above. This is a change from previous versions, where you could call module-level methods directly.

Now that you've added your API key, your environment is set up and ready to use the OpenAI API in Python. In the next sections, we'll look at interacting with the API and building chat applications with this powerful tool.

Don't forget to add the above code snippet to each code section below before running it.

Using the OpenAI API in Python

After loading the API key from the .env file, we can start using it in Python. To use the OpenAI API, we make calls through the client object: we pass a series of messages to the API as input and receive a model-generated message as output.

Creating a simple ChatGPT request

  1. Make sure you have completed the previous steps: created the virtual environment, installed the necessary libraries, generated the OpenAI secret key, and created the .env file in the project directory.

  2. Use the following code snippet to configure a simple ChatGPT request:

# Create a chat completion
chat_completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "query"}]
)
print(chat_completion.choices[0].message.content)

Here, client.chat.completions.create is a method call on the client object. The chat attribute accesses chat-specific API functions, and completions.create asks the AI model to generate a response, or completion, based on the provided data.

Replace query with the prompt you want to run, and feel free to use any supported GPT model instead of the GPT-4 model selected above.
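Beyond replacing the user query, you can also prepend a system message to steer the model's tone and behavior. Here is a minimal sketch; the build_messages helper is our own illustration, not part of the OpenAI library:

```python
# A small helper that builds the messages list for a chat completion.
# The system message steers the model's behavior; the user message is the query.
def build_messages(system_prompt, user_prompt):
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Example: same call shape as before, with an added system message
messages = build_messages(
    "You are a concise assistant that answers in one sentence.",
    "What is the capital of France?",
)
# chat_completion = client.chat.completions.create(model="gpt-4", messages=messages)
```

The same list can be extended with previous "assistant" replies to give the model conversation history.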

Error handling

A variety of problems can occur when making requests, including network connectivity issues, exceeding the rate limit, or other non-success response status codes. It is therefore important to handle these cases correctly. We can use try and except blocks in Python to keep the program running and handle errors gracefully:

# Attempt to create a chat completion
try:
    chat_completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "query"}],
        temperature=1,
        max_tokens=150  # Customize the number of tokens as needed
    )
    print(chat_completion.choices[0].message.content)
except openai.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)
except openai.RateLimitError as e:
    print("Status code 429 received; we should back off a bit")
except openai.APIStatusError as e:
    print("A status code outside the 200 range was received")
    print(e.status_code)
    print(e.response)

Note: To use any OpenAI API model, you must have available credit grants. If it has been more than three months since you created your account, it is likely that your free credit grants have expired and you will need to purchase additional credits (minimum $5).

Here are a few ways to further customize API requests:

  • Max tokens. Limit the maximum length of the output by setting the max_tokens parameter. This can save costs, but note that it simply cuts the generated text off at the limit rather than making the overall output more concise.
  • Temperature. Adjust the temperature parameter to control randomness: higher values make responses more varied, while lower values make them more consistent.

If a parameter is not set manually, the model's default value is used, e.g. 0.7 and 1 for GPT-3.5-turbo and GPT-4, respectively.

In addition to the settings above, there are many other parameters that let you use GPT's capabilities exactly as you need them. It is worth studying the OpenAI API documentation to familiarize yourself with them.

Nevertheless, effective, contextual prompts are still essential, no matter how many parameters you tune.

Advanced API Integration Techniques

In this section, we'll cover advanced techniques for integrating OpenAI APIs into your Python projects, focusing on automating tasks, using Python queries to retrieve data, and managing large-scale API requests.

Automating tasks using OpenAI API

To make your Python project more efficient, you can automate various tasks using the OpenAI API. For example, you can automate the generation of email responses, support responses, or content creation.

Here's an example of how you can automate a task using the OpenAI API:

def automated_task(prompt):
    try:
        chat_completion = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
            max_tokens=250
        )
        return chat_completion.choices[0].message.content
    except Exception as e:
        return str(e)

# Example usage
generated_text = automated_task("Write a short note of less than 50 words to the development team asking them to report the current status of the software update")
print(generated_text)

This function accepts a prompt and returns the generated text.

Using Python queries to retrieve data

You can use the popular requests library to interact with the OpenAI API directly, without relying on the OpenAI library. This approach gives you more control over the request and more flexibility when calling the API.

The following example requires the requests library (if you don't have it, first run pip install requests):

import requests

# api_key should contain your OpenAI API key (e.g. loaded from the .env file)
headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {api_key}',
}
data = {
    'model': 'gpt-4',  # Change to the desired model
    'messages': [{'role': 'user', 'content': 'Write an interesting fact about Christmas.'}]
}
response = requests.post('https://api.openai.com/v1/chat/completions', headers=headers, json=data)
print(response.json())

This code snippet demonstrates the execution of a POST request to the OpenAI API with headers and data as arguments. The response in JSON format can be parsed and used in your Python project.
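Parsing that JSON response can be sketched as follows; the extract_reply helper and the sample dictionary are our own illustration, with the dictionary mirroring the shape of a Chat Completions response:

```python
# Extract the assistant's reply from the raw Chat Completions JSON.
def extract_reply(response_json):
    return response_json["choices"][0]["message"]["content"]

# Example with a response shaped like the API's JSON:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Reindeer can see ultraviolet light."}}
    ]
}
print(extract_reply(sample))  # → Reindeer can see ultraviolet light.
```

In real code, you would pass response.json() from the requests call into such a helper (and check response.status_code first).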

Managing large requests to the API

When working with large projects, it is important to manage API requests efficiently. This can be achieved using techniques such as batch processing, throttling, and caching.

  • Bundling. Generate several alternative responses to one prompt in a single API call by setting the n parameter in the OpenAI library: n=number_of_responses_needed.
  • Throttling. Implement rate limiting on your API calls to avoid overusing or overloading the API.
  • Caching. Save the results of completed API requests to avoid repeated calls for similar prompts.

To effectively manage API requests, track their usage and modify configuration settings accordingly. If necessary, use the time library to add delays or timeouts between requests.
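Throttling and caching can be combined in a few lines. This is a minimal sketch under our own assumptions: MIN_INTERVAL, throttled, and cached_completion are illustrative names, and the API call itself is stubbed out with a placeholder string:

```python
import time
import functools

MIN_INTERVAL = 1.0  # minimum seconds between consecutive API calls
_last_call = 0.0

def throttled(func):
    """Delay calls so they are at least MIN_INTERVAL seconds apart."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        global _last_call
        wait = MIN_INTERVAL - (time.monotonic() - _last_call)
        if wait > 0:
            time.sleep(wait)
        _last_call = time.monotonic()
        return func(*args, **kwargs)
    return wrapper

@functools.lru_cache(maxsize=128)  # cache is outermost, so repeated prompts skip the API
@throttled
def cached_completion(prompt):
    # In a real project this would call the OpenAI API, e.g.:
    # return client.chat.completions.create(...).choices[0].message.content
    return f"response to: {prompt}"

print(cached_completion("hello"))
print(cached_completion("hello"))  # served from the cache, no second "call"
```

Because lru_cache wraps the throttled function, cache hits return immediately and only genuine API calls are rate limited.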

Applying these best practices to your Python projects will help you get the most out of the OpenAI APIs by providing efficient and scalable API integration.

Practical applications: OpenAI API in real projects

Incorporating OpenAI APIs into your real-world projects can provide many benefits. In this section, we will look at two specific applications: integrating ChatGPT into web development and creating chatbots using ChatGPT and Python.

Integrating ChatGPT into web development

OpenAI APIs can be used to create interactive, dynamic content tailored to user requests or needs. For example, you can use ChatGPT to create personalized product descriptions, engaging blog posts, or answers to common questions about your services. The possibilities with the OpenAI API and a little Python code are endless.

Let's look at this simple example of using an API call from the Python backend:

def generate_content(prompt):
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
    except Exception as e:
        return str(e)

# Use this function to generate content
description = generate_content("Write a brief description of a hiking backpack")

You can also write code to integrate the description into HTML and JavaScript to display the generated content on your website.
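The HTML side can be sketched as a small templating helper; render_product_card is our own illustrative function, and html.escape guards against markup in the generated text:

```python
import html

# Wrap an AI-generated description in an HTML fragment for the page.
def render_product_card(title, description):
    return (
        f"<div class='product-card'>"
        f"<h3>{html.escape(title)}</h3>"
        f"<p>{html.escape(description)}</p>"
        f"</div>"
    )

# Example: embed the generated description in markup
snippet = render_product_card("Hiking Backpack", "A rugged 40L pack for weekend trails.")
print(snippet)
```

A web framework such as Flask or Django would return this fragment (or a full template) from the route that calls generate_content.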

Creating chatbots using ChatGPT and Python

Artificial intelligence-based chatbots are beginning to play an important role in improving the user experience. By combining the natural language processing capabilities of ChatGPT with Python, you can create chatbots that understand context and respond intelligently to user input.

Let's look at an example of processing user input and getting a response:

def get_chatbot_response(prompt):
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
    except Exception as e:
        return str(e)

# Get user input from the command line
user_input = input("Enter your prompt: ")
response = get_chatbot_response(user_input)
print(response)

Since there is no loop, this script terminates after a single run, so consider adding conditional logic. Below, we add a simple loop that keeps asking for user prompts until the user types the stop phrase "exit" or "quit".

Given the above logic, our final code to run the chatbot on the OpenAI API endpoint might look like this:

from openai import OpenAI
import os
from dotenv import load_dotenv

# Load the API key from the .env file
load_dotenv()
client = OpenAI(api_key=os.environ.get("CHAT_GPT_API_KEY"))

def get_chatbot_response(prompt):
    try:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}]
        )
        return response.choices[0].message.content
    except Exception as e:
        return str(e)

while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        print("Chat session ended.")
        break
    response = get_chatbot_response(user_input)
    print("ChatGPT:", response)

Here's what it looks like when run at the Windows command prompt.

Running in the Windows Command Prompt

We hope these examples will help you start experimenting with ChatGPT AI. In general, OpenAI has opened up a wide range of opportunities for developers to create new and interesting products using its API, and the possibilities are endless.

OpenAI API Limitations and Pricing

Although the OpenAI API is very powerful, it has a few limitations:

Data Storage . OpenAI stores your API data for 30 days, and using the API implies consent to data storage. Be careful about the data you submit.

Model capacity. Chat models have a maximum context length in tokens (for example, gpt-3.5-turbo supports 4,096 tokens). If an API request exceeds this limit, you will have to truncate or omit text.
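Truncating to a token budget can be approximated without any extra libraries; a common rule of thumb is about four characters per token for English text (for exact counts, OpenAI's tiktoken library is the usual tool). The helper below is our own illustration of that approximation:

```python
# Roughly truncate text to a token budget, assuming ~4 characters per token.
def truncate_to_token_budget(text, max_tokens, chars_per_token=4):
    max_chars = max_tokens * chars_per_token
    if len(text) <= max_chars:
        return text
    return text[:max_chars]

long_text = "word " * 5000  # 25,000 characters
short = truncate_to_token_budget(long_text, max_tokens=4096)
print(len(short))  # → 16384
```

This is only an estimate: code, non-English text, and unusual tokens can deviate substantially from the 4-characters-per-token heuristic.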

Pricing. The OpenAI API is not free and has its own pricing scheme, separate from model subscriptions. More information on pricing can be found on the OpenAI website. (Again, GPT-4 Turbo is three times cheaper than GPT-4!)

Conclusion

Utilizing the potential of the ChatGPT model API in Python can greatly advance various applications such as customer support, virtual assistants, and content creation. By integrating this powerful API into your projects, you can easily leverage the power of GPT models in your Python applications.

Let's get in touch!

Please feel free to send us a message through the contact form.

Drop us a line at request@nosota.com or give us a call over Skype: nosota.skype