OpenAI introduces new model optimization features for its fine-tuning API

Updated on June 23, 2024

OpenAI today introduced a set of new tools that will make it easier to optimize its large language models for specific tasks.

Most of the additions extend the fine-tuning application programming interface, which the company launched last August. The API lets customers supply OpenAI's language models with data that is not part of their built-in knowledge. For example, a retailer can provide GPT-4 with information about its products and then use the model to handle customer queries.
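As an illustration, here is a minimal sketch of how such a fine-tuning job might be created with OpenAI's official Python SDK; the training file name, its contents, and the choice of base model are assumptions made for the example:

```python
# Minimal sketch of creating a fine-tuning job with the openai Python SDK (v1.x).
# The file "products.jsonl" and the base model here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a JSONL file of training examples in the chat format the API expects.
training_file = client.files.create(
    file=open("products.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on top of a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```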

Fine-tuning an LLM on external data is a complex process prone to technical failures. When a failure occurs, the model does not correctly learn from the data it is given, which can limit its usefulness. The first enhancement OpenAI introduced today for its fine-tuning API is designed to address this problem.

A fine-tuning project is typically divided into stages called epochs. During each epoch, the model makes one complete pass over the dataset it is being tuned on. Failures often occur not during the first epoch of a project but during subsequent ones.

Using the improved fine-tuning API, developers can now save a checkpoint of the model after each training epoch. If a failure occurs in epoch five, the project can be resumed from the epoch-four checkpoint rather than restarted from scratch, which reduces the time and effort needed to recover from fine-tuning errors.
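Assuming the checkpoints endpoint of the fine-tuning API, a sketch of retrieving the per-epoch checkpoints for a job might look like this (the job ID is a placeholder):

```python
# Sketch: listing per-epoch checkpoints of a fine-tuning job (openai SDK v1.x).
# The job ID below is a placeholder.
from openai import OpenAI

client = OpenAI()

checkpoints = client.fine_tuning.jobs.checkpoints.list("ftjob-abc123")
for cp in checkpoints.data:
    # Each checkpoint is itself a usable model name, which makes it possible
    # to fall back to the state after an earlier epoch instead of retraining.
    print(cp.step_number, cp.fine_tuned_model_checkpoint)
```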

The feature is rolling out alongside a new section of the interface known as the Playground UI. According to OpenAI, developers can use it to compare different versions of a fine-tuned model side by side. For example, a team can test how the model answers a question after the fifth epoch, then ask the same question two epochs later to determine whether the accuracy of the answer has improved.
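The same comparison can be approximated programmatically by sending one question to two checkpoint models via the chat completions endpoint; the checkpoint model names below are hypothetical placeholders:

```python
# Sketch: asking the same question to two checkpoints of a fine-tuned model
# to compare answer quality. The checkpoint model names are placeholders.
from openai import OpenAI

client = OpenAI()

question = "What is the return policy for opened items?"
checkpoint_models = [
    "ft:gpt-3.5-turbo:acme::abc123:ckpt-step-500",  # after epoch 5 (assumed)
    "ft:gpt-3.5-turbo:acme::abc123:ckpt-step-700",  # after epoch 7 (assumed)
]

for model in checkpoint_models:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    print(model, "->", reply.choices[0].message.content)
```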

OpenAI is also extending the existing fine-tuning dashboard. According to the company, developers can now more easily adjust a model's hyperparameters: configuration settings, such as the number of training epochs and the learning rate, that govern the training process and influence the accuracy of the LLM's responses.
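For illustration, the fine-tuning API already accepts these hyperparameters when a job is created; the values below are arbitrary examples, and the file ID is a placeholder:

```python
# Sketch: setting fine-tuning hyperparameters explicitly (openai SDK v1.x).
# n_epochs, batch_size, and learning_rate_multiplier are the hyperparameters
# the fine-tuning API documents; the values below are arbitrary examples.
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder file ID
    model="gpt-3.5-turbo",
    hyperparameters={
        "n_epochs": 4,
        "batch_size": 8,
        "learning_rate_multiplier": 0.1,
    },
)
```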

The updated dashboard also surfaces more detailed technical data about fine-tuning sessions. To make that data more useful, OpenAI has added the ability to stream it into third-party AI development tools. The first such integration, with Weights & Biases, a model-development platform built by the well-funded startup of the same name, has already launched.
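A sketch of what enabling that integration might look like when creating a fine-tuning job, using the API's integrations parameter; the project name is an assumption:

```python
# Sketch: forwarding fine-tuning metrics to Weights & Biases via the
# integrations parameter of the fine-tuning API (openai SDK v1.x).
from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # placeholder file ID
    model="gpt-3.5-turbo",
    integrations=[
        # The W&B project name is an illustrative assumption.
        {"type": "wandb", "wandb": {"project": "customer-support-bot"}}
    ],
)
```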

For enterprises that require more advanced model optimization features, OpenAI today introduced a new offering called assisted fine-tuning. It gives customers access to additional hyperparameters beyond those available through the standard API. Customers can also optimize their LLMs with parameter-efficient fine-tuning (PEFT), a technique that updates only a small subset of a model's parameters rather than all of its weights.
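As a general illustration of the PEFT idea, and not OpenAI's internal implementation, here is a LoRA-style sketch in PyTorch that freezes a layer's base weights and trains only a small low-rank adapter:

```python
# General illustration of PEFT (not OpenAI's implementation): freeze the base
# weights and train only a small low-rank "adapter", in the style of LoRA.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights stay frozen
        # Only these two small matrices are trained.
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        # Frozen base output plus the trainable low-rank correction.
        return self.base(x) + (x @ self.lora_a) @ self.lora_b

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} of {total}")  # a small fraction
```

The point of the design is the parameter count: the frozen base layer holds hundreds of thousands of weights, while the two adapter matrices add only a few thousand trainable ones, which is what makes this style of fine-tuning cheap.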
