The Monster API uses generative AI to help anyone who wants to create generative AI

May 04, 2024

A startup called Monster API, formally known as Generative Cloud Inc., has developed a GPT-based agent that simplifies and accelerates the fine-tuning and deployment of open-source generative artificial intelligence models.

According to the company, MonsterGPT reduces the time to fine-tune and deploy generative AI applications based on open source models such as Llama 3 and Mistral to just 10 minutes.

Open source generative AI models like Llama 3 have become incredibly popular with companies looking for an alternative to the expensive proprietary models created by OpenAI and Google LLC. However, the process of creating an application such as a conversational assistant is still extremely complex.

For example, if a company wants to create a customer service bot that can solve typical problems, the open-source model used must be refined on the company's own data and knowledge bases so that it can learn about the various products, services and policies needed to inform its responses.

Doing this is not easy, and it is handled almost exclusively by developers and data scientists who must configure up to 30 variables. This requires both knowledge of complex AI optimization frameworks and an understanding of the underlying infrastructure, such as GPU-based cloud setups, containerization and Kubernetes.

According to Monster API, fine-tuning artificial intelligence models is often a major task for companies, requiring up to 10 specialized engineers and dozens of hours of work. And that's assuming the company actually has employees with the necessary skills. Many of them do not.

Utilizing generative AI to create generative AI

This is exactly the process Monster API is trying to simplify. And it has decided that the best way to help fine-tune and adapt generative AI models is to turn to generative AI itself. Using its API, developers can simply issue a command such as "fine-tune Llama 3" and the Monster API will get to work.
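To make that workflow concrete, here is a minimal sketch of what an agent-style request like this might look like. Monster API has not published this exact interface; the endpoint URL, field names and dataset identifier below are assumptions made purely for illustration.

```python
import json

# Hypothetical sketch only: the endpoint, field names and dataset ID are
# illustrative assumptions, not Monster API's published interface.
API_URL = "https://api.example.com/v1/agent"  # placeholder, not a real endpoint

def build_finetune_request(instruction: str, dataset_id: str) -> dict:
    """Package a natural-language instruction for an agent-style API call."""
    return {
        "messages": [{"role": "user", "content": instruction}],
        "dataset": dataset_id,
    }

payload = build_finetune_request("fine-tune Llama 3", "support-tickets-v1")
request_body = json.dumps(payload)  # what would be POSTed to the agent endpoint
```

The key design point is that the user supplies a plain-language instruction plus a pointer to their data; everything else (hyperparameters, hardware, serving) is left to the agent.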

"This is the first time we are offering an agent-based solution for generative AI," Monster API CEO Saurabh Vij told SiliconANGLE. "The simplicity and speed of this process is like flying a supersonic airplane at Mach 4 from New York to London in 90 minutes. At the end of this lightning fast process, MonsterGPT provides developers with an API endpoint for their custom fine-tuned models."

Compared with traditional AI development, the Monster API allows a more use case-driven approach, where the user can simply specify a task they want the model to solve, such as sentiment analysis or code generation, and it will create an optimal model for that task.

According to Vij, the offering will generate a lot of interest because most small teams, startups and indie developers aren't particularly well-versed in the art of fine-tuning and deploying AI models. "Most developers don't understand the intricacies of how different models and architectures work, nor do they have experience with complex cloud and GPU infrastructure," he says.

Vij envisions a future world where almost anyone can become a programmer. They won't need any programming knowledge to do so, as they will be able to simply command a generative AI to create code for them using natural language. "All of our research and development is focused on moving faster towards this future," he added.

Open source and closed source AI

Monster API believes its approach to API-based generative AI development reflects historical advances in technology, such as the first Macintosh computer, which introduced the concept of personal computing in the 1980s, and Mosaic, the first easy-to-use web browser, which democratized the internet by making it accessible to everyone.

The company wants to democratize access to generative AI development in the same way, and to do so, it needs to focus on open-source models rather than their closed-source alternatives.

According to Vij, the rivalry between open-source and closed-source AI has its historical precedent in the form of the battle for mobile dominance between Apple Inc. and Google LLC's open-source Android in the past decade. "Just as Android offers a flexible alternative to Apple's tightly controlled ecosystem, there is a concerted effort to refine open source AI models as a competitor to proprietary giants like OpenAI's GPT-4," he said.

Vij believes the Monster API will help the cause of open-source AI by making it much easier for non-specialist users to create AI applications. Such users will almost certainly prefer open-source models over proprietary alternatives, he says, because proprietary versions such as GPT-4 tend to be generalized rather than specialized.

He explained that most enterprises require domain-specific AI models, which means that the base model needs to be fine-tuned. But the process of fine-tuning closed-source models is often limited to just a few techniques offered by the vendor itself, and this can be very expensive, he said.

"In the open source world, developers get to try a variety of frameworks, from Q-LORA to DPO," Widge added. "They can experiment with different methods and choose the most appropriate one for their use case and budget."

The Monster API agent is claimed to leverage powerful AI frameworks such as Q-LORA for fine-tuning and vLLM for deploying customized models. It provides a simple, unified interface that covers the entire development lifecycle, from initial fine-tuning to deployment. "MonsterGPT reduces the time and friction of running fine-tuning experiments and deployments," said Vij. "Users also get more options for selecting optimization algorithms they can use to significantly improve throughput and latency."
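The kinds of knobs the agent abstracts away can be illustrated with a typical QLoRA-style configuration. The values below are common community defaults for this technique, not Monster API's actual settings, and the model and module names are assumptions for the sketch.

```python
# Illustrative QLoRA-style hyperparameters (common community defaults);
# not Monster API's actual configuration.
qlora_config = {
    "base_model": "meta-llama/Meta-Llama-3-8B",
    "load_in_4bit": True,          # quantize base weights to 4-bit (the "Q" in QLoRA)
    "lora_rank": 16,               # rank r of the low-rank adapter matrices
    "lora_alpha": 32,              # adapter scaling factor
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],  # attention projections to adapt
    "learning_rate": 2e-4,
    "epochs": 3,
}

# LoRA trains only small adapters: for a targeted weight matrix W of shape
# (d_out, d_in), it adds B @ A with B (d_out, r) and A (r, d_in), so the
# number of new trainable parameters per matrix is r * (d_out + d_in).
def lora_params(d_out: int, d_in: int, r: int) -> int:
    return r * (d_out + d_in)

added = lora_params(4096, 4096, qlora_config["lora_rank"])  # 16 * 8192 = 131072
```

At rank 16, a 4096x4096 projection gains only about 131,000 trainable parameters versus roughly 16.8 million in the frozen base matrix, which is why QLoRA fine-tuning fits on modest GPUs.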

Another advantage of MonsterGPT is that users do not need to understand the multitude of possible cloud infrastructure settings, GPU resources and configurations. The system will automatically select the most appropriate infrastructure based on the user's budget and goals.

Vij said this would be especially useful because, for many use cases and applications, a smaller fine-tuned model is more than enough. For example, using OpenAI's most powerful model, GPT-4 Turbo, for a simple customer service chatbot is likely overkill, because its capabilities far exceed what that kind of application needs. Using a smaller model optimized for the application allows companies to significantly reduce costs.
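The memory arithmetic behind this argument is straightforward. The sketch below is a back-of-the-envelope estimate of GPU memory for model weights alone (ignoring activations and KV cache); the figures are rough rules of thumb, not vendor benchmarks.

```python
# Rough weight-only memory estimate: params * bytes per parameter.
# fp16 uses 2 bytes per parameter; 4-bit quantization about 0.5 bytes.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

fp16_70b = weight_memory_gb(70, 2)   # ~140 GB: needs multiple data-center GPUs
fp16_7b = weight_memory_gb(7, 2)     # ~14 GB: fits a single 16-24 GB card
int4_7b = weight_memory_gb(7, 0.5)   # ~3.5 GB: fits consumer-grade GPUs
```

A 7-billion-parameter model quantized to 4 bits needs roughly 40x less memory than a 70-billion-parameter model in fp16, which is the gap between commodity hardware and a multi-GPU cluster.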

"Smaller models can fit into smaller and more affordable commodity GPUs," Widge explained, adding that this is another advantage of open source. "With closed models, developers can't do anything - they're forced to use much larger and more generalized models."

Holger Mueller of Constellation Research Inc. said he is impressed with the MonsterGPT concept, which represents a significant advance in the recursive use of generative AI. "Just as we are already using robots to create new robots, we are now using AI software to write more AI software," he said.

The analyst noted that it is important that people stay in the loop when it comes to these use cases, as some oversight is needed. However, he said the technology is extremely promising in terms of its ability to lower the barrier to entry for generative AI development.

"The Monster API is designed to help anyone who can talk and validate AI models successfully build and deploy them without specialized skills," explained Mueller. "As with any new technology, and especially AI, we need to have a healthy skepticism until we see working use cases in production. But for now, the Monster API deserves praise for its efforts to democratize AI development and change the future of work for developers and business users who have no experience with AI."

At launch, the Monster API supports more than 30 popular open-source models, including Llama 3 from Meta Platforms Inc., Mistral-7B-Instruct-v0.2 from Mistral AI, Phi-2 from Microsoft Corp. and SDXL-base from Stability AI Ltd.
