Databricks’ $10M Investment in DBRX Fails to Surpass GPT-4’s Dominance in Generative AI


If you had a hefty $10 million budget to raise your technology company’s profile, how would you allocate the resources? Would you go for a striking Super Bowl advertisement or an alluring F1 sponsorship?

Well, perhaps you could take a different approach and invest in training a powerful generative AI model. While that may not be a traditional marketing tactic, generative models have a knack for grabbing attention and funneling users toward a vendor’s core products and services.

Databricks, one of the well-known leaders in the data management industry, has announced its latest creation: a generative AI model called DBRX, comparable to renowned models like OpenAI’s GPT series and Google’s Gemini. Available on GitHub and the AI development platform Hugging Face, DBRX comes in two variations: a base version (DBRX Base) and a fine-tuned version (DBRX Instruct). Both are available for research and commercial use, and both can be run and fine-tuned on public, custom, or otherwise proprietary data.
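For developers who want to experiment outside the Databricks platform, the weights can be pulled from Hugging Face. Below is a minimal sketch using the transformers library; the databricks/dbrx-instruct repo name matches the public release, but treat the exact flags and access requirements as assumptions to verify against the model card.

```python
# Minimal sketch: loading DBRX Instruct from Hugging Face with transformers.
# Assumes access to the gated databricks/dbrx-instruct repo and enough GPU
# memory to shard the (reportedly ~132B-parameter) model, e.g. four 80GB cards.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "databricks/dbrx-instruct"  # fine-tuned variant; dbrx-base is the base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",           # shard layers across all visible GPUs
    torch_dtype=torch.bfloat16,  # half precision; full precision needs far more memory
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Explain what a mixture-of-experts model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```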

“DBRX was trained to provide valuable information on a wide range of subjects,” says Databricks’ VP of generative AI, Naveen Rao, in an exclusive interview with TechCrunch. “While DBRX is specialized in English language usage, it can converse and translate in multiple languages, such as French, Spanish, and German.”

Databricks describes DBRX as “open source,” in the same vein as “open source” models like Meta’s Llama 2 and AI startup Mistral’s models. (The debate over whether these models truly fit the definition of open source is still ongoing.) Databricks claims to have invested approximately $10 million and eight months in training DBRX, proudly stating in a press release that it “outperforms all existing open source models on standard benchmarks.”

However, there is a marketing catch: DBRX is not easy to use unless you are a Databricks customer.

Why is that? Well, to run DBRX in its standard configuration, you need a server or PC with at least four Nvidia H100 GPUs. A single H100 costs tens of thousands of dollars, which puts that hardware out of reach for most individual developers and small business owners. There is fine print, too: Databricks says that companies with more than 700 million active users will face certain restrictions, similar to those Meta imposes for Llama 2, and that all users must agree to terms ensuring responsible use of DBRX. The specifics of those terms had not been disclosed by Databricks at the time of publication.

To lower these barriers, Databricks highlights its Mosaic AI Foundation Model product as a managed solution. Along with running DBRX and other models, it offers a training stack for fine-tuning DBRX on custom data. According to Rao, customers can privately host DBRX using Databricks’ Model Serving offering, or they can work with Databricks to deploy DBRX on the hardware of their choosing.
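In practice, a privately hosted model on Databricks is exposed as a REST endpoint. The sketch below is illustrative only: the workspace URL, endpoint name, and payload shape are assumptions rather than confirmed details of the Mosaic AI offering.

```python
# Illustrative sketch of querying a privately hosted DBRX behind a Databricks
# Model Serving endpoint. The workspace URL, endpoint name ("dbrx-instruct"),
# and request schema are hypothetical placeholders.
import os
import requests

WORKSPACE = "https://my-workspace.cloud.databricks.com"  # hypothetical workspace
URL = f"{WORKSPACE}/serving-endpoints/dbrx-instruct/invocations"

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    json={
        "messages": [{"role": "user", "content": "Write a SQL query that finds duplicate rows."}],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())
```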

“Our aim is to make the Databricks platform the go-to choice for customized model building, which ultimately benefits us with more users on our platform,” explains Rao. “DBRX reflects our top-notch pre-training and tuning platform, which customers can use to build their own models from scratch. It’s an easy way to get started with the Databricks Mosaic AI generative AI tools, and DBRX is highly efficient and can be tuned for exceptional performance on specific tasks, outshining large, closed models.”

Databricks claims that DBRX runs up to two times faster than Llama 2, thanks in part to its mixture of experts (MoE) architecture. The idea behind MoE is to split data processing across multiple specialized “expert” sub-models, activating only a few of them for any given input. Most MoE models have eight experts; DBRX has 16, which Databricks says improves its quality.
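To make the architecture concrete, here is a toy sketch of the top-k routing that MoE layers use. It illustrates the general technique, not DBRX’s actual implementation; the defaults below mimic the setup Databricks has described, with four of 16 experts active per token.

```python
# Toy mixture-of-experts layer with top-k routing. This shows the general
# technique only; it is not DBRX's actual code. Defaults mirror the reported
# DBRX setup of 16 experts with 4 active per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 16, top_k: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores every expert for each token
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
            )
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        scores = self.router(x)                          # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)             # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e via this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(8, 64)    # a batch of 8 token embeddings
print(ToyMoE()(tokens).shape)  # torch.Size([8, 64])
```

Because only the selected experts run for each token, an MoE model can hold far more total parameters than it actually computes with per input, which is how Databricks can claim both scale and speed.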

However, quality is subjective, and DBRX still has its shortcomings.

Despite Databricks’ claim that DBRX surpasses Llama 2 and Mistral’s models on language understanding, programming, mathematics, and logic benchmarks, it falls short of the leading generative AI model, OpenAI’s GPT-4, in most areas outside of niche use cases such as generating database programming languages.

Rao also admits that DBRX has other limitations, notably that it can “hallucinate” responses to queries despite Databricks’ efforts at safety testing and red teaming. Because the model was trained to associate words and phrases with certain concepts, when those associations are not entirely accurate, its responses will not always be accurate either.

Moreover, unlike some of its more recent counterparts such as Gemini, DBRX is not multimodal: it can process and generate text, but not images. And Databricks has not disclosed the exact sources of the data used to train DBRX, beyond saying that no customer data was used. “We trained DBRX using a diverse set of data from various sources that the community appreciates and uses every day,” Rao adds.

When questioned about whether any copyrighted, licensed, or biased data was used in training DBRX, Rao declines to comment directly, stating only, “We have been meticulous in our data selection and conducted red teaming exercises to address the model’s weaknesses.” Generative AI models regurgitating biased or unlicensed training data is nevertheless a significant concern for commercial users; in the worst case, a user could face ethical and legal consequences for unwittingly incorporating IP-infringing or biased output from a model into their projects.

Some other companies that train and release generative AI models offer policies covering the legal fees arising from possible infringement claims; Databricks does not, at the moment. Rao says the company is “exploring scenarios” under which it might.

Given the current status of DBRX compared to its competitors, it appears to be a difficult sell except to existing or potential Databricks customers. Databricks’ rivals in generative AI, including OpenAI, offer equally or even more impressive technologies at competitive pricing. Additionally, many generative AI models are closer to the commonly accepted definition of open source than DBRX.

Rao assures that Databricks will continue to refine and release new versions of DBRX as the company’s Mosaic Labs R&D team explores new avenues in generative AI. He states, “DBRX is a leap forward in the open-source model world and challenges future models to be built with even greater efficiency. As we apply techniques to enhance the quality of the output in terms of safety, reliability, and bias, we will release variants. DBRX is a platform for our customers’ inventive capabilities, boosted by our tools.”

DBRX has a long journey ahead to catch up with its peers.

Zara Khan

Zara Khan is a seasoned investigative journalist with a focus on social justice issues. She has won numerous awards for her groundbreaking reporting and has a reputation for fearlessly exposing wrongdoing.

