Enterprises Embrace Open-Source Generative AI Tools: Intel and Beyond Take the Lead

Interoperability in the Era of Enterprise Generative AI

“Can generative AI designed for the enterprise (e.g. AI that autocompletes reports, spreadsheet formulas and so on) ever be interoperable?” This is a question many organizations, including Cloudera and Intel, have been tackling. In pursuit of an answer, the Linux Foundation, a nonprofit organization dedicated to supporting and maintaining open source efforts, has launched a new project: the Open Platform for Enterprise AI (OPEA).

Under the guidance of the Linux Foundation’s LF AI & Data organization, which focuses on AI and data platform initiatives, OPEA aims to foster the development of open, multi-provider, and composable (i.e. modular) generative AI systems. The ultimate goal is to create “hardened” and “scalable” AI systems by harnessing the best open source innovations from across the ecosystem, as LF AI & Data executive director Ibrahim Haddad put it in a press release.

“OPEA will unlock new possibilities in AI by creating a detailed, composable framework that stands at the forefront of technology stacks,” said Haddad. “This initiative is a testament to our mission to drive open source innovation and collaboration within the AI and data communities under a neutral and open governance model.”

The OPEA project counts Cloudera and Intel among its members, alongside enterprise heavyweights such as IBM-owned Red Hat, Hugging Face, Domino Data Lab, MariaDB, and VMware. Together, they aim to build open, interoperable AI tools that enterprises of all sizes can use.

But what exactly can we expect from these collaborations? According to Haddad, possibilities include “optimized” support for AI toolchains and compilers that enable AI workloads to run across different hardware components, as well as “heterogeneous” pipelines for retrieval-augmented generation (RAG).

RAG is gaining popularity in enterprise applications of generative AI because it extends a model’s knowledge base beyond its original training data. Before generating a response or performing a task, the model retrieves relevant outside information, such as proprietary company data or public databases, and conditions its output on it.
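
For the unfamiliar, the retrieve-then-generate loop is straightforward to sketch. What follows is a minimal illustration only, assuming a toy in-memory document store, bag-of-words similarity scoring, and a stubbed generate() call; none of this is OPEA code, and a production pipeline would swap in a vector database and a real model:

    # Minimal RAG sketch. The document store, bag-of-words scoring, and the
    # generate() stub are illustrative assumptions, not OPEA code.
    from collections import Counter
    import math

    DOCUMENTS = [
        "Q3 revenue grew 12% year over year, driven by cloud subscriptions.",
        "The travel policy caps hotel reimbursement at $250 per night.",
    ]

    def embed(text):
        # Toy "embedding": word counts. Real pipelines use a neural embedding model.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query, k=1):
        # Rank stored documents by similarity to the query; return the top k.
        q = embed(query)
        return sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

    def generate(prompt):
        # Placeholder for an LLM call (hosted API or local model).
        return "[model answer conditioned on]\n" + prompt

    query = "How much did revenue grow last quarter?"
    context = "\n".join(retrieve(query))
    # Prepending retrieved context lets the model answer from company data
    # rather than from its training set alone.
    print(generate("Context:\n" + context + "\n\nQuestion: " + query))

The “heterogeneous” pipelines Haddad mentions would run steps like retrieve() and generate() on different hardware and from different vendors, which is where standard interfaces matter.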

In its own press release, Intel provided more details about OPEA’s goals:

“Enterprises are challenged with a do-it-yourself approach [to RAG] because there are no de facto standards across components that allow enterprises to choose and deploy RAG solutions that are open and interoperable and that help them quickly get to market. OPEA intends to address these issues by collaborating with the industry to standardize components, including frameworks, architecture blueprints and reference solutions.”

Evaluation will also be a key focus for OPEA. Its GitHub repository proposes a rubric for grading generative AI systems along four axes: performance, meaning “black-box” benchmarks drawn from real-world use cases; features, an appraisal of a system’s interoperability, deployment choices, and ease of use; trustworthiness, a model’s ability to guarantee “robustness” and quality; and “enterprise-grade” readiness, getting a system deployed and running without showstopping issues.
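
To make the structure concrete, here is one way such a rubric could be encoded. The axis names follow the rubric described above; the 0–10 scale and the equal weighting are hypothetical assumptions, not OPEA’s published tests:

    from dataclasses import dataclass

    @dataclass
    class RubricScore:
        # Hypothetical encoding of OPEA's four grading axes (0-10 each);
        # the scale and weighting are assumptions, not OPEA's actual rubric.
        performance: int           # "black-box" benchmarks on real-world use cases
        features: int              # interoperability, deployment choices, ease of use
        trustworthiness: int       # robustness and output-quality guarantees
        enterprise_readiness: int  # deploys and runs without showstoppers

        def overall(self):
            # Equal weighting is an assumption; OPEA has not published weights.
            return (self.performance + self.features
                    + self.trustworthiness + self.enterprise_readiness) / 4

    score = RubricScore(performance=7, features=8,
                        trustworthiness=6, enterprise_readiness=9)
    print(score.overall())  # 7.5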

Rachel Roumeliotis, director of open source strategy at Intel, confirms that OPEA will work with the open source community to offer tests based on the rubric and provide assessments and grading for generative AI deployments upon request.

OPEA’s other endeavors are still in the early stages, though Haddad envisions the potential for open model development along the lines of Meta’s Llama family and Databricks’ DBRX. Intel has already contributed reference implementations to the OPEA repo: a generative-AI-powered chatbot, a document summarizer, and a code generator, each optimized for its Xeon 6 and Gaudi 2 hardware.

OPEA’s members are clearly invested (and self-interested, for that matter) in building tooling for enterprise generative AI. Cloudera recently launched partnerships to create an “AI ecosystem” in the cloud, Domino offers a suite of apps for building and auditing business-forward generative AI, and VMware, oriented toward the infrastructure side of enterprise AI, last August rolled out new “private AI” compute products.

The big question now is whether these vendors will work together to create cross-compatible AI tools under the OPEA umbrella.

Doing so would be undeniably beneficial: customers could then choose from multiple vendors according to their needs, resources, and budgets. The alternative is vendor lock-in, a danger that is always present in this space. Hopefully, OPEA’s open and neutral governance model will keep that from being the ultimate outcome.

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.
