At Red Hat Summit 2024 in Denver, Colorado, the open source software leader announced major new initiatives to bring the power of generative AI to the enterprise.
The headliners are Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform for developing and running open source language models, and InstructLab, a community project to enable domain experts to enhance AI models with their knowledge.
How Red Hat stands apart from other companies integrating and offering open source AI
According to Red Hat CEO Matt Hicks, RHEL AI differentiates itself from the competition in a few key ways.
First, Red Hat is focused on open source and a hybrid approach. “We believe that AI is not really different than applications. That you’re going to need to train them in some places, run them in other places. And we’re neutral to that hardware infrastructure. We want to run anywhere,” said Hicks.
Second, Red Hat has a proven track record of optimizing performance across different hardware stacks. “We have a long history of showing that we can make the most out of the hardware stacks below us. We don’t produce GPUs. I can make Nvidia run as fast as they can. I can make AMD run as fast as they can. I can do the same with Intel and Gaudi,” explained Hicks.
This ability to maximize performance across various hardware options, while still giving customers a choice of location and hardware, is rare in the market.
Finally, Red Hat’s open source approach means customers retain ownership of their IP. “It’s still your IP. We provide that service and subscription business, and you’re not giving up your IP to work with us on that,” said Hicks.
In the fast-moving AI market, Red Hat believes this combination of open source, hybrid flexibility, hardware optimization, and customer IP ownership will prove to be key differentiators for RHEL AI.
“We’re expanding the ability to deploy and run these models at scale,” said Ashesh Badani, Senior Vice President and Chief Product Officer at Red Hat, during a Q&A with reporters and analysts after the keynote in Denver. “Whether they come from our partnership with IBM Research, or, for example, something that customers might do with proprietary models of their own.”
A new platform emerges: RHEL AI
RHEL AI combines open source language models, such as the Granite family of models developed by IBM Research, with tools from the InstructLab project to allow customization and enhancement of the models.
It provides an optimized RHEL operating system image with hardware acceleration support and enterprise technical support from Red Hat.
“What we’re trying to do is enable the investments that our customers have already made in infrastructure supporting applications to extend to this new critical workload: enterprise AI, predictive analytics and generative AI,” said Chris Wright, Chief Technology Officer and Senior Vice President, Global Engineering at Red Hat.
Red Hat aims to deliver, on a single unified platform, the same reliability and confidence that customers already expect from it. The company is focused on enhancing today’s hybrid cloud infrastructure while also pushing forward the current state of application development and deployment in cloud native environments.
“It’s really exciting because we’re taking a lot of what our customers already know and extending it, so it’s not having to learn everything; you just have to learn the new stuff,” Wright added.
InstructLab enhances LLMs with synthetic training data generated from your company’s examples
The InstructLab project, also unveiled at the summit, aims to enable domain experts without data science skills to enhance language models by contributing their knowledge. It uses a novel method called LAB (Large-scale Alignment for chatBots) developed by IBM Research to generate high-quality synthetic training data from a small number of examples.
The LAB method has four simple steps. First, experts give examples of their knowledge and skills. Next, a “teacher” AI model looks at these examples to create lots of similar training data.
Then, this synthetic data gets checked for quality. Finally, the language model learns from the approved synthetic data. This lets the community constantly improve models by sharing what they know. It’s a low-cost way to make the AI much smarter using just a small number of human examples.
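The four steps above can be sketched conceptually in code. The following minimal Python sketch is illustrative only: every function name is a hypothetical stand-in rather than the actual InstructLab API, and the teacher model and quality check are stubbed out to show the shape of the pipeline.

```python
# Conceptual sketch of the LAB pipeline described above.
# All names are illustrative stand-ins, not the real InstructLab API;
# the teacher model and quality check are stubbed for clarity.

def teacher_generate(seed_examples, n_per_seed=3):
    """Step 2: a 'teacher' model expands a few expert seeds into many samples."""
    synthetic = []
    for seed in seed_examples:
        for i in range(n_per_seed):
            # A real teacher LLM would paraphrase and extend each seed;
            # here we simply emit labeled variants.
            synthetic.append({
                "question": f"{seed['question']} (variant {i + 1})",
                "answer": seed["answer"],
            })
    return synthetic

def quality_filter(examples):
    """Step 3: keep only samples that pass a basic quality check."""
    return [ex for ex in examples if ex["answer"].strip()]

# Step 1: domain experts contribute a handful of seed examples.
seeds = [{"question": "Which port does HTTPS use by default?", "answer": "443"}]

# Steps 2 and 3: generate synthetic data, then vet it.
training_data = quality_filter(teacher_generate(seeds))
print(len(training_data))  # 3 vetted samples from a single seed

# Step 4 (not shown): fine-tune the base model on `training_data`.
```

The point of the pattern is leverage: a single human-written seed fans out into many vetted training samples, which is what keeps the human effort small relative to the amount of data the model ultimately learns from.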
This allows models to be continuously improved and fine-tuned through community contributions in a cost-effective way. IBM has already used the LAB method to create enhanced versions of open source models like Meta’s Llama and the Mistral family of models.
OpenShift AI 2.9
OpenShift AI is also getting an upgrade to version 2.9, with new features for serving both predictive and generative models and an expanded partner ecosystem. Red Hat emphasized their commitment to giving customers flexibility and choice in how they deploy AI.
Red Hat is rolling out its AI offerings to bring open source innovation to the enterprise in waves.
Developers can get started immediately with the InstructLab community project, available now to enhance open source models with domain knowledge. RHEL AI is also launching in developer preview to provide an optimized foundation for these models with enterprise support. The latest updates to OpenShift AI are generally available now, delivering MLOps capabilities to serve both predictive and generative AI models at scale. Looking ahead, new Ansible Lightspeed offerings to automate AI workflows are slated for later this year.
A community-focused approach to enterprise AI
With RHEL AI and InstructLab, Red Hat aims to do for AI what it did for Linux and Kubernetes — make powerful technologies accessible to a broad community through open source. If successful, it could accelerate the adoption of generative AI in the enterprise by enabling domain experts to enhance models with their knowledge and deploy them in production environments with trust and support.
“It’s also an important call out. It speaks to our heritage regarding investing in the power of open and the power of community,” said Badani. “And then we want to make sure we can carry that forward in AI.”
“We’re really excited that the state of the art has gotten to the place where now we can start thinking about how we expand what open means in this context,” added Wright.
Content Courtesy – www.venturebeat.com