RHEL AI is coming: Why does enterprise Linux need AI? | heise online


An image mode for RHEL and the new RHEL AI – Red Hat presented interesting innovations for its enterprise Linux at the Summit.


(Image: Generated with Midjourney by iX)

By Udo Seidel
This article was originally published in German and has been automatically translated.

At the Summit, Red Hat demonstrated the new image mode for RHEL, a modern way of building installable images of the operating system. Instead of Kickstart and dnf, it uses methods from the container world. Technically, it is based on the bootc project and its so-called bootable containers, which contain a kernel and rely on OSTree for the image mode. Red Hat provides a base RHEL image that can be customized via a Containerfile and used immediately. With additional tools, the resulting image can also be deployed on KVM or in the cloud. Image mode for RHEL is already available as a technology preview.
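The workflow can be sketched with a Containerfile; the base-image path below is illustrative (pulling it requires a valid Red Hat subscription), and the customization layers are just examples:

```Dockerfile
# Illustrative Containerfile for RHEL image mode: start from a bootable
# base container and layer changes with the usual container tooling.
FROM registry.redhat.io/rhel9/rhel-bootc:latest

# Customizations are ordinary container build steps
RUN dnf -y install httpd && dnf clean all
RUN systemctl enable httpd

# Static content baked into the OS image
COPY index.html /var/www/html/
```

The result is built and pushed like any application container (e.g. with `podman build` and `podman push`); additional tooling then converts it into a disk image for bare metal, KVM, or cloud targets.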

The same applies to the new member of the RHEL family: Red Hat Enterprise Linux AI. Its core component is, of course, the familiar operating system, available from Red Hat as a directly bootable image under the usual subscription model. On top of that come the Granite models, which Red Hat and IBM have placed under an open-source license. The third key component comes from the InstructLab project, itself only recently presented to the world. It uses the LAB method developed by IBM to improve large language models. InstructLab is intended to simplify working with RHEL AI, both in terms of initial access and actual model customization. The entire AI community is expected to benefit, which should then be reflected in improved applications such as chatbots.
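Customization in InstructLab works by contributing question-and-answer examples to a taxonomy tree, from which synthetic training data is generated. A hedged sketch of such a taxonomy entry (the exact file layout and field names follow the project's documentation and may change):

```yaml
# Hypothetical skill entry, e.g. compositional_skills/writing/summarize/qna.yaml
version: 2
task_description: Summarize short technical texts in one sentence.
created_by: example-user
seed_examples:
  - question: >
      Summarize: "bootc builds bootable operating-system images using
      container build workflows."
    answer: >
      bootc applies container tooling to produce full bootable OS images.
  - question: >
      Summarize: "InstructLab generates synthetic training data from
      community-contributed examples."
    answer: >
      InstructLab turns contributed Q&A examples into synthetic training data.
```

From entries like this, the project's tooling generates additional training data and fine-tunes the model, so contributors do not need to supply large datasets themselves.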

The Podman AI Lab also fits into this picture. It is an extension of Podman Desktop and is already available. It is designed to ease the development of AI-based applications: software can be tested locally against models, and concerns such as data access and security are simplified for the developer thanks to standards. Models can even be adapted. A catalog of representative sample applications makes it easier for newcomers to get started. The bundled LLMs cover familiar use cases such as text summarization, chatbots, visual object identification, audio-to-text transcription, and source-code generation.
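Testing locally against a model typically means talking to an OpenAI-compatible HTTP endpoint that the local model service exposes. A minimal sketch, assuming such an endpoint; the URL, port, and model name below are placeholders (Podman AI Lab shows the actual host/port when it starts a model service):

```python
import json
import urllib.request

# Assumed local endpoint of the model service; adjust to what the
# tooling actually assigns on your machine.
ENDPOINT = "http://localhost:8000/v1/chat/completions"


def chat_payload(prompt: str, model: str = "instructlab/granite-7b-lab") -> bytes:
    """Build a minimal OpenAI-style chat-completion request body."""
    return json.dumps({
        "model": model,  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
    }).encode()


def ask(prompt: str) -> str:
    """POST the prompt to the local model service and return its reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=chat_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)
    return answer["choices"][0]["message"]["content"]
```

Because the wire format matches the widely used chat-completion convention, the same application code can later be pointed at a remote or production endpoint by changing only the URL.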

The latest AI innovation comes from the Kubernetes area – the Konveyor project. It aims to help modernize applications. Typical tasks include checking whether the application is ready for containers or what changes would be necessary. This also includes estimating migration costs. Red Hat plans to equip Konveyor with generative AI in the coming months. This should provide even more assistance with application modernization.

Red Hat therefore presented an extensive AI portfolio at the Summit. The focus is on the platform area, with plenty of connecting points for developers and data scientists.

Finally, there was an announcement away from AI: based on Kuadrant, Red Hat is releasing a developer preview of Connectivity Link. The underlying open-source project aims to improve the connections between Kubernetes applications, covering not only the direct technical connection, but also its security, the interfaces, and policy management. Red Hat Connectivity Link combines the capabilities of Kuadrant with OpenShift, giving users global load balancing, management of multi-cluster ingress, authentication and authorization, and other API-level features. Connectivity Link is scheduled for release in the second half of 2024.
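Kuadrant expresses such policies as Kubernetes custom resources attached to Gateway API objects. A hedged sketch of a rate-limit policy (API version, kind, and field names follow the Kuadrant documentation at the time of writing and may differ in the product):

```yaml
# Hypothetical Kuadrant policy: limit requests to an HTTPRoute
apiVersion: kuadrant.io/v1beta2
kind: RateLimitPolicy
metadata:
  name: example-limit
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: example-route      # placeholder route name
  limits:
    "global":
      rates:
        - limit: 5           # max 5 requests ...
          duration: 10       # ... per 10-second window
          unit: second
```

Authentication and authorization follow the same pattern with a separate policy kind, which is what allows one platform team to manage connectivity rules declaratively across clusters.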

(nie)