Nvidia acquires AI workload management startup Run:ai

Nvidia is set to acquire Run:ai, a Tel Aviv-based company that helps developers and operations teams manage and optimize AI hardware infrastructure. The purchase price was not disclosed, but sources told TechCrunch it is around $700 million.

Ctech had earlier reported that Nvidia could pay upwards of $1 billion for Run:ai; evidently, those negotiations concluded without a hitch.

Nvidia intends to continue offering Run:ai's products under their existing business model and to fold Run:ai's product development into its DGX Cloud AI platform, which gives enterprises the compute infrastructure and software they need to train AI models. Customers of Nvidia's DGX servers, workstations, and DGX Cloud will also gain access to Run:ai's capabilities, particularly for generative AI workloads that run across multiple data centers.

Run:ai CEO Omri Geller said he was excited about the acquisition, noting that the company has worked closely with Nvidia since 2020 and that the two share a commitment to helping customers get the most out of their infrastructure.

Geller co-founded Run:ai with Ronen Dar and Tel Aviv University Professor Meir Feder, with the vision of building a platform that could break AI models into fragments that run concurrently across diverse hardware, whether on-premises servers, public clouds, or edge devices.
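
To make that idea concrete, here is a minimal, hypothetical Python sketch of dynamic workload allocation: a simple scheduler hands queued AI jobs to whichever device in a shared pool is free. The Device and Scheduler classes and the job names are illustrative assumptions only and do not reflect Run:ai's actual software or APIs.

```python
# Hypothetical sketch of dynamic accelerator allocation: queued AI jobs are
# assigned to whichever "device" in a shared pool becomes available.
# Illustrative only; not Run:ai's implementation.
import queue
import threading
import time


class Device:
    """Represents one compute resource (GPU, cloud instance, edge node)."""

    def __init__(self, name: str) -> None:
        self.name = name


class Scheduler:
    """Pulls jobs from a shared queue and runs them on free devices."""

    def __init__(self, devices: list[Device]) -> None:
        self.devices = devices
        self.jobs: "queue.Queue[tuple[str, float]]" = queue.Queue()

    def submit(self, job_name: str, duration: float) -> None:
        # Enqueue a job; duration stands in for training/inference time.
        self.jobs.put((job_name, duration))

    def _worker(self, device: Device) -> None:
        # Each device keeps taking jobs until the queue is drained.
        while True:
            try:
                job_name, duration = self.jobs.get_nowait()
            except queue.Empty:
                return
            print(f"{device.name} running {job_name}")
            time.sleep(duration)  # placeholder for the actual workload
            self.jobs.task_done()

    def run(self) -> None:
        threads = [
            threading.Thread(target=self._worker, args=(device,))
            for device in self.devices
        ]
        for thread in threads:
            thread.start()
        for thread in threads:
            thread.join()


if __name__ == "__main__":
    pool = [Device("gpu-0"), Device("gpu-1"), Device("edge-node-0")]
    scheduler = Scheduler(pool)
    for i in range(6):
        scheduler.submit(f"model-fragment-{i}", duration=0.1)
    scheduler.run()
```

In practice, an orchestration layer like Run:ai's operates on GPUs, clusters, and containers rather than Python threads, but the scheduling principle, matching pending AI work to whatever capacity is currently free, is the same.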

Run:ai has few direct competitors, but other firms are exploring similar dynamic hardware allocation concepts for AI workloads, among them Grid.ai, whose platform lets AI model training run in parallel across different types of compute.

Run:ai quickly built a substantial client base among Fortune 500 companies, which in turn attracted significant venture capital investment. Before the Nvidia acquisition, Run:ai had raised $118 million from investors including Insight Partners, Tiger Global, S Capital, and TLV Partners.

In a blog post, Alexis Bjorlin, Nvidia's Vice President of DGX Cloud, noted that customers' AI deployments are becoming increasingly complex and that businesses want to make more efficient use of their AI compute resources.

A recent ClearML survey on AI adoption among organizations found that the biggest obstacle to scaling AI in 2024 has been the limited availability and high cost of compute, with infrastructure complications coming in second.

Bjorlin emphasized that sophisticated scheduling is needed to manage and orchestrate workloads such as generative AI, recommendation engines, and search platforms, improving performance at both the system and infrastructure levels. “Nvidia’s robust computing platform, in conjunction with Run:ai’s technology, will persist in supporting an extensive range of third-party solutions, offering clients versatility and options. Nvidia, in partnership with Run:ai, is set to provide a unified network that facilitates access to GPU solutions everywhere,” she stated.

Run:ai stands out as one of Nvidia’s most significant acquisitions, following its $6.9 billion purchase of Mellanox in March 2019.
