VAST Data and Run:ai Revolutionise AI Operations with Full-Stack AI Solution Powered by NVIDIA Accelerated Computing

Announcement posted by VAST Data 14 Feb 2024

New collaboration optimises resource and data management for end-to-end AI pipelines

SYDNEY - February 14, 2024 - VAST Data, the AI data platform company, today announced a groundbreaking partnership with Run:ai, the leader in compute orchestration for AI workloads. This collaboration marks a monumental step in redefining AI operations at scale, offering a full-stack solution encompassing compute, storage, and data management. Together, VAST and Run:ai are addressing the critical needs of enterprises embarking on large-scale AI initiatives.

Run:ai streamlines NVIDIA-accelerated AI infrastructure across private, public, and hybrid clouds, boosting AI project efficiency through dynamic workload scheduling and innovative GPU fractioning that improve GPU allocation and utilisation. The platform caters to a range of needs, from data scientists' interactive environments to large-scale training and reliable, scalable inference. Run:ai's Open Architecture ensures a future-proof, collaborative platform that integrates with a broad ecosystem of industry leaders. For the data layer, the VAST Data Platform unifies storage, database, and containerised compute engine services into a single, scalable software platform that was built from the ground up to power AI and GPU-accelerated tools in modern data centers and clouds.
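
To illustrate the GPU fractioning concept, the sketch below submits a Kubernetes pod that requests half a GPU through a Run:ai-style pod annotation, using the official Kubernetes Python client. The annotation key, scheduler name, container image, and training script are assumptions made for the example rather than Run:ai's documented interface; consult Run:ai's documentation for the exact form.

```python
# Minimal sketch: requesting a fractional GPU for a training pod via a
# Run:ai-style annotation. The annotation key and scheduler name below are
# assumptions for illustration; check Run:ai's documentation for the real ones.
from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="fractional-gpu-demo",
        annotations={"gpu-fraction": "0.5"},  # assumed fractional-GPU annotation
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",  # assumed Run:ai scheduler name
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # example NGC image
                command=["python", "train.py"],  # hypothetical training script
            )
        ],
        restart_policy="Never",
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```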

"We've recognised that customers need a more holistic approach to AI operations," said Renen Hallak, CEO and co-founder of VAST Data. "Our partnership with Run:ai transcends traditional, disparate AI solutions, integrating all of the components necessary for an efficient AI pipeline. Today's announcement offers data-intensive organisations across the globe the blueprint to deliver more efficient, effective, and innovative AI operations at scale."

Together, VAST Data and Run:ai are providing organisations with:

  • Full-Stack Visibility for Resource and Data Management: The synergy between the VAST Data Platform and Run:ai creates a comprehensive AI solution, providing enterprises with full-stack visibility encompassing compute, networking, storage, and workload management across their AI operations.
  • Cloud Service Provider-Ready Infrastructure: CSPs are pivotal in bringing GPU availability to enterprises integrating AI into their business processes. VAST and Run:ai are offering CSPs a blueprint to deploy and manage AI cloud environments efficiently. The VAST Data Platform, together with Run:ai, presents a tested and validated framework for CSPs to deliver secure, enterprise-grade AI environments across a single shared infrastructure, with a Zero Trust approach to compute and better data isolation and utilisation.
  • Optimised End-to-End AI Pipelines: Across the pipeline, from multi-protocol ingest and data processing to model training and inferencing, organisations can accelerate data preparation using the NVIDIA RAPIDS Accelerator for Apache Spark, along with other AI frameworks and libraries available with the NVIDIA AI Enterprise software platform for developing and deploying production-grade AI applications, while the VAST DataBase enables high-performance data pre-processing (see the sketch after this list).
  • Simple AI Deployment and Infrastructure Management: Run:ai offers fair-share scheduling so users can easily and automatically share clusters of GPUs without memory overflows or processing clashes, paired with simplified multi-GPU distributed training. In addition, the VAST DataSpace makes data access across geographies and multi-cloud environments easy, while providing the encryption, access controls, and data security that customers require.
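
As referenced in the pipeline bullet above, the following is a minimal, illustrative PySpark sketch of GPU-accelerated data preparation with the NVIDIA RAPIDS Accelerator for Apache Spark. It assumes the rapids-4-spark plugin jar is already on the Spark classpath and that GPUs are available to the executors; the storage paths and column names are hypothetical.

```python
# Illustrative GPU-accelerated data preparation with the RAPIDS Accelerator
# for Apache Spark. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("gpu-data-prep")
    # Enable the RAPIDS Accelerator plugin so supported operations run on the GPU.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .getOrCreate()
)

# Hypothetical raw dataset read over an S3-compatible endpoint.
raw = spark.read.parquet("s3a://example-bucket/raw-events/")

# Typical pre-processing: filter, derive features, aggregate, and write a
# training-ready table. Operations the plugin supports execute on the GPU;
# anything unsupported falls back to CPU automatically.
prepared = (
    raw.filter(F.col("label").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("user_id", "event_date")
       .agg(
           F.count("*").alias("events"),
           F.avg("latency_ms").alias("avg_latency_ms"),
       )
)

prepared.write.mode("overwrite").parquet("s3a://example-bucket/prepared/training-set/")
```

Because operations the Accelerator does not support fall back to CPU execution automatically, enabling the plugin does not require rewriting existing Spark jobs.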

"A key challenge in the market is providing equitable access to compute resources for diverse data science teams," explained Omri Geller, CEO and co-founder at Run:ai. "Our collaboration with VAST emphasises unlocking the maximum performance potential within complex AI infrastructures and greatly extends visibility and data management across the entire AI pipeline. This is a first of its kind partnership that will provide immense value to our joint customers."

The VAST Data Platform is VAST's breakthrough approach to data management and accelerated computing. For enterprises and CSPs, the platform serves as the comprehensive software infrastructure required to capture, catalogue, refine, enrich, store, and secure unstructured data with real-time analytics for AI and deep learning. Through its Open Architecture, Run:ai integrates seamlessly with NVIDIA AI Enterprise and NVIDIA accelerated computing, helping customers speed up development, scale AI infrastructure, and lower compute costs so they can orchestrate and manage compute resources effectively.

By deeply integrating NVIDIA's market-leading AI computing offering with the dynamic AI workload orchestration of the Run:ai platform and VAST's industry-disrupting AI data platform, organisations can optimally utilise their resources and gain better control and visibility across both the compute and data layers.

###

VAST + Run:ai blueprints, solution briefs, and demos will first be available at NVIDIA GTC 2024. Learn more about VAST and Run:ai at NVIDIA GTC by visiting VAST Data at Booth #1424.

About VAST Data

VAST Data is the data platform company built for the AI era. Organisations trust the VAST Data Platform, the new standard for enterprise AI infrastructure, to serve their most data-intensive computing needs. VAST Data empowers enterprises to unlock the full potential of their data by providing AI infrastructure that is simple, scalable, and architected from the ground up to power deep learning and GPU-accelerated data centers and clouds. Launched in 2019, VAST Data is the fastest-growing data infrastructure company in history. For more information, please visit the VAST Data website and follow VAST Data on X (formerly Twitter) and LinkedIn.

About Run:ai

The Run:ai platform brings cloud-like simplicity to AI resource management, providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system, which includes a workload scheduler and an abstraction layer, helps IT simplify AI implementation, increase team productivity, and gain full utilisation of expensive GPUs. Using Run:ai, companies streamline the development, management, and scaling of AI applications across any infrastructure, including on-premises, edge, and cloud. For more information, please visit the Run:ai website and follow Run:ai on X (formerly Twitter) and LinkedIn.