
CoreWeave Partners with Run:ai to Power AI Inference and Training at Scale

CoreWeave is excited to announce its partnership with Run:ai, a leader in AI workload and GPU orchestration. This collaboration gives leading AI enterprises and labs enhanced options for efficiently deploying AI inference and training. Through this integration, CoreWeave customers can now manage workloads on CoreWeave's high-performance infrastructure using the Run:ai platform.

Run:ai’s platform is engineered to optimize the utilization of AI computing resources in cloud-native environments. It complements CoreWeave’s cloud-based architecture, and the two companies have worked closely to integrate their technologies, ensuring customers benefit from a seamless and highly efficient AI workload management solution.

“Run:ai is an incredible tool for CoreWeave customers who want a more hands-off approach to accelerating AI deployments. The platform is scalable, agile, and cost-effective for enterprise customers—requiring no manual resource intervention.”
— Brian Venturo, Chief Strategy Officer at CoreWeave 

The two companies have jointly supported multiple enterprise customers over the past year, delivering Run:ai installations in CoreWeave Kubernetes Service (CKS) clusters. One such deployment provisioned hundreds of NVIDIA H100 Tensor Core GPUs, interconnected by NVIDIA Quantum-2 InfiniBand networking, for an AI lab focused on life sciences.

“CoreWeave’s platform offers the robust, cloud-based infrastructure our customers need. Our partnership has enabled us to deliver powerful, integrated solutions that drive real value for enterprises. We look forward to building on this success and advancing AI capabilities together in the future.”
— Yael Dor, VP Sales & Partnerships at Run:ai

How Run:ai integrates with CoreWeave Cloud

Deploying AI workloads is increasingly complex. Kubernetes has become the de facto standard for orchestration, but enterprises—regardless of their level of Kubernetes expertise—often struggle to navigate the vast ecosystem and piece together their own solutions for maximizing AI workload throughput and infrastructure efficiency.

Run:ai’s AI workload and GPU orchestration platform is purpose-built for containers and Kubernetes. It maximizes GPU efficiency and workload capacity through a comprehensive set of capabilities, including:

  • Strategic resource management
  • Advanced scheduling and prioritization
  • Dynamic resource allocation
  • Support for the entire AI lifecycle
  • Enhanced monitoring and visibility
  • Seamless scalability
  • Automated workload distribution

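To make the scheduling capabilities above concrete, the fragment below is a minimal, illustrative sketch of how a workload can opt into Run:ai's scheduler on a Kubernetes cluster. The pod name, label value, and container image are hypothetical placeholders, not CoreWeave- or customer-specific values:

```yaml
# Hypothetical example: a single-GPU training pod handed to Run:ai's scheduler.
# Names, labels, and the image are placeholders for illustration only.
apiVersion: v1
kind: Pod
metadata:
  name: train-demo            # hypothetical workload name
  labels:
    project: team-a           # Run:ai projects govern quotas and prioritization
spec:
  schedulerName: runai-scheduler   # delegate scheduling decisions to Run:ai
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.05-py3   # placeholder training image
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU via the NVIDIA device plugin
```

In practice, Run:ai workloads are typically submitted through the `runai` CLI or the platform UI rather than raw pod specs; this fragment only illustrates how the scheduler slots into standard Kubernetes objects.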
As the AI Hyperscaler, CoreWeave is purpose-built to bring ultra-performant AI infrastructure to leading enterprises, platforms, and AI labs. This partnership with Run:ai further expands the technical ecosystem within reach for CoreWeave customers, giving them more deployment options and greater access to compute resources at scale.

CoreWeave customers can seamlessly integrate Run:ai’s scheduling and orchestration platform into their environments. Run:ai provides a powerful orchestration layer over CoreWeave's infrastructure, enabling customers to efficiently manage AI development, training, and inference workloads. This helps customers take advantage of the industry-leading performance, utilization, and reliability of CoreWeave Cloud.

Reach out to CoreWeave, and a member of our team will connect with you.

Connect with us
