At CoreWeave, we are on a mission to provide developers with the best suite of software and cloud services to help them innovate using AI, build cutting-edge applications, and drive meaningful impact for their businesses. We have consistently been first to market with instances featuring cutting-edge NVIDIA GPUs, most recently becoming the first cloud provider to make NVIDIA GB200 NVL72-based instances generally available. Today, we are announcing support for the NVIDIA AI Enterprise software platform and DGX Cloud Serverless Inference, powered by NVIDIA Cloud Functions (NVCF), on the CoreWeave Cloud Platform to help customers easily build, deploy, and manage AI applications. Support for NVIDIA AI Enterprise and NVIDIA Cloud Functions on CoreWeave is available for NVIDIA Hopper GPUs and NVIDIA GB200 NVL72.
NVIDIA AI Enterprise Software and NVIDIA Cloud Functions (NVCF)
As AI models become more capable and adoption accelerates, optimizing and scaling inference solutions grows increasingly complex. Customers often spend significant time managing deployments and struggle to optimize model-serving performance to realize the full value of AI. CoreWeave simplifies this, collaborating closely with NVIDIA to deliver optimized solutions that enable customers to fully capture the value of their AI investments.
NVIDIA AI Enterprise is a cloud-native software platform that streamlines the development and deployment of production-grade, end-to-end generative AI pipelines. It helps organizations build data flywheels for the next era of agentic AI. NVIDIA AI Enterprise includes NVIDIA NIM microservices, NVIDIA NeMo, AI frameworks, reference applications and workflows, SDKs, libraries, and infrastructure management.
NVIDIA Cloud Functions, a core technology of DGX Cloud Serverless Inference, is an auto-scaling AI inference solution that enables application deployment with speed and reliability. NVCF simplifies hosting AI inference workloads such as NVIDIA NIM microservices on CoreWeave by reducing the time needed to deploy optimized infrastructure. NIM microservices are highly performant containerized AI models and inference engines that can be deployed and scaled on cutting-edge NVIDIA GPUs.
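Once a NIM microservice is running, it serves an OpenAI-compatible HTTP API, so calling it looks like calling any chat-completions endpoint. The sketch below assembles such a request; the base URL and model name are placeholders for illustration, not a specific CoreWeave or NVIDIA endpoint.

```python
# Sketch: calling a deployed NIM microservice through its OpenAI-compatible
# chat-completions endpoint. The base URL and model name are placeholders --
# substitute the endpoint and model of your own NIM deployment.
import json
import urllib.request


def build_chat_request(base_url, model, prompt, api_key=None):
    """Assemble the URL, headers, and JSON body for a chat completion."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return url, headers, body


def send_chat_request(url, headers, body, timeout=60):
    """POST the request and return the parsed JSON response."""
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())


# Build (but do not send) a request against a hypothetical internal endpoint.
url, headers, body = build_chat_request(
    "http://nim.example.internal:8000",
    "meta/llama-3.1-8b-instruct",
    "Summarize NVCF in one sentence.",
)
```

Because the endpoint follows the OpenAI schema, existing client libraries and tooling that speak that API can generally be pointed at a NIM deployment unchanged.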
Support for NVIDIA AI Enterprise and NVCF on CoreWeave Cloud Platform
The CoreWeave Cloud Platform is purpose-built to deliver maximum performance and efficiency for AI workloads, spanning both model training and inference. CoreWeave Kubernetes Service (CKS) includes ready-to-use components such as optimized networking and storage interfaces, GPU drivers, Slurm-on-Kubernetes integration, and built-in observability tools, ensuring production readiness from day one. Running NVIDIA AI Enterprise and NVCF on CKS gives customers powerful, ready-to-run AI services that improve latency and throughput, driving superior AI performance and faster time to value.
To get started with CoreWeave’s high-performance platform for NIM microservices and NVCF, customers can install the NVIDIA Cluster Agent Operator, available from NVIDIA’s NGC Platform, to enable an existing CKS cluster to act as a deployment target for NVCF. With the CKS cluster registered and the Cluster Agent installed, an NVCF function can then be created and deployed through the NVIDIA Cloud Functions console or the NGC CLI. When creating a cloud function through the UI, customers can select “Elastic NIM” under Function Type to leverage NVIDIA performance-optimized NIM microservices, which are part of the NVIDIA AI Enterprise software platform.
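For customers who prefer automation over the console, function definitions can also be created against NVCF's management REST API. The sketch below assembles such a request; the endpoint URL and payload field names reflect our reading of the NVCF API documentation and should be treated as assumptions, and the container image is a placeholder.

```python
# Sketch: defining a new NVCF function over the NVCF management REST API.
# Endpoint and field names are assumptions based on the NVCF docs; verify
# against the current API reference before relying on them.
import json
import urllib.request

# Assumed NVCF function-management endpoint.
NVCF_API = "https://api.ngc.nvidia.com/v2/nvcf/functions"


def build_create_function_request(ngc_api_key, name, image, port=8000):
    """Assemble the POST request that defines (but does not deploy) a function."""
    headers = {
        "Authorization": f"Bearer {ngc_api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "name": name,
        "containerImage": image,                 # e.g. a NIM container from NGC
        "inferenceUrl": "/v1/chat/completions",  # path the container serves
        "inferencePort": port,
    }
    return NVCF_API, headers, body


def create_function(ngc_api_key, name, image, port=8000):
    """Send the creation request; the response carries the function id/version."""
    url, headers, body = build_create_function_request(ngc_api_key, name, image, port)
    req = urllib.request.Request(
        url, data=json.dumps(body).encode(), headers=headers, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

After creation, the function still needs a deployment step that pins it to a deployment target such as a registered CKS cluster; that step is performed through the console or the NGC CLI as described above.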
CoreWeave Leads the Way in Offering NVCF on NVIDIA GB200 NVL72
NVIDIA GB200 NVL72 represents a step-function improvement in the performance available to leading AI labs and enterprises pushing the boundaries of AI. As part of the enablement work for NVIDIA AI Enterprise and NVCF, CoreWeave engineers deployed and ran NVCF on CoreWeave instances accelerated by GB200 NVL72, running CoreWeave Kubernetes Service (CKS). They created a new cloud function tied to a custom model and deployed it on GB200 NVL72 instances, ready for invocation. The image below shows an NVCF function hosting a custom model running on a CoreWeave instance accelerated by GB200 NVL72.

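Once a function like this is deployed, clients invoke it through NVCF's hosted invocation API rather than addressing the GB200 NVL72 instances directly. The sketch below shows the polling pattern for such an invocation; the endpoint paths and the NVCF-REQID header follow our reading of the NVCF documentation and should be treated as assumptions.

```python
# Sketch: invoking a deployed NVCF function (e.g. one backed by GB200 NVL72
# instances) via the polling invocation API. URLs and the NVCF-REQID header
# are assumptions based on the NVCF docs.
import json
import time
import urllib.request

INVOKE_URL = "https://api.nvcf.nvidia.com/v2/nvcf/pexec/functions/{fid}"
STATUS_URL = "https://api.nvcf.nvidia.com/v2/nvcf/pexec/status/{rid}"


def invoke(function_id, payload, api_key, poll_interval=2.0):
    """POST a payload to a function and poll until the result is ready."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    req = urllib.request.Request(
        INVOKE_URL.format(fid=function_id),
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )
    resp = urllib.request.urlopen(req)
    # HTTP 202 means the request is queued; poll its status until it completes.
    while resp.status == 202:
        request_id = resp.headers["NVCF-REQID"]
        time.sleep(poll_interval)
        resp = urllib.request.urlopen(
            urllib.request.Request(STATUS_URL.format(rid=request_id), headers=headers)
        )
    return json.loads(resp.read())
```

The polling pattern matters for long-running inference: the gateway returns immediately with a request id instead of holding the connection open while the GPU works.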
Cloud Platform Built for Driving the AI Revolution
With technical innovation across our stack, we believe we are leading the way in purpose-built cloud infrastructure that enables our clients to build, tune, and deploy AI applications. By supporting NVIDIA Cloud Functions and NVIDIA AI Enterprise software on the CoreWeave Cloud Platform, we are giving customers additional options to innovate faster by leveraging the combined power of NVIDIA software and CoreWeave infrastructure, with the ultimate goal of bringing their solutions to market with higher performance and lower costs.
We invite our partners and customers to explore these advancements and collaborate with us as we continue to drive the future of AI. Reach out to us to explore these innovations from NVIDIA and CoreWeave.