At CoreWeave, we have a strong track record of accelerating our customers' AI journey by consistently leading the market—whether by being among the first to offer NVIDIA H100 and H200-powered instances or by becoming the first cloud provider to make NVIDIA GB200 NVL72-based instances generally available. Today, we are excited to share that we will be introducing new GPU instances featuring the upcoming NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU, which is designed to supercharge AI and other GPU-accelerated workloads with an incredible leap in performance and efficiency. These new CoreWeave instances are expected to be generally available in the coming months.
What’s new with NVIDIA RTX PRO 6000 Blackwell Server Edition?
The newly introduced RTX PRO 6000 Blackwell Server Edition offers a substantial performance leap over the previous-generation NVIDIA L40S. Built with fifth-generation Tensor Cores, it delivers more than five times the AI performance at FP4 precision, significantly speeding up large language model (LLM) inference and other deep learning tasks. Fourth-generation RTX technology, complete with Neural Shaders and DLSS 4, provides up to twice the rendering performance—particularly important for advanced visualization and content creation. Equipped with 96GB of GDDR7 memory delivering 1.6TB/s of bandwidth, the RTX PRO 6000 Blackwell efficiently handles massive datasets and complex computations, while PCIe Gen5 connectivity ensures faster data transfer and reduced latency. Compared to the NVIDIA L40S, the RTX PRO 6000 Blackwell can deliver up to 5.3x faster LLM inference and 3.5x faster text-to-image generation, offering a compelling upgrade for a range of AI and graphics-intensive workloads.
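To see why 96GB of memory combined with FP4 precision matters for LLM inference, a rough back-of-envelope sketch helps. The model sizes and the overhead factor below are illustrative assumptions, not official sizing guidance:

```python
# Back-of-envelope sketch: estimate whether an LLM's weights fit in the
# RTX PRO 6000 Blackwell's 96GB of GDDR7 at different precisions.
# The 15% overhead reserve is an illustrative assumption.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def weight_footprint_gb(n_params_billion: float, precision: str) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes), weights only."""
    return n_params_billion * BYTES_PER_PARAM[precision]

def fits(n_params_billion: float, precision: str,
         vram_gb: float = 96, overhead: float = 0.85) -> bool:
    # Reserve ~15% of VRAM for KV cache, activations, and runtime overhead.
    return weight_footprint_gb(n_params_billion, precision) <= vram_gb * overhead

# A 70B-parameter model needs ~140GB at FP16 (too large for one GPU)
# but only ~35GB at FP4, leaving headroom for the KV cache.
print(weight_footprint_gb(70, "fp16"))   # 140.0
print(weight_footprint_gb(70, "fp4"))    # 35.0
print(fits(70, "fp16"), fits(70, "fp4"))  # False True
```

Under these assumptions, quantizing to FP4 is what lets a 70B-parameter model fit comfortably on a single GPU rather than being sharded across several.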
Maximizing performance on CoreWeave
Each RTX PRO 6000 Blackwell instance on CoreWeave will support up to 8 GPUs and will be paired with Intel Emerald Rapids CPUs and NVIDIA BlueField DPUs, ensuring secure VPC isolation and high-performance networking in a multi-tenant environment. By offloading critical network tasks from the CPU, BlueField DPUs maximize the compute resources available for AI and HPC workloads while providing robust security at scale. RTX PRO 6000 Blackwell-based instances will also be supported by CoreWeave's Observability Services, which offer deep insights into the instances—resource utilization, system errors, temperatures, and other logs—to help you quickly spot and address issues before they escalate, keeping your most complex workflows running smoothly.
Beyond the hardware, these instances will be supported by CoreWeave Kubernetes Service (CKS) and Slurm on Kubernetes (SUNK) to streamline workload orchestration, making it easy to run containerized applications or HPC jobs. Additionally, RTX PRO 6000 Blackwell-based instances will seamlessly integrate with CoreWeave AI Object Storage (CAIOS) and Local Object Transport Accelerator (LOTA), providing performance-optimized data access and caching for data-intensive AI training and inference. In short, the entire CoreWeave platform is tuned to get the most out of every GPU cycle, letting you focus on innovating in AI and HPC rather than wrestling with infrastructure.
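On a Kubernetes-based platform like CKS, requesting GPUs typically comes down to setting a resource limit on the pod. The sketch below builds such a manifest as a plain Python dict; the pod name and container image are made-up placeholders, and `nvidia.com/gpu` is the standard extended-resource key exposed by the NVIDIA device plugin (the exact scheduling details on CKS may differ):

```python
# Hypothetical sketch of a Kubernetes Pod manifest requesting all eight
# GPUs on an instance. Image and pod name are placeholder assumptions;
# "nvidia.com/gpu" is the standard NVIDIA device-plugin resource key.
import json

def gpu_pod_manifest(name: str, image: str, gpus: int = 8) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "worker",
                "image": image,
                # For extended resources like GPUs, the limit alone is
                # sufficient; Kubernetes sets the request to match.
                "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
            }],
        },
    }

manifest = gpu_pod_manifest("llm-inference", "example.com/llm-server:latest")
print(json.dumps(manifest, indent=2))
```

Serialized to YAML, a manifest like this would be submitted with `kubectl apply -f`; the scheduler then places the pod on a node with eight free GPUs.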
Get ready for next-gen AI and graphics
We look forward to bringing you the next evolution in GPU-accelerated computing. Whether you’re tackling generative AI, LLM inference, or advanced rendering, our upcoming RTX PRO 6000 Blackwell GPU-based instances will elevate your workflow.
If you have questions or want to explore the NVIDIA RTX PRO™ 6000 Blackwell Server Edition for your workloads, get in touch—our team is here to help you push the boundaries of what’s possible.