Our Capacity Plans for CoreWeave Data Centers

Capacity plans are a hot-button issue in the AI space—and for good reason. Gone are the days of distributed computing setups; welcome to the era of megaclusters with hundreds or thousands of GPUs in a single fabric.

With the AI boom exponentially ramping up power demand, AI enterprises have a herculean task ahead of them: Finding cloud providers with infrastructure that can support megaclusters at supercomputing scale.

At CoreWeave, our data centers are purpose-built as AI infrastructure that meets AI enterprises’ growing capacity needs. We’re ushering in a total redesign of the data center, where we build for growing customer use cases for AI compute instead of tailoring them after the fact.

We’ve identified four critical challenges shaping the AI landscape in 2025 and beyond—and how our data center teams at CoreWeave tackle them with optimal efficiency, performance, and security.

Challenge 1: Increased rack power consumption

The demand for high-density compute will only grow as AI projects expand in scale, complexity, and capabilities. Now that NVIDIA Blackwell is live, these next-generation systems have set new power standards and expectations. After all, NVIDIA GB200 NVL72 packs 72 GPUs into a single rack, delivering up to 30x faster real-time trillion-parameter LLM inference.

Here’s the thing: Higher performance means more power, which means hotter chips and more heat to dissipate. With typical co-location sites only supporting 5 to 10 kW positions, supporting growing rack density with traditional rack builds will be impossible. Air cooling alone can’t effectively or sustainably dissipate the sheer heat coming off these super-powered, AI-specialized GPUs.

Our solution: Liquid-cooled infrastructure—always

Blackwell (and beyond) means we’re looking at completely new rack builds. With up to four times more power per rack, air cooling alone can’t keep modern hardware properly cooled. That’s why we’re dedicating our data center infrastructure to a better—and greener—way forward.

We expect that most CoreWeave data centers opening in 2025 will have liquid cooling capabilities. Unlike legacy data center operators, we don’t have to carve out small portions of a facility to retroactively support liquid cooling. Our teams design entire data centers on a foundation of liquid cooling—enabling us to support racks of roughly 130 kW.
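
To put that figure in context, here’s a minimal back-of-envelope sketch in Python. Every per-component wattage below is an illustrative assumption rather than a published CoreWeave or NVIDIA specification; the point is simply to show why a 72-GPU rack lands in the ~130 kW range and why 5 to 10 kW colocation positions can’t come close.

```python
# Back-of-envelope rack power estimate for a 72-GPU, Blackwell-class rack.
# Every wattage below is an illustrative assumption, not a published spec.

gpu_count = 72
watts_per_gpu = 1_200                 # assumed draw per GPU under full load
cpu_count = 36
watts_per_cpu = 700                   # assumed draw per CPU tray (CPU + memory)
networking_and_misc_watts = 18_000    # assumed: NVLink switches, NICs, fans, losses

rack_watts = (gpu_count * watts_per_gpu
              + cpu_count * watts_per_cpu
              + networking_and_misc_watts)

colo_position_watts = 10_000          # upper end of a typical air-cooled colo position

print(f"Estimated rack draw: {rack_watts / 1_000:.0f} kW")
print(f"Legacy colo positions needed for one rack: {rack_watts / colo_position_watts:.0f}")
```

The exact numbers vary by SKU and facility, but the order of magnitude is what drives the liquid cooling requirement.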

Our teams can take that base design, iterate on it, and repeat it wherever we go, helping to ensure that we give our customers a fast time-to-market experience.

Challenge 2: Lack of support from traditional data center designs

LLMs are only getting bigger and more complex. Clouds need to support larger training clusters to get cutting-edge models to market. As a result, access to power—and maximized uptime—are top-of-mind concerns for AI enterprises when they’re identifying the right partner supplying AI infrastructure.

But when that demand means megaclusters of 100,000+ GPUs drawing hundreds of megawatts of power, it can create real strain. That’s especially true for traditional data centers designed to support CPU workloads, as they inherently aren’t equipped to handle the scale required of GPU megaclusters.

Our solution: Strategic partnerships and hands-on building

AI enterprises benefit from specialized infrastructure to train models, run inference, and make their innovations a reality. That specialization includes data center design, which needs to be purpose-built for AI use cases.

At CoreWeave, we have meaningful partnerships with our hardware and data center vendors. We can actually co-design critical components of data center infrastructure to ensure they are custom-made for the highest performance of AI workloads. Additionally, our collaboration with NVIDIA helps ensure access to high-quality AI platforms.

CoreWeave data centers focus on AI as a specialized use case. Unlike general cloud providers, our buildings are not beholden to legacy data center designs that don’t consider the unique needs of AI model training and inference—especially at supercomputing scale.

Our robust hardware provisioning and testing, together with close partnerships with our vendors, enable us to custom-build designs that serve accelerated computing. In fact, our engineers work hands-on with our partners to improve manufacturing and testing processes, so future batches see fewer issues from the get-go. That means we can meet customer needs as they grow, and your teams won’t have to wait for costly and time-consuming retrofits.

Additionally, we work extensively with our in-house security experts and data center vendor security teams to help ensure top-tier industry best practices and standards at every step. All CoreWeave data centers are limited-access buildings with no visible signage. Around-the-clock security personnel assist in preventing unauthorized access. Plus, airtight entry protocols and internal data and IP safeguards help ensure robust identity access management and least-privileged access.

Challenge 3: Getting maximum performance

It’s one thing to have access to the huge scale of compute LLMs need to train, run inference, and experiment. It’s another entirely to support both maximum performance and the best resilience from GPU-powered clusters. 

The 2024 Llama 3 paper reported 419 unexpected interruptions over a 54-day training period. In fact, the paper specifically called out:

“The complexity and potential failure scenarios of 16K GPU training surpass those of much larger CPU clusters that we have operated. Moreover, the synchronous nature of training makes it less fault-tolerant—a single GPU failure may require a restart of the entire job.”
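
A little arithmetic shows why this matters so much at cluster scale. The interruption counts below come straight from the Llama 3 paper; the per-GPU failure rate is an illustrative assumption, used only to show how quickly synchronous training amplifies individual component faults.

```python
# Why synchronous training at megacluster scale is so interruption-prone.
# The 419 unexpected interruptions over 54 days come from the Llama 3 paper;
# the per-GPU annual failure rate is an illustrative assumption.

interruptions = 419
days = 54

per_day = interruptions / days
hours_between = 24 / per_day
print(f"~{per_day:.1f} unexpected interruptions per day "
      f"(roughly one every {hours_between:.1f} hours)")

# Synchronous training shares the fate of its weakest component: with N GPUs
# failing independently at a small annualized rate, the chance that at least
# one GPU faults on any given day grows quickly with N.
gpus = 16_000
annual_failure_rate_per_gpu = 0.05          # assumed 5% annualized rate
daily_failure_rate_per_gpu = annual_failure_rate_per_gpu / 365
p_any_fault_today = 1 - (1 - daily_failure_rate_per_gpu) ** gpus
print(f"P(at least one GPU fault today across {gpus} GPUs): {p_any_fault_today:.0%}")
```

Under those assumptions, a 16K-GPU job should expect at least one fault on most days, which is exactly why automated detection and fast remediation matter so much.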

Clearly, breakdowns in cluster health and performance are more than just a nuisance. They’re a quick way for AI enterprises to find themselves dealing with time-consuming interruptions and, as a result, racking up unnecessary costs.

As such, AI enterprises need data centers that ensure the highest possible performance and resilience. That way, they can both build faster and maximize returns.

Our solution: Prioritizing performance—always

Each and every layer of CoreWeave is built to deliver the highest possible performance with the greatest reliability and resilience. We’ve unlocked superior price-to-performance by optimizing every component of our platform.

  • Storage: CoreWeave Local Object Transport Accelerator (LOTA) revolutionizes Object Storage data requests with a more direct path between GPUs and data than ever before. Our simple and secure proxy lives on GPU nodes, accelerating responses by directly accessing data repositories—helping unlock superior storage performance.
  • Networking: At CoreWeave, we leverage the power of NVIDIA Quantum-2 InfiniBand-based cluster networks to give your teams highly performant multi-node interconnect. Our SHARP-enabled network delivers high effective bandwidth, giving GPU-to-GPU communication the scale and speed that large training jobs demand.
  • CoreWeave Mission Control: We handle cluster and node health so your teams won’t be burdened with the responsibility of continuous testing, validation, and remediation. With our Fleet Lifecycle Controller and Node Lifecycle Controller, CoreWeave enables automated node provisioning, testing, and monitoring to assist in ensuring nodes are in peak condition—and swaps out problematic nodes when they’re even slightly underperforming (see the sketch after this list).
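
To make that node-lifecycle idea concrete, here’s a minimal sketch of the pattern in Python: provision, validate against a throughput baseline, and swap out anything that falls below it. The data model, thresholds, and function names are illustrative assumptions, not CoreWeave Mission Control’s actual interfaces.

```python
# A minimal illustration of an automated node-lifecycle loop:
# provision -> validate -> monitor -> replace underperforming nodes.
# Names and thresholds are illustrative, not CoreWeave Mission Control APIs.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    measured_tflops: float        # throughput from a burn-in benchmark
    healthy: bool = True

EXPECTED_TFLOPS = 900.0           # assumed fleet baseline for this GPU type
DEGRADATION_THRESHOLD = 0.97      # flag nodes even slightly below baseline

def validate(node: Node) -> bool:
    """Burn-in style check: compare measured throughput to the baseline."""
    return node.measured_tflops >= EXPECTED_TFLOPS * DEGRADATION_THRESHOLD

def reconcile(fleet, spares):
    """Keep only validated nodes, swapping in spares for any that fail."""
    serving = []
    for node in fleet:
        if validate(node):
            serving.append(node)
        elif spares:
            replacement = spares.pop()
            print(f"cordoning {node.name}; swapping in {replacement.name}")
            serving.append(replacement)
        else:
            print(f"cordoning {node.name}; no spare available")
    return serving

if __name__ == "__main__":
    fleet = [Node("node-a", 905.0), Node("node-b", 861.0)]   # node-b underperforms
    spares = [Node("spare-1", 910.0)]
    fleet = reconcile(fleet, spares)
    print([n.name for n in fleet])
```

In practice a loop like this runs continuously against live health telemetry, but the core idea is the same: catch even slight underperformance automatically, before it interrupts a training job.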

This comprehensive approach bakes high performance into our offering and our data centers as a non-negotiable, helping to ensure your teams can get the most out of your clusters—always.

Challenge 4: Access to burstable compute

Not all workloads require the same amount of compute. AI enterprises need dynamic, burstable capacity to accommodate their models whenever and wherever they’re in the development process. 

This requires a broad selection of GPUs to ensure models are served with the right resources—at all times—with maximum efficiency. However, GPUs are in ultra-high demand, especially for supporting AI workloads. That means there’s often a very limited available supply of the compute that enterprises need to get models to market.

Our solution: Creating a wide portfolio of GPUs

At CoreWeave, we’re constantly monitoring and planning for how customers want to run their workloads—with power needs top-of-mind. We know customers can go from completely quiet periods to tens of megawatts of power usage when a training run starts, all in a matter of minutes.

That’s why we have the broadest selection of the highest-performance NVIDIA GPUs ready to serve up the exact level of compute your workloads need. That means we can provide buffer capacity and strong engineering support to maintain stability within large and complex clusters, allowing your teams to feel confident in their builds.

Our careful planning around data center build-out considers backup power, reduced impact on the local power grid, and negotiated power usage to ensure AI enterprises have the flexibility they need to get their models to market fast.

Future-first and future-ready

In the intensely competitive LLM space, there’s little room for inefficiencies. AI enterprises can set themselves up for success by choosing the right data centers to power up, house, and access high-intensity compute. 

CoreWeave data centers are purpose-built to supply large swaths of compute and keep power reliably online, even on the most daunting timelines. Plus, our extensive partnerships in data center design give our teams the experience they need to build quickly and at scale toward the future.

With a future-focused approach to infrastructure, we believe our data centers are equipped to meet the demands of the AI boom today, tomorrow, and beyond.

In addition to capacity planning, we’re keeping our data centers ahead of the curve with liquid cooling capabilities. Learn more about the redesign of the data center.
