Bare Metal Servers

Fast. Reliable. Performant.

Get the bare metal benefit

The vast majority of GenAI workloads don’t need virtualization. They need direct access to resources to run training, inference, and experimentation with maximum performance and low latency.

Build better models by running directly on CoreWeave bare metal.

Nix the hypervisor layer

Virtualization often limits performance, hinders observability, and makes environment management difficult.

CoreWeave runs Kubernetes directly on bare metal servers. You’ll get the best of both worlds: the flexibility of cloud-like provisioning and the power and performance of dedicated hardware.

Get better observability into your workloads and full use of GPU, CPU, network, and storage resources.
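
To make the provisioning model concrete, here is a minimal sketch of scheduling a GPU workload with the standard Kubernetes Python client. It assumes a cluster running the NVIDIA device plugin (which exposes the standard nvidia.com/gpu resource); the namespace, image, and GPU count are illustrative placeholders, not a prescribed CoreWeave workflow.

```python
# Minimal sketch: scheduling a GPU workload on a bare metal Kubernetes cluster.
# Assumes the official `kubernetes` Python client, a reachable kubeconfig, and
# the NVIDIA device plugin exposing the standard `nvidia.com/gpu` resource.
# The namespace and container image are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-job", "namespace": "default"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "trainer",
                "image": "nvcr.io/nvidia/pytorch:24.01-py3",  # placeholder image
                "command": ["python", "train.py"],
                # With no hypervisor in the path, this request maps to a physical GPU.
                "resources": {"limits": {"nvidia.com/gpu": "1"}},
            }
        ],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because the pod is scheduled straight onto bare metal, the GPU request resolves to a physical device rather than a virtualized slice of one.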

Unlock higher performance

Get full access to NVIDIA GPUs and x86 or Arm CPUs without the performance limitations imposed by virtualization.

Maximize reliability

Our simpler software stack reduces the surface area for potential issues—increasing reliability and uptime.

Free up compute resources

NVIDIA BlueField DPUs offload networking, security, and storage processing tasks, so you can get the most out of your compute.

Get in-depth insights

See metrics on cluster health and performance for an observability experience unlike any other on the market.

Unparalleled observability

Virtualization can make it difficult to get the data you need to track performance.

Our bare metal stack gives you access to low-level, high-resolution metrics at fine granularity, keeping your teams in the know.
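
As an illustration of what that access can look like, here is a minimal sketch that queries GPU utilization from a Prometheus endpoint scraping NVIDIA's DCGM exporter. The endpoint URL and aggregation are assumptions about a typical monitoring setup, not a specific CoreWeave API.

```python
# Minimal sketch: pulling low-level GPU metrics from a Prometheus endpoint.
# Assumes Prometheus (or a compatible API) is scraping NVIDIA's DCGM exporter;
# the URL below is a placeholder and the exact metrics depend on the deployment.
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder endpoint

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": "avg by (gpu) (DCGM_FI_DEV_GPU_UTIL)"},
    timeout=10,
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    gpu = series["metric"].get("gpu", "unknown")
    utilization = series["value"][1]
    print(f"GPU {gpu}: {utilization}% utilization")
```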

Virtual servers

CoreWeave virtual servers simplify the deployment, management, and scaling of containerized applications.

*Only supported for certain GPUs. Inquire here for details.

Ultra-fast access

Launch virtual servers in seconds from the UI or via the CoreWeave Kubernetes API (see the example after this list).

Low latency

Work with up to 100 Gbps internal and external networking speeds.

Consistent access to compute

Leverage dedicated GPUs and get the compute you need, when you need it.

Workload fungibility

Switch and share resources between AI workloads in a matter of seconds.
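
For the API path mentioned above, here is a minimal sketch of creating a virtual server as a Kubernetes custom resource with the official Python client. The API group, version, plural, and spec fields are assumptions about the VirtualServer custom resource; check the CoreWeave documentation for the authoritative schema.

```python
# Minimal sketch: launching a virtual server through the Kubernetes API using
# the Python client's CustomObjectsApi. The group/version/plural and the spec
# fields below are illustrative assumptions, not the definitive schema.
from kubernetes import client, config

config.load_kube_config()

virtual_server = {
    "apiVersion": "virtualservers.coreweave.com/v1alpha1",  # assumed group/version
    "kind": "VirtualServer",
    "metadata": {"name": "dev-box", "namespace": "default"},
    "spec": {  # placeholder spec; field names vary by CRD version
        "region": "ORD1",
        "os": {"type": "linux"},
        "resources": {
            "gpu": {"type": "A40", "count": 1},
            "cpu": {"count": 8},
            "memory": "32Gi",
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="virtualservers.coreweave.com",  # assumed
    version="v1alpha1",                    # assumed
    namespace="default",
    plural="virtualservers",               # assumed
    body=virtual_server,
)
```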

Bare metal is better

Ready to get started?