Storage

Get performant, secure, and reliable storage for AI.

Tailor-made for AI workloads

Don’t let storage performance slow your cluster down.

Get higher performance for containerized workloads and virtual servers.

Feed data into GPUs ASAP

GenAI models need a lot of data—and they need it fast. Handle massive datasets with reliability and ease, enabling better performance and faster training times.

Pick up where you left off

Reduce major delays after hardware interruptions. Get strategic checkpointing of intermediate results, so your teams can pick up right where they left off after a failure.
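As an illustration, here is a minimal checkpointing sketch in Python with PyTorch, assuming the training job can write to a storage mount; the directory path and file naming are hypothetical placeholders, not a prescribed CoreWeave layout.

```python
import os
import torch

# Hypothetical checkpoint directory on a storage mount; the actual path
# depends on how the volume is mounted in your job.
CKPT_DIR = "/mnt/checkpoints"

def save_checkpoint(model, optimizer, step):
    """Persist intermediate training state so a run can resume after an interruption."""
    os.makedirs(CKPT_DIR, exist_ok=True)
    torch.save(
        {"step": step,
         "model": model.state_dict(),
         "optimizer": optimizer.state_dict()},
        os.path.join(CKPT_DIR, f"step_{step:08d}.pt"),
    )

def load_latest_checkpoint(model, optimizer):
    """Resume from the most recent checkpoint, if one exists."""
    if not os.path.isdir(CKPT_DIR):
        return 0  # nothing saved yet; start at step 0
    ckpts = sorted(f for f in os.listdir(CKPT_DIR) if f.endswith(".pt"))
    if not ckpts:
        return 0
    state = torch.load(os.path.join(CKPT_DIR, ckpts[-1]))
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]
```

Calling load_latest_checkpoint at startup lets an interrupted run resume from the last saved step instead of starting from scratch.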

Get top-tier reliability

Keep production on track and avoid massive data losses. Automated snapshots run every 6 hours with 3-day retention, so your teams can rest assured their work is saved. Plus, we manage storage separately from compute, making data easier to move and track.

Stay secure

CoreWeave Storage follows industry best practices for security. Encryption at rest and in transit, identity and access management, authentication, and role-based access policies keep your data protected.

Object storage for AI

CoreWeave AI Object Storage provides the performance, reliability, and scale AI enterprises need for GenAI workloads.

Unlock direct access to data for accelerated performance.

Local Object Transport Accelerator (LOTA)

Accelerate AI Object Storage data requests efficiently with LOTA, a simple and secure proxy residing on GPU nodes. LOTA boosts response times by directly accessing data repositories, bypassing gateway and index layers.

It also caches frequently accessed data and pre-stages files on the local disk of GPU nodes, significantly reducing load times for training and inference applications.
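As a rough illustration only: assuming LOTA is surfaced to workloads as an S3-compatible endpoint on the GPU node (the endpoint URL, bucket, and key below are hypothetical), reads issued through a standard S3 client could be served from the node-local cache.

```python
import boto3

# Hypothetical node-local proxy endpoint; the real endpoint and credentials
# come from your CoreWeave environment, not from this sketch.
s3 = boto3.client("s3", endpoint_url="http://localhost:8080")

# A read routed through the proxy can be answered from data cached or
# pre-staged on the GPU node's local disk instead of the gateway path.
obj = s3.get_object(Bucket="training-data", Key="shards/shard-00001.tar")
payload = obj["Body"].read()
```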

Automated Archive

Reduce costs effortlessly with the AI Object Storage Automated Archive. Inactive data is stored at a lower rate, automatically.

Retain all data for instant access, and enjoy consistently high performance across all of it.

Higher performance results

See a significant performance boost of up to 2 GB/s per GPU, enabling faster training and inference cycles.

Enhanced uptime

Get the benefits of 99.9% uptime and eleven 9’s of durability—so your teams can get models to market ultra-fast.

Greater capacity

Scale to trillions of objects and exabytes of data, with performance that grows with your storage needs.

Compliance and support

CoreWeave AI Object Storage utilizes a standard S3 interface and supports S3 SDKs.
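Because the interface is standard S3, existing SDKs work as-is. Here is a minimal sketch with Python's boto3, where the endpoint URL and bucket name are placeholders and credentials are assumed to come from the usual AWS credential chain.

```python
import boto3

# Point a standard S3 SDK at an S3-compatible endpoint; the URL below is a
# placeholder, and credentials are picked up from environment variables or
# the standard config files.
s3 = boto3.client("s3", endpoint_url="https://object.example.internal")

# Upload a dataset file and list the bucket, using ordinary S3 calls.
s3.upload_file("train.jsonl", "my-bucket", "datasets/train.jsonl")
for item in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(item["Key"], item["Size"])
```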


Distributed file storage

CoreWeave distributed file storage supports the centralized asset storage and parallel computation setups that GenAI requires.

A network made for AI

Our highly performant, ultra-low latency networking architecture is built from the ground up to get data to GPUs with the speed and efficiency GenAI models need to develop, train, and deploy.

Strategic partnerships

Our partnership with VAST enables us to manage and secure hundreds of petabytes of data at a time. Plus, the file system is POSIX-compliant and suitable for shared access across multiple instances.

Ultra-fast access at scale

Leverage a petabyte-scale shared file system with up to 1 GB/s per GPU. Our benchmarks show strong results for up to 64-node NVIDIA H200 GPU clusters.
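For a sense of what shared access looks like in practice, here is a minimal sketch assuming the distributed file system is mounted at a POSIX path visible to every node; the mount path and shard layout are hypothetical.

```python
from pathlib import Path

# Hypothetical shared mount; because the file system is POSIX-compliant,
# every instance in the cluster sees the same namespace.
SHARED = Path("/mnt/shared")

# Each worker (e.g., one per GPU) reads its own shard straight from the
# shared file system with ordinary file I/O.
def read_shard(rank: int) -> bytes:
    return (SHARED / "dataset" / f"shard-{rank:05d}.bin").read_bytes()
```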


Dedicated Storage Cluster

Need your own cluster? Our flexible tech stack provides access to dedicated storage clusters.

With partners like VAST, CoreWeave helps your teams unlock better performance and enhanced security and isolation.

Local storage

Our Kubernetes solution uniquely allows customers to access local storage. CoreWeave supports ephemeral storage up to 60TB, depending on node type.

All physical nodes have SSD or NVMe ephemeral (local) storage. No need for Volume Claims to allocate ephemeral storage. Simply write anywhere in the container file system.

That’s all included at no additional cost.
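As an illustrative sketch, a containerized job can use any path in its file system as local scratch space; the directory and file names below are arbitrary examples, not a required layout.

```python
import os
import shutil
import tempfile

# Ephemeral (local NVMe/SSD) scratch space: any path inside the container
# file system works, with no volume claim required.
scratch = tempfile.mkdtemp(prefix="scratch-", dir="/tmp")

# Stage working data locally for fast, node-local access...
with open(os.path.join(scratch, "activations.bin"), "wb") as f:
    f.write(b"\x00" * 1024)  # placeholder payload

# ...and clean up when done; the data disappears with the pod anyway.
shutil.rmtree(scratch)
```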

Bring your own storage

Have a storage provider you like working with for dedicated clusters? No problem.

CoreWeave works with an ecosystem of storage partners to support generative AI’s complex needs. Bring your own complete stack, and we’ll make sure it runs with our infrastructure.

Specialized storage for AI

Every layer of CoreWeave infrastructure works in tandem to power the future of GenAI.