At CoreWeave, we’re incredibly proud of the partnerships we’ve formed and the progress we’ve made this year. At this week’s SC23 conference, we’re excited to announce collaborations with NVIDIA and other key partners, including VAST Data, SchedMD, Weights & Biases, and Zeet.
At our booth, #1373, we will present:
- Our submission with NVIDIA to MLPerf™ Training 3.0, which delivered record-breaking results on machine learning training benchmarks.
- CoreWeave’s serverless infrastructure and why bare metal is better for serving inference, model training, and modern AI and ML workloads.
- SUNK (Slurm on Kubernetes), which brings SchedMD’s Slurm scheduler to Kubernetes, combining scheduling and orchestration of massively parallel jobs with Kubernetes for running production applications like inference.
Built with NVIDIA H100 Tensor Core GPUs, CoreWeave’s customized GPU clusters are trusted by leading AI labs and enterprises to build, train, and deploy some of the industry’s most complex models. These massive models continue to push the limits of today’s systems, requiring immense scale to overcome compute bottlenecks in generative AI’s expansion. It’s why we’re so excited about the NVIDIA GH200 Grace Hopper Superchip coming to CoreWeave in Q1 2024 and the new NVIDIA HGX H200 platform, which includes the recently announced NVIDIA H200 Tensor Core GPU.
The NVIDIA H200 GPU supercharges generative AI and high-performance computing, enabling faster development times and lower total costs. It offers higher throughput and more memory than the NVIDIA H100 GPU, accelerating inference on large language models. This translates into a lower cost of ownership for our customers, which is why we’re excited to be among the first cloud providers to deploy these GPUs starting next year.
CoreWeave is also looking to integrate the NVIDIA HGX H200 platform with our differentiated software stack and highly performant systems. This will be another milestone for the company as we deliver high-powered cloud infrastructure that performs at scale, faster and more consistently than anything else, ultimately helping our clients get to market ahead of their competitors.
As the AI boom demands ever greater amounts of accelerated computing, CoreWeave continues to deliver cutting-edge technology and specialized compute power for generative AI and high-performance computing workloads.