GTC 2025 key moments and takeaways
GTC 2025 was a high point of AI innovation, collaboration, and learning for our teams at CoreWeave. While there, we had the honor and privilege of working alongside and sharing knowledge with AI innovators building at the bleeding edge.
For those who couldn’t attend, here are a few of the major moments and key takeaways from NVIDIA GTC 2025.
NVIDIA CEO Jensen Huang’s keynote address
NVIDIA CEO Jensen Huang delivered an inspiring keynote address to officially kick off the conference. He announced thrilling developments in AI infrastructure, AI factories, agentic AI, and robotics, including the new NVIDIA Blackwell Ultra platform. By the end of Jensen’s speech, one overwhelming message was clear: AI computing demand is accelerating, driven by the rise of reasoning AI and agentic AI. The scale and complexity of AI workloads are transforming a trillion dollars’ worth of data center investments worldwide.
Great partnerships with AI innovators
The true highlight of GTC 2025 for our CoreWeave team was the inspiring and informative work we accomplished with our partners. We held countless booth chats, discussions, and main stage presentations with leading AI companies, demonstrating how critical a foundation of collaboration is to building a brighter, more efficient future for AI.
CoreWeave on the NVIDIA GTC 2025 main stage
We also had the privilege of sharing insights in presentations on the NVIDIA main stage, focusing on key themes such as approaches to AI infrastructure, optimizing efficiency and sustainability in data center designs, and strategies for maximizing performance in GPU clusters. Some of the most impactful talks are listed below.
🟢 Wired for AI: Lessons from Networking 100K+ GPU AI Data Centers and Clouds, featuring CoreWeave CTO Peter Salanki
This session covered the unique challenges of networking at massive scale. We also discussed how CoreWeave implemented one of the largest-scale NVIDIA Quantum-2 InfiniBand networks available, demonstrating clear lessons learned in scaling infrastructure to support the next generation of AI. This included tackling the complexities of connecting thousands of GPUs and the innovations required to maintain performance and reliability at such an unprecedented scale.
🟢 How An Early AI Adopter Thinks About Infrastructure, featuring CoreWeave CTO Peter Salanki, VP, Solutions Architecture John Mancuso, and Jane Street Research Contributor Adam Canady
In this collaborative session, we discussed the details of our partnership with Jane Street. We uncovered how its teams leverage CoreWeave’s cutting-edge AI infrastructure to train and fine-tune quantitative trading AI models with speed and precision. We dove into the details of scaling GPU capacity to help ensure flexibility, security, and performance in their AI workflows—all of which are critical in the fast-paced world of quantitative trading.
🟢 How Leading AI Labs Maximize Performance in GPU Clusters, featuring CoreWeave SVP of Engineering Chen Goldberg & CPO Chetan Kapoor
This presentation provided an in-depth look at advanced strategies and techniques for optimizing GPU clusters to achieve superior speed, efficiency, and reliability. We discussed critical aspects of faster model loading, better checkpointing, and robust fleet management for improved fault handling. Additionally, our teams shared details on the NVIDIA GB200 NVL72 clusters CoreWeave rolled out last fall, highlighting their role in advancing AI innovation and workloads.
Innovation insights at the CoreWeave booth
We got to work side-by-side with a few of our most trusted partners. Here’s a quick recap of what we discovered and discussed with these major AI innovators.
Partner talks
🔵 IBM x CoreWeave: “Our Partnership”
CoreWeave CPO Chetan Kapoor and IBM Senior Technical Staff Member for AI/ML Systems Brian Belgodere worked together in our booth, discussing the importance of solid AI infrastructure for training and fine-tuning complex AI models like IBM’s Granite 3.2. We also discussed the strategic nature of a partnership between an AI hyperscaler like CoreWeave and a seasoned tech giant like IBM while diving into the details of CoreWeave’s new IBM storage offering.
🔵 Cohere x CoreWeave: “A First Look at the NVIDIA GB200 NVL72 in Practice”
CoreWeave CTO Peter Salanki and Cohere Manager of Technical Staff Ace Eldeib joined forces at CoreWeave’s booth to discuss exciting developments with the NVIDIA GB200 NVL72 racks, including early benchmarks.

Even in early benchmarks, the NVIDIA GB200 NVL72 showed dramatic throughput gains over NVIDIA HGX H100 systems, with more to come, signaling a new performance standard for GPUs.
🔵 Rescale x CoreWeave: “Exploring the Future of AI and Cloud HPC”
CoreWeave SVP of Engineering Chen Goldberg and Rescale VP of Engineering Mark Whitney teamed up to discuss the generative AI boom, the economics of GPU access, and enabling engineering at accelerated speed and expanded scale. In a thought-provoking fireside chat, these engineering experts from CoreWeave and Rescale broke down the evolution of accelerated computing in modern digital R&D. They shared how cutting-edge AI and cloud HPC continuously transform innovation.
On-demand sessions from our experts
CoreWeave-supported on-demand sessions included the following:
🔵 “From Failures to Throughput: Optimize Effective Training in AI Clusters,” featuring CoreWeave solutions architects Navarre Pratt and Jacob Feldman
This session explored how MLOps teams can improve infrastructure performance and resilience through increased visibility and automation. It also dove into the essential components enabling those processes at scale, including NVIDIA BlueField-3 DPUs and the self-healing properties of NVIDIA Quantum-2 InfiniBand.
🔵 “Storage Benchmarks: Optimize AI Model Training and Inference at Scale,” presented by CoreWeave’s Principal Product Manager Jeff Braunstein
This presentation shared real-world benchmarking results for CoreWeave Distributed File Storage in a 32K NVIDIA H200 GPU cluster and uncovered best practices for achieving peak performance and reliability. We also discussed how NVIDIA Quantum-2 InfiniBand and NVIDIA BlueField-3 DPUs specifically help optimize storage performance.
Fun CoreWeave-hosted events
At CoreWeave, we know how important it is to kick back and relax during a jam-packed conference. That’s why we hosted fun-filled events where attendees could swing by between and after sessions to blow off steam.
- Tiki & Tunes Night: CoreWeave, NVIDIA, and Dell hosted a networking night at Dr. Funk’s in San Jose—bringing together some of the great minds in AI for insightful discussions and tiki-inspired drinks!
- Solidigm Podcast: Our friends at Solidigm recorded their podcast at our booth, discussing AI trends in the industry and exciting discoveries made at NVIDIA’s 2025 GTC.
Important product announcements
While at GTC, we made a series of critical product announcements spanning storage offerings, compute instances, and software and cloud services. Here’s a breakdown of just a few of those key releases:
- CoreWeave AI Object Storage (CAIOS): We are excited to announce the General Availability of CoreWeave AI Object Storage, a next-generation object storage solution optimized for high I/O and large datasets in AI and ML workflows. CAIOS is purpose-built for AI innovation, providing excellent performance and speed at scale, seamless integration with your workflows, and enterprise-grade reliability.
- NVIDIA RTX PRO™ 6000 Blackwell Server Edition: We are proud to share that we will introduce new compute instances featuring the upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Compared to the NVIDIA L40S, the RTX PRO 6000 can deliver up to 5.3x faster LLM inference and 3.5x faster text-to-image generation, offering a compelling upgrade for a range of AI and graphics-intensive workloads.
- NVIDIA Cloud Functions and NVIDIA GB200 NVL72: At CoreWeave, we are on a mission to provide developers with the best suite of software and cloud services to help them innovate using AI. Today, we are announcing support for the NVIDIA AI Enterprise software platform and NVIDIA Cloud Functions (NVCF), a core technology of DGX Cloud Serverless Inference, on the CoreWeave Cloud Platform. CoreWeave is one of the first cloud providers to offer NVCF on NVIDIA GB200 NVL72, and we look forward to seeing our customers build, deploy, and manage AI applications seamlessly with this new integration.
We also released our findings on Model FLOPS Utilization (MFU) at NVIDIA GTC 2025 and highlighted major CoreWeave differentiators throughout the conference. Our latest study found that CoreWeave achieves an MFU of over 50% on NVIDIA H100 GPUs, driving up to 1.2X higher performance than alternative solutions. Sharing those findings publicly underscored our dedication to delivering maximum performance and efficiency for AI workloads.
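For readers unfamiliar with the metric, MFU is the ratio of the FLOPs a model actually performs per second to the aggregate peak FLOPs the hardware could theoretically deliver. Here is a minimal sketch of that calculation, assuming the standard approximation that one transformer training step costs roughly 6 × parameters × tokens FLOPs, and a nominal H100 BF16 dense peak of about 989 TFLOPS; the model size and throughput numbers below are hypothetical, not CoreWeave’s measured results.

```python
def mfu(params: float, tokens_per_sec: float, num_gpus: int,
        peak_flops_per_gpu: float = 989e12) -> float:
    """Model FLOPS Utilization: achieved model FLOPs/s over aggregate peak FLOPs/s.

    Uses the common 6 * params * tokens estimate for transformer training FLOPs.
    """
    achieved = 6 * params * tokens_per_sec   # model FLOPs per second
    peak = num_gpus * peak_flops_per_gpu     # theoretical hardware peak
    return achieved / peak

# Hypothetical example: a 70B-parameter model training at 1.25M tokens/s on 1,024 GPUs
print(f"{mfu(70e9, 1.25e6, 1024):.1%}")  # prints 51.8%
```

An MFU above 50%, as in this illustrative run, means the cluster spends most of its theoretical compute budget on useful model math rather than on communication stalls, I/O waits, or failure recovery.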
GTC 2025 in a nutshell
NVIDIA’s GTC this year brought almost 100,000 people from leading AI companies to San Jose, and it was a pleasure to share the stage and the conference floor with some of the brightest minds in the AI industry. Even though the excitement of GTC is now behind us, we’re already looking forward to what’s next at GTC26.
Missed out on GTC? Check out our recorded webinar on maximizing GPU performance, given by CPO Chetan Kapoor and SVP of Engineering Chen Goldberg.