Featured

Break Speed Barriers: 49x Faster Data Transfers with Rapid File Toolkit
Melody Zacharias - Technical Evangelist Director, Pure Storage
Discover how the Rapid File Toolkit can revolutionize your file operations. In this demo, we’ll showcase its powerful capabilities that accelerate data workflows, simplify file management, and optimize performance. Whether you're processing large datasets or managing high-frequency file operations, see how this toolkit can unlock new efficiencies and empower your data-driven initiatives.
- Unparalleled Speed: Learn how the Rapid File Toolkit enables data transfers up to 49x faster, transforming the efficiency of your workflows.
- Optimized File Management: Experience streamlined file operations, from handling massive datasets to managing complex file structures with ease.
- Parallel File Performance: Discover how the toolkit leverages parallel processing to accelerate high-performance data workflows, reduce operational bottlenecks, and drive your projects forward.
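To give a feel for the parallel-processing idea behind this episode, here is a short, generic sketch in Python. It is not the Rapid File Toolkit itself; it only illustrates the pattern of fanning file copies out across worker processes instead of copying serially, and the mount paths are hypothetical.

```python
# Generic illustration of parallel file copies (not the Rapid File Toolkit itself):
# spread a copy job across worker processes instead of copying files one at a time.
import shutil
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

SRC = Path("/mnt/source")       # hypothetical source mount
DST = Path("/mnt/destination")  # hypothetical destination mount

def copy_one(src_file: Path) -> Path:
    """Copy a single file, preserving its directory layout relative to SRC."""
    target = DST / src_file.relative_to(SRC)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src_file, target)
    return target

if __name__ == "__main__":
    files = [p for p in SRC.rglob("*") if p.is_file()]
    # Fan the copies out across processes; a serial loop over the same list is
    # the slow baseline that parallel tools are measured against.
    with ProcessPoolExecutor(max_workers=16) as pool:
        for copied in pool.map(copy_one, files, chunksize=64):
            pass  # each `copied` path could be logged or verified here
```

In practice, the speedup depends on how many concurrent operations the underlying storage can absorb; a serial copy leaves that headroom unused.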
All episodes
-
Enhance LLMs with RAG and Accelerate Enterprise AI with Pure Storage and NVIDIA
Anuradha Karuppiah - NVIDIA, Calvin Nieh - Pure Storage, Robert Alvarez - Pure Storage
The benefits and ROI of Generative AI for enterprises are clear with retrieval-augmented generation (RAG). RAG provides company-specific responses by enhancing generic large language models (LLMs) with proprietary data.
This session will show how an enterprise implementation of RAG with Pure Storage® and NVIDIA speeds up data processing, increases scalability, and provides real-time responses more easily than creating custom LLMs from scratch.
Attend to get technical insight and see a demonstration of distributed and accelerated GenAI RAG pipelines:
Learn the benefits of enhancing LLMs with RAG for enterprise-scale GenAI applications
Understand how to accelerate the RAG pipeline and deliver enhanced insight using NVIDIA NeMo Retriever microservices and the Pure Storage FlashBlade//S™
See our proof of concept demonstration, showing accelerated RAG in an enterprise use case
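As background for the session, here is a minimal, self-contained sketch of the RAG pattern described above: retrieve the most relevant proprietary documents for a question, then hand them to the LLM as context. It deliberately uses a toy keyword-overlap scorer in place of a real embedding and retrieval service (such as the NVIDIA NeMo Retriever microservices covered in the session), and the sample documents are invented for illustration.

```python
# Conceptual sketch of the RAG pattern: retrieve relevant proprietary documents,
# then pass them to the LLM as context. A toy keyword-overlap score stands in
# for a real embedding/retrieval service.
from collections import Counter

DOCS = [
    "Our Q3 outage was caused by a failed switch in rack 12.",
    "The refund policy allows returns within 30 days of purchase.",
    "FlashBlade//S serves as the shared data platform for the AI cluster.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of query words that appear in the document."""
    q = Counter(query.lower().split())
    d = set(doc.lower().split())
    return sum(count for word, count in q.items() if word in d)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved company-specific context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what caused the outage in rack 12"))
# The resulting prompt is what gets sent to the LLM instead of the bare question.
```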
-
Considerations for Strategic, Accelerated, Enterprise AI Infrastructure
Melody Zacharias, Jean-Baptiste Thomas, Robert Alvarez - Senior Solutions Architect, AI
In this session, we’ll explore how a well-designed data storage platform optimized for AI serves as the backbone for AI-powered innovation and operations.
What you’ll learn:
Efficient end-to-end AI workflows with industry examples and use cases
Key elements of AI-ready and AI-optimized infrastructure stacks
How to future-proof AI storage to meet dynamic and growing AI data needs.
-
Applying Video Understanding and RAG in Surveillance
Calvin Nieh - Solutions Marketing Manager, AI, Pure Storage, Philip Ninan - AI Solution Manager, Pure Storage, Tom Sells - Field Business Development, Public Sector, Pure Storage
In this TechTalks session, we’ll explore RAG, a method for improving the accuracy and relevance of LLM inference; the evolution of multimodal LLMs; and how they can summarize and extract insights in close to real time. Join us as our experts discuss:
How multimodal LLMs make extracting insights from video much simpler and faster
Overall trends shifting from text to video for better precision of answers
Capacity and performance requirements for video data sets
Why camera companies stand to benefit from the advent of RAG
Reserve your spot now for this exclusive event and gain expert insights on these emerging innovations.
-
Data Preparation Strategies for Accelerated AI Pipelines
Melody Zacharias - Technical Evangelist Director, Victor Olmedo - Global Analytics & AI Principal FSA
Data often gets trapped in complex infrastructures and scattered silos, ranging from data warehouses and data lakes to disparate storage systems. While each silo serves its original purpose well, they become obstacles for AI models and remain inaccessible to AI clusters.
In this fast-paced, demo-rich talk, we’ll explore how to break down these silos, unify data, and speed up data pipelines for faster time to science and faster AI results.
Join us as our experts discuss:
· Unified ingestion of data for faster AI results
· CPU vs. GPU pipeline processing comparison
· Optimized commands for parallel processing
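To make the last two bullets concrete, here is an illustrative sketch of unified, parallel ingestion using only the Python standard library. The silo paths and file formats are hypothetical, and a real pipeline would typically hand the unified records on to GPU-accelerated processing rather than keep them in plain Python lists.

```python
# Illustrative sketch only: unify records from several hypothetical silos and
# parse them in parallel, so ingestion is not limited to a single serial reader.
import csv
import json
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SILOS = [Path("/mnt/warehouse/exports"), Path("/mnt/lake/raw")]  # hypothetical mounts

def load_one(path: Path) -> list[dict]:
    """Parse one source file into a common record format."""
    if path.suffix == ".json":
        # Assumes each JSON file holds a list of record objects.
        return json.loads(path.read_text())
    with path.open(newline="") as fh:
        return list(csv.DictReader(fh))

def ingest() -> list[dict]:
    files = [p for silo in SILOS for p in silo.rglob("*") if p.suffix in {".json", ".csv"}]
    records: list[dict] = []
    # I/O-bound parsing overlaps well across threads; a fast shared file system
    # keeps every worker fed instead of serializing on one silo at a time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for chunk in pool.map(load_one, files):
            records.extend(chunk)
    return records
```
-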
AI in the Fast Lane: Strategies for Swift Integration
Melody Zacharias - Technical Evangelist Director, Pure Storage, Robert Alvarez - Senior AI Solution Architect, Pure Storage
"n the race to harness the power of AI, organizations often encounter significant challenges moving from the prototyping phase to production. Performance bottlenecks, operational inefficiencies, and the ever-present uncertainty of future demands can all hinder progress, delaying time-to-value and escalating costs.
Join us on October 17 for an information-packed session that explores these critical challenges, offering insights into how Pure Storage can be a game-changer. Optimized for AI, the Pure Storage platform delivers unmatched performance, ensuring your AI workloads run faster and more efficiently.
By addressing operational inefficiencies with a streamlined, data-driven approach, and providing scalable, future-proof solutions, Pure Storage empowers organizations to overcome the hurdles of AI integration and stay ahead in the fast lane of innovation with:
Multi-Node Query Scale Out Performance
Checkpoints at Scale (see the sketch after this list)
Partners for value-add workflows
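As a rough illustration of the “Checkpoints at Scale” item, the sketch below saves and restores training state on a shared file system. It assumes PyTorch and a hypothetical mount point; a production setup would checkpoint a far larger distributed model, which is exactly when checkpoint write speed starts to dominate.

```python
# Minimal sketch of periodic training checkpoints written to shared storage.
# Fast checkpoint writes matter because accelerators sit idle while state is saved.
import torch
from pathlib import Path

CKPT_DIR = Path("/mnt/shared/checkpoints")  # hypothetical shared file system path

model = torch.nn.Linear(1024, 1024)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def save_checkpoint(step: int) -> Path:
    """Write model and optimizer state so training can resume after a failure."""
    CKPT_DIR.mkdir(parents=True, exist_ok=True)
    path = CKPT_DIR / f"step_{step:08d}.pt"
    torch.save(
        {"step": step, "model": model.state_dict(), "optimizer": optimizer.state_dict()},
        path,
    )
    return path

def load_latest() -> int:
    """Resume from the most recent checkpoint, or start from step 0."""
    ckpts = sorted(CKPT_DIR.glob("step_*.pt"))
    if not ckpts:
        return 0
    state = torch.load(ckpts[-1], map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]
```
-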
Embracing AI: The Automotive Revolution That’s Happening Now
Norm Marks - Vice President of Automotive Industry, NVIDIA, Par Botes - VP, AI Infrastructure, Pure Storage, Andy Stone - Field CTO, Americas, Pure Storage
Automakers are increasingly relying on artificial intelligence (AI) to deliver new services, drive more value from their engineers, solve business problems to create tangible value, and more. In a fiercely competitive industry, those who can crack the AI code early stand to reap major advantages of faster time to value and insights.
This open discussion webinar will feature speakers from Pure Storage and NVIDIA, who will discuss what’s happening with AI for automotive. A major focus will be around software development and the shift from Autonomous Vehicles 1.0 to 2.0. This shift will entail a massive expansion in data footprints, as much as 30x or more.
AI-enabled use cases that are top of mind among automakers include:
- Autonomous driving (AD/ADAS)
- Customer experience
- Connected vehicles
- AI-driven manufacturing
-
Accelerate and Simplify AI Adoption
Rob Ludeman - Sr. Director, Product Marketing, Pure Storage, Chadd Kenney - VP of Technology, Pure Storage, Tony Paikeday - Sr. Director, Product Marketing, AI, NVIDIA, Zack Kass - Futurist and Former Head of GTM for OpenAI
AI is transforming organizations in countless ways, driving innovation, agility, and performance. Yet the rapid evolution of AI is also bringing new data challenges that demand a high-performance, scalable, and future-proof infrastructure—especially for storage.
Join Pure Storage and NVIDIA experts for an exclusive session that will help you simplify AI adoption and build a foundation for long-term success.
Watch now for valuable insights to help you:
- Fast-track your AI evolution
- Reduce time-to-value
- Ensure your infrastructure meets the demands of tomorrow’s AI advancements
You’ll also hear from Zack Kass, former Head of Go-to-Market at OpenAI, who will share actionable strategies to enable you to thrive in an evolving AI landscape.