Featured
Embracing AI: The Automotive Revolution That’s Happening Now
Norm Marks, Vice President of Automotive Industry, NVIDIA - Par Botes, VP, AI Infrastructure, Pure Storage - Andy Stone, Field CTO, Americas, Pure Storage
Automakers are increasingly relying on artificial intelligence (AI) to deliver new services, boost engineering productivity, solve business problems that create tangible value, and more. In a fiercely competitive industry, those who crack the AI code early stand to gain major advantages in time to value and time to insight.
This open-discussion webinar features speakers from Pure Storage and NVIDIA on what’s happening with AI in automotive. A major focus is software development and the shift from Autonomous Vehicles 1.0 to 2.0, a shift that entails a massive expansion in data footprints, as much as 30x or more.
AI-enabled use cases that are top of mind among automakers include:
-Autonomous driving (AD/ADAS)
-Customer experience
-Connected vehicles
-AI-driven manufacturing
All episodes
-
Enhance LLMs with RAG and Accelerate Enterprise AI with Pure Storage and NVIDIA
Anuradha Karuppiah - NVIDIA, Calvin Nieh - Pure Storage, Robert Alvarez - Pure Storage
The benefits and ROI of Generative AI for enterprises are clear with retrieval-augmented generation (RAG). RAG provides company-specific responses by enhancing generic large language models (LLMs) with proprietary data.
This session will show how an enterprise implementation of RAG with Pure Storage® and NVIDIA speeds up data processing, increases scalability, and provides real-time responses more easily than building custom LLMs from scratch.
Attend to get technical insight and see a demonstration of distributed and accelerated GenAI RAG pipelines:
Learn the benefits of enhancing LLMs with RAG for enterprise-scale GenAI applications
Understand how to accelerate the RAG pipeline and deliver enhanced insight using NVIDIA NeMo Retriever microservices and the Pure Storage FlashBlade//S™
See our proof of concept demonstration, showing accelerated RAG in an enterprise use case
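The RAG pattern described above can be sketched in a few lines. This is an illustrative toy, assuming a naive keyword-overlap retriever and made-up documents; a production pipeline would use a real vector index and services such as NVIDIA NeMo Retriever:

```python
# Minimal RAG illustration: retrieve the most relevant proprietary
# document, then prepend it as context to the prompt sent to a
# generic LLM. The retriever and documents here are hypothetical.

def retrieve(query, documents, top_k=1):
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Augment the user query with retrieved company-specific context."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "FlashBlade//S is Pure Storage's scale-out file and object platform.",
    "The cafeteria opens at 8 a.m. on weekdays.",
]
prompt = build_prompt("What is FlashBlade//S?", docs)
```

The company-specific document wins the ranking, so the generic model answers from proprietary context instead of its training data alone, which is the core RAG idea the session demonstrates at enterprise scale.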
-
Considerations for Strategic, Accelerated, Enterprise AI Infrastructure
Melody Zacharias, Jean-Baptiste Thomas, Robert Alvarez, Senior Solutions Architect AI
In this session, we’ll explore how a well-designed data storage platform optimized for AI serves as the backbone for AI-powered innovation and operations.
What you’ll learn:
Efficient end-to-end AI workflows with industry examples and use cases
Key elements of AI-ready and AI-optimized infrastructure stacks
How to future-proof AI storage to meet dynamic and growing AI data needs.
-
Applying Video Understanding and RAG in Surveillance
Calvin Nieh - Solutions Marketing Manager AI, Pure Storage; Philip Ninan - AI Solution Manager, Pure Storage; Tom Sells - Field Business Development, Public Sector, Pure Storage
In this TechTalks session, we’ll explore RAG, a method for improving the accuracy and relevance of LLM inference; the evolution of multimodal LLMs; and how they can summarize and extract insights in near real time. Join us as our experts discuss:
How multimodal LLMs make extracting insights from video much simpler and faster
Overall trends shifting from text to video for better precision of answers
Capacity and performance requirements for video data sets
Why camera companies stand to benefit from the advent of RAG
Reserve your spot now for this exclusive event and gain expert insights on these emerging innovations.
-
Data Preparation Strategies for Accelerated AI Pipelines
Melody Zacharias - Technical Evangelist Director; Victor Olmedo - Global Analytics & AI Principal FSA
Data often gets trapped in complex infrastructures and scattered silos, ranging from data warehouses and data lakes to disparate storage systems. While each silo serves its original purpose well, they become obstacles for AI models and remain inaccessible to AI clusters.
In this fast-paced, demo-rich talk, we’ll explore how to break down these silos, unify data, and speed up data pipelines for faster time to science and faster AI results.
Join us as our experts discuss:
· Unified ingestion of data for faster AI results
· CPU vs. GPU pipeline processing comparison
· Optimized commands for parallel processing
-
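The parallel-processing point from the data-preparation session above can be illustrated with a small sketch. The ingest step and records are hypothetical; the idea is only that a per-record transform can be fanned out across CPU cores with Python's multiprocessing before data ever reaches an AI cluster:

```python
from multiprocessing import Pool

def ingest(record):
    # Hypothetical per-record transform: trim, lowercase, and tag length.
    cleaned = record.strip().lower()
    return {"text": cleaned, "length": len(cleaned)}

def ingest_serial(records):
    # Baseline: process records one by one on a single core.
    return [ingest(r) for r in records]

def ingest_parallel(records, workers=4):
    # Fan the same transform out across a pool of CPU workers.
    with Pool(workers) as pool:
        return pool.map(ingest, records)

if __name__ == "__main__":
    records = ["  Sensor A: OK ", "Sensor B: FAIL", "Sensor C: OK"]
    # Both paths must agree; the parallel path scales with core count.
    assert ingest_parallel(records) == ingest_serial(records)
```

The same fan-out idea, pushed onto GPUs with frameworks such as RAPIDS, is what the session's CPU-vs-GPU pipeline comparison explores.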
AI in the Fast Lane: Strategies for Swift Integration
Speakers: Melody Zacharias - Technical Evangelist Director, Pure Storage; Robert Alvarez - Senior AI Solution Architect, Pure Storage
"n the race to harness the power of AI, organizations often encounter significant challenges moving from the prototyping phase to production. Performance bottlenecks, operational inefficiencies, and the ever-present uncertainty of future demands can all hinder progress, delaying time-to-value and escalating costs.
Join us on October 17 for an information-packed session that explores these critical challenges, offering insights into how Pure Storage can be a game-changer. Optimized for AI, the Pure Storage platform delivers unmatched performance, ensuring your AI workloads run faster and more efficiently.
By addressing operational inefficiencies with a streamlined, data-driven approach, and providing scalable, future-proof solutions, Pure Storage empowers organizations to overcome the hurdles of AI integration and stay ahead in the fast lane of innovation with:
Multi-Node Query Scale Out Performance
Checkpoints at Scale
Partners for value-add workflows