All episodes
-
How to Protect Generative AI Models Using GenAI Secure
Rich Vorwaller, Chief Product Officer, Cloud Storage Security
Generative AI (GenAI) presents unique challenges for businesses of all sizes. While it accelerates progress on one front, GenAI has a darker side that can leave businesses vulnerable to malware and sensitive data leakage. GenAI Secure offers a simple yet effective solution that protects AWS environments from the malware and data leakage introduced by working with Generative AI, external datasets, and more.
Using the Ship of Theseus thought experiment, Rich draws parallels to CSS's philosophy of building adaptable, modern solutions rather than relying on outdated methods. This webinar will walk you through our integration with Amazon Bedrock, which extends GenAI Secure's protection with comprehensive threat intelligence and custom policy creation. Through these methods, AI models are kept free of malicious code and threats while sensitive data stays secure.
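For context, the sketch below shows the kind of custom policy Amazon Bedrock supports natively: a guardrail that blocks or anonymizes sensitive information, created here with boto3. It is a generic illustration only; the guardrail name, PII choices, and regex are hypothetical, and this is not how GenAI Secure itself is implemented.

```python
# Generic Amazon Bedrock guardrail created with boto3 -- an illustration of
# "custom policy creation", NOT GenAI Secure's own API or configuration.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="example-sensitive-data-guardrail",  # hypothetical name
    description="Block common PII and an example internal ID pattern",
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "US_SOCIAL_SECURITY_NUMBER", "action": "BLOCK"},
        ],
        "regexesConfig": [
            {
                "name": "internal-project-id",  # hypothetical custom pattern
                "description": "Example internal identifier format",
                "pattern": r"PROJ-\d{6}",
                "action": "BLOCK",
            }
        ],
    },
    blockedInputMessaging="This request contains restricted content.",
    blockedOutputsMessaging="The response was blocked by policy.",
)

print("Created guardrail:", response["guardrailId"])
```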
What you will learn:
- GenAI Secure Overview - Understand how CSS protects your AWS environment and downstream data from AI-borne threats
- Advanced Protection Features - Learn about our integration with Amazon Bedrock for enhanced threat intelligence and more
- Practical Deployment Insights - Discover how GenAI Secure is applied in real-world customer environments
-
GenAI Secure Demo
Aaron Gettings, Director of Engineering, Cloud Storage Security
Join us for a brief demo of GenAI Secure by Cloud Storage Security. In this demo we will walk through how to secure your Generative AI models, showing how easy it is to protect a bucket and how to push scan results to your preferred notification routes. Forensic analysis, DLP, and malware detection will also be covered.
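How GenAI Secure routes its findings is shown in the demo itself; as a generic sketch of one common AWS notification pattern, the snippet below configures an S3 bucket to publish object-created events to an SNS topic using boto3. The bucket name and topic ARN are placeholders, and this is not the product's actual mechanism. Note that the SNS topic's access policy must already allow S3 to publish to it.

```python
# Generic AWS pattern (not GenAI Secure's implementation): publish S3
# object-created events to an SNS topic so downstream consumers are notified.
import boto3

s3 = boto3.client("s3")

BUCKET = "example-protected-bucket"                            # placeholder
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:scan-alerts"   # placeholder

s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": TOPIC_ARN, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
print(f"New objects in {BUCKET} will now notify {TOPIC_ARN}")
```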
What you will learn:
- How to protect a bucket
- How to initiate a scan
- Picking a scanning engine
- Viewing AV results
- Viewing DLP results
- Custom regular expression creation (see the sketch after this list)
- Bedrock forensic analysis
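As a plain-Python illustration of the kind of pattern a custom DLP rule might match, the snippet below tests two example regular expressions against a sample string. Both patterns are illustrative only and are not GenAI Secure's built-in rules.

```python
# Illustrative custom DLP-style patterns, tested in plain Python.
# These regexes are examples only, not GenAI Secure's built-in rules.
import re

CUSTOM_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_sensitive(text: str) -> dict[str, list[str]]:
    """Return every match of each custom pattern found in the text."""
    return {name: pattern.findall(text) for name, pattern in CUSTOM_PATTERNS.items()}

sample = "Contact: 123-45-6789, card 4111 1111 1111 1111"
print(find_sensitive(sample))
```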