Featured
AI Strategy, Security, and Governance: The View from the Top
Sayoko Blodgett-Ford, GTC Law Group and Jamie Boote, Synopsys
The Board and C-Suite are starting to take notice of the opportunities and risks inherent in powerful new generative artificial intelligence (GAI) tools that can quickly create text, code, images, and other media. Product Development and Engineering teams want to use such tools to increase productivity by at least an order of magnitude. In response, the Security, Legal, and Compliance teams typically raise legitimate concerns about the risks involved. What role can the Board and C-Suite play in this situation?
Join this live Synopsys webinar to get a jump start on what AI strategy, security, and governance look like from the Board and C-suite perspective. We’ll cover:
• Fundamentals of AI, types of models, and data used to inform them
• Expanding existing processes and procedures to address the security risks of GAI
• The top three questions the Board and C-Suite should be asking about GAI
• How to navigate the existing and evolving legal and regulatory landscape
All episodes
-
Black Duck Snippet Matching and Generative AI Models
Frank Tomasello, Senior Sales Engineer
Join this webinar to learn how Black Duck® snippet matching can help identify open source software and the potential license risk that tools like GitHub Copilot and OpenAI's ChatGPT can introduce into your codebase. With Black Duck, you can:
- Identify components licensed under any of the more than 2,700 licenses tracked in our KnowledgeBase
- Understand license requirements in simple terms so development teams can quickly assess the impact of including a component in their code
- Flag potential license conflicts so teams stay in compliance with policy
-
AI and Software Development: IP and Governance
Sarah Hopton, Barbara Zapisetskaya and Dr. Sam De Silva from CMS & Phil Odence from Black Duck
Although AI has been around for decades, recent advances, including the development of generative AI tools, mean that AI-related stories are hitting the headlines on an almost daily basis. These include the copyright infringement proceedings brought by a group of American novelists against OpenAI over the use of their novels as training data, Samsung's ban on using generative AI tools on its internal networks and company-owned devices, and the Hollywood writers’ strike over the use of AI in the film industry.
More than half of organisations (52%) recently polled by Gartner reported that risk factors are a critical consideration when evaluating new AI use cases. So before spending time and money on developing software with AI, it is important for a company to understand the potential risks of doing so and how to mitigate them. Companies will need to rethink how development gets done, focusing as much on evolving operations and training people as on technology. In addition, a company looking to be acquired down the line will need to be ready to answer questions from prospective buyers.
Join this webinar, in which a panel of legal experts from CMS UK will focus on two hot topics in relation to AI: intellectual property and governance. They will cover:
• IP issues relating to the use of third-party content to train AI tools
• Questions around subsistence, authorship/inventorship, and ownership of any IP
• The risk of the output infringing third-party IP rights
• Key IP considerations in the context of a potential acquisition
• How to manage the development of software with the help of AI through effective governance
• How ISO standards and standardisation can play a significant role in mitigating the risks associated with AI and establishing robust governance frameworks
-
Keeping Pace: Managing the Risks of AI-Generated Code
Patrick Carey, Executive Director - GTM Strategy
AI coding assistants, such as Microsoft Copilot and ChatGPT, will fundamentally change the way teams build software, much as open source software has over the last decade. As with open source, teams seeking the benefits of AI will also need to take precautions to address the security, quality, and intellectual property risks that come with the use of AI-generated code. Is your team ready for AI?
In this webinar, we'll explore:
• Key risks teams might encounter using coding assistants
• Safeguards needed for confident use of AI-generated code
-
Ask the Experts: AI and Software Development
DLA Piper, GTC Law Group, Hall Law, Osler, and Synopsys
As you start down the path of using generative artificial intelligence (GAI) in software development to improve efficiency, reduce costs, and increase revenue, you must also be aware of the associated legal issues. How can you leverage AI and minimize the risk it presents?
Join this live Synopsys webinar in which a panel of legal experts and practitioners will answer your questions about the rise of AI in software development, and how you can responsibly leverage this new technology. We’ll cover:
• The benefits and risks of using AI in software development
• The evolving legal and regulatory landscape
• Practical advice for using AI today and into the future
-
How to Protect Yourself from AI Cybersecurity Nightmares
Menny Barzilay, Cyber Security Expert, Cytactic, FortyTwo Global and FortyTwo Labs
Artificial intelligence (AI) continues to push the frontiers of technology - and it's here to stay. Its influence on cybersecurity reveals a double-edged sword. AI is becoming a formidable enhancer of security measures. But as with all new technologies, there is a downside to consider. AI is already a core tool in the arsenal of cybercriminals, and its capabilities are harnessed to refine every aspect of cybercrime.
The more we see criminals leveraging AI, the more we realize that our defences are not yet equipped to counter malicious AI. So where does this leave us?
This session will shed light on how criminals exploit AI to elevate their malicious endeavours. Gain practical insights into the evolving landscape of cybercrime and the pivotal role of AI within it. Let's take a glimpse into a future where AI dominates the cybercrime landscape! Are you ready for the move?
-
Managing Software Risks in the Age of AI-Generated Code
John Lynn & Laila Paszti, Kirkland & Ellis LLP / Chris Murphy, Vista Equity Partners / Phil Odence, Black Duck
In the complex world of software development, generative artificial intelligence (GAI) coding tools appear as a beacon of productivity and effectiveness. When handled with precision, they brighten the path to innovation, cutting through the intricacies of coding. However, as with any unchecked flame, such tools must be carefully managed to avoid endangering an organization's valued IP, impacting its bottom line, or introducing risk into an M&A transaction.
Join this webinar to get an introduction to GAI coding tools and how you can minimize risk when using these in your organization. We’ll cover:
- Introduction to GAI coding tools (from code completion to code generation)
- Legal, operational, and M&A risks arising from GAI coding tools (e.g., IP ownership, IP infringement, cybersecurity)
- Establishing a general AI policy with provisions specifically tailored to issues arising in using AI for coding
- Managing risk arising from GAI coding tools, including a mix of technical, operational, and administrative safeguards (e.g., usage policies, auditing tools, optimal selection and implementation of tools)
This presentation is intended for legal and technical teams involved in software development and M&A software due diligence.
-
Artificial Intelligence, Real Security: Preparing DevSecOps for AI Development
Steven Zimmerman, DevOps Security Solutions Manager
AI-powered development has greatly increased the rate at which software evolves. But using artificial intelligence as a proxy for security-aware developers introduces a variety of risks to the business.
Organizations must prepare for the complexities of AI-powered development. This requires establishing consistent and scalable DevSecOps initiatives. The discussion will cover:
• Innovation and trends in AI-powered software development
• The inherent risks to application security and business operations of using AI and third-party digital artifacts
• Best practices for establishing application security testing and issue remediation within DevOps workflows that include AI
• Ways to use AI to empower developers and reduce the opportunity for attack
-
Cloud-native and Generative AI Implications in DevSecOps
Kimm Yeo, Senior Manager, Platform Solutions Marketing & Debrup Ghosh, Senior Product Manager
Security remains a leading concern for businesses in this multi-cloud era. With the increase in cloud-native deployment and the use of generative AI in application software development, along with the proliferation of tools, data, and applications distributed across various cloud platforms, ensuring a consistent and robust security posture while keeping pace with fast DevOps CI/CD pipelines can be daunting.
This webinar will explore some of the challenges organizations face when navigating the complex landscape of securing their applications from code to cloud in a fast-paced DevOps CI/CD environment. Attendees will gain insights into how companies overcome these challenges and implement robust, effective application security solutions that align with their DevSecOps initiatives.
Key Takeaways:
- Recognize the security challenges in this fast-paced, evolving technology landscape
- The importance of adopting an integrated “Essential Three” app security strategy
- Practical solutions for DevSecOps alignment from a customer perspective
-
AI Strategy, Security, and Governance: The View from the Top
Sayoko Blodgett-Ford, GTC Law Group and Jamie Boote, Synopsys
The Board and C-Suite are starting to take notice of the opportunities and risks inherent in powerful new generative artificial intelligence (GAI) tools that can quickly create text, code, images, and other media. Product Development and Engineering teams want to use such tools to increase productivity by at least an order of magnitude. In response, the Security, Legal, and Compliance teams typically raise legitimate concerns about the risks involved. What role can the Board and C-Suite play in this situation?
Join this live Synopsys webinar to get a jump start on what AI strategy, security, and governance look like from the Board and C-suite perspective. We’ll cover:
• Fundamentals of AI, types of models, and data used to inform them
• Expanding existing processes and procedures to address the security risks of GAI
• The top three questions the Board and C-Suite should be asking about GAI
• How to navigate the existing and evolving legal and regulatory landscape
-
Best Practices for Using AI in Software Development
Anthony Decicco, GTC Law Group and Sam Ip, Osler, Hoskin & Harcourt
There is no shortage of buzz around generative artificial intelligence (GAI). GAI can be used in software development to generate and augment code, saving time and shortening development cycles. But using AI in software development comes with its own set of risks.
Join this webinar to get an introduction to GAI and how you can minimize risk when using it in your organization. We’ll cover:
• What GAI is and how machines learn
• Legal issues with AI including copyright, web scraping, and more
• Overview of current litigation
• Practical approaches to using GAI while minimizing risk
-
Generative AI, Training Data, Open Source, and GitHub Copilot, Oh My!
Lena and Andrew Hall, Hall Law & Mark Lehberg and Chris Stevenson, DLA Piper
Generative artificial intelligence (GAI) will fundamentally change the way that software is built. Whether they are developing or using AI tools, organizations must understand the opportunities and risks involved, and evolve governance, policies and processes to address those risks.
Join this webinar for a deep dive into the issues that arise when using GAI in software development. We’ll cover:
• Open source data and software licenses and risks with AI
• Licensing and clearance considerations for materials used to train AI models
• Licensing considerations in building, training, and using AI models
• A deep dive on GitHub Copilot, including implications of the class action suit