Featured
Generative AI, Training Data, Open Source, and GitHub Copilot, Oh My!
Lena and Andrew Hall, Hall Law & Mark Lehberg and Chris Stevenson, DLA Piper
Generative artificial intelligence (GAI) will fundamentally change the way that software is built. Whether they are developing or using AI tools, organizations must understand the opportunities and risks involved, and evolve governance, policies and processes to address those risks.
Join this webinar for a deep dive into the issues that arise when using GAI in software development. We’ll cover:
• Open source data and software licenses and risks with AI
• Licensing and clearance considerations for materials used to train AI models
• Licensing considerations in building, training, and using AI models
• A deep dive on GitHub Copilot, including implications of the class action suit
All episodes
What Is Software Composition Analysis?
Mike McGuire, Senior Software Solutions Manager, Black Duck
Modern applications are no longer created from scratch; instead, they are constructed from various components, including open source code that is often developed by individuals outside the organization. Our research reveals that open source code makes up 76% of the average application.
Although leveraging open source software provides access to external expertise, it also entails responsibilities for organizations. Ensuring the security, compliance, and quality of the code is crucial. This is where software composition analysis (SCA) plays a significant role.
Join this discussion that explores the following topics:
• What SCA is and how it functions
• Addressing risks through SCA
• Key elements of an effective SCA solution
• Building a comprehensive open source risk management program with SCA
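In rough terms, an SCA scan walks an application's dependency manifest and checks each pinned component against license and vulnerability data. The sketch below is a minimal illustration of that mechanic; all package names, versions, and advisory IDs are hypothetical, and real SCA tools query curated databases such as the Black Duck KnowledgeBase rather than in-memory dictionaries.

```python
import re

# Hypothetical advisory and license data, keyed by (component, version).
ADVISORIES = {("examplelib", "1.0.0"): ["EXAMPLE-2024-0001"]}
LICENSES = {"examplelib": "MIT", "otherlib": "GPL-2.0"}

def scan_manifest(manifest_text):
    """Return a per-component report of licenses and known advisories."""
    report = []
    for line in manifest_text.splitlines():
        # Match pinned requirements of the form "name==version".
        match = re.match(r"^\s*([A-Za-z0-9_.-]+)==([0-9][\w.]*)", line)
        if not match:
            continue
        name, version = match.groups()
        report.append({
            "component": name,
            "version": version,
            "license": LICENSES.get(name, "unknown"),
            "advisories": ADVISORIES.get((name, version), []),
        })
    return report

report = scan_manifest("examplelib==1.0.0\notherlib==2.1.0\n")
```

A real tool layers much more on top of this (transitive dependency resolution, binary and snippet analysis, policy enforcement), but the lookup-per-component core is the same.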
Black Duck Snippet Matching and Generative AI Models
Frank Tomasello, Senior Sales Engineer
Join this webinar to learn how Black Duck® snippet matching can help identify open source software and the potential license risk that tools like GitHub Copilot and OpenAI's ChatGPT can introduce into your codebase. With Black Duck, you can:
- Identify components under any of the more than 2,700 licenses tracked in our KnowledgeBase
- Understand license requirements in simple terms so development teams can quickly assess the impact of including a component in their code
- Flag potential license conflicts so teams stay in compliance with policy
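Snippet matching goes a step further than whole-component identification: it fingerprints small fragments of source so that code pasted from elsewhere (for example, by an AI coding assistant) can still be traced back to its origin. Black Duck's actual matching is proprietary; the toy sketch below simply hashes every three-line window of normalized code against a made-up, single-entry knowledge base to show the general idea.

```python
import hashlib

def fingerprints(code, k=3):
    """Hash every k-line window of normalized code (a toy stand-in for
    real snippet-fingerprinting schemes, which are far more robust)."""
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    return {
        hashlib.sha256("\n".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(len(lines) - k + 1)
    }

# Hypothetical knowledge base: fingerprints of one known GPL-2.0 file.
known_snippet = "a = 1\nb = 2\nc = a + b\nprint(c)\n"
kb = {fp: "GPL-2.0" for fp in fingerprints(known_snippet)}

def match(candidate):
    """Report licenses of any known snippets found in the candidate code."""
    return {kb[fp] for fp in fingerprints(candidate) if fp in kb}

# A file that pasted the snippet verbatim still triggers a match,
# even though it contains extra surrounding code.
hits = match("x = 0\na = 1\nb = 2\nc = a + b\nprint(c)\n")
```

Windowed fingerprinting is what lets a scanner flag a pasted fragment buried inside an otherwise original file, which is exactly the risk profile AI-generated code introduces.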
AI and Software Development: IP and Governance
Sarah Hopton, Barbara Zapisetskaya and Dr. Sam De Silva from CMS & Phil Odence from Black Duck
Although AI has been around for decades, recent advances, including the development of generative AI tools, mean that AI-related stories are hitting the headlines on an almost-daily basis. This includes the copyright infringement proceedings brought by a group of American novelists against OpenAI in relation to use of their novels as training data, the ban by Samsung on using generative AI tools on its internal networks and company-owned devices, and the Hollywood writers’ strike over the use of AI in the film industry.
More than half of organisations (52%) recently polled by Gartner reported that risk factors are a critical consideration when evaluating new AI use cases. So before spending time and money on the development of software by using AI, it is important for a company to understand the potential risks of doing so and how to mitigate such risks. Companies will need to rethink how development gets done, with the focus needing to be on evolving operations and training people as much as on technology. In addition, if looking to be acquired later down the line, that company will need to be ready to answer questions from prospective buyers.
Join this webinar, in which a panel of legal experts from CMS UK will focus on two hot topics in relation to AI: intellectual property and governance. They will cover:
• IP issues relating to the use of third party content to train AI tools
• Questions around subsistence, authorship/inventorship, and ownership of any IP
• The risk of the output infringing third party IP rights
• Key IP considerations in the context of a potential acquisition
• How to manage development of software with the help of AI through effective governance
• How ISO Standards and standardisation can play a significant role in mitigating the risks associated with AI and establishing robust governance frameworks
The 2023 Open Source Year in Review
Tony Decicco, GTC Law Group | Chris Stevenson, DLA Piper | Phil Odence, Black Duck
Gain insights into important legal developments from two of the leading open source legal experts, Tony Decicco, Principal at GTC Law Group & Affiliates and Chris Stevenson, Of Counsel at DLA Piper.
This annual review will highlight the most significant legal developments related to open source software in 2023, focusing on topics that were resolved, those that got started and what we can expect to see in coming years.
We’ll cover:
• Updates on key open source-related litigation and disputes
• The Cyber Resilience Act and the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
• Potential liability for developers releasing and contributing to open source software
• The impacts of GAI coding tools, such as GitHub Copilot and Amazon CodeWhisperer
• Open source software controversies, deals, and hacks
• And much, much more
Register today!
CLE:
DLA Piper LLP (US) has been certified by the State Bar of California, Illinois MCLE Board, the Board on Continuing Legal Education of the Supreme Court of New Jersey, and the New York State Continuing Legal Education Board as an Accredited Provider. The following CLE credit is being sought:
• California: 1.5 Credit (1.5 General, 0.0 Ethics)
• Illinois: 1.5 Credit (1.5 General, 0.0 Professional Responsibility)
• New Jersey: 1.8 Credits (1.8 General, 0.0 Ethics)
• New York: 1.5 Transitional & Non-Transitional Credit (1.5 Professional Practice, 0.0 Ethics)
CLE credit will be applied for in other states where DLA Piper has an office, with the exception of Minnesota, North Carolina, and Puerto Rico.
Ask the Experts: AI and Software Development
DLA Piper, GTC Law Group, Hall Law, Osler, and Synopsys
As you start down the path of using generative artificial intelligence (GAI) in software development to improve efficiency, reduce costs, and increase revenue, you must also be aware of the associated legal issues. How can you leverage AI and minimize the risk it presents?
Join this live Synopsys webinar in which a panel of legal experts and practitioners will answer your questions about the rise of AI in software development, and how you can responsibly leverage this new technology. We’ll cover:
• The benefits and risks for using AI in software development
• The evolving legal and regulatory landscape
• Practical advice for using AI today and into the future
Managing Software Risks in the Age of AI-Generated Code
John Lynn & Laila Paszti, Kirkland & Ellis LLP / Chris Murphy, Vista Equity Partners / Phil Odence, Black Duck
In the complex world of software development, generative artificial intelligence (GAI) coding tools appear as a beacon of productivity and effectiveness. When handled with precision, they brighten the path to innovation, cutting through the intricacies of coding. However, as with any unchecked flame, such tools must be carefully managed to avoid endangering an organization's valued IP, impacting its bottom line or introducing risk into an M&A transaction.
Join this webinar to get an introduction to GAI coding tools and how you can minimize risk when using these in your organization. We’ll cover:
- Introduction to GAI coding tools (from code completion to code generation)
- Legal, operational, and M&A risks arising from GAI coding tools (e.g., IP ownership, IP infringement, cybersecurity)
- Establishing a general AI policy with provisions specifically tailored to issues arising in using AI for coding
- Managing risk arising from GAI coding tools, including a mix of technical, operational, and administrative safeguards (e.g., usage policies, auditing tools, optimal selection and implementation of tools)
This presentation is intended for legal and technical teams involved in software development and M&A software due diligence.
Artificial Intelligence, Real Security: Preparing DevSecOps for AI Development
Steven Zimmerman, DevOps Security Solutions Manager
AI-powered development has greatly increased the rate at which software evolves. But using artificial intelligence as a proxy for security-aware developers introduces a variety of risks to the business.
Organizations must prepare for the complexities of AI-powered development. This requires establishing consistent and scalable DevSecOps initiatives. The discussion will cover:
• Innovation and trends in AI-powered software development
• The inherent risks to application security and business operations of using AI and third-party digital artifacts
• Best practices for establishing application security testing and issue remediation within DevOps workflows that include AI
• Ways to use AI to empower developers and reduce opportunities for attack
Cloud-native and Generative AI Implications in DevSecOps
Kimm Yeo, Senior Manager, Platform Solutions Marketing & Debrup Ghosh, Senior Product Manager
Security remains a leading concern for businesses in this multi-cloud era. With the increase in cloud-native deployment and the use of generative AI in application software development, along with the proliferation of tools, data, and applications distributed across various cloud platforms, ensuring a consistent and robust security posture while keeping pace with fast DevOps CI/CD pipelines can be daunting.
The webinar will explore some of the challenges organizations face when navigating the complex landscape in securing their applications from code to cloud in a fast-paced DevOps CI/CD environment. Attendees will gain insights into how companies overcome these challenges and implement robust and effective application security solutions that align to their DevSecOps initiatives.
Key Takeaways:
- Recognize the security challenges in this high pace, evolving technology landscape
- The importance of adopting an integrated “Essential Three” app security strategy
- Practical solutions for DevSecOps alignment from a customer perspective
How Do You Secure Hype: Baselining Your Org’s Generative AI Security
David Benas, Associate Principal Consultant
We all can plainly see that AI is the “next big thing.” Whether your organization is bringing up its skill baseline, integrating LLM chatbots with existing applications, leveraging models to augment existing applications, pulling models off HuggingFace and fine-tuning them, or training your own models from scratch, this talk will provide you with a baseline to answer the deceptively basic question: how do I secure it?
Join this webinar to learn
- The basics of GenAI
- Unique risks with AI integration
- Strategies for securely implementing AI
- Lessons from “the field”
AI Strategy, Security, and Governance: The View from the Top
Sayoko Blodgett-Ford, GTC Law Group and Jamie Boote, Synopsys
The Board and C-Suite are starting to take notice of the opportunities and risks inherent with powerful new generative artificial intelligence (GAI) tools that can quickly create text, code, images, and other media. Product Development and Engineering teams want to use such tools to increase productivity by at least one order of magnitude. In response, the Security, Legal, and Compliance teams typically raise legitimate concerns about the risks involved. What role can the Board and C-Suite play in this situation?
Join this live Synopsys webinar to get a jump start on what AI strategy, security, and governance looks like from the Board-level and C-suite. We’ll cover:
• Fundamentals of AI, types of models, and data used to inform them
• Expanding existing processes and procedures to address the security risks of GAI
• The top three questions the Board and C-Suite should be asking about GAI
• How to navigate the existing and evolving legal and regulatory landscape
Best Practices for Using AI in Software Development
Anthony Decicco, GTC Law Group and Sam Ip, Osler, Hoskin & Harcourt
There is no shortage of buzz around generative artificial intelligence (GAI). GAI can be used in software development to generate and augment code, which saves time and reduces development cycles. But using AI in software development comes with its own set of risks.
Join this webinar to get an introduction to GAI and how you can minimize risk when using it in your organization. We’ll cover:
• What GAI is and how machines learn
• Legal issues with AI including copyright, web scraping, and more
• Overview of current litigation
• Practical approaches to using GAI while minimizing risk