Featured
When Software Bugs are Dragons and Kids are Vanquishers
Deb Radcliff, Shift Left Editor & Kari Kakkonen, Director, Training & Competences, Knowit
As director of training and competencies at Knowit, Helsinki-based Kari Kakkonen specializes in software testing, quality, Agile, DevOps, and artificial intelligence (AI). Now he has poured more than 20 years of software testing experience into an interactive book, ‘Dragons Out,’ to teach kids ten years old and up about the importance of testing and remediating bugs in software.
In it, “bugs are dragons, and they want dragons out of their villages and their lives,” Kakkonen says. The book has been translated into 21 languages and is popular with IT people who share the stories and lessons with their children. It is also being used in classrooms with easy-to-follow instructional guides.
In this show, Kakkonen explains how his background in testing led to writing this book, and how the stories and exercises are helping bring up the next generation of software testers and secure coders. “I wanted to share my experiences in testing, and I realized there are no testing books that are meant for children. And I love dragons and fantasy, so dragons were a nice analogy when describing bugs and hunting bugs.”
The stories are chock-full of different kinds of dragons (for example, a red dragon represents a memory leak), along with knights, villages, castles, and weapons. He adds, “Fun is a great element in any type of education—and not only for children. Adults learn this way, too.”
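To make the analogy concrete, here is a minimal C sketch of the kind of bug a red dragon stands for, a memory leak; the log_message function and its buffer size are hypothetical, invented purely for illustration and not taken from the book.

#include <stdlib.h>
#include <string.h>

/* A leaky logger: every call allocates a buffer that is never freed. */
void log_message(const char *msg)
{
    char *copy = malloc(1024);          /* the red dragon hatches here */
    if (copy == NULL)
        return;
    strncpy(copy, msg, 1023);
    copy[1023] = '\0';
    /* no free(copy): each call permanently loses 1 KB */
}

int main(void)
{
    /* Called in a long-running loop, the leak slowly starves the system. */
    for (int i = 0; i < 1000000; i++)
        log_message("village under attack");
    return 0;
}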
In this interview, Kakkonen also talks about his experiences testing electric cars and their increasing reliance on AI. “Safety is the key in electric cars, especially in self-driving cars with AI elements. You need to think of how you can test the AI for security, starting with threat modeling, testing simulations, virtual testing. Repeat them again and again in virtual environments, then in the parking lot, and then on the street.”
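In the same spirit, a scenario-based simulation harness of the sort he describes might look like the following C sketch; run_in_simulator and the scenario names are hypothetical stand-ins for a real vehicle simulator, not any actual automotive API.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical outcome of one simulated driving scenario. */
typedef struct {
    const char *name;
    bool        braked_in_time;
} scenario_result;

/* Placeholder for a real simulator call; here every scenario passes. */
static scenario_result run_in_simulator(const char *name)
{
    scenario_result r = { name, true };
    return r;
}

int main(void)
{
    const char *scenarios[] = {
        "pedestrian_crossing", "cyclist_at_dusk", "sudden_lane_change"
    };
    int failures = 0;

    /* "Repeat them again and again": each scenario runs many times in the
       virtual environment before the same tests move to the parking lot,
       and only then to the street. */
    for (int run = 0; run < 100; run++) {
        for (size_t i = 0; i < sizeof scenarios / sizeof scenarios[0]; i++) {
            scenario_result r = run_in_simulator(scenarios[i]);
            if (!r.braked_in_time) {
                printf("FAIL: %s (run %d)\n", r.name, run);
                failures++;
            }
        }
    }
    printf("%d failures\n", failures);
    return failures == 0 ? 0 : 1;
}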
All episodes
-
Building Resilience in Software Through Security Chaos Engineering
Deb Radcliff, Shift Left Editor & Kelly Shortridge, Sr Principal Engineer, Fastly
Shift Left Editor Deb Radcliff interviews Kelly Shortridge, author of ‘Security Chaos Engineering: Sustaining Resilience in Software and Systems.’
Kelly Shortridge is Senior Principal Engineer at Fastly, a cloud services platform that helps developers extend their core cloud infrastructure to the network edge. We met at the RSA Conference, where she was signing her recently published book about software resilience through security chaos engineering.
In this twenty-minute video interview, we talk about why she wrote this book, how she defines security chaos engineering, how it compares to security platform engineering, and how to use these concepts to nurture developer productivity while not killing their souls with overly prescriptive security checklists.
She wrote the book because she feels the cybersecurity sector doesn’t understand software development very well, or the constraints developers are under, and because security takes an almost imperialist approach focused on stopping anything bad from happening, a posture she believes is impossible for developers to maintain. On the development side, she argues, security needs to be demystified by embracing and extending practices software engineers already use.
“Chaos engineering is based on the foundation of resilience, preparing for failure, moving quickly, and cultivating a feedback loop,” Shortridge says. “How can we transform the way we approach security across the software delivery lifecycle in a way that aligns with software goals?”
The book is packed with information, charts, tables, and advice presented in easy-to-digest bytes for developers, their managers, and product security officers. “To borrow from Dune, fear is the mind killer,” she adds. “Start small. A lot of the practices you already use for software quality can be adopted and adapted to sustain resilience against attacks.”
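As one hedged illustration of starting small, the C sketch below injects a failure into a stand-in dependency and checks that the caller degrades gracefully instead of crashing; everything here (fetch_greeting, the fallback string) is hypothetical, meant only to show the shape of a minimal chaos experiment, not code from the book.

#include <stdio.h>
#include <stdbool.h>

/* Toggle set by the experiment to simulate a failing dependency. */
static bool inject_failure = false;

/* Pretend lookup against a backing service; fails when told to. */
static int fetch_greeting(char *buf, size_t len)
{
    if (inject_failure)
        return -1;                      /* simulated outage */
    snprintf(buf, len, "hello from the service");
    return 0;
}

/* Caller under test: must fall back, never crash, when the call fails. */
static void handle_request(void)
{
    char buf[64];
    if (fetch_greeting(buf, sizeof buf) != 0)
        snprintf(buf, sizeof buf, "hello (cached fallback)");
    printf("%s\n", buf);
}

int main(void)
{
    handle_request();                   /* steady state */
    inject_failure = true;              /* chaos experiment: kill the dependency */
    handle_request();                   /* expect the graceful fallback */
    return 0;
}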
-
AI Embedded in Code: Do’s & Don’ts for Commercial Developers
Deb Radcliff, Shift Left Editor; Diana Kelley, CISO, Protect AI; Tracy Bannon, Sr Principal & DevOps Advisor, MITRE
In this video-cast, Deb hosts two experts, one focused on artificial intelligence (AI) in commercial products and the other on AI in DevOps: Diana Kelley, CISO at Protect AI, which focuses on machine learning lifecycle security, and Tracy Bannon, senior principal/software architect and DevOps advisor for MITRE.
“Arguably, AI easily goes back to the 1960s, when the Eliza solution was introduced at MIT. It was essentially an early chatbot,” Kelley explains. Machine learning (ML) and AI are already widely used in many different systems today, but with the explosion of ChatGPT, generative AI is being productized at a rapid pace.
For developers of commercial products, AI and ML open up layers of issues they should be preparing for today. For example, faulty assumptions and outputs get used in decisions: say, a medical scanner incorrectly reports a clean result when the patient truly does have cancer.
As product developers focus on supply chain and open source, they should also consider the layers of decisions and data the AI and ML models are trained on, Bannon advises. “Who made the ML model in the first place? How is it controlled and contained? And how do I make sure that those models are the right ones that are getting into production?”
Kelley adds that security scanning and testing must become part of the ML lifecycle, including special MLBOMs (machine learning bills of materials) to identify open source in the ML pipeline. Bannon agrees that product developers need to ask questions about lineage. “Where was it trained on? What was the lineage of the data it was trained on in the repositories of the world? Do I have the bug that is in that flawed open-source package?” Bannon asks. “We’re looking at turtles on top of turtles on top of turtles.”
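One small, concrete way to act on those lineage questions is to pin a known-good digest for each model artifact, recorded for example in an MLBOM, and refuse to load anything that does not match. The C sketch below assumes OpenSSL’s one-shot SHA-256 API; the file name model.onnx and the pinned value (here, the well-known hash of an empty file) are placeholders, not a real pipeline.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/sha.h>   /* compile with -lcrypto */

/* Hypothetical pinned digest recorded in an MLBOM for model.onnx.
   (This placeholder happens to be the SHA-256 of an empty file.) */
static const char *PINNED_SHA256 =
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

int main(void)
{
    FILE *f = fopen("model.onnx", "rb");
    if (!f) { perror("model.onnx"); return 1; }

    /* Read the whole artifact into memory (fine for a small demo). */
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *buf = malloc(n);
    if (!buf || fread(buf, 1, (size_t)n, f) != (size_t)n) {
        fclose(f);
        free(buf);
        return 1;
    }
    fclose(f);

    unsigned char md[SHA256_DIGEST_LENGTH];
    SHA256(buf, (size_t)n, md);
    free(buf);

    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);

    /* Refuse to deploy a model whose digest does not match the MLBOM. */
    if (strcmp(hex, PINNED_SHA256) != 0) {
        fprintf(stderr, "model digest mismatch: refusing to load\n");
        return 1;
    }
    puts("model verified");
    return 0;
}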
-
Hacking Embedded Devices
Deb Radcliff, Shift Left Editor; Alexander Heid, VP Threat Intelligence; James Bell, Co-Founder; Bryce Case
A quick look at the OWASP Embedded Application Security Project Best Practice Guide reveals numerous vulnerability classes in code developed for OT and embedded systems, including but not limited to buffer and stack overflows, injection attacks, weak or missing cryptographic signing of firmware updates, vulnerable third-party code components, and lax authentication and access controls.
To understand how these and other vulnerabilities could result in serious risk to society, three red-team hackers share some of their most chilling findings in critical infrastructure systems and provide advice for shifting left on OT and embedded DevOps practices.
These experts explain why embedded and OT systems are harder to code for because of their small footprint; how microservices can be riddled with buffer overflows and other issues that can’t be fixed; how attackers can exploit the web servers, APIs, terminal sessions, and radio signals used to connect these devices; and how IoT and other small devices often contain hard-coded passwords and lack the encryption needed to prevent man-in-the-middle attacks.
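To ground two of those weaknesses, the hypothetical C fragment below shows a hard-coded credential and a classic stack buffer overflow next to a bounded, safer variant; none of it comes from the systems the guests tested.

#include <stdio.h>
#include <string.h>

/* Classic embedded sin #1: a credential baked into the firmware image.
   Anyone who extracts the binary can read it. */
static const char *ADMIN_PASSWORD = "admin123";   /* do not do this */

/* Classic embedded sin #2: strcpy into a fixed buffer smashes the
   stack whenever input exceeds 15 bytes plus the terminator. */
void greet_unsafe(const char *name)
{
    char buf[16];
    strcpy(buf, name);              /* no bounds check: overflow */
    printf("hello %s\n", buf);
}

/* Safer variant: bounded copy with explicit truncation. */
void greet_safe(const char *name)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", name);
    printf("hello %s\n", buf);
}

int main(void)
{
    greet_safe("operator");
    /* greet_unsafe("a-string-much-longer-than-sixteen-bytes"); */
    (void)ADMIN_PASSWORD;           /* referenced only to silence warnings */
    return 0;
}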
-
Verifying Safety Assurance in High-Risk Embedded Systems
Deb Radcliff, Shift Left Editor & Antoine Colin, Cofounder & CTO, Rapita Systems
For human transport drones, low-earth orbiters, vertical take-off and landing (VTOL) aircraft, and autonomous planes, safety and integrity are mission critical. The same is true for today’s high-tech automotive systems supporting vision, autonomous driving, braking and more.
Antoine Colin is a pioneer in safety-critical embedded systems. More than 20 years ago, he set his focus on critical timing analysis for his PhD and postdoc, ultimately using that knowledge to design Rapita’s RVS Aero standards verification platform for Ada, C, and C++, which engineers use to develop multicore systems certifiable to DO-178B/C, ED-12C, or equivalent military standards. He’s also behind Rapita’s RVS Auto verification platform, which enables engineers working with AUTOSAR and OSEK systems to meet the verification requirements laid out in the ISO 26262 functional safety standard.
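For flavor, a naive high-water-mark timing measurement in C might look like the sketch below, assuming a POSIX monotonic clock and an invented control_step task; certification-grade tools such as RVS go much further, since measurement alone cannot prove a true worst case.

#include <stdio.h>
#include <time.h>

/* Hypothetical control-loop step whose execution time we budget. */
static volatile long sink;
static void control_step(void)
{
    long acc = 0;
    for (int i = 0; i < 10000; i++)
        acc += i;
    sink = acc;
}

int main(void)
{
    const long deadline_ns = 1000000;   /* 1 ms budget per step */
    long worst_ns = 0;

    /* Observed high-water mark over many runs. NOTE: this only samples
       execution paths; real timing verification combines measurement
       with static analysis of all paths. */
    for (int run = 0; run < 10000; run++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        control_step();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                + (t1.tv_nsec - t0.tv_nsec);
        if (ns > worst_ns)
            worst_ns = ns;
    }
    printf("observed worst case: %ld ns (budget %ld ns)\n",
           worst_ns, deadline_ns);
    return worst_ns <= deadline_ns ? 0 : 1;
}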
“Safety-critical systems include anything where failure is likely to result in death, injuries, loss of equipment, or any catastrophic outcome you’d like to avoid,” Colin explains. And, he says, we need to shift the needle left to address the increasingly complex code components embedded in these systems. This is especially true in avionics, where engineers traditionally use a waterfall approach and verification is done on the right, at the end of product development, he adds.
“The cost of software has gone up massively in new airplanes, and the cost of verification is a large proportion of the cost of software,” Colin continues. “Finding defects late in the process is extremely costly. And in some cases, it would be impossible to update and fix code post deployment, for example, if that system is on Mars.”
Join us and learn how to shift left on security testing and verification to build safe, reliable, and resilient safety-critical embedded systems.