Reasonable 🔐AppSec #76 - Five Security Articles and Podcast Corner

A review of application security happenings and industry news from Chris Romeo.

Hey there,

In this week’s issue, please enjoy the following:

  • Five security articles 📰 that are worth YOUR time

  • Podcast 🎙️Corner

  • Where to find Chris? 🌎

Five Security Articles 📰 that Are Worth YOUR Time

  1. From Naptime to Big Sleep: Using Large Language Models To Catch Vulnerabilities In Real-World Code — Google's Project Zero team has advanced their "Naptime" framework into "Big Sleep," collaborating with Google DeepMind to enhance large language models (LLMs) for vulnerability research. This evolution led to the discovery of a stack buffer underflow in SQLite, demonstrating the potential of AI agents in identifying previously unknown exploitable memory-safety issues in widely used software. [This is an area to watch over the next 1-2 years.]

  2. Redefining Security in DevSecOps—Threat modeling is essential for integrating security into the DevSecOps process. It addresses the challenges of speed and complexity in modern software development by identifying potential vulnerabilities throughout the application lifecycle. Emphasizing a proactive security culture, organizations are encouraged to incorporate threat modeling iteratively, use automation tools, and foster collaboration among development and security teams to enhance resilience and mitigate risks effectively. [I still maintain that DevSecOps is dead.]

  3. How AWS built the Security Guardians program, a mechanism to distribute security ownership — The AWS Security Guardians program aims to distribute security ownership by integrating security experts, known as Guardians, directly into product development teams to prioritize security throughout the development lifecycle. This initiative fosters a culture of proactive security ownership, empowering teams to build and deploy products more securely and efficiently. [Scale always catches my attention - the AWS Security Guardians is rumored to be the most extensive Champion program on the planet.]

  4. Top 10 for LLM Project, Expands Initiatives & Publishes New AI Security Guidance — The OWASP Top 10 for LLM & Generative AI Security project offers comprehensive guidance on securing applications that utilize Large Language Models (LLMs) and generative AI technologies. It provides a curated list of the most critical vulnerabilities and actionable recommendations to help developers, data scientists, and security experts navigate the complex landscape of AI application security. [Top 10 for LLM are leading the pack.]

  5. Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks — Mantis is a defensive framework designed to counter LLM-driven cyberattacks by exploiting large language models' vulnerability to adversarial prompt injection. By embedding crafted inputs into system responses, Mantis can misdirect or disrupt the attacker's LLM, achieving over 95% effectiveness in experiments. [Just when you thought the hackback issue had exited the building.]

Podcast Corner

I love making podcasts. In Podcast Corner, you get a single place to see what I’ve put out this week. Sometimes, they are my podcasts. Other times, they are podcasts that have caught my attention.

  • Application Security Podcast

    • Tanya Janca -- What Secure Coding Means (Audio only; YouTube)

      • Tanya Janca, SheHacksPurple, returns to discuss the importance of secure coding practices and the need for a robust, secure system development life cycle (SDLC) to uphold security claims genuinely.

      • Emphasizing proactive measures such as input validation and the principle of distrust and verification, Tanya shares personal anecdotes from threat modeling sessions to illustrate the necessity of anticipating vulnerabilities.

  • Security Table

    • The Future Role of Security and Shifting off the Table (Audio only; YouTube)

      • Hosts Chris, Izar, and Matt discuss the evolving application security landscape. Chris suggests that security functions may eventually be integrated into development teams to reduce friction and enhance efficiency despite common misconceptions about the impact of security breaches on brand reputation.

      • The conversation also addresses the "shift left" movement in application security, highlighting the need for clarity in what this term entails and advocating for starting security considerations from the project's requirements phase to ensure meaningful implementation.

  • Threat Modeling Podcast

    • Nandita Rao Narla -- Privacy Threat Modeling Wins, Losses, and Tools (Audio only)

      • Hosts Chris and Nandita Rao Narla discuss the pitfalls of privacy threat modeling programs, including high costs, friction in development processes, and misalignment with compliance-focused strategies rather than risk management.

      • Nandita emphasizes successful strategies for improving privacy threat modeling, such as using existing security resources, simplifying methodologies, and fostering a culture that prioritizes understanding potential risks, ultimately advocating for stronger integration of privacy and security threat modeling practices.

Threat Model for Free

Welcome to Simple, Collaborative Threat Modeling by Devici.

Introducing the modern drawing tool that's user-friendly, customizable, and easy on the eyes. Individuals and teams work together – no matter their location. Devici helps build a scalable threat modeling process for multi-disciplinary and geographically dispersed teams, ensuring everyone can contribute.

Visit devici.com to experience threat modeling for free.

Where to find Chris? 🌎

  • Nothing is on the docket now, but stay tuned for the next webinar!

🤔 Have questions, comments, or feedback? I'd love to hear from you!

🔥 Reasonable AppSec is brought to you by Kerr Ventures.

🤝 Want to partner with Reasonable AppSec? Reach out, and let’s chat.