Reasonable 🔐AppSec #41 - AI and Application Security: Best Friends?, Five Security Articles, and Podcast Corner

A review of application security happenings and industry news from Chris Romeo.

Hey there,

In this week’s issue, please enjoy the following:

  • Five security articles 📰 that are worth YOUR time

  • Featured focus: AI and Application Security: Best Friends?

  • Podcast 🎙️ Corner

  • Where to find Chris? 🌎

Five Security Articles 📰 that Are Worth YOUR Time

  1. Lessons in threat modeling: How attack trees can deliver AppSec by design — Ever wonder how to use attack trees in threat modeling to improve the security of software design? Like a fine wine, the idea improves with age: I’ve been thinking more deeply about attack trees and how they complement the threat model. An attack tree is a hierarchical representation of potential threat scenarios, providing a detailed view of the weak points and critical junctures within a workflow so you can formulate targeted, tactical defenses. Use attack trees with threat modeling to gain insight into specific vulnerabilities and potential exploits (see the sketch after this list).

  2. Top 10 web hacking techniques of 2023 — Each year, PortSwigger presents the top 10 web hacking techniques, and each year I tune in. The 2023 edition highlights innovative research in web security, including attacks on critical internet infrastructure, session integrity, HTTP desync attacks, exploitation of parser inconsistencies, and more.

  3. Lateral Movement Attacks - 3 Steps — I’ve been exploring adding LM, or Lateral Movement, to STRIDE, so this article caught my attention. It covers lateral movement attacks, focusing on the middle stage of the attack chain, where attackers move laterally within a network after gaining initial access. The article emphasizes the importance of understanding and defending against these attacks, which are often hard to detect and can lead to significant breaches.

  4. Cybersecurity First Principles & Shouting Into the Void — Implementing Secure-by-Design/Default approaches in cybersecurity is a long-standing challenge. Despite decades of recommendations, the industry tolerates insecure products and insufficient security measures. The article argues for systemic behavioral change focused on reducing the probability of material impact, and it suggests that shifting liability for insecure products and services, as proposed in the National Cyber Strategy, may drive meaningful change in cybersecurity practices.

  5. NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems — The National Institute of Standards and Technology (NIST) has identified various types of cyberattacks that manipulate the behavior of AI systems, including adversarial machine learning threats. The publication outlines mitigation strategies and their limitations, highlighting the need for AI developers and users to be wary of claims of foolproof protection against these attacks.
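
For those who want to experiment with the attack-tree idea from item 1, here’s a minimal sketch of an attack tree as a nested data structure in Python. Everything in it (the AttackNode class, the goals, the AND/OR gates) is my own illustration, not something from the article:

```python
from dataclasses import dataclass, field

# A minimal attack-tree sketch (all names hypothetical): each node is a goal,
# and its children are the sub-goals an attacker could pursue to reach it.
@dataclass
class AttackNode:
    goal: str
    gate: str = "OR"  # "OR": any child suffices; "AND": every child is required
    children: list["AttackNode"] = field(default_factory=list)

    def leaves(self):
        """Walk the tree and yield the concrete attack steps (leaf nodes)."""
        if not self.children:
            yield self.goal
        else:
            for child in self.children:
                yield from child.leaves()

# Example: one root goal decomposed into two attack paths.
root = AttackNode("Steal customer records", children=[
    AttackNode("Compromise the API", gate="AND", children=[
        AttackNode("Obtain a valid session token"),
        AttackNode("Exploit a broken object-level authorization check"),
    ]),
    AttackNode("Compromise the database directly", children=[
        AttackNode("Phish a DBA credential"),
    ]),
])

for step in root.leaves():
    print(step)  # each leaf is a candidate mitigation target in the threat model
```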

Featured Focus: AI and Application Security: Best Friends?

Since OpenAI launched ChatGPT into the public square, AI has been a top-of-mind challenge and technology-stack question for most people.

It has been said that AI can potentially eliminate jobs from the workforce. My answer is that anything has the “potential” to do anything. My truck has the potential to win the Bahrain F1 Grand Prix. The chances of it happening are slight, but it does have the potential.

Fundamentally, what we think of as AI today is a parlor trick of hyperspeed data analysis. Ask AI for an original thought, and it comes up empty. AI parrots back its best guess at an answer based on the training data it has analyzed. Training data does not lead to innovation: questions posed against training data yield answers that have already been given someplace, at some time, in the past.

At this point, you may think I’m an AI naysayer or Luddite. Not so. I believe AI can enhance an application security platform. So much so that at Devici, we are focusing on what we call AI infusion: we infuse AI into a feature so seamlessly that, unless we tell you, you may never know an LLM is driving it. That is the value proposition of LLMs: automatic summarization reduces the amount of data processing that humans must perform inside an application.

The best use case for LLMs and generative AI today is enhancing the data a system already collects, enrichment that humans would otherwise perform by hand. Find a use case that enhances existing data, then measure the productivity gains.
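
As one concrete illustration of that pattern, here’s a minimal sketch that uses an LLM to summarize findings a system has already collected. It assumes OpenAI’s Python SDK (chat completions API); the function name, model choice, and findings are hypothetical, and this is not how Devici implements AI infusion:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_findings(findings: list[str]) -> str:
    """Enhance collected data with an LLM-generated rollup that a human
    would otherwise write by hand. Hypothetical sketch, not production code."""
    prompt = (
        "Summarize the following application security findings in three "
        "sentences for an engineering leader:\n- " + "\n- ".join(findings)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works for this sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(summarize_findings([
    "SQL injection in the /search endpoint (critical)",
    "Session cookie missing the Secure flag (medium)",
    "Outdated TLS configuration on the legacy API (low)",
]))
```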

And please, don’t add an AI chatbot to your platform and claim you are now an AI company. Nobody wants to talk to your chatbot, and having one does not make you an AI company.

Podcast 🎙️ Corner

I love making podcasts. In Podcast Corner, you get a single place to see what I’ve put out this week. Sometimes, they are my podcasts. Other times, they are podcasts that have caught my attention.

  • Application Security Podcast

    • Jason Nelson — Three Pillars of Threat Modeling Success: Consistency, Repeatability, and Efficacy (Audio only; YouTube)

      • Jason Nelson discusses the critical pillars of establishing successful threat modeling programs in data-intensive industries, emphasizing consistency, repeatability, and efficacy.

      • He provides valuable insights into security practices, regulatory environments, and the importance of a threat modeling champion for those serious about application security.

  • Security Table

    • Selling Fear, Uncertainty, and Doubt (Audio only; YouTube)

      • The Table discusses the impact of fear, uncertainty, and doubt (FUD) in cybersecurity, highlighting its potential to raise awareness and cause decision paralysis or misguided actions due to information overload.

      • They also explore the weaponization of security in competitive markets and the need for reliable cybersecurity information sources to cut through the FUD and provide actionable insights, emphasizing the importance of balancing competitive advantage with ethical responsibility and consumer education.

  • Threat Modeling Podcast

    • Working on Nandita’s part two this week!

Pictures are Fun

Threat Modeling, Attack Trees, or a glass mixed with both?

Where to find Chris? 🌎

  • Webinar: The Role of AI in Application Security, March 6, 2024, Noon US/Eastern, sign up.

  • Livestream: AppSec and DevSecOps track discussion for #RSAC, March 29, 2024; sign-up coming soon.

  • Webinar: Building a Successful Security Champions Program, April 11, 2024, Noon US/Eastern, sign up.

  • BSides SF, May 4-5, 2024

  • RSA, San Francisco, May 6 - 9, 2024

    • Speaking: The Year of Threat Modeling: Secure and Privacy by Design Converge (May 8, 14:25 Pacific)

    • Learning Lab: Threat Modeling Championship: Breaker vs. Builder (May 8, 08:30 Pacific)

    • I'm hanging out at the Devici booth at the Startup Expo for the rest of the time!

🤔 Have questions, comments, or feedback? I'd love to hear from you!

🔥 Reasonable AppSec is brought to you by Kerr Ventures.

🤝 Want to partner with Reasonable AppSec? Reach out, and let’s chat.