Reasonable 🔐AppSec #45 - There is more to life than a stupid game, Five Security Articles, and Podcast Corner
A review of application security happenings and industry news from Chris Romeo.
Hey there,
In this week’s issue, please enjoy the following:
Five security articles 📰 that are worth YOUR time
Featured focus: There is more to life than a stupid game
Podcast 🎙️ Corner
Where to find Chris? 🌎
Five Security Articles 📰 that Are Worth YOUR Time
Uncle Sam has had enough of SQL injection vulnerabilities — the FBI and CISA have issued a Secure by Design Alert urging software vendors to conduct formal code reviews to eliminate SQL injection vulnerabilities, citing the MOVEit supply chain attacks as an example of the potential damage. The authorities advise using parameterized queries with prepared statements to mitigate these vulnerabilities and emphasize the importance of secure-by-design programming practices. [Duhh. I’m struggling to understand why they are putting this out as an alert now unless it went into a time vortex and just now appeared in 2024 from a wormhole opened in 2004.]
Top 10 web application vulnerabilities in 2021–2023 — We have another “Top 10” data source. This instance is from Kaspersky. It discusses the top 10 web application vulnerabilities from 2021 to 2023, focusing on the frequency and severity of issues like Broken Access Control, Sensitive Data Exposure, and SQL Injection. It emphasizes the critical importance of adopting secure software development practices and regular security assessments to protect against these vulnerabilities and safeguard web applications and related systems. [This type of analysis provides a real-world component to what we see in various OWASP Top Ten lists.]
Scaling security with AI: from detection to solution — Google is scaling security with AI, particularly in automating and streamlining security tasks like fixing bugs using the Secure AI Framework (SAIF). They use Large Language Models (LLMs) to expand vulnerability testing coverage and release a fuzzing framework as an open-source resource. They have also developed an automated pipeline using LLMs to generate and test fixes for vulnerabilities, showing promising results in patching bugs and saving engineers' time. [Only from the minds of Google.]
'everything' blocks devs from removing their own npm packages — The npm package registry faced an issue where the package "everything" and its variations included every npm package as a dependency, preventing authors from removing their packages due to npm's policy. This situation, termed "dependency hell," resulted from what may have started as a prank but had significant implications for the entire npm ecosystem, with efforts underway to remedy the issue. [Supply chain security comes in all shapes and sizes.]
AI Security Overview: Summary - How to address AI Security? — We love a good threat model. And this page has some. The page provides an overview of AI security, emphasizing the importance of understanding potential threats and prioritizing them based on each use case. It highlights the need for AI governance, extending security practices to data science activities, and implementing controls throughout the AI lifecycle to address security concerns. [Threat modeling is life.]
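The parameterized-query advice in that first item is worth seeing in code. Here's a minimal, illustrative sketch in Python using the standard-library sqlite3 module (my example, not from the CISA alert): the same injection payload that rewrites a string-concatenated query is treated as plain data by a prepared statement.

```python
import sqlite3

# Toy database: one admin user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload become part of the SQL,
# so the WHERE clause matches every row.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(vulnerable)  # [('admin',)] -- the payload leaked a row

# Safe: a parameterized query binds the input as a value, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] -- no user is literally named "alice' OR '1'='1"
```

Every mainstream database driver has an equivalent placeholder syntax; the point of the alert is simply that the bound-parameter form should be the only form that ships.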
Featured focus: There is more to life than a stupid game
[NOTE: this is not an AppSec story. It’s a story of my experience this past week that I had to write. We’ll return to our regularly scheduled AppSec snark next week.]
I was at a hockey game the other night. I have a team that I’ve cheered for since around 1985. This team has had highs beyond imagination, winning Stanley Cups as the ultimate champions of the NHL.
This same team has been rebuilding for the past ten years, and they’ve had some tough seasons leading up to this current season. We began to see some hope with this season, and forty days ago, they were handily in the playoffs. And then they hit a slide, which pushed them to the edge of missing the playoffs.
I’m sitting at our local arena, watching my team play against the local team, and MY team is losing 4-0. I’m seething. I don’t want to talk to anyone. I don’t want to discuss the game. Frankly, I want to go home.
I happen to look across the aisle, and I see a dad carrying his daughter around as the music is playing between plays, swinging her in the air as she giggles to her heart's content. And I hear him repeating the same statement over and over. “Cancer came for the wrong girl.”
It hit me like a ton of bricks. Here I am, upset about a stupid game, when this young girl sitting nearby is fighting for her life. This was a wake-up call for me, a reminder that the outcome of a game doesn’t matter. People matter. Connections matter. Time with family matters.
Hearing that little girl’s giggles as her dad swung her around to the music made me smile. To see them so happy, knowing that the past months of their lives have been hell on earth. Hug your families a little more often. Reach out to people you have lost your connection with. Realize that there is more to life than a stupid game.
Podcast 🎙️ Corner
I love making podcasts. In Podcast Corner, you get a single place to see what I’ve put out this week. Sometimes, they are my podcasts. Other times, they are podcasts that have caught my attention.
Jason Nelson -- Three Pillars of Threat Modeling Success: Consistency, Repeatability, and Efficacy (Audio only; YouTube)
Jason Nelson shares insights on establishing successful threat modeling programs in data-intensive industries like finance and healthcare.
Jason presents his three main pillars when establishing a threat modeling program: consistency, repeatability, and efficacy.
How I Learned to Stop Worrying and Love the AI (Audio only; YouTube)
The episode digs into how artificial intelligence is transforming coding practices and application security, focusing on the rise of AI-generated code and copy-pasted snippets and what they mean for code quality and security.
The discussion then turns to the future of software security tools in an AI-driven landscape: how much to trust AI with complex security policies, and how to balance innovation with caution so that AI enhances rather than compromises application security.
The next episode is coming out next week.
Pictures are Fun
Uncle Sam wants you to watch out for SQLi.
Where to find Chris? 🌎
Webinar: Building a Successful Security Champions Program, April 11, 2024, Noon US/Eastern; sign up.
Webinar: AppSec Unbounded, “Embrace Secure and Privacy by Design,” April 18, 2024; sign up.
BSides SF, May 4-5, 2024
I’ll be hanging out at the Devici booth during the event.
RSA, San Francisco, May 6 - 9, 2024
Speaking: The Year of Threat Modeling: Secure and Privacy by Design Converge (May 8, 14:25 Pacific)
Learning Lab: Threat Modeling Championship: Breaker vs. Builder (May 8, 08:30 Pacific) (This will fill up FAST)
I'm hanging out at the Devici booth at the Startup Expo for the rest of the time!
🤔 Have questions, comments, or feedback? I'd love to hear from you!
🔥 Reasonable AppSec is brought to you by Kerr Ventures.
🤝 Want to partner with Reasonable AppSec? Reach out, and let’s chat.