Reasonable 🔐AppSec #47 - Trustworthy AI, Five Security Articles, and Podcast Corner

A review of application security happenings and industry news from Chris Romeo.

Hey there,

In this week’s issue, please enjoy the following:

  • Five security articles 📰 that are worth YOUR time

  • Featured focus: Trustworthy AI

  • Podcast 🎙️ Corner

  • Where to find Chris? 🌎

Five Security Articles 📰 that Are Worth YOUR Time

  1. When the “safe” is bad and the “unsafe” is safe — Josh Grossman from Bounce Security casts SQLi in a different light, arguing that the apparent safety (or danger) of certain SQL query functions in the Prisma ORM is deceptive. Using a LinkedIn poll and expert insights, he shows that a function widely believed to be insecure is actually safe thanks to JavaScript’s tagged templates (see the first sketch after this list). Josh’s post highlights how hard it is to give accurate security advice in product development without an in-depth understanding of language-specific features and secure coding practices. [This proves how deeply we in AppSec must understand the languages we advise on. Keep refining your coding chops; your developers need it!]

  2. AI Hallucinated Packages Fool Unsuspecting Developers — AI tools like GPT-3.5-Turbo and GPT-4 can mislead developers into using non-existent "hallucinated" software packages, potentially introducing security vulnerabilities. The danger lies in relying solely on AI recommendations without cross-verifying that a suggested package exists and is safe before implementing it (see the second sketch after this list). [Every conversation I have about AI ends in a debate on what trustworthy AI looks like. Trustworthy AI does not hallucinate package names, among many other things.]

  3. The Road to Resilience: Tackling Resistance in the Quest for Product Security (Part 7 - Fear of Negative Impact) — Product security architects face challenges due to fears about security measures negatively affecting product ecosystems, like performance, costs, and user experience. The author offers strategies to balance security with product efficiency, user-centric design, and cost management, advocating for integrating security early in development to support safety and innovation. [Classic tension between dev and sec about negative impact — this is an area we can continue to grow within.]

  4. GitHub Developers Hit in Complex Supply Chain Cyberattack — A complex cyberattack targeting GitHub developers was launched. Attackers utilized sophisticated methods like hijacking GitHub accounts and distributing malicious Python packages to inject code and steal sensitive data. This operation affected individual developers and members of the Top.gg GitHub organization, using typosquatting and malicious commits to expand the attackers' reach and credibility. [A coordinated and well-resourced set of attackers can do anything.]

  5. Strengthen your security posture using Cortex LLM Functions — Jacob Kaluzny describes Snowflake's integration of AI into its security processes, mainly through the use of Cortex LLM Functions within its Product Security team. By leveraging machine learning and large language models, Snowflake processes substantial engineering data to enhance security measures, demonstrating AI's capability to conduct in-depth security analysis and manage risk assessments more efficiently (see the third sketch after this list). [Jacob Kaluzny from Snowflake is brilliant. His work in automating AppSec is best of breed, and now he’s explaining how to use LLMs to enhance the work of your ProdSec team.]
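To make the tagged-template point from the first article concrete, here is a minimal sketch of my own (not code from Josh’s post). It assumes a Prisma schema with a User model and an email column:

```typescript
// Contrast of Prisma's two raw-query APIs. $queryRaw looks like string
// interpolation but is a tagged template, so values travel as bound
// parameters; $queryRawUnsafe takes a plain string and is injectable if
// you concatenate user input into it.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function findUser(email: string) {
  // Safe: Prisma receives the literal parts and the value separately and
  // sends `email` as a parameter, despite the "interpolated" appearance.
  const safe = await prisma.$queryRaw`SELECT * FROM "User" WHERE email = ${email}`;

  // Unsafe: classic SQL injection if `email` contains attacker input.
  const unsafe = await prisma.$queryRawUnsafe(
    `SELECT * FROM "User" WHERE email = '${email}'`
  );

  return { safe, unsafe };
}
```

The "unsafe"-sounding function can still be used safely with placeholders (for example, prisma.$queryRawUnsafe('SELECT * FROM "User" WHERE email = $1', email) on Postgres), which is the kind of language-specific nuance Josh’s post is about.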
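For the hallucinated-package article, here is a minimal sketch of the cross-verification it calls for, assuming Node 18+ (for the built-in fetch) and the public npm registry endpoint; the package name in the usage line is hypothetical:

```typescript
// Check that an AI-suggested dependency actually exists on npm before
// installing it. A 404 from the registry is a strong hallucination signal.
async function packageExists(name: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  return res.ok;
}

async function vetSuggestion(name: string): Promise<void> {
  if (!(await packageExists(name))) {
    console.warn(`"${name}" is not on npm; likely a hallucinated package, do not install.`);
    return;
  }
  // Existence is only the first gate: also review maintainers, download
  // counts, release history, and the source itself before depending on it.
  console.log(`"${name}" exists on npm; continue with a manual review.`);
}

// Hypothetical usage:
vetSuggestion("some-ai-suggested-package");
```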
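Finally, for the Cortex LLM Functions article, a minimal sketch (my illustration, not Jacob’s code) of invoking SNOWFLAKE.CORTEX.COMPLETE from Node.js with the snowflake-sdk package; the vuln_reports table, its columns, and the model choice are assumptions made for the example:

```typescript
// Run a Cortex LLM function over rows of (hypothetical) vulnerability
// reports, asking the model to summarize the impact of each one.
import snowflake from "snowflake-sdk";

const connection = snowflake.createConnection({
  account: process.env.SNOWFLAKE_ACCOUNT!,
  username: process.env.SNOWFLAKE_USER!,
  password: process.env.SNOWFLAKE_PASSWORD!,
});

connection.connect((err) => {
  if (err) throw err;
  connection.execute({
    // SNOWFLAKE.CORTEX.COMPLETE(model, prompt) runs the LLM per row;
    // table, column, and model names here are illustrative.
    sqlText: `
      SELECT id,
             SNOWFLAKE.CORTEX.COMPLETE(
               'llama3-8b',
               'Summarize the security impact in one sentence: ' || description
             ) AS impact_summary
      FROM vuln_reports
      LIMIT 10`,
    complete: (execErr, _stmt, rows) => {
      if (execErr) throw execErr;
      console.log(rows);
    },
  });
});
```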

Featured Focus: Trustworthy AI

In recent discussions about artificial intelligence, a recurring theme has emerged: what exactly does it mean for AI to be trustworthy? This question is not just academic; it is a fundamental concern that affects how we integrate AI systems into our daily lives.

Historically, the notion of trustworthiness in technology has often been linked to security. Consider Microsoft's initiative during its "security enlightenment" phase, which led to the formation of its "Trustworthy Computing" group. This team was dedicated to addressing vulnerabilities within the Windows operating system, highlighting an early recognition of the importance of security in building trust.

When we apply the concept of trustworthiness to AI, it takes on a dual meaning: the AI must perform reliably, operate securely, and protect the privacy of information; and the outputs it generates must be factual and accurate. This distinction is crucial, underscoring the expectation that AI should enhance, rather than undermine, our decision-making processes.

However, several challenges impede the realization of genuinely trustworthy AI:

  • Hallucination: AI systems generate false or nonexistent information. A notable example is the legal professional who used AI to draft briefs, only to discover it had included fictitious legal cases. Similarly, the security community has been misled by AI-generated names of non-existent software packages, which attackers then exploited to deceive developers.

  • Tampering: The integrity of an AI system is compromised when attackers can manipulate its training data, skewing the AI's outputs to favor the attacker's objectives and eroding trust in the system's impartiality and reliability.

  • Operational Transparency: The inner workings of large language models (LLMs) remain a mystery to most users. Without a deep understanding of the mechanisms driving AI responses, assessing or trusting their validity is challenging. For AI to be genuinely trustworthy, users need a clear comprehension of how AI decisions are made and why specific outputs are generated.

Addressing these issues requires a robust security and privacy model tailored to AI. This model should validate inputs to prevent tampering and ensure transparency in AI operations, enabling users to understand and predict AI behavior.

While AI and LLMs represent groundbreaking technologies, their current state suggests caution in their autonomous application in critical areas, such as controlling the power grid. The journey towards trustworthy AI involves more than technological advancements; it demands a comprehensive strategy encompassing security, transparency, and rigorous testing.

The quest for trustworthy AI is not just about improving AI—it's about ensuring it's safe, reliable, and understandable. Only through such measures can we hope to fully harness the potential of AI technologies without compromising on the principles of trust and integrity.

P.S. I wonder how Asimov’s Three Laws of Robotics will come into play as we expand into trustworthy AI.

Podcast 🎙️ Corner

I love making podcasts. In Podcast Corner, you get a single place to see what I’ve put out this week. Sometimes, they are my podcasts. Other times, they are podcasts that have caught my attention.

  • Application Security Podcast

    • Francesco Cipollone -- Application Security Posture Management and the Power of Working with the Business (Audio only; YouTube)

      • Francesco Cipollone, CEO of Phoenix Security, explains the concept of application security posture management (ASPM) and discusses its evolution from SIEM solutions.

      • He emphasizes the need for collaboration between security, development, and business teams to enhance software security management and highlights how ASPM facilitates actionable, risk-based security decisions.

  • Security Table

    • Nobody's Going To Mess with Our STRIDE (Audio only; YouTube)

      • The gang defends the STRIDE methodology against criticisms that it is outdated and inefficient, arguing for its continued relevance in threat modeling due to its origin, utility, and adaptability.

      • They address common misconceptions and the misuse of tools like the Microsoft Threat Modeling Tool, advocating for including diverse perspectives and broader principles in threat analysis.

  • Threat Modeling Podcast

    • The episode is scripted and undergoing editing now. It’s part two with Nandita, where we discuss what works, what doesn’t, and how tooling impacts privacy threat modeling.

Pictures are Fun

I asked DALL·E for a picture of a trustworthy AI with an on/off switch. It complied, but did it put a control in the center that’s giving me the finger?

Where to find Chris? 🌎

  • Webinar: AppSec Unbounded, “Embrace ‘Secure and Privacy by Design,’” April 18, 2024, sign up.

  • BSides SF, May 4-5, 2024

    • I’ll be hanging out at the Devici booth during the event.

  • RSA, San Francisco, May 6-9, 2024

    • Speaking: Secure and Privacy by Design Converge with Threat Modeling (May 8, 14:25 Pacific)

    • Learning Lab: Threat Modeling Championship: Breaker vs. Builder (May 8, 08:30 Pacific) (This will fill up FAST)

    • I'm hanging out at the Devici booth at the Startup Expo for the rest of the time!

🤔 Have questions, comments, or feedback? I'd love to hear from you!

🔥 Reasonable AppSec is brought to you by Kerr Ventures.

🤝 Want to partner with Reasonable AppSec? Reach out, and let’s chat.