Picture this: A French-speaking security researcher finds a critical vulnerability in a major U.S. retailer’s mobile app. They draft an email warning but, before notifying the company, run it through an AI chatbot to polish the English.
Now imagine an attacker has been prowling the same large language model app for sensitive information. Using carefully crafted prompts, this bad actor goads the generative AI into sharing technical details of the French researcher’s submission before the retailer can patch the flaw. Next thing you know, there’s a cybersecurity breach.
That turn of events isn’t far-fetched: The OWASP Foundation has released a top 10 list of vulnerabilities affecting large language models, and Sensitive Information Disclosure made the cut. “LLM applications can inadvertently disclose sensitive information, proprietary algorithms, or confidential data, leading to unauthorized access, intellectual property theft, and privacy breaches,” OWASP said, adding that one possible attack scenario is “crafted prompts used to bypass input filters and reveal sensitive data.”
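To make that attack scenario a bit more concrete, the sketch below shows one narrow layer of defense: an output-side check that redacts sensitive-looking fragments from a model response before it reaches the user. The patterns and function name are illustrative assumptions rather than a prescribed control, and a filter like this is only one layer; limiting what sensitive data reaches the model in the first place matters just as much.

```python
import re

# Illustrative patterns only; a real application would pair this kind of check with
# access controls, per-tenant data isolation and dedicated DLP tooling.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bCVE-\d{4}-\d{4,7}\b"),     # vulnerability identifiers
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credentials pasted into earlier prompts
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),        # possible payment card numbers
]

def guard_llm_response(response_text: str) -> str:
    """Redact sensitive-looking fragments from a model response before returning it."""
    for pattern in SENSITIVE_PATTERNS:
        response_text = pattern.sub("[REDACTED]", response_text)
    return response_text
```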
As field CISO for Synack, I have seen firsthand how generative AI is here to stay, risks notwithstanding. AI technology is crucial to maintaining competitiveness across a range of industries. Security is no exception.
But as the example above shows, generative AI can also be a jackpot for threat actors. Striking a balance between embracing the technology and adding safeguards will be key to avoiding breaches. The board is watching: according to a recent Proofpoint survey of 650 board members around the world, 59% view tools like ChatGPT as a security risk.
Organizations leveraging AI to enhance their capabilities should pause and ask a few important questions. How are we ensuring the AI we use is not making us vulnerable? How are we hardening AI infrastructure and addressing privacy concerns? How are we identifying any complex or security-sensitive tasks too delicate to assign to AI?
As with most technologies, how we train our people will determine our success or failure. Are we training people on when it is acceptable to use a public AI engine, and when the content requires a private one? Are we developing “sanitization scripts” and other processes so that sensitive information is removed from submissions? Are we training our people to use those scripts, and testing how accurately they follow that and other processes? Are managers being trained to ask their teams about their use of AI, to support them and to keep them within policy?
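For illustration, a sanitization script could start as simply as the Python sketch below, which strips common sensitive patterns from text before it is pasted into a public AI engine. The specific rules, the assumed internal domain and the function names are placeholders; each organization would tune them to its own data and test them as part of the training described above.

```python
import re

# Illustrative redaction rules; a real sanitization script would be tuned to the
# organization's own data types (customer IDs, internal hostnames, project codenames).
REDACTIONS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ip_addr": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "host":    re.compile(r"\b[\w-]+\.internal\.example\.com\b"),  # assumed internal domain
    "secret":  re.compile(r"(?i)(password|token|secret)\s*[:=]\s*\S+"),
}

def sanitize(text: str) -> str:
    """Strip sensitive details from text before it is pasted into a public AI engine."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

if __name__ == "__main__":
    draft = "Our scanner at 10.12.0.7 found an auth bypass; contact jane.doe@example.com, token=abc123"
    print(sanitize(draft))
```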
The answers could determine whether your organization effectively capitalizes on the AI frenzy or makes headlines as the victim of uniquely AI-driven cyber vulnerabilities.
AI’s Potential Comes with Pitfalls
It is easy to see why organizations are so enthusiastic about AI. A recent GitLab survey found that nine in ten DevSecOps teams are using AI in software development or plan to use it. Senior policymakers and intelligence officials were buzzing about the technology at the Billington Cybersecurity Summit earlier this month, touting its transformational potential (while warning that threat actors are using it, too). The U.S. federal government maintains a running list of agencies’ AI use cases, from identifying high-risk Social Security claims to helping cyber analysts better understand anomalies and potential threats via probabilistic models.
In the security testing space, AI can be used to automate data collection for reconnaissance, enhance scanning by improving the accuracy of automated tools and reduce noise by adding intelligence from outside sources like social media. AI can also help testers identify the best course of action for exploiting a target, and generate high-quality reports so security researchers can spend more time testing. Finally, AI techniques can be used to analyze results and, with proper privacy safeguards in place, apply the lessons of exploitable software flaws found at one organization to others so they can all be fixed. And that is just the tip of the iceberg for how AI can help overtaxed security teams improve their organization’s cyber posture.
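As one concrete illustration of the report-generation point, here is a minimal sketch of how a structured finding might be handed to a model for a first draft. The Finding fields and the llm_complete callable are hypothetical stand-ins rather than any particular vendor’s API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    title: str
    severity: str
    affected_asset: str
    reproduction_steps: str

def draft_report_section(finding: Finding, llm_complete: Callable[[str], str]) -> str:
    """Ask a model to turn a structured finding into a first-draft report section.

    `llm_complete` is a hypothetical prompt-in, text-out callable; swap in whatever
    model client the organization has approved, ideally a private engine.
    """
    prompt = (
        "Write a concise vulnerability report section with Impact and Remediation headings.\n"
        f"Title: {finding.title}\n"
        f"Severity: {finding.severity}\n"
        f"Affected asset: {finding.affected_asset}\n"
        f"Reproduction steps: {finding.reproduction_steps}\n"
    )
    return llm_complete(prompt)
```

A researcher would still review and correct the draft before it goes anywhere, and sensitive details would go only to an approved, private engine.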
That said, AI cannot do everything. Even powerful generative AI platforms like ChatGPT struggle when faced with abstract problems. Their capacity to write code is convenient but still error-prone: a recent Stanford University study found that participants using an AI assistant wrote less secure code than their manual-only counterparts. The creativity of human intelligence should still come into play when building secure software and testing it for vulnerabilities.
Cyber defenders will have to evolve to incorporate AI and keep pace with attackers who are already using it. By striking the right balance between a human-led and AI-driven approach, organizations stand the best chance of realizing AI’s enormous potential while steering clear of alarming new vulnerabilities.
Synack is the title sponsor of the RH-ISAC Cyber Intelligence Summit, taking place on October 2 – 4 in Dallas, TX. For more information or to register for the Summit, visit https://summit.rhisac.org/.
About Synack
Synack’s premier on-demand security testing platform harnesses a talented, vetted community of security researchers and smart technology to deliver continuous penetration testing and vulnerability management, with actionable results. We are committed to making the world more secure by closing the cybersecurity skills gap, giving organizations on-demand access to the most trusted security researchers in the world. Headquartered in Silicon Valley with regional teams around the world, Synack protects federal agencies, DoD classified assets and a growing list of Global 2000 customers, uncovering over 14,000 vulnerabilities for clients in 2022 alone. For more information, please visit www.synack.com.