Navigating AI in application security: Trust, pitfalls, and human oversight

Artificial Intelligence (AI) has transitioned from the realms of 1980s sci-fi movies like “Terminator” and “2001: A Space Odyssey” to the practical tools we encounter daily, such as ChatGPT. Despite the excitement surrounding AI, misconceptions about its capabilities and limitations remain widespread, according to HCL Technologies.

This is particularly evident in application security, where AI tools are increasingly employed to write and evaluate code. It is crucial to establish criteria for trusting the outputs of AI in this context and to identify areas where human intervention remains necessary.

These topics are discussed at length in “Discerning reality from the hype around AI,” a recent article by David Rubinstein in SD Times. Rubinstein interviews three application security experts from HCLSoftware to better understand the realities of AI’s role in secure coding and to learn about the company’s approach to AI.

Kristofer Duer, lead cognitive researcher at HCLSoftware, contends that, “[AI] doesn’t have discernment yet… What it can do well is pattern matching; it can pluck out the commonalities in collections of data.” Organisations are using generative AI and large language models to match patterns and make vast amounts of data more easily consumable by humans. ChatGPT, for instance, can now be used to write code and identify security issues in code it is given to review.
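As a rough illustration of the kind of review the article describes, the sketch below asks a general-purpose LLM to flag security issues in a small code snippet. It is a minimal sketch, not part of HCLSoftware’s tooling: the OpenAI Python SDK, the model name, the prompt wording, and the deliberately vulnerable snippet are all assumptions made for demonstration.

```python
# Minimal sketch: asking a general-purpose LLM to flag security issues in a snippet.
# Assumptions: the OpenAI Python SDK (openai>=1.0) is installed, OPENAI_API_KEY is set,
# and the model name "gpt-4o-mini" is available. This does not reflect HCL AppScan internals.
from openai import OpenAI

client = OpenAI()

SNIPPET = '''
import sqlite3

def find_user(conn, username):
    # String formatting here makes the query injectable.
    cursor = conn.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cursor.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are an application security reviewer."},
        {"role": "user", "content": f"List any security issues in this code:\n{SNIPPET}"},
    ],
)

print(response.choices[0].message.content)
```

A reviewer would still need to verify whatever the model reports; as the experts note below, the output is pattern matching, not discernment.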

Navigating trust in AI: The confidence dilemma

In the realm of AI, trust is paramount. Yet, as AI excels at identifying problems, a significant concern arises: when AI errs, it does so with unwavering confidence. As Duer puts it, “… when ChatGPT is wrong, it’s confident about being wrong.” This over-reliance on generative AI by developers is a real concern for Colin Bell, the HCL AppScan CTO at HCLSoftware. As more developers use tools like Meta’s Code Llama and GitHub’s Copilot to build applications, far more code is being written without its security ever being questioned. Bell cautions that “…AI is probably creating more work for application security, because there’s more code getting generated.”

Everyone interviewed agreed that there continues to be a real need for humans to audit the code throughout the software development lifecycle. While AI can assist in writing code, one of its more impactful security uses is in pointing out areas where human security experts should focus their time and attention. This assistance can result in significant time and resource savings.

AI and HCL AppScan

The two AI processes HCLSoftware uses in its HCL AppScan portfolio were developed over many years with this goal in mind. Intelligent Finding Analytics (IFA) limits the number of findings presented to the user, while Intelligent Code Analytics (ICA) helps the user determine the security-relevant information of methods and APIs.

IFA, for example, has been trained to look at AppSec scan results with the same criteria as a security expert and ignore results that a human tester would find irrelevant (results that don’t represent real risk to the application). In this way, the number of findings is dramatically reduced by the AI so that humans can focus on triaging only the most critical vulnerabilities. Duer said, “[IFA] automatically saves real humans countless hours of work. In one of our more famous examples, we took an assessment with over 400,000 findings down to roughly 400 a human would need to review.”
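To make the triage idea concrete, the sketch below filters a list of scan findings down to those a reviewer plausibly needs to see. The Finding fields, the confidence threshold, and the filtering rule are illustrative assumptions only; IFA’s actual criteria are learned from expert triage decisions rather than fixed thresholds.

```python
# Illustrative sketch of findings triage: keep only findings likely to represent
# real risk so humans review a short list instead of the raw scan output.
# The data model and thresholds are assumptions, not HCL AppScan's IFA logic.
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str          # e.g. "SQL Injection"
    severity: str      # "high" | "medium" | "low"
    confidence: float  # scanner's confidence that the trace is exploitable, 0..1
    trace: str         # source-to-sink path reported by the scanner

def triage(findings, min_confidence=0.8):
    """Return only high-severity, high-confidence findings for human review."""
    return [
        f for f in findings
        if f.severity == "high" and f.confidence >= min_confidence
    ]

scan_results = [
    Finding("SQL Injection", "high", 0.95, "request.args -> db.execute"),
    Finding("Hardcoded Password", "medium", 0.60, "config.py:12"),
    Finding("Path Traversal", "high", 0.40, "open(user_input)"),
]

for finding in triage(scan_results):
    print(f"Review: {finding.rule} ({finding.trace})")
```

It is this kind of aggressive, expert-informed filtering that allows an assessment with hundreds of thousands of raw findings to be reduced to the few hundred worth a human’s time.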

Auto-remediation: The holy grail

Significant inroads are also being made into using AI to fix vulnerabilities in code, known as auto-remediation. Rob Cuddy, customer experience executive at HCLSoftware, is excited about the possibilities this technology presents but expressed concerns over liability that mirrored his colleagues’ broader trust issues. “Let’s say you’re an auto-remediation vendor, and you’re supplying fixes and recommendations, and now someone adopts those into their code, and it’s breached. Whose fault is it?”

Much of this conversation about AI and trust reflects a broader shift organisations are making towards application security posture management (ASPM). The central question in ASPM is how to manage risk most effectively across the entire software landscape, and the future implementation of AI will be shaped by how well it balances efficiency with trust within that risk-management model.

Read the full blog from HCL Technologies here.
