
Beyond Shift-Left: Rethinking AppSec Strategies in the Age of AI

By David Melamed

Updated March 26, 2025.

Artificial intelligence is transforming software development at a pace few could have imagined. Developers can now generate large codebases in minutes, automate complex workflows, and build sophisticated applications with minimal manual input. This acceleration in productivity, however, is accompanied by a troubling surge in security vulnerabilities. With platforms like Cursor enabling full-stack app creation at lightning speed, and GenAI powering the rapid generation of DevOps templates, developers are embracing AI for tasks that once required significant manual effort. The industry is also seeing a surge in custom AI models designed to handle everything from backend logic to UI components, fueling a rapid increase in code generation. This level of innovation, while driving efficiency, also raises important questions about the security of the resulting code.

Beyond creating full-stack applications and automating infrastructure templates, AI is increasingly being used to write code for complex algorithms, handle edge-case testing, and even develop entire microservices that fit seamlessly into existing architectures. Developers are relying on AI to not only generate code but also to suggest improvements, refactor legacy systems, and fill in boilerplate code across multiple programming languages. While these trends push the boundaries of what’s possible in software development, they also mean that vast amounts of new code—much of it unvetted for security flaws—are being introduced into production environments at an unprecedented pace.

Why Is AI Creating More Vulnerabilities?

Early AI models were trained primarily on open-source code repositories. While this approach was effective at producing usable code quickly, it also carried over the same security weaknesses found in the training data. The result: AI models generated code with vulnerabilities baked in. Even as newer models became more advanced, the pattern didn’t fully change. For instance, when one organization analyzed AI-generated code, they found that unless explicitly prompted, the majority of outputs contained issues that would fail standard security checks.
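
To make this concrete, here is a hypothetical illustration of the kind of pattern that shows up repeatedly in open-source training data and, by extension, in generated output: SQL queries built by string interpolation instead of parameterization. The function and table names are invented for the example.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Pattern common in training data: the username is interpolated directly
    # into the SQL string, so input like "x' OR '1'='1" changes the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the username strictly as data,
    # which is what a standard SAST rule or security review would require.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```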

A case study with a leading generative AI model showed how carefully crafted prompts can drastically reduce vulnerabilities. Claude Sonnet 3.7, when directed to produce secure code, yielded output with almost no security flaws. However, this approach is not widely adopted: few developers consistently write prompts that prioritize security, leaving the vast majority of AI-generated code unguarded by design.
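
As a minimal sketch of what prompt-level hardening can look like, the snippet below prepends security requirements to every code-generation request via the Anthropic Python SDK. The model identifier and the wording of the system prompt are assumptions for illustration, not a vetted security policy.

```python
import anthropic

# Security requirements prepended to every code-generation request.
# The wording below is illustrative only, not an official or vetted policy.
SECURE_CODING_SYSTEM_PROMPT = (
    "You are a coding assistant. All generated code must use parameterized "
    "queries, validate external input, avoid hard-coded secrets, and follow "
    "the principle of least privilege. Flag any requirement you cannot "
    "satisfy securely instead of guessing."
)

def generate_code(task_description: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # assumed identifier for Sonnet 3.7
        max_tokens=2048,
        system=SECURE_CODING_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": task_description}],
    )
    return message.content[0].text

if __name__ == "__main__":
    print(generate_code("Write a Flask endpoint that looks up a user by email."))
```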

On top of this, even when AI is not directly responsible for generating vulnerable code, the accelerated pace at which developers and DevOps teams create and deploy new components and services is itself a weak spot. Hastily shipped infrastructure can expose ports that should never be exposed, for example, or deviate from organizational security policies in ways that put the organization at risk, and the resulting findings are inundating AppSec and CloudSec engineers.
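
As a hypothetical illustration of the kind of guardrail that can catch this class of drift before deployment, the sketch below checks declared ingress rules against an organizational allow-list of internet-facing ports. The rule format and allow-list are invented for the example.

```python
# Hypothetical guardrail: flag ingress rules that open unexpected ports to the internet.
# The rule format and allow-list below are invented for illustration.
ALLOWED_PUBLIC_PORTS = {443}  # org policy: only HTTPS may face the internet

ingress_rules = [
    {"service": "web", "port": 443, "cidr": "0.0.0.0/0"},
    {"service": "db", "port": 5432, "cidr": "0.0.0.0/0"},    # generated template left this open
    {"service": "admin", "port": 22, "cidr": "10.0.0.0/8"},  # internal only, fine
]

violations = [
    rule for rule in ingress_rules
    if rule["cidr"] == "0.0.0.0/0" and rule["port"] not in ALLOWED_PUBLIC_PORTS
]

for rule in violations:
    print(f"POLICY VIOLATION: {rule['service']} exposes port {rule['port']} to the internet")
```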

As the adoption of AI continues to grow, the research into its effects on engineering practices is also expanding. Studies are delving into productivity gains, security implications, and the challenges faced by development and security teams. Below, we examine some of the latest findings that shed light on how AI is reshaping the engineering landscape as we know it.

Code Production & AI Adoption

  • Developer Productivity: According to GitHub's 2023 Octoverse report, developers using GitHub Copilot showed a 55% increase in task completion speed, with 74% reporting feeling more focused on satisfying work rather than repetitive tasks [GitHub Octoverse, 2023].

  • Code Generation: McKinsey's research found that developers using generative AI tools completed programming tasks 25-45% faster, depending on task complexity and developer experience level [McKinsey Digital, "The economic potential of generative AI", 2023].

Vulnerability Density & AI-Generated Code

  • Security Assessment: A study by Stanford researchers found that developers using AI coding assistants introduced 40% more security vulnerabilities in code compared to developers coding without AI assistance [Sandoval et al., "Security Implications of Large Language Model Code Assistants", ACM CCS, 2023].

  • Vulnerability Types: Research from Carnegie Mellon University identified that AI-generated code contained 1.8x more API misuse errors and 2.4x more logical errors compared to human-written code [CERT Coordination Center, "Analysis of AI Code Generation Security", 2023].

Security Team Capacity & Scaling Challenges

  • AppSec Team Growth: Gartner's security survey found that while code production has increased by 31% in organizations adopting AI tools, AppSec team headcount has only grown by 4.7% on average [Gartner, "Security and Risk Management Trends", 2023].

  • Remediation Backlog: The Ponemon Institute reported that organizations are experiencing a 37% increase in security vulnerability backlogs, with the average time to remediation increasing from 38 days to 52 days in enterprises using AI coding tools [Ponemon Institute, "The State of Application Security", 2023].

Shift-Left Security Effectiveness

  • Early Detection: A SANS Institute survey found that shift-left security practices reduced the average time to detect vulnerabilities by 29%, but also revealed that 67% of organizations could not scale their security reviews to match increased development velocity [SANS Institute, "DevSecOps Practices Survey", 2023].

  • Security Debt: According to Veracode's State of Software Security report, organizations implementing shift-left security still see 24% of vulnerabilities remain unaddressed after 290 days, up from 17% in previous years due to increased code production [Veracode, "State of Software Security", 2023].

And these are just some examples of how GenAI and coding copilots are affecting engineering practices, delivery speeds, security and quality.

The Rise of AI-Driven Threats

As adoption grows among developers, they are not alone in exploring and leveraging this technology for productivity gains. Malicious actors are also channeling AI to accelerate exploitative activities, creating an even more complex security landscape. A recent report from Gartner highlighted that threat actors are now using generative models to craft new exploits, automate phishing attacks, and refine malware at a pace that traditional security measures can't match. AI's ability to iterate rapidly on attack techniques has shortened the lifecycle from discovery to deployment, leaving defensive teams struggling to adapt.

Shift-Left Is No Longer Enough

For years, shifting security left—catching issues early in the development cycle—has been a cornerstone of application security. While still necessary, it is no longer sufficient. The sheer volume of AI-generated code overwhelms even the most proactive shift-left efforts. AppSec teams simply can’t scale to address every alert generated by tools or every vulnerability flagged during code reviews.

The last mile of security, where issues must be prioritized and remediated, remains heavily manual. Tools can identify problems, but human engineers must decide which vulnerabilities matter most, what the root causes are, and how to fix them. Much of this work stays manual because even when the fix is known, such as upgrading a library, applying it still takes substantial effort, and test coverage is rarely reliable enough to validate the change with confidence.

Therefore, as code output continues to grow exponentially, the workload on AppSec teams increases. Yet the number of skilled AppSec professionals isn't growing at the same pace. This creates an unsustainable cycle in which teams fall further and further behind.

Why Tools Alone Aren’t Enough

Automated scanning tools and static analysis platforms are essential components of a security strategy, but they can only do so much. These tools can highlight known vulnerabilities, but they often generate false positives or fail to detect complex, novel issues. Most importantly, they can't contextualize findings or guide remediation. Even the tools that attempt to do so tend to offer generic fixes that don't always align with company policies or organizational engineering principles. This is where human expertise remains critical, and where the scalability issue becomes glaring.

One reason for this is that AppSec teams are still fundamentally limited by human effort. While shift-left practices and automated tooling ensure vulnerabilities are caught earlier, they don't inherently increase the capacity of security teams to address them. Manual reviews, remediation strategies, and policy enforcement still rely on skilled engineers. And as AI accelerates development speed, the backlog of issues grows faster than these teams can scale.

Rethinking AppSec in the Age of AI

To close the gap, AppSec teams need more than better tools; they need to fundamentally change how they operate. Embracing new approaches, such as intelligent triage systems and digital labor that automates low-level tasks, can help teams manage the deluge of alerts. These AI-enabled assistants can prioritize issues, suggest fixes, and free up engineers to focus on the most critical vulnerabilities. This is exactly what we're thinking about at Jit as we envision a future powered by AI and work out how to mitigate the threats that accompany its benefits and productivity gains.
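
As a rough sketch of what intelligent triage can look like, the snippet below ranks findings by a simple composite of severity, exploitability, and exposure so engineers see the riskiest items first. The scoring weights and finding fields are assumptions made for illustration, not Jit's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    cvss: float                 # base severity, 0-10
    exploit_available: bool     # is a public exploit known?
    internet_facing: bool       # is the affected asset reachable from the internet?
    fix_available: bool         # is a patched version or known remediation available?

def triage_score(f: Finding) -> float:
    # Illustrative weighting only: severity is the baseline, with boosts for
    # known exploits and internet exposure, plus a small boost when a fix
    # already exists so cheap wins surface earlier.
    score = f.cvss
    score += 3.0 if f.exploit_available else 0.0
    score += 2.0 if f.internet_facing else 0.0
    score += 0.5 if f.fix_available else 0.0
    return score

findings = [
    Finding("Outdated TLS library", 7.5, True, True, True),
    Finding("Verbose error messages", 4.3, False, True, True),
    Finding("Unused IAM role", 5.0, False, False, False),
]

for f in sorted(findings, key=triage_score, reverse=True):
    print(f"{triage_score(f):5.1f}  {f.title}")
```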

In this rapidly evolving landscape, AppSec teams must adopt a more dynamic approach. Combining the benefits of AI for routine security tasks with the strategic insight of skilled engineers will be crucial. By evolving their methods and scaling their efforts, security teams can better address the increasing volume and complexity of vulnerabilities introduced by AI-driven development. Only by making these changes can they hope to keep pace with the next wave of AI-enabled innovation.