AI security startup Dam Secure raises $4m seed round

Dam Secure, an AI security start-up focused on securing AI-generated code for enterprises, has raised fresh capital as organisations race to adopt generative AI in software development.

The company has secured $4m in a seed funding round led by Washington, D.C.-based cyber and AI investor Paladin Capital Group.

Founded by Patrick Collins and Simon Harloff, Dam Secure is building an AI-native platform designed to help organisations spot and manage security risks created when machine-written code is pushed into production at scale. The firm says traditional application security tools are struggling to keep up, particularly when AI systems generate code that works as intended but contains dangerous logic flaws that can slip past pattern-based scanners.

Dam Secure’s approach is centred on allowing teams to set security requirements in plain English and then enforce those rules across large code bases during the development process. The company positions this as a way to reduce noise, tighten controls and catch issues earlier, rather than relying on a “scan-and-patch” model once code is already deployed.

The start-up said it will use the new funding to accelerate product development and expand go-to-market efforts throughout 2026, as it looks to convert early demand into wider enterprise adoption. Paladin Capital managing director Mourad Yesayan is also set to join Dam Secure’s board, according to the announcement.

Dam Secure also highlighted the founders’ prior experience in application security and high-growth technology companies. Collins previously held executive roles at Zip Payments and Secure Code Warrior, and earlier built and exited mobile technology firm 5th Finger. Harloff has led product security teams at Zip Payments and Secure Code Warrior and is responsible for Dam Secure’s core technical architecture. The company added that it already has multiple customers across industry verticals.

Dam Secure co-founder Patrick Collins said, “Enterprises are rushing to adopt AI to increase developer velocity, but the volume of software being produced is overwhelming traditional application security processes. Existing security tools generate too much noise to work effectively in this new environment.”

He continued, “Industry research shows that, when not explicitly constrained, large language models introduce vulnerabilities in up to half of generated code. This creates dangerous ‘logic gaps’ that organizations are largely blind to. We are already seeing the cost of this in recent billion-dollar heists and widespread ecosystem attacks. These breaches don’t rely on classic bugs; they exploit valid but flawed logic that existing ‘scan-and-patch’ tools simply cannot see.”

Copyright © 2026 FinTech Global
