Irregular
  • September 18, 2025

AI security firm Irregular has raised $80 million in a fresh funding round, led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. The deal values the company at $450 million, according to sources close to the matter.

The funding comes as the capabilities of frontier AI models advance rapidly, creating both new opportunities and significant security risks.

From Pattern Labs to Irregular: Building a Security Backbone for AI

Previously known as Pattern Labs, Irregular has quickly emerged as a leading player in the AI security landscape. The company has gained recognition for its work in evaluating vulnerabilities of advanced AI systems, including its contributions to security assessments for Claude 3.7 Sonnet and OpenAI’s o3 and o4-mini models.

One of Irregular’s most notable contributions is the SOLVE framework, which scores a model’s ability to detect vulnerabilities. SOLVE is widely used across the AI sector, underscoring Irregular’s influence in shaping how AI systems are tested for reliability and resilience.

Preparing for Emerging AI Risks

While Irregular has already played a critical role in analyzing current risks, the company is now setting its sights on emerging threats that could arise as AI models become more powerful.

Co-founder Omer Nevo explained that Irregular has built complex simulation environments where AI systems can take on the roles of both attacker and defender. These simulated ecosystems allow the team to test how new models behave under adversarial conditions, spotting vulnerabilities before they appear in real-world use cases.

“Whenever a new model is released, we can test where defenses hold up and where they collapse,” Nevo said. This proactive approach gives developers and enterprises an early-warning system for potential security flaws.
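The attacker-versus-defender setup Nevo describes can be illustrated with a toy sketch. Everything below is hypothetical for illustration only, not Irregular's actual system: the attack categories, function names, and scoring logic are invented to show the general shape of an adversarial red-team/blue-team loop.

```python
import random

random.seed(0)  # deterministic runs, purely for illustration

# Hypothetical catalog of weakness classes a simulated attacker model might probe.
ATTACK_SURFACE = ["prompt_injection", "data_exfiltration", "sandbox_escape"]

def attacker_move():
    """Simulated attacker picks an exploit class to attempt this round."""
    return random.choice(ATTACK_SURFACE)

def defender_blocks(attack, patched):
    """Defense holds only if this class of attack has been mitigated."""
    return attack in patched

def run_simulation(rounds, patched):
    """Tally where defenses hold up and where they collapse."""
    results = {"held": 0, "collapsed": []}
    for _ in range(rounds):
        attack = attacker_move()
        if defender_blocks(attack, patched):
            results["held"] += 1
        else:
            results["collapsed"].append(attack)
    return results

report = run_simulation(rounds=10, patched={"prompt_injection"})
print(f"defenses held in {report['held']}/10 rounds")
print("collapsed under:", sorted(set(report["collapsed"])))
```

The point of the loop mirrors the quote above: each new "model release" becomes a fresh set of attacker moves, and the report shows exactly which defense classes held and which collapsed.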

Why AI Security Is Becoming Critical

The funding comes amid growing concerns about the dual-use nature of AI. On one hand, advanced AI models can help identify and patch software vulnerabilities faster than ever. On the other, they also hold the potential to be exploited by malicious actors looking to uncover and weaponize those same weaknesses.

This tension has put AI security in the spotlight. Even leading labs like OpenAI have overhauled internal security measures in recent months, responding to concerns about corporate espionage and misuse of their models.

Co-founder Dan Lahav summed up the challenge: “If the goal of frontier labs is to create increasingly sophisticated and capable models, our mission is to secure those models. But it’s a moving target, which means there’s much more work to be done.”

A Growing Market for AI Safety and Security

Irregular’s latest funding round reflects the rising interest from investors in AI safety and security solutions. With leading venture firms like Sequoia and Redpoint Ventures backing the company, Irregular is well-positioned to expand its testing infrastructure and scale its solutions for enterprise clients.

Industry analysts suggest that the company’s focus on AI-on-AI interactions, where models interact with each other in complex systems, sets it apart from traditional cybersecurity firms. This specialized expertise could prove critical as AI adoption accelerates across industries.

What’s Next for Irregular

With $80 million in fresh capital, Irregular plans to double down on emergent risk detection and expand its suite of simulation environments. The company’s goal is to ensure that as AI models grow more capable, they remain secure, reliable, and resilient against misuse.

The founders believe this is just the beginning of a much larger challenge. As Lahav put it, “Soon, much of the economy will be powered by human-AI and AI-to-AI interactions. That transformation will break today’s security stack in multiple ways. Our job is to stay ahead of that curve.”

Conclusion

The race to build frontier AI models is heating up, but so too are the risks. With its expertise, tools, and fresh funding, Irregular is positioning itself as a critical safeguard in the AI ecosystem. As models become more advanced, companies like Irregular will play a key role in ensuring the safe and secure deployment of artificial intelligence.
