RSAC 2025: Cisco and Meta put open-source AI at the heart of threat defense

Blockonomics
RSAC 2025: Cisco and Meta put open-source AI at the heart of threat defense
Bitbuy


Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More

With cyberattacks accelerating at machine speed, open-source large language models (LLMs) have quickly become the infrastructure that enables startups and global cybersecurity leaders to develop and deploy adaptive, cost-effective defenses against threats that evolve faster than human analysts can respond.

Open-source LLMs’ initial advantages of faster time-to-market, greater adaptability and lower cost have created a scalable, secure foundation for delivering infrastructure. At last week’s RSAC 2025 conference, Cisco, Meta and ProjectDiscovery announced new open-source LLMs and a community-driven attack surface innovation that together define the future of open-source in cybersecurity.   

One of the key takeaways from this year’s RSAC is the shift in open-source LLMs to extend and strengthen infrastructure at scale.

Phemex

Open-source AI is on the verge of delivering what many cybersecurity leaders have called on for years, which is the ability of the many cybersecurity providers to join forces against increasingly complex threats. The vision of being collaborators in creating a unified, open-source LLM and infrastructure is a step closer, given the announcements at RSAC.

Cisco’s Chief Product Officer Jeetu Patel emphasized in his keynote, “The true enemy is not our competitor. It is actually the adversary. And we want to make sure that we can provide all kinds of tools and have the ecosystem band together so that we can actually collectively fight the adversary.”

Patel explained the urgency of taking on such a complex challenge, saying, “AI is fundamentally changing everything, and cybersecurity is at the heart of it all. We’re no longer dealing with human-scale threats; these attacks are occurring at machine scale.”

Cisco’s Foundation-sec-8B LLM defines a new era of open-source AI

Cisco’s newly established Foundation AI group originates from the company’s recent acquisition of Robust Intelligence. Foundation AI’s focus is on delivering domain-specific AI infrastructure tailored explicitly to cybersecurity applications, which are among the most challenging to solve. Built on Meta’s Llama 3.1 architecture, this 8-billion parameter, open-weight Large Language Model isn’t a retrofitted general-purpose AI. It was purpose-built, meticulously trained on a cybersecurity-specific dataset curated in-house by Cisco Foundation AI.

“By their nature, the problems in this charter are some of the most difficult ones in AI today. To make the technology accessible, we decided that most of the work we do in Foundation AI should be open. Open innovation allows for compounding effects across the industry, and it plays a particularly important role in the cybersecurity domain,” writes Yaron Singer, VP of AI and Security at Foundation.

With open-source anchoring Foundation AI, Cisco has designed an efficient architectural approach for cybersecurity providers who typically compete with each other, selling comparable solutions, to become collaborators in creating more unified, hardened defenses.

Singer writes, “Whether you’re embedding it into existing tools or building entirely new workflows, foundation-sec-8b adapts to your organization’s unique needs.” Cisco’s blog post announcing the model recommends that security teams apply foundation-sec-8b across the security lifecycle. Potential use cases Cisco recommends for the model include SOC acceleration, proactive threat defense, engineering enablement, AI-assisted code reviews, validating configurations and custom integration.

Foundation-sec-8B’s weights and tokenizer have been open-sourced under the permissive Apache 2.0 license on Hugging Face, allowing enterprise-level customization and deployment without vendor lock-in, maintaining compliance and privacy controls. Cisco’s blog also notes plans to open-source the training pipeline, further fostering community-driven innovation.

Cybersecurity is in the LLM’s DNA

Cisco chose to create a cybersecurity-specific model optimized for the needs of SOC, DevSecOps and large-scale security teams. Retrofitting an existing, generic AI model wouldn’t get them to their goal, so the Foundation AI team engineered its training using a large-scale, expansive and well-curated cybersecurity-specific dataset.

By taking a more precision-focused approach to building the model, the Foundation AI team was able to ensure that the model deeply understands real-world cyber threats, vulnerabilities and defensive strategies.

Key training datasets included the following:

Vulnerability Databases: Including detailed CVEs (Common Vulnerabilities and Exposures) and CWEs (Common Weakness Enumerations) to pinpoint known threats and weaknesses.

Threat Behavior Mappings: Structured from proven security frameworks such as MITRE ATT&CK, providing context on attacker methodologies and behaviors.

Threat Intelligence Reports: Comprehensive insights derived from global cybersecurity events and emerging threats.

Red-Team Playbooks: Tactical plans outlining real-world adversarial techniques and penetration strategies.

Real-World Incident Summaries: Documented analyses of cybersecurity breaches, incidents, and their mitigation paths.

Compliance and Security Guidelines: Established best practices from leading standards bodies, including the National Institute of Standards and Technology (NIST) frameworks and the Open Worldwide Application Security Project (OWASP) secure coding principles.

This tailored training regimen positions Foundation-sec-8B uniquely to excel at complex cybersecurity tasks, offering significantly enhanced accuracy, deeper contextual understanding and quicker threat response capabilities than general-purpose alternatives.

Benchmarking Foundation-sec-8B LLM

Cisco’s technical benchmarks show Foundation-sec-8B delivers cybersecurity performance comparable to significantly larger models:

BenchmarkFoundation-sec-8BLlama-3.1-8BLlama-3.1-70BCTI-MCQA67.3964.1468.23CTI-RCM75.2666.4372.66

By designing the foundation model to be cybersecurity-specific, Cisco is enabling SOC teams to gain greater efficiency with advanced threat analytics without having to pay high infrastructure costs to get it.

Cisco’s broader strategic vision, detailed in its blog, Foundation AI: Robust Intelligence for Cybersecurity, addresses common AI integration challenges, including limited domain alignment of general-purpose models, insufficient datasets and legacy system integration difficulties. Foundation-sec-8B is specifically designed to navigate these barriers, running efficiently on minimal hardware configurations, typically requiring just one or two Nvidia A100 GPUs.

Meta also underscored its open-source strategy at RSAC 2025, expanding its AI Defenders Suite to strengthen security across generative AI infrastructure. Their open-source toolkit now includes Llama Guard 4, a multimodal classifier detecting policy violations across text and images, improving compliance monitoring within AI workflows.

Also introduced is LlamaFirewall, an open-source, real-time security framework integrating modular capabilities that includes PromptGuard 2, which is used to detect prompt injections and jailbreak attempts. Also launched as part of LlamaFirewall are Agent Alignment Checks that monitor and protect AI agent decision-making processes along with CodeShield, which is designed to inspect generated code to identify and mitigate vulnerabilities.

Meta also enhanced Prompt Guard 2, offering two open-source variants that further strengthen the future of open-source AI-based infrastructure. They include a high-accuracy 86M-parameter model and a leaner, lower-latency 22M-parameter alternative optimized for minimal resource use.

Additionally, Meta launched the open-source benchmarking suite CyberSec Eval 4, which was developed in partnership with CrowdStrike. It features CyberSOC Eval, benchmarking AI effectiveness in realistic Security Operations Center (SOC) scenarios and AutoPatchBench, which is used to evaluate autonomous AI capabilities for identifying and fixing software vulnerabilities.

Meta also launched the Llama Defenders Program, which provides early access to open-AI-based security tools, including sensitive-document classifiers and audio threat detection. Private Processing is a privacy-first, on-device AI piloted within WhatsApp.

At RSAC 2025, ProjectDiscovery won the award for the “Most Innovative Startup” in the Innovation Sandbox, highlighting its commitment to open-source cybersecurity. Its flagship tool, Nuclei, is a customizable, open-source vulnerability scanner driven by a global community that rapidly identifies vulnerabilities across APIs, websites, cloud environments and networks.

Nuclei’s extensive YAML-based templating library includes over 11,000 detection patterns, 3,000 directly tied to specific CVEs, enabling real-time threat identification. Andy Cao, COO at ProjectDiscovery, emphasized open-source’s strategic importance, stating: “Winning the 20th annual RSAC Innovation Sandbox proves open-source models can succeed in cybersecurity. It reflects the power of our community-driven approach to democratizing security.”

ProjectDiscovery’s success aligns with Gartner’s 2024 Hype Cycle for Open-Source Software, which positions open-source AI and cybersecurity tools in the “Innovation Trigger” phase. Gartner recommends that organizations establish open-source program offices (OSPOs), adopt software bill-of-materials (SBOM) frameworks, and ensure regulatory compliance through effective governance practices.

Actionable insights for security leaders

Cisco’s Foundation-sec-8B, Meta’s expanded AI Defenders Suite and ProjectDiscovery’s Nuclei together demonstrated that cybersecurity innovation thrives most when openness, collaboration and specialized domain expertise align across company boundaries. These companies and others like them are setting the stage for any cybersecurity provider to be an active collaborator in creating cybersecurity defenses that deliver greater efficacy at lower costs.

As Patel emphasized during his keynote, “These aren’t fantasies. These are real-life examples that will be delivered because we now have bespoke security models that will be affordable for everyone. Better security efficacy is going to come at a fraction of the cost with state-of-the-art reasoning.”



Source link

fiverr

Be the first to comment

Leave a Reply

Your email address will not be published.


*