
AI agents now have impressive reasoning capabilities. This raises an important question: how dangerous are these AI agents when it comes to identifying and exploiting web vulnerabilities?

We created CVE-bench to find out (I'm one of 16 contributors). To our knowledge, CVE-bench is the first benchmark that uses real-world web vulnerabilities to evaluate AI agents' cyberattack capabilities. We included 40 CVEs from NIST's National Vulnerability Database, focusing on critical-severity vulnerabilities (CVSS > 9.0).

To evaluate agents' attacks rigorously, we built isolated, containerized environments and identified 8 common attack vectors. Each vulnerability took 5-24 person-hours to set up and validate properly.
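For intuition, grading an attack against one of those isolated environments might look roughly like this. This is a minimal sketch, not the benchmark's actual harness: the `TargetState` fields, the vector names, and the sentinel values (`/tmp/pwned`, the `attacker` user) are all my invention for illustration.

```python
# Hypothetical sketch: the agent attacks a containerized target, and a
# grader checks which predefined attack vectors it achieved by inspecting
# the target's observable post-attack state.
from dataclasses import dataclass, field


@dataclass
class TargetState:
    """Observable state of the (hypothetical) target after an attack."""
    files: set = field(default_factory=set)   # paths present on the target
    users: set = field(default_factory=set)   # accounts present on the target
    crashed: bool = False                     # did the service go down?


# Each attack vector is a predicate over the target's post-attack state.
ATTACK_VECTORS = {
    "arbitrary_file_creation": lambda s: "/tmp/pwned" in s.files,
    "privilege_escalation":    lambda s: "attacker" in s.users,
    "denial_of_service":       lambda s: s.crashed,
}


def grade(state: TargetState) -> set:
    """Return the names of the attack vectors the agent achieved."""
    return {name for name, check in ATTACK_VECTORS.items() if check(state)}
```

The point of structuring it this way is that success is judged by the target's state, not by trusting the agent's own report of what it did.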

Our results show that current AI agents successfully exploit up to 13% of the vulnerabilities with no prior knowledge of the vulnerability (0-day setting). Given a brief description of the vulnerability (1-day setting), they exploit up to 25%. All agents used GPT-4o without specialized training.
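As a rough back-of-envelope (assuming the percentages are taken over all 40 CVEs), those rates translate to:

```python
# Convert the reported success rates into approximate CVE counts,
# assuming the denominator is the full set of 40 benchmark CVEs.
total_cves = 40
zero_day = round(0.13 * total_cves)  # ~5 CVEs exploited with no hint
one_day = round(0.25 * total_cves)   # ~10 CVEs exploited given a description
```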

The growing risk of AI misuse highlights the need for careful red-teaming. We hope CVE-bench can serve as a valuable tool for the community to assess the risks of emerging AI systems.

Paper: https://arxiv.org/abs/2503.17332

Code: https://github.com/uiuc-kang-lab/cve-benchmark

Medium: https://medium.com/@danieldkang/measuring-ai-agents-ability-...

Substack: https://ddkang.substack.com/p/measuring-ai-agents-ability-to...

cookiengineer 1 day ago

404? Is the repo still private?

Edit: Ah, the URL was wrong. It's cve-bench!

I couldn't find anything related to MCP servers or tools that were offered to the agents. Wouldn't the agents be much more likely to succeed if there were, e.g., a gdb server or an SQLi/HTTP server running for debugging purposes? That way the thinking process could succeed more easily, no?

[1] https://github.com/uiuc-kang-lab/cve-bench