AutoPatchBench: Meta’s new way to test AI bug fixing tools

AutoPatchBench is a new benchmark that tests how well AI tools can fix code bugs. It focuses on C and C++ vulnerabilities found through fuzzing. The benchmark includes 136 real bugs and their verified fixes, taken from the ARVO dataset.

[Figure: Patch generation flowchart]

CyberSecEval 4

AutoPatchBench is part of Meta's CyberSecEval 4, a benchmark suite designed to objectively evaluate and compare LLM-based auto-patching agents for vulnerabilities specifically identified via fuzzing, a widely used method of automatically discovering security bugs in software.
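To make the evaluation idea concrete, here is a minimal sketch of how a fuzzing-based patch benchmark of this kind might validate a candidate fix: apply the generated patch, rebuild the target, and re-run the original crashing input, counting the patch as plausible only if the crash no longer reproduces. The commands, paths, and helper names below are illustrative assumptions, not AutoPatchBench's actual harness.

```python
# Hypothetical sketch of a fuzz-crash patch-validation loop.
# All paths, build commands, and function names are assumptions
# for illustration; they are not AutoPatchBench's real interface.
import subprocess
from pathlib import Path


def apply_patch(repo: Path, patch_file: Path) -> bool:
    """Apply an LLM-generated patch to the vulnerable checkout."""
    result = subprocess.run(
        ["git", "apply", str(patch_file)], cwd=repo, capture_output=True
    )
    return result.returncode == 0


def crash_reproduces(fuzz_target: Path, crashing_input: Path) -> bool:
    """Re-run the fuzzer-found input; non-zero exit means it still crashes."""
    try:
        result = subprocess.run(
            [str(fuzz_target), str(crashing_input)],
            capture_output=True,
            timeout=60,
        )
    except subprocess.TimeoutExpired:
        return True  # treat a hang as an unresolved failure
    return result.returncode != 0


def evaluate_candidate(repo: Path, patch_file: Path, crashing_input: Path) -> bool:
    """A patch passes only if it applies, builds, and removes the crash."""
    if not apply_patch(repo, patch_file):
        return False
    build = subprocess.run(["make", "-C", str(repo)], capture_output=True)
    if build.returncode != 0:
        return False
    return not crash_reproduces(repo / "fuzz_target", crashing_input)
```

A real harness would also need to check that the patch preserves intended behavior (for example, by running the project's test suite), since merely silencing the crash is not the same as fixing the underlying vulnerability.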
