AI Boosts Software Safety with OSS-Fuzz | Turtles AI

AI Boosts Software Safety with OSS-Fuzz
Google finds critical vulnerabilities in code using advanced language models, overcoming the limitations of traditional fuzzing
Isabella V, 21 November 2024


Google has enhanced its OSS-Fuzz project with AI to improve software security. The introduction of advanced language models has enabled the discovery of vulnerabilities that are difficult to detect through traditional human fuzzing, with significant results such as the critical bug in OpenSSL.

Key points:

  • OSS-Fuzz uses AI to discover vulnerabilities in code.
  • A critical bug in OpenSSL, likely present for around 20 years, was found through AI fuzzing.
  • LLMs improve fuzzing coverage by testing more code than traditional methods.
  • Google aims for full automation of the process of finding and fixing vulnerabilities.

Google, with its OSS-Fuzz project, is transforming the search for software bugs by leveraging AI, particularly large language models (LLMs). OSS-Fuzz is an initiative that uses fuzzing techniques to identify vulnerabilities in open source code repositories. With the integration of AI, the tool has made significant strides in improving software security, uncovering vulnerabilities that traditional, human-written fuzz targets had missed.

One of OSS-Fuzz's most notable discoveries was a critical bug in the widely used OpenSSL library, a flaw that had likely been present for two decades and would have remained invisible to classical, human-developed fuzzing tools. The bug, identified as CVE-2024-9143, was reported in mid-September and fixed about a month later, highlighting the potential of AI to detect long-standing errors that escape human oversight. The Google team also cited a second example: a design flaw in cJSON that was found with the help of AI but had not been detected by conventional fuzzing tests. This demonstrates how AI can expand the coverage of security testing, allowing much larger portions of code to be analyzed than traditional approaches could manage.

OSS-Fuzz introduced the use of AI in August 2023, with the goal of refining the fuzzing phase, which essentially involves feeding random and unexpected data into software to detect crashes or malfunctions. Initially, the initiative focused on writing fuzz targets and handling compilation problems. The integration of AI has since taken a substantial step forward: language models can now handle the entire fuzzing workflow, including test generation, target execution, runtime error handling, and failure triage.
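The core idea of fuzzing described above can be illustrated with a minimal sketch. The parser and the fuzz loop here are hypothetical stand-ins, not OSS-Fuzz code: random byte strings are fed into a target function, and any input that triggers an unhandled error is recorded as a crash for later triage.

```python
import random

def fragile_parser(data: bytes) -> int:
    # Hypothetical toy parser standing in for real library code.
    # It mishandles one particular byte pattern, just as real bugs
    # tend to hide in rarely exercised edge cases.
    if len(data) > 2 and data[0] == 0xFF:
        raise ValueError("unexpected header byte")  # the "crash" a fuzzer hunts for
    return len(data)

def fuzz(target, iterations=10_000, seed=0):
    # Minimal fuzz loop: generate random inputs, run the target,
    # and collect every input that raises an unhandled exception.
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

crashes = fuzz(fragile_parser)
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers such as libFuzzer add coverage feedback and input mutation on top of this loop; the LLM-assisted step in OSS-Fuzz concerns writing the target function itself, which is the part that previously required human effort.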
The ultimate goal, as stated by Google's security team, is to fully automate the process of finding vulnerabilities, including the automatic generation of patches to correct the flaws found. Recently, other AI-based initiatives, such as Protect AI's Vulnhuntr, have helped identify zero-day vulnerabilities in Python projects, confirming the effectiveness of these tools in addressing cybersecurity challenges. With the open-sourcing of its AI fuzzing framework in 2024, OSS-Fuzz has further evolved, increasing its analysis capabilities and enabling researchers and developers to improve code security through AI.

In an increasingly complex cyber threat landscape, the evolution of fuzzing tools will help identify vulnerabilities that are difficult to detect with traditional methods, representing a breakthrough in software security.