Jonathan Reimer · 6 min read

Why Static Code Analysis Doesn’t Belong in Your CI

With cybercrime on the rise, code quality and security are becoming more important than ever. Since modern applications commonly comprise hundreds of thousands of lines of code (LoC), it is crucial to apply an effective and precise method to detect potential vulnerabilities and bugs.

Static code analysis (SAST) is the most prominent measure that can easily be integrated into the software development lifecycle to actively identify such vulnerabilities. Its biggest advantage is that it gives developers instant feedback on their code, even before the code is executable. However, most tool providers and companies are increasingly integrating SAST into their CI/CD pipelines, which turns out to be far from sufficient.

The sheer number of false positives, the resulting manual effort, and the bugs that still go undetected limit this technology. This blog post explains the disadvantages of static code analysis when it is integrated into a CI/CD pipeline, and which alternative security testing approaches are available.


The Struggle With SAST

SAST, or Static Application Security Testing, has been around for decades. In SAST, the source code is scanned without actually being executed. The focus is on searching for suspicious patterns in control and data flow by using heuristics.

Code matching specific patterns that could indicate potential vulnerabilities is then presented to the user. Since SAST tools do not execute the code, they can be used at any stage of the software development process, ideally already in the IDE. This encourages a good coding style and prevents, for example, code smells.
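As a minimal sketch (not tied to any particular tool), the following C function shows the kind of pattern a typical SAST rule would flag: an unbounded `strcpy` into a fixed-size stack buffer, a classic indicator of a potential buffer overflow (CWE-120).

```c
#include <stdio.h>
#include <string.h>

// A pattern most SAST tools flag: strcpy() into a fixed-size buffer
// with no length check, a potential buffer overflow (CWE-120).
void greet(const char *name) {
  char buf[16];
  strcpy(buf, name);          // flagged: 'name' may be longer than 'buf'
  printf("Hello, %s!\n", buf);
}
```

A rule-based scanner matches this purely on the call pattern, without ever running the code.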

The fundamental disadvantage of static analysis is that it produces numerous false positives (warnings that do not actually involve vulnerabilities). Some of the industry’s best SAST tools are designed to have false-positive rates of around 5%. If we assume a common metric of 20 bugs per 1,000 lines of code (LoC), the number of potential bugs identified by SAST in an application with 1 million LoC is approximately 20,000.

Of these findings, we can typically expect 1,000 to be false positives (if our SAST tool is good). Developers try to avoid these false positives through intelligent rules. In turn, these rules often lead to false negatives (vulnerabilities that go undetected).
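To make this concrete, here is a hedged sketch of how such a false positive can arise: the same `strcpy` pattern from above, now guarded by a length check. The code is safe, but a purely pattern-based rule that does not track the surrounding data flow may still report it.

```c
#include <string.h>

// Safe variant: the length check guarantees that 'name' (including
// its terminating NUL) fits into 'buf'. A pattern-based SAST rule
// keyed only on the strcpy() call may nevertheless flag this line.
void greet_checked(const char *name) {
  char buf[16];
  if (strlen(name) < sizeof(buf)) {
    strcpy(buf, name);        // reachable only with inputs that fit
  }
}
```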

In practice, large projects can easily produce hundreds of thousands of warnings, and even toy examples can produce thousands. This leads to tremendous usability issues, and most developers and testers strongly dislike these tools.

A common idea is to outsource the examination of the warnings, which defeats the purpose of running the tools in-house. A number of SAST companies now offer heuristics to reduce false positives; however, these do not solve the key issues of SAST.

SAST leads to the following problems:

  • Manual effort: Using SAST requires a large amount of developer time to identify false positives and prioritize the remaining bugs by severity.

  • Time-consuming: Software projects can be delayed for days or even weeks by the introduction of SAST. Instead of reducing time to market, which is the basic idea behind shift-left testing, the inefficiency of SAST can hold up the entire process.

  • Less focus: Especially when SAST is integrated into the CI/CD pipeline, the nerve-wracking search for false positives can significantly reduce the focus and motivation of the dev team. The impact on developer productivity must be factored into the cost of implementing SAST.

  • Lack of security: Because of the time-consuming search for false positives, a real security vulnerability can quickly be dismissed as a false positive by a developer and thus remain in the application (“cry wolf”).

  • No real automation: SAST misses its purpose by ultimately relying on the judgment of a human being instead of creating real security through automation.

  • No cultural acceptance: All the reasons listed above mean that the integration of SAST into CI/CD is not widely accepted by developers. Usually, these measures are introduced “top-down”, but if you ask the actual users, the enthusiasm is very limited.


Given these issues, the question arises whether SAST alone is sufficient to increase the security of a product and the efficiency of the reviewing development teams. Many organizations choose SAST because of its simplicity and the feeling of having taken security and quality measures.

While there are definitely bugs and vulnerabilities that static analysis helps avoid, its main limitations are its effectiveness and its cultural acceptance. To secure large codebases at scale, another testing approach is required.


Fuzzing for the Win

In contrast to SAST, fuzzing always tests the application at runtime. Fuzzing feeds the application a series of inputs, which are purposefully mutated during the testing process. The fuzzer receives feedback about the code covered while executing each input. This way, fuzzing explores the program’s state space efficiently and discovers bugs hidden deep in the code.
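As an illustration, here is a minimal libFuzzer-style harness in C. The function under test, `parse_header`, is a toy invented for this sketch; it hides an off-by-one read that coverage-guided mutation will reach once the fuzzer learns the “FUZZ” magic bytes.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

// Toy function under test (invented for this example): contains an
// out-of-bounds read that only triggers for inputs starting with "FUZZ".
static int parse_header(const uint8_t *data, size_t size) {
  if (size < 4) return -1;
  if (memcmp(data, "FUZZ", 4) == 0) {
    return data[size];        // bug: reads one byte past the buffer
  }
  return 0;
}

// libFuzzer entry point: called once for every generated input.
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_header(data, size);
  return 0;
}
```

Built with `clang -g -fsanitize=fuzzer,address harness.c -o fuzz_parse`, the fuzzer mutates inputs, observes which branches each one covers, quickly synthesizes the “FUZZ” prefix, and AddressSanitizer reports the out-of-bounds read along with the exact input that triggered it.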

Technology leaders such as Google and Microsoft already use fuzzing to automatically test their code for vulnerabilities. For example, over 27,000 bugs have been found in Chrome and several open-source projects, and Google has stated that it finds around 80% of its bugs with modern fuzzing techniques. This clearly illustrates how effective fuzzing is at uncovering bugs and vulnerabilities.


Additionally, developers benefit from fuzzing in numerous ways:

  • No false positives: Since the application is tested at runtime, each finding implies at least unexpected behavior of the application. Fuzzing therefore produces virtually no false positives.

  • Saving developer time: Because of the highly automated approach, fuzzing requires little effort from developers and only calls for action when an actual vulnerability has been found.

  • Reproducibility: Crashing inputs are always reproducible (see the replay sketch after this list). This can be used both to verify the unexpected behavior and to validate a potential fix. Furthermore, these inputs can be added to existing QA/security processes to ensure that behavioral regressions are not introduced in future versions of the application.

  • Additional guidance: Bugs and vulnerabilities can be matched with all types of CWEs, which further supports prioritizing a bug’s impact on the application.

  • Prevent exploits: Fuzzing is widely used by both hackers and vulnerability researchers to detect attack vectors. Applied in the development process, fuzzing can be done preventively, and embarrassing exploits can be avoided.
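As promised above, here is a minimal replay sketch. LibFuzzer-style binaries already replay a saved input when you pass the crash file as an argument; the standalone driver below just makes that mechanism explicit. It assumes it is linked against a harness exporting `LLVMFuzzerTestOneInput` that was compiled without the fuzzer runtime (e.g. with plain `-fsanitize=address`), since the fuzzer runtime provides its own `main`.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

// Provided by the fuzz harness this driver is linked against.
extern int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size);

// Replays one saved input (e.g. a crash-<hash> file written by the
// fuzzer) so a finding can be verified and a fix validated.
int main(int argc, char **argv) {
  if (argc != 2) {
    fprintf(stderr, "usage: %s <input-file>\n", argv[0]);
    return 1;
  }
  FILE *f = fopen(argv[1], "rb");
  if (!f) { perror("fopen"); return 1; }
  fseek(f, 0, SEEK_END);
  long size = ftell(f);
  rewind(f);
  uint8_t *data = malloc(size > 0 ? (size_t)size : 1);
  if (fread(data, 1, (size_t)size, f) != (size_t)size) {
    perror("fread");
    return 1;
  }
  fclose(f);
  LLVMFuzzerTestOneInput(data, (size_t)size);
  free(data);
  return 0;
}
```

Wiring such a replay step into CI, fed with all previously found crashing inputs, turns every past finding into a permanent regression test.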


All in all, modern fuzzing provides developers with everything needed to detect and fix all kinds of vulnerabilities and bugs while cutting away the usability issues of SAST. Furthermore, with fuzzing, testing can happen at scale. Much like unit testing when it was first introduced, fuzzing is now revolutionizing the way the world tests software.

Developers no longer have to worry about manually writing unit tests that cover every possible edge case. Instead, they can easily set up fuzz tests that automatically generate endless amounts of unbiased test cases (see the sketch below). Fuzzing can drastically reduce development costs by improving developers’ efficiency.
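To illustrate the contrast with hand-written unit tests, here is a small property-style fuzz test, again a hedged sketch rather than a recipe. Instead of enumerating edge cases, it states one invariant (formatting a `long` and parsing it back must round-trip) and lets the fuzzer generate the values.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Property-style fuzz test: assert an invariant over fuzzer-chosen
// values instead of hand-picking edge cases in individual unit tests.
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  if (size < sizeof(long)) return 0;    // not enough bytes for a value

  long value;
  memcpy(&value, data, sizeof(value));  // derive the test value from input

  char buf[32];
  snprintf(buf, sizeof(buf), "%ld", value);
  long parsed = strtol(buf, NULL, 10);

  assert(parsed == value);              // violations surface as crashes
  return 0;
}
```

This single harness covers zero, negative numbers, `LONG_MIN`, and every other edge case a hand-written test suite would have to list explicitly.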

Finally, the aggregated, dynamically generated inputs can be used for regression testing of applications prior to deployment. In combination, SAST can be helpful in identifying targets for fuzzing and maximizing code coverage.

In summary, fuzzing should no longer be missing from any development process. Otherwise, a huge potential for efficiency and security is given away.

Curious about the potential fuzzing has to offer for your development process?

Try CI Fuzz for Free
