In today's software testing industry, acronyms like SAST, DAST, and IAST are omnipresent, with IAST being the most recent trend of 2019. Before introducing Feedback-Based Application Security Testing (FAST), we will first give a short recap of the current application security testing approaches and discuss the advantages and disadvantages of the available tools. In the second part of this article, we will define the FAST approach and discuss its advantages in comparison to SAST, DAST, and IAST.
SAST, or Static Application Security Testing, has been around for many decades. In SAST, the analyzer scans the source code without actually executing it. The code is traversed for suspect patterns using heuristics, and any code matching a pattern that could indicate a potential vulnerability is presented to the user. Since SAST tools do not execute the code, they can be used at any stage of the software development process.
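The pattern-matching idea can be sketched in a few lines. The toy scanner below is purely illustrative — the patterns and warning messages are invented for this example, not taken from any real SAST product. It flags calls to C functions that are common sources of buffer overflows; note that, like a real static tool, it flags the `strcpy` call even if the surrounding code happens to make it safe:

```python
import re

# Toy SAST-style scanner: heuristic patterns for C functions that are
# common sources of buffer overflows. Illustrative only.
SUSPECT_PATTERNS = {
    r"\bstrcpy\s*\(": "unbounded string copy (possible buffer overflow)",
    r"\bgets\s*\(": "reads unbounded input (always unsafe)",
    r"\bsprintf\s*\(": "unbounded formatted write",
}

def scan(source: str):
    """Return (line_number, warning) pairs for lines matching a suspect pattern."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in SUSPECT_PATTERNS.items():
            if re.search(pattern, line):
                warnings.append((lineno, message))
    return warnings

code = '''
char buf[8];
gets(buf);            /* flagged: real bug */
strcpy(buf, input);   /* flagged, even if safe in context */
'''
for lineno, msg in scan(code):
    print(lineno, msg)
```

Because the scanner never runs the code, it cannot tell whether the flagged `strcpy` is actually reachable with oversized input — which is exactly how false positives arise.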
The fundamental disadvantage of these tools is that they produce a large number of false positives (warnings about code that does not actually contain a vulnerability). In practice, large projects can easily accumulate hundreds of thousands of warnings, and even toy examples can produce thousands. This leads to tremendous usability issues, and many developers and testers strongly dislike these tools as a result.
A common coping strategy is to outsource the analysis of the warnings, which defeats the purpose of running the tools in-house. Many SAST companies now offer heuristics to reduce the number of false positives. However, since these heuristics are also based on static analysis, they suffer from the same limitations and do not change the fundamental problem of SAST.
| SAST advantages | SAST disadvantages |
| --- | --- |
| Can be performed at the early stages of software development, since it does not require the application to be built completely | Cannot discover runtime issues |
| Offers 100% code coverage | Not well suited to track issues where user input is involved |
| | Has difficulty with libraries and frameworks found in modern apps |
| | Requires access to the source code (“white-box testing”) |
DAST, or Dynamic Application Security Testing, has also been known for several decades. Here, the analyzer searches for security vulnerabilities and weaknesses by executing the application. The software under test is executed using predefined or randomized inputs.
If the behavior of the application differs from predefined correct responses or the program crashes, there is an error or bug in the application. The main advantage of dynamic testing is that there are virtually no false positives since real program behavior is analyzed, which makes the results a lot more useful to testers.
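This black-box loop can be illustrated with a small sketch. The `target` function below stands in for a real application and contains a planted bug (both the function and the bug are invented for illustration); the fuzzer observes only whether a given input makes it crash:

```python
import random
import string

# Toy black-box DAST loop: feed random inputs to a target and record
# any input that makes it crash. Only input/output behavior is observed.
def target(data: str) -> str:
    # Planted bug: crashes on any input containing a double quote.
    if '"' in data:
        raise ValueError("unescaped quote reached the parser")
    return "ok"

def random_input(max_len: int = 10) -> str:
    alphabet = string.ascii_letters + string.digits + '"\'<>&'
    return "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, max_len)))

def fuzz(rounds: int = 10_000):
    """Run the target on random inputs; return the crashing (input, error) pairs."""
    crashes = []
    for _ in range(rounds):
        data = random_input()
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes
```

A crashing input found this way is, by construction, a real failure of the program — which is why dynamic testing produces virtually no false positives.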
An interesting feature of DAST is that it can also be used on software for which the tester does not have the source code. In this case, DAST treats the application as a black box and only looks at its inputs and outputs. This feature has led many to incorrectly use the terms black-box testing and DAST interchangeably; in fact, black-box testing is only a subcategory of DAST.
Another common misconception about DAST is that it is only used during the testing phase of development. While DAST does require that the program be executable, beyond that DAST can be used at any time during the software development lifecycle (SDL), including during early development.
However, DAST also has some disadvantages. Since DAST executes the program with random inputs, it cannot guarantee code coverage, and it has poorer runtime properties than SAST solutions. Black-box DAST solutions have the additional disadvantage that nothing guides the generation of random inputs, which makes them very inefficient and, under most conditions, incapable of finding bugs buried deep within the code.
It also takes manual effort to understand the stack traces produced by crashes and map them back onto the source code so that the underlying problems can be fixed. Some DAST solutions address these problems, but unlike the very simple black-box tools, they suffer from high complexity and require significant expertise to use.
| DAST advantages | DAST disadvantages |
| --- | --- |
| Produces virtually no false positives | Requires a working application to test |
| Can discover runtime issues | Needs special testing infrastructure and customization |
| Can discover issues based on user interaction with the software | Often performed towards the end of the software development cycle, due to poor performance |
| Does not require access to the source code | Does not cover all code paths |
IAST, or Interactive Application Security Testing, is a marketing term and is often described as combining the benefits of SAST and DAST. Another feature claimed by IAST is that it is integrated into the SDL and the CI/CD chain instead of only being used in the testing phase.
This feature gives rise to the “I” in IAST. For instance, Gartner defines IAST as follows: “Interactive application security testing (IAST) uses instrumentation that combines dynamic application security testing (DAST) and static analysis security testing (SAST) techniques to increase the accuracy of application security testing. Instrumentation allows DAST-like confirmation of exploit success and SAST-like coverage of the application code, and in some cases, allows security self-testing during general application testing. IAST can be run stand-alone, or as part of a larger AST suite, typically DAST.”
There are several distinct ways this can be interpreted. Firstly, a DAST solution is used to test warnings produced by SAST tools to weed out the false positives. This would be very desirable but, to the best of our knowledge, no tool can actually do this at scale with any scientific rigor and thus we consider this to be snake oil. Alternatively, it can be interpreted as a DAST solution that utilizes the source code to improve performance, such as fuzzers that use instrumentation to improve code coverage.
These are highly successful tools, but they all fall into the DAST category, since DAST is not restricted to black-box testing. Nor is the “interactive” aspect excluded from DAST, since dynamic testing can be performed as soon as the code is executable. We therefore see IAST mainly as a marketing term describing a subcategory of DAST that explicitly emphasizes integration into the CI/CD pipeline. Cutting through the marketing hype, this is still an important distinction to make, since fixing bugs early in the SDL is definitely a desirable goal.
However, current IAST solutions still have a major drawback: they either rely on the definition of good test cases to trigger high code coverage (passive mode), or they rely on the randomization used in dumb fuzzing combined with well-defined patterns generated by the DAST engine. This was the state of the art until the introduction of AFL and libFuzzer. In the rest of this article, we will introduce and discuss a new trend in software testing for 2020 based on feedback-based fuzzing, which we call FAST, or Feedback-based Application Security Testing.
FAST, or Feedback-based Application Security Testing, is also a subcategory of DAST and is currently being developed on the basis of feedback-based, coverage-guided fuzzing techniques.
Old DAST solutions and black-box approaches have the fundamental drawback that they have no information about the code covered when executing a given input. As a result, they rely on brute force and random approaches to generate inputs in the hope of triggering vulnerabilities. In other words, they are only able to find shallow bugs due to the limited code coverage they can achieve.
State-of-the-art fuzzing techniques instrument the program being tested so that the fuzzer gets feedback about the code covered when executing each input. This feedback is then used by the mutation engine as a measure of input quality. At the core of the mutation engine are genetic algorithms that use code coverage as the fitness function.
Generations of inputs that result in new code coverage survive and are used in the next rounds of mutation. The net effect of this process is a set of inputs that maximize code coverage and thus increase the probability of triggering bugs. This is the main technology employed by state-of-the-art fuzzers such as libFuzzer, developed and intensively used at Google.
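This feedback loop can be sketched with a hand-instrumented toy target (the target, its “FUZ” trigger, and all names are invented for illustration). Inputs that reach a branch not seen before are kept in the corpus and mutated further, so the fuzzer incrementally learns the “F”, then “FU”, then “FUZ” prefixes instead of having to guess all three bytes at once:

```python
import random

# Toy coverage-guided fuzzer in the spirit of AFL/libFuzzer:
# inputs that reach new branches survive into the corpus.

def target(data: bytes, coverage: set):
    """Stand-in for an instrumented program: records which branches ran."""
    coverage.add("entry")
    if len(data) > 0 and data[0] == ord("F"):
        coverage.add("F")
        if len(data) > 1 and data[1] == ord("U"):
            coverage.add("FU")
            if len(data) > 2 and data[2] == ord("Z"):
                coverage.add("FUZ")
                raise RuntimeError("bug reached")

def mutate(data: bytes) -> bytes:
    """One random byte-level mutation: insert, overwrite, or delete a byte."""
    data = bytearray(data)
    choice = random.random()
    if choice < 0.5 or not data:
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif choice < 0.8:
        data[random.randrange(len(data))] = random.randrange(256)
    else:
        del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(max_iters: int = 200_000):
    """Mutate corpus entries; keep any input that covers a new branch."""
    corpus = [b""]
    global_coverage: set = set()
    for _ in range(max_iters):
        candidate = mutate(random.choice(corpus))
        run_coverage: set = set()
        try:
            target(candidate, run_coverage)
        except RuntimeError:
            return candidate                  # crashing input found
        if not run_coverage <= global_coverage:
            global_coverage |= run_coverage   # new branch: keep this input
            corpus.append(candidate)
    return None
```

A purely random black-box fuzzer would need on the order of 256³ attempts to hit this bug, while the coverage-guided loop typically finds it within a few thousand iterations, because each learned prefix only costs roughly 256 guesses.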
Technology leaders such as Google and Microsoft already use these modern technologies to automatically test their code for vulnerabilities. For example, with the help of OSS-Fuzz, over 16,000 bugs have been discovered in Google Chrome and 11,000 bugs in 160 open-source projects. In 2019, fuzzing found more bugs at Google than any other technology. This clearly illustrates the effectiveness of coverage-guided fuzzing in uncovering bugs and vulnerabilities.
Despite these enormous advancements, the full potential of FAST has barely been explored yet. Apart from the use of genetic algorithms to optimize code coverage, a wealth of other techniques can be used to significantly improve the effectiveness of DAST and of current FAST fuzzers such as libFuzzer, AFL, and honggfuzz. The following is a brief introduction to several improvements made at Code Intelligence:
| FAST advantages | FAST disadvantages |
| --- | --- |
| Produces virtually no false positives | Requires a working application to test |
| Highly automated: feedback mechanisms guide the tool to vulnerabilities with minimal human effort | Covers significantly more code than traditional DAST, but cannot guarantee full code coverage |
| Can find bugs deeper in the code than traditional DAST | |
| More efficient than traditional DAST and thus can be integrated seamlessly into CI/CD | |
You can find a selection of our latest CVEs found with FAST here, or learn more by watching CI Fuzz in action.