Sergej Dechand · 10 min read

SAST, DAST, IAST and Feedback-Based Fuzzing

Introduction to Testing Approaches

In today's software testing industry, acronyms like SAST, DAST, or IAST are omnipresent, with IAST being the most recent trend of 2019. Before introducing Feedback-Based Application Security Testing (FAST), we will first give a short recap of the current application security testing methods and discuss the advantages and disadvantages of the available tooling. In the second part of this article, we will define the FAST approach and discuss its advantages in comparison to SAST, DAST, and IAST.

SAST

SAST, or Static Application Security Testing, has been around for many decades. In SAST, the analyzer scans the source code without actually executing it. The code is traversed for suspect patterns using heuristics, and code matching specific patterns, which could indicate potential vulnerabilities, is then presented to the user. Since SAST tools do not execute the code, they can be used at any stage of the software development process.

The fundamental disadvantage of these tools is that they produce a large number of false positives (warnings about code that does not actually contain vulnerabilities). In practice, large projects can easily accumulate hundreds of thousands of warnings, and even toy examples can produce thousands. This leads to tremendous usability issues, and most developers and testers strongly dislike these tools.
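
To illustrate, here is a minimal C++ snippet of the kind of pattern a typical SAST tool might flag; the function name and buffer size are purely illustrative:

    #include <cstring>

    // A static analyzer will typically flag the strcpy() call below as a potential
    // buffer overflow, because it cannot know how long 'name' is at runtime.
    void greet(const char *name) {
        char buf[16];
        std::strcpy(buf, name);   // flagged: possible out-of-bounds write
        // ...
    }

    // If every caller passes a short constant such as greet("admin"), the warning
    // is a false positive -- but the static analyzer cannot prove that without
    // reasoning about all call sites and their runtime values.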

A common coping strategy is to outsource the analysis of the warnings, thus defeating the purpose of running the tools in-house. Many SAST vendors now offer heuristics to reduce the number of false positives; however, since these heuristics are also based on static analysis, they suffer from the same limitations and do not change the fundamental problem of SAST.

SAST advantages:
  • Can be performed at the early stages of software development, since it does not require the application to be built completely
  • Offers 100% code coverage

SAST disadvantages:
  • Cannot discover runtime issues
  • Not well suited to track issues where user input is involved
  • Has difficulty with libraries and frameworks found in modern apps
  • Requires access to the source code (“white-box testing”)

 

DAST

DAST, or Dynamic Application Security Testing, has also been known for several decades. Here, the analyzer searches for security vulnerabilities and weaknesses by executing the application. The software under test is executed using predefined or randomized inputs.

If the behavior of the application differs from predefined correct responses or the program crashes, there is an error or bug in the application. The main advantage of dynamic testing is that there are virtually no false positives since real program behavior is analyzed, which makes the results a lot more useful to testers.
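
As a simple illustration of black-box dynamic testing, the sketch below feeds random byte sequences into a hypothetical parse_message() function and relies on crashes or sanitizer reports to reveal bugs; the function name is an assumption for illustration only:

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Hypothetical API under test -- stands in for any input-processing function.
    void parse_message(const uint8_t *data, size_t size);

    int main() {
        std::srand(42);
        for (int i = 0; i < 1000000; ++i) {
            // Generate a completely random input; nothing guides this choice.
            std::vector<uint8_t> input(std::rand() % 1024);
            for (auto &byte : input) byte = std::rand() % 256;

            // If parse_message() crashes or trips a sanitizer, we have found a real bug.
            parse_message(input.data(), input.size());
        }
    }

Because nothing guides the input generation, such a loop mostly exercises the same shallow code paths over and over; this is exactly the limitation that feedback-based fuzzing, discussed below, addresses.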

An interesting feature of DAST is that it can also be used on software for which the tester does not have the source code. In this case, DAST treats the application as a black box and only looks at inputs and outputs. This feature has led many to use the terms black-box testing and DAST interchangeably, which is incorrect: black-box testing is a subcategory of DAST.

Another common misconception about DAST is that it is only used during the testing phase of development. While DAST does require that the program be executable, beyond that DAST can be used at any time during the software development lifecycle (SDL), including during early development.

However, DAST also has some disadvantages. Since DAST executes the program with random inputs, it cannot guarantee code coverage, and it has poorer runtime properties than SAST solutions. Black-box DAST solutions also have the disadvantage that there is nothing to guide the generation of random inputs, making them very inefficient and, under most conditions, incapable of finding bugs buried deep within the code.

It also requires manual effort to understand the stack traces produced by crashes and to map them onto the source code in order to fix the problems later. Some DAST solutions address these problems; however, unlike the very simple black-box DAST solutions, they suffer from high complexity and require significant expertise to use.

DAST advantages:
  • Produces virtually no false positives
  • Can discover runtime issues
  • Can discover issues based on user interaction with the software
  • Does not require access to the source code

DAST disadvantages:
  • Requires a working application to be tested
  • Needs special testing infrastructure and customization
  • Often performed towards the end of the software development cycle, due to poor performance
  • Does not cover all code paths

 

IAST

IAST, or Interactive Application Security Testing, is a marketing term that is often described as combining the benefits of SAST and DAST. Another feature claimed for IAST is that it is integrated into the SDL and the CI/CD chain instead of only being used in the testing phase.

This feature gives rise to the “I” in IAST. For instance, Gartner defines IAST as follows: “Interactive application security testing (IAST) uses instrumentation that combines dynamic application security testing (DAST) and static analysis security testing (SAST) techniques to increase the accuracy of application security testing. Instrumentation allows DAST-like confirmation of exploit success and SAST-like coverage of the application code, and in some cases, allows security self-testing during general application testing. IAST can be run stand-alone, or as part of a larger AST suite, typically DAST.”

There are several distinct ways this can be interpreted. Firstly, it can be read as a DAST solution that tests the warnings produced by SAST tools in order to weed out the false positives. This would be very desirable, but to the best of our knowledge, no tool can actually do this at scale with any scientific rigor, and thus we consider such claims to be snake oil. Alternatively, it can be interpreted as a DAST solution that utilizes the source code to improve performance, such as fuzzers that use instrumentation to improve code coverage.

These are highly successful tools, but they all fall into the DAST category, since DAST is not restricted to black-box testing. The “interactivity” feature is also not exclusive to IAST, since dynamic testing can be done as soon as the code is executable. We therefore see IAST mainly as a marketing term that describes a sub-category of DAST and explicitly highlights the fact that the DAST tool is integrated into the CI/CD pipeline. Cutting through the marketing hype, this is still an important distinction to make, since fixing bugs early in the SDL is definitely a desirable goal.

However, current IAST solutions still have a major drawback: they either rely on the definition of good test cases that trigger high code coverage (passive mode), or on the randomized inputs of dumb fuzzing combined with well-defined patterns generated by the DAST engine. This was state of the art until the introduction of AFL and libFuzzer. In the rest of this article, we will introduce and discuss a new trend in software testing for 2020 based on feedback-based fuzzing, which we call FAST, or Feedback-based Application Security Testing.

FAST

FAST, or Feedback-based Application Security Testing, is also a subcategory of DAST and is currently being developed on the basis of coverage-guided (feedback-based) fuzzing techniques.

Old DAST solutions and black-box approaches have the fundamental drawback that they have no information about the code covered when executing a given input. As a result, they rely on brute force and random approaches to generate inputs in the hope of triggering vulnerabilities. In other words, they are only able to find shallow bugs due to the limited code coverage they can achieve. 

State-of-the-art fuzzing techniques instrument the program under test so that the fuzzer receives feedback about the code covered when executing each input. This feedback is then used by the mutation engine as a measure of input quality. At the core of the mutation engine are genetic algorithms that use code coverage as their fitness function.

Generations of inputs that result in new code coverage survive and are used in the next rounds of mutation. The net effect of this process is a set of inputs that maximizes code coverage and thus increases the probability of triggering bugs. This is the main technology employed by state-of-the-art fuzzers such as libFuzzer, developed and intensively used at Google.
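
To make this concrete, here is a minimal libFuzzer-style fuzz target; parse_message() is again a hypothetical API under test. The fuzzer repeatedly calls this entry point with mutated inputs and uses the coverage feedback from compiler instrumentation to decide which inputs to keep mutating:

    #include <cstdint>
    #include <cstddef>

    // Hypothetical API under test.
    void parse_message(const uint8_t *data, size_t size);

    // libFuzzer entry point: called once per generated input.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_message(data, size);  // crashes and sanitizer findings are reported by the fuzzer
        return 0;
    }

Compiled with clang -fsanitize=fuzzer,address, the resulting binary instruments the code for coverage feedback and runs the mutation loop described above; AFL-style fuzzers follow the same principle with a different harness.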

Technology leaders such as Google and Microsoft already use these modern technologies to automatically test their code for vulnerabilities. For example, with the help of OSS-Fuzz, over 16,000 bugs have been discovered in Google Chrome and 11,000 bugs in 160 open-source projects. In 2019, fuzzing found more bugs at Google than any other technology. This clearly illustrates the effectiveness of coverage-guided fuzzing in uncovering bugs and vulnerabilities.

Despite these enormous advancements, the full potential of FAST has barely been explored yet. Apart from the use of genetic algorithms to optimize code coverage, a wealth of other techniques can be used to significantly improve the effectiveness of DAST and of current FAST fuzzers such as libFuzzer, AFL, and honggfuzz. The following is a brief introduction to several improvements made at Code Intelligence:

  • Structure-aware fuzzing
    In many scenarios, the code under test expects inputs with a specific structure. For example, an API expecting JSON or XML messages will reject inputs that are not in a valid format. Most state-of-the-art techniques perform mutations on the bit and byte level and thus struggle to handle these cases.

    Most of the generated inputs will be invalid and, as a result, rejected early in processing, preventing the fuzzer from reaching deep into the code. In order to fuzz the internal logic of the software, mutation engines are needed that mutate inputs in such a way that the generated inputs remain valid for the tested API.

    It is important to provide developers with a user-friendly way to define the structure of expected inputs. The mutation engine can then automatically perform mutations that result in valid inputs according to the provided grammar and thus reach much deeper into the program being tested (see the sketch after this list).

  • Developer-friendly interface
    To use current fuzzing technologies, users have to implement fuzz targets (code that receives input from the fuzzing engine, like the one sketched above), perform the necessary initialization, prepare the input, and pass it to the API under test. This poses a massive usability challenge for many developers and testers, since it requires specialist knowledge about how the fuzzer works, and it renders fuzzing hard to use for most developers.

    However, in many cases, the software uses standard interfaces such as sockets. For these cases, fuzz targets can be automatically generated with minimal configuration overhead from developers, who contribute the necessary domain knowledge to guide the process. To increase effectiveness, all reads and writes to the socket or file of interest can be intercepted and replaced with the fuzzer's input, which also removes the overhead of performing actual system calls.

  • Stateful fuzzing
    State-of-the-art fuzzers are designed to test code by repeatedly executing it with a single input each time, which provides very little support for stateful protocols. In order to effectively identify bugs in network protocol implementations, new techniques are needed that enable users to describe the stateful operations of a protocol and the corresponding messages needed for each state. The mutation engine can then mutate the inputs and generate valid messages at each stage. The order of messages can also be mutated to test for state transitions that should not be allowed (a sketch of this pattern follows after this list).

  • Concolic code execution
    Even the best fuzzers may get stuck when facing certain types of checks in the code. In order to satisfy these checks and go deeper into the program, very specific inputs are required. A good example is a checksum check: such checks are inherently difficult for fuzzers to get past, since fuzzing is a randomized process.

    A very promising approach is to use concolic code execution to compute these specific inputs. Although this approach is computationally expensive, state-of-the-art research shows that it can be used as a complement to fuzzing for cases where the fuzzer gets stuck, thus combining the advantages of fuzzing and concolic code execution.

    Fuzzing can very quickly generate inputs that reach as much code as possible. When the fuzzer gets stuck, concolic execution performs the more expensive computation needed to find the specific inputs that pass certain checks, so that fuzzing can continue exploring the program space (an example of such a check follows below).
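
As a rough illustration of structure-aware fuzzing, the sketch below uses libFuzzer's optional LLVMFuzzerCustomMutator hook to keep mutated inputs structurally valid; fix_up_json() and handle_json_request() are hypothetical and stand in for whatever grammar-aware repair logic and JSON API a real project would have:

    #include <cstdint>
    #include <cstddef>

    // Hypothetical helpers: repair a mutated buffer so that it is valid JSON again,
    // and the API under test that only accepts well-formed JSON.
    size_t fix_up_json(uint8_t *data, size_t size, size_t max_size);
    void handle_json_request(const uint8_t *data, size_t size);

    // Default byte-level mutation, provided by the libFuzzer runtime.
    extern "C" size_t LLVMFuzzerMutate(uint8_t *data, size_t size, size_t max_size);

    // Optional hook: libFuzzer calls this instead of its built-in mutator.
    extern "C" size_t LLVMFuzzerCustomMutator(uint8_t *data, size_t size,
                                              size_t max_size, unsigned int seed) {
        (void)seed;                                                // unused in this sketch
        size_t new_size = LLVMFuzzerMutate(data, size, max_size);  // byte-level mutation
        return fix_up_json(data, new_size, max_size);              // restore structural validity
    }

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        handle_json_request(data, size);  // inputs now pass the JSON parser and reach deeper logic
        return 0;
    }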
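
Stateful fuzzing can be sketched in a similar way: the fuzzer's input is interpreted as a sequence of protocol messages that is replayed against a fresh session, so that both the message contents and their order are mutated. The Session type and its methods are hypothetical placeholders for a real protocol implementation:

    #include <cstdint>
    #include <cstddef>

    // Hypothetical protocol session under test.
    struct Session {
        void handshake(const uint8_t *msg, size_t len);
        void request(const uint8_t *msg, size_t len);
        void close();
    };

    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        Session session;  // fresh state for every input

        // Interpret the input as a sequence of (type, length, payload) messages.
        while (size >= 2) {
            uint8_t type = data[0];
            size_t len = data[1] < size - 2 ? data[1] : size - 2;
            const uint8_t *payload = data + 2;

            switch (type % 3) {          // the type byte selects the protocol operation
                case 0: session.handshake(payload, len); break;
                case 1: session.request(payload, len); break;
                case 2: session.close(); break;
            }
            data += 2 + len;
            size -= 2 + len;
        }
        return 0;
    }

Because the message order comes entirely from the fuzzer's input, forbidden state transitions (for example, a request before the handshake) are exercised automatically.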
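
Finally, the kind of check that motivates concolic execution looks like the following: a purely random mutator is extremely unlikely to produce an input whose last four bytes match the CRC of the rest, so the branch guarding the interesting logic is almost never taken. A concolic engine can instead solve for the exact bytes. Here, crc32() and process_payload() are assumed helpers:

    #include <cstdint>
    #include <cstddef>
    #include <cstring>

    // Assumed helpers: a standard CRC-32 implementation and the deeper logic under test.
    uint32_t crc32(const uint8_t *data, size_t size);
    void process_payload(const uint8_t *data, size_t size);

    void handle_packet(const uint8_t *data, size_t size) {
        if (size < 4) return;

        uint32_t stored;
        std::memcpy(&stored, data + size - 4, 4);   // checksum stored in the last four bytes

        // Random mutations satisfy this equality with probability of roughly 1 in 4 billion,
        // so a blind fuzzer rarely reaches process_payload(). A concolic engine can treat
        // the comparison symbolically and compute matching bytes directly.
        if (stored != crc32(data, size - 4)) return;

        process_payload(data, size - 4);
    }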

FAST advantages:
  • Produces virtually no false positives
  • Highly automated: feedback mechanisms guide the tool to vulnerabilities with minimal human effort
  • Can find bugs deeper in the code than traditional DAST
  • More efficient than traditional DAST and thus can be integrated seamlessly into CI/CD

FAST disadvantages:
  • Requires a working application to be tested
  • Covers significantly more code than traditional DAST, but cannot guarantee full code coverage

 

You can find a selection of our latest CVEs found with FAST here, or learn more by booking a fuzzing demo with one of our security experts. We will walk you through the process and answer your questions.
