
A

ADVANCED FUZZING:

A software testing method in which inputs are sent to a specific target within an application to actively reveal anomalous behavior that could be indicative of a vulnerability.

ATTACK SURFACE:

The entirety of peripheral entry points that attackers can use to gain unauthorized access to an application.

B

BLACK-BOX FUZZING:

A fuzzing approach in which access to the source code is not required (as opposed to white-box fuzzing). Inputs are generated randomly or mutated from sample data. However, this form of fuzzing can hardly detect unexpected edge cases.

C

CODE COVERAGE:

Code coverage, also called test coverage, measures the share of code that is executed during a given test. Its most common use case is supplying additional data about software tests. Below you will find some common coverage metrics, followed by a small worked example:

  • Statement coverage – the number of statements in the software source that have been executed during the test divided by the total number of statements in the code
  • Line coverage – the number of lines of code that have been executed divided by the total number of lines
  • Function coverage – the share of defined functions that have been called during the test
  • Basic block coverage – the share of basic blocks in the code that have been executed during the test (not considering the edges between them)
  • Edge coverage – the number of edges of the control flow graph followed during the test divided by the total number of edges (this also includes branch coverage)
  • Path coverage – the number of logical paths in the program that were taken during execution divided by the number of all possible paths
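
As a small worked example of these metrics, consider the following made-up C++ function together with a single test input; the percentages in the comments are counted directly from the code.

    #include <cassert>

    // Made-up function used only to illustrate the coverage metrics above.
    int categorize(int x) {
        int result = 0;      // statement 1
        if (x > 0) {         // statement 2, with two branches (true/false)
            result = 1;      // statement 3
        } else {
            result = -1;     // statement 4
        }
        return result;       // statement 5
    }

    int main() {
        // A single test call with x = 5 executes statements 1, 2, 3 and 5:
        //   statement coverage: 4 / 5 = 80 %
        //   branch coverage: only the "true" branch of the if is taken, 1 / 2 = 50 %
        //   function coverage: 1 / 1 = 100 %
        assert(categorize(5) == 1);
        return 0;
    }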

 

CONTINUOUS DELIVERY:

Continuous delivery (CD) describes the process of continuously updating software or an application by improving configuration, adding new features, fixing errors, etc. These changes are then automatically delivered to a shared repository. The goal of CD is to release new code with minimal effort while making the entire process more predictable and transparent.

 

CONTINUOUS INTEGRATION:

Continuous integration (CI) is a cyclical software development approach that focuses on automating the integration of code modifications from multiple contributors. Integration takes place at specified intervals and is in most cases coupled with automated testing.

 

CONTROL FLOW GRAPHS:

A visual representation that illustrates the different junctions and branches a program passes through during execution. Control flow graphs can be used to better trace code coverage.
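
As a rough sketch of how such a graph maps onto code (the function below is made up), each straight-line block of statements becomes a node and each possible jump between blocks becomes an edge:

    // Made-up example: comments mark the basic blocks (nodes) and the
    // possible transitions (edges) of the control flow graph.
    int sign(int x) {
        // Block A: entry
        if (x == 0) {        // edges: A -> B (x == 0), A -> C (x != 0)
            // Block B
            return 0;        // edge: B -> exit
        }
        // Block C
        if (x > 0) {         // edges: C -> D (x > 0), C -> E (x <= 0)
            // Block D
            return 1;        // edge: D -> exit
        }
        // Block E
        return -1;           // edge: E -> exit
    }

    int main() {
        return sign(-7) == -1 ? 0 : 1;  // exercises the path A -> C -> E
    }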

 

CORPUS:

A series of inputs for a test harness. Usually it refers to a collection of basic inputs that produce maximum code coverage.

D

DAST:

DAST, or Dynamic Application Security Testing, is an approach in which the analyzer searches for security vulnerabilities and weaknesses by executing the application. The software under test is executed using predefined or randomized inputs. If the behavior of the application differs from the predefined correct responses, or if the program crashes, there is an error or bug in the application.

 

DEVSECOPS:

DevSecOps is a software development approach that prioritizes introducing security measures early in the software development lifecycle, thus minimizing vulnerabilities and bringing security closer to IT and business objectives. It adds a security perspective to the idea of DevOps and involves the integration of security testing technologies into CI/CD workflows.

 

DUMB FUZZING:

A fuzzing approach in which the fuzzing engine produces input completely at random, without considering what input format the target expects. A dumb fuzzer is relatively easy and inexpensive to set up, but it is also very inefficient.
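
A minimal sketch of the idea in C++ is shown below; parse_input() is a made-up stand-in for the real code under test, and an actual dumb fuzzer would typically target whole programs or file formats, but the loop is the same: generate random bytes, feed them in, and watch for crashes.

    #include <cstddef>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Made-up stand-in for the code under test.
    void parse_input(const std::vector<std::uint8_t>& data) {
        (void)data;  // a real target would parse the bytes and might crash
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        std::uniform_int_distribution<int> byte(0, 255);
        std::uniform_int_distribution<std::size_t> length(0, 4096);

        // Dumb fuzzing: no knowledge of the expected input format and no
        // feedback, just random bytes in a loop.
        for (int i = 0; i < 100000; ++i) {
            std::vector<std::uint8_t> input(length(rng));
            for (auto& b : input) b = static_cast<std::uint8_t>(byte(rng));
            parse_input(input);  // a crash here would abort the process
        }
        return 0;
    }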

 

DYNAMIC BINARY TRANSLATION:

A technique that is mostly used to transcribe and adapt machine code during the translation from one architecture to another.

E

EXECUTABLE:

A file that runs code when it is opened. Executable files contain specific instructions that are carried out directly by the computer. Most data files require an executable in order to be opened and used.

F

FEEDBACK-BASED FUZZING:

A testing method that feeds the application under test with a series of systematically mutated inputs. The fuzzer receives feedback on the amount of code covered during each execution and uses it to guide further mutations. Unlike traditional or black-box fuzzing, feedback-based fuzzing can penetrate deeply into a program and reveal unexpected edge cases that would otherwise stay undetected.
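
The core loop can be sketched roughly as follows; the coverage and mutation functions are toy placeholders, and real engines such as libFuzzer or AFL implement far more elaborate versions of the same feedback idea.

    #include <cstddef>
    #include <cstdint>
    #include <set>
    #include <vector>

    using Input = std::vector<std::uint8_t>;

    // Toy placeholder for running the instrumented target: returns the IDs
    // of the code edges hit by this input (see "Instrumentation").
    std::set<std::uint64_t> run_and_measure_coverage(const Input& input) {
        return { static_cast<std::uint64_t>(input.empty() ? 0 : input[0]) };
    }

    // Toy placeholder for the mutation engine.
    Input mutate(Input input) {
        if (input.empty()) input.push_back(0);
        else input[0] ^= 1;
        return input;
    }

    int main() {
        std::vector<Input> corpus = { Input{} };  // seed corpus
        std::set<std::uint64_t> global_coverage;

        for (int round = 0; round < 1000; ++round) {
            for (std::size_t i = 0; i < corpus.size(); ++i) {
                Input candidate = mutate(corpus[i]);
                bool new_coverage = false;
                for (std::uint64_t edge : run_and_measure_coverage(candidate)) {
                    if (global_coverage.insert(edge).second) new_coverage = true;
                }
                // The feedback step: only inputs that reach new code are kept.
                if (new_coverage) corpus.push_back(candidate);
            }
        }
        return 0;
    }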

 

FUZZER:

A piece of software that uses fuzzing methods to conduct security tests on applications.

 

FUZZING:

A software testing method that detects bugs by automatically injecting malformed data inputs into the system. There are different approaches to fuzzing such as black-box fuzzing, white-box fuzzing, and feedback-based/modern fuzzing.

 

FUZZING ENGINE/MUTATION ENGINE:

A tool used to monitor the amount of code coverage achieved by different inputs. The engine feeds this information to the fuzzer, which helps it mutate or add inputs so that more code coverage can be achieved.

 

FUZZ TARGET/TEST HARNESS:

The part of an application under test that accepts different inputs during a testing process. How the fuzz target processes these inputs using the API of the tested application determines whether an application fails the test or not.
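
With libFuzzer-style tools, for example, a fuzz target is a small C/C++ function with a fixed entry point that simply forwards the generated bytes to the API under test; parse_message() below is a made-up stand-in for that API.

    #include <stddef.h>
    #include <stdint.h>
    #include <string>

    // Made-up stand-in for the API of the application under test; in practice
    // this function lives in the library being fuzzed.
    bool parse_message(const std::string& message) {
        return !message.empty() && message.front() == '{';
    }

    // libFuzzer-style fuzz target: the fuzzing engine calls this entry point
    // over and over again with different inputs.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
        std::string message(reinterpret_cast<const char*>(data), size);
        parse_message(message);  // crashes and sanitizer findings fail the run
        return 0;                // values other than 0 are reserved
    }

Built with, for example, clang++ -fsanitize=fuzzer,address, the resulting binary contains the fuzzing loop and can be started directly.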

G

GREY-BOX FUZZING:

This type of test is, in essence, a mix of white-box and black-box fuzzing. Applications are tested through external interfaces, as in black-box testing; however, the source code is available and is used to improve the testing process. For instance, this allows a fuzzer to monitor how different inputs propagate through the program, which significantly increases the efficiency of the testing process.

H

HARNESSING:

A technique in which code is added to trigger certain routines within the executable under test.

I

INSTRUMENTATION:

A programming technique in which additional code is inserted to monitor individual components within a larger piece of software. This allows for precise measurement of performance and errors. In fuzzing, instrumentation is mainly used to trace information flow and send coverage feedback to the mutation engine.
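
As one concrete example of such instrumentation (modeled on the trace-pc-guard example in Clang's SanitizerCoverage documentation), the compiler can be asked, via -fsanitize-coverage=trace-pc-guard, to insert a callback on every edge of the control flow graph. The sketch below defines the two callbacks and prints each edge the first time it is executed; in a real setup the callbacks are kept in a file compiled without the coverage flag and linked against the instrumented code.

    #include <stdint.h>
    #include <stdio.h>

    // Called once per instrumented module at startup; assigns every edge
    // guard a unique, non-zero ID.
    extern "C" void __sanitizer_cov_trace_pc_guard_init(uint32_t* start,
                                                        uint32_t* stop) {
        static uint32_t counter = 0;
        for (uint32_t* guard = start; guard < stop; ++guard)
            if (*guard == 0) *guard = ++counter;
    }

    // Called by the inserted instrumentation every time an edge is executed.
    extern "C" void __sanitizer_cov_trace_pc_guard(uint32_t* guard) {
        if (*guard == 0) return;                      // edge already reported
        printf("edge %u executed\n", (unsigned)*guard);
        *guard = 0;                                   // report each edge only once
    }

    // Example code to observe (this part would be compiled with the flag).
    int absolute(int x) { return x < 0 ? -x : x; }

    int main() { return absolute(-3) == 3 ? 0 : 1; }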

N

NEGATIVE TESTING:

Detecting vulnerabilities by actively sending inputs that are meant to generate undesirable behavior within an application. The opposite of positive testing.

P

PORTFOLIO ANALYSIS:

A testing approach that combines different analysis methods. This is often used in CI/CD pipelines in order to apply a fitting testing approach at each stage.

 

POSITIVE TESTING:

A technique that is applied to verify that the application under test behaves correctly by sending expected inputs to the target. The selection of targets is done manually by the user. The opposite of negative testing.
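
To contrast positive testing with negative testing (see above), here is a small sketch with a made-up parse_port() function: the positive test sends an expected input and checks for the correct result, while the negative tests send deliberately invalid inputs and check that they are rejected rather than mis-handled.

    #include <cassert>
    #include <optional>
    #include <string>

    // Made-up function under test: parses a TCP port number.
    std::optional<int> parse_port(const std::string& text) {
        try {
            int port = std::stoi(text);
            if (port < 1 || port > 65535) return std::nullopt;
            return port;
        } catch (...) {
            return std::nullopt;   // not a number at all
        }
    }

    int main() {
        // Positive test: an expected input must yield the correct result.
        assert(parse_port("8080").value() == 8080);
        // Negative tests: invalid inputs must be rejected, not mis-parsed.
        assert(!parse_port("not a number").has_value());
        assert(!parse_port("70000").has_value());
        return 0;
    }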

R

REGRESSION TESTING:

A complete or partial collection of previously executed test cases, which are executed again to ensure that existing features function correctly.

 

RELIABILITY OF REPRODUCTION:

Reliability of reproduction means that an error recurs for the same input in repeated test runs. Apart from a few exceptions, this is usually the case in dynamic testing.

 

REVERSE ENGINEERING:

Dismantling and closely analyzing a program to draw conclusions about how it was assembled.

S

SANITIZER:

A dynamic analysis tool that uses compile-time instrumentation to find bugs at runtime. In C++, the term mainly refers to the Google sanitizers (such as AddressSanitizer), while in web development it usually refers to the sanitization of user input.
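
A minimal illustration of the runtime side: the program below contains an off-by-one heap write. Built normally it may appear to run fine; built with AddressSanitizer (for example clang++ -fsanitize=address -g) it aborts with a heap-buffer-overflow report pointing at the faulty line.

    int main() {
        int* values = new int[8];
        values[8] = 42;      // out of bounds: valid indices are 0..7
        delete[] values;
        return 0;
    }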

 

SAST:

In SAST, or Static Application Security Testing, the analyzer scans the source code without actually executing it. The code is traversed for suspect patterns using heuristics, and code that fits specific patterns, which could indicate potential vulnerabilities, is then presented to the user. In contrast to DAST, SAST tools do not execute the code, and they can be used at an early stage of the software development process.
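
To make "suspect patterns" concrete, here is a made-up snippet that a typical pattern-based rule would flag without ever running it: strcpy() performs no bounds checking, so a sufficiently long name overflows the fixed-size buffer.

    #include <cstring>

    // Made-up example of code a SAST tool would flag purely from its source.
    void copy_name(char* destination, const char* user_supplied_name) {
        char buffer[16];
        std::strcpy(buffer, user_supplied_name);  // typical SAST finding: an
                                                  // unbounded copy into a
                                                  // fixed-size stack buffer
        std::strcpy(destination, buffer);
    }

    int main() {
        char out[16];
        copy_name(out, "short name");  // fits, so nothing crashes at runtime
        return 0;
    }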

 

SEED CORPUS:

A set of valid inputs that serves as an entry point for a fuzzing process (similar to regression testing). This way, the process does not have to start from scratch, which can save the fuzzer an immense amount of time. Interesting inputs found while fuzzing are stored in the corpus directory.

 

SOFTWARE DEVELOPMENT LIFECYCLE:

The Software Development Life Cycle (SDLC) is a cyclical process, used to design, develop and test software with a high-quality standard. The SDLC aims to produce applications that meet customer expectations and reach completion within the estimated budget. It consists of a detailed plan describing how to develop, maintain, replace and alter or enhance the application.

 

SYMBOLIC/CONCOLIC EXECUTION:

Symbolic execution: A testing approach in which inputs are treated as symbolic values and constraint solving is used to identify an input that causes the execution of a particular node. This input can then be modified to generate invalid inputs for negative testing.

Concolic execution: A method that applies symbolic execution along a concrete execution path and thereby maximizes code coverage. Although concolic execution is a relatively slow procedure, it is extremely precise at triggering all conditions, especially when combined with fuzzing.
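
A worked miniature of the symbolic idea (the function below is made up): treating x as a symbolic value, the path condition for reaching the guarded branch is x * 3 + 1 == 28, which a constraint solver resolves to x = 9. A concolic engine would find the same input by executing the code on some concrete value and then negating the branch condition it observed along the way.

    #include <cassert>

    // Made-up function used to illustrate symbolic/concolic execution.
    int check(int x) {
        // Path condition for entering this branch: x * 3 + 1 == 28, i.e. x == 9.
        // A constraint solver derives such an input directly; a purely random
        // fuzzer would need luck to hit the exact value.
        if (x * 3 + 1 == 28) {
            return 1;   // hard-to-reach path where a bug might hide
        }
        return 0;
    }

    int main() {
        assert(check(9) == 1);  // the solver-provided input reaches the branch
        return 0;
    }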

 

SYSTEM TESTING:

A set of tests performed on a fully integrated system to assess its conformity with predefined requirements. The opposite of unit testing.

T

TARGET:

The application, or the part of an application, that security testing efforts are aimed at. Usually the target is compiled into an executable, which can then be tested.

 

TEST CASE:

A collection of inputs and testing conditions that can be used by a tester to decide whether a system meets predefined requirements or not.

U

UNIT TESTING:

A unit is the smallest examinable segment of an application. In unit testing, each unit is tested individually, which simplifies locating and fixing bugs.
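
A minimal example in plain C++ (the clamp_percentage() function is made up, and assert() stands in for a dedicated test framework): the unit is exercised in isolation, so a failing assertion points directly at the defective unit.

    #include <cassert>

    // Unit under test (made up for illustration).
    int clamp_percentage(int value) {
        if (value < 0) return 0;
        if (value > 100) return 100;
        return value;
    }

    int main() {
        // The unit is tested individually with known inputs and expected outputs.
        assert(clamp_percentage(-5) == 0);
        assert(clamp_percentage(42) == 42);
        assert(clamp_percentage(250) == 100);
        return 0;
    }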

W

WHITE-BOX FUZZING:

White-box fuzzing is a testing approach that, as opposed to black-box fuzzing, requires access to the source code. Information from the source code is then used by the tester to better guide the fuzzer and thus identify errors more accurately.



Fuzzing Academy

Since our goal is to make software more secure, we have created Fuzzing Academy. Fuzzing Academy's vision is to establish fuzzing as a software testing standard. To achieve this goal, we share our knowledge free of charge to reach as many people as possible. Join our efforts and become an expert in the testing methods of tomorrow!

  • Forever free for learners at all levels
  • Growing program with exciting learning opportunities
  • Learn at your own pace and completely online

CI Fuzz Demo

We take you on a tour to explore CI Fuzz and show you how our software testing solution can help you deliver secure and reliable software.


Introduction into Fuzzing

Find out why industry leaders like Google and Microsoft rely on feedback-based fuzzing. We have put together a short introduction for you.


CI Tech Blog

Learn more about the latest developments in fuzzing and application security in our tech blog. We regularly publish new posts.