
Best Practices for Embedded Security Testing

In embedded software, we need to make sure that growing complexity and software dependency come at no cost to security.

Secure Your Embedded Apps

The Code Intelligence Platform leverages the best of static and dynamic application security technologies, including fuzz testing. Book a demo with one of our colleagues to learn how to best apply this technology to secure large-scale embedded software projects without false positives.


Why Is It So Hard to Secure Embedded Applications?

Due to increasing connectivity and dependencies, modern embedded applications are constantly growing more complex. This complexity has implications for software security testing and, depending on the toolchain, requires plenty of manual effort. From an operational perspective, many embedded industries (automotive, aviation, healthcare, etc.) are tightly staffed and work in long cycles with strict deadlines. When schedules get tight, time-critical matters tend to be prioritized over software testing.

From a technical perspective, embedded software security testing also differs from other ecosystems. On the one hand, the hardware is usually tested independently of the software at first, for compliance with physical requirements and standards, e.g., electromagnetic compatibility. On the other hand, the software has to be tested against the target hardware during development. For this purpose, most teams I've worked with either rely on developer boards or simulate the hardware in software. After completion, the software is integrated into the target platform and integration tests are performed.

Finally, embedded software tests examine whether all functional and non-functional requirements are fulfilled by the final product, including both hardware and software. Usually, so-called robustness tests are also performed manually by the tester. Depending on the area of application, special evidence must be provided, such as high code coverage or modified condition/decision coverage (MC/DC).

Many languages used in embedded development, such as C/C++, are tightly coupled to the target hardware and operating system, so the code cannot simply be built and executed on an arbitrary platform. This leads to two significant problems:

1. Even though the Public API documentation is available, developers need to write plenty of test harnesses by hand, which is incredibly time-consuming.

2. The connection between the hardware and the hardware-dependent API has to be tested to ensure that it can securely process unexpected inputs coming in through the Public API, under all circumstances.

Moreover, security tests have to ensure that the application cannot be made to crash under any circumstances. They should cover all relevant states and behaviors, including edge cases caused by unexpected or erroneous inputs.
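To give an idea of the manual effort involved, here is a minimal sketch of such a hand-written test harness, assuming a libFuzzer-style entry point. public_api_set_config() is a hypothetical stand-in for one function of your Public API, and its tiny implementation only exists so the sketch compiles.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical Public API function. In practice this comes from the library
// under test; the stand-in implementation below just makes the sketch build.
extern "C" int public_api_set_config(const uint8_t *buf, size_t len) {
  if (buf == nullptr || len < 4) return -1;
  uint32_t version = 0;
  memcpy(&version, buf, sizeof(version));
  return version == 1 ? 0 : -1;
}

// libFuzzer-style entry point: the fuzzer calls this function repeatedly with
// generated inputs and mutates them based on coverage feedback.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  public_api_set_config(data, size);
  return 0;
}
```

In a real project, one such harness is needed for every entry point worth testing, which is exactly why automating their creation pays off.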

Automating Test Case Creation With Fuzz Data

In my experience, manually writing test cases that adequately cover the relevant program states and behaviors takes time and bears the risk of missing things. A mocking approach based on fuzz data has proven to be an effective way to speed things up and pinpoint issues more accurately. Instead of manually trying to come up with test cases, this approach uses fuzz data to automatically generate invalid, unexpected, or random inputs for your embedded application. This allows you to simulate the behavior of your embedded software units under realistic circumstances.

To make sure that your fuzzer is not just throwing random test inputs at your application, you should opt for embedded testing tools that can leverage information about the software under test to refine the fuzz data. With such a whitebox (or greybox) fuzzing approach, you can ensure that your fuzzer sends relevant inputs to your application, i.e., inputs that maximize code coverage and thereby trigger (almost) all relevant program states.

This approach to embedded software security testing is much faster and more accurate than any form of manual testing or dumb fuzzing. Below, I explain it in more detail.

Recorded Coding Session: Mocking Embedded Systems With Fuzz Data (Excerpt)


Mocking on Steroids: Testing Positive and Negative Criteria With Fuzz Data

To test an application with dynamic inputs, you usually have to compile and run it first. With embedded software, however, you often have the problem that it only runs on specific hardware or that the application requires input from external sources. For this reason, mocking or simulating these dependencies is necessary for DAST, IAST, or fuzzing.

A bare-bones approach to testing embedded systems is to mock the hardware-dependent functions so that they return static values. However, this approach is basically blind: it lacks runtime context, misses many possible behaviors, and results in comparably low code coverage.

As described above, a more accurate method for embedded security testing is to enhance your mock testing setup with fuzz data and to dynamically generate the return values of your mocked functions. This way, you can make use of the magic of feedback-based fuzzing to simulate the behavior of external sources under realistic conditions, while covering both positive and negative test cases. However, if you have the resources, I would always recommend combining different DAST, IAST, and fuzzing approaches.
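As a rough sketch of what this looks like in code, assuming a libFuzzer-style harness and LLVM's FuzzedDataProvider helper: hal_read_temperature() and control_loop_step() are hypothetical names standing in for a hardware-dependent function and the unit under test.

```cpp
#include <fuzzer/FuzzedDataProvider.h>
#include <cstdint>
#include <cstddef>

static FuzzedDataProvider *g_fdp = nullptr;

// Mock of the hardware-dependent function: instead of a fixed stub value,
// every call returns a value derived from the fuzzer's input.
extern "C" int16_t hal_read_temperature(void) {
  if (g_fdp == nullptr || g_fdp->remaining_bytes() == 0)
    return 0;  // neutral fallback once the fuzz data is exhausted
  return g_fdp->ConsumeIntegral<int16_t>();
}

// Example unit under test; normally this lives in your production code and is
// only linked against the mock for the fuzz build.
extern "C" void control_loop_step(void) {
  int16_t t = hal_read_temperature();
  if (t < -400 || t > 1250) {
    // error-handling path that a static stub value would never exercise
  }
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  FuzzedDataProvider fdp(data, size);
  g_fdp = &fdp;
  control_loop_step();  // exercises the code that calls hal_read_temperature()
  g_fdp = nullptr;
  return 0;
}
```

The difference to a static mock is that the fuzzer, guided by coverage feedback, will eventually generate sensor values that also drive the unit under test into its error-handling paths.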

Fuzzing Embedded Systems for Automotive

Two Excel sheets can be enough to generate a fuzz test that achieves high code coverage while using fuzz data to simulate input from external sources.

The Secret Sauce to Fuzzing Embedded Software: Instrumentation

One of the reasons why modern fuzzing technology is so unreasonably effective is that it can leverage feedback from previous test inputs to increase code coverage. To achieve this, the source code is instrumented at compile time: coverage instrumentation tells the fuzzer which code each input actually reaches, and sanitizers (software libraries compiled into the program with the goal of making it crash more often, i.e., turning silent bugs into visible ones) report errors such as memory corruption. By receiving this information about the program under test, the fuzzing engine can constantly craft more effective test inputs.
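To make the feedback loop a little more concrete, here is a minimal sketch based on clang's documented SanitizerCoverage trace-pc-guard interface, one common way to instrument code for fuzzing (the exact mechanism depends on your toolchain). A real fuzzing engine provides these callbacks itself; this stub merely prints which edges were executed to make the mechanism visible.

```cpp
// Build sketch (assumption): compile your code with
//   clang++ -c -fsanitize-coverage=trace-pc-guard my_code.cpp
// and link this stub in place of a real fuzzing engine.
#include <cstdint>
#include <cstdio>

// Called once at startup with one guard per instrumented edge.
extern "C" void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop) {
  static uint32_t counter = 0;
  if (start == stop || *start) return;  // initialize only once
  for (uint32_t *g = start; g < stop; g++)
    *g = ++counter;                     // give every edge a unique, non-zero ID
}

// Called by the instrumented code every time an edge is executed.
extern "C" void __sanitizer_cov_trace_pc_guard(uint32_t *guard) {
  if (!*guard) return;                  // edge already reported
  printf("covered edge %u\n", *guard);
  *guard = 0;                           // report each edge only once
}
```

A feedback-based fuzzer keeps the inputs that light up new edges and mutates them further, which is what drives code coverage up over time.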

Leveraging the API Documentation to Generate Fuzz Tests

You will most likely need some form of structured input to automatically generate your test harnesses. To determine what kinds of input may be suitable for testing, use the documentation of the Public API. In my experience, most software teams that work on embedded hardware have such documentation stored as a CSV or Excel table. This documentation can be used to automatically create sophisticated fuzz tests without any further adaptations.


Example of Public API documentation in a CSV table

Your fuzzer then starts by calling the functions listed in your Public API documentation in random order, with random parameters. Through instrumentation, it gathers feedback about the covered code. Based on this feedback, it actively adapts and mutates the called functions and parameters to increase code coverage and trigger more interesting program states. This setup allows you to generate test cases you might not have thought of, and that traverse large parts of the tested code. With modern open-source tooling, fuzz testing can be simplified to the point where it only takes a few commands to get a fuzzer up and running in your CLI.
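The following sketch shows what such a generated harness could boil down to, again assuming a libFuzzer-style entry point and LLVM's FuzzedDataProvider. motor_set_speed() and motor_get_status() are hypothetical entries from an API table like the one above, with stand-in implementations so the example is self-contained.

```cpp
#include <fuzzer/FuzzedDataProvider.h>
#include <cstdint>
#include <cstddef>

// Hypothetical Public API functions (stand-ins so the sketch compiles).
extern "C" int motor_set_speed(uint16_t rpm) { return rpm <= 6000 ? 0 : -1; }
extern "C" int motor_get_status(uint8_t channel) { return channel < 4 ? 1 : -1; }

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  FuzzedDataProvider fdp(data, size);

  // Let the fuzzer choose both the call sequence and the parameters; coverage
  // feedback then steers both towards unexplored program states.
  while (fdp.remaining_bytes() > 0) {
    switch (fdp.ConsumeIntegralInRange<int>(0, 1)) {
      case 0:
        motor_set_speed(fdp.ConsumeIntegral<uint16_t>());
        break;
      case 1:
        motor_get_status(fdp.ConsumeIntegral<uint8_t>());
        break;
    }
  }
  return 0;
}
```

A generator would simply emit one switch case per documented function, using the parameter types from the table to decide which values to consume from the fuzz data.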

Industry Standards for Embedded Software Testing in Automotive

The automotive industry is subject to many industry norms. Many of them, such as ISO/SAE 21434 or UNECE WP.29, include requirements for software security testing. A large number of these norms recommend integrating modern fuzz testing into the development process as a measure to comply with their requirements. ISO/SAE 21434 in particular has presented the automotive industry with many new challenges in recent years. Based on our experience with customers, my colleagues created an overview of how modern fuzzing technology can help automotive software teams become ISO/SAE 21434 compliant while building more secure software.


3 Reasons Why You Should Fuzz Embedded Systems

In my experience, fuzzing is one of the most effective ways to test embedded systems, especially because the margin of error is very small in many embedded industries: software defects do not only affect the functionality of systems, they can have a physical impact on our lives (e.g., in automotive brake systems).

1. Increased Code Coverage

By leveraging feedback about the software under test, modern fuzzers can actively craft test inputs that maximize code coverage. Apart from reaching critical parts of the tested code more reliably, this enables highly useful reports that let dev teams know how much of their code was actually executed during a test. This reporting helps identify which parts of the code need additional testing or closer inspection.

2. Detecting Memory Corruption Accurately

Uncovering memory corruption issues is one of the strong suits of feedback-based fuzzing, which makes it highly relevant for memory-unsafe languages such as C/C++ (nonetheless, fuzz testing is also very effective at securing memory-safe languages). By feeding embedded software with unexpected or invalid inputs, modern fuzzing tools can uncover dangerous edge cases and behaviors related to memory handling.
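As an illustration (not taken from a real project), the following deliberately buggy parser shows the kind of memory corruption this combination finds reliably. parse_record() is a hypothetical function, and the build line assumes clang with libFuzzer and AddressSanitizer.

```cpp
// Build sketch (assumption): clang++ -g -fsanitize=fuzzer,address parse_record_fuzzer.cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

static void parse_record(const uint8_t *data, size_t size) {
  uint8_t payload[16];
  if (size < 1) return;
  uint8_t declared_len = data[0];      // length field taken from the input...
  // ...but never checked against the real buffer sizes: a classic buffer
  // overflow. AddressSanitizer aborts and reports it as soon as the fuzzer
  // generates a length larger than the buffers allow.
  memcpy(payload, data + 1, declared_len);
  (void)payload;
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  parse_record(data, size);
  return 0;
}
```

Without the sanitizer, the overflow might corrupt adjacent memory silently; with it, the first input whose length field exceeds the buffer aborts the program and is reported as a finding.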

3. Early Bug Detection

Feedback-based fuzzing tools can be integrated into the development process to test your code automatically as soon as you have an executable program. By leveraging the bug-finding capabilities of fuzzing early, you can find bugs and vulnerabilities before they become a real problem.

Example: If you find a bug in a JSON parser during unit testing, you can most likely fix it in a couple of minutes. This is relatively easy compared to debugging a crash in production that is triggered by one specific user input and only surfaces in one particular component. So do yourself a favor and fix bugs before they pop up in production.

Enterprise Best Practice to Deal With Complexity: CI/CD-Integrated Fuzz Testing

For enterprise projects, I recommend CI/CD-integrated fuzz testing, which allows you and your team to fuzz your code automatically at each pull request. Ideally, such a testing setup is implemented early in the development process to avoid late-stage fixes and to speed up development. In my opinion, Thomas Dohmke, CEO of GitHub, put it best when he said that modern fuzzing tools are "like having an automated security expert who is always by your side".

If you want to find out more about how you can set up fuzz testing for large automotive projects, you can book a personal demo with one of my colleagues or check out one of my recorded live coding sessions on YouTube. In the recording, you will get an impression of the testing methods that you can integrate into your CI/CD flow using the Code Intelligence platform.


 



About the Author

Daniel Teuchert is a Customer Success Engineer at Code Intelligence. He specializes in implementing feedback-based fuzz testing in embedded software development processes.