By Ramon Barakat, Roman Kraus and Martin Schneider, Fraunhofer FOKUS
Recent incidents have shown how a seemingly small memory access bug can cascade into global disruption [1]. With the EU’s Cyber Resilience Act (CRA) and the new Product Liability Directive, manufacturers of “products with digital elements” must deliver security by design, conduct state-of-the-art security testing throughout the lifecycle, and handle vulnerabilities promptly.
Why CRA raises the bar
From 2027, products newly placed on the EU market must meet baseline security and vulnerability-handling requirements as described in our previous blog post [2]. In practice, that means:
- Risk-based engineering and secure-by-default configurations
- Regular, effective security testing, proportionate to risk
- Structured vulnerability handling, coordinated disclosure, and prompt updates (ideally automated)
- Protection against common classes of attacks, including Denial-of-Service
The new Product Liability Directive tightens accountability further: manufacturers can be liable not just for direct defects, but also for damage caused by exploited vulnerabilities. Fines under CRA can reach up to 15 million euros or 2.5% of global turnover. Together, this creates both regulatory and business incentives to elevate testing from “best effort” to evidence-backed, systematic security assurance.
Fuzzing: beyond randomness
Security testing encompasses both static and dynamic analysis. Among the dynamic approaches, fuzzing has become the gold standard for automated security testing and is also part of the security tests that run inside the DOSS Component Tester [3]. Fuzzing is a dynamic testing technique that stimulates interfaces with invalid, semi-valid, and unexpected inputs to trigger faulty behaviour. Modern fuzzers no longer rely on a purely random approach; instead, they incorporate systematic techniques that make the fuzzing process more efficient and effective.
- Coverage-guided fuzzing mutates inputs to reach new code paths efficiently.
- Grammar-/model-based fuzzing crafts inputs aligned with complex formats and protocols, generating data that differs only subtly from valid inputs.
- Stateful fuzzing drives systems through message sequences and internal states to reach deep business logic.
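The coverage-guided loop described above can be sketched in a few lines. The toy target, its branch IDs, and the seed corpus below are all illustrative; a real fuzzer would collect coverage via compiler instrumentation (e.g., AFL-style edge maps) rather than returning it from the target.

```python
import random

def target(data):
    """Toy system under test: returns the set of branch IDs it executed.
    (Stand-in for real coverage instrumentation.)"""
    covered = {"entry"}
    if data.startswith(b"MQTT"):
        covered.add("magic")
        if len(data) > 8:
            covered.add("payload")
            if data[4] == 0xFF:
                covered.add("deep")  # hard-to-reach branch
    return covered

def mutate(data):
    """Random byte-level mutation: flip, insert, or delete one byte."""
    data = bytearray(data)
    op = random.choice(("flip", "insert", "delete"))
    if op == "flip" and data:
        data[random.randrange(len(data))] = random.randrange(256)
    elif op == "insert":
        data.insert(random.randrange(len(data) + 1), random.randrange(256))
    elif op == "delete" and data:
        del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(seeds, iterations=20000):
    corpus = list(seeds)
    global_coverage = set()
    for seed in corpus:
        global_coverage |= target(seed)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        covered = target(candidate)
        if not covered <= global_coverage:  # reached a new branch?
            global_coverage |= covered
            corpus.append(candidate)        # keep the interesting input
    return corpus, global_coverage

corpus, coverage = fuzz([b"MQTT\x00\x00\x00\x00\x00\x00"])
```

The key idea is the feedback loop: only mutants that reach previously unseen branches are added to the corpus, so the search gradually pushes deeper into the target instead of wandering randomly.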
Sanitizers (such as AddressSanitizer) turn otherwise silent memory corruption into immediately detectable crashes, which is one reason fuzzing is such a popular and effective security testing method. The German BSI has published a practical primer on applying fuzzing in Common Criteria evaluations, which is useful guidance even outside certification contexts [4].
Coverage isn’t enough
The most widely used fuzzers measure code coverage as an indicator of a campaign's effectiveness. But coverage only records that a statement was executed, independent of system state or concrete values, while many issues depend on precise values, path order, or system states. One example is CVE-2024-6874, a recent buffer over-read in libcurl that is triggered only when an input string is exactly 256 bytes long. A fuzzing process that optimises for coverage alone can miss such an edge condition.
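The libcurl example illustrates a general pattern: coverage looks identical for almost every input, and only one exact length trips the bug. The hypothetical parser below reproduces that pattern in Python; it is modelled loosely on the CVE-2024-6874 boundary condition and is not the actual libcurl code.

```python
BUF_SIZE = 256

def parse_host(name):
    """Hypothetical parser with an off-by-one boundary bug, in the spirit of
    CVE-2024-6874 (illustrative only -- not the real libcurl code)."""
    buf = [0] * BUF_SIZE
    # The guard rejects inputs *longer* than the buffer, but an input of
    # exactly BUF_SIZE bytes slips through, and the terminator write below
    # then lands one element past the end.
    if len(name) > BUF_SIZE:
        raise ValueError("host name too long")
    for i, b in enumerate(name):
        buf[i] = b
    buf[len(name)] = 0  # IndexError exactly when len(name) == BUF_SIZE
    return buf

# Lengths 1..255 all execute the same statements -- identical coverage:
parse_host(b"a" * 100)
parse_host(b"a" * 255)
# Only this exact length triggers the fault:
try:
    parse_host(b"a" * 256)
    crashed = False
except IndexError:
    crashed = True
```

A coverage-guided fuzzer gains nothing from trying length 256 over length 255, because both exercise the same statements; a value- or boundary-aware strategy is needed to hit the fault condition reliably.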
To improve effectiveness, fuzzing nowadays combines multiple metrics and methods:
- Directed fuzzing prioritizes inputs that reduce distance to specific code regions (e.g., patch sites, suspected hotspots).
- Vulnerability-focused fuzzing uses vulnerability knowledge (e.g., nearness to buffer boundaries) to guide inputs toward fault conditions.
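Directed fuzzing typically scores each input by how close its execution got to a target code region and mutates the closest inputs first (the idea behind tools such as AFLGo). The sketch below uses a hypothetical call graph and made-up function names; real tools compute such distances over instrumented control-flow and call graphs.

```python
# Hypothetical call graph of a system under test (caller -> callees).
CALL_GRAPH = {
    "main": ["parse", "log"],
    "parse": ["decode", "validate"],
    "decode": ["vuln_site"],  # suspected vulnerability location
    "validate": [],
    "log": [],
    "vuln_site": [],
}

def distances_to(target):
    """BFS over the reversed call graph: shortest number of calls from
    each function to the target region."""
    rev = {f: [] for f in CALL_GRAPH}
    for caller, callees in CALL_GRAPH.items():
        for callee in callees:
            rev[callee].append(caller)
    dist, frontier = {target: 0}, [target]
    while frontier:
        nxt = []
        for f in frontier:
            for caller in rev[f]:
                if caller not in dist:
                    dist[caller] = dist[f] + 1
                    nxt.append(caller)
        frontier = nxt
    return dist

DIST = distances_to("vuln_site")

def input_distance(executed_functions):
    """Score an input by the closest-to-target function it executed."""
    reachable = [DIST[f] for f in executed_functions if f in DIST]
    return min(reachable) if reachable else float("inf")

# Inputs annotated with the functions they were observed to execute.
observed = {
    b"AAAA": {"main", "log"},
    b"GET /": {"main", "parse", "validate"},
    b"%x%x": {"main", "parse", "decode"},
}
# Inputs whose executions got closest to vuln_site are mutated first.
queue = sorted(observed, key=lambda i: input_distance(observed[i]))
```

Here the input that already reached `decode` (one call away from the target) is scheduled ahead of inputs that only touched shallow code, concentrating the mutation budget where it matters.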
Figure 1 illustrates the control flow graph of a system under test that contains multiple vulnerabilities (red nodes). The figure contrasts directed, vulnerability-focused fuzzing (blue) with random/coverage-guided fuzzing (green). Blue paths follow constrained, state-aware transitions from the entry point down through deeper layers, converging on specific vulnerabilities (red nodes). In contrast, green paths broadly traverse numerous outer nodes to maximize code coverage but reach inner business logic and terminal states more slowly.

Figure 1: A system graph illustrating the different fuzzing approaches: blue outlines trace targeted, vulnerability-focused fuzzing that follows precise, stateful message sequences and value constraints, while green paths depict random/coverage-only fuzzing spreading broadly across many shallow nodes.
Fuzzing complex, stateful, and encrypted systems
Many systems reject malformed inputs early, long before business logic is exercised. The vulnerability CVE-2024-8376 (Eclipse Mosquitto) demonstrates how a sequence of valid protocol messages can reliably reach a vulnerable state and trigger memory corruption. Likewise, detecting unexpected deviations from the specified state machine can be as valuable as finding a crash: it may indicate a logic bug or a nonconformant implementation. Contemporary fuzzing approaches therefore concentrate on reaching deeper system logic, which requires data-format and protocol awareness.
- Grammar-based fuzzing uses declarative specifications (often available from RFCs) to generate valid and semi-valid inputs.
- Generator-based fuzzing uses imperative builders to encode dependencies that grammars struggle with (e.g., MQTT flags that imply required payloads).
- Protocol fuzzing models message sequences and acceptable transitions. For protocols, it is important to follow the specified message flow in order to reach certain states. Consider encrypted communication: without correctly negotiating keys, you won't reach meaningful encrypted states and cannot test deeper logic.
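The generator-based and stateful ideas above can be combined in one harness. The sketch below builds a simplified MQTT-like CONNECT packet with an imperative generator (encoding the cross-field dependency that setting the Will flag requires will fields in the payload), then keeps that first message valid so the session reaches a connected state before mutating a later SUBSCRIBE packet. The packet layout is deliberately simplified; constants and field order are illustrative, not a full MQTT implementation.

```python
import random
import struct

def mqtt_connect(client_id, will_topic=None):
    """Generator-based builder for a simplified MQTT-style CONNECT packet.
    It encodes a dependency a plain grammar struggles with: setting the
    Will flag *requires* will-topic and will-message fields in the payload."""
    flags = 0x02  # clean session
    payload = struct.pack(">H", len(client_id)) + client_id
    if will_topic is not None:
        flags |= 0x04  # Will flag set ...
        payload += struct.pack(">H", len(will_topic)) + will_topic
        payload += struct.pack(">H", 3) + b"bye"  # ... implies a will message too
    var_header = (struct.pack(">H", 4) + b"MQTT"     # protocol name
                  + bytes([5, flags])                 # version + connect flags
                  + struct.pack(">H", 60))            # keep-alive
    body = var_header + payload
    return bytes([0x10, len(body)]) + body  # CONNECT packet type + remaining length

def fuzz_session(rng):
    """Stateful fuzzing: always send a *valid* CONNECT first so the broker
    reaches a connected state, then mutate only the deeper SUBSCRIBE packet."""
    connect = mqtt_connect(b"fuzzer", will_topic=b"t")
    subscribe = (bytes([0x82, 7])          # SUBSCRIBE type + remaining length
                 + struct.pack(">H", 1)    # packet identifier
                 + struct.pack(">H", 2) + b"a/"  # topic filter
                 + b"\x00")                # requested QoS
    mutated = bytearray(subscribe)
    mutated[rng.randrange(len(mutated))] = rng.randrange(256)
    return [connect, bytes(mutated)]

session = fuzz_session(random.Random(1))
```

Because the CONNECT stays well-formed, the fuzzer consistently gets past input validation and exercises the subscription-handling logic, which is exactly where stateful bugs like the Mosquitto case live.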
Conclusion
Fuzzing has become a key part of modern security testing, and stricter rules on product liability make it more important than ever. However, the effectiveness of a fuzzing campaign depends on numerous factors beyond the extent of system coverage.
Choose the right fuzzing mix: Combine coverage-guided fuzzers with grammar- and stateful techniques for complex formats and protocols. Use vulnerability-focused or directed fuzzers when you have hypotheses (suspected vulnerability regions, static analysis findings, security patches that need to be validated). Direct your fuzzing using risk and code knowledge.
Done right, fuzzing helps ensure a secure product and compliance with new regulations.
References:
[1] https://dossproject.eu/a-simple-coding-mistake-led-to-the-crowdstrike-outage-well-this-is-not-surprising/
[2] https://dossproject.eu/why-the-cyber-resilience-act-cra-matters-for-iot-manufacturers/
[3] https://dossproject.eu/the-doss-component-tester-comprehensive-security-testing-of-iot-devices/
[4] https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Zertifizierung/Interpretationen/AIS_50_Fuzzing_Primer_1_6_final_e_pdf.pdf?__blob=publicationFile&v=3
