
Conceptual overview of the DOSS Digital Cybersecurity Twin Framework, part 1.

May 29, 2025 (updated June 5, 2025) · Insights

Within the DOSS IoT Supply Trust Chain (STC) Concept, we apply a digital twin framework, called the Digital Cybersecurity Twin (DCT). The DCT enables us to perform the automated vulnerability scanning and penetration testing of an IoT system in a virtualized environment, on a digital twin. This way, we can identify weaknesses at the design stage before updates or modifications are implemented. This proactive approach improves operational security and ensures that the entire IoT ecosystem is hardened against threats.

We assume that the digital twin is developed in parallel with the real system and that its low-level representation is made available to our framework in the form of Infrastructure-as-Code (IaC) by the system developer/integrator. Our digital twin framework also expects to receive from the system developer/integrator all proprietary software components of the system (needed for instantiating the digital twin of the system in the virtualized environment), some high-level system models (e.g., static system configuration, data flows, error propagation rules, etc.), and all relevant assumptions and system requirements. Once these inputs are received, our framework performs the following steps fully automatically (see also Figure 1 for illustration):

  1. checks the consistency of the inputs received from the system developer/integrator;
  2. identifies system components and retrieves their DSPs (Device Security Passport – a security descriptor file) that include the results of some tests that have been performed at component level;
  3. performs error propagation analysis on the system and determines impactful attack goals that, if reached, violate important system requirements;
  4. generates attack trees for those attack goals;
  5. generates executable test cases and a test plan from the attack trees to catalogue all the test cases to be executed;
  6. augments the IaC representation of the digital twin with appropriate discovery, testing and observational tools;
  7. launches the digital twin, together with the relevant discovery, testing and observational tools, in a virtualized execution environment;
  8. executes system discovery, vulnerability scanning and penetration testing of the system according to the discovery and test plan using the corresponding discovery, testing and observational tools;
  9. identifies security weaknesses by analyzing the discovery and test results, as well as provides explanations of the discovered weaknesses and recommendations to the system developer/integrator on how to fix them.
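The nine steps above can be sketched as a simple pipeline of stage functions operating on a shared job context. This is a minimal illustrative sketch: all function names, context keys and placeholder values are assumptions, not the framework's real API.

```python
from typing import Callable, Dict, List

# Each stage takes the job context dict, adds its result, and returns it.
# The bodies are stubs standing in for the real analyses.

def check_consistency(ctx):            # step 1: validate developer inputs
    ctx["consistent"] = True
    return ctx

def retrieve_dsps(ctx):                # step 2: Device Security Passports
    ctx["dsps"] = ["sensor.dsp", "gateway.dsp"]
    return ctx

def find_attack_goals(ctx):            # step 3: error propagation analysis
    ctx["goals"] = ["exfiltrate-data"]
    return ctx

def build_attack_trees(ctx):           # step 4
    ctx["trees"] = {g: [] for g in ctx["goals"]}
    return ctx

def generate_test_plan(ctx):           # step 5
    ctx["plan"] = list(ctx["trees"])
    return ctx

def augment_iac(ctx):                  # step 6: inject tooling into the IaC
    ctx["iac_tools"] = ["discovery", "testing", "observation"]
    return ctx

def launch_twin(ctx):                  # step 7: bring up the digital twin
    ctx["twin_up"] = True
    return ctx

def run_discovery_and_tests(ctx):      # step 8
    ctx["results"] = {t: "executed" for t in ctx["plan"]}
    return ctx

def report_weaknesses(ctx):            # step 9: feedback for the developer
    ctx["report"] = f"{len(ctx['results'])} finding(s)"
    return ctx

PIPELINE: List[Callable] = [
    check_consistency, retrieve_dsps, find_attack_goals, build_attack_trees,
    generate_test_plan, augment_iac, launch_twin, run_discovery_and_tests,
    report_weaknesses,
]

def run_testing_job(iac_files, models, requirements):
    """Drive one testing job through all nine steps in order."""
    ctx = {"iac": iac_files, "models": models, "requirements": requirements}
    for stage in PIPELINE:
        ctx = stage(ctx)
    return ctx
```

The key design point is that every stage consumes the outputs of earlier stages through the shared context, which is why the framework can run the whole sequence fully automatically once the inputs are in place.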

Figure 1: High-level overview of our approach to build a DCT

A core concept of our DCT framework is the testing job. A testing job is created when new input is provided by the system developer/integrator, containing the low-level descriptions of the IoT system in the form of IaC, the high-level models of the system, as well as the system requirements that are to be met. The testing job is closed when the security testing of the system is completed and output is produced for the system developer/integrator. This output contains a report on the discovered vulnerabilities and recommended fixes, but it also includes additional information useful for the system developer/integrator, such as the list of impactful attack goals identified and their corresponding attack trees, as well as the list of test cases executed and their results (i.e., evidence and artefacts collected during testing).
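As a rough sketch, a testing job can be modelled as a record that opens with the developer/integrator's inputs and closes with the produced outputs. The field names below are illustrative assumptions, not the framework's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestingJob:
    """Hypothetical testing-job record: inputs on creation, outputs on close."""
    iac_files: list                    # low-level IaC descriptions of the system
    models: dict                       # high-level system models
    requirements: list                 # system requirements to be met
    status: str = "open"
    attack_goals: list = field(default_factory=list)
    attack_trees: dict = field(default_factory=dict)
    test_results: dict = field(default_factory=dict)   # evidence and artefacts
    report: str = ""                   # vulnerabilities and recommended fixes

    def close(self, report: str):
        """Closing the job attaches the final report for the developer/integrator."""
        self.report = report
        self.status = "closed"
```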

Architecture

In terms of architecture, we envision the DCT framework as a set of interacting modules (see Figure 2 for illustration). Life cycle phase modules (labeled from 1.a to 5.b) will perform the tasks of the different life cycle phases of the testing jobs. A distinguished orchestrator module will keep track of the state of the testing jobs and schedule the invocation of the life cycle phase modules. Some utility modules will help the life cycle phase modules by providing a graph database (DB) for permanent storage of system representations, models, analysis results, testing job state, etc. and a file system (FS) for permanent storage of input files, as well as data and artefacts collected during discovery and testing. Finally, a CI/CD module will be responsible for interacting with the Virtual Execution Environment, where the digital twin of the tested system will be instantiated, and the test cases will be executed.
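Since the orchestrator's core duty is tracking job state and scheduling the life cycle phase modules in order, its behaviour can be sketched as a small state machine. The phase labels follow Figure 2 (1.a to 5.b); the class and method names are assumptions for illustration.

```python
# Phase labels of the life cycle phase modules, in execution order.
PHASES = ["1.a", "1.b", "2.a", "2.b", "3.a", "3.b", "3.c",
          "4.a", "4.b", "5.a", "5.b"]

class Orchestrator:
    """Hypothetical orchestrator: tracks each job's progress through the phases."""

    def __init__(self):
        self.jobs = {}                 # job id -> index of the next phase to run

    def create_job(self, job_id):
        self.jobs[job_id] = 0

    def next_phase(self, job_id):
        """Return the next phase module to invoke, or None when the job is done."""
        i = self.jobs[job_id]
        if i >= len(PHASES):
            return None
        self.jobs[job_id] = i + 1
        return PHASES[i]
```

In the real framework the orchestrator would invoke the corresponding module at each step and persist the job state in the graph DB, so that jobs survive restarts and can be resumed mid-life-cycle.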

Figure 2: Architecture of the DCT framework

Life cycle phase modules of the testing job

Due to the complexity and wide range of responsibilities entailed by the conceptual design of the DCT, we use a modular approach to split the life cycle into distinct phases.

1. Creation
a. Receiving inputs from the system developer/integrator
The life cycle starts when the developers upload key infrastructure and model files (e.g., Ansible, ArchiMate). This step creates a new “testing job” that is stored in the DB, while the raw files are safely stored on the FS.
b. Processing inputs and creating internal representations of the system to be tested
The system interprets these inputs to generate Terraform files and stores the internal representation of the high-level models in the DB. At this point, Terraform and Ansible can be used to create an authentic replica of the original environment. Consistency checks between the high-level models and the IaC files verify that everything aligns. A GitLab CI/CD project then spins up a new digital twin, laying the groundwork for discovery and testing.
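The consistency check in phase 1.b can be illustrated with a minimal sketch: compare the components named in the high-level models against the hosts declared in the IaC inventory and report anything that does not align. The input shapes (plain name lists) are an assumption for illustration.

```python
def check_consistency(model_components, iac_hosts):
    """Report mismatches between high-level model components and IaC hosts."""
    missing_in_iac = sorted(set(model_components) - set(iac_hosts))
    missing_in_model = sorted(set(iac_hosts) - set(model_components))
    return {
        "consistent": not missing_in_iac and not missing_in_model,
        "missing_in_iac": missing_in_iac,       # modelled but never provisioned
        "missing_in_model": missing_in_model,   # provisioned but undocumented
    }
```

A component present in the IaC but absent from the models is just as suspicious as the reverse: an undocumented host is an untested, unanalysed attack surface.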

2. Discovery
a. Generating a system discovery plan
This module generates a tailored system discovery plan. It selects appropriate tools and extends the Ansible files to inject discovery tools into the environment.
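Plan generation might look roughly like the following sketch: map each discovered component type to a discovery tool and emit an extra Ansible-style play per component. The tool mapping and play structure are illustrative assumptions, not the framework's actual templates.

```python
# Hypothetical mapping from component type to a suitable discovery tool.
TOOL_FOR_TYPE = {"gateway": "nmap", "broker": "mqtt-scan", "camera": "nmap"}

def build_discovery_plan(components):
    """Emit one Ansible-style play per component, deploying a discovery tool."""
    plays = []
    for name, ctype in components.items():
        tool = TOOL_FOR_TYPE.get(ctype, "nmap")   # fall back to a generic scanner
        plays.append({
            "name": f"deploy {tool} against {name}",
            "hosts": name,
            "roles": [f"discovery/{tool}"],
        })
    return plays
```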
b. Execution of the system discovery plan
The system takes that plan and executes it. A temporary environment is created via the Virtual Execution Environment (VEE), tools are deployed, and valuable system insights are harvested.

3. Analysis
a. Analysis of the discovery results
With the new discovery information in hand, we can provide a more realistic view of the system to the integrators and system developers.
b. Error propagation analysis for identification of impactful attack goals
This module performs an impact analysis to find which assets or paths are most attractive to attackers.
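One simple way to picture error propagation analysis is as a reachability question over the data-flow graph: a goal of compromising some component is impactful if the resulting error can propagate to a component that carries an important system requirement. The graph shape and the hypothetical function names below are assumptions for illustration.

```python
from collections import deque

def impacted(start, flows):
    """BFS over data-flow edges: all components an error at `start` can reach."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in flows.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def impactful_attack_goals(flows, critical):
    """Goals 'compromise X' whose propagated error reaches a critical component."""
    return sorted(c for c in flows if impacted(c, flows) & set(critical))
```

Under this view, even a lowly sensor becomes an impactful attack goal if its errors propagate, hop by hop, to a requirement-bearing component.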
c. Generation of attack trees for the identified attack goals
This module builds attack trees, providing a visual and structural representation of potential attack paths for each goal.
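A common way to represent such trees is with AND/OR refinement nodes: the root is the attack goal, inner nodes refine it into sub-goals, and leaves are concrete attack steps. The sketch below is a generic illustration of that structure, not the DCT's internal representation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Attack-tree node: a goal (AND/OR over children) or a leaf attack step."""
    label: str
    gate: str = "LEAF"                 # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)

def reachable(node, feasible_leaves):
    """Is the goal at `node` achievable, given which leaf steps are feasible?"""
    if node.gate == "LEAF":
        return node.label in feasible_leaves
    results = [reachable(c, feasible_leaves) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)
```

Evaluating the tree bottom-up like this is exactly what makes attack trees useful for test generation: each leaf corresponds to something that can be tried against the digital twin.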

4. Testing
a. Generation of executable test cases and a testing plan
The system generates executable test cases from the attack trees and extends existing Ansible templates to deploy observation and test tools on the digital twin.
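The translation from attack trees to a test plan can be sketched as enumerating each tree's leaf attack steps and mapping every step to an executable playbook. The nested-dict tree shape, identifier format and `tests/...` playbook paths are illustrative assumptions.

```python
def leaves(tree):
    """Collect leaf attack steps from a nested {"step", "children"} tree."""
    if not tree["children"]:
        return [tree["step"]]
    out = []
    for child in tree["children"]:
        out += leaves(child)
    return out

def make_test_plan(trees):
    """One executable test case per leaf attack step, across all attack trees."""
    plan = []
    for goal, tree in trees.items():
        for i, step in enumerate(leaves(tree), 1):
            plan.append({
                "id": f"{goal}-{i:02d}",
                "step": step,
                "playbook": f"tests/{step}.yml",   # hypothetical Ansible test role
            })
    return plan
```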
b. Execution of the test cases according to the test plan
This module runs those tests against the digital twin. Results and artefacts are saved for review, and the environment is reset to ensure clean and reproducible tests.

5. Feedback
a. Analysis of test results and collected artefacts
The system analyses test results to understand vulnerabilities and deviations from expected behaviour.
b. Generation of feedback to the system developer/integrator (i.e., vulnerabilities discovered, recommendations for fixing them, impactful attack goals, attack trees, test cases executed, test results and artefacts)
In this module, those insights are turned into structured feedback and recommendations, made accessible through an API and dashboard.
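The structured feedback itself might be assembled along these lines: gather the attack goals, attack trees, executed test cases and any vulnerable outcomes with their recommended fixes into one machine-readable document that the API and dashboard can serve. All field names and the result shape are assumptions for illustration.

```python
import json

def build_feedback(goals, trees, results):
    """Assemble the developer/integrator feedback as a JSON document."""
    findings = [r for r in results if r["outcome"] == "vulnerable"]
    return json.dumps({
        "attack_goals": goals,
        "attack_trees": trees,
        "test_cases": [r["id"] for r in results],       # everything executed
        "vulnerabilities": [
            {"id": r["id"], "recommendation": r.get("fix", "n/a")}
            for r in findings
        ],
    }, indent=2)
```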

In the next Insight post, we shall introduce additional modules of the DCT: the Orchestrator; and the utility modules: the Database, the File Storage, the CI/CD and the Virtual Execution Environment.
