What is a digital twin, and what can it be for my company?
For which companies or products is a digital twin useful?
What distinguishes the digital twin concept from other IT services?
This Crisp Analyst View provides guidance on one of the most important design elements of Internet of Things (IoT) and Industry 4.0 projects.
Like most technology terms, the “digital twin” haunted the industry for a while before it really gained attention and spread. Some IoT providers suddenly describe everything that involves software, data, and physical devices as a “digital twin”.
So what exactly is the digital twin? The term developed in parallel from two areas of application. On the one hand, it came from production as the digital twin prototype (DTP). The prototype is created as part of product lifecycle management (PLM) when a physical product is planned or ordered, long before the physical twin is even born. The DTP is particularly important for automating manufacturing in Industry 4.0 scenarios. With its help, companies can, for example, switch to individualized production down to a lot size of one. The DTP accompanies the physical part on each machine and controls its processing. At the end of production, the physical twin is born and has received all the properties of the DTP.
Digital twin instances (DTI) behave very differently: they arise as the image of a finished product and reflect its ongoing configuration and operating data. In the automotive industry, the DTP reflects manufacturing, while the DTI covers after-sales processes such as software updates, as well as operating data such as telematics. In addition to DTP and DTI, there are a number of academic definitions of a digital twin:
Academic digital twin definitions
“The digital twin is a set of virtual information constructs that describe a potential or physically existing product from the microscopic to the macroscopic level. Optimally, all information obtained from the physical product can also come from its digital twin.”
“A digital twin is an integrated, multi-physical, multiscale, probabilistic simulation of a vehicle or system under construction, using the best available physical models, sensor updates, fleet history, etc. to reflect the lifecycle of the physical twin.”
"Coupled model of the real machine, which works on a cloud platform and simulates the state, with integrated knowledge both from data-driven analytical algorithms and from other available physical knowledge"
"The Digital Twin is a real representation of all components in the product lifecycle using physical data, virtual data and interaction data between them."
"A dynamic virtual representation of a physical object or system over its entire life cycle using real-time data to enable understanding, learning and thinking."
“Digital twins use a digital copy of the physical system to perform real-time optimization”
"A digital twin is a real-time digital reproduction of a physical device."
"A digital twin is a digital replica of a living or non-living physical entity. By connecting the physical and virtual worlds, data is transferred seamlessly so that the virtual entity can coexist with the physical entity."
In the context of Digital Built Britain, a digital twin is "a realistic digital representation of assets, processes or systems in the manufactured or natural environment".
However, theoretical definitions alone rarely help companies implement a digital twin strategy. CIOs, CDOs, CTOs, digital portfolio managers, product owners and developers are well on their way to becoming frustrated by the “corporate digital twin” rather than euphorically embracing the concept, finding the appropriate technological implementation, and promoting it in the company. What happened?
Digital twins are part of the corporate digital strategy. Ideally, the digital twin serves several applications that interact with a physical product. However, synergies are only achieved when different physical products share a twin infrastructure.
Digital twins require strong digital architecture skills. Many functions could also be developed within individual projects for a specific digital product, but digital twins only leverage significant synergies if you resist doing so. For example, physical connectivity can be abstracted in the twin instead of each application connecting to the device individually.
Digital twins are quickly overloaded and never finished. It is tempting to pack too much into a digital twin approach, and it then never really gets done.
This situation leads to digital twin frustration among product owners and developers of individual digital products. Especially when they are the pioneers in the company, they find it difficult to wait for a “corporate digital twin”. Even if they spend valuable project budget on developing general twin functionality, experienced IoT architects quickly realize that this will often not evolve into a company-wide digital twin standard. The result is incomplete, isolated digital twin solutions for individual applications that meet on the same edge device. Unlike PCs that connect to multiple backend services, there are no people in front of IoT devices who could spot problems caused by multiple backends.
For CIOs, CTOs and CDOs, the digital twin therefore often runs into trouble. As with company-wide middleware in the past, funding is difficult, and a comprehensive digital twin strategy threatens to fail just as the enterprise service bus, the event network, or the corporate data lake did five years ago.
To avoid this dilemma, the digital strategists and implementers of IoT and Industry 4.0 projects should develop a definition of the twin that matches the maturity of the surrounding applications in the portfolio. To position the digital twin, we do not use the academic definitions, but rather the IT infrastructure services known from corporate IT and cloud-native architectures. Based on our experience in technology consulting, we recommend not a verbal description but the following presentation of the added value and the integration with traditional infrastructure services. This makes it particularly clear what the twin does and what it does not do!
The figure shows the range of value a digital twin can add relative to other infrastructure services, as recommended by Crisp Research. Choose the role of your twin between the recommended extremes.
IoT connectivity can be managed and consolidated through the digital twin. Even though we do not count the classic connectivity infrastructure, such as mobile Internet gateways, firewalls or networks, among the functions of the twin, the twin is the interface to connectivity for all other processes and applications. Twins without connectivity abstraction make little sense, except for pure DTPs. Is a device online? When was it last online? How much data volume does the device have left? All of this is information that should be accessible to everyone via the twin. Over-the-air transport processes attach themselves to the twin and transport changes to and from the device. (Capability recommendation 3..5)
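The connectivity abstraction described above can be sketched as a minimal data structure that a twin might expose. This is an illustrative sketch, not a real product API; the class and field names (`TwinConnectivityState`, `data_volume_left_mb`) are assumptions for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class TwinConnectivityState:
    """Connectivity facts the twin exposes to every application,
    so no application has to talk to the device directly."""
    device_id: str
    last_seen: datetime          # timestamp of the last message from the device
    data_volume_left_mb: float   # remaining mobile data budget

    def is_online(self, timeout: timedelta = timedelta(minutes=5)) -> bool:
        # A device counts as online if it reported within the timeout window.
        return datetime.now(timezone.utc) - self.last_seen < timeout

# Applications query the twin instead of the device itself:
state = TwinConnectivityState(
    device_id="gw-4711",  # hypothetical gateway device
    last_seen=datetime.now(timezone.utc) - timedelta(minutes=2),
    data_volume_left_mb=120.5,
)
print(state.is_online())  # True: last contact was 2 minutes ago
```

Any application can answer “is the device online?” from this state object without opening its own connection to the device.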
The digital twin is not classic middleware. Even if the digital twin is quickly presented as an all-rounder, there is still a huge difference to classic middleware such as an enterprise service bus or a queuing system. While applications are welcome to write desired changes and data for the devices into the twin, the reverse path is restricted. The device can mirror data into its twin, but cannot require it to be transported to an ERP system. The twin should not be responsible for pushing data into third-party systems the way classic middleware does. It is better to announce the existence of new data in an event queue. ERP systems or classic middleware subscribe to these events and then transport the data from the twin to the target systems. The twin concentrates on synchronization with the physical world and remains free of transaction logic for the rest of the IT landscape. (Capability recommendation 2..4)
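The announce-and-subscribe pattern above can be sketched in a few lines: the twin only emits an event that new data exists, and a subscriber (standing in for classic middleware or an ERP connector) pulls the data from the twin. All names here are hypothetical, and a simple in-process queue stands in for a real event bus.

```python
import queue

# In-process stand-in for a real event bus (e.g. a message broker).
event_bus: "queue.Queue[dict]" = queue.Queue()

class Twin:
    """Stores mirrored device data and announces changes; it never
    pushes data into third-party systems itself."""
    def __init__(self) -> None:
        self._data: dict[str, dict] = {}

    def mirror(self, device_id: str, payload: dict) -> None:
        # Device mirrors data into its twin; the twin emits an event.
        self._data[device_id] = payload
        event_bus.put({"event": "data_updated", "device_id": device_id})

    def read(self, device_id: str) -> dict:
        return self._data[device_id]

# A subscriber (e.g. middleware in front of an ERP system) reacts to
# the event and pulls the data from the twin on its own responsibility.
twin = Twin()
twin.mirror("gw-4711", {"temperature": 21.3})
event = event_bus.get_nowait()
erp_copy = twin.read(event["device_id"])
```

The transaction logic (where the data goes, retries, mappings) stays with the subscriber, which keeps the twin free of integration concerns.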
The twin should fully describe its devices. Just as the DTP describes the manufacture of a physical product, the DTI instances of operational devices should fully describe their configuration. If, for example, a gateway device or a “sensor box” breaks somewhere in the field, it should be possible to completely restore replacement hardware simply by logging it on to the twin. Just as a new Apple iPhone can reconstruct a user's broken predecessor from iCloud, the same should work in industry with devices and their twins. An additional IoT device management layer then controls changes to the devices via the twin. This includes the distribution of software updates and parameterization. The IoT device management holds the process logic for this; the twin itself has no business logic and is only an executing infrastructure. (Capability recommendation 3..5)
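The iCloud-style restore scenario can be illustrated with a minimal sketch: the twin holds the complete configuration, and replacement hardware reconstructs itself from it at logon. The class names, serial numbers, and configuration keys are invented for illustration.

```python
class Twin:
    """The twin holds the complete, restorable configuration
    of its physical device."""
    def __init__(self, device_id: str, config: dict) -> None:
        self.device_id = device_id
        self.config = dict(config)

class Device:
    """Physical hardware; starts blank and restores itself from the twin."""
    def __init__(self, hardware_serial: str) -> None:
        self.hardware_serial = hardware_serial
        self.config: dict = {}

    def restore_from(self, twin: Twin) -> None:
        # Logging on to the twin is all the replacement hardware needs:
        # the entire configuration is reconstructed from the twin.
        self.config = dict(twin.config)

# A sensor box breaks in the field; its twin survives in the backend.
twin = Twin("sensorbox-17", {"firmware": "2.4.1", "report_interval_s": 60})
replacement = Device(hardware_serial="SN-99821")
replacement.restore_from(twin)
```

Device management would sit on top of this: it writes desired changes (a new firmware version, a new parameter) into the twin, and the synchronization mechanism carries them to the device.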
The twin is as much or as little a data lake as an IoT device is. Even a device does not retain all data, if only for capacity reasons. Good twin architectures hold the same data in the cloud that is also on the device itself, only at higher speeds and always online. The complete history of configurations is often stored in the twin; a complete archive of transaction data, however, would often overload it. In the example of an autonomous vehicle, fully mapping the vehicle's software and configuration data makes sense: this is well-structured data whose entire history fits well in the twin. The telematics data of the current driving situation also belongs in the twin, as many different applications want to access it. The entire telematics history, however, belongs in a dedicated data lake, which may be anonymized and on which machine learning can learn across all vehicles. As in the Crisp Analyst View on Gaia-X, Europeans would operate the digital twin in a highly private edge infrastructure, while the data lake could be better served by a public cloud hyperscaler.
Digital twins should abstract and consolidate analytics. Good twin concepts make things very easy for the software developer. For example, if you want to know the average speed of a vehicle over a certain period, this is a query on the twin. If the period is around 15 minutes, all the data is in the twin and the response is very fast. If the period is two years, the twin instead forwards the query to an external data lake or accesses it directly. If the requested period is only two seconds, however, the twin takes the data from the current telematics stream that has just arrived from the vehicle. The programmer or data scientist thus receives a powerful abstraction of the analytics.
The twin helps with autonomous processes. As the name suggests, autonomous means that a device does certain things on its own, i.e. without constantly talking to the digital twin. The twin helps to synchronize software with the device. This can be individual functions (see AWS Lambda on AWS IoT Greengrass) or entire containers. The twin itself is only a transport and synchronization tool, not a runtime for external business logic.
AI and machine learning do not belong in the twin; like other external logic, the twin only transports their models and data to and from the device.