What is a digital twin, and what can it be for my company?
For which companies or products is a digital twin useful?
What distinguishes the digital twin concept from other IT services?
This Crisp Analyst View provides guidance on one of the most important design elements of Internet of Things (IoT) and Industry 4.0 projects.
Like most technology terms, the “digital twin” haunted the industry for a while before it really gained attention and spread. Some IoT providers now suddenly describe everything that involves software, data and physical devices as a “digital twin”.
So what exactly is the digital twin? The term developed in parallel from two areas of application. On the one hand, it came from production as the digital twin prototype (DTP). The prototype is created as part of product lifecycle management (PLM) when a physical product is planned or ordered, long before the physical twin is even born. The DTP is particularly important for manufacturing automation in Industry 4.0 scenarios. With its help, companies can, for example, switch to individual production with a lot size of one. The DTP travels with the physical part to each machine and controls its processing. At the end of production, the physical twin is born and has received all the properties of the DTP.
Digital twin instances (DTIs) behave very differently: they arise from the image of a finished product and reflect its ongoing configuration and operating data. In the automotive industry, the DTP reflects manufacturing, while the DTI reflects after-sales processes such as software updates and operating data such as telematics. In addition to DTP and DTI, there are a number of academic definitions of a digital twin:
Academic digital twin definitions
“The digital twin is a set of virtual information constructs that describe a potential or physically existing product from the microscopic to the macroscopic level. Optimally, all information obtained from the physical product can also come from its digital twin.”
“A digital twin is an integrated, multi-physical, multiscale, probabilistic simulation of a vehicle or system under construction, using the best available physical models, sensor updates, fleet history, etc. to reflect the lifecycle of the physical twin.”
"Coupled model of the real machine, which works on a cloud platform and simulates the state, with integrated knowledge both from data-driven analytical algorithms and from other available physical knowledge"
"The Digital Twin is a real representation of all components in the product lifecycle using physical data, virtual data and interaction data between them."
"A dynamic virtual representation of a physical object or system over its entire life cycle using real-time data to enable understanding, learning and thinking."
“Digital twins use a digital copy of the physical system to perform real-time optimization.”
"A digital twin is a real-time digital reproduction of a physical device."
"A digital twin is a digital replica of a living or non-living physical entity. By connecting the physical and virtual worlds, data is transferred seamlessly so that the virtual entity can coexist with the physical entity."
In the context of Digital Built Britain, a digital twin is "a realistic digital representation of assets, processes or systems in the manufactured or natural environment".
However, theoretical definitions alone rarely help companies implement a digital twin strategy. CIOs, CDOs, CTOs, digital portfolio managers, product owners and developers are well on their way to being frustrated by the “corporate digital twin” rather than euphorically embracing the concept, choosing the appropriate technological implementation and promoting it in the company. What happened?
Digital twins are part of the corporate digital strategy. Ideally, the digital twin serves several applications that interact with a physical product. However, synergies are only achieved when different physical products share a twin infrastructure.
Digital twins require strong digital architecture skills. Many functions could also be developed within individual projects for a specific digital product, but digital twins only leverage significant synergies if you resist doing so. For example, physical connectivity can be abstracted in the twin instead of each application connecting to the device itself.
Digital twins are quickly overloaded and never finished. It is easy to pack too much into a digital twin approach, and then it never really gets done.
This situation leads to digital twin frustration among product owners and developers of individual digital products. Especially when they are pioneers in the company, they find it difficult to wait for a “corporate digital twin”. Even if they use valuable project budget to develop general twin functionality, experienced IoT architects quickly realize that it will often not evolve into a corporate-standard digital twin. The result is incomplete, isolated digital twin solutions for individual applications that meet on the same edge device. And unlike with PCs that connect to multiple backend services, there are no people sitting in front of IoT devices who would notice problems with multiple backends.
For CIOs, CTOs and CDOs, the digital twin thus often gets into trouble. As with company-wide middleware in the past, funding is difficult, and a comprehensive digital twin strategy threatens to fail just as the enterprise service bus, the event network or the corporate data lake did five years ago.
To avoid this dilemma, the digital strategists and implementers of IoT and Industry 4.0 projects should develop a definition of the twin that matches the maturity of the surrounding applications in the portfolio. To position the digital twin, we use not the academic definitions but the IT infrastructure services known from corporate IT and cloud-native architectures. Based on our experience in technology consulting, we recommend not a verbal description but the following presentation of the added value and the integration with traditional infrastructure services. It makes particularly clear what the twin does and what it does not do.
The figure shows the range of value a digital twin can add relative to the other infrastructure services, as recommended by Crisp Research. Choose the role of your twin between the recommended extremes.
IoT connectivity can be managed and consolidated through the digital twin. Even though we do not count classic connectivity infrastructure, such as mobile Internet gateways, firewalls or networks, among the twin's functions, the twin is the connectivity interface for all other processes and applications. Twins without connectivity abstraction make little sense, except for pure DTPs. Is a device online? When was it last online? How much data volume does the device have left? All of this is information that should be accessible to everyone through the twin. Over-the-air transport processes attach to the twin and carry changes to and from the device. (Capability recommendation 3..5)
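As a minimal sketch of this connectivity abstraction, consider a twin object that the connectivity layer updates with heartbeats and that all applications query instead of pinging the device themselves. All class and field names here are illustrative assumptions, not taken from any specific twin product:

```python
import time

class DeviceTwin:
    """Minimal sketch: the twin answers connectivity questions
    (online? last seen? data volume left?) on behalf of the device."""

    def __init__(self, device_id, offline_after_s=300):
        self.device_id = device_id
        self.offline_after_s = offline_after_s  # silence threshold in seconds
        self.last_seen = None                   # epoch seconds of last heartbeat
        self.data_volume_left_mb = None

    def heartbeat(self, data_volume_left_mb, now=None):
        """Called by the connectivity layer whenever the device reports in."""
        self.last_seen = now if now is not None else time.time()
        self.data_volume_left_mb = data_volume_left_mb

    def is_online(self, now=None):
        """Applications ask the twin, never the device itself."""
        if self.last_seen is None:
            return False
        now = now if now is not None else time.time()
        return (now - self.last_seen) <= self.offline_after_s
```

Every application that needs to know whether the device is reachable calls `is_online()` on the twin; only the over-the-air transport processes talk to the device directly.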
The digital twin is not a classic middleware. Even if the digital twin is quickly portrayed as an all-rounder, there is still a huge difference from classic middleware such as an enterprise service bus or a queuing system. While applications are welcome to write desired changes and data for the devices into the twin, the reverse direction is restricted. The device can mirror data into its twin, but it cannot require that data to be transported to an ERP system. The twin should not be responsible for pushing data into third-party systems like classic middleware. It is better to announce the existence of new data in an event queue. ERP systems or classic middleware subscribe to these events and then transport the data from the twin to the target systems. The twin concentrates on synchronization with the physical world and remains free of transaction logic for the rest of the IT landscape. (Capability recommendation 2..4)
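The announce-and-pull pattern described above can be sketched with a toy publish/subscribe queue. The twin only publishes that new data exists; subscribers such as an ERP connector fetch the data from the twin themselves. Names and topics are hypothetical:

```python
from collections import defaultdict

class EventQueue:
    """Toy publish/subscribe queue standing in for a real event network."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

class Twin:
    """The twin announces new data; it never pushes into target systems."""
    def __init__(self, device_id, queue):
        self.device_id = device_id
        self.queue = queue
        self.state = {}

    def mirror_from_device(self, key, value):
        self.state[key] = value
        # Announce the existence of new data; subscribers pull it themselves.
        self.queue.publish("twin.updated",
                           {"device": self.device_id, "key": key})
```

An ERP connector would subscribe to `twin.updated` and read the changed value from the twin on its own schedule, keeping transaction logic out of the twin.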
The twin should fully understand devices. Just as the DTP describes the manufacture of a physical product, the DTI instances of operational devices should fully understand their configuration. If, for example, a gateway device or a “sensor box” breaks somewhere in the field, it should be possible to fully restore replacement hardware simply by logging it on to the twin. Just as a new Apple iPhone can reconstruct a user's broken predecessor from iCloud, it should also work in industry with devices and their twins. An additional IoT device management then controls changes to the devices via the twin. This includes the distribution of software updates and parameterization. The IoT device management holds the process logic for this; the twin itself has no business logic and is only an executing infrastructure. (Capability recommendation 3..5)
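A bare-bones sketch of this restore capability, with assumed names: the twin keeps the full current configuration plus its history, so replacement hardware can be rebuilt entirely from the twin.

```python
class ConfigTwin:
    """Sketch: the twin holds the complete device configuration so that
    replacement hardware can be restored just by logging on."""

    def __init__(self):
        self.config = {}    # current desired configuration
        self.history = []   # all historical configurations

    def update_config(self, new_config):
        """Device management writes changes here, never to the device."""
        self.history.append(dict(self.config))
        self.config = dict(new_config)

    def restore(self):
        """Called when fresh hardware logs on under the same identity."""
        return dict(self.config)
```

IoT device management would call `update_config` for software updates and parameterization; the twin merely stores and hands out state, keeping business logic outside.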
The twin is as much or as little a data lake as an IoT device. Even the device itself does not retain all data, if only for capacity reasons. Good twin architectures hold the same data in the cloud that is also on the device itself, only at higher speeds and always online. The entire history of configurations is often stored in the twin. A complete archive of transaction data, however, would often overload a twin. In the example of an autonomous vehicle, completely mapping the vehicle's software and configuration data makes sense. This is well-structured data whose entire history fits well in the twin. The telematics data of the current driving situation also belongs in the twin, as many different applications want to access it. The entire telematics history, however, belongs in a dedicated data lake, which may be anonymized and on which machine learning can train across all vehicles. As in the Crisp Analyst View on Gaia-X, Europeans would operate the digital twin in a highly private edge infrastructure, while the data lake could be better served by a public cloud hyperscaler.
Digital twins should abstract and consolidate analytics. Good twin concepts make life very easy for the software developer. For example, if you want to know the average speed of a vehicle over a certain period, that is a query on the twin. If the period is, say, 15 minutes, all data is in the twin and the response is very fast. If the period is two years, the twin instead forwards the query to an external data lake or accesses it directly. If, however, the requested period is two seconds, the twin fetches the data in near real time from the telematics stream that has just arrived from the vehicle. The programmer or data scientist thus gets a powerful analytics abstraction.
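The routing decision this paragraph describes can be written as a single dispatch function. The thresholds below are illustrative assumptions, not values from any real twin product:

```python
def route_speed_query(window_s, stream_window_s=10, twin_window_s=3600):
    """Sketch: decide where an average-speed query over the last
    `window_s` seconds should be answered.

    - very short windows: answer from the live telematics stream
    - windows within the twin's hot retention: answer from the twin
    - anything longer: forward to the external data lake
    """
    if window_s <= stream_window_s:
        return "live-stream"
    if window_s <= twin_window_s:
        return "twin"
    return "data-lake"
```

The developer always asks the twin one question; the twin decides transparently whether the stream, its own hot storage, or the data lake answers it.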
The twin helps with autonomous processes. As the name suggests, autonomous means that a device does certain things on its own, i.e. without constantly talking to its digital twin. The twin helps to synchronize software with the device. These can be individual functions (see AWS Lambda on Greengrass) or entire containers. The twin itself is only a transport and synchronization tool, not a runtime for external business logic.
AI and machine learning do not belong in the twin; it only transports models. Just as described above for autonomous processes in general, the twin itself should not contain a runtime environment for machine learning. However, the digital twin is very good at transporting learning models from the cloud to the device, and the device pushes local learning results back into the cloud via the twin. All of this can take place asynchronously and offline.
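A sketch of this transport-only role, with hypothetical names: the twin buffers model versions and local learning results in both directions, but never executes a model itself.

```python
class ModelTransportTwin:
    """Sketch: the twin transports ML models, it does not run them."""

    def __init__(self):
        self.cloud_model = None    # model version last pushed by the cloud
        self.device_model = None   # version currently on the device
        self.local_results = []    # learning results pushed back by the device

    def push_model(self, model_version):
        """Cloud side; may happen while the device is offline."""
        self.cloud_model = model_version

    def device_sync(self):
        """Device calls this when it comes online; returns a new model
        version if the device is outdated, else None."""
        if self.cloud_model != self.device_model:
            self.device_model = self.cloud_model
            return self.cloud_model
        return None

    def report_local_learning(self, result):
        """Device pushes local learning results; the twin only relays."""
        self.local_results.append(result)
```

Because both directions go through stored state, cloud and device never need to be online at the same time, matching the asynchronous, offline-capable behavior described above.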
Process engines do not belong in the digital twin. Business logic is mapped in processes, and that fundamentally does not belong in the twin as a generic infrastructure. Only in cases where processes run both in the cloud and on the device can it make sense to transport process flows, like learning models or autonomous code, via the twin. Crisp Research, however, warns against considering a process engine itself part of the twin service.
Digital twins should talk to event networks, but not be one themselves. If a small device, such as a smoke detector, runs out of battery, it quickly writes this event into the twin before it goes offline. It then makes sense for the twin to write this event into an event queue of an event network. The event logic, for example the correlation of events, belongs in a dedicated event network, not in the twin itself. In the example, the event network would determine that a neighboring smoke detector has also failed, which means that reliable alarms in that room are no longer possible.
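The smoke-detector correlation is logic that would live in the event network, not in any twin. A toy version of that check, with assumed names:

```python
def room_alarm_secure(room_detectors, failed_detectors):
    """Sketch of correlation logic run by the event network (never by
    the twin): a room is still securely alarmed as long as at least one
    of its smoke detectors has not failed."""
    return any(d not in failed_detectors for d in room_detectors)
```

Each twin only reports its own device's low-battery event; only the event network sees all detectors of a room and can conclude that the room is no longer covered.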
Product lifecycle management (PLM) data can fit in digital twin prototypes (DTPs). PLM data such as CAD designs or parts lists for the mechanical assembly of a product fit well in a DTP. However, Crisp Research warns against mixing the DTP and the further lifecycle after physical completion (DTI) into one twin. In operation, the data often does not belong to the manufacturer of the hardware, and there is a natural boundary between DTP and DTI. Data for the software maintenance of a device, as described above, belongs in the DTI, but only minimal hardware data is required for this.
An important task of the CDO is to position the digital twin capabilities within this useful added value. Of course, this depends very much on the desired digital products or the digital add-ons to physical products. The PLM capability in the figure above has the widest range: it is maximal for a DTP, while a DTI supporting the further software lifecycle requires only minimal hardware data. The value of the ten twin capabilities may develop over time.
How the CDO avoids the digital twin dilemma:
Positioning the digital twin correctly in the company and developing it over time is one of the most important measures to avoid the frustration described above. If you leave the business units and individual digital projects to themselves, you will hardly realize the synergies of multiple applications using the same twin infrastructure and exchanging information through it. Conversely, if you prescribe the use of a corporate twin for individual projects, you will likely fail to meet expectations quickly enough. We recommend the following steps:
Make the digital twin a top priority! Larger companies have many digital add-ons or digital products with varying digital intensity from every corporate area. Synchronize these in a digital portfolio management. The digital twin itself, however, should be driven by the CDO together with the CIO. Without this management attention, the effort is risky.
The digital twin needs its own roadmap. Use the digital portfolio to understand the requirements of the individual projects. Structured requirements management moderates the expectations of the business units. Start with a simple twin that at least maps device identities. Then communicate the twin roadmap for the gradual expansion of the ten twin capabilities.
Develop the digital twin for large added value. Digital products without a digital twin mean full-stack development with a lot of effort. With a well-developed digital twin, however, a digital project may only need to provide a mobile app as a twin front end to be a complete product. Think of a remote charging app for an electric car. Suddenly this becomes extremely easy because all the data in the twin is up to date. The app simply writes a charging stop into the twin, which synchronizes with the vehicle and ends the physical charging.
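The charging example can be sketched with the desired/reported state split that twin architectures commonly use. The field names are illustrative assumptions:

```python
class VehicleTwin:
    """Sketch: an app front end only writes desired state into the twin;
    the twin synchronizes it to the vehicle when it is reachable."""

    def __init__(self):
        self.desired = {}   # what applications want the vehicle to do
        self.reported = {}  # what the vehicle last confirmed

    def app_set(self, key, value):
        """The mobile app's entire job, e.g. requesting a charging stop."""
        self.desired[key] = value

    def vehicle_sync(self):
        """The vehicle pulls the desired state when online and reports
        back its actual state; here simplified to instant compliance."""
        self.reported = dict(self.desired)
        return self.reported
```

The app never talks to the vehicle: it writes `charging = "stop"` into the twin, and the twin's synchronization with the vehicle ends the physical charging, which is what makes such an app so cheap to build on a mature twin.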
Make platform decisions based on requirements engineering. Even the out-of-the-box digital twins of the three hyperscalers Amazon, Google and Microsoft are comparatively simple. For example, none of the out-of-the-box twins can map a consistent network of embedded controllers behind a device, as is required to control a vehicle or a complex industrial system. Therefore, do not make premature and unnecessary platform decisions. You have to develop the truly differentiating elements of a twin yourself. The technology required for this is generally available from all three market leaders.
Develop a microservice architecture around the digital twin. Ideally, the twin itself has no business logic. It acts like a mixture of middleware, connectivity management, analytics framework and other services in a landscape of microservices. The service design of a digital twin architecture would go beyond the scope of this Analyst View; we will address this topic in a dedicated Analyst View.
Crisp Research has accompanied several companies in recent years in creating their digital twin strategy, technology and architecture. The balance between corporate governance and local agility is, as always, a key success factor in IT strategy. For example, it is perfectly fine if individual applications initially get telematics or sensor data directly from vehicles or buildings, as long as they at least take the device identities from the twin. Later, when more than two consumers use the same transaction data, the twin should consolidate it.
The synergies can only be leveraged if the CxOs can actively identify the stakeholders in the corporate divisions and motivate them to work together on requirement management and funding. The digital twin will then not be a “Mission Impossible” for a company, but a cross-sectional service that is visible to everyone in the digital product and investment portfolio. Crisp Research is convinced that such a strategic approach can prevent the digital twin dilemma in most companies.
This article was written by Carlo Velten.