Smart Digital Twins Architecture

Updated 2023-01-16

Bernard Deffarges

How to Design and Implement Efficient and Resilient Digital Twin Systems?

A Digital Twin (DT) is a virtual representation of a connected physical asset.

The physical object must be able to stream data to a remote endpoint, as the data are the means to realize value.

A DT can cover the whole product lifecycle of its physical counterpart. In fact, it could even exist long before and maybe long after the physical object.

A DT is a model of a physical object that can mimic its structure and behavior. A set of DTs can represent a set of physical objects, and their interactions in the real physical world can be simulated as interactions amongst the DTs. DTs can also form hierarchies, with some DTs containing others.

One of the primary benefits of DTs is simulation, as electrons are cheaper than atoms (software simulation versus hardware fixes). Other benefits include defect prediction, improved engineering, testing and training. The savings and possibilities grow with the number of DTs deployed and the complexity of their interactions. The more complex a system becomes, the more value DTs will bring over time.

The stream of information should at least go from the physical object to the DT, to update its state. The more information is transmitted, the more can be done. But the stream can also go the other way around, from the DT to the physical object, for example to execute commands, to calibrate sensors, or to inform physical objects of events or changes in their surroundings. Data can also be exchanged between DTs.
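To make this concrete, here is a minimal sketch of such a bidirectional protocol expressed as plain immutable messages in Scala. The message and field names (TelemetryUpdate, CalibrateSensor, and so on) are hypothetical, chosen only for illustration:

```scala
// Hypothetical message protocol between a physical asset and its Digital Twin.
// Messages are immutable case classes, so they can be serialized and streamed safely.

// Physical object -> DT: telemetry and events that update the twin's state.
sealed trait DeviceToTwin
final case class TelemetryUpdate(deviceId: String, timestamp: Long, readings: Map[String, Double]) extends DeviceToTwin
final case class DeviceEvent(deviceId: String, kind: String, timestamp: Long) extends DeviceToTwin

// DT -> physical object: commands, calibrations and notifications about the surroundings.
sealed trait TwinToDevice
final case class ExecuteCommand(name: String, parameters: Map[String, String]) extends TwinToDevice
final case class CalibrateSensor(sensorId: String, offset: Double) extends TwinToDevice
final case class EnvironmentChanged(description: String) extends TwinToDevice
```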

Some concepts in DT are far from being new.

Almost 30 years ago, fly-by-wire replacing fly-by-cable was already a kind of DT architecture. It enabled professional flight simulators to run exactly the same software that planes executed in flight, so pilots could train on the ground with the real system under many different conditions (much cheaper than flying the real plane).

Today, almost every object that can be connected to the internet could have its own DT running on some infrastructure. Even very simple components can stream their internal data and state to remote endpoints. When these components are part of bigger appliances, once the data has been processed by an AI algorithm, the remote endpoint can stream commands back to the appliance, indicating for example the likelihood of a possible failure and the need for early replacement of a component. This scenario makes a lot of sense for cheap components sitting in expensive appliances that rely on them.
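A minimal sketch of that feedback loop might look as follows; the scoring function is a hypothetical placeholder standing in for a real AI model, and the thresholds and weights are made up:

```scala
// Hypothetical telemetry and advice types for a cheap component inside an expensive appliance.
final case class ComponentTelemetry(componentId: String, hoursInService: Double, vibrationRms: Double)
final case class MaintenanceAdvice(componentId: String, failureLikelihood: Double, replaceEarly: Boolean)

object FailureAssessment {
  // Placeholder scoring function standing in for a real AI model:
  // it only illustrates the shape of the feedback loop, not a real prediction.
  def assess(t: ComponentTelemetry, threshold: Double = 0.7): MaintenanceAdvice = {
    val likelihood = math.min(1.0, 0.00001 * t.hoursInService + 0.1 * t.vibrationRms)
    MaintenanceAdvice(t.componentId, likelihood, replaceEarly = likelihood > threshold)
  }
}
```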

Several aspects should be kept in mind while designing a new DT platform:

What are the physical assets, and what are their communication capabilities, taking into account the infrastructure they run on (wire, 4G, 5G, Wi-Fi, low-power communication, etc.)?

What cloud infrastructure, architecture and data model design are available, or should be developed, to deploy these DTs? The architecture should be resilient and able to scale with the number of DTs that will be deployed and the types of simulations to be run.

What AI and data analysis algorithms will be used and should be implemented? The data model and schema are key as well: a design that is not adaptable could jeopardize the value of the whole long-term project. Also, part of the data processing workflows should be developed in parallel with the system architecture and tested as soon as data becomes available.

Obviously, what kind of cybersecurity risks are involved?

A DT project can run like any other Agile software engineering project: the shorter the iterations, the better. Reasoning per physical device and focusing on the aspects above is an efficient way to stay predictable, explainable and successful. Domain-Driven Design is a great approach to modeling physical assets.

From a software architecture perspective, the Actor Model and its implementation in Akka are a very good fit for DTs.

An Actor is an abstraction that goes beyond traditional object-oriented architecture, adding powerful concepts such as the single-thread illusion, location transparency, sharding, clustering and passivation, to name a few.

One Physical Object = one Digital Twin = one Aggregate Root = one Actor (maybe including children)
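In Akka Typed (Scala), that mapping could be sketched roughly like this. The DigitalTwin object, its commands and its state are hypothetical names used for illustration, not the design of any particular system:

```scala
import akka.actor.typed.{ActorRef, Behavior}
import akka.actor.typed.scaladsl.Behaviors

// Hypothetical DT aggregate root: one actor per physical object.
object DigitalTwin {
  sealed trait Command
  final case class UpdateTelemetry(temperature: Double, vibration: Double) extends Command
  final case class GetState(replyTo: ActorRef[State]) extends Command
  final case class State(deviceId: String, lastTemperature: Double, lastVibration: Double)

  def apply(deviceId: String): Behavior[Command] =
    running(State(deviceId, 0.0, 0.0))

  // Messages are processed one at a time (single-thread illusion);
  // state changes by returning a new behavior around a new immutable State.
  private def running(state: State): Behavior[Command] =
    Behaviors.receiveMessage {
      case UpdateTelemetry(t, v) =>
        running(state.copy(lastTemperature = t, lastVibration = v))
      case GetState(replyTo) =>
        replyTo ! state
        Behaviors.same
    }
}
```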

A DT implemented as an Akka Actor processes messages one after another (single-thread illusion). The Actor can run anywhere in the cluster (location transparency) and messages are routed to it automatically (through sharding). When the Actor has not been used for a defined period of time, it is automatically saved to a datastore (passivation) and comes back almost instantaneously when needed.
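A minimal Cluster Sharding setup, reusing the hypothetical DigitalTwin behavior sketched above, could look like this; the type key name and helper functions are assumptions for illustration, and passivation of idle entities is driven by the sharding configuration:

```scala
import akka.actor.typed.ActorSystem
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityRef, EntityTypeKey}

object DigitalTwinSharding {
  // One sharded entity type for all digital twins; the entity id is the physical device id.
  val TypeKey: EntityTypeKey[DigitalTwin.Command] =
    EntityTypeKey[DigitalTwin.Command]("DigitalTwin")

  // Register the entity with the shard region: Akka distributes and relocates the actors
  // across the cluster (location transparency) and passivates the ones that stay idle.
  def init(system: ActorSystem[_]): Unit =
    ClusterSharding(system).init(Entity(TypeKey)(ctx => DigitalTwin(ctx.entityId)))

  // Messages sent through an EntityRef are routed to the right node automatically.
  def twinFor(system: ActorSystem[_], deviceId: String): EntityRef[DigitalTwin.Command] =
    ClusterSharding(system).entityRefFor(TypeKey, deviceId)
}

// Usage (assuming an ActorSystem `system` running as part of an Akka Cluster):
// DigitalTwinSharding.init(system)
// DigitalTwinSharding.twinFor(system, "pump-42") ! DigitalTwin.UpdateTelemetry(21.5, 0.02)
```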

Akka also combines object-oriented and functional programming approaches very well, improving code readability and quality while reducing security risks. Replacing shared memory and concurrent reads and writes with immutable messages is a big step towards much less risky systems and better software architectures.
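As a deliberately small illustration (with hypothetical names), compare a shared, mutable counter with an actor whose state only changes by processing immutable messages and returning a new behavior:

```scala
import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors

// Shared-memory style: concurrent reads and writes need explicit locking and are easy to get wrong.
class SharedCounter {
  private var value: Int = 0
  def increment(): Unit = synchronized { value += 1 }
  def current: Int = synchronized { value }
}

// Actor style: the only way to change state is to send an immutable message;
// the actor processes messages one at a time and returns a new behavior with the new value.
object Counter {
  sealed trait Command
  case object Increment extends Command

  def apply(value: Int = 0): Behavior[Command] =
    Behaviors.receiveMessage { case Increment => apply(value + 1) }
}
```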

After a decade of developing Actor Systems, we find that reasoning about them becomes much easier and more efficient than reasoning about traditional software systems. We provide training on the Actor Model and Akka; some introductory slides can be found here. Feedback is welcome.

DT systems are distributed and multi-agent by design. They tend to evolve rather fast, with more physical assets being added, changing physical topologies, growing amounts of data and new AI algorithms. Akka is not an off-the-shelf solution for deploying DTs; it is rather a set of very powerful libraries. They require a bit more effort but give far more freedom and possibilities than any other solution.

Check it out for yourself and let us know what you think.