What is a digital twin?
“Twin” originally refers to identical twins: because they share the same genes, they tend to have similar susceptibilities and responses. By analogy, a digital twin is a digital replica of a physical entity created inside a computer system. Although one is physical and the other virtual, they run in parallel, so the physical entity can synchronize its state to the digital replica in real time. The digital replica can then explore solutions using big data and artificial intelligence and send optimal decisions back to the physical system for execution.
At its core, a digital twin is a tool for addressing complex system problems. Its value lies in using the digital replica as an experimental environment to validate technologies, optimize configurations, and run trials in the digital domain while keeping the physical system stable. When needed, validated solutions from the digital replica are applied to the physical system. This approach reduces risk, lowers cost, and improves efficiency.
Digital twin concepts originated in aerospace and defense and have since expanded into manufacturing, smart cities, and other verticals. After Gartner listed digital twins among its top strategic technology trends for three consecutive years (2017–2019), the telecommunications industry also began investigating digital twin applications.
Why the telecommunications industry needs digital twins
Engineers who perform wireless network testing, operations, and optimization often describe the work as walking on thin ice. Mobile communication networks are infrastructure that provides essential services such as voice and internet access; any operational error that causes a widespread or prolonged outage or severe performance degradation can become a major incident.
Network operators treat any change to the production network with extreme caution. New function testing is typically restricted to limited regions, parameters are verified repeatedly, and changes are often executed during overnight hours to minimize user impact.
Telecommunication systems are extremely complex. End-to-end networks contain many network elements, each with multiple operating roles and distinct concerns. Engineers often see only parts of the system and lack tools for accurate, system-wide performance prediction, resulting in slow, cautious changes and limited ability to foresee outcomes.
No matter how thorough planning is, unexpected problems still occur during execution. If a system existed that could fully validate parameter combinations before deployment, find optimal parameters via intelligent search, and then apply those validated parameters to production with high fidelity, operations would be far safer and more efficient.
That system concept is the digital twin.
How telecommunications can apply digital twins
Telecommunication networks have clear advantages for building digital twin systems. First, each base station already has precise models in network management systems. These models contain site location, latitude/longitude, antenna mounting height, azimuth, downtilt, engineering settings, numerous functional attributes, thousands of radio parameters that shape performance, and configuration data for interaction with other network elements. These data can be used to construct a digital representation of a base station.
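The configuration data above could be captured in a simple digital model. The sketch below is illustrative only: the field names and values are assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical digital model of a base station, built from the kind of
# configuration data a network management system already holds.
@dataclass
class TwinBaseStation:
    site_id: str
    latitude: float          # degrees
    longitude: float         # degrees
    antenna_height_m: float  # antenna mounting height above ground
    azimuth_deg: float       # antenna bearing, 0 = north
    downtilt_deg: float      # mechanical plus electrical tilt
    radio_params: dict = field(default_factory=dict)  # thousands in practice

station = TwinBaseStation(
    site_id="SH-0042",
    latitude=31.2304, longitude=121.4737,
    antenna_height_m=35.0, azimuth_deg=120.0, downtilt_deg=6.0,
    radio_params={"tx_power_dbm": 43, "bandwidth_mhz": 100},
)
print(station.radio_params["tx_power_dbm"])  # 43
```

A real twin would hydrate thousands of such objects directly from the network management system rather than hand-writing them.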
Second, core networks store detailed user data that indicate user location, subscription details, commonly used apps, handset model, call patterns, and other context. Operators manage these data strictly, so privacy protections are required when using them for digital twin purposes.
Third, a handset with signal continually exchanges information with the network beyond user data traffic: signaling for control, measurements of radio signals across bands, and reports of measured signal quality. Base stations aggregate this information into performance KPIs used to assess user experience. In other words, the system has broad visibility over data content and flows.
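As a toy illustration of how per-handset measurement reports roll up into cell-level KPIs, the sketch below averages RSRP and computes a poor-coverage ratio. The report fields, values, and the -105 dBm threshold are all assumptions for the example.

```python
# Illustrative measurement reports from three virtual handsets.
reports = [
    {"ue": "u1", "rsrp_dbm": -95,  "sinr_db": 12.0},
    {"ue": "u2", "rsrp_dbm": -110, "sinr_db": 3.5},
    {"ue": "u3", "rsrp_dbm": -88,  "sinr_db": 18.2},
]

# Aggregate the reports into two simple cell-level KPIs.
avg_rsrp = sum(r["rsrp_dbm"] for r in reports) / len(reports)
poor_coverage_ratio = sum(r["rsrp_dbm"] < -105 for r in reports) / len(reports)

print(f"avg RSRP = {avg_rsrp:.1f} dBm")
print(f"poor-coverage ratio = {poor_coverage_ratio:.0%}")
```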
Given this existing software stack and comprehensive, real-time data, it is feasible to construct a high-fidelity digital twin system on general-purpose servers by selecting appropriate simplifications.
Building an end-to-end virtualized twin system
The first step is to build an end-to-end virtualized twin environment on general servers.
Twin handsets: Simulate many virtual handsets of different capabilities on servers and configure random behavior to mimic real users, e.g., x% streaming video, y% gaming, z% moving at high speed. These virtual handsets do not transmit actual radio signals because they connect to a virtual wireless channel.
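A minimal sketch of how such behavior profiles might be assigned to virtual handsets; the profile names and shares below are placeholders, not calibrated values.

```python
import random

random.seed(42)  # reproducible for the example

# Assumed traffic mix: (profile name, share of handsets).
PROFILES = [("streaming", 0.4), ("gaming", 0.2),
            ("browsing", 0.3), ("high_speed", 0.1)]

def spawn_handsets(n):
    """Create n virtual handsets, each drawing a random behavior profile."""
    names, weights = zip(*PROFILES)
    return [{"id": i, "profile": random.choices(names, weights)[0]}
            for i in range(n)]

handsets = spawn_handsets(1000)
mix = {p: sum(h["profile"] == p for h in handsets) for p, _ in PROFILES}
print(mix)  # counts roughly track the configured shares
```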
Twin wireless channel: Based on high-precision maps and local environment data, use ray-tracing models to simulate reflection, scattering, diffraction, and fading of radio propagation, and dynamically adjust for handset mobility. Interference in various scenarios can also be modeled, allowing comprehensive channel simulation.
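A full ray-tracing channel is beyond a short example, but a minimal stand-in conveys the idea: a log-distance path-loss model with log-normal shadowing, using generic textbook constants rather than map-derived ones.

```python
import math
import random

def path_loss_db(distance_m, freq_mhz=3500.0, exponent=3.5, shadow_sigma_db=8.0):
    """Simplified channel: log-distance path loss plus random shadowing.
    A production twin would replace this with ray tracing over
    high-precision maps; the constants here are generic placeholders."""
    # Free-space loss at a 1 m reference distance.
    pl_ref = 20 * math.log10(freq_mhz) - 27.55
    return (pl_ref
            + 10 * exponent * math.log10(max(distance_m, 1.0))
            + random.gauss(0.0, shadow_sigma_db))

random.seed(0)
rx_power = 43.0 - path_loss_db(200.0)  # 43 dBm assumed transmit power
print(f"received power ≈ {rx_power:.1f} dBm")
```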
Twin base stations: Server-based base stations that model all software and hardware modules of physical stations. The same algorithms run in the twin, with the only difference being that the twin uses the virtual wireless channel instead of real RF transmission.
Twin core network: Commercial core network implementations are already virtualized and can be adapted into the twin environment with necessary simplifications.
When the virtualized twin runs the same software and parameters as the physical system, it can emulate a real base station with high fidelity: the results seen in the twin should match outcomes in the physical network.
For example, the twin system can predict network performance precisely. If you stream video on your phone in a public square such as People's Square, a virtual handset streaming in the twin at the same virtual location will experience similar video quality and smoothness.
Beyond the base twin, an intelligent, automated network application layer is needed to solve operational problems in the physical network, including end-to-end SLA assurance, targeted network optimization, large-scale antenna weight optimization, and user experience improvement.
Operational flow and optimization process
The typical process is:
- Construct the digital twin according to physical network entities.
- Operations staff issue intents, which are translated into autonomous network requirements.
- The digital twin runs demand-driven iterative optimization and simulation validation until the twin achieves the target performance. The twin then generates a digital plan and synchronizes optimized data to the physical network entity. This is the inner loop.
- After the physical network runs with the new data, if the results differ from targets, feedback is sent to the twin for further optimization. This forms the outer closed loop. These loops operate continuously to drive the physical network toward performance goals.
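The inner loop above can be sketched schematically. Here `simulate` is a toy stand-in for the twin simulator (a real one would run the full virtualized network), and the search is a simple hill climb over a single hypothetical tilt parameter rather than a real optimizer.

```python
def simulate(params):
    """Toy stand-in for the twin's KPI prediction; peaks at tilt = 6."""
    return 100 - (params["tilt"] - 6) ** 2

def inner_loop(params, target, steps=20):
    """Iterate in the twin until the predicted KPI meets the target,
    then return the validated digital plan."""
    for _ in range(steps):
        if simulate(params) >= target:
            return params  # validated plan, ready to sync to the network
        # Try neighboring tilt values and keep the best candidate.
        candidates = [dict(params, tilt=params["tilt"] + d) for d in (-1, 1)]
        params = max(candidates + [params], key=simulate)
    return params

plan = inner_loop({"tilt": 1}, target=99)
print(plan, simulate(plan))
# Outer loop: once the plan runs on the physical network, measured KPIs
# that miss the target are fed back and the inner loop reruns.
```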
Below is an example of how a digital twin supports network rate optimization.
Current challenges in wireless rate optimization include:
- User experience degradation is often discovered passively through complaints or alarms, followed by ticketing and manual diagnosis, which delays resolution and harms satisfaction.
- Network complexity means many factors influence rate, and their impact varies across external and internal environments.
- When adjusting parameters, it is difficult to predict the magnitude and direction of impact on existing services.
- Post-change verification is often incomplete: it may miss side effects in scenarios unrelated to the current operation, causing service impact and wasting test resources.
With a digital twin, automated geospatial simulation, coverage simulation, and radio parameter configuration simulation can reveal user experience issues early. Based on simulation results, site additions, coverage optimization, and parameter tuning can be iteratively validated in the twin. The final validated plan is then applied to the physical network.
Certain actions still require human intervention, such as building new sites or adjusting antenna orientation. Combining these human tasks with automated actions on the physical network makes it possible to achieve the rate optimization goals.
Future outlook
In previous generations up to 5G, network planning, construction, operations, and optimization were disjointed across lifecycle stages, causing inefficiencies and high costs. Autonomous intelligent networks were proposed to address these problems, and digital twins are an important path toward implementing autonomous networks. Industry research on digital twins is already extensive.
Currently, applications of digital twins in communication networks remain in early research stages, and there is no industry consensus on the concept and scope of digital twin networks. As complex systems built on large-scale networks, digital twin networks face technical challenges in data handling, modeling, and architecture.
Explorations on 5G have been mostly external and fragmented. “External” means the twin and the physical network are separated in hardware and software, which limits the timeliness and effectiveness of data synchronization and configuration. “Fragmented” means case-driven automation achieves high automation in specific functions but lacks systemic, general applicability.
In the 6G era, the goal is to leverage native network compute and intelligence to construct digital twin networks for continuous physical-virtual integration, ongoing planning, and fault self-healing. This would enable high-level autonomy across the network lifecycle, reduce human effort, and substantially improve operational efficiency.
Progress begins with practical steps. With continued research and engineering effort, digital twin technology may play a central role in future autonomous communication networks.