
5G Reality Check – who wants to pay for 1 millisecond latency?

Pointing toward the future (Ericsson CEO, MWC Barcelona)

Bold targets, but is the industry promising more than it can deliver?

The mobile industry’s vision documents for 5G promise fantastic leaps in performance over current networks (links below). But as 5G evolves over the coming 10 to 15 years, it will most likely repeat the pattern seen with 3G and 4G.

The first step in the 3G and 4G life cycle was that bold targets were formulated by the vendors and in the industry committees. The next phase was to package the message, communicate the most attractive claims to the market, and create hype around the new generation of mobile networks. Once the new networks reached the deployment stage, marketers ignored the formal definitions and branded whatever products were ready for delivery as “next-G”. For example, LTE is not real 4G according to ITU’s definition (only LTE-Advanced is). And when xG services were deployed by the operators to actual users, the delivered bandwidth and reliability were well below consumers’ expectations.

Now it is 5G’s turn, and the vision documents list a number of bold targets. Here is a summary from GSMA:

• 1-10 Gbps connections to endpoints in the field (i.e. not theoretical maximums)
• 1 millisecond end-to-end (E2E) round-trip delay (latency)
• 1,000x bandwidth per unit area
• 10-100x number of connected devices
• (Perception of) 99.999% availability
• (Perception of) 100% coverage
• 90% reduction in network energy usage
• Up to ten-year battery life for low-power, machine-type devices

With the release of new spectrum (in the millimeter wave bands) and a much denser network, most of these targets are theoretically achievable. The main roadblock is economic. The willingness to pay (measured in ARPU) will probably stay the same, so operators who deploy denser, newer networks will be constrained by the available revenue pool from their users. When operators begin to deploy 5G overlays on top of their 4G networks, they will initially offer “5G services” with far less than 99.999% availability. Regardless of the superiority of the 5G technology, it is much more expensive to deliver 99.999% than, for example, 99%.

However, simultaneously reaching all 5G targets at a reasonable cost is not achievable during the next decade. The exponential performance increase in silicon has historically been driven by Moore’s law (which translates to around a 55-59% increase per year). This trajectory has already slowed down and will probably slow down further in a couple of years when the foundries move to 7 nm and finally 5 nm technology. After that, the size of the atoms and quantum effects make it almost impossible to increase chip density and go below 5 nm. Even if that were not the case, the historic performance increase in mobile networks has been much slower than 59% per year. The reason is that electronics and processors make up only a smaller part of a mobile network. The main cost drivers are cabling, deployment, masts and other installation costs, which don’t follow Moore’s law. If we optimistically assume an annual performance increase of 30% over the next decade, it will translate into an improvement by a factor of 14. To deliver the 5G vision, this performance increase would have to cover all of the GSMA targets listed above combined, which is unlikely.
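
As a quick sanity check of the compounding arithmetic, here is a minimal Python sketch (the growth rates are the ones quoted above; the function name is my own):

```python
# Compound growth factors over a decade, for the rates discussed above.
def growth_factor(annual_increase: float, years: int = 10) -> float:
    """Total improvement factor after `years` of compound growth."""
    return (1 + annual_increase) ** years

print(f"Moore's-law pace (59%/year):  {growth_factor(0.59):.0f}x per decade")   # ~103x
print(f"Assumed network pace (30%/yr): {growth_factor(0.30):.1f}x per decade")  # ~13.8x, the factor of 14
```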

In particular, the targets in GSMA’s vision document for 5G’s energy consumption and latency are just not realistic. The goals for increased capacity and coverage conflict with the goal of lower power consumption: there is a trade-off between increased performance and lower power consumption. A common metric for energy efficiency is bits per Joule (bps/Joule). It is possible to improve this metric, but according to Shannon theory there is a trade-off between improved spectral efficiency and improved energy efficiency, and improving both simultaneously is difficult. It is unclear whether the 5G vision documents refer to energy efficiency (bps/Joule) or to total energy consumption when they state the target of a 90% reduction in energy usage. A 90% reduction of total network energy consumption is not realistic if capacity and performance are to be simultaneously increased by a factor of 100 to 1000. Improving energy efficiency (bps/Joule), on the other hand, is a matter of ongoing technological development and will continue, even though it is doubtful that it can increase by an order of magnitude in one decade if Moore’s law comes to a halt.
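
To make the Shannon trade-off concrete, here is a minimal sketch using the standard AWGN Shannon-limit relation (my own illustration; the function name and the sample spectral efficiencies are not from the vision documents):

```python
# Shannon limit on an AWGN channel: to achieve a spectral efficiency of
# eta bits/s/Hz, the energy per bit relative to noise density (Eb/N0)
# must be at least (2**eta - 1) / eta. Higher spectral efficiency thus
# costs more energy per bit, i.e. bps/Hz and bps/Joule cannot both be
# pushed up for free.
import math

def min_eb_n0_db(eta: float) -> float:
    """Shannon-limit Eb/N0 in dB for spectral efficiency eta (bits/s/Hz)."""
    return 10 * math.log10((2 ** eta - 1) / eta)

for eta in (0.1, 1, 2, 4, 8):
    print(f"eta = {eta:>4} bit/s/Hz -> minimum Eb/N0 = {min_eb_n0_db(eta):6.1f} dB")
```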

Another issue is the goal of 1 ms E2E (end-to-end) latency. The 1 ms target stated by GSMA, Ericsson, and Qualcomm is aggressive compared to other industry stakeholders:

• Samsung talks about an air latency of 1 ms, with E2E latency at 5 ms.
• DoCoMo and ITU mention 1 ms RAN latency without specifying a target for E2E latency.
• The EU’s goal is 5 ms E2E latency, and 1 ms local latency for V2V (vehicle-to-vehicle) communication.
• Alcatel-Lucent’s target is 1 ms latency for extreme cases.
• Nokia and Huawei mention <1 ms latency without specifying how they define it.
• The NGMN white paper specifies 10 ms E2E latency in general, and 1 ms for use cases that require extremely low latency.
• GSA mentions <1 ms latency in the air link and <10 ms E2E latency.

Increased processor performance and the use of higher radio frequencies will reduce latency, which is a welcome side effect. But the goal of 1 ms E2E latency is just not credible. The mobile industry can control the latency in the user plane (the radio network), but all latencies add up, and the radio network is only a small part of the total round-trip latency. For comparison, the RAN latency in 4G LTE can ideally be almost as low as 20 ms, but the median E2E latency is often much higher. Ping times (a measure of E2E latency) do go down with the introduction of each new generation of mobile networks. A test of the shortest ping times on US networks gave 88 ms for 3G (HSPA), 32 ms for 4G (LTE), and 18 ms for Wi-Fi. But the median latencies were higher: 168 ms for 3G, 52 ms for 4G, and 23 ms for Wi-Fi. In another test, by OpenSignal, mean 4G latency was around 70-80 ms. Network congestion will inevitably push median latencies significantly above the best case. Extrapolating a lowest 5G latency from the 3G and 4G figures would put it somewhere around 10 ms, not at 1 ms.
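
For illustration, here is the naive geometric extrapolation behind that ~10 ms figure. It is my own back-of-envelope assumption that 5G improves over 4G by roughly the same factor as 4G did over 3G:

```python
# Rough geometric extrapolation of the best-case ping times quoted above
# (88 ms on 3G/HSPA, 32 ms on 4G/LTE). Illustrative only.
best_3g, best_4g = 88.0, 32.0        # ms, shortest ping times from the US test
gen_over_gen = best_3g / best_4g     # ~2.75x improvement per generation
best_5g = best_4g / gen_over_gen
print(f"Extrapolated best-case 5G ping: ~{best_5g:.0f} ms")  # ~12 ms, the same order as ~10 ms; not 1 ms
```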

In order to reduce the full round-trip E2E latency to well below 10 ms, entire content delivery networks will have to be rebuilt at significant cost: the centralised cloud data centers will have to be pushed out to the edge of the networks. And edge computing will not solve the problem of backbone transmission latency for real-time information originating far away. The speed of light in fiber adds around 1 ms of latency per 200 km, so, for example, the added round-trip latency over a trans-Atlantic fiber backbone cable is around 60 ms. (The speed of light is about 50% higher in air than in fiber, so latency can be reduced somewhat by replacing fiber with millimeter wave radio links for mobile backhaul, and possibly for backbone transmission. But this is an even more expensive solution.)
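
The propagation arithmetic behind those figures, as a minimal sketch (assuming a typical fiber refractive index of ~1.47 and a ~6,000 km trans-Atlantic route; both are my assumptions, not figures from the vision documents):

```python
# Propagation delay of light in optical fiber: roughly c / 1.47,
# i.e. about 200 km per millisecond one way.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per ms
FIBER_INDEX = 1.47                 # typical refractive index of optical fiber

def fiber_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay over a fiber path of the given length."""
    one_way_ms = distance_km / (C_KM_PER_MS / FIBER_INDEX)
    return 2 * one_way_ms

print(f"{fiber_rtt_ms(200):.1f} ms")    # ~2 ms round trip, i.e. ~1 ms each way per 200 km
print(f"{fiber_rtt_ms(6000):.0f} ms")   # ~59 ms round trip for a trans-Atlantic path
```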

The 5G vision documents mention VR, AR, haptic/tactile applications and self-driving vehicles as examples where ultra-low latency is required. But there is no deeper analysis of the use cases and no discussion about the willingness to pay for low latency.

Latencies this low are undetectable for humans. For example, when two people talk to each other in a room, the speed of sound adds 3 to 6 ms of latency. The frame interval on a TV is 20-40 ms, yet humans perceive the sequence of still images as natural moving pictures. Human reaction times are measured in the range of 100 ms and higher.
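
As a quick check of the speed-of-sound comparison (my own arithmetic, assuming ~343 m/s in air at room temperature):

```python
# Acoustic latency between two speakers: every metre of distance
# adds about 3 ms at the speed of sound in air.
SPEED_OF_SOUND_M_PER_S = 343.0   # in air at ~20 degrees C

def acoustic_delay_ms(distance_m: float) -> float:
    return distance_m / SPEED_OF_SOUND_M_PER_S * 1000

for d in (1.0, 2.0):             # normal conversational distances
    print(f"{d} m -> {acoustic_delay_ms(d):.1f} ms")   # ~2.9 and ~5.8 ms
```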

The most demanding use case for humans is VR/AR helmets, where very high bandwidth and latencies as low as 20 ms are required for an optimal experience. An even more extreme use case would be a combination of tactile and collaborative VR, where several users in a shared VR environment manipulate the same virtual objects and receive tactile feedback. According to ITU, 1 ms latency is required for this use case. Gamers using VR helmets will probably be willing to pay for latency down to around 20 ms (but not 1 ms). However, this is a small market segment and will not be enough to finance a costly re-architecture of the entire global network. In addition, users wearing VR helmets cannot see their surroundings; moving around on the streets blinded by a VR helmet is not a realistic 5G use case. Users of AR helmets/goggles will be more mobile, but the demands for bandwidth and latency are less extreme for AR. And it is much easier for VR and AR providers to pre-load most of the displayed content in the helmet than to deliver it over a mobile connection in real time.


Not even the use case of self-driving cars gives ultra-low latency an urgent need. For example, a car travelling at 140 km/h moves 3.9 cm in 1 ms, and a car travelling at 90 km/h moves 20 cm in 8 ms. An airbag takes 15-30 ms to deploy. Ultra-low latency would be nice to have, but a system for self-driving vehicles is not dependent on 1 ms latency.
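
Those distance figures are simple speed-times-latency arithmetic, sketched here so other combinations can be checked (function name is my own):

```python
# Distance a vehicle covers during a given latency, for the examples above.
def distance_cm(speed_kmh: float, latency_ms: float) -> float:
    """Distance in cm travelled at speed_kmh during latency_ms."""
    speed_m_per_ms = speed_kmh / 3.6 / 1000   # km/h -> m per ms
    return speed_m_per_ms * latency_ms * 100

print(f"{distance_cm(140, 1):.1f} cm")   # ~3.9 cm at 140 km/h in 1 ms
print(f"{distance_cm(90, 8):.0f} cm")    # ~20 cm at 90 km/h in 8 ms
```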

Human drivers are supposed to follow the two-second rule (2,000 ms) to maintain a safe distance from the car in front. When the technology for self-driving cars matures, this distance can be reduced and fast-moving cars will be able to safely tailgate each other. This will increase road capacity and throughput when a train of fast-moving cars communicates wirelessly: the first car in the train can signal to the ones behind it when it needs to slow down, and they can all brake simultaneously. However, this direct V2V communication is not dependent on the E2E latency of the core network. And considering the immaturity of autonomous vehicle technology, this future vision is many years away. In a mixed traffic environment, where less than 100% of all vehicles are self-driving, each vehicle will still need to maintain a safe distance: some cars will not be equipped with self-driving capabilities, and random events introduced by human drivers can occur at any time. In addition, the collision avoidance AI will be autonomous and built into each vehicle, not dependent on a mobile network connection. The idea of hundreds of self-driving cars on a highway at 140 km/h with 40 cm between them is not going to happen for at least 20 years.
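
For scale, a sketch of the following distance the two-second rule implies at highway speed (illustrative arithmetic only):

```python
# Following distance under the two-second rule, versus the tight
# platooning gap imagined above.
def two_second_gap_m(speed_kmh: float) -> float:
    """Distance in metres covered in 2 seconds at the given speed."""
    return speed_kmh / 3.6 * 2.0

print(f"{two_second_gap_m(140):.0f} m at 140 km/h")  # ~78 m today, vs. the ~0.4 m platooning vision
```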

Kudos to the mobile industry for setting bold targets. But the goal of 1 ms E2E latency seems like a vendor-driven solution in search of a problem. And considering that fundamental laws of physics make it very hard to achieve this goal, I’m surprised that the community of smart engineers in the industry has allowed it to be part of the 5G roadmap.

Update: I have found some indirect support for my assumption of a 30% annual performance increase above. According to Nielsen’s law, fixed-line bandwidth for high-end users has been growing by 50% per year (probably based on best-case data points), but the actual delivered average bandwidth for all users has only grown by around 30% per year. These figures illustrate the discrepancy between advertised theoretical top speed and the service quality delivered under real-world conditions. The 30% figure also reflects the low willingness to pay for fast connection speeds in the mass market.


5G vision documents, presentations and white papers from GSMA, NGMN, ITU, EU, GSA, DoCoMo, Ericsson, Huawei, Qualcomm, Nokia, Samsung, SK Telecom, and Alcatel-Lucent.
