
5G in mmWave bands – good enough to render fiber obsolete?

 

There is an abundance of spectrum in the millimeter wave bands, but offering “unlimited bandwidth” in 5G still requires expensive backhaul (and fiber)

The mobile industry has set very aggressive targets for 5G, such as 1-10 Gbps delivered connection speed (not theoretical maximum) and 1000x bandwidth per unit area (see my previous article). What will make this possible is the ability to use spectrum in the millimeter wave bands (30-300 GHz), and the lower portion of that range (30-100 GHz) in particular.

Several factors have contributed to making mmWave bands available for mobile services. The RF technology for components that can operate in these extremely high frequency bands has matured and is ready for mainstream commercial use.

Millimeter waves behave differently from lower-frequency radio signals, and their propagation resembles visible light more than traditional radio. For example, it is almost impossible for mmWaves to penetrate solid objects such as walls. Even the human body, foliage, or glass windows with curtains pose significant problems. Another difference is the signal attenuation caused by the atmosphere and rainfall, which is higher for mmWaves than for traditional radio signals. It was previously thought that millimeter waves could only be used with direct line of sight (LoS), and this limitation made the frequency band rather useless for wireless communication. However, recent field trials have shown that mmWave signals bounce (are reflected) off hard surfaces quite effectively. This makes it possible to achieve good NLoS outdoor coverage of at least around 200 meters in an urban environment.

The bouncing creates a lot of multipath propagation, but this problem can be overcome with high-performance signal processors. The limited reach and atmospheric attenuation are actually an advantage if mmWaves are deployed in ultra-dense networks: they make it easier to reuse the same spectrum, since nearby access nodes do not interfere with each other.

In addition, there is an abundance of spectrum in the mmWave band. In the traditional radio spectrum (0-3 GHz), the mobile industry has until now managed to deliver its services to 4.5 billion people on less than 0.6 GHz of allocated spectrum. If only 5% of the mmWave band (30-300 GHz) were allocated to mobile services, the available spectrum would increase by a factor of 25. This abundance could be used for extremely wide carriers that would enable multi-Gbps speeds.
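
A rough check of the arithmetic, using the figures above (my own back-of-the-envelope numbers): 5% of the 270 GHz wide band is 0.05 × 270 GHz = 13.5 GHz, and 13.5 GHz / 0.6 GHz ≈ 22, which is roughly the factor of 25 cited (and more if today’s allocation is well below 0.6 GHz).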


In the mobile industry’s vision, 5G will provide up to 10 Gbps bandwidth in urban areas with ultra-low latency, almost 100% availability, and seamless fallback to 4G when coverage falters. If and when these targets are achieved, it is claimed that 5G mobile could be a serious competitor to landline fiber access. More outlandish claims are that “everything will be mobile and the fiber network will be abandoned”. I am sceptical for several reasons.

As far as I know, it is still unclear whether mmWave signals can penetrate buildings from an outdoor access point. Can the signal go through a window? Will it be blocked if curtains are drawn or blinds are pulled down? Or will the consumer be expected to mount a 5G repeater unit in his or her window, or outside the window? What if a cat walks in front of the mmWave transceiver on the window sill? These are potential inconveniences and could be a barrier for mainstream market adoption.

If mmWave access nodes are deployed indoors, the wireless backhaul from the indoor 5G nodes must have the capability to penetrate walls. This would require a narrow beamforming antenna (array or horn antenna) using lower frequencies than the mmWave bands. This antenna can be aimed toward a nearby target node with a connection to the core network. If the access nodes are self-deployed by the users, an array antenna could possibly self-configure and form a beam in the right direction. However, I don’t know if an array antenna can form a beam in any 3D direction. The user might have to manually point the antenna in the right direction. Alternatively, the access node could be equipped with a servo that aligns an internal antenna. Health concerns regarding radiation could be another barrier to adoption as the narrow beam from the access point will generate a strong RF signal that passes through the user’s home. (It is of course possible to connect an indoor 5G access node to the existing in-building fiber directly, but that can hardly be called, “replacing fiber with 5G”.)

Indoor use of 5G based on mmWaves faces additional challenges. The signal can propagate between rooms by bouncing off the walls, but a closed door will most likely cut the connection. Seamless fallback to Wi-Fi and/or LTE can manage this situation, but these legacy technologies will not be able to deliver the same performance as mmWaves.

The demands for fast processing in 5G will be extreme. Higher data rates and lower latency require faster processors. The critical building blocks for 5G mmWave technology are narrow-beam MIMO antennas, which rely heavily on signal processing. In a scenario with extreme data rates and extreme user density, the requirements for processor performance will exceed what today’s processors can deliver. Moore’s Law has already slowed down significantly, and if it comes to a halt, processor capacity could be a barrier to the 5G vision. I have spoken with experts whose assessment is that processor technology will not be able to fully meet the requirements for 5G until 2022 or 2023.

The costs of deploying ultra-dense 5G networks will be substantial. If the range of an access node is limited to a few hundred meters, thousands of access nodes will have to be deployed in urban areas. The major cost driver is not the electronics but installation, cabling, power and maintenance. The traffic from each access node will have to be backhauled into the core network, probably with point-to-point narrow-beam links aimed toward a nearby access node that is connected to the core network (via fiber or further mmWave links).

Even though 5G has a fantastic best case performance, the first deployments will be underprovisioned. The number of access nodes will initially be insufficient and when peak traffic exceeds capacity, 5G networks will suffer from the same type of service degradation as 3G and 4G networks. The operators have a limited investment budget, and they will most likely settle for a slower and less costly rollout of 5G. Users who expect their wireless 5G to replace fiber will be disappointed.

In addition, new civil engineering technologies are reducing the costs of deploying fiber in street ducts. 5G is still at least five years away from a broad market launch and, at that time, the fiber networks will have a much larger user base than they do currently. When fiber service providers begin to face competition from 5G, they will of course lower their prices and offer higher bandwidth to stay competitive. The bandwidth that 5G will deliver in a decade should be compared to what fiber can deliver at that time, not with fiber capacity today.


In a competitive market, I don’t see 5G as a viable direct alternative to fiber for at least a decade. But in markets with slow-moving monopolistic landline incumbents, 5G could offer an attractive alternative. The same goes for markets where the landline network is underdeveloped or non-existent.

5G reality check – who wants to pay for 1 millisecond latency?

Pointing toward the future (Ericsson CEO, MWC Barcelona)


 

Bold targets, but is the industry promising more than it can deliver?

The mobile industry’s vision documents for 5G mobile are promising fantastic leaps in performance over current networks (links below). But as 5G evolves over the coming 10 to 15 years, it will most likely be a repeat of the pattern seen in 3G and 4G mobile.

The first step in the 3G and 4G life cycle was that bold targets were formulated by the vendors and in the industry committees. The next phase was to package the message, communicate the most attractive claims to the market, and create hype around the new generation of mobile networks. Once the new networks reached the deployment stage, marketers ignored the formal definitions and branded whatever products were ready for delivery as “next-G”. For example, LTE is not real 4G according to ITU’s definition (only LTE-Advanced is). And when xG services were deployed by the operators to actual users, the delivered bandwidth and reliability were well below the consumers’ expectations.

Now it is time for 5G and the vision documents list a number of bold targets. Here is a summary from GSMA:

• 1-10Gbps connections to end points in the field (i.e. not theoretical maximum)
• 1 millisecond end-to-end (E2E) round trip delay (latency)
• 1000x bandwidth per unit area
• 10-100x number of connected devices
• (Perception of) 99.999% availability
• (Perception of) 100% coverage
• 90% reduction in network energy usage
• Up to ten year battery life for low power, machine-type devices

With the release of new spectrum (in the millimeter wave bands) and a much denser network, most of these targets are theoretically achievable. The main roadblock is economic. The willingness to pay (measured in ARPU) will probably stay the same. Operators who deploy denser, newer networks will be constrained by the available revenue pool from the users. When operators begin to deploy 5G overlays to the 4G networks, they will initially offer “5G services” with less than 99.999% 5G availability. Regardless of the superiority of the 5G technology, it is much more expensive to deliver 99.999% than, for example, 99%.

However, simultaneously reaching all 5G targets at a reasonable cost is not achievable during the next decade. The exponential performance increase in silicon has historically been driven by Moore’s law (which translates to around a 55% to 59% increase per year). This trajectory has already slowed down and will probably slow down further in a couple of years when the foundries move to 7 nm and finally 5 nm technology. After that, the size of the atoms and quantum effects make it almost impossible to increase chip density and go below 5 nm. Even if that were not the case, the historic performance increase in the mobile networks has been much slower than 59% per year. The reason for this is that electronics and processors only make up a smaller part of the mobile networks. The main cost drivers are cabling, deployment, masts and other installation costs, which don’t follow Moore’s law. If we optimistically assume an annual performance increase of 30% over the next decade, it will translate into an improvement by a factor of 14. In order to deliver the 5G vision, this performance increase will have to cover all of the GSMA targets listed above simultaneously, which is unlikely.
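
The arithmetic behind these figures (my own back-of-the-envelope numbers): a doubling every 18 to 19 months corresponds to 2^(12/19) − 1 ≈ 55% up to 2^(12/18) − 1 ≈ 59% per year, and a 30% annual improvement compounds to 1.30^10 ≈ 13.8, i.e. roughly a factor of 14 over a decade.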

In particular, the targets in GSMA’s vision document for 5G’s energy consumption and latency are just not realistic. The goals for increased capacity and coverage conflict with the goal of lower power consumption. The dilemma is that there is a trade-off between increased performance and lower power consumption. A common metric for energy efficiency is bits per Joule (bit/J). It is possible to improve this metric, but there is a trade-off between improved spectral efficiency and improved energy efficiency (according to Shannon theory). Improving both simultaneously is difficult. It is unclear from the 5G vision documents whether they are referring to energy efficiency (bit/J) or to total energy consumption when they state the target of a 90% reduction in energy usage. A 90% reduction of total network energy consumption is not realistic if capacity and performance are to be simultaneously increased by a factor of 100 to 1000. But improving energy efficiency (bit/J) is a matter of ongoing technological development and will continue, even though it is doubtful that it can increase by an order of magnitude in one decade if Moore’s Law comes to a halt.
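
The Shannon-level trade-off can be made concrete (my own formulation, not taken from the vision documents). For spectral efficiency η = C/B = log2(1 + SNR), the minimum energy per bit satisfies Eb/N0 ≥ (2^η − 1)/η, which grows with η: pushing spectral efficiency up also pushes up the minimum energy cost per bit.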

Another issue is the goal of 1 ms E2E (end-to-end) latency. The 1 ms target stated by GSMA, Ericsson, and Qualcomm is aggressive compared to other industry stakeholders. Samsung talks about air latency of 1 ms with E2E latency at 5 ms. DoCoMo and ITU mention 1 ms RAN latency without specifying a target for E2E latency. The EU’s goal is 5 ms E2E latency and 1 ms local latency for V2V (vehicle-to-vehicle) communication. Alcatel-Lucent’s target is 1 ms latency for extreme cases. Nokia and Huawei mention <1 ms latency without specifying how they define it. The NGMN white paper specifies E2E latency of 10 ms in general and 1 ms for use cases that require extremely low latency. GSA mentions <1 ms latency in the air link and <10 ms E2E latency.

Increased processor performance and the use of higher radio frequencies will reduce latency, which is a welcome side effect. But the goal of 1 ms E2E latency is just not credible. The latency in the user plane (the radio network) can be controlled by the mobile industry, but all latencies add up and the radio network is only a small part of the total round-trip latency. To compare with 4G, the RAN latency in LTE can ideally be as low as around 20 ms, but the median E2E latency is often much higher. Ping times (a measure of E2E latency) are going down with the introduction of new mobile networks. A test of the shortest ping times on US networks gave 88 ms for 3G (HSPA), 32 ms for 4G (LTE) and 18 ms for Wi-Fi. But the median latency was higher: 168 ms for 3G, 52 ms for 4G, and 23 ms for Wi-Fi. In another test, by OpenSignal, mean 4G latency was around 70-80 ms. Network traffic congestion will inevitably lead to significantly higher median latencies compared to the best-case scenario. Extrapolating a lowest 5G latency from the 3G and 4G figures would put it somewhere around 10 ms, not at 1 ms.
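
One crude way to arrive at that figure (my own extrapolation from the numbers above): the best-case ping improved from 88 ms (3G) to 32 ms (4G), a factor of about 2.7. Applying the same factor again gives 32 / 2.7 ≈ 12 ms, and the medians (168 → 52 ms, a factor of about 3.2) give 52 / 3.2 ≈ 16 ms. Both land in the low tens of milliseconds, an order of magnitude above 1 ms.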

In order to reduce the full round-trip E2E latency to well below 10 ms, entire content delivery networks will have to be rebuilt at significant cost. The centralised cloud data centers will have to be pushed to the edge of the networks. And edge computing will not solve the problem of backbone transmission latency for real-time information originating far away. The speed of light in fiber adds around 1 ms of latency per 200 km. For example, the added round-trip latency from a trans-Atlantic fiber backbone cable is around 60 ms. (The speed of light in free air is about 50% faster than in fiber, and latency can be reduced somewhat by replacing fiber with millimeter wave radio links for mobile backhaul, and possibly for backbone transmission. But this is an even more expensive solution.)
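
The numbers follow from the propagation speed (my own arithmetic; the cable length is an assumption): light in fiber travels at roughly 200,000 km/s, i.e. about 1 ms per 200 km one way. A trans-Atlantic route of roughly 6,000 km then gives about 30 ms one way, or about 60 ms for the round trip.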

The 5G vision documents mention VR, AR, haptic/tactile applications and self-driving vehicles as examples where ultra-low latency is required. But there is no deeper analysis of the use cases and no discussion about the willingness to pay for low latency.

Latencies this low are undetectable for humans. For example, when two people talk to each other in a room, the speed of sound adds 3 to 6 ms of latency. The time between frames on a TV is 20-40 ms, yet humans perceive the sequence of still images as a natural moving picture. Human reaction times are measured in the range of 100 ms and higher.
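
For reference (my own arithmetic): sound travels at roughly 340 m/s, so a conversation distance of 1-2 m adds 1/340 ≈ 3 ms to 2/340 ≈ 6 ms of acoustic delay, and 25-50 frames per second corresponds to 1000/50 = 20 ms to 1000/25 = 40 ms between frames.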

The most demanding use case for humans is VR/AR helmets where very high bandwidth and latencies as low as 20 ms are required to offer optimal experience. An even more extreme use case would be a combination of tactile and collaborative VR, where several users in a shared VR environment manipulate the same virtual objects and receive tactile feedback. According to ITU, 1 ms latency is required for this use case. If gamers use VR helmets they will probably be willing to pay for a latency down to around 20 ms (but not 1 ms). However, this is a small market segment and will not be enough to finance a costly re-architecture of the entire global network. In addition, users with VR helmets cannot see their surroundings. Moving around on the streets blinded by a VR helmet is not a realistic 5G use case. Users of AR helmets/goggles will be more mobile but the demand for bandwidth and latency is less extreme for AR. And it is much easier for VR and AR providers to pre-load most of the displayed content in the helmet than to deliver it over a mobile connection in real time.

Not even the use case of self-driving cars presents an urgent need for ultra-low latency. For example, a car travelling at 140 km/h will move 3.9 cm in 1 ms. A car travelling at 90 km/h will move 20 cm in 8 ms. An airbag takes 15-30 ms to deploy. Ultra-low latency would be nice to have, but a system for self-driving vehicles is not dependent on 1 ms latency.
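
The distances follow directly from the speeds (my own arithmetic): 140 km/h = 38.9 m/s, so in 1 ms a car moves 0.001 × 38.9 m ≈ 3.9 cm; 90 km/h = 25 m/s, so in 8 ms it moves 0.008 × 25 m = 20 cm.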

Human drivers are supposed to follow the two-second rule (2000 ms) to maintain a safe distance from the car in front. When the technology for self-driving cars matures, this distance can be reduced and fast-moving cars will be able to safely tailgate each other. This will increase road capacity and throughput when the cars in a fast-moving train communicate with each other wirelessly. The first car in the train can signal to the ones behind it when it needs to brake, and they can all brake simultaneously. However, this direct V2V communication is not dependent on the E2E latency in the core network. Considering the immaturity of autonomous vehicle technology, this future vision is many years away. In a mixed traffic environment, where less than 100% of all vehicles are self-driving, each vehicle will still need to maintain a safe distance. Some cars will not be equipped with self-driving capabilities and random events introduced by human drivers can occur at any time. In addition, the collision avoidance AI will be autonomous and built into each vehicle, not dependent on a mobile network connection. The idea of hundreds of self-driving cars on a highway at 140 km/h with a distance of 40 cm between them is not going to happen for at least 20 years.

Kudos to the mobile industry for setting bold targets. But the goal of 1 ms E2E latency seems like a vendor-driven solution in search of a problem. And considering that fundamental laws of physics make it very hard to achieve this goal, I’m surprised that the community of smart engineers in the industry has allowed it to be part of the 5G roadmap.

Update: I found some indirect supporting data for my prediction of 30% annual performance increase above. According to Nielsen’s law, fixed line bandwidth for high end users has been growing by 50%/year (probably based on best case data points). But actual delivered average bandwidth for all users has only grown by around 30%/year. These figures illustrate the discrepancy between advertised theoretical top speed and the delivered service quality under real-world conditions. The 30% figure is also influenced by the low willingness to pay for fast connection speed on the mass market.


5G vision documents, presentations and white papers from GSMA, NGMN, ITU, EU, GSA, DoCoMo, Ericsson, Huawei, Qualcomm, Nokia, Samsung, SK Telecom, and Alcatel-Lucent.

Groupthink and Apple envy: the consumer tech industry’s biggest problem

Missing: a strong premium brand that is not Apple


The majority of players in the mainstream consumer tech industry are struggling with intense competition, slowing growth and squeezed margins. Despite rapid technological innovation and huge sales volumes, it is difficult for them to differentiate and build premium brands.

Though the total market size is large, the variation in design and form factor between the competitors’ products is smaller than one would expect. The market players seem unwilling to deviate too far from mainstream market consensus.

While most of the industry struggles with commoditization, there is one shining example of a player who has managed to get it right – Apple. Apple has followed a consistent differentiation strategy over decades and built up a very strong brand, customer loyalty, and a distinct product profile. It is only natural that the rest of the industry looks to Apple for inspiration.

However, if the rest of the industry merely attempts to emulate and plagiarise Apple and each other, they only increase commoditization and miss opportunities for differentiation. The market position for sleek elegance and simplicity is already taken, though it is possible to offer a cheaper version of the same concept. Some competitors even think they are “competing with Apple” by copying Apple’s look without actually understanding Apple’s product and design philosophy. At the core of Apple’s thinking is an obsession with offering a great customer experience. The product design is derived from this core thinking. (Apple’s failure to live up to its promises is another story.)

Apart from Apple, there are other expensive high-end products on the market, but they are mainly driven by the high performance specs of the components. These companies are only able to capture a smaller part of the consumer value, as most of it flows back to the component vendors. The margin between the Bill Of Materials (BOM) and the final sales price is just too narrow. If players could achieve premium brand status, their customers would be willing to pay for the brand and not just sheer raw performance. This is the secret sauce of Apple. In side-by-side comparisons of specs, Apple’s hardware has been priced 30 to 50 percent higher than its competitors’ (and Apple’s marketing and distribution costs are significantly lower due to the strong brand).
 

A race to the middle

The competitors’ race to the middle has caused most areas of the tech product industry to be unnecessarily commoditized. When few players dare to pursue a differentiation or niche strategy, several market segments are left unexploited – or even undiscovered. Instead the players are chasing each other (and Apple) for market share in the undifferentiated mainstream market.

The competitors’ thinking is probably something like this: The largest market segment is the mainstream market (say, 60% of the total market). Currently we are one of several competitors in this market and we hold a market share of 25% of the mainstream market (15% of the total market). The largest potential for growth is to increase our market share here. It would be nice to find underserved niches or move upmarket with more expensive premium products. But those markets are much smaller and we don’t have any unique capabilities that would give us a competitive advantage. Let’s focus on the mainstream market and avoid any risks relating to form factor and product design.

The alternative is to develop a differentiation or niche strategy, which would be to identify an underserved market segment and develop an offer that deviates from the mainstream. I view the reluctance to pursue a differentiation strategy as a sign that the players don’t fully trust their own judgement and capabilities. It is hard to compete in the tech market and it’s safer to rely on the same toolbox and best practices as everyone else (market research, focus groups, latest trends in branding, marketing, flat UI design, financial metrics, latest industry hype, etc.). But even if it is more difficult, it is possible to identify unmet user needs and to design products that consumers didn’t know they would love until they were in their hands.

If a tech company wants to break away from the mainstream and innovate, it will have to take a stand. It needs the courage and self-confidence to reject some of the “obvious truths” in the industry and do things differently. To accomplish this, the product/market team must really understand the user’s context, worldview and micro-situation. In particular, they need a design team with strong instincts that can translate the users’ unarticulated needs into an attractive product design.

An OEM with the intention of building a premium brand will also have to make this goal a long-term commitment and be prepared to sacrifice short-term profits. There will always be a temptation to boost short-term ROI by cutting quality and reducing service levels. Apple, for example, does not run the Apple Stores as a profit center but as an important engagement point in its total customer experience.
 

The dysfunctional performance race driving commoditization

Several factors have combined to create this situation, not just groupthink and Apple envy. The industry can partly blame itself for putting too much marketing emphasis on raw performance. If not even the brands themselves can communicate why they have something unique to offer, it is only natural that consumers resort to comparing specs. This dysfunction is visible in every segment of the industry.

For example, camera competition has mainly been a Megapixel race. For most mid-market cameras, the very high resolution in the newest models is more or less wasted as the lenses can’t match the sensor. And sensors with too high resolution will actually reduce the camera’s low-light capabilities. Other features such as autofocus accuracy, exposure accuracy, low-light capabilities, usability, and colour rendering are of equal importance. It took years for the camera makers (both for stand-alone cameras and smartphones) to get them right. As these soft features are harder for the industry to communicate, they are given lower priority, despite being very important for the total user experience.

The flat screen TV market has for years been driven by a race to make the thinnest TV with the largest screen. Thinness is not that important for a stationary product, but sound quality is, and a thinner chassis can’t provide sound quality as good as a thicker TV’s. In addition, TVs come with different colour profiles, one of which is the so-called torch mode (demo mode). This unnaturally vivid mode only serves one purpose: to make the colours on the TV appear more saturated in a brightly lit store, not to provide the best possible experience for customers in their homes. New TVs will often be pre-set with either the unnatural super-bright torch mode or a very dark eco mode. In addition, without informing the consumers, some smart TVs come with built-in spying on the users that cannot be disabled.

The smartphone market is rife with iPhone envy. Apple’s competitors have copied its most controversial design decisions such as removing the micro-SD card slot and using a non-replaceable battery. But it seems that many of them don’t get Apple’s core thinking or understand the importance of good build quality and attention to detail. (More about smartphones in my previous blog post.)

During the PC era, Microsoft extracted most of the value in the industry and deliberately drove the market towards commoditization. The PC makers didn’t have the resources to develop any differentiated customer offer – and if they had, Microsoft would most likely have thwarted their efforts. In this commoditized market, PC makers tried to raise their low margins by shipping machines filled with crapware, trialware, and bloatware. In addition, the build quality was subpar on most consumer computers. Laptop makers have tried to compete with Apple by making their products as thin as possible. They’ve done this by copying the MacBook’s look, but with a much lower build quality. The thinness has also made the laptops prone to overheating. Some laptop makers have copied Apple’s touchpad design with integrated mouse buttons but, unlike Apple, they have chosen to use cheap budget components. The result is predictable. It is no coincidence that consumer laptop makers can’t charge premium prices (with the exception of Apple).
 

Who will dare to “Think different”?

In all market segments of the consumer tech industry, it is possible for an ambitious player to identify weaknesses in the mainstream market and develop something unique. Apple did it one way, and any company that plans to embark on a differentiation strategy will have to discover their own unique product design and customer offer. Plagiarising Apple is not enough. Apple has made a number of serious mistakes and these weaknesses can be exploited. A differentiation strategy has to be different from Apple’s core thinking as well as from the mainstream market consensus. Apple’s corporate slogan from the 1990s was “Think different”. Who will be the first to “get” Apple, yet dare to think differently from them?

The smartphone makers’ dilemma

Smartphone flagship battle (Apple vs Samsung)


The market for flagship smartphones is the most cut-throat tech market in the world. The stakes are enormous, product life cycles are incredibly short and rivalry is intense. The top ten players are constantly reminded of the fates of fallen giants on this battle field.

The smartphone market is full of contradictions. In some respects, it appears to be a commoditized mass market. At the same time, the smartphone is one of the most complex and advanced tech products on the planet. It pushes performance, and what is technologically possible, to the limit. It integrates dozens of technologies into one small device, each of which is an impressive field of technology in and of itself. The flagship smartphone represents the pinnacle of 400 years of technological development.

In spite of the advanced technologies used in smartphones, the room to differentiate is rather restricted for the heads of product strategy at the major players. The intense competitive forces in this industry constrain their available degrees of freedom. The exception to this rule is Apple which to some extent can afford to go its own way.

Even though the flagships are a smaller part of the smartphone market, they are very strategically important for the vendors. The flagships set the highest reachable price point for each vendor and most of the market and media attention is focused on them.

The heads of product strategy at the smartphone makers must use the latest high-end chipset in next year’s flagship model. They also have to include the latest high-resolution displays released by the component suppliers. Otherwise they are not in the flagship race. The subsequent product design is a balancing act between conflicting goals. If the processor is pushed to the maximum, the device will win the benchmark tests for performance, but will be criticised for short battery life and overheating. If the opposite is done and the device is designed to avoid overheating and provide long battery life, it will fall behind in the benchmark tests for raw performance.

The entire product design and development process is full of these trade-offs. If a very bright, high resolution display is used, costs will go up and battery life will suffer. If the device is equipped with a high capacity battery the weight goes up. However, if it is designed to be as light as possible, its short battery life will be viewed as a minus.

The first player to include a new generation of technology (like LG did in 2014 with the super high resolution QHD display) risks integration problems. LG’s GPU and processor were not up to the challenge and the display became less responsive. But if a smartphone maker waits too long to integrate new technologies into its products, it will be considered a laggard.

Another trade-off is the thinness of the phone. If the phone is too thin there will be less space for a high-capacity battery. In addition, for every millimetre that is carved away, the optical quality of the camera drops sharply. Apple tried to have it both ways in their latest iPhones: they chose a camera that sticks out from the body, resulting in a badly designed device that wobbles when placed on a flat table.

If a display is used which is larger than the competitors’, cost and weight will be higher. In addition, there’s the risk of losing customers who prefer a more lightweight model. A smaller display choice will result in nit-picking by the tech community, and customers comparing the device in stores will probably choose competing products with larger displays.

Time to market is extremely critical. A new flagship model can only be sold at full price for a few months, and sales will drop rapidly after 6 to 10 months. If a new model is rushed to market, there is a risk that serious prototype-stage flaws are left undiscovered. However, spending too long on perfecting a new model will shorten the sales window and cut deep into the revenue potential.

For second-tier players in the high-end segment, such as Sony, this dilemma causes a vicious circle. If sales of the current flagship device drop off early, the vendor will feel pressured to launch a new model. But a hurried launch prevents the development of a really good product. The short product cycles, combined with a string of rushed models, further undermine these players’ brand value.

Teardown analyses of modern smartphones provide additional evidence of this dilemma. The market leaders Apple and Samsung manage to optimise space on the chipset and the boards and compress the spacing of components. Struggling smaller players such as Sony lack the time and resources for that, and the insides of their devices are far less optimised.

When it comes to software and sensors, the players are more or less forced to add as much functionality as possible. More apps and software add to icon and menu clutter but if something is left out, that omission will be criticised.

No matter what the smartphone vendors deliver, the professional product testers (GSMArena etc.) will find something to pick on. If the “perfect” smartphone were ever created, they would criticise it for being too expensive. And if it were sold at a lower price point, the shareholders and CFO would complain about low profit margins.

In addition to these restrictions is the unwillingness by almost all industry players to experiment with other form factors than the iPhone-style slate form. When flexible displays are introduced (soon?) all the major players will probably jump on the bandwagon. This will hopefully provide room for new innovative designs, though my guess is that most smartphones with a flexible display will look rather similar.


This market would appear less like a commoditized mass market if at least some players would be brave enough to innovate and deviate from the mainstream form factor. There are rare examples of this but they haven’t really been pushed by the industry. Samsung launched a thicker phone with a much better camera that had a real 10x optical zoom in 2014 (Galaxy K Zoom). Sony have introduced flagship models with much better sound quality than the market average. There are a handful of current models with physical keyboards on the market (from Blackberry and LG). The small Russian smartphone maker YotaPhone introduced a smartphone with a second always-on e-ink display on the backside that can be read in bright sunlight. LG added a very small banner shaped always-on second screen for notifications on the V10. There are a couple of newly introduced flip phones from Samsung and LG. Some models are waterproof, some flagship models come with a leather back, etc. (Smartwatches are of course innovative but they are a separate product category.)

It’s easy to say that smartphone makers should embark on bold innovations and radical new product designs. But from the market players’ perspective I can understand their trepidation. One major strike and they’re out. Considering how difficult it is to get everything right, I can understand them preferring to play it safe. Even Apple and Samsung make massive mistakes. It is not easy to integrate a new generation of components into a seamlessly working device every year. The most difficult part seems to be the ability to really understand the users’ context and micro-situation and to offer a seductive, intuitive and compelling user experience.


The smartphone market would be more interesting if at least a few vendors dared to differentiate from the mainstream. Apart from Apple, it seems the players don’t really trust their own judgement and their product design capabilities. Instead they resort to copying each other, and Apple in particular.

To differentiate successfully requires a strong team of product designers and UI/UX experts with the self-confidence to deviate from the mainstream market’s iPhone-style design: a group of bright people who respect users, don’t fall for fads such as flat UI design, don’t believe their role is to teach the users to be “modern” and “cool”, and are strong enough to ignore peer pressure from other designers and techies.

As Michael Porter pointed out, strategy is about choices and deliberately making “no” decisions. To differentiate means to focus on certain market segments and ignore the preferences of other parts of the market. You can’t please everyone.

With smartphone sales in the billions of units there are certainly underserved user segments that want something other than Apple’s offer of “minimalist simplicity”. The first smartphone maker to discover and serve these user segments will be able to build a fiercely loyal user base.

Millimeter wave transmission – are secretive financial firms leapfrogging 5G and the wireless industry?

Low latency wireless microwave links between financial centers (London-Frankfurt)


Over the last few years, a new breed of specialised service providers has been offering low-latency wireless point-to-point networks between financial centers. But for the High Frequency Trading (HFT) firms who use these services, fast is not enough. To beat the competition, they want their connection to be faster than everyone else’s.

Secretive cash rich financial trading firms are already renting space on towers and are probably building their own wireless networks using millimetre waves (the same frequencies the mobile industry plans to use for future 5G networks). Considering the huge sums at stake for the winners of this race, it wouldn’t surprise me if they have already built in-house technologies while the mainstream mobile industry has only reached the planning stage.

Low latency is absolutely critical for HFT firms. The financial firm that can connect to the marketplace before its competitors stands to make billions in profits. For these fintech players, the fiber backbone is just not fast enough. For example, the lowest latency (delay) between London and Frankfurt in fiber is 8.35 milliseconds, while a signal at the speed of light in free air would need only 2.1 ms. Specialised wireless fintech service providers such as Perseus and McKay Brothers have managed to get the latency down to around 4.6 ms on this route. Considering that the theoretical floor for latency is 2.1 ms, there is plenty of room for aggressive HFT firms to build their own optimised networks and get below 4.6 ms.
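
These figures are consistent with the geography (my own arithmetic; the straight-line distance is derived from the 2.1 ms figure above): 2.1 ms at 300,000 km/s corresponds to roughly 630 km of free-space path. Over that distance, light in fiber at roughly 200,000 km/s would need about 3.2 ms even on a perfectly straight cable; the rest of the 8.35 ms comes from the longer real-world cable route and the routers along the way.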

Fiber’s “slowness” is due to the fact that cables don’t run in a straight line of sight between two cities. Another reason is that the speed of light in fiber is about 33 percent slower than the speed of light in free air. As the signal traverses the network, latency is also added each time it passes through a router.

Financial players work hard to reduce latency in every part of their infrastructure. Even a seemingly minor difference, such as which floor of a building the computers sit on, shows up in the measurements: in one example, the latency from the 2nd floor was 0.184 ms versus 0.183 ms from the 9th floor, a difference of a single microsecond.

A point-to-point network, with microwave links transmitting narrow beams between a chain of tall towers, is an old technology that was used for long-range communication before fiber optics. This recent revival of wireless has been made possible by better RF components (in the high gigahertz bands) and ultra-fast chips that can handle the signal processing without adding much latency.

This type of backbone network will never be cheaper than fiber. The need for a free line of sight and the curvature of the Earth put a limit on the longest distance between towers. It is possible to increase reach by building higher towers, but increased height adds to construction costs. Swaying of the towers in the wind and path loss due to rain attenuation are other problems that have to be overcome. Transmission capacity can be very high if wide enough carriers are used in the (idle) millimetre wave bands above 30 GHz, though this technology will always be dwarfed by fiber.
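
The height limit can be quantified with the standard radio-horizon approximation (my own addition, assuming normal atmospheric refraction): an antenna at height h metres has a radio horizon of roughly d ≈ 4.12 × √h km, so two 100 m towers can see each other over about 4.12 × (10 + 10) ≈ 82 km, and doubling the hop length requires roughly quadrupling the tower height.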

Microwave tower


We don’t know exactly how far the financial HFT firms’ secret in-house projects have come. But one thing’s for sure – they are not being held back by slow moving industry committees. They are most likely using GHz/millimeter waves but another solution could be lasers (from AOptix?). As the main objective is to reduce latency, my guess is that they’re also experimenting with transmission of some form of low level “raw” signal where the IP headers of the packets have been stripped away.

Even though this highly specialised fintech transmission network can be viewed as a custom-built race car, it is relevant for the wider mobile industry. For example, a crucial building block for future 5G mobile networks is wireless backhaul in the millimeter wave bands (above 30 GHz). One of the goals in the 5G mobile specification is a sharp reduction in latency. These are areas where the fintech players appear to be years ahead. Their solutions have already been deployed, or will be in the near future.

If these secretive players are ever willing to share their technologies, they could serve as important proof-of-concept installations for the rest of the tech sector. And if their solutions are ever licensed, new advanced technologies may enter the mobile market from an unexpected industry – fintech. It would certainly be prudent for the mainstream mobile market to pay attention to innovations in this field.

Laptop makers complain they can’t differentiate – how about a decent keyboard?

 

Almost all mainstream laptops today come with versions of the same badly designed keyboard. The first laptop maker who takes UI/UX seriously and builds a better keyboard will gain a significant competitive advantage over its competitors.

After recently helping a relative buy a new laptop, I am left puzzled by the OEMs’ apparent indifference to one of the most essential parts of a computer: the keyboard. It may be a low-tech electro-mechanical module; however, for most people it is the main way they interact with their machines. Users spend thousands of hours typing on their keyboards.

Improving the design of laptop keyboards is actually quite straightforward. Common sense and a basic understanding of ergonomics are pretty much all that’s required. To make the keyboard easy and intuitive to use, a good visual overview and supporting tactile feedback that reduces typing errors are important. The characters on the keys ought to be large and easy to read, and the distance between the centres of adjacent keys (the pitch) should not be too short. The keys should have a concave shape so one can easily find the edges without looking. Further visual cues could be added by using different colours to group the keys. A backlit or illuminated keyboard would add to the usability. Frequently used keys should preferably be larger and arranged so they are easy to find without looking. If the keys stick up a bit and the important ones have some empty space around them, this provides additional tactile cues. These empty spaces enable the user to find the keys by touch rather than having to look at the keyboard. (Additional details about the historic development of keyboards, usage of specific keys, design problems, etc. can be found here, here, here and here.)

Keyboards with most of these obvious design features already existed back in the 1980s. IBM’s classic mechanical PC keyboard, the Model M, is large and heavy but has better ergonomics than today’s laptop keyboards. The keys in IBM’s Model M keyboards have a buckling-spring mechanism that provides excellent tactile and audible feedback. The rugged construction makes them very durable, and many are still used today by enthusiasts. They fetch prices up to £80 on eBay and there is a small vendor that still makes them. It seems like development in this area has been going in the wrong direction for the last 30 years.

IBM, model M, classic PC keyboard with excellent ergonomic design


Many of the more specialised keys on the IBM Model M are irrelevant for most users today, and the buckling-spring mechanism would make the keyboard too deep and too heavy if used on modern laptops. However, the basic design principles are sound and could be used as inspiration for a better laptop keyboard.

 

Flawed design of most modern laptop keyboards

In the flawed design of a typical modern laptop keyboard, the keys are flat and grouped in a rectangular box. They are rarely placed in groups with empty space between them, which would make them easier to find. The letters on the keys are small, thin and often difficult to read. The most important keys (Enter, Delete, Backspace, Esc, Shift, Ctrl, Alt, F1-F12, Page Up/Down and the arrow keys) are surrounded by other keys, and the user has to look down from the display and aim in order to hit the right key. Around 20 percent of the available space on a standard 15 inch laptop is wasted by the inclusion of a large (and rather useless) number pad to the right. To make room for the number pad, all the keys have to be crammed very close to each other and made smaller. Because of this, the important keys to the right of the main keyboard (Enter, etc.) are much harder to find without looking down, as they are surrounded by other keys. The number pad also pushes the center of the main keyboard, including the touchpad, to the left. This creates an unergonomic working position where the user has to twist somewhat to the left. On 17 inch laptops, the additional space is not used to increase the size of the keyboard. Instead, there is just dead space on each side of the box-shaped keyboard.

Mainstream keyboard with design flaws (Ideapad Z50)


 

Mainstream keyboard with typical design flaws (Toshiba L70, 17 inch)


 

Average keyboard with design flaws, at least no number pad (MacBook Pro)


 

Lenovo’s decline since the Thinkpad T520

The last time I bought a new 15 inch laptop I had to search high and low to find a computer with a keyboard I liked. Most 14 inch laptops came without the superfluous number pad but I needed a larger 15 inch display. I settled for an older model of Lenovo’s flagship T-500 series ThinkPad business computer (T520) that was still available. The T520 has one of the best laptop keyboard designs I have found.

The T520’s keys are deeper than today’s nearly universal chiclet keys. The letters on the keys are large and easy to read. The Esc key is large and placed in the upper left corner and the row of Fn keys is separated from the main QWERTY keyboard by a gap of a few millimetres. The Fn keys are also made smaller to differentiate them from the adjacent regular keyboard. The Delete key is significantly larger than the surrounding keys, placed close to the upper right corner of the keyboard, and separated from the other keys by empty space on the left which makes it quite easy to find. The Enter, Shift, Backspace and Tab keys are large and placed at the outer edges of the keyboard. The Page Up/Down keys are placed logically above each other in a corner of the keyboard. I would have preferred that the four arrow keys be isolated from the others, but at least they are somewhat separate as they are down in the right corner of the keyboard.

Last laptop ever made with excellent keyboard design? (Lenovo Thinkpad T520, from 2011)


 

Palm rest with rounded edge, gentle on the forearms (Thinkpad T520)


Hardware controls for the laptop such as speaker volume/mute and microphone mute have dedicated hardware keys. No fiddling around trying to find the right two finger command on the main keyboard to turn the sound off. The touchpad has rugged mouse buttons both below and above the touchpad. Lenovo has added a convenient third mouse button between the left/right buttons which controls a screen magnifier. In addition, there is a red pointing stick in the middle of the keyboard. I rarely use it, but it is placed at the separation for left hand/right hand typing and offers a good tactile cue for the fingers. The only thing I don’t like on the ThinkPad T520 is the dead space to the left and right of the keyboard. It could have been used to expand the keyboard.

In addition, the T520 has a matte display. A glossy screen might look more vibrant in the store, but anti-glare displays are less straining on the eyes and easier to use in an environment with many light sources, such as an office. Another well-thought-out ergonomic detail is the rounded edge of the palm rest. Many other laptop OEMs (including Apple) have quite a sharp edge at the front that cuts into the hand or forearm when typing for long periods.

Unfortunately, each new upgrade of the Lenovo T-500 series has become more and more similar to laptops from the commoditised mass market (something that stirred up significant controversy among Lenovo users, here, here, here, here, and here). In this year’s model of the ThinkPad (T550), Lenovo have discarded most of their good design elements. The T550 now has a rather mediocre keyboard just like the majority of other laptop vendors, with the unnecessary number pad added. The rounded palm rest is gone, etc. Lenovo is the number one quality laptop OEM for demanding business users who are willing to pay for reliability and quality. All these odd design decisions baffle me. Are laptop makers utterly clueless about how their products are used, or are they deliberately allowing style to trump function? I don’t get it.

The latest ThinkPad (T550), hardly better than the average keyboard, cramped by the unnecessary number pad


 

Building a better keyboard – and laptop

Improving the keyboard ought to be fairly simple for the leading laptop brands. But instead of a flurry of activity among competing OEMs, the area seems stagnant. This could be an opportunity for an ambitious laptop maker. The first player that puts resources into keyboard design improvements will gain a competitive advantage.

I am not suggesting a radical departure from the established keyboard layout. Attempts at designing disruptive “ergonomic” keyboards have failed in the mainstream market due to the steep learning curve. The first step in improving existing keyboard designs is to simply look at the good ones that have already been on the market (mentioned above).

I am fairly certain that most users would prefer a larger, more spacious keyboard without a number pad (see here). I have almost never used it myself. The keys for 0-9 are already lined up above the letter keys and are simple to use. Shoehorning the unnecessary number pad on to a small laptop keyboard results in a cramped keyboard that is far less intuitive and much more difficult to use.

The specialised keys on standard computer keyboards are remnants from different eras of computing, going all the way back to TTY terminals and layouts designed for IBM mainframe programmers in the 1970s. It is time to move on. It’s likely that several dedicated keys on the standard computer keyboard are hardly ever used by mainstream users. These could be accessed indirectly via a modifier key instead, or perhaps be removed entirely.

For example, the large Caps Lock key is a waste of space on a crowded keyboard. I never use it, but often hit it by mistake CAUSING ALL LETTERS TO BE CAPITALISED. This is an annoyance. There has actually been an ongoing campaign against the Caps Lock key in the tech community for over a decade. If the Caps Lock key were removed, the prime keyboard space on the left side of the QWERTY keys could be used for something far more useful. Other keys that are seldom used and could probably be done without are: Scroll Lock, Insert, SysReq, Home, and End. Removing unnecessary keys would free up space and improve usability.

Google took some steps in this direction when they introduced Chromebooks in 2011. They removed the Caps Lock key, all the F1 to F12 keys, Home, End, Delete, Page Up/Down and the entire number pad. However, the Chromebook is not a fully featured computer, so the removal of these keys cannot be directly translated to the mainstream laptop market.

Apple have removed some of the more peripheral keys, including Page Up/Down, as well as the number pad. They have also removed the Delete key (deleting on the right side of the cursor) and only offer the Backspace key (deleting on the left side of the cursor). But they have kept the Caps Lock key.

Removing Caps Lock would be great but I find the Page Up/Down keys to be very useful. I am also sceptical about removing the Delete key. I use both the Delete and Backspace keys for deleting, and Delete has additional functions in Windows such as deleting documents and folders.

If the number pad and unnecessary keys are removed, the freed up space could be used for three blank programmable keys. These blank keys could easily be assigned through an integrated key re-mapping app. To make it simple, the non-assigned blank keys could be colour coded instead of adding more symbols. Some users might want quick access to certain symbols or non-English characters. Or they might want to record a macro for quick access to a function in the OS. (There is already freeware for re-mapping the entire keyboard such as AutoHotkey, but a simpler UI is needed for the mainstream market.)

The row of function keys is typically assigned double functions (F1 to F12 as well as 12 additional laptop-specific functions). Each laptop OEM does this in its own way and there is no established standard. Usability research could identify the most popular functions, which would be valuable input for improved product design.

Shallow chiclet keyboards are useful for making laptops as thin as possible. But personally, I prefer a keyboard that feels more solid and offers deeper keystrokes with higher tactile resistance. If space allows for it, I think laptop makers should reconsider their indiscriminate use of chiclet keys.

As part of this reinventing-the-laptop-keyboard project, I would include dedicated buttons for control of the laptop hardware (sound, microphone, webcam, etc.). In addition to improved usability, a real hardware switch would make it impossible to hack the webcam or microphone and use them for spying on the user. This feature is extremely important for business users, but will be appreciated by the mass market as well.

I would also ensure that the letters and symbols on the keys are large and easy to read. Having the keys grouped with colour coding to provide additional visual cues might also be helpful. Not all PC users are 21-year-olds with perfect eyesight.

I am not suggesting that there exists One ideal keyboard for the entire market. With around 175 million laptops sold annually, the market can easily be segmented. Number pad or not. Chiclet keys or not. Specialised legacy PC keys or not. Dedicated mouse buttons or integrated in touchpad. Cool stylish design or functional usefulness. For each of these segments, the market is large enough to be attractive for at least some laptop OEMs.

What I don’t understand is the vendors’ herd mentality. They all offer similar looking products, designed for an imagined mainstream customer. I sometimes get the impression that they suffer from “Apple Envy” and uncritically emulate whatever comes out of Cupertino. But if the PC market wants to emulate Apple, they can begin by offering some 15 inch laptops without number pads.

The great thing about an improved keyboard design is that it’s easy to demonstrate the use cases. The laptop brand that wants to stand out from the pack of competitors can do so without resorting to technical gibberish. It would suffice to explain how annoying it is to use the competitors’ standard keyboard, how the sharp edge of the laptop cuts into one’s hands, and how the competitors’ webcams/microphones can be enabled by spying hackers. This is so simple to explain it could even be done in TV commercials.

Embedded SIMs will take mobility to the next level

SIM Cards, soon to become an outdated technology

 
 

MNOs, fear not. Embedded SIMs will open up new markets and use cases, not destroy the operators

 

Traditional SIM technology has been around for nearly 25 years and operators view SIM cards as a critical control point for customer ownership. Due to the risk of losing the M2M market to competing unlicensed LPWAN technologies (and pressure from strong handset vendors), MNOs are finally beginning to embrace more modern embedded SIM solutions.

Traditional SIM cards are not fit for purpose in the IoT market. There are several reasons for this. First, M2M devices are often embedded inside other machinery and very difficult to access. An industrial M2M player with thousands of dispersed units cannot send out staff to change malfunctioning SIM cards, or replace all SIM cards if a new MNO offers a better price plan. The second reason is that wearables in the consumer space are often too small to fit a SIM reader. Think waterproof smart watches or smart jewellery. Third, tablets and laptops that only occasionally require mobile connectivity will remain an untapped market for the operators as long as the user needs to find a suitable SIM card, fiddle with it, and activate/pay for a data plan that is rarely used (for example when travelling).

For the M2M market, the operator-led standard bodies GSMA and ETSI have already developed a technical architecture for reprogrammable SIMs (termed eUICC, “embedded UICC”). The eUICC is a secure hardware module that can be permanently soldered onto the circuit board. When an eUICC is manufactured, the eUICC issuer loads the master keys of the eUICC onto the hardware chip. The eUICC issuer can be an MNO or another stakeholder such as the device maker. In the eUICC hardware, one or more operator profiles can be stored (including IMSI number, network key, and other settings). The eUICC issuer will maintain a central Subscription Manager which is a database with all available operator profiles. This gives device owners the ability to swap between operator profiles as well as download new ones.
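
To make the architecture a little more concrete, here is a minimal Python sketch of the data model described above. The class and field names are my own illustrative shorthand, not the GSMA's actual eUICC schema:

```python
from dataclasses import dataclass, field

@dataclass
class OperatorProfile:
    """One downloadable operator profile (field names are illustrative)."""
    operator: str           # e.g. "Operator A"
    imsi: str               # subscriber identity
    network_key: bytes      # the secret normally burned into a physical SIM
    settings: dict = field(default_factory=dict)

class SubscriptionManager:
    """The eUICC issuer's central database of available operator profiles."""
    def __init__(self):
        self._profiles = {}

    def publish(self, profile):
        self._profiles[profile.operator] = profile

    def download(self, operator):
        return self._profiles[operator]

class EUICC:
    """The soldered-on secure element holding one or more profiles."""
    def __init__(self, master_key):
        self.master_key = master_key   # loaded by the eUICC issuer at manufacture
        self.profiles = []
        self.active = None

    def install(self, profile):
        self.profiles.append(profile)

    def enable(self, operator):
        self.active = next(p for p in self.profiles if p.operator == operator)

# Swapping operators becomes a software operation instead of a SIM swap:
sm = SubscriptionManager()
sm.publish(OperatorProfile("Operator A", "234150000000001", b"ki-A"))
sm.publish(OperatorProfile("Operator B", "234200000000002", b"ki-B"))

chip = EUICC(master_key=b"issuer-master-key")
chip.install(sm.download("Operator A"))
chip.install(sm.download("Operator B"))
chip.enable("Operator B")   # the device owner switches profiles remotely
```

The point of the sketch is that changing operator becomes a transaction against the issuer's Subscription Manager rather than a trip to a shop for a new piece of plastic.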

Embedded SIMs offer advantages for M2M device vendors. They can make their products smaller and better encased and it’s possible to manufacture products with a blank eUICC (no operator profiles) for worldwide delivery.

Embedded SIMs are already in place in the M2M market and strong handset vendors such as Apple want to introduce the same technology in the smartphone market. The traditional SIM card is a control point for the MNOs’ market power and so far they have been resistant to e-SIMs. But market forces are moving fast and the operators risk being sidelined if they don’t embrace this new technology.

There are already several ways for users to bypass the operators’ SIM. For example, dual SIM handsets offer a crude version of the e-SIM functionality. The most obvious way to avoid excessive roaming charges is to switch SIM cards when travelling abroad. An existing service similar to e-SIMs is offered by MVNOs who offer multi-IMSI SIM cards for international travellers. The MVNO stores IMSI numbers and operator profiles from a number of MNOs in different countries on its SIM card. The existing multi-IMSI SIMs are hard coded into the SIM card today, but once the eUICC standard is in place it will be possible to reprogram physical SIM cards as well. Examples of MVNOs using multi-IMSI cards are WorldSIM and Truphone. They offer full international multi-IMSI based connectivity with voice and data based on SIM cards. Transatel is both a multi-IMSI MVNO and a wholesale service provider (MVNE) for other MVNOs. GigSky and Cubic Telecom offer a similar international MVNO service for data only connectivity. Apple partnered with GigSky to sideline the operators in their Apple SIM offer for iPad.

There is nothing in the eUICC specification that prevents it from being used in handsets. As of March this year, the GSMA has a working group for eUICC in mobile consumer devices with backing from major operators such as AT&T, Deutsche Telekom (T-Mobile), Vodafone, Orange, and Telefónica. The technical architecture is anticipated for delivery by 2016. Apple and Samsung are said to be heavily involved in the project.

However, one of the issues that remains to be solved is number portability. Today this is a cumbersome and mechanical process that can take up to a day, and risks leaving the phone number unreachable during the transition time. Number portability has to be instant for users to take full advantage of the ability to swap one operator profile for another in the eUICC on their smartphone. Number portability is implemented somewhat differently in different countries and a seamless global system requires extensive systems integration. The e-SIM specification from GSMA due next year will only define first generation e-SIMs, and my guess is that this issue will only partially be solved.


But even the first generation e-SIM specification will have far-reaching consequences across the value chain. MNOs will have to compete harder for customers. However, MNOs’ concerns that they will be marginalised and end up in cut-throat price competition over every phone call and data session are exaggerated. Several factors will prevent this end state. The price of typical phone calls today is so low that most users will find it too tedious to switch providers just to find the cheapest calls. The same goes for data, except when travelling abroad. Operators can still offer bundles, triple/quad plays, extras, loyalty points, subsidised handsets etc. to combat price-only competition. Most consumers actually prefer a bundle with predictable costs. In addition, operators will save on the costs of distributing physical SIM cards.

A new potentially powerful player will be the eUICC issuer that controls the initial access to the e-SIM. Only operator profiles offered in the eUICC issuer’s Subscription Manager Database will be available to the users. And the issuer can control how available profiles are displayed and which operator gets to top the list.

Candidate eUICC issuers are device makers, MNOs or managed service providers. A handset maker that controls the e-SIM could restrict users’ access to available operators. This would squeeze profit margins for the operators who are lucky enough to be allowed access to the user base. If an e-SIM equipped handset offers a restricted choice compared to the older SIM card handsets, it will be viewed as a step backwards. Even for a very strong handset maker such as Apple, it is far from obvious that consumers would accept such a restriction. Users will expect almost the same freedom to select operators of their choice as with a physical SIM card. An overly restrictive e-SIM will most likely be viewed as a strong negative factor. Before the e-SIM has reached maturity I expect smartphones to be equipped with both an e-SIM and a traditional SIM card slot.

In addition to consumer resistance, regulators will most likely mandate fair and open access for all operators to the eUICC issuers’ Subscription Manager. The fear that eUICCs will move all market power to the device makers is exaggerated. And there is nothing preventing MNOs from becoming eUICC issuers themselves for devices sold through their own retail channel. Subsidised handsets could also be equipped with an eUICC issued by the MNO (for the period after the operator lock-down).


Embedded SIMs based on the eUICC are a critical enabler for the IoT and wearables market. But the interesting long term potential is that e-SIMs can reinvent the way we interact with our handsets and devices. Currently, if a user wants to change handsets to go to the gym or on a night out he/she will have to fiddle with the SIM card and physically move it. If the two devices don’t accept the same size SIM card it is even more complicated.

In a fully developed system with embedded SIMs, it will be possible to easily move the active phone/data connectivity from one device to another, including cars. Users will be able to have several devices for their varying needs and use cases. In addition, they could have several active subscriptions/phone numbers in the same device. They will also be able to split connectivity between different devices. For example, have text messages, IM, and notifications diverted directly to a non-tethered smartwatch while the full mobile connectivity stays with the smartphone. Or have certain notifications sent to a piece of smart jewellery or smart clothing. It will also be possible to have more than one phone number on the same device, have disposable phone numbers, and move the subscribed numbers between the user’s devices. This flexibility will of course also apply to all communication that doesn’t rely on a phone number (such as WebRTC, Skype, WhatsApp, etc.). Even devices without full mobile connectivity will probably be equipped with e-SIMs. For example laptops, tablets, and smart home hubs. And the hardware based security from the eUICC module will be an excellent enabler for payment platforms. In this scenario the full potential of mobility, connectivity, and IoT will be unleashed. One can only hope that the working groups at the GSMA will be forward-thinking enough to see this.

Apple don’t understand the luxury market – the $17,000 Apple Watch Edition could damage Apple’s brand

The $17,000 Apple Watch Edition in rose gold


There are two fundamental flaws in the way Apple have positioned the $10,000 to $17,000 Apple Watch Edition. First, a luxury product that will be obsolete after one year contradicts the luxury market’s fundamental logic of permanence and long lasting value. Instead, the Apple Watch Edition risks being perceived as an ostentatious display of money today, and an embarrassingly outdated item after next year.

Second, the bling factor of the $17,000 Rose Gold Edition conflicts with Apple’s core brand values among its broad user base. Apple stand for cool 21st century modernity, elegant simplicity, and near affordability for the middle class. One of Apple’s most important user segments is the creative class in the advanced economies. They have a laid back, postmaterialist and slightly bohemian anti-establishment value system. Conspicuous consumption and a materialist 1980s style display of wealth is the antithesis of this socio-economic segment of the market.
 

Navigating the luxury brand market

Strong brands always stand for something and have a clear identity. They are a statement and are by definition limited in scope. It is difficult for brand owners to extend their brand into new product categories or market segments. The risk is that their core brand value will become diluted, or even worse, that their extension will destroy the original brand value. Despite their efforts to nurture and communicate their brand message, brand owners are not in control. A brand’s “brand” is the sum of the perception of the brand by all users, former users, and non-users. It is the consumers’ opinions that ultimately determine a brand’s value.

The luxury market has its own particular logic and it’s difficult for non-luxury brands to break into this market segment. Luxury brands typically offer a combination of superior quality, high performance and features, superb services, brand mystique, and a compelling narrative of the brand’s pedigree and origins. They are produced in very small series and are often more or less handmade. They tend to be built upon emotional appeal and are imbued with a sense of exclusivity and sophistication. Luxury brands are strong and often controversial. They can be beloved by the target market but viewed as ridiculously overpriced and “out” in other socio-economic market segments. For many buyers of luxury products, the strong brand identity becomes a part of their own self-expression.

But the luxury market is not just driven by status-seeking and symbolic appeal. The majority of luxury products have superior functionality, very high performance, and a solid build quality that gives them a much longer life span than similar mainstream market products. The second hand value of most luxury products is quite high and can even appreciate over time. Permanence is a core value in this product category. For example, the watchmaker Patek Philippe’s advertising slogan is: “You never actually own a Patek Philippe, you merely look after it for the next generation.” De Beers’ slogan is “A Diamond is Forever”. Paying an exorbitant price for a product with very high second hand value and a long life span is to some extent rational.

Luxury brands can be divided into two segments. One segment consists of well-known brands that are easily recognisable. For example: Rolls-Royce cars, Louis Vuitton handbags, Four Seasons Hotels, or Rolex watches. These brands enable the customers to display sophistication and wealth, and are in some sense aspirational brands. The other segment of the luxury market consists of smaller brands that are less well-known by the general population but highly regarded by those in the know. The fact that so few people recognise these brands makes them even more exclusive for the connoisseurs. They are usually more expensive than well-known luxury brands and offer an even higher level of sophistication and exclusivity. In the game of social status, buyers who prefer this upper echelon of luxury brands tend to turn their noses up at those who buy well-known luxury brands. They view them as unsophisticated and solely interested in displaying their wealth. Interestingly, anti-establishment middle class consumers often share this disdain for what they consider to be the nouveau riche’s vulgar display of wealth. For example, the urban hipsters who happen to be one of Apple’s core customer segments.
 

Difficult for digital tech products to enter the luxury market

There are very few market-leading tech brands that have attempted to enter the luxury market. The fast pace of innovation and increased raw performance quickly renders last year’s top-of-the-line products obsolete. For consumers who can easily afford it, it is only natural to frequently replace their TV, computer, phone, etc. However, paying ten times the price for a “luxury computer” or “luxury mobile” that will be disposed of after a couple of years serves little purpose. Functionality and performance trump any potential symbolic luxury brand mystique. Even in the early 1990s when mobile phones cost $4,000 there were no luxury mobile phones. Functionality was the only thing that mattered during this phase of rapid technological development.

There exists a niche market for very expensive laptops from the major OEMs, but they can hardly be called luxury as the high price is based on the expensive high performance components that go into the machines. Sometimes these laptops are white labelled by luxury brands such as Ferrari or Bentley. There is also a microscopic market for bespoke laptops. For example, a MacBook Pro built into a new luxury case or covered with 24 karat gold. But there are no independent luxury laptop brands that produce better computers than the market leading OEMs.

However, some luxury technology products do exist. For example: audiophile sound equipment, watches, and cars. In these product categories, performance is dependent on craftsmanship, build quality, and analogue technology, which improve at a snail’s pace compared to microprocessor based products. As these products will not become obsolete after one year there is room for luxury brands in these categories. They will retain a high second hand value. Hence, buying these luxury products can almost be justified without resorting to emotional brand mystique.
 

Vertu – the exception to the rule

There are rare examples of luxury tech products, but they are mostly the exception to the rule. A few iPhones have been refitted in gold or platinum cases, covered with diamonds and sold with a six or seven figure price tag. However, the only luxury phone brand with any significant sales volume is Vertu. During its time as the world’s leading mobile phone maker, Nokia entered the luxury market with the subsidiary company “Vertu” in 1998.

Pre-smartphone mobiles had a slower technological trajectory and Vertu positioned itself with craftsmanship, style and service – not advanced phone functionality. Their phones are priced from $6,000 to $300,000. Vertu use materials such as titanium, gold, leather, buttons made of sapphires and rubies, sapphire displays, etc. The most expensive models are decorated with diamonds. They also offer a bundled luxury 24/7 concierge service for their customers, reachable via a dedicated button on the phone. But as a phone maker Vertu have not been competitive. The product release cycle of their flagship model “Signature” has been around a decade. They launched their first Android smartphone only in 2013. Before that Vertu could only offer feature phones.

Vertu could not compete in the smartphone market during the period of early rapid technological development (2007-2013). Only when technological development slowed down and matured was it possible for a luxury brand to enter the smartphone market.

The luxury phone market is a tiny niche market. Vertu have sold around 350,000 phones worldwide since their first model launched in 2002. They only made eight of their $310,000 “Signature Cobra” model. There is not much for mainstream tech players to take away from Vertu’s narrow business model. Are customers paying for the bundled concierge service, the brand mystique, or the device itself? Their main markets have been Asia, the Middle East and Russia, where cultural differences make an ostentatious display of wealth socially acceptable. A few celebrities have been spotted with Vertu phones, but they are regularly given free luxury products as promotion. Vertu have a strong brand message in their target market. However, this message will probably evoke an equally strong (albeit negative) reaction among the tech savvy urban middle class in advanced economies.
 

Apple’s dilemma

Brand owners can’t have it both ways. Their brands have to stand for something, and they can’t stand for mutually exclusive messages. Ostentatious Bling and Cool Advanced Technology do not make a good match. Apple are undermining the core values of their own brand by entering the bling market. The Apple Watch Edition is not even high quality bling. It’s just an overpriced, mass-produced Apple Watch made in gold that will lose most of its value next year. Apple’s current support program ends seven years after they stop producing a product. If they don’t make an exception for the Edition, Apple will officially declare the 2015 Edition obsolete in eight years and cease all service. Clearly, they have a lot to learn about what constitutes a luxury product.

The dilemma is that luxury products are built to last while the microprocessor based components in tech products rapidly become obsolete. Buyers who can afford the best will not settle for last year’s inferior technology. There may be a few oligarchs who want to flaunt their conspicuous consumption by buying a $17,000 Apple Watch Edition, well aware that they will dump it in a year for the Apple Watch 2.0. But for the rest of us such a purchase just seems irrational and pointless. In my opinion, the Apple Watch Edition has a negative brand value.

Imagine if Apple had released a $17,000 iPhone Edition in rose gold back in 2007. (The first iPhone was a 3.5 inch feature phone that came without the app store and without 3G connectivity.) What would the used price for that phone be today? Scrap metal value of less than $1,000? Compare that to the value development of a Rolex or a Hermès Birkin handbag bought in 2007.
 

How Apple could turn things around

The good news for Apple is that there is a way to improve the situation. In order to make the high price of the Edition more justified, Apple could offer buyers a premium membership scheme (including those who have already bought the Edition). All members would be offered the opportunity to upgrade their Edition Watch on release day for ten years for $999 per upgrade. They could send in their old Edition watch and Apple would transfer everything on the old watch to the latest Edition model and send it back to them (or perhaps swap the electronics inside). To sweeten the deal, Apple could also offer a guarantee that premium Edition members will be able to buy new iPhones as well as all other new Apple products on release day. This offer wouldn’t cost Apple much and would propel sales of the Edition because it removes rapid obsolescence from the equation. Owners of Edition watches would no longer be viewed as vulgar show-offs, but as people who made a somewhat rational decision by paying a premium for a ten year upgrade guarantee and access to new Apple products on release day.
 

A way for luxury brands to enter the IoT wearable space

The formula above could be the key to the future IoT market for luxury smart wearables. Many luxury brands are eager to enter this market. Smart luxury earrings, necklaces, rings, pendants, brooches, sunglasses, pens, or watches could be sold with an upgrade guarantee.

An ultra-expensive price for the initial purchase would preserve the luxury product’s mystique and exclusivity. Each time the luxury brand releases a hardware upgrade, they could offer to swap the electronics and battery inside the product for a minor upgrade fee. If the product can’t be disassembled, users would be allowed to return their original product and have it replaced by the latest model for a replacement fee. This would tie the customers closer to the brand and generate ongoing cash flow in the form of upgrades. It would also ensure that obsolete products are removed from the market instead of being sold for embarrassingly low prices on eBay.

I am redesigning my blog by removing the Swedish version of the blog and the dual-language plugin WPML. My intention is to make my older Swedish blog posts accessible in an archive but for now they are invisible. (Due to an upgrade of the server side PHP version, a bug has appeared that doubles the words “Categories” and “Archives” in the right sidebar.)

Shady MVNOs damage the mobile market – time for Ofcom to take action

These days, setting up an MVNO is easier than ever. You don’t need to know anything about mobile to become an MVNO player. Nearly all operations can be handled by managed service providers (MVNEs) and the initial Capex is in the range of £500k, but could possibly be as low as £25k. Low barriers to entry increase competition and put pricing pressure on incumbent operators. Surely this must be a good thing, right? Not so fast.

Well-known and established MVNOs in the UK such as Tesco Mobile, giffgaff, Virgin Mobile, Talkmobile, Asda Mobile, Lebara, TalkTalk Mobile, and Utility Warehouse offer service on par with the network providers themselves.

However, there are also a large number of smaller more obscure MVNOs. How many people have heard of White Mobile, Delight Mobile, Vizz Africa, Vectone Mobile, tello, Now Payg, Talk Home Mobile, The People’s Operator, Ovivo Mobile, Lyca Mobile, or Econet Mobile?

These MVNOs sometimes have appallingly bad service. A while ago I tried Lyca Mobile. I didn’t expect much, but I did assume that “mobile” was a mature standardised product (voice, voicemail, SMS, MMS, data). How wrong I was. Voicemail didn’t work, I couldn’t call some numbers and operators, there were texts that didn’t go through, and MMS wasn’t included in the service. When I called the Swedish railway information number, Lyca Mobile charged me almost £10 for a six second call. They claimed it was a premium number. It wasn’t, and their tariff chart didn’t include anything about this exorbitant rate. Their customer service consisted of script reading from call centres in India, and whenever I deviated from their script they immediately hung up.

I have read reviews of some of the other smaller MVNOs and it was easy to find similar complaints. Talk Home Mobile seems to have a billing system that grossly overcharges and eats away the minutes in your allowance very quickly (here). Users of Vectone report myriad problems (here, here, and here).

Last year all 50,000 users of Ovivo Mobile were left stranded when the service suddenly shut down. Ovivo had stopped paying for network access and Vodafone cut the cables. It was pure luck that Ovivo was gracious enough to give users their PAC-codes so they could keep their mobile numbers.


Mobile is an essential basic service and it is unacceptable that consumers risk losing their access, prepaid minutes, voicemails, and phone numbers if a shady MVNO ceases to operate. Consumers should also be able to trust the accuracy of the operators’ billing systems. A plethora of small, unstable, shady MVNOs will not “increase competition” but rather scare consumers away from competitive new market entrants. Only the incumbents will benefit from this.

Consumers who sign up for a mobile plan with an MVNO cannot be expected to make their own risk assessment of the bankruptcy and fraud risk. Nor can they be expected to know that they might lose their phone number if the service provider suddenly ceases to operate. And consumers who suspect foul play with the billing system should not have to take on an MVNO themselves in the courts.


In my opinion, this is the responsibility of Ofcom, who should tighten the regulatory framework. For example, Ofcom could act as a mystery shopper. They could buy SIM cards from all the operators, and then run diagnostic calls/texts/data generated by software. This would enable Ofcom to evaluate the accuracy of the billing systems and check that all number series are reachable. Network operators who don’t get paid should probably not be allowed to disconnect an MVNO, but should instead be given the power to temporarily take control of the MVNO. All tariffs should be available on the websites in a clear format. The charges that are most likely to impact the user’s bill should be easily accessible. MVNOs should not be allowed to hide extreme out-of-bundle charges in the fine print. And if an operator doesn’t offer something that users expect in a basic mobile service (for example SMS or voicemail), it should be clearly stated from the start.
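
As a sketch of what such automated mystery shopping could look like, the following Python compares a log of software-generated test calls against the itemised charges returned by an operator. The tariff figures, the number classification, and the per-started-minute billing convention are assumptions made up for the example:

```python
from dataclasses import dataclass

@dataclass
class TestCall:
    number: str
    seconds: int

@dataclass
class BilledItem:
    number: str
    charged_pence: float

# Published tariff in pence per minute (illustrative figures, not any real price plan)
TARIFF = {"uk_landline": 3.0, "uk_mobile": 8.0, "international": 19.0}

def classify(number):
    if number.startswith("00") or number.startswith("+"):
        return "international"
    return "uk_mobile" if number.startswith("07") else "uk_landline"

def expected_charge(call):
    started_minutes = -(-call.seconds // 60)   # assume billing per started minute
    return started_minutes * TARIFF[classify(call.number)]

def audit(calls, bill, tolerance=0.01):
    findings = []
    billed = {item.number: item.charged_pence for item in bill}
    for call in calls:
        charged = billed.get(call.number)
        if charged is None:
            findings.append(f"{call.number}: call missing from the itemised bill")
        elif charged > expected_charge(call) + tolerance:
            findings.append(f"{call.number}: charged {charged}p, tariff implies {expected_charge(call)}p")
    return findings

# A six-second call to an ordinary mobile number should cost one started minute,
# not nearly £10 (cf. the Lyca example above).
print(audit([TestCall("07700900123", 6)], [BilledItem("07700900123", 980.0)]))
```

Run at scale across operators and number series, this kind of script would give the regulator hard evidence of overcharging and unreachable numbers without relying on consumer complaints.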

If the MVNO market continues to operate like the Wild West, sooner or later real scammers will spot this opportunity. They can set up a seemingly legitimate MVNO and run it for a year to build a brand, gain a customer base and be included in price comparison sites. Then they will lower their prices and start an aggressive ad campaign. For example by offering attractive pre-paid 6 and 12 month pay-as-you-go plans. After a while they will change their terms of service and tariffs and introduce sky high out-of-bundle rates, stop paying their network operator and other vendors, offer the latest iPhone on sale, and remove themselves from the company directorships. And while they’re at it, they’ll probably overcharge all direct debits before they go bankrupt.

Let’s hope that this scenario never plays out as it would damage all MVNOs’ credibility and make it even more difficult for new market entrants to compete with the incumbent network providers.


P.S. I recently left O2’s network for EE’s. O2 had the best network back in 2012 but their service has gradually deteriorated, at least in the south east of England (confirmed by Which? and Open Signals coverage/QoS maps). This congestion is most likely caused by an unwillingness to invest in more network capacity by the cash-strapped owner Telefónica. As Telefónica have been trying to sell O2 UK for some time, they probably don’t want to put any more money into their UK network. This is a risky strategy if and when consumers begin to take notice. Even if a sale of O2 were to take place tomorrow, it would take several months before everything was finalised with new owners in control. I expect even further deterioration of customer satisfaction and QoS for O2 and all the MVNOs that run on their network. It could take up to a year for O2’s network to improve.

According to Open Signals, EE has better network capacity in the South East, which was very obvious when I switched. However, EE is at the bottom in the Which? customer satisfaction survey and they are one of the worst major operators in Ofcom’s complaint league. As I didn’t want to be an EE customer directly, I chose one of the MVNOs that run on EE’s network. So far, so good.

Post Snowden ripples – users going anonymous on the Net

 

The Snowden leak is a game changer. User angst over privacy and anonymity is at an all-time high. 61% of US net users want to do more to protect their privacy.

 

It takes time for consumers and market players to absorb the full impact of game-changing events such as the Snowden leak. When the news broke in June of last year, the first Pew poll showed that 56% of Americans found NSA’s mass surveillance acceptable. After more than a year of drip-fed additional revelations from Snowden’s enormous data material, people have begun to realise the full extent of the security state’s spying. A Pew poll from June this year found that support for the NSA’s spying had fallen to 42%, and 74% thought Americans should not have to sacrifice privacy for safety from terrorism. Another Pew poll found that 86% of US Internet users have taken steps online to remove their digital footprint. A Harris poll from March 2014 showed that 47% of US users have changed their online behaviour after the NSA revelations. 26% said they are doing less online banking and online shopping, and in the 18-34 age group the figure is 33%. A new study from Pew last week shows that awareness and concern are rising even more among Americans. 90% of the respondents agreed that users have no control over their online information and 80% are concerned about the way advertisers take advantage of information on social media. 87% had heard something about government surveillance and only 36% of users support the government’s online snooping. Most important of all, 61% of users said they wanted to do more to protect their privacy. These negative sentiments are most likely even more pronounced outside the US. In another survey of 10,000 people in nine countries from April by ComputerWeekly, 75% expressed concern about their privacy online.

Awareness is the first step, the next is taking action. Once users begin to educate themselves they will realise that the potential for intrusions is much greater than they initially thought.  As a test, I spent a few hours educating myself about privacy and anonymity for browsing with Firefox on a PC. This is what I found:


Basic advice about anti-virus programs, firewalls, avoiding obvious passwords, and deleting cookies is far from enough.

The most intrusive tracking and surveillance is done by advertisers and analytics firms. Government surveillance is ubiquitous yet invisible to the average user in most cases. However, advertisers’ tracking is actually quite obvious. We know that advertisers can easily create an approximate profile of where you live, your gender, age, interests and income. They can even match your real name and address with your browser surfing patterns if you fill in your customer data on a website that sells this information to the tracking companies. Considering all the revelations from the Snowden leak, it would be reasonable for users to also suspect that the government surveillance agencies are buying advertisers’ tracking profiles.

Deleting cookies is not enough. There are supercookies in Flash (LSO) and in HTML5 there is a cookie-like function called web storage (DOM). These can be blocked by installing the Better Privacy plugin in your browser.

Web servers can add unique ETags to identify each browser without the use of JavaScript or cookies. They were designed for increased browsing performance but can also be used for tracking. ETags are difficult to avoid, but deleting the cache will remove them. The browser plugin Secret Agent blocks ETags but the downside is that it can diminish the browsing experience.
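
To illustrate why ETags work as a tracking mechanism, here is a minimal sketch of a server that hands every new visitor a unique ETag and recognises returning browsers when they echo it back in the If-None-Match header. It uses only Python's standard library and is a simplified illustration of the technique, not any real tracker's code:

```python
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

SEEN = {}  # ETag -> number of visits

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A browser with a cached copy echoes its ETag in the If-None-Match header.
        etag = self.headers.get("If-None-Match")
        if etag in SEEN:
            SEEN[etag] += 1           # returning visitor recognised without any cookie
            self.send_response(304)   # "Not Modified" keeps the ETag alive in the cache
            self.end_headers()
            return
        etag = '"%s"' % uuid.uuid4()  # new visitor: issue a unique identifier
        SEEN[etag] = 1
        self.send_response(200)
        self.send_header("ETag", etag)
        self.send_header("Cache-Control", "private, max-age=31536000")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"tracking pixel")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackingHandler).serve_forever()
```

Because the identifier lives in the browser cache rather than in the cookie store, deleting cookies does nothing; only clearing the cache (as noted above) removes it.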

Another privacy browser plugin is Ghostery which blocks trackers. The HTTPS Everywhere plugin forces encryption between the browser and server when possible. The NoScript plugin provides protection from malicious scripts on untrusted websites. NoScript blocks all untrusted scripts and gives the user full control over enabling or disabling each script. For every visited webpage, NoScript provides a list of all scripts used on that page. However, the best plugin of all is Adblock Plus which blocks almost all ads. Adblock Plus has over 300 million downloads.

Enabling all these plugins will degrade the browsing experience on some websites and can slow down Firefox. For example, if NoScript is installed, users will have to open the list of scripts manually to enable them. This can be an inconvenience. On the other hand, webpages often load faster, in particular if all ads are blocked with Adblock.

Webpages load even faster if Flash is disabled. Flash is potentially a huge security hole and a common recommendation is to disable Flash in the browser and only enable it temporarily when really needed.

An additional security measure is to use a VPN. Subscribing to a VPN will hide your IP address. A VPN creates an encrypted tunnel from your computer to one of the VPN provider’s servers, where your surfing traffic will enter the open internet. The servers can be located in another country, which enables users to stream TV or video that is normally blocked for users from other countries (for example BBC and Netflix USA).

Selecting the best antivirus program and firewall is also important. Bitdefender gets top test results, but aggressive antivirus programs can sometimes slow down the computer.

When it comes to search engines, users can reject Google in favour of DuckDuckGo or Startpage, which do not track users’ search patterns.

Many cybersecurity experts even recommend putting a piece of black tape over the webcam in order to prevent it from being used as a spying device. For those who want to achieve an even higher level of privacy and security there are more challenges, such as avoiding browser fingerprinting and WiFi security breaches. For the advanced user there are additional technologies such as TOR, Bitcoin, Tails, IceDragon, Ubuntu/Virtualbox, PGP, Protonmail, Comodo, and Online Armor.


It is unlikely that mainstream consumers will utilise all of these security measures, but installing the browser plugins is fairly easy and there are “How To” guides that explain how it’s done. Once in place, these users are a permanent loss for the advertisers. And it will be the active, well educated, high-income users who go first; the most valuable targets for advertisers.

When the mainstream market begins to embrace anonymity and encryption the effects will be wide-reaching. Ad-financed websites and advertising networks will be hit first. Social media sites where real names are used such as Facebook are also at risk.

Users will probably also be increasingly suspicious of cloud service providers such as Dropbox. Not because of intrusive advertisers, but because of NSA spying on stored data and the risk of hacked accounts. Companies that have dumped their own IT infrastructure and moved everything to the cloud will also have a hard time proving that their customers’ data is secure and has not been scooped up by the NSA somewhere inside the cloud.


But this development is also a business opportunity for cybersecurity providers, anonymisers and VPN providers. For example, Protonmail is a startup offering encrypted email. The company and servers are located in Switzerland, which has very strict laws regarding data protection. All stored emails are encrypted before leaving the customer’s computer and Protonmail does not have the decryption keys.

Another example is SpiderOak, a competitor to Dropbox that offers cloud backup. Encryption makes it impossible even for SpiderOak’s own staff to view their customers’ data. For Android smartphones there is the app Redphone for encrypted voice calls. TextSecure is an app for encrypted messaging for iOS and Android. Silentcircle offers a suite of services for encrypted communication.
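
The common principle behind these services is that encryption happens on the user's own device, so only ciphertext ever reaches the provider. A few lines of Python with the widely used cryptography package illustrate the idea; this is a sketch of the principle, not SpiderOak's or ProtonMail's actual implementation:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key is generated and kept on the user's own machine; the provider never sees it.
key = Fernet.generate_key()
box = Fernet(key)

plaintext = b"Meeting notes: nothing to hide, still nobody else's business."
ciphertext = box.encrypt(plaintext)

# Only the ciphertext leaves the computer. A provider (or anyone intercepting the
# upload) that holds the ciphertext but not the key cannot read the contents.
upload_to_cloud = ciphertext

# Back on the user's device, the locally held key decrypts the data again.
assert Fernet(key).decrypt(upload_to_cloud) == plaintext
```

The provider can still be compelled to hand over what it stores, but what it stores is unreadable without a key it never had.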

Yet another indicator of the strength of this trend is the rapidly growing demand for NSA-secured cryptophones. Several small companies have developed bespoke smartphones, often based on hardened versions of Android. With prices up to $3,500, they are becoming the new status symbols for business executives. Some of the brands are: GSMK Cryptophone, Blackphone, Teopad, Hoox m2, In Confidence, and Secusmart.

And the smartphone giants Apple and Google are also offering better encryption to protect their users’ privacy and security. The iPhone 6 and iOS 8 have integrated new on-device encryption which Apple can’t bypass even if they are required to do so by the authorities. Google is working on a similar solution for the new Android 5.0 release Lollipop.

It is unclear how far the various segments of the user base will go in order to protect their anonymity and privacy. But this trend has the potential to be very disruptive, even if only 40% of users take action. Ignore it at your own risk.

My previous blog posts about design

Here is a compilation of links to my previous blog posts about product and service design. My view is that successful design has to build on the user’s context and micro-situation. One needs to put oneself in the user’s shoes and really understand his or her situation, pre-understanding, and purpose for using the product or service. Failure to design from the user’s perspective can mean the difference between failure and success.

  • For example, despite super glossy PC screens looking vibrant and stunning in the store, the reflections and intense brightness cause eyestrain with prolonged usage.
  • Another example is the megapixel race for cameras. A sensor with too many megapixels is pointless as the cheap lens in a smartphone or cheap digital camera will be the limiting factor. In addition, a high-megapixel sensor is less sensitive in low light conditions than a sensor with a more modest pixel count, because each individual photosite is smaller (see the rough arithmetic after this list). Where are most smartphone pictures taken – outdoors in bright sunlight or indoors? Again, technology-driven vendors ignore the user context.
  • Design is important even for something as basic as mobile voicemail. If someone calls your mobile and you don’t answer, the call is forwarded to your voicemail after a few rings. Last time I checked (in Sweden), no operator offered more than 30 seconds before the call was forwarded. There are a number of situations where 30 seconds is simply too short. As a consumer, I want the option to set it to 45, 60 or 90 seconds. Again, a design decision that doesn’t take the user’s context into account.
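
A rough back-of-the-envelope calculation shows why the per-pixel light argument holds. The sensor size and resolutions below are approximate, typical values chosen purely for illustration:

```python
# Same 1/3-inch class smartphone sensor (roughly 4.8 mm wide), 8 MP versus 16 MP.
SENSOR_WIDTH_MM = 4.8

def pixel_pitch_um(width_pixels):
    return SENSOR_WIDTH_MM / width_pixels * 1000  # micrometres per photosite

pitch_8mp = pixel_pitch_um(3264)    # 8 MP at 4:3 is roughly 3264 x 2448 pixels
pitch_16mp = pixel_pitch_um(4608)   # 16 MP at 4:3 is roughly 4608 x 3456 pixels

print(f"8 MP photosite:  {pitch_8mp:.2f} um")     # ~1.47 um
print(f"16 MP photosite: {pitch_16mp:.2f} um")    # ~1.04 um

# Light gathered per photosite scales with its area, so on the same sensor size
# the 8 MP pixels collect roughly twice the light of the 16 MP pixels.
print(f"Per-pixel light advantage: {(pitch_8mp / pitch_16mp) ** 2:.1f}x")
```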

Digital consumers – free trend forecasting from communication agencies

We all know that digital is massively invading and disrupting all industries. One indicator of this development is that part of the mainstream advertising industry finally seems to be embracing digital.

The “ad” industry is not just the creative part. It also comprises PR firms, market researchers, insight and database marketing specialists, and experts in digital and lifestyle trends. The major players in the industry (WPP, Omnicom, Publicis Groupe, and IPG) are conglomerate holding companies with investments in all parts of the industry. It seems that these players are beginning to put the pieces together around digital.

They give away quite a lot of their research for free. Here are some free information sources that could be useful for us industry analysts and Zeitgeist watchers:

  • JWT Intelligence has quite a lot of free material, both trend forecasting and digital material. Some additional reports can be found on JWT Worldwide.
  • Trendwatching is an independent firm focusing on innovations, digital, and consumer trends. They have a global network of 3,000 freelance trendspotters. A competitor with a similar business model is Springwise, which focuses on innovation and discovering new business models in the B2C space. Another competitor is The Futures Company, though they are slightly less focused on digital.
  • Havas Media has reports about global media trends as well as material about digital marketing.
  • The largest conglomerate, WPP, also provides free reports and articles from their portfolio companies. Some are about digital. The competing holding company InterPublic Group (IPG) has a compilation of free reports and blogs from their portfolio companies, though not much about digital.

One impression from quickly browsing through these sources is how digital has enabled and unleashed innovation all over the world. Here are a few examples (from Trendwatching’s service):

  • A Fiat car showroom in Brazil has equipped its staff with head-mounted video cameras. Customers can contact the showroom and the sales rep can walk around the car, open the hood, get in the car – all while talking to the customer and filming the car.
  • Pizza Hut in Panama has delivery scooters with small ovens built in. Their customers receive piping hot pizzas straight from the oven on the scooter.
  • Volvo has developed a Roam Delivery mobile app that makes it possible to use their cars as a delivery destination for couriers. Using the app, Volvo owners can give the courier the exact GPS position and temporary electronic key to the car.
  • The BFF Timeout app encourages Filipinos to focus on each other, rather than on their phones. Once all individuals in a group have opened the app together (sponsored by McDonalds and Coca Cola), the timeout begins and points are earned for every moment all phones are left alone.
  • Africa’s own startups have lofty continent-wide ambitions. Jovago is a hotel booking service from Nigeria, Oju from Mauritius launched African emojis for smartphones, the supermarket chain Choppies Enterprise is from Botswana, Nigerian Jumia is the Amazon of Africa, and Africa’s Netflix is the Nigerian startup iROKO. All of these companies are now expanding into neighbouring countries.
  • Dutch train operators Prorail and NS plan to roll out platform-length LED displays that provide real-time information to passengers. A 180 meter LED strip shows information on carriage crowdedness gathered from infrared sensors inside the carriages, as well as information on where carriage doors will open and the location of quiet carriages in the train.
  • French shopping center specialist Klépierre has developed an “Inspiration Corridor” equipped with large touchscreens on the walls. Sensors in the corridor analyse a shopper’s age, gender and apparel. The walls are then filled with personalized shopping suggestions. Tapping on the touchscreen sends directions to the customer’s mobile.

Some of these digital innovations are of limited value, some are just silly, but some are brilliant. Tinkering, creating, and innovating take place everywhere – not just in Silicon Valley, Manhattan, Tokyo, Shanghai, and London. The technology platforms and infrastructure are in place around the world and the barriers to further innovation are lower than ever.

Ahonen’s Rest of Decade forecast for smartphones, tablets etc.

Tomi Ahonen has just released his Rest of Decade forecast for the smartphone and PC market. His prediction is that all phones sold will be smartphones by 2018/2019 and that the low-end dumbphone market will cease to exist. Android will be the dominant platform and the PC market will remain almost flat, though within the PC segment, tablets will almost completely replace traditional PCs, holding a 77% market share by 2020. (Ahonen defines a tablet as an ultraportable PC, not a large smartphone. I agree. A tablet can’t be used with one hand while walking down the street. And you can’t put a tablet in your pocket.)

According to Ahonen, as the smartphone market rapidly expands into the price sensitive mass market, Apple/iOS will lose market share and be relegated to a very profitable high end niche player with an 8% market share by 2020. Looking at the entire computer market (PCs, tablets, smartphones) Apple/iOS will manage to keep an 11% market share by 2020, well ahead of Microsoft, which will hold a meagre 6% market share. The smaller platforms (Blackberry, Tizen, Firefox, Windows Phone, etc.) will hover around 1% each if they are not almost completely wiped out.

Given Ahonen’s solid arguments and excellent track record as a forecaster, it is hard to argue against him. However, it is worth commenting on a few things.

In my opinion, the forecast for the decline of the traditional keyboard based laptop/desktop PC is overly pessimistic. Ahonen’s forecast is that PC sales will drop to just above a third of the 2013 sales volume (from 315 million units to 130 million). For trained knowledge workers, input via the mouse and a fully equipped keyboard is still much faster than a tablet for extended sessions of concentrated work. In addition, tablets have some significant ergonomic UX/UI problems if used for office work. A tablet placed horizontally on an office desk will catch glare from ceiling lights and give the user “iPad neck” from bending down. A vertically placed tablet in a stand will strain the arms every time the user touches the screen. No one wants to lift their arms and touch a vertical screen in front of them every 30 seconds for eight hours.

Another issue worth commenting on is the market share predictions for Google/Android and Apple/iOS. Even though Android is free and open source, the bundled Google Play is not. Google have moved most of the goodies (features and APIs) from Android into Google Play, where they exert tight control. Any device maker or operator that wants to use Android without Google’s presence will have to kick out Google Play and recreate all this functionality. Device makers are also prohibited from working on Android forks if they want access to Google’s closed source apps and APIs. They will be left with the “naked Android”. In addition, the app developer ecosystem is dependent on the APIs in Google Play.

This is of course in Android’s favour in the short run. However, as the Android platform matures and Google’s behemoth-ness increases, it is possible that another major player will fork Android and start a competing but very similar platform to get rid of Google’s tracking and analytics in their products. Perhaps by supplying a cross-platform tool that will make it possible for Android apps to easily be ported. The likelihood for such a fork has increased in the aftermath of the Snowden leak. Foreign governments will suspect that the NSA has demanded backdoor access to Google’s user tracking and/or Google Play.

Considering that entire populations will soon be smartphone users and that smartphones are an unprecedented spying device (cam, mic, location, call logs, surf patterns, banking, passwords, etc.), foreign governments will most likely find it unacceptable that the US government has backdoors into this data – and that they don’t. For example, from a Chinese perspective, government support of an Android fork will both be sound industrial policy and a way to replace US/Google control and possibly create a viable domestic player that can compete on the global market. (China has already done this but the existing Chinese Android fork is old and not compatible with the global ecosystem of Android apps.)

My point is that Ahonen’s prediction for Android dominance (89% market share in 2020) underestimates the effects of strategic countermoves from other players that view this development as a threat. When one player becomes too strong the entire industry will unite against it. Of course Android will be a dominant platform but the uncertainty is larger than what Ahonen seems to take into consideration.

I am also somewhat puzzled over the forecasts for Apple/iOS. Ahonen has a rather compelling argument for Apple as a niche player. If Apple introduces much cheaper low-end smartphones, they will reach new market segments but they will also hurt the sales of their own high-margin premium models. Apple will most likely prefer to keep stellar profit margins and remain a leader in the premium segment rather than compete on volume. This makes sense. The size of the global smartphone market is so large that even if you only have a 10% market share, that is enough to reap economies of scale. And the developer ecosystem and app universe for iOS is already in place and is as strong as for Android.

But if and when the smartphone market really matures, it is possible that Apple will find itself embattled even in its core premium market in rich countries. If customers are unwilling to pay Apple’s premium prices and market shares fall, I think Apple will switch over to plan B. They can easily cut their prices and introduce low end models and go for volume instead of profit margins. In that case, Apple/iOS will end up with a significantly larger market share in 2020 than Ahonen’s forecast of 8% – perhaps 15%. The paradox is that this bad news for Apple will actually translate into a larger market share. Ahonen’s forecast is that it won’t happen before 2020. I think there is around a 40% probability that it might.

Nuisance calls will kill landline voice

Few people in the tech sector care about landline voice these days but for landline operators it’s still a significant (though declining) cash cow, and will be for years to come. Mismanaging this service could provoke a customer stampede away from landline voice. If BT and the other UK landline service providers can’t stop the deluge of nuisance calls that have flooded British customers over the last few years, the scammers and spammers will effectively and swiftly kill this business area for BT et al. (This is the downside of English being a global language. There are no Swedish or Finnish speaking call centre operators in India.)

A survey by consumer watchdog Which? found that 70 percent of respondents had received unwanted calls. In their comments fields, many Which? members reported being bombarded by several calls every day, and sometimes even in the wee hours of the morning. There are silent calls, robocalls, calls to people’s work numbers, there are scams about a legal settlement of repaying credit card fees, calls selling shady PPIs, calls about selling protection against unwanted calls, fake market research calls that morph into sales calls, there are calls about double glass windows, fake calls from “Microsoft support” where they want to access your PC, and on and on and on. Asking to be removed from the call lists rarely helps; they continue to call regardless. Some spam callers hang up immediately if you deviate from the caller’s script. Based on the accent, many calls seem to originate from Far East call centres. Many users reported that adding their numbers to the TPS list that rejects telemarketing calls was of little help.

This deluge of nuisance calls is forcing people to change their usage patterns. The older generation is still stuck with the idea that the telephone has to be answered if it rings while the younger generation gladly ignores unknown callers. But even the older generation will be forced to change their habits due to this problem.

Users are also trying to defend themselves with countermeasures. There are answering machines and phones with integrated “Nuisance Call Blocking” functionality (CPR Callblocker, Trucall, BT6500). They use the caller ID and block known nuisance calls. Typically they block all international calls, “unavailable” and “withheld” calls in addition to a blacklist of numbers for known call centres.
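
As a sketch of the screening logic these devices typically apply, the rules can be expressed in a few lines of Python. The blacklist entries and test numbers below are made up for the example, not taken from any particular call blocker:

```python
BLACKLIST = {"08431234567", "02030001111"}   # known nuisance numbers (invented here)

def should_block(caller_id):
    """Typical rules: block withheld, unavailable, international and blacklisted
    callers; let everything else ring through."""
    if caller_id is None or caller_id.lower() in {"withheld", "unavailable"}:
        return True
    if caller_id.startswith("00") or caller_id.startswith("+"):
        return True                           # international call
    return caller_id in BLACKLIST

print(should_block(None))            # True  - no caller ID presented
print(should_block("+915551234"))    # True  - international number
print(should_block("01632960961"))   # False - ordinary UK number rings through
```

The next paragraph explains why exactly these blunt rules, effective as they are against the spammers, also undermine the legitimate use of the landline.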

The problem for BT & Co. is that these countermeasures undermine the landline voice business. Blocking all unlisted and international calls will make it harder for friends and family to reach you on the landline and it also blocks SkypeOut calls. Asking your service provider for a new number that has never been used will leave your contacts stranded unless you manually provide them with the new number. Blocking your own caller ID for outgoing calls makes you a “suspicious caller” and your friends and family might not want to answer. Using an answering machine to screen calls is inconvenient. And once anonymous call blocking becomes widespread, the spammers and scammers will most likely find ways around this, for example by spoofing Caller IDs.

One of the angry Which? members had taken the drastic measure of paying extra to add a premium 0871 number to his landline which he always gave to companies and other untrusted parties. This stopped most of the spam callers and if anyone called they would have to pay for the privilege of talking to this user. He actually made some money on this. But for most users, cancelling their landline subscription probably makes more sense.

Spam has ruined email as the dominant form of e-communication. Nuisance and scam calls will most likely be the final nail in the coffin for traditional landline voice. BT (and the other landline operators) should make it a top priority to stop spam calls. BT should lobby for tougher laws with severe fines for companies that profit from nuisance calling. Fraud is a crime in India as well as in the UK, and British law enforcement should cooperate with its Indian counterparts to bring high-profile cases against Indian “telemarketers” that defraud British customers. The recent £90k fine against a company in Glasgow is a first step but BT should urge law enforcement to be more vigilant.

BT’s first step should be to stop selling wholesale termination minutes without a requirement that the buyers use caller ID and that they comply with some form of Terms of Service (are there any TOS for wholesale customers?). BT should also upgrade security in its technical infrastructure. Another flaw is that the British landline network can only display 11 digits in the caller ID, which means that most international numbers cannot be displayed. (Mobile networks display international caller IDs without any problem.) BT should upgrade their network and enable longer caller IDs. They should also look into regular QoS issues such as sound volume. Calls on a regular UK landline vary widely in quality. Quite often, the volume is so low that it is almost impossible to hear the other party.

Few pundits and analysts in the telco sector bother to look at landline voice. It is viewed as a boring dinosaur legacy business. That is a mistake (even though it’s true that it’s a boring, declining cash cow). For the landline operators, the speed of the decline of landline voice is a matter of billions in cash flow over its remaining lifetime. Nonchalance about nuisance calls could swiftly put an end to these operators’ business. They should heed the warning.

Update: On 18 April, Ofcom fined TalkTalk £750,000 for making an excessive number of abandoned and silent telemarketing calls. Things are moving in the right direction.