
The smartphone makers’ dilemma

 

Smartphone flagship battle (Apple vs Samsung)



 

The market for flagship smartphones is the most cut-throat tech market in the world. The stakes are enormous, product life cycles are incredibly short and rivalry is intense. The top ten players are constantly reminded of the fates of fallen giants on this battlefield.

The smartphone market is full of contradictions. In some respects, it appears to be a commoditized mass market. At the same time, the smartphone is one of the most complex and advanced tech products on the planet. It pushes performance and the limits of what is technologically possible. It integrates dozens of technologies into one small device – each of which is an impressive field of technology in and of itself. The flagship smartphone represents the pinnacle of 400 years of technological development.

In spite of the advanced technologies used in smartphones, the room to differentiate is rather restricted for the heads of product strategy at the major players. The intense competitive forces in this industry constrain their available degrees of freedom. The exception to this rule is Apple, which to some extent can afford to go its own way.

Even though the flagships are a smaller part of the smartphone market, they are very strategically important for the vendors. The flagships set the highest reachable price point for each vendor and most of the market and media attention is focused on them.

The heads of product strategy at the smartphone makers must use the latest high-end chipset in next year’s flagship model. They also have to include the latest high-resolution displays released by the component suppliers. Otherwise they are not in the flagship race. The subsequent product design is a balancing act between conflicting goals. If the processor is pushed to the maximum, the device will win the benchmark tests for performance, but will be criticised for short battery life and overheating. If the opposite is done and the device is designed to avoid overheating and provide long battery life, it will fall behind in the benchmark tests for raw performance.

The entire product design and development process is full of these trade-offs. If a very bright, high-resolution display is used, costs will go up and battery life will suffer. If the device is equipped with a high-capacity battery, the weight goes up. However, if it is designed to be as light as possible, its short battery life will be viewed as a minus.

The first player to include a new generation of technology (like LG did in 2014 with the super high resolution QHD display) risks integration problems. LG’s GPU and processor were not up to the challenge and the display became less responsive. But if a smartphone maker waits too long to integrate new technologies in their products, they will be considered a laggard.

Another trade-off is the thinness of the phone. If the phone is too thin there will be less space for a high-capacity battery. In addition, for every millimetre that is carved away, the optical quality of the camera drops sharply. Apple tried to have it both ways in their latest iPhones. They chose a camera that sticks out from the body, resulting in a badly designed device that wobbles when put down on a flat table.

If a display is used which is larger than the competitors’, cost and weight will be higher and battery life will be shorter. In addition, there’s the risk of losing customers who prefer a more lightweight model. A smaller display choice will result in nit-picking by the tech community, and customers comparing the device in stores will probably choose competing products with larger displays.

Time to market is extremely critical. A new flagship model can only be sold at full price for a few months, and sales will drop rapidly after 6 to 10 months. If a new model is rushed to market, there is a risk that serious prototype-stage flaws are left undiscovered. However, spending too long on perfecting a new model will shorten the sales window and cut deep into the revenue potential.

For second-tier players in the high-end segment such as Sony, this dilemma causes a vicious circle. If sales of the current flagship device drop off early, the vendor will feel pressured to launch a new model. But a hurried launch prevents the development of a really good product. The short product cycles, combined with a string of rushed models, further undermine these players’ brand value.

Teardown analysis of modern smartphones provides additional evidence of this dilemma. The market leaders Apple and Samsung manage to optimise the chipset and board layouts and compress the spacing of components. Struggling smaller players such as Sony lack the time and resources for that, and the insides of their devices are far less optimised.

When it comes to software and sensors, the players are more or less forced to add as much functionality as possible. More apps and software add to icon and menu clutter but if something is left out, that omission will be criticised.

No matter what the smartphone vendors deliver, the professional product testers (GSMArena etc.) will find something to pick on. If the “perfect” smartphone were ever created, they would criticise it for being too expensive. And if it were sold at a lower price point, the shareholders and CFO would complain about low profit margins.

In addition to these restrictions, there is the unwillingness of almost all industry players to experiment with form factors other than the iPhone-style slate. When flexible displays are introduced (soon?) all the major players will probably jump on the bandwagon. This will hopefully provide room for new innovative designs, though my guess is that most smartphones with a flexible display will look rather similar.


This market would appear less like a commoditized mass market if at least some players would be brave enough to innovate and deviate from the mainstream form factor. There are rare examples of this but they haven’t really been pushed by the industry. Samsung launched a thicker phone with a much better camera that had a real 10x optical zoom in 2014 (Galaxy K Zoom). Sony have introduced flagship models with much better sound quality than the market average. There are a handful of current models with physical keyboards on the market (from Blackberry and LG). The small Russian smartphone maker YotaPhone introduced a smartphone with a second always-on e-ink display on the backside that can be read in bright sunlight. LG added a very small banner shaped always-on second screen for notifications on the V10. There are a couple of newly introduced flip phones from Samsung and LG. Some models are waterproof, some flagship models come with a leather back, etc. (Smartwatches are of course innovative but they are a separate product category.)

It’s easy to say that smartphone makers should embark on bold innovations and radical new product designs. But from the market players’ perspective I can understand their trepidation. One major strike and they’re out. Considering how difficult it is to get everything right, I can understand them preferring to play it safe. Even Apple and Samsung make massive mistakes. It is not easy to integrate a new generation of components into a seamlessly working device every year. The most difficult part seems to be the ability to really understand the users’ context and micro-situation and to offer a seductive, intuitive and compelling user experience.


The smartphone market would be more interesting if at least a few vendors dared to differentiate from the mainstream. Apart from Apple, it seems the players don’t really trust their own judgement and their product design capabilities. Instead they resort to copying each other, and Apple in particular.

To differentiate successfully requires a strong team of product designers and UI/UX experts with the self-confidence to deviate from the mainstream market’s iPhone-style design. A group of bright people who respect users and don’t fall for fads such as flat UI design, who don’t believe their role is to teach users to be “modern” and “cool”, and who are strong enough to ignore peer pressure from other designers and techies.

As Michael Porter pointed out, strategy is about choices and deliberately making “no” decisions. To differentiate means to focus on certain market segments and ignore the preferences of other parts of the market. You can’t please everyone.

With smartphone sales in the billions of units there are certainly underserved user segments that want something other than Apple’s offer of “minimalist simplicity”. The first smartphone maker to discover and serve these user segments will be able to build a fiercely loyal user base.

Millimeter wave transmission – are secretive financial firms leapfrogging 5G and the wireless industry?

 

Low latency wireless microwave links between financial centers (London-Frankfurt)


Over the last few years, a new breed of specialised service providers has been offering low latency point-to-point wireless networks between financial centers. But for the High Frequency Trading (HFT) firms that use these services, fast is not enough. To beat the competition, they want their connection to be faster than everyone else’s.

Secretive, cash-rich financial trading firms are already renting space on towers and are probably building their own wireless networks using millimetre waves (the same frequencies the mobile industry plans to use for future 5G networks). Considering the huge sums at stake for the winners of this race, it wouldn’t surprise me if they have already built in-house technologies while the mainstream mobile industry has only reached the planning stage.

Low latency is absolutely critical for HFT firms. The financial firm that can connect to the marketplace before its competitors stands to make billions in profits. For these fintech players, the fiber backbone is just not fast enough. For example, the lowest latency (delay) between London and Frankfurt over fiber is 8.35 milliseconds, while covering that distance at the speed of light in free air would take only about 2.1 ms. Specialised wireless fintech service providers such as Perseus and McKay Brothers have managed to get the latency down to around 4.6 ms on this route. Considering that the theoretical floor for latency is 2.1 ms, there is plenty of room for aggressive HFT firms to build their own optimised networks and get below 4.6 ms.

Fiber’s “slowness” is due to the fact that cables don’t run in a straight line of sight between two cities. Another reason is that the speed of light in fiber is about 33 percent slower than in free air. As the signal traverses the network, latency is also added each time it passes through a router.
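As a back-of-the-envelope check, the numbers above can be reproduced from first principles. The sketch below assumes a great-circle distance of roughly 640 km between London and Frankfurt and a fiber velocity factor of about two thirds of the speed of light; both figures are my assumptions, not from the providers’ published specifications.

```python
# Rough latency floor for the London-Frankfurt route (one-way propagation only).
# Assumptions: ~640 km great-circle distance, light in fiber at ~2/3 of c.

C_KM_PER_S = 299_792            # speed of light in free air, km/s
FIBER_VELOCITY_FACTOR = 0.67    # typical slowdown caused by the fiber's refractive index
ROUTE_KM = 640                  # approximate great-circle distance London-Frankfurt

def one_way_latency_ms(distance_km: float, velocity_factor: float = 1.0) -> float:
    """One-way propagation delay in milliseconds, ignoring equipment delays."""
    return distance_km / (C_KM_PER_S * velocity_factor) * 1000

print(f"Free-air floor:      {one_way_latency_ms(ROUTE_KM):.2f} ms")                         # ~2.1 ms
print(f"Straight-line fiber: {one_way_latency_ms(ROUTE_KM, FIBER_VELOCITY_FACTOR):.2f} ms")  # ~3.2 ms
```

The gap between the roughly 3 ms straight-line fiber figure and the 8.35 ms commercial figure comes from longer real-world cable routes and router hops, and that is exactly the margin the wireless providers are attacking.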

Financial players work hard to reduce latency in every part of their infrastructure. A seemingly minor difference such as the location of computers on different floors in a building can add to latency. In one example, the latency on the 2nd floor was 0.184 ms, but on the 9th floor it was 0.183 ms.

Point-to-point microwave networks (narrow beams transmitted between a chain of high towers) are an old technology that was used for long-range communication before fiber optics. The recent revival of wireless has been made possible by better RF components (in the high gigahertz bands) and ultra-fast chips that can handle the signal processing without adding much latency.

This type of backbone network will never be cheaper than using fiber. The need for a free line of sight and the curvature of the Earth put a limit on the longest distance between towers. It is possible to increase reach by building higher towers, but increased height adds to construction costs. Tower sway in high winds and path loss due to rain attenuation are other problems that have to be overcome. Transmission capacity can be very high if wide enough carriers are used in the largely idle millimetre wave bands above 30 GHz, though it will always be dwarfed by fiber.
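To give a feel for the tower-height trade-off, here is a small sketch using the standard geometric line-of-sight approximation d ≈ 3.57·(√h1 + √h2), with tower heights in metres and distance in kilometres. Atmospheric refraction stretches the radio horizon a little further in practice (a factor of roughly 4.12 is often used for radio links), and the example heights are arbitrary.

```python
import math

def max_hop_km(h1_m: float, h2_m: float, k: float = 3.57) -> float:
    """Longest hop before the Earth's curvature blocks the line of sight (flat terrain)."""
    return k * (math.sqrt(h1_m) + math.sqrt(h2_m))

for height_m in (50, 100, 200):
    print(f"Two {height_m} m towers: max hop ≈ {max_hop_km(height_m, height_m):.0f} km")
# Two 50 m towers:  ~50 km
# Two 100 m towers: ~71 km
# Two 200 m towers: ~101 km
```

Doubling the tower height only buys about 40 percent more reach, which is why tall-tower construction costs escalate quickly relative to the distance gained.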

Microwave tower


We don’t know exactly how far the financial HFT firms’ secret in-house projects have come. But one thing’s for sure – they are not being held back by slow moving industry committees. They are most likely using GHz/millimeter waves but another solution could be lasers (from AOptix?). As the main objective is to reduce latency, my guess is that they’re also experimenting with transmission of some form of low level “raw” signal where the IP headers of the packets have been stripped away.

Even though this highly specialised fintech transmission network can be viewed as a custom-built race car, it is relevant for the wider mobile industry. For example, a crucial building block for future 5G mobile networks is wireless backhaul in the millimetre wave bands above 30 GHz. One of the goals in the 5G specifications is a sharp reduction in latency. These are areas where the fintech players appear to be years ahead. Their solutions have already been deployed, or will be in the near future.

If these secretive players are ever willing to share their technologies, they could serve as important proof-of-concept installations for the rest of the tech sector. And if their solutions are ever licensed, new advanced technologies may enter the mobile market from an unexpected industry – fintech. It would certainly be prudent for the mainstream mobile market to pay attention to innovations in this field.

Laptop makers complain they can’t differentiate – how about a decent keyboard?

 

Almost all mainstream laptops today come with versions of the same badly designed keyboard. The first laptop maker that takes UI/UX seriously and builds a better keyboard will gain a significant competitive advantage.

After recently helping a relative buy a new laptop, I am left puzzled by the OEMs’ apparent indifference to one of the most essential parts of a computer: the keyboard. It may be a low-tech electro-mechanical module, but for most people it is the main way they interact with their machines. Users spend thousands of hours typing on their keyboards.

Improving the design of laptop keyboards is actually quite straightforward. Common sense and a basic understanding of ergonomics are pretty much all that’s required. To make the keyboard easy and intuitive to use, a good visual overview and supporting tactile feedback that reduces typing errors are important. The characters on the keys ought to be large and easy to read, and the distance between the centres of adjacent keys (the pitch) should not be too short. The keys should have a concave shape so one can easily find the edges without looking. Further visual cues could be added by using different colours to group the keys. A backlit or illuminated keyboard would add to the usability. Frequently used keys should preferably be larger and arranged so they are easy to find without looking. If the keys stick up a bit and the important ones have some empty space around them, this provides additional tactile cues. These empty spaces enable the user to find the keys by touch rather than having to look at the keyboard. (Additional details about the historic development of keyboards, usage of specific keys, design problems, etc. can be found here, here, here and here.)

Keyboards with most of these obvious design features already existed back in the 1980s. IBM’s classic mechanical PC keyboards, the Model M, are large and heavy but have better ergonomics than today’s laptop keyboards. The keys in IBM’s Model M keyboards have a buckling-spring mechanism that provides excellent tactile and audible feedback. The rugged construction makes them very durable, and many are still used today by enthusiasts. They fetch prices of up to £80 on eBay and there is a small vendor that still makes them. It seems like development in this area has been going in the wrong direction for the last 30 years.

IBM, model M, classic PC keyboard with excellent ergonomic design


Many of the more specialised keys on the IBM Model M are irrelevant for most users today and the buckling-spring mechanism would make the keyboard too deep and too heavy if used on modern laptops. However, the basic design principles are sound and could be used as inspiration for a better laptop keyboard.

 

Flawed design of most modern laptop keyboards

In the flawed design of a typical modern laptop keyboard, the keys are flat and grouped in a rectangular box. They are rarely placed in groups with empty space between them, which would make them easier to find. The letters on the keys are small, thin and often difficult to read. The most important keys (Enter, Delete, Backspace, Esc, Shift, Ctrl, Alt, F1-F12, Page Up/Down and the arrow keys) are surrounded by other keys and the user has to look down from the display and aim in order to hit the right key. Around 20 percent of the available space on a standard 15 inch laptop is wasted by the inclusion of a large (and rather useless) number pad to the right. To make room for the number pad, all the keys have to be crammed very close to each other and made smaller. Due to this, the important keys to the right of the main keyboard (Enter, etc.) are much harder to find without looking down as they are surrounded by other keys. The number pad on the right pushes the centre of the main keyboard, and the touchpad with it, to the left. This creates an unergonomic work position where the user has to twist somewhat to the left. On 17 inch laptops, the additional space is not used to increase the size of the keyboard. Instead, there is just dead space on each side of the box-shaped keyboard.

Mainstream keyboard with design flaws (Ideapad Z50)


 

Mainstream keyboard with typical design flaws (Toshiba L70, 17 inch)


 

Average keyboard with design flaws, at least no number pad (MacBook Pro)


 

Lenovo’s decline since the ThinkPad T520

The last time I bought a new 15 inch laptop I had to search high and low to find a computer with a keyboard I liked. Most 14 inch laptops came without the superfluous number pad but I needed a larger 15 inch display. I settled for an older model of Lenovo’s flagship T-500 series ThinkPad business computer (T520) that was still available. The T520 has one of the best laptop keyboard designs I have found.

The T520’s keys are deeper than today’s nearly universal chiclet keys. The letters on the keys are large and easy to read. The Esc key is large and placed in the upper left corner and the row of Fn keys is separated from the main QWERTY keyboard by a gap of a few millimetres. The Fn keys are also made smaller to differentiate them from the adjacent regular keyboard. The Delete key is significantly larger than the surrounding keys, placed close to the upper right corner of the keyboard, and separated from the other keys by empty space on the left which makes it quite easy to find. The Enter, Shift, Backspace and Tab keys are large and placed at the outer edges of the keyboard. The Page Up/Down keys are placed logically above each other in a corner of the keyboard. I would have preferred that the four arrow keys be isolated from the others, but at least they are somewhat separate as they are down in the right corner of the keyboard.

Last laptop ever made with excellent keyboard design? (Lenovo ThinkPad T520, from 2011)


 

Palm rest with rounded edge, gentle on the forearms (ThinkPad T520)


Hardware controls for the laptop such as speaker volume/mute and microphone mute have dedicated hardware keys. No fiddling around trying to find the right two-finger command on the main keyboard to turn the sound off. There are rugged physical mouse buttons both below and above the touchpad. Lenovo has added a convenient third mouse button between the left/right buttons which controls a screen magnifier. In addition, there is a red pointing stick in the middle of the keyboard. I rarely use it, but it is placed at the boundary between left-hand and right-hand typing and offers a good tactile cue for the fingers. The only thing I don’t like on the ThinkPad T520 is the dead space to the left and right of the keyboard. It could have been used to expand the keyboard.

In addition, the T520 has a matte display. A glossy screen might look more vibrant in the store but anti-glare displays put less strain on the eyes and are easier to use in an environment with many light sources, such as an office. Another well-thought-out ergonomic detail is the rounded edge of the palm rest. Many other laptop OEMs (including Apple) have quite a sharp edge at the front that cuts into the hand or forearm when typing for long periods.

Unfortunately, each new upgrade of the Lenovo T-500 series has become more and more similar to laptops from the commoditised mass market (something that stirred up significant controversy among Lenovo users, here, here, here, here, and here). In this year’s model of the ThinkPad (T550), Lenovo have discarded most of their good design elements. The T550 now has a rather mediocre keyboard just like the majority of other laptop vendors, with the unnecessary number pad added. The rounded palm rest is gone, etc. Lenovo is the number one quality laptop OEM for demanding business users who are willing to pay for reliability and quality. All these odd design decisions baffle me. Are laptop makers utterly clueless about how their products are used, or are they deliberately allowing style to trump function? I don’t get it.

The latest ThinkPad (T550), hardly better than the average keyboard, cramped by the unnecessary number pad


 

Building a better keyboard – and laptop

Improving the keyboard ought to be fairly simple for the leading laptop brands. But instead of a flurry of activity among competing OEMs, the area seems stagnant. This could be an opportunity for an ambitious laptop maker. The first player that puts resources into keyboard design improvements will gain a competitive advantage.

I am not suggesting a radical departure from the established keyboard layout. Attempts at designing disruptive “ergonomic” keyboards have failed in the mainstream market due to the steep learning curve. The first step in improving existing keyboard designs is to simply look at the good ones that have already been on the market (mentioned above).

I am fairly certain that most users would prefer a larger, more spacious keyboard without a number pad (see here). I have almost never used it myself. The keys for 0-9 are already lined up above the letter keys and are simple to use. Shoehorning the unnecessary number pad on to a small laptop keyboard results in a cramped keyboard that is far less intuitive and much more difficult to use.

The specialised keys on standard computer keyboards are remnants from different eras of computing, going all the way back to TTY terminals and layouts designed for IBM mainframe programmers in the 1970s. It is time to move on. It’s likely that there are several dedicated keys on the standard computer keyboard which are hardly ever used by mainstream users. These could be accessed indirectly via a modifier key instead, or perhaps be removed entirely.

For example, the large Caps Lock key is a waste of space on a crowded keyboard. I never use it, but often hit it by mistake CAUSING ALL LETTERS TO BE CAPITALISED. This is an annoyance. There has actually been an ongoing campaign against the Caps Lock key in the tech community for over a decade. If the Caps Lock key was removed, the prime keyboard space on the left side of the QWERTY keys could be used for something far more useful. Other keys that are seldom used and could probably be done without are: Scroll Lock, Insert, SysReq, Home, and End. Removing unnecessary keys would free up space and improve usability.

Google took some steps in this direction when they introduced Chromebooks in 2011. They removed the Caps Lock key, all the F1 to F12 keys, Home, End, Delete, Page Up/Down and the entire number pad. However, the Chromebook is not a fully featured computer so the removal of these keys can not be directly translated to the mainstream laptop market.

Apple have removed some of the more peripheral keys, including Page Up/Down, as well as the number pad. They have also removed the Delete key (deleting on the right side of the cursor) and only offer the Backspace key (deleting on the left side of the cursor). But they have kept the Caps Lock key.

Removing Caps Lock would be great but I find the Page Up/Down keys to be very useful. I am also sceptical about removing the Delete key. I use both the Delete and Backspace keys for deleting, and Delete has additional functions in Windows such as deleting documents and folders.

If the number pad and unnecessary keys are removed, the freed up space could be used for three blank programmable keys. These blank keys could easily be assigned through an integrated key re-mapping app. To make it simple, the non-assigned blank keys could be colour coded instead of adding more symbols. Some users might want quick access to certain symbols or non-English characters. Or they might want to record a macro for quick access to a function in the OS. (There is already freeware for re-mapping the entire keyboard such as AutoHotkey, but a simpler UI is needed for the mainstream market.)
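As a rough illustration of what such a re-mapping utility could do under the hood, the sketch below uses the open-source Python keyboard library (pip install keyboard) as a stand-in for a vendor-supplied app. The key combinations and macros are purely illustrative, and on some platforms the library needs elevated privileges.

```python
import keyboard  # cross-platform keyboard hook library (may require admin/root rights)

# Turn Caps Lock into an extra Escape key instead of a caps toggle.
keyboard.remap_key('caps lock', 'esc')

# Bind a spare key combination to a canned text snippet (a simple macro).
keyboard.add_hotkey('ctrl+alt+1', lambda: keyboard.write('Kind regards,\nJohn'))

# Bind another combination to a hard-to-type symbol.
keyboard.add_hotkey('ctrl+alt+2', lambda: keyboard.write('°'))

keyboard.wait()  # keep the remapper running until the process is terminated
```

A laptop vendor could wrap exactly this kind of logic in a simple GUI so that the blank, colour-coded keys proposed above could be assigned without the user ever seeing a script.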

The row of function keys is typically assigned double functions (F1 to F12 as well as 12 additional laptop-specific functions). Each laptop OEM does this in its own way and there is no established standard. Usability research could identify the most popular functions, which would be valuable input for improved product design.

Shallow chiclet keyboards are useful for making laptops as thin as possible. But personally, I prefer a keyboard that feels more solid and offers deeper keystrokes with higher tactile resistance. If space allows for it, I think laptop makers should reconsider their indiscriminate use of chiclet keys.

As part of this reinventing-the-laptop-keyboard project, I would include dedicated buttons for control of the laptop hardware (sound, microphone, webcam, etc.). In addition to improved usability, a real hardware switch would make it impossible to hack the webcam or microphone and use them for spying on the user. This feature is extremely important for business users, but will be appreciated by the mass market as well.

I would also ensure that the letters and symbols on the keys are large and easy to read. Having the keys grouped with colour coding to provide additional visual cues might also be helpful. Not all PC users are 21-year-olds with perfect eyesight.

I am not suggesting that there exists One ideal keyboard for the entire market. With around 175 million laptops sold annually, the market can easily be segmented. Number pad or not. Chiclet keys or not. Specialised legacy PC keys or not. Dedicated mouse buttons or integrated in touchpad. Cool stylish design or functional usefulness. For each of these segments, the market is large enough to be attractive for at least some laptop OEMs.

What I don’t understand is the vendors’ herd mentality. They all offer similar looking products, designed for an imagined mainstream customer. I sometimes get the impression that they suffer from “Apple Envy” and uncritically emulate whatever comes out of Cupertino. But if the PC market wants to emulate Apple, they can begin by offering some 15 inch laptops without number pads.

The great thing about an improved keyboard design is that it’s easy to demonstrate the use cases. The laptop brand that wants to stand out from the pack of competitors can do so without resorting to technical gibberish. It would suffice to explain how annoying it is to use the competitors’ standard keyboard, how the sharp edge of the laptop cuts into one’s hands, and how the competitors’ webcams/microphones can be enabled by spying hackers. This is so simple to explain it could even be done in TV commercials.

Embedded SIMs will take mobility to the next level

SIM Cards, soon to become an outdated technology

 
 

MNOs, fear not. Embedded SIMs will open up new markets and use cases, not destroy the operators

 

Traditional SIM technology has been around for nearly 25 years and the operators view the SIM card as a critical control point for customer ownership. Due to the risk of losing the M2M market to competing unlicensed LPWAN technologies (and pressure from strong handset vendors), MNOs are finally beginning to embrace more modern embedded SIM solutions.

Traditional SIM cards are not fit for purpose in the IoT market. There are several reasons for this. First, M2M devices are often embedded inside other machinery and very difficult to access. An industrial M2M player with thousands of dispersed units cannot send out staff to change malfunctioning SIM cards, or replace all SIM cards if a new MNO offers a better price plan. Second, wearables in the consumer space are often too small to fit a SIM reader. Think waterproof smartwatches or smart jewellery. Third, tablets and laptops that only occasionally require mobile connectivity will remain an untapped market for the operators as long as the user needs to find a suitable SIM card, fiddle with it, and activate/pay for a data plan that is rarely used (for example when travelling).

For the M2M market, the operator-led standard bodies GSMA and ETSI have already developed a technical architecture for reprogrammable SIMs (termed eUICC, “embedded UICC”). The eUICC is a secure hardware module that can be permanently soldered onto the circuit board. When an eUICC is manufactured, the eUICC issuer loads the master keys of the eUICC onto the hardware chip. The eUICC issuer can be an MNO or another stakeholder such as the device maker. In the eUICC hardware, one or more operator profiles can be stored (including IMSI number, network key, and other settings). The eUICC issuer will maintain a central Subscription Manager which is a database with all available operator profiles. This gives device owners the ability to swap between operator profiles as well as download new ones.
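To make the moving parts concrete, here is a toy Python model of the roles described above: an eUICC with stored profiles, an issuer-run Subscription Manager, and remote download and swap of profiles. The class and field names are my own illustrations and do not follow the actual GSMA/ETSI data structures.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class OperatorProfile:
    operator: str
    imsi: str          # subscriber identity
    network_key: str   # secret shared with the operator's network

@dataclass
class SubscriptionManager:
    """Central database of operator profiles maintained by the eUICC issuer."""
    available: Dict[str, OperatorProfile] = field(default_factory=dict)

    def download(self, profile_id: str) -> OperatorProfile:
        return self.available[profile_id]

@dataclass
class EUICC:
    issuer_master_key: str                             # loaded onto the chip at manufacture
    profiles: Dict[str, OperatorProfile] = field(default_factory=dict)
    active: Optional[str] = None

    def install(self, profile_id: str, sm: SubscriptionManager) -> None:
        self.profiles[profile_id] = sm.download(profile_id)

    def enable(self, profile_id: str) -> None:
        self.active = profile_id                        # swapping operators is a data operation

# A device shipped with a blank eUICC later downloads and enables a profile over the air.
sm = SubscriptionManager({"mno-uk-1": OperatorProfile("ExampleTel UK", "234150000000001", "k3y")})
chip = EUICC(issuer_master_key="master-secret")
chip.install("mno-uk-1", sm)
chip.enable("mno-uk-1")
print(chip.profiles[chip.active].operator)              # -> ExampleTel UK
```

The point of the model is simply that, with an eUICC, changing operators becomes a database transaction rather than a physical swap of plastic.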

Embedded SIMs offer advantages for M2M device vendors. They can make their products smaller and better encased and it’s possible to manufacture products with a blank eUICC (no operator profiles) for worldwide delivery.

Embedded SIMs are already in place in the M2M market and strong handset vendors such as Apple want to introduce the same technology in the smartphone market. The traditional SIM card is a control point for the MNOs’ market power and so far they have been resistant to e-SIMs. But market forces are moving fast and the operators risk being sidelined if they don’t embrace this new technology.

There are already several ways for users to bypass the operators’ SIM. For example, dual SIM handsets offer a crude version of the e-SIM functionality. The most obvious way to avoid excessive roaming charges is to switch SIM cards when travelling abroad. An existing service similar to e-SIMs is provided by MVNOs that offer multi-IMSI SIM cards for international travellers. The MVNO stores IMSI numbers and operator profiles from a number of MNOs in different countries on its SIM card. Existing multi-IMSI SIMs are hard-coded today, but once the eUICC standard is in place it will be possible to reprogram physical SIM cards as well. Examples of MVNOs using multi-IMSI cards are WorldSIM and Truphone, which offer full international multi-IMSI connectivity with voice and data on physical SIM cards. Transatel is both a multi-IMSI MVNO and a wholesale service provider (MVNE) for other MVNOs. GigSky and Cubic Telecom offer a similar international MVNO service for data-only connectivity. Apple partnered with GigSky to sideline the operators in their Apple SIM offer for the iPad.

There is nothing in the eUICC specification that prevents it from being used in handsets. As of March this year GSMA has a working group for eUICC in mobile consumer devices with backing from major operators such as AT&T, Deutsche Telekom (T-Mobile), Vodafone, Orange, and Telefónica. The technical architecture is anticipated for delivery by 2016. Apple and Samsung are said to be heavily involved in the project.

However, one of the issues that remain to be solved is number portability. Today this is a cumbersome and mechanical process that can take up to a day, and risks leaving the phone number unreachable during the transition time. Number portability has to be instant for users to take full advantage of the ability to swap one operator profile for another in the eUICC on their smartphone. Number portability is implemented somewhat differently in different countries and a seamless global system requires extensive systems integration. The e-SIM specification from the GSMA due next year will only define first-generation e-SIMs, and my guess is that this issue will only be partially solved.


But even the first generation e-SIM specification will have far-reaching consequences across the value chain. MNOs will have to compete harder for customers. However, MNOs’ concerns that they will be marginalised and end up in a cut throat price competition over every phone call and data session are exaggerated. Several factors will prevent this end state. The price of typical phone calls today is so low that most users will find it too tedious to switch providers just to find the cheapest calls. The same goes for data, except when travelling abroad. Operators can still offer bundles, triple/quad plays, extras, loyalty points, subsidised handsets etc. to combat price only competition. Most consumers actually prefer a bundle with predictable costs. In addition, operators will save on the costs of distributing physical SIM cards.

A new potentially powerful player will be the eUICC issuer that controls the initial access to the e-SIM. Only operator profiles offered in the eUICC issuer’s Subscription Manager Database will be available to the users. And the issuer can control how available profiles are displayed and which operator gets to top the list.

Candidate eUICC issuers are device makers, MNOs or managed service providers. A handset maker that controls the e-SIM could restrict users’ access to available operators. This would squeeze profit margins for the operators who are lucky enough to be allowed access to the user base. If an e-SIM equipped handset offers a restricted choice compared to the older SIM card handsets, it will be viewed as a step backwards. Even for a very strong handset maker such as Apple, it is far from obvious that such a restriction would be accepted by consumers. Users will expect almost the same freedom to select operators of their choice as with a physical SIM card. An overly restrictive e-SIM will most likely be viewed as a strong negative factor. Before the e-SIM has reached maturity, I expect smartphones to be equipped with both an e-SIM and a traditional SIM card slot.

In addition to consumer resistance, regulators will most likely mandate fair and open access for all operators to the eUICC issuers’ Subscription Manager. The fear that eUICCs will move all market power to the device makers is exaggerated. And there is nothing preventing MNOs from becoming eUICC issuers themselves for devices sold through their own retail channel. Subsidised handsets could also be equipped with an eUICC issued by the MNO (for the period after the operator lock-down).


Embedded SIMs based on the eUICC are a critical enabler for the IoT and wearables market. But the interesting long-term potential is that e-SIMs can reinvent the way we interact with our handsets and devices. Currently, if a user wants to change handsets to go to the gym or on a night out, he/she will have to fiddle with the SIM card and physically move it. If the two devices don’t accept the same size SIM card, it is even more complicated.

In a fully developed system with embedded SIMs, it will be possible to easily move the active phone/data connectivity from one device to another, including cars. Users will be able to have several devices for their varying needs and use cases. In addition, they could have several active subscriptions/phone numbers in the same device. They will also be able to split connectivity between different devices. For example, have text messages, IM, and notifications diverted directly to a non-tethered smartwatch while the full mobile connectivity stays with the smartphone. Or have certain notifications sent to a piece of smart jewellery or smart clothing. It will also be possible to have more than one phone number on the same device, have disposable phone numbers, and move the subscribed numbers between the user’s devices. This flexibility will of course also apply to all communication that doesn’t rely on a phone number (such as WebRTC, Skype, WhatsApp, etc.). Even devices without full mobile connectivity will probably be equipped with e-SIMs. For example laptops, tablets, and smart home hubs. And the hardware based security from the eUICC module will be an excellent enabler for payment platforms. In this scenario the full potential of mobility, connectivity, and IoT will be unleashed. One can only hope that the working groups at the GSMA will be forward-thinking enough to see this.

Apple don’t understand the luxury market – the $17,000 Apple Watch Edition could damage Apple’s brand

The $17,000 Apple Watch Edition in rose gold


There are two fundamental flaws in the way Apple have positioned the $10,000 to $17,000 Apple Watch Edition. First, a luxury product that will be obsolete after one year contradicts the luxury market’s fundamental logic of permanence and long lasting value. Instead, the Apple Watch Edition risks being perceived as an ostentatious display of money today, and an embarrassingly outdated item after next year.

Second, the bling factor of the $17,000 Rose Gold Edition conflicts with Apple’s core brand values among its broad user base. Apple stand for cool 21st century modernity, elegant simplicity, and near affordability for the middle class. One of Apple’s most important user segments is the creative class in the advanced economies. They have a laid back, postmaterialist and slightly bohemian anti-establishment value system. Conspicuous consumption and a materialist 1980s style display of wealth is the antithesis of this socio-economic segment of the market.
 

Navigating the luxury brand market

Strong brands always stand for something and have a clear identity. They are a statement and are by definition limited in scope. It is difficult for brand owners to extend their brand into new product categories or market segments. The risk is that their core brand value will become diluted, or even worse, that their extension will destroy the original brand value. Despite their efforts to nurture and communicate their brand message, brand owners are not in control. A brand’s “brand” is the sum of the perception of the brand by all users, former users, and non-users. It is the consumers’ opinions that ultimately determine a brand’s value.

The luxury market has its own particular logic and it’s difficult for non-luxury brands to break in to this market segment. Luxury brands typically offer a combination of superior quality, high performance and features, superb services, brand mystique, and a compelling narrative of the brand’s pedigree and origins. They are produced in very small series and are often more or less handmade. They tend to be built upon emotional appeal and are imbued with a sense of exclusivity and sophistication. Luxury brands are strong and often controversial. They can be beloved by the target market but viewed as ridiculously overpriced and “out” in other socio-economic market segments. For many buyers of luxury products, the strong brand identity becomes a part of their own self-expression.

But the luxury market is not just driven by status-seeking and symbolic appeal. The majority of luxury products have superior functionality, very high performance, and a solid build quality that gives them a much longer life span than similar mainstream market products. The second hand value of most luxury products is quite high and can even appreciate over time. Permanence is a core value in this product category. For example, the watchmaker Patek Philippe’s advertising slogan is: “You never actually own a Patek Philippe, you merely look after it for the next generation.” De Beers’ slogan is “A Diamond Is Forever”. Paying an exorbitant price for a product with very high second hand value and a long life span is to some extent rational.

Luxury brands can be divided into two segments. One segment consists of well-known brands that are easily recognisable. For example: Rolls-Royce cars, Louis Vuitton handbags, Four Seasons Hotels, or Rolex watches. These brands enable the customers to display sophistication and wealth, and are in some sense aspirational brands. The other segment of the luxury market consists of smaller brands that are less well-known by the general population but highly regarded by the in-the-knows. The fact that so few people recognise these brands makes them even more exclusive for the connoisseurs. They are usually more expensive than well-known luxury brands and offer an even higher level of sophistication and exclusivity. In the game of social status, buyers who prefer this upper echelon of luxury brands tend to turn their noses up at those who buy well-known luxury brands. They view them as unsophisticated and solely interested in displaying their wealth. Interestingly, anti-establishment middle class consumers often share this disdain for what they consider to be the nouveau riche’s vulgar display of wealth. For example, the urban hipsters who happen to be one of Apple’s core customer segments.
 

Difficult for digital tech products to enter the luxury market

There are very few market-leading tech brands that have attempted to enter the luxury market. The fast pace of innovation and increased raw performance quickly renders last year’s top-of-the-line products obsolete. For consumers who easily can afford it, it is only natural to frequently replace their TV, computer, phone, etc. However, paying ten times the price for a “luxury computer” or “luxury mobile” that will be disposed of after a couple of years serves little purpose. Functionality and performance trump any potential symbolic luxury brand mystique. Even in the early 1990s when mobile phones cost $4,000 there were no luxury mobile phones. Functionality was the only thing that mattered during this phase of rapid technological development.

There exists a niche market for very expensive laptops from the major OEMs, but they can hardly be called luxury as the high price is based on the expensive high performance components that go into the machines. Sometimes these laptops are white labelled by luxury brands such as Ferrari or Bentley. There is also a microscopic market for bespoke laptops. For example, a MacBook Pro built into a new luxury case or covered with 24 karat gold. But there are no independent luxury laptop brands that produce better computers than the market leading OEMs.

However, some luxury technology products do exist. For example: audiophile sound equipment, watches, and cars. In these product categories, performance is dependent on craftsmanship, build quality, and analogue technology, which improve at a snail’s pace compared to microprocessor based products. As these products will not become obsolete after one year there is room for luxury brands in these categories. They will retain a high second hand value. Hence, buying these luxury products can almost be justified without resorting to emotional brand mystique.
 

Vertu – the exception to the rule

There are rare examples of luxury tech products, but they are mostly the exception to the rule. A few iPhones have been refitted in gold or platinum cases, covered with diamonds and sold with six- or seven-figure price tags. However, the only luxury phone brand with any significant sales volume is Vertu. During its time as the world’s leading mobile phone maker, Nokia entered the luxury market with the subsidiary company “Vertu” in 1998.

Pre-smartphone mobiles had a slower technological trajectory and Vertu positioned itself with craftsmanship, style and service – not advanced phone functionality. Their phones are priced from $6,000 to $300,000. Vertu use materials such as titanium, gold, leather, buttons made of sapphires and rubies, sapphire displays, etc. The most expensive models are decorated with diamonds. They also offer a bundled luxury 24/7 concierge service for their customers, reachable via a dedicated button on the phone. But as a phone maker Vertu have not been competitive. The product release cycle of their flagship model “Signature” has been around a decade. They launched their first Android smartphone only in 2013. Before that Vertu could only offer feature phones.

Vertu could not compete in the smartphone market during the period of early rapid technological development (2007-2013). Only when technological development slowed down and matured was it possible for a luxury brand to enter the smartphone market.

The luxury phone market is a tiny niche market. Vertu have sold around 350,000 phones worldwide since their first model launched in 2002. They only made eight of their $310,000 “Signature Cobra” model. There is not much for mainstream tech players to take away from Vertu’s narrow business model. Are customers paying for the bundled concierge service, the brand mystique, or the device itself? Their main markets have been Asia, the Middle East and Russia, where cultural differences make an ostentatious display of wealth socially acceptable. A few celebrities have been spotted with Vertu phones, but they are regularly given free luxury products as promotion. Vertu have a strong brand message in their target market. However, this message will probably evoke an equally strong (albeit negative) reaction among the tech savvy urban middle class in advanced economies.
 

Apple’s dilemma

Brand owners can’t have it both ways. Their brands have to stand for something, and they can’t stand for mutually exclusive messages. Ostentatious Bling and Cool Advanced Technology do not make a good match. Apple are undermining the core values of their own brand by entering the bling market. The Apple Watch Edition is not even high quality bling. It’s just an overpriced, mass-produced Apple Watch made in gold that will lose most of its value next year. Apple’s current support program ends seven years after they stop producing a product. If they don’t make an exception for the Edition, Apple will officially declare the 2015 Edition obsolete in eight years and cease all service. Clearly, they have a lot to learn about what constitutes a luxury product.

The dilemma is that luxury products are built to last while the microprocessor based components in tech products rapidly become obsolete. Buyers who can afford the best will not settle for last year’s inferior technology. There may be a few oligarchs who want to flaunt their conspicuous consumption by buying a $17,000 Apple Watch Edition, well aware that they will dump it in a year for the Apple Watch 2.0. But for the rest of us such a purchase just seems irrational and pointless. In my opinion, the Apple Watch Edition has a negative brand value.

Imagine if Apple had released a $17,000 iPhone Edition in rose gold back in 2007. (The first iPhone was a 3.5 inch feature phone that came without the app store and without 3G connectivity.) What would the used price for that phone be today? Scrap metal value of less than $1,000? Compare that to the value development of a Rolex or a Hermès Birkin handbag bought in 2007.
 

How Apple could turn things around

The good news for Apple is that there is a way to improve the situation. In order to make the high price of the Edition more justified, Apple could offer buyers a premium membership scheme (including those who have already bought the Edition). All members would be offered the opportunity to upgrade their Edition Watch on release day for ten years for $999 per upgrade. They could send in their old Edition watch and Apple would transfer everything on the old watch to the latest Edition model and send it back to them (or perhaps swap the electronics inside). To sweeten the deal, Apple could also offer a guarantee that premium Edition members will be able to buy new iPhones as well as all other new Apple products on release day. This offer wouldn’t cost Apple much and would propel sales of the Edition because it removes rapid obsolescence from the equation. Owners of Edition watches would no longer be viewed as vulgar show-offs, but as people who made a somewhat rational decision by paying a premium for a ten year upgrade guarantee and access to new Apple products on release day.
 

A way for luxury brands to enter the IoT wearable space

The formula above could be the key to the future IoT market for luxury smart wearables. Many luxury brands are eager to enter this market. Smart luxury earrings, necklaces, rings, pendants, brooches, sunglasses, pens, or watches could be sold with an upgrade guarantee.

An ultra-expensive price for the initial purchase would preserve the luxury product’s mystique and exclusivity. Each time the luxury brand releases a hardware upgrade, they could offer to swap the electronics and battery inside the product for a minor upgrade fee. If the product can’t be disassembled, users would be allowed to return their original product and have it replaced by the latest model for a replacement fee. This would tie the customers closer to the brand and generate ongoing cash flow in the form of upgrades. It would also ensure that obsolete products are removed from the market instead of being sold for embarrassingly low prices on eBay.

I am redesigning my blog by removing the Swedish version of the blog and the dual-language plugin WPML. My intention is to make my older Swedish blog posts accessible in an archive but for now they are invisible. (Due to an upgrade of the server side PHP version, a bug has appeared that doubles the words “Categories” and “Archives” in the right sidebar.)

Shady MVNOs damage the mobile market – time for Ofcom to take action

These days, setting up an MVNO is easier than ever. You don’t need to know anything about mobile to become an MVNO player. Nearly all operations can be handled by managed service providers (MVNEs) and the initial Capex is in the range of £500k, but could possibly be as low as £25k. Low barriers to entry increase competition and put pricing pressure on incumbent operators. Surely this must be a good thing, right? Not so fast.

Well-known and established MVNOs in the UK such as Tesco Mobile, giffgaff, Virgin Mobile, Talkmobile, Asda Mobile, Lebara, TalkTalk Mobile, and Utility Warehouse offer service on par with the network providers themselves.

However, there are also a large number of smaller more obscure MVNOs. How many people have heard of White Mobile, Delight Mobile, Vizz Africa, Vectone Mobile, tello, Now Payg, Talk Home Mobile, The People’s Operator, Ovivo Mobile, Lyca Mobile, or Econet Mobile?

These MVNOs sometimes have appallingly bad service. A while ago I tried Lyca Mobile. I didn’t expect much, but I did assume that “mobile” was a mature standardised product (voice, voicemail, SMS, MMS, data). How wrong I was. Voicemail didn’t work, I couldn’t call some numbers and operators, there were texts that didn’t go through, and MMS wasn’t included in the service. When I called the Swedish railway information number, Lyca Mobile charged me almost £10 for a six second call. They claimed it was a premium number. It wasn’t, and their tariff chart didn’t include anything about this exorbitant rate. Their customer service consisted of script reading from call centres in India, and whenever I deviated from their script they immediately hung up.

I have read reviews of some of the other smaller MVNOs and it was easy to find similar complaints. Talk Home Mobile seems to have a billing system that grossly overcharges and eats away the minutes in your allowance very quickly (here). Users of Vectone report myriad problems (here, here, and here).

Last year all 50,000 users of Ovivo Mobile were left stranded when the service suddenly shut down. Ovivo had stopped paying for network access and Vodafone cut the cables. It was pure luck that Ovivo was gracious enough to give users their PAC-codes so they could keep their mobile numbers.


Mobile is an essential basic service and it is unacceptable that consumers risk losing their access, prepaid minutes, voicemails, and phone numbers if a shady MVNO ceases to operate. Consumers should also be able to trust the accuracy of the operators’ billing systems. A plethora of small, unstable, shady MVNOs will not “increase competition” but rather scare consumers away from competitive new market entrants. Only the incumbents will benefit from this.

Consumers who sign up for a mobile plan with an MVNO can not be expected to make their own risk assessment of the bankruptcy and fraud risk. Nor can they be expected to know that they might lose their phone number if the service provider suddenly ceases to operate. And consumers who suspect foul play with the billing system should not have to take on an MVNO themselves in the courts.


In my opinion, this is the responsibility of Ofcom, who should tighten the regulatory framework. For example, Ofcom could act as a mystery shopper. They could buy SIM cards from all the operators, and then run diagnostic calls/texts/data generated by software. This would enable Ofcom to evaluate the accuracy of the billing systems and check that all number series are reachable. Network operators who don’t get paid should probably not be allowed to disconnect an MVNO, but should instead be given the power to temporarily take control of the MVNO. All tariffs should be available on the websites in a clear format. The charges that are most likely to impact the user’s bill should be easily accessible. MVNOs should not be allowed to hide extreme out-of-bundle charges in the fine print. And if an operator doesn’t offer something that users expect in a basic mobile service (for example SMS or voicemail), it should be clearly stated from the start.
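As a sketch of what such automated mystery-shopper billing verification could look like, the snippet below compares what a scripted test call should cost under the published tariff with what the operator actually billed. All tariff values and call records are invented for illustration.

```python
from decimal import Decimal
from math import ceil

# Published tariff as scraped from the MVNO's website (illustrative values only).
TARIFF = {
    "standard": {"per_minute": Decimal("0.10"), "billing_increment_s": 60},
    "premium":  {"per_minute": Decimal("1.50"), "billing_increment_s": 60},
}

def expected_charge(number_class: str, duration_s: int) -> Decimal:
    """What a call of this length should cost according to the published tariff."""
    t = TARIFF[number_class]
    billed_units = ceil(duration_s / t["billing_increment_s"])
    return billed_units * t["per_minute"]

# Diagnostic calls generated by software, paired with what the operator billed.
test_calls = [
    {"number_class": "standard", "duration_s": 6, "billed": Decimal("0.10")},
    {"number_class": "standard", "duration_s": 6, "billed": Decimal("9.80")},  # suspicious
]

for call in test_calls:
    expected = expected_charge(call["number_class"], call["duration_s"])
    if call["billed"] > expected:
        print(f"Overcharge detected: billed £{call['billed']} vs expected £{expected}")
```

Run across every operator and number class, this kind of check would make systematic overcharging visible long before individual consumers notice it on their bills.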

If the MVNO market continues to operate like the Wild West, sooner or later real scammers will spot this opportunity. They can set up a seemingly legitimate MVNO and run it for a year to build a brand, gain a customer base and be included on price comparison sites. Then they will lower their prices and start an aggressive ad campaign, for example by offering attractive pre-paid 6 and 12 month pay-as-you-go plans. After a while they will change their terms of service and tariffs and introduce sky-high out-of-bundle rates, stop paying their network operator and other vendors, offer the latest iPhone on sale, and remove themselves from the company directorships. And while they’re at it, they’ll probably overcharge all direct debits before they go bankrupt.

Let’s hope that this scenario never plays out, as it would damage all MVNOs’ credibility and make it even more difficult for new market entrants to compete with the incumbent network providers.


P.S. I recently left O2’s network for EE’s. O2 had the best network back in 2012 but their service has gradually deteriorated, at least in the south east of England (confirmed by Which? and Open Signals coverage/QoS maps). This congestion is most likely caused by an unwillingness to invest in more network capacity by the cash strapped owner Telefonica. As Telefonica have been trying to sell O2 UK for some time, they probably don’t want to put any more money into their UK network. This is a risky strategy if and when consumers begin to take notice. Even if a sale of O2 were to take place tomorrow, it would take several months before everything was finalised with new owners in control. I expect even further deterioration of customer satisfaction and QoS for O2 and all the MVNOs that run on their network. It could take up to a year for O2’s network to improve.

According to OpenSignal, EE has better network capacity in the South East, which was very obvious when I switched. However, EE is at the bottom of the Which? customer satisfaction survey and is one of the worst major operators in Ofcom’s complaints league. As I didn’t want to be an EE customer directly, I chose one of the MVNOs that run on EE’s network. So far, so good.

Post Snowden ripples – users going anonymous on the Net

 

The Snowden leak is a game changer. User angst over privacy and anonymity is at an all-time high. 61% of US net users want to do more to protect their privacy.

 

It takes time for consumers and market players to absorb the full impact of game-changing events such as the Snowden leak. When the news broke in June of last year, the first Pew poll showed that 56% of Americans found the NSA’s mass surveillance acceptable. After more than a year of drip-fed additional revelations from Snowden’s enormous trove of material, people have begun to realise the full extent of the security state’s spying. A Pew poll from June this year found that support for the NSA’s spying had fallen to 42%, and 74% thought Americans should not have to sacrifice privacy for safety from terrorism. Another Pew poll found that 86% of US Internet users have taken steps online to remove their digital footprint. A Harris poll from March 2014 showed that 47% of US users have changed their online behaviour after the NSA revelations; 26% said they are doing less online banking and online shopping, and in the 18-34 age group the figure is 33%.

A new Pew study from last week shows that awareness and concern are rising even further. 90% of the respondents agreed that users have no control over their online information, and 80% are concerned about the way advertisers exploit information on social media. 87% had heard something about government surveillance, and only 36% of users support the government’s online snooping. Most important of all, 61% of users said they wanted to do more to protect their privacy. These negative sentiments are most likely even more pronounced outside the US: in another survey of 10,000 people in nine countries from April, by ComputerWeekly, 75% expressed concern about their privacy online.

Awareness is the first step; the next is taking action. Once users begin to educate themselves, they will realise that the potential for intrusion is much greater than they initially thought. As a test, I spent a few hours educating myself about privacy and anonymity for browsing with Firefox on a PC. This is what I found:


Basic advice about anti-virus programs, firewalls, avoiding obvious passwords, and deleting cookies is far from enough.

The most intrusive tracking and surveillance is done by advertisers and analytics firms. Government surveillance is ubiquitous yet invisible to the average user in most cases. However, advertisers’ tracking is actually quite obvious. We know that advertisers can easily create an approximate profile of where you live, your gender, age, interests and income. They can even match your real name and address with your browser surfing patterns if you fill in your customer data on a website that sells this information to the tracking companies. Considering all the revelations from the Snowden leak, it would be reasonable for users to also suspect that the government surveillance agencies are buying advertisers’ tracking profiles.

Deleting cookies is not enough. Flash has its own supercookies (Local Shared Objects, LSOs), and HTML5 adds a cookie-like mechanism called web storage (DOM storage). These can be blocked by installing the BetterPrivacy plugin in the browser.

Web servers can also assign a unique ETag to each browser, without using JavaScript or cookies. ETags were designed to improve caching performance, but they can equally be used for tracking. They are difficult to avoid, although deleting the browser cache will remove them. The browser plugin Secret Agent blocks ETags, but the downside is that it can degrade the browsing experience.
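
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python/Flask of how a server can abuse the ETag header as a persistent identifier. This is not code from any real tracking outfit; the route name and payload are invented for the example.

    # The server hands each new browser a random ETag on a tiny "tracking pixel".
    # The browser then echoes it back in If-None-Match on every revisit, so the
    # visitor can be recognised with no cookies or JavaScript at all.
    # Clearing the browser cache discards the tag, as noted above.
    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/pixel.gif")
    def pixel():
        returning_id = request.headers.get("If-None-Match")
        if returning_id:
            print(f"returning visitor: {returning_id}")  # recognised from the cached tag
            return "", 304                               # "Not Modified" keeps the tag cached
        resp = make_response(b"GIF89a")                  # minimal image payload
        resp.headers["ETag"] = uuid.uuid4().hex          # unique ID stored by the browser
        resp.headers["Cache-Control"] = "private, max-age=31536000"
        return resp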

Another privacy plugin is Ghostery, which blocks trackers. The HTTPS Everywhere plugin forces encryption between the browser and the server whenever possible. The NoScript plugin protects against malicious scripts on untrusted websites: it blocks all untrusted scripts, lists every script used on each visited page, and gives the user full control over enabling or disabling each one. However, the best plugin of all is Adblock Plus, which blocks almost all ads and has over 300 million downloads.

Enabling all these plugins will degrade the browsing experience on some websites and can slow down Firefox. For example, with NoScript installed, users have to open the list of scripts manually to enable them, which can be an inconvenience. On the other hand, webpages often load faster, in particular when all ads are blocked with Adblock Plus.

Webpages load even faster if Flash is disabled. Flash is potentially a huge security hole and a common recommendation is to disable Flash in the browser and only enable it temporarily when really needed.

An additional security measure is to use a VPN, which hides your IP address. A VPN creates an encrypted tunnel from your computer to one of the VPN provider’s servers, where your traffic enters the open internet. These servers can be located in another country, which also lets users stream TV or video that is normally geo-blocked (for example the BBC and Netflix USA).

Selecting the best antivirus program and firewall is also important. Bitdefender gets top test results, but aggressive antivirus programs can sometimes slow down the computer.

When it comes to search engines, users can reject Google in favour of DuckDuckGo or Startpage, which do not track users’ search patterns.

Many cybersecurity experts even recommend putting a piece of black tape over the webcam to prevent it from being used as a spying device. For those who want an even higher level of privacy and security there are further challenges, such as avoiding browser fingerprinting and WiFi security breaches. For the advanced user there are additional technologies such as Tor, Bitcoin, Tails, IceDragon, Ubuntu/VirtualBox, PGP, Protonmail, Comodo, and Online Armor.


It is unlikely that mainstream consumers will adopt all of these security measures, but installing the browser plugins is fairly easy and there are “How To” guides that explain how it’s done. Once in place, these users are a permanent loss for the advertisers. And it will be the active, well-educated, high-income users who go first: the most valuable targets for advertisers.

When the mainstream market begins to embrace anonymity and encryption, the effects will be wide-reaching. Ad-financed websites and advertising networks will be hit first. Social media sites where real names are used, such as Facebook, are also at risk.

Users will probably also grow increasingly suspicious of cloud service providers such as Dropbox. Not because of intrusive advertisers, but because of NSA spying on stored data and the risk of hacked accounts. Companies that have dumped their own IT infrastructure and moved everything to the cloud will also have a hard time proving that their customers’ data is secure and has not been scooped up by the NSA somewhere inside the cloud.


But this development is also a business opportunity for cybersecurity firms, anonymisers and VPN providers. For example, Protonmail is a startup offering encrypted email. The company and its servers are located in Switzerland, which has very strict data protection laws. All stored emails are encrypted before leaving the customer’s computer, and Protonmail does not have the decryption keys.

Another example is SpiderOak, a competitor to Dropbox that offers cloud backup. Encryption makes it impossible even for SpiderOak’s own staff to view their customers’ data. For Android smartphones there is the app Redphone for encrypted voice calls. TextSecure is an app for encrypted messaging for iOS and Android. Silentcircle offers a suite of services for encrypted communication.
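
The common principle behind these services is that encryption happens on the customer’s own device and the provider never holds the keys. The sketch below illustrates that zero-knowledge model using the Python cryptography library; it is not Protonmail’s or SpiderOak’s actual implementation, just a minimal demonstration of the idea.

    # Illustration of the "zero-knowledge" model, not any vendor's real code.
    # The key is derived from the user's passphrase on their own machine;
    # only ciphertext (plus a random salt) is ever uploaded to the provider.
    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.hashes import SHA256
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=480_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    salt = os.urandom(16)                                # stored next to the ciphertext
    key = key_from_passphrase("correct horse battery staple", salt)

    ciphertext = Fernet(key).encrypt(b"Dear diary ...")  # encrypted before it leaves the PC
    # Only `ciphertext` and `salt` go to the cloud. Without the passphrase the
    # provider (or anyone who raids its servers) cannot read the contents.
    plaintext = Fernet(key).decrypt(ciphertext)          # decryption also happens locally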

Yet another indicator of the strength of this trend is the rapidly growing demand for cryptophones secured against NSA-style snooping. Several small companies have developed bespoke secure smartphones, often based on hardened versions of Android. With prices of up to $3,500, they are becoming the new status symbols for business executives. Some of the brands are GSMK Cryptophone, Blackphone, Teopad, Hoox m2, In Confidence, and Secusmart.

The smartphone giants Apple and Google are also offering better encryption to protect their users’ privacy and security. The iPhone 6 and iOS 8 introduce device encryption that Apple cannot bypass, even if required to do so by the authorities. Google is working on a similar solution for the new Android 5.0 release, Lollipop.

It is unclear how far the various segments of the user base will go in order to protect their anonymity and privacy. But this trend has the potential to be very disruptive, even if only 40% of users take action. Ignore it at your own risk.

My previous blog posts about design

Here is a compilation of links to my previous blog posts about product and service design. My view is that successful design has to build on the user’s context and micro-situation. One needs to put oneself in the user’s shoes and really understand his or her situation, pre-understanding, and purpose for using the product or service. Whether or not a product is designed from the user’s perspective can mean the difference between success and failure.

  • For example, despite super glossy PC screens looking vibrant and stunning in the store, the reflections and intense brightness cause eyestrain with prolonged usage.
  • Another example is the megapixel race for cameras. A sensor with too many megapixels is pointless, as the cheap lens in a smartphone or budget digital camera will be the limiting factor. In addition, a high-megapixel sensor performs worse in low light than a sensor with a more modest pixel count, because each individual pixel is smaller. Where are most smartphone pictures taken – outdoors in bright sunlight or indoors? Again, technology-driven vendors ignore the user context.
  • Design is important even for something as basic as mobile voicemail. If someone calls your mobile and you don’t answer, the call is forwarded to your voicemail after a few rings. Last time I checked (in Sweden), no operator offered more than 30 seconds before the call was forwarded. There are a number of situations where 30 seconds is simply too short. As a consumer, I want the option to set it to 45, 60 or 90 seconds. Again, a design decision that doesn’t take the user’s context into account.

Digital consumers – free trend forecasting from communication agencies

We all know that digital is massively invading and disrupting all industries. One indicator of this development is that part of the mainstream advertising industry finally seems to be embracing digital.

The “ad” industry is not just the creative part. It also comprises PR firms, market researchers, insight agencies, database marketers, and specialists in digital and lifestyle trends. The major players in the industry (WPP, Omnicom, Publicis Groupe, and IPG) are conglomerate holding companies with investments in all parts of the industry. It seems that these players are beginning to put the pieces together around digital.

They give away quite a lot of their research for free. Here are some free information sources that could be useful for us industry analysts and Zeitgeist watchers:

  • JWT Intelligence has quite a lot of free material, both trend forecasting and digital material. Some additional reports can be found on JWT Worldwide.
  • Trendwatching is an independent firm focusing on innovations, digital, and consumer trends. They have a global network of 3,000 freelance trendspotters. A competitor with a similar business model is Springwise, which focuses on innovation and discovering new business models in the B2C space. Another competitor is The Futures Company, though they are slightly less focused on digital.
  • Havas Media has reports about global media trends as well as material about digital marketing.
  • The largest conglomerate, WPP, also provides free reports and articles from its portfolio companies, some of which are about digital. The competing holding company Interpublic Group (IPG) has a compilation of free reports and blogs from its portfolio companies, though not much about digital.

One impression from quickly browsing through these sources is how digital has enabled and unleashed innovation all over the world. Here are a few examples (from Trendwatching’s service):

  • A Fiat car showroom in Brazil has equipped its staff with head-mounted video cameras. Customers can contact the showroom and the sales rep can walk around the car, open the hood, get in the car – all while talking to the customer and filming the car.
  • Pizza Hut in Panama has delivery scooters with small ovens built in. Their customers receive piping hot pizzas straight from the oven on the scooter.
  • Volvo has developed a Roam Delivery mobile app that makes it possible to use their cars as a delivery destination for couriers. Using the app, Volvo owners can give the courier the exact GPS position and temporary electronic key to the car.
  • The BFF Timeout app encourages Filipinos to focus on each other, rather than on their phones. Once all individuals in a group have opened the app together (sponsored by McDonalds and Coca Cola), the timeout begins and points are earned for every moment all phones are left alone.
  • Africa’s own startups have lofty continent-wide ambitions. Jovago is a hotel booking service from Nigeria, Oju from Mauritius launched African emojis for smartphones, the supermarket chain Choppies Enterprise is from Botswana, Nigerian Jumia is the Amazon of Africa, and Africa’s Netflix is the Nigerian startup iROKO. All of these companies are now expanding into neighbouring countries.
  • Dutch train operators Prorail and NS plan to roll out platform-length LED displays that provide real-time information to passengers. A 180 meter LED strip shows information on carriage crowdedness gathered from infrared sensors inside the carriages, as well as information on where carriage doors will open and the location of quiet carriages in the train.
  • French shopping center specialist Klépierre has developed an “Inspiration Corridor” equipped with large touchscreens on the walls. Sensors in the corridor analyse a shopper’s age, gender and apparel. The walls are then filled with personalized shopping suggestions. Tapping on the touchscreen sends directions to the customer’s mobile.

Some of these digital innovations are of limited value, some are just silly, but some are brilliant. Tinkering, creating, and innovating take place everywhere – not just in Silicon Valley, Manhattan, Tokyo, Shanghai, and London. The technology platforms and infrastructure are in place around the world, and the barriers to further innovation are lower than ever.

Ahonen’s Rest of Decade forecast for smartphones, tablets etc.

Tomi Ahonen has just released his Rest of Decade forecast for the smartphone and PC market. His prediction is that all phones sold will be smartphones by 2018/2019 and that the low-end dumbphone market will cease to exist. Android will be the dominant platform and the overall PC market will remain almost flat, though within the PC segment tablets will largely replace traditional PCs, holding a 77% market share by 2020. (Ahonen defines a tablet as an ultraportable PC, not a large smartphone. I agree. A tablet can’t be used with one hand while walking down the street, and you can’t put a tablet in your pocket.)

According to Ahonen, as the smartphone market rapidly expands into the price-sensitive mass market, Apple/iOS will lose market share and be relegated to a very profitable high-end niche player with an 8% market share by 2020. Looking at the entire computer market (PCs, tablets, smartphones), Apple/iOS will manage to keep an 11% market share by 2020, well ahead of Microsoft, which will hold a meagre 6%. The smaller platforms (Blackberry, Tizen, Firefox OS, Windows Phone, etc.) will hover around 1% each, if they are not wiped out entirely.

It is hard to argue with Ahonen’s reasoning and his excellent track record as a forecaster. However, a few things are worth commenting on.

In my opinion, the forecast for the decline of the traditional keyboard-based laptop/desktop PC is overly pessimistic. Ahonen predicts that PC sales will drop to roughly 40% of the 2013 volume (from 315 million units to 130 million). For trained knowledge workers, input via the mouse and a full keyboard is still much faster than a tablet for extended sessions of concentrated work. In addition, tablets have significant ergonomic UX/UI problems when used for office work. A tablet placed horizontally on an office desk will catch glare from ceiling lights and give the user “iPad neck” from bending down. A vertically placed tablet in a stand will strain the arms every time the user touches the screen. No one wants to lift their arms and touch a vertical screen in front of them every 30 seconds for eight hours.

Another issue worth commenting on is the market share predictions for Google/Android and Apple/iOS. Even though Android is free and open source, the bundled Google Play layer is not. Google have moved most of the goodies (features and APIs) from Android into Google Play, where they exert tight control. Any device maker or operator that wants to use Android without Google’s presence will have to drop Google Play and recreate all of this functionality; they will be left with the “naked” Android. Device makers are also prohibited from working on Android forks if they want access to Google’s closed-source apps and APIs. In addition, the app developer ecosystem is dependent on the APIs in Google Play.

This of course favours Android in the short run. However, as the Android platform matures and Google’s dominance grows, it is possible that another major player will fork Android and launch a competing but very similar platform in order to get rid of Google’s tracking and analytics in their products – perhaps supplying a cross-platform tool that makes it easy to port Android apps. The likelihood of such a fork has increased in the aftermath of the Snowden leak: foreign governments will suspect that the NSA has demanded backdoor access to Google’s user tracking and/or Google Play.

Considering that entire populations will soon be smartphone users and that the smartphone is an unprecedented spying device (camera, mic, location, call logs, surfing patterns, banking, passwords, etc.), foreign governments will most likely find it unacceptable that the US government has backdoors into this data – and that they don’t. From a Chinese perspective, for example, government support of an Android fork would be both sound industrial policy and a way to replace US/Google control, possibly creating a viable domestic player that can compete on the global market. (China has already done this, but the existing Chinese Android fork is old and not compatible with the global ecosystem of Android apps.)

My point is that Ahonen’s prediction of Android dominance (an 89% market share in 2020) underestimates the strategic countermoves from other players that view this development as a threat. When one player becomes too strong, the rest of the industry tends to unite against it. Android will of course be a dominant platform, but the uncertainty is greater than Ahonen seems to allow for.

I am also somewhat puzzled by the forecasts for Apple/iOS. Ahonen has a rather compelling argument for Apple as a niche player: if Apple introduces much cheaper low-end smartphones, it will reach new market segments but also cannibalise sales of its own high-margin premium models. Apple will most likely prefer to keep stellar profit margins and remain a leader in the premium segment rather than compete on volume. This makes sense. The global smartphone market is so large that even a 10% share is enough to reap economies of scale, and the developer ecosystem and app universe for iOS is already in place and is as strong as Android’s.

But if and when the smartphone market really matures, it is possible that Apple will find itself embattled even in its core premium market in rich countries. If customers become unwilling to pay Apple’s premium prices and market share falls, I think Apple will switch to plan B: cut prices, introduce low-end models and go for volume instead of profit margins. In that case, Apple/iOS will end up with a significantly larger market share in 2020 than Ahonen’s forecast of 8% – perhaps 15%. The paradox is that this bad news for Apple would actually translate into a larger market share. Ahonen’s forecast assumes this won’t happen before 2020; I think there is around a 40% probability that it will.

Nuisance calls will kill landline voice

Few people in the tech sector care about landline voice these days, but for landline operators it is still a significant (though declining) cash cow, and will be for years to come. Mismanaging this service could provoke a customer stampede away from landline voice. If BT and the other UK landline service providers can’t stop the deluge of nuisance calls that has flooded British customers over the last few years, the scammers and spammers will swiftly and effectively kill this business area for BT et al. (This is the downside of English being a global language: there are no Swedish- or Finnish-speaking call centre operators in India.)

A survey by consumer watchdog Which? found that 70 percent of respondents had received unwanted calls. In the comments, many Which? members reported being bombarded by several calls every day, sometimes even in the wee hours of the morning. There are silent calls, robocalls, calls to people’s work numbers, scams about a legal settlement repaying credit card fees, calls selling shady PPI, calls selling protection against unwanted calls, fake market research calls that morph into sales calls, calls about double glazing, fake calls from “Microsoft support” asking for access to your PC, and on and on. Asking to be removed from the call lists rarely helps; they continue to call regardless. Some spam callers hang up immediately if you deviate from the caller’s script. Judging by the accents, many calls seem to originate from Far East call centres. Many users reported that adding their numbers to the TPS list, which is supposed to stop telemarketing calls, was of little help.

This deluge of nuisance calls is forcing people to change their usage patterns. The older generation is still stuck with the idea that a ringing telephone has to be answered, while the younger generation gladly ignores unknown callers. But even the older generation will be forced to change its habits because of this problem.

Users are also trying to defend themselves with countermeasures. There are answering machines and phones with integrated “Nuisance Call Blocking” functionality (CPR Callblocker, Trucall, BT6500). They use the caller ID to block known nuisance calls. Typically they block all international, “unavailable” and “withheld” calls, in addition to a blacklist of numbers for known call centres.
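
The screening logic in these gadgets is simple enough to sketch in a few lines of Python. The rules and numbers below are illustrative only, not the firmware of any actual product.

    # Rough sketch of caller-ID screening: the device sees the incoming caller ID
    # (or its absence) before the phone rings and decides whether to let it through.
    BLACKLIST = {"02031234567", "01610000000"}   # known nuisance numbers (made up)
    WHITELIST = {"01632960000"}                  # friends and family always get through

    def should_block(caller_id):
        if caller_id in WHITELIST:
            return False
        if caller_id is None:                    # "withheld" / "unavailable" calls
            return True
        if caller_id.startswith("00"):           # international prefix on a UK landline
            return True
        return caller_id in BLACKLIST

    for caller in ["01632960000", None, "0049301234567", "02031234567"]:
        print(caller, "-> blocked" if should_block(caller) else "-> rings through")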

The problem for BT & Co. is that these countermeasures undermine the landline voice business. Blocking all unlisted and international calls makes it harder for friends and family to reach you on the landline, and it also blocks SkypeOut calls. Asking your service provider for a new, never-used number will leave your contacts stranded unless you manually give them the new number. Blocking your own caller ID for outgoing calls makes you a “suspicious caller”, and your friends and family might not want to answer. Using an answering machine to screen calls is inconvenient. And once anonymous call blocking becomes widespread, the spammers and scammers will most likely find ways around it, for example by spoofing caller IDs.

One of the angry Which? members had taken the drastic measure of paying extra to add a premium 0871 number to his landline, which he always gave to companies and other untrusted parties. This stopped most of the spam callers, and anyone who did call had to pay for the privilege of talking to him; he actually made some money on it. But for most users, cancelling their landline subscription probably makes more sense.

Spam has ruined email as the dominant form of e-communication. Nuisance and scam calls will most likely be the final nail in the coffin for traditional landline voice. BT and the other landline operators should make it a top priority to stop spam calls. BT should lobby for tougher laws with severe fines for companies that profit from nuisance calling. Fraud is a crime in India as well as in the UK, and British law enforcement should cooperate with its Indian counterparts to bring high-profile cases against Indian “telemarketers” that defraud British customers. The recent £90k fine against a company in Glasgow is a first step, but BT should urge law enforcement to be more vigilant.

BT’s first step should be to stop selling wholesale termination minutes without requiring buyers to use caller ID and to comply with some form of terms of service (are there any ToS for wholesale customers?). BT should also upgrade security in its technical infrastructure. Another flaw is that the British landline network can only display 11 digits of caller ID, which means that most international numbers cannot be displayed. (Mobile networks display international caller IDs without any problem.) BT should upgrade its network to support longer caller IDs. It should also look into basic QoS issues such as sound volume: calls on a UK landline vary widely in quality, and quite often the volume is so low that it is almost impossible to hear the other party.

Few pundits and analysts in the telco sector bother to look at landline voice. It is viewed as a boring legacy dinosaur. That is a mistake (even though it is indeed a boring, declining cash cow). For the landline operators, the speed of the decline of landline voice is a matter of billions in cash flow over its remaining lifetime. Nonchalance about nuisance calls could swiftly put an end to this business. They should heed the warning.

Update: On 18 April, Ofcom fined TalkTalk £750,000 for making an excessive number of abandoned and silent telemarketing calls. Things are moving in the right direction.

Microsoft needs to rethink their platform strategy

 

Why force users to upgrade from XP or into Windows 8? They might as well migrate away from Microsoft altogether.

 
While the mobile platform race garners the most attention these days, the fate of the legacy PC ecosystem is crucial for one company: Microsoft. How they play their cards will have repercussions for the entire tech sector.

Microsoft has long profited from being a de facto monopolist with a huge user base locked into their ecosystem. Their upgrade cycles, with a new OS every few years, have kept the revenue streams flowing. The inherent flaws and instabilities in the Windows platform have usually made upgrades worthwhile, as each new OS (3.1, 95, 98, NT, ME, 2000, XP, Vista, 7) has been an improvement over an even worse-performing predecessor.

To push the user base forward to the next platform, Microsoft has left legacy users stranded: support for new hardware and APIs is not added to an old OS, and support and security patch services have an end date. Ever-increasing hardware performance has provided an additional incentive for users to upgrade to new machines. This strategy was a breeze from Windows 95 to Windows 7, but now the engine is about to break down.

Hardware performance of the ageing PC platform is now adequate for most users; there is far less of a compelling need to upgrade a four-year-old PC. But the major threat to Microsoft is the risk of pushing users away from the Windows ecosystem through forced upgrades. Microsoft may be about to make a serious blunder at both ends of their product pipeline.

On April 8, 2014 Microsoft will stop supporting Windows XP with updates and new security patches. XP’s share is falling, but it still held 24 percent of the global market as of January 2013. Even if the user base is below 10 percent next year when Microsoft terminates XP, that is still 50 to 100 million users. Microsoft’s reckless termination of Windows XP could wreak havoc and damage the company’s reputation: malware writers and criminals will keep newly discovered security holes to themselves and wait until April 9th next year to unleash them, when they know that Microsoft will no longer provide patches. It would seem that Microsoft’s corporate DNA is still stuck in the mindset of the arrogant monopolist of the 1990s, taking for granted that this abandoned user base will stay with the Windows platform no matter what.

To add insult to injury, Microsoft is not only planning to leave legacy XP users stranded; the upgrade from Windows 7 to Windows 8 is a discontinuity that will force users to rethink whether they should stay with the Windows platform at all. Win8 represents a radical departure from the traditional Windows UX/UI, with a steep learning curve. Windows 8 is adapted for touch screens. Bravo. But the hybrid touch/traditional UI/UX is a step backward for users who want to work the way they are used to. It will slow down productivity for the billion users accustomed to the mouse, cursor and keyboard paradigm. Users have spent 15 years honing their mouse, eye, keyboard, screen and motor skills; throwing this human investment away would be madness.

Touch screens are a significant technological innovation with disruptive potential. But if you are an office worker who spends eight hours a day manipulating the same corporate applications in front of a screen, a touch screen is hardly an improvement. With a touch screen, users have to constantly move their arms, and if the display is vertical that means lifting your arms in a way that becomes physically exhausting after less than an hour. The slowdown in office productivity due to bad ergonomics and forced relearning could be significant, though I think corporate IT managers will be aware of these drawbacks and keep Win8 out.

  • Microsoft needs a strategy to defend their installed base at any cost

Microsoft seems to be oblivious to their weakened strategic position. Compared to 2003, users today have a plethora of alternatives (Apple/iOS, web/HTML5, cloud apps, SaaS, Android, Ubuntu, etc.). The lock-in to Windows is not as strong as during the prime PC era. The huge legacy of corporate systems makes Windows sticky, but the ease of developing new apps will eat away at this exit barrier: the cost and time of developing complex apps has fallen by an order of magnitude over a decade.

Microsoft wants to push their entire user base onto one platform (the latest). But considering the risk that Windows 8 might be a failure, they need to rethink. By signalling that older platforms are to be phased out, they are actually encouraging skeptical users to look elsewhere if they don’t like Microsoft’s upgrade cycle.

  • Here is what Microsoft should do:

Develop a major upgrade for Windows XP with some of the more modern security features included. Announce this upgrade (“Windows XP II”) in September 2013 and sell it for around $15 (upgrade only), with support until 2021. Extend the free security updates for the old XP by one year to give the user base time to migrate. Microsoft should also consider marketing XP II to new users at a higher price point.

For Win7 and Win8, Microsoft should announce that this represents a fork and that they are committed to supporting and upgrading the older platform (Win7) for users who don’t want the new touch-based UI/UX.

Microsoft has to accept that their total market share is the sum of several platforms. Even with a 10 percent market share, each platform is large enough to be an attractive, cash-flow-positive business for any company. Milking a legacy platform such as XP is humiliating for Microsoft, but beggars can’t be choosers. XP II would be a highly profitable business area and it would help retain the installed base. Whether this is enough to prevent or slow down the decline of Microsoft remains to be seen. (This is of course not a comprehensive strategy for Microsoft; it only addresses the limited issue of OS upgrade cycles.)

Xperia Arc’s low light camera – will Sony(Ericsson) ever come to their senses about the Mpixel race?

Xperia arc

I don’t usually pay much attention to Sony Ericsson’s products, but last week I saw a billboard in the subway stating that the Xperia Arc sports a camera with excellent low-light capabilities. A quick Google search revealed that SE is pushing the 8 Mpix low-light camera as a major selling point. The low-light capabilities come from Sony’s new sensor technology, Exmor R.

It looks like SE’s marketing department has realised that most of their customers put a high value on the ability to take “natural” photos in low light without a disturbing flash. Back in 2009 I wrote a highly critical review of their then-flagship model, the Satio. The Satio was equipped with an oversized 12 Mpixel camera that was mediocre in low light due to the small sensor size. Two years later, it looks like they’ve finally fixed the problem.

It’s great that Sony has developed the Exmor R sensor for improved low-light photography. But if they want to exploit this technology and use it to jump ahead of the competition, they should push the low-light threshold to the extreme with a sensor that has larger pixels but fewer megapixels. Instead, Sony has developed a silly little 16 Mpix sensor while most of their competitors are concentrating on factors other than megapixel count.

The Mpixel count is only one design parameter. Pixel size is just as important as the number of pixels: if each pixel is larger, it will capture more photons and perform better in dim light.
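
A quick back-of-the-envelope calculation shows why, assuming a typical 1/2.3-inch camera-phone sensor of roughly 6.2 x 4.6 mm (an assumption for illustration, not SE’s published spec):

    # Pixel-size arithmetic on an assumed ~1/2.3-inch sensor (6.2 x 4.6 mm).
    from math import sqrt

    sensor_area_um2 = (6.2 * 1000) * (4.6 * 1000)   # sensor area in square micrometres

    for megapixels in (6, 8, 16):
        pixel_area = sensor_area_um2 / (megapixels * 1e6)
        print(f"{megapixels} MP: pitch ~{sqrt(pixel_area):.2f} um, area ~{pixel_area:.1f} um^2")

    # 6 MP:  ~2.18 um pitch, ~4.8 um^2 per pixel
    # 8 MP:  ~1.89 um pitch, ~3.6 um^2 per pixel
    # 16 MP: ~1.34 um pitch, ~1.8 um^2 per pixel

On the same sensor, each 16 Mpix pixel collects half the light of an 8 Mpix pixel and little more than a third of what a 6 Mpix pixel would – which is the whole argument for fewer, larger pixels in low light.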

And the sensor is only one component in the camera. Compare it with the highly touted Nokia N8 camera: Nokia’s sensor is one of the largest in any camera phone, the Zeiss lens is made of glass rather than plastic, and it has a mechanical shutter, a Xenon flash, and a built-in ND filter to handle extremely bright shooting conditions.

It’s a shame that Sony(Ericsson) and its new owner Sony don’t understand that they could use this new technology to build an extreme low-light camera phone that would sweep the competition away. If they developed a 6 Mpix Exmor R sensor with a larger sensor size and paired it with a Zeiss lens made of hardened glass, they would really have a winner on their hands.

New UI/UX after Apple’s iPhone/iPad

Sometimes I get the impression that the industry believes the iPhone and iPad represent the pinnacle of human technology. Even though the majority of the market attention is on these form factors, several new UI technologies are already out of the labs. These technologies have the potential to disrupt the traditional smartphone/tablet market and might pave the way for new types of products.

Here are a few examples that point toward a world after candybar multitouch. Exactly how they can be used and integrated in the UI/UX remains to be seen.

Demo of Microsoft Surface with PixelSense from Samsung

I have written about Microsoft Surface before; it is a large horizontal multitouch screen built into a table. For the new, slimmer version of Surface, Microsoft and Samsung have developed the PixelSense touch-sensing technology, in which every pixel in the screen is also an infrared sensor that detects warm fingers on the surface. Just imagine what a future development of this technology could do if Samsung manages to fit three RGB colour sensors into every pixel: the surface could double as a copying machine. You put a paper, coupon or picture face down on the surface, and when you lift it up, the copied object is displayed on the screen.

A technology for high-performance multitouch screens has been developed by the Swedish startup Flatfrog. Their multitouch is based on an optical in-glass solution (Planar Scatter Detection) that can also be used to create multitouch on curved glass surfaces.

Another Swedish startup is Tobii, which has developed a technology for tracking eye movements. Using cameras that track the position of the pupils, it is possible to calculate exactly where the user is looking. The company’s initial markets have been expensive high-end systems for paralysed people, market researchers, and academic researchers in cognitive psychology. Tobii has now begun to target the mainstream market together with Lenovo, which is integrating eye tracking into a prototype laptop.

Kinect is a technology that Microsoft developed for its Xbox gaming console. It is an add-on gadget for a gaming console or flatscreen with facial recognition, voice recognition and the ability to track gestures such as arm and hand movements. With Kinect you can control a game or PC by talking and waving your arms: steering an action figure, browsing your music collection, zooming in and out of a photo, and so on. Up to six users can be tracked at the same time.

Even more futuristic UI/UX modalities are BCI technologies (Brain Computer Interfaces), where brain waves directly control a UI or a piece of machinery. BCI has been used in research labs for a long time, with electrodes implanted in the skull. Newer products based on less invasive methods, with the electrodes attached to the scalp, are now hitting the market, often in the form of a headset. The precision and bandwidth of these methods are still very primitive; one of the few things that can be reliably measured with BCI is emotive state, such as relaxation versus concentration.


Most of these innovations are early in their life cycle and it is still too early to tell whether any of them have strong disruptive potential. New technologies drive the development of new form factors, and it remains to be seen if and how this will create future killer hardware. There is also a shortage of apps that can take advantage of the new features and turn them into compelling user experiences.

There are several hurdles to overcome. Products such as Kinect, Tobii and Surface put significant demands on processor capacity and there is a learning curve for any new UI technology. Prices have to come down for the large mainstream market to accept them.

I am slightly skeptical about technologies that require you to wave your arms. What is fine when gaming in your own living room becomes tiresome when you have to lift and wave your arms for an extended period of time. This has already been shown by users’ resistance to large vertical PC touchscreens.


It is possible that these new technologies will find their way into the candybar smartphone/tablet. But I think it is more likely that the future smartphone will integrate these new UI technologies without them residing in the handset. If most tables, office desks, and bars are made of hard glass with MS Surface technology, perhaps the user could just place their smartphone on the glass and have all their apps, contacts and pictures displayed. The surface might even have built-in eye tracking. Or maybe Corning’s vision of a world of glass will come true and the nearest wall will be able to display your smartphone home screen, with built-in eye tracking for navigation. Just make sure to control your eyeballs – you never know who might be looking over your shoulder.