Nuisance calls will kill landline voice

Few people in the tech sector care about landline voice these days, but for landline operators it’s still a significant (though declining) cash cow, and will be for years to come. Mismanaging this service could provoke a customer stampede away from landline voice. If BT and the other UK landline service providers can’t stop the deluge of nuisance calls that has flooded British customers over the last few years, the scammers and spammers will swiftly kill this business area for BT et al. (This is the downside of English being a global language. There are no Swedish- or Finnish-speaking call centre operators in India.)

A survey by consumer watchdog Which? found that 70 percent of respondents had received unwanted calls. In the comments fields, many Which? members reported being bombarded by several calls every day, sometimes even in the wee hours of the morning. There are silent calls, robocalls, calls to people’s work numbers, scams about legal settlements for repaying credit card fees, calls selling shady PPI, calls selling protection against unwanted calls, fake market research calls that morph into sales calls, calls about double glazing, fake calls from “Microsoft support” asking for access to your PC, and on and on. Asking to be removed from the call lists rarely helps; the callers continue regardless. Some spam callers hang up immediately if you deviate from their script. Judging by the accents, many calls seem to originate from Far East call centres. Many users reported that adding their numbers to the TPS list, which is supposed to reject telemarketing calls, was of little help.

This deluge of nuisance calls is forcing people to change their usage patterns. The older generation is still stuck with the idea that a ringing telephone has to be answered, while the younger generation gladly ignores unknown callers. But even the older generation will be forced to change its habits because of this problem.

Users are also trying to defend themselves with countermeasures. There are answering machines and phones with integrated “Nuisance Call Blocking” functionality (CPR Callblocker, Trucall, BT6500). These devices use the caller ID to block known nuisance calls. Typically they block all international, “unavailable” and “withheld” calls, in addition to a blacklist of numbers for known call centres.
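The screening logic in these devices is straightforward. Here is a minimal sketch of how caller-ID-based blocking works; the rules, number formats and blacklist entries are illustrative assumptions, not any vendor’s actual firmware:

```python
# Illustrative sketch of caller-ID screening as done by nuisance-call blockers.
# The rules and blacklist entries are invented, not any vendor's actual logic.

BLACKLIST = {"+913340001234", "+442087654321"}  # hypothetical known call-centre numbers

def should_block(caller_id):
    """Return True if the call should be rejected before the phone rings."""
    if caller_id is None:                 # "withheld" / "unavailable" callers
        return True
    if caller_id.startswith("+") and not caller_id.startswith("+44"):
        return True                       # block all international calls
    return caller_id in BLACKLIST         # known nuisance numbers

print(should_block(None))            # withheld caller ID -> True
print(should_block("+14155550100"))  # international -> True
print(should_block("02079460000"))   # ordinary UK number -> False
```

The same logic is what makes these blockers blunt instruments: a relative calling from abroad or from a withheld number is rejected just like the spammers.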

The problem for BT & Co. is that these countermeasures undermine the landline voice business. Blocking all unlisted and international calls makes it harder for friends and family to reach you on the landline, and it also blocks SkypeOut calls. Asking your service provider for a new, never-used number will leave your contacts stranded unless you manually provide them with the new number. Blocking your own caller ID for outgoing calls makes you a “suspicious caller” whom your friends and family might not want to answer. Using an answering machine to screen calls is inconvenient. And once anonymous call blocking becomes widespread, the spammers and scammers will most likely find ways around it, for example by spoofing caller IDs.

One angry Which? member had taken the drastic measure of paying extra to add a premium 0871 number to his landline, which he always gave to companies and other untrusted parties. This stopped most of the spam callers, and anyone who did call had to pay for the privilege of talking to him. He actually made some money on this. But for most users, cancelling the landline subscription probably makes more sense.

Spam has ruined email as the dominant form of e-communication. Nuisance and scam calls will most likely be the final nail in the coffin for traditional landline voice. BT (and the other landline operators) should make it a top priority to stop spam calls. BT should lobby for tougher laws with severe fines for companies that profit from nuisance calling. Fraud is a crime in India as well as in the UK, and British law enforcement should cooperate with its Indian counterparts to bring high-profile cases against Indian “telemarketers” who defraud British customers. The recent £90k fine against a company in Glasgow is a first step, but BT should urge law enforcement to be more vigilant.

BT’s first step should be to stop selling wholesale termination minutes without requiring buyers to use caller ID and to comply with some form of terms of service (are there any TOS for wholesale customers?). BT should also upgrade security in its technical infrastructure. Another flaw is that the British landline network can only display 11 digits in the caller ID, which means that most international numbers cannot be displayed. (Mobile networks display international caller IDs without any problem.) BT should upgrade its network to enable longer caller IDs. It should also look into everyday QoS issues such as sound volume. Calls on a regular UK landline vary widely in quality; quite often the volume is so low that it is almost impossible to hear the other party.

Few pundits and analysts in the telco sector bother to look at landline voice. It is viewed as a boring dinosaur legacy business. That is a mistake (even though it’s true that it’s a boring, declining cash cow). For the landline operators, the speed of the decline of landline voice is a matter of billions in cash flow over its remaining lifetime. Nonchalance about nuisance calls could swiftly put an end to these operators’ business. They should heed the warning.

Update: On 18 April, Ofcom fined TalkTalk £750,000 for making an excessive number of abandoned and silent telemarketing calls. Things are moving in the right direction.

Microsoft needs to rethink their platform strategy


Why force users to upgrade from XP or into Windows 8? They might as well migrate away from Microsoft altogether.

While the mobile platform race garners the most attention these days, the fate of the legacy PC ecosystem is crucial for one company: Microsoft. How they play their cards will have repercussions for the entire tech sector.

Microsoft has long profited from being a de facto monopolist with a huge user base locked into their ecosystem. Their upgrade cycles with a new OS every few years have kept up their revenue streams. The inherent flaws and instabilities in the Windows platform have usually made upgrades worthwhile as each new OS (3.1, 95, 98, NT, ME, 2000, XP, Vista, 7) has been an improvement over an even worse performing predecessor.

To push the user base forward to the next platform, Microsoft has left the legacy user base stranded. Support for new hardware and APIs has not been added to legacy OS versions. Support and security patch services have had an end date. Ever-increasing hardware performance has provided additional incentives for users to upgrade with new machines. This strategy was a breeze from Windows 95 to Windows 7, but now the engine is about to break down.

Hardware performance of the aging PC platform is now adequate for most users; these days there is less of a compelling need to upgrade a four-year-old PC. But the major threat to Microsoft is the risk of pushing users away from the Windows ecosystem through forced upgrades. Microsoft may be about to make a serious blunder at both ends of its product pipeline.

On April 8, 2014 Microsoft will stop supporting Windows XP with updates and new security patches. XP’s market share is falling but still stood at 24 percent globally as of January 2013. Even if the user base is below 10 percent next year when Microsoft terminates XP, we are talking about 50 to 100 million users. Microsoft’s reckless termination of Windows XP could wreak havoc and damage the company’s reputation. Malware hackers and criminals will keep newly discovered security holes to themselves and wait to unleash them until April 9, 2014, when they know that Microsoft will no longer provide patches. It would seem that Microsoft’s corporate DNA is still stuck in the mindset of the arrogant monopolist of the 1990s, taking for granted that this abandoned user base will stay with the Windows platform no matter what.

To add insult to injury, Microsoft is not only planning to leave legacy XP users stranded; the upgrade from Windows 7 to Windows 8 is a discontinuity that will force users to rethink whether they should stay with the Windows platform at all. Win8 represents a radical departure from the traditional Windows UI/UX, with a steep learning curve. Windows 8 is adapted for touch screens. Bravo. But the hybrid touch/traditional UI/UX is a step backward for users who want to work the way they are used to. It will slow down productivity for the billion users accustomed to the mouse, cursor and keyboard paradigm. Users have accumulated 15 years of learning, honing their mouse, eye, keyboard, screen and motor skills. Throwing this human investment away would be madness. Touch screens are a significant technological innovation with disruptive potential. But if you are an office worker who spends 8 hours a day manipulating the same corporate applications in front of a screen, a touch screen is hardly an improvement. With a touch screen, users have to constantly move their arms, and if the display is vertical that means lifting your arms in a way that becomes physically exhausting in less than an hour. The slowdown in office productivity due to bad ergonomics and forced relearning could be significant, though I think corporate IT managers will be aware of these drawbacks and keep Win8 out.

  • Microsoft needs a strategy to defend their installed base at any cost

Microsoft seems to be oblivious to its weakened strategic position. Compared to 2003, users today have a plethora of alternatives (Apple/iOS, web/HTML5, cloud apps, SaaS, Android, Ubuntu, etc.). The lock-in to Windows is not as strong as during the prime PC era. The huge legacy of corporate systems makes Windows sticky, but the ease of developing new apps will eat away at this exit barrier. The cost and time of developing complex apps have fallen by an order of magnitude over a decade.

Microsoft wants to push their entire user base to one platform (their latest). But considering the risk that Windows 8 might be a failure they need to rethink. By signaling that older platforms are to be phased out they are actually encouraging skeptical users to look elsewhere if they don’t like Microsoft’s upgrade cycle.

  • Here is what Microsoft should do:

Develop a major upgrade for Windows XP with some of the more modern security features included. Announce this upgrade (“Windows XP II”) in Sep 2013 and sell it for around $15 (upgrade only) with support until 2021. Extend the free security updates for the old XP for one year to give the user base time to migrate. Microsoft should also consider marketing XP II for new users at a higher price point.

For Win7 and Win8, Microsoft should announce that this represents a fork and that they are committed to the support and upgrades of the older platform (Win7) for users that don’t want the new touch based UI/UX.

Microsoft has to accept that their total market share is the sum of several platforms. Even with a 10 percent market share, each platform is large enough to be an attractive, cash flow positive business for any company. Milking a legacy platform such as XP is humiliating for Microsoft but beggars can’t be choosers. XP II would be a highly profitable business area and it would keep up the installed base. Whether this is enough to prevent or slow down the decline of Microsoft remains to be seen. (This is of course not a comprehensive strategy for Microsoft but only addresses the limited issue of OS upgrade cycles.)

Xperia Arc’s low light camera – will Sony(Ericsson) ever come to their senses about the Mpixel race?

Xperia arc

I don’t usually pay much attention to Sony Ericsson’s products, but last week I saw a billboard in the subway stating that the Xperia Arc sports a camera with excellent low-light capabilities. A quick Google search revealed that SE is pushing the 8 Mpix low-light camera as a major selling point. The low-light capabilities come from Sony’s new sensor technology, Exmor R.

It looks like SE’s marketing department has realized that most of their customers put a high value on the ability to take “natural” photos in low-light conditions without a disturbing flash. Back in 2009 I wrote a highly critical review of their then flagship model, the Satio. The Satio was equipped with an oversized 12 Mpixel camera that was mediocre in low light due to its small sensor. Two years later, it looks like they’ve finally fixed the problem.

It’s great that Sony has developed the Exmor R sensor for improved low-light photography. But if they want to exploit this technology and jump ahead of the competition, they should push the low-light threshold to the extreme with a sensor that has larger pixels but fewer megapixels. Instead, Sony has developed yet another small 16 Mpix sensor while most of their competitors are concentrating on factors other than megapixel count.

The Mpixel count is only one design parameter. Pixel size is as important as the number of pixels: if each pixel is larger, it will capture more photons.
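This trade-off is easy to quantify: for a fixed sensor area, the light-gathering area per pixel falls in inverse proportion to the megapixel count. A back-of-the-envelope sketch (the sensor dimensions below are an assumed, roughly phone-sized value, not Sony’s actual specification):

```python
# Back-of-the-envelope: light-gathering area per pixel for a fixed sensor size.
# The sensor dimensions below are an assumption, not Sony's actual spec.
sensor_w_mm, sensor_h_mm = 4.5, 3.4                 # a small phone-camera sensor (assumed)
sensor_area_um2 = sensor_w_mm * sensor_h_mm * 1e6   # mm^2 -> square micrometres

for mpix in (16, 8, 6):
    pixel_area = sensor_area_um2 / (mpix * 1e6)     # area available to each pixel
    print(f"{mpix} Mpix: {pixel_area:.2f} um^2 per pixel")
```

On the same sensor, a 6 Mpix layout gives each pixel roughly 2.7 times the area, and hence the photons, of a 16 Mpix layout; that is exactly the kind of headroom an extreme low-light camera needs.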

And the sensor is only one component in the camera. Compare it with the highly touted Nokia N8 camera: Nokia’s sensor is one of the largest in any camera phone, the Zeiss lens is made of glass rather than plastic, and it has a mechanical shutter, a Xenon flash, and a built-in ND filter to handle extremely bright shooting conditions.

It’s a shame that Sony Ericsson and its new owner Sony don’t understand that they could use this new technology for an extreme low-light camera phone that would sweep the competition away. If they developed a 6 Mpix Exmor R sensor with a larger sensor and paired it with a Zeiss lens of hardened glass, they would really have a winner on their hands.

New UI/UX after Apple’s iPhone/iPad

Sometimes I get the impression that the industry believes the iPhone and iPad represent the pinnacle of human technology. Even though the majority of the market attention is on these form factors, several new UI technologies are already out of the labs. These technologies have the potential to disrupt the traditional smartphone/tablet market and might pave the way for new types of products.

Here are a few examples that point toward a world after candybar multitouch. Exactly how they can be used and integrated in the UI/UX remains to be seen.

Demo of Microsoft Surface with PixelSense from Samsung

I have written about Microsoft Surface before; it is a large horizontal multitouch screen built as a table. For the new, slimmer version of Surface, Microsoft and Samsung have developed the PixelSense touch sensing technology. In PixelSense, every pixel in the screen is also an infrared sensor that detects warm fingers on the surface. Just imagine what a future development of this technology could do if Samsung manages to fit three RGB color sensors into every pixel: the surface could double as a copying machine. You put a paper, coupon or picture face down on the surface, and when you lift it up, the copied object is displayed on the screen.

A technology for high-performance multitouch screens has been developed by the Swedish startup Flatfrog. Their multitouch is based on an optical in-glass solution (Planar Scatter Detection) that can also be used to create multitouch on curved glass surfaces.

Another Swedish startup is Tobii, which has developed a technology for tracking eye movements. Using cameras that track the position of the pupil, it is possible to calculate exactly what the user is focusing on. The company’s initial markets have been expensive high-end systems for paralyzed people, market researchers, and academic researchers in cognitive psychology. Tobii has now begun to target the mainstream market together with Lenovo, which is integrating eye tracking into a prototype laptop.

Kinect is a technology that Microsoft developed for its Xbox gaming console. It is an add-on gadget for your gaming console or flatscreen with facial recognition, voice recognition and the ability to track gestures such as arm and hand movements. With Kinect you can control a game or PC by talking and waving your arms. It can be used for controlling an action figure or for moving between windows, browsing your music collection, zooming in and out of a photo, and so on. Up to six users can be tracked at the same time.

Even more futuristic UI/UX modalities are BCI (Brain Computer Interface) technologies, where brain waves directly control a UI or some machinery. BCI has been used in research labs for a long time with electrodes implanted in the skull. Newer products based on less invasive methods, with the electrodes attached to the scalp, are now hitting the market, often in the form of a headset. The precision and bandwidth of these methods are still very primitive; one of the few things that can be reliably measured with BCI is emotive state, such as relaxation versus concentration.

Most of these innovations are early in their life cycle, and it is still too early to tell whether any of them has strong disruptive potential. New technologies drive the development of new form factors; it remains to be seen if and how this will create future killer hardware. There is also a shortage of apps that can take advantage of the new features and turn them into compelling user experiences.

There are several hurdles to overcome. Products such as Kinect, Tobii and Surface put significant demands on processor capacity and there is a learning curve for any new UI technology. Prices have to come down for the large mainstream market to accept them.

I am slightly skeptical about any technology that requires you to wave your arms. What’s fine when gaming in your own living room becomes tiresome elsewhere: lifting and waving your arms for an extended period of time is exhausting. This has already been shown by users’ resistance to large vertical PC touchscreens.

It is possible that these new technologies will find their way into the candybar smartphone/tablet. But I think it is more likely that the future smartphone will integrate these new UI technologies without their residing in the handset. If most tables, office desks, and bars are made of hard glass with MS Surface technology, perhaps the user could just place their smartphone on the glass and have all their apps, contacts and pictures displayed. The surface might even have built-in eye tracking. Or maybe Corning’s vision of a world of glass will come true, and the nearest wall will be able to display your smartphone home screen, with built-in eye tracking for navigating on the wall. Just make sure to control your eyeballs – you never know who might be looking over your shoulder.

What will happen to webOS after HP’s exit?

Update: According to The Next Web’s internal sources, HP is NOT going to sell or shut down webOS, just the underperforming hardware device division. HP communicated this in a clumsy way, which gave the erroneous impression that webOS was dead or for sale. HP’s plan seems to be to license webOS to hardware partners. But if HP is going to sell its PC division, it can’t force the buyer to promote webOS on the desktop platform. WebOS as a competitor to Windows 8 still seems quite unlikely.


My previous blog post about the potential for HP/webOS to expand webOS into the desktop and compete with Windows 8 was only two days old when HP announced their plans to abandon the tablet/smartphone market and their commitment to webOS. I got it wrong. This has been an interesting week in a frantically fast moving industry. Friday is not over so there is still time for another major announcement. How about Microsoft buying Nokia or Dell buying RIM?

I think HP made a rushed and unwise decision. Their Palm smartphones and HP Touchpad tablet didn’t sell, but that was due to clumsy design, weak hardware, and bugs. It was not caused by some inherent weakness in webOS.

If they had persevered, promoted an ecosystem, and licensed webOS, I think they would have had a chance in the market. After the Google/Moto deal, several of the Android licensees (LG, HTC, Sony Ericsson, Samsung, etc.) would probably have been interested in an alternative OS to reduce their dependence on Google.

And as I said in my previous blog post, if and when the HTML5/cloud paradigm becomes dominant, most existing HTML5 web apps will automatically become part of a huge virtual webOS app store inventory.

I haven’t really had time to consider potential webOS buyers, but here are some rough ideas. If HP includes the patent portfolio in the deal, they can probably get a better price; Google and Samsung might be interested in hoarding more patents. Other usual suspects would be LG, HTC, Sony Ericsson, ZTE and Huawei. Or possibly China Mobile, Verizon or another large operator. Amazon might be interested in using webOS for a tablet/cloud service offering. Dell, Lenovo, Acer or Asus might want to develop cloud-PC solutions based on webOS to escape Windows 8 and Microsoft’s stranglehold. Perhaps next week’s events and mega deals will provide some answers.

WebOS’s new life as HP’s Trojan horse

webOS from HP/Palm on different screens

The Google/Motorola deal suddenly makes it relevant to take a closer look at the other OS platforms on the market. Most industry observers in the mobile space have already written off HP/Palm’s webOS as a dead platform. From a smartphone perspective, this is obvious at first sight. With two percent of smartphone users in their only market (US), negligible global sales, and a thin app store, they don’t have much going for them. HP recently released a tablet powered by webOS but according to the reviewers the product is clumsy and behind the competition. There is talk about Samsung licensing webOS for a few handsets but no products have hit the shelves yet.

HP/webOS cannot currently compete head to head with Apple and Android, even though the webOS technical platform is excellent. The platform is designed for true multitasking and for running web apps based on HTML5/JavaScript/CSS with hardware-accelerated execution.

But there are a couple of scenarios where webOS can be viable. For example, if HTML5 becomes dominant and native apps fade into oblivion. In that case, the advantage of the market leaders’ huge app stores will become irrelevant and webOS will be able to compete on a level playing field. As webOS is optimized for HTML5 from the beginning, there is a good chance that apps will work better than on the competing platforms. If other OEMs become wary of using Android after Google’s Motorola purchase, webOS might be more attractive for licensing. An additional factor that could reinforce this scenario is the rise of cloud computing. If all this plays out, handset OEMs and mobile operators might consider webOS as a way of reducing the market power of Android, Microsoft, and Apple. But even if native apps are sidelined by HTML5, the smartphone market leaders will still have a huge advantage due to their brand, size, and market share.

However, if the game plan changes, webOS could reappear as a serious contender. If and when the cloud becomes the dominant computing paradigm, webOS will be a good fit for a range of devices. The drawback is that the mobile connection to the smartphone will always be unreliable and rather slow. If all your documents, contacts, messaging and apps are stored in the cloud this will be a problem. Enter the PC (and to some extent the tablet). For the PC one usually has a reliable broadband connection. That is where cloud computing (and hence webOS) would work best. And this is the direction HP is moving in with webOS. HP will install webOS on all their products (PCs, tablets, printers etc.) from 2012.

Industry observers predict that HP wants webOS so it can provide users with “experience roaming” across all devices, where webOS will ensure that users can keep their profile and contacts. The Synergy feature in webOS aggregates all the user’s contacts and supports that strategy. Enyo is webOS’s development framework and supports handling different screen sizes in a seamless way. This is probably part of HP’s plan.

But HP’s bolder move is to install webOS together with the upcoming Windows 8 on the traditional PC. I don’t know if this was part of HP’s original strategy when they bought Palm in April 2010 or added by the new CEO Léo Apotheker who joined HP in September 2010. If webOS boots faster than Windows 8, if HP gives the user a choice between webOS and Windows during boot, and if the user primarily uses the PC to access HTML5 web apps in the cloud – then perhaps users will choose to boot webOS instead of Windows and gradually stop using the legacy Windows platform. This would give HP huge leverage in future price negotiations with Microsoft. WebOS will be HP’s Trojan horse for breaking into Microsoft’s territory on the PC.

Windows 8 is a radical departure from the traditional desktop keyboard-and-mouse paradigm. Windows 8 comes with full touch screen support, and its UI theme (“Metro”) has a tiled look and feel that plagiarizes WP7 and most of the smartphone/tablet platforms. The traditional legacy desktop environment is hidden inside tiles. Windows 8 is built for app development in – surprise, surprise – HTML5. The same as webOS. If we disregard the support for legacy desktop apps, Windows 8 and webOS will be almost head-to-head competitors. And while webOS is optimized for web apps, Windows 8 has to carry huge legacy baggage to maintain backward compatibility.

Will one of the most insignificant smartphone platforms rise from the ashes and conquer the giant of the PC era? Don’t rule it out just yet.

How cross-platform tools can end the OS wars

This article has previously been published on VisionMobile’s blog.

The Android vs. iOS vs. Windows Phone platform battle has been the talk of the industry for the last year. But the market share battle between handset platforms might not be as critical for the industry as many believe.

A popular view in the industry is that the market is inevitably moving towards an Apple-Google duopoly. Apple’s app store has more than 400,000 apps. Android is growing quickly from a base of more than 250,000 apps and is predicted to catch up with Apple later this year. Nearly 80 percent of all apps in app stores are controlled by these two market giants according to Distimo. Figures for Q1 2011 from Gartner show that the market share in the smartphone market for iOS and Android combined is 53 percent and rising.

But the duopoly may be challenged by the mobile web and cross-platform tools. HTML5 empowers all other platforms to offer apps through the browser. VisionMobile’s recent Developer Economics report shows that the mobile web (of which HTML5 is a subset) is already the third most popular platform in terms of developer mindshare after Android and iOS.

At the same time, HTML5 is overhyped, and the belief that HTML5 will replace almost all native apps is in need of a reality check. Native apps will still offer richer functionality, better performance, and higher security compared to HTML5-based apps. One study has shown that every mobile WebKit implementation is slightly different, which could cause problems for HTML5-based apps. In a recent whitepaper, Netbiscuits measured smartphone support for 18 HTML5 features and showed that leading smartphones offer only partial (or no) support for a significant number of them. Implementation is also fragmented: what works on the iPhone will probably not work on RIM or Samsung handsets, and vice versa. Or to quote Forrester’s take on the HTML5 vs. native debate: “The ‘Apps vs. Internet’ Debate Will Continue…to be irrelevant”; “it’s not a question of ‘either/or’ when it comes to a choice between apps vs. the mobile Web, but both.”
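The practical consequence of this fragmentation can be made concrete: the set of HTML5 features an app can safely rely on is the intersection of what every target handset supports. A small sketch, with an invented support matrix (not Netbiscuits’ actual measurements):

```python
# Invented HTML5 feature-support matrix; not Netbiscuits' actual data.
SUPPORT = {
    "iphone":  {"canvas", "geolocation", "web_storage", "web_workers"},
    "rim":     {"canvas", "geolocation"},
    "samsung": {"canvas", "geolocation", "web_storage"},
}

def safe_features(devices):
    """HTML5 features an app can rely on across all target devices."""
    return set.intersection(*(SUPPORT[d] for d in devices))

print(sorted(safe_features(["iphone", "rim", "samsung"])))
# The app is reduced to what all three handsets have in common.
```

Every handset added to the target list can only shrink this set, which is why “write once, run anywhere” keeps collapsing into the lowest common denominator.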

The Landscape Of Cross-Platform Development Tools

The new types of cross-platform tools are more interesting than plain HTML5 because they can deliver higher performance and functionality than browser based HTML5. These tools produce apps as output and fall roughly into two categories:

1) Web apps/hybrid apps. These apps exploit the web engine (“web browser”) and are typically written in HTML/CSS/JavaScript.

2) Native apps. These apps are compiled into machine code and often written in C++ or similar languages.

Cross-platform tools are a nascent market with a flurry of startup activity over the last few years. The following diagram illustrates different trade-offs between complexity and performance in the cross-platform tools market.

Market segments for mobile cross-platform tools

Traditional websites: In the lower left corner is the traditional website, limited in performance but providing access to all platforms with no added complexity. Plain HTML5 could be included here once all browsers support the standard.

Web apps/hybrid apps: Adjacent in the diagram are HTML5 web apps that can be downloaded to the browser’s cache and run offline. They offer better performance at only slightly higher complexity. One step up in the diagram is a market segment of cross-platform tools that simulate a native experience. These tools deliver better performance, but the complexity is also higher if the tool has to support multiple platforms. Here we find tools that produce web apps built on HTML5/CSS3 and JavaScript, with some added native elements, typically inside a native wrapper. These cross-platform tools often add native extensions that provide access to some low-level native functionality. An example of a player in this market segment is PhoneGap, which is often used in tandem with the Sencha Touch framework. Other tools that run on top of PhoneGap are WorkLight and appMobi.

A closely related market segment is hybrid tools, where the HTML5/JavaScript input is translated into actual native source code. An example of a hybrid tool is Appcelerator’s Titanium.

Other types of solutions that fall under the main heading of web/hybrid apps are based on Java, Lua, ActionScript or less common languages. The diagram shows how the heavily fragmented Java ME offers inferior performance in spite of high complexity. The cross-platform tools Corona SDK and DragonRAD are based on Lua. Rhodes is based on HTML/Ruby, while OpenPlug uses ActionScript (Flash) as its source language. Kony uses drag-and-drop for building enterprise web apps. There is no reliable information about the performance/complexity trade-off for most of these solutions, so their exact positions in the diagram above should be viewed as illustrative. In general, tools in which the resulting code is compiled or recompiled to native ARM machine code will have higher performance.

Native apps: The second main category is native apps. In cross-platform tools for native apps, developers often work with a codebase in C/C++ or C# which is then semi-automatically ported to the target platform and device. Performance is significantly higher with native code, but so is the complexity. Players in this sector include Airplay, Qt and MoSync. The Airplay SDK (now Marmalade) originates in 3D gaming but can also be used as a general C++ cross-platform tool. Qt is a cross-platform UI framework that also can be used for native C++ porting. Qt primarily supports Nokia’s legacy platforms. MoSync is a cross-platform tool for general purpose C++ development, integrated with the Eclipse IDE and also available under an open source (GPL) license.

Cross-Platform Beyond Java – Native Extensions

The traditional approach to cross-platform development has been a lowest common denominator one – much like that taken by Java, Flash Lite and mobile HTML. This approach sacrifices performance, UI pizzazz and access to specific device features.

A workaround is to add native extensions. These can provide additional SDK/NDK libraries for the IDE and also give access to low level hardware functionality. Access to low-level hardware functionality can be managed by a device database that controls which conditional code will be executed on a given device.

Several of the cross-platform vendors have built such device databases with various levels of detail. A device database contains information on screen size, input modality and exact OS version, extending to detailed hardware configurations and known bugs with workarounds.

Using native extensions, it is possible to overcome the inherent limitations that plagued Java. Instead of “write once, run everywhere”, developers can spend 90 percent of their time developing a common codebase and 10 percent adding native tweaks and extensions for each platform and device. For software purists, the 90/10 solution might not seem very elegant, but it is a way forward that can handle the incredible complexity of thousands of devices running more than five OS platforms. In this way, app developers can manage one codebase and port it to target devices without losing functionality. In principle, a (C++) cross-platform engine with extensions should be able to offer similar functionality with a minimal performance penalty compared to direct development for the target device. There will be significant economies of scale when the common codebase is tweaked for hundreds of devices.
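The device-database mechanism described above can be illustrated with a small sketch. All device names, profile fields, and extension names below are invented for illustration; a real cross-platform engine would of course carry far richer profiles.

```python
# Hypothetical sketch of device-database-driven conditional code: the
# cross-platform engine looks up the device profile and selects which
# native extension to execute. All names below are invented.
DEVICE_DB = {
    "nokia-n8": {"os": "symbian", "camera_ext": "symbian_camera_ext"},
    "iphone-4": {"os": "ios", "camera_ext": "ios_avfoundation_ext"},
    "galaxy-s": {"os": "android", "camera_ext": "android_camera_ext"},
}

def select_camera_backend(device_id):
    """Pick the native camera extension for a device, or fall back to
    a lowest-common-denominator code path for unknown devices."""
    profile = DEVICE_DB.get(device_id)
    if profile is None:
        return "generic_fallback"
    return profile["camera_ext"]
```

A real device database would also record screen size, input modality, exact OS version, and known hardware bugs with workarounds, as described above.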

The Disruptive Potential Of Cross-Platform

There are few signs that platform fragmentation will disappear. It’s not just Android, iOS and Windows Phone 7, which are backed by corporate giants with deep pockets, but also smaller players like QNX (RIM), WebOS (HP), MeeGo (Intel, China Mobile) and Bada (Samsung). Add to that legacy platforms, which will be around for at least a few years: Windows Mobile, Blackberry OS, Symbian, BREW, Java ME and Flash. If we also include the main desktop platforms (Windows, Mac OS, Ubuntu), gaming consoles, set-top boxes, cars, and other gadgets, the number of platforms becomes unmanageable.

App developers whose clients need to reach the entire market face the formidable task of supporting all platforms and devices. If they can use a cross-platform engine, the productivity gains will be dramatic compared to paying for separate in-house dev teams for each platform.

Early adopters of cross-platform will most likely be large consumer businesses that need to target the mass market, such as media companies, games houses, entertainment companies, banks, and any brand developing B2C apps. Similarly, government agencies are often required to provide non-discriminatory access to their services, and cross-platform tools will enable them to do just that. Another group of early adopters of cross-platform tools is CIOs of larger corporations. They face increasing demand from senior staff who want to use their favorite smartphone for secure access to internal company data. Once these early adopters have driven down the prices and sorted out stability issues, we should expect to see a fast uptake of cross-platform tools in the mainstream app development market.

Assuming more developers move to cross-platform tools, the power distribution in the mobile sector will be challenged. The difference in the number of available apps between dominant and up-and-coming platforms will be reduced, allowing smaller platforms to compete on a level playing field.

Web apps and HTML5 should make the largest dent in the market power of traditional platforms. But the final nail in the coffin will come when C++ cross-platform engines can offer almost the same performance and functionality as coding directly on the target platform. This is possible if the cross-platform engines can fully integrate native platform and device extensions. In that case, developers of native apps might reconsider Android, iOS and WP7 and choose to code to a cross-platform IDE, not to the platform. In this scenario, the cross-platform IDEs would become players of equal or even greater importance than the native platforms. At the very least, today’s OS platform wars will move to a totally different level.

Comments can be left at VisionMobile’s blog.

Exorbitant data roaming as an n-person Prisoner’s Dilemma

One of the major hurdles for further growth of mobile data services, and LBS in particular, is the exorbitant data traffic fees customers incur when using their phones abroad.

Corporate users can pretend that the service is free but their employers certainly notice if the mobile Net bill is larger than the hotel bill. In the private market segment, price sensitivity is much higher. Prices have to come down for this market to reach its potential.

As long as the price elasticity of demand is larger than one, lower traffic fees will lead to such high growth in traffic volume that the total revenue increases, despite the lower prices. The entire industry would benefit from an agreement to remove most of the roaming charges, but that is not even on the horizon.
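The elasticity argument can be made concrete with a toy calculation under a constant-elasticity demand curve. The prices, volumes, and elasticity values below are purely illustrative, not real market data.

```python
def roaming_revenue(price, base_price, base_volume, elasticity):
    """Revenue under a constant-elasticity demand curve:
    volume = base_volume * (price / base_price) ** (-elasticity)."""
    volume = base_volume * (price / base_price) ** (-elasticity)
    return price * volume

# Illustrative baseline: 10 EUR/MB and 1 million MB of roaming
# traffic gives 10M EUR in revenue.
base = roaming_revenue(10, 10, 1_000_000, 1.5)

# Halving the price with elasticity 1.5: traffic volume almost
# triples and total revenue rises to roughly 14.1M EUR.
elastic = roaming_revenue(5, 10, 1_000_000, 1.5)

# With elasticity 0.5 the same price cut shrinks revenue to ~7.1M EUR.
inelastic = roaming_revenue(5, 10, 1_000_000, 0.5)

print(elastic > base > inelastic)  # True
```

The sketch shows the point in the text: with elasticity above one, the volume growth more than compensates for the lower unit price.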

A single operator could see the benefits of offering affordable data traffic abroad for its own customers. But as long as other operators don’t reciprocate, the industry will be stuck in a sub-optimal stalemate.

The problem is that the same operator who wants to offer an affordable rate to its own customers has no incentive to lower prices for other operator’s customers who travel into the operator’s coverage area.

It would not help much if two operators were to reach a bilateral agreement about lower roaming fees. In most countries there are three or more operators and as long as the handset connects to the strongest signal a bilateral agreement would just create a random patchwork of affordable and exorbitant fees. It would still be confusing, add uncertainty and deter usage.

Even if most major operators cooperated and formed a club with mutually lower prices, it would undermine the arrangement if only a few defectors refused to participate. The minority of defecting operators would be able to both reap exorbitant data rates from other operators’ customers as well as benefit from a larger total market and customers’ expectations of low prices.

For the operator market as a whole, the optimal end-state would be if all operators chose the strategy of cooperation, but the individually rational choice for each operator is to defect from any agreement. The result is that all operators are worse off in a sub-optimal state of mutual defection.

This can be modeled in game theory as an n-person Prisoner’s Dilemma. Here is a simplified numerical example with one operator (player A) playing against all the other operators (player B). Both players can choose between two strategies: Cooperate (charge affordable data roaming fees) or Defect (charge exorbitant data roaming fees). Player A is the interesting, active player in this game; Player B should be viewed more as a passive dummy.

If both players cooperate, they will each receive a payoff of 10, denoted as (10, 10) for Players A and B respectively (where 10 for Player B is the payoff for each operator in the operator pool). If Player A chooses to defect while B cooperates, the payoff will be (20, 9). The defecting player gains significantly and receives a payoff of 20, while Player B loses and only receives 9. If both Player A and B defect, the payoff will be (5, 5), which means that the total payoff to the players is significantly lower. On the other hand, if Player A is the only player that cooperates in an environment where Player B defects, the payoff will be (2, 6), and A only receives 2 (instead of 5). The table below shows the game in normal form.

                             Player B
                       Cooperate    Defect
Player A   Cooperate    10, 10       2, 6
           Defect       20, 9        5, 5

The payoff matrix shows the strategy of one operator (A) versus all other operators in the end states where either all choose to cooperate or all choose to defect. It is also possible to model the payoffs when the share of cooperating telcos moves from 0 to 100 percent (described here). Regardless of the share of operators that choose to cooperate vs. defect, the conclusion for Player A is the same. The dominant strategy for Player A is to defect, which makes it a classic Prisoner’s Dilemma for the active player. For Player B (all the other players) this is not a Prisoner’s Dilemma, as the dominant strategy is to cooperate even if one player (A) decides to defect. The problem is that for each individual operator it is a dominant strategy to defect, even though the common good would be maximized if everybody cooperated.

I don’t have a clear answer for how to escape from this sub-optimal state. A few years ago, European voice roaming showed a similar pattern until the EU Commission mandated a price cap and forced down the prices. If the industry can’t solve this themselves the politicians may intervene again, at least in Europe.

The industry might be able to handle this on its own if the major operators form a club and then exert strong peer pressure on the remaining operators. One way is to punish defecting operators with very unfavorable roaming deals or refuse roaming. However, that will leave the club’s customers with inferior network coverage. It could possibly also be considered anticompetitive if a cartel of market dominants bullied smaller operators.

To be effective and protect the customers from accidentally connecting to a network with exorbitant prices, an operator club probably needs to have a technical software solution installed on their customers’ handsets. This software would ensure that the handset only connects automatically to networks that are members of the club, even if a renegade network has a higher signal strength. This should be manageable for operator branded handsets but if customers just have SIM cards it will be complicated to mandate that they install an app that can control low-level functionality on almost any device.
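In principle, the club-aware connection policy could be as simple as the following sketch. The operator names are invented, and a real implementation would need low-level radio access that ordinary apps do not have.

```python
# Hypothetical club-aware network selection: prefer the strongest
# club-member network; connect to the strongest non-member network
# only if no club member is in range. Operator names are invented.
CLUB_MEMBERS = {"AlphaTel", "BetaCom", "GammaMobile"}

def pick_network(visible_networks):
    """visible_networks: list of (operator_name, signal_strength)."""
    club = [n for n in visible_networks if n[0] in CLUB_MEMBERS]
    candidates = club if club else visible_networks
    return max(candidates, key=lambda n: n[1])[0]

# A weaker club network beats a stronger renegade network...
print(pick_network([("AlphaTel", 40), ("RenegadeNet", 90)]))  # AlphaTel
# ...but with no club member in range, the handset still connects.
print(pick_network([("RenegadeNet", 90)]))  # RenegadeNet
```

The policy deliberately overrides the default "strongest signal wins" behavior described earlier, which is exactly what makes it hard to deploy on handsets the operators do not control.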

The Blackberry Playbook will be a gaming platform – the writing is on the tablet

Blackberry Playbook

An abbreviated version of this article has previously been published on the Technorati Technology Channel.

Now that the Blackberry Playbook has been released, it is fairly obvious that RIM is putting the building blocks in place for a future positioning of the Playbook as a gaming platform, most likely in their next model. After sifting through the last few weeks’ industry chatter and reviews of the Playbook, it seems that this connection has gone unnoticed.

The Playbook tablet received rather mixed reviews when it was released on April 19. Most reviewers liked the fast dual-core processor, the sleek design, the excellent stereo sound, and the crisp HD video. However, the reviewers were dissatisfied with the buggy software and the lack of available apps. Another thing they disliked was the need to connect with a Blackberry phone for native email, calendar etc. The most extensive reviews can be found here, here, here, and here. The impression of the Playbook is of an unfinished product that was rushed to market, though RIM promises upcoming free software upgrades.

Fair enough. But RIM is clearly positioning the Playbook for the enterprise market in this first iteration. The Playbook only has WiFi connectivity and to access the mobile networks or access your email app you need to establish a Bluetooth bridge with your Blackberry smartphone. This might seem like a clumsy solution but is actually a “CIO-friendly” move. Most corporate users in the target market already own a Blackberry smartphone. The corporate email will still reside inside the smartphone, with its very high security. If the connection is lost, no sensitive information remains on the Playbook. This would enable RIM to avoid the complexity of making the Playbook as secure a platform as the traditional Blackberry handsets right now. From a marketing perspective, it is also the right tactical move for RIM to get the tablet accepted as a dull product in the enterprise market before embarking on a gaming strategy.

There are a number of indications that the Playbook is designed to be a gaming platform. QNX, the new operating system RIM bought last year, is a fast and very stable real-time OS. In embedded systems where stability is absolutely critical, such as in cars, satellites, the military, medtech, and industrial equipment, QNX is an established market leader. QNX will easily compete with Android, iOS, and WP7 in terms of raw performance. And at the Blackberry developer conference last fall, the QNX founder Dan Dodge said: “The Playbook will be an incredible gaming platform for game designers”. One of RIM’s top executives called it a “party machine“.

RIM’s design choice to equip the Playbook with high-quality sound, HD video, and a fast dual-core processor also fits this strategy. Another strong sign of RIM’s commitment is the recently announced alliance with the two cross-platform game engine firms Ideaworks and Unity. Support for QNX is currently being developed, which will make it easy to quickly port the hundreds of games built on the Ideaworks SDK into the Playbook ecosystem.

Once Ideaworks has added QNX to their C++ cross-platform tool (Airplay SDK), I expect intense activity behind the scenes, followed by a splash launch of hundreds of fast games running native code on the Playbook when RIM releases the next model of the tablet. And I don’t think it is a coincidence that RIM chose the name Playbook.

For some time, bashing RIM and Blackberry has been popular among industry pundits. Their sagging market share is viewed as proof that it’s only a matter of time before Blackberry will be a dead platform. But don’t underestimate RIM.

With QNX and the recently acquired Swedish UI framework from TAT they are assembling an excellent technical platform. All they need now is a way to migrate apps into their ecosystem. Obviously, attracting apps is a top priority for RIM and they have chosen to use as many paths as possible to enter their new platform. QNX on Playbook supports Flash, AIR, and sandbox app players for Blackberry Java plus Android 2.3 apps. It also supports the cross-platform tools WebWorks (for HTML5/JavaScript), Airplay SDK and Unity 3 (for C++ games), and native SDK (for C++). Expect future Blackberry phones with QNX to be equipped in a similar way.

Another of RIM’s strengths is a strong foothold in two attractive market segments: corporate users and the 16-25 youth market, though RIM is not present in all geographical markets.

RIM’s stronghold in the corporate market is based on their seamless integration of secure email, while the youth market is driven by network effects from the instant messaging app BBM (Blackberry Messenger). In markets such as the UK and Indonesia where BBM has reached critical mass, it has become a must-have in the youth market segment. BBM has even surpassed SMS in popularity in this segment. You need a Blackberry to be part of the BBM messaging network.

Making gaming apps a high priority is a logical move for RIM, enabling them to attract the needed critical mass in the youth market segments in countries where Blackberry’s position is currently weak. It might even be the case that the smaller form factor of the Playbook (lower weight; a 7-inch screen vs. 10 inches for the competitors) is a way of making it easier for teens to carry it around. Even though the gaming platform strategy is viable on its own, it might just be a means to promote the real killer app for Blackberry – BBM.

The HTML5 hype – time for a reality check

The hype created by the promise of HTML5 has almost reached fever pitch during the last quarter. With HTML5/CSS3 it will be possible to run most types of applications directly in the “browser” and the need to install apps that execute native code will be a thing of the past. HTML5 can run from a cache in your smartphone/tablet/PC even if you are offline and the app can access the phone’s GPS, compass, accelerometer, touch recognition and native video/audio control. App developers will no longer need to develop separate native versions for iPhone, Android, WP, Blackberry, Samsung/Bada, and WebOS. Just write once in HTML5 and run everywhere.

There is truth in all this and HTML5 is a great technology. But as usual during the peak of inflated expectations people tend to forget the limitations. HTML5 is still an immature technology. The final draft will be finished in mid-2011 and the W3C recently stated that the formal standard decision will be delayed until 2014. When people actually start using HTML5 the experience will most likely be underwhelming as developers are faced with the limitations of the technology. (This view is supported by comments from industry conferences.) Older handsets will most likely not be able to run full HTML5 web apps, which kind of defeats the vision of universal access, at least for the near future.

Native apps will always offer better performance, better UI/UX, and better integration with the device hardware. For example, HTML5 does not support augmented reality. HTML5 will, over time, be able to close the gap but if we assume that the native app platforms continue to develop, the goalpost will be moving as well. Ecosystem owners (Apple, Google, etc.) will of course work to make their native development environment as pleasant as possible to work with. In addition, cross-platform tools in the native environment will reduce the effort of porting from one platform to the other.

What the market tends to forget is that the fundamental trade-off between standardization and flexibility will not go away. By complying with the HTML5 standard, handset makers and web app developers will be unable to differentiate outside the limits set by the standard. It is inevitable that one global standard will not be fully capable of adapting to a highly heterogeneous base of various screen sizes, handsets, tablets, etc. Once a committee-based standard is finalized, innovations and new product features that are introduced after that point will not be included until the next upgrade of the standard. Apps will be better at taking full advantage of device variation and new functionality.

I doubt vendors will be able to resist the siren song of differentiation. When they give in to this temptation, the evil twin of differentiation – fragmentation – will rear its ugly head. This fragmentation will either undermine HTML5 as a universal standard (which will make it less attractive in the same way as Java ME), or be expressed in the form of more native apps.

Another fundamental trade-off in software is between raw performance and developer convenience. Coding in high level languages is easier for less experienced developers and the pool of HTML/JavaScript developers is much larger than the number of experienced C++ developers. It will be cheaper and easier to develop in HTML5 but performance will have to be sacrificed. For less advanced applications this might be a good trade-off but it is a trade-off nonetheless. Efficient low level coding also translates into lower battery drainage, which is important for smartphones.

In many cases it will be better to have an HTML5 web app than no app at all. But if high performance is critical, native apps will be the obvious choice. Performance is not just relevant in obvious areas such as games. If users expect touch-based smartphone apps that don’t freeze when browsing or scrolling, it might be a critical competitive differentiator for all apps.

As pointed out by Forrester, this entire “either/or” scenario with HTML5 vs. apps is driven by vendor politics. I think another strong driver, though, is the large community of web developers who find C programming too hard. These web developers will gladly embrace HTML5 as a way to enter the mobile marketspace.

HTML5 is pushed by Google, Microsoft, Facebook and Apple. Continued browser dominance is critical for Google’s ad business. Microsoft and Apple want to kill Flash. Facebook wants universal access. But actions speak louder than words; Google is actively recruiting app developers and releasing more of their own services as apps. For example, Google Voice, Google Places, and now also Google Translate. (Google Earth is the most widely known example despite it having been around for years.) Today large players that quickly want to reach the market with their services are developing native apps when the functionality demands it, in spite of the pro-HTML5 rhetoric.

The Nokia Microsoft marriage: what was Elop thinking?

A week after the Nokia press conference last Friday, the industry still seems to be very conflicted about its view of the alliance. After reading most of the industry comments, I still can’t wrap my head around all the implications and come to a clear-cut conclusion. The pessimists view it as a disaster that will destroy Nokia and as a huge win for Microsoft/Windows Phone. The slightly more optimistic view is that the alliance was the best Nokia could do, given their weak position and downward spiral.

Nokia said they rejected using Android to avoid being commoditized and dependent on Google. With the MS alliance, Nokia stated that the smartphone market would be a three-horse race between MS/Nokia, Apple, and Android (conveniently forgetting RIM, Bada, and HP/WebOS). Nokia probably has a point. If Nokia had chosen Android as their smartphone OS, Android’s dominance would be inevitable, Google would emerge as the new evil empire, and the handset makers would wind up in an undifferentiated cut-throat competition with each other. Remember, Android is only open source for the developers. For the handset makers, it is a benign tyranny under Google. On the other hand, Android (and RIM) were ready to ship today, had Nokia chosen either of them as a partner. Windows Phone 7 is not yet fully developed, and Nokia will wait for the next release (codenamed Mango), planned for October, before shipping new models.

What I find incomprehensible is how Nokia could publicly kill Symbian before they have new WP-based models to offer the market. Sales of the legacy Symbian smartphones will most likely nosedive during the transition period. If Nokia had been more shrewd, they could have made a limited public statement now about adapting WP for smartphones in the US while maintaining their commitment to upgrading Symbian, but with a lower development budget. The extent of the full alliance with Microsoft could have been kept secret until October, when Nokia would “suddenly decide” to give up on Symbian as a smartphone platform and sign an extensive alliance with Microsoft.

However, there are a few slivers of hope for Nokia during this transition time. One lifeline is that some Nokia customers will buy a Symbian smartphone for the camera, UI, maps, email or other factors that are unrelated to the size of the app ecosystem. Another is that the powerful operators have a political interest in the success of the MS/Nokia alliance. If they can keep another ecosystem alive the bargaining power of Apple and Google/Android will be reduced. By subsidizing a few Symbian smartphones they can help to keep Nokia afloat. (This is most likely wishful thinking, from the chatter at MWC it seems that carriers are about to abandon Symbian handsets.) A third market segment is large corporate buyers with integrated backend systems based on Symbian. They are locked in and forced to continue buying Symbian smartphones until they have managed to integrate on another platform, most likely RIM or Android.

The massive criticism against Nokia’s strategy will be hard to overcome. It is exacerbated by emotional reactions and anti-Microsoft sentiments among developers and Nokia old-timers. If MS/Nokia can’t entice developers to join, the entire alliance will be a failure.

It is well-known that Symbian is a horrible platform for developers but I have hardly seen any comments about the developer environment in Windows Phone. Microsoft understands the importance of an attractive development environment. Developer oriented products such as Visual Studio, .Net, and C# show that they can actually get it right. The developer environment for WP will be Silverlight and XNA from Microsoft.

The only analysis of the WP development environment I have found is from a survey last summer by Vision Mobile. In that survey, developers rated early versions of WP as rather complicated to work with. The average time for developers to master WP was 9 months, significantly longer than Android (5 months) or iPhone (around 7 months). It was also considered problematic to create great UIs with the early version of WP7 from mid-2010. However, developers gave the IDE with the emulator/debugger in WP7 a high rating. This made it possible to code and prototype quite quickly.

If it is correct that WP7 is a completely rewritten OS, there is a chance that Microsoft has managed to dump their clumsy legacy from Windows CE and Windows Mobile. These outdated bloatware platforms were attempts to run full Windows programs on the mobile, which resulted in inferior performance, execution speed, and battery life. If Microsoft has learned their lesson, fine. But I am not convinced that Microsoft’s culture can execute and deliver efficient and lean code.

All in all, I think the announced strategy will be detrimental for Nokia. Nokia has killed Symbian and there is no going back. However, we can still speculate about Nokia’s alternative strategies. Nokia did actually have a viable transition strategy for Symbian. It was based on pushing Symbian down in the stack to hide it from ordinary developers. Nokia wanted to promote the application framework Qt (“cute”) as the development environment that hides Symbian. The cross-platform capabilities in Qt would make it seamless to migrate these apps to MeeGo at a later stage. Considering how clumsily Nokia executed their new strategy I think they would probably have been better off sticking to their old transition strategy.

If Nokia can survive 2011 and they manage to introduce attractive models based on WP in 2012, they still have a chance of a revival. But there are many ifs. Will the WP experience be compelling enough for developers and consumers? Will Nokia be able to manage development projects in an alliance with another organization? Can Nokia leverage their internal capabilities, or will the bloated internal bureaucracy slow down product development? Is Microsoft prepared to cooperate in good faith, or do they have the same hidden agenda as in the 1990s, entering strategic partnerships in bad faith? Can Microsoft unlearn their ingrained habits?

Even if the MS/Nokia handsets are somewhat uncompetitive on arrival, the operators might step in and save the venture with handset subsidies. Their agenda is to weaken Apple and Google/Android.

On the surface Nokia’s strategic move seems to be the equivalent of shooting themselves in the foot. However, Nokia’s top management and Elop are not idiots. There has to be some rationale that makes this move logical. At least from their point of view.

Perhaps the answer can be found in an extensive feature article in the business section of the Finnish newspaper Helsingin Sanomat from October 2010 (summary here). The article is a damning account of how Nokia lost its way after 2004, when an internal re-organization transformed the agile, product-focused company into a matrix organization with three divisions: Mobile Phones, Multimedia, and Enterprise Solutions. The divisions were encouraged to compete internally for staff and resources. This led to endless internal politics, infighting, and bureaucracy. 300 vice presidents, fighting with each other, ensured that innovation was stifled. The result was that Nokia was unable to exploit innovation and their strong internal capabilities. A number of foolish product design decisions followed. Starting in 2001, Nokia had developed Series 90, a touch-screen UI which should have been the basis for Nokia’s iPhone competitors today. But it was cancelled in 2005. In a later attempt to emulate the iPhone, they copied its worst aspects: Nokia phones that do not allow you to replace the battery, models with the micro-SD card reader removed, high-end Nokia phones without MMS because some Nokia product manager thought it was “an outdated technology”. The enterprise E series phones could not use the best imaging features because the consumer multimedia division owned the technology and refused access. In the same way, the N series models were denied MailForExchange and SIP functionality. The work on Symbian, Qt and MeeGo dragged on forever without visible results.

Once a large organization is bogged down by bureaucracy, it is very difficult to unravel. Perhaps Elop came to the conclusion that internal reorganization would be risky and too time-consuming. The teams working on Symbian, Qt and MeeGo had under-delivered for years, and Nokia’s senior management had no way of knowing if the under-performance was due to bureaucracy – or if they just were not up to the job. Elop had the choice of betting the future of the company on the internal software teams, or on Microsoft. His bet on Microsoft is a signal of his mistrust of Nokia’s capabilities in software today. (Maybe a new CEO with a strong background in software development would have come to another conclusion. But we will never know the answer to that question.)

I hope Elop’s next project will be to turn the internal organization on its head to unleash the excellent engineering competence and what is left of the aggressive, agile firm from the 1990s.

Finale at Startup

Last night the entrepreneurial training program Startup at STING/KTH Innovation had its finale with the thirteen selected teams presenting their business plans. The Startup program is a first step in starting your own company and you can apply even before you have a company set up. It is a way to test the viability of your idea before actually taking the big leap.

I think that the general level of the business plans was quite high. Some were built around potential breakthrough technologies, straight out of the research labs. A jury of investment managers from Industrifonden and Almi Invest named Johan Strömqvist and Evangelos Sisamakis, with a background in the Experimental Biomolecular Physics research group at KTH, as the winners of a 10,000 SEK cheque (€1,100). Congratulations! They have developed a method for screening pharma drug candidates that makes it possible to predict the likelihood that a drug candidate will be successful before clinical trials start.

The program started in October. We got excellent feedback from the participants; one business school student wrote that he learned more about running a business from the ten workshops at Startup than he did during his three years in business school. Thanks.

The next round of Startup is planned to start in April. You can apply here.

Ahonen: “This is the golden age of mobile”

The mobile guru and former Nokia executive Tomi Ahonen has released his new book ‘The Insider’s Guide to Mobile‘ as a free ebook for download. It is an excellent overview of the mobile industry and mobile market opportunity. Its tech evangelism style can be slightly annoying but he presents rather compelling arguments to back up his claims. For example, that there are 3.7 billion unique users of mobile, almost twice the number of Internet users (1.8 billion). With strong growth and a total market size of 1.1 trillion dollars, the mobile industry is one of the most attractive industries on the planet. Ahonen has identified 13 other industries worth 5 trillion dollars that will be disrupted by mobile in a tidal wave convergence during the next decade. Here is a quote which provides a good summary of the book:

“This is the golden age of mobile. It is the best economic opportunity of our lifetimes.”

Ahonen provides an important counterweight to the Net-centric evangelists on the US West Coast who don’t understand telecom and dismiss it as a dinosaur industry. For example, revenues from SMS alone, at 100 billion dollars, are almost as much as the combined size of the music industry (20 BUSD), Hollywood movies (25 BUSD), video gaming including consoles (40 BUSD), and all paid content on the Net (27 BUSD). MMS (which has been called a failure) has already grown to 31 billion dollars and is larger than the entire music industry.
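As a quick sanity check on those numbers (the figures are the ones Ahonen quotes; the script itself is just illustrative arithmetic), SMS alone comes within striking distance of the four content industries combined:

```python
# Back-of-the-envelope check of Ahonen's revenue figures (billions of USD).
content_industries = {
    "music": 20,
    "Hollywood movies": 25,
    "video gaming (incl. consoles)": 40,
    "paid content on the Net": 27,
}
sms_revenue = 100  # BUSD
mms_revenue = 31   # BUSD

combined = sum(content_industries.values())
print(f"Combined content industries: {combined} BUSD")  # 112 BUSD
print(f"SMS alone: {sms_revenue} BUSD, i.e. {sms_revenue / combined:.0%} of the combined total")
print(f"MMS alone is larger than music: {mms_revenue > content_industries['music']}")
```

So a single “boring” telecom service is roughly nine-tenths the size of music, Hollywood, gaming, and paid web content put together.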

While most web-centric players struggle for income with meager advertising as their main revenue source, mobile service providers get paid by their users and rarely have the same problem. One example is Real Madrid’s fan club, which charges 12 euros/month for its mobile service and has 100,000 paying users. Another example is the three major mobile social networks in Japan (Mixi, Mobage Town, Gree), each with revenues of around 250–350 million dollars. Even though these Japanese services can be accessed via the web, 76 percent of the users only use their mobile to reach them. Advertising is a minor part of revenues; virtual goods and virtual currencies are far more important.

Ahonen’s main message is that the “mobile internet” is not the PC-based web viewed through a phone. Mobile services are something different: they take advantage of unique features of mobile that can’t be replicated on the PC-based web. On the mobile handset, identification, messaging, and a payment system are already built in. The handset is always with you, and interaction is much faster. The mobile can be used with one hand, which is impossible with a 3G-enabled laptop or netbook. Another example is picture sharing via MMS, which is far more seamless than connecting your camera to the PC and uploading the pictures. Several mobile services are impractical or almost impossible to replicate on the PC-based internet: scanning a bar code in a store with your camera phone, for example, to get a list of alternative vendors for the product you are interested in.

Even though advanced, sexy apps on smartphones are impressive, Ahonen points out that only 13 percent of the mobile population has a smartphone. The big numbers and huge potential are in the “boring” SMS and MMS services. The average SMS user globally sends 100 SMS/month. Voice traffic is falling due to cannibalization by SMS, and 13 percent of mobile users have stopped placing outgoing voice calls entirely. MMS is already used by 1.7 billion people. His advice is that if your company is in an industry that needs to reach the mass market (banks, retailers, airlines, etc.), the first step today should be to develop SMS-based services. After that, go for MMS, WAP, XHTML (“phone browsers”), downloadable Flash/Java/Brew, and smartphone apps, in that order.

Another area where Ahonen goes against industry consensus is his disbelief in Location Based Services. His point is that most users are seldom lost, and if they are abroad, punitive roaming charges deter usage. Users are also very reluctant to let others track their position. A few years ago Disney launched a “family friendly” MVNO where parents could locate their children with a child tracker. The Disney phone instantly became toxic among kids and teenagers, and the phones were conveniently forgotten at home, leaving parents to believe the kids were there doing homework. Disney shut down the service in 2007.

I find his perspective refreshing, in particular when he extols services that are considered “uncool” by the Netheads in Silicon Valley. It is an easy and almost entertaining read; I highly recommend this book.

Tomi Ahonen’s blog: Communities Dominate Brands

Revisiting Blue Ocean Strategy – still a fad

Recently, I reread the 2005 book “Blue Ocean Strategy”. I was unimpressed when I first read it, and a second reading only reinforced that impression. The book is mostly a compilation of existing models and theories mixed with common-sense insights. However, the authors deserve credit for the compelling tagline “Blue Ocean Strategy”. Who wouldn’t want to sail into a large, untapped new market – a blue ocean?

The problem is that their advice on how to reach the blue ocean is generic. There is no new theory or model derived from empirical research, as there is in Disruptive Innovation (Christensen), Crossing the Chasm (Moore), or Competitive Strategy (Porter). The book’s main assertion is far from groundbreaking: find an untapped market, understand customer needs, redefine industry boundaries. Pretty simple.

However, their advice is sensible. For example, look for what they call Value Innovations, which is another way of saying that your innovation has to be relevant and valuable to your customers. It should not be technological brilliance for its own sake.

They also present a model for rethinking your current business. The first step is to identify the relevant performance factors that define the competitive landscape. These factors are used to draw up a strategic canvas, in which your own offer (“value curve”) is plotted alongside your competitors’. The next step is to look for ways to change the offer to the customers by applying what they call the Four Actions.

For each factor, consider whether it is possible to 1) eliminate a factor that the industry takes for granted, 2) reduce a factor well below the industry standard, 3) raise a factor well above the industry standard, or 4) create a new factor that has never been offered in the industry. Summaries of the rest of the book can be found at Slideshare, here, here, here, and here.
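To make the framework concrete, here is a toy sketch of the Four Actions applied to a strategic canvas. The factors and the 0–10 scores are invented for illustration; they are not from the book.

```python
# Toy strategic canvas: factor -> (our score, industry average) on a 0-10 scale.
# All factors and scores below are hypothetical illustrations.
canvas = {
    "price":                  (4, 7),  # below the industry average
    "in-flight service":      (0, 6),  # dropped entirely
    "departure flexibility":  (9, 3),  # well above the industry average
    "door-to-door concierge": (7, 0),  # never offered in the industry before
}

def four_actions(canvas):
    """Classify each factor according to the Four Actions framework."""
    actions = {}
    for factor, (ours, industry) in canvas.items():
        if industry == 0 and ours > 0:
            actions[factor] = "create"     # a factor the industry has never offered
        elif ours == 0:
            actions[factor] = "eliminate"  # a taken-for-granted factor, removed
        elif ours < industry:
            actions[factor] = "reduce"
        elif ours > industry:
            actions[factor] = "raise"
        else:
            actions[factor] = "keep"
    return actions

print(four_actions(canvas))
```

The point of the exercise is simply that eliminating and reducing factors frees up cost, which funds the factors you raise or create – the same trade-off the strategic canvas is meant to visualize.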

An example is the way NetJets redefined the market for corporate jets. Before NetJets entered the market, business travelers only had the choice between owning a corporate jet and buying first/business class tickets. When NetJets introduced fractional aircraft ownership, client companies could reserve a corporate jet from the NetJets pool at short notice, just like a car pool. NetJets could thereby offer the convenience of owning a private aircraft at the cost of commercial airline tickets. The figure below illustrates the strategic canvas for NetJets compared to commercial airlines and corporate jet ownership.

As usual in popular management literature, the authors collect success cases and present them as examples of blue ocean strategies. That is, after a strategy has proven successful, the authors squeeze it into their model and claim it is a Blue Ocean Strategy. This sometimes borders on the ridiculous, for example when they use the successful 1994 turnaround of the NYPD by the new police commissioner Bill Bratton as an example of a Blue Ocean Strategy. The case description of how the NYPD radically improved performance on the same budget is inspiring, but the lessons concern leadership and overcoming resistance to change, not a Blue Ocean Strategy. I take issue with the way the authors stretch the term to label everything successful a Blue Ocean Strategy. It is sloppy thinking.

Internet Discovery Day at Skandia’s old HQ on Sveavägen

Time to revive the blog. Grädde Invest held an “un-conference-y” event and mingle at Skandia’s old HQ on Sveavägen, a few days before the landlord Diligentia began a total internal renovation to transform the building into the office of the future.

The event was very “garage”. Entrepreneurs who wanted to exhibit were given marker pens and a large sheet of paper to tape on the walls. During the event there were short presentations about the Net and startup entrepreneurship.

Thanks Johan Jörgensen at Grädde for a fun event and valuable networking.