Since the very beginning of the software industry, it has always been the same: finding the most innovative ways of lowering the friction costs of software adoption is the key to success, especially in winner-takes-all markets and platform plays.
From the no-cost software bundled with the old mainframes to the freeware of the ’80s and the free-entry web applications of the ’90s, the pattern is clear: good old pamphlet-like distribution to spread software as if it were the most contagious of ideas.
The realization is always the same: the cost of learning to use some software is much higher than the cost of software licenses; or the software is complementary to more valuable work skills; or the expected future value of owning the network created by its users is higher than that of selling the software itself. Yet until recently, little care has been given to reasoning from first principles about the tactics and strategies of software distribution for optimal adoption, so the only available information is practitioners’ anecdotes with no verifiable statistics, let alone a corpus of testable predictions. So it’s refreshing to read about these matters from a formalized perspective:
The most remarkable result of the paper concerns the very realistic scenario of random spreading of software with limited control and visibility over who gets the demo version. For that case, an optimal strategy is offered, with conditions under which the optimal price is not affected by the randomness of seeding: being able to identify and distribute to the low-end half of the market is enough for optimal price formation, since the price depends on the number of distributed copies and not on the seeding outcome. With multiple pricing and full control of the distribution process (think registration-required freemium web applications), the optimal strategy is to charge non-zero prices to the higher-end half of the market, in deep contrast with the single-digit percentage of paying customers in real-world applications, which suggests that too much money is being left on the table.
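The flavor of the charge-the-top-half result can be seen in a toy simulation of my own (uniform valuations and a median split are my assumptions, not the paper’s model): seed free copies to the low-valuation half of the market, then grid-search a single price over the remaining users.

```python
import numpy as np

# Toy model (illustration only): consumer valuations uniform on [0, 1].
# Free demo copies go to the low-valuation half; a single price p is
# charged to everyone else. Revenue = p * share of remaining users
# whose valuation is at least p.
rng = np.random.default_rng(1)
v = rng.uniform(size=100_000)      # simulated consumer valuations
payers = v[v >= np.median(v)]      # top half: not seeded, may buy

prices = np.linspace(0.01, 0.99, 99)
revenue = [p * np.mean(payers >= p) for p in prices]
best = prices[np.argmax(revenue)]
print(round(best, 2))  # the grid optimum lands near 0.5 in this setup
```

In this toy setup the optimal price is pinned down by the size of the seeded group (here, half the market), not by which particular low-end users received the free copies, which is the intuition behind the seeding-robustness result described above.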
What will the mobile application space look like in the future? Which proven strategies from the past will still provide an edge? And which strategic levers should be considered to bring exorbitant profits?
To solve the mobile conundrum and peer into the future with the wisdom of the past, I’ve been collecting economic data from the most diverse sources to estimate regressions (IRLS, LAD) of profit (quarterly and/or annual) on different software categories (desktop, web, mobile) and their features. In other words, the most obvious analysis that nobody has ever carried out.
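For concreteness, here is a minimal sketch of the kind of estimator involved: LAD (least absolute deviations) regression fitted by IRLS, run on synthetic data since the underlying dataset is not public. The variable names and toy coefficients are mine, not from the actual analysis.

```python
import numpy as np

def lad_irls(X, y, n_iter=50, eps=1e-8):
    """Fit a least-absolute-deviations regression via IRLS.

    Each iteration solves a weighted least-squares problem with
    weights 1/|residual|, which drives the fit toward the L1 solution
    and makes it robust to heavy-tailed profit outliers.
    """
    X = np.column_stack([np.ones(len(y)), X])      # add intercept
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)       # downweight large residuals
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

# Synthetic stand-in data: "profit" driven by two stylized features,
# with heavy-tailed (Laplace) noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.laplace(scale=0.3, size=200)
beta = lad_irls(X, y)
print(beta)  # recovers roughly [1.0, 2.0, -0.5]
```

LAD is a natural choice here because a handful of blockbuster programs would dominate an ordinary least-squares fit.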
The following are stylized initial results, omitting exact coefficients but showing their size and direction (* for statistically significant):
| Variable | Desktop | Web | Mobile |
|---|---|---|---|
| Total Addressable Market | + | ++ | + |
| User Base Size | + (*) | +++ (*) | ++ (*) |
| Development Sunk Costs | ++ | – | – |
| Latency tolerant | ++ | – – (*) | – |
| **Business model variables** | | | |
| License fee | + | – | – – (*) |
| Use Time per User | ? | ++ (*) | ++ (*) |
| **Demand-side economies of scale (network effects)** | | | |
| Linkage / interoperability | – | ++ | ? |

R² = 0.66, sample size = 352 (includes the most important and best-known programs per category)
Focusing on the larger and statistically significant variables, the data reveals the different nature of each software category:
- Desktop applications: the most profitable strategy is to develop broadly used programs with a low initial price but higher maintenance fees, and with a significant impact on the labor market. Don’t make programs; revolutionize professions.
- Web: very-high-scale, ad-monetized applications with major network effects, the result of the open nature of the web, with its hyperlinking structure across domains, and the absence of micropayments.
- Mobile software is a yet-to-be-determined mixture of desktop and web applications. The category resembles desktop software in its technical architecture, but its evolution more closely tracks that of the web, due to incumbency effects from web companies and the lack of switching costs and traditional network effects.
More insights in future posts from this and other data sources.
Data sources: Yahoo Finance, Crunchbase, Wakoopa, RescueTime, Flurry, Admob, Distimo, Alexa, Quantcast, Compete, others.
Let’s imagine that the desktop OS market starts from scratch, with no strings attached: neither lock-in effects nor switching costs; zeroed learning-curve effects, reset economies of scale and scope, and absent network effects; just free entry without any of the preceding oligopolistic and monopolistic market structures.
Wouldn’t it be wonderful if we could find a natural experiment, not a thought one, in which to observe the resulting market shares? Could we know how they would most likely rank?
Yes, we can. It’s the mobile OS market!
- Android (Linux)
- iOS (Apple)
- Windows Phone/Mobile (Microsoft)
In other words, the desktop OS market, in reverse order.
These graphs show when carriers might expect to see costs exceed revenues, based on a new Tellabs study. Currently, stock markets don’t reflect these predictions, with forward P/E ratios at about 10.
Important assumptions of the underlying model are: sevenfold traffic growth by 2015 for voice and data combined, with a revenue decline per gigabyte of 80–85%; and data transport using only GSM/3G technologies (HSPA/HSPA+), since LTE will not be widely deployed by 2015. There are also some questionable assumptions: a flat-rate pricing model (telcos will lobby their way out of this trap) and a high percentage of data offloading onto indoor networks, a key assumption of the model that remains a big unknown.
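The headline arithmetic can be checked on the back of an envelope (the index-number framing is mine, not the study’s): sevenfold traffic growth combined with an 80–85% decline in revenue per gigabyte leaves total revenue roughly flat while the cost of carrying traffic scales up.

```python
# Index the base year to 1.0 and apply the study's two headline figures:
# traffic grows 7x by 2015, revenue per gigabyte falls 80-85%.
traffic_growth = 7.0
for rev_per_gb_decline in (0.80, 0.85):
    revenue_index = traffic_growth * (1.0 - rev_per_gb_decline)
    print(f"decline {rev_per_gb_decline:.0%}: revenue index {revenue_index:.2f}")
# Total revenue ends up at roughly 1.05-1.40x the base year, while the
# traffic carried (and hence network cost) is 7x -- which is how costs
# can come to exceed revenues on the model's horizon.
```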
Mobile telcos will experience massive profit compression in the future, redefining a value chain that has existed for almost two decades: network-equipment and mobile-terminal manufacturers with almost zero profit margins, while MNOs enjoyed high margins. Profits are migrating toward new smartphone services, but that’s a story for another post.
Android is in exponential growth, as shown in the next graph.
But remember, it’s a platform only 2.5 years old, so let early PC history be our guide:
And the final result of that era of computing is summarized in the following graph, showing total market share as percentages.
And even though prediction markets are being used within corporations to peer into the future of these questions, they are of no use for making public predictions about future platform market shares: given their very low volumes and the enormous cash on the balance sheets of the tech companies involved, the results would be too easy to subvert to ever be trusted.
So beware! Keep your development options open!