The discussions of why and how technologies catch on never cease, and they all share the same premise: that the future is hard to foresee. I disagree with them all: it’s very easy to predict technological success. If you know exactly how.

Start with this little remark by Steven Chu, US Energy Secretary, stating the necessary conditions for the success of electric vehicles: “A rechargeable battery that can last for 5,000 deep discharges, with 6–7x higher storage capacity (3.6 MJ/kg = 1,000 Wh/kg) at 3x lower price, will be competitive with internal combustion engines (400–500 mile range).” First, a real exercise in honesty for a government official: I hope it meant that no subsidies were given to sub-optimal technological proposals. But more importantly, he offered quantifiable pre-conditions for the acceptance and diffusion of an emergent technology, mixing technical variables with economic ones.
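As a quick sanity check on Chu’s numbers (the per-mile consumption figure is my own assumption, not his), a back-of-the-envelope conversion:

```python
# Chu's capacity target: 3.6 MJ/kg expressed in watt-hours per kilogram.
target_mj_per_kg = 3.6
wh_per_kg = target_mj_per_kg * 1e6 / 3600  # 1 Wh = 3600 J
print(wh_per_kg)  # 1000.0

# Hypothetical pack sizing for the quoted 400-500 mile range, assuming
# ~0.25 kWh/mile consumption (my assumption, a typical sedan figure).
kwh_per_mile = 0.25
pack_kwh = 450 * kwh_per_mile                 # ~112 kWh for a 450-mile range
pack_mass_kg = pack_kwh * 1000 / wh_per_kg    # battery mass at the target density
print(round(pack_kwh, 1), round(pack_mass_kg, 1))
```

At the target density, the whole pack weighs on the order of a hundred kilograms, which is what makes the range requirement plausible.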

This line of thought reminded me of some of the most brilliant annotations in Edison’s notebooks (Notebook Nº3, pages 106–108; Notebook Nº6, pages 11–12; Notebook Nº9, pages 40–42): he combined cost considerations, to reduce the amount of copper and the price of high-resistance filaments, with scientific reasoning based on Ohm’s and Joule’s laws, to guide his experimentation in the quest for better designs of a full electrical system, not just the light bulb.

It’s that easy: mix technical variables with supply-demand analysis, some microeconomics, and much attention to discontinuities in the marginal propensity to consume in the face of technological change. And this is why pitches to VCs are always so wrong and boring: almost no attention to key economic considerations, and full of reasoning by analogy.

Like children, always solve labyrinths by starting at the exit: we learn so early that the end is the beginning.

 

Since the very beginning of the software industry, it has always been the same: applying the most innovative ways of lowering the friction costs of software adoption is the key to success, especially in winner-takes-all markets and platform plays.

From the no-cost software bundled with the old mainframes to the freeware of the ’80s and the free-entry web applications of the ’90s, the pattern is clear: good old pamphlet-like distribution to spread software as if it were the most contagious of ideas.

It comes from the realization that the cost of learning to use a piece of software is much higher than the cost of its licenses; or that it’s complementary to more valuable work skills; or that the expected future value of owning the network created by its users is higher than that of selling the software itself. Nevertheless, until recently little care was given to reasoning from first principles about the tactics and strategies of software distribution for optimal adoption, so the only available information was practitioners’ anecdotes with no verifiable statistics, let alone a corpus of testable predictions. So it’s refreshing to find and read about these matters from a formalized perspective:

Download (PDF, 1.23MB)

The most remarkable result of the paper concerns the very realistic scenario of random spreading of software, with limited control and visibility over who gets the demo version. For that case, an optimal strategy is offered, along with conditions under which the optimal price is not affected by the randomness of the seeding: being able to identify and distribute to the low-end half of the market is enough for optimal price formation, since the price will depend on the number of distributed copies and not on the seeding outcome. But with multiple pricing and full control over the distribution process (think registration-required freemium web applications), the optimal strategy is to charge non-zero prices to the higher-end half of the market, in deep contrast with the single-digit percentage of paying customers in real-world applications, which suggests that too much money is being left on the table.
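A toy simulation (my own illustration with an invented valuation distribution, not the paper’s model) makes the first result tangible: if free demo copies only ever land in the lower half of the market, the revenue-maximizing single price comes out the same no matter which low-end consumers happen to be seeded:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed market: 2000 consumers whose valuations are skewed toward zero,
# so the revenue-maximizing price sits well above the median valuation.
valuations = rng.uniform(0, 1, 2000) ** 2
median_v = np.median(valuations)
low_end = np.flatnonzero(valuations < median_v)  # candidates for free copies
prices = np.linspace(0.01, 1.0, 200)             # candidate price grid

def optimal_price(seeded):
    """Best single price given which consumers already got a free demo."""
    buyers = np.delete(valuations, seeded)       # seeded users won't buy
    revenue = [p * np.sum(buyers >= p) for p in prices]
    return prices[int(np.argmax(revenue))]

# Five different random seedings of 400 free copies, all within the low end.
results = {optimal_price(rng.choice(low_end, 400, replace=False))
           for _ in range(5)}
print(results)  # a single price: the seeding outcome doesn't matter
```

Because no seeded consumer values the product above the median, demand at every price above the median is identical across draws, and that is where the optimum lives; only the number of seeded copies, not their destination, moves the price.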

 

As a follow-up to my previous post on software patents, here is a very interesting and recent survey on the economics of patents.


 

Holding a contrarian view can carry great benefits: there has always been a need for a procedure that protects innovative software from reverse engineering while allowing its appropriation and exclusionary use, and that precludes any imitation of its functionality, implementation details aside. This procedure exists: it’s the proverbial patent, a negative right, temporary and exclusionary, protecting novel intellectual property in exchange for publishing enough information to replicate it. In some sense, it’s the only open window for ordinary citizens to introduce their own personal legislation, a strong general-purpose tool, however double-edged it may be.

The frontal and highly popular opposition to software patents transcends the software world: it can be found in past centuries and for other technologies. The most cited case, and therefore the one most full of hyperbole, is the decades-long delayed adoption of the steam engine due to the Machiavellian use of the rights conferred by one of its patents.

Regarding current practices, a detailed study of the descriptive statistics of software patents shows that they have been the fastest-growing category for decades, though software companies have not been granted many of them, because the biggest filers are other industries that are similarly intensive in their use of IT capital but also have a strong record of filing strategic patents. Note also that in the absence of strong patent rights, custom and common sense have encouraged the use of copyright protection (which also does not require giving up any source code), even though it’s a far weaker protection: in theory the two are complementary, but their actual use is substitutive, because whenever one of them is weakened, the other gets used much more.

From a purely economic point of view, studies show a statistically significant increase in the stock value of software-patent-owning companies, and patents happen to be a mandatory prerequisite for entering markets in which the incumbents already own strongly interdependent patent portfolios. Contrary to general opinion and practice, their use in software start-ups is, overall, positive: they increase the number of financing rounds and their amounts, as well as survival rates and valuations in case of acquisition. From a strategic point of view, software patents raise barriers to entry, acting as a deterrent against the kind of competition that merely chases profits without sinking any costs into the search for technological advances: in short, a 10% increase in the number of patents entails a reduction in competition of between 3 and 8 percent. And even if their valuation is a very complex matter, the intrinsic value of software patents is higher than that of other patents.

In practice, the biggest burden in granting and defending them is the search for prior art: even assuming the inventor operates in good faith and with full cooperation, he can’t get access to all the prior art, because most software is developed internally for companies and never sees the light of day. This gives rise to a great number of non-innovative and trivial patents, and to others that ignore prior art on purpose, a matter that can be difficult to settle (e.g. Apple’s multi-touch, Amazon’s 1-click). Fortunately, malicious uses don’t fare well in the courts: the strategies of the so-called patent trolls, those who operate within the letter of the law but against its spirit, using the patent system to extract rents from others who use patents for productive purposes, aren’t successful in the long term, even though they impose a very high economic cost. Only their fair use brings a true premium to business valuations: that is, building a good patent portfolio without entering into practices of dubious ethics, like filing patents only to cross-license with competitors and avoid paying them for their intellectual property.
The fastest way to begin learning how to write software patents is to start with this set of documents. And since real learning only happens from the best sources, here are lots of noteworthy software patents, examples to follow for their high degree of inventiveness and the business volume they backed and helped generate:

  • The first practical algorithm to solve linear programming problems.
  • DSL, the technology that allowed for the cheap diffusion of broadband connectivity.
  • Pagerank, the famous patent that laid the foundations of Google; it also covers the method for quickly computing the rankings of indexed pages.
  • Lempel-Ziv-Welch, the well-known algorithm for lossless data compression.
  • In the cryptography field, the RSA and ECC algorithms, at the core of public key cryptography.
  • The beginnings of digital sound would not have been possible without the DPCM method or FM sound synthesis, and the patents of MP3 compression are dispersed across many companies.
  • In the storage field, it’s curious how RAID was invented a decade before its definition as a standard.
  • Regarding hardware, we shall not forget the patents awarded for the transistor (Shockley and Bardeen), and modern magnetic storage, enabled by the GMR phenomenon.
 

Entrepreneurs are already well versed in the intricacies of startup financing and the pernicious effects it may have on their companies: down-rounds, dilution, preferred stock and stock with different voting rights, among others. For that reason, they plan ahead and try their best not to find themselves caught in stalemates and catch-22 situations with no possible resolution.

But what I find fascinating is the lack of thought VCs exhibit in their term sheets, driven more by custom and plain imitation than by economically rational design. The case is most notorious in the disregard of debt instruments (convertible debt and notes), whose advantageous properties for the entrepreneurial side of the investment are widely known, but whose equally valuable properties for the other side of the equation are not.

The detailed study of the financing structures of tech startups is a puzzling experience in the negation of received wisdom from classic corporate finance, especially the Myers-Majluf theorem: given a project within a startup, with positive or negative NPV, where the founders know the project’s NPV with very high certainty but outsiders do not, then, ceteris paribus, the founders may not invest in a positive-NPV project if outside equity must be issued to finance it, because the value of the project may go to the new shareholders at the expense of the earlier ones. That is, asymmetric information imposes an agency cost on current shareholders if the startup issues equity, but not if it issues debt. This straightforward result is key to explaining low start-up survival rates across financing rounds since, in light of the entrepreneur’s full lifecycle, it’s perfectly rational to prefer letting the current startup go bankrupt and starting a new one around the positive-NPV project if the cost of issuing new equity is so high, avoiding any pivot in the process.
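A minimal numerical sketch of the Myers-Majluf logic (the figures are invented for illustration): a founder who knows the firm is the good type refuses to fund a positive-NPV project with pooled-price equity, because the dilution costs the old shareholders more than the project is worth:

```python
# Two equally likely firm types; only insiders know which one they are.
assets_good, assets_bad = 150.0, 50.0
cost, payoff = 100.0, 110.0            # project: invest 100, get 110 (NPV = +10)

# Outsiders price new shares off the average type (pooling).
avg_value_if_both_issue = (assets_good + payoff + assets_bad + payoff) / 2
stake_sold = cost / avg_value_if_both_issue   # fraction new investors demand

# Good type's old shareholders: issue-and-invest vs. do nothing.
value_if_invest = (1 - stake_sold) * (assets_good + payoff)
value_if_pass = assets_good
print(round(value_if_invest, 1), value_if_pass)  # 136.2 150.0
```

Despite the +10 NPV, the good type is better off passing (about 136 vs. 150), so positive-NPV projects die when equity is the only option; debt priced near face value does not transfer that value to outsiders.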

And then, by Green’s theorem (from corporate finance, not from calculus), convertible debt, not straight debt, would be the ideal instrument: if the startup can choose investment levels across projects with different risks, and outsiders don’t know the relative scale of those investments, then, ceteris paribus, current shareholders bear an agency cost if the startup is financed only by straight debt, a cost that can be avoided by issuing convertible debt.
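Again a toy illustration with invented numbers: under straight debt, equity holders prefer a value-destroying gamble (risk-shifting), while a conversion option claws back enough of the upside to restore their incentives:

```python
face = 100.0                            # debt face value
safe = [(1.0, 120.0)]                   # safe project: 120 for sure
risky = [(0.5, 200.0), (0.5, 0.0)]      # risky project: E[V] = 100 < 120

def equity_value(project, conversion=0.0):
    """Expected payoff to equity; bondholders convert into a `conversion`
    fraction of the firm whenever converting beats collecting the debt."""
    total = 0.0
    for prob, value in project:
        if conversion * value > min(face, value):   # bondholders convert
            total += prob * (1 - conversion) * value
        else:                                       # bondholders take face
            total += prob * max(value - face, 0.0)
    return total

# Straight debt: equity prefers the risky, value-destroying project.
print(equity_value(safe), equity_value(risky))  # 20.0 50.0

# Convertible debt (90% conversion ratio): the safe project wins again.
print(round(equity_value(safe, 0.9), 2),
      round(equity_value(risky, 0.9), 2))       # 12.0 10.0
```

The 90% conversion ratio is just an example value large enough to flip the preference; Green’s point is precisely that the conversion option caps what equity holders can gain by shifting into risk.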

These pecking order results hold even with stock options and without asymmetric information or managerial firm-specific human capital (see Stock Options and Capital Structure), so I wonder how many decades it will take for practice to meet theory… if they dare!

 

The latest IPOs of tech companies like LinkedIn, Yandex and RenRen have reignited the never-ending debate about valuations and the fear of another tech bubble, even if most tech stocks are cheaper than before the dot-com bust. But this time we have the masterful studies Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages and Tech Stock Valuation: Investor Psychology and Economic Analysis, providing us with tons of empirical data from previous bubbles. Or even better, real-time theories of asset-bubble formation, like the Jarrow-Kchia-Protter-Shimbo theory put to the test in the following paper:

Download (PDF, 470KB)

This time is different.

 

What will the mobile application space look like in the future? Which proven strategies from the past will still provide an edge? And which strategic levers should be pulled to generate exorbitant profits?

To solve the mobile conundrum and peer into the future with the wisdom of the past, I’ve been collecting economic data from the most diverse sources to estimate regressions (IRLS, LAD) of profit (quarterly and/or annual) on different software categories (desktop, web, mobile) and their features. In other words, the most obvious analysis that nobody has ever carried out.
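For readers unfamiliar with the estimators named above, this is the idea behind LAD fitted via IRLS, sketched on synthetic data (the data here is invented; the point is that LAD resists the gross outliers that plague noisy profit figures):

```python
import numpy as np

def lad_irls(X, y, n_iter=50, eps=1e-8):
    """Least-absolute-deviations regression via iteratively
    reweighted least squares: weight each point by 1/|residual|."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting point
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # weighted normal equations
    return beta

rng = np.random.default_rng(42)
n = 300
x = rng.uniform(0, 10, n)
X = np.column_stack([np.ones(n), x])                 # intercept + one feature
y = 2.0 + 3.0 * x + rng.normal(0, 1, n)              # true coefficients: 2, 3
y[:15] += 60.0                                       # a few gross outliers

ols = np.linalg.lstsq(X, y, rcond=None)[0]
lad = lad_irls(X, y)
print(ols.round(2), lad.round(2))  # LAD stays near (2, 3); OLS is dragged off
```

The same machinery extends directly to many regressors, which is what the feature table below feeds into.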

The following are stylized initial results, omitting exact coefficients but showing their size and direction (* = statistically significant):

                                                   DESKTOP    WEB        MOBILE
Total Addressable Market                           +          ++         +
User Base Size                                     +(*)       +++(*)     ++(*)
Development Sunk Costs                             ++
Latency tolerant                                   ++         –          –(*)
BUSINESS MODEL VARIABLES
License fee                                        +          –          –(*)
Maintenance fees                                   +++(*)     ?
Versioning                                         ++         ?
Bundling                                           ++         ?          ?
CPM/CPC                                                       +++(*)     +
Targeting Quality                                  ?          ++         +
Use Time per User                                  ?          ++(*)      ++(*)
DEMAND-SIDE ECONOMIES OF SCALE (NETWORK EFFECTS)
Bandwagon effect                                   +          ++(*)      ?
Standard setter                                    ++         +++        ?
Linkage / interoperability                         ++         ?
SWITCHING COSTS
Data/file lock-in                                  ++         +          ?
Job/skill effects                                  ++(*)      ?          ?
Learning/training effects                          ++
Incumbency effect                                  ?          ++

R^2 = 0.66, sample size = 352 (includes the most important and best-known programs per category)

Focusing on the larger and statistically significant coefficients, the data reveals the different nature of each software category:

  • Desktop applications: the most profitable strategy is to develop broadly used programs with a low initial price but higher maintenance fees, and with a significant impact on the labor market. Don’t make programs, revolutionize professions.
  • Web: very high-scale, ad-monetized applications with major network effects; the result of the open nature of the web, with its hyper-linking structure across domains, and the absence of micropayments.
  • Mobile software is a yet-to-be-determined mixture of desktop and web applications: like desktop software in its technical architecture, but evolving more like the web due to incumbency effects from web companies and the lack of switching costs and traditional network effects.

More insights in future posts from this and other data sources.

Data sources: Yahoo Finance, Crunchbase, Wakoopa, RescueTime, Flurry, Admob, Distimo, Alexa, Quantcast, Compete, others.

 

Every tech entrepreneur faces the same dilemma: raise as much capital as needed to grow as fast as possible, while avoiding dilution. The tradeoff was quantified by Noam Wasserman in a study of 460 startups, showing that, in general, refusing to relinquish control only harms the business and the entrepreneur himself:

The Rich-vs-King Tradeoff

Even so, entrepreneurs, hopeful creatures by nature, keep searching for whatever hack gives them an edge against dilution. But the biggest hack is, and always will be, perfect time-to-market skills, the entrepreneurial quality par excellence. And whilst founders usually manage to keep 5-15% of the shares after IPO, some do much better, e.g. Bill Gates (40.2%), Pierre Omidyar (30%) and Larry Ellison (27.5%). Another interesting case is Google, where the founders keep control using Class B shares with super-voting rights (10 votes per share) to offset the fact that each of them only managed to keep 13.4% of the company; and Netscape, where Jim Clark kept 25.5% of the shares vs. Marc Andreessen’s 2.6% (maybe it has something to do with Clark keeping only 3% of Silicon Graphics, his previous startup, after its IPO). But it’s outside the tech industry where we find the quirkiest scheme to keep control: Kamprad’s IKEA.
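To see how far super-voting shares stretch a minority economic stake, a quick back-of-the-envelope computation (assuming, for simplicity, that all remaining shares are single-vote Class A, which ignores other insiders’ holdings):

```python
# Google-style dual-class arithmetic: Class B carries 10 votes per share.
founders_econ = 2 * 0.134          # two founders, 13.4% of the shares each
votes_per_b, votes_per_a = 10, 1

founder_votes = founders_econ * votes_per_b
other_votes = (1 - founders_econ) * votes_per_a
voting_power = founder_votes / (founder_votes + other_votes)
print(f"{founders_econ:.1%} of the equity -> {voting_power:.1%} of the votes")
```

Under these assumptions, a 26.8% economic stake translates into well over three quarters of the votes, which is why the structure survives round after round of dilution.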

 