Category Archives: tech history

Ancient code, long shadows

The smallest of all seeds, 

becomes the largest of garden plants:

it grows into a tree, 

and birds come and make nests in its branches.

The Net is full of open-source code, and it seems that nobody cares about its preservation. If some old programs were to be preserved for the future, what would you choose? My legal choices are as follows:

Liability for Software in the Cloud

Of the different theories under which a software maker could be sued (strict liability, negligence, criminal, intentional tort, fraud, negligent misrepresentation, malpractice, …), the most relevant and most often used are strict liability and negligence: the first applies to defective products, while negligence is more suitable for services. In past decades, most software was characterized as a product: COTS and shrink-wrap products clearly are, and even custom-developed programs are products whose support services may be covered by a contract separate from the software license. These distinctions came from a time when traditional manufacturers were inflicting serious negative externalities on clients, while those of services were of little importance: much has been written about the need to impose strict liability without fault on software as a way to improve responsibility and quality, transferring the full cost of negative externalities to software companies. But this theory of liability has rarely been applied to software products, the truth being that the destructive potential of software is quite low except for medical devices, which are regulated by other provisions: strict liability covers unexpected and significant harm, and that is a rare event in software programs.

Forcing strict liability on programs would put many small software vendors out of business, and open source would simply disappear: as Alex Tabarrok notes, this is what happened in the aviation industry when manufacturers found that they could be sued over any aircraft they had ever produced. Only the lifting of these liabilities for old planes revitalized the industry, with the unintended consequence that the end of manufacturers’ liability was associated with a significant reduction in the probability of an accident, the opposite of what the former regulations intended. Moral hazard was small but pervasive, even in the face of death.

But SaaS and cloud computing are changing the software landscape: these really are services, so the negligence standard clearly applies. It is surely the standard that best balances the interests of the parties: the cloud is full of SLAs, indeed. And even though these guarantees are not as strong as the standard of strict liability, I wonder how much moral hazard their proliferation will introduce: nowadays, the excuse that the cloud provider is the root cause of a system failure is getting more and more common.

The Politics of Network Protocols

One of the most important protocol switchovers was carried out 30 years ago: the ARPANET stopped using NCP (Network Control Protocol) to use only TCP/IP, as the righteous Jon Postel devised in The General Plan. NCP was a fully connection-oriented protocol, more like the X.25 suite, designed to ensure reliability on a hop-by-hop basis. The switches in the middle of the network had to keep track of packets, unlike the connectionless TCP/IP, where error correction and flow control are handled at the edges of the network. That is, intelligence moved to the border of the network, and packets of the same connection could be passed between separate networks with different configurations. Arguably, the release of an open-source protocol stack implementation under a permissive license (4.2BSD) was a key component of its success: code is always a better description than any protocol specification.

Yet TCP/IP was still incomplete: after the 1983 switchover, many computers started connecting to the ARPANET, and bottlenecks due to congestion were common. Van Jacobson devised the Tahoe and Reno congestion-avoidance algorithms to slow down data transfers and stop senders from flooding the network with packets: they were quickly implemented in the TCP/IP stacks of the day, saving the Net to this day.
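
To make the mechanism concrete, here is a minimal sketch of the slow-start and congestion-avoidance logic Jacobson introduced, with a Tahoe-style reaction to loss; window sizes are counted in segments and the loss events are purely illustrative, not a faithful reproduction of any particular stack.

# Toy model of TCP Tahoe-style congestion control: slow start,
# congestion avoidance and the reaction to a detected loss.
# Window sizes are in segments; the loss events are illustrative.

def tahoe_trace(losses, rounds=20, init_ssthresh=16):
    cwnd, ssthresh = 1.0, float(init_ssthresh)
    trace = []
    for rtt in range(rounds):
        if rtt in losses:
            # Loss detected: remember half the current window and
            # restart from a single segment (Tahoe behaviour).
            ssthresh = max(cwnd / 2, 2.0)
            cwnd = 1.0
        elif cwnd < ssthresh:
            cwnd *= 2      # slow start: exponential growth per RTT
        else:
            cwnd += 1      # congestion avoidance: additive increase
        trace.append((rtt, cwnd, ssthresh))
    return trace

if __name__ == "__main__":
    for rtt, cwnd, ssthresh in tahoe_trace(losses={9, 15}):
        print(f"RTT {rtt:2d}  cwnd={cwnd:5.1f}  ssthresh={ssthresh:5.1f}")

The point of this additive-increase/multiplicative-decrease shape is that every sender backs off sharply on loss and probes upward slowly afterwards, which is what keeps a shared network out of congestion collapse.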

These changes were necessary, as they allowed the Internet to grow to a global scale. Another set of changes, as profound as those, is now being discussed in the Secure Interdomain Routing mailing list: this time the culprit is the insecurity of BGP, since route announcements are not authenticated, and the penance is enforcing a PKI onto the currently distributed, decentralized and autonomous Internet routing system. Technical architectures force a predetermined model of control and governance, and this departure from the previously agreed customs and conventions of the Internet may simply be a bridge too far, as always, in the name of security. And the current proposals may even impact the Internet’s scalability, since the size of the required Resource Public Key Infrastructure may be too large for routers to handle, as the following paper from Verisign shows:

Download (PDF, Unknown)

On the other hand, this recent analysis shows that the security design of S-BGP is of very high quality, a rare thing in the networking field indeed:


Download (PDF, 909KB)
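
To give a flavour of what the proposed machinery actually adds, here is a minimal sketch of RPKI-based route-origin validation, classifying an announcement as valid, invalid or not-found against a set of ROAs; the ROA entries and AS numbers below are invented purely for illustration.

# Minimal sketch of RPKI route-origin validation: a route announcement
# (prefix, origin AS) is checked against a set of ROAs and classified
# as valid, invalid or not-found.  The ROA data is made up for illustration.
import ipaddress

ROAS = [
    # (authorised prefix, maximum length, authorised origin AS)
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix_str, origin_as):
    prefix = ipaddress.ip_network(prefix_str)
    covered = False
    for roa_prefix, max_len, roa_as in ROAS:
        if prefix.subnet_of(roa_prefix):          # some ROA covers this prefix
            covered = True
            if origin_as == roa_as and prefix.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

if __name__ == "__main__":
    print(validate("192.0.2.0/24", 64500))    # valid
    print(validate("192.0.2.0/24", 64999))    # invalid (wrong origin AS)
    print(validate("203.0.113.0/24", 64500))  # not-found (no covering ROA)

Even this simplified check hints at the governance shift discussed above: whoever controls the ROA database effectively controls which announcements the routers will trust.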

How the Future Dwells in the Past

Techno-utopians got it wrong: their tireless search for new technologies must start in the past. Most new technologies are just a rehash of past ones, and most resources are devoted to maintaining existing technological infrastructure or to incremental advances in old technologies: the new and innovative is extraordinarily rare. A fact as ignored as it is intuitive, since most human needs have always been the same.

Mandelbrot’s “[amazon_link id=“0716711869” target=“_blank” ]The Fractal Geometry of Nature[/amazon_link]” summarizes this line of thinking as the Lindy effect: the future survival of any Broadway show is best predicted by how long it has been running already. It is itself based on a much older assertion that “the future career expectation of a television comedian is proportional to his past exposure” (The New Republic, June 13th 1964). Thus, a statistical distribution that extends beyond the arts to other phenomena like the survival of technologies: the longer a technology has been in use, the longer we should expect it to last; or, more empirically, we may conclude that every year a technology survives can even double its remaining life expectancy, contrary to the life expectancy of any living being. An insight that warns us against expecting miracles when introducing new technologies without any precedent.
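
The claim is easy to check numerically: for a Pareto-distributed lifetime (here with tail exponent alpha = 2, a purely illustrative choice), the expected remaining life of a survivor grows roughly in proportion to the age already reached, which is exactly the Lindy pattern described above.

# Lindy effect, numerically: draw Pareto-distributed lifetimes and check
# that the expected *remaining* life grows with the age already survived.
# For alpha = 2 the theoretical remaining life equals the current age;
# the estimates are noisy because the distribution is heavy-tailed.
import random

random.seed(42)
alpha = 2.0
lifetimes = [random.paretovariate(alpha) for _ in range(500_000)]

for age in (1, 2, 4, 8):
    survivors = [t for t in lifetimes if t > age]
    mean_remaining = sum(t - age for t in survivors) / len(survivors)
    print(f"age {age:2d}: {len(survivors):7d} survivors, "
          f"mean remaining life ~ {mean_remaining:.1f}")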

This next paper is the only one I could find that combines this effect with other power laws to try to ascertain the economic returns of basic research, and so the optimal level of investment:

Download (PDF, Unknown)

Commemorating Computational Complexity

Fifty years ago, two researchers, Juris Hartmanis and Richard Stearns, started writing this seminal paper, kicking off the research field of computational complexity, the core of computer science, which now counts tens of thousands of publications.

Although the concepts and ideas underlying the paper were not new, as a letter by Kurt Gödel shows us, it was the foundational moment for a field that still produces deep, beautiful and practical results: in the last decade alone, Williams’ lower bound for non-uniform circuits, the Agrawal-Kayal-Saxena primality test that placed primality testing in P (although the Miller-Rabin and Solovay-Strassen primality tests still live strong, since they are much faster than AKS), or Vassilevska Williams’ improved bound on the matrix multiplication exponent.
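
To see why Miller-Rabin remains the workhorse in practice, here is a compact sketch of the randomized test: a handful of modular exponentiations per candidate, each round cutting the error probability by at least a factor of four, versus the far heavier polynomial running time of AKS.

# Miller-Rabin probabilistic primality test: each random base either
# proves n composite or leaves it "probably prime"; k rounds push the
# error probability below 4**-k.
import random

def is_probable_prime(n, k=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # this base witnesses compositeness
    return True

if __name__ == "__main__":
    print(is_probable_prime(2**61 - 1))   # True: a Mersenne prime
    print(is_probable_prime(2**61 + 1))   # False: composite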

I see tons of start-ups and projects fail because they ignore the most basic algorithmic prudence: that only sub-logarithmic algorithms should be exposed to the mass public is one of the ignored maxims of the computer industry, one that can only be learned by properly interpreting the absence of market offerings refuting it (e.g. regular expressions within search engines’ queries, which could run in exponential time; or all the AI promises about solving optimization problems that were never delivered).
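
The regular-expression example is easy to reproduce on any backtracking engine, Python’s re module included: a pathological pattern such as (a+)+$ against a string that almost matches roughly doubles its running time with every extra character, which is exactly why no public search box dares expose unrestricted regular expressions.

# Catastrophic backtracking: matching time roughly doubles with every
# extra character on a backtracking engine such as Python's re module.
import re
import time

pattern = re.compile(r"(a+)+$")

for n in range(14, 23, 2):
    text = "a" * n + "b"          # almost matches, forcing full backtracking
    start = time.perf_counter()
    pattern.match(text)           # always fails, but only after ~2**n steps
    elapsed = time.perf_counter() - start
    print(f"n={n:2d}  {elapsed:.3f} s")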

Speaking with some researchers lately, I heard them express the hope that the coming end of Moore’s law would vindicate this field, making its results much more relevant: and although they are surely right that proportionally more resources will be directed towards these ends, they failed to consider that the likely crash following this event may also reduce the total, aggregated opportunities.

Actually, Computers Are Free

In a recent conversation with a friend, she lamented the hard time she was having justifying the deployment of thousands of tablets within a company, especially given their high price. But it’s really the opposite: computers pay for themselves in a very short time, no matter their form factor. A fact in direct contrast with Solow’s old productivity paradox.

Just a little research to prove this: take the welfare gain of computers, measured by their compensating variation, that is, the amount of income a consumer would have to give up in order to attain the level of utility that would have been realised if computers had never been invented. The most recent results show that it is 3.8–4% of total consumption expenditure: in other words, >$1500 per year in a first-world country (you can also play with the Matlab code behind this model!)

Download (PDF, 140KB)

A high sum that completely justifies their price, but a low one compared with the compensating variation of the Internet (26.8%) or that of electricity (92%), i.e. no one would live without electricity. And even though the variation of the Internet is much higher than that of computers, and so is its contribution to economic growth, computers are General Purpose Technologies absolutely necessary to access the Internet, thus complementary to it, and its compensating variation cannot be separated from theirs.
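
To make the magnitudes concrete, here is the back-of-the-envelope conversion of those shares into yearly dollar figures; the shares are the ones cited above, while the annual consumption expenditure of $40,000 per person is my own illustrative assumption.

# Rough conversion of compensating-variation shares into dollars per year.
# The expenditure figure is an illustrative assumption for a first-world
# consumer; the shares are the ones quoted in the text.
annual_consumption = 40_000          # USD per person, assumed

shares = {
    "computers (3.8-4%)": 0.04,
    "Internet (26.8%)":   0.268,
    "electricity (92%)":  0.92,
}

for name, share in shares.items():
    print(f"{name:22s} ~ ${share * annual_consumption:,.0f} per year")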

Success is Easily Predictable

The discussions of why and how technologies catch on never cease, and they always have in common that they are based on how difficult it is to foresee the future. I disagree with them all: it’s very easy to predict technological success. If you know exactly how.

Start with this little remark by Steven Chu, US Energy Secretary, stating the necessary conditions for the success of electric vehicles: “A rechargeable battery that can last for 5000 deep discharges, 6–7x higher storage capacity (3.6 MJ/kg = 1,000 Wh/kg) at 3x lower price will be competitive with internal combustion engines (400–500 mile range).” First, a real exercise in honesty for a government official: I hope it meant that no subsidies were being given to sub-optimal technological proposals. But more importantly, he offered some quantifiable pre-conditions for the acceptance and diffusion of an emergent technology, mixing technical variables with economic ones.
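
The numbers are worth a quick sanity check: 3.6 MJ/kg is exactly 1,000 Wh/kg, and under an assumed consumption of about 250 Wh per mile and an assumed ~150 Wh/kg for the batteries of the day (both my own illustrative figures, not Chu’s), the targets translate into battery masses a car can actually carry.

# Back-of-the-envelope check of Chu's battery targets.
# 1 Wh = 3,600 J, so 3.6 MJ/kg = 1,000 Wh/kg.
# The consumption and current-density figures are illustrative assumptions.
target_wh_per_kg = 3.6e6 / 3600      # = 1,000 Wh/kg
today_wh_per_kg = 150                # assumed energy density of the era
wh_per_mile = 250                    # assumed mid-size EV consumption

for miles in (400, 500):
    pack_wh = miles * wh_per_mile
    print(f"{miles} mile range -> {pack_wh / 1000:.0f} kWh pack: "
          f"{pack_wh / target_wh_per_kg:.0f} kg at the target density, "
          f"{pack_wh / today_wh_per_kg:.0f} kg at ~{today_wh_per_kg} Wh/kg")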

This line of thought reminded me of some of the most brilliant annotations in Edison’s notebooks (Notebook Nº3, pages 106–108; Notebook Nº6, pages 11–12; Notebook Nº9, pages 40–42): he combined cost considerations, to reduce the amount of copper and the price of high-resistance filaments, with scientific reasoning using Ohm’s and Joule’s laws, to guide his experimentation in the quest for better designs of a full electrical system, and not just the light bulb.
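
That reasoning can be replayed with nothing more than Ohm’s and Joule’s laws: for a lamp of fixed power, a higher-resistance filament means a higher operating voltage and a lower current, and since distribution losses go as I²R, the copper cross-section needed to hold losses at a given level falls with the square of the voltage. A sketch with made-up but plausible numbers:

# Why Edison wanted high-resistance filaments: for a fixed lamp power,
# higher filament resistance -> higher voltage, lower current -> far less
# copper needed in the mains for the same distribution loss (Joule: I^2 * R).
# All figures below are illustrative assumptions.
RHO_COPPER = 1.7e-8       # ohm*m, resistivity of copper
DENSITY_CU = 8960         # kg/m^3
LINE_LENGTH = 2 * 100     # m, out-and-back run of one hundred metres

def copper_mass(lamp_power_w, filament_ohms, loss_fraction=0.1):
    voltage = (lamp_power_w * filament_ohms) ** 0.5    # V = sqrt(P * R)
    current = lamp_power_w / voltage                   # I = P / V
    allowed_line_r = loss_fraction * lamp_power_w / current**2
    area = RHO_COPPER * LINE_LENGTH / allowed_line_r   # A = rho * L / R
    return area * LINE_LENGTH * DENSITY_CU             # kg of copper

for r in (2, 140):        # low- vs high-resistance filament, ohms
    print(f"filament {r:3d} ohm -> {copper_mass(100, r):7.1f} kg of copper per lamp")

The roughly 70x reduction in copper between the two filaments is exactly the kind of system-level economics, rather than lamp-level physics, that Edison was optimizing for.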

It’s that easy: mix technical variables with supply-demand analysis, some micro-economics and much attention to discontinuities in the marginal propensities to consume in the face of technological change. And this is why pitches to VCs are always so wrong and boring: almost no attention to key economic considerations and full of reasoning by analogy.

Like children, always solve labyrinths by starting at the exit: so early we learn that the end is the beginning.

Towards Optimal Software Adoption and Distribution

Since the very beginning of the software industry, it’s always been the same: applying the most innovative ways of lowering the friction costs of software adoption is the key to success, especially in winner-takes-all markets and platform plays.

From the no-cost software bundled with the old mainframes to the freeware of the ’80s and the free-entry web applications of the ’90s, the pattern is clear: good’n’old pamphlet-like distribution to spread software as if it were the most contagious of ideas.

It comes down to the realization that the cost of learning to use some software is much higher than the cost of its licenses; or that it’s complementary to some more valuable work skills; or that the expected future value of owning the network created by its users would be higher than that of selling the software itself. Nevertheless, until recently little care has been given to reasoning from first principles about the tactics and strategies of software distribution for optimal adoption, so the only available information is practitioners’ anecdotes with no verifiable statistics, let alone a corpus of testable predictions. So it’s refreshing to find and read about these matters from a formalized perspective:


Download (PDF, 1.23MB)

The most remarkable result of the paper is that, in the very realistic scenario of random spreading of software with limited control and visibility over who gets the demo version, an optimal strategy is offered along with conditions under which the optimal price is not affected by the randomness of seeding: just being able to identify and distribute to the low-end half of the market is enough for optimal price formation, since its determination will depend on the number of distributed copies and not on the seeding outcome. But with multiple pricing and full control of the distribution process (think registration-required freemium web applications), the optimal strategy is to charge non-zero prices to the higher half of the market, in deep contrast with the single-digit percentage of paying customers in real-world applications, which suggests that too much money is being left on the table.
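
As a toy numerical illustration of that logic (my own simplification, not the paper’s model): assume uniform standalone valuations, a linear network effect from the installed base, free copies seeded to the low-valuation half of the market, and a grid search over a single price charged to the rest.

# Toy freemium pricing model, for illustration only (not the paper's model):
# valuations are uniform on [0, 1], willingness to pay is scaled by a linear
# network effect from the installed base, the low-valuation half is seeded
# with free copies, and a single price for the rest is found by grid search.
import random

random.seed(0)
N, BETA = 2_000, 0.5                    # market size, network-effect strength
values = sorted(random.random() for _ in range(N))
seeded, candidates = values[: N // 2], values[N // 2:]

def revenue(price, rounds=10):
    installed = len(seeded) / N         # seeded users adopt for free
    buyers = 0
    for _ in range(rounds):             # iterate adoption to a fixed point
        buyers = sum(1 for v in candidates if v * (1 + BETA * installed) >= price)
        installed = (len(seeded) + buyers) / N
    return price * buyers

best_revenue, best_price = max((revenue(p / 100), p / 100) for p in range(1, 151))
print(f"optimal single price ~ {best_price:.2f}, revenue ~ {best_revenue:,.0f}")

Even this crude version reproduces the qualitative result: the seeded half contributes only through the network effect, and the revenue-maximizing price for the other half is comfortably above zero.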

How Boundaries in Software Rise and Change

There is no complete theory of what features should be included within a software product, or which should be left out. It’s more a question of complex technical trade-offs and product-roadmap decisions taken in response to market competition, a mixture of interests and consequences that sets the present boundaries of software products. For instance, a detailed analysis of the evolution of operating systems reveals a rich history of shifts and reversals: on one hand, the operating system supplier must offer equal and free access to as many services as possible to the ecosystem of software developers, with the intent of avoiding code redundancies for basic functionality, so that the platform benefits from the various network effects (two-sided, direct and indirect), while taking into account that bloating the kernel with too much code may increase maintenance and support costs to extraordinary degrees; on the other hand, too small a kernel also reduces lock-in. The best starting point for understanding the different forces and interests that come into play is the following paper:

Download (PDF, Unknown)

Another, less strategic, viewpoint considers the question of the correct place for a given functionality (the exact processor ring, kernel or user mode, and/or whether it should be a library provided by the OS) a matter of custom and efficiency: for example, the need to get rid of the latency of user-mode calls into the kernel has always pushed more functionality into it, together with satisfying the demand for a basic security model. But these are just examples of the multiple balances that have to be struck, and what’s really interesting is to search for the non-obvious trade-offs and how they have given rise to different cultures of computing. Consider the case of the graphical interface and the window system: in the Unix tradition, a minimalistic approach was taken in that most of it is kept out of the kernel and even relegated to third parties (the X Foundation), while the Windows tradition opted for the opposite approach, including the window manager and the GDI framework in kernel space; the consequences are long-lasting, even disastrous in the security space, like how any X client can dump the contents of an arbitrary window, including the ones it did not create. Also note that the precise breakdown of the barrier between user and kernel mode varies over time: traditionally kernel-mode components are being moved into user-mode processes, like User-Mode Scheduling for threads and the User-Mode Driver Framework. But in the end, the truth is that the forces shaping the design of operating systems tend to be more of an economic than a technical nature, as exemplified in the discussions collected in the following paper:

Download (PDF, 483KB)
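
The latency argument above is easy to observe from user space: even a trivial system call costs far more than an in-process function call, which is the standing temptation to move hot-path functionality into the kernel. A rough measurement sketch (absolute numbers vary widely by platform and kernel mitigation settings):

# Rough comparison of an in-process function call against calls that must
# cross into the kernel.  Absolute numbers depend heavily on the platform.
import os
import timeit

def noop():
    return 0

N = 200_000
for label, stmt in [
    ("pure Python call      ", "noop()"),
    ("os.getpid() (syscall) ", "os.getpid()"),
    ("os.stat('/') (syscall)", "os.stat('/')"),
]:
    secs = timeit.timeit(stmt, globals=globals(), number=N)
    print(f"{label}: {secs / N * 1e9:8.0f} ns per call")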

Software Patents: Just Use Them Well

Holding a contrarian view can bring great benefits: there has always been a need for a procedure to protect any innovative software from reverse engineering, one that at the same time allows for its appropriation and exclusionary use and precludes any imitation of its functionality, implementation details aside. This procedure really exists; it’s the proverbial patent: a negative right, temporal and exclusionary, to protect any novel intellectual property in exchange for publishing enough information to replicate it. In some sense, it’s the only open window for ordinary citizens to introduce their own personal legislation, a strong general-purpose tool, however double-edged it may be.

The frontal and so popular opposition to software patents transcends the software world: indeed, it can be found in past centuries and for other technologies; for example, the most cited case, and therefore the one most full of hyperbole, is the decades-long delayed adoption of the steam engine due to the Machiavellian use of the rights conferred by one of its patents.

Regarding current practices, a detailed study of the descriptive statistics of the use of software patents shows that they have been the fastest-growing category for decades, though software companies have not been granted many of them, because the biggest filers are other industries similarly intensive in their use of IT capital but also with a strong record of filing for strategic patents. Note also that in the absence of strong patent rights, custom and common sense have encouraged the use of copyright protection (which also does not require giving up any source code) even if it’s a far weaker protection: in fact, the two are complementary, but their actual use is substitutive, because whenever one of them is weakened, the other gets used much more.

From a purely economic point of view, studies show a statistically significant increase in the stock value of software patent-owning companies, and patents happen to be a mandatory prerequisite to enter markets in which the incumbents already own strongly interdependent patent portfolios. And contrary to general opinion and practice, their use in software start-ups is, overall, positive: they increase the number of financing rounds and their size, as well as survival and valuation in case of acquisition. From a strategic point of view, software patents raise barriers to entry, acting as a deterrent against the kind of competition that just follows profits without investing in any kind of sunk costs in the search for technological advances: in short, an increase of 10% in the number of patents entails a reduction in competition of between 3 and 8 per cent. And even if their valuation is a very complex matter, the intrinsic value of software patents is higher than that of other patents.

In practice, the biggest burden in their granting and defence is the search for prior art: that is, even under the assumption that the inventor is operating under the principles of good will and full coöperation, he can’t get access to the full prior art, because most software is developed internally for companies and never sees the light of day. This gives rise to a great number of non-innovative and trivial patents, and to others that ignore the prior art on purpose, a matter which can sometimes be difficult to settle (e.g. Apple’s multi-touch, Amazon’s 1-Click). Fortunately, malicious uses don’t fare well in the courts of justice: the strategies of the so-called patent trolls, those who operate within the letter of the law but against its spirit and use the patent system to extract rents from others who put patents to productive use, aren’t successful in the long term, even though they impose a very high economic cost. Only their fair use brings a true premium to business valuations, that is, building a good patent portfolio that does not enter into practices of dubious ethics, like filing for patents that only pursue cross-licensing with competitors to avoid paying them for their intellectual property.
The fastest way to begin to learn how to write software patents is to start with this set of documents. And since real learning only happens from the best sources, there are lots of noteworthy software patents, examples to follow for their high degree of inventiveness and the business volume that they backed and helped generate:

  • The first practical algorithm to solve linear programming problems.
  • DSL, the technology that allowed for the cheap diffusion of broadband connectivity.
  • PageRank, the famous patent that laid out the foundations of Google; it also covers the method to quickly compute the rankings of indexed pages (see the sketch after this list).
  • Lempel-Ziv-Welch, the well-known algorithm for lossless data compression.
  • In the cryptography field, the RSA and ECC algorithms, at the core of public key cryptography.
  • The beginnings of digital sound would not have been possible without the DPCM method or FM sound synthesis, and the patents of MP3 compression are dispersed across many companies.
  • In the storage field, it’s curious how RAID was invented a decade before its definition as a standard.
  • Regarding hardware, we shall not forget the patents awarded for the transistor (Bardeen, Brattain and Shockley) and for modern magnetic storage, enabled thanks to the GMR phenomenon.
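
As a reminder of how simple the core of the PageRank idea is, here is a minimal power-iteration sketch on a tiny, made-up link graph; the damping factor of 0.85 is the value used in the original paper, everything else is illustrative.

# Minimal PageRank by power iteration on a tiny, made-up link graph.
# The damping factor (0.85) matches the original paper; every page here
# has out-links, so dangling-node handling is skipped.
DAMPING = 0.85

links = {                       # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],                 # nobody links back to d
}

def pagerank(links, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - DAMPING) / n for p in pages}
        for page, outs in links.items():
            share = rank[page] / len(outs)
            for out in outs:
                new[out] += DAMPING * share
        rank = new
    return rank

if __name__ == "__main__":
    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")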

Modelling How Technological Change Influences Economic Growth

"You can see the computer age everywhere but in the productivity statistics." Robert Solow (1987 Nobel Prize in Economics) summed up his celebrated "productivity paradox" with this harsh remark, which started a research frenzy to find counterexamples refuting it. It took more than a decade, because there is a strong inter-relationship between information technologies and the human capital that they are at the same time complementing and substituting for, but in the end the paradox could be discredited. Furthermore, another profound change, with much more evidence against the paradox, occurred in parallel to the wide expansion of computer technology, and it was also easier to measure and prove: the global spread of the digital mobile phone. To get a better understanding of its true economic impact, nothing is better than summing up the relevant literature on the topic.

From a purely microeconomic perspective, Jensen was able to prove that the introduction of mobile phones increased the profits of North Kerala’s fishermen by a whopping 8%, while reducing the final consumer price by 4%: better communications enabled access to wider markets, expanding the trading possibilities beyond those offered by the local fish market and enhancing overall market efficiency via a stronger law of one price. From a macroeconomic point of view, Waverman used statistical and econometric techniques to isolate cause from effect, finding that an increase of 10 devices per 100 inhabitants in a developing country added 0.6 points to GDP per capita growth and 0.5 to GDP growth: these results bring out the transformative power of technology on global economic activity.

And to gain a better understanding of how technological innovations are transmitted into the economy, I’ve put together a stylized model in an Excel workbook offering a mechanistic explanation of how a successful general purpose technology is able to impact economic growth in such a significant way. In the first sheet, a general Bass model is used to quantify the transition to digital mobile technology from 1996 to 2011 (taking care of network effects in a gross manner; they would be better modelled using Beckstrom’s law); in the second sheet, using the previously calculated penetration level of digital mobile technology as one of the inputs, a neoclassical economic growth model (Solow-Swan) is used to explain its economic impact. Note that this particular model was the first to introduce technological progress as a fundamental variable to explain economic growth, treating it as a component that increments the productivity of the labour factor and at the same time complements capital accumulation, itself divided into different periods of decreasing value to account for the technological depreciation process. The only negative aspect of this model is that technological progress is assumed to be constant over the full period of analysis, leaving aside the possibility of a growing innovation rate or, more realistically, a decreasing one. Other variables taken into account by the model are capital depreciation, the savings rate, population growth and the relationship between capital and labour in the resulting economic production. Other technological changes could be analysed with the same Excel workbook, because they feature similar diffusion processes and economic impacts: the adoption of the car, substituting for horses; the diffusion of electricity; or the diffusion of the computer, replacing the typewriter.
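
For readers without the workbook, the same two-stage mechanics fit in a few lines of Python: a Bass diffusion curve for the adoption path, fed into a Solow-Swan model in which the diffusion level boosts labour-augmenting technology. Every parameter value below is an illustrative assumption, not the calibration used in the workbook.

# Stylized sketch of the workbook's two stages: (1) a Bass diffusion curve
# for mobile-phone adoption, (2) a Solow-Swan growth model where adoption
# boosts labour-augmenting technology A.  All parameter values here are
# illustrative assumptions, not the workbook's calibration.

# --- Stage 1: Bass diffusion, F = adopter fraction ----------------------
p, q = 0.03, 0.4                 # innovation and imitation coefficients
years = range(1996, 2012)
F, adoption = 0.0, []
for _ in years:
    F += (p + q * F) * (1 - F)   # discrete-time Bass update
    adoption.append(min(F, 1.0))

# --- Stage 2: Solow-Swan with technology boosted by adoption ------------
alpha, s, delta, n, g = 0.33, 0.22, 0.05, 0.01, 0.015
theta = 0.10                      # assumed boost to A from full adoption
K, L, A0 = 100.0, 1.0, 1.0

for year, Ft in zip(years, adoption):
    A = A0 * (1 + g) ** (year - 1996) * (1 + theta * Ft)
    Y = K ** alpha * (A * L) ** (1 - alpha)   # Cobb-Douglas production
    K = K + s * Y - delta * K                 # capital accumulation
    L *= 1 + n                                # population growth
    print(f"{year}: adoption {Ft:4.0%}  output {Y:7.2f}")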

Later economic models supplement the previous one by introducing the accumulation of human capital alongside technological change, giving birth to the endogenous growth theories that better explain the relationship between computer technology and economic growth: even if information technologies are mostly deployed for the purpose of substituting for the labour factor, their true nature is incredibly complementary to human capital, though this is more difficult to prove econometrically. Last but not least, the entertainment potential of computer technology drags down measured productivity growth: for example, the 5 million hours of Angry Birds played every day should also be counterbalanced in some other way.