Category Archives: tech history

The Genesis of E‑mail

Every time an email is sent, it is expected to reach its recipient no matter which service provider they use. But in the early days it wasn’t so simple: the first commercial email services (CompuServe, Prodigy, Delphi, …) were proprietary systems with no concept of a universal e‑mail address, and the resulting technical barriers gave rise to the commercial practice of settling delivery charges between service providers. In other words, interconnection agreements spelled out the delivery charges between providers and bound the two parties to periodically settle their accounting differences, much as in other telecommunication networks (telex, fax, teletex, SMS and phone termination fees).

But the number of required agreements grew quadratically (one per pair of providers) as the number of service providers expanded, and so did the technical difficulty of integrating their different email services. X.400 was born to solve these issues, implicitly providing support for keeping settlement scores between carriers and for multi-interface delivery (e.g. the preferredDeliveryMethod attribute). In the end, X.400 never really took off and was displaced around 1990 by the much simpler X.500: not because of its tremendous complexity alone, but because of the decisive move by service providers to stop settling accounts among themselves, so that X.500 could simply be used to interconnect their directory services.

As usual, it’s almost never about the technology, which is better thought of as the child of necessity and will. The hassle of reaching agreements grew so much with the expanding number of service providers that their diminishing returns no longer justified the associated bargaining costs, which in turn were precluding the essential network effects of a growing email user base (as per the Metcalfe, Beckstrom and Reed laws): that is, the agreements themselves were the real factor limiting the growth of the early Internet. Nowadays, the only trace of these agreements survives in transit traffic agreements, themselves increasingly displaced by peering agreements.

To sum up, notice the circular paradox that the history of email established, a curious tale of unintended consequences: free email begot spam, and spam begat the obvious proposal to start charging for email in order to end it. Whether the trade-off was correctly resolved depends on whom you ask.

OpenFlow to Free the Closed Internet

The biggest paradox of the Internet is that, despite being the epitome of freedom and openness, its actual implementation is even more closed than the old mainframes. And although the whole thing has always been properly documented in RFC memoranda, the quirks and peculiarities of the concrete implementations have always lain hidden inside router images, even for the most important inter-domain routing protocols, the biggest concern during interoperability tests.

So it turns out that the real definition of openness is a nuanced one: in the software world, licensing and governance are paramount, while in the networking world it is standards and interoperability that are crucial and strategic.

Fortunately, OpenFlow is opening a new window of openness in this closed world: its approach to Software-Defined Networking enables reprogrammable traffic management at Layer 2, much like MPLS does at Layer 3, but in far more heterogeneous environments. Its first version is rather bare, missing IPv6, QoS, traffic shaping and high availability, and without a killer app its general adoption will take time, if it happens at all. Even so, its ability to complement recent virtualization technologies in the data center, and its being the only practical way for researchers to try out new experimental protocols, make it a key technology to watch in the coming years.
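
To make the match-action abstraction concrete, here is a minimal sketch of a flow table as a plain data structure; the struct and field names are mine, and this is not OpenFlow’s actual 12-tuple match or any controller API, just the general idea of pairing header matches with forwarding actions:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Illustrative only: a toy flow entry keyed on a couple of L2 fields.
   A real OpenFlow 1.0 entry matches on a 12-tuple and carries counters. */
struct flow_entry {
    uint8_t  dl_dst[6];     /* destination MAC to match */
    uint16_t vlan_id;       /* VLAN tag to match */
    uint16_t out_port;      /* action: forward to this port */
};

/* Linear lookup over a tiny table: return the output port for a frame,
   or -1 to punt the packet to the controller (table miss). */
static int lookup(const struct flow_entry *table, size_t n,
                  const uint8_t dst[6], uint16_t vlan)
{
    for (size_t i = 0; i < n; i++)
        if (memcmp(table[i].dl_dst, dst, 6) == 0 && table[i].vlan_id == vlan)
            return table[i].out_port;
    return -1;
}

int main(void)
{
    struct flow_entry table[] = {
        { {0x00,0x11,0x22,0x33,0x44,0x55}, 10, 3 },
        { {0x66,0x77,0x88,0x99,0xaa,0xbb}, 10, 7 },
    };
    uint8_t dst[6] = {0x00,0x11,0x22,0x33,0x44,0x55};
    printf("out port: %d\n", lookup(table, 2, dst, 10));  /* prints 3 */
    return 0;
}

The point of the abstraction is that the controller can rewrite these entries at will, which is what makes the forwarding plane reprogrammable.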

dmr (1941 — 2011)

#include <stdio.h>
#include <stdlib.h>
#include <setjmp.h>

/* dmr: http://cm.bell-labs.com/who/dmr/ */
static jmp_buf dmr;

static void
function(void)
{
    printf("umquam\n");
    longjmp(dmr, 1);        /* jumps back to the setjmp in main */
    printf("meminisse\n");  /* never reached */
}

int
main(void) {
    if (setjmp(dmr) == 0) {
        printf("Noli\n");
        function();
    } else {
        printf("oblivisci\n");
    }
    return EXIT_SUCCESS;
}

What I’ve Been Reading

“Life can only be understood backwards, but it must be lived forwards.”

Søren Kierkegaard

In technology, predicting the future is a risky business. Tracing parallels and contrasts between new and old technologies, in the best tradition of Odlyzko’s papers on the comparative history of technology, is the only foolproof way to reason about the future, with the sole caveat that in hindsight everything looks obvious and foreseeable. The following books are the best sources to learn about the rise of the great networks of past centuries, before the Internet:

  • [amazon_link id="0801846145" target="_blank" ]Networks of Power: Electrification in Western Society[/amazon_link]. Seen through the lens of Thomas Hughes’ systematizing theory of large technological systems, the best account of the battle of electrical standards (AC vs. DC), full of detail on the similarities and differences of electrification in Germany, England and the United States, shaped by the state of affairs in each country.
  • [amazon_link id="0393061264" target="_blank" ]Railroaded: The Transcontinentals and the Making of Modern America[/amazon_link]. As the first infrastructure built under modern capitalism, the transcontinentals transformed the legal and economic system of their day into the one we live in now. The narrative’s greatest strength is its fact-based approach, although it should be read with some distance and perspective to keep the author’s opinions apart from the evidence.
  • [amazon_link id="0521131855" target="_blank" ]Energy and the English Industrial Revolution[/amazon_link]. A masterpiece and the best short book on the Industrial Revolution, by one of the most important economic historians. You can get a taste of its contents in this article written by the author himself.

Estimating the Innovator’s Dilemma

Much like Keynes’ [amazon_link id="1169831990" target="_blank" ]The General Theory of Employment, Interest and Money (1936)[/amazon_link] sketched the general picture of macroeconomics, leaving the hard work of pinning down the concrete equations and estimating their variables to the then nascent field of econometrics, Christensen’s [amazon_link id="0060521996" target="_blank" ]The Innovator’s Dilemma[/amazon_link] derived an acclaimed general theory of innovation from real-world examples, opening very fertile ground for modelling and quantification. The following paper is the first to tackle the problem of building a fully detailed innovation model around the canonical case of the incumbent’s delay.

Download (PDF9KB)

The most interesting part is the measurement of four different forces determining the incumbent-entrant timing gap in technology adoption, listed here in their estimated order of importance: a very significant option value of waiting; a smaller cannibalization effect; and sunk-cost advantages over entrants and preemption motives, both trivial in this case even though they are strong determinants of innovation and evolution elsewhere.

Even more important is the direct relevance of this very same case and these models to the modern evolution of hard drives towards SSD and hybrid technologies.

Bön Voyage, Minitel

It’s decision time for Minitel, and its fate has been sealed. With many lessons for tomorrow, the parallels of state intervention between Minitel and the Internet leave much to ponder: although decisive for their existence in the early years, the French government’s lack of restraint, not only in its censorship impulses but especially on the commercial side of the venture, ruined Minitel’s international diffusion.

The Curious Case of the Diverging Browser Caches

The browser cache serves several aims, among others saving network bandwidth and reducing page-loading times, which in turn lowers the time cost that loading delays impose on users. For example, suppose the typical user spends an average of 450 hours/year surfing the web at a rate of 120 pages/hour, with an implied wage of 12€/hour, a reduction in loading time thanks to the cache of 1 second on a desktop and 10 seconds on a mobile device, and a cache hit rate of 40%. We can then easily estimate that the typical user saves between 72€/year (computer) and 720€/year (mobile) just by enabling the browser cache.
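
For reference, here is the back-of-the-envelope arithmetic behind those figures, scripted with exactly the assumptions listed above:

#include <stdio.h>

/* Back-of-the-envelope cache savings, using the assumptions stated above. */
int main(void)
{
    const double hours_per_year  = 450.0;   /* time spent browsing */
    const double pages_per_hour  = 120.0;
    const double wage_eur_per_h  = 12.0;    /* implied value of time */
    const double hit_rate        = 0.40;    /* fraction of pages served from cache */
    const double saved_s_desktop = 1.0;     /* seconds saved per cached page */
    const double saved_s_mobile  = 10.0;

    double cached_pages = hours_per_year * pages_per_hour * hit_rate;  /* 21,600 */
    double desktop_eur  = cached_pages * saved_s_desktop / 3600.0 * wage_eur_per_h;
    double mobile_eur   = cached_pages * saved_s_mobile  / 3600.0 * wage_eur_per_h;

    printf("desktop: %.0f EUR/year, mobile: %.0f EUR/year\n", desktop_eur, mobile_eur);
    /* prints: desktop: 72 EUR/year, mobile: 720 EUR/year */
    return 0;
}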

Therefore, given current storage and bandwidth costs, the implied break-even point for using the browser cache is always positive, even if it had to store every page the user would ever visit over decades, longer than the average life of any device. This will remain true not because of the exponentially decreasing costs of storage and bandwidth, but simply because labor costs increase roughly linearly over time. Yet if we take labor costs out of the equation and consider only the technological trends, bearing in mind that mobile bandwidth will always cost several orders of magnitude more than fiber or cable bandwidth, we face the curious case that using the browser cache may soon stop making sense on a computer while remaining profitable on a mobile device, and that for a period of several decades, even accounting for mobile’s higher storage costs. Note that this is just one of many divergences that may appear as the various Internet browsing devices evolve, and that correcting them will demand a much greater instruction density per transmitted byte.

The key point of this and similar analyses always rests on the relative differences in price evolution between magnetic storage (Kryder’s law: 2x every 13 months), the scale of integration of circuits (Moore’s law: 2x every 18 months) and bandwidth throughput (Nielsen’s law: 2x every 21 months), among others. The last one deserves the greatest emphasis: being the slowest to improve, it will also be the scarcest resource and, therefore, the one that ends up dominating the final price of any computer system. Storage, on the other hand, will be the resource most used to offset the disadvantages brought by telecommunications’ slower evolution, following Jevons’s paradox, which reminds us that increases in the efficiency with which a resource is used tend to increase, rather than decrease, the rate of consumption of that resource.
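
To appreciate how much those doubling times diverge once they compound, here is a quick sketch projecting each law over an (arbitrarily chosen) ten-year horizon:

#include <stdio.h>
#include <math.h>

/* Growth multiple after `months`, given a doubling period in months. */
static double multiple(double doubling_months, double months)
{
    return pow(2.0, months / doubling_months);
}

int main(void)
{
    const double horizon = 120.0;  /* ten years, in months */
    printf("Kryder  (storage,   2x/13mo): %6.0fx\n", multiple(13.0, horizon));
    printf("Moore   (circuits,  2x/18mo): %6.0fx\n", multiple(18.0, horizon));
    printf("Nielsen (bandwidth, 2x/21mo): %6.0fx\n", multiple(21.0, horizon));
    /* Over a decade, storage ends up roughly an order of magnitude ahead of
       bandwidth, which is why bandwidth becomes the binding constraint. */
    return 0;
}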

On the expected evolution of telecommunications, it is always necessary to separate the trends of the different underlying technologies (fiber, cable and wireless). The most optimistic would certainly lean on Edholm’s law, which predicts that the throughput of the different technologies will eventually converge, as diminishing marginal returns hit the fastest ones even while all of them keep increasing their throughput in parallel. But it is Cooper’s law on the efficiency of electromagnetic spectrum use (2x every 30 months) that highlights the underlying idiosyncrasy of wireless, since it exploits a natural resource that cannot be expanded: analyzing its efficiency gains over the last 100 years, improvements in coding methods explain only 0.6% of the enhancement; the enlargement of the spectrum in use, a mere 1.5%; and more efficient use of the spectrum through better confinement, the remaining 97.9%. Optical fiber, by contrast, stands in hard opposition to any wireless technology (Butters’ law: 2x every 9 months), one more reason to expect that the differences between the software applications available on mobile devices and those on fiber-connected ones can only widen over the years, the raison d’être of the mobile software Cambrian explosion.

The New New Tech IPO Bubble

The latest IPOs of tech companies like LinkedIn, Yandex and RenRen have reignited the never-ending debate over valuations and the fear of another tech bubble, even though most tech stocks are cheaper than before the dot-com bust. But this time we have the masterful studies of [amazon_link id="1843763311" target="_blank" ]Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages[/amazon_link] and [amazon_link id="0123497043" target="_blank" ]Tech Stock Valuation: Investor Psychology and Economic Analysis[/amazon_link], providing us with plenty of empirical data from previous bubbles. Or, even better, real-time theories of asset-bubble formation, like the Jarrow-Kchia-Protter-Shimbo theory put to the test in the following paper:

Download (PDF470KB)

This time is different.

Software Profit Strategies by Category

What will the mobile application space look like in the future? Which proven strategies from the past will still provide an edge? And which strategic levers should be considered to bring exorbitant profits?

To solve the mobile conundrum and peer into the future with the wisdom of the past, I’ve been collecting economic data from the most diverse sources to estimate regressions (IRLS, LAD) of profit (quarterly and/or annual) on different software categories (desktop, web, mobile) and their features. In other words, the most obvious analysis that nobody had ever carried out.
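
For the curious, the estimation machinery is nothing exotic. What follows is a minimal sketch of a LAD fit via iteratively reweighted least squares, reduced to a single predictor and toy numbers rather than the actual dataset and feature set described above:

#include <stdio.h>
#include <math.h>

/* Least absolute deviations (LAD) fit of y = a + b*x via IRLS:
   repeatedly solve a weighted least-squares problem with weights
   w_i = 1 / max(|residual_i|, eps), which converges toward the L1 fit. */
static void lad_irls(const double *x, const double *y, int n,
                     double *a, double *b)
{
    const double eps = 1e-6;
    *a = 0.0; *b = 0.0;
    for (int iter = 0; iter < 50; iter++) {
        double sw = 0, swx = 0, swy = 0, swxx = 0, swxy = 0;
        for (int i = 0; i < n; i++) {
            double r = y[i] - (*a + *b * x[i]);
            double w = 1.0 / fmax(fabs(r), eps);
            sw += w; swx += w * x[i]; swy += w * y[i];
            swxx += w * x[i] * x[i]; swxy += w * x[i] * y[i];
        }
        double denom = sw * swxx - swx * swx;   /* weighted normal equations */
        if (fabs(denom) < 1e-12) break;
        *b = (sw * swxy - swx * swy) / denom;
        *a = (swy - *b * swx) / sw;
    }
}

int main(void)
{
    /* Toy data, not the actual dataset: profit vs. one feature, with an outlier. */
    double x[] = {1, 2, 3, 4, 5, 6};
    double y[] = {2.1, 4.0, 6.2, 7.9, 30.0, 12.1};
    double a, b;
    lad_irls(x, y, 6, &a, &b);
    printf("a = %.2f, b = %.2f\n", a, b);  /* robust to the outlier at x = 5 */
    return 0;
}

The L1 loss is what keeps a single runaway hit or flop from dragging the whole fit, which matters with a sample of only a few hundred programs.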

The following are stylized initial results, omitting the exact coefficients but showing their size and direction (* marks statistical significance):

 

                              DESKTOP     WEB         MOBILE
Total Addressable Market      +           ++          +
User Base Size                +(*)        +++(*)      ++(*)
Development Sunk Costs        ++          -           -
Latency tolerant              ++          - -(*)      -
BUSINESS MODEL VARIABLES
License fee                   +           - -         - - -(*)
Maintenance fees              +++(*)      -           ?
Versioning                    ++          -           ?
Bundling                      ++          ?           ?
CPM/CPC                       -           +++(*)      +
Targeting Quality             ?           ++          +
Use Time per User             ?           ++(*)       ++(*)
DEMAND SIDE ECONOMIES OF SCALE (NETWORK EFFECTS)
Bandwagon effect              +           ++(*)       ?
Standard setter               ++          +++         ?
Linkage / interoperability    -           ++          ?
SWITCHING COSTS
Data/file lock-in             ++          +           ?
Job/skill effects             ++(*)       ?           ?
Learning/training effects     ++          -           -
Incumbency effect             ?           -           ++

R^2 = 0.66, sample size = 352 (covering the most important and best-known programs in each category)

Focusing on the variables with larger and statistically significant coefficients, the data reveals the different nature of each software category:

  • Desktop applications: the most profitable strategy is to develop broadly used programs with a low initial price but higher maintenance fees and a significant impact on the labor market. Don’t make programs, revolutionize professions.
  • Web: very high-scale, ad-monetized applications with major network effects, the result of the open nature of the web, with its hyperlinking structure across domains, and the absence of micropayments.
  • Mobile software is a yet-to-be-determined mixture of desktop and web applications. The category resembles desktop software in its technical architecture, but its evolution follows the web more closely, owing to the incumbency effects of web companies and the lack of switching costs and traditional network effects.

More insights in future posts from this and other data sources.

Data sources: Yahoo Finance, Crunchbase, Wakoopa, RescueTime, Flurry, Admob, Distimo, Alexa, Quantcast, Compete, others.

Code battling code battling… (ad infinitum)

pMARS, the official Redcode simulator

“Battle programs of the future will perhaps be longer than today’s winners but orders of magnitude more robust. They will gather intelligence, lay false trails and strike at their opponents suddenly and with determination.”
‑A. K. Dewdney, Computer Recreations, Scientific American (January 1987)

Mahjong is different from Checkers, just like High Frequency Trading is different from Core War: they include the profit motive!

Code of Virii Set in Silicon

I’m eager to learn the outcome of Intel’s biggest acquisition ever, McAfee. As company representatives said in a conference call with Wall Street analysts, they plan to push functionality down from userland into the chip, just below the OS. That is a really strange rationalization for the acquisition, since antivirus software is memory- and I/O-bound, not CPU-bound (see “Characterizing Antivirus Workload Execution” for more information), which is why significant speed-ups are unlikely to be attained. Moreover, any improvements Intel puts into its chips should also be offered to other antivirus companies; otherwise it risks facing antitrust action, as the EU has forewarned, coincidentally the same reason Microsoft hasn’t been able to ship a good antivirus even though everybody would benefit from such a development (ironically, a case of a public bad arising from public intervention).

Security on a chip should be as simple as possible. I still remember the security fiasco in the Intel 286 protection-ring model caused by an undocumented instruction, LOADALL, which rendered it useless. In my opinion, progress is more likely to come in the vein of current virtualization offerings, seeking to improve the performance of multiple virtual machines within a host.

Finally, note that EBITDA margins aren’t exactly attractive, in particular for a company like Intel:

[trefis_forecast ticker="INTC" driver="1318"]