Monthly Archives: December 2012

How the Future Dwells in the Past

Techno-utopians got it wrong: their tireless search for new technologies must start in the past. Most new technologies are just a rehash of past ones, and most resources are devoted to maintaining existing technological infrastructure or to incremental advances in old technologies: the genuinely new and innovative is extraordinarily rare. A fact widely ignored yet deeply intuitive, since most human needs have always been the same.

Mandelbrot’s “The Fractal Geometry of Nature” summarizes this line of thinking as the Lindy effect: the future survival of a Broadway show is best predicted by how long it has already been running. It is itself based on a much older assertion that “the future career expectation of a television comedian is proportional to his past exposure” (The New Republic, June 13th, 1964). The same statistical distribution extends beyond the arts to other phenomena like the survival of technologies: the longer a technology has been in use, the longer we should expect it to last; more empirically, every year a technology survives may even double its remaining life expectancy, contrary to the life expectancy of any living being. An insight that warns us against expecting miracles when introducing new technologies without any precedent.
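
This Lindy-style behaviour is exactly what heavy-tailed lifetime distributions produce. As a minimal sketch (mine, not from any of the cited sources), assuming technology lifetimes follow a Pareto distribution, the expected remaining lifetime grows in proportion to the age already observed:

```python
import random

def pareto_lifetime(alpha=1.5, xm=1.0):
    # Heavy-tailed Pareto lifetime via inverse transform sampling
    return xm / ((1.0 - random.random()) ** (1.0 / alpha))

def mean_remaining_life(age, samples=200_000, alpha=1.5):
    lifetimes = (pareto_lifetime(alpha) for _ in range(samples))
    remaining = [t - age for t in lifetimes if t > age]
    return sum(remaining) / len(remaining)

for age in (1, 2, 5, 10, 20):
    # The older the "technology", the longer its expected remaining life
    print(f"survived {age:>2} years -> expect ~{mean_remaining_life(age):.0f} more")
```

With α = 1.5 the conditional expectation works out to a/(α − 1) = 2a: every year already survived adds roughly two more years of expected life, the doubling rule of thumb mentioned above (the estimate gets noisy for large ages, since the Pareto tail is so heavy).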

This next paper is the only one I could find that combines this effect with other power laws to try to ascertain the economic returns of basic research, and thus the optimal level of investment:

(PDF embedded in the original post)

Assorted Links (Finance)

    1. Exploratory trading: explaining the use of small trades to test and probe the market
    2. Damodaran’s series on acquisitions: Winners & Losers, Big Deal or Good Deal?, Over-confident CEOs and Compliant Boards, Accretive (Dilutive) Deals can be Bad (Good) Deals
    3. Predictable Growth Decay in SaaS Companies: next year’s growth rate is likely to be 85% of this year’s growth rate (see the short sketch after this list).
    4. Making CrunchBase Computable with Wolfram|Alpha
    5. Does Academic Research Destroy Stock Return Predictability? Not as much as you might think: publishing results only makes returns fall by about a third.
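
To see what such a decay rule implies (a hypothetical back-of-the-envelope sketch, not taken from the linked post), compounding an 85% year-over-year decay on the growth rate looks like this:

```python
def project_growth(current_growth, years, decay=0.85):
    """Project annual growth rates, assuming each year's rate is `decay` times the previous one."""
    rates, revenue = [], 1.0
    for _ in range(years):
        revenue *= 1 + current_growth
        rates.append(current_growth)
        current_growth *= decay
    return rates, revenue

# A SaaS company growing 100% this year, under the 85% decay rule of thumb:
rates, revenue = project_growth(1.00, years=5)
print([f"{r:.0%}" for r in rates])   # ['100%', '85%', '72%', '61%', '52%']
print(f"revenue multiple after 5 years: {revenue:.1f}x")
```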

On The Never-Ending Quest of Functional Programming

Functional programming has been at the top of the hype cycle for decades: no side effects, methods written in a purely mathematical style, strong type systems… What’s not to like about functional programming? It’s even a loaded term! Just ask someone: “But isn’t your code functional yet?”

Although the Turing machine (imperative) model of computation was shown to be equivalent to the λ‑calculus (functional), there is actually a sizeable gap in their asymptotics.

Start with the simplest of data structures, the array: in a purely functional style of programming, updating an array takes O(log N) time, but in an imperative language it’s just O(1). And this is not a problem specific to arrays; it’s the standard behaviour of pure functional programming languages, due to their much-vaunted immutability of data: in “Pure versus Impure LISP”, Pippenger showed that impure programs running in O(N) may require O(N log N) time when translated into a purely functional, strict style (lazy, non-strict evaluation can sometimes close the gap). That is, at worst an O(log N) slowdown per operation may occur when simulating mutable memory access: immutability carries a very real cost, indeed.
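
As a concrete sketch (my own illustration, written in Python for brevity rather than in a functional language): a persistent array can be represented as a balanced binary tree, so an “update” copies only the O(log N) nodes on the path to the index and shares everything else, while a mutable list simply overwrites a cell in O(1):

```python
# Persistent (immutable) array as a balanced binary tree: an update copies
# only the O(log N) nodes on the path to the index, leaving the old version intact.

def build(values):
    if len(values) == 1:
        return ("leaf", values[0])
    mid = len(values) // 2
    return ("node", len(values), build(values[:mid]), build(values[mid:]))

def get(tree, i):
    if tree[0] == "leaf":
        return tree[1]
    _, size, left, right = tree
    mid = size // 2
    return get(left, i) if i < mid else get(right, i - mid)

def update(tree, i, value):           # O(log N): returns a new tree, shares untouched subtrees
    if tree[0] == "leaf":
        return ("leaf", value)
    _, size, left, right = tree
    mid = size // 2
    if i < mid:
        return ("node", size, update(left, i, value), right)
    return ("node", size, left, update(right, i - mid, value))

v0 = build(list(range(8)))
v1 = update(v0, 3, 42)                # persistent: v0 is unchanged
print(get(v0, 3), get(v1, 3))         # 3 42

xs = list(range(8))
xs[3] = 42                            # mutable update: O(1), old version lost
```

Both versions of the persistent array remain accessible after the update, which is exactly what immutability buys you, at the price of the logarithmic factor.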

In practice, extensions to the pure functional paradigm have been devised to address the need to directly reference and change state (monads in Haskell), and most algorithms can be implemented efficiently using purely functional programming; but doing so requires extensive and difficult knowledge (“Purely Functional Data Structures”) even for the most basic data structures, and the recognition that they still suffer a significant slowdown from constant factors and CPU stalls due to the many cache misses.
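
For instance, here is a sketch of the classic two-list queue from Okasaki’s book, transcribed into Python for consistency with the other snippets (the originals are in ML/Haskell): enqueue and dequeue are amortized O(1) without mutating anything:

```python
# Purely functional FIFO queue as a pair of lists (front, back), in the style of
# Okasaki's "Purely Functional Data Structures": amortized O(1), no mutation.

def empty():
    return ((), ())

def enqueue(queue, x):
    front, back = queue
    return (front, (x,) + back)       # newest element goes onto the back list

def dequeue(queue):
    front, back = queue
    if not front:                     # reverse the back list only when front is exhausted
        front, back = tuple(reversed(back)), ()
    return front[0], (front[1:], back)

q = empty()
for i in range(3):
    q = enqueue(q, i)
x, q = dequeue(q)
print(x)                              # 0
```

The trick is to pay for the occasional O(N) reversal with the N cheap enqueues that preceded it; making that bound robust under persistence is where the laziness-based machinery of the book comes in.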

And, as if that were not enough, debugging any functional program is a quest in itself: I would say it’s an order of magnitude more difficult than in an imperative language, vastly increasing software maintenance costs.

Software Development based on Financial Ratios

Why did software component reuse never take root? Why is outsourcing within the software industry such a bad practice, even though other industries outsource all of their development? Why do big software companies perform so badly and have such a poor innovation record?

From a purely financial viewpoint, focused on profitability ratios, these and other questions are beyond all comprehension. But, paradoxically, they emerge as the result of the blind application of financial theory to the software industry:

  • Reusing software looks like the shortest path to profitability: selling properly tested and documented software components over and over again was the holy grail of the software industry decades ago; in fact, that’s how the whole manufacturing industry works. But the incorporation of other costs (friction, learning, integration, …) discourages this model of development; and what really killed the trend was the rejection of whole software stacks, owing to the elusive nature of requirements that were never captured because componentized software is not built by iterating on customer feedback from the very beginning.
  • Outsourcing is the preferred way to develop software in every industry except the software industry itself: financially speaking, outsourcing improves every efficiency measure, lowers fixed costs and increases flexibility. What’s not to like? Exactly what is difficult to measure: the cost of the knowledge that is lost and the opportunity cost of not being able to innovate on top of that knowledge; and, finally, the cost of not surviving because the competition was more knowledgeable and better prepared.
  • Big companies do not innovate because all their projects end up being measured by IRR (Internal Rate of Return): but this ratio prioritizes short-term, low-value projects over long-term ones (see the sketch after this list). Start-ups teach us that it takes 7–10 years to reach full profitability and amortize the initial investment: no wonder big companies fail to innovate.
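
A hypothetical numerical sketch of that last point (the cash flows are invented for illustration, not taken from the post): a quick, modest project posts the higher IRR, while the long-gestation project creates far more value at a typical 10% discount rate:

```python
# IRR via bisection and NPV, to compare a short-term project against a long-term one.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    # Simple bisection: assumes a single sign change of NPV over [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

short_term = [-100, 60, 60, 30]                        # pays back quickly
long_term  = [-100, 0, 0, 0, 0, 0, 0, 80, 120, 180]    # pays off only in years 7-9

for name, cf in [("short-term", short_term), ("long-term", long_term)]:
    print(f"{name}: IRR = {irr(cf):.1%}, NPV @ 10% = {npv(0.10, cf):.1f}")
# The short-term project wins on IRR, the long-term one on NPV at a 10% discount rate.
```

Ranking by IRR picks the first project; ranking by NPV at a 10% cost of capital picks the second, which is precisely the long-term, innovation-style investment that IRR screens out.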

And there are many more examples: striving for a better ROCE (Return on Capital Employed) on software projects is not a very good idea, since capital is not the scarce resource, qualified labour is; economizing on an abundant asset is simply wrong. And the same goes for any other return-like ratio: they will strip a software company of its most precious assets.

Assorted Links (Math)

Some links about the uses of mathematics in everyday life:
  1. The Mathematics of RAID6: another remark that RAID5 is considered harmful (see the parity sketch after this list)
  2. The maths that made the Voyager possible: “Using a solution to the three-body problem, a single mission, launching from Earth in 1977, could sling a spacecraft past all four planets within 12 years. Such an opportunity would not present itself again for another 176 years.”
  3. Speeding GPS calculations by shrinking data
  4. Coded-TCP: replacing packets with algebraic equations to improve wireless bandwidth in the presence of errors
  5. Voice Recognition using MATLAB: easy and fun to experiment with
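
As a tiny illustration of the single-parity idea that RAID5 relies on (my own sketch, not from the linked article): the parity block is just the XOR of the data blocks, so one lost block can be rebuilt, and RAID6’s extra mathematics exists precisely to survive a second simultaneous failure:

```python
# Single-parity (RAID5-style) recovery: the parity block is the XOR of the data
# blocks, so any ONE lost block can be rebuilt. RAID6 adds a second, Reed-Solomon
# style parity over a Galois field so that any TWO lost blocks can be recovered.

data = [b"alpha", b"bravo", b"charl"]                    # equal-sized data blocks
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))      # XOR across the stripe

lost = 1                                                 # pretend block 1 failed
survivors = [blk for i, blk in enumerate(data) if i != lost]
rebuilt = bytes(a ^ b ^ p for a, b, p in zip(*survivors, parity))
print(rebuilt)                                           # b'bravo'
```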

Commemorating Computational Complexity

Fifty years ago, two researchers started writing this seminal paper, kicking off the research field of computational complexity, the core of computer science, which now counts tens of thousands of publications.

Although the concepts and ideas underlying the paper were not new, as a letter by Kurt Gödel shows us, it was the foundational moment for a field that still produces deep, beautiful and practical results: in the last decade alone, Williams’ lower bound against non-uniform ACC circuits, the Agrawal-Kayal-Saxena primality test that placed primality testing in P (although the Miller-Rabin and Solovay-Strassen primality tests still live strong, since they are much faster than AKS), or Vassilevska Williams’ improved bound on the complexity of matrix multiplication.
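
As an aside on why the randomized tests still dominate in practice, here is a standard textbook sketch of Miller-Rabin (not from the post): a few modular exponentiations per round suffice, whereas AKS, though polynomial, is far slower at the sizes used in cryptography:

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test (error probability <= 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:                 # write n - 1 as d * 2**r with d odd
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness of compositeness
    return True

print(is_probable_prime(2**127 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**127 + 1))  # False: divisible by 3
```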

I see tons of start-ups and projects fail because they ignore the most basic algorithmic prudence: that only sub-logarithmic algorithms should be exposed to the mass public is one of the ignored maxims of the computer industry, one that can only be learned by properly interpreting the absence of market offerings refuting it (e.g. regular expressions within search engines’ queries, which could run in exponential time; or all the AI promises about solving optimization problems that never delivered).
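
To make the regular-expression example concrete (a sketch of my own against Python’s backtracking re engine, not something from the post): a pathological pattern blows up exponentially even on tiny inputs, which is exactly why no search engine lets arbitrary regular expressions loose on every query:

```python
import re
import time

# Catastrophic backtracking: "(a+)+$" against "aaa...ab" forces the engine to try
# exponentially many ways of splitting the run of a's before giving up.
pattern = re.compile(r"(a+)+$")

for n in range(16, 25, 2):
    text = "a" * n + "b"              # can never match, so every split is explored
    start = time.perf_counter()
    pattern.match(text)
    print(f"n = {n:2d}: {time.perf_counter() - start:.3f}s")
# Adding two characters roughly quadruples the running time.
```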

Some researchers I spoke with lately expressed the hope that the coming end of Moore’s law would vindicate this field, making its results much more relevant: and although they are absolutely right that proportionally more resources will be directed towards these ends, they fail to consider that the likely crash following this event may also reduce the total, aggregated opportunities.