Software Development Based on Financial Ratios

Why did software component reuse never take root? Why is outsourcing within the software industry such a bad practice, even though other industries outsource most of their development? Why do big software companies perform so badly and have such a poor innovation record?

From a purely financial viewpoint, focused on profitability ratios, these and other questions seem incomprehensible. But, paradoxically, these situations emerge from the blind application of financial theory to the software industry:

  • Reusing software looks like the shortest path to profitability: selling properly tested and documented software components over and over again was the holy grail of the software industry decades ago; in fact, that’s how the entire manufacturing industry works. But the incorporation of other costs (friction, learning, integration, …) discourages this model of development; and what really killed the trend was the rejection of whole software stacks due to the elusive nature of some requirements that were never taken into account, because componentized software is not built by iterating on customer feedback from its very beginning.
  • Outsourcing is the preferred way to develop software in every industry except the software industry itself: financially speaking, outsourcing improves every efficiency measure, lowers fixed costs and increases flexibility. What’s not to like? Whatever is difficult to measure: the cost of lost knowledge, the opportunity cost of not being able to innovate on top of that knowledge and, finally, the cost of not surviving because the competition was more knowledgeable and better prepared.
  • Big companies do not innovate because all their projects end up being measured by IRR (Internal Rate of Return), a ratio that prioritizes short-term, low-value projects over long-term ones (see the sketch below). Start-ups teach us that it takes 7–10 years to reach full profitability and amortize the investment: no wonder big companies fail to innovate.

And there are many more examples: striving for a better ROCE (Return on Capital Employed) on software projects is not a very good idea, since capital is not the scarce resource, qualified labour is; economizing on an abundant asset is absolutely wrong. And the same goes for any other return-like ratio: they will strip a software company bare and deprive it of its most precious assets.
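To make the IRR bias concrete, here is a minimal sketch with hypothetical cash flows and a home-made bisection solver (none of it taken from any real project): the quick, small project shows a much higher IRR than the long innovation bet, even though the latter creates far more value at any plausible cost of capital.

```python
# Hypothetical cash flows: ranking by IRR favours the quick, small project,
# even though the long bet has a far higher NPV at any reasonable discount rate.

def npv(rate, cashflows):
    """Net present value of a cash-flow series, one period apart."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

quick_win = [-100, 70, 70]                              # pays back in 2 years
long_bet = [-1000, 0, 0, 50, 150, 300, 450, 600, 700]   # pays off over ~8 years

for name, cf in (("quick win", quick_win), ("long bet", long_bet)):
    print(f"{name:9s}  IRR = {irr(cf):6.1%}   NPV@8% = {npv(0.08, cf):7.1f}")
# quick win  IRR ≈ 25.7%   NPV@8% ≈  24.8
# long bet   IRR ≈ 13.4%   NPV@8% ≈ 366.0
```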

Assorted Links (Math)

Some links about the uses of mathematics in everyday life:
  1. The Mathematics of RAID6: another reminder that RAID5 is considered harmful (a minimal parity sketch follows this list)
  2. The maths that made the Voyager possible: “Using a solution to the three-body problem, a single mission, launching from Earth in 1977, could sling a spacecraft past all four planets within 12 years. Such an opportunity would not present itself again for another 176 years.”
  3. Speeding GPS calculations by shrinking data
  4. Coded-TCP: replacing packets with algebraic equations to improve wireless bandwidth in the presence of errors
  5. Voice Recognition using MATLAB: easy and fun to experiment with
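Since the RAID6 link leans heavily on Galois-field arithmetic, here is a toy sketch of the dual-parity idea (my own illustration, not code from the linked paper): P is a plain XOR across the data disks, while Q weights each byte by a power of the generator in GF(2^8), which is what lets any two lost disks be reconstructed.

```python
# Toy sketch of RAID6 dual parity over GF(2^8); not a real implementation.

def gf_mul(a, b, poly=0x11D):
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:           # reduce whenever the degree reaches 8
            a ^= poly
    return result

def gf_pow(base, exponent):
    result = 1
    for _ in range(exponent):
        result = gf_mul(result, base)
    return result

def raid6_parity(stripe):
    """Compute the (P, Q) parity bytes for one byte position across the data disks."""
    p = q = 0
    for i, d in enumerate(stripe):
        p ^= d                          # P: ordinary XOR parity (RAID5-style)
        q ^= gf_mul(gf_pow(2, i), d)    # Q: each disk weighted by g**i, with g = 0x02
    return p, q

print(raid6_parity([0x37, 0xA2, 0x5C, 0xF0]))   # one byte from each of four data disks
```

Losing any two data disks leaves two unknowns and the two equations above, and the GF(2^8) weights keep those equations independent.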

Commemorating Computational Complexity

Fifty years ago, two researchers started writing this seminal paper, kicking off the research field of computational complexity, the core of computer science, which now counts tens of thousands of publications.

Although the concepts and ideas underlying the paper were not new, as a letter by Kurt Gödel shows us, it was the foundational moment for a field that still produces deep, beautiful and practical results: in the last decade alone, Williams’ lower bound on non-uniform circuits, the Agrawal-Kayal-Saxena primality test that placed primality testing in P (although the Miller-Rabin and Solovay-Strassen primality tests still live strong, since they are much faster than AKS) or Vassilevska Williams’ improved upper bound on matrix multiplication.
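As an aside, here is a minimal sketch of why Miller-Rabin still dominates in practice: a handful of random bases and modular exponentiations decide primality with overwhelming confidence, far faster than AKS. This is the standard textbook version, not tied to any particular library.

```python
# Miller-Rabin probabilistic primality test.
import random

def is_probable_prime(n, rounds=20):
    """A composite n passes all rounds with probability < 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:               # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a witnesses that n is composite
    return True

print(is_probable_prime(2**127 - 1))    # a 39-digit Mersenne prime, answered instantly
```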

I see tons of start-ups and projects fail because they ignore the most basic algorithmic prudence: that only sub-logarithmic algorithms should be exposed to the mass public is one of the ignored maxims of the computer industry, one that can only be learned by properly interpreting the absence of market offerings refuting it (e.g. regular expressions within search engines’ queries, which could run in exponential time; or all the AI promises about solving optimization problems that never delivered).
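The regular-expression example is easy to demonstrate with a hypothetical pattern and input, using Python’s backtracking re engine: nested quantifiers take exponential time on inputs that almost match, which is exactly why search engines do not expose raw regexes to the public.

```python
# Catastrophic backtracking: matching time roughly doubles with each extra 'a'.
import re
import time

pattern = re.compile(r'(a+)+$')          # nested quantifiers, the classic pathological case
for n in (16, 20, 24):
    text = 'a' * n + 'b'                 # almost matches, forcing the engine to try every split
    start = time.perf_counter()
    pattern.match(text)                  # returns None, but only after exponential work
    print(n, round(time.perf_counter() - start, 3), "seconds")
```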

Some researchers I spoke with lately expressed the hope that the coming end of Moore’s law would vindicate this field, making its results much more relevant: and although they were absolutely right that proportionally more resources will be directed towards these ends, they failed to consider that the likely crash following that event may also reduce the total, aggregated opportunities.

Assorted Links (Architecture)

    1. The Architecture of Open Source Applications: a wonderful resource to learn more about the internals of some of the greatest programs in history. I love the chapter about LLVM, a perfect complement to this post about the life of an instruction in LLVM
    2. Secret Servers and Data Centers: best special report from Wired in a very long time, full of previously unpublished information about the biggest data centers in the world
    3. Understanding Cloud Failures: useful infographic on the perils of Cloud Computing
    4. One in Six Active US Patents Pertain to the Smartphone: such a little device, such a complicated legal maze
    5. The Licensing of Mobile Bands in CEPT: complete list on the distribution of mobile spectrum in Europe
    6. Kernel Drivers Compiled to Javascript and Run in Browser: a simple hack that turns the software stack completely upside down

New Presentations on Mobile Security

I’ve just updated the list of presentations on mobile security:

Captured Regulators on Natural Monopolies

Due to the high competition and low profit margins of the mobile network operators, an odd market structure with no precedent is taking shape in the UK: there will be only two mobile networks but five major brands, MVNOs aside. The first network is the result of the merger of T-Mobile and Orange (Everything Everywhere) and the network sharing agreement between T-Mobile and Three (Mobile Broadband Network Limited); the other network, Cornerstone, is the result of the network sharing agreement between Vodafone and Telefónica (O2).

There are multiple ways to analyze and interpret this situation. On one hand, regulator-mandated fragmentation against natural monopolies/duopolies is a disaster waiting to happen that lowers the level of network investment, so the market always reverts to its natural structure, as the history of Ma Bell clearly shows; on the other hand, the companies are forced to keep a façade of competition through multiple commercial entities that resell network access to the consumer, a messy situation that only a captured regulator would agree to.

What I do know is that this experiment could only happen in the UK and not under the current rules of the European Union; but given how influential and imitated the policies of the OFCOM regulator are, it’s only a matter of time before other states follow. And I wonder how this will play with BEREC, the European telco super-regulator: will network sharing create duopolies in every national market under the pretext of the incipient 4G deployment, leaving just some mega-marketing companies at the European level? Will the incentives for network investment be perfectly aligned under such a structure?

Assorted Links (CompSec)

    1. The most dangerous code in the world: validating SSL certificates in non-browser software. Yet another round of broken implementations of the SSL protocol (a sketch of what correct validation looks like follows this list).
    2. Cross-VM Side Channels and Their Use to Extract Private Keys: the first practical proof that we should not run SSL servers or any cryptographic software in a public cloud.
    3. Short keys used on DKIM: the strange case of the race to use the shortest RSA keys.
    4. How to Garble RAM Programs: Yao’s garbled circuits may turn out to be practical.
    5. Apache Accumulo: NSA’s secure BigTable.
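Regarding the first link, here is a minimal sketch of what correct certificate validation looks like in non-browser code, using Python’s standard ssl module (the host name is illustrative): the two steps most often skipped are chain verification and hostname checking.

```python
# Minimal sketch of proper TLS certificate validation outside a browser.
import socket
import ssl

def fetch_peer_cert(host, port=443):
    context = ssl.create_default_context()      # loads the system CA roots
    context.verify_mode = ssl.CERT_REQUIRED     # reject unverifiable chains
    context.check_hostname = True               # reject hostname mismatches
    with socket.create_connection((host, port), timeout=10) as sock:
        # wrap_socket performs the handshake and raises ssl.SSLError /
        # ssl.CertificateError if either check above fails.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

print(fetch_peer_cert("example.com")["subject"])   # illustrative host
```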

Real Market Models

One of the biggest mysteries of economics is how an academic discipline could have gone so long without solid models of one of its most fundamental objects of study: markets. They are assumed to exist, without any formal typology or empirical tests of their properties. So it’s refreshing to find this recent paper that tries to tackle the challenge:

Download (PDF, 1.16MB)

It introduces the central property of decentralization into a model of markets, and proceeds to prove that this very property makes them resilient to manipulation, enhancing welfare and liquidity. Just like in the real world.

Sharing Databases of Virus Signatures

Given that the antivirus market is a market for lemons (quality is difficult to ascertain and the asymmetry of knowledge between buyer and seller is so high that it produces a broken market) and the software supply is so over-saturated that the market is full of free products and even an open-source alternative (ClamAV), I wonder when the signature collection and production processes will be shared between vendors. There has already been news about companies plagiarizing each other’s closed databases, so the need is already there. And there are precedents in other technology areas (patent pools, mobile network sharing) and even examples within the computer security space (DNS blacklists like Spamhaus), so maybe it’s just a matter of time: when the PC market stops growing, companies will resolve to pool their signature databases into a common entity and concentrate on more specialized aspects of virus detection (heuristics for *morphic viruses).

Actually, Computers Are Free

In a recent conversation with a friend, she lamented the hard time she was having justifying the deployment of thousands of tablets within a company, especially given their high price. But it’s really the opposite: computers pay for themselves in a very short time, no matter their format. A fact in direct contrast with Solow’s old productivity paradox.

Just a little research to prove this: take the welfare gain of computers, measured by their compensating variation, that is, the amount of income a consumer would have to give up in order to attain the level of utility that would have been realised if computers had never been invented. The most recent results show that it is 3.8–4% of total consumption expenditure: in other words, >$1500 per year in a first-world country (you can also play with the Matlab code behind this model!).

Download (PDF, 140KB)
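A quick back-of-the-envelope check of that figure, using an assumed round number for per-capita consumption expenditure (roughly $40,000 a year in the US; adjust for your own country):

```python
# Compensating variation of computers, applied to an assumed consumption figure.
consumption_per_capita = 40_000                  # USD per year, assumed round figure
for share in (0.038, 0.040):
    welfare_gain = share * consumption_per_capita
    print(f"{share:.1%} of ${consumption_per_capita:,} = ${welfare_gain:,.0f} per year")
# 3.8% of $40,000 = $1,520 per year
# 4.0% of $40,000 = $1,600 per year
```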

A high sum that completely justifies their price, but a low one compared with the compensating variation of the Internet (26.8%) or that of electricity (92%), i.e. no one would live without electricity. And even though the Internet’s compensating variation is much higher than that of computers, and so is its contribution to economic growth, computers are General Purpose Technologies absolutely necessary to access the Internet, thus complementary to it, and its compensating variation cannot be separated from theirs.