Monthly Archives: October 2012

New Presentations on Mobile Security

I’ve just updated the list of presentations on mobile security:

Captured Regulators on Natural Monopolies

Due to the high competition and low profit margins of the mobile network operators, an odd market structure with no precedent is taking shape in the UK: there will be only two mobile networks but five major brands, MVNOs aside. The first network is the result of the merger of T‑Mobile and Orange (Everything Everywhere) and the network sharing agreement between T‑Mobile and Three (Mobile Broadband Network Limited); the other network, Cornerstone, is the result of the network sharing agreement between Vodafone and Telefónica (O2).

There are multiple ways to analyze and interpret this situation: on one side, regulated and mandated fragmentation of natural monopolies/duopolies is a disaster waiting to happen that lowers the level of network investment, so the market always reverts to its natural structure, as the history of Ma Bell clearly shows; on the other side, the companies are forced to keep up a façade of competition through multiple commercial entities that resell network access to the consumer, a messy situation that only a captured regulator would agree to.

What I do know is that this experiment would only happen in the UK and not under the current rules of the European Union, but given how influential and imitated the policies of the OFCOM regulator are, it’s only a matter of time before other states follow. And I wonder how this will play out with BEREC, the European telco super-regulator: will network sharing create duopolies in every national market under the pretext of the incipient 4G deployment, leaving just a handful of mega-marketing companies at the European level? Will the incentives for network investment be properly aligned under such a structure?

Assorted Links (CompSec)

    1. The most dangerous code in the world: validating SSL certificates in non-browser software. Yet another round of broken implementations of the SSL protocol (see the sketch after this list).
    2. Cross-VM Side Channels and Their Use to Extract Private Keys: the first practical proof that we should not run SSL servers or any other cryptographic software in a public cloud.
    3. Short keys used on DKIM: the strange case of the race to use the shortest RSA keys.
    4. How to Garble RAM Programs: Yao’s garbled circuits may turn out to be practical.
    5. Apache Accumulo: NSA’s secure BigTable.
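
On the first item, here is a minimal sketch of what correct certificate validation looks like in non-browser code, using only the Python standard library. The host name is a placeholder, and this is only an illustration of the checks the paper finds missing in many libraries, not the paper’s own code:

```python
# Minimal sketch of doing TLS certificate validation correctly outside a
# browser. The two checks that broken libraries tend to skip are chain
# verification against a trusted CA store and hostname checking; both are
# enabled by ssl.create_default_context(). "example.com" is a placeholder.
import socket
import ssl

def fetch_over_tls(host: str, port: int = 443) -> None:
    # Verification mode CERT_REQUIRED plus hostname checking, using the
    # system CA store.
    context = ssl.create_default_context()

    with socket.create_connection((host, port), timeout=10) as sock:
        # server_hostname drives both SNI and the hostname check; an
        # untrusted or mismatched certificate makes the handshake fail here
        # with a certificate verification error.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version(), tls.getpeercert()["subject"])

if __name__ == "__main__":
    fetch_over_tls("example.com")
```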

Real Market Models

One of the biggest mysteries of economics is how an academic discipline could have gone so long without solid models of one of its most fundamental objects of study: markets. They are assumed to exist, without any formal typology or empirical tests of their properties. So it’s refreshing to find this recent paper trying to tackle this challenge:

Download (PDF, 1.16MB)

It introduces the central property of decentralization into a model of markets, and proceeds to prove that this very property makes them resilient to manipulation, enhancing welfare and liquidity. Just like in the real world.

Sharing Databases of Virus Signatures

Given that the antivirus market is a market for lemons (quality is difficult to ascertain, and the asymmetry of knowledge between buyer and seller is so high that it produces a broken market) and that the supply of software is so over-saturated that the market is full of free products and even an open-source alternative (ClamAV), I wonder when the signature collection and production processes will be shared between vendors. There has already been news about companies plagiarizing each other’s closed databases, so the need is clearly there. And there are precedents in other technology areas (patent pools, mobile network sharing) and even examples within the computer security space (DNS blacklists like Spamhaus), so maybe it’s just a matter of time: once the PC market stops growing, companies will resolve to pool their signature databases into a common entity and concentrate on more specialized aspects of virus detection (heuristics for *morphic viruses).
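
To make the pooling idea a bit more concrete, here is a hypothetical sketch of a shared, vendor-neutral signature lookup. The file format, the load_shared_signatures helper, and plain SHA-256 hashes standing in for signatures are all my assumptions, far simpler than what real databases like ClamAV’s contain:

```python
# Hypothetical sketch of a pooled signature check: each vendor contributes
# file hashes to a common set, and detection engines query that shared pool
# before applying their own proprietary heuristics. Names and format are
# made up for illustration.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_shared_signatures(pool_file: Path) -> set:
    # One lowercase hex digest per line, contributed by any pool member.
    return {line.strip() for line in pool_file.read_text().splitlines() if line.strip()}

def scan(path: Path, shared_pool: set) -> bool:
    """Return True if the file matches a pooled signature."""
    return sha256_of(path) in shared_pool

if __name__ == "__main__":
    pool = load_shared_signatures(Path("shared_signatures.txt"))
    print(scan(Path("suspect.bin"), pool))
```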

Actually, Computers Are Free

In a recent conversation with a friend, she lamented the hard time she was having justifying the deployment of thousands of tablets within a company, especially given their high price. But it’s really the opposite: computers pay for themselves in a very short time, no matter their form factor. A fact in direct contrast with Solow’s old productivity paradox.

Just a little research to prove this: take the welfare gain of computers, measured by their compensating variation, that is, the amount of income a consumer would have to give up in order to attain the level of utility that would have been realised if computers had never been invented. The most recent results show that it is 3.8–4% of total consumption expenditure: in other words, more than $1500 per year in a first world country (you can also play with the Matlab code behind this model!).

Download (PDF, 140KB)

A high sum that completely justifies their price, but a low one compared with the compensating variation of the Internet (26.8%) or that of electricity (92%), i.e. no one would live without electricity. And even though the Internet’s compensating variation is much higher than that of computers, and so is its contribution to economic growth, computers are General Purpose Technologies absolutely necessary to access the Internet, thus complementary to it, and its compensating variation cannot be separated from theirs.
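
A quick back-of-the-envelope check of these figures; the annual consumption expenditure is an assumed, illustrative value, and only the percentages come from the papers cited above:

```python
# Rough check of the compensating-variation figures quoted above.
# The annual consumption expenditure is an illustrative assumption for a
# "first world" consumer; only the percentage shares come from the papers.
ANNUAL_CONSUMPTION = 40_000  # USD per year, assumed for illustration

cv_shares = {
    "computers": 0.04,      # 3.8-4% of total consumption expenditure
    "the Internet": 0.268,  # 26.8%
    "electricity": 0.92,    # 92%
}

for tech, share in cv_shares.items():
    print(f"CV of {tech}: ~${share * ANNUAL_CONSUMPTION:,.0f} per year")
# computers come out around $1,600, consistent with ">$1500 per year" above
```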

Success is Easily Predictable

The discussions of why and how technologies catch on never cease, and they all have in common the premise that it is very difficult to foresee the future. I disagree with them all: it’s very easy to predict technological success. If you know exactly how.

Start with this little remark by Steven Chu, US Energy Secretary, stating the necessary conditions for the success of electric vehicles: “A rechargeable battery that can last for 5000 deep discharges, 6–7 x higher storage capacity (3.6 Mj/kg = 1000 Wh) at 3x lower price will be competitive with internal combustion engines (400–500 mile range).” First, a real exercise in honesty for a government official: I hope it meant that no subsidies were given to sub-optimal technological proposals. But more importantly, he offered some quantifiable pre-conditions for the acceptance and diffusion of an emergent technology, mixing technical variables with economic ones.
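
The arithmetic in the quote is easy to verify. A small sketch of the unit conversion (reading “Mj/kg” as MJ/kg) and of the baseline energy density implied by the “6–7 x” factor, using only Chu’s own numbers:

```python
# Check the unit conversion in Chu's target and the baseline it implies:
# 3.6 MJ/kg expressed in Wh/kg, and what "6-7x higher" says about the
# energy density of existing batteries. Only the figures from the quote
# are used; everything else is derived from them.
TARGET_MJ_PER_KG = 3.6
JOULES_PER_WH = 3600.0

target_wh_per_kg = TARGET_MJ_PER_KG * 1e6 / JOULES_PER_WH
print(f"target: {target_wh_per_kg:.0f} Wh/kg")  # 1000 Wh/kg, as quoted

for factor in (6, 7):
    print(f"implied baseline at {factor}x: ~{target_wh_per_kg / factor:.0f} Wh/kg")
# roughly 140-170 Wh/kg implied for the batteries Chu is comparing against
```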

This line of thought reminded me of some of the most brilliant annotations in Edison’s Notebooks (Notebook Nº3, pages 106–108; Notebook Nº6, pages 11–12; Notebook Nº9, pages 40–42): he combined cost considerations, aimed at reducing the amount of copper needed and the price of high-resistance filaments, with scientific reasoning based on Ohm’s and Joule’s laws, to guide his experimentation in the quest for better designs of a full electrical system, not just the light bulb.
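
A minimal sketch of that Ohm/Joule reasoning, with voltage, power, and loss figures that are my own illustrative assumptions rather than Edison’s numbers: for a fixed lamp power, a higher-resistance filament runs at a higher voltage and draws less current, and since line losses scale as I²R, far less copper is needed for the same relative loss.

```python
# Sketch of Edison's Ohm/Joule trade-off: a higher-resistance filament
# (run at a higher voltage for the same lamp power) draws less current,
# so the copper feeder can be thinner for the same relative line loss.
# All numbers below are illustrative assumptions, not Edison's figures.

RHO_COPPER = 1.7e-8    # resistivity of copper, ohm*m
LINE_LENGTH = 2 * 100  # metres of conductor (100 m out and back)
LAMP_POWER = 100.0     # watts per lamp
LOSS_FRACTION = 0.05   # accept 5% of lamp power lost in the line

def copper_cross_section(voltage):
    """Minimum conductor cross-section (m^2) for the target line loss."""
    current = LAMP_POWER / voltage                                  # I = P / V
    max_line_resistance = LOSS_FRACTION * LAMP_POWER / current**2   # loss = I^2 * R
    return RHO_COPPER * LINE_LENGTH / max_line_resistance

for volts in (55, 110, 220):
    filament_resistance = volts**2 / LAMP_POWER  # R = V^2 / P
    area_mm2 = copper_cross_section(volts) * 1e6
    print(f"{volts:>3} V: filament ~{filament_resistance:5.1f} ohm, "
          f"copper cross-section ~{area_mm2:5.2f} mm^2")
```

Doubling the voltage quarters the required copper cross-section, which is exactly the kind of joint technical and cost reasoning the notebooks record.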

It’s that easy: mix technical variables with supply-demand analysis, some micro-economics, and a lot of attention to discontinuities in the marginal propensities to consume in the face of technological change. And this is why pitches to VCs are always so wrong and boring: they pay almost no attention to key economic considerations and are full of reasoning by analogy.

Like children, always solve labyrinths by starting at the exit: so early do we learn that the end is the beginning.

Assorted Links (EconFin)

    1. Innovation Without Patents — Evidence from the World Fairs: how the propensity to patent changes over time
    2. Software Patents and the Return of Functional Claiming: Lemley’s call for the return of the 1952 Patent Act
    3. Buffett’s Alpha: betting against beta with ingenious sources of leverage
    4. R&D and the Incentives from Merger and Acquisition Activity: empirical evidence for the “small businesses are more innovative than large firms” mantra
    5. Regulation and Investment in Network Industries: Evidence from European Telecoms. Access regulation considered harmful to network investment.

On the Half-Life of the Half-Life of Facts

Reading The Half-Life of Facts has left me with mixed feelings: even if it is enjoyable for its recollection of scientometric research and its masterful presentation through countless anecdotes, the thesis reached by their interweaving, that everything we know has an expiration date, along with other grandiloquent and post-modernistic statements, is just an immense non sequitur: not only does the author intentionally leave aside mathematical proofs (notwithstanding Gödel’s incompleteness theorems), but the very definition of fact he applies is misleading and tendentious. Facts should not be directly equated with immovable truths: stripping out the context of how the facts were established, and ignoring that the different sciences offer different degrees of truth (5 sigma in particle physics, 1 sigma in sociology), is a disservice even to the author’s own purposes.

And this is not just an argument from the analytic philosophy of language, à la Wittgenstein: in a chain of reasoning, it seems obvious that conclusions cannot be separated from their premises and inference rules; that is, facts are logical consequences of a set of statements because they were deduced from them, in a proof-theoretic sense, so those statements are at least as important as the facts themselves.

Not to mention the problematic implications of applying the book to itself, recursively: what is the half-life of scientometric statements about the half-life of truths? Nullius in verba carried to the nth degree!