Monthly Archives: January 2012
Assorted Links (Cloud Computing)
- The Magellan Report, or how the cloud is not yet ready for HPC
- The Jungle of Hardware Heterogeneity, the cloud as a consequence of hitting the limits of Moore’s Law
- Unconditionally secure cloud computing: an experimental demonstration of Blind Quantum Computing
- Vanishing margins in cloud computing: the case of Netflix
How Boundaries in Software Rise and Change
There is no complete theory of which features should be included in a software product and which should be left out. It is more a question of complex technical trade-offs and product-roadmap decisions taken in response to market competition, a mixture of interests and consequences that sets the present boundaries of software products. For instance, a detailed analysis of the evolution of operating systems reveals a rich history of shifts and reversals: on the one hand, the operating system supplier must offer equal and free access to as many services as possible to the ecosystem of software developers, avoiding redundant code for basic functionality so that the platform benefits from the various network effects (two-sided, direct and indirect), while keeping in mind that bloating the kernel with too much code can drive maintenance and support costs to extraordinary levels; on the other hand, too small a kernel also reduces lock-in. The best starting point for understanding the different forces and interests that came into play is the following paper:
Another, less strategic viewpoint treats the question of the right place for a given piece of functionality (which processor ring, kernel or user mode, or whether it should be a library provided by the OS) as a matter of custom and efficiency: for example, the need to avoid the latency of user-mode calls into the kernel has always pushed more functionality into it, together with satisfying the demands of a basic security model. But these are just examples of the multiple balances that have to be struck, and what is really interesting is to search for the non-obvious trade-offs and how they have given rise to different cultures of computing. Consider the case of the graphical interface and the window system: the Unix tradition took a minimalistic approach, keeping most of it out of the kernel and even delegating it to third parties (the X.Org Foundation), while the Windows tradition opted for the opposite approach, placing the window manager and the GDI framework in kernel space; the consequences are long-lasting, even disastrous on the security front, such as the fact that any X client can dump the contents of an arbitrary window, including ones it did not create. Note also that the precise boundary between user and kernel mode shifts over time: components that were traditionally kernel-mode are being moved into user-mode processes, as with User-Mode Scheduling for threads and the User-Mode Driver Framework. But in the end, the forces shaping the design of operating systems tend to be more economic than technical in nature, as exemplified in the discussions collected in the following paper:
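As an aside, here is a minimal sketch of the X window-dump issue mentioned above, assuming the third-party python-xlib bindings and a running X session; unmapped or zero-sized windows are simply skipped, and the exact output will of course vary per session.

```python
# A minimal sketch of the X11 design consequence discussed above: an ordinary,
# unprivileged client can walk the window tree and read the pixels of windows
# it did not create. Assumes the third-party python-xlib package and a running
# X session; unmapped or zero-sized windows are simply skipped.
from Xlib import X, display, error

d = display.Display()                        # connect to the default X display
root = d.screen().root                       # the root window parents all top-level windows

for win in root.query_tree().children:       # enumerate windows owned by other clients
    try:
        geom = win.get_geometry()
        if geom.width < 2 or geom.height < 2:
            continue
        # GetImage on a window this client never created.
        img = win.get_image(0, 0, geom.width, geom.height, X.ZPixmap, 0xFFFFFFFF)
        print(win.get_wm_name(), "->", len(img.data), "bytes of pixel data")
    except error.XError:
        continue                             # e.g. BadMatch for unmapped windows
```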
Assorted Links (Data Processing)
- Extracting demand curves across 240 categories by age
- Largest quantum computation ever
- More instances in which sheer human ingenuity surpasses Moore’s law
- Google’s replacement value: a thousand man-years.
- On the hardships of proving reverse engineering through data analysis: chess edition.
Software Patents: Just Use Them Well
Holding a contrarian view can carry great benefits: there has always been a need for a mechanism that protects innovative software from reverse engineering while allowing its appropriation and exclusionary use, and that precludes imitation of its functionality, implementation details aside. Such a mechanism really exists: it is the proverbial patent, a negative right, temporary and exclusionary, protecting novel intellectual property in exchange for publishing enough information to replicate it. In some sense, it is the only open window for ordinary citizens to introduce their own personal legislation, a strong general-purpose tool, however double-edged it may be.
The head-on and very popular opposition to software patents transcends the software world: indeed, it can be found in past centuries and for other technologies. The most cited case, and therefore the one most laden with hyperbole, is the decades-long delay in the adoption of the steam engine due to the Machiavellian use of the rights conferred by one of its patents.
Regarding current practice, a detailed study of the descriptive statistics on the use of software patents shows that they have been the fastest-growing category for decades, though software companies have not been granted many of them, because the biggest filers are other industries that are similarly intensive in their use of IT capital but also have a strong record of filing for strategic patents. Note also that in the absence of strong patent rights, custom and common sense have required the use of copyright protection (which likewise does not require giving up any source code), even if it offers far weaker protection: in fact, the two are complementary, but their actual use is substitutive, because whenever one of them is weakened, the other gets used much more.
From a purely economic point of view, studies show a statistically significant increase in the stock value of companies that own software patents, and such patents turn out to be a mandatory prerequisite for entering markets in which the incumbents already own strongly interdependent patent portfolios. And contrary to general opinion and practice, their use in software start-ups is, overall, positive: they increase the number of funding rounds and their size, as well as survival rates and valuation in case of acquisition. From a strategic point of view, software patents raise barriers to entry, acting as a deterrent to the kind of competition that simply chases profits without investing in any of the sunk costs required for technological advances: in short, a 10% increase in the number of patents entails a reduction in competition of between 3 and 8 per cent. And even though their valuation is a very complex matter, the intrinsic value of software patents is higher than that of other patents.
In practice, the biggest burden in their granting and defence is the search for prior art: even under the assumption that the inventor is operating in good faith and with full cooperation, he cannot access the full prior art, because most software is developed internally within companies and never sees the light of day. This gives rise to a great number of non-innovative and trivial patents, and to others that ignore prior art on purpose, a matter that can sometimes be difficult to settle (e.g. Apple's multi-touch, Amazon's 1‑click). Fortunately, malicious uses do not fare well in the courts: the strategies of the so-called patent trolls, those who operate within the letter of the law but against its spirit and use the patent system to extract rents from others who put patents to productive use, are not successful in the long term, even though the problem carries a very high economic cost. Only their fair use brings a true premium to business valuations: that is, building a good patent portfolio without entering into practices of dubious ethics, such as filing for patents solely to cross-license with competitors and avoid paying them for their intellectual property.
The fastest way to begin learning how to write software patents is to start with this set of documents. And since real learning only happens from the best sources, there are plenty of noteworthy software patents, examples worth following for their high degree of inventiveness and the business volume they backed and helped generate:
- The first practical algorithm to solve linear programming problems.
- DSL, the technology that allowed for the cheap diffusion of broadband connectivity.
- PageRank, the famous patent that laid the foundations of Google; it also covers the method to quickly compute the rankings of indexed pages (a minimal power-iteration sketch follows this list).
- Lempel-Ziv-Welch, the well-known algorithm for lossless data compression.
- In the cryptography field, the RSA and ECC algorithms, at the core of public key cryptography.
- The beginnings of digital sound would not have been possible without the DPCM method or FM sound synthesis, and the patents of MP3 compression are dispersed across many companies.
- In the storage field, it’s curious how RAID was invented a decade before its definition as a standard.
- Regarding hardware, we should not forget the patents awarded for the transistor (Shockley and Bardeen) and for the modern magnetic storage enabled by the GMR phenomenon.
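As a small aside to the PageRank entry above, here is a minimal power-iteration sketch in Python; the toy four-page link graph and the 0.85 damping factor are illustrative choices, not taken from the patent itself.

```python
# Minimal power-iteration sketch of PageRank over a toy link graph.
# The four-page graph and the 0.85 damping factor are illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                      # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, score in sorted(pagerank(toy_graph).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```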
What I’ve Been Reading (Crypto)
- [amazon_link id=“3642143024” target=“_blank” ]Efficient Secure Two-Party Protocols[/amazon_link]. Good introduction to the paradigm and the techniques of secure computation, with an emphasis on the proving methodology. Although it doesn’t cover all the relaxations and variations generally used in the literate to get significant speed-ups, the authors really do care about the efficiency part to the point of providing empirical results to prove the feasibility of two-party secure in current computers
- [amazon_link id=“354020105X” target=“_blank” ]Composition of Secure Multi-Party Protocols[/amazon_link]. Written by the field's top contributor, it is a good survey that covers the subject in sufficient detail for a quick introduction. It is a bit old; the theoretical treatment has stood the test of time, but it lacks the newer results on the limits and impossibilities of concurrent general composition and of information-theoretically secure protocols.
- [amazon_link id=“1420070029” target=“_blank” ]Algorithmic Cryptanalysis[/amazon_link]. Forget all the previous books on cryptanalysis, with their excessive focus on classical ciphers. This is the most technical and advanced book on cryptanalysis, reviewing all the techniques with plenty of references to modern and more detailed papers. The coverage of lattice-based cryptanalysis and algorithms deserves special mention. IMHO, more C source code would be welcome in future editions.
Re-Updated List on Smartphone Security Presentations
I've been updating the list of presentations on smartphone security with 25 new links. A few noteworthy mentions are the newer exploitation techniques, new OSes and their vulnerabilities (Windows Phone), and the emphasis on research into the various mobile network protocols (femtocells, satellite, GPRS and GSM, …).
Assorted Links (Coding)
- The R Inferno: a programming manual in a very literary style
- Category Theory for the Java Programmer
- Voyager 2, debugging across planetary systems
- A Graphical Notation for the Lambda Calculus with Animated Reduction
Modelling How Technological Change Influences Economic Growth
“You can see the computer age everywhere but in the productivity statistics.” Robert Solow (1987 Nobel Prize in Economics) summed up his celebrated “productivity paradox” with this harsh remark, which in turn started a research frenzy to find counterexamples to refute it. It took more than a decade, because there is a strong interrelationship between information technologies and the human capital they are simultaneously complementing and substituting for, but in the end the claim could be refuted. Furthermore, another profound change, with much more evidence against the paradox, occurred in parallel with the wide expansion of computer technology and was also easier to measure and prove: the global spread of the digital mobile phone. To gain a better understanding of its true economic impact, there is nothing better than summarising the relevant literature on the topic.
From a purely microeconomic perspective, Jensen was able to show that the introduction of mobile phones increased the profits of northern Kerala's fishermen by a whopping 8% while reducing the final consumer price by 4%: better communications enabled access to wider markets, expanding dealing possibilities beyond those offered by the old local fish market and enhancing overall market efficiency through a stronger law of one price. From a macroeconomic point of view, Waverman used statistical and econometric techniques to isolate cause from effect, finding that an increase of 10 devices per 100 inhabitants in a developing country added 0.6 points to GDP per capita growth and 0.5 to GDP growth: these results highlight the transformative power of technology on global economic activity.
And to gain a better understanding of how technological innovations are transmitted into the economy, I've put together a stylised model in an Excel workbook offering a mechanistic explanation of how a successful general-purpose technology can impact economic growth so significantly. In the first sheet, a general Bass model is used to quantify the transition to digital mobile technology from 1996 to 2011 (capturing network effects only coarsely; they would be better modelled using Beckstrom's law). In the second sheet, using the previously computed penetration level of digital mobile technology as one of the inputs, a neoclassical economic growth model (Solow-Swan) is used to explain its economic impact: note that this was the first model to introduce technological progress as a fundamental variable explaining economic growth, treating it as a component that increases the productivity of the labour factor while also complementing capital accumulation, the latter divided into periods of decreasing value to account for technological depreciation. The only negative aspect of this model is that technological progress is assumed constant over the full period of analysis, leaving aside the possibility of a growing innovation rate or, more realistically, a decreasing one. Other variables taken into account by the model are capital depreciation, the savings rate, population growth and the relationship between capital and labour in the resulting economic output. Other technological changes could be analysed with the same Excel workbook, since they feature similar diffusion processes and economic impacts: the adoption of the car, substituting for the horse; the diffusion of electricity; or the diffusion of the computer, replacing the typewriter.
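For readers who prefer code to spreadsheets, here is a minimal Python sketch of the same two-sheet structure; the parameter values (innovation and imitation coefficients, savings rate, technology uplift, and so on) are illustrative placeholders rather than the calibration used in the workbook.

```python
# A rough, self-contained sketch of the two-sheet structure described above:
# sheet 1 is a discrete-time Bass diffusion model of digital mobile adoption,
# sheet 2 feeds the resulting penetration into a Solow-Swan growth model with
# labour-augmenting technology. All parameter values below are illustrative
# placeholders, not the calibration used in the original Excel workbook.

# --- Sheet 1: Bass diffusion, 1996-2011 ----------------------------------
P, Q, M = 0.01, 0.4, 100.0          # innovation coeff., imitation coeff., market potential (%)
years = list(range(1996, 2012))
adopters = [1.0]                    # assumed initial penetration (% of population)
for _ in years[1:]:
    n = adopters[-1]
    adopters.append(n + (P + Q * n / M) * (M - n))    # Bass difference equation

# --- Sheet 2: Solow-Swan with labour-augmenting technology ---------------
alpha, s, delta, pop_growth = 0.33, 0.22, 0.05, 0.01  # capital share, savings, depreciation, n
K, L, A0, tech_gain = 100.0, 100.0, 1.0, 0.2          # initial capital, labour, base tech, assumed uplift
for year, penetration in zip(years, adopters):
    A = A0 * (1.0 + tech_gain * penetration / M)      # technology rises with mobile penetration
    Y = K ** alpha * (A * L) ** (1 - alpha)           # Cobb-Douglas production, labour-augmenting A
    print(f"{year}: penetration {penetration:5.1f}%  output {Y:7.1f}")
    K += s * Y - delta * K                            # capital accumulation net of depreciation
    L *= 1.0 + pop_growth                             # population (labour) growth
```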
Later economic models supplement the previous one by introducing the accumulation of human capital alongside technological change, giving birth to endogenous growth theories that better explain the relationship between computer technology and economic growth: even if information technologies are mostly deployed to substitute for the labour factor, their true nature is strongly complementary to human capital, though this is harder to prove econometrically. Last but not least, the entertainment potential of computer technology means it also weighs negatively on productivity growth statistics: for example, the 5 million hours that Angry Birds is played every day should also be counterbalanced in other ways.