Monthly Archives: March 2012

Assorted Links (Economics)

    1. Cartels are also an emergent phenomenon
    2. Excellent dashboards: What kind of revenue does it take to go public? and Do Tech IPOs Always Fall?
    3. Lack of profitability limited the early distribution of the barcode scanner
    4. Customer Lifetime Value techniques: ARPU-based, cohort-based, and Bayesian-based methodologies
    5. Institutions and Technology: Law, as Much as Tech, Made Silicon Valley
    6. When Theory Matches Reality: AMD goes fabless, as it no longer spurs Intel to innovate

Towards Optimal Software Adoption and Distribution

Since the very beginning of the software industry, it has always been the same: finding the most innovative ways to lower the friction costs of software adoption is the key to success, especially in winner-takes-all markets and platform plays.

From the no-cost software bundled with the old mainframes to the freeware of the ’80s and the free-entry web applications of the ’90s, the pattern is clear: good old pamphlet-like distribution to spread software as if it were the most contagious of ideas.

Sooner or later comes the realization that the cost of learning to use some software is much higher than the cost of its licenses; or that the software is complementary to more valuable work skills; or that the expected future value of owning the network created by its users is higher than that of selling the software itself. Nevertheless, until recently little care has been given to reasoning from first principles about the tactics and strategies of software distribution for optimal adoption, so the only available information has been practitioners’ anecdotes with no verifiable statistics, let alone a corpus of testable predictions. That is why it’s refreshing to find and read about these matters from a formalized perspective:


Download (PDF, 1.23MB)

The most remarkable result of the paper concerns the very realistic scenario of software spreading at random, with limited control and visibility over who gets the demo version: an optimal strategy is offered, together with conditions under which the optimal price is not affected by the randomness of the seeding. Just being able to identify and distribute to the low-end half of the market is enough for optimal price formation, since the price will depend on the number of distributed copies and not on the seeding outcome. But with multiple prices and full control of the distribution process (think registration-required freemium web applications), the optimal strategy is to charge non-zero prices to the high-end half of the market, in deep contrast with the single-digit percentage of paying customers in real-world applications, which suggests that too much money is being left on the table.
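The invariance claim is easy to see numerically. Below is a minimal sketch in Python of a toy model of my own, not the paper’s: valuations are uniform, there are no network effects, a single price is posted, and the market size N, seeding size K and price grid are all made up. Under these assumptions, as long as the free copies are confined to the low-end half of the market, the revenue curve over high-end prices, and with it the revenue-maximizing price, comes out exactly the same for any random seeding draw.

    import numpy as np

    rng = np.random.default_rng(42)

    N = 10_000                           # market size (assumed)
    K = 2_000                            # free demo copies to seed (assumed)
    v = np.sort(rng.uniform(size=N))     # standalone valuations, low to high
    low_half = np.arange(N // 2)         # the identifiable low-end half

    def revenue_curve(seeded, prices):
        """Revenue at each candidate price, given who already owns a free copy."""
        paying = np.ones(N, dtype=bool)
        paying[seeded] = False
        return np.array([p * np.sum(v[paying] >= p) for p in prices])

    # Candidate prices at or above the cheapest high-end valuation.
    prices = np.linspace(v[N // 2], 1.0, 201)

    # Two independent random seedings, both confined to the low-end half:
    r1 = revenue_curve(rng.choice(low_half, K, replace=False), prices)
    r2 = revenue_curve(rng.choice(low_half, K, replace=False), prices)

    # No buyer at these prices was ever seeded, so the revenue curves (and
    # hence the optimal price) do not depend on the seeding outcome at all.
    assert np.array_equal(r1, r2)
    print("optimal price:", prices[r1.argmax()])

Had the seeding been spread over the whole market instead, different draws would knock out different high-valuation buyers and the curves would differ from draw to draw, which is the intuition for why being able to identify the low-end half is enough.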

Assorted Links (Theory)

    1. John Nash’s Letter to the NSA (in the same premonitory spirit as Gödel’s letter)
    2. Computing 10,000x more efficiently
    3. Superexponential long-term trends in technological progress
    4. Superb discussions on the practical feasibility of quantum computing while IBM peers into the future: Perpetual motion of the 21st century?, Flying machine of the 21st century?, Nature does not conspire and The Quantum Super-PAC
    5. Consensus Routing: the Internet as a Distributed System
    6. 30 years since the BBBW protocol, the first quantum cryptographic protocol, and 4093 patents later.

Book Recommendations

[amazon_link id=“1420075187” target=“_blank” ]Cryptanalysis of RSA and Its Variants[/amazon_link]. It’s always fascinating how even a simple set of equations can give rise to so many cryptanalytic attacks, just by looking into some corner cases: small public and private exponents, combined with the leakage of private parameters and instantiations sharing common moduli or private exponents. To prevent these attacks, variants were also invented: using the Chinese Remainder Theorem during the decryption phase; using moduli of special forms or multiple primes; and choosing primes p and q of special forms, or the dual instantiation of RSA. Had I not already read the hundreds of papers covering these topics, I would have loved to start with this book.
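Of those variants, the CRT decryption trick is simple enough to sketch in a few lines of Python; the primes below are toy values of my own choosing, useful only for illustration:

    # Textbook RSA with CRT decryption; toy parameters, not secure.
    p, q = 61, 53                        # tiny primes, illustration only
    n, e = p * q, 17                     # public key
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

    # CRT parameters, precomputed and stored alongside real RSA private keys:
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)

    def crt_decrypt(c):
        """Two exponentiations with half-size moduli instead of one mod n."""
        mp, mq = pow(c, dp, p), pow(c, dq, q)
        h = (q_inv * (mp - mq)) % p      # Garner's recombination step
        return mq + h * q

    m = 42
    c = pow(m, e, n)                     # encrypt
    assert crt_decrypt(c) == pow(c, d, n) == m

When chosen too small, private exponents like d, dp and dq are precisely what many of the attacks the book catalogs are aimed at.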

[amazon_link id=“1593273886” target=“_blank” ]The Tangled Web[/amazon_link]. The web is the biggest kludge ever: a chaotic patchwork of technologies with security added as an afterthought. Understanding the details and motivations behind each security feature is no small feat, an effort that can only be carried out by someone, like the author, battle-hardened through years of exploiting them. Reviewing the entire browser security model through its history is the only way to fully understand how things have come to be the way they are, and this is the definitive guide to how complexity quickly builds up on the security front when it has not been planned for from the beginning.