“You shall not steal; you shall not deal falsely, nor lie to one another.”
As the human mind is inscrutable to others, so its lucubrations are the purest form of property. Raziel protects your secrets from the Adversary and provides proofs against its malicious machinations: you shall be robbed neither of your data nor of your code, for they are your inalienable property.
Imagine devising a set of rules for a game such that the dominant strategy of every player is to truthfully reveal their valuations and/or strategies: this is just one of the ambitious goals of mechanism design, the science of rule-making and the most useful branch of game theory. Fifteen years ago, a pioneering paper by Nisan and Ronen (Algorithmic Mechanism Design) merged it with computer science by adding the requirement that computations should also be reasonably tractable for every player involved: this created a fruitful field of research that has drawn on every tool of algorithmics and computational complexity, from combinatorial optimization and linear programming to approximation algorithms and complexity classes.
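The textbook example of such a truth-inducing set of rules is the sealed-bid second-price (Vickrey) auction: the winner pays the second-highest bid, so shading your bid can only cost you the item, never lower your price. A minimal sketch:

```python
def vickrey_auction(bids):
    """Run a sealed-bid second-price auction.

    bids: dict mapping bidder name -> bid (assumes at least two bidders).
    Returns (winner, price), where price is the second-highest bid.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the winner pays the runner-up's bid
    return winner, price

winner, price = vickrey_auction({"alice": 10, "bob": 7, "carol": 4})
# alice wins and pays 7: bidding her true value was optimal, since any
# bid above 7 would have won at the same price of 7.
```

This is why truthful revelation is a dominant strategy: your bid only decides *whether* you win, never *how much* you pay.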
In practice, Algorithmic Mechanism Design is also behind the successes of the modern Internet economy: every ad auction uses its results, from Google’s DoubleClick auctions to Yahoo’s, and peer-to-peer networks and network protocols are being designed under its guiding principles. It has also contributed to spectrum auctions and matching markets (kidneys, school choice systems and medical positions), and it has generated interesting models, like the first one to justify the optimality of the fee-setting structure of real estate agents, stock brokers and auction houses (see Fee Setting Intermediaries).
Up until a decade ago, the only way to learn this fascinating field was to venture into papers dispersed across the areas of economics, game theory and computer science, but this changed in 2008 with the publication of the field’s basic textbook, Algorithmic Game Theory, also available online:
Now a bit dated, it has recently been complemented with some great resources:
- The Handbook of Market Design, whose part on experiments is the one I have liked the most.
- The online courses of Tim Roughgarden, a real master at choosing and presenting the best proofs of this field: Algorithmic Game Theory and Frontiers in Mechanism Design.
- The ongoing draft of a more specialised book, Mechanism Design and Approximation.
And that’s enough to begin with: hundreds of hours of insightful research with fantastic applications!
I’ve just finished these three books, published this year, on the intersection of finance and programming:
- C# for Financial Markets. This recently published book is essentially a translation into C# of the author’s previous books, especially those on the intersection between finance and C++. As such, one third of the book delves into the implementation of basic mathematical finance (bonds, futures, options, swaptions, curve and surface modelling, the finite difference method, …) and two thirds into teaching the C# language and its interoperability with Excel/LINQ/C++; note that if you’re already a pro at C#, you’d better skip these parts, since they are far from authoritative, although the sections on interoperability are really instructive. The book’s best point is that it is really full of examples to illuminate every concept (850 pages long!), although they never compose a full application of any financial worth (that’s left as an exercise to the reader!): thus, and only from this technology angle, it’s the best book for beginners.
- Financial Modelling – Theory, Implementation and Practice (MATLAB). Do you need a book to quickly acquaint yourself with the state of the art in financial mathematics for asset allocation, hedging and derivatives pricing, skipping the study of dozens of papers? Then this book is your definitive shortcut: it covers everything from the derivation of the models to their implementation in Matlab, demonstrating that this language can also be used for prototyping in the financial industry, efficiency and interoperability aside (if you don’t know Matlab, a third of the book covers it). My favourite part is the first one, on models (stochastic, jump models, multi-dimensional, copulas), which reads lightly and fast in just an afternoon; the book is, however, over-focused on the numerical implementation of the models (another third of the book), when in the real world most of these details are left to some library. Even so, just running through all its examples is worth the full price.
- Financial Risk Modelling and Portfolio Optimization with R. R is the lingua franca of statistical research, and its under-utilisation in the financial industry is a real puzzle: the sheer number of packages covering every imaginable statistical function should be enough to justify a much deeper penetration into daily use. This book is best suited for quantitative risk managers; it surveys the latest techniques for modelling and measuring financial risk, as well as portfolio optimisation techniques. Every topic follows a precisely defined structure: a brief overview, then an explanation, and finally a very interesting synopsis of R packages and their empirical applications. My favourite parts are at the end, on constructing optimal portfolios subject to risk constraints and on tactical asset allocation based on variations of the Black-Litterman approach.
Why did software component reuse never take root? Why is outsourcing such a bad practice within the software industry, even though other industries outsource all of their development? Why do big software companies perform so badly and have such a poor innovation record?
From a purely financial viewpoint, focused on profitability ratios, these and other questions are far beyond all comprehension. But paradoxically, they emerge as the result of the blind application of financial theory to the software industry:
- Reusing software looks like the shortest path to profitability: selling properly tested and documented software components over and over again was the holy grail of the software industry decades ago; in fact, that’s how the entire manufacturing industry works. But the incorporation of other costs (friction, learning, integration, …) discourages this model of development; and what really killed the trend was the rejection of whole software stacks, rooted in the elusive nature of requirements that went unaccounted for, because componentised software is not built by iterating on customer feedback from the very beginning.
- Outsourcing is the preferred way to develop software in every industry except the software industry itself: financially speaking, outsourcing improves every efficiency measure, lowers fixed costs and improves flexibility. What’s not to like? That which is difficult to measure: the cost of lost knowledge, the opportunity cost of being unable to innovate on that knowledge, and finally the cost of not surviving because the competition was better informed and readier.
- Big companies do not innovate because all their projects get measured by IRR (Internal Rate of Return): this ratio prioritises short-term, low-value projects over long-term, high-value ones. Start-ups teach us that it takes 7-10 years to reach full profitability and amortise the investment: no wonder big companies fail to innovate.
And there are many more examples: striving for a better ROCE (Return on Capital Employed) on software projects is not a good idea, since capital is not the scarce resource, qualified labour is, and economising on an abundant asset is plainly wrong. The same goes for any other return-like ratio: they will strip a software company of its most precious assets.
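The IRR bias is easy to see with two hypothetical projects (the cash flows below are made up for illustration): a quick, low-value project beats a long-gestation, high-value one on IRR, even though the latter creates far more value at any reasonable discount rate.

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection on the NPV sign change."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

quick = [-100, 60, 60]                        # short-term, low-value project
startup = [-100] + [0] * 7 + [80, 80, 80]     # pays off only in years 8-10

# quick wins on IRR, yet startup has the much higher NPV at a 5% rate,
# so an IRR-ranked portfolio systematically starves long-term innovation.
```

The numbers are arbitrary, but the pattern is general: IRR rewards getting money back fast, not creating the most value.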
No matter how hard times get: in hindsight everything is forgotten, and hope replaces every glimpse of prudent rationality. But reading some carefully selected books is the perfect antidote to recover some good old common sense:
- Debt: The First 5000 Years. A history of debt across cultures and civilisations. Although dichotomous and highly controversial in its moral judgments, it outstandingly debunks myths like the primacy of money over debt or the dual nature of debt as an instrument of commerce and finance, and it perfectly portrays the cult of personal honour as the root of the economy through the ages. You’d better skip most of the narrative and go directly to the more academic sources cited in the references.
- Manias, Panics and Crashes: A History of Financial Crises. A reference work, revered for its insights and the lasting impact of its anecdotes. Entirely literary and qualitative, it was the first to show, through carefully chosen descriptions of past debacles, that crises follow predefined patterns, though it lacks a general theory of their formation and development.
- This Time Is Different. A wonderful masterpiece of the cliometric school, born of the power of the personal computer to run hundreds of regressions: contrary to the previous book, it offers a quantitative study of financial crises across centuries and continents, a view far removed from the traditional equilibrium models of the economy. Frequentist and predictive by nature, it nonetheless ignores that crises may have roots other than the failure of the saving-to-investment mechanism it forcibly ascribes to them, even though its first 200 pages are devoted to a fully detailed taxonomy of financial crises.
As a follow-up to my previous post on software patents, here is a very interesting recent survey on the economics of patents:
Holding a contrarian view can bring great benefits: there has always been a need for a procedure that protects innovative software from reverse engineering while allowing for its appropriation and exclusive use, and precluding any imitation of its functionality, implementation details aside. This procedure exists: it’s the proverbial patent, a negative right, temporary and exclusionary, protecting novel intellectual property in exchange for publishing enough information to replicate it. In some sense, it’s the only open window for ordinary citizens to introduce their own personal legislation, a strong general-purpose tool, however double-edged it may be.
The frontal and popular opposition to software patents transcends the software world: it can be found in past centuries and for other technologies. The most cited case, and therefore the one most prone to hyperbole, is the decades-long delayed adoption of the steam engine due to the Machiavellian use of the rights conferred by one of its patents.
Regarding current practice, a detailed study of the descriptive statistics on the use of software patents shows that they have been the fastest-growing category for decades, though software companies have not been granted many of them, because the biggest applicants are other industries similarly intensive in their use of IT capital but with a strong record of filing strategic patents. Note also that in the absence of strong patent rights, custom and common sense have required the use of copyright protection (which likewise does not require giving up any source code), even though it is a far weaker protection: in fact the two are complementary, but their actual use is substitutive, because whenever one is weakened, the other gets used much more.
From a purely economic point of view, studies show a statistically significant increase in the stock value of software-patent-owning companies, and patents happen to be a mandatory prerequisite for entering markets in which the incumbents already own strongly interdependent patent portfolios. Contrary to general opinion and practice, their use in software start-ups is, overall, positive: they increase the number of funding rounds and their amounts, as well as survival and valuation in case of acquisition. From a strategic point of view, software patents raise barriers to entry, acting as a deterrent against the kind of competition that just chases profits without investing in any sunk costs in search of technological advances: in short, a 10% increase in the number of patents entails a reduction in competition of between 3 and 8 per cent. And even if their valuation is a very complex matter, the intrinsic value of software patents is higher than that of other patents.
In practice, the biggest burden in their granting and defence is the search for prior art: even assuming the inventor operates in good faith and full cooperation, he can’t access the full prior art, because most software is developed internally within companies and never sees the light of day. This gives rise to a great number of non-innovative and trivial patents, and to others that ignore the prior art on purpose, a matter that can sometimes be difficult to settle (e.g. Apple’s Multi-touch, Amazon’s 1-Click). Fortunately, malicious uses don’t fare well in the courts: the strategies of the so-called patent trolls, those who operate within the letter of the law but against its spirit, using the patent system to extract rents from others who use patents for productive purposes, are not successful in the long term, though the problem carries a very high economic cost. Only fair use brings a true premium to business valuations: that is, building a good patent portfolio without entering into ethically dubious practices like filing for patents solely to cross-license with competitors and avoid paying them for their intellectual property.
The fastest way to start learning how to write software patents is this set of documents. And since real learning only happens from the best sources, here are lots of noteworthy software patents, examples to follow for their high degree of inventiveness and the business volume they backed and helped generate:
- The first practical algorithm to solve linear programming problems.
- DSL, the technology that allowed for the cheap diffusion of broadband connectivity.
- PageRank, the famous patent that laid the foundations of Google; it also covers the method to quickly compute the rankings of indexed pages.
- Lempel-Ziv-Welch, the well-known algorithm for lossless data compression.
- In the cryptography field, the RSA and ECC algorithms, at the core of public key cryptography.
- The beginnings of digital sound would not have been possible without the DPCM method or FM sound synthesis, and the patents of MP3 compression are dispersed across many companies.
- In the storage field, it’s curious how RAID was invented a decade before its definition as a standard.
- Regarding hardware, we should not forget the patents awarded for the transistor (Shockley and Bardeen) and for modern magnetic storage, enabled by the GMR phenomenon.
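The core idea behind the PageRank patent in the list above can be sketched in a few lines (this is the textbook power-iteration formulation, not the patented implementation): a page’s rank is the stationary probability of a random surfer who follows links with probability d and teleports to a random page otherwise.

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration for PageRank.

    links: dict mapping page -> list of pages it links to.
    d: damping factor (probability of following a link).
    Returns a dict mapping page -> rank; ranks sum to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}  # teleportation mass
        for p, outs in links.items():
            if not outs:                        # dangling page: spread evenly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:                               # share rank among outlinks
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

# Toy 3-page web: "a" is linked from both "b" and "c", so it ranks highest.
ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
```

Power iteration is also the “method to quickly compute the rankings” alluded to above: each pass is just a sparse matrix-vector product, so it scales to web-sized graphs.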
As a follow-up to my previous post about mobile subsidies, it’s important to note that new IFRS financial accounting rules affecting them are under discussion (IAS 18: Revenue in Relation to Bundled Sales), though they are not expected to come into force by 2015. Traditionally, revenue is recognised month by month for the whole bundled mobile contract, the cost of the handset is expensed on the first day of the contract, and the initial subsidised payment, if any, is reported then. Under the forthcoming proposals, these subsidised contracts would effectively be unbundled and interest would be taken into consideration: a receivable for the unsubsidised fair value of the handset would be recognised on day one, and every monthly instalment would be split proportionally into two parts, a fraction settling the handset receivable with its corresponding interest income, and the rest booked as revenue for the services.
These changes will provide a much more faithful view of the real nature of the current mobile business model: handsets are not just marketing expenses but integral to the whole mobile experience, so their costs will no longer be blended with other charges, and profits and revenue will stop being misstated. On the other hand, the new approach is less prudent, and the treatment of breached mobile contracts will introduce unnecessary complexity.
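The proposed split is just annuity arithmetic. A simplified sketch with hypothetical figures (a 24-month contract at 40 EUR/month bundling a handset whose unsubsidised fair value is 400 EUR, discounted at 6% a year; all numbers are illustrative, not from any standard):

```python
rate = 0.06 / 12          # monthly discount rate
months = 24
fee = 40.0                # bundled monthly instalment, EUR
handset_fv = 400.0        # handset fair value, recognised as a receivable on day one

# The part of each instalment that amortises the handset receivable is the
# annuity payment whose present value equals the fair value; the remainder
# is recognised as service revenue month by month.
annuity_factor = (1 - (1 + rate) ** -months) / rate
handset_instalment = handset_fv / annuity_factor   # principal + interest
service_revenue = fee - handset_instalment         # booked as service revenue

# Run the receivable down month by month, booking interest income as we go.
balance = handset_fv
total_interest = 0.0
for _ in range(months):
    interest = balance * rate              # interest income for the month
    principal = handset_instalment - interest
    balance -= principal
    total_interest += interest
# After 24 instalments the receivable balance is ~0: the handset was
# recognised up front, and its financing cost surfaces as interest income.
```

The point of the exercise: under the current rules all 40 EUR would be service revenue, while under the proposal only `service_revenue` of each instalment is, with the rest settling the day-one handset receivable.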
“My flesh is worn raw by the iron collar
which my keeper forged and fastened round me.”
The wolf, laughing at him, replied:
“Then I bid farewell to this luxury of yours,
for whose sake iron would chafe my neck.”
THE WOLF, THE DOG AND THE COLLAR
The Internet and modern computer technology promised to reduce the effects of consumer myopia arising from mental calculation costs. Yet the current cost of mobile permanence agreements in Spain, calculated as the foregone value of giving up the right to switch to the best mobile provider for 18 to 24 months, ranges from 216€ to 296€.
And given that smartphone Average Selling Prices (ASPs) are around 250€, the implicit interest rate of these permanence agreements may even surpass 100% in some cases. A really astonishing figure.
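A back-of-the-envelope check with the post’s figures (a sketch, not a full financing model): the lock-in cost alone amounts to 86-118% of the handset’s value over the life of the contract, so once you annualise and remember that only the subsidised fraction of the ASP is actually being financed, triple-digit implicit rates become plausible.

```python
asp = 250.0                        # average handset selling price, EUR
lockins = {18: 216.0, 24: 296.0}   # contract months -> foregone value, EUR

# Cost of the permanence agreement per euro of handset "financed",
# treating the full ASP as the principal (a conservative assumption:
# the true subsidised principal is smaller, so true rates are higher).
cost_ratio = {m: c / asp for m, c in lockins.items()}

for m, r in sorted(cost_ratio.items()):
    print(f"{m} months: lock-in cost is {r:.0%} of the handset's value")
```

Even under this conservative reading, the 24-month contract’s lock-in cost exceeds the full price of the handset it finances.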
Therefore, and in accordance with other studies in the empirical literature on transaction costs in the e-commerce industry (Do Lower Search Costs Reduce Prices and Price Dispersion?), very high search costs, and the analysis paralysis resulting from them, also exist in the mobile telecommunications industry, and they are just as relevant today as they have always been.