A graphical summary of Capers Jones’ latest book, “The Technical and Social History of Software Engineering”, aggregating data from thousands of projects:
- Note how application size, measured in lines of code, is shrinking in direct correlation with the linear increase in the expressive power of programming languages. This observation fits well with the growing number of web/mobile applications that perform only a very limited set of functions.
- The maximum percentage of code reuse is growing very fast, thanks to the greater number of libraries and open-source software, but projects with 85% reuse are still a rarity.
- Defect removal efficiency has steadily improved, but I expected a steeper slope given static analysis and better compiler warnings.
- The percentage of personnel dedicated to maintenance has surpassed that of initial development, yet there’s little research on the success factors of this stage.
As languages improved (and their number grew, so more languages became available for specific tasks), so did programmer productivity, lowering the defect potential at the same time: this document about software engineering laws also provides another interesting outlook on the same datasets.
Imagine devising a set of rules for a game such that the dominant strategy of every player is to truthfully reveal their valuations and/or strategies: this is just one of the ambitious goals of mechanism design, the science of rule-making and the most useful branch of game theory. Fifteen years ago, a pioneering paper by Nisan and Ronen (Algorithmic Mechanism Design) merged it with computer science by adding the requirement that computations should also be reasonably tractable for every player involved: this created a fruitful field of research that has brought in every tool of algorithmics and computational complexity, from combinatorial optimization and linear programming to approximation algorithms and complexity classes.
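The canonical toy example of such a truthful mechanism is the Vickrey (second-price, sealed-bid) auction: the highest bidder wins but pays only the second-highest bid, which makes bidding one’s true valuation a dominant strategy. A minimal sketch (mine, not from any of the sources above):

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction.

    bids: dict mapping bidder name -> bid amount.
    Returns (winner, price): the highest bidder wins and pays
    the second-highest bid (0 if there is only one bidder).
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, price

# Alice values the item at 10 and bids truthfully: she wins and pays 7,
# the second-highest bid. Shading her bid below 7 would make her lose;
# bidding above 10 risks winning at a price above her valuation.
print(vickrey_auction({"alice": 10, "bob": 7, "carol": 5}))  # ('alice', 7)
```

The key design property is that the price a bidder pays does not depend on her own bid, only on the others’ bids, which is why truth-telling dominates.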
In practice, Algorithmic Mechanism Design is also behind the successes of the modern Internet economy: every ad auction uses its results, like Google’s DoubleClick auctions or Yahoo’s auctions, and peer-to-peer networks and network protocols are being designed under its guiding principles. It has contributed to spectrum auctions and matching markets (kidneys, school choice systems and medical positions), and it has also generated interesting models, like the first one to justify the optimality of the fee-setting structure of real estate agents, stock brokers and auction houses (see Fee Setting Intermediaries).
Up until a decade ago, the only way to learn this fascinating field of research was by venturing to read papers dispersed across the areas of economics, game theory and computer science, but this changed in 2008 with the publication of the field’s foundational textbook, Algorithmic Game Theory, also available online:
Now a bit dated, it has recently been complemented by some great resources:
- The Handbook of Market Design, of which the part I liked the most is the one on experiments.
- The online courses of Tim Roughgarden, a real master at choosing and presenting the best proofs in this field: Algorithmic Game Theory and Frontiers in Mechanism Design.
- The ongoing draft of a more specialized book, Mechanism Design and Approximation.
And that’s enough to begin with: hundreds of hours of insightful research with fantastic applications!
I’ve just finished these three books, published this year, at the intersection of finance and programming:
- C# for Financial Markets. This recently published book is essentially a translation into C# of all the previous books by the same author, especially those on the intersection of finance and C++. As such, one-third of the book delves into the implementation of basic mathematical finance (bonds, futures, options, swaptions, curve and surface modelling, the finite difference method, …) and two-thirds into teaching the C# language and its interoperability with Excel/LINQ/C++: note that if you’re already a C# pro, you’d better skip these parts, since they are far from an authoritative source, although the sections on interoperability are really instructive. The book’s strongest point is that it is truly full of examples illuminating every concept (850 pages long!), although they never come together into a full application of any financial worth (that’s left as an exercise for the reader!): thus, and only from this technology angle, it’s the best book for beginners.
- Financial Modelling – Theory, Implementation and Practice (MATLAB). Do you need a book to quickly acquaint yourself with the state of the art in financial mathematics for asset allocation, hedging and derivatives pricing, skipping the study of dozens of papers? Then this book is your definitive shortcut: it covers everything from the derivation of the models to their implementation in Matlab, demonstrating that this language can also be used for prototyping in the financial industry, efficiency and interoperability aside (if you don’t know Matlab, a third of the book delves into it). My favourite part of the book is the first one, on models (stochastic, jump models, multi-dimensional, copulas), which reads lightly and fast in just an afternoon; the book is, however, over-focused on the numerical implementation of the models (another third of the book), when in the real world most of these details are simply left to some library. Even so, just running through all its examples is worth the full price.
- Financial Risk Modelling and Portfolio Optimization with R. R is the lingua franca of statistical research, and its under-utilization in the financial industry is a real puzzle: the sheer number of packages covering every imaginable statistical function should be enough to justify a much deeper penetration into daily use. This book is best suited for quantitative risk managers: it surveys the latest techniques for modelling and measuring financial risk, along with portfolio optimization techniques. Every topic follows a precisely defined structure: after a brief overview, an explanation is offered, and a very interesting synopsis of R packages and their empirical applications closes the discussion. My favourite parts come at the end, on constructing optimal portfolios subject to risk constraints and on tactical asset allocation based on variations of the Black-Litterman approach.
- The Art of Project Management. Wise insights are exceptionally uncommon. Practical guidance is plentiful, but mostly inconsequential. Musings both wise and practical are rarer still: except within this book, a unique gem in a category of books otherwise notable for its mediocrity. Well thought out and balanced in its choice of themes, it covers every topic necessary to thrive in the always difficult-to-define role of project manager within a big software enterprise: for a better reading, a good exercise is to tweak every moral lesson offered in each section to the different contexts, scales and perspectives that could arise in other settings.
- One Strategy: Strategy, Planning and Decision Making. Project management is only one side of the coin. Since software is by definition so malleable, projects can get as complex as desired, with no end in sight. Their success depends on the proper alignment of multiple tasks and roles: product design and planning, development, testing and usability, among others. All of these must integrate into a single common vision, leaving no gaps, for coherence to emerge; quoting Heraclitus: “The unlike is joined together, and from differences results the most beautiful harmony”. This book recounts the strategy and roadmap of Windows 7, a titanic effort with little or no equivalent in the software industry: as technical as a success case can get, this book is a must-read to understand the current Microsoft organization.
Reading The Half-Life of Facts has left me with mixed feelings: even if enjoyable for its recollection of scientometric research and its masterful presentation through countless anecdotes, the thesis reached by their interweaving, that everything we know has an expiration date, along with other grandiloquent and post-modernistic statements, is just an immense non sequitur: not only does the author intentionally leave aside mathematical proofs (notwithstanding Gödel’s incompleteness theorems), but the very definition of fact he applies is misleading and tendentious. Facts should not be directly equated with immovable truths: stripping out the context of how the facts were established, and ignoring that the different sciences offer different degrees of truth (5 sigma in particle physics, 1 sigma in sociology), is a disservice even to the author’s own purposes.
And this is not just an argument from the analytic philosophy of language, à la Wittgenstein: in a chain of reasoning, it seems obvious that conclusions cannot be separated from their premises and inference rules; that is, facts are logical consequences of a set of statements because they were deduced from them, in a deductive-theoretic sense, so those statements are at least as important as the facts themselves.
Not to mention the problematic implications of applying the book to itself, recursively: what is the half-life of scientometric statements about the half-life of truths? Nullius in verba carried to the nth degree!
Windows Powershell for Developers. For decades, the strongest point of Unix systems has always been their scriptability, beginning with the pipe paradigm of Unix commands introduced by the command shells (Bourne, C, ksh, …) and expanded by the capabilities of Perl/Python. But that is about to change, with the quantum leap introduced by Microsoft in the next version of PowerShell: more than 2300 cmdlets, powerful remoting enabling distributed automation of tasks, Windows Workflows and access to almost every application via COM and .NET interfaces. All this, and more, will erode and leapfrog the traditional competitive advantages of Unix systems. But to really master PowerShell, it’s much better to start from the perspective of the professional developer and skip all the deficient scripting done by systems administrators. That makes this book the perfect starting point: it not only shows the tips’n’tricks of PowerShell, it also teaches by example how to extend applications via embedded scripts.
Formal Correctness of Security Protocols (Information Security and Cryptography). A theoretical and practical guide to the generation of formal proofs for security protocols using the inductive method, an ambitious enterprise of mixed results which is of primordial importance in a field of ever-growing complexity and numerous definitions of what is secure. Short and straight to the point, this book offers lots of code for the Isabelle theorem prover covering some prominent security protocols: Kerberos IV & V, Shoup-Rubin, the Abadi-Glew-Horne-Pinkas protocol for certified mail and the Zhou-Gollmann non-repudiation protocol. The best part of the book is the last chapter, in which an honest recollection of statistics shows the effort dedicated to modelling each security protocol.
Combinatorial Pattern Matching Algorithms in Computational Biology Using Perl and R. Pedagogical, practical and with tons of examples, it progresses from pseudo-code to Perl and R source code for the most common algorithms of this interdisciplinary field, in which the beauty of nature is left to be interpreted and apprehended with some basic computer data structures: sequences for DNA pattern matching; trees for phylogenetic and RNA reconstruction; and graphs for biochemical reactions and metabolic pathways. Although its lack of theorems is worrisome, it certainly fits its target audience of biologists with little exposure to formal computer science.
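To give a flavour of the simplest task in the sequence chapters, here is a minimal sketch (mine, in Python rather than the book’s Perl/R) of finding every occurrence of a DNA motif in a genome string:

```python
def find_motif(genome, motif):
    """Return the 0-based start positions of every (possibly
    overlapping) occurrence of motif in genome, by naive scanning."""
    positions = []
    for i in range(len(genome) - len(motif) + 1):
        if genome[i:i + len(motif)] == motif:
            positions.append(i)
    return positions

print(find_motif("ACGTACGTGACG", "ACG"))  # [0, 4, 9]
```

The naive scan is quadratic in the worst case; the book’s point is precisely that smarter combinatorial structures (suffix trees, automata) make the same query run in time linear in the genome length.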
No matter how hard times get: in hindsight, everything is forgotten and hope replaces every glimpse of prudent rationality. But reading some carefully selected books is the perfect antidote for getting back some good old common sense:
- Debt: The First 5000 Years. A history of debt across cultures and civilizations. Although dichotomous and highly controversial in its moral judgments, it outstandingly debunks myths like the primacy of money over debt or the dual nature of debt as an instrument of commerce and finance, and perfectly portrays the cult of personal honor as the root of the economy through the ages. You’d better skip most of the narrative and go directly to the more academic sources cited in the references.
- Manias, Panics and Crashes: A History of Financial Crises. A reference work, revered for its insights and the lasting impact of its anecdotes. Entirely literary and qualitative, it was the first to illustrate, through carefully picked descriptions of past debacles, that crises do follow predefined patterns, though it lacks a general theory of their formation and development.
- This Time Is Different. A wonderful masterpiece of the cliometric school, born of the power of the personal computer to carry out hundreds of regressions: contrary to the previous book, it offers a quantitative study of financial crises across centuries and continents, a view far from the traditional equilibrium models of the economy. Frequentist and predictive by nature, it fails by ignoring that crises may have roots other than the failure in the saving-to-investment mechanism it forcibly ascribes to them, even though the first 200 pages are dedicated to a fully detailed taxonomy of financial crises.
Cryptanalysis of RSA and Its Variants. It’s always fascinating how even a simple set of equations can give rise to so many cryptanalytic attacks, just by looking at some corner cases: small public and private exponents, combined with the leakage of private parameters and instantiations sharing common moduli or private exponents. To prevent these attacks, variants were also invented: using the Chinese Remainder Theorem during the decryption phase; using moduli of special forms or with multiple primes; and choosing primes p and q of special forms, or the dual instantiation of RSA. If I hadn’t already read the hundreds of papers covering these topics, I would have loved to start with this book.
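As a concrete illustration of the CRT decryption variant mentioned above, here is a toy sketch (with insecure textbook-sized primes, purely for illustration): instead of computing m = c^d mod n directly, one decrypts modulo p and q separately with reduced exponents and recombines, which is roughly four times faster in practice.

```python
def rsa_crt_decrypt(c, d, p, q):
    """Decrypt RSA ciphertext c using the CRT speed-up.

    d is the private exponent; p, q the secret primes of n = p*q.
    Works with the reduced exponents dp = d mod (p-1), dq = d mod (q-1),
    then recombines the two half-size results (Garner's method).
    """
    dp, dq = d % (p - 1), d % (q - 1)   # reduced private exponents
    mp = pow(c, dp, p)                  # m mod p
    mq = pow(c, dq, q)                  # m mod q
    q_inv = pow(q, -1, p)               # q^{-1} mod p  (Python 3.8+)
    h = (q_inv * (mp - mq)) % p
    return mq + h * q

# Classic textbook parameters: p=61, q=53, n=3233, e=17, d=2753.
p, q, e, d = 61, 53, 17, 2753
m = 65
c = pow(m, e, p * q)                    # encrypt
print(rsa_crt_decrypt(c, d, p, q))      # 65 — CRT decryption recovers m
```

The book’s relevant observation is that this very optimization enlarges the attack surface: leaking dp or dq, or a fault during one of the two half-exponentiations, can expose the factorization of n.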
The Tangled Web. The web is the biggest kludge ever: a chaotic patchwork of technologies with security added as an afterthought. Understanding the details and motivation behind each security feature is no small feat, an effort that can only be carried out by someone, like the author, battle-hardened from exploiting them through the years. Reviewing the entire browser security model through its history is the only way to fully understand how things have come to be the way they are, and this is the definitive guide to how complexity quickly builds up on the security front when it hasn’t been planned from the beginning.
- Efficient Secure Two-Party Protocols. A good introduction to the paradigm and techniques of secure computation, with an emphasis on the proving methodology. Although it doesn’t cover all the relaxations and variations generally used in the literature to get significant speed-ups, the authors really do care about efficiency, to the point of providing empirical results proving the feasibility of two-party secure computation on current computers.
- Composition of Secure Multi-Party Protocols. Written by the top contributor to the field, it’s a good survey that covers the subject in sufficient detail for a quick introduction. A bit old, although the theoretical treatment has survived the passage of time; it lacks the newer results on the limits and impossibilities of concurrent general composition and of information-theoretically secure protocols.
- Algorithmic Cryptanalysis. Forget all the previous books on cryptanalysis, with their excessive focus on classical ciphers. This is the most technical and advanced book on cryptanalysis, reviewing all the techniques with lots of references to modern, more detailed papers. The coverage of lattice-based cryptanalysis and algorithms deserves special mention. IMHO, much more C source code would be welcome in the next editions.
“Life can only be understood backwards, but it must be lived forwards.”
In technology, predicting the future is a risky business. Tracing parallels and contrasts between future and old technologies, in the best tradition of Odlyzko’s papers on the comparative history of technology, is the only foolproof way to reason about the future, with the sole caveat that in hindsight, everything is obvious and foreseeable. The following books are the best sources to learn about the rise of the networks of past centuries, before the Internet:
- Networks of Power: Electrification in Western Society. Through the looking-glass of Tom Hughes’ systematizing theory of complex systems, the best recollection of the battle of electrical standards (AC vs. DC), full of details on the similarities and differences of electric expansion in Germany, England and the United States due to each country’s state of affairs.
- Railroaded: The Transcontinentals and the Making of Modern America. As the first infrastructure built under capitalism, the railroads transformed the legal-economic system of their day into the one we live within now. The fact-based approach is the narrative’s greatest strength; otherwise, the book should be read with distance and perspective, to properly detach from the author’s opinions.
- Energy and the English Industrial Revolution. A masterpiece and the best short book on the Industrial Revolution, by one of the most important economic historians. You can get a taste of its content in this article written by the author himself.
All the recent news about Android and iPhone smartphones storing geo-location data without the user’s knowledge and consent is just the tip of the iceberg in the very long history of the clash between the growing functionality of mobile phones and the unawareness of their userbase, and an omen of what’s to come in the ever-increasing privacy erosion created by the digital world. The applications to uncover these hidden features are freely available (iPhoneTracker, Location Cache), and it was their very existence that propelled the public worry and interest.
Yet as Scott McNealy, CEO and co-founder of Sun, once said, “You have zero privacy anyway, get over it”: a truth well known to computer scientists but hardly understood by the general public.
I’ve also been reading the very small list of books written on mobile security, and these are my recommendations:
- Mobile Device Security: A Comprehensive Guide to Securing Your Information in a Moving World. A very high-level, non-technical overview of the new mobile paradigm for computing and communications, covering the threats, risks, scenarios, business cases, security models and policies of organizations. Technical readers will be highly disappointed.
- Mobile Application Security. A recent book covering all the topics required to master mobile application security, making it a very good compilation of all the data currently scattered across the net. It covers all the mobile operating systems, even the disappearing ones (Windows Mobile, WebOS, Symbian, Java ME), and the specific mobile technologies (Bluetooth, SMS, geolocation). An expanded chapter on enterprise security on mobile OSes would be welcome.
- Mobile Malware Attacks and Defense. A wonderful technical and historical reference on mobile malware and other mobile threats, with an emphasis on forensic techniques applied to the different mobile platforms. It shines in its comprehensiveness, listing almost every technique, malware sample and piece of software known as of its publication date. The only shortcoming is that Android is not mentioned, since the book is a bit dated.