Category Archives: mathematics


On the Education of the Market Designer

Imagine devising a set of rules for a game such that the dominant strategy of every player is to truthfully reveal their valuations and/or strategies: this is just one of the ambitious goals of mechanism design, the science of rule-making and the most useful branch of game theory. Fifteen years ago, a pioneering paper of Nisan and Ronen (Algorithmic Mechanism Design) merged it with computer science by adding the requirement that computations should also be reasonably tractable for every player involved: this created a fruitful field of research that draws on every tool of algorithmics and computational complexity, from combinatorial optimization and linear programming to approximation algorithms and complexity classes.
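To make the truthfulness idea concrete, here is a minimal Python sketch of the classic example, the Vickrey (second-price) auction; the names and numbers are my own illustration, not taken from any of the papers above:

def second_price_auction(bids):
    # Vickrey auction: the highest bidder wins but pays the second-highest bid.
    # Truthful bidding is a dominant strategy: your bid only decides
    # whether you win, never how much you pay.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Bidding your true valuation maximizes utility no matter what others bid.
print(second_price_auction({"alice": 10, "bob": 7, "carol": 3}))  # ('alice', 7)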

In practice, Algorithmic Mechanism Design is also behind the successes of the modern Internet economy: every ad auction uses its results, from Google’s DoubleClick auctions to Yahoo’s, and peer-to-peer networks and network protocols are being designed under its guiding principles. It has also contributed to spectrum auctions and matching markets (kidneys, school choice systems and medical positions), and it has generated interesting models, like the first one justifying the optimality of the fee-setting structure of real estate agents, stock brokers and auction houses (see Fee Setting Intermediaries).

Up until a decade ago, the only way to learn this fascinating field of research was by venturing to read papers dispersed among the areas of economics, game theory and computer science, but this changed in 2007 with the publication of the field’s basic textbook, [amazon_link id="0521872820" target="_blank" ]Algorithmic Game Theory[/amazon_link], also available online:

Download (PDF, Unknown)

Though now a bit dated, it has recently been complemented by some great resources:

And that’s enough to begin with: hundreds of hours of insightful research to learn, with fantastic applications!

Computer Science for the coming Information Age

Computer science is changing: the amount of data available for processing is growing exponentially, and so must the emphasis on handling it. Like the 19th-century shift in physics from mechanics to statistical mechanics, the new algorithms sacrifice the precision of a unique answer for the fast extraction of statistical properties. The following draft of a book by Hopcroft and Kannan blazes the trail for what most future algorithms manuals may look like:

Download (PDF, 1.83MB)

Heavy on proofs, the book selects many topics for their mathematical elegance, not their pragmatism. In the final version of this much-anticipated book, I would love to see more content on hash algorithms, parallel algorithms and graph spanners, and a more extensive discussion of Support Vector Machines.
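As a taste of this statistical style of algorithm (my own toy example, not from the book), here is a count-min sketch in Python: it trades exact counts for bounded-error estimates in sublinear space, one of the hashing topics I would like to see expanded:

import random

class CountMinSketch:
    # Approximate frequency counts in O(width * depth) space: estimates
    # overshoot the true count by at most eps * total items with
    # probability 1 - delta, for width ~ e/eps and depth ~ ln(1/delta).
    def __init__(self, width=272, depth=5, seed=42):
        rng = random.Random(seed)
        self.width = width
        self.table = [[0] * width for _ in range(depth)]
        self.salts = [rng.getrandbits(64) for _ in range(depth)]  # per-row hash salts

    def _hash(self, item, salt):
        return hash((salt, item)) % self.width

    def add(self, item, count=1):
        for row, salt in zip(self.table, self.salts):
            row[self._hash(item, salt)] += count

    def estimate(self, item):
        # Collisions only inflate counters, so the minimum is the best estimate.
        return min(row[self._hash(item, salt)]
                   for row, salt in zip(self.table, self.salts))

sketch = CountMinSketch()
for word in ["the", "quick", "the", "fox", "the"]:
    sketch.add(word)
print(sketch.estimate("the"))  # >= 3, almost surely exactly 3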

Assorted Links (Maths)

    1. The library of George B. Dantzig is for sale: peering over the libraries of great men is as enlightening as their work.
    2. Benford’s Law and the Art of Succeeding in Multiple Choice Tests: a fun application of Benford’s Law.
    3. Polymath8: I’ve been following this project on collaborative mathematics and the article on Wired is a good description of its course.
    4. Rota’s conjecture proved: well, almost proved; there are still some issues with inequivalent representations of matroids, but it’s almost here.

Analytic Combinatorics

The classic era of the analysis of algorithms exhibited limitations: worst-case running-time analysis and upper bounds were not enough to compare algorithms in practice, nor to predict their real performance. Although it enabled a fruitful research era in the design of algorithms, it had to be ignored by most practitioners. Neither lower bounds (Omega notation) nor orders of growth (Theta notation) were of much help: the field was in need of a profound paradigm shift.

Analytic combinatorics was the catalyst: a calculus built on the power of generating functions that yields precise quantitative predictions about large combinatorial structures, and with them predictive models of algorithm performance.
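As a taste of the method, take the standard textbook example of binary trees (classical material, not specific to this post). The symbolic method transcribes the structural definition directly into an equation for the generating function, and singularity analysis then reads off the asymptotics:

% A binary tree is empty, or a root with two subtrees:
\mathcal{B} = \mathcal{E} + \mathcal{Z} \times \mathcal{B} \times \mathcal{B}
\quad\Longrightarrow\quad B(z) = 1 + z\,B(z)^2
\quad\Longrightarrow\quad B(z) = \frac{1 - \sqrt{1 - 4z}}{2z}

% Singularity analysis at z = 1/4 yields the Catalan asymptotics:
B_n = \frac{1}{n+1}\binom{2n}{n} \sim \frac{4^n}{\sqrt{\pi\, n^{3}}}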

The book by Flajolet and Sedgewick, two key researchers who shaped the field with their fundamental theorem of symbolic combinatorics, is the best self-contained source for learning it (freely available on the web):

Download (PDF)

You may also want to receive some lectures from Sedgewick himself, a privilege once only available by going to Princeton (or now, by using the Coursera online learning platform).

Assorted Links (Math)

Some links about the uses of mathematics in everyday life:
  1. The Mathematics of RAID6: another reminder that RAID5 is considered harmful
  2. The maths that made the Voyager possible: “Using a solution to the three-body problem, a single mission, launching from Earth in 1977, could sling a spacecraft past all four planets within 12 years. Such an opportunity would not present itself again for another 176 years.”
  3. Speeding GPS calculations by shrinking data
  4. Coded-TCP: replacing packets with algebraic equations to improve wireless bandwidth in the presence of errors
  5. Voice Recognition using MATLAB: easy and fun to experiment with

Commemorating Computational Complexity

Fifty years ago, two researchers started writing this seminal paper, kicking off the research field of computational complexity, the core of computer science, which now numbers tens of thousands of publications.

Although the concepts and ideas underlying the paper were not new, as a letter by Kurt Gödel shows us, it was the foundational moment for a field that still produces deep, beautiful and practical results: in the last decade alone, Williams’ lower bound on non-uniform circuits, the Agrawal–Kayal–Saxena primality test that placed primality testing in P (although the Miller–Rabin and Solovay–Strassen primality tests still live strong, since they are much faster than AKS) and Vassilevska Williams’ improved bound on matrix multiplication.
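As an aside, Miller–Rabin is simple enough to sketch in a few lines of Python; this is a minimal illustrative version, not production-grade code:

import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin randomized primality test: a composite n survives one
    # round with probability < 1/4, so 20 rounds err below 4**-20.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:  # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime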

I see tons of start-ups and projects fail because they ignore the most basic algorithmic prudence: that only sub-logarithmic algorithms should be accessible to the mass public. This is one of the ignored maxims of the computer industry, one that can only be learned by properly interpreting the absence of market offerings refuting it (e.g. regular expressions within search engines’ queries, which could run in exponential time; or all the AI promises about solving optimization problems that never delivered).

Speaking with some researchers lately, I heard them express the hope that the coming end of Moore’s law would vindicate this field, making its results much more relevant: and although they were absolutely right that proportionally more resources will be directed towards these ends, they failed to consider that the likely crash following this event may also reduce the total, aggregated opportunities.

To Err is CAS-possible

I was aware that any CAS has too many errors to be listed in the margin of its accompanying documentation. But finding the culprit after hours of debugging, a very simple bug but with a very high impact, since other complex calculations were based on its result, is a disturbing experience. It turns out that, in Mathematica,

In[1]:= badIntegration = Integrate[1/(2 + Cos[x]), {x, 0, y}, Assumptions -> y > Pi]

Out[1]= 2 (Pi + ArcTan[Tan[y/2]/Sqrt[3]])/Sqrt[3]

is symbolically evaluated to a discontinuous function, even though it must clearly be continuous, since the integrand is everywhere continuous, finite and positive. But the Weierstrass substitution used internally when symbolically calculating integrals of rational trigonometric functions sometimes produces discontinuities. Go figure.

Lesson learned: as a safety measure, always use multiple CAS programs (SAGE, Maple, …)  to compare solutions when getting strange results.
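Another cheap safety net that would have caught this one: check the closed form against numerical quadrature at a few sample points. Here is a quick Python/SciPy check (the closed form below is my transcription of Mathematica’s buggy output):

import math
from scipy import integrate

def f(x):
    return 1.0 / (2.0 + math.cos(x))

def symbolic_F(y):
    # Mathematica's closed form, transcribed for checking.
    return 2 * (math.pi + math.atan(math.tan(y / 2) / math.sqrt(3))) / math.sqrt(3)

for y in (4.0, 8.0, 10.0, 12.0):
    numeric, _ = integrate.quad(f, 0, y)
    print(y, symbolic_F(y), numeric)  # the two disagree once y > 3*Pi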

The Need for Speed^WCapacity

Computer networks are used every day, but with a very limited understanding of the consequences of their cumulative aggregation. Network coding is the field that devises techniques for their optimal utilization, to reach the maximum possible transmission rate in a network, under the assumption that the nodes are somewhat intelligent, able to alter the network flow and not just forward it. It’s still a nascent field, so the practical impact of its results is quite limited: for example, it would be very useful to have techniques and a tool to estimate the real network capacity of a multicast/P2P network, but that is still an open problem. Fortunately, the following paper offers the first worthy approach to closing this question:

Download (PDF, 839KB)

Note that, to be resistant to common Internet attacks, network coding should be accompanied by homomorphic signatures.
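To make the “intelligent nodes” idea concrete, here is a toy Python sketch (my illustration, not from the paper) of the classic butterfly network example, where XOR-ing two packets at the bottleneck link lets both receivers obtain both packets, doubling the multicast rate over plain forwarding:

def xor(x, y):
    return bytes(p ^ q for p, q in zip(x, y))

a = b"hello"  # packet sent down the left branch
b = b"world"  # packet sent down the right branch

coded = xor(a, b)            # bottleneck node mixes flows instead of forwarding one
recovered_b = xor(a, coded)  # receiver 1 already has a, recovers b
recovered_a = xor(b, coded)  # receiver 2 already has b, recovers a
assert recovered_a == a and recovered_b == b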

(Broken) Folklore Heuristics

[Figure: the A* algorithm on a graph where nodes are cities connected by roads and h(x) is the straight-line distance to the target]

It’s funny how two of the most studied algorithms of a discipline, considered polynomially optimal for decades under a set of well-accepted assumptions, turn out not to be so great. That’s the case of the famous A* and IDA* algorithms, cornerstones of AI currently implemented in thousands of computer games: since 1968 they were believed to run in time polynomial in the search space under a set of broadly defined heuristics, even though nobody had found practical instances of those heuristics after hundreds of proposals and attempts. Their optimality is finally debunked in “How Good is Almost Perfect”: general search algorithms such as A* must necessarily explore an exponential number of search nodes even under the most optimistic assumption of estimators with heuristic error bounded by a small additive constant. What is worse, other similar search techniques cannot perform better than A*.
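For reference, the algorithm itself fits in a few lines; here is a minimal Python sketch of A* (my own, with a toy graph) using an admissible heuristic h:

import heapq

def a_star(graph, h, start, goal):
    # graph maps node -> [(neighbor, edge_cost)]; h(node) must never
    # overestimate the true remaining cost (admissibility).
    best_g = {start: 0}
    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(frontier, (g2 + h(neighbor), g2, neighbor, path + [neighbor]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(a_star(graph, lambda n: 0, "A", "C"))  # (2, ['A', 'B', 'C'])

Note that the exponential lower bound says nothing about correctness: A* still finds optimal paths, it just cannot avoid expanding exponentially many nodes in the worst case, even with nearly perfect heuristics.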

These are the kind of results that remind you to always revisit assumptions, to search more closely where others may have overlooked, no matter how well-accepted and reputable the sources, particularly with folklore wisdom, beliefs and other uncanny heuristics.

Download (PDF)

Addictive Number Theory


In 1996, just after Springer-Verlag published my books Additive Number Theory: The Classical Bases [34] and Additive Number Theory: Inverse Problems and the Geometry of Sumsets [35], I went into my local Barnes and Noble superstore on Route 22 in Springfield, New Jersey, and looked for them on the shelves. Suburban bookstores do not usually stock technical mathematical books, and, of course, the books were not there. As an experiment, I asked if they could be ordered. The person at the information desk typed in the titles, and told me that his computer search reported that the books did not exist. However, when I gave him the ISBN numbers, he did find them in the Barnes and Noble database. The problem was that the book titles had been cataloged incorrectly. The data entry person had written Addictive Number Theory.