1. Crypto-coding like a Charm, in Python!
  2. Encyclopaedia of Windows Privilege Escalation
  3. Forward Secrecy for HTTPS
  4. Things you can learn while data-mining Facebook: the world is closer than we thought, at 4.74 degrees of separation, and FB followers correlate with stock prices, though its user-base valuation fundamentals are still in question.
  5. Clayton Christensen on Disruption
  6. The Private and Social Costs of Patent Trolls

Every time an email is sent, it’s expected to be delivered to its recipient, no matter who their service provider is. But in the early days it wasn’t so simple: early commercial providers (CompuServe, Prodigy, Delphi, …) ran proprietary email services with no concept of a universal e-mail address, which created technical barriers that in turn gave rise to the commercial practice of settling charges for email delivery between service providers. In other words, there were interconnection agreements detailing the delivery charges between providers, binding the two parties to periodically settle their accounting differences, much as in other telecommunication networks (telex, fax, teletex, SMS and phone termination fees).

But the number of required bilateral agreements grew quadratically as the number of service providers expanded, and so did the technical difficulties of integrating their different email services. X.400 was born to solve these issues, with built-in support for keeping settlement scores between carriers and for multi-interface delivery (e.g. the preferredDeliveryMethod attribute). In the end, X.400 never really took off and was displaced in the early 1990s by the much simpler SMTP: not so much due to X.400’s tremendous complexity, but rather due to the decisive move of service providers to stop settling accounts between themselves, keeping X.500 only to interconnect their directory services.

As usual, it’s almost never about technology, which is better thought of as the child of necessity and will. The hassle of reaching agreements grew so high with the expanding number of service providers that their diminishing returns stopped justifying the related bargaining costs, which in turn were precluding the emergence of the essential network effects promised by the growing number of email users (as per Metcalfe’s, Beckstrom’s and Reed’s laws): that is, they were the real limiting factor blocking the growth of the early Internet. Nowadays, the only trace of these agreements survives in transit traffic agreements, themselves increasingly displaced by peering agreements.
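The network-effect laws just cited make different quantitative claims, and the gap between them is easy to see with a toy calculation. A minimal sketch (the function names are mine, not from any library): Sarnoff’s law values a broadcast network linearly in its users, Metcalfe’s counts pairwise connections, and Reed’s counts possible subgroups. Beckstrom’s law is omitted since it requires per-transaction benefit data rather than a closed formula in n.

```python
# Toy comparison of network-value laws as a function of user count n.
def sarnoff(n):
    # broadcast value: one link per user
    return n

def metcalfe(n):
    # number of distinct pairwise connections among n users
    return n * (n - 1) // 2

def reed(n):
    # number of non-trivial subgroups (size >= 2) among n users
    return 2 ** n - n - 1

for n in (2, 10, 50):
    print(n, sarnoff(n), metcalfe(n), reed(n))
```

Even at n = 50 the divergence is stark: Metcalfe’s pairwise count sits in the thousands while Reed’s subgroup count already exceeds 10^15, which is why the choice of law matters so much to user-base valuation arguments.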

To sum up, notice the circular paradox the history of email established, a curious tale of unintended consequences: free email begot spam, and spam begot the obvious proposal to start charging for email to put an end to it. Whether the trade-off was correctly solved depends on whom you ask.

  1. Measuring the E-conomy
  2. Costs of Bugs in Open-Source Software
  3. Patent Pools and the Direction of Innovation
  4. Property Rights and Parliament in Industrializing Britain
  5. Fully Homomorphic Encryption without Squashing Using Depth-3 Arithmetic Circuits

The biggest paradox of the Internet is that, being the epitome of freedom and openness, its actual implementation is even more closed than the old mainframes. And despite the fact that the whole thing has always been properly documented in RFC memoranda, the oddities and peculiarities of concrete implementations have always lain hidden within router images, even for the most important inter-domain routing protocols, the biggest concern during interoperability tests.

So it turns out that the real definition of openness is a very nuanced one: in the software world, licensing and governance are paramount, while in the networking world it is standards and interoperability that are crucial and strategic.

Fortunately, OpenFlow is opening a new window of openness in this closed world: its approach to Software-Defined Networking enables reprogrammable traffic management at Layer 2, much like MPLS does at Layer 3, but in far more heterogeneous environments. Its first version is quite bare, missing IPv6, QoS, traffic shaping and high availability; lacking a killer app, its general adoption will take time, if it happens at all. Even so, its ability to complement recent virtualization technologies in the data center, and its status as the only practical way for researchers to try out new experimental protocols, make it a key technology to watch in the coming years.
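The core idea that makes OpenFlow reprogrammable is its match/action abstraction: a switch holds a priority-ordered flow table, matches incoming packets against header fields, applies the actions of the first matching entry, and punts misses to a central controller. A minimal illustrative sketch (the class names and field names here are my own simplifications, not a real OpenFlow library or its wire format):

```python
# Toy model of an OpenFlow-style flow table: priority-ordered
# match/action entries, with table misses sent to the controller.
from dataclasses import dataclass

@dataclass
class FlowEntry:
    match: dict      # header fields to match, e.g. {"in_port": 1}
    actions: list    # actions on match, e.g. ["output:2"]
    priority: int = 0

class FlowTable:
    def __init__(self):
        self.entries = []

    def add(self, entry):
        # keep entries sorted so the highest priority wins ties
        self.entries.append(entry)
        self.entries.sort(key=lambda e: -e.priority)

    def lookup(self, packet):
        # first entry whose match fields all agree with the packet
        for e in self.entries:
            if all(packet.get(k) == v for k, v in e.match.items()):
                return e.actions
        return ["controller"]  # table miss: punt to the controller

table = FlowTable()
table.add(FlowEntry({"in_port": 1}, ["output:2"], priority=10))
print(table.lookup({"in_port": 1}))  # ['output:2']
print(table.lookup({"in_port": 3}))  # ['controller']
```

The "table miss goes to the controller" step is what moves the control plane out of the box: the controller can inspect the packet and install a new entry, which is exactly the hook researchers use to deploy experimental protocols on production hardware.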

  1. Lasers illuminate quantum security loophole
  2. A survey of venture capital research
  3. How much does a startup cost?
  4. Benford’s Law and the Decreasing Reliability of Accounting Data for US firms
  5. Nobel for Sargent and Sims

Quis custodiet ipsos custodes?

Juvenal (Satire VI, lines 347–348)

It’s been a while since I lectured on the formal construction of Java compilers and virtual machines. But nowadays all the action has moved to an even more basic layer of the software infrastructure: the hypervisor, the core of the virtual machine monitor. It’s just déjà vu all over again, an instance of the eternal recurrence of the software world, since the first virtual machines originated at the operating-system level (IBM System/360 Model 67). Even so, verification remains a key step to cut many Gordian knots rooted in the same soil as Thompson’s Reflections on Trusting Trust: from concurrency verification in Hyper-V to the more advanced Typed Assembly Language and Proof-Carrying Code techniques.
