It’s all over the news: a vulnerability has been found in OpenSSL that leaks memory contents on servers and clients. Named Heartbleed, it has a very simple patch, and some informative posts have already been written about it (Troy Hunt, Matthew Green).

What nobody is saying is that the real root cause is the lack of modern memory management in the C language: OpenSSL added a wrapper around malloc() to manage memory in a supposedly more secure and efficient way, effectively bypassing improvements that have been made in this area over the past decade; specifically, it tries to improve the reuse of allocated memory by avoiding calls to free(). Now enter Heartbleed: through a very simple bug (intentional or not), an attacker is able to retrieve chosen memory areas. What was the real use of that layer?
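To make the class of bug concrete, here is a minimal sketch in C. This is not OpenSSL's actual code: the function names, signatures, and the 4-byte "record" are illustrative. The point is that the handler trusts the length field inside the attacker's own message instead of the real size of the received record.

```c
/* Toy sketch of the Heartbleed class of bug, NOT OpenSSL's actual code:
 * the handler copies 'claimed_len' bytes because the attacker said so,
 * ignoring how many bytes were actually received. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Buggy: copies claimed_len bytes even if the record is shorter,
 * leaking whatever heap memory happens to follow the payload. */
size_t heartbeat_buggy(const uint8_t *payload, uint16_t claimed_len,
                       size_t record_len, uint8_t *reply)
{
    (void)record_len;               /* the bug: real length is ignored */
    memcpy(reply, payload, claimed_len);
    return claimed_len;
}

/* Fixed: refuse any request whose claimed length exceeds the record. */
size_t heartbeat_fixed(const uint8_t *payload, uint16_t claimed_len,
                       size_t record_len, uint8_t *reply)
{
    if (claimed_len > record_len)
        return 0;                   /* drop the malformed request */
    memcpy(reply, payload, claimed_len);
    return claimed_len;
}
```

The actual patch is exactly this one-line shape: a bounds check comparing the claimed payload length against the received record length before replying.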

Face it: it’s a no-win situation. No matter how these layers are written, there will always be a chance for error. You can’t have secure code in C.

But rewriting or throwing away thousands of security-related programs written in C is a non-starter: the only way to run these programs securely is with the help of memory-debugging techniques, like those used by Insure++ or Rational Purify. For example, the following technical report contains a detailed analysis of some of the techniques that prevent this kind of vulnerability:

Download (PDF, 1.99MB)
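To give a flavor of what such tools do, here is a toy sketch of one of the oldest techniques in that family, red-zone canaries around allocations. It is a simplification under stated assumptions: real tools like Purify instrument reads as well and track every access, while this sketch only detects corrupted boundaries at free() time; the constants and function names are mine.

```c
/* Toy red-zone allocator in the spirit of memory debuggers like Purify:
 * pad every allocation with canary bytes and check them on free to
 * detect out-of-bounds writes. Illustrative only. */
#include <stdlib.h>
#include <string.h>

#define CANARY  0xAA
#define REDZONE 16              /* bytes of padding on each side */

void *guarded_malloc(size_t n)
{
    unsigned char *raw = malloc(n + 2 * REDZONE + sizeof(size_t));
    if (!raw) return NULL;
    memcpy(raw, &n, sizeof(size_t));              /* stash the size */
    memset(raw + sizeof(size_t), CANARY, REDZONE);           /* front */
    memset(raw + sizeof(size_t) + REDZONE + n, CANARY, REDZONE); /* back */
    return raw + sizeof(size_t) + REDZONE;
}

/* Returns 0 if both red zones are intact, -1 if they were overwritten. */
int guarded_free(void *p)
{
    unsigned char *user = p;
    unsigned char *raw = user - REDZONE - sizeof(size_t);
    size_t n, i;
    int ok = 1;
    memcpy(&n, raw, sizeof(size_t));
    for (i = 0; i < REDZONE; i++)
        ok &= (raw[sizeof(size_t) + i] == CANARY) &&
              (user[n + i] == CANARY);
    free(raw);
    return ok ? 0 : -1;
}
```

Note the irony: this is yet another wrapper around malloc(), and the argument above applies to it too; the difference is that it adds checks instead of removing them.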


After Edward Snowden’s revelations had been publicly discussed for months, I can only ask: did nobody see this coming? There is a clear precedent: the NSA’s Project SHAMROCK collected all telegraphic data entering or exiting the United States from 1945 to 1975. And more recently, the gargantuan scale of the Great Firewall of China was an omen of what more advanced countries could build to spy on their citizens.

Privacy will be gradually eroded: it is a weak private right and a strong public wrong whenever it faces a presumption of illegality, and prosecution laws are already being modified to be more accepting of evidence gathered through warrantless surveillance programs, especially in common-law countries where police forces have ample power to interpret the law. And if in the future all surveillance is perfectly automated, with almost no human intervention, who would really care about an invasion of privacy carried out by amoral and unsentient machines?

What I find really fascinating is that Snowden’s revelations haven’t brought us any advanced technology: it’s almost as if the NSA didn’t have any real technological edge over current commercial technologies, which I don’t really buy. Meanwhile, the private sector develops and markets technologies like PredPol for real-time crime prediction: an excellent predictor of what’s to come.

  1. Turing award goes to Goldwasser & Micali: zero-knowledge, semantic security and universal forgery under chosen-message attack.
  2. The DDoS That Almost Broke the Internet: attacking whole Internet eXchanges just to bring down SpamHaus.
  3. Cyber-security: The digital arms trade: price quotes for exploits, The Economist’s style.
  4. NSA Cryptologs: was this really classified? Many opinions and little to no noteworthy facts.
  5. RC4 as used in TLS/SSL is broken: see page 306. More here.
  1. Security Engineering – The Book: Ross Anderson’s updated masterpiece, free again!
  2. The Fundamental Goal of “Provable Security”: Dan Bernstein’s insights on the whole provable security trend.
  3. Lucky Thirteen – Breaking the TLS and DTLS Record Protocols: yet another breach in the wall of the messy SSL/TLS protocol suite.
  4. Inception: a cheaper way to obtain passwords from RAM than Elcomsoft Forensic Disk Decryptor.
  5. 7 Codes You’ll Never Ever Break: classic ciphers that never die.

The black arts of reverse engineering network protocols have been lost. These days, every network protocol seems to be run over HTTP and handling lots of XML: every network engineer of the past decades would just cringe at the thought of it.

Complete specifications of network protocols like those offered in RFCs have always been luxuries: the product of idealistic minds of the past like Jon Postel, they only exist for the better-known protocols of the Internet. For the rest, the details can only be learned by reverse engineering: and the truth is that it requires a deep understanding of traditional software debugging, using tools like IDA and/or OllyDbg, especially for protocols of the binary kind.

Consider the case of Skype: a recent decompilation of its binaries using Hex-Rays was publicly sold as a reverse engineering of the whole protocol suite. Nothing could be further from the truth.

Providing yourself with a kit of the best tools is the best path to success:

  • Sniffers are boring, read-only tools to see through the network layers. More fun can be had by crafting network packets, a task recently simplified by tools like Ostinato and scapy.
  • Another set of tools focuses on decoding text-like protocols: reverx (paper), and the impressive netzob.
  • And the most interesting ones are tools that cross over between debuggers and sniffers: oSpy, a utility to sniff network application calls, and windbgshark, an extension that integrates wireshark within windbg to manipulate virtual machine network traffic.
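The kind of byte-level drudgery those packet-crafting tools automate is worth seeing once by hand. Below is a sketch, in C, of building a raw IPv4 header and its standard Internet checksum (one's-complement sum of 16-bit words); the field values and addresses are illustrative, and real use would also require a raw socket and privileges, which are omitted here.

```c
/* Hand-crafting a 20-byte IPv4 header, the low-level work that tools
 * like Ostinato and scapy hide. Values are illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Standard Internet checksum: one's-complement sum of 16-bit words. */
uint16_t ip_checksum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    size_t i;
    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)buf[i] << 8 | buf[i + 1];
    if (len & 1)
        sum += (uint32_t)buf[len - 1] << 8;     /* pad odd tail byte */
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);     /* fold the carries  */
    return (uint16_t)~sum;
}

/* Fill a minimal IPv4 header (no options) for a UDP datagram. */
void craft_ipv4(uint8_t hdr[20], uint32_t src, uint32_t dst,
                uint16_t total_len)
{
    uint16_t ck;
    int i;
    memset(hdr, 0, 20);
    hdr[0] = 0x45;                  /* version 4, IHL 5 (20 bytes) */
    hdr[2] = total_len >> 8;        /* total length, big-endian    */
    hdr[3] = total_len & 0xff;
    hdr[8] = 64;                    /* TTL                         */
    hdr[9] = 17;                    /* protocol: UDP               */
    for (i = 0; i < 4; i++) {
        hdr[12 + i] = src >> (24 - 8 * i);      /* source address  */
        hdr[16 + i] = dst >> (24 - 8 * i);      /* destination     */
    }
    ck = ip_checksum(hdr, 20);      /* checksum field was zeroed   */
    hdr[10] = ck >> 8;
    hdr[11] = ck & 0xff;
}
```

A handy property for verification: recomputing the checksum over a header that already contains its correct checksum yields zero, which is exactly how receivers validate it.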

It’s said that in computer science there’s only one sure way to find a research topic to write papers about: just add automatic to any problem statement, and a whole area of research is born (a.k.a. the meta-folk theorem of CS research). Most of the time the problem is obviously undecidable and a huge effort is needed to produce tools of real practical value, but this doesn’t seem to stop researchers from producing interesting proofs of concept. Reverse engineering being such a painstaking manual process, it’s a perfect target for this way of producing research, and very different methods and approaches have been tested: the Smith-Waterman and Needleman-Wunsch alignment algorithms from bioinformatics, with a recent open-source implementation combined with statistical techniques; automata algorithms to infer transitions between states; static binary analysis; and runtime analysis of binaries, because access to the runtime call stack is very convenient when using distributed computing contexts. Finally, a very interesting project was Discoverer, from Microsoft Research: they announced very high success rates for very complex protocols (RPC, CIFS/SMB), but the tools were never released:
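The bioinformatics angle mentioned above is easy to demonstrate. Below is a minimal Needleman-Wunsch global alignment in C: align two captured messages byte by byte, and the positions that match across many samples hint at fixed protocol fields, while gaps hint at variable-length ones. The scoring values (+1 match, -1 mismatch, -1 gap) are the textbook defaults, not those of any particular tool.

```c
/* Minimal Needleman-Wunsch global alignment score over two byte
 * sequences: the dynamic-programming core that protocol-inference
 * tools borrow from bioinformatics. Scores are textbook defaults. */
#include <limits.h>

#define MAXLEN 64   /* toy bound so the DP table fits on the stack */

static int max3(int a, int b, int c)
{
    int m = a > b ? a : b;
    return m > c ? m : c;
}

/* Returns the optimal global alignment score: match +1, mismatch -1,
 * gap -1. Fills an (la+1) x (lb+1) table row by row. */
int nw_score(const unsigned char *a, int la, const unsigned char *b, int lb)
{
    int dp[MAXLEN + 1][MAXLEN + 1];
    int i, j;
    for (i = 0; i <= la; i++) dp[i][0] = -i;    /* all-gap prefixes */
    for (j = 0; j <= lb; j++) dp[0][j] = -j;
    for (i = 1; i <= la; i++)
        for (j = 1; j <= lb; j++) {
            int s = (a[i - 1] == b[j - 1]) ? 1 : -1;
            dp[i][j] = max3(dp[i - 1][j - 1] + s,   /* (mis)match */
                            dp[i - 1][j] - 1,       /* gap in b   */
                            dp[i][j - 1] - 1);      /* gap in a   */
        }
    return dp[la][lb];
}
```

A full tool would keep the traceback to recover the alignment itself, then cluster messages by score before inferring field boundaries; the score alone already measures how "same-format" two messages are.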

Download (PDF, 19KB)

This post would not be complete without a mention of the best inspiration for every reverse engineer in the network field: SAMBA, the magnum opus of Andrew Tridgell, an open-source interoperability suite that lets Linux and Windows computers talk to each other. A book about the protocol and the project, Implementing CIFS, is as good as this kind of book can get: he makes it look so easy, even a child could do it.


One of the most important protocol switchovers was carried out 30 years ago: the ARPANET stopped using NCP (Network Control Protocol) and moved to TCP/IP alone, as the righteous Jon Postel devised in The General Plan. NCP was a fully connection-oriented protocol, more like the X.25 suite, designed to ensure reliability on a hop-by-hop basis: the switches in the middle of the network had to keep track of packets, unlike in the connectionless TCP/IP, where error correction and flow control are handled at the edges of the network. That is, intelligence moved to the border of the network, and packets of the same connection could be passed between separate networks with different configurations. Arguably, the release of an open-source protocol stack implementation under a permissive license (4.2BSD) was a key component of its success: code is always a better description than any protocol specification.

Yet TCP/IP was still incomplete: after the 1983 switchover, many computers started connecting to the ARPANET, and bottlenecks due to congestion were common. Van Jacobson devised the Tahoe and Reno congestion-avoidance algorithms to slow down data transfers and stop flooding the network with packets: they were quickly implemented in the TCP/IP stacks of the day, saving the Net to this day.
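The core of Jacobson's idea fits in a few lines. Below is a toy per-RTT sketch of Tahoe-style behavior in C, under simplifying assumptions: the window is counted in whole segments, one step per round trip, and real stacks grow the window per ACK rather than per RTT; the initial ssthresh of 64 segments is arbitrary.

```c
/* Toy per-RTT model of TCP Tahoe's congestion window: exponential
 * slow start up to ssthresh, then linear congestion avoidance; on
 * loss, ssthresh halves and the window restarts at one segment.
 * Units are whole segments; real stacks work per-ACK, in bytes. */
struct tahoe { int cwnd, ssthresh; };

void tahoe_init(struct tahoe *t)
{
    t->cwnd = 1;
    t->ssthresh = 64;   /* arbitrary initial threshold, in segments */
}

/* One RTT of successful ACKs: double below ssthresh, else add one. */
void tahoe_ack(struct tahoe *t)
{
    if (t->cwnd < t->ssthresh)
        t->cwnd *= 2;               /* slow start           */
    else
        t->cwnd += 1;               /* congestion avoidance */
}

/* Packet loss detected: remember half the window, back off hard. */
void tahoe_loss(struct tahoe *t)
{
    t->ssthresh = t->cwnd / 2;
    if (t->ssthresh < 2)
        t->ssthresh = 2;
    t->cwnd = 1;
}
```

The asymmetry is the whole point: the window grows cautiously (doubling, then additively) but collapses instantly on loss, so a congested router sees its load drop fast while uncongested paths still ramp up quickly.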

These changes were necessary, as they allowed the Internet to grow to a global scale. Another set of changes just as profound is now being discussed in the Secure Inter-Domain Routing mailing list: this time the culprit is the insecurity of BGP, as route announcements are not authenticated, and the penance is enforcing a PKI onto the currently distributed, decentralized and autonomous Internet routing system. Technical architectures force a predetermined model of control and governance, and this departure from the previously agreed customs and conventions of the Internet may simply be a bridge too far, as always, in the name of security. The current proposals may even impact the Internet’s scalability, since the size of the required Resource Public Key Infrastructure may be too large for routers to handle, as the following paper from Verisign shows:

Download (PDF, 189KB)

On the other hand, this recent analysis shows that the security design of S-BGP is of very high quality, a rare thing in the networking field, indeed:


Download (PDF, 909KB)


I’ve just updated the list of presentations on mobile security:

  1. The most dangerous code in the world: validating SSL certificates in non-browser software. Yet another round of broken implementations of the SSL protocol.
  2. Cross-VM Side Channels and Their Use to Extract Private Keys: first practical proof that we should not run SSL servers or any cryptographic software in a public cloud.
  3. Short keys used on DKIM: the strange case of the race to use the shortest RSA keys.
  4. How to Garble RAM Programs: Yao’s garbled circuits may turn out to be practical.
  5. Apache Accumulo: NSA’s secure BigTable.

Given that the antivirus market is a market for lemons (quality is difficult to ascertain, and the asymmetry of knowledge between buyer and seller is so high that it produces a broken market), and the software supply is so over-saturated that the market is full of free products and even an open-source alternative (ClamAV), I wonder when the signature collection and production processes will be shared between vendors. There has already been news about companies plagiarizing each other’s closed databases, so the need is already there. And there are precedents in other technology areas (patent pools, mobile network sharing) and even within the computer security space (DNS blacklists like Spamhaus), so maybe it’s just a matter of time: when the PC market stops growing, companies will resolve to pool their signature databases into a common entity and concentrate on more specialized aspects of virus detection (heuristics for *-morphic viruses).
