It’s all over the news: a vulnerability has been found in OpenSSL that leaks memory contents on servers and clients. Named Heartbleed, it has a very simple patch, and some informative posts have already been written about it (Troy Hunt, Matthew Green).
What nobody is saying is that the real root cause is the lack of modern memory management in the C language: OpenSSL added a wrapper around malloc() to manage memory in what was meant to be a more secure and efficient way, effectively bypassing a decade of improvements in system allocators; specifically, it tries to improve the reuse of allocated memory by avoiding calls to free(). Now enter Heartbleed: through a very simple bug (intentional or not), an attacker is able to retrieve chosen memory areas. What good was that layer, then?
Face it: it’s a no-win situation. No matter how many ways these layers are written, there will always be a chance for error. You can’t have secure code in C.
But rewriting and/or throwing away thousands of security-related programs written in C is a non-starter: the only realistic way to run these programs securely is with the help of memory-debugging techniques, like those used by Insure++ or Rational Purify. For example, the following technical report contains a detailed analysis of some of the techniques that prevent this kind of vulnerability:
After Edward Snowden’s revelations had been publicly discussed for months, I can only ask: did nobody see this coming? Because there is a clear precedent: NSA’s Project SHAMROCK collected all telegraphic data entering or exiting the United States from 1945 to 1975. And more recently, the gargantuan scale of the Great Firewall of China was an omen of what more advanced countries could build to spy on their citizens.
Privacy will be gradually eroded: it is a weak private right but a strong public wrong whenever there is a presumption of illegality, and prosecution laws are already being modified to be more accepting of evidence gathered through warrantless surveillance programs, especially in common-law countries where police forces have ample power to interpret the law. And if in the future all surveillance is perfectly automated with almost no human intervention, who will really care about invasions of privacy carried out by amoral, unsentient machines?
What I really find fascinating is that Snowden’s revelations haven’t brought us any advanced technology: it’s almost as if the NSA had no real technological edge over current commercial offerings, which I don’t really buy. Meanwhile, the private sector develops and markets technologies like PredPol for real-time crime prediction: an excellent predictor of what’s to come.
The black arts of reverse engineering network protocols have been lost. These days, every network protocol seems to run over HTTP and handle lots of XML: every network engineer of past decades would cringe at the thought of it.
Complete specifications of network protocols like those offered in RFCs have always been luxuries: the product of idealistic minds of the past like Jon Postel, they exist only for the better-known protocols of the Internet. For the rest, the details can only be learned by reverse engineering; and the truth is that this requires a deep understanding of traditional software debugging, using tools like IDA and/or OllyDbg, especially for protocols of the binary kind.
Equipping yourself with a kit of the best tools is the surest path to success:
- Sniffers are boring, read-only tools for seeing through the network layers. More fun can be had by crafting network packets, a task recently simplified by tools like Ostinato and scapy
- Another set of tools focuses on decoding text-like protocols: reverx (paper), and the impressive netzob
- And the most interesting ones, tools that are cross-overs between debuggers and sniffers: oSpy, a utility to sniff network application calls, and windbgshark, an extension that integrates wireshark with windbg to manipulate virtual-machine network traffic
It’s said that in computer science there’s only one sure way to find a research topic to write papers about: just add “automatic” to any problem statement, and a whole area of research is born (aka the meta-folk theorem of CS research). Most of the time the resulting problem is obviously undecidable and a huge effort is needed to produce tools of real practical value, but this doesn’t seem to stop researchers from producing interesting proofs of concept. Reverse engineering being such a painstaking manual process, it’s a perfect target for this way of producing research, and very different methods and approaches have been tested: the Smith-Waterman and Needleman-Wunsch alignment algorithms from bioinformatics, with a recent open-source implementation that combines them with statistical techniques; automata algorithms to infer transitions between states; and both static and runtime analysis of binaries, since access to the runtime call stack is very convenient in distributed-computing contexts. Finally, a very interesting project was Discoverer from Microsoft Research: its authors announced very high success rates for very complex protocols (RPC, CIFS/SMB), but the tools were never released.
This post would not be complete without a mention of the best inspiration for every reverse engineer in the network field: SAMBA, the magnum opus of Andrew Tridgell, an open-source interoperability suite that lets Linux and Windows computers talk to each other. A book about the protocol and the project, Implementing CIFS, is as good as any popularization can get: he makes it look so easy, even a child could do it.
One of the most important protocol switchovers was carried out 30 years ago: the ARPANET stopped using NCP (Network Control Protocol) and moved exclusively to TCP/IP, as the righteous Jon Postel devised in The General Plan. NCP was a fully connection-oriented protocol, more like the X.25 suite, designed to ensure reliability on a hop-by-hop basis. The switches in the middle of the network had to keep track of packets, unlike the connectionless TCP/IP, where error correction and flow control are handled at the edges of the network. That is, intelligence moved to the border of the network, and packets of the same connection could be passed between separate networks with different configurations. Arguably, the release of an open-source protocol stack implementation under a permissive license (4.2BSD) was a key component of its success: code is always a better description than any protocol specification.
Yet TCP/IP was still incomplete: after the 1983 switchover, many computers started connecting to the ARPANET, and bottlenecks due to congestion were common. Van Jacobson devised the Tahoe and Reno congestion-avoidance algorithms to throttle data transfers and stop flooding the network with packets: they were quickly implemented in the TCP/IP stacks of the day, saving the Net to this day.
These changes were necessary, as they allowed the Internet to grow on a global scale. Another set of changes just as profound is now being discussed on the Secure Inter-Domain Routing mailing list: this time the culprit is the insecurity of BGP, as route announcements are not authenticated, and the penance is imposing a PKI on the currently distributed, decentralized and autonomous Internet routing system. Technical architectures force a predetermined model of control and governance, and this departure from the previously agreed customs and conventions of the Internet may simply be a bridge too far, as always, in the name of security. And the current proposals may even impact the Internet’s scalability, since the size of the required Resource Public Key Infrastructure may be too large for routers to handle, as the following paper from Verisign shows:
On the other hand, this recent analysis shows that the security design of S-BGP is of very high quality, a rare thing in the networking field indeed:
I’ve just updated the list of presentations on mobile security:
- Why Eve and Mallory Love Android: An Analysis of Android SSL (In)Security. SSL is hard for developers, mobile or not.
- Hacking Femtocell, Immature Femtocells and Security challenges for Femtocell communication architecture
- Don’t Trust Satellite Phone – an Analysis of the GMR-1 and GMR-2 Standards. Not even satellite phones are safe!
- Introducing the Smartphone Pentesting Framework. Very useful, albeit basic, set of pentesting tools.
- APK Infection on Android. Easy virii for Android install files.
- NFC for Free Rides and Rooms. How to UltraReset the transit cards.
- iOS 6 Security.
- Android Forensic Deep Dive.
- Probing Mobile Operator Networks. What would you find by network scanning the mobile telcos?
- Binary Instrumentation Framework for Android. Binary instrumentation for NFC/RFID tag reading.
- Bypassing the Android Permission Model
- Evolution of iPhone Baseband and Unlocks
- Into the Droid: Gaining Access to Android User Data
- How many bricks does it take to crack a microcell?
- iOS Kernel Heap Armageddon Revisited
- Windows Phone 7 Internals and Exploitability
- The Heavy Metal That Poisoned the Droid. Reduce the attack surface of Android applications.
- Android Reverse Engineering Tools
- Why Telcos Keep Getting Hacked. Interesting research on the history of telco security.
Given that the antivirus market is a market for lemons (quality is difficult to ascertain, and the asymmetry of knowledge between buyer and seller is so high that it produces a broken market), and the software supply is so over-saturated that the market is full of free products and even an open-source alternative (ClamAV), I wonder when the signature collection and production processes will be shared between vendors. There has already been news of companies plagiarizing each other’s closed databases, so the need is already there. And there are precedents in other technology areas (patent pools, mobile network sharing) and even examples within the computer security space (DNS blacklists like Spamhaus), so maybe it’s just a matter of time: when the PC market stops growing, companies will resolve to pool their signature databases into a common entity and concentrate on more specialized aspects of virus detection (heuristics for *morphic viruses).