Category Archives: computer security
Assorted Links (Computer Security)
- The most dangerous code in the world: validating SSL certificates in non-browser software. Yet another round of broken implementations of the SSL protocol.
- Cross-VM Side Channels and Their Use to Extract Private Keys: the first practical demonstration that we should not run SSL servers, or any cryptographic software, in a public cloud.
- Short keys used in DKIM: the strange case of the race to use the shortest RSA keys.
- How to Garble RAM Programs: Yao’s garbled circuits may turn out to be practical.
- Apache Accumulo: NSA’s secure BigTable.
Sharing Databases of Virus Signatures
Given that the antivirus market is a market for lemons (quality is difficult to ascertain and the asymmetry of knowledge between buyer and seller is so high that it produces a broken market), and that the supply of software is so over-saturated that it is full of free products and even an open-source alternative (ClamAV), I wonder when the signature collection and production processes will be shared between vendors. There has already been news about companies plagiarizing each other's closed databases, so the need is clearly there. And there are precedents in other technology areas (patent pools, mobile network sharing) and even examples within the computer security space (DNS blacklists like Spamhaus), so maybe it's just a matter of time: when the PC market stops growing, companies will resolve to pool their signature databases into a common entity and concentrate on more specialized aspects of virus detection (heuristics for *morphic viruses).
Erasing David
“The right to be let alone
is indeed
the beginning of all Freedom”
Justice William O. Douglas
One of the many curiosities about privacy is that it left no trace in old laws, a fact that could be wrongly interpreted as if it were some improper outgrowth of modern times rather than one of the fundamental human rights. The confusion dissolves once it is noted that it was none other than the proliferation of modern mass media, by the late 19th century, that propelled the claim for its legal recognition, a quest with very limited success: that is, the need for privacy is a response to technological wrongs, and it keeps growing with every new media-related technology.
As evidence, it was the visionary paper “The Right to Privacy” by Louis D. Brandeis (later a Justice) and Samuel D. Warren, published in the Harvard Law Review in 1890, that started the doctrine of the invasion of privacy and largely settled its current definition: unsurprisingly, it was written as a reaction to a new technology, the photographic camera.
In modern times, the ever-falling costs of computer storage and sensors make it affordable to record the full life of a human being, as the MyLifeBits experiment at Microsoft Research showed: but in that case every piece of information is registered under informed consent, and it is not as correlated with other people's information as the data found in social-network databases. Its implications are of a more socio-psychological significance, as it strives to redefine human memory, so frail and self-deceiving.
The other side of the coin emerges whenever the tons of unknowingly collected data are used with malicious intent, as shown in the following documentary, Erasing David: escaping from the past has become as difficult as escaping from the piles of data accumulated by governments and private companies alike.
Anonymity, as a good, is getting scarcer by the moment and, as such, much pricier to achieve. Disappearing, vanishing without a trace, is the luxury item of our times.
Assorted Links (Governments&Security)
- Epic Catch-22, brought to you by the NSA: It Would Violate Your Privacy to Say if We Spied on You
- Ground Mobile Radio, the Software-Defined Radio that will never see the light of day
- Flame and Stuxnet, the confessed work of the United States of America
- Advanced crypto-research just in gov-malware
- Solomon’s Knot: Law, Freedom and Development
Assorted Links (Comp. Security)
- German Federal Government intelligence agencies can decrypt PGP (German)
- Breakthrough silicon scanning discovers backdoor in military chip and Rutkowska’s essay on Trusting Hardware
- A closer look into the RSA SecureID software token
- Off-Path TCP Sequence Number Inference Attack
- Fixing SSL: the Trustworthy Internet Movement
- Alan Turing’s Wartime Research Papers: Statistics of Repetitions and On the Applications of Probability to Cryptography
The New, Unknown Cryptography
The first electronic and programmable computer, Colossus, was created to break the Lorenz cipher as implemented by the German Lorenz SZ40/42 machines (not Enigma). Since then, the exponential growth in the computational performance of integrated circuits has fueled a cryptographic arms race in which safer encryption methods are conceived to protect information from the most recent and powerful cryptanalytic attacks. This competition with no end in sight was the key driver behind the development of cryptography as an academic discipline in the 1970s, a turning point that left behind methods resembling those of pre-scientific periods: the dawn of the modern era of cryptography saw the invention of the well-known Diffie-Hellman-Merkle and Rivest-Shamir-Adleman algorithms, now fundamental for electronic commerce in the Internet era.
But in the last decade, the greater emphasis on models, formalization and the construction of provably secure protocols has transformed the discipline in a profound way: however, many of these results are yet to be implemented. Below are some of the most interesting constructions, which only appear in the academic literature and have not yet reached the textbooks:
- Identity-Based Encryption: public-key cryptography reduced the problem of information security “to that of key distribution” (Martin Hellman), and IBE schemes are the next step forward because they enable the use of any string as a public key. This way, the recipient's email address can serve as the public key, even if the recipient never requested a certificate for it, removing the need to pre-deploy a Public Key Infrastructure and its cumbersome costs. Later variants even allow the use of biometric identities by introducing a margin of error in the definition of the public key, or the efficient revocation of certificates.
- Attribute-Based Encryption: embeds a Role-Based Access Control model in public-key cryptography, so every private key gets associated with a set of attributes representing its capabilities, and every ciphertext can only be decrypted by users satisfying a prescribed set of attributes (e.g. only “NATO officials” with an authorization level of “Cosmic Top Secret” are able to decrypt an important document). Later variants add advanced features, such as doing away with the centralized authority.
- Predicate Encryption: generalizes and extends the previous IBE and ABE schemes, allowing for the encryption of the attributes and the decryption policy itself, and for far more granular policies.
- Signcryption: as the name implies, it performs encryption and signing at the same time, at lower storage and computational cost than carrying out the two operations separately.
- Post-quantum cryptography: after Shor's algorithm for efficient integer factoring, new public-key encryption algorithms are required, such as NTRU, that resist the cryptanalytic methods enabled by quantum computation.
- Proofs of retrievability, ownership and work: a must in the cloud-computing world, they respectively allow checking the integrity of remotely stored files without keeping a local backup of them; storing only one copy of the same encrypted file (both proofs can be combined into a single proof of storage); and proving that a costly computation has been carried out, a very useful primitive to fight spam and the basis of Bitcoin (a minimal proof-of-work sketch appears after this list).
- Zero-Knowledge Protocols: this fascinating idea, seemingly contradictory at first, has become a fundamental building block of modern cryptography, serving as the basic primitive for authentication and secure computation, among others. They allow proving the truth of a statement to another party without revealing anything but the truthfulness of said statement; in other words, proving that the solution to a problem has been found without having to show the solution itself (see the Schnorr-style sketch after this list).
- Commitment schemes: one party commits to a value but keeps it hidden, with the option to reveal it later. Intimately related to the zero-knowledge protocols described above, they are also a fundamental primitive for more complicated protocols, both in practice and in formal proofs (a hash-based sketch follows this list).
- Private Information Retrieval: this family of protocols enables a client to query a database privately, with very little overhead, without revealing to the server the exact information being searched for. For example, a modern implementation of PIR-enabled MapReduce introduces an overhead of only 11% (a toy two-server variant is sketched after this list).
- Threshold cryptography: a set of modifications to common encryption schemes to share keys within a group, so that only a subset of parties at least as large as a fixed threshold can decrypt the secrets (see the Shamir secret-sharing sketch after this list). Their counterparts for signing are threshold signatures, with ring and group signatures as related group-oriented constructions.
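To make the proof-of-work idea above concrete, here is a minimal hashcash-style sketch in Python; the function names, the SHA-256 choice and the 20-bit difficulty are illustrative assumptions, not taken from any particular system. Producing the nonce is costly, while verifying it takes a single hash.

# Hashcash-style proof of work: find a nonce such that the hash of
# (challenge + nonce) falls below a difficulty target.
import hashlib

def prove(challenge: bytes, difficulty_bits: int) -> int:
    """Costly step: brute-force a nonce meeting the difficulty target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap step: a single hash suffices to check that the work was done."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

if __name__ == "__main__":
    challenge = b"mail-from:alice@example.org"
    nonce = prove(challenge, difficulty_bits=20)   # ~2^20 hashes on average
    assert verify(challenge, nonce, difficulty_bits=20)
    print("valid proof of work, nonce =", nonce)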
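As a worked zero-knowledge example, the following toy sketch runs the classic Schnorr identification protocol over a deliberately tiny group (p = 23, q = 11 and the secret are illustration-only assumptions; real deployments use large prime-order groups): the prover convinces the verifier that it knows the discrete logarithm of y without ever revealing it.

# Interactive Schnorr protocol: prove knowledge of x with y = g^x mod p
# without revealing x. Toy parameters only.
import secrets

p, q, g = 23, 11, 2           # g generates a subgroup of prime order q in Z_p*

x = 7                          # prover's secret
y = pow(g, x, p)               # public value known to the verifier

# Round 1: prover commits to a fresh random r
r = secrets.randbelow(q)
t = pow(g, r, p)

# Round 2: verifier sends a random challenge
c = secrets.randbelow(q)

# Round 3: prover responds; the response hides x because r is random
s = (r + c * x) % q

# Verification: g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verifier accepts: the prover knows log_g(y) without having revealed it")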
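A minimal hash-based commitment sketch, assuming SHA-256 and a random 32-byte nonce (the function names are hypothetical): the nonce hides the committed value, and collision resistance prevents the committer from later opening the commitment to a different value.

# Hash-based commitment: commit to a value now, reveal it later.
import hashlib, secrets

def commit(value: bytes):
    nonce = secrets.token_bytes(32)                   # blinding randomness
    commitment = hashlib.sha256(nonce + value).digest()
    return commitment, nonce                          # publish commitment, keep nonce

def open_commitment(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == commitment

if __name__ == "__main__":
    c, nonce = commit(b"my sealed bid: 100")
    # ... later, the committer reveals (nonce, value) and anyone can check:
    assert open_commitment(c, nonce, b"my sealed bid: 100")
    assert not open_commitment(c, nonce, b"my sealed bid: 999")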
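For private information retrieval, the simplest construction to sketch is the two-server, information-theoretic variant of Chor et al., a different flavor from the single-server scheme behind the MapReduce figure above; the toy code below assumes two non-colluding servers holding the same bit database.

# Two-server XOR-based PIR: the client learns database bit i while neither
# (non-colluding) server learns which index was queried.
import secrets

def xor_bits(db, indices):
    out = 0
    for j in indices:
        out ^= db[j]
    return out

db = [secrets.randbelow(2) for _ in range(64)]   # identical database on both servers
i = 42                                           # index the client wants to read

# Client: a random subset S for server 1, and S with index i flipped for server 2
S1 = {j for j in range(len(db)) if secrets.randbelow(2)}
S2 = S1 ^ {i}

answer1 = xor_bits(db, S1)    # computed by server 1 (learns nothing about i)
answer2 = xor_bits(db, S2)    # computed by server 2 (learns nothing about i)

assert answer1 ^ answer2 == db[i]   # client recovers exactly the bit it wanted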
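Finally, a sketch of the secret-sharing layer underneath threshold cryptography: Shamir's (t, n) scheme over a prime field (the Mersenne prime modulus and the function names are illustrative assumptions). Threshold decryption schemes apply the same idea to shares of a private key.

# (t, n) Shamir secret sharing: any t shares reconstruct the secret,
# fewer than t reveal nothing about it.
import secrets

P = 2**127 - 1   # a Mersenne prime, used as the field modulus

def split(secret: int, t: int, n: int):
    """Evaluate a random degree t-1 polynomial whose constant term is the secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = split(secret=123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789     # any 3 of the 5 shares suffice
    assert reconstruct(shares[1:4]) == 123456789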
Static vs. Formal Methods for Code Auditing
Sure, it's always feasible to write secure C code: for example, the ternary operator (cond ? x : y) requires its two result operands to have compatible types, so it can be used to write statements that make the compiler raise errors whenever type mismatches occur at compile time, but it would also be very easy to accidentally bypass such constructs. For serious undertakings, however, it's always better to rely on existing tools that automate the many mind-numbing tasks that make up a software audit.
Static code analysis has a long history (the first lint tool dates back to 1977), but only the profusion of programmers trained over the last decade in strongly-typed languages created the expectation of getting the same level of pre-execution bug detection for weakly-typed languages. So strong is the impetus for static analysis in the C arena that the whole Clang compiler suite is replacing gcc largely because it is almost impossible to develop static analysis tools for the latter. Nowadays, in the hall of fame of static analysis tools for code auditing, Deputy gets the top prize among open-source/free programs and Coverity among the commercial ones, with Veracode as a strong contender.
Coverity is a commercial spin-off of the academic work of Dawson Engler, whose papers are a must-read, not only to understand how their tools work but also to follow other cutting-edge research subjects like symbolic execution. Coverity Static Analysis generates more false positives than Veracode's offering, especially considering how much they charge, but neither of the two comes close to the huge number generated by the venerable PC-Lint. Still, this last one, and others like PVS-Studio and Parasoft C/C++, are a must if you prefer on-premise software to handing your precious source code over to the cloud. And if no external tools are available, at least consider the code analysis tool built into Visual Studio.
In a different category, Deputy is an innovative software project that tries to introduce rigorous type safety through annotations included within the code. Introducing annotations is a great idea while the codebase is in a nascent state, but it becomes much more cumbersome once a project is in an advanced state, because it is akin to the tedious maintenance of code written by others. Deputy also features runtime checks to detect the faults that couldn't be caught at compile time. And if time is limited, you may consider just using splint instead of annotating everything.
In the end, I like to think that the most critical parts of the code should be minimized and subjected to formal verification efforts (see L4.verified), while always noting the implicit circular paradox of having to write a specification for the formal verification process, which is just a program in itself. Because, who verifies the verifier?
Trust (hypervisors), but (formally) verify them
Quis custodiet ipsos custodes?
Juvenal (Satire VI, lines 347–348)
It's been a while since I lectured about the formal construction of Java compilers and virtual machines. Nowadays, all the action has moved to an even more basic layer of the software infrastructure: the hypervisor, the core of the virtual machine monitor sitting beneath the operating system. But it's just déjà vu all over again, an instance of the eternal recurrence of the software world, since the first virtual machines originated at the operating-system level (IBM System/360 Model 67). Even so, verification is still a key step to cut many Gordian knots of the kind described in Thompson's Reflections on Trusting Trust: from concurrency verification in Hyper-V to the more advanced Typed Assembly Language and Proof-Carrying Code techniques.
Obfuscating Android C Native Code
I get way too many emails through this blog, but one caught my attention: a reader has asked for advice on Android native code obfuscation, along the lines of previous posts. It's pretty clear that ProGuard is an excellent solution for the main language of the Android platform, Java, but there is no clear equivalent for native C/C++ development targeting ARM binaries.
The best way to frame this question is to start by defining which tools Mallory, our evil cracker, would prefer for decompiling/disassembling the binary code. Many tools have existed over the years to decompile C code (REC, DCC), Hex-Rays being the latest and most powerful of them all, so it would be the first in her tool chest. With the scalpel chosen, the most effective countermeasure against it, and against any decompiler, is self-modifying/metamorphic code, since it breaks their over-reliance on static binary analysis. The downside is that it's very difficult to create good, reliable self-modifying/metamorphic code, especially in times when almost everyone abhors assembly programming, so encrypting most parts of the binary and decrypting them at program load time is a realistic substitute, much like UPX does (though by itself that is no protection at all).
Most people would recommend following the conventional route of code obfuscation (Mangle-It, Stunnix C/C++ Obfuscator, COBF, Thicket), but there are also some very creative approaches, e.g. using the LLVM compiler infrastructure with the C back-end to produce an intermediate C representation to be recompiled with gcc; or, my favorite, running the most critical parts of the program inside a virtual machine like Oreans or Python.
For the sake of completeness, there have also been some very interesting papers on cryptographically-aided obfuscation, my favourite being the following one:
And remember, enabling full compiler optimizations will always help!
TDSS Botnet is Not Sophisticated, It's Antiquated
Propagating mass-media scare-mongering about the latest piece of malware is always a good way to fill those blank newspaper pages.
These days, it’s the turn of TDSS, yet another so-so malware that endures due to the lusers’ blatant incompetence. This so-called indestructible botnet features:
- Snake-oil crypto: the best crypto! It cures all ailments!
- C&C through the KAD network (Tor is just a misspelled Norse god!).
- Cutting-edge MBR infection! (it seems the ’80s were such an obscure period that nothing from that age remains, except a much, much younger Madonna, go figure).
- TDSS removes other malware, thank you very much: because this has never been attempted before, and I would say, it's the easiest way to determine that a system has been infected.
- A new and very innovative 64-bit kernel-mode driver: let’s just pretend the first 64-bit viruses were not written in 2004…
- Other articles provide a much more detailed view of the evolution of this malware, which is the only thing worth noting about it.
- Last, but not least, I don't understand how they can claim that the botnet is indestructible when they have been able to reverse engineer the C&C protocol and send queries to its servers.
I wonder when malware will catch up with the already published research from the crypto-virology field. It would be wonderful, if you catch my drift, to see a massive botnet using advanced techniques such as questionable encryption, kleptography or homomorphic encryption applied to delegated computation. Then we would be talking about a really indestructible botnet.