Thursday, 28 February 2008

Repurposing Attacks Against Java Applets

If you read my review of the Web Application Hacker's Handbook you may remember I made the following point:

The authors talk about repurposing ActiveX controls but do not mention that this also applies to signed Java applets, which can also expose dangerous methods in exactly the same way.

In this post I'm going to discuss a security flaw in a Java applet signed by Sun Microsystems. The vulnerability lets us drop an arbitrary file to an arbitrary location on the file system on Windows platforms, subject to the user's permissions and, in IE7, to whether Protected Mode is enabled. The applet in question has been updated to fix this issue and the NGS advisory is here.

Before I talk about the applet itself, let's take a brief look at the components of a signed JAR:

  • A signed Java applet consists of a JAR file (a zip archive) containing the application class files, a manifest, one or more signature files and a signature block.

  • The manifest holds a list of files in the JAR that have been signed and a corresponding digest for each file, typically SHA1.

  • A signature file consists of a main section which includes information supplied by the signer but not specific to any particular jar file entry, followed by a list of individual entries whose name must also be present in the manifest file. Each individual entry must contain at least the digest of the corresponding entry in the manifest file.

  • The signature block contains the binary signature of the signature file and all public certificates needed for verification.
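
To make that layout concrete, here is roughly what the signed pieces look like inside META-INF. The file names and digest values below are illustrative, not taken from a real JAR:

```
META-INF/MANIFEST.MF:

    Manifest-Version: 1.0
    Created-By: 1.5.0 (Sun Microsystems Inc.)

    Name: com/example/Launcher.class
    SHA1-Digest: (base64 digest of the class file)

META-INF/SIGNER.SF:

    Signature-Version: 1.0
    SHA1-Digest-Manifest: (base64 digest of the whole manifest)

    Name: com/example/Launcher.class
    SHA1-Digest: (base64 digest of the corresponding manifest entry)

META-INF/SIGNER.RSA:

    (binary signature block: a PKCS#7 signature over SIGNER.SF,
     plus the signer's certificate chain)
```

Note that the signature covers the .SF file, which in turn covers the manifest, which in turn covers the class files - so tampering with any class invalidates the chain.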

For anyone interested in the verification process, this paragraph from the Java Plug-in Developer's Guide gives a good description. In practice, the user is presented with a dialog like this:

Contrast this with the dialog box that IE7 presents when installing a new ActiveX control (note that I clicked "more options" to show the always install/never install options):

The difference, from a security perspective, is obvious*: Sun want you to always trust the publisher; Microsoft want to ask you every time. For anyone wondering whether you only see this behaviour with Sun-signed applets, this is the default behaviour for all signed applets (don't believe me? Go check out the Hushmail applet). It is worth noting, however, that the checkbox is only ticked if the certificate chain verifies all the way up to a trusted Root CA certificate. If the certificate has expired or is self-signed, the checkbox will not be ticked.

This is a pretty interesting design decision. If you click "Run" in the Java dialog box above, you're allowing all existing and future applets signed by the same publisher (strictly, the same certificate) to run automatically, regardless of the website they are loaded from and the parameters they are instantiated with. So even if you have some level of confidence in the applet you are about to run, if the publisher produced a buggy applet and signed it with the same certificate, a malicious website can repurpose it and silently use it against you. Scary, huh? This is one of those usability vs. security trade-offs**. Even if the applet is cached on your machine, if the certificate is not in the trusted store you will be prompted every time it's instantiated. If you're the IT manager of a large corporation and your intranet homepage has a signed applet, you probably don't want your users to see a security warning every time they open a browser.

Now for the actual vulnerability... The JNLPAppletLauncher is a "general purpose JNLP-based applet launcher class for deploying applets that use extension libraries containing native code." What this means is that an unsigned applet requiring a signed native code extension such as Java 3D can be launched by invoking the JNLPAppletLauncher and passing it a JNLP file that references both the original applet and the extension. There are some demos here; FourByFour is a great example of Java 3D in the browser (though tic-tac-toe on a 4x4x4 cube... it's not exactly Guitar Hero III).
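
For reference, a launcher invocation looks roughly like the following. The parameter names are as I recall them from Sun's demo pages, so treat the details as illustrative rather than definitive:

```html
<applet code="org.jdesktop.applet.util.JNLPAppletLauncher"
        archive="applet-launcher.jar,applet.jar"
        width="600" height="400">
  <!-- the unsigned applet actually being launched -->
  <param name="subapplet.classname" value="demos.applets.GearsApplet">
  <param name="subapplet.displayname" value="JOGL Gears Applet">
  <!-- JNLP file(s) describing the signed native extension(s) -->
  <param name="jnlpNumExtensions" value="1">
  <param name="jnlpExtension1" value="jogl.jnlp">
</applet>
```

The JNLP file locations in those parameters are exactly the attacker-influenced input discussed below.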

The JNLPAppletLauncher had a simple directory traversal flaw exploitable on Windows platforms. The applet reads extensions from the JNLP file, whose location is passed as a parameter during applet instantiation. The extension path is examined for the parent path sequence "../". On Windows, of course, this is insufficient - the failure to check for "..\" ultimately allows us to drop an arbitrary file anywhere on the file system. The extension path is concatenated to the base URL, so we end up with something like:

http://[attacker-controlled server]/..\..\..\..\..\..\windows\system32\file.dll

If you're thinking this is an invalid URL, you're right. You'll need a hacked-up web server to honour it, or at least the ability to modify the httpd.conf on an Apache server. A request for a file below the web root will cause Apache to generate an HTTP 400 (Bad Request). We can translate this into an HTTP 302 (Redirect) via the ErrorDocument directive. The applet will follow the redirection and download the content to the path "..\..\..\..\..\..\windows\system32\file.dll".
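
The underlying bug is simple enough to sketch. The class and method names below are hypothetical - this is not the actual JNLPAppletLauncher source - but it captures the difference between a filter that only knows about forward slashes and one that handles both separators:

```java
// Sketch of the class of bug described above: a traversal filter that
// only looks for the Unix-style parent sequence "../".
public class PathCheck {

    // The insufficient check: rejects "../" but lets "..\" straight through.
    public static boolean naiveCheck(String extensionPath) {
        return !extensionPath.contains("../");
    }

    // A platform-aware check: reject the parent sequence with either
    // separator (real code should also canonicalise the path first).
    public static boolean robustCheck(String extensionPath) {
        return !extensionPath.contains("../")
            && !extensionPath.contains("..\\");
    }

    public static void main(String[] args) {
        String payload = "..\\..\\..\\..\\..\\..\\windows\\system32\\file.dll";
        System.out.println("naive allows payload:  " + naiveCheck(payload));  // true
        System.out.println("robust allows payload: " + robustCheck(payload)); // false
    }
}
```

On a Unix platform the naive check happens to be adequate, which is presumably why the flaw only bites on Windows.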

Sun have now fixed this issue, so the applet currently available for download is no longer vulnerable. Since the JAR is not an officially supported product, there will be no Sun Alert released. And given the prerequisites for this attack (you have to coerce a user into visiting a malicious web site, then have the user agree to run the control [unless of course they have trusted Sun as a publisher], then you need a hacked-up web server), I do not consider this issue to be especially serious. That said, it's worth checking your Java trusted certificate store to see exactly which publishers you currently trust. You can get to this via the Java Control Panel (C:\Windows\system32\javacpl.cpl):

Anyway, I'll be revisiting signed applets in a future post. In the meantime, my advice is: beware of always trusting the publisher.



* Though the dialog boxes look pretty similar and present the same information, the bottom panel is used to communicate different messages: Microsoft warn you that the file could harm your computer; Sun tell you that the certificate chains to a trusted root CA certificate (which is redundant, as they've already told us "the application's digital signature has been verified" in the top panel).

** If you want to check out some of the Java community's feedback to this dialog box, check out the comments on Stanley Ho's blog post from 2005, Deployment: Goodbye scary security dialog box!

Wednesday, 20 February 2008

Thoughts on Firmware Rootkits

Over the last couple of years I've presented a number of low level attacks aimed at demonstrating off-disk rootkit persistence in firmware.

Vulnerability research into hardware typically has a high barrier to entry; development boards and hardware debuggers are expensive, and specs are often unfathomable or hundreds of pages long (or both). That said, tools like Bochs (an open source IA-32 emulator with an integrated debugger that lets you debug the VM from the very first instruction) and the MindShare books are great resources.

So why go to the effort of hacking hardware? Well, I believe it's a fruitful research area. After all, the OS is only as secure as the hardware it's running on, and as more and more machines ship with TPMs (and software to make use of them), the need for independent researchers to cast a critical eye over these technologies is greater than ever - especially in light of analyses such as Christiane Rütten's investigation into an encrypted hard drive enclosure (a kind of technical version of the Emperor's New Clothes).

Previously I've focused on the Advanced Configuration and Power Interface (ACPI), PCI Option ROMs and the Extensible Firmware Interface (EFI*). The concepts behind most of the attacks I've covered are not new (is there ever anything in security that is truly new?). At the time I carried out the research, however, I found no practical information on firmware attacks, so I set out to determine how feasible they really were, what they might look like from a defensive perspective and how different hardware and firmware implementations affected things.

So anyway, it's been a while since I've released any material in this area, but let me assure you there is some on the way. I recently spoke to Deb Radcliff for an article in this month's SC Magazine. The crux of the article is that a modern PC is a complex system containing many peripheral devices, each with its own CPU, its own firmware and its own interface to that firmware. If we assume that a given secure boot process will measure all firmware containing instructions for the main CPU, the question is: how does it locate and measure the firmware specific to each device? On a system with a TPM and a secure boot process, there is still potential to reflash a device's firmware in order to run a rootkit on the device itself... why run on the main CPU, risking detection, if you can interact with main memory and the I/O space from a peripheral?

You'd be surprised exactly what you can attack from the OS without physical access to the machine. In the article I use smart batteries as an example. There's a good chance that your notebook's battery firmware (data and potentially code) can be updated from the OS. For the incredulous among you, check out the following passage from Atmel’s ATmega406 AVR Microcontroller whitepaper:

The ATmega406 facilitates safe in-field update through self-programming. The ATmega406 CPU can access and write its own program memory. Atmel’s self-programming has true read-while-write capabilities, so critical parts of the battery application can be allowed to remain running while the update is in progress. Since the programming is CPU-initiated, the device is able to receive updates through any supported interface. This means that the SMBUS interface between the PC and battery in effect can be used for in-field updating of the battery. This is by far the most flexible option, as the update can be implemented as a program running on the host PC.

For a more rigorous treatment of trusted computing with untrustworthy devices, I highly recommend Hendricks and van Doorn's paper from 2004, Shoring up the Trusted Computing Base.



* I'll blog on EFI in a future post. If you've never heard of it or don't know much about it, you're probably a Windows XP or Vista user who, as Apple puts it, is "stuck in the 1980s with old-fashioned BIOS" :)

Wednesday, 13 February 2008

Review of The Web Application Hacker's Handbook

You might be forgiven for thinking that I would give a harsh review to a book whose co-author once had an unfortunate vomiting incident in my near vicinity. My very near vicinity*. That said, I know first hand that both Dafydd Stuttard and Marcus Pinto, colleagues of mine at NGS, worked extremely hard on this book, so I'll try and give an honest review...

WAHH is a book primarily for pen testers, though web application developers would do well to read it too. The first thing that struck me is that it has a logical flow: chapters on the evolution of web applications, core defensive mechanisms and web application technologies are followed by mapping the application and attacking key components, prior to the introduction of more advanced topics such as automation. WAHH is a hefty 700 pages split into 20 chapters. I made some notes as I went through it, which I've written up below.

What I liked about WAHH:

  • Chapter 11 - Attacking Application Logic; this chapter presents 11 real-world examples. It's hard to describe a generic approach to detecting logic flaws in an application, as the authors point out, but they've managed to do a good job of imparting the mindset required to find logic bugs, breaking each example into three sections: the functionality, the attack and the (misplaced) assumptions. This chapter could easily have ended up coming across as two pen testers wheeling out old war stories, but instead it's an interesting read. Example 8, "Escaping from Escaping" (the developers forgot to escape the escape character), is a classic.

  • Chapter 13 - Automating Bespoke Attacks shows how to automate an attack against a specific application by creating your own Java-based tool. It's great to see the authors present this kind of information from first principles rather than simply referring the reader to a pre-made tool, as so many security books seem to do. Of course, the hugely powerful Burp Intruder, written by Dafydd, makes an appearance later in the chapter, but the underlying message is that automation can save you heaps of time - and if there isn't a tool out there that does what you need, write one!

  • Chapter 15 - Attacking Compiled Applications provides a solid overview of typical implementation flaws such as buffer overflows, integer overflows and format strings. It's good to see mention of FormatMessage vulnerabilities. Whilst many web app tests won't involve any direct testing of components written in native code (with the exception of the web server etc.), all pen testers should at least be comfortable code reviewing simple CGIs written in C. I also found Chapter 18 - Finding Vulnerabilities in Source Code a handy cheat sheet for obvious things to look for in the common web languages.

  • Chapter 20 - A Web Application Hacker's Methodology. A methodology is an important part of pen testing, ensuring consistent results through a base level of testing. It's a difficult thing to write, as it has to be generic enough to apply to a sizeable number of application scenarios, but if it's too generic it's just not useful (not to mention most pen testers run a mile when asked to work on documentation!). Conveniently, Daf and Marcus provide a comprehensive, real-world, ready-to-use methodology at the end of WAHH.

What I Didn't Like

  • There is no mention of Silverlight. Chapter 5 covers "thick client technologies" - Java, ActiveX and Flash but not Silverlight. I do not envisage many financial institutions creating applications in Silverlight (in the same way that they don't use Flash either), but I believe we shall see a slow yet steady increase in its mainstream popularity, so it would have been nice to see some coverage of Silverlight-specific tools, such as Silverlight Spy. As an aside, I am also not convinced by the use of the term "thick client" in the context the authors use it, though it's obvious what is meant.

  • The MSSQL information in the SQL injection section seemed more SQL Server 2000-centric than 2005; e.g. there was no mention of xp_cmdshell being off by default in SQL Server 2005 (it can be re-enabled by executing the sp_configure stored procedure).

  • There was little mention of WebDAV. I would have liked more coverage of it - exploiting misconfigurations, information disclosure and so on - since a great many content management systems use it and it is popular with online office suites like Zimbra and ThinkFree.

  • The discussion of decompiling Java applets was vague ("For various reasons, Jad sometimes does not do a perfect job of decompiling bytecode"). And though JSwat is mentioned in passing, I would have liked to see an example of hotswapping a class in an applet to bypass a client-side check.

  • The ActiveX section could do with some further detail. There's no mention of IObjectSafety or property bags, and the only fuzzer mentioned is COMRaider (you might also try AxMan or AxFuzz). SiteLocking is mentioned, but not by name. In addition, the authors talk about repurposing ActiveX controls but do not mention that this also applies to signed Java applets, which can expose dangerous methods in exactly the same way.
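
On the SQL Server 2005 point above, the re-enabling sequence the book could have mentioned is short. A sketch, to be run by a sysadmin (or, from an attacker's perspective, through an injection point executing with sysadmin rights):

```sql
-- xp_cmdshell is off by default in SQL Server 2005;
-- sp_configure switches it back on:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
```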


All in all, I highly recommend this book to pen testers, web application developers and anyone interested in the evolution of web security. It's great to see all this information in one place, and my minor grumbles above certainly do not detract from an informative, enjoyable read. I thought it read very well, breaking up technical discussion with humour ("whatever your opinion of the threat posed by XSS vulnerabilities, it seems unlikely that Al Gore will be producing a movie about them any time soon"). It has clearly gone through diligent editing, which seems to be lacking in many tech books these days (reminding me of a lecturer I had at university who had written the course text; he paid out a small reward every time someone found a mistake or typo in it - I challenge Daf and Marcus to do the same!)



* Ask PortSwigger.