Bypassing the intent of blocking "third-party" cookies

[Aside: I'm not sure anyone cares, particularly because the "block third party cookies" option tends to break legitimate web sites. But I'll document it just in case :)]

Major browsers tend to have an option to block "third-party" cookies. The main intent of this is to disable tracking cookies used by iframe'd ads.

It turns out that you can bypass this intent by abusing "HTML5 Local Storage". This modern browser facility is present in (at least) Firefox 3.5, Safari 4 and even the normally-lagging IE8. Chrome 4 Beta has it too, making the feature well supported across all the major browsers, and therefore a more tempting target.

In concept, HTML5 Local Storage is very similar to cookies. On a per-origin basis, there is a set of disk-persisted name / value pairs.
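In a rough sketch, the API is just setItem() / getItem() on the per-origin store (the in-memory fallback below is purely so the snippet also runs outside a browser, where the localStorage global doesn't exist):

```javascript
// HTML5 Local Storage: per-origin, disk-persisted name / value pairs.
// localStorage is a browser global; the fallback object here is only
// so this sketch also runs outside a browser.
const store = (typeof localStorage !== 'undefined') ? localStorage : (() => {
  const m = new Map();
  return {
    setItem: (k, v) => { m.set(k, String(v)); },
    getItem: (k) => (m.has(k) ? m.get(k) : null),
  };
})();

store.setItem('tracking_id', 'abc123');    // persists across sessions
console.log(store.getItem('tracking_id')); // abc123
console.log(store.getItem('no_such_key')); // null
```

A third-party iframe gets its own origin's store, which is exactly why it can stand in for a tracking cookie.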

With a simple test, it's easy to show that the HTML5 Local Storage feature is not affected by the third-party cookie setting. I believe this holds across all the above browsers. A simple test page that gets / sets a name / value pair from within a third-party iframe may be located here:

http://scary.beasts.org/misc/iframe_storage.html

(This page also tests for a similar situation with HTML5 Web Database, but that is so far a less supported standard).

What's interesting is that all these browsers did remember to disable these persisted databases in their various private modes.

Cross-domain search timing


I've been meaning to fiddle around with timing attacks for a while. I've had various discussions in the past about the significance of login determination attacks (including ones I found myself) and my usual response would be "it's all moot -- the attacker could just use a timing attack". Finally, here's some ammo to support that position. And -- actual cross-domain data theft using just a timing attack, as a bonus.

Unfortunately, this is another case of the web being built upon broken specifications and protocols. There's nothing to stop the domain evil.com from referencing resources on some.sensitive.domain.com and timing how long the server takes to respond. For a GET request, a good bet is the <img> tag plus its onerror() / onload() events. For a POST request, you can direct the post at an <iframe> element and monitor its onload() event.
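As a hypothetical sketch (browser-only, since Image and its events don't exist outside one; the URL is illustrative), the <img> variant looks like:

```javascript
// Time how long a cross-domain server takes to answer a GET.
// The response is HTML, so the image fails to decode -- but onerror
// fires only once the server has responded, which is all we need.
function timeCrossDomainGet(url, callback) {
  const start = Date.now();
  const img = new Image();
  img.onerror = img.onload = () => callback(Date.now() - start);
  img.src = url;
}

// Illustrative usage (from evil.com):
// timeCrossDomainGet('https://some.sensitive.domain.com/search?q=the',
//                    (ms) => console.log('server took ' + ms + 'ms'));
```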

Why should an evil domain be able to read timing information from any other domain? Messy. Actually, it's even worse than that. Even if the core web model didn't fire the relevant event handlers for cross-domain loads, there would still be trouble. The attacker is at liberty to monitor the performance of a bunch of busy-loops in Javascript. The attacker then frames, or opens a new window for, the HTML page they are interested in. When performance drops, the server likely responded. When performance goes up again, the client likely finished rendering. That's two events, and actually a leak of more information than in the pure-event case.
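The busy-loop side channel might be sketched as follows (a minimal sketch; a real attack would sample this repeatedly while the target page loads in a frame):

```javascript
// Count busy-loop iterations completed in a fixed time slice.
// A sudden drop in the returned count suggests the browser is busy
// parsing / rendering the framed page -- i.e. the server responded.
function iterationsPerSlice(sliceMs) {
  let count = 0;
  const end = Date.now() + sliceMs;
  while (Date.now() < end)
    count++;
  return count;
}

const baseline = iterationsPerSlice(10);
// ...frame the interesting page here, then keep sampling and compare...
```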

Moving on to something real. The most usable primitive that this gives the attacker is a 1-bit leak of information. i.e. was the request relatively fast or relatively slow? I have a little demo:

https://cevans-app.appspot.com/static/ymailtimings.html

It takes a few seconds, but if I'm not logged into Yahoo! Mail, I see:

DONE! 7 79 76 82

From the relatively flat values of the last three timings (three different inbox searches), and the relative latency between the first number and the latter three, it's pretty clear I'm not logged in to Yahoo! Mail.

If I'm logged in, I see:

DONE! 10 366 414 539

This is where things get interesting. I am clearly logged in because of the significant server latency inherent in a text search within the inbox. But better still, the last three numbers represent searches for the words nosuchterm1234, sensitive and the. Even with a near-empty inbox, the server has at least a 40ms difference in minimum latency between a query for a word not in the index, and a query for a word in the index. (I mailed myself with sensitive in the subject to make a clear point).

There are many places to go from here. We have a primitive which can be used to ask cross-domain YES/NO questions about a victim's inbox. Depending on the power of the search we are abusing, we can ask all sorts of questions. e.g. "Has the victim ever mailed X?", "If so, within the past day?", "Does the word earnings appear in the last week?", "What about the phrase 'earnings sharply down'?" etc. etc. By asking the right YES/NO questions in the right order, you could reconstruct sentences.

It's important to note that this is not a failing in any particular site. A site can be following current best practices and still be bitten by this. Fundamentally, many search operations on web sites are non-state-changing GETs or POSTs and therefore do not need XSRF protection. The solution, of course, is to add it anyway (and do the check before doing any work on the server, like walking indexes)!

With thanks to Michal Zalewski for interesting debate and Christoph Kern for pointing out this ACM paper, which I haven't read but from the abstract it sounds like it covers some less serious angles of the same base attack.

vsftpd-2.2.2 released

Just a quick note that I released vsftpd-2.2.2.
Most significantly, a regression was fixed in the inbuilt listener. Heavily loaded sites could see a session get booted out just after the initial connect. If you saw "500 OOPS: child died", that was probably this.

http://vsftpd.beasts.org/

A new fuzz frontier: packet boundaries

Recently, I've been getting pretty behind on executing my various research ideas. The only sane thing to do is blog the idea in case someone else wants to run with it and pwn up a bunch of stuff.

The general concept I'd like to see explored is perhaps best explained with a couple of concrete bugs I have found and fixed recently:

  1. Dimensions error parsing XBM image. Welcome to the XBM image format, a veritable dinosaur of image formats. It's a textual format and looks a bit like this:
    #define test_width 8
    #define test_height 14
    static char test_bits[] = {
    0x13, 0x00, 0x15, 0x00, 0x93, 0xcd, 0x55, 0xa5, 0x93, 0xc5, 0x00, 0x80,
    0x00, 0x60 };
    The WebKit XBM parsing code includes this line, to extract the width and height:
        if (sscanf(&input[m_decodeOffset], "#define %*s %i #define %*s %i%n",
                   &width, &height, &count) != 2)
            return false;

    The XBM parser supports streaming (making render progress before you have the full data available), including streaming in the header. i.e. the above code will attempt to extract width and height from a partial XBM, and retry with more data if it fails. So what happens if the first network packet contains an XBM fragment of exactly the first 42 bytes of the above example? This looks like:
    #define test_width 8
    #define test_height 1
    I think you can see where this is going. The sscanf() sees two valid integers, and XBM decoding proceeds for an 8x1 image, which is incorrect. The real height, 14, had its ASCII representation fractured across a packet boundary.
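    The same hazard can be sketched in a few lines of JavaScript standing in for the sscanf() call (the regex is an illustrative stand-in, not WebKit's actual parser):

```javascript
// A streaming parser retries on partial data -- but "looks complete"
// is not the same as "is complete". Slice at the 42-byte packet
// boundary described above and the height parses as 1, not 14.
const xbm = '#define test_width 8\n#define test_height 14\n';
const partial = xbm.slice(0, 42);

function parseHeader(data) {
  const m = /#define \S+ (\d+)\s+#define \S+ (\d+)/.exec(data);
  return m ? { width: +m[1], height: +m[2] } : null;
}

console.log(parseHeader(partial)); // { width: 8, height: 1 } -- wrong!
console.log(parseHeader(xbm));     // { width: 8, height: 14 }
```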

  2. Out-of-bounds read skipping over HTML comments. This is best expressed in terms of part of the patch I submitted to fix it:
    --- WebCore/loader/TextResourceDecoder.cpp  (revision 44821)
    +++ WebCore/loader/TextResourceDecoder.cpp  (working copy)
    @@ -509,11 +509,13 @@ bool TextResourceDecoder::checkForCSSCha
     static inline void skipComment(const char*& ptr, const char* pEnd)
     {
         const char* p = ptr;
    +    if (p == pEnd)
    +        return;
         // Allow <!-->; other browsers do.
         if (*p == '>') {
             p++;
         } else {
    -        while (p != pEnd) {
    +        while (p + 2 < pEnd) {
                 if (*p == '-') {
                     // This is the real end of comment, "-->".
                     if (p[1] == '-' && p[2] == '>') {

    As can be seen, some simple bounds checking was missing. In order to trigger, the browser would need to find itself processing an HTML fragment ending in: