Further hardening glibc malloc() against single byte overflows

Introduction

Back in 2014, while at Project Zero, I exploited a buffer overflow of a single NUL byte in glibc. Tavis Ormandy had found an interesting glibc vulnerability, but there was skepticism in the Linux community that it was exploitable. The only thing to do was to write an exploit. (Was this really 3 years ago? How time flies!)

As part of warming up to write the exploit, I created a few sample test C files which explored different interesting glibc malloc() side effects after an off-by-one NUL byte write. You can find them here. The exploit abused a situation similar to consolidate_forward.c, but did not need to fully defeat ASLR because the target platform was 32-bit.

I also noted a curiosity for future research in shrink_free_hole_alloc_overlap_consolidate_backward.c: there's a possible sequence triggered by a NUL byte overwrite whereby ASLR can be cleanly defeated to leave aliased heap chunks. The C file is commented if you want to see how the sequence works.

Interestingly enough, this technique turned up in a December 2016 Project Zero guest post describing a Chrome OS exploit chain. In a fantastic piece of work, attributed anonymously, a single byte overflow of the value \x01 is used to fully defeat ASLR and eventually achieve code execution in a 64-bit x86_64 process.

I've been thinking about ways to kill this technique for a while now: it's far too powerful if it can reliably defeat ASLR in a 64-bit process. I also worry that many of the super serious Linux daemon bugs remaining are likely to be of the subtle type, i.e. one byte overflows.

Inline vs. out-of-line metadata

It's time for a brief foray into memory allocator design.

  • glibc malloc. glibc malloc is based on dlmalloc, which has an inline metadata design. Specifically, every buffer handed out to the program is preceded by a little piece of metadata indicating the size of the chunk and whether the previous chunk is free or not (a simplified sketch of this header follows the list).
  • tcmalloc. tcmalloc is an allocator that has a largely out-of-line metadata design, on account of being a bucketing allocator for smaller sizes and a page-based allocator for larger sizes. However, the internal metadata for this allocator is mixed in with pages handed out as buffers to the calling program. This was hardened a while ago by Justin Schuh for the Google Chrome web browser; I'm not sure whether it made it upstream.
  • PartitionAlloc. PartitionAlloc, part of the Google Chrome web browser. It has an out-of-line metadata design such that heap metadata is partitioned or guard paged off from the calling program's buffers. The notable exception, shared with tcmalloc, is freelist pointers, which occupy the freed slots. In PartitionAlloc, freelist pointers are transformed so that dereferencing them faults, and partial pointer overwrites are thwarted.
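
To make the inline metadata point concrete, here is a rough sketch of a glibc-style chunk header. It is modeled on malloc_chunk in glibc's malloc.c, but simplified: field names and details vary between versions, so treat it as an illustration rather than the real definition.

    /* Rough model of glibc's inline chunk metadata (cf. malloc_chunk in
       malloc.c). The pointer returned by malloc() points just past the
       size field, so writing one byte past the end of a buffer lands on
       the next chunk's metadata. */
    #include <stddef.h>

    struct chunk_header {
        size_t prev_size; /* size of the previous chunk; only meaningful
                             when that chunk is free (boundary tag) */
        size_t size;      /* this chunk's size; the low bits are flags
                             such as PREV_INUSE */
        /* Only while the chunk is free: doubly linked freelist pointers,
           stored in what was the user data area. */
        struct chunk_header *fd;
        struct chunk_header *bk;
    };

The property that matters for this post is that these fields sit directly adjacent to user buffers, which is what makes even a single byte overflow interesting.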

When I wrote PartitionAlloc, strictly protected heap metadata was a goal. After all, if the metadata is naturally guarded with guard pages, then there's less need for code defenses that try to detect and stop bad side effects from metadata corruption. Simplicity wins.

However, there's an interesting interaction with single byte overflows that gives pause for thought and philosophizing.

With an out-of-line metadata design, a single byte overflow off the end of a chunk will hit the first byte of some other chunk. Depending on the different object types that could be in the adjacent chunk, that's potentially a lot of possibilities for the attacker to explore.

With an inline metadata design, a single byte overflow off the end of a chunk will hit some metadata. Depending on the level of metadata corruption checks present, the single byte overflow could be dead in the water as an attacker primitive. However, on the flipside, there's also the possibility for more generic attacks against the metadata, such as the one we are patching up here.

Hardening against the glibc malloc() ASLR defeat

This is one of those cases where a relatively simple patch pops out, but only after some careful thought and analysis.

In this case, the key observation is that there are two linkages between heap chunks in glibc malloc():

  • Freelist pointers. These are doubly linked, and the double linkage is already validated. Since heap pointer values are a "secret" until ASLR has been defeated, an attacker cannot forge freelist pointers that pass this validation without first defeating ASLR.
  • Lengths. There is also length linkage. For a free chunk, the length is present at both the beginning and the end of the chunk, to enable chunk traversal. Unfortunately, an attacker can easily craft a fake length, and problems arise specifically when a length is modified so that chunk traversal lands on a valid metadata structure, just not the "correct" one. Previously, there was no length linkage validation.

The solution used was to also validate length linkage when unlinking chunks. It was checked in a while ago, here: https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=17f487b7afa7cd6c316040f3e6c86dc96b2eec30
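
To illustrate the shape of the fix, here is a minimal, standalone model of the two unlink-time validations. This is not the actual glibc code (the real checks live in the unlink path in malloc.c and report errors via malloc_printerr, with messages like "corrupted size vs. prev_size"); the struct and helper macros below just mimic that logic for illustration.

    #include <stddef.h>

    /* Minimal model of a glibc-style chunk, for illustration only. */
    struct chunk {
        size_t prev_size;   /* previous chunk's size, if that chunk is free */
        size_t size;        /* this chunk's size, with flag bits in the low bits */
        struct chunk *fd;   /* freelist forward pointer (free chunks only) */
        struct chunk *bk;   /* freelist backward pointer (free chunks only) */
    };

    #define SIZE_BITS ((size_t) 0x7)   /* PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA */
    #define chunksize(p)  ((p)->size & ~SIZE_BITS)
    #define next_chunk(p) ((struct chunk *) ((char *) (p) + chunksize (p)))

    /* Both validations must pass before a chunk is taken off its freelist. */
    int unlink_checks_ok (struct chunk *p)
    {
        /* New check: the size at the start of the free chunk must match the
           prev_size stored at its end (i.e. in the next chunk's header). */
        if (chunksize (p) != next_chunk (p)->prev_size)
            return 0;   /* "corrupted size vs. prev_size" */

        /* Pre-existing check: the freelist neighbours must point back at p,
           which an attacker cannot arrange without knowing heap addresses. */
        if (p->fd->bk != p || p->bk->fd != p)
            return 0;   /* "corrupted double-linked list" */

        return 1;
    }

The error handling and ordering differ in glibc itself, but the invariant being enforced is the same: the two copies of a free chunk's length must agree before the chunk is trusted.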

Now, if the attacker has an off-by-one corruption with a small value (NUL, or \x01 through \x07) that hits the least significant byte of a length (malloc_chunk->size), the attacker can only use that to cause the length to effectively shrink. This is because chunk lengths are always multiples of 8 under the covers, with the low bits of the size field reserved for flag bits. Shrinking a chunk's length means it will never match the prev_size stored at the end of that chunk. Even if the attacker deploys their one byte overflow multiple times, this new check should always catch them.
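
As a worked example (with made-up numbers), suppose a free chunk has size 0x110 and therefore stores 0x110 again as the prev_size boundary tag at its end. A one-byte NUL overflow into the least significant byte of the size field can only shrink it, and the two copies then disagree:

    #include <stdio.h>
    #include <string.h>

    int main (void)
    {
        size_t size = 0x110;             /* size field at the start of the chunk */
        size_t prev_size_at_end = 0x110; /* boundary tag stored at the chunk's end */

        /* Off-by-one NUL write clobbers the first byte of the size field,
           which is the least significant byte on a little-endian machine. */
        memset (&size, 0, 1);

        printf ("size after overflow: %#zx\n", size);   /* prints 0x100: shrunk */
        printf ("length linkage %s\n",
                size == prev_size_at_end ? "matches"
                                         : "mismatches: corruption detected");
        return 0;
    }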

If an attacker has an arbitrary linear overwrite, they could corrupt both size and prev_size to match, but with that powerful primitive, there would likely be other more fruitful ways forward.

Conclusion

Did we finally nail off-by-one NUL byte overwrites in the glibc heap? Only time will tell!