inputs smaller than this to make sure we're exploring new functionality and
not running into the same crash that we had with `stack`.
Before diving straight in, let's think for a moment about what kind of
crash we should expect based on our current understanding. As explained
above, as long as we keep our inputs less than 16 bytes, we won't run into
the same error as with `./stack`. In fact, we should expect that the call
to `strcpy()` should return without any drama. However, the `strcpy()`
isn't without its importance this time around because it will have written
to a capability which we will then dereference by calling `func_ptr()` at
the end of `main()`.
If the memory controller is doing what it's supposed to, then it will have
cleared the tag bit from the capability corresponding to `func_ptr` when
`strcpy()` overwrites part of it. Okay, enough hypothesising - let's run
this without an argument to see the `func_ptr` capability before it gets
overwritten.
```
root@cheribsd-morello-purecap:~/vuln # ./partial
Calling `func_ptr()` @ 0x1108b1 [rxR,0x100000-0x130c80] (sentry)
inside some_func()
```
So the `func_ptr` capability has a value of `0x1108b1`, is valid for the
bounds `0x100000-0x130c80` and can be used to both read and fetch bytes (the
lower-case "rx") as well as read capabilities (the upper-case "R"). Now
let's run `partial` again but this time with an argument:
```
root@cheribsd-morello-purecap:~/vuln # ./partial A
Calling `func_ptr()` @ 0x110041 [rxR,0x100000-0x130c80] (invalid,sentry)
In-address space security exception (core dumped)
```
Aha! Notice how the low two bytes of the value field changed from `0x08b1`
to `0x0041` - the "A" (`0x41`, followed by a NUL) we passed as an argument
successfully overwrote the capability, but the tag bit got cleared in the
process. Notice how the `%#p` specifier helpfully adds the word "invalid"
to the formatting of the `func_ptr` capability now. If we wanted to, we
could overwrite the entirety of the `func_ptr` capability and STILL not be
able to prevent the tag bit from being cleared. No matter what we do in
this example, modifying the capability using user input forces the
capability to be treated as data.
In summary, the CPU once again threw an exception at us, but this time it
was ultimately because we tried to dereference the capability in `pcc`
(remember - this is the CHERI version of `pc`) after the tag bit had been
cleared. We were able to successfully return from `strcpy()` because we
didn't overflow the bounds of the capability that was used to write to the
`func_ptr` object. However, in doing so, the memory controller cleared the
corresponding tag bit for the `func_ptr` capability, meaning that it was no
longer valid! When we then tried to call `func_ptr()`, `pcc` was still set
to the now-invalid capability, but as soon as the CPU tried to fetch an
instruction from the address that `pcc` pointed to, the exception was
thrown.
Pretty cool, huh? Hopefully these two examples demonstrate how CHERI can
help to mitigate two very common avenues of attack that academics refer to
as "spatial memory safety issues". Here, "spatial" refers to the fact that
we're modifying memory that we're not supposed to, "beyond the space/region
that the program expects". At this point, it's worth mentioning something
that's missing from this picture - if you've been following along at home
you might have already noticed it. If you run the second example above a
few times *without passing any argument*, you'll see the address of
`some_func` printed out a few times. Notice anything strange? That's
right - there's no ASLR on this system. The thought behind this appears to
be that because you can't forge capabilities, why does the memory layout
need to be randomised at all? Does knowing the virtual memory locations of
*anything* help you anymore with regards to exploitation? Are information
leaks still a concern (assuming you're only leaking capabilities)?
If you know anything about academics, then you're probably suspecting that
labeling something as "spatial memory safety" means that there's another
type of memory safety to think about. In our case, we should also consider
"temporal memory safety issues". These are vulnerabilities that occur when
the contents of memory changes at different times in ways that the
programmer didn't intend, and therefore the *program* doesn't expect. Think
of things like use-after-free, or perhaps even type-confusion. Personally,
I'm not a fan of categorising memory corruption issues into these two camps
because I feel like there's too much grey area, but we'll proceed with it
for now as it's what the CHERI literature uses.
### Use-After-Free
If you've written a UAF exploit or similar in the past, then you'll know
that exploits in this realm depend heavily on the allocator due to the
objects of interest being on the heap (objects on the stack are typically
more "permanent" so are drastically less likely to have their contents
switched out from under the program's nose during execution). A CHERI
system is no exception and temporal memory protections come from the use of
a "CHERI-hardened" allocator [4]. The current research in this field
describes an allocator that employs a concept referred to as "quarantining"
to protect freed allocations from being reused.
The idea of a quarantining allocator is reasonably straightforward: when a
heap chunk is freed, it goes into a quarantine list where it cannot be
re-allocated. Later, the quarantine list can be cleaned up by removing the
tag bit from all the capabilities in the list before returning them to the
pool of free chunks.
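To make the idea concrete, here's a minimal sketch of that free/sweep/reuse cycle. This is an illustration only, not any real CHERI allocator's code: a plain `tag` flag stands in for the hardware tag bit, and all the names (`q_free`, `q_sweep`, `q_alloc`) are made up for this example:

```c
#include <stddef.h>

/* Illustrative only: `tag` stands in for the hardware capability tag bit,
 * which a real CHERI-hardened allocator would clear via revocation. */
struct chunk {
    int tag;             /* 1 = capability to this chunk still "valid" */
    struct chunk *next;
};

static struct chunk *quarantine; /* freed, but not yet reusable */
static struct chunk *free_pool;  /* swept; safe to hand out again */

/* free(): the chunk goes into quarantine, not back to the free pool */
void q_free(struct chunk *c) {
    c->next = quarantine;
    quarantine = c;
}

/* Periodic sweep: invalidate every quarantined capability, then
 * return the chunks to the pool of free chunks. */
void q_sweep(void) {
    while (quarantine != NULL) {
        struct chunk *c = quarantine;
        quarantine = c->next;
        c->tag = 0;          /* stand-in for clearing the tag bit */
        c->next = free_pool;
        free_pool = c;
    }
}

/* malloc(): only ever reuses chunks whose old capabilities were revoked */
struct chunk *q_alloc(void) {
    struct chunk *c = free_pool;
    if (c != NULL) {
        free_pool = c->next;
        c->tag = 1;          /* fresh, valid capability for the new owner */
    }
    return c;
}
```

The key property is that between `q_free()` and `q_sweep()`, a chunk can never be re-allocated, so a dangling capability held by the program is guaranteed to be invalidated before anyone else owns that memory.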