Dataset columns:
- repo_name: string (length 1–62)
- dataset: string (1 value)
- lang: string (11 values)
- pr_id: int64 (1–20.1k)
- owner: string (length 2–34)
- reviewer: string (length 2–39)
- diff_hunk: string (length 15–262k)
- code_review_comment: string (length 1–99.6k)
cheriot-rtos
github_2023
others
84
CHERIoT-Platform
rmn30
@@ -0,0 +1,55 @@ +Interrupt handling in CHERIoT RTOS +================================== + +CHERIoT RTOS does not allow user code to run directly from the interrupt handler. +The scheduler has a separate stack that is used to run interrupt handler and this then maps interrupts to scheduler events.
```suggestion The scheduler has a separate stack that is used to run the interrupt handler and this then maps interrupts to scheduler events. ```
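The document under review describes the scheduler running interrupt handlers on a separate stack and mapping interrupts to scheduler events. A minimal sketch of that shape follows; the names (`pendingInterrupts`, `interrupt_entry`, `scheduler_drain_events`) are illustrative and not the real CHERIoT scheduler API:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Hypothetical model: the interrupt handler only records which interrupt
// fired; the scheduler later turns the recorded bits into events that
// wake waiting threads.
std::atomic<uint32_t> pendingInterrupts{0};

// Runs on the scheduler's dedicated interrupt stack: do the minimum work.
void interrupt_entry(unsigned irqNumber)
{
	pendingInterrupts.fetch_or(1u << irqNumber, std::memory_order_release);
}

// Runs later in the scheduler: atomically take and clear the pending set,
// then map each set bit to a scheduler event.
uint32_t scheduler_drain_events()
{
	return pendingInterrupts.exchange(0, std::memory_order_acquire);
}
```

Keeping the handler to a single atomic OR is what makes it safe to run no user code from the interrupt context itself.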
cheriot-rtos
github_2023
others
84
CHERIoT-Platform
rmn30
@@ -0,0 +1,100 @@ +Static sealed objects in CHERIoT RTOS +===================================== + +CHERIoT is a capability system. +This extends from the hardware (all memory accesses require an authorising capability in a register, provided as an operand to an instruction) up through the software abstractions in the RTOS. +In a capability system, any privileged operation requires an explicit capability to authorise it. + +Privilege is a somewhat fluid notion in a system with fine-grained mutual distrust but, in general, any operation that affects state beyond the current compartment (and, especially, anything related to external hardware) is considered privileged. +This includes actions such as: + + - Acknowledging interrupts + - Allocating heap memory + - Establishing network connections + - Flashing LEDs + - Firing missiles + +Each of these should require that the caller present a capability that authorises the callee to perform the action on behalf of the caller. + +Delegation is an important part of a capability system. +A capability may be passed from compartment A to compartment B, which can then use it to ask C to perform some privileged operation. +The identity of the immediate caller does not matter. + +CHERI capabilities to software-defined capabilities +--------------------------------------------------- + +CHERI provides a hardware mechanism for building software-defined capabilities: sealing. +The sealing mechanism allows a pointer (a CHERI capability) to be made immutable and unusable until it is unsealed using an authorising capability. +The CHERIoT ISA has a limited set of space for sealing types and so these are virtualised with the allocator's [token](../sdk/include/token.h) APIs. +These APIs combine allocation and sealing, returning both sealed and unsealed capabilities to an object. 
+The unsealed capability can be used directly, the sealed capability can be passed to other code and unsealed only by calling the allocator's `token_unseal` function with the capability used to allocate the object. + +A compartment can use this to generate software-defined capabilities that represent dynamic resources. +For example, a network stack can use it to allocate the state associated with a connection. +The scheduler uses the same mechanism for providing capabilities for cross-thread communication so that, for example, only a holder of the relevant capability can send or receive messages in a message queue. + +Static software-defined capabilities +------------------------------------ + +Dynamic software-defined capabilities, as described in the previous section, are exposed to the allocating compartment and must be passed to other compartments (usually in response to some request). +This presents a bootstrapping problem: what authorises a compartment to request a capability? +This problem is addressed by providing a mechanism provision software-defined capabilities (managed by one compartment) to another compartment at *build time*.
```suggestion This problem is addressed by providing a mechanism for provisioning software-defined capabilities (managed by one compartment) to another compartment at *build time*. ```
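The hunk above describes the token pattern: allocation returns both a sealed and an unsealed capability, and only the key used at allocation time can unseal. A minimal software model of that pattern (the names `TokenKey`, `token_allocate`, and `token_unseal_model` are illustrative, not the real `token.h` API, and an opaque key check stands in for hardware sealing):

```cpp
#include <cassert>
#include <cstdint>

struct TokenKey { uint32_t id; };

// A sealed handle: the payload is unusable without unsealing.
struct SealedObject
{
	uint32_t sealingKey; // which key this object was sealed with
	void    *object;     // payload
};

// Combined allocation + sealing: returns the sealed handle.
SealedObject token_allocate(const TokenKey &key, void *object)
{
	return SealedObject{key.id, object};
}

// Unsealing succeeds only with the key used at allocation time.
void *token_unseal_model(const TokenKey &key, const SealedObject &sealed)
{
	return (sealed.sealingKey == key.id) ? sealed.object : nullptr;
}
```

This is why a sealed capability can be freely delegated between compartments: without the authorising key it is just an opaque value.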
cheriot-rtos
github_2023
others
84
CHERIoT-Platform
rmn30
@@ -0,0 +1,100 @@ +Static sealed objects in CHERIoT RTOS +===================================== + +CHERIoT is a capability system. +This extends from the hardware (all memory accesses require an authorising capability in a register, provided as an operand to an instruction) up through the software abstractions in the RTOS. +In a capability system, any privileged operation requires an explicit capability to authorise it. + +Privilege is a somewhat fluid notion in a system with fine-grained mutual distrust but, in general, any operation that affects state beyond the current compartment (and, especially, anything related to external hardware) is considered privileged. +This includes actions such as: + + - Acknowledging interrupts + - Allocating heap memory + - Establishing network connections + - Flashing LEDs + - Firing missiles + +Each of these should require that the caller present a capability that authorises the callee to perform the action on behalf of the caller. + +Delegation is an important part of a capability system. +A capability may be passed from compartment A to compartment B, which can then use it to ask C to perform some privileged operation. +The identity of the immediate caller does not matter. + +CHERI capabilities to software-defined capabilities +--------------------------------------------------- + +CHERI provides a hardware mechanism for building software-defined capabilities: sealing. +The sealing mechanism allows a pointer (a CHERI capability) to be made immutable and unusable until it is unsealed using an authorising capability. +The CHERIoT ISA has a limited set of space for sealing types and so these are virtualised with the allocator's [token](../sdk/include/token.h) APIs. +These APIs combine allocation and sealing, returning both sealed and unsealed capabilities to an object. 
+The unsealed capability can be used directly, the sealed capability can be passed to other code and unsealed only by calling the allocator's `token_unseal` function with the capability used to allocate the object. + +A compartment can use this to generate software-defined capabilities that represent dynamic resources. +For example, a network stack can use it to allocate the state associated with a connection. +The scheduler uses the same mechanism for providing capabilities for cross-thread communication so that, for example, only a holder of the relevant capability can send or receive messages in a message queue. + +Static software-defined capabilities +------------------------------------ + +Dynamic software-defined capabilities, as described in the previous section, are exposed to the allocating compartment and must be passed to other compartments (usually in response to some request). +This presents a bootstrapping problem: what authorises a compartment to request a capability? +This problem is addressed by providing a mechanism provision software-defined capabilities (managed by one compartment) to another compartment at *build time*. +The simplest case for this is (allocator capabilities)[Allocator.md], which authorise allocation and so are necessary to be able to create any dynamic software-defined capabilities. +The scheduler also uses this mechanism for capabilities that allow interaction with [interrupts](Interrupts.md). + +Objects created in this way are allocated outside of any compartment's global region and are accessed only via capabilities provided by the loader. +These capabilities use the allocator's token mechanism and so can be unsealed only with the relevant authorising capability + +For more detail on how to use the static sealing mechanism, see [`compartment-macros.h`](../sdk/include/compartment-macros.h). 
+ +### Exporting a sealing type + +The `STATIC_SEALING_TYPE` macro defines a new sealing type that the loader can use to seal static objects. +This macro also evaluates to the capability that permits unsealing. + +Static sealing capabilities are defined by both their name and the name of the compartment that exports them and so the name that you pick does not need to be unique (though, for documentation purposes, it should not be too generic). + +### Creating a sealed value + +Static sealed values are created with the `DECLARE_STATIC_SEALED_VALUE` and `DEFINE_STATIC_SEALED_VALUE` macros. +These construct an object with the specified type and contents. + +**Note:** These objects may not contain pointers. + +### Auditing + +Static sealed objects are visible to the linker and set up by the loader. +As such, they are amenable to auditing. +Each static sealing type will have an entry in the exports section of its compartment of the following form: + +```json + { + "export_symbol": "__export.sealing_type.static_sealing_inner.SealingType", + "exported": true, + "kind": "SealingKey" + }, +``` + +This example is taken from the test suite. +It gives the unique symbol name for the sealing key (which includes the compartment name and the key name) and indicates that it is a sealing key.
```suggestion It gives the unique symbol name for the sealing key (which includes the compartment name, `static_sealing_inner`, and the key name, `SealingType`) and indicates that it is a sealing key. ```
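The audit entry quoted in the hunk shows the naming scheme: a static sealing type is identified by its compartment plus its local key name, which is why local names need not be globally unique. A sketch of that symbol construction (a model, not the real `compartment-macros.h` implementation):

```cpp
#include <cassert>
#include <string>

// Model of the STATIC_SEALING_TYPE naming scheme: compartment name plus
// local key name together form the unique export symbol that appears in
// the linker's audit report.
struct StaticSealingType
{
	std::string compartment;
	std::string name;
};

std::string export_symbol(const StaticSealingType &type)
{
	return "__export.sealing_type." + type.compartment + "." + type.name;
}
```

Two compartments can each export a key named `SealingType` without colliding, because the compartment name is part of the symbol.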
cheriot-rtos
github_2023
cpp
82
CHERIoT-Platform
nwf-msr
@@ -316,29 +315,395 @@ namespace return capability; } + /** + * Object representing a claim. When a heap object is claimed, an instance + * of this structure exists to track the reference count per claimer. + */ + class Claim + { + /** + * The identifier of the owning allocation capability. + */ + uint16_t allocatorIdentifier = 0; + /** + * Next 'pointer' encoded as a shifted offset from the start of the + * heap. + */ + uint16_t encodedNext = 0; + /** + * Saturating reference count. We use a single
A single...?
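The truncated comment the reviewer flags describes a saturating counter. Extracting the `reference_add`/`reference_remove` logic from the hunk into a standalone sketch makes the pinning behaviour visible: once the count reaches its maximum it is pinned there forever, so a wrapped counter can never cause a premature free (the object leaks instead):

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Saturating reference count, mirroring the logic in the diff.
struct SaturatingRefCount
{
	uint32_t count = 1;

	// Add a reference; if this would overflow, pin at max instead.
	void add()
	{
		if (count != std::numeric_limits<uint32_t>::max())
		{
			count++;
		}
	}

	// Drop a reference; returns true when the count reaches zero.
	// A pinned count is never decremented, so this never returns true
	// for a pinned object.
	bool remove()
	{
		if (count != std::numeric_limits<uint32_t>::max())
		{
			count--;
		}
		return count == 0;
	}
};
```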
cheriot-rtos
github_2023
cpp
82
CHERIoT-Platform
nwf-msr
@@ -316,29 +315,395 @@ namespace return capability; } + /** + * Object representing a claim. When a heap object is claimed, an instance + * of this structure exists to track the reference count per claimer. + */ + class Claim + { + /** + * The identifier of the owning allocation capability. + */ + uint16_t allocatorIdentifier = 0; + /** + * Next 'pointer' encoded as a shifted offset from the start of the + * heap. + */ + uint16_t encodedNext = 0; + /** + * Saturating reference count. We use a single + */ + uint32_t referenceCount = 1; + + /** + * Private constructor, creates a new claim with a single reference + * count. + */ + Claim(uint16_t identifier, uint16_t nextClaim) + : allocatorIdentifier(identifier), encodedNext(nextClaim) + { + } + + /** + * Destructor is private, claims should always be destroyed via + * `destroy`. + */ + ~Claim() = default; + + friend class Iterator; + + public: + /** + * Returns the owner of this claim. + */ + [[nodiscard]] uint16_t owner() const + { + return allocatorIdentifier; + } + + /** + * Returns the value of the compressed next pointer. + */ + [[nodiscard]] uint16_t encoded_next() const + { + return encodedNext; + } + + /** + * Claims list iterator. This wraps a next pointer and so can be used + * both to inspect a value and update it. + */ + class Iterator + { + /** + * Placeholder value for end iterators. + */ + static inline const uint16_t EndPlaceholder = 0; + + /** + * A pointer to the encoded next pointer. + */ + uint16_t *encodedNextPointer = + const_cast<uint16_t *>(&EndPlaceholder); + + public: + /** + * Default constructor returns a generic end iterator. + */ + Iterator() = default; + + /// Copy constructor. + __always_inline Iterator(const Iterator &other) = default; + + /// Constructor from an explicit next pointer. + __always_inline Iterator(uint16_t *nextPointer) + : encodedNextPointer(nextPointer) + { + } + + /** + * Dereference. Returns the claim that this iterator points to. 
+ */ + __always_inline Claim *operator*() + { + return Claim::from_encoded_offset(*encodedNextPointer); + } + + /** + * Dereference. Returns the claim that this iterator points to. + */ + __always_inline Claim *operator->() + { + return Claim::from_encoded_offset(*encodedNextPointer); + } + + /// Iteration termination condition. + __always_inline bool operator!=(const Iterator Other) + { + return *encodedNextPointer != *Other.encodedNextPointer; + } + + /** + * Preincrement, moves to the next element. + */ + Iterator &operator++() + { + Claim *next = **this; + encodedNextPointer = + next != nullptr ? &next->encodedNext : nullptr; + return *this; + } + + /** + * Assignment, replaces the claim that this iterator points to with + * the new one. + */ + Iterator &operator=(Claim *claim) + { + *encodedNextPointer = claim->encode_address(); + return *this; + } + + /** + * Returns the next pointer that this iterator refers to. + */ + uint16_t *pointer() + { + return encodedNextPointer; + } + }; + + /** + * Allocate a new claim. This will fail if space is not immediately + * available. + * + * Returns a pointer to the new allocation on success, nullptr on + * failure. + */ + static Claim *create(PrivateAllocatorCapabilityState &capability, + uint16_t next) + { + auto space = gm->mspace_dispatch( + sizeof(Claim), capability.quota, capability.identifier); + if (!std::holds_alternative<Capability<void>>(space)) + { + return nullptr; + } + return new (std::get<Capability<void>>(space)) + Claim(capability.identifier, next); + } + + /** + * Destroy a claim, which must have been allocated with `capability`. 
+ */ + static void destroy(PrivateAllocatorCapabilityState &capability, + Claim *claim) + { + Capability heap{gm->heapStart}; + heap.address() = Capability{claim}.address(); + auto chunk = MChunkHeader::from_body(heap); + capability.quota += chunk->size_get(); + // We could skip quarantine for these objects, since we know that + // they haven't escaped, but they're small so it's probably not + // worthwhile. + gm->mspace_free(*chunk, sizeof(Claim)); + } + + /** + * Add a reference. If this would overflow, the reference is pinned + * and this never decrements. + */ + void reference_add() + { + if (referenceCount != + std::numeric_limits<decltype(referenceCount)>::max()) + { + referenceCount++; + } + } + + /** + * Decrement the reference count and return whether this has dropped + * the reference count to 0. + */ + bool reference_remove() + { + if (referenceCount != + std::numeric_limits<decltype(referenceCount)>::max()) + { + referenceCount--; + } + return referenceCount == 0; + } + + /** + * Decode an encoded offset and return a pointer to the claim. + */ + static Claim *from_encoded_offset(uint16_t offset) + { + if (offset == 0) + { + return nullptr; + } + Capability<Claim> ret{gm->heapStart.cast<Claim>()}; + ret.address() += offset << MallocAlignShift; + ret.bounds() = sizeof(Claim); + return ret; + } + + /** + * Encode the address of this object in a 16-bit value. + */ + uint16_t encode_address() + { + ptraddr_t address = Capability{this}.address(); + address -= gm->heapStart.address(); + Debug::Assert((address & MallocAlignMask) == 0, + "Claim at address {} is insufficiently aligned", + address); + address >>= MallocAlignShift; + Debug::Assert(address <= std::numeric_limits<uint16_t>::max(), + "Encoded claim address is too large: {}", + address); + return address; + } + }; + static_assert(sizeof(Claim) <= (1 << MallocAlignShift), + "Claims should fit in the smallest possible allocation"); + + /** + * Find a claim if one exists. 
Returns a reference to the next pointer + * that refers to this claim. + */ + std::pair<uint16_t &, Claim *> claim_find(uint16_t owner, + MChunkHeader &chunk) + { + for (Claim::Iterator i{&chunk.claims}, end; i != end; ++i) + { + Claim *claim = *i; + if (claim->owner() == owner) + { + return {*i.pointer(), claim}; + } + } + return {chunk.claims, nullptr}; + } + + /** + * Add a claim to a chunk, owned by `owner`. This returns true if the + * claim was successfully added, false otherwise. + */ + bool claim_add(PrivateAllocatorCapabilityState &owner, MChunkHeader &chunk) + { + Debug::log("Adding claim for {}", owner.identifier); + auto [next, claim] = claim_find(owner.identifier, chunk); + if (claim) + { + Debug::log("Adding second claim"); + claim->reference_add(); + return true; + } + bool isOwner = (chunk.ownerID == owner.identifier); + size_t size = chunk.size_get(); + if (!isOwner) + { + if (owner.quota < size) + { + Debug::log("quota insufficient"); + return false; + } + owner.quota -= size; + } + claim = Claim::create(owner, next); + if (claim != nullptr) + { + Debug::log("Allocated new claim"); + // If this is the owner, remove the owner and downgrade our + // ownership to a claim. This simplifies the deallocation path. + if (isOwner) + { + chunk.ownerID = 0; + claim->reference_add(); + } + next = claim->encode_address(); + return true; + } + // If we failed to allocate the claim object, undo adding this to our + // quota. + if (!isOwner) + { + owner.quota += size; + } + Debug::log("Failed to add claim"); + return false; + } + + /** + * Drop a claim on an object by the specified allocator capability. If + * `reallyDrop` is false then this does not actually drop the claim but + * returns true if it *could have* dropped a claim. + * Returns true if a claim was dropped, false otherwise. 
+ */ + bool claim_drop(PrivateAllocatorCapabilityState &owner, + MChunkHeader &chunk, + bool reallyDrop) + { + Debug::log("Dropping claim with {} ({})", owner.identifier, &owner); + auto [next, claim] = claim_find(owner.identifier, chunk); + // If there is no claim, fail. + if (claim == nullptr) + { + return false; + } + if (!reallyDrop) + { + return true; + } + // Drop the reference. If this results in the last claim going away,
```suggestion // Drop the reference. If this results in the last reference going away, ```
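The `Claim` structure in the hunk above compresses its next pointer into a `uint16_t` by storing `(address - heapStart) >> MallocAlignShift`, with 0 reserved as the null/end marker. A standalone sketch of that encoding (the shift and heap base here are illustrative values, not the real configuration):

```cpp
#include <cassert>
#include <cstdint>

constexpr unsigned  MallocAlignShift = 3;          // illustrative
constexpr uintptr_t HeapStart        = 0x80000000; // illustrative

// Encode a heap address as a shifted offset from the heap start.
uint16_t encode_address(uintptr_t address)
{
	uintptr_t offset = (address - HeapStart) >> MallocAlignShift;
	return static_cast<uint16_t>(offset);
}

// Decode; 0 is the encoding of the null pointer / end of list.
uintptr_t decode_offset(uint16_t encoded)
{
	if (encoded == 0)
	{
		return 0;
	}
	return HeapStart + (static_cast<uintptr_t>(encoded) << MallocAlignShift);
}
```

With a 16-bit field and a 3-bit shift this addresses a 512 KiB heap, which is why the diff asserts that encoded claim addresses fit in the range.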
cheriot-rtos
github_2023
cpp
82
CHERIoT-Platform
nwf-msr
@@ -316,29 +315,395 @@ namespace return capability; } + /** + * Object representing a claim. When a heap object is claimed, an instance + * of this structure exists to track the reference count per claimer. + */ + class Claim + { + /** + * The identifier of the owning allocation capability. + */ + uint16_t allocatorIdentifier = 0; + /** + * Next 'pointer' encoded as a shifted offset from the start of the + * heap. + */ + uint16_t encodedNext = 0; + /** + * Saturating reference count. We use a single + */ + uint32_t referenceCount = 1; + + /** + * Private constructor, creates a new claim with a single reference + * count. + */ + Claim(uint16_t identifier, uint16_t nextClaim) + : allocatorIdentifier(identifier), encodedNext(nextClaim) + { + } + + /** + * Destructor is private, claims should always be destroyed via + * `destroy`. + */ + ~Claim() = default; + + friend class Iterator; + + public: + /** + * Returns the owner of this claim. + */ + [[nodiscard]] uint16_t owner() const + { + return allocatorIdentifier; + } + + /** + * Returns the value of the compressed next pointer. + */ + [[nodiscard]] uint16_t encoded_next() const + { + return encodedNext; + } + + /** + * Claims list iterator. This wraps a next pointer and so can be used + * both to inspect a value and update it. + */ + class Iterator + { + /** + * Placeholder value for end iterators. + */ + static inline const uint16_t EndPlaceholder = 0; + + /** + * A pointer to the encoded next pointer. + */ + uint16_t *encodedNextPointer = + const_cast<uint16_t *>(&EndPlaceholder); + + public: + /** + * Default constructor returns a generic end iterator. + */ + Iterator() = default; + + /// Copy constructor. + __always_inline Iterator(const Iterator &other) = default; + + /// Constructor from an explicit next pointer. + __always_inline Iterator(uint16_t *nextPointer) + : encodedNextPointer(nextPointer) + { + } + + /** + * Dereference. Returns the claim that this iterator points to. 
+ */ + __always_inline Claim *operator*() + { + return Claim::from_encoded_offset(*encodedNextPointer); + } + + /** + * Dereference. Returns the claim that this iterator points to. + */ + __always_inline Claim *operator->() + { + return Claim::from_encoded_offset(*encodedNextPointer); + } + + /// Iteration termination condition. + __always_inline bool operator!=(const Iterator Other) + { + return *encodedNextPointer != *Other.encodedNextPointer; + } + + /** + * Preincrement, moves to the next element. + */ + Iterator &operator++() + { + Claim *next = **this; + encodedNextPointer = + next != nullptr ? &next->encodedNext : nullptr; + return *this; + } + + /** + * Assignment, replaces the claim that this iterator points to with + * the new one. + */ + Iterator &operator=(Claim *claim) + { + *encodedNextPointer = claim->encode_address(); + return *this; + } + + /** + * Returns the next pointer that this iterator refers to. + */ + uint16_t *pointer() + { + return encodedNextPointer; + } + }; + + /** + * Allocate a new claim. This will fail if space is not immediately + * available. + * + * Returns a pointer to the new allocation on success, nullptr on + * failure. + */ + static Claim *create(PrivateAllocatorCapabilityState &capability, + uint16_t next) + { + auto space = gm->mspace_dispatch( + sizeof(Claim), capability.quota, capability.identifier); + if (!std::holds_alternative<Capability<void>>(space)) + { + return nullptr; + } + return new (std::get<Capability<void>>(space)) + Claim(capability.identifier, next); + } + + /** + * Destroy a claim, which must have been allocated with `capability`. 
+ */ + static void destroy(PrivateAllocatorCapabilityState &capability, + Claim *claim) + { + Capability heap{gm->heapStart}; + heap.address() = Capability{claim}.address(); + auto chunk = MChunkHeader::from_body(heap); + capability.quota += chunk->size_get(); + // We could skip quarantine for these objects, since we know that + // they haven't escaped, but they're small so it's probably not + // worthwhile. + gm->mspace_free(*chunk, sizeof(Claim)); + } + + /** + * Add a reference. If this would overflow, the reference is pinned + * and this never decrements. + */ + void reference_add() + { + if (referenceCount != + std::numeric_limits<decltype(referenceCount)>::max()) + { + referenceCount++; + } + } + + /** + * Decrement the reference count and return whether this has dropped + * the reference count to 0. + */ + bool reference_remove() + { + if (referenceCount != + std::numeric_limits<decltype(referenceCount)>::max()) + { + referenceCount--; + } + return referenceCount == 0; + } + + /** + * Decode an encoded offset and return a pointer to the claim. + */ + static Claim *from_encoded_offset(uint16_t offset) + { + if (offset == 0) + { + return nullptr; + } + Capability<Claim> ret{gm->heapStart.cast<Claim>()}; + ret.address() += offset << MallocAlignShift; + ret.bounds() = sizeof(Claim); + return ret; + } + + /** + * Encode the address of this object in a 16-bit value. + */ + uint16_t encode_address() + { + ptraddr_t address = Capability{this}.address(); + address -= gm->heapStart.address(); + Debug::Assert((address & MallocAlignMask) == 0, + "Claim at address {} is insufficiently aligned", + address); + address >>= MallocAlignShift; + Debug::Assert(address <= std::numeric_limits<uint16_t>::max(), + "Encoded claim address is too large: {}", + address); + return address; + } + }; + static_assert(sizeof(Claim) <= (1 << MallocAlignShift), + "Claims should fit in the smallest possible allocation"); + + /** + * Find a claim if one exists. 
Returns a reference to the next pointer + * that refers to this claim. + */ + std::pair<uint16_t &, Claim *> claim_find(uint16_t owner, + MChunkHeader &chunk) + { + for (Claim::Iterator i{&chunk.claims}, end; i != end; ++i) + { + Claim *claim = *i; + if (claim->owner() == owner) + { + return {*i.pointer(), claim}; + } + } + return {chunk.claims, nullptr}; + } + + /** + * Add a claim to a chunk, owned by `owner`. This returns true if the + * claim was successfully added, false otherwise. + */ + bool claim_add(PrivateAllocatorCapabilityState &owner, MChunkHeader &chunk) + { + Debug::log("Adding claim for {}", owner.identifier); + auto [next, claim] = claim_find(owner.identifier, chunk); + if (claim) + { + Debug::log("Adding second claim"); + claim->reference_add(); + return true; + } + bool isOwner = (chunk.ownerID == owner.identifier); + size_t size = chunk.size_get(); + if (!isOwner) + { + if (owner.quota < size) + { + Debug::log("quota insufficient"); + return false; + } + owner.quota -= size; + } + claim = Claim::create(owner, next); + if (claim != nullptr) + { + Debug::log("Allocated new claim"); + // If this is the owner, remove the owner and downgrade our + // ownership to a claim. This simplifies the deallocation path. + if (isOwner) + { + chunk.ownerID = 0; + claim->reference_add(); + } + next = claim->encode_address(); + return true; + } + // If we failed to allocate the claim object, undo adding this to our + // quota. + if (!isOwner) + { + owner.quota += size; + } + Debug::log("Failed to add claim"); + return false; + } + + /** + * Drop a claim on an object by the specified allocator capability. If + * `reallyDrop` is false then this does not actually drop the claim but + * returns true if it *could have* dropped a claim. + * Returns true if a claim was dropped, false otherwise. 
+ */ + bool claim_drop(PrivateAllocatorCapabilityState &owner, + MChunkHeader &chunk, + bool reallyDrop) + { + Debug::log("Dropping claim with {} ({})", owner.identifier, &owner); + auto [next, claim] = claim_find(owner.identifier, chunk); + // If there is no claim, fail. + if (claim == nullptr) + { + return false; + } + if (!reallyDrop) + { + return true; + } + // Drop the reference. If this results in the last claim going away, + // destroy this claim structure. + if (claim->reference_remove()) + { + next = claim->encoded_next(); + size_t size = chunk.size_get(); + owner.quota += size; + Claim::destroy(owner, claim); + } + return true; + } + __noinline int heap_free_internal(SObj heapCapability, void *rawPointer, bool reallyFree) { auto *capability = malloc_capability_unseal(heapCapability); if (capability == nullptr) { - Debug::log("Invalid heap capabilityL {}", heapCapability); + Debug::log("Invalid heap capability {}", heapCapability); return -EPERM; } Capability<void> mem{rawPointer}; if (!mem.is_valid()) { - return 0; + return -EINVAL; } - // Use the default memory space. check_gm(); - if (!gm->is_free_cap_inbounds(mem)) + // Find the chunk that corresponds to this allocation. + auto *chunk = gm->allocation_start(mem.address()); + if (!chunk) { return -EINVAL; } - return gm->mspace_free( - mem, capability->quota, capability->identifier, reallyFree); + ptraddr_t start = chunk->body().address(); + size_t bodySize = gm->chunk_body_size(*chunk); + // Is the pointer that we're freeing a pointer to the entire allocation? + bool isPrecise = (start == mem.base()) && (bodySize == mem.length()); + // If this is a precise allocation, see if we can
if we can... free it now, because it is otherwise unclaimed?
cheriot-rtos
github_2023
others
79
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,120 @@ +Design of the allocator claims model +==================================== + +This document describes the design of the claim model that the allocator should provide and how it should be implemented. + +The problem +----------- + +Consider the scenario where compartment A passes a heap buffer to compartment B. +Another thread executing in compartment A may free the buffer in the middle of B's execution. +This will cause B to fault. +This is a concurrent operation and so is a classic TOCTOU problem.
```suggestion This is a concurrent operation and so sets the stage for classic TOCTOU problems. ``` as we haven't actually said that B has performed any Checks yet.
cheriot-rtos
github_2023
others
79
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,120 @@ +Design of the allocator claims model +==================================== + +This document describes the design of the claim model that the allocator should provide and how it should be implemented. + +The problem +----------- + +Consider the scenario where compartment A passes a heap buffer to compartment B. +Another thread executing in compartment A may free the buffer in the middle of B's execution. +This will cause B to fault. +This is a concurrent operation and so is a classic TOCTOU problem. +In the core compartments, we mostly avoid this by running with interrupts disabled, but this is not desirable for the entire system. + +To mitigate this, we need some mechanism that allows B to prevent the deallocation of an object. +This is further complicated by the fact that A and B may exist in a mutual distrust relationship and so B must not be permitted to consume A's memory allocation quota. + +There are two variants of this problem, which may have different solutions. +In one case, the callee wishes to use an object for the duration of the call. +In another case, the callee wishes to use an object for a long period. +The main difference between these is the relative costs. +For objects held for the duration of a call, the cost of two calls into the allocator to hold and release the object may be prohibitive, whereas this is likely to be negligible for objects held for a long time. + +Additional constraints +---------------------- + +In addition to solving the basic problem, we have a number of additional constraints on the design space: + + - A solution should be possible to apply at the boundaries when wrapping an existing component in a compartment. + - Most objects will not use this mechanism (they will be reachable from a single compartment) and so there should be no overhead when not in use. + - We are targeting resource-constrained systems and so the total overhead must be low. 
+ - We assume compartments may be malicious and so they should not be able to trick the allocator into creating a data structure that takes a very large amount of time to walk. + +Possible approach 0: Explicit copies +------------------------------------ + +The first approach is simply to punt on the problem entirely. +When an OS kernel interacts with userspace memory, it calls explicit helpers to copy data to and from the kernel. +This is necessary because the userspace process may provide invalid pointers or update the memory map to invalidate pointers while the kernel runs. + +In many situations, the same approach could work with CHERIoT. +We could provide a safe memcpy as a library function that runs with interrupts disabled, checks the source and destination pointers, and then performs the copy (up to some bounded size) and reports whether the copy succeeded. +This would be sufficient in a lot of cases but is not a very friendly programmer model. +In particular, it means that any compartment wrapping existing unmodified code would have to either (deep) copy any objects passed in or would have to make invasive changes to the wrapped code. + +Note: This approach can be orthogonal to others and so we should implement it anyway.
:+1:
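The "explicit copies" approach in the row above calls for a copy helper that bounds the size and reports failure instead of faulting. A sketch under stated assumptions: in the real design the helper would run with interrupts disabled and check CHERI tags and bounds on the capabilities; here validity is modelled by the buffer sizes alone, and `MaxSafeCopy` is an illustrative bound:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

constexpr size_t MaxSafeCopy = 4096; // illustrative bound

// Bounded, checked copy: reports failure rather than trapping mid-copy.
bool safe_copy(void *dst, size_t dstSize,
               const void *src, size_t srcSize, size_t length)
{
	if (length > MaxSafeCopy || length > dstSize || length > srcSize)
	{
		return false;
	}
	std::memcpy(dst, src, length);
	return true;
}
```

The caller checks the return value instead of risking a fault, which is what lets this wrap unmodified code at a compartment boundary.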
cheriot-rtos
github_2023
others
79
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,120 @@ +Design of the allocator claims model +==================================== + +This document describes the design of the claim model that the allocator should provide and how it should be implemented. + +The problem +----------- + +Consider the scenario where compartment A passes a heap buffer to compartment B. +Another thread executing in compartment A may free the buffer in the middle of B's execution. +This will cause B to fault. +This is a concurrent operation and so is a classic TOCTOU problem. +In the core compartments, we mostly avoid this by running with interrupts disabled, but this is not desirable for the entire system. + +To mitigate this, we need some mechanism that allows B to prevent the deallocation of an object. +This is further complicated by the fact that A and B may exist in a mutual distrust relationship and so B must not be permitted to consume A's memory allocation quota. + +There are two variants of this problem, which may have different solutions. +In one case, the callee wishes to use an object for the duration of the call. +In another case, the callee wishes to use an object for a long period. +The main difference between these is the relative costs. +For objects held for the duration of a call, the cost of two calls into the allocator to hold and release the object may be prohibitive, whereas this is likely to be negligible for objects held for a long time. + +Additional constraints +---------------------- + +In addition to solving the basic problem, we have a number of additional constraints on the design space: + + - A solution should be possible to apply at the boundaries when wrapping an existing component in a compartment. + - Most objects will not use this mechanism (they will be reachable from a single compartment) and so there should be no overhead when not in use. + - We are targeting resource-constrained systems and so the total overhead must be low. 
+ - We assume compartments may be malicious and so they should not be able to trick the allocator into creating a data structure that takes a very large amount of time to walk. + +Possible approach 0: Explicit copies +------------------------------------ + +The first approach is simply to punt on the problem entirely. +When an OS kernel interacts with userspace memory, it calls explicit helpers to copy data to and from the kernel. +This is necessary because the userspace process may provide invalid pointers or update the memory map to invalidate pointers while the kernel runs. + +In many situations, the same approach could work with CHERIoT. +We could provide a safe memcpy as a library function that runs with interrupts disabled, checks the source and destination pointers, and then performs the copy (up to some bounded size) and reports whether the copy succeeded. +This would be sufficient in a lot of cases but is not a very friendly programmer model. +In particular, it means that any compartment wrapping existing unmodified code would have to either (deep) copy any objects passed in or would have to make invasive changes to the wrapped code. + +Note: This approach can be orthogonal to others and so we should implement it anyway. + +Possible approach 1: Hazard pointers +------------------------------------ + +Hazard pointers provide inspiration for the first possible solution. +This approach involves having a per-compartment list of held objects that the allocator must consult before freeing an object. +The allocator would provide an API that allowed a compartment to register a table containing a (bounded) array of held objects and an allocation capability. +On deallocation, the allocator would need to traverse all such lists to determine whether an object is on any of them.
This traversal would have to be done with interrupts off. In lock-free safe memory reclamation's (SMR's) use of hazards, there's a global cooperative assumption that threads will not share their local copies of pointers, so that the number of copies of a given pointer in the hazard lists is strictly decreasing with time. That's not necessarily true of this use of hazards even without considering malice: compartment B might forward a pointer it's holding via a hazard to compartment C, which might then hold it, even after A has attempted to privatize and deallocate it.
cheriot-rtos
github_2023
others
79
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,120 @@ +Design of the allocator claims model +==================================== + +This document describes the design of the claim model that the allocator should provide and how it should be implemented. + +The problem +----------- + +Consider the scenario where compartment A passes a heap buffer to compartment B. +Another thread executing in compartment A may free the buffer in the middle of B's execution. +This will cause B to fault. +This is a concurrent operation and so is a classic TOCTOU problem. +In the core compartments, we mostly avoid this by running with interrupts disabled, but this is not desirable for the entire system. + +To mitigate this, we need some mechanism that allows B to prevent the deallocation of an object. +This is further complicated by the fact that A and B may exist in a mutual distrust relationship and so B must not be permitted to consume A's memory allocation quota. + +There are two variants of this problem, which may have different solutions. +In one case, the callee wishes to use an object for the duration of the call. +In another case, the callee wishes to use an object for a long period. +The main difference between these is the relative costs. +For objects held for the duration of a call, the cost of two calls into the allocator to hold and release the object may be prohibitive, whereas this is likely to be negligible for objects held for a long time. + +Additional constraints +---------------------- + +In addition to solving the basic problem, we have a number of additional constraints on the design space: + + - A solution should be possible to apply at the boundaries when wrapping an existing component in a compartment. + - Most objects will not use this mechanism (they will be reachable from a single compartment) and so there should be no overhead when not in use. + - We are targeting resource-constrained systems and so the total overhead must be low. 
+ - We assume compartments may be malicious and so they should not be able to trick the allocator into creating a data structure that takes a very large amount of time to walk. + +Possible approach 0: Explicit copies +------------------------------------ + +The first approach is simply to punt on the problem entirely. +When an OS kernel interacts with userspace memory, it calls explicit helpers to copy data to and from the kernel. +This is necessary because the userspace process may provide invalid pointers or update the memory map to invalidate pointers while the kernel runs. + +In many situations, the same approach could work with CHERIoT. +We could provide a safe memcpy as a library function that runs with interrupts disabled, checks the source and destination pointers, and then performs the copy (up to some bounded size) and reports whether the copy succeeded. +This would be sufficient in a lot of cases but is not a very friendly programmer model. +In particular, it means that any compartment wrapping existing unmodified code would have to either (deep) copy any objects passed in or would have to make invasive changes to the wrapped code. + +Note: This approach can be orthogonal to others and so we should implement it anyway. + +Possible approach 1: Hazard pointers +------------------------------------ + +Hazard pointers provide inspiration for the first possible solution. +This approach involves having a per-compartment list of held objects that the allocator must consult before freeing an object. +The allocator would provide an API that allowed a compartment to register a table containing a (bounded) array of held objects and an allocation capability. +On deallocation, the allocator would need to traverse all such lists to determine whether an object is on any of them. +If the object is found on a list then the metadata should be updated to mark it as owned by the allocation capability associated with the list. 
+ +This is attractive for the fast-path case, because the compartment wishing to claim an object for the duration of a call needs only to insert a pointer to it into a pre-registered array. +It is unclear whether it is possible to use this interface securely. +In particular, a malicious compartment could allocate a large object and pass a pointer to a small sub-object to the callee. +The callee has no mechanism (other than calling the allocator) to know whether it holds a pointer to an entire object. + +Possible approach 2: Explicit claims +------------------------------------ + +The approach in CheriOS is similar to reference counting. +Compartments may explicitly claim memory and objects may not be deallocated until all claims have been dropped. +Importantly, the CheriOS model handles resource constraints: when you claim an object, it counts towards your quota. + +The down side of this model is that it requires tracking a lot of state. +Straight reference counting is not sufficient because references can be owned by mutually distrusting entities. +With a simple reference counting model, the original attack is still possible: + +1. A allocates an object. +2. A passes a pointer to the object to B. +3. B increments the reference count. +4. A decrements the reference count twice from another thread. +5. B traps. + +Each reference to the object is now state that must be tracked and, most likely, will require an allocation. +We currently have 14 bits spare in the allocation header (slightly more if we shrink the allocator ID, which does not actually require a full 16 bits, since a 16-bit allocator ID would require 2 MiB of RAM to hold all of the corresponding allocation capabilities, leaving no space for the heap) and so could store a heap-start-relative (shifted for alignment) pointer to the head of a linked list of claims.
In the spirit of "O(n) but n is 3", it could suffice to just have a single list of all claims. If that's too flippant, a small hash table keyed on address could also work. In either case, we'd just need one bit in the header to bring the (potential) claims to deallocation's attention.
cheriot-rtos
github_2023
others
79
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,120 @@ +Design of the allocator claims model +==================================== + +This document describes the design of the claim model that the allocator should provide and how it should be implemented. + +The problem +----------- + +Consider the scenario where compartment A passes a heap buffer to compartment B. +Another thread executing in compartment A may free the buffer in the middle of B's execution. +This will cause B to fault. +This is a concurrent operation and so is a classic TOCTOU problem. +In the core compartments, we mostly avoid this by running with interrupts disabled, but this is not desirable for the entire system. + +To mitigate this, we need some mechanism that allows B to prevent the deallocation of an object. +This is further complicated by the fact that A and B may exist in a mutual distrust relationship and so B must not be permitted to consume A's memory allocation quota. + +There are two variants of this problem, which may have different solutions. +In one case, the callee wishes to use an object for the duration of the call. +In another case, the callee wishes to use an object for a long period. +The main difference between these is the relative costs. +For objects held for the duration of a call, the cost of two calls into the allocator to hold and release the object may be prohibitive, whereas this is likely to be negligible for objects held for a long time. + +Additional constraints +---------------------- + +In addition to solving the basic problem, we have a number of additional constraints on the design space: + + - A solution should be possible to apply at the boundaries when wrapping an existing component in a compartment. + - Most objects will not use this mechanism (they will be reachable from a single compartment) and so there should be no overhead when not in use. + - We are targeting resource-constrained systems and so the total overhead must be low. 
+ - We assume compartments may be malicious and so they should not be able to trick the allocator into creating a data structure that takes a very large amount of time to walk. + +Possible approach 0: Explicit copies +------------------------------------ + +The first approach is simply to punt on the problem entirely. +When an OS kernel interacts with userspace memory, it calls explicit helpers to copy data to and from the kernel. +This is necessary because the userspace process may provide invalid pointers or update the memory map to invalidate pointers while the kernel runs. + +In many situations, the same approach could work with CHERIoT. +We could provide a safe memcpy as a library function that runs with interrupts disabled, checks the source and destination pointers, and then performs the copy (up to some bounded size) and reports whether the copy succeeded. +This would be sufficient in a lot of cases but is not a very friendly programmer model. +In particular, it means that any compartment wrapping existing unmodified code would have to either (deep) copy any objects passed in or would have to make invasive changes to the wrapped code. + +Note: This approach can be orthogonal to others and so we should implement it anyway. + +Possible approach 1: Hazard pointers +------------------------------------ + +Hazard pointers provide inspiration for the first possible solution. +This approach involves having a per-compartment list of held objects that the allocator must consult before freeing an object. +The allocator would provide an API that allowed a compartment to register a table containing a (bounded) array of held objects and an allocation capability. +On deallocation, the allocator would need to traverse all such lists to determine whether an object is on any of them. +If the object is found on a list then the metadata should be updated to mark it as owned by the allocation capability associated with the list. 
+ +This is attractive for the fast-path case, because the compartment wishing to claim an object for the duration of a call needs only to insert a pointer to it into a pre-registered array. +It is unclear whether it is possible to use this interface securely. +In particular, a malicious compartment could allocate a large object and pass a pointer to a small sub-object to the callee. +The callee has no mechanism (other than calling the allocator) to know whether it holds a pointer to an entire object. + +Possible approach 2: Explicit claims +------------------------------------ + +The approach in CheriOS is similar to reference counting. +Compartments may explicitly claim memory and objects may not be deallocated until all claims have been dropped. +Importantly, the CheriOS model handles resource constraints: when you claim an object, it counts towards your quota. + +The down side of this model is that it requires tracking a lot of state. +Straight reference counting is not sufficient because references can be owned by mutually distrusting entities. +With a simple reference counting model, the original attack is still possible: + +1. A allocates an object. +2. A passes a pointer to the object to B. +3. B increments the reference count. +4. A decrements the reference count twice from another thread. +5. B traps. + +Each reference to the object is now state that must be tracked and, most likely, will require an allocation. +We currently have 14 bits spare in the allocation header (slightly more if we shrink the allocator ID, which does not actually require a full 16 bits, since a 16-bit allocator ID would require 2 MiB of RAM to hold all of the corresponding allocation capabilities, leaving no space for the heap) and so could store a heap-start-relative (shifted for alignment) pointer to the head of a linked list of claims. +In the common case, this field will be 0 (the start of the heap cannot be allocated) and so free can skip it. 
+ +The proposed solution involves constructing a linked list of structures holding an allocator ID, a saturating reference count, and a next pointer.
What happens on saturation? Does the object become immortal?
cheriot-rtos
github_2023
others
79
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,120 @@ +Design of the allocator claims model +==================================== + +This document describes the design of the claim model that the allocator should provide and how it should be implemented. + +The problem +----------- + +Consider the scenario where compartment A passes a heap buffer to compartment B. +Another thread executing in compartment A may free the buffer in the middle of B's execution. +This will cause B to fault. +This is a concurrent operation and so is a classic TOCTOU problem. +In the core compartments, we mostly avoid this by running with interrupts disabled, but this is not desirable for the entire system. + +To mitigate this, we need some mechanism that allows B to prevent the deallocation of an object. +This is further complicated by the fact that A and B may exist in a mutual distrust relationship and so B must not be permitted to consume A's memory allocation quota. + +There are two variants of this problem, which may have different solutions. +In one case, the callee wishes to use an object for the duration of the call. +In another case, the callee wishes to use an object for a long period. +The main difference between these is the relative costs. +For objects held for the duration of a call, the cost of two calls into the allocator to hold and release the object may be prohibitive, whereas this is likely to be negligible for objects held for a long time. + +Additional constraints +---------------------- + +In addition to solving the basic problem, we have a number of additional constraints on the design space: + + - A solution should be possible to apply at the boundaries when wrapping an existing component in a compartment. + - Most objects will not use this mechanism (they will be reachable from a single compartment) and so there should be no overhead when not in use. + - We are targeting resource-constrained systems and so the total overhead must be low. 
+ - We assume compartments may be malicious and so they should not be able to trick the allocator into creating a data structure that takes a very large amount of time to walk. + +Possible approach 0: Explicit copies +------------------------------------ + +The first approach is simply to punt on the problem entirely. +When an OS kernel interacts with userspace memory, it calls explicit helpers to copy data to and from the kernel. +This is necessary because the userspace process may provide invalid pointers or update the memory map to invalidate pointers while the kernel runs. + +In many situations, the same approach could work with CHERIoT. +We could provide a safe memcpy as a library function that runs with interrupts disabled, checks the source and destination pointers, and then performs the copy (up to some bounded size) and reports whether the copy succeeded. +This would be sufficient in a lot of cases but is not a very friendly programmer model. +In particular, it means that any compartment wrapping existing unmodified code would have to either (deep) copy any objects passed in or would have to make invasive changes to the wrapped code. + +Note: This approach can be orthogonal to others and so we should implement it anyway. + +Possible approach 1: Hazard pointers +------------------------------------ + +Hazard pointers provide inspiration for the first possible solution. +This approach involves having a per-compartment list of held objects that the allocator must consult before freeing an object. +The allocator would provide an API that allowed a compartment to register a table containing a (bounded) array of held objects and an allocation capability. +On deallocation, the allocator would need to traverse all such lists to determine whether an object is on any of them. +If the object is found on a list then the metadata should be updated to mark it as owned by the allocation capability associated with the list.
Ack; I had a comment here and I'm not sure where it went. :( In any case: how is compartment B notified that ownership of the object has transferred from compartment A? When does either compartment B or the allocator know to free the object?
cheriot-rtos
github_2023
others
79
CHERIoT-Platform
rmn30
@@ -0,0 +1,120 @@ +Design of the allocator claims model +==================================== + +This document describes the design of the claim model that the allocator should provide and how it should be implemented. + +The problem +----------- + +Consider the scenario where compartment A passes a heap buffer to compartment B. +Another thread executing in compartment A may free the buffer in the middle of B's execution. +This will cause B to fault. +This is a concurrent operation and so is a classic TOCTOU problem. +In the core compartments, we mostly avoid this by running with interrupts disabled, but this is not desirable for the entire system. + +To mitigate this, we need some mechanism that allows B to prevent the deallocation of an object. +This is further complicated by the fact that A and B may exist in a mutual distrust relationship and so B must not be permitted to consume A's memory allocation quota. + +There are two variants of this problem, which may have different solutions. +In one case, the callee wishes to use an object for the duration of the call. +In another case, the callee wishes to use an object for a long period. +The main difference between these is the relative costs. +For objects held for the duration of a call, the cost of two calls into the allocator to hold and release the object may be prohibitive, whereas this is likely to be negligible for objects held for a long time. + +Additional constraints +---------------------- + +In addition to solving the basic problem, we have a number of additional constraints on the design space: + + - A solution should be possible to apply at the boundaries when wrapping an existing component in a compartment. + - Most objects will not use this mechanism (they will be reachable from a single compartment) and so there should be no overhead when not in use. + - We are targeting resource-constrained systems and so the total overhead must be low. 
+ - We assume compartments may be malicious and so they should not be able to trick the allocator into creating a data structure that takes a very large amount of time to walk. + +Possible approach 0: Explicit copies +------------------------------------ + +The first approach is simply to punt on the problem entirely. +When an OS kernel interacts with userspace memory, it calls explicit helpers to copy data to and from the kernel. +This is necessary because the userspace process may provide invalid pointers or update the memory map to invalidate pointers while the kernel runs. + +In many situations, the same approach could work with CHERIoT. +We could provide a safe memcpy as a library function that runs with interrupts disabled, checks the source and destination pointers, and then performs the copy (up to some bounded size) and reports whether the copy succeeded. +This would be sufficient in a lot of cases but is not a very friendly programmer model. +In particular, it means that any compartment wrapping existing unmodified code would have to either (deep) copy any objects passed in or would have to make invasive changes to the wrapped code. + +Note: This approach can be orthogonal to others and so we should implement it anyway. + +Possible approach 1: Hazard pointers +------------------------------------ + +Hazard pointers provide inspiration for the first possible solution. +This approach involves having a per-compartment list of held objects that the allocator must consult before freeing an object. +The allocator would provide an API that allowed a compartment to register a table containing a (bounded) array of held objects and an allocation capability. +On deallocation, the allocator would need to traverse all such lists to determine whether an object is on any of them. +If the object is found on a list then the metadata should be updated to mark it as owned by the allocation capability associated with the list. 
+ +This is attractive for the fast-path case, because the compartment wishing to claim an object for the duration of a call needs only to insert a pointer to it into a pre-registered array. +It is unclear whether it is possible to use this interface securely. +In particular, a malicious compartment could allocate a large object and pass a pointer to a small sub-object to the callee. +The callee has no mechanism (other than calling the allocator) to know whether it holds a pointer to an entire object.
The allocator's traversal could do a subobject check, although we could really do with an `issubset` instruction for that.
cheriot-rtos
github_2023
others
34
CHERIoT-Platform
rmn30
@@ -6,10 +6,9 @@ SECTIONS . = 0; .compartment_export_table : ALIGN(8) { - # Space for the compartment's PCC and GDC + # Space for the compartment's PCC and GDC and an empty thingy.
```suggestion # Space for the compartment's PCC and GDC and -1 for no compartment_error_handler ```
cheriot-rtos
github_2023
others
74
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,155 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#pragma once + +#include <stdint.h> + +/** + * Concept for checking that a UART driver exposes the right interface. + */ +template<typename T> +concept IsUart = requires(volatile T *v, uint8_t byte) +{ + {v->init()}; + { + v->can_write() + } -> std::same_as<bool>; + { + v->can_read() + } -> std::same_as<bool>; + { + v->blocking_read() + } -> std::same_as<uint8_t>; + {v->blocking_write(byte)}; +}; + +/** + * Generic 16550A memory-mapped register layout. + * + * The registers are 8 bits wide, but typically the bus supports only 4-byte + * (or larger) transactions and so they are padded to a 32-bit word. The + * template parameter allows this to be controlled. + */ +template<typename RegisterType = uint32_t> +class Uart16550 +{ + static void no_custom_init() {} + public: + /** + * The interface to the read/write FIFOs for this UART. + * + * This is also the low byte of the divisor when the divisor latch (bit 7 of + * the `lineControl`) is set. + */ + RegisterType data; + /** + * Interrupt-enabled control / status. Write 1 to enabled, 0 to disable, to + * each of the low four bits: + * + * 0: Data-receive interrupt + * 1: Transmit holding register empty interrupt + * 2: Receive line status interrupts + * 3: Modem status interrupts. + * + * When bit 7 of `lineControl` is set, this is instead the + * divisor-latch-high register and stores the high 8 bits of the divisor. + */ + RegisterType intrEnable; + /** + * Interrupt identification and FIFO enable/disable. + * + * We only care about the low bit here, which enables the FIFO. + */ + RegisterType intrIDandFifo; + /** + * + */ + RegisterType lineControl; + /** + * Modem control. + */ + RegisterType modemControl; + /** + * The line status word. The bits that we care about are: + * + * 0: Receive ready + * 5: Transmit buffer empty + */ + const RegisterType lineStatus; + /** + * Modem status. 
+ */ + const RegisterType modemStatus; + /** + * Scratch register. Unused. + */ + RegisterType scratch; + + /** + * Returns true if the transmit buffer is empty. + */ + __always_inline bool can_write() volatile + { + return lineStatus & (1<<5); + } + + /** + * Returns true if the receive buffer is not. + */ + __always_inline bool can_read() volatile + { + return lineStatus & (1<<0); + } + + /** + * Read one byte, blocking until a byte is available. + */ + uint8_t blocking_read() volatile + { + while (!can_read()) {} + return data; + } + + /** + * Write one byte, blocking until the byte is written. + */ + void blocking_write(uint8_t byte) volatile + { + while (!can_write()) {} + data = byte; + } + + /** + * Initialise the UART. + */ + template<typename T=decltype(no_custom_init)> + void init( + int divisor = 1, + T &&otherSetup = no_custom_init) volatile + { + // Disable interrupts + intrEnable = 0x00; + // Set the divisor latch (we're going to write the divisor) and set the + // character width to 8 bits. + lineControl = 0x83; + // Set the divisor + data = divisor & 0xff; + intrEnable = (divisor >> 8) & 0xff; + // Run any other setup that we were asked to do. + otherSetup(); + // Clear the divisor latch + lineControl = 0x03; + // Enable the FIFO and reset + intrIDandFifo = 0x01; + } +}; + +// A platform can provide a custom version of this. +#ifndef CHERIOT_PLATFORM_CUSTOM_UART +/// The default UART type. +using Uart = Uart16550<uint32_t>; +// Check that our UART matches the concept. +static_assert(IsUart<Uart>);
Maybe add `static_assert(std::is_standard_layout_v<Uart>);` Maybe move the `static_assert(IsUart<Uart>);` outside the `#ifndef CHERIOT_PLATFORM_CUSTOM_UART`, since it probably should hold even for other people's `Uart`s?
cheriot-rtos
github_2023
others
74
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,155 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#pragma once + +#include <stdint.h> + +/** + * Concept for checking that a UART driver exposes the right interface. + */ +template<typename T> +concept IsUart = requires(volatile T *v, uint8_t byte) +{ + {v->init()}; + { + v->can_write() + } -> std::same_as<bool>; + { + v->can_read() + } -> std::same_as<bool>; + { + v->blocking_read() + } -> std::same_as<uint8_t>; + {v->blocking_write(byte)}; +}; + +/** + * Generic 16550A memory-mapped register layout. + * + * The registers are 8 bits wide, but typically the bus supports only 4-byte + * (or larger) transactions and so they are padded to a 32-bit word. The + * template parameter allows this to be controlled. + */ +template<typename RegisterType = uint32_t> +class Uart16550 +{ + static void no_custom_init() {} + public: + /** + * The interface to the read/write FIFOs for this UART. + * + * This is also the low byte of the divisor when the divisor latch (bit 7 of + * the `lineControl`) is set. + */ + RegisterType data; + /** + * Interrupt-enabled control / status. Write 1 to enabled, 0 to disable, to + * each of the low four bits: + * + * 0: Data-receive interrupt + * 1: Transmit holding register empty interrupt + * 2: Receive line status interrupts + * 3: Modem status interrupts. + * + * When bit 7 of `lineControl` is set, this is instead the + * divisor-latch-high register and stores the high 8 bits of the divisor. + */ + RegisterType intrEnable; + /** + * Interrupt identification and FIFO enable/disable. + * + * We only care about the low bit here, which enables the FIFO. + */ + RegisterType intrIDandFifo; + /** + *
The 16550 often leaves me speechless as well, but an empty doc comment feels wrong.
cheriot-rtos
github_2023
others
74
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,155 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#pragma once + +#include <stdint.h> + +/** + * Concept for checking that a UART driver exposes the right interface. + */ +template<typename T> +concept IsUart = requires(volatile T *v, uint8_t byte) +{ + {v->init()}; + { + v->can_write() + } -> std::same_as<bool>; + { + v->can_read() + } -> std::same_as<bool>; + { + v->blocking_read() + } -> std::same_as<uint8_t>; + {v->blocking_write(byte)}; +}; + +/** + * Generic 16550A memory-mapped register layout. + * + * The registers are 8 bits wide, but typically the bus supports only 4-byte + * (or larger) transactions and so they are padded to a 32-bit word. The + * template parameter allows this to be controlled. + */ +template<typename RegisterType = uint32_t> +class Uart16550 +{ + static void no_custom_init() {} + public: + /** + * The interface to the read/write FIFOs for this UART. + * + * This is also the low byte of the divisor when the divisor latch (bit 7 of + * the `lineControl`) is set. + */ + RegisterType data; + /** + * Interrupt-enabled control / status. Write 1 to enabled, 0 to disable, to + * each of the low four bits: + * + * 0: Data-receive interrupt + * 1: Transmit holding register empty interrupt + * 2: Receive line status interrupts + * 3: Modem status interrupts. + * + * When bit 7 of `lineControl` is set, this is instead the + * divisor-latch-high register and stores the high 8 bits of the divisor. + */ + RegisterType intrEnable; + /** + * Interrupt identification and FIFO enable/disable. + * + * We only care about the low bit here, which enables the FIFO. + */ + RegisterType intrIDandFifo; + /** + * + */ + RegisterType lineControl; + /** + * Modem control. + */ + RegisterType modemControl; + /** + * The line status word. The bits that we care about are: + * + * 0: Receive ready + * 5: Transmit buffer empty + */ + const RegisterType lineStatus; + /** + * Modem status. 
+ */ + const RegisterType modemStatus; + /** + * Scratch register. Unused. + */ + RegisterType scratch; + + /** + * Returns true if the transmit buffer is empty. + */ + __always_inline bool can_write() volatile + { + return lineStatus & (1<<5); + } + + /** + * Returns true if the receive buffer is not. + */ + __always_inline bool can_read() volatile + { + return lineStatus & (1<<0); + } + + /** + * Read one byte, blocking until a byte is available. + */ + uint8_t blocking_read() volatile + { + while (!can_read()) {} + return data; + } + + /** + * Write one byte, blocking until the byte is written. + */ + void blocking_write(uint8_t byte) volatile + { + while (!can_write()) {} + data = byte; + } + + /** + * Initialise the UART. + */ + template<typename T=decltype(no_custom_init)> + void init( + int divisor = 1, + T &&otherSetup = no_custom_init) volatile + { + // Disable interrupts + intrEnable = 0x00; + // Set the divisor latch (we're going to write the divisor) and set the + // character width to 8 bits. + lineControl = 0x83; + // Set the divisor + data = divisor & 0xff; + intrEnable = (divisor >> 8) & 0xff; + // Run any other setup that we were asked to do. + otherSetup(); + // Clear the divisor latch + lineControl = 0x03;
Switch the order of clearing the latch and the other setup, so that `otherSetup` could, say, kick on IRQs if desired? ```suggestion // Clear the divisor latch lineControl = 0x03; // Run any other setup that we were asked to do. otherSetup(); ```
cheriot-rtos
github_2023
cpp
78
CHERIoT-Platform
nwf-msr
@@ -460,11 +460,44 @@ namespace (size == (LA_ABS(__export_mem_heap_end) - LA_ABS(__export_mem_heap)))) { + Debug::log("Rounding heap ({}--{}) region", + target, + target + size); + // FIXME: This should not be hard coded. + static constexpr size_t MantissaBits = 9; + static constexpr size_t MantissaMask = + (1 << MantissaBits) - 1; + size_t sizeBits = 32 - __builtin_clz(size); + size_t sizeMask = (sizeBits <= MantissaBits) + ? MantissaMask + : MantissaMask + << (sizeBits - MantissaBits); + Debug::log("Applying mask {} to size", sizeMask); + size_t roundedSize = size & sizeMask; Debug::log( - "Assigning the heap ({}--{}) to the allocator", + "Rounding heap size down from {} to {} (rounded up " + "to {})", + size, + roundedSize, + __builtin_cheri_round_representable_length(size)); + ptraddr_t end = target + size; + ptraddr_t roundedBase = end - roundedSize;
This is assuming that `end` is suitably aligned. We should probably assert that `(end & ~sizeMask) == 0`? That is, I don't think it's worth trying to compensate for the case that it isn't, because it really should be, but all the same, an assertion wouldn't go amiss.
cheriot-rtos
github_2023
cpp
78
CHERIoT-Platform
nwf-msr
@@ -460,11 +460,44 @@ namespace (size == (LA_ABS(__export_mem_heap_end) - LA_ABS(__export_mem_heap)))) { + Debug::log("Rounding heap ({}--{}) region", + target, + target + size); + // FIXME: This should not be hard coded. + static constexpr size_t MantissaBits = 9; + static constexpr size_t MantissaMask = + (1 << MantissaBits) - 1; + size_t sizeBits = 32 - __builtin_clz(size); + size_t sizeMask = (sizeBits <= MantissaBits) + ? MantissaMask + : MantissaMask + << (sizeBits - MantissaBits); + Debug::log("Applying mask {} to size", sizeMask); + size_t roundedSize = size & sizeMask;
Isn't that equivalent to ```suggestion size_t sizeMask = __builtin_cheri_representable_alignment_mask(size); Debug::log("Applying mask {} to size", sizeMask); size_t roundedSize = size & sizeMask; ``` The intermediate steps are different, in that CRAM is going to give you all ones to the left of your `MantissaMask`. In the end, though, that shouldn't matter, because all those bits in `size` are zero from the `__builtin_clz()`, and so they'll be zero in `roundedSize`.
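To make the claimed equivalence concrete, here is a standalone model of the mask-based rounding from the diff. The 9-bit mantissa is an assumption copied from the hunk's `FIXME`, not a queried hardware property; real code would use `__builtin_cheri_representable_alignment_mask` as the reviewer suggests.

```cpp
#include <cassert>
#include <cstddef>

// Hard-coded mantissa width, copied from the diff's FIXME.
constexpr size_t MantissaBits = 9;

// Round a size down so that its bounds fit in a 9-bit mantissa: keep the
// top MantissaBits bits of the value and clear everything below them.
constexpr size_t round_down_representable(size_t size)
{
	constexpr size_t MantissaMask = (size_t{1} << MantissaBits) - 1;
	// The position of the highest set bit determines the exponent.
	// 32-bit sizes are assumed, as on CHERIoT.
	size_t sizeBits = 32 - __builtin_clz(static_cast<unsigned int>(size));
	size_t sizeMask = (sizeBits <= MantissaBits)
	                    ? MantissaMask
	                    : MantissaMask << (sizeBits - MantissaBits);
	return size & sizeMask;
}
```

Sizes small enough to fit entirely in the mantissa come back unchanged; larger sizes lose their low bits. The point in the comment above still holds: a mask with extra one bits above `MantissaMask` (as CRAM would return) changes nothing, because `__builtin_clz` guarantees those bits of `size` are already zero.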
cheriot-rtos
github_2023
cpp
77
CHERIoT-Platform
nwf-msr
@@ -1214,7 +1214,19 @@ extern "C" SchedulerEntryInfo loader_entry_point(const ImgHdr &imgHdr, // accesses these in the revoker is very small and amenable to auditing // (the only memory accesses are a load and a store back at the same // location, with interrupts disabled, to trigger the load barrier). - auto scaryCapabilities = build<void *, Root::Type::RWStoreL>( + // + // We use imprecise set-bounds operations here because we need to ensure + // that the regions are completely scanned and scanning slightly more is + // not a problem unless the revoker is compromised. The software revoker + // already has a terrifying set of rights, so this doesn't really make + // things worse and is a nother good reason to use a hardware revoker.
```suggestion // things worse and is another good reason to use a hardware revoker. ```
cheriot-rtos
github_2023
cpp
77
CHERIoT-Platform
nwf-msr
@@ -1214,7 +1214,19 @@ extern "C" SchedulerEntryInfo loader_entry_point(const ImgHdr &imgHdr, // accesses these in the revoker is very small and amenable to auditing // (the only memory accesses are a load and a store back at the same // location, with interrupts disabled, to trigger the load barrier). - auto scaryCapabilities = build<void *, Root::Type::RWStoreL>( + // + // We use imprecise set-bounds operations here because we need to ensure + // that the regions are completely scanned and scanning slightly more is + // not a problem unless the revoker is compromised. The software revoker + // already has a terrifying set of rights, so this doesn't really make + // things worse and is a nother good reason to use a hardware revoker. + // Given that hardware revokers are lower power, faster, and more secure, + // there's little reason for the software revoker to be used for anything + // other than testing. + auto scaryCapabilities = build<void *, + Root::Type::RWStoreL, + Root::Permissions<Root::Type::RWStoreL>, + false>(
Since we aren't doing the type-level enumeration thing, perhaps ```suggestion /* Precise: */ false>( ```
cheriot-rtos
github_2023
cpp
45
CHERIoT-Platform
nwf-msr
@@ -35,7 +35,7 @@ namespace sched /** * The real type of this subclass. */ - enum class Type : uint8_t + enum class Type : uint32_t
Add a comment that this type is deliberately chosen for compatibility with the allocator-based mechanism (xref `sdk/core/allocator/token.h:/^struct SObjStruct`)? That said, I don't think we're guaranteed that this one pads after the type like `SObjStruct`s do... I don't think we need it to, necessarily, but it might be worth pointing out the difference.
cheriot-rtos
github_2023
cpp
45
CHERIoT-Platform
nwf-msr
@@ -568,14 +574,11 @@ namespace } return false; }; - bool found = false; - for (auto &compartment : image.privilegedCompartments) + bool found = findExport(image.allocator()); + if (!found && findExport(image.scheduler()))
It's possibly a bit of a pity that the scheduler isn't represented as a `PrivilegedCompartment` and so we have to special-case these two.
cheriot-rtos
github_2023
cpp
71
CHERIoT-Platform
rmn30
@@ -44,6 +44,12 @@ typedef struct [[cheri::interrupt_state(disabled)]] uint16_t __cheri_compartment("sched") thread_id_get(void); +/** + * Returns a cacheable (read-only) pointer to a global owned by the scheduler + * that contains the current thread ID. Reading this pointer will return the
Incomplete sentence.
cheriot-rtos
github_2023
others
12
CHERIoT-Platform
saaramar
@@ -180,10 +180,10 @@ compartment_switcher_entry: cjr cs0 .Lzero_arguments_start: zeroRegisters a0, a1, a2, a3, a4, a5, t0 - // Enable interrupts of the interrupt-disable bit is not set in flags - bnez t1, .Lskip_interrupt_disable - csrsi mstatus, 0x8 -.Lskip_interrupt_disable: + // Disable interrupts if the interrupt-disable bit is set in flags
There is an extra space there
cheriot-rtos
github_2023
others
70
CHERIoT-Platform
nwf-msr
@@ -219,15 +219,13 @@ after_zero: csetaddr cs0, cs0, gp // The target capability is in ct1. Unseal, check tag and load the entry point offset. cunseal ct1, ct1, cs0 - // Make sure the export table is valid - cgettag s0, ct1 - beqz s0, .Linvalid_entry - // Load the entry point offset. + // Load the entry point offset. If cunseal failed then this will fault and + // we will force unwind. clhu s0, ExportEntry_offset_functionStart(ct1) // At this point, we known that the cunseal has succeeded (we didn't trap
While you're in the neighborhood, that should be "know", not "known".
cheriot-rtos
github_2023
others
70
CHERIoT-Platform
nwf-msr
@@ -550,19 +547,12 @@ exception_entry_asm: // Fetch the base of compartment stack before cincoffset for later // comparison. The subsequent cincoffset could cause the base to change, // if the capability becomes unrepresentable. Even though that would clear - // the tag, which we will detect in check_compartment_stack_integrity. + // the tag, will cause a trap later that will force unwind.
That last sentence's grammar leaves something to be desired. Perhaps "In that case, the tag will be cleared, and that will cause a trap later in the switcher, forcing an unwind back to the caller" or somesuch?
cheriot-rtos
github_2023
others
70
CHERIoT-Platform
rmn30
@@ -550,19 +547,12 @@ exception_entry_asm: // Fetch the base of compartment stack before cincoffset for later // comparison. The subsequent cincoffset could cause the base to change, // if the capability becomes unrepresentable. Even though that would clear - // the tag, which we will detect in check_compartment_stack_integrity. + // the tag, will cause a trap later that will force unwind. cgetbase tp, ct0 - // Allocate space for the register save frame on the stack. + // Allocate space for the register save frame on the stack. If we didn't + // have enough space here, we'll fault in the unwind path, which will
Should say fault in the error handler path?
cheriot-rtos
github_2023
others
59
CHERIoT-Platform
davidchisnall
@@ -16,6 +16,11 @@ option("scheduler-accounting") set_description("Track per-thread cycle counts in the scheduler"); set_showmenu(true) +option("stack-low-water-mark") + set_default(false) + set_description("Build with support for stack low water mark"); + set_showmenu(true)
This should probably be part of the board config.
cheriot-rtos
github_2023
cpp
59
CHERIoT-Platform
davidchisnall
@@ -44,33 +44,41 @@ struct TrustedStackFrame template<size_t NFrames> struct TrustedStackGeneric { - void *mepcc; - void *c1; - void *csp; - void *cgp; - void *c4; - void *c5; - void *c6; - void *c7; - void *c8; - void *c9; - void *c10; - void *c11; - void *c12; - void *c13; - void *c14; - void *c15; - size_t mstatus; - size_t mcause; + void *mepcc; + void *c1; + void *csp; + void *cgp; + void *c4; + void *c5; + void *c6; + void *c7; + void *c8; + void *c9; + void *c10; + void *c11; + void *c12; + void *c13; + void *c14; + void *c15; + size_t mstatus; + size_t mcause; +#ifdef CONFIG_MSLWM + uint32_t mslwm; + uint32_t mslwmb; +#endif uint16_t frameoffset; /** * Flag indicating whether this thread is in the process of a forced * unwind. If so, this is one, otherwise it is zero. */ uint8_t inForcedUnwind; // Padding up to multiple of 16-bytes. - uint8_t pad0; - uint16_t padding[2]; +#ifdef CONFIG_MSLWM +# define TRUSTED_STACK_PADDING 13 +#else +# define TRUSTED_STACK_PADDING 5 +#endif + uint8_t padding[TRUSTED_STACK_PADDING];
```suggestion uint8_t padding[ #ifdef CONFIG_MSLWM 13 #else 5 #endif ]; ```
cheriot-rtos
github_2023
others
59
CHERIoT-Platform
davidchisnall
@@ -181,8 +180,24 @@ compartment_switcher_entry: cgetbase s1, csp csetaddr csp, csp, s1 sub s1, s0, s1 - csetboundsexact csp, csp, s1 - zero_stack sp, s0, gp + csetboundsexact ct2, csp, s1 + csetaddr csp, ct2, s0 +#ifdef CONFIG_MSLWM + // read and align the stack low water mark + csrr gp, 0xbc1 // mslwm
Please can you turn 0abc1 and 0xbc2 into symbolic constants?
cheriot-rtos
github_2023
others
59
CHERIoT-Platform
davidchisnall
@@ -181,8 +180,24 @@ compartment_switcher_entry: cgetbase s1, csp csetaddr csp, csp, s1 sub s1, s0, s1 - csetboundsexact csp, csp, s1 - zero_stack sp, s0, gp + csetboundsexact ct2, csp, s1 + csetaddr csp, ct2, s0 +#ifdef CONFIG_MSLWM + // read and align the stack low water mark + csrr gp, 0xbc1 // mslwm + and gp, gp, ~0xf + // skip zeroing if low water mark >= stack poitner + bge t2, sp, after_zero
We probably need to zero the whole thing if the address is not in bounds. I think that is not needed on the return path though.
cheriot-rtos
github_2023
others
59
CHERIoT-Platform
davidchisnall
@@ -232,6 +232,11 @@ rule("firmware") add_defines("SIMULATION") end + if board.stack_high_water_mark then
At least the Sail json should probably be updated to include this.
cheriot-rtos
github_2023
others
59
CHERIoT-Platform
davidchisnall
@@ -607,6 +646,13 @@ exception_entry_asm: // Load the trusted stack pointer to ct1 cspecialr ct1, mtdc +#ifdef CONFIG_MSHWM + // Update the spilled copy of the stack high watermark to ensure that we
Technically, it should be high-water mark, but this is probably fine.
cheriot-rtos
github_2023
cpp
65
CHERIoT-Platform
nwf-msr
@@ -1323,22 +1327,25 @@ class MState } void ok_any_chunk(MChunkHeader *p) { - bool thisShadowBit = - revoker.shadow_bit_get(CHERI::Capability{p}.address()); - Debug::Assert(thisShadowBit, - "Chunk header does not point to a set shadow bit: {}", - p); - MChunkHeader *next = p->cell_next(); - bool nextShadowBit = - revoker.shadow_bit_get(CHERI::Capability{next}.address()); - Debug::Assert( - nextShadowBit, - "Next chunk header does not point to a set shadow bit: {}", - next); - Debug::Assert( - is_aligned(p->body()), "Chunk is not correctly aligned: {}", p); - Debug::Assert( - ok_address(p->body()), "Invalid address {} for chunk", p->body()); + if constexpr (HasTemporalSafety) + { + bool thisShadowBit = + revoker.shadow_bit_get(CHERI::Capability{p}.address()); + Debug::Assert(thisShadowBit, + "Chunk header does not point to a set shadow bit: {}", + p); + MChunkHeader *next = p->cell_next(); + bool nextShadowBit = + revoker.shadow_bit_get(CHERI::Capability{next}.address()); + Debug::Assert( + nextShadowBit, + "Next chunk header does not point to a set shadow bit: {}", + next); + Debug::Assert(
This assert and the next can live outside the `if constexpr`, yes?
cheriot-rtos
github_2023
cpp
44
CHERIoT-Platform
davidchisnall
@@ -413,19 +413,19 @@ int __cheri_compartment("sched") } return typed_op<Event>(evt, [&](Event &event) { return event.bits_wait( - retBits, bitsToWait, clearOnExit, waitAll, timeout); + retBits, bitsToWait, flags & EventWaitClearOnExit, flags & EventWaitAll, timeout);
It would be nice to wrap this in the same kind of template that I did for PAL features in snmalloc. Something like: ``` template<EventWaitFlags Desired> bool has_event_flag(int eventFlags) { return eventFlags & int(Desired); } ``` I am always somewhat nervous of raw `&` for checking bitfields because you're relying on the implicit conversion to bool, which can happen in a different place if you don't put enough brackets around the expression. Putting it in a separate function forces it to happen at the evaluation of the return and better expresses intentionality.
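A compilable sketch of the helper described above; the enum values are hypothetical stand-ins for the ones in the PR.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical flag enumeration standing in for the PR's EventWaitFlags.
enum EventWaitFlags : uint32_t
{
	EventWaitClearOnExit = 1 << 0,
	EventWaitAll         = 1 << 1,
};

// Checking a flag through a named template forces the bool conversion to
// happen here, at the return, rather than implicitly at each call site.
template<EventWaitFlags Desired>
bool has_event_flag(uint32_t eventFlags)
{
	return (eventFlags & static_cast<uint32_t>(Desired)) != 0;
}
```

A raw `flags & EventWaitAll` at a call site then becomes the more intentional `has_event_flag<EventWaitAll>(flags)`.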
cheriot-rtos
github_2023
cpp
44
CHERIoT-Platform
davidchisnall
@@ -403,8 +404,7 @@ int __cheri_compartment("sched") void *evt, uint32_t *retBits, uint32_t bitsToWait, - bool clearOnExit, - bool waitAll) + int flags)
Can we use a fixed-width type here please?
cheriot-rtos
github_2023
cpp
44
CHERIoT-Platform
davidchisnall
@@ -48,8 +58,22 @@ int __cheri_compartment("sched") event_bits_wait(Timeout *timeout, void *evt, uint32_t *retBits, uint32_t bitsToWait, - bool clearOnExit, - bool waitAll); + int flags __if_cxx(=0) + ); + + +/** + * Flags used by `event_bits_set`. + */ +enum [[clang::flag_enum]] EventSetFlags { + /** + * Do not yield immediately if a higher priority thread becomes runnable. + * Instead the other thread will run when this thread yields or another + * scheduler event occurs (e.g. a timer interrupt). This may be useful to + * avoid multiple context switches when setting bits in multiple event + * groups from a low priority thread. + */ + EventSetNoYield = (1 << 0)};
I think GitHub has screwed up rendering the diff here, there's no } for this enum definition in the UI.
cheriot-rtos
github_2023
cpp
44
CHERIoT-Platform
davidchisnall
@@ -98,3 +121,24 @@ int __cheri_compartment("sched") event_delete(struct SObjStruct *heapCapability, void *evt); __END_DECLS + +#ifdef __cplusplus
Not sure these should go in the public header: I don't see them as being useful for anything other than the scheduler (a helper for constructing the flags would be).
cheriot-rtos
github_2023
cpp
55
CHERIoT-Platform
rmn30
@@ -105,7 +110,17 @@ namespace sched size_t mepc, size_t mtval) { + // The cycle count value the last time the scheduler returned. bool schedNeeded; + if constexpr (sched::Accounting) + { + uint64_t currentCycles = rdcycle64(); + auto *thread = Thread::current_get(); + uint64_t &cycles = + thread ? thread->cycles : Thread::idleThreadCycles; + currentCycles -= cyclesAtLastSchedulingEvent;
Minor nit: I would prefer to introduce a new local `cyclesElapsed` here.
cheriot-rtos
github_2023
cpp
55
CHERIoT-Platform
rmn30
@@ -105,7 +110,17 @@ namespace sched size_t mepc, size_t mtval) { + // The cycle count value the last time the scheduler returned. bool schedNeeded; + if constexpr (sched::Accounting) + { + uint64_t currentCycles = rdcycle64(); + auto *thread = Thread::current_get(); + uint64_t &cycles =
`threadCycleCounter` would be a more descriptive name.
cheriot-rtos
github_2023
cpp
55
CHERIoT-Platform
rmn30
@@ -45,6 +45,11 @@ void simulation_exit(uint32_t code) #endif +/** + * The value of the cycle counter at the last scheduling event. + */ +static uint64_t cyclesAtLastSchedulingEvent;
Since this is not atomic and will take two 32-bit loads / stores to access I guess we have to make sure that all accesses are with interrupts disabled?
cheriot-rtos
github_2023
cpp
55
CHERIoT-Platform
rmn30
@@ -642,3 +661,20 @@ int multiwaiter_wait(Timeout *timeout, return 0; }); } + +#ifdef SCHEDULER_ACCOUNTING +[[cheri::interrupt_state(disabled)]] uint64_t thread_elapsed_cycles_idle()
Is this interrupts-disabled to ensure atomicity of access to `Thread::idleThreadCycles`? If so, a comment would not go amiss.
cheriot-rtos
github_2023
cpp
55
CHERIoT-Platform
rmn30
@@ -42,3 +42,12 @@ }) #define BARRIER() __asm volatile("" : : : "memory") + +/** + * Read the cycle counter. Returns the number of cycles since boot as a 64-bit + * value. + */ +static inline uint64_t rdcycle64() +{ + return CSR_READ64(mcycle);
Do we want to make this `minstret` on Sail?
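`rdcycle64` hides a subtlety on RV32: the 64-bit counter is read as two 32-bit CSRs, so a carry between the two reads can tear the value. This is a model of the standard retry pattern for such reads; the lambdas stand in for the `rdcycle`/`rdcycleh` CSR accesses, which this sketch cannot perform directly.

```cpp
#include <cassert>
#include <cstdint>

// RV32 pattern for reading a 64-bit counter via two 32-bit halves:
// re-read the high half and retry if it changed, i.e. a carry out of the
// low word happened between the two reads.
template<typename ReadLo, typename ReadHi>
uint64_t read_counter64(ReadLo rdlo, ReadHi rdhi)
{
	uint32_t hi, lo;
	do
	{
		hi = rdhi();
		lo = rdlo();
	} while (hi != rdhi());
	return (uint64_t{hi} << 32) | lo;
}
```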
cheriot-rtos
github_2023
others
40
CHERIoT-Platform
rmn30
@@ -97,6 +97,25 @@ switcher_scheduler_entry_csp: bnez t2, .Lforce_unwind .endm +.macro zero_stack base top scratch
Add a comment saying that `base` has its address changed and `scratch` is clobbered (obvs).
cheriot-rtos
github_2023
others
40
CHERIoT-Platform
rmn30
@@ -97,6 +97,25 @@ switcher_scheduler_entry_csp: bnez t2, .Lforce_unwind .endm +.macro zero_stack base top scratch + addi \scratch, \top, -32 + bge \base, \scratch, 1f + // Zero the stack used by the callee. +0: + csc cnull, 0(c\base) + csc cnull, 8(c\base) + csc cnull, 16(c\base) + csc cnull, 24(c\base) + cincoffset c\base, c\base, 32 + blt \base, \scratch, 0b + bge \base, \top, 2f +1: + csc cnull, 0(ct2)
What is the `ct2` referenced here?
cheriot-rtos
github_2023
others
40
CHERIoT-Platform
rmn30
@@ -95,6 +95,42 @@ switcher_scheduler_entry_csp: // make sure the caller's CSP is unsealed cgettype t2, \reg bnez t2, .Lforce_unwind + // Check that the base is 16-byte aligned + cgetbase t2, csp + andi t2, t2, 0xf + bnez t2, .Lforce_unwind + // Check that the address (top of the remainder) is 16-byte aligned + andi t2, sp, 0xf + bnez t2, .Lforce_unwind +.endm + +/** + * Zero the stack. The three operands are the base address (modified during + * this call, will point at the top at the end), the top address, and a scratch + * register to use. The base must be a capability but it must be provided + * without the c prefix because it is used as both a capability and integer + * register. Top and scratch are both clobbered. + */ +.macro zero_stack base top scratch + addi \scratch, \top, -32 + addi \top, \top, -16 + bge \base, \scratch, 1f + // Zero the stack in 32-byte chunks +0: + csc cnull, 0(c\base) + csc cnull, 8(c\base) + csc cnull, 16(c\base) + csc cnull, 24(c\base) + cincoffset c\base, c\base, 32 + blt \base, \scratch, 0b + bge \base, \top, 2f
Should this be `bgt`? If base was 16-byte but not 32-byte aligned we would get here `\cbase == \top` but still have 16 bytes to zero.
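The scenario described can be checked by modelling the macro's branch structure in C++. This is a sketch of the control flow only: plain addresses stand in for capabilities, `useBge` selects between the existing `bge` and the suggested `bgt`, and the function returns how many bytes would be zeroed.

```cpp
#include <cassert>
#include <cstddef>

// Control-flow model of the zero_stack macro from the diff.
size_t zero_stack_model(size_t base, size_t top, bool useBge)
{
	size_t start   = base;
	size_t scratch = top - 32;     // addi scratch, top, -32
	top -= 16;                     // addi top, top, -16
	if (base < scratch)            // bge base, scratch, 1f (fall through)
	{
		do
		{
			base += 32;            // four 8-byte stores, 32-byte chunk
		} while (base < scratch);  // blt base, scratch, 0b
		bool taken = useBge ? (base >= top) : (base > top); // bge/bgt base, top, 2f
		if (taken)
		{
			return base - start;
		}
	}
	do
	{
		base += 16;                // two 8-byte stores, 16-byte tail chunk
	} while (base < top);          // blt base, top, 1b
	return base - start;
}
```

For a base that is 16-byte but not 32-byte aligned (base 16, top 64, a 48-byte region), the `bge` version exits with 16 bytes left unzeroed, while `bgt` falls into the tail and covers the whole region.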
cheriot-rtos
github_2023
others
40
CHERIoT-Platform
rmn30
@@ -95,6 +95,42 @@ switcher_scheduler_entry_csp: // make sure the caller's CSP is unsealed cgettype t2, \reg bnez t2, .Lforce_unwind + // Check that the base is 16-byte aligned + cgetbase t2, csp + andi t2, t2, 0xf + bnez t2, .Lforce_unwind + // Check that the address (top of the remainder) is 16-byte aligned + andi t2, sp, 0xf + bnez t2, .Lforce_unwind +.endm + +/** + * Zero the stack. The three operands are the base address (modified during + * this call, will point at the top at the end), the top address, and a scratch + * register to use. The base must be a capability but it must be provided + * without the c prefix because it is used as both a capability and integer + * register. Top and scratch are both clobbered. + */ +.macro zero_stack base top scratch + addi \scratch, \top, -32 + addi \top, \top, -16 + bge \base, \scratch, 1f + // Zero the stack in 32-byte chunks +0: + csc cnull, 0(c\base) + csc cnull, 8(c\base) + csc cnull, 16(c\base) + csc cnull, 24(c\base) + cincoffset c\base, c\base, 32 + blt \base, \scratch, 0b + bge \base, \top, 2f +1: + // Zero any tail in 16-byte chunks + csc cnull, 0(c\base) + csc cnull, 8(c\base) + cincoffset c\base, c\base, 16 + blt \base, \top, 1b
Can this ever execute more than once?
cheriot-rtos
github_2023
cpp
40
CHERIoT-Platform
rmn30
@@ -35,15 +35,30 @@ namespace }; constexpr ThreadConfig ThreadConfigs[] = CONFIG_THREADS; + /** + * Round up to a multiple of `Multiple`, which must be a power of two. + */ + template<size_t Multiple> + constexpr size_t round_up(size_t value) + { + static_assert((Multiple & (Multiple - 1)) == 0, + "Multiple must be a power of two"); + return (value + Multiple - 1) & -Multiple;
An explanation of the bit-twiddling might be nice, maybe with a reference if you found it somewhere?
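The trick being asked about here is the standard power-of-two round-up (it appears in Hacker's Delight, among other places): adding `Multiple - 1` carries any non-multiple past the next boundary, and `-Multiple` in two's complement is `~(Multiple - 1)`, a mask with the low `log2(Multiple)` bits clear, so the AND truncates back down onto the boundary.

```cpp
#include <cassert>
#include <cstddef>

template<size_t Multiple>
constexpr size_t round_up(size_t value)
{
	static_assert((Multiple & (Multiple - 1)) == 0,
	              "Multiple must be a power of two");
	// value + Multiple - 1 overshoots into the next aligned block unless
	// value is already aligned; & -Multiple (== ~(Multiple - 1)) clears
	// the low bits, landing exactly on the boundary.
	return (value + Multiple - 1) & -Multiple;
}
```

An already-aligned value is unchanged: the `+ Multiple - 1` stays within the same block, so the mask returns the original value.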
cheriot-rtos
github_2023
cpp
40
CHERIoT-Platform
rmn30
@@ -35,15 +35,30 @@ namespace }; constexpr ThreadConfig ThreadConfigs[] = CONFIG_THREADS; + /** + * Round up to a multiple of `Multiple`, which must be a power of two. + */ + template<size_t Multiple> + constexpr size_t round_up(size_t value) + { + static_assert((Multiple & (Multiple - 1)) == 0, + "Multiple must be a power of two"); + return (value + Multiple - 1) & -Multiple; + } + static_assert(round_up<16>(15) == 16); + static_assert(round_up<16>(28) == 32); + static_assert(round_up<8>(17) == 24);
Should include a test for already rounded value: ```suggestion static_assert(round_up<8>(17) == 24); static_assert(round_up<8>(8) == 8); ```
cheriot-rtos
github_2023
others
40
CHERIoT-Platform
rmn30
@@ -95,6 +95,40 @@ switcher_scheduler_entry_csp: // make sure the caller's CSP is unsealed cgettype t2, \reg bnez t2, .Lforce_unwind + // Check that the base is 16-byte aligned + cgetbase t2, csp + andi t2, t2, 0xf + bnez t2, .Lforce_unwind + // Check that the address (top of the remainder) is 16-byte aligned + andi t2, sp, 0xf + bnez t2, .Lforce_unwind +.endm + +/** + * Zero the stack. The three operands are the base address (modified during + * this call, will point at the top at the end), the top address, and a scratch + * register to use. The base must be a capability but it must be provided + * without the c prefix because it is used as both a capability and integer + * register. Top and scratch are both clobbered.
Should mention that `base` and `top` must be 16-byte aligned as I think this is important to the implementation. A unit test for this macro would be good...
cheriot-rtos
github_2023
cpp
43
CHERIoT-Platform
rmn30
@@ -11,8 +11,12 @@ using Debug = ConditionalDebug<DEBUG_ALLOCBENCH, "Allocator benchmark">; */ void __cheri_compartment("allocbench") run() { + // Make sure sail doesn't print annoying log messages in the middle of the
Is it worth noting that this could be removed once Sail makes a release with this fix: https://github.com/rems-project/sail/commit/362254783e2c0a12b5393169d883ebe5621baa6f
cheriot-rtos
github_2023
cpp
36
CHERIoT-Platform
davidchisnall
@@ -940,7 +940,7 @@ extern "C" SchedulerEntryInfo loader_entry_point(const ImgHdr &imgHdr, SchedulerEntryInfo ret; volatile Uart *uart16550; - // Populate the 4 roots from system registers.
This actually does populate the four software roots, from the three hardware roots.
cheriot-rtos
github_2023
others
36
CHERIoT-Platform
davidchisnall
@@ -339,6 +402,15 @@ exception_entry_asm: reloadRegisters c1, cgp, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, csp mret +.Lreentrant_exception:
I'm not sure what we're expecting to happen here. The exception code runs with interrupts disabled, so we should only ever hit this path if we take a trap while handling an interrupt. That can only happen in the case of total internal consistency failure and at that point we should just die. We could achieve that by setting the exception entry point to null on exception entry and then have the hardware assert reset if we take an exception with no exception handler registered.
cheriot-rtos
github_2023
others
36
CHERIoT-Platform
davidchisnall
@@ -32,30 +32,30 @@ start: la_abs a3, bootStack li a1, BOOT_STACK_SIZE - cspecialr ca2, mtdc - li a4, ~CHERI_PERM_STORE_LOCAL + cspecialr ca4, mtdc // Keep the RW memory root in ca4 throughout + li a2, ~CHERI_PERM_STORE_LOCAL
:set list
cheriot-rtos
github_2023
others
36
CHERIoT-Platform
davidchisnall
@@ -303,20 +314,15 @@ exception_entry_asm: // ca3, used for mtval zeroAllRegistersExcept ra, sp, gp, a0, a1, a2, a3 - // Call the scheduler. This returns two values via ca0. The first is a - // sealed trusted stack capability, the second is a return value to be - // provided to threads that voluntarily yielded. + // Call the scheduler. This returns the new thread in ca0. cjalr cra - // The scheduler returns the new thread in ca0. - // We don't need to restore the stack pointer because we're now done with - // this stack until the next exception, at which point we'll reload the - // original stack pointer. + // Switch onto the new thread's trusted stack LoadCapPCC ct0, compartment_switcher_sealing_key li gp, 10 csetaddr ct0, ct0, gp cunseal csp, ca0, ct0 - cspecialw mscratchc, csp + cspecialw mtdc, csp
This can install a non-null, untagged capability in MTDC. This doesn't matter in release builds, but breaks the debug-mode check. If you move it below the clw, then the clw checks the tag, but please add a comment explaining it.
cheriot-rtos
github_2023
cpp
35
CHERIoT-Platform
rmn30
@@ -105,13 +105,18 @@ namespace */ FirstDataSealingType = 9, + /** + * The sealing type used for sealed export table entries.
Why does the comment say export table entries but the name is `ImportTableEntries`?
cheriot-rtos
github_2023
cpp
28
CHERIoT-Platform
JerryHsia-MS
@@ -8,6 +8,26 @@ using namespace CHERI; bool *threadStackTestFailed; +/* + * Define a macro that gets a __cheri_callback capability and calls it, while + * support adding instruction before the call. This is used to avoid code + * duplication, in cases we want to call a __cheri_callback in multiple + * places while adding additional functionalities. + */ +#define CALL_CHERI_CALLBACK(fn, instruction, additional_input) \
Very Nit: `fn` should probably have a better name like `handle` because it is not really a function.
cheriot-rtos
github_2023
cpp
28
CHERIoT-Platform
JerryHsia-MS
@@ -89,8 +108,7 @@ void test_stack_invalid_on_call(bool *outTestFailed, { threadStackTestFailed = outTestFailed; - __asm__ volatile("ccleartag csp, csp\n" - "csh zero, 0(cnull)\n"); + CALL_CHERI_CALLBACK(fn, "move a0, %1\nccleartag csp, csp\n", 0);
Is the `move` instruction useless? Might be better with a comment.
cheriot-rtos
github_2023
cpp
28
CHERIoT-Platform
davidchisnall
@@ -8,6 +8,31 @@ using namespace CHERI; bool *threadStackTestFailed; +/* + * Define a macro that gets a __cheri_callback capability and calls it, while + * support adding instruction before the call. This is used to avoid code + * duplication, in cases we want to call a __cheri_callback in multiple + * places while adding additional functionalities. + * + * handle: a sealed capability to a __cheri_callback to call + * instruction: additional instruction(s) to add before the call, + * with an operand. + * additional_input: the operand the additional instruction refers to. + */ +#define CALL_CHERI_CALLBACK(handle, instructions, additional_input) \ + ({ \ + register auto rfn asm("ct1") = handle; \ + __asm__ volatile( \ + "1:\n" \ + "auipcc ct2, %%cheri_compartment_pccrel_hi(.compartment_switcher)\n" \ + "clc ct2, %%cheri_compartment_pccrel_lo(1b)(ct2)\n" \ + "" instructions "\n" \ + "cjalr ct2\n" \ + : /* no outputs; we're jumping and probably not coming back */ \ + : "C"(rfn), "r"(additional_input) \ + : "ct2", "memory" /* in case we return */); \
These clobbers are wrong. The call clobbers everything except cs0 and cs1
cheriot-rtos
github_2023
others
22
CHERIoT-Platform
davidchisnall
@@ -91,11 +91,21 @@ compartment_switcher_entry: // XXX: This line is useless, only for mscratch to show up in debugging. cmove ct2, ct2 #endif - clhu tp, TrustedStack_offset_frameoffset(ct2) + // make sure the trusted stack is still in bounds - cgetlen t2, ct2 - bgeu tp, t2, .Lout_of_trusted_stack + clhu tp, TrustedStack_offset_frameoffset(ct2) + cgetlen t2, ct2 + bgeu tp, t2, .Lout_of_trusted_stack + // make sure the caller's CSP is valid + cgettag t2, csp + beqz t2, .Lforce_unwind + // make sure the caller's CSP has the expected permissions + cgetperm t2, csp + li tp, 0x7e + bne tp, t2, .Lforce_unwind
Can these be moved before we access mscratch? We don't need access to any privileged state to do these checks.
cheriot-rtos
github_2023
others
22
CHERIoT-Platform
davidchisnall
@@ -415,6 +425,13 @@ exception_entry_asm: li a1, 0 // If we don't have enough space, give up and force unwind bltu t1, tp, .Lforce_unwind + // make sure the caller's CSP is valid + cgettag t1, ct0 + beqz t1, .Lforce_unwind + // make sure the caller's CSP has the expected permissions + cgetperm t1, ct0 + li tp, 0x7e + bne tp, t1, .Lforce_unwind
These checks are the same as above, can you make them a macro? We should also add a sealing check.
cheriot-rtos
github_2023
others
22
CHERIoT-Platform
davidchisnall
@@ -79,23 +79,47 @@ switcher_scheduler_entry_csp: forall reloadOne, \reg1, \regs .endm +/** + * Verify the compartment stack is valid, with the expected permissions and + * unsealed. + * This macro assumes t2 and tp are available to use. + */ +.macro check_compartment_stack_integrity reg + // make sure the caller's CSP is valid + cgettag t2, \reg + beqz t2, .Lforce_unwind + // make sure the caller's CSP has the expected permissions + cgetperm t2, \reg + li tp, COMPARTMENT_STACK_PERMISSIONS + bne tp, t2, .Lforce_unwind + // make sure the caller's CSP is unsealed + cgettype t2, \reg + bnez t2, .Lforce_unwind +.endm + .section .text, "ax", @progbits .globl compartment_switcher_entry .p2align 2 .type compartment_switcher_entry,@function compartment_switcher_entry: + // before we access any privileged state, we can verify the + // compartment's csp is valid. If no, force unwind. + check_compartment_stack_integrity csp // The caller should back up all callee saved registers. // mscratchc should always have an offset of 0. cspecialr ct2, mscratchc #ifndef NDEBUG // XXX: This line is useless, only for mscratch to show up in debugging. cmove ct2, ct2 #endif - clhu tp, TrustedStack_offset_frameoffset(ct2) + // make sure the trusted stack is still in bounds - cgetlen t2, ct2 - bgeu tp, t2, .Lout_of_trusted_stack + clhu tp, TrustedStack_offset_frameoffset(ct2)
Can you fix indenting? It looks like you're mixing tabs and spaces.
cheriot-rtos
github_2023
others
22
CHERIoT-Platform
davidchisnall
@@ -79,23 +79,47 @@ switcher_scheduler_entry_csp: forall reloadOne, \reg1, \regs .endm +/** + * Verify the compartment stack is valid, with the expected permissions and + * unsealed. + * This macro assumes t2 and tp are available to use. + */ +.macro check_compartment_stack_integrity reg + // make sure the caller's CSP is valid + cgettag t2, \reg + beqz t2, .Lforce_unwind + // make sure the caller's CSP has the expected permissions + cgetperm t2, \reg + li tp, COMPARTMENT_STACK_PERMISSIONS + bne tp, t2, .Lforce_unwind + // make sure the caller's CSP is unsealed + cgettype t2, \reg + bnez t2, .Lforce_unwind +.endm + .section .text, "ax", @progbits .globl compartment_switcher_entry .p2align 2 .type compartment_switcher_entry,@function compartment_switcher_entry: + // before we access any privileged state, we can verify the + // compartment's csp is valid. If no, force unwind.
``` // compartment's csp is valid. If not, force unwind. ```
cheriot-rtos
github_2023
cpp
22
CHERIoT-Platform
davidchisnall
@@ -44,3 +44,5 @@ EXPORT_ASSEMBLY_SIZE(TrustedStackFrame, (7 * 8)) #define TSTACKOFFSET_FIRSTFRAME \ (TrustedStack_offset_frameoffset + TSTACK_HEADER_SZ) + +#define COMPARTMENT_STACK_PERMISSIONS 0x7e
Can you add a doc comment explaining this macro?
cheriot-rtos
github_2023
cpp
27
CHERIoT-Platform
nwf-msr
@@ -4,10 +4,37 @@ #pragma once #include <cdefs.h> +#include <compartment-macros.h> #include <stddef.h> #include <stdint.h> #include <timeout.h> +#ifndef MALLOC_DEFAULT_QUOTA +# define MALLOC_DEFAULT_QUOTA 4096 +#endif +struct MallocCapability
Hm. I think I know why you called this a `Capability`, and I don't completely disagree, but I might prefer `MallocCapabilityState` or `MallocAuthorityState` or something along those lines: it's not a CHERI capability, and (an instance of) *this structure* isn't presented as a capability would be, in general: a (sealed) capability to (a wrapper around) an instance of this structure is the actual bearer token.
cheriot-rtos
github_2023
cpp
27
CHERIoT-Platform
nwf-msr
@@ -4,10 +4,37 @@ #pragma once #include <cdefs.h> +#include <compartment-macros.h> #include <stddef.h> #include <stdint.h> #include <timeout.h> +#ifndef MALLOC_DEFAULT_QUOTA +# define MALLOC_DEFAULT_QUOTA 4096 +#endif +struct MallocCapability +{ + size_t quota; + size_t unused; + uintptr_t reserved[2];
I might reserve more space; we might want to have these... - know a small identifier for themselves (another `uint16_t`), - be intrusively indexed on that identifier by a list or tree or something (2 or 3 `uintptr_t`s, a la `MChunk` or `TChunk`) - have a hierarchical relationship amongst themselves (perhaps as a rose tree with children encoded as a ds::linked_list ring... another 5 `uintptr_t`s) I don't think we imagine that there will be too many of these, so making them kind of hefty is probably not the worst fate?
cheriot-rtos
github_2023
cpp
27
CHERIoT-Platform
nwf-msr
@@ -52,15 +85,15 @@ void *__cheri_compartment("alloc") heap_allocate(size_t size, Timeout *timeout); * Memory returned from this interface is guaranteed to be zeroed. */ void *__cheri_compartment("alloc") - heap_allocate_array(size_t nmemb, size_t size, Timeout *timeout); + heap_allocate_array(struct SObjStruct *heapCapability, size_t nmemb, size_t size, Timeout *timeout);
Do you want to have an `ifdef __cplusplus` overload for this and for `heap_free`, too?
cheriot-rtos
github_2023
cpp
27
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,174 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#pragma once +#include <cdefs.h> + +/** + * Provide a capability of the type `volatile type *` referring to the MMIO + * region exported in the linker script with `name` as its name. This macro + * can be used only in code (it cannot be used to initialise a global). + */ +#define MMIO_CAPABILITY(type, name) \ + ({ \ + volatile type *ret; /* NOLINT(bugprone-macro-parentheses) */ \ + __asm(".ifndef __import_mem_" #name "\n" \ + " .type __import_mem_" #name ",@object\n" \ + " .section .compartment_imports." #name \ + ",\"awG\",@progbits," #name ",comdat\n" \ + " .globl __import_mem_" #name "\n" \ + " .p2align 3\n" \ + "__import_mem_" #name ":\n" \ + " .word __export_mem_" #name "\n" \ + " .word __export_mem_" #name "_end - __export_mem_" #name "\n" \ + " .size __import_mem_" #name ", 8\n" \ + " .previous\n" \ + ".endif\n" \ + "1:" \ + " auipcc %0," \ + " %%cheri_compartment_pccrel_hi(__import_mem_" #name ")\n" \ + " clc %0, %%cheri_compartment_pccrel_lo(1b)(%0)\n" \ + : "=C"(ret)); \ + ret; \ + }) + +/** + * Macro to test whether a device with a specific name exists in the board + * definition for the current target.
I know it's not part of this PR, but this comment really should be expanded to include something about this macro being useful *only* within a preprocessor expression. (It's probably best to limit it to preprocessor evaluation contexts rather than pursue terrors like http://hmijailblog.blogspot.com/2016/03/an-isdefined-c-macro-to-check-whether.html ...)
cheriot-rtos
github_2023
cpp
27
CHERIoT-Platform
nwf-msr
@@ -155,4 +162,58 @@ namespace sched void exception_entry_asm(void); __END_DECLS + template<typename T> + struct HeapObject + { + class Deleter + { + struct SObjStruct *heapCapability; + + public: + Deleter(struct SObjStruct *heapCapability) + : heapCapability(heapCapability) + { + } + void operator()(T *object) + { + object->~T(); + free(object);
Should that be `heap_free(heapCapability, object)`?
cheriot-rtos
github_2023
others
19
CHERIoT-Platform
rmn30
@@ -315,6 +324,14 @@ exception_entry_asm: reloadRegisters c1, cgp, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, csp mret +.Linvalid_csp_permissions: + cjal .Lpop_trusted_stack_frame + csh zero, 0(cnull)
This will trap. Was it committed by mistake? If not, how will the subsequent instructions be reached?
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -123,11 +123,23 @@ namespace */ Allocator, + /** + * The first sealing key that is reserved for use by the allocator's + * software sealing mechanism and used for static sealing types, + */ + FirstStaticSoftware = 16, + + /** + * The first sealing key in the space that the allocator will + * dynamically allocate for sealing types. + */ + FirstDynamicSoftware = 0x1000000, }; // We currently have a 4-bit otype, but we'd like to reduce it to 3.
While you're in the neighborhood, that comment is stale.
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -245,6 +257,50 @@ namespace return cgp; } + /** + * Returns a sealing capability to use for statically allocated sealing + * keys. + */ + uint16_t allocate_static_sealing_key() + { + static uint16_t nextValue = FirstStaticSoftware; + // We currently stash the allocated key value in the export table. We + // could expand this a bit if we were a bit more clever in how we used + // that space, but 1^16 static sealing keys will require over 768 KiB
```suggestion
// that space, but 2^16 static sealing keys will require over 768 KiB
```
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -435,6 +502,51 @@ namespace return buildMMIO(); } + // Privileged compartments don't have sealed objects. + if constexpr (!std::is_same_v< + std::remove_cvref_t<decltype(sourceCompartment)>, + ImgHdr::PrivilegedCompartment>) + { + if (contains(sourceCompartment.sealedObjects, target, size)) + { + auto sealingType = + build<uint32_t, + Root::Type::RWGlobal, + PermissionSet{Permission::Load, Permission::Store}>( + target); + // TODO: This currently places a restriction that data memory + // can't be in the low 64 KiB of the address space. That may be + // too restrictive. If we haven't visited this sealed object + // yet, then we should update its first word to point to the + // sealing type. + if (*sealingType >= 0x10000)
What is `0x10000`? I was sort of expecting to see something about `FirstDynamicSoftware` here or somesuch. (Values larger than that are certainly invalid, so that could be an offset applied to the address of the export table?) If a dynamic approach is necessary, is there no other metadata in the export table entry that could be used to discriminate processed from unprocessed rows?
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -435,6 +502,51 @@ namespace return buildMMIO(); } + // Privileged compartments don't have sealed objects. + if constexpr (!std::is_same_v< + std::remove_cvref_t<decltype(sourceCompartment)>, + ImgHdr::PrivilegedCompartment>) + { + if (contains(sourceCompartment.sealedObjects, target, size)) + { + auto sealingType = + build<uint32_t, + Root::Type::RWGlobal, + PermissionSet{Permission::Load, Permission::Store}>( + target); + // TODO: This currently places a restriction that data memory + // can't be in the low 64 KiB of the address space. That may be + // too restrictive. If we haven't visited this sealed object
If we haven't visited yet? That seems a little worrying as an approach. From our discussion, I thought we could get away with two passes without the need to re-visit anything?
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -874,10 +1060,8 @@ extern "C" SchedulerEntryInfo loader_entry_point(const ImgHdr &imgHdr, switcherKey.bounds() = 1; setSealingKey(imgHdr.scheduler(), Scheduler); setSealingKey(imgHdr.allocator(), Allocator); - setSealingKey(imgHdr.allocator(), - static_cast<SealingType>(0x1000000), - 0xff000000, - sizeof(void *)); + setSealingKey( + imgHdr.allocator(), FirstDynamicSoftware, 0xff000000, sizeof(void *));
`0xff000000` is `2**32 - FirstDynamicSoftware`?
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -862,9 +868,22 @@ namespace loader static constexpr uint8_t InterruptStatusMask = uint8_t(0b11) << InterruptStatusShift; + /** + * The flag indicating that this is a fake entry used to identify + * sealing types. Nothing should refer to this other than an import + * table entry from the same compartment, which will be populated with + * a sealing capability. + */
Is that right? Don't the statically sealed objects also refer to the export table entry to specify what type they're sealed under?
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -49,6 +49,112 @@ ret; \ }) +/** + * Helper macro, used by `STATIC_SEALING_TYPE`. Do not use this directly, it + * exists to avoid error-prone copying and pasting of the mangled name for a + * static sealing type. + */ +#define CHERIOT_EMIT_STATIC_SEALING_TYPE(name) \ + ({ \ + SKey ret; /* NOLINT(bugprone-macro-parentheses) */ \ + __asm(".ifndef __import." name "\n" \ + " .type __import." name ",@object\n" \ + " .section .compartment_imports." name \ + ",\"awG\",@progbits," name ",comdat\n" \ + " .globl __import." name "\n" \ + " .p2align 3\n" \ + "__import." name ":\n" \ + " .word __export." name "\n" \ + " .word 0\n" \ + " .previous\n" \ + ".endif\n" \ + ".ifndef __export." name "\n" \ + " .type __export." name ",@object\n" \ + " .section .compartment_exports." name \ + ",\"awG\",@progbits," name ",comdat\n" \ + " .globl __export." name "\n" \ + " .p2align 2\n" \ + "__export." name ":\n" \ + " .half 0\n" \ + " .byte 0\n" \ + " .byte 0b100000\n" \
Assuming this is `SealingTypeEntry`, could you bring it into the assembler via an `i`-type input rather than as a constant? Failing that, a comment for `grep` to find?
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,51 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#define TEST_NAME "Static sealing (inner compartment)" +#include "static_sealing.h" +#include "tests.hh" + +using namespace CHERI; + +void test_static_sealed_object(Sealed<TestType> obj) +{ + // Get our static sealing key. + SKey key = STATIC_SEALING_TYPE(SealingType); + Capability keyCap{key}; + + debug_log("Static sealing key: {}", key); + // Make sure the sealing key has sensible permissions + TEST((check_pointer<PermissionSet{Permission::Seal, + Permission::Unseal, + Permission::Global, + Permission::User0}>(key, 1)), + "Incorrect permissions on {}", + key); + // Make sure it's in the right range. + TEST( + keyCap.address() >= 16,
```suggestion
keyCap.address() >= FirstStaticSoftware,
```
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,51 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#define TEST_NAME "Static sealing (inner compartment)" +#include "static_sealing.h" +#include "tests.hh" + +using namespace CHERI; + +void test_static_sealed_object(Sealed<TestType> obj) +{ + // Get our static sealing key. + SKey key = STATIC_SEALING_TYPE(SealingType); + Capability keyCap{key}; + + debug_log("Static sealing key: {}", key); + // Make sure the sealing key has sensible permissions + TEST((check_pointer<PermissionSet{Permission::Seal, + Permission::Unseal, + Permission::Global, + Permission::User0}>(key, 1)), + "Incorrect permissions on {}", + key); + // Make sure it's in the right range. + TEST( + keyCap.address() >= 16, + "Software sealing key has an address in the hardware-reserved range: {}", + keyCap.address()); + TEST(keyCap.address() < 0x10000,
```suggestion
TEST(keyCap.address() < FirstDynamicSoftware,
```
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,51 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#define TEST_NAME "Static sealing (inner compartment)" +#include "static_sealing.h" +#include "tests.hh" + +using namespace CHERI; + +void test_static_sealed_object(Sealed<TestType> obj) +{ + // Get our static sealing key. + SKey key = STATIC_SEALING_TYPE(SealingType); + Capability keyCap{key}; + + debug_log("Static sealing key: {}", key); + // Make sure the sealing key has sensible permissions + TEST((check_pointer<PermissionSet{Permission::Seal, + Permission::Unseal, + Permission::Global, + Permission::User0}>(key, 1)), + "Incorrect permissions on {}", + key); + // Make sure it's in the right range. + TEST( + keyCap.address() >= 16, + "Software sealing key has an address in the hardware-reserved range: {}", + keyCap.address()); + TEST(keyCap.address() < 0x10000, + "Software sealing key has an address too large: {}", + keyCap.address()); + // Make sure that it's a single sealing type + TEST(keyCap.bounds() == 1, "Invalid bounds on {}", key); + + // Try to use it + Capability unsealed = token_unseal(key, obj); + debug_log("Unsealed object: {}", unsealed); + // Make sure that the unsealed allocation is the right everything. + TEST(unsealed->value == 42, "Unexpected value for static sealed object");
```suggestion
TEST(unsealed->value == 42, "Unexpected value in static sealed object");
```
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,51 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#define TEST_NAME "Static sealing (inner compartment)" +#include "static_sealing.h" +#include "tests.hh" + +using namespace CHERI; + +void test_static_sealed_object(Sealed<TestType> obj) +{ + // Get our static sealing key. + SKey key = STATIC_SEALING_TYPE(SealingType); + Capability keyCap{key}; + + debug_log("Static sealing key: {}", key); + // Make sure the sealing key has sensible permissions + TEST((check_pointer<PermissionSet{Permission::Seal, + Permission::Unseal, + Permission::Global, + Permission::User0}>(key, 1)), + "Incorrect permissions on {}",
```suggestion
"Incorrect permissions on static sealing key {}",
```
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -0,0 +1,51 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +#define TEST_NAME "Static sealing (inner compartment)" +#include "static_sealing.h" +#include "tests.hh" + +using namespace CHERI; + +void test_static_sealed_object(Sealed<TestType> obj) +{ + // Get our static sealing key. + SKey key = STATIC_SEALING_TYPE(SealingType); + Capability keyCap{key}; + + debug_log("Static sealing key: {}", key); + // Make sure the sealing key has sensible permissions + TEST((check_pointer<PermissionSet{Permission::Seal, + Permission::Unseal, + Permission::Global, + Permission::User0}>(key, 1)), + "Incorrect permissions on {}", + key); + // Make sure it's in the right range. + TEST( + keyCap.address() >= 16, + "Software sealing key has an address in the hardware-reserved range: {}", + keyCap.address()); + TEST(keyCap.address() < 0x10000, + "Software sealing key has an address too large: {}", + keyCap.address()); + // Make sure that it's a single sealing type + TEST(keyCap.bounds() == 1, "Invalid bounds on {}", key); + + // Try to use it + Capability unsealed = token_unseal(key, obj); + debug_log("Unsealed object: {}", unsealed); + // Make sure that the unsealed allocation is the right everything. + TEST(unsealed->value == 42, "Unexpected value for static sealed object"); + TEST(unsealed.length() == sizeof(TestType), + "Incorrect length on unsealed capability {}", + unsealed); + TEST((check_pointer<PermissionSet{Permission::Load, + Permission::Store, + Permission::LoadStoreCapability, + Permission::LoadMutable, + Permission::LoadGlobal, + Permission::Global}>(unsealed.get(), 1)), + "Incorrect permissions on {}",
```suggestion
"Incorrect permissions on unsealed statically sealed object {}",
```
cheriot-rtos
github_2023
cpp
16
CHERIoT-Platform
nwf-msr
@@ -49,6 +49,113 @@ ret; \ }) +/** + * Helper macro, used by `STATIC_SEALING_TYPE`. Do not use this directly, it + * exists to avoid error-prone copying and pasting of the mangled name for a + * static sealing type. + */ +#define CHERIOT_EMIT_STATIC_SEALING_TYPE(name) \ + ({ \ + SKey ret; /* NOLINT(bugprone-macro-parentheses) */ \ + __asm( \ + ".ifndef __import." name "\n" \ + " .type __import." name ",@object\n" \ + " .section .compartment_imports." name ",\"awG\",@progbits," name \ + ",comdat\n" \ + " .globl __import." name "\n" \ + " .p2align 3\n" \ + "__import." name ":\n" \ + " .word __export." name "\n" \ + " .word 0\n" \ + " .previous\n" \ + ".endif\n" \ + ".ifndef __export." name "\n" \ + " .type __export." name ",@object\n" \ + " .section .compartment_exports." name ",\"awG\",@progbits," name \ + ",comdat\n" \ + " .globl __export." name "\n" \ + " .p2align 2\n" \ + "__export." name ":\n" \ + " .half 0\n" /* function start and stack size initialised to 0 */ \ + " .byte 0\n" \ + " .byte 0b100000\n" /* Set the flag that indicates that this is a \ + sealing key. */ \
GitHub has seemingly eaten my earlier suggestion that this be brought in by name via `"i"(SealingTypeEntry)`.
cheriot-rtos
github_2023
cpp
18
CHERIoT-Platform
davidchisnall
@@ -1056,21 +1079,48 @@ class MState * @param mem the user cap which has been checked, but has not been * rederived into an internal cap yet */ - void mspace_free(CHERI::Capability<void> mem) + int mspace_free(CHERI::Capability<void> mem)
Please document what the return value is.
cheriot-rtos
github_2023
cpp
18
CHERIoT-Platform
davidchisnall
@@ -215,15 +215,9 @@ namespace { return -EINVAL; } - /* - * Since we use the shadow bits to detect valid frees, we need to consult - * the revoker on whether the user cap is valid. - */ - if (!revoker.is_free_cap_valid(mem)) - { - return -EINVAL; - } - gm->mspace_free(mem); + int rv = gm->mspace_free(mem); + if (rv)
Clang-tidy should complain here.
cheriot-rtos
github_2023
others
8
CHERIoT-Platform
davidchisnall
@@ -311,6 +315,9 @@ exception_entry_asm: reloadRegisters c1, cgp, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, csp mret +.Lforce_unwind_with_return_values:
Can you leave a comment explaining why this is sufficient? Other registers have been trampled by the call and so need zeroing. Are they zeroed somewhere else later on? I believe you may also need to set the inForcedUnwind flag here, to allow returning to a frame that doesn't have an error handler.
cheriot-rtos
github_2023
others
8
CHERIoT-Platform
davidchisnall
@@ -311,6 +315,16 @@ exception_entry_asm: reloadRegisters c1, cgp, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, csp mret +.Linvalid_entry: +// Mark this threads as in the middle of a forced unwind.
Can you expand this to explain *why* (we want to resume rather than unwind if the compartment doesn't have an error handler)?
cheriot-rtos
github_2023
cpp
8
CHERIoT-Platform
davidchisnall
@@ -40,3 +40,9 @@ void test_compartment_call() compartment_call_inner(value, value, &value, value, &value, value, value); TEST(ret == 0, "compartment_call_inner returend {}", ret); } + +void test_compartment_call() +{ + test_number_of_arguments(); + test_incorrect_export_table(NULL);
```suggestion
test_incorrect_export_table(nullptr);
```
cheriot-rtos
github_2023
cpp
8
CHERIoT-Platform
davidchisnall
@@ -10,6 +10,12 @@ using namespace CHERI; +extern "C" ErrorRecoveryBehaviour +compartment_error_handler(ErrorState *frame, size_t mcause, size_t mtval) +{ + return ErrorRecoveryBehaviour::ForceUnwind; +} +
This should be the default, do we need it to exist?
cheriot-rtos
github_2023
cpp
8
CHERIoT-Platform
davidchisnall
@@ -95,4 +101,17 @@ int compartment_call_inner(int x0, debug_log("Seven arguments"); verify_arguments(x0, x1, *x2, x3, *x4, x5, x6); return 0; +} + +void test_incorrect_export_table(__cheri_callback void (*fn)()) +{ + /* + * Trigger a cross-compartment call with an invalid export entry. + */ + + debug_log("test an incorrect export table entry"); + + fn(); + + TEST(false, "Should be unreachable");
Without an error handler, this should be reachable. Can we make sure that this path is tested?
cheriot-rtos
github_2023
others
8
CHERIoT-Platform
davidchisnall
@@ -336,10 +349,10 @@ exception_entry_asm: LoadCapPCC cs0, compartment_switcher_sealing_key // ca2 at this point was loaded by .Lpop_trusted_stack_frame from the pcc // in the trusted stack and so should always be sealed as a sentry type. - cgettype gp, cra + cgettype gp, ca2
This looks like it's accidentally reverting a fix?
cheriot-rtos
github_2023
cpp
17
CHERIoT-Platform
nwf-msr
@@ -1966,6 +1979,8 @@ class MState unlink_chunk(MChunk::from_header(prev), prev->size_get()); ds::linked_list::unsafe_remove_link(prev, p); p->clear(); + // p is no longer a header. Clear the shadow bit. + revoker.shadow_paint_single(CHERI::Capability{p}.address(), false);
Maybe move the `shadow_paint_single` into `->clear`?
cheriot-rtos
github_2023
cpp
17
CHERIoT-Platform
nwf-msr
@@ -2155,15 +2173,22 @@ class MState // If we reached here, then it means we took a real chunk off the free // list without errors. Zero the user portion metadata. size_t size = p->size_get(); - // We sanity check that things off the free list are indeed zeroed out. - Debug::Assert(capaligned_range_do(p->body(), - size - sizeof(MChunkHeader), - [](void *&word) { - return CHERI::Capability<void>( - word) != nullptr; - }) == false, - "Memory from free list is not entirely zeroed, size {}", - size); + /* + * We sanity check that things off the free list are indeed zeroed out, + * and none corresponds to a set shadow bit. We need to wrap *word + * inside a Capability because that gives exact equal for nullptr. + */ + Debug::Assert( + capaligned_range_do(p->body(), + size - sizeof(MChunkHeader), + [](void **word) { + CHERI::Capability eachCap{*word}; + return eachCap != nullptr && + revoker.shadow_bit_get( + CHERI::Capability{word}.address());
I'm still a C++ novice, but I think you can avoid needing to change the `capaligned_range_do` interface by using `&word` here instead: that will turn your `void *&word` into a `void **`.
cheriot-rtos
github_2023
cpp
1
CHERIoT-Platform
davidchisnall
@@ -0,0 +1,642 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +/** + * @file A (circular) doubly linked list, abstracted over cons cell + * representations. + */ + +#pragma once + +#include <concepts> +#include <ds/pointer.h> + +namespace ds::linked_list +{ + + namespace cell + { + /** + * The primitive, required, abstract interface to our cons cells. + * + * All methods are "namespaced" with `cell_` to support the case where + * the encoded forms are also representing other state (for example, + * bit-packed flags in pointer address bits). + */ + template<typename T> + concept HasPrimOps = requires(T &t)
Please don't use abbreviations in names (as per the style guide). Does Prim stand for primitive? Prime? Per the comment, I presume primitive, though it's not really clear what 'primitive' means here. Can it have a more descriptive name (e.g. `CellOperations`)?
cheriot-rtos
github_2023
cpp
1
CHERIoT-Platform
davidchisnall
@@ -0,0 +1,642 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +/** + * @file A (circular) doubly linked list, abstracted over cons cell + * representations. + */ + +#pragma once + +#include <concepts> +#include <ds/pointer.h> + +namespace ds::linked_list +{ + + namespace cell + { + /** + * The primitive, required, abstract interface to our cons cells. + * + * All methods are "namespaced" with `cell_` to support the case where + * the encoded forms are also representing other state (for example, + * bit-packed flags in pointer address bits). + */ + template<typename T> + concept HasPrimOps = requires(T &t) + { + /** Proxies for list linkages */ + { + t.cell_next() + } -> ds::pointer::proxy::Proxies<T>; + { + t.cell_prev() + } -> ds::pointer::proxy::Proxies<T>; + }; + + /** + * Initialize to singleton ring. Not all cons cells are required to be + * able to do this, though if you're sticking to rings and not (ab)using + * the machinery here in interesting ways, this should be easy to + * specify. + */ + template<typename T> + concept HasInit = requires(T &t) + { + { + t.cell_init() + } -> std::same_as<void>; + }; + + template<typename T> + concept HasPrimOpsInit = HasPrimOps<T> && HasInit<T>; + + /** + * Additional, optional overrides available within implementation of + * cons cells. It may be useful to static_assert() these in + * implementations to make sure we are not falling back to the defaults + * in terms of the above primops. 
+ * + * @{ + */ + template<typename T> + concept HasIsSingleton = requires(T &t) + { + { + t.cell_is_singleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsSingletonCheck = requires(T &t) + { + { + t.cell_is_singleton_check() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsDoubleton = requires(T &t) + { + { + t.cell_is_doubleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasEqOps = requires(T &t) + { + { + t.cell_next_is_eq(&t) + } -> std::same_as<bool>; + + { + t.cell_prev_is_eq(&t) + } -> std::same_as<bool>; + }; + + /** @} */ + + } // namespace cell + + /** + * Self-loops indicate either the sentinels of an empty list or, + * less often, singletons without their sentinels; it's up to + * the caller to know which is being tested for, here. + * + * The default implementation decodes and compares one link; + * implementations may have more efficient mechanisms. + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e == e->cell_prev(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e->cell_is_singleton(); + } + /** @} */ + + /** + * Like is_singleton(), but checks both edges. Useful only for + * testing invariants. + * + * The default implementation decodes and compares both links. + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return (e == e->cell_next()) && (e == e->cell_prev()); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingletonCheck<T>, bool>
Can this not use a requires clause instead of `enable_if`?
cheriot-rtos
github_2023
cpp
1
CHERIoT-Platform
davidchisnall
@@ -0,0 +1,642 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +/** + * @file A (circular) doubly linked list, abstracted over cons cell + * representations. + */ + +#pragma once + +#include <concepts> +#include <ds/pointer.h> + +namespace ds::linked_list +{ + + namespace cell + { + /** + * The primitive, required, abstract interface to our cons cells. + * + * All methods are "namespaced" with `cell_` to support the case where + * the encoded forms are also representing other state (for example, + * bit-packed flags in pointer address bits). + */ + template<typename T> + concept HasPrimOps = requires(T &t) + { + /** Proxies for list linkages */ + { + t.cell_next() + } -> ds::pointer::proxy::Proxies<T>; + { + t.cell_prev() + } -> ds::pointer::proxy::Proxies<T>; + }; + + /** + * Initialize to singleton ring. Not all cons cells are required to be + * able to do this, though if you're sticking to rings and not (ab)using + * the machinery here in interesting ways, this should be easy to + * specify. + */ + template<typename T> + concept HasInit = requires(T &t) + { + { + t.cell_init() + } -> std::same_as<void>; + }; + + template<typename T> + concept HasPrimOpsInit = HasPrimOps<T> && HasInit<T>; + + /** + * Additional, optional overrides available within implementation of + * cons cells. It may be useful to static_assert() these in + * implementations to make sure we are not falling back to the defaults + * in terms of the above primops. 
+ * + * @{ + */ + template<typename T> + concept HasIsSingleton = requires(T &t) + { + { + t.cell_is_singleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsSingletonCheck = requires(T &t) + { + { + t.cell_is_singleton_check() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsDoubleton = requires(T &t) + { + { + t.cell_is_doubleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasEqOps = requires(T &t) + { + { + t.cell_next_is_eq(&t) + } -> std::same_as<bool>; + + { + t.cell_prev_is_eq(&t) + } -> std::same_as<bool>; + }; + + /** @} */ + + } // namespace cell + + /** + * Self-loops indicate either the sentinels of an empty list or, + * less often, singletons without their sentinels; it's up to + * the caller to know which is being tested for, here. + * + * The default implementation decodes and compares one link; + * implementations may have more efficient mechanisms. + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e == e->cell_prev(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e->cell_is_singleton(); + } + /** @} */ + + /** + * Like is_singleton(), but checks both edges. Useful only for + * testing invariants. + * + * The default implementation decodes and compares both links. 
+ */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return (e == e->cell_next()) && (e == e->cell_prev()); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return e->is_singleton_check(); + } + /** @} */ + + /** + * Doubletons are either singleton collections (with both the sentinel + * and the single element satisfying this test) or, less often, a pair + * of elements without a sentinel. The caller is expected to know + * what's meant by this test. + * + * The default implementation decodes and compares both links. + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_prev() == e->cell_next(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_is_doubleton(); + } + /** @} */ + + /** + * Link equality predicates + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_prev() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_is_prev_eq(p); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_next() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_is_next_eq(p); + } + /** @} */ + + /** + * Verify linkage invariants. Again, useful only for testing. + * + * The default implementation decodes all four relevant links. 
+ */ + template<cell::HasPrimOps T> + __always_inline bool is_well_formed(T *e) + { + return (e == e->cell_prev()->cell_next()) && + (e == e->cell_next()->cell_prev()); + } + + /** + * Insert a ring of `elem`-ents (typically, a singleton ring) before the + * `curr`-ent element (or sentinel) in the ring. In general, you will + * probably want to make sure that at most one of `elem` or `curr` + * points to a ring with a sentinel node. + * + * If `curr` is the sentinel, this is appending to the list, in the + * sense that the element(s) occupy (or span) the next-most and + * prev-least position from the sentinel. + * + * By symmetry, if `elem` is, instead, the sentinel, then `curr` is + * prepended to the list in the same sense. + */ + template<cell::HasPrimOps Cell> + __always_inline void insert_before(Cell *curr, Cell *elem) + { + curr->cell_prev()->cell_next() = elem->cell_next(); + elem->cell_next()->cell_prev() = curr->cell_prev(); + curr->cell_prev() = elem; + elem->cell_next() = curr; + } + + /** + * Fuse cell initialization with insertion before. Specifically, + * + * insert_new_before(c, e); + * + * is semantically equivalent to + * + * e->cell_init(); insert_before(c, e); + * + * but spelled in a way that the compiler can understand a bit better, with + * less effort spent in provenance and/or alias analysis. + */ + template<cell::HasPrimOps Cell, typename P> + requires std::same_as<P, Cell *> || ds::pointer::proxy::Proxies<P, Cell> + __always_inline void insert_new_before(P curr, Cell *elem) + { + auto prev = curr->cell_prev(); + elem->cell_next() = curr; + elem->cell_prev() = prev; + prev->cell_next() = elem; + prev = elem; + } + + /** + * Fuse cell initialization with insertion after. Specifically,
Is this the same as `emplace` in standard-library terminology?
cheriot-rtos
github_2023
cpp
1
CHERIoT-Platform
davidchisnall
@@ -0,0 +1,642 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +/** + * @file A (circular) doubly linked list, abstracted over cons cell + * representations. + */ + +#pragma once + +#include <concepts> +#include <ds/pointer.h> + +namespace ds::linked_list +{ + + namespace cell + { + /** + * The primitive, required, abstract interface to our cons cells. + * + * All methods are "namespaced" with `cell_` to support the case where + * the encoded forms are also representing other state (for example, + * bit-packed flags in pointer address bits). + */ + template<typename T> + concept HasPrimOps = requires(T &t) + { + /** Proxies for list linkages */ + { + t.cell_next() + } -> ds::pointer::proxy::Proxies<T>; + { + t.cell_prev() + } -> ds::pointer::proxy::Proxies<T>; + }; + + /** + * Initialize to singleton ring. Not all cons cells are required to be + * able to do this, though if you're sticking to rings and not (ab)using + * the machinery here in interesting ways, this should be easy to + * specify. + */ + template<typename T> + concept HasInit = requires(T &t) + { + { + t.cell_init() + } -> std::same_as<void>; + }; + + template<typename T> + concept HasPrimOpsInit = HasPrimOps<T> && HasInit<T>; + + /** + * Additional, optional overrides available within implementation of + * cons cells. It may be useful to static_assert() these in + * implementations to make sure we are not falling back to the defaults + * in terms of the above primops. 
+ * + * @{ + */ + template<typename T> + concept HasIsSingleton = requires(T &t) + { + { + t.cell_is_singleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsSingletonCheck = requires(T &t) + { + { + t.cell_is_singleton_check() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsDoubleton = requires(T &t) + { + { + t.cell_is_doubleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasEqOps = requires(T &t) + { + { + t.cell_next_is_eq(&t) + } -> std::same_as<bool>; + + { + t.cell_prev_is_eq(&t) + } -> std::same_as<bool>; + }; + + /** @} */ + + } // namespace cell + + /** + * Self-loops indicate either the sentinels of an empty list or, + * less often, singletons without their sentinels; it's up to + * the caller to know which is being tested for, here. + * + * The default implementation decodes and compares one link; + * implementations may have more efficient mechanisms. + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e == e->cell_prev(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e->cell_is_singleton(); + } + /** @} */ + + /** + * Like is_singleton(), but checks both edges. Useful only for + * testing invariants. + * + * The default implementation decodes and compares both links. 
+ */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return (e == e->cell_next()) && (e == e->cell_prev()); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return e->cell_is_singleton_check(); + } + /** @} */ + + /** + * Doubletons are either singleton collections (with both the sentinel + * and the single element satisfying this test) or, less often, a pair + * of elements without a sentinel. The caller is expected to know + * what's meant by this test. + * + * The default implementation decodes and compares both links. + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_prev() == e->cell_next(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_is_doubleton(); + } + /** @} */ + + /** + * Link equality predicates + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_prev() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_prev_is_eq(p); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_next() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_next_is_eq(p); + } + /** @} */ + + /** + * Verify linkage invariants. Again, useful only for testing. + * + * The default implementation decodes all four relevant links.
+ */ + template<cell::HasPrimOps T> + __always_inline bool is_well_formed(T *e) + { + return (e == e->cell_prev()->cell_next()) && + (e == e->cell_next()->cell_prev()); + } + + /** + * Insert a ring of `elem`-ents (typically, a singleton ring) before the + * `curr`-ent element (or sentinel) in the ring. In general, you will + * probably want to make sure that at most one of `elem` or `curr` + * points to a ring with a sentinel node. + * + * If `curr` is the sentinel, this is appending to the list, in the + * sense that the element(s) occupy (or span) the next-most and + * prev-least position from the sentinel. + * + * By symmetry, if `elem` is, instead, the sentinel, then `curr` is + * prepended to the list in the same sense. + */ + template<cell::HasPrimOps Cell> + __always_inline void insert_before(Cell *curr, Cell *elem) + { + curr->cell_prev()->cell_next() = elem->cell_next(); + elem->cell_next()->cell_prev() = curr->cell_prev(); + curr->cell_prev() = elem; + elem->cell_next() = curr; + } + + /** + * Fuse cell initialization with insertion before. Specifically, + * + * insert_new_before(c, e); + * + * is semantically equivalent to + * + * e->cell_init(); insert_before(c, e); + * + * but spelled in a way that the compiler can understand a bit better, with + * less effort spent in provenance and/or alias analysis. + */ + template<cell::HasPrimOps Cell, typename P> + requires std::same_as<P, Cell *> || ds::pointer::proxy::Proxies<P, Cell> + __always_inline void insert_new_before(P curr, Cell *elem) + { + auto prev = curr->cell_prev(); + elem->cell_next() = curr; + elem->cell_prev() = prev; + prev->cell_next() = elem; + prev = elem; + } + + /** + * Fuse cell initialization with insertion after. Specifically, + * + * insert_new_after(c, e); + * + * is semantically equivalent to + * + * e->cell_init(); insert_before(e, c); + * + * but spelled in a way that the compiler can understand a bit better, with + * less effort spent in provenance and/or alias analysis. 
+ */ + template<cell::HasPrimOps Cell, typename P> + requires std::same_as<P, Cell *> || ds::pointer::proxy::Proxies<P, Cell> + __always_inline void insert_new_after(P curr, Cell *elem) + { + auto next = curr->cell_next(); + elem->cell_prev() = curr; + elem->cell_next() = next; + next->cell_prev() = elem; + next = elem; + } + + /** + * Remove from the list without turning the removed span into a + * well-formed ring. This is useful only if that invariant will be + * restored later (prior to insertion, at the very least). + * + * The removed element or span instead retains links into the ring + * whence it was removed, but is no longer well-formed, since that ring + * no longer references the removed element or span. + * + * This can be used to remove... + * + * - a single element (`el == er`) + * + * - the sentinel (`el == er`), leaving the rest of the ring, if any, + * as a sentinel-free ring + * + * - a span of elements from `el` to `er` via the `next` links; the + * removed span is damaged and must be corrected, while the residual + * ring remains well-formed. + * + * In all cases, `el`'s previous element is returned as a handle to the + * residual ring. + */ + template<cell::HasPrimOps Cell> + __always_inline Cell *unsafe_remove(Cell *el, Cell *er) + { + auto p = el->cell_prev(); + auto n = er->cell_next(); + n->cell_prev() = p; + p->cell_next() = n; + return p; + } + + template<cell::HasPrimOps Cell> + __always_inline Cell *unsafe_remove(Cell *e) + { + return unsafe_remove(e, e); + } + + /** + * Remove a particular element `rem` from the ring, already knowing its + * adjacent, previous link `prev`. `prev` remains connected to the ring + * but `rem` will no longer be well-formed. Returns a proxy to prev's + * next field. 
+ */ + template<cell::HasPrimOps Cell> + __always_inline auto unsafe_remove_link(Cell *prev, Cell *rem) + { + auto next = rem->cell_next(); + auto prevnext = prev->cell_next(); + prevnext = next; + next->cell_prev() = prev; + return prevnext; + } + + /** + * Remove from the ring, cleaving the ring into two well-formed rings. + * + * This can be used to remove... + * + * - a single element (`el == er`) + * + * - the sentinel (`el == er`), leaving the rest of the ring, if any, + * as a sentinel-free collection + * + * - a span of elements from `el` to `er` via `next` links; the + * removed span is made into a ring and the residual ring is left + * well-formed. + * + * In all cases, `el`'s previous element is returned as a handle to the + * residual ring. (The caller must already have a reference to the span + * being removed). This is especially useful when `remove`-ing elements + * during a `search`, below: overwriting the callback's Cell pointer + * (passed by *reference*) will continue the iteration, calling back at + * the removed node's successor. + * + * Removing a singleton element from its own ring causes no change, as + * any would-be residual ring is empty. This corner case requires some + * care on occasion. + */ + template<cell::HasPrimOps Cell> + __always_inline Cell *remove(Cell *el, Cell *er) + { + Cell *p = unsafe_remove(el, er); + el->cell_prev() = er; + er->cell_next() = el; + return p; + } + + template<cell::HasPrimOps Cell> + __always_inline Cell *remove(Cell *e) + { + return remove(e, e); + } + + /** + * Search through a span of a ring, inclusively from `from` through + * exclusively to `to`, applying `f` to each cons cell in turn. If `f` + * returns `true`, the search stops early and returns `true`; otherwise, + * search returns `false`. To (side-effectfully) visit every node in the + * span, have `f` always return false.
+ */ + template<cell::HasPrimOps Cell, typename F> + __always_inline bool search(Cell *from, Cell *to, F f) + { + Cell *elem; + for (elem = from; elem != to; elem = elem->cell_next()) + { + if (f(elem)) + { + return true; + } + } + return false; + } + + /** + * Search through all elements of a ring *except* `elem`. If `elem` is the + * sentinel of a ring, then this is, as one expects, a `search` over all + * non-sentinel members of the ring. + */ + template<cell::HasPrimOps Cell, typename F> + __always_inline bool search(Cell *elem, F f) + { + return search(static_cast<Cell *>(elem->cell_next()), elem, f); + } + + /** + * Convenience wrapper for a sentinel cons cell, encapsulating some common + * patterns. + */ + template<cell::HasPrimOpsInit Cell_> + struct Sentinel + { + using Cell = Cell_; + + /** + * The sentinel node itself. Viewing the ring as a list, this + * effectively serves as pointers to the head (next) and to the tail + * (prev) of the list. Unlike more traditional nullptr-terminated + * lists, though, here, the sentinel participates in the ring.
+ * + * This is marked `cheri_no_subobject_bounds` because some of our cons + * cell implementations use pointer proxies that rely on the bounds + * provided by `this` (which, in turn, is likely to be + * `cheri_no_subobject_bounds`) + */ + Cell sentinel __attribute__((__cheri_no_subobject_bounds__)); + + __always_inline void init() + { + sentinel.cell_init(); + } + + __always_inline bool is_empty() + { + return linked_list::is_singleton(&sentinel); + } + + __always_inline void append(Cell *elem) + { + linked_list::insert_before(&sentinel, elem); + } + + __always_inline void append_new(Cell *elem) + { + linked_list::insert_new_before(&sentinel, elem); + } + + __always_inline void prepend(Cell *elem) + { + linked_list::insert_before(elem, &sentinel); + } + + __always_inline Cell *first() + { + return sentinel.cell_next(); + } + + __always_inline Cell *last() + { + return sentinel.cell_prev(); + } + + __always_inline bool last_is_eq(Cell *p) + { + return p->cell_next_is_eq(&sentinel); + } + + __always_inline Cell *unsafe_take_first() + { + Cell *f = sentinel.cell_next(); + linked_list::unsafe_remove_link(&sentinel, f); + return f; + } + + __always_inline Cell *take_all() + { + auto p = linked_list::unsafe_remove(&sentinel); + sentinel.cell_init(); + return p; + } + + template<typename F> + __always_inline bool search(F f) + { + return linked_list::search(&sentinel, f); + } + }; + + namespace cell + { + + /** Cons cell using two pointers */ + class Pointer + { + Pointer *prev, *next;
```suggestion
	Pointer *prev = this;
	Pointer *next = this;
```
cheriot-rtos
github_2023
cpp
1
CHERIoT-Platform
davidchisnall
@@ -0,0 +1,642 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +/** + * @file A (circular) doubly linked list, abstracted over cons cell + * representations. + */ + +#pragma once + +#include <concepts> +#include <ds/pointer.h> + +namespace ds::linked_list +{ + + namespace cell + { + /** + * The primitive, required, abstract interface to our cons cells. + * + * All methods are "namespaced" with `cell_` to support the case where + * the encoded forms are also representing other state (for example, + * bit-packed flags in pointer address bits). + */ + template<typename T> + concept HasPrimOps = requires(T &t) + { + /** Proxies for list linkages */ + { + t.cell_next() + } -> ds::pointer::proxy::Proxies<T>; + { + t.cell_prev() + } -> ds::pointer::proxy::Proxies<T>; + }; + + /** + * Initialize to singleton ring. Not all cons cells are required to be + * able to do this, though if you're sticking to rings and not (ab)using + * the machinery here in interesting ways, this should be easy to + * specify. + */ + template<typename T> + concept HasInit = requires(T &t) + { + { + t.cell_init() + } -> std::same_as<void>; + }; + + template<typename T> + concept HasPrimOpsInit = HasPrimOps<T> && HasInit<T>; + + /** + * Additional, optional overrides available within implementation of + * cons cells. It may be useful to static_assert() these in + * implementations to make sure we are not falling back to the defaults + * in terms of the above primops. 
+ * + * @{ + */ + template<typename T> + concept HasIsSingleton = requires(T &t) + { + { + t.cell_is_singleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsSingletonCheck = requires(T &t) + { + { + t.cell_is_singleton_check() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsDoubleton = requires(T &t) + { + { + t.cell_is_doubleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasEqOps = requires(T &t) + { + { + t.cell_next_is_eq(&t) + } -> std::same_as<bool>; + + { + t.cell_prev_is_eq(&t) + } -> std::same_as<bool>; + }; + + /** @} */ + + } // namespace cell + + /** + * Self-loops indicate either the sentinels of an empty list or, + * less often, singletons without their sentinels; it's up to + * the caller to know which is being tested for, here. + * + * The default implementation decodes and compares one link; + * implementations may have more efficient mechanisms. + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e == e->cell_prev(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e->cell_is_singleton(); + } + /** @} */ + + /** + * Like is_singleton(), but checks both edges. Useful only for + * testing invariants. + * + * The default implementation decodes and compares both links. 
+ */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return (e == e->cell_next()) && (e == e->cell_prev()); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return e->cell_is_singleton_check(); + } + /** @} */ + + /** + * Doubletons are either singleton collections (with both the sentinel + * and the single element satisfying this test) or, less often, a pair + * of elements without a sentinel. The caller is expected to know + * what's meant by this test. + * + * The default implementation decodes and compares both links. + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_prev() == e->cell_next(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_is_doubleton(); + } + /** @} */ + + /** + * Link equality predicates + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_prev() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_prev_is_eq(p); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_next() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_next_is_eq(p); + } + /** @} */ + + /** + * Verify linkage invariants. Again, useful only for testing. + * + * The default implementation decodes all four relevant links.
+ */ + template<cell::HasPrimOps T> + __always_inline bool is_well_formed(T *e) + { + return (e == e->cell_prev()->cell_next()) && + (e == e->cell_next()->cell_prev()); + } + + /** + * Insert a ring of `elem`-ents (typically, a singleton ring) before the + * `curr`-ent element (or sentinel) in the ring. In general, you will + * probably want to make sure that at most one of `elem` or `curr` + * points to a ring with a sentinel node. + * + * If `curr` is the sentinel, this is appending to the list, in the + * sense that the element(s) occupy (or span) the next-most and + * prev-least position from the sentinel. + * + * By symmetry, if `elem` is, instead, the sentinel, then `curr` is + * prepended to the list in the same sense. + */ + template<cell::HasPrimOps Cell> + __always_inline void insert_before(Cell *curr, Cell *elem) + { + curr->cell_prev()->cell_next() = elem->cell_next(); + elem->cell_next()->cell_prev() = curr->cell_prev(); + curr->cell_prev() = elem; + elem->cell_next() = curr; + } + + /** + * Fuse cell initialization with insertion before. Specifically, + * + * insert_new_before(c, e); + * + * is semantically equivalent to + * + * e->cell_init(); insert_before(c, e); + * + * but spelled in a way that the compiler can understand a bit better, with + * less effort spent in provenance and/or alias analysis. + */ + template<cell::HasPrimOps Cell, typename P> + requires std::same_as<P, Cell *> || ds::pointer::proxy::Proxies<P, Cell> + __always_inline void insert_new_before(P curr, Cell *elem) + { + auto prev = curr->cell_prev(); + elem->cell_next() = curr; + elem->cell_prev() = prev; + prev->cell_next() = elem; + prev = elem; + } + + /** + * Fuse cell initialization with insertion after. Specifically, + * + * insert_new_after(c, e); + * + * is semantically equivalent to + * + * e->cell_init(); insert_before(e, c); + * + * but spelled in a way that the compiler can understand a bit better, with + * less effort spent in provenance and/or alias analysis. 
+ */ + template<cell::HasPrimOps Cell, typename P> + requires std::same_as<P, Cell *> || ds::pointer::proxy::Proxies<P, Cell> + __always_inline void insert_new_after(P curr, Cell *elem) + { + auto next = curr->cell_next(); + elem->cell_prev() = curr; + elem->cell_next() = next; + next->cell_prev() = elem; + next = elem; + } + + /** + * Remove from the list without turning the removed span into a + * well-formed ring. This is useful only if that invariant will be + * restored later (prior to insertion, at the very least). + * + * The removed element or span instead retains links into the ring + * whence it was removed, but is no longer well-formed, since that ring + * no longer references the removed element or span. + * + * This can be used to remove... + * + * - a single element (`el == er`) + * + * - the sentinel (`el == er`), leaving the rest of the ring, if any, + * as a sentinel-free ring + * + * - a span of elements from `el` to `er` via the `next` links; the + * removed span is damaged and must be corrected, while the residual + * ring remains well-formed. + * + * In all cases, `el`'s previous element is returned as a handle to the + * residual ring. + */ + template<cell::HasPrimOps Cell> + __always_inline Cell *unsafe_remove(Cell *el, Cell *er) + { + auto p = el->cell_prev(); + auto n = er->cell_next(); + n->cell_prev() = p; + p->cell_next() = n; + return p; + } + + template<cell::HasPrimOps Cell> + __always_inline Cell *unsafe_remove(Cell *e) + { + return unsafe_remove(e, e); + } + + /** + * Remove a particular element `rem` from the ring, already knowing its + * adjacent, previous link `prev`. `prev` remains connected to the ring + * but `rem` will no longer be well-formed. Returns a proxy to prev's + * next field. 
+ */ + template<cell::HasPrimOps Cell> + __always_inline auto unsafe_remove_link(Cell *prev, Cell *rem) + { + auto next = rem->cell_next(); + auto prevnext = prev->cell_next(); + prevnext = next; + next->cell_prev() = prev; + return prevnext; + } + + /** + * Remove from the ring, cleaving the ring into two well-formed rings. + * + * This can be used to remove... + * + * - a single element (`el == er`) + * + * - the sentinel (`el == er`), leaving the rest of the ring, if any, + * as a sentinel-free collection + * + * - a span of elements from `el` to `er` via `next` links; the + * removed span is made into a ring and the residual ring is left + * well-formed. + * + * In all cases, `el`'s previous element is returned as a handle to the + * residual ring. (The caller must already have a reference to the span + * being removed). This is especially useful when `remove`-ing elements + * during a `search`, below: overwriting the callback's Cell pointer + * (passed by *reference*) will continue the iteration, calling back at + * the removed node's successor. + * + * Removing a singleton element from its own ring causes no change, as + * any would-be residual ring is empty. This corner case requires some + * care on occasion. + */ + template<cell::HasPrimOps Cell> + __always_inline Cell *remove(Cell *el, Cell *er) + { + Cell *p = unsafe_remove(el, er); + el->cell_prev() = er; + er->cell_next() = el; + return p; + } + + template<cell::HasPrimOps Cell> + __always_inline Cell *remove(Cell *e) + { + return remove(e, e); + } + + /** + * Search through a span of a ring, inclusively from `from` through + * exclusively to `to`, applying `f` to each cons cell in turn. If `f` + * returns `true`, the search stops early and returns `true`; otherwise, + * search returns `false`. To (side-effectfully) visit every node in the + * span, have `f` always return false.
+ */ + template<cell::HasPrimOps Cell, typename F> + __always_inline bool search(Cell *from, Cell *to, F f) + { + Cell *elem; + for (elem = from; elem != to; elem = elem->cell_next()) + { + if (f(elem)) + { + return true; + } + } + return false; + } + + /** + * Search through all elements of a ring *except* `elem`. If `elem` is the + * sentinel of a ring, then this is, as one expects, a `search` over all + * non-sentinel members of the ring. + */ + template<cell::HasPrimOps Cell, typename F> + __always_inline bool search(Cell *elem, F f) + { + return search(static_cast<Cell *>(elem->cell_next()), elem, f); + } + + /** + * Convenience wrapper for a sentinel cons cell, encapsulating some common + * patterns. + */ + template<cell::HasPrimOpsInit Cell_> + struct Sentinel + { + using Cell = Cell_; + + /** + * The sentinel node itself. Viewing the ring as a list, this + * effectively serves as pointers to the head (next) and to the tail + * (prev) of the list. Unlike more traditional nullptr-terminated + * lists, though, here, the sentinel participates in the ring.
+ * + * This is marked `cheri_no_subobject_bounds` because some of our cons + * cell implementations use pointer proxies that rely on the bounds + * provided by `this` (which, in turn, is likely to be + * `cheri_no_subobject_bounds`) + */ + Cell sentinel __attribute__((__cheri_no_subobject_bounds__)); + + __always_inline void init() + { + sentinel.cell_init(); + } + + __always_inline bool is_empty() + { + return linked_list::is_singleton(&sentinel); + } + + __always_inline void append(Cell *elem) + { + linked_list::insert_before(&sentinel, elem); + } + + __always_inline void append_new(Cell *elem) + { + linked_list::insert_new_before(&sentinel, elem); + } + + __always_inline void prepend(Cell *elem) + { + linked_list::insert_before(elem, &sentinel); + } + + __always_inline Cell *first() + { + return sentinel.cell_next(); + } + + __always_inline Cell *last() + { + return sentinel.cell_prev(); + } + + __always_inline bool last_is_eq(Cell *p) + { + return p->cell_next_is_eq(&sentinel); + } + + __always_inline Cell *unsafe_take_first() + { + Cell *f = sentinel.cell_next(); + linked_list::unsafe_remove_link(&sentinel, f); + return f; + } + + __always_inline Cell *take_all() + { + auto p = linked_list::unsafe_remove(&sentinel); + sentinel.cell_init(); + return p; + } + + template<typename F> + __always_inline bool search(F f) + { + return linked_list::search(&sentinel, f); + } + }; + + namespace cell + { + + /** Cons cell using two pointers */ + class Pointer + { + Pointer *prev, *next; + + public: + __always_inline void cell_init()
Why is this not a constructor?
cheriot-rtos
github_2023
cpp
1
CHERIoT-Platform
davidchisnall
@@ -0,0 +1,642 @@ +// Copyright Microsoft and CHERIoT Contributors. +// SPDX-License-Identifier: MIT + +/** + * @file A (circular) doubly linked list, abstracted over cons cell + * representations. + */ + +#pragma once + +#include <concepts> +#include <ds/pointer.h> + +namespace ds::linked_list +{ + + namespace cell + { + /** + * The primitive, required, abstract interface to our cons cells. + * + * All methods are "namespaced" with `cell_` to support the case where + * the encoded forms are also representing other state (for example, + * bit-packed flags in pointer address bits). + */ + template<typename T> + concept HasPrimOps = requires(T &t) + { + /** Proxies for list linkages */ + { + t.cell_next() + } -> ds::pointer::proxy::Proxies<T>; + { + t.cell_prev() + } -> ds::pointer::proxy::Proxies<T>; + }; + + /** + * Initialize to singleton ring. Not all cons cells are required to be + * able to do this, though if you're sticking to rings and not (ab)using + * the machinery here in interesting ways, this should be easy to + * specify. + */ + template<typename T> + concept HasInit = requires(T &t) + { + { + t.cell_init() + } -> std::same_as<void>; + }; + + template<typename T> + concept HasPrimOpsInit = HasPrimOps<T> && HasInit<T>; + + /** + * Additional, optional overrides available within implementation of + * cons cells. It may be useful to static_assert() these in + * implementations to make sure we are not falling back to the defaults + * in terms of the above primops. 
+ * + * @{ + */ + template<typename T> + concept HasIsSingleton = requires(T &t) + { + { + t.cell_is_singleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsSingletonCheck = requires(T &t) + { + { + t.cell_is_singleton_check() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasIsDoubleton = requires(T &t) + { + { + t.cell_is_doubleton() + } -> std::same_as<bool>; + }; + + template<typename T> + concept HasEqOps = requires(T &t) + { + { + t.cell_next_is_eq(&t) + } -> std::same_as<bool>; + + { + t.cell_prev_is_eq(&t) + } -> std::same_as<bool>; + }; + + /** @} */ + + } // namespace cell + + /** + * Self-loops indicate either the sentinels of an empty list or, + * less often, singletons without their sentinels; it's up to + * the caller to know which is being tested for, here. + * + * The default implementation decodes and compares one link; + * implementations may have more efficient mechanisms. + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e == e->cell_prev(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingleton<T>, bool> + is_singleton(T *e) + { + return e->cell_is_singleton(); + } + /** @} */ + + /** + * Like is_singleton(), but checks both edges. Useful only for + * testing invariants. + * + * The default implementation decodes and compares both links. 
+ */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return (e == e->cell_next()) && (e == e->cell_prev()); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsSingletonCheck<T>, bool> + is_singleton_check(T *e) + { + return e->cell_is_singleton_check(); + } + /** @} */ + + /** + * Doubletons are either singleton collections (with both the sentinel + * and the single element satisfying this test) or, less often, a pair + * of elements without a sentinel. The caller is expected to know + * what's meant by this test. + * + * The default implementation decodes and compares both links. + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_prev() == e->cell_next(); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasIsDoubleton<T>, bool> + is_doubleton(T *e) + { + return e->cell_is_doubleton(); + } + /** @} */ + + /** + * Link equality predicates + * + * @{ + */ + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_prev() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_prev_eq(T *e, + T *p) + { + return e->cell_prev_is_eq(p); + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<!cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_next() == p; + } + + template<cell::HasPrimOps T> + __always_inline std::enable_if_t<cell::HasEqOps<T>, bool> is_next_eq(T *e, + T *p) + { + return e->cell_next_is_eq(p); + } + /** @} */ + + /** + * Verify linkage invariants. Again, useful only for testing. + * + * The default implementation decodes all four relevant links.
+ */ + template<cell::HasPrimOps T> + __always_inline bool is_well_formed(T *e) + { + return (e == e->cell_prev()->cell_next()) && + (e == e->cell_next()->cell_prev()); + } + + /** + * Insert a ring of `elem`-ents (typically, a singleton ring) before the + * `curr`-ent element (or sentinel) in the ring. In general, you will + * probably want to make sure that at most one of `elem` or `curr` + * points to a ring with a sentinel node. + * + * If `curr` is the sentinel, this is appending to the list, in the + * sense that the element(s) occupy (or span) the next-most and + * prev-least position from the sentinel. + * + * By symmetry, if `elem` is, instead, the sentinel, then `curr` is + * prepended to the list in the same sense. + */ + template<cell::HasPrimOps Cell> + __always_inline void insert_before(Cell *curr, Cell *elem) + { + curr->cell_prev()->cell_next() = elem->cell_next(); + elem->cell_next()->cell_prev() = curr->cell_prev(); + curr->cell_prev() = elem; + elem->cell_next() = curr; + } + + /** + * Fuse cell initialization with insertion before. Specifically, + * + * insert_new_before(c, e); + * + * is semantically equivalent to + * + * e->cell_init(); insert_before(c, e); + * + * but spelled in a way that the compiler can understand a bit better, with + * less effort spent in provenance and/or alias analysis. + */ + template<cell::HasPrimOps Cell, typename P> + requires std::same_as<P, Cell *> || ds::pointer::proxy::Proxies<P, Cell> + __always_inline void insert_new_before(P curr, Cell *elem) + { + auto prev = curr->cell_prev(); + elem->cell_next() = curr; + elem->cell_prev() = prev; + prev->cell_next() = elem; + prev = elem; + } + + /** + * Fuse cell initialization with insertion after. Specifically, + * + * insert_new_after(c, e); + * + * is semantically equivalent to + * + * e->cell_init(); insert_before(e, c); + * + * but spelled in a way that the compiler can understand a bit better, with + * less effort spent in provenance and/or alias analysis. 
+ */ + template<cell::HasPrimOps Cell, typename P> + requires std::same_as<P, Cell *> || ds::pointer::proxy::Proxies<P, Cell> + __always_inline void insert_new_after(P curr, Cell *elem) + { + auto next = curr->cell_next(); + elem->cell_prev() = curr; + elem->cell_next() = next; + next->cell_prev() = elem; + next = elem; + } + + /** + * Remove from the list without turning the removed span into a + * well-formed ring. This is useful only if that invariant will be + * restored later (prior to insertion, at the very least). + * + * The removed element or span instead retains links into the ring + * whence it was removed, but is no longer well-formed, since that ring + * no longer references the removed element or span. + * + * This can be used to remove... + * + * - a single element (`el == er`) + * + * - the sentinel (`el == er`), leaving the rest of the ring, if any, + * as a sentinel-free ring + * + * - a span of elements from `el` to `er` via the `next` links; the + * removed span is damaged and must be corrected, while the residual + * ring remains well-formed. + * + * In all cases, `el`'s previous element is returned as a handle to the + * residual ring. + */ + template<cell::HasPrimOps Cell> + __always_inline Cell *unsafe_remove(Cell *el, Cell *er) + { + auto p = el->cell_prev(); + auto n = er->cell_next(); + n->cell_prev() = p; + p->cell_next() = n; + return p; + } + + template<cell::HasPrimOps Cell> + __always_inline Cell *unsafe_remove(Cell *e) + { + return unsafe_remove(e, e); + } + + /** + * Remove a particular element `rem` from the ring, already knowing its + * adjacent, previous link `prev`. `prev` remains connected to the ring + * but `rem` will no longer be well-formed. Returns a proxy to prev's + * next field. 
+ */
+ template<cell::HasPrimOps Cell>
+ __always_inline auto unsafe_remove_link(Cell *prev, Cell *rem)
+ {
+ auto next = rem->cell_next();
+ auto prevnext = prev->cell_next();
+ prevnext = next;
+ next->cell_prev() = prev;
+ return prevnext;
+ }
+
+ /**
+ * Remove from the ring, cleaving the ring into two well-formed rings.
+ *
+ * This can be used to remove...
+ *
+ * - a single element (`el == er`)
+ *
+ * - the sentinel (`el == er`), leaving the rest of the ring, if any,
+ * as a sentinel-free collection
+ *
+ * - a span of elements from `el` to `er` via `next` links; the
+ * removed span is made into a ring and the residual ring is left
+ * well-formed.
+ *
+ * In all cases, `el`'s previous element is returned as a handle to the
+ * residual ring. (The caller must already have a reference to the span
+ * being removed). This is especially useful when `remove`-ing elements
+ * during a `search`, below: overwriting the callback's Cell pointer
+ * (passed by *reference*) will continue the iteration, calling back at
+ * the removed node's successor.
+ *
+ * Removing a singleton ring from itself causes no change, as
+ * any would-be residual ring is empty. This corner case requires some
+ * care on occasion.
+ */
+ template<cell::HasPrimOps Cell>
+ __always_inline Cell *remove(Cell *el, Cell *er)
+ {
+ Cell *p = unsafe_remove(el, er);
+ el->cell_prev() = er;
+ er->cell_next() = el;
+ return p;
+ }
+
+ template<cell::HasPrimOps Cell>
+ __always_inline Cell *remove(Cell *e)
+ {
+ return remove(e, e);
+ }
+
+ /**
+ * Search through a span of a ring, inclusively from `from` through
+ * exclusively to `to`, applying `f` to each cons cell in turn. If `f`
+ * returns `true`, the search stops early and returns `true`; otherwise,
+ * search returns `false`. To (side-effectfully) visit every node in the
+ * span, have `f` always return false. 
+ */
+ template<cell::HasPrimOps Cell, typename F>
+ __always_inline bool search(Cell *from, Cell *to, F f)
+ {
+ Cell *elem;
+ for (elem = from; elem != to; elem = elem->cell_next())
+ {
+ if (f(elem))
+ {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Search through all elements of a ring *except* `elem`. If `elem` is the
+ * sentinel of a ring, then this is, as one expects, a `search` over all
+ * non-sentinel members of the ring.
+ */
+ template<cell::HasPrimOps Cell, typename F>
+ __always_inline bool search(Cell *elem, F f)
+ {
+ return search(static_cast<Cell *>(elem->cell_next()), elem, f);
+ }
+
+ /**
+ * Convenience wrapper for a sentinel cons cell, encapsulating some common
+ * patterns.
+ */
+ template<cell::HasPrimOpsInit Cell_>
+ struct Sentinel
+ {
+ using Cell = Cell_;
+
+ /**
+ * The sentinel node itself. Viewing the ring as a list, this
+ * effectively serves as pointers to the head (next) and to the tail
+ * (prev) of the list. Unlike more traditional nullptr-terminated
+ * lists, though, here, the sentinel participates in the ring. 
+ * + * This is marked `cheri_no_subobject_bounds` because some of our cons + * cell implementations use pointer proxies that rely on the bounds + * provided by `this` (which, in turn, is likely to be + * `cheri_no_subobject_bounds`) + */ + Cell sentinel __attribute__((__cheri_no_subobject_bounds__)); + + __always_inline void init() + { + sentinel.cell_init(); + } + + __always_inline bool is_empty() + { + return linked_list::is_singleton(&sentinel); + } + + __always_inline void append(Cell *elem) + { + linked_list::insert_before(&sentinel, elem); + } + + __always_inline void append_new(Cell *elem) + { + linked_list::insert_new_before(&sentinel, elem); + } + + __always_inline void prepend(Cell *elem) + { + linked_list::insert_before(elem, &sentinel); + } + + __always_inline Cell *first() + { + return sentinel.cell_next(); + } + + __always_inline Cell *last() + { + return sentinel.cell_prev(); + } + + __always_inline bool last_is_eq(Cell *p) + { + return p->cell_next_is_eq(&sentinel); + } + + __always_inline Cell *unsafe_take_first() + { + Cell *f = sentinel.cell_next(); + linked_list::unsafe_remove_link(&sentinel, f); + return f; + } + + __always_inline Cell *take_all() + { + auto p = linked_list::unsafe_remove(&sentinel); + sentinel.cell_init(); + return p; + } + + template<typename F> + __always_inline bool search(F f) + { + return linked_list::search(&sentinel, f); + } + }; + + namespace cell + { + + /** Cons cell using two pointers */ + class Pointer + { + Pointer *prev, *next; + + public: + __always_inline void cell_init() + { + prev = next = this; + } + + __always_inline auto cell_next() + { + return ds::pointer::proxy::Pointer(next); + } + + __always_inline auto cell_prev() + { + return ds::pointer::proxy::Pointer(prev); + } + }; + static_assert(HasPrimOpsInit<Pointer>); + + /** + * Encode a linked list cons cell as a pair of addresses (but present an + * interface in terms of pointers). 
CHERI bounds on the returned + * pointers are inherited from the pointer to `this` cons cell. + */ + class PtrAddr + { + ptraddr_t prev, next; + + public: + /* Primops */ + + __always_inline void cell_init() + { + prev = next = CHERI::Capability{this}.address(); + } + + __always_inline auto cell_next() + { + return ds::pointer::proxy::PtrAddr(this, next); + } + + __always_inline auto cell_prev() + { + return ds::pointer::proxy::PtrAddr(this, prev); + } + + /* + * Specialized implementations that may be slightly fewer + * instructions than the generic approaches in terms of the primops. + */ + + __always_inline bool cell_is_singleton() + { + return prev == CHERI::Capability{this}.address(); + } + + __always_inline bool cell_is_doubleton() + { + return prev == next; + } + + __always_inline bool cell_next_is_eq(PtrAddr *p)
It feels like this would be cleaner if you exposed a `next()` that had an `operator==` (or, since it's the future now, an `operator<=>`) implemented.