| repo_name | dataset | lang | pr_id | owner | reviewer | diff_hunk | code_review_comment |
|---|---|---|---|---|---|---|---|
svsm | github_2023 | others | 461 | coconut-svsm | Freax13 | @@ -561,6 +581,73 @@ impl PageTable {
Self::walk_addr_lvl3(&mut self.root, vaddr)
}
+ /// Calculate the virtual address of a PTE in the self-map, which maps a
+ /// specified virtual address.
+ ///
+ /// # Parameters
+ /// - `vaddr': The virtual address whose PTE should be located.
+ ///
+ /// # Returns
+ /// The virtual address of the PTE.
+ fn get_pte_address(vaddr: VirtAddr) -> VirtAddr {
+ SVSM_PTE_BASE
+ + ((u64::from(vaddr) & 0x0000_FFFF_FFFF_F000) >> 9)
+ .try_into()
+ .unwrap() | ```suggestion
SVSM_PTE_BASE + ((usize::from(vaddr) & 0x0000_FFFF_FFFF_F000) >> 9)
```
The same change can be applied to the return statements in `virt_to_phys` below. |
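The arithmetic behind this suggestion can be sketched standalone: with a `usize`-based address type, the mask-and-shift stays in `usize` and the `try_into().unwrap()` disappears. The base constant below is a made-up placeholder, not the real `SVSM_PTE_BASE` value.

```rust
// Placeholder self-map base; the real SVSM_PTE_BASE differs.
const PTE_BASE: usize = 0xffff_ff80_0000_0000;

fn get_pte_address(vaddr: usize) -> usize {
    // Keep the page-aligned low 48 bits, then shift right by 9: each 4 KiB
    // page corresponds to one 8-byte PTE slot in the self-map window, so the
    // offset into the window is (vaddr / 4096) * 8, i.e. the masked vaddr >> 9.
    PTE_BASE + ((vaddr & 0x0000_FFFF_FFFF_F000) >> 9)
}

fn main() {
    let pte = get_pte_address(0x0000_1234_5678_9abc);
    assert_eq!(pte, 0xffff_ff89_1a2b_3c48);
}
```

Because the whole computation is `usize`, no fallible conversion is needed, which is the point of the suggestion.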
svsm | github_2023 | others | 461 | coconut-svsm | Freax13 | @@ -561,6 +581,73 @@ impl PageTable {
Self::walk_addr_lvl3(&mut self.root, vaddr)
}
+ /// Calculate the virtual address of a PTE in the self-map, which maps a
+ /// specified virtual address.
+ ///
+ /// # Parameters
+ /// - `vaddr': The virtual address whose PTE should be located.
+ ///
+ /// # Returns
+ /// The virtual address of the PTE.
+ fn get_pte_address(vaddr: VirtAddr) -> VirtAddr {
+ SVSM_PTE_BASE
+ + ((u64::from(vaddr) & 0x0000_FFFF_FFFF_F000) >> 9)
+ .try_into()
+ .unwrap()
+ }
+
+ /// Perform a virtual to physical translation using the self-map.
+ ///
+ /// # Parameters
+ /// - `vaddr': The virtual address to transalte.
+ ///
+ /// # Returns
+ /// Some(PhysAddr) if the virtual address is valid.
+ /// None if the virtual address is not valid.
+ pub fn virt_to_phys(vaddr: VirtAddr) -> Option<PhysAddr> { | This function needs to synchronize with the other functions modifying the page tables to prevent stale/use-after-free accesses. |
svsm | github_2023 | others | 461 | coconut-svsm | Freax13 | @@ -82,11 +88,22 @@ pub trait SvsmPlatform {
op: PageStateChangeOp,
) -> Result<(), SvsmError>;
- /// Marks a range of pages as valid for use as private pages.
- fn validate_page_range(&self, region: MemoryRegion<VirtAddr>) -> Result<(), SvsmError>;
+ /// Marks a physical range of pages as valid or invalid for use as private
+ /// pages. Not usable in stage2. | > Not usable in stage2.
Can we enforce this at compile time? |
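One way the question could be answered in the affirmative is a typestate sketch: encode the boot stage in a zero-sized type parameter so stage2 code cannot even name the method. The types below are illustrative, not the SVSM's actual platform API.

```rust
use core::marker::PhantomData;

// Zero-sized stage markers (hypothetical names, not real SVSM types).
struct Stage2;
struct Kernel;

struct Platform<S> {
    _stage: PhantomData<S>,
}

impl<S> Platform<S> {
    fn new() -> Self {
        Platform { _stage: PhantomData }
    }
}

// The method exists only for the kernel stage; a `Platform<Stage2>` simply
// has no `validate_page_range`, so misuse fails to compile.
impl Platform<Kernel> {
    fn validate_page_range(&self) -> bool {
        true
    }
}

fn main() {
    let p: Platform<Kernel> = Platform::new();
    assert!(p.validate_page_range());
    let _stage2: Platform<Stage2> = Platform::new();
    // _stage2.validate_page_range(); // would not compile
}
```

The same effect could also be had more bluntly with a `#[cfg(...)]` gate on stage2 builds; the typestate version keeps both stages in one compilation unit.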
svsm | github_2023 | others | 461 | coconut-svsm | Freax13 | @@ -140,14 +143,11 @@ fn map_and_validate(
let mut pgtbl = this_cpu().get_pgtable();
pgtbl.map_region(vregion, paddr, flags)?;
+ let pregion = MemoryRegion::new(paddr, vregion.len());
if config.page_state_change_required() {
- platform.page_state_change(
- MemoryRegion::new(paddr, vregion.len()),
- PageSize::Huge,
- PageStateChangeOp::Private,
- )?;
+ platform.page_state_change(pregion, PageSize::Huge, PageStateChangeOp::Private)?; | This change isn't needed anymore. |
svsm | github_2023 | others | 461 | coconut-svsm | Freax13 | @@ -361,6 +362,11 @@ impl PTEntry {
let addr = PhysAddr::from(self.0.bits() & 0x000f_ffff_ffff_f000);
strip_confidentiality_bits(addr)
}
+
+ /// Read a page table entry from the specified virtual address.
+ pub fn read_pte(vaddr: VirtAddr) -> Self {
+ unsafe { *vaddr.as_ptr::<Self>() }
+ } | ```suggestion
pub unsafe fn read_pte(vaddr: VirtAddr) -> Self {
*vaddr.as_ptr::<Self>()
}
```
That was the more important part of my [suggestion](https://github.com/coconut-svsm/svsm/pull/461#discussion_r1771258416) regarding this function :D |
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -63,10 +63,9 @@ fn init_percpu(platform: &mut dyn SvsmPlatform) -> Result<(), SvsmError> {
Ok(())
}
-fn shutdown_percpu() {
- this_cpu()
- .shutdown()
- .expect("Failed to shut down percpu data (including GHCB)");
+unsafe fn shutdown_percpu() { | Why is this function `unsafe`? At a minimum, there should be a Safety comment here. But there is also nothing about the function declaration that suggests that it can only be called from unsafe code. My understanding of the convention we have been using is that a function should only be declared `unsafe` if it is not possible to call it from safe code due to its parameters or return value (this was the position advocated by @00xc as I always understood it). |
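The convention being described — mark a function `unsafe` only when its signature carries obligations the compiler cannot check — can be illustrated with a toy pair of functions (not SVSM code):

```rust
/// # Safety
/// `ptr` must be non-null, aligned, and point to an initialized `u32` that
/// is valid for reads for the duration of the call.
unsafe fn read_raw(ptr: *const u32) -> u32 {
    // The caller-supplied raw pointer is what makes this inherently unsafe.
    unsafe { *ptr }
}

// By contrast, a function that is merely "dangerous" operationally but whose
// body upholds all invariants itself should stay safe to call.
fn shutdown_sketch() -> &'static str {
    "shutting down"
}

fn main() {
    let value = 42u32;
    // An `unsafe` signature forces an unsafe block at every call site.
    let got = unsafe { read_raw(&value) };
    assert_eq!(got, 42);
    assert_eq!(shutdown_sketch(), "shutting down");
}
```

Under that reading, `shutdown_percpu` belongs in the second category unless a documented Safety contract says otherwise.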
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -37,3 +37,54 @@ pub unsafe fn write_bytes(dst: usize, size: usize, value: u8) {
);
}
}
+
+/// Returns whether there are `size` null-bytes at `src`.
+///
+/// # Safety
+///
+/// This function has all the safety requirements of `core::ptr::read` except
+/// that data races are explicitly permitted.
+#[inline(always)]
+pub unsafe fn is_clear(src: usize, size: usize) -> bool { | This appears to be used only by test code. Should it therefore be within the test module so it is not accidentally referenced by non-test code? |
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -4,25 +4,38 @@
//
// Author: Jon Lange (jlange@microsoft.com)
+use core::mem::MaybeUninit;
+use core::ptr::NonNull;
+
use crate::address::VirtAddr;
use crate::cpu::flush_tlb_global_sync;
+use crate::cpu::mem::{copy_bytes, is_clear, write_bytes};
use crate::cpu::percpu::this_cpu;
use crate::error::SvsmError;
use crate::mm::validate::{
valid_bitmap_clear_valid_4k, valid_bitmap_set_valid_4k, valid_bitmap_valid_addr,
};
-use crate::mm::virt_to_phys;
+use crate::mm::{virt_to_phys, PageBox};
use crate::platform::{PageStateChangeOp, SVSM_PLATFORM};
+use crate::protocols::errors::SvsmReqError;
use crate::types::{PageSize, PAGE_SIZE};
use crate::utils::MemoryRegion;
+use zerocopy::{FromBytes, FromZeroes};
+
/// Makes a virtual page shared by revoking its validation, updating the
/// page state, and modifying the page tables accordingly.
///
/// # Arguments
///
/// * `vaddr` - The virtual address of the page to be made shared.
-pub fn make_page_shared(vaddr: VirtAddr) -> Result<(), SvsmError> {
+///
+/// # Safety
+///
+/// Converting the memory at `vaddr` must be safe within Rust's memory model.
+/// Notably any objects at `vaddr` must tolerate unsychronized writes of any | Typo: `unsychronized` => `unsynchronized`. |
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -160,7 +162,28 @@ impl GhcbPage {
impl Drop for GhcbPage {
fn drop(&mut self) {
- self.0.shutdown().expect("Could not shut down GHCB");
+ let vaddr = self.0.vaddr();
+ let paddr = virt_to_phys(vaddr);
+
+ // Re-encrypt page
+ this_cpu()
+ .get_pgtable()
+ .set_encrypted_4k(vaddr)
+ .expect("Could not re-encrypt page");
+
+ // Unregister GHCB PA
+ register_ghcb_gpa_msr(PhysAddr::null()).expect("Could not unregister GHCB"); | This is not guaranteed to succeed, because the VMM is not required to accept NULL as a valid GHCB page. |
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -160,7 +162,28 @@ impl GhcbPage {
impl Drop for GhcbPage {
fn drop(&mut self) { | Is there any scenario for which dropping a GHCB is not associated with a fatal termination of the SVSM? The process of restoring a GHCB page to the private state is fragile, and I believe it would be best to avoid any attempt to do so unless we were aware of a valid use case for this. If we cannot come up with one, then it would be simplest for `GhcbPage::drop()` simply to panic, since we shouldn't get here anyway. |
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -317,33 +334,10 @@ impl GHCB {
Ok(register_ghcb_gpa_msr(paddr)?)
}
- pub fn shutdown(&self) -> Result<(), SvsmError> {
- let vaddr = VirtAddr::from(ptr::from_ref(self));
- let paddr = virt_to_phys(vaddr);
-
- // Re-encrypt page
- this_cpu().get_pgtable().set_encrypted_4k(vaddr)?;
-
- // Unregister GHCB PA
- register_ghcb_gpa_msr(PhysAddr::null())?;
-
- // Make page guest-invalid
- validate_page_msr(paddr)?;
-
- // Make page guest-valid
- pvalidate(vaddr, PageSize::Regular, PvalidateOp::Valid)?;
-
- // Needs guarding for Stage2 GHCB
- if valid_bitmap_valid_addr(paddr) {
- valid_bitmap_set_valid_4k(paddr);
- }
-
- Ok(())
- }
-
pub fn clear(&self) {
// Clear valid bitmap
- self.valid_bitmap.set([0, 0]);
+ self.valid_bitmap[0].store(0, Ordering::SeqCst); | The GHCB page is not manipulated across processors, so there is no reason for expensive memory barriers when modifying its contents. The only race conditions we might expect are between the SVSM environment and the host running on the same processor; this possibility for interruption means that atomic operations are necessary, but `Ordering::Relaxed` will be sufficient for all such operations. That is true everywhere throughout this file. |
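The suggested relaxation can be sketched in isolation. The two-word bitmap below only mirrors the shape of the code under review; for state touched from a single CPU (interruptible by the host, but never shared across CPUs), `Ordering::Relaxed` still gives atomic stores and loads without memory-barrier overhead.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Hypothetical two-word valid bitmap, mirroring the shape in the diff.
struct ValidBitmap {
    words: [AtomicU64; 2],
}

impl ValidBitmap {
    fn new() -> Self {
        ValidBitmap {
            words: [AtomicU64::new(u64::MAX), AtomicU64::new(u64::MAX)],
        }
    }

    // Atomic stores are still needed because the host can interrupt us,
    // but no ordering with other CPUs is required, so Relaxed suffices.
    fn clear(&self) {
        self.words[0].store(0, Ordering::Relaxed);
        self.words[1].store(0, Ordering::Relaxed);
    }

    fn is_clear(&self) -> bool {
        self.words.iter().all(|w| w.load(Ordering::Relaxed) == 0)
    }
}

fn main() {
    let bitmap = ValidBitmap::new();
    assert!(!bitmap.is_clear());
    bitmap.clear();
    assert!(bitmap.is_clear());
}
```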
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -78,3 +91,139 @@ pub fn make_page_private(vaddr: VirtAddr) -> Result<(), SvsmError> {
Ok(())
}
+
+/// SharedBox is a safe wrapper around memory pages shared with the host.
+pub struct SharedBox<T> { | This structure appears to have nothing to do with GHCB - the `GhcbPage` structure doesn't even make use of it - and thus it should be in its own source file and not stuck in the GHCB sources. In addition, the notion of `SharedBox` will be used on other confidential platforms (like TDX) and definitely should not be in the SEV sources directory. |
svsm | github_2023 | others | 451 | coconut-svsm | msft-jlange | @@ -47,51 +46,28 @@ pub struct HVExtIntInfo {
pub isr: [AtomicU32; 8],
}
-/// An allocation containing the `#HV` doorbell page.
-#[derive(Debug)]
-pub struct HVDoorbellPage(PageBox<HVDoorbell>);
-
-impl HVDoorbellPage {
- /// Allocates a new HV doorbell page and registers it on the hypervisor
- /// using the given GHCB.
- pub fn new(ghcb: &GHCB) -> Result<Self, SvsmError> {
- // SAFETY: all zeroes is a valid representation for `HVDoorbell`.
- let page = PageBox::try_new_zeroed()?;
- let paddr = virt_to_phys(page.vaddr());
-
- // The #HV doorbell page must be shared before it can be used.
- make_page_shared(page.vaddr())?;
-
- // Now Drop will have correct behavior, so construct the new type.
- // SAFETY: all zeros is a valid representation of the HV doorbell page.
- let page = unsafe { Self(page.assume_init()) };
- ghcb.register_hv_doorbell(paddr)?;
- Ok(page)
- }
+/// Allocates a new HV doorbell page and registers it on the hypervisor
+/// using the given GHCB.
+pub fn allocate_hv_doorbell_page(ghcb: &GHCB) -> Result<&'static HVDoorbell, SvsmError> { | Why does this function not use `SharedBox`? It seems that `SharedBox` is designed to do exactly the sort of assignment and visibility management expected here, and unlike the GHCB, there is no chicken-and-egg problem because `SharedBox` only requires the existence of a GHCB, not of a doorbell page. |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -436,6 +436,65 @@ of these limitations may be addressed in future updates.
* Debugging is currently limited to the SVSM kernel itself. OVMF and the guest
OS cannot be debugged using the SVSM GDB stub.
+Coconut-SVSM CI
+-------------
+ | A description of what the following steps are for would be appreciated; at first glance their purpose is not clear. |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -436,6 +436,65 @@ of these limitations may be addressed in future updates.
* Debugging is currently limited to the SVSM kernel itself. OVMF and the guest
OS cannot be debugged using the SVSM GDB stub.
+Coconut-SVSM CI
+-------------
+
+## SVSM - Using the Script Utility
+
+Download the sev_utils script:
+```
+git clone https://github.com/ramagali24/sev-utils.git
+cd sev-utils/tools
+
+```
+
+To set up the host by building IGVM, SVSM, QEMU, OVMF, and the Linux kernel, use the following command:
+```
+./snp.sh --svsm setup-host | Wait, does this install things on the host?
What kind of system does it need to be?
I would add a big warning here. I don't know whether people are happy to have things installed on the host, so I would explain clearly what gets installed. |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -8,10 +8,18 @@ set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+ | Unrelated changes |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -82,6 +105,7 @@ if [ ! -z $IMAGE ]; then
-device scsi-hd,drive=disk0,bootindex=0"
fi
+ | ditto |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -8,10 +8,18 @@ set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+
: "${QEMU:=qemu-system-x86_64}"
: "${IGVM:=$SCRIPT_DIR/../bin/coconut-qemu.igvm}"
+: "${KERNEL_BIN:="${guest_kernel}"}"
+: "${INITRD_BIN:="${GENERATED_INITRD_BIN}"}"
+
+GUEST_ROOT_LABEL="${GUEST_ROOT_LABEL:-cloudimg-rootfs}"
+GUEST_KERNEL_APPEND="root=LABEL=${GUEST_ROOT_LABEL} ro console=ttyS0"
+
-C_BIT_POS=`$SCRIPT_DIR/../utils/cbit`
+#C_BIT_POS=`$SCRIPT_DIR/../utils/cbit` | Why commenting this line? |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -8,10 +8,18 @@ set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+
: "${QEMU:=qemu-system-x86_64}"
: "${IGVM:=$SCRIPT_DIR/../bin/coconut-qemu.igvm}"
+: "${KERNEL_BIN:="${guest_kernel}"}"
+: "${INITRD_BIN:="${GENERATED_INITRD_BIN}"}" | I don't understand these definitions, usually this way is used to provide a default, but here you're putting the default to other environment variables not defined anywhere, what's the point? |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -115,6 +142,9 @@ $SUDO_CMD \
$IMAGE_DISK \
-nographic \
-monitor none \
+ -kernel ${KERNEL_BIN} \
+ -initrd ${INITRD_BIN} \
+ -append "${GUEST_KERNEL_APPEND}" | By adding these parameters do we preserve the previous behavior of booting from disk? |
svsm | github_2023 | others | 465 | coconut-svsm | stefano-garzarella | @@ -10,6 +10,11 @@ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
: "${QEMU:=qemu-system-x86_64}"
: "${IGVM:=$SCRIPT_DIR/../bin/coconut-qemu.igvm}"
+: "${KERNEL_BIN:="${guest_kernel}"}" ## Incase your work environment is ubuntu, guest_kernel is your ubuntu kernel image "vmlinuz-6.8.0-snp-guest-bc4de28e0cc1" o
+: "${INITRD_BIN:="${GENERATED_INITRD_BIN}"}" #Incase your work environment is ubuntu, GENERATED_INITRD_BIN is your generated guest linux "initrd.img-6.8.0-snp-guest-bc4de28e0cc1" | Again, I still don't understand the meaning of these definitions. Why do the default of these two new variables depend on other variables? |
svsm | github_2023 | others | 462 | coconut-svsm | Freax13 | @@ -306,25 +303,27 @@ pub mod svsm_gdbstub {
});
}
- struct GdbStubConnection;
+ struct GdbStubConnection<'a> {
+ serial_port: SerialPort<'a>,
+ }
- impl GdbStubConnection {
- const fn new() -> Self {
- Self {}
+ impl GdbStubConnection<'_> {
+ fn new(platform: &'static dyn SvsmPlatform) -> Self { | ```suggestion
impl<'a> GdbStubConnection<'a> {
fn new(platform: &'a dyn SvsmPlatform) -> Self {
```
Or just change the lifetime of the `serial_port` field to `'static`. |
svsm | github_2023 | others | 464 | coconut-svsm | p4zuu | @@ -523,6 +583,32 @@ fn ioio_perm<I: InsnMachineCtx>(mctx: &I, port: u16, size: Bytes, io_read: bool)
}
}
+#[inline]
+fn read_bytereg<I: InsnMachineCtx>(mctx: &I, reg: Register, lhbr: bool) -> u8 {
+ let data = mctx.read_reg(reg); | I guess you can cast to `u8` at the beginning to avoid duplicating the cast:
```suggestion
let data = mctx.read_reg(reg) as u8;
``` |
svsm | github_2023 | others | 464 | coconut-svsm | p4zuu | @@ -275,6 +279,59 @@ pub mod test_utils {
Ok(())
}
+
+ fn translate_linear_addr(
+ &self,
+ la: usize,
+ _write: bool,
+ _fetch: bool,
+ ) -> Result<(usize, bool), InsnError> {
+ Ok((la, false))
+ }
+
+ fn handle_mmio_read(
+ &self,
+ pa: usize,
+ _shared: bool,
+ size: Bytes,
+ ) -> Result<u64, InsnError> {
+ if pa != core::ptr::addr_of!(self.mmio_reg) as usize {
+ return Ok(0);
+ }
+
+ let data = unsafe { *(pa as *const u64) };
+ match size {
+ Bytes::One => Ok(data as u8 as u64), | Is there a reason why not returning `Ok(data)` here? |
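Whatever the answer for this test helper, the behavioral difference the question hinges on is truncation: a narrowing `as` cast keeps only the low bits, which returning `data` directly would not, as a small example makes concrete.

```rust
fn main() {
    let data: u64 = 0x1122_3344_5566_7788;

    // Narrow-then-widen keeps only the low bytes of the register value.
    assert_eq!(data as u8 as u64, 0x88);
    assert_eq!(data as u16 as u64, 0x7788);
    assert_eq!(data as u32 as u64, 0x5566_7788);

    // Returning `data` unchanged would leak the high bytes too.
    assert_eq!(data, 0x1122_3344_5566_7788);
}
```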
svsm | github_2023 | others | 464 | coconut-svsm | p4zuu | @@ -275,6 +279,59 @@ pub mod test_utils {
Ok(())
}
+
+ fn translate_linear_addr(
+ &self,
+ la: usize,
+ _write: bool,
+ _fetch: bool,
+ ) -> Result<(usize, bool), InsnError> {
+ Ok((la, false))
+ }
+
+ fn handle_mmio_read(
+ &self,
+ pa: usize,
+ _shared: bool,
+ size: Bytes,
+ ) -> Result<u64, InsnError> {
+ if pa != core::ptr::addr_of!(self.mmio_reg) as usize {
+ return Ok(0);
+ }
+
+ let data = unsafe { *(pa as *const u64) };
+ match size {
+ Bytes::One => Ok(data as u8 as u64),
+ Bytes::Two => Ok(data as u16 as u64),
+ Bytes::Four => Ok(data as u32 as u64),
+ Bytes::Eight => Ok(data),
+ _ => Err(InsnError::HandleMmioRead),
+ }
+ }
+
+ fn handle_mmio_write(
+ &mut self,
+ pa: usize,
+ _shared: bool,
+ size: Bytes,
+ data: u64,
+ ) -> Result<(), InsnError> {
+ if pa != core::ptr::addr_of!(self.mmio_reg) as usize {
+ return Ok(());
+ }
+
+ let addr = pa as *mut u64;
+ unsafe {
+ match size {
+ Bytes::One => *addr = data as u8 as u64, | Same about `Ok(data)`? |
svsm | github_2023 | others | 456 | coconut-svsm | Freax13 | @@ -0,0 +1,108 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0
+//
+// Copyright (c) 2024 Intel Corporation.
+//
+// Author: Chuanxiao Dong <chuanxiao.dong@intel.com>
+
+extern crate alloc;
+
+use crate::cpu::percpu::current_task;
+use crate::error::SvsmError;
+use alloc::sync::Arc;
+
+#[derive(Clone, Copy, Debug)]
+pub enum ObjError {
+ NotFound,
+}
+
+/// An object represents the type of resource like file, VM, vCPU in the
+/// COCONUT-SVSM kernel which can be accessible by the user mode. The Obj
+/// trait is defined for such type of resource, which can be used to define
+/// the common functionalities of the objects. With the trait bounds of Send
+/// and Sync, the objects implementing Obj trait could be sent to another
+/// thread and shared between threads safely.
+pub trait Obj: Send + Sync + core::fmt::Debug {}
+
+/// ObjHandle is a unique identifier for an object in the current process.
+/// An ObjHandle can be converted to a u32 id which can be used by the user
+/// mode to access this object. The passed id from the user mode by syscalls
+/// can be converted to an `ObjHandle`, which is used to access the object in
+/// the COCONUT-SVSM kernel.
+#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
+pub struct ObjHandle(u32);
+
+impl ObjHandle {
+ pub fn new(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<u32> for ObjHandle {
+ #[inline]
+ fn from(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<ObjHandle> for u32 {
+ #[inline]
+ fn from(obj_handle: ObjHandle) -> Self {
+ obj_handle.0
+ }
+}
+
+pub type ObjPointer = Arc<dyn Obj>; | I'm not a big fan of the name `ObjPointer`. The word "pointer" reminds me too much of raw pointers and this is not a raw pointer. How about `ObjReference` (or just `Arc<dyn Obj>`)? |
svsm | github_2023 | others | 456 | coconut-svsm | Freax13 | @@ -0,0 +1,113 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0
+//
+// Copyright (c) 2024 Intel Corporation.
+//
+// Author: Chuanxiao Dong <chuanxiao.dong@intel.com>
+
+extern crate alloc;
+
+use crate::cpu::percpu::current_task;
+use crate::error::SvsmError;
+use alloc::sync::Arc;
+
+#[derive(Clone, Copy, Debug)]
+pub enum ObjError {
+ InvalidId, | Let's call this `InvalidHandle` (since we already use `ObjHandle`, not `ObjId`). |
svsm | github_2023 | others | 456 | coconut-svsm | joergroedel | @@ -139,6 +141,9 @@ pub struct Task {
/// Link to scheduler run queue
runlist_link: LinkedListAtomicLink,
+
+ /// Objects shared among threads within the same process
+ objs: Arc<RWLock<BTreeMap<ObjHandle, Arc<dyn Obj>>>>, | Why is the outer `Arc` needed? Could as well be a `Box`, no? |
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,506 @@
+# Background
+
+The syscalls design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents the type of resource like file, VM, vCPU in the
+COCONUT-SVSM kernel, that can be accessible by the user mode. A trait named Obj
+is defined for such type of resource, which defines the common functionalities
+of the object. The Obj trait is defined with the trait bounds of Send and Sync,
+which means the object implementing Obj trait could be sent to another thread
+and shared between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+ /// Open an object
+ fn open(&self) -> Result<(), SvsmError> {
+ Ok(())
+ }
+
+ /// Close an object
+ fn close(&self) {} | Why can't the `Obj` just implement `Drop` instead of implementing this method? |
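The reviewer's alternative — cleanup in `Drop` rather than an explicit `close()` — can be sketched with a counter standing in for the real release logic (illustrative, not SVSM code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for whatever bookkeeping close() would have done.
static CLOSED_COUNT: AtomicUsize = AtomicUsize::new(0);

struct FileObj;

impl Drop for FileObj {
    fn drop(&mut self) {
        // Runs automatically when the last owner goes away: callers cannot
        // forget to close, and double-close is impossible by construction.
        CLOSED_COUNT.fetch_add(1, Ordering::Relaxed);
    }
}

fn main() {
    {
        let _obj = FileObj;
    } // `_obj` dropped here, cleanup runs exactly once
    assert_eq!(CLOSED_COUNT.load(Ordering::Relaxed), 1);
}
```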
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,506 @@
+# Background
+
+The syscalls design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents the type of resource like file, VM, vCPU in the
+COCONUT-SVSM kernel, that can be accessible by the user mode. A trait named Obj
+is defined for such type of resource, which defines the common functionalities
+of the object. The Obj trait is defined with the trait bounds of Send and Sync,
+which means the object implementing Obj trait could be sent to another thread
+and shared between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+ /// Open an object
+ fn open(&self) -> Result<(), SvsmError> { | What would this method do? Why can't we do this at the time the object is constructed? |
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,506 @@
+# Background
+
+The syscalls design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents the type of resource like file, VM, vCPU in the
+COCONUT-SVSM kernel, that can be accessible by the user mode. A trait named Obj
+is defined for such type of resource, which defines the common functionalities
+of the object. The Obj trait is defined with the trait bounds of Send and Sync,
+which means the object implementing Obj trait could be sent to another thread
+and shared between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+ /// Open an object
+ fn open(&self) -> Result<(), SvsmError> {
+ Ok(())
+ }
+
+ /// Close an object
+ fn close(&self) {}
+
+ /// Convert to a virtual machine object if is supported.
+ fn as_vm(&self) -> Option<&VmObj> {
+ None
+ }
+
+ /// Convert to a virtual cpu object if is supported.
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ None
+ }
+
+ /// Get a mappable file handle if the object is mappable.
+ fn mappable(&self) -> Option<&FileHandle> {
+ None
+ }
+
+ /// Convert to an object which implements EventObj trait.
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ None
+ }
+ ...
+}
+```
+
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+triat EventObj {
+ ...
+}
+
+impl EventObj for VcpuObj {
+ ...
+}
+
+impl Obj for VcpuObj {
+ fn open(&self) -> Result<(), SvsmError> {
+ ...
+ }
+
+ fn close(&self) {
+ ...
+ }
+
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ Some(self)
+ }
+
+ fn mappable(&self) -> Option<&FileHandle> {
+ Some(&self.run_page_file_handle)
+ }
+
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ Some(self)
+ }
+ ...
+}
+
+```
+
+Objects without special requirements can fall back to the default implementation
+in the Obj trait, which returns None.
+
+## Object Handle
+
+When the user mode is trying to open a particular object via the syscalls, the
+COCONUT-SVSM kernel creates an object handle, which is defined as below:
+
+```Rust
+structure ObjHandle {
+ id: u32,
+ obj: Arc<dyn Obj>,
+}
+
+impl ObjHandle {
+ /// Creates a new `ObjHandle` with opening the object.
+ ///
+ /// # Arguments
+ ///
+ /// * `id` - A unique identifier for the object.
+ /// * `obj` - An `Arc` containing a trait object implementing `Obj`.
+ ///
+ /// # Returns
+ ///
+ /// * `Result<Self, SvsmError>` - Returns an `ObjHandle` on success, or an `SvsmError`
+ /// on failure.
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the `open` method on the `obj` fails.
+ pub fn new(id: u32, obj: Arc<dyn Obj>) -> Result<Self, SvsmError> {
+ obj.open()?;
+ Ok(Self { id, obj })
+ }
+
+ /// Get the ObjHandle id.
+ pub fn id(&self) -> u32 {
+ self.id
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+ // Drop the ObjHandle will close the object.
+ self.obj.close();
+ }
+}
+
+impl Obj for ObjHandle {
+ // Implement Obj trait for ObjHandle to facilitate accessing the object.
+ ...
+}
+```
+
+The `ObjHandle` doesn't implement Copy/Clone trait. It contains a unique id
+allocated by a global ObjIDAllocator, and an Arc pointer pointing to the opened
+object which implements the Obj trait. When a new `ObjHandle` is created, the
+object is opened by calling the `open` method. When the `ObjHandle` is dropped,
+the object is closed by calling the `close` method. The closed object itself
+will be dropped if its last reference is held by the `ObjHandle`, otherwise the
+object will still be alive.
+
+The unique id of the `ObjHandle` will be returned as the user mode ObjHandle,
+and the subsequent syscalls can use this id to access this object.
+
+This requires the COCONUT-SVSM kernel to take below responsibilities:
+
+- The COCONUT-SVSM kernel should manage the object handle's lifecycle properly.
+ The `ObjHandle` should be dropped when the user mode thread closes the object
+ via syscall, or the user mode thread is terminated without closing.
+
+- The COCONUT-SVSM kernel should prevent a user mode thread from misusing the
+ object handle opened by another thread.
+
+To achieve the above goals, the `ObjHandle` should be associated with the task
+which creates it. The task structure is extended to hold the ObjHandles created
+by this thread.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Object handles created by this task
+ obj_handles: RWLock<BTreeMap<u32, Arc<ObjHandle>>>,
+}
+```
+
+The task structure will provide 3 new functions:
+
+- `add_obj_handle(&self, obj_handle: Arc<ObjHandle>) -> Result<(), SvsmError>;`
+ This is to add the object handle to the BTreeMap with the object handle id as
+ the key. The syscalls which open an object will add Arc<ObjHandle> to the
+ current task.
+
+- `remove_obj_handle(&self, id: u32) -> Result<Arc<ObjHandle>, SvsmError>;` This
+ is to remove the object handle from the BTreeMap. The CLOSE syscall will
+ remove the corresponding ObjHandle from BTreeMap and drop it to close the
+ object.
+
+- `get_obj_handle(&self, id: u32) -> Result<Arc<ObjHandle>, SvsmError>;` This is
+ to get the object handle from the BTreeMap, which will increase the reference
+ counter of the ObjHandle. The syscalls which access an object will get the
+ corresponding ObjHandle from the current task.
+
+When a task is terminated while it still has opened objects, these objects will
+be closed automatically when the `obj_handles` is dropped.
+
+### User Mode Object Handle
+
+The object is exposed to the user mode via the object open related syscalls by
+returning the id of the `ObjHandle` created by the COCONUT-SVSM kernel. Each
+ObjHandle is uniquely mapped to an id. The user mode can make use this id to
+access the corresponding object via other syscalls. From the user mode's point
+of view, the object handle is defined as below:
+
+```Rust
+/// User mode object handle received from syscalls.
+struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the id returned from a syscall.
+ fn new(raw: u32) -> Self {
+ Self(raw)
+ }
+
+ /// Get the raw object handle id, which can be used as the input of the syscalls.
+ fn raw(&self) -> u32 {
+ self.0
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+ // Close the object when drop the ObjHandle.
+ unsafe { syscall1(SYS_CLOSE, self.raw().into()) };
+ }
+}
+
+```
+
+The user mode `ObjHandle` doesn't implement Copy/Clone trait, and dropping
+`ObjHandle` will automatically close the underlying object via the syscall.
+
+For a syscall class which is associated with a particular object handle type,
+e.g. VM object handles for VMM subsystem syscalls, VCPU object handles for VCPU
+subsystem syscalls, can be defined as:
+
+```Rust
+struct VmObjHandle(ObjHandle);
+```
+
+```Rust
+struct VcpuObjHandle(ObjHandle);
+```
+
+# Opening an Object in User Mode
+
+The user mode can open a particular object via syscalls. For example, VM_OPEN
+syscall is used to open a virtual machine object. The COCONUT-SVSM kernel
+provides `obj_open()` function to facilitate opening an object in user mode.
+
+```Rust
+pub fn sys_vm_open(idx: u32) -> Result<u64, i32> {
+ // Get the VmObj
+ let vm_obj = get_vm_obj(idx)?;
+
+ // Open the VmObj to return the object handle id to the user mode.
+ obj_open(vm_obj).map_or(Err(..), |id| Ok(id.into()))
+}
+
+```
+
+```Rust
+/// Opens an object and assigns it a unique identifier.
+///
+/// # Arguments
+///
+/// * `obj` - An `ObjPointer` representing the object to be opened.
+///
+/// # Returns
+///
+/// * `Result<u32, SvsmError>` - Returns the unique identifier of the opened
+/// object on success, or an `SvsmError` on failure.
+///
+/// # Errors
+///
+/// This function will return an error if adding the object handle to the
+/// current task fails.
+pub fn obj_open(obj: Arc<dyn Obj>) -> Result<u32, SvsmError> {
+ let id = OBJ_ID_ALLOCATOR.next_id();
+ current_task()
+ .add_obj_handle(Arc::new(ObjHandle::new(id, obj)?))
+ .map(|_| id) | ```suggestion
current_task()
.add_obj_handle(Arc::new(ObjHandle::new(id, obj)?))?;
Ok(id)
``` |
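The style this suggestion prefers — propagate the error with `?`, then return the value explicitly — behaves identically to `.map(|_| id)` but reads more directly. A minimal stand-in (the real `add_obj_handle` is not shown here):

```rust
// Toy stand-in for the task's handle table.
fn add_handle(table: &mut Vec<u32>, id: u32) -> Result<(), String> {
    if table.contains(&id) {
        return Err(format!("handle {id} already exists"));
    }
    table.push(id);
    Ok(())
}

// Suggested shape: `?` for the error path, explicit Ok(id) for success.
fn obj_open(table: &mut Vec<u32>, id: u32) -> Result<u32, String> {
    add_handle(table, id)?;
    Ok(id)
}

fn main() {
    let mut table = Vec::new();
    assert_eq!(obj_open(&mut table, 5), Ok(5));
    assert!(obj_open(&mut table, 5).is_err());
}
```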
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,506 @@
+# Background
+
+The syscalls design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents the type of resource like file, VM, vCPU in the
+COCONUT-SVSM kernel, that can be accessible by the user mode. A trait named Obj
+is defined for such type of resource, which defines the common functionalities
+of the object. The Obj trait is defined with the trait bounds of Send and Sync,
+which means the object implementing Obj trait could be sent to another thread
+and shared between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+ /// Open an object
+ fn open(&self) -> Result<(), SvsmError> {
+ Ok(())
+ }
+
+ /// Close an object
+ fn close(&self) {}
+
+    /// Convert to a virtual machine object if it is supported.
+ fn as_vm(&self) -> Option<&VmObj> {
+ None
+ }
+
+    /// Convert to a virtual cpu object if it is supported.
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ None
+ }
+
+ /// Get a mappable file handle if the object is mappable.
+ fn mappable(&self) -> Option<&FileHandle> {
+ None
+ }
+
+ /// Convert to an object which implements EventObj trait.
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ None
+ }
+ ...
+}
+```
+
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+trait EventObj {
+ ...
+}
+
+impl EventObj for VcpuObj {
+ ...
+}
+
+impl Obj for VcpuObj {
+ fn open(&self) -> Result<(), SvsmError> {
+ ...
+ }
+
+ fn close(&self) {
+ ...
+ }
+
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ Some(self)
+ }
+
+ fn mappable(&self) -> Option<&FileHandle> {
+ Some(&self.run_page_file_handle)
+ }
+
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ Some(self)
+ }
+ ...
+}
+
+```
+
+Objects without special requirements can fall back to the default implementation
+in the Obj trait, which returns None.
+
+## Object Handle
+
+When the user mode tries to open a particular object via a syscall, the
+COCONUT-SVSM kernel creates an object handle, which is defined as below:
+
+```Rust
+struct ObjHandle {
+ id: u32,
+ obj: Arc<dyn Obj>,
+}
+
+impl ObjHandle {
+ /// Creates a new `ObjHandle` with opening the object.
+ ///
+ /// # Arguments
+ ///
+ /// * `id` - A unique identifier for the object.
+ /// * `obj` - An `Arc` containing a trait object implementing `Obj`.
+ ///
+ /// # Returns
+ ///
+ /// * `Result<Self, SvsmError>` - Returns an `ObjHandle` on success, or an `SvsmError`
+ /// on failure.
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the `open` method on the `obj` fails.
+ pub fn new(id: u32, obj: Arc<dyn Obj>) -> Result<Self, SvsmError> {
+ obj.open()?;
+ Ok(Self { id, obj })
+ }
+
+ /// Get the ObjHandle id.
+ pub fn id(&self) -> u32 {
+ self.id
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+ // Drop the ObjHandle will close the object.
+ self.obj.close();
+ }
+}
+
+impl Obj for ObjHandle {
+ // Implement Obj trait for ObjHandle to facilitate accessing the object.
+ ...
+}
+```
+
+The `ObjHandle` doesn't implement the Copy/Clone traits. It contains a unique id
+allocated by a global ObjIDAllocator, and an Arc pointer to the opened
+object which implements the Obj trait. When a new `ObjHandle` is created, the
+object is opened by calling the `open` method. When the `ObjHandle` is dropped,
+the object is closed by calling the `close` method. The closed object itself
+is dropped if the `ObjHandle` held its last reference; otherwise the
+object stays alive.
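This last-reference behavior is plain `Arc` reference counting; as a minimal standalone illustration (the names here are invented for the example, not kernel types):

```Rust
use std::sync::Arc;

// Stand-in for an opened kernel object; the real code would hold a VmObj etc.
struct Resource;

// Returns the strong counts (before, after) dropping the handle's reference.
fn last_reference_demo() -> (usize, usize) {
    let obj = Arc::new(Resource);
    let handle_ref = Arc::clone(&obj); // the reference held by the ObjHandle
    let before = Arc::strong_count(&obj);
    // Dropping the handle's reference alone does not free the object,
    // because `obj` still holds a reference.
    drop(handle_ref);
    let after = Arc::strong_count(&obj);
    (before, after)
}
```

Only when the count reaches zero is the object itself dropped.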
+
+The unique id of the `ObjHandle` will be returned as the user mode ObjHandle,
+and the subsequent syscalls can use this id to access this object.
+
+This requires the COCONUT-SVSM kernel to take on the responsibilities below:
+
+- The COCONUT-SVSM kernel should manage the object handle's lifecycle properly.
+ The `ObjHandle` should be dropped when the user mode thread closes the object
+ via syscall, or the user mode thread is terminated without closing.
+
+- The COCONUT-SVSM kernel should prevent a user mode thread from misusing the
+ object handle opened by another thread.
+
+To achieve the above goals, the `ObjHandle` should be associated with the task
+which creates it. The task structure is extended to hold the ObjHandles created
+by this thread.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Object handles created by this task
+ obj_handles: RWLock<BTreeMap<u32, Arc<ObjHandle>>>,
+}
+```
+
+The task structure will provide 3 new functions:
+
+- `add_obj_handle(&self, obj_handle: Arc<ObjHandle>) -> Result<(), SvsmError>;`
+ This is to add the object handle to the BTreeMap with the object handle id as
+ the key. The syscalls which open an object will add Arc<ObjHandle> to the
+ current task.
+
+- `remove_obj_handle(&self, id: u32) -> Result<Arc<ObjHandle>, SvsmError>;` This
+ is to remove the object handle from the BTreeMap. The CLOSE syscall will
+ remove the corresponding ObjHandle from BTreeMap and drop it to close the
+ object.
+
+- `get_obj_handle(&self, id: u32) -> Result<Arc<ObjHandle>, SvsmError>;` This is
+ to get the object handle from the BTreeMap, which will increase the reference
+ counter of the ObjHandle. The syscalls which access an object will get the
+ corresponding ObjHandle from the current task.
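A minimal standalone sketch of how these three methods could look, using std's `RwLock` and a local `ObjError` in place of the kernel's `RWLock` and `SvsmError` (the method names mirror the design above, but the bodies are assumptions):

```Rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

// Stand-ins for the kernel types described in this document.
trait Obj: Send + Sync {}

struct ObjHandle {
    id: u32,
    _obj: Arc<dyn Obj>,
}

#[derive(Debug, PartialEq)]
enum ObjError {
    Exist,
    NotFound,
}

struct Task {
    obj_handles: RwLock<BTreeMap<u32, Arc<ObjHandle>>>,
}

impl Task {
    // Insert the handle keyed by its id; fail if the id is already in use.
    fn add_obj_handle(&self, handle: Arc<ObjHandle>) -> Result<(), ObjError> {
        let mut handles = self.obj_handles.write().unwrap();
        if handles.contains_key(&handle.id) {
            return Err(ObjError::Exist);
        }
        handles.insert(handle.id, handle);
        Ok(())
    }

    // Remove and return the handle; if the caller drops the last Arc,
    // the ObjHandle's Drop impl closes the object.
    fn remove_obj_handle(&self, id: u32) -> Result<Arc<ObjHandle>, ObjError> {
        self.obj_handles
            .write()
            .unwrap()
            .remove(&id)
            .ok_or(ObjError::NotFound)
    }

    // Clone the Arc so the caller shares ownership while using the object.
    fn get_obj_handle(&self, id: u32) -> Result<Arc<ObjHandle>, ObjError> {
        self.obj_handles
            .read()
            .unwrap()
            .get(&id)
            .cloned()
            .ok_or(ObjError::NotFound)
    }
}
```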
+
+When a task is terminated while it still has open objects, these objects are
+closed automatically when `obj_handles` is dropped.
+
+### User Mode Object Handle
+
+The object is exposed to the user mode via the object open related syscalls by
+returning the id of the `ObjHandle` created by the COCONUT-SVSM kernel. Each
+ObjHandle is uniquely mapped to an id. The user mode can make use of this id to
+access the corresponding object via other syscalls. From the user mode's point
+of view, the object handle is defined as below:
+
+```Rust
+/// User mode object handle received from syscalls.
+struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the id returned from a syscall.
+ fn new(raw: u32) -> Self {
+ Self(raw)
+ }
+
+ /// Get the raw object handle id, which can be used as the input of the syscalls.
+ fn raw(&self) -> u32 {
+ self.0
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+ // Close the object when drop the ObjHandle.
+ unsafe { syscall1(SYS_CLOSE, self.raw().into()) };
+ }
+}
+
+```
+
+The user mode `ObjHandle` doesn't implement the Copy/Clone traits, and dropping
+the `ObjHandle` automatically closes the underlying object via the syscall.
+
+For a syscall class which is associated with a particular object handle type,
+e.g. VM object handles for VMM subsystem syscalls or VCPU object handles for
+VCPU subsystem syscalls, a dedicated wrapper type can be defined as:
+
+```Rust
+struct VmObjHandle(ObjHandle);
+```
+
+```Rust
+struct VcpuObjHandle(ObjHandle);
+```
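A possible shape for such a wrapper; the `new()`/`raw()` accessors are illustrative assumptions, and the `Drop` impl that issues the CLOSE syscall is omitted to keep the sketch self-contained:

```Rust
// Illustrative stand-in for the generic user-mode handle shown above.
struct ObjHandle(u32);

impl ObjHandle {
    fn raw(&self) -> u32 {
        self.0
    }
}

// A VM-specific handle: VMM syscall wrappers accept only this type, so a
// handle opened for one object class cannot be passed to another class's
// syscalls by mistake.
struct VmObjHandle(ObjHandle);

impl VmObjHandle {
    fn new(handle: ObjHandle) -> Self {
        Self(handle)
    }

    // Expose the raw id only where a syscall wrapper needs it.
    fn raw(&self) -> u32 {
        self.0.raw()
    }
}
```

The newtype costs nothing at runtime; it only adds compile-time separation between handle classes.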
+
+# Opening an Object in User Mode
+
+The user mode can open a particular object via syscalls. For example, VM_OPEN
+syscall is used to open a virtual machine object. The COCONUT-SVSM kernel
+provides `obj_open()` function to facilitate opening an object in user mode.
+
+```Rust
+pub fn sys_vm_open(idx: u32) -> Result<u64, i32> {
+ // Get the VmObj
+ let vm_obj = get_vm_obj(idx)?;
+
+ // Open the VmObj to return the object handle id to the user mode.
+ obj_open(vm_obj).map_or(Err(..), |id| Ok(id.into()))
+}
+
+```
+
+```Rust
+/// Opens an object and assigns it a unique identifier.
+///
+/// # Arguments
+///
+/// * `obj` - An `Arc<dyn Obj>` representing the object to be opened.
+///
+/// # Returns
+///
+/// * `Result<u32, SvsmError>` - Returns the unique identifier of the opened
+/// object on success, or an `SvsmError` on failure.
+///
+/// # Errors
+///
+/// This function will return an error if adding the object handle to the
+/// current task fails.
+pub fn obj_open(obj: Arc<dyn Obj>) -> Result<u32, SvsmError> {
+ let id = OBJ_ID_ALLOCATOR.next_id();
+ current_task()
+ .add_obj_handle(Arc::new(ObjHandle::new(id, obj)?))
+ .map(|_| id)
+}
+```
+
+```Rust
+impl Task {
+ ...
+
+ /// Adds an object handle to the current task.
+ ///
+ /// # Arguments
+ ///
+ /// * `obj_handle` - The object handle to be added.
+ ///
+ /// # Returns
+ ///
+ /// * `Result<(), SvsmError>` - Returns `Ok(())` on success, or an `SvsmError` on failure.
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the object handle already exists in the current task.
+ pub fn add_obj_handle(&self, obj_handle: Arc<ObjHandle>) -> Result<(), SvsmError> {
+ if let Entry::Vacant(entry) = self.obj_handles.lock_write().entry(obj_handle.id()) {
+ entry.insert(obj_handle);
+ Ok(())
+ } else {
+ Err(ObjError::Exist.into())
+ }
+ }
+}
+```
+
+The `obj_open()` function takes an `Arc<dyn Obj>` as input, which represents the
+particular object to be opened. After allocating a unique id, it creates a new
+`ObjHandle` (which opens the object) and adds this ObjHandle to the current task
+via the `add_obj_handle()` method. The unique id of the ObjHandle is returned to
+the user mode as the user mode ObjHandle.
+
+# Closing an Object in User Mode
+
+The CLOSE syscall can close an object, taking the object id as the input
+parameter. The COCONUT-SVSM kernel provides the `obj_close()` function to facilitate
+closing an object in the syscall.
+
+```Rust
+pub fn sys_close(obj_id: u32) -> Result<u64, i32> {
+ // Close the object by the object id.
+ let _ = obj_close(obj_id);
+    Ok(0)

Review comment: Why are we not reporting errors to userspace here?
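Following the review question above, a variant that propagates the failure to user mode could look like the sketch below; the stub `obj_close` and the `-1` error code are assumptions for illustration only:

```Rust
// Illustrative stand-ins: the real obj_close and the kernel's error-code
// mapping are not shown in this hunk, so both are assumptions here.
#[derive(Debug, PartialEq)]
enum ObjError {
    NotFound,
}

fn obj_close(obj_id: u32) -> Result<(), ObjError> {
    // Pretend only handle id 1 is currently open.
    if obj_id == 1 {
        Ok(())
    } else {
        Err(ObjError::NotFound)
    }
}

// Variant of sys_close that propagates the failure to user mode instead of
// discarding it.
fn sys_close(obj_id: u32) -> Result<u64, i32> {
    obj_close(obj_id).map(|_| 0).map_err(|_| -1)
}
```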
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 |
@@ -0,0 +1,506 @@
+To achieve the above goals, the `ObjHandle` should be associated with the task
+which creates it. The task structure is extended to hold the ObjHandles created
+by this thread.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Object handles created by this task
+    obj_handles: RWLock<BTreeMap<u32, Arc<ObjHandle>>>,

Review comment: Storing the object id in the map keys and in the map values (inside `ObjHandle`) is redundant. Can we remove it from `ObjHandle` (or just remove `ObjHandle` altogether and just pass around `Arc<Obj>`)?
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 |
@@ -0,0 +1,506 @@
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+triat EventObj {

Review comment:
```suggestion
trait EventObj {
```
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 |
@@ -0,0 +1,408 @@
+# Background
+
+The syscall design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel.
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents a type of resource in the COCONUT-SVSM kernel, such as a
+file, VM, or vCPU, that can be accessed by the user mode. A trait named Obj
+is defined for such resources, which defines the common functionality
+of an object. The Obj trait is defined with the trait bounds Send and Sync,
+which means an object implementing the Obj trait can be sent to another thread
+and shared between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+    /// Convert to a virtual machine object if it is supported.
+ fn as_vm(&self) -> Option<&VmObj> {
+ None
+ }
+
+    /// Convert to a virtual cpu object if it is supported.
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ None
+ }
+
+ /// Get a mappable file handle if the object is mappable.
+ fn mappable(&self) -> Option<&FileHandle> {
+ None
+ }
+
+ /// Convert to an object which implements EventObj trait.
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ None
+ }
+ ...
+}
+```
+
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+trait EventObj {
+ ...
+}
+
+impl EventObj for VcpuObj {
+ ...
+}
+
+impl Obj for VcpuObj {
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ Some(self)
+ }
+
+ fn mappable(&self) -> Option<&FileHandle> {
+ Some(&self.run_page_file_handle)
+ }
+
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ Some(self)
+ }
+ ...
+}
+
+```
+
+Objects without special requirements can fall back to the default implementation
+in the Obj trait, which returns None.
+
+## Object Handle
+
+When the user mode is trying to open a particular kernel resource via the
+syscalls, the COCONUT-SVSM kernel creates a corresponding object which
+implements Obj trait, and allocates a unique id for that object. The unique id
+will be returned to the user mode as the `ObjHandle`, and the subsequent
+syscalls can use this id to access this object.
+
+### User Mode Object Handle
+
+The object is exposed to the user mode via the object open related syscalls by
+returning the id of the object created by the COCONUT-SVSM kernel. The user mode
+can make use this id to access the corresponding object via other syscalls. From
+the user mode's point of view, the object handle is defined as below:
+
+```Rust
+/// User mode object handle received from syscalls.
+struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the id returned from a syscall.
+ fn new(raw: u32) -> Self {
+ Self(raw)
+ }
+
+ /// Get the raw object handle id, which can be used as the input of the syscalls.
+ fn raw(&self) -> u32 {
+ self.0
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+ // Close the object when drop the ObjHandle.
+ unsafe { syscall1(SYS_CLOSE, self.raw().into()) };
+ }
+}
+
+```
+
+The user mode `ObjHandle` doesn't implement the Copy/Clone traits, and dropping
+the `ObjHandle` automatically closes the underlying object via the syscall.
+
+For a syscall class which is associated with a particular object handle type,
+e.g. VM object handles for VMM subsystem syscalls or VCPU object handles for
+VCPU subsystem syscalls, a dedicated wrapper type can be defined as:
+
+```Rust
+struct VmObjHandle(ObjHandle);
+```
+
+```Rust
+struct VcpuObjHandle(ObjHandle);
+```
+
+# Object Management in COCONUT-SVSM Kernel
+
+To facilitate the user mode's use of objects, the COCONUT-SVSM kernel should:
+
+- Manage the object's lifecycle properly. The underlying object should be
+ dropped when the user mode closes the object handle via syscalls, or the user
+ mode is terminated without closing.
+
+- Prevent one user mode process from misusing an object handle opened by
+  another, while object handles remain shared among the threads within the
+  same process.
+
+To achieve the above goals, the opened object should be associated with the
+process which creates it. The task structure is extended to hold the created
+objects.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Objects created by this task
+    objs: Arc<RWLock<BTreeMap<u32, Arc<dyn Obj>>>>,

Review comment: This comment is no longer accurate.
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 |
@@ -0,0 +1,408 @@
+To achieve the above goals, the opened object should be associated with the
+process which creates it. The task structure is extended to hold the created
+objects.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Objects created by this task
+ objs: Arc<RWLock<BTreeMap<u32, Arc<dyn Obj>>>>,
+}
+```
+
+The `objs` field is a BTreeMap with the object handle id as the key and the
+`Arc<dyn Obj>` as the value. It is wrapped in an Arc and protected by a RWLock
+so that it can be shared among the threads within the same process.
+
+The task structure will provide 3 new functions:
+
+- `add_obj(&self, id: u32, obj: Arc<dyn Obj>) -> Result<(), SvsmError>;` This is
+ to add the object to the BTreeMap with the object handle id as the key. The
+ syscalls which open an object will add Arc<dyn Obj> to the current task.
+
+- `remove_obj(&self, id: u32) -> Result<Arc<dyn Obj>, SvsmError>;` This is to
+ remove the object from the BTreeMap. The CLOSE syscall will remove the
+ corresponding object from BTreeMap and drop it.
+
+- `get_obj(&self, id: u32) -> Result<Arc<dyn Obj>, SvsmError>;` This is to get
+ the object from the BTreeMap, which will increase the reference counter. The
+ syscalls which access an object will get the corresponding object from the
+ current task.
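As a standalone sketch of the three methods under this revised design, with std's `RwLock` and a local `ObjError` standing in for the kernel's `RWLock` and `SvsmError` (the bodies are assumptions, not the actual implementation):

```Rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

// Stand-ins for the kernel types described in this document.
trait Obj: Send + Sync {}

#[derive(Debug, PartialEq)]
enum ObjError {
    Exist,
    NotFound,
}

struct Task {
    objs: Arc<RwLock<BTreeMap<u32, Arc<dyn Obj>>>>,
}

impl Task {
    // Insert the object keyed by id; fail if the id is already in use.
    fn add_obj(&self, id: u32, obj: Arc<dyn Obj>) -> Result<(), ObjError> {
        let mut objs = self.objs.write().unwrap();
        if objs.contains_key(&id) {
            return Err(ObjError::Exist);
        }
        objs.insert(id, obj);
        Ok(())
    }

    // Remove the entry; dropping the returned Arc may drop the object.
    fn remove_obj(&self, id: u32) -> Result<Arc<dyn Obj>, ObjError> {
        self.objs.write().unwrap().remove(&id).ok_or(ObjError::NotFound)
    }

    // Cloning the Arc increases the reference counter for the caller.
    fn get_obj(&self, id: u32) -> Result<Arc<dyn Obj>, ObjError> {
        self.objs.read().unwrap().get(&id).cloned().ok_or(ObjError::NotFound)
    }
}
```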
+
+When a task is terminated while it still has open objects, these objects are
+dropped automatically when `objs` is dropped, provided `objs` held the last
+reference to them.
+
+# Opening an Object in User Mode
+
+The user mode can open a particular object via syscalls. For example, VM_OPEN
+syscall is used to open a virtual machine object. The COCONUT-SVSM kernel
+provides `obj_open()` function to facilitate opening an object in user mode.
+
+```Rust
+pub fn sys_vm_open(idx: u32) -> Result<u64, i32> {
+ // Get the VmObj
+ let vm_obj = get_vm_obj(idx)?;
+
+ // Open the VmObj to return the object handle id to the user mode.
+ obj_open(vm_obj).map_or(Err(..), |id| Ok(id.into()))
+}
+
+```
+
+```Rust
+/// Opens an object and assigns it a unique identifier.
+///
+/// # Arguments
+///
+/// * `obj` - An Arc<dyn Obj> representing the object to be opened.
+///
+/// # Returns
+///
+/// * `Result<u32, SvsmError>` - Returns the unique identifier of the opened
+/// object on success, or an `SvsmError` on failure.
+///
+/// # Errors
+///
+/// This function will return an error if adding the object to the
+/// current task fails.
+pub fn obj_open(obj: Arc<dyn Obj>) -> Result<u32, SvsmError> {
+ let id = OBJ_ID_ALLOCATOR.next_id();
+ current_task().add_obj(id, obj)?;
+ Ok(id)
+}
+```
+
+```Rust
+impl Task {
+ ...
+
+ /// Adds an object to the current task.
+ ///
+ /// # Arguments
+ ///
+ /// * `id` - The id of the object to be added.
+ /// * `obj` - The object to be added.
+ ///
+ /// # Returns
+ ///
+ /// * `Result<(), SvsmError>` - Returns `Ok(())` on success, or an `SvsmError` on failure.
+ ///
+ /// # Errors
+ ///
+ /// This function will return an error if the object handle id already exists in the current task.
+ pub fn add_obj(&self, id: u32, obj: Arc<dyn Obj>) -> Result<(), SvsmError> {
+ if let Entry::Vacant(entry) = self.objs.lock_write().entry(id) {
+ entry.insert(obj);
+ Ok(())
+ } else {
+ Err(ObjError::Exist.into())
+ }
+    }

Review comment: Can we move the allocation of the object id into this method? Now that we no longer need to maintain a global id, we can just use any available id in the map. Allocating the id in `add_obj` has the advantage that `add_obj` can no longer fail because the id is already in use.
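A sketch of the reviewer's suggestion above: allocating the lowest free id inside `add_obj` itself, so insertion cannot fail with an already-used id. The map's value type is a placeholder standing in for `Arc<dyn Obj>`:

```Rust
use std::collections::BTreeMap;
use std::sync::RwLock;

struct Task {
    objs: RwLock<BTreeMap<u32, String>>, // String is a stand-in value type
}

impl Task {
    // Pick the lowest unused id and insert the object under it.
    fn add_obj(&self, obj: String) -> u32 {
        let mut objs = self.objs.write().unwrap();
        // Keys iterate in ascending order, so the first index that does not
        // match its key marks the lowest gap; if there is no gap, use len().
        let id = objs
            .keys()
            .copied()
            .enumerate()
            .find(|&(i, key)| i as u32 != key)
            .map(|(i, _)| i as u32)
            .unwrap_or(objs.len() as u32);
        objs.insert(id, obj);
        id
    }
}
```

The linear scan is simple; a kernel implementation might prefer a dedicated per-task id allocator if the tables grow large.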
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,408 @@
+# Background
+
+The syscalls design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents the type of resource like file, VM, vCPU in the
+COCONUT-SVSM kernel, that can be accessible by the user mode. A trait named Obj
+is defined for such type of resource, which defines the common functionalities
+of the object. The Obj trait is defined with the trait bounds of Send and Sync,
+which means the object implementing Obj trait could be sent to another thread
+and shared between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+ /// Convert to a virtual machine object if is supported.
+ fn as_vm(&self) -> Option<&VmObj> {
+ None
+ }
+
+ /// Convert to a virtual cpu object if is supported.
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ None
+ }
+
+ /// Get a mappable file handle if the object is mappable.
+ fn mappable(&self) -> Option<&FileHandle> {
+ None
+ }
+
+ /// Convert to an object which implements EventObj trait.
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ None
+ }
+ ...
+}
+```
+
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+trait EventObj {
+ ...
+}
+
+impl EventObj for VcpuObj {
+ ...
+}
+
+impl Obj for VcpuObj {
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ Some(self)
+ }
+
+ fn mappable(&self) -> Option<&FileHandle> {
+ Some(&self.run_page_file_handle)
+ }
+
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ Some(self)
+ }
+ ...
+}
+
+```
+
+Objects without special requirements can fall back to the default
+implementations in the Obj trait, which return None.
+
+## Object Handle
+
+When the user mode is trying to open a particular kernel resource via the
+syscalls, the COCONUT-SVSM kernel creates a corresponding object which
+implements the Obj trait, and allocates a unique id for that object. The unique id
+will be returned to the user mode as the `ObjHandle`, and the subsequent
+syscalls can use this id to access this object.
+
+### User Mode Object Handle
+
+The object is exposed to the user mode via the object-opening syscalls, which
+return the id of the object created by the COCONUT-SVSM kernel. The user mode
+can make use of this id to access the corresponding object via other syscalls.
+From the user mode's point of view, the object handle is defined as below:
+
+```Rust
+/// User mode object handle received from syscalls.
+struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the id returned from a syscall.
+ fn new(raw: u32) -> Self {
+ Self(raw)
+ }
+
+ /// Get the raw object handle id, which can be used as the input of the syscalls.
+ fn raw(&self) -> u32 {
+ self.0
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+        // Close the object when dropping the ObjHandle.
+ unsafe { syscall1(SYS_CLOSE, self.raw().into()) };
+ }
+}
+
+```
+
+The user mode `ObjHandle` doesn't implement Copy/Clone trait, and dropping
+`ObjHandle` will automatically close the underlying object via the syscall.
+
+A dedicated handle type for a syscall class which is associated with a
+particular kind of object, e.g. VM object handles for VMM subsystem syscalls
+or vCPU object handles for vCPU subsystem syscalls, can be defined as:
+
+```Rust
+struct VmObjHandle(ObjHandle);
+```
+
+```Rust
+struct VcpuObjHandle(ObjHandle);
+```
+
+# Object Management in COCONUT-SVSM Kernel
+
+To facilitate user-mode use of objects, the COCONUT-SVSM kernel should:
+
+- Manage the object's lifecycle properly. The underlying object should be
+ dropped when the user mode closes the object handle via syscalls, or the user
+ mode is terminated without closing.
+
+- Prevent one user-mode process from misusing an object handle opened by
+  another, while still sharing object handles among the threads within the
+  same process.
+
+To achieve the above goals, the opened object should be associated with the
+process which creates it. The task structure is extended to hold the created
+objects.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Objects created by this task
+ objs: Arc<RWLock<BTreeMap<u32, Arc<dyn Obj>>>>,
+}
+```
+
+`objs` is a BTreeMap with the object handle id as the key and an Arc<dyn Obj>
+as the value. It is wrapped in an Arc and protected by a RWLock so that it can
+be shared among the threads within the same process.
+
+The task structure will provide 3 new functions:
+
+- `add_obj(&self, id: u32, obj: Arc<dyn Obj>) -> Result<(), SvsmError>;` This is
+ to add the object to the BTreeMap with the object handle id as the key. The
+ syscalls which open an object will add Arc<dyn Obj> to the current task.
+
+- `remove_obj(&self, id: u32) -> Result<Arc<dyn Obj>, SvsmError>;` This is to | Can we also add a newtype `pub struct ObjectHandle(u32);` (without a `Drop` impl) in the kernel and use it here? Personally, I'd prefer this to passing around untyped integers and I expect us to use this type a lot when implementing syscalls. |
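The kernel-side newtype the review asks for — a typed wrapper around the raw id, deliberately without a `Drop` impl — can be sketched as below (the name, derives, and conversions are illustrative):

```rust
/// Kernel-side object handle: a typed wrapper around the raw u32 id.
/// There is intentionally no `Drop` impl here; closing an object is an
/// explicit syscall, not a side effect of the kernel dropping the value.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct ObjHandle(u32);

impl From<u32> for ObjHandle {
    fn from(id: u32) -> Self {
        Self(id)
    }
}

impl From<ObjHandle> for u32 {
    fn from(handle: ObjHandle) -> Self {
        handle.0
    }
}
```

Because the wrapper is `Copy`, it is as cheap to pass around as the raw integer while keeping syscall signatures typed.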
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,427 @@
+# Background
+
+The syscall design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel.
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents a type of resource in the COCONUT-SVSM kernel, such as a
+file, VM, or vCPU, that can be accessed by user mode. A trait named Obj is
+defined for such resources, and it defines the common functionality of an
+object. The Obj trait has the trait bounds Send and Sync, which means an
+object implementing the Obj trait can be sent to another thread and shared
+between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+    /// Convert to a virtual machine object if it is supported.
+ fn as_vm(&self) -> Option<&VmObj> {
+ None
+ }
+
+    /// Convert to a virtual cpu object if it is supported.
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ None
+ }
+
+ /// Get a mappable file handle if the object is mappable.
+ fn mappable(&self) -> Option<&FileHandle> {
+ None
+ }
+
+ /// Convert to an object which implements EventObj trait.
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ None
+ }
+ ...
+}
+```
+
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+trait EventObj {
+ ...
+}
+
+impl EventObj for VcpuObj {
+ ...
+}
+
+impl Obj for VcpuObj {
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ Some(self)
+ }
+
+ fn mappable(&self) -> Option<&FileHandle> {
+ Some(&self.run_page_file_handle)
+ }
+
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ Some(self)
+ }
+ ...
+}
+
+```
+
+Objects without special requirements can fall back to the default
+implementations in the Obj trait, which return None.
+
+## Object Handle
+
+When the user mode is trying to open a particular kernel resource via the
+syscalls, the COCONUT-SVSM kernel creates a corresponding object which
+implements the Obj trait, and allocates an object handle with a unique id for that
+object. The object handle is defined as below:
+
+```Rust
+#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
+pub struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the allocated id
+ pub fn new(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<u32> for ObjHandle {
+ #[inline]
+ fn from(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<ObjHandle> for u32 {
+ #[inline]
+ fn from(obj_handle: ObjHandle) -> Self {
+ obj_handle.0
+ }
+}
+```
+
+An `ObjHandle` can be converted to a `u32` id which is returned to the user
+mode, and the subsequent syscalls use this id to access this object. The id
+passed in from the syscalls can be converted back to an `ObjHandle`, which is
+used to access the object in the COCONUT-SVSM kernel.
+
+### User Mode Object Handle
+
+The object is exposed to the user mode via the object-opening syscalls, which
+return the id of the object created by the COCONUT-SVSM kernel. The user mode
+can make use of this id to access the corresponding object via other syscalls.
+From the user mode's point of view, the object handle is defined as below:
+
+```Rust
+/// User mode object handle received from syscalls.
+pub struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the id returned from a syscall.
+ pub(crate) fn new(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<&ObjHandle> for u32 {
+ #[inline]
+ fn from(obj_handle: &ObjHandle) -> Self {
+ obj_handle.0
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+        // Close the object when dropping the ObjHandle.
+ unsafe { syscall1(SYS_CLOSE, self.0.into()) };
+ }
+}
+
+```
+
+The user mode `ObjHandle` doesn't implement Copy/Clone trait, and dropping
+`ObjHandle` will automatically close the underlying object via the syscall.
+
+A dedicated handle type for a syscall class which is associated with a
+particular kind of object, e.g. VM object handles for VMM subsystem syscalls
+or vCPU object handles for vCPU subsystem syscalls, can be defined as:
+
+```Rust
+pub struct VmObjHandle(ObjHandle);
+```
+
+```Rust
+pub struct VcpuObjHandle(ObjHandle);
+```
+
+# Object Management in COCONUT-SVSM Kernel
+
+To facilitate user-mode use of objects, the COCONUT-SVSM kernel should:
+
+- Manage the object's lifecycle properly. The underlying object should be
+ dropped when the user mode closes the object handle via syscalls, or the user
+ mode is terminated without closing.
+
+- Prevent one user-mode process from misusing an object handle opened by
+  another, while still sharing object handles among the threads within the
+  same process.
+
+To achieve the above goals, the opened object should be associated with the
+process which creates it. The task structure is extended to hold the created
+objects.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Objects shared among threads within the same process
+ objs: Arc<RWLock<BTreeMap<ObjHandle, Arc<dyn Obj>>>>,
+}
+```
+
+`objs` is a BTreeMap with the object handle id as the key and an Arc<dyn Obj>
+as the value. It is wrapped in an Arc and protected by a RWLock so that it can
+be shared among the threads within the same process.
+
+The task structure will provide 3 new functions:
+
+- `add_obj(&self, obj: Arc<dyn Obj>) -> ObjHandle;` It allocates a unique
+  `ObjHandle`, local to the process, for the object. The object is added to
+  the BTreeMap with the `ObjHandle` as the key. This method will be used by the
+  syscalls which open an object.
+
+- `remove_obj(&self, id: &ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It
+ removes the object from the BTreeMap. This method will be used by the CLOSE
+ syscall to remove the corresponding object from process and drop it.
+
+- `get_obj(&self, id: &ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It gets | ```suggestion
- `remove_obj(&self, id: ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It
removes the object from the BTreeMap. This method will be used by the CLOSE
syscall to remove the corresponding object from process and drop it.
- `get_obj(&self, id: ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It gets
```
`ObjHandle` is `Copy`, so we don't need to pass a reference. |
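Since the handle is `Copy`, the suggested by-value signatures cost nothing and the caller keeps the handle usable after the call. A small stand-alone sketch (simplified types and illustrative names, not the kernel API):

```rust
use std::collections::BTreeMap;

#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct ObjHandle(u32);

// By-value parameter, as the suggestion proposes: no borrow is needed
// because `ObjHandle` is `Copy`.
fn get_obj(objs: &BTreeMap<ObjHandle, &'static str>, id: ObjHandle) -> Option<&'static str> {
    objs.get(&id).copied()
}
```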
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,427 @@
+# Background
+
+The syscall design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel.
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents a type of resource in the COCONUT-SVSM kernel, such as a
+file, VM, or vCPU, that can be accessed by user mode. A trait named Obj is
+defined for such resources, and it defines the common functionality of an
+object. The Obj trait has the trait bounds Send and Sync, which means an
+object implementing the Obj trait can be sent to another thread and shared
+between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+    /// Convert to a virtual machine object if it is supported.
+ fn as_vm(&self) -> Option<&VmObj> {
+ None
+ }
+
+    /// Convert to a virtual cpu object if it is supported.
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ None
+ }
+
+ /// Get a mappable file handle if the object is mappable.
+ fn mappable(&self) -> Option<&FileHandle> {
+ None
+ }
+
+ /// Convert to an object which implements EventObj trait.
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ None
+ }
+ ...
+}
+```
+
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+trait EventObj {
+ ...
+}
+
+impl EventObj for VcpuObj {
+ ...
+}
+
+impl Obj for VcpuObj {
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ Some(self)
+ }
+
+ fn mappable(&self) -> Option<&FileHandle> {
+ Some(&self.run_page_file_handle)
+ }
+
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ Some(self)
+ }
+ ...
+}
+
+```
+
+Objects without special requirements can fall back to the default
+implementations in the Obj trait, which return None.
+
+## Object Handle
+
+When the user mode is trying to open a particular kernel resource via the
+syscalls, the COCONUT-SVSM kernel creates a corresponding object which
+implements the Obj trait, and allocates an object handle with a unique id for that
+object. The object handle is defined as below:
+
+```Rust
+#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
+pub struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the allocated id
+ pub fn new(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<u32> for ObjHandle {
+ #[inline]
+ fn from(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<ObjHandle> for u32 {
+ #[inline]
+ fn from(obj_handle: ObjHandle) -> Self {
+ obj_handle.0
+ }
+}
+```
+
+An `ObjHandle` can be converted to a `u32` id which is returned to the user
+mode, and the subsequent syscalls use this id to access this object. The id
+passed in from the syscalls can be converted back to an `ObjHandle`, which is
+used to access the object in the COCONUT-SVSM kernel.
+
+### User Mode Object Handle
+
+The object is exposed to the user mode via the object-opening syscalls, which
+return the id of the object created by the COCONUT-SVSM kernel. The user mode
+can make use of this id to access the corresponding object via other syscalls.
+From the user mode's point of view, the object handle is defined as below:
+
+```Rust
+/// User mode object handle received from syscalls.
+pub struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the id returned from a syscall.
+ pub(crate) fn new(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<&ObjHandle> for u32 {
+ #[inline]
+ fn from(obj_handle: &ObjHandle) -> Self {
+ obj_handle.0
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+        // Close the object when dropping the ObjHandle.
+ unsafe { syscall1(SYS_CLOSE, self.0.into()) };
+ }
+}
+
+```
+
+The user mode `ObjHandle` doesn't implement Copy/Clone trait, and dropping
+`ObjHandle` will automatically close the underlying object via the syscall.
+
+A dedicated handle type for a syscall class which is associated with a
+particular kind of object, e.g. VM object handles for VMM subsystem syscalls
+or vCPU object handles for vCPU subsystem syscalls, can be defined as:
+
+```Rust
+pub struct VmObjHandle(ObjHandle);
+```
+
+```Rust
+pub struct VcpuObjHandle(ObjHandle);
+```
+
+# Object Management in COCONUT-SVSM Kernel
+
+To facilitate user-mode use of objects, the COCONUT-SVSM kernel should:
+
+- Manage the object's lifecycle properly. The underlying object should be
+ dropped when the user mode closes the object handle via syscalls, or the user
+ mode is terminated without closing.
+
+- Prevent one user-mode process from misusing an object handle opened by
+  another, while still sharing object handles among the threads within the
+  same process.
+
+To achieve the above goals, the opened object should be associated with the
+process which creates it. The task structure is extended to hold the created
+objects.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Objects shared among threads within the same process
+ objs: Arc<RWLock<BTreeMap<ObjHandle, Arc<dyn Obj>>>>,
+}
+```
+
+`objs` is a BTreeMap with the object handle id as the key and an Arc<dyn Obj>
+as the value. It is wrapped in an Arc and protected by a RWLock so that it can
+be shared among the threads within the same process.
+
+The task structure will provide 3 new functions:
+
+- `add_obj(&self, obj: Arc<dyn Obj>) -> ObjHandle;` It allocates a unique
+  `ObjHandle`, local to the process, for the object. The object is added to
+  the BTreeMap with the `ObjHandle` as the key. This method will be used by the
+  syscalls which open an object.
+
+- `remove_obj(&self, id: &ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It
+ removes the object from the BTreeMap. This method will be used by the CLOSE
+ syscall to remove the corresponding object from process and drop it.
+
+- `get_obj(&self, id: &ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It gets
+ the object from the BTreeMap, which increases the reference counter. This
+ method will be used by the syscalls which access an object.
+
+When a task is terminated while it still has opened objects, these objects will
+be dropped automatically when `objs` is dropped, provided `objs` held the last
+reference to the objects.
+
+# Opening an Object in User Mode
+
+The user mode can open a particular object via syscalls. For example, the
+VM_OPEN syscall is used to open a virtual machine object. The COCONUT-SVSM
+kernel provides the `obj_open()` function to facilitate opening an object.
+
+```Rust
+pub fn sys_vm_open(idx: u32) -> Result<u64, i32> {
+ // Get the VmObj
+ let vm_obj = get_vm_obj(idx)?;
+
+ // Open the VmObj to return the object handle id to the user mode.
+ Ok(u32::from(obj_open(vm_obj)).into())
+}
+
+```
+
+```Rust
+/// Opens an object and assigns it a unique identifier.
+///
+/// # Arguments
+///
+/// * `obj` - An Arc<dyn Obj> representing the object to be opened.
+///
+/// # Returns
+///
+/// * `ObjHandle` - Returns the object handle of the opened object.
+pub fn obj_open(obj: Arc<dyn Obj>) -> ObjHandle {
+ current_task().add_obj(obj)
+}
+```
+
+```Rust
+impl Task {
+ ...
+
+ /// Adds an object to the current task.
+ ///
+ /// # Arguments
+ ///
+ /// * `obj` - The object to be added.
+ ///
+ /// # Returns
+ ///
+ /// * `ObjHandle` - Returns the object handle for the object to be added.
+ pub fn add_obj(&self, obj: Arc<dyn Obj>) -> ObjHandle {
+ let mut objs = self.objs.lock_write();
+ let id = ObjHandle::new(objs.len() as u32); | There may be an object with id `objs.len()`. Consider the following sequence:
1. An object is added -> `len()` is 0, so id is 0
2. Another object is added -> `len()` is 1, id is 1
3. The object with id 0 is removed.
4. Another object is added -> `len()` is 1, so id is 1
In step 4 a second object with id 1 was created.
|
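The collision sequence described in the review can be reproduced directly with a `BTreeMap` (a stand-alone sketch; string values stand in for `Arc<dyn Obj>`), alongside a collision-free alternative:

```rust
use std::collections::BTreeMap;

// The problematic scheme: use the current map size as the next id.
fn len_based_id(objs: &BTreeMap<u32, &'static str>) -> u32 {
    objs.len() as u32
}

// A collision-free alternative: the smallest id not currently in use.
fn smallest_free_id(objs: &BTreeMap<u32, &'static str>) -> u32 {
    (0u32..).find(|id| !objs.contains_key(id)).unwrap()
}
```

After adding ids 0 and 1 and then removing 0, `len()` is 1 even though id 1 is still taken, while the smallest-free-id scheme hands out the freed id 0.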
svsm | github_2023 | others | 453 | coconut-svsm | Freax13 | @@ -0,0 +1,427 @@
+# Background
+
+The syscall design philosophy is to provide a unified interface for the user to
+access system resources, which are represented by object handles. This makes
+objects a fundamental concept in the COCONUT-SVSM kernel.
+
+This document describes the object and object handle from both the user mode's
+and the COCONUT-SVSM kernel's point of view.
+
+# Key Data Structures
+
+## Object
+
+An object represents a type of resource in the COCONUT-SVSM kernel, such as a
+file, VM, or vCPU, that can be accessed by user mode. A trait named Obj is
+defined for such resources, and it defines the common functionality of an
+object. The Obj trait has the trait bounds Send and Sync, which means an
+object implementing the Obj trait can be sent to another thread and shared
+between threads safely.
+
+```Rust
+trait Obj: Send + Sync {
+    /// Convert to a virtual machine object if it is supported.
+ fn as_vm(&self) -> Option<&VmObj> {
+ None
+ }
+
+    /// Convert to a virtual cpu object if it is supported.
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ None
+ }
+
+ /// Get a mappable file handle if the object is mappable.
+ fn mappable(&self) -> Option<&FileHandle> {
+ None
+ }
+
+ /// Convert to an object which implements EventObj trait.
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ None
+ }
+ ...
+}
+```
+
+In order for the user mode to access the resources in the COCONUT-SVSM kernel,
+the kernel needs to implement the `Obj` trait to represent the resources as
+objects. Some objects may need to implement multiple methods in the Obj trait
+for multiple purposes. For example, a vcpu object needs to implement `as_vcpu()`
+to represent it as a vcpu object, `mappable()` to provide a file handle for its
+user-mode-mappable backing 4k page, and `as_event()` to represent it as an event
+object which can be used by the WAIT_FOR_EVENT syscall.
+
+```Rust
+struct VcpuObj {
+ id: VmId,
+ ...
+}
+
+/// The trait for the object which can be used as an event.
+trait EventObj {
+ ...
+}
+
+impl EventObj for VcpuObj {
+ ...
+}
+
+impl Obj for VcpuObj {
+ fn as_vcpu(&self) -> Option<&VcpuObj> {
+ Some(self)
+ }
+
+ fn mappable(&self) -> Option<&FileHandle> {
+ Some(&self.run_page_file_handle)
+ }
+
+ fn as_event(&self) -> Option<&dyn EventObj> {
+ Some(self)
+ }
+ ...
+}
+
+```
+
+Objects without special requirements can fall back to the default
+implementations in the Obj trait, which return None.
+
+## Object Handle
+
+When the user mode is trying to open a particular kernel resource via the
+syscalls, the COCONUT-SVSM kernel creates a corresponding object which
+implements the Obj trait, and allocates an object handle with a unique id for that
+object. The object handle is defined as below:
+
+```Rust
+#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
+pub struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the allocated id
+ pub fn new(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<u32> for ObjHandle {
+ #[inline]
+ fn from(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<ObjHandle> for u32 {
+ #[inline]
+ fn from(obj_handle: ObjHandle) -> Self {
+ obj_handle.0
+ }
+}
+```
+
+An `ObjHandle` can be converted to a `u32` id which is returned to the user
+mode, and the subsequent syscalls use this id to access this object. The id
+passed in from the syscalls can be converted back to an `ObjHandle`, which is
+used to access the object in the COCONUT-SVSM kernel.
+
+### User Mode Object Handle
+
+The object is exposed to the user mode via the object-opening syscalls, which
+return the id of the object created by the COCONUT-SVSM kernel. The user mode
+can make use of this id to access the corresponding object via other syscalls.
+From the user mode's point of view, the object handle is defined as below:
+
+```Rust
+/// User mode object handle received from syscalls.
+pub struct ObjHandle(u32);
+
+impl ObjHandle {
+ /// Create a new object handle with the id returned from a syscall.
+ pub(crate) fn new(id: u32) -> Self {
+ Self(id)
+ }
+}
+
+impl From<&ObjHandle> for u32 {
+ #[inline]
+ fn from(obj_handle: &ObjHandle) -> Self {
+ obj_handle.0
+ }
+}
+
+impl Drop for ObjHandle {
+ fn drop(&mut self) {
+        // Close the object when dropping the ObjHandle.
+ unsafe { syscall1(SYS_CLOSE, self.0.into()) };
+ }
+}
+
+```
+
+The user mode `ObjHandle` doesn't implement Copy/Clone trait, and dropping
+`ObjHandle` will automatically close the underlying object via the syscall.
+
+A dedicated handle type for a syscall class which is associated with a
+particular kind of object, e.g. VM object handles for VMM subsystem syscalls
+or vCPU object handles for vCPU subsystem syscalls, can be defined as:
+
+```Rust
+pub struct VmObjHandle(ObjHandle);
+```
+
+```Rust
+pub struct VcpuObjHandle(ObjHandle);
+```
+
+# Object Management in COCONUT-SVSM Kernel
+
+To facilitate user-mode use of objects, the COCONUT-SVSM kernel should:
+
+- Manage the object's lifecycle properly. The underlying object should be
+ dropped when the user mode closes the object handle via syscalls, or the user
+ mode is terminated without closing.
+
+- Prevent one user-mode process from misusing an object handle opened by
+  another, while still sharing object handles among the threads within the
+  same process.
+
+To achieve the above goals, the opened object should be associated with the
+process which creates it. The task structure is extended to hold the created
+objects.
+
+```Rust
+pub struct Task {
+ ...
+
+ /// Objects shared among threads within the same process
+ objs: Arc<RWLock<BTreeMap<ObjHandle, Arc<dyn Obj>>>>,
+}
+```
+
+`objs` is a BTreeMap with the object handle id as the key and an Arc<dyn Obj>
+as the value. It is wrapped in an Arc and protected by a RWLock so that it can
+be shared among the threads within the same process.
+
+The task structure will provide 3 new functions:
+
+- `add_obj(&self, obj: Arc<dyn Obj>) -> ObjHandle;` It allocates a unique
+  `ObjHandle`, local to the process, for the object. The object is added to
+  the BTreeMap with the `ObjHandle` as the key. This method will be used by the
+  syscalls which open an object.
+
+- `remove_obj(&self, id: &ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It
+ removes the object from the BTreeMap. This method will be used by the CLOSE
+ syscall to remove the corresponding object from process and drop it.
+
+- `get_obj(&self, id: &ObjHandle) -> Result<Arc<dyn Obj>, SvsmError>;` It gets
+ the object from the BTreeMap, which increases the reference counter. This
+ method will be used by the syscalls which access an object.
+
+When a task is terminated while it still has opened objects, these objects will
+be dropped automatically when `objs` is dropped, provided `objs` held the last
+reference to the objects.
+
+# Opening an Object in User Mode
+
+The user mode can open a particular object via syscalls. For example, the
+VM_OPEN syscall is used to open a virtual machine object. The COCONUT-SVSM
+kernel provides the `obj_open()` function to facilitate opening an object.
+
+```Rust
+pub fn sys_vm_open(idx: u32) -> Result<u64, i32> {
+ // Get the VmObj
+ let vm_obj = get_vm_obj(idx)?;
+
+ // Open the VmObj to return the object handle id to the user mode.
+ Ok(u32::from(obj_open(vm_obj)).into())
+}
+
+```
+
+```Rust
+/// Opens an object and assigns it a unique identifier.
+///
+/// # Arguments
+///
+/// * `obj` - An Arc<dyn Obj> representing the object to be opened.
+///
+/// # Returns
+///
+/// * `ObjHandle` - Returns the object handle of the opened object.
+pub fn obj_open(obj: Arc<dyn Obj>) -> ObjHandle {
+ current_task().add_obj(obj) | Let's rename this to `obj_add`. We no longer call any `.open()` methods on the object, so the name doesn't quite fit anymore. |
svsm | github_2023 | others | 450 | coconut-svsm | 00xc | @@ -932,47 +933,89 @@ impl PageRef {
pub fn try_copy_page(&self) -> Result<Self, SvsmError> {
let virt_addr = allocate_file_page()?;
+
+ let src = self.virt_addr.bits();
+ let dst = virt_addr.bits();
+ let size = PAGE_SIZE;
unsafe {
- let src = self.virt_addr.as_ptr::<[u8; PAGE_SIZE]>();
- let dst = virt_addr.as_mut_ptr::<[u8; PAGE_SIZE]>();
- ptr::copy_nonoverlapping(src, dst, 1);
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
}
+
Ok(PageRef {
virt_addr,
phys_addr: virt_to_phys(virt_addr),
})
}
- pub fn write(&mut self, offset: usize, buf: &[u8]) {
+ pub fn write(&self, offset: usize, buf: &[u8]) {
assert!(offset.checked_add(buf.len()).unwrap() <= PAGE_SIZE);
- self.as_mut()[offset..][..buf.len()].copy_from_slice(buf);
+ let src = buf.as_ptr() as usize;
+ let dst = self.virt_addr.bits() + offset;
+ let size = buf.len();
+ unsafe {
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
+ }
}
pub fn read(&self, offset: usize, buf: &mut [u8]) {
assert!(offset.checked_add(buf.len()).unwrap() <= PAGE_SIZE);
- buf.copy_from_slice(&self.as_ref()[offset..][..buf.len()]);
+ let src = self.virt_addr.bits() + offset;
+ let dst = buf.as_mut_ptr() as usize;
+ let size = buf.len();
+ unsafe {
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
+ }
}
- pub fn fill(&mut self, offset: usize, value: u8) {
- self.as_mut()[offset..].fill(value);
+ pub fn fill(&self, offset: usize, value: u8) {
+ let dst = self.virt_addr.bits() + offset;
+ let size = PAGE_SIZE.checked_sub(offset).unwrap();
+
+ unsafe {
+ // SAFETY: `dst` is valid.
+ rep_stosb(dst, size, value);
+ }
}
}
-impl AsRef<[u8; PAGE_SIZE]> for PageRef {
- /// Returns a reference to the underlying array representing the memory page.
- fn as_ref(&self) -> &[u8; PAGE_SIZE] {
- let ptr = self.virt_addr.as_ptr::<[u8; PAGE_SIZE]>();
- unsafe { ptr.as_ref().unwrap() }
+/// Copy `size` bytes from `src` to `dst`.
+///
+/// # Safety
+///
+/// This function has all the safety requirements of `core::ptr::copy` except
+/// that data races (both on `src` and `dst`) are explicitly permitted.
+#[inline(always)]
+unsafe fn rep_movs(src: usize, dst: usize, size: usize) {
+ unsafe {
+ asm!("rep movsb", | Shouldn't we do a `cld` beforehand, since we do not know the status of the direction flag? We have the same in [`do_movsb()`](https://github.com/coconut-svsm/svsm/blob/6d8a3e90944e6fb3727fcb17fc1b21693bfe76c0/kernel/src/mm/guestmem.rs#L165). Same comment about `rep_movsb()` a few lines below. |
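The reviewer's point — the copy direction depends on the direction flag, which the snippet does not establish — can be addressed by issuing `cld` before the string move. A stand-alone x86-64 sketch of the pattern (this is not the kernel's helper; it only illustrates the `cld` placement, and `preserves_flags` is deliberately omitted because `cld` writes DF):

```rust
use std::arch::asm;

/// Copy `len` bytes forward with `rep movsb`, clearing the direction
/// flag first so the copy direction does not depend on prior state.
///
/// # Safety
/// `src` and `dst` must each be valid for `len` bytes, and they must not
/// overlap in a way that a forward byte-by-byte copy would corrupt.
#[cfg(target_arch = "x86_64")]
unsafe fn rep_movsb_forward(src: *const u8, dst: *mut u8, len: usize) {
    asm!(
        "cld",
        "rep movsb",
        inout("rsi") src => _,
        inout("rdi") dst => _,
        inout("rcx") len => _,
        options(nostack),
    );
}
```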
svsm | github_2023 | others | 450 | coconut-svsm | joergroedel | @@ -932,47 +933,89 @@ impl PageRef {
pub fn try_copy_page(&self) -> Result<Self, SvsmError> {
let virt_addr = allocate_file_page()?;
+
+ let src = self.virt_addr.bits();
+ let dst = virt_addr.bits();
+ let size = PAGE_SIZE;
unsafe {
- let src = self.virt_addr.as_ptr::<[u8; PAGE_SIZE]>();
- let dst = virt_addr.as_mut_ptr::<[u8; PAGE_SIZE]>();
- ptr::copy_nonoverlapping(src, dst, 1);
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
}
+
Ok(PageRef {
virt_addr,
phys_addr: virt_to_phys(virt_addr),
})
}
- pub fn write(&mut self, offset: usize, buf: &[u8]) {
+ pub fn write(&self, offset: usize, buf: &[u8]) {
assert!(offset.checked_add(buf.len()).unwrap() <= PAGE_SIZE);
- self.as_mut()[offset..][..buf.len()].copy_from_slice(buf);
+ let src = buf.as_ptr() as usize;
+ let dst = self.virt_addr.bits() + offset;
+ let size = buf.len();
+ unsafe {
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
+ }
}
pub fn read(&self, offset: usize, buf: &mut [u8]) {
assert!(offset.checked_add(buf.len()).unwrap() <= PAGE_SIZE);
- buf.copy_from_slice(&self.as_ref()[offset..][..buf.len()]);
+ let src = self.virt_addr.bits() + offset;
+ let dst = buf.as_mut_ptr() as usize;
+ let size = buf.len();
+ unsafe {
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
+ }
}
- pub fn fill(&mut self, offset: usize, value: u8) {
- self.as_mut()[offset..].fill(value);
+ pub fn fill(&self, offset: usize, value: u8) {
+ let dst = self.virt_addr.bits() + offset;
+ let size = PAGE_SIZE.checked_sub(offset).unwrap();
+
+ unsafe {
+ // SAFETY: `dst` is valid.
+ rep_stosb(dst, size, value);
+ }
}
}
-impl AsRef<[u8; PAGE_SIZE]> for PageRef {
- /// Returns a reference to the underlying array representing the memory page.
- fn as_ref(&self) -> &[u8; PAGE_SIZE] {
- let ptr = self.virt_addr.as_ptr::<[u8; PAGE_SIZE]>();
- unsafe { ptr.as_ref().unwrap() }
+/// Copy `size` bytes from `src` to `dst`.
+///
+/// # Safety
+///
+/// This function has all the safety requirements of `core::ptr::copy` except
+/// that data races (both on `src` and `dst`) are explicitly permitted.
+#[inline(always)]
+unsafe fn rep_movs(src: usize, dst: usize, size: usize) {
+ unsafe {
+ asm!("rep movsb",
+ inout("rsi") src => _,
+ inout("rdi") dst => _,
+ inout("rcx") size => _,
+ options(nostack),
+ ); | This function is x86-architecture specific, in order to prepare for future portability to other architectures please place these assembly helpers under cpu/ and give them a more generic name. |
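One possible shape for such a split — hypothetical module layout and function name, not the repo's actual code — is a `cfg`-gated dispatch: the `rep movsb` fast path lives under an x86-specific module, while other architectures get a generic fallback:

```rust
// Hypothetical sketch: an arch-neutral name (`copy_bytes`) whose x86-64
// body would live under cpu/x86, with a portable fallback elsewhere.
#[cfg(target_arch = "x86_64")]
unsafe fn copy_bytes(src: *const u8, dst: *mut u8, size: usize) {
    // x86-specific fast path using `rep movsb`.
    core::arch::asm!(
        "rep movsb",
        inout("rsi") src => _,
        inout("rdi") dst => _,
        inout("rcx") size => _,
        options(nostack),
    );
}

#[cfg(not(target_arch = "x86_64"))]
unsafe fn copy_bytes(src: *const u8, dst: *mut u8, size: usize) {
    // Generic fallback for other architectures.
    core::ptr::copy(src, dst, size);
}
```

Callers would keep the same signature regardless of target, so only the module under cpu/ changes per architecture.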
svsm | github_2023 | others | 450 | coconut-svsm | joergroedel | @@ -932,47 +933,89 @@ impl PageRef {
pub fn try_copy_page(&self) -> Result<Self, SvsmError> {
let virt_addr = allocate_file_page()?;
+
+ let src = self.virt_addr.bits();
+ let dst = virt_addr.bits();
+ let size = PAGE_SIZE;
unsafe {
- let src = self.virt_addr.as_ptr::<[u8; PAGE_SIZE]>();
- let dst = virt_addr.as_mut_ptr::<[u8; PAGE_SIZE]>();
- ptr::copy_nonoverlapping(src, dst, 1);
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
}
+
Ok(PageRef {
virt_addr,
phys_addr: virt_to_phys(virt_addr),
})
}
- pub fn write(&mut self, offset: usize, buf: &[u8]) {
+ pub fn write(&self, offset: usize, buf: &[u8]) {
assert!(offset.checked_add(buf.len()).unwrap() <= PAGE_SIZE);
- self.as_mut()[offset..][..buf.len()].copy_from_slice(buf);
+ let src = buf.as_ptr() as usize;
+ let dst = self.virt_addr.bits() + offset;
+ let size = buf.len();
+ unsafe {
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
+ }
}
pub fn read(&self, offset: usize, buf: &mut [u8]) {
assert!(offset.checked_add(buf.len()).unwrap() <= PAGE_SIZE);
- buf.copy_from_slice(&self.as_ref()[offset..][..buf.len()]);
+ let src = self.virt_addr.bits() + offset;
+ let dst = buf.as_mut_ptr() as usize;
+ let size = buf.len();
+ unsafe {
+ // SAFETY: `src` and `dst` are both valid.
+ rep_movs(src, dst, size);
+ }
}
- pub fn fill(&mut self, offset: usize, value: u8) {
- self.as_mut()[offset..].fill(value);
+ pub fn fill(&self, offset: usize, value: u8) {
+ let dst = self.virt_addr.bits() + offset;
+ let size = PAGE_SIZE.checked_sub(offset).unwrap();
+
+ unsafe {
+ // SAFETY: `dst` is valid.
+ rep_stosb(dst, size, value);
+ }
}
}
-impl AsRef<[u8; PAGE_SIZE]> for PageRef {
- /// Returns a reference to the underlying array representing the memory page.
- fn as_ref(&self) -> &[u8; PAGE_SIZE] {
- let ptr = self.virt_addr.as_ptr::<[u8; PAGE_SIZE]>();
- unsafe { ptr.as_ref().unwrap() }
+/// Copy `size` bytes from `src` to `dst`.
+///
+/// # Safety
+///
+/// This function has all the safety requirements of `core::ptr::copy` except
+/// that data races (both on `src` and `dst`) are explicitly permitted.
+#[inline(always)]
+unsafe fn rep_movs(src: usize, dst: usize, size: usize) {
+ unsafe {
+ asm!("rep movsb",
+ inout("rsi") src => _,
+ inout("rdi") dst => _,
+ inout("rcx") size => _,
+ options(nostack),
+ );
}
}
-impl AsMut<[u8; PAGE_SIZE]> for PageRef {
- /// Returns a mutable reference to the underlying array representing the memory page.
- fn as_mut(&mut self) -> &mut [u8; PAGE_SIZE] {
- let ptr = self.virt_addr.as_mut_ptr::<[u8; PAGE_SIZE]>();
- unsafe { ptr.as_mut().unwrap() }
+/// Set `size` bytes at `dst` to `val`.
+///
+/// # Safety
+///
+/// This function has all the safety requirements of `core::ptr::write_bytes` except
+/// that data races are explicitly permitted.
+#[inline(always)]
+unsafe fn rep_stosb(dst: usize, size: usize, value: u8) {
+ unsafe {
+ asm!("rep stosb",
+ inout("rdi") dst => _,
+ inout("rcx") size => _,
+ in("al") value,
+ options(nostack),
+ ); | Same here, please move to cpu/ module and give it a more generic name. |
svsm | github_2023 | others | 447 | coconut-svsm | roy-hopkins | @@ -149,6 +149,10 @@ impl GpaMap {
// Place the kernel area at 64 MB with a size of 16 MB.
GpaRange::new(0x04000000, 0x01000000)?
}
+ Hypervisor::Vanadium => {
+ // Place the kernel area at 8TiB-2GiB with a size of 16 MB.
+ GpaRange::new(0x7ff80000000, 0x01000000)? | As far as I can see this is the only difference between the Qemu and Vanadium configurations. Can you see the two diverging more in the future? Would it make sense to make the Qemu kernel location align with Vanadium? |
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -143,6 +146,98 @@ pub trait InsnMachineCtx: core::fmt::Debug {
fn read_cr0(&self) -> u64;
/// Read CR4 register
fn read_cr4(&self) -> u64;
+
+ /// Read a register
+ fn read_reg(&self, _reg: Register) -> usize {
+ panic!("Reading register is not implemented");
+ }
+
+ /// Read rflags register
+ fn read_flags(&self) -> usize {
+        panic!("Reading flags is not implemented"); | Very nitpicky, but you can directly use the `unimplemented!()` macro :)
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -84,6 +92,10 @@ fn init_encrypt_mask(platform: &dyn SvsmPlatform, vtom: usize) -> ImmutAfterInit
guest_phys_addr_size
};
+ PHYS_ADDR_SIZE | I would rather return the Result here instead:
```suggestion
PHYS_ADDR_SIZE.reinit(&phys_addr_size)?;
``` |
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -143,7 +143,11 @@ pub fn handle_vc_exception(ctx: &mut X86ExceptionContext, vector: usize) -> Resu
Ok(())
}
(SVM_EXIT_CPUID, Some(DecodedInsn::Cpuid)) => handle_cpuid(ctx),
- (SVM_EXIT_IOIO, Some(ins)) => handle_ioio(ctx, ghcb, ins),
+ (SVM_EXIT_IOIO, Some(_)) => insn_ctx
+ .as_ref()
+ .unwrap() | I would avoid using `unwrap()` here and return an error instead to let the caller decide to panic or not. This is even a security issue: a user can panic the SVSM kernel by providing buggy raw instructions (so bytes) to the decoder. This should be possible by changing the instructions in userspace that raised the #VC before the kernel fetches them. For this, I would avoid using any function that could panic the kernel (`unwrap()`, `expect()` etc) in this code path :) |
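A minimal sketch of this suggestion — hypothetical names, not the decoder's actual API — where malformed, attacker-controlled bytes produce an error the caller can handle instead of a kernel panic:

```rust
// Hypothetical error type; the real code would use `SvsmError`/`InsnError`.
#[derive(Debug, PartialEq)]
enum DecodeError {
    InsnTooShort,
}

// Returns the first opcode byte, or an error for an empty/truncated
// byte stream instead of panicking inside the handler.
fn peek_opcode(bytes: &[u8]) -> Result<u8, DecodeError> {
    bytes.first().copied().ok_or(DecodeError::InsnTooShort)
}
```

The handler can then decide whether the error is fatal, rather than having the decoder decide by panicking.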
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -797,11 +1016,269 @@ impl DecodedInsnCtx {
DecodedInsn::Out(Operand::rdx(), self.opsize)
}
}
+ OpCodeClass::Ins | OpCodeClass::Outs => {
+ if self.prefix.contains(PrefixFlags::REPZ_P) {
+ // The prefix REPZ(F3h) actually represents REP for ins/outs.
+ // The count register depends on the address size of the
+ // instruction.
+ self.repeat = read_reg(mctx, Register::Rcx, self.addrsize);
+ };
+
+ if opdesc.class == OpCodeClass::Ins {
+ DecodedInsn::Ins
+ } else {
+ DecodedInsn::Outs
+ }
+ }
OpCodeClass::Rdmsr => DecodedInsn::Rdmsr,
OpCodeClass::Rdtsc => DecodedInsn::Rdtsc,
OpCodeClass::Rdtscp => DecodedInsn::Rdtscp,
OpCodeClass::Wrmsr => DecodedInsn::Wrmsr,
_ => return Err(InsnError::UnSupportedInsn),
})
}
+
+ fn canonical_check(&self, la: usize) -> Option<usize> {
+ if match self.cpu_mode {
+ CpuMode::Bit64(level) => {
+ let virtaddr_bits = if level == PagingLevel::Level4 { 48 } else { 57 };
+ let mask = !((1 << virtaddr_bits) - 1);
+ if la & (1 << (virtaddr_bits - 1)) != 0 {
+ la & mask == mask
+ } else {
+ la & mask == 0
+ }
+ }
+ _ => true,
+ } {
+ Some(la)
+ } else {
+ None
+ }
+ }
+
+ fn alignment_check(&self, la: usize, size: Bytes) -> Option<usize> {
+ match size {
+ // Zero size is not allowed
+ Bytes::Zero => None,
+ // One byte is always aligned
+ Bytes::One => Some(la),
+ // Two/Four/Eight bytes must be aligned on a boundary
+ _ => {
+ if la & (size as usize - 1) != 0 {
+ None
+ } else {
+ Some(la)
+ }
+ }
+ }
+ }
+
+ fn cal_linear_addr<I: InsnMachineCtx>(
+ &self,
+ mctx: &I,
+ seg: SegRegister,
+ ea: usize,
+ writable: bool,
+ ) -> Option<usize> {
+ let segment = mctx.read_seg(seg);
+
+ let addrsize = if self.cpu_mode.is_bit64() {
+ Bytes::Eight
+ } else {
+ let attr = SegDescAttrFlags::from_bits_truncate(segment);
+ // Invalid if this is a system segment
+ if !attr.contains(SegDescAttrFlags::S) {
+ return None;
+ }
+
+ if writable {
+ // Writing to a code segment, or writing to a read-only
+ // data segment is not allowed.
+ if attr.contains(SegDescAttrFlags::C_D) || !attr.contains(SegDescAttrFlags::R_W) {
+ return None;
+ }
+ } else {
+ // A data segment is always readable, but a code segment
+ // may be execute-only. Invalid if reading an execute-only
+ // code segment.
+ if attr.contains(SegDescAttrFlags::C_D) && !attr.contains(SegDescAttrFlags::R_W) {
+ return None;
+ }
+ }
+
+ let mut limit = segment_limit(segment) as usize;
+
+ if !attr.contains(SegDescAttrFlags::C_D) && attr.contains(SegDescAttrFlags::C_E) {
+ // Expand-down segment, check low limit
+ if ea <= limit {
+ return None;
+ }
+
+ limit = if attr.contains(SegDescAttrFlags::DB) {
+ u32::MAX as usize
+ } else {
+ u16::MAX as usize
+ }
+ }
+
+ // Check high limit for each byte
+ for i in 0..self.opsize as usize {
+ if ea + i > limit {
+ return None;
+ }
+ }
+
+ Bytes::Four
+ };
+
+ self.canonical_check(
+ if self.cpu_mode.is_bit64() && seg != SegRegister::FS && seg != SegRegister::GS {
+ ea & (addrsize.mask() as usize)
+ } else {
+ (segment_base(segment) as usize + ea) & addrsize.mask() as usize
+ },
+ )
+ }
+
+ fn get_linear_addr<I: InsnMachineCtx>(
+ &self,
+ mctx: &I,
+ seg: SegRegister,
+ ea: usize,
+ writable: bool,
+ ) -> Result<usize, InsnError> {
+ self.cal_linear_addr(mctx, seg, ea, writable)
+ .ok_or(if seg == SegRegister::SS {
+ InsnError::ExceptionSS
+ } else {
+ InsnError::ExceptionGP(0)
+ })
+ .and_then(|la| {
+ if (mctx.read_cpl() == 3)
+ && (mctx.read_cr0() & CR0Flags::AM.bits()) != 0
+ && (mctx.read_flags() & RFlags::AC.bits()) != 0
+ {
+ self.alignment_check(la, self.opsize)
+ .ok_or(InsnError::ExceptionAC)
+ } else {
+ Ok(la)
+ }
+ })
+ }
+
+ fn emulate_ins_outs<I: InsnMachineCtx>(
+ &self,
+ mctx: &mut I,
+ io_read: bool,
+ ) -> Result<(), InsnError> {
+ // I/O port number is stored in DX.
+ let port = mctx.read_reg(Register::Rdx) as u16;
+
+ // Check the IO permission bit map.
+ if !ioio_perm(mctx, port, self.opsize, io_read) {
+ return Err(InsnError::ExceptionGP(0));
+ }
+
+ let (seg, reg) = if io_read {
+ // Input byte from I/O port specified in DX into
+ // memory location specified with ES:(E)DI or
+ // RDI.
+ (SegRegister::ES, Register::Rdi)
+ } else {
+ // Output byte/word/doubleword from memory location specified in
+ // DS:(E)SI (The DS segment may be overridden with a segment
+ // override prefix.) or RSI to I/O port specified in DX.
+ (
+ self.override_seg.map_or(SegRegister::DS, |s| s),
+ Register::Rsi,
+ )
+ };
+
+ // Decode the linear addresses and map them as a memory object
+ // which allows access to the memory represented by the
+ // linear addresses.
+ let mut mem = mctx.map_linear_addr(
+ self.get_linear_addr(mctx, seg, read_reg(mctx, reg, self.addrsize), io_read)?,
+ self.opsize as usize,
+ io_read,
+ false,
+ )?;
+
+ if io_read {
+ // Read data from IO port and then write to the memory location.
+ mem.write_integer(0, self.opsize, mctx.ioio_in(port, self.opsize)?)?;
+ } else {
+ // Read data from memory location and then write to the IO port
+ mctx.ioio_out(port, self.opsize, mem.read_integer(0, self.opsize)?)?;
+ }
+
+ let rflags = RFlags::from_bits_truncate(mctx.read_flags());
+ if rflags.contains(RFlags::DF) {
+ // The DF flag is 1, the (E)SI/DI register is decremented.
+ write_reg(
+ mctx,
+ reg,
+ read_reg(mctx, reg, self.addrsize)
+ .checked_sub(self.opsize as usize)
+ .unwrap(), | Same comment about `unwrap()` |
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -797,11 +1016,269 @@ impl DecodedInsnCtx {
DecodedInsn::Out(Operand::rdx(), self.opsize)
}
}
+ OpCodeClass::Ins | OpCodeClass::Outs => {
+ if self.prefix.contains(PrefixFlags::REPZ_P) {
+ // The prefix REPZ(F3h) actually represents REP for ins/outs.
+ // The count register depends on the address size of the
+ // instruction.
+ self.repeat = read_reg(mctx, Register::Rcx, self.addrsize);
+ };
+
+ if opdesc.class == OpCodeClass::Ins {
+ DecodedInsn::Ins
+ } else {
+ DecodedInsn::Outs
+ }
+ }
OpCodeClass::Rdmsr => DecodedInsn::Rdmsr,
OpCodeClass::Rdtsc => DecodedInsn::Rdtsc,
OpCodeClass::Rdtscp => DecodedInsn::Rdtscp,
OpCodeClass::Wrmsr => DecodedInsn::Wrmsr,
_ => return Err(InsnError::UnSupportedInsn),
})
}
+
+ fn canonical_check(&self, la: usize) -> Option<usize> {
+ if match self.cpu_mode {
+ CpuMode::Bit64(level) => {
+ let virtaddr_bits = if level == PagingLevel::Level4 { 48 } else { 57 };
+ let mask = !((1 << virtaddr_bits) - 1);
+ if la & (1 << (virtaddr_bits - 1)) != 0 {
+ la & mask == mask
+ } else {
+ la & mask == 0
+ }
+ }
+ _ => true,
+ } {
+ Some(la)
+ } else {
+ None
+ }
+ }
+
+ fn alignment_check(&self, la: usize, size: Bytes) -> Option<usize> {
+ match size {
+ // Zero size is not allowed
+ Bytes::Zero => None,
+ // One byte is always aligned
+ Bytes::One => Some(la),
+ // Two/Four/Eight bytes must be aligned on a boundary
+ _ => {
+ if la & (size as usize - 1) != 0 {
+ None
+ } else {
+ Some(la)
+ }
+ }
+ }
+ }
+
+ fn cal_linear_addr<I: InsnMachineCtx>(
+ &self,
+ mctx: &I,
+ seg: SegRegister,
+ ea: usize,
+ writable: bool,
+ ) -> Option<usize> {
+ let segment = mctx.read_seg(seg);
+
+ let addrsize = if self.cpu_mode.is_bit64() {
+ Bytes::Eight
+ } else {
+ let attr = SegDescAttrFlags::from_bits_truncate(segment);
+ // Invalid if this is a system segment
+ if !attr.contains(SegDescAttrFlags::S) {
+ return None;
+ }
+
+ if writable {
+ // Writing to a code segment, or writing to a read-only
+ // data segment is not allowed.
+ if attr.contains(SegDescAttrFlags::C_D) || !attr.contains(SegDescAttrFlags::R_W) {
+ return None;
+ }
+ } else {
+ // A data segment is always readable, but a code segment
+ // may be execute-only. Invalid if reading an execute-only
+ // code segment.
+ if attr.contains(SegDescAttrFlags::C_D) && !attr.contains(SegDescAttrFlags::R_W) {
+ return None;
+ }
+ }
+
+ let mut limit = segment_limit(segment) as usize;
+
+ if !attr.contains(SegDescAttrFlags::C_D) && attr.contains(SegDescAttrFlags::C_E) {
+ // Expand-down segment, check low limit
+ if ea <= limit {
+ return None;
+ }
+
+ limit = if attr.contains(SegDescAttrFlags::DB) {
+ u32::MAX as usize
+ } else {
+ u16::MAX as usize
+ }
+ }
+
+ // Check high limit for each byte
+ for i in 0..self.opsize as usize {
+ if ea + i > limit {
+ return None;
+ }
+ }
+
+ Bytes::Four
+ };
+
+ self.canonical_check(
+ if self.cpu_mode.is_bit64() && seg != SegRegister::FS && seg != SegRegister::GS {
+ ea & (addrsize.mask() as usize)
+ } else {
+ (segment_base(segment) as usize + ea) & addrsize.mask() as usize
+ },
+ )
+ }
+
+ fn get_linear_addr<I: InsnMachineCtx>(
+ &self,
+ mctx: &I,
+ seg: SegRegister,
+ ea: usize,
+ writable: bool,
+ ) -> Result<usize, InsnError> {
+ self.cal_linear_addr(mctx, seg, ea, writable)
+ .ok_or(if seg == SegRegister::SS {
+ InsnError::ExceptionSS
+ } else {
+ InsnError::ExceptionGP(0)
+ })
+ .and_then(|la| {
+ if (mctx.read_cpl() == 3)
+ && (mctx.read_cr0() & CR0Flags::AM.bits()) != 0
+ && (mctx.read_flags() & RFlags::AC.bits()) != 0
+ {
+ self.alignment_check(la, self.opsize)
+ .ok_or(InsnError::ExceptionAC)
+ } else {
+ Ok(la)
+ }
+ })
+ }
+
+ fn emulate_ins_outs<I: InsnMachineCtx>(
+ &self,
+ mctx: &mut I,
+ io_read: bool,
+ ) -> Result<(), InsnError> {
+ // I/O port number is stored in DX.
+ let port = mctx.read_reg(Register::Rdx) as u16;
+
+ // Check the IO permission bit map.
+ if !ioio_perm(mctx, port, self.opsize, io_read) {
+ return Err(InsnError::ExceptionGP(0));
+ }
+
+ let (seg, reg) = if io_read {
+ // Input byte from I/O port specified in DX into
+ // memory location specified with ES:(E)DI or
+ // RDI.
+ (SegRegister::ES, Register::Rdi)
+ } else {
+ // Output byte/word/doubleword from memory location specified in
+ // DS:(E)SI (The DS segment may be overridden with a segment
+ // override prefix.) or RSI to I/O port specified in DX.
+ (
+ self.override_seg.map_or(SegRegister::DS, |s| s),
+ Register::Rsi,
+ )
+ };
+
+ // Decode the linear addresses and map them as a memory object
+ // which allows access to the memory represented by the
+ // linear addresses.
+ let mut mem = mctx.map_linear_addr(
+ self.get_linear_addr(mctx, seg, read_reg(mctx, reg, self.addrsize), io_read)?,
+ self.opsize as usize,
+ io_read,
+ false,
+ )?;
+
+ if io_read {
+ // Read data from IO port and then write to the memory location.
+ mem.write_integer(0, self.opsize, mctx.ioio_in(port, self.opsize)?)?;
+ } else {
+ // Read data from memory location and then write to the IO port
+ mctx.ioio_out(port, self.opsize, mem.read_integer(0, self.opsize)?)?;
+ }
+
+ let rflags = RFlags::from_bits_truncate(mctx.read_flags());
+ if rflags.contains(RFlags::DF) {
+ // The DF flag is 1, the (E)SI/DI register is decremented.
+ write_reg(
+ mctx,
+ reg,
+ read_reg(mctx, reg, self.addrsize)
+ .checked_sub(self.opsize as usize)
+ .unwrap(),
+ self.addrsize,
+ );
+ } else {
+ // The DF flag is 0, the (E)SI/DI register is incremented.
+ write_reg(
+ mctx,
+ reg,
+ read_reg(mctx, reg, self.addrsize)
+ .checked_add(self.opsize as usize)
+ .unwrap(), | Same comment about `unwrap()` |
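Beyond the panic risk, a checked add is arguably the wrong semantics here: hardware wraps (E)SI/DI within the instruction's address size rather than faulting. A hedged sketch (hypothetical helper, not the repo's code):

```rust
// Advance a string-op index register, wrapping within the address size
// (mask = 0xFFFF for 16-bit, 0xFFFF_FFFF for 32-bit, usize::MAX for 64-bit).
fn advance_index(val: usize, step: usize, addrsize_mask: usize) -> usize {
    val.wrapping_add(step) & addrsize_mask
}
```

With this shape, a legal 16-bit wraparound neither panics nor produces an out-of-range value.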
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -415,9 +600,39 @@ impl DecodedInsnCtx {
///
/// # Returns
///
- /// The length of the decoded instruction as a `usize`.
+ /// The length of the decoded instruction as a `usize`. If the
+ /// repeat count is greater than 1, then return 0 to indicate not to
+ /// skip this instruction. If the repeat count is less than 1, then
+ /// return instruction len to indicate this instruction can be skipped.
pub fn size(&self) -> usize {
- self.insn_len
+ if self.repeat > 1 {
+ 0
+ } else {
+ self.insn_len
+ }
+ }
+
+ /// Emulates the decoded instruction using the provided machine context.
+ ///
+ /// # Arguments
+ ///
+ /// * `mctx` - A mutable reference to an object implementing the
+ /// `InsnMachineCtx` trait to provide the necessary machine context
+ /// for emulation.
+ ///
+ /// # Returns
+ ///
+ /// An `Ok(())` if emulation is successful or an `InsnError` otherwise.
+ pub fn emulate<I: InsnMachineCtx>(&self, mctx: &mut I) -> Result<(), InsnError> {
+ self.insn
+ .ok_or(InsnError::UnSupportedInsn) | We're starting to have divergence here, some handlers return a `VcErrorType::DecodedFailed` if there was an issue while decoding. It would be nice to coordinate errors on this.
I don't think returning a `VcError` here makes sense, but maybe we can convert `InsnError::UnSupportedInsn` to `VcError` in #VC handler the caller? That would still be 3 layers of error conversion (`InsnError` -> `VcError` -> `SvsmError`) and that's maybe too much... |
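The three-layer conversion described here could be wired with `From` impls so each layer propagates with `?`. A sketch with hypothetical variants — only the layering mirrors the comment, not the real enums:

```rust
#[derive(Debug, PartialEq)]
struct InsnError;

#[derive(Debug, PartialEq)]
enum VcError {
    DecodeFailed(InsnError),
}

#[derive(Debug, PartialEq)]
enum SvsmError {
    Vc(VcError),
}

// Each `From` impl is one conversion layer; `?` applies them implicitly.
impl From<InsnError> for VcError {
    fn from(e: InsnError) -> Self {
        VcError::DecodeFailed(e)
    }
}

impl From<VcError> for SvsmError {
    fn from(e: VcError) -> Self {
        SvsmError::Vc(e)
    }
}

// A handler converts InsnError -> VcError explicitly, then `?` lifts
// VcError -> SvsmError via the second From impl.
fn handle(res: Result<(), InsnError>) -> Result<(), SvsmError> {
    res.map_err(VcError::from)?;
    Ok(())
}
```

Whether three layers is worth it is the open question in the comment; the `From` chain at least keeps each conversion in one place.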
svsm | github_2023 | others | 391 | coconut-svsm | joergroedel | @@ -117,6 +127,15 @@ fn make_private_address(paddr: PhysAddr) -> PhysAddr {
PhysAddr::from(paddr.bits() & !shared_pte_mask() | private_pte_mask())
}
+fn shared_address(paddr: PhysAddr) -> bool {
+ if shared_pte_mask() | private_pte_mask() != 0 {
+ (paddr.bits() & shared_pte_mask()) == shared_pte_mask()
+ } else {
+ // No confidential bits in the physical address.
+ false
+ } | ```suggestion
make_shared_address(paddr) == paddr
```
This is the bug that makes the test cases fail on AMD. The logic really only works for TDX and causes all pages to be treated as shared on SNP. As a result the instruction decoder made all of its temporary mappings shared and read the encrypted data. Fixing it as above makes the tests pass for me. |
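The failure mode can be illustrated with plain masks (hypothetical values: no shared bits and a private C-bit at bit 51, an SNP-like configuration — not the repo's actual constants):

```rust
// SNP-like configuration: the C-bit marks private pages, shared mask is 0.
const SHARED_MASK: u64 = 0;
const PRIVATE_MASK: u64 = 1 << 51;

fn make_shared(paddr: u64) -> u64 {
    (paddr & !PRIVATE_MASK) | SHARED_MASK
}

// Buggy form: with SHARED_MASK == 0, `paddr & 0 == 0` holds for every
// address, so everything looks shared.
fn shared_buggy(paddr: u64) -> bool {
    paddr & SHARED_MASK == SHARED_MASK
}

// Fixed form: only true when clearing the private bit changes nothing,
// i.e. the private bit was already clear.
fn shared_fixed(paddr: u64) -> bool {
    make_shared(paddr) == paddr
}
```

The fixed form stays correct on TDX as well, since there `make_shared` sets the real shared bit and the comparison only holds when it was already set.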
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -143,6 +146,121 @@ pub trait InsnMachineCtx: core::fmt::Debug {
fn read_cr0(&self) -> u64;
/// Read CR4 register
fn read_cr4(&self) -> u64;
+
+ /// Read a register
+ fn read_reg(&self, _reg: Register) -> usize {
+ unimplemented!("Reading register is not implemented");
+ }
+
+ /// Read rflags register
+ fn read_flags(&self) -> usize {
+ unimplemented!("Reading flags is not implemented");
+ }
+
+ /// Write a register
+ fn write_reg(&mut self, _reg: Register, _val: usize) {
+ unimplemented!("Writing register is not implemented");
+ }
+
+ /// Read the current privilege level
+ fn read_cpl(&self) -> usize {
+ unimplemented!("Reading CPL is not implemented");
+ }
+
+ /// Map the given linear address region to a machine memory object
+ /// which provides access to the memory of this linear address region.
+ ///
+ /// # Arguments
+ ///
+ /// * `la` - The linear address of the region to map.
+ /// * `write` - Whether write access is allowed to the mapped region.
+ /// * `fetch` - Whether fetch access is allowed to the mapped region.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` containing a boxed trait object representing the mapped
+ /// memory, or an `InsnError` if mapping fails.
+ fn map_linear_addr<T: Copy + 'static>(
+ &self,
+ _la: usize, | I would rather use a `VirtAddr` as early as possible:
```suggestion
_la: VirtAddr,
``` |
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -143,6 +146,121 @@ pub trait InsnMachineCtx: core::fmt::Debug {
fn read_cr0(&self) -> u64;
/// Read CR4 register
fn read_cr4(&self) -> u64;
+
+ /// Read a register
+ fn read_reg(&self, _reg: Register) -> usize {
+ unimplemented!("Reading register is not implemented");
+ }
+
+ /// Read rflags register
+ fn read_flags(&self) -> usize {
+ unimplemented!("Reading flags is not implemented");
+ }
+
+ /// Write a register
+ fn write_reg(&mut self, _reg: Register, _val: usize) {
+ unimplemented!("Writing register is not implemented");
+ }
+
+ /// Read the current privilege level
+ fn read_cpl(&self) -> usize {
+ unimplemented!("Reading CPL is not implemented");
+ }
+
+ /// Map the given linear address region to a machine memory object
+ /// which provides access to the memory of this linear address region.
+ ///
+ /// # Arguments
+ ///
+ /// * `la` - The linear address of the region to map.
+ /// * `write` - Whether write access is allowed to the mapped region.
+ /// * `fetch` - Whether fetch access is allowed to the mapped region.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` containing a boxed trait object representing the mapped
+ /// memory, or an `InsnError` if mapping fails.
+ fn map_linear_addr<T: Copy + 'static>(
+ &self,
+ _la: usize,
+ _write: bool,
+ _fetch: bool,
+ ) -> Result<Box<dyn InsnMachineMem<Item = T>>, InsnError> {
+ Err(InsnError::MapLinearAddr)
+ }
+
+ /// Check IO permission bitmap.
+ ///
+ /// # Arguments
+ ///
+ /// * `port` - The I/O port to check.
+ /// * `size` - The size of the I/O operation.
+ /// * `io_read` - Whether the I/O operation is a read operation.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` containing true if the port is permitted otherwise false.
+ fn ioio_perm(&self, _port: u16, _size: Bytes, _io_read: bool) -> bool {
+ unimplemented!("Checking IO permission bitmap is not implemented");
+ }
+
+ /// Handle an I/O in operation.
+ ///
+ /// # Arguments
+ ///
+ /// * `port` - The I/O port to read from.
+ /// * `size` - The size of the data to read.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` containing the read data if success or an `InsnError` if
+ /// the operation fails.
+ fn ioio_in(&self, _port: u16, _size: Bytes) -> Result<u64, InsnError> {
+ Err(InsnError::IoIoIn)
+ }
+
+ /// Handle an I/O out operation.
+ ///
+ /// # Arguments
+ ///
+ /// * `port` - The I/O port to write to.
+ /// * `size` - The size of the data to write.
+ /// * `data` - The data to write to the I/O port.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` indicating success or an `InsnError` if the operation fails.
+ fn ioio_out(&mut self, _port: u16, _size: Bytes, _data: u64) -> Result<(), InsnError> {
+ Err(InsnError::IoIoOut)
+ }
+}
+
+/// Trait representing a machine memory for instruction decoding.
+pub trait InsnMachineMem {
+ type Item;
+
+ /// Read data from the memory at the specified offset.
+ ///
+ /// # Returns
+ ///
+ /// Returns the read data on success, or an `InsnError` if the read
+ /// operation fails.
fn mem_read(&self) -> Result<Self::Item, InsnError> { | Since this method will likely be implemented by reading directly from memory, I would prefer making this function `unsafe`:
```suggestion
unsafe fn mem_read(&self) -> Result<Self::Item, InsnError> {
```
I would also add a Safety comment similar to `GuestPtr::read()` for all the implementations as well; the caller needs to ensure that this can't allow an untrusted user to read memory that it should not be allowed to read.
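A sketch of the suggested shape — hypothetical trait body, error type, and impl; only the `unsafe` marker and Safety doc mirror the suggestion:

```rust
trait InsnMachineMem {
    type Item;

    /// # Safety
    ///
    /// The caller must ensure the backing memory is valid for reads and
    /// that this cannot let untrusted code read memory it should not see.
    unsafe fn mem_read(&self) -> Result<Self::Item, ()>;
}

// Illustrative implementation over a raw pointer.
struct RawMem(*const u32);

impl InsnMachineMem for RawMem {
    type Item = u32;

    unsafe fn mem_read(&self) -> Result<u32, ()> {
        // SAFETY: deferred to the caller per the trait's contract.
        Ok(unsafe { self.0.read() })
    }
}
```

Marking the trait method `unsafe` pushes the validity obligation to each call site, which then documents why its particular mapping is safe to read.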
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -143,6 +146,121 @@ pub trait InsnMachineCtx: core::fmt::Debug {
fn read_cr0(&self) -> u64;
/// Read CR4 register
fn read_cr4(&self) -> u64;
+
+ /// Read a register
+ fn read_reg(&self, _reg: Register) -> usize {
+ unimplemented!("Reading register is not implemented");
+ }
+
+ /// Read rflags register
+ fn read_flags(&self) -> usize {
+ unimplemented!("Reading flags is not implemented");
+ }
+
+ /// Write a register
+ fn write_reg(&mut self, _reg: Register, _val: usize) {
+ unimplemented!("Writing register is not implemented");
+ }
+
+ /// Read the current privilege level
+ fn read_cpl(&self) -> usize {
+ unimplemented!("Reading CPL is not implemented");
+ }
+
+ /// Map the given linear address region to a machine memory object
+ /// which provides access to the memory of this linear address region.
+ ///
+ /// # Arguments
+ ///
+ /// * `la` - The linear address of the region to map.
+ /// * `write` - Whether write access is allowed to the mapped region.
+ /// * `fetch` - Whether fetch access is allowed to the mapped region.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` containing a boxed trait object representing the mapped
+ /// memory, or an `InsnError` if mapping fails.
+ fn map_linear_addr<T: Copy + 'static>(
+ &self,
+ _la: usize,
+ _write: bool,
+ _fetch: bool,
+ ) -> Result<Box<dyn InsnMachineMem<Item = T>>, InsnError> {
+ Err(InsnError::MapLinearAddr)
+ }
+
+ /// Check IO permission bitmap.
+ ///
+ /// # Arguments
+ ///
+ /// * `port` - The I/O port to check.
+ /// * `size` - The size of the I/O operation.
+ /// * `io_read` - Whether the I/O operation is a read operation.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` containing true if the port is permitted otherwise false.
+ fn ioio_perm(&self, _port: u16, _size: Bytes, _io_read: bool) -> bool {
+ unimplemented!("Checking IO permission bitmap is not implemented");
+ }
+
+ /// Handle an I/O in operation.
+ ///
+ /// # Arguments
+ ///
+ /// * `port` - The I/O port to read from.
+ /// * `size` - The size of the data to read.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` containing the read data if success or an `InsnError` if
+ /// the operation fails.
+ fn ioio_in(&self, _port: u16, _size: Bytes) -> Result<u64, InsnError> {
+ Err(InsnError::IoIoIn)
+ }
+
+ /// Handle an I/O out operation.
+ ///
+ /// # Arguments
+ ///
+ /// * `port` - The I/O port to write to.
+ /// * `size` - The size of the data to write.
+ /// * `data` - The data to write to the I/O port.
+ ///
+ /// # Returns
+ ///
+ /// A `Result` indicating success or an `InsnError` if the operation fails.
+ fn ioio_out(&mut self, _port: u16, _size: Bytes, _data: u64) -> Result<(), InsnError> {
+ Err(InsnError::IoIoOut)
+ }
+}
+
+/// Trait representing a machine memory for instruction decoding.
+pub trait InsnMachineMem {
+ type Item;
+
+ /// Read data from the memory at the specified offset.
+ ///
+ /// # Returns
+ ///
+ /// Returns the read data on success, or an `InsnError` if the read
+ /// operation fails.
+ fn mem_read(&self) -> Result<Self::Item, InsnError> {
+ Err(InsnError::MemRead)
+ }
+
+ /// Write data to the memory at the specified offset.
+ ///
+ /// # Arguments
+ ///
+ /// * `data` - The data to write to the memory.
+ ///
+ /// # Returns
+ ///
+ /// Returns `Ok` on success, or an `InsnError` if the write operation fails.
+ fn mem_write(&mut self, _data: Self::Item) -> Result<(), InsnError> { | Same comment than `mem_read()` about `unsafe`:
```suggestion
unsafe fn mem_write(&mut self, _data: Self::Item) -> Result<(), InsnError> {
``` |
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -115,3 +159,113 @@ impl Drop for PerCPUPageMappingGuard {
}
}
}
+
+/// Represents a guard for a specific memory range mapping, which will
+/// unmap the specific memory range after being dropped.
+#[derive(Debug)]
+pub struct MemMappingGuard<T> {
+ // The guard holding the temporary mapping for a specific memory range.
+ guard: PerCPUPageMappingGuard,
+ // The starting offset of the memory range.
+ start_off: usize,
+
+ phantom: PhantomData<T>,
+}
+
+impl<T: Copy> MemMappingGuard<T> {
+ /// Creates a new `MemMappingGuard` with the given `PerCPUPageMappingGuard`
+ /// and starting offset.
+ ///
+ /// # Arguments
+ ///
+ /// * `guard` - The `PerCPUPageMappingGuard` to associate with the `MemMappingGuard`.
+ /// * `start_off` - The starting offset for the memory mapping.
+ ///
+ /// # Returns
+ ///
+ /// Self is returned.
+ pub fn new(guard: PerCPUPageMappingGuard, start_off: usize) -> Result<Self, SvsmError> {
+ if start_off >= guard.mapping.len() {
+ Err(SvsmError::Mem)
+ } else {
+ Ok(Self {
+ guard,
+ start_off,
+ phantom: PhantomData,
+ })
+ }
+ }
+
+ /// Reads data from a virtual address region specified by an offset
+ ///
+ /// # Arguments
+ ///
+ /// * `offset`: The offset (in units of `size_of::<T>()`) from the start of the virtual address
+ /// region to read from.
+ ///
+ /// # Returns
+ ///
+ /// This function returns a `Result` that indicates the success or failure of the operation.
+ /// If the read operation is successful, it returns `Ok(T)` which contains the read back data.
+ /// If the virtual address region cannot be retrieved, it returns `Err(SvsmError::Mem)`.
pub fn read(&self, offset: usize) -> Result<T, SvsmError> { | Similar to `InsnMachineMem::mem_read()`, I would make this function unsafe with a Safety comment
svsm | github_2023 | others | 391 | coconut-svsm | p4zuu | @@ -115,3 +159,113 @@ impl Drop for PerCPUPageMappingGuard {
}
}
}
+
+/// Represents a guard for a specific memory range mapping, which will
+/// unmap the specific memory range after being dropped.
+#[derive(Debug)]
+pub struct MemMappingGuard<T> {
+ // The guard holding the temporary mapping for a specific memory range.
+ guard: PerCPUPageMappingGuard,
+ // The starting offset of the memory range.
+ start_off: usize,
+
+ phantom: PhantomData<T>,
+}
+
+impl<T: Copy> MemMappingGuard<T> {
+ /// Creates a new `MemMappingGuard` with the given `PerCPUPageMappingGuard`
+ /// and starting offset.
+ ///
+ /// # Arguments
+ ///
+ /// * `guard` - The `PerCPUPageMappingGuard` to associate with the `MemMappingGuard`.
+ /// * `start_off` - The starting offset for the memory mapping.
+ ///
+ /// # Returns
+ ///
+ /// Self is returned.
+ pub fn new(guard: PerCPUPageMappingGuard, start_off: usize) -> Result<Self, SvsmError> {
+ if start_off >= guard.mapping.len() {
+ Err(SvsmError::Mem)
+ } else {
+ Ok(Self {
+ guard,
+ start_off,
+ phantom: PhantomData,
+ })
+ }
+ }
+
+ /// Reads data from a virtual address region specified by an offset
+ ///
+ /// # Arguments
+ ///
+ /// * `offset`: The offset (in unit of `size_of::<T>()`) from the start of the virtual address
+ /// region to read from.
+ ///
+ /// # Returns
+ ///
+ /// This function returns a `Result` that indicates the success or failure of the operation.
+ /// If the read operation is successful, it returns `Ok(T)` which contains the read back data.
+ /// If the virtual address region cannot be retrieved, it returns `Err(SvsmError::Mem)`.
+ pub fn read(&self, offset: usize) -> Result<T, SvsmError> {
+ let size = core::mem::size_of::<T>();
+ self.virt_addr_region(offset * size, size)
+ .map_or(Err(SvsmError::Mem), |region| {
+ // SAFETY: The region is checked and valid for the size of the data.
+ Ok(unsafe { *(region.start().as_ptr::<T>()) })
+ })
+ }
+
+ /// Writes data from a provided data into a virtual address region specified by an offset.
+ ///
+ /// # Arguments
+ ///
+ /// * `offset`: The offset (in units of `size_of::<T>()`) from the start of the virtual address
+ /// region to write to.
+ /// * `data`: Data to write.
+ ///
+ /// # Returns
+ ///
+ /// This function returns a `Result` that indicates the success or failure of the operation.
+ /// If the write operation is successful, it returns `Ok(())`. If the virtual address region
+ /// cannot be retrieved or if the buffer size is larger than the region size, it returns
+ /// `Err(SvsmError::Mem)`.
+ pub fn write(&self, offset: usize, data: T) -> Result<(), SvsmError> { | Same comment about `unsafe` |
svsm | github_2023 | others | 391 | coconut-svsm | joergroedel | @@ -71,6 +88,98 @@ impl InsnMachineCtx for X86ExceptionContext {
fn read_cr4(&self) -> u64 {
read_cr4().bits()
}
+
+ fn read_reg(&self, reg: Register) -> usize {
+ match reg {
+ Register::Rax => self.regs.rax,
+ Register::Rdx => self.regs.rdx,
+ Register::Rcx => self.regs.rcx,
+ Register::Rbx => self.regs.rdx,
+ Register::Rsp => self.frame.rsp,
+ Register::Rbp => self.regs.rbp,
+ Register::Rdi => self.regs.rdi,
+ Register::Rsi => self.regs.rsi,
+ Register::R8 => self.regs.r8,
+ Register::R9 => self.regs.r9,
+ Register::R10 => self.regs.r10,
+ Register::R11 => self.regs.r11,
+ Register::R12 => self.regs.r12,
+ Register::R13 => self.regs.r13,
+ Register::R14 => self.regs.r14,
+ Register::R15 => self.regs.r15,
+ Register::Rip => self.frame.rip,
+ }
+ }
+
+ fn read_flags(&self) -> usize {
+ self.frame.flags
+ }
+
+ fn write_reg(&mut self, reg: Register, val: usize) {
+ match reg {
+ Register::Rax => self.regs.rax = val,
+ Register::Rdx => self.regs.rdx = val,
+ Register::Rcx => self.regs.rcx = val,
+ Register::Rbx => self.regs.rdx = val,
+ Register::Rsp => self.frame.rsp = val,
+ Register::Rbp => self.regs.rbp = val,
+ Register::Rdi => self.regs.rdi = val,
+ Register::Rsi => self.regs.rsi = val,
+ Register::R8 => self.regs.r8 = val,
+ Register::R9 => self.regs.r9 = val,
+ Register::R10 => self.regs.r10 = val,
+ Register::R11 => self.regs.r11 = val,
+ Register::R12 => self.regs.r12 = val,
+ Register::R13 => self.regs.r13 = val,
+ Register::R14 => self.regs.r14 = val,
+ Register::R15 => self.regs.r15 = val,
+ Register::Rip => self.frame.rip = val,
+ }
+ }
+
+ fn read_cpl(&self) -> usize {
+ self.frame.cs & 3
+ }
+
+ fn map_linear_addr<T: Copy + 'static>(
+ &self,
+ la: usize,
+ _write: bool,
+ _fetch: bool,
+ ) -> Result<Box<dyn InsnMachineMem<Item = T>>, InsnError> {
+ if user_mode(self) {
+ todo!();
+ } else {
+ Ok(Box::new(GuestPtr::<T>::new(VirtAddr::from(la)))) | Most of the checks are done by the hardware already. Kernel-MMIO is no issue at all, as the kernel trusts itself and all permission checks are done in hardware before the #VE/#VC exception is raised. User-MMIO is similar; we just need to make sure to set CR0.WP=1 on all CPUs.
The only exception is the check for user-space accessing kernel memory. This needs to be checked in the exception handlers via a simple address check. So I think walking the page-table for #VE/#VC instruction emulation is not necessary. |
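The simple address check mentioned here could look like the following standalone sketch; `USER_SPACE_END` is a placeholder bound for illustration, not the SVSM's actual constant.

```rust
// Hypothetical upper bound of the user-space virtual address range;
// the real SVSM constant may differ.
const USER_SPACE_END: usize = 0x0000_8000_0000_0000;

/// Returns true if a user-mode access to linear address `la` of length
/// `len` lies entirely below the kernel range (overflow rejects).
fn user_access_ok(la: usize, len: usize) -> bool {
    la.checked_add(len)
        .map_or(false, |end| end <= USER_SPACE_END)
}

fn main() {
    assert!(user_access_ok(0x1000, 0x1000));
    // Range straddling the boundary is rejected.
    assert!(!user_access_ok(USER_SPACE_END - 1, 2));
    // So is an access whose end overflows.
    assert!(!user_access_ok(usize::MAX, 1));
}
```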
svsm | github_2023 | others | 392 | coconut-svsm | 00xc | @@ -104,3 +104,34 @@ pub fn get_regular_report(buffer: &mut [u8]) -> Result<usize, SvsmReqError> {
pub fn get_extended_report(buffer: &mut [u8], certs: &mut [u8]) -> Result<usize, SvsmReqError> {
get_report(buffer, Some(certs))
}
+
+#[cfg(test)]
+mod tests {
+ extern crate alloc;
+
+ use super::*;
+ use crate::serial::{test::serial_port, Terminal};
+ use alloc::vec;
+
+ #[test]
+ #[cfg_attr(not(test_in_svsm), ignore = "Can only be run inside guest")]
+ fn test_snp_launch_measurement() {
+ let sp = serial_port();
+
+ // 0x01: return SEV-SNP pre-calculated launch measurement (48 bytes)
+ sp.put_byte(0x01); | Can we make an enum out of this value and place it somewhere like `kernel/src/testing.rs`? Let's also add some documentation there referencing the `scripts/test-in-svsm.sh` script. This way it will be easier for other developers to use this infrastructure. |
svsm | github_2023 | others | 392 | coconut-svsm | 00xc | @@ -104,3 +104,34 @@ pub fn get_regular_report(buffer: &mut [u8]) -> Result<usize, SvsmReqError> {
pub fn get_extended_report(buffer: &mut [u8], certs: &mut [u8]) -> Result<usize, SvsmReqError> {
get_report(buffer, Some(certs))
}
+
+#[cfg(test)]
+mod tests {
+ extern crate alloc;
+
+ use super::*;
+ use crate::serial::{test::serial_port, Terminal};
+ use alloc::vec;
+
+ #[test]
+ #[cfg_attr(not(test_in_svsm), ignore = "Can only be run inside guest")]
+ fn test_snp_launch_measurement() {
+ let sp = serial_port();
+
+ // 0x01: return SEV-SNP pre-calculated launch measurement (48 bytes)
+ sp.put_byte(0x01);
+
+ let mut expected_measurement = [0u8; 48];
+ for byte in &mut expected_measurement {
+ *byte = sp.get_byte();
+ }
+
+ let mut buf = vec![0; size_of::<SnpReportResponse>()];
+ let size = get_regular_report(&mut buf).unwrap();
+ assert_eq!(size, buf.len());
+
+ let response = SnpReportResponse::try_from_as_ref(&buf).unwrap();
+ assert!(response.validate().is_ok()); | This will display the error cause:
```suggestion
response.validate().unwrap();
``` |
svsm | github_2023 | others | 392 | coconut-svsm | 00xc | @@ -1,7 +1,65 @@
use log::info;
use test::ShouldPanic;
-use crate::{cpu::percpu::current_ghcb, sev::ghcb::GHCBIOSize};
+use crate::{
+ cpu::percpu::current_ghcb,
+ locking::{LockGuard, SpinLock},
+ serial::{SerialPort, Terminal},
+ sev::ghcb::GHCBIOSize,
+ svsm_console::SVSMIOPort,
+};
+
+use core::sync::atomic::{AtomicBool, Ordering};
+
+#[macro_export]
+macro_rules! assert_eq_warn {
+ ($left:expr, $right:expr) => {
+ if $left != $right {
+ log::warn!(
+ "Assertion warning failed at {}:{}:{}:\nassertion `left == rigth` failed\n left: {:?}\n right: {:?}",
+ file!(),
+ line!(),
+ column!(),
+ $left,
+ $right
+ );
+ }
+ };
+} | This evaluates `$left` and `$right` twice in the error case. I suggest using temporary variables. There is also a typo in the error message ("rigth").
```suggestion
#[macro_export]
macro_rules! assert_eq_warn {
($left:expr, $right:expr) => {
{
let left = $left;
let right = $right;
if left != right {
log::warn!(
"Assertion warning failed at {}:{}:{}:\nassertion `left == right` failed\n left: {left:?}\n right: {right:?}",
file!(),
line!(),
column!(),
);
}
}
};
}
``` |
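The double-evaluation hazard described above is easy to reproduce in isolation; this self-contained sketch counts evaluations of a side-effecting operand when a naive macro (mirroring the original `assert_eq_warn!` shape) hits the mismatch path.

```rust
use std::cell::Cell;

// A naive macro that expands its operands twice when they differ,
// like the original assert_eq_warn! did.
macro_rules! naive_check {
    ($left:expr, $right:expr) => {
        if $left != $right {
            // Second evaluation of both operands, e.g. for logging.
            let _ = ($left, $right);
        }
    };
}

/// Count how often a side-effecting operand is evaluated on mismatch.
fn evaluations_on_mismatch() -> u32 {
    let count = Cell::new(0);
    let next = || {
        count.set(count.get() + 1);
        count.get()
    };
    naive_check!(next(), 100);
    count.get()
}

fn main() {
    // The side-effecting operand runs twice, not once.
    assert_eq!(evaluations_on_mismatch(), 2);
}
```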
svsm | github_2023 | others | 432 | coconut-svsm | p4zuu | @@ -142,10 +142,13 @@ global_asm!(
/* Determine the PTE C-bit position from the CPUID page. */
+ /* Locate the table. The pointer to the CPUID page is 12 bytes into
+ * the stage2 startup structure. */
+ movl 12(%ebp), %ecx
/* Read the number of entries. */
- mov CPUID_PAGE, %eax
+ movl (%ecx), %eax
/* Create a pointer to the first entry. */
- leal CPUID_PAGE + 16, %ecx
+ leal 12(%ecx), %ecx | I guess you meant the following here:
```suggestion
leal 16(%ecx), %ecx
``` |
svsm | github_2023 | others | 432 | coconut-svsm | roy-hopkins | @@ -67,6 +68,12 @@ startup_32:
leal kernel_elf(%ebp), %edi
pushl %edi
+ /* Push the location of the secrets page. It is always at 9E000 */
+ pushl $0x9F000 | This should be `$0x9E000` in this commit. However, I see the value is replaced in a later commit. |
svsm | github_2023 | others | 432 | coconut-svsm | roy-hopkins | @@ -73,16 +73,15 @@ impl GpaMap {
options: &CmdOptions,
firmware: &Option<Box<dyn Firmware>>,
) -> Result<Self, Box<dyn Error>> {
- // 0x000000-0x00EFFF: zero-filled (must be pre-validated)
- // 0x00F000-0x00FFFF: initial stage 2 stack page
- // 0x010000-0x0nnnnn: stage 2 image
- // 0x0nnnnn-0x09DFFF: zero-filled (must be pre-validated)
- // 0x09E000-0x09EFFF: Secrets page
- // 0x09F000-0x09FFFF: CPUID page
- // 0x100000-0x1nnnnn: kernel
- // 0x1nnnnn-0x1nnnnn: filesystem
- // 0x1nnnnn-0x1nnnnn: IGVM parameter block
- // 0x1nnnnn-0x1nnnnn: general and memory map parameter pages
+ // 0x800000-0x804FFF: zero-filled (must be pre-validated)
+ // 0x805000-0x805FFF: initial stage 2 stack page
+ // 0x806000-0x806FFF: Secrets page
+ // 0x807000-0x807FFF: CPUID page
+ // 0x808000-0x8nnnnn: stage 2 image
+ // 0x8nnnnn-0x8nnnnn: kernel | Minor observation: the kernel start address is hardcoded below as 0x8a0000 so the start address can be included in this comment. |
svsm | github_2023 | c | 432 | coconut-svsm | roy-hopkins | @@ -194,15 +194,15 @@ void init_sev_meta(struct svsm_meta_data *svsm_meta)
svsm_meta->version = 1;
svsm_meta->num_desc = NUM_DESCS;
- svsm_meta->descs[0].base = 0;
- svsm_meta->descs[0].len = 632 * 1024;
+ svsm_meta->descs[0].base = 8192 * 1024;
+ svsm_meta->descs[0].len = 8832 * 1024; | I think the length is wrong here.
In fact, unless it's safe to overlap this range with the secrets and CPUID pages, I think we will need to add two SEV_DESC_TYPE_SNP_SEC_MEM sections: the first from 0x800000-0x805FFF and the second from 0x808000 to the end of the stage 2 image (or the maximum stage 2 size, which ends at 0x89FFFF). |
svsm | github_2023 | others | 432 | coconut-svsm | roy-hopkins | @@ -100,8 +106,8 @@ fn setup_env(
set_init_pgtable(PageTableRef::shared(unsafe { addr_of_mut!(pgtable) }));
- // The end of the heap is the base of the secrets page.
- setup_stage2_allocator(0x9e000);
+ // The end of the heap is the base of the kernel image.
+ setup_stage2_allocator(launch_info.kernel_elf_start as u64); | I don't think this works when stage1 is in use. In the stage1 case, the `kernel_elf_start` is still in low memory isn't it?
Although on further review I see this is replaced by a later commit. |
svsm | github_2023 | others | 432 | coconut-svsm | roy-hopkins | @@ -7,48 +7,97 @@
use crate::address::{PhysAddr, VirtAddr};
use crate::utils::immut_after_init::ImmutAfterInitCell;
-#[derive(Copy, Clone)]
+#[derive(Debug, Copy, Clone)]
#[allow(dead_code)]
-struct KernelMapping {
+pub struct FixedAddressMappingRange {
virt_start: VirtAddr,
virt_end: VirtAddr,
phys_start: PhysAddr,
}
-static KERNEL_MAPPING: ImmutAfterInitCell<KernelMapping> = ImmutAfterInitCell::uninit();
+impl FixedAddressMappingRange {
+ pub fn new(virt_start: VirtAddr, virt_end: VirtAddr, phys_start: PhysAddr) -> Self {
+ Self {
+ virt_start,
+ virt_end,
+ phys_start,
+ }
+ }
+}
+
+#[derive(Debug, Copy, Clone)]
+#[cfg_attr(not(target_os = "none"), allow(dead_code))]
+pub struct FixedAddressMapping {
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+}
-pub fn init_kernel_mapping_info(vstart: VirtAddr, vend: VirtAddr, pstart: PhysAddr) {
- let km = KernelMapping {
- virt_start: vstart,
- virt_end: vend,
- phys_start: pstart,
+static FIXED_MAPPING: ImmutAfterInitCell<FixedAddressMapping> = ImmutAfterInitCell::uninit();
+
+pub fn init_kernel_mapping_info(
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+) {
+ let mapping = FixedAddressMapping {
+ kernel_mapping,
+ heap_mapping,
};
- KERNEL_MAPPING
- .init(&km)
- .expect("Already initialized kernel mapping info");
+ FIXED_MAPPING
+ .init(&mapping)
+ .expect("Already initialized fixed mapping info");
+}
+
+#[cfg(target_os = "none")]
+fn virt_to_phys_mapping(vaddr: VirtAddr, mapping: &FixedAddressMappingRange) -> Option<PhysAddr> {
+ if (vaddr < mapping.virt_start) || (vaddr >= mapping.virt_end) {
+ None
+ } else {
+ let offset: usize = vaddr - mapping.virt_start;
+ Some(mapping.phys_start + offset)
+ }
}
#[cfg(target_os = "none")]
pub fn virt_to_phys(vaddr: VirtAddr) -> PhysAddr {
- if vaddr < KERNEL_MAPPING.virt_start || vaddr >= KERNEL_MAPPING.virt_end {
- panic!("Invalid physical address {:#018x}", vaddr);
+ if let Some(addr) = virt_to_phys_mapping(vaddr, &FIXED_MAPPING.kernel_mapping) {
+ return addr;
+ }
+ if let Some(ref mapping) = &FIXED_MAPPING.heap_mapping {
+ if let Some(addr) = virt_to_phys_mapping(vaddr, mapping) {
+ return addr;
+ }
}
- let offset: usize = vaddr - KERNEL_MAPPING.virt_start;
+ panic!("Invalid physical address {:#018x}", vaddr); | Should be "Invalid _virtual_ address" |
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -53,9 +55,17 @@ pub struct Stage2LaunchInfo {
// platform_type must be the second field.
pub platform_type: u32,
+ // cpuid_page must be the third field.
+ pub cpuid_page: u32,
+
+ // secrets_page must be the fourth field.
+ pub secrets_page: u32,
+
+ pub stage2_end: u32,
pub kernel_elf_start: u32,
pub kernel_elf_end: u32,
pub kernel_fs_start: u32,
pub kernel_fs_end: u32,
pub igvm_params: u32,
+ pub _reserved: u32, | ```suggestion
```
Structs marked as `#[repr(packed)]` don't need padding fields.
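A quick standalone check of this claim: with `#[repr(packed)]` the compiler inserts no padding between fields, so an explicit reserved field only matters if the extra bytes are actually wanted in the layout.

```rust
use core::mem::size_of;

#[repr(C, packed)]
struct Packed {
    a: u32,
    b: u64,
}

#[repr(C)]
struct Unpacked {
    a: u32,
    b: u64,
}

fn main() {
    // Packed layout: 4 + 8 bytes, no padding inserted.
    assert_eq!(size_of::<Packed>(), 12);
    // Default C layout pads `a` out to the u64's 8-byte alignment.
    assert_eq!(size_of::<Unpacked>(), 16);
}
```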
|
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -107,35 +105,11 @@ impl GpaMap {
0
};
- let stage2_image = GpaRange::new(0x10000, stage2_len as u64)?;
+ let stage2_image = GpaRange::new(0x808000, stage2_len as u64)?;
- // Calculate the firmware range
- let firmware_range = if let Some(firmware) = firmware {
- let fw_start = firmware.get_fw_info().start as u64;
- let fw_size = firmware.get_fw_info().size as u64;
- GpaRange::new(fw_start, fw_size)?
- } else {
- GpaRange::new(0, 0)?
- };
-
- let kernel_address = match options.hypervisor {
- Hypervisor::Qemu => {
- // Plan to load the kernel image at a base address of 1 MB unless it must
- // be relocated due to firmware.
- 1 << 20
- }
- Hypervisor::HyperV => {
- // Load the kernel image after the firmware, but now lower than
- // 1 MB.
- let firmware_end = firmware_range.get_end();
- let addr_1mb = 1 << 20;
- if firmware_end < addr_1mb {
- addr_1mb
- } else {
- firmware_end
- }
- }
- };
+ // The kernel image is loaded beyond the end of the stage2 image,
+ // rounded up to a 4 KB boundary.
+ let kernel_address = (stage2_image.get_end() + 0xFFF) & !0xFFF; | ```suggestion
let kernel_address = stage2_image.get_end().next_multiple_of(0x1000);
``` |
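For power-of-two sizes the suggested `next_multiple_of` matches the open-coded mask exactly (and panics on overflow instead of silently wrapping); a small standalone check:

```rust
/// Open-coded 4 KiB round-up, as in the original patch.
fn page_align_mask(addr: u64) -> u64 {
    (addr + 0xFFF) & !0xFFF
}

fn main() {
    // Equivalent to the suggested form for power-of-two alignments.
    for addr in [0u64, 1, 0xFFF, 0x1000, 0x1001, 0x0089_F123] {
        assert_eq!(page_align_mask(addr), addr.next_multiple_of(0x1000));
    }
}
```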
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -7,48 +7,97 @@
use crate::address::{PhysAddr, VirtAddr};
use crate::utils::immut_after_init::ImmutAfterInitCell;
-#[derive(Copy, Clone)]
+#[derive(Debug, Copy, Clone)]
#[allow(dead_code)]
-struct KernelMapping {
+pub struct FixedAddressMappingRange {
virt_start: VirtAddr,
virt_end: VirtAddr,
phys_start: PhysAddr,
}
-static KERNEL_MAPPING: ImmutAfterInitCell<KernelMapping> = ImmutAfterInitCell::uninit();
+impl FixedAddressMappingRange {
+ pub fn new(virt_start: VirtAddr, virt_end: VirtAddr, phys_start: PhysAddr) -> Self {
+ Self {
+ virt_start,
+ virt_end,
+ phys_start,
+ }
+ }
+}
+
+#[derive(Debug, Copy, Clone)]
+#[cfg_attr(not(target_os = "none"), allow(dead_code))]
+pub struct FixedAddressMapping {
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+}
-pub fn init_kernel_mapping_info(vstart: VirtAddr, vend: VirtAddr, pstart: PhysAddr) {
- let km = KernelMapping {
- virt_start: vstart,
- virt_end: vend,
- phys_start: pstart,
+static FIXED_MAPPING: ImmutAfterInitCell<FixedAddressMapping> = ImmutAfterInitCell::uninit();
+
+pub fn init_kernel_mapping_info(
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+) {
+ let mapping = FixedAddressMapping {
+ kernel_mapping,
+ heap_mapping,
};
- KERNEL_MAPPING
- .init(&km)
- .expect("Already initialized kernel mapping info");
+ FIXED_MAPPING
+ .init(&mapping)
+ .expect("Already initialized fixed mapping info");
+}
+
+#[cfg(target_os = "none")]
+fn virt_to_phys_mapping(vaddr: VirtAddr, mapping: &FixedAddressMappingRange) -> Option<PhysAddr> {
+ if (vaddr < mapping.virt_start) || (vaddr >= mapping.virt_end) {
+ None
+ } else {
+ let offset: usize = vaddr - mapping.virt_start;
+ Some(mapping.phys_start + offset)
+ } | ```suggestion
if (mapping.virt_start..mapping.virt_end).contains(&vaddr) {
let offset = vaddr - mapping.virt_start;
Some(mapping.phys_start + offset)
} else {
None
}
```
or even
```suggestion
(mapping.virt_start..mapping.virt_end)
.contains(&vaddr)
.then(|| {
let offset = vaddr - mapping.virt_start;
mapping.phys_start + offset
})
```
Have you considered consolidating `virt_start` and `virt_end` into a single field of type `MemoryRegion<VirtAddr>`? |
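A standalone sketch of the suggested `contains`/`then` shape, using plain `usize` in place of `VirtAddr`/`PhysAddr`:

```rust
/// Translate `vaddr` through a fixed mapping [virt_start, virt_end)
/// onto physical memory starting at `phys_start`.
fn virt_to_phys_mapping(
    vaddr: usize,
    virt_start: usize,
    virt_end: usize,
    phys_start: usize,
) -> Option<usize> {
    (virt_start..virt_end)
        .contains(&vaddr)
        .then(|| phys_start + (vaddr - virt_start))
}

fn main() {
    // An address inside the range translates by the fixed offset.
    assert_eq!(
        virt_to_phys_mapping(0x1500, 0x1000, 0x2000, 0x8000),
        Some(0x8500)
    );
    // The end of the range is exclusive.
    assert_eq!(virt_to_phys_mapping(0x2000, 0x1000, 0x2000, 0x8000), None);
}
```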
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -7,48 +7,97 @@
use crate::address::{PhysAddr, VirtAddr};
use crate::utils::immut_after_init::ImmutAfterInitCell;
-#[derive(Copy, Clone)]
+#[derive(Debug, Copy, Clone)]
#[allow(dead_code)]
-struct KernelMapping {
+pub struct FixedAddressMappingRange {
virt_start: VirtAddr,
virt_end: VirtAddr,
phys_start: PhysAddr,
}
-static KERNEL_MAPPING: ImmutAfterInitCell<KernelMapping> = ImmutAfterInitCell::uninit();
+impl FixedAddressMappingRange {
+ pub fn new(virt_start: VirtAddr, virt_end: VirtAddr, phys_start: PhysAddr) -> Self {
+ Self {
+ virt_start,
+ virt_end,
+ phys_start,
+ }
+ }
+}
+
+#[derive(Debug, Copy, Clone)]
+#[cfg_attr(not(target_os = "none"), allow(dead_code))]
+pub struct FixedAddressMapping {
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+}
-pub fn init_kernel_mapping_info(vstart: VirtAddr, vend: VirtAddr, pstart: PhysAddr) {
- let km = KernelMapping {
- virt_start: vstart,
- virt_end: vend,
- phys_start: pstart,
+static FIXED_MAPPING: ImmutAfterInitCell<FixedAddressMapping> = ImmutAfterInitCell::uninit();
+
+pub fn init_kernel_mapping_info(
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+) {
+ let mapping = FixedAddressMapping {
+ kernel_mapping,
+ heap_mapping,
};
- KERNEL_MAPPING
- .init(&km)
- .expect("Already initialized kernel mapping info");
+ FIXED_MAPPING
+ .init(&mapping)
+ .expect("Already initialized fixed mapping info");
+}
+
+#[cfg(target_os = "none")]
+fn virt_to_phys_mapping(vaddr: VirtAddr, mapping: &FixedAddressMappingRange) -> Option<PhysAddr> {
+ if (vaddr < mapping.virt_start) || (vaddr >= mapping.virt_end) {
+ None
+ } else {
+ let offset: usize = vaddr - mapping.virt_start;
+ Some(mapping.phys_start + offset)
+ }
}
#[cfg(target_os = "none")]
pub fn virt_to_phys(vaddr: VirtAddr) -> PhysAddr {
- if vaddr < KERNEL_MAPPING.virt_start || vaddr >= KERNEL_MAPPING.virt_end {
- panic!("Invalid physical address {:#018x}", vaddr);
+ if let Some(addr) = virt_to_phys_mapping(vaddr, &FIXED_MAPPING.kernel_mapping) {
+ return addr;
+ }
+ if let Some(ref mapping) = &FIXED_MAPPING.heap_mapping {
+ if let Some(addr) = virt_to_phys_mapping(vaddr, mapping) {
+ return addr;
+ }
}
- let offset: usize = vaddr - KERNEL_MAPPING.virt_start;
+ panic!("Invalid virtual address {:#018x}", vaddr); | ```suggestion
panic!("Invalid virtual address {vaddr:#018x}");
```
Here and elsewhere. |
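Inline format arguments (stable since Rust 1.58) capture plain identifiers and produce the same output as the positional form; a quick standalone check:

```rust
/// Inline capture of the identifier, as suggested in the review.
fn format_invalid(vaddr: u64) -> String {
    format!("Invalid virtual address {vaddr:#018x}")
}

fn main() {
    let vaddr: u64 = 0xFFFF_8000_0000_1000;
    // Identical to the positional form.
    assert_eq!(
        format_invalid(vaddr),
        format!("Invalid virtual address {:#018x}", vaddr)
    );
    // 18 characters wide including the "0x" prefix.
    assert_eq!(
        format_invalid(vaddr),
        "Invalid virtual address 0xffff800000001000"
    );
}
```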
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -7,48 +7,97 @@
use crate::address::{PhysAddr, VirtAddr};
use crate::utils::immut_after_init::ImmutAfterInitCell;
-#[derive(Copy, Clone)]
+#[derive(Debug, Copy, Clone)]
#[allow(dead_code)]
-struct KernelMapping {
+pub struct FixedAddressMappingRange {
virt_start: VirtAddr,
virt_end: VirtAddr,
phys_start: PhysAddr,
}
-static KERNEL_MAPPING: ImmutAfterInitCell<KernelMapping> = ImmutAfterInitCell::uninit();
+impl FixedAddressMappingRange {
+ pub fn new(virt_start: VirtAddr, virt_end: VirtAddr, phys_start: PhysAddr) -> Self {
+ Self {
+ virt_start,
+ virt_end,
+ phys_start,
+ }
+ }
+}
+
+#[derive(Debug, Copy, Clone)]
+#[cfg_attr(not(target_os = "none"), allow(dead_code))]
+pub struct FixedAddressMapping {
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+}
-pub fn init_kernel_mapping_info(vstart: VirtAddr, vend: VirtAddr, pstart: PhysAddr) {
- let km = KernelMapping {
- virt_start: vstart,
- virt_end: vend,
- phys_start: pstart,
+static FIXED_MAPPING: ImmutAfterInitCell<FixedAddressMapping> = ImmutAfterInitCell::uninit();
+
+pub fn init_kernel_mapping_info(
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+) {
+ let mapping = FixedAddressMapping {
+ kernel_mapping,
+ heap_mapping,
};
- KERNEL_MAPPING
- .init(&km)
- .expect("Already initialized kernel mapping info");
+ FIXED_MAPPING
+ .init(&mapping)
+ .expect("Already initialized fixed mapping info");
+}
+
+#[cfg(target_os = "none")]
+fn virt_to_phys_mapping(vaddr: VirtAddr, mapping: &FixedAddressMappingRange) -> Option<PhysAddr> {
+ if (vaddr < mapping.virt_start) || (vaddr >= mapping.virt_end) {
+ None
+ } else {
+ let offset: usize = vaddr - mapping.virt_start;
+ Some(mapping.phys_start + offset)
+ }
}
#[cfg(target_os = "none")]
pub fn virt_to_phys(vaddr: VirtAddr) -> PhysAddr {
- if vaddr < KERNEL_MAPPING.virt_start || vaddr >= KERNEL_MAPPING.virt_end {
- panic!("Invalid physical address {:#018x}", vaddr);
+ if let Some(addr) = virt_to_phys_mapping(vaddr, &FIXED_MAPPING.kernel_mapping) {
+ return addr;
+ }
+ if let Some(ref mapping) = &FIXED_MAPPING.heap_mapping { | Using `&` and `ref` is redundant.
```suggestion
if let Some(ref mapping) = FIXED_MAPPING.heap_mapping {
```
or
```suggestion
if let Some(mapping) = &FIXED_MAPPING.heap_mapping {
```
Here and elsewhere. |
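Both forms flagged here bind the same type thanks to match ergonomics: matching against a reference already switches to by-reference binding, so the explicit `ref` adds nothing. A standalone illustration:

```rust
/// Both patterns bind `mapping` as `&String`; the explicit `ref`
/// adds nothing once the scrutinee is already a reference.
fn first_char(opt: &Option<String>) -> Option<char> {
    if let Some(mapping) = opt {
        return mapping.chars().next();
    }
    None
}

fn main() {
    let opt = Some("mapping".to_string());

    // Match ergonomics: matching a reference binds by reference.
    if let Some(mapping) = &opt {
        let _: &String = mapping;
    }
    // Redundant form flagged by the review: same binding, extra noise.
    if let Some(ref mapping) = &opt {
        let _: &String = mapping;
    }

    assert_eq!(first_char(&opt), Some('m'));
    assert_eq!(first_char(&None), None);
}
```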
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -7,48 +7,97 @@
use crate::address::{PhysAddr, VirtAddr};
use crate::utils::immut_after_init::ImmutAfterInitCell;
-#[derive(Copy, Clone)]
+#[derive(Debug, Copy, Clone)]
#[allow(dead_code)]
-struct KernelMapping {
+pub struct FixedAddressMappingRange {
virt_start: VirtAddr,
virt_end: VirtAddr,
phys_start: PhysAddr,
}
-static KERNEL_MAPPING: ImmutAfterInitCell<KernelMapping> = ImmutAfterInitCell::uninit();
+impl FixedAddressMappingRange {
+ pub fn new(virt_start: VirtAddr, virt_end: VirtAddr, phys_start: PhysAddr) -> Self {
+ Self {
+ virt_start,
+ virt_end,
+ phys_start,
+ }
+ }
+}
+
+#[derive(Debug, Copy, Clone)]
+#[cfg_attr(not(target_os = "none"), allow(dead_code))]
+pub struct FixedAddressMapping {
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+}
-pub fn init_kernel_mapping_info(vstart: VirtAddr, vend: VirtAddr, pstart: PhysAddr) {
- let km = KernelMapping {
- virt_start: vstart,
- virt_end: vend,
- phys_start: pstart,
+static FIXED_MAPPING: ImmutAfterInitCell<FixedAddressMapping> = ImmutAfterInitCell::uninit();
+
+pub fn init_kernel_mapping_info(
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+) {
+ let mapping = FixedAddressMapping {
+ kernel_mapping,
+ heap_mapping,
};
- KERNEL_MAPPING
- .init(&km)
- .expect("Already initialized kernel mapping info");
+ FIXED_MAPPING
+ .init(&mapping)
+ .expect("Already initialized fixed mapping info");
+}
+
+#[cfg(target_os = "none")]
+fn virt_to_phys_mapping(vaddr: VirtAddr, mapping: &FixedAddressMappingRange) -> Option<PhysAddr> {
+ if (vaddr < mapping.virt_start) || (vaddr >= mapping.virt_end) {
+ None
+ } else {
+ let offset: usize = vaddr - mapping.virt_start;
+ Some(mapping.phys_start + offset)
+ }
}
#[cfg(target_os = "none")]
pub fn virt_to_phys(vaddr: VirtAddr) -> PhysAddr {
- if vaddr < KERNEL_MAPPING.virt_start || vaddr >= KERNEL_MAPPING.virt_end {
- panic!("Invalid physical address {:#018x}", vaddr);
+ if let Some(addr) = virt_to_phys_mapping(vaddr, &FIXED_MAPPING.kernel_mapping) {
+ return addr;
+ }
+ if let Some(ref mapping) = &FIXED_MAPPING.heap_mapping {
+ if let Some(addr) = virt_to_phys_mapping(vaddr, mapping) {
+ return addr;
+ }
}
- let offset: usize = vaddr - KERNEL_MAPPING.virt_start;
+ panic!("Invalid virtual address {:#018x}", vaddr);
+}
- KERNEL_MAPPING.phys_start + offset
+#[cfg(target_os = "none")]
+fn phys_to_virt_mapping(paddr: PhysAddr, mapping: &FixedAddressMappingRange) -> Option<VirtAddr> {
+ if paddr < mapping.phys_start {
+ None
+ } else {
+ let size: usize = mapping.virt_end - mapping.virt_start;
+ if paddr >= mapping.phys_start + size {
+ None
+ } else {
+ let offset: usize = paddr - mapping.phys_start; | ```suggestion
let size = mapping.virt_end - mapping.virt_start;
if paddr >= mapping.phys_start + size {
None
} else {
let offset = paddr - mapping.phys_start;
```
Here and elsewhere. |
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -10,7 +10,8 @@ OUTPUT_ARCH(i386:x86-64)
SECTIONS
{
- . = 64k;
+ /* Base address is 8 MB + 32 KB */
+ . = 8224k; | ```suggestion
. = 8m + 32k;
``` |
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -86,16 +83,36 @@ fn setup_env(
.env_setup(debug_serial_port)
.expect("Early environment setup failed");
- init_kernel_mapping_info(
- VirtAddr::null(),
- VirtAddr::from(640 * 1024usize),
- PhysAddr::null(),
+ // Validate the first 640 KB of memory so it can be used if necessary.
+ let region = MemoryRegion::<VirtAddr>::new(VirtAddr::from(0u64), 640 * 1024);
+ platform
+ .validate_page_range(region)
+ .expect("failed to validate low 640 KB");
+
+ // Supply the heap bounds as the kernel range, since the only virtual-to
+ // physical translations required will be on heap memory.
+ let kernel_mapping = FixedAddressMappingRange::new(
+ VirtAddr::from(0x808000u64),
+ VirtAddr::from(launch_info.stage2_end as u64),
+ PhysAddr::from(0x808000u64),
);
- register_cpuid_table(unsafe { &CPUID_PAGE });
+ let heap_mapping =
+ FixedAddressMappingRange::new(region.start(), region.end(), PhysAddr::from(0u64));
+ init_kernel_mapping_info(kernel_mapping, Some(heap_mapping));
+
+ let cpuid_page = unsafe {
+ let ptr = VirtAddr::from(launch_info.cpuid_page as u64).as_ptr::<SnpCpuidTable>();
+ &*ptr
+ }; | ```suggestion
let cpuid_page = unsafe { &*(launch_info.cpuid_page as *const SnpCpuidTable) };
```
There's no need to go through `VirtAddr`. |
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -130,10 +130,15 @@ pub fn invalidate_early_boot_memory(
// invalidate stage 2 memory, unless firmware is loaded into low memory.
// Also invalidate the boot data if required.
if !config.fw_in_low_memory() {
- let stage2_region = MemoryRegion::new(PhysAddr::null(), 640 * 1024);
- invalidate_boot_memory_region(platform, config, stage2_region)?;
+ let lowmem_region = MemoryRegion::new(PhysAddr::null(), 640 * 1024);
+ invalidate_boot_memory_region(platform, config, lowmem_region)?;
}
+ let stage2_base = usize::try_from(launch_info.stage2_start).unwrap();
+ let stage2_len = usize::try_from(launch_info.stage2_end).unwrap() - stage2_base;
+ let stage2_region = MemoryRegion::new(PhysAddr::new(stage2_base), stage2_len); | ```suggestion
let stage2_base = PhysAddr::from(launch_info.stage2_start);
let stage2_end = PhysAddr::from(launch_info.stage2_end);
let stage2_region = MemoryRegion::from_addresses(stage2_base, stage2_end);
``` |
svsm | github_2023 | c | 432 | coconut-svsm | Freax13 | @@ -194,17 +194,21 @@ void init_sev_meta(struct svsm_meta_data *svsm_meta)
svsm_meta->version = 1;
svsm_meta->num_desc = NUM_DESCS;
- svsm_meta->descs[0].base = 0;
- svsm_meta->descs[0].len = 632 * 1024;
- svsm_meta->descs[0].type = SEV_DESC_TYPE_SNP_SEC_MEM;
+ svsm_meta->descs[3].base = 0x800000;
+ svsm_meta->descs[3].len = 0x6000;
+ svsm_meta->descs[3].type = SEV_DESC_TYPE_SNP_SEC_MEM; | Shouldn't we use index `0` here? |
svsm | github_2023 | others | 432 | coconut-svsm | Freax13 | @@ -7,48 +7,97 @@
use crate::address::{PhysAddr, VirtAddr};
use crate::utils::immut_after_init::ImmutAfterInitCell;
-#[derive(Copy, Clone)]
+#[derive(Debug, Copy, Clone)]
#[allow(dead_code)]
-struct KernelMapping {
+pub struct FixedAddressMappingRange {
virt_start: VirtAddr,
virt_end: VirtAddr,
phys_start: PhysAddr,
}
-static KERNEL_MAPPING: ImmutAfterInitCell<KernelMapping> = ImmutAfterInitCell::uninit();
+impl FixedAddressMappingRange {
+ pub fn new(virt_start: VirtAddr, virt_end: VirtAddr, phys_start: PhysAddr) -> Self {
+ Self {
+ virt_start,
+ virt_end,
+ phys_start,
+ }
+ }
+}
+
+#[derive(Debug, Copy, Clone)]
+#[cfg_attr(not(target_os = "none"), allow(dead_code))]
+pub struct FixedAddressMapping {
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+}
-pub fn init_kernel_mapping_info(vstart: VirtAddr, vend: VirtAddr, pstart: PhysAddr) {
- let km = KernelMapping {
- virt_start: vstart,
- virt_end: vend,
- phys_start: pstart,
+static FIXED_MAPPING: ImmutAfterInitCell<FixedAddressMapping> = ImmutAfterInitCell::uninit();
+
+pub fn init_kernel_mapping_info(
+ kernel_mapping: FixedAddressMappingRange,
+ heap_mapping: Option<FixedAddressMappingRange>,
+) {
+ let mapping = FixedAddressMapping {
+ kernel_mapping,
+ heap_mapping,
};
- KERNEL_MAPPING
- .init(&km)
- .expect("Already initialized kernel mapping info");
+ FIXED_MAPPING
+ .init(&mapping)
+ .expect("Already initialized fixed mapping info");
+}
+
+#[cfg(target_os = "none")]
+fn virt_to_phys_mapping(vaddr: VirtAddr, mapping: &FixedAddressMappingRange) -> Option<PhysAddr> { | Have considered making this a function `FixedAddressMappingRange`? |
svsm | github_2023 | c | 432 | coconut-svsm | roy-hopkins | @@ -194,17 +194,21 @@ void init_sev_meta(struct svsm_meta_data *svsm_meta)
svsm_meta->version = 1;
svsm_meta->num_desc = NUM_DESCS;
- svsm_meta->descs[0].base = 0;
- svsm_meta->descs[0].len = 632 * 1024;
+ svsm_meta->descs[0].base = 0x800000;
+ svsm_meta->descs[0].len = 0x6000;
svsm_meta->descs[0].type = SEV_DESC_TYPE_SNP_SEC_MEM;
- svsm_meta->descs[1].base = 632 * 1024;
- svsm_meta->descs[1].len = 4096;
+ svsm_meta->descs[1].base = 0x806000;
+ svsm_meta->descs[1].len = 0x1000;
svsm_meta->descs[1].type = SEV_DESC_TYPE_SNP_SECRETS;
- svsm_meta->descs[2].base = 636 * 1024;
- svsm_meta->descs[2].len = 4096;
+ svsm_meta->descs[2].base = 0x807000;
+ svsm_meta->descs[2].len = 0x1000;
svsm_meta->descs[2].type = SEV_DESC_TYPE_CPUID;
+
+ svsm_meta->descs[3].base = 0x808000;
+ svsm_meta->descs[3].len = 0x8A0000 - 0x080000; | This should be:
```suggestion
svsm_meta->descs[3].len = 0x8A0000 - 0x808000;
``` |
svsm | github_2023 | others | 445 | coconut-svsm | joergroedel | @@ -0,0 +1,36 @@
+#!/bin/env sh
+# SPDX-License-Identifier: MIT OR Apache-2.0
+#
+# Author: Tom Dohrmann <erbse.13@gmx.de>
+# A script to find functions with excessive stack sizes.
+# Requires yq-go and obj2yaml (bundled with llvm) to be installed.
+
+# Default to displaying functions with a stack size >= 0x400 (1024) bytes.
+MIN_SIZE="${MIN_SIZE:-0x400}"
+# Default to displaying the 10 top functions with the biggest stack sizes.
+# Setting this too high will make yq run for a long time.
+MAX_RESULTS="${MAX_RESULTS:-10}"
+
+# Forcefully enable a nightly toolchain.
+export RUSTUP_TOOLCHAIN=nightly
+
+# Append -Z emit-stack-sizes to the set of rustflags. The RUSTFLAGS environment variable overrides the flags in the config.
+RUSTFLAGS=$(yq '.build.rustflags | join(" ")' .cargo/config.toml)
+export RUSTFLAGS="$RUSTFLAGS -Z emit-stack-sizes"
+
+# Build the SVSM kernel.
+make bin/svsm-kernel.elf
+
+# Determine the path to the built binary.
+if [[ -z "${RELEASE}" ]]; then
+ TARGET_PATH=debug
+else
+ TARGET_PATH=release
+fi
+SVSM_PATH=target/x86_64-unknown-none/${TARGET_PATH}/svsm
+
+# Dump the binary into a yaml file.
+obj2yaml target/x86_64-unknown-none/${TARGET_PATH}/svsm > bin/svsm-kernel.yaml | Unfortunately the `obj2yaml` tool is not available on openSUSE :(
But some searching found that `llvm-readelf -C --stack-sizes` will also print the stack sizes. Can the script use this tool instead? |
svsm | github_2023 | others | 445 | coconut-svsm | joergroedel | @@ -0,0 +1,27 @@
+#!/bin/env sh
+# SPDX-License-Identifier: MIT OR Apache-2.0
+#
+# Author: Tom Dohrmann <erbse.13@gmx.de>
+# A script to find functions with excessive stack sizes.
+# Requires llvm-readelf (bundled with llvm) to be installed.
+
+# Forcefully enable a nightly toolchain.
+export RUSTUP_TOOLCHAIN=nightly
+
+# Append -Z emit-stack-sizes to the set of rustflags. The RUSTFLAGS environment variable overrides the flags in the config.
+RUSTFLAGS=$(yq '.build.rustflags | join(" ")' .cargo/config.toml)
+export RUSTFLAGS="$RUSTFLAGS -Z emit-stack-sizes"
+
+# Build the SVSM kernel.
+make bin/svsm-kernel.elf
+
+# Determine the path to the built binary.
+if [[ -z "${RELEASE}" ]]; then
+ TARGET_PATH=debug
+else
+ TARGET_PATH=release
+fi
+SVSM_PATH=target/x86_64-unknown-none/${TARGET_PATH}/svsm
+
+# Dump the binary into a yaml file.
+llvm-readelf -C --stack-sizes target/x86_64-unknown-none/${TARGET_PATH}/svsm | ```suggestion
# Print stack frame sizes, sorted from small to large
llvm-readelf -C --stack-sizes target/x86_64-unknown-none/${TARGET_PATH}/svsm | sort -n
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,195 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use core::arch::asm;
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn irqs_disable() {
+ asm!("cli", options(att_syntax));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs
+#[inline(always)]
+pub unsafe fn irqs_enable() {
+ asm!("sti", options(att_syntax));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq %rax",
+ out("rax") s,
+ options(att_syntax));
+ s
+ };
+
+ (state & EFLAGS_IF) == EFLAGS_IF
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `false` when IRQs are enabled, `true` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_disabled() result - meant to be irq_disable()?"]
+pub fn irqs_disabled() -> bool {
+ !irqs_enabled()
+}
+
+/// Disable IRQs and return previous IRQ state
+///
+/// # Returns
+///
+/// Previous IRQ state
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs at the end of the critical
+/// section.
+#[inline(always)]
+pub unsafe fn irqs_save() -> u64 {
+ let state: u64;
+
+ asm!("pushfq",
+ "cli",
+ "popq %rax",
+ out("rax") state,
+ options(att_syntax));
+ state
+}
+
+/// Restore previous IRQ state
+///
+/// # Arguments:
+///
+/// `state` - IRQ state as returned from [`irqs_save`].
+///
+/// # Safety
+///
+/// Callers need to make sure to pass the correct previous state into
+/// the function to not accidentially re-enable IRQs.
+#[inline(always)]
+pub unsafe fn irqs_restore(state: u64) {
+ asm!("pushq %rax",
+ "popfq",
+ in("rax") state,
+ options(att_syntax));
+} | This potentially modifies flags other than IF. I don't know if we currently use any of the other flags (such as AC for SMAP), but it's probably safer to not touch the other flags if we don't need to. |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,195 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use core::arch::asm;
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn irqs_disable() {
+ asm!("cli", options(att_syntax));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs
+#[inline(always)]
+pub unsafe fn irqs_enable() {
+ asm!("sti", options(att_syntax));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq %rax",
+ out("rax") s,
+ options(att_syntax));
+ s
+ };
+
+ (state & EFLAGS_IF) == EFLAGS_IF
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `false` when IRQs are enabled, `true` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_disabled() result - meant to be irq_disable()?"]
+pub fn irqs_disabled() -> bool {
+ !irqs_enabled()
+}
+
+/// Disable IRQs and return previous IRQ state
+///
+/// # Returns
+///
+/// Previous IRQ state
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs at the end of the critical
+/// section.
+#[inline(always)]
+pub unsafe fn irqs_save() -> u64 {
+ let state: u64;
+
+ asm!("pushfq",
+ "cli",
+ "popq %rax",
+ out("rax") state,
+ options(att_syntax));
+ state
+}
+
+/// Restore previous IRQ state
+///
+/// # Arguments:
+///
+/// `state` - IRQ state as returned from [`irqs_save`].
+///
+/// # Safety
+///
+/// Callers need to make sure to pass the correct previous state into
+/// the function to not accidentially re-enable IRQs.
+#[inline(always)]
+pub unsafe fn irqs_restore(state: u64) {
+ asm!("pushq %rax",
+ "popfq",
+ in("rax") state,
+ options(att_syntax));
+}
+
+/// And IRQ guard which saves the current IRQ state and disabled interrupts
+/// upon creation. When the guard goes out of scope the previous IRQ state is
+/// restored.
+///
+/// The struct implements the `Default` and `Drop` traits for easy use.
+#[derive(Debug)]
+#[must_use = "if unused previous IRQ state will be immediatly restored"]
+pub struct IrqGuard { | This API can cause some problems if multiple IrqGuards with different lifetimes are used.
```rust
let guard1 = IrqGuard::default(); // This disables interrupts.
let guard2 = IrqGuard::default(); // This does nothing because interrupts are already disabled.
drop(guard1); // This re-enables interrupts.
// At this point guard2 is still alive, but interrupts have been enabled again.
drop(guard2); // This disables interrupts again.
// Now interrupts are disabled even though no guards are left.
```
One solution for this problem I've seen before is to have a cpu-local counter to count the number of active IRQ guards. With this approach, IRQs are only disabled when the counter is increased from 0 to 1 and IRQs are only enabled when the counter is decreased from 1 to 0.
`IrqSafeLocking` suffers from the same issues. |
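The counter-based fix described in this review can be sketched roughly as follows. This is a hypothetical stand-alone model, not the SVSM implementation: a plain atomic flag stands in for the real `cli`/`sti` instructions (which are privileged and cannot run in a test), and the per-CPU counter is a single global here. It also ignores the "IRQs were already disabled before the first guard" subtlety:

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

// Simulated IRQ flag standing in for the real CLI/STI instructions.
static IRQS_ENABLED: AtomicBool = AtomicBool::new(true);
// Per-CPU nesting counter (a single global in this sketch).
static GUARD_COUNT: AtomicUsize = AtomicUsize::new(0);

struct IrqGuard(());

impl IrqGuard {
    fn new() -> Self {
        // Only the 0 -> 1 transition actually disables IRQs.
        if GUARD_COUNT.fetch_add(1, Ordering::Relaxed) == 0 {
            IRQS_ENABLED.store(false, Ordering::Relaxed);
        }
        IrqGuard(())
    }
}

impl Drop for IrqGuard {
    fn drop(&mut self) {
        // Only the 1 -> 0 transition re-enables IRQs.
        if GUARD_COUNT.fetch_sub(1, Ordering::Relaxed) == 1 {
            IRQS_ENABLED.store(true, Ordering::Relaxed);
        }
    }
}

fn main() {
    // Replaying the problematic sequence from the review:
    let guard1 = IrqGuard::new();
    let guard2 = IrqGuard::new();
    drop(guard1);
    // guard2 is still alive, so IRQs stay disabled — unlike the naive guard.
    assert!(!IRQS_ENABLED.load(Ordering::Relaxed));
    drop(guard2);
    assert!(IRQS_ENABLED.load(Ordering::Relaxed));
}
```

With the counter, dropping the guards in any order leaves interrupts disabled until the last guard goes away.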
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,71 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+use crate::cpu::{irqs_restore, irqs_save};
+
+/// Abstracts IRQ state handling when taking and releasing locks. There are two
+/// implemenations:
+///
+/// * [IrqUnsafeLocking] implements the methods as no-ops and does not change
+/// any IRQ state.
+/// * [IrqSafeLocking] actually disables and enables IRQs in the methods,
+/// making a lock IRQ-safe by using this structure.
+pub trait IrqLocking {
+ /// Associated helper function to create an instance of the implementing
+ /// struct. This is used by lock implementation structs.
+ ///
+ /// # Returns
+ ///
+ /// New, empty instance of the implementing IRQ state handling struct
+ fn new() -> Self;
+
+ /// Called when a locking implementation needs to potentially disable IRQs.
+ /// The method needs to store the previous IRQ state before modifying it.
+ fn irqs_save(&mut self) {}
+
+ /// Called when the locking implementation needs to restore the IRQ state
+ /// before the lock was taken.
+ fn irqs_restore(&mut self) {}
+}
+
+/// Implements the IRQ state handling methods as no-ops. For use it IRQ-unsafe
+/// locks.
+#[derive(Debug, Default)]
+pub struct IrqUnsafeLocking;
+
+impl IrqLocking for IrqUnsafeLocking {
+ fn new() -> Self {
+ IrqUnsafeLocking {}
+ }
+}
+
+/// Properly implements the IRQ state handling methods. For use it IRQ-safe
+/// locks.
+#[derive(Debug, Default)]
+pub struct IrqSafeLocking {
+ state: Option<u64>,
+}
+
+impl IrqLocking for IrqSafeLocking {
+ fn new() -> Self {
+ IrqSafeLocking { state: None }
+ }
+
+ fn irqs_save(&mut self) {
+ assert!(self.state.is_none());
+ self.state = Some(unsafe { irqs_save() });
+ }
+
+ fn irqs_restore(&mut self) {
+ self.state.map(|s| unsafe { irqs_restore(s) }).take();
+ self.state = None; | ```suggestion
self.state.map(|s| unsafe { irqs_restore(s) });
self.state = None;
```
or
```suggestion
if let Some(state) = self.state {
unsafe {
            irqs_restore(state);
}
}
```
|
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -4,59 +4,70 @@
//
// Author: Joerg Roedel <jroedel@suse.de>
+use super::common::*;
use core::cell::UnsafeCell;
+use core::marker::PhantomData;
use core::ops::{Deref, DerefMut};
use core::sync::atomic::{AtomicU64, Ordering};
/// A guard that provides read access to the data protected by [`RWLock`]
#[derive(Debug)]
#[must_use = "if unused the RWLock will immediately unlock"]
-pub struct ReadLockGuard<'a, T> {
+pub struct RawReadLockGuard<'a, T, I: IrqLocking> {
/// Reference to the associated `AtomicU64` in the [`RWLock`]
rwlock: &'a AtomicU64,
/// Reference to the protected data
data: &'a T,
+ /// IRQ state before and after critical sectionV | ```suggestion
/// IRQ state before and after critical section
``` |
svsm | github_2023 | others | 441 | coconut-svsm | 00xc | @@ -0,0 +1,195 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use core::arch::asm;
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn irqs_disable() {
+ asm!("cli", options(att_syntax)); | ```suggestion
asm!("cli", options(att_syntax, preserves_flags, nostack));
``` |
svsm | github_2023 | others | 441 | coconut-svsm | 00xc | @@ -0,0 +1,195 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use core::arch::asm;
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn irqs_disable() {
+ asm!("cli", options(att_syntax));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs
+#[inline(always)]
+pub unsafe fn irqs_enable() {
+ asm!("sti", options(att_syntax)); | ```suggestion
asm!("sti", options(att_syntax, preserves_flags, nostack));
``` |
svsm | github_2023 | others | 441 | coconut-svsm | 00xc | @@ -0,0 +1,195 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use core::arch::asm;
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn irqs_disable() {
+ asm!("cli", options(att_syntax));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs
+#[inline(always)]
+pub unsafe fn irqs_enable() {
+ asm!("sti", options(att_syntax));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq %rax",
+ out("rax") s,
+ options(att_syntax)); | We do not need to force using RAX:
```suggestion
asm!("pushfq",
"popq {}",
out(reg) s,
options(att_syntax, nomem, preserves_flags));
``` |
svsm | github_2023 | others | 441 | coconut-svsm | 00xc | @@ -0,0 +1,195 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use core::arch::asm;
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn irqs_disable() {
+ asm!("cli", options(att_syntax));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs | This probably needs to be more specific |
svsm | github_2023 | others | 441 | coconut-svsm | 00xc | @@ -0,0 +1,71 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+use crate::cpu::{irqs_restore, irqs_save};
+
+/// Abstracts IRQ state handling when taking and releasing locks. There are two
+/// implemenations:
+///
+/// * [IrqUnsafeLocking] implements the methods as no-ops and does not change
+/// any IRQ state.
+/// * [IrqSafeLocking] actually disables and enables IRQs in the methods,
+/// making a lock IRQ-safe by using this structure.
+pub trait IrqLocking {
+ /// Associated helper function to create an instance of the implementing
+ /// struct. This is used by lock implementation structs.
+ ///
+ /// # Returns
+ ///
+ /// New, empty instance of the implementing IRQ state handling struct
+ fn new() -> Self;
+
+ /// Called when a locking implementation needs to potentially disable IRQs.
+ /// The method needs to store the previous IRQ state before modifying it.
+ fn irqs_save(&mut self) {}
+
+ /// Called when the locking implementation needs to restore the IRQ state
+ /// before the lock was taken.
+ fn irqs_restore(&mut self) {}
+}
+
+/// Implements the IRQ state handling methods as no-ops. For use it IRQ-unsafe
+/// locks.
+#[derive(Debug, Default)]
+pub struct IrqUnsafeLocking;
+
+impl IrqLocking for IrqUnsafeLocking {
+ fn new() -> Self {
+ IrqUnsafeLocking {} | ```suggestion
Self {}
``` |
svsm | github_2023 | others | 441 | coconut-svsm | 00xc | @@ -149,9 +165,10 @@ impl<T> RWLock<T> {
/// let rwlock = RWLock::new(data);
/// ```
pub const fn new(data: T) -> Self {
- RWLock {
+ RawRWLock { | ```suggestion
Self {
``` |
svsm | github_2023 | others | 441 | coconut-svsm | 00xc | @@ -110,10 +126,11 @@ impl<T> SpinLock<T> {
/// let spin_lock = SpinLock::new(data);
/// ```
pub const fn new(data: T) -> Self {
- SpinLock {
+ RawSpinLock { | ```suggestion
Self {
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use crate::cpu::{irqs_disable, irqs_enable};
+use core::arch::asm;
+use core::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn raw_irqs_disable() {
+ asm!("cli", options(att_syntax, preserves_flags, nostack)); | ```suggestion
asm!("cli", options(att_syntax, preserves_flags, nomem));
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use crate::cpu::{irqs_disable, irqs_enable};
+use core::arch::asm;
+use core::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn raw_irqs_disable() {
+ asm!("cli", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs. e.g. that no data
+/// structures or locks which are accessed in IRQ handlers are used after IRQs
+/// have been enabled.
+#[inline(always)]
+pub unsafe fn raw_irqs_enable() {
+ asm!("sti", options(att_syntax, preserves_flags, nostack)); | ```suggestion
asm!("sti", options(att_syntax, preserves_flags, nomem));
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use crate::cpu::{irqs_disable, irqs_enable};
+use core::arch::asm;
+use core::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn raw_irqs_disable() {
+ asm!("cli", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs. e.g. that no data
+/// structures or locks which are accessed in IRQ handlers are used after IRQs
+/// have been enabled.
+#[inline(always)]
+pub unsafe fn raw_irqs_enable() {
+ asm!("sti", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq {}",
+ out(reg) s,
+ options(att_syntax, nomem, preserves_flags)); | ```suggestion
options(att_syntax, preserves_flags));
```
It's not entirely clear whether [`nomem` implies `nostack`](https://github.com/rust-lang/reference/issues/1350). |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use crate::cpu::{irqs_disable, irqs_enable};
+use core::arch::asm;
+use core::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn raw_irqs_disable() {
+ asm!("cli", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs. e.g. that no data
+/// structures or locks which are accessed in IRQ handlers are used after IRQs
+/// have been enabled.
+#[inline(always)]
+pub unsafe fn raw_irqs_enable() {
+ asm!("sti", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq {}",
+ out(reg) s,
+ options(att_syntax, nomem, preserves_flags));
+ s
+ }; | ```suggestion
let state: u64;
unsafe {
asm!("pushfq",
"popq {}",
out(reg) state,
options(att_syntax, nomem, preserves_flags));
}
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use crate::cpu::{irqs_disable, irqs_enable};
+use core::arch::asm;
+use core::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn raw_irqs_disable() {
+ asm!("cli", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs. e.g. that no data
+/// structures or locks which are accessed in IRQ handlers are used after IRQs
+/// have been enabled.
+#[inline(always)]
+pub unsafe fn raw_irqs_enable() {
+ asm!("sti", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq {}",
+ out(reg) s,
+ options(att_syntax, nomem, preserves_flags));
+ s
+ };
+
+ (state & EFLAGS_IF) == EFLAGS_IF
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `false` when IRQs are enabled, `true` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_disabled() result - meant to be irq_disable()?"]
+pub fn irqs_disabled() -> bool {
+ !irqs_enabled()
+}
+
+/// This structure keeps track of PerCpu IRQ states. It tracks the original IRQ
+/// state and how deep IRQ-disable calls have been nested. The use of atomics
+/// is necessary for interior mutability and to make state modifications safe
+/// wrt. to IRQs.
+///
+/// The original state needs to be stored to not accidentially enable IRQs in
+/// contexts which have IRQs disabled by other means, e.g. in an exception or
+/// NMI/HV context.
+#[derive(Debug, Default)]
+pub struct IrqState {
+ /// IRQ state when count was `0`
+ state: AtomicBool,
+ /// Depth of IRQ-disabled nesting
+ count: AtomicIsize,
+}
+
+impl IrqState {
+ /// Create a new instance of `IrqState`
+ pub fn new() -> Self {
+ Self {
+ state: AtomicBool::new(false),
+ count: AtomicIsize::new(0),
+ }
+ }
+
+ /// Increase IRQ-disable nesting level by 1. The method will disable IRQs.
+ ///
+ /// # Safety
+ ///
+ /// The caller needs to make sure to match the number of `disable` calls
+ /// with the number of `enable` calls.
+ #[inline(always)]
+ pub unsafe fn disable(&self) {
+ let state = irqs_enabled();
+
+ raw_irqs_disable();
+ let val = self.count.fetch_add(1, Ordering::Relaxed);
+
+ assert!(val >= 0);
+
+ if val == 0 {
+ self.state.store(state, Ordering::Relaxed)
+ }
+ }
+
+ /// Decrease IRQ-disable nesting level by 1. The method will restore the
+ /// original IRQ state when the nesting level reaches 0.
+ ///
+ /// # Safety
+ ///
+ /// The caller needs to make sure to match the number of `disable` calls
+ /// with the number of `enable` calls.
+ #[inline(always)]
+ pub unsafe fn enable(&self) {
+ debug_assert!(irqs_disabled());
+
+ self.count.fetch_sub(1, Ordering::Relaxed);
+ let val = self.count.load(Ordering::Relaxed);
+
+ assert!(val >= 0);
+
+ if val == 0 { | ```suggestion
let val = self.count.fetch_sub(1, Ordering::Relaxed);
assert!(val > 0);
if val == 1 {
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use crate::cpu::{irqs_disable, irqs_enable};
+use core::arch::asm;
+use core::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn raw_irqs_disable() {
+ asm!("cli", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs. e.g. that no data
+/// structures or locks which are accessed in IRQ handlers are used after IRQs
+/// have been enabled.
+#[inline(always)]
+pub unsafe fn raw_irqs_enable() {
+ asm!("sti", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq {}",
+ out(reg) s,
+ options(att_syntax, nomem, preserves_flags));
+ s
+ };
+
+ (state & EFLAGS_IF) == EFLAGS_IF
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `false` when IRQs are enabled, `true` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_disabled() result - meant to be irq_disable()?"]
+pub fn irqs_disabled() -> bool {
+ !irqs_enabled()
+}
+
+/// This structure keeps track of PerCpu IRQ states. It tracks the original IRQ
+/// state and how deep IRQ-disable calls have been nested. The use of atomics
+/// is necessary for interior mutability and to make state modifications safe
+/// wrt. to IRQs.
+///
+/// The original state needs to be stored to not accidentially enable IRQs in
+/// contexts which have IRQs disabled by other means, e.g. in an exception or
+/// NMI/HV context.
+#[derive(Debug, Default)]
+pub struct IrqState {
+ /// IRQ state when count was `0`
+ state: AtomicBool,
+ /// Depth of IRQ-disabled nesting
+ count: AtomicIsize,
+}
+
+impl IrqState {
+ /// Create a new instance of `IrqState`
+ pub fn new() -> Self {
+ Self {
+ state: AtomicBool::new(false),
+ count: AtomicIsize::new(0),
+ }
+ }
+
+ /// Increase IRQ-disable nesting level by 1. The method will disable IRQs.
+ ///
+ /// # Safety
+ ///
+ /// The caller needs to make sure to match the number of `disable` calls
+ /// with the number of `enable` calls.
+ #[inline(always)]
+ pub unsafe fn disable(&self) {
+ let state = irqs_enabled();
+
+ raw_irqs_disable();
+ let val = self.count.fetch_add(1, Ordering::Relaxed);
+
+ assert!(val >= 0);
+
+ if val == 0 {
+ self.state.store(state, Ordering::Relaxed)
+ }
+ }
+
+ /// Decrease IRQ-disable nesting level by 1. The method will restore the
+ /// original IRQ state when the nesting level reaches 0.
+ ///
+ /// # Safety
+ ///
+ /// The caller needs to make sure to match the number of `disable` calls
+ /// with the number of `enable` calls.
+ #[inline(always)]
+ pub unsafe fn enable(&self) {
+ debug_assert!(irqs_disabled());
+
+ self.count.fetch_sub(1, Ordering::Relaxed);
+ let val = self.count.load(Ordering::Relaxed);
+
+ assert!(val >= 0);
+
+ if val == 0 {
+ let state = self.state.load(Ordering::Relaxed);
+ if state {
+ raw_irqs_enable();
+ }
+ }
+ }
+
+ /// Returns the current nesting count
+ ///
+ /// # Returns
+ ///
+ /// Levels of IRQ-disable nesting currently active
+ pub fn count(&self) -> isize {
+ self.count.load(Ordering::Relaxed)
+ }
+}
+
+impl Drop for IrqState {
+ /// This struct should never be dropped. Add a debug check in case it is
+ /// dropped anyway.
+ fn drop(&mut self) {
+ let count = self.count.load(Ordering::Relaxed);
+ assert!(count == 0); | ```suggestion
assert_eq!(count, 0);
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -0,0 +1,242 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 SUSE LLC
+//
+// Author: Joerg Roedel <jroedel@suse.de>
+
+use crate::cpu::{irqs_disable, irqs_enable};
+use core::arch::asm;
+use core::sync::atomic::{AtomicBool, AtomicIsize, Ordering};
+
+/// Interrupt flag in RFLAGS register
+const EFLAGS_IF: u64 = 1 << 9;
+
+/// Unconditionally disable IRQs
+///
+/// # Safety
+///
+/// Callers need to take care of re-enabling IRQs.
+#[inline(always)]
+pub unsafe fn raw_irqs_disable() {
+ asm!("cli", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Unconditionally enable IRQs
+///
+/// # Safety
+///
+/// Callers need to make sure it is safe to enable IRQs. e.g. that no data
+/// structures or locks which are accessed in IRQ handlers are used after IRQs
+/// have been enabled.
+#[inline(always)]
+pub unsafe fn raw_irqs_enable() {
+ asm!("sti", options(att_syntax, preserves_flags, nostack));
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `true` when IRQs are enabled, `false` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_enabled() result - meant to be irq_enable()?"]
+pub fn irqs_enabled() -> bool {
+ // SAFETY: The inline assembly just reads the processors RFLAGS register
+ // and does not change any state.
+ let state: u64 = unsafe {
+ let s: u64;
+ asm!("pushfq",
+ "popq {}",
+ out(reg) s,
+ options(att_syntax, nomem, preserves_flags));
+ s
+ };
+
+ (state & EFLAGS_IF) == EFLAGS_IF
+}
+
+/// Query IRQ state on current CPU
+///
+/// # Returns
+///
+/// `false` when IRQs are enabled, `true` otherwise
+#[inline(always)]
+#[must_use = "Unused irqs_disabled() result - meant to be irq_disable()?"]
+pub fn irqs_disabled() -> bool {
+ !irqs_enabled()
+}
+
+/// This structure keeps track of PerCpu IRQ states. It tracks the original IRQ
+/// state and how deep IRQ-disable calls have been nested. The use of atomics
+/// is necessary for interior mutability and to make state modifications safe
+/// wrt. to IRQs.
+///
+/// The original state needs to be stored to not accidentially enable IRQs in
+/// contexts which have IRQs disabled by other means, e.g. in an exception or
+/// NMI/HV context.
+#[derive(Debug, Default)]
+pub struct IrqState {
+ /// IRQ state when count was `0`
+ state: AtomicBool,
+ /// Depth of IRQ-disabled nesting
+ count: AtomicIsize,
+}
+
+impl IrqState {
+ /// Create a new instance of `IrqState`
+ pub fn new() -> Self {
+ Self {
+ state: AtomicBool::new(false),
+ count: AtomicIsize::new(0),
+ }
+ }
+
+ /// Increase IRQ-disable nesting level by 1. The method will disable IRQs.
+ ///
+ /// # Safety
+ ///
+ /// The caller needs to make sure to match the number of `disable` calls
+ /// with the number of `enable` calls.
+ #[inline(always)]
+ pub unsafe fn disable(&self) {
+ let state = irqs_enabled();
+
+ raw_irqs_disable();
+ let val = self.count.fetch_add(1, Ordering::Relaxed);
+
+ assert!(val >= 0);
+
+ if val == 0 {
+ self.state.store(state, Ordering::Relaxed)
+ }
+ }
+
+ /// Decrease IRQ-disable nesting level by 1. The method will restore the
+ /// original IRQ state when the nesting level reaches 0.
+ ///
+ /// # Safety
+ ///
+ /// The caller needs to make sure to match the number of `disable` calls
+ /// with the number of `enable` calls.
+ #[inline(always)]
+ pub unsafe fn enable(&self) {
+ debug_assert!(irqs_disabled());
+
+ self.count.fetch_sub(1, Ordering::Relaxed);
+ let val = self.count.load(Ordering::Relaxed);
+
+ assert!(val >= 0);
+
+ if val == 0 {
+ let state = self.state.load(Ordering::Relaxed);
+ if state {
+ raw_irqs_enable();
+ }
+ }
+ }
+
+ /// Returns the current nesting count
+ ///
+ /// # Returns
+ ///
+ /// Levels of IRQ-disable nesting currently active
+ pub fn count(&self) -> isize {
+ self.count.load(Ordering::Relaxed)
+ }
+}
+
+impl Drop for IrqState {
+ /// This struct should never be dropped. Add a debug check in case it is
+ /// dropped anyway.
+ fn drop(&mut self) {
+ let count = self.count.load(Ordering::Relaxed);
+ assert!(count == 0);
+ }
+}
+
+/// And IRQ guard which saves the current IRQ state and disabled interrupts
+/// upon creation. When the guard goes out of scope the previous IRQ state is
+/// restored.
+///
+/// The struct implements the `Default` and `Drop` traits for easy use.
+#[derive(Debug)]
+#[must_use = "if unused previous IRQ state will be immediatly restored"]
+pub struct IrqGuard; | ```suggestion
pub struct IrqGuard(());
```
Adding a private unit field makes it impossible to create an instance without calling `IrqGuard::new()` (or `IrqGuard::default()`). |
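The private-unit-field pattern works because a tuple-struct constructor is only usable where all of its fields are visible. A minimal illustration with a hypothetical `Guard` type (not the actual `IrqGuard`):

```rust
mod guard {
    // The `(())` field is private, so code outside this module cannot
    // write `Guard(())` and must go through `Guard::new()`.
    #[derive(Debug)]
    pub struct Guard(());

    impl Guard {
        pub fn new() -> Self {
            // Side effects (e.g. disabling IRQs) would go here.
            Guard(())
        }
    }
}

fn main() {
    // let g = guard::Guard(()); // error[E0603]: tuple struct constructor is private
    let _g = guard::Guard::new(); // OK: the only way to construct one
}
```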
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -4,22 +4,27 @@
//
// Author: Joerg Roedel <jroedel@suse.de>
+use super::common::*;
use core::cell::UnsafeCell;
+use core::marker::PhantomData;
use core::ops::{Deref, DerefMut};
use core::sync::atomic::{AtomicU64, Ordering};
/// A guard that provides read access to the data protected by [`RWLock`]
#[derive(Debug)]
#[must_use = "if unused the RWLock will immediately unlock"]
-pub struct ReadLockGuard<'a, T> {
+pub struct RawReadLockGuard<'a, T, I: IrqLocking> {
/// Reference to the associated `AtomicU64` in the [`RWLock`]
rwlock: &'a AtomicU64,
/// Reference to the protected data
data: &'a T,
+ /// IRQ state before and after critical section
+ #[allow(dead_code)]
+ irq_state: I, | ```suggestion
_irq_state: I,
``` |
svsm | github_2023 | others | 441 | coconut-svsm | Freax13 | @@ -4,22 +4,27 @@
//
// Author: Joerg Roedel <jroedel@suse.de>
+use super::common::*;
use core::cell::UnsafeCell;
+use core::marker::PhantomData;
use core::ops::{Deref, DerefMut};
use core::sync::atomic::{AtomicU64, Ordering};
/// A guard that provides read access to the data protected by [`RWLock`]
#[derive(Debug)]
#[must_use = "if unused the RWLock will immediately unlock"]
-pub struct ReadLockGuard<'a, T> {
+pub struct RawReadLockGuard<'a, T, I: IrqLocking> { | ```suggestion
pub struct RawReadLockGuard<'a, T, I> {
```
Generic bounds in type declarations are generally discouraged unless strictly needed (e.g. the [HashMap](https://doc.rust-lang.org/stable/std/collections/struct.HashMap.html) type doesn't have a `Hash` bound on the key type `K`, only the [methods](https://doc.rust-lang.org/stable/std/collections/struct.HashMap.html#method.get) that need to hash the key have that bound). Removing this bound here allows removing it from the other `impl` blocks. |
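The `HashMap`-style convention mentioned above looks like this in miniature. The types and trait here are hypothetical stand-ins for the SVSM lock types: the struct takes `I` with no bound, and only the `impl` block whose methods actually need the behaviour adds `I: IrqLocking`:

```rust
use core::marker::PhantomData;

trait IrqLocking {
    fn acquire() -> Self;
}

struct NoOpLocking;

impl IrqLocking for NoOpLocking {
    fn acquire() -> Self {
        NoOpLocking
    }
}

// No `I: IrqLocking` bound on the type declaration itself...
struct Guarded<T, I> {
    value: T,
    marker: PhantomData<I>,
}

// ...constructing the wrapper needs no bound at all...
impl<T, I> Guarded<T, I> {
    fn new(value: T) -> Self {
        Guarded { value, marker: PhantomData }
    }
}

// ...the bound appears only on the impl block that uses it.
impl<T, I: IrqLocking> Guarded<T, I> {
    fn lock(&self) -> (&T, I) {
        (&self.value, I::acquire())
    }
}

fn main() {
    let g: Guarded<u32, NoOpLocking> = Guarded::new(42);
    let (v, _state) = g.lock();
    assert_eq!(*v, 42);
}
```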