| repo_name | dataset | lang | pr_id | owner | reviewer | diff_hunk | code_review_comment |
|---|---|---|---|---|---|---|---|
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -35,119 +28,64 @@ impl Document {
pub(crate) fn new(path: PgLspPath, content: String, version: i32) -> Self {
let mut id_generator = IdGenerator::new();
- let statements: Vec<StatementPosition> = pg_statement_splitter::split(&content)
+ let ranges: Vec<StatementPos> = pg_statement_splitter::split(&content)
.ranges
.iter()
.map(|r| (id_generator.next(), *r))
.collect();
Self {
path,
- statements,
+ positions: ranges,
content,
version,
id_generator,
}
}
- pub fn debug_statements(&self) {
- for (id, range) in self.statements.iter() {
- tracing::info!(
- "Document::debug_statements: statement: id: {}, range: {:?}, text: {:?}",
- id,
- range,
- &self.content[*range]
- );
- }
- }
-
- #[allow(dead_code)]
- pub fn get_statements(&self) -> &[StatementPosition] {
- &self.statements
- }
-
- pub fn statement_refs(&self) -> Vec<StatementRef> {
- self.statements
- .iter()
- .map(|inner_ref| self.statement_ref(inner_ref))
- .collect()
- }
-
- pub fn statement_refs_with_ranges(&self) -> Vec<(StatementRef, TextRange)> {
- self.statements
- .iter()
- .map(|inner_ref| (self.statement_ref(inner_ref), inner_ref.1))
- .collect()
- }
-
- #[allow(dead_code)]
- /// Returns the statement ref at the given offset
- pub fn statement_ref_at_offset(&self, offset: &TextSize) -> Option<StatementRef> {
- self.statements.iter().find_map(|r| {
- if r.1.contains(*offset) {
- Some(self.statement_ref(r))
- } else {
- None
- }
+    pub fn iter_statements(&self) -> impl Iterator<Item = Statement> + '_ { | nit: I believe it's idiomatic that an `iter` iterates over `&T` (except if `T` is `Copy`?)... so idiomatically, this would be an `Iterator<Item = &Statement>` :)
no need to change it though! |
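The convention the reviewer mentions can be sketched with simplified stand-in types (the real `Document` and `Statement` are not shown in full here, so everything below is illustrative): an `iter_*` method conventionally borrows its elements rather than cloning them.

```rust
// Simplified stand-ins for the real types under review.
struct Statement {
    id: usize,
    text: String,
}

struct Document {
    statements: Vec<Statement>,
}

impl Document {
    // Idiomatic: yield `&Statement` so callers can iterate without clones.
    fn iter_statements(&self) -> impl Iterator<Item = &Statement> + '_ {
        self.statements.iter()
    }
}

fn main() {
    let doc = Document {
        statements: vec![
            Statement { id: 0, text: "select 1;".into() },
            Statement { id: 1, text: "select 2;".into() },
        ],
    };
    // Collect only the ids; the statement texts are never copied.
    let ids: Vec<usize> = doc.iter_statements().map(|s| s.id).collect();
    assert_eq!(ids, vec![0, 1]);
}
```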
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -3,45 +3,54 @@ use text_size::{TextLen, TextRange, TextSize};
use crate::workspace::{ChangeFileParams, ChangeParams};
-use super::{document::Statement, Document, StatementRef};
+use super::{Document, Statement};
#[derive(Debug, PartialEq, Eq)]
pub enum StatementChange {
- Added(Statement),
- Deleted(StatementRef),
- Modified(ChangedStatement),
+ Added(AddedStatement),
+ Deleted(Statement),
+ Modified(ModifiedStatement),
}
#[derive(Debug, PartialEq, Eq)]
-pub struct ChangedStatement {
- pub old: Statement,
- pub new_ref: StatementRef,
-
- pub range: TextRange,
+pub struct AddedStatement {
+ pub stmt: Statement,
pub text: String,
}
-impl ChangedStatement {
- pub fn new_statement(&self) -> Statement {
- Statement {
- ref_: self.new_ref.clone(),
- text: apply_text_change(&self.old.text, Some(self.range), &self.text),
- }
- }
+#[derive(Debug, PartialEq, Eq)]
+#[derive(Debug, PartialEq, Eq)]
+pub struct ModifiedStatement { | aaah, were you inspired by the slatedb approach? Nice!! |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?}", change);
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
- path: self.path.clone(),
- })
- })
- .collect::<Vec<StatementChange>>(),
- );
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- self.content = change.text.clone();
+ self.content = text.to_string();
- for (id, range) in pg_statement_splitter::split(&self.content)
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
.ranges
.iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ .map(|range| { | Nit: Since the `split()` return value is dropped anyways, would it make the code simpler if we use `into_iter()` instead of `.iter()` ? |
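The `into_iter()` suggestion can be illustrated with a toy splitter (a hypothetical stand-in for `pg_statement_splitter::split`): when the return value is dropped anyway, consuming the vector yields owned items and avoids the `*range` dereference that `.iter()` would force.

```rust
// Hypothetical stand-in for the splitter's return type.
struct SplitResult {
    ranges: Vec<(usize, usize)>, // stand-in for `Vec<TextRange>`
}

// Trivial stand-in splitter: one "statement" per line.
fn split(content: &str) -> SplitResult {
    let mut ranges = Vec::new();
    let mut start = 0;
    for line in content.split_inclusive('\n') {
        ranges.push((start, start + line.len()));
        start += line.len();
    }
    SplitResult { ranges }
}

fn main() {
    let content = "select 1;\nselect 2;";
    // `into_iter()` consumes the temporary's vector: owned tuples, no `*r`.
    let owned: Vec<(usize, usize)> = split(content).ranges.into_iter().collect();
    assert_eq!(owned.len(), 2);
    assert_eq!(owned[0], (0, 10));
}
```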
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?}", change);
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
- path: self.path.clone(),
- })
- })
- .collect::<Vec<StatementChange>>(),
- );
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- self.content = change.text.clone();
+ self.content = text.to_string();
- for (id, range) in pg_statement_splitter::split(&self.content)
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
.ranges
.iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ .map(|range| {
+ let id = self.id_generator.next();
+ let text = self.content[*range].to_string();
+ self.positions.push((id, *range));
- return changed;
- }
+ StatementChange::Added(AddedStatement {
+ stmt: Statement {
+ path: self.path.clone(),
+ id,
+ },
+ text,
+ })
+ }),
+ );
- // no matter where the change is, we can never be sure if its a modification or a deletion/addition
- // e.g. if a statement is "select 1", and the change is "select 2; select 2", its an addition even though its in the middle of the statement.
- // hence we only have three "real" cases:
- // 1. the change touches no statement at all (addition)
- // 2. the change touches exactly one statement AND splitting the statement results in just
- // one statement (modification)
- // 3. the change touches more than one statement (addition/deletion)
+ changes
+ }
- let new_content = change.apply_to_text(&self.content);
+ fn insert_statement(&mut self, range: TextRange) -> usize {
+ let pos = self
+ .positions
+ .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
+ .unwrap_err();
- let mut affected = vec![];
+ let new_id = self.id_generator.next();
+ self.positions.insert(pos, (new_id, range));
- for (idx, (id, r)) in self.statements.iter_mut().enumerate() {
- if r.intersect(change.range.unwrap()).is_some() {
- affected.push((idx, (*id, *r)));
- } else if r.start() > change.range.unwrap().end() {
- if change.is_addition() {
- *r += change.diff_size();
- } else if change.is_deletion() {
- *r -= change.diff_size();
- }
+ new_id
+ }
+
+ /// Returns all relevant details about the change and its effects on the current state of the document.
+ /// - The affected range is the full range of the change, including the range of all statements that intersect with the change
+ /// - All indices of affected statement positions
+ /// - The index of the first statement position before the change, if any
+ /// - The index of the first statement position after the change, if any
+ /// - the full affected range includng the prev and next statement | great comment!! should we add it to the `Affected` struct as well? |
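The reviewer's suggestion — moving that explanation onto the `Affected` struct itself — might look like the sketch below. Field types are stand-ins (`(usize, usize)` for `TextRange`); the doc text is lifted from the function comment in the hunk.

```rust
/// Describes a change's effect on the current state of the document.
#[derive(Debug)]
struct Affected {
    /// The full range of the change, including the range of all
    /// statements that intersect with it.
    affected_range: (usize, usize),
    /// Indices of all affected statement positions.
    affected_indices: Vec<usize>,
    /// Index of the first statement position before the change, if any.
    prev_index: Option<usize>,
    /// Index of the first statement position after the change, if any.
    next_index: Option<usize>,
    /// `affected_range` extended to include the prev and next statements.
    full_affected_range: (usize, usize),
}

fn main() {
    let a = Affected {
        affected_range: (4, 10),
        affected_indices: vec![0],
        prev_index: None,
        next_index: Some(1),
        full_affected_range: (4, 19),
    };
    assert_eq!(a.affected_indices.len(), 1);
    assert_eq!(a.next_index, Some(1));
}
```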
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -26,15 +26,15 @@ impl PgLspEnv {
fn new() -> Self {
Self {
pglsp_log_path: PgLspEnvVariable::new(
- "BIOME_LOG_PATH",
+ "PGLSP_LOG_PATH", |
 |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -327,21 +379,38 @@ impl ChangeParams {
#[cfg(test)]
mod tests {
- use text_size::{TextRange, TextSize};
+ use super::*;
+ use text_size::TextRange;
- use crate::workspace::{server::document::Statement, ChangeFileParams, ChangeParams};
+ use crate::workspace::{ChangeFileParams, ChangeParams};
- use super::{super::StatementRef, Document, StatementChange};
use pg_fs::PgLspPath;
+ impl Document {
+ pub fn get_text(&self, idx: usize) -> String {
+ self.content[self.positions[idx].1.start().into()..self.positions[idx].1.end().into()]
+ .to_string()
+ }
+ } | ah, cool pattern to implement this only for tests 🤓 |
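The pattern the reviewer is pointing at — an `impl` block inside the `#[cfg(test)]` module — extends a type with helpers that exist only in test builds. A minimal sketch with simplified stand-in fields (`(id, (start, end))` tuples in place of the real `StatementPos`):

```rust
pub struct Document {
    pub content: String,
    pub positions: Vec<(usize, (usize, usize))>, // (id, (start, end))
}

#[cfg(test)]
mod tests {
    use super::*;

    // Compiled only under `cargo test`; production builds never see `get_text`.
    impl Document {
        pub fn get_text(&self, idx: usize) -> String {
            let (_, (start, end)) = self.positions[idx];
            self.content[start..end].to_string()
        }
    }

    #[test]
    fn reads_statement_text() {
        let doc = Document {
            content: "select 1;".to_string(),
            positions: vec![(0, (0, 9))],
        };
        assert_eq!(doc.get_text(0), "select 1;");
    }
}

fn main() {
    let doc = Document {
        content: "select 1;".to_string(),
        positions: vec![(0, (0, 9))],
    };
    // In a non-test build only the public fields are available.
    assert_eq!(doc.positions.len(), 1);
}
```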
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -461,22 +615,22 @@ mod tests {
assert_eq!(
changed[0],
- StatementChange::Deleted(StatementRef {
+ StatementChange::Deleted(Statement {
path: path.clone(),
- id: 0
+ id: 1 | sanity check: it's expected that statements are now in order `id: 1` then `id: 0`? |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?}", change);
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
- path: self.path.clone(),
- })
- })
- .collect::<Vec<StatementChange>>(),
- );
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- self.content = change.text.clone();
+ self.content = text.to_string();
- for (id, range) in pg_statement_splitter::split(&self.content)
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
.ranges
.iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ .map(|range| {
+ let id = self.id_generator.next();
+ let text = self.content[*range].to_string();
+ self.positions.push((id, *range));
- return changed;
- }
+ StatementChange::Added(AddedStatement {
+ stmt: Statement {
+ path: self.path.clone(),
+ id,
+ },
+ text,
+ })
+ }),
+ );
- // no matter where the change is, we can never be sure if its a modification or a deletion/addition
- // e.g. if a statement is "select 1", and the change is "select 2; select 2", its an addition even though its in the middle of the statement.
- // hence we only have three "real" cases:
- // 1. the change touches no statement at all (addition)
- // 2. the change touches exactly one statement AND splitting the statement results in just
- // one statement (modification)
- // 3. the change touches more than one statement (addition/deletion)
+ changes
+ }
- let new_content = change.apply_to_text(&self.content);
+ fn insert_statement(&mut self, range: TextRange) -> usize {
+ let pos = self
+ .positions
+ .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
+ .unwrap_err();
- let mut affected = vec![];
+ let new_id = self.id_generator.next();
+ self.positions.insert(pos, (new_id, range));
- for (idx, (id, r)) in self.statements.iter_mut().enumerate() {
- if r.intersect(change.range.unwrap()).is_some() {
- affected.push((idx, (*id, *r)));
- } else if r.start() > change.range.unwrap().end() {
- if change.is_addition() {
- *r += change.diff_size();
- } else if change.is_deletion() {
- *r -= change.diff_size();
- }
+ new_id
+ }
+
+ /// Returns all relevant details about the change and its effects on the current state of the document.
+ /// - The affected range is the full range of the change, including the range of all statements that intersect with the change
+ /// - All indices of affected statement positions
+ /// - The index of the first statement position before the change, if any
+ /// - The index of the first statement position after the change, if any
+ /// - the full affected range includng the prev and next statement
+ fn get_affected(
+ &self,
+ change_range: TextRange,
+ content_size: TextSize,
+ diff_size: TextSize,
+ is_addition: bool,
+ ) -> Affected {
+ let mut start = change_range.start();
+ let mut end = change_range.end().min(content_size);
+
+ let mut affected_indices = Vec::new();
+ let mut prev_index = None;
+ let mut next_index = None;
+
+ for (index, (_, pos_range)) in self.positions.iter().enumerate() { | Food for thought: Would it be safer to gather the affected IDs and then maybe compare with a previous set?
This obviously works, but after the change is applied, the indices of the statements will be different – which could lead to bugs further down the road.
Just a feeling! |
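The "food for thought" above — tracking affected statements by their stable IDs instead of by vector indices, which shift after inserts and removals — could be sketched like this (hypothetical simplified types, `Range<usize>` standing in for `TextRange`):

```rust
use std::collections::HashSet;
use std::ops::Range;

// Collect the stable IDs of all statements whose range intersects the change.
fn affected_ids(
    positions: &[(usize, Range<usize>)],
    change: Range<usize>,
) -> HashSet<usize> {
    positions
        .iter()
        .filter(|(_, r)| r.start < change.end && change.start < r.end)
        .map(|(id, _)| *id)
        .collect()
}

fn main() {
    let positions = vec![(0, 0..9), (1, 10..19), (2, 20..29)];
    // A change spanning offsets 5..12 touches statements 0 and 1 by ID,
    // regardless of where those statements later sit in the vector.
    let ids = affected_ids(&positions, 5..12);
    assert_eq!(ids, HashSet::from([0, 1]));
}
```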
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?}", change);
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
- path: self.path.clone(),
- })
- })
- .collect::<Vec<StatementChange>>(),
- );
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- self.content = change.text.clone();
+ self.content = text.to_string();
- for (id, range) in pg_statement_splitter::split(&self.content)
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
.ranges
.iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ .map(|range| {
+ let id = self.id_generator.next();
+ let text = self.content[*range].to_string();
+ self.positions.push((id, *range));
- return changed;
- }
+ StatementChange::Added(AddedStatement {
+ stmt: Statement {
+ path: self.path.clone(),
+ id,
+ },
+ text,
+ })
+ }),
+ );
- // no matter where the change is, we can never be sure if its a modification or a deletion/addition
- // e.g. if a statement is "select 1", and the change is "select 2; select 2", its an addition even though its in the middle of the statement.
- // hence we only have three "real" cases:
- // 1. the change touches no statement at all (addition)
- // 2. the change touches exactly one statement AND splitting the statement results in just
- // one statement (modification)
- // 3. the change touches more than one statement (addition/deletion)
+ changes
+ }
- let new_content = change.apply_to_text(&self.content);
+ fn insert_statement(&mut self, range: TextRange) -> usize {
+ let pos = self
+ .positions
+ .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
+ .unwrap_err();
- let mut affected = vec![];
+ let new_id = self.id_generator.next();
+ self.positions.insert(pos, (new_id, range));
- for (idx, (id, r)) in self.statements.iter_mut().enumerate() {
- if r.intersect(change.range.unwrap()).is_some() {
- affected.push((idx, (*id, *r)));
- } else if r.start() > change.range.unwrap().end() {
- if change.is_addition() {
- *r += change.diff_size();
- } else if change.is_deletion() {
- *r -= change.diff_size();
- }
+ new_id
+ }
+
+ /// Returns all relevant details about the change and its effects on the current state of the document.
+ /// - The affected range is the full range of the change, including the range of all statements that intersect with the change
+ /// - All indices of affected statement positions
+ /// - The index of the first statement position before the change, if any
+ /// - The index of the first statement position after the change, if any
+ /// - the full affected range includng the prev and next statement
+ fn get_affected(
+ &self,
+ change_range: TextRange,
+ content_size: TextSize,
+ diff_size: TextSize,
+ is_addition: bool,
+ ) -> Affected {
+ let mut start = change_range.start();
+ let mut end = change_range.end().min(content_size);
+
+ let mut affected_indices = Vec::new();
+ let mut prev_index = None;
+ let mut next_index = None;
+
+ for (index, (_, pos_range)) in self.positions.iter().enumerate() {
+ if pos_range.intersect(change_range).is_some() {
+ affected_indices.push(index);
+ start = start.min(pos_range.start());
+ end = end.max(pos_range.end());
+ } else if pos_range.end() <= change_range.start() {
+ prev_index = Some(index);
+ } else if pos_range.start() >= change_range.end() && next_index.is_none() {
+ next_index = Some(index);
+ break; | nit: we're implicitly assuming that the `StatementPosition`s are sorted ascending by range.
If we change the order that the `StatementSplitter` returns statements, this will break.
Should we make the ordering more explicit? |
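One way to make the ascending-by-range assumption explicit, along the lines the reviewer suggests, is a cheap debug-build check on the invariant (sketch only; `(id, Range<usize>)` tuples stand in for the real `StatementPos`):

```rust
use std::ops::Range;

type StatementPos = (usize, Range<usize>);

// True when positions are ascending by range start.
fn is_sorted_by_start(positions: &[StatementPos]) -> bool {
    positions.windows(2).all(|w| w[0].1.start <= w[1].1.start)
}

fn main() {
    // IDs need not be ordered; only the ranges must be.
    let positions: Vec<StatementPos> = vec![(1, 0..9), (0, 10..19)];
    // Free in release builds, loud in debug builds if the splitter's
    // ordering ever changes.
    debug_assert!(
        is_sorted_by_start(&positions),
        "statement positions must be sorted ascending by range start"
    );
    assert!(is_sorted_by_start(&positions));
    assert!(!is_sorted_by_start(&[(0, 10..19), (1, 0..9)]));
}
```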
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?}", change);
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
- path: self.path.clone(),
- })
- })
- .collect::<Vec<StatementChange>>(),
- );
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- self.content = change.text.clone();
+ self.content = text.to_string();
- for (id, range) in pg_statement_splitter::split(&self.content)
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
.ranges
.iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ .map(|range| {
+ let id = self.id_generator.next();
+ let text = self.content[*range].to_string();
+ self.positions.push((id, *range));
- return changed;
- }
+ StatementChange::Added(AddedStatement {
+ stmt: Statement {
+ path: self.path.clone(),
+ id,
+ },
+ text,
+ })
+ }),
+ );
- // no matter where the change is, we can never be sure if its a modification or a deletion/addition
- // e.g. if a statement is "select 1", and the change is "select 2; select 2", its an addition even though its in the middle of the statement.
- // hence we only have three "real" cases:
- // 1. the change touches no statement at all (addition)
- // 2. the change touches exactly one statement AND splitting the statement results in just
- // one statement (modification)
- // 3. the change touches more than one statement (addition/deletion)
+ changes
+ }
- let new_content = change.apply_to_text(&self.content);
+ fn insert_statement(&mut self, range: TextRange) -> usize {
+ let pos = self
+ .positions
+ .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
+ .unwrap_err();
- let mut affected = vec![];
+ let new_id = self.id_generator.next();
+ self.positions.insert(pos, (new_id, range));
- for (idx, (id, r)) in self.statements.iter_mut().enumerate() {
- if r.intersect(change.range.unwrap()).is_some() {
- affected.push((idx, (*id, *r)));
- } else if r.start() > change.range.unwrap().end() {
- if change.is_addition() {
- *r += change.diff_size();
- } else if change.is_deletion() {
- *r -= change.diff_size();
- }
+ new_id
+ }
+
+ /// Returns all relevant details about the change and its effects on the current state of the document.
+ /// - The affected range is the full range of the change, including the range of all statements that intersect with the change
+ /// - All indices of affected statement positions
+ /// - The index of the first statement position before the change, if any
+ /// - The index of the first statement position after the change, if any
+ /// - the full affected range includng the prev and next statement
+ fn get_affected(
+ &self,
+ change_range: TextRange,
+ content_size: TextSize,
+ diff_size: TextSize,
+ is_addition: bool,
+ ) -> Affected {
+ let mut start = change_range.start();
+ let mut end = change_range.end().min(content_size);
+
+ let mut affected_indices = Vec::new();
+ let mut prev_index = None;
+ let mut next_index = None;
+
+ for (index, (_, pos_range)) in self.positions.iter().enumerate() {
+ if pos_range.intersect(change_range).is_some() {
+ affected_indices.push(index);
+ start = start.min(pos_range.start());
+ end = end.max(pos_range.end());
+ } else if pos_range.end() <= change_range.start() {
+ prev_index = Some(index);
+ } else if pos_range.start() >= change_range.end() && next_index.is_none() {
+ next_index = Some(index);
+ break;
}
}
- // special case: if no statement is affected, the affected range is between the prev and
- // the next statement
- if affected.is_empty() {
- let start = self
- .statements
- .iter()
- .rev()
- .find(|(_, r)| r.end() <= change.range.unwrap().start())
- .map(|(_, r)| r.end())
- .unwrap_or(TextSize::new(0));
- let end = self
- .statements
- .iter()
- .find(|(_, r)| r.start() >= change.range.unwrap().end())
- .map(|(_, r)| r.start())
- .unwrap_or_else(|| self.content.text_len());
+ let start_incl = prev_index
+ .map(|i| self.positions[i].1.start())
+ .unwrap_or(start);
+ let end_incl = next_index
+ .map(|i| self.positions[i].1.end())
+ .unwrap_or_else(|| end);
- let affected = new_content
- .as_str()
- .get(usize::from(start)..usize::from(end))
- .unwrap();
+ let end_incl = if is_addition {
+ end_incl.add(diff_size)
+ } else {
+ end_incl.sub(diff_size)
+ };
- // add new statements
- for range in pg_statement_splitter::split(affected).ranges {
- let doc_range = range + start;
- match self
- .statements
- .binary_search_by(|(_, r)| r.start().cmp(&doc_range.start()))
- {
- Ok(_) => {}
- Err(pos) => {
- let new_id = self.id_generator.next();
- self.statements.insert(pos, (new_id, doc_range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id: new_id,
- },
- text: new_content[doc_range].to_string(),
- }));
- }
- }
- }
+ let end = if is_addition {
+ end.add(diff_size)
} else {
- // get full affected range
- let mut start = change.range.unwrap().start();
- let mut end = change.range.unwrap().end();
+ end.sub(diff_size)
+ };
- if end > new_content.text_len() {
- end = new_content.text_len();
- }
+ Affected {
+ affected_range: TextRange::new(start, end.min(content_size)),
+ affected_indices,
+ prev_index,
+ next_index,
+ full_affected_range: TextRange::new(start_incl, end_incl.min(content_size)),
+ }
+ }
- for (_, (_, r)) in &affected {
- // adjust the range to the new content
- let adjusted_start = if r.start() >= change.range.unwrap().end() {
- r.start() + change.diff_size()
- } else {
- r.start()
- };
- let adjusted_end = if r.end() >= change.range.unwrap().end() {
- if change.is_addition() {
- r.end() + change.diff_size()
- } else {
- r.end() - change.diff_size()
- }
+ fn move_ranges(&mut self, offset: TextSize, diff_size: TextSize, is_addition: bool) {
+ self.positions
+ .iter_mut()
+            .skip_while(|(_, r)| offset > r.start()) | We check `offset > r.start()` because if the offset is *within* a statement's range, that's the modified statement, so we don't have to move it over, right? |
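The behavior the reviewer is confirming can be reproduced in a standalone sketch (`Range<usize>` as a stand-in for `TextRange`): ranges starting at or after the edit offset are shifted by the size difference, while a range that contains the offset is the modified statement itself and stays put.

```rust
use std::ops::Range;

fn move_ranges(
    positions: &mut [(usize, Range<usize>)],
    offset: usize,
    diff: usize,
    is_addition: bool,
) {
    // `offset > r.start` skips every statement the edit lands inside of or
    // after; only statements that begin at/after the offset are shifted.
    for (_, range) in positions.iter_mut().skip_while(|(_, r)| offset > r.start) {
        if is_addition {
            range.start += diff;
            range.end += diff;
        } else {
            range.start -= diff;
            range.end -= diff;
        }
    }
}

fn main() {
    // Two statements: "select 1;" (0..9) and "select 2;" (10..19).
    let mut positions = vec![(0, 0..9), (1, 10..19)];
    // Insert 3 characters at offset 9: only the second range moves.
    move_ranges(&mut positions, 9, 3, true);
    assert_eq!(positions[0].1, 0..9);
    assert_eq!(positions[1].1, 13..22);
}
```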
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?}", change);
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
- path: self.path.clone(),
- })
- })
- .collect::<Vec<StatementChange>>(),
- );
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- self.content = change.text.clone();
+ self.content = text.to_string();
- for (id, range) in pg_statement_splitter::split(&self.content)
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
.ranges
.iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ .map(|range| {
+ let id = self.id_generator.next();
+ let text = self.content[*range].to_string();
+ self.positions.push((id, *range));
- return changed;
- }
+ StatementChange::Added(AddedStatement {
+ stmt: Statement {
+ path: self.path.clone(),
+ id,
+ },
+ text,
+ })
+ }),
+ );
- // no matter where the change is, we can never be sure if its a modification or a deletion/addition
- // e.g. if a statement is "select 1", and the change is "select 2; select 2", its an addition even though its in the middle of the statement.
- // hence we only have three "real" cases:
- // 1. the change touches no statement at all (addition)
- // 2. the change touches exactly one statement AND splitting the statement results in just
- // one statement (modification)
- // 3. the change touches more than one statement (addition/deletion)
+ changes
+ }
- let new_content = change.apply_to_text(&self.content);
+ fn insert_statement(&mut self, range: TextRange) -> usize {
+ let pos = self
+ .positions
+ .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
+ .unwrap_err();
- let mut affected = vec![];
+ let new_id = self.id_generator.next();
+ self.positions.insert(pos, (new_id, range));
- for (idx, (id, r)) in self.statements.iter_mut().enumerate() {
- if r.intersect(change.range.unwrap()).is_some() {
- affected.push((idx, (*id, *r)));
- } else if r.start() > change.range.unwrap().end() {
- if change.is_addition() {
- *r += change.diff_size();
- } else if change.is_deletion() {
- *r -= change.diff_size();
- }
+ new_id
+ }
+
+ /// Returns all relevant details about the change and its effects on the current state of the document.
+ /// - The affected range is the full range of the change, including the range of all statements that intersect with the change
+ /// - All indices of affected statement positions
+ /// - The index of the first statement position before the change, if any
+ /// - The index of the first statement position after the change, if any
+ /// - the full affected range includng the prev and next statement
+ fn get_affected(
+ &self,
+ change_range: TextRange,
+ content_size: TextSize,
+ diff_size: TextSize,
+ is_addition: bool,
+ ) -> Affected {
+ let mut start = change_range.start();
+ let mut end = change_range.end().min(content_size);
+
+ let mut affected_indices = Vec::new();
+ let mut prev_index = None;
+ let mut next_index = None;
+
+ for (index, (_, pos_range)) in self.positions.iter().enumerate() {
+ if pos_range.intersect(change_range).is_some() {
+ affected_indices.push(index);
+ start = start.min(pos_range.start());
+ end = end.max(pos_range.end());
+ } else if pos_range.end() <= change_range.start() {
+ prev_index = Some(index);
+ } else if pos_range.start() >= change_range.end() && next_index.is_none() {
+ next_index = Some(index);
+ break;
}
}
- // special case: if no statement is affected, the affected range is between the prev and
- // the next statement
- if affected.is_empty() {
- let start = self
- .statements
- .iter()
- .rev()
- .find(|(_, r)| r.end() <= change.range.unwrap().start())
- .map(|(_, r)| r.end())
- .unwrap_or(TextSize::new(0));
- let end = self
- .statements
- .iter()
- .find(|(_, r)| r.start() >= change.range.unwrap().end())
- .map(|(_, r)| r.start())
- .unwrap_or_else(|| self.content.text_len());
+ let start_incl = prev_index
+ .map(|i| self.positions[i].1.start())
+ .unwrap_or(start);
+ let end_incl = next_index
+ .map(|i| self.positions[i].1.end())
+ .unwrap_or_else(|| end);
- let affected = new_content
- .as_str()
- .get(usize::from(start)..usize::from(end))
- .unwrap();
+ let end_incl = if is_addition {
+ end_incl.add(diff_size)
+ } else {
+ end_incl.sub(diff_size)
+ };
- // add new statements
- for range in pg_statement_splitter::split(affected).ranges {
- let doc_range = range + start;
- match self
- .statements
- .binary_search_by(|(_, r)| r.start().cmp(&doc_range.start()))
- {
- Ok(_) => {}
- Err(pos) => {
- let new_id = self.id_generator.next();
- self.statements.insert(pos, (new_id, doc_range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id: new_id,
- },
- text: new_content[doc_range].to_string(),
- }));
- }
- }
- }
+ let end = if is_addition {
+ end.add(diff_size)
} else {
- // get full affected range
- let mut start = change.range.unwrap().start();
- let mut end = change.range.unwrap().end();
+ end.sub(diff_size)
+ };
- if end > new_content.text_len() {
- end = new_content.text_len();
- }
+ Affected {
+ affected_range: TextRange::new(start, end.min(content_size)),
+ affected_indices,
+ prev_index,
+ next_index,
+ full_affected_range: TextRange::new(start_incl, end_incl.min(content_size)),
+ }
+ }
- for (_, (_, r)) in &affected {
- // adjust the range to the new content
- let adjusted_start = if r.start() >= change.range.unwrap().end() {
- r.start() + change.diff_size()
- } else {
- r.start()
- };
- let adjusted_end = if r.end() >= change.range.unwrap().end() {
- if change.is_addition() {
- r.end() + change.diff_size()
- } else {
- r.end() - change.diff_size()
- }
+ fn move_ranges(&mut self, offset: TextSize, diff_size: TextSize, is_addition: bool) {
+ self.positions
+ .iter_mut()
+ .skip_while(|(_, r)| offset > r.start())
+ .for_each(|(_, range)| {
+ let new_range = if is_addition {
+ range.add(diff_size)
} else {
- r.end()
+ range.sub(diff_size)
};
- if adjusted_start < start {
- start = adjusted_start;
- }
- if adjusted_end > end && adjusted_end <= new_content.text_len() {
- end = adjusted_end;
- }
- }
+ *range = new_range;
+ });
+ }
+
+ /// Applies a single change to the document and returns the affected statements
+ fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
+ tracing::info!("applying change: {:?}", change);
+
+ // if range is none, we have a full change
+ if change.range.is_none() {
+ return self.apply_full_change(&change.text);
+ }
+
+ // i spent a relatively large amount of time thinking about how to handle range changes
+ // properly. there are quite a few edge cases to consider. I eventually skipped most of
+ // them, because the complexity is not worth the return for now. we might want to revisit
+ // this later though. | 
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +69,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- tracing::info!("applying change: {:?}", change);
+ self.content = text.to_string();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
+ .ranges
+ .into_iter()
+ .map(|range| {
+ let id = self.id_generator.next();
+ let text = self.content[range].to_string();
+ self.positions.push((id, range));
+
+ StatementChange::Added(AddedStatement {
+ stmt: Statement {
path: self.path.clone(),
- })
+ id,
+ },
+ text,
})
- .collect::<Vec<StatementChange>>(),
- );
-
- self.content = change.text.clone();
-
- for (id, range) in pg_statement_splitter::split(&self.content)
- .ranges
- .iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ }),
+ );
- return changed;
- }
+ changes
+ }
- // no matter where the change is, we can never be sure if its a modification or a deletion/addition
- // e.g. if a statement is "select 1", and the change is "select 2; select 2", its an addition even though its in the middle of the statement.
- // hence we only have three "real" cases:
- // 1. the change touches no statement at all (addition)
- // 2. the change touches exactly one statement AND splitting the statement results in just
- // one statement (modification)
- // 3. the change touches more than one statement (addition/deletion)
+ fn insert_statement(&mut self, range: TextRange) -> usize {
+ let pos = self
+ .positions
+ .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
+ .unwrap_err();
- let new_content = change.apply_to_text(&self.content);
+ let new_id = self.id_generator.next();
+ self.positions.insert(pos, (new_id, range));
- let mut affected = vec![];
+ new_id
+ }
- for (idx, (id, r)) in self.statements.iter_mut().enumerate() {
- if r.intersect(change.range.unwrap()).is_some() {
- affected.push((idx, (*id, *r)));
- } else if r.start() > change.range.unwrap().end() {
- if change.is_addition() {
- *r += change.diff_size();
- } else if change.is_deletion() {
- *r -= change.diff_size();
- }
+ /// Returns all relevant details about the change and its effects on the current state of the document.
+ /// - The affected range is the full range of the change, including the range of all statements that intersect with the change
+ /// - All indices of affected statement positions
+ /// - The index of the first statement position before the change, if any
+ /// - The index of the first statement position after the change, if any
+ /// - the full affected range includng the prev and next statement
+ fn get_affected(
+ &self,
+ change_range: TextRange,
+ content_size: TextSize,
+ diff_size: TextSize,
+ is_addition: bool,
+ ) -> Affected {
+ let mut start = change_range.start();
+ let mut end = change_range.end().min(content_size);
+
+ let mut affected_indices = Vec::new();
+ let mut prev_index = None;
+ let mut next_index = None;
+
+ for (index, (_, pos_range)) in self.positions.iter().enumerate() {
+ if pos_range.intersect(change_range).is_some() {
+ affected_indices.push(index);
+ start = start.min(pos_range.start());
+ end = end.max(pos_range.end());
+ } else if pos_range.end() <= change_range.start() {
+ prev_index = Some(index);
+ } else if pos_range.start() >= change_range.end() && next_index.is_none() {
+ next_index = Some(index);
+ break;
}
}
- // special case: if no statement is affected, the affected range is between the prev and
- // the next statement
- if affected.is_empty() {
- let start = self
- .statements
- .iter()
- .rev()
- .find(|(_, r)| r.end() <= change.range.unwrap().start())
- .map(|(_, r)| r.end())
- .unwrap_or(TextSize::new(0));
- let end = self
- .statements
- .iter()
- .find(|(_, r)| r.start() >= change.range.unwrap().end())
- .map(|(_, r)| r.start())
- .unwrap_or_else(|| self.content.text_len());
+ let start_incl = prev_index
+ .map(|i| self.positions[i].1.start())
+ .unwrap_or(start);
+ let end_incl = next_index
+ .map(|i| self.positions[i].1.end())
+ .unwrap_or_else(|| end);
- let affected = new_content
- .as_str()
- .get(usize::from(start)..usize::from(end))
- .unwrap();
+ let end_incl = if is_addition {
+ end_incl.add(diff_size)
+ } else {
+ end_incl.sub(diff_size)
+ };
- // add new statements
- for range in pg_statement_splitter::split(affected).ranges {
- let doc_range = range + start;
- match self
- .statements
- .binary_search_by(|(_, r)| r.start().cmp(&doc_range.start()))
- {
- Ok(_) => {}
- Err(pos) => {
- let new_id = self.id_generator.next();
- self.statements.insert(pos, (new_id, doc_range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id: new_id,
- },
- text: new_content[doc_range].to_string(),
- }));
- }
- }
- }
+ let end = if is_addition {
+ end.add(diff_size)
} else {
- // get full affected range
- let mut start = change.range.unwrap().start();
- let mut end = change.range.unwrap().end();
+ end.sub(diff_size)
+ };
- if end > new_content.text_len() {
- end = new_content.text_len();
- }
+ Affected {
+ affected_range: TextRange::new(start, end.min(content_size)),
+ affected_indices,
+ prev_index,
+ next_index,
+ full_affected_range: TextRange::new(start_incl, end_incl.min(content_size)),
+ }
+ }
- for (_, (_, r)) in &affected {
- // adjust the range to the new content
- let adjusted_start = if r.start() >= change.range.unwrap().end() {
- r.start() + change.diff_size()
- } else {
- r.start()
- };
- let adjusted_end = if r.end() >= change.range.unwrap().end() {
- if change.is_addition() {
- r.end() + change.diff_size()
- } else {
- r.end() - change.diff_size()
- }
+ fn move_ranges(&mut self, offset: TextSize, diff_size: TextSize, is_addition: bool) {
+ self.positions
+ .iter_mut()
+ .skip_while(|(_, r)| offset > r.start())
+ .for_each(|(_, range)| {
+ let new_range = if is_addition {
+ range.add(diff_size)
} else {
- r.end()
+ range.sub(diff_size)
};
- if adjusted_start < start {
- start = adjusted_start;
- }
- if adjusted_end > end && adjusted_end <= new_content.text_len() {
- end = adjusted_end;
- }
- }
+ *range = new_range;
+ });
+ }
+ /// Applies a single change to the document and returns the affected statements
+ fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
+ tracing::info!("applying change: {:?}", change);
+
+ // if range is none, we have a full change
+ if change.range.is_none() {
+ return self.apply_full_change(&change.text);
+ }
+
+ // i spent a relatively large amount of time thinking about how to handle range changes
+ // properly. there are quite a few edge cases to consider. I eventually skipped most of
+ // them, because the complexity is not worth the return for now. we might want to revisit
+ // this later though.
+
+ let mut changed: Vec<StatementChange> = Vec::with_capacity(self.positions.len());
+
+ let change_range = change.range.unwrap();
+ let new_content = change.apply_to_text(&self.content);
+
+ // we first need to determine the affected range and all affected statements, as well as
+ // the index of the prev and the next statement, if any. The full affected range is the
+ // affected range expanded to the start of the previous statement and the end of the next
+ let Affected {
+ affected_range,
+ affected_indices,
+ prev_index,
+ next_index,
+ full_affected_range,
+ } = self.get_affected(
+ change_range,
+ new_content.text_len(),
+ change.diff_size(),
+ change.is_addition(),
+ );
+
+ // if within a statement, we can modify it if the change results in also a single statement
+ if affected_indices.len() == 1 {
let changed_content = new_content
.as_str()
- .get(usize::from(start)..usize::from(end))
+ .get(usize::from(affected_range.start())..usize::from(affected_range.end()))
.unwrap();
- let ranges = pg_statement_splitter::split(changed_content).ranges;
+ let new_ranges = pg_statement_splitter::split(changed_content).ranges;
- if affected.len() == 1 && ranges.len() == 1 {
- // from one to one, so we do a modification
- let stmt = &affected[0];
- let new_stmt = &ranges[0];
+ if new_ranges.len() == 1 {
+ if change.is_whitespace() {
+ self.move_ranges(
+ affected_range.end(),
+ change.diff_size(),
+ change.is_addition(),
+ );
- let new_id = self.id_generator.next();
- self.statements[stmt.0] = (new_id, new_stmt.add(start));
-
- let changed_stmt = ChangedStatement {
- old: self.statement(&stmt.1),
- new_ref: self.statement_ref(&self.statements[stmt.0]),
- // change must be relative to statement
- range: change.range.unwrap().sub(stmt.1 .1.start()),
- text: change.text.clone(),
- };
+ self.content = new_content;
- changed.push(StatementChange::Modified(changed_stmt));
- } else {
- // delete and add new ones
- for (_, (id, r)) in &affected {
- changed.push(StatementChange::Deleted(self.statement_ref(&(*id, *r))));
+ return changed;
}
- // remove affected statements
- self.statements
- .retain(|(id, _)| !affected.iter().any(|(affected_id, _)| id == affected_id));
-
- // add new statements
- for range in ranges {
- match self
- .statements
- .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
- {
- Ok(_) => {}
- Err(pos) => {
- let new_id = self.id_generator.next();
- self.statements.insert(pos, (new_id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id: new_id,
- },
- text: new_content[range].to_string(),
- }));
- }
- }
- }
- }
- }
+ let affected_idx = affected_indices[0];
+ let new_range = new_ranges[0].add(affected_range.start());
+ let (old_id, old_range) = self.positions[affected_idx];
- self.content = new_content;
+ // move all statements after the afffected range
+ self.move_ranges(old_range.end(), change.diff_size(), change.is_addition());
- self.debug_statements();
+ let new_id = self.id_generator.next();
+ self.positions[affected_idx] = (new_id, new_range); | beautiful! |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +69,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<StatementChange> {
+ let mut changes = Vec::new();
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
+ changes.extend(self.positions.drain(..).map(|(id, _)| {
+ StatementChange::Deleted(Statement {
+ id,
+ path: self.path.clone(),
+ })
+ }));
- tracing::info!("applying change: {:?}", change);
+ self.content = text.to_string();
- if change.range.is_none() {
- // apply full text change and return early
- changed.extend(
- self.statements
- .drain(..)
- .map(|(id, _)| {
- StatementChange::Deleted(StatementRef {
- id,
+ changes.extend(
+ pg_statement_splitter::split(&self.content)
+ .ranges
+ .into_iter()
+ .map(|range| {
+ let id = self.id_generator.next();
+ let text = self.content[range].to_string();
+ self.positions.push((id, range));
+
+ StatementChange::Added(AddedStatement {
+ stmt: Statement {
path: self.path.clone(),
- })
+ id,
+ },
+ text,
})
- .collect::<Vec<StatementChange>>(),
- );
-
- self.content = change.text.clone();
-
- for (id, range) in pg_statement_splitter::split(&self.content)
- .ranges
- .iter()
- .map(|r| (self.id_generator.next(), *r))
- {
- self.statements.push((id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id,
- },
- text: self.content[range].to_string(),
- }))
- }
+ }),
+ );
- return changed;
- }
+ changes
+ }
- // no matter where the change is, we can never be sure if its a modification or a deletion/addition
- // e.g. if a statement is "select 1", and the change is "select 2; select 2", its an addition even though its in the middle of the statement.
- // hence we only have three "real" cases:
- // 1. the change touches no statement at all (addition)
- // 2. the change touches exactly one statement AND splitting the statement results in just
- // one statement (modification)
- // 3. the change touches more than one statement (addition/deletion)
+ fn insert_statement(&mut self, range: TextRange) -> usize {
+ let pos = self
+ .positions
+ .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
+ .unwrap_err();
- let new_content = change.apply_to_text(&self.content);
+ let new_id = self.id_generator.next();
+ self.positions.insert(pos, (new_id, range));
- let mut affected = vec![];
+ new_id
+ }
- for (idx, (id, r)) in self.statements.iter_mut().enumerate() {
- if r.intersect(change.range.unwrap()).is_some() {
- affected.push((idx, (*id, *r)));
- } else if r.start() > change.range.unwrap().end() {
- if change.is_addition() {
- *r += change.diff_size();
- } else if change.is_deletion() {
- *r -= change.diff_size();
- }
+ /// Returns all relevant details about the change and its effects on the current state of the document.
+ /// - The affected range is the full range of the change, including the range of all statements that intersect with the change
+ /// - All indices of affected statement positions
+ /// - The index of the first statement position before the change, if any
+ /// - The index of the first statement position after the change, if any
+ /// - the full affected range includng the prev and next statement
+ fn get_affected(
+ &self,
+ change_range: TextRange,
+ content_size: TextSize,
+ diff_size: TextSize,
+ is_addition: bool,
+ ) -> Affected {
+ let mut start = change_range.start();
+ let mut end = change_range.end().min(content_size);
+
+ let mut affected_indices = Vec::new();
+ let mut prev_index = None;
+ let mut next_index = None;
+
+ for (index, (_, pos_range)) in self.positions.iter().enumerate() {
+ if pos_range.intersect(change_range).is_some() {
+ affected_indices.push(index);
+ start = start.min(pos_range.start());
+ end = end.max(pos_range.end());
+ } else if pos_range.end() <= change_range.start() {
+ prev_index = Some(index);
+ } else if pos_range.start() >= change_range.end() && next_index.is_none() {
+ next_index = Some(index);
+ break;
}
}
- // special case: if no statement is affected, the affected range is between the prev and
- // the next statement
- if affected.is_empty() {
- let start = self
- .statements
- .iter()
- .rev()
- .find(|(_, r)| r.end() <= change.range.unwrap().start())
- .map(|(_, r)| r.end())
- .unwrap_or(TextSize::new(0));
- let end = self
- .statements
- .iter()
- .find(|(_, r)| r.start() >= change.range.unwrap().end())
- .map(|(_, r)| r.start())
- .unwrap_or_else(|| self.content.text_len());
+ let start_incl = prev_index
+ .map(|i| self.positions[i].1.start())
+ .unwrap_or(start);
+ let end_incl = next_index
+ .map(|i| self.positions[i].1.end())
+ .unwrap_or_else(|| end);
- let affected = new_content
- .as_str()
- .get(usize::from(start)..usize::from(end))
- .unwrap();
+ let end_incl = if is_addition {
+ end_incl.add(diff_size)
+ } else {
+ end_incl.sub(diff_size)
+ };
- // add new statements
- for range in pg_statement_splitter::split(affected).ranges {
- let doc_range = range + start;
- match self
- .statements
- .binary_search_by(|(_, r)| r.start().cmp(&doc_range.start()))
- {
- Ok(_) => {}
- Err(pos) => {
- let new_id = self.id_generator.next();
- self.statements.insert(pos, (new_id, doc_range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id: new_id,
- },
- text: new_content[doc_range].to_string(),
- }));
- }
- }
- }
+ let end = if is_addition {
+ end.add(diff_size)
} else {
- // get full affected range
- let mut start = change.range.unwrap().start();
- let mut end = change.range.unwrap().end();
+ end.sub(diff_size)
+ };
- if end > new_content.text_len() {
- end = new_content.text_len();
- }
+ Affected {
+ affected_range: TextRange::new(start, end.min(content_size)),
+ affected_indices,
+ prev_index,
+ next_index,
+ full_affected_range: TextRange::new(start_incl, end_incl.min(content_size)),
+ }
+ }
- for (_, (_, r)) in &affected {
- // adjust the range to the new content
- let adjusted_start = if r.start() >= change.range.unwrap().end() {
- r.start() + change.diff_size()
- } else {
- r.start()
- };
- let adjusted_end = if r.end() >= change.range.unwrap().end() {
- if change.is_addition() {
- r.end() + change.diff_size()
- } else {
- r.end() - change.diff_size()
- }
+ fn move_ranges(&mut self, offset: TextSize, diff_size: TextSize, is_addition: bool) {
+ self.positions
+ .iter_mut()
+ .skip_while(|(_, r)| offset > r.start())
+ .for_each(|(_, range)| {
+ let new_range = if is_addition {
+ range.add(diff_size)
} else {
- r.end()
+ range.sub(diff_size)
};
- if adjusted_start < start {
- start = adjusted_start;
- }
- if adjusted_end > end && adjusted_end <= new_content.text_len() {
- end = adjusted_end;
- }
- }
+ *range = new_range;
+ });
+ }
+ /// Applies a single change to the document and returns the affected statements
+ fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
+ tracing::info!("applying change: {:?}", change);
+
+ // if range is none, we have a full change
+ if change.range.is_none() {
+ return self.apply_full_change(&change.text);
+ }
+
+ // i spent a relatively large amount of time thinking about how to handle range changes
+ // properly. there are quite a few edge cases to consider. I eventually skipped most of
+ // them, because the complexity is not worth the return for now. we might want to revisit
+ // this later though.
+
+ let mut changed: Vec<StatementChange> = Vec::with_capacity(self.positions.len());
+
+ let change_range = change.range.unwrap();
+ let new_content = change.apply_to_text(&self.content);
+
+ // we first need to determine the affected range and all affected statements, as well as
+ // the index of the prev and the next statement, if any. The full affected range is the
+ // affected range expanded to the start of the previous statement and the end of the next
+ let Affected {
+ affected_range,
+ affected_indices,
+ prev_index,
+ next_index,
+ full_affected_range,
+ } = self.get_affected(
+ change_range,
+ new_content.text_len(),
+ change.diff_size(),
+ change.is_addition(),
+ );
+
+ // if within a statement, we can modify it if the change results in also a single statement
+ if affected_indices.len() == 1 {
let changed_content = new_content
.as_str()
- .get(usize::from(start)..usize::from(end))
+ .get(usize::from(affected_range.start())..usize::from(affected_range.end()))
.unwrap();
- let ranges = pg_statement_splitter::split(changed_content).ranges;
+ let new_ranges = pg_statement_splitter::split(changed_content).ranges;
- if affected.len() == 1 && ranges.len() == 1 {
- // from one to one, so we do a modification
- let stmt = &affected[0];
- let new_stmt = &ranges[0];
+ if new_ranges.len() == 1 {
+ if change.is_whitespace() {
+ self.move_ranges(
+ affected_range.end(),
+ change.diff_size(),
+ change.is_addition(),
+ );
- let new_id = self.id_generator.next();
- self.statements[stmt.0] = (new_id, new_stmt.add(start));
-
- let changed_stmt = ChangedStatement {
- old: self.statement(&stmt.1),
- new_ref: self.statement_ref(&self.statements[stmt.0]),
- // change must be relative to statement
- range: change.range.unwrap().sub(stmt.1 .1.start()),
- text: change.text.clone(),
- };
+ self.content = new_content;
- changed.push(StatementChange::Modified(changed_stmt));
- } else {
- // delete and add new ones
- for (_, (id, r)) in &affected {
- changed.push(StatementChange::Deleted(self.statement_ref(&(*id, *r))));
+ return changed;
}
- // remove affected statements
- self.statements
- .retain(|(id, _)| !affected.iter().any(|(affected_id, _)| id == affected_id));
-
- // add new statements
- for range in ranges {
- match self
- .statements
- .binary_search_by(|(_, r)| r.start().cmp(&range.start()))
- {
- Ok(_) => {}
- Err(pos) => {
- let new_id = self.id_generator.next();
- self.statements.insert(pos, (new_id, range));
- changed.push(StatementChange::Added(Statement {
- ref_: StatementRef {
- path: self.path.clone(),
- id: new_id,
- },
- text: new_content[range].to_string(),
- }));
- }
- }
- }
- }
- }
+ let affected_idx = affected_indices[0];
+ let new_range = new_ranges[0].add(affected_range.start());
+ let (old_id, old_range) = self.positions[affected_idx];
- self.content = new_content;
+ // move all statements after the afffected range
+ self.move_ranges(old_range.end(), change.diff_size(), change.is_addition());
- self.debug_statements();
+ let new_id = self.id_generator.next();
+ self.positions[affected_idx] = (new_id, new_range);
- changed
- }
-}
+ changed.push(StatementChange::Modified(ModifiedStatement {
+ old_stmt: Statement {
+ id: old_id, | at this point, that `id` is only stored here, because statement matching that id in the `document` is overwritten, right?
Do we at some point after this try to find the old statement by id? |
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -61,18 +70,56 @@ impl<'a> CompletionContext<'a> {
text: ¶ms.text,
schema_cache: params.schema,
position: usize::from(params.position),
-
ts_node: None,
schema_name: None,
wrapping_clause_type: None,
+ wrapping_statement_range: None,
is_invocation: false,
+ mentioned_relations: HashMap::new(),
};
ctx.gather_tree_context();
+ ctx.gather_info_from_ts_queries();
ctx
}
+ fn gather_info_from_ts_queries(&mut self) {
+ let tree = match self.tree.as_ref() {
+ None => return,
+ Some(t) => t,
I think it's already quite idiomatic like that. The only thing that comes to mind is returning an `Option<>` and then
`let tree = self.tree.as_ref()?;`
but I would keep it like this.
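The `?`-on-`Option` variant the comment sketches can be shown in isolation; `Doc` and `tree_len` below are made-up names for illustration, not the project's actual API:

```rust
// Hypothetical sketch of the reviewer's alternative: a helper returning
// Option and using `?` on `self.tree.as_ref()` instead of an explicit match.
struct Doc {
    tree: Option<String>,
}

impl Doc {
    fn tree_len(&self) -> Option<usize> {
        let tree = self.tree.as_ref()?; // early-returns None when there is no tree
        Some(tree.len())
    }
}

fn main() {
    let with_tree = Doc { tree: Some("select 1".into()) };
    let without_tree = Doc { tree: None };
    println!("{:?}", with_tree.tree_len());
    println!("{:?}", without_tree.tree_len());
}
```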
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -0,0 +1,114 @@
+use crate::{
+ builder::CompletionBuilder, context::CompletionContext, relevance::CompletionRelevanceData,
+ CompletionItem, CompletionItemKind,
+};
+
+pub fn complete_columns(ctx: &CompletionContext, builder: &mut CompletionBuilder) {
+ let available_columns = &ctx.schema_cache.columns; | is this how others also do it? iterating over all possible options? |
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -0,0 +1,114 @@
+use crate::{
+ builder::CompletionBuilder, context::CompletionContext, relevance::CompletionRelevanceData,
+ CompletionItem, CompletionItemKind,
+};
+
+pub fn complete_columns(ctx: &CompletionContext, builder: &mut CompletionBuilder) {
+ let available_columns = &ctx.schema_cache.columns;
+
+ for col in available_columns {
+ let item = CompletionItem {
+ label: col.name.clone(),
+ score: CompletionRelevanceData::Column(col).get_score(ctx),
+ description: format!("Table: {}.{}", col.schema_name, col.table_name),
+ preselected: false,
+ kind: CompletionItemKind::Column,
+ };
+
+ builder.add_item(item);
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use crate::{
+ complete,
+ test_helper::{get_test_deps, get_test_params, InputQuery, CURSOR_POS},
+ CompletionItem,
+ };
+
+ struct TestCase {
+ query: String,
+ message: &'static str,
+ label: &'static str,
+ description: &'static str,
+ }
+
+ impl TestCase {
+ fn get_input_query(&self) -> InputQuery {
+ let strs: Vec<&str> = self.query.split_whitespace().collect();
+ strs.join(" ").as_str().into()
+ }
+ }
+
+ #[tokio::test]
+ async fn completes_columns() {
+ let setup = r#"
+ create schema private;
+
+ create table public.users (
+ id serial primary key,
+ name text
+ );
+
+ create table public.audio_books (
+ id serial primary key,
+ narrator text
+ );
+
+ create table private.audio_books (
+ id serial primary key,
+ narrator_id text
+ );
+ "#;
+
+ let queries: Vec<TestCase> = vec![
+ TestCase {
+ message: "correctly prefers the columns of present tables",
+ query: format!(r#"select na{} from public.audio_books;"#, CURSOR_POS),
+ label: "narrator",
+ description: "Table: public.audio_books", | noice!! |
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -0,0 +1,91 @@
+use crate::{Query, QueryResult};
+
+use super::QueryTryFrom;
+
+static QUERY: &'static str = r#"
+ (relation
+ (object_reference
+ .
+ (identifier) @schema_or_table
+ "."?
+ (identifier)? @table
+ )+
+ )
+"#;
+
+#[derive(Debug)]
+pub struct RelationMatch<'a> {
+ pub(crate) schema: Option<tree_sitter::Node<'a>>,
+ pub(crate) table: tree_sitter::Node<'a>,
+}
+
+impl<'a> RelationMatch<'a> {
+ pub fn get_schema(&self, sql: &str) -> Option<String> {
+ let str = self
+ .schema
+ .as_ref()?
+ .utf8_text(sql.as_bytes())
+ .expect("Failed to get schema from RelationMatch");
+
+ Some(str.to_string())
+ }
+
+ pub fn get_table(&self, sql: &str) -> String {
+ self.table
+ .utf8_text(sql.as_bytes())
+ .expect("Failed to get schema from RelationMatch")
+ .to_string()
+ }
+}
+
+impl<'a> TryFrom<&'a QueryResult<'a>> for &'a RelationMatch<'a> {
+ type Error = String;
+
+ fn try_from(q: &'a QueryResult<'a>) -> Result<Self, Self::Error> {
+ match q {
+ QueryResult::Relation(r) => Ok(&r),
+
+ #[allow(unreachable_patterns)]
+ _ => Err("Invalid QueryResult type".into()),
+ }
+ }
+}
+
+impl<'a> QueryTryFrom<'a> for RelationMatch<'a> {
+ type Ref = &'a RelationMatch<'a>;
+}
+
+impl<'a> Query<'a> for RelationMatch<'a> {
+ fn execute(root_node: tree_sitter::Node<'a>, stmt: &'a str) -> Vec<crate::QueryResult<'a>> {
+ let query =
+ tree_sitter::Query::new(tree_sitter_sql::language(), &QUERY).expect("Invalid Query."); | what about a std::sync::LazyLock? but I dont know whether it matter a lot in terms of performance? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,45 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "info", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceError> {
+ let doc = self
+ .documents
+ .get(¶ms.path)
+ .ok_or(WorkspaceError::not_found())?;
+
+ tracing::info!("Found the document.");
+ tracing::info!("Looking for statement at position: {:?}", ¶ms.position);
+
+ let statement_ref = match doc.statement_ref_at_offset(¶ms.position) {
+ Some(s) => s,
+ None => return Ok(pg_completions::CompletionResult::default()),
+ };
+
+ let tree = self.tree_sitter.fetch(&statement_ref);
+ let text = doc
+ .statement_by_id(statement_ref.id)
+ .expect("Found statement_ref but no matching statement")
+ .text;
+
+ let schema_cache = self
+ .schema_cache
+ .read()
+ .map_err(|_| WorkspaceError::runtime("Unable to load SchemaCache"))?;
+
+ let result = pg_completions::complete(pg_completions::CompletionParams {
+ position: params.position,
+ schema: &schema_cache,
+ tree: tree.as_deref(), | isn't a ref to the tree sufficient? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,45 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "info", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceError> {
+ let doc = self
+ .documents
+ .get(¶ms.path)
+ .ok_or(WorkspaceError::not_found())?;
+
+ tracing::info!("Found the document.");
+ tracing::info!("Looking for statement at position: {:?}", ¶ms.position);
+
+ let statement_ref = match doc.statement_ref_at_offset(¶ms.position) {
+ Some(s) => s,
+ None => return Ok(pg_completions::CompletionResult::default()),
+ };
+
+ let tree = self.tree_sitter.fetch(&statement_ref);
+ let text = doc
+ .statement_by_id(statement_ref.id)
+ .expect("Found statement_ref but no matching statement")
+ .text;
+
+ let schema_cache = self
+ .schema_cache
+ .read()
+ .map_err(|_| WorkspaceError::runtime("Unable to load SchemaCache"))?; | I agree that the document api is a bit weird, but you should be able to query for `statement_at_offset`, and the `Statement` also includes the `StatementRef`. Open for ideas to improve that api. |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,45 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "info", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceError> {
+ let doc = self
+ .documents
+ .get(¶ms.path)
+ .ok_or(WorkspaceError::not_found())?;
+
+ tracing::info!("Found the document.");
+ tracing::info!("Looking for statement at position: {:?}", ¶ms.position);
+
+ let statement_ref = match doc.statement_ref_at_offset(¶ms.position) {
+ Some(s) => s,
+ None => return Ok(pg_completions::CompletionResult::default()),
+ };
+
+ let tree = self.tree_sitter.fetch(&statement_ref);
+ let text = doc
+ .statement_by_id(statement_ref.id)
+ .expect("Found statement_ref but no matching statement")
+ .text;
+
+ let schema_cache = self
+ .schema_cache
+ .read()
+ .map_err(|_| WorkspaceError::runtime("Unable to load SchemaCache"))?;
+
+ let result = pg_completions::complete(pg_completions::CompletionParams {
+ position: params.position, | I think we need to make the position relative to the statement? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -0,0 +1,36 @@
+use std::{fs::File, path::PathBuf, str::FromStr}; | what benefit does this have over the pg_cli? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,57 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "debug", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceError> {
+ let doc = self
+ .documents
+ .get(¶ms.path)
+ .ok_or(WorkspaceError::not_found())?;
+
+ let offset = doc.line_and_col_to_offset(params.line, params.column); | this should happen in the lsp layer. we already have helpers for that in `pg_lsp_converters`. the input to the workspace should only be `TextSize` or `TextRange`. the lsp server tracks documents too with the purpose of maintaining the `LineIndex`, which is the data structure we need for the conversion. another reason to handle it on lsp level are different encodings. |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -81,6 +81,31 @@ impl Document {
.collect()
}
+ pub fn line_and_col_to_offset(&self, line: u32, col: u32) -> TextSize { | see prev comment - this should be done in the lsp layer, and respective helpers are provided in the `pg_lsp_converters` crate. |
postgres_lsp | github_2023 | typescript | 164 | supabase-community | psteinroe | @@ -9,37 +9,39 @@ import {
let client: LanguageClient;
-export function activate(_context: ExtensionContext) {
+export async function activate(_context: ExtensionContext) {
// If the extension is launched in debug mode then the debug server options are used
// Otherwise the run options are used
const run: Executable = {
- command: 'pglsp'
+        command: 'pglsp_new' | can't we just use the cli instead, with `pg_cli lsp-proxy`? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -137,7 +137,10 @@ fn install_client(sh: &Shell, client_opt: ClientOpt) -> anyhow::Result<()> {
}
fn install_server(sh: &Shell) -> anyhow::Result<()> {
- let cmd = cmd!(sh, "cargo install --path crates/pg_lsp --locked --force");
+ let cmd = cmd!(
+ sh,
+ "cargo install --path crates/pg_lsp_new --locked --force" | can't we use `pg_cli` instead? the lsp should not be an entry point. |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -163,19 +163,20 @@ impl LanguageServer for LSPServer {
self.session.update_all_diagnostics().await;
}
+ #[tracing::instrument(level = "info", skip(self))]
async fn shutdown(&self) -> LspResult<()> {
Ok(())
}
- #[tracing::instrument(level = "trace", skip(self))]
+ #[tracing::instrument(level = "info", skip(self))] | lets undo this once everything is setup |
postgres_lsp | github_2023 | others | 165 | supabase-community | juleswritescode | @@ -140,7 +142,10 @@ clear-branches:
git branch --merged | egrep -v "(^\\*|main)" | xargs git branch -d
reset-git:
- git checkout main && git pull && pnpm run clear-branches
+ git checkout main
+ git pull
+ just clear-branches
merge-main:
- git fetch origin main:main && git merge main
+ git fetch origin main:main
+ git merge main | sehr nice |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -1 +1 @@
-DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:5432/postgres
+DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:54322/postgres | ```suggestion
DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:5432/postgres
``` |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -0,0 +1,80 @@
+use crate::{
+ categories::RuleCategory,
+ rule::{GroupCategory, Rule, RuleGroup, RuleMetadata},
+};
+
+pub struct RuleContext<'a, R: Rule> {
+ stmt: &'a pg_query_ext::NodeEnum,
+ options: &'a R::Options,
+}
+
+impl<'a, R> RuleContext<'a, R>
+where
+ R: Rule + Sized + 'static,
+{
+ #[allow(clippy::too_many_arguments)]
+ pub fn new(
+ stmt: &'a pg_query_ext::NodeEnum,
+ options: &'a R::Options,
+ ) -> Self {
+ Self {
+ stmt,
+ options,
+ }
+ }
+
+ /// Returns the group that belongs to the current rule
+ pub fn group(&self) -> &'static str {
+ <R::Group as RuleGroup>::NAME
+ }
+
+ /// Returns the category that belongs to the current rule
+ pub fn category(&self) -> RuleCategory {
+ <<R::Group as RuleGroup>::Category as GroupCategory>::CATEGORY
+ }
+
+ /// Returns the AST root
+ pub fn stmt(&self) -> &pg_query_ext::NodeEnum {
+ self.stmt
+ }
+
+ /// Returns the metadata of the rule
+ ///
+ /// The metadata contains information about the rule, such as the name, version, language, and whether it is recommended.
+ ///
+ /// ## Examples
+ /// ```rust,ignore
+ /// declare_lint_rule! {
+ /// /// Some doc
+ /// pub(crate) Foo {
+ /// version: "0.0.0",
+ /// name: "foo",
+ /// language: "js", | ```suggestion
``` |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -0,0 +1,327 @@
+use pg_console::fmt::Display;
+use pg_console::{markup, MarkupBuf};
+use pg_diagnostics::advice::CodeSuggestionAdvice;
+use pg_diagnostics::{
+ Advices, Category, Diagnostic, DiagnosticTags, Location, LogCategory, MessageAndDescription,
+ Visit,
+};
+use std::cmp::Ordering;
+use std::fmt::Debug;
+use text_size::TextRange;
+
+use crate::{categories::RuleCategory, context::RuleContext, registry::RegistryVisitor};
+
+#[derive(Clone, Debug)]
+#[cfg_attr(feature = "serde", derive(serde::Serialize))]
+/// Static metadata containing information about a rule
+pub struct RuleMetadata {
+ /// It marks if a rule is deprecated, and if so a reason has to be provided.
+ pub deprecated: Option<&'static str>,
+ /// The version when the rule was implemented
+ pub version: &'static str,
+ /// The name of this rule, displayed in the diagnostics it emits
+ pub name: &'static str,
+ /// The content of the documentation comments for this rule
+ pub docs: &'static str,
+ /// Whether a rule is recommended or not
+ pub recommended: bool,
+ /// The source URL of the rule
+ pub sources: &'static [RuleSource],
+}
+
+impl RuleMetadata {
+ pub const fn new(version: &'static str, name: &'static str, docs: &'static str) -> Self {
+ Self {
+ deprecated: None,
+ version,
+ name,
+ docs,
+ sources: &[],
+ recommended: false,
+ }
+ }
+
+ pub const fn recommended(mut self, recommended: bool) -> Self {
+ self.recommended = recommended;
+ self
+ }
+
+ pub const fn deprecated(mut self, deprecated: &'static str) -> Self {
+ self.deprecated = Some(deprecated);
+ self
+ }
+
+ pub const fn sources(mut self, sources: &'static [RuleSource]) -> Self {
+ self.sources = sources;
+ self
+ }
+}
+
+pub trait RuleMeta {
+ type Group: RuleGroup;
+ const METADATA: RuleMetadata;
+}
+
+/// A rule group is a collection of rules under a given name, serving as a
+/// "namespace" for lint rules and allowing the entire set of rules to be
+/// disabled at once
+pub trait RuleGroup {
+ type Category: GroupCategory;
+ /// The name of this group, displayed in the diagnostics emitted by its rules
+ const NAME: &'static str;
+ /// Register all the rules belonging to this group into `registry`
+ fn record_rules<V: RegistryVisitor + ?Sized>(registry: &mut V);
+}
+
+/// A group category is a collection of rule groups under a given category ID,
+/// serving as a broad classification on the kind of diagnostic or code action
+/// these rule emit, and allowing whole categories of rules to be disabled at
+/// once depending on the kind of analysis being performed
+pub trait GroupCategory {
+ /// The category ID used for all groups and rule belonging to this category
+ const CATEGORY: RuleCategory;
+ /// Register all the groups belonging to this category into `registry`
+ fn record_groups<V: RegistryVisitor + ?Sized>(registry: &mut V);
+}
+
+/// Trait implemented by all analysis rules: declares interest to a certain AstNode type,
+/// and a callback function to be executed on all nodes matching the query to possibly
+/// raise an analysis event
+pub trait Rule: RuleMeta + Sized {
+ type Options: Default + Clone + Debug;
+
+ fn run(ctx: &RuleContext<Self>) -> Vec<RuleDiagnostic>;
+} | the design is pretty awesome! |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -1,18 +1,85 @@
//! Codegen tools. Derived from Biome's codegen
+mod generate_analyser;
+mod generate_configuration;
mod generate_crate;
+mod generate_new_analyser_rule;
+pub use self::generate_analyser::generate_analyser;
+pub use self::generate_configuration::generate_rules_configuration;
pub use self::generate_crate::generate_crate;
+pub use self::generate_new_analyser_rule::generate_new_analyser_rule;
use bpaf::Bpaf;
+use generate_new_analyser_rule::Category;
+use std::path::Path;
+use xtask::{glue::fs2, Mode, Result};
+
+pub enum UpdateResult {
+ NotUpdated,
+ Updated,
+}
+
+/// A helper to update file on disk if it has changed.
+/// With verify = false,
+pub fn update(path: &Path, contents: &str, mode: &Mode) -> Result<UpdateResult> {
+ match fs2::read_to_string(path) {
+ Ok(old_contents) if old_contents == contents => {
+ return Ok(UpdateResult::NotUpdated);
+ }
+ _ => (),
+ } | ```suggestion
if fs2::read_to_string(path).is_ok_and(|old_contents| old_contents == contents) {
return Ok(UpdateResult::NotUpdated);
}
``` |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -1,18 +1,85 @@
//! Codegen tools. Derived from Biome's codegen
+mod generate_analyser;
+mod generate_configuration;
mod generate_crate;
+mod generate_new_analyser_rule;
+pub use self::generate_analyser::generate_analyser;
+pub use self::generate_configuration::generate_rules_configuration;
pub use self::generate_crate::generate_crate;
+pub use self::generate_new_analyser_rule::generate_new_analyser_rule;
use bpaf::Bpaf;
+use generate_new_analyser_rule::Category;
+use std::path::Path;
+use xtask::{glue::fs2, Mode, Result};
+
+pub enum UpdateResult {
+ NotUpdated,
+ Updated,
+}
+
+/// A helper to update file on disk if it has changed.
+/// With verify = false, | ```suggestion
/// With verify = false, the contents of the file will be updated to the passed in contents.
/// With verify = true, an Err will be returned if the contents of the file do not match the passed-in contents.
``` |
postgres_lsp | github_2023 | others | 161 | supabase-community | psteinroe | @@ -127,9 +145,14 @@ impl<'a> CompletionContext<'a> {
self.wrapping_clause_type = "where".try_into().ok();
}
+ "keyword_from" => {
+ self.wrapping_clause_type = "keyword_from".try_into().ok();
+ }
+
_ => {}
}
+ // We have arrived at the leaf node | I love that you also use `we` when writing comments. it's so welcoming to read. |
postgres_lsp | github_2023 | others | 161 | supabase-community | psteinroe | @@ -130,6 +130,10 @@ new-crate name:
cargo new --lib crates/{{snakecase(name)}}
cargo run -p xtask_codegen -- new-crate --name={{snakecase(name)}}
+# Prints the treesitter tree of the given SQL file
+tree-print file:
+ cargo run --bin tree_print -- -f {{file}}
+ | love it! will add this later for the parser too |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -291,6 +300,50 @@ impl Workspace for WorkspaceServer {
fn is_path_ignored(&self, params: IsPathIgnoredParams) -> Result<bool, WorkspaceError> {
Ok(self.is_ignored(params.pglsp_path.as_path()))
}
+
+ fn pull_diagnostics(
+ &self,
+ params: super::PullDiagnosticsParams,
+ ) -> Result<super::PullDiagnosticsResult, WorkspaceError> {
+ // get all statements form the requested document and pull diagnostics out of every
+        // sourcece | Martin sourcece, that's the famous director of The Wolf of Wall Street |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -291,6 +300,50 @@ impl Workspace for WorkspaceServer {
fn is_path_ignored(&self, params: IsPathIgnoredParams) -> Result<bool, WorkspaceError> {
Ok(self.is_ignored(params.pglsp_path.as_path()))
}
+
+ fn pull_diagnostics(
+ &self,
+ params: super::PullDiagnosticsParams,
+ ) -> Result<super::PullDiagnosticsResult, WorkspaceError> {
+ // get all statements form the requested document and pull diagnostics out of every
+ // sourcece
+ let doc = self
+ .documents
+ .get(¶ms.path)
+ .ok_or(WorkspaceError::not_found())?;
+
+ let diagnostics: Vec<SDiagnostic> = doc
+ .statement_refs_with_ranges()
+ .iter()
+ .flat_map(|(stmt, r)| {
+ let mut stmt_diagnostics = vec![];
+
+ stmt_diagnostics.extend(self.pg_query.pull_diagnostics(stmt));
+
+ stmt_diagnostics
+ .into_iter()
+ .map(|d| {
+ SDiagnostic::new(
+ d.with_file_path(params.path.as_path().display().to_string())
+ .with_file_span(r),
+ )
+ })
+ .collect::<Vec<_>>()
+ })
+ .collect();
+
+ let errors = diagnostics
+ .iter()
+ .filter(|d| d.severity() == Severity::Error) | There's also `Severity::Fatal` which would not be included here, should we maybe check `d.severity() >= Severity::Error`? |
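The suggested `>=` comparison works because severity levels form a total order. A minimal sketch with a local stand-in enum (mirroring the ordering assumed for `pg_diagnostics::Severity`, not the real type) shows why `== Error` silently drops fatal diagnostics while `>= Error` keeps them:

```rust
// Local stand-in for pg_diagnostics::Severity; derive order follows
// declaration order, so Fatal > Error > Warning > ...
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Hint,
    Information,
    Warning,
    Error,
    Fatal,
}

fn main() {
    let severities = [Severity::Warning, Severity::Error, Severity::Fatal];

    // `== Error` misses the fatal diagnostic...
    let eq_count = severities.iter().filter(|s| **s == Severity::Error).count();
    // ...while `>= Error` includes it.
    let ge_count = severities.iter().filter(|s| **s >= Severity::Error).count();

    assert_eq!(eq_count, 1);
    assert_eq!(ge_count, 2);
}
```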
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -291,6 +300,50 @@ impl Workspace for WorkspaceServer {
fn is_path_ignored(&self, params: IsPathIgnoredParams) -> Result<bool, WorkspaceError> {
Ok(self.is_ignored(params.pglsp_path.as_path()))
}
+
+ fn pull_diagnostics(
+ &self,
+ params: super::PullDiagnosticsParams,
+ ) -> Result<super::PullDiagnosticsResult, WorkspaceError> {
+ // get all statements form the requested document and pull diagnostics out of every
+ // sourcece
+ let doc = self
+ .documents
+ .get(¶ms.path)
+ .ok_or(WorkspaceError::not_found())?;
+
+ let diagnostics: Vec<SDiagnostic> = doc
+ .statement_refs_with_ranges()
+ .iter()
+ .flat_map(|(stmt, r)| {
+ let mut stmt_diagnostics = vec![];
+
+ stmt_diagnostics.extend(self.pg_query.pull_diagnostics(stmt)); | ```suggestion
let mut stmt_diagnostics = self.pg_query.pull_diagnostics(stmt);
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -160,7 +160,7 @@ impl LanguageServer for LSPServer {
self.setup_capabilities().await;
// Diagnostics are disabled by default, so update them after fetching workspace config
- // self.session.update_all_diagnostics().await;
+ self.session.update_all_diagnostics().await; | Feature: Check ✅ 🙌🏻 |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -89,9 +89,9 @@ pub(crate) async fn did_change(
session.insert_document(url.clone(), new_doc);
- // if let Err(err) = session.update_diagnostics(url).await {
- // error!("Failed to update diagnostics: {}", err);
- // }
+ if let Err(err) = session.update_diagnostics(url).await {
+ error!("Failed to update diagnostics: {}", err);
+ } | Should we inform the client as well? Or is it just for debugging? |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -35,9 +35,9 @@ pub(crate) async fn did_open(
session.insert_document(url.clone(), doc);
- // if let Err(err) = session.update_diagnostics(url).await {
- // error!("Failed to update diagnostics: {}", err);
- // }
+ if let Err(err) = session.update_diagnostics(url).await {
+ error!("Failed to update diagnostics: {}", err);
+ } | Same here, should we inform the client as well? |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -34,7 +34,7 @@ pub(crate) struct JunitReporterVisitor<'a>(pub(crate) Report, pub(crate) &'a mut
impl<'a> JunitReporterVisitor<'a> {
pub(crate) fn new(console: &'a mut dyn Console) -> Self {
- let report = Report::new("Biome");
+ let report = Report::new("PgLsp"); | 
|
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -83,12 +83,27 @@ impl From<(bool, bool)> for VcsTargeted {
pub enum TraversalMode {
/// A dummy mode to be used when the CLI is not running any command
Dummy,
+ /// This mode is enabled when running the command `check`
+ Check {
+ /// The type of fixes that should be applied when analyzing a file.
+ ///
+ /// It's [None] if the `check` command is called without `--apply` or `--apply-suggested`
+ /// arguments.
+ // fix_file_mode: Option<FixFileMode>,
+ /// An optional tuple. | ```suggestion
// fix_file_mode: Option<FixFileMode>,
/// An optional tuple.
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -237,6 +241,75 @@ impl Session {
}
}
+ /// Computes diagnostics for the file matching the provided url and publishes
+ /// them to the client. Called from [`handlers::text_document`] when a file's
+ /// contents changes.
+ #[tracing::instrument(level = "trace", skip_all, fields(url = display(&url), diagnostic_count), err)]
+ pub(crate) async fn update_diagnostics(&self, url: lsp_types::Url) -> Result<(), LspError> {
+ let pglsp_path = self.file_path(&url)?;
+ let doc = self.document(&url)?;
+ if self.configuration_status().is_error() && !self.notified_broken_configuration() {
+ self.set_notified_broken_configuration();
+ self.client
+ .show_message(MessageType::WARNING, "The configuration file has errors. Biome will report only parsing errors until the configuration is fixed.") | ```suggestion
.show_message(MessageType::WARNING, "The configuration file has errors. PgLSP will report only parsing errors until the configuration is fixed.")
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -237,6 +241,75 @@ impl Session {
}
}
+ /// Computes diagnostics for the file matching the provided url and publishes
+ /// them to the client. Called from [`handlers::text_document`] when a file's
+ /// contents changes.
+ #[tracing::instrument(level = "trace", skip_all, fields(url = display(&url), diagnostic_count), err)]
+ pub(crate) async fn update_diagnostics(&self, url: lsp_types::Url) -> Result<(), LspError> {
+ let pglsp_path = self.file_path(&url)?;
+ let doc = self.document(&url)?;
+ if self.configuration_status().is_error() && !self.notified_broken_configuration() {
+ self.set_notified_broken_configuration();
+ self.client
+ .show_message(MessageType::WARNING, "The configuration file has errors. Biome will report only parsing errors until the configuration is fixed.")
+ .await;
+ }
+
+ let diagnostics: Vec<lsp_types::Diagnostic> = {
+ let result = self.workspace.pull_diagnostics(PullDiagnosticsParams {
+ path: pglsp_path.clone(),
+ max_diagnostics: u64::MAX,
+ })?;
+
+ tracing::trace!("biome diagnostics: {:#?}", result.diagnostics); | ```suggestion
tracing::trace!("pglsp diagnostics: {:#?}", result.diagnostics);
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -30,6 +30,33 @@ pub struct ChangeFileParams {
pub changes: Vec<ChangeParams>,
}
+#[derive(Debug, serde::Serialize, serde::Deserialize)]
+pub struct PullDiagnosticsParams {
+ pub path: PgLspPath,
+ // pub categories: RuleCategories,
+ pub max_diagnostics: u64,
+ // pub only: Vec<RuleSelector>,
+ // pub skip: Vec<RuleSelector>, | Should we remove those? |
postgres_lsp | github_2023 | others | 155 | supabase-community | psteinroe | @@ -2,6 +2,34 @@ use pg_schema_cache::SchemaCache;
use crate::CompletionParams;
+#[derive(Debug, PartialEq, Eq)]
+pub enum ClauseType {
+ Select,
+ Where,
+ From,
+ Update,
+ Delete,
+}
+
+impl From<&str> for ClauseType {
+ fn from(value: &str) -> Self {
+ match value {
+ "select" => Self::Select,
+ "where" => Self::Where,
+ "from" => Self::From,
+ "update" => Self::Update,
+ "delete" => Self::Delete,
+            _ => panic!("Unimplemented ClauseType: {}", value), | Are we sure we want to panic here? We could also implement TryFrom |
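The `TryFrom` alternative the reviewer mentions replaces the panic with a recoverable error. A sketch against the `ClauseType` enum from the diff above (the `String` error type is an assumption; the real code might prefer a dedicated error enum):

```rust
// ClauseType as in the diff; TryFrom returns an Err instead of panicking
// on unknown clause names.
#[derive(Debug, PartialEq, Eq)]
pub enum ClauseType {
    Select,
    Where,
    From,
    Update,
    Delete,
}

impl TryFrom<&str> for ClauseType {
    // assumption: a plain String error for illustration
    type Error = String;

    fn try_from(value: &str) -> Result<Self, Self::Error> {
        match value {
            "select" => Ok(Self::Select),
            "where" => Ok(Self::Where),
            "from" => Ok(Self::From),
            "update" => Ok(Self::Update),
            "delete" => Ok(Self::Delete),
            other => Err(format!("Unimplemented ClauseType: {}", other)),
        }
    }
}

fn main() {
    assert_eq!(ClauseType::try_from("select"), Ok(ClauseType::Select));
    // an unknown clause now yields an Err the caller can handle
    assert!(ClauseType::try_from("join").is_err());
}
```

Callers then decide whether an unknown clause is fatal, instead of the conversion deciding for them.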
postgres_lsp | github_2023 | others | 155 | supabase-community | psteinroe | @@ -0,0 +1,162 @@
+use crate::{
+ builder::CompletionBuilder, context::CompletionContext, relevance::CompletionRelevanceData,
+ CompletionItem, CompletionItemKind,
+};
+
+pub fn complete_functions(ctx: &CompletionContext, builder: &mut CompletionBuilder) {
+ let available_functions = &ctx.schema_cache.functions;
+
+ let completion_items: Vec<CompletionItem> = available_functions
+ .iter()
+        .map(|foo| CompletionItem { | Nit: can't we loop over available functions and add the item directly? |
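The nit can be sketched with simplified stand-ins for the `pg_completions` types (not the crate's real definitions): a plain `for` loop pushes each item into the builder directly, skipping the intermediate `Vec<CompletionItem>` that `.map(..).collect()` allocates.

```rust
// Simplified stand-ins for pg_completions types, for illustration only.
#[derive(Debug, PartialEq)]
struct CompletionItem {
    label: String,
}

#[derive(Default)]
struct CompletionBuilder {
    items: Vec<CompletionItem>,
}

fn complete_functions(functions: &[&str], builder: &mut CompletionBuilder) {
    // push each item directly instead of collecting into a temporary Vec
    for f in functions {
        builder.items.push(CompletionItem {
            label: f.to_string(),
        });
    }
}

fn main() {
    let mut builder = CompletionBuilder::default();
    complete_functions(&["lower", "upper"], &mut builder);
    assert_eq!(builder.items.len(), 2);
    assert_eq!(builder.items[0].label, "lower");
}
```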
postgres_lsp | github_2023 | others | 150 | supabase-community | psteinroe | @@ -12,6 +12,7 @@ tree-sitter.workspace = true
tree_sitter_sql.workspace = true
pg_schema_cache.workspace = true
pg_test_utils.workspace = true
+tower-lsp.workspace = true | I would prefer to keep language server specifics out of the feature crates. you can find a good rationale on this [here](https://github.com/rust-lang/rust-analyzer/blob/master/docs/dev/architecture.md#crateside-crateside-db-crateside-assists-crateside-completion-crateside-diagnostics-crateside-ssr):
> Architecture Invariant: ide crate strives to provide a perfect API. Although at the moment it has only one consumer, the LSP server, LSP does not influence its API design. Instead, we keep in mind a hypothetical ideal client -- an IDE tailored specifically for rust, every nook and cranny of which is packed with Rust-specific goodies.
Biome does the same thing: lsp types are only used within the lsp service. it acts as a client to the workspace (biome_service), which in turn returns its own types. this makes the boundary clearer, and makes sure we are building a postgres tool, and not "just" the language server. especially since we will now have a cli with potentially a `lint` command from the start. the linter should not return lsp-specific `CodeActions` or `Diagnostics`, but just generic lint diagnostics that can also be used by a cli. |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -0,0 +1,28 @@
+use pg_lexer::SyntaxKind;
+
+use super::{
+ common::{parenthesis, statement, unknown},
+ Parser,
+};
+
+pub(crate) fn cte(p: &mut Parser) {
+ p.expect(SyntaxKind::With);
+
+ loop {
+ p.expect(SyntaxKind::Ident);
+ p.expect(SyntaxKind::As);
+ parenthesis(p);
+
+ if !p.eat(SyntaxKind::Ascii44) {
+ break;
+ }
+ }
+
+ statement(p);
+}
+
+pub(crate) fn select(p: &mut Parser) {
+ p.expect(SyntaxKind::Select);
+
+ unknown(p);
+} | this looks so simple, i really hope it works for complex statements :) great work! |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,137 +1,54 @@
///! Postgres Statement Splitter
///!
///! This crate provides a function to split a SQL source string into individual statements.
-///!
-///! TODO:
-///! Instead of relying on statement start tokens, we need to include as many tokens as
-///! possible. For example, a `CREATE TRIGGER` statement includes an `EXECUTE [ PROCEDURE |
-///! FUNCTION ]` clause, but `EXECUTE` is also a statement start token for an `EXECUTE` statement.
-/// We should expand the definition map to include an `Any*`, which must be followed by at least
-/// one required token and allows the parser to search for the end tokens of the statement. This
-/// will hopefully be enough to reduce collisions to zero.
-mod is_at_stmt_start;
mod parser;
mod syntax_error;
-use is_at_stmt_start::{is_at_stmt_start, TokenStatement, STATEMENT_START_TOKEN_MAPS};
-
-use parser::{Parse, Parser};
-
-use pg_lexer::{lex, SyntaxKind};
+use parser::{source, Parse, Parser};
pub fn split(sql: &str) -> Parse {
- let mut parser = Parser::new(lex(sql));
-
- while !parser.eof() {
- match is_at_stmt_start(&mut parser) {
- Some(stmt) => {
- parser.start_stmt();
-
- // advance over all start tokens of the statement
- for i in 0..STATEMENT_START_TOKEN_MAPS.len() {
- parser.eat_whitespace();
- let token = parser.nth(0, false);
- if let Some(result) = STATEMENT_START_TOKEN_MAPS[i].get(&token.kind) {
- let is_in_results = result
- .iter()
- .find(|x| match x {
- TokenStatement::EoS(y) | TokenStatement::Any(y) => y == &stmt,
- })
- .is_some();
- if i == 0 && !is_in_results {
- panic!("Expected statement start");
- } else if is_in_results {
- parser.expect(token.kind);
- } else {
- break;
- }
- }
- }
-
- // move until the end of the statement, or until the next statement start
- let mut is_sub_stmt = 0;
- let mut is_sub_trx = 0;
- let mut ignore_next_non_whitespace = false;
- while !parser.at(SyntaxKind::Ascii59) && !parser.eof() {
- match parser.nth(0, false).kind {
- SyntaxKind::All => {
- // ALL is never a statement start, but needs to be skipped when combining queries
- // (e.g. UNION ALL)
- parser.advance();
- }
- SyntaxKind::BeginP => {
- // BEGIN, consume until END
- is_sub_trx += 1;
- parser.advance();
- }
- SyntaxKind::EndP => {
- is_sub_trx -= 1;
- parser.advance();
- }
- // opening brackets "(", consume until closing bracket ")"
- SyntaxKind::Ascii40 => {
- is_sub_stmt += 1;
- parser.advance();
- }
- SyntaxKind::Ascii41 => {
- is_sub_stmt -= 1;
- parser.advance();
- }
- SyntaxKind::As
- | SyntaxKind::Union
- | SyntaxKind::Intersect
- | SyntaxKind::Except => {
- // ignore the next non-whitespace token
- ignore_next_non_whitespace = true;
- parser.advance();
- }
- _ => {
- // if another stmt FIRST is encountered, break
- // ignore if parsing sub stmt
- if ignore_next_non_whitespace == false
- && is_sub_stmt == 0
- && is_sub_trx == 0
- && is_at_stmt_start(&mut parser).is_some()
- {
- break;
- } else {
- if ignore_next_non_whitespace == true && !parser.at_whitespace() {
- ignore_next_non_whitespace = false;
- }
- parser.advance();
- }
- }
- }
- }
+ let mut parser = Parser::new(sql);
- parser.expect(SyntaxKind::Ascii59);
-
- parser.close_stmt();
- }
- None => {
- parser.advance();
- }
- }
- }
+ source(&mut parser);
parser.finish()
}
#[cfg(test)]
mod tests {
+ use ntest::timeout;
+
use super::*;
#[test]
- fn test_splitter() {
- let input = "select 1 from contact;\nselect 1;\nalter table test drop column id;";
+ #[timeout(1000)]
+ fn basic() {
+ let input = "select 1 from contact; select 1;";
let res = split(input);
- assert_eq!(res.ranges.len(), 3);
+ assert_eq!(res.ranges.len(), 2);
assert_eq!("select 1 from contact;", input[res.ranges[0]].to_string());
assert_eq!("select 1;", input[res.ranges[1]].to_string());
- assert_eq!(
- "alter table test drop column id;",
- input[res.ranges[2]].to_string()
- );
+ }
+
+ #[test]
+ fn no_semicolons() {
+ let input = "select 1 from contact\nselect 1";
+
+ let res = split(input);
+ assert_eq!(res.ranges.len(), 2);
+ assert_eq!("select 1 from contact", input[res.ranges[0]].to_string());
+ assert_eq!("select 1", input[res.ranges[1]].to_string()); | epic, also very readable |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,137 +1,54 @@
///! Postgres Statement Splitter
///!
///! This crate provides a function to split a SQL source string into individual statements.
-///!
-///! TODO:
-///! Instead of relying on statement start tokens, we need to include as many tokens as
-///! possible. For example, a `CREATE TRIGGER` statement includes an `EXECUTE [ PROCEDURE |
-///! FUNCTION ]` clause, but `EXECUTE` is also a statement start token for an `EXECUTE` statement.
-/// We should expand the definition map to include an `Any*`, which must be followed by at least
-/// one required token and allows the parser to search for the end tokens of the statement. This
-/// will hopefully be enough to reduce collisions to zero.
-mod is_at_stmt_start;
mod parser;
mod syntax_error;
-use is_at_stmt_start::{is_at_stmt_start, TokenStatement, STATEMENT_START_TOKEN_MAPS};
-
-use parser::{Parse, Parser};
-
-use pg_lexer::{lex, SyntaxKind};
+use parser::{source, Parse, Parser};
pub fn split(sql: &str) -> Parse {
- let mut parser = Parser::new(lex(sql));
-
- while !parser.eof() {
- match is_at_stmt_start(&mut parser) {
- Some(stmt) => {
- parser.start_stmt();
-
- // advance over all start tokens of the statement
- for i in 0..STATEMENT_START_TOKEN_MAPS.len() {
- parser.eat_whitespace();
- let token = parser.nth(0, false);
- if let Some(result) = STATEMENT_START_TOKEN_MAPS[i].get(&token.kind) {
- let is_in_results = result
- .iter()
- .find(|x| match x {
- TokenStatement::EoS(y) | TokenStatement::Any(y) => y == &stmt,
- })
- .is_some();
- if i == 0 && !is_in_results {
- panic!("Expected statement start");
- } else if is_in_results {
- parser.expect(token.kind);
- } else {
- break;
- }
- }
- }
-
- // move until the end of the statement, or until the next statement start
- let mut is_sub_stmt = 0;
- let mut is_sub_trx = 0;
- let mut ignore_next_non_whitespace = false;
- while !parser.at(SyntaxKind::Ascii59) && !parser.eof() {
- match parser.nth(0, false).kind {
- SyntaxKind::All => {
- // ALL is never a statement start, but needs to be skipped when combining queries
- // (e.g. UNION ALL)
- parser.advance();
- }
- SyntaxKind::BeginP => {
- // BEGIN, consume until END
- is_sub_trx += 1;
- parser.advance();
- }
- SyntaxKind::EndP => {
- is_sub_trx -= 1;
- parser.advance();
- }
- // opening brackets "(", consume until closing bracket ")"
- SyntaxKind::Ascii40 => {
- is_sub_stmt += 1;
- parser.advance();
- }
- SyntaxKind::Ascii41 => {
- is_sub_stmt -= 1;
- parser.advance();
- }
- SyntaxKind::As
- | SyntaxKind::Union
- | SyntaxKind::Intersect
- | SyntaxKind::Except => {
- // ignore the next non-whitespace token
- ignore_next_non_whitespace = true;
- parser.advance();
- }
- _ => {
- // if another stmt FIRST is encountered, break
- // ignore if parsing sub stmt
- if ignore_next_non_whitespace == false
- && is_sub_stmt == 0
- && is_sub_trx == 0
- && is_at_stmt_start(&mut parser).is_some()
- {
- break;
- } else {
- if ignore_next_non_whitespace == true && !parser.at_whitespace() {
- ignore_next_non_whitespace = false;
- }
- parser.advance();
- }
- }
- }
- }
+ let mut parser = Parser::new(sql);
- parser.expect(SyntaxKind::Ascii59);
-
- parser.close_stmt();
- }
- None => {
- parser.advance();
- }
- }
- }
+ source(&mut parser); | 
|
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,26 +1,30 @@
+mod common;
+mod data;
+mod dml;
+
+pub use common::source;
+
use std::cmp::min;
-use pg_lexer::{SyntaxKind, Token, TokenType, WHITESPACE_TOKENS};
+use pg_lexer::{lex, SyntaxKind, Token, WHITESPACE_TOKENS};
use text_size::{TextRange, TextSize};
use crate::syntax_error::SyntaxError;
/// Main parser that exposes the `cstree` api, and collects errors and statements
pub struct Parser {
/// The ranges of the statements
- ranges: Vec<(usize, usize)>,
+ ranges: Vec<TextRange>, | Should we add a comment that says that this is modelled after a Pratt parser, so future devs have an easier time understanding the strategy? |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -0,0 +1,14 @@
+use pg_lexer::SyntaxKind;
+
+pub static STATEMENT_START_TOKENS: &[SyntaxKind] = &[ | do we still need to export this if we export the helper below? |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -0,0 +1,100 @@
+use pg_lexer::{SyntaxKind, Token, TokenType};
+
+use super::{
+ data::at_statement_start,
+ dml::{cte, select},
+ Parser,
+};
+
+pub fn source(p: &mut Parser) {
+ loop {
+ match p.peek() {
+ Token {
+ kind: SyntaxKind::Eof,
+ ..
+ } => {
+ break;
+ }
+ Token {
+ token_type: TokenType::Whitespace | TokenType::NoKeyword,
+ ..
+ } => {
+ p.advance();
+ }
+ _ => {
+ statement(p);
+ }
+ }
+ }
+}
+
+pub(crate) fn statement(p: &mut Parser) {
+ p.start_stmt();
+ match p.peek().kind {
+ SyntaxKind::With => {
+ cte(p);
+ }
+ SyntaxKind::Select => {
+ select(p);
+ }
+ SyntaxKind::Insert => {
+ todo!();
+ // insert(p);
+ }
+ SyntaxKind::Update => {
+ todo!();
+ // update(p);
+ }
+ SyntaxKind::DeleteP => {
+ todo!();
+ // delete(p);
+ }
+ t => {
+ panic!("stmt: Unknown token {:?}", t); | ```suggestion
panic!("stmt: Unknown start token {:?}", t);
``` |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -33,164 +37,94 @@ pub struct Parse {
}
impl Parser {
- pub fn new(tokens: Vec<Token>) -> Self {
+ pub fn new(sql: &str) -> Self {
+ // we dont care about whitespace tokens, except for double newlines
+ // to make everything simpler, we just filter them out
+ // the token holds the text range, so we dont need to worry about that
+ let tokens = lex(sql)
+ .iter()
+ .filter(|t| {
+ return !WHITESPACE_TOKENS.contains(&t.kind)
+ || (t.kind == SyntaxKind::Newline && t.text.chars().count() > 1);
+ })
+ .rev()
+            .cloned() | nit: `into_iter()` instead of `.cloned()`?
I'm not sure, but if I understand it correctly, we could skip some allocations |
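The reviewer's hunch is right in principle: `into_iter()` consumes the `Vec` and moves each element out, so the per-element clone in `.iter().filter(..).rev().cloned()` can be dropped. A sketch with a stand-in `Token` type (not `pg_lexer::Token`):

```rust
// Stand-in for pg_lexer::Token; owning a String makes the clone cost visible.
#[derive(Debug, Clone, PartialEq)]
struct Token {
    text: String,
}

fn main() {
    let tokens = vec![
        Token { text: "select".into() },
        Token { text: " ".into() },
        Token { text: "1".into() },
    ];

    // into_iter() moves each Token; filter still sees &Token, so no clone needed.
    let filtered: Vec<Token> = tokens
        .into_iter()
        .filter(|t| !t.text.trim().is_empty())
        .rev()
        .collect();

    assert_eq!(filtered.len(), 2);
    assert_eq!(filtered[0].text, "1");
}
```

The trade-off: `into_iter()` takes ownership, so it only works if the lexed `Vec<Token>` isn't needed again afterwards, which is the case here since `lex(sql)` produces a fresh vector.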
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -33,164 +37,94 @@ pub struct Parse {
}
impl Parser {
- pub fn new(tokens: Vec<Token>) -> Self {
+ pub fn new(sql: &str) -> Self {
+ // we dont care about whitespace tokens, except for double newlines
+ // to make everything simpler, we just filter them out
+ // the token holds the text range, so we dont need to worry about that
+ let tokens = lex(sql)
+ .iter()
+ .filter(|t| {
+ return !WHITESPACE_TOKENS.contains(&t.kind)
+ || (t.kind == SyntaxKind::Newline && t.text.chars().count() > 1);
+ })
+ .rev()
+ .cloned()
+ .collect::<Vec<_>>();
+
Self {
- eof_token: Token::eof(usize::from(tokens.last().unwrap().span.end())),
ranges: Vec::new(),
+ eof_token: Token::eof(usize::from(tokens.first().unwrap().span.end())), | will this throw if somebody opens an empty sql file? |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -33,164 +37,94 @@ pub struct Parse {
}
impl Parser {
- pub fn new(tokens: Vec<Token>) -> Self {
+ pub fn new(sql: &str) -> Self {
+ // we dont care about whitespace tokens, except for double newlines
+ // to make everything simpler, we just filter them out
+ // the token holds the text range, so we dont need to worry about that
+ let tokens = lex(sql)
+ .iter()
+ .filter(|t| {
+ return !WHITESPACE_TOKENS.contains(&t.kind)
+ || (t.kind == SyntaxKind::Newline && t.text.chars().count() > 1);
+ })
+ .rev()
+ .cloned()
+ .collect::<Vec<_>>();
+
Self {
- eof_token: Token::eof(usize::from(tokens.last().unwrap().span.end())),
ranges: Vec::new(),
+ eof_token: Token::eof(usize::from(tokens.first().unwrap().span.end())),
errors: Vec::new(),
current_stmt_start: None,
tokens,
- pos: 0,
- whitespace_token_buffer: None,
+ last_token_end: None,
}
}
pub fn finish(self) -> Parse {
Parse {
- ranges: self
- .ranges
- .iter()
- .map(|(start, end)| {
- let from = self.tokens.get(*start);
- let to = self.tokens.get(end - 1);
- // get text range from token range
- let text_start = from.unwrap().span.start();
- let text_end = to.unwrap().span.end();
-
- TextRange::new(
- TextSize::try_from(text_start).unwrap(),
- TextSize::try_from(text_end).unwrap(),
- )
- })
- .collect(),
+ ranges: self.ranges,
errors: self.errors,
}
}
- pub fn start_stmt(&mut self) {
+ /// Start statement
+    pub fn start_stmt(&mut self) -> Token { | Should we return nothing here instead? It might be confusing when a dev works with this and assumes that the next `.peek()` yields a different token |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -46,66 +46,96 @@ impl Parser {
return !WHITESPACE_TOKENS.contains(&t.kind)
|| (t.kind == SyntaxKind::Newline && t.text.chars().count() > 1);
})
- .rev()
.cloned()
.collect::<Vec<_>>();
+ let eof_token = Token::eof(usize::from(
+ tokens
+ .last()
+ .map(|t| t.span.start())
+ .unwrap_or(TextSize::from(0)),
+ ));
+
+ // next_pos should be the initialised with the first valid token already
+ let mut next_pos = 0;
+ loop {
+ let token = tokens.get(next_pos).unwrap_or(&eof_token);
+
+ if is_irrelevant_token(token) {
+ next_pos += 1;
+ } else {
+ break;
+ }
+ }
+
Self {
ranges: Vec::new(),
- eof_token: Token::eof(usize::from(
- tokens
- .first()
- .map(|t| t.span.start())
- .unwrap_or(TextSize::from(0)),
- )),
+ eof_token,
errors: Vec::new(),
current_stmt_start: None,
tokens,
- last_token_end: None,
+ next_pos,
}
}
pub fn finish(self) -> Parse {
Parse {
- ranges: self.ranges,
+ ranges: self
+ .ranges
+ .iter()
+ .map(|(start, end)| {
+ println!("{} {}", start, end);
+ let from = self.tokens.get(*start);
+ let to = self.tokens.get(*end).unwrap_or(&self.eof_token);
+
+ TextRange::new(from.unwrap().span.start(), to.span.end())
+ })
+ .collect(),
errors: self.errors,
}
}
/// Start statement
- pub fn start_stmt(&mut self) -> Token {
+ pub fn start_stmt(&mut self) {
assert!(self.current_stmt_start.is_none());
-
- let token = self.peek();
-
- self.current_stmt_start = Some(token.span.start());
-
- token
+ self.current_stmt_start = Some(self.next_pos);
}
/// Close statement
pub fn close_stmt(&mut self) {
- self.ranges.push(TextRange::new(
+ assert!(self.next_pos > 0);
+
+ self.ranges.push((
self.current_stmt_start.expect("Expected active statement"),
- self.last_token_end.expect("Expected last token end"),
+ self.next_pos - 1,
));
self.current_stmt_start = None;
}
- fn advance(&mut self) -> Token {
- let token = self.tokens.pop().unwrap_or(self.eof_token.clone());
-
- self.last_token_end = Some(token.span.end());
-
- token
+ fn advance(&mut self) -> &Token {
+ let mut first_relevant_token = None;
+ loop {
+ let token = self.tokens.get(self.next_pos).unwrap_or(&self.eof_token); | self.peek? 🤓 |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,137 +1,68 @@
///! Postgres Statement Splitter
///!
///! This crate provides a function to split a SQL source string into individual statements.
-///!
-///! TODO:
-///! Instead of relying on statement start tokens, we need to include as many tokens as
-///! possible. For example, a `CREATE TRIGGER` statement includes an `EXECUTE [ PROCEDURE |
-///! FUNCTION ]` clause, but `EXECUTE` is also a statement start token for an `EXECUTE` statement.
-/// We should expand the definition map to include an `Any*`, which must be followed by at least
-/// one required token and allows the parser to search for the end tokens of the statement. This
-/// will hopefully be enough to reduce collisions to zero.
-mod is_at_stmt_start;
mod parser;
mod syntax_error;
-use is_at_stmt_start::{is_at_stmt_start, TokenStatement, STATEMENT_START_TOKEN_MAPS};
-
-use parser::{Parse, Parser};
-
-use pg_lexer::{lex, SyntaxKind};
+use parser::{source, Parse, Parser};
pub fn split(sql: &str) -> Parse {
- let mut parser = Parser::new(lex(sql));
+ let mut parser = Parser::new(sql);
- while !parser.eof() {
- match is_at_stmt_start(&mut parser) {
- Some(stmt) => {
- parser.start_stmt();
+ source(&mut parser);
+
+ parser.finish()
+}
- // advance over all start tokens of the statement
- for i in 0..STATEMENT_START_TOKEN_MAPS.len() {
- parser.eat_whitespace();
- let token = parser.nth(0, false);
- if let Some(result) = STATEMENT_START_TOKEN_MAPS[i].get(&token.kind) {
- let is_in_results = result
- .iter()
- .find(|x| match x {
- TokenStatement::EoS(y) | TokenStatement::Any(y) => y == &stmt,
- })
- .is_some();
- if i == 0 && !is_in_results {
- panic!("Expected statement start");
- } else if is_in_results {
- parser.expect(token.kind);
- } else {
- break;
- }
- }
- }
+#[cfg(test)]
+mod tests {
+ use ntest::timeout;
- // move until the end of the statement, or until the next statement start
- let mut is_sub_stmt = 0;
- let mut is_sub_trx = 0;
- let mut ignore_next_non_whitespace = false;
- while !parser.at(SyntaxKind::Ascii59) && !parser.eof() {
- match parser.nth(0, false).kind {
- SyntaxKind::All => {
- // ALL is never a statement start, but needs to be skipped when combining queries
- // (e.g. UNION ALL)
- parser.advance();
- }
- SyntaxKind::BeginP => {
- // BEGIN, consume until END
- is_sub_trx += 1;
- parser.advance();
- }
- SyntaxKind::EndP => {
- is_sub_trx -= 1;
- parser.advance();
- }
- // opening brackets "(", consume until closing bracket ")"
- SyntaxKind::Ascii40 => {
- is_sub_stmt += 1;
- parser.advance();
- }
- SyntaxKind::Ascii41 => {
- is_sub_stmt -= 1;
- parser.advance();
- }
- SyntaxKind::As
- | SyntaxKind::Union
- | SyntaxKind::Intersect
- | SyntaxKind::Except => {
- // ignore the next non-whitespace token
- ignore_next_non_whitespace = true;
- parser.advance();
- }
- _ => {
- // if another stmt FIRST is encountered, break
- // ignore if parsing sub stmt
- if ignore_next_non_whitespace == false
- && is_sub_stmt == 0
- && is_sub_trx == 0
- && is_at_stmt_start(&mut parser).is_some()
- {
- break;
- } else {
- if ignore_next_non_whitespace == true && !parser.at_whitespace() {
- ignore_next_non_whitespace = false;
- }
- parser.advance();
- }
- }
- }
- }
+ use super::*;
- parser.expect(SyntaxKind::Ascii59);
+ struct Tester {
+ input: String,
+ parse: Parse,
+ } | ah nice, that's convenient |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,12 +0,0 @@
-brin | YEAH ⛹️ |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -166,15 +162,9 @@ impl Change {
// if addition, expand the range
// if deletion, shrink the range
if self.is_addition() {
- *r = TextRange::new(
- r.start(),
- r.end() + TextSize::from(self.diff_size()),
- );
+ *r = TextRange::new(r.start(), r.end() + self.diff_size()); | hmm, strange that there's no method on the type for increasing the range 🤷 |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -44,18 +44,11 @@ impl Document {
pub fn new(url: PgLspPath, text: Option<String>) -> Document {
Document {
version: 0,
- line_index: LineIndex::new(&text.as_ref().unwrap_or(&"".to_string())),
+ line_index: LineIndex::new(text.as_ref().unwrap_or(&"".to_string())),
// TODO: use errors returned by split
- statement_ranges: text.as_ref().map_or_else(
- || Vec::new(),
- |f| {
- pg_statement_splitter::split(&f)
- .ranges
- .iter()
- .map(|range| range.clone())
- .collect()
- },
- ),
+ statement_ranges: text.as_ref().map_or_else(Vec::new, |f| {
+ pg_statement_splitter::split(f).ranges.to_vec()
+ }), | pretty cool changes here in the file! |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -52,145 +76,116 @@ impl Parser {
.iter()
.map(|(start, end)| {
let from = self.tokens.get(*start);
- let to = self.tokens.get(end - 1);
- // get text range from token range
- let text_start = from.unwrap().span.start();
- let text_end = to.unwrap().span.end();
-
- TextRange::new(
- TextSize::try_from(text_start).unwrap(),
- TextSize::try_from(text_end).unwrap(),
- )
+ let to = self.tokens.get(*end).unwrap_or(&self.eof_token);
+
+ TextRange::new(from.unwrap().span.start(), to.span.end())
})
.collect(),
errors: self.errors,
}
}
+ /// Start statement
pub fn start_stmt(&mut self) {
assert!(self.current_stmt_start.is_none());
- self.current_stmt_start = Some(self.pos);
+ self.current_stmt_start = Some(self.next_pos);
}
+ /// Close statement
pub fn close_stmt(&mut self) {
- assert!(self.current_stmt_start.is_some());
- self.ranges
- .push((self.current_stmt_start.take().unwrap(), self.pos));
- }
+ assert!(self.next_pos > 0);
- /// collects an SyntaxError with an `error` message at `pos`
- pub fn error_at_pos(&mut self, error: String, pos: usize) {
- self.errors.push(SyntaxError::new_at_offset(
- error,
- self.tokens
- .get(min(self.tokens.len() - 1, pos))
- .unwrap()
- .span
- .start(),
- ));
- }
+ // go back the positions until we find the first relevant token
+ let mut end_token_pos = self.next_pos - 1;
+ loop {
+ let token = self.tokens.get(end_token_pos);
- /// applies token and advances
- pub fn advance(&mut self) {
- assert!(!self.eof());
- if self.nth(0, false).kind == SyntaxKind::Whitespace {
- if self.whitespace_token_buffer.is_none() {
- self.whitespace_token_buffer = Some(self.pos);
+ if end_token_pos == 0 || token.is_none() {
+ break;
}
- } else {
- self.flush_token_buffer();
- }
- self.pos += 1;
- }
- /// flush token buffer and applies all tokens
- pub fn flush_token_buffer(&mut self) {
- if self.whitespace_token_buffer.is_none() {
- return;
- }
- while self.whitespace_token_buffer.unwrap() < self.pos {
- self.whitespace_token_buffer = Some(self.whitespace_token_buffer.unwrap() + 1);
- }
- self.whitespace_token_buffer = None;
- }
+ if !is_irrelevant_token(token.unwrap()) {
+ break;
+ }
- pub fn eat(&mut self, kind: SyntaxKind) -> bool {
- if self.at(kind) {
- self.advance();
- true
- } else {
- false
+ end_token_pos -= 1;
}
- }
- pub fn at_whitespace(&self) -> bool {
- self.nth(0, false).kind == SyntaxKind::Whitespace
+ self.ranges.push((
+ self.current_stmt_start.expect("Expected active statement"),
+ end_token_pos,
+ ));
+
+ self.current_stmt_start = None;
}
- pub fn eat_whitespace(&mut self) {
- while self.nth(0, false).token_type == TokenType::Whitespace {
- self.advance();
+ fn advance(&mut self) -> &Token {
+ let mut first_relevant_token = None;
+ loop {
+ let token = self.tokens.get(self.next_pos).unwrap_or(&self.eof_token);
+
+ // we need to continue with next_pos until the next relevant token after we already
+ // found the first one
+ if !is_irrelevant_token(token) {
+ if let Some(t) = first_relevant_token {
+ return t;
+ }
+ first_relevant_token = Some(token);
+ }
+
+ self.next_pos += 1;
}
}
- pub fn eof(&self) -> bool {
- self.pos == self.tokens.len()
+ fn peek(&self) -> &Token {
+ match self.tokens.get(self.next_pos) {
+ Some(token) => token,
+ None => &self.eof_token,
+ }
}
- /// lookahead method.
- ///
- /// if `ignore_whitespace` is true, it will skip all whitespace tokens
- pub fn nth(&self, lookahead: usize, ignore_whitespace: bool) -> &Token {
- if ignore_whitespace {
- let mut idx = 0;
- let mut non_whitespace_token_ctr = 0;
- loop {
- match self.tokens.get(self.pos + idx) {
- Some(token) => {
- if !WHITESPACE_TOKENS.contains(&token.kind) {
- if non_whitespace_token_ctr == lookahead {
- return token;
- }
- non_whitespace_token_ctr += 1;
- }
- idx += 1;
- }
- None => {
- return &self.eof_token;
- }
- }
+ fn look_back(&self) -> Option<&Token> {
+ // we need to look back to the last relevant token
+ let mut look_back_pos = self.next_pos - 1;
+ loop {
+ let token = self.tokens.get(look_back_pos);
+
+ if look_back_pos == 0 || token.is_none() {
+ return None;
}
- } else {
- match self.tokens.get(self.pos + lookahead) {
- Some(token) => token,
- None => &self.eof_token,
+
+ if !is_irrelevant_token(token.unwrap()) {
+ return token;
}
+
+ look_back_pos -= 1;
}
}
- /// checks if the current token is of `kind`
- pub fn at(&self, kind: SyntaxKind) -> bool {
- self.nth(0, false).kind == kind
+ /// checks if the current token is of `kind` and advances if true
+ /// returns true if the current token is of `kind`
+ pub fn eat(&mut self, kind: SyntaxKind) -> bool {
+ if self.peek().kind == kind {
+ self.advance();
+ true
+ } else {
+ false
+ }
}
pub fn expect(&mut self, kind: SyntaxKind) {
if self.eat(kind) {
return;
}
- if self.whitespace_token_buffer.is_some() {
- self.error_at_pos(
- format!(
- "Expected {:#?}, found {:#?}",
- kind,
- self.tokens[self.whitespace_token_buffer.unwrap()].kind
- ),
- self.whitespace_token_buffer.unwrap(),
- );
- } else {
- self.error_at_pos(
- format!("Expected {:#?}, found {:#?}", kind, self.nth(0, false)),
- self.pos + 1,
- );
- }
+
+ self.errors.push(SyntaxError::new(
+ format!("Expected {:#?}", kind),
+ self.peek().span,
+ )); | very nice! much cleaner. |
postgres_lsp | github_2023 | others | 141 | supabase-community | psteinroe | @@ -0,0 +1,43 @@
+use sqlx::PgPool;
+
+use crate::schema_cache::SchemaCacheItem;
+
+#[derive(Debug, Clone, Default)]
+pub struct Version {
+ pub version: Option<String>,
+ pub version_num: Option<i64>,
+ pub active_connections: Option<i64>,
+ pub max_connections: Option<i64>,
+}
+
+impl SchemaCacheItem for Version {
+ type Item = Version;
+
+ async fn load(pool: &PgPool) -> Vec<Version> {
+ sqlx::query_as!(
+ Version,
+ r#"select
+ version(),
+ current_setting('server_version_num') :: int8 AS version_num,
+ (
+ select
+ count(*) :: int8 AS active_connections
+ FROM
+ pg_stat_activity
+ ) AS active_connections,
+ current_setting('max_connections') :: int8 AS max_connections;"#
+ )
+ .fetch_all(pool) | why not just one row? |
postgres_lsp | github_2023 | others | 141 | supabase-community | psteinroe | @@ -6,13 +6,15 @@ use crate::functions::Function;
use crate::schemas::Schema;
use crate::tables::Table;
use crate::types::PostgresType;
+use crate::versions::Version;
#[derive(Debug, Clone, Default)]
pub struct SchemaCache {
pub schemas: Vec<Schema>,
pub tables: Vec<Table>,
pub functions: Vec<Function>,
pub types: Vec<PostgresType>,
+ pub versions: Vec<Version>, | any reason to have a Vec instead of a single struct? |
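One way to act on the question, sketched with illustrative names rather than the crate's actual types: since the `version()` query yields exactly one row, the cache can hold an `Option<Version>` populated from the first row instead of a `Vec`.

```rust
#[derive(Debug, Clone, Default, PartialEq)]
struct Version {
    version_num: Option<i64>,
}

// Illustrative shape only: a single optional value instead of `Vec<Version>`.
struct SchemaCache {
    version: Option<Version>,
}

impl SchemaCache {
    fn from_rows(rows: Vec<Version>) -> Self {
        // Take the first (and only) row; `None` if the query returned nothing.
        Self { version: rows.into_iter().next() }
    }
}
```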
postgres_lsp | github_2023 | others | 109 | supabase-community | psteinroe | @@ -412,7 +412,23 @@ impl<'p> LibpgQueryNodeParser<'p> {
}
/// list of aliases from https://www.postgresql.org/docs/current/datatype.html
-const ALIASES: [&[&str]; 2] = [&["integer", "int", "int4"], &["real", "float4"]];
+const ALIASES: [&[&str]; 15] = [
+ &["bigint", "int8"],
+ &["bigserial", "serial8"],
+ &["bit varying", "varbit"],
+ &["boolean", "bool"],
+ &["character", "char"],
+ &["character varying", "varchar"],
+ &["double precision", "float8"],
+ &["integer", "int", "int4"],
+ &["numeric", "decimal"],
+ &["real", "float4"],
+ &["smallint", "int2"],
+ &["smallserial", "serial2"],
+ &["serial", "serial4"],
+ &["time with time zone", "timetz"], | This will not work, because we are comparing token by token, and this text will be split up over multiplen tokens. It requires a larger change. |
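One possible direction for the multi-word aliases (an assumption for illustration, not the fix the project settled on) is to split each alias into words and compare a window of tokens instead of a single token, so aliases like `double precision` or `time with time zone` can still match a token-by-token stream:

```rust
/// Subset of the alias table, for illustration.
const ALIASES: [&[&str]; 2] = [&["double precision", "float8"], &["integer", "int", "int4"]];

/// Returns true when the token stream starting at `pos` spells out `alias`,
/// comparing word by word so multi-word aliases span several tokens.
fn matches_alias(tokens: &[&str], pos: usize, alias: &str) -> bool {
    let words: Vec<&str> = alias.split_whitespace().collect();
    tokens.len() >= pos + words.len()
        && words
            .iter()
            .zip(&tokens[pos..])
            .all(|(w, t)| w.eq_ignore_ascii_case(t))
}
```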
postgres_lsp | github_2023 | others | 109 | supabase-community | psteinroe | @@ -412,7 +412,21 @@ impl<'p> LibpgQueryNodeParser<'p> {
}
/// list of aliases from https://www.postgresql.org/docs/current/datatype.html
-const ALIASES: [&[&str]; 2] = [&["integer", "int", "int4"], &["real", "float4"]];
+const ALIASES: [&[&str]; 13] = [
+ &["bigint", "int8"],
+ &["bigserial", "serial8"],
+ &["bit varying", "varbit"],
+ &["boolean", "bool"],
+ &["character", "char"],
+ &["character varying", "varchar"], | These two are also multi word |
postgres_lsp | github_2023 | others | 104 | supabase-community | psteinroe | @@ -1,19 +1,40 @@
---
source: crates/parser/tests/statement_parser_test.rs
-description: "/* TODO: CREATE TABLE films2 AS SELECT * FROM films; */ SELECT 1;"
+description: CREATE TABLE films2 AS SELECT * FROM films;
---
Parse {
- cst: SourceFile@0..64
- CComment@0..55 "/* TODO: CREATE TABLE ..."
- SelectStmt@55..64
- Select@55..61 "SELECT"
- Whitespace@61..62 " "
- ResTarget@62..63
- AConst@62..63
- Iconst@62..63 "1"
- Ascii59@63..64 ";"
+ cst: SourceFile@0..42
+ Create@0..6 "CREATE" | It seems like the statement parser does not pick the root statement up. Can you check that? |
postgres_lsp | github_2023 | others | 100 | supabase-community | psteinroe | @@ -104,6 +104,16 @@ If you're not using VS Code, you can install the server by running:
cargo xtask install --server
```
+### Github CodeSpaces
+Currently, Windows does not support `libpg_query`. You can setup your development environment
+on [CodeSpaces](https://github.com/features/codespaces).
+
+After your codespace boots up, run the following command in the shell to install Rust:
+```shell
+curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
+```
+Proceed with the rest of the installation as usual.
+
The server binary will be installed in `.cargo/bin`. Make sure that `.cargo/bin` is in `$PATH`. | Shouldn't this paragraph be above your section? |
postgres_lsp | github_2023 | others | 95 | supabase-community | psteinroe | @@ -0,0 +1,11 @@
+CREATE UNLOGGED TABLE cities (name text, population real, altitude double, identifier smallint, postal_code int, foreign_id bigint);
+/* TODO: CREATE TABLE IF NOT EXISTS distributors (name varchar(40) DEFAULT 'Luso Films', len interval hour to second(3), name varchar(40) DEFAULT 'Luso Films', did int DEFAULT nextval('distributors_serial'), stamp timestamp DEFAULT now() NOT NULL, stamptz timestamp with time zone, "time" time NOT NULL, timetz time with time zone, CONSTRAINT name_len PRIMARY KEY (name, len)); */ SELECT 1;
+/* TODO: CREATE TABLE types (a real, b double precision, c numeric(2, 3), d char(4), e char(5), f varchar(6), g varchar(7)); */ SELECT 1; | @cvng why is this todo? |
postgres_lsp | github_2023 | others | 94 | supabase-community | cvng | @@ -692,6 +692,74 @@ fn custom_handlers(node: &Node) -> TokenStream {
tokens.push(TokenProperty::from(Token::With));
}
},
+ "CreatePublicationStmt" => quote! {
+ tokens.push(TokenProperty::from(Token::Create));
+ tokens.push(TokenProperty::from(Token::Publication));
+ if n.for_all_tables {
+ tokens.push(TokenProperty::from(Token::For));
+ tokens.push(TokenProperty::from(Token::All));
+ tokens.push(TokenProperty::from(Token::Tables));
+ }
+ if let Some(n) = n.options.first() {
+ tokens.push(TokenProperty::from(Token::With));
+ }
+ if let Some(n) = n.pubobjects.first() {
+ tokens.push(TokenProperty::from(Token::For));
+ if let Some(NodeEnum::PublicationObjSpec(n)) = &n.node {
+ match n.pubobjtype() {
+ protobuf::PublicationObjSpecType::PublicationobjTable => {
+ tokens.push(TokenProperty::from(Token::Table));
+ },
+ protobuf::PublicationObjSpecType::PublicationobjTablesInSchema => {
+ tokens.push(TokenProperty::from(Token::Tables));
+ tokens.push(TokenProperty::from(Token::InP));
+ tokens.push(TokenProperty::from(Token::Schema));
+ },
+ _ => panic!("Unknown CreatePublicationStmt {:#?}", n.pubobjtype())
+ }
+ }
+ }
+ if let Some(n) = n.pubobjects.last() {
+ if let Some(NodeEnum::PublicationObjSpec(n)) = &n.node {
+ match n.pubobjtype() {
+ protobuf::PublicationObjSpecType::PublicationobjTablesInSchema => {
+ tokens.push(TokenProperty::from(Token::Tables));
+ tokens.push(TokenProperty::from(Token::InP));
+ tokens.push(TokenProperty::from(Token::Schema));
+ },
+ _ => {}
+ }
+ } | sure, let's have this in a next PR with a `test_create_publication` test |
postgres_lsp | github_2023 | others | 88 | supabase-community | psteinroe | @@ -0,0 +1,86 @@
+---
+source: crates/parser/tests/statement_parser_test.rs
+description: CREATE DATABASE x OWNER abc CONNECTION LIMIT 5;
+---
+Parse {
+ cst: SourceFile@0..47
+ CreatedbStmt@0..47
+ Create@0..6 "CREATE"
+ Whitespace@6..7 " "
+ Database@7..15 "DATABASE"
+ Whitespace@15..16 " "
+ Ident@16..17 "x"
+ Whitespace@17..18 " "
+ DefElem@18..27
+ Owner@18..23 "OWNER"
+ Whitespace@23..24 " "
+ Ident@24..27 "abc"
+ Whitespace@27..28 " "
+ DefElem@28..38
+ Connection@28..38 "CONNECTION"
+ Whitespace@38..39 " "
+ Limit@39..44 "LIMIT"
+ Whitespace@44..45 " "
+      Iconst@45..46 "5" | These should be part of the `DefElem` node |
postgres_lsp | github_2023 | others | 88 | supabase-community | psteinroe | @@ -0,0 +1,47 @@
+---
+source: crates/parser/tests/statement_parser_test.rs
+description: "\nCREATE DATABASE x LOCATION DEFAULT;"
+---
+Parse {
+ cst: SourceFile@0..36
+ Newline@0..1 "\n"
+ CreatedbStmt@1..36
+ Create@1..7 "CREATE"
+ Whitespace@7..8 " "
+ Database@8..16 "DATABASE"
+ Whitespace@16..17 " "
+ Ident@17..18 "x"
+ Whitespace@18..19 " "
+ DefElem@19..27
+ Location@19..27 "LOCATION"
+ Whitespace@27..28 " "
+      Default@28..35 "DEFAULT" | Should also be part of the `DefElem` node |
postgres_lsp | github_2023 | others | 72 | supabase-community | cvng | @@ -481,6 +481,48 @@ fn custom_handlers(node: &Node) -> TokenStream {
tokens.push(TokenProperty::from(Token::As));
}
},
+ "DefineStmt" => quote! {
+ tokens.push(TokenProperty::from(Token::Create));
+ if n.replace {
+ tokens.push(TokenProperty::from(Token::Or));
+ tokens.push(TokenProperty::from(Token::Replace));
+ }
+ match n.kind() {
+ protobuf::ObjectType::ObjectAggregate => {
+ tokens.push(TokenProperty::from(Token::Aggregate));
+
+ // n.args is always an array with two nodes
+ assert_eq!(n.args.len(), 2, "DefineStmt of type ObjectAggregate does not have exactly 2 args");
+ // the first is either a List or a Node { node: None }
+
+ if let Some(node) = &n.args.first() {
+ if node.node.is_none() {
+ // if first element is a Node { node: None }, then it's "*"
+ tokens.push(TokenProperty::from(Token::Ascii42));
+ } else if let Some(node) = &node.node {
+ if let NodeEnum::List(_) = node {
+ // there *seems* to be an integer node in the last position of args that
+ // defines whether the list contains an order by statement
+ let integer = n.args.last()
+ .and_then(|node| node.node.as_ref())
+ .and_then(|node| if let NodeEnum::Integer(n) = node { Some(n.ival) } else { None });
+ if integer.is_none() {
+ panic!("DefineStmt of type ObjectAggregate has no integer node in last position of args");
+ }
+ // if the integer is 1, then there is an order by statement
+ // BUT: the order by tokens should be part of the List or maybe
+ // even the last FunctionParameter node in the list
+ if integer.unwrap() == 1 {
+ tokens.push(TokenProperty::from(Token::Order));
+ tokens.push(TokenProperty::from(Token::By));
+ }
+                        } | @psteinroe I'm not comfortable enough with the AST to know what a proper solution should be. Does the test in #69 pass with the same output?
As I understand it, I would go with solution 3 (direct children of `DefineStmt`) - based on the docs - an aggregate with `order by` is kind of a special case
> The syntax with ORDER BY in the parameter list creates a special type of aggregate called an ordered-set aggregate;
(also, I'm not sure this syntax exists elsewhere)
I'll let you close the previous PR if you think this is a better approach |
postgres_lsp | github_2023 | others | 67 | supabase-community | psteinroe | @@ -66,10 +66,7 @@ mod tests {
debug!("selected node: {:#?}", node_graph[node_index]);
- assert!(node_graph[node_index]
- .properties
- .iter()
- .all(|p| { expected.contains(p) }));
+ assert_eq!(node_graph[node_index].properties, expected); | Can you add a comment there that even though we test for strict equality of the two vectors the order of the properties does not have to match the order of the tokens in the string? |
postgres_lsp | github_2023 | others | 65 | supabase-community | psteinroe | @@ -439,6 +439,17 @@ fn custom_handlers(node: &Node) -> TokenStream {
tokens.push(TokenProperty::from(Token::Or));
tokens.push(TokenProperty::from(Token::Replace));
}
+ if let Some(n) = &n.view {
+            match n.relpersistence.as_str() { | that's an interesting case! I agree with your reasoning. To give a bit of context: take the substring `create temporary view comedies` as an example. The `create` and the `view` token should be part of the `ViewStmt` itself, while the `temporary` is definitely part of the `RangeVar` node. We cannot create a valid tree out of this, since we would have to open `RangeVar` at `temporary`, close it to go back to `ViewStmt` and then open it again to be back at `RangeVar` for `comedies`. So yes, we should pull `temporary` back up into `ViewStmt`. thanks! |
postgres_lsp | github_2023 | others | 61 | supabase-community | psteinroe | @@ -529,6 +529,16 @@ fn custom_handlers(node: &Node) -> TokenStream {
"TypeCast" => quote! {
tokens.push(TokenProperty::from(Token::Typecast));
},
+ "CreateDomainStmt" => quote! {
+ tokens.push(TokenProperty::from(Token::Create));
+ tokens.push(TokenProperty::from(Token::DomainP));
+ if n.type_name.is_some() {
+ tokens.push(TokenProperty::from(Token::As));
+ }
+ if n.constraints.len() > 0 {
+ tokens.push(TokenProperty::from(Token::Check));
+    } | `check` should be part of the `Constraint` node, right? It's currently implemented as
```rust
"Constraint" => quote! {
match n.contype {
// ConstrNotnull
2 => {
tokens.push(TokenProperty::from(Token::Not));
tokens.push(TokenProperty::from(Token::NullP));
},
// ConstrDefault
3 => tokens.push(TokenProperty::from(Token::Default)),
// ConstrCheck
6 => tokens.push(TokenProperty::from(Token::Check)),
// ConstrPrimary
7 => {
tokens.push(TokenProperty::from(Token::Primary));
tokens.push(TokenProperty::from(Token::Key));
},
// ConstrForeign
10 => tokens.push(TokenProperty::from(Token::References)),
_ => panic!("Unknown Constraint {:#?}", n.contype),
}
},
```
similarly, `As` should be part of the `TypeName` node. |
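The integer discriminants in the quoted handler (2, 3, 6, ...) with variant names only in comments could instead match on the prost-generated enum accessor (e.g. `n.contype()`), so the compiler flags unhandled variants. Sketched here with a stand-in enum, since the real protobuf types aren't reproduced in this snippet:

```rust
// Stand-in for the prost-generated `ConstrType` enum.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ConstrType {
    NotNull,
    Default,
    Check,
    Primary,
    Foreign,
}

// Matching on the enum makes the intent explicit, unlike raw integer
// discriminants that need a comment per arm.
fn tokens_for(contype: ConstrType) -> Vec<&'static str> {
    match contype {
        ConstrType::NotNull => vec!["NOT", "NULL"],
        ConstrType::Default => vec!["DEFAULT"],
        ConstrType::Check => vec!["CHECK"],
        ConstrType::Primary => vec!["PRIMARY", "KEY"],
        ConstrType::Foreign => vec!["REFERENCES"],
    }
}
```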
postgres_lsp | github_2023 | others | 61 | supabase-community | psteinroe | @@ -87,4 +87,18 @@ mod tests {
],
)
}
+
+ #[test]
+ fn test_create_domain() {
+ test_get_node_properties(
+ "create domain us_postal_code as text check (value is not null);",
+ SyntaxKind::CreateDomainStmt,
+ vec
+
+Install: [instructions](https://pgtools.dev/#installation)
+
+- [CLI releases](https://github.com/supabase-community/postgres-language-server/releases)
+- [VSCode](https://marketplace.visualstudio.com/items?itemName=Supabase.postgrestools-vscode)
+- [Neovim](https://github.com/neovim/nvim-lspconfig/blob/master/doc/configs.md#postgres_lsp) | @psteinroe reminder to change over in the other repo if possible:
```suggestion
- [Neovim](https://github.com/neovim/nvim-lspconfig/blob/master/doc/configs.md#postgres_language_server)
``` |
postgres_lsp | github_2023 | others | 261 | supabase-community | w3b6x9 | @@ -4,11 +4,19 @@
A collection of language tools and a Language Server Protocol (LSP) implementation for Postgres, focusing on developer experience and reliable SQL tooling.
+Docs: [pgtools.dev](https://pgtools.dev/)
+
+Install: [instructions](https://pgtools.dev/#installation)
+
+- [CLI releases](https://github.com/supabase-community/postgres-language-server/releases)
+- [VSCode](https://marketplace.visualstudio.com/items?itemName=Supabase.postgrestools-vscode) | ```suggestion
- [VSCode](https://marketplace.visualstudio.com/items?itemName=Supabase.postgrestools)
``` |
svsm | github_2023 | others | 654 | coconut-svsm | peterfang | @@ -92,8 +93,14 @@ global_asm!(
* environment and context structure from the address space. */
movq %r8, %cr0
movq %r10, %cr4
+
+ /* Check to see whether EFER.LME is specified. If not, then EFER
+ * should not be reloaded. */
+ testl ${LME}, %ecx | `%eax`? |
svsm | github_2023 | others | 652 | coconut-svsm | peterfang | @@ -364,16 +364,22 @@ pub fn send_ipi(
}
}
_ => {
+ let mut target_count: usize = 0;
for cpu in PERCPU_AREAS.iter() {
- ipi_board.pending.fetch_add(1, Ordering::Relaxed);
- cpu.as_cpu_ref().ipi_from(sender_cpu_index);
+ // Ignore the current CPU and CPUs that are not online.
+ let cpu_shared = cpu.as_cpu_ref();
+ if cpu_shared.is_online() && cpu_shared.apic_id() != this_cpu().get_apic_id() {
+ target_count += 1;
+ cpu_shared.ipi_from(sender_cpu_index);
+ } | Would something like this be a bit cleaner?
```suggestion
for cpu in PERCPU_AREAS
.iter()
.map(|c| c.as_cpu_ref())
.filter(|c| c.is_online() && c.apic_id() != this_cpu().get_apic_id())
{
target_count += 1;
cpu.ipi_from(sender_cpu_index);
}
``` |
svsm | github_2023 | others | 652 | coconut-svsm | peterfang | @@ -364,16 +364,22 @@ pub fn send_ipi(
}
}
_ => {
+ let mut target_count: usize = 0;
for cpu in PERCPU_AREAS.iter() {
- ipi_board.pending.fetch_add(1, Ordering::Relaxed);
- cpu.as_cpu_ref().ipi_from(sender_cpu_index);
+ // Ignore the current CPU and CPUs that are not online.
+ let cpu_shared = cpu.as_cpu_ref();
+ if cpu_shared.is_online() && cpu_shared.apic_id() != this_cpu().get_apic_id() { | Since `sender_cpu_index` is already supplied as input, does it make sense to use `cpu_shared.cpu_index() != sender_cpu_index` instead? |
svsm | github_2023 | others | 629 | coconut-svsm | msft-jlange | @@ -0,0 +1,38 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0
+//
+// Copyright (c) 2025 Intel Corporation.
+//
+// Author: Chuanxiao Dong <chuanxiao.dong@intel.com>
+
+use super::idt::common::X86ExceptionContext;
+use crate::error::SvsmError;
+use crate::tdx::tdcall::{tdcall_get_ve_info, tdvmcall_cpuid};
+use crate::tdx::TdxError;
+
+const VMX_EXIT_REASON_CPUID: u32 = 10;
+
+pub fn handle_virtualization_exception(ctx: &mut X86ExceptionContext) -> Result<(), SvsmError> {
+ let veinfo = tdcall_get_ve_info().expect("Failed to get #VE info");
+
+ match veinfo.exit_reason {
+ VMX_EXIT_REASON_CPUID => handle_cpuid(ctx), | I'd like to consider adding IO emulation in the near future because that should be low-hanging fruit. There is much that can be copied from the instruction emulator and/or the #VC handler. |
svsm | github_2023 | others | 629 | coconut-svsm | msft-jlange | @@ -0,0 +1,38 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0 | Can we put this file under `kernel\src\tdx` instead? I'd like to move towards a model where architecture-specific code is in architecture-specific directories. There's a lot of SNP code that doesn't follow this pattern today but I'd like to avoid making it any worse than it already is. |
svsm | github_2023 | others | 642 | coconut-svsm | tlendacky | @@ -185,6 +185,12 @@ impl SvsmPlatform for SnpPlatform {
}
}
+ fn determine_cet_support(&self) -> bool {
+ // CET is supported on all SNP platforms, and CPUID should not be
+ // consulted to determine this.
+ true | Hypervisor support is required to ensure that the proper MSRs are not intercepted. This is typically communicated to the guest by providing the guest with appropriate CPUID information in the CPUID table that has been vetted by firmware. If the leaf isn't present or the bit isn't set, maybe it can be a build time option as to whether the SVSM should continue. |
svsm | github_2023 | others | 636 | coconut-svsm | msft-jlange | @@ -1,16 +1,99 @@
// SPDX-License-Identifier: MIT
//
// Copyright (c) Microsoft Corporation
+// Copyright (c) SUSE LLC
//
// Author: Jon Lange <jlange@microsoft.com>
+// Author: Joerg Roedel <jroedel@suse.de>
-pub const APIC_MSR_EOI: u32 = 0x80B;
-pub const APIC_MSR_ISR: u32 = 0x810;
-pub const APIC_MSR_ICR: u32 = 0x830;
+use crate::cpu::msr::{read_msr, write_msr};
-// Returns the MSR offset and bitmask to identify a specific vector in an
-// APIC register (IRR, ISR, or TMR).
+/// End-of-Interrupt register MSR offset
+pub const MSR_X2APIC_EOI: u32 = 0x80B;
+/// Spurious-Interrupt-Register MSR offset
+pub const MSR_X2APIC_SPIV: u32 = 0x80F;
+/// Interrupt-Service-Register base MSR offset
+pub const MSR_X2APIC_ISR: u32 = 0x810;
+/// Interrupt-Control-Register MSR offset
+pub const MSR_X2APIC_ICR: u32 = 0x830;
+
+const MSR_APIC_BASE: u32 = 0x1B;
+const APIC_ENABLE_MASK: u64 = 0x800;
+const APIC_X2_ENABLE_MASK: u64 = 0x400;
+
+// SPIV bits
+const APIC_SPIV_VECTOR_MASK: u64 = (1u64 << 8) - 1;
+const APIC_SPIV_SW_ENABLE_MASK: u64 = 1 << 8;
+
+/// Get the MSR offset relative to a bitmap base MSR and the mask for the MSR
+/// value to check for a specific vector bit being set in IRR, ISR, or TMR.
+///
+/// # Returns
+///
+/// A `(u32, u32)` tuple with the MSR offset as the first and the vector
+/// bitmask as the second value.
pub fn apic_register_bit(vector: usize) -> (u32, u32) {
let index: u8 = vector as u8;
((index >> 5) as u32, 1 << (index & 0x1F))
}
+
+/// Enables the X2APIC by setting the AE and EXTD bits in the APIC base address
+/// register.
+pub fn x2apic_enable() {
+ // Enable X2APIC mode.
+ let apic_base = read_msr(MSR_APIC_BASE);
+ let apic_base_x2_enabled = apic_base | APIC_ENABLE_MASK | APIC_X2_ENABLE_MASK;
+ if apic_base != apic_base_x2_enabled {
+ write_msr(MSR_APIC_BASE, apic_base_x2_enabled);
+ }
+ // Set SW-enable in SPIV to enable IRQ delivery
+ x2apic_sw_enable();
+}
+
+/// Send an End-of-Interrupt notification to the X2APIC.
+pub fn x2apic_eoi() {
+ write_msr(MSR_X2APIC_EOI, 0);
+} | If we keep EOI in the platform abstraction, then this could be written like this and avoid #VC in the SNP case.
```suggestion
pub fn x2apic_eoi(wrmsr: FnOnce<(u32, u64)>) {
wrmsr(MSR_X2APIC_EOI, 0);
}
```
Passing a closure to perform the WRMSR permits each platform to implement this optimally without requiring any virtualization exception. |
svsm | github_2023 | others | 636 | coconut-svsm | msft-jlange | @@ -200,10 +199,7 @@ impl SvsmPlatform for TdpPlatform {
fn eoi(&self) {} | Should we implement this while we're at it?
```suggestion
fn eoi(&self) {
x2apic_eoi();
}
``` |
svsm | github_2023 | others | 584 | coconut-svsm | msft-jlange | @@ -93,11 +104,12 @@ impl GDT {
pub fn load_tss(&mut self, tss: &X86Tss) {
let (desc0, desc1) = tss.to_gdt_entry();
- unsafe {
- self.set_tss_entry(desc0, desc1);
- asm!("ltr %ax", in("ax") SVSM_TSS, options(att_syntax));
- self.clear_tss_entry()
- }
+ self.set_tss_entry(desc0, desc1);
+ // SAFETY: loading the task register must be done in assembly.
+ // It's safe to do so as long as a global GDT is in use and still
+ // allocated, which is always our case. | Actually, it is not necessarily the case that a global GDT is in use here. However, the lifetime of the GDT doesn't matter, because once the task register is loaded, only the TSS needs to remain live. For that reason, either this whole function should be `unsafe` (because the compiler cannot prove the lifetime of the `tss` parameter) or the `tss` parameter should be declared as `&'static`. A quick experiment suggests that the latter approach is workable as long as we modify `PerCpu::load()` and `PerCpu::load_isst()` to take `&'static self`. Once that is done, this comment should be updated to indicate that the assembly here is safe because `tss` has a static lifetime.
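The `&'static` approach can be illustrated with a reduced sketch (all names simplified from the real SVSM types): once `ltr` executes, the CPU keeps referring to the TSS, so requiring a `&'static` parameter lets the borrow checker enforce the only lifetime that actually matters.

```rust
struct X86Tss {
    rsp0: u64,
}

struct Gdt;

impl Gdt {
    /// Requiring `&'static` guarantees the TSS outlives the task
    /// register's use of it, for the whole program.
    fn load_tss(&mut self, tss: &'static X86Tss) -> u64 {
        // Stand-in for building the descriptor pair and executing
        // `ltr`; with `tss: &'static`, the real inline assembly would
        // be sound without marking the whole function `unsafe`.
        tss.rsp0
    }
}

// A TSS with static lifetime, as the signature demands.
static TSS: X86Tss = X86Tss { rsp0: 0x1000 };
```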
svsm | github_2023 | others | 584 | coconut-svsm | msft-jlange | @@ -577,7 +577,9 @@ mod tests {
fn test_wrmsr_tsc_aux() {
if is_qemu_test_env() && is_test_platform_type(SvsmPlatformType::Snp) {
let test_val = 0x1234;
- verify_ghcb_gets_altered(|| write_msr(MSR_TSC_AUX, test_val));
+ verify_ghcb_gets_altered(||
+ // SAFETY: writing to TSC MSR doesn't break memory safety. | Should say `TSC_AUX MSR`. |
svsm | github_2023 | others | 542 | coconut-svsm | joergroedel | @@ -0,0 +1,18 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0
+//
+// Copyright (c) 2024 Red Hat, Inc.
+//
+// Author: Stefano Garzarella <sgarzare@redhat.com>
+// Author: Oliver Steffen <osteffen@redhat.com>
+
+#[derive(Debug)]
+pub enum BlockDeviceError {
+ Failed, // ToDo: insert proper errors
+}
+
+pub trait BlockDriver {
+ fn read_blocks(&self, block_id: usize, buf: &mut [u8]) -> Result<(), BlockDeviceError>;
fn write_blocks(&self, block_id: usize, buf: &[u8]) -> Result<(), BlockDeviceError>; | This interface makes the safe-unsafe memory boundary a bit blurry and looks unsafe in itself (due to the slice parameters). It somewhat resembles the interface of the virtio-blk driver crate. Maybe the best strategy is to import the drivers into our code base and improve their interfaces.
Some things I'd like to see:
* Asynchronous interface with a separate request tracking data structure and polling methods.
* Better abstractions for the source/target memory regions of block access. |
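One possible shape for the asynchronous interface requested above (all names hypothetical, memory abstractions elided): submission returns a token for a separate request-tracking structure, and a poll method reports completion without blocking.

```rust
#[derive(Debug)]
enum BlockDeviceError {
    Failed,
}

enum RequestState {
    InFlight,
    Done,
    Error(BlockDeviceError),
}

trait AsyncBlockDriver {
    /// Queue a read of `count` blocks starting at `block_id`;
    /// returns a token used to poll for completion.
    fn submit_read(&mut self, block_id: usize, count: usize) -> Result<u64, BlockDeviceError>;
    /// Check whether the request identified by `token` has finished.
    fn poll(&mut self, token: u64) -> RequestState;
}

// Toy in-memory driver used only to exercise the interface shape.
struct MockDriver {
    next: u64,
    done: Vec<u64>,
}

impl AsyncBlockDriver for MockDriver {
    fn submit_read(&mut self, _block_id: usize, _count: usize) -> Result<u64, BlockDeviceError> {
        let t = self.next;
        self.next += 1;
        self.done.push(t); // the mock completes every request instantly
        Ok(t)
    }

    fn poll(&mut self, token: u64) -> RequestState {
        if self.done.contains(&token) {
            RequestState::Done
        } else {
            RequestState::InFlight
        }
    }
}
```

A real driver would additionally take an abstraction over the source/target memory region instead of a raw slice, which is the second item on the reviewer's list.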
svsm | github_2023 | others | 542 | coconut-svsm | joergroedel | @@ -319,6 +319,12 @@ pub extern "C" fn svsm_main() {
panic!("Failed to launch FW: {e:#?}");
}
+ {
+ use svsm::block::virtio_blk;
+ static MMIO_BASE: u64 = 0xfef03000;
+ let _blk = virtio_blk::VirtIOBlkDriver::new(PhysAddr::from(MMIO_BASE));
+ } | We should start thinking about a proper detection mechanism for SVSM-assigned devices. |
svsm | github_2023 | others | 542 | coconut-svsm | joergroedel | @@ -0,0 +1,227 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 Red Hat, Inc.
+//
+// Author: Oliver Steffen <osteffen@redhat.com>
+
+extern crate alloc;
+use crate::locking::SpinLock;
+use alloc::vec::Vec;
+use core::{
+ cell::OnceCell,
+ ptr::{addr_of, NonNull},
+};
+use zerocopy::{FromBytes, Immutable, IntoBytes};
+
+use crate::{
+ address::{PhysAddr, VirtAddr},
+ cpu::{self, percpu::this_cpu},
+ mm::{page_visibility::*, *},
+};
+
+struct PageStore {
+ pages: Vec<(PhysAddr, SharedBox<[u8; PAGE_SIZE]>)>,
+} | Nothing wrong with this code, but I think this uncovers a fundamental problem with `SharedBox` which we need to solve separately. Reading any any memory in a `SharedBox` is UB, so it should have a `read()/write()` interface instead of direct data access. |
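The `read()/write()` interface suggested for `SharedBox` could look roughly like the sketch below (hypothetical API, with ordinary memory standing in for a host-shared page): callers only ever get copies, never references into the shared region, and the accesses are volatile so the compiler never assumes a stable value. Whether volatile access fully resolves the UB concern for host-concurrent memory is itself a subtle question, so treat this as a shape, not a soundness proof.

```rust
use core::cell::UnsafeCell;
use core::ptr;

struct SharedBox<T: Copy> {
    // In the real SVSM this memory is shared with the host; here it
    // is ordinary process memory standing in for it.
    slot: UnsafeCell<T>,
}

impl<T: Copy> SharedBox<T> {
    fn new(v: T) -> Self {
        Self { slot: UnsafeCell::new(v) }
    }

    /// Copy the current contents out of the shared region.
    fn read(&self) -> T {
        // SAFETY: the pointer is valid and aligned; the volatile read
        // never lets the compiler cache or reorder the access.
        unsafe { ptr::read_volatile(self.slot.get()) }
    }

    /// Copy `v` into the shared region.
    fn write(&self, v: T) {
        // SAFETY: as above, for the volatile write.
        unsafe { ptr::write_volatile(self.slot.get(), v) }
    }
}
```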
svsm | github_2023 | others | 614 | coconut-svsm | msft-jlange | @@ -126,7 +126,7 @@ pub fn construct_native_start_context(
context.gs_base = segment.base;
}
X86Register::Cr0(r) => {
- context.cr0 = *r;
+ context.cr0 = *r & !0x8000_0000; | Can you explain? I don't think this is desirable on all targets. |
svsm | github_2023 | others | 614 | coconut-svsm | msft-jlange | @@ -135,7 +135,7 @@ pub fn construct_native_start_context(
context.cr4 = *r;
}
X86Register::Efer(r) => {
- context.efer = *r;
+ context.efer = *r & !0x500; | Can you explain? I don't think this is desirable on all targets. |
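For context on the two masks being questioned, the magic numbers decode as follows (bit positions per the AMD64 architecture manuals): `!0x8000_0000` clears CR0.PG (paging) and `!0x500` clears EFER.LME and EFER.LMA (long-mode enable/active), i.e. the start context is forced to begin with paging and long mode disabled, which is why the reviewer notes it may not be desirable on all targets.

```rust
// CR0 bit 31: paging enable.
const CR0_PG: u64 = 1 << 31;
// EFER bit 8: long mode enable; bit 10: long mode active.
const EFER_LME: u64 = 1 << 8;
const EFER_LMA: u64 = 1 << 10;
```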
svsm | github_2023 | others | 614 | coconut-svsm | joergroedel | @@ -0,0 +1,35 @@
+{
+ "igvm": {
+ "qemu": {
+ "output": "coconut-qemu.igvm",
+ "platforms": [
+ "snp",
+ "native"
+ ],
+ "policy": "0x30000",
+ "measure": "print",
+ "check-kvm": true
+ }
+ },
+ "kernel": {
+ "svsm": {
+ "features": "nosmep,nosmap",
+ "binary": true
+ },
+ "stage2": {
+ "manifest": "kernel/Cargo.toml",
+ "binary": true,
+ "objcopy": "binary"
+ }
+ },
+ "firmware": {
+ "env": "FW_FILE"
+ },
+ "fs": {
+ "modules": {
+ "userinit": {
+ "path": "/init"
+ }
+ }
+ }
+} | There is no need for a separate target definition. Just add `native` as a platform to `qemu-target.json`. |
svsm | github_2023 | others | 626 | coconut-svsm | deeglaze | @@ -139,7 +140,8 @@ impl GpaMap {
let igvm_param_block = GpaRange::new_page(kernel_fs.get_end())?;
let general_params = GpaRange::new_page(igvm_param_block.get_end())?;
- let memory_map = GpaRange::new_page(general_params.get_end())?;
+ let madt = GpaRange::new_page(general_params.get_end())?; | Is this table something we can measure into a service manifest and/or rtmr? With the oem and table ids undigested for lookup purposes? |
svsm | github_2023 | others | 626 | coconut-svsm | joergroedel | @@ -345,12 +349,17 @@ impl IgvmBuilder {
});
}
- // Create the two parameter areas for memory map and general parameters.
+ // Create the parameter areas for all host-supplied parameters.
self.directives.push(IgvmDirectiveHeader::ParameterArea {
number_of_bytes: PAGE_SIZE_4K,
parameter_area_index: IGVM_MEMORY_MAP_PA,
initial_data: vec![],
});
+ self.directives.push(IgvmDirectiveHeader::ParameterArea {
+ number_of_bytes: PAGE_SIZE_4K,
+ parameter_area_index: IGVM_MADT_PA,
+ initial_data: vec![],
+ }); | QEMU stumbles over these sections in the IGVM file and exits with an error:
```
qemu-system-x86_64: IGVM: Unknown header type encountered when processing file: (type 0x309)
qemu-system-x86_64: failed to initialize kvm: Operation not permitted
``` |
svsm | github_2023 | others | 626 | coconut-svsm | AdamCDunlap | @@ -378,6 +394,13 @@ impl IgvmBuilder {
parameter_area_index: IGVM_MEMORY_MAP_PA,
},
));
+ self.directives.push(IgvmDirectiveHeader::ParameterInsert( | Should this be under an `if self.gpa_map.madt.get_size() != 0`? If I understand correctly, this will cause the loader to insert the all-zeros parameter area (since the MADT was not actually added to it) to the madt region, but the madt region is 0 sized. It would probably work since the madt region overlaps the "general params" region and the next directive would overwrite it, but it still seems error prone.
It might make sense to have a single `if` statement and bundle together the parameter area creation, MADT insertion, and parameter area insertion so all the code for it is together.
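The guarded, bundled form the reviewer suggests can be sketched like this, with all types simplified stand-ins for the real IGVM builder structures: the MADT directives are emitted together, and only when the region is non-empty.

```rust
struct GpaRange {
    start: u64,
    size: u64,
}

impl GpaRange {
    fn get_size(&self) -> u64 {
        self.size
    }
}

enum Directive {
    ParameterArea { index: u32 },
    ParameterInsert { index: u32, gpa: u64 },
}

const IGVM_MADT_PA: u32 = 2;

/// Emit the MADT parameter area and its insertion together, guarded
/// by the region size, so a zero-sized MADT produces no directives.
fn push_madt_directives(madt: &GpaRange, directives: &mut Vec<Directive>) {
    if madt.get_size() != 0 {
        directives.push(Directive::ParameterArea { index: IGVM_MADT_PA });
        directives.push(Directive::ParameterInsert {
            index: IGVM_MADT_PA,
            gpa: madt.start,
        });
    }
}
```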