| repo_name | dataset | lang | pr_id | owner | reviewer | diff_hunk | code_review_comment |
|---|---|---|---|---|---|---|---|
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -35,119 +28,64 @@ impl Document {
pub(crate) fn new(path: PgLspPath, content: String, version: i32) -> Self {
let mut id_generator = IdGenerator::new();
- let statements: Vec<StatementPosition> = pg_statement_splitter::split(&content)
+ let ranges: Vec<StatementPos> = pg_statement_split... | nit: I believe it's idiomatic that an `iter` iterates over &T (except if `T` is `Copy`?)... so idiomatically, this would be an `Iterator<Item = &Statement>` :)
no need to change it though! |
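The convention this nit refers to can be sketched as follows — a minimal illustration with a hypothetical `Statement` type, not the actual type from the PR:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Statement {
    id: usize,
}

// `.iter()` borrows the collection: the iterator yields `&Statement`.
fn ids_by_ref(stmts: &[Statement]) -> Vec<usize> {
    stmts.iter().map(|s| s.id).collect()
}

// `.into_iter()` consumes the Vec and yields `Statement` by value.
fn ids_by_value(stmts: Vec<Statement>) -> Vec<usize> {
    stmts.into_iter().map(|s| s.id).collect()
}
```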
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -3,45 +3,54 @@ use text_size::{TextLen, TextRange, TextSize};
use crate::workspace::{ChangeFileParams, ChangeParams};
-use super::{document::Statement, Document, StatementRef};
+use super::{Document, Statement};
#[derive(Debug, PartialEq, Eq)]
pub enum StatementChange {
- Added(Statement),
- Deleted(S... | aaah, were you inspired by the slatedb approach? Nice!! |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?... | Nit: Since the `split()` return value is dropped anyways, would it make the code simpler if we use `into_iter()` instead of `.iter()` ? |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?... | great comment!! should we add it to the `Affected` struct as well? |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -26,15 +26,15 @@ impl PgLspEnv {
fn new() -> Self {
Self {
pglsp_log_path: PgLspEnvVariable::new(
- "BIOME_LOG_PATH",
+ "PGLSP_LOG_PATH", |
 |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -327,21 +379,38 @@ impl ChangeParams {
#[cfg(test)]
mod tests {
- use text_size::{TextRange, TextSize};
+ use super::*;
+ use text_size::TextRange;
- use crate::workspace::{server::document::Statement, ChangeFileParams, ChangeParams};
+ use crate::workspace::{ChangeFileParams, ChangeParams};
-... | ah, cool pattern to implement this only for tests 🤓 |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -461,22 +615,22 @@ mod tests {
assert_eq!(
changed[0],
- StatementChange::Deleted(StatementRef {
+ StatementChange::Deleted(Statement {
path: path.clone(),
- id: 0
+ id: 1 | sanity check: it's expected that statements are now in order `id: 1` then `id: 0`? |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?... | Food for thought: Would it be safer to gather the affected IDs and then maybe compare with a previous set?
This obviously works, but after the change is applied, the indices of the statements will be different – which could lead to bugs further down the road.
Just a feeling! |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?... | nit: we're implicitly assuming that the `StatementPosition`s are sorted ascending by range.
If we change the order that the `StatementSplitter` returns statements, this will break.
Should we make the ordering more explicit? |
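Making the ascending-by-range assumption explicit could be as cheap as a sort plus a debug assertion. A sketch with a hypothetical `StatementPos` stand-in (plain `(u32, u32)` tuples in place of `TextRange`):

```rust
// Hypothetical stand-in for a statement position with a text range.
#[derive(Debug, PartialEq)]
struct StatementPos {
    range: (u32, u32), // (start, end) offsets
}

// Make the ordering explicit instead of relying on splitter output order.
fn sorted_positions(mut positions: Vec<StatementPos>) -> Vec<StatementPos> {
    positions.sort_by_key(|p| p.range.0);
    // The invariant downstream code relies on: starts are ascending.
    debug_assert!(positions.windows(2).all(|w| w[0].range.0 <= w[1].range.0));
    positions
}
```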
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?... | We check `offset > r.start()` because if the offset is *within* a statement's range, that's the modified statement, so we don't have to move it over, right? |
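The rule under discussion can be sketched like this (a simplified stand-in, not the PR's actual code): only statements that start strictly *after* the edit offset are shifted, while the statement containing the offset is the edited one and stays put.

```rust
// Shift statement start offsets after an insertion of `delta` bytes at
// `offset`. A statement whose range contains `offset` is the modified
// statement itself, so its start is left untouched.
fn shift_starts(starts: &[u32], offset: u32, delta: u32) -> Vec<u32> {
    starts
        .iter()
        .map(|&s| if s > offset { s + delta } else { s })
        .collect()
}
```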
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +63,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
-
- let mut changed: Vec<StatementChange> = Vec::with_capacity(self.statements.len());
-
- tracing::info!("applying change: {:?... | 
|
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +69,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<St... | beautiful! |
postgres_lsp | github_2023 | others | 167 | supabase-community | juleswritescode | @@ -54,235 +69,278 @@ impl Document {
changes
}
- fn apply_change(&mut self, change: &ChangeParams) -> Vec<StatementChange> {
- self.debug_statements();
+ /// Applies a full change to the document and returns the affected statements
+ fn apply_full_change(&mut self, text: &str) -> Vec<St... | at this point, that `id` is only stored here, because the statement matching that id in the `document` is overwritten, right?
Do we at some point after this try to find the old statement by id? |
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -61,18 +70,56 @@ impl<'a> CompletionContext<'a> {
text: ¶ms.text,
schema_cache: params.schema,
position: usize::from(params.position),
-
ts_node: None,
schema_name: None,
wrapping_clause_type: None,
+ wrapping_statement_rang... | I think it's already quite idiomatic like that. The only thing that comes to mind is returning an Option<> and then
let tree = self.tree.as_ref()?;
but I would keep it like this. |
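The `Option`-returning alternative mentioned in this comment looks roughly like the following — a minimal sketch with hypothetical `Tree`/`Context` types:

```rust
struct Tree(String);

struct Context {
    tree: Option<Tree>,
}

impl Context {
    // Return an Option and let `?` short-circuit when no tree has been
    // parsed yet, instead of unwrapping or matching explicitly.
    fn tree_label(&self) -> Option<&str> {
        let tree = self.tree.as_ref()?;
        Some(tree.0.as_str())
    }
}
```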
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -0,0 +1,114 @@
+use crate::{
+ builder::CompletionBuilder, context::CompletionContext, relevance::CompletionRelevanceData,
+ CompletionItem, CompletionItemKind,
+};
+
+pub fn complete_columns(ctx: &CompletionContext, builder: &mut CompletionBuilder) {
+ let available_columns = &ctx.schema_cache.columns; | is this how others also do it? iterating over all possible options? |
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -0,0 +1,114 @@
+use crate::{
+ builder::CompletionBuilder, context::CompletionContext, relevance::CompletionRelevanceData,
+ CompletionItem, CompletionItemKind,
+};
+
+pub fn complete_columns(ctx: &CompletionContext, builder: &mut CompletionBuilder) {
+ let available_columns = &ctx.schema_cache.columns;
+
+... | noice!! |
postgres_lsp | github_2023 | others | 168 | supabase-community | psteinroe | @@ -0,0 +1,91 @@
+use crate::{Query, QueryResult};
+
+use super::QueryTryFrom;
+
+static QUERY: &'static str = r#"
+ (relation
+ (object_reference
+ .
+ (identifier) @schema_or_table
+ "."?
+ (identifier)? @table
+ )+
+ )
+"#;
+
+#[derive(Debug)]
+pub str... | what about a std::sync::LazyLock? but I dont know whether it matter a lot in terms of performance? |
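For context on the suggestion: `std::sync::LazyLock` (stable since Rust 1.80) runs its initializer once, on first access. For a compiled tree-sitter query that would pay the compilation cost exactly once; the sketch below uses a cheap stand-in computation instead of a real query.

```rust
use std::sync::LazyLock;

// Initialized lazily on first access; subsequent reads are just a deref.
static QUERY_WORD_COUNT: LazyLock<usize> = LazyLock::new(|| {
    "(relation (object_reference (identifier) @schema_or_table))"
        .split_whitespace()
        .count()
});

fn query_word_count() -> usize {
    *QUERY_WORD_COUNT
}
```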
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,45 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "info", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceError... | isn't a ref to the tree sufficient? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,45 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "info", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceError... | I agree that the document api is a bit weird, but you should be able to query for `statement_at_offset`, and the `Statement` also includes the `StatementRef`. Open for ideas to improve that api. |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,45 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "info", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceError... | I think we need to make the position relative to the statement? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -0,0 +1,36 @@
+use std::{fs::File, path::PathBuf, str::FromStr}; | what benefit does this have over the pg_cli? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -342,6 +342,57 @@ impl Workspace for WorkspaceServer {
skipped_diagnostics: 0,
})
}
+
+ #[tracing::instrument(level = "debug", skip(self))]
+ fn get_completions(
+ &self,
+ params: super::CompletionParams,
+ ) -> Result<pg_completions::CompletionResult, WorkspaceErro... | this should happen in the lsp layer. we already have helpers for that in `pg_lsp_converters`. the input to the workspace should only be `TextSize` or `TextRange`. the lsp server tracks documents too with the purpose of maintaining the `LineIndex`, which is the data structure we need for the conversion. another reason t... |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -81,6 +81,31 @@ impl Document {
.collect()
}
+ pub fn line_and_col_to_offset(&self, line: u32, col: u32) -> TextSize { | see prev comment - this should be done in the lsp layer, and respective helpers are provided in the `pg_lsp_converters` crate. |
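What a `LineIndex`-style conversion does can be sketched in a few lines — this is a hypothetical illustration, not the `pg_lsp_converters` API: precompute line start offsets once, then `offset = line_start + col`.

```rust
// Naive line/col -> byte offset conversion. A real LineIndex caches
// `line_starts` instead of rebuilding it per lookup, and handles
// UTF-16 column units for LSP positions.
fn line_col_to_offset(text: &str, line: usize, col: usize) -> usize {
    let mut line_starts = vec![0usize];
    for (i, ch) in text.char_indices() {
        if ch == '\n' {
            line_starts.push(i + 1);
        }
    }
    line_starts[line] + col
}
```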
postgres_lsp | github_2023 | typescript | 164 | supabase-community | psteinroe | @@ -9,37 +9,39 @@ import {
let client: LanguageClient;
-export function activate(_context: ExtensionContext) {
+export async function activate(_context: ExtensionContext) {
// If the extension is launched in debug mode then the debug server options are used
// Otherwise the run options are used
cons... | can't we just use the cli instead with `pg_cli lsp-proxy`? |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -137,7 +137,10 @@ fn install_client(sh: &Shell, client_opt: ClientOpt) -> anyhow::Result<()> {
}
fn install_server(sh: &Shell) -> anyhow::Result<()> {
- let cmd = cmd!(sh, "cargo install --path crates/pg_lsp --locked --force");
+ let cmd = cmd!(
+ sh,
+ "cargo install --path crates/pg_lsp_new... | can't we use `pg_cli` instead? the lsp should not be an entry point. |
postgres_lsp | github_2023 | others | 164 | supabase-community | psteinroe | @@ -163,19 +163,20 @@ impl LanguageServer for LSPServer {
self.session.update_all_diagnostics().await;
}
+ #[tracing::instrument(level = "info", skip(self))]
async fn shutdown(&self) -> LspResult<()> {
Ok(())
}
- #[tracing::instrument(level = "trace", skip(self))]
+ #[traci... | lets undo this once everything is setup |
postgres_lsp | github_2023 | others | 165 | supabase-community | juleswritescode | @@ -140,7 +142,10 @@ clear-branches:
git branch --merged | egrep -v "(^\\*|main)" | xargs git branch -d
reset-git:
- git checkout main && git pull && pnpm run clear-branches
+ git checkout main
+ git pull
+ just clear-branches
merge-main:
- git fetch origin main:main && git merge main
git... | very nice |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -1 +1 @@
-DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:5432/postgres
+DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:54322/postgres | ```suggestion
DATABASE_URL=postgresql://postgres:postgres@127.0.0.1:5432/postgres
``` |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -0,0 +1,80 @@
+use crate::{
+ categories::RuleCategory,
+ rule::{GroupCategory, Rule, RuleGroup, RuleMetadata},
+};
+
+pub struct RuleContext<'a, R: Rule> {
+ stmt: &'a pg_query_ext::NodeEnum,
+ options: &'a R::Options,
+}
+
+impl<'a, R> RuleContext<'a, R>
+where
+ R: Rule + Sized + 'static,
+{
+ #... | ```suggestion
``` |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -0,0 +1,327 @@
+use pg_console::fmt::Display;
+use pg_console::{markup, MarkupBuf};
+use pg_diagnostics::advice::CodeSuggestionAdvice;
+use pg_diagnostics::{
+ Advices, Category, Diagnostic, DiagnosticTags, Location, LogCategory, MessageAndDescription,
+ Visit,
+};
+use std::cmp::Ordering;
+use std::fmt::Debug... | the design is pretty awesome! |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -1,18 +1,85 @@
//! Codegen tools. Derived from Biome's codegen
+mod generate_analyser;
+mod generate_configuration;
mod generate_crate;
+mod generate_new_analyser_rule;
+pub use self::generate_analyser::generate_analyser;
+pub use self::generate_configuration::generate_rules_configuration;
pub use self::gener... | ```suggestion
if fs2::read_to_string(path).is_ok_and(|old_contents| old_contents == contents) {
return Ok(UpdateResult::NotUpdated);
}
``` |
postgres_lsp | github_2023 | others | 162 | supabase-community | juleswritescode | @@ -1,18 +1,85 @@
//! Codegen tools. Derived from Biome's codegen
+mod generate_analyser;
+mod generate_configuration;
mod generate_crate;
+mod generate_new_analyser_rule;
+pub use self::generate_analyser::generate_analyser;
+pub use self::generate_configuration::generate_rules_configuration;
pub use self::gener... | ```suggestion
/// With verify = false, the contents of the file will be updated to the passed in contents.
/// With verify = true, an Err will be returned if the contents of the file do not match the passed-in contents.
``` |
postgres_lsp | github_2023 | others | 161 | supabase-community | psteinroe | @@ -127,9 +145,14 @@ impl<'a> CompletionContext<'a> {
self.wrapping_clause_type = "where".try_into().ok();
}
+ "keyword_from" => {
+ self.wrapping_clause_type = "keyword_from".try_into().ok();
+ }
+
_ => {}
}
+ // We hav... | I love that you also use `we` when writing comments. it's so welcoming to read. |
postgres_lsp | github_2023 | others | 161 | supabase-community | psteinroe | @@ -130,6 +130,10 @@ new-crate name:
cargo new --lib crates/{{snakecase(name)}}
cargo run -p xtask_codegen -- new-crate --name={{snakecase(name)}}
+# Prints the treesitter tree of the given SQL file
+tree-print file:
+ cargo run --bin tree_print -- -f {{file}}
love it! will add this later for the parser too |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -291,6 +300,50 @@ impl Workspace for WorkspaceServer {
fn is_path_ignored(&self, params: IsPathIgnoredParams) -> Result<bool, WorkspaceError> {
Ok(self.is_ignored(params.pglsp_path.as_path()))
}
+
+ fn pull_diagnostics(
+ &self,
+ params: super::PullDiagnosticsParams,
+ ) -> Re... | Martin sourcece, isn't that the famous director of The Wolf of Wall Street? |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -291,6 +300,50 @@ impl Workspace for WorkspaceServer {
fn is_path_ignored(&self, params: IsPathIgnoredParams) -> Result<bool, WorkspaceError> {
Ok(self.is_ignored(params.pglsp_path.as_path()))
}
+
+ fn pull_diagnostics(
+ &self,
+ params: super::PullDiagnosticsParams,
+ ) -> Re... | There's also `Severity::Fatal` which would not be included here, should we maybe check `d.severity() >= Severity::Error`? |
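The `>=` comparison suggested here relies on the severity enum deriving `Ord`. A minimal sketch with a hypothetical `Severity` enum (variant names assumed, ordered least to most severe):

```rust
// Deriving Ord orders variants by declaration order, so `>=` works and
// `Fatal` is not silently excluded by an `== Error` check.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Hint,
    Information,
    Warning,
    Error,
    Fatal,
}

fn is_error_or_worse(s: Severity) -> bool {
    s >= Severity::Error
}
```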
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -291,6 +300,50 @@ impl Workspace for WorkspaceServer {
fn is_path_ignored(&self, params: IsPathIgnoredParams) -> Result<bool, WorkspaceError> {
Ok(self.is_ignored(params.pglsp_path.as_path()))
}
+
+ fn pull_diagnostics(
+ &self,
+ params: super::PullDiagnosticsParams,
+ ) -> Re... | ```suggestion
let mut stmt_diagnostics = self.pg_query.pull_diagnostics(stmt);
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -160,7 +160,7 @@ impl LanguageServer for LSPServer {
self.setup_capabilities().await;
// Diagnostics are disabled by default, so update them after fetching workspace config
- // self.session.update_all_diagnostics().await;
+ self.session.update_all_diagnostics().await; | Feature: Check ✅ 🙌🏻 |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -89,9 +89,9 @@ pub(crate) async fn did_change(
session.insert_document(url.clone(), new_doc);
- // if let Err(err) = session.update_diagnostics(url).await {
- // error!("Failed to update diagnostics: {}", err);
- // }
+ if let Err(err) = session.update_diagnostics(url).await {
+ error... | Should we inform the client as well? Or is it just for debugging? |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -35,9 +35,9 @@ pub(crate) async fn did_open(
session.insert_document(url.clone(), doc);
- // if let Err(err) = session.update_diagnostics(url).await {
- // error!("Failed to update diagnostics: {}", err);
- // }
+ if let Err(err) = session.update_diagnostics(url).await {
+ error!("Fai... | Same here, should we inform the client as well? |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -34,7 +34,7 @@ pub(crate) struct JunitReporterVisitor<'a>(pub(crate) Report, pub(crate) &'a mut
impl<'a> JunitReporterVisitor<'a> {
pub(crate) fn new(console: &'a mut dyn Console) -> Self {
- let report = Report::new("Biome");
+ let report = Report::new("PgLsp"); | 
|
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -83,12 +83,27 @@ impl From<(bool, bool)> for VcsTargeted {
pub enum TraversalMode {
/// A dummy mode to be used when the CLI is not running any command
Dummy,
+ /// This mode is enabled when running the command `check`
+ Check {
+ /// The type of fixes that should be applied when analyzing a ... | ```suggestion
// fix_file_mode: Option<FixFileMode>,
/// An optional tuple.
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -237,6 +241,75 @@ impl Session {
}
}
+ /// Computes diagnostics for the file matching the provided url and publishes
+ /// them to the client. Called from [`handlers::text_document`] when a file's
+ /// contents changes.
+ #[tracing::instrument(level = "trace", skip_all, fields(url = disp... | ```suggestion
.show_message(MessageType::WARNING, "The configuration file has errors. PgLSP will report only parsing errors until the configuration is fixed.")
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -237,6 +241,75 @@ impl Session {
}
}
+ /// Computes diagnostics for the file matching the provided url and publishes
+ /// them to the client. Called from [`handlers::text_document`] when a file's
+ /// contents changes.
+ #[tracing::instrument(level = "trace", skip_all, fields(url = disp... | ```suggestion
tracing::trace!("pglsp diagnostics: {:#?}", result.diagnostics);
``` |
postgres_lsp | github_2023 | others | 153 | supabase-community | juleswritescode | @@ -30,6 +30,33 @@ pub struct ChangeFileParams {
pub changes: Vec<ChangeParams>,
}
+#[derive(Debug, serde::Serialize, serde::Deserialize)]
+pub struct PullDiagnosticsParams {
+ pub path: PgLspPath,
+ // pub categories: RuleCategories,
+ pub max_diagnostics: u64,
+ // pub only: Vec<RuleSelector>,
+ ... | Should we remove those? |
postgres_lsp | github_2023 | others | 155 | supabase-community | psteinroe | @@ -2,6 +2,34 @@ use pg_schema_cache::SchemaCache;
use crate::CompletionParams;
+#[derive(Debug, PartialEq, Eq)]
+pub enum ClauseType {
+ Select,
+ Where,
+ From,
+ Update,
+ Delete,
+}
+
+impl From<&str> for ClauseType {
+ fn from(value: &str) -> Self {
+ match value {
+ "selec... | Are we sure we want to panic here? We could also implement TryFrom |
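The `TryFrom` alternative suggested here would turn an unknown clause string into an `Err` instead of a panic — a sketch with a trimmed-down version of the enum (variants and error type assumed):

```rust
#[derive(Debug, PartialEq, Eq)]
enum ClauseType {
    Select,
    Where,
    From,
}

impl TryFrom<&str> for ClauseType {
    type Error = String;

    // Unknown input becomes a recoverable error rather than a panic.
    fn try_from(value: &str) -> Result<Self, Self::Error> {
        match value {
            "select" => Ok(ClauseType::Select),
            "where" => Ok(ClauseType::Where),
            "from" | "keyword_from" => Ok(ClauseType::From),
            other => Err(format!("unknown clause: {other}")),
        }
    }
}
```

Call sites then use `"where".try_into().ok()` to get an `Option`, which is what the later PRs in this table appear to do.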
postgres_lsp | github_2023 | others | 155 | supabase-community | psteinroe | @@ -0,0 +1,162 @@
+use crate::{
+ builder::CompletionBuilder, context::CompletionContext, relevance::CompletionRelevanceData,
+ CompletionItem, CompletionItemKind,
+};
+
+pub fn complete_functions(ctx: &CompletionContext, builder: &mut CompletionBuilder) {
+ let available_functions = &ctx.schema_cache.function... | Nit: can't we loop over available functions and add the item directly? |
postgres_lsp | github_2023 | others | 150 | supabase-community | psteinroe | @@ -12,6 +12,7 @@ tree-sitter.workspace = true
tree_sitter_sql.workspace = true
pg_schema_cache.workspace = true
pg_test_utils.workspace = true
+tower-lsp.workspace = true | I would prefer to keep language server specifics out of the feature crates. you can find a good rationale on this [here](https://github.com/rust-lang/rust-analyzer/blob/master/docs/dev/architecture.md#crateside-crateside-db-crateside-assists-crateside-completion-crateside-diagnostics-crateside-ssr):
> Architecture I... |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -0,0 +1,28 @@
+use pg_lexer::SyntaxKind;
+
+use super::{
+ common::{parenthesis, statement, unknown},
+ Parser,
+};
+
+pub(crate) fn cte(p: &mut Parser) {
+ p.expect(SyntaxKind::With);
+
+ loop {
+ p.expect(SyntaxKind::Ident);
+ p.expect(SyntaxKind::As);
+ parenthesis(p);
+
+ ... | this looks so simple, i really hope it works for complex statements :) great work! |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,137 +1,54 @@
///! Postgres Statement Splitter
///!
///! This crate provides a function to split a SQL source string into individual statements.
-///!
-///! TODO:
-///! Instead of relying on statement start tokens, we need to include as many tokens as
-///! possible. For example, a `CREATE TRIGGER` statement in... | epic, also very readable |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,137 +1,54 @@
///! Postgres Statement Splitter
///!
///! This crate provides a function to split a SQL source string into individual statements.
-///!
-///! TODO:
-///! Instead of relying on statement start tokens, we need to include as many tokens as
-///! possible. For example, a `CREATE TRIGGER` statement in... | 
|
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,26 +1,30 @@
+mod common;
+mod data;
+mod dml;
+
+pub use common::source;
+
use std::cmp::min;
-use pg_lexer::{SyntaxKind, Token, TokenType, WHITESPACE_TOKENS};
+use pg_lexer::{lex, SyntaxKind, Token, WHITESPACE_TOKENS};
use text_size::{TextRange, TextSize};
use crate::syntax_error::SyntaxError;
/// Main... | Should we add a comment that says that this is modelled after a Pratt parser, so future devs have an easier time understanding the strategy? |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -0,0 +1,14 @@
+use pg_lexer::SyntaxKind;
+
+pub static STATEMENT_START_TOKENS: &[SyntaxKind] = &[ | do we still need to export this if we export the helper below? |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -0,0 +1,100 @@
+use pg_lexer::{SyntaxKind, Token, TokenType};
+
+use super::{
+ data::at_statement_start,
+ dml::{cte, select},
+ Parser,
+};
+
+pub fn source(p: &mut Parser) {
+ loop {
+ match p.peek() {
+ Token {
+ kind: SyntaxKind::Eof,
+ ..
+ ... | ```suggestion
panic!("stmt: Unknown start token {:?}", t);
``` |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -33,164 +37,94 @@ pub struct Parse {
}
impl Parser {
- pub fn new(tokens: Vec<Token>) -> Self {
+ pub fn new(sql: &str) -> Self {
+ // we dont care about whitespace tokens, except for double newlines
+ // to make everything simpler, we just filter them out
+ // the token holds the text... | nit: `into_iter()` instead of `.cloned()` ?
I'm not sure, but if I understand it correctly, we could skip some allocations |
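The intuition here is right: `.iter().cloned()` clones every kept element, while `.into_iter()` moves elements out of the source `Vec`, reusing their allocations. A sketch with a hypothetical `Token` type:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Token {
    text: String,
}

// `.into_iter()` moves each Token (and its String buffer) into the
// output, so no `.cloned()` allocation is needed per kept token.
fn keep_multichar(tokens: Vec<Token>) -> Vec<Token> {
    tokens.into_iter().filter(|t| t.text.len() > 1).collect()
}
```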
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -33,164 +37,94 @@ pub struct Parse {
}
impl Parser {
- pub fn new(tokens: Vec<Token>) -> Self {
+ pub fn new(sql: &str) -> Self {
+ // we dont care about whitespace tokens, except for double newlines
+ // to make everything simpler, we just filter them out
+ // the token holds the text... | will this throw if somebody opens an empty sql file? |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -33,164 +37,94 @@ pub struct Parse {
}
impl Parser {
- pub fn new(tokens: Vec<Token>) -> Self {
+ pub fn new(sql: &str) -> Self {
+ // we dont care about whitespace tokens, except for double newlines
+ // to make everything simpler, we just filter them out
+ // the token holds the text... | Should we better return nothing here? It might be confusing when a dev works with this and assumes that the next `.peek()` yields a different token |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -46,66 +46,96 @@ impl Parser {
return !WHITESPACE_TOKENS.contains(&t.kind)
|| (t.kind == SyntaxKind::Newline && t.text.chars().count() > 1);
})
- .rev()
.cloned()
.collect::<Vec<_>>();
+ let eof_token = Token::eof(usize:... | self.peek? 🤓 |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,137 +1,68 @@
///! Postgres Statement Splitter
///!
///! This crate provides a function to split a SQL source string into individual statements.
-///!
-///! TODO:
-///! Instead of relying on statement start tokens, we need to include as many tokens as
-///! possible. For example, a `CREATE TRIGGER` statement in... | ah nice, that's convenient |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -1,12 +0,0 @@
-brin | YEAH ⛹️ |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -166,15 +162,9 @@ impl Change {
// if addition, expand the range
// if deletion, shrink the range
if self.is_addition() {
- *r = TextRange::new(
- r.start(),
- ... | hmm, strange that there's no method on the type for increasing the range 🤷 |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -44,18 +44,11 @@ impl Document {
pub fn new(url: PgLspPath, text: Option<String>) -> Document {
Document {
version: 0,
- line_index: LineIndex::new(&text.as_ref().unwrap_or(&"".to_string())),
+ line_index: LineIndex::new(text.as_ref().unwrap_or(&"".to_string())),
... | pretty cool changes here in the file! |
postgres_lsp | github_2023 | others | 142 | supabase-community | juleswritescode | @@ -52,145 +76,116 @@ impl Parser {
.iter()
.map(|(start, end)| {
let from = self.tokens.get(*start);
- let to = self.tokens.get(end - 1);
- // get text range from token range
- let text_start = from.unwrap()... | very nice! much cleaner. |
postgres_lsp | github_2023 | others | 141 | supabase-community | psteinroe | @@ -0,0 +1,43 @@
+use sqlx::PgPool;
+
+use crate::schema_cache::SchemaCacheItem;
+
+#[derive(Debug, Clone, Default)]
+pub struct Version {
+ pub version: Option<String>,
+ pub version_num: Option<i64>,
+ pub active_connections: Option<i64>,
+ pub max_connections: Option<i64>,
+}
+
+impl SchemaCacheItem for ... | why not just one row? |
postgres_lsp | github_2023 | others | 141 | supabase-community | psteinroe | @@ -6,13 +6,15 @@ use crate::functions::Function;
use crate::schemas::Schema;
use crate::tables::Table;
use crate::types::PostgresType;
+use crate::versions::Version;
#[derive(Debug, Clone, Default)]
pub struct SchemaCache {
pub schemas: Vec<Schema>,
pub tables: Vec<Table>,
pub functions: Vec<Func... | any reason to have a Vec instead of a single struct? |
postgres_lsp | github_2023 | others | 109 | supabase-community | psteinroe | @@ -412,7 +412,23 @@ impl<'p> LibpgQueryNodeParser<'p> {
}
/// list of aliases from https://www.postgresql.org/docs/current/datatype.html
-const ALIASES: [&[&str]; 2] = [&["integer", "int", "int4"], &["real", "float4"]];
+const ALIASES: [&[&str]; 15] = [
+ &["bigint", "int8"],
+ &["bigserial", "serial8"],
+ ... | This will not work, because we are comparing token by token, and this text will be split up over multiplen tokens. It requires a larger change. |
postgres_lsp | github_2023 | others | 109 | supabase-community | psteinroe | @@ -412,7 +412,21 @@ impl<'p> LibpgQueryNodeParser<'p> {
}
/// list of aliases from https://www.postgresql.org/docs/current/datatype.html
-const ALIASES: [&[&str]; 2] = [&["integer", "int", "int4"], &["real", "float4"]];
+const ALIASES: [&[&str]; 13] = [
+ &["bigint", "int8"],
+ &["bigserial", "serial8"],
+ ... | These two are also multi word |
postgres_lsp | github_2023 | others | 104 | supabase-community | psteinroe | @@ -1,19 +1,40 @@
---
source: crates/parser/tests/statement_parser_test.rs
-description: "/* TODO: CREATE TABLE films2 AS SELECT * FROM films; */ SELECT 1;"
+description: CREATE TABLE films2 AS SELECT * FROM films;
---
Parse {
- cst: SourceFile@0..64
- CComment@0..55 "/* TODO: CREATE TABLE ..."
- Selec... | It seems like the statement parser does not pick up the root statement. Can you check that? |
postgres_lsp | github_2023 | others | 100 | supabase-community | psteinroe | @@ -104,6 +104,16 @@ If you're not using VS Code, you can install the server by running:
cargo xtask install --server
```
+### Github CodeSpaces
+Currently, Windows does not support `libpg_query`. You can setup your development environment
+on [CodeSpaces](https://github.com/features/codespaces).
+
+After your code... | Shouldn't this paragraph be above your section? |
postgres_lsp | github_2023 | others | 95 | supabase-community | psteinroe | @@ -0,0 +1,11 @@
+CREATE UNLOGGED TABLE cities (name text, population real, altitude double, identifier smallint, postal_code int, foreign_id bigint);
+/* TODO: CREATE TABLE IF NOT EXISTS distributors (name varchar(40) DEFAULT 'Luso Films', len interval hour to second(3), name varchar(40) DEFAULT 'Luso Films', did int ... | @cvng why is this todo? |
postgres_lsp | github_2023 | others | 94 | supabase-community | cvng | @@ -692,6 +692,74 @@ fn custom_handlers(node: &Node) -> TokenStream {
tokens.push(TokenProperty::from(Token::With));
}
},
+ "CreatePublicationStmt" => quote! {
+ tokens.push(TokenProperty::from(Token::Create));
+ tokens.push(TokenProperty::from(Token::... | sure, let's have this in a next PR with a `test_create_publication` test |
postgres_lsp | github_2023 | others | 88 | supabase-community | psteinroe | @@ -0,0 +1,86 @@
+---
+source: crates/parser/tests/statement_parser_test.rs
+description: CREATE DATABASE x OWNER abc CONNECTION LIMIT 5;
+---
+Parse {
+ cst: SourceFile@0..47
+ CreatedbStmt@0..47
+ Create@0..6 "CREATE"
+ Whitespace@6..7 " "
+ Database@7..15 "DATABASE"
+ Whitespace@1... | These should be part of DefElement |
postgres_lsp | github_2023 | others | 88 | supabase-community | psteinroe | @@ -0,0 +1,47 @@
+---
+source: crates/parser/tests/statement_parser_test.rs
+description: "\nCREATE DATABASE x LOCATION DEFAULT;"
+---
+Parse {
+ cst: SourceFile@0..36
+ Newline@0..1 "\n"
+ CreatedbStmt@1..36
+ Create@1..7 "CREATE"
+ Whitespace@7..8 " "
+ Database@8..16 "DATABASE"
+ ... | Should also be part of DefElement |
postgres_lsp | github_2023 | others | 72 | supabase-community | cvng | @@ -481,6 +481,48 @@ fn custom_handlers(node: &Node) -> TokenStream {
tokens.push(TokenProperty::from(Token::As));
}
},
+ "DefineStmt" => quote! {
+ tokens.push(TokenProperty::from(Token::Create));
+ if n.replace {
+ tokens.push(TokenPro... | @psteinroe I'm not comfortable enough with the AST to know what a proper solution should be. Does the test in #69 pass with the same output?
As I understand it, I would go with solution 3 (direct children of `DefineStmt`) - based on the docs, aggregate with `order by` is kind of a special case
> The syntax ... |
postgres_lsp | github_2023 | others | 67 | supabase-community | psteinroe | @@ -66,10 +66,7 @@ mod tests {
debug!("selected node: {:#?}", node_graph[node_index]);
- assert!(node_graph[node_index]
- .properties
- .iter()
- .all(|p| { expected.contains(p) }));
+ assert_eq!(node_graph[node_index].properties, expected); | Can you add a comment there that even though we test for strict equality of the two vectors the order of the properties does not have to match the order of the tokens in the string? |
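The change in this hunk replaces a permissive `contains`-based check with strict `assert_eq!`. A standalone sketch (the property lists here are hypothetical stand-ins) of why the strict form is the stronger test:

```rust
/// Old-style permissive check: every actual property appears somewhere in
/// `expected`, regardless of order or completeness.
fn loose_match(actual: &[&str], expected: &[&str]) -> bool {
    actual.iter().all(|p| expected.contains(p))
}

fn main() {
    let expected = vec!["Create", "Or", "Replace"];

    // The loose check passes even for a reordered *subset* of expected...
    assert!(loose_match(&["Replace", "Create"], &expected));

    // ...while assert_eq! on Vec is positional and length-checking, so it
    // rejects both missing entries and (unlike the comment above notes for
    // properties vs. tokens) any mismatch in the vectors themselves.
    assert_eq!(vec!["Create", "Or", "Replace"], expected);
    assert_ne!(vec!["Replace", "Create"], expected);
}
```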
postgres_lsp | github_2023 | others | 65 | supabase-community | psteinroe | @@ -439,6 +439,17 @@ fn custom_handlers(node: &Node) -> TokenStream {
tokens.push(TokenProperty::from(Token::Or));
tokens.push(TokenProperty::from(Token::Replace));
}
+ if let Some(n) = &n.view {
+ match n.relpersistence.as_str() { | that's an interesting case! I agree with your reasoning. To give a bit of context: take the substring `create temporary view comedies` as an example. The `create` and `view` tokens should be part of the `ViewStmt` itself, while the `temporary` is definitely part of the `RangeVar` node. We cannot create a valid tree o...
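For context on the `relpersistence` match above: libpg_query encodes a relation's persistence as a single character on `RangeVar` ('p' = permanent, 't' = temporary, 'u' = unlogged). A minimal, hedged sketch of recovering the keyword token from it (the function and token names are placeholders, not the project's API):

```rust
/// Map a RangeVar's one-character persistence flag to the SQL keyword it
/// came from, if any ('p' = permanent, 't' = temporary, 'u' = unlogged).
fn persistence_keyword(relpersistence: &str) -> Option<&'static str> {
    match relpersistence {
        "t" => Some("TEMPORARY"),
        "u" => Some("UNLOGGED"),
        // "p" (permanent) is the default and emits no keyword.
        _ => None,
    }
}

fn main() {
    assert_eq!(persistence_keyword("t"), Some("TEMPORARY"));
    assert_eq!(persistence_keyword("p"), None);
}
```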
postgres_lsp | github_2023 | others | 61 | supabase-community | psteinroe | @@ -529,6 +529,16 @@ fn custom_handlers(node: &Node) -> TokenStream {
"TypeCast" => quote! {
tokens.push(TokenProperty::from(Token::Typecast));
},
+ "CreateDomainStmt" => quote! {
+ tokens.push(TokenProperty::from(Token::Create));
+ tokens.push(TokenProperty::... | `check` should be part of the `Constraint` node, right? its currently implement as
```rust
"Constraint" => quote! {
match n.contype {
// ConstrNotnull
2 => {
tokens.push(TokenProperty::from(Token::Not));
tokens.push(Tok... |
postgres_lsp | github_2023 | others | 61 | supabase-community | psteinroe | @@ -87,4 +87,18 @@ mod tests {
],
)
}
+
+ #[test]
+ fn test_create_domain() {
+ test_get_node_properties(
+ "create domain us_postal_code as text check (value is not null);",
+ SyntaxKind::CreateDomainStmt,
+ vec
+
+Install: [instructions](https://pgtools.dev/#installation)
+
+- [CLI releases](https://github.c... | @psteinroe reminder to change over in the other repo if possible:
```suggestion
- [Neovim](https://github.com/neovim/nvim-lspconfig/blob/master/doc/configs.md#postgres_language_server)
``` |
postgres_lsp | github_2023 | others | 261 | supabase-community | w3b6x9 | @@ -4,11 +4,19 @@
A collection of language tools and a Language Server Protocol (LSP) implementation for Postgres, focusing on developer experience and reliable SQL tooling.
+Docs: [pgtools.dev](https://pgtools.dev/)
+
+Install: [instructions](https://pgtools.dev/#installation)
+
+- [CLI releases](https://github.c... | ```suggestion
- [VSCode](https://marketplace.visualstudio.com/items?itemName=Supabase.postgrestools)
``` |
svsm | github_2023 | others | 654 | coconut-svsm | peterfang | @@ -92,8 +93,14 @@ global_asm!(
* environment and context structure from the address space. */
movq %r8, %cr0
movq %r10, %cr4
+
+ /* Check to see whether EFER.LME is specified. If not, then EFER
+ * should not be reloaded. */
+ testl ${LME}, %ecx | `%eax`? |
svsm | github_2023 | others | 652 | coconut-svsm | peterfang | @@ -364,16 +364,22 @@ pub fn send_ipi(
}
}
_ => {
+ let mut target_count: usize = 0;
for cpu in PERCPU_AREAS.iter() {
- ipi_board.pending.fetch_add(1, Ordering::Relaxed);
- cpu.as_cpu_ref().ipi_from(sender_cpu_index);
+ ... | Would something like this be a bit cleaner?
```suggestion
for cpu in PERCPU_AREAS
.iter()
.map(|c| c.as_cpu_ref())
.filter(|c| c.is_online() && c.apic_id() != this_cpu().get_apic_id())
{
target_count += 1;
cp... |
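The pattern suggested in the review above — pushing the skip conditions into a `filter` so the loop body only ever sees real IPI targets — can be sketched standalone (the `Cpu` type and its fields here are stand-ins, not the SVSM types):

```rust
struct Cpu {
    apic_id: u32,
    online: bool,
}

/// Count IPI targets: every online CPU except the sender.
fn count_targets(cpus: &[Cpu], sender_apic_id: u32) -> usize {
    cpus.iter()
        // The filter replaces the in-loop `if`/`continue` checks.
        .filter(|c| c.online && c.apic_id != sender_apic_id)
        .count()
}

fn main() {
    let cpus = [
        Cpu { apic_id: 0, online: true },
        Cpu { apic_id: 1, online: false }, // offline: skipped
        Cpu { apic_id: 2, online: true },
    ];
    // Sender is APIC id 0, so only CPU 2 remains.
    assert_eq!(count_targets(&cpus, 0), 1);
}
```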
svsm | github_2023 | others | 652 | coconut-svsm | peterfang | @@ -364,16 +364,22 @@ pub fn send_ipi(
}
}
_ => {
+ let mut target_count: usize = 0;
for cpu in PERCPU_AREAS.iter() {
- ipi_board.pending.fetch_add(1, Ordering::Relaxed);
- cpu.as_cpu_ref().ipi_from(sender_cpu_index);
+ ... | Since `sender_cpu_index` is already supplied as input, does it make sense to use `cpu_shared.cpu_index() != sender_cpu_index` instead? |
svsm | github_2023 | others | 629 | coconut-svsm | msft-jlange | @@ -0,0 +1,38 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0
+//
+// Copyright (c) 2025 Intel Corporation.
+//
+// Author: Chuanxiao Dong <chuanxiao.dong@intel.com>
+
+use super::idt::common::X86ExceptionContext;
+use crate::error::SvsmError;
+use crate::tdx::tdcall::{tdcall_get_ve_info, tdvmcall_cpuid};
+use crate:... | I'd like to consider adding IO emulation in the near future because that should be low-hanging fruit. There is much that can be copied from the instruction emulator and/or the #VC handler. |
svsm | github_2023 | others | 629 | coconut-svsm | msft-jlange | @@ -0,0 +1,38 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0 | Can we put this file under `kernel\src\tdx` instead? I'd like to move towards a model where architecture-specific code is in architecture-specific directories. There's a lot of SNP code that doesn't follow this pattern today but I'd like to avoid making it any worse than it already is. |
svsm | github_2023 | others | 642 | coconut-svsm | tlendacky | @@ -185,6 +185,12 @@ impl SvsmPlatform for SnpPlatform {
}
}
+ fn determine_cet_support(&self) -> bool {
+ // CET is supported on all SNP platforms, and CPUID should not be
+ // consulted to determine this.
+ true | Hypervisor support is required to ensure that the proper MSRs are not intercepted. This is typically communicated to the guest by providing the guest with appropriate CPUID information in the CPUID table that has been vetted by firmware. If the leaf isn't present or the bit isn't set, maybe it can be a build time optio... |
svsm | github_2023 | others | 636 | coconut-svsm | msft-jlange | @@ -1,16 +1,99 @@
// SPDX-License-Identifier: MIT
//
// Copyright (c) Microsoft Corporation
+// Copyright (c) SUSE LLC
//
// Author: Jon Lange <jlange@microsoft.com>
+// Author: Joerg Roedel <jroedel@suse.de>
-pub const APIC_MSR_EOI: u32 = 0x80B;
-pub const APIC_MSR_ISR: u32 = 0x810;
-pub const APIC_MSR_ICR: u32... | If we keep EOI in the platform abstraction, then this could be written like this and avoid #VC in the SNP case.
```suggestion
pub fn x2apic_eoi(wrmsr: FnOnce<(u32, u64)>) {
wrmsr(MSR_X2APIC_EOI, 0);
}
```
Passing a closure to perform the WRMSR permits each platform to implement this optimally without requ... |
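Note that the signature in the suggestion above, `wrmsr: FnOnce<(u32, u64)>`, is not valid Rust generics syntax; a compilable sketch of the same closure-injection idea (the constant value is taken from the original `APIC_MSR_EOI` definition, the rest is illustrative) might look like:

```rust
const MSR_X2APIC_EOI: u32 = 0x80B;

/// Signal end-of-interrupt by delegating the MSR write to a
/// platform-supplied closure, so each platform (plain WRMSR, GHCB
/// protocol, TDCALL, ...) can pick its fastest path.
fn x2apic_eoi(wrmsr: impl FnOnce(u32, u64)) {
    // EOI is signalled by writing 0 to the x2APIC EOI register.
    wrmsr(MSR_X2APIC_EOI, 0);
}

fn main() {
    // In a test we record the write instead of touching real MSRs.
    let mut recorded = None;
    x2apic_eoi(|msr, val| recorded = Some((msr, val)));
    assert_eq!(recorded, Some((MSR_X2APIC_EOI, 0)));
}
```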
svsm | github_2023 | others | 636 | coconut-svsm | msft-jlange | @@ -200,10 +199,7 @@ impl SvsmPlatform for TdpPlatform {
fn eoi(&self) {} | Should we implement this while we're at it?
```suggestion
fn eoi(&self) {
x2apic_eoi();
}
``` |
svsm | github_2023 | others | 584 | coconut-svsm | msft-jlange | @@ -93,11 +104,12 @@ impl GDT {
pub fn load_tss(&mut self, tss: &X86Tss) {
let (desc0, desc1) = tss.to_gdt_entry();
- unsafe {
- self.set_tss_entry(desc0, desc1);
- asm!("ltr %ax", in("ax") SVSM_TSS, options(att_syntax));
- self.clear_tss_entry()
- }
+ ... | Actually, it is not necessarily the case that a global GDT is in use here. However, the lifetime of the GDT doesn't matter, because once the task register is loaded, only the TSS needs to remain live. For that reason, either this whole function should be `unsafe` (because the compiler cannot prove that the lifetime o... |
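The lifetime argument in the review above — the compiler cannot see that the TSS must outlive the task-register load — is the classic case for an `unsafe fn` with a documented contract. A minimal illustrative sketch (types and names are placeholders, not the SVSM code):

```rust
struct X86Tss {
    rsp0: u64,
}

/// # Safety
/// The caller must keep `tss` alive (and not move it) for as long as the
/// task register — modeled here by the returned raw pointer — references
/// it. The signature's lifetimes cannot express that obligation, which is
/// why the function is `unsafe`.
unsafe fn load_tss(tss: &X86Tss) -> *const X86Tss {
    // Stand-in for `ltr`: hand out a raw pointer that outlives the borrow.
    tss as *const X86Tss
}

fn main() {
    let tss = X86Tss { rsp0: 0xdead_beef };
    // SAFETY: `tss` lives until the end of main, past the last use of `p`.
    let p = unsafe { load_tss(&tss) };
    assert_eq!(unsafe { (*p).rsp0 }, 0xdead_beef);
}
```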
svsm | github_2023 | others | 584 | coconut-svsm | msft-jlange | @@ -577,7 +577,9 @@ mod tests {
fn test_wrmsr_tsc_aux() {
if is_qemu_test_env() && is_test_platform_type(SvsmPlatformType::Snp) {
let test_val = 0x1234;
- verify_ghcb_gets_altered(|| write_msr(MSR_TSC_AUX, test_val));
+ verify_ghcb_gets_altered(||
+ // SAF... | Should say `TSC_AUX MSR`. |
svsm | github_2023 | others | 542 | coconut-svsm | joergroedel | @@ -0,0 +1,18 @@
+// SPDX-License-Identifier: MIT OR Apache-2.0
+//
+// Copyright (c) 2024 Red Hat, Inc.
+//
+// Author: Stefano Garzarella <sgarzare@redhat.com>
+// Author: Oliver Steffen <osteffen@redhat.com>
+
+#[derive(Debug)]
+pub enum BlockDeviceError {
+ Failed, // ToDo: insert proper errors
+}
+
+pub trait B... | This interface makes the safe-unsafe memory boundary a bit blurry and looks unsafe in itself (due to the slice parameters). I have seen that this interface somewhat resembles the interface of the virtio-blk driver crate. Maybe it is the best strategy to import the drivers into our code-base and improve their interfaces.
... |
svsm | github_2023 | others | 542 | coconut-svsm | joergroedel | @@ -319,6 +319,12 @@ pub extern "C" fn svsm_main() {
panic!("Failed to launch FW: {e:#?}");
}
+ {
+ use svsm::block::virtio_blk;
+ static MMIO_BASE: u64 = 0xfef03000;
+ let _blk = virtio_blk::VirtIOBlkDriver::new(PhysAddr::from(MMIO_BASE));
+ } | We should start thinking about a proper detection mechanism for SVSM-assigned devices. |
svsm | github_2023 | others | 542 | coconut-svsm | joergroedel | @@ -0,0 +1,227 @@
+// SPDX-License-Identifier: MIT
+//
+// Copyright (c) 2024 Red Hat, Inc.
+//
+// Author: Oliver Steffen <osteffen@redhat.com>
+
+extern crate alloc;
+use crate::locking::SpinLock;
+use alloc::vec::Vec;
+use core::{
+ cell::OnceCell,
+ ptr::{addr_of, NonNull},
+};
use zerocopy::{FromBytes, Immu... | Nothing wrong with this code, but I think this uncovers a fundamental problem with `SharedBox`, which we need to solve separately. Reading any memory in a `SharedBox` is UB, so it should have a `read()/write()` interface instead of direct data access.
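One way to keep references to host-shared memory out of safe code, as the review above suggests, is to expose only by-value `read()`/`write()` accessors built on volatile pointer operations. A rough sketch of the shape, using a plain `Box` as a stand-in for the shared mapping (the real `SharedBox` would point at decrypted/shared pages):

```rust
use core::ptr;

/// Stand-in for a host-shared allocation: safe callers never get a `&T`
/// or `&mut T` into the buffer, only copies in and out.
struct SharedCell<T: Copy> {
    inner: Box<T>,
}

impl<T: Copy> SharedCell<T> {
    fn new(init: T) -> Self {
        Self { inner: Box::new(init) }
    }

    fn read(&self) -> T {
        // Volatile read through a raw pointer: the compiler cannot cache
        // or reorder it, and no long-lived reference escapes.
        unsafe { ptr::read_volatile(&*self.inner as *const T) }
    }

    fn write(&mut self, val: T) {
        unsafe { ptr::write_volatile(&mut *self.inner as *mut T, val) }
    }
}

fn main() {
    let mut cell = SharedCell::new(0u32);
    cell.write(0xfeed);
    assert_eq!(cell.read(), 0xfeed);
}
```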
svsm | github_2023 | others | 614 | coconut-svsm | msft-jlange | @@ -126,7 +126,7 @@ pub fn construct_native_start_context(
context.gs_base = segment.base;
}
X86Register::Cr0(r) => {
- context.cr0 = *r;
+ context.cr0 = *r & !0x8000_0000; | Can you explain? I don't think this is desirable on all targets. |
svsm | github_2023 | others | 614 | coconut-svsm | msft-jlange | @@ -135,7 +135,7 @@ pub fn construct_native_start_context(
context.cr4 = *r;
}
X86Register::Efer(r) => {
- context.efer = *r;
+ context.efer = *r & !0x500; | Can you explain? I don't think this is desirable on all targets. |
svsm | github_2023 | others | 614 | coconut-svsm | joergroedel | @@ -0,0 +1,35 @@
+{
+ "igvm": {
+ "qemu": {
+ "output": "coconut-qemu.igvm",
+ "platforms": [
+ "snp",
+ "native"
+ ],
+ "policy": "0x30000",
+ "measure": "print",
+ "check-kvm": true
+ }
+ },
+ "kerne... | There is no need for a separate target definition. Just add `native` as a platform to `qemu-target.json`. |
svsm | github_2023 | others | 626 | coconut-svsm | deeglaze | @@ -139,7 +140,8 @@ impl GpaMap {
let igvm_param_block = GpaRange::new_page(kernel_fs.get_end())?;
let general_params = GpaRange::new_page(igvm_param_block.get_end())?;
- let memory_map = GpaRange::new_page(general_params.get_end())?;
+ let madt = GpaRange::new_page(general_params.get_... | Is this table something we can measure into a service manifest and/or rtmr? With the oem and table ids undigested for lookup purposes? |
svsm | github_2023 | others | 626 | coconut-svsm | joergroedel | @@ -345,12 +349,17 @@ impl IgvmBuilder {
});
}
- // Create the two parameter areas for memory map and general parameters.
+ // Create the parameter areas for all host-supplied parameters.
self.directives.push(IgvmDirectiveHeader::ParameterArea {
number_of_byte... | QEMU stumbles over these sections in the IGVM file and exits with an error:
```
qemu-system-x86_64: IGVM: Unknown header type encountered when processing file: (type 0x309)
qemu-system-x86_64: failed to initialize kvm: Operation not permitted
``` |
svsm | github_2023 | others | 626 | coconut-svsm | AdamCDunlap | @@ -378,6 +394,13 @@ impl IgvmBuilder {
parameter_area_index: IGVM_MEMORY_MAP_PA,
},
));
+ self.directives.push(IgvmDirectiveHeader::ParameterInsert( | Should this be under an `if self.gpa_map.madt.get_size() != 0`? If I understand correctly, this will cause the loader to insert the all-zeros parameter area (since the MADT was not actually added to it) to the madt region, but the madt region is 0 sized. It would probably work since the madt region overlaps the "genera... |