| repo_name | dataset | lang | pr_id | owner | reviewer | diff_hunk | code_review_comment |
|---|---|---|---|---|---|---|---|
pyspark-ai | github_2023 | others | 162 | pyspark-ai | gengliangwang | @@ -22,29 +22,43 @@ classifiers = [
]
[tool.poetry.dependencies]
+# default required dependencies
python = "^3.8.1"
pydantic = "^1.10.10"
-requests = "^2.31.0"
-tiktoken = "0.4.0"
-beautifulsoup4 = "^4.12.2"
openai = "^0.27.8"
langchain = ">=0.0.271,<0.1.0"
-pandas = ">=1.0.5"
pygments = "^2.15.1"
-google-api-... | This is for Spark connect, too |
pyspark-ai | github_2023 | others | 162 | pyspark-ai | gengliangwang | @@ -24,27 +24,41 @@ classifiers = [
[tool.poetry.dependencies]
python = "^3.8.1"
pydantic = "^1.10.10"
-requests = "^2.31.0"
-tiktoken = "0.4.0"
-beautifulsoup4 = "^4.12.2"
openai = "^0.27.8"
langchain = ">=0.0.271,<0.1.0"
-pandas = ">=1.0.5"
pygments = "^2.15.1"
-google-api-python-client = "^2.90.0"
-chispa = "^... | Why can't it be 2.1.0? Let's add a comment. |
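The two review comments above ask for trimmed default dependencies while keeping the heavy ones installable on demand. In Poetry that is typically expressed by marking dependencies optional and grouping them into an extra — a rough sketch only (the `ingestion` extra name is an assumption; version pins are copied from the hunk):

```toml
[tool.poetry.dependencies]
# default required dependencies stay unconditional
python = "^3.8.1"
openai = "^0.27.8"

# heavy ingestion helpers become opt-in
requests = { version = "^2.31.0", optional = true }
tiktoken = { version = "0.4.0", optional = true }
beautifulsoup4 = { version = "^4.12.2", optional = true }

[tool.poetry.extras]
ingestion = ["requests", "tiktoken", "beautifulsoup4"]
```

Users would then run `pip install 'pyspark-ai[ingestion]'` to pull in the optional set.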
pyspark-ai | github_2023 | python | 162 | pyspark-ai | gengliangwang | @@ -43,6 +39,15 @@
)
from pyspark_ai.spark_utils import SparkUtils
+create_deps_requirement_message = None
+try: | Let's move the try/except into method `create_df`? |
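The deferred-import pattern the reviewer suggests — checking optional dependencies only when `create_df` actually runs, instead of at module import — can be sketched like this (a minimal illustration; the helper name, extra name, and message wording are assumptions, not the project's actual code):

```python
def check_optional_deps(modules):
    """Return None if every module in `modules` is importable, else an error message.

    Running this at call time (e.g. at the top of create_df) instead of at
    import time means users who never touch the feature don't need the packages.
    """
    missing = []
    for name in modules:
        try:
            __import__(name)  # cheap importability probe
        except ImportError:
            missing.append(name)
    if missing:
        return (
            "Missing optional dependencies: "
            + ", ".join(missing)
            + ". Install them with: pip install 'pyspark-ai[ingestion]'"
        )
    return None
```

`create_df` would call this first and raise with the returned message, so only the feature that needs the packages fails.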
pyspark-ai | github_2023 | python | 162 | pyspark-ai | gengliangwang | @@ -62,7 +67,7 @@ def __init__(
cache_file_location: Optional[str] = None,
vector_store_dir: Optional[str] = None,
vector_store_max_gb: Optional[float] = 16,
- encoding: Optional[Encoding] = None,
+    encoding: "Optional[Encoding]" = None, | I doubt anyone will specify this option... How about we just remove it and mention it in the PR description? |
pyspark-ai | github_2023 | python | 162 | pyspark-ai | gengliangwang | @@ -113,7 +118,8 @@ def __init__(
self._cache = None
self._vector_store_dir = vector_store_dir
self._vector_store_max_gb = vector_store_max_gb
- self._encoding = encoding or tiktoken.get_encoding("cl100k_base")
+ if not create_deps_requirement_message:
+ self._enc... | As per https://github.com/pyspark-ai/pyspark-ai/pull/162/files#r1361217186, we can now move the encoding into method `create_df` |
pyspark-ai | github_2023 | python | 162 | pyspark-ai | gengliangwang | @@ -173,8 +173,13 @@ def vector_similarity_search(
lru_vector_store: Optional[LRUVectorStore],
search_text: str,
) -> str:
- from langchain.vectorstores import FAISS
- from langchain.embeddings import HuggingFaceBgeEmbeddings
+ try:
+ from langchain.vectorstores im... | Is this the same as ["faiss-cpu", "sentence-transformers", "torch"]? |
pyspark-ai | github_2023 | python | 162 | pyspark-ai | gengliangwang | @@ -351,6 +351,16 @@ def create_df(
:return: a Spark DataFrame
"""
+ # check for necessary dependencies
+ try:
+ import requests
+ import tiktoken
+ from bs4 import BeautifulSoup
+ except ImportError:
+ raise Exception(
+ ... | create => ingestion |
pyspark-ai | github_2023 | others | 162 | pyspark-ai | gengliangwang | @@ -14,10 +14,26 @@ For a more comprehensive introduction and background to our project, we have the
## Installation
+pyspark-ai can be installed via pip from [PyPI](https://pypi.org/project/pyspark-ai/):
```bash
pip install pyspark-ai
```
+pyspark-ai can also be installed with optional dependencies to enable... | Let's mention plot instead since it is much more popular. |
pyspark-ai | github_2023 | others | 162 | pyspark-ai | gengliangwang | @@ -91,6 +107,27 @@ df.ai.transform("Pivot the data by product and the revenue for each product").sh
For a detailed walkthrough of the transformations, please refer to our [transform_dataframe.ipynb](https://github.com/databrickslabs/pyspark-ai/blob/master/examples/transform_dataframe.ipynb) notebook.
+### Transfo... | Let's ignore the details of LLM here. |
pyspark-ai | github_2023 | others | 162 | pyspark-ai | gengliangwang | @@ -91,6 +107,27 @@ df.ai.transform("Pivot the data by product and the revenue for each product").sh
For a detailed walkthrough of the transformations, please refer to our [transform_dataframe.ipynb](https://github.com/databrickslabs/pyspark-ai/blob/master/examples/transform_dataframe.ipynb) notebook.
+### Transfo... | Let's ignore the llm parameter in this example since there is default value |
pyspark-ai | github_2023 | others | 162 | pyspark-ai | gengliangwang | @@ -1,15 +1,25 @@
-# Installation and setup
+# Installation and Setup
## Installation
+pyspark-ai can be installed via pip from [PyPI](https://pypi.org/project/pyspark-ai/):
```bash
pip install pyspark-ai
```
+pyspark-ai can also be installed with optional dependencies to enable certain functionality.
+For e... | Again, let's mention plot instead. |
pyspark-ai | github_2023 | python | 160 | pyspark-ai | gengliangwang | @@ -125,7 +125,10 @@ def __init__(self, vector_file_dir: str, max_size: float = 16) -> None:
self.files[file_path] = file_size
self.current_size += file_size
else:
- shutil.rmtree(file_path)
+ if os.path.isfile(file_path):
            ... | How about just throwing an error and letting the user manually clean it up or increase the size limit? When Spark creates a table from a dir, it also throws an exception instead of deleting files, in case of mistakes. |
pyspark-ai | github_2023 | python | 160 | pyspark-ai | gengliangwang | @@ -106,36 +106,40 @@ async def _arun(self, *args: Any, **kwargs: Any) -> Any:
class LRUVectorStore:
"""Implements an LRU policy to enforce a max storage space for vector file storage."""
- def __init__(self, vector_file_dir: str, max_size: float = 16) -> None:
+    def __init__(self, vector_store_dir: str, ... | We can simply check whether self.current_size exceeds the limit when the loop ends. |
pyspark-ai | github_2023 | others | 165 | pyspark-ai | gengliangwang | @@ -0,0 +1,420 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Vector Similarity Search"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
    "This notebook demonstrates usage of the vector similarity search, to help the agent select an ... | Why do we need this? Are there warnings from LangChain? |
pyspark-ai | github_2023 | others | 165 | pyspark-ai | gengliangwang | @@ -0,0 +1,420 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Vector Similarity Search"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
    "This notebook demonstrates usage of the vector similarity search, to help the agent select an ... | Note: the fair SQL should filter population within a range, such as `population` > 270 million and `population` < 330 million, so this is actually not a good example.
Is there another good dataset or question? If not, let's drop this one. |
pyspark-ai | github_2023 | python | 159 | pyspark-ai | xinrong-meng | @@ -227,31 +227,25 @@
)
PLOT_PROMPT_TEMPLATE = """
-You are an Apache Spark SQL expert programmer.
-It is forbidden to include old deprecated APIs in your code.
-For example, you will not use the pandas method "append" because it is deprecated.
-
Given a pyspark DataFrame `df`, with the output columns:
{columns}
... | May I ask what's the reason for item 6? |
pyspark-ai | github_2023 | others | 159 | pyspark-ai | xinrong-meng | @@ -0,0 +1,179 @@
+start_lat,start_lon,end_lat,end_lon,airline,airport1,airport2,cnt | I'm wondering why we don't pull the data from the original website instead of copying the file. |
pyspark-ai | github_2023 | python | 159 | pyspark-ai | xinrong-meng | @@ -0,0 +1,91 @@
+from typing import Any, List, Optional
+
+from langchain import LLMChain
+from langchain.callbacks.manager import Callbacks
+from langchain.chat_models.base import BaseChatModel
+from langchain.schema import BaseMessage, HumanMessage
+from pyspark.sql import DataFrame
+
+from pyspark_ai.code_logger im... | I'm wondering what that's used for. |
pyspark-ai | github_2023 | python | 159 | pyspark-ai | xinrong-meng | @@ -0,0 +1,91 @@
+from typing import Any, List, Optional
+
+from langchain import LLMChain
+from langchain.callbacks.manager import Callbacks
+from langchain.chat_models.base import BaseChatModel
+from langchain.schema import BaseMessage, HumanMessage
+from pyspark.sql import DataFrame
+
+from pyspark_ai.code_logger im... | Do we want to raise an exception here instead? |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -60,6 +61,7 @@ def __init__(
cache_file_format: str = "json",
cache_file_location: Optional[str] = None,
vector_store_dir: Optional[str] = None,
+ vector_store_max_size: Optional[float] = 1e6, | Shall we make it an Int? What's the unit here, KB? |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -101,12 +103,49 @@ async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+class LRUVectorStore:
+ """Implements an LRU policy to enforce a max storage space for vector file storage."""
+
+ def __init__(self, vector_file_dir... | 1M is just too small |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -101,12 +103,49 @@ async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+class LRUVectorStore:
+ """Implements an LRU policy to enforce a max storage space for vector file storage."""
+
+ def __init__(self, vector_file_dir... | We need to load the existing index files under the file path into the OrderedDict. The files under vector_file_dir can be reused after restarting PySparkAI.
(You can also refer some of the code from https://chat.openai.com/share/2bd66613-5dc1-427e-9277-fb45b317438a) |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -150,6 +168,51 @@ def test_similar_value_tool_e2e(self):
finally:
self.spark.sql(f"DROP TABLE IF EXISTS {table_name}")
+ def test_vector_file_lru_cache_max_files(self):
+ """Tests VectorFileLRUCache adheres to max file size, using WikiSQL training tables"""
+        # set ma... | Shall we make vector_store_max_size smaller and check whether any files are evicted? |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -60,6 +61,7 @@ def __init__(
cache_file_format: str = "json",
cache_file_location: Optional[str] = None,
vector_store_dir: Optional[str] = None,
+ vector_store_max_gb: Optional[float] = 1e6, | 1000000GB is too big. Let's make it 16GB by default. |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -101,13 +103,70 @@ async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+class LRUVectorStore:
+ """Implements an LRU policy to enforce a max storage space for vector file storage."""
+
+    def __init__(self, vector_file_dir... | We should use either `os.path.getsize` everywhere in the code, or `LRUVectorStore.get_file_size_gb` everywhere. |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -101,13 +103,70 @@ async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+class LRUVectorStore:
+ """Implements an LRU policy to enforce a max storage space for vector file storage."""
+
+    def __init__(self, vector_file_dir... | Does this return GB? |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -150,6 +168,144 @@ def test_similar_value_tool_e2e(self):
finally:
self.spark.sql(f"DROP TABLE IF EXISTS {table_name}")
+ def test_vector_file_lru_store_large_max_files(self):
+ """Tests LRUVectorStore stores all vector files to disk with large max size, for 3 small dfs"""
+... | nit: let's mock method get_file_size_gb to always return 1. Then we can test a 2GB maximum limit by creating 3 files; the first created file should be evicted. |
pyspark-ai | github_2023 | python | 157 | pyspark-ai | gengliangwang | @@ -101,13 +103,70 @@ async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+class LRUVectorStore:
+ """Implements an LRU policy to enforce a max storage space for vector file storage."""
+
+ def __init__(self, vector_file_dir... | What if curr_file_size is larger than the max size limit? It will be added to the store and then everything will be deleted..
|
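The eviction concerns in the last few LRU comments — check `current_size` once when the loop ends, evict oldest-first, and guard the edge case where a single file exceeds the whole budget — can be sketched with an `OrderedDict` (a simplified stand-in for `LRUVectorStore`, not the project's code; sizes are passed in directly, which also makes the reviewer's mock-the-size testing idea trivial):

```python
from collections import OrderedDict

class LRUSizeStore:
    """Toy LRU store that tracks per-file sizes against a max budget (GB)."""

    def __init__(self, max_size_gb: float) -> None:
        self.max_size_gb = max_size_gb
        self.files: "OrderedDict[str, float]" = OrderedDict()  # path -> size_gb
        self.current_size = 0.0

    def add(self, path: str, size_gb: float) -> list:
        # Guard the edge case: a file bigger than the whole budget would
        # otherwise be admitted and then evict everything else.
        if size_gb > self.max_size_gb:
            raise ValueError(
                f"{path} ({size_gb} GB) exceeds the {self.max_size_gb} GB limit"
            )
        self.files[path] = size_gb
        self.files.move_to_end(path)  # mark as most recently used
        self.current_size += size_gb
        evicted = []
        # Check the total once, after insertion, evicting oldest-first.
        while self.current_size > self.max_size_gb:
            old_path, old_size = self.files.popitem(last=False)
            self.current_size -= old_size
            evicted.append(old_path)
        return evicted
```

With a 2 GB budget and three 1 GB files, the third `add` evicts the first — the eviction scenario the mocked-size test in the review describes.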
pyspark-ai | github_2023 | python | 156 | pyspark-ai | gengliangwang | @@ -133,6 +135,8 @@ def get_tables_and_questions(source_file):
errors = 0
# Create sql query for each question and table
+ error_file = open("data/error_file.txt", "w") | let's make it `mismatched_results.txt`. |
pyspark-ai | github_2023 | python | 156 | pyspark-ai | gengliangwang | @@ -87,7 +87,7 @@
Observation: 01-01-2006
Thought: The correct Birthday filter should be '01-01-2006' because it is semantically closest to the keyword.
I will use the column 'Birthday' to filter the rows where its value is '01-01-2006' and then select the COUNT(`Student`)
-because COUNT gives me the total number o... | QQ: does this fix all mismatched `COUNT` queries? |
pyspark-ai | github_2023 | python | 155 | pyspark-ai | gengliangwang | @@ -51,6 +51,7 @@
Write a Spark SQL query to retrieve from view `spark_ai_temp_view_14kjd0`: Find the mountain located in Japan.
Thought: The column names are non-descriptive, but from the sample values I see that column `a` contains mountains
and column `c` contains countries. So, I will filter on column `c` for 'J... | end-to-end test for this one? |
pyspark-ai | github_2023 | python | 155 | pyspark-ai | gengliangwang | @@ -123,12 +123,12 @@ def test_similar_value_tool_e2e(self):
agent = spark_ai._create_sql_agent()
similar_value_tool = agent.lookup_tool("similar_value")
- table_file = "tests/data/test_similar_value_tool_e2e.tables.jsonl"
+ table_file = "tests/data/test_transform.tables.jsonl"
... | This is too hacky. It's ok to have two jsonl files, one for test_ai_tools and another one for end_to_end tests.
This way, our tests are easier to understand. |
pyspark-ai | github_2023 | python | 155 | pyspark-ai | gengliangwang | @@ -135,6 +138,27 @@ def test_transform_col_query_wikisql(self):
finally:
self.spark.sql(f"DROP TABLE IF EXISTS {table_name}")
+ def test_filter_exact(self):
+ """Test that agent filters by an exact value"""
+ statements = create_temp_view_statements( | Nit: we can have a function get_table_name
```
def create_and_get_table(tbl: str):
statements = create_temp_view_statements("tests/data/test_transform.tables.jsonl")
    tbl_in_json = "table_" + tbl.replace("_", "-")
for statement in statements:
if tbl_in_json in statement:
... |
pyspark-ai | github_2023 | python | 155 | pyspark-ai | gengliangwang | @@ -89,6 +89,19 @@ def setUp(self):
schema="string",
)
+ def get_table_name( | let's rename this as create_and_get_table_name, since there is another function `get_table_name` |
pyspark-ai | github_2023 | python | 153 | pyspark-ai | gengliangwang | @@ -253,8 +253,10 @@ def _create_dataframe_with_llm(
self._spark.sql(sql_query)
return self._spark.table(view_name)
- def _get_df_schema(self, df: DataFrame) -> str:
- return "\n".join([f"{name}: {dtype}" for name, dtype in df.dtypes])
+    def _get_df_schema(self, df: DataFrame) -> Tuple[... | let's split this into two methods. Returning two values here is confusing |
pyspark-ai | github_2023 | python | 153 | pyspark-ai | gengliangwang | @@ -41,20 +41,21 @@
)
SPARK_SQL_EXAMPLES = [
- """QUESTION: Given a Spark temp view `spark_ai_temp_view_14kjd0` with the following columns:
+    """QUESTION: Given a Spark temp view `spark_ai_temp_view_14kjd0` with the following sample vals, | nit: we can have a follow-up to create a function to unify the prompt over a table, so that both the example and the prompt for user input are consistent. |
pyspark-ai | github_2023 | python | 153 | pyspark-ai | gengliangwang | @@ -96,6 +96,34 @@ def test_dataframe_transform(self):
transformed_df = df.ai.transform("what is the name with oldest age?")
self.assertEqual(transformed_df.collect()[0][0], "Bob")
+ def test_transform_col_query_nondescriptive(self):
+        """Test that agent selects correct query column, even ... | This test case seems like too much. I'm not sure it will work with other LLMs in the future... |
pyspark-ai | github_2023 | others | 145 | pyspark-ai | gengliangwang | @@ -0,0 +1,875 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "2227e466-f9e4-4882-9a21-da2b1824b301",
+ "metadata": {},
+ "source": [
+ "# Generate Python UDFs for different cases"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "86b69eae-351d-45f4-ac16-f3bd8eb2bd42",
+ "metadata": {},
... | Hmmm, it doesn't seem necessary to have this cell in the example notebook, shall we remove it? |
pyspark-ai | github_2023 | others | 145 | pyspark-ai | gengliangwang | @@ -140,6 +140,9 @@ auto_top_growth_df.ai.verify("expect sales change percentage to be between -100
> result: True
### UDF Generation
+
+#### Example 1: Compute expression from columns | @SemyonSinchenko sorry but I would prefer keeping the README simple.
> @SemyonSinchenko Again, thanks for working on this. Shall we also mention this new notebook in https://github.com/pyspark-ai/pyspark-ai/blob/master/docs/udf_generation.md?
I meant to mention the new examples or notebook link in the `udf_genera... |
pyspark-ai | github_2023 | others | 145 | pyspark-ai | gengliangwang | @@ -23,3 +25,117 @@ spark.sql("select brand as brand, previous_years_sales(brand, us_sales, sales_ch
| Honda | 1315225|
| Hyundai | 739045|
+## Example 2: Parse heterogeneous JSON text
+
+```python
+from typing import List
+
+@spark_ai.udf
+def parse_heterogeneous_json(json_str: str, schema: List... | Hi @SemyonSinchenko , the doc is now available at https://pyspark.ai/udf_generation/
Shall we show how `df` is created instead? |
pyspark-ai | github_2023 | others | 144 | pyspark-ai | gengliangwang | @@ -39,7 +39,6 @@ pyarrow = ">=4.0.0"
grpcio-status = ">=1.56.0"
faiss-cpu = "^1.7.4"
sentence-transformers = "^2.2.2"
-flagembedding = "^1.1.1"
babel = "^2.12.1" | Can we remove babel & torch? |
pyspark-ai | github_2023 | python | 141 | pyspark-ai | gengliangwang | @@ -334,18 +360,146 @@ def test_spark_connect_pivot(self):
("C", "English", 90),
("C", "Science", 100),
],
- ["Student", "Subject", "Marks"]
+ ["Student", "Subject", "Marks"],
)
result = df.ai.transform("p... | @SemyonSinchenko let's move this one to tests/test_end_to_end.py like https://github.com/pyspark-ai/pyspark-ai/pull/139 did |
pyspark-ai | github_2023 | python | 141 | pyspark-ai | gengliangwang | @@ -334,18 +360,146 @@ def test_spark_connect_pivot(self):
("C", "English", 90),
("C", "Science", 100),
],
- ["Student", "Subject", "Marks"]
+ ["Student", "Subject", "Marks"],
)
    result = df.ai.transform("p... | we don't need a cache file for the end-to-end test.
Previously we used a cache file since we hadn't set up the API key in GitHub Actions. |
pyspark-ai | github_2023 | others | 140 | pyspark-ai | gengliangwang | @@ -40,6 +40,8 @@ grpcio-status = ">=1.56.0"
faiss-cpu = "^1.7.4"
sentence-transformers = "^2.2.2"
flagembedding = "^1.1.1"
+babel = "^2.12.1"
+torch = ">=2.0.0, !=2.0.1" | Why put more dependencies here? We should revisit all the dependencies for SimilarValueTool and make our SDK more lightweight. |
pyspark-ai | github_2023 | python | 140 | pyspark-ai | gengliangwang | @@ -46,5 +57,91 @@ def test_include_similar_value_tool(self):
)
+class TestSimilarValueTool(unittest.TestCase):
+ """Tests SimilarValueTool functionality"""
+
+ @classmethod
+ def setUpClass(cls):
+ cls.spark = SparkSession.builder.getOrCreate()
+ cls.llm_mock = MagicMock(spec=BaseLa... | Need to drop tables at the end |
pyspark-ai | github_2023 | python | 140 | pyspark-ai | gengliangwang | @@ -46,5 +57,91 @@ def test_include_similar_value_tool(self):
)
+class TestSimilarValueTool(unittest.TestCase):
+ """Tests SimilarValueTool functionality"""
+
+ @classmethod
+ def setUpClass(cls):
+ cls.spark = SparkSession.builder.getOrCreate()
+ cls.llm_mock = MagicMock(spec=BaseLa... | get_expected_results? |
pyspark-ai | github_2023 | python | 133 | pyspark-ai | gengliangwang | @@ -126,13 +126,21 @@ def _create_llm_chain(self, prompt: BasePromptTemplate):
return LLMChainWithCache(llm=self._llm, prompt=prompt, cache=self._cache)
def _create_sql_agent(self):
- tools = [
- QuerySparkSQLTool(spark=self._spark),
- QueryValidationTool(spark=self._spark),... | let's have a simple test case for this, to verify the tools |
pyspark-ai | github_2023 | python | 133 | pyspark-ai | gengliangwang | @@ -0,0 +1,38 @@
+import unittest
+
+from pyspark_ai.pyspark_ai import SparkAI
+from pyspark_ai.tool import QuerySparkSQLTool, QueryValidationTool, SimilarValueTool
+
+
+class TestToolsInit(unittest.TestCase):
+ def test_exclude_similar_value_tool(self):
+ """Test that SimilarValueTool is excluded by default"... | need to pass a mocked llm |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -123,6 +128,8 @@ def _create_sql_agent(self):
tools = [
QuerySparkSQLTool(spark=self._spark),
QueryValidationTool(spark=self._spark),
+ ColumnQueryTool(spark=self._spark),
+        SimilarValueTool(spark=self._spark, vector_store=None, stored_df_cols=set()), | Let's have a new variable `vector_store_dir` in `SparkAI` which can be specified during init. And `vector_store_dir` will be used when creating SimilarValueTool |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,104 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class ColumnQueryTool(BaseTool):
+ """Tool for finding the correct column name given keywords from a question."""
+
+ spark: Union[S... | Shall we specify `args_schema` and use two args in the _run method?
Example: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/tools/file_management/copy.py#L25 |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,104 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class ColumnQueryTool(BaseTool):
+ """Tool for finding the correct column name given keywords from a question."""
+
+    spark: Union[S... | note: only string columns support the 'like' operator |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,104 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class ColumnQueryTool(BaseTool):
+ """Tool for finding the correct column name given keywords from a question."""
+
+ spark: Union[S... | Same as the comment in https://github.com/databrickslabs/pyspark-ai/pull/119/files#r1333630340 |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,104 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class ColumnQueryTool(BaseTool): | I would suggest let's keep it simpler and add tool SimilarValueTool only in this PR |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -99,6 +104,7 @@ def __init__(
).search
else:
self._cache = None
+ self._vector_store_dir = vector_store_dir | We need to have a default value for the `vector_store_dir`. For example, "spark_ai_vector_store" under the temp dir of the file system. |
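A default along the lines the reviewer suggests — `spark_ai_vector_store` under the temp dir of the file system — might look like this (an illustrative sketch; the helper name is an assumption):

```python
import os
import tempfile

def default_vector_store_dir() -> str:
    """Fall back to 'spark_ai_vector_store' under the OS temp directory."""
    return os.path.join(tempfile.gettempdir(), "spark_ai_vector_store")
```

`SparkAI.__init__` could then use `vector_store_dir or default_vector_store_dir()` when wiring up SimilarValueTool.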
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,58 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class SimilarValueTool(BaseTool):
+ """Tool for finding the semantically closest word to a keyword from a vector database."""
+
+ spa... | Let's move this one to a new class "VectorSearchUtil" as a static method |
pyspark-ai | github_2023 | others | 119 | pyspark-ai | gengliangwang | @@ -37,6 +37,11 @@ grpcio = ">=1.56.0"
plotly = "^5.15.0"
pyarrow = ">=4.0.0"
grpcio-status = ">=1.56.0"
+lib = "^4.0.0" | what is this about? |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,73 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class VectorSearchUtil:
+ @staticmethod
+ def vector_similarity_search(
+ col_lst: list,
+ vector_store_path: Optional[... | If the vector store already exists, we don't need to run this query |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,73 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class VectorSearchUtil:
+ @staticmethod
+ def vector_similarity_search(
+ col_lst: list,
+        vector_store_path: Optional[... | I am not sure if we should make it more specific here by saying: Tool for finding the column value which is closest to the input text. |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,73 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class VectorSearchUtil:
+ @staticmethod
+ def vector_similarity_search(
+ col_lst: list,
+ vector_store_path: Optional[... | Not sure if we need to mention FAISS to LLM. We should mention that there is an existing vector store which contains all the column values |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -104,3 +104,73 @@ def _run(
async def _arun(self, *args: Any, **kwargs: Any) -> Any:
raise NotImplementedError("ListTablesSqlDbTool does not support async")
+
+
+class VectorSearchUtil:
+ @staticmethod
+ def vector_similarity_search(
+ col_lst: list,
+        vector_store_path: Optional[... | This duplicates the code in pyspark_ai.py... let's move it to a new file spark_utils.py |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -51,28 +64,49 @@
Write a Spark SQL query to retrieve from view `spark_ai_temp_view_93bcf0`: Pivot the fruit table by country and sum the amount for each fruit and country combination.
Thought: Spark SQL does not support dynamic pivot operations, which are required to transpose the table as requested. I should get ... | Let's make this question "What is the total number of students with the birthday January 1, 2006?"
and remove the example above.
The fewer the examples, the lower the cost. These two seem like they could be combined. |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -119,7 +149,8 @@
SPARK_SQL_PREFIX = """You are an assistant for writing professional Spark SQL queries.
Given a question, you need to write a Spark SQL query to answer the question. The result is ALWAYS a Spark SQL query.
-"""
+ALWAYS use the tool similar_value to find the correct filter value format. | Shall we make it optional? Only when the assistant is not sure about the column value |
pyspark-ai | github_2023 | python | 119 | pyspark-ai | gengliangwang | @@ -51,28 +64,32 @@
Write a Spark SQL query to retrieve from view `spark_ai_temp_view_93bcf0`: Pivot the fruit table by country and sum the amount for each fruit and country combination.
Thought: Spark SQL does not support dynamic pivot operations, which are required to transpose the table as requested. I should get ... | In the followup PR for tests, we should test the similar scores of January 1, 2006 with some dates |
pyspark-ai | github_2023 | python | 116 | pyspark-ai | gengliangwang | @@ -102,11 +103,16 @@ def get_tables_and_questions(source_file):
sqls.append(item['sql'])
return tables, questions, results, sqls
+def similarity(spark_ai_result, expected_result): | We should use the similarity in the transform_df method and ask GPT to pick the most relevant column value. |
pyspark-ai | github_2023 | python | 116 | pyspark-ai | gengliangwang | @@ -59,6 +59,20 @@
Observation: OK
Thought:I now know the final answer.
Final Answer: SELECT * FROM spark_ai_temp_view_93bcf0 PIVOT (SUM(Amount) FOR Country IN ('USA', 'Canada', 'Mexico', 'China'))"""
+ """QUESTION: Given a Spark temp view `spark_ai_temp_view_19acs2` with the following columns:
+```
+Car STRING
+... | can there be a case where we need to use sum for total number? |
pyspark-ai | github_2023 | python | 116 | pyspark-ai | gengliangwang | @@ -84,15 +98,29 @@
]
SPARK_SQL_SUFFIX = """\nQuestion: Given a Spark temp view `{view_name}` {comment}.
-It contains the following columns:
+The dataframe contains the column names and types in this format:
+column_name: type.
+It's very important to ONLY use the verbatim column names in your resulting SQL query.
... | Shall we use backticks instead of single quotes? Single quotes denote string literals in Spark SQL. |
pyspark-ai | github_2023 | python | 116 | pyspark-ai | gengliangwang | @@ -102,11 +103,16 @@ def get_tables_and_questions(source_file):
sqls.append(item['sql'])
return tables, questions, results, sqls
+def similarity(spark_ai_result, expected_result):
+ import spacy
+
+ spacy_model = spacy.load('en_core_web_lg')
+    return spacy_model(spark_ai_result).similarity... | Let's revert these two lines too. |
pyspark-ai | github_2023 | python | 104 | pyspark-ai | xinrong-meng | @@ -0,0 +1,130 @@
+import json
+import re
+from argparse import ArgumentParser
+
+from babel.numbers import parse_decimal, NumberFormatError
+from pyspark.sql import SparkSession
+
+from pyspark_ai import SparkAI
+
+
+def generate_sql_statements(table_file):
+ sql_statements = []
+ num_re = re.compile(r'[-+]?\d*\... | nit: Shall we add the accuracy percentage for easier comparison? Fine to do it as a follow-up though. |
pyspark-ai | github_2023 | python | 104 | pyspark-ai | xinrong-meng | @@ -0,0 +1,130 @@
+import json
+import re
+from argparse import ArgumentParser
+
+from babel.numbers import parse_decimal, NumberFormatError
+from pyspark.sql import SparkSession
+
+from pyspark_ai import SparkAI
+
+
+def generate_sql_statements(table_file): | nit: Shall we add a docstring to call out that the generated SQLs represent tables as views? |
pyspark-ai | github_2023 | python | 104 | pyspark-ai | xinrong-meng | @@ -327,14 +333,42 @@ def create_df(
return self._create_dataframe_with_llm(page_content, desc, columns, cache)
def _get_transform_sql_query_from_agent(
- self, temp_view_name: str, schema: str, desc: str
+ self, temp_view_name: str, schema: str, sample_rows_str: str, desc: str
) -> s... | Do we want sampling to fail silently? |
pyspark-ai | github_2023 | python | 104 | pyspark-ai | xinrong-meng | @@ -327,14 +333,42 @@ def create_df(
return self._create_dataframe_with_llm(page_content, desc, columns, cache)
def _get_transform_sql_query_from_agent(
- self, temp_view_name: str, schema: str, desc: str
+ self, temp_view_name: str, schema: str, sample_rows_str: str, desc: str
) -> s... | nit: `columns_str = "\t".join([f.name for f in df.schema.fields])` |
pyspark-ai | github_2023 | python | 91 | pyspark-ai | gengliangwang | @@ -5,7 +5,13 @@
from typing import Callable, List, Optional
from urllib.parse import urlparse
-import pandas as pd # noqa: F401
+# We only try to import pandas here to circumvent certain issues
+# when the generated code assumes that pandas needs to be imported
+# before actually evaluating the code.
+try: | Shall we move this block under method `plot_df`? |
pyspark-ai | github_2023 | python | 94 | pyspark-ai | gengliangwang | @@ -205,5 +205,81 @@ def test_analysis_handling(self):
self.assertEqual(left, right)
+class SparkConnectTestCase(unittest.TestCase):
+
+ @classmethod
+ def setUpClass(cls):
+ cls.spark = SparkSession.builder.remote("sc://localhost").getOrCreate()
+
+ @classmethod
+    def tearDownClass(cls)... | Let's skip the tests in GitHub Actions:
```
@unittest.skip(...)
```
I will enable them later.
|
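Editor's note (not a dataset row): the `@unittest.skip(...)` decoration the reviewer suggests can be sketched as below. The test name, skip reason, and placeholder body are illustrative; in the real suite the methods would exercise a session built via `SparkSession.builder.remote("sc://localhost")`.

```python
import unittest

class SparkConnectTestCase(unittest.TestCase):
    # Skipped tests are reported but never executed, so CI passes even
    # without a Spark Connect server listening on sc://localhost.
    @unittest.skip("requires a local Spark Connect server (sc://localhost)")
    def test_transform(self):
        self.fail("should never run")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SparkConnectTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The runner records the skip in `result.skipped` while the run as a whole still counts as successful.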
pyspark-ai | github_2023 | others | 94 | pyspark-ai | gengliangwang | @@ -35,6 +35,8 @@ google-api-python-client = "^2.90.0"
chispa = "^0.9.2"
grpcio = ">=1.48.1"
plotly = "^5.15.0"
+pyarrow = "^12.0.1" | Let's make it consistent with Spark:
https://github.com/apache/spark/blob/master/python/setup.py#L134
pyarrow = ">=4.0.0"
|
pyspark-ai | github_2023 | others | 94 | pyspark-ai | gengliangwang | @@ -35,6 +35,8 @@ google-api-python-client = "^2.90.0"
chispa = "^0.9.2"
grpcio = ">=1.48.1"
plotly = "^5.15.0"
+pyarrow = "^12.0.1"
+grpcio-status = ">=1.48,<1.57" | ">=1.56.0" |
pyspark-ai | github_2023 | others | 82 | pyspark-ai | gengliangwang | @@ -33,6 +33,8 @@ pygments = "^2.15.1"
google-api-python-client = "^2.90.0"
chispa = "^0.9.2"
grpcio = ">=1.48.1"
+plotly = "^5.15.0"
+pyspark = "^3.4.0" | @vjr pyspark is listed as dev dependency below |
pyspark-ai | github_2023 | others | 90 | pyspark-ai | gengliangwang | @@ -27,7 +27,7 @@ requests = "^2.31.0"
tiktoken = "0.4.0"
beautifulsoup4 = "^4.12.2"
openai = "^0.27.8"
-langchain = "^0.0.201"
+langchain >= 0.0.220, < 0.1.0 | ```suggestion
langchain = ">=0.0.201,<0.1.0"
``` |
pyspark-ai | github_2023 | python | 81 | pyspark-ai | gengliangwang | @@ -153,6 +153,7 @@
1. function definition f, in Python (Do NOT surround the function definition with quotes)
2. 1 blank new line
3. Call f on df and assign the result to a variable, result: result = name_of_f(df)
+Do not include any explanation in English. For example, do NOT include "Here is your output:" | Shall we try `The answer MUST contain python code only` |
pyspark-ai | github_2023 | python | 77 | pyspark-ai | gengliangwang | @@ -391,6 +391,13 @@ def verify_df(self, df: DataFrame, desc: str, cache: bool = True) -> None:
"""
tags = self._get_tags(cache)
llm_output = self._verify_chain.run(tags=tags, df=df, desc=desc)
+
+ if "```" in llm_output: | let's use the `_extract_code_blocks` method here. |
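Editor's note (not a dataset row): a generic version of the fence-stripping helper the reviewer points to might look like the sketch below. The function name mirrors the `_extract_code_blocks` method mentioned in the comment, but the body is a hypothetical reconstruction, not the project's actual implementation.

```python
import re

# Build the triple-backtick fence programmatically so it does not clash with
# this document's own code fences.
FENCE = "`" * 3

def extract_code_blocks(text: str) -> list[str]:
    """Pull fenced code blocks out of LLM output, dropping an optional
    language tag; fall back to the whole text when no fence is present."""
    pattern = FENCE + r"(?:[a-zA-Z]*\n)?(.*?)" + FENCE
    blocks = re.findall(pattern, text, re.DOTALL)
    return [b.strip() for b in blocks] if blocks else [text.strip()]

llm_output = f"Sure, here is the check:\n{FENCE}python\nresult = True\n{FENCE}"
print(extract_code_blocks(llm_output)[0])
```

Centralizing this in one helper avoids re-implementing ad-hoc `"```" in llm_output` checks at each call site.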
pyspark-ai | github_2023 | python | 77 | pyspark-ai | allisonwang-db | @@ -150,13 +150,12 @@
2. 1 blank new line
3. Call f on df and assign the result to a variable, result: result = name_of_f(df)
-Include any necessary import statements INSIDE the function definition.
-For example:
+Include any necessary import statements INSIDE the function definition, like this:
def gen_random():
... | Do we have a test for this to prevent regressions? |
pyspark-ai | github_2023 | others | 20 | pyspark-ai | gengliangwang | @@ -80,70 +80,70 @@
"\n",
"\u001b[92mINFO: \u001b[0mSQL query for the ingestion:\n",
"\u001b[34mCREATE\u001b[39;49;00m\u001b[37m \u001b[39;49;00m\u001b[34mOR\u001b[39;49;00m\u001b[37m \u001b[39;49;00m\u001b[34mREPLACE\u001b[39;49;00m\u001b[37m \u001b[39;49;00mTEMP\u001b[37m \u001b[39;49;00m\u001b[3... | Shall we rerun this one. It doesn't look good to have `change_direction` |
pyspark-ai | github_2023 | python | 2 | pyspark-ai | gengliangwang | @@ -131,3 +131,19 @@
PLOT_PROMPT = PromptTemplate(
input_variables=["columns", "explain"], template=PLOT_PROMPT_TEMPLATE
)
+
+TEST_TEMPLATE = """
+You are an Apache Spark SQL expert, with experience writing robust test cases for PySpark code.
+Given a PySpark function which transforms a dataframe, write a unit ... | So the test is totally based on function name? Seems hacky.
Shall we make this code generation a notebook magic instead of an API? |
pyspark-ai | github_2023 | others | 3 | pyspark-ai | gengliangwang | @@ -15,19 +15,19 @@ classifiers = [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
- "Programming Language :: Python :: 3.8", | why dropping python 3.8? We should support it. |
pyspark-ai | github_2023 | python | 5 | pyspark-ai | allisonwang-db | @@ -234,6 +237,26 @@ def plot_df(self, df: DataFrame) -> None:
code = response.content.replace("```python", "```").split("```")[1]
exec(code)
+# def udf_llm(self, header: str, desc: str):
+# code = self._udf_chain.run(
+# desc=desc
+# )
+# # reformat, add inde... | How about let's make it a decorator. Similar to the current `@udf` API in PySpark. For example
```
@assistant.udf(returnType="int")
def my_udf(x: int):
"""Description of the udf"
...
``` |
pyspark-ai | github_2023 | python | 5 | pyspark-ai | gengliangwang | @@ -166,3 +166,50 @@
"""
VERIFY_PROMPT = PromptTemplate(input_variables=["df", "desc"], template=VERIFY_TEMPLATE)
+
+UDF_TEMPLATE = """
+This is the documentation for a PySpark user-defined function (udf): pyspark.sql.functions.udf
+
+A udf creates a deterministic, reusable function in Spark. It can take any data t... | Let's use the few_shot_examples from langchain to build the prompt https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | allisonwang-db | @@ -119,3 +119,64 @@
Assume the result of the Spark SQL query is stored in a dataframe named 'df', visualize the query result using plotly.
There is no need to install any package with pip.
"""
+
+VERIFY_TEMPLATE = """
+Given 1) a PySpark dataframe, my_df, and 2) a description of expected properties, desc,
+generate... | shall we ask the llm to generate the code that contains the function, and we can execute the generated code to get the result (True/False). I don't think the llm can directly execute the code generated. |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | allisonwang-db | @@ -119,3 +119,64 @@
Assume the result of the Spark SQL query is stored in a dataframe named 'df', visualize the query result using plotly.
There is no need to install any package with pip.
"""
+
+VERIFY_TEMPLATE = """
+Given 1) a PySpark dataframe, my_df, and 2) a description of expected properties, desc,
+generate... | For example, here we can ask LLM to generate this line
```suggestion
has_5_columns(df=my_df)
```
and we can execute the code to get the actual result |
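Editor's note (not a dataset row): the pattern the reviewers converge on here — have the LLM emit a named checker function plus a call line, then `exec` the generated code host-side to obtain a concrete boolean — can be sketched as follows. `FakeDF` and the `generated_code` string are stand-ins for a real PySpark DataFrame and real model output.

```python
class FakeDF:
    """Minimal stand-in for a PySpark DataFrame (only .columns is needed)."""
    columns = ["a", "b", "c", "d", "e"]

# Pretend this string came back from the LLM: a check function followed by
# the call line the reviewers ask the model to include.
generated_code = """
def has_5_columns(df) -> bool:
    return len(df.columns) == 5

result = has_5_columns(df=df)
"""

# The host executes the generated snippet in a scope that binds `df`, then
# reads back the True/False verdict the LLM itself could not compute.
scope = {"df": FakeDF()}
exec(generated_code, scope)
print(scope["result"])
```

This keeps the LLM responsible only for writing the check, while the actual evaluation happens against the live data.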
pyspark-ai | github_2023 | python | 1 | pyspark-ai | allisonwang-db | @@ -119,3 +119,64 @@
Assume the result of the Spark SQL query is stored in a dataframe named 'df', visualize the query result using plotly.
There is no need to install any package with pip.
"""
+
+VERIFY_TEMPLATE = """
+Given 1) a PySpark dataframe, my_df, and 2) a description of expected properties, desc,
+generate... | Maybe we can add this in a separate PR. |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | gengliangwang | @@ -119,3 +119,64 @@
Assume the result of the Spark SQL query is stored in a dataframe named 'df', visualize the query result using plotly.
There is no need to install any package with pip.
"""
+
+VERIFY_TEMPLATE = """
+Given 1) a PySpark dataframe, my_df, and 2) a description of expected properties, desc, | Shall we use `df` instead of `my_df`? Also, it would be great to quote the variable name |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | gengliangwang | @@ -119,3 +119,64 @@
Assume the result of the Spark SQL query is stored in a dataframe named 'df', visualize the query result using plotly.
There is no need to install any package with pip.
"""
+
+VERIFY_TEMPLATE = """
+Given 1) a PySpark dataframe, my_df, and 2) a description of expected properties, desc,
+generate... | Why do we need to mentiond how df is created? |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | gengliangwang | @@ -234,11 +239,41 @@ def plot_df(self, df: DataFrame) -> None:
code = response.content.replace("```python", "```").split("```")[1]
exec(code)
+ def verify_df(self, my_df: DataFrame, desc: str) -> None:
+ """
+ This method creates and runs test cases for the provided PySpark datafra... | Will the code be executed, or just shown in the log? |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | gengliangwang | @@ -119,3 +119,59 @@
Assume the result of the Spark SQL query is stored in a dataframe named 'df', visualize the query result using plotly.
There is no need to install any package with pip.
"""
+
+VERIFY_TEMPLATE = """
+Given 1) a PySpark dataframe, df, and 2) a description of expected properties, desc,
+generate a ... | Shall we move the TEST_TEMPLATE to another PR? |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | gengliangwang | @@ -234,11 +237,51 @@ def plot_df(self, df: DataFrame) -> None:
code = response.content.replace("```python", "```").split("```")[1]
exec(code)
+ def verify_df(self, my_df: DataFrame, desc: str) -> None: | ```suggestion
def verify_df(self, df: DataFrame, desc: str) -> None:
``` |
pyspark-ai | github_2023 | python | 1 | pyspark-ai | gengliangwang | @@ -234,11 +237,51 @@ def plot_df(self, df: DataFrame) -> None:
code = response.content.replace("```python", "```").split("```")[1]
exec(code)
+ def verify_df(self, my_df: DataFrame, desc: str) -> None:
+ """
+ This method creates and runs test cases for the provided PySpark datafra... | shall we use `exec()` on the generated code directly? |
biliscope | github_2023 | javascript | 258 | gaogaotiantian | gaogaotiantian | @@ -97,6 +97,15 @@ window.addEventListener("load", function() {
?.querySelector("bili-comment-user-info")?.shadowRoot
?.querySelector("#user-name > a");
userNameA?.addEventListener("mouseover", showProfileDebo... | Do we really need to do this here? The text inside shouldn't change again, right? And this observer is fairly heavy. |
biliscope | github_2023 | javascript | 258 | gaogaotiantian | gaogaotiantian | @@ -97,6 +97,15 @@ window.addEventListener("load", function() {
?.querySelector("bili-comment-user-info")?.shadowRoot
?.querySelector("#user-name > a");
userNameA?.addEventListener("mouseover", showProfileDebo... | Could this query be a bit more precise? The text may also contain things like video links or search links; shouldn't we be able to locate the user @-mention directly via an attribute? The debounce will filter those out anyway, but it would be more ideal if we could do the filtering easily here. |
biliscope | github_2023 | javascript | 258 | gaogaotiantian | gaogaotiantian | @@ -109,10 +118,18 @@ window.addEventListener("load", function() {
const avatar = userInfo.querySelector("#user-avatar");
avatar?.addEventListener("mouseover", showProfileDebounce);
- tryObserve(userInfo.shadowRoot); | I don't follow. I think your understanding here may be off. `tryObserve` means observing the changes inside that `shadowRoot`; `userInfo` and `richText` have nothing to do with each other — they are two separate trees, and which one renders first is unrelated to whether each needs to be observed.
Also, randomly removing code from an unrelated place in a PR like this is generally not recommended unless there is a solid reason. |
biliscope | github_2023 | javascript | 258 | gaogaotiantian | gaogaotiantian | @@ -109,10 +118,18 @@ window.addEventListener("load", function() {
const avatar = userInfo.querySelector("#user-avatar");
avatar?.addEventListener("mouseover", showProfileDebounce);
- tryObserve(userInfo.shadowRoot);
... | On second thought, this does indeed need to be observed. |
biliscope | github_2023 | javascript | 257 | gaogaotiantian | gaogaotiantian | @@ -36,13 +36,19 @@ function labelPopularPage() {
function labelDynamicPage() {
for (let el of document.getElementsByClassName("bili-dyn-item")) {
if (el.__vue__.author.type == "AUTHOR_TYPE_UGC_SEASON") { | Is the normal user type here a fixed value? Could this be written in something like the following form?
```javascript
const authorType = ...
const mid = ...
if (authorType == ... && mid) {
    ...
}
``` |
biliscope | github_2023 | javascript | 251 | gaogaotiantian | gaogaotiantian | @@ -267,6 +273,8 @@ VideoProfileCard.prototype.updateTarget = function(target) {
}
VideoProfileCard.prototype.drawConclusion = function() {
+ document.getElementById("biliscope-ai-summary-popup").classList.add("d-none"); | What is this for? |