Schema:
repo_name: string (length 1-62)
dataset: string (1 distinct value)
lang: string (11 distinct values)
pr_id: int64 (range 1-20.1k)
owner: string (length 2-34)
reviewer: string (length 2-39)
diff_hunk: string (length 15-262k)
code_review_comment: string (length 1-99.6k)
openrouter-runner
github_2023
python
68
OpenRouterTeam
alexanderatallah
@@ -70,70 +82,56 @@ def __init__(self, params: VllmParams): async def generate(self, payload: CompletionPayload, params): assert self.engine is not None, "Engine not initialized" + # Track usage as a running total + # NOTE: This does NOT yet include cold-start GPU time
how hard is this to add @sambarnes ?
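One way the cold-start accounting asked about above could work is a sketch like the following (hypothetical names, not the repo's actual code): stamp the container at import time, then charge the cold-start window to the first request only.

```python
import time

# Hypothetical sketch: capture a timestamp when the container module is
# imported, so the first request can fold cold-start duration into its
# usage total; later requests bill only their own wall-clock time.
_container_start = time.perf_counter()
_cold_start_billed = False


def billable_duration(t_request_start: float, t_request_end: float) -> float:
    """Duration to report in Usage, charging the cold-start window to the
    first request only."""
    global _cold_start_billed
    if not _cold_start_billed:
        _cold_start_billed = True
        return t_request_end - _container_start  # includes cold start
    return t_request_end - t_request_start
```

This only covers time since Python import, not GPU provisioning before the process starts, which the platform would have to expose.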
openrouter-runner
github_2023
python
68
OpenRouterTeam
louisgv
@@ -95,45 +100,29 @@ class Usage(BaseModel): prompt_tokens: int completion_tokens: int - # TODO: add these in after deprecating the old fields - # duration: float - # gpu_type: GPUType - # gpu_count: int + duration: float + gpu_type: GPUType + gpu_count: int class ResponseBody(Base...
The main reason we used this was due to the pydantic version shipped with vllm - it's deprecated in latest (due to modal installation), but not if we have vllm's pydantic typing xd..... I wanted to do this way back in September when I saw the red deprecation strike! - Let's try it out on dev and see if it works!
openrouter-runner
github_2023
python
68
OpenRouterTeam
louisgv
@@ -70,84 +84,82 @@ def __init__(self, params: VllmParams): async def generate(self, payload: CompletionPayload, params): assert self.engine is not None, "Engine not initialized" - t_start_inference = time.perf_counter() + # Track usage as a running total. For the first request to the + ...
@sambarnes we need to yield "something" to keep the connection going, based on an experiment I did some time ago with fetch + SSE. Basically a hack to keep the connection going since this is a one-way pull, not a bi-directional connection like websocket where there's a dedicated handshake before transmission. It's k...
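The keep-alive hack described above can be sketched roughly as follows (names and the timeout are illustrative, not the repo's code): while the engine has produced nothing yet, periodically yield an SSE comment line so the one-way HTTP connection stays open.

```python
import asyncio

# Sketch: wrap a token source in an SSE stream that emits a comment line
# (": ping") whenever nothing has been produced for a while, keeping the
# connection alive without sending real data.
async def sse_stream(token_source, keepalive=": ping\n\n", idle_timeout=10.0):
    queue: asyncio.Queue = asyncio.Queue()

    async def produce():
        async for tok in token_source:
            await queue.put(tok)
        await queue.put(None)  # sentinel: generation finished

    task = asyncio.create_task(produce())
    while True:
        try:
            item = await asyncio.wait_for(queue.get(), timeout=idle_timeout)
        except asyncio.TimeoutError:
            yield keepalive  # nothing ready yet; keep the connection open
            continue
        if item is None:
            break
        yield f"data: {item}\n\n"
    await task
```

SSE clients ignore lines starting with `:`, which is what makes the comment line a safe filler.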
openrouter-runner
github_2023
python
68
OpenRouterTeam
louisgv
@@ -70,84 +84,82 @@ def __init__(self, params: VllmParams): async def generate(self, payload: CompletionPayload, params): assert self.engine is not None, "Engine not initialized" - t_start_inference = time.perf_counter() + # Track usage as a running total. For the first request to the + ...
Feel like we don't have to reset it here. The exception could have been thrown mid generation :-?
openrouter-runner
github_2023
python
39
OpenRouterTeam
louisgv
@@ -29,3 +29,12 @@ async def test_completion_streaming(): output = await completion(prompt, stream=True) assert output is not None + + +@pytest.mark.integration +@pytest.mark.asyncio +async def test_large_prompt(): + """Should accept & complete a large prompt""" + prompt = get_words_from_file(n=2500) ...
Ooh interesting - what other aspects of an LLM endpoint would you test in an e2e, and how?
openrouter-runner
github_2023
python
64
OpenRouterTeam
alexanderatallah
@@ -85,10 +85,28 @@ class CompletionPayload(BaseModel): runner: RunnerConfiguration | None = None -class ResponseBody(BaseModel): - text: str +class Usage(BaseModel): + """ + An OpenAI-style usage struct, containing the expected + token counts as well as additional data about the GPU + usage of ...
yeah it seems redundant to me. this repo doesn't follow their spec anyway, so I think we could omit it
openrouter-runner
github_2023
python
64
OpenRouterTeam
alexanderatallah
@@ -87,22 +88,29 @@ async def generate(self, payload: CompletionPayload, params): and request_output.outputs[0].text[-1] == "\ufffd" ): continue - token = request_output.outputs[0].text[index:] - index = len(req...
OpenAI doesn't do any accounting for streaming :( We should, by sending the running token counts along with the running total time
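A hypothetical shape for the per-chunk accounting proposed above: each streamed chunk carries the running completion-token count plus the running duration so far (names are illustrative, not the repo's code).

```python
import time
from dataclasses import dataclass


@dataclass
class StreamUsage:
    prompt_tokens: int
    completion_tokens: int  # running total so far
    duration: float         # running wall-clock seconds so far


def attach_usage(chunks, prompt_tokens):
    """chunks yields (text, n_new_tokens) pairs from the engine; each
    emitted chunk is paired with the usage accumulated up to that point."""
    t0 = time.perf_counter()
    total = 0
    for text, n_new in chunks:
        total += n_new
        yield text, StreamUsage(prompt_tokens, total, time.perf_counter() - t0)
```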
openrouter-runner
github_2023
python
61
OpenRouterTeam
sambarnes
@@ -15,11 +15,20 @@ def completion( + request: Request, payload: CompletionPayload, ): model_path = get_model_path(payload.model) logger.info( - "Received completion request", extra={"model": str(model_path)} + "Received completion request", + extra={
IIRC at opensea we did something that set this context globally for a request so all logs in the call stack used it 🤔 but that's just something in the back of my mind, we can do it later if necessary
openrouter-runner
github_2023
python
61
OpenRouterTeam
sambarnes
@@ -15,11 +15,20 @@ def completion( + request: Request, payload: CompletionPayload, ): model_path = get_model_path(payload.model) logger.info( - "Received completion request", extra={"model": str(model_path)} + "Received completion request", + extra={ + "model": s...
I forget if this key needs to be snake_case for datadog to pick it up
openrouter-runner
github_2023
python
60
OpenRouterTeam
alexanderatallah
@@ -1,9 +1,12 @@ from enum import Enum -from typing import List, Optional, Union +from typing import Final, List, Optional, Union from fastapi.responses import JSONResponse, PlainTextResponse from pydantic import BaseModel +A100_40G: Final[float] = 0.001036 +A100_80G: Final[float] = 0.001553
can we name this variable based on the units? ($/sec?)
openrouter-runner
github_2023
python
53
OpenRouterTeam
alexanderatallah
@@ -77,11 +77,9 @@ async def generate(self, payload: CompletionPayload, params): try: import time - tags = {"model": self.engine_args.model} - with timer("engine.generate()", tags=tags):
we're removing this bc it was instant right?
openrouter-runner
github_2023
python
49
OpenRouterTeam
montasaurus
@@ -31,6 +33,17 @@ def add_observability(image: Image): ) +@contextmanager +def timer(action: str, tags: dict[str, str | int] = None) -> None: + """A simple timer context manager with structured logging for its output.""" + start = time.perf_counter() + yield + elapsed = time.perf_counter() - start...
```suggestion extra = (tags or {}) | {"duration": elapsed} ``` to fix the precedence
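The precedence issue behind that suggestion: `|` binds tighter than `or`, so without parentheses a truthy `tags` dict short-circuits the expression and the duration is silently dropped.

```python
elapsed = 0.25
tags = {"model": "demo"}

# `tags or {} | {"duration": elapsed}` parses as `tags or ({} | {...})`,
# so any non-empty `tags` wins and "duration" never makes it in.
wrong = tags or {} | {"duration": elapsed}

# Parenthesizing merges the fallback first, then adds the duration.
right = (tags or {}) | {"duration": elapsed}
```

The dict union operator `|` requires Python 3.9+.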
openrouter-runner
github_2023
others
51
OpenRouterTeam
louisgv
@@ -0,0 +1,39 @@ +name: Release (prod) + +on: + push: + branches: [main] + paths: ['modal/**']
We should add the poetry lock file as well
openrouter-runner
github_2023
others
51
OpenRouterTeam
louisgv
@@ -0,0 +1,39 @@ +name: Release (prod) + +on: + push: + branches: [main] + paths: ['modal/**'] + +jobs: + release: + name: Release (prod) + runs-on: ubuntu-latest + env: + MODAL_TOKEN_ID: ${{ secrets.MODAL_TOKEN_ID }} + MODAL_TOKEN_SECRET: ${{ secrets.MODAL_TOKEN_SECRET }}
Done! cc @alexanderatallah to config @sambarnes 's perm
openrouter-runner
github_2023
python
48
OpenRouterTeam
sambarnes
@@ -53,7 +58,7 @@ def download_model(model_name: str): ignore_patterns.append("*.bin") # Clean doesn't remove the cache, so using `local_files_only` here returns the cache even when the local dir is empty. - print(f"Checking for {model_name}") + logger.info(f"Checking for {model_name}")
in the future we might wanna move more towards structured logging with some of these, that way we can e.g. group logs by model in DD: `logger.info("Checking for model", extra=dict(model=model_name))` - def not a right-now thing, but worth considering going forward
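A minimal sketch of that structured-logging pattern: keys passed via `extra` become attributes on the `LogRecord`, which a JSON/Datadog formatter can then emit as fields for grouping (the logger name and model string below are just examples).

```python
import logging

records = []


class Capture(logging.Handler):
    """Keeps raw LogRecords so we can inspect the `extra` attributes."""
    def emit(self, record):
        records.append(record)


logger = logging.getLogger("downloader-example")
logger.setLevel(logging.INFO)
logger.addHandler(Capture())

# Fields passed via `extra` land on the record as attributes.
logger.info("Checking for model", extra={"model": "some-org/some-model"})
```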
openrouter-runner
github_2023
python
48
OpenRouterTeam
sambarnes
@@ -0,0 +1,107 @@ +import json +import logging +import os +import sys + +from datadog_api_client import ApiClient, Configuration +from datadog_api_client.v2.api.logs_api import LogsApi +from datadog_api_client.v2.model.content_encoding import ContentEncoding +from datadog_api_client.v2.model.http_log import HTTPLog +fr...
given the new docs say: > or leave it blank to disable Datadog (e.g. `DD_API_KEY=` do you know if `DD_API_KEY` will still be on the container as empty string if ppl set it like that? if so, would probs need to be ```suggestion if os.environ.get("DD_API_KEY") is not None: ```
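A quick check of the semantics in question: `DD_API_KEY=` in an env file usually lands in the container as an empty string, which is falsy but still passes an `is not None` test, so the choice of check decides whether a blank value counts as "disabled".

```python
import os

os.environ["DD_API_KEY"] = ""  # simulating DD_API_KEY= left blank
blank = os.environ.get("DD_API_KEY")

os.environ.pop("DD_API_KEY_MISSING", None)
missing = os.environ.get("DD_API_KEY_MISSING")

# blank is "" (present but falsy); missing is None (absent). A truthiness
# check treats both as "Datadog off"; `is not None` treats blank as set.
```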
openrouter-runner
github_2023
python
48
OpenRouterTeam
sambarnes
@@ -0,0 +1,107 @@ +import json +import logging +import os +import sys + +from datadog_api_client import ApiClient, Configuration +from datadog_api_client.v2.api.logs_api import LogsApi +from datadog_api_client.v2.model.content_encoding import ContentEncoding +from datadog_api_client.v2.model.http_log import HTTPLog +fr...
think we should still do a bare `logging.exception(` in order for the sentry hook to fire & notify us there? ```suggestion logging.exception("Error sending log to Datadog") ```
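A sketch of the pattern being suggested: swallow the delivery failure so it never breaks the request, but record it with `logging.exception`, which logs at ERROR with the traceback attached so an error-tracking logging hook (such as Sentry's) can fire. The helper and logger names are illustrative.

```python
import logging

log = logging.getLogger("datadog-shipper-example")
log.setLevel(logging.INFO)

captured = []


class Capture(logging.Handler):
    def emit(self, record):
        captured.append(record)


log.addHandler(Capture())


def ship(send):
    """Attempt delivery; never raise, but leave an ERROR record with traceback."""
    try:
        send()
    except Exception:
        # logging.exception must be called from inside an except block;
        # it logs at ERROR and attaches the active exception info.
        log.exception("Error sending log to Datadog")
```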
openrouter-runner
github_2023
python
48
OpenRouterTeam
sambarnes
@@ -0,0 +1,8 @@ +from modal import Image + + +def add_datadog(image: Image): + return image.pip_install("datadog-api-client==2.21.0")
mind expanding on why this helper would have api & not ddtrace in it? (just mentioning cause i see ddtrace manually on the download.py image)
openrouter-runner
github_2023
python
48
OpenRouterTeam
sambarnes
@@ -0,0 +1,65 @@ +from fastapi import Depends, FastAPI, Request +from fastapi.responses import JSONResponse +from fastapi.security import HTTPAuthorizationCredentials +from pydantic import BaseModel + +from runner.endpoints.completion import completion as completion_endpoint +from runner.shared.common import config +fr...
i like this reorganization :+1:
openrouter-runner
github_2023
python
48
OpenRouterTeam
louisgv
@@ -11,13 +13,16 @@ models_volume, ) +logger = get_logger(__name__)
Does this work at the top-level scope? Does the stub's env injection take effect here?
openrouter-runner
github_2023
python
48
OpenRouterTeam
louisgv
@@ -11,13 +13,16 @@ models_volume, ) +logger = get_logger(__name__) + cache_path = get_model_path("__cache__") downloader_image = ( - Image.debian_slim() + BASE_IMAGE # Use the barebones hf-transfer package for maximum download speeds. No progress bar, but expect 700MB/s. .pip_install("huggi...
This would install on top of the `BASE_IMAGE` instance, which is not what we want, I think. We should make `BASE_IMAGE` into a function that returns a new `Image()` instance instead.
openrouter-runner
github_2023
python
48
OpenRouterTeam
louisgv
@@ -0,0 +1,107 @@ +import json +import logging +import os +import sys + +from datadog_api_client import ApiClient, Configuration +from datadog_api_client.v2.api.logs_api import LogsApi +from datadog_api_client.v2.model.content_encoding import ContentEncoding +from datadog_api_client.v2.model.http_log import HTTPLog +fr...
Is this the case even for prod instance?
openrouter-runner
github_2023
python
48
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,80 @@ +import modal +from fastapi import Depends, FastAPI, Request +from fastapi.responses import JSONResponse +from fastapi.security import HTTPAuthorizationCredentials +from pydantic import BaseModel + +from runner.endpoints.completion import completion as completion_endpoint +from runner.shared.common imp...
when is this called?
openrouter-runner
github_2023
python
47
OpenRouterTeam
montasaurus
@@ -116,7 +117,9 @@ async def generate(self, payload: CompletionPayload, params): print(f"Request completed: {throughput:.4f} tokens/s") except Exception as err: e = create_error_text(err) - print(e) + logging.exception( + "Failed generation", extr...
I hit the same thing, `logging.basicConfig(level=logging.INFO)` fixed it
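A minimal illustration of why `basicConfig` helps here: the root logger defaults to level WARNING with no handler configured, so INFO-level records are dropped until `basicConfig` installs a stream handler and sets the level.

```python
import io
import logging

buf = io.StringIO()

# force=True (Python 3.8+) resets any prior configuration so the new
# handler and level take effect even if logging was already touched.
logging.basicConfig(level=logging.INFO, stream=buf, force=True)
logging.info("Failed generation example")

output = buf.getvalue()
```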
openrouter-runner
github_2023
python
46
OpenRouterTeam
montasaurus
@@ -70,7 +72,13 @@ async def get_job( @stub.function( - secret=Secret.from_name("ext-api-key"), + image=Image.debian_slim(python_version="3.10").pip_install( + "sentry-sdk[fastapi]==1.39.1" + ),
Got exactly that in a PR I'm working on!
openrouter-runner
github_2023
python
43
OpenRouterTeam
louisgv
@@ -10,7 +10,9 @@ def get_model_path(model_name: str): return models_path / model_name.lower() -def does_model_exist(model_path: Path): +# gets passed a str in some circumstances despite being typed as Path +def does_model_exist(model_path: Path | str): + model_path = Path(model_path)
Might need to revert this (and the model_path passing down to our container) to just string. When we instantiate a GPU container, the props passed down will be serialized by the Modal abstraction, using pickle, and afaik it will not be deserialized on our code properly. 3 options: 1. Ping Modal about better type...
openrouter-runner
github_2023
typescript
41
OpenRouterTeam
louisgv
@@ -0,0 +1,35 @@ +import { + awaitJob, + enqueueAddModel, + getApiUrl, + getAuthHeaders, + runIfCalledAsScript +} from 'scripts/shared'; + +async function main(model?: string) { + const modelName = model || process.env.MODEL!; + console.log(`Test adding model ${modelName}`); + const body = await enqueueAddModel...
nit: Can we think of an alternative name that doesn't repeat the `await`?
openrouter-runner
github_2023
python
41
OpenRouterTeam
louisgv
@@ -24,7 +25,7 @@ def _make_container( class _VllmContainer(VllmEngine): def __init__( self, - model_path: str, + model_path: Path,
Interesting - so it's possible to use `Path` as a param such that only 1 instance will be spawned to serve a certain path until the max concurrent input is reached?
openrouter-runner
github_2023
typescript
41
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,34 @@ +import { + completion, + enqueueAddModel, + pollForJobCompletion, + runIfCalledAsScript +} from 'scripts/shared'; + +async function main() { + const modelName = 'microsoft/Orca-2-13b';
can we do phi-2? (if this test is still helpful for small models) it has a better license
openrouter-runner
github_2023
python
41
OpenRouterTeam
alexanderatallah
@@ -24,12 +37,17 @@ class Params(BaseModel): skip_special_tokens: bool = True -class Payload(BaseModel): +class RunnerConfiguration(BaseModel): + container: ContainerType + + +class CompletionPayload(BaseModel): id: str prompt: str stream: bool = False params: Params model: str + ...
woah, new union notation
openrouter-runner
github_2023
typescript
41
OpenRouterTeam
louisgv
@@ -6,9 +6,21 @@ const envFile = `.env.dev`; config({ path: envFile }); -const url = process.env.API_URL; -const key = process.env.RUNNER_API_KEY; +const url = process.env.API_URL!; +const key = process.env.RUNNER_API_KEY!;
We should assert these instead of using `!` since new devs might not have them. We lean toward strict typing with assertion as much as possible.
openrouter-runner
github_2023
typescript
41
OpenRouterTeam
louisgv
@@ -0,0 +1,35 @@ +import { + enqueueAddModel, + getApiUrl, + getAuthHeaders, + pollForJobCompletion, + runIfCalledAsScript +} from 'scripts/shared'; + +async function main(model?: string) { + const modelName = model || process.env.MODEL!; + console.log(`Test adding model ${modelName}`); + const body = await enq...
nit: we should prob reuse the variable from shared
openrouter-runner
github_2023
python
33
OpenRouterTeam
louisgv
@@ -22,6 +22,13 @@ def download_models(all_models: List[str]): cache_path = get_model_path("__cache__") for model_name in all_models: + parts = model_name.split(":") + + model_revision = None
This only downloads the model - can we also add it somehow into the protocol's input params?
openrouter-runner
github_2023
python
36
OpenRouterTeam
alexanderatallah
@@ -28,27 +27,39 @@ def __init__( model_path: str, max_model_len: Optional[int] = None, ): - if num_gpus > 1: - # Patch issue from https://github.com/vllm-project/vllm/issues/1116 - import ray - - ray.shutdown() - ...
we can't init sentry higher up, for other Modal processes too?
openrouter-runner
github_2023
others
32
OpenRouterTeam
louisgv
@@ -48,3 +48,11 @@ ignore = [ [tool.ruff.per-file-ignores] "gcp/shap-e/app.py" = ["E402"] # module level import not at top of file -- expected for sys.path.insert() usage +[tool.ruff.lint.isort] +known-third-party = ["modal"] +known-first-party = [ + "punctuator", + "runner", + "shared", + "tuner", +]
I wonder if there's a better way to structure our project? Can ruff target sub-projects, or does isort support them?
openrouter-runner
github_2023
python
31
OpenRouterTeam
louisgv
@@ -0,0 +1,29 @@ +import os +from typing import Annotated + +from fastapi import Depends, FastAPI, testclient +from fastapi.security import HTTPAuthorizationCredentials + +from shared.config import Config + + +def test_auth(): + """The API auth dependency should prevent unauthorized requests."""
@sambarnes I think the CI PR targeted the wrong remote (or wrong github org)
openrouter-runner
github_2023
python
31
OpenRouterTeam
louisgv
@@ -75,16 +72,12 @@ def transform(self, input_str: str): import threading import time - output = [ - None - ] # Use a list to hold the output to bypass Python's scoping limitations
Ooh - I also forgot about the line-length. Kinda want to keep it at 80 since I split my screen a lot :d
openrouter-runner
github_2023
others
31
OpenRouterTeam
louisgv
@@ -16,11 +16,13 @@ accelerate = "^0.23.0" datasets = "^2.14.5" scipy = "^1.11.3" wandb = "^0.15.12" +httpx = "^0.26.0"
seems like we don't need this anymore? Based on your comment, I think this was needed for the e2e test. For e2e, I wonder if we can do a `modal deploy runner` to an `e2e` workspace, tail the modal URL, then call it with our JS scripts? Also we should prob add a js script to test auth as well :-?
openrouter-runner
github_2023
typescript
31
OpenRouterTeam
louisgv
@@ -15,6 +15,17 @@ async function main(model?: string) { max_tokens: 1024, stop: ['</s>'], }); + + // Unauthorized requests should fail with a 401 + let gotExpectedError = false; + try { + await completion(prompt, {model, apiKey: "BADKEY"}); + } catch (e: any) { + gotExpectedError = e.message =...
nit: we can just re-throw the error in the try if it's not a 401, no need to do it separately IMO. Also, we should either use `startsWith` or strict equality.
openrouter-runner
github_2023
others
28
OpenRouterTeam
louisgv
@@ -1,46 +1,260 @@ -# PREREQ: +# OpenRouter Runner - Setup Guide -1. Create a modal secret group - `HUGGINGFACE_TOKEN = <your huggingface token>` - with name "huggingface" -2. Create a modal secret group - `RUNNER_API_KEY = <generate a random key>` - with name "ext-api-key" -3. Make sure your current d...
nit: the official casing is `vLLM`
openrouter-runner
github_2023
python
29
OpenRouterTeam
louisgv
@@ -89,7 +90,7 @@ async def generate(self, payload: Payload, params): async for request_output in results_generator: final_output = request_output - output = request_output.outputs[0].text
Good catch - will run the e2e test without stream on this path, but this diff makes sense to me
openrouter-runner
github_2023
python
17
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,54 @@ +from runner.engines.vllm import VllmEngine, VllmParams + +# Each container comprises of: +# 1. An image +# 2. A stub class wrapping an engine + +from modal import gpu, Image + +from shared.volumes import models_path +from runner.shared.common import stub + +_gpu = gpu.A100(count=1, memory=80) + +_vllm...
do we need this? seems like vllm merged a PR on Sept 27 to remove the need: https://github.com/vllm-project/vllm/issues/1189#issuecomment-1738255066
openrouter-runner
github_2023
python
12
OpenRouterTeam
alexanderatallah
@@ -8,6 +8,7 @@ def _to_lower_list(l: List[str]): vllm_13b_model_ids = [ + "mistralai/Mistral-7B-Instruct-v0.1",
do we need the same hardware for this model? (or if we stick with 2 gpus, will they scale efficiently with a 7b?)
openrouter-runner
github_2023
others
8
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,32 @@ +# Add new model + +1. Copy one of the model file in `aux/models` and rename it to your model name. +2. Change the `model_id` to the HF model ID. +3. Adapt the Model class name accordingly. +4. Import the model into [](./models/__init__.py) and add it to the get_model function. + +# Configuration + +Se...
you mean `aux/main.py`?
openrouter-runner
github_2023
python
8
OpenRouterTeam
alexanderatallah
@@ -1,4 +1,18 @@ -from modal.config import Config +from modal.config import Config as ModalConfig +from pydantic import BaseModel -modal_config = Config() +modal_config = ModalConfig() keep_warm = None if modal_config.get(key="environment") == "dev" else 1
shouldn't we always cold-start now?
openrouter-runner
github_2023
python
8
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,14 @@ +from modal import Stub +from vllm_runner.shared.config import Config + +config = Config( + name="aux", + api_key_id="AUX_API_KEY", + download_dir="/model", + num_gpu=1, + max_batched_tokens=4096, + idle_timeout=5 * 60, # 5 minutes
Small comment: this is slightly confusing because `Config` is coming out of `vllm_runner`, which makes me think it's vllm-specific, but it's also being used as the `container_idle_timeout` on Modal containers. Should we rename vllm_runner to runner, perhaps?
openrouter-runner
github_2023
python
8
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,147 @@ +from vllm_runner.shared.protocol import ( + Payload, + CompletionResponse, + ErrorPayload, + ErrorResponse, +) +from vllm_runner.shared.sampling_params import SamplingParams + +from typing import List +from modal import Secret, method, gpu, Image + +from ..shared.common import stub, confi...
does it work to call helper functions from inside each of these class methods?
openrouter-runner
github_2023
python
8
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,147 @@ +from vllm_runner.shared.protocol import ( + Payload, + CompletionResponse, + ErrorPayload, + ErrorResponse, +) +from vllm_runner.shared.sampling_params import SamplingParams + +from typing import List +from modal import Secret, method, gpu, Image + +from ..shared.common import stub, confi...
why not use mk_gpu_image?
openrouter-runner
github_2023
python
8
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,201 @@ +# Copied from: https://github.com/vllm-project/vllm/blob/bc0644574ca12d754a031596bdcfe8e1f0e6ab39/vllm/sampling_params.py
can we put something like `# BEGIN OPENROUTER MODIFICATION` / `# END OPENROUTER MODIFICATION` in the code below so we know why we forked it?
openrouter-runner
github_2023
python
8
OpenRouterTeam
alexanderatallah
@@ -1,5 +1,7 @@ # Copied from: https://github.com/vllm-project/vllm/blob/bc0644574ca12d754a031596bdcfe8e1f0e6ab39/vllm/sampling_params.py +# The main reason we copy it here is to keep our CPU endpoint container lean - since all we need from vllm is this params validator and not the whole package
thanks 👍
openrouter-runner
github_2023
python
10
OpenRouterTeam
louisgv
@@ -124,7 +121,7 @@ def download_model_to_folder(): @stub.cls( - gpu=gpu.A100(count=NUM_GPU, memory=20), + gpu=gpu.A10G(count=NUM_GPU), secret=Secret.from_name("huggingface"), allow_concurrent_inputs=12, container_idle_timeout=600,
Maybe we can reduce the idle timeout as well?
openrouter-runner
github_2023
python
5
OpenRouterTeam
alexanderatallah
@@ -88,63 +112,114 @@ def download_model_to_folder(): ) ) -stub = Stub("mythalion", image=image) +stub = Stub(NAME, image=image) @stub.cls( - gpu=gpu.A100(), + gpu=gpu.A100(count=NUM_GPU, memory=20), secret=Secret.from_name("huggingface"), allow_concurrent_inputs=12, container_idle_tim...
should we make keep_warm 0 when in the dev environment?
openrouter-runner
github_2023
typescript
5
OpenRouterTeam
alexanderatallah
@@ -2,7 +2,10 @@ import { once } from 'events'; import fs from 'fs/promises'; import { createInterface } from 'readline'; -export async function getWords(n: number, fileNo = 0): Promise<string> { +export async function getWords(
nit: from the function naming, it was unclear if this fn generates random words or if it gets words out of a file. (had to read it to see that fileNo is actually important and it's doing the latter)
openrouter-runner
github_2023
python
5
OpenRouterTeam
alexanderatallah
@@ -158,37 +233,46 @@ def completion( payload: Payload, token: HTTPAuthorizationCredentials = Depends(auth_scheme), ): - if token.credentials != os.environ["MYTHALION_API_KEY"]: + if token.credentials != os.environ[API_KEY_ID]: raise HTTPException( status_code=status.HTTP_401_UNAU...
why do these two things remotely? and if each can take more than 50ms, is it faster to parallelize?
openrouter-runner
github_2023
python
5
OpenRouterTeam
alexanderatallah
@@ -88,63 +112,114 @@ def download_model_to_folder(): ) ) -stub = Stub("mythalion", image=image) +stub = Stub(NAME, image=image) @stub.cls( - gpu=gpu.A100(), + gpu=gpu.A100(count=NUM_GPU, memory=20), secret=Secret.from_name("huggingface"), allow_concurrent_inputs=12, container_idle_tim...
nit: would be nice to add a comment here to explain why we are skipping this
openrouter-runner
github_2023
python
2
OpenRouterTeam
alexanderatallah
@@ -108,6 +108,7 @@ def __enter__(self): tensor_parallel_size=1, # using 95% of GPU memory by default gpu_memory_utilization=0.95, + log_requests=False,
can we undeploy this? see https://modal.com/logs/ap-oeR9RaoFC1krDiG94H2iUq?functionId=fu-W8IPkamMB6habWTtkQRiKJ&taskId=ta-3GWMAUS1dPKhkbTYAZVnh0&inputId=in-pw3u7DxpqrC3ynnn9dstHW
openrouter-runner
github_2023
python
1
OpenRouterTeam
alexanderatallah
@@ -48,13 +48,14 @@ def download_model_to_folder(): ) ) -stub = Stub("vllm-serving", image=image) +stub = Stub("mythomax", image=image) + @stub.cls( - gpu=gpu.A100(), - secret=Secret.from_name("huggingface"), - allow_concurrent_inputs=12, - container_idle_timeout=300 + gpu=gpu.A100(), + secret=S...
nit: we estimated around 20 concurrent inputs earlier. what are the cons of increasing this number?
openrouter-runner
github_2023
python
1
OpenRouterTeam
alexanderatallah
@@ -48,13 +48,14 @@ def download_model_to_folder(): ) ) -stub = Stub("vllm-serving", image=image) +stub = Stub("mythomax", image=image) + @stub.cls( - gpu=gpu.A100(), - secret=Secret.from_name("huggingface"), - allow_concurrent_inputs=12, - container_idle_timeout=300 + gpu=gpu.A100(), + secret=S...
what does this mean? is it just for the cpu container? how fast is it to cold-start? Not sure we should shut it down after 5 min if it takes 30 sec to cold-start
openrouter-runner
github_2023
python
1
OpenRouterTeam
alexanderatallah
@@ -112,10 +109,8 @@ async def generate(self, question: str): print(f"Request completed: {throughput:.4f} tokens/s") # print(request_output.outputs[0].text) -@stub.function( - timeout=60 * 10, - allow_concurrent_inputs=12 -) + +@stub.function(timeout=60 * 10, allow_concurrent_inputs=12) @web_end...
can we try using https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html#openai-compatible-server ?
openrouter-runner
github_2023
python
1
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,168 @@ +import os + +from typing import List, Optional, Union + +from fastapi import Depends, HTTPException, status, Request + +from modal import Image, Secret, Stub, method, gpu, web_endpoint +from pydantic import BaseModel +from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials + + +MODEL_DI...
is this where we should specify default settings, or should we do that in the adapter? seems like a good thing for the adapter to own. (for that, Pygmalion has suggested defaults here: https://pygmalionai.github.io/blog/posts/introducing_pygmalion_2/#sillytavern)
openrouter-runner
github_2023
python
1
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,168 @@ +import os + +from typing import List, Optional, Union + +from fastapi import Depends, HTTPException, status, Request + +from modal import Image, Secret, Stub, method, gpu, web_endpoint +from pydantic import BaseModel +from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials + + +MODEL_DI...
template doesn't seem defined (but prob should be managed by the adapter anyway)
openrouter-runner
github_2023
python
1
OpenRouterTeam
alexanderatallah
@@ -0,0 +1,168 @@ +import os + +from typing import List, Optional, Union + +from fastapi import Depends, HTTPException, status, Request + +from modal import Image, Secret, Stub, method, gpu, web_endpoint +from pydantic import BaseModel +from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials + + +MODEL_DI...
skipping this for review, assuming WIP
dotnet-cloud-native-build-2023
github_2023
csharp
5
bradygaster
davidfowl
@@ -30,74 +30,80 @@ protected override async Task ExecuteAsync(CancellationToken stoppingToken) var orders = await _ordersClient.GetOrders(); activity?.AddTag("order-count", orders?.Count ?? 0); - var orderTasks = new List<Task>(); fore...
These lowercase methods are java
axum-htmx
github_2023
others
15
robertwayne
robertwayne
@@ -0,0 +1,123 @@ +use axum_core::response::{IntoResponseParts, ResponseParts}; +use http::header::VARY; + +use crate::{extractors, headers, HxError}; + +/// The `Vary: HX-Request` header. +/// +/// You may want to add this header to the response if your handler responds differently based on +/// the `HX-Request` reque...
We can use `http::HeaderValue::from_static(HX_TARGET)` instead. This applies to the other instances of `try_into()?` as well. Originally I was going to say we could make this `type Error = Infallible`, but after thinking some more, `.insert()` can actually panic, so this should probably be addressed in the existing ...
claude-to-chatgpt
github_2023
python
9
jtsang4
jtsang4
@@ -74,15 +74,9 @@ def claude_to_chatgpt_response_stream(self, claude_response, prev_decoded_respon "object": "chat.completion.chunk", "created": int(time.time()), "model": "gpt-3.5-turbo-0301", - "usage": { - "prompt_tokens": 0, - "complet...
The "role" field appeared in my previouse tests, please check it again.
online-edit-web
github_2023
typescript
123
xun082
xun082
@@ -55,6 +61,46 @@ export default function CodeEditor({ editorId }: CodeEditorProps) { border: isOver ? '1px #3b82f6 solid' : undefined, }; + _editor && + _editor.addCommand(monaco.KeyMod.CtrlCmd | monaco.KeyCode.KeyS, () => { + const prettierValue = getPrettierConfig(fileData); + formatWithPret...
Don't use `object` as the TS type here; it easily triggers errors. Use a `Record` type instead, such as `Record<string, any>`, or `Record<string, string | boolean | number>`.
online-edit-web
github_2023
typescript
119
xun082
xun082
@@ -210,14 +211,23 @@ const SearchPage: FC = () => { <IncludeAndExclude /> </div> <div className="mx-7 text-[12px] select-none text-gray-500"> - {searchInpVal ? ( - searchResult && searchResult.length ? ( - <div>{`${resultCount.fileCount} 文件中 ${resultCount.matchCount} 个...
Return null here.
create-neat
github_2023
typescript
309
xun082
uaenaTzx
@@ -146,9 +146,9 @@ export default async function createAppTest(projectName: string, options: Record if (gitCheck(rootDirectory)) exec("git init", { cwd: rootDirectory }); // 安装传入的依赖 - if (process.env.NODE_ENV === "PROD") { - await dependenciesInstall(rootDirectory, packageManager); - } + // if (process.e...
Why delete this block? Other plugins might bring some dependencies into the pka.
create-neat
github_2023
others
309
xun082
uaenaTzx
@@ -0,0 +1,47 @@ +import fs from 'fs'; +import path from 'path'; +import spawn from 'cross-spawn'; +import {fileURLToPath} from 'url';
This doesn't look formatted.
create-neat
github_2023
others
309
xun082
uaenaTzx
@@ -0,0 +1,47 @@ +import fs from 'fs'; +import path from 'path'; +import spawn from 'cross-spawn'; +import {fileURLToPath} from 'url'; + +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); + +//插件目录 +const pluginsDir = path.resolve(__dirname, '../packages/@plugin'); + +funct...
The log output could use some color (chalk).
create-neat
github_2023
others
309
xun082
uaenaTzx
@@ -0,0 +1,47 @@ +import fs from 'fs'; +import path from 'path'; +import spawn from 'cross-spawn'; +import {fileURLToPath} from 'url'; + +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); + +//插件目录 +const pluginsDir = path.resolve(__dirname, '../packages/@plugin'); + +funct...
None of this looks formatted.
create-neat
github_2023
others
309
xun082
uaenaTzx
@@ -0,0 +1,47 @@ +import fs from 'fs'; +import path from 'path'; +import spawn from 'cross-spawn'; +import {fileURLToPath} from 'url'; + +const __filename = fileURLToPath(import.meta.url); +const __dirname = path.dirname(__filename); + +//插件目录 +const pluginsDir = path.resolve(__dirname, '../packages/@plugin'); + +funct...
Make the output messages as complete as possible; the `code` can be output too.
create-neat
github_2023
others
309
xun082
uaenaTzx
@@ -0,0 +1,47 @@ +import fs from 'fs';
Some things are still missing. Suggested changes: 1. Make the publishing process interactive (see `cn create` for reference). 2. The interactive flow could include: whether to publish in batch, checking whether the current package has updates (if not, no need to publish), whether to publish a beta release, and so on. See what design you come up with.
create-neat
github_2023
others
308
xun082
uaenaTzx
@@ -72,7 +72,7 @@ }, "engines": { "node": ">=18", - "pnpm": "9.4.0" + "pnpm": "^9.0.0"
Don't change dependency versions lightly; is there some context behind this?
create-neat
github_2023
others
308
xun082
uaenaTzx
@@ -1,5 +1,21 @@ // const path = require("path"); module.exports = { - rules: [], + rules: [
There's actually still a lot of room for improvement here. As you can see, the historical config contains build-tool × framework combinations; let's see whether we can solve this through the protocol mechanism. If a special case turns out to be hard to handle, come discuss it with me.
create-neat
github_2023
javascript
308
xun082
uaenaTzx
@@ -2,11 +2,18 @@ module.exports = { rules: [ { test: /\.(ts|tsx|js|jsx)$/, - exclude: /node_modules/, - use: { - loader: "swc-loader", - exclude: /node_modules/, - }, + exclude: [/node_modules/, /public/, /(.|_)min\.js$/],
Same as the babel case.
create-neat
github_2023
others
308
xun082
uaenaTzx
@@ -1,7 +1,7 @@ -const PluginConfig = require("./config/index.cjs"); +// const PluginConfig = require("./config/index.cjs"); -const pluginBabel = (buildTool, template) => {
If it's not needed, just delete it.
create-neat
github_2023
typescript
308
xun082
uaenaTzx
@@ -163,11 +163,12 @@ class Generator { } // 根据环境变量加载 plugin/template + // 返回增加可选的buildTool,编译器🥱插件(babel/swc)需要
Fix this comment.
create-neat
github_2023
typescript
310
xun082
uaenaTzx
@@ -3,21 +3,23 @@ import generator from "@babel/generator"; import fs from "fs-extra"; import chalk from "chalk"; import { parse } from "@babel/parser"; - -import { relativePathToRoot } from "../utils/constants"; -import { createFiles } from "../utils/createFiles"; -import { createConfigByParseAst } from "../utils/a...
This should use the .ts extension.
create-neat
github_2023
typescript
310
xun082
uaenaTzx
@@ -231,10 +234,9 @@ class Generator { // 单独处理一个框架相关依赖,主要是将框架相关的依赖包插入到pkg内,以及将需要的构建工具配置合并到构建工具模板中 async templateGenerate() { - const templateGenerator = await this.loadBase( - `packages/core/template/template-${this.templateName}/generator/index.cjs`, - "", - ); + const templatePath = `packag...
Remove the log.
create-neat
github_2023
typescript
307
xun082
uaenaTzx
@@ -18,6 +17,8 @@ import dependenciesInstall from "./dependenciesInstall"; import { createReadmeString } from "./createFiles"; import { buildToolConfigDevDependencies, buildToolScripts } from "./constants"; +const { resolveApp } = require("@laconic/utils");
This file doesn't actually use this import.
create-neat
github_2023
typescript
306
xun082
uaenaTzx
@@ -1,28 +1,35 @@ +/** 通用协议常量,定义了跨不同工具和框架交互的协议类型。*/ const globalProtocol = { - ENTRY_FILE: "ENTRY_FILE", // 入口文件配置,如:入口文件引入全局 less、scss + /** 入口文件配置,例如引入全局的less、scss文件。 */ + ENTRY_FILE: "ENTRY_FILE", + /** 更改文件导出的内容。 */ UPDATE_EXPORT_CONTENT_PROTOCOL: "UPDATE_EXPORT_CONTENT_PROTOCOL", + /** 向文件插入import语句。 */ ...
"Extended the common protocol and added a style-handling protocol" — this comment isn't necessary.
create-neat
github_2023
typescript
306
xun082
uaenaTzx
@@ -1,28 +1,35 @@ +/** 通用协议常量,定义了跨不同工具和框架交互的协议类型。*/ const globalProtocol = { - ENTRY_FILE: "ENTRY_FILE", // 入口文件配置,如:入口文件引入全局 less、scss + /** 入口文件配置,例如引入全局的less、scss文件。 */ + ENTRY_FILE: "ENTRY_FILE", + /** 更改文件导出的内容。 */ UPDATE_EXPORT_CONTENT_PROTOCOL: "UPDATE_EXPORT_CONTENT_PROTOCOL", + /** 向文件插入import语句。 */ ...
"Depending on the framework, different bundlers need different plugins; some are needed by all, some are framework-specific" — this explanation is a bit odd. Adding config is not necessarily adding plugin config; the description is too narrow.
create-neat
github_2023
typescript
306
xun082
uaenaTzx
@@ -1,28 +1,35 @@ +/** 通用协议常量,定义了跨不同工具和框架交互的协议类型。*/ const globalProtocol = { - ENTRY_FILE: "ENTRY_FILE", // 入口文件配置,如:入口文件引入全局 less、scss + /** 入口文件配置,例如引入全局的less、scss文件。 */ + ENTRY_FILE: "ENTRY_FILE", + /** 更改文件导出的内容。 */ UPDATE_EXPORT_CONTENT_PROTOCOL: "UPDATE_EXPORT_CONTENT_PROTOCOL", + /** 向文件插入import语句。 */ ...
What a slot is also needs to be explained clearly, for example: injects content into a designated slot in the target file.
create-neat
github_2023
typescript
306
xun082
uaenaTzx
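The slot mechanism the reviewer wants documented — "inject content into a designated slot in the target file" — can be sketched in a few lines. The `<!--slot: name -->` marker format is taken from the template shown elsewhere in this PR; the helper name is an assumption:

```javascript
// Replace a "<!--slot: name -->" marker in a source file with
// generated content (e.g. store setup code from a plugin).
function injectSlot(source, slotName, content) {
  return source.replace(`<!--slot: ${slotName} -->`, content);
}
```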
@@ -1,28 +1,35 @@ +/** 通用协议常量,定义了跨不同工具和框架交互的协议类型。*/ const globalProtocol = { - ENTRY_FILE: "ENTRY_FILE", // 入口文件配置,如:入口文件引入全局 less、scss + /** 入口文件配置,例如引入全局的less、scss文件。 */
Leave a space between Latin letters and Chinese characters, e.g. write `xx 这样`.
create-neat
github_2023
typescript
298
xun082
uaenaTzx
@@ -66,6 +66,11 @@ export const buildToolScripts = { }, }; +export const templateFileExtension = {
Make the comment explicit.
create-neat
github_2023
typescript
298
xun082
uaenaTzx
@@ -174,13 +197,18 @@ class FileTree { * @param url 添加文件的原始的真实路径 * @param parentDir 父文件夹路径 */ - addToTreeByTemplateDirPathAndEjs(url: string, parentDir: string, options: any) { + addToTreeByTemplateDirPathAndEjs(url: string, parentDir: string, options: any, isTs: any) {
`any` is wrong here, and this code can read the value from `process`, so the parameter isn't really needed.
create-neat
github_2023
typescript
298
xun082
uaenaTzx
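A sketch of the reviewer's point: instead of threading an `isTs` flag through every call, derive it once from process-level state. The environment variable name below is hypothetical — the actual source of the value in this project is not shown in the hunk:

```javascript
// Derive the TS/JS mode from process state instead of a parameter.
function isTsProject() {
  return process.env.PROJECT_LANGUAGE === "ts";
}
```

Call sites then drop the extra argument and call `isTsProject()` where needed.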
@@ -107,13 +108,25 @@ class FileTree { return file; } + private transformFileExtension(fileContent: string, fileName: string, fileExtension: string) {
Please make the comments as complete as possible.
create-neat
github_2023
others
302
xun082
uaenaTzx
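A sketch of what `transformFileExtension` appears to do, based only on its name and the surrounding context (behavior assumed, not confirmed by the hunk): map template file extensions to their TypeScript counterparts when the project is TS.

```javascript
// Map .jsx → .tsx and .js → .ts when generating a TS project;
// leave JS projects untouched.
function transformExtension(fileName, isTs) {
  if (!isTs) return fileName;
  return fileName.replace(/\.jsx$/, ".tsx").replace(/\.js$/, ".ts");
}
```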
@@ -1,16 +1,104 @@ -module.exports = (generatorAPI) => { +const path = require("path"); +const fs = require("fs"); + +function fileRender(files) { + try { + const outputDir = path.join(__dirname, "template"); + Object.entries(files).forEach(([filePath, content]) => { + const fullPath = path.join(outputDir, ...
Doesn't this go through template rendering?
create-neat
github_2023
others
302
xun082
uaenaTzx
@@ -1,16 +1,104 @@ -module.exports = (generatorAPI) => { +const path = require("path"); +const fs = require("fs"); + +function fileRender(files) { + try { + const outputDir = path.join(__dirname, "template"); + Object.entries(files).forEach(([filePath, content]) => { + const fullPath = path.join(outputDir, ...
The package manager isn't fixed — don't hard-code it.
create-neat
github_2023
others
302
xun082
uaenaTzx
@@ -1,16 +1,104 @@ -module.exports = (generatorAPI) => { +const path = require("path"); +const fs = require("fs"); + +function fileRender(files) { + try { + const outputDir = path.join(__dirname, "template"); + Object.entries(files).forEach(([filePath, content]) => { + const fullPath = path.join(outputDir, ...
+1
create-neat
github_2023
typescript
302
xun082
uaenaTzx
@@ -8,6 +8,8 @@ const globalProtocol = { const pluginToTemplateProtocol = { ...globalProtocol, PROCESS_STYLE_PLUGIN: "PROCESS_STYLE_PLUGIN", + INSERT_IMPORT: "INSERT_IMPORT", + RENDER_FILE: "RENDER_FILE",
I don't see a corresponding handler; that said, this protocol isn't very meaningful anyway.
create-neat
github_2023
others
299
xun082
uaenaTzx
@@ -35,11 +35,22 @@ module.exports = (generatorAPI) => { [pluginToTemplateProtocol.UPDATE_EXPORT_CONTENT_PROTOCOL]: { params: { url: 'src/App', - exportContent: 'Observer', + exportContent: 'observer',
What's the reason for this change?
create-neat
github_2023
others
299
xun082
uaenaTzx
@@ -35,11 +35,22 @@ module.exports = (generatorAPI) => { [pluginToTemplateProtocol.UPDATE_EXPORT_CONTENT_PROTOCOL]: { params: { url: 'src/App', - exportContent: 'Observer', + exportContent: 'observer', astOptions: { parserOptions: { sourceType: "module", plugins: [...
Content correction.
create-neat
github_2023
javascript
299
xun082
uaenaTzx
@@ -2,7 +2,9 @@ import React from "react"; import "./index.css"; function App() { + // <!--slot: store-slot -->
Don't add the `//` — it's prone to ambiguity.
create-neat
github_2023
typescript
299
xun082
uaenaTzx
@@ -0,0 +1,24 @@ +import prettier from "prettier"; +import prettierPluginVue from "prettier-plugin-vue";
Is this only for Vue? Can't it be made generic?
create-neat
github_2023
others
295
xun082
uaenaTzx
@@ -10,9 +10,31 @@ module.exports = (generatorAPI) => { }); generatorAPI.protocolGenerate({ + [pluginToTemplateProtocol.INSERT_IMPORT_PROTOCOL]: { + params: { + imports: [ + { + dir: "src/App", + module: [
Should be `modules`.
create-neat
github_2023
others
295
xun082
uaenaTzx
@@ -10,9 +10,31 @@ module.exports = (generatorAPI) => { }); generatorAPI.protocolGenerate({ + [pluginToTemplateProtocol.INSERT_IMPORT_PROTOCOL]: { + params: { + imports: [ + { + dir: "src/App", + module: [ + { + name: "mobx", + ...
This parameter shape does have a bit of a learning curve.
create-neat
github_2023
typescript
294
xun082
uaenaTzx
@@ -0,0 +1,35 @@ +import path from "path"; + +import { FileData } from "../models/FileTree"; + +/** + * 根据给定的文件路径,从嵌套的文件结构中检索目标文件的数据。 + *
JSDoc doesn't need this line.
create-neat
github_2023
typescript
294
xun082
uaenaTzx
@@ -52,3 +52,14 @@ export const createCallExpression = ( */ export const createObjectProperty = (name: string, value: Parameters<typeof objectProperty>[1]) => objectProperty(identifier(name), value); + +/** + * 封装遍历ast中ExportDefaultDeclaration的通用函数 + * @param {object} path + * @param {object} t + * @param {string...
Try to avoid `any`.