| repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
jowilf/starlette-admin
|
sqlalchemy
| 552
|
Bug: UUID pk gives JSON serialization error when excluded from the list
|
**Describe the bug**
When I have a model with a pk (id) field of UUID type and include it in the list, it works fine. But if I exclude it from the list while leaving it among the fields, it gives an error:
```
File "/Users/alg/p/template-fastapi/.venv/lib/python3.11/site-packages/starlette/responses.py", line 187, in render
return json.dumps(
^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.9/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.9/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.9/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.9/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type UUID is not JSON serializable
```
**To Reproduce**
- Create a model with the id field of type UUID.
- List it among fields in the model.
- Exclude it from the list.
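A minimal sketch of the setup being described (a rough reconstruction, not the reporter's actual code; model and view names are illustrative, and it assumes the usual `ModelView` class attributes `fields` / `exclude_fields_from_list`):
```python
import uuid
from sqlalchemy import Column
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base
from starlette_admin.contrib.sqla import ModelView

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    # UUID primary key, as in the report
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)

class ItemView(ModelView):
    fields = ["id"]                    # pk is still listed among the fields
    exclude_fields_from_list = ["id"]  # excluding it from the list view triggers the error
```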
**Environment (please complete the following information):**
- Starlette-Admin version: 0.14.0
- ORM/ODMs: SQLAlchemy
|
closed
|
2024-06-25T08:29:24Z
|
2024-07-12T19:22:21Z
|
https://github.com/jowilf/starlette-admin/issues/552
|
[
"bug"
] |
alg
| 5
|
zappa/Zappa
|
django
| 514
|
[Migrated] Using Wildcard Subdomains with API Gateway
|
Originally from: https://github.com/Miserlou/Zappa/issues/1355 by [atkallie](https://github.com/atkallie)
## Context
I have developed a Django project that makes use of the [django-hosts ](https://django-hosts.readthedocs.io/en/latest/) library and now I am trying to deploy it. After working through the Zappa tutorials, I became convinced that Zappa was the way to go. However, I later realized that API Gateway does not support wildcard subdomains. While this is not strictly a Zappa issue, I was wondering if there was anyone who had any success in circumventing this issue since [it doesn't look like the API Gateway developers have any intention of enabling this in the near future](https://forums.aws.amazon.com/message.jspa?messageID=798965).
## Expected Behavior
Allow *.example.com to host a Django project and let Django map to a view based on the subdomain.
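On the Django side this kind of routing looks roughly like the following django-hosts sketch (module and urlconf names are illustrative, not the reporter's project):
```python
# hosts.py (illustrative): route any subdomain to its own urlconf via django-hosts
from django_hosts import patterns, host

host_patterns = patterns(
    "",
    host(r"www", "mysite.urls", name="www"),
    host(r"(\w+)", "mysite.subdomain_urls", name="wildcard"),  # *.example.com
)
```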
## Actual Behavior
Wildcard subdomains are not permitted by API Gateway; only subdomains that are explicitly defined are passed.
## Possible Fix
I tried to manually create a CloudFront distribution that allows "*.example.com" as a CNAME and set the raw API gateway URL as the origin. However, due to CloudFront's default caching behavior, the problem then becomes that any subdomain (e.g. api.example.com) returns a 301 redirect to the raw API Gateway URL. Not sure how the hidden CloudFront distribution created via the Custom Domains tab in API Gateway handles this.
## Steps to Reproduce (for possible fix)
* Create a CloudFront distribution and select 'Web' Distribution
* Use the Zappa distribution domain name from API Gateway as the Origin Domain Name (e.g. ajnfioanpsda.execute-api.us-east-1.amazonaws.com)
* For alternate domain names, enter in "*.example.com, example.com"
* Use the Zappa deployment name (e.g. dev) as the Origin Path
* For Object Caching select 'Use Origin Cache Headers'
* Turn on "compress objects automatically"
* Associate the CloudFront distribution with your ACM SSL/TLS certificate
|
closed
|
2021-02-20T09:43:45Z
|
2024-04-13T16:36:52Z
|
https://github.com/zappa/Zappa/issues/514
|
[
"enhancement",
"aws",
"feature-request",
"good-idea",
"no-activity",
"auto-closed"
] |
jneves
| 2
|
aio-libs/aiomysql
|
asyncio
| 11
|
rename Connection.wait_closed()
|
`asyncio.AbstractServer` uses the following idiom:
- `close()` closes connection asynchronously, it's regular function.
- `wait_closed()` is a coroutine that waits for actual closing.
`Connection.wait_closed()` has different behavior: it sends a disconnection signal to the server and after that closes the connection.
I suggest renaming it to `.ensure_closed()`.
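A small sketch of the contrast (the aiomysql part is only shown in comments because it needs a running MySQL server):
```python
import asyncio

async def main() -> None:
    # asyncio.AbstractServer idiom: close() is a regular method, wait_closed() only waits.
    server = await asyncio.start_server(lambda r, w: None, "127.0.0.1", 0)
    server.close()              # schedules the close, returns immediately
    await server.wait_closed()  # coroutine that waits for the actual closing
    # aiomysql.Connection.wait_closed() instead performs the graceful quit + close itself,
    # hence the proposal to call it ensure_closed():
    #     conn = await aiomysql.connect(host=..., user=..., password=..., db=...)
    #     await conn.ensure_closed()

asyncio.run(main())
```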
|
closed
|
2015-04-20T17:19:55Z
|
2015-04-26T18:33:02Z
|
https://github.com/aio-libs/aiomysql/issues/11
|
[] |
asvetlov
| 1
|
horovod/horovod
|
pytorch
| 3,011
|
System env variables are not captured when using Spark as backend.
|
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) TensorFlow
2. Framework version: 2.5.0
3. Horovod version: 0.22.1
4. MPI version: 4.0.2
5. CUDA version: 11.2
6. NCCL version: 2.9.9
7. Python version: 3.8
8. Spark / PySpark version: 3.1.2
9. Ray version:
10. OS and version: Ubuntu20.04
11. GCC version: 9.3
12. CMake version: 3.19
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
Y
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
Y
**Bug report:**
When using the horovod.spark.run API, if the `env` parameter is not set, the following error can be seen:
```
Was unable to run mpirun --version:
/bin/sh: 1: mpirun: not found
```
my environment:
```
(base) allxu@allxu-home:~/github/e2e-train$ which mpirun
/home/allxu/miniconda3/bin/mpirun
```
The whole demo for this error could be found here: https://github.com/wjxiz1992/e2e-train
I've seen a similar issue: https://github.com/horovod/horovod/issues/2002
but that one is not about the Spark case.
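A minimal sketch of the workaround implied above: forward the driver's environment explicitly via the `env` parameter of `horovod.spark.run` (the training function here is illustrative):
```python
import os
import horovod.spark

def train():
    import horovod.tensorflow as hvd
    hvd.init()
    return hvd.rank()

# Forward the driver's env (including the PATH that contains mpirun) to the executors explicitly.
ranks = horovod.spark.run(train, num_proc=2, env=dict(os.environ))
print(ranks)
```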
|
open
|
2021-06-30T07:36:11Z
|
2021-07-09T16:59:22Z
|
https://github.com/horovod/horovod/issues/3011
|
[
"bug"
] |
wjxiz1992
| 1
|
huggingface/transformers
|
pytorch
| 36,623
|
Why are there so many variables named layrnorm in the codebase?
|
Running
`grep -R -n --color=auto "layrnorm" .`
gives these results when run in `src/transformers`:
```
./models/idefics/vision.py:441: self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
./models/idefics/vision.py:468: hidden_states = self.pre_layrnorm(hidden_states)
./models/idefics/vision_tf.py:506: self.pre_layrnorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="pre_layrnorm")
./models/idefics/vision_tf.py:534: hidden_states = self.pre_layrnorm(hidden_states)
./models/idefics/vision_tf.py:564: if getattr(self, "pre_layrnorm", None) is not None:
./models/idefics/vision_tf.py:565: with tf.name_scope(self.pre_layrnorm.name):
./models/idefics/vision_tf.py:566: self.pre_layrnorm.build([None, None, self.embed_dim])
./models/altclip/modeling_altclip.py:1140: self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
./models/altclip/modeling_altclip.py:1168: hidden_states = self.pre_layrnorm(hidden_states)
./models/git/convert_git_to_pytorch.py:88: rename_keys.append((f"{prefix}image_encoder.ln_pre.weight", "git.image_encoder.vision_model.pre_layrnorm.weight"))
./models/git/convert_git_to_pytorch.py:89: rename_keys.append((f"{prefix}image_encoder.ln_pre.bias", "git.image_encoder.vision_model.pre_layrnorm.bias"))
./models/git/modeling_git.py:997: self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
./models/git/modeling_git.py:1025: hidden_states = self.pre_layrnorm(hidden_states)
./models/clipseg/modeling_clipseg.py:849: self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
./models/clipseg/modeling_clipseg.py:874: hidden_states = self.pre_layrnorm(hidden_states)
./models/clipseg/convert_clipseg_original_pytorch_to_hf.py:87: name = name.replace("visual.ln_pre", "vision_model.pre_layrnorm")
./models/chinese_clip/convert_chinese_clip_original_pytorch_to_hf.py:84: copy_linear(hf_model.vision_model.pre_layrnorm, pt_weights, "visual.ln_pre")
./models/chinese_clip/modeling_chinese_clip.py:1097: self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
./models/chinese_clip/modeling_chinese_clip.py:1124: hidden_states = self.pre_layrnorm(hidden_states)
./models/clip/modeling_tf_clip.py:719: self.pre_layernorm = keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="pre_layrnorm")
./models/clip/modeling_clip.py:1073: self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
./models/clip/modeling_clip.py:1101: hidden_states = self.pre_layrnorm(hidden_states)
./models/clip/convert_clip_original_pytorch_to_hf.py:96: copy_linear(hf_model.vision_model.pre_layrnorm, pt_model.visual.ln_pre)
./models/clip/modeling_flax_clip.py:584: self.pre_layrnorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype)
./models/clip/modeling_flax_clip.py:603: hidden_states = self.pre_layrnorm(hidden_states)
./models/kosmos2/modeling_kosmos2.py:748: self.pre_layrnorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps)
./models/kosmos2/modeling_kosmos2.py:770: hidden_states = self.pre_layrnorm(hidden_states)
./models/kosmos2/modeling_kosmos2.py:1440: module.pre_layrnorm.bias.data.zero_()
./models/kosmos2/modeling_kosmos2.py:1441: module.pre_layrnorm.weight.data.fill_(1.0)
./models/kosmos2/convert_kosmos2_original_pytorch_checkpoint_to_pytorch.py:16: "ln_pre": "pre_layrnorm",
```
Why are there so many layernorm variables named layrnorm? Is it a typo or is this intended?
|
closed
|
2025-03-10T01:10:24Z
|
2025-03-10T14:26:12Z
|
https://github.com/huggingface/transformers/issues/36623
|
[] |
jere357
| 1
|
kynan/nbstripout
|
jupyter
| 100
|
documentation: .gitattributes and .git/config - install still required after cloning
|
I thought that committing a `.gitattributes` file would give anyone cloning the repo the nbstripout functionality out of the box, assuming `nbstripout` was installed and a `.gitattributes` file was provided in the repo. But after thorough inspection I realized that the `.gitattributes` file only references names (`nbstripout` / `ipynb`) that need to be defined somewhere.
__`.gitattributes` (in repo) or `.git/info/attributes` (local)__
```
*.ipynb filter=nbstripout
*.ipynb diff=ipynb
```
The git configuration with the actual definitions of the referenced functionality (`nbstripout` / `ipynb`) can live either in the local `.git/config` or system-wide in `/etc/gitconfig`.
__`.git/config` (local), `~/.gitconfig` (global), or `/etc/gitconfig` (system)__
```
[filter "nbstripout"]
clean = \"/opt/conda/bin/python3.6\" \"/opt/conda/lib/python3.6/site-packages/nbstripout\"
smudge = cat
required = true
[diff "ipynb"]
textconv = \"/opt/conda/bin/python3.6\" \"/opt/conda/lib/python3.6/site-packages/nbstripout\" -t
```
Learning this, I realized that someone cloning the repo would also need to run `nbstripout --install` unless they were provided with a system wide git configuration with these definitions.
---
It would be useful to have this understanding communicated in the repo! Thanks for providing that excellent video earlier btw, that was great!
|
closed
|
2019-06-19T12:29:31Z
|
2020-05-09T10:17:17Z
|
https://github.com/kynan/nbstripout/issues/100
|
[
"type:documentation",
"resolution:fixed"
] |
consideRatio
| 4
|
ultralytics/ultralytics
|
python
| 19,518
|
Metrics all 0 after TensorRT INT8 export for mode val, only INT8 ONNX performs well
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I successfully exported my FP32 YOLOv8 OBB (s) model to FP16 and INT8. For FP16 I get nearly the same metric values as FP32, but the INT8 model performs very badly. My calibration set is 3699 images; I tried the training calibration set (18536 images) too, but the metrics all stay at 0. Different export `batch_sizes=1,8,16` didn't help.
Update: The problem must be in the conversion from `ONNX` to `engine` format (see below). There must be a bug in the conversion process, which leads to 0 in all metrics when using the `engine` model.
Exporter Code:
```python
from ultralytics import YOLO
import argparse
def export_model(model, export_args):
model.export(**export_args)
def main():
parser = argparse.ArgumentParser(description='Export YOLOv8 OBB model to TensorRT with user-configurable parameters.')
parser.add_argument('--model_path', type=str, required=True, help='Path to the trained YOLOv8 model (.pt file).')
parser.add_argument('--export_fp16', type=bool, default=False, help='Export to FP16 TensorRT model.')
parser.add_argument('--export_int8', type=bool, default=False, help='Export to INT8 TensorRT model.')
parser.add_argument('--format', type=str, default='engine', help="Format to export to (e.g., 'engine', 'onnx').")
parser.add_argument('--imgsz', type=int, default=640, help='Desired image size for the model input. Can be an integer for square images or a tuple (height, width) for specific dimensions.')
parser.add_argument('--keras', type=bool, default=False, help='Enables export to Keras format for TensorFlow SavedModel, providing compatibility with TensorFlow serving and APIs.')
parser.add_argument('--optimize', type=bool, default=False, help='Applies optimization for mobile devices when exporting to TorchScript, potentially reducing model size and improving performance.')
parser.add_argument('--half', type=bool, default=False, help='Enables FP16 (half-precision) quantization, reducing model size and potentially speeding up inference on supported hardware.')
parser.add_argument('--int8', type=bool, default=False, help='Activates INT8 quantization, further compressing the model and speeding up inference with minimal accuracy loss, primarily for edge devices.')
parser.add_argument('--dynamic', type=bool, default=False, help='Allows dynamic input sizes for ONNX, TensorRT and OpenVINO exports, enhancing flexibility in handling varying image dimensions (enforced).')
parser.add_argument('--simplify', type=bool, default=False, help='Simplifies the model graph for ONNX exports with onnxslim, potentially improving performance and compatibility.')
parser.add_argument('--opset', type=int, default=None, help='Specifies the ONNX opset version for compatibility with different ONNX parsers and runtimes. If not set, uses the latest supported version.')
parser.add_argument('--workspace', type=int, default=None, help='Sets the maximum workspace size in GiB for TensorRT optimizations, balancing memory usage and performance; use None for auto-allocation by TensorRT up to device maximum.')
parser.add_argument('--nms', type=bool, default=False, help='Adds Non-Maximum Suppression (NMS) to the exported model when supported (see Export Formats), improving detection post-processing efficiency.')
parser.add_argument('--batch', type=int, default=1, help="Batch size for export. For INT8 it's recommended using a larger batch like batch=8 (calibrated as batch=16))")
parser.add_argument('--device', type=str, default='0', help="Device to use for export (e.g., '0' for GPU 0).")
parser.add_argument('--data', type=str, default=None, help="Path to the dataset configuration file for INT8 calibration.")
args = parser.parse_args()
# Load the final trained YOLOv8 model
model = YOLO(args.model_path, task='obb')
export_args = {
'format': args.format,
'imgsz': args.imgsz,
'keras': args.keras,
'optimize': args.optimize,
'half': args.half,
'int8': args.int8,
'dynamic': args.dynamic,
'simplify': args.simplify,
'opset': args.opset,
'workspace': args.workspace,
'nms': args.nms,
'batch': args.batch,
'device': args.device,
'data': args.data,
}
if args.export_fp16: # data argument isn't needed for FP16 exports since no calibration is required
print('Exporting to FP16 TensorRT model...')
fp16_args = export_args.copy()
fp16_args['half'] = True
fp16_args['int8'] = False
export_model(model, fp16_args)
print('FP16 export completed.')
if args.export_int8: # NOTE: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#enable_int8_c, for INT8 calibration, the kitti_bev.yaml val split with 3769 images is used.
print('Exporting to INT8 TensorRT model...')
int8_args = export_args.copy()
int8_args['half'] = False
int8_args['int8'] = True
export_model(model, int8_args)
print('INT8 export completed.\nThe calibration .cache which can be reused to speed up export of future model weights using the same data, but this may result in poor calibration when the data is vastly different or if the batch value is changed drastically. In these circumstances, the existing .cache should be renamed and moved to a different directory or deleted entirely.')
if not args.export_fp16 and not args.export_int8:
print('No export option selected. Please specify --export_fp16 and/or --export_int8.')
if __name__ == '__main__':
main()
```
Used export command:
```txt
python export_kitti_obb.py --model_path /home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/kitti_bev_yolo/run_94_Adam_88.8_87.2/weights/best.pt --export_int8 True --int8 True --dynamic=True --batch 1 --data /home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/cfg/datasets/kitti_bev.yaml
```
Validation script:
```python
from ultralytics import YOLO
model = YOLO('/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/kitti_bev_yolo/run_94_Adam_88.8_87.2/weights/best_1.engine', task='obb', verbose=False)
metrics = model.val(data='/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/cfg/datasets/kitti_bev.yaml', imgsz=640,
batch=16, save_json=False, save_hybrid=False, conf=0.001, iou=0.5, max_det=300, half=False,
device='0', dnn=False, plots=False, rect=False, split='val', project=None, name=None)
```
Validation output with INT8 TensorRT:

Validation output with INT8 ONNX:

Thank you very much!
### Additional
_No response_
|
open
|
2025-03-04T17:11:26Z
|
2025-03-14T01:33:53Z
|
https://github.com/ultralytics/ultralytics/issues/19518
|
[
"question",
"OBB",
"exports"
] |
Petros626
| 19
|
TheAlgorithms/Python
|
python
| 12,531
|
Football questions
|
### What would you like to share?
Questions about football
### Additional information
_No response_
|
closed
|
2025-01-18T18:19:16Z
|
2025-01-19T00:45:37Z
|
https://github.com/TheAlgorithms/Python/issues/12531
|
[
"awaiting triage"
] |
ninostudio
| 0
|
Guovin/iptv-api
|
api
| 636
|
It never manages to run; I'm on the lite version, and the standard version doesn't seem to work either
|
[2024_12_9 13_52_58_log.txt](https://github.com/user-attachments/files/18056923/2024_12_9.13_52_58_log.txt)
|
closed
|
2024-12-09T05:54:42Z
|
2024-12-13T08:43:10Z
|
https://github.com/Guovin/iptv-api/issues/636
|
[
"duplicate",
"incomplete"
] |
Andy-Home
| 4
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 20,116
|
Error when disabling an optimizer with native AMP turned on
|
### Bug description
I'm using 2 optimizers and trying to train with AMP (FP16). I can take steps with my first optimizer. When I take my first step with the second optimizer I get the following error:
```
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/torch/amp/grad_scaler.py", line 450, in step
len(optimizer_state["found_inf_per_device"]) > 0
AssertionError: No inf checks were recorded for this optimizer.
```
I can train this correctly in FP32 -- so it seems to be an issue with AMP.
### What version are you seeing the problem on?
version 2.3.3
### How to reproduce the bug
```python
def training_step(self, batch: Dict, batch_idx: int):
"""
We have 2 sets of optimizers.
Every N batches (self.n_batches_per_optimizer), we make an optimizer update and
switch the optimizer to update.
If self.n_batches_per_optimizer = 1, then we make updates every batch and alternate optimizers
every batch.
If self.n_batches_per_optimizer > 1, then we're doing gradient accumulation, where we are making
updates every n_batches_per_optimizer batches and alternating optimizers every n_batches_per_optimizer
batches.
"""
opts = self.optimizers()
current_cycle = (batch_idx // self.n_batches_per_optimizer) % len(opts)
opt = opts[current_cycle]
opt.zero_grad()
if current_cycle == 0:
compute_model_1_loss = True
elif current_cycle == 1:
compute_model_1_loss = False
else:
raise NotImplementedError(f"Unknown optimizer {current_cycle}")
with opt.toggle_model():
loss = self.inner_training_step(batch=batch, compute_model_1_loss=compute_model_1_loss)
self.manual_backward(loss=loss)
# Perform the optimization step every accumulate_grad_batches steps
if (batch_idx + 1) % self.n_batches_per_optimizer == 0:
if not compute_model_1_loss:
print("About to take compute model 2 loss ...")
opt.step()
opt.zero_grad()
```
### Error messages and logs
```
Traceback (most recent call last):
File "/home/sahil/train.py", line 82, in <module>
main(config)
File "/home/sahil/train.py", line 62, in main
trainer.fit(model, datamodule=data_module, ckpt_path=ckpt)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 543, in fit
call._call_and_handle_interrupt(
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _run
results = self._run_stage()
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1030, in _run_stage
self.fit_loop.run()
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 205, in run
self.advance()
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 363, in advance
self.epoch_loop.run(self._data_fetcher)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 140, in run
self.advance(data_fetcher)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 252, in advance
batch_output = self.manual_optimization.run(kwargs)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/manual.py", line 94, in run
self.advance(kwargs)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/manual.py", line 114, in advance
training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 311, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 390, in training_step
return self.lightning_module.training_step(*args, **kwargs)
File "/home/sahil/model/model.py", line 169, in training_step
opt.step()
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 153, in step
step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 238, in optimizer_step
return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/amp.py", line 93, in optimizer_step
step_output = self.scaler.step(optimizer, **kwargs) # type: ignore[arg-type]
File "/home/sahil/.cache/pypoetry/virtualenvs/env-auw7Hy33-py3.10/lib/python3.10/site-packages/torch/amp/grad_scaler.py", line 450, in step
len(optimizer_state["found_inf_per_device"]) > 0
AssertionError: No inf checks were recorded for this optimizer.
```
### Environment
<details>
<summary>Current environment</summary>
```
* CUDA:
- GPU:
- NVIDIA A100-SXM4-80GB
- available: True
- version: 12.1
* Lightning:
- lightning-utilities: 0.11.5
- pytorch-lightning: 2.3.3
- torch: 2.3.1
- torchmetrics: 1.4.0.post0
- torchvision: 0.18.1
* Packages:
- aiohttp: 3.9.5
- aiosignal: 1.3.1
- annotated-types: 0.7.0
- antlr4-python3-runtime: 4.9.3
- anyio: 4.4.0
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- arrow: 1.3.0
- asttokens: 2.4.1
- async-lru: 2.0.4
- async-timeout: 4.0.3
- attrs: 23.2.0
- autocommand: 2.2.2
- babel: 2.15.0
- backports.tarfile: 1.2.0
- beautifulsoup4: 4.12.3
- bitsandbytes: 0.43.1
- bleach: 6.1.0
- boto3: 1.34.144
- botocore: 1.34.144
- braceexpand: 0.1.7
- certifi: 2024.7.4
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.5.82
- nvidia-nvtx-cu12: 12.1.105
- omegaconf: 2.3.0
- opencv-python: 4.10.0.84
- ordered-set: 4.1.0
- overrides: 7.7.0
- packaging: 24.1
- pandocfilters: 1.5.1
- parso: 0.8.4
- pexpect: 4.9.0
- pillow: 10.4.0
- pip: 24.1
- platformdirs: 4.2.2
- pre-commit: 3.7.1
- proglog: 0.1.10
- prometheus-client: 0.20.0
- prompt-toolkit: 3.0.47
- protobuf: 5.27.2
- psutil: 6.0.0
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pycparser: 2.22
- pydantic: 2.8.2
- pydantic-core: 2.20.1
- pydantic-settings: 2.3.4
- pygments: 2.18.0
- python-dateutil: 2.9.0.post0
- python-dotenv: 1.0.1
- python-json-logger: 2.0.7
- pytorch-lightning: 2.3.3
- pyyaml: 6.0.1
- pyzmq: 26.0.3
- referencing: 0.35.1
- requests: 2.32.3
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rpds-py: 0.19.0
- s3transfer: 0.10.2
- send2trash: 1.8.3
- sentry-sdk: 2.10.0
- setproctitle: 1.3.3
- setuptools: 71.0.2
- six: 1.16.0
- smmap: 5.0.1
- sniffio: 1.3.1
- soupsieve: 2.5
- stack-data: 0.6.3
- sympy: 1.13.0
- terminado: 0.18.1
- tinycss2: 1.3.0
- tomli: 2.0.1
- torch: 2.3.1
- torchmetrics: 1.4.0.post0
- torchvision: 0.18.1
- tornado: 6.4.1
- tqdm: 4.66.4
- traitlets: 5.14.3
- triton: 2.3.1
- typeguard: 4.3.0
- types-python-dateutil: 2.9.0.20240316
- typing-extensions: 4.12.2
- uri-template: 1.3.0
- urllib3: 2.2.2
- virtualenv: 20.26.3
- wandb: 0.17.4
- wcwidth: 0.2.13
- webcolors: 24.6.0
- webdataset: 0.2.86
- webencodings: 0.5.1
- websocket-client: 1.8.0
- wheel: 0.43.0
- yarl: 1.9.4
- zipp: 3.19.2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.10.14
- release: 5.10.0-31-cloud-amd64
- version: #1 SMP Debian 5.10.221-1 (2024-07-14)
```
</details>
### More info
_No response_
|
open
|
2024-07-22T18:15:07Z
|
2024-07-22T18:17:32Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20116
|
[
"bug",
"needs triage",
"ver: 2.2.x"
] |
schopra8
| 1
|
encode/httpx
|
asyncio
| 2,677
|
[Bug] Logged issue with Pygments #2418
|
The starting point for issues should usually be a discussion... - What if not: I have a syntax update issue with a dependency of yours - compatibility 3.11. Easy fix
https://github.com/encode/httpx/discussions
Possible bugs may be raised as a "Potential Issue" discussion, feature requests may be raised as an "Ideas" discussion. We can then determine if the discussion needs to be escalated into an "Issue" or not.
This will help us ensure that the "Issues" list properly reflects ongoing or needed work on the project.
---
- [ ] Initially raised as discussion #... https://github.com/pygments/pygments/issues/2418 for `import pygments.lexers`
This is an issue with a transitive dependency of yours, hit on the `pygments.lexers` import. I logged it with the Pygments project; details are in that issue:
https://github.com/pygments/pygments/issues/2418
|
closed
|
2023-04-24T15:07:28Z
|
2023-04-26T10:21:40Z
|
https://github.com/encode/httpx/issues/2677
|
[] |
iPoetDev
| 0
|
sinaptik-ai/pandas-ai
|
data-visualization
| 993
|
File exists error when creating a `SmartDataframe` object
|
### System Info
OS version: Ubuntu 20.04.6 LTS
Python version: 3.11.8
The current version of `pandasai` being used: 2.0.3
### 🐛 Describe the bug
Here is the code (simple flask API) that I'm using right now:
```python
# Route to get all books
@app.route('/run', methods=['POST'])
def run_pandasai():
data = request.get_json()
engine = create_engine(SQLALCHEMY_BASE_DATABASE_URI)
df = None
with engine.connect() as conn:
df = pd.read_sql(text(f'SELECT * FROM some_table;'), conn)
llm = OpenAI(api_token='<my_api_key>')
df = SmartDataframe(df, config={"llm": llm})
response = df.chat('some prompt?')
return jsonify({'response': response})
```
I get the following error while running this:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/app.py", line 20, in run_pandasai
df = SmartDataframe(df, config={"llm": llm})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/smart_dataframe/__init__.py", line 64, in __init__
self._agent = Agent([df], config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/agent/base.py", line 75, in __init__
self.context = PipelineContext(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/pipelines/pipeline_context.py", line 35, in __init__
self.cache = cache if cache is not None else Cache()
^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/helpers/cache.py", line 29, in __init__
os.makedirs(cache_dir, mode=DEFAULT_FILE_PERMISSIONS, exist_ok=True)
File "<frozen os>", line 225, in makedirs
FileExistsError: [Errno 17] File exists: '/app/cache'
```
I understand that this happens because it is trying to create a directory that already exists. But even though `exist_ok` is `True`, it still fails; is the `DEFAULT_FILE_PERMISSIONS` mode the reason it cannot handle the existing path, and is this a bug?
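For reference, a small standalone check (not from pandasai) of when `os.makedirs(..., exist_ok=True)` still raises `FileExistsError`: an existing directory is fine regardless of the mode argument, but a regular file at the target path still raises from the same `<frozen os>` makedirs frame.
```python
import os
import tempfile

base = tempfile.mkdtemp()
cache_dir = os.path.join(base, "cache")
os.makedirs(cache_dir, mode=0o700, exist_ok=True)
os.makedirs(cache_dir, mode=0o755, exist_ok=True)  # no error: directory already exists
cache_file = os.path.join(base, "cache_as_file")
open(cache_file, "w").close()
try:
    os.makedirs(cache_file, exist_ok=True)  # target exists but is a file, not a directory
except FileExistsError as exc:
    print(exc)  # [Errno 17] File exists: '.../cache_as_file'
```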
|
closed
|
2024-03-04T15:06:35Z
|
2024-03-07T18:54:46Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/993
|
[
"wontfix"
] |
araghuvanshi-systango
| 2
|
graphistry/pygraphistry
|
pandas
| 269
|
[FEA] validator
|
**Is your feature request related to a problem? Please describe.**
A recent viz had some null titles -- it would help to be able to validate this somehow!
**Describe the solution you'd like**
Ex: https://gist.github.com/lmeyerov/423df6b3b5bd85d12fd74b85eca4a17a
- nodes not in edges
- edges referencing non-existent nodes
- na nodes/edges
- if colors/sizes/icons/titles, NA vals
- if no title and defaulting to guess title, NAs there
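A minimal pandas sketch (not from the gist above) of two of these checks, dangling edges and NA titles, just to make the idea concrete:
```python
import pandas as pd

nodes = pd.DataFrame({"id": ["a", "b", "c"], "title": ["A", None, "C"]})
edges = pd.DataFrame({"src": ["a", "b", "x"], "dst": ["b", "c", "a"]})

known = set(nodes["id"])
dangling = edges[~(edges["src"].isin(known) & edges["dst"].isin(known))]
print("edges referencing non-existent nodes:")
print(dangling)
print("nodes with NA titles:")
print(nodes[nodes["title"].isna()])
```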
|
open
|
2021-10-15T05:45:44Z
|
2024-12-16T17:25:38Z
|
https://github.com/graphistry/pygraphistry/issues/269
|
[
"enhancement",
"good-first-issue"
] |
lmeyerov
| 0
|
TracecatHQ/tracecat
|
pydantic
| 342
|
[FEATURE IDEA] Add UI to show action if `run_if` is specified
|
## Why
- It's hard to tell if a node has a conditional attached unless you select the node or have given it a meaningful title
## Suggested solution
- Add a greenish `#C1DEAF` border that shows up around the node
- Prompt the user to give the "condition" a human-readable name (why? because the expression will probably be too long)
- If no human-readable name is given, take the last `.attribute` and `operator value` part of the expression as the condition name.
|
open
|
2024-08-22T17:08:12Z
|
2024-08-22T17:17:01Z
|
https://github.com/TracecatHQ/tracecat/issues/342
|
[
"enhancement",
"frontend"
] |
topher-lo
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 385
|
What versions of everything does one need to avoid errors during setup?
|
Hi all,
I'm on Ubuntu 20.04 with Python 3.7 in a conda env. I have a Nvidia GTX660 GPU installed.
I'm currently rockin' torch 1.2, cuda-10-0, tensorflow 1.14.0, tensorflow-gpu 1.14.0, and torchvision 0.4.0, along with everything else in requirements.txt. I am using python 3.7. For the life of me, I can't figure out how to get demo_cli.py to not give the error a bunch of people get:
```Your PyTorch installation is not configured to use CUDA. If you have a GPU ready for deep learning, ensure that the drivers are properly installed, and that your CUDA version matches your PyTorch installation. CPU-only inference is currently not supported.```
Could someone give me the lowdown on precisely what packages and version numbers I need to make this thing fire up?
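A quick sanity check of what that error message is testing (a generic snippet, not from the repo): the installed torch wheel has to be a CUDA build and has to see the GPU.
```python
import torch

print(torch.__version__)          # torch build installed (1.2 in this setup)
print(torch.version.cuda)         # CUDA version the wheel was built against, or None for CPU-only wheels
print(torch.cuda.is_available())  # must be True for demo_cli.py's GPU check to pass
```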
|
closed
|
2020-06-27T04:21:48Z
|
2020-06-29T21:58:11Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/385
|
[] |
deltabravozulu
| 4
|
JaidedAI/EasyOCR
|
pytorch
| 584
|
AttributeError: 'Reader' object has no attribute 'detector'
|
I'm following the API documentation (https://www.jaided.ai/easyocr/documentation/), which lists `detector` as a parameter, but I can't use it! :/
The traceback follows...
```bash
Traceback (most recent call last):
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/fastapi/applications.py", line 199, in __call__
await super().__call__(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/middleware/cors.py", line 78, in __call__
await self.app(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 201, in app
raw_response = await run_endpoint_function(
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 148, in run_endpoint_function
return await dependant.call(**values)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/routers/v2/ocr.py", line 71, in ocr_extract
result = await runner.run_pipeline(postprocesses=postprocesses,
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/repository/ocr.py", line 259, in run_pipeline
text, conf = await self.apply_ocr(*args, **kwargs)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/repository/ocr.py", line 246, in apply_ocr
text, conf = await self.framework.predict(self.im, *args, **kwargs)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/repository/ocr.py", line 141, in predict
results = reader.readtext(image, **config)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/easyocr/easyocr.py", line 376, in readtext
horizontal_list, free_list = self.detect(img, min_size, text_threshold,\
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/easyocr/easyocr.py", line 274, in detect
text_box = get_textbox(self.detector, img, canvas_size, mag_ratio,\
AttributeError: 'Reader' object has no attribute 'detector'
ERROR:uvicorn.error:Exception in ASGI application
Traceback (most recent call last):
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/fastapi/applications.py", line 199, in __call__
await super().__call__(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/middleware/cors.py", line 78, in __call__
await self.app(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 201, in app
raw_response = await run_endpoint_function(
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/fastapi/routing.py", line 148, in run_endpoint_function
return await dependant.call(**values)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/routers/v2/ocr.py", line 71, in ocr_extract
result = await runner.run_pipeline(postprocesses=postprocesses,
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/repository/ocr.py", line 259, in run_pipeline
text, conf = await self.apply_ocr(*args, **kwargs)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/repository/ocr.py", line 246, in apply_ocr
text, conf = await self.framework.predict(self.im, *args, **kwargs)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/./web_ocr/repository/ocr.py", line 141, in predict
results = reader.readtext(image, **config)
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/easyocr/easyocr.py", line 376, in readtext
horizontal_list, free_list = self.detect(img, min_size, text_threshold,\
File "/home/lead/Documents/wal-project/ocr/icaro-ocr/.venv/lib/python3.8/site-packages/easyocr/easyocr.py", line 274, in detect
text_box = get_textbox(self.detector, img, canvas_size, mag_ratio,\
AttributeError: 'Reader' object has no attribute 'detector'
```
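For what it's worth, a minimal sketch of the constructor parameter the documentation refers to (the image path is illustrative and the failing call is left commented out): `detector` controls whether the detection model is loaded at all, and `readtext()` relies on it, which would match the `AttributeError` above if the reader was built without a detector.
```python
import easyocr

reader = easyocr.Reader(["en"], gpu=False, detector=True)  # detection model is loaded
# reader.readtext("document.png")  # works: detection + recognition

reader_no_det = easyocr.Reader(["en"], gpu=False, detector=False)
# reader_no_det.readtext("document.png")  # would fail: the reader has no detector to run
```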
|
closed
|
2021-11-03T18:39:29Z
|
2022-08-07T05:00:33Z
|
https://github.com/JaidedAI/EasyOCR/issues/584
|
[] |
igormcsouza
| 0
|
SYSTRAN/faster-whisper
|
deep-learning
| 1,215
|
Service Execution Failure: when Running FasterWhisper as a Service on Ubuntu
|
# Issue: Service Execution Failure - `status=6/ABRT`
The following code executes successfully when run directly on an Ubuntu-based system, e.g. `python3 make_transcript.py`. However, when executed as a systemd service, it fails and exits with the error:
```
code=dumped, status=6/ABRT
```
## Code Snippet
```python
# Imports and objects elided in the original snippet (model, audio_file, and the
# text-cleaning step that produces cleaned_text are defined elsewhere in the script).
import time
from tqdm import tqdm
from faster_whisper import WhisperModel  # model = WhisperModel(...) is created earlier

segments, _ = model.transcribe(audio_file, task='translate', vad_filter=True)
print("[Transcribing COMPLETE]")
cleaned_segments = []
stX = time.time()
for idx, segment in enumerate(tqdm(segments)):
print(f"current running {idx} xxxxxxxxxxxxx")
text = segment.text
cleaned_segments.append({
'start': segment.start,
'end': segment.end,
'text': cleaned_text.strip()
})
```
## Observed Behavior
1. The code runs without any issues when executed directly in a terminal or Python environment.
2. When the code is run as part of a systemd service file, it crashes with the following error and does not execute properly.
3. It runs fine until it enters the for loop, where it crashes:
```
code=dumped, status=6/ABRT
```
4. Observe the failure in the logs.
I would be very grateful if anyone can identify the cause of the failure and provide a resolution or guidance on debugging this issue further.
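One detail worth keeping in mind while debugging (from faster-whisper's documented behaviour, not specific to this report): `transcribe()` returns a lazy generator, so the actual decoding only runs while the for loop iterates, which is consistent with the crash appearing there rather than at the `transcribe()` call.
```python
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.wav", task="translate", vad_filter=True)  # path illustrative
# Nothing has been decoded yet; the heavy work happens inside this loop.
for segment in segments:
    print(segment.start, segment.end, segment.text)
```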
|
open
|
2024-12-24T11:39:49Z
|
2024-12-24T11:39:49Z
|
https://github.com/SYSTRAN/faster-whisper/issues/1215
|
[] |
manashb96
| 0
|
hankcs/HanLP
|
nlp
| 1,319
|
Reporting the problems I hit during installation, and the version combination that finally installed successfully
|
<!--
The Notes and Version sections are required; issues without them will not be answered. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
- [Home page docs](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my problem with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that the open-source community is a voluntary community built out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x in these brackets to confirm all of the above.
## Version
<!-- For release builds, give the jar file name without the extension; for the GitHub repo version, state whether it is the master or portable branch -->
The current latest version is: pyhanlp 0.1.50
The version I am using is: pyhanlp 0.1.47
<!-- The above is required; the rest is free-form -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it gets solved -->
Environment:
* win10
* jdk 1.8
* python 3.7
Failed installation attempts:
* First attempt: plain `pip install pyhanlp` (version 0.1.50), which automatically installed the dependency jpype1 (version 0.7.0); after downloading the data files, running `hanlp` raised: `startJVM() got an unexpected keyword argument 'convertStrings'`
* Second attempt: `pip install pyhanlp==0.1.47`, `pip install jpype1==0.6.2`; running it raised: `AttributeError: module '_jpype' has no attribute 'setResource'`
What finally worked:
* `pip install pyhanlp==0.1.47`
* `pip install jpype1==0.6.3`
|
closed
|
2019-11-07T01:48:48Z
|
2019-11-07T02:03:00Z
|
https://github.com/hankcs/HanLP/issues/1319
|
[
"discussion"
] |
chenwenhang
| 3
|
PaddlePaddle/ERNIE
|
nlp
| 134
|
Training job gets killed
|
I downloaded the BERT demo and ran train.sh locally on my dev machine; the job was killed right away.

|
closed
|
2019-05-13T12:11:04Z
|
2019-10-12T09:15:09Z
|
https://github.com/PaddlePaddle/ERNIE/issues/134
|
[] |
MaxLingwei
| 5
|
deepfakes/faceswap
|
deep-learning
| 1,282
|
No Module Decorator
|
Tried to run training.py; it crashed with this in the console:
```
Traceback (most recent call last):
File "C:\Users\Admin\faceswap\lib\cli\launcher.py", line 217, in execute_script
process.process()
File "C:\Users\Admin\faceswap\scripts\train.py", line 218, in process
self._end_thread(thread, err)
File "C:\Users\Admin\faceswap\scripts\train.py", line 258, in _end_thread
thread.join()
File "C:\Users\Admin\faceswap\lib\multithreading.py", line 217, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\Admin\faceswap\lib\multithreading.py", line 96, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Admin\faceswap\scripts\train.py", line 280, in _training
raise err
File "C:\Users\Admin\faceswap\scripts\train.py", line 268, in _training
model = self._load_model()
File "C:\Users\Admin\faceswap\scripts\train.py", line 292, in _load_model
model: "ModelBase" = PluginLoader.get_model(self._args.trainer)(
File "C:\Users\Admin\faceswap\plugins\plugin_loader.py", line 131, in get_model
return PluginLoader._import("train.model", name, disable_logging)
File "C:\Users\Admin\faceswap\plugins\plugin_loader.py", line 197, in _import
module = import_module(mod)
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\Admin\faceswap\plugins\train\model\original.py", line 11, in <module>
from ._base import KerasModel, ModelBase
File "C:\Users\Admin\faceswap\plugins\train\model\_base\__init__.py", line 4, in <module>
from .model import get_all_sub_models, KerasModel, ModelBase # noqa
File "C:\Users\Admin\faceswap\plugins\train\model\_base\model.py", line 23, in <module>
from .settings import Loss, Optimizer, Settings
File "C:\Users\Admin\faceswap\plugins\train\model\_base\settings.py", line 37, in <module>
from lib.model.autoclip import AutoClipper # pylint:disable=ungrouped-imports
File "C:\Users\Admin\faceswap\lib\model\autoclip.py", line 8, in <module>
import tensorflow_probability as tfp
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\__init__.py", line 75, in <module>
from tensorflow_probability.python import * # pylint: disable=wildcard-import
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\__init__.py", line 21, in <module>
from tensorflow_probability.python import bijectors
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\bijectors\__init__.py", line 23, in <module>
from tensorflow_probability.python.bijectors.absolute_value import AbsoluteValue
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\bijectors\absolute_value.py", line 23, in <module>
from tensorflow_probability.python.bijectors import bijector
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\bijectors\bijector.py", line 33, in <module>
from tensorflow_probability.python.internal import distribution_util
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\internal\distribution_util.py", line 29, in <module>
from tensorflow_probability.python.internal import prefer_static
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\internal\prefer_static.py", line 22, in <module>
import decorator
ModuleNotFoundError: No module named 'decorator'
11/15/2022 21:24:55 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\Admin\faceswap\crash_report.2022.11.15.212446667521.log'. You MUST provide this file if seeking assistance. Please verify you are running the latest version of faceswap before reporting
```

```
11/15/2022 21:24:44 MainProcess MainThread __init__ wrapper DEBUG CONFIGDIR=C:\Users\Admin\.matplotlib
11/15/2022 21:24:44 MainProcess MainThread __init__ <module> DEBUG interactive is False
11/15/2022 21:24:44 MainProcess MainThread __init__ <module> DEBUG platform is win32
11/15/2022 21:24:44 MainProcess MainThread __init__ wrapper DEBUG CACHEDIR=C:\Users\Admin\.matplotlib
11/15/2022 21:24:44 MainProcess MainThread font_manager _load_fontmanager DEBUG Using fontManager instance from C:\Users\Admin\.matplotlib\fontlist-v330.json
11/15/2022 21:24:44 MainProcess MainThread queue_manager __init__ DEBUG Initializing _QueueManager
11/15/2022 21:24:44 MainProcess MainThread queue_manager __init__ DEBUG Initialized _QueueManager
11/15/2022 21:24:45 MainProcess MainThread stats __init__ DEBUG Initializing GlobalSession
11/15/2022 21:24:45 MainProcess MainThread stats __init__ DEBUG Initialized GlobalSession
11/15/2022 21:24:45 MainProcess MainThread train __init__ DEBUG Initializing Train: (args: Namespace(func=<bound method ScriptExecutor.execute_script of <lib.cli.launcher.ScriptExecutor object at 0x00000268DF228E80>>, exclude_gpus=None, configfile=None, loglevel='INFO', logfile=None, redirect_gui=True, colab=False, input_a='C:\\Users\\Admin\\Desktop\\faces\\A', input_b='C:\\Users\\Admin\\Desktop\\faces\\B', model_dir='C:\\Users\\Admin\\Desktop\\faces', load_weights=None, trainer='original', summary=False, freeze_weights=False, batch_size=16, iterations=1180000, distributed=False, distribution_strategy='default', save_interval=250, snapshot_interval=25000, timelapse_input_a=None, timelapse_input_b=None, timelapse_output=None, preview=True, write_image=False, no_logs=False, warp_to_landmarks=False, no_flip=False, no_augment_color=False, no_warp=False)
11/15/2022 21:24:45 MainProcess MainThread train _get_images DEBUG Getting image paths
11/15/2022 21:24:45 MainProcess MainThread utils get_image_paths DEBUG Scanned Folder contains 33 files
11/15/2022 21:24:45 MainProcess MainThread utils get_image_paths DEBUG Returning 33 images
11/15/2022 21:24:45 MainProcess MainThread train _get_images DEBUG Test file: (filename: C:\Users\Admin\Desktop\faces\A\2020-04-17 22.55.41 2289591023441328516_291158439_0.png, metadata: {'width': 512, 'height': 512, 'itxt': {'alignments': {'x': 202, 'w': 124, 'y': 179, 'h': 175, 'landmarks_xy': [[186.1354217529297, 263.46875], [193.3229217529297, 282.63543701171875], [200.5104217529297, 299.40625], [207.6979217529297, 313.78125], [219.67709350585938, 330.55206298828125], [236.44790649414062, 342.53125], [255.6145782470703, 352.11456298828125], [279.57293701171875, 361.69793701171875], [301.13543701171875, 364.09375], [315.51043701171875, 356.90625], [317.90625, 349.71875], [317.90625, 337.73956298828125], [320.30206298828125, 318.57293701171875], [327.48956298828125, 297.01043701171875], [329.88543701171875, 280.23956298828125], [327.48956298828125, 265.86456298828125], [322.69793701171875, 249.09375], [231.65625, 241.90625], [243.6354217529297, 234.71875], [258.01043701171875, 229.9270782470703], [267.59375, 229.9270782470703], [277.17706298828125, 234.71875], [303.53125, 229.9270782470703], [308.32293701171875, 227.53125], [313.11456298828125, 227.53125], [317.90625, 229.9270782470703], [317.90625, 234.71875], [296.34375, 249.09375], [303.53125, 261.07293701171875], [313.11456298828125, 270.65625], [315.51043701171875, 280.23956298828125], [289.15625, 289.82293701171875], [296.34375, 289.82293701171875], [303.53125, 289.82293701171875], [308.32293701171875, 287.42706298828125], [310.71875, 287.42706298828125], [250.8229217529297, 256.28125], [260.40625, 253.8854217529297], [267.59375, 253.8854217529297], [272.38543701171875, 256.28125], [267.59375, 258.67706298828125], [258.01043701171875, 258.67706298828125], [298.73956298828125, 251.4895782470703], [305.92706298828125, 246.6979217529297], [313.11456298828125, 246.6979217529297], [313.11456298828125, 249.09375], [313.11456298828125, 251.4895782470703], [305.92706298828125, 251.4895782470703], [265.19793701171875, 311.38543701171875], [281.96875, 306.59375], [298.73956298828125, 301.80206298828125], [305.92706298828125, 301.80206298828125], [310.71875, 299.40625], [315.51043701171875, 301.80206298828125], [313.11456298828125, 306.59375], [313.11456298828125, 318.57293701171875], [310.71875, 323.36456298828125], [305.92706298828125, 325.7604064941406], [296.34375, 325.7604064941406], [284.36456298828125, 320.96875], [267.59375, 311.38543701171875], [293.94793701171875, 306.59375], [303.53125, 306.59375], [308.32293701171875, 306.59375], [313.11456298828125, 306.59375], [308.32293701171875, 313.78125], [303.53125, 316.17706298828125], [293.94793701171875, 316.17706298828125]], 'mask': {'components': {'mask': b"x\x9c\xed\x9a[\x92\xc2@\x08E\xdd\x19Kci,-\xa3\xd1\xa91>:\x81{i\xa6*\x9c\x0f\x7f\x0f\rD#\xcd\xe5\xd24M\xd34M\xd34M\xd34M\xd34M\xf3?P\xd5eY\xae\x9f:\xdb,\xaby\xcb5\x0c\x99cW{\x93\xffa\xc9\xe9\x90\x91|\x9b\x0e~\x1c\xf2\x9e\xf6\xbd0h5\x91a\xda\x07\x110\xe4\x1f\xfa\xed0\x06\x1f<\xee^\x81\xdc\x06\xca\x81\xf3\x13\xdc7B\r\x80\x14\x1c\xf53\x92\xfe\x8cO\xceu\xbb\xfdJ\xd7;\xcf\xcf\xf7\xfb\x1a\xe0\xec~\xe3\x07\xe0\xf2\xeb\xc9\xfdR\xecOh@\xdf;\x08\xdf_\xfd\x00\xf8\xfczr?\xbf\x01\x9co@t\xbf\xf3\x01\xd4\x93\xfb\xa5\xd8\xcfo\x00\xa7\xdf\x8a\xfdZ\xec\x97b?\xbd\x01\xbc~+\xf6k\xb1\x9f]\x00\xb7\xdf\xa8z\xff?`\xa5\xfa\xfd3 \xa1\xfa\xddzn\x03D\x06\x10Z\xec\x17\xa2?\xa0g\x16 
6\x7fR\x9a?8\x80\xa4\xf9czZ\x02\xc2\xe3?\x92?<\x80V\x8e?\xaa'%\x00\x18\x7f+\xc3\x8f\xcc\xff\tzh\xf8\xae\xb8\x1f\xbb}\xc0\xfd\x90\x1e\xff\x15\x80\xef>@?|\xfb\x04\x06\x80\xea\xc1\x00\x18w_\n\xf8)\x97\x7f\x16\xd6\xa3\xdd\x87\x06\xc0\xba\x80\x8d\x06@\xd2G\x03 \xa5?\x1c\x00s\x1d@\x03~\xa2>\xf2=\xc0\xbe\xfe7\xa7\x9f\xbe\x8d\xa1.=\xb1\xfb~\x11\x8f?e\x0b\xc4\x8e\xfb3\xf4\x8e\x1a$\xa4\x7f\xe5h\r\xf2v\x81\x8e\xa5 M\x7f\xec\x8a>w%k\xbf\x08\xd9\xabX;E\xc8\xea\xbe'\x86E\x98\xb1\x896h\x83\t\xc7\xbf\xf1\xb5\x08\x93\x16\xf1\xbeE0s\x1f\xf1\xc3v\xd6\xecu\xc8\xcd\xbe\xd0\xac%\xc8\x17\xee!\x14\xc9\x1f!T\xca\x19\xfc\x00=Z\x92s", 'affine_matrix': [[0.5971773365351224, -0.10676463488366382, -77.57558414093813], [0.10676463488366382, 0.5971773365351223, -123.80531080504358]], 'interpolator': 3, 'stored_size': 128, 'stored_centering': 'face'}, 'extended': {'mask': b"x\x9c\xed\xda\x81m\xc30\x0c\x04@o\xc6\xd1~4\x8d\xe6\xb6i\x02\x18\xa8\xea\x9a\xfc\xa7\xd8\xc0\xfc\x05\x8e!\x95\xb4\x90\xb8m\x9dN\xa7\xd3\xe9t:\x9dN\xa7\xd3\xe9t:\xff.\x06X\x11\r`\x7ff\x00\x8b\xed\xb1\xff\x0cVUa\x13\xfc\xd0\x8a\xec\x81\xe0\x84\xcf\x1f\x88\xcdZ?O\xc6@\xcez?o\x85r\x1c\x8e\x0f\x7f\xa8\xa0T\x97\xb5`\xfa\x9d\xbb\x18\x81\x1e\xc7\xf9\tD\x1b\xff\n\xf75`\xf5}g\xe6\x0f\x16\xdf\x99\xf1+\xf4\xb0\xcf7\x9e\xf1ez\xc8\x87\x0c\x0f\xf8&\xd5\xbd\xbe\xb0\xf1\xcfx\xbe\x7fz\xdd\xf7\xfb\x03=\xef\xf2\xad\xd8\xdf\xee\xee\x8fb\x1f7\xf7\xad\xd8O8\x80\xef\xe5\x0f\xb9\xef\xfb\xff\x13r\xdf\xf7\x07\xb0\xdaO8\x00\xef\xe5\xe3\xe6\xbe\x15\xfb\xfa\x03\xe0\xf4G\xb1\x8fb\xdf\x8a}\xf9\x01\xf0\xfa\xa3\xd8G\xb1\xaf\x1e\x80\xdb\x1fR\xde\x7f\xff\x03\xa9\xef\xbf\xff1\xa9\xef\xe6\xb5\x07 r\xfd\x86b\xdf\x84~\x80W\x0e v\xfb\n\x99\x1f\xbc}\x95\xf91^\xd6\x80\xf0\xe5\xb7\xc8\x0f_>C\xe3GyQ\x03\x88\xbb\x7f(|\xe6\xee_\xc0SO/\xe0}\xee\xe9\x85\xf7)\x9e\xff+@\xbf|\x91>\xfd\xf2H\x16\xc0\xf2d\x01\x8a\xf7o\x10\xbe\xe4\xe1w\x84y\xd1\xcbw\xb8\x00\xd5\xfaA\xb4\x00\x11\x1f-@\xb6x\x10,@\xb9y\x81\x80/\xe4#\xbf\x03\xea\xe5\x97\xe1\xf4\xe5{Hp\xf1\xc2\xd3\xf7\x8ay\xfc\x94\r\xa8q\xdd\xcf\xe0\x1d3Hh\xff#Wg\x90\xb7\x05w\xad\x05i\xfc\xb5'\xfa\xdcU\xc4\xbf\x87P\xbc\x84\x98u\xfa\x0e9\x1d\xc2\x8a\xa5\xd8\x93c\xb0\xe0\xe3\x7f\xe5\xd7!,\xdb\t\x9eW\xb0r\x1bx\xb2+\xb4x\x19\xf9\xb3\x84\xc3I(\xda\xc7\xfe.\xa1l\x19\xfcQB%\xae\xc8\x07D\xd6_\xbc", 'affine_matrix': [[0.5971773365351224, -0.10676463488366382, -77.57558414093813], [0.10676463488366382, 0.5971773365351223, -123.80531080504358]], 'interpolator': 3, 'stored_size': 128, 'stored_centering': 'face'}}, 'identity': {}}, 'source': {'alignments_version': 2.3, 'original_filename': '2020-04-17 22.55.41 2289591023441328516_291158439_0.png', 'face_index': 0, 'source_filename': '2020-04-17 22.55.41 2289591023441328516_291158439.jpg', 'source_is_video': False, 'source_frame_dims': (1349, 1080)}}})
11/15/2022 21:24:45 MainProcess MainThread train _get_images INFO Model A Directory: 'C:\Users\Admin\Desktop\faces\A' (33 images)
11/15/2022 21:24:45 MainProcess MainThread utils get_image_paths DEBUG Scanned Folder contains 180 files
11/15/2022 21:24:45 MainProcess MainThread utils get_image_paths DEBUG Returning 180 images
11/15/2022 21:24:45 MainProcess MainThread train _get_images DEBUG Test file: (filename: C:\Users\Admin\Desktop\faces\B\2019-07-14 10.37.45 2087731961133796176_380280568_0.png, metadata: {'width': 512, 'height': 512, 'itxt': {'alignments': {'x': 308, 'w': 631, 'y': 78, 'h': 937, 'landmarks_xy': [[328.24359130859375, 427.7779541015625], [315.67950439453125, 528.290771484375], [315.67950439453125, 603.6753540039062], [328.24359130859375, 679.0599975585938], [340.80767822265625, 767.0087280273438], [365.9359130859375, 842.3933715820312], [403.62823486328125, 892.6497192382812], [441.32049560546875, 942.9061889648438], [529.2692260742188, 1005.7266235351562], [629.7820434570312, 993.1625366210938], [692.6025390625, 955.4702758789062], [742.8589477539062, 917.7778930664062], [805.6795043945312, 854.9574584960938], [843.3717651367188, 779.5728149414062], [881.0640869140625, 704.1881713867188], [906.1922607421875, 628.8035888671875], [931.320556640625, 540.8548583984375], [365.9359130859375, 339.8292236328125], [416.19232177734375, 302.13690185546875], [466.44873046875, 314.7010498046875], [504.14105224609375, 327.26513671875], [541.8333129882812, 352.393310546875], [730.2948608398438, 390.08563232421875], [767.9871826171875, 377.52154541015625], [830.8076782226562, 377.52154541015625], [881.0640869140625, 390.08563232421875], [906.1922607421875, 440.342041015625], [604.6538696289062, 490.59844970703125], [604.6538696289062, 553.4189453125], [592.0897216796875, 603.6753540039062], [579.525634765625, 653.9317626953125], [516.7051391601562, 679.0599975585938], [541.8333129882812, 691.6240844726562], [566.9615478515625, 704.1881713867188], [604.6538696289062, 704.1881713867188], [629.7820434570312, 704.1881713867188], [416.19232177734375, 427.7779541015625], [441.32049560546875, 427.7779541015625], [491.576904296875, 427.7779541015625], [529.2692260742188, 465.47027587890625], [479.0128173828125, 465.47027587890625], [441.32049560546875, 452.9061279296875], [705.1666870117188, 490.59844970703125], [755.423095703125, 490.59844970703125], [793.1153564453125, 490.59844970703125], [830.8076782226562, 515.7266845703125], [793.1153564453125, 528.290771484375], [742.8589477539062, 515.7266845703125], [453.8846435546875, 792.1369018554688], [479.0128173828125, 767.0087280273438], [541.8333129882812, 754.4446411132812], [566.9615478515625, 754.4446411132812], [592.0897216796875, 754.4446411132812], [629.7820434570312, 792.1369018554688], [667.474365234375, 829.8291625976562], [629.7820434570312, 867.5215454101562], [579.525634765625, 880.0856323242188], [541.8333129882812, 880.0856323242188], [504.14105224609375, 867.5215454101562], [479.0128173828125, 842.3933715820312], [453.8846435546875, 792.1369018554688], [529.2692260742188, 779.5728149414062], [554.3974609375, 792.1369018554688], [592.0897216796875, 804.7009887695312], [654.9102783203125, 829.8291625976562], [579.525634765625, 829.8291625976562], [554.3974609375, 829.8291625976562], [516.7051391601562, 817.2650756835938]], 'mask': {'components': {'mask': b'x\x9c\xed\x99Q\xae\xc20\x0c\x04{\xb3\x1c-G\xf3\xd1\xca\xa3| 
D\x81\xc4\xb3I@o\xe7\x02\xb3\xb5\xe3Tr\xb6\xcd\x18c\x8c1\xc6\x18c\x8c1\xc6\x18c\xcc7R\xea)bK\xec\xfd\x94\xb5za\x80\x9c~\xdfE\xfa\x92\xd4\xef\xb1V/\n\x90\xd7K\x8e\x00\xf8|I\x01*\xf1\x0b\n\xc0\xfc\xbc\x00H/\x98A\xe8\xa77q\x81~Z\x80J\xfd\xb0\x00\xd8\x0f\x0b\x80\xf5\xb0\x00\xf6S\xd8\x15\xc8\xfdH\xbf\xdc\x1f\x8b\xfd\xf5\xd7\xfd\xf0\x0f\x88\xfd\xab\xef_\xfb\x19\xd4\x0f\xf5\xff\xde_\x7f\xdb\x8f7\x01e\xb1\x1f\x1e\x00\xee\x0f\xe4\xc7zx\x00\xb8\x1f5@\xb1\x88\n\xe0W\xec\x80\n\xf0\x0b\xf4\xa4\x01\x9a\rPM\xfb5{\xc8\x92\xf6K\xf4\xa0\x01"\x7fM\xeaUk\xe0\x92\xf4\xcb6\xb0I\xbfJ\x9fl\x80f\xfa\x0eR~\xe1+@d\xfc:}\xae\x01B\x7f\xa6\x01\xd2G\x98D\x01\x94\xfaD\x01\x84\xef?WJ\xa7^8|7\xa2\xcf\xaf\xd6w\x16@\\\xfd+\xb5C\xaf~\x80<h\xd7\xcb\x9b\x7f\xd0\xde\x81\x01\xd5\xef\t0H\xff\x17 \x96\xea\xb7\xa6)\x1c\xd3\xfc\xe6\x00c\xf5\x1f\x03\x8c\xd6\x7f\x080^\xffv\x0c\x86\xdc;\xed\x01F\x9e\xfc\x87\x00\xe7=\x98\xa5?O0\xa3\xf5\xaf\x13\xd4\x89\x1f\xff\x94 \xe6\xcb\xef\tb\xce\xa1\x7f\x91`\xa5\xdc\x18cL\x9a\x0bJ\xab\xda&', 'affine_matrix': [[0.10563867057061942, 0.02008655320207599, -11.293521284931124], [-0.020086553202075995, 0.10563867057061942, 20.28065738741556]], 'interpolator': 2, 'stored_size': 128, 'stored_centering': 'face'}, 'extended': {'mask': b"x\x9c\xed\x99\x8b\x8d\x03!\x0cD\xb73\x97\xe6\xd2\\\x1a\x97\x8f\xa2\xe8\x94\x90\x80g\xc0Ze^\x01y\x13\xc6 \xc4\x1e\x87\x10B\x08!\x84\x10B\x08!\x84\x10\xf3\xb8;\xed\xb7\xac]\xf0\x0e\xad\x0b-@\xf4\x1d\x1f\xb1Z}k\x14\xbd\xa5\xf5-j\xf5\x94\x06\x02\xf1\xe3\x010=\xdc\x80azx\x04\x1d\xf5\x83\x87\x80\xfc(\xf2\xd7\xfa\xc1\x01\x90\x1fE\xfeZ?6\x00\xf2\x9f\xdd\x8f]\xc1p?\xa4/\xf7G\xb1\xdf\xcf\xee\x07o\xe0\xb0\xbf\xfa\x02(?\x06\xea\x07\xf5?\xef\xf7s\xfb\xe1G0+\xf6\x83\x03\x80\xfb\x03\xf2\xc3zp\x00p?T\x00\xe3\r6\x00?\xe3\t\xd6\x00?A\x8f\x14\xc0x\x80E&\x90\xf3\x04oi?E\x0f\x14@\xf2{R\xcf\xfa\x02bI?\xe9\x03H\xba\x00\x96>Y\x00g\xf7\xddH\xf9y_\xe0rg0O\x9f+\x80\xe8\xcf\x14@\\\xfe\xd4\x020\xf5\x89\x05\xa0m\xfe;6\xa9'n\xbe;1\xe7g\xeb'\x17\x80\xbc\xfaW|BO\x9d\xfd\x07\xe3zz\xf97\xc6\x1bX\xb0\xfa3\x01\x16\xe9/\x01\xa2T\x7f\x0c\xed\xc25\xe5\x0f\x07X\xab\xff\x1a`\xb5\xfeK\x80\xf5\xfa\x8f\xdb`\xc9\xb93\x1e`\xe5\xe4\xff\x0b\xf0\xbe\x83]\xfa\xf7\tvT\xdfO\xe0\x1b\xff\xfcK\x82\xd8/\x7f&\x88=C\xdfIP)\x17B\x08\x91\xe6\x0f\xfc\xf7\xa9.", 'affine_matrix': [[0.10563867057061942, 0.02008655320207599, -11.293521284931124], [-0.020086553202075995, 0.10563867057061942, 20.28065738741556]], 'interpolator': 2, 'stored_size': 128, 'stored_centering': 'face'}}, 'identity': {}}, 'source': {'alignments_version': 2.3, 'original_filename': '2019-07-14 10.37.45 2087731961133796176_380280568_0.png', 'face_index': 0, 'source_filename': '2019-07-14 10.37.45 2087731961133796176_380280568.jpg', 'source_is_video': False, 'source_frame_dims': (1350, 1080)}}})
11/15/2022 21:24:45 MainProcess MainThread train _get_images INFO Model B Directory: 'C:\Users\Admin\Desktop\faces\B' (180 images)
11/15/2022 21:24:45 MainProcess MainThread train _get_images DEBUG Got image paths: [('a', '33 images'), ('b', '180 images')]
11/15/2022 21:24:45 MainProcess MainThread train _validate_image_counts WARNING At least one of your input folders contains fewer than 250 images. Results are likely to be poor.
11/15/2022 21:24:45 MainProcess MainThread train _validate_image_counts WARNING You need to provide a significant number of images to successfully train a Neural Network. Aim for between 500 - 5000 images per side.
11/15/2022 21:24:45 MainProcess MainThread preview_cv __init__ DEBUG Initializing: PreviewBuffer
11/15/2022 21:24:45 MainProcess MainThread preview_cv __init__ DEBUG Initialized: PreviewBuffer
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initializing PreviewTk (parent: 'None')
11/15/2022 21:24:45 MainProcess preview preview_cv __init__ DEBUG Initializing PreviewTk parent (triggers: {'toggle_mask': <threading.Event object at 0x00000268DF24EC10>, 'refresh': <threading.Event object at 0x00000268EA541A90>, 'save': <threading.Event object at 0x00000268EA549700>, 'quit': <threading.Event object at 0x00000268EA5A8700>, 'shutdown': <threading.Event object at 0x00000268EA5A8A60>})
11/15/2022 21:24:45 MainProcess preview preview_cv __init__ DEBUG Initialized PreviewTk parent
11/15/2022 21:24:45 MainProcess MainThread train __init__ DEBUG Initialized Train
11/15/2022 21:24:45 MainProcess MainThread train process DEBUG Starting Training Process
11/15/2022 21:24:45 MainProcess MainThread train process INFO Training data directory: C:\Users\Admin\Desktop\faces
11/15/2022 21:24:45 MainProcess MainThread train _start_thread DEBUG Launching Trainer thread
11/15/2022 21:24:45 MainProcess MainThread multithreading __init__ DEBUG Initializing MultiThread: (target: '_training', thread_count: 1)
11/15/2022 21:24:45 MainProcess MainThread multithreading __init__ DEBUG Initialized MultiThread: '_training'
11/15/2022 21:24:45 MainProcess MainThread multithreading start DEBUG Starting thread(s): '_training'
11/15/2022 21:24:45 MainProcess MainThread multithreading start DEBUG Starting thread 1 of 1: '_training'
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initializing _Taskbar (parent: '.!frame', taskbar: None)
11/15/2022 21:24:45 MainProcess preview preview_tk _add_scale_combo DEBUG Adding scale combo
11/15/2022 21:24:45 MainProcess MainThread multithreading start DEBUG Started all threads '_training': 1
11/15/2022 21:24:45 MainProcess MainThread train _start_thread DEBUG Launched Trainer thread
11/15/2022 21:24:45 MainProcess MainThread train _output_startup_info DEBUG Launching Monitor
11/15/2022 21:24:45 MainProcess MainThread train _output_startup_info INFO ===================================================
11/15/2022 21:24:45 MainProcess MainThread train _output_startup_info INFO Starting
11/15/2022 21:24:45 MainProcess MainThread train _output_startup_info INFO Using live preview
11/15/2022 21:24:45 MainProcess MainThread train _output_startup_info INFO ===================================================
11/15/2022 21:24:45 MainProcess preview preview_tk _add_scale_combo DEBUG Added scale combo: '.!frame.!frame.!combobox'
11/15/2022 21:24:45 MainProcess preview preview_tk _add_scale_slider DEBUG Adding scale slider
11/15/2022 21:24:45 MainProcess preview preview_tk _add_scale_slider DEBUG Added scale slider: '.!frame.!frame.!scale'
11/15/2022 21:24:45 MainProcess preview preview_tk _add_interpolator_radio DEBUG Adding nearest_neighbour radio button
11/15/2022 21:24:45 MainProcess preview preview_tk _add_interpolator_radio DEBUG Added .!frame.!frame.!frame.!radiobutton radio button
11/15/2022 21:24:45 MainProcess preview preview_tk _add_interpolator_radio DEBUG Adding bicubic radio button
11/15/2022 21:24:45 MainProcess preview preview_tk _add_interpolator_radio DEBUG Added .!frame.!frame.!frame.!radiobutton2 radio button
11/15/2022 21:24:45 MainProcess preview preview_tk _add_save_button DEBUG Adding save button
11/15/2022 21:24:45 MainProcess preview preview_tk _add_save_button DEBUG Added save burron: '.!frame.!frame.!button'
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initialized _Taskbar ('<lib.training.preview_tk._Taskbar object at 0x00000268EA5A8BE0>')
11/15/2022 21:24:45 MainProcess preview preview_tk _get_geometry DEBUG Obtaining screen geometry
11/15/2022 21:24:45 MainProcess preview preview_tk _get_geometry DEBUG Obtained screen geometry: (1920, 1080)
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initializing _PreviewCanvas (parent: '.!frame', scale_var: PY_VAR1, screen_dimensions: (1920, 1080))
11/15/2022 21:24:45 MainProcess preview preview_tk _configure_scrollbars DEBUG Configuring scrollbars
11/15/2022 21:24:45 MainProcess preview preview_tk _configure_scrollbars DEBUG Configured scrollbars. x: '.!frame.!frame2.!scrollbar', y: '.!frame.!frame2.!scrollbar2'
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initialized _PreviewCanvas ('.!frame.!frame2.!_previewcanvas')
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initializing _Image: (save_variable: PY_VAR0, is_standalone: True)
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initialized _Image
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initializing _Bindings (canvas: '.!frame.!frame2.!_previewcanvas', taskbar: '<lib.training.preview_tk._Taskbar object at 0x00000268EA5A8BE0>', image: '<lib.training.preview_tk._Image object at 0x00000268EA5D78E0>')
11/15/2022 21:24:45 MainProcess preview preview_tk _set_mouse_bindings DEBUG Binding mouse events
11/15/2022 21:24:45 MainProcess preview preview_tk _set_mouse_bindings DEBUG Bound mouse events
11/15/2022 21:24:45 MainProcess preview preview_tk _set_key_bindings DEBUG Binding key events
11/15/2022 21:24:45 MainProcess preview preview_tk _set_key_bindings DEBUG Bound key events
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initialized _Bindings
11/15/2022 21:24:45 MainProcess preview preview_tk _process_triggers DEBUG Processing triggers
11/15/2022 21:24:45 MainProcess preview preview_tk _process_triggers DEBUG Adding trigger for key: 'm'
11/15/2022 21:24:45 MainProcess preview preview_tk _process_triggers DEBUG Adding trigger for key: 'r'
11/15/2022 21:24:45 MainProcess preview preview_tk _process_triggers DEBUG Adding trigger for key: 's'
11/15/2022 21:24:45 MainProcess preview preview_tk _process_triggers DEBUG Adding trigger for key: 'Return'
11/15/2022 21:24:45 MainProcess preview preview_tk _process_triggers DEBUG Processed triggers
11/15/2022 21:24:45 MainProcess preview preview_tk pack DEBUG Packing master frame: (args: (), kwargs: {'fill': 'both', 'expand': True})
11/15/2022 21:24:45 MainProcess preview preview_tk _output_helptext INFO ---------------------------------------------------
11/15/2022 21:24:45 MainProcess preview preview_tk _output_helptext INFO Preview key bindings:
11/15/2022 21:24:45 MainProcess preview preview_tk _output_helptext INFO Zoom: +/-
11/15/2022 21:24:45 MainProcess preview preview_tk _output_helptext INFO Toggle Zoom Mode: i
11/15/2022 21:24:45 MainProcess preview preview_tk _output_helptext INFO Move: arrow keys
11/15/2022 21:24:45 MainProcess preview preview_tk _output_helptext INFO Save Preview: Ctrl+s
11/15/2022 21:24:45 MainProcess preview preview_tk _output_helptext INFO ---------------------------------------------------
11/15/2022 21:24:45 MainProcess preview preview_tk __init__ DEBUG Initialized PreviewTk
11/15/2022 21:24:45 MainProcess preview preview_cv _launch DEBUG Launching PreviewTk
11/15/2022 21:24:45 MainProcess preview preview_cv _launch DEBUG Waiting for preview image
11/15/2022 21:24:46 MainProcess _training train _training DEBUG Commencing Training
11/15/2022 21:24:46 MainProcess _training train _training INFO Loading data, this may take a while...
11/15/2022 21:24:46 MainProcess _training train _load_model DEBUG Loading Model
11/15/2022 21:24:46 MainProcess _training utils get_folder DEBUG Requested path: 'C:\Users\Admin\Desktop\faces'
11/15/2022 21:24:46 MainProcess _training utils get_folder DEBUG Returning: 'C:\Users\Admin\Desktop\faces'
11/15/2022 21:24:46 MainProcess _training plugin_loader _import INFO Loading Model from Original plugin...
11/15/2022 21:24:46 MainProcess _training multithreading run DEBUG Error in thread (_training): No module named 'decorator'
11/15/2022 21:24:46 MainProcess MainThread train _monitor DEBUG Thread error detected
11/15/2022 21:24:46 MainProcess MainThread train shutdown DEBUG Sending shutdown to preview viewer
11/15/2022 21:24:46 MainProcess MainThread train _monitor DEBUG Closed Monitor
11/15/2022 21:24:46 MainProcess MainThread train _end_thread DEBUG Ending Training thread
11/15/2022 21:24:46 MainProcess MainThread train _end_thread CRITICAL Error caught! Exiting...
11/15/2022 21:24:46 MainProcess MainThread multithreading join DEBUG Joining Threads: '_training'
11/15/2022 21:24:46 MainProcess MainThread multithreading join DEBUG Joining Thread: '_training'
11/15/2022 21:24:46 MainProcess MainThread multithreading join ERROR Caught exception in thread: '_training'
Traceback (most recent call last):
File "C:\Users\Admin\faceswap\lib\cli\launcher.py", line 217, in execute_script
process.process()
File "C:\Users\Admin\faceswap\scripts\train.py", line 218, in process
self._end_thread(thread, err)
File "C:\Users\Admin\faceswap\scripts\train.py", line 258, in _end_thread
thread.join()
File "C:\Users\Admin\faceswap\lib\multithreading.py", line 217, in join
raise thread.err[1].with_traceback(thread.err[2])
File "C:\Users\Admin\faceswap\lib\multithreading.py", line 96, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Admin\faceswap\scripts\train.py", line 280, in _training
raise err
File "C:\Users\Admin\faceswap\scripts\train.py", line 268, in _training
model = self._load_model()
File "C:\Users\Admin\faceswap\scripts\train.py", line 292, in _load_model
model: "ModelBase" = PluginLoader.get_model(self._args.trainer)(
File "C:\Users\Admin\faceswap\plugins\plugin_loader.py", line 131, in get_model
return PluginLoader._import("train.model", name, disable_logging)
File "C:\Users\Admin\faceswap\plugins\plugin_loader.py", line 197, in _import
module = import_module(mod)
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\Admin\faceswap\plugins\train\model\original.py", line 11, in <module>
from ._base import KerasModel, ModelBase
File "C:\Users\Admin\faceswap\plugins\train\model\_base\__init__.py", line 4, in <module>
from .model import get_all_sub_models, KerasModel, ModelBase # noqa
File "C:\Users\Admin\faceswap\plugins\train\model\_base\model.py", line 23, in <module>
from .settings import Loss, Optimizer, Settings
File "C:\Users\Admin\faceswap\plugins\train\model\_base\settings.py", line 37, in <module>
from lib.model.autoclip import AutoClipper # pylint:disable=ungrouped-imports
File "C:\Users\Admin\faceswap\lib\model\autoclip.py", line 8, in <module>
import tensorflow_probability as tfp
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\__init__.py", line 75, in <module>
from tensorflow_probability.python import * # pylint: disable=wildcard-import
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\__init__.py", line 21, in <module>
from tensorflow_probability.python import bijectors
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\bijectors\__init__.py", line 23, in <module>
from tensorflow_probability.python.bijectors.absolute_value import AbsoluteValue
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\bijectors\absolute_value.py", line 23, in <module>
from tensorflow_probability.python.bijectors import bijector
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\bijectors\bijector.py", line 33, in <module>
from tensorflow_probability.python.internal import distribution_util
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\internal\distribution_util.py", line 29, in <module>
from tensorflow_probability.python.internal import prefer_static
File "C:\Users\Admin\anaconda3\envs\faceswap\lib\site-packages\tensorflow_probability\python\internal\prefer_static.py", line 22, in <module>
import decorator
ModuleNotFoundError: No module named 'decorator'
============ System Information ============
encoding: cp1252
git_branch: Not Found
git_commits: Not Found
gpu_cuda: No global version found. Check Conda packages for Conda Cuda
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: NVIDIA GeForce GTX 1070
gpu_devices_active: GPU_0
gpu_driver: 511.79
gpu_vram: GPU_0: 8192MB
os_machine: AMD64
os_platform: Windows-10-10.0.19044-SP0
os_release: 10
py_command: C:\Users\Admin\faceswap\faceswap.py train -A C:/Users/Admin/Desktop/faces/A -B C:/Users/Admin/Desktop/faces/B -m C:/Users/Admin/Desktop/faces -t original -bs 16 -it 1180000 -D default -s 250 -ss 25000 -p -L INFO -gui
py_conda_version: conda 4.13.0
py_implementation: CPython
py_version: 3.9.13
py_virtual_env: True
sys_cores: 8
sys_processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
sys_ram: Total: 32720MB, Available: 21423MB, Used: 11296MB, Free: 21423MB
=============== Pip Packages ===============
absl-py==1.3.0
astunparse==1.6.3
cachetools==5.2.0
certifi==2022.9.24
charset-normalizer==2.1.1
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work
contourpy @ file:///D:/bld/contourpy_1667248269651/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
fastcluster @ file:///D:/bld/fastcluster_1667859058636/work
ffmpy @ file:///home/conda/feedstock_root/build_artifacts/ffmpy_1659474992694/work
flatbuffers==1.12
fonttools @ file:///D:/bld/fonttools_1666827219016/work
gast==0.4.0
google-auth==2.14.1
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.50.0
h5py==3.7.0
idna==3.4
imageio @ file:///home/conda/feedstock_root/build_artifacts/imageio_1663572338894/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1649960641006/work
importlib-metadata==5.0.0
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1663332044897/work
keras==2.9.0
Keras-Preprocessing==1.1.2
kiwisolver @ file:///D:/bld/kiwisolver_1666805897768/work
libclang==14.0.6
Markdown==3.4.1
MarkupSafe==2.1.1
matplotlib @ file:///D:/bld/matplotlib-suite_1667505027923/work
munkres==1.1.4
numexpr @ file:///D:/bld/numexpr_1666816820577/work
numpy @ file:///D:/bld/numpy_1666788367701/work
nvidia-ml-py @ file:///home/conda/feedstock_root/build_artifacts/nvidia-ml-py_1664523937022/work
oauthlib==3.2.2
opencv-python==4.6.0.66
opt-einsum==3.3.0
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1637239678211/work
Pillow @ file:///D:/bld/pillow_1666920708409/work
ply==3.11
protobuf==3.19.6
psutil @ file:///D:/bld/psutil_1667886042967/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1652235407899/work
PyQt5==5.15.7
PyQt5-sip @ file:///D:/bld/pyqt-split_1666830111760/work/pyqt_sip
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
pywin32==304
pywinpty @ file:///D:/bld/pywinpty_1643992546220/work/target/wheels/pywinpty-2.0.2-cp39-none-win_amd64.whl
requests==2.28.1
requests-oauthlib==1.3.1
rsa==4.9
scikit-learn @ file:///D:/bld/scikit-learn_1666884817551/work
scipy==1.9.3
sip @ file:///D:/bld/sip_1667565633471/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
tensorboard==2.9.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow-estimator==2.9.0
tensorflow-gpu==2.9.2
tensorflow-io-gcs-filesystem==0.27.0
tensorflow-probability==0.7.0
termcolor==2.1.0
threadpoolctl @ file:///home/conda/feedstock_root/build_artifacts/threadpoolctl_1643647933166/work
toml @ file:///home/conda/feedstock_root/build_artifacts/toml_1604308577558/work
tornado @ file:///D:/bld/tornado_1666788767305/work
tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1662214488106/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1665144421445/work
unicodedata2 @ file:///D:/bld/unicodedata2_1667240049903/work
urllib3==1.26.12
Werkzeug==2.2.2
wrapt==1.14.1
zipp==3.10.0
============== Conda Packages ==============
# packages in environment at C:\Users\Admin\anaconda3\envs\faceswap:
#
# Name Version Build Channel
absl-py 1.3.0 pypi_0 pypi
aom 3.5.0 h63175ca_0 conda-forge
astunparse 1.6.3 pypi_0 pypi
brotli 1.0.9 hcfcfb64_8 conda-forge
brotli-bin 1.0.9 hcfcfb64_8 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
ca-certificates 2022.9.24 h5b45459_0 conda-forge
cachetools 5.2.0 pypi_0 pypi
certifi 2022.9.24 pyhd8ed1ab_0 conda-forge
charset-normalizer 2.1.1 pypi_0 pypi
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
contourpy 1.0.6 py39h1f6ef14_0 conda-forge
cudatoolkit 11.2.2 h933977f_10 conda-forge
cudnn 8.1.0.77 h3e0f4f4_0 conda-forge
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
expat 2.5.0 h1537add_0 conda-forge
fastcluster 1.2.6 py39h2ba5b7c_2 conda-forge
ffmpeg 5.1.2 gpl_h6a9407d_103 conda-forge
ffmpy 0.3.0 pyhb6f538c_0 conda-forge
flatbuffers 1.12 pypi_0 pypi
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 hab24e00_0 conda-forge
fontconfig 2.14.1 hbde0cde_0 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
fonttools 4.38.0 py39ha55989b_1 conda-forge
freetype 2.12.1 h546665d_0 conda-forge
gast 0.4.0 pypi_0 pypi
gettext 0.21.1 h5728263_0 conda-forge
git 2.38.1 h57928b3_1 conda-forge
glib 2.74.1 h12be248_1 conda-forge
glib-tools 2.74.1 h12be248_1 conda-forge
google-auth 2.14.1 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.50.0 pypi_0 pypi
gst-plugins-base 1.21.1 h001b923_1 conda-forge
gstreamer 1.21.1 h6b5321d_1 conda-forge
h5py 3.7.0 pypi_0 pypi
icu 70.1 h0e60522_0 conda-forge
idna 3.4 pypi_0 pypi
imageio 2.22.0 pyhfa7a67d_0 conda-forge
imageio-ffmpeg 0.4.7 pyhd8ed1ab_0 conda-forge
importlib-metadata 5.0.0 pypi_0 pypi
intel-openmp 2022.1.0 h57928b3_3787 conda-forge
joblib 1.2.0 pyhd8ed1ab_0 conda-forge
jpeg 9e h8ffe710_2 conda-forge
keras 2.9.0 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.4.4 py39h1f6ef14_1 conda-forge
krb5 1.19.3 h1176d77_0 conda-forge
lcms2 2.14 h90d422f_0 conda-forge
lerc 4.0.0 h63175ca_0 conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge
libbrotlidec 1.0.9 hcfcfb64_8 conda-forge
libbrotlienc 1.0.9 hcfcfb64_8 conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libclang 14.0.6 pypi_0 pypi
libclang13 15.0.4 default_h77d9078_0 conda-forge
libdeflate 1.14 hcfcfb64_0 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libglib 2.74.1 he8f3873_1 conda-forge
libiconv 1.17 h8ffe710_0 conda-forge
liblapack 3.9.0 16_win64_mkl conda-forge
libogg 1.3.4 h8ffe710_1 conda-forge
libpng 1.6.38 h19919ed_0 conda-forge
libsqlite 3.39.4 hcfcfb64_0 conda-forge
libtiff 4.4.0 h8e97e67_4 conda-forge
libvorbis 1.3.7 h0e60522_0 conda-forge
libwebp-base 1.2.4 h8ffe710_0 conda-forge
libxcb 1.13 hcd874cb_1004 conda-forge
libxml2 2.10.3 hc3477c8_0 conda-forge
libzlib 1.2.13 hcfcfb64_4 conda-forge
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
markdown 3.4.1 pypi_0 pypi
markupsafe 2.1.1 pypi_0 pypi
matplotlib 3.6.2 py39hcbf5309_0 conda-forge
matplotlib-base 3.6.2 py39haf65ace_0 conda-forge
mkl 2022.1.0 h6a75c08_874 conda-forge
msys2-conda-epoch 20160418 1 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
numexpr 2.8.3 mkl_py39h013c7a2_1 conda-forge
numpy 1.23.4 py39hbccbffa_1 conda-forge
nvidia-ml-py 11.515.75 pyhd8ed1ab_0 conda-forge
oauthlib 3.2.2 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openh264 2.3.1 h63175ca_1 conda-forge
openjpeg 2.5.0 hc9384bd_1 conda-forge
openssl 1.1.1s hcfcfb64_0 conda-forge
opt-einsum 3.3.0 pypi_0 pypi
packaging 21.3 pyhd8ed1ab_0 conda-forge
pcre2 10.40 h17e33f8_0 conda-forge
pillow 9.2.0 py39h595c93f_3 conda-forge
pip 22.3.1 pyhd8ed1ab_0 conda-forge
ply 3.11 py_1 conda-forge
protobuf 3.19.6 pypi_0 pypi
psutil 5.9.4 py39ha55989b_0 conda-forge
pthread-stubs 0.4 hcd874cb_1001 conda-forge
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge
pyqt 5.15.7 py39hb77abff_2 conda-forge
pyqt5-sip 12.11.0 py39h99910a6_2 conda-forge
python 3.9.13 h9a09f29_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.9 2_cp39 conda-forge
pywin32 304 py39h99910a6_2 conda-forge
pywinpty 2.0.2 py39h99910a6_0 conda-forge
qt-main 5.15.6 h9c3277a_1 conda-forge
requests 2.28.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-learn 1.1.3 py39h6fe01c0_1 conda-forge
scipy 1.9.3 py39hfbf2dce_1 conda-forge
setuptools 65.5.1 pyhd8ed1ab_0 conda-forge
sip 6.7.4 py39h99910a6_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.39.4 hcfcfb64_0 conda-forge
svt-av1 1.3.0 h63175ca_0 conda-forge
tbb 2021.6.0 h91493d7_1 conda-forge
tensorboard 2.9.1 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
tensorflow-estimator 2.9.0 pypi_0 pypi
tensorflow-gpu 2.9.2 pypi_0 pypi
tensorflow-io-gcs-filesystem 0.27.0 pypi_0 pypi
tensorflow-probability 0.7 py_1 conda-forge
termcolor 2.1.0 pypi_0 pypi
threadpoolctl 3.1.0 pyh8a188c0_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
toml 0.10.2 pyhd8ed1ab_0 conda-forge
tornado 6.2 py39ha55989b_1 conda-forge
tqdm 4.64.1 pyhd8ed1ab_0 conda-forge
typing-extensions 4.4.0 hd8ed1ab_0 conda-forge
typing_extensions 4.4.0 pyha770c72_0 conda-forge
tzdata 2022f h191b570_0 conda-forge
ucrt 10.0.22621.0 h57928b3_0 conda-forge
unicodedata2 15.0.0 py39ha55989b_0 conda-forge
urllib3 1.26.12 pypi_0 pypi
vc 14.3 h3d8a991_9 conda-forge
vs2015_runtime 14.32.31332 h1d6e394_9 conda-forge
werkzeug 2.2.2 pypi_0 pypi
wheel 0.38.3 pyhd8ed1ab_0 conda-forge
winpty 0.4.3 4 conda-forge
wrapt 1.14.1 pypi_0 pypi
x264 1!164.3095 h8ffe710_2 conda-forge
x265 3.5 h2d74725_3 conda-forge
xorg-libxau 1.0.9 hcd874cb_0 conda-forge
xorg-libxdmcp 1.1.3 hcd874cb_0 conda-forge
xz 5.2.6 h8d14728_0 conda-forge
zipp 3.10.0 pypi_0 pypi
zstd 1.5.2 h7755175_4 conda-forge
================= Configs ==================
--------- .faceswap ---------
backend: nvidia
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
erosion_top: 0.0
erosion_bottom: 0.0
erosion_left: 0.0
erosion_right: 0.0
[scaling.sharpen]
method: none
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
separate_mask: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
separate_mask: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
aligner_min_scale: 0.07
aligner_max_scale: 2.0
aligner_distance: 22.5
aligner_roll: 45.0
filter_refeed: True
save_filtered: False
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
scalefactor: 0.709
batch-size: 8
cpu: True
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.bisenet_fp]
batch-size: 8
cpu: False
weights: faceswap
include_ears: False
include_hair: False
include_glasses: True
[mask.custom]
batch-size: 8
centering: face
fill: False
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
[recognition.vgg_face2]
batch-size: 16
cpu: False
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
centering: face
coverage: 87.5
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
epsilon_exponent: -7
autoclip: False
reflect_padding: False
allow_growth: False
mixed_precision: False
nan_protection: True
convert_batchsize: 16
[global.loss]
loss_function: ssim
loss_function_2: mse
loss_weight_2: 100
loss_function_3: none
loss_weight_3: 0
loss_function_4: none
loss_weight_4: 0
mask_loss_function: mse
eye_multiplier: 3
mouth_multiplier: 2
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.dfaker]
output_size: 128
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.phaze_a]
output_size: 128
shared_fc: none
enable_gblock: True
split_fc: True
split_gblock: False
split_decoders: False
enc_architecture: fs_original
enc_scaling: 7
enc_load_weights: True
bottleneck_type: dense
bottleneck_norm: none
bottleneck_size: 1024
bottleneck_in_encoder: True
fc_depth: 1
fc_min_filters: 1024
fc_max_filters: 1024
fc_dimensions: 4
fc_filter_slope: -0.5
fc_dropout: 0.0
fc_upsampler: upsample2d
fc_upsamples: 1
fc_upsample_filters: 512
fc_gblock_depth: 3
fc_gblock_min_nodes: 512
fc_gblock_max_nodes: 512
fc_gblock_filter_slope: -0.5
fc_gblock_dropout: 0.0
dec_upscale_method: subpixel
dec_upscales_in_fc: 0
dec_norm: none
dec_min_filters: 64
dec_max_filters: 512
dec_slope_mode: full
dec_filter_slope: -0.45
dec_res_blocks: 1
dec_output_kernel: 5
dec_gaussian: True
dec_skip_last_residual: True
freeze_layers: keras_encoder
load_layers: encoder
fs_original_depth: 4
fs_original_min_filters: 128
fs_original_max_filters: 1024
fs_original_use_alt: False
mobilenet_width: 1.0
mobilenet_depth: 1
mobilenet_dropout: 0.001
mobilenet_minimalistic: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
```
|
closed
|
2022-11-15T21:27:36Z
|
2022-11-16T01:07:12Z
|
https://github.com/deepfakes/faceswap/issues/1282
|
[] |
Pytoolbox
| 1
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 574
|
Instruction fine-tuning: AttributeError still occurs even after installing the peft version specified in the wiki
|
After merging the chinese alpaca 7b weights, the script is configured as follows:


(myenv) root@autodl-container-cc0c11a652-b6d7390e:/Chinese-LLaMA-Alpaca/scripts/training# bash run_sft.sh
Traceback (most recent call last):
File "/root/miniconda3/envs/myenv/bin/torchrun", line 8, in <module>
sys.exit(main())
^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 237, in launch_agent
result = agent.run()
^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run
result = self._invoke_run(role)
^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/api.py", line 844, in _invoke_run
self._initialize_workers(self._worker_group)
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/api.py", line 681, in _initialize_workers
worker_ids = self._start_workers(worker_group)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/local_elastic_agent.py", line 271, in _start_workers
self._pcontext = start_processes(
^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/__init__.py", line 207, in start_processes
redirs = to_map(redirects, nprocs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/envs/myenv/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 162, in to_map
map[i] = val_or_map.get(i, Std.NONE)
^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'get'
run_sft.sh: line 56: --lora_dropout: command not found
run_sft.sh: line 61: --ddp_find_unused_parameters: command not found
- [*] **Base model**: Alpaca (7B)
- [*] **Operating system**: Linux
- [*] **Issue category**: Model training and fine-tuning
- [*] (Required) Since the related dependencies are updated frequently, please make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [*] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, and found no similar problem or solution
|
closed
|
2023-06-12T08:44:49Z
|
2023-06-13T00:42:39Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/574
|
[] |
HuXinjing
| 6
|
ray-project/ray
|
data-science
| 50672
|
[core] Split giant ray core C++ targets into small ones (raylet client)
|
This is a sub-issue of https://github.com/ray-project/ray/issues/50586 to split the raylet client bazel target.
- [x] Split out `raylet_client_connection_lib` from the `raylet_client_lib` target.
- [x] Flatten dependencies related to `src/ray/common` and `src/ray/util`.
|
closed
|
2025-02-17T23:33:00Z
|
2025-02-18T13:26:40Z
|
https://github.com/ray-project/ray/issues/50672
|
[] |
rueian
| 0
|
huggingface/datasets
|
tensorflow
| 7346
|
OSError: Invalid flatbuffers message.
|
### Describe the bug
When loading large 2D arrays (1000 × 1152) in large numbers (2,000 arrays per file in this case) with `load_dataset`, the error message `OSError: Invalid flatbuffers message` is reported.
When only 300 arrays of this size (1000 × 1152) are stored per file, they can be loaded correctly.
When 2,000 2D arrays are stored in each file, about 100 files are generated, each with a file size of about 5-6 GB. But when 300 2D arrays are stored in each file, **about 600 files are generated, which is too many files**.
### Steps to reproduce the bug
error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[2], line 4
1 from datasets import Dataset
2 from datasets import load_dataset
----> 4 real_dataset = load_dataset("arrow", data_files='tensorData/real_ResidueTensor/*', split="train")#.with_format("torch") # , split="train"
5 # sim_dataset = load_dataset("arrow", data_files='tensorData/sim_ResidueTensor/*', split="train").with_format("torch")
6 real_dataset
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/load.py:2151, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2148 return builder_instance.as_streaming_dataset(split=split)
2150 # Download and prepare data
-> 2151 builder_instance.download_and_prepare(
2152 download_config=download_config,
2153 download_mode=download_mode,
2154 verification_mode=verification_mode,
2155 num_proc=num_proc,
2156 storage_options=storage_options,
2157 )
2159 # Build dataset for splits
2160 keep_in_memory = (
2161 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2162 )
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:924, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
922 if num_proc is not None:
923 prepare_split_kwargs["num_proc"] = num_proc
--> 924 self._download_and_prepare(
925 dl_manager=dl_manager,
926 verification_mode=verification_mode,
927 **prepare_split_kwargs,
928 **download_and_prepare_kwargs,
929 )
930 # Sync info
931 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:978, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
976 split_dict = SplitDict(dataset_name=self.dataset_name)
977 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 978 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
980 # Checksums verification
981 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/packaged_modules/arrow/arrow.py:47, in Arrow._split_generators(self, dl_manager)
45 with open(file, "rb") as f:
46 try:
---> 47 reader = pa.ipc.open_stream(f)
48 except pa.lib.ArrowInvalid:
49 reader = pa.ipc.open_file(f)
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:190, in open_stream(source, options, memory_pool)
171 def open_stream(source, *, options=None, memory_pool=None):
172 """
173 Create reader for Arrow streaming format.
174
(...)
188 A reader for the given source
189 """
--> 190 return RecordBatchStreamReader(source, options=options,
191 memory_pool=memory_pool)
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:52, in RecordBatchStreamReader.__init__(self, source, options, memory_pool)
50 def __init__(self, source, *, options=None, memory_pool=None):
51 options = _ensure_default_ipc_read_options(options)
---> 52 self._open(source, options=options, memory_pool=memory_pool)
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.pxi:1006, in pyarrow.lib._RecordBatchStreamReader._open()
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:155, in pyarrow.lib.pyarrow_internal_check_status()
File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:92, in pyarrow.lib.check_status()
OSError: Invalid flatbuffers message.
```
reproduce:Here is just an example result, the real 2D matrix is the output of the ESM large model, and the matrix size is approximate
```python
import numpy as np
import pyarrow as pa
random_arrays_list = [np.random.rand(1000, 1152) for _ in range(2000)]
table = pa.Table.from_pydict({
'tensor': [tensor.tolist() for tensor in random_arrays_list]
})
import pyarrow.feather as feather
feather.write_feather(table, 'test.arrow')
from datasets import load_dataset
dataset = load_dataset("arrow", data_files='test.arrow', split="train")
```
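For context, `feather.write_feather` emits the Arrow IPC *file* format, while `pa.ipc.open_stream` reads the IPC *stream* format. A minimal sketch of writing the stream format directly (my assumption of a possible workaround, shown with a tiny random table rather than the real ESM embeddings):
```python
# Hedged workaround sketch: write the table in Arrow IPC *stream* format,
# which pa.ipc.open_stream can read directly. Tiny random table for illustration.
import numpy as np
import pyarrow as pa

table = pa.Table.from_pydict({'tensor': [np.random.rand(4, 4).tolist() for _ in range(3)]})
with pa.OSFile('test_stream.arrow', 'wb') as sink:
    with pa.ipc.new_stream(sink, table.schema) as writer:
        writer.write_table(table)
```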
### Expected behavior
`load_dataset` should load the dataset normally, just as `feather.read_feather` does:
```python
import pyarrow.feather as feather
feather.read_feather('tensorData/real_ResidueTensor/real_tensor_1.arrow')
```
Also, `load_dataset("parquet", data_files='test.arrow', split="train")` works fine.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.26.5
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
|
closed
|
2024-12-25T11:38:52Z
|
2025-01-09T14:25:29Z
|
https://github.com/huggingface/datasets/issues/7346
|
[] |
antecede
| 3
|
OpenInterpreter/open-interpreter
|
python
| 914
|
--local with LM Studio on Windows, interpreter run via WSL
|
### Is your feature request related to a problem? Please describe.
I tried to run the interpreter with --local via WSL on Windows (Debian, Python 3.12, venv, pip).
Beforehand, I started the Windows version of LM Studio and ran the server there.
But the Linux side could not reach the Windows one,
so it aborted.
### Describe the solution you'd like
I suppose the fix is some general Windows-style port-forwarding/firewall rule somewhere, or some special localhost notation that actually makes the connection work.
I don't know much about either of the Windows parts, as I don't know many Windows commands.
### Describe alternatives you've considered
_No response_
### Additional context
The Windows localhost is simply not accessible (or something like that) from WSL, so finding out how to create such a rule seems to be my solution.
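A minimal sketch of one way to probe this from inside WSL2 (an assumption on my part, not something from the original report: it relies on the Windows host being reachable at the nameserver address in /etc/resolv.conf, on LM Studio's OpenAI-compatible server listening on its default port 1234, and on the Windows firewall allowing the connection):
```python
# Hypothetical probe from inside WSL2: find the Windows host address and query
# the LM Studio server. Assumes LM Studio's default port (1234) and that the
# Windows firewall permits inbound connections from the WSL network.
import re
import urllib.request

with open("/etc/resolv.conf") as f:
    win_host = re.search(r"nameserver\s+(\S+)", f.read()).group(1)

url = f"http://{win_host}:1234/v1/models"
print(urllib.request.urlopen(url).read().decode())
```
If this prints the model list, the same host/port could then be used as the API base for the local run instead of localhost.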
|
closed
|
2024-01-13T22:47:58Z
|
2024-01-14T11:48:01Z
|
https://github.com/OpenInterpreter/open-interpreter/issues/914
|
[
"Enhancement"
] |
fxmbsw7
| 4
|
mitmproxy/pdoc
|
api
| 2
|
Template files are searched for in the wrong location
|
I'm trying to use pdoc on a Debian Jessie machine installed two weeks ago, but I get this message:
```
$ pdoc --html some-module
Traceback (most recent call last):
File "/usr/local/bin/pdoc", line 487, in <module>
html_out(module)
File "/usr/local/bin/pdoc", line 334, in html_out
raise e
IOError: [Errno 2] No template at any of: /usr/share/pdoc/templates/html.mako, /usr/local/lib/python2.7/dist-packages/templates/html.mako
```
I installed python (2.7.6) and pip via apt-get:
```
sudo apt-get install python
sudo apt-get install python-pip
```
I installed pdoc via pip:
```
sudo pip install pdoc
```
I can see that the html.mako template file exists in the `/usr/local/share/pdoc` tree, but pdoc seems to look for it in `/usr/share/pdoc`. After a quick glance at the pdoc code, it seems to get the prefix from the `sys.prefix` attribute, which seems to have the value `/usr` on my computer. Apparently pip used another prefix when it installed the files.
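For illustration, here is a minimal check of the mismatch described above (the two paths are the ones reported in this issue; this is only a sketch of the situation, not code from pdoc itself):
```python
# Sketch of the prefix mismatch: pdoc derives its template directory from
# sys.prefix, while pip installed the data files under /usr/local.
import os
import sys

print(sys.prefix)                                                   # '/usr' on this machine
print(os.path.exists('/usr/share/pdoc/templates/html.mako'))        # False -> where pdoc looks
print(os.path.exists('/usr/local/share/pdoc/templates/html.mako'))  # True  -> where pip put it
```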
I don't think I have set up things in a weird way, but please tell me if you want me to try something. I would be happy to help debugging this, but I don't know where to start.
|
closed
|
2014-02-20T17:57:14Z
|
2018-06-01T11:52:47Z
|
https://github.com/mitmproxy/pdoc/issues/2
|
[] |
raek
| 13
|
vllm-project/vllm
|
pytorch
| 15340
|
[Bug][V0][Triton MLA][GGUF]: Deepseek R1 GGUF starts producing gibberish towards the end of a longer generation
|
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
vllm docker 0.8.1 openai server image
```
</details>
### 🐛 Describe the bug
When running inference with the DeepSeek R1 `Q3_K_M` GGUF quant, it starts to produce gibberish towards the end of a longer generation.
I have followed the directions in https://github.com/vllm-project/vllm/pull/13167#issue-2848824985 regarding the `--tokenizer` and `--hf-config-path` configuration.
I have tested various images, including the nightly and the most recent `0.8.1` release; the issue persists.
I would appreciate some direction on this, as vLLM is by far the fastest inference engine for GGUF on my 16x3090 config, but this bug stands in the way. @SzymonOzog mentioned he experienced a similar issue with the model overflowing and producing NaNs, but that got fixed (ref: https://github.com/vllm-project/vllm/pull/13167#issuecomment-2728111595).
Unfortunately I'm at a bit of a loss to fix this myself.
Run command:
```
networks:
vllm-dev:
external: true
name: br1
services:
vllm-dev:
image: vllm/vllm-openai:v0.8.1
runtime: nvidia
restart: unless-stopped
networks:
vllm-dev:
ipv4_address: 192.168.x.x
environment:
- HUGGING_FACE_HUB_TOKEN=${HUGGING_FACE_HUB_TOKEN}
- NVIDIA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
- VLLM_RPC_TIMEOUT=180000
- VLLM_PP_LAYER_PARTITION=31,30
- VLLM_WORKER_MULTIPROC_METHOD=spawn
ports:
- "8000:8000"
volumes:
- /mnt/user/appdata/models:/models
ipc: "host"
command: --swap-space 2
--model /models/dp-config/DeepSeek-R1-Q3_K_M.gguf \
--enable-reasoning --reasoning-parser deepseek_r1 \
--seed 3407 \
--served-model-name deepseek-ai/DeepSeek-R1 \
--hf-config-path /models/dp-v2/ --tokenizer /models/dp-v2/ \
--gpu-memory-utilization 0.945 \
--max-model-len 8192 \
--max-num-seqs 3 \
--trust-remote-code \
--tensor-parallel-size 8 \
--pipeline-parallel-size 2 \
--host 192.168.10.225 \
--port 8000 \
--enable-chunked-prefill=True
```
<details>
<summary>Run log</summary>
```
INFO 03-22 18:40:10 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:14 [api_server.py:977] vLLM API server version 0.8.1
INFO 03-22 18:40:14 [api_server.py:978] args: Namespace(host='192.168.10.225', port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key='b18766c98a9b8092dcb66033afabff4f', lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/models/dp-config/DeepSeek-R1-Q3_K_M.gguf', task='auto', tokenizer='/models/dp-v2/', hf_config_path='/models/dp-v2/', skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=8192, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=2, tensor_parallel_size=8, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=3407, swap_space=2.0, cpu_offload_gb=0, gpu_memory_utilization=0.945, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=3, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=True, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['deepseek-ai/DeepSeek-R1'], qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', 
worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=True, reasoning_parser='deepseek_r1', disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False)
WARNING 03-22 18:40:14 [utils.py:2079] Found ulimit of 40960 and failed to automatically increase with error current limit exceeds maximum limit. This can cause fd limit errors like `OSError: [Errno 24] Too many open files`. Consider increasing with ulimit -n
INFO 03-22 18:40:14 [config.py:208] Replacing legacy 'type' key with 'rope_type'
INFO 03-22 18:40:23 [config.py:583] This model supports multiple tasks: {'score', 'classify', 'generate', 'embed', 'reward'}. Defaulting to 'generate'.
WARNING 03-22 18:40:23 [config.py:662] gguf quantization is not fully optimized yet. The speed can be slower than non-quantized models.
WARNING 03-22 18:40:23 [arg_utils.py:1765] --quantization gguf is not supported by the V1 Engine. Falling back to V0.
INFO 03-22 18:40:23 [config.py:1515] Defaulting to use mp for distributed inference
INFO 03-22 18:40:23 [config.py:1693] Chunked prefill is enabled with max_num_batched_tokens=2048.
INFO 03-22 18:40:24 [llm_engine.py:241] Initializing a V0 LLM engine (v0.8.1) with config: model='/models/dp-config/DeepSeek-R1-Q3_K_M.gguf', speculative_config=None, tokenizer='/models/dp-v2/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=LoadFormat.GGUF, tensor_parallel_size=8, pipeline_parallel_size=2, disable_custom_all_reduce=False, quantization=gguf, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend='deepseek_r1'), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=3407, served_model_name=deepseek-ai/DeepSeek-R1, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[4,2,1],"max_capture_size":4}, use_cached_outputs=False,
WARNING 03-22 18:40:25 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 64 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 03-22 18:40:25 [cuda.py:190] Using Triton MLA backend.
WARNING 03-22 18:40:28 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
INFO 03-22 18:40:29 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:29 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:29 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:29 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:29 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:29 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
INFO 03-22 18:40:30 [__init__.py:256] Automatically detected platform cuda.
(VllmWorkerProcess pid=279) INFO 03-22 18:40:33 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=281) INFO 03-22 18:40:33 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=283) INFO 03-22 18:40:33 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=279) INFO 03-22 18:40:33 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=275) INFO 03-22 18:40:33 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=281) INFO 03-22 18:40:33 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=283) INFO 03-22 18:40:33 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=276) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=275) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=277) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=278) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=276) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=274) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=277) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=278) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=272) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=271) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=274) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=285) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=280) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=273) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=282) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=284) INFO 03-22 18:40:34 [multiproc_worker_utils.py:225] Worker ready; awaiting tasks
(VllmWorkerProcess pid=272) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=271) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=285) INFO 03-22 18:40:34 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=282) INFO 03-22 18:40:35 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=284) INFO 03-22 18:40:35 [cuda.py:190] Using Triton MLA backend.
(VllmWorkerProcess pid=279) WARNING 03-22 18:40:36 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=283) WARNING 03-22 18:40:38 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=281) WARNING 03-22 18:40:39 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=275) WARNING 03-22 18:40:39 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=276) WARNING 03-22 18:40:40 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=277) WARNING 03-22 18:40:40 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=273) WARNING 03-22 18:40:40 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=278) WARNING 03-22 18:40:40 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=285) WARNING 03-22 18:40:40 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=284) WARNING 03-22 18:40:40 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=272) WARNING 03-22 18:40:40 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=271) WARNING 03-22 18:40:41 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=282) WARNING 03-22 18:40:41 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=274) WARNING 03-22 18:40:41 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=280) WARNING 03-22 18:40:41 [triton_decode_attention.py:44] The following error message 'operation scheduled before its operands' can be ignored.
(VllmWorkerProcess pid=272) INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=272) INFO 03-22 18:40:45 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=284) INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=275) INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=284) INFO 03-22 18:40:45 [pynccl.py:69] vLLM is using nccl==2.21.5
INFO 03-22 18:40:45 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=275) INFO 03-22 18:40:45 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=283) INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=277) INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=283) INFO 03-22 18:40:45 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=282) INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=285) INFO 03-22 18:40:45 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=279) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=281) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=282) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=285) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=278) INFO 03-22 18:40:47 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3, 4, 5, 6, 7], buffer_handle=(7, 4194304, 6, 'psm_c8657217'), local_subscribe_addr='ipc:///tmp/59169403-03a0-4321-9215-d9317b8825e8', remote_subscribe_addr=None, remote_addr_ipv6=False)
(VllmWorkerProcess pid=271) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=273) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=274) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=272) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=275) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=276) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=277) WARNING 03-22 18:40:47 [custom_all_reduce.py:137] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 03-22 18:40:47 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[1, 2, 3, 4, 5, 6, 7], buffer_handle=(7, 4194304, 6, 'psm_4f24e55b'), local_subscribe_addr='ipc:///tmp/e6521145-fa0f-478e-896e-d5466e72273b', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
INFO 03-22 18:40:47 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=278) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=278) INFO 03-22 18:40:47 [pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=280) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=273) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=285) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=275) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=279) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=284) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=272) INFO 03-22 18:40:47 [utils.py:925] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=284) INFO 03-22 18:40:47 [parallel_state.py:967] rank 14 in world size 16 is assigned as DP rank 0, PP rank 1, TP rank 6
(VllmWorkerProcess pid=276) INFO 03-22 18:40:47 [parallel_state.py:967] rank 6 in world size 16 is assigned as DP rank 0, PP rank 0, TP rank 6
(VllmWorkerProcess pid=271) INFO 03-22 18:40:47 [parallel_state.py:967] rank 1 in world size 16 is assigned as DP rank 0, PP rank 0, TP rank 1
(VllmWorkerProcess pid=273) INFO 03-22 18:40:47 [parallel_state.py:967] rank 3 in world size 16 is assigned as DP rank 0, PP rank 0, TP rank 3
(VllmWorkerProcess pid=279) INFO 03-22 18:40:47 [parallel_state.py:967] rank 9 in world size 16 is assigned as DP rank 0, PP rank 1, TP rank 1
(VllmWorkerProcess pid=282) INFO 03-22 18:40:47 [parallel_state.py:967] rank 12 in world size 16 is assigned as DP rank 0, PP rank 1, TP rank 4
(VllmWorkerProcess pid=274) INFO 03-22 18:40:47 [parallel_state.py:967] rank 4 in world size 16 is assigned as DP rank 0, PP rank 0, TP rank 4
(VllmWorkerProcess pid=285) INFO 03-22 18:40:47 [parallel_state.py:967] rank 15 in world size 16 is assigned as DP rank 0, PP rank 1, TP rank 7
(VllmWorkerProcess pid=277) INFO 03-22 18:40:47 [parallel_state.py:967] rank 7 in world size 16 is assigned as DP rank 0, PP rank 0, TP rank 7
(VllmWorkerProcess pid=281) INFO 03-22 18:40:47 [parallel_state.py:967] rank 11 in world size 16 is assigned as DP rank 0, PP rank 1, TP rank 3
(VllmWorkerProcess pid=280) INFO 03-22 18:40:47 [parallel_state.py:967] rank 10 in world size 16 is assigned as DP rank 0, PP rank 1, TP rank 2
(VllmWorkerProcess pid=278) INFO 03-22 18:40:47 [model_runner.py:1110] Starting to load model /models/dp-config/DeepSeek-R1-Q3_K_M.gguf...
(VllmWorkerProcess pid=282) INFO 03-22 18:40:47 [model_runner.py:1110] Starting to load model /models/dp-config/DeepSeek-R1-Q3_K_M.gguf...
INFO 03-22 18:40:47 [model_runner.py:1110] Starting to load model /models/dp-config/DeepSeek-R1-Q3_K_M.gguf...
(VllmWorkerProcess pid=276) INFO 03-22 18:40:47 [model_runner.py:1110] Starting to load model /models/dp-config/DeepSeek-R1-Q3_K_M.gguf...
(VllmWorkerProcess pid=285) INFO 03-22 18:40:47 [model_runner.py:1110] Starting to load model /models/dp-config/DeepSeek-R1-Q3_K_M.gguf...
(VllmWorkerProcess pid=283) WARNING 03-22 18:41:18 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=279) WARNING 03-22 18:41:18 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=278) WARNING 03-22 18:41:18 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=285) WARNING 03-22 18:41:18 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=276) WARNING 03-22 18:41:18 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=273) WARNING 03-22 18:41:18 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
WARNING 03-22 18:41:19 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=282) WARNING 03-22 18:41:19 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=280) WARNING 03-22 18:41:19 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=275) WARNING 03-22 18:41:19 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=272) WARNING 03-22 18:41:20 [utils.py:169] The model class DeepseekV3ForCausalLM has not defined `packed_modules_mapping`, this may lead to incorrect mapping of quantized or ignored modules
(VllmWorkerProcess pid=275) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=275) return _nested.nested_tensor(
(VllmWorkerProcess pid=277) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=277) return _nested.nested_tensor(
(VllmWorkerProcess pid=271) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=276) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=273) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=273) return _nested.nested_tensor(
/opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
return _nested.nested_tensor(
(VllmWorkerProcess pid=279) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=284) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=282) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=285) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=283) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=280) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=279) return _nested.nested_tensor(
(VllmWorkerProcess pid=284) return _nested.nested_tensor(
(VllmWorkerProcess pid=278) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=282) return _nested.nested_tensor(
(VllmWorkerProcess pid=281) /opt/venv/lib/python3.12/site-packages/torch/nested/__init__.py:228: UserWarning: The PyTorch API of nested tensors is in prototype stage and will change in the near future. We recommend specifying layout=torch.jagged when constructing a nested tensor, as this layout receives active development, has better operator coverage, and works with torch.compile. (Triggered internally at /pytorch/aten/src/ATen/NestedTensorImpl.cpp:178.)
(VllmWorkerProcess pid=285) return _nested.nested_tensor(
(VllmWorkerProcess pid=283) return _nested.nested_tensor(
(VllmWorkerProcess pid=280) return _nested.nested_tensor(
(VllmWorkerProcess pid=278) return _nested.nested_tensor(
(VllmWorkerProcess pid=281) return _nested.nested_tensor(
(VllmWorkerProcess pid=276) INFO 03-22 18:52:51 [model_runner.py:1146] Model loading took 18.3843 GB and 724.014383 seconds
(VllmWorkerProcess pid=285) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 19.5904 GB and 724.529272 seconds
(VllmWorkerProcess pid=273) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 18.3843 GB and 724.529966 seconds
(VllmWorkerProcess pid=277) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 18.3843 GB and 724.544103 seconds
(VllmWorkerProcess pid=274) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 18.3843 GB and 724.542021 seconds
(VllmWorkerProcess pid=275) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 18.3843 GB and 724.558444 seconds
(VllmWorkerProcess pid=271) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 18.3843 GB and 724.543127 seconds
(VllmWorkerProcess pid=283) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 19.5904 GB and 724.573560 seconds
INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 18.3843 GB and 724.537170 seconds
(VllmWorkerProcess pid=282) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 19.5904 GB and 724.575245 seconds
(VllmWorkerProcess pid=272) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 18.3843 GB and 724.618766 seconds
(VllmWorkerProcess pid=281) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 19.5904 GB and 724.653382 seconds
(VllmWorkerProcess pid=280) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 19.5904 GB and 724.653049 seconds
(VllmWorkerProcess pid=279) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 19.5904 GB and 724.657291 seconds
(VllmWorkerProcess pid=278) INFO 03-22 18:52:52 [model_runner.py:1146] Model loading took 19.5904 GB and 724.662290 seconds
(VllmWorkerProcess pid=284) INFO 03-22 18:52:55 [model_runner.py:1146] Model loading took 19.5904 GB and 727.899927 seconds
(VllmWorkerProcess pid=280) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.52 seconds
(VllmWorkerProcess pid=280) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=280) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=279) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.54 seconds
(VllmWorkerProcess pid=279) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=279) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=284) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.58 seconds
(VllmWorkerProcess pid=284) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=284) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=283) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.59 seconds
(VllmWorkerProcess pid=283) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=283) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=282) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.58 seconds
(VllmWorkerProcess pid=282) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=282) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=281) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.58 seconds
(VllmWorkerProcess pid=281) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=281) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=285) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.59 seconds
(VllmWorkerProcess pid=285) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=285) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=278) INFO 03-22 18:53:21 [worker.py:267] Memory profiling takes 25.78 seconds
(VllmWorkerProcess pid=278) INFO 03-22 18:53:21 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=278) INFO 03-22 18:53:21 [worker.py:267] model weights take 19.59GiB; non_torch_memory takes 0.20GiB; PyTorch activation peak memory takes 0.82GiB; the rest of the memory reserved for KV Cache is 1.68GiB.
(VllmWorkerProcess pid=271) INFO 03-22 18:53:23 [worker.py:267] Memory profiling takes 27.79 seconds
(VllmWorkerProcess pid=271) INFO 03-22 18:53:23 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=271) INFO 03-22 18:53:23 [worker.py:267] model weights take 18.38GiB; non_torch_memory takes 0.15GiB; PyTorch activation peak memory takes 0.79GiB; the rest of the memory reserved for KV Cache is 2.96GiB.
(VllmWorkerProcess pid=273) INFO 03-22 18:53:23 [worker.py:267] Memory profiling takes 27.80 seconds
(VllmWorkerProcess pid=273) INFO 03-22 18:53:23 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=273) INFO 03-22 18:53:23 [worker.py:267] model weights take 18.38GiB; non_torch_memory takes 0.15GiB; PyTorch activation peak memory takes 0.79GiB; the rest of the memory reserved for KV Cache is 2.96GiB.
(VllmWorkerProcess pid=277) INFO 03-22 18:53:23 [worker.py:267] Memory profiling takes 27.81 seconds
(VllmWorkerProcess pid=277) INFO 03-22 18:53:23 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=277) INFO 03-22 18:53:23 [worker.py:267] model weights take 18.38GiB; non_torch_memory takes 0.15GiB; PyTorch activation peak memory takes 0.79GiB; the rest of the memory reserved for KV Cache is 2.96GiB.
INFO 03-22 18:53:23 [worker.py:267] Memory profiling takes 27.78 seconds
INFO 03-22 18:53:23 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
INFO 03-22 18:53:23 [worker.py:267] model weights take 18.38GiB; non_torch_memory takes 0.15GiB; PyTorch activation peak memory takes 0.79GiB; the rest of the memory reserved for KV Cache is 2.96GiB.
(VllmWorkerProcess pid=275) INFO 03-22 18:53:23 [worker.py:267] Memory profiling takes 27.82 seconds
(VllmWorkerProcess pid=275) INFO 03-22 18:53:23 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=274) INFO 03-22 18:53:23 [worker.py:267] model weights take 18.38GiB; non_torch_memory takes 0.15GiB; PyTorch activation peak memory takes 0.79GiB; the rest of the memory reserved for KV Cache is 2.96GiB.
(VllmWorkerProcess pid=276) INFO 03-22 18:53:23 [worker.py:267] Memory profiling takes 27.82 seconds
(VllmWorkerProcess pid=276) INFO 03-22 18:53:23 [worker.py:267] the current vLLM instance can use total_gpu_memory (23.58GiB) x gpu_memory_utilization (0.94) = 22.29GiB
(VllmWorkerProcess pid=276) INFO 03-22 18:53:23 [worker.py:267] model weights take 18.38GiB; non_torch_memory takes 0.15GiB; PyTorch activation peak memory takes 0.79GiB; the rest of the memory reserved for KV Cache is 2.96GiB.
INFO 03-22 18:53:23 [executor_base.py:111] # cuda blocks: 3258, # CPU blocks: 3758
INFO 03-22 18:53:23 [executor_base.py:116] Maximum concurrency for 8192 tokens per request: 6.36x
(VllmWorkerProcess pid=285) INFO 03-22 18:53:34 [model_runner.py:1442] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=282) INFO 03-22 18:53:35 [model_runner.py:1442] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=283) INFO 03-22 18:53:35 [model_runner.py:1442] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 03-22 18:53:38 [model_runner.py:1442] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
(VllmWorkerProcess pid=272) INFO 03-22 18:53:38 [model_runner.py:1442] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
Capturing CUDA graph shapes: 100%|██████████| 3/3 [00:09<00:00, 3.32s/it]
Capturing CUDA graph shapes: 100%|██████████| 3/3 [00:08<00:00, 2.67s/it]
(VllmWorkerProcess pid=280) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.17 GiB
Capturing CUDA graph shapes: 100%|██████████| 3/3 [00:02<00:00, 1.05it/s]
(VllmWorkerProcess pid=278) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 13 secs, took 0.17 GiB
(VllmWorkerProcess pid=283) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 13 secs, took 0.17 GiB
(VllmWorkerProcess pid=284) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 10 secs, took 0.17 GiB
(VllmWorkerProcess pid=279) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.17 GiB
(VllmWorkerProcess pid=282) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 13 secs, took 0.17 GiB
(VllmWorkerProcess pid=281) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 12 secs, took 0.17 GiB
(VllmWorkerProcess pid=285) INFO 03-22 18:53:48 [model_runner.py:1570] Graph capturing finished in 14 secs, took 0.17 GiB
(VllmWorkerProcess pid=276) INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.19 GiB
(VllmWorkerProcess pid=274) INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 12 secs, took 0.19 GiB
(VllmWorkerProcess pid=272) INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.19 GiB
(VllmWorkerProcess pid=275) INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.19 GiB
(VllmWorkerProcess pid=273) INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.19 GiB
(VllmWorkerProcess pid=277) INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.19 GiB
(VllmWorkerProcess pid=271) INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 12 secs, took 0.19 GiB
Capturing CUDA graph shapes: 100%|██████████| 3/3 [00:02<00:00, 1.01it/s]
INFO 03-22 18:53:49 [model_runner.py:1570] Graph capturing finished in 11 secs, took 0.19 GiB
INFO 03-22 18:53:49 [llm_engine.py:447] init engine (profile, create kv cache, warmup model) took 54.39 seconds
INFO 03-22 18:53:50 [serving_chat.py:115] Using default chat sampling params from model: {'temperature': 0.6, 'top_p': 0.95}
INFO 03-22 18:53:50 [serving_completion.py:61] Using default completion sampling params from model: {'temperature': 0.6, 'top_p': 0.95}
INFO 03-22 18:53:50 [api_server.py:1024] Starting vLLM API server on http://192.168.10.225:8000
INFO 03-22 18:53:50 [launcher.py:26] Available routes are:
INFO 03-22 18:53:50 [launcher.py:34] Route: /openapi.json, Methods: HEAD, GET
INFO 03-22 18:53:50 [launcher.py:34] Route: /docs, Methods: HEAD, GET
INFO 03-22 18:53:50 [launcher.py:34] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 03-22 18:53:50 [launcher.py:34] Route: /redoc, Methods: HEAD, GET
INFO 03-22 18:53:50 [launcher.py:34] Route: /health, Methods: GET
INFO 03-22 18:53:50 [launcher.py:34] Route: /load, Methods: GET
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO 03-22 18:54:25 [chat_utils.py:346] Detected the chat template content format to be 'string'. You can set `--chat-template-content-format` to override this.
INFO 03-22 18:54:25 [logger.py:39] Received request chatcmpl-39c6bdfde1e143c19c76dfe72ce3cc3e: prompt: "<|begin▁of▁sentence|><|User|>Show me a code snippet of a website's sticky header in CSS and JavaScript.<|Assistant|><think>\n", params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.6, top_p=0.95, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=8171, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 03-22 18:54:25 [async_llm_engine.py:211] Added request chatcmpl-39c6bdfde1e143c19c76dfe72ce3cc3e.
INFO: 192.168.1.64:44616 - "POST /v1/chat/completions HTTP/1.1" 200 OK
(VllmWorkerProcess pid=277) /opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
(VllmWorkerProcess pid=277) object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
[rank7]:[W322 18:54:27.377827544 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank15]:[W322 18:54:27.377957486 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
(VllmWorkerProcess pid=271) /opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
(VllmWorkerProcess pid=271) object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
(VllmWorkerProcess pid=275) /opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
(VllmWorkerProcess pid=273) /opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
(VllmWorkerProcess pid=272) /opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
(VllmWorkerProcess pid=275) object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
(VllmWorkerProcess pid=273) object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
(VllmWorkerProcess pid=272) object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
(VllmWorkerProcess pid=274) /opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
(VllmWorkerProcess pid=274) object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
(VllmWorkerProcess pid=276) /opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
(VllmWorkerProcess pid=276) object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
[rank1]:[W322 18:54:27.380129008 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank4]:[W322 18:54:27.380217129 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank2]:[W322 18:54:27.380218309 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank3]:[W322 18:54:27.380219399 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank5]:[W322 18:54:27.380248339 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank6]:[W322 18:54:27.380314470 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank9]:[W322 18:54:27.380311840 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank11]:[W322 18:54:27.380335620 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank12]:[W322 18:54:27.380335960 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank13]:[W322 18:54:27.380351460 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank10]:[W322 18:54:27.380370780 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank14]:[W322 18:54:27.380446631 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
/opt/venv/lib/python3.12/site-packages/vllm/distributed/parallel_state.py:408: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_new.cpp:1561.)
object_tensor = torch.frombuffer(pickle.dumps(obj), dtype=torch.uint8)
[rank8]:[W322 18:54:27.389643976 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
[rank0]:[W322 18:54:27.389699367 ProcessGroupNCCL.cpp:3436] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS=1 has no effect for point-to-point collectives. (function operator())
INFO 03-22 18:54:28 [async_llm_engine.py:223] Aborted request chatcmpl-39c6bdfde1e143c19c76dfe72ce3cc3e.
INFO 03-22 18:54:40 [metrics.py:481] Avg prompt throughput: 1.4 tokens/s, Avg generation throughput: 0.3 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 03-22 18:54:50 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 03-22 18:54:56 [logger.py:39] Received request chatcmpl-3b4a21965d0a4add83f94e8cd2d84d7d: prompt: "<|begin▁of▁sentence|><|User|>Show me a code snippet of a website's sticky header in CSS and JavaScript.<|Assistant|><think>\n", params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.6, top_p=0.95, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=8171, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 03-22 18:54:56 [async_llm_engine.py:211] Added request chatcmpl-3b4a21965d0a4add83f94e8cd2d84d7d.
INFO: 192.168.1.64:34600 - "POST /v1/chat/completions HTTP/1.1" 200 OK
INFO 03-22 18:55:01 [metrics.py:481] Avg prompt throughput: 4.2 tokens/s, Avg generation throughput: 29.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.3%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:06 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 31.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.6%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:11 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 29.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.9%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:16 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 29.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.2%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:21 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 29.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.5%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:26 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 1.8%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:31 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 30.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 2.1%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:36 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 29.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 2.3%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:41 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 29.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 2.6%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:46 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 29.4 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 2.9%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:51 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 28.4 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 3.2%, CPU KV cache usage: 0.0%.
INFO 03-22 18:55:56 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 28.2 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 3.5%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:01 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 28.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 3.7%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:06 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 4.0%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:11 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.9 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 4.3%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:16 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 4.5%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:21 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 4.8%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:26 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 5.1%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:31 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.5 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 5.3%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:36 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 5.6%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:41 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 27.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 5.9%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:46 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.8 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 6.1%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:51 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 6.4%, CPU KV cache usage: 0.0%.
INFO 03-22 18:56:56 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 6.6%, CPU KV cache usage: 0.0%.
INFO 03-22 18:57:01 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 26.6 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 6.9%, CPU KV cache usage: 0.0%.
INFO 03-22 18:57:06 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.4 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 7.1%, CPU KV cache usage: 0.0%.
INFO 03-22 18:57:11 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 23.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 7.3%, CPU KV cache usage: 0.0%.
INFO 03-22 18:57:16 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 24.0 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 7.6%, CPU KV cache usage: 0.0%.
INFO 03-22 18:57:46 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 25.1 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 9.0%, CPU KV cache usage: 0.0%.
INFO 03-22 18:57:51 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 24.3 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 9.2%, CPU KV cache usage: 0.0%.
INFO 03-22 18:57:56 [metrics.py:481] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 24.7 tokens/s, Running: 1 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 9.5%, CPU KV cache usage: 0.0%.
... I have to kill generation ...
```
</details>
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
open
|
2025-03-22T19:12:30Z
|
2025-03-23T08:31:08Z
|
https://github.com/vllm-project/vllm/issues/15340
|
[
"bug"
] |
davidsyoung
| 5
|
jupyterlab/jupyter-ai
|
jupyter
| 1,117
|
Problem with the Chat backend bug in kubeflow Jupyter Notebook
|
## Description
I'm installing the jupyter-ai package to integrate it with a self-hosted Ollama model inside a Kubeflow Jupyter notebook. I've tried several images from kubeflownotebookswg and none of them worked for me; after installing jupyter-ai I was getting
"There seems to be a problem with the Chat backend, please look at the JupyterLab server logs or contact your administrator to correct this problem."
and in the Kubernetes logs (repeatedly):
[W 2024-11-22 10:12:24.663 ServerApp] 404 GET /notebook/mlflow/ai-test-3/api/ai/chats (f2446b1d028f4a6f8aecc051007baedd@127.0.0.6) 1.10ms referer=None
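A possible lead, not verified here: a repeated 404 on the `/api/ai/chats` route usually means the running Jupyter server never loaded the `jupyter_ai` server extension, rather than a failure inside the chat backend itself. A minimal check from the same notebook terminal, assuming the image's default conda environment, could be:

```bash
# List the server extensions known to the running Jupyter server;
# "jupyter_ai" should appear here and be enabled after installation.
jupyter server extension list

# If it is missing, register it explicitly, then restart the server.
jupyter server extension enable jupyter_ai
```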
## Reproduce
1. Create a Jupyter notebook in the Kubeflow dashboard with the kubeflownotebookswg/jupyter:v1.9.2 image (I believe it can be any other; this one is the most lightweight). I've tried the other ones, but the result is the same.
2. Create a new terminal session and run `pip install jupyter-ai` there. (I'm using pip because `conda install jupyter-ai` fails while solving the environment.)
3. Refresh browser page for chat icon to appear.
4.
<img width="233" alt="image" src="https://github.com/user-attachments/assets/db70c2c5-5a07-4b53-8e5f-642f86971a43">
I got the same error with local docker testing, testing with another versions of kubeflownotebookswg/jupyter, so I'm sure it's not env-specific. As well, following this instruction I tried both Minimal and Quich installation with pip.
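For what it's worth, a quick reachability probe like the sketch below (my own, with assumed host, prefix, and token values) can show whether the chat endpoint responds at all under the notebook's URL prefix — the 404 above suggests the `jupyter_ai` handlers are not mounted under `/notebook/mlflow/ai-test-3/`:
```python
# Minimal sketch (assumptions: local port 8889 taken from the logs, the Kubeflow
# base-url prefix from the 404 line, and a placeholder token). It only probes the
# endpoint; it is not part of jupyter-ai itself.
import requests

base = "http://localhost:8889"
prefix = "/notebook/mlflow/ai-test-3"   # base_url prefix seen in the 404 log line
token = "<your-jupyter-token>"

resp = requests.get(f"{base}{prefix}/api/ai/chats", params={"token": token})
print(resp.status_code)  # 404 reproduces the report; any other status means the handler is mounted
```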
## Expected behavior
As it is described in https://jupyter-ai.readthedocs.io/en/latest/users/index.html#prerequisites, I expect chat configuration to appear so I can configure models connections.
## Context
- Docker image: kubeflownotebookswg/jupyter:v1.9.2
- Python version(from image): Python 3.11.10
- JupyterLab version:
<img width="405" alt="image" src="https://github.com/user-attachments/assets/37ee7033-cbdf-4416-a989-6608b8d25b90">
<details><summary>Troubleshoot Output</summary>
(base) jovyan@ai-test-4-0:~$ jupyter troubleshoot
$PATH:
/opt/conda/bin
/opt/conda/condabin
/command
/opt/conda/bin
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin
sys.path:
/opt/conda/bin
/opt/conda/lib/python311.zip
/opt/conda/lib/python3.11
/opt/conda/lib/python3.11/lib-dynload
/opt/conda/lib/python3.11/site-packages
sys.executable:
/opt/conda/bin/python3.11
sys.version:
3.11.10 | packaged by conda-forge | (main, Sep 30 2024, 18:08:57) [GCC 13.3.0]
platform.platform():
Linux-5.14.0-427.33.1.el9_4.x86_64-x86_64-with-glibc2.35
which -a jupyter:
/opt/conda/bin/jupyter
/opt/conda/bin/jupyter
pip list:
Package Version
----------------------------- ---------------
ai21 3.0.0
ai21-tokenizer 0.12.0
aiohappyeyeballs 2.4.3
aiohttp 3.11.7
aiolimiter 1.1.0
aiosignal 1.3.1
aiosqlite 0.20.0
annotated-types 0.7.0
anthropic 0.39.0
anyio 4.6.0
archspec 0.2.3
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
arxiv 2.1.3
asttokens 2.4.1
async-lru 2.0.4
attrs 24.2.0
Babel 2.14.0
bce-python-sdk 0.9.23
beautifulsoup4 4.12.3
bleach 6.1.0
boltons 24.0.0
boto3 1.34.162
botocore 1.34.162
Brotli 1.1.0
cached-property 1.5.2
cachetools 5.5.0
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.3.2
click 8.1.7
cloudpickle 3.1.0
cohere 5.11.4
colorama 0.4.6
comm 0.2.2
conda 24.9.0
conda-libmamba-solver 24.9.0
conda-package-handling 2.3.0
conda_package_streaming 0.10.0
dask 2024.11.2
dataclasses-json 0.6.7
debugpy 1.8.6
decorator 5.1.1
deepmerge 2.0
defusedxml 0.7.1
dill 0.3.9
diskcache 5.6.3
distributed 2024.11.2
distro 1.9.0
entrypoints 0.4
eval_type_backport 0.2.0
exceptiongroup 1.2.2
executing 2.1.0
faiss-cpu 1.9.0.post1
fastavro 1.9.7
fastjsonschema 2.20.0
feedparser 6.0.11
filelock 3.16.1
fqdn 1.5.1
frozendict 2.4.4
frozenlist 1.5.0
fsspec 2024.10.0
future 1.0.0
google-ai-generativelanguage 0.6.6
google-api-core 2.23.0
google-api-python-client 2.154.0
google-auth 2.36.0
google-auth-httplib2 0.2.0
google-generativeai 0.7.2
googleapis-common-protos 1.66.0
gpt4all 2.8.2
greenlet 3.1.1
grpcio 1.68.0
grpcio-status 1.62.3
h11 0.14.0
h2 4.1.0
hpack 4.0.0
httpcore 1.0.6
httplib2 0.22.0
httpx 0.27.2
httpx-sse 0.4.0
huggingface-hub 0.26.2
hyperframe 6.0.1
idna 3.10
importlib_metadata 8.5.0
importlib_resources 6.4.5
ipykernel 6.29.5
ipython 8.27.0
ipywidgets 8.1.5
isoduration 20.11.0
jedi 0.19.1
Jinja2 3.1.4
jiter 0.7.1
jmespath 1.0.1
json5 0.9.25
jsonpatch 1.33
jsonpath-ng 1.7.0
jsonpointer 3.0.0
jsonschema 4.23.0
jsonschema-specifications 2023.12.1
jupyter_ai 2.28.2
jupyter_ai_magics 2.28.2
jupyter_client 8.6.3
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_server 2.14.2
jupyter_server_terminals 0.5.3
jupyterlab 4.2.5
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.3
jupyterlab_widgets 3.0.13
langchain 0.2.17
langchain-anthropic 0.1.23
langchain-aws 0.1.18
langchain-cohere 0.2.4
langchain-community 0.2.19
langchain-core 0.2.43
langchain-experimental 0.0.65
langchain-google-genai 1.0.10
langchain-mistralai 0.1.13
langchain-nvidia-ai-endpoints 0.2.2
langchain-ollama 0.1.3
langchain-openai 0.1.25
langchain-text-splitters 0.2.4
langsmith 0.1.144
libmambapy 1.5.10
locket 1.0.0
mamba 1.5.10
markdown-it-py 3.0.0
MarkupSafe 2.1.5
marshmallow 3.23.1
matplotlib-inline 0.1.7
mdurl 0.1.2
menuinst 2.1.2
mistune 3.0.2
msgpack 1.1.0
multidict 6.1.0
multiprocess 0.70.17
mypy-extensions 1.0.0
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest_asyncio 1.6.0
notebook 7.2.2
notebook_shim 0.2.4
numpy 1.26.4
ollama 0.4.0
openai 1.55.0
orjson 3.10.11
overrides 7.7.0
packaging 24.1
pandas 2.2.3
pandocfilters 1.5.0
parameterized 0.9.0
parso 0.8.4
partd 1.4.2
pexpect 4.9.0
pickleshare 0.7.5
pillow 10.4.0
pip 24.2
pkgutil_resolve_name 1.3.10
platformdirs 4.3.6
pluggy 1.5.0
ply 3.11
prometheus_client 0.21.0
prompt_toolkit 3.0.48
propcache 0.2.0
proto-plus 1.25.0
protobuf 4.25.5
psutil 6.0.0
ptyprocess 0.7.0
pure_eval 0.2.3
pyarrow 18.0.0
pyasn1 0.6.1
pyasn1_modules 0.4.1
pycosat 0.6.6
pycparser 2.22
pycryptodome 3.21.0
pydantic 2.10.1
pydantic_core 2.27.1
Pygments 2.18.0
pyparsing 3.2.0
pypdf 5.1.0
PySocks 1.7.1
python-dateutil 2.9.0
python-dotenv 1.0.1
python-json-logger 2.0.7
pytz 2024.2
PyYAML 6.0.2
pyzmq 26.2.0
qianfan 0.4.12.1
referencing 0.35.1
regex 2024.11.6
requests 2.32.3
requests-toolbelt 1.0.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rich 13.9.4
rpds-py 0.20.0
rsa 4.9
ruamel.yaml 0.18.6
ruamel.yaml.clib 0.2.8
s3transfer 0.10.4
Send2Trash 1.8.3
sentencepiece 0.2.0
setuptools 75.1.0
sgmllib3k 1.0.0
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
sortedcontainers 2.4.0
soupsieve 2.5
SQLAlchemy 2.0.36
stack-data 0.6.2
tabulate 0.9.0
tblib 3.0.0
tenacity 8.5.0
terminado 0.18.1
tiktoken 0.8.0
tinycss2 1.3.0
together 1.3.5
tokenizers 0.20.3
tomli 2.0.1
toolz 1.0.0
tornado 6.4.1
tqdm 4.66.5
traitlets 5.14.3
truststore 0.9.2
typer 0.13.1
types-python-dateutil 2.9.0.20240906
types-requests 2.32.0.20241016
typing_extensions 4.12.2
typing-inspect 0.9.0
typing-utils 0.1.0
tzdata 2024.2
uri-template 1.3.0
uritemplate 4.1.1
urllib3 2.2.3
wcwidth 0.2.13
webcolors 24.8.0
webencodings 0.5.1
websocket-client 1.8.0
wheel 0.44.0
widgetsnbextension 4.0.13
yarl 1.18.0
zict 3.0.0
zipp 3.20.2
zstandard 0.23.0
conda list:
# packages in environment at /opt/conda:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
ai21 3.0.0 pypi_0 pypi
ai21-tokenizer 0.12.0 pypi_0 pypi
aiohappyeyeballs 2.4.3 pypi_0 pypi
aiohttp 3.11.7 pypi_0 pypi
aiolimiter 1.1.0 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
aiosqlite 0.20.0 pypi_0 pypi
annotated-types 0.7.0 pypi_0 pypi
anthropic 0.39.0 pypi_0 pypi
anyio 4.6.0 pyhd8ed1ab_1 conda-forge
archspec 0.2.3 pyhd8ed1ab_0 conda-forge
argon2-cffi 23.1.0 pyhd8ed1ab_0 conda-forge
argon2-cffi-bindings 21.2.0 py311h9ecbd09_5 conda-forge
arrow 1.3.0 pyhd8ed1ab_0 conda-forge
arxiv 2.1.3 pypi_0 pypi
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
async-lru 2.0.4 pyhd8ed1ab_0 conda-forge
attrs 24.2.0 pyh71513ae_0 conda-forge
babel 2.14.0 pyhd8ed1ab_0 conda-forge
bce-python-sdk 0.9.23 pypi_0 pypi
beautifulsoup4 4.12.3 pyha770c72_0 conda-forge
bleach 6.1.0 pyhd8ed1ab_0 conda-forge
boltons 24.0.0 pyhd8ed1ab_0 conda-forge
boto3 1.34.162 pypi_0 pypi
botocore 1.34.162 pypi_0 pypi
brotli-python 1.1.0 py311hfdbb021_2 conda-forge
bzip2 1.0.8 h4bc722e_7 conda-forge
c-ares 1.33.1 heb4867d_0 conda-forge
ca-certificates 2024.8.30 hbcca054_0 conda-forge
cached-property 1.5.2 hd8ed1ab_1 conda-forge
cached_property 1.5.2 pyha770c72_1 conda-forge
cachetools 5.5.0 pypi_0 pypi
certifi 2024.8.30 pyhd8ed1ab_0 conda-forge
cffi 1.17.1 py311hf29c0ef_0 conda-forge
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
click 8.1.7 pypi_0 pypi
cloudpickle 3.1.0 pypi_0 pypi
cohere 5.11.4 pypi_0 pypi
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
comm 0.2.2 pyhd8ed1ab_0 conda-forge
conda 24.9.0 py311h38be061_0 conda-forge
conda-libmamba-solver 24.9.0 pyhd8ed1ab_0 conda-forge
conda-package-handling 2.3.0 pyh7900ff3_0 conda-forge
conda-package-streaming 0.10.0 pyhd8ed1ab_0 conda-forge
dask 2024.11.2 pypi_0 pypi
dataclasses-json 0.6.7 pypi_0 pypi
debugpy 1.8.6 py311hfdbb021_0 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
deepmerge 2.0 pypi_0 pypi
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
dill 0.3.9 pypi_0 pypi
diskcache 5.6.3 pypi_0 pypi
distributed 2024.11.2 pypi_0 pypi
distro 1.9.0 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
eval-type-backport 0.2.0 pypi_0 pypi
exceptiongroup 1.2.2 pyhd8ed1ab_0 conda-forge
executing 2.1.0 pyhd8ed1ab_0 conda-forge
faiss-cpu 1.9.0.post1 pypi_0 pypi
fastavro 1.9.7 pypi_0 pypi
feedparser 6.0.11 pypi_0 pypi
filelock 3.16.1 pypi_0 pypi
fmt 10.2.1 h00ab1b0_0 conda-forge
fqdn 1.5.1 pyhd8ed1ab_0 conda-forge
frozendict 2.4.4 py311h9ecbd09_1 conda-forge
frozenlist 1.5.0 pypi_0 pypi
fsspec 2024.10.0 pypi_0 pypi
future 1.0.0 pypi_0 pypi
google-ai-generativelanguage 0.6.6 pypi_0 pypi
google-api-core 2.23.0 pypi_0 pypi
google-api-python-client 2.154.0 pypi_0 pypi
google-auth 2.36.0 pypi_0 pypi
google-auth-httplib2 0.2.0 pypi_0 pypi
google-generativeai 0.7.2 pypi_0 pypi
googleapis-common-protos 1.66.0 pypi_0 pypi
gpt4all 2.8.2 pypi_0 pypi
greenlet 3.1.1 pypi_0 pypi
grpcio 1.68.0 pypi_0 pypi
grpcio-status 1.62.3 pypi_0 pypi
h11 0.14.0 pyhd8ed1ab_0 conda-forge
h2 4.1.0 pyhd8ed1ab_0 conda-forge
hpack 4.0.0 pyh9f0ad1d_0 conda-forge
httpcore 1.0.6 pyhd8ed1ab_0 conda-forge
httplib2 0.22.0 pypi_0 pypi
httpx 0.27.2 pyhd8ed1ab_0 conda-forge
httpx-sse 0.4.0 pypi_0 pypi
huggingface-hub 0.26.2 pypi_0 pypi
hyperframe 6.0.1 pyhd8ed1ab_0 conda-forge
icu 75.1 he02047a_0 conda-forge
idna 3.10 pyhd8ed1ab_0 conda-forge
importlib-metadata 8.5.0 pyha770c72_0 conda-forge
importlib_metadata 8.5.0 hd8ed1ab_0 conda-forge
importlib_resources 6.4.5 pyhd8ed1ab_0 conda-forge
ipykernel 6.29.5 pyh3099207_0 conda-forge
ipython 8.27.0 pyh707e725_0 conda-forge
ipywidgets 8.1.5 pypi_0 pypi
isoduration 20.11.0 pyhd8ed1ab_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.4 pyhd8ed1ab_0 conda-forge
jiter 0.7.1 pypi_0 pypi
jmespath 1.0.1 pypi_0 pypi
json5 0.9.25 pyhd8ed1ab_0 conda-forge
jsonpatch 1.33 pyhd8ed1ab_0 conda-forge
jsonpath-ng 1.7.0 pypi_0 pypi
jsonpointer 3.0.0 py311h38be061_1 conda-forge
jsonschema 4.23.0 pyhd8ed1ab_0 conda-forge
jsonschema-specifications 2023.12.1 pyhd8ed1ab_0 conda-forge
jsonschema-with-format-nongpl 4.23.0 hd8ed1ab_0 conda-forge
jupyter-ai 2.28.2 pypi_0 pypi
jupyter-ai-magics 2.28.2 pypi_0 pypi
jupyter-lsp 2.2.5 pyhd8ed1ab_0 conda-forge
jupyter_client 8.6.3 pyhd8ed1ab_0 conda-forge
jupyter_core 5.7.2 pyh31011fe_1 conda-forge
jupyter_events 0.10.0 pyhd8ed1ab_0 conda-forge
jupyter_server 2.14.2 pyhd8ed1ab_0 conda-forge
jupyter_server_terminals 0.5.3 pyhd8ed1ab_0 conda-forge
jupyterlab 4.2.5 pyhd8ed1ab_0 conda-forge
jupyterlab-widgets 3.0.13 pypi_0 pypi
jupyterlab_pygments 0.3.0 pyhd8ed1ab_1 conda-forge
jupyterlab_server 2.27.3 pyhd8ed1ab_0 conda-forge
keyutils 1.6.1 h166bdaf_0 conda-forge
krb5 1.21.3 h659f571_0 conda-forge
langchain 0.2.17 pypi_0 pypi
langchain-anthropic 0.1.23 pypi_0 pypi
langchain-aws 0.1.18 pypi_0 pypi
langchain-cohere 0.2.4 pypi_0 pypi
langchain-community 0.2.19 pypi_0 pypi
langchain-core 0.2.43 pypi_0 pypi
langchain-experimental 0.0.65 pypi_0 pypi
langchain-google-genai 1.0.10 pypi_0 pypi
langchain-mistralai 0.1.13 pypi_0 pypi
langchain-nvidia-ai-endpoints 0.2.2 pypi_0 pypi
langchain-ollama 0.1.3 pypi_0 pypi
langchain-openai 0.1.25 pypi_0 pypi
langchain-text-splitters 0.2.4 pypi_0 pypi
langsmith 0.1.144 pypi_0 pypi
ld_impl_linux-64 2.43 h712a8e2_1 conda-forge
libarchive 3.7.4 hfca40fe_0 conda-forge
libcurl 8.10.1 hbbe4b11_0 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 hd590300_2 conda-forge
libexpat 2.6.3 h5888daf_0 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc 14.1.0 h77fa898_1 conda-forge
libgcc-ng 14.1.0 h69a702a_1 conda-forge
libgomp 14.1.0 h77fa898_1 conda-forge
libiconv 1.17 hd590300_2 conda-forge
libmamba 1.5.10 h4cc3d14_0 conda-forge
libmambapy 1.5.10 py311h7f1ffb1_0 conda-forge
libnghttp2 1.58.0 h47da74e_1 conda-forge
libnsl 2.0.1 hd590300_0 conda-forge
libsodium 1.0.20 h4ab18f5_0 conda-forge
libsolv 0.7.30 h3509ff9_0 conda-forge
libsqlite 3.46.1 hadc24fc_0 conda-forge
libssh2 1.11.0 h0841786_0 conda-forge
libstdcxx 14.1.0 hc0a3c3a_1 conda-forge
libstdcxx-ng 14.1.0 h4852527_1 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libxcrypt 4.4.36 hd590300_1 conda-forge
libxml2 2.12.7 he7c6b58_4 conda-forge
libzlib 1.3.1 h4ab18f5_1 conda-forge
locket 1.0.0 pypi_0 pypi
lz4-c 1.9.4 hcb278e6_0 conda-forge
lzo 2.10 hd590300_1001 conda-forge
mamba 1.5.10 py311h3072747_0 conda-forge
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.5 py311h9ecbd09_1 conda-forge
marshmallow 3.23.1 pypi_0 pypi
matplotlib-inline 0.1.7 pyhd8ed1ab_0 conda-forge
mdurl 0.1.2 pypi_0 pypi
menuinst 2.1.2 py311h38be061_1 conda-forge
mistune 3.0.2 pyhd8ed1ab_0 conda-forge
msgpack 1.1.0 pypi_0 pypi
multidict 6.1.0 pypi_0 pypi
multiprocess 0.70.17 pypi_0 pypi
mypy-extensions 1.0.0 pypi_0 pypi
nbclient 0.10.0 pyhd8ed1ab_0 conda-forge
nbconvert-core 7.16.4 pyhd8ed1ab_1 conda-forge
nbformat 5.10.4 pyhd8ed1ab_0 conda-forge
ncurses 6.5 he02047a_1 conda-forge
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
notebook 7.2.2 pyhd8ed1ab_0 conda-forge
notebook-shim 0.2.4 pyhd8ed1ab_0 conda-forge
numpy 1.26.4 pypi_0 pypi
ollama 0.4.0 pypi_0 pypi
openai 1.55.0 pypi_0 pypi
openssl 3.3.2 hb9d3cd8_0 conda-forge
orjson 3.10.11 pypi_0 pypi
overrides 7.7.0 pyhd8ed1ab_0 conda-forge
packaging 24.1 pyhd8ed1ab_0 conda-forge
pandas 2.2.3 pypi_0 pypi
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
parameterized 0.9.0 pypi_0 pypi
parso 0.8.4 pyhd8ed1ab_0 conda-forge
partd 1.4.2 pypi_0 pypi
pexpect 4.9.0 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 10.4.0 pypi_0 pypi
pip 24.2 pyh8b19718_1 conda-forge
pkgutil-resolve-name 1.3.10 pyhd8ed1ab_1 conda-forge
platformdirs 4.3.6 pyhd8ed1ab_0 conda-forge
pluggy 1.5.0 pyhd8ed1ab_0 conda-forge
ply 3.11 pypi_0 pypi
prometheus_client 0.21.0 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.48 pyha770c72_0 conda-forge
propcache 0.2.0 pypi_0 pypi
proto-plus 1.25.0 pypi_0 pypi
protobuf 4.25.5 pypi_0 pypi
psutil 6.0.0 py311h9ecbd09_1 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.3 pyhd8ed1ab_0 conda-forge
pyarrow 18.0.0 pypi_0 pypi
pyasn1 0.6.1 pypi_0 pypi
pyasn1-modules 0.4.1 pypi_0 pypi
pybind11-abi 4 hd8ed1ab_3 conda-forge
pycosat 0.6.6 py311h459d7ec_0 conda-forge
pycparser 2.22 pyhd8ed1ab_0 conda-forge
pycryptodome 3.21.0 pypi_0 pypi
pydantic 2.10.1 pypi_0 pypi
pydantic-core 2.27.1 pypi_0 pypi
pygments 2.18.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.2.0 pypi_0 pypi
pypdf 5.1.0 pypi_0 pypi
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.11.10 hc5c86c4_2_cpython conda-forge
python-dateutil 2.9.0 pyhd8ed1ab_0 conda-forge
python-dotenv 1.0.1 pypi_0 pypi
python-fastjsonschema 2.20.0 pyhd8ed1ab_0 conda-forge
python-json-logger 2.0.7 pyhd8ed1ab_0 conda-forge
python_abi 3.11 5_cp311 conda-forge
pytz 2024.2 pyhd8ed1ab_0 conda-forge
pyyaml 6.0.2 py311h9ecbd09_1 conda-forge
pyzmq 26.2.0 py311h7deb3e3_2 conda-forge
qianfan 0.4.12.1 pypi_0 pypi
readline 8.2 h8228510_1 conda-forge
referencing 0.35.1 pyhd8ed1ab_0 conda-forge
regex 2024.11.6 pypi_0 pypi
reproc 14.2.4.post0 hd590300_1 conda-forge
reproc-cpp 14.2.4.post0 h59595ed_1 conda-forge
requests 2.32.3 pyhd8ed1ab_0 conda-forge
requests-toolbelt 1.0.0 pypi_0 pypi
rfc3339-validator 0.1.4 pyhd8ed1ab_0 conda-forge
rfc3986-validator 0.1.1 pyh9f0ad1d_0 conda-forge
rich 13.9.4 pypi_0 pypi
rpds-py 0.20.0 py311h9e33e62_1 conda-forge
rsa 4.9 pypi_0 pypi
ruamel.yaml 0.18.6 py311h459d7ec_0 conda-forge
ruamel.yaml.clib 0.2.8 py311h459d7ec_0 conda-forge
s3transfer 0.10.4 pypi_0 pypi
send2trash 1.8.3 pyh0d859eb_0 conda-forge
sentencepiece 0.2.0 pypi_0 pypi
setuptools 75.1.0 pyhd8ed1ab_0 conda-forge
sgmllib3k 1.0.0 pypi_0 pypi
shellingham 1.5.4 pypi_0 pypi
six 1.16.0 pyh6c4a22f_0 conda-forge
sniffio 1.3.1 pyhd8ed1ab_0 conda-forge
sortedcontainers 2.4.0 pypi_0 pypi
soupsieve 2.5 pyhd8ed1ab_1 conda-forge
sqlalchemy 2.0.36 pypi_0 pypi
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
tabulate 0.9.0 pypi_0 pypi
tblib 3.0.0 pypi_0 pypi
tenacity 8.5.0 pypi_0 pypi
terminado 0.18.1 pyh0d859eb_0 conda-forge
tiktoken 0.8.0 pypi_0 pypi
tinycss2 1.3.0 pyhd8ed1ab_0 conda-forge
tk 8.6.13 noxft_h4845f30_101 conda-forge
together 1.3.5 pypi_0 pypi
tokenizers 0.20.3 pypi_0 pypi
tomli 2.0.1 pyhd8ed1ab_0 conda-forge
toolz 1.0.0 pypi_0 pypi
tornado 6.4.1 py311h9ecbd09_1 conda-forge
tqdm 4.66.5 pyhd8ed1ab_0 conda-forge
traitlets 5.14.3 pyhd8ed1ab_0 conda-forge
truststore 0.9.2 pyhd8ed1ab_0 conda-forge
typer 0.13.1 pypi_0 pypi
types-python-dateutil 2.9.0.20240906 pyhd8ed1ab_0 conda-forge
types-requests 2.32.0.20241016 pypi_0 pypi
typing-extensions 4.12.2 hd8ed1ab_0 conda-forge
typing-inspect 0.9.0 pypi_0 pypi
typing_extensions 4.12.2 pyha770c72_0 conda-forge
typing_utils 0.1.0 pyhd8ed1ab_0 conda-forge
tzdata 2024.2 pypi_0 pypi
uri-template 1.3.0 pyhd8ed1ab_0 conda-forge
uritemplate 4.1.1 pypi_0 pypi
urllib3 2.2.3 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.13 pyhd8ed1ab_0 conda-forge
webcolors 24.8.0 pyhd8ed1ab_0 conda-forge
webencodings 0.5.1 pyhd8ed1ab_2 conda-forge
websocket-client 1.8.0 pyhd8ed1ab_0 conda-forge
wheel 0.44.0 pyhd8ed1ab_0 conda-forge
widgetsnbextension 4.0.13 pypi_0 pypi
xz 5.2.6 h166bdaf_0 conda-forge
yaml 0.2.5 h7f98852_2 conda-forge
yaml-cpp 0.8.0 h59595ed_0 conda-forge
yarl 1.18.0 pypi_0 pypi
zeromq 4.3.5 ha4adb4c_5 conda-forge
zict 3.0.0 pypi_0 pypi
zipp 3.20.2 pyhd8ed1ab_0 conda-forge
zstandard 0.23.0 py311hbc35293_1 conda-forge
zstd 1.5.6 ha6fb4c9_0 conda-forge
conda env:
name: base
channels:
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- anyio=4.6.0=pyhd8ed1ab_1
- archspec=0.2.3=pyhd8ed1ab_0
- argon2-cffi=23.1.0=pyhd8ed1ab_0
- argon2-cffi-bindings=21.2.0=py311h9ecbd09_5
- arrow=1.3.0=pyhd8ed1ab_0
- asttokens=2.4.1=pyhd8ed1ab_0
- async-lru=2.0.4=pyhd8ed1ab_0
- attrs=24.2.0=pyh71513ae_0
- babel=2.14.0=pyhd8ed1ab_0
- beautifulsoup4=4.12.3=pyha770c72_0
- bleach=6.1.0=pyhd8ed1ab_0
- boltons=24.0.0=pyhd8ed1ab_0
- brotli-python=1.1.0=py311hfdbb021_2
- bzip2=1.0.8=h4bc722e_7
- c-ares=1.33.1=heb4867d_0
- ca-certificates=2024.8.30=hbcca054_0
- cached-property=1.5.2=hd8ed1ab_1
- cached_property=1.5.2=pyha770c72_1
- certifi=2024.8.30=pyhd8ed1ab_0
- cffi=1.17.1=py311hf29c0ef_0
- charset-normalizer=3.3.2=pyhd8ed1ab_0
- colorama=0.4.6=pyhd8ed1ab_0
- comm=0.2.2=pyhd8ed1ab_0
- conda=24.9.0=py311h38be061_0
- conda-libmamba-solver=24.9.0=pyhd8ed1ab_0
- conda-package-handling=2.3.0=pyh7900ff3_0
- conda-package-streaming=0.10.0=pyhd8ed1ab_0
- debugpy=1.8.6=py311hfdbb021_0
- decorator=5.1.1=pyhd8ed1ab_0
- defusedxml=0.7.1=pyhd8ed1ab_0
- distro=1.9.0=pyhd8ed1ab_0
- entrypoints=0.4=pyhd8ed1ab_0
- exceptiongroup=1.2.2=pyhd8ed1ab_0
- executing=2.1.0=pyhd8ed1ab_0
- fmt=10.2.1=h00ab1b0_0
- fqdn=1.5.1=pyhd8ed1ab_0
- frozendict=2.4.4=py311h9ecbd09_1
- h11=0.14.0=pyhd8ed1ab_0
- h2=4.1.0=pyhd8ed1ab_0
- hpack=4.0.0=pyh9f0ad1d_0
- httpcore=1.0.6=pyhd8ed1ab_0
- httpx=0.27.2=pyhd8ed1ab_0
- hyperframe=6.0.1=pyhd8ed1ab_0
- icu=75.1=he02047a_0
- idna=3.10=pyhd8ed1ab_0
- importlib-metadata=8.5.0=pyha770c72_0
- importlib_metadata=8.5.0=hd8ed1ab_0
- importlib_resources=6.4.5=pyhd8ed1ab_0
- ipykernel=6.29.5=pyh3099207_0
- ipython=8.27.0=pyh707e725_0
- isoduration=20.11.0=pyhd8ed1ab_0
- jedi=0.19.1=pyhd8ed1ab_0
- jinja2=3.1.4=pyhd8ed1ab_0
- json5=0.9.25=pyhd8ed1ab_0
- jsonpatch=1.33=pyhd8ed1ab_0
- jsonpointer=3.0.0=py311h38be061_1
- jsonschema=4.23.0=pyhd8ed1ab_0
- jsonschema-specifications=2023.12.1=pyhd8ed1ab_0
- jsonschema-with-format-nongpl=4.23.0=hd8ed1ab_0
- jupyter-lsp=2.2.5=pyhd8ed1ab_0
- jupyter_client=8.6.3=pyhd8ed1ab_0
- jupyter_core=5.7.2=pyh31011fe_1
- jupyter_events=0.10.0=pyhd8ed1ab_0
- jupyter_server=2.14.2=pyhd8ed1ab_0
- jupyter_server_terminals=0.5.3=pyhd8ed1ab_0
- jupyterlab=4.2.5=pyhd8ed1ab_0
- jupyterlab_pygments=0.3.0=pyhd8ed1ab_1
- jupyterlab_server=2.27.3=pyhd8ed1ab_0
- keyutils=1.6.1=h166bdaf_0
- krb5=1.21.3=h659f571_0
- ld_impl_linux-64=2.43=h712a8e2_1
- libarchive=3.7.4=hfca40fe_0
- libcurl=8.10.1=hbbe4b11_0
- libedit=3.1.20191231=he28a2e2_2
- libev=4.33=hd590300_2
- libexpat=2.6.3=h5888daf_0
- libffi=3.4.2=h7f98852_5
- libgcc=14.1.0=h77fa898_1
- libgcc-ng=14.1.0=h69a702a_1
- libgomp=14.1.0=h77fa898_1
- libiconv=1.17=hd590300_2
- libmamba=1.5.10=h4cc3d14_0
- libmambapy=1.5.10=py311h7f1ffb1_0
- libnghttp2=1.58.0=h47da74e_1
- libnsl=2.0.1=hd590300_0
- libsodium=1.0.20=h4ab18f5_0
- libsolv=0.7.30=h3509ff9_0
- libsqlite=3.46.1=hadc24fc_0
- libssh2=1.11.0=h0841786_0
- libstdcxx=14.1.0=hc0a3c3a_1
- libstdcxx-ng=14.1.0=h4852527_1
- libuuid=2.38.1=h0b41bf4_0
- libxcrypt=4.4.36=hd590300_1
- libxml2=2.12.7=he7c6b58_4
- libzlib=1.3.1=h4ab18f5_1
- lz4-c=1.9.4=hcb278e6_0
- lzo=2.10=hd590300_1001
- mamba=1.5.10=py311h3072747_0
- markupsafe=2.1.5=py311h9ecbd09_1
- matplotlib-inline=0.1.7=pyhd8ed1ab_0
- menuinst=2.1.2=py311h38be061_1
- mistune=3.0.2=pyhd8ed1ab_0
- nbclient=0.10.0=pyhd8ed1ab_0
- nbconvert-core=7.16.4=pyhd8ed1ab_1
- nbformat=5.10.4=pyhd8ed1ab_0
- ncurses=6.5=he02047a_1
- nest-asyncio=1.6.0=pyhd8ed1ab_0
- notebook=7.2.2=pyhd8ed1ab_0
- notebook-shim=0.2.4=pyhd8ed1ab_0
- openssl=3.3.2=hb9d3cd8_0
- overrides=7.7.0=pyhd8ed1ab_0
- packaging=24.1=pyhd8ed1ab_0
- pandocfilters=1.5.0=pyhd8ed1ab_0
- parso=0.8.4=pyhd8ed1ab_0
- pexpect=4.9.0=pyhd8ed1ab_0
- pickleshare=0.7.5=py_1003
- pip=24.2=pyh8b19718_1
- pkgutil-resolve-name=1.3.10=pyhd8ed1ab_1
- platformdirs=4.3.6=pyhd8ed1ab_0
- pluggy=1.5.0=pyhd8ed1ab_0
- prometheus_client=0.21.0=pyhd8ed1ab_0
- prompt-toolkit=3.0.48=pyha770c72_0
- psutil=6.0.0=py311h9ecbd09_1
- ptyprocess=0.7.0=pyhd3deb0d_0
- pure_eval=0.2.3=pyhd8ed1ab_0
- pybind11-abi=4=hd8ed1ab_3
- pycosat=0.6.6=py311h459d7ec_0
- pycparser=2.22=pyhd8ed1ab_0
- pygments=2.18.0=pyhd8ed1ab_0
- pysocks=1.7.1=pyha2e5f31_6
- python=3.11.10=hc5c86c4_2_cpython
- python-dateutil=2.9.0=pyhd8ed1ab_0
- python-fastjsonschema=2.20.0=pyhd8ed1ab_0
- python-json-logger=2.0.7=pyhd8ed1ab_0
- python_abi=3.11=5_cp311
- pytz=2024.2=pyhd8ed1ab_0
- pyyaml=6.0.2=py311h9ecbd09_1
- pyzmq=26.2.0=py311h7deb3e3_2
- readline=8.2=h8228510_1
- referencing=0.35.1=pyhd8ed1ab_0
- reproc=14.2.4.post0=hd590300_1
- reproc-cpp=14.2.4.post0=h59595ed_1
- requests=2.32.3=pyhd8ed1ab_0
- rfc3339-validator=0.1.4=pyhd8ed1ab_0
- rfc3986-validator=0.1.1=pyh9f0ad1d_0
- rpds-py=0.20.0=py311h9e33e62_1
- ruamel.yaml=0.18.6=py311h459d7ec_0
- ruamel.yaml.clib=0.2.8=py311h459d7ec_0
- send2trash=1.8.3=pyh0d859eb_0
- setuptools=75.1.0=pyhd8ed1ab_0
- six=1.16.0=pyh6c4a22f_0
- sniffio=1.3.1=pyhd8ed1ab_0
- soupsieve=2.5=pyhd8ed1ab_1
- stack_data=0.6.2=pyhd8ed1ab_0
- terminado=0.18.1=pyh0d859eb_0
- tinycss2=1.3.0=pyhd8ed1ab_0
- tk=8.6.13=noxft_h4845f30_101
- tomli=2.0.1=pyhd8ed1ab_0
- tornado=6.4.1=py311h9ecbd09_1
- tqdm=4.66.5=pyhd8ed1ab_0
- traitlets=5.14.3=pyhd8ed1ab_0
- truststore=0.9.2=pyhd8ed1ab_0
- types-python-dateutil=2.9.0.20240906=pyhd8ed1ab_0
- typing-extensions=4.12.2=hd8ed1ab_0
- typing_extensions=4.12.2=pyha770c72_0
- typing_utils=0.1.0=pyhd8ed1ab_0
- uri-template=1.3.0=pyhd8ed1ab_0
- urllib3=2.2.3=pyhd8ed1ab_0
- wcwidth=0.2.13=pyhd8ed1ab_0
- webcolors=24.8.0=pyhd8ed1ab_0
- webencodings=0.5.1=pyhd8ed1ab_2
- websocket-client=1.8.0=pyhd8ed1ab_0
- wheel=0.44.0=pyhd8ed1ab_0
- xz=5.2.6=h166bdaf_0
- yaml=0.2.5=h7f98852_2
- yaml-cpp=0.8.0=h59595ed_0
- zeromq=4.3.5=ha4adb4c_5
- zipp=3.20.2=pyhd8ed1ab_0
- zstandard=0.23.0=py311hbc35293_1
- zstd=1.5.6=ha6fb4c9_0
- pip:
- ai21==3.0.0
- ai21-tokenizer==0.12.0
- aiohappyeyeballs==2.4.3
- aiohttp==3.11.7
- aiolimiter==1.1.0
- aiosignal==1.3.1
- aiosqlite==0.20.0
- annotated-types==0.7.0
- anthropic==0.39.0
- arxiv==2.1.3
- bce-python-sdk==0.9.23
- boto3==1.34.162
- botocore==1.34.162
- cachetools==5.5.0
- click==8.1.7
- cloudpickle==3.1.0
- cohere==5.11.4
- dask==2024.11.2
- dataclasses-json==0.6.7
- deepmerge==2.0
- dill==0.3.9
- diskcache==5.6.3
- distributed==2024.11.2
- eval-type-backport==0.2.0
- faiss-cpu==1.9.0.post1
- fastavro==1.9.7
- feedparser==6.0.11
- filelock==3.16.1
- frozenlist==1.5.0
- fsspec==2024.10.0
- future==1.0.0
- google-ai-generativelanguage==0.6.6
- google-api-core==2.23.0
- google-api-python-client==2.154.0
- google-auth==2.36.0
- google-auth-httplib2==0.2.0
- google-generativeai==0.7.2
- googleapis-common-protos==1.66.0
- gpt4all==2.8.2
- greenlet==3.1.1
- grpcio==1.68.0
- grpcio-status==1.62.3
- httplib2==0.22.0
- httpx-sse==0.4.0
- huggingface-hub==0.26.2
- ipywidgets==8.1.5
- jiter==0.7.1
- jmespath==1.0.1
- jsonpath-ng==1.7.0
- jupyter-ai==2.28.2
- jupyter-ai-magics==2.28.2
- jupyterlab-widgets==3.0.13
- langchain==0.2.17
- langchain-anthropic==0.1.23
- langchain-aws==0.1.18
- langchain-cohere==0.2.4
- langchain-community==0.2.19
- langchain-core==0.2.43
- langchain-experimental==0.0.65
- langchain-google-genai==1.0.10
- langchain-mistralai==0.1.13
- langchain-nvidia-ai-endpoints==0.2.2
- langchain-ollama==0.1.3
- langchain-openai==0.1.25
- langchain-text-splitters==0.2.4
- langsmith==0.1.144
- locket==1.0.0
- markdown-it-py==3.0.0
- marshmallow==3.23.1
- mdurl==0.1.2
- msgpack==1.1.0
- multidict==6.1.0
- multiprocess==0.70.17
- mypy-extensions==1.0.0
- numpy==1.26.4
- ollama==0.4.0
- openai==1.55.0
- orjson==3.10.11
- pandas==2.2.3
- parameterized==0.9.0
- partd==1.4.2
- pillow==10.4.0
- ply==3.11
- propcache==0.2.0
- proto-plus==1.25.0
- protobuf==4.25.5
- pyarrow==18.0.0
- pyasn1==0.6.1
- pyasn1-modules==0.4.1
- pycryptodome==3.21.0
- pydantic==2.10.1
- pydantic-core==2.27.1
- pyparsing==3.2.0
- pypdf==5.1.0
- python-dotenv==1.0.1
- qianfan==0.4.12.1
- regex==2024.11.6
- requests-toolbelt==1.0.0
- rich==13.9.4
- rsa==4.9
- s3transfer==0.10.4
- sentencepiece==0.2.0
- sgmllib3k==1.0.0
- shellingham==1.5.4
- sortedcontainers==2.4.0
- sqlalchemy==2.0.36
- tabulate==0.9.0
- tblib==3.0.0
- tenacity==8.5.0
- tiktoken==0.8.0
- together==1.3.5
- tokenizers==0.20.3
- toolz==1.0.0
- typer==0.13.1
- types-requests==2.32.0.20241016
- typing-inspect==0.9.0
- tzdata==2024.2
- uritemplate==4.1.1
- widgetsnbextension==4.0.13
- yarl==1.18.0
- zict==3.0.0
prefix: /opt/conda
</details>
<details><summary>Command Line Output</summary>
<pre>
Paste the output from your command line running `jupyter lab` here, use `--debug` if possible.
</pre>
</details>
<details><summary>Browser Output</summary>
(base) jovyan@ai-test-4-0:~$ jupyter lab
[I 2024-11-22 10:41:38.027 ServerApp] jupyter_ai | extension was successfully linked.
[I 2024-11-22 10:41:38.027 ServerApp] jupyter_lsp | extension was successfully linked.
[I 2024-11-22 10:41:38.030 ServerApp] jupyter_server_terminals | extension was successfully linked.
[I 2024-11-22 10:41:38.033 ServerApp] jupyterlab | extension was successfully linked.
[I 2024-11-22 10:41:38.036 ServerApp] notebook | extension was successfully linked.
[I 2024-11-22 10:41:38.039 ServerApp] notebook_shim | extension was successfully linked.
[I 2024-11-22 10:41:38.083 ServerApp] notebook_shim | extension was successfully loaded.
[I 2024-11-22 10:41:38.083 AiExtension] Configured provider allowlist: None
[I 2024-11-22 10:41:38.083 AiExtension] Configured provider blocklist: None
[I 2024-11-22 10:41:38.083 AiExtension] Configured model allowlist: None
[I 2024-11-22 10:41:38.083 AiExtension] Configured model blocklist: None
[I 2024-11-22 10:41:38.083 AiExtension] Configured model parameters: {}
[I 2024-11-22 10:41:38.091 AiExtension] Registered model provider `ai21`.
[I 2024-11-22 10:41:38.339 AiExtension] Registered model provider `bedrock`.
[I 2024-11-22 10:41:38.339 AiExtension] Registered model provider `bedrock-chat`.
[I 2024-11-22 10:41:38.339 AiExtension] Registered model provider `bedrock-custom`.
[I 2024-11-22 10:41:38.528 AiExtension] Registered model provider `anthropic-chat`.
[I 2024-11-22 10:41:38.888 AiExtension] Registered model provider `azure-chat-openai`.
[I 2024-11-22 10:41:40.344 AiExtension] Registered model provider `cohere`.
[I 2024-11-22 10:41:40.737 AiExtension] Registered model provider `gemini`.
[I 2024-11-22 10:41:40.737 AiExtension] Registered model provider `gpt4all`.
[I 2024-11-22 10:41:40.738 AiExtension] Registered model provider `huggingface_hub`.
[I 2024-11-22 10:41:40.785 AiExtension] Registered model provider `mistralai`.
[I 2024-11-22 10:41:40.800 AiExtension] Registered model provider `nvidia-chat`.
[I 2024-11-22 10:41:40.920 AiExtension] Registered model provider `ollama`.
[I 2024-11-22 10:41:40.920 AiExtension] Registered model provider `openai`.
[I 2024-11-22 10:41:40.920 AiExtension] Registered model provider `openai-chat`.
[I 2024-11-22 10:41:40.929 AiExtension] Registered model provider `openrouter`.
[I 2024-11-22 10:41:40.929 AiExtension] Registered model provider `qianfan`.
[I 2024-11-22 10:41:40.929 AiExtension] Registered model provider `sagemaker-endpoint`.
[I 2024-11-22 10:41:40.929 AiExtension] Registered model provider `togetherai`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `azure`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `bedrock`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `cohere`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `gpt4all`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `huggingface_hub`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `mistralai`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `ollama`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `openai`.
[I 2024-11-22 10:41:40.936 AiExtension] Registered embeddings model provider `qianfan`.
[I 2024-11-22 10:41:40.942 AiExtension] Registered providers.
[I 2024-11-22 10:41:40.942 AiExtension] Registered jupyter_ai server extension
[I 2024-11-22 10:41:41.231 AiExtension] Registered context provider `file`.
[I 2024-11-22 10:41:41.232 AiExtension] Initialized Jupyter AI server extension in 3149 ms.
[I 2024-11-22 10:41:41.233 ServerApp] jupyter_ai | extension was successfully loaded.
[I 2024-11-22 10:41:41.235 ServerApp] jupyter_lsp | extension was successfully loaded.
[I 2024-11-22 10:41:41.236 ServerApp] jupyter_server_terminals | extension was successfully loaded.
[I 2024-11-22 10:41:41.237 LabApp] JupyterLab extension loaded from /opt/conda/lib/python3.11/site-packages/jupyterlab
[I 2024-11-22 10:41:41.237 LabApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 2024-11-22 10:41:41.237 LabApp] Extension Manager is 'pypi'.
[I 2024-11-22 10:41:41.291 ServerApp] jupyterlab | extension was successfully loaded.
[I 2024-11-22 10:41:41.294 ServerApp] notebook | extension was successfully loaded.
[I 2024-11-22 10:41:41.295 ServerApp] The port 8888 is already in use, trying another port.
[I 2024-11-22 10:41:41.295 ServerApp] Serving notebooks from local directory: /home/jovyan
[I 2024-11-22 10:41:41.295 ServerApp] Jupyter Server 2.14.2 is running at:
[I 2024-11-22 10:41:41.295 ServerApp] http://localhost:8889/lab?token=d1a97ac9ca8a95fccb9d37f45ae2a32f9853041f63517ce5
[I 2024-11-22 10:41:41.295 ServerApp] http://127.0.0.1:8889/lab?token=d1a97ac9ca8a95fccb9d37f45ae2a32f9853041f63517ce5
[I 2024-11-22 10:41:41.295 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 2024-11-22 10:41:41.299 ServerApp] No web browser found: Error('could not locate runnable browser').
[C 2024-11-22 10:41:41.299 ServerApp]
To access the server, open this file in a browser:
file:///tmp/jupyter_runtime/jpserver-383-open.html
Or copy and paste one of these URLs:
http://localhost:8889/lab?token=d1a97ac9ca8a95fccb9d37f45ae2a32f9853041f63517ce5
http://127.0.0.1:8889/lab?token=d1a97ac9ca8a95fccb9d37f45ae2a32f9853041f63517ce5
[I 2024-11-22 10:41:41.523 ServerApp] Skipped non-installed server(s): bash-language-server, dockerfile-language-server-nodejs, javascript-typescript-langserver, jedi-language-server, julia-language-server, pyright, python-language-server, python-lsp-server, r-languageserver, sql-language-server, texlab, typescript-language-server, unified-language-server, vscode-css-languageserver-bin, vscode-html-languageserver-bin, vscode-json-languageserver-bin, yaml-language-server
^C[I 2024-11-22 10:42:17.991 ServerApp] interrupted
[I 2024-11-22 10:42:17.991 ServerApp] Serving notebooks from local directory: /home/jovyan
0 active kernels
Jupyter Server 2.14.2 is running at:
http://localhost:8889/lab?token=d1a97ac9ca8a95fccb9d37f45ae2a32f9853041f63517ce5
http://127.0.0.1:8889/lab?token=d1a97ac9ca8a95fccb9d37f45ae2a32f9853041f63517ce5
Shut down this Jupyter server (y/[n])? y
[C 2024-11-22 10:42:19.982 ServerApp] Shutdown confirmed
[I 2024-11-22 10:42:19.983 ServerApp] Shutting down 6 extensions
[I 2024-11-22 10:42:19.983 AiExtension] Closing Dask client.
</details>
|
open
|
2024-11-22T10:42:56Z
|
2025-02-09T13:57:53Z
|
https://github.com/jupyterlab/jupyter-ai/issues/1117
|
[
"bug"
] |
IvanLapchenko
| 3
|
biolab/orange3
|
scikit-learn
| 7,033
|
Web version
|
Hi, can you tell me if there is a web version that can be deployed directly, such as using vue? Thanks.
|
closed
|
2025-02-19T02:55:46Z
|
2025-02-19T08:29:07Z
|
https://github.com/biolab/orange3/issues/7033
|
[] |
WilliaJing
| 0
|
autogluon/autogluon
|
scikit-learn
| 4,371
|
Adding F2 to evaluation metrics
|
## Description
Please add F2 as an evaluation metric. It is very useful when modeling with an emphasis on recall. Even better than F2 would perhaps be fbeta, which allows you to specify the degree to which recall is more important.
## References
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html
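In the meantime, a custom scorer can approximate this. Here is a minimal sketch (my own, using AutoGluon's documented `make_scorer` helper, not a built-in F2 metric):
```python
# Minimal sketch: wrap sklearn's fbeta_score (beta=2 weights recall higher) as a custom
# AutoGluon scorer until F2/fbeta is available as a built-in eval_metric.
from sklearn.metrics import fbeta_score
from autogluon.core.metrics import make_scorer

f2_scorer = make_scorer(
    name="f2",
    score_func=lambda y_true, y_pred: fbeta_score(y_true, y_pred, beta=2),
    optimum=1,
    greater_is_better=True,
)
# Assumed usage: TabularPredictor(label="target", eval_metric=f2_scorer).fit(train_data)
```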
|
open
|
2024-08-07T17:23:48Z
|
2024-11-25T23:04:34Z
|
https://github.com/autogluon/autogluon/issues/4371
|
[
"enhancement",
"module: tabular"
] |
jack-hillesheim
| 0
|
CTFd/CTFd
|
flask
| 2,307
|
Forgot password email is not sent
|
**Environment**:
- CTFd Version/Commit: 3.5.0
- Operating System: Docker Container on Linux
- Web Browser and Version: Chrome 113.0.5672.127
**What happened?**
We set up the email server (one we have used in many production environments) in the admin panel. We click the forgot password button on the front page and fill in the registered email in the field; we get a response saying "If that account exists you will receive an email, please check your inbox", but after that we don't receive any forgot password email in the mailbox.
**What did you expect to happen?**
Receive the forgot password email after clicking forgot password on the login page.
**How to reproduce your issue**
Fill in a registered email on the forgot password page.
**Any associated stack traces or error logs**
no error logs
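One way to narrow this down (my own suggestion, not a CTFd procedure) is to confirm that the same SMTP settings can send mail outside CTFd. A minimal sketch with placeholder host and credentials:
```python
# Minimal sketch (independent of CTFd): verify the configured SMTP server can actually
# send mail, to rule out a mail-server problem before debugging CTFd itself.
# All host, port, and credential values below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "SMTP test"
msg["From"] = "noreply@example.com"   # assumed sender configured in the admin panel
msg["To"] = "user@example.com"        # the registered email that never receives mail
msg.set_content("If you can read this, the SMTP settings are fine.")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("smtp-user", "smtp-password")
    server.send_message(msg)
```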
|
closed
|
2023-05-23T13:02:45Z
|
2023-07-12T03:44:03Z
|
https://github.com/CTFd/CTFd/issues/2307
|
[] |
Gary827
| 1
|
pydantic/pydantic-core
|
pydantic
| 1,504
|
build error for python 3.14 (alpha)
|
```
Failed to build pydantic-core
qa: exit 1 (13.40 seconds) /home/runner/work/luma.core/luma.core> python -I -m pip install '.[qa]' pid=2954
Compiling static_assertions v1.1.0
Compiling cfg-if v1.0.0
Compiling memchr v2.7.4
Compiling pyo3-ffi v0.22.2
Compiling pyo3-macros-backend v0.22.2
error: failed to run custom build command for `pyo3-ffi v0.22.2`
Caused by:
process didn't exit successfully: `/tmp/pip-install-vmyi7r9j/pydantic-core_f98f83e4570a40c49a602e2d60c388c9/target/release/build/pyo3-ffi-b6cb14e247bfd858/build-script-build` (exit status: 1)
--- stdout
cargo:rustc-check-cfg=cfg(Py_LIMITED_API)
cargo:rustc-check-cfg=cfg(PyPy)
cargo:rustc-check-cfg=cfg(GraalPy)
cargo:rustc-check-cfg=cfg(py_sys_config, values("Py_DEBUG", "Py_REF_DEBUG", "Py_TRACE_REFS", "COUNT_ALLOCS"))
cargo:rustc-check-cfg=cfg(invalid_from_utf8_lint)
cargo:rustc-check-cfg=cfg(pyo3_disable_reference_pool)
cargo:rustc-check-cfg=cfg(pyo3_leak_on_drop_without_reference_pool)
cargo:rustc-check-cfg=cfg(diagnostic_namespace)
cargo:rustc-check-cfg=cfg(c_str_lit)
cargo:rustc-check-cfg=cfg(Py_3_7)
cargo:rustc-check-cfg=cfg(Py_3_8)
cargo:rustc-check-cfg=cfg(Py_3_9)
cargo:rustc-check-cfg=cfg(Py_3_10)
cargo:rustc-check-cfg=cfg(Py_3_11)
cargo:rustc-check-cfg=cfg(Py_3_12)
cargo:rustc-check-cfg=cfg(Py_3_13)
cargo:rerun-if-env-changed=PYO3_CROSS
cargo:rerun-if-env-changed=PYO3_CROSS_LIB_DIR
cargo:rerun-if-env-changed=PYO3_CROSS_PYTHON_VERSION
cargo:rerun-if-env-changed=PYO3_CROSS_PYTHON_IMPLEMENTATION
cargo:rerun-if-env-changed=PYO3_PRINT_CONFIG
cargo:rerun-if-env-changed=PYO3_USE_ABI3_FORWARD_COMPATIBILITY
--- stderr
error: the configured Python interpreter version (3.14) is newer than PyO3's maximum supported version (3.13)
= help: please check if an updated version of PyO3 is available. Current version: 0.22.2
= help: set PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1 to suppress this check and build anyway using the stable ABI
warning: build failed, waiting for other jobs to finish...
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit status: 101": `env -u CARGO PYO3_ENVIRONMENT_SIGNATURE="cpython-3.14-64bit" PYO3_PYTHON="/home/runner/work/luma.core/luma.core/.tox/qa/bin/python" PYTHON_SYS_EXECUTABLE="/home/runner/work/luma.core/luma.core/.tox/qa/bin/python" "cargo" "rustc" "--features" "pyo3/extension-module" "--message-format" "json-render-diagnostics" "--manifest-path" "/tmp/pip-install-vmyi7r9j/pydantic-core_f98f83e4570a40c49a602e2d60c388c9/Cargo.toml" "--release" "--lib" "--crate-type" "cdylib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/home/runner/work/luma.core/luma.core/.tox/qa/bin/python', '--compatibility', 'off'] returned non-zero exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pydantic-core
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (pydantic-core)
```
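For anyone hitting this before official 3.14 support lands, the error output itself points at a possible (unverified) workaround: opting into PyO3's stable-ABI forward compatibility when building the wheel. A minimal sketch, assuming you really do want to build from source on the alpha interpreter:
```python
# Minimal sketch (my assumption, not a pydantic-core recommendation): export the PyO3
# override named in the build log and rebuild the wheel from source.
import os
import subprocess

env = dict(os.environ, PYO3_USE_ABI3_FORWARD_COMPATIBILITY="1")
subprocess.run(
    ["python", "-m", "pip", "install", "--no-binary", "pydantic-core", "pydantic-core"],
    env=env,
    check=True,
)
```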
|
closed
|
2024-10-26T13:44:15Z
|
2025-03-21T11:52:54Z
|
https://github.com/pydantic/pydantic-core/issues/1504
|
[] |
thijstriemstra
| 2
|
huggingface/transformers
|
deep-learning
| 36,548
|
Facing issue while getting model from RAG from_pretrained
|
**Code**
```python
# Assumed imports (not shown in the original snippet); `dataset` is assumed to be an
# already-loaded datasets.Dataset.
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

# Initialize the tokenizer and model
model_name = "facebook/rag-token-nq"
tokenizer = RagTokenizer.from_pretrained(model_name)
model = RagTokenForGeneration.from_pretrained(model_name)

# Initialize the retriever
retriever = RagRetriever.from_pretrained(model_name)

# Tokenization function
def tokenize_function(examples):
    return tokenizer(
        examples['text'],
        truncation=True,
        padding='max_length',
        max_length=512
    )

# Tokenize the dataset
tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=dataset.column_names
)
```
**Error**
```
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/'. Use `repo_type` argument if needed.
The above exception was the direct cause of the following exception:
OSError Traceback (most recent call last)
OSError: Incorrect path_or_model_id: 'https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/rag/retrieval_rag.py in _resolve_path(self, index_path, filename)
    122     f"- or '{index_path}' is the correct path to a directory containing a file named {filename}.\n\n"
    123 )
--> 124 raise EnvironmentError(msg)
    125 if is_local:
    126     logger.info(f"loading file {resolved_archive_file}")
OSError: Can't load 'psgs_w100.tsv.pkl'. Make sure that:
- 'https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/' is a correct remote path to a directory containing a file named psgs_w100.tsv.pkl
- or 'https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/' is the correct path to a directory containing a file named psgs_w100.tsv.pkl.
```
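A commonly suggested way to sidestep the stale `wiki_dpr` download path (an assumption on my part, not a confirmed fix for this exact setup) is to load the retriever with the dummy index, which avoids pulling `psgs_w100.tsv.pkl` from the Google Storage URL:
```python
# Minimal sketch using documented RAG arguments; the dummy index is tiny and only suitable
# for smoke-testing the pipeline, not for real retrieval quality.
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration

model_name = "facebook/rag-token-nq"
tokenizer = RagTokenizer.from_pretrained(model_name)
retriever = RagRetriever.from_pretrained(model_name, index_name="exact", use_dummy_dataset=True)
model = RagTokenForGeneration.from_pretrained(model_name, retriever=retriever)
```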
|
open
|
2025-03-05T03:18:44Z
|
2025-03-05T03:18:44Z
|
https://github.com/huggingface/transformers/issues/36548
|
[] |
MAHESH18TECH
| 0
|
oegedijk/explainerdashboard
|
plotly
| 93
|
Shap Dependence toggle outlier treatment
|
Problem: sometimes it is very useful to look at SHAP values (the SHAP Dependence plot) both with and without outliers.
It would be very useful if there were a toggle switch on the plot that enables/disables outlier treatment.
Outliers often make a mess of the SHAP dependence plot, and when they are present you often cannot see the correctly colored interaction with dependent features. We can of course treat them before passing data to the explainer, but looking at the untreated data is also very useful, so having both options would be great!
What do you think about it @oegedijk ?
Maybe we could provide both datasets (with and without treatment), or just an option to calculate SHAP values internally on both the original dataset and the outlier-treated dataset?
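Until something like this exists in the dashboard, a rough workaround (my own sketch, not existing explainerdashboard behaviour) is to build two explainers, one on the raw data and one on a winsorized copy, and compare their dependence plots:
```python
# Minimal sketch: clip a feature to its 1st/99th percentiles so the dependence plot can be
# compared with and without extreme values. The column name "income" is hypothetical.
import pandas as pd

def clip_outliers(X: pd.DataFrame, col: str, lower: float = 0.01, upper: float = 0.99) -> pd.DataFrame:
    X = X.copy()
    lo, hi = X[col].quantile([lower, upper])
    X[col] = X[col].clip(lo, hi)
    return X

# X_clipped = clip_outliers(X_test, "income")
# ClassifierExplainer(model, X_clipped, y_test)  vs  ClassifierExplainer(model, X_test, y_test)
```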
|
closed
|
2021-03-02T15:20:13Z
|
2021-04-06T07:01:12Z
|
https://github.com/oegedijk/explainerdashboard/issues/93
|
[] |
oleg-savko
| 15
|
nalepae/pandarallel
|
pandas
| 73
|
Python quit unexpectedly
|
Hi guys,
I got python crashing on my MacBook, even with only one process. Has it happened to anyone before?
Thanks,
Hector
|
open
|
2020-01-24T12:24:41Z
|
2024-04-27T07:00:55Z
|
https://github.com/nalepae/pandarallel/issues/73
|
[] |
hector-orkestro
| 3
|
LAION-AI/Open-Assistant
|
python
| 2,861
|
License of OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
|
I would like to understand whether the apache 2.0 license on [this](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5) particular model card is appropriate.
I noticed a related discussion about this [here](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5/discussions/1).
|
closed
|
2023-04-23T17:40:32Z
|
2023-04-24T20:11:18Z
|
https://github.com/LAION-AI/Open-Assistant/issues/2861
|
[
"question"
] |
debraj135
| 3
|
huggingface/transformers
|
python
| 36,766
|
ValueError: weight is on the meta device, we need a `value` to put in on 0. `Gemma3`
|
### System Info
**Error trace**
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[64], line 2
1 tokenizer = AutoTokenizer.from_pretrained(model_id)
----> 2 model = Gemma3ForCausalLM.from_pretrained(model_id, device_map="auto")
3 input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
4 outputs = model.generate(**input_ids, max_new_tokens=100)
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/transformers/modeling_utils.py:273](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/transformers/modeling_utils.py#line=272), in restore_default_torch_dtype.<locals>._wrapper(*args, **kwargs)
271 old_dtype = torch.get_default_dtype()
272 try:
--> 273 return func(*args, **kwargs)
274 finally:
275 torch.set_default_dtype(old_dtype)
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/transformers/modeling_utils.py:4531](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/transformers/modeling_utils.py#line=4530), in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, weights_only, *model_args, **kwargs)
4528 device_map_kwargs["offload_buffers"] = True
4530 if not is_fsdp_enabled() and not is_deepspeed_zero3_enabled():
-> 4531 dispatch_model(model, **device_map_kwargs)
4533 # This is needed for the RotaryEmbedding, which was not initialized on the correct device as it is
4534 # not part of the state_dict (persistent=False)
4535 if device_mesh is not None:
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/big_modeling.py:420](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/big_modeling.py#line=419), in dispatch_model(model, device_map, main_device, state_dict, offload_dir, offload_index, offload_buffers, skip_keys, preload_module_classes, force_hooks)
415 tied_params_map[data_ptr] = {}
417 # Note: To handle the disk offloading case, we can not simply use weights_map[param_name].data_ptr() as the reference pointer,
418 # as we have no guarantee that safetensors' `file.get_tensor()` will always give the same pointer.
--> 420 attach_align_device_hook_on_blocks(
421 model,
422 execution_device=execution_device,
423 offload=offload,
424 offload_buffers=offload_buffers,
425 weights_map=weights_map,
426 skip_keys=skip_keys,
427 preload_module_classes=preload_module_classes,
428 tied_params_map=tied_params_map,
429 )
431 # warn if there is any params on the meta device
432 offloaded_devices_str = " and ".join(
433 [device for device in set(device_map.values()) if device in ("cpu", "disk")]
434 )
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py:656](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py#line=655), in attach_align_device_hook_on_blocks(module, execution_device, offload, weights_map, offload_buffers, module_name, skip_keys, preload_module_classes, tied_params_map)
654 for child_name, child in module.named_children():
655 child_name = f"{module_name}.{child_name}" if len(module_name) > 0 else child_name
--> 656 attach_align_device_hook_on_blocks(
657 child,
658 execution_device=execution_device,
659 offload=offload,
660 weights_map=weights_map,
661 offload_buffers=offload_buffers,
662 module_name=child_name,
663 preload_module_classes=preload_module_classes,
664 skip_keys=skip_keys,
665 tied_params_map=tied_params_map,
666 )
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py:656](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py#line=655), in attach_align_device_hook_on_blocks(module, execution_device, offload, weights_map, offload_buffers, module_name, skip_keys, preload_module_classes, tied_params_map)
654 for child_name, child in module.named_children():
655 child_name = f"{module_name}.{child_name}" if len(module_name) > 0 else child_name
--> 656 attach_align_device_hook_on_blocks(
657 child,
658 execution_device=execution_device,
659 offload=offload,
660 weights_map=weights_map,
661 offload_buffers=offload_buffers,
662 module_name=child_name,
663 preload_module_classes=preload_module_classes,
664 skip_keys=skip_keys,
665 tied_params_map=tied_params_map,
666 )
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py:616](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py#line=615), in attach_align_device_hook_on_blocks(module, execution_device, offload, weights_map, offload_buffers, module_name, skip_keys, preload_module_classes, tied_params_map)
607 if module_name in execution_device and module_name in offload and not offload[module_name]:
608 hook = AlignDevicesHook(
609 execution_device=execution_device[module_name],
610 offload_buffers=offload_buffers,
(...)
614 tied_params_map=tied_params_map,
615 )
--> 616 add_hook_to_module(module, hook)
617 attach_execution_device_hook(module, execution_device[module_name], tied_params_map=tied_params_map)
618 elif module_name in execution_device and module_name in offload:
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py:161](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py#line=160), in add_hook_to_module(module, hook, append)
158 old_forward = module.forward
159 module._old_forward = old_forward
--> 161 module = hook.init_hook(module)
162 module._hf_hook = hook
164 def new_forward(module, *args, **kwargs):
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py:283](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/hooks.py#line=282), in AlignDevicesHook.init_hook(self, module)
281 if not self.offload and self.execution_device is not None:
282 for name, _ in named_module_tensors(module, recurse=self.place_submodules):
--> 283 set_module_tensor_to_device(module, name, self.execution_device, tied_params_map=self.tied_params_map)
284 elif self.offload:
285 self.original_devices = {
286 name: param.device for name, param in named_module_tensors(module, recurse=self.place_submodules)
287 }
File [~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/utils/modeling.py:364](http://localhost:3126/lab/tree/users/uig2924/notebooks/~/.conda/envs/ai4scisci/lib/python3.12/site-packages/accelerate/utils/modeling.py#line=363), in set_module_tensor_to_device(module, tensor_name, device, value, dtype, fp16_statistics, tied_params_map)
361 return
363 if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None:
--> 364 raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put in on {device}.")
366 param = module._parameters[tensor_name] if tensor_name in module._parameters else None
367 param_cls = type(param)
ValueError: weight is on the meta device, we need a `value` to put in on 0.
```
**Code to reproduce**
```python
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Gemma3ForCausalLM.from_pretrained(model_id, device_map="auto")
input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=100)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
taken from https://huggingface.co/docs/transformers/main/en/model_doc/gemma3#transformers.Gemma3TextConfig.
**Hardware accelerator**
```python
!nvidia-smi -L
GPU 0: NVIDIA A100 80GB PCIe (UUID: GPU-cf881215-9c35-ed1c-cfc3-fb7a629e34fe)
```
**Library information**
```python
print(transformers.__version__)
print(torch.__version__)
4.50.0.dev0
2.4.1+cu121
```
**Kernel**
```shell
Linux qgpu0518 3.10.0-1160.95.1.el7.x86_64 #1 SMP Fri Jun 23 08:44:55 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
```
**OS**
```shell
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: RedHatEnterpriseServer
Description: Red Hat Enterprise Linux Server release 7.9 (Maipo)
Release: 7.9
Codename: Maipo
```
> NOTE: I've disabled the `device_map`, yet the meta device issue wouldn't go away. The only circumstance in which the above snippet works is when I use `cpu`.
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code
```python
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Gemma3ForCausalLM.from_pretrained(model_id, device_map="auto")
input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=100)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
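One thing worth trying (an assumption on my part, not a confirmed fix) is to bypass the accelerate dispatch path entirely and place the weights on a single GPU explicitly. A minimal sketch, with a hypothetical checkpoint name since the issue does not show `model_id`:
```python
# Minimal sketch: load without device_map so accelerate's dispatch/meta-device path is
# never used, then move the realized weights to the single A100.
import torch
from transformers import AutoTokenizer, Gemma3ForCausalLM

model_id = "google/gemma-3-1b-it"  # hypothetical; substitute the checkpoint from the report
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = Gemma3ForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

input_ids = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```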
### Expected behavior
`ValueError`
```python
ValueError: weight is on the meta device, we need a `value` to put in on 0.
```
|
closed
|
2025-03-17T16:07:05Z
|
2025-03-17T19:25:51Z
|
https://github.com/huggingface/transformers/issues/36766
|
[
"bug"
] |
akhilpandey95
| 2
|
twopirllc/pandas-ta
|
pandas
| 179
|
ImportError: cannot import name 'version' from partially initialized module 'pandas_ta'
|
pandas_ta-0.2.23b
```python
import pandas_ta as ta
```
<br/>
Error:
```sh
File "/.pyenv/versions/covesting/lib/python3.8/site-packages/pandas_ta/__init__.py", line 96, in <module>
from pandas_ta.core import *
File "/.pyenv/versions/covesting/lib/python3.8/site-packages/pandas_ta/core.py", line 12, in <module>
from pandas_ta import version, Category
ImportError: cannot import name 'version' from partially initialized module 'pandas_ta' (most likely due to a circular import) (/.pyenv/versions/covesting/lib/python3.8/site-packages/pandas_ta/__init__.py)
```
Using Python 3.8.5 (default, Sep 11 2020, 11:13:06) on mac
I can use the git repo version directly, which is 28 I think, but then using `.ta.<anything>` does not work and does not add anything onto the dataframe.
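A quick sanity check worth running (my own suggestion; shadowing is a common cause of "partially initialized module" circular-import errors) before blaming the release itself:
```python
# Minimal sketch: confirm the import resolves to site-packages and not to a local
# pandas_ta.py file or pandas_ta/ directory in the working tree.
import importlib.util

spec = importlib.util.find_spec("pandas_ta")
print(spec.origin)  # should point inside site-packages, not the current project folder
```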
|
closed
|
2020-12-29T11:46:09Z
|
2021-02-15T22:21:44Z
|
https://github.com/twopirllc/pandas-ta/issues/179
|
[
"bug"
] |
Tjorriemorrie
| 4
|
httpie/cli
|
python
| 812
|
config.json occasionally cannot be loaded in a multiprocess use case
|
It seems that during an HTTP request, the file config.json is cleared and rewritten. In a multiprocess use case, httpie sometimes reads an empty config file and raises an error.
The thrown exceptions are:
```python
Traceback (most recent call last):
File "c:\python36\lib\site-packages\httpie\config.py", line 47, in load
data = json.load(f)
File "c:\python36\lib\json\__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "c:\python36\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "c:\python36\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "c:\python36\lib\json\decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Python36\Scripts\http.exe\__main__.py", line 9, in <module>
File "c:\python36\lib\site-packages\httpie\__main__.py", line 11, in main
sys.exit(main())
File "c:\python36\lib\site-packages\httpie\core.py", line 193, in main
if env.config.default_options:
File "c:\python36\lib\site-packages\httpie\context.py", line 84, in config
self._config.load()
File "c:\python36\lib\site-packages\httpie\config.py", line 96, in load
super(Config, self).load()
File "c:\python36\lib\site-packages\httpie\config.py", line 51, in load
(type(self).__name__, str(e), self.path)
ValueError: Invalid Config JSON: Expecting value: line 1 column 1 (char 0) [D:\Users\xxxx\AppData\Roaming\\httpie\config.json]
Traceback (most recent call last):
File "c:\python36\lib\site-packages\httpie\config.py", line 47, in load
data = json.load(f)
File "c:\python36\lib\json\__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "c:\python36\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "c:\python36\lib\json\decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "c:\python36\lib\json\decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Python36\Scripts\http.exe\__main__.py", line 9, in <module>
File "c:\python36\lib\site-packages\httpie\__main__.py", line 11, in main
sys.exit(main())
File "c:\python36\lib\site-packages\httpie\core.py", line 193, in main
if env.config.default_options:
File "c:\python36\lib\site-packages\httpie\context.py", line 84, in config
self._config.load()
File "c:\python36\lib\site-packages\httpie\config.py", line 96, in load
super(Config, self).load()
File "c:\python36\lib\site-packages\httpie\config.py", line 51, in load
(type(self).__name__, str(e), self.path)
ValueError: Invalid Config JSON: Expecting value: line 1 column 1 (char 0) [D:\Users\xxxx\AppData\Roaming\\httpie\config.json]
```
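Until the config write path is made safe for concurrent use, one mitigation (my own sketch, not an httpie-endorsed fix) is to give each worker its own `HTTPIE_CONFIG_DIR` so processes never race on the same config.json:
```python
# Minimal sketch: run each httpie invocation with a private config directory.
import os
import subprocess
import tempfile

def run_http(args):
    env = dict(os.environ, HTTPIE_CONFIG_DIR=tempfile.mkdtemp(prefix="httpie-"))
    return subprocess.run(["http", *args], env=env, capture_output=True, text=True)

print(run_http(["GET", "https://httpbin.org/get"]).stdout)
```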
|
closed
|
2019-10-31T14:09:16Z
|
2019-12-02T16:45:32Z
|
https://github.com/httpie/cli/issues/812
|
[] |
onriv
| 2
|
coqui-ai/TTS
|
python
| 3,483
|
[Bug] Update current_lr XTTS
|
### Describe the bug
Hi!
Is the learning rate update actually happening?


Is the learning rate update actually happening? After 10,000 steps it should be 0.004, but I constantly see the initial learning rate. It's annoying.
### To Reproduce
```python
def main():
    model_args = GPTArgs(
        max_conditioning_length=132300,
        min_conditioning_length=66150,
        debug_loading_failures=False,
        max_wav_length=384000,
        max_text_length=180,
        mel_norm_file=MEL_NORM_FILE,
        dvae_checkpoint=DVAE_CHECKPOINT,
        xtts_checkpoint=XTTS_CHECKPOINT,
        tokenizer_file=TOKENIZER_FILE,
        gpt_num_audio_tokens=1026,
        gpt_start_audio_token=1024,
        gpt_stop_audio_token=1025,
        gpt_use_masking_gt_prompt_approach=True,
        gpt_use_perceiver_resampler=True,
    )
    audio_config = XttsAudioConfig(sample_rate=22050, dvae_sample_rate=22050, output_sample_rate=24000)
    config = GPTTrainerConfig(
        epochs=1000,
        output_path=OUT_PATH,
        model_args=model_args,
        run_name=RUN_NAME,
        project_name=PROJECT_NAME,
        run_description="""
            GPT XTTS training
            """,
        dashboard_logger=DASHBOARD_LOGGER,
        logger_uri=LOGGER_URI,
        audio=audio_config,
        batch_size=BATCH_SIZE,
        batch_group_size=48,
        eval_batch_size=BATCH_SIZE,
        num_loader_workers=16,
        eval_split_max_size=512,
        print_step=1130,
        plot_step=1130,
        log_model_step=1000,
        save_step=100000,
        save_n_checkpoints=1,
        save_checkpoints=True,
        # text_cleaner="phoneme_cleaners",
        use_phonemes=True,
        phoneme_language="ru-ru",
        # mixed_precision=True,
        # target_loss="loss",
        print_eval=False,
        optimizer="AdamW",
        optimizer_wd_only_on_weights=OPTIMIZER_WD_ONLY_ON_WEIGHTS,
        optimizer_params={"betas": [0.9, 0.96], "eps": 1e-8, "weight_decay": 1e-2},
        lr=5e-04,
        lr_scheduler="StepwiseGradualLR",
        lr_scheduler_params={
            "gradual_learning_rates": [
                [0, 5e-4],
                [10000, 4e-4],
                [20000, 3e-4],
                [30000, 2e-4],
                [40000, 1e-4],
                [50000, 9e-5],
                [60000, 8e-5],
                [70000, 7e-5],
                [80000, 6e-5],
            ]
        },
        scheduler_after_epoch=True,
        test_sentences=[
            {
                "text": "На стуле у оловянного тазика висела куртка Тамары, две узкие кровати были сдвинуты, шерстяные одеяла смяты, окно оставили открытым, и через него проникало осеннее солнце.",
                "speaker_wav": SPEAKER_REFERENCE,
                "language": LANGUAGE,
            },
        ],
    )

    # init the model from config
    model = GPTTrainer.init_from_config(config)

    # load training samples
    train_samples, eval_samples = load_tts_samples(
        DATASETS_CONFIG_LIST,
        eval_split=True,
        eval_split_max_size=config.eval_split_max_size,
        eval_split_size=config.eval_split_size,
    )

    import torch
    torch.cuda.set_device(0)

    # init the trainer and
    trainer = Trainer(
        TrainerArgs(
            restore_path=None,  # xtts checkpoint is restored via xtts_checkpoint key so no need of restore it using Trainer restore_path parameter
            skip_train_epoch=False,
            start_with_eval=START_WITH_EVAL,
            grad_accum_steps=GRAD_ACUMM_STEPS,
        ),
        config,
        output_path=OUT_PATH,
        model=model,
        train_samples=train_samples,
        eval_samples=eval_samples,
    )
    trainer.fit()
```
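For reference, here is a minimal stand-alone sketch (not Coqui code, just an illustration) of what the `gradual_learning_rates` table above should produce when it is keyed by optimizer step. If the scheduler is instead only stepped once per epoch (note `scheduler_after_epoch=True`), the counter it sees stays far below 10000 and the learning rate would never leave 5e-4, which would match the flat curve in the screenshots, though I have not confirmed that this is what the Trainer does internally.

```python
# Assumed semantics of a step-keyed stepwise schedule.
SCHEDULE = [(0, 5e-4), (10_000, 4e-4), (20_000, 3e-4), (30_000, 2e-4), (40_000, 1e-4)]

def lr_at(step: int) -> float:
    lr = SCHEDULE[0][1]
    for threshold, value in SCHEDULE:
        if step >= threshold:
            lr = value
    return lr

print(lr_at(9_999), lr_at(10_000), lr_at(25_000))  # 0.0005 0.0004 0.0003
```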
### Expected behavior
Correct learning rate
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 4090",
"NVIDIA GeForce GTX 1060 6GB",
"NVIDIA GeForce GTX 1060 6GB"
],
"available": true,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.2",
"TTS": "0.22.0",
"numpy": "1.26.2"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.11.7",
"version": "#101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023"
}
}
```
### Additional context
_No response_
|
closed
|
2024-01-01T08:50:00Z
|
2024-03-21T14:53:07Z
|
https://github.com/coqui-ai/TTS/issues/3483
|
[
"bug",
"wontfix"
] |
insomnia777
| 2
|
Asabeneh/30-Days-Of-Python
|
numpy
| 189
|
programacion
|
closed
|
2022-02-14T18:27:35Z
|
2023-07-08T22:21:37Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/189
|
[] |
Carlos188125
| 0
|
|
pytest-dev/pytest-qt
|
pytest
| 251
|
[Question] How to use with QML applications?
|
I came across #204; but as the issue is closed and more than 6 months old, I figured I would open a new one.
I'm trying to make use of pytest-qt with a QML application, my understanding is that qtbot needs to add a widget, but from `QQmlApplicationEngine` I can only get a `QWindow`, not a widget.
```python
@pytest.fixture()
def window(qtbot):
engine = QQmlApplicationEngine()
engine.load(QUrl("main.qml")
window = engine.rootObjects()[0] # this gives a QWindow object
qtbot.add_widget(window) # probably not what I actually want to do
window.show()
return window
def test_window_title(window):
assert window.title() == "some title"
```
Running this results in:
```
____________________ ERROR at teardown of test_window_title ____________________
item = <Function 'test_window_title'>
@pytest.mark.hookwrapper
@pytest.mark.trylast
def pytest_runtest_teardown(item):
"""
Hook called after each test tear down, to process any pending events and
avoiding leaking events to the next test. Also, if exceptions have
been captured during fixtures teardown, fail the test.
"""
_process_events()
> _close_widgets(item)
.venv/lib/python3.7/site-packages/pytestqt/plugin.py:181:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
item = <Function 'test_window_title'>
def _close_widgets(item):
"""
Close all widgets registered in the pytest item.
"""
widgets = getattr(item, "qt_widgets", None)
if widgets:
for w in item.qt_widgets:
w = w()
if w is not None:
> w.close()
E RuntimeError: Internal C++ object (PySide2.QtGui.QWindow) already deleted.
```
Should I be trying something else?
BTW thanks for the awesome pytest plugin, it's been very helpful w/ my non-QML based PyQt based applications.
|
closed
|
2018-12-07T19:23:18Z
|
2018-12-14T23:36:34Z
|
https://github.com/pytest-dev/pytest-qt/issues/251
|
[] |
j9ac9k
| 3
|
Skyvern-AI/skyvern
|
automation
| 1,166
|
Error while running docker compose using podman
|
Hi, I am trying to run `docker compose` with podman and I get the error below:

```
PS C:\Users\xxxxx\skyvern\skyvern> podman compose up -d
>>>> Executing external compose provider "C:\\Users\\xxxxx\\AppData\\Local\\Microsoft\\WindowsApps\\docker-compose.exe". Please see podman-compose(1) for how to disable this message. <<<<
time="2024-11-11T17:38:46+05:30" level=warning msg="C:\\Users\\xxxx\\skyvern\\skyvern\\docker-compose.yml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion"
[+] Running 0/1
- Container skyvern-postgres-1 Creating 0.1s
Error response from daemon: container create: statfs /mnt/c/Users/anuthakur/skyvern/skyvern/postgres-data: no such file or directory
Error: executing C:\Users\xxxx\AppData\Local\Microsoft\WindowsApps\docker-compose.exe up -d: exit status 1
```
|
open
|
2024-11-11T12:11:25Z
|
2024-11-13T05:06:05Z
|
https://github.com/Skyvern-AI/skyvern/issues/1166
|
[
"help wanted"
] |
ThakurAnunaya
| 1
|
jonaswinkler/paperless-ng
|
django
| 152
|
FR: Searchable dropdowns
|
As the list of correspondents might grow, it would be nice to be able to search in the correspondents dropdown.
For example by using [Select2](https://select2.org) or something else.
|
closed
|
2020-12-17T19:41:09Z
|
2020-12-22T15:12:59Z
|
https://github.com/jonaswinkler/paperless-ng/issues/152
|
[
"feature request",
"fixed in next release"
] |
zjean
| 6
|
robotframework/robotframework
|
automation
| 4,759
|
performance drop for remote libraries
|
**The problem**
We are just upgrading from 3.x to 6.0.2. We experienced a drop in performance. Running the test started taking a lot longer: from ~2:30h to ~2:50, from 1:05 to 1:15.
**Analysis**
We have narrowed the problem down to the start-up phase of test suites, and specifically to setting up the remote libraries. The tests themselves seem to run at the same speed as before. In log.html, we see a long delay between the start time of the suite and the start time of the suite setup. In the Robot syslog, we can see that the "Creates keyword" entries run for tens of seconds and even minutes for remote libraries.
We are using jrobotremoteserver. We upgraded from 3.0 to 4.1.0 - with no effect. Only the spurious error messages changed from "_No such handler: get_keyword_tags_" to "_Failed to invoke method get_library_information in class org.robotframework.remoteserver.servlet.ServerMethods: Duplicate key stop_remote_server_"
**During the suite setups, we noticed load spikes in the nodes running the jrobotremoteserver:**
CPU( greenish=Priviledged, brownish=User) - note that the spikes are on the Priviledged side:

Network packets ( grid-lines are for 5Kp/s, 10Kp/s,15 Kp/s, greenish=Received, brownish=Sent):

We have not captured network traffic, but from the figures and metrics, it would seem that the new version of robot framework bombards the remote server for some reason.
We found out that this issue has probably been found out in these tickets:
* https://github.com/robotframework/jrobotremoteserver/issues/58
* https://github.com/d-biehl/robotcode/issues/24
**Workaround**
We also found that in one ticket someone has published a forked version of jrobotremoteserver which fixes the "duplicate key stop_remote_server" error: https://github.com/robotframework/jrobotremoteserver/pull/66. We gave that version a go and the performance issue was resolved: the run times are back to what they used to be, and the metrics no longer show the spikes:


**Conclusions**
I'm reporting this to robot framework instead of jrobotremoteserver - because it would seem that robot framework is doing some bombarding and causes the problem in the jrobotremoteserver. Of course, the issue might be resolved - or rather hidden - when the jrobotremoteserver gets a new release with the seemingly unrelated(?) "Duplicate key stop_remote_server_"-issue resolved.
|
closed
|
2023-05-09T09:11:51Z
|
2023-12-20T08:25:44Z
|
https://github.com/robotframework/robotframework/issues/4759
|
[] |
karniemi
| 4
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 248
|
[BUG] Unable to download
|
Video download (no watermark) | Click to download
-- | --

http://api.douyin.wtf/download?url=https://v.douyin.com/iJp9BVd9/&prefix=true&watermark=false

```json
{
  "status": "endpoint closed",
  "message": "此端点已关闭请在配置文件中开启/This endpoint is closed, please enable it in the configuration file"
}
```


Video download (no watermark): [Click to download](http://api.douyin.wtf/download?url=https://v.douyin.com/iJp9BVd9/&prefix=true&watermark=false)
|
closed
|
2023-08-21T21:13:22Z
|
2023-08-23T07:20:01Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/248
|
[
"BUG"
] |
Kwaiyu
| 2
|
kangvcar/InfoSpider
|
automation
| 13
|
?
|

|
closed
|
2020-08-25T03:19:04Z
|
2020-12-15T03:34:34Z
|
https://github.com/kangvcar/InfoSpider/issues/13
|
[
"bug"
] |
locatomlion
| 3
|
pydata/xarray
|
numpy
| 9,656
|
mypy failure for Variable + Dataset arithmetic in test_typed_ops.py with Python 3.12 only
|
### What is your issue?
I get the following error when I run mypy in a Python 3.12 environment:
```
$ python -m mypy --install-types --non-interactive
xarray/tests/test_typed_ops.py: note: In function "test_dataset_typed_ops":
xarray/tests/test_typed_ops.py:139: error: Argument 1 to "_test" has incompatible type "Variable"; expected "Dataset" [arg-type]
xarray/tests/test_typed_ops.py:140: error: Argument 1 to "_test" has incompatible type "DataArray"; expected "Dataset" [arg-type]
xarray/tests/test_typed_ops.py:151: error: Argument 1 to "_test" has incompatible type "Variable"; expected "Dataset" [arg-type]
xarray/tests/test_typed_ops.py:152: error: Argument 1 to "_test" has incompatible type "DataArray"; expected "Dataset" [arg-type]
xarray/tests/test_typed_ops.py:163: error: Argument 1 to "_test" has incompatible type "Variable"; expected "Dataset" [arg-type]
xarray/tests/test_typed_ops.py:164: error: Argument 1 to "_test" has incompatible type "DataArray"; expected "Dataset" [arg-type]
Found 6 errors in 1 file (checked 167 source files)
```
When I run this using an identical Python 3.11 setup, mypy passes without any errors.
The [offending lines](https://github.com/pydata/xarray/blob/ed32ba722cbc289cd44f931966dedbee46461642/xarray/tests/test_typed_ops.py#L139-L140) are doing arithmetic operations like `Variable + Dataset` or `DataArray + Dataset`.
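For reference, a minimal sketch of the kind of expression being flagged (arbitrary data; only the static types matter here):

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(3))})
var = xr.Variable("x", np.ones(3))
da = ds["a"]

_ = var + ds  # mypy under Python 3.12 flags "Variable" as incompatible
_ = da + ds   # same complaint for DataArray + Dataset
```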
Environment details:
<details>
## Python 3.12
commit: df87f692ea3d68ec90bc19fb227996413ee083a0
python: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 15:57:01) [Clang 17.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 23.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.9.1.dev81+g994550f0
pandas: 2.2.3
numpy: 2.0.2
scipy: 1.14.1
netCDF4: 1.7.1
pydap: 3.5
h5netcdf: 1.4.0
h5py: 3.12.1
zarr: 2.18.3
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: 3.9.0
bottleneck: 1.4.2
dask: 2024.10.0
distributed: 2024.10.0
matplotlib: 3.9.2
cartopy: 0.24.0
seaborn: 0.13.2
numbagg: 0.8.2
fsspec: 2024.9.0
cupy: None
pint: None
sparse: 0.15.4
flox: 0.9.12
numpy_groupies: 0.11.2
setuptools: 75.1.0
pip: 24.2
conda: None
pytest: 8.3.3
mypy: 1.11.2
IPython: None
sphinx: None
## Python 3.11
commit: df87f692ea3d68ec90bc19fb227996413ee083a0
python: 3.11.10 | packaged by conda-forge | (main, Oct 16 2024, 01:26:25) [Clang 17.0.6 ]
python-bits: 64
OS: Darwin
OS-release: 23.6.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.9.1.dev77+gdf87f692
pandas: 2.2.3
numpy: 2.0.2
scipy: 1.14.1
netCDF4: 1.7.1
pydap: 3.5
h5netcdf: 1.4.0
h5py: 3.12.1
zarr: 2.18.3
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: 3.9.0
bottleneck: 1.4.2
dask: 2024.10.0
distributed: 2024.10.0
matplotlib: 3.9.2
cartopy: 0.24.0
seaborn: 0.13.2
numbagg: 0.8.2
fsspec: 2024.9.0
cupy: None
pint: None
sparse: 0.15.4
flox: 0.9.12
numpy_groupies: 0.11.2
setuptools: 75.1.0
pip: 24.2
conda: None
pytest: 8.3.3
mypy: 1.11.2
IPython: None
sphinx: None
</details>
|
closed
|
2024-10-21T21:38:03Z
|
2024-10-28T22:08:25Z
|
https://github.com/pydata/xarray/issues/9656
|
[
"bug",
"topic-typing"
] |
shoyer
| 0
|
TencentARC/GFPGAN
|
deep-learning
| 571
|
Fix a picture
|

|
closed
|
2024-08-24T13:08:57Z
|
2024-08-24T13:10:00Z
|
https://github.com/TencentARC/GFPGAN/issues/571
|
[] |
mn3m28
| 1
|
opengeos/leafmap
|
streamlit
| 379
|
Errors in 00_key_features.ipynb (in colab)
|
## Errors found when running 00_key_features.ipynb on colab:
1. **Missing package: pycrs** - "Add shapefile".
Add a cell with `!pip install pycrs` & execute it before running the related code cell.
2. **Missing import: subprocess** - "Add KML"
```
#[...]
except ImportError:
print('Installing geopandas ...')
import subprocess # missing
```
3. **Missing import: os** - "Add Planet imagery"
```
import os # missing
os.environ["PLANET_API_KEY"] = "12345"
```
That import should be done early on, with the leafmap import cell, imho. No need to repeat `import os` and `import leafmap.heremap as leafmap` as done in "Use heremap plotting backend"
|
closed
|
2023-03-03T00:11:05Z
|
2023-03-04T20:59:17Z
|
https://github.com/opengeos/leafmap/issues/379
|
[
"bug"
] |
CatChenal
| 0
|
explosion/spaCy
|
data-science
| 12,906
|
LOWER does not always work when presented with random casing
|
## How to reproduce the behavior
Create a matcher with the following pattern:
```
[{'LOWER': 'git'}]
```
Then match on this sentence:
```
GitHub is a platform and cloud-based service for software development and version control using gIT.
```
Observe that "gIT" is correctly matched. Then try matching on this sentence:
```
GitHub is a platform and cloud-based service for software development and version control using giT.
```
This time, observe that "giT" is not matched.
[Here's a demo of the problem](https://demos.explosion.ai/matcher?text=GitHub%20is%20a%20platform%20and%20cloud-based%20service%20for%20software%20development%20and%20version%20control%20using%20giT.%0A%0AGitHub%20is%20a%20platform%20and%20cloud-based%20service%20for%20software%20development%20and%20version%20control%20using%20gIT.%0A&model=en_core_web_sm&pattern=%5B%7B%22id%22%3A0%2C%22attrs%22%3A%5B%7B%22name%22%3A%22LOWER%22%2C%22value%22%3A%22git%22%7D%5D%7D%5D) I made using the online rule-based matcher explorer.
Alternatively, here's a code snippet:
```python
from spacy.matcher import Matcher
import spacy
spacy.cli.download('en_core_web_sm')
model = spacy.load('en_core_web_sm')
matcher = Matcher(model.vocab)
matcher.add(key=0, patterns=[[{'LOWER': 'git'}]])
sentence1 = "GitHub is a platform and cloud-based service for software development and version control using gIT."
sentence2 = "GitHub is a platform and cloud-based service for software development and version control using giT."
doc1 = model(sentence1)
matches1 = matcher(doc1, as_spans=True)
print(f"{sentence1} {matches1}")
doc2 = model(sentence2)
matches2 = matcher(doc2, as_spans=True)
print(f"{sentence2} {matches2}")
```
Punctuation seems to play a role, but I'm not sure how. If you remove the period from the end of the failing sentence, the matcher works.
## Your Environment
- **spaCy version:** 3.6.1
- **Platform:** macOS-13.5-arm64-arm-64bit
- **Python version:** 3.9.17
- **Pipelines:** en_core_web_sm (3.6.0)
|
closed
|
2023-08-11T19:46:53Z
|
2023-09-23T00:02:06Z
|
https://github.com/explosion/spaCy/issues/12906
|
[
"feat / matcher",
"feat / tokenizer"
] |
mjsonofharry
| 3
|
open-mmlab/mmdetection
|
pytorch
| 11,400
|
is it possible to load a pretrained model and reinitialize a specific layer?
|
I'm using a YOLOX model with a custom bbox head. I've essentially added a new layer identical to the objectness head (`multi_level_conv_obj`) to predict an additional property. Let's say it's called `multi_level_conv_prop`.
What I want to do is load a pretrained checkpoint of this model and finetune only `multi_level_conv_prop`. I also want to reinitialize the weights of `multi_level_conv_prop` before starting finetuning. Is that possible to do in the config? I think maybe a pre-training hook can do this, but I'm not sure if that's the best way.
And in case this info matters - I'm also using a custom optim wrapper constructor since I couldn't find a way to freeze the entire model except the `multi_level_conv_prop` layer.
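In case it helps frame the question, here is a generic PyTorch-level sketch of the pattern I am after, using a toy stand-in model (this is not the MMDetection config-based way of doing it, which is what I am actually asking about; only the `multi_level_conv_prop` name comes from my setup):

```python
import torch
import torch.nn as nn

# Toy stand-in for the detector; only the layer names matter for the pattern.
model = nn.ModuleDict({
    "multi_level_conv_obj": nn.Conv2d(16, 1, 1),
    "multi_level_conv_prop": nn.Conv2d(16, 1, 1),
})

# 1) Load pretrained weights; strict=False tolerates the newly added head.
#    (commented out because it needs an actual checkpoint file)
# state = torch.load("pretrained.pth", map_location="cpu")
# model.load_state_dict(state, strict=False)

# 2) Re-initialize just the new head.
model["multi_level_conv_prop"].reset_parameters()

# 3) Freeze everything except the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("multi_level_conv_prop")
```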
|
open
|
2024-01-18T13:25:09Z
|
2024-01-18T13:25:24Z
|
https://github.com/open-mmlab/mmdetection/issues/11400
|
[] |
SchizoidBat
| 0
|
xinntao/Real-ESRGAN
|
pytorch
| 77
|
How can the Cupscale GUI call Real-ESRGAN's PyTorch AI network?
|
There is a Cupscale GUI which currently cannot use the AI network Real-ESRGAN (PyTorch); it can only use the Real-ESRGAN (ncnn) network. Could you give some pointers on which parts of the source code to copy out, or what to do, so that this graphical interface can use the Real-ESRGAN (PyTorch) network?
I am new to this and very interested in this deep learning model. Could I add you on WeChat so that I can ask some specific questions?
|
open
|
2021-09-13T05:58:16Z
|
2021-09-19T15:06:12Z
|
https://github.com/xinntao/Real-ESRGAN/issues/77
|
[] |
Battlecraft369
| 5
|
koxudaxi/datamodel-code-generator
|
fastapi
| 1,495
|
Missing constr import when --collapse-root-models and using directory input
|
**Describe the bug**
When using a directory input and multiple input files the resulting python module is missing a `from pydantic import constr`
Note that this occurs for both pydantic v1 and pydantic v2 output types.
**To Reproduce**
```
$ tree schemas
schemas
├── common.yml
└── test.yml
$ datamodel-codegen --input schemas --input-file-type jsonschema --output src --disable-timestamp --enable-version-header --collapse-root-models --output-model-type pydantic_v2.BaseModel
$ tree src
src
├── __init__.py
├── common.py
└── test.py
# Note the missing constr import in the `from pydantic ...` line
$ cat test.py
# generated by datamodel-codegen:
# filename: test.yml
# version: 0.21.4
from __future__ import annotations
from pydantic import BaseModel, Field
from . import common
class Test(BaseModel):
uid: constr(pattern=r'[0-9ABCDEFGHJKMNPQRSTVWXYZ]{26,26}') = Field(
..., description='ulid of this object'
)
```
Example schema:
- schemas/common.yml
```yaml
---
$schema: https://json-schema.org/draft/2020-12/schema
$id: common.yml
definitions:
ulid:
type: string
pattern: '[0-9ABCDEFGHJKMNPQRSTVWXYZ]{26,26}'
```
- schemas/test.yml
```yaml
---
$schema: https://json-schema.org/draft/2020-12/schema
$id: test.yml
title: test
required:
- uid
properties:
uid:
description: ulid of this object
$ref: ./common.yml#/definitions/ulid
```
Used commandline:
```
$ datamodel-codegen --input schemas --input-file-type jsonschema --output src --disable-timestamp --enable-version-header --collapse-root-models --output-model-type pydantic_v2.BaseModel
```
**Expected behavior**
The import in `test.py` should include `constr`
eg:
```python
from pydantic import BaseModel, Field, constr
```
Without the `constr` import, a `pydantic.errors.PydanticUserError` is raised stating that the model '... is not fully defined, you should define `constr`, then call ...rebuild'.
Separately, `common.py` is generated with the `constr` import despite not using it in that module or having any other schema models defined in it and ends up being:
```python
# generated by datamodel-codegen:
# filename: common.yml
# version: 0.21.4
from __future__ import annotations
from typing import Any
from pydantic import RootModel, constr
class Model(RootModel):
root: Any
```
**Version:**
- OS: Linux, debian 12.
- Python version: 3.11
- datamodel-code-generator version: 0.21.4
|
closed
|
2023-08-18T01:58:45Z
|
2023-10-07T16:23:59Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1495
|
[
"bug"
] |
indrat
| 1
|
benbusby/whoogle-search
|
flask
| 104
|
[FEATURE] Option to use privacy-respecting services
|
**Describe the feature you'd like to see added**
It would be nice if we could activate an option to transform all "*twitter.com*" links to "*nitter.net*", "*youtube.com*" to "*invidio.us*", "*maps.google.com*" to "*openstreetmaps.org*" and "*instagram.com*" to "*bibliogram.art*" (or any instance of preference from users).
**Describe which parts of the project this would modify (/app/filter.py)**
I think "filter.py" should be modified (I may be wrong as I'm not very familiarized with the project). Maybe a string.replace("service.bad", "service.good") would do as (at least invidious and nitter) share the exact same links with their respective pairs (eg. twitter.com/Snowden/status/... == nitter.net/Snowden/status/... and same for invidio.us)
**Additional context**
Those services fully respect the privacy from users and allow them to navigate through those privative networks.
|
closed
|
2020-07-11T08:37:11Z
|
2020-07-26T17:54:00Z
|
https://github.com/benbusby/whoogle-search/issues/104
|
[
"enhancement"
] |
hialvaro
| 1
|
django-import-export/django-import-export
|
django
| 1,960
|
Explicit field declaration with no attribute is not exported
|
**Describe the bug**
Create Resources as follows:
```python
class BookResource(ModelResource):
author_email = Field(column_name='aut_em')
class Meta:
fields = ("author_email",)
model = Book
```
Run an export via admin page. The output is an empty `aut_em` column.
The column should be exported with the correct data.
Adding the attribute declaration fixes the issue:
```python
author_email = Field(attribute="author_email", column_name='aut_em')
```
**Versions (please complete the following information):**
- Django Import Export: 4.1.1
- Python 3.12
- Django 5.1
v3 also has this bug.
|
closed
|
2024-10-10T15:10:50Z
|
2024-10-10T17:17:29Z
|
https://github.com/django-import-export/django-import-export/issues/1960
|
[
"bug"
] |
matthewhegarty
| 1
|
ipython/ipython
|
jupyter
| 14,439
|
: documentation build fails with `cannot import name 'system' from 'IPython.utils.process'` error
|
Looks like something is wrong, and with the new version it is no longer possible to build the documentation.
```console
+ /usr/bin/sphinx-build -n -T -b man docs/source build/sphinx/man
Running Sphinx v7.3.7
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/sphinx/registry.py", line 453, in load_extension
mod = import_module(extname)
File "/usr/lib64/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/__init__.py", line 54, in <module>
from .core.application import Application
File "/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/core/application.py", line 26, in <module>
from IPython.core import release, crashhandler
File "/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/core/crashhandler.py", line 27, in <module>
from IPython.core import ultratb
File "/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/core/ultratb.py", line 115, in <module>
from IPython.utils import path as util_path
File "/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/utils/path.py", line 17, in <module>
from IPython.utils.process import system
ImportError: cannot import name 'system' from 'IPython.utils.process' (/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/utils/process.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/sphinx/cmd/build.py", line 332, in build_main
app = Sphinx(args.sourcedir, args.confdir, args.outputdir,
File "/usr/lib/python3.10/site-packages/sphinx/application.py", line 229, in __init__
self.setup_extension(extension)
File "/usr/lib/python3.10/site-packages/sphinx/application.py", line 402, in setup_extension
self.registry.load_extension(self, extname)
File "/usr/lib/python3.10/site-packages/sphinx/registry.py", line 456, in load_extension
raise ExtensionError(__('Could not import extension %s') % extname,
sphinx.errors.ExtensionError: Could not import extension IPython.sphinxext.ipython_console_highlighting (exception: cannot import name 'system' from 'IPython.utils.process' (/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/utils/process.py))
Extension error:
Could not import extension IPython.sphinxext.ipython_console_highlighting (exception: cannot import name 'system' from 'IPython.utils.process' (/home/tkloczko/rpmbuild/BUILD/ipython-8.24.0/IPython/utils/process.py))
Adding Tag: ipystable
```
|
open
|
2024-05-17T12:03:44Z
|
2024-05-20T18:25:16Z
|
https://github.com/ipython/ipython/issues/14439
|
[] |
kloczek
| 2
|
aio-libs-abandoned/aioredis-py
|
asyncio
| 1,050
|
aioredis==1.3.1 & Redis 6.0.12 connection error with reader at end of file.
|
After upgrading Redis to 6.0.12, every time we try to make a connection it closes the connection. It works perfectly with Redis 5.4.10 and aioredis 1.3.1.
We are using Tornado as the app server and make calls to Redis using aioredis 1.3.1. I always get a 503 from Tornado and see this error in the Tornado logs.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/tornado/web.py", line 1592, in _execute
result = yield result
File "/usr/local/lib/python3.8/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/app/routes/entitlementsync.py", line 27, in post
status = await self.redis.hget(entitlement_status_hash, "status")
aioredis.errors.ConnectionClosedError: Reader at end of file
```
let me know if you guys need more details.
|
open
|
2021-07-06T17:25:43Z
|
2021-07-16T22:52:03Z
|
https://github.com/aio-libs-abandoned/aioredis-py/issues/1050
|
[
"need investigation"
] |
linux2000in
| 7
|
polarsource/polar
|
fastapi
| 4,926
|
link is broken
|
### Description
A link is broken on the _`welcome-to-polar`_ page: [Contributions welcome](https://docs.polar.sh/docs/developers/open-source)
### Current Behavior

### Expected Behavior
The page should redirect to contributions.md
### Environment:
- Operating System: Windows 10
- Browser (if applicable): Chrome
---
<!-- Thank you for contributing to Polar! We appreciate your help in improving it. -->
<!-- Questions: [Discord Server](https://discord.com/invite/Pnhfz3UThd). -->
|
closed
|
2025-01-30T06:46:35Z
|
2025-01-31T08:14:24Z
|
https://github.com/polarsource/polar/issues/4926
|
[
"bug"
] |
Boby900
| 1
|
explosion/spaCy
|
data-science
| 12,544
|
token.sentiment is only outputting 0.0
|
For tokens of any polarity, the `token.sentiment` attribute returns a score of 0.0. Why is that?
Thank you for any suggestions.
|
closed
|
2023-04-19T06:05:51Z
|
2023-04-19T11:46:37Z
|
https://github.com/explosion/spaCy/issues/12544
|
[
"feat / doc"
] |
Ibrokhimsadikov
| 1
|
zappa/Zappa
|
django
| 848
|
[Migrated] assume_policy setting mostly without effect
|
Originally from: https://github.com/Miserlou/Zappa/issues/2094 by [tommie-lie](https://github.com/tommie-lie)
## Context
I create a trust relationship between `ZappaLambdaExecutionRole` and itself (to call AssumeRole with a session policy in the authorizer to drop certain privileges). To that end, I created a policy document like this:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::XXXX:role/zappa-project-dev-ZappaLambdaExecutionRole"
],
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com",
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
```
and set it's filename as `assume_policy` in the Zappa settings.
## Expected Behavior
After `zappa update`, the trust relationship should appear in the IAM console and a call to `AssumeRole` should work.
## Actual Behavior
The IAM console shows only the default trust relationships:
> The identity provider(s) events.amazonaws.com
> The identity provider(s) lambda.amazonaws.com
> The identity provider(s) apigateway.amazonaws.com
and calls to `AssumeRole` fail with permission denied.
## Possible Fix
There is a strange check in https://github.com/Miserlou/Zappa/blob/80a6881f0ec0be525a8fd7835b5a1157f9e66100/zappa/core.py#L2583-L2584
This check causes the policy to only be updated if the policy reported from IAM and the local one differ *and* their first Statement's **service principals** differ as well.
As we normally want the apigateway and lambda service principals in a Zappa app, and events.amazonaws.com is often handy too, this default set of service principals never changes.
Therefore, the other, manually added principals are never added. If two statements are used in the policy, the check even causes a `KeyError`, because the first statement does not have service principals:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::XXXX:role/zappa-project-dev-ZappaLambdaExecutionRole"
],
},
"Action": "sts:AssumeRole"
},
{
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com",
"lambda.amazonaws.com",
"events.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
```
The bogus check was added in 937dbf5a8c39f19bf38f8024e1b8c091f93d9c01 for an unknown reason. Python dict comparison is invariant for the order of the dict entries and JSON lists are order-sensitive, so the normal check `role.assume_role_policy_document != assume_policy_obj` would be perfectly fine. Coincidentally, it's the same check that is used for the more common `attach_policy` setting:
https://github.com/Miserlou/Zappa/blob/80a6881f0ec0be525a8fd7835b5a1157f9e66100/zappa/core.py#L2572
Therefore, the check should be simplified to
```
if role.assume_role_policy_document != assume_policy_obj:
```
## Steps to Reproduce
1. Create a zappa project and copy the above policy into a file called `assume-policy.json`, replace `arn:aws:iam::XXXX:role/zappa-project-dev-` with your project's account ID, project name and stage, respectively
2. `zappa update`
3. go to https://console.aws.amazon.com/iam/home, select your policy and check the tab "Trust relationships"
4. the ZappaLambdaExecutionRole is missing from the "Trusted entities" section
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.51.0
* Operating System and Python version: Linux, Python 3.8
* The output of `pip freeze`: irrelevant
* Your `zappa_settings.yaml`: abbreviated:
```yaml
dev:
project_name: zappa-project
runtime: python3.8
attach_policy: app-exec-policy.json
assume_policy: assume-policy.json
```
|
closed
|
2021-02-20T12:52:26Z
|
2024-04-13T19:10:24Z
|
https://github.com/zappa/Zappa/issues/848
|
[
"no-activity",
"auto-closed"
] |
jneves
| 2
|
wandb/wandb
|
tensorflow
| 9,299
|
[Q]: When does Wandb generate a name if none is provided for the run?
|
### Ask your question
Hi! I'm writing a small script that will generate names for my wandb run using words from my own custom word bank. I know I can manually provide new names, but I experiment a lot and prefer having names automatically generated from a specified set.
I found that the word bank is private [here](https://github.com/wandb/wandb/issues/3478), but I'm struggling to locate where in the code this request to the corpus is made. Can you help me find it?
|
open
|
2025-01-18T15:36:12Z
|
2025-01-28T16:24:32Z
|
https://github.com/wandb/wandb/issues/9299
|
[
"ty:question"
] |
TitovSergey
| 3
|
nschloe/tikzplotlib
|
matplotlib
| 474
|
IndexError when saving
|
Using tikzplotlib for `contourf` sometimes leads to:
```
File "D:\Local_Repositories\Studium\Semester_7\bachelorarbeit\code\env\lib\site-packages\tikzplotlib\_save.py", line 260, in save
code = get_tikz_code(*args, filepath=filepath, **kwargs)
File "D:\Local_Repositories\Studium\Semester_7\bachelorarbeit\code\env\lib\site-packages\tikzplotlib\_save.py", line 209, in get_tikz_code
data, content = _recurse(data, figure)
File "D:\Local_Repositories\Studium\Semester_7\bachelorarbeit\code\env\lib\site-packages\tikzplotlib\_save.py", line 353, in _recurse
data, children_content = _recurse(data, child)
File "D:\Local_Repositories\Studium\Semester_7\bachelorarbeit\code\env\lib\site-packages\tikzplotlib\_save.py", line 378, in _recurse
data, cont = _draw_collection(data, child)
File "D:\Local_Repositories\Studium\Semester_7\bachelorarbeit\code\env\lib\site-packages\tikzplotlib\_save.py", line 319, in _draw_collection
return _path.draw_pathcollection(data, child)
File "D:\Local_Repositories\Studium\Semester_7\bachelorarbeit\code\env\lib\site-packages\tikzplotlib\_path.py", line 214, in draw_pathcollection
p = obj.get_paths()[0]
IndexError: list index out of range
```
A solution is to check here:
https://github.com/nschloe/tikzplotlib/blob/1b9139cf642f9a392892dfcf556eb0ba729154fd/tikzplotlib/_path.py#L212-L228
whether `get_paths()` returns an empty list and, in that case, to set `marker0 = None`.
|
closed
|
2021-03-29T12:44:06Z
|
2021-04-08T14:00:22Z
|
https://github.com/nschloe/tikzplotlib/issues/474
|
[] |
LDAP
| 0
|
MaartenGr/BERTopic
|
nlp
| 1,076
|
remove stop words from saved model
|
Hi. @MaartenGr
I want to remove stop words from the saved BERTopic model and update the model again.
I looked through the official documentation and found no method for updating the keywords.
Maybe I missed it?
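One approach that may work (a sketch, not tested on your model): `update_topics` accepts a `vectorizer_model`, so the topic keywords of a loaded model can be recomputed with a vectorizer that drops stop words while the topic assignments stay unchanged. The path and `docs` below are placeholders for the saved model and the original training documents.

```python
from sklearn.feature_extraction.text import CountVectorizer
from bertopic import BERTopic

# "my_saved_model" and `docs` are placeholders: the saved model path and the
# original training documents it was fitted on.
topic_model = BERTopic.load("my_saved_model")
vectorizer_model = CountVectorizer(stop_words="english")

topic_model.update_topics(docs, vectorizer_model=vectorizer_model)
topic_model.save("my_saved_model_no_stopwords")
```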
|
closed
|
2023-03-07T02:26:44Z
|
2023-05-23T09:33:16Z
|
https://github.com/MaartenGr/BERTopic/issues/1076
|
[] |
kimkyulim
| 2
|
plotly/dash
|
data-science
| 2,290
|
add new search_indexes prop on dropdown
|
When "label" is a component (img and div for example), and I provide "value", "label" and "search" arguments in options is there a way to tell it to search options only by "search"?
It is very slow if searching by value and search for 2k items, "value " is email, I would like to disable searching "value".
|
open
|
2022-10-27T12:33:22Z
|
2024-08-13T19:22:01Z
|
https://github.com/plotly/dash/issues/2290
|
[
"feature",
"P3"
] |
MarkoPuntaric
| 3
|
agronholm/anyio
|
asyncio
| 186
|
`BrokenResourceError` raised by `MemoryObjectStream.send` after successful send
|
I'm unsure if this is working as designed - it's slightly unclear in the docs. Running the following:
```python
import anyio
async def demo() -> None:
send_stream, receive_stream = anyio.create_memory_object_stream()
async def send() -> None:
async with send_stream:
await send_stream.send(None)
print("sent")
async def receive() -> None:
async with receive_stream:
await receive_stream.receive()
print("received")
async with anyio.create_task_group() as task_group:
await task_group.spawn(send)
await task_group.spawn(receive)
```
prints only
```
received
```
before raising `anyio.BrokenResourceError`. It seems like we're falling afoul of https://github.com/agronholm/anyio/blob/master/src/anyio/streams/memory.py#L161. My guess for what's happening is:
- we enter the context of the send stream and attempt the send;
- we first attempt `send_nowait`, which fails as the `receive` coroutine has not yet called `receive_stream.receive()`;
- we create the send event and await on it, suspending the `send` coroutine;
- the `receive` coroutine is scheduled, receives the item, and closes `receive_stream` without the `send` coroutine being scheduled;
- the `send` coroutine resumes, notices that no receive channels are open, and raises.
I would've expected either:
- no exception to be raised, with an exception raised if another send was attempted;
- some kind of end-of-stream exception.
The `BrokenResourceError` seems inconsistent with the fact that the item has actually been sent.
Tested on 2.0.2 and current master (83a0cbd), on all three event loops.
|
closed
|
2021-01-10T13:02:56Z
|
2021-01-30T14:20:52Z
|
https://github.com/agronholm/anyio/issues/186
|
[
"bug"
] |
tomwatson1024
| 1
|
autogluon/autogluon
|
computer-vision
| 4,423
|
[tabular] [BUG] Ensure val has sample of each class for metrics like `roc_auc_ovr_macro`
|
Metrics such as `roc_auc_ovr_macro` will raise an exception in multiclass tasks if the validation set does not contain at least 1 instance of every class.
This was discovered in #4411 as when `test_data` was passed, we couldn't drop rare classes, but with small amounts of rows, we didn't ensure all validation splits had at least one of each class (while ensuring each training row had one of each class).
This leads to the following exception:
```
ValueError: Only one class present in y_true. ROC AUC score is not defined in that case.
```
We should add `min_cls_count_test` to `generate_train_test_split` function to mitigate this issue. For bagging, we will want to add a helpful error message and a check to ensure one of each class is present in both the train and val splits for each bag.
|
open
|
2024-08-23T21:23:59Z
|
2024-11-25T23:04:34Z
|
https://github.com/autogluon/autogluon/issues/4423
|
[
"bug",
"module: tabular",
"priority: 1"
] |
Innixma
| 0
|
faif/python-patterns
|
python
| 376
|
random seed does not take effect in doctest
|
Thanks for your awesome work on the pythonic design patterns; I'm reviewing some of the strategies in them. However, when I tried your first example, abstract_factory.py, the `random.seed(1234)` in the main function does not take effect in the doctest's `random.choice()`, which instead returns a different pseudo-random result depending on machine time or some other source of randomness.
I fixed this problem by seeding inside the doctest itself:
>>> random.seed(1234)
>>> shop = PetShop(random_animal)
Then you get the same result every time you rerun it.
|
closed
|
2021-06-11T09:59:07Z
|
2022-07-04T19:28:26Z
|
https://github.com/faif/python-patterns/issues/376
|
[
"bug"
] |
qitianliang
| 3
|
pydantic/pydantic-settings
|
pydantic
| 136
|
Validation error if environment variable contains an integer
|
I'm trying to load a `PORT` environment variable into pydantic settings.
In v1 it was annotated like this
```python
from pydantic import BaseSettings
class MySettings(BaseSettings):
PORT: str | None = None
```
and it used to be able to load the following values: `5432`, `5432,5433`.
When I use the following code in pydantic v2 with pydantic-settings, I get an error parsing a single port:
```python
import os
from pydantic_settings import BaseSettings
class MyNewSettings(BaseSettings):
PORT: str | None = None
os.environ["PORT"] = "5432"
MyNewSettings()
```
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[1], line 8
5 PORT: str | None = None
7 os.environ["PORT"] = "5432"
----> 8 MyNewSettings()
File ~/.local/share/virtualenvs/theenv/lib/python3.10/site-packages/pydantic_settings/main.py:71, in BaseSettings.__init__(__pydantic_self__, _case_sensitive, _env_prefix, _env_file, _env_file_encoding, _env_nested_delimiter, _secrets_dir, **values)
60 def __init__(
61 __pydantic_self__,
62 _case_sensitive: bool | None = None,
(...)
69 ) -> None:
70 # Uses something other than `self` the first arg to allow "self" as a settable attribute
---> 71 super().__init__(
72 **__pydantic_self__._settings_build_values(
73 values,
74 _case_sensitive=_case_sensitive,
75 _env_prefix=_env_prefix,
76 _env_file=_env_file,
77 _env_file_encoding=_env_file_encoding,
78 _env_nested_delimiter=_env_nested_delimiter,
79 _secrets_dir=_secrets_dir,
80 )
81 )
File ~/.local/share/virtualenvs/theenv/lib/python3.10/site-packages/pydantic/main.py:159, in BaseModel.__init__(__pydantic_self__, **data)
157 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
158 __tracebackhide__ = True
--> 159 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
ValidationError: 1 validation error for MyNewSettings
PORT
Input should be a valid string [type=string_type, input_value=5432, input_type=int]
For further information visit https://errors.pydantic.dev/2.1/v/string_type
```
However, if the `PORT`'s value can't be parsed to an integer, it works:
```python
In [2]: os.environ["PORT"] = "5432,5433"
In [3]: MyNewSettings()
Out[3]: MyNewSettings(PORT='5432,5433')
```
Environment
```
pydantic==2.1.1
pydantic-extra-types==2.0.0
pydantic-settings==2.0.2
pydantic_core==2.4.0
```
Selected Assignee: @hramezani
|
closed
|
2023-07-31T11:37:16Z
|
2023-08-08T07:07:41Z
|
https://github.com/pydantic/pydantic-settings/issues/136
|
[
"unconfirmed"
] |
xome4ok
| 3
|
flairNLP/flair
|
nlp
| 2,874
|
🎓 New position for PhD candidate available at HU Berlin!
|
Hello all,
we now have another **position for a PhD candidate** available! It is a full-time and fully paid research associate position in my group, intended for persons with a master degree in computer science that aim to pursue a PhD in machine learning or NLP.
**The project:** This is the first of multiple positions in a big project in which we aim to create powerful language models a la BERT and GPT-3 - but with the difference that our models should only require a fraction of the computational resources and data to train. We will therefore take a deep look at the internals of language models and pursue algorithmic improvements of training objectives and internal representations.
**To apply:** If you're looking to pursue a PhD in this topic, consider applying! You should:
- have an M.Sc. degree (or be close to graduating) in computer science or computational linguistics with a focus on ML and NLP
- already have relevant experience in deep learning research, for instance gathered as part of your M.Sc.
- strong coding skills, especially in deep learning frameworks such as PyTorch
- ideally have your first publications
- love open source
- love NLP ;)
(Note that the M.Sc. is a mandatory requirement, as otherwise we cannot initiate the formal hiring process.)
The link to the job ad is [here](https://www.personalabteilung.hu-berlin.de/de/stellenausschreibungen/wiss-mitarbeiter-in-m-w-d-mit-vorauss-vollzeit-e-13-tv-l-hu-drittmittelfinanzierung-befristet-bis-30-06-2024) (in German) - applications can be in English, as the working language of our research group is English. **Consider applying**, and contact me in case of interest!
Cheers,
Alan
|
closed
|
2022-07-26T11:26:01Z
|
2022-09-23T15:46:32Z
|
https://github.com/flairNLP/flair/issues/2874
|
[] |
alanakbik
| 2
|
MagicStack/asyncpg
|
asyncio
| 1,206
|
Why is asyncpg doing type introspection on json types?
|
* **asyncpg version**: `0.30.0`
* **PostgreSQL version**: `15.3`
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: RDS, and yes
* **Python version**: `3.12.6`
* **Platform**: MacOS and Linux
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: No, `poetry`
* **If you built asyncpg locally, which version of Cython did you use?**: n/a
* **Can the issue be reproduced under both asyncio and [uvloop](https://github.com/magicstack/uvloop)?**: Have not tried. Happy to if you think it would be beneficial
Spinning out of https://github.com/MagicStack/asyncpg/issues/1138#issuecomment-2451097693 because it feels like a different discussion.
---
I'm running a FastAPI service that connects to AWS RDS, and needs to refresh credentials every 15 minutes. Normally, the type introspection queries don't take up much time because they run once per connection, but I have a lot of churn in my connection pool so run them a decent number of times. Recently I'm seen more traffic and thus more connections being created, and with more connections, the more often we're likely to see slow queries on things that are normally fast.
At a very high level, my service is set to connect to the database with:
```python
engine = create_async_engine(
postgres_url(use_asyncpg=True),
pool_size=10,
max_overflow=25,
pool_recycle=600, # IAM credentials expire after 15 mins
pool_pre_ping=True,
)
@event.listens_for(engine.sync_engine, "do_connect")
def provide_token(dialect, conn_rec, cargs, cparams) -> None:
cparams["password"] = boto3.client("rds").generate_db_auth_token(
config.POSTGRES_HOST, config.POSTGRES_PORT, config.POSTGRES_USER,
)
```
Even abnormally slow type introspection queries aren't horrible but they are noticeable, as in the example below these 2 queries took more than 50% of the service's total response time.

Debugging locally a little with `command: ["postgres", "-c", "log_statement=all"]` in my `docker-compose.yml`, I can see what type `asyncpg` needs to examine:
```text
2024-11-01 20:52:52.239 UTC [491] LOG: execute __asyncpg_stmt_1__: SELECT
t.oid,
t.typelem AS elemtype,
t.typtype AS kind
FROM
pg_catalog.pg_type AS t
WHERE
t.oid = $1
2024-11-01 20:52:52.239 UTC [491] DETAIL: parameters: $1 = '114'
2024-11-01 20:52:52.240 UTC [491] LOG: execute __asyncpg_stmt_2__: SELECT
t.oid,
t.typelem AS elemtype,
t.typtype AS kind
FROM
pg_catalog.pg_type AS t
WHERE
t.oid = $1
2024-11-01 20:52:52.240 UTC [491] DETAIL: parameters: $1 = '3802'
```
These correspond to the `JSON` and `JSONB` types, respectively, not even custom types.
---
The actual question: how can I pre-register the `JSON` and `JSONB` types in each connection so I don't have to keep running the introspection query? I've tried the `json_{de,}serializer` argument to the SQLAlchemy engine, as well as trying to hook into SQLAlchemy events to intercept connection creation and set the codecs.
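For the plain-asyncpg half of this, codecs can be registered per connection via `Connection.set_type_codec` (for example from a pool `init` callback); whether that actually skips the introspection query for `json`/`jsonb`, and how best to wire it through SQLAlchemy's pool, is exactly what I am unsure about. A sketch:

```python
import json
import asyncpg

async def init_connection(conn: asyncpg.Connection) -> None:
    # Register text codecs for json/jsonb (oids 114 and 3802) up front.
    for typename in ("json", "jsonb"):
        await conn.set_type_codec(
            typename,
            encoder=json.dumps,
            decoder=json.loads,
            schema="pg_catalog",
        )

# e.g. pool = await asyncpg.create_pool(dsn, init=init_connection)
```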
|
closed
|
2024-11-01T22:49:32Z
|
2025-03-19T19:04:19Z
|
https://github.com/MagicStack/asyncpg/issues/1206
|
[] |
swanysimon
| 8
|
jupyter/docker-stacks
|
jupyter
| 1,334
|
how to add maven dependency in jupyter/pyspark-notebook
|
**What docker image you are using?**
`jupyter/pyspark-notebook`
**What complete docker command do you run to launch the container (omitting sensitive values)?**
Example: `docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook:latest`
**What steps do you take once the container is running to reproduce the issue?**
I want to connect to a local Elasticsearch Docker instance via the ES-Hadoop connector
```
import random
import pyspark
import random
from pyspark.sql import SparkSession
from datetime import datetime
import pandas as pd
start = datetime.now()
ss = SparkSession.builder.config("spark.driver.memory", "8g").appName('ES').getOrCreate()
es_reader = (ss.read
.format("org.elasticsearch.spark.sql")
.option("inferSchema", "true")
.option("es.read.field.as.array.include", "tags")
.option("es.nodes","elasticsearch:9200")
.option("es.net.https.auth.user","elastic")
)
sysmon_df = es_reader.load("test_index-*/")
end = datetime.now()
time_taken = end - start
print('Time: ',time_taken)
ss.stop()
```
**What do you expect to happen?**
I would like to know how to add a maven dependency for the ES connector: https://www.elastic.co/guide/en/elasticsearch/hadoop/current/install.html
**What actually happens?**
The command fails because it is unable to find the JAR files:
` java.lang.ClassNotFoundException: Failed to find data source: org.elasticsearch.spark.sql
`
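One common way to pull a Maven dependency at session start is `spark.jars.packages` on the SparkSession builder; the coordinates below are only an example and the version should be matched to your Spark and Elasticsearch versions.

```python
from pyspark.sql import SparkSession

ss = (
    SparkSession.builder
    .appName("ES")
    # Example coordinates for the ES-Hadoop Spark 3.x connector; adjust the version.
    .config("spark.jars.packages", "org.elasticsearch:elasticsearch-spark-30_2.12:8.4.3")
    .config("spark.driver.memory", "8g")
    .getOrCreate()
)
```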
|
closed
|
2021-05-25T22:53:18Z
|
2021-06-02T21:19:24Z
|
https://github.com/jupyter/docker-stacks/issues/1334
|
[] |
priamai
| 4
|
python-visualization/folium
|
data-visualization
| 1,944
|
HTML does not work when run on mobile.
|
Suddenly, starting today, the generated HTML does not work when opened on mobile. Testing several types of devices gives the same result.
|
closed
|
2024-05-02T03:44:37Z
|
2024-05-06T09:02:01Z
|
https://github.com/python-visualization/folium/issues/1944
|
[] |
Yheionsung
| 2
|
marimo-team/marimo
|
data-visualization
| 3,670
|
Methods have the same color as variables in dark mode
|
### Describe the bug
In dark mode, everything accessed via a dot, like methods or objects from modules, has the same color as variables, which isn't the case in light mode. Maybe it's just the theme marimo uses, but it would be nice if the color were different, as it is in light mode, since it makes the code more readable.
### Environment
<details>
```
{
"marimo": "0.10.19",
"OS": "Linux",
"OS Version": "6.12.10-arch1-1",
"Processor": "",
"Python Version": "3.12.8",
"Binaries": {
"Browser": "--",
"Node": "--"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.24.2",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.4",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {
"pandas": "2.2.3"
},
"Experimental Flags": {
"rtc": true
}
}
```
</details>
### Code to reproduce
It doesn't have anything to do with code. It's the standard behavior in dark mode.
|
closed
|
2025-02-03T13:09:39Z
|
2025-02-03T15:36:57Z
|
https://github.com/marimo-team/marimo/issues/3670
|
[
"bug"
] |
nojovo
| 0
|
FactoryBoy/factory_boy
|
sqlalchemy
| 151
|
delete old instance before creating new
|
To speed up unittesting, we don't delete the whole database before running each test.
This way a test needs to clean up old data before running.
Does factory boy support deleting before creating?
I could not find anything in the docs about deleting objects.
Do you have a hint how to implement a "delete before create" helper method?
Background: we use django, but don't use the "delete whole DB before each test" pattern.
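A sketch of one way this could look with a `_create` override (factory_boy itself has no built-in delete-before-create; the `name` lookup field below is only an example):

```python
from factory.django import DjangoModelFactory

class DeleteBeforeCreateFactory(DjangoModelFactory):
    """Delete rows matching a natural key before creating a fresh instance.

    Sketch only: assumes the model has a unique `name` field used as the
    natural key; adapt the filter to whatever identifies stale rows.
    """

    class Meta:
        abstract = True

    @classmethod
    def _create(cls, model_class, *args, **kwargs):
        if "name" in kwargs:
            model_class.objects.filter(name=kwargs["name"]).delete()
        return super()._create(model_class, *args, **kwargs)
```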
|
closed
|
2014-08-12T07:16:25Z
|
2025-02-06T23:05:54Z
|
https://github.com/FactoryBoy/factory_boy/issues/151
|
[
"Q&A"
] |
guettli
| 8
|
errbotio/errbot
|
automation
| 1,549
|
send_card and send_stream_request don't reply in-thread
|
### I am...
* [X] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
Errbot version: 6.1.8
OS version: Debian (`python:3.9-slim` container)
Python version: 3.9
Using a virtual environment: no (using Docker instead)
Backend: `Slack`
### Issue description
When invoked in a Slack thread, [`send_card`](https://errbot.readthedocs.io/en/latest/user_guide/plugin_development/messaging.html#cards) and [`send_stream_request`](https://errbot.readthedocs.io/en/latest/user_guide/plugin_development/streams.html) reply in the thread's channel instead of in the thread itself.
### Steps to reproduce
Write a bot command that replies using either `send_card` or `send_stream_request`, and invoke it in a Slack thread. The command's reply will appear in the channel instead of in the thread where it was invoked.
|
closed
|
2022-01-04T12:55:13Z
|
2024-01-04T09:35:31Z
|
https://github.com/errbotio/errbot/issues/1549
|
[] |
torgeirl
| 6
|
awtkns/fastapi-crudrouter
|
fastapi
| 74
|
Example for more than one table.
|
Is there an example of handling more than one table?
Potato - table 1
Meat - table 2
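A minimal sketch using the in-memory backend (the README's Potato example extended with a second model; the SQLAlchemy backend works the same way, one router per table):

```python
from fastapi import FastAPI
from pydantic import BaseModel
from fastapi_crudrouter import MemoryCRUDRouter

class Potato(BaseModel):
    id: int
    color: str

class Meat(BaseModel):
    id: int
    cut: str

app = FastAPI()
app.include_router(MemoryCRUDRouter(schema=Potato))  # /potato endpoints
app.include_router(MemoryCRUDRouter(schema=Meat))    # /meat endpoints
```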
|
closed
|
2021-06-10T02:17:54Z
|
2021-06-13T01:00:30Z
|
https://github.com/awtkns/fastapi-crudrouter/issues/74
|
[
"question"
] |
dawnpatrol04
| 4
|
pytorch/pytorch
|
python
| 149,008
|
[AOTI][Debug logger] Min value: Error: "min_all_cuda" not implemented for 'Float8_e4m3fn'
|
### 🐛 Describe the bug
Problem is with AOTI intermediate debug logger with FP8.
repro:
```
import torch
import torch._inductor.config as config
config.aot_inductor.debug_intermediate_value_printer = "2"
config.aot_inductor.filtered_kernel_names = "triton_poi_fused__to_copy_add_0"
class Model(torch.nn.Module):
def forward(self, x):
x = x.to(torch.float)
return x + 1
model = Model().cuda()
x = torch.randn(10).cuda().to(torch.float8_e4m3fn)
ep = torch.export.export(model, (x,))
path = torch._inductor.aoti_compile_and_package(ep)
aot_model = torch._inductor.aoti_load_package(path)
aot_model(x)
print("done")
```
logs:
```
[ CUDAFloat8_e4m3fnType{10} ]
Number of elements: 10
Dtype: c10::Float8_e4m3fn
Mean value: -0.124023
Min value: Error: "min_all_cuda" not implemented for 'Float8_e4m3fn'
```
### Versions
trunk
cc @yanbing-j @vkuzo @albanD @kadeng @penguinwu @desertfire @chenyang78 @yushangdi @benjaminglass1 @chauhang @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
open
|
2025-03-11T23:09:57Z
|
2025-03-21T13:53:58Z
|
https://github.com/pytorch/pytorch/issues/149008
|
[
"triaged",
"module: float8",
"module: aotinductor"
] |
henrylhtsang
| 1
|
Buuntu/fastapi-react
|
fastapi
| 160
|
script build.sh continues to execute even if the docker is not available
|
Sometimes I start build.sh when Docker is not running yet.
The script shows the error trace from `docker-compose up` but does not quit; it continues executing the remaining commands.
This is not critical at the moment, but in the future it may have destructive consequences.
|
closed
|
2021-06-12T12:41:55Z
|
2021-07-30T19:40:40Z
|
https://github.com/Buuntu/fastapi-react/issues/160
|
[] |
numbnp
| 1
|
tensorpack/tensorpack
|
tensorflow
| 898
|
Differences with Cascade-RCNN paper
|
Thanks for sharing your code. Some questions about your cascadeRCNN implementation:
1) The scale_gradient implementation for cascade-rcnn: https://github.com/tensorpack/tensorpack/blob/master/examples/FasterRCNN/model_cascade.py#L29, seems to stabilize the training, but I did not find the corresponding description in the original paper, is it needed?
2) It seems that neither mean nor variance normalization is performed during bounding box regression
hope for your response :)
|
closed
|
2018-09-18T02:46:36Z
|
2018-09-18T15:35:21Z
|
https://github.com/tensorpack/tensorpack/issues/898
|
[
"examples"
] |
LandyGuo
| 2
|
ionelmc/pytest-benchmark
|
pytest
| 1
|
Add function wrapper support
|
Eg:
```python
def test_stuff(benchmark):
    assert benchmark(func)(1, 2, 3) == 'blabla'
```
|
closed
|
2014-10-11T03:02:35Z
|
2015-02-02T05:07:03Z
|
https://github.com/ionelmc/pytest-benchmark/issues/1
|
[] |
ionelmc
| 0
|
Avaiga/taipy
|
data-visualization
| 1,683
|
[🐛 BUG] Table width issue with persistent pane and d-flex
|
### What went wrong? 🤔
The table takes up more than the screen width when there is a persistent pane.
I am using *with* to create the content of the pane. If I use another page to put it inside the pane, the persistent pane will not even work.
### Expected Behavior
This shouldn't change the layout of the application where there is no pane. And if there is a pane, the layout should just change so that the pane appears by the side of the app.
### Steps to Reproduce Issue
Create a folder data and put this CSV in it.
[modified_supermarkt_sales_plus.csv](https://github.com/user-attachments/files/16664692/modified_supermarkt_sales_plus.csv)
Add this code and run it:
```python
from taipy.gui import Gui, notify, State  # State is used in open_review() below
import pandas as pd
import taipy.gui.builder as tgb
import json
# Load and prepare data
data = pd.read_csv("data/modified_supermarkt_sales_plus.csv")
data["Date"] = pd.to_datetime(data["Date"])
data["Review"] = ["[Review](Review)" for _ in range(len(data))]
data["Total ($)"] = data["Total"]
data["Total (€)"] = data["Total"] * 1.2
displayed_data = data.copy()
# Initialize state variables with default values
show_city_info_pane = True
selected_view = "Simple view"
selected_currency = "USD"
selected_dates = [data["Date"].min().date(), data["Date"].max().date()]
selected_prices = [0, 5000]
selected_city = "All"
selected_product_line = "All"
selected_branch = "All"
rate_info = "Good"
rate_price = "Good"
open_dialog_review = False
selected_row_for_review = None
# Load city information from a JSON file
city_info_dict = {}
# Function to filter the data based on selected criteria
def filter(state):
filtered_data = state.data
if state.selected_city != "All":
filtered_data = filtered_data[filtered_data["City"] == state.selected_city]
if state.selected_product_line != "All":
filtered_data = filtered_data[
filtered_data["Product_line"] == state.selected_product_line
]
if state.selected_branch != "All":
filtered_data = filtered_data[filtered_data["Branch"] == state.selected_branch]
filtered_data = filtered_data[
(filtered_data["Date"].dt.date >= state.selected_dates[0])
& (filtered_data["Total"] >= state.selected_prices[0])
& (filtered_data["Total"] <= state.selected_prices[1])
]
state.displayed_data = filtered_data
state.city_info_partial.update_content(state, build_city_info(state.displayed_data))
# Function to convert the total values based on the selected currency
def convert(state):
if state.selected_currency == "USD":
state.displayed_data["Total"] = state.displayed_data["Total ($)"]
elif state.selected_currency == "EUR":
state.displayed_data["Total"] = state.displayed_data["Total (€)"]
state.refresh("displayed_data")
# Function to handle the review submission
def send_review(state, id, payload):
state.open_dialog_review = False
# Build basic filters section
def build_basic_filters():
tgb.text("### Basic Filters", mode="md")
tgb.selector(
value="{selected_product_line}",
lov=["All"] + data["Product_line"].unique().tolist(),
dropdown=True,
filter=True,
label="Product Line",
on_change=filter,
class_name="fullwidth",
)
tgb.selector(
value="{selected_city}",
lov=["All"] + data["City"].unique().tolist(),
dropdown=True,
filter=True,
label="City",
on_change=filter,
class_name="fullwidth",
)
tgb.selector(
value="{selected_branch}",
lov=["All"] + data["Branch"].unique().tolist(),
dropdown=True,
filter=True,
label="Branch",
on_change=filter,
class_name="fullwidth",
)
# Build conversion section
def build_conversion():
tgb.text("### Conversion", mode="md")
tgb.selector(
value="{selected_currency}",
lov=["USD", "EUR"],
dropdown=True,
label="Currency",
on_change=convert,
class_name="fullwidth",
)
tgb.text("Date Range")
tgb.date_range(
"{selected_dates}", label_start="Start", label_end="End", on_change=filter
)
tgb.text("Price Range")
tgb.slider(
"{selected_prices}",
min=0,
max=5000,
on_change=filter,
continuous=False,
width="100%",
)
# Function to handle the review process
def open_review(state: State, var_name: str, payload: dict):
index = payload["index"]
data = getattr(state, var_name).copy()
state.selected_row_for_review = data.iloc[index].to_frame().T
state.open_dialog_review = True
with tgb.Page() as review_page:
tgb.text("Rate info", mode="md")
tgb.table("{selected_row_for_review}")
tgb.selector(
value="{rate_info}",
lov=["Good", "Bad"],
dropdown=True,
label="Rate info",
class_name="fullwidth", # native in 4.0
)
tgb.text("Rate price", mode="md")
tgb.selector(
value="{rate_price}",
lov=["Good", "Bad"],
dropdown=True,
label="Rate price",
class_name="fullwidth", # native in 4.0
)
# Build city information pane
def build_city_info(displayed_data):
with tgb.Page() as page:
tgb.text("### City Information", mode="md")
for city in displayed_data["City"].unique():
with tgb.expandable(title=city, expanded=False):
tgb.text(
city_info_dict.get(city, "No information available."), mode="md"
)
return page
# Build the main GUI page
with tgb.Page() as page:
with tgb.part(class_name="container d-flex"):
with tgb.part():
tgb.text("Sales Insights", class_name="h1 text-center")
with tgb.layout("1 1 1", gap="20px", columns__mobile="1"):
with tgb.part():
build_basic_filters()
with tgb.part():
build_conversion()
tgb.html("hr")
tgb.toggle(
value="{selected_view}",
lov=["Simple view", "Advanced view", "Raw view"],
)
with tgb.part(render="{selected_view=='Raw view'}"):
tgb.table(
"{data}",
on_action=open_review,
filter=True,
)
with tgb.part(render="{selected_view=='Simple view'}"):
tgb.table(
"{displayed_data}",
columns=["Date", "City", "Product_line", "Total", "Review"],
group_by__City=True,
group_by__Product_line=True,
apply_Total="mean",
filter=True,
on_action=open_review,
)
with tgb.part(render="{selected_view=='Advanced view'}"):
tgb.table(
"{displayed_data}",
columns=[
"City",
"Product_line",
"Total",
"Quantity",
"Tax_5%",
"Total",
"Date",
"Review",
],
filter=True,
on_action=open_review,
)
def open_info_pane(state):
state.show_city_info_pane = True
tgb.button(
"City info",
on_action=open_info_pane,
id="open_pane",
)
with tgb.pane(
open="{show_city_info_pane}",
width="300px",
persistent=True,
anchor="right",
):
tgb.part(partial="{city_info_partial}")
tgb.dialog(
page="review_page",
open="{open_dialog_review}",
on_action=send_review,
labels=["Cancel", "Send"],
width="500px",
title="Review the selected row",
)
# Define pages for the GUI
pages = {"page": page, "review_page": review_page}
# Run the GUI application
if __name__ == "__main__":
gui = Gui(pages=pages)
city_info_partial = gui.add_partial(build_city_info(displayed_data))
gui.run(title="Sales", port=2452)
```
### Version of Taipy
develop - 4.0.0.dev0
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
closed
|
2024-08-19T19:51:35Z
|
2024-08-23T14:37:33Z
|
https://github.com/Avaiga/taipy/issues/1683
|
[
"🖰 GUI",
"💥Malfunction",
"🟨 Priority: Medium"
] |
FlorianJacta
| 1
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 4,399
|
Error reports list as Whistleblower on 5.0.48
|
### What version of GlobaLeaks are you using?
5.0.48
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
All
### Describe the issue
Hi all,
after logging in as a Whistleblower, I see all the reports disabled (as in the screenshot below).

Opening the developer tools in the browser, I see this error:

### Proposed solution
_No response_
|
closed
|
2025-02-11T14:05:48Z
|
2025-02-11T14:27:29Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4399
|
[
"T: Bug",
"Triage"
] |
andreagigliola
| 0
|
jpadilla/django-rest-framework-jwt
|
django
| 441
|
Can't use multiple authentication classes
|
I'm trying to allow two different forms of authentication and am listing them as authentication classes in my views file like so:
`authentication_classes = (JSONWebTokenAuthentication, MyCustomAuthentication,)`
I'm finding that either one works on its own, but if I try to use both, it will either authenticate or return a 401 based on the FIRST authentication class listed, instead of, as this document suggests, iterating through them and returning the values for the first class that successfully authenticates: http://www.tomchristie.com/rest-framework-2-docs/api-guide/authentication
Do you have an idea of why that might be?
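For reference, DRF only falls through to the next class when an authenticator returns `None`; raising `AuthenticationFailed` aborts the whole chain with a 401. Below is a minimal sketch of that pattern (the class name matches my example above, but the token lookup is purely illustrative):
```python
from django.contrib.auth.models import User
from rest_framework import authentication, exceptions

class MyCustomAuthentication(authentication.BaseAuthentication):
    def authenticate(self, request):
        token = request.META.get("HTTP_X_CUSTOM_TOKEN")
        if not token:
            # Returning None means "this scheme does not apply", so DRF
            # moves on to the next class in authentication_classes.
            return None
        user = User.objects.filter(username=token).first()  # placeholder lookup
        if user is None:
            # Raising here aborts the whole chain with a 401, even if a later
            # class (e.g. JSONWebTokenAuthentication) could have succeeded.
            raise exceptions.AuthenticationFailed("Invalid custom token")
        return (user, token)
```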
|
open
|
2018-05-24T19:41:29Z
|
2018-11-20T09:45:32Z
|
https://github.com/jpadilla/django-rest-framework-jwt/issues/441
|
[] |
nancyhawa
| 3
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,155
|
pix2pix test input with only sketch?
|
Do I need to provide a photo paired with its sketch as input to the sketch model, or could I just input a sketch on its own?
|
closed
|
2020-10-03T15:07:18Z
|
2022-08-19T06:30:14Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1155
|
[] |
darrenleeleelee1
| 1
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 302
|
Question about using distributed loss
|
Hi,
I am working with torch.DistributedDataParallel and following your /pytorch-metric-learning/examples/notebooks/DistributedTripletMarginLossMNIST.ipynb. I noticed that all the GPUs have the same loss value in each epoch, and I am trying to understand why.
According to torch.DistributedDataParallel, each GPU gets its own replica of the model and its own shard of the data, and calculates the loss on that part of the data. The losses across different GPUs are not reduced (the gradients are). Thus, different GPUs should have different loss values. My question is: how is the loss calculated here? Is it an average of all the GPUs' losses?
Thanks.
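For context, here is a rough sketch of the gather-then-compute pattern that would explain identical per-rank values: if every rank all-gathers the embeddings and labels and computes the loss over the combined batch, the value is the same on every GPU. This is only my guess at the mechanism, not the library's actual code.
```python
import torch
import torch.distributed as dist

def gather_full_batch(local_emb, local_labels):
    # All-gather embeddings and labels so that every rank holds the combined
    # batch; a loss computed over this combined batch is identical on all GPUs.
    world_size = dist.get_world_size()
    emb_list = [torch.zeros_like(local_emb) for _ in range(world_size)]
    label_list = [torch.zeros_like(local_labels) for _ in range(world_size)]
    dist.all_gather(emb_list, local_emb.contiguous())
    dist.all_gather(label_list, local_labels.contiguous())
    return torch.cat(emb_list), torch.cat(label_list)
```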
|
closed
|
2021-04-12T22:18:30Z
|
2021-05-10T15:22:33Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/302
|
[
"question"
] |
liangyuandg
| 2
|
randyzwitch/streamlit-folium
|
streamlit
| 218
|
Could I get the coordinates of the map without running as a service?
|
I used `fit_bounds()` in folium to focus on a specific place, but folium doesn't tell me the new corner coordinates of the canvas map. I need the coordinates to crop a rectangular area. Can I just call a function or API to get them without launching a service?
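In case it helps, a minimal sketch of deriving the corner coordinates directly from the points you already pass to `fit_bounds()` (note this ignores the padding and zoom snapping that Leaflet applies in the browser; the points here are illustrative):
```python
# Illustrative (lat, lon) points; in practice these are the bounds you
# already pass to fit_bounds().
points = [(48.85, 2.29), (48.87, 2.35), (48.84, 2.38)]
lats, lons = zip(*points)
south_west = (min(lats), min(lons))
north_east = (max(lats), max(lons))
print("SW corner:", south_west, "NE corner:", north_east)
```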
|
closed
|
2024-09-27T01:42:18Z
|
2024-09-28T13:47:38Z
|
https://github.com/randyzwitch/streamlit-folium/issues/218
|
[] |
LZY-SPCA
| 1
|
pallets-eco/flask-sqlalchemy
|
sqlalchemy
| 444
|
Need help in finding the memory leak for master/slave check latency
|
Hi guys,
I'm trying to split reads and writes across the database. I found some sources online, but now I'm having issues with slave replica lag. I have a query to check the latency on Postgres, but somehow when I implemented it, it caused a memory leak. Any tips would be helpful. I have a list of slaves, and I want to check the latency first before picking one.
```python
class RoutingSession(orm.Session):
    _name = None

    def __init__(self, db, autocommit=False, autoflush=False, **options):
        self.db = db
        self.app = db.get_app()
        self._model_changes = {}
        self.slaves = None
        orm.Session.__init__(
            self,
            autocommit=autocommit,
            autoflush=autoflush,
            bind=db.engine,
            binds=db.get_binds(self.app),
            **options
        )

    def get_bind(self, mapper=None, clause=None):
        try:
            state = get_state(self.app)
        except (AssertionError, AttributeError, TypeError) as err:
            self.app.logger.error(
                "Unable to get Flask-SQLAlchemy configuration."
                " Outputting default bind. Error: " + str(err))
            return orm.Session.get_bind(self, mapper, clause)
        # If there are no binds configured,
        # connect using the default SQLALCHEMY_DATABASE_URI
        if state is None or not self.app.config['SQLALCHEMY_BINDS']:
            return orm.Session.get_bind(self, mapper, clause)
        elif self._name:
            return state.db.get_engine(self.app, bind=self._name)
        elif self._flushing:
            self.app.logger.debug("Connecting -> MASTER")
            self.mark_as_write()
            return state.db.get_engine(self.app, bind='master')
        elif hasattr(self, '_db_write') and self._db_write:
            self.app.logger.debug("Connecting -> MASTER due to recent writes")
            return state.db.get_engine(self.app, bind='master')
        else:
            if not self.slaves:
                slaves = []
                for key in self.app.config['SQLALCHEMY_BINDS'].keys():
                    if re.match(r"^slave", key):
                        slaves.append(key)
                self.slaves = slaves
            while len(self.slaves):
                slave = random.choice(self.slaves)
                self.app.logger.debug("Connecting -> " + slave)
                dbengine = state.db.get_engine(self.app, bind=slave)
                latency = self.get_latency(slave, dbengine)
                if latency <= self.app.config['SLAVE_MAX_LATENCY']:
                    return dbengine
                self.app.logger.error("Slave {} has very high latency {} seconds".format(
                    slave, latency))
                self.slaves.remove(slave)
            self.app.logger.warn("Reverted to master db instead")
            # Revert to master in the end
            return state.db.get_engine(self.app, bind='master')

    def using_bind(self, name):
        s = RoutingSession(self.db)
        vars(s).update(vars(self))
        s._name = name
        return s

    """
    Checks the replication latency when connecting to a slave:
    a.) If pg_is_in_recovery() is false, it is the master, so there is no latency.
    b.) If the xlog receive and replay locations match, the slave is in sync.
    c.) Otherwise, returns the number of seconds of lag.
    """
    def get_latency(self, name, engine):
        if not self.app.config['SLAVE_CHECK_LATENCY']:
            return 0
        connection = engine.connect()
        query = """
            SELECT CASE
                WHEN NOT pg_is_in_recovery() THEN 0
                WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location() THEN 0
                ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())::INTEGER
            END AS replication_lag;
        """
        result = connection.execute(query)
        return result.fetchone()[0]
```
|
closed
|
2016-11-03T15:41:36Z
|
2020-12-05T21:18:20Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/444
|
[] |
christopher-abastar
| 1
|
jupyter-book/jupyter-book
|
jupyter
| 2,076
|
Sphinx config `navigation with keys` removes links to Github
|
### Describe the bug
**context**
When I add `navigation_with_keys: True` or `False` to
```yaml
sphinx:
  config:
    html_theme_options:
      navigation_with_keys: true
```
in `_config.yml`, the repository buttons disappear
**expectation**
I expected Github repository buttons to appear if they are set to true.
**bug**
But instead they disappear.
### Reproduce the bug
1. Install jupyterbook (I use venv with the following commands)
```bash
python3 -m venv --clear ./test_jb
source test_jb/bin/activate
python3 -m pip install jupyter-book
```
2. Create book template
```bash
python3 -m jupyter book create test_book
```
3. Test building
```bash
python3 -m jupyter book build .
```
which gives

4. add
```yaml
sphinx:
  config:
    html_theme_options:
      navigation_with_keys: true
```
to config and rebuild. Returns

### List your environment
```
python3 -m jupyter book --version
Jupyter Book : 0.15.1
External ToC : 0.3.1
MyST-Parser : 0.18.1
MyST-NB : 0.17.2
Sphinx Book Theme : 1.0.1
Jupyter-Cache : 0.6.1
NbClient : 0.7.4
```
Python 3.10.12
Ubuntu 22.04
|
open
|
2023-11-15T08:59:17Z
|
2023-11-21T10:54:10Z
|
https://github.com/jupyter-book/jupyter-book/issues/2076
|
[
"bug"
] |
jorgensd
| 1
|
deepspeedai/DeepSpeed
|
deep-learning
| 5,898
|
[BUG] Gradient accumulation causing training loss differences in Deepspeed vs FSDP
|
**Describe the bug**
I am trying to pretrain an [Olmo ](https://github.com/allenai/OLMo)1B model on 8 MI 250 GPUs with Docker image: rocm/pytorch:latest (ROCm 6.1). I'm using a small subset of Dolma dataset for pretraining.
I see that the training loss is comparable between FSDP and DeepSpeed when gradient accumulation is small, but as gradient accumulation increases, the training losses diverge.
<img width="280" alt="image" src="https://github.com/user-attachments/assets/836f0135-5a9f-4d6d-addf-d51f44357740">
^ For instance, in the above run I'm using a gradient accumulation of 16 (dark blue is FSDP and purple is DeepSpeed).
I'm testing all my training runs in mixed precision amp_fp16. The reduce_dtype in FSDP.MixedPrecision is set to fp32 and I also make sure to set "data_types": { "grad_accum_dtype": "fp32" } in ds_config.
Here is the relevant ds_config I'm using:
```python
ds_config = {
    "train_batch_size": 1024,
    "train_micro_batch_size_per_gpu": 8,  # grad_acc of 16 will get 1024 effective batch size
    "prescale_gradients": True | False,  # I've tried both
    "zero_optimization": {
        "stage": 0,
        "cpu_offload": False,
        "overlap_comm": True | False,  # I've tried both
        "reduce_scatter": True,
        "reduce_bucket_size": model_hidden_size * model_hidden_size,
        "contiguous_gradients": True,
    },
    "gradient_clipping": 1.0,
    "data_types": {"grad_accum_dtype": "fp32"},
    "bf16": {
        "enabled": True
    },
}
```
<img width="269" alt="image" src="https://github.com/user-attachments/assets/f87c1c03-710e-4355-902a-b4527c58cb75">
^ I also tried a full-precision FP32 run with a per_gpu_batch_size of 2 and a high gradient accumulation of 128, and I still see a big difference in training losses (blue is DeepSpeed, yellow is FSDP).
Given that the other settings are the same (lr, lr scheduler, optimizer, etc.), what could be causing this difference?
**To Reproduce**
For the DeepSpeed version of Olmo, I'm using the changes in this [pull request](https://github.com/allenai/OLMo/pull/384) together with the latest code changes. I can share more details if needed.
**ds_report output**
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn is not compatible with ROCM
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn is not compatible with ROCM
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/olmo/lib/python3.9/site-packages/torch']
torch version .................... 2.3.0a0+gitae01701
deepspeed install path ........... ['/opt/conda/envs/olmo/lib/python3.9/site-packages/deepspeed']
deepspeed info ................... 0.14.5+unknown, unknown, unknown
torch cuda version ............... None
torch hip version ................ 6.1.40091-a8dbc0c19
nvcc version ..................... None
deepspeed wheel compiled w. ...... torch 2.3, hip 6.1
shared memory (/dev/shm) size .... 503.85 GB
**System info (please complete the following information):**
- GPU count and types: 8 Mi250
- Interconnects (if applicable) [e.g., two machines connected with 100 Gbps IB]: single node
- Python version: 3.9
**Launcher context**
Are you launching your experiment with the `deepspeed` launcher, MPI, or something else?: im using torchrun
**Docker context**
Are you using a specific docker image that you can share? rocm/pytorch:latest (ROCm 6.1)
**Additional context**
Add any other context about the problem here.
|
closed
|
2024-08-09T10:57:49Z
|
2024-09-25T20:32:03Z
|
https://github.com/deepspeedai/DeepSpeed/issues/5898
|
[
"bug",
"training"
] |
gramesh-amd
| 3
|
Ehco1996/django-sspanel
|
django
| 308
|
How to change the website icon (favicon)
|
I'm not using Docker; I'm using an nginx reverse proxy. How can I change the favicon?
|
closed
|
2020-04-22T12:47:52Z
|
2020-05-17T02:13:35Z
|
https://github.com/Ehco1996/django-sspanel/issues/308
|
[] |
chengziorange
| 1
|
yt-dlp/yt-dlp
|
python
| 12,062
|
Unable Download Video Instagram
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
There is not much to explain: with an Instagram video, you copy the share link and paste it directly into the Seal app to download it, but it is not possible and it shows an error. The error report was copied and placed in this report.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
App version: 1.13.1 (11312)
Device information: Android 14 (API 34)
Supported ABIs: [arm64-v8a]
Yt-dlp version: 2024.12.26.232815
URL: https://www.instagram.com/reel/C_FJQUZM0TB/?igsh=MXh1aTBsOWt0MW85NA==
WARNING: [Instagram] unable to extract username; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
ERROR: [Instagram] C_FJQUZM0TB: Unable to extract video url; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
```
|
closed
|
2025-01-12T03:11:34Z
|
2025-01-12T10:00:23Z
|
https://github.com/yt-dlp/yt-dlp/issues/12062
|
[
"spam"
] |
VickTC
| 1
|
xorbitsai/xorbits
|
numpy
| 288
|
REF: use Xoscar as dependency
|
Previously, Xoscar was included in Xorbits itself; now it is separated out and hosted at https://github.com/xprobe-inc/xoscar. Hence Xoscar can be a dependency for this repo.
|
closed
|
2023-03-20T09:51:51Z
|
2023-03-22T06:48:22Z
|
https://github.com/xorbitsai/xorbits/issues/288
|
[
"refactor"
] |
qianduoduo0904
| 0
|
d2l-ai/d2l-en
|
data-science
| 1,664
|
17.1. Generative Adversarial Networks
|
Second paragraph ~
>But there is more to machine learning than just solving discriminative tasks. For example, given a large dataset, without any labels, we might want to **learn a model** that concisely captures the characteristics of this data. Given such a model, we could sample synthetic data examples that resemble the distribution of the training data. For example, given a large corpus of photographs of faces, we might want to be able to generate a new photorealistic image that looks like it might plausibly have come from the same dataset. This kind of learning is called generative modeling.
(1) What does "learn a model" mean here? Is it something different from the commonly used phrase "train a model"?
(2) This paragraph is a little bit like word soup; consider revising for clarity. Below is one option.
There is more to machine learning than just solving discriminative tasks. If you have a large dataset without any labels, you might want to train a model that concisely captures the characteristics of this data. You could then sample synthetic data examples that resemble the distribution of the training data... *and then what?!*. For example, given a large corpus of photographs of faces, we might want to be able to generate new photorealistic images which look like they could have come from the original dataset. This kind of learning is called generative modeling.
|
open
|
2021-02-25T19:39:02Z
|
2023-11-05T20:08:25Z
|
https://github.com/d2l-ai/d2l-en/issues/1664
|
[] |
froggie901
| 1
|
glumpy/glumpy
|
numpy
| 124
|
Problem with fonts on Windows and TypeError: 'float'
|
Setup:
Windows 10, 64-bit.
This is the output:
c:\Users\levan\Desktop\glumpy-master\examples>
c:\Users\levan\Desktop\glumpy-master\examples>python lorenz.py
[i] Using GLFW (GL 4.6)
[i] Requesting "OpenSans-Regular.ttf" from remote server
[w] Data not available on remote server
[w] Falling back to default font
[i] Requesting "OpenSans-Bold.ttf" from remote server
[w] Data not available on remote server
[w] Falling back to default font
Traceback (most recent call last):
File "lorenz.py", line 59, in <module>
anchor_x = "left", anchor_y = "center")
File "C:\Python36\lib\site-packages\glumpy\graphics\collections\sdf_glyph_collection.py", line 76, in append
V, I = self.bake(text, font, anchor_x, anchor_y)
File "C:\Python36\lib\site-packages\glumpy\graphics\collections\sdf_glyph_collection.py", line 128, in bake
glyph = font[charcode]
File "C:\Python36\lib\site-packages\glumpy\graphics\text\sdf_font.py", line 75, in __getitem__
self.load('%c' % charcode)
File "C:\Python36\lib\site-packages\glumpy\graphics\text\sdf_font.py", line 130, in load
data,offset,advance = self.load_glyph(face, charcode)
File "C:\Python36\lib\site-packages\glumpy\graphics\text\sdf_font.py", line 102, in load_glyph
hires_data = np.zeros( (hires_height,hires_width), np.double)
TypeError: 'float' object cannot be interpreted as an integer
c:\Users\levan\Desktop\glumpy-master\examples>
|
closed
|
2017-11-21T14:25:29Z
|
2017-11-22T11:17:56Z
|
https://github.com/glumpy/glumpy/issues/124
|
[] |
shoshia
| 3
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 635
|
[HELP WANTED]: Apply for multiple positions at a company?
|
### Issue description
My question is in regard to the field "apply_once_at_company". If I set it to true, does this mean it will only apply once at a company, for only one position? I'd want to apply multiple times to a company if there are multiple appropriate positions, or if the position is with the same company but at different locations. Furthermore, does this apply only to each instance the script is run, or will it never apply to the same company again if it has already applied to that company?
### Specific tasks
apply_once_at_company
### Additional resources
_No response_
### Additional context
_No response_
|
closed
|
2024-10-28T13:47:35Z
|
2024-10-29T15:29:56Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/635
|
[
"help wanted"
] |
shiboby
| 6
|
httpie/cli
|
api
| 1,201
|
http://external.system.id=
|
https://github.com/plaid/plaid-link-examples/blob/master/webviews/android/link-android-webview-example.iml#L1-L21
|
closed
|
2021-11-08T07:21:06Z
|
2021-11-23T09:03:19Z
|
https://github.com/httpie/cli/issues/1201
|
[
"invalid"
] |
transmatecode
| 1
|
donnemartin/data-science-ipython-notebooks
|
matplotlib
| 108
|
Data science
|
open
|
2024-07-17T05:22:04Z
|
2024-07-17T05:22:04Z
|
https://github.com/donnemartin/data-science-ipython-notebooks/issues/108
|
[] |
rjagathe
| 0
|
|
plotly/dash
|
jupyter
| 3,094
|
Allow_duplicate=True Fails with More Than Two Duplicate Callbacks
|
## Bug Report: `allow_duplicate=True` Fails with More Than Two Duplicate Callbacks
**Description:**
The `allow_duplicate=True` parameter does not function correctly when there are more than two duplicate callbacks.
**Reproducible Example:**
The following examples demonstrate the issue:
**Working Examples (Two Duplicate Callbacks):**
```python
# Example 1: Works
Output("layout_ctx-train", "children")
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
```
```python
# Example 2: Works
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
```
```python
# Example 3: Works
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
```
**Failing Examples (More Than Two Duplicate Callbacks):**
```python
# Example 4: Fails
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button4', 'n_clicks'),
...
```
```python
# Example 5: Fails
Output("layout_ctx-train", "children")
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button4', 'n_clicks'),
...
```
```python
# Example 6: Fails
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button4', 'n_clicks'),
...
```
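For completeness, a minimal self-contained sketch of the failing pattern above (component ids and layout are illustrative, not my actual app; note that `allow_duplicate=True` also requires `prevent_initial_call=True` on each callback):
```python
from dash import Dash, html, Input, Output, callback

app = Dash(__name__)
app.layout = html.Div([
    html.Button("One", id="button1"),
    html.Button("Two", id="button2"),
    html.Button("Three", id="button3"),
    html.Div(id="layout_ctx-train"),
])

def make_callback(button_id):
    # Register one callback per button, all targeting the same output.
    @callback(
        Output("layout_ctx-train", "children", allow_duplicate=True),
        Input(button_id, "n_clicks"),
        prevent_initial_call=True,
    )
    def update(n_clicks):
        return f"{button_id} clicked {n_clicks} times"

for bid in ("button1", "button2", "button3"):
    make_callback(bid)

if __name__ == "__main__":
    app.run(debug=True)
```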
**Expected Behavior:**
Duplicate callbacks should function correctly when at least one of the components has `allow_duplicate=True` set.
**Additional Comments:**
This functionality worked correctly in Dash version 2.9.1 for more than two duplicate callbacks as long as `allow_duplicate=True` was present on all relevant components. The issue was encountered in Dash versions 2.17.1+.
|
closed
|
2024-11-26T12:01:25Z
|
2024-11-27T15:35:24Z
|
https://github.com/plotly/dash/issues/3094
|
[
"bug",
"P2"
] |
Kissabi
| 1
|
jazzband/django-oauth-toolkit
|
django
| 1,120
|
Django Lazy Reference ValueError after upgrading to the latest version and running migrations,
|
```
raise ValueError("\n".join(error.msg for error in errors))
ValueError: The field oauth2_provider.AccessToken.application was declared with a lazy reference to 'oauth.clientapplication', but app 'oauth' isn't installed.
The field oauth2_provider.AccessToken.source_refresh_token was declared with a lazy reference to 'oauth.clientrefreshtoken', but app 'oauth' isn't installed.
The field oauth2_provider.Grant.application was declared with a lazy reference to 'oauth.clientapplication', but app 'oauth' isn't installed.
The field oauth2_provider.RefreshToken.access_token was declared with a lazy reference to 'oauth.clientaccesstoken', but app 'oauth' isn't installed.
The field oauth2_provider.RefreshToken.application was declared with a lazy reference to 'oauth.clientapplication', but app 'oauth' isn't installed.
```
I am constantly facing this issue. Are there any solutions for it?
oauth2_provider settings configuration:
```python
OAUTH2_PROVIDER_APPLICATION_MODEL = "oauth.ClientApplication"
OAUTH2_PROVIDER_ACCESS_TOKEN_MODEL = "oauth.ClientAccessToken"
OAUTH2_PROVIDER_GRANT_MODEL = "oauth.ClientGrant"
OAUTH2_PROVIDER_REFRESH_TOKEN_MODEL = "oauth.ClientRefreshToken"
OAUTH2_PROVIDER_ID_TOKEN_MODEL = "oauth.ClientIdToken"

OAUTH2_PROVIDER = {
    "ACCESS_TOKEN_EXPIRE_SECONDS": 1800,
    "SCOPES": {
        "uid": "User ID read access",
    },
}
```
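For context, the custom swappable models those settings point at are declared roughly like this (a hypothetical sketch, not my actual `oauth/models.py`):
```python
# oauth/models.py (hypothetical sketch of the swapped-in models)
from oauth2_provider.models import (
    AbstractAccessToken,
    AbstractApplication,
    AbstractGrant,
    AbstractIDToken,
    AbstractRefreshToken,
)

class ClientApplication(AbstractApplication):
    pass

class ClientAccessToken(AbstractAccessToken):
    pass

class ClientGrant(AbstractGrant):
    pass

class ClientRefreshToken(AbstractRefreshToken):
    pass

class ClientIdToken(AbstractIDToken):
    pass
```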
**I even tried applying `run_before` to my custom initial migrations, but no luck.**
```python
run_before = [
    ('oauth2_provider', '0001_initial'),
]
```
This is with `Django==3.2.11` and `django-oauth-toolkit==1.7.0`
Below is the order of Apply migrations.
```bash
Applying oauth.0001_initial_squashed_0004_auto_20220218_1009...accounts_ui client does not exist
accounts_ui client created
OK
Applying oauth2_provider.0001_initial... OK
Applying oauth2_provider.0002_auto_20190406_1805... OK
Applying oauth2_provider.0003_auto_20201211_1314... OK
Applying oauth2_provider.0004_auto_20200902_2022... OK
Applying oauth2_provider.0005_auto_20211222_2352... OK
```
But I am still facing the above error. I have tried everything I could from other open issues, such as swappable models, clean migrations, and run_before.
|
open
|
2022-02-18T11:47:41Z
|
2022-06-26T13:37:45Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/1120
|
[
"bug"
] |
smit-mehta25
| 7
|
vllm-project/vllm
|
pytorch
| 15,264
|
[Bug]: qwen2.5vl cannot use fp8 quantization
|
### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.9 (main, Mar 17 2025, 21:01:58) [Clang 20.1.0 ] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA L20
GPU 1: NVIDIA L20
GPU 2: NVIDIA L20
GPU 3: NVIDIA L20
Nvidia driver version: 535.216.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9K84 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 0
BogoMIPS: 5200.43
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ibpb vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 avx512_bf16 clzero xsaveerptr wbnoinvd arat avx512vbmi umip avx512_vbmi2 vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-191
NUMA node1 CPU(s): 192-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer-python==0.2.1.post2+cu124torch2.6
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.3.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.49.0
[pip3] triton==3.2.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NODE NODE NODE 0-191 0 N/A
GPU1 NODE X PIX NODE 0-191 0 N/A
GPU2 NODE PIX X NODE 0-191 0 N/A
GPU3 NODE NODE NODE X 0-191 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NVIDIA_VISIBLE_DEVICES=0,1,2,3
NVIDIA_REQUIRE_CUDA=cuda>=12.4 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 brand=tesla,driver>=535,driver<536 brand=unknown,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=geforce,driver>=535,driver<536 brand=geforcertx,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=titan,driver>=535,driver<536 brand=titanrtx,driver>=535,driver<536
NCCL_VERSION=2.20.5-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
VLLM_USAGE_SOURCE=production-docker-image
CUDA_VERSION=12.4.0
LD_LIBRARY_PATH=/opt/venv/lib/python3.12/site-packages/cv2/../../lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### 🐛 Describe the bug
When using vllm 0.8.1 to deploy the qwen2.5-vl-7B model, fp8 quantization cannot be used. How can I solve this problem?
The deployment command is as follows:
`vllm serve Qwen2.5-VL/Qwen2.5-VL-7B-Instruct --port 8083 --quantization fp8`
The error is as follows:
```
......
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:03<00:00, 1.51it/s]
INFO 03-21 02:21:53 [loader.py:429] Loading weights took 3.47 seconds
INFO 03-21 02:21:53 [gpu_model_runner.py:1176] Model loading took 8.9031 GB and 3.891568 seconds
INFO 03-21 02:21:53 [gpu_model_runner.py:1421] Encoder cache will be initialized with a budget of 98304 tokens, and profiled with 1 video items of the maximum feature size.
ERROR 03-21 02:21:57 [core.py:340] EngineCore hit an exception: Traceback (most recent call last):
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 332, in run_engine_core
ERROR 03-21 02:21:57 [core.py:340] engine_core = EngineCoreProc(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 287, in __init__
ERROR 03-21 02:21:57 [core.py:340] super().__init__(vllm_config, executor_class, log_stats)
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 62, in __init__
ERROR 03-21 02:21:57 [core.py:340] num_gpu_blocks, num_cpu_blocks = self._initialize_kv_caches(
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 121, in _initialize_kv_caches
ERROR 03-21 02:21:57 [core.py:340] available_gpu_memory = self.model_executor.determine_available_memory()
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py", line 66, in determine_available_memory
ERROR 03-21 02:21:57 [core.py:340] output = self.collective_rpc("determine_available_memory")
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 03-21 02:21:57 [core.py:340] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/utils.py", line 2216, in run_method
ERROR 03-21 02:21:57 [core.py:340] return func(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 03-21 02:21:57 [core.py:340] return func(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 157, in determine_available_memory
ERROR 03-21 02:21:57 [core.py:340] self.model_runner.profile_run()
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1452, in profile_run
ERROR 03-21 02:21:57 [core.py:340] dummy_encoder_outputs = self.model.get_multimodal_embeddings(
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 975, in get_multimodal_embeddings
ERROR 03-21 02:21:57 [core.py:340] video_embeddings = self._process_video_input(video_input)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 931, in _process_video_input
ERROR 03-21 02:21:57 [core.py:340] video_embeds = self.visual(pixel_values_videos, grid_thw=grid_thw)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-21 02:21:57 [core.py:340] return self._call_impl(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-21 02:21:57 [core.py:340] return forward_call(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 659, in forward
ERROR 03-21 02:21:57 [core.py:340] hidden_states = blk(
ERROR 03-21 02:21:57 [core.py:340] ^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-21 02:21:57 [core.py:340] return self._call_impl(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-21 02:21:57 [core.py:340] return forward_call(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 382, in forward
ERROR 03-21 02:21:57 [core.py:340] x = x + self.mlp(self.norm2(x))
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-21 02:21:57 [core.py:340] return self._call_impl(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-21 02:21:57 [core.py:340] return forward_call(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/qwen2_5_vl.py", line 191, in forward
ERROR 03-21 02:21:57 [core.py:340] x_gate, _ = self.gate_proj(x)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
ERROR 03-21 02:21:57 [core.py:340] return self._call_impl(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
ERROR 03-21 02:21:57 [core.py:340] return forward_call(*args, **kwargs)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 474, in forward
ERROR 03-21 02:21:57 [core.py:340] output_parallel = self.quant_method.apply(self, input_, bias)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/fp8.py", line 386, in apply
ERROR 03-21 02:21:57 [core.py:340] return self.fp8_linear.apply(input=x,
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/w8a8_utils.py", line 184, in apply
ERROR 03-21 02:21:57 [core.py:340] output = ops.cutlass_scaled_mm(qinput,
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] File "/opt/venv/lib/python3.12/site-packages/vllm/_custom_ops.py", line 523, in cutlass_scaled_mm
ERROR 03-21 02:21:57 [core.py:340] assert (b.shape[0] % 16 == 0 and b.shape[1] % 16 == 0)
ERROR 03-21 02:21:57 [core.py:340] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-21 02:21:57 [core.py:340] AssertionError
...
```
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
open
|
2025-03-21T03:04:00Z
|
2025-03-24T09:23:19Z
|
https://github.com/vllm-project/vllm/issues/15264
|
[
"bug"
] |
lessmore991
| 1
|