| column | dtype | range |
|---|---|---|
| repo_name | string | length 9-75 |
| topic | string | 30 classes |
| issue_number | int64 | 1-203k |
| title | string | length 1-976 |
| body | string | length 0-254k |
| state | string | 2 classes |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| url | string | length 38-105 |
| labels | list | length 0-9 |
| user_login | string | length 1-39 |
| comments_count | int64 | 0-452 |
Lightning-AI/pytorch-lightning
|
deep-learning
| 19,981
|
[fabric.example.rl] Not support torch.float64 for MPS device
|
### Bug description
I found an error when running the example `pytorch-lightning/examples/fabric/reinforcement_learning` on an M2 Mac (device type = mps).
### Reproducing the Error
```
reinforcement_learning git:(master) ✗ fabric run train_fabric.py
W0617 12:53:22.541000 8107367488 torch/distributed/elastic/multiprocessing/redirects.py:27] NOTE: Redirects are currently not supported in Windows or MacOs.
[rank: 0] Seed set to 42
Missing logger folder: logs/fabric_logs/2024-06-17_12-53-24/CartPole-v1_default_42_1718596404
set default torch dtype as torch.float32
Traceback (most recent call last):
File "/Users/user/workspace/pytorch-lightning/examples/fabric/reinforcement_learning/train_fabric.py", line 215, in <module>
main(args)
File "/Users/user/workspace/pytorch-lightning/examples/fabric/reinforcement_learning/train_fabric.py", line 154, in main
rewards[step] = torch.tensor(reward, device=device).view(-1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
```
This bug can be fixed by checking `device.type` and casting `reward` to `torch.float32`:
```diff
@@ -146,7 +146,7 @@ def main(args: argparse.Namespace):
# Single environment step
next_obs, reward, done, truncated, info = envs.step(action.cpu().numpy())
done = torch.logical_or(torch.tensor(done), torch.tensor(truncated))
- rewards[step] = torch.tensor(reward, device=device).view(-1)
+ rewards[step] = torch.tensor(reward, device=device, dtype=torch.float32 if device.type == 'mps' else None).view(-1)
```
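For context, a minimal standalone sketch of the same guard (the `to_device_tensor` helper name is hypothetical; the float32 fallback mirrors the patch above):
```python
import torch

def to_device_tensor(value, device: torch.device) -> torch.Tensor:
    # MPS has no float64 support, so fall back to float32 there;
    # dtype=None keeps PyTorch's default dtype inference elsewhere.
    dtype = torch.float32 if device.type == "mps" else None
    return torch.tensor(value, device=device, dtype=dtype)
```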
### What version are you seeing the problem on?
master
### Environment
<details><summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0): 2.3.0
#- Lightning App Version (e.g., 0.5.2): 2.3.0
#- PyTorch Version (e.g., 2.0): 2.3.1
#- Python version (e.g., 3.9): 3.12.3
#- OS (e.g., Linux): Mac
#- CUDA/cuDNN version: MPS
#- GPU models and configuration: M2
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
|
closed
|
2024-06-17T04:13:05Z
|
2024-06-21T14:36:12Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19981
|
[
"bug",
"example",
"ver: 2.2.x"
] |
swyo
| 0
|
ultralytics/yolov5
|
machine-learning
| 13,389
|
How to reduce false positives in YOLOv5
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello everyone,
I'm currently training YOLOv5s to detect three objects: phone, cigarette, and vape. My original dataset contained 9,000 images, with 3,000 images for each class. After training the model for 100 epochs, I've noticed a high number of false positives.
To address this, I've added 3,000 negative images (images that don't contain any of the target objects) to the dataset. I've also experimented with adjusting the conf_thres and iou_thres settings a bit. I plan to train the model for more epochs in the future.
Are there any additional strategies or techniques you recommend to further reduce the number of false positives? Any insights would be greatly appreciated!
Thanks in advance.
### Additional
training info
--epochs 100, --img-size 640, --batch-size 16, --optimizer SGD, --cache ram, --hyp /content/yolov5/data/hyps/hyp.scratch-low.yaml
The content of the hyp.scratch-low.yaml file is set to default.
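For reference, threshold tuning of the kind mentioned above can be done at inference time with YOLOv5's standard CLI flags (the weights path here is hypothetical):
```bash
# raising --conf-thres suppresses low-confidence detections,
# trading some recall for fewer false positives
python detect.py --weights runs/train/exp/weights/best.pt \
    --img 640 --conf-thres 0.5 --iou-thres 0.45 --source data/images
```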
|
open
|
2024-10-28T17:26:18Z
|
2024-11-09T01:07:51Z
|
https://github.com/ultralytics/yolov5/issues/13389
|
[
"question",
"detect"
] |
yAlqubati
| 2
|
zappa/Zappa
|
flask
| 434
|
[Migrated] Deploy and Update from Local and S3-Hosted Zip Files
|
Originally from: https://github.com/Miserlou/Zappa/issues/1128 by [Miserlou](https://github.com/Miserlou)
A few people have requested the ability to do something like:
`zappa deploy dev --zip localfile.zip`
and similarly:
`zappa update dev --zip s3://my_bucket/package.zip`
Could be handy.
|
closed
|
2021-02-20T08:32:53Z
|
2024-04-13T16:17:45Z
|
https://github.com/zappa/Zappa/issues/434
|
[
"enhancement",
"help wanted",
"hacktoberfest",
"no-activity",
"auto-closed"
] |
jneves
| 2
|
microsoft/nni
|
deep-learning
| 5,719
|
Enhancement of GBDTSelector inherited from FeatureSelector
|
<!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
The FeatureSelector class was written in a preliminary form, like the following referenced code snippet:
https://github.com/microsoft/nni/blob/767ed7f22e1e588ce76cbbecb6c6a4a76a309805/nni/feature_engineering/feature_selector.py#L26-L34
I think GBDTSelector does not need to inherit from FeatureSelector, since it rewrites all of the class methods and does not use any of FeatureSelector's class properties.
https://github.com/microsoft/nni/blob/767ed7f22e1e588ce76cbbecb6c6a4a76a309805/nni/algorithms/feature_engineering/gbdt_selector/gbdt_selector.py#L35
The GBDTSelector class also adopts the train_test_split function from scikit-learn, and I was wondering whether validation datasets will be needed to enhance the fit method ([like what I implemented for the training cycle, evaluating on validation data to stop early](https://github.com/linjing-lab/easy-pytorch/blob/9651774dcc4581104f914980baf2ebc05f96fd85/released_box/perming/_utils.py#L382)):
https://github.com/microsoft/nni/blob/767ed7f22e1e588ce76cbbecb6c6a4a76a309805/nni/algorithms/feature_engineering/gbdt_selector/gbdt_selector.py#L86-L89
**Why is this needed**:
The subclass that inherits from FeatureSelector is GBDTSelector, and the inheritance is not meaningful because all of the base class's properties and methods are overridden.
**Without this feature, how does current nni work**:
GBDTSelector currently works in nni through its own `fit` and [`get_selected_features`](https://github.com/microsoft/nni/blob/767ed7f22e1e588ce76cbbecb6c6a4a76a309805/nni/algorithms/feature_engineering/gbdt_selector/gbdt_selector.py#L102) methods, not through methods obtained from the FeatureSelector class.
**Components that may involve changes**:
Add `super(GBDTSelector, self).__init__` to the [initializer of GBDTSelector](https://github.com/microsoft/nni/blob/767ed7f22e1e588ce76cbbecb6c6a4a76a309805/nni/algorithms/feature_engineering/gbdt_selector/gbdt_selector.py#L37), or drop the FeatureSelector base class if the inherited properties aren't important, given that the subclass rewrites all of the methods instead of extending them.
**Brief description of your proposal if any**:
The properties and methods contained in FeatureSelector may not contribute to the code logic of any module in GBDTSelector.
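A minimal sketch of the first option (the constructor signature is illustrative, not copied from nni):
```python
class FeatureSelector:
    def __init__(self, **kwargs):
        self.selected_features_ = None  # example inherited property

class GBDTSelector(FeatureSelector):
    def __init__(self, **kwargs):
        # chain to the base class explicitly so inherited properties are set up
        super().__init__(**kwargs)
```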
|
open
|
2023-12-04T03:25:52Z
|
2023-12-04T03:27:24Z
|
https://github.com/microsoft/nni/issues/5719
|
[] |
linjing-lab
| 0
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,191
|
Can't login on my Google account
|
Up to version 3.2.0, undetected_chromedriver worked perfectly, but on newer versions it doesn't. Always in headful mode.
|
open
|
2023-04-12T12:58:31Z
|
2023-05-02T22:01:31Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1191
|
[] |
FRIKIdelTO
| 6
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 2,010
|
Select2.js auto blink is not working
|
In Select2 widgets, when I open the drop-down, the cursor under the search input does not blink; when I click into the search input manually, it works. The previous version worked fine.
Select2.js
|
closed
|
2023-04-02T14:06:48Z
|
2023-04-09T11:56:48Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/2010
|
[] |
Alex-7999
| 1
|
waditu/tushare
|
pandas
| 1,651
|
Financial report data updated inconsistently
|
It is already May 1, so in principle all Q1 and annual report data should have been fully updated.
Today I fetched Midea Group's financial statements.
Using pro.income(ts_code='000333.SZ', start_date='20210101',end_date='20220430')
to get Midea Group's income statement, the 2022 Q1 report is there, but the 2021 annual report cannot be found. Looking at the returned data, end_date jumps straight from 20210930 to 20220331, skipping the 20211231 annual report. Below is the Python output:
f_ann_date end_date
0 20220430 20220331
1 20211030 20210930
2 20210831 20210630
However, pro.balancesheet(ts_code='000333.SZ', start_date='20210101',end_date='20220430') and pro.cashflow(ts_code='000333.SZ', start_date='20210101',end_date='20220430')
both return Midea Group's balance sheet and cash flow statement with the annual report present. I also tried several other companies that published annual reports on April 30, and their income statement data is all there.
The data appears to be mixed up.
<img width="389" alt="tsbug美的财报" src="https://user-images.githubusercontent.com/104714818/166139915-0b9da2c4-2730-4062-a38f-8da678a160be.PNG">
|
open
|
2022-05-01T09:21:54Z
|
2022-05-01T09:41:20Z
|
https://github.com/waditu/tushare/issues/1651
|
[] |
Yellowman9
| 1
|
itamarst/eliot
|
numpy
| 243
|
MemoryLogger.validate mutates the contents of logged dictionaries
|
In particular, serialization does this. Validation really ought to be free of side effects!
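A minimal sketch of the side-effect-free pattern this implies (the function name is hypothetical): validate against a deep copy so the caller's dictionaries are never mutated.
```python
import copy

def validate_message(message: dict) -> None:
    # serialize/validate only the copy; the original logged dict stays untouched
    candidate = copy.deepcopy(message)
    # ... run serializers and validators against `candidate` ...
```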
|
open
|
2015-12-01T21:14:19Z
|
2018-09-22T20:59:21Z
|
https://github.com/itamarst/eliot/issues/243
|
[] |
itamarst
| 0
|
nonebot/nonebot2
|
fastapi
| 2,549
|
Plugin: BA gacha simulator
|
### PyPI project name
nonebot-plugin-badrawcard
### Plugin import package name
nonebot_plugin_BAdrawcard
### Tags
[]
### Plugin configuration
_No Response_
|
closed
|
2024-01-26T08:46:31Z
|
2024-01-29T02:39:22Z
|
https://github.com/nonebot/nonebot2/issues/2549
|
[
"Plugin"
] |
lengmianzz
| 6
|
raphaelvallat/pingouin
|
pandas
| 13
|
Add 95% CI for ttest function
|
See gitter chat.
|
closed
|
2019-04-08T16:09:24Z
|
2019-04-22T16:16:48Z
|
https://github.com/raphaelvallat/pingouin/issues/13
|
[
"feature request :construction:"
] |
raphaelvallat
| 1
|
onnx/onnx
|
scikit-learn
| 6,536
|
[Feature request] operator Conv has no example with a bias in the backend test
|
### System information
All
### What is the problem that this feature solves?
Robustness.
### Alternatives considered
_No response_
### Describe the feature
_No response_
### Will this influence the current api (Y/N)?
No
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_
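For illustration, a hedged sketch of what such a backend test case might construct (using onnx's standard helper API; the tensor names are placeholders):
```python
from onnx import helper

# Conv node that exercises the optional third input B (the bias),
# the case the backend tests currently lack
node = helper.make_node(
    "Conv",
    inputs=["X", "W", "B"],
    outputs=["Y"],
    kernel_shape=[3, 3],
    pads=[1, 1, 1, 1],
)
print(node)
```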
|
open
|
2024-11-07T10:22:43Z
|
2024-11-07T10:22:43Z
|
https://github.com/onnx/onnx/issues/6536
|
[
"topic: enhancement"
] |
xadupre
| 0
|
sczhou/CodeFormer
|
pytorch
| 332
|
Video Enhancement, Can it run in parallel?
|
Nice work!
For video enhancement, can the image processing run in parallel?
|
open
|
2023-12-14T03:17:32Z
|
2023-12-14T03:17:49Z
|
https://github.com/sczhou/CodeFormer/issues/332
|
[] |
jackyin68
| 0
|
modoboa/modoboa
|
django
| 2,623
|
Admin input fields empty in GUI v2.
|
# Impacted versions
* OS Type: Debian
* OS Version: bullseye
* Database Type: PostgreSQL
* Database version: 13.8-0+deb11u1
* Modoboa: 2.0.2
* installer used: Yes
* Webserver: Nginx
* Navigator: Safari OSX, Chrome OSX
# Current behavior
Hello,
In the v2 interface all input fields are empty. They do not take the values from the database as in the v1 interface.


|
closed
|
2022-10-01T07:17:21Z
|
2023-02-23T15:00:13Z
|
https://github.com/modoboa/modoboa/issues/2623
|
[
"feedback-needed"
] |
stefaweb
| 8
|
tqdm/tqdm
|
pandas
| 1,336
|
Visual statusbar finish error for multiprocessed tasks
|
I use:
4.64.0 3.9.7 | packaged by conda-forge | (default, Sep 23 2021, 07:24:41) [MSC v.1916 64 bit (AMD64)] win32
While the work is running, everything works:


But then every process that ends writes its finished status bar at the wrong position. This leads to:


As far as I understand, it is this line:
https://github.com/tqdm/tqdm/blob/master/tqdm/std.py#L1315-L1317
It should be:
```python
# leave the finished status bar at the position it was
self.display(pos=pos)
fp_write('\r')
```
I started with:
```python
freeze_support()
tqdm.set_lock(RLock())
with Pool(initializer=tqdm.set_lock, initargs=(tqdm.get_lock(),), processes=8) as p:
p.map(partial(Historizer._parallel), tasks)
```
Then I do:
```python
for obj in tqdm(object_set.fetchmany(100_000), desc=f'objs #{identifier:02d}', position=identifier):
```
|
open
|
2022-06-29T07:13:34Z
|
2023-08-25T18:34:59Z
|
https://github.com/tqdm/tqdm/issues/1336
|
[] |
MrJack91
| 2
|
vaexio/vaex
|
data-science
| 1,552
|
[BUG-REPORT] Forbidden symbols in column names should raise warning?
|
Hi,
**Description**
When trying to drop a column actually used during intermediate calculations, I got the error below.
I finally found out that the bug only arises when using certain symbols in the initial column name.
```python
import vaex as vx
import pandas as pd
col = 'init#1'
pdf = pd.DataFrame({col: [2, 8, 20, 3, 17] })
vdf = vx.from_pandas(pdf)
# Temporary column used for intermediate calculations.
vdf['truc'] = vdf[col]*2
vdf[col] = vdf['truc']
# Drop temporary column.
vdf.drop(columns=['truc'], inplace=True)
# Show result.
vdf
```
```python
NameError: name 'truc' is not defined
Out[55]:
# init#1
0 error
1 error
2 error
3 error
4 error
```
But if using `col='init_1'`, it works.
I have tested all symbols, and only `_` is allowed / does not raise an error.
I would propose raising a warning when a symbol is used in a column name, so the user is warned that further calculations may not be possible.
The error message is not very clear (in the case of dropping a column); it was only by building a minimal example with a short column name that I got rid of the error and finally understood it was coming from the column name.
**Software information**
- Vaex version (`import vaex; vaex.__version__`):
{'vaex-core': '4.3.0.post1',
'vaex-viz': '0.5.0',
'vaex-hdf5': '0.8.0',
'vaex-server': '0.5.0',
'vaex-astro': '0.8.2',
'vaex-jupyter': '0.6.0',
'vaex-ml': '0.12.0'}
- Vaex was installed via: conda-forge
- OS: Ubuntu 20.04
|
open
|
2021-08-29T16:46:13Z
|
2021-09-11T18:59:53Z
|
https://github.com/vaexio/vaex/issues/1552
|
[] |
yohplala
| 1
|
cleanlab/cleanlab
|
data-science
| 611
|
Missing link in documentation
|
The documentation on this page: https://docs.cleanlab.ai/stable/index.html has a link for "label errors" that it found, but that link points to https://labelerrors.com/, which doesn't seem to be a working site. I'm assuming there is supposed to be a different link here.
|
closed
|
2023-01-30T23:23:28Z
|
2023-03-13T22:04:01Z
|
https://github.com/cleanlab/cleanlab/issues/611
|
[
"needs triage"
] |
jss367
| 2
|
jina-ai/serve
|
machine-learning
| 6,185
|
Jina-ai API unresponsive for Amazon product URLs
|
Jina-ai API is not returning any content when provided with Amazon product URLs, while it works correctly for other websites.
Please advise on potential causes or solutions. Let me know if you need more information.
Returned data:
```
Title: 503 - Service Unavailable Error
URL Source: https://www.amazon.com/Portable-Mechanical-Keyboard-MageGee-Backlit/dp/B098LG3N6R/ref=sr_1_1?_encoding=UTF8&content-id=amzn1.sym.12129333-2117-4490-9c17-6d31baf0582a&dib=eyJ2IjoiMSJ9.xPISJOYMxoc_9dHbx858f1JwKKN8MOEI6pxe0RfkUq5-YoBt2WvHxwQ2JfTjMUHM7KhYH9-CViR_7Mu_sdA9fTtlO6upY81XXLsTCcvQrQd21jMTrrvCPcFNCLu32ovECyUqHJP9do03wDM8Jfj5VMCYBB8Dvkbf3evLyK9vgRnNe1jvnki39RmDw-qvRsAhlUUtDgkkeS6MWfNYIM70Vz83mL8jXD44sShexO4WSU4.znblRHErp_k4_mgDDvhFgjVPe6jUfQ9iO6KvL_-AHyI&dib_tag=se&keywords=gaming+keyboard&pd_rd_r=00494241-850d-40b9-a741-3b34a5027b69&pd_rd_w=zOk93&pd_rd_wg=QZ3N1&pf_rd_p=12129333-2117-4490-9c17-6d31baf0582a&pf_rd_r=0ST68QYVQESTVXXG751C&qid=1723040389&sr=8-1
Markdown Content:
503 - Service Unavailable Error
===============
[](https://www.amazon.com/)
[](https://www.amazon.com/ref=cs_503_link/)
[](https://www.amazon.com/dogsofamazon)
```
|
closed
|
2024-08-07T14:24:21Z
|
2024-11-06T07:11:40Z
|
https://github.com/jina-ai/serve/issues/6185
|
[
"Stale"
] |
suysoftware
| 4
|
taverntesting/tavern
|
pytest
| 26
|
Publishing MQTT message(s) before commencing test
|
How do you think this should work @michaelboulton? Can you come up with something and implement it with some examples in the docs site?
|
closed
|
2018-02-07T08:41:23Z
|
2018-02-22T16:26:46Z
|
https://github.com/taverntesting/tavern/issues/26
|
[
"Type: Enhancement"
] |
benhowes
| 7
|
cvat-ai/cvat
|
tensorflow
| 8,673
|
Keybinds in UI allow drawing disabled shape types
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Create a task with 1 label, points type
2. Open Single Shape mode
3. Open Standard mode
4. Press N - bbox drawing will start
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_
|
closed
|
2024-11-08T17:34:46Z
|
2024-11-13T12:44:04Z
|
https://github.com/cvat-ai/cvat/issues/8673
|
[
"bug",
"ui/ux"
] |
zhiltsov-max
| 0
|
piskvorky/gensim
|
data-science
| 2,872
|
Broken file link in `run_corpora_and_vector_spaces` tutorial
|
#### Problem description
The `run_corpora_and_vector_spaces.ipynb` tutorial depends on a file on the web, and that file is missing.
#### Steps/code/corpus to reproduce
See https://groups.google.com/g/gensim/c/nX4lc8j0ZO0
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
Unknown (probably any).
|
closed
|
2020-07-05T07:24:23Z
|
2021-06-06T13:50:18Z
|
https://github.com/piskvorky/gensim/issues/2872
|
[
"bug",
"documentation",
"difficulty easy"
] |
piskvorky
| 6
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,691
|
Setting Proxy with Authentication on ChromeDriver in Selenium 4.5
|
Hello,
I am using Selenium version 4.5, and I would like to inquire about setting up a proxy with authentication (username and password) on a custom ChromeDriver using the "undetected-chromedriver" library.
|
open
|
2023-12-07T17:06:04Z
|
2024-02-21T18:58:23Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1691
|
[] |
behzad-azizan
| 3
|
jschneier/django-storages
|
django
| 692
|
Performance of the url() method
|
Hi there,
I'm wondering if there's a performance opportunity around the `url()` method used in templates, views, etc., to get the full URL of an Image stored in, say, Amazon S3. It seems to me that the storage engine will do much more than simply put together the file `name` and `MEDIA_URL`.
When I'm dealing with publicly-accessible items on S3, would I be better off simply putting those two values together instead of calling the `url()` method?
I'm using Django REST Framework and need to list 100+ images' URLs in one single request, which is currently heavy via the `url()` method — that was my motivator to try accessing these values directly.
What am I missing?
Many thanks!
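A minimal sketch of the shortcut described above (assuming publicly readable S3 objects and a `MEDIA_URL` that points at the bucket; the helper name is hypothetical):
```python
from django.conf import settings

def public_media_url(file_field) -> str:
    # file_field.name is the relative key stored in the database;
    # plain string concatenation skips the storage backend's url() machinery
    return f"{settings.MEDIA_URL}{file_field.name}"
```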
|
closed
|
2019-04-22T20:06:22Z
|
2024-08-12T18:12:06Z
|
https://github.com/jschneier/django-storages/issues/692
|
[
"s3boto"
] |
avorio
| 27
|
aimhubio/aim
|
tensorflow
| 2,742
|
Unhandled exception when attempting to retrieve commit.
|
## 🐛 Bug
Hi.
I'm getting an issue with the try block associated with this line (in the version I have installed):
https://github.com/aimhubio/aim/blob/416a599ef1c6e80ec08d046a24f49255aa604e98/aim/ext/utils.py#L59
(in the `main` branch):
https://github.com/aimhubio/aim/blob/f469779312ab0e01c0a9241d2b5cecf55b97910a/src/aim/ext/system_info/utils.py#L59
Essentially, if `'commit'` is not part of `results`, the `get()` method will return `None`, which doesn't have a `split()` method. This means that an `AttributeError` is raised, instead of a `ValueError`, which will not be caught and will make the program crash.
### To reproduce
It seems like `aim init` is not initializing a `git` repository and, hence, the corresponding subprocess command to retrieve `'commit'` fails.
I'm not sure what's causing a `git` repository not to be initialized in the first place.
However, I can track values and visualize them via the web interface (when using the fix below).
### Expected behavior
The following try block would handle the issue:
```python
try:
commit_hash, commit_timestamp, commit_author = results['commit'].split('/')
except KeyError:
commit_hash = commit_timestamp = commit_author = None
```
### Environment
- Aim Version: 3.17.4
- Python version: 3.10.8
- pip version: 23.1.2
- OS: Archlinux
|
closed
|
2023-05-12T13:18:07Z
|
2023-11-14T22:37:00Z
|
https://github.com/aimhubio/aim/issues/2742
|
[
"type / bug",
"help wanted",
"phase / shipped",
"area / SDK-storage"
] |
Nuno-Mota
| 4
|
facebookresearch/fairseq
|
pytorch
| 5,092
|
Wav2Vec-U 2.0: could not training with fp16
|
## ❓ Questions and Help
### Before asking:
1. search the issues.
2. search the docs.
<!-- If you still can't find what you need: -->
#### What is your question?
When training Wav2Vec-U 2.0 models following the official configuration, I tried training with fp16, but it leads to errors: the losses become NaN.
https://github.com/facebookresearch/fairseq/blob/3f6ba43f07a6e9e2acf957fc24e57251a7a3f55c/examples/wav2vec/unsupervised/config/gan/w2vu2.yaml#L3-L10
#### Code
No code needed.
<!-- Please paste a code snippet if your question requires it! -->
#### What have you tried?
Kind of stuck on debugging. Have no idea.
#### What's your environment?
- fairseq Version: '0.12.2'
- PyTorch Version: '1.13.0+cu117'
- OS: Linux avsu-ESC8000-G4 5.15.0-69-generic
- How you installed fairseq (`pip`, source): pip -e install
- Build command you used (if compiling from source):
- Python version: Python 3.8.13
- CUDA/cuDNN version: 11.7
- GPU models and configuration: 4 GeForce RTX 3090
- Any other relevant information:
|
open
|
2023-04-28T02:49:16Z
|
2025-01-28T10:28:42Z
|
https://github.com/facebookresearch/fairseq/issues/5092
|
[
"question",
"needs triage"
] |
xiabingquan
| 3
|
ckan/ckan
|
api
| 8,027
|
datatables_view does not search if value contains Latvian characters
|
## CKAN version 2.10.1
## Describe the bug
A clear and concise description of what the bug is.
Search returns results with search value: "datu sis"

Search returns no results with search value: "datu sist**ē**m"

### Steps to reproduce
Steps to reproduce the behavior:
1. dataset resource CSV
name, number
"Datu Sistēmas ","4003232323"
2. In datatables_view, try searching in field "name" for the value "Datu Sis": you will get a result
3. In datatables_view, try searching in field "name" for the value "Datu Sistēm": you will **not** get a result
### Expected behavior
When the search text includes Latvian characters like ē, ā, ž, ī, ņ, etc., search works and returns results.
|
open
|
2024-01-25T07:47:18Z
|
2024-01-25T14:13:38Z
|
https://github.com/ckan/ckan/issues/8027
|
[] |
gatiszeiris
| 4
|
tflearn/tflearn
|
tensorflow
| 523
|
Data preprocessing and augmentation causes warnings.
|
Using tensorflow version 0.12.0rc1, these warnings occur when using DataPreprocessing() and DataAugmentation().
```
WARNING:tensorflow:Error encountered when serializing data_augmentation.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'ImageAugmentation' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing summary_tags.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'dict' object has no attribute 'name'
WARNING:tensorflow:Error encountered when serializing data_preprocessing.
Type is unsupported, or the types of the items don't match field type in CollectionDef.
'DataPreprocessing' object has no attribute 'name'
```
|
closed
|
2016-12-16T00:28:40Z
|
2016-12-19T21:15:27Z
|
https://github.com/tflearn/tflearn/issues/523
|
[
"review needed"
] |
jadenyjw
| 2
|
recommenders-team/recommenders
|
deep-learning
| 1,519
|
No module named 'recommenders'[ASK]
|
### Description
I am running the Google Colab notebook for 00_quick_start/als_movielens.ipynb. I have done `!pip install recommenders`, and when I do `!pip show recommenders` it shows that it has been installed. But when I run the cell with `from recommenders.utils.timer import Timer`, I get an error message (attached). I am not sure what I am doing wrong. I am using Windows 10 and Python 3.7.11.

### Other Comments
|
closed
|
2021-09-04T01:02:39Z
|
2023-05-22T14:11:30Z
|
https://github.com/recommenders-team/recommenders/issues/1519
|
[
"help wanted"
] |
dagartga
| 9
|
microsoft/qlib
|
deep-learning
| 1,138
|
How to let code change take effect without re-installation?
|
Since we are importing the qlib package from its installed location, code changes never take effect unless we re-install it.
For developers, is there any way to let code changes take effect immediately?
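A common answer, standard pip behavior rather than anything qlib-specific: an editable install links the installed package to the source tree, so edits apply on the next import.
```bash
# from a clone of the qlib repository
pip install -e .
```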
|
closed
|
2022-06-19T10:20:51Z
|
2022-07-03T12:12:48Z
|
https://github.com/microsoft/qlib/issues/1138
|
[
"question"
] |
jingedawang
| 2
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 1,086
|
Generates resume, and then shuts itself down
|
Generates the resume, and then shuts itself down instead of applying to jobs.
|
open
|
2025-02-06T21:16:50Z
|
2025-03-03T04:05:56Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/1086
|
[
"bug"
] |
mrtknrt
| 4
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 19,794
|
LOG issue
|
### Bug description
As shown in the screenshot below, 710144 is my total number of samples, but 100 is the batch count. Since my batch size is 64, I expect the total to be 710144/64 = 11096, and I think 11096 should appear where 710144 is shown. Can someone explain this to me? It has me a little confused.
This is the way I log during the training step:
`self.log('train_loss', loss, on_step=True, rank_zero_only=True)`
Thanks in advance!
JJ
cc @awaelchli
|
closed
|
2024-04-22T04:10:38Z
|
2024-06-22T03:21:32Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19794
|
[
"question",
"progress bar: tqdm",
"ver: 2.1.x"
] |
jzhanghzau
| 2
|
NVIDIA/pix2pixHD
|
computer-vision
| 91
|
How do I increase the number of training iterations for the discriminator?
|
Hi,
I want to increase the training iterations for the discriminator. How do I change the number of D (or even G) iterations?
|
open
|
2018-12-11T11:32:53Z
|
2019-04-09T08:41:29Z
|
https://github.com/NVIDIA/pix2pixHD/issues/91
|
[] |
gvenkat21
| 5
|
streamlit/streamlit
|
python
| 9,984
|
`st.altair_chart` does not show with a good size if the title is too long
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
`st.altair_chart` fails to properly display an `alt.Chart` whose title exceeds the container width.
When using st.altair_chart, charts with titles that do not fit within the container width are rendered poorly.
In the example below, two `st.altair_chart` instances use the same `alt.Chart` object. The first chart has sufficient space to display the entire title, resulting in a clear presentation. In contrast, the second chart has limited space, causing the title to be truncated and the chart to appear distorted.
<img width="652" alt="image" src="https://github.com/user-attachments/assets/3f520e5e-7677-4d68-a6d0-c7c66ffb05cd">
If the title length is increased to affect the first chart as well, it will also render poorly, indicating that the issue is not related to the `use_container_width` parameter in `st.altair_chart`.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-9984)
```Python
# create a very basic alt.chart
import streamlit as st
import altair as alt
import pandas as pd
# Create a simple dataframe
df = pd.DataFrame({"x": [1, 2, 3, 4, 5], "y": [10, 20, 30, 40, 50]})
# Create a simple chart
chart = (
alt.Chart(
data=df,
title="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed nec purus euismod, ultricies nunc nec, ultricies nunc.",
)
.mark_line()
.encode(x="x", y="y")
)
# Render the chart
st.altair_chart(chart, use_container_width=True)
st.altair_chart(chart, use_container_width=False)
```
### Steps To Reproduce
Run the previous code
### Expected Behavior
The chart should be always displayed with the appropriate width.
If the title exceeds the available space, it should be truncated at the point where it reaches the container's width limit without affecting the chart.
### Current Behavior
If the `alt.Chart` title is too long, the title is truncated (which is good), but the chart is rendered with an incorrect width, leading to a poor visual presentation.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.2
- Python version: 3.10.11
- Operating System: Windows 11
- Browser: Chrome
### Additional Information
|
open
|
2024-12-09T16:37:58Z
|
2024-12-17T22:55:03Z
|
https://github.com/streamlit/streamlit/issues/9984
|
[
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.altair_chart"
] |
RubenCata
| 2
|
recommenders-team/recommenders
|
deep-learning
| 2,097
|
[FEATURE] Support Python 3.12
|
I know you just updated it to support Python 3.11, but Manjaro and Ubuntu now use Python 3.12 by default.
|
open
|
2024-05-14T11:47:00Z
|
2024-05-15T05:18:28Z
|
https://github.com/recommenders-team/recommenders/issues/2097
|
[
"enhancement"
] |
daviddavo
| 2
|
mars-project/mars
|
numpy
| 2,414
|
Add support for `label_binarize`.
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Is your feature request related to a problem? Please describe.**
Support for `mars.learn.preprocessing.label_binarize` can be added.
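For reference, the scikit-learn function this would mirror:
```python
from sklearn.preprocessing import label_binarize

# one-hot style binarization against an explicit class list
label_binarize([1, 6, 2], classes=[1, 2, 4, 6])
# array([[1, 0, 0, 0],
#        [0, 0, 0, 1],
#        [0, 1, 0, 0]])
```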
|
closed
|
2021-09-02T10:21:25Z
|
2021-09-02T15:24:02Z
|
https://github.com/mars-project/mars/issues/2414
|
[
"type: feature",
"mod: learn"
] |
qinxuye
| 0
|
Zeyi-Lin/HivisionIDPhotos
|
machine-learning
| 14
|
The demo space is inaccessible
|

Hello, the prompt keeps showing that the demo space cannot be accessed. Has it been shut down?
|
closed
|
2024-08-17T10:07:25Z
|
2024-08-30T07:10:57Z
|
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/14
|
[] |
hxj0316
| 1
|
allure-framework/allure-python
|
pytest
| 629
|
hooks.py throws exception because of missing listener function
|
https://github.com/allure-framework/allure-python/blob/master/allure-behave/src/hooks.py#L53
`start_feature` was recently removed from listener.py, but hooks.py still calls this function.
|
open
|
2021-10-06T11:36:13Z
|
2023-07-08T22:30:20Z
|
https://github.com/allure-framework/allure-python/issues/629
|
[
"bug",
"theme:behave"
] |
donders
| 1
|
vitalik/django-ninja
|
django
| 1,025
|
Apply decorators to a subset of routes/all paths under a sub-router
|
Recently, we found the need in Django to take an iterable of Django url configs and decorate the functions within them with something (in our case, observability functions to tag logs correctly).
With NinjaAPI, you can do this perfectly well with the .urls property on the top-level API. 🚀
However, if you want to decorate certain subroutes of that main API, it doesn't seem possible to do this.
For example:
```python
router_1 = Router()
router_2 = Router()
# Imagine the routers get a load of paths/functions registered to them...
api.add_router("prefix_1/", router_1)
api.add_router("prefix_2/", router_2)
urlpatterns = [
path("api/", api.urls)
]
```
Given that example, say I wanted to decorate all the functions under `router_1` with `decorator_1` and `router_2` with `decorator_2`. As far as I can tell, I am unable to do that. Instead, I have to decorate all the functions within those routers individually, which is obviously not ideal.
Could this be possible?
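For what it's worth, a hedged sketch of a workaround at the Django layer (relying on Django's documented URLPattern/URLResolver structure; `decorate_patterns` is a hypothetical helper):
```python
from django.urls import URLPattern, URLResolver

def decorate_patterns(patterns, decorator):
    # walk a url config recursively and wrap every view callback
    for p in patterns:
        if isinstance(p, URLResolver):
            decorate_patterns(p.url_patterns, decorator)
        elif isinstance(p, URLPattern):
            p.callback = decorator(p.callback)
```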
|
open
|
2023-12-22T09:40:00Z
|
2023-12-22T11:58:39Z
|
https://github.com/vitalik/django-ninja/issues/1025
|
[] |
JimNero009
| 2
|
ijl/orjson
|
numpy
| 23
|
Is Rust nightly still necessary, or can you use some released version of Rust?
|
I will need to package `orjson` for Conda-Forge if I decide to use it, and I'm not sure if random versions of rust will be available.
|
closed
|
2019-07-16T23:53:40Z
|
2021-03-10T16:10:10Z
|
https://github.com/ijl/orjson/issues/23
|
[] |
itamarst
| 8
|
litestar-org/litestar
|
pydantic
| 3,189
|
Docs: Add guard example to JWT docs
|
### Summary
Currently in the [JWT docs](https://docs.litestar.dev/latest/usage/security/jwt.html) there are a few references regarding how to access the 'User' object during and after being authenticated. These boil down to:
a) queried using the token
b) directly from the request (which can be easily assumed to be attached via retrieve_user_handler prior to going to the api path)
However there are other instances where user details need to be extracted from the `connection` object (such as in role-based guards).
```py
def admin_guard(connection: ASGIConnection, _: BaseRouteHandler) -> None:
    if not connection.user.is_admin:
        raise NotAuthorizedException()
```
A gap between the page on JWTs and the one on guards is that it's not made entirely clear _how_ the user gets attached to the connection. I would like to suggest adding an example guard to the JWT docs, with a comment explaining that the Auth object automatically attaches the user for you based on the object returned from `retrieve_user_handler`.
It also isn't made abundantly clear that the TypeVar provided to the Auth object directly corresponds to the retrieve_user_handler. For a little while, I was actually setting the TypeVar based on my login response and wondering why it wasn't working. A silly mistake in hindsight, but I believe a simple comment could have saved me from it!
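To complement the suggestion above, a minimal sketch of wiring such a guard into a handler via litestar's standard `guards` parameter (continuing the `admin_guard` snippet):
```py
from litestar import get

@get("/admin", guards=[admin_guard])
async def admin_dashboard() -> dict:
    return {"status": "ok"}
```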
|
open
|
2024-03-11T23:43:59Z
|
2025-03-20T15:54:28Z
|
https://github.com/litestar-org/litestar/issues/3189
|
[
"Documentation :books:",
"Help Wanted :sos:",
"Good First Issue"
] |
BVengo
| 2
|
xonsh/xonsh
|
data-science
| 4,796
|
xonsh-in-docker.py: outdated default python version -> parser does not compile
|
``xonsh-in-docker.py`` is broken due to python 3.6 being set as the default.
|
closed
|
2022-05-05T21:15:45Z
|
2024-06-29T22:29:12Z
|
https://github.com/xonsh/xonsh/issues/4796
|
[
"docker"
] |
dev2718
| 4
|
amdegroot/ssd.pytorch
|
computer-vision
| 16
|
How to train own datasets?
|
I have my own labeled dataset. How can I train on it?
|
open
|
2017-05-02T10:17:37Z
|
2020-11-30T17:39:46Z
|
https://github.com/amdegroot/ssd.pytorch/issues/16
|
[
"enhancement"
] |
zhanghan328
| 26
|
coqui-ai/TTS
|
python
| 3,966
|
[Feature request] Adjust output audio speed in YourTTS
|
Hello,
I have finetuned YourTTS on a number of new speakers, and the audio quality and pronunciation are good. However, the audio output is a bit fast. I have tried postprocessing like resampling etc., but it changes the pitch.
There is a speed feature available in xttsv2. Can we have a similar one for YourTTS or is there any workaround for this?
Some inputs would be highly appreciated.
Thanks
|
closed
|
2024-08-13T07:14:41Z
|
2025-01-26T08:48:38Z
|
https://github.com/coqui-ai/TTS/issues/3966
|
[
"wontfix",
"feature request"
] |
Rakshith12-pixel
| 9
|
ymcui/Chinese-BERT-wwm
|
tensorflow
| 172
|
Same configuration and code, but different results across repeated runs?
|
Hello, we use the BERT-wwm-ext (tf) model for NER and sentiment classification tasks. With identical configuration and a fixed random seed, two runs produce different results; this problem does not occur when we switch to Google's BERT or RoBERTa. Training on a subset of the data, we found the results are identical when the sample count is small (20 examples), but once the training set grows, the results differ. We don't know the cause. Thank you for your help.
|
closed
|
2021-03-09T14:34:34Z
|
2021-03-22T07:35:02Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/172
|
[
"stale"
] |
sirlb
| 9
|
onnx/onnx
|
tensorflow
| 6,308
|
Convert a model with custom pytorch CUDA kernel
|
# Ask a Question
### Question
I have implemented a custom PyTorch CUDA kernel (and CPU kernel) for rotated bounding boxes that is not supported in PyTorch (like this example: https://github.com/pytorch/extension-cpp).
For my use case, I need to convert the model to ONNX, but I'm struggling a lot. It seems like I have to implement custom operators for ONNX using ONNX ops only, but this is too much work. What is a better way of converting the model to ONNX? Any tips or advice will be very much appreciated.
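One commonly suggested route (a hedged sketch, not specific to this model; the op and domain names are hypothetical) is to export the custom op as a single opaque node in a custom domain via `torch.onnx.register_custom_op_symbolic`, then bind a matching kernel in the runtime:
```python
import torch.onnx

def rotated_op_symbolic(g, boxes, scores):
    # emit one "mydomain::RotatedOp" node instead of decomposing into ONNX ops
    return g.op("mydomain::RotatedOp", boxes, scores)

torch.onnx.register_custom_op_symbolic(
    "mynamespace::rotated_op", rotated_op_symbolic, opset_version=16
)
```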
|
open
|
2024-08-20T13:29:42Z
|
2024-08-20T16:18:10Z
|
https://github.com/onnx/onnx/issues/6308
|
[
"question",
"topic: runtime"
] |
davidgill97
| 1
|
keras-team/keras
|
machine-learning
| 20,985
|
Output discrepancy between keras.predict and tf_saved_model
|
Hi,
I am exporting a model like this:
```python
import keras
export_archive = keras.export.ExportArchive()
export_archive.track(model_tra)
export_archive.add_endpoint(
name="serve",
fn=lambda x: model_tra.call(x, training=False),
input_signature=[
keras.InputSpec(shape=(None, 3, 320, 320, 1), dtype="float64")
],
)
export_archive.write_out(
"/home/edge7/Desktop/projects/ing_edurso/wharfreenet-inference/models/la_ao"
)
```
Then I load it like:
`tf.saved_model.load(os.path.join(MODEL_DIRECTORY, "la_ao"))`
and I use it like:
`model.serve(np.expand_dims(np.array(frame), axis=-1))`
I get slightly different results than just using
`model.predict` or `model.predict_on_batch`.
Am I missing something silly in the conversion?
|
open
|
2025-03-05T09:30:06Z
|
2025-03-20T05:07:24Z
|
https://github.com/keras-team/keras/issues/20985
|
[
"type:Bug"
] |
edge7
| 4
|
sammchardy/python-binance
|
api
| 852
|
Get withdraw history failing unexpectedly
|
**Describe the bug**
I am trying to access my withdrawal and deposit histories and receiving the following error:
```BinanceAPIException Traceback (most recent call last)
<ipython-input-4-e52a5e13df7b> in <module>
----> 1 client.get_withdraw_history()
~/personal/binance/.venv/lib/python3.8/site-packages/binance/client.py in get_withdraw_history(self, **params)
2600
2601 """
-> 2602 return self._request_margin_api('get', 'capital/withdraw/history', True, data=params)
2603
2604 def get_withdraw_history_id(self, withdraw_id, **params):
~/personal/binance/.venv/lib/python3.8/site-packages/binance/client.py in _request_margin_api(self, method, path, signed, **kwargs)
356 uri = self._create_margin_api_uri(path)
357
--> 358 return self._request(method, uri, signed, **kwargs)
359
360 def _request_website(self, method, path, signed=False, **kwargs) -> Dict:
~/personal/binance/.venv/lib/python3.8/site-packages/binance/client.py in _request(self, method, uri, signed, force_params, **kwargs)
307
308 self.response = getattr(self.session, method)(uri, **kwargs)
--> 309 return self._handle_response(self.response)
310
311 @staticmethod
~/personal/binance/.venv/lib/python3.8/site-packages/binance/client.py in _handle_response(response)
316 """
317 if not (200 <= response.status_code < 300):
--> 318 raise BinanceAPIException(response, response.status_code, response.text)
319 try:
320 return response.json()
BinanceAPIException: APIError(code=-1100): Illegal characters found in a parameter.
```
This seems like a weird error to me because I am not passing any parameters. I also am able to hit other endpoints with my client credentials.
**To Reproduce**
I am simply following the example and running:
`client.get_withdraw_history()`
**Expected behavior**
I expect to receive a json containing withdrawal information.
**Environment (please complete the following information):**
- Python version:3.8
- Virtual Env: virtualenv
- OS: Ubuntu
- python-binance version: 1.0.10
|
open
|
2021-05-13T19:18:04Z
|
2021-12-23T02:07:21Z
|
https://github.com/sammchardy/python-binance/issues/852
|
[] |
ngriffiths13
| 2
|
2noise/ChatTTS
|
python
| 911
|
E:\\Chat TTS\\output_audio\\segment\\合并\\segment_合并.wav
|
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening 'E:\\Chat TTS\\output_audio\\segment\\合并\\segment_合并.wav': System error
Could anyone tell me what is causing this?
|
closed
|
2025-03-04T07:43:22Z
|
2025-03-12T13:55:46Z
|
https://github.com/2noise/ChatTTS/issues/911
|
[
"wontfix"
] |
paul2264
| 1
|
opengeos/leafmap
|
streamlit
| 483
|
Choropleth map is not working in windows
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.22.0
- Python version: 3.9.13
- Operating System: Windows 11 Enterprise
### Description
I'm trying to do the same example as https://colab.research.google.com/github/opengeos/leafmap/blob/master/examples/notebooks/53_choropleth.ipynb#scrollTo=iyOJ30pLR_ur
However, it shows the TypeError below.
### What I Did
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
in
1 m = leafmap.Map()
----> 2 m.add_data(
3 data, column='POP_EST', scheme='Quantiles', cmap='Blues', legend_title='Population'
4 )
5 m
venv\lib\site-packages\leafmap\leafmap.py in add_data(self, data, column, colors, labels, cmap, scheme, k, add_legend, legend_title, legend_position, legend_kwds, classification_kwds, layer_name, style, hover_style, style_callback, info_mode, encoding, **kwargs)
3687 style_callback = lambda feat: {"fillColor": feat["properties"]["color"]}
3688
-> 3689 self.add_gdf(
3690 gdf,
3691 layer_name=layer_name,
packages\leafmap\leafmap.py in add_gdf(self, gdf, layer_name, style, hover_style, style_callback, fill_colors, info_mode, zoom_to_layer, encoding, **kwargs)
2476 """
2477 for col in gdf.columns:
-> 2478 if gdf[col].dtype in ["datetime64[ns]", "datetime64[ns, UTC]"]:
2479 gdf[col] = gdf[col].astype(str)
2480
TypeError: Invalid datetime unit in metadata string "[ns, UTC]"
```
|
closed
|
2023-06-23T14:16:34Z
|
2023-06-29T19:44:46Z
|
https://github.com/opengeos/leafmap/issues/483
|
[
"bug"
] |
jxlyn
| 4
|
slackapi/python-slack-sdk
|
asyncio
| 736
|
client_msg_id missing from messages with attachments
|
### Description
For all messages without attachments we see client_msg_id as a root element in the message event data. For all messages with attachments, we do not see any element client_msg_id anywhere. Same for RTMClient or EventsAPI.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [x] bug
- [ ] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements (place an `x` in each of the `[ ]`)
* [x] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slackclient/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [x] I've searched for any related issues and avoided creating a duplicate issue.
---
### Bug Report
Filling out the following details about bugs will help us solve your issue sooner.
#### Reproducible in:
slackclient version: 2.6.1
python version: ~~2.6.9~~ 3.6.9
OS version(s): MacOS, Ubuntu 16.04
#### Steps to reproduce:
1. Send message with an image attachment
2. Observe message event data.
3. Attempting to extract client_msg_id as with messages without attachments will yield a KeyError
#### Expected result:
client_msg_id should be present no matter if the message has an attachment or not
#### Actual result:
client_msg_id is missing
#### Attachments:
Logs, screenshots, screencast, sample project, funny gif, etc.
|
closed
|
2020-06-26T04:22:12Z
|
2021-02-24T01:46:49Z
|
https://github.com/slackapi/python-slack-sdk/issues/736
|
[
"Version: 2x",
"rtm-client",
"web-client",
"server-side-issue"
] |
pbrackin
| 5
|
plotly/dash
|
data-visualization
| 2,315
|
editing dash-renderer/build/dash_renderer.dev.js has no effect during runtime
|
**Describe your context**
linux, installed dash and dash-cytoscape with pip
- replace the result of `pip list | grep dash` below
```
dash 2.7.0
dash-core-components 2.0.0
dash-cytoscape 0.3.0 /home/nmz787/git/dash-cytoscape
dash-dangerously-set-inner-html 0.0.2
dash-flow-example 0.0.5
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Windows 10
- Browser - Chrome
- Version - Version 107.0.5304.106 (Official Build) (64-bit)
**Describe the bug**
when I edit the `site-packages` copy of `dash/dash-renderer/build/dash_renderer.dev.js` and hard-refresh my browser, my code changes do not show up.
What is the correct procedure for testing hot-patches like this? If I need to clone this repo and build a local copy after my changes/alterations, where are the build instructions?
**Expected behavior**
I edit the JS file, which is presumably being served when Dash is run, and my browser runs the changed code.
|
closed
|
2022-11-16T23:15:18Z
|
2023-03-10T21:14:45Z
|
https://github.com/plotly/dash/issues/2315
|
[] |
nmz787-intel
| 2
|
jina-ai/serve
|
machine-learning
| 6,100
|
Suggestions For Improving Testing
|
**Describe your proposal/problem**
<!-- A clear and concise description of what the proposal is. -->
I have some suggestions that could improve the development experience, specifically around testing. Forgive me if you've already considered these, but I thought this might be useful feedback.
1. ### Sharing Deployments Across Tests via Fixture Scopes
I realize some tests are going to need custom deployments, but many of them can use a shared deployment/flow which is launched once at the beginning of the session. This means we only have to wait for the app to start up once.
```python
@pytest.fixture(scope="session")
def hub_client(session_mocker):
with Deployment(
uses=MyExecutor, protocol="http",
) as dep:
yield dep
```
2. ### Run CI Locally
We can use Tox, Docker Compose, or some orchestration tool to allow us to reproduce CI locally. (If we already have this, I missed it and maybe I can update the docs to make it more clear)
3. ### Use pytest-xdist to run tests asynchronously
I believe there have been some efforts to do this, and it might require some refactoring, but I saw a huge reduction in runtime when using multiple workers.
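For point 3, the standard invocation is a single flag from pytest-xdist:
```bash
# spread the test suite across all available CPU cores
pytest -n auto
```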
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
|
closed
|
2023-10-26T14:22:41Z
|
2023-10-30T17:17:07Z
|
https://github.com/jina-ai/serve/issues/6100
|
[] |
NarekA
| 2
|
pydantic/pydantic-ai
|
pydantic
| 671
|
TypeError: Object of type ValueError is not JSON serializable
|
### Description
Sorry if this is already reported or if there is a solution I'm missing.
I've encountered a sporadic issue while using a Pydantic model with a custom validator (though I guess that does not have to be related). About **2% of the runs** result in the following error:
```
Exception: TypeError: Object of type ValueError is not JSON serializable
```
I haven't yet found a reliable way to reproduce the issue; below is a mocked-up code snippet and a stack trace to demonstrate the scenario.
---
### Code Example
```python
from typing import Any
import pydantic_ai
from pydantic import BaseModel, model_validator
VALID_TYPES = {"test": ["testing"]}
class TypeModel(BaseModel):
type: str
@model_validator(mode="before")
def validate_type(cls, values: dict[str, Any]):
type_ = values.get("type")
if type_ not in VALID_TYPES:
raise ValueError(
f"Invalid type '{type_}'. Valid types are: {list(VALID_TYPES.keys())}."
)
return values
agent = pydantic_ai.Agent(model="openai:gpt-4o", result_type=TypeModel)
agent.run_sync("toast")
```
---
### Stack Trace
```
response = self.agent.run_sync(prompt)
File "/home/site/wwwroot/.python_packages/lib/site-packages/pydantic_ai/agent.py", line 220, in run_sync
return asyncio.run(self.run(user_prompt, message_history=message_history, model=model, deps=deps))
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/site/wwwroot/.python_packages/lib/site-packages/pydantic_ai/agent.py", line 172, in run
model_response, request_cost = await agent_model.request(messages)
File "/home/site/wwwroot/.python_packages/lib/site-packages/pydantic_ai/models/openai.py", line 125, in request
response = await self._completions_create(messages, False)
File "/home/site/wwwroot/.python_packages/lib/site-packages/pydantic_ai/models/openai.py", line 155, in _completions_create
openai_messages = [self._map_message(m) for m in messages]
File "/home/site/wwwroot/.python_packages/lib/site-packages/pydantic_ai/models/openai.py", line 155, in <listcomp>
openai_messages = [self._map_message(m) for m in messages]
File "/home/site/wwwroot/.python_packages/lib/site-packages/pydantic_ai/models/openai.py", line 236, in _map_message
content=message.model_response(),
File "/home/site/wwwroot/.python_packages/lib/site-packages/pydantic_ai/messages.py", line 121, in model_response
description = f'{len(self.content)} validation errors: {json.dumps(self.content, indent=2)}'
File "/usr/local/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/local/lib/python3.10/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/usr/local/lib/python3.10/json/encoder.py", line 429, in _iterencode
yield from _iterencode_list(o, _current_indent_level)
File "/usr/local/lib/python3.10/json/encoder.py", line 325, in _iterencode_list
yield from chunks
File "/usr/local/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/local/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/usr/local/lib/python3.10/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/usr/local/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
```
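For illustration, a hedged sketch of one way to make such a payload serializable (not the library's actual fix): `json.dumps` accepts a `default` hook, so exception objects can be stringified instead of raising.
```python
import json

content = [{"error": ValueError("Invalid type 'toast'")}]
# default=str converts anything json can't encode (like ValueError) to its str() form
print(json.dumps(content, indent=2, default=str))
```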
|
open
|
2025-01-13T15:37:08Z
|
2025-01-23T08:00:48Z
|
https://github.com/pydantic/pydantic-ai/issues/671
|
[
"bug"
] |
HampB
| 2
|
zama-ai/concrete-ml
|
scikit-learn
| 85
|
WARNING: The shape inference of onnx.brevitas::Quant type is missing?
|
<img width="1494" alt="image" src="https://github.com/zama-ai/concrete-ml/assets/127387074/66030dd8-d210-4df5-9cb4-b180846c9a8f">
|
closed
|
2023-06-11T11:15:23Z
|
2023-06-12T10:09:18Z
|
https://github.com/zama-ai/concrete-ml/issues/85
|
[] |
maxwellgodv
| 1
|
PrefectHQ/prefect
|
automation
| 17,482
|
Flow Run continues to run after cancelled
|
### Bug summary
A Flow Run in the running state can be cancelled (via API call to Prefect) and the Flow Run will continue to run, including new Tasks which start after the cancellation time.
Similar issues:
- #16939 this issue mentions something about failing tasks (details are unclear, but in our case there were no failed tasks). In their case the flow run is cancelled via the UI, so possibly a similar root cause?
- #16001 this is for runs stuck in the Cancelling state. In contrast, we saw a run successfully enter the Cancelled state but then continue to run.
### Version info
```Text
Version: 3.1.3
API version: 0.8.4
Python version: 3.10.12
Git commit: 39b6028c
Built: Tue, Nov 19, 2024 3:25 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: cloud
Pydantic version: 2.9.2
Integrations:
prefect-azure: 0.4.2
```
### Additional context
Some additional details:
- In the Prefect UI and logs, we can see a `prefect.flow-run.Cancelled` log at the correct time
- The Start and End time in the UI show the true start of the run and the End time is the cancellation time. Duration is based on start to cancel time.
- New tasks Start and End after the cancellation time. Visible in logs and in UI
I will provide screenshots as possible (all from the same Flow Run):
Full run time visualized:

Start and End agree with cancelled log but not above screenshot:



The logs for the final Tasks of this Flow Run, which were scheduled to start, started, and ended after the cancel time and entered a completed state:

#### Reproduction steps
I have not identified a reliable way to reproduce this. Any insight appreciated.
|
open
|
2025-03-14T21:18:13Z
|
2025-03-17T19:46:37Z
|
https://github.com/PrefectHQ/prefect/issues/17482
|
[
"bug"
] |
gscholtes-relativity
| 1
|
gunthercox/ChatterBot
|
machine-learning
| 2,353
|
healthcare chat bot
|
closed
|
2024-03-04T09:19:33Z
|
2025-01-04T19:10:16Z
|
https://github.com/gunthercox/ChatterBot/issues/2353
|
[] |
Stalin1419
| 2
|
|
amisadmin/fastapi-amis-admin
|
sqlalchemy
| 33
|
The URL (link) component example does not work correctly
|
I copied the following code from the documentation:
```python
# adminsite.py
from fastapi_amis_admin.admin import admin
from fastapi_amis_admin.amis import PageSchema
@site.register_admin
class GitHubLinkAdmin(admin.LinkAdmin):
    # Set the page menu info via the page_schema class attribute;
    # for supported PageSchema component attributes, see: https://baidu.gitee.io/amis/zh-CN/components/app
    page_schema = PageSchema(label='AmisLinkAdmin', icon='fa fa-github')
    # Set the link to open
    link = 'https://github.com/amisadmin/fastapi_amis_admin'
```
This adds an option to the sidebar, but clicking it does not open the hyperlink in a new page; instead, it opens a `frame` in the current page, and due to the same-origin policy, `github.com refused our connection request.`
The rest of the code is as follows:
```python
from fastapi import FastAPI
from fastapi_amis_admin.admin.settings import Settings
from fastapi_amis_admin.admin.site import AdminSite
from adminsite import site
# Create the FastAPI application
app = FastAPI()
# Mount the admin site
site.mount_app(app)
if __name__ == '__main__':
import uvicorn
uvicorn.run('main:app', debug=True, reload=True, workers=1)
```
- fastapi_amis_admin 0.2.0
- python 3.10.5
|
closed
|
2022-07-15T08:48:50Z
|
2022-07-15T08:52:20Z
|
https://github.com/amisadmin/fastapi-amis-admin/issues/33
|
[] |
myuanz
| 0
|
igorbenav/fastcrud
|
pydantic
| 75
|
Multiple nesting for joins.
|
**Is your feature request related to a problem? Please describe.**
I'm wondering if it's currently possible to have multiple layers of nesting in joined objects.
**Describe the solution you'd like**
Ability to return multi-nested objects like
```
{
'id': 1,
'name': 'Jakub',
'item': {
'id': 1,
'price': 20,
'shop': {
'id': 1,
'name': 'Adidas'
}
}
}
```
|
open
|
2024-05-06T19:55:08Z
|
2024-05-16T01:33:37Z
|
https://github.com/igorbenav/fastcrud/issues/75
|
[
"enhancement",
"FastCRUD Methods"
] |
JakNowy
| 1
|
ultralytics/ultralytics
|
pytorch
| 19,680
|
how can I run model.val() AND get the individual predictions in the results?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm posting this mostly to get the AI generated response.
This can be nicely used to validate a dataset and get graphs
`results = model.val(data="dataset/data.yaml", split="test")`
However, `results` does not contain any reference to the obb or individual predictions.
### Additional
_No response_
|
closed
|
2025-03-13T15:10:35Z
|
2025-03-13T18:16:49Z
|
https://github.com/ultralytics/ultralytics/issues/19680
|
[
"question"
] |
luizwritescode
| 2
|
gradio-app/gradio
|
data-visualization
| 10,663
|
Search functionality doesn't work for gr.Dataframe
|
### Describe the bug
I noticed another bug in gr.Dataframe: the search & filter functionality doesn't work for text cells.
<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/8f8f180d-451f-4b1c-842d-525cefee8f37" />
<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/89a99660-7b4a-4af3-8cb4-f3e105c37456" />
<img width="1920" alt="Image" src="https://github.com/user-attachments/assets/c3292e86-e5d1-4c7b-8b32-f2ee4f0d66eb" />
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import pandas as pd
import gradio as gr
# Creating a sample dataframe
def run():
df = pd.DataFrame({
"A" : ["Apparel & Accessories", "Home & Garden", "Health & Beauty", "Cameras & Optics", "Apparel & Accessories"],
"B" : [6, 2, 54, 3, 2],
"C" : [3, 20, 7, 3, 8],
"D" : [2, 3, 6, 2, 6],
"E" : [-1, 45, 64, 32, 23]
})
df = df.style.map(color_num, subset=["E"])
return df
# Function to apply text color
def color_num(value: float) -> str:
color = "red" if value >= 0 else "green"
color_style = "color: {}".format(color)
return color_style
# Displaying the styled dataframe in Gradio
with gr.Blocks() as demo:
gr.Textbox("{}".format(gr.__version__))
a = gr.DataFrame(show_search="search")
b = gr.Button("run")
b.click(run,outputs=a)
demo.launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
gradio = 5.17.1
```
### Severity
Blocking usage of gradio
|
closed
|
2025-02-24T02:54:23Z
|
2025-03-10T18:14:59Z
|
https://github.com/gradio-app/gradio/issues/10663
|
[
"bug",
"💾 Dataframe"
] |
jamie0725
| 4
|
pytorch/pytorch
|
python
| 149,281
|
"Significant" Numerical differences for different tensor shapes in CUDA
|
### 🐛 Describe the bug
I get very different results when I do the same calculation on CUDA depending on tensor shape / operation order. Below is sample code to reproduce:
```python
import torch
import torch.nn as nn
sequence_size = 32
env_size = 64
input_dim = 39
hidden_dim = 64
output_dim = 6
device = "cuda"
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
batch_input = torch.randn((sequence_size, env_size, input_dim), dtype=torch.float32, device=device)
model = nn.Linear(in_features=input_dim, out_features=output_dim, device=device)
batch_output = model(batch_input)
print("big batch together:", batch_output[0,0])
print("smaller batch:", model(batch_input[0])[0])
```
The output is

The biggest difference here is around 3e-4, which is much larger than 1e-6 (the float precision).
In my application I have seen these differences go up to 5e-3, which seems much bigger than in previous issues and affects the overall performance of my algorithm.
Everything works fine when I do the computation on CPU; no big difference (1e-4~1e-3) is noticed.
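One likely explanation (an assumption on my part, not confirmed above): different batch shapes make cuBLAS pick kernels with different reduction orders, and float32 addition is not associative. A minimal CPU-only sketch of that effect:
```python
import numpy as np

x = np.random.rand(10_000).astype(np.float32)

s1 = np.float32(0)
for v in x:
    s1 += v  # strictly sequential accumulation
s2 = x.reshape(100, 100).sum(axis=0).sum()  # blocked, tree-like accumulation

# Both compute the same mathematical sum, yet the results usually differ,
# because float32 addition is not associative.
print(abs(s1 - s2))
```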
### Versions
PyTorch version: 1.11.0
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.35
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to:
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti Laptop GPU
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 20
On-line CPU(s) list: 0-19
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900H
CPU family: 6
Model: 154
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 1
Stepping: 3
CPU max MHz: 5000.0000
CPU min MHz: 400.0000
BogoMIPS: 5836.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 544 KiB (14 instances)
L1i cache: 704 KiB (14 instances)
L2 cache: 11.5 MiB (8 instances)
L3 cache: 24 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==1.11.0
[pip3] torchvision==0.12.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.24.3 py38h14f4228_0
[conda] numpy-base 1.24.3 py38h31eccc5_0
[conda] pytorch 1.11.0 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.12.0 py38_cu113 pytorch
|
closed
|
2025-03-16T20:22:24Z
|
2025-03-17T15:45:57Z
|
https://github.com/pytorch/pytorch/issues/149281
|
[
"module: numerical-stability",
"triaged"
] |
HaoxiangYou
| 1
|
huggingface/transformers
|
machine-learning
| 36,816
|
Gemma3 can't be fine-tuned on multi-image examples
|
### System Info
There are more details in here: https://github.com/google-deepmind/gemma/issues/193
But shortly; it seems like multi-image training is not implemented yet
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following code works:
```python
import io
from typing import Any, cast

import requests
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from transformers import TrainingArguments
from trl import SFTTrainer, SFTConfig
from peft import LoraConfig
from datasets import IterableDataset, Features
import datasets
from PIL import Image
import numpy as np

HF_TOKEN = "..."

def load_image(url):
    response = requests.get(url)
    image = Image.open(io.BytesIO(response.content))
    return image

def image_from_bytes(image_bytes):
    return Image.open(io.BytesIO(image_bytes))

def main():
    model_id = "google/gemma-3-4b-it"
    model = Gemma3ForConditionalGeneration.from_pretrained(
        model_id, device_map="auto", token=HF_TOKEN
    )
    model.config.use_cache = False  # Disable caching for training
    processor = AutoProcessor.from_pretrained(model_id, padding_side="right", token=HF_TOKEN)
    processor.tokenizer.pad_token = processor.tokenizer.eos_token  # Use eos token as pad token
    processor.tokenizer.padding_side = "right"

    def train_iterable_gen():
        N_IMAGES = 1
        image = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((896, 896))
        images = np.array([image] * N_IMAGES)
        print("IMAGES SHAPE", images.shape)
        yield {
            "images": images,
            "messages": [
                {
                    "role": "user",
                    "content": [{"type": "image"} for _ in range(images.shape[0])]
                },
                {
                    "role": "assistant",
                    "content": [{"type": "text", "text": "duck"}]
                }
            ]
        }

    train_ds = IterableDataset.from_generator(
        train_iterable_gen,
        features=Features({
            'images': [datasets.Image(mode=None, decode=True, id=None)],
            'messages': [{'content': [{'text': datasets.Value(dtype='string', id=None), 'type': datasets.Value(dtype='string', id=None)}], 'role': datasets.Value(dtype='string', id=None)}]
        })
    )

    def collate_fn(examples):
        # Get the texts and images, and apply the chat template
        texts = [processor.apply_chat_template(example["messages"], tokenize=False) for example in examples]
        images = [example["images"] for example in examples]
        # Tokenize the texts and process the images
        batch = processor(text=texts, images=images, return_tensors="pt", padding=True)
        print("collate_fn pixel_values", batch["pixel_values"].shape)
        print("collate_fn input_ids", batch["input_ids"].shape)
        # The labels are the input_ids, and we mask the padding tokens in the loss computation
        labels = batch["input_ids"].clone()
        labels[labels == processor.tokenizer.pad_token_id] = -100
        labels[labels == processor.image_token_id] = -100
        batch["labels"] = labels
        return batch

    # Set up LoRA configuration for causal language modeling
    lora_config = LoraConfig(
        r=8,
        lora_alpha=16,
        lora_dropout=0.1,
        target_modules=["q_proj", "v_proj"],
        bias="none",
        task_type="CAUSAL_LM"
    )

    # Define training arguments
    training_args = SFTConfig(
        output_dir="./results",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        learning_rate=2e-4,
        logging_steps=1,
        save_steps=25,
        report_to="tensorboard",
        group_by_length=False,
        remove_unused_columns=False,
        dataset_kwargs={"skip_prepare_dataset": True},
        gradient_checkpointing_kwargs=dict(use_reentrant=False),
        max_steps=1
    )

    # Create the SFTTrainer with LoRA parameters
    trainer = SFTTrainer(
        model=model,
        train_dataset=cast(Any, train_ds),
        peft_config=lora_config,
        args=training_args,
        data_collator=collate_fn,
        processing_class=processor.tokenizer,
    )

    print("Training model...")
    trainer.train()
    print("Training complete.")

if __name__ == "__main__":
    main()
```
but if I increase `N_IMAGES` to 2 it crashes with the following error:
```
File "/tmp/ray/session_2025-03-18_01-47-01_879621_311/runtime_resources/working_dir_files/_ray_pkg_95509c95a64411ba/.venv/lib/python3.12/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 1333, in forward
raise ValueError(
ValueError: Number of images does not match number of special image tokens in the input text. Got 512 image tokens in the text but 256 tokens from image embeddings.
```
### Expected behavior
I'd expect either:
1. The preprocessor to return pixel_values in the shape `[batch, n_images, c, w, h]`, and that the model can handle that
2. Or, something else needs fixing because if I put debug statements in the transformers library, it looks like it's only taking the first image in the batch of images
In the first case:
- [get_image_features](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma3/modeling_gemma3.py#L1216) doesn't seem to handle multi-image batches
- [the preprocessor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma3/image_processing_gemma3.py#L389) seems to just output a single list of images
|
open
|
2025-03-19T09:19:32Z
|
2025-03-20T09:12:45Z
|
https://github.com/huggingface/transformers/issues/36816
|
[
"bug"
] |
FredrikNoren
| 8
|
tensorlayer/TensorLayer
|
tensorflow
| 1,116
|
Possible Arbitrary code execution bug.
|
### New Issue Checklist
- [x] I have read the [Contribution Guidelines](https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md)
- [x] I searched for [existing GitHub issues](https://github.com/tensorlayer/tensorlayer/issues)
### Issue Description
Possibility of arbitrary code execution in `tensorlayer`.
### Issue problem and fix explained here (https://github.com/418sec/tensorlayer/pull/1)
|
open
|
2021-01-31T07:56:17Z
|
2021-02-19T08:00:01Z
|
https://github.com/tensorlayer/TensorLayer/issues/1116
|
[] |
d3m0n-r00t
| 5
|
unit8co/darts
|
data-science
| 2,291
|
is there a guide for Explainability?
|
Can't figure out how to use this functionality from reading the docs. I have seen and used the guide for the TFTModel (which worked great).
|
closed
|
2024-03-19T12:18:54Z
|
2024-03-24T10:45:08Z
|
https://github.com/unit8co/darts/issues/2291
|
[] |
Allena101
| 2
|
rthalley/dnspython
|
asyncio
| 1,172
|
httpx 0.28+ deprecationwarning when passing str to verify for dns.query.https(), and mypy complaint when passing ssl.SSLContext
|
**Describe the bug**
Hello :)
With dnspython 2.7.0 and since httpx 0.28 I am getting this deprecation warning https://github.com/encode/httpx/blob/master/httpx/_config.py#L45-L50 when passing a string to the `verify` arg of `dns.query.https()`:
```deprecationwarning: `verify=<str>` is deprecated. Use `verify=ssl.create_default_context(cafile=...)` or `verify=ssl.create_default_context(capath=...)` instead```
Passing a `ssl.SSLContext()` instance instead of a string works well and also silences the warning, but then mypy complains because the signature of `dns.query.https()` only allows `bool` and `str` for the `verify` argument:
```
src/dns_exporter/collector.py:463: error: Argument "verify" to "https" has incompatible type "Union[SSLContext, bool]"; expected "Union[bool, str]" [arg-type]
```
I propose a small patch to permit `ssl.SSLContext` as well as `bool` and `str` for the `verify` argument for `dns.query.https()`, see https://github.com/rthalley/dnspython/pull/1173
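A minimal sketch of the intended usage (the `ssl` calls are standard library; with current dnspython this already works at runtime, the patch only fixes the type hint):
```python
import ssl

import dns.message
import dns.query
import dns.rdatatype

ctx = ssl.create_default_context(capath="/etc/ssl/certs")
q = dns.message.make_query("dns.google.", dns.rdatatype.A)
# Passing an SSLContext avoids the httpx 0.28+ DeprecationWarning,
# and with the proposed patch it also satisfies mypy.
response = dns.query.https(q, "8.8.8.8", verify=ctx)
```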
**To Reproduce**
I discovered this because my tox runs with all warnings as errors but I think `-Wd` can trigger the warnings with any https query:
```
(venv) user@privat-dev:~/devel/dns_exporter/src$ python -Wd
Python 3.10.13 (main, Nov 15 2023, 13:09:29) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns.name
>>> import dns.query
>>> import dns.message
>>> qname = dns.name.from_text('dns.google.')
>>> q = dns.message.make_query(qname, dns.rdatatype.A)
>>> dns.query.https(q, '8.8.8.8', verify="/etc/ssl/certs")
/home/user/devel/dns_exporter/venv/lib/python3.10/site-packages/httpx/_config.py:51: DeprecationWarning: `verify=<str>` is deprecated. Use `verify=ssl.create_default_context(cafile=...)` or `verify=ssl.create_default_context(capath=...)` instead.
warnings.warn(message, DeprecationWarning)
<DNS message, ID 50759>
>>>
(venv) user@privat-dev:~/devel/dns_exporter/src$ pip freeze | grep -E "dnspython|httpx"
dnspython==2.7.0
httpx==0.28.1
(venv) user@privat-dev:~/devel/dns_exporter/src$
```
**Context (please complete the following information):**
- dnspython 2.7.0
- httpx 0.28+
- Python 3.10.13
- OS: debian
|
open
|
2025-01-12T18:32:34Z
|
2025-01-12T22:04:03Z
|
https://github.com/rthalley/dnspython/issues/1172
|
[] |
tykling
| 0
|
sammchardy/python-binance
|
api
| 1,494
|
Add Margin Borrow-repay endpoints
|
Add the margin Borrow/Repay endpoints.
Reference: https://developers.binance.com/docs/margin_trading/borrow-and-repay
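A rough sketch of what the wrapper could look like on `Client` (the endpoint path is from the linked docs; the method name and internals are hypothetical):
```python
def borrow_repay(self, asset, amount, type, isIsolated="FALSE", symbol=None, **params):
    """POST /sapi/v1/margin/borrow-repay (SIGNED); `type` is BORROW or REPAY."""
    params.update({"asset": asset, "amount": amount, "type": type, "isIsolated": isIsolated})
    if symbol:
        params["symbol"] = symbol
    return self._request_margin_api("post", "margin/borrow-repay", signed=True, data=params)
```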
|
closed
|
2024-12-02T01:10:13Z
|
2024-12-05T14:28:43Z
|
https://github.com/sammchardy/python-binance/issues/1494
|
[
"enhancement"
] |
pcriadoperez
| 1
|
pydata/xarray
|
numpy
| 9,325
|
Inconsistent `run_spec` during zarr dataset opening by multiple bound methods
|
### What happened?
A dask distributed client lets multiple agents read data from a zarr store:
```python
for agent in agents:
    client.submit(agent.read_data)
```
If the agents load the same data, then there is a run_spec inconsistency.
I am experiencing deadlocks with many agents and tasks in an application running 24/7, and as mentioned by the following warning message referring to [this issue](https://github.com/dask/dask/issues/9888), I suspect that this `run_spec` inconsistency is the cause of the deadlocks:
```
WARNING | distributed.scheduler | Detected different `run_spec` for key
'original-open_dataset-b-5a5934048f8680ef6d38726770b86391'
between two consecutive calls to `update_graph`.
This can cause failures and deadlocks down the line. Please ensure unique key names.
If you are using a standard dask collections, consider releasing all the data before resubmitting another computation.
More details and help can be found at https://github.com/dask/dask/issues/9888.
Debugging information
---------------------
old task state: memory
old run_spec: (<function execute_task at 0x000002C3869591C0>, (ImplicitToExplicitIndexingAdapter(array=CopyOnWriteArray(array=LazilyIndexedArray(array=<xarray.backends.zarr.ZarrArrayWrapper object at 0x000002C3886C7900>, key=BasicIndexer((slice(None, None, None),))))),), {})
new run_spec: (<function execute_task at 0x000002C3869591C0>, (ImplicitToExplicitIndexingAdapter(array=CopyOnWriteArray(array=LazilyIndexedArray(array=<xarray.backends.zarr.ZarrArrayWrapper object at 0x000002C3886D9F00>, key=BasicIndexer((slice(None, None, None),))))),), {})
old token: ('tuple', [('913ceb5b5beb463a9010ec0790bc30002ca34164', []), ('tuple', [('2a84eba15180db3d422384d14b52d55c8ec9c4fd', ['05fe405753166f125559e7c9ac558654f107c7e9'])]), ('dict', [])])
new token: ('tuple', [('913ceb5b5beb463a9010ec0790bc30002ca34164', []), ('tuple', [('2a84eba15180db3d422384d14b52d55c8ec9c4fd', ['2da998e53403d195a27e3e251b85adb7ecb9e29d'])]), ('dict', [])])
old dependencies: set()
new dependencies: set()
```
I believe it is a reasonable workflow to let agents read data independently from disk. Because we discover over time what data is needed, I cannot avoid that multiple agents may redundantly read the same data.
If `read_data` is the bound method of an agent, then the `run_spec` issue occurs.
If `read_data` is not a bound method, then no issue. The log shows a proper release of the keys in that case:
```
DEBUG | distributed.scheduler | Client Client-worker-ac3b4576-5519-11ef-8a98-089204dc4273 releases keys: [('open_dataset-y-c34960ff895a9a4a2d1c966c401b7d40', 0)]
```
### What did you expect to happen?
No `run_spec` issue.
### Minimal Complete Verifiable Example
```Python
from __future__ import annotations

import logging
import sys
from dataclasses import dataclass
from pathlib import Path

import xarray as xr
from distributed import Client

@dataclass
class Agent:
    """Agent."""

    def _read_data(self, path: Path) -> None:
        """Reads data from xarray saved as .zarr."""
        read_data(path=path)

def read_data(path: Path) -> None:
    """Reads data from xarray saved as .zarr."""
    xr.open_zarr(path).load()

def save_dataset_zarr(path: Path) -> None:
    """Saves a toy dataset to disk in .zarr format."""
    ds = xr.Dataset(
        data_vars={"a": ("x", [5, 7]), "b": ("x", [0.1, 2.4])},
        coords={"x": ["a", "b"], "y": ("x", [0, 1])},
    )
    ds.to_zarr(path, mode="a")

def set_distributed_loggers_to_debug() -> None:
    """Sets logging level of distributed loggers to debug."""
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(
        logging.Formatter("{levelname:^7} | {name:<35} | {message}", style="{")
    )
    for name_logger, logger in logging.Logger.manager.loggerDict.items():
        if hasattr(logger, "setLevel") and (
            "distributed" in name_logger
            or (hasattr(logger, "name") and "distributed" in logger.name)
        ):
            logger.setLevel(logging.DEBUG)
            logger.addHandler(handler)

if __name__ == "__main__":
    path_zarr = Path(r"C:\tests\dataset.zarr")
    save_dataset_zarr(path=path_zarr)
    REPEATS = range(2)
    with Client(processes=True) as client:
        client.run_on_scheduler(set_distributed_loggers_to_debug)
        client.run(set_distributed_loggers_to_debug, nanny=True)

        print("***\nsubmit read data tasks of agents\n***")
        futures = [
            client.submit(Agent()._read_data, path=path_zarr) for repeat in REPEATS
        ]
        client.gather(futures)  # <- run_spec issue with 2 repeats

        print("***\nsubmit read data tasks\n***")
        futures = [client.submit(read_data, path=path_zarr) for repeat in REPEATS]
        client.gather(futures)  # <- no run_spec issue with many repeats
```
### MVCE confirmation
- [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [x] Complete example — the example is self-contained, including all data and the text of any traceback.
- [x] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [x] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
WARNING | distributed.scheduler | Detected different `run_spec` for key 'original-open_dataset-a-c34960ff895a9a4a2d1c966c401b7d40' between two consecutive calls to `update_graph`. This can cause failures and deadlocks down the line. Please ensure unique key names. If you are using a standard dask collections, consider releasing all the data before resubmitting another computation. More details and help can be found at https://github.com/dask/dask/issues/9888.
Debugging information
---------------------
old task state: processing
old run_spec: (<function execute_task at 0x00000232EB1D9120>, (ImplicitToExplicitIndexingAdapter(array=CopyOnWriteArray(array=LazilyIndexedArray(array=<xarray.backends.zarr.ZarrArrayWrapper object at 0x00000232EDD7A6C0>, key=BasicIndexer((slice(None, None, None),))))),), {})
new run_spec: (<function execute_task at 0x00000232EB1D9120>, (ImplicitToExplicitIndexingAdapter(array=CopyOnWriteArray(array=LazilyIndexedArray(array=<xarray.backends.zarr.ZarrArrayWrapper object at 0x00000232EDFF9540>, key=BasicIndexer((slice(None, None, None),))))),), {})
old token: ('tuple', [('913ceb5b5beb463a9010ec0790bc30002ca34164', []), ('tuple', [('673b402c9db84ac3e891ca3ce3c9a8b9e901612a', ['8473dc8afac710c0d78f4dadf0eb1b8ef9fe5c46'])]), ('dict', [])])
new token: ('tuple', [('913ceb5b5beb463a9010ec0790bc30002ca34164', []), ('tuple', [('673b402c9db84ac3e891ca3ce3c9a8b9e901612a', ['05fe405753166f125559e7c9ac558654f107c7e9'])]), ('dict', [])])
old dependencies: set()
new dependencies: set()
[... the same WARNING block repeats verbatim 18 more times (one with a timestamped log prefix) ...]
DEBUG | distributed.scheduler | Detected different `run_spec` for key 'original-open_dataset-y-c34960ff895a9a4a2d1c966c401b7d40' between two consecutive calls to `update_graph`.
[... the same DEBUG line repeats 17 more times ...]
```
### Anything else we need to know?
I wish I could do one of these:
1. `dataset.load(key=my_unique_key)`
2. `dataset.load(name=my_unique_key)`
3. `dataset.load(pure=False)` to generate a random key
Other workarounds?
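One possible workaround with the current API (a sketch, untested; `token` is a parameter of `Dataset.chunk`): re-chunk with a fresh token so each read gets uniquely named dask keys, at the cost of losing deduplication of identical reads.
```python
import uuid

import xarray as xr

def read_data_unique(path) -> None:
    ds = xr.open_zarr(path)
    # Re-chunking with a fresh token renames the dask keys, so two agents
    # reading the same store never collide on the same key name.
    ds.chunk(ds.chunks, token=uuid.uuid4().hex).load()
```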
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 85 Stepping 7, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: ('English_United States', '1252')
libhdf5: 1.12.1
libnetcdf: None
xarray: 2024.6.0
pandas: 2.0.3
numpy: 1.24.3
scipy: 1.13.0
netCDF4: None
pydap: None
h5netcdf: None
h5py: 3.9.0
zarr: 2.18.2
cftime: None
nc_time_axis: None
iris: None
bottleneck: 1.3.5
dask: 2024.7.1
distributed: 2024.7.1
matplotlib: 3.8.2
cartopy: None
seaborn: 0.12.2
numbagg: None
fsspec: 2024.6.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 68.0.0
pip: 24.1.2
conda: 23.9.0
pytest: 7.4.0
mypy: None
IPython: 8.15.0
sphinx: 5.0.2
</details>
|
closed
|
2024-08-08T00:42:57Z
|
2024-08-16T16:23:49Z
|
https://github.com/pydata/xarray/issues/9325
|
[
"topic-dask",
"upstream issue"
] |
templiert
| 11
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 906
|
Continue training --epoch_count?
|
I am using Google Colab, so I can only use it for 12 hours and then have to wait another 12 hours before using it again. If I am right, the progress is saved every 5 epochs (around 1 hour in Google Colab) - can I make this happen more often?
I looked in the FAQ and it states:
> You can use the option --continue_train. Also set --epoch_count to specify a different starting epoch count. See more discussion in [training/test tips](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/tips.md#trainingtest-tips).
So I looked at [training/test tips](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/tips.md#trainingtest-tips) and it states:
> To fine-tune a pre-trained model, or resume the previous training, use the --continue_train flag. The program will then load the model based on epoch. By default, the program will initialize the epoch count as 1. Set --epoch_count <int> to specify a different starting epoch count.
What do I need to set `<int>` to in order to continue training? Should I manually check how many epochs it completed and insert that, or is it okay to use the default of 1?
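For example, if the last checkpoint was saved at epoch 35, a resume command might look like this (the dataset and experiment names are placeholders; `--save_epoch_freq` controls how often checkpoints are written, 5 by default):
```
python train.py --dataroot ./datasets/mydata --name my_experiment --model cycle_gan \
    --continue_train --epoch_count 36 --save_epoch_freq 1
```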
Thanks so much for helping!
|
closed
|
2020-01-30T18:19:10Z
|
2020-09-24T10:29:59Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/906
|
[] |
siemlohuis
| 3
|
autokey/autokey
|
automation
| 223
|
QT: Tray icon broken, if disabled, then enabled in settings
|
## Classification:
Bug
## Reproducibility:
Always
## Summary
If the tray icon is disabled in the settings and then enabled, the tray icon is shown, but is non-functional.
(While fixing this, also immediately update the tray icon colour scheme (light/dark) when saving the settings.)
## Steps to Reproduce (if applicable)
- Disable the tray icon in the settings
- (Optional: Restart autokey-qt)
- Enable the tray icon in the settings
- Try to interact with the tray icon
## Expected Results
- Left click should show the hidden main window
- Right click should open the context menu
## Actual Results
- Instead, nothing happens.
## Workaround
- Restart autokey-qt after enabling the tray icon
## Version
AutoKey version: All up to the latest. Reproducible in 0.95.4
Used GUI (Gtk, Qt, or both): Qt
Installed via: local git clone
Distro: Kubuntu 18.04
## Notes
The icon is in a fully non-functional state. There are no Python Exceptions, just nothing.
The code tries to fully initialize and update the icon, even in the hidden state to prevent any signals from not getting connected on creation, but that seems to fail.
A simple `hide` and `show` cycle seems to disconnect any Qt signals.
It still reacts slightly to a mouse-hover event by turning from black to dark-grey, but that might be done by the KDE system tray.
Ported this issue from my personal fork’s issue tracker. https://github.com/luziferius/autokey/issues/18
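A plausible direction for a fix (a sketch using PyQt5's generic QSystemTrayIcon; AutoKey's actual class and attribute names will differ): rebuild the icon object when it is re-enabled instead of relying on hide()/show(), so the signals are connected on a fresh object.
```python
from PyQt5.QtWidgets import QSystemTrayIcon

def rebuild_tray_icon(app):
    # Tear the old icon down completely rather than hiding it,
    # so the signal connections are made on a fresh object.
    if app.tray_icon is not None:
        app.tray_icon.hide()
        app.tray_icon.deleteLater()
    icon = QSystemTrayIcon(app.icon, parent=app.main_window)
    icon.activated.connect(app.on_tray_activated)  # left click -> show window
    icon.setContextMenu(app.tray_menu)             # right click -> context menu
    icon.show()
    app.tray_icon = icon
```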
|
closed
|
2018-12-02T16:07:22Z
|
2019-01-19T14:44:42Z
|
https://github.com/autokey/autokey/issues/223
|
[
"bug",
"autokey-qt",
"help-wanted",
"user interface"
] |
luziferius
| 1
|
mars-project/mars
|
numpy
| 2,956
|
Submit query condition to remote node instead of fetch to local then query
|
**Is your feature request related to a problem? Please describe.**
Currently, if the Ray fetcher gets objects with a condition, it fetches the objects to the local node and then filters the local objects with the condition. This incurs a lot of object transfer cost and a high memory footprint on the local node.
**Describe the solution you'd like**
We should instead submit the query to the node holding the objects and then fetch only the queried result.
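A minimal sketch of the idea with plain Ray (function and argument names are illustrative, not the actual fetcher code); Ray's locality-aware scheduling will usually place the task next to the data:
```python
import ray

@ray.remote
def filter_remotely(chunk, condition):
    # Ray dereferences `chunk` on the node that runs this task, so only
    # the (usually much smaller) filtered result travels back.
    return chunk.query(condition)

# chunk_ref: an ObjectRef to a pandas DataFrame chunk on a remote node
# filtered = ray.get(filter_remotely.remote(chunk_ref, "a > 0"))
```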
|
closed
|
2022-04-24T12:06:51Z
|
2022-04-25T03:45:07Z
|
https://github.com/mars-project/mars/issues/2956
|
[] |
chaokunyang
| 0
|
Yorko/mlcourse.ai
|
data-science
| 749
|
Fix MyST header anchors
|
Check auto-generated anchors, e.g.:
`myst-anchors -l 3 mlcourse_ai_jupyter_book/book/topic02/topic02_visual_data_analysis.md`
[MyST docs](https://myst-parser.readthedocs.io/en/latest/syntax/optional.html#auto-generated-header-anchors)
|
closed
|
2023-05-17T14:22:11Z
|
2024-08-19T16:42:26Z
|
https://github.com/Yorko/mlcourse.ai/issues/749
|
[] |
Yorko
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 385
|
What versions of everything does one need to avoid errors during setup?
|
Hi all,
I'm on Ubuntu 20.04 with Python 3.7 in a conda env. I have an Nvidia GTX660 GPU installed.
I'm currently rockin' torch 1.2, cuda-10-0, tensorflow 1.14.0, tensorflow-gpu 1.14.0, and torchvision 0.4.0, along with everything else in requirements.txt. For the life of me, I can't figure out how to get demo_cli.py to not give the error a bunch of people get:
```Your PyTorch installation is not configured to use CUDA. If you have a GPU ready for deep learning, ensure that the drivers are properly installed, and that your CUDA version matches your PyTorch installation. CPU-only inference is currently not supported.```
Could someone give me the lowdown on precisely what packages and version numbers I need to make this thing fire up?
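For what it's worth, here is a quick sanity check for whether the installed torch build actually sees the GPU (diagnostic only; it won't fix a version mismatch):
```python
import torch

print(torch.__version__)          # installed PyTorch version, e.g. "1.2.0"
print(torch.version.cuda)         # CUDA version this torch build was compiled against
print(torch.cuda.is_available())  # must be True for demo_cli.py's GPU check to pass
```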
|
closed
|
2020-06-27T04:21:48Z
|
2020-06-29T21:58:11Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/385
|
[] |
deltabravozulu
| 4
|
wkentaro/labelme
|
deep-learning
| 357
|
Speed up labelme2voc script by parallel
|
Hey, it seems that the labelme2voc script runs really slowly, so I slightly modified it to run in parallel. It runs faster now: https://gist.github.com/hubutui/9b294f8b287eabc260b9162297638009
However, I only save the label in image format, and all other outputs are removed (saving as npy, etc.). I'm not sure why saving as npy doesn't work, but luckily, saving the label as a PNG image is good enough for a segmentation task. A rough sketch of the parallel approach is below.
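A minimal sketch of the approach (`convert_one` is a stand-in for the per-file logic of labelme2voc; the glob pattern is illustrative):
```python
import glob
import multiprocessing

def convert_one(json_file):
    # Hypothetical per-file body: parse the labelme JSON and write the
    # corresponding VOC-style PNG label, as labelme2voc does sequentially.
    print(f"converting {json_file}")

if __name__ == "__main__":
    files = sorted(glob.glob("annotations/*.json"))
    with multiprocessing.Pool() as pool:
        pool.map(convert_one, files)
```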
|
closed
|
2019-03-23T08:26:34Z
|
2019-04-27T02:14:27Z
|
https://github.com/wkentaro/labelme/issues/357
|
[] |
hubutui
| 0
|
Avaiga/taipy
|
automation
| 2,381
|
Remove login visual element
|
### Description
The issue consists of removing the login visual element from Taipy Community.
Indeed, it was made to cover a use case involving authentication, which is part of the Taipy Enterprise package.
In parallel with this breaking change, another issue should be opened to add a login visual element to taipy-enterprise that deeply integrates with authentication and authorization.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by unit tests.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
open
|
2025-01-06T10:17:51Z
|
2025-01-31T13:26:20Z
|
https://github.com/Avaiga/taipy/issues/2381
|
[
"🖰 GUI",
"🟧 Priority: High",
"✨New feature",
"🔒 Staff only",
"Enterprise",
"Enterprise: 🙍🏼User management"
] |
jrobinAV
| 1
|
autogluon/autogluon
|
scikit-learn
| 3,821
|
time series classification
|
This request has been made before, and there is a workaround with tsfresh - but that doesn't seem to be in the spirit of AutoGluon :)
Since you have very powerful transformations for DirectTabular and RecursiveTabular, you could implement it in autogluon.timeseries by recognizing the target as categorical and directly switching to training only the tabular time-series models with the problem type set to classification.
Just for clarification: can this be done manually right now? Specifying distinct models is easy with hyperparameters, but can I force them to do strictly classification? The manual says no: "TabularPredictor will always be trained with "regression" problem type, (...)"
|
open
|
2023-12-15T11:40:50Z
|
2025-03-16T16:06:47Z
|
https://github.com/autogluon/autogluon/issues/3821
|
[
"enhancement",
"module: timeseries"
] |
obwohl
| 1
|
pydata/pandas-datareader
|
pandas
| 39
|
0.15.2 causing problems with pandas.io.data.Options
|
Identical to #22, as the issue isn't fixed. Are the changes reflected in a version that I can pin to be sure I pick them up? When I install from `pip`, it looks as if the code hasn't been changed. See my latest comments under #22.
|
closed
|
2015-04-09T06:57:42Z
|
2015-04-10T05:41:10Z
|
https://github.com/pydata/pandas-datareader/issues/39
|
[] |
aisthesis
| 3
|
graphql-python/graphene-django
|
graphql
| 1,436
|
Circular Import: cannot import name 'GraphQLObjectType' from partially initialized module 'graphql.type'
|
**Note: for support questions, please use stackoverflow**. This repository's issues are reserved for feature requests and bug reports.
* **What is the current behavior?**
When running the project with `graphene-django` installed, I get the following error:
```
File "/ve/lib/python3.8/site-packages/graphene_django/__init__.py", line 1, in <module>
--
1970 | from .fields import DjangoConnectionField, DjangoListField
1971 | File "/ve/lib/python3.8/site-packages/graphene_django/fields.py", line 4, in <module>
1972 | from graphql_relay.connection.arrayconnection import (
1973 | File "/ve/lib/python3.8/site-packages/graphql_relay/connection/arrayconnection.py", line 10, in <module>
1974 | from ..utils.base64 import base64, unbase64
1975 | File "/ve/lib/python3.8/site-packages/graphql_relay/__init__.py", line 16, in <module>
1976 | from .connection.connection import (
1977 | File "/ve/lib/python3.8/site-packages/graphql_relay/connection/connection.py", line 3, in <module>
1978 | from graphql.type import (
1979 | File "/ve/lib/python3.8/site-packages/graphql/type/__init__.py", line 8, in <module>
1980 | from .schema import (
1981 | File "/ve/lib/python3.8/site-packages/graphql/type/schema.py", line 17, in <module>
1982 | from .definition import (
1983 | File "/ve/lib/python3.8/site-packages/graphql/type/definition.py", line 57, in <module>
1984 | from ..utilities.value_from_ast_untyped import value_from_ast_untyped
1985 | File "/ve/lib/python3.8/site-packages/graphql/utilities/__init__.py", line 14, in <module>
1986 | from .get_operation_root_type import get_operation_root_type
1987 | File "/ve/lib/python3.8/site-packages/graphql/utilities/get_operation_root_type.py", line 9, in <module>
1988 | from ..type import GraphQLObjectType, GraphQLSchema
1989 | ImportError: cannot import name 'GraphQLObjectType' from partially initialized module 'graphql.type' (most likely due to a circular import) (/ve/lib/python3.8/site-packages/graphql/type/__init__.py)
```
* **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** via
a github repo, https://repl.it or similar (you can use this template as a starting point: https://repl.it/@jkimbo/Graphene-Django-Example).
* **What is the expected behavior?**
Project starting without errors.
* **Please tell us about your environment:**
- Version:
graphene: 3.0b7
graphene-django: 3.0.0b7
graphql-core: 3.1.5
graphql-relay: 3.1.0
- Platform: Linux
|
open
|
2023-07-25T07:48:42Z
|
2023-07-25T21:38:09Z
|
https://github.com/graphql-python/graphene-django/issues/1436
|
[
"🐛bug"
] |
yankeexe
| 2
|
allenai/allennlp
|
nlp
| 5,377
|
How can I retrain the ELMo model using PyTorch?
|
Please ask questions on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than on GitHub. We monitor and triage questions on Stack Overflow with the AllenNLP label and questions there are more easily searchable for others.
|
closed
|
2021-08-25T06:52:58Z
|
2021-09-08T16:09:39Z
|
https://github.com/allenai/allennlp/issues/5377
|
[
"question",
"stale"
] |
xiaoqimiao7
| 2
|
praw-dev/praw
|
api
| 2,025
|
Unreleased Feature inquiry
|
### Describe the Documentation Issue
Hello, PRAW community,
I would like to thank you for your efforts on this product.
What I'm trying to do is scrape as much as I can from [r/Egypt](https://www.reddit.com/r/Egypt/) to collect Arabic text data for a custom Arabic dataset for a university project. When I try to scrape the subreddit's top posts using
`for submission in subreddit.new(limit=None)`
it gives me the same 673 posts with their respective comments, then the listing generator ends.
I make a new call after 1 minute to try to fetch more posts, but I end up with the same ones.
Is there a way to start scraping from a certain point in the subreddit instead of scraping the same posts over and over? A sketch of what I have in mind is below.
I have seen in the unreleased version's documentation that the `stream_generator()` function accepts a parameter called `continue_after_id`, and I am wondering if this might be helpful in my case, and if so, how I can access this version, because the feature is not available in 7.7.1.
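A minimal sketch of what I mean, assuming the listing endpoint's `after` parameter can be passed through `params` (`process()` is a hypothetical helper that stores the text):
```python
import praw

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="arabic-dataset-scraper")
subreddit = reddit.subreddit("Egypt")

last_fullname = None  # fullname of the last submission seen, e.g. "t3_abc123"
while True:
    params = {"after": last_fullname} if last_fullname else {}
    batch = list(subreddit.new(limit=100, params=params))
    if not batch:
        break  # nothing more reachable through this listing
    for submission in batch:
        process(submission)  # hypothetical: extract and store the Arabic text
    last_fullname = batch[-1].fullname
```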
Thanks in advance,
### Attributes
- [X] Yes
### Location of the issue
Unreleased, Inquiry
### What did you expect to see?
Helpful advice, and explanation regarding the unreleased changelog.
### What did you actually see?
unreleased changelog.
### Proposed Fix
Helpful advice, and explanation regarding the unreleased changelog.
### Operating System/Web Browser
Windows, Chrome
### Anything else?
Thanks
|
closed
|
2024-07-30T15:27:58Z
|
2024-08-01T02:20:55Z
|
https://github.com/praw-dev/praw/issues/2025
|
[] |
xDido
| 3
|
comfyanonymous/ComfyUI
|
pytorch
| 6,616
|
VAEDecodeTiled "Allocation on device" error message - can someone help? Hunyuan Video AI on ComfyUI
|
### Your question
I'm using a very strong computer with an RTX 3080 10GB GPU.
I followed an online YouTube clip that explains how to use it on a GPU that has less than 16GB of memory. This is the YouTube clip (there's a section at the end where he explains how to run it):
https://www.youtube.com/watch?v=ox9M9CMDYi8&list=PLYDPLOkojNompANOQ4U_q7hrJO1yqlfah
I get an error saying "VAEDecodeTiled Allocation on device",
and a long error message appears.
What should I do?
### Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node ID:** 73
- **Node Type:** VAEDecodeTiled
- **Exception Type:** torch.OutOfMemoryError
- **Exception Message:** Allocation on device
## Stack Trace
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 318, in decode
images = vae.decode_tiled(samples["samples"], tile_x=tile_size // compression, tile_y=tile_size // compression, overlap=overlap // compression, tile_t=temporal_size, overlap_t=temporal_overlap)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 531, in decode_tiled
output = self.decode_tiled_3d(samples, **args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 453, in decode_tiled_3d
return self.process_output(comfy.utils.tiled_scale_multidim(samples, decode_fn, tile=(tile_t, tile_x, tile_y), overlap=overlap, upscale_amount=self.upscale_ratio, out_channels=self.output_channels, index_formulas=self.upscale_index_formula, output_device=self.output_device))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 939, in tiled_scale_multidim
ps = function(s_in).to(output_device)
^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 452, in <lambda>
decode_fn = lambda a: self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\models\autoencoder.py", line 209, in decode
dec = self.decoder(dec, **decoder_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 723, in forward
h = self.up[i_level].upsample(h)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 114, in forward
x = self.conv(x)
^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 62, in forward
return self.conv(x)
^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 112, in forward
return super().forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 725, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 720, in _conv_forward
return F.conv3d(
^^^^^^^^^
## System Information
- **ComfyUI Version:** 0.3.12
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.5.1+cu124
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 10736893952
- **VRAM Free:** 9436004352
- **Torch VRAM Total:** 6643777536
- **Torch VRAM Free:** 6635257856
## Logs
2025-01-26T22:10:08.873211 - [START] Security scan2025-01-26T22:10:08.873211 -
2025-01-26T22:10:09.461398 - [DONE] Security scan2025-01-26T22:10:09.461398 -
2025-01-26T22:10:09.547432 - ## ComfyUI-Manager: installing dependencies done.2025-01-26T22:10:09.547932 -
2025-01-26T22:10:09.547932 - ** ComfyUI startup time:2025-01-26T22:10:09.547932 - 2025-01-26T22:10:09.547932 - 2025-01-26 22:10:09.5472025-01-26T22:10:09.547932 -
2025-01-26T22:10:09.547932 - ** Platform:2025-01-26T22:10:09.547932 - 2025-01-26T22:10:09.547932 - Windows2025-01-26T22:10:09.547932 -
2025-01-26T22:10:09.547932 - ** Python version:2025-01-26T22:10:09.547932 - 2025-01-26T22:10:09.547932 - 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]2025-01-26T22:10:09.547932 -
2025-01-26T22:10:09.547932 - ** Python executable:2025-01-26T22:10:09.547932 - 2025-01-26T22:10:09.547932 - C:\COMFYUI\ComfyUI_windows_portable\python_embeded\python.exe2025-01-26T22:10:09.547932 -
2025-01-26T22:10:09.548432 - ** ComfyUI Path:2025-01-26T22:10:09.548432 - 2025-01-26T22:10:09.548432 - C:\COMFYUI\ComfyUI_windows_portable\ComfyUI2025-01-26T22:10:09.548432 -
2025-01-26T22:10:09.548432 - ** User directory:2025-01-26T22:10:09.548432 - 2025-01-26T22:10:09.548432 - C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\user2025-01-26T22:10:09.548432 -
2025-01-26T22:10:09.548432 - ** ComfyUI-Manager config path:2025-01-26T22:10:09.548432 - 2025-01-26T22:10:09.548432 - C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\user\default\ComfyUI-Manager\config.ini2025-01-26T22:10:09.548432 -
2025-01-26T22:10:09.548432 - ** Log path:2025-01-26T22:10:09.548432 - 2025-01-26T22:10:09.548432 - C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\user\comfyui.log2025-01-26T22:10:09.548432 -
2025-01-26T22:10:10.076738 -
Prestartup times for custom nodes:
2025-01-26T22:10:10.076738 - 1.4 seconds: C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-01-26T22:10:10.076738 -
2025-01-26T22:10:12.411586 - Checkpoint files will always be loaded safely.
2025-01-26T22:10:12.552638 - Total VRAM 10240 MB, total RAM 32530 MB
2025-01-26T22:10:12.553138 - pytorch version: 2.5.1+cu124
2025-01-26T22:10:12.553138 - Set vram state to: NORMAL_VRAM
2025-01-26T22:10:12.553138 - Device: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
2025-01-26T22:10:13.397474 - Using pytorch attention
2025-01-26T22:10:14.545038 - [Prompt Server] web root: C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\web
2025-01-26T22:10:14.934058 - ### Loading: ComfyUI-Manager (V3.9.2)
2025-01-26T22:10:15.046561 - ### ComfyUI Version: v0.3.12-22-g7fbf4b72 | Released on '2025-01-24'
2025-01-26T22:10:15.245572 -
Import times for custom nodes:
2025-01-26T22:10:15.246072 - 0.0 seconds: C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2025-01-26T22:10:15.246072 - 0.1 seconds: C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-videohelpersuite
2025-01-26T22:10:15.246572 - 0.2 seconds: C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager
2025-01-26T22:10:15.246572 -
2025-01-26T22:10:15.250572 - Starting server
2025-01-26T22:10:15.250572 - To see the GUI go to: http://127.0.0.1:8188
2025-01-26T22:10:15.321572 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-01-26T22:10:15.363573 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-01-26T22:10:15.429578 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-26T22:10:15.446079 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-01-26T22:10:15.469579 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-01-26T22:10:16.062659 - FETCH DATA from: C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json2025-01-26T22:10:16.062659 - 2025-01-26T22:10:16.065659 - [DONE]2025-01-26T22:10:16.066159 -
2025-01-26T22:10:22.063915 - FETCH ComfyRegistry Data: 5/312025-01-26T22:10:22.063915 -
2025-01-26T22:10:29.365249 - FETCH ComfyRegistry Data: 10/312025-01-26T22:10:29.365249 -
2025-01-26T22:10:36.337779 - FETCH ComfyRegistry Data: 15/312025-01-26T22:10:36.337779 -
2025-01-26T22:10:43.400103 - FETCH ComfyRegistry Data: 20/312025-01-26T22:10:43.400103 -
2025-01-26T22:10:50.686668 - FETCH ComfyRegistry Data: 25/312025-01-26T22:10:50.686668 -
2025-01-26T22:10:57.642780 - FETCH ComfyRegistry Data: 30/312025-01-26T22:10:57.642780 -
2025-01-26T22:10:59.470304 - FETCH ComfyRegistry Data [DONE]2025-01-26T22:10:59.470304 -
2025-01-26T22:10:59.491805 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-01-26T22:10:59.522304 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-01-26T22:10:59.522304 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-01-26T22:10:59.522304 - 2025-01-26T22:10:59.761304 - [DONE]2025-01-26T22:10:59.761304 -
2025-01-26T22:42:20.882006 - FETCH DATA from: C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-manager\extension-node-map.json2025-01-26T22:42:20.882506 - 2025-01-26T22:42:20.885506 - [DONE]2025-01-26T22:42:20.885506 -
2025-01-26T22:42:33.155453 - got prompt
2025-01-26T22:42:33.225473 - Using pytorch attention in VAE
2025-01-26T22:42:33.227976 - Using pytorch attention in VAE
2025-01-26T22:42:33.428070 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-01-26T22:42:33.684656 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2025-01-26T22:42:33.695659 - model_type FLOW
2025-01-26T22:42:43.015561 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-01-26T22:42:43.095596 - clip missing: ['text_projection.weight']
2025-01-26T22:42:50.281307 - Requested to load HunyuanVideoClipModel_
2025-01-26T22:42:53.535977 - loaded partially 7653.8 7653.798038482666 0
2025-01-26T22:42:54.475332 - Requested to load HunyuanVideo
2025-01-26T22:43:00.876028 - loaded partially 4056.6349999999993 4056.2510375976562 538
2025-01-26T22:49:09.012384 -
100%|████████████████████████████████████████████████████████████████████████████████████| 6/6 [06:08<00:00, 56.17s/it]2025-01-26T22:49:09.013384 -
100%|████████████████████████████████████████████████████████████████████████████████████| 6/6 [06:08<00:00, 61.34s/it]2025-01-26T22:49:09.013384 -
2025-01-26T22:49:09.023889 - Requested to load AutoencoderKL
2025-01-26T22:49:09.127427 - 0 models unloaded.
2025-01-26T22:49:09.554111 - loaded completely 9.5367431640625e+25 470.1210079193115 True
2025-01-26T22:49:10.074674 - !!! Exception during processing !!! Allocation on device
2025-01-26T22:49:10.120705 - Traceback (most recent call last):
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 318, in decode
images = vae.decode_tiled(samples["samples"], tile_x=tile_size // compression, tile_y=tile_size // compression, overlap=overlap // compression, tile_t=temporal_size, overlap_t=temporal_overlap)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 531, in decode_tiled
output = self.decode_tiled_3d(samples, **args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 453, in decode_tiled_3d
return self.process_output(comfy.utils.tiled_scale_multidim(samples, decode_fn, tile=(tile_t, tile_x, tile_y), overlap=overlap, upscale_amount=self.upscale_ratio, out_channels=self.output_channels, index_formulas=self.upscale_index_formula, output_device=self.output_device))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 939, in tiled_scale_multidim
ps = function(s_in).to(output_device)
^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 452, in <lambda>
decode_fn = lambda a: self.first_stage_model.decode(a.to(self.vae_dtype).to(self.device)).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\models\autoencoder.py", line 209, in decode
dec = self.decoder(dec, **decoder_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 723, in forward
h = self.up[i_level].upsample(h)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 114, in forward
x = self.conv(x)
^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\model.py", line 62, in forward
return self.conv(x)
^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 112, in forward
return super().forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 725, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\COMFYUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 720, in _conv_forward
return F.conv3d(
^^^^^^^^^
torch.OutOfMemoryError: Allocation on device
2025-01-26T22:49:10.125205 - Got an OOM, unloading all loaded models.
2025-01-26T22:49:10.808025 - Prompt executed in 397.65 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"last_node_id":80,"last_link_id":222,"nodes":[{"id":16,"type":"KSamplerSelect","pos":[484,751],"size":[315,58],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"SAMPLER","type":"SAMPLER","links":[19],"shape":3}],"properties":{"Node name for S&R":"KSamplerSelect"},"widgets_values":["euler"]},{"id":26,"type":"FluxGuidance","pos":[520,100],"size":[317.4000244140625,58],"flags":{},"order":11,"mode":0,"inputs":[{"name":"conditioning","type":"CONDITIONING","link":175}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[129],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"FluxGuidance"},"widgets_values":[6],"color":"#233","bgcolor":"#355"},{"id":45,"type":"EmptyHunyuanLatentVideo","pos":[475.540771484375,432.673583984375],"size":[315,130],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[180],"slot_index":0}],"properties":{"Node name for S&R":"EmptyHunyuanLatentVideo"},"widgets_values":[848,480,73,1]},{"id":8,"type":"VAEDecode","pos":[1150,90],"size":[210,46],"flags":{},"order":15,"mode":2,"inputs":[{"name":"samples","type":"LATENT","link":181},{"name":"vae","type":"VAE","link":206}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":77,"type":"Note","pos":[0,0],"size":[350,110],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["Select a fp8 weight_dtype if you are running out of memory."],"color":"#432","bgcolor":"#653"},{"id":13,"type":"SamplerCustomAdvanced","pos":[860,200],"size":[272.3617858886719,124.53733825683594],"flags":{},"order":14,"mode":0,"inputs":[{"name":"noise","type":"NOISE","link":37,"slot_index":0},{"name":"guider","type":"GUIDER","link":30,"slot_index":1},{"name":"sampler","type":"SAMPLER","link":19,"slot_index":2},{"name":"sigmas","type":"SIGMAS","link":20,"slot_index":3},{"name":"latent_image","type":"LATENT","link":180,"slot_index":4}],"outputs":[{"name":"output","type":"LATENT","links":[181,210],"slot_index":0,"shape":3},{"name":"denoised_output","type":"LATENT","links":null,"shape":3}],"properties":{"Node name for S&R":"SamplerCustomAdvanced"},"widgets_values":[]},{"id":25,"type":"RandomNoise","pos":[479,618],"size":[315,82],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"NOISE","type":"NOISE","links":[37],"shape":3}],"properties":{"Node name for S&R":"RandomNoise"},"widgets_values":[215238710358391,"randomize"],"color":"#2a363b","bgcolor":"#3f5159"},{"id":74,"type":"Note","pos":[1151.330810546875,425.2190856933594],"size":[210,170],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["Use the tiled decode node by default because most people will need it.\n\nLower the tile_size and overlap if you run out of memory."],"color":"#432","bgcolor":"#653"},{"id":73,"type":"VAEDecodeTiled","pos":[1150,200],"size":[210,150],"flags":{},"order":16,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":210},{"name":"vae","type":"VAE","link":211}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[216],"slot_index":0}],"properties":{"Node name for 
S&R":"VAEDecodeTiled"},"widgets_values":[256,64,64,8]},{"id":78,"type":"VHS_VideoCombine","pos":[1448.0716552734375,202.6382598876953],"size":[214.7587890625,334],"flags":{},"order":17,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":216},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"HUNYUAN","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":10,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{},"muted":false}}},{"id":10,"type":"VAELoader","pos":[0,420],"size":[350,60],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[206,211],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors"]},{"id":17,"type":"BasicScheduler","pos":[478,860],"size":[315,106],"flags":{},"order":9,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":190,"slot_index":0}],"outputs":[{"name":"SIGMAS","type":"SIGMAS","links":[20],"shape":3}],"properties":{"Node name for S&R":"BasicScheduler"},"widgets_values":["simple",6,1]},{"id":11,"type":"DualCLIPLoader","pos":[0,270],"size":[350,106],"flags":{},"order":6,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[205],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["clip_l (1).safetensors","llava_llama3_fp8_scaled.safetensors","hunyuan_video","default"]},{"id":12,"type":"UNETLoader","pos":[0,150],"size":[350,82],"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[190,209],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors","default"],"color":"#223","bgcolor":"#335"},{"id":67,"type":"ModelSamplingSD3","pos":[360,0],"size":[210,58],"flags":{},"order":10,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":209}],"outputs":[{"name":"MODEL","type":"MODEL","links":[221],"slot_index":0}],"properties":{"Node name for S&R":"ModelSamplingSD3"},"widgets_values":[7]},{"id":22,"type":"BasicGuider","pos":[600,0],"size":[222.3482666015625,46],"flags":{},"order":13,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":222,"slot_index":0},{"name":"conditioning","type":"CONDITIONING","link":129,"slot_index":1}],"outputs":[{"name":"GUIDER","type":"GUIDER","links":[30],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"BasicGuider"},"widgets_values":[]},{"id":80,"type":"LoraLoaderModelOnly","pos":[469.9593811035156,-189.8470916748047],"size":[315,82],"flags":{},"order":12,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":221}],"outputs":[{"name":"MODEL","type":"MODEL","links":[222],"slot_index":0}],"properties":{"Node name for S&R":"LoraLoaderModelOnly"},"widgets_values":["hyvideo_FastVideo_LoRA-fp8.safetensors",1]},{"id":44,"type":"CLIPTextEncode","pos":[420,200],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":8,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":205}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[175],"slot_index":0}],"title":"CLIP Text Encode (Positive Prompt)","properties":{"Node name for 
S&R":"CLIPTextEncode"},"widgets_values":["AIR BALLON IN STREET "],"color":"#232","bgcolor":"#353"}],"links":[[19,16,0,13,2,"SAMPLER"],[20,17,0,13,3,"SIGMAS"],[30,22,0,13,1,"GUIDER"],[37,25,0,13,0,"NOISE"],[129,26,0,22,1,"CONDITIONING"],[175,44,0,26,0,"CONDITIONING"],[180,45,0,13,4,"LATENT"],[181,13,0,8,0,"LATENT"],[190,12,0,17,0,"MODEL"],[205,11,0,44,0,"CLIP"],[206,10,0,8,1,"VAE"],[209,12,0,67,0,"MODEL"],[210,13,0,73,0,"LATENT"],[211,10,0,73,1,"VAE"],[216,73,0,78,0,"IMAGE"],[221,67,0,80,0,"MODEL"],[222,80,0,22,0,"MODEL"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.6830134553650705,"offset":[250.26644464062525,342.59003714160156]},"groupNodes":{},"node_versions":{"comfy-core":"0.3.12","comfyui-videohelpersuite":"1.4.6"},"VHS_latentpreview":false,"VHS_latentpreviewrate":0},"version":0.4}
## Additional Context
(Please add any additional context or steps to reproduce the error here)
```
### Other
_No response_
|
closed
|
2025-01-27T10:28:30Z
|
2025-01-27T21:12:14Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6616
|
[
"User Support"
] |
cyberai1980
| 1
|
uriyyo/fastapi-pagination
|
fastapi
| 659
|
How to use a custom JSONResponse class?
|
Hello, I am currently facing a problem. I have rewritten your `Page` class and used my own custom `JSONResponse` class. Is there a way to combine these two? Here is the code.
What do I need to do with the `PageSchemas` of the `Page` class in order for `DeptAll`'s schema to take effect?
```python
@dept.get('', summary='xxx', response_model=Page[DeptAll])
async def get_all_dept():
    async with get_db() as session:
        return await paginate(session, DeptDao.get_all_dept())


async def paginate(
    conn: AsyncSession,
    stmt: Select,
    params: Optional[Params] = None,
    *,
    transformer: Optional[ItemsTransformer] = None,
    additional_data: AdditionalData = None,
) -> Any:
    params, raw_params = verify_params(params, "limit-offset")
    total = (await conn.execute(stmt.with_only_columns(func.count()))).scalar()
    q = stmt.offset(raw_params.offset).limit(raw_params.limit)
    items = (await conn.execute(q)).scalars().all()
    t_items = apply_items_transformer(items, transformer)
    return create_page(
        t_items,
        total,
        params,
        **(additional_data or {}),
    )


class Page(AbstractPage[T], Generic[T]):
    code: int = 200
    result: PageSchemas
    msg: Optional[Any] = Field(default='Success')
    success: bool = True

    __params_type__ = Params

    @classmethod
    def create(
        cls,
        results: Sequence[T],
        total: int,
        params: Params,
        **kwargs: Any,
    ) -> Page[T]:
        pageIndex = params.pageIndex
        pageSize = params.pageSize
        total_pages = math.ceil(total / params.pageSize)
        next = f"?pageIndex={pageIndex + 1}&pageSize={pageSize}" if (pageIndex + 1) <= total_pages else "null"
        previous = f"?pageIndex={pageIndex - 1}&pageSize={pageSize}" if (pageIndex - 1) >= 1 else "null"
        result_page = PageSchemas(list=results, total=total, pageIndex=params.pageIndex, pageSize=params.pageSize,
                                  next=next,
                                  previous=previous, pageCount=total_pages)  # .init()
        return cls(**kwargs, result=result_page)


class PageSchemas(BaseModel):
    list: Sequence[T] = None
    total: int = None
    pageIndex: int = None
    pageSize: int = None
    next: str = None
    previous: str = None
    pageCount: int = None
```
|
closed
|
2023-05-10T03:23:29Z
|
2023-05-10T03:27:41Z
|
https://github.com/uriyyo/fastapi-pagination/issues/659
|
[] |
jwxtz
| 0
|
BeastByteAI/scikit-llm
|
scikit-learn
| 17
|
Documentation
|
Hi @iryna-kondr
Re: `Documentation: Improve the project's documentation, including code comments and README files.`
I would love to help document this project including code comments and possibly adding some use case notebooks etc.
Let me know if this is something you're open to or if you have some pointers on where to start?
Thanks!
|
closed
|
2023-05-28T17:51:50Z
|
2023-06-07T18:25:30Z
|
https://github.com/BeastByteAI/scikit-llm/issues/17
|
[] |
ToluClassics
| 3
|
autogluon/autogluon
|
data-science
| 4,034
|
Fix the re-ordering of tabular test
|
Platform tests fail due to the interaction of Ray with the tabular tests.
Currently we work around this by reordering the tests under `tabular/tests/test_tabular.sh`.
This issue is to revisit this in the future so that fixing the platform tests does not depend on test re-ordering.
|
open
|
2024-04-03T18:38:31Z
|
2024-04-03T19:00:01Z
|
https://github.com/autogluon/autogluon/issues/4034
|
[
"bug",
"module: tabular",
"priority: 1"
] |
prateekdesai04
| 1
|
albumentations-team/albumentations
|
deep-learning
| 2,114
|
[Add transform] Add RandomFisheye
|
Looks like an alias of OpticalDistortion:
https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomFisheye
|
closed
|
2024-11-08T16:05:35Z
|
2024-11-19T02:37:22Z
|
https://github.com/albumentations-team/albumentations/issues/2114
|
[
"enhancement"
] |
ternaus
| 1
|
mwaskom/seaborn
|
pandas
| 3,631
|
Add Color Universal Design palette
|
Hi,
I think the [Color Universal Design](https://jfly.uni-koeln.de/color/) colorblind-friendly palette would be a great addition to seaborn.
I have created a repository with some examples of the palette you can take a look at: <https://github.com/mbhall88/cud>
Not sure if this request is better off at matplotlib or not though?
|
closed
|
2024-02-07T05:36:50Z
|
2024-02-07T22:06:45Z
|
https://github.com/mwaskom/seaborn/issues/3631
|
[] |
mbhall88
| 2
|
scanapi/scanapi
|
rest-api
| 167
|
Show the test results in the HTML report
|
Show the test results in the HTML report: show whether each test case passed or failed.
|
closed
|
2020-06-04T21:07:36Z
|
2020-06-25T10:28:16Z
|
https://github.com/scanapi/scanapi/issues/167
|
[
"Feature",
"Reporter"
] |
camilamaia
| 0
|
lepture/authlib
|
django
| 177
|
estimated 0.14 release date
|
I'm not sure this is the right place for asking this...
Do you have an estimated release date for version 0.14?
Thanks!
|
closed
|
2020-01-03T08:12:37Z
|
2020-01-06T14:18:06Z
|
https://github.com/lepture/authlib/issues/177
|
[] |
dmartin35
| 1
|
labmlai/annotated_deep_learning_paper_implementations
|
machine-learning
| 217
|
scripts_img2img script_args is empty
|
In a new .py file I create a parameter object `img2imgreg` via `img2imgreg = StableDiffusionImg2ImgProcessingAPI()`,
and provide initial values for parameters such as `init_images`, `masks`, `alwayson_scripts`, and so on.
Then I use `script_runner = scripts.scripts_img2img` to obtain the script's parameters `script_args`, but the obtained `script_args` is always empty. May I ask what is going on? @AUTOMATIC1111
|
closed
|
2023-10-20T01:11:46Z
|
2024-06-20T07:26:47Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/217
|
[] |
fu-jianhua
| 1
|
autogluon/autogluon
|
computer-vision
| 4,808
|
[BUG] [timeseries] TimeSeriesPredictor.feature_importance outputting 0 when covariate is used by regressor
|
**Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
When `TimeSeriesPredictor.feature_importance` is called given only Chronos with CatBoost covariate regressor, feature importance is computed as 0 even though CatBoost assigns 100% feature importance to one of the covariates.
set up:
```
import numpy as np
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor
df = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_daily_subset/train.csv")
df.head()
static_features_df = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_daily_subset/metadata.csv")
static_features_df.head()
train_data = TimeSeriesDataFrame.from_data_frame(
df,
id_column="item_id",
timestamp_column="timestamp",
static_features_df=static_features_df,
)
train_data.head()
train_data["log_target"] = np.log(train_data["target"])
WEEKEND_INDICES = [5, 6]
timestamps = train_data.index.get_level_values("timestamp")
train_data["weekend"] = timestamps.weekday.isin(WEEKEND_INDICES).astype(float)
train_data.head()
predictor = TimeSeriesPredictor(
prediction_length=14,
target="target",
known_covariates_names=["weekend"],
).fit(
train_data,
hyperparameters={
"Chronos": {
"model_path": "bolt_small",
"covariate_regressor": "CAT",
"target_scaler": "standard",
"ag_args": {"name_suffix": "WithRegressor"},
},
}
)
predictor.feature_importance(train_data)
```
```python
trainer = predictor._learner.load_trainer()
model = trainer.load_model(
trainer.get_model_best()
)
reg = model._get_model_base().covariate_regressor.fit(train_data).model.model
dict(zip(reg.feature_names_, reg.feature_importances_)) # {'weekend': 100.0}
```
Similar issue: https://github.com/autogluon/autogluon/issues/4322
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
|
closed
|
2025-01-17T08:09:53Z
|
2025-01-29T16:31:16Z
|
https://github.com/autogluon/autogluon/issues/4808
|
[
"bug",
"module: timeseries"
] |
canerturkmen
| 1
|
lanpa/tensorboardX
|
numpy
| 131
|
How to show train and validation results in the same plot?
|
Hello,
Do you mind telling me how I can put the train and validation losses in the same plot, and the train and validation accuracies in the same plot?
Thank you!
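For reference, a minimal sketch using `SummaryWriter.add_scalars`, which groups several scalars under one main tag so they land in the same chart (the values here are made up):
```python
from tensorboardX import SummaryWriter

writer = SummaryWriter("runs/exp1")
for epoch in range(10):
    train_loss, valid_loss = 1.0 / (epoch + 1), 1.2 / (epoch + 1)  # dummy values
    train_acc, valid_acc = 0.5 + 0.04 * epoch, 0.45 + 0.04 * epoch
    # One main tag per chart; the dict keys become the curves in that chart.
    writer.add_scalars("loss", {"train": train_loss, "valid": valid_loss}, epoch)
    writer.add_scalars("accuracy", {"train": train_acc, "valid": valid_acc}, epoch)
writer.close()
```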
|
closed
|
2018-04-27T16:05:35Z
|
2018-05-08T17:11:58Z
|
https://github.com/lanpa/tensorboardX/issues/131
|
[] |
pinkfloyd06
| 1
|
mwaskom/seaborn
|
pandas
| 3,790
|
inner='box' in violinplot shows too large quartiles and whiskers
|
When making violinplots, passing `inner='box'` uses mpl's Axes.plot() to draw the inner boxplot.
However, the default in mpl is `solid_capstyle='projecting'`, which extends the line too far out, and thus exaggerates both the boxplot's quartiles and the whiskers.
This becomes increasingly exaggerated with a larger linewidth, because `'projecting'` uses `linewidth/2` (https://matplotlib.org/stable/api/_enums_api.html#matplotlib._enums.CapStyle).
Showing incorrect data ranges may be problematic for e.g., scientific publications.
For more accurate plotting, I therefore suggest that seaborn sets the default for `inner_kws` to `solid_capstyle='butt'`. This limits the line to its actual endpoints.
I don't know whether this issue may extend to other types of plots as well.
Below is a minimal example which shows both the default and desired behaviours. I am using seaborn 0.13.2.
I set `box_width` to 12 to make it a bit more obvious, but even at default box_width it happens.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# List of 0-100
y = np.linspace(0, 100, 100)
avg, q1, q3 = np.mean(y), np.quantile(y, 1/4), np.quantile(y, 3/4)
# To dataframe
df = pd.DataFrame({'y': y})
# Plot default behaviour
plt.figure()
sns.violinplot(data=df, y='y', cut=0, fill=False, inner='box', inner_kws={'box_width': 12})
plt.axhline(avg, c='black', zorder=-100)
plt.axhline(q1, c='black', zorder=-100)
plt.axhline(q3, c='black', zorder=-100)
plt.yticks([q1, avg, q3], ['q1', 'avg', 'q3'])
plt.title('Default behaviour')
plt.show()
# Plot desired behaviour (inner_kws={'solid_capstyle': 'butt'})
plt.figure()
sns.violinplot(data=df, y='y', cut=0, fill=False, inner='box', inner_kws={'box_width': 12, 'solid_capstyle': 'butt'})
plt.axhline(avg, c='black', zorder=-100)
plt.axhline(q1, c='black', zorder=-100)
plt.axhline(q3, c='black', zorder=-100)
plt.yticks([q1, avg, q3], ['q1', 'avg', 'q3'])
plt.title('Desired behaviour (solid_capstyle = "butt")')
plt.show()
```


|
open
|
2024-11-26T13:52:26Z
|
2024-11-26T13:52:26Z
|
https://github.com/mwaskom/seaborn/issues/3790
|
[] |
higher-bridge
| 0
|
PaddlePaddle/PaddleNLP
|
nlp
| 9,584
|
[Question]: What is the maximum text length supported? Extracting from my own text of 100+ characters reports an error
|
### Please describe your question
The error message is as follows:
```
/usr/local/lib/python3.10/dist-packages/paddlenlp/transformers/tokenizer_utils_base.py:1985: FutureWarning: The `pad_to_max_length` argument is deprecated and will be removed in a future version, use `padding=True` or `padding='longest'` to pad to the longest sequence in the batch, or use `padding='max_length'` to pad to a max length. In this case, you can give a specific length with `max_length` (e.g. `max_length=45`) or leave max_length to None to pad to the maximal input size of the model (e.g. 512 for Bert).
warnings.warn(
[{}]
```
The code is as follows (the schema and input text are kept verbatim, as they are the repro data):
```python
from paddlenlp import Taskflow


def uie_base(schema, text):
    uie = Taskflow("information_extraction", schema=schema, model="uie-x-base")
    result = uie(text)
    return result


if __name__ == '__main__':
    res = uie_base(
        ['题目'],
        """
2. 七年级(1)班同学开展了社会实践调查,
· 人们采集野生植物时掌握了一些可食用植物的生长规律,过实践,将它们培育为农作物。
· 人们开始将一些活的野兽或小动物图永起来,以备日后食用。
· 最初进行农业生产,主要是靠双手摇种和收获。后来人们仗用石刀等工具收割,用石磨盘加工
A. 家畜饲养的出现
B. 磨制石器的发展历程
C. 早期国家的出现
D. 原始农业的兴起发展
""")
    print(res)
```
|
closed
|
2024-12-09T07:59:12Z
|
2025-01-10T02:03:19Z
|
https://github.com/PaddlePaddle/PaddleNLP/issues/9584
|
[
"question"
] |
dongyu6
| 3
|
flasgger/flasgger
|
flask
| 334
|
Route specs disappearing when `endpoint` is passed
|
closed
|
2019-09-13T19:55:32Z
|
2019-09-14T03:25:40Z
|
https://github.com/flasgger/flasgger/issues/334
|
[] |
nick-brown
| 0
|
|
laurentS/slowapi
|
fastapi
| 206
|
dynamic_limit isn't really dynamic
|
**Describe the bug**
I would expect the callable `limit_value: StrOrCallableStr` to be executed for every request, because I use something like configcat or LaunchDarkly to dynamically update the rate limit as needed, like below
```
@limiter.limit(
    get_rate_limit(feature_flag="api1_rate_limit", default_value="10/minute"),
)
...

def get_rate_limit(
    feature_flag: str,
    default_value: str,
    configcat: ConfigCatClient = get_configcat(),
) -> str:
    return configcat.get_value(feature_flag, default_value)
```
However, I found that the `limit_value` seems to be cached by some key (I guess it's the key from `key_func`), so it is only called once, and the new value from configcat is not picked up.
**To Reproduce**
See description above.
**Expected behavior**
If limit_value is a callable, can we make a flag somewhere to make sure it's being called for every request?
**Screenshots**
N/A
**Your app (please complete the following information):**
- fastapi
- 0.109.1
- slowapi version: 0.1.9
**Additional context**
N/A
|
closed
|
2024-07-07T17:20:05Z
|
2024-07-09T13:45:55Z
|
https://github.com/laurentS/slowapi/issues/206
|
[] |
yanqianglu
| 2
|
microsoft/nni
|
tensorflow
| 5,079
|
Can't use OrderedDict inside nn.LayerChoice when using ProxylessTrainer
|
ProxylessTrainer forces you to use a plain list of op candidates (you can't use an OrderedDict) inside nn.LayerChoice. That's due to the fact that op order is mapped to the name and used inside the latency predictor. That's inconsistent with the documentation, which says that both can be used.
E.g., if this block is used:
```python
class ConvBlock(nn.Module):
def __init__(self, in_channels: int, out_channels: int):
super().__init__()
self.block = nn.LayerChoice(OrderedDict([
# conv block is standard Conv-bn-act
('3x3', ConvBlock(in_channels, out_channels, kernel_size=3)),
('1x3', ConvBlock(in_channels, out_channels, kernel_size=(1, 3))),
('3x1', ConvBlock(in_channels, out_channels, kernel_size=(3, 1))),
('3x3_sep', ConvBlock(in_channels, out_channels, kernel_size=3, groups=in_channels)),
('identity', Identity())
]))
```
Then an error is thrown:
```python
Traceback (most recent call last):
File "C:\Users\...\proxylessnas.py", line 373, in <module>
main()
File "C:\Users\...\proxylessnas.py", line 359, in main
trainer.fit()
File "C:\Users\...\lib\site-packages\nni\retiarii\oneshot\pytorch\proxyless.py", line 363, in fit
self._train_one_epoch(i)
File "C:\Users\...\proxylessnas.py", line 295, in _train_one_epoch
logits, loss = self._logits_and_loss_for_arch_update(val_X, val_y)
File "C:\Users\...\lib\site-packages\nni\retiarii\oneshot\pytorch\proxyless.py", line 330, in _logits_and_loss_for_arch_update
expected_latency = self.latency_estimator.cal_expected_latency(current_architecture_prob)
File "C:\Users\...\lib\site-packages\nni\retiarii\oneshot\pytorch\proxyless.py", line 168, in cal_expected_latency
lat += torch.sum(torch.tensor([probs[i] * self.block_latency_table[module_name][str(i)]
File "C:\Users\...\lib\site-packages\nni\retiarii\oneshot\pytorch\proxyless.py", line 168, in <listcomp>
lat += torch.sum(torch.tensor([probs[i] * self.block_latency_table[module_name][str(i)]
KeyError: '0'
```
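For reference, the workaround implied by the error (the latency table is keyed by candidate position, i.e. `str(i)`) is to pass a plain list of candidates instead of an OrderedDict:
```python
# Same candidates as above, but as a positional list so the latency
# table's str(i) keys line up with the choice indices.
self.block = nn.LayerChoice([
    ConvBlock(in_channels, out_channels, kernel_size=3),
    ConvBlock(in_channels, out_channels, kernel_size=(1, 3)),
    ConvBlock(in_channels, out_channels, kernel_size=(3, 1)),
    ConvBlock(in_channels, out_channels, kernel_size=3, groups=in_channels),
    Identity(),
])
```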
**Environment**:
- NNI version: 2.8
- Training service (local|remote|pai|aml|etc): local
- Client OS: Windows 10
- Python version: 3.10
- PyTorch version: 1.12
- Is conda/virtualenv/venv used?: Pipenv
- Is running in Docker?: No
|
open
|
2022-08-22T14:47:22Z
|
2023-03-10T01:57:31Z
|
https://github.com/microsoft/nni/issues/5079
|
[
"waiting user confirm",
"support"
] |
AL3708
| 4
|
pywinauto/pywinauto
|
automation
| 1,095
|
Item access does not work for nested controls
|
## Expected Behavior
Can access control items as `window["Pane"]["List"]["ListItem"]`
## Actual Behavior
`AttributeError: The control does not have a __getitem__ method for item access (i.e. ctrl[key]) so maybe you have requested this in error?`
## Steps to Reproduce the Problem
1. Run the script below to get the error
2. Comment line 3 to get expected result
## Short Example of Code to Demonstrate the Problem
```
wnd = pywinauto.Application(backend="uia").start("notepad.exe").top_window()
wnd["File"].click_input() # Open "File" menu, so "File" below is opened menu
wnd["File"]["Save As"].select() # Does not work
wnd["File"].child_window(best_match="Save As").select() # Does work
```
## Specifications
- Pywinauto version: 0.6.8
- Python version and bitness: 3.9.4 x32
- Platform and OS: Windows 10 x64
|
open
|
2021-06-25T08:39:59Z
|
2024-03-07T09:39:52Z
|
https://github.com/pywinauto/pywinauto/issues/1095
|
[] |
TesterNick
| 2
|
MagicStack/asyncpg
|
asyncio
| 990
|
Invalid input type error despite the use of CAST operator in query.
|
<!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.27.0
* **PostgreSQL version**: 14
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: Docker image `postgis/postgis:14-3.3-alpine`
* **Python version**: 3.8.13
* **Platform**: Darwin Kernel Version 21.6.0, arm64
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: No
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Didn't try
<!-- Enter your issue details below this comment. -->
Getting invalid input for query argument despite using cast. Here's the code example:
```python
import asyncio
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import sessionmaker
import sqlalchemy as sa
import asyncpg
postgres = "postgresql+asyncpg://username:password@localhost:5432/stac_db"
Base = declarative_base()
engine = create_async_engine(postgres, echo=True)
class TestData(Base):
__tablename__ = 'tests'
id = sa.Column(sa.Integer, primary_key=True)
t = sa.Column(sa.VARCHAR(20))
async def setup():
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.drop_all)
await conn.run_sync(Base.metadata.create_all)
async_session = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
async with async_session() as session:
async with session.begin():
session.add_all(
[
TestData(id=1, t='x'),
TestData(id=2, t='z')
]
)
await session.commit()
async def run_sqlalchemy_asyncpg(id: str):
async_session = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
async with async_session() as session:
q = sa.select(TestData).where(TestData.id > sa.cast(id, sa.BIGINT))
result = await session.execute(q)
rows = result.scalars().all()
print(rows)
async def run_asyncpg(id: str):
conn = await asyncpg.connect("postgresql://username:password@localhost:5432/stac_db")
rows = await conn.fetchrow('SELECT * FROM tests WHERE id > CAST($1 AS BIGINT)', id)
print(rows)
await conn.close()
import asyncio
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(setup())
loop.run_until_complete(run_sqlalchemy_asyncpg('0'))
loop.run_until_complete(run_asyncpg('0'))
finally:
loop.run_until_complete(engine.dispose())
```
In both the sqlalchemy and asyncpg cases, we get following error
```
<class 'asyncpg.exceptions.DataError'>: invalid input for query argument $1: '0' (an integer is required (got type str))
```
|
closed
|
2022-12-30T11:14:29Z
|
2022-12-31T06:37:08Z
|
https://github.com/MagicStack/asyncpg/issues/990
|
[] |
Geosynopsis
| 1
|
man-group/arctic
|
pandas
| 418
|
VersionStore deprecation warning
|
#### Arctic Version
```
Latest
```
```
tests/integration/store/test_ndarray_store_append.py::test_append_with_extra_columns
/home/bryant/arctic/arctic/store/_ndarray_store.py:240: FutureWarning: Assignment between structured arrays with different field names will change in numpy 1.14.
Previously fields in the dst would be set to the value of the identically-named field in the src. In numpy 1.14 fields will instead be assigned 'by position': The Nth field of the dst will be set to the Nth field of the src array.
See the release notes for details
old_arr = self._do_read(collection, previous_version, symbol).astype(dtype)
```
|
closed
|
2017-09-21T19:55:32Z
|
2018-04-16T10:15:59Z
|
https://github.com/man-group/arctic/issues/418
|
[] |
bmoscon
| 1
|
JaidedAI/EasyOCR
|
deep-learning
| 357
|
latin.pth recognition model for azerbaijani does not detect characters other than english and german umlauts
|
Hi,
Thanks for the great repo. I have a need to OCR different languages such as
az - Azerbaijani
cs - Czech
da - Danish
de - Deutsch
en - English
es - Spanish, Castilian
et - Estonian
fi - Finnish
fr - French
hr - Croatian
All these languages use the `latin.pth` model for recognition. I have 2 questions:
1) It does not recognize characters other than English letters and German umlauts when I use the Azerbaijani recognition model; it cannot read any of the other special characters. For example, if I read an image with the following Azerbaijani text after loading the model for the Azerbaijani language ('az'):
**Ş** i r k **ə** t n ü m a y **ə** n d **ə** s i
it gives the output as
Siket nümayandesi
It misreads the characters highlighted in bold. (A minimal sketch of how I run it is below.)
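For reference, a minimal sketch of how I load and run it (the image file name is illustrative):
```python
import easyocr

reader = easyocr.Reader(['az'])  # loads the latin.pth recognition model for Azerbaijani
# With the current model this prints ['Siket nümayandesi']
# instead of the correct 'Şirkət nümayəndəsi'.
print(reader.readtext('sample.png', detail=0))
```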
2) I have a use case where I need to use both the Cyrillic and Latin models together, because I don't know in advance which script the images I need to process will contain. When I tried mixing Cyrillic and Latin languages, it threw errors. Is there any way I could combine these two different language models for recognition?
Thanks in advance :)
|
closed
|
2021-01-27T16:08:47Z
|
2021-03-23T01:16:14Z
|
https://github.com/JaidedAI/EasyOCR/issues/357
|
[] |
kalai2033
| 2
|
RobertCraigie/prisma-client-py
|
asyncio
| 652
|
Add missing tests for native database types
|
We currently do not have tests for every possible native database type.
Status (not yet exhaustive):
- [ ] `db.Date`
- [x] `db.Date` in raw queries
- [ ] `db.Time`
- [ ] `db.Time` in raw queries
- [ ] `db.Timetz`
- [ ] `db.Timetz` in raw queries
- [ ] `db.Timestamp`
- [ ] `db.Timestamp` in raw queries
- [ ] `db.Timestamptz`
- [ ] `db.Timestamptz` in raw queries
|
open
|
2022-12-24T22:49:06Z
|
2022-12-24T22:50:54Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/652
|
[
"kind/improvement",
"topic: internal",
"priority/medium",
"level/unknown"
] |
RobertCraigie
| 0
|
adbar/trafilatura
|
web-scraping
| 446
|
Text extraction performance fix.
|
I've been looking into the text extraction algorithm, and it seems the single most time-consuming part is the following:
`prune_unwanted_nodes(one_of_the_trees, OVERALL_DISCARD_XPATH)`, since `OVERALL_DISCARD_XPATH` is a very long "selector".
The same pruning process is called 4 times on (sometimes) different trees when precision is required.
Am I wrong, or is this used in all extraction paths (in the case where precision preference is required)?
It would be nice to do it only once if possible.
When I tried to prune the trees only once in the function
`bare_extraction()`
, just before the line
`cleaned_tree_backup = deepcopy(cleaned_tree)`
, it saved almost half of the execution time for a precision-preference extractor run.
I'm not sure if I haven't overlooked something, but I think it still works as intended. A rough sketch of the change is below.
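Roughly the change I tried (a sketch only; all names are taken from the trafilatura source referenced above):
```python
# inside bare_extraction(), before the backup copy:
cleaned_tree = prune_unwanted_nodes(cleaned_tree, OVERALL_DISCARD_XPATH)
cleaned_tree_backup = deepcopy(cleaned_tree)
# ...and then drop the later per-tree calls to
# prune_unwanted_nodes(..., OVERALL_DISCARD_XPATH) further down the pipeline.
```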
|
open
|
2023-11-23T09:09:42Z
|
2023-11-23T13:41:46Z
|
https://github.com/adbar/trafilatura/issues/446
|
[
"question"
] |
majcl
| 1
|