| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
babysor/MockingBird
|
pytorch
| 764
|
How to solve the painful problem of having to re-upload and re-preprocess the dataset (and blowing up the disk) every time you train on Colab
|
Many people whose home hardware isn't good enough to train decent models have no choice but to train on Colab, the world's biggest free-tier training hangout. But for those who can't expand their Google Drive or upgrade Colab, uploading the dataset is pure hell: the network is slow, the space isn't enough, every runtime reset means uploading all over again, and the preprocessing is a headache. It took me 9 days to finally solve this, and I'm sharing my solution with everyone here.
First, register an account on the Kaggle website and obtain an API token.
I have already uploaded the preprocessed dataset (aidatatang_200zh) there, but downloading it requires a token, and getting a token requires a registered account. Please look up how to obtain the token yourself; I won't go into detail here.
Then open Colab.
Edit -> Notebook settings -> change the runtime from None to GPU.
Enter the following code:
```
!pip install kaggle
import json
token = {"username": "your_username", "key": "your_token"}
with open('/content/kaggle.json', 'w') as file:
    json.dump(token, file)
!mkdir -p ~/.kaggle
!cp /content/kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle config set -n path -v /content
```
Fill in the third line with the username and token you obtained earlier.
This step sets up the kaggle command line.
Next, download the dataset and unzip it:
```
!kaggle datasets download -d bjorndido/sv2ttspart1
!unzip "/content/datasets/bjorndido/sv2ttspart1/sv2ttspart1.zip" -d "/content/aidatatang_200zh"
!rm -rf /content/datasets
!kaggle datasets download -d bjorndido/sv2ttspart2
!unzip "/content/datasets/bjorndido/sv2ttspart2/sv2ttspart2.zip" -d "/content/aidatatang_200zh"
!rm -rf /content/datasets
```
Since some of you are probably on the same free tier as me, where starting from the raw, unprocessed dataset would blow up the disk, I uploaded the preprocessed dataset to Kaggle.
The zip is also deleted after extraction, which is very considerate.
In my tests the download speed reaches 200 MB/s, and even on a slower connection you still get 50 MB/s, which is very fast.
This whole step takes less than 10 minutes.
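If you want to sanity-check that the files landed where the training script expects them, here is a minimal sketch (the exact subfolder layout inside /content/aidatatang_200zh depends on the archive, so treat the path as an assumption):
```python
import os

# Assumed extraction root from the unzip commands above; adjust if your
# archive uses a different internal layout.
dataset_root = "/content/aidatatang_200zh"

total_files = sum(len(files) for _, _, files in os.walk(dataset_root))
print(f"{total_files} files found under {dataset_root}")
```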
```
!git clone https://github.com/babysor/MockingBird.git
!pip install -r /content/MockingBird/requirements.txt
```
Clone the repo and install the dependencies; nothing more to say about this step.
Then modify hparams:
```
%%writefile /content/MockingBird/synthesizer/hparams.py
import ast
import pprint
import json
class HParams(object):
    def __init__(self, **kwargs): self.__dict__.update(kwargs)
    def __setitem__(self, key, value): setattr(self, key, value)
    def __getitem__(self, key): return getattr(self, key)
    def __repr__(self): return pprint.pformat(self.__dict__)
    def parse(self, string):
        # Overrides hparams from a comma-separated string of name=value pairs
        if len(string) > 0:
            overrides = [s.split("=") for s in string.split(",")]
            keys, values = zip(*overrides)
            keys = list(map(str.strip, keys))
            values = list(map(str.strip, values))
            for k in keys:
                self.__dict__[k] = ast.literal_eval(values[keys.index(k)])
        return self
    def loadJson(self, dict):
        print("\nLoading the json with %s\n" % dict)
        for k in dict.keys():
            if k not in ["tts_schedule", "tts_finetune_layers"]:
                self.__dict__[k] = dict[k]
        return self
    def dumpJson(self, fp):
        print("\nSaving the json with %s\n" % fp)
        with fp.open("w", encoding="utf-8") as f:
            json.dump(self.__dict__, f)
        return self
hparams = HParams(
### Signal Processing (used in both synthesizer and vocoder)
sample_rate = 16000,
n_fft = 800,
num_mels = 80,
hop_size = 200, # Tacotron uses 12.5 ms frame shift (set to sample_rate * 0.0125)
win_size = 800, # Tacotron uses 50 ms frame length (set to sample_rate * 0.050)
fmin = 55,
min_level_db = -100,
ref_level_db = 20,
max_abs_value = 4., # Gradient explodes if too big, premature convergence if too small.
preemphasis = 0.97, # Filter coefficient to use if preemphasize is True
preemphasize = True,
### Tacotron Text-to-Speech (TTS)
tts_embed_dims = 512, # Embedding dimension for the graphemes/phoneme inputs
tts_encoder_dims = 256,
tts_decoder_dims = 128,
tts_postnet_dims = 512,
tts_encoder_K = 5,
tts_lstm_dims = 1024,
tts_postnet_K = 5,
tts_num_highways = 4,
tts_dropout = 0.5,
tts_cleaner_names = ["basic_cleaners"],
tts_stop_threshold = -3.4, # Value below which audio generation ends.
# For example, for a range of [-4, 4], this
# will terminate the sequence at the first
# frame that has all values < -3.4
### Tacotron Training
tts_schedule = [(2, 1e-3, 10_000, 32), # Progressive training schedule
(2, 5e-4, 15_000, 32), # (r, lr, step, batch_size)
(2, 2e-4, 20_000, 32), # (r, lr, step, batch_size)
(2, 1e-4, 30_000, 32), #
(2, 5e-5, 40_000, 32), #
(2, 1e-5, 60_000, 32), #
(2, 5e-6, 160_000, 32), # r = reduction factor (# of mel frames
(2, 3e-6, 320_000, 32), # synthesized for each decoder iteration)
(2, 1e-6, 640_000, 32)], # lr = learning rate
tts_clip_grad_norm = 1.0, # clips the gradient norm to prevent explosion - set to None if not needed
tts_eval_interval = 500, # Number of steps between model evaluation (sample generation)
# Set to -1 to generate after completing epoch, or 0 to disable
tts_eval_num_samples = 1, # Makes this number of samples
## For finetune usage, if set, only selected layers will be trained, available: encoder,encoder_proj,gst,decoder,postnet,post_proj
tts_finetune_layers = [],
### Data Preprocessing
max_mel_frames = 900,
rescale = True,
rescaling_max = 0.9,
synthesis_batch_size = 16, # For vocoder preprocessing and inference.
### Mel Visualization and Griffin-Lim
signal_normalization = True,
power = 1.5,
griffin_lim_iters = 60,
### Audio processing options
fmax = 7600, # Should not exceed (sample_rate // 2)
allow_clipping_in_normalization = True, # Used when signal_normalization = True
clip_mels_length = True, # If true, discards samples exceeding max_mel_frames
use_lws = False, # "Fast spectrogram phase recovery using local weighted sums"
symmetric_mels = True, # Sets mel range to [-max_abs_value, max_abs_value] if True,
# and [0, max_abs_value] if False
trim_silence = True, # Use with sample_rate of 16000 for best results
### SV2TTS
speaker_embedding_size = 256, # Dimension for the speaker embedding
silence_min_duration_split = 0.4, # Duration in seconds of a silence for an utterance to be split
utterance_min_duration = 1.6, # Duration in seconds below which utterances are discarded
use_gst = True, # Whether to use global style token
use_ser_for_gst = True, # Whether to use speaker embedding referenced for global style token
)
```
I used a batch size of 32; feel free to change it to fit your situation.
Start training:
```
%cd "/content/MockingBird/"
!python synthesizer_train.py train "/content/aidatatang_200zh" -m /content/drive/MyDrive/
```
Note: mount your Google Drive before starting this step, or change the path after -m if you don't want to mount it.
I chose Drive because the training progress is saved there, so the next session can pick up where it left off and keep training.
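If you haven't mounted Drive yet, the standard Colab mount call works in a notebook cell; a quick sketch (the -m path above assumes the default /content/drive/MyDrive mount point):
```python
from google.colab import drive

# Mounts your Google Drive at /content/drive; an authorization prompt will appear.
drive.mount('/content/drive')
```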
Then it's happy free-tier training time.
Paying users can run !nvidia-smi to check the GPU info; the free tier always gives a Tesla T4 with 16 GB of VRAM.
In my run, the attention curve started to appear at around 9k steps, with a loss of 0.45.
Warning: on the free tier, Colab will automatically disconnect if you leave the computer idle for too long.
When you reopen the environment, it is reset to its initial state.
This is where saving to Drive pays off: you don't need to worry about the model being deleted by a reset.
This is my first write-up, so please forgive any rough edges.
I hope this tutorial helps you.
|
open
|
2022-10-10T08:52:38Z
|
2023-03-31T12:02:28Z
|
https://github.com/babysor/MockingBird/issues/764
|
[] |
HexBanana
| 10
|
pyppeteer/pyppeteer
|
automation
| 254
|
Installation in docker fails
|
I am using Ubuntu bionic container (18.04)
```
Step 19/29 : RUN python3 -m venv .
---> Using cache
---> 921d12b1ff09
Step 20/29 : RUN python3 -m pip install setuptools
---> Using cache
---> 112749c3d9f4
Step 21/29 : RUN python3 -m pip install -U git+https://github.com/pyppeteer/pyppeteer@dev
---> Running in d9ff781c3217
Collecting git+https://github.com/pyppeteer/pyppeteer@dev
Cloning https://github.com/pyppeteer/pyppeteer (to dev) to /tmp/pip-d7qe1_0r-build
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.6/tokenize.py", line 452, in open
buffer = _builtin_open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-d7qe1_0r-build/setup.py'
```
|
closed
|
2021-05-11T06:49:12Z
|
2021-05-11T08:03:34Z
|
https://github.com/pyppeteer/pyppeteer/issues/254
|
[] |
larytet
| 1
|
polarsource/polar
|
fastapi
| 5,156
|
API Reference docs: validate/activate/deactivate license keys don't need a token
|
### Discussed in https://github.com/orgs/polarsource/discussions/5155
The overlay matches every endpoint: https://github.com/polarsource/polar/blob/37646b3cab3219369bd48b3f105300893515ad9c/sdk/overlays/security.yml#L6-L10
We should exclude those endpoints.
|
closed
|
2025-03-04T16:01:26Z
|
2025-03-04T16:42:42Z
|
https://github.com/polarsource/polar/issues/5156
|
[
"docs"
] |
frankie567
| 0
|
tox-dev/tox
|
automation
| 3,249
|
tox seems to ignore part of tox.ini file
|
## Issue
This PR of mine is failing: https://github.com/open-telemetry/opentelemetry-python/pull/3746
The issue seems to be that `tox` ignores part of the `tox.ini` file, from [this](https://github.com/open-telemetry/opentelemetry-python/pull/3746/files#diff-ef2cef9f88b4fe09ca3082140e67f5ad34fb65fb6e228f119d3812261ae51449R128) line onwards (including that line). When I run `tox -rvvve py38-proto4-opentelemetry-exporter-otlp-proto-grpc` tests fail because certain test requirements that would have been installed if that line was executed are not installed (of course, because that line was not executed). Other tests fail (the PR has many jobs failing besides `py38-proto4-opentelemetry-exporter-otlp-proto-grpc`) as well because they need lines in the `tox.ini` file that are below the line mentioned before. When I run `tox -rvvve py38-proto3-opentelemetry-exporter-otlp-proto-grpc` everything seems to work fine.
Here is the output of both commands:
[`tox -rvvve py38-proto3-opentelemetry-exporter-otlp-proto-grpc`](https://gist.github.com/ocelotl/2690254b7fc2aa0b04af054e04248949#file-tox-rvvve-py38-proto3-opentelemetry-exporter-otlp-proto-grpc-txt)
[`tox -rvvve py38-proto4-opentelemetry-exporter-otlp-proto-grpc`](https://gist.github.com/ocelotl/2690254b7fc2aa0b04af054e04248949#file-tox-rvvve-py38-proto4-opentelemetry-exporter-otlp-proto-grpc-txt)
The issue seems to be that `tox` is not executing [this](https://github.com/open-telemetry/opentelemetry-python/pull/3746/files#diff-ef2cef9f88b4fe09ca3082140e67f5ad34fb65fb6e228f119d3812261ae51449R128) line nor anything below that line.
For example, when running `tox -rvvve py38-proto3-opentelemetry-exporter-otlp-proto-grpc`, `pip install -r ... test-requirements-0.txt` gets called (line [226](https://gist.github.com/ocelotl/2690254b7fc2aa0b04af054e04248949#file-tox-rvvve-py38-proto3-opentelemetry-exporter-otlp-proto-grpc-txt-L226)):
```
py38-proto3-opentelemetry-exporter-otlp-proto-grpc: 15305 I exit 0 (9.53 seconds) /home/tigre/github/ocelotl/opentelemetry-python> pip install /home/tigre/github/ocelotl/opentelemetry-python/opentelemetry-api /home/tigre/github/ocelotl/opentelemetry-python/opentelemetry-semantic-conventions /home/tigre/github/ocelotl/opentelemetry-python/opentelemetry-sdk /home/tigre/github/ocelotl/opentelemetry-python/tests/opentelemetry-test-utils pid=433928 [tox/execute/api.py:280]
py38-proto3-opentelemetry-exporter-otlp-proto-grpc: 15306 W commands_pre[2]> pip install -r /home/tigre/github/ocelotl/opentelemetry-python/exporter/opentelemetry-exporter-otlp-proto-grpc/test-requirements-0.txt [tox/tox_env/api.py:425]
```
But when running `tox -rvvve py38-proto4-opentelemetry-exporter-otlp-proto-grpc` it doesn't; it jumps straight to calling `pytest` (line [226](https://gist.github.com/ocelotl/2690254b7fc2aa0b04af054e04248949#file-tox-rvvve-py38-proto4-opentelemetry-exporter-otlp-proto-grpc-txt-L226)):
```
py38-proto4-opentelemetry-exporter-otlp-proto-grpc: 16148 I exit 0 (10.33 seconds) /home/tigre/github/ocelotl/opentelemetry-python> pip install /home/tigre/github/ocelotl/opentelemetry-python/opentelemetry-api /home/tigre/github/ocelotl/opentelemetry-python/opentelemetry-semantic-conventions /home/tigre/github/ocelotl/opentelemetry-python/opentelemetry-sdk /home/tigre/github/ocelotl/opentelemetry-python/tests/opentelemetry-test-utils pid=436384 [tox/execute/api.py:280]
py38-proto4-opentelemetry-exporter-otlp-proto-grpc: 16149 W commands[0]> pytest /home/tigre/github/ocelotl/opentelemetry-python/exporter/opentelemetry-exporter-otlp-proto-grpc/tests [tox/tox_env/api.py:425]
```
## Environment
Provide at least:
- OS:
`lsb_release -a`
```
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
```
`tox --version`
```
4.14.1 from /home/tigre/.pyenv/versions/3.11.2/lib/python3.11/site-packages/tox/__init__.py
```
`pip list`
```
Package Version Editable project location
---------------------------------- ---------- -----------------------------------------------------------------
argcomplete 3.0.8
asttokens 2.2.1
attrs 23.1.0
autopep8 2.0.3
backcall 0.2.0
backoff 2.2.1
beartype 0.16.4
behave 1.2.6
black 23.3.0
bleach 6.0.0
boolean.py 4.0
build 0.10.0
cachetools 5.3.2
certifi 2022.12.7
chardet 5.2.0
charset-normalizer 3.1.0
click 8.1.3
cmake 3.26.3
colorama 0.4.6
colorlog 6.7.0
commonmark 0.9.1
cyclonedx-bom 3.11.7
cyclonedx-python-lib 3.1.5
decorator 5.1.1
distlib 0.3.8
docutils 0.20.1
elementpath 4.1.5
executing 1.2.0
fancycompleter 0.9.1
filelock 3.13.1
flake8 6.0.0
fsspec 2023.4.0
gensim 4.3.1
grpcio 1.59.0
grpcio-tools 1.59.0
huggingface-hub 0.14.1
idna 3.4
importlib-metadata 6.0.1
iniconfig 2.0.0
ipdb 0.13.11
ipython 8.10.0
isodate 0.6.1
isort 5.12.0
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsonschema 4.20.0
jsonschema-specifications 2023.11.1
license-expression 30.1.1
lit 16.0.2
Markdown 3.5.1
marko 1.3.0
MarkupSafe 2.1.2
matplotlib-inline 0.1.6
mccabe 0.7.0
mistletoe 1.3.0
mpmath 1.3.0
mypy-extensions 1.0.0
networkx 3.1
nltk 3.8.1
nox 2023.4.22
numpy 1.24.3
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
openspecification 0.0.1
opentelemetry-api 0.0.0 /home/tigre/github/ocelotl/opentelemetry-python/opentelemetry-api
opentelemetry-sdk 0.0.0
opentelemetry-semantic-conventions 0.0.0
packageurl-python 0.11.2
packaging 23.2
pandas 2.0.1
parse 1.19.1
parse-type 0.6.2
parso 0.8.3
pathspec 0.11.1
pdbpp 0.10.3
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 24.0
pip-requirements-parser 32.0.1
platformdirs 4.2.0
pluggy 1.4.0
ply 3.11
prompt-toolkit 3.0.36
protobuf 4.24.3
psycopg2-binary 2.9.9
ptyprocess 0.7.0
pure-eval 0.2.2
py 1.11.0
pycodestyle 2.10.0
pyflakes 3.0.1
Pygments 2.14.0
pyparsing 3.1.1
pyproject-api 1.6.1
pyproject_hooks 1.0.0
pyrepl 0.9.0
pytest 7.3.1
python-dateutil 2.8.2
python-frontmatter 1.1.0
pytz 2023.3
PyYAML 5.1
rdflib 7.0.0
readme-renderer 36.0
referencing 0.31.0
regex 2023.5.4
requests 2.29.0
restview 3.0.1
rpds-py 0.13.1
sacremoses 0.0.53
sbom 2023.10.7
scikit-learn 1.3.0
scipy 1.10.1
semantic-version 2.10.0
sentence-transformers 2.2.2
sentencepiece 0.1.99
setuptools 65.5.0
six 1.16.0
smart-open 6.3.0
sortedcontainers 2.4.0
spdx-tools 0.8.2
specification_parser 0.0.1
stack-data 0.6.2
sympy 1.11.1
threadpoolctl 3.1.0
tokenizers 0.13.3
toml 0.10.2
torch 2.0.0
torchvision 0.15.1
tox 4.14.1
tqdm 4.65.0
traitlets 5.9.0
transformers 4.28.1
triton 2.0.0
typer 0.9.0
typing_extensions 4.5.0
tzdata 2023.3
uritools 4.0.2
urllib3 1.26.15
virtualenv 20.25.0
wcwidth 0.2.6
webencodings 0.5.1
Werkzeug 0.16.1
wheel 0.40.0
wmctrl 0.5
wrapt 1.15.0
xmlschema 2.5.0
xmltodict 0.13.0
zipp 3.15.0
```
## Minimal example
Check out https://github.com/open-telemetry/opentelemetry-python/pull/3746.
Run `tox -rvvve py38-proto3-opentelemetry-exporter-otlp-proto-grpc` (it should work)
Run `tox -rvvve py38-proto4-opentelemetry-exporter-otlp-proto-grpc` (it should fail)
Sorry if this ends up being a dumb mistake on my part, but I just can't figure out why this is happening.
To make things even more confusing, I have a very similar PR where everything worked perfectly: https://github.com/open-telemetry/opentelemetry-python/pull/3742
|
closed
|
2024-03-21T01:12:26Z
|
2024-04-12T20:43:57Z
|
https://github.com/tox-dev/tox/issues/3249
|
[] |
ocelotl
| 2
|
strawberry-graphql/strawberry
|
asyncio
| 3,613
|
Hook for new results in subscription (and also on defer/stream)
|
From #3554

|
open
|
2024-09-02T17:57:36Z
|
2025-03-20T15:56:51Z
|
https://github.com/strawberry-graphql/strawberry/issues/3613
|
[
"feature-request"
] |
patrick91
| 0
|
qwj/python-proxy
|
asyncio
| 112
|
It works from command line but not from within a python script
|
Hello, thanks for pproxy, it's very useful. One problem I am facing is that the following command works well (I can browse the internet normally)...
```
C:\tools>pproxy -l http://127.0.0.1:12345 -r socks5://127.0.0.1:9050
Serving on 127.0.0.1:12345 by http
```
...but the following script, with the same schemes and settings, doesn't work (I get the error at the end of this issue):
```
import asyncio
import pproxy

server = pproxy.Server('http://127.0.0.1:12345')
remote = pproxy.Connection('socks5://127.0.0.1:9050')
args = dict( rserver = [remote],
             verbose = print )

loop = asyncio.get_event_loop()
handler = loop.run_until_complete(server.start_server(args))
try:
    loop.run_forever()
except KeyboardInterrupt:
    print('exit!')

handler.close()
loop.run_until_complete(handler.wait_closed())
loop.run_until_complete(loop.shutdown_asyncgens())
loop.close()
```
The error I got:
`http_accept() missing 1 required positional argument: 'httpget' from 127.0.0.1`
|
closed
|
2021-02-22T20:05:16Z
|
2021-02-23T14:42:44Z
|
https://github.com/qwj/python-proxy/issues/112
|
[] |
analyserdmz
| 2
|
aleju/imgaug
|
deep-learning
| 186
|
Using matplotlib to show images is not convenient on some Linux platforms
|
When running pip install, some errors appear, such as:
Collecting matplotlib>=2.0.0 (from scikit-image>=0.11.0->imgaug==0.2.6)
Downloading http://10.123.98.50/pypi/web/packages/ec/06/def4fb2620cbe671ba0cb6462cbd8653fbffa4acd87d6d572659e7c71c13/matplotlib-3.0.0.tar.gz (36.3MB)
100% |################################| 36.3MB 101.1MB/s
Complete output from command python setup.py egg_info:
Matplotlib 3.0+ does not support Python 2.x, 3.0, 3.1, 3.2, 3.3, or 3.4.
Beginning with Matplotlib 3.0, Python 3.5 and above is required.
This may be due to an out of date pip.
Make sure you have pip >= 9.0.1.
|
open
|
2018-09-26T09:19:44Z
|
2018-10-02T19:27:56Z
|
https://github.com/aleju/imgaug/issues/186
|
[] |
sanren99999
| 1
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 615
|
TypeError: %d format: a number is required
|
Here is my example:
```python
from skopt import Optimizer
from skopt.utils import dimensions_aslist
from skopt.space import Integer, Categorical, Real
NN = {
'activation': Categorical(['identity', 'logistic', 'tanh', 'relu']),
'solver': Categorical(['adam', 'sgd', 'lbfgs']),
'learning_rate': Categorical(['constant', 'invscaling', 'adaptive']),
'hidden_layer_sizes': Categorical([(100,100)])
}
listified_space = dimensions_aslist(NN)
acq_optimizer_kwargs = {'n_points': 20, 'n_restarts_optimizer': 5, 'n_jobs': 3}
acq_func_kwargs = {'xi': 0.01, 'kappa': 1.96}
optimizer = Optimizer(listified_space, base_estimator='gp', n_initial_points=10,
                      acq_func='EI', acq_optimizer='auto', random_state=None,
                      acq_optimizer_kwargs=acq_optimizer_kwargs, acq_func_kwargs=acq_func_kwargs)
rand_xs = []
for n in range(10):
    rand_xs.append(optimizer.ask())
rand_ys = [1,2,3,4,5,6,7,8,9,10]
print(rand_xs)
print(rand_ys)
optimizer.tell(rand_xs, rand_ys)
```
Running with `acq_optimizer='lbfgs'` I was seeing `ValueError: The regressor <class 'skopt.learning.gaussian_process.gpr.GaussianProcessRegressor'> should run with acq_optimizer='sampling'.` But by tracing the Optimizer's `_check_arguments()` code I was able to figure out that my base_estimator must simply not have gradients in this Categoricals-only case. Changing to `acq_optimizer='auto'` solves that problem.
But now I see a new `TypeError: %d format: a number is required` thrown deep inside skopt/learning/gaussian_process/kernels.py. *If I print the transformed space before it gets passed to the Gaussian process, I see that it isn't really transformed at all: It still has strings in it!*
Adding a numerical dimension like `alpha: Real(0.0001, 0.001, prior='log-uniform')` causes the construction and the `.tell()` to succeed because the transformed space is then purely numerical.
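A minimal sketch of that workaround, reusing the `NN` dict from the example above (the `alpha` bounds are just the ones mentioned in the previous sentence):
```python
from skopt.space import Real
from skopt.utils import dimensions_aslist

# Adding one numerical dimension makes the transformed space purely numerical,
# so the GP-based Optimizer construction and .tell() succeed.
NN_with_dummy = dict(NN)
NN_with_dummy['alpha'] = Real(0.0001, 0.001, prior='log-uniform')
listified_space = dimensions_aslist(NN_with_dummy)
```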
*So the way purely-categorical spaces are transformed should be updated.*
Or there is an other possibility: Does it even make sense to try to do Bayesian optimization on a purely categorical space like this? Say I try setting (A,B), setting (A,C), setting (X,Z), and setting (Y,Z). For the sake of argument say (A,B) does better than (A,C) and (X,Z) does better than (Y,Z). Can we then suppose (X,B) will do better than (Y,C)? Who is to say (Y,C) isn't a super-combination or that the gains we seem to see from varying the first parameter to X or the second to B are unrelated? It seems potentially dangerous to reason this way, so perhaps the intent is that no Bayesian optimization should be possible in purely Categorical spaces. If this is the case, an error should be thrown early to say this. Furthermore, if this is correct, then how are point-values in Categorical dimensions decided during optimization? Are the numerical parameters optimized while Categoricals are selected at random?
It seems the previous paragraph should be wrong: If I try many examples with some parameter set to some value and observe a pattern of poor performance, I can update my beliefs to say "This is a bad setting". It shouldn't matter whether a parameter is Categorical or not; Bayesian optimization should be equally powerful in all cases.
|
closed
|
2018-01-25T14:28:05Z
|
2018-04-19T12:09:11Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/615
|
[] |
pavelkomarov
| 1
|
ufoym/deepo
|
tensorflow
| 120
|
Building wheel for torchvision (setup.py): finished with status 'error'
|
`$ docker build -f Dockerfile.pytorch-py36-cpu .`
Collecting git+https://github.com/pytorch/vision.git
Cloning https://github.com/pytorch/vision.git to /tmp/pip-req-build-8_0m83s6
Running command git clone -q https://github.com/pytorch/vision.git /tmp/pip-req-build-8_0m83s6
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision==0.5.0a0+7c9bbf5) (1.17.1)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from torchvision==0.5.0a0+7c9bbf5) (1.12.0)
Requirement already satisfied, skipping upgrade: torch in /usr/local/lib/python3.6/dist-packages (from torchvision==0.5.0a0+7c9bbf5) (1.2.0)
Collecting pillow>=4.1.1 (from torchvision==0.5.0a0+7c9bbf5)
Downloading https://files.pythonhosted.org/packages/14/41/db6dec65ddbc176a59b89485e8cc136a433ed9c6397b6bfe2cd38412051e/Pillow-6.1.0-cp36-cp36m-manylinux1_x86_64.whl (2.1MB)
Building wheels for collected packages: torchvision
Building wheel for torchvision (setup.py): started
Building wheel for torchvision (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-8_0m83s6/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-8_0m83s6/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-yob68akv --python-tag cp36
cwd: /tmp/pip-req-build-8_0m83s6/
Complete output (513 lines):
Building wheel torchvision-0.5.0a0+7c9bbf5
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/torchvision
copying torchvision/utils.py -> build/lib.linux-x86_64-3.6/torchvision
copying torchvision/extension.py -> build/lib.linux-x86_64-3.6/torchvision
...
Thank you.
|
closed
|
2019-09-02T08:47:29Z
|
2019-11-19T11:24:47Z
|
https://github.com/ufoym/deepo/issues/120
|
[] |
oiotoxt
| 1
|
PokeAPI/pokeapi
|
graphql
| 536
|
Normalized entities in JSON data?
|
I asked about this in the Slack channel but didn't get any responses. I looked through the documentation but couldn't find any mention of this; is it possible to ask an endpoint for normalized entity data? Currently the entity relationships are eagerly fetched, so, for example, a berry has a list of flavors, and the flavor entities are included in the json payload for the berry entity.
```jsonc
{
"id": 1,
"name": "cheri",
/*... snip ... */
"flavors": [
{
"potency": 10,
"flavor": {
"name": "spicy",
"url": "https://pokeapi.co/api/v2/berry-flavor/1/"
}
}
],
}
```
I would like to be able to send a query parameter or header that asks the endpoint to return references to relations instead of nesting them:
```jsonc
{
"id": 1,
"name": "cheri",
/*... snip ... */
"flavors": [
"https://pokeapi.co/api/v2/berry-flavor/1/",
],
}
```
Of course this requires more API requests to fetch the data the client requires, and I know PokeAPI relies heavily on caching. However, it's easier to parse if you are using a stateful client UI (since the client state should already be normalized), and it uses less bandwidth. If the service is using HTTP2, the latency issue from multiple requests is mostly mitigated. Is this something that is supported, or that could possibly be supported in the future? I'd like to use PokeAPI in examples when I'm coaching, but normalizing json payloads adds complexity that I'd rather not focus on (payload normalization is its own topic of study).
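As a stopgap on the client side, here is a rough sketch of the kind of normalization described above; the function name is illustrative, and it simply assumes any nested object carrying both `name` and `url` can be replaced by its URL:
```python
def normalize(value):
    """Recursively replace nested {'name': ..., 'url': ...} resources with their URL."""
    if isinstance(value, dict):
        if "url" in value and "name" in value:
            return value["url"]
        return {key: normalize(inner) for key, inner in value.items()}
    if isinstance(value, list):
        return [normalize(item) for item in value]
    return value
```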
|
closed
|
2020-10-25T16:08:23Z
|
2020-12-27T18:27:14Z
|
https://github.com/PokeAPI/pokeapi/issues/536
|
[] |
parkerault
| 2
|
matterport/Mask_RCNN
|
tensorflow
| 2,392
|
Create environment
|
Could anyone help me create the environment in Anaconda? I've tried it in different ways and it keeps showing errors. I'm trying to create an environment using Python 3.4, TensorFlow 1.15.3, and Keras 2.2.4.
|
open
|
2020-10-16T11:10:13Z
|
2022-03-08T14:59:50Z
|
https://github.com/matterport/Mask_RCNN/issues/2392
|
[] |
teixeirafabiano
| 3
|
cupy/cupy
|
numpy
| 8,269
|
CuPy v14 Release Plan
|
## Roadmap
```[tasklist]
## Planned Features
- [ ] #8306
- [ ] `complex32` support for limited APIs (restart [#4454](https://github.com/cupy/cupy/pull/4454))
- [ ] Support bfloat16
- [ ] Structured Data Type
- [ ] #8013
- [ ] #6986
- [ ] Drop support for Python 3.9 following [SPEC 0](https://scientific-python.org/specs/spec-0000/)
- [ ] https://github.com/cupy/cupy/issues/8215
```
```[tasklist]
## Timeline & Release Manager
- [ ] **v14.0.0a1** (Mar 2025)
- [ ] v14.0.0a2 (TBD)
- [ ] **v14.0.0b1** (TBD)
- [ ] **v14.0.0rc1** (TBD)
- [ ] **v14.0.0** (TBD)
```
## Notes
* Starting in the CuPy v13 development cycle, we have adjusted our release frequency to once every two months. Mid-term or hot-fix releases may be provided depending on necessity, such as for new CUDA/Python version support or critical bug fixes.
* The schedule and planned features may be subject to change depending on the progress of the development.
* `v13.x` releases will only contain backported pull-requests (mainly bug-fixes) from the v14 (`main`) branch.
## Past Release Plans
* v9: https://github.com/cupy/cupy/issues/3891
* v10: https://github.com/cupy/cupy/issues/5049
* v11: https://github.com/cupy/cupy/issues/6246
* v12: https://github.com/cupy/cupy/issues/6866
* v13: https://github.com/cupy/cupy/issues/7555
|
open
|
2024-04-03T02:51:19Z
|
2025-02-13T08:02:10Z
|
https://github.com/cupy/cupy/issues/8269
|
[
"issue-checked"
] |
asi1024
| 1
|
matplotlib/matplotlib
|
data-visualization
| 28,931
|
[Bug]: plt.savefig incorrectly discarded z-axis label when saving Line3D/Bar3D/Surf3D images using bbox_inches='tight'
|
### Bug summary
I drew a 3D bar graph in python3 using ax.bar3d and then saved the image to pdf under bbox_inches='tight' using plt.savefig, but the Z-axis label was missing in the pdf. Note that everything worked fine after switching matplotlib to version 3.5.0 without changing my python3 code.
### Code for reproduction
```Python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import MultipleLocator
def plot_3D_bar(x_data, data, save_path, x_label, y_label, z_label,
var_orient='horizon'):
if var_orient != 'vertical' and var_orient != 'horizon':
print('plot_figure var_orient error!')
exit(1)
if var_orient == 'vertical':
data = list(map(list, zip(*data)))
x_number = len(data[0])
y_number = len(data)
if len(x_data) != x_number:
exit(1)
x_data_label = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
if len(x_data_label) % x_number != 0:
exit(1)
if y_number == 1:
label = ['label1']
elif y_number == 2:
label = ['label1', 'label2']
elif y_number == 3:
label = ['l1', 'l2', 'l3']
else:
label = []
if y_number == 1:
color = ['green']
elif y_number == 2:
color = ['green', 'red']
elif y_number == 3:
color = ['green', 'red', 'blue']
else:
color = []
if y_number == 1:
hatch = ['x']
elif y_number == 2:
hatch = ['x', '.']
elif y_number == 3:
hatch = ['x', '.', '|']
else:
hatch = []
if y_number != len(label) or y_number != len(color) or \
y_number != len(hatch):
exit(1)
# plt.title('title')
plt.rc('text', usetex=False)
plt.rc('font', family='Times New Roman', size=15)
font1 = {'family': 'Times New Roman', 'weight': 'normal', 'size': 15}
bar_width_x = 0.5
dx = [bar_width_x for _ in range(x_number)]
bar_width_y = 0.2
dy = [bar_width_y for _ in range(x_number)]
ax = plt.subplot(projection='3d')
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_minor_locator(MultipleLocator(0.5))
ax.zaxis.set_major_locator(MultipleLocator(5))
ax.tick_params(labelrotation=0)
ax.set_xticks(x_data)
ax.set_xticklabels(x_data_label, rotation=0)
ax.set_yticks(np.arange(y_number)+1)
ax.set_yticklabels(label, rotation=0)
ax.set_xlabel(x_label, fontweight='bold', size=15)
# ax.set_xlim([0, 8])
ax.set_ylabel(y_label, fontweight='bold', size=15)
# ax.set_ylim([0, 4])
ax.set_zlabel(z_label, fontweight='bold', size=15)
ax.set_zlim([0, 25])
min_x = [0 for _ in range(y_number)]
min_z = [0 for _ in range(y_number)]
for _ in range(y_number):
min_z[_] = np.min(data[_])
for __ in range(x_number):
if data[_][__] == min_z[_]:
min_x[_] = x_data[__]
min_z[_] = round(min_z[_], 2)
y_list = [_+1-1/2*bar_width_y for __ in range(x_number)]
z_list = [0 for __ in range(x_number)]
ax.bar3d(x_data, y_list, z_list,
dx=dx, dy=dy, dz=data[_], label=label[_],
color=color[_], edgecolor='k', hatch=hatch[_])
ax.text(x=min_x[_], y=_+1, z=min_z[_]+3, s='%.2f' % min_z[_],
horizontalalignment='center', verticalalignment='top',
backgroundcolor='white', zorder=5,
fontsize=10, color='k', fontweight='bold')
plt.grid(axis='both', color='gray', linestyle='-')
fig = plt.gcf()
fig.set_size_inches(4, 4)
if save_path is not None:
plt.savefig(save_path, bbox_inches='tight')
plt.show()
x_data = [1, 2, 3, 4, 5, 6, 7]
y_data = [[3, 13, 1, 7, 9, 10, 2],
[6, 3, 9, 4, 9, 20, 9],
[7, 9, 5, 8, 5, 3, 9]]
plot_3D_bar(x_data, y_data, 'figure.pdf', '\nxlabel',
'\nylabel', '\nzlabel', 'horizon')
```
### Actual outcome
There is no Z-axis label in the saved figure.pdf
[figure.pdf](https://github.com/user-attachments/files/17243046/figure.pdf)
### Expected outcome
Saved figure.pdf should contain the z-axis label
[figure.pdf](https://github.com/user-attachments/files/17243055/figure.pdf)
### Additional information
_No response_
### Operating system
Ubuntu 20.04
### Matplotlib Version
3.9.2
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
pip
|
closed
|
2024-10-03T11:05:30Z
|
2024-10-03T14:17:50Z
|
https://github.com/matplotlib/matplotlib/issues/28931
|
[
"status: duplicate"
] |
NJU-ZAD
| 5
|
JoeanAmier/TikTokDownloader
|
api
| 383
|
unhandled exception: using the packaged exe directly, running option 6,1,1 can fetch the number of videos but cannot download them
|
The error is shown below. I'm using the packaged version. Is this a settings configuration problem? Could you please take a look? Thanks!
TikTokDownloader V5.5
Traceback (most recent call last):
File "main.py", line 19, in <module>
File "asyncio\runners.py", line 194, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 686, in run_until_complete
File "main.py", line 10, in main
File "src\application\TikTokDownloader.py", line 355, in run
File "src\application\TikTokDownloader.py", line 244, in main_menu
File "src\application\TikTokDownloader.py", line 324, in compatible
File "src\application\TikTokDownloader.py", line 251, in complete
File "src\application\main_complete.py", line 1709, in run
File "src\application\main_complete.py", line 232, in account_acquisition_interactive
File "src\application\main_complete.py", line 260, in __secondary_menu
File "src\application\main_complete.py", line 263, in account_detail_batch
File "src\application\main_complete.py", line 299, in __account_detail_batch
File "src\application\main_complete.py", line 445, in deal_account_detail
File "src\application\main_complete.py", line 571, in _batch_process_detail
TypeError: cannot unpack non-iterable NoneType object
[PYI-6840:ERROR] Failed to execute script 'main' due to unhandled exception!
|
closed
|
2025-01-22T03:25:34Z
|
2025-01-22T07:18:57Z
|
https://github.com/JoeanAmier/TikTokDownloader/issues/383
|
[] |
WaymonHe
| 2
|
gee-community/geemap
|
streamlit
| 1,180
|
Scrolling for layer panel
|
Hi,
Is there a way to scroll down the layer panel when we have a lot of layers?
Thanks,
Daniel

|
closed
|
2022-08-07T04:54:23Z
|
2022-08-08T01:37:02Z
|
https://github.com/gee-community/geemap/issues/1180
|
[
"Feature Request"
] |
Daniel-Trung-Nguyen
| 1
|
coqui-ai/TTS
|
python
| 4,162
|
[Bug] Python 3.12.0
|
### Describe the bug
Hi,
My Python version isn't supported by TTS: 3.12.0 on Cygwin and 3.12 on Windows 10.
Also, TTS isn't found when I try to install it from pip: `pip install TTS`
### To Reproduce
tried installing from pip or manually
### Expected behavior
_No response_
### Logs
```shell
```
### Environment
```shell
python 3.12 & 3.12.0
```
### Additional context
_No response_
|
open
|
2025-02-27T18:45:18Z
|
2025-03-06T16:03:13Z
|
https://github.com/coqui-ai/TTS/issues/4162
|
[
"bug"
] |
TDClarke
| 4
|
xlwings/xlwings
|
automation
| 2,150
|
Unable to embed code using version 0.28.9
|
#### OS Windows 10
#### Versions of xlwings 0.28.9, Excel LTSC Professional Plus 2021 version 2108 and Python 3.8.6
#### I was able to embed my py code using the same setup with xlwings version 0.28.8, but it's not working with the latest version. I used the noncommercial license of xlwings Pro with both versions.
### How can I downgrade to version 0.28.8?
#### The files and screenshot are attached.
### Only the following commands were used. The embedded code was working with xlwings version 0.28.8, but not working anymore with version 0.28.9
```python
# pip install xlwings
# xlwings license update -k noncommercial
# xlwings quickstart myproject --addin --ribbon
# xlwings code embed
```
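To answer the downgrade question above: pinning the previous release with pip should work, shown here in the same comment style as the commands above:
```python
# pip install "xlwings==0.28.8"
# xlwings license update -k noncommercial
```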
[myproject.zip](https://github.com/xlwings/xlwings/files/10485543/myproject.zip)

|
closed
|
2023-01-22T19:36:29Z
|
2023-01-25T00:54:53Z
|
https://github.com/xlwings/xlwings/issues/2150
|
[] |
MansuraKhanom
| 4
|
encode/uvicorn
|
asyncio
| 1,228
|
Support Custom HTTP implementations protocols
|
### Checklist
- [x] There are no similar issues or pull requests for this yet.
- [ ] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.
### Is your feature related to a problem? Please describe.
I want to be able to use custom http protocol implementation by providing a module path to it. Eg. `mypackage.http.MyHttpProtocol`
## Describe the solution you would like.
Uvicorn can import the path and use it as the HTTP implementation. Specifically, I want to be able to use a fully Cython-based HTTP implementation using httptools for efficiency and speed.
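For illustration, loading a class from a dotted path is straightforward with importlib; this is a generic sketch of the idea, not uvicorn's actual loading code, and the example path is hypothetical:
```python
import importlib

def import_from_string(dotted_path: str):
    """Resolve a 'package.module.ClassName' string to the named attribute."""
    module_path, _, attr_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr_name)

# Hypothetical custom protocol class; any importable dotted path would work.
# protocol_cls = import_from_string("mypackage.http.MyHttpProtocol")
```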
|
closed
|
2021-10-24T06:18:35Z
|
2021-10-28T01:24:38Z
|
https://github.com/encode/uvicorn/issues/1228
|
[] |
kumaraditya303
| 1
|
ranaroussi/yfinance
|
pandas
| 2,200
|
yfinance - Unable to retrieve Ticker earnings_dates
|
### Describe bug
Unable to retrieve Ticker earnings_dates
E.g. for NVDA, using Python, yf returns the message: $NVDA: possibly delisted; no earnings dates found
Ditto for AMD and apparently all other valid ticker symbols: $AMD: possibly delisted; no earnings dates found
The list of methods available for a yf Ticker includes both earnings_dates and a likely newer entry called get_earnings_dates,
but neither one currently returns 1) a history of quarterly earnings report dates nor 2) a list of future quarterly earnings report dates.
The earnings_dates method has worked very well in the past at retrieving both the earnings dates and the approximate release times, like 8 AM, after 4 PM, etc.
### Simple code that reproduces your problem
```python
yf_tkr_obj = yf.Ticker("NVDA")
df_earnings_dates = yf_tkr_obj.earnings_dates
```
None seems to get returned.
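For reference, the explicit getter also accepts a limit argument; a minimal sketch (it currently appears to hit the same problem described above):
```python
import yfinance as yf

tkr = yf.Ticker("NVDA")
# Returns a DataFrame of recent/upcoming earnings dates when data is available.
df = tkr.get_earnings_dates(limit=12)
print(df)
```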
### Debug log
Unable to supply this, diff PC
### Bad data proof
_No response_
### `yfinance` version
yfinance 0.2.51
### Python version
3.7
### Operating system
Debian Buster
|
closed
|
2025-01-02T19:58:24Z
|
2025-01-02T23:03:06Z
|
https://github.com/ranaroussi/yfinance/issues/2200
|
[] |
SymbReprUnlim
| 1
|
python-restx/flask-restx
|
flask
| 272
|
Remove default HTTP 200 response code in doc
|
**Ask a question**
Hi, I was wondering if it's possible to remove the default "200 - Success" response in the swagger.json?
My model:
```
user_model = api.model(
    "User",
    {
        "user_id": fields.String,
        "project_id": fields.Integer(default=1),
        "reward_points": fields.Integer(default=0),
        "rank": fields.String,
    },
)
```
The route:
```
@api.route("/")
class UserRouteList(Resource):
    @api.doc(model=user_model, body=user_model)
    @api.response(201, "User created", user_model)
    @api.marshal_with(user_model, code=201)
    def post(self):
        """Add user"""
        data = api.payload
        return (
            userDB.add(data),
            201
        )
```
**Additional context**
flask-restx version: 0.2.0

|
closed
|
2021-01-04T16:26:19Z
|
2021-02-01T12:29:09Z
|
https://github.com/python-restx/flask-restx/issues/272
|
[
"question"
] |
mateusz-chrzastek
| 2
|
flasgger/flasgger
|
api
| 585
|
A small oversight
|
When I forget to install the apispec package, the page just returns an internal server error.
There are no other hints, so I searched the source code.
In marshmallow_apispec.py:

There is some import code there that, on ImportError, sets Schema = None.
|
open
|
2023-06-27T07:43:36Z
|
2023-06-27T07:45:55Z
|
https://github.com/flasgger/flasgger/issues/585
|
[] |
Spectator133
| 2
|
benlubas/molten-nvim
|
jupyter
| 237
|
[Feature Request] Option to show images only in output buffer
|
Option to show images only in the output buffer and not in the virtual text while still having virtual text enabled. This is because images can be very buggy in the virtual text compared to the buffer and it is nice to have the virtual text enabled for other types of output. This could be its own option where you specify to show images in: "virtual_text", "output_buffer", "virtual_text_and_output_buffer", "native_application". There are many ways this option could be implemented and this is just a suggestion for how the API could look.
|
closed
|
2024-09-12T11:11:37Z
|
2024-10-05T13:59:42Z
|
https://github.com/benlubas/molten-nvim/issues/237
|
[
"enhancement"
] |
michaelbrusegard
| 0
|
keras-team/keras
|
machine-learning
| 21,004
|
Ensured torch import is properly handled
|
Before:
```python
try:
    import torch  # noqa: F401
except ImportError:
    pass
```
After:
```python
try:
    import torch  # noqa: F401
except ImportError:
    torch = None  # Explicitly set torch to None if not installed
```
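A minimal sketch of why the explicit None helps downstream code (this check is illustrative, not Keras code):
```python
# With the "After" variant, `torch` is always a defined name, so feature
# detection becomes a simple None test instead of a NameError hazard.
if torch is not None:
    print("torch backend available:", torch.__version__)
else:
    print("torch not installed; skipping torch-specific paths")
```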
|
open
|
2025-03-07T19:58:17Z
|
2025-03-13T07:09:01Z
|
https://github.com/keras-team/keras/issues/21004
|
[
"type:Bug"
] |
FNICKE
| 1
|
huggingface/pytorch-image-models
|
pytorch
| 1,477
|
[FEATURE] Huge discrepancy between HuggingFace and timm in terms of the initialization of ViT
|
I see a huge discrepancy between HuggingFace and timm in terms of the initialization of ViT. Timm's implementation uses trunc_normal whereas HuggingFace uses "module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)". I noticed this causes a huge drop in performance when training ViT models on ImageNet with the HuggingFace implementation. I'm not sure if it's just the initialization or something more. Is it possible for one of you to check and try to make the HuggingFace implementation consistent with timm's version? Thanks!
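To make the difference concrete, here is a small sketch of the two initializations being compared (the std value is illustrative; check each library's config for the exact number):
```python
import torch
import torch.nn as nn

w_timm = torch.empty(768, 768)
nn.init.trunc_normal_(w_timm, std=0.02)   # timm-style: truncated normal init

w_hf = torch.empty(768, 768)
w_hf.data.normal_(mean=0.0, std=0.02)     # HF-style: plain (untruncated) normal init
```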
|
closed
|
2022-09-28T18:57:06Z
|
2023-02-02T04:45:09Z
|
https://github.com/huggingface/pytorch-image-models/issues/1477
|
[
"enhancement"
] |
Phuoc-Hoan-Le
| 7
|
desec-io/desec-stack
|
rest-api
| 223
|
Send emails asynchronously
|
Currently, requests that trigger emails have to wait until the email has been sent. This is "not nice", and also opens a timing side channel for email enumeration at account registration or password reset.
Let's move to an asynchronous solution, like:
- https://pypi.org/project/django-celery-email/ (full-fledged)
- https://github.com/pinax/django-mailer/ (much simpler)
- (Simple threading does not seem like the best solution: https://code.djangoproject.com/ticket/19214#comment:5)
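For reference, the "simple threading" approach mentioned in the last bullet looks roughly like this sketch; it removes the request-path delay but offers no retries or persistence, which is why the linked ticket advises against it:
```python
import threading
from django.core.mail import send_mail

def send_mail_async(subject, message, from_email, recipient_list):
    # Fire-and-forget: the request no longer waits for SMTP, but failures are silent.
    threading.Thread(
        target=send_mail,
        args=(subject, message, from_email, recipient_list),
        daemon=True,
    ).start()
```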
|
closed
|
2019-07-02T22:27:43Z
|
2019-07-12T01:01:43Z
|
https://github.com/desec-io/desec-stack/issues/223
|
[
"bug",
"api"
] |
peterthomassen
| 1
|
marcomusy/vedo
|
numpy
| 1,035
|
Adding text to screenshot
|
Hi there,
May I know how to adjust the size and text font of the text on a screenshot? Thank you so much
This is my code:
```python
vp = Plotter(axes=0, offscreen=True)
text = str(lab)
vp.add(text)
vp.show(item, interactive=False, at=0)
screenshot(os.path.join(save_path, file_name))
```
|
closed
|
2024-01-25T07:43:40Z
|
2024-01-25T09:16:36Z
|
https://github.com/marcomusy/vedo/issues/1035
|
[] |
priyabiswas12
| 2
|
AirtestProject/Airtest
|
automation
| 1,150
|
A problem that only appeared after about 70 runs; it is very intermittent
|
Traceback (most recent call last):
File "/Users/administrator/jenkinsauto/workspace/autotest-IOS/initmain.py", line 35, in <module>
raise e
File "/Users/administrator/jenkinsauto/workspace/autotest-IOS/initmain.py", line 27, in <module>
t.pushtag_ios()
File "/Users/administrator/jenkinsauto/workspace/autotest-IOS/pubscirpt/tag.py", line 25, in pushtag_ios
touch(Template(r'tag输入.png'))
File "/usr/local/lib/python3.11/site-packages/airtest/utils/logwraper.py", line 124, in wrapper
res = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airtest/core/api.py", line 367, in touch
pos = loop_find(v, timeout=ST.FIND_TIMEOUT)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airtest/utils/logwraper.py", line 124, in wrapper
res = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airtest/core/cv.py", line 62, in loop_find
screen = G.DEVICE.snapshot(filename=None, quality=ST.SNAPSHOT_QUALITY)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airtest/core/ios/ios.py", line 47, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airtest/core/ios/ios.py", line 615, in snapshot
data = self._neo_wda_screenshot()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airtest/core/ios/ios.py", line 601, in _neo_wda_screenshot
raw_value = base64.b64decode(value)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/base64.py", line 88, in b64decode
return binascii.a2b_base64(s, strict_mode=validate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
binascii.Error: Invalid base64-encoded string: number of data characters (2433109) cannot be 1 more than a multiple of 4
|
open
|
2023-07-28T09:15:06Z
|
2023-07-28T09:15:06Z
|
https://github.com/AirtestProject/Airtest/issues/1150
|
[] |
z929151231
| 0
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 322
|
[Feature request] Brief and clear description of the problem
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
closed
|
2024-02-05T21:10:46Z
|
2024-02-07T03:40:58Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/322
|
[
"enhancement"
] |
s6k6s6
| 0
|
ipyflow/ipyflow
|
jupyter
| 40
|
separate modes for prod / dev
|
closed
|
2020-08-06T22:17:30Z
|
2021-03-08T05:45:32Z
|
https://github.com/ipyflow/ipyflow/issues/40
|
[] |
smacke
| 0
|
|
nvbn/thefuck
|
python
| 889
|
how can you get the history for current terminal session
|
Because it has not been written to the history file yet, how can I get it? I'm very curious.
|
open
|
2019-03-03T10:49:49Z
|
2023-10-30T20:00:47Z
|
https://github.com/nvbn/thefuck/issues/889
|
[] |
imroc
| 1
|
donnemartin/system-design-primer
|
python
| 235
|
Anki Error on Launching using Ubuntu 18.10
|
I'm using Ubuntu 18.10 on a Lenovo ThinkPad.
I get this error right after installing via Ubuntu Software, when pressing the Launch button:
_____________________________________________________
Error during startup:
Traceback (most recent call last):
File "/usr/share/anki/aqt/main.py", line 50, in __init__
self.setupUI()
File "/usr/share/anki/aqt/main.py", line 75, in setupUI
self.setupMainWindow()
File "/usr/share/anki/aqt/main.py", line 585, in setupMainWindow
tweb = self.toolbarWeb = aqt.webview.AnkiWebView()
File "/usr/share/anki/aqt/webview.py", line 114, in __init__
self.focusProxy().installEventFilter(self)
AttributeError: 'NoneType' object has no attribute 'installEventFilter'
_____________________________________________________
Anki version: 2.1.0+dfsg-1
Updated: 11/17/2018
The error message won't reappear unless you uninstall and reboot O/S. Any help is appreciated. Thanks!
DJ
|
open
|
2018-11-17T16:17:20Z
|
2019-04-10T00:51:09Z
|
https://github.com/donnemartin/system-design-primer/issues/235
|
[
"help wanted",
"needs-review"
] |
FunMyWay
| 6
|
piskvorky/gensim
|
machine-learning
| 3,154
|
Yes, thanks. If it's really a bug with the FB model, not much we can do about it.
|
Yes, thanks. If it's really a bug with the FB model, not much we can do about it.
_Originally posted by @piskvorky in https://github.com/RaRe-Technologies/gensim/issues/2969#issuecomment-799811459_
|
closed
|
2021-05-19T22:08:58Z
|
2021-05-19T23:05:28Z
|
https://github.com/piskvorky/gensim/issues/3154
|
[] |
Dino1981
| 0
|
streamlit/streamlit
|
data-visualization
| 10,029
|
st.radio label alignment and label_visibility issues in the latest versions
|
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
The st.radio widget has two unexpected behaviors in the latest versions of Streamlit:
The label is centered instead of left-aligned, which is inconsistent with previous versions.
The label_visibility option 'collapsed' does not work as expected, and the label remains visible.
### Reproducible Code Example
```Python
import streamlit as st

# Store the initial value of widgets in session state
if "visibility" not in st.session_state:
    st.session_state.visibility = "visible"
    st.session_state.disabled = False
    st.session_state.horizontal = False

col1, col2 = st.columns(2)

with col1:
    st.checkbox("Disable radio widget", key="disabled")
    st.checkbox("Orient radio options horizontally", key="horizontal")

with col2:
    st.radio(
        "Set label visibility 👇",
        ["visible", "hidden", "collapsed"],
        key="visibility",
        label_visibility=st.session_state.visibility,
        disabled=st.session_state.disabled,
        horizontal=st.session_state.horizontal,
    )
```
### Steps To Reproduce
please play with the options of the above snippet, and inspect label
### Expected Behavior
The label of the st.radio widget should be left-aligned, consistent with previous versions.
When label_visibility='collapsed' is set, the label should not be visible.
### Current Behavior
The label is centered.
Setting label_visibility='collapsed' does not hide the label; it remains visible.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.1
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_
|
closed
|
2024-12-16T12:28:53Z
|
2024-12-16T16:06:20Z
|
https://github.com/streamlit/streamlit/issues/10029
|
[
"type:bug",
"feature:st.radio",
"status:awaiting-team-response"
] |
Panosero
| 4
|
deeppavlov/DeepPavlov
|
tensorflow
| 1,372
|
Go-Bot: migrate to PyTorch (policy)
|
Moved to internal Trello
|
closed
|
2021-01-12T11:19:57Z
|
2021-11-30T10:16:57Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1372
|
[] |
danielkornev
| 0
|
jina-ai/clip-as-service
|
pytorch
| 235
|
Support for FP16 (mixed precision) trained models
|
I trained a small German BERT model with FP16 and TensorFlow, following the NVIDIA suggestion in https://github.com/google-research/bert/pull/255
The suggested changes are hosted in this repository:
https://github.com/thorjohnsen/bert/tree/gpu_optimizations
Training works fine, but bert-as-service doesn't work with that model because of TensorFlow's type checks when loading the graph. The same problem occurs with the fp16 option activated.
Is there a simple way of modifying bert-as-service or converting the model so I can use the embeddings with bert-as-service?
Thank you!
|
closed
|
2019-02-13T17:20:31Z
|
2019-03-14T03:44:38Z
|
https://github.com/jina-ai/clip-as-service/issues/235
|
[] |
miweru
| 1
|
ultralytics/ultralytics
|
computer-vision
| 18,799
|
Exporting YOLO to TensorRT format may not be supported on JetPack 6.2
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello! I'm finding that when I convert the PyTorch format to TensorRT format, the program reports an error:
The error below suggests that TensorRT cannot be supported; the TensorRT version on the current Jetson Orin Nano Super is 10.3, and it cannot be downgraded.
requirements: Ultralytics requirement ['tensorrt>7.0.0,!=10.1.0'] not found, attempting AutoUpdate...
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-zhfqxl_t/tensorrt-cu12_7e93d472a67644448c0bd7e73bd68a66/setup.py", line 71, in <module>
raise RuntimeError("TensorRT does not currently build wheels for Tegra systems")
RuntimeError: TensorRT does not currently build wheels for Tegra systems
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Retry 1/2 failed: Command 'pip install --no-cache-dir "tensorrt>7.0.0,!=10.1.0" ' returned non-zero exit status 1.
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-cnl6plvw/tensorrt-cu12_af3eec6e67824417ac3fee8b4d1a66b0/setup.py", line 71, in <module>
raise RuntimeError("TensorRT does not currently build wheels for Tegra systems")
RuntimeError: TensorRT does not currently build wheels for Tegra systems
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Retry 2/2 failed: Command 'pip install --no-cache-dir "tensorrt>7.0.0,!=10.1.0" ' returned non-zero exit status 1.
requirements: ❌ Command 'pip install --no-cache-dir "tensorrt>7.0.0,!=10.1.0" ' returned non-zero exit status 1.
TensorRT: export failure ❌ 14.1s: No module named 'tensorrt'
Traceback (most recent call last):
File "/home/nvidia/YOLO11/YOLO/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 810, in export_engine
import tensorrt as trt # noqa
ModuleNotFoundError: No module named 'tensorrt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nvidia/YOLO11/v3(m-model)/convert.py", line 7, in <module>
model.export(format="engine") # creates 'best.engine'
File "/home/nvidia/YOLO11/YOLO/lib/python3.10/site-packages/ultralytics/engine/model.py", line 738, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "/home/nvidia/YOLO11/YOLO/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 396, in __call__
f[1], _ = self.export_engine(dla=dla)
File "/home/nvidia/YOLO11/YOLO/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 180, in outer_func
raise e
File "/home/nvidia/YOLO11/YOLO/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 175, in outer_func
f, model = inner_func(*args, **kwargs)
File "/home/nvidia/YOLO11/YOLO/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 814, in export_engine
import tensorrt as trt # noqa
ModuleNotFoundError: No module named 'tensorrt'
### Additional
_No response_
|
open
|
2025-01-21T12:45:56Z
|
2025-01-23T14:31:26Z
|
https://github.com/ultralytics/ultralytics/issues/18799
|
[
"question",
"embedded",
"exports"
] |
Vectorvin
| 6
|
hpcaitech/ColossalAI
|
deep-learning
| 5,601
|
[FEATURE]: support multiple (partial) backward passes for zero
|
### Describe the feature
In some VAE training, users may use a weight-adaptive loss that computes the gradient of some parameters twice, like

This will trigger the backward hook twice.
Based on PyTorch's documentation, we may be able to use the post-accumulate-grad hook to solve this problem.

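For illustration, a minimal sketch of the hook mentioned above, PyTorch's `Tensor.register_post_accumulate_grad_hook` (available in recent PyTorch releases); this only shows the mechanism, not the ZeRO integration:
```python
import torch

param = torch.nn.Parameter(torch.randn(4))

def on_grad_accumulated(p: torch.Tensor) -> None:
    # Fires once the gradient has been fully accumulated into p.grad,
    # even if the backward graph touches p more than once.
    print("accumulated grad norm:", p.grad.norm().item())

param.register_post_accumulate_grad_hook(on_grad_accumulated)

loss = (param * 2).sum() + (param ** 2).sum()  # param contributes twice to the graph
loss.backward()
```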
|
closed
|
2024-04-16T05:02:06Z
|
2024-04-16T09:49:22Z
|
https://github.com/hpcaitech/ColossalAI/issues/5601
|
[
"enhancement"
] |
ver217
| 0
|
exaloop/codon
|
numpy
| 595
|
Is there a way to use python Enum?
|
I cannot find an example : ( .
|
closed
|
2024-09-30T07:21:46Z
|
2024-09-30T07:28:37Z
|
https://github.com/exaloop/codon/issues/595
|
[] |
BeneficialCode
| 0
|
mwaskom/seaborn
|
data-visualization
| 3,584
|
UserWarning: The figure layout has changed to tight self._figure.tight_layout(*args, **kwargs)
|
I have encountered a persistent `UserWarning` when using Seaborn's `pairplot` function, indicating a change in figure layout to tight.


`\anaconda3\Lib\site-packages\seaborn\axisgrid.py:118: UserWarning: The figure layout has changed to tight
self._figure.tight_layout(*args, **kwargs)`
`\AppData\Local\Temp\ipykernel_17504\146091522.py:3: UserWarning: The figure layout has changed to tight
plt.tight_layout()`
The warning appears even when attempting to adjust the layout using `plt.tight_layout()` or other methods. The warning does not seem to affect the appearance or functionality of the plot, but it's causing some concern. Seeking guidance on resolving or understanding the implications of this warning. Any insights or recommendations would be greatly appreciated. Thank you!
|
closed
|
2023-12-05T19:21:49Z
|
2023-12-05T22:23:49Z
|
https://github.com/mwaskom/seaborn/issues/3584
|
[] |
rahulsaran21
| 1
|
AirtestProject/Airtest
|
automation
| 595
|
When taking a screenshot it reports AttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d'
|
AttributeError: module 'cv2.cv2' has no attribute 'xfeatures2d'
It doesn't seem to have any real impact, but please take a look.

|
open
|
2019-11-07T04:47:01Z
|
2019-11-14T07:13:36Z
|
https://github.com/AirtestProject/Airtest/issues/595
|
[] |
abcdd12
| 5
|
gevent/gevent
|
asyncio
| 1,679
|
Python 3.8.6 and perhaps other recent Python 3.8 builds are incompatible
|
* gevent version: 1.4.0
* Python version: Please be as specific as possible: the `python:3.8` docker image as of 2020-09-24 6:30PM PST
* Operating System: Linux - ubuntu or Docker on Windows or Docker on Mac or Google container optimized linux
### Description:
Trying to boot a gunicorn worker with `--worker-class=gevent` and it logs this and then infinite loops:
```<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
<frozen importlib._bootstrap>:219: RuntimeWarning: greenlet.greenlet size changed, may indicate binary incompatibility. Expected 144 from C header, got 152 from PyObject
[2020-09-25 01:13:02 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2020-09-25 01:13:02 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2020-09-25 01:13:02 +0000] [1] [INFO] Using worker: gevent
/usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
[2020-09-25 01:13:02 +0000] [7] [INFO] Booting worker with pid: 7
[2020-09-25 01:13:02 +0000] [7] [INFO] Made Psycopg2 Green
/usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
[2020-09-25 01:13:02 +0000] [8] [INFO] Booting worker with pid: 8
[2020-09-25 01:13:02 +0000] [8] [INFO] Made Psycopg2 Green
/usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
[2020-09-25 01:13:02 +0000] [9] [INFO] Booting worker with pid: 9
[2020-09-25 01:13:02 +0000] [9] [INFO] Made Psycopg2 Green
# infinite loop
```
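A small hedged diagnostic (it only confirms the mismatch the warning describes, it is not a fix by itself): the message means the installed greenlet's C struct layout differs from the one this gevent build was compiled against, so comparing the installed pair is a reasonable first step before pinning or rebuilding.
```python
# Print the installed pair; gevent 1.4.0 wheels predate the greenlet struct-size
# change, so a newer greenlet here is consistent with the warning above.
import gevent
import greenlet

print("gevent:", gevent.__version__)
print("greenlet:", greenlet.__version__)
```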
|
closed
|
2020-09-25T01:34:56Z
|
2020-09-25T10:25:02Z
|
https://github.com/gevent/gevent/issues/1679
|
[] |
AaronFriel
| 2
|
microsoft/nni
|
deep-learning
| 5,651
|
Cannot download tar when building from source
|
The tar package cannot be downloaded when the source code is used for building.
nni version: dev
python: 3.7.9
os: openeuler-20.03
How to reproduce it?
```
git clone https://github.com/microsoft/nni.git
cd nni
export NNI_RELEASE=2.0
python setup.py build_ts
```

https://nodejs.org/dist/v18.15.0/node-v18.15.0-linux-aarch64.tar.xz
|
open
|
2023-07-28T08:28:37Z
|
2023-08-11T07:09:50Z
|
https://github.com/microsoft/nni/issues/5651
|
[] |
zhuofeng6
| 3
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 331
|
Problem with the implementation of mAP@R?
|
Hi,
Thanks for open-sourcing this repo!
I want to report a problem which I noticed in the [XBM repo](https://github.com/msight-tech/research-xbm), which uses code similar to yours for mAP@R calculation (I am not sure who is based on whom; I am opening this issue on both).
I suspect there is a mistake in the implementation of mAP@R:
[Line 99](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/9559b21559ca6fbcb46d2d51d7953166e18f9de6/src/pytorch_metric_learning/utils/accuracy_calculator.py#L99), you divide by max_possible_matches_per_row for this query (which for mAP@R is equal to R for each query).
But what should be done is to divide by the __actual__ number of relevant items among the R best-ranked ones for this query (as is done for [mAP](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/9559b21559ca6fbcb46d2d51d7953166e18f9de6/src/pytorch_metric_learning/utils/accuracy_calculator.py#L97) 2 lines later).
wikipedia page on AP:

(By the way, for mAP, as you do on line 98, one has to decide what to do with queries which have no relevant items and on which AP is therefore not defined. You use max_possible_values_per_row=0 for these rows, but I am not sure this is the most standard choice; maybe it would be good to give the option to set AP=0, or AP=1, or to exclude these rows...)
Regards,
A
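To make the distinction concrete, here is a small hedged sketch (plain NumPy, purely illustrative, not the library's code) of the two normalisations being discussed for a single query:
```python
import numpy as np

def ap_at_r(relevant: np.ndarray, divide_by_r: bool) -> float:
    """AP over the R best-ranked results; `relevant` is a 0/1 vector of length R."""
    r = len(relevant)
    precision_at_k = np.cumsum(relevant) / np.arange(1, r + 1)
    summed = float(np.sum(precision_at_k * relevant))
    denom = r if divide_by_r else max(int(relevant.sum()), 1)
    return summed / denom

hits = np.array([1, 0, 1, 0, 0])           # R = 5, two relevant results retrieved
print(ap_at_r(hits, divide_by_r=True))     # mAP@R convention: divide by R -> ~0.333
print(ap_at_r(hits, divide_by_r=False))    # AP convention: divide by #relevant retrieved -> ~0.833
```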
|
closed
|
2021-05-25T15:24:50Z
|
2021-05-26T06:58:39Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/331
|
[
"question"
] |
drasros
| 3
|
liangliangyy/DjangoBlog
|
django
| 525
|
Do articles only support on-demand video?
|
<!--
If you do not carefully check the items below, I may close your issue directly.
Before asking a question, it is recommended to read https://github.com/ruby-china/How-To-Ask-Questions-The-Smart-Way first
-->
**I confirm that I have checked** (mark `[ ]` as `[x]`)
- [x] [DjangoBlog readme](https://github.com/liangliangyy/DjangoBlog/blob/master/README.md)
- [x] [Configuration guide](https://github.com/liangliangyy/DjangoBlog/blob/master/bin/config.md)
- [x] [Other issues](https://github.com/liangliangyy/DjangoBlog/issues)
----
**I am requesting** (mark `[ ]` as `[x]`)
- [ ] Bug report
- [ ] A new feature or capability
- [x] Technical support
|
closed
|
2021-11-22T07:10:34Z
|
2022-03-02T11:22:56Z
|
https://github.com/liangliangyy/DjangoBlog/issues/525
|
[] |
glinxx
| 2
|
httpie/cli
|
rest-api
| 1,247
|
Test new httpie command on various actions
|
We have 2 actions that get triggered when specific files are edited (to test that snap/brew works); they test `http` but not `httpie`, so let's also test that.
|
closed
|
2021-12-21T17:32:58Z
|
2021-12-23T18:55:40Z
|
https://github.com/httpie/cli/issues/1247
|
[
"bug",
"new"
] |
isidentical
| 1
|
scrapy/scrapy
|
python
| 6,378
|
Edit Contributing.rst document to specify how to propose documentation suggestions
|
There are multiple types of contributions that the community can suggest including bug reports, feature requests, code improvements, security vulnerability reports, and documentation changes.
For the Scrapy.py project it was difficult to discern what process to follow to make a documentation improvement suggestion.
I want to suggest an additional section to the documentation that clearly explains how to propose a non-code related change.
This section will follow the guidelines outlined in the Contributing.rst file from another open source project, https://github.com/beetbox/beets
|
closed
|
2024-05-26T15:43:40Z
|
2024-07-10T07:37:32Z
|
https://github.com/scrapy/scrapy/issues/6378
|
[] |
jtoallen
| 11
|
pytorch/pytorch
|
machine-learning
| 149,186
|
torch.fx.symbolic_trace failed on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
|
### 🐛 Describe the bug
I tried to compile deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B to MLIR with the following script.
```python
# Import necessary libraries
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.export import export
import onnx
from torch_mlir import fx
# Load the DeepSeek model and tokenizer
model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
class Qwen2(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.qwen = model
def forward(self, x):
result = self.qwen(x)
result.past_key_values = ()
return result
qwen2 = Qwen2()
# Define a prompt for the model
prompt = "What are the benefits of using AI in healthcare?"
# Encode the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")
exported_program: torch.export.ExportedProgram = export (
qwen2, (input_ids,)
)
traced_model = torch.fx.symbolic_trace(qwen2)
m = fx.export_and_import(traced_model, (input_ids,), enable_ir_printing=True,
enable_graph_printing=True)
with open("qwen1.5b_s.mlir", "w") as f:
f.write(str(m))
```
But it failed with the following backtrace.
```shell
/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
Traceback (most recent call last):
File "/home/hmsjwzb/work/models/QWEN/qwen5.py", line 55, in <module>
traced_model = torch.fx.symbolic_trace(qwen2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 1314, in symbolic_trace
graph = tracer.trace(root, concrete_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 838, in trace
(self.create_arg(fn(*args)),),
^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen5.py", line 18, in forward
result = self.qwen(x)
^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 813, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 531, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 806, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 856, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 813, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 531, in call_module
ret_val = forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/fx/_symbolic_trace.py", line 806, in forward
return _orig_module_call(mod, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmsjwzb/work/models/QWEN/qwen/lib/python3.11/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 542, in forward
cache_position = torch.arange(
^^^^^^^^^^^^^
TypeError: arange() received an invalid combination of arguments - got (int, Proxy, device=Attribute), but expected one of:
* (Number end, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (Number start, Number end, *, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
* (Number start, Number end, Number step = 1, *, Tensor out = None, torch.dtype dtype = None, torch.layout layout = None, torch.device device = None, bool pin_memory = False, bool requires_grad = False)
```
After some debugging, it seems the tracer wraps x in a Proxy, making it Proxy(x), before passing it to Qwen2. The Proxy then causes an error during execution of the network.
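One hedged avenue (an assumption about the cause, not a confirmed fix): plain `torch.fx` tracing feeds `Proxy` objects into data-dependent code such as the `torch.arange` call above, whereas `transformers` ships its own FX tracer that handles this for supported architectures:
```python
# Sketch only: `model` is the AutoModelForCausalLM loaded in the script above,
# and whether the Qwen2 architecture is on the tracer's supported list still
# needs to be checked against the installed transformers version.
from transformers.utils.fx import symbolic_trace

traced_model = symbolic_trace(model, input_names=["input_ids"])
```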
### Versions
```shell
Collecting environment information...
PyTorch version: 2.7.0.dev20250310+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 19.1.7 (https://github.com/llvm/llvm-project.git cd708029e0b2869e80abe31ddb175f7c35361f90)
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11+local (heads/3.11-dirty:f0895aa9c1d, Dec 20 2024, 14:17:01) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.16.1
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.0
[pip3] torch==2.7.0.dev20250310+cpu
[pip3] torchvision==0.22.0.dev20250310+cpu
[pip3] triton==3.2.0
[conda] magma-cuda121 2.6.1 1 pytorch
```
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
open
|
2025-03-14T08:37:31Z
|
2025-03-24T06:33:05Z
|
https://github.com/pytorch/pytorch/issues/149186
|
[
"module: fx",
"oncall: pt2",
"oncall: export"
] |
FlintWangacc
| 4
|
gunthercox/ChatterBot
|
machine-learning
| 2,190
|
After my training using a third-party corpus, the robot was unable to answer accurately
|
I plan to use ChatterBot to build a chatbot. After training with a third-party corpus, my questions cannot be answered accurately, and the error rate is almost 100%. How can I solve this?
|
open
|
2021-08-11T08:20:54Z
|
2021-08-11T08:22:29Z
|
https://github.com/gunthercox/ChatterBot/issues/2190
|
[] |
jianwei923
| 1
|
gevent/gevent
|
asyncio
| 1,981
|
How to determine the saturation of the generated greenlets
|
I want to observe how long a greenlet waits in the ready queue before the loop dispatches it,
because I want to determine whether I am generating too many greenlets.
|
closed
|
2023-08-03T12:14:10Z
|
2023-08-03T13:10:35Z
|
https://github.com/gevent/gevent/issues/1981
|
[] |
xpf629629
| 3
|
davidteather/TikTok-Api
|
api
| 612
|
[BUG] - It seems that a csrf_session_id is now needed in cookies
|
The `userPosts` endpoint stopped working on my side several hours ago.
According to my first investigation, it seems a new cookie is now needed: csrf_session_id.
I tried to fix that but have not been able to make it work again so far.
|
closed
|
2021-06-09T17:00:49Z
|
2021-06-24T21:01:53Z
|
https://github.com/davidteather/TikTok-Api/issues/612
|
[
"bug"
] |
vdemay
| 8
|
seleniumbase/SeleniumBase
|
pytest
| 2,761
|
Can the nodejs code in the integrations/nodejs directory be automatically generated?
|
closed
|
2024-05-10T02:50:06Z
|
2024-05-10T03:25:00Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2761
|
[
"question"
] |
cloudswar
| 1
|
|
Textualize/rich
|
python
| 2,512
|
[BUG] Weird coloring when SSHed into a server
|
**Describe the bug**
So basically, when I am SSHed into a server, the coloring of text is not right
<br>
Here's the raw text:
```
[grey50][04/09/22 18:15:58][/] [blue3]122.161.67.108 [/] -- [magenta1]/ [/] [chartreuse4][/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:16:12][/] [blue3]122.161.67.108 [/] -- [magenta1]/random [/] [chartreuse4](https://apiofcats.xyz/) [/][orange3](xlyhtdqqlw571.png)[/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:16:13][/] [blue3]122.161.67.108 [/] -- [magenta1]/random [/] [chartreuse4](https://apiofcats.xyz/) [/][orange3](7c6icgbmp7t71.jpeg)[/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:16:15][/] [blue3]122.161.67.108 [/] -- [magenta1]/random [/] [chartreuse4](https://apiofcats.xyz/) [/][orange3](l0r5l6xg76w81.png)[/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:16:16][/] [blue3]122.161.67.108 [/] -- [magenta1]/random [/] [chartreuse4](https://apiofcats.xyz/) [/][orange3](nafb03x2hp591.png)[/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:16:18][/] [blue3]122.161.67.108 [/] -- [magenta1]/random [/] [chartreuse4](https://apiofcats.xyz/) [/][orange3](y1okoracutl81.png)[/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:16:19][/] [blue3]122.161.67.108 [/] -- [magenta1]/random [/] [chartreuse4](https://apiofcats.xyz/) [/][orange3](tg6gtejsgzh71.jpeg)[/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:16:21][/] [blue3]122.161.67.108 [/] -- [magenta1]/random [/] [chartreuse4](https://apiofcats.xyz/) [/][orange3](redditsave.com_updates_i_got_cat_food_and_there_is_a_kitten_as-rik5p67nw0x81.mp4)[/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:22:35][/] [blue3]122.161.67.108 [/] -- [magenta1]/ [/] [chartreuse4][/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:23:59][/] [blue3]122.161.67.108 [/] -- [magenta1]/api/random[/] [chartreuse4](https://apiofcats.xyz/) [/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
[grey50][04/09/22 18:24:01][/] [blue3]122.161.67.108 [/] -- [magenta1]/api/stats [/] [chartreuse4](https://apiofcats.xyz/) [/] -- [light_steel_blue]Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:104.0) Gecko/20100101 Firefox/104.0 en-GB,en;q=0.5[/]
```
<br>
This is what it's supposed to look like:

<br>
This is what it looks like when I am SSHed into my server:

<br>
**Platform**
<details>
<summary>Click to expand</summary>
<br>
**Local machine:** MacOS Monterey 12.5.1
**Server:** Ubuntu 22.04.1 LTS
<br>
Output of ```python -m rich.diagnose; pip freeze | grep rich```:
(on my local machine, ie mac)
```
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=167 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 28 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=167, height=28), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=167, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=28, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=167, height=28) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 167 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-kitty', │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Darwin"
rich==12.5.1
```
<br>
(on my server, ie ubuntu)
```
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=167 ColorSystem.STANDARD> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'standard' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 28 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=167, height=28), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=167, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=28, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=167, height=28) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 167 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-kitty', │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
rich==12.5.1
```
</details>
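A hedged reading of the two diagnose dumps above: on the server `COLORTERM` is unset, so rich negotiates `ColorSystem.STANDARD` (16 colors) and extended names like `grey50` or `chartreuse4` get approximated. Forcing truecolor is one way to test whether that is the cause (an assumption, not a confirmed diagnosis):
```python
from rich.console import Console

# Equivalent to exporting COLORTERM=truecolor in the SSH session before running the app.
console = Console(color_system="truecolor")
console.print("[grey50][04/09/22 18:15:58][/] [blue3]122.161.67.108[/] -- [magenta1]/random[/]")
```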
|
closed
|
2022-09-04T13:09:51Z
|
2022-09-24T18:31:14Z
|
https://github.com/Textualize/rich/issues/2512
|
[
"Needs triage"
] |
msr8
| 4
|
deeppavlov/DeepPavlov
|
tensorflow
| 1,338
|
404 in the docs
|
Our docs contain 404 links for nemo tutorials [here](http://docs.deeppavlov.ai/en/master/features/models/nemo.html) below in the model training section
|
closed
|
2020-11-05T18:33:14Z
|
2022-04-01T11:35:14Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1338
|
[
"bug"
] |
oserikov
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 431
|
[BUG] Version 4.0.3 still cannot fetch video information after replacing cookies
|
***Which platform did the error occur on?***
Douyin
***Which endpoint did the error occur on?***
/api/hybrid/video_data
***What input value was submitted?***
[e.g. short video link](https://v.douyin.com/ijtxc3Go/)
***Did you try again?***
Yes.
***Have you checked this project's README or API documentation?***
Yes. The log is as follows:
The program raised an exception, please check the error message.
ERROR Invalid response type. Response type: <class 'NoneType'>
The program raised an exception, please check the error message.
|
closed
|
2024-06-15T07:23:47Z
|
2024-06-17T14:05:06Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/431
|
[
"BUG"
] |
ilucl
| 12
|
slackapi/bolt-python
|
fastapi
| 879
|
Installation issue
|
I have noticed that any time I update the deployed code for my webhook that uses OAuth, it starts asking me to install the app into the organisation again. I don't think this is going to be sustainable once the app is actually published. What's the solution for this? My guess is that Slack pings it a lot and I might not have zero-downtime deployments. I am using [this](https://github.com/slackapi/bolt-python/tree/main/examples/django/oauth_app)
|
closed
|
2023-04-08T00:01:47Z
|
2023-04-08T09:18:08Z
|
https://github.com/slackapi/bolt-python/issues/879
|
[] |
smyja
| 1
|
wiseodd/generative-models
|
tensorflow
| 18
|
loss compute
|
I think it should use `nn.BCELoss`, not `-(torch.mean(torch.log(D_real)) + torch.mean(torch.log(1 - D_fake)))`
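A hedged sketch of the equivalence being suggested (illustrative only; one practical benefit of `nn.BCELoss` is that it clamps its log terms for numerical stability):
```python
import torch
import torch.nn as nn

# Fake discriminator outputs in (0, 1), standing in for D(x) and D(G(z)).
D_real = torch.rand(8, 1).clamp(1e-6, 1 - 1e-6)
D_fake = torch.rand(8, 1).clamp(1e-6, 1 - 1e-6)

manual = -(torch.mean(torch.log(D_real)) + torch.mean(torch.log(1 - D_fake)))

bce = nn.BCELoss()
with_bce = bce(D_real, torch.ones_like(D_real)) + bce(D_fake, torch.zeros_like(D_fake))

print(manual.item(), with_bce.item())  # should agree up to floating-point error
```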
|
closed
|
2017-05-05T01:28:26Z
|
2017-12-29T11:44:38Z
|
https://github.com/wiseodd/generative-models/issues/18
|
[] |
JiangWeixian
| 0
|
strawberry-graphql/strawberry-django
|
graphql
| 340
|
optimizer over fetching rows
|
Hi,
It looks like the Query Optimizer is over-fetching rows when working with Relay connections. It works fine with List but not with Connections.
So when I build the schema using List, the query below gets executed as a single SQL statement.
But when using Connection, I get two SQL statements. I guess that's how it works? Or could it be "optimized"?
`strawberry-graphql-django = "^0.16.0"`
repro https://github.com/tasiotas/strawberry_debug
thank you
```gql
#query
query MyQuery {
user(pk: "1") {
products {
edges {
node {
name
}
}
}
}
}
```
```sql
# first query
SELECT "app_user"."id"
FROM "app_user"
WHERE "app_user"."id" = '1'
LIMIT 21
```
```sql
# second query
SELECT "app_product"."id",
"app_product"."name",
"app_product"."size", # <- this shouldnt be here, didnt ask for it in the query
"app_product"."user_id"
FROM "app_product"
WHERE "app_product"."user_id" IN ('1')
```
|
closed
|
2023-08-16T22:25:21Z
|
2025-03-20T15:57:17Z
|
https://github.com/strawberry-graphql/strawberry-django/issues/340
|
[
"bug"
] |
tasiotas
| 12
|
piskvorky/gensim
|
data-science
| 3,288
|
Python 3.10 wheels
|
Hello!
Is there a reason why there are no wheels for python 3.10? See https://pypi.org/project/gensim/4.1.2/#files
If not, are you planning to add them?
Let me know if I can help in any way!
Best,
|
closed
|
2022-02-08T10:45:44Z
|
2022-03-18T14:05:48Z
|
https://github.com/piskvorky/gensim/issues/3288
|
[
"testing",
"reach MEDIUM",
"impact MEDIUM"
] |
LouisTrezzini
| 2
|
polakowo/vectorbt
|
data-visualization
| 29
|
Returns_Accessor
|
Is there a generic way to create a new method for these performance metrics, kind of like how indicators have an indicator factory? It would be nice to have a generic way to implement new performance metrics and add them to the portfolio class.
I've only started working through the examples, so I might not have dug deep enough to find out how to do this yet.
|
closed
|
2020-07-30T13:11:45Z
|
2024-03-16T09:19:10Z
|
https://github.com/polakowo/vectorbt/issues/29
|
[] |
stevennevins
| 8
|
tflearn/tflearn
|
tensorflow
| 833
|
Accuracy null and low loss
|
Hello everyone, and thank you for looking at my question.
I have data made of a date (one entry per minute) in the first column and congestion (a value between 0 and 200) in the second. My goal is to feed it to my neural network so I can predict the congestion at each minute for the next week (my dataset has more than 10M entries, so I shouldn't have a problem with lack of training data).
Problem: I have an issue (maybe linked to me using the wrong loss/optimizer). When using my code (10,000 training samples for now) I get an accuracy of 0, a low loss (0.00X), and a bad prediction (not even close to reality). Do you have any idea where it could come from (I use Adam as the optimizer, mean_square for the loss, and linear for the activation)?
Here is an example of what I got
> Training Step: 7100 | total loss: 0.00304 | time: 1.385s | Adam | epoch: 100 | loss: 0.00304 - binary_acc: 0.0000 | val_loss: 0.00260 - val_acc: 0.0000 -- iter: 8999/8999
My code look like that:
```
from __future__ import print_function
import numpy
import numpy as np
import tflearn
from pandas import DataFrame
from pandas import Series
from pandas import concat
from pandas import read_csv
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from math import sqrt
import datetime
# Preprocessing function
from tflearn import Accuracy
def preprocess(data):
return np.array(data, dtype=np.int32)
def parser(x):
return datetime.datetime.strptime(x, '%Y-%m-%d %H:%M:%S')
# frame a sequence as a supervised learning problem
def timeseries_to_supervised(data, lag=1):
df = DataFrame(data)
columns = [df.shift(i) for i in range(1, lag + 1)]
columns.append(df)
df = concat(columns, axis=1)
df.fillna(0, inplace=True)
return df
def difference(dataset, interval=1):
diff = list()
for i in range(interval, len(dataset)):
value = dataset[i] - dataset[i - interval]
diff.append(value)
return Series(diff)
# invert differenced value
def inverse_difference(history, yhat, interval=1):
return yhat + history[-interval]
# scale train and test data to [-1, 1]
def scale(train, test):
# fit scaler
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler = scaler.fit(train)
# transform train
train = train.reshape(train.shape[0], train.shape[1])
train_scaled = scaler.transform(train)
# transform test
test = test.reshape(test.shape[0], test.shape[1])
test_scaled = scaler.transform(test)
return scaler, train_scaled, test_scaled
# inverse scaling for a forecasted value
def invert_scale(scaler, X, value):
new_row = [x for x in X] + [value]
array = numpy.array(new_row)
array = array.reshape(1, len(array))
inverted = scaler.inverse_transform(array)
return inverted[0, -1]
def fit_lstm(train, batch_size, nb_epoch, neurons):
X, y = train[0:-1], train[:, -1]
X = X[:, 0].reshape(len(X), 1)
y = y.reshape(len(y), 1)
# Build neural network
net = tflearn.input_data(shape=[None, 1])
net = tflearn.fully_connected(net, 1, activation='linear')
acc = Accuracy()
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
loss='mean_square', metric=acc)
# Define model
model = tflearn.DNN(net)
print (X.shape)
model.fit(X, y, n_epoch=nb_epoch, batch_size=batch_size, shuffle=False, show_metric=True)
return model
# make a one-step forecast
def forecast_lstm(model, X):
X = X.reshape(len(X), 1)
yhat = model.predict(X)
return yhat[0, 0]
# Load CSV file, indicate that the first column represents labels
data = read_csv('nowcastScaled.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
# transform data to be stationary
raw_values = data.values
diff_values = difference(raw_values, 1)
# transform data to be supervised learning
supervised = timeseries_to_supervised(diff_values, 1)
supervised_values = supervised.values
# split data into train and test-sets
train, test = supervised_values[0:10000], supervised_values[10000:11000]
# transform the scale of the data
scaler, train_scaled, test_scaled = scale(train, test)
# fit the model
lstm_model = fit_lstm(train_scaled, 16, 15, 1)
lstm_model.save('ModelTrainTfLearn')
# forecast the entire training dataset to build up state for forecasting
train_reshaped = train_scaled[:, 0].reshape(len(train_scaled), 1)
predictiont = lstm_model.predict(train_reshaped)
print("Prediction: %s" % str(predictiont[0][:3])) # only show first 3 probas
# walk-forward validation on the test data
predictions = list()
error_scores = list()
for i in range(len(test_scaled)):
# make one-step forecast
X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
yhat = forecast_lstm(lstm_model, X)
# X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
# invert scaling
yhat = invert_scale(scaler, X, yhat)
# # invert differencing
yhat = inverse_difference(raw_values, yhat, len(test_scaled) + 1 - i)
# store forecast
predictions.append(yhat)
rmse = sqrt(mean_squared_error(raw_values[10000:11000], predictions))
print('%d) Test RMSE: %.3f' % (1, rmse))
error_scores.append(rmse)
print (predictions)
print (raw_values[10000:11000])
```
What I'm trying to do is time-series prediction. Is tflearn good for that, or is it just my approach that is wrong?
Thank you for your attention and help.
(Linked is the file I'm using)
[nowcastScaled.zip](https://github.com/tflearn/tflearn/files/1141334/nowcastScaled.zip)
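A hedged side note rather than a verified fix: binary accuracy is not a meaningful metric for a regression target scaled to [-1, 1], which is why it stays at 0.0000 even while the mean-square loss is small. A network definition closer to a time-series setup might look like the sketch below, assuming tflearn's `lstm` layer and `R2` metric:
```python
import tflearn

# Input shaped as (batch, timesteps, features); X must be reshaped accordingly
# before calling model.fit().
net = tflearn.input_data(shape=[None, 1, 1])
net = tflearn.lstm(net, 64)
net = tflearn.fully_connected(net, 1, activation='linear')
net = tflearn.regression(net, optimizer='adam', learning_rate=0.001,
                         loss='mean_square', metric=tflearn.metrics.R2())
model = tflearn.DNN(net)
```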
|
closed
|
2017-07-12T08:29:35Z
|
2017-07-20T05:58:42Z
|
https://github.com/tflearn/tflearn/issues/833
|
[] |
Erickira3
| 1
|
Sanster/IOPaint
|
pytorch
| 429
|
[BUG]pydantic_core._pydantic_core.ValidationError: 1 validation error for Config paint_by_example_example_image Input should be an instance of Image [type=is_instance_of, input_value=None, input_type=NoneType]
|
**Model**
LAMA
**Describe the bug**
[2024-01-24 09:36:41,914] ERROR in app: Exception on /inpaint [POST]
Traceback (most recent call last):
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\flask\app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\flask_cors\extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\flask\app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\flask\_compat.py", line 39, in reraise
raise value
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\flask\app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\lama_cleaner\server.py", line 259, in process
p2p_guidance_scale=form["p2pGuidanceScale"],
File "E:\wss\python\pycharm-work\lama-cleaner-0.37.0\venv\lib\site-packages\pydantic\main.py", line 164, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Config
paint_by_example_example_image
Input should be an instance of Image [type=is_instance_of, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.4/v/is_instance_of
**System Info**
(venv) E:\wss\python\pycharm-work\lama-cleaner-0.37.0>python main.py --model=lama --device=cpu --port=8085
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.2
- torch: 1.13.1
- Pillow: 9.5.0
- diffusers: 0.12.1
- transformers: 4.30.2
- opencv-python: 4.9.0.80
- xformers: N/A
- accelerate: 0.20.3
- lama-cleaner: N/A
2024-01-24 09:33:59.046 | INFO | lama_cleaner.helper:load_jit_model:102 - Loading model from: C:\Users\wss\.cache\torch\hub\checkpoints\big-lama.pt
2024-01-24 09:33:59.799 | INFO | lama_cleaner.helper:load_jit_model:102 - Loading model from: C:\Users\wss\.cache\torch\hub\checkpoints\clickseg_pplnet.
pt
* Running on http://127.0.0.1:8085/ (Press CTRL+C to quit)
127.0.0.1 - - [24/Jan/2024 09:34:06] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [24/Jan/2024 09:34:07] "GET /model HTTP/1.1" 200 -
127.0.0.1 - - [24/Jan/2024 09:34:07] "GET /inputimage HTTP/1.1" 200 -
127.0.0.1 - - [24/Jan/2024 09:34:07] "GET /is_desktop HTTP/1.1" 200 -
127.0.0.1 - - [24/Jan/2024 09:34:07] "GET /is_disable_model_switch HTTP/1.1" 200 -
127.0.0.1 - - [24/Jan/2024 09:34:07] "GET /is_enable_file_manager HTTP/1.1" 200 -
127.0.0.1 - - [24/Jan/2024 09:34:07] "GET /is_enable_auto_saving HTTP/1.1" 200 -
127.0.0.1 - - [24/Jan/2024 09:34:14] "GET /model HTTP/1.1" 200 -
|
closed
|
2024-01-24T03:15:51Z
|
2025-02-27T02:01:44Z
|
https://github.com/Sanster/IOPaint/issues/429
|
[
"stale"
] |
wang3680
| 3
|
PaddlePaddle/PaddleHub
|
nlp
| 1,869
|
Running trainer.train() raises "label should not out of bound, but got Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True, [2])"
|
- Version and environment info
1) e.g. PaddleHub 2.1.1, PaddlePaddle 2.2.2.post101, using erine_tiny
2) System environment: AI Studio platform
- Reproduction info: ValueError: label should not out of bound, but got Tensor(shape=[1], dtype=int64, place=CUDAPlace(0), stop_gradient=True, [2]
-
Code: https://aistudio.baidu.com/aistudio/datasetdetail/146374/0
|
closed
|
2022-05-14T02:57:07Z
|
2022-05-25T01:56:52Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/1869
|
[] |
fatepary
| 2
|
viewflow/viewflow
|
django
| 28
|
right lookup over flow_task fields
|
Need to add full path to flow_task name
or
get_objects_or_404(Task, process__flow_cls=flow_task.flow_cls, flow_task=flow_task)
|
closed
|
2014-03-28T00:37:33Z
|
2014-05-01T09:58:12Z
|
https://github.com/viewflow/viewflow/issues/28
|
[
"request/enhancement"
] |
kmmbvnr
| 1
|
jpadilla/django-rest-framework-jwt
|
django
| 368
|
Validate and get the user using the jwt token inside a view or consumer
|
I am using django-rest-framework for the REST API. Also, for JSON web token authentication I am using django-rest-framework-jwt. After a successful login, the user is provided with a token. I have found how to [verify a token](https://getblimp.github.io/django-rest-framework-jwt/#verify-token) with the api call, but is there any way to validate the token inside a view and get the user of that token, similar to request.user?
I need it to validate inside the consumer when using [django-channels](https://channels.readthedocs.io/en/stable/index.html):
```
def ws_connect(message):
params = parse_qs(message.content["query_string"])
if b"token" in params:
token = params[b"token"][0]
# validate the token and get the user object
# create an object with that user
```
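One possible approach (a hedged sketch that leans on the handlers the package exposes through `api_settings`, not an official recipe) is to decode the token with the library's own handler and then resolve the Django user from the payload:
```python
import jwt
from django.contrib.auth import get_user_model
from rest_framework_jwt.settings import api_settings

def user_from_token(token):
    """Return the User for a JWT, or None if the token is invalid or expired."""
    try:
        payload = api_settings.JWT_DECODE_HANDLER(token)
    except jwt.InvalidTokenError:
        return None
    username = api_settings.JWT_PAYLOAD_GET_USERNAME_HANDLER(payload)
    return get_user_model().objects.filter(username=username).first()
```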
|
open
|
2017-09-15T02:24:36Z
|
2018-08-20T05:28:08Z
|
https://github.com/jpadilla/django-rest-framework-jwt/issues/368
|
[] |
robinlery
| 1
|
biolab/orange3
|
data-visualization
| 6,173
|
Feature Statistics - open issues
|
<!--
If something's not right with Orange, fill out the bug report template at:
https://github.com/biolab/orange3/issues/new?assignees=&labels=bug+report&template=bug_report.md&title=
If you have an idea for a new feature, fill out the feature request template at:
https://github.com/biolab/orange3/issues/new?assignees=&labels=&template=feature_request.md&title=
-->
After PR #6158 there are some open issues left that should be discussed:
- [x] Compute Mode for numeric variables?
- [x] Show Mode in the widget to make it consistent with the new (and improved) output? Currently the mode is squeezed into the Median column, which would otherwise be empty for categorical variables. But numeric variables could have both...
- [ ] Documentation should be updated
- [x] There are some warnings which could cause issues in the future:
```
Orange/statistics/util.py:510: FutureWarning: Unlike other reduction functions (e.g. `skew`, `kurtosis`), the default behavior of `mode` typically preserves the axis it acts along. In SciPy 1.11.0, this behavior will change: the default value of `keepdims` will become False, the `axis` over which the statistic is taken will be eliminated, and the value None will no longer be accepted. Set `keepdims` to True or False to avoid this warning.
res = scipy.stats.stats.mode(x, axis)
```
```
orange-widget-base/orangewidget/gui.py:2068: UserWarning: decorate OWFeatureStatistics.commit with @gui.deferred and then explicitly call commit.now or commit.deferred.
```
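A hedged illustration of the SciPy deprecation mentioned in the first warning (not Orange's actual patch): passing `keepdims` explicitly keeps the current axis-preserving behaviour and silences the FutureWarning on SciPy >= 1.9.
```python
import numpy as np
import scipy.stats

x = np.array([[1, 2, 2],
              [3, 3, 4]])
# Explicit keepdims pins down the behaviour that will otherwise change in SciPy 1.11.
res = scipy.stats.mode(x, axis=0, keepdims=True)
print(res.mode, res.count)
```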
|
closed
|
2022-10-14T13:50:34Z
|
2023-02-23T11:00:15Z
|
https://github.com/biolab/orange3/issues/6173
|
[
"wish",
"snack"
] |
lanzagar
| 2
|
pytest-dev/pytest-xdist
|
pytest
| 328
|
Re-run errored tests on another node
|
With https://github.com/pytest-dev/pytest-xdist/issues/327 (errors due to out-of-memory), it would be useful if those tests could be run on another node automatically/optionally.
I guess that `--dist=each` would consider the test(s) to be OK if it failed only on one node, but I am using `--dist=loadfile` (for performance reasons).
|
open
|
2018-08-02T05:27:24Z
|
2018-08-02T08:19:31Z
|
https://github.com/pytest-dev/pytest-xdist/issues/328
|
[] |
blueyed
| 4
|
marcomusy/vedo
|
numpy
| 912
|
linewidth in silhouette
|
Problem: how to turn off the line width in the silhouette
mesh: [SMPL_A.txt](https://github.com/marcomusy/vedo/files/12273512/SMPL_A.txt)
code:
```
from vedo import settings, Plotter, Mesh
settings.use_depth_peeling = True
plt = Plotter()
path_mesh = "SMPL_A.obj"
mesh = Mesh(path_mesh).lighting("off").c("pink").alpha(0.5)
plt.show(
mesh,
mesh.silhouette(border_edges=True, feature_angle=False).linewidth(3).color("dr"),
)
plt.close()
```
result:

|
closed
|
2023-08-07T07:41:01Z
|
2023-08-24T06:45:38Z
|
https://github.com/marcomusy/vedo/issues/912
|
[] |
LogWell
| 3
|
jacobgil/pytorch-grad-cam
|
computer-vision
| 22
|
'image not found' error
|
I see that several people get the image not found error.
Here is my solution: 'brew install libomp'
Added a comment to Readme:
https://github.com/punnerud/pytorch-grad-cam
|
closed
|
2019-06-04T13:27:22Z
|
2019-06-04T16:39:31Z
|
https://github.com/jacobgil/pytorch-grad-cam/issues/22
|
[] |
punnerud
| 1
|
pallets-eco/flask-sqlalchemy
|
flask
| 563
|
Many-to-many relationship: can't add any data.
|
I used the example provided on the official website (http://flask-sqlalchemy.pocoo.org/2.3/models/#many-to-many-relationships). It creates the three tables, but when I add new data, it raises "AttributeError: 'tuple' object has no attribute 'foreign_keys'"
my code:
```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from app import adminApp
db=SQLAlchemy(adminApp)
tags = db.Table('tags',
db.Column('tag_id', db.Integer, db.ForeignKey('tag.id')),
db.Column('page_id', db.Integer, db.ForeignKey('page.id'))
)
class Page(db.Model):
id = db.Column(db.Integer, primary_key=True)
tags = db.relationship('Tag', secondary=tags,
backref=db.backref('pages', lazy='dynamic'))
class Tag(db.Model):
id = db.Column(db.Integer, primary_key=True)
db.create_all()
tag1=Tag()
tag2=Tag()
page1=Page()
page2=Page()
```
error informations:
`File "f:/zww/project/testmanytomany.py", line 21, in <module>
tag1=Tag()
File "<string>", line 2, in __init__
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\instrumentation.py", line 347, in _new_state_if_none
state = self._state_constructor(instance, self)
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\util\langhelpers.py", line 764, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\instrumentation.py", line 177, in _state_constructor
self.dispatch.first_init(self, self.class_)
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\event\attr.py", line 256, in __call__
fn(*args, **kw)
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\mapper.py", line 2976, in _event_on_first_init
configure_mappers()
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\mapper.py", line 2872, in configure_mappers
mapper._post_configure_properties()
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\mapper.py", line 1765, in _post_configure_properties
prop.init()
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\interfaces.py", line 184, in init
self.do_init()
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\relationships.py", line 1654, in do_init
self._setup_join_conditions()
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\relationships.py", line 1729, in _setup_join_conditions
can_be_synced_fn=self._columns_are_mapped
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\relationships.py", line 1987, in __init__
self._determine_joins()
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\orm\relationships.py", line 2053, in _determine_joins
consider_as_foreign_keys=consider_as_foreign_keys
File "<string>", line 2, in join_condition
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\sql\selectable.py", line 964, in _join_condition
a, a_subset, b, consider_as_foreign_keys)
File "D:\Anaconda3\envs\webpy\lib\site-packages\sqlalchemy\sql\selectable.py", line 996, in _joincond_scan_left_right
b.foreign_keys,
AttributeError: 'tuple' object has no attribute 'foreign_keys'`
I use Python 3.5 with Anaconda, MySQL, and PyMySQL as the connector between Python 3 and MySQL.
Please help me; I spent a lot of time searching for a solution on Google, but failed.
|
closed
|
2017-10-26T16:13:52Z
|
2020-12-05T20:55:27Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/563
|
[] |
dreamsuifeng
| 2
|
biolab/orange3
|
pandas
| 6,657
|
Documentation for Table.from_list
|
<!--
This is an issue template. Please fill in the relevant details in the
sections below.
Wrap code and verbatim terminal window output into triple backticks, see:
https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code
If you're raising an issue about an add-on (e.g. installed via
Options > Add-ons), raise an issue on the relevant add-on's issue
tracker instead. See: https://github.com/biolab?q=orange3
-->
##### Orange version
<!-- From menu _Help→About→Version_ or code `Orange.version.full_version` -->
'3.36.2'
##### Expected behavior
I would like to know how to use the `Table.from_list` method, or whether there is an alternative for creating a table from a list of lists or a list of dicts.
The domain has discrete and string (meta) variables, but I have no idea how the domain relates to the list of values. Should discrete variables come first, or later?
|
closed
|
2023-11-27T13:08:41Z
|
2023-11-27T13:21:44Z
|
https://github.com/biolab/orange3/issues/6657
|
[] |
dhilst
| 0
|
chezou/tabula-py
|
pandas
| 198
|
CalledProcessError after bundling Python script into executable with PyInstaller
|
<!--- Provide a general summary of your changes in the Title above -->
# Summary of your issue
I've been using tabula-py to convert a series of pdf tables into csv files for my program to parse through and collect some data. The code works perfectly when just in the script, but if I attempt to bundle my code into an executable file using PyInstaller I get an error. Thanks for any help you can give!
# Check list before submit
<!--- Write and check the following questionaries. -->
- Your PDF URL: Unable to upload due to sensitivity of PDF.
- Paste the output of `import tabula; tabula.environment_info()` on Python REPL:
Python version:
3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:21:23) [MSC v.1916 32 bit (Intel)]
Java version:
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) Client VM (build 25.131-b11, mixed mode, sharing)
tabula-py version: 1.4.3
platform: Windows-7-6.1.7601-SP1
uname:
uname_result(system='Windows', node='L17CA-D7426CPS', release='7', version='6.1.7601', machine='AMD64', processor='Intel64 Family 6 Model 94 Stepping 3, GenuineIntel')
linux_distribution: ('', '', '')
mac_ver: ('', ('', '', ''), '')
# What did you do when you faced the problem?
Upon facing the problem I tried the previous solutions, namely putting the PDF in the same folder as the executable and modifying my tabula-py version to 1.3.1. Neither fix was successful.
<!--- Provide your information to reproduce the issue. -->
## Code:
```
pdf_path = pdf_path
output_path = output_path
try:
tabula.convert_into(pdf_path,pages='all', guess=False,output_format="CSV",output_path=output_path) #Try to convert the file
except AttributeError:
messagebox.showerror("Operation cancelled!", "In order to continue, please re-run the program and select a PDF file.")
```
## Expected behavior:
A .csv file with the tables that were contained within the PDF.
## Actual behavior:

## Related Issues:
#60 #93 #86
|
closed
|
2019-12-17T00:34:05Z
|
2023-04-06T01:40:51Z
|
https://github.com/chezou/tabula-py/issues/198
|
[] |
adammet
| 5
|
seleniumbase/SeleniumBase
|
web-scraping
| 2,178
|
Unable to obtain multiple texts: self.get_text('li.el-menu-item')
|

I need to retrieve these texts into a list; currently only the first one can be obtained:
触发事件
流程列表
触发策略
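A hedged suggestion (a minimal sketch assuming the standard SeleniumBase `BaseCase` API, not a confirmed answer): `get_text()` returns only the first match, so collect the text of every matching element instead.
```python
from seleniumbase import BaseCase

class MenuTexts(BaseCase):
    def test_collect_menu_items(self):
        self.open("https://example.com")  # placeholder URL for the page in question
        # find_elements() returns every match for the selector, not just the first.
        items = [el.text for el in self.find_elements("li.el-menu-item")]
        print(items)  # expected: ['触发事件', '流程列表', '触发策略']
```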
|
closed
|
2023-10-11T11:04:28Z
|
2023-10-12T02:33:52Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2178
|
[
"question"
] |
luckyboy-wei
| 2
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,851
|
Does the Windows environment cause multiple client connections to block?
|
Hello, I am using Flask-SocketIO to process tasks. If a user sends multiple tasks, the Flask-SocketIO connection blocks. My environment is Windows.
|
closed
|
2022-07-22T10:09:36Z
|
2022-07-22T10:17:18Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1851
|
[] |
zhenzi0322
| 1
|
Lightning-AI/LitServe
|
api
| 177
|
Show warning if `batch` & `unbatch` are implemented but max_batch_size is not set in `LitServer`
|
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
### Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
### Pitch
<!-- A clear and concise description of what you want to happen. -->
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, if any. -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
|
closed
|
2024-07-19T18:13:44Z
|
2024-08-02T14:57:26Z
|
https://github.com/Lightning-AI/LitServe/issues/177
|
[
"enhancement",
"good first issue",
"help wanted"
] |
aniketmaurya
| 2
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 667
|
Dataset option is blurred (not usable) in GUI
|

How do I enable the dataset option? It is not usable in the GUI. Where should the datasets be extracted to? Does <dataset_root> mean any folder?
|
closed
|
2021-02-17T18:30:13Z
|
2021-02-17T20:23:42Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/667
|
[] |
clonereal
| 1
|
httpie/cli
|
rest-api
| 1,523
|
[macOS] Lag when piping stdout
|
When redirecting httpie output to a pipe, there is a significant lag in the total execution time:
1. Without pipe
```
time http http://localhost:8508/api/v1 > /dev/null
http http://localhost:8508/api/v1> /dev/null 0.12s user 0.05s system 59% cpu 0.282 total
```
2. With pipe
```
time http http://localhost:8508/api/v1 | cat > /dev/null
http http://localhost:8508/api/v1 0.12s user 0.04s system 86% cpu 0.193 total
cat > /dev/null 0.00s user 0.00s system 0% cpu 0.973 total
```
Note that the lag appears to be happening in the `cat` command after `httpie`, which doesn't make sense and doesn't happen for `curl` or `wget`:
```
time curl -s http://localhost:8508/api/v1 > /dev/null
curl -s http://localhost:8508/api/v1 > /dev/null 0.01s user 0.01s system 56% cpu 0.024 total
time curl -s http://localhost:8508/api/v1 | cat > /dev/null
curl -s http://localhost:8508/api/v1 0.00s user 0.01s system 44% cpu 0.035 total
cat > /dev/null 0.00s user 0.00s system 9% cpu 0.034 total
```
## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
|
closed
|
2023-07-30T23:28:36Z
|
2024-01-31T18:01:32Z
|
https://github.com/httpie/cli/issues/1523
|
[
"bug",
"new"
] |
gsakkis
| 5
|
arogozhnikov/einops
|
numpy
| 167
|
[BUG] einops materializes tensors that could be views
|
**Describe the bug**
Thanks a lot for this great library! My code looks much nicer now. Today I noticed that einops exhibits suboptimal behavior when repeating tensors in ways that could be views. In the example below, expanding with PyTorch creates a view that actually reuses the same memory as the original tensor. If I use einops instead to expand the dimensions, I get a fully realized tensor with a much larger memory footprint.
**Reproduction steps**
```python
$ ipython
Python 3.9.9 (main, Jan 5 2022, 20:20:26)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.0.1 -- An enhanced Interactive Python. Type '?' for help.
Exception reporting mode: Verbose
In [1]: x = torch.arange(10)
In [2]: import einops as eo
In [3]: a = eo.repeat(x, "i -> 8 3 i")
In [4]: a.storage().size(), a.storage().data_ptr()
Out[4]: (240, 93944369638784)
In [5]: b = x.expand((8, 3, -1))
In [6]: b.storage().size(), b.storage().data_ptr()
Out[6]: (10, 93944370155264)
In [7]: x.storage().size(), x.storage().data_ptr()
Out[7]: (10, 93944370155264)
```
**Expected behavior**
I would expect einops to be as efficient as creating the view directly.
**Your platform**
```
torch==1.10.1
einops==0.3.2
```
|
closed
|
2022-01-19T17:23:03Z
|
2022-01-19T18:53:09Z
|
https://github.com/arogozhnikov/einops/issues/167
|
[
"bug"
] |
martenlienen
| 2
|
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 150
|
sendkeys-elemwrite ?
|
File "C:\Users\Administrator\Desktop\Proxy-Changer-main\drivertest.py", line 83, in <module>
asyncio.run(main())
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 650, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\Proxy-Changer-main\drivertest.py", line 78, in main
epostagiris.write("yunus")
^^^^^^^^^^^^^^^^^
AttributeError: 'coroutine' object has no attribute 'write'
sys:1: RuntimeWarning: coroutine 'Chrome.find_element' was never awaited
```
async def main():
options = webdriver.ChromeOptions()
async with webdriver.Chrome(options=options) as driver:
await driver.get('---')
#await driver.sleep(1.5)
#await driver.wait_for_cdp("Page.domContentEventFired", timeout=10)
# wait 10s for elem to exist
birincitik = await driver.find_element(By.XPATH, '//*[@id="SupportFormRow.486349168100130"]/div[3]/label[2]')
await birincitik.click(move_to=True)
time.sleep(10)
ikincitik = await driver.find_element(By.XPATH, '//*[@id="SupportFormRow.334536476675719"]/div[4]/label[1]/span')
await ikincitik.click(move_to=True)
ucuncutik = await driver.find_element(By.XPATH, '//*[@id="SupportFormRow.650238494993366"]/div[4]/label[1]/span')
await ucuncutik.click(move_to=True)
epostatik = await driver.find_element(By.XPATH, '//*[@id="454337367989067"]')
await ucuncutik.click(move_to=True)
epostagiris = driver.find_element(By.XPATH, '//*[@id="454337367989067"]')
epostagiris.write("yunus")
time.sleep(10)
#await epostagiris.click(move_to=True)
asyncio.run(main())
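For what it's worth, the traceback points at a missing `await`: `driver.find_element` returns a coroutine, so `.write` is being called on the coroutine object instead of the element. A minimal corrected sketch, assuming the imports follow the project README, that `write` is the intended element method here, and using placeholder URL/XPath values:
```python
import asyncio

from selenium_driverless import webdriver
from selenium_driverless.types.by import By


async def main():
    options = webdriver.ChromeOptions()
    async with webdriver.Chrome(options=options) as driver:
        await driver.get("https://example.com")  # placeholder URL
        # await the coroutine so we get the element itself, not a coroutine object
        epostagiris = await driver.find_element(By.XPATH, '//*[@id="454337367989067"]')
        await epostagiris.write("yunus")  # element methods are coroutines too
        await asyncio.sleep(10)           # avoid time.sleep(); it blocks the event loop


asyncio.run(main())
```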
|
closed
|
2024-01-15T10:02:52Z
|
2024-01-15T10:30:22Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/150
|
[] |
yemrekula0748
| 1
|
neuml/txtai
|
nlp
| 377
|
OpenMP issues with torch 1.13+ on macOS
|
macOS users are running into a compatibility issue between Faiss and Torch related to OpenMP. This has also affected the GitHub Actions build.
It would be great if a macOS user took a look at trying to figure out the root cause. See this comment for debugging ideas https://github.com/neuml/txtai/issues/498#issuecomment-1627807236.
_Note: The GitHub Actions build is currently pinned to torch==1.12.1._
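If the failure is the classic "libomp already initialized" abort (one OpenMP runtime loaded by Faiss and another by Torch), a commonly suggested and admittedly blunt workaround for local debugging on macOS, not a root-cause fix, is:
```python
import os

# Allow duplicate OpenMP runtimes instead of aborting; this masks the conflict
# rather than fixing it, so use it only for local debugging.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

import faiss  # noqa: E402  - import order matters: set the env var first
import torch  # noqa: E402
```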
|
closed
|
2022-11-01T18:23:06Z
|
2023-07-10T17:06:04Z
|
https://github.com/neuml/txtai/issues/377
|
[
"bug"
] |
davidmezzetti
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 243
|
I set up my domain's DNS with Cloudflare, so why can't it be accessed? Is it because I used a wildcard record, or is it a certificate problem?
|
***Which platform did the error occur on?***
e.g. Douyin/TikTok
***Which endpoint did the error occur on?***
e.g. API-V1/API-V2/Web APP
***What input value was submitted?***
e.g. a short-video link
***Did you try again?***
e.g. yes, the error still persists X amount of time after it first occurred.
***Have you checked this project's README or API documentation?***
e.g. yes, and I am quite sure the problem is caused by the program.
|
closed
|
2023-08-17T15:19:25Z
|
2023-09-12T15:22:05Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/243
|
[
"help wanted"
] |
tiermove
| 3
|
zappa/Zappa
|
flask
| 738
|
[Migrated] Zappa appears to fail silently when it can't create a bucket
|
Originally from: https://github.com/Miserlou/Zappa/issues/1860 by [fbidu](https://github.com/fbidu)
<!--- Provide a general summary of the issue in the Title above -->
## Context
I'm using Python 3.7 but that doesn't seem to affect the issue itself. In my initial config, I put a "bucket_name" that, by accident, already existed but wasn't mine. When I tried to deploy, I got this somewhat cryptic error message
```
Traceback (most recent call last):
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/zappa/core.py", line 924, in upload_to_s3
self.s3_client.head_bucket(Bucket=bucket_name)
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadBucket operation: Forbidden
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/zappa/cli.py", line 2779, in handle
sys.exit(cli.handle())
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/zappa/cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/zappa/cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/zappa/cli.py", line 723, in deploy
self.zip_path, self.s3_bucket_name, disable_progress=self.disable_progress)
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/zappa/core.py", line 931, in upload_to_s3
Bucket=bucket_name,
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/fbidu/.local/share/virtualenvs/predict-Rd_jSeMc/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.
```
After changing the bucket name to a new, unique one, the deployment immediately worked.
## Expected Behavior
<!--- Tell us what should happen -->
I think Zappa should check for this error and give a clearer warning
## Actual Behavior
<!--- Tell us what happens instead -->
I got a weird error
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
I think that Zappa could make some assertions before attempting the deploy; a rough sketch of such a check is below
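Purely as an illustration of the kind of pre-flight check meant here (not Zappa's actual code; the function name and messages are made up), something along these lines with boto3:
```python
import boto3
import botocore.exceptions


def check_bucket_is_usable(bucket_name: str) -> None:
    """Fail early with a clear message if the bucket exists but is not ours."""
    s3 = boto3.client("s3")
    try:
        s3.head_bucket(Bucket=bucket_name)  # succeeds only if we can access the bucket
    except botocore.exceptions.ClientError as e:
        status = e.response["ResponseMetadata"]["HTTPStatusCode"]
        if status == 404:
            return  # does not exist yet; Zappa may create it
        if status == 403:
            raise SystemExit(
                f"Bucket '{bucket_name}' already exists and belongs to someone else; "
                "choose a unique bucket_name in zappa_settings.json"
            )
        raise
```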
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Set `bucket_name` to any name that already exists and does not belong to your user. In my case it was "zappa-predict" at us-east-1
2. Try to run `zappa deploy`
|
closed
|
2021-02-20T12:41:35Z
|
2022-07-16T06:24:36Z
|
https://github.com/zappa/Zappa/issues/738
|
[] |
jneves
| 1
|
Johnserf-Seed/TikTokDownload
|
api
| 532
|
Exception: local network request error. Exception: 0, message='Attempt to decode JSON with unexpected mimetype: text/plain; charset=utf-8', url=URL('https://www.douyin.com/aweme/v1/web/user/profile/other/?device_platform=webapp&aid=6383&sec_user_id=MS4wLjABAAAA4vgRHGrSG6rPlffm3RvwHWL8TBq7O4YnM5jHUNXz0-s&cookie_enabled=true& platform=PC&downlink=10&X-Bogus=DFSzswVuSYhANxXhtyf4ql9WX7J-')
|
**Describe the bug**
When you hit a bug, search the existing issues first; submit only if you cannot find a solution.
Give a clear and concise description of where the bug occurs.
**Reproducing the bug**
Steps to reproduce the behavior:
1. First changed such-and-such
2. Clicked such-and-such
3. "..."
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please fill in the following information):**
- OS: [e.g. Windows 10 64-bit]
- VPN/proxy: [e.g. on, off]
- Project version: [e.g. 1.4.2.2]
- Python version: [e.g. 3.11.1]
- Dependency versions: [version number of the library that errored]
**Additional notes**
Add any other remarks about this issue here.
|
closed
|
2023-08-31T16:45:48Z
|
2023-09-01T13:03:57Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/532
|
[
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] |
Hkxuan
| 1
|
amidaware/tacticalrmm
|
django
| 1,229
|
Request: Register all repos with OpenHub and Coverity Scan
|
Tactical RMM is already registered with OpenHub - https://www.openhub.net/p/tacticalrmm
It would be great for the project (and visibility) to register all the repos associated with Tactical RMM (core, agent, web) with both OpenHub and Coverity Scan - https://scan.coverity.com/ - a static analysis tool that is free for open source projects.
|
closed
|
2022-07-28T13:40:57Z
|
2022-10-26T12:09:47Z
|
https://github.com/amidaware/tacticalrmm/issues/1229
|
[
"help wanted"
] |
UranusBytes
| 1
|
pallets/flask
|
flask
| 4,604
|
Update of the downloadable documentation of flask2.0
|
I would love to be able to download the latest Flask 2.0 docs, since they're always a good resource. After the Flask update, the docs have been updated online, but the downloadable versions are still for Flask 1.x.x

|
closed
|
2022-05-21T02:08:44Z
|
2022-06-05T00:07:17Z
|
https://github.com/pallets/flask/issues/4604
|
[] |
forinda
| 2
|
kizniche/Mycodo
|
automation
| 1,016
|
Error 500 - internal server error
|
Hey mate, it's me Fumetin again xD
I was installing a new input on my new Raspberry Pi, and while trying to add what I believe was the "AnyLeaf pH meter I2C", I got this error, which doesn't let me get into the New Inputs page.
Here is the error:
Error (Full Traceback):
Traceback (most recent call last):
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_restx/api.py", line 652, in error_router
return original_handler(e)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask_login/utils.py", line 272, in decorated_view
return func(*args, **kwargs)
File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_page.py", line 2166, in page_input
devices_1wire_w1thermsensor=devices_1wire_w1thermsensor)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 140, in render_template
ctx.app,
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/flask/templating.py", line 120, in _render
rv = template.render(context)
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 1304, in render
self.environment.handle_exception()
File "/home/pi/Mycodo/env/lib/python3.7/site-packages/jinja2/environment.py", line 925, in handle_exception
raise rewrite_traceback_stack(source=source)
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/input.html", line 3, in top-level template code
{% set help_page = ["https://kizniche.github.io/Mycodo/Inputs/", dict_translation['input']['title']] %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/layout.html", line 375, in top-level template code
{%- block body %}{% endblock -%}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/input.html", line 35, in block 'body'
{% include 'pages/data_options/input.html' %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/data_options/input.html", line 201, in top-level template code
{% include 'pages/form_options/Interface.html' %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/form_options/Interface.html", line 1, in top-level template code
{% if 'interface' in dict_options['options_disabled'] %}
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'options_disabled'
Thanks for your time and such an incredible program (I've already been running it on another Raspberry Pi for the last 3 months without any issue).
|
closed
|
2021-05-30T15:24:59Z
|
2021-06-05T16:32:42Z
|
https://github.com/kizniche/Mycodo/issues/1016
|
[
"duplicate"
] |
Fumetin
| 1
|
google-research/bert
|
nlp
| 1,010
|
Finding results on STS Benchmark dataset
|
Hi,
Can someone explain how the results on https://paperswithcode.com/sota/semantic-textual-similarity-on-sts-benchmark are obtained? What architecture do BERT and ALBERT follow to get the output?
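For context, one common approach (not necessarily the one behind every leaderboard entry) is to embed both sentences with a BERT-style encoder and score them with cosine similarity, or to fine-tune with a regression head on the sentence pair. A minimal sketch with the sentence-transformers package (the model name and sentences are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model name
s1 = "A man is playing a guitar."
s2 = "Someone is playing an instrument."

e1, e2 = model.encode([s1, s2], convert_to_tensor=True)
print(util.cos_sim(e1, e2).item())  # compare against the gold STS-B score (0-5, rescaled)
```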
|
open
|
2020-02-25T05:11:10Z
|
2020-02-25T05:11:10Z
|
https://github.com/google-research/bert/issues/1010
|
[] |
dheerajiiitv
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 668
|
Confusion with CycleGAN testing regarding domains
|
Hi, I am trying to train a CycleGAN so that I can translate images from domain B to domain A. After training the initial model, I wanted to test it on some out-of-training-set images from domain B. So I used:
`python test.py --dataroot datasets/my_proj/testB --name my_cyclegan_proj --model test --no_dropout`
However, when I open the index.html to view the results, I am confused. As I understand it, real_A refers to a real image from domain A; however, the pictures with this label in the HTML file are my pictures from the folder testB, which are real images from domain B.
I've double-checked that my trainA, testA and trainB, testB folders contain only the images from their respective domains. Additionally, like some previous issues mentioned, I renamed `latest_net_G_B.pth -> latest_net_G.pth`
Does this have something to do with the --direction flag? Or am I still confused about the labeling of the results?
|
closed
|
2019-06-06T16:47:18Z
|
2024-09-02T16:54:27Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/668
|
[] |
patrick-han
| 4
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 998
|
UVR5
|
Last Error Received:
Process: VR Architecture
Missing file error raised. Please address the error and try again.
If this error persists, please contact the developers with the error details.
Raw Error Details:
FileNotFoundError: "[WinError 2] The system cannot find the file specified"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 1079, in seperate
File "separate.py", line 382, in final_process
File "separate.py", line 446, in write_audio
File "separate.py", line 419, in save_with_message
File "separate.py", line 393, in save_audio_file
File "separate.py", line 1318, in save_format
File "pydub\audio_segment.py", line 820, in from_wav
File "pydub\audio_segment.py", line 735, in from_file
File "pydub\utils.py", line 274, in mediainfo_json
File "subprocess.py", line 951, in __init__
File "subprocess.py", line 1420, in _execute_child
"
Error Time Stamp [2023-11-30 18:03:02]
Full Application Settings:
vr_model: 7_HP2-UVR
aggression_setting: 10
window_size: 1024
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 3
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Time Stretch
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 4.0
is_time_correction: True
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: True
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: MP3
wav_type_set: 64-bit Float
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
|
closed
|
2023-11-30T14:53:22Z
|
2023-11-30T15:03:48Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/998
|
[] |
Soorenaca
| 0
|
docarray/docarray
|
pydantic
| 1,544
|
Indexing field of DocList type (=subindex)is not yet supported.
|
I am creating a database with the following structure (a rough schema sketch is included after the questions below), but I am unable to index/search it:
User
-- Folder (multiple folders)
-- Document (multiple documents)
-- Sentences ( document split into sentences)
For the datastore, I get the error `Indexing field of DocList type (=subindex)is not yet supported.` when using HNSWLIB
1. What alternative datastore can I use that currently supports this case, or what alternative strategy could I use to implement the same thing?
2. Is searching in a sub-sub-index possible? Is there a direct function or an alternative strategy to use as of now?
3. I want to search for relevant sentences across all of the documents in one folder at once; how can I search through all of them at once?
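A rough sketch of the schema I mean, assuming docarray's `BaseDoc`/`DocList` API (field names and the embedding size are illustrative):
```python
from docarray import BaseDoc, DocList
from docarray.typing import NdArray


class Sentence(BaseDoc):
    text: str
    embedding: NdArray[128]  # illustrative dimensionality


class Document(BaseDoc):
    title: str
    sentences: DocList[Sentence]


class Folder(BaseDoc):
    name: str
    documents: DocList[Document]


class User(BaseDoc):
    name: str
    folders: DocList[Folder]  # indexing this nested DocList is what HNSWLib rejects
```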
|
closed
|
2023-05-16T14:03:27Z
|
2023-05-17T06:53:04Z
|
https://github.com/docarray/docarray/issues/1544
|
[] |
munish0838
| 4
|
ultralytics/ultralytics
|
pytorch
| 18,853
|
inference time
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi
Is there a script I can use to measure the inference time?
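For reference, a minimal sketch of one way to read the built-in timings (the weights and image path are placeholders; the `speed` attribute is assumed to report per-stage milliseconds as in recent releases):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # placeholder weights
results = model("bus.jpg")   # placeholder image

# Each Results object carries per-stage timings in milliseconds
print(results[0].speed)      # e.g. {'preprocess': ..., 'inference': ..., 'postprocess': ...}
```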
### Additional
_No response_
|
open
|
2025-01-23T16:08:44Z
|
2025-01-23T20:42:35Z
|
https://github.com/ultralytics/ultralytics/issues/18853
|
[
"question"
] |
MuhabHariri
| 5
|
strawberry-graphql/strawberry
|
graphql
| 3,473
|
Upgrade causes "no current event loop in thread" exception
|
Moving from 0.225.1 to 0.227.2, I get event loop exceptions like:
https://wedgworths-inc.sentry.io/share/issue/1ea8832512944d21b11adc8dedcacef7/
I don't use async, so I'm confused why the exception is in that area of the code.
I wonder if it has something to do with 63dfc89fc799e3b50700270fe243d3f53d543412
|
closed
|
2024-04-26T16:55:49Z
|
2025-03-20T15:56:42Z
|
https://github.com/strawberry-graphql/strawberry/issues/3473
|
[
"bug"
] |
paltman
| 1
|
miguelgrinberg/flasky
|
flask
| 308
|
Purpose of seed() in User model
|
```python
class User(UserMixin, db.Model):
...
def generate_fake(count=100):
from sqlalchemy.exc import IntegrityError
from random import seed
import forgery_py
seed()
for i in range(count):
u = User(email=forgery_py.internet.email_address(),
username=forgery_py.internet.user_name(True),
password=forgery_py.lorem_ipsum.word(),
confirmed=True,
```
What's the purpose of calling `seed()` just above the loop?
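For context, a small sketch of what `random.seed()` with no argument does (it reseeds the generator from OS entropy or the current time rather than fixing the sequence):
```python
import random

random.seed(42)   # fixed seed: the numbers below are the same on every run
print([random.randint(0, 9) for _ in range(3)])

random.seed()     # no argument: reseed from OS entropy / current time
print([random.randint(0, 9) for _ in range(3)])  # differs between runs
```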
|
closed
|
2017-10-25T07:31:42Z
|
2017-10-26T14:19:29Z
|
https://github.com/miguelgrinberg/flasky/issues/308
|
[
"question"
] |
noisytoken
| 2
|
recommenders-team/recommenders
|
deep-learning
| 1,616
|
[BUG] Error in integration tests with SASRec
|
### Description
<!--- Describe your issue/bug/request in detail -->
we got this error in the nightly integration tests:
```
tests/integration/examples/test_notebooks_gpu.py ..............F [100%]
=================================== FAILURES ===================================
_ test_sasrec_quickstart_integration[/recsys_data/RecSys/SASRec-tf2/data/-1-128-expected_values0-42] _
notebooks = {'als_deep_dive': '/home/recocat/myagent/_work/1/s/examples/02_model_collaborative_filtering/als_deep_dive.ipynb', 'al...p_dive': '/home/recocat/myagent/_work/1/s/examples/02_model_collaborative_filtering/cornac_bivae_deep_dive.ipynb', ...}
output_notebook = 'output.ipynb', kernel_name = 'python3'
data_dir = '/recsys_data/RecSys/SASRec-tf2/data/', num_epochs = 1
batch_size = 128, expected_values = {'Hit@10': 0.4244, 'ndcg@10': 0.2626}
seed = 42
@pytest.mark.gpu
@pytest.mark.integration
@pytest.mark.parametrize(
"data_dir, num_epochs, batch_size, expected_values, seed",
[
(
"/recsys_data/RecSys/SASRec-tf2/data/",
1,
128,
{"ndcg@10": 0.2626, "Hit@10": 0.4244},
42,
)
],
)
def test_sasrec_quickstart_integration(
notebooks,
output_notebook,
kernel_name,
data_dir,
```
link: https://dev.azure.com/best-practices/recommenders/_build/results?buildId=56130&view=logs&j=11a98e2c-5412-5289-91fd-519c9484010b&t=bd06083a-594e-5d8b-fa42-0b1a3b2f6a78
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
FYI @anargyri @aeroabir
|
closed
|
2022-01-19T14:14:01Z
|
2022-01-19T20:12:33Z
|
https://github.com/recommenders-team/recommenders/issues/1616
|
[
"bug"
] |
miguelgfierro
| 1
|
dask/dask
|
scikit-learn
| 11,226
|
Negative lookahead suddenly incorrectly parsed
|
In Dask 2024.2.1 we suddenly have an issue with a regex containing a negative lookahead: it is now reported as invalid.
```python
import dask.dataframe as dd
regex = 'negativelookahead(?!/check)'
ddf = dd.from_dict(
{
"test": ["negativelookahead", "negativelookahead/check/negativelookahead", ],
},
npartitions=1)
ddf["test"].str.contains(regex).head()
```
This results in the following error:
```python
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
Cell In[2], line 8
2 regex = 'negativelookahead(?!/check)'
3 ddf = dd.from_dict(
4 {
5 "test": ["negativelookahead", "negativelookahead/check/negativelookahead", ],
6 },
7 npartitions=1)
----> 8 ddf["test"].str.contains(regex).head()
File /opt/conda/lib/python3.10/site-packages/dask_expr/_collection.py:702, in FrameBase.head(self, n, npartitions, compute)
700 out = new_collection(expr.Head(self, n=n, npartitions=npartitions))
701 if compute:
--> 702 out = out.compute()
703 return out
File /opt/conda/lib/python3.10/site-packages/dask_expr/_collection.py:476, in FrameBase.compute(self, fuse, **kwargs)
474 out = out.repartition(npartitions=1)
475 out = out.optimize(fuse=fuse)
--> 476 return DaskMethodsMixin.compute(out, **kwargs)
File /opt/conda/lib/python3.10/site-packages/dask/base.py:375, in DaskMethodsMixin.compute(self, **kwargs)
351 def compute(self, **kwargs):
352 """Compute this dask collection
353
354 This turns a lazy Dask collection into its in-memory equivalent.
(...)
373 dask.compute
374 """
--> 375 (result,) = compute(self, traverse=False, **kwargs)
376 return result
File /opt/conda/lib/python3.10/site-packages/dask/base.py:661, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
658 postcomputes.append(x.__dask_postcompute__())
660 with shorten_traceback():
--> 661 results = schedule(dsk, keys, **kwargs)
663 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
File /opt/conda/lib/python3.10/site-packages/dask_expr/_expr.py:3727, in Fused._execute_task(graph, name, *deps)
3725 for i, dep in enumerate(deps):
3726 graph["_" + str(i)] = dep
-> 3727 return dask.core.get(graph, name)
File /opt/conda/lib/python3.10/site-packages/dask_expr/_accessor.py:102, in FunctionMap.operation(obj, accessor, attr, args, kwargs)
100 @staticmethod
101 def operation(obj, accessor, attr, args, kwargs):
--> 102 out = getattr(getattr(obj, accessor, obj), attr)(*args, **kwargs)
103 return maybe_wrap_pandas(obj, out)
File /opt/conda/lib/python3.10/site-packages/pyarrow/compute.py:263, in _make_generic_wrapper.<locals>.wrapper(memory_pool, options, *args, **kwargs)
261 if args and isinstance(args[0], Expression):
262 return Expression._call(func_name, list(args), options)
--> 263 return func.call(args, options, memory_pool)
File /opt/conda/lib/python3.10/site-packages/pyarrow/_compute.pyx:385, in pyarrow._compute.Function.call()
File /opt/conda/lib/python3.10/site-packages/pyarrow/error.pxi:154, in pyarrow.lib.pyarrow_internal_check_status()
File /opt/conda/lib/python3.10/site-packages/pyarrow/error.pxi:91, in pyarrow.lib.check_status()
ArrowInvalid: Invalid regular expression: invalid perl operator: (?!
```
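For what it's worth, the ArrowInvalid comes from pyarrow's RE2 engine, which does not support lookaround. A possible workaround, assuming the `dataframe.convert-string` option behaves as in recent Dask releases, is to keep the column NumPy/object-backed so pandas' `re` engine evaluates the pattern:
```python
import dask
import dask.dataframe as dd

# Keep strings as NumPy-backed object dtype instead of pyarrow strings, so that
# .str.contains falls back to pandas/re, which supports (?!...) lookahead.
dask.config.set({"dataframe.convert-string": False})

regex = "negativelookahead(?!/check)"
ddf = dd.from_dict(
    {"test": ["negativelookahead", "negativelookahead/check/negativelookahead"]},
    npartitions=1,
)
print(ddf["test"].str.contains(regex).head())
```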
**Environment**:
- Dask version: 2024.2.1
- Python version: 3.10
- Operating System: Linux
- Install method (conda, pip, source): pip
|
closed
|
2024-07-15T07:23:02Z
|
2024-07-17T12:59:24Z
|
https://github.com/dask/dask/issues/11226
|
[
"needs triage"
] |
manschoe
| 3
|
ARM-DOE/pyart
|
data-visualization
| 751
|
There is a bug in pyart/io/cfradial.py
|
When I read the code of cfradial.py, I found a value named 'radar_reciever_bandwidth' at pyart/io/cfradial.py line 62. This is wrong because it is named 'radar_receiver_bandwidth' in other files. It may cause other errors; I hope you can fix it soon.
|
closed
|
2018-06-06T08:58:36Z
|
2018-06-07T01:55:06Z
|
https://github.com/ARM-DOE/pyart/issues/751
|
[] |
YvZheng
| 4
|
sunscrapers/djoser
|
rest-api
| 46
|
is_authenticated() doesn't work in the template
|
is_authenticated() doesn't work in the template with 'rest_framework.authentication.TokenAuthentication' installed.
How can I fix it?
|
closed
|
2015-05-12T17:22:57Z
|
2015-05-13T08:28:52Z
|
https://github.com/sunscrapers/djoser/issues/46
|
[] |
ghost
| 2
|
babysor/MockingBird
|
deep-learning
| 437
|
[Long-term] Training to clone a specific person's voice & finetune
|
[AyahaShirane](https://github.com/AyahaShirane)
For targeted training, refer to this video: MockingBird数据集制作教程-手把手教你克隆海子姐的声线 (bilibili) <https://www.bilibili.com/video/BV1dq4y137pH> (a MockingBird dataset-making tutorial that walks through cloning a specific streamer's voice).
In practice, training roughly 20K more steps on top of an existing model is enough to shift it to the desired voice and intonation. If you want a general-purpose Taiwanese accent, collect datasets from as many speakers as possible; otherwise the model will lean toward one particular speaker's accent, and phrasing and pauses also seem to be affected by the new dataset.
Reference: #380
> The author, constrained by limited time and energy recently, can only handle some small bugs single-handedly, and has also seen that many enthusiasts and developers in the issue area want to learn from the project or adapt it to better meet their own needs, but the discussion is too scattered to take shape. To keep the project and the AI delivering more value to everyone, and so we can learn together, I have created long-term discussion channels in the issue area by topic; if a thread gets more than 20 comments, a corresponding chat group will also be set up.
> - How to tune parameters for a more realistic cloning effect 435
> - How to modify the model for better results 436
> - Training to clone a specific person's voice & finetune 437
> - Academic / paper discussion / training analysis 438
> - Cross-language support 440
> - Engineering / new-scenario discussion (never for malicious use & stay legal and compliant) 439
|
open
|
2022-03-07T15:14:08Z
|
2023-11-20T06:15:40Z
|
https://github.com/babysor/MockingBird/issues/437
|
[
"discussion"
] |
babysor
| 18
|
deeppavlov/DeepPavlov
|
nlp
| 1,519
|
Could not find a version that satisfies the requirement uvloop==0.14.0 (from deeppavlov)
|
Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
Please enter all the information below, otherwise your issue may be closed without a warning.
**DeepPavlov version** (you can look it up by running `pip show deeppavlov`):
**Python version**:
python==3.6
**Operating system** (ubuntu linux, windows, ...):
windows
**Issue**:
**Content or a name of a configuration file**:
```
```
**Command that led to error**:
```
pip install deeppavlov==0.17.0
pip install deeppavlov==0.15.0
```
**Error (including full traceback)**:
```
Collecting pymorphy2==0.8
Using cached pymorphy2-0.8-py2.py3-none-any.whl (46 kB)
Collecting uvloop==0.14.0
Using cached uvloop-0.14.0.tar.gz (2.0 MB)
ERROR: Command errored out with exit status 1:
command: 'D:\anaconda\envs\deeplove\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\64518\\AppData\\Local\\Temp\\pip-install-l6up2mdb\\uvloop_b30d837ca1cd4ee69a0d711ff1872366\\setup.py'"'"'; __file__=
'"'"'C:\\Users\\64518\\AppData\\Local\\Temp\\pip-install-l6up2mdb\\uvloop_b30d837ca1cd4ee69a0d711ff1872366\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools imp
ort setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\64518\AppData\Local\Temp\pip-pip-egg-info-ucpd9os3'
cwd: C:\Users\64518\AppData\Local\Temp\pip-install-l6up2mdb\uvloop_b30d837ca1cd4ee69a0d711ff1872366\
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\64518\AppData\Local\Temp\pip-install-l6up2mdb\uvloop_b30d837ca1cd4ee69a0d711ff1872366\setup.py", line 15, in <module>
raise RuntimeError('uvloop does not support Windows at the moment')
RuntimeError: uvloop does not support Windows at the moment
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/84/2e/462e7a25b787d2b40cf6c9864a9e702f358349fc9cfb77e83c38acb73048/uvloop-0.14.0.tar.gz#sha256=123ac9c0c7dd71464f58f1b4ee0bbd81285d96cdda8bc3519281b8973e3a461e (from https://pypi.org/si
mple/uvloop/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement uvloop==0.14.0 (from deeppavlov) (from versions: 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.4.10, 0.4.11, 0.4.12, 0.4.13, 0.4.14, 0.4.15, 0.4.16, 0.4.17, 0.4.18
, 0.4.19, 0.4.20, 0.4.21, 0.4.22, 0.4.23, 0.4.24, 0.4.25, 0.4.26, 0.4.27, 0.4.28, 0.4.29, 0.4.30, 0.4.31, 0.4.32, 0.4.33, 0.4.34, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.5.5, 0.6.0, 0.6.5, 0.6.6, 0.6.7, 0.6.8, 0.7.0, 0.7.1, 0.7.2, 0.8.0, 0.8.1, 0.9
.0, 0.9.1, 0.10.0, 0.10.1, 0.10.2, 0.10.3, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.12.0rc1, 0.12.0, 0.12.1, 0.12.2, 0.13.0rc1, 0.13.0, 0.14.0rc1, 0.14.0rc2, 0.14.0, 0.15.0, 0.15.1)
ERROR: No matching distribution found for uvloop==0.14.0
```
|
closed
|
2022-01-28T09:16:07Z
|
2022-04-01T11:46:48Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1519
|
[
"bug"
] |
645187919
| 2
|
aminalaee/sqladmin
|
fastapi
| 776
|
with ... as ... statement can make the session close, which will lead to DetachedInstanceError
|
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
sqladmin/models
```
def _run_query_sync(self, stmt: ClauseElement) -> Any:
with self.session_maker(expire_on_commit=False) as session:
result = session.execute(stmt)
return result.scalars().unique().all()
```
In the code, the use of a with ... as ... statement to manage the session leads to an unexpected closure of the session, which may cause the following error in some cases:
sqlalchemy.orm.exc.DetachedInstanceError:
> https://docs.sqlalchemy.org/en/20/errors.html#parent-instance-x-is-not-bound-to-a-session-lazy-load-deferred-load-refresh-etc-operation-cannot-proceed
This error occurs because closing the session detaches instances that still require the session for lazy loading or other operations.
To resolve this, I modified the code to manage the session explicitly, avoiding its premature closure. This change prevents the DetachedInstanceError by keeping the session open until all required operations are complete.
```
def _run_query_sync(self, stmt: ClauseElement) -> Any:
session = self.session_maker(expire_on_commit=False)
result = session.execute(stmt)
return result.scalars().unique().all()
```
With this modification, the error no longer occurs, and the instances remain attached to the session as needed.
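For comparison, a sketch of an alternative that keeps the context manager (so the session is always closed) but detaches the loaded objects explicitly; this assumes plain SQLAlchemy 2.x behavior and is not a proposed sqladmin patch:
```python
def _run_query_sync(self, stmt):
    # Keep the context manager so the session is always closed, but detach the
    # loaded objects before returning them. Anything that would still be
    # lazy-loaded afterwards must instead be eager-loaded on the statement,
    # e.g. stmt.options(sqlalchemy.orm.selectinload(Model.children)).
    with self.session_maker(expire_on_commit=False) as session:
        result = session.execute(stmt)
        rows = result.scalars().unique().all()
        session.expunge_all()  # detached objects keep their already-loaded attributes
        return rows
```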
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
Windows 10 / ptyhon 3.10.11 / sqladmin 0.16.1
### Additional context
_No response_
|
open
|
2024-05-30T08:41:13Z
|
2024-06-07T05:06:09Z
|
https://github.com/aminalaee/sqladmin/issues/776
|
[] |
ChengChe106
| 8
|
TencentARC/GFPGAN
|
deep-learning
| 279
|
styleGAN 256x256
|
Hello, is there any pretrained StyleGAN2 at 256x256 px provided? The FFHQ dataset is too large to download, and training from scratch may take a long time.
|
open
|
2022-10-04T02:16:00Z
|
2022-10-09T03:21:44Z
|
https://github.com/TencentARC/GFPGAN/issues/279
|
[] |
greatbaozi001
| 2