| repo_name (string, 9-75) | topic (30 classes) | issue_number (int64, 1-203k) | title (string, 1-976) | body (string, 0-254k) | state (2 classes) | created_at (string, 20) | updated_at (string, 20) | url (string, 38-105) | labels (list, 0-9) | user_login (string, 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
ymcui/Chinese-LLaMA-Alpaca-2
|
nlp
| 500
|
Error when using vllm for inference/deployment of the chinese-alpaca-2-7b-64k model with inference_hf.py
|
### Checklist before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some problems have already been solved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the issues without finding a similar problem or solution.
- [x] For third-party plugin problems, e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), it is also recommended to look for a solution in the corresponding project.
### Issue type
Model quantization and deployment
### Base model
Others
### Operating system
Linux
### Describe the problem in detail
```
# Command used
python scripts/inference/inference_hf.py --base_model model/chinese-alpaca-2-7b-64k --with_prompt --interactive --use_vllm
```
### Dependencies (required for code-related issues)
```
bitsandbytes    0.41.1
peft            0.3.0
sentencepiece   0.1.99
torch           2.1.2
torchvision     0.16.2
transformers    4.36.2
```
### Run logs or screenshots
```
USE_XFORMERS_ATTENTION: True
STORE_KV_BEFORE_ROPE: False
Traceback (most recent call last):
  File "/hy-tmp/Aplaca2/Chinese-LLaMA-Alpaca-2-main/scripts/inference/inference_hf.py", line 129, in <module>
    model = LLM(model=args.base_model,
  File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 105, in __init__
    self.llm_engine = LLMEngine.from_engine_args(engine_args)
  File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 304, in from_engine_args
    engine_configs = engine_args.create_engine_configs()
  File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 218, in create_engine_configs
    model_config = ModelConfig(self.model, self.tokenizer,
  File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/config.py", line 101, in __init__
    self.hf_config = get_config(self.model, trust_remote_code, revision)
  File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 35, in get_config
    raise e
  File "/usr/local/miniconda3/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 23, in get_config
    config = AutoConfig.from_pretrained(
  File "/usr/local/miniconda3/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1099, in from_pretrained
    return config_class.from_dict(config_dict, **unused_kwargs)
  File "/usr/local/miniconda3/lib/python3.10/site-packages/transformers/configuration_utils.py", line 774, in from_dict
    config = cls(**config_dict)
  File "/usr/local/miniconda3/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 160, in __init__
    self._rope_scaling_validation()
  File "/usr/local/miniconda3/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 180, in _rope_scaling_validation
    raise ValueError(
ValueError: `rope_scaling` must be a dictionary with with two fields, `type` and `factor`, got {'factor': 16.0, 'finetuned': True, 'original_max_position_embeddings': 4096, 'type': 'yarn'}
```
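The failure is in config validation: the 64K model's `config.json` declares YaRN rope scaling with four fields, while the installed transformers 4.36 validator only accepts the two-field `{type, factor}` form. Upgrading to a transformers/vllm combination that understands YaRN is the clean fix. As a stopgap, a sketch like the following (hypothetical local config path; `normalize_rope_scaling` and `patch_config` are not library functions) strips the config down to the two accepted fields. Note this drops the YaRN-specific fields, which can hurt long-context quality, and validators that also whitelist the `type` value may still reject `yarn`:

```python
import json

def normalize_rope_scaling(rope_scaling):
    # Keep only the two fields strict validators accept.  The YaRN
    # extras (`finetuned`, `original_max_position_embeddings`) are
    # dropped, so this is a stopgap, not a substitute for upgrading.
    if rope_scaling is None:
        return None
    return {"type": rope_scaling["type"], "factor": float(rope_scaling["factor"])}

def patch_config(path):
    # `path` is a hypothetical local copy of the model's config.json.
    with open(path) as f:
        cfg = json.load(f)
    cfg["rope_scaling"] = normalize_rope_scaling(cfg.get("rope_scaling"))
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
```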
|
closed
|
2024-01-12T10:59:41Z
|
2024-02-10T01:36:06Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/500
|
[
"stale"
] |
hoohooer
| 6
|
pywinauto/pywinauto
|
automation
| 599
|
Custom Type Object Won't Take Click
|
Currently I'm using pywinauto along with Behave to test a desktop application, and I have encountered a road bump: at one point in my automation I need to use a double click, which I currently have working like this:
```
@step("User selects {row} in Multi payment window")
def step_impl(context, row):
    """
    :param row: row that we are going to fill.
    :type context: behave.runner.Context
    """
    tries = 5
    for i in range(tries):
        try:
            context.popup[str(row)].click_input(button='left', double=True)
            break  # success, stop retrying
        except Exception:
            if i < tries - 1:  # i is zero indexed
                continue
            raise  # out of retries, surface the error
```
It works perfectly! But if I'm not present or the machine's session isn't open this will cause issues, because I'm using click_input(). I have tried using click(double=True) instead, but it returns this error: _AttributeError: Neither GUI element (wrapper) nor wrapper method 'click' were found (typo?)_
Is there any way for me to get around this? I need to be able to run in a VM without having a session open.
This is the result of running print_control_identifiers(). The items I'm trying to double-click are Row 0 and Row 1; they are custom items.

|
open
|
2018-11-06T16:19:56Z
|
2018-11-10T14:47:43Z
|
https://github.com/pywinauto/pywinauto/issues/599
|
[
"question"
] |
LeoDOD
| 3
|
widgetti/solara
|
fastapi
| 402
|
Media placeholder Jupyter dashboard tutorial
|
closed
|
2023-11-27T14:55:40Z
|
2023-11-27T20:15:09Z
|
https://github.com/widgetti/solara/issues/402
|
[] |
maartenbreddels
| 1
|
|
feature-engine/feature_engine
|
scikit-learn
| 786
|
yeo-johnson inverse transform throws an error
|
```
InvalidIndexError Traceback (most recent call last)
File ~\Documents\Repositories\envs\fe_not\lib\site-packages\pandas\core\series.py:1289, in Series.__setitem__(self, key, value)
1288 try:
-> 1289 self._set_with_engine(key, value, warn=warn)
1290 except KeyError:
1291 # We have a scalar (or for MultiIndex or object-dtype, scalar-like)
1292 # key that is not present in self.index.
File ~\Documents\Repositories\envs\fe_not\lib\site-packages\pandas\core\series.py:1361, in Series._set_with_engine(self, key, value, warn)
1360 def _set_with_engine(self, key, value, warn: bool = True) -> None:
-> 1361 loc = self.index.get_loc(key)
1363 # this is equivalent to self._values[key] = value
File ~\Documents\Repositories\envs\fe_not\lib\site-packages\pandas\core\indexes\range.py:418, in RangeIndex.get_loc(self, key)
417 raise KeyError(key)
--> 418 self._check_indexing_error(key)
419 raise KeyError(key)
File ~\Documents\Repositories\envs\fe_not\lib\site-packages\pandas\core\indexes\base.py:6059, in Index._check_indexing_error(self, key)
6056 if not is_scalar(key):
6057 # if key is not a scalar, directly raise an error (the code below
6058 # would convert to numpy arrays and raise later any way) - GH29926
-> 6059 raise InvalidIndexError(key)
InvalidIndexError: 64 True
682 True
960 True
1384 True
1100 True
...
763 True
835 True
1216 True
559 True
684 True
Name: LotArea, Length: 1022, dtype: bool
During handling of the above exception, another exception occurred:
IndexingError Traceback (most recent call last)
Cell In [21], line 1
----> 1 train_unt = tf.inverse_transform(train_t)
2 test_unt = tf.inverse_transform(test_t)
File c:\users\sole\documents\repositories\feature_engine\feature_engine\transformation\yeojohnson.py:181, in YeoJohnsonTransformer.inverse_transform(self, X)
178 X = self._check_transform_input_and_state(X)
180 for feature in self.variables_:
--> 181 X[feature] = self._inverse_transform_series(
182 X[feature], lmbda=self.lambda_dict_[feature]
183 )
185 return X
File c:\users\sole\documents\repositories\feature_engine\feature_engine\transformation\yeojohnson.py:195, in YeoJohnsonTransformer._inverse_transform_series(self, X, lmbda)
193 x_inv[pos] = np.exp(X[pos]) - 1
194 else: # lmbda != 0
--> 195 x_inv[pos] = np.power(X[pos] * lmbda + 1, 1 / lmbda) - 1
197 # when x < 0
198 if lmbda != 2:
File ~\Documents\Repositories\envs\fe_not\lib\site-packages\pandas\core\series.py:1329, in Series.__setitem__(self, key, value)
1324 raise KeyError(
1325 "key of type tuple not found and not a MultiIndex"
1326 ) from err
1328 if com.is_bool_indexer(key):
-> 1329 key = check_bool_indexer(self.index, key)
1330 key = np.asarray(key, dtype=bool)
1332 if (
1333 is_list_like(value)
1334 and len(value) != len(self)
(...)
1339 # _where call below
1340 # GH#44265
File ~\Documents\Repositories\envs\fe_not\lib\site-packages\pandas\core\indexing.py:2662, in check_bool_indexer(index, key)
2660 indexer = result.index.get_indexer_for(index)
2661 if -1 in indexer:
-> 2662 raise IndexingError(
2663 "Unalignable boolean Series provided as "
2664 "indexer (index of the boolean Series and of "
2665 "the indexed object do not match)."
2666 )
2668 result = result.take(indexer)
2670 # fall through for boolean
IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
```
The error can be reproduced by applying inverse_transform to the code we currently have in the user guide as a demo.
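The traceback shows the root problem: `x_inv[pos]` assigns through a boolean mask whose pandas index no longer matches. The alignment problem can be sidestepped by doing the inverse on plain numpy arrays, where boolean masks index positionally. Below is an independent sketch of the Yeo-Johnson forward/inverse pair (not feature_engine's implementation) showing the branch-by-branch inversion:

```python
import numpy as np

def yeo_johnson(x, lmbda):
    # Forward transform, split on the sign of x.
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    if lmbda != 0:
        out[pos] = ((x[pos] + 1) ** lmbda - 1) / lmbda
    else:
        out[pos] = np.log1p(x[pos])
    if lmbda != 2:
        out[~pos] = -(((1 - x[~pos]) ** (2 - lmbda) - 1) / (2 - lmbda))
    else:
        out[~pos] = -np.log1p(-x[~pos])
    return out

def yeo_johnson_inverse(y, lmbda):
    # The transform maps 0 to 0 monotonically, so the sign of y
    # selects the same branch as the sign of the original x.
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    pos = y >= 0
    if lmbda != 0:
        out[pos] = np.power(y[pos] * lmbda + 1, 1 / lmbda) - 1
    else:
        out[pos] = np.expm1(y[pos])
    if lmbda != 2:
        out[~pos] = 1 - np.power(1 - (2 - lmbda) * y[~pos], 1 / (2 - lmbda))
    else:
        out[~pos] = 1 - np.exp(-y[~pos])
    return out
```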
|
closed
|
2024-07-17T09:14:48Z
|
2024-08-23T17:21:04Z
|
https://github.com/feature-engine/feature_engine/issues/786
|
[] |
solegalli
| 0
|
521xueweihan/HelloGitHub
|
python
| 2,681
|
[Open-Source Self-Recommendation] CardCarousel - the easiest-to-use iOS carousel component
|
- Project URL: https://github.com/YuLeiFuYun/CardCarousel
- Category: Swift
- Project tagline: a powerful and easy-to-use carousel component that even supports configuration via incantations
- Project description: CardCarousel gives you fine-grained control over the carousel: you can set the scroll direction, page size, page spacing, page alignment when scrolling stops, the scroll animation used during auto-scrolling, page transition effects, the paging threshold, the page deceleration rate for manual scrolling, and more. CardCarousel can be used in both UIKit and SwiftUI, supports chained calls, and offers a rich set of initializers whose parameters can be set with dot syntax. Better still, CardCarousel can also be configured through incantations.
- Highlights: finer control, better usability, and incantations
- Sample code:
```swift
CardCarousel(data: data) { (cell: CustomCell, index: Int, itemIdentifier: Item) in
cell.imageView.kf.setImage(with: url)
cell.indexLabel.backgroundColor = itemIdentifier.color
cell.indexLabel.text = itemIdentifier.index
}
.cardLayoutSize(widthDimension: .fractionalWidth(0.7), heightDimension: .fractionalHeight(0.7))
.cardTransformMode(.liner(minimumAlpha: 0.3))
.cardCornerRadius(10)
.move(to: view)
// Incantation (in the style of the song 《高级动物》)
CardCarousel(咒语: "矛盾,自私,好色,爱喜,无聊,善良,爱喜 贪婪,真诚 善变,暗淡 无奈,埋怨", 施法材料: data, 作用域: CGRect(x: 0, y: 100, width: 393, height: 200))
.法术目标(view)
// Equivalent to
CardCarousel(frame: CGRect(x: 0, y: 100, width: 393, height: 200), data: data)
.cardLayoutSize(widthDimension: .fractionalWidth(0.7), heightDimension: .fractionalHeight(0.7))
.cardTransformMode(.liner)
.scrollDirection(.rightToLeft)
.loopMode(.rollback)
.move(to: view)
```
- Screenshot:

|
closed
|
2024-01-27T11:54:03Z
|
2024-04-24T12:12:53Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2681
|
[] |
YuLeiFuYun
| 0
|
serengil/deepface
|
machine-learning
| 884
|
Memory usage in Windows Server is very high
|
I use the deepface package on Windows Server and it works well,
but the memory usage is very high.
Please tell me, is this memory usage normal?

|
closed
|
2023-11-04T19:41:38Z
|
2023-11-05T19:20:25Z
|
https://github.com/serengil/deepface/issues/884
|
[
"question"
] |
ghost
| 1
|
jschneier/django-storages
|
django
| 603
|
S3Boto3 listdir can no longer create buckets
|
Hi,
With the recent update of `listdir` in the S3Boto3 backend, an undocumented behavior has also changed; I am not sure if this is a bug or if it is intended.
Formerly, when performing a `listdir` on a non-existing bucket, the function would call `_get_or_create_bucket`, which would create the bucket if `AWS_AUTO_CREATE_BUCKET` was set to True, and listdir would then report the bucket as empty.
After this commit, https://github.com/jschneier/django-storages/commit/b606a5129bc4d0f9189145c80382ba74b63350ef
this is no longer the case: a "NoSuchBucket" error is raised if the bucket does not already exist.
I am not sure which is best. If we follow the principles of a CRUD API, listdir performs a GET request, and I would not expect it to modify the state of the remote resource by creating a bucket. But raising an error is also annoying, as checking whether a bucket exists before every call to listdir is not very practical.
A third option would be:
- When `AWS_AUTO_CREATE_BUCKET` is set to True, return that the bucket is empty `return ([], [])` even if the bucket does not exist, because the bucket will be created anyway at the first occasion and this setting is mainly used in test environments
- When `AWS_AUTO_CREATE_BUCKET` is set to False, raise the error to warn the user about creating the bucket first.
Please, let me know what you think about this or close the issue if the current behavior is intended
Thanks
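The third option above can be sketched as a tiny wrapper (hypothetical helper, not django-storages API): `list_fn` performs the real S3 listing and `is_missing_bucket` classifies the exception, so a missing bucket reads as empty only when auto-creation is enabled:

```python
def listdir_or_empty(list_fn, auto_create, is_missing_bucket):
    # `list_fn` performs the real listdir call; `is_missing_bucket`
    # decides whether an exception is the S3 NoSuchBucket error.
    # With auto-creation enabled, a missing bucket reads as empty,
    # since it will be created at the first write anyway.
    try:
        return list_fn()
    except Exception as exc:
        if auto_create and is_missing_bucket(exc):
            return ([], [])
        raise
```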
|
closed
|
2018-09-20T14:18:03Z
|
2020-02-03T06:08:02Z
|
https://github.com/jschneier/django-storages/issues/603
|
[
"s3boto"
] |
baldychristophe
| 1
|
capitalone/DataProfiler
|
pandas
| 856
|
Add documentation for `sampling_ratio` option
|
Related to PR #845: add documentation around the new `sampling_ratio` option parameter.
|
closed
|
2023-06-05T17:40:23Z
|
2023-06-28T17:24:04Z
|
https://github.com/capitalone/DataProfiler/issues/856
|
[
"Documentation"
] |
taylorfturner
| 1
|
falconry/falcon
|
api
| 1,950
|
ASGI mount
|
Hi, I'm really glad to see Falcon supporting ASGI - great job!
In some other ASGI frameworks (for example FastAPI, Starlette and BlackSheep) there is the ability to mount other ASGI apps at a certain route. For example:
```python
asgi_app = falcon.asgi.App()
asgi_app.mount('/admin/', some_other_asgi_app)
```
It's nice because you can include third party ASGI apps within your own app - in my case it's [Piccolo admin](https://github.com/piccolo-orm/piccolo_admin). It also lets you compose your app in interesting ways, by making it consist of smaller sub apps.
I wonder if you'd consider this in a future version of Falcon? Or if it's currently possible, and I'm unaware.
If you feel it's out of scope, please feel free to close this issue. Thanks.
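For reference, what `mount` does in those frameworks is a small prefix dispatch at the ASGI layer; a minimal sketch (not Falcon API) that could wrap a Falcon ASGI app today looks like:

```python
class Mount:
    # Routes to the first sub-app whose prefix matches, stripping the
    # prefix from `path` and recording it in `root_path`, like other
    # ASGI mount implementations; everything else goes to the default app.
    def __init__(self, default_app):
        self.default = default_app
        self.mounts = []

    def mount(self, prefix, app):
        self.mounts.append((prefix.rstrip("/"), app))

    async def __call__(self, scope, receive, send):
        path = scope.get("path", "")
        for prefix, app in self.mounts:
            if path == prefix or path.startswith(prefix + "/"):
                sub = dict(scope,
                           path=path[len(prefix):] or "/",
                           root_path=scope.get("root_path", "") + prefix)
                await app(sub, receive, send)
                return
        await self.default(scope, receive, send)
```

Here the default app would be the `falcon.asgi.App()` instance, and `app.mount('/admin/', some_other_asgi_app)` would dispatch `/admin/...` requests to the sub-app.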
|
open
|
2021-08-14T22:17:27Z
|
2023-07-24T10:34:26Z
|
https://github.com/falconry/falcon/issues/1950
|
[
"enhancement",
"proposal",
"community"
] |
dantownsend
| 1
|
ansible/ansible
|
python
| 84,680
|
Cron module fails to properly work under some cases on systems with systemd-cron
|
### Summary
Hi!
I've found a strange situation and spent a few days debugging it properly.
I started with a strange problem: ansible's cron module failed to install any jobs for any users (so I thought), throwing a (not very useful) Python traceback and
```
CronTabError: Unable to read crontab
```
error.
I also found that creating even an empty crontab (`crontab <(echo -n)`) fixes the issue.
So the problem only happens when the user has no (personal) crontab installed.
After endless hours of debugging I found that the problem occurs when Ansible calls [this](https://github.com/ansible/ansible/blob/cae4f90b21bc40c88a00e712d28531ab0261f759/lib/ansible/modules/cron.py#L539C35-L539C51) command.
`systemd-cron`'s `crontab` returns exit code `2` (!!!) when the user has no crontab yet.
(I guess [here](https://github.com/systemd-cron/systemd-cron/blob/5f6f344de122476a9585d09f3f335138d231066e/src/bin/crontab.cpp#L226C61-L226C67) is the source, but I'm not sure about that.)
Ansible [expects](https://github.com/ansible/ansible/blob/cae4f90b21bc40c88a00e712d28531ab0261f759/lib/ansible/modules/cron.py#L283) either `0` (success) or `1` (which, as the comment states, it takes to mean a missing crontab), and bails out otherwise.
I have already created an [issue](https://github.com/systemd-cron/systemd-cron/issues/163) on the systemd-cron repo asking about making it consistent with other crons, but I'm not sure there is much chance the author will fix it (and in any case a fixed release would not be shipped on all the LTS distros where I hit this issue).
Could this also be fixed on the Ansible side, by either not caring so much about the exit code, or by also supporting `rc=2`, please?
(I can make a PR if needed)
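The proposed fix amounts to a one-line relaxation of the rc check; a sketch of the tolerant logic (hypothetical helper, not the module's actual code):

```python
def crontab_is_empty(rc, stderr):
    # Vixie-style crontab returns 1 when the user has no personal
    # crontab; systemd-cron's crontab returns 2 for the same case.
    # Treat both as "no crontab yet", and anything else as a failure.
    if rc == 0:
        return False
    if rc in (1, 2):
        return True
    raise RuntimeError("Unable to read crontab (rc=%d): %s" % (rc, stderr))
```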
### Issue Type
Bug Report
### Component Name
cron
### Ansible Version
```console
$ ansible --version
ansible [core 2.18.2]
config file = None
configured module search path = ['/home/mva/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/mva/.local/pipx/venvs/ansible/lib/python3.12/site-packages/ansible
ansible collection location = /home/mva/.ansible/collections:/usr/share/ansible/collections
executable location = /home/mva/.local/bin/ansible
python version = 3.12.8 (main, Feb 3 2025, 02:21:59) [GCC 13.3.1 20241220] (/home/mva/.local/pipx/venvs/ansible/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_FORCE_COLOR(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
CACHE_PLUGIN(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = jsonfile
CACHE_PLUGIN_CONNECTION(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = .ansible/fact_caching
CONFIG_FILE() = /home/mva/.vcs_repos/alpha/ansbl/ansible.cfg
DEFAULT_BECOME(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
DEFAULT_BECOME_METHOD(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = sudo
DEFAULT_BECOME_USER(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = root
DEFAULT_FORKS(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = 7
DEFAULT_HASH_BEHAVIOUR(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = replace
DEFAULT_HOST_LIST(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = ['/home/mva/.vcs_repos/alpha/ansbl/meta/inv']
DEFAULT_STDOUT_CALLBACK(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = yaml
DEPRECATION_WARNINGS(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
EDITOR(env: EDITOR) = nvim
HOST_KEY_CHECKING(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
PAGER(env: PAGER) = less
RETRY_FILES_ENABLED(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = False
SYSTEM_WARNINGS(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
GALAXY_SERVERS:
BECOME:
======
runas:
_____
become_user(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = root
su:
__
become_user(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = root
sudo:
____
become_user(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = root
CACHE:
=====
jsonfile:
________
_uri(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = /home/mva/.vcs_repos/alpha/ansbl/.ansible/fact_caching
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
record_host_keys(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = False
ssh:
___
control_path(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = %(directory)s/%%h-%%p-%%r
host_key_checking(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
pipelining(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = True
ssh_args(/home/mva/.vcs_repos/alpha/ansbl/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=300s
```
### OS / Environment
Gentoo (host), Ubuntu 24.04 (target)
### Steps to Reproduce
```yaml
- ansible.builtin.cron:
name: "moo"
special_time: "reboot"
job: "echo"
state: "present"
```
### Expected Results
Cronjob added
### Actual Results
```console
fatal: [atlas_db]: FAILED! => changed=false
module_stderr: |-
OpenSSH_9.8p1, OpenSSL 3.3.2 3 Sep 2024
debug1: Reading configuration data [redacted]
debug1: [redacted] line 15: Applying options for *
debug3: [redacted] line 46: Including file [redacted] depth 0 (parse only)
debug1: Reading configuration data [redacted]
debug3: kex names ok: [diffie-hellman-group1-sha1]
debug3: kex names ok: [diffie-hellman-group1-sha1]
debug3: kex names ok: [diffie-hellman-group1-sha1]
debug3: [redacted] line 93: Including file [redacted] depth 0 (parse only)
debug1: Reading configuration data [redacted]
debug3: [redacted] line 96: Including file [redacted] depth 0 (parse only)
debug1: Reading configuration data [redacted]
debug3: [redacted] line 6: Including file [redacted] depth 1 (parse only)
debug1: Reading configuration data [redacted]
debug3: [redacted] line 7: Including file [redacted] depth 1 (parse only)
debug1: Reading configuration data [redacted]
debug3: [redacted] line 99: Including file [redacted] depth 0 (parse only)
debug1: Reading configuration data [redacted]
debug3: [redacted] line 100: Including file [redacted] depth 0 (parse only)
debug1: Reading configuration data [redacted]
debug3: [redacted] line 103: Including file [redacted] depth 0 (parse only)
debug1: Reading configuration data [redacted]
debug1: [redacted] line 105: Applying options for *
debug3: [redacted] line 106: Including file [redacted] depth 0
debug1: Reading configuration data [redacted]
debug1: [redacted] line 1: Applying options for *
debug2: add_identity_file: ignoring duplicate key ~/.ssh/fp/all/ed.pub
debug1: Reading configuration data /etc/ssh/ssh_config
debug3: /etc/ssh/ssh_config line 17: Including file /etc/ssh/ssh_config.d/20-systemd-ssh-proxy.conf depth 0
debug1: Reading configuration data /etc/ssh/ssh_config.d/20-systemd-ssh-proxy.conf
debug3: /etc/ssh/ssh_config line 17: Including file /etc/ssh/ssh_config.d/9999999gentoo-security.conf depth 0
debug1: Reading configuration data /etc/ssh/ssh_config.d/9999999gentoo-security.conf
debug3: /etc/ssh/ssh_config line 17: Including file /etc/ssh/ssh_config.d/9999999gentoo.conf depth 0
debug1: Reading configuration data /etc/ssh/ssh_config.d/9999999gentoo.conf
debug2: resolve_canonicalize: hostname 100.100.100.1 is address
debug1: Setting implicit ProxyCommand from ProxyJump: ssh -p 55222 -vvv -W '[%h]:%p' [redacted]
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '[redacted]
debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '[redacted]
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: auto-mux: Trying existing master at '[redacted]
debug2: fd 3 setting O_NONBLOCK
debug2: mux_client_hello_exchange: master version 4
debug3: mux_client_forwards: request forwardings: 0 local, 0 remote
debug3: mux_client_request_session: entering
debug3: mux_client_request_alive: entering
debug3: mux_client_request_alive: done pid = 17733
debug3: mux_client_request_session: session request sent
debug1: mux_client_request_session: master session id: 2
Traceback (most recent call last):
File "<stdin>", line 107, in <module>
File "<stdin>", line 99, in _ansiballz_main
File "<stdin>", line 47, in invoke_module
File "<frozen runpy>", line 226, in run_module
File "<frozen runpy>", line 98, in _run_module_code
File "<frozen runpy>", line 88, in _run_code
File "/tmp/ansible_ansible.builtin.cron_payload_n4yg75c0/ansible_ansible.builtin.cron_payload.zip/ansible/modules/cron.py", line 768, in <module>
File "/tmp/ansible_ansible.builtin.cron_payload_n4yg75c0/ansible_ansible.builtin.cron_payload.zip/ansible/modules/cron.py", line 630, in main
File "/tmp/ansible_ansible.builtin.cron_payload_n4yg75c0/ansible_ansible.builtin.cron_payload.zip/ansible/modules/cron.py", line 257, in __init__
File "/tmp/ansible_ansible.builtin.cron_payload_n4yg75c0/ansible_ansible.builtin.cron_payload.zip/ansible/modules/cron.py", line 279, in read
CronTabError: Unable to read crontab
debug3: mux_client_read_packet_timeout: read header failed: Broken pipe
debug2: Received exit status from master 1
module_stdout: ''
msg: |-
MODULE FAILURE: No start of json char found
See stdout/stderr for the exact error
rc: 1
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct
|
closed
|
2025-02-06T15:09:38Z
|
2025-02-25T14:00:07Z
|
https://github.com/ansible/ansible/issues/84680
|
[
"module",
"bug",
"affects_2.18"
] |
msva
| 5
|
redis/redis-om-python
|
pydantic
| 59
|
list and tuple fields could have other types than strings
|
'this Preview release, list and tuple fields can only contain strings. Problem field: . See docs: TODO'
|
closed
|
2022-01-01T08:29:00Z
|
2022-08-30T09:48:28Z
|
https://github.com/redis/redis-om-python/issues/59
|
[] |
gam-phon
| 1
|
pydata/pandas-datareader
|
pandas
| 383
|
Eurostat - mismatched tag
|
`eu_trade_since_2000 = web.DataReader("DS-043327", 'eurostat')`
gives the message
` File "<string>", line unknown
ParseError: mismatched tag: line 28, column 8`
That's not very informative. I have no idea what is going on at all. Part of the pip freeze output is:
>numpy==1.13.1
pandas==0.20.3
pandas-datareader==0.5.0
|
closed
|
2017-08-24T12:47:41Z
|
2019-09-26T21:20:30Z
|
https://github.com/pydata/pandas-datareader/issues/383
|
[] |
HristoBuyukliev
| 8
|
sczhou/CodeFormer
|
pytorch
| 207
|
Great job! How amazing, I was planning on reproducing the code myself today, but then it suddenly got updated!
|
open
|
2023-04-19T15:17:09Z
|
2023-04-19T15:20:10Z
|
https://github.com/sczhou/CodeFormer/issues/207
|
[] |
Liar-zzy
| 1
|
|
Sanster/IOPaint
|
pytorch
| 376
|
[Feature Request] Increase/decrease maximum base cursor size range.
|
Could it be possible to increase/decrease the maximum size range for the cursor? I'd love to be able to make my cursor as small as 1px-2px to get very exact in my masking of smaller images. If this can be adjusted on my own, I'd appreciate some guidance. And I don't mean that I need help figuring out how to work the normal cursor size slider; I'm asking for a change in the minimum and maximum cursor sizes, or assistance to do it myself. Any help is greatly appreciated! :)
|
closed
|
2023-09-21T23:05:38Z
|
2025-03-21T02:05:02Z
|
https://github.com/Sanster/IOPaint/issues/376
|
[
"stale"
] |
ArchAngelAries
| 2
|
sammchardy/python-binance
|
api
| 1,459
|
python-binance ThreadedWebsocketManager not working with Python 3.11 or 3.12?
|
**Describe the bug**
When I run the following code in PyCharm, it doesn’t print any information. However, if I run it in debug mode, the information appears. This causes the code to not function properly on Python 3.11 or 3.12.
**To Reproduce**
```
from binance import ThreadedWebsocketManager
def handle_socket_message(msg):
print(msg)
print(1, flush=True)
def main():
# socket manager using threads
twm = ThreadedWebsocketManager()
twm.start()
twm.start_multiplex_socket(callback=handle_socket_message, streams=['!miniTicker@arr'])
twm.start_futures_multiplex_socket(callback=handle_socket_message, streams=['!miniTicker@arr'])
# join the threaded managers to the main thread
while True:
twm.join(3)
print("join")
if __name__ == '__main__':
main()
```
**Expected behavior**
I need python-binance to work properly on Python 3.11 or 3.12 and to print the information.
**Environment (please complete the following information):**
- Python version: 3.11 or 3.12
- Virtual Env: conda
- OS: Mac, Ubuntu
- python-binance version 1.0.21
**Logs or Additional context**
This is running

This is debug

|
closed
|
2024-10-29T08:17:14Z
|
2024-10-30T01:11:17Z
|
https://github.com/sammchardy/python-binance/issues/1459
|
[] |
XiaoWXHang
| 5
|
horovod/horovod
|
machine-learning
| 4,043
|
NVIDIA CUDA TOOLKIT version to run Horovod in Conda Environment
|
Hi developers,
I wish to install Horovod inside a Conda environment, which requires NCCL from the NVIDIA CUDA Toolkit installed on the system. I just wanted to know which version of the NVIDIA CUDA Toolkit is required to build Horovod inside a conda env to run the PyTorch library.
Many thanks,
Pushkar
|
open
|
2024-05-10T06:56:06Z
|
2025-01-31T23:14:47Z
|
https://github.com/horovod/horovod/issues/4043
|
[
"wontfix"
] |
ppandit95
| 2
|
huggingface/datasets
|
computer-vision
| 6,791
|
`add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1)
|
### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a `ValueError`. The following is the trace:
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 The vectors are implicitly numbered in sequence. When `n` vectors are
(...)
224 `dtype` must be float32.
225 """
--> 227 n, d = x.shape
228 assert d == self.d
229 x = np.ascontiguousarray(x, dtype='float32')
ValueError: not enough values to unpack (expected 2, got 1)
```
### Steps to reproduce the bug
1. Load any dataset like `ds = datasets.load_dataset("wikimedia/wikipedia", "20231101.en")["train"]`
2. Add a FAISS index on any column: `ds.add_faiss_index('title')`
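FAISS can only index a 2-D float32 matrix, so pointing `add_faiss_index` at a raw text column in step 2 hands it a 1-D array and the `n, d = x.shape` unpack fails. A tiny sketch of the shape contract (sample data hypothetical); in practice the column should hold embeddings, e.g. produced by an encoder via `ds.map`, before indexing:

```python
import numpy as np

titles = np.array(["Anarchism", "Autism"])              # raw text column: shape (2,)
embeddings = np.random.rand(2, 384).astype("float32")   # encoded column: shape (2, 384)

def faiss_ready(x):
    # Mirrors the `n, d = x.shape` unpack faiss performs in add().
    try:
        n, d = x.shape
    except ValueError:
        return False
    return x.dtype == np.float32
```

Here `faiss_ready(titles)` is False while `faiss_ready(embeddings)` is True, which is exactly the difference between indexing `'title'` directly and indexing an embeddings column.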
### Expected behavior
The index should be created
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.9.19
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0
- `faiss-cpu` version: 1.8.0
|
closed
|
2024-04-08T01:57:03Z
|
2024-04-11T15:38:05Z
|
https://github.com/huggingface/datasets/issues/6791
|
[] |
NeuralFlux
| 3
|
benbusby/whoogle-search
|
flask
| 250
|
[BUG] Whoogle spits out garbage when going to next page of search results
|
Whenever I try to go to the next page of a search (e.g. COVID-19), it spits out a ton of garbage.
The exact string is the following:
gAAAAABgZPRDBmNckg-txy85CufwUIccaLrnLWvW7gm9lyPJAXd8uFW1bFln-rKIyC3QxQAkoMDGjcZDgNlEtAS5_Kluz1OpGg==
It's the same no matter what search I do.
Steps to reproduce the behavior:
1. Search something
2. Go to the next page
3. See garbage in search bar
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [ ] Docker
- [ ] `run` executable
- [x] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [x] Version [v0.3.1]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: Windows 7 SP1 Ultimate 64-bit
- Browser Google Chrome
- Version 89
**Additional context**
Issues always occur
|
closed
|
2021-03-31T22:19:16Z
|
2021-04-27T13:46:19Z
|
https://github.com/benbusby/whoogle-search/issues/250
|
[
"bug"
] |
Rowan-Bird
| 5
|
aiortc/aiortc
|
asyncio
| 558
|
Does the Raspberry Pi 4B not support google-crc32c?
|
ity -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -c src/google_crc32c/_crc32c.c -o build/temp.linux-aarch64-3.7/src/google_crc32c/_crc32c.o
src/google_crc32c/_crc32c.c:3:10: fatal error: crc32c/crc32c.h: No such file or directory
#include <crc32c/crc32c.h>
^~~~~~~~~~~~~~~~~
compilation terminated.
ERROR:root:Compiling the C Extension for the crc32c library failed. To enable building / installing a pure-Python-only version, set 'CRC32C_PURE_PYTHON=1' in the environment.
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Failed building wheel for google-crc32c
Running setup.py clean for google-crc32c
Failed to build google-crc32c
Installing collected packages: pylibsrtp, google-crc32c, aiortc
Running setup.py install for google-crc32c ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-zemk6oyq/google-crc32c/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-_30_twxh/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.7
creating build/lib.linux-aarch64-3.7/google_crc32c
copying src/google_crc32c/cext.py -> build/lib.linux-aarch64-3.7/google_crc32c
copying src/google_crc32c/_checksum.py -> build/lib.linux-aarch64-3.7/google_crc32c
copying src/google_crc32c/__config__.py -> build/lib.linux-aarch64-3.7/google_crc32c
copying src/google_crc32c/__init__.py -> build/lib.linux-aarch64-3.7/google_crc32c
copying src/google_crc32c/python.py -> build/lib.linux-aarch64-3.7/google_crc32c
running build_ext
building 'google_crc32c._crc32c' extension
creating build/temp.linux-aarch64-3.7
creating build/temp.linux-aarch64-3.7/src
creating build/temp.linux-aarch64-3.7/src/google_crc32c
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -c src/google_crc32c/_crc32c.c -o build/temp.linux-aarch64-3.7/src/google_crc32c/_crc32c.o
src/google_crc32c/_crc32c.c:3:10: fatal error: crc32c/crc32c.h: No such file or directory
#include <crc32c/crc32c.h>
^~~~~~~~~~~~~~~~~
compilation terminated.
ERROR:root:Compiling the C Extension for the crc32c library failed. To enable building / installing a pure-Python-only version, set 'CRC32C_PURE_PYTHON=1' in the environment.
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
/------------------------
Raspberry pi:
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
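The failure can be sidestepped the way the build log itself suggests: export `CRC32C_PURE_PYTHON=1` so pip installs the pure-Python fallback instead of compiling the C extension (the `pip install` line is commented out here; run it in your own environment):

```shell
# The build log's own hint: skip the C extension entirely and install the
# pure-Python implementation of google-crc32c.
export CRC32C_PURE_PYTHON=1
# pip install google-crc32c
echo "CRC32C_PURE_PYTHON=$CRC32C_PURE_PYTHON"
```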
|
closed
|
2021-09-03T10:14:20Z
|
2021-09-06T01:04:11Z
|
https://github.com/aiortc/aiortc/issues/558
|
[] |
Canees
| 2
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,409
|
About transfer learning
|
closed
|
2022-04-18T10:05:36Z
|
2022-04-18T10:05:43Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1409
|
[] |
ZhenyuLiu-SYSU
| 0
|
|
deepfakes/faceswap
|
deep-learning
| 722
|
cuDNN is installed, but not found
|
**Describe the bug**
I have installed cuDNN, but it is not found.
**To Reproduce**
python setup.py
**Expected behavior**
WARNING Running without root/admin privileges
INFO The tool provides tips for installation
and installs required python packages
INFO Setup in Windows 10
INFO Installed Python: 3.6.8 64bit
INFO Encoding: cp936
INFO Upgrading pip...
INFO Installed pip: 19.1.1
Enable Docker? [y/N] n
INFO Docker Disabled
Enable CUDA? [Y/n] y
INFO CUDA Enabled
INFO CUDA version: 10.1
ERROR cuDNN not found. See https://github.com/deepfakes/faceswap/blob/master/INSTALL.md#cudnn for instructions
WARNING The minimum Tensorflow requirement is 1.12.
Tensorflow currently has no official prebuild for your CUDA, cuDNN combination.
Either install a combination that Tensorflow supports or build and install your own tensorflow-gpu.
CUDA Version: 10.1
cuDNN Version:
Help:
Building Tensorflow: https://www.tensorflow.org/install/install_sources
Tensorflow supported versions: https://www.tensorflow.org/install/source#tested_build_configurations
Location of custom tensorflow-gpu wheel (leave blank to manually install):
INFO Checking System Dependencies...
INFO CMake version: 3.14.3
INFO Visual Studio 2015 version: 14.0
INFO Visual Studio C++ version: v14.0.24215.01
INFO 1. Install PIP requirements
You may want to execute `chcp 65001` in cmd line
to fix Unicode issues on Windows when installing dependencies
**Screenshots**



**Desktop (please complete the following information):**
- Windows 10
|
closed
|
2019-05-10T15:00:49Z
|
2019-05-10T15:09:23Z
|
https://github.com/deepfakes/faceswap/issues/722
|
[] |
chenkarl
| 1
|
Nemo2011/bilibili-api
|
api
| 698
|
[Question] Problem uploading a video: AttributeError: 'NoneType' object has no attribute '__dict__'. Did you mean: '__dir__'?
|
**Python version:** 3.10
**Module version:** 16.2.0
**Environment:** Windows
<!-- Be sure to provide the module version and make sure it is the latest -->
---
I followed the video-upload example from the documentation and changed the credential, the video cover, and the video file, but I ran into this error:
AttributeError: 'NoneType' object has no attribute '__dict__'. Did you mean: '__dir__'?
|
closed
|
2024-03-01T06:33:22Z
|
2024-03-15T14:14:36Z
|
https://github.com/Nemo2011/bilibili-api/issues/698
|
[
"bug",
"solved"
] |
RickyCui010
| 8
|
snarfed/granary
|
rest-api
| 46
|
Duplicate in-reply-to links on Tweets
|
In the last week or so, I've noticed Twitter posts have started showing up with duplicated in-reply-to links:
```
<article class="h-entry h-as-note">
<span class="u-uid">tag:twitter.com:653670712104738816</span>
<time class="dt-published" datetime="2015-10-12T20:36:57+00:00">2015-10-12T20:36:57+00:00</time>
<div class="h-card p-author">
<div class="p-name"><a class="u-url" href="https://kylewm.com">Kyle Mahan</a></div>
<img class="u-photo" src="https://twitter.com/kylewmahan/profile_image?size=original" alt="">
</div>
<a class="u-url" href="https://twitter.com/kylewmahan/status/653670712104738816"></a>
<div class="e-content p-name">
<a href="https://twitter.com/WeWantPlates">@WeWantPlates</a> <a href="https://twitter.com/BethFad91">@BethFad91</a> oh good, it looks like most of the paint has already been scraped off by other people's utensils
</div>
<a class="u-in-reply-to" href="https://twitter.com/WeWantPlates/status/653649456454365184"></a>
<a class="u-in-reply-to" href="https://twitter.com/WeWantPlates/status/653649456454365184"></a>
</article>
```
I haven't looked into what's causing this yet
|
closed
|
2015-10-13T15:55:00Z
|
2015-10-13T18:05:29Z
|
https://github.com/snarfed/granary/issues/46
|
[] |
karadaisy
| 4
|
CanopyTax/asyncpgsa
|
sqlalchemy
| 101
|
Asyncpg connection are not returning to a pool
|
An asyncpg connection will not be returned to the pool if an exception is raised in the `__aenter__` method after `acquire_context`.
```
async def __aenter__(self):
self.acquire_context = self.pool.acquire(timeout=self.timeout)
con = await self.acquire_context.__aenter__()
self.transaction = con.transaction(**self.trans_kwargs)
await self.transaction.__aenter__()
return con
```
A CancelledError may be raised in `acquire_context` while awaiting `transaction.__aenter__`, and the connection then never returns to the pool.
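A sketch of a possible fix, using fake pool/transaction objects (the `Fake*` names are hypothetical stand-ins, not asyncpgsa's real classes): release the acquired connection before re-raising when entering the transaction fails:

```python
import asyncio

class FakePool:
    """Hypothetical stand-in for an asyncpg pool, counting checked-out conns."""
    def __init__(self):
        self.in_use = 0
    def acquire(self):
        return FakeAcquireContext(self)

class FakeAcquireContext:
    def __init__(self, pool):
        self.pool = pool
    async def __aenter__(self):
        self.pool.in_use += 1
        return object()  # the "connection"
    async def __aexit__(self, *exc):
        self.pool.in_use -= 1

class SafeTransactionContext:
    """Sketch of the fix: if entering the transaction raises (simulated here
    with CancelledError), release the connection before re-raising."""
    def __init__(self, pool, fail=False):
        self.pool = pool
        self.fail = fail
    async def __aenter__(self):
        self.acquire_context = self.pool.acquire()
        con = await self.acquire_context.__aenter__()
        try:
            if self.fail:  # stands in for `await self.transaction.__aenter__()`
                raise asyncio.CancelledError()
        except BaseException:
            # The fix: give the connection back before re-raising.
            await self.acquire_context.__aexit__(None, None, None)
            raise
        return con
    async def __aexit__(self, *exc):
        await self.acquire_context.__aexit__(*exc)

async def main():
    pool = FakePool()
    try:
        async with SafeTransactionContext(pool, fail=True):
            pass
    except asyncio.CancelledError:
        pass
    return pool.in_use

result = asyncio.run(main())
print(result)  # 0: the connection went back to the pool despite the error
```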
|
closed
|
2019-09-15T19:36:50Z
|
2019-09-19T19:21:19Z
|
https://github.com/CanopyTax/asyncpgsa/issues/101
|
[] |
matemax
| 0
|
numba/numba
|
numpy
| 9,650
|
Function with @guvectorize allow the index of array out of bound, not sure if this is in purpose
|
<!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self-contained code sample to reproduce the problem,
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
*Not sure whether this behavior is expected or can potentially be improved. I did not see much discussion of this issue on the internet, so if this is expected, please ignore this.*
Weird behavior:
When running a Python function decorated with @guvectorize, a numpy array inside that function can be indexed out of bounds without raising any exception.
Example code:
```
import numpy as np
from numba import vectorize, float64, guvectorize,int64
@guvectorize([(int64[:], int64, int64[:])], '(n),()->(n)')
def g(x, y, res):
for i in range(x.shape[0]):
res[i] = x[i + 10] + y # Add index 10 to make sure out of boundary for the array x
x = np.array([1,2,3])
y = np.array([1])
myRes = g(x, y)
print(myRes)
# No error, the printed result is:
#array([[ 12884901892, 1, 4075923963910]])
```
So the function above accesses the numpy array "x" with an out-of-bounds index inside "g".
But this does not generate any error, and the result contains random numbers.
----
I'm new to this kind of universal function, so I'm just guessing at the cause.
Maybe numba converts this Python code to another form to execute, like C code.
In C, an array name is a pointer and indexing is unchecked: the element address is simply the base address plus the index, so out-of-bounds access reads memory outside the array and produces this random output.
Not sure if that's correct; I would like some insight into this behavior, because it caused weird behavior in our model testing work. We tested the same function with the same input but got different outputs, and we eventually found an out-of-bounds index hiding in a loop, where the random array element value caused random function output.
Hope I can get some more insight on this behavior.
Thanks
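For what it's worth, the observed behavior is consistent with C-style unchecked indexing; plain NumPy, by contrast, does check bounds. Numba can reportedly be asked to check as well via the `NUMBA_BOUNDSCHECK=1` environment variable or `boundscheck=True` on `@njit` (treat both flags as things to verify against your numba version). A minimal contrast with plain NumPy:

```python
import numpy as np

x = np.array([1, 2, 3])

# Plain NumPy indexing is bounds-checked and raises IndexError, unlike the
# compiled guvectorize body, which reads past the end of the buffer silently.
try:
    _ = x[12]
    raised = False
except IndexError:
    raised = True
print(raised)  # True
```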
|
closed
|
2024-07-12T04:11:44Z
|
2024-08-22T01:52:33Z
|
https://github.com/numba/numba/issues/9650
|
[
"question",
"stale"
] |
BixiongXiang
| 3
|
tensorpack/tensorpack
|
tensorflow
| 733
|
The usage of dataflow
|
Will the get_data() and reset_state() methods be called only once, or at the beginning of each epoch?
I want to do some curriculum learning. If the get_data() method were called every epoch, I could record the epoch index in it and change the data as the epoch number increases. Currently I have a dataset consisting of millions of samples. I set steps_per_epoch to 3000 and the batch size is 8, so each epoch only 24k samples are used. I want to use the simplest samples in the first epoch and increase the difficulty as training goes on. But it seems that at the beginning of the second epoch, get_data() is not called again.
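One workaround consistent with get_data() not being re-called is keeping the epoch counter inside the generator itself. A rough sketch, not tensorpack-API-exact (real tensorpack epochs are bounded by steps_per_epoch rather than generator exhaustion), mirroring the get_data() name from the question:

```python
class CurriculumFlow:
    """Rough sketch (not tensorpack's exact API): keep the epoch counter
    inside the generator, since get_data() is only called once."""
    def __init__(self, buckets):
        # buckets[i] holds the samples for difficulty level i, easiest first;
        # the last bucket is reused once the levels run out.
        self.buckets = buckets
    def get_data(self):
        epoch = 0
        while True:
            level = min(epoch, len(self.buckets) - 1)
            for sample in self.buckets[level]:
                yield sample
            epoch += 1

flow = CurriculumFlow([[1, 2], [3, 4]])
gen = flow.get_data()
out = [next(gen) for _ in range(6)]
print(out)  # [1, 2, 3, 4, 3, 4]
```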
|
closed
|
2018-04-20T01:23:23Z
|
2018-05-30T20:59:41Z
|
https://github.com/tensorpack/tensorpack/issues/733
|
[
"usage"
] |
JesseYang
| 3
|
aminalaee/sqladmin
|
asyncio
| 415
|
Protocol, Domain & port with request.get_url over just reporting the path
|
### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
When using SQLAdmin behind a proxy, the URLs use 'http://' instead of 'https://'
This can be fixed by setting the Uvicorn proxy settings.
However, using full URLs will lead to many unnecessary issues. Using just paths as mentioned here will work fine in all cases; https://github.com/encode/starlette/issues/538#issuecomment-1135096753
### Steps to reproduce the bug
Run SQLAdmin behind a proxy.
### Expected behavior
All URLs should just be the subpath.
### Actual behavior
All URLs contain the protocol, domain (and optionally port), and finally the path.
### Debugging material
_No response_
### Environment
Python 3.8
SQLAdmin 0.8.0
### Additional context
_No response_
|
closed
|
2023-01-19T14:06:20Z
|
2023-03-08T20:34:18Z
|
https://github.com/aminalaee/sqladmin/issues/415
|
[
"waiting-for-feedback"
] |
Jorricks
| 5
|
davidsandberg/facenet
|
tensorflow
| 729
|
IndexError : index 1 is out of bounds for axis 0 with size 1
|
class_index = class_indices[i] — line 332 of the tripletloss.py file
I am using the LFW dataset with people per batch = 45 and images per person = 5. I also get this error when images per person is 40.
|
open
|
2018-04-30T04:07:17Z
|
2018-04-30T04:10:19Z
|
https://github.com/davidsandberg/facenet/issues/729
|
[] |
praveenkumarchandaliya
| 0
|
ultralytics/ultralytics
|
pytorch
| 18,892
|
Why do CLI results differ from Python?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question

vs
```
yolo predict model=yolo11m.pt source=video.avi show
```
The Python code gives no detections. Why? The weights are the same (default).
CLI results:
video 1/1 (frame 11/54025) C:\repos\restaurant\detection\video.avi: 480x640 5 persons, 19 chairs, 4 potted plants, 6 dining tables, 1 laptop, 1 vase, 11.7ms
### Additional
_No response_
|
closed
|
2025-01-25T22:50:01Z
|
2025-01-27T10:00:54Z
|
https://github.com/ultralytics/ultralytics/issues/18892
|
[
"question",
"detect"
] |
ankhafizov
| 2
|
iperov/DeepFaceLab
|
machine-learning
| 5,526
|
Train Quick96 press any key Forever
|
On step 6, after loading samples it says "Press any key", but nothing happens after pressing... Is there any way I can fix it? Thanks.
Running trainer.
[new] No saved models found. Enter a name of a new model : 1
1
Model first run.
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : NVIDIA GeForce GTX 1060 6GB
[0] Which GPU indexes to choose? :
0
Initializing models: 100%|###############################################################| 5/5 [00:01<00:00, 2.72it/s]
Loading samples: 100%|############################################################| 2951/2951 [00:05<00:00, 497.68it/s]
Loading samples: 100%|##########################################################| 33410/33410 [01:09<00:00, 478.53it/s]
Для продолжения нажмите любую клавишу . . . ("Press any key to continue . . .")
|
open
|
2022-05-29T12:27:07Z
|
2023-07-25T09:36:26Z
|
https://github.com/iperov/DeepFaceLab/issues/5526
|
[] |
huebez
| 5
|
pandas-dev/pandas
|
python
| 61,165
|
BUG: `datetime64[s]` fails round trip using `.to_parquet` and `read_parquet`
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
c = pd.Series(["2024-01-01", "2025-01-01", "2026-01-01"], dtype="datetime64[s]")
df0 = c.to_frame()
print(df0.dtypes)
df0.to_parquet("test.parquet")
df1 = pd.read_parquet("test.parquet")
print(df1.dtypes)
```
### Issue Description
The `dtype` changes from `datetime64[s]` to `datetime64[ms]`.
### Expected Behavior
I would expect the `dtype` to remain unchanged.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 6.8.0-1021-azure
Version : #25-Ubuntu SMP Wed Jan 15 20:45:09 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 8.34.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.1
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.39
tables : None
tabulate : None
xarray : None
xlrd : 2.0.1
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5
</details>
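Until the round trip preserves the unit, one workaround is simply casting back after reading (shown here without the parquet I/O itself; `read_parquet` is what would hand you the `ms`-resolution frame):

```python
import pandas as pd

# Simulate what read_parquet returns: the same data, but in "ms" resolution.
c = pd.Series(["2024-01-01", "2025-01-01", "2026-01-01"], dtype="datetime64[s]")
df_back = c.to_frame().astype("datetime64[ms]")
# Workaround: cast back to the unit that was originally written.
restored = df_back.astype("datetime64[s]")
print(restored.dtypes.iloc[0])
```
This requires pandas >= 2.0, where non-nanosecond datetime units are supported.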
|
closed
|
2025-03-21T23:39:25Z
|
2025-03-22T11:19:48Z
|
https://github.com/pandas-dev/pandas/issues/61165
|
[
"Bug",
"Datetime",
"IO Parquet"
] |
noahblakesmith
| 1
|
explosion/spaCy
|
nlp
| 12,611
|
support future pydantic v2
|
spaCy pins an older version of pydantic; please relax the pin to support 1.10.x and the forthcoming version 2.0.0.
|
closed
|
2023-05-08T17:00:15Z
|
2023-09-08T00:02:11Z
|
https://github.com/explosion/spaCy/issues/12611
|
[
"enhancement",
"third-party"
] |
achapkowski
| 8
|
httpie/cli
|
rest-api
| 1,006
|
Redirected output starts response headers on same line as request body
|
When I run a POST request without redirection, I see the response headers start on a new line:
```
}
}
}
}
HTTP/1.1 201
Date: Mon, 21 Dec 2020 13:39:00 GMT
Content-Length: 0
```
But when I redirect the output, I see the `HTTP/1.1 201` on the same line as the request:
```
}
}
}
}HTTP/1.1 201
Date: Mon, 21 Dec 2020 13:36:09 GMT
Content-Length: 0
```
The options I specified in each case were `http -v --pretty format --unsorted`; the only difference was in the second case I redirected the output to a file.
|
closed
|
2020-12-21T13:43:07Z
|
2021-02-06T11:19:42Z
|
https://github.com/httpie/cli/issues/1006
|
[
"bug"
] |
hughpv
| 5
|
man-group/arctic
|
pandas
| 205
|
stock tick data storing tutorial.
|
Hi, is there any tutorial for storing tick data, and for updating the data for my symbols?
|
closed
|
2016-08-30T03:15:59Z
|
2017-12-03T21:46:14Z
|
https://github.com/man-group/arctic/issues/205
|
[] |
leolle
| 18
|
serengil/deepface
|
machine-learning
| 980
|
cv:resize issue for functions.extract_faces
|
Hi, there seems to be an issue with the `functions.extract_faces` method (using ssd).
```
File C:\ProgramData\anaconda3\Lib\site-packages\deepface\commons\functions.py:211, in extract_faces(img, target_size, detector_backend, grayscale, enforce_detection, align)
205 factor = min(factor_0, factor_1)
207 dsize = (
208 int(current_img.shape[1] * factor),
209 int(current_img.shape[0] * factor),
210 )
--> 211 current_img = cv2.resize(current_img, dsize)
213 diff_0 = target_size[0] - current_img.shape[0]
214 diff_1 = target_size[1] - current_img.shape[1]
error: OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:4155: error: (-215:Assertion failed) inv_scale_x > 0 in function 'cv::resize'
```
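The assertion `inv_scale_x > 0` means cv2.resize received a zero-sized target. Reproducing the factor arithmetic from the quoted traceback (a standalone toy, not DeepFace's own code) shows how an extreme crop shape truncates one dsize component to zero:

```python
def dsize_for(img_shape, target_size):
    # Same arithmetic as the quoted functions.py lines 205-210:
    # factor is the smaller of the two target/current ratios,
    # and dsize components are truncated to int.
    factor = min(target_size[0] / img_shape[0], target_size[1] / img_shape[1])
    return (int(img_shape[1] * factor), int(img_shape[0] * factor))

# A degenerate 1 x 500 crop drives the height component to zero,
# which is exactly what makes cv2.resize assert.
print(dsize_for((1, 500), (224, 224)))  # (224, 0)
```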
|
closed
|
2024-01-28T20:28:28Z
|
2024-01-31T09:12:05Z
|
https://github.com/serengil/deepface/issues/980
|
[
"bug"
] |
fechnologies-d
| 7
|
pywinauto/pywinauto
|
automation
| 932
|
Panel
|
## Expected Behavior
## Actual Behavior
Unable to get the Control in the Static Panel and open the child window
## Steps to Reproduce the Problem
1.
2.
3.
## Short Example of Code to Demonstrate the Problem
## Specifications
- Pywinauto version:
- Python version and bitness:
- Platform and OS:
|
open
|
2020-05-14T10:36:03Z
|
2020-06-07T13:19:28Z
|
https://github.com/pywinauto/pywinauto/issues/932
|
[
"question"
] |
uvanesh
| 3
|
explosion/spaCy
|
data-science
| 13,264
|
Regex doesn't work if less than 3 characters?
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
Taken and adjusted right from the docs:
```python
import spacy
from spacy.matcher import Matcher
nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab, validate=True)
pattern = [
{
"TEXT": {
"regex": r"4K"
}
}
]
matcher.add("TV_RESOLUTION", [pattern])
doc = nlp("Sony 55 Inch 4K Ultra HD TV X90K Series:BRAVIA XR LED Smart Google TV, Dolby Vision HDR, Exclusive Features for PS 5 XR55X90K-2022 w/HT-A5000 5.1.2ch Dolby Atmos Sound Bar Surround Home Theater")
res = matcher(doc)
# res = []
```
However if I add a `D` after `4K` in both strings, a match is found. Is there a minimal length restriction?
```python
import spacy
from spacy.matcher import Matcher
nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab, validate=True)
pattern = [
{
"TEXT": {
"regex": r"4KD"
}
}
]
matcher.add("TV_RESOLUTION", [pattern])
doc = nlp("Sony 55 Inch 4KD Ultra HD TV X90K Series:BRAVIA XR LED Smart Google TV, Dolby Vision HDR, Exclusive Features for PS 5 XR55X90K-2022 w/HT-A5000 5.1.2ch Dolby Atmos Sound Bar Surround Home Theater")
res = matcher(doc)
# res = [[(11960903833032025891, 3, 4)]]
```
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System: macOS
* Python Version Used: 3.11.3
* spaCy Version Used: 3.7.2
* Environment Information: Nothing special
|
closed
|
2024-01-23T16:14:48Z
|
2024-02-23T00:05:21Z
|
https://github.com/explosion/spaCy/issues/13264
|
[
"feat / matcher"
] |
SHxKM
| 3
|
joerick/pyinstrument
|
django
| 168
|
Feature request: cumulated time / total time / ncalls statistics + report
|
I used pyinstrument today to find bottlenecks in my optical simulation code, and found it overall very helpful. The HTML report is very usable and looks great!
One feature I was missing (or didn't find :-)) compared to builtin cProfile, is the possibility to sort / display **cumulative time for individual functions**. I.e. total time spent in that function, regardless of the call stack above. This is really crucial to find "hot" functions, i.e. with short runtime but high call count.
In the simplest form, the HTML report could show this as on-hover popup; or make it more fancy and display a sorted list grouped by module / function...
If this is already possible, I'd appreciate a pointer on how to do it...
|
closed
|
2021-11-30T12:48:12Z
|
2022-11-06T18:22:40Z
|
https://github.com/joerick/pyinstrument/issues/168
|
[] |
loehnertj
| 4
|
dunossauro/fastapi-do-zero
|
sqlalchemy
| 234
|
Small Python version problem in lesson 10
|
I made this gist because I had a problem caused by the Python version in lesson 10
https://gist.github.com/fabiocasadossites/7194d9c6b36eed1452547d7ea8f24bef
|
closed
|
2024-08-25T20:23:44Z
|
2024-08-27T17:32:21Z
|
https://github.com/dunossauro/fastapi-do-zero/issues/234
|
[] |
fabiocasadossites
| 2
|
dmlc/gluon-cv
|
computer-vision
| 1,038
|
Issue with "pose estimation" using GPU
|
For this tutorial: https://gluon-cv.mxnet.io/build/examples_pose/cam_demo.html.
I tried GPU, but failed with problems like :
```
[22:57:26] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\nn\cudnn\./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
[22:57:47] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\nn\cudnn\./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
[22:57:47] c:\jenkins\workspace\mxnet-tag\mxnet\src\operator\nn\cudnn\./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (set the environment variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
[ WARN:0] global C:\projects\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (674) SourceReaderCB::~SourceReaderCB terminating async callback
```
It seems to be a problem with OpenCV, but why does it happen when the GPU is used?
I am using the latest GluonCV with MXNet 1.5 GPU CUDA 10 on Windows 10.
|
closed
|
2019-11-13T15:05:07Z
|
2021-06-07T07:04:29Z
|
https://github.com/dmlc/gluon-cv/issues/1038
|
[
"Stale"
] |
dbsxdbsx
| 4
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 549
|
Import Error
|
Hey, I am trying to run this code, and every time I run demo_toolbox.py I get the error "failed to load qt binding". I tried reinstalling matplotlib and also installing PyQt5.
Need help!
|
closed
|
2020-10-06T20:23:24Z
|
2020-10-12T09:55:04Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/549
|
[] |
jay-1104
| 5
|
Nemo2011/bilibili-api
|
api
| 298
|
[Suggestion] Fetching video danmaku should not require cookies
|
Looking at the code, I found that fetching a video's danmaku requires providing cookies, but most videos' danmaku can be fetched without cookies. Could this be changed so that danmaku can still be fetched when no credential is provided?
|
closed
|
2023-05-22T00:03:54Z
|
2023-05-24T11:17:20Z
|
https://github.com/Nemo2011/bilibili-api/issues/298
|
[] |
jhzgjhzg
| 4
|
Avaiga/taipy
|
data-visualization
| 2,293
|
Have part or dialog centered to the element clicked
|
### Description
Here, I have clicked on an icon and I have a dropdown menu of labels next to where I clicked:

Here, I have clicked on an icon and I see a dialog/part showing up next to where I clicked:

I want to do this generically so I can put anything in this part. If I click somewhere else, this dialog should disappear.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by unit tests.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
closed
|
2024-11-29T10:51:56Z
|
2024-12-17T18:15:45Z
|
https://github.com/Avaiga/taipy/issues/2293
|
[
"🖰 GUI",
"🟨 Priority: Medium",
"✨New feature",
"🔒 Staff only"
] |
FlorianJacta
| 15
|
inducer/pudb
|
pytest
| 84
|
IPython crashes when enabled with %pudb
|
If you use `%pudb` and then use `!` to enable IPython, it crashes (this is with IPython 1.0). The API has changed, I think. See https://github.com/inducer/pudb/pull/83.
|
open
|
2013-08-13T04:34:13Z
|
2014-01-25T20:17:55Z
|
https://github.com/inducer/pudb/issues/84
|
[] |
asmeurer
| 1
|
gradio-app/gradio
|
data-visualization
| 10,611
|
thinking=true in some models
|
- [X] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
The IBM model granite has a setting which allows for reasoning or not. You set thinking=true or false.
It's like this:
```python
input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
```
https://huggingface.co/ibm-granite/granite-3.2-8b-instruct-preview
We have no way of setting this on the vllm worker, from what I understand.
I can modify the tokenizer and have one version or the other, but that's cumbersome to say the least.
**Describe the solution you'd like**
A way to send additional parameters to the models.
|
closed
|
2025-02-17T20:09:40Z
|
2025-02-17T21:10:44Z
|
https://github.com/gradio-app/gradio/issues/10611
|
[] |
surak
| 2
|
3b1b/manim
|
python
| 1,824
|
Pip doesn't install a new enough numpy
|
### Describe the bug
I ran
```
$ pip install manimgl
$ manimgl
```
and got the error
```
import numpy.typing as npt
ModuleNotFoundError: No module named 'numpy.typing'
```
### Additional context
I have numpy 1.19, and numpy.typing requires numpy 1.20. I think the pip requirements file needs to specify "numpy >= 1.20" rather than just "numpy" as it does now.
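The suggested pin, as it might appear in manimgl's requirements file (the exact file layout is an assumption):

```
numpy>=1.20
```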
|
closed
|
2022-06-02T21:51:49Z
|
2022-06-04T08:04:34Z
|
https://github.com/3b1b/manim/issues/1824
|
[
"bug"
] |
thomasahle
| 3
|
twopirllc/pandas-ta
|
pandas
| 385
|
Stochastic Rsi is very different from trading view values (again without proof)
|
**Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
I ran a simple call to stochastic RSI with the same parameters 14,14,3,3. The results are much different from TradingView's values.
**To Reproduce**
```python
dt = ta.stochrsi(df['Close'], length=14, rsi_length=14, k=3, d=3)
df['momentum_stoch_rsi_d'] = dt['STOCHRSId_14_14_3_3']
df['momentum_stoch_rsi_k'] = dt['STOCHRSIk_14_14_3_3']
```
It is much different even though the parameters are the same.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
Thanks for using Pandas TA!
|
closed
|
2021-09-02T10:48:31Z
|
2021-09-02T15:19:44Z
|
https://github.com/twopirllc/pandas-ta/issues/385
|
[
"bug"
] |
hosseinghafarian
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 1,198
|
Error when training encoder
|
Hello, I am appealing to all who can and want to help. I have a problem when I run encoder training: the first time everything works fine, and then it gives an error. Here it is:
..........
Step 110 Loss: 3.9845 EER: 0.4027 Step time: mean: 31023ms std: 39773ms
Average execution time over 10 steps:
Blocking, waiting for batch (threaded) (10/10): mean: 26881ms std: 38848ms
Data to cpu (10/10): mean: 1ms std: 0ms
Forward pass (10/10): mean: 966ms std: 37ms
Loss (10/10): mean: 32ms std: 2ms
Backward pass (10/10): mean: 2471ms std: 54ms
Parameter update (10/10): mean: 7ms std: 1ms
Extras (visualizations, saving) (10/10): mean: 0ms std: 1ms
........Traceback (most recent call last):
File "Z:\Real-Time-Voice-Cloning-master\encoder_train.py", line 44, in <module>
train(**vars(args))
File "Z:\Real-Time-Voice-Cloning-master\encoder\train.py", line 71, in train
for step, speaker_batch in enumerate(loader, init_step):
File "C:\Users\Professional\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 634, in __next__
data = self._next_data()
File "C:\Users\Professional\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 1326, in _next_data
return self._process_data(data)
File "C:\Users\Professional\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 1372, in _process_data
data.reraise()
File "C:\Users\Professional\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\_utils.py", line 644, in reraise
raise exception
Exception: Caught Exception in DataLoader worker process 2.
Original Traceback (most recent call last):
File "C:\Users\Professional\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\_utils\worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "C:\Users\Professional\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\_utils\fetch.py", line 54, in fetch
return self.collate_fn(data)
File "Z:\Real-Time-Voice-Cloning-master\encoder\data_objects\speaker_verification_dataset.py", line 55, in collate
return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames)
File "Z:\Real-Time-Voice-Cloning-master\encoder\data_objects\speaker_batch.py", line 9, in __init__
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "Z:\Real-Time-Voice-Cloning-master\encoder\data_objects\speaker_batch.py", line 9, in <dictcomp>
self.partials = {s: s.random_partial(utterances_per_speaker, n_frames) for s in speakers}
File "Z:\Real-Time-Voice-Cloning-master\encoder\data_objects\speaker.py", line 34, in random_partial
self._load_utterances()
File "Z:\Real-Time-Voice-Cloning-master\encoder\data_objects\speaker.py", line 18, in _load_utterances
self.utterance_cycler = RandomCycler(self.utterances)
File "Z:\Real-Time-Voice-Cloning-master\encoder\data_objects\random_cycler.py", line 14, in __init__
raise Exception("Can't create RandomCycler from an empty collection")
Exception: Can't create RandomCycler from an empty collection
What is wrong? How can I fix it?
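The exception means at least one speaker in the preprocessed dataset has an empty utterance list, so a quick sanity check is to scan for speaker folders with no files. A sketch (the helper name is hypothetical, and in practice you would point `root` at your preprocessed encoder dataset directory instead of the throwaway demo layout):

```python
import tempfile
from pathlib import Path

def find_empty_speaker_dirs(root: Path):
    """Speaker folders with no utterance files are exactly what makes
    RandomCycler raise 'empty collection' (hypothetical helper)."""
    return [d for d in root.iterdir() if d.is_dir() and not any(d.iterdir())]

# Demo on a throwaway layout; replace `root` with your encoder dataset path.
root = Path(tempfile.mkdtemp())
(root / "speaker_ok").mkdir()
(root / "speaker_ok" / "utt1.npy").touch()
(root / "speaker_empty").mkdir()
print([d.name for d in find_empty_speaker_dirs(root)])  # ['speaker_empty']
```
Deleting (or re-preprocessing) any directories it reports should let training get past this step.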
|
open
|
2023-04-19T10:38:46Z
|
2023-04-19T10:38:46Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1198
|
[] |
terminatormlp
| 0
|
scikit-image/scikit-image
|
computer-vision
| 6,890
|
Update Hausdorff Distance example to show usage as a segmentation metric and clarify docstring
|
### Description:
## What is the issue?
The current version of [the Hausdorff Distance example](https://scikit-image.org/docs/stable/auto_examples/segmentation/plot_hausdorff_distance.html#hausdorff-distance) computes the distance on a set of four points. The example, however, is a bit confusing, as generally Hausdorff Distance is used as a segmentation metric, and therefore starts from segmentation masks.
As the method itself takes as input parameters named `image0` and `image1`, it leads to some confusion where users may expect the method to work *directly* on the segmentation masks, instead of on "images **of contours**". This is particularly confusing since there is no direct method to compute a "contour image" based on a segmentation mask.
We can see this confusion in action in some uses of the metric on GitHub, sometimes in code accompanying published results [e.g. 1, 2].
[1] : "Unsupervised Nuclei Segmentation using Spatial Organization Priors" -- published in MICCAI 22 -- [metrics.py](https://github.com/loic-lb/Unsupervised-Nuclei-Segmentation-using-Spatial-Organization-Priors/blob/58200221430f19c955039d7bf56c0c0f9739ef87/performance/metrics.py), called from [objmetrics.py](https://github.com/loic-lb/Unsupervised-Nuclei-Segmentation-using-Spatial-Organization-Priors/blob/58200221430f19c955039d7bf56c0c0f9739ef87/performance/objmetrics.py) with the same arguments as the Dice score.
[2] : "Head and Neck Tumour Segmentation and Precition of Patient Survival" -- published in MICCAI 21 -- [metrics.py](https://github.com/EmmanuelleB985/Head-and-Neck-Tumour-Segmentation-and-Prediction-of-Patient-Survival/blob/bb36a0aa953367775d140abcc112342a50066759/src/Segmentation_Task/metrics.py) returns average of dice and hausdorff_distance, called with the same arguments.
## Possible improvements
The easiest way to mitigate the issue would probably be to:
* Update the **example** so that it starts from segmentation masks (ground truth and prediction) and shows how to create a "contours" image *then* compute the metric.
* Update the **docstring** so that it explicitly states that it expects contours image and not segmentation masks.
In the longer term, it may be useful to provide either a method to quickly generate a "contours" image from a binary mask (as the `find_contours` method returns a list of coordinates which is not compatible with the behaviour of `hausdorff_distance`), or an alternative method (e.g. `hausdorff_distance_from_masks`) that uses `find_contours` on the masks first.
### Possible updated example:
This could replace: https://github.com/scikit-image/scikit-image/blob/main/doc/examples/segmentation/plot_hausdorff_distance.py
```python
"""
==================
Hausdorff Distance
==================
This example shows how to calculate the Hausdorff distance between a "ground truth" and
a "predicted" segmentation mask. The `Hausdorff distance
<https://en.wikipedia.org/wiki/Hausdorff_distance>`__ is the maximum distance
between any point on the first set and its nearest point on the second set,
and vice-versa.
To use it as a segmentation metric, the contours of the masks have to be computed first.
In this example, this is done by removing the eroded mask from the mask itself.
"""
import numpy as np
import matplotlib.pyplot as plt
from skimage import metrics
from skimage.morphology import erosion, disk
# Creates a "ground truth" binary mask with a disk, and a partially overlapping "predicted" rectangle
ground_truth = np.zeros((100, 100), dtype=bool)
predicted = ground_truth.copy()
ground_truth[30:71, 30:71] = disk(20)
predicted[25:65, 40:70] = True
# Creates "contours" images by XOR-ing each mask with its erosion
se = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
gt_contour = ground_truth ^ erosion(ground_truth, se)
predicted_contour = predicted ^ erosion(predicted, se)
# Computes and displays the distance and the corresponding pair of points
distance = metrics.hausdorff_distance(gt_contour, predicted_contour)
pair = metrics.hausdorff_pair(gt_contour, predicted_contour)
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
plt.imshow(ground_truth)
plt.subplot(1, 3, 2)
plt.imshow(predicted)
plt.subplot(1, 3, 3)
plt.imshow(gt_contour)
plt.imshow(predicted_contour, alpha=0.5)
plt.plot([pair[0][1], pair[1][1]], [pair[0][0], pair[1][0]], 'wo-')
plt.title(f"HD={distance:.3f}px")
plt.show()
```
### Possible updated docstring
```python
def hausdorff_distance(image0, image1, method='standard'):
    """Calculate the Hausdorff distance between nonzero elements of given images.

    To use as a segmentation metric, the method should receive as input images
    containing the contours of the objects as nonzero elements.

    Parameters
    ----------
    image0, image1 : ndarray
        Arrays where ``True`` represents a point that is included in a
        set of points. Both arrays must have the same shape.
    method : {'standard', 'modified'}, optional, default = 'standard'
        The method to use for calculating the Hausdorff distance.
        ``standard`` is the standard Hausdorff distance, while ``modified``
        is the modified Hausdorff distance.

    Returns
    -------
    distance : float
        The Hausdorff distance between coordinates of nonzero pixels in
        ``image0`` and ``image1``, using the Euclidean distance.

    Notes
    -----
    The Hausdorff distance [1]_ is the maximum distance between any point on
    ``image0`` and its nearest point on ``image1``, and vice-versa.

    The Modified Hausdorff Distance (MHD) has been shown to perform better
    than the directed Hausdorff Distance (HD) in the following work by
    Dubuisson et al. [2]_. The function calculates forward and backward
    mean distances and returns the largest of the two.

    References
    ----------
    .. [1] http://en.wikipedia.org/wiki/Hausdorff_distance
    .. [2] M. P. Dubuisson and A. K. Jain. A Modified Hausdorff distance for
           object matching. In ICPR94, pages A:566-568, Jerusalem, Israel,
           1994. :DOI:`10.1109/ICPR.1994.576361`
           http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.1.8155

    Examples
    --------
    >>> points_a = (3, 0)
    >>> points_b = (6, 0)
    >>> shape = (7, 1)
    >>> image_a = np.zeros(shape, dtype=bool)
    >>> image_b = np.zeros(shape, dtype=bool)
    >>> image_a[points_a] = True
    >>> image_b[points_b] = True
    >>> hausdorff_distance(image_a, image_b)
    3.0
    """
```
### Possible additional function
A helper to compute the Hausdorff distance directly from segmentation masks using `find_contours`.
```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import find_contours
from skimage.morphology import disk


def hausdorff_distance_mask(image0, image1, method='standard'):
    """Calculate the Hausdorff distance between the contours of two segmentation masks.

    Parameters
    ----------
    image0, image1 : ndarray
        Arrays where ``True`` represents a pixel from a segmented object.
        Both arrays must have the same shape.
    method : {'standard', 'modified'}, optional, default = 'standard'
        The method to use for calculating the Hausdorff distance.
        ``standard`` is the standard Hausdorff distance, while ``modified``
        is the modified Hausdorff distance.

    Returns
    -------
    distance : float
        The Hausdorff distance between coordinates of the segmentation mask
        contours in ``image0`` and ``image1``, using the Euclidean distance.

    Notes
    -----
    The Hausdorff distance [1]_ is the maximum distance between any point on
    the contour of ``image0`` and its nearest point on the contour of
    ``image1``, and vice-versa.

    The Modified Hausdorff Distance (MHD) has been shown to perform better
    than the directed Hausdorff Distance (HD) in the following work by
    Dubuisson et al. [2]_. The function calculates forward and backward
    mean distances and returns the largest of the two.

    References
    ----------
    .. [1] http://en.wikipedia.org/wiki/Hausdorff_distance
    .. [2] M. P. Dubuisson and A. K. Jain. A Modified Hausdorff distance for
           object matching. In ICPR94, pages A:566-568, Jerusalem, Israel,
           1994. :DOI:`10.1109/ICPR.1994.576361`
           http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.1.8155

    Examples
    --------
    >>> ground_truth = np.zeros((100, 100), dtype=bool)
    >>> predicted = ground_truth.copy()
    >>> ground_truth[30:71, 30:71] = disk(20)
    >>> predicted[25:65, 40:70] = True
    >>> hausdorff_distance_mask(ground_truth, predicted)
    11.40175425099138
    """
    if method not in ('standard', 'modified'):
        raise ValueError(f'unrecognized method {method}')

    a_contours = find_contours(image0 > 0)
    b_contours = find_contours(image1 > 0)

    # Handle empty sets before concatenating, since np.concatenate
    # raises on an empty sequence of arrays:
    # - if both sets are empty, return zero
    # - if only one set is empty, return infinity
    if len(a_contours) == 0:
        return 0.0 if len(b_contours) == 0 else np.inf
    elif len(b_contours) == 0:
        return np.inf

    a_points = np.concatenate(a_contours)
    b_points = np.concatenate(b_contours)

    fwd, bwd = (
        cKDTree(a_points).query(b_points, k=1)[0],
        cKDTree(b_points).query(a_points, k=1)[0],
    )

    if method == 'standard':  # standard Hausdorff distance
        return max(max(fwd), max(bwd))
    elif method == 'modified':  # modified Hausdorff distance
        return max(np.mean(fwd), np.mean(bwd))
```
|
open
|
2023-04-13T08:28:20Z
|
2023-10-26T11:48:50Z
|
https://github.com/scikit-image/scikit-image/issues/6890
|
[
":pray: Feature request"
] |
adfoucart
| 9
|
axnsan12/drf-yasg
|
django
| 762
|
Not able to group any ListAPIView using Tags
|
I am not able to group any ListAPIView using tags. This seems to happen only for ListAPIView. There is no error or warning in the Django debug console. The particular API gets grouped in the default untagged group. Any ideas on how to overcome this?
```
class GET_CurrencyList_API(generics.ListAPIView):
    """
    List Currencies.

    API for Listing currencies.

    **ORDERING**:

    **Tags**: Currencies
    """
    permission_classes = [permissions.AllowAny]
    pagination_class = LargeResultsSetPagination
    filter_backends = (filters.DjangoFilterBackend,)
    serializer_class = CurrecyListSerializer
    filterset_class = CurrenciesListFilter
    queryset = Currencies.objects.all()

    @swagger_auto_schema(tags=['Currency Ops'])
    def get_queryset(self):
        return self.queryset
```

I have tagged both APIs with the Currency Ops group, but only the non-ListAPIView views seem to be grouped. The other API uses generics.GenericAPIView.
|
open
|
2021-12-21T06:22:39Z
|
2025-03-21T10:49:34Z
|
https://github.com/axnsan12/drf-yasg/issues/762
|
[
"bug",
"help wanted",
"1.21.x"
] |
aibharata
| 2
|
serengil/deepface
|
deep-learning
| 537
|
RAM leak with multiple calls
|
Hello @serengil, many thanks for this awesome library!
I noticed a non-negligible memory leak when calling `DeepFace.analyze()` multiple times. I've seen #78 and your suggestion to not use the function in a `for` loop and to use `tf.keras.backend.clear_session()` to clear the tf graph.
Unfortunately, even calling `tf.keras.backend.clear_session()` at every iteration does not stop the memory leak. I really need to use a for loop, as I'm processing a large dataset of videos, so I need to extract the frames and pass them to the function (can't use filenames instead of actual frames).
The pseudo-code is the following:
```python
models = {}
models['race'] = DeepFace.build_model('Race')
dataset_results = list()
for v in videos:
    frame_list = extract_fixed_number_of_frames(v)
    deepface_result = DeepFace.analyze(frame_list, actions=['race'], models=models)
    dataset_results.append(deepface_result)
```
The dataset is too large to load it entirely and call `DeepFace.analyze([frame_list_1, ..., frame_list_v])`.
Do I have other options other than splitting the dataset into chunks that can fit my RAM considering the memory leak?
Do you have any idea on what part of the code is leaking memory? The models are pre-built and the face detector (opencv) is allocated once and retrieved as a global variable, so those should be ok.
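For reference, the chunking workaround I'm considering can be sketched in plain Python (this does not fix the underlying leak; `analyze_fn` and `chunk_size` are placeholders, and `tf.keras.backend.clear_session()` could be called where `gc.collect()` is, per the earlier suggestion):

```python
import gc


def chunked(items, chunk_size):
    """Yield successive fixed-size chunks from a list."""
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]


def process_in_chunks(videos, analyze_fn, chunk_size=50):
    """Process each chunk, keep only the (small) results, and force a
    garbage collection between chunks so large intermediates are freed."""
    results = []
    for chunk in chunked(videos, chunk_size):
        for v in chunk:
            results.append(analyze_fn(v))
        gc.collect()  # reclaim per-chunk temporaries before the next batch
    return results
```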
|
closed
|
2022-08-19T09:01:54Z
|
2022-08-20T11:14:55Z
|
https://github.com/serengil/deepface/issues/537
|
[
"question"
] |
nicobonne
| 6
|
nltk/nltk
|
nlp
| 3,024
|
nltk 3.7 requires explicit download of omw-1.4 on Linux
|
Consider the following script:
```
import nltk
nltk.download("wordnet")
nltk.corpus.wordnet.synsets("test")
```
This runs successfully on both Windows and Linux for nltk version 3.5; however, for version 3.7 it only succeeds on Windows and produces the following error on Linux:
> [nltk_data] Downloading package wordnet to
> [nltk_data] [redacted]/nltk_data...
> [nltk_data] Package wordnet is already up-to-date!
> Traceback (most recent call last):
> File "[redacted]/.venv/lib/python3.9/site-packages/nltk/corpus/util.py", line 84, in __load
> root = nltk.data.find(f"{self.subdir}/{zip_name}")
> File "[redacted]/.venv/lib/python3.9/site-packages/nltk/data.py", line 583, in find
> raise LookupError(resource_not_found)
> LookupError:
> \**********************************************************************
> Resource omw-1.4 not found.
> Please use the NLTK Downloader to obtain the resource:
>
> \>\>\> import nltk
> \>\>\> nltk.download('omw-1.4')
>
|
closed
|
2022-07-21T09:24:05Z
|
2024-11-18T13:38:10Z
|
https://github.com/nltk/nltk/issues/3024
|
[] |
lanzkron
| 8
|
facebookresearch/fairseq
|
pytorch
| 5,142
|
text after filtering OOV is empty output
|
Japanese TTS
downloaded the model with - wget https://dl.fbaipublicfiles.com/mms/tts/jvn.tar.gz
after running `infer.py`, there is no text after the line `text after filtering OOV:`
What's the problem?
|
closed
|
2023-05-24T03:02:51Z
|
2023-05-25T01:20:41Z
|
https://github.com/facebookresearch/fairseq/issues/5142
|
[
"bug",
"needs triage"
] |
lisea2017
| 8
|
tensorlayer/TensorLayer
|
tensorflow
| 535
|
Failed: TensorLayer (b10975ab)
|
*Sent by Read the Docs (readthedocs@readthedocs.org). Created by [fire](https://fire.fundersclub.com/).*
---
| TensorLayer build #7116813
---
| 
---
| Build Failed for TensorLayer (latest)
---
Error: Problem parsing YAML configuration. Invalid "python.version": expected one of (2, 2.7, 3, 3.5), got 3.6
You can find out more about this failure here:
[TensorLayer build #7116813](https://readthedocs.org/projects/tensorlayer/builds/7116813/) \- failed
If you have questions, a good place to start is the FAQ:
<https://docs.readthedocs.io/en/latest/faq.html>
You can unsubscribe from these emails in your [Notification Settings](https://readthedocs.org/dashboard/tensorlayer/notifications/)
Keep documenting,
Read the Docs
| Read the Docs
<https://readthedocs.org>
---

|
closed
|
2018-04-30T04:12:23Z
|
2018-04-30T04:26:32Z
|
https://github.com/tensorlayer/TensorLayer/issues/535
|
[] |
fire-bot
| 0
|
lux-org/lux
|
pandas
| 186
|
[SETUP] Failed building wheel for scikit-learn
|
I am working on a Ubuntu 18.04.5 LTS machine, and I am trying to install lux-api using pip as described in the docs. My installation exits on the following error:
error: Command "x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.7/dist-packages/numpy/core/include -I/usr/local/lib/python3.7/dist-packages/numpy/core/include -I/usr/include/python3.7m -c sklearn/__check_build/_check_build.c -o build/temp.linux-x86_64-3.7/sklearn/__check_build/_check_build.o -MMD -MF build/temp.linux-x86_64-3.7/sklearn/__check_build/_check_build.o.d -fopenmp" failed with exit status 1
Any help will be great! Thanks
|
closed
|
2020-12-23T20:01:14Z
|
2021-01-06T09:21:17Z
|
https://github.com/lux-org/lux/issues/186
|
[
"setup"
] |
vmreyes12
| 1
|
d2l-ai/d2l-en
|
tensorflow
| 1,779
|
The `devices` argument in the TF implementation of d2l.predict_seq2seq
|
The TF implementation of `d2l.predict_seq2seq` in https://github.com/d2l-ai/d2l-en/pull/1768/files?file-filters%5B%5D=.md#diff-dbae7acee5140a9e76359207c8a1b718efcc82d4e7fbd194d6036ab0ee8e2130R883 removes `devices` argument with a note "We don't need the `device` argument in TF as TF uses available device automatically."
@biswajitsahoo1111 and @terrytangyuan, does it hurt if we force specifying `device` as we do in the mxnet and pytorch implementations? One benefit is that the same `d2l.predict_seq2seq` function will share the same interface (same list of arguments, all passing `device`), allowing us to combine many code blocks and avoid redundancy.
|
open
|
2021-06-08T01:42:02Z
|
2023-10-31T14:20:58Z
|
https://github.com/d2l-ai/d2l-en/issues/1779
|
[
"tensorflow-adapt-track"
] |
astonzhang
| 5
|
davidsandberg/facenet
|
computer-vision
| 919
|
Training on a small dataset
|
I have a small dataset and I get the OutOfRangeError after a few epochs. Is it possible to use the dataset multiple times (e.g. `dataset.repeat()` )? How should I modify the code?
|
open
|
2018-11-13T14:20:13Z
|
2019-04-25T07:23:45Z
|
https://github.com/davidsandberg/facenet/issues/919
|
[] |
FSet89
| 1
|
vaexio/vaex
|
data-science
| 2,343
|
[BUG-REPORT] rename when the new name is already a column has unexpected results
|
so this is a tricky little bug
Because we were renaming but not dropping the original columns, _sometimes_ vaex wouldn't overwrite correctly (I'll make an issue in the vaex github).
You can run these to understand the issue fully
```
import vaex
import numpy as np
df = vaex.example()[["x","y"]]
df["data_x"] = np.random.rand(len(df))
df["data_y"] = np.random.rand(len(df))
df.rename("data_x", "x")
df.rename("data_y", "y")
```
This will work as expected. The dataframe will show 2 columns, x, and y, and the values will match that of data_x and data_y
This will _fail_
```
df = vaex.from_arrays(
data_x = np.random.rand(1000),
data_y = np.random.rand(1000),
x = np.random.rand(1000),
y = np.random.rand(1000)
)
df.rename("data_x", "x")
df.rename("data_y", "y")
```
The reason has to do with the state. If you look at the `state_get()` of either dataframe
`df.state_get()`
You'll see something like this
```
{'virtual_columns': {},
'column_names': ['x', 'y', 'x', 'y'],
'renamed_columns': [('data_x', 'x'), ('data_y', 'y')],
...
}
```
You see the columns are `["x", "y", "x", "y"]`
The _issue_ is that whichever x and y came second will be the ones used. So when we rename data_x and data_y, if they were "first" in the dataframe, the rename won't work as expected
## What should happen?
Ideally, if the column already exists, it should be renamed to a hidden `_column_` and the new one should take over.
But at the minimum, vaex should throw an error that you cannot rename to a column that already exists. One of these, but ideally the first
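The two proposed behaviours can be sketched with a toy column store (plain Python with hypothetical names, not vaex internals): renaming onto an existing name either hides the old column under a hidden name or raises:

```python
class ToyColumns:
    """Toy column store illustrating the two proposed rename behaviours."""

    def __init__(self, **columns):
        self.columns = dict(columns)

    def rename(self, old, new, on_conflict="hide"):
        if new in self.columns and new != old:
            if on_conflict == "raise":
                raise ValueError(f"cannot rename {old!r}: {new!r} already exists")
            # "hide": keep the shadowed data under a hidden name
            self.columns[f"__{new}"] = self.columns.pop(new)
        self.columns[new] = self.columns.pop(old)


df = ToyColumns(data_x=[1, 2], x=[9, 9])
df.rename("data_x", "x")   # old "x" becomes hidden "__x"
print(sorted(df.columns))  # ['__x', 'x']
```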
**Software information**
- Vaex version (`import vaex; vaex.__version__)`: 4.16.0
- Vaex was installed via: pip / conda-forge / from source
- OS:
|
open
|
2023-02-24T16:42:20Z
|
2023-02-24T16:42:47Z
|
https://github.com/vaexio/vaex/issues/2343
|
[] |
Ben-Epstein
| 0
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 727
|
Numpy requirement
|
Hello,
pytorch-metric-learning pins a numpy requirement that makes it hard to work with other packages needing numpy > 2.0.
Would it be possible to loosen the numpy requirement?
https://github.com/KevinMusgrave/pytorch-metric-learning/blob/60bab5ff9233de90b01a5c28d6a5c6cb02604640/setup.py#L42
|
closed
|
2024-10-31T14:49:41Z
|
2024-11-04T09:32:39Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/727
|
[] |
pchampio
| 2
|
pytorch/vision
|
computer-vision
| 8,270
|
CocoDetection dataset incompatible with Faster R-CNN model Training and mAP calculation
|
### 🐛 Describe the bug
### **🐛 Bug**
I would like to thank you for the Object Detection Finetuning tutorial. The **CocoDetection** dataset appears to be incompatible with the Faster R-CNN model; I have been using **transforms-v2-end-to-end-object-detection-segmentation-example** for COCO detection. The TorchVision Object Detection Finetuning tutorial specifies the format of datasets to be compatible with the Mask R-CNN model: the dataset's `__getitem__` method should output an image and a target with fields `boxes`, `labels`, `area`, etc. The CocoDetection dataset returns the raw COCO annotations as the target.
After training and evaluation (`engine.evaluate`), the mAP scores are always 0 for every epoch.
**Dataset:**
```Python
from torchvision.datasets import CocoDetection ,wrap_dataset_for_transforms_v2
transforms = v2.Compose(
[
v2.ToPILImage(),
v2.Resize(512),
v2.RandomPhotometricDistort(p=1),
v2.RandomZoomOut(fill={tv_tensors.Image: (123, 117, 104), "others": 0}),
v2.RandomIoUCrop(),
v2.RandomHorizontalFlip(p=1),
v2.ToTensor(),
v2.ToDtype(torch.float32, scale=True),
]
)
dataset = CocoDetection(root_dir, annotation_file,transforms=transforms)
dataset = wrap_dataset_for_transforms_v2(dataset, target_keys=("boxes", "labels"))
```
### **Expected behavior**
FasterRcnn model output from evaluation:
45514: {'boxes': tensor([[ 40.1482, 48.6490, 46.0760, 50.4945],
[ 56.4980, 98.9506, 59.9611, 99.8110],
[ 16.5955, 50.3514, 20.7766, 51.7256],
[ 7.7093, 49.7779, 9.9628, 51.1177],
[ 23.2416, 115.2277, 27.9833, 116.0603],
[ 6.2100, 43.7826, 12.1565, 44.4718],
[ 84.3244, 92.1326, 89.6173, 92.8679],
[ 27.8029, 111.4202, 33.1342, 112.3421],
[ 6.4772, 83.6187, 11.6571, 85.1347],
[ 12.0571, 57.7298, 17.3467, 58.6374],
[ 52.0026, 100.2936, 55.6111, 101.0397],
[ 32.9334, 95.2229, 36.7513, 96.0473],
[ 11.9714, 50.6148, 16.0876, 51.3073],
[ 36.5298, 99.2084, 40.1270, 100.3034],
[ 56.8915, 95.4639, 59.8486, 96.2372],
[ 29.7059, 95.3435, 34.5201, 96.1218],
[ 83.4291, 96.4723, 89.3993, 97.1828],
[ 80.5682, 114.7297, 86.4728, 115.2060],
[ 55.3361, 96.1351, 57.5648, 97.3579],
[ 87.8969, 120.9048, 91.9940, 122.8857],
[ 79.1790, 95.7387, 83.8181, 96.0961],
[ 5.2113, 81.8440, 12.3168, 82.5170],
[ 11.9503, 9.1723, 15.8027, 10.5436],
[ 43.5947, 115.0965, 46.6917, 116.0323],
[ 36.3678, 44.5311, 45.5149, 45.0957],
[ 64.0280, 91.6801, 70.0944, 92.6666],
[ 34.9408, 48.2833, 39.4942, 48.7989],
[ 44.6860, 34.5384, 48.7593, 35.4988],
[ 8.5666, 52.0507, 9.7412, 53.2962],
[ 59.0582, 114.7045, 62.3767, 115.6113],
[ 42.6140, 95.4168, 47.2140, 95.9096],
[ 51.6593, 116.3869, 54.5132, 117.3841],
[ 10.2391, 8.1375, 15.3591, 9.5619],
[ 79.1855, 103.1416, 83.1228, 104.2892],
[ 11.6779, 115.0183, 15.2959, 115.6937],
[ 92.2911, 64.1361, 97.0701, 65.2341],
[ 77.5316, 94.2480, 88.9283, 103.3851],
[ 20.0655, 29.8961, 25.1227, 31.4927],
[ 41.8090, 91.6322, 66.6452, 114.4692],
[ 1.7047, 99.8426, 5.1214, 100.9323],
[ 21.0609, 30.4280, 25.2658, 31.7394],
[ 77.2151, 111.6185, 83.4417, 112.1318],
[ 21.3434, 105.4950, 25.2204, 106.5172],
[ 0.0000, 100.4384, 44.0517, 127.3968],
[ 37.8296, 43.6599, 41.4970, 44.9153],
[101.9358, 28.7851, 107.8658, 29.7577],
[ 84.6480, 112.4950, 89.6441, 113.4983],
[ 32.0187, 48.2305, 34.6299, 49.9225],
[ 21.0508, 96.1484, 49.0644, 115.7840],
[ 78.7586, 91.0307, 83.5991, 92.0456],
[ 7.4563, 99.3216, 11.9333, 100.4296],
[ 41.8862, 39.9427, 48.0095, 40.6046],
[ 64.2320, 110.5276, 69.4041, 111.1726],
[ 48.5087, 35.6968, 51.0966, 36.3254],
[ 69.9470, 80.5252, 77.2012, 81.4588],
[ 64.5411, 5.3913, 69.1044, 6.1687],
[ 14.9313, 118.7279, 18.5300, 119.9807],
[ 67.1189, 75.9650, 74.1020, 76.6732],
[104.7447, 31.4134, 109.5322, 32.3514],
[ 68.2009, 112.8180, 71.6182, 113.7542],
[ 77.4721, 33.7746, 80.8393, 34.7606],
[ 9.3352, 80.4828, 12.7890, 82.0704],
[ 65.1386, 107.5109, 71.5020, 108.5092],
[ 0.0000, 95.9446, 24.0685, 127.6092],
[ 43.3848, 100.4263, 46.2075, 101.2998],
[ 14.2563, 116.5107, 17.7641, 117.3038],
[ 75.3176, 28.1107, 79.5012, 29.0837],
[ 21.1844, 99.2131, 24.2272, 100.0415],
[ 59.2131, 7.5139, 63.0848, 8.2906],
[ 0.0000, 48.6410, 43.9125, 57.1178],
[ 92.4588, 61.7449, 97.9917, 62.5649],
[ 0.0000, 40.3103, 21.4952, 74.7319],
[ 26.3296, 18.2271, 30.6895, 19.6554],
[ 24.1920, 8.1525, 29.7535, 9.5155],
[ 0.0000, 82.6814, 1.6050, 86.1069],
[ 69.8429, 91.4722, 74.5880, 92.6022],
[ 40.0346, 106.1212, 43.5103, 107.1371],
[ 77.5447, 109.2828, 81.9013, 110.5845],
[ 68.1803, 44.8517, 73.7433, 45.6382],
[ 0.0000, 84.0534, 3.4056, 85.0159],
[ 38.7503, 35.8580, 56.7992, 47.4848],
[ 50.1666, 31.5740, 53.6334, 32.6867],
[ 31.0113, 101.1863, 33.5362, 101.8402],
[ 53.7563, 9.7722, 55.8778, 10.9179],
[ 51.0325, 13.5929, 54.7412, 14.6228],
[ 18.1654, 104.8237, 21.8600, 105.6227],
[ 19.5623, 35.9696, 24.1356, 37.0714],
[ 69.2776, 28.0173, 88.3530, 39.2125],
[ 75.1365, 115.5374, 77.8445, 116.9997],
[ 31.0881, 58.5975, 34.6037, 59.4643],
[ 1.6351, 80.1350, 6.1082, 81.3295],
[ 22.8064, 117.3966, 62.7837, 127.8345],
[ 63.6129, 70.9242, 69.0646, 71.9814],
[ 3.4624, 87.2172, 8.6216, 88.5247],
[ 55.8403, 28.8055, 59.7083, 30.4423],
[ 26.2743, 18.7395, 31.5674, 19.9733],
[ 26.8567, 117.3554, 32.5164, 117.9245],
[ 55.5966, 104.9360, 58.4963, 105.7098],
[ 88.1490, 100.0630, 91.5376, 101.2722],
[ 61.8169, 10.4709, 64.8416, 11.5875]], device='cuda:0'), 'labels': tensor([13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13,
13, 13, 13, 13, 13, 13, 13, 13, 13, 13], device='cuda:0'), 'scores': tensor([0.3700, 0.3590, 0.3542, 0.3493, 0.3464, 0.3439, 0.3411, 0.3396, 0.3373,
0.3338, 0.3312, 0.3304, 0.3297, 0.3272, 0.3269, 0.3263, 0.3233, 0.3213,
0.3156, 0.3152, 0.3146, 0.3145, 0.3119, 0.3090, 0.3085, 0.3070, 0.3041,
0.3034, 0.3016, 0.3015, 0.3014, 0.3011, 0.3009, 0.2997, 0.2996, 0.2984,
0.2978, 0.2973, 0.2968, 0.2955, 0.2954, 0.2952, 0.2939, 0.2928, 0.2921,
0.2914, 0.2912, 0.2906, 0.2905, 0.2902, 0.2898, 0.2891, 0.2878, 0.2870,
0.2870, 0.2867, 0.2864, 0.2861, 0.2856, 0.2854, 0.2844, 0.2830, 0.2829,
0.2821, 0.2808, 0.2808, 0.2805, 0.2804, 0.2799, 0.2796, 0.2796, 0.2791,
0.2791, 0.2789, 0.2788, 0.2780, 0.2775, 0.2774, 0.2767, 0.2766, 0.2765,
0.2762, 0.2761, 0.2757, 0.2751, 0.2748, 0.2744, 0.2743, 0.2740, 0.2737,
0.2737, 0.2736, 0.2735, 0.2733, 0.2733, 0.2731, 0.2728, 0.2726, 0.2725,
0.2721], device='cuda:0')}}
But the mAP calculation is always:
Test: [ 0/245] eta: 0:00:23 model_time: 0.0417 (0.0417) evaluator_time: 0.0033 (0.0033) time: 0.0973 data: 0.0519 max mem: 3903
Test: [100/245] eta: 0:00:12 model_time: 0.0388 (0.0390) evaluator_time: 0.0016 (0.0018) time: 0.0882 data: 0.0472 max mem: 3903
Test: [200/245] eta: 0:00:03 model_time: 0.0388 (0.0389) evaluator_time: 0.0015 (0.0018) time: 0.0874 data: 0.0466 max mem: 3903
Test: [244/245] eta: 0:00:00 model_time: 0.0388 (0.0388) evaluator_time: 0.0019 (0.0018) time: 0.0880 data: 0.0478 max mem: 3903
Test: Total time: 0:00:21 (0.0879 s / it)
Averaged stats: model_time: 0.0388 (0.0388) evaluator_time: 0.0019 (0.0018)
Accumulating evaluation results...
DONE (t=0.06s).
Accumulating evaluation results...
DONE (t=0.06s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
IoU metric: segm
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
### To Reproduce
Steps to reproduce the behaviour:
Follow the steps in Object Detection Finetuning tutorial substituting a dataset with COCO Detection ( torchvision.datasets.CocoDetection ).
I get predicted mAP 0 within the coco_eval.
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
### Versions
Collecting environment information...
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Laptop GPU
Nvidia driver version: 522.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2300
DeviceID=CPU0
Family=198
L2CacheSize=11776
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2300
Name=12th Gen Intel(R) Core(TM) i7-12700H
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] pytorch-ignite==0.4.13
[pip3] pytorch-lightning==1.9.5
[pip3] torch==2.1.2
[pip3] torchmetrics==0.10.3
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.16.2
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-service 2.4.0 py311h2bbff1b_1
[conda] mkl_fft 1.3.8 py311h2bbff1b_0
[conda] mkl_random 1.2.4 py311h59b6b97_0
[conda] numpy 1.26.2 py311hdab7c0b_0
[conda] numpy-base 1.26.2 py311hd01c5d8_0
[conda] pytorch 2.1.2 py3.11_cuda11.8_cudnn8_0 pytorch
[conda] pytorch-cuda 11.8 h24eeafa_5 pytorch
[conda] pytorch-ignite 0.4.13 pypi_0 pypi
[conda] pytorch-lightning 1.9.5 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.10.3 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.16.2 pypi_0 pypi
|
open
|
2024-02-12T20:12:37Z
|
2024-03-14T19:50:35Z
|
https://github.com/pytorch/vision/issues/8270
|
[] |
anirudh6415
| 2
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,730
|
GaboxR67/MelBandRoformers
|
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
AttributeError: ""'norm'""
Traceback Error: "
File "UVR.py", line 9274, in process_start
File "separate.py", line 730, in seperate
File "separate.py", line 943, in demix
File "lib_v5\tfc_tdf_v3.py", line 167, in __init__
File "ml_collections\config_dict\config_dict.py", line 829, in __getattr__
"
Error Time Stamp [2025-02-06 12:11:31]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: Default
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 2
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
is_demud: False
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: True
is_use_torch_inference_mode: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: True
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_save_to_input_path: False
apollo_overlap: 2
apollo_chunk_size: 5
apollo_model: Choose Model
is_task_complete: False
is_normalization: False
is_use_directml: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: NVIDIA GeForce RTX 4080 Laptop GPU:0
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: True
model_sample_mode_duration: 30
demudder_method: Combine Methods
demucs_stems: Bass
mdx_stems: All Stems
Patch Version: UVR_Patch_1_21_25_2_28_BETA
|
open
|
2025-02-06T10:13:20Z
|
2025-02-11T14:12:35Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1730
|
[] |
infyplay
| 1
|
mckinsey/vizro
|
data-visualization
| 866
|
Fix theme flickering
|
There's a quick theme change flickering that happens when the page is refreshed. It doesn't depend on the default theme set in the vm.Dashboard.
It looks like this started happening since the 0.1.25 version.
|
closed
|
2024-11-12T10:14:20Z
|
2024-11-13T14:05:31Z
|
https://github.com/mckinsey/vizro/issues/866
|
[] |
huong-li-nguyen
| 0
|
InstaPy/InstaPy
|
automation
| 6,570
|
get_key = shared_data.get("entry_data").get("ProfilePage") - AttributeError: 'NoneType' object has no attribute 'get'
|
Just yesterday (3/28/2022) InstaPy stopped working for me:
```
  with smart_run(session):
  File "C:\Python39\lib\contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "C:\Users\Eric\AppData\Roaming\Python\Python39\site-packages\instapy\util.py", line 1983, in smart_run
    session.login()
  File "C:\Users\Eric\AppData\Roaming\Python\Python39\site-packages\instapy\instapy.py", line 475, in login
    self.followed_by = log_follower_num(self.browser, self.username, self.logfolder)
  File "C:\Users\Eric\AppData\Roaming\Python\Python39\site-packages\instapy\print_log_writer.py", line 21, in log_follower_num
    followed_by = getUserData("graphql.user.edge_followed_by.count", browser)
  File "C:\Users\Eric\AppData\Roaming\Python\Python39\site-packages\instapy\util.py", line 501, in getUserData
    get_key = shared_data.get("entry_data").get("ProfilePage")
AttributeError: 'NoneType' object has no attribute 'get'
```
|
closed
|
2022-03-29T06:27:12Z
|
2022-03-29T07:55:19Z
|
https://github.com/InstaPy/InstaPy/issues/6570
|
[] |
ersom
| 1
|
plotly/dash-table
|
plotly
| 560
|
Renaming export button [feature suggestion]
|
Hi, the possibility to rename the export button from "Export" to something custom would be useful for apps where the context calls for more elaborate naming or naming in another language.
Thank you:)
|
open
|
2019-08-28T09:19:18Z
|
2019-08-28T09:19:18Z
|
https://github.com/plotly/dash-table/issues/560
|
[] |
vetertann
| 0
|
nonebot/nonebot2
|
fastapi
| 2,478
|
Feature: View installed plugins and their versions
|
### Problem to solve
List all installed plugins, and view the version of each installed plugin.
### Describe the desired feature
List all installed plugins, and view the version of each installed plugin.
|
closed
|
2023-12-04T02:51:25Z
|
2023-12-10T10:13:48Z
|
https://github.com/nonebot/nonebot2/issues/2478
|
[
"enhancement"
] |
WindStill
| 4
|
scikit-hep/awkward
|
numpy
| 2,455
|
`ak.flatten` flattens strings with `axis != None`
|
### Version of Awkward Array
main
### Description and code to reproduce
We are leaning towards strings being a robust abstraction — if you want to erase a string, remove the parameters (or use `ak.enforce_type`).
However, there are some holes in this, notably with `ak.flatten`:
```python
>>> ak.flatten(["hello", "moto"], axis=1)
'hellomoto'
```
We should ensure that this raises an error.
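The intended guard can be illustrated with a plain-Python analogue, a hypothetical `safe_flatten` that refuses to descend into strings (this sketches the desired behavior only, not Awkward's internals):

```python
def safe_flatten(arrays, axis=1):
    # Sketch of the intended behavior: flattening one level deeper than a
    # list of strings should raise instead of concatenating characters.
    if axis is not None and all(isinstance(a, str) for a in arrays):
        raise TypeError("cannot flatten an array of strings at axis=1")
    return [x for a in arrays for x in a]

# Flattening a nested list still works:
print(safe_flatten([[1, 2], [3]]))  # [1, 2, 3]

# Flattening strings now errors instead of yielding 'hellomoto':
try:
    safe_flatten(["hello", "moto"])
except TypeError as exc:
    print(exc)
```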
|
closed
|
2023-05-11T14:24:57Z
|
2023-05-26T17:51:54Z
|
https://github.com/scikit-hep/awkward/issues/2455
|
[
"bug (unverified)"
] |
agoose77
| 0
|
huggingface/datasets
|
numpy
| 7,047
|
Save Dataset as Sharded Parquet
|
### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet.
### Your contribution
I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158
to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle.
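Until sharding is built in, the splitting arithmetic itself is simple; a sketch of contiguous, near-equal shard boundaries that a sharded writer could iterate over (`n_rows` and `n_shards` are illustrative names, not `datasets` API):

```python
def shard_bounds(n_rows, n_shards):
    # Contiguous, near-equal shards: the first (n_rows % n_shards) shards
    # get one extra row, so all rows are covered exactly once.
    base, extra = divmod(n_rows, n_shards)
    bounds, start = [], 0
    for i in range(n_shards):
        size = base + (1 if i < extra else 0)
        bounds.append((start, start + size))
        start += size
    return bounds

print(shard_bounds(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```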
|
open
|
2024-07-12T23:47:51Z
|
2024-07-17T12:07:08Z
|
https://github.com/huggingface/datasets/issues/7047
|
[
"enhancement"
] |
tom-p-reichel
| 2
|
pytorch/vision
|
computer-vision
| 8,087
|
Custom coco format dataset
|
Hello! Can you suggest the structure of this dataset?
I want to use a custom dataset in coco format.
But I need to know what folder/file structure is needed for training.
```python
def get_args_parser(add_help=True):
import argparse
parser = argparse.ArgumentParser(description="PyTorch Detection Training", add_help=add_help)
parser.add_argument("--data-path", default="/datasets01/COCO/022719/", type=str, help="dataset path")
parser.add_argument(
"--dataset",
default="coco",
type=str,
help="dataset name. Use coco for object detection and instance segmentation and coco_kp for Keypoint detection",
)
```
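For reference, the stock `--data-path` is expected to follow the standard COCO layout (the directory names below are the conventional 2017 release names; adjust them to your own splits):

```
COCO/
├── annotations/
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017/
│   ├── 000000000009.jpg
│   └── ...
└── val2017/
    └── ...
```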
|
closed
|
2023-11-02T10:18:15Z
|
2023-11-07T15:01:36Z
|
https://github.com/pytorch/vision/issues/8087
|
[] |
Egorundel
| 10
|
d2l-ai/d2l-en
|
machine-learning
| 2,590
|
Website of preview version is down.
|
Please fix. Thanks!
|
open
|
2024-03-13T15:16:28Z
|
2024-04-29T14:30:59Z
|
https://github.com/d2l-ai/d2l-en/issues/2590
|
[] |
Shujian2015
| 5
|
521xueweihan/HelloGitHub
|
python
| 2,743
|
[Project recommendation] VMR, a simple, easy-to-use, cross-platform universal version manager
|
## Recommended project
- Project URL: https://github.com/gvcgo/version-manager
- Category: Go
- Project title: VMR, a simple, easy-to-use, cross-platform yet very powerful universal version manager
- Project description:
Current SDK version managers have the following drawbacks:
- Version managers for different languages each do their own thing, differ considerably from one another, and have incomplete cross-platform support. As a multi-language developer, I want an out-of-the-box version manager that supports many common programming languages.
- Few existing version managers support installing developer tools; for example, many good open-source tools released on GitHub can only be downloaded and installed manually, which is cumbersome.
- Existing version managers scrape the SDK listing page and download directly, without caching the scraped results, so every run needs extra requests and is inefficient. If the listing page is redesigned, they also risk breaking.
- Existing version managers are inconvenient to operate; for example, when a list command prints a long list, the output is hard to read.
- Existing version managers come with a tangle of plugins and assorted commands, making them complex and tedious to use.
VMR was created precisely to solve the problems above.
- Highlights:
- **Cross-platform**: supports Windows, Linux, and macOS
- Supports **more than 60 languages and tools**, hassle-free
- Inspired by lazygit, offers a friendlier and more intuitive TUI, with **no commands to memorize**
- Supports **locking the SDK version per project**
- Supports **reverse proxy**/**local proxy** settings to improve the download experience for users in China
- Compared with other SDK managers, has a **better architecture**, **faster** response, and **higher stability**
- **No fiddly plugins**; works out of the box
- **No Docker required**; pure local installation, more efficient
- **Greater extensibility**; can even support thousands of applications through conda
- Screenshots:
<p style="" align="center">
<img src="https://cdn.jsdelivr.net/gh/moqsien/img_repo@main/vmr_logo_trans.png" alt="logo" width="360" height="120">
</p>
<div align=center><img src="https://cdn.jsdelivr.net/gh/moqsien/img_repo@main/vmr.gif"></div>
- Upcoming plans:
- Fix bugs reported by users.
- Add support for new languages requested by users.
|
open
|
2024-05-06T02:29:34Z
|
2024-06-05T03:29:18Z
|
https://github.com/521xueweihan/HelloGitHub/issues/2743
|
[
"Go 项目"
] |
moqsien
| 0
|
pyqtgraph/pyqtgraph
|
numpy
| 3,017
|
Export to SVG with opacity on items
|
It would be very nice if the opacities of the items were respected when exporting to SVG.
Opacities are set with `setOpacity` method of the `ImageItem` (which very conveniently works with any other item).
Here is a minimal example where the resulting image is grey (black blended with white). However, the saved SVG is two images, one black and one white and the opacity of the white image is 100% (instead of 50% that was set in the code).
```python
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
app = pg.mkQApp("Blend images example")
## Create window with GraphicsView widget
w = pg.GraphicsView()
w.show()
w.resize(800,800)
w.setWindowTitle('Two images blended with opacity')
view = pg.ViewBox()
w.setCentralItem(view)
## lock the aspect ratio
view.setAspectLocked(True)
## Create image item
img1 = pg.ImageItem(np.zeros((200,200)))
img1.setLevels([0, 1])
view.addItem(img1)
## Create image item
img2 = pg.ImageItem(np.ones((200,200)))
img2.setLevels([0, 1])
view.addItem(img2)
img2.setOpacity(0.5)
## Set initial view bounds
view.setRange(QtCore.QRectF(0, 0, 200, 200))
if __name__ == '__main__':
pg.exec()
```
**Displayed image in the app**:

**Saved SVG**:

|
open
|
2024-05-02T10:31:58Z
|
2024-05-02T10:37:24Z
|
https://github.com/pyqtgraph/pyqtgraph/issues/3017
|
[
"enhancement",
"exporters",
"svg"
] |
ElpadoCan
| 0
|
Kludex/mangum
|
asyncio
| 154
|
[Question] What is a purpose of using asyncio.Queue() in HTTPCycle
|
https://github.com/jordaneremieff/mangum/blob/8763b9736a8ef60d16e10a204617f9b25fcd6a61/mangum/protocols/http.py#L45-L46
|
closed
|
2020-12-29T12:14:01Z
|
2020-12-30T11:30:35Z
|
https://github.com/Kludex/mangum/issues/154
|
[] |
ediskandarov
| 2
|
simple-login/app
|
flask
| 2,015
|
Private vulnerability reporting ?
|
Please, I sent you an email on Thu, Jan 11, 3:54 PM (7 days ago) regarding a vulnerability in the latest codebase with a severity of High 7.7. Could you please consider enabling GitHub private reporting for this repository, so that the private reporting process goes smoothly?
https://docs.github.com/en/code-security/security-advisories/working-with-repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository
|
closed
|
2024-01-18T15:24:00Z
|
2024-01-19T17:52:20Z
|
https://github.com/simple-login/app/issues/2015
|
[] |
Sim4n6
| 0
|
tqdm/tqdm
|
jupyter
| 1,237
|
Add integration to prometheus pushgateway
|
- [x] I have marked all applicable categories:
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
|
open
|
2021-08-30T03:35:58Z
|
2021-08-30T03:36:53Z
|
https://github.com/tqdm/tqdm/issues/1237
|
[] |
MartinForReal
| 0
|
K3D-tools/K3D-jupyter
|
jupyter
| 61
|
Grid text overlays surface mesh
|
The image is pretty obvious. Let me know if you need an example to reproduce it; my current one requires external dependencies.

|
closed
|
2017-06-20T13:10:38Z
|
2017-10-30T10:38:36Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/61
|
[] |
martinal
| 5
|
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 34
|
Error: No module named 'selenium_driverless.pycdp' on Linux
|
Hi there,
Not sure if this is related to the other CDP bug reported on Linux but just in case:
Trying out the examples provided in the readme gives the following error (with or without async):
```
Error: ModuleNotFoundError: No module named 'selenium_driverless.pycdp'
```
Any idea what might be causing this?
|
closed
|
2023-08-20T03:22:03Z
|
2023-08-27T09:34:18Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/34
|
[
"needs information"
] |
alisawazrak
| 2
|
Asabeneh/30-Days-Of-Python
|
python
| 12
|
Reference code for exercises
|
Thanks for the open-source code; is there any reference code for the exercises?
|
closed
|
2019-12-20T10:59:13Z
|
2019-12-20T11:28:18Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/12
|
[] |
Donaghys
| 1
|
HIT-SCIR/ltp
|
nlp
| 376
|
During batch processing, word segmentation introduces empty strings
|

As shown, some fields in the segmentation results are empty strings; the last two texts show that there is no trailing space at the end of the text.
This affects the subsequent dependency parsing results; see the first text.
|
closed
|
2020-07-02T12:06:13Z
|
2020-07-02T13:11:20Z
|
https://github.com/HIT-SCIR/ltp/issues/376
|
[] |
Nipi64310
| 3
|
microsoft/nni
|
tensorflow
| 4,966
|
Or(<function DoReFaQuantizer.validate_config.<locals>.<lambda> at 0x000001EF3142C9D0>) did not validate 'input'
|
I use the following configuration:
```python
configure_list = [{
    'quant_types': ['weight', 'input', 'output'],
    'quant_bits': {
        'weight': 8,
        'input': 8,
        'output': 8
    },  # you can just use `int` here because all `quant_types` share the same bit length, see config for `ReLu6` below.
    'op_types': ['Conv2d']
}]
dummy_input = torch.rand(32, 2, 224, 224).to(device)
quantizer = DoReFaQuantizer(net, configure_list, optimizer)
quantizer.compress()
```
Then I get the error:
```
SchemaError: Or(And(And({Optional('quant_types'): Schema([<function DoReFaQuantizer.validate_config.<locals>.<lambda> at 0x000001EF3142C9D0>]), Optional('quant_bits'): Or(And(<class 'int'>, <function DoReFaQuantizer.validate_config.<locals>.<lambda> at 0x000001EF189AD670>), Schema({Optional('weight'): And(<class 'int'>, <function DoReFaQuantizer.validate_config.<locals>.<lambda> at 0x000001EF189AD550>)})), Optional('op_types'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x000001EF0ADA3F70>), Optional('op_names'): And([<class 'str'>], <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x000001EF189BC040>), Optional('exclude'): <class 'bool'>}, <function CompressorSchema._modify_schema.<locals>.<lambda> at 0x000001EF189BC1F0>), <function QuantizerSchema._modify_schema.<locals>.<lambda> at 0x000001EF189BC160>)) did not validate {'quant_types': ['weight', 'input', 'output'], 'quant_bits': {'weight': 8, 'input': 8, 'output': 8}, 'op_types': ['Conv2d']}
Key 'quant_types' error:
Or(<function DoReFaQuantizer.validate_config.<locals>.<lambda> at 0x000001EF3142C9D0>) did not validate 'input'
<lambda>('input') should evaluate to True
```
When I look at
```python
from nni.algorithms.compression.pytorch.quantization import DoReFaQuantizer
```
it seems that `validate_config` only supports `weight`:
```python
def validate_config(self, model, config_list):
    """
    Parameters
    ----------
    model : torch.nn.Module
        Model to be pruned
    config_list : list of dict
        List of configurations
    """
    schema = QuantizerSchema([{
        Optional('quant_types'): Schema([lambda x: x in ['weight']]),
        Optional('quant_bits'): Or(And(int, lambda n: 0 < n < 32), Schema({
            Optional('weight'): And(int, lambda n: 0 < n < 32)
        })),
        Optional('op_types'): [str],
        Optional('op_names'): [str],
        Optional('exclude'): bool
    }], model, logger)
    schema.validate(config_list)
```
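Given that validator, a small pure-Python re-creation of the same `quant_types` lambda check shows why `'input'` and `'output'` are rejected while a weight-only config passes (illustration only, not NNI's actual `QuantizerSchema`):

```python
# Minimal re-creation of the validator's quant_types check (illustration,
# not NNI's schema machinery).
ALLOWED_QUANT_TYPES = ['weight']  # what DoReFaQuantizer.validate_config accepts

def quant_types_ok(config):
    # Mirrors Schema([lambda x: x in ['weight']]): every entry must be allowed.
    return all(t in ALLOWED_QUANT_TYPES for t in config.get('quant_types', []))

rejected = {'quant_types': ['weight', 'input', 'output'],
            'quant_bits': {'weight': 8, 'input': 8, 'output': 8},
            'op_types': ['Conv2d']}
accepted = {'quant_types': ['weight'],
            'quant_bits': {'weight': 8},
            'op_types': ['Conv2d']}

print(quant_types_ok(rejected))  # False, so the schema raises SchemaError
print(quant_types_ok(accepted))  # True
```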
|
open
|
2022-06-27T01:54:55Z
|
2022-07-05T07:19:26Z
|
https://github.com/microsoft/nni/issues/4966
|
[
"user raised",
"support",
"quantize"
] |
sunpeil
| 2
|
suitenumerique/docs
|
django
| 416
|
Add mermaid.js support
|
## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
This will allow users to do diagrams (and other cool stuff) in their docs.
This has been requested by a few users with a technical background.
**Describe the solution you'd like**
I'd like to add support of Mermaid.js.
I saw there is a custom plugin for it : https://github.com/defensestation/blocknote-mermaid
|
open
|
2024-11-12T14:12:22Z
|
2025-03-18T15:10:25Z
|
https://github.com/suitenumerique/docs/issues/416
|
[
"designed"
] |
virgile-dev
| 7
|
deepinsight/insightface
|
pytorch
| 2,359
|
C++ build on insightface
|
Can anyone provide the same implementation in C++? I want to run face detection and face recognition in C++. I am already using it in Python, but my requirement is to convert all the code to C++.
|
open
|
2023-07-03T12:46:19Z
|
2023-07-06T06:38:19Z
|
https://github.com/deepinsight/insightface/issues/2359
|
[] |
AwaisPF
| 3
|
iperov/DeepFaceLab
|
deep-learning
| 568
|
DFL 2.0 'copy' is not defined
|
Hi :)
First: I think that's the right direction you're going :)
If I start DFL 2.0 I get an error:
```
Error: name 'copy' is not defined
Traceback (most recent call last):
  File "N:\xy\_internal\DeepFaceLab\mainscripts\Trainer.py", line 57, in trainerThread
    debug=debug,
  File "N:\xy\_internal\DeepFaceLab\models\ModelBase.py", line 173, in __init__
    self.on_initialize()
  File "N:\xy\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 575, in on_initialize
    src_dst_loss_gv_op = self.src_dst_opt.get_update_op (src_dst_loss_gv )
  File "N:\xy\_internal\DeepFaceLab\core\leras\optimizers.py", line 94, in get_update_op
    g = self.tf_clip_norm(g, self.clipnorm, norm)
  File "N:\xy\_internal\DeepFaceLab\core\leras\optimizers.py", line 28, in tf_clip_norm
    g_shape = copy.copy(then_expression.get_shape())
NameError: name 'copy' is not defined
```
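The `NameError` points at a plain missing import: `core/leras/optimizers.py` calls `copy.copy(...)` without importing the stdlib `copy` module. A minimal illustration of the failing call once the import is present (the class below is a stand-in, not DeepFaceLab code):

```python
import copy  # the missing import behind "name 'copy' is not defined"

class FakeShape:
    """Stand-in for the TensorShape returned by get_shape()."""
    pass

shape = FakeShape()
g_shape = copy.copy(shape)  # succeeds once `copy` is imported
print(type(g_shape).__name__)  # FakeShape
```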
|
closed
|
2020-01-22T19:48:15Z
|
2020-01-23T06:44:20Z
|
https://github.com/iperov/DeepFaceLab/issues/568
|
[] |
blanuk
| 1
|
twopirllc/pandas-ta
|
pandas
| 466
|
Problem with strategy (all)
|
**Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
0.3.14b0
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
```
Yes
TA-Lib 0.4.17
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
When trying to add all indicators (ta.AllStrategy) to my dataframe I get:
```sh
Traceback (most recent call last):
File "C:\Users\hanna\Anaconda3\lib\multiprocessing\pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "C:\Users\hanna\Anaconda3\lib\multiprocessing\pool.py", line 44, in mapstar
return list(map(*args))
File "C:\Users\hanna\Anaconda3\lib\site-packages\pandas_ta\core.py", line 467, in _mp_worker
return getattr(self, method)(*args, **kwargs)
File "C:\Users\hanna\Anaconda3\lib\site-packages\pandas_ta\core.py", line 874, in cdl_pattern
result = cdl_pattern(open_=open_, high=high, low=low, close=close, name=name, offset=offset, **kwargs)
File "C:\Users\hanna\Anaconda3\lib\site-packages\pandas_ta\candles\cdl_pattern.py", line 64, in cdl_pattern
pattern_result = Series(pattern_func(open_, high, low, close, **kwargs) / 100 * scalar)
File "_abstract.pxi", line 352, in talib._ta_lib.Function.__call__
File "_abstract.pxi", line 383, in talib._ta_lib.Function.__call_function
File "C:\Users\hanna\Anaconda3\lib\site-packages\talib\__init__.py", line 24, in wrapper
return func(*args, **kwargs)
TypeError: Argument 'open' has incorrect type (expected numpy.ndarray, got NoneType)
```
As you can see it is probably due to cdl_pattern error.
When I switch to ta.CommonStrategy I get:
Index(['open', 'high', 'low', 'close', 'volume', 'SMA_10', 'SMA_20', 'SMA_50',
'SMA_200', 'VOL_SMA_20'],
I get 6 indicators(sma) added to my dataframe.
Is this the expected behavior? Only 6 common indicators?
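The `TypeError` says TA-Lib received `None` for `open`, which usually means the candle-pattern step could not find an open-price column. A quick pre-flight check of the OHLCV columns the candle patterns need (plain Python; the lowercase names are the conventional ones, shown here as an illustration rather than pandas-ta internals):

```python
REQUIRED_OHLCV = {"open", "high", "low", "close", "volume"}

def missing_ohlcv(columns):
    # Return the OHLCV columns a candle-pattern indicator would miss.
    return REQUIRED_OHLCV - {c.lower() for c in columns}

print(missing_ohlcv(["Open", "High", "Low", "Close", "Volume"]))  # set()
print(missing_ohlcv(["high", "low", "close", "volume"]))          # {'open'}
```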
|
closed
|
2022-01-20T12:36:09Z
|
2022-01-22T00:17:18Z
|
https://github.com/twopirllc/pandas-ta/issues/466
|
[
"question",
"wontfix",
"info"
] |
hn2
| 26
|
google/seq2seq
|
tensorflow
| 239
|
No Speedup for Multiple GPUs?
|
I just switched from a 1 GPU machine to an 8 GPU AWS instance of the same instance family. The log shows that TensorFlow finds the additional GPUs, but it also suggests there is no significant speedup from using them. When I was using 1 GPU, it was about 150 seconds for 100 steps, and it's still about the same on the bigger machine, as shown. Is there something else I need to do to enable a speedup? This is using the Google/seq2seq neural machine translation tutorial.
`INFO:tensorflow:loss = 0.115634, step = 186203 (143.421 sec)
INFO:tensorflow:Saving checkpoints for 186303 into /home/ubuntu/models/nmt_tutorial/large/model.ckpt.
INFO:tensorflow:global_step/sec: 0.693705
INFO:tensorflow:loss = 0.22715, step = 186303 (144.154 sec)`
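The log numbers are internally consistent: 100 steps in about 144 s is about 0.69 steps/s, matching the reported `global_step/sec`, so the extra GPUs are indeed not contributing to step throughput. The arithmetic:

```python
steps = 100
seconds = 144.154  # from the log line for step 186303
steps_per_sec = steps / seconds
# Roughly 0.6937, matching "global_step/sec: 0.693705" in the log.
print(round(steps_per_sec, 4))
```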
|
open
|
2017-05-31T21:27:48Z
|
2017-09-07T03:47:12Z
|
https://github.com/google/seq2seq/issues/239
|
[] |
npowell88
| 3
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 4,218
|
The "enter a receipt interfaces" does not show up when a direct link to a context is used
|
### What version of GlobaLeaks are you using?
5.0.11
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Linux
### Describe the issue
As reported by [sperti](https://github.com/esperti) the "enter a receipt interfaces" does not show up when a direct link to a context is used
### Proposed solution
_No response_
|
closed
|
2024-10-04T08:38:04Z
|
2024-10-05T10:39:47Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4218
|
[
"T: Bug",
"C: Client"
] |
evilaliv3
| 1
|
huggingface/text-generation-inference
|
nlp
| 2,757
|
The same model, but different loading methods will result in very different inference speeds?
|
### System Info
TGI version: latest; a single NVIDIA GeForce RTX 3090.
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
The first loading method (loading the Llama 3 8B model from Hugging Face):
```
model=meta-llama/Meta-Llama-3-8B-Instruct
volume=/home/data/Project/model # share a volume with the Docker container to avoid downloading weights every run
sudo docker run -it --name tgi_llama3_8B --restart=unless-stopped --shm-size 48g -p 3002:80 --runtime "nvidia" --gpus '"device=1"' -v $volume:/data \
-e HF_TOKEN=$token \
-e HF_ENDPOINT="https://hf-mirror.com" \
-e HF_HUB_ENABLE_HF_TRANSFER=False \
-e USE_FLASH_ATTENTION=False \
-e HF_HUB_OFFLINE=1 \
ghcr.chenby.cn/huggingface/text-generation-inference:latest \
--model-id $model
```
The second loading method (loading the Llama 3 8B model from a local directory):
```
model=/data/ans_model/meta-llama/Meta-Llama-3-8B-Instruct
volume=/home/data/Project/model # share a volume with the Docker container to avoid downloading weights every run
sudo docker run -it --name tgi_llama3_8B --restart=unless-stopped --shm-size 48g -p 3002:80 --runtime "nvidia" --gpus '"device=1"' -v $volume:/data \
-e HF_TOKEN=$token \
-e HF_ENDPOINT="https://hf-mirror.com" \
-e HF_HUB_ENABLE_HF_TRANSFER=False \
-e USE_FLASH_ATTENTION=False \
-e HF_HUB_OFFLINE=1 \
ghcr.chenby.cn/huggingface/text-generation-inference:latest \
--model-id $model
```
### Expected behavior
The inference speed of the Llama 3 8B model loaded from Hugging Face is much faster than that of the model loaded from the local directory. I don't know why this happens; how can I fix it?
Faster:


Slower:


|
open
|
2024-11-19T12:55:49Z
|
2024-11-19T13:06:01Z
|
https://github.com/huggingface/text-generation-inference/issues/2757
|
[] |
hjs2027864933
| 1
|
explosion/spaCy
|
machine-learning
| 13,293
|
Install via `requirements.txt` documentation doesn't work
|
The docs [state](https://spacy.io/usage/models#models-download) I can specify the model like this in `requirements.txt`:
```
spacy>=3.0.0,<4.0.0
en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.4.0/en_core_web_sm-3.4.0-py3-none-any.whl
```
This attempts to download spacy `3.4.4`.
And a VERY long series of exceptions like this is raised:
```
45.29 thinc/backends/numpy_ops.cpp:2408:34: error: ‘_PyCFrame’ {aka ‘struct _PyCFrame’} has no member named ‘use_tracing’
45.29 2408 | (unlikely((tstate)->cframe->use_tracing) &&\
45.29 | ^~~~~~~~~~~
45.29 thinc/backends/numpy_ops.cpp:1001:43: note: in definition of macro ‘unlikely’
45.29 1001 | #define unlikely(x) __builtin_expect(!!(x), 0)
45.29 | ^
45.29 thinc/backends/numpy_ops.cpp:2513:15: note: in expansion of macro ‘__Pyx_IsTracing’
45.29 2513 | if (__Pyx_IsTracing(tstate, 0, 0)) {\
45.29 | ^~~~~~~~~~~~~~~
45.29 thinc/backends/numpy_ops.cpp:46877:3: note: in expansion of macro ‘__Pyx_TraceReturn’
45.29 46877 | __Pyx_TraceReturn(Py_None, 1);
45.29 | ^~~~~~~~~~~~~~~~~
```
All my attempts so far to cache Spacy's download by specifying the model in `requirements.txt` have failed (see replies below).
Context: I'm installing this in Docker. Python image is `python:3.12.1-slim`
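One mitigation sketch: pin spaCy to the minor series the model wheel was built for, so pip does not resolve to a different release and then try to build an incompatible `thinc` from source. This assumes the `en_core_web_sm-3.4.0` wheel targets spaCy 3.4.x and that prebuilt wheels exist for your Python version; on Python 3.12 a newer spaCy/model pair may be needed instead:

```
spacy>=3.4.0,<3.5.0
en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.4.0/en_core_web_sm-3.4.0-py3-none-any.whl
```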
|
open
|
2024-01-30T19:46:14Z
|
2024-09-25T05:38:14Z
|
https://github.com/explosion/spaCy/issues/13293
|
[
"docs",
"install"
] |
SHxKM
| 18
|
pytest-dev/pytest-django
|
pytest
| 1,009
|
assertRaisesMessage expects wrong expected_exception type
|
I am trying to use `assertRaisesMessage` to test that `save()` raises an `IntegrityError`. However, the annotated type of `expected_exception` is `BaseException`.
VS Code shows me `Type[Exception]` for `TestCase.assertRaisesMessage`.
Code snippet:
```python
from django.db.utils import IntegrityError
from pytest_django.asserts import assertRaisesMessage
def test_some_functionality():
with assertRaisesMessage(IntegrityError, constraint_name):
user.save()
```
mypy error:
> Argument 1 to "assertRaisesMessage" has incompatible type "Type[IntegrityError]"; expected "BaseException" [arg-type]mypy(error)
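The complaint is about the annotation, not the runtime behavior: the first parameter should be typed as an exception class (`Type[BaseException]`), not an instance (`BaseException`). A minimal context manager with the corrected signature (an illustrative stand-in, not Django's or pytest-django's implementation):

```python
from contextlib import contextmanager
from typing import Iterator, Type

@contextmanager
def assert_raises_message(expected_exception: Type[BaseException],
                          expected_message: str) -> Iterator[None]:
    # Corrected annotation: we receive the exception *class*, then check
    # that the raised instance's message contains the expected text.
    try:
        yield
    except expected_exception as exc:
        assert expected_message in str(exc), str(exc)
    else:
        raise AssertionError(f"{expected_exception.__name__} not raised")

class IntegrityError(Exception):
    """Stand-in for django.db.utils.IntegrityError."""

with assert_raises_message(IntegrityError, "unique_user_email"):
    raise IntegrityError("violates constraint unique_user_email")
```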
|
closed
|
2022-04-21T14:50:12Z
|
2022-04-26T08:57:23Z
|
https://github.com/pytest-dev/pytest-django/issues/1009
|
[] |
mschoettle
| 0
|
datapane/datapane
|
data-visualization
| 26
|
token and graph plotting issues
|
So, the issue I am facing is that I am not able to publish the report to the Datapane server.
It says the token is invalid, but that's not the case; I have rechecked it and the token seems fine.
The second issue is that when I try to display graphs, only the first dp.Plot() call works and the other graphs are shown blank.
I am using Google Colab with Python 3.6.7.
I hope to get a fix for this issue.
|
closed
|
2020-09-25T17:54:34Z
|
2020-10-21T15:22:58Z
|
https://github.com/datapane/datapane/issues/26
|
[] |
pooja-anandani
| 3
|
thp/urlwatch
|
automation
| 332
|
Can urlwatch do the same thing Website Watcher does?
|
Basically, I can bulk import thousands of links into Website Watcher and it will detect and alert me if any link has changed (ignoring HTML tags, just content).
Can urlwatch do the same thing? I need to be able to bulk import links and get alerts when a site's content changes. I don't want to set up each link manually.
Also, what if the page has Ajax/JavaScript content; can it process those?
Website Watcher: https://www.aignes.com/features.htm
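Bulk import is mostly a matter of generating `urls.yaml` entries programmatically; a sketch that turns a list of links into urlwatch-style YAML job documents. The `html2text` filter strips HTML tags so only content changes trigger alerts; the entry keys shown follow urlwatch's job format, but verify them against your installed version's documentation:

```python
def to_urls_yaml(links):
    # Emit one urlwatch job per link, separated by YAML document markers.
    docs = []
    for link in links:
        docs.append(f"name: {link}\nurl: {link}\nfilter:\n  - html2text\n")
    return "---\n".join(docs)

print(to_urls_yaml(["https://example.com/a", "https://example.com/b"]))
```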
|
closed
|
2018-12-06T19:57:49Z
|
2020-07-10T13:34:04Z
|
https://github.com/thp/urlwatch/issues/332
|
[] |
majestique
| 1
|
dask/dask
|
pandas
| 11,145
|
Concat with unknown divisions raises TypeError
|
**Describe the issue**:
When trying to concatenate multiple DataFrames without known divisions with `dask.dataframe.multi.concat`, an error is raised as shown below.
![image](https://github.com/dask/dask/assets/114395679/f191b691-81heh-4517-a821-ec93ef3ae5a2)
After some digging in the codebase I found the logic causing it:
https://github.com/dask/dask/blob/42ccab530ba01c00b51e89a48acd6bd178e94afb/dask/dataframe/multi.py#L1310
results in an empty list, as the dataframes are not of type `_Frame` but of type `dask_expr._collection.DataFrame`.
This then causes
https://github.com/dask/dask/blob/42ccab530ba01c00b51e89a48acd6bd178e94afb/dask/dataframe/multi.py#L1337
to always be True, leading to comparing Nones after that.
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
first = dd.from_dict(
{
"a": [1, 2, 3],
},
npartitions=1
).clear_divisions()
second = dd.from_dict(
{
"b": [3, 1, 3],
},
npartitions=1,
).clear_divisions()
dd.multi.concat([first,second])
```
**Anything else we need to know?**:
The strange thing is that the following actually works, so we are confused about why it isn't using the same concat function and what the difference is:
```python
dd.concat([first,second])
```
**Environment**:
- Dask version: 2024.5.1
- Python version: 3.10.11
- Operating System: Linux
- Install method (conda, pip, source): pip
|
closed
|
2024-05-24T14:00:05Z
|
2024-11-12T15:05:29Z
|
https://github.com/dask/dask/issues/11145
|
[
"needs triage"
] |
manschoe
| 3
|
amidaware/tacticalrmm
|
django
| 1,416
|
Github does not want me to sponsor any more (drops PayPal). Alternative?
|
In some ways, this is a feature request.
As GitHub drops PayPal from sponsoring (only), I cannot use it any more.
I don't really like PayPal, but in this case it's my only option. And I guess I'm not the only one.
Will there be an alternative?
|
closed
|
2023-01-25T20:17:40Z
|
2023-02-20T20:46:10Z
|
https://github.com/amidaware/tacticalrmm/issues/1416
|
[] |
forti42
| 2
|
python-arq/arq
|
asyncio
| 272
|
Logging jobs info to database
|
I want to log job information to a database (PostgreSQL). Where is the best place to put it?
|
closed
|
2021-10-25T01:51:04Z
|
2023-04-04T17:47:39Z
|
https://github.com/python-arq/arq/issues/272
|
[] |
hieulw
| 1
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,111
|
Keeping a Socket.io connection on in the background
|
**Your question**
I currently have a Tweepy streaming api connected via flask socketio and everything is working fine (tweets are streaming in without any problems). Question: is it possible to configure socketio in such a manner that when a user lands on the webpage, the latest streamed tweets are already showing? Right now, when a user lands on the page, the flask app "loads", and as a result, the screen is blank (as it is waiting to receive new tweets to stream in). Instead, is it possible to keep the stream alive in the background so when a user visits, the latest tweets are already loaded on the page? Thank You
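A common pattern for this is to keep a small server-side ring buffer of the latest tweets and replay it to each client on connect; the buffer logic itself is framework-independent (a sketch only; the replay wiring would live in a `@socketio.on('connect')` handler):

```python
from collections import deque

class RecentBuffer:
    """Keep the N most recent items so new clients can be primed on connect."""
    def __init__(self, maxlen=50):
        self._items = deque(maxlen=maxlen)

    def add(self, item):
        # Called from the streaming callback for each new tweet.
        self._items.append(item)

    def snapshot(self):
        # Emitted to a client right after it connects.
        return list(self._items)

buf = RecentBuffer(maxlen=3)
for tweet in ["t1", "t2", "t3", "t4"]:
    buf.add(tweet)
print(buf.snapshot())  # ['t2', 't3', 't4']
```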
|
closed
|
2019-11-25T21:21:40Z
|
2019-11-26T01:38:57Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1111
|
[
"question"
] |
ghost
| 1
|
plotly/dash
|
data-visualization
| 2,882
|
Switching back and forth between dcc.Tabs doesn't seem to release memory in the browser
|
In a dash app with a 50 000 point scatter chart in tab 1 and another in tab 2, switching back and forth between those tabs increases the memory footprint that I see in the Chrome task manager by about 100 MB every time but it doesn't look like any gets released.

In the snapshot above, the memory footprint has climbed to 1.5 GB. Within a few minutes of not using the app it dropped, but only to 1 GB so it's still way higher than it was before I'd interacted with it.
Also, it looks like [others in the community have seen something similar](https://community.plotly.com/t/analyzing-memory-footprint-of-a-dash-app/24751).
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.17.0
dash-ag-grid 2.1.0
dash-bio 1.0.2
dash-bootstrap-components 1.4.2
dash-bootstrap-templates 1.1.2
dash-chart-editor 0.0.1a4
dash-core-components 2.0.0
dash-cytoscape 0.3.0
dash-dangerously-set-inner-html 0.0.2
dash-design-kit 1.10.0
dash-embedded 2.14.0
dash-enterprise 1.0.0
dash-enterprise-auth 0.1.1
dash-enterprise-libraries 1.4.1
dash-extensions 0.1.6
dash-facebook-login 0.0.2
dash-gif-component 1.1.0
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-mantine-components 0.12.1
dash-notes 0.0.3
dash-renderer 1.9.0
dash-snapshots 2.2.7
dash-table 5.0.0
dash-user-analytics 0.0.2
```
- if frontend related, tell us your Browser, Version and OS
- OS: Ubuntu 22.04
- Browser Chrome
- Version 122
Example app:
```python
import dash
from dash import dash_table
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.graph_objs as go
import numpy as np
import pandas as pd
# Data generation for plots and table
np.random.seed(42) # for reproducibility
x = np.linspace(0, 10, 50000)
y1 = np.random.randn(50000)
y2 = np.sin(x) + np.random.randn(50000) * 0.2
table_data = pd.DataFrame(np.random.randn(10, 4), columns=list("ABCD"))
# App initialization
app = dash.Dash(__name__)
server = app.server
# Layout
app.layout = html.Div([
dcc.Tabs(id="tabs", value='tab-1', children=[
dcc.Tab(label='Scatter Plot 1', value='tab-1'),
dcc.Tab(label='Scatter Plot 2', value='tab-2'),
dcc.Tab(label='Data Table', value='tab-3'),
]),
html.Div(id='tabs-content')
])
# Callback to update tab content
@app.callback(Output('tabs-content', 'children'),
Input('tabs', 'value'))
def render_content(tab):
if tab == 'tab-1':
return dcc.Graph(
id='scatter-plot-1',
figure={
'data': [go.Scatter(x=x, y=y1, mode='markers')],
'layout': go.Layout(title='Scatter Plot 1')
}
)
elif tab == 'tab-2':
return dcc.Graph(
id='scatter-plot-2',
figure={
'data': [go.Scatter(x=x, y=y2, mode='markers')],
'layout': go.Layout(title='Scatter Plot 2')
}
)
elif tab == 'tab-3':
return html.Div([
html.H4('Data Table'),
dash_table.DataTable(
data=table_data.to_dict('records'),
columns=[{'name': i, 'id': i} for i in table_data.columns]
)
])
if __name__ == '__main__':
app.run_server(debug=False)
```
|
open
|
2024-06-11T16:34:16Z
|
2024-08-13T19:51:36Z
|
https://github.com/plotly/dash/issues/2882
|
[
"bug",
"sev-2",
"P3"
] |
michaelbabyn
| 0
|
pytest-dev/pytest-django
|
pytest
| 936
|
Connection already closed
|
_After the first test run, the connection to the DB is dropped._
`django.db.utils.InterfaceError: connection already closed`
_Seems to work with_ `pytest-django==4.2.0`
**But broken with:**
```
pytest==6.2.4
pytest-django==4.4.0
psycopg2-binary==2.9.1
```
**Error log:**
```
self = <django.db.backends.postgresql.base.DatabaseWrapper object at 0x7f66ebdddb80>
name = None
@async_unsafe
def create_cursor(self, name=None):
if name:
# In autocommit mode, the cursor will be used outside of a
# transaction, hence use a holdable cursor.
cursor = self.connection.cursor(name, scrollable=False, withhold=self.connection.autocommit)
else:
> cursor = self.connection.cursor()
E django.db.utils.InterfaceError: connection already closed
../../../../../env/lib/python3.8/site-packages/django/db/backends/postgresql/base.py:236: InterfaceError
args = (<django.db.backends.postgresql.base.DatabaseWrapper object at 0x7f66ebdddb80>, None)
kwargs = {}
event_loop = <_UnixSelectorEventLoop running=False closed=False debug=False>
@functools.wraps(func)
def inner(*args, **kwargs):
if not os.environ.get('DJANGO_ALLOW_ASYNC_UNSAFE'):
# Detect a running event loop in this thread.
try:
event_loop = asyncio.get_event_loop()
except RuntimeError:
pass
else:
if event_loop.is_running():
raise SynchronousOnlyOperation(message)
# Pass onwards.
> return func(*args, **kwargs)
```
|
closed
|
2021-06-22T08:15:41Z
|
2023-10-26T20:09:05Z
|
https://github.com/pytest-dev/pytest-django/issues/936
|
[] |
sweetpythoncode
| 8
|
odoo/odoo
|
python
| 202,837
|
[18.0] base: DateTime widget cannot be set in seconds, and there is a problem displayed
|
### Odoo Version
- [ ] 16.0
- [x] 17.0
- [x] 18.0
- [ ] Other (specify)
### Steps to Reproduce

I saw that the default value of the `show_seconds` parameter in the source code is `true`, yet the widget does not actually display seconds. When I set `show_seconds` to `false` in the view's options, it does display seconds, but setting it on the field has no effect.
```js
export const dateTimeField = {
...dateField,
displayName: _t("Date & Time"),
supportedOptions: [
...dateField.supportedOptions,
{
label: _t("Time interval"),
name: "rounding",
type: "number",
default: 5,
help: _t(
`Control the number of minutes in the time selection. E.g. set it to 15 to work in quarters.`
),
},
{
label: _t("Show seconds"),
name: "show_seconds",
type: "boolean",
default: true,
help: _t(`Displays or hides the seconds in the datetime value.`),
},
{
label: _t("Show time"),
name: "show_time",
type: "boolean",
default: true,
help: _t(`Displays or hides the time in the datetime value.`),
},
],
extractProps: ({ attrs, options }, dynamicInfo) => ({
...dateField.extractProps({ attrs, options }, dynamicInfo),
showSeconds: exprToBoolean(options.show_seconds ?? true),
showTime: exprToBoolean(options.show_time ?? true),
}),
supportedTypes: ["datetime"],
};
```
### Log Output
```shell
```
### Support Ticket
_No response_
|
open
|
2025-03-21T08:35:29Z
|
2025-03-21T08:35:29Z
|
https://github.com/odoo/odoo/issues/202837
|
[] |
a1061026202
| 0
|
ckan/ckan
|
api
| 8,725
|
DataStore Delete Uncaught ProgrammingErrors
|
## CKAN version
master branch (2.11 ??)
## Describe the bug
PostgreSQL `ProgrammingError`s are not caught and re-raised as `ValidationError`s in `datastore_delete`.
### Steps to reproduce
Steps to reproduce the behavior:
- Have a datastore field that is a text field
- Insert some data
- Try to delete with filters on the text field but pass an integer
- See fatal error
### Expected behavior
The PostgreSQL errors are caught, parsed, and re-raised as `ValidationError`s, just like in `datastore_create` and `datastore_upsert`.
### Additional details
Stacktrace example:
```
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/srv/app/ckan/registry/src/ckan/ckan/config/middleware/../../views/api.py", line 279, in action
result = function(context, request_data)
File "/srv/app/ckan/registry/src/ckan/ckan/logic/__init__.py", line 581, in wrapped
result = _action(context, data_dict, **kw)
File "/srv/app/ckan/registry/src/ckanext-datastore-search/ckanext/datastore_search/logic/action.py", line 69, in datastore_delete
func_result = up_func(context, data_dict)
File "/srv/app/ckan/registry/src/ckan/ckanext/datastore/logic/action.py", line 488, in datastore_delete
result = backend.delete(context, data_dict)
File "/srv/app/ckan/registry/src/ckan/ckanext/datastore/backend/postgres.py", line 2119, in delete
delete_data(context, data_dict)
File "/srv/app/ckan/registry/src/ckan/ckanext/datastore/backend/postgres.py", line 1660, in delete_data
results = _execute_single_statement(context, sql_string, where_values)
File "/srv/app/ckan/registry/src/ckan/ckanext/datastore/backend/postgres.py", line 795, in _execute_single_statement
results = context['connection'].execute(
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
return connection._execute_clauseelement(
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1638, in _execute_clauseelement
ret = self._execute_context(
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1843, in _execute_context
return self._exec_single_context(
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1983, in _exec_single_context
self._handle_dbapi_exception(
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2352, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1964, in _exec_single_context
self.dialect.do_execute(
File "/srv/app/ckan/registry/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 942, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: text = integer
LINE 1: ...611a8-0514-47f4-a6f7-bc42ba1569f4" WHERE (prim_id = 1) RETUR...
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
[SQL: DELETE FROM "c69611a8-0514-47f4-a6f7-bc42ba1569f4" WHERE (prim_id = %(value_0)s) RETURNING _id, "prim_id", "example_text_field"]
[parameters: {'value_0': 1}]
```
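The expected handling could be sketched as follows. This is a hypothetical, stdlib-only illustration: the exception classes are stand-ins for `sqlalchemy.exc.ProgrammingError` and CKAN's `ValidationError`, not the real types, and the error-dict key is an assumption.

```python
class ProgrammingError(Exception):
    """Stand-in for sqlalchemy.exc.ProgrammingError."""


class ValidationError(Exception):
    """Stand-in for CKAN's logic.ValidationError."""
    def __init__(self, error_dict):
        self.error_dict = error_dict
        super().__init__(error_dict)


def delete_data(execute_statement):
    """Run a delete statement, surfacing driver errors as validation errors.

    Mirrors what datastore_create/datastore_upsert already do: instead of
    letting the ProgrammingError bubble up as a fatal 500, catch it and
    re-raise it as a field-level ValidationError.
    """
    try:
        return execute_statement()
    except ProgrammingError as pe:
        raise ValidationError({'filters': [str(pe)]}) from pe
```

With this pattern, the type-mismatch in the report would come back to the API caller as a structured validation error on `filters` rather than an uncaught exception.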
|
open
|
2025-03-17T18:22:22Z
|
2025-03-19T19:20:06Z
|
https://github.com/ckan/ckan/issues/8725
|
[] |
JVickery-TBS
| 3
|
thtrieu/darkflow
|
tensorflow
| 917
|
From Darknet to Darkflow and then Movidius
|
Hi,
I'm going to use a YOLOv2-Tiny model retrained with Darknet on the Movidius NCS.
I cannot produce a .pb and .meta file from .cfg and .weights and I don't know why.
I use the command:
`python3 flow --model apple_tiny_yolov2/apple_tiny_yolov2.cfg --load apple_tiny_yolov2/apple_tiny_yolov2_1000.weights --savepb`
but the system answers:
```
/home/tart/Desktop/YOLO/darkflow-master/darkflow/dark/darknet.py:54: UserWarning: ./cfg/apple_tiny_yolov2_1000.cfg not found, use apple_tiny_yolov2/apple_tiny_yolov2.cfg instead
cfg_path, FLAGS.model))
Parsing apple_tiny_yolov2/apple_tiny_yolov2.cfg
Loading apple_tiny_yolov2/apple_tiny_yolov2_1000.weights ...
Traceback (most recent call last):
File "flow", line 6, in <module>
cliHandler(sys.argv)
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/cli.py", line 26, in cliHandler
tfnet = TFNet(FLAGS)
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/net/build.py", line 58, in __init__
darknet = Darknet(FLAGS)
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/dark/darknet.py", line 27, in __init__
self.load_weights()
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/dark/darknet.py", line 82, in load_weights
wgts_loader = loader.create_loader(*args)
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/utils/loader.py", line 105, in create_loader
return load_type(path, cfg)
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/utils/loader.py", line 19, in __init__
self.load(*args)
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/utils/loader.py", line 70, in load
val = walker.walk(new.wsize[par])
File "/home/tart/Desktop/YOLO/darkflow-master/darkflow/utils/loader.py", line 130, in walk
'Over-read {}'.format(self.path)
AssertionError: Over-read apple_tiny_yolov2/apple_tiny_yolov2_1000.weights
```
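For context, the `Over-read` assertion typically fires when the .cfg describes more weights than the .weights file actually contains, i.e. a cfg/weights mismatch. A minimal stdlib sketch of what the walker in the traceback does (the function is a simplified stand-in, not darkflow's actual code; it assumes darknet's 4-byte float32 weight layout):

```python
import struct


def walk(buf: bytes, offset: int, size: int):
    """Read `size` float32 values from `buf` starting at byte `offset`.

    Mimics the weights walker: if the cfg asks for more floats than the
    file holds, the read would run past the end of the buffer, which is
    exactly the condition that triggers the 'Over-read' assertion.
    """
    end = offset + size * 4
    assert end <= len(buf), 'Over-read'
    return list(struct.unpack(f'<{size}f', buf[offset:end])), end
```

So the first things to check are that the .cfg really matches the network the weights were trained with (same layer sizes, filters, and classes) and that the .weights file was not truncated during transfer.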
Can you help me? Thanks a lot
|
open
|
2018-10-08T08:46:13Z
|
2018-10-08T08:46:13Z
|
https://github.com/thtrieu/darkflow/issues/917
|
[] |
keldrom
| 0
|
nalepae/pandarallel
|
pandas
| 135
|
[Feature Request] Timer on Progress Bar
|
I'm switching to this package in places that were previously using tqdm.pandas. I think it would be nice to have a similar timer on the progress bar to show the estimated time to finish and monitor the speed.
|
open
|
2021-02-12T22:00:19Z
|
2024-04-27T07:48:12Z
|
https://github.com/nalepae/pandarallel/issues/135
|
[] |
zhenyulin
| 5