| repo_name (string, length 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, length 1-976) | body (string, length 0-254k) | state (string, 2 classes) | created_at (string, length 20) | updated_at (string, length 20) | url (string, length 38-105) | labels (list, length 0-9) | user_login (string, length 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
pallets/flask
|
flask
| 4,976
|
mypy 1.0 no longer accepts flask.Response as redirect type
|
```
from __future__ import annotations
from flask import Response, redirect
def handler() -> Response:
return redirect("")
```
```
❯ .tox/type/bin/mypy magic.py
magic.py:7: error: Incompatible return value type (got "werkzeug.wrappers.response.Response", expected "flask.wrappers.Response") [return-value]
Found 1 error in 1 file (checked 1 source file)
~/git/dqc/bsso-flask on timeout [!]
❯ .tox/type/bin/mypy --version
mypy 1.0.0 (compiled: yes)
```
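One way to keep this pattern type-checking under mypy 1.0 is to annotate the view with werkzeug's `Response`, which `flask.Response` subclasses and which matches what `redirect()` is declared to return; a minimal sketch:
```python
from __future__ import annotations

from flask import redirect
from werkzeug.wrappers import Response  # flask.Response subclasses this


def handler() -> Response:
    # redirect() is annotated as returning werkzeug's Response, so mypy 1.0
    # accepts this return type without a cast.
    return redirect("")
```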
|
closed
|
2023-02-13T20:16:49Z
|
2023-03-02T00:06:54Z
|
https://github.com/pallets/flask/issues/4976
|
[
"typing"
] |
gaborbernat
| 2
|
MagicStack/asyncpg
|
asyncio
| 605
|
Passing table name as a parameter
|
Hi there,
Thanks for a great library. I just swapped from `aiopg` and I'm impressed with `asyncpg` so far.
I'm trying to pass a table name as an input parameter, like this:
```python
await conn.fetchrow("select count(*) as count from $1", table)
```
And getting:
```txt
asyncpg.exceptions.PostgresSyntaxError: syntax error at or near "$1"
```
In `psycopg2` you solve it with a special `Identifier` class as described in this post: https://stackoverflow.com/questions/13793399/passing-table-name-as-a-parameter-in-psycopg2
My thoughts so far:
1. Export and document the sanitizer methods such that one can write code like this: `conn.fetchrow(f"select count(*) from {asyncpg.sanitize(table)}")`. I've seen this in another library, but I forgot which one. (A rough sketch of this approach is included below the list.)
2. Or support the `$...` syntax in table names also, and either make it work directly with strings, or an `Identifier` type as used by `psycopg`
3. Remember to update documentation and/or FAQ.
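As a rough sketch of option 1 (the `quote_ident` helper below is written for illustration and is not part of asyncpg's public API): PostgreSQL identifiers can be double-quoted, with embedded double quotes doubled, before interpolating them into the query string; the `$n` parameters stay reserved for values.
```python
import asyncpg


def quote_ident(name: str) -> str:
    # Illustrative only: double-quote the identifier and escape embedded
    # double quotes by doubling them, per PostgreSQL quoting rules.
    return '"' + name.replace('"', '""') + '"'


async def count_rows(conn: asyncpg.Connection, table: str) -> int:
    query = f"select count(*) as count from {quote_ident(table)}"
    row = await conn.fetchrow(query)  # values would still go through $1, $2, ...
    return row["count"]
```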
Let me know what you think!
Håkon
|
closed
|
2020-08-09T18:03:02Z
|
2025-01-09T20:22:42Z
|
https://github.com/MagicStack/asyncpg/issues/605
|
[] |
hawkaa
| 7
|
jonaswinkler/paperless-ng
|
django
| 1,552
|
[BUG] password authentication failed for user "paperless"
|
<!---
=> Before opening an issue, please check the documentation and see if it helps you resolve your issue: https://paperless-ng.readthedocs.io/en/latest/troubleshooting.html
=> Please also make sure that you followed the installation instructions.
=> Please search the issues and look for similar issues before opening a bug report.
=> If you would like to submit a feature request please submit one under https://github.com/jonaswinkler/paperless-ng/discussions/categories/feature-requests
=> If you encounter issues while installing or configuring Paperless-ng, please post that in the "Support" section of the discussions. Remember that Paperless successfully runs on a variety of different systems. If paperless does not start, it's probably an issue with your system, and not an issue of paperless.
=> Don't remove the [BUG] prefix from the title.
-->
**Describe the bug**
Hi community
I installed paperless-ng via docker-compose on my linux system for test purposes. I had a working configuration and imported some 100 documents. Looked really good.
To move the deployment to my productive system I planned to change the default postgres DB password.
I changed POSTGRES_PASSWORD in docker-compose.yml and added a corresponding PAPERLESS_DBPASS line in docker-compose.env.
I received a couple of error messages after `docker-compose up` and decided to revert the config, but I was unable to do so. Despite having deleted all volumes and directories (I even removed the images), Postgres still complains about a wrong password.
So I completely screwed up my config...
How can I debug this?
**To Reproduce**
execute docker-compose up
```
error messages:
webserver_1 | 19:58:52 [Q] ERROR FATAL: password authentication failed for user "paperless"
webserver_1 |
db_1 | 2022-01-16 19:59:22.618 UTC [3358] FATAL: password authentication failed for user "paperless"
db_1 | 2022-01-16 19:59:22.618 UTC [3358] DETAIL: Role "paperless" does not exist.
db_1 | Connection matched pg_hba.conf line 99: "host all all all md5"
webserver_1 | 19:59:22 [Q] ERROR FATAL: password authentication failed for user "paperless"
webserver_1 |
db_1 | 2022-01-16 19:59:52.687 UTC [3360] FATAL: password authentication failed for user "paperless"
db_1 | 2022-01-16 19:59:52.687 UTC [3360] DETAIL: Role "paperless" does not exist.
db_1 | Connection matched pg_hba.conf line 99: "host all all all md5"
webserver_1 | 19:59:52 [Q] ERROR FATAL: password authentication failed for user "paperless"
webserver_1 |
db_1 | 2022-01-16 20:00:22.756 UTC [3361] FATAL: password authentication failed for user "paperless"
db_1 | 2022-01-16 20:00:22.756 UTC [3361] DETAIL: Role "paperless" does not exist.
db_1 | Connection matched pg_hba.conf line 99: "host all all all md5"
webserver_1 | 20:00:22 [Q] ERROR FATAL: password authentication failed for user "paperless"
webserver_1 |
```
And that's the config in the container:
```
ms@oscar:~/Projekte/docker/paperless$ docker exec -it paperless_webserver_1 bash
root@68637bd68bb2:/usr/src/paperless/src# env | sort
GPG_KEY=E3FF2839C048B25C084DEBE9B26995E310250568
HOME=/root
HOSTNAME=68637bd68bb2
LANG=C.UTF-8
PAPERLESS_CONSUMER_RECURSIVE=true
PAPERLESS_CONSUMER_SUBDIRS_AS_TAGS=true
PAPERLESS_DBHOST=db
PAPERLESS_FILENAME_FORMAT={created_year}/{correspondent}/{title}
PAPERLESS_OCR_LANGUAGE=deu
PAPERLESS_REDIS=redis://broker:6379
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/usr/src/paperless/src
PYTHON_GET_PIP_SHA256=fa6f3fb93cce234cd4e8dd2beb54a51ab9c247653b52855a48dd44e6b21ff28b
PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/c20b0cfd643cd4a19246ccf204e2997af70f6b21/public/get-pip.py
PYTHON_PIP_VERSION=21.2.4
PYTHON_VERSION=3.9.6
SHLVL=0
TERM=xterm
USERMAP_GID=1000
USERMAP_UID=1026
_=/usr/bin/env
```
```
ms@oscar:~/Projekte/docker/paperless$ docker exec -it paperless_db_1 bash
root@037eab1907b5:/# env | sort
GOSU_VERSION=1.14
HOME=/root
HOSTNAME=037eab1907b5
LANG=en_US.utf8
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/13/bin
PGDATA=/var/lib/postgresql/data
PG_MAJOR=13
PG_VERSION=13.5-1.pgdg110+1
POSTGES_USER=paperless
POSTGRES_DB=paperless
POSTGRES_PASSWORD=paperless
PWD=/
SHLVL=0
TERM=xterm
_=/usr/bin/env
```
My docker-compose.yml:
```
version: "3.4"
services:
broker:
image: redis:6.0
restart: unless-stopped
db:
image: postgres:13
restart: unless-stopped
volumes:
- pgdata:/var/lib/postgresql/data
environment:
POSTGRES_DB: paperless
POSTGES_USER: paperless
POSTGRES_PASSWORD: paperless
webserver:
image: jonaswinkler/paperless-ng:latest
restart: unless-stopped
depends_on:
- db
- broker
ports:
- 8200:8000
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8200"]
interval: 30s
timeout: 10s
retries: 5
volumes:
- data:/usr/src/paperless/data
- media:/usr/src/paperless/media
- ./export:/usr/src/paperless/export
- ./consume:/usr/src/paperless/consume
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
volumes:
data:
media:
pgdata:
```
My docker-compose.env:
```
USERMAP_UID=1099
USERMAP_GID=1000
# Additional languages to install for text recognition, separated by a
# whitespace. Note that this is
# different from PAPERLESS_OCR_LANGUAGE (default=eng), which defines the
# language used for OCR.
# The container installs English, German, Italian, Spanish and French by
# default.
# See https://packages.debian.org/search?keywords=tesseract-ocr-&searchon=names&suite=buster
# for available languages.
#PAPERLESS_OCR_LANGUAGES=tur ces
###############################################################################
# Paperless-specific settings #
###############################################################################
# All settings defined in the paperless.conf.example can be used here. The
# Docker setup does not use the configuration file.
# A few commonly adjusted settings are provided below.
# Adjust this key if you plan to make paperless available publicly. It should
# be a very long sequence of random characters. You don't need to remember it.
#PAPERLESS_SECRET_KEY=change-me
# Use this variable to set a timezone for the Paperless Docker containers. If not specified, defaults to UTC.
#PAPERLESS_TIME_ZONE=America/Los_Angeles
# The default language to use for OCR. Set this to the language most of your
# documents are written in.
PAPERLESS_OCR_LANGUAGE=deu
PAPERLESS_CONSUMER_RECURSIVE=true
PAPERLESS_CONSUMER_SUBDIRS_AS_TAGS=true
PAPERLESS_FILENAME_FORMAT={created_year}/{correspondent}/{title}
```
That's the db container log:
```
2022-01-16 20:10:24.213 UTC [3399] FATAL: password authentication failed for user "paperless"
2022-01-16 20:10:24.213 UTC [3399] DETAIL: Role "paperless" does not exist.
```
The broker:
```
1:C 16 Jan 2022 19:48:44.067 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 16 Jan 2022 19:48:44.067 # Redis version=6.0.16, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 16 Jan 2022 19:48:44.067 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 16 Jan 2022 19:48:44.068 * Running mode=standalone, port=6379.
1:M 16 Jan 2022 19:48:44.068 # Server initialized
1:M 16 Jan 2022 19:48:44.068 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 16 Jan 2022 19:48:44.068 * Ready to accept connections
1:M 16 Jan 2022 19:53:45.045 * 100 changes in 300 seconds. Saving...
1:M 16 Jan 2022 19:53:45.045 * Background saving started by pid 19
19:C 16 Jan 2022 19:53:45.046 * DB saved on disk
19:C 16 Jan 2022 19:53:45.047 * RDB: 0 MB of memory used by copy-on-write
1:M 16 Jan 2022 19:53:45.145 * Background saving terminated with success
1:M 16 Jan 2022 19:58:46.028 * 100 changes in 300 seconds. Saving...
1:M 16 Jan 2022 19:58:46.028 * Background saving started by pid 20
20:C 16 Jan 2022 19:58:46.029 * DB saved on disk
20:C 16 Jan 2022 19:58:46.029 * RDB: 0 MB of memory used by copy-on-write
1:M 16 Jan 2022 19:58:46.128 * Background saving terminated with success
1:M 16 Jan 2022 20:03:47.075 * 100 changes in 300 seconds. Saving...
1:M 16 Jan 2022 20:03:47.075 * Background saving started by pid 21
21:C 16 Jan 2022 20:03:47.076 * DB saved on disk
21:C 16 Jan 2022 20:03:47.076 * RDB: 0 MB of memory used by copy-on-write
1:M 16 Jan 2022 20:03:47.175 * Background saving terminated with success
1:M 16 Jan 2022 20:08:48.051 * 100 changes in 300 seconds. Saving...
1:M 16 Jan 2022 20:08:48.052 * Background saving started by pid 22
22:C 16 Jan 2022 20:08:48.053 * DB saved on disk
22:C 16 Jan 2022 20:08:48.053 * RDB: 0 MB of memory used by copy-on-write
```
And the webserver
```
20:09:24 [Q] ERROR FATAL: password authentication failed for user "paperless"
20:09:54 [Q] ERROR FATAL: password authentication failed for user "paperless"
20:10:24 [Q] ERROR FATAL: password authentication failed for user "paperless"
20:10:54 [Q] ERROR FATAL: password authentication failed for user "paperless"
20:11:24 [Q] ERROR FATAL: password authentication failed for user "paperless"
20:11:54 [Q] ERROR FATAL: password authentication failed for user "paperless"
20:12:24 [Q] ERROR FATAL: password authentication failed for user "paperless"
20:12:54 [Q] ERROR FATAL: password authentication failed for user "paperless"
```
As it once worked, I assume that I screwed it up by trying to change the default db password. I am not familiar with Postgres. I tried to connect to the database inside the DB container with psql as user paperless, but that didn't work.
How can I check whether this user really exists in the db?
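For reference, a minimal sketch of listing the roles Postgres actually knows about, assuming it is run from somewhere that can reach the db container and using the `postgres` superuser that the official image creates when `POSTGRES_USER` is not set (note the `POSTGES_USER` spelling in the compose file above, which would leave the `paperless` role uncreated):
```python
import psycopg2

# Assumed connection details: host/port only work if the db container's port
# is reachable from where this runs (e.g. exec'd inside the container or with
# the port published); the password comes from POSTGRES_PASSWORD above.
conn = psycopg2.connect(
    host="localhost",
    port=5432,
    user="postgres",
    password="paperless",
    dbname="paperless",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT rolname FROM pg_roles;")
    for (rolname,) in cur.fetchall():
        print(rolname)  # the "paperless" role should appear here if it exists
```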
**Relevant information**
```
ms@oscar:~/Projekte/docker/paperless$ uname -a
Linux oscar 5.13.0-25-generic #26-Ubuntu SMP Fri Jan 7 15:48:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
```
|
closed
|
2022-01-16T20:19:40Z
|
2022-01-18T19:16:33Z
|
https://github.com/jonaswinkler/paperless-ng/issues/1552
|
[] |
miknuc
| 1
|
tox-dev/tox
|
automation
| 2,872
|
set_env substitution doesn't pass PYTHONHASHSEED in tox 4.x
|
## Issue
Hi, I have encountered what I believe is tox bug.
With 2 test environments: base (`[testenv]`) and non-base (`[testenv:hs]`), _some_ environment variables from the base one aren't set in the non-base one when they are passed via substitution, like:
```ini
[testenv]
setenv =
PYTHONHASHSEED=0
OTHER=foo
[testenv:hs]
setenv = {[testenv]setenv}
```
In the above example `PYTHONHASHSEED` will be random for every run of `tox -e hs`, while `OTHER` will be correctly set to `foo`. I found out that tox displays the same behaviour for some other variables, like `PATH`, but I didn't dig any further.
The funny thing is that if we mix `setenv` and `set_env` a little, tox suddenly starts passing `PYTHONHASHSEED`; for example, the tox.ini below sets `PYTHONHASHSEED` to 0.
```ini
[testenv]
set_env =
PYTHONHASHSEED=0
OTHER=foo
[testenv:hs]
setenv = {[testenv]set_env}
```
I checked this on tox 4.3.1, 4.3, 4.2, 4.1 and 4.0. Tox 3.x doesn't have this behaviour.
Full tox.ini files attached below.
## Environment
## Output of running tox
Provide the output of `tox -rvv`:
```console
using tox.ini: /home/mgoral/test/tox.ini (pid 279807)
removing /home/mgoral/test/.tox/log
could not satisfy requires MissingDependency(<Requirement('tox>=4.0')>)
using tox-3.21.4 from /usr/lib/python3/dist-packages/tox/__init__.py (pid 279807)
/usr/bin/python3 (/usr/bin/python3) is {'executable': '/usr/bin/python3', 'implementation': 'CPython', 'version_info': [3, 9, 2, 'final', 0], 'version': '3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', 'is_64': True, 'sysplatform': 'linux', 'extra_version_info': None}
.tox uses /usr/bin/python3
.tox start: getenv /home/mgoral/test/.tox/.tox
.tox cannot reuse: -r flag
.tox recreate: /home/mgoral/test/.tox/.tox
removing /home/mgoral/test/.tox/.tox
setting PATH=/home/mgoral/test/.tox/.tox/bin:/home/mgoral/.cargo/bin:/home/mgoral/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
[279827] /home/mgoral/test/.tox$ /usr/bin/python3 -m virtualenv --no-download --python /usr/bin/python3 .tox
created virtual environment CPython3.9.2.final.0-64 in 81ms
creator CPython3Posix(dest=/home/mgoral/test/.tox/.tox, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/mgoral/.local/share/virtualenv)
added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
.tox installdeps: tox>=4.0
setting PATH=/home/mgoral/test/.tox/.tox/bin:/home/mgoral/.cargo/bin:/home/mgoral/.local/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
[279835] /home/mgoral/test$ /home/mgoral/test/.tox/.tox/bin/python -m pip install 'tox>=4.0'
Collecting tox>=4.0
Using cached tox-4.3.1-py3-none-any.whl (147 kB)
Collecting platformdirs>=2.6.2
Using cached platformdirs-2.6.2-py3-none-any.whl (14 kB)
Collecting virtualenv>=20.17.1
Using cached virtualenv-20.17.1-py3-none-any.whl (8.8 MB)
Collecting pyproject-api>=1.4
Using cached pyproject_api-1.4.0-py3-none-any.whl (12 kB)
Collecting pluggy>=1
Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting chardet>=5.1
Using cached chardet-5.1.0-py3-none-any.whl (199 kB)
Collecting tomli>=2.0.1
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting filelock>=3.9
Using cached filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting colorama>=0.4.6
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting packaging>=23
Using cached packaging-23.0-py3-none-any.whl (42 kB)
Collecting cachetools>=5.2.1
Using cached cachetools-5.2.1-py3-none-any.whl (9.3 kB)
Collecting distlib<1,>=0.3.6
Using cached distlib-0.3.6-py2.py3-none-any.whl (468 kB)
Installing collected packages: tomli, platformdirs, packaging, filelock, distlib, virtualenv, pyproject-api, pluggy, colorama, chardet, cachetools, tox
Successfully installed cachetools-5.2.1 chardet-5.1.0 colorama-0.4.6 distlib-0.3.6 filelock-3.9.0 packaging-23.0 platformdirs-2.6.2 pluggy-1.0.0 pyproject-api-1.4.0 tomli-2.0.1 tox-4.3.1 virtualenv-20.17.1
.tox finish: getenv /home/mgoral/test/.tox/.tox after 3.43 seconds
.tox start: finishvenv
write config to /home/mgoral/test/.tox/.tox/.tox-config1 as '409276cb52787e20907912730020ca0f84204a375c30a79fcb494ffde0e0f116 /usr/bin/python3\n3.21.4 0 0 0\n00000000000000000000000000000000 tox>=4.0'
.tox finish: finishvenv after 0.02 seconds
.tox start: provision
[279851] /home/mgoral/test$ /home/mgoral/test/.tox/.tox/bin/python -m tox -rvv -e hs
hs: 135 W remove tox env folder /home/mgoral/test/.tox/hs [tox/tox_env/api.py:321]
hs: 159 I find interpreter for spec PythonSpec(major=3) [virtualenv/discovery/builtin.py:56]
hs: 159 D discover exe for PythonInfo(spec=CPython3.9.2.final.0-64, exe=/home/mgoral/test/.tox/.tox/bin/python, platform=linux, version='3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) in /usr [virtualenv/discovery/py_info.py:437]
hs: 159 D filesystem is case-sensitive [virtualenv/info.py:24]
hs: 160 D got python info of /usr/bin/python3.9 from /home/mgoral/.local/share/virtualenv/py_info/1/36cf16204b8548560b1c020c4e8fb5b57f0e4c58016f52f2d4be01e192833930.json [virtualenv/app_data/via_disk_folder.py:129]
hs: 161 I proposed PythonInfo(spec=CPython3.9.2.final.0-64, system=/usr/bin/python3.9, exe=/home/mgoral/test/.tox/.tox/bin/python, platform=linux, version='3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
hs: 161 D accepted PythonInfo(spec=CPython3.9.2.final.0-64, system=/usr/bin/python3.9, exe=/home/mgoral/test/.tox/.tox/bin/python, platform=linux, version='3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
hs: 186 I create virtual environment via CPython3Posix(dest=/home/mgoral/test/.tox/hs, clear=False, no_vcs_ignore=False, global=False) [virtualenv/run/session.py:48]
hs: 186 D create folder /home/mgoral/test/.tox/hs/bin [virtualenv/util/path/_sync.py:9]
hs: 186 D create folder /home/mgoral/test/.tox/hs/lib/python3.9/site-packages [virtualenv/util/path/_sync.py:9]
hs: 187 D write /home/mgoral/test/.tox/hs/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
hs: 187 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D version_info = 3.9.2.final.0 [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D base-executable = /usr/bin/python3.9 [virtualenv/create/pyenv_cfg.py:34]
hs: 187 D symlink /usr/bin/python3.9 to /home/mgoral/test/.tox/hs/bin/python [virtualenv/util/path/_sync.py:28]
hs: 187 D create virtualenv import hook file /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/_virtualenv.pth [virtualenv/create/via_global_ref/api.py:89]
hs: 187 D create /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/_virtualenv.py [virtualenv/create/via_global_ref/api.py:92]
hs: 188 D ============================== target debug ============================== [virtualenv/run/session.py:50]
hs: 188 D debug via /home/mgoral/test/.tox/hs/bin/python /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/create/debug.py [virtualenv/create/creator.py:197]
hs: 188 D {
"sys": {
"executable": "/home/mgoral/test/.tox/hs/bin/python",
"_base_executable": "/home/mgoral/test/.tox/hs/bin/python",
"prefix": "/home/mgoral/test/.tox/hs",
"base_prefix": "/usr",
"real_prefix": null,
"exec_prefix": "/home/mgoral/test/.tox/hs",
"base_exec_prefix": "/usr",
"path": [
"/usr/lib/python39.zip",
"/usr/lib/python3.9",
"/usr/lib/python3.9/lib-dynload",
"/home/mgoral/test/.tox/hs/lib/python3.9/site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "utf-8"
},
"version": "3.9.2 (default, Feb 28 2021, 17:03:44) \n[GCC 10.2.1 20210110]",
"makefile_filename": "/usr/lib/python3.9/config-3.9-x86_64-linux-gnu/Makefile",
"os": "<module 'os' from '/usr/lib/python3.9/os.py'>",
"site": "<module 'site' from '/usr/lib/python3.9/site.py'>",
"datetime": "<module 'datetime' from '/usr/lib/python3.9/datetime.py'>",
"math": "<module 'math' (built-in)>",
"json": "<module 'json' from '/usr/lib/python3.9/json/__init__.py'>"
} [virtualenv/run/session.py:51]
hs: 213 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/mgoral/.local/share/virtualenv) [virtualenv/run/session.py:55]
hs: 215 D got embed update of distribution wheel from /home/mgoral/.local/share/virtualenv/wheel/3.9/embed/3/wheel.json [virtualenv/app_data/via_disk_folder.py:129]
hs: 216 D got embed update of distribution setuptools from /home/mgoral/.local/share/virtualenv/wheel/3.9/embed/3/setuptools.json [virtualenv/app_data/via_disk_folder.py:129]
hs: 216 D got embed update of distribution pip from /home/mgoral/.local/share/virtualenv/wheel/3.9/embed/3/pip.json [virtualenv/app_data/via_disk_folder.py:129]
hs: 219 D install wheel from wheel /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/wheel-0.38.4-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
hs: 219 D install setuptools from wheel /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/setuptools-65.6.3-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
hs: 219 D install pip from wheel /home/mgoral/test/.tox/.tox/lib/python3.9/site-packages/virtualenv/seed/wheels/embed/pip-22.3.1-py3-none-any.whl via CopyPipInstall [virtualenv/seed/embed/via_app_data/via_app_data.py:47]
hs: 220 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.dist-info to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pip-22.3.1.dist-info [virtualenv/util/path/_sync.py:36]
hs: 220 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.virtualenv to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/setuptools-65.6.3.virtualenv [virtualenv/util/path/_sync.py:36]
hs: 220 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/wheel [virtualenv/util/path/_sync.py:36]
hs: 221 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/setuptools [virtualenv/util/path/_sync.py:36]
hs: 224 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pip [virtualenv/util/path/_sync.py:36]
hs: 227 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.dist-info to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/wheel-0.38.4.dist-info [virtualenv/util/path/_sync.py:36]
hs: 229 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/wheel-0.38.4-py3-none-any/wheel-0.38.4.virtualenv to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/wheel-0.38.4.virtualenv [virtualenv/util/path/_sync.py:36]
hs: 231 D generated console scripts wheel-3.9 wheel3 wheel3.9 wheel [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
hs: 256 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/distutils-precedence.pth to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/distutils-precedence.pth [virtualenv/util/path/_sync.py:36]
hs: 256 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/pkg_resources to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pkg_resources [virtualenv/util/path/_sync.py:36]
hs: 264 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/_distutils_hack to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/_distutils_hack [virtualenv/util/path/_sync.py:36]
hs: 264 D copy directory /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/setuptools-65.6.3-py3-none-any/setuptools-65.6.3.dist-info to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/setuptools-65.6.3.dist-info [virtualenv/util/path/_sync.py:36]
hs: 265 D generated console scripts [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
hs: 286 D copy /home/mgoral/.local/share/virtualenv/wheel/3.9/image/1/CopyPipInstall/pip-22.3.1-py3-none-any/pip-22.3.1.virtualenv to /home/mgoral/test/.tox/hs/lib/python3.9/site-packages/pip-22.3.1.virtualenv [virtualenv/util/path/_sync.py:36]
hs: 287 D generated console scripts pip3.9 pip3 pip-3.9 pip [virtualenv/seed/embed/via_app_data/pip_install/base.py:41]
hs: 287 I add activators for Bash, CShell, Fish, Nushell, PowerShell, Python [virtualenv/run/session.py:61]
hs: 288 D write /home/mgoral/test/.tox/hs/pyvenv.cfg [virtualenv/create/pyenv_cfg.py:30]
hs: 288 D home = /usr/bin [virtualenv/create/pyenv_cfg.py:34]
hs: 288 D implementation = CPython [virtualenv/create/pyenv_cfg.py:34]
hs: 288 D version_info = 3.9.2.final.0 [virtualenv/create/pyenv_cfg.py:34]
hs: 288 D virtualenv = 20.17.1 [virtualenv/create/pyenv_cfg.py:34]
hs: 288 D include-system-site-packages = false [virtualenv/create/pyenv_cfg.py:34]
hs: 288 D base-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
hs: 288 D base-exec-prefix = /usr [virtualenv/create/pyenv_cfg.py:34]
hs: 288 D base-executable = /usr/bin/python3.9 [virtualenv/create/pyenv_cfg.py:34]
hs: 291 W commands[0]> python -c 'import os; print(os.environ["PYTHONHASHSEED"])' [tox/tox_env/api.py:427]
288846098
hs: 311 I exit 0 (0.02 seconds) /home/mgoral/test> python -c 'import os; print(os.environ["PYTHONHASHSEED"])' pid=279864 [tox/execute/api.py:275]
hs: 312 W commands[1]> python -c 'import os; print(os.environ["OTHER"])' [tox/tox_env/api.py:427]
foo
hs: 327 I exit 0 (0.01 seconds) /home/mgoral/test> python -c 'import os; print(os.environ["OTHER"])' pid=279870 [tox/execute/api.py:275]
hs: OK (0.19=setup[0.16]+cmd[0.02,0.01] seconds)
congratulations :) (0.24 seconds)
.tox finish: provision after 0.39 seconds
```
## Minimal example
tox.ini which doesn't pass `PYTHONHASHSEED` to `hs` environment, but passes `OTHER`:
```ini
[tox]
requires = tox>=4.0
skipsdist = True
[testenv]
basepython = python3
setenv =
PYTHONHASHSEED=0
OTHER=foo
[testenv:hs]
commands =
python -c 'import os; print(os.environ["PYTHONHASHSEED"])'
python -c 'import os; print(os.environ["OTHER"])'
setenv = {[testenv]setenv}
```
tox.ini which passes `PYTHONHASHSEED` to the `hs` environment (note the underscores in set_env and setenv):
```ini
[tox]
requires = tox>=4.0
skipsdist = True
[testenv]
basepython = python3
set_env =
PYTHONHASHSEED=0
OTHER=foo
[testenv:hs]
commands =
python -c 'import os; print(os.environ["PYTHONHASHSEED"])'
python -c 'import os; print(os.environ["OTHER"])'
setenv = {[testenv]set_env}
```
|
open
|
2023-01-16T15:23:34Z
|
2024-03-05T22:15:13Z
|
https://github.com/tox-dev/tox/issues/2872
|
[
"bug:minor",
"help:wanted"
] |
mgoral
| 3
|
holoviz/panel
|
matplotlib
| 7,567
|
Make TextInput work for forms
|
When comparing to [Reflex](https://reflex.dev/) it's clear they have given more thought to their TextInput for forms.
We are missing basic attributes like
- `required` (`True` or `False`)
- `type` (for example `"email"`)
- `pattern` (`".+@example\.com"`)
That can help users when they enter text. (A rough workaround sketch using today's Panel API is included at the end of this issue.)
Reflex will
- flag a TextInput if it is required and nothing has been input
- show a tooltip explaining what is missing (for example a `@` in an email)
https://github.com/user-attachments/assets/a10a04c9-8537-48e4-80b5-6fc876052fe3
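Until something like `required`/`type`/`pattern` exists natively, a rough approximation with today's Panel API is to watch the widget's value and surface a message; this is only a sketch of a workaround, not the proposed feature:
```python
import re

import panel as pn

pn.extension()

EMAIL_RE = re.compile(r".+@example\.com")  # pattern from the example above

email = pn.widgets.TextInput(name="Email", placeholder="you@example.com")
status = pn.pane.Markdown("")


def _validate(event):
    value = event.new
    if not value:
        status.object = "**Required:** please enter an email address"
    elif not EMAIL_RE.fullmatch(value):
        status.object = "**Invalid:** does not match the expected pattern"
    else:
        status.object = "Looks good"


email.param.watch(_validate, "value")

pn.Column(email, status).servable()
```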
|
open
|
2024-12-23T07:18:32Z
|
2025-01-20T21:32:15Z
|
https://github.com/holoviz/panel/issues/7567
|
[
"type: enhancement"
] |
MarcSkovMadsen
| 3
|
oegedijk/explainerdashboard
|
plotly
| 32
|
Problem with deploying a custom dashboard using waitress
|
I am currently trying to deploy explainer dashboards using waitress. While it works fine with the vanilla regression dashboard, there is an error when running _waitress-serve --call "dashboard:create_c_app"_, where dashboard.py is
```
from explainerdashboard import ExplainerDashboard
from custom_tabs import *
def create_c_app():
db = ExplainerDashboard.from_config("dashboard.yaml")
app = db.flask_server()
return app
```
The error is
```
Traceback (most recent call last):
File "C:\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\...\venv\Scripts\waitress-serve.exe\__main__.py", line 7, in <module>
File "c:\...\venv\lib\site-packages\waitress\runner.py", line 280, in run
app = app()
File "C:\...\dashboard.py", line 10, in create_c_app
db = ExplainerDashboard.from_config("dashboard.yaml")
File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 483, in from_config
tabs = cls._yamltabs_to_tabs(dashboard_params['tabs'], explainer)
File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 611, in _yamltabs_to_tabs
return [instantiate_tab(tab, explainer) for tab in yamltabs]
File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 611, in <listcomp>
return [instantiate_tab(tab, explainer) for tab in yamltabs]
File "c:\...\venv\lib\site-packages\explainerdashboard\dashboards.py", line 600, in instantiate_tab
tab_class = getattr(import_module(tab['module']), tab['name'])
AttributeError: module '__main__' has no attribute 'CustomDescriptiveTab
```
The .yaml file looks like this:
```
[...]
decision_trees: true
tabs:
- name: CustomDescriptiveTab
module: __main__
params: null
- name: CustomFeatImpTab
module: __main__
params: null
- name: CustomWhatIfTab
module: __main__
params: null
```
I don't understand why running via waitress is not working whereas loading the dashboard using ExplainerDashboard.from_config("dashboard.yaml") and running it locally works fine.
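A possible explanation: under `waitress-serve`, `__main__` is waitress's runner module rather than dashboard.py, so tabs declared with `module: __main__` in the yaml cannot be looked up. A minimal sketch of a workaround that sidesteps the yaml lookup entirely (the `explainer.joblib` path, `ClassifierExplainer.from_file`, and the exact tab class names are assumptions here; use the explainer class matching the saved explainer):
```python
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from custom_tabs import CustomDescriptiveTab, CustomFeatImpTab, CustomWhatIfTab


def create_c_app():
    # Load the explainer that was dumped earlier and pass the tab classes
    # directly, so nothing needs to be resolved via module: __main__.
    explainer = ClassifierExplainer.from_file("explainer.joblib")
    db = ExplainerDashboard(
        explainer,
        [CustomDescriptiveTab, CustomFeatImpTab, CustomWhatIfTab],
    )
    return db.flask_server()
```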
|
closed
|
2020-12-01T13:38:12Z
|
2020-12-01T20:18:58Z
|
https://github.com/oegedijk/explainerdashboard/issues/32
|
[] |
hkoppen
| 6
|
flasgger/flasgger
|
api
| 487
|
Sorting tags in swagger UI
|
How do I sort only the tags in alphabetical order (I don't really care about the order of APIs and operations)? I have tried adding config similar to the ones mentioned in issue #277, but the UI is sorted based on API endpoint instead of tags. Kindly let me know if any other details are needed.
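For what it's worth, swagger-ui itself exposes a `tagsSorter: 'alpha'` option; a sketch of wiring it through flasgger's config, assuming flasgger forwards `ui_params` to the SwaggerUIBundle the same way the sorting examples in issue #277 do:
```python
from flasgger import Swagger
from flask import Flask

app = Flask(__name__)

# Assumption: entries under "ui_params" are passed straight to SwaggerUIBundle;
# "tagsSorter": "alpha" is the standard swagger-ui option for sorting tags.
app.config["SWAGGER"] = {
    "uiversion": 3,
    "ui_params": {
        "tagsSorter": "alpha",
    },
}

swagger = Swagger(app)
```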
|
open
|
2021-07-25T14:07:02Z
|
2023-09-22T11:04:07Z
|
https://github.com/flasgger/flasgger/issues/487
|
[] |
ajaynarayanan
| 10
|
LAION-AI/Open-Assistant
|
machine-learning
| 3,429
|
Download links in sft training folder
|
Please add links or citations for the open-source data used in SFT training.
|
closed
|
2023-06-14T06:04:57Z
|
2023-06-14T08:14:46Z
|
https://github.com/LAION-AI/Open-Assistant/issues/3429
|
[] |
lucasjinreal
| 1
|
Sanster/IOPaint
|
pytorch
| 152
|
Error when using cpu text encoder
|
This error happens when using the cpu text encoder with 0.29.0:
```
...
File ".\lama-cleaner\lama_cleaner\server.py", line 159, in process
res_np_img = model(image, mask, config)
File ".\lama-cleaner\lama_cleaner\model_manager.py", line 38, in __call__
return self.model(image, mask, config)
File ".\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File ".\lama-cleaner\lama_cleaner\model\sd.py", line 183, in __call__
crop_image = self._pad_forward(crop_img, crop_mask, config)
File ".\lama-cleaner\lama_cleaner\model\base.py", line 56, in _pad_forward
result = self.forward(pad_image, pad_mask, config)
File ".\lama-cleaner\lama_cleaner\model\sd.py", line 147, in forward
width=img_w,
File ".\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File ".\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_inpaint.py", line 650, in __call__
prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
File ".\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_inpaint.py", line 379, in _encode_prompt
if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
AttributeError: 'CPUTextEncoderWrapper' object has no attribute 'config'
```
Without `--sd-cpu-textencoder` it works fine.
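The traceback suggests the wrapper simply doesn't expose the attributes of the encoder it wraps (here `.config`). Not necessarily the project's actual fix, but one generic way to make such a wrapper transparent is to forward unknown attribute lookups to the wrapped module; a sketch:
```python
import torch


class TransparentCPUTextEncoder(torch.nn.Module):
    """Sketch of a CPU text-encoder wrapper that forwards attributes such as
    .config or .dtype to the wrapped encoder so callers like diffusers'
    _encode_prompt keep working."""

    def __init__(self, text_encoder: torch.nn.Module):
        super().__init__()
        self.text_encoder = text_encoder.to("cpu")

    def forward(self, input_ids, **kwargs):
        return self.text_encoder(input_ids.to("cpu"), **kwargs)

    def __getattr__(self, name):
        try:
            # nn.Module resolves registered submodules/parameters here.
            return super().__getattr__(name)
        except AttributeError:
            # Fall back to the wrapped encoder for everything else.
            return getattr(super().__getattr__("text_encoder"), name)
```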
|
closed
|
2022-12-04T13:32:06Z
|
2022-12-04T14:24:21Z
|
https://github.com/Sanster/IOPaint/issues/152
|
[] |
fiskbil
| 1
|
dynaconf/dynaconf
|
fastapi
| 322
|
[RFC] Merge all settings files before merging defaults and environment settings
|
**Is your feature request related to a problem? Please describe.**
Currently, a default setting from settings.local.yaml takes precedence over an environment-specific setting in settings.yaml. To me, the intuitive behaviour would be that default settings in settings.local.yaml are only used if no environment-specific setting has already been set.
```
# settings.yaml
default:
key: default-from-settings
development:
production:
key: production-from-settings
```
```
# settings.local.yaml
dynaconf_merge: true
default:
key: default-from-local
```
In the above example, the key would always be "default-from-local".
**Describe the solution you'd like**
To me, a more useful behaviour would be for key to be "default-from-local" only in the development environment, while the production value would still remain "production-from-settings".
**Additional context**
The current behaviour is to individually merge each file's default settings with its environment-specific settings first, and then to merge the files together. I propose that the behaviour changes to first merge all files together, and then merge default settings with the environment-specific settings.
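To make the expectation concrete, a small sketch using the Dynaconf 3.x API (shown here as an assumption, since the report above uses the older yaml env layout); the comments mark the values the proposal would like to see:
```python
from dynaconf import Dynaconf

settings = Dynaconf(
    settings_files=["settings.yaml", "settings.local.yaml"],
    environments=True,
)

# Desired behaviour under the proposal:
print(settings.from_env("development").key)  # "default-from-local"
print(settings.from_env("production").key)   # "production-from-settings"
```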
|
closed
|
2020-03-21T13:21:29Z
|
2022-07-02T20:12:26Z
|
https://github.com/dynaconf/dynaconf/issues/322
|
[
"wontfix",
"hacktoberfest",
"Not a Bug",
"RFC"
] |
tjader
| 3
|
cvat-ai/cvat
|
computer-vision
| 8,504
|
I created a Cvat account but I did not receive the verification email
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
_No response_
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
I can see that other people have had this issue, but I haven't found a solution I can apply.
// Adam
### Environment
_No response_
|
closed
|
2024-10-03T11:01:44Z
|
2024-12-07T08:17:33Z
|
https://github.com/cvat-ai/cvat/issues/8504
|
[
"bug"
] |
aerngren
| 4
|
airtai/faststream
|
asyncio
| 1,181
|
Feature: add K8S probes checker API
|
We should provide users with a way to check application health. This can be achieved through multiple mechanisms, so we can create a base `Checker` Protocol and use it the following way:
```python
app = FastStream(broker)
app.add_checker(
SocketChecker("/sock"),
# HTTPChecker(port=8000),
# ...
)
```
And check all of them by the same command
```shell
faststream probe main:app
```
So we can create multiple checkers for various cases, and users can implement their own checkers as well (a rough sketch follows the checklist below).
* [x] add `broker.ping()` unified method
* [ ] add `BaseChecker` using `broker.ping()`
* [ ] implement `HTTPChecker` (inheritor of `BaseChecker`)
* [ ] add CLI command to use any checker
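A rough sketch of what the `Checker` protocol and a `broker.ping()`-based checker could look like (the names and the `timeout` argument are assumptions, not the final API):
```python
from typing import Protocol


class Checker(Protocol):
    async def check(self) -> bool:
        """Return True if the application is healthy."""
        ...


class BrokerChecker:
    """Sketch of a BaseChecker built on the unified broker.ping() method."""

    def __init__(self, broker, timeout: float = 3.0) -> None:
        self.broker = broker
        self.timeout = timeout

    async def check(self) -> bool:
        # Delegates to the broker's ping; concrete HTTP/socket checkers would
        # expose the same check() interface over their own transport.
        return await self.broker.ping(timeout=self.timeout)
```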
|
closed
|
2024-01-29T16:41:18Z
|
2024-08-07T19:36:48Z
|
https://github.com/airtai/faststream/issues/1181
|
[
"enhancement",
"good first issue"
] |
Lancetnik
| 3
|
docarray/docarray
|
fastapi
| 1,324
|
Improve fastapi-docarray compatibility
|
Improve the way we send/receive DocArray objects with FastAPI.
### Context
Currently, DocArray objects are received as lists, so we have to construct an actual DocArray inside the function, and when returning we have to convert it back into a list ourselves (done in https://github.com/docarray/docarray/pull/1320)
```python
@app.post("/doc/", response_class=DocArrayResponse)
async def func(fastapi_docs: List[ImageDoc]) -> List[ImageDoc]:
# The docs FastAPI will receive will be treated as List[ImageDoc]
# so you need to cast it to DocArray
docarray_docs = DocArray[ImageDoc].construct(fastapi_docs)
...
# In case you want to return a DocArray, return it as a list
return list(docarray_docs)
```
### What we want
The goal is to receive/return DocArray objects without any additional operations from the developer side.
```python
@app.post("/doc/", response_class=DocArrayResponse)
async def func(docs: DocArray[ImageDoc]) -> DocArray[ImageDoc]:
return docs
```
### Ways to do it
I've investigated a couple of approaches; here are the most plausible ones and the main problem I had with each:
1. DocArray inherits from List/Sequence (draft https://github.com/docarray/docarray/pull/1316)
FastAPI still received DocArray as a List. If we fix that, this is the best way to go
2. DocArray inherits from Pydantic's GenericModel (draft https://github.com/docarray/docarray/pull/1304)
GenericModel does type-checking during runtime which introduces lots of circular imports
3. DocArray inherits from Pydantic's BaseModel (draft https://github.com/docarray/docarray/pull/1303)
BaseModel introduced a couple of functions that conflicted with our existing ones
In general, extending pydantic's classes was a bit of a headache; the first option is much cleaner.
|
closed
|
2023-04-02T18:03:18Z
|
2023-04-26T09:41:43Z
|
https://github.com/docarray/docarray/issues/1324
|
[] |
jupyterjazz
| 1
|
litestar-org/litestar
|
api
| 3,130
|
Enhancement: Have OpenAPI Schema generation respect route handler ordering
|
### Summary
Opening a new issue as discussed [here](https://github.com/litestar-org/litestar/issues/3059#issuecomment-1960609250).
From a DX perspective I personally find it very helpful to have ordered routes in the `Controller` that follow a certain logic:
- Progressive nesting (`GET /` comes before `GET /:id` and before `GET /:id/nested`)
- Logical actions ordering (`GET`, `POST`, `GET id`, `PATCH id`, `DELETE id`)
Example:
```python
class MyController(Controller):
tags = ["..."]
path = "/"
dependencies = {...}
@routes.get()
async def get_many(self):
...
@routes.post()
async def create(self, data: ...):
...
@routes.get("/{resource_id:uuid}")
async def get(self, resource_id: UUID):
...
@routes.patch("/{resource_id:uuid}")
async def update(self, resource_id: UUID):
...
@routes.delete("/{resource_id:uuid}")
async def delete(self, resource_id: UUID):
...
@routes.get("/{resource_id:uuid}/nested")
async def get_nested(self, resource_id: UUID):
...
```
Currently the ordering of the route definitions in the Controller is not respected by the docs, so I end up with a Swagger UI that looks like this:
```
- GET /:id/nested/:nest_id/another
- POST /:id/nested
- GET /
- DELETE /:id
- PATCH /
```
Which I personally find very confusing since:
(1) ~It doesn't seem to follow a pre-defined logic (couldn't find any pattern for that when looking at the docs)~ it does seem to follow the ordering of the methods as defined [here](https://github.com/litestar-org/litestar/blob/3e5c179e714bb074bae13e02d10e2f3f51e24d5c/litestar/openapi/spec/path_item.py#L44) as shared by @guacs [here](https://github.com/litestar-org/litestar/issues/3059#issuecomment-1960609250)
(2) It doesn't respect the logic that was defined on the controller (specifically the nesting).
Was having a quick look and it seems to be related to logic when registering the route [here](https://github.com/litestar-org/litestar/blob/3e5c179e714bb074bae13e02d10e2f3f51e24d5c/litestar/app.py#L647), and then the handler:method map on the `HTTPRoute` [here](https://github.com/litestar-org/litestar/blob/3e5c179e714bb074bae13e02d10e2f3f51e24d5c/litestar/routes/http.py#L94).
It seems that by the time it reaches the plugin the nesting is already out of order, so it might make sense to handle this when registering the routes?
Anyway, if this is of interest I'd be happy to contribute and take a deeper look into it.
|
open
|
2024-02-23T02:21:14Z
|
2025-03-20T15:54:26Z
|
https://github.com/litestar-org/litestar/issues/3130
|
[
"Enhancement"
] |
ccrvlh
| 8
|
huggingface/pytorch-image-models
|
pytorch
| 1,396
|
Swin transformer feature extraction: same input image but get different feature vectors
|
Hi there,
when I used the pre-trained 'swin_base_patch4_window7_224_in22k' to extract a feature vector for a 224×224 input image, every time I called net.forward_features(INPUT) I got a different vector.
Is there anything I should be aware of?
Thanks in advance
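A likely cause is the model still being in training mode, so dropout/stochastic depth make each forward pass differ; a minimal sketch that switches to eval mode and checks determinism:
```python
import timm
import torch

model = timm.create_model("swin_base_patch4_window7_224_in22k", pretrained=True)
model.eval()  # disable dropout / stochastic depth so features are deterministic

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats_a = model.forward_features(x)
    feats_b = model.forward_features(x)

print(torch.allclose(feats_a, feats_b))  # expected: True once in eval mode
```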
|
closed
|
2022-08-05T16:21:17Z
|
2022-08-05T16:25:06Z
|
https://github.com/huggingface/pytorch-image-models/issues/1396
|
[
"bug"
] |
Dadatata-JZ
| 1
|
microsoft/qlib
|
machine-learning
| 1,284
|
Can i use a dataset other than the one provided by Qlib to learn the model?
|
## ❓ Questions and Help
Can I use a dataset other than the one provided by Qlib to learn the model?
If possible, is it correct to follow the contents of [Converting CSV Format into Qlib Format](https://qlib.readthedocs.io/en/latest/component/data.html#converting-csv-format-into-qlib-format)?
|
closed
|
2022-09-08T04:38:58Z
|
2023-03-02T09:35:36Z
|
https://github.com/microsoft/qlib/issues/1284
|
[
"question",
"stale"
] |
Haaae
| 4
|
Miserlou/Zappa
|
django
| 1,267
|
Need an improved way to distinguish Django from Flask
|
<!--- Provide a general summary of the issue in the Title above -->
cli.py has the code
```
# Detect Django/Flask
try: # pragma: no cover
import django
has_django = True
except ImportError as e:
has_django = False
```
But some people have both installed and don't always want Django.
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
## Expected Behavior
<!--- Tell us what should happen -->
There should be a way to detect a Flask project.
## Actual Behavior
<!--- Tell us what happens instead -->
If importing django succeeds, the project is assumed to be Django.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
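Not a definitive design, but one sketch: prefer explicit project configuration (the existing `django_settings` and `app_function` keys in zappa_settings) and only fall back to the import check when neither is set:
```python
def detect_django(stage_settings: dict) -> bool:
    # Explicit configuration wins over environment sniffing.
    if "django_settings" in stage_settings:
        return True
    if stage_settings.get("app_function"):
        # A WSGI app path is configured, e.g. a Flask app.
        return False
    # Last resort: the current import-based heuristic.
    try:
        import django  # noqa: F401
        return True
    except ImportError:
        return False
```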
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.45.1
* Operating System and Python version:
MacOS High Sierra 10.13.1
Python 3.6.3 |Anaconda, Inc.| (default, Oct 6 2017, 12:04:38)
* The output of `pip freeze`:
Babel==2.5.1
click==6.7
Flask==0.12.2
Flask-Babel==0.11.2
Flask-Table==0.4.1
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.0
PyMySQL==0.7.11
pytz==2017.3
SQLAlchemy==1.1.15
Werkzeug==0.12.2
* Link to your project (optional):
* Your `zappa_settings.py`:
|
closed
|
2017-11-28T23:24:19Z
|
2017-11-29T01:20:52Z
|
https://github.com/Miserlou/Zappa/issues/1267
|
[] |
grfiv
| 0
|
ray-project/ray
|
pytorch
| 51,135
|
CI test windows://python/ray/tests:test_logging is consistently_failing
|
CI test **windows://python/ray/tests:test_logging** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8722#01956cdc-3ef9-480f-934c-c866f9d45f90
- https://buildkite.com/ray-project/postmerge/builds/8722#01956c6c-bdef-45b6-9e8e-d051acbaa89e
DataCaseName-windows://python/ray/tests:test_logging-END
Managed by OSS Test Policy
|
closed
|
2025-03-06T20:18:20Z
|
2025-03-07T20:44:30Z
|
https://github.com/ray-project/ray/issues/51135
|
[
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 4
|
allenai/allennlp
|
pytorch
| 4,855
|
Models: missing None check in PrecoReader's text_to_instance method.
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
Hi,
I think a `None` check is missing at that [line](https://github.com/allenai/allennlp-models/blob/ea1f71c79c329db1b66d9db79f0eaa39d2fd2857/allennlp_models/coref/dataset_readers/preco.py#L94) in `PrecoReader`.
According to the function argument list, and the subsequent call to `make_coref_instance`, `clusters` should be allowed to be `None`.
A typical use-case would be e.g. inference where we don't have any info about the clusters.
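The fix would presumably be the usual guard, roughly as follows (the actual offset bookkeeping inside the loop is specific to PrecoReader and elided here):
```python
def _adjust_clusters(gold_clusters):
    # Allow inference-time calls where no cluster info is available.
    if gold_clusters is None:
        return None
    adjusted = []
    for cluster in gold_clusters:
        adjusted.append(cluster)  # PrecoReader's per-cluster processing goes here
    return adjusted
```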
<details>
<summary><b>Python traceback:
</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
Traceback (most recent call last):
File "/home/fco/coreference/bug.py", line 15, in <module>
instance = reader.text_to_instance(sentences)
File "/home/fco/anaconda3/envs/coref/lib/python3.8/site-packages/allennlp_models/coref/dataset_readers/preco.py", line 94, in text_to_instance
for cluster in gold_clusters:
TypeError: 'NoneType' object is not iterable
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: Ubuntu 18.04.3 LTS
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.8.5
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
absl-py==0.11.0
allennlp==1.2.0
allennlp-models==1.2.0
attrs==20.3.0
blis==0.4.1
boto3==1.16.14
botocore==1.19.14
cachetools==4.1.1
catalogue==1.0.0
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
conllu==4.2.1
cymem==2.0.4
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz
filelock==3.0.12
ftfy==5.8
future==0.18.2
google-auth==1.23.0
google-auth-oauthlib==0.4.2
grpcio==1.33.2
h5py==3.1.0
idna==2.10
importlib-metadata==2.0.0
iniconfig==1.1.1
jmespath==0.10.0
joblib==0.17.0
jsonnet==0.16.0
jsonpickle==1.4.1
Markdown==3.3.3
murmurhash==1.0.4
nltk==3.5
numpy==1.19.4
oauthlib==3.1.0
overrides==3.1.0
packaging==20.4
pandas==1.1.4
plac==1.1.3
pluggy==0.13.1
preshed==3.0.4
protobuf==3.13.0
py==1.9.0
py-rouge==1.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyconll==2.3.3
pyparsing==2.4.7
PySocks==1.7.1
pytest==6.1.2
python-dateutil==2.8.1
pytz==2020.4
regex==2020.10.28
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
s3transfer==0.3.3
sacremoses==0.0.43
scikit-learn==0.23.2
scipy==1.5.4
sentencepiece==0.1.94
six==1.15.0
spacy==2.3.2
srsly==1.0.3
tensorboard==2.4.0
tensorboard-plugin-wit==1.7.0
tensorboardX==2.1
thinc==7.4.1
threadpoolctl==2.1.0
tokenizers==0.9.2
toml==0.10.2
torch==1.7.0
tqdm==4.51.0
transformers==3.4.0
tweepy==3.9.0
typing==3.7.4.3
typing-extensions==3.7.4.3
urllib3==1.25.11
wasabi==0.8.0
wcwidth==0.2.5
Werkzeug==1.0.1
word2number==1.1
zipp==3.4.0
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```python
import spacy
from allennlp.data.token_indexers import SingleIdTokenIndexer, TokenCharactersIndexer
from allennlp_models.coref import PrecoReader
my_text = "Night you. Subdue creepeth cattle creeping living lesser."
sp = spacy.load("en_core_web_sm")
doc = sp(my_text)
sentences = [[token.text for token in sent] for sent in doc.sents]
reader = PrecoReader(max_span_width=10, token_indexers={"tokens": SingleIdTokenIndexer(),
"token_characters": TokenCharactersIndexer()})
instance = reader.text_to_instance(sentences)
```
</p>
</details>
|
closed
|
2020-12-09T14:16:36Z
|
2020-12-10T20:30:06Z
|
https://github.com/allenai/allennlp/issues/4855
|
[
"bug"
] |
frcnt
| 0
|
littlecodersh/ItChat
|
api
| 514
|
When sending a message, what causes the 1024 return code?
|
`{u'MsgID': u'', u'LocalID': u'', u'BaseResponse': {u'ErrMsg': u'', u'Ret': 1204, 'RawMsg': u''}}`
|
closed
|
2017-09-20T02:24:07Z
|
2017-09-20T02:36:51Z
|
https://github.com/littlecodersh/ItChat/issues/514
|
[] |
ysnows
| 2
|
CTFd/CTFd
|
flask
| 1,935
|
Make the distinction between user and team mode clearer during setup
|
We should probably have some kind of big checkbox div that explains what it means to be in user/teams mode. It would reduce the likelihood of needing to switch.
|
closed
|
2021-07-02T05:01:12Z
|
2021-07-17T19:44:24Z
|
https://github.com/CTFd/CTFd/issues/1935
|
[
"easy"
] |
ColdHeat
| 0
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 376
|
Why does my training run keep reporting that output_dir is missing?
|
(torch) root@autodl-container-ce8743b1d5-46215747:~/autodl-tmp/LLAMA_cn/src/Chinese-LLaMA-Alpaca/scripts# bash run_pt.sh
run_pt.sh: line 7: $'\r': command not found
run_pt.sh: line 17: $'\r': command not found
run_pt.sh: line 19: $'\r': command not found
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: /root/miniconda3/envs/torch/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /root/miniconda3/envs/torch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda116.so...
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: /root/miniconda3/envs/torch/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 116
CUDA SETUP: Loading binary /root/miniconda3/envs/torch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda116.so...
usage: run_clm_pt_with_peft.py [-h] [--model_name_or_path MODEL_NAME_OR_PATH] [--tokenizer_name_or_path TOKENIZER_NAME_OR_PATH] [--model_type MODEL_TYPE] [--config_overrides CONFIG_OVERRIDES] [--config_name CONFIG_NAME]
[--tokenizer_name TOKENIZER_NAME] [--cache_dir CACHE_DIR] [--use_fast_tokenizer [USE_FAST_TOKENIZER]] [--no_use_fast_tokenizer] [--model_revision MODEL_REVISION] [--use_auth_token [USE_AUTH_TOKEN]]
[--torch_dtype {auto,bfloat16,float16,float32}] [--dataset_dir DATASET_DIR] [--dataset_config_name DATASET_CONFIG_NAME] [--train_file TRAIN_FILE] [--validation_file VALIDATION_FILE] [--max_train_samples MAX_TRAIN_SAMPLES]
[--max_eval_samples MAX_EVAL_SAMPLES] [--streaming [STREAMING]] [--block_size BLOCK_SIZE] [--overwrite_cache [OVERWRITE_CACHE]] [--validation_split_percentage VALIDATION_SPLIT_PERCENTAGE]
[--preprocessing_num_workers PREPROCESSING_NUM_WORKERS] [--keep_linebreaks [KEEP_LINEBREAKS]] [--no_keep_linebreaks] [--data_cache_dir DATA_CACHE_DIR] --output_dir OUTPUT_DIR
[--overwrite_output_dir [OVERWRITE_OUTPUT_DIR]] [--do_train [DO_TRAIN]] [--do_eval [DO_EVAL]] [--do_predict [DO_PREDICT]] [--evaluation_strategy {no,steps,epoch}] [--prediction_loss_only [PREDICTION_LOSS_ONLY]]
[--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE] [--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE] [--per_gpu_train_batch_size PER_GPU_TRAIN_BATCH_SIZE]
[--per_gpu_eval_batch_size PER_GPU_EVAL_BATCH_SIZE] [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS] [--eval_accumulation_steps EVAL_ACCUMULATION_STEPS] [--eval_delay EVAL_DELAY] [--learning_rate LEARNING_RATE]
[--weight_decay WEIGHT_DECAY] [--adam_beta1 ADAM_BETA1] [--adam_beta2 ADAM_BETA2] [--adam_epsilon ADAM_EPSILON] [--max_grad_norm MAX_GRAD_NORM] [--num_train_epochs NUM_TRAIN_EPOCHS] [--max_steps MAX_STEPS]
[--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup,inverse_sqrt}] [--warmup_ratio WARMUP_RATIO] [--warmup_steps WARMUP_STEPS]
[--log_level {debug,info,warning,error,critical,passive}] [--log_level_replica {debug,info,warning,error,critical,passive}] [--log_on_each_node [LOG_ON_EACH_NODE]] [--no_log_on_each_node] [--logging_dir LOGGING_DIR]
[--logging_strategy {no,steps,epoch}] [--logging_first_step [LOGGING_FIRST_STEP]] [--logging_steps LOGGING_STEPS] [--logging_nan_inf_filter [LOGGING_NAN_INF_FILTER]] [--no_logging_nan_inf_filter]
[--save_strategy {no,steps,epoch}] [--save_steps SAVE_STEPS] [--save_total_limit SAVE_TOTAL_LIMIT] [--save_safetensors [SAVE_SAFETENSORS]] [--save_on_each_node [SAVE_ON_EACH_NODE]] [--no_cuda [NO_CUDA]]
[--use_mps_device [USE_MPS_DEVICE]] [--seed SEED] [--data_seed DATA_SEED] [--jit_mode_eval [JIT_MODE_EVAL]] [--use_ipex [USE_IPEX]] [--bf16 [BF16]] [--fp16 [FP16]] [--fp16_opt_level FP16_OPT_LEVEL]
[--half_precision_backend {auto,cuda_amp,apex,cpu_amp}] [--bf16_full_eval [BF16_FULL_EVAL]] [--fp16_full_eval [FP16_FULL_EVAL]] [--tf32 TF32] [--local_rank LOCAL_RANK] [--xpu_backend {mpi,ccl,gloo}]
[--tpu_num_cores TPU_NUM_CORES] [--tpu_metrics_debug [TPU_METRICS_DEBUG]] [--debug DEBUG] [--dataloader_drop_last [DATALOADER_DROP_LAST]] [--eval_steps EVAL_STEPS] [--dataloader_num_workers DATALOADER_NUM_WORKERS]
[--past_index PAST_INDEX] [--run_name RUN_NAME] [--disable_tqdm DISABLE_TQDM] [--remove_unused_columns [REMOVE_UNUSED_COLUMNS]] [--no_remove_unused_columns] [--label_names LABEL_NAMES [LABEL_NAMES ...]]
[--load_best_model_at_end [LOAD_BEST_MODEL_AT_END]] [--metric_for_best_model METRIC_FOR_BEST_MODEL] [--greater_is_better GREATER_IS_BETTER] [--ignore_data_skip [IGNORE_DATA_SKIP]] [--sharded_ddp SHARDED_DDP] [--fsdp FSDP]
[--fsdp_min_num_params FSDP_MIN_NUM_PARAMS] [--fsdp_config FSDP_CONFIG] [--fsdp_transformer_layer_cls_to_wrap FSDP_TRANSFORMER_LAYER_CLS_TO_WRAP] [--deepspeed DEEPSPEED] [--label_smoothing_factor LABEL_SMOOTHING_FACTOR]
[--optim {adamw_hf,adamw_torch,adamw_torch_fused,adamw_torch_xla,adamw_apex_fused,adafactor,adamw_bnb_8bit,adamw_anyprecision,sgd,adagrad}] [--optim_args OPTIM_ARGS] [--adafactor [ADAFACTOR]]
[--group_by_length [GROUP_BY_LENGTH]] [--length_column_name LENGTH_COLUMN_NAME] [--report_to REPORT_TO [REPORT_TO ...]] [--ddp_find_unused_parameters DDP_FIND_UNUSED_PARAMETERS] [--ddp_bucket_cap_mb DDP_BUCKET_CAP_MB]
[--dataloader_pin_memory [DATALOADER_PIN_MEMORY]] [--no_dataloader_pin_memory] [--skip_memory_metrics [SKIP_MEMORY_METRICS]] [--no_skip_memory_metrics] [--use_legacy_prediction_loop [USE_LEGACY_PREDICTION_LOOP]]
[--push_to_hub [PUSH_TO_HUB]] [--resume_from_checkpoint RESUME_FROM_CHECKPOINT] [--hub_model_id HUB_MODEL_ID] [--hub_strategy {end,every_save,checkpoint,all_checkpoints}] [--hub_token HUB_TOKEN]
[--hub_private_repo [HUB_PRIVATE_REPO]] [--gradient_checkpointing [GRADIENT_CHECKPOINTING]] [--include_inputs_for_metrics [INCLUDE_INPUTS_FOR_METRICS]] [--fp16_backend {auto,cuda_amp,apex,cpu_amp}]
[--push_to_hub_model_id PUSH_TO_HUB_MODEL_ID] [--push_to_hub_organization PUSH_TO_HUB_ORGANIZATION] [--push_to_hub_token PUSH_TO_HUB_TOKEN] [--mp_parameters MP_PARAMETERS] [--auto_find_batch_size [AUTO_FIND_BATCH_SIZE]]
[--full_determinism [FULL_DETERMINISM]] [--torchdynamo TORCHDYNAMO] [--ray_scope RAY_SCOPE] [--ddp_timeout DDP_TIMEOUT] [--torch_compile [TORCH_COMPILE]] [--torch_compile_backend TORCH_COMPILE_BACKEND]
[--torch_compile_mode TORCH_COMPILE_MODE] [--trainable TRAINABLE] [--lora_rank LORA_RANK] [--lora_dropout LORA_DROPOUT] [--lora_alpha LORA_ALPHA] [--modules_to_save MODULES_TO_SAVE] [--debug_mode [DEBUG_MODE]]
[--peft_path PEFT_PATH]
run_clm_pt_with_peft.py: error: the following arguments are required: --output_dir
usage: run_clm_pt_with_peft.py [-h] [--model_name_or_path MODEL_NAME_OR_PATH] [--tokenizer_name_or_path TOKENIZER_NAME_OR_PATH] [--model_type MODEL_TYPE] [--config_overrides CONFIG_OVERRIDES] [--config_name CONFIG_NAME]
[--tokenizer_name TOKENIZER_NAME] [--cache_dir CACHE_DIR] [--use_fast_tokenizer [USE_FAST_TOKENIZER]] [--no_use_fast_tokenizer] [--model_revision MODEL_REVISION] [--use_auth_token [USE_AUTH_TOKEN]]
[--torch_dtype {auto,bfloat16,float16,float32}] [--dataset_dir DATASET_DIR] [--dataset_config_name DATASET_CONFIG_NAME] [--train_file TRAIN_FILE] [--validation_file VALIDATION_FILE] [--max_train_samples MAX_TRAIN_SAMPLES]
[--max_eval_samples MAX_EVAL_SAMPLES] [--streaming [STREAMING]] [--block_size BLOCK_SIZE] [--overwrite_cache [OVERWRITE_CACHE]] [--validation_split_percentage VALIDATION_SPLIT_PERCENTAGE]
[--preprocessing_num_workers PREPROCESSING_NUM_WORKERS] [--keep_linebreaks [KEEP_LINEBREAKS]] [--no_keep_linebreaks] [--data_cache_dir DATA_CACHE_DIR] --output_dir OUTPUT_DIR
[--overwrite_output_dir [OVERWRITE_OUTPUT_DIR]] [--do_train [DO_TRAIN]] [--do_eval [DO_EVAL]] [--do_predict [DO_PREDICT]] [--evaluation_strategy {no,steps,epoch}] [--prediction_loss_only [PREDICTION_LOSS_ONLY]]
[--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE] [--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE] [--per_gpu_train_batch_size PER_GPU_TRAIN_BATCH_SIZE]
[--per_gpu_eval_batch_size PER_GPU_EVAL_BATCH_SIZE] [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS] [--eval_accumulation_steps EVAL_ACCUMULATION_STEPS] [--eval_delay EVAL_DELAY] [--learning_rate LEARNING_RATE]
[--weight_decay WEIGHT_DECAY] [--adam_beta1 ADAM_BETA1] [--adam_beta2 ADAM_BETA2] [--adam_epsilon ADAM_EPSILON] [--max_grad_norm MAX_GRAD_NORM] [--num_train_epochs NUM_TRAIN_EPOCHS] [--max_steps MAX_STEPS]
[--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup,inverse_sqrt}] [--warmup_ratio WARMUP_RATIO] [--warmup_steps WARMUP_STEPS]
[--log_level {debug,info,warning,error,critical,passive}] [--log_level_replica {debug,info,warning,error,critical,passive}] [--log_on_each_node [LOG_ON_EACH_NODE]] [--no_log_on_each_node] [--logging_dir LOGGING_DIR]
[--logging_strategy {no,steps,epoch}] [--logging_first_step [LOGGING_FIRST_STEP]] [--logging_steps LOGGING_STEPS] [--logging_nan_inf_filter [LOGGING_NAN_INF_FILTER]] [--no_logging_nan_inf_filter]
[--save_strategy {no,steps,epoch}] [--save_steps SAVE_STEPS] [--save_total_limit SAVE_TOTAL_LIMIT] [--save_safetensors [SAVE_SAFETENSORS]] [--save_on_each_node [SAVE_ON_EACH_NODE]] [--no_cuda [NO_CUDA]]
[--use_mps_device [USE_MPS_DEVICE]] [--seed SEED] [--data_seed DATA_SEED] [--jit_mode_eval [JIT_MODE_EVAL]] [--use_ipex [USE_IPEX]] [--bf16 [BF16]] [--fp16 [FP16]] [--fp16_opt_level FP16_OPT_LEVEL]
[--half_precision_backend {auto,cuda_amp,apex,cpu_amp}] [--bf16_full_eval [BF16_FULL_EVAL]] [--fp16_full_eval [FP16_FULL_EVAL]] [--tf32 TF32] [--local_rank LOCAL_RANK] [--xpu_backend {mpi,ccl,gloo}]
[--tpu_num_cores TPU_NUM_CORES] [--tpu_metrics_debug [TPU_METRICS_DEBUG]] [--debug DEBUG] [--dataloader_drop_last [DATALOADER_DROP_LAST]] [--eval_steps EVAL_STEPS] [--dataloader_num_workers DATALOADER_NUM_WORKERS]
[--past_index PAST_INDEX] [--run_name RUN_NAME] [--disable_tqdm DISABLE_TQDM] [--remove_unused_columns [REMOVE_UNUSED_COLUMNS]] [--no_remove_unused_columns] [--label_names LABEL_NAMES [LABEL_NAMES ...]]
[--load_best_model_at_end [LOAD_BEST_MODEL_AT_END]] [--metric_for_best_model METRIC_FOR_BEST_MODEL] [--greater_is_better GREATER_IS_BETTER] [--ignore_data_skip [IGNORE_DATA_SKIP]] [--sharded_ddp SHARDED_DDP] [--fsdp FSDP]
[--fsdp_min_num_params FSDP_MIN_NUM_PARAMS] [--fsdp_config FSDP_CONFIG] [--fsdp_transformer_layer_cls_to_wrap FSDP_TRANSFORMER_LAYER_CLS_TO_WRAP] [--deepspeed DEEPSPEED] [--label_smoothing_factor LABEL_SMOOTHING_FACTOR]
[--optim {adamw_hf,adamw_torch,adamw_torch_fused,adamw_torch_xla,adamw_apex_fused,adafactor,adamw_bnb_8bit,adamw_anyprecision,sgd,adagrad}] [--optim_args OPTIM_ARGS] [--adafactor [ADAFACTOR]]
[--group_by_length [GROUP_BY_LENGTH]] [--length_column_name LENGTH_COLUMN_NAME] [--report_to REPORT_TO [REPORT_TO ...]] [--ddp_find_unused_parameters DDP_FIND_UNUSED_PARAMETERS] [--ddp_bucket_cap_mb DDP_BUCKET_CAP_MB]
[--dataloader_pin_memory [DATALOADER_PIN_MEMORY]] [--no_dataloader_pin_memory] [--skip_memory_metrics [SKIP_MEMORY_METRICS]] [--no_skip_memory_metrics] [--use_legacy_prediction_loop [USE_LEGACY_PREDICTION_LOOP]]
[--push_to_hub [PUSH_TO_HUB]] [--resume_from_checkpoint RESUME_FROM_CHECKPOINT] [--hub_model_id HUB_MODEL_ID] [--hub_strategy {end,every_save,checkpoint,all_checkpoints}] [--hub_token HUB_TOKEN]
[--hub_private_repo [HUB_PRIVATE_REPO]] [--gradient_checkpointing [GRADIENT_CHECKPOINTING]] [--include_inputs_for_metrics [INCLUDE_INPUTS_FOR_METRICS]] [--fp16_backend {auto,cuda_amp,apex,cpu_amp}]
[--push_to_hub_model_id PUSH_TO_HUB_MODEL_ID] [--push_to_hub_organization PUSH_TO_HUB_ORGANIZATION] [--push_to_hub_token PUSH_TO_HUB_TOKEN] [--mp_parameters MP_PARAMETERS] [--auto_find_batch_size [AUTO_FIND_BATCH_SIZE]]
[--full_determinism [FULL_DETERMINISM]] [--torchdynamo TORCHDYNAMO] [--ray_scope RAY_SCOPE] [--ddp_timeout DDP_TIMEOUT] [--torch_compile [TORCH_COMPILE]] [--torch_compile_backend TORCH_COMPILE_BACKEND]
[--torch_compile_mode TORCH_COMPILE_MODE] [--trainable TRAINABLE] [--lora_rank LORA_RANK] [--lora_dropout LORA_DROPOUT] [--lora_alpha LORA_ALPHA] [--modules_to_save MODULES_TO_SAVE] [--debug_mode [DEBUG_MODE]]
[--peft_path PEFT_PATH]
run_clm_pt_with_peft.py: error: the following arguments are required: --output_dir
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 10520) of binary: /root/miniconda3/envs/torch/bin/python
Traceback (most recent call last):
File "/root/miniconda3/envs/torch/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==1.13.1', 'console_scripts', 'torchrun')())
File "/root/miniconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/root/miniconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/root/miniconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/root/miniconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/miniconda3/envs/torch/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
run_clm_pt_with_peft.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2023-05-18_16:26:57
host : autodl-container-ce8743b1d5-46215747
rank : 1 (local_rank: 1)
exitcode : 2 (pid: 10521)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-05-18_16:26:57
host : autodl-container-ce8743b1d5-46215747
rank : 0 (local_rank: 0)
exitcode : 2 (pid: 10520)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
run_pt.sh: line 21: --deepspeed: command not found
run_pt.sh: line 22: --model_name_or_path: command not found
run_pt.sh: line 23: --tokenizer_name_or_path: command not found
run_pt.sh: line 24: --dataset_dir: command not found
run_pt.sh: line 25: --data_cache_dir: command not found
run_pt.sh: line 26: --validation_split_percentage: command not found
run_pt.sh: line 27: --per_device_train_batch_size: command not found
run_pt.sh: line 28: --per_device_eval_batch_size: command not found
run_pt.sh: line 29: --do_train: command not found
run_pt.sh: line 30: --seed: command not found
run_pt.sh: line 31: --fp16: command not found
run_pt.sh: line 32: --max_steps: command not found
run_pt.sh: line 33: --lr_scheduler_type: command not found
run_pt.sh: line 34: --learning_rate: command not found
run_pt.sh: line 35: --warmup_ratio: command not found
run_pt.sh: line 36: --weight_decay: command not found
run_pt.sh: line 37: --logging_strategy: command not found
run_pt.sh: line 38: --logging_steps: command not found
run_pt.sh: line 39: --save_strategy: command not found
run_pt.sh: line 40: --save_total_limit: command not found
run_pt.sh: line 41: --save_steps: command not found
run_pt.sh: line 42: --gradient_accumulation_steps: command not found
run_pt.sh: line 43: --preprocessing_num_workers: command not found
run_pt.sh: line 44: --block_size: command not found
run_pt.sh: line 45: --ddp_timeout: command not found
run_pt.sh: line 46: --logging_first_step: command not found
run_pt.sh: line 47: --lora_rank: command not found
run_pt.sh: line 48: --lora_alpha: command not found
run_pt.sh: line 49: --trainable: command not found
run_pt.sh: line 50: --modules_to_save: command not found
run_pt.sh: line 51: --lora_dropout: command not found
run_pt.sh: line 52: --torch_dtype: command not found
run_pt.sh: line 53: --gradient_checkpointing: command not found
run_pt.sh: line 54: --ddp_find_unused_parameters: command not found
Configuration file:
lr=2e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/root/autodl-tmp/LLAMA_cn/resources/model/LLaMA_7B_cn_merge
chinese_tokenizer_path=/root/autodl-tmp/LLAMA_cn/resources/model/LLaMA_7B_cn_merge
dataset_dir=/root/autodl-tmp/LLAMA_cn/resources/data/pretrain_data
data_cache=/root/autodl-tmp/LLAMA_cn/resources/data/data_cache
per_device_train_batch_size=1
per_device_eval_batch_size=1
training_steps=10000
gradient_accumulation_steps=8
output_dir=/root/autodl-tmp/LLAMA_cn/resources/model/pretrain_model
deepspeed_config_file=/root/autodl-tmp/LLAMA_cn/src/Chinese-LLaMA-Alpaca/scripts/ds_zero2_no_offload.json
torchrun --nnodes 1 --nproc_per_node 2 run_clm_pt_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--data_cache_dir ${data_cache} \
--validation_split_percentage 0.001 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--seed $RANDOM \
--fp16 \
--max_steps ${training_steps} \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.05 \
--weight_decay 0.01 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 30 \
--save_steps 500 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--block_size 512 \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--modules_to_save ${modules_to_save} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--gradient_checkpointing \
    --ddp_find_unused_parameters False \
    --output_dir ${output_dir} \
    --overwrite_output_dir
|
closed
|
2023-05-18T08:25:55Z
|
2023-09-22T06:52:26Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/376
|
[
"stale"
] |
fan159159
| 4
|
nok/sklearn-porter
|
scikit-learn
| 63
|
Fails with big dataset
|
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn_porter import Porter

train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1, train_size=0.0001)
clf = DecisionTreeClassifier()
clf.fit(train_X, train_y)
# Export:
porter = Porter(clf, language='java')
output = porter.export(embed_data=True)
print(output)
Fails with bigger train sizes:
python3.7/site-packages/sklearn_porter/estimator/classifier/DecisionTreeClassifier/__init__.py", line 308, in create_branches
out += temp.format(features[node], '<=', self.repr(threshold[node]))
IndexError: list index out of range
|
closed
|
2020-01-31T10:25:56Z
|
2020-01-31T10:38:13Z
|
https://github.com/nok/sklearn-porter/issues/63
|
[] |
mmhobi7
| 2
|
qwj/python-proxy
|
asyncio
| 152
|
Is this project still maintained?
|
### What happened?
I still use this package and it is still running in prod now, but I'm a little worried that the project might be unmaintained: the last PR was over 7 months ago, and there has been no new release for more than a year.
This is a cool project and I hope it continues.
|
open
|
2022-07-25T03:23:16Z
|
2023-03-27T12:09:11Z
|
https://github.com/qwj/python-proxy/issues/152
|
[] |
aarestu
| 1
|
keras-team/autokeras
|
tensorflow
| 1,853
|
Is there a table of Keras and AutoKeras, or which TF2 version should be installed for the latest AutoKeras?
|
I'm using Tensorflow 2.11 now, will it automatically uninstall keras when I install autokeras? Is there a table of Keras and AutoKeras, or which TF2 version should be installed for the latest AutoKeras?
|
open
|
2023-02-15T22:18:00Z
|
2023-02-16T12:04:34Z
|
https://github.com/keras-team/autokeras/issues/1853
|
[
"feature request"
] |
Prince5867
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 1,314
|
Voice clone
|
Please, I need your help with my setup. I am unable to get LibriSpeech into my files. What should I do? I have downloaded it so many times.
|
open
|
2024-10-22T18:30:18Z
|
2024-10-22T18:30:18Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1314
|
[] |
richmind1
| 0
|
RobertCraigie/prisma-client-py
|
asyncio
| 661
|
Deserialize raw query types into richer python types
|
## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We currently deserialize raw query fields into types that are different than what the core ORM uses, e.g. `Decimal` becomes `float` instead of `decimal.Decimal`, this was implemented this way for backwards compatibility reasons.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should use types that are nicer to work with:
- [ ] `datetime.datetime` for `DateTime` fields
- [ ] `decimal.Decimal` for `Decimal` fields
- [ ] `prisma.Base64` for `Bytes` fields
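For illustration only, a rough sketch of the kind of mapping this implies (the names below are hypothetical, not the actual prisma-client-py internals):
```python
import datetime
from base64 import b64decode
from decimal import Decimal

# Hypothetical mapping from Prisma scalar type names to richer Python parsers.
RAW_DESERIALIZERS = {
    "DateTime": datetime.datetime.fromisoformat,
    "Decimal": Decimal,
    "Bytes": b64decode,
}

def deserialize_raw_value(prisma_type: str, raw):
    """Sketch: convert one raw query value into its richer Python type."""
    parse = RAW_DESERIALIZERS.get(prisma_type)
    return parse(raw) if parse and raw is not None else raw
```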
|
open
|
2022-12-31T23:18:37Z
|
2022-12-31T23:18:37Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/661
|
[
"kind/improvement",
"topic: client",
"level/intermediate",
"priority/low",
"topic: raw queries"
] |
RobertCraigie
| 0
|
gradio-app/gradio
|
deep-learning
| 10,411
|
`show_row_numbers` in `gr.Dataframe` doesn't work
|
### Describe the bug
According to the [change log](https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md), #10376 is included in `gradio==5.13.0`, but row numbers are not displayed when setting `show_row_numbers=True`.
Also, when using the wheel in #10376 , row numbers is shown, but it's rendered like this:

- The column for row numbers is too narrow.
- The width of the column next to the row number is different from other columns, and as it's too narrow, you can't click the sort button.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
repro Space: https://huggingface.co/spaces/hysts-debug/dataframe-show-row-numbers
```python
import gradio as gr
import numpy as np
np.random.seed(0)
data = np.random.randint(0, 100, size=(10, 10))
with gr.Blocks() as demo:
gr.Dataframe(value=data, show_row_numbers=True)
demo.launch()
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
gradio==5.13.0
```
### Severity
I can work around it
|
closed
|
2025-01-23T03:01:52Z
|
2025-02-05T20:20:45Z
|
https://github.com/gradio-app/gradio/issues/10411
|
[
"bug"
] |
hysts
| 2
|
unit8co/darts
|
data-science
| 1,812
|
Please help me
|
Please help me I'm having problems with the tool.
I'm training the model using RNNModel on Yahoo Finance data, using only the close values.
The loss values are: train_loss=0.000101, val_loss=0.00272
But when making predictions (model.predict), the forecast values are very bad.
I know this is just for reporting bugs but I don't know where to turn.
Please could someone help me?
|
closed
|
2023-06-02T17:29:33Z
|
2023-08-07T08:13:35Z
|
https://github.com/unit8co/darts/issues/1812
|
[
"question"
] |
rafaepires
| 1
|
cuemacro/chartpy
|
matplotlib
| 2
|
any difference from altair?
|
open
|
2016-08-25T15:18:03Z
|
2016-08-28T08:13:41Z
|
https://github.com/cuemacro/chartpy/issues/2
|
[] |
den-run-ai
| 1
|
|
coqui-ai/TTS
|
deep-learning
| 3,574
|
Not working for multiple sentences
|
### Describe the bug
When I send more than one sentence, it is giving me the error.
### To Reproduce
Input Text: a tree is a perennial plant with an elongated stem, or trunk, usually supporting branches and leaves. In some usages, the definition of a tree may be narrower, including only woody plants with secondary growth, plants that are usable as lumber or plants above a specified height.
import os
from TTS.api import TTS
TTS_MODEL_PATH = "tts_models/multilingual/multi-dataset/xtts_v2"
tts = TTS(model_name=TTS_MODEL_PATH, gpu=True)
def text_to_speech(text, speaker_wav_file, language, output_path):
os.makedirs(os.path.dirname(output_path), exist_ok=True)
tts.tts_to_file(text, speaker_wav=speaker_wav_file, language=language, file_path=output_path)
return output_path
@app.route('/text-to-speech', methods=['POST'])
def text_to_speech_api():
try:
text = request.form.get('text')
language = request.form.get('language')
speaker_wav_file = request.files['speaker_wav']
output_path = '/data/ai-tools/voice_cloning/output.wav' # Update this with the actual path
output_file = text_to_speech(text, speaker_wav_file, language, output_path)
return send_file(output_file)
except Exception as e:
return str(e), 500
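One thing that may be worth ruling out (an assumption, not a confirmed cause): `request.files['speaker_wav']` is a Werkzeug `FileStorage` stream, and when the input contains several sentences the reference audio can be read more than once, while an upload stream can only be consumed once. A sketch that saves the upload to a real file first and passes the path:
```python
import os
import tempfile

def text_to_speech(text, speaker_wav_file, language, output_path):
    os.makedirs(os.path.dirname(output_path), exist_ok=True)
    # Persist the uploaded reference audio so every read gets a seekable file.
    with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp:
        speaker_wav_file.save(tmp.name)  # werkzeug FileStorage.save()
        tts.tts_to_file(text, speaker_wav=tmp.name, language=language,
                        file_path=output_path)
    return output_path
```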
### Expected behavior
_No response_
### Logs
```shell
Getting Error: Failed to open the input "Custom Input Context" (Invalid data found when
processing input).
Exception raised from get_input_format_context at
/__w/audio/audio/pytorch/audio/torchaudio/csrc/ffmpeg/stream_reader/stream_reader.cpp:42
(most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57
(0x7f4ea370f617 in
/data/ai-tools/venv/lib64/python3.9/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int,
std::string const&) + 0x64 (0x7f4ea36ca98d in
/data/ai-tools/venv/lib64/python3.9/site-packages/torch/lib/libc10.so)
frame #2: <unknown function> + 0x43944 (0x7f4d9c583944 in
/data/ai-tools/venv/lib/python3.9/site-packages/torchaudio/lib/libtorchaudio_ffmpeg6.so)
frame #3: torchaudio::io::StreamReader::StreamReader(AVIOContext*,
c10::optional<std::string> const&, c10::optional<std::map<std::string,
std::string, std::less<std::string>, std::allocator<std::pair<std::string
const, std::string> > > > const&) + 0x43 (0x7f4d9c586283 in
/data/ai-tools/venv/lib/python3.9/site-packages/torchaudio/lib/libtorchaudio_ffmpeg6.so)
frame #4:
torchaudio::io::StreamReaderCustomIO::StreamReaderCustomIO(void*,
c10::optional<std::string> const&, int, int (*)(void*, unsigned char*,
int), long (*)(void*, long, int), c10::optional<std::map<std::string,
std::string, std::less<std::string>,
std::allocator<std::pair<std::string const, std::string> > > >
const&) + 0x2f (0x7f4d9c58631f in
/data/ai-tools/venv/lib/python3.9/site-packages/torchaudio/lib/libtorchaudio_ffmpeg6.so)
frame #5: <unknown function> + 0x17a89 (0x7f4d9c47aa89 in
/data/ai-tools/venv/lib64/python3.9/site-packages/torchaudio/lib/_torchaudio_ffmpeg6.so)
frame #6: <unknown function> + 0x2de35 (0x7f4d9c490e35 in
/data/ai-tools/venv/lib64/python3.9/site-packages/torchaudio/lib/_torchaudio_ffmpeg6.so)
<omitting python frames>
frame #12: <unknown function> + 0xf744 (0x7f4dc5a20744 in
/data/ai-tools/venv/lib64/python3.9/site-packages/torchaudio/lib/_torchaudio.so)
```
### Environment
```shell
TTS Version: 2
AWS EC2
ffmpeg version N-112128-gfa20f5cd9e Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 11 (GCC)
configuration: --enable-shared --disable-static --enable-gpl --enable-libfreetype
libavutil 58. 25.100 / 58. 25.100
libavcodec 60. 26.100 / 60. 26.100
libavformat 60. 13.100 / 60. 13.100
libavdevice 60. 2.101 / 60. 2.101
libavfilter 9. 11.100 / 9. 11.100
libswscale 7. 3.100 / 7. 3.100
libswresample 4. 11.100 / 4. 11.100
libpostproc 57. 2.100 / 57. 2.100
```
### Additional context
_No response_
|
closed
|
2024-02-13T06:46:54Z
|
2024-06-26T16:49:25Z
|
https://github.com/coqui-ai/TTS/issues/3574
|
[
"bug",
"wontfix"
] |
ayushi15092002
| 1
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 287
|
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
|
`RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED`
I have tried many pytorch versions. The error is still there. Could anyone help me?
|
closed
|
2020-12-02T09:56:50Z
|
2020-12-03T07:58:51Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/287
|
[] |
somebodyus
| 2
|
roboflow/supervision
|
tensorflow
| 1,042
|
[Detections] extend `from_transformers` with segmentation models support
|
### Description
Currently, Supervision only supports Transformers object detection models. Let's expand [`from_transformers`](https://github.com/roboflow/supervision/blob/781a064d8aa46e3875378ab6aba1dfdad8bc636c/supervision/detection/core.py#L391) by adding support for segmentation models.
### API
The code below should enable the annotation of an image with segmentation results.
```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50")
image = Image.open(<PATH TO IMAGE>)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results)
mask_annotator = sv.MaskAnnotator()
annotated_image = mask_annotator.annotate(scene=image, detections=detections)
```
### Additional
- [Transformers DETR Docs](https://huggingface.co/docs/transformers/en/model_doc/detr)
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻
|
closed
|
2024-03-25T12:58:25Z
|
2024-04-11T18:03:26Z
|
https://github.com/roboflow/supervision/issues/1042
|
[
"enhancement",
"good first issue",
"api:detection"
] |
SkalskiP
| 8
|
satwikkansal/wtfpython
|
python
| 87
|
Is this organization related to this repository?
|
I wanted to know if the following organization is connected with this project as it seems to have no connection with you, with respect to membership.
**_https://github.com/wtfpython-web_**
|
closed
|
2018-06-20T22:26:34Z
|
2018-06-21T19:47:27Z
|
https://github.com/satwikkansal/wtfpython/issues/87
|
[] |
0x48piraj
| 2
|
dropbox/sqlalchemy-stubs
|
sqlalchemy
| 194
|
Add "common issues" doc to cover known workarounds
|
I'm happy to write a first draft of this if the maintainers would like it. But I don't want to put in the effort if it's not wanted.
I've run into two issues which really puzzled me in terms of what to do until I poked around in issues here. I'm sure there are others.
`hybrid_property` can't use the special magic of `property` in `mypy`, but #98 has a workaround that seems to work. And #94 covers ways of getting `UUID(as_uuid=True)` to work.
It would be helpful (since these are not going to be fixed soon) if this library made these workarounds part of its docs. It could be done at the end of the readme or in a separate markdown doc linked in the repo.
|
open
|
2020-12-11T15:28:53Z
|
2021-04-22T13:47:35Z
|
https://github.com/dropbox/sqlalchemy-stubs/issues/194
|
[] |
sirosen
| 1
|
huggingface/transformers
|
deep-learning
| 36,322
|
Look like there is error with `sentencepiece ` and `protobuf`
|
### System Info
Hi, I'm install transformers to convert Llama-11B-Vision-Instruct `.pth` to HF format `.safetensors`. I was carefully install follow the https://github.com/huggingface/transformers?tab=readme-ov-file#with-pip
Then I run my bash command:
```sh
python /home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /home/chwenjun225/.llama/checkpoints/Llama3.2-11B-Vision-Instruct \
--model_size 11B \
--output_dir /home/chwenjun225/.llama/checkpoints/Llama3.2-11B-Vision-Instruct/hf
```
and here is my terminal:
```
(pth_convert_st) (base) chwenjun225@chwenjun225:~/.llama/checkpoints/transformers$ python /home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /home/chwenjun225/.llama/checkpoints/Llama3.2-11B-Vision-Instruct \
--model_size 11B \
--output_dir /home/chwenjun225/.llama/checkpoints/Llama3.2-11B-Vision-Instruct/hf
Converting the tokenizer.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Traceback (most recent call last):
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 502, in write_tokenizer
tokenizer = tokenizer_class(input_tokenizer_path)
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/tokenization_llama_fast.py", line 157, in __init__
super().__init__(
~~~~~~~~~~~~~~~~^
vocab_file=vocab_file,
^^^^^^^^^^^^^^^^^^^^^^
...<10 lines>...
**kwargs,
^^^^^^^^^
)
^
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/tokenization_utils_fast.py", line 133, in __init__
slow_tokenizer = self.slow_tokenizer_class(*args, **kwargs)
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/tokenization_llama.py", line 169, in __init__
self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/tokenization_llama.py", line 196, in get_spm_processor
tokenizer.Load(self.vocab_file)
~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/chwenjun225/miniconda3/envs/pth_convert_st/lib/python3.13/site-packages/sentencepiece/__init__.py", line 961, in Load
return self.LoadFromFile(model_file)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/home/chwenjun225/miniconda3/envs/pth_convert_st/lib/python3.13/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
RuntimeError: Internal: could not parse ModelProto from /home/chwenjun225/.llama/checkpoints/Llama3.2-11B-Vision-Instruct/tokenizer.model
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 601, in <module>
main()
~~~~^^
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 576, in main
write_tokenizer(
~~~~~~~~~~~~~~~^
args.output_dir,
^^^^^^^^^^^^^^^^
...<4 lines>...
push_to_hub=args.push_to_hub,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py", line 504, in write_tokenizer
raise ValueError(
"Failed to instantiate tokenizer. Please, make sure you have sentencepiece and protobuf installed."
)
ValueError: Failed to instantiate tokenizer. Please, make sure you have sentencepiece and protobuf installed.
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python /home/chwenjun225/.llama/checkpoints/transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /home/chwenjun225/.llama/checkpoints/Llama3.2-11B-Vision-Instruct \
--model_size 11B \
--output_dir /home/chwenjun225/.llama/checkpoints/Llama3.2-11B-Vision-Instruct/hf
### Expected behavior
I thought it would be OK after `pip install transformers[sentencepiece]` and `pip install transformers[protobuf]`, but no: I even upgraded my Python from 3.11 to 3.13 and I still get this problem.
|
closed
|
2025-02-21T07:39:19Z
|
2025-02-21T15:44:31Z
|
https://github.com/huggingface/transformers/issues/36322
|
[
"bug"
] |
chwenjun225
| 1
|
huggingface/transformers
|
machine-learning
| 36,272
|
Device Movement Error with 4-bit Quantized LLaMA 3.1 Model Loading
|
### System Info
```shell
I'm running into a persistent issue when trying to load the LLaMA 3.1 8B model with 4-bit quantization. No matter what configuration I try, I get this error during initialization:
ValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Environment:
Python: 3.10
Transformers: Latest version
PyTorch: Latest version
GPU: 85.05 GB memory available
CUDA: Properly installed and available
What I've tried:
Loading with a BitsAndBytesConfig:
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16,
llm_int8_has_fp16_weight=True
)
base_model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-3.1-8B-Instruct",
quantization_config=bnb_config,
trust_remote_code=True,
use_cache=True,
device_map='auto',
max_memory={0: "24GiB"}
)
Loading without device mapping:
model_kwargs = {
"trust_remote_code": True,
"load_in_4bit": True,
"torch_dtype": torch.float16,
"use_cache": True
}
### Expected behavior
```shell
Clearing CUDA cache and running garbage collection beforehand.
Experimenting with different device mapping strategies.
Even with an ample GPU memory (85.05 GB) and confirmed CUDA availability, I still can't seem to get the model to load without running into this device movement error. Other models load fine when using quantization, so I'm not sure what's special about this setup.
Any ideas on how to resolve this or work around the error? Thanks in advance for your help!
```
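For what it's worth, this particular `ValueError` is normally raised only when `.to(...)` or `.cuda()` is later called on the already-quantized model (directly, or by some wrapper such as a pipeline or trainer), not by `from_pretrained` itself. A minimal sketch of the load path with no subsequent device move (reusing the `bnb_config` shown above):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,  # BitsAndBytesConfig defined earlier
    device_map="auto",               # accelerate places the weights for you
)
# model = model.to("cuda")  # <- a later call like this is what raises the ValueError
```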
### Checklist
- [x] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ ] I checked if a related official extension example runs on my machine.
|
open
|
2025-02-19T07:33:39Z
|
2025-03-13T13:29:01Z
|
https://github.com/huggingface/transformers/issues/36272
|
[] |
Pritidhrita
| 2
|
AirtestProject/Airtest
|
automation
| 883
|
WeChat frequently crashes while test cases are being executed
|
**Describe the bug**
Testing a **WeChat mini program**.
While test cases are being executed, WeChat frequently crashes, which prevents the remaining test cases from running.
**Expected behavior**
Execution should remain stable.
**Python version:** `python3.7`
**Airtest version:** `1.1.8`
**Device:**
- Model: [HUAWEI Mate 10]
- OS: [Android 10]
|
open
|
2021-03-31T02:09:03Z
|
2021-03-31T02:09:03Z
|
https://github.com/AirtestProject/Airtest/issues/883
|
[] |
DDDDanny
| 0
|
ray-project/ray
|
machine-learning
| 51,207
|
[Data] Adding streaming capability for `ray.data.Dataset.unique`
|
### Description
The current [doc](https://docs.ray.io/en/latest/data/api/doc/ray.data.Dataset.unique.html) indicates that `ray.data.Dataset.unique` is a blocking operation: **_This operation requires all inputs to be materialized in object store for it to execute._**.
But I presume that, conceptually, it's possible to implement a streaming one: keep a record of "seen" values and drop an entry when its value is already in the "seen" collection.
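As a plain-Python illustration of that idea (this is not the Ray Data API, just the dedup logic sketched with an iterator):
```python
def streaming_unique(rows, column):
    """Yield rows whose value in `column` has not been seen before."""
    seen = set()
    for row in rows:
        value = row[column]
        if value not in seen:
            seen.add(value)
            yield row
```
The memory cost is then bounded by the number of distinct values rather than by the full dataset, which is what makes a streaming variant attractive.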
### Use case
A streaming `unique` function will be very useful when the amount of data is too large to be materialized.
|
open
|
2025-03-10T05:26:33Z
|
2025-03-13T12:37:57Z
|
https://github.com/ray-project/ray/issues/51207
|
[
"enhancement",
"triage",
"data"
] |
marcmk6
| 7
|
lux-org/lux
|
pandas
| 169
|
[SETUP] Support for Google Colab
|
How can I use this library in Colab?
|
open
|
2020-12-01T11:07:56Z
|
2022-08-01T22:02:23Z
|
https://github.com/lux-org/lux/issues/169
|
[
"enhancement",
"setup"
] |
rizwannitk
| 11
|
google-research/bert
|
nlp
| 1,177
|
3090
|
I used an RTX 3090 graphics card with TensorFlow 1.14 and ran into a problem; the same code works on a 2080. Have you come across anything like this?
|
open
|
2020-11-23T06:50:48Z
|
2020-11-23T06:50:48Z
|
https://github.com/google-research/bert/issues/1177
|
[] |
Mbdn
| 0
|
lucidrains/vit-pytorch
|
computer-vision
| 52
|
How to run this model on linear data?
|
How can I run this model on image features (e.g., features generated by a pretrained ResNet) rather than on raw image data?
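One possible approach, sketched in plain PyTorch rather than the vit-pytorch API (all names and sizes below are illustrative): treat the ResNet feature map of shape (B, C, H, W) as H*W tokens and run a transformer encoder over them.
```python
import torch
import torch.nn as nn

class FeatureTransformer(nn.Module):
    def __init__(self, in_channels=2048, dim=256, depth=4, heads=8, num_classes=10):
        super().__init__()
        self.proj = nn.Linear(in_channels, dim)          # per-token embedding
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # classification token
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats):                            # feats: (B, C, H, W)
        b = feats.shape[0]
        tokens = self.proj(feats.flatten(2).transpose(1, 2))   # (B, H*W, dim)
        tokens = torch.cat([self.cls.expand(b, -1, -1), tokens], dim=1)
        return self.head(self.encoder(tokens)[:, 0])
```
A learned positional embedding can be added to the tokens if spatial position matters for the task.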
|
closed
|
2020-12-26T20:14:21Z
|
2020-12-29T17:57:51Z
|
https://github.com/lucidrains/vit-pytorch/issues/52
|
[] |
purbayankar
| 9
|
FujiwaraChoki/MoneyPrinter
|
automation
| 152
|
[BUG] More than 5 videos are downloaded
|
Caused by this:
main.py
```
# Check for duplicates
for url in found_urls:
if url not in video_urls:
video_urls.append(url)
```
found_urls is a list that contains all urls of a given search term.
We only want the first one of those that isn't a duplicate to be added to the downloads, otherwise we will have more clips than set by `AMOUNT_OF_STOCK_VIDEOS = 5`.
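A minimal sketch of that fix, breaking out of the loop as soon as one new URL has been accepted for the current search term:
```python
# For each search term, keep only the first URL that is not already queued,
# so the total number of clips stays at AMOUNT_OF_STOCK_VIDEOS.
for url in found_urls:
    if url not in video_urls:
        video_urls.append(url)
        break
```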
|
closed
|
2024-02-10T14:47:02Z
|
2024-02-10T15:18:10Z
|
https://github.com/FujiwaraChoki/MoneyPrinter/issues/152
|
[] |
radry
| 0
|
unionai-oss/pandera
|
pandas
| 1,866
|
Empty (all-null) or missing fields with Python (non-Pandas) types fail validation despite `coerce=True` and `nullable=True`
|
**Describe the bug**
Pandera raises `SchemaError` when passed data with an entirely empty or missing non-Pandas type column, despite use of `coerce=True`, `nullable=True` and `add_missing_columns=True`, whereas an equivalent Pandas-type column _is_ filled with null values as expected.
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the main branch of pandera.
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import datetime as dt
import pandas as pd
import pandera as pa
class Schema(pa.DataFrameModel):
class Config:
coerce = True
add_missing_columns = True
timestamp: pa.Timestamp = pa.Field(nullable=True)
date: dt.date = pa.Field(nullable=True)
"""Columns present but null."""
complete = dict(timestamp=dt.datetime(1964, 8, 24), date=dt.datetime(1964, 8, 24))
null_timestamp = dict(complete, timestamp=None)
null_date = dict(complete, date=None)
# ✅ everything coerced as expected:
print(Schema.validate(pd.DataFrame([complete])).dtypes)
# ->
# timestamp datetime64[ns]
# date object
# dtype: object
# ✅null-only `timestamp` column still coerced as expected
print(Schema.validate(pd.DataFrame([null_timestamp])))
# ->
# timestamp date
# 0 NaT 1964-08-24
# ✅ with a mix of null and non-null, `date` column still coerced as expected
print(Schema.validate(pd.DataFrame([complete, null_date])))
# ->
# timestamp date
# 0 1964-08-24 1964-08-24
# 1 1964-08-24 NaT
# ❗️ with all nulls, `date` fails
print(Schema.validate(pd.DataFrame([null_date])))
# ->
# pandera.errors.SchemaError: expected series 'date' to have type date:
# failure cases:
# index failure_case
# 0 0 NaT
"""Columns missing."""
missing_timestamp, missing_date = complete.copy(), complete.copy()
missing_timestamp.pop("timestamp")
missing_date.pop("date")
# ✅ missing `timestamp` column created as expected
print(Schema.validate(pd.DataFrame([missing_timestamp])))
# timestamp date
# 0 NaT 1964-08-24
# ❗️ missing `date` column fails
print(Schema.validate(pd.DataFrame([missing_date])))
# ->
# pandera.errors.SchemaError: expected series 'date' to have type date:
# failure cases:
# index failure_case
# 0 0 NaT
```
This was run with:
- pandas==2.2.3
- pandera==0.21.0
#### Expected behavior
In the example above, I would expect the non-Pandas type column (`date`) to behave identically to a pandas type column (`timestamp`) i.e. here be filled with a null value (`NaT`) when:
- all values are null
- the column is missing entirely
This is not specific to date types, that's just for illustration; you can swap out for `int`/`pa.Int` etc.
#### Desktop (please complete the following information):
- OS: Mac OS X
|
open
|
2024-11-29T18:03:51Z
|
2024-11-29T18:08:41Z
|
https://github.com/unionai-oss/pandera/issues/1866
|
[
"bug"
] |
stainbank
| 0
|
harry0703/MoneyPrinterTurbo
|
automation
| 159
|
cannot unpack non-iterable NoneType object
|
TypeError:
Traceback:
File "C:\Users\hourp\anaconda3\envs\MoneyPrinterTurbo\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "C:\Users\hourp\Downloads\MoneyPrinterTurbo-main\webui\Main.py", line 376, in <module>
tm.start(task_id=task_id, params=params)
File "C:\Users\hourp\Downloads\MoneyPrinterTurbo-main\app\services\task.py", line 158, in start
video.generate_video(video_path=combined_video_path,
File "C:\Users\hourp\Downloads\MoneyPrinterTurbo-main\app\services\video.py", line 225, in generate_video
sub = SubtitlesClip(subtitles=subtitle_path, make_textclip=generator, encoding='utf-8')
File "C:\Users\hourp\anaconda3\envs\MoneyPrinterTurbo\lib\site-packages\moviepy\video\tools\subtitles.py", line 69, in __init__
self.duration = max([tb for ((ta, tb), txt) in self.subtitles])
File "C:\Users\hourp\anaconda3\envs\MoneyPrinterTurbo\lib\site-packages\moviepy\video\tools\subtitles.py", line 69, in <listcomp>
self.duration = max([tb for ((ta, tb), txt) in self.subtitles])
|
closed
|
2024-04-03T17:07:01Z
|
2024-04-04T03:13:24Z
|
https://github.com/harry0703/MoneyPrinterTurbo/issues/159
|
[] |
pupheng
| 3
|
microsoft/nni
|
data-science
| 5,383
|
The following code block gives me error
|
The following code block gives me error
```
import bz2
import urllib.request
import numpy as np
from sklearn.datasets import load_svmlight_file
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from nni.algorithms.feature_engineering.gradient_selector import FeatureGradientSelector
def test():
url_zip_train = 'https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary/rcv1_train.binary.bz2'
urllib.request.urlretrieve(url_zip_train, filename='train.bz2')
f_svm = open('train.svm', 'wt')
with bz2.open('train.bz2', 'rb') as f_zip:
data = f_zip.read()
f_svm.write(data.decode('utf-8'))
f_svm.close()
X, y = load_svmlight_file('train.svm')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
pipeline = make_pipeline(FeatureGradientSelector(
n_epochs=1, n_features=10), LogisticRegression())
# pipeline = make_pipeline(SelectFromModel(ExtraTreesClassifier(n_estimators=50)), LogisticRegression())
pipeline.fit(X_train, y_train)
print("Pipeline Score: ", pipeline.score(X_train, y_train))
if __name__ == "__main__":
test()
```

_Originally posted by @AbdelrahmanHamdy1996 in https://github.com/microsoft/nni/discussions/5382_
|
closed
|
2023-02-19T14:12:35Z
|
2023-09-15T04:22:12Z
|
https://github.com/microsoft/nni/issues/5383
|
[] |
AbdelrahmanHamdy1996
| 3
|
napari/napari
|
numpy
| 7,547
|
Viewer FPS label gallery example not displaying FPS in gallery preview
|
### 🐛 Bug Report
Viewer FPS label no longer shows in the [gallery example](https://napari.org/stable/gallery/viewer_fps_label.html).
However, FPS label still works when script run locally.
Gallery example shows FPS label up through 0.4.19. Starting with 0.5.0 the FPS label is no longer visible.
|
closed
|
2025-01-21T15:58:53Z
|
2025-01-29T08:53:00Z
|
https://github.com/napari/napari/issues/7547
|
[
"bug",
"documentation",
"example"
] |
TimMonko
| 3
|
ultralytics/ultralytics
|
deep-learning
| 19,631
|
Slow Tensor Inference on MPS Compared to NumPy - Possible Upsampling Bottleneck
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi team!
Thanks for the amazing YOLOv11 library—truly top-notch.
I’m stuck on an issue for quite a long time, truly.
So... SOS.
I’m running batched PyTorch tensors (`torch.rand(10, 3, 640, 640, device="mps")`) directly into inference on MPS, skipping NumPy preprocessing. Tensor performance lags (~0.07-0.08s per image) versus NumPy (~0.02-0.05s), despite `.contiguous()`, `torch.no_grad()`, warm-up, and manual `non_max_suppression()` batch splitting. Postprocessing is a bottleneck (~0.4s vs. expected ~0.01s), and inference takes ~0.35s. Profiling suggests ~40% in upsampling ops and ~30% in other layers (e.g., conv), hinting at MPS optimization or stride issues.
Am I missing a trick here? I'd like to share more profiling data if it helps—just need a nudge in the right direction.
Cheers!
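One thing that can distort these numbers (a suggestion, not a confirmed diagnosis): MPS, like CUDA, executes kernels asynchronously, so without an explicit synchronize the time of queued GPU work gets attributed to whatever step happens to block next (often postprocessing). A small timing-helper sketch:
```python
import time
import torch

def timed(fn, *args, **kwargs):
    """Time a function on MPS with explicit synchronization around it."""
    torch.mps.synchronize()
    start = time.perf_counter()
    out = fn(*args, **kwargs)
    torch.mps.synchronize()
    return out, time.perf_counter() - start
```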
### Additional
_No response_
|
open
|
2025-03-10T20:07:57Z
|
2025-03-19T09:51:17Z
|
https://github.com/ultralytics/ultralytics/issues/19631
|
[
"question",
"dependencies",
"detect"
] |
theOnlyBoy
| 26
|
saulpw/visidata
|
pandas
| 2,405
|
[texttables] incorrect 'tabulate' module installed with brew
|
**Small description**
Getting AttributeError with version 3.0.2 installed from homebrew
**Expected result**
Visidata opens csv file.
**Actual result with screenshot**
See below.
**Steps to reproduce with sample data and a .vd**
See below.
vd foo.csv
Traceback (most recent call last):
File "/opt/homebrew/bin/vd", line 3, in <module>
import visidata.main
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/__init__.py", line 137, in <module>
vd.importSubmodules('visidata.loaders')
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/settings.py", line 495, in importSubmodules
vd.importModule(pkgname + '.' + module.name)
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/settings.py", line 481, in importModule
r = importlib.import_module(pkgname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/loaders/texttables.py", line 6, in <module>
for fmt in tabulate.tabulate_formats:
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'tabulate' has no attribute 'tabulate_formats'
echo 'foo,bar' | vd
Traceback (most recent call last):
File "/opt/homebrew/bin/vd", line 3, in <module>
import visidata.main
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/__init__.py", line 137, in <module>
vd.importSubmodules('visidata.loaders')
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/settings.py", line 495, in importSubmodules
vd.importModule(pkgname + '.' + module.name)
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/settings.py", line 481, in importModule
r = importlib.import_module(pkgname)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/visidata/3.0.2_1/libexec/lib/python3.12/site-packages/visidata/loaders/texttables.py", line 6, in <module>
for fmt in tabulate.tabulate_formats:
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'tabulate' has no attribute 'tabulate_formats'
**Additional context**
brew info visidata
==> saulpw/vd/visidata: stable 3.0.2
Terminal utility for exploring and arranging tabular data
https://visidata.org/
Installed
/opt/homebrew/Cellar/visidata/3.0.2_1 (880 files, 10.4MB) *
Built from source on 2024-05-15 at 11:09:53
From: https://github.com/saulpw/homebrew-vd/blob/HEAD/Formula/visidata.rb
==> Dependencies
Required: python3 ✔
|
closed
|
2024-05-15T18:47:19Z
|
2024-05-16T23:48:22Z
|
https://github.com/saulpw/visidata/issues/2405
|
[
"packaging"
] |
bnisly
| 6
|
pydata/xarray
|
numpy
| 9,300
|
Unloaded DataArray opened with scipy missing attributes when sliced with a sequence
|
### What happened?
I opened the "air_temperature" dataset using the scipy engine, sliced on a dimension with a list of index and received a seemingly "incomplete" object. It raises `AttributeError: 'DataArray' object has no attribute 'data'` and similar with `values`.
### What did you expect to happen?
I expected the sliced result to be a normal DataArray.
### Minimal Complete Verifiable Example
```Python
import xarray as xr
ds = xr.tutorial.open_dataset("air_temperature", engine='scipy')
ds.air.isel(lat=20).data # Works
ds.air.isel(lat=slice(20, 22)).data # Works
ds.air.isel(lat=[20, 21]).data # Fails
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 ds.air.isel(lat=[20, 21]).data
File ~/miniforge3/envs/xclim/lib/python3.12/site-packages/xarray/core/common.py:286, in AttrAccessMixin.__getattr__(self, name)
284 with suppress(KeyError):
285 return source[name]
--> 286 raise AttributeError(
287 f"{type(self).__name__!r} object has no attribute {name!r}"
288 )
AttributeError: 'DataArray' object has no attribute 'data'
```
### Anything else we need to know?
If I `ds.air.load()` first, this doesn't happen.
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:23:07) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 6.9.6-200.fc40.x86_64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: fr_CA.UTF-8
LOCALE: ('fr_CA', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: None
xarray: 2024.7.0
pandas: 2.2.2
numpy: 2.0.1
scipy: 1.14.0
netCDF4: None
pydap: None
h5netcdf: 1.3.0
h5py: 3.11.0
zarr: None
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.4.0
dask: 2024.7.1
distributed: 2024.7.1
matplotlib: 3.9.1
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.6.1
cupy: None
pint: 0.24.3
sparse: 0.16.0a10.dev3+ga73b20d
flox: 0.9.8
numpy_groupies: 0.11.1
setuptools: 71.0.4
pip: 24.1.2
conda: None
pytest: 7.4.4
mypy: 1.11.0
IPython: 8.26.0
sphinx: 7.4.7
</details>
|
closed
|
2024-07-31T19:26:54Z
|
2024-07-31T20:22:56Z
|
https://github.com/pydata/xarray/issues/9300
|
[
"bug",
"needs triage"
] |
aulemahal
| 3
|
biolab/orange3
|
pandas
| 6,957
|
Better "Report"
|
Reporting currently opens a dialog with a list view on the left-hand side and an html on the right. Images are stored within the document.
1. The document size is probably limited, which is the most probable cause of #6105. This problem seems difficult and nobody touched it in two years, but it makes reporting rather useless.
2. This functionality (and probably *only* this functionality) requires Webkit. Webkit is annoying to install (confession: I don't know how to install it in PyQt6, and don't dare to try because I already ruined my development conda environment more than once) and causes problems like #6746.
3. I'm not sure that images in the report can be saved.
4. It doesn't look too good. It would require some styling. Saved HTMLs (and PDFs) look pretty sad.
The only good thing I see is the integration with Orange: one can click on a reported item and bring the workflow back to the state in which the item was produced. I doubt, though, that many use this functionality.
I suggest we think about an alternative: to report, the user can choose a directory into which Orange will write an index.html and where it will drop all images and potentially other files(!). When reporting, Orange would simply add to the index.html, but without undoing any changes the user makes.
To view this report, Orange would open the default browser (and instruct the user to do it, if it doesn't happen automatically). The document would probably take care of refreshing when the file changes.
This would solve all above problems.
1. Images would be referred to from the document.
2. Orange would no longer require webkit. (Note: this may break some add-ons if they do not explicitly depend upon it under assumption that Orange does. Text mining?)
3. Images are already saved. Potentially in vector and bitmap form, for user convenience. Even more: the user can ignore the index.html altogether and use this to just dump images into some directory without the save dialog.
4. The page can be properly styled (e.g. bootstrap or whatever); it can include the existing interactivity (editing of comments) and add more, such as resizing images, reordering ... (I suppose that we'd have to go with jquery, not react, unfortunately. But we'll live.)
As for the functionality of going back to previous workflows: if we really want it (I don't advocate it), Orange can save snapshots of past workflows in a subdirectory of the report directory. Reported items can contain some kind of ids that can be pasted into Orange.
Finally, it would be amazing if we could integrate reporting with jupyter notebook instead of plain index.html.
|
open
|
2024-12-14T15:50:24Z
|
2024-12-14T15:50:25Z
|
https://github.com/biolab/orange3/issues/6957
|
[
"feast"
] |
janezd
| 0
|
ultralytics/ultralytics
|
computer-vision
| 19,795
|
YOLO11-OBB for the detection of objects with aspect ratios
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When I use YOLO11-OBB for object detection, I find that the predicted box angle is wrong for objects whose width and height are close (aspect ratio near 1:1). Is there a way to solve this problem?
For example:

### Additional
_No response_
|
closed
|
2025-03-20T09:09:07Z
|
2025-03-21T06:10:54Z
|
https://github.com/ultralytics/ultralytics/issues/19795
|
[
"question",
"OBB",
"detect"
] |
HOUHarden13
| 5
|
iperov/DeepFaceLab
|
deep-learning
| 953
|
Manual DST extract locking up around 20-25%.
|
I'm running DFL on a 3090 and I have 64 GB of RAM. I am not getting any OOM errors. When running data_dst manual RE-EXTRACT DELETED ALIGNED_DEBUG, the application completely locks up at around 25%. I have to close it from Task Manager; it doesn't even register as not responding.
|
open
|
2020-11-20T14:45:28Z
|
2023-06-08T21:35:36Z
|
https://github.com/iperov/DeepFaceLab/issues/953
|
[] |
minestra
| 2
|
Nekmo/amazon-dash
|
dash
| 22
|
Delay
|
What unit is the delay in? Seconds? Minutes?
I would guess at seconds, because the dash button seems to be active for just under that. Is there a minimum that we shouldn't go below? Does it take a certain amount of time for the button to drop off the network ready to be picked up again?
|
closed
|
2018-01-15T23:37:28Z
|
2018-03-15T13:53:33Z
|
https://github.com/Nekmo/amazon-dash/issues/22
|
[
"Documentation"
] |
Kallb123
| 3
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 1,180
|
When running 3DGS on a machine with multiple graphics cards, it reports an out-of-memory error at dist2 = torch.clamp_min(distCUDA2(torch.from_numpy(np.asarray(pcd.points)).float().cuda()), 0.0000001) # (P,)
|

You can see that only a very small portion of the memory is actually in use, and there is no problem running the same data on a single-GPU machine. Why is that?
|
open
|
2025-03-05T08:27:27Z
|
2025-03-05T08:27:27Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/1180
|
[] |
p-z-p
| 0
|
microsoft/nni
|
deep-learning
| 5,510
|
Getting AttributeError in test_fine_grained_prune() for pruning
|
**Describe the issue**:
So I was trying to run the lab tutorials for proxylessnas and was running training + pruning, where I am getting the following error when using
**test_fine_grained_prune()**
The line above gives me an error:
#############
#############
AttributeError Traceback (most recent call last)
[<ipython-input-88-f93ff8dfb8cd>](https://localhost:8080/#) in <cell line: 1>()
----> 1 test_fine_grained_prune()
1 frames
[<ipython-input-73-783c2fe47075>](https://localhost:8080/#) in test_fine_grained_prune(test_tensor, test_mask, target_sparsity, target_nonzeros)
29 mask = fine_grained_prune(test_tensor, target_sparsity)
30 sparsity_after_pruning = get_sparsity(test_tensor)
---> 31 sparsity_of_mask = get_sparsity(mask)
32
33 plot_matrix(test_tensor, ax_right, 'sparse tensor')
[<ipython-input-72-26703df74fdb>](https://localhost:8080/#) in get_sparsity(tensor)
8 sparsity = #zeros / #elements = 1 - #nonzeros / #elements
9 """
---> 10 return 1 - float(tensor.count_nonzero()) / tensor.numel()
11
12
AttributeError: 'int' object has no attribute 'count_nonzero'
#############
#############
I don't understand what the problem is; could you please explain the issue? Thanks.
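For what it's worth, the error means `get_sparsity()` received a plain int instead of a tensor, so the `fine_grained_prune()` in the lab is most likely still returning a placeholder value (e.g. `0`) rather than the mask tensor. A minimal sketch of what a working version could look like (my own reconstruction, not the official lab solution):
```python
import torch

def fine_grained_prune(tensor: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero the smallest-magnitude entries in-place and return the binary mask."""
    num_elements = tensor.numel()
    num_zeros = round(num_elements * sparsity)
    if num_zeros <= 0:
        return torch.ones_like(tensor)
    importance = tensor.abs()
    # threshold = the num_zeros-th smallest magnitude
    threshold = importance.reshape(-1).kthvalue(num_zeros).values
    mask = (importance > threshold).to(tensor.dtype)
    tensor.mul_(mask)
    return mask  # must be a tensor; returning an int here causes the AttributeError above

def get_sparsity(tensor: torch.Tensor) -> float:
    # sparsity = #zeros / #elements = 1 - #nonzeros / #elements
    return 1 - float(tensor.count_nonzero()) / tensor.numel()
```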
|
closed
|
2023-04-07T22:30:34Z
|
2023-04-12T04:26:54Z
|
https://github.com/microsoft/nni/issues/5510
|
[] |
HiiroWakaba
| 2
|
kizniche/Mycodo
|
automation
| 437
|
problem with adding SHT75
|
Hello,
thank you for creating Mycodo, I find it very clean, precise and reliable.
For some days I've been trying to add an SHT75 sensor, but I can't get it working with Mycodo.
I followed mk-fg's installation instructions: the sensor works with his script using the virtualenv in the terminal.
After I try to activate the SHT75 in the Mycodo Input panel, I get the message:
"Error: Could not activate Input controller with ID 11: name 'xrange' is not defined"
Here are some logs:
2018-03-30 23:44:35,720 - mycodo.daemon - ERROR - Could not activate Input controller with ID 11: name 'xrange' is not defined
Traceback (most recent call last):
File "/var/mycodo-root/mycodo/mycodo_daemon.py", line 486, in controller_activate
ready, cont_id)
File "/var/mycodo-root/mycodo/controller_input.py", line 378, in __init__
self.sht_voltage)
File "/var/mycodo-root/mycodo/inputs/sht1x_7x.py", line 35, in __init__
self.sht_sensor = Sht(self.clock_pin, self.pin, voltage=self.voltage)
File "/var/mycodo-root/env/lib/python3.5/site-packages/sht_sensor/sensor.py", line 297, in __init__
super(Sht, self).__init__(pin_sck, pin_data, **sht_comms_kws)
File "/var/mycodo-root/env/lib/python3.5/site-packages/sht_sensor/sensor.py", line 131, in __init__
self._init()
File "/var/mycodo-root/env/lib/python3.5/site-packages/sht_sensor/sensor.py", line 136, in _init
self.gpio.set_pin_value(pin, k='direction', v='low')
File "/var/mycodo-root/env/lib/python3.5/site-packages/sht_sensor/gpio.py", line 66, in set_pin_value
ft.partial(open, get_pin_path(n, k), 'wb', 0) ) as dst:
File "/var/mycodo-root/env/lib/python3.5/site-packages/sht_sensor/gpio.py", line 22, in gpio_access_wrap
for n in xrange(checks, -1, -1):
NameError: name 'xrange' is not defined
To my modest coding knowledge, the problem has to do with Python versions.
Where should I dig? What should I learn?
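For reference, the traceback shows the installed `sht_sensor` package still calling Python 2's `xrange()`, which was removed in Python 3 (its lazy equivalent there is `range()`). Until the library is ported, one workaround sketch is to inject the missing name into that module (or simply edit `gpio.py` and replace `xrange` with `range`):
```python
# Workaround sketch: give sht_sensor.gpio the missing name under Python 3.
# Assumption: this import path matches the installed package shown in the traceback.
import sht_sensor.gpio as sht_gpio

if not hasattr(sht_gpio, "xrange"):
    sht_gpio.xrange = range  # Python 3's range() is the lazy equivalent of xrange()
```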
|
closed
|
2018-03-30T22:10:21Z
|
2018-04-09T09:15:11Z
|
https://github.com/kizniche/Mycodo/issues/437
|
[] |
kobrok
| 38
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 1,620
|
Exception in user code
|
When I run train.py in PyCharm, the code reports errors but does not stop running.
How can I solve this?
The full error output is as follows:
-----------------------------------------------
Setting up a new session...
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d5a902278>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a902278>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a902278>: Failed to establish a new connection: [Errno 111] Connection refused',))
[Errno 111] Connection refused
on_close() takes 1 positional argument but 3 were given
Could not connect to Visdom server.
Trying to start a server....
Command: /home/ncut/anaconda3/envs/cyclegan/bin/python -m visdom.server -p 8097 &>/dev/null &
create web directory ./checkpoints/maps_cyclegan/web...
learning rate 0.0002000 -> 0.0002000
/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
(epoch: 1, iters: 100, time: 0.227, data: 0.482) D_A: 0.232 G_A: 0.258 cycle_A: 2.129 idt_A: 0.503 D_B: 0.469 G_B: 0.588 cycle_B: 1.136 idt_B: 1.186
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d5a5a7748>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a5a7748>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a5a7748>: Failed to establish a new connection: [Errno 111] Connection refused',))
Exception in user code:
------------------------------------------------------------
(epoch: 1, iters: 200, time: 0.230, data: 0.003) D_A: 0.173 G_A: 0.267 cycle_A: 2.317 idt_A: 0.363 D_B: 0.300 G_B: 0.439 cycle_B: 0.697 idt_B: 1.128
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d5a5a7438>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a5a7438>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a5a7438>: Failed to establish a new connection: [Errno 111] Connection refused',))
(epoch: 1, iters: 300, time: 0.228, data: 0.002) D_A: 0.230 G_A: 0.256 cycle_A: 1.105 idt_A: 0.872 D_B: 0.302 G_B: 0.293 cycle_B: 1.817 idt_B: 0.469
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d5a5a61d0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a5a61d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d5a5a61d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Exception in user code:
------------------------------------------------------------
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d1d89ea20>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d89ea20>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d89ea20>: Failed to establish a new connection: [Errno 111] Connection refused',))
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d1d89ee48>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d89ee48>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d89ee48>: Failed to establish a new connection: [Errno 111] Connection refused',))
(epoch: 1, iters: 400, time: 0.885, data: 0.002) D_A: 0.271 G_A: 0.305 cycle_A: 1.942 idt_A: 0.247 D_B: 0.309 G_B: 0.210 cycle_B: 0.508 idt_B: 0.921
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d1d8b0f98>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d8b0f98>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d8b0f98>: Failed to establish a new connection: [Errno 111] Connection refused',))
(epoch: 1, iters: 500, time: 0.228, data: 0.002) D_A: 0.128 G_A: 0.171 cycle_A: 1.658 idt_A: 0.454 D_B: 0.285 G_B: 0.332 cycle_B: 0.991 idt_B: 0.974
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 95, in create_connection
raise err
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 722, in urlopen
chunked=chunked,
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 416, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1287, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1333, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1282, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 1042, in _send_output
self.send(msg)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/http/client.py", line 980, in send
self.connect()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4d1d8d01d0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/connectionpool.py", line 800, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d8d01d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 760, in _send
data=json.dumps(msg),
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/visdom/__init__.py", line 720, in _handle_post
r = self.session.post(url, data=data)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 577, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d1d8d01d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Exception in user code:
------------------------------------------------------------
Traceback (most recent call last):
File "/home/ncut/Desktop/pycharm item code/pytorch-CycleGAN-and-pix2pix/train.py", line 52, in <module>
model.optimize_parameters() # calculate loss functions, get gradients, update network weights
File "/home/ncut/Desktop/pycharm item code/pytorch-CycleGAN-and-pix2pix/models/cycle_gan_model.py", line 183, in optimize_parameters
self.forward() # compute fake images and reconstruction images.
File "/home/ncut/Desktop/pycharm item code/pytorch-CycleGAN-and-pix2pix/models/cycle_gan_model.py", line 115, in forward
self.rec_A = self.netG_B(self.fake_B) # G_B(G_A(A))
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
for t in chain(self.module.parameters(), self.module.buffers()):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1520, in parameters
for name, param in self.named_parameters(recurse=recurse):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1546, in named_parameters
for elem in gen:
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1490, in _named_members
for module_prefix, module in modules:
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1695, in named_modules
for m in module.named_modules(memo, submodule_prefix, remove_duplicate):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1695, in named_modules
for m in module.named_modules(memo, submodule_prefix, remove_duplicate):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1695, in named_modules
for m in module.named_modules(memo, submodule_prefix, remove_duplicate):
File "/home/ncut/anaconda3/envs/cyclegan/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1691, in named_modules
for name, module in self._modules.items():
KeyboardInterrupt
Process finished with exit code 1
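For context, the repeated "Connection refused" tracebacks above come from the visualizer trying to reach a visdom server on localhost:8097 that is not running; training itself keeps going, which is why the script keeps producing loss lines while printing exceptions. A small sketch (assuming visdom's default host and port) to verify whether a server is listening before training:
```python
# Check whether anything is listening on visdom's default address before launching train.py.
import socket

def visdom_is_up(host: str = "localhost", port: int = 8097) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

if not visdom_is_up():
    print("No visdom server found; start one first, e.g.: python -m visdom.server -p 8097")
```
Starting the server in a separate terminal before launching train.py usually makes the tracebacks disappear.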
|
closed
|
2023-12-11T10:14:32Z
|
2023-12-11T10:48:36Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1620
|
[] |
L-1027
| 0
|
custom-components/pyscript
|
jupyter
| 524
|
Impossible to Install: Hangs on configuration
|
Hello, I've installed this module from HACS.
Then I tried to add the integration from the UI,
but it's stuck (and eventually times out).
<img width="614" alt="Capture d’écran 2023-09-06 à 23 22 37" src="https://github.com/custom-components/pyscript/assets/177003/8d7f41de-6792-425c-b21d-b2473c3bf93e">
So I tried to manually add it via the config.yml file,
but if I add `pyscript:` with or without config below it, and click "verify configuration" in dev tools,
the verification hangs too...
Not sure what's wrong...
Nothing in the logs, obviously (in HA core 🤔 correct?)
|
open
|
2023-09-06T21:26:45Z
|
2023-11-01T12:34:22Z
|
https://github.com/custom-components/pyscript/issues/524
|
[] |
eMerzh
| 2
|
modoboa/modoboa
|
django
| 2,468
|
Spam not being released/marked
|
# Impacted versions
* OS Type: Ubuntu
* OS Version: 20.04
* Database Type: PostgreSQL
* Database version: 10
* Modoboa: 1.17
* Installer used: Yes
* Webserver: Nginx
# Steps to reproduce
Set domain filters like:

# Current behavior
Messages with a score like 3.4 are not being released to their destination (I mean IMAP clients like Outlook, Roundcube, ...).
# Expected behavior
Messages with a score between 3 and 8 should be released and marked as spam.
|
closed
|
2022-02-25T09:35:37Z
|
2022-03-11T09:05:16Z
|
https://github.com/modoboa/modoboa/issues/2468
|
[] |
sergioeix
| 1
|
explosion/spaCy
|
machine-learning
| 12,695
|
spacy-llm is not up to date on PyPI
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
```
pip install spacy-llm
```
Run example 3 from the documentation:
```
import spacy
nlp = spacy.blank("en")
nlp.add_pipe(
"llm",
config={
"task": {
"@llm_tasks": "spacy.NER.v2",
"labels": ["PERSON", "ORGANISATION", "LOCATION"],
},
"backend": {
"@llm_backends": "spacy.REST.v1",
"api": "OpenAI",
"config": {"model": "gpt-3.5-turbo"},
},
},
)
nlp.initialize()
doc = nlp("Jack and Jill rode up the hill in Les Deux Alpes")
print([(ent.text, ent.label_) for ent in doc.ents])
```
You will get the error:
```
ConfigValidationError:
Config validation error
llm.task -> labels str type expected
```
This is due to the type hint being specified as:
```
@registry.llm_tasks("spacy.NER.v2")
def make_ner_task_v2(
labels: str,
template: str = _DEFAULT_NER_TEMPLATE_V2,
```
## Attempts to resolve:
I can see that it is updated on GitHub, so it should be a simple case of updating the dependency:
```
pip install spacy-llm --upgrade
```
This correctly installs the 0.2.0 version that I want, but it does not include the change.
Installing from GitHub resolves the issue.
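A possible stopgap with the PyPI build is sketched below: since that build's `make_ner_task_v2` still types `labels` as `str`, passing the labels as a single comma-separated string should satisfy the validator (assumption: the released task splits the string on commas, as the v1 task did).
```python
# Sketch: pass the labels as one comma-separated string to work around the str type hint.
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            "@llm_tasks": "spacy.NER.v2",
            "labels": "PERSON,ORGANISATION,LOCATION",  # str instead of list
        },
        "backend": {
            "@llm_backends": "spacy.REST.v1",
            "api": "OpenAI",
            "config": {"model": "gpt-3.5-turbo"},
        },
    },
)
```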
## Info about spaCy
- **spaCy version:** 3.5.3
- **Platform:** macOS-13.4-arm64-arm-64bit
- **Python version:** 3.10.10
|
closed
|
2023-06-03T02:22:58Z
|
2023-07-06T00:02:36Z
|
https://github.com/explosion/spaCy/issues/12695
|
[
"bug",
"feat/llm"
] |
KennethEnevoldsen
| 4
|
HumanSignal/labelImg
|
deep-learning
| 789
|
Can I draw splines as annotations using labelImg
|
Can I draw splines as annotations using labelImg? Thank you.
|
open
|
2021-08-26T15:28:00Z
|
2021-08-26T15:28:00Z
|
https://github.com/HumanSignal/labelImg/issues/789
|
[] |
JanineCHEN
| 0
|
HumanSignal/labelImg
|
deep-learning
| 283
|
Program freezes on last Windows version
|
<!--
Please provide as much as detail and example as you can.
You can add screenshots if appropriate.
-->
I opened a dir, made a selection, saved the new label name, and then clicked for the next image. Since then, every time I open the program and try to make a selection, the program freezes with no error in the console.
- OS: Windows 10
- PyQt version: 1.3.1
|
closed
|
2018-05-03T08:24:55Z
|
2018-05-27T17:51:30Z
|
https://github.com/HumanSignal/labelImg/issues/283
|
[] |
raresandrei
| 4
|
PaddlePaddle/ERNIE
|
nlp
| 592
|
Is this model suitable for text proofreading?
|
closed
|
2020-11-12T08:26:01Z
|
2021-01-22T10:02:02Z
|
https://github.com/PaddlePaddle/ERNIE/issues/592
|
[
"wontfix"
] |
Baililily
| 2
|
|
microsoft/nni
|
machine-learning
| 4,796
|
How to dynamically skip over empty layers when performing model speedup after pruning?
|
**Describe the issue**:
When pruning a model at various pruning percentages (10%-95%) using the L1Norm Pruner, I get a `nni.compression.pytorch.speedup.error_code.EmptyLayerError: Pruning a Layer to empty is not legal` error. I was wondering if I can dynamically skip over such layers in these cases? Based on the documentation, I can't determine if a layer will be empty after pruning and before model speedup.
I couldn't find it in the documentation, but I was wondering if there was a way to tell if a layer is empty after pruning and before speedup, so that I can exclude it when speeding up, preventing the EmptyLayerError. Any help would be greatly appreciated, thanks!
**Environment**:
- NNI version: nni==2.7
- Python version: Python 3.8.10
- PyTorch/TensorFlow version: torch==1.10.2+cu113
**How to reproduce it?**:
Prune a model to the point where it gets very small, or start with a small model and continue to prune.
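In case it is useful, a small sketch of how the empty layers could be detected up front. It assumes the NNI 2.x pruner API, where `pruner.compress()` returns `(model, masks)` and `masks` maps each layer name to a dict containing a `'weight'` mask tensor; layers whose weight mask is all zeros are the ones that would trigger `EmptyLayerError` during speedup:
```python
# Sketch only: list layers whose pruning mask removes every weight.
def find_empty_layers(masks: dict) -> list:
    empty = []
    for name, layer_masks in masks.items():
        weight_mask = layer_masks.get("weight")
        if weight_mask is not None and int(weight_mask.sum()) == 0:
            empty.append(name)
    return empty
```
Those layers could then get a lower sparsity in the config_list (or have a single mask element flipped back to 1) before calling ModelSpeedup.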
|
closed
|
2022-04-23T20:11:09Z
|
2022-11-16T07:21:15Z
|
https://github.com/microsoft/nni/issues/4796
|
[] |
pmmitche
| 4
|
modin-project/modin
|
pandas
| 6,703
|
Executing `_query_compiler.set_index_name(None)` seems unnecessary.
|
closed
|
2023-11-03T23:45:10Z
|
2023-11-07T13:05:40Z
|
https://github.com/modin-project/modin/issues/6703
|
[
"Performance 🚀",
"Code Quality 💯"
] |
anmyachev
| 0
|
|
widgetti/solara
|
fastapi
| 616
|
https://solara.dev/documentation/advanced/understanding/routing has some errors in examples
|
`from solara.website.pages.examples.utilities import calculator` gives an error; that module doesn't seem to exist.
`import solara as sol` is used, but then the example uses `solar`.
`route_current, routes_current_level = solara.use_routes()` gives me an error: `AttributeError: module 'solara' has no attribute 'use_routes'`
I think I'm on the latest version, but maybe I'm out of sync somehow?
|
closed
|
2024-04-26T20:02:43Z
|
2024-04-26T22:10:40Z
|
https://github.com/widgetti/solara/issues/616
|
[] |
johnowhitaker
| 3
|
fohrloop/dash-uploader
|
dash
| 34
|
Write a test
|
Any kind of test. Just to get testing rolling.
|
closed
|
2021-04-26T15:51:58Z
|
2021-05-13T11:02:54Z
|
https://github.com/fohrloop/dash-uploader/issues/34
|
[
"help wanted",
"good first issue"
] |
fohrloop
| 3
|
nl8590687/ASRT_SpeechRecognition
|
tensorflow
| 167
|
Clicking that link, the Thchs30 and ST-CMDS datasets still cannot be downloaded
|
Hello, I've been trying for several days and still can't download the datasets; my classmate can't either. Please help.
|
closed
|
2020-02-06T13:57:07Z
|
2021-11-22T14:17:50Z
|
https://github.com/nl8590687/ASRT_SpeechRecognition/issues/167
|
[] |
dl-1875
| 3
|
gevent/gevent
|
asyncio
| 1,664
|
TypeError: memoryview: a bytes-like object is required, not 'str'
|
* gevent version: 20.6.2
* Python version: python:3.8.3-slim
* Docker: Docker version 19.03.1, build 74b1e89
* Operating System: debian-9-stretch-v20190813
* Google cloud machine n1-standard-4 (4 vCPUs, 15 GB memory)
### Description:
The docker environment starts as follows:
```
CMD /usr/sbin/cron && gunicorn -b 0.0.0.0:5000 -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 webserver:server
```
It runs a Flask application. Every time a socket connection is acquired, the response being written appears to be a string while bytes are expected.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/gevent/pywsgi.py", line 970, in handle_one_response
self.run_application()
File "/usr/local/lib/python3.8/site-packages/geventwebsocket/handler.py", line 82, in run_application
self.process_result()
File "/usr/local/lib/python3.8/site-packages/gevent/pywsgi.py", line 904, in process_result
self.write(data)
File "/usr/local/lib/python3.8/site-packages/gevent/pywsgi.py", line 751, in write
self._write_with_headers(data)
File "/usr/local/lib/python3.8/site-packages/gevent/pywsgi.py", line 772, in _write_with_headers
self._write(data)
File "/usr/local/lib/python3.8/site-packages/gevent/pywsgi.py", line 734, in _write
self._sendall(data)
File "/usr/local/lib/python3.8/site-packages/gevent/pywsgi.py", line 708, in _sendall
self.socket.sendall(data)
File "/usr/local/lib/python3.8/site-packages/gevent/_socket3.py", line 533, in sendall
data_memory = _get_memory(data)
File "src/gevent/_greenlet_primitives.py", line 98, in gevent._gevent_c_greenlet_primitives.get_memory
File "src/gevent/_greenlet_primitives.py", line 121, in gevent._gevent_c_greenlet_primitives.get_memory
File "src/gevent/_greenlet_primitives.py", line 109, in gevent._gevent_c_greenlet_primitives.get_memory
TypeError: memoryview: a bytes-like object is required, not 'str'
```
### What I've run:
The docker environment starts as follows:
```
CMD /usr/sbin/cron && gunicorn -b 0.0.0.0:5000 -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 webserver:server
```
Request comes in as
```
2020-08-18T10:22:51Z {'REMOTE_ADDR': '192.168.1.3', 'REMOTE_PORT': '49710', 'HTTP_HOST': 'somewebsite.com', (hidden keys: 33)} failed with TypeError
```
Perhaps it is an exception handler that tries to send a JSON string?
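A minimal sketch of the usual cause: gevent's pywsgi hands whatever the WSGI application returns straight to `socket.sendall()`, which requires bytes, so a handler (or error handler) that returns a plain JSON string reproduces exactly this TypeError. The names below are illustrative, not taken from the project in question:
```python
import json

def application(environ, start_response):
    payload = json.dumps({"ok": True})            # str
    body = payload.encode("utf-8")                # bytes: what pywsgi expects
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]  # returning [payload] instead reproduces the TypeError above
```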
|
closed
|
2020-08-18T10:34:16Z
|
2020-08-19T00:02:59Z
|
https://github.com/gevent/gevent/issues/1664
|
[] |
0x78f1935
| 4
|
dynaconf/dynaconf
|
django
| 576
|
Documenting the used parameters
|
Hi,
I recently started using Dynaconf for my project and found it very useful. I use a YAML file for setting the parameters. Besides describing the parameters as comments in the YAML file, is some programmatic alternative possible? For example, the user calls some method of the Dynaconf object and it returns the descriptive comment already present in the YAML file.
Thanks.
|
open
|
2021-05-02T06:25:28Z
|
2024-07-01T10:22:42Z
|
https://github.com/dynaconf/dynaconf/issues/576
|
[
"question",
"RFC"
] |
sarslanali
| 3
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 911
|
I can use Selenium with Chromium on aarch64 Ubuntu 18.xx, but when I try the same with undetected_chromedriver it won't even start
|
I cannot get undetected-chromedriver working on an Oracle A1 Ampere server with Ubuntu 18.xx installed. I have the latest available versions of chromium and chromium-chromedriver, and everything works normally when I use the Selenium library. But with undetected-chromedriver it fails.
Here is some info about my installed versions and system.
```
ubuntu$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
ubuntu$ chromium-browser --version
Chromium 107.0.5304.87 Built on Ubuntu , running on Ubuntu 18.04
ubuntu$ file chromedriver
chromedriver: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=9a79cae5c2e9ad7ecbe2aea855e6a09b25cc4ed7, stripped
```
Here is the code which works:
```python
from selenium.webdriver.chrome.options import Options
from selenium.webdriver import Chrome
import time
import os
copt = Options()
copt.add_argument('--headless')
copt.add_argument('--disable-web-security')
drv = Chrome(executable_path=os.getcwd() + '/chromedriver', options=copt)
drv.get('https://www.google.com/')
time.sleep(10)
drv.save_screenshot('screen_shot.png')
drv.quit()
```
According to my understanding, this should also work, but it just does not. I would be grateful if someone could shed some light. I also read that it could be because the chromedriver was in a folder under root's control, so I copied it from that folder to the project folder.
```python
import undetected_chromedriver
import time
import os
copt = undetected_chromedriver.ChromeOptions()
copt.add_argument('--headless')
copt.add_argument('--disable-web-security')
copt.add_argument("--remote-debugging-port=9222")  # tried after researching, did not work
copt.add_argument("--disable-dev-shm-usage")  # tried after researching, did not work
# copt.add_argument("--disable-gpu")  # tried after researching, did not work
copt.add_argument("--crash-dumps-dir=/tmp")  # tried after researching, did not work
copt.add_argument("--no-sandbox")  # tried after researching, did not work
drv = undetected_chromedriver.Chrome(executable_path=os.getcwd() + '/chromedriver', version_main=107, options=copt)
drv.get('https://www.google.com/')
time.sleep(10)
drv.save_screenshot('screen_shot.png')
drv.quit()
```
Here is the output for undetected-chromedriver script:
```
ubuntu$ python3 main.py
Traceback (most recent call last):
File "main.py", line 83, in <module>
drv = CChrome()
File "/usr/local/lib/python3.6/dist-packages/undetected_chromedriver/__init__.py", line 53, in __new__
instance.__init__(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
desired_capabilities=desired_capabilities)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 157, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 252, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.6/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /usr/bin/chromium-browser is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
```
These are the permissions for the py file and chromedriver:
```
ubuntu$ ls -l
total 12892
-rwxr-xr-x 1 ubuntu ubuntu 13193696 Nov 20 15:50 chromedriver
-rw-r--r-- 1 ubuntu ubuntu 2264 Nov 20 19:28 main.py
```
|
open
|
2022-11-20T20:00:35Z
|
2023-06-22T13:22:45Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/911
|
[] |
Jook3r
| 7
|
roboflow/supervision
|
pytorch
| 1,218
|
tflite with supervison
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi, does supervision support TFLite detection results? If not, is there a way to convert TFLite outputs to the `Detections` class format? A rough sketch of the kind of conversion I mean is below.
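For illustration, a minimal sketch of the conversion in question, assuming a typical SSD-style TFLite output (normalized `[ymin, xmin, ymax, xmax]` boxes plus per-box scores and class ids as NumPy arrays); the exact output layout of a given .tflite model may differ.

```python
import numpy as np
import supervision as sv

def tflite_to_detections(boxes, scores, class_ids, img_w, img_h, conf_thresh=0.25):
    """Convert SSD-style TFLite outputs into a supervision Detections object."""
    keep = scores >= conf_thresh
    boxes, scores, class_ids = boxes[keep], scores[keep], class_ids[keep]
    # normalized [ymin, xmin, ymax, xmax] -> absolute [x1, y1, x2, y2]
    xyxy = np.stack([
        boxes[:, 1] * img_w,
        boxes[:, 0] * img_h,
        boxes[:, 3] * img_w,
        boxes[:, 2] * img_h,
    ], axis=1)
    return sv.Detections(
        xyxy=xyxy.astype(np.float32),
        confidence=scores.astype(np.float32),
        class_id=class_ids.astype(int),
    )
```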
### Additional
_No response_
|
closed
|
2024-05-21T15:38:18Z
|
2024-11-02T17:22:19Z
|
https://github.com/roboflow/supervision/issues/1218
|
[
"question"
] |
abichoi
| 4
|
assafelovic/gpt-researcher
|
automation
| 980
|
Error Code 404 with Azure OpenAI
|
Hey team,
Great work on building this project.
I am currently facing an issue while using `gpt-researcher` with Azure OpenAI in hybrid mode.
Here is my current `.env` file
```bash
AZURE_OPENAI_API_KEY=XXXXXXXX
AZURE_OPENAI_ENDPOINT=https://sydney-openai.openai.azure.com/openai/
AZURE_OPENAI_API_VERSION=2024-08-01-preview
# note that the deployment name must be the same as the model name
FAST_LLM=azure_openai:gpt-4o
SMART_LLM=azure_openai:gpt-4o
STRATEGIC_LLM=azure_openai:gpt-4o
EMBEDDING=azure_openai:text-embedding-3-large
```
And this is the error that I am getting:
```python
openai.NotFoundError: Error code: 404 - {'error': {'code': 'DeploymentNotFound', 'message': 'The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.'}}
INFO: connection closed
```
FYI, the API has been deployed and tested in *Azure OpenAI Studio*! Any tips on how to solve it?
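One thing worth double-checking (an assumption on my part, not a confirmed cause of the 404): the Azure OpenAI clients usually expect the bare resource endpoint without a trailing `/openai/` path, and the name after `azure_openai:` must match the deployment name in the portal exactly. A sketch of what that `.env` would look like:

```bash
# sketch only (assumption: endpoint without the /openai/ suffix, and gpt-4o /
# text-embedding-3-large are the exact deployment names shown in the portal)
AZURE_OPENAI_API_KEY=XXXXXXXX
AZURE_OPENAI_ENDPOINT=https://sydney-openai.openai.azure.com/
AZURE_OPENAI_API_VERSION=2024-08-01-preview
FAST_LLM=azure_openai:gpt-4o
SMART_LLM=azure_openai:gpt-4o
STRATEGIC_LLM=azure_openai:gpt-4o
EMBEDDING=azure_openai:text-embedding-3-large
```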
|
closed
|
2024-11-11T08:11:19Z
|
2024-11-11T23:22:18Z
|
https://github.com/assafelovic/gpt-researcher/issues/980
|
[] |
franckess
| 1
|
d2l-ai/d2l-en
|
deep-learning
| 2,083
|
[Enhancement] Add related papers to the Universal Approximator section?
|
When reading [Section 4.1.1.4](https://d2l.ai/chapter_multilayer-perceptrons/mlp.html#universal-approximators), I find it quite interesting that a neural network can approximate any continuous function; however, the section does not provide additional reading material on this point, which is a bit sad. May I propose adding some related papers to this section, such as [Wang 2017](https://arxiv.org/abs/1702.07800), [Cybenko 1989](https://web.njit.edu/~usman/courses/cs675_fall18/10.1.1.441.7873.pdf), [Hornik 1989](https://www.sciencedirect.com/science/article/abs/pii/0893608089900208) and [Hornik 1991](https://www.sciencedirect.com/science/article/abs/pii/089360809190009T)?
|
closed
|
2022-03-26T16:30:14Z
|
2022-03-27T05:48:17Z
|
https://github.com/d2l-ai/d2l-en/issues/2083
|
[] |
minhlong94
| 2
|
biolab/orange3
|
pandas
| 6,699
|
Read and Write Apache Parquet
|
<!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
Apache Parquet (https://parquet.apache.org/) is becoming more and more popular, and I think it's practically a standard now in the data community; it is no longer restricted to Hadoop people. Qlik supports it, Alteryx will support it in the next release, even LibreOffice is working on it, etc.
Why?
- open-source format
- fast
**What's your proposed solution?**
To have Orange Data Mining support Apache Parquet files for read and write.
**Are there any alternative solutions?**
Converting the Parquet files beforehand, but that seems like a needless extra step (a sketch of that workaround is below).
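For reference, a minimal sketch of the current workaround (converting before loading into Orange), using pandas with a Parquet engine; the file names here are just placeholders:

```python
import pandas as pd

# Workaround today: convert Parquet to a format Orange already reads.
df = pd.read_parquet("data.parquet")      # requires pyarrow or fastparquet
df.to_csv("data.csv", index=False)        # then open data.csv in Orange

# The reverse direction, for writing results back out as Parquet:
pd.read_csv("results.csv").to_parquet("results.parquet", index=False)
```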
|
open
|
2024-01-09T08:25:06Z
|
2024-11-03T00:19:42Z
|
https://github.com/biolab/orange3/issues/6699
|
[
"wish",
"meal"
] |
simonaubertbd
| 5
|
lanpa/tensorboardX
|
numpy
| 477
|
UserWarning: ONNX export failed on ATen operator atan2 because torch.onnx.symbolic.atan2 does not exist .format(op_name, op_name))
|
**Describe the bug**
A clear and concise description of what the bug is.
I'm trying add_graph and I get these four errors:
```
C:\Users\s4551072\AppData\Local\conda\conda\envs\python36\lib\site-packages\torch\onnx\utils.py:586: UserWarning: ONNX export failed on ATen operator atan2 because torch.onnx.symbolic.atan2 does not exist
  .format(op_name, op_name))
C:\Users\s4551072\AppData\Local\conda\conda\envs\python36\lib\site-packages\torch\onnx\utils.py:586: UserWarning: ONNX export failed on ATen operator empty because torch.onnx.symbolic.empty does not exist
  .format(op_name, op_name))
C:\Users\s4551072\AppData\Local\conda\conda\envs\python36\lib\site-packages\torch\onnx\utils.py:586: UserWarning: ONNX export failed on ATen operator fill because torch.onnx.symbolic.fill does not exist
  .format(op_name, op_name))
C:\Users\s4551072\AppData\Local\conda\conda\envs\python36\lib\site-packages\torch\onnx\utils.py:586: UserWarning: ONNX export failed on ATen operator index because torch.onnx.symbolic.index does not exist
  .format(op_name, op_name))
```
**Minimal runnable code to reproduce the behavior**
```
from tensorboardX import SummaryWriter
import torch as t
import torch.nn as nn

# Note: Cconv1d, device, writer and dataloader_target are defined elsewhere in my
# code; only the relevant parts are shown here.

def crelu(re, im):
    return nn.functional.relu(re), nn.functional.relu(im)

def zlogit(re, im):
    abs_ = t.sqrt(re ** 2 + im ** 2)
    ang = t.atan2(im, re)
    mask = ~(ang > 0)
    clone = abs_.clone()
    clone[mask] = -1 * abs_[mask]
    return clone

class Clinear(nn.Module):
    def __init__(self, in_, out_):
        super(Clinear, self).__init__()
        self.Lre = nn.Linear(in_, out_)
        self.Lim = nn.Linear(in_, out_)

    def forward(self, re, im):
        return self.Lre(re) - self.Lim(im), self.Lre(im) + self.Lim(re)

class CDan(nn.Module):
    def __init__(self):
        super(CDan, self).__init__()
        self.feat1 = Cconv1d(in_channels=30, out_channels=60, kernel_size=2, stride=1, padding=2)
        self.feat2 = Cconv1d(in_channels=60, out_channels=120, kernel_size=3, stride=2, padding=0)
        # self.feat3 = nn.MaxPool1d(kernel_size=4)
        self.featl1 = Clinear(6360, 500)
        self.featl2 = Clinear(500, 31)
        self.featl3 = nn.Softmax(dim=1)

    def forward(self, x):
        re, im = x[Ellipsis, 0], x[Ellipsis, 1]
        op0re, op0im = crelu(*self.feat1(re, im))
        op1re, op1im = crelu(*self.feat2(op0re, op0im))
        # op2re, op2im = self.feat3(op1re), self.feat3(op1im)
        op2re, op2im = op1re.view(x.shape[0], -1), op1im.view(x.shape[0], -1)
        op3re, op3im = crelu(*self.featl1(op2re, op2im))
        op4re, op4im = crelu(*self.featl2(op3re, op3im))
        op = zlogit(op4re, op4im)
        return self.featl3(op)

my_net = CDan().to(device)
writer.add_graph(my_net, next(iter(dataloader_target))[0], verbose=True)
...
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment**
What is the result of
`pip list|grep -E "torch|proto|tensor"`
If the version is too old, please try to update first.
**Python environment**
Which version of python are you using? Did you use Andconda or Virtualenv?
Anaconda venv
**Additional context**
Add any other context about the problem here.
I'm using latest versions of Pytorch and TB, that is, 1.1 and 1.14 respectively.
|
closed
|
2019-07-26T07:23:45Z
|
2019-10-23T15:51:50Z
|
https://github.com/lanpa/tensorboardX/issues/477
|
[] |
thisismygitrepo
| 2
|
reiinakano/scikit-plot
|
scikit-learn
| 92
|
Too many indices For Array
|
I am facing an IndexError but I don't know why, as my train and test sets are in the shape required by cross_val_score, yet I'm still getting this error. Any suggestions?
[K-NN for Amazon Fud Review.html.pdf](https://github.com/reiinakano/scikit-plot/files/2300276/K-NN.for.Amazon.Fud.Review.html.pdf)
|
open
|
2018-08-19T11:26:33Z
|
2019-12-06T02:55:05Z
|
https://github.com/reiinakano/scikit-plot/issues/92
|
[] |
Bhisham-Sharma
| 4
|
donnemartin/system-design-primer
|
python
| 1,065
|
Scalability link is wrong
|
The scalability link given in the repo is wrong. I noticed this because the topics listed as being covered in the video are not actually there. But there is a video which matches the given description and topics: https://www.youtube.com/watch?v=gmKPkRCLNek&t=130s
|
open
|
2025-03-13T14:32:35Z
|
2025-03-21T03:35:35Z
|
https://github.com/donnemartin/system-design-primer/issues/1065
|
[] |
jayantk137
| 1
|
gunthercox/ChatterBot
|
machine-learning
| 1,889
|
Only return one character as response
|
I'm studying the lib and I hit an error in my first test: it returns only one character as the response.
the code:
```
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer
import os
bot = ChatBot('teste')
treino = ListTrainer(bot)
diretorio = '/Users/archo/PycharmProjects/untield/venv/testechatbot/treinadores/'
for treinadores in os.listdir(diretorio):
    treino.train(diretorio + treinadores)

while True:
    req = str(input("Voce: "))
    req = bot.get_response(req)
    print('Bot: ' + str(req))
```
I used a subdirectory (treinadores) with 4 .yml files for training:
example format:
```
categories:
- pedidos
conversations:
- - Gostaria de uma pizza grande 4 queijos
- Anotado, mais alguma coisa?
```

console:

```
[nltk_data] Downloading package stopwords to /Users/archo/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /Users/archo/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
List Trainer: [####################] 100%
List Trainer: [####################] 100%
List Trainer: [####################] 100%
List Trainer: [####################] 100%
Voce: oi
Bot: a
Voce: como vai?
Bot: e
```
|
closed
|
2019-12-26T17:45:17Z
|
2020-01-10T17:26:11Z
|
https://github.com/gunthercox/ChatterBot/issues/1889
|
[] |
viniciusyaunner
| 2
|
wkentaro/labelme
|
computer-vision
| 512
|
Is it possible to expose this application as an API and access through browser
|
I was wondering if anyone has implemented a way to access it from the browser.
|
closed
|
2019-11-18T15:36:57Z
|
2020-01-27T01:47:57Z
|
https://github.com/wkentaro/labelme/issues/512
|
[] |
surajitkundu29
| 0
|
TencentARC/GFPGAN
|
deep-learning
| 528
|
C:\Users\MARY\Downloads
|
open
|
2024-03-17T02:29:50Z
|
2024-03-17T02:30:32Z
|
https://github.com/TencentARC/GFPGAN/issues/528
|
[] |
waldirsof
| 1
|
|
mljar/mercury
|
data-visualization
| 236
|
Unable to open uploaded image file in "widgets" using Pillow
|
Hi, @pplonski . How can I open an image uploaded in the "widgets" file using Pillow? The following code doesn't work:
```python
import mercury as mr
from PIL import Image
file = mr.File(label="Upload the file", max_file_size="100MB")
filepath = file.filepath
im = Image.open(filepath)
```
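Not a confirmed diagnosis, but one assumption worth testing is that `file.filepath` is still empty until a file has actually been uploaded in the widget. A defensive variant of the snippet above:

```python
import mercury as mr
from PIL import Image

file = mr.File(label="Upload the file", max_file_size="100MB")

# Guard against the case where nothing has been uploaded yet (assumption:
# filepath is None/empty before the first upload completes).
if file.filepath:
    im = Image.open(file.filepath)
    print(im.size)
else:
    print("No file uploaded yet")
```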
Thank you for always responding quickly. Please take a break once in a while.
|
closed
|
2023-04-03T02:20:37Z
|
2023-04-03T08:26:23Z
|
https://github.com/mljar/mercury/issues/236
|
[
"bug"
] |
pythonista-blitz
| 1
|
ymcui/Chinese-BERT-wwm
|
tensorflow
| 114
|
For the same model (e.g. RoBERTa-wwm-ext, Chinese), are the outputs of the TensorFlow and PyTorch versions identical?
|
For the same model (e.g. RoBERTa-wwm-ext, Chinese), are the model outputs of the TensorFlow version and the PyTorch version identical?
|
closed
|
2020-05-07T09:21:13Z
|
2020-05-12T05:09:18Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/114
|
[] |
LowinLi
| 3
|
Nemo2011/bilibili-api
|
api
| 346
|
Release v16
|
Is there anything else that needs fixing? qeq
|
closed
|
2023-06-17T22:01:20Z
|
2023-06-21T04:05:57Z
|
https://github.com/Nemo2011/bilibili-api/issues/346
|
[] |
z0z0r4
| 0
|
sktime/pytorch-forecasting
|
pandas
| 1,590
|
[ENH] un-stale the package and give simple instructions for working install
|
Could somebody provide a Docker image, or details of working package versions?
I tried multiple installations and most of the time hit errors when running the tutorials.
As the library becomes stale it is more and more difficult to set up a proper environment.
Please share details, thanks.
|
open
|
2024-08-09T11:35:06Z
|
2024-08-30T12:17:35Z
|
https://github.com/sktime/pytorch-forecasting/issues/1590
|
[
"maintenance"
] |
veberi-alteo
| 6
|
clovaai/donut
|
nlp
| 108
|
What are acceptable values for max_length?
|
What does max_length refer to? What are the acceptable values?
|
open
|
2022-12-20T05:51:59Z
|
2023-01-06T07:19:47Z
|
https://github.com/clovaai/donut/issues/108
|
[] |
htcml
| 7
|
jupyter/nbgrader
|
jupyter
| 1,216
|
Option to export all feedback as PDF
|
<!--
Thanks for helping to improve nbgrader!
If you are submitting a bug report or looking for support, please use the below
template so we can efficiently solve the problem.
If you are requesting a new feature, feel free to remove irrelevant pieces of
the issue template.
-->
First, thank you for `nbgrader`!
Apologies if I've missed this in the documentation, but is there a config param to generate all feedback as PDF rather than HTML?
I suspect this is related to #440. [Based on this comment](https://github.com/jupyter/nbgrader/issues/440#issuecomment-181726806). I'm guessing I'll need to create a template, right?
## Use case
I want to mark up/annotate a PDF with additional feedback (drawings, arrows, notes, etc.)
## What I've tried
### `ipynb` -> `pdf`
- `jupyter nbconvert --to pdf feedback.html`
- **Download as ... PDF (via LaTeX)**
- Currently, I get an error ([here's the log](https://gist.github.com/myedibleenso/2c93ff310bf239f07a8e87044e4c2ea4)) when attempting to export via `jupyter nbconvert`
### `html` -> `pdf`
- html -> pdf via headless chrome (v76.0.3809.132)
- sort of works, but the equations aren't rendered for some reason
- print as PDF from browser (Chrome v76.0.3809.132)
- this more or less works, although it includes the absolute path of the file as a footer
- [`wkhtmltopdf`](https://wkhtmltopdf.org/)
- same results as my attempt using headless chrome
- html -> tex -> pdf via pandoc
- too barebones without a template
## System
### OS
macOS Mojave (v10.14.5)
### `nbgrader --version`
nbgrader version 0.6.0
### `jupyter notebook --version`
6.0.1
|
open
|
2019-09-13T03:41:21Z
|
2019-11-02T09:11:50Z
|
https://github.com/jupyter/nbgrader/issues/1216
|
[
"enhancement"
] |
myedibleenso
| 1
|
graphql-python/graphene-mongo
|
graphql
| 194
|
Can I use 'MongoengineConnectionField' without 'edges'?
|
current:

```graphql
users {
  edges {
    node {
      id
    }
  }
}
```

What I want is:

```graphql
users {
  id
}
```
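For what it's worth, a minimal sketch of the usual graphene way to expose a flat list without the Relay `edges`/`node` wrapper, i.e. using `graphene.List` with a plain resolver instead of `MongoengineConnectionField`; the `User`/`UserModel` names here are placeholders:

```python
import graphene
from graphene_mongo import MongoengineObjectType

from models import User as UserModel  # placeholder mongoengine Document

class User(MongoengineObjectType):
    class Meta:
        model = UserModel

class Query(graphene.ObjectType):
    # Plain list field -> query as `users { id }`, no edges/node nesting
    users = graphene.List(User)

    def resolve_users(self, info):
        return list(UserModel.objects.all())

schema = graphene.Schema(query=Query)
```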
|
closed
|
2022-03-20T03:43:23Z
|
2023-03-28T17:51:26Z
|
https://github.com/graphql-python/graphene-mongo/issues/194
|
[] |
xiangxn
| 2
|
saulpw/visidata
|
pandas
| 2,266
|
[undo] can't undo some threaded commands
|
**Small description**
After a pasting into cells or rows using `syspaste-*`, undo does not restore the original value of the cell.
**Steps to reproduce with sample data and a .vd**
Starting with this tsv sheet:
```
cells
first
second
```
hit `zY` on the cell that contains`"first"`, then `j` and then `zP`
```
cells
first
first
```
then hit `U`, and no undo happens. Instead there is a message: `nothing to undo on current sheet`.
To trigger the bug, `syspaste-cells` must be done manually. If it's done inside `-p script.vdj`, then `undo-last` works as expected.
**Additional context**
saul.pw/VisiData v3.1dev
The behavior started with a9e4e050 and its parent commit.
I've looked into the undo code a bit. The problem is that when `syspaste-*` runs `addUndo()`, `vd.activeCommand` is `None`, so the function returns without adding to `r.undofuncs`. I'm not sure what the fix should be.
|
closed
|
2024-01-23T09:42:27Z
|
2024-03-22T23:17:59Z
|
https://github.com/saulpw/visidata/issues/2266
|
[
"bug"
] |
midichef
| 2
|
YiVal/YiVal
|
api
| 45
|
Concurrency issue, global variable not enabled?
|
https://rcc5lo3n39a.larksuite.com/docx/ZtYpdJiYtouPiQx2WYIuFcDasRQ?from=from_copylink
|
closed
|
2023-08-18T09:13:14Z
|
2023-09-16T14:00:55Z
|
https://github.com/YiVal/YiVal/issues/45
|
[
"bug"
] |
crazycth
| 1
|
brightmart/text_classification
|
nlp
| 27
|
the last ouput of Bi-RNN in TextRNN
|
https://github.com/brightmart/text_classification/blob/68e2fcf57a8dcec7e7d12f78953ed570451f0076/a03_TextRNN/p8_TextRNN_model.py#L67
In the implementation, the final output of the Bi-RNN is calculated as the mean over all time steps. Compared with `output_rnn_last=output_rnn[:,-1,:]`, what is the difference between these two strategies in terms of their impact on the final classification results?
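To make the two strategies concrete, a minimal sketch (assuming `output_rnn` has shape `[batch, time, hidden]`; written with PyTorch tensors here rather than the repo's TensorFlow code):

```python
import torch

batch, time_steps, hidden = 4, 10, 8
output_rnn = torch.randn(batch, time_steps, hidden)  # stand-in for the Bi-RNN outputs

# Strategy 1: mean-pool the hidden states over all time steps
output_mean = output_rnn.mean(dim=1)   # shape [batch, hidden]

# Strategy 2: take only the hidden state at the last time step
output_last = output_rnn[:, -1, :]     # shape [batch, hidden]

# Both feed a [batch, hidden] vector to the classifier; mean-pooling aggregates
# evidence from the whole sequence, while the last-step variant relies on the RNN
# having carried everything forward to the final position.
```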
|
closed
|
2018-01-03T07:08:13Z
|
2018-01-27T15:10:46Z
|
https://github.com/brightmart/text_classification/issues/27
|
[] |
longbowking
| 1
|
pydata/xarray
|
pandas
| 9,565
|
Discrepancy between `.chunk` and provided encoding with `to_zarr`
|
### What happened?
While attempting to reverse engineer the chunking behavior of `to_zarr`, I came across (I think) an unexpected difference between two ways of defining the chunking behavior. Here's a short illustration:
via `.chunk`:
```python
chunk_scheme = {
'time': 1000,
'lat': 90,
'lon': 180
}
ds_chunked = ds.chunk(chunk_scheme)
ds_chunked.to_zarr(zarr_path, mode='w')
```
A sample of what is written by the `.chunk` strategy:
<img width="715" alt="image" src="https://github.com/user-attachments/assets/14c133ae-f4a9-4fd3-a647-2384e93c548e">
via provided `encoding`:
```python
encoding = {'time': {'chunks': (1000,)}, 'lat': {'chunks': (90,)}, 'lon': {'chunks': (180,)}}
ds.to_zarr('/tmp/output_zarr_store_cf', encoding=encoding)
```
A sample of what is written by the supplied `encoding` strategy:
<img width="671" alt="image" src="https://github.com/user-attachments/assets/9d81e22e-48b0-487e-b14e-8aad813d955a">
Obviously, the above look different from one another.
### What did you expect to happen?
'time' is a coordinate variable, and these seem to be handled differently in the `.chunk` case than one might expect. The `encoding` case has the chunking one might expect while `.chunk` hasn't been chunked at all here. This, from `.info`:
```
default (chunk) method
Name : /time
Type : zarr.core.Array
Data type : float64
Shape : (29200,)
Chunk shape : (29200,)
Order : C
Read-only : True
Compressor : Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0)
Store type : zarr.storage.DirectoryStore
No. bytes : 233600 (228.1K)
No. bytes stored : 4692 (4.6K)
Storage ratio : 49.8
Chunks initialized : 1/1
encoding-based method
Name : /time
Type : zarr.core.Array
Data type : int64
Shape : (29200,)
Chunk shape : (1000,)
Order : C
Read-only : True
Compressor : Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0)
Store type : zarr.storage.DirectoryStore
No. bytes : 233600 (228.1K)
No. bytes stored : 14686 (14.3K)
Storage ratio : 15.9
Chunks initialized : 30/30
```
Data variables, too, have significant differences. `pr` is one such variable. For reasons that are pretty unclear, the `.chunk` path is producing what I'd hoped for while the `encoding` path (which does chunk as expected for coordinates, as seen above) appears to be choosing (perhaps sensible) chunks which neatly divide up the shape of this dimension.
```
default (chunk) method
Name : /pr
Type : zarr.core.Array
Data type : float32
Shape : (29200, 192, 288)
Chunk shape : (1000, 90, 180)
Order : C
Read-only : True
Compressor : Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0)
Store type : zarr.storage.DirectoryStore
No. bytes : 6458572800 (6.0G)
No. bytes stored : 5906005659 (5.5G)
Storage ratio : 1.1
Chunks initialized : 180/180
encoding-based method
Name : /pr
Type : zarr.core.Array
Data type : float32
Shape : (29200, 192, 288)
Chunk shape : (1825, 12, 36)
Order : C
Read-only : True
Compressor : Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0)
Store type : zarr.storage.DirectoryStore
No. bytes : 6458572800 (6.0G)
No. bytes stored : 5753062918 (5.4G)
Storage ratio : 1.1
Chunks initialized : 2048/2048
```
I expected these two approaches to produce the same thing in the end. That seems to be what's suggested [here](https://github.com/pydata/xarray/blob/main/xarray/core/dataset.py#L2421-L2422)
### Minimal Complete Verifiable Example
```Python
import xarray as xr
ds = xr.tutorial.open_dataset('air_temperature')
# `.chunk` path
chunk_scheme = {'lat': 3, 'lon': 5, 'time': 100}
ds_chunked = ds.chunk(chunk_scheme)
ds_chunked.to_zarr('/tmp/output_zarr_store_dot-chunk', mode='w')
# encoding path
encoding = {'lat': {'chunks': (3,)}, 'lon': {'chunks': (5,)}, 'time': {'chunks': (100,)}}
ds.to_zarr('/tmp/output_zarr_store_encoding', encoding=encoding)
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [x] Complete example — the example is self-contained, including all data and the text of any traceback.
- [ ] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.11.9 (main, May 22 2024, 12:34:58) [Clang 15.0.0 (clang-1500.3.9.4)]
python-bits: 64
OS: Darwin
OS-release: 23.4.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: None
xarray: 2024.6.0
pandas: 2.2.2
numpy: 2.0.0
scipy: 1.14.0
netCDF4: None
pydap: None
h5netcdf: 1.3.0
h5py: 3.11.0
zarr: 2.18.2
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: None
dask: 2024.8.0
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.6.1
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 65.5.0
pip: 24.2
conda: None
pytest: 8.2.2
mypy: None
IPython: 8.26.0
sphinx: None
</details>
|
closed
|
2024-10-01T18:48:03Z
|
2024-12-10T21:03:42Z
|
https://github.com/pydata/xarray/issues/9565
|
[
"bug",
"needs mcve",
"topic-zarr"
] |
moradology
| 4
|
ploomber/ploomber
|
jupyter
| 294
|
Docs: clarify that other formats are supported as well (any format supported by jupytext)
|
The percent format should be the default format (shown in the docs and examples) since it is natively supported by some editors and IDEs.
|
closed
|
2021-05-09T20:04:50Z
|
2021-10-11T23:32:25Z
|
https://github.com/ploomber/ploomber/issues/294
|
[] |
edublancas
| 1
|
lepture/authlib
|
django
| 615
|
1.3.0: sphinx warnings `reference target not found`
|
First of all, it is currently not possible to use a straight `sphinx-build` command to build the documentation out of the source tree
<details>
```console
+ /usr/bin/sphinx-build -n -T -b man docs build/sphinx/man
Running Sphinx v7.1.2
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/sphinx/config.py", line 356, in eval_config_file
exec(code, namespace) # NoQA: S102
File "/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/conf.py", line 1, in <module>
import authlib
ModuleNotFoundError: No module named 'authlib'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/sphinx/cmd/build.py", line 285, in build_main
app = Sphinx(args.sourcedir, args.confdir, args.outputdir,
File "/usr/lib/python3.8/site-packages/sphinx/application.py", line 207, in __init__
self.config = Config.read(self.confdir, confoverrides or {}, self.tags)
File "/usr/lib/python3.8/site-packages/sphinx/config.py", line 179, in read
namespace = eval_config_file(filename, tags)
File "/usr/lib/python3.8/site-packages/sphinx/config.py", line 369, in eval_config_file
raise ConfigError(msg % traceback.format_exc()) from exc
sphinx.errors.ConfigError: There is a programmable error in your configuration file:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/sphinx/config.py", line 356, in eval_config_file
exec(code, namespace) # NoQA: S102
File "/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/conf.py", line 1, in <module>
import authlib
ModuleNotFoundError: No module named 'authlib'
Configuration error:
There is a programmable error in your configuration file:
Traceback (most recent call last):
File "/usr/lib/python3.8/site-packages/sphinx/config.py", line 356, in eval_config_file
exec(code, namespace) # NoQA: S102
File "/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/conf.py", line 1, in <module>
import authlib
ModuleNotFoundError: No module named 'authlib'
```
</details>
This can be fixed by patch like below:
```patch
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -1,12 +1,17 @@
-from pallets_sphinx_themes import get_version
+import sys
+import os
+sys.path.insert(0, os.path.abspath("../src"))
+
from pallets_sphinx_themes import ProjectLink
+import cachelib
# Project --------------------------------------------------------------
project = "CacheLib"
copyright = "2018 Pallets"
author = "Pallets"
-release, version = get_version("cachelib")
+version = cachelib.__version__
+release = version
# General --------------------------------------------------------------
```
This patch fixes what is described in the comment, and that kind of fix is suggested in the sphinx example conf.py: https://www.sphinx-doc.org/en/master/usage/configuration.html#example-of-configuration-file
Please let me know if you want this patch as a PR.
Then, when building my packages, I'm using the `sphinx-build` command with the `-n` switch, which shows warnings about missing references. These are not critical issues.
<details>
```console
+ /usr/bin/sphinx-build -n -T -b man docs build/sphinx/man
Running Sphinx v7.1.2
making output directory... done
building [mo]: targets for 0 po files that are out of date
writing output...
building [man]: all manpages
updating environment: [new config] 86 added, 0 changed, 0 removed
reading sources... [100%] specs/rfc9068
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
writing... python-authlib.3 { basic/index basic/intro basic/install basic/logging client/index client/oauth1 client/oauth2 client/requests client/httpx client/frameworks client/flask client/django client/starlette client/fastapi client/api jose/index jose/jws jose/jwe jose/jwk jose/jwt oauth/index oauth/1/index oauth/1/intro oauth/2/index oauth/2/intro oauth/oidc/index oauth/oidc/intro oauth/oidc/core oauth/oidc/discovery flask/index flask/1/index flask/1/authorization-server flask/1/resource-server flask/1/customize flask/1/api flask/2/index flask/2/authorization-server flask/2/grants flask/2/endpoints flask/2/resource-server flask/2/openid-connect flask/2/api django/index django/1/index django/1/authorization-server django/1/resource-server django/1/api django/2/index django/2/authorization-server django/2/grants django/2/endpoints django/2/resource-server django/2/openid-connect django/2/api specs/index specs/rfc5849 specs/rfc6749 specs/rfc6750 specs/rfc7009 specs/rfc7515 specs/rfc7516 specs/rfc7517 specs/rfc7518 specs/rfc7519 specs/rfc7523 specs/rfc7591 specs/rfc7592 specs/rfc7636 specs/rfc7638 specs/rfc7662 specs/rfc8037 specs/rfc8414 specs/rfc8628 specs/rfc9068 specs/oidc community/index community/funding community/support community/security community/contribute community/awesome community/sustainable community/authors community/licenses changelog } /home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/client/httpx.rst:20: WARNING: py:class reference target not found: AssertionClient
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/requests_client/oauth2_session.py:docstring of authlib.integrations.requests_client.oauth2_session.OAuth2Session:21: WARNING: py:class reference target not found: OAuth2Token
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/requests_client/oauth2_session.py:docstring of authlib.oauth2.client.OAuth2Client.fetch_token:14: WARNING: py:class reference target not found: OAuth2Token
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/requests_client/oauth2_session.py:docstring of authlib.oauth2.client.OAuth2Client.refresh_token:9: WARNING: py:class reference target not found: OAuth2Token
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/httpx_client/oauth1_client.py:docstring of authlib.integrations.httpx_client.oauth1_client.OAuth1Auth.auth_flow:1: WARNING: py:class reference target not found: httpx.Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/httpx_client/oauth1_client.py:docstring of authlib.integrations.httpx_client.oauth1_client.OAuth1Auth.auth_flow:1: WARNING: py:class reference target not found: httpx.Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/httpx_client/oauth1_client.py:docstring of authlib.integrations.httpx_client.oauth1_client.OAuth1Auth.auth_flow:1: WARNING: py:class reference target not found: httpx.Response
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/httpx_client/oauth2_client.py:docstring of authlib.oauth2.client.OAuth2Client.fetch_token:14: WARNING: py:class reference target not found: OAuth2Token
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/httpx_client/oauth2_client.py:docstring of authlib.oauth2.client.OAuth2Client.refresh_token:9: WARNING: py:class reference target not found: OAuth2Token
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/httpx_client/oauth2_client.py:docstring of authlib.oauth2.client.OAuth2Client.fetch_token:14: WARNING: py:class reference target not found: OAuth2Token
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/httpx_client/oauth2_client.py:docstring of authlib.oauth2.client.OAuth2Client.refresh_token:9: WARNING: py:class reference target not found: OAuth2Token
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/flask_client/__init__.py:docstring of authlib.integrations.flask_client.OAuth.register:5: WARNING: py:class reference target not found: RemoteApp
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/django_client/__init__.py:docstring of authlib.integrations.base_client.registry.BaseOAuth.register:5: WARNING: py:class reference target not found: RemoteApp
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/starlette_client/__init__.py:docstring of authlib.integrations.base_client.registry.BaseOAuth.register:5: WARNING: py:class reference target not found: RemoteApp
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/oauth/2/intro.rst:82: WARNING: py:class reference target not found: JWTBearerGrant
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/integrations/flask_oauth1/authorization_server.py:docstring of authlib.integrations.flask_oauth1.authorization_server.AuthorizationServer:1: WARNING: py:class reference target not found: authlib.rfc5849.AuthorizationServer
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/flask/2/openid-connect.rst:95: WARNING: py:class reference target not found: OpenIDCode
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/flask/2/openid-connect.rst:177: WARNING: py:class reference target not found: OpenIDImplicitGrant
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/django/2/openid-connect.rst:103: WARNING: py:class reference target not found: OpenIDCode
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/django/2/openid-connect.rst:185: WARNING: py:class reference target not found: OpenIDImplicitGrant
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/authorization_server.py:docstring of authlib.oauth2.rfc6749.authorization_server.AuthorizationServer.create_json_request:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.JsonRequest
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/authorization_server.py:docstring of authlib.oauth2.rfc6749.authorization_server.AuthorizationServer.create_oauth2_request:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/resource_protector.py:docstring of authlib.oauth2.rfc6749.resource_protector.ResourceProtector.register_token_validator:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.resource_protector.TokenValidator
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/grants/authorization_code.py:docstring of authlib.oauth2.rfc6749.grants.authorization_code.AuthorizationCodeGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/grants/implicit.py:docstring of authlib.oauth2.rfc6749.grants.implicit.ImplicitGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/grants/resource_owner_password_credentials.py:docstring of authlib.oauth2.rfc6749.grants.resource_owner_password_credentials.ResourceOwnerPasswordCredentialsGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/grants/client_credentials.py:docstring of authlib.oauth2.rfc6749.grants.client_credentials.ClientCredentialsGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc6749/grants/refresh_token.py:docstring of authlib.oauth2.rfc6749.grants.refresh_token.RefreshTokenGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/specs/rfc6750.rst:17: WARNING: py:class reference target not found: BearerTokenValidator
<unknown>:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6750.token.BearerTokenGenerator
<unknown>:1: WARNING: py:class reference target not found: cryptography.hazmat.primitives.asymmetric.rsa.RSAPublicKey
<unknown>:1: WARNING: py:class reference target not found: cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey
<unknown>:1: WARNING: py:class reference target not found: cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePublicKey
<unknown>:1: WARNING: py:class reference target not found: cryptography.hazmat.primitives.asymmetric.ec.EllipticCurvePrivateKey
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/jose/rfc7519/jwt.py:docstring of authlib.jose.rfc7519.jwt.JsonWebToken.decode:1: WARNING: py:meth reference target not found: verify
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/specs/rfc7523.rst:69: WARNING: py:class reference target not found: authlib.integrations.httpx_client.AssertionSession
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/docs/specs/rfc7523.rst:70: WARNING: py:class reference target not found: authlib.integrations.httpx_client.AsyncAssertionSession
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc7523/jwt_bearer.py:docstring of authlib.oauth2.rfc7523.jwt_bearer.JWTBearerGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc8628/device_code.py:docstring of authlib.oauth2.rfc8628.device_code.DeviceCodeGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oauth2/rfc9068/token.py:docstring of authlib.oauth2.rfc9068.token.JWTBearerTokenGenerator:5: WARNING: py:class reference target not found: authlib.oauth2.rfc6750.token.BearerTokenGenerator
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oidc/core/grants/implicit.py:docstring of authlib.oidc.core.grants.implicit.OpenIDImplicitGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
/home/tkloczko/rpmbuild/BUILD/authlib-1.3.0/authlib/oidc/core/grants/hybrid.py:docstring of authlib.oidc.core.grants.hybrid.OpenIDHybridGrant:1: WARNING: py:class reference target not found: authlib.oauth2.rfc6749.requests.OAuth2Request
done
build succeeded, 42 warnings.
```
</details>
You can peak on fixes that kind of issues in other projects
https://github.com/RDFLib/rdflib-sqlalchemy/issues/95
https://github.com/RDFLib/rdflib/pull/2036
https://github.com/click-contrib/sphinx-click/commit/abc31069
https://github.com/frostming/unearth/issues/14
https://github.com/jaraco/cssutils/issues/21
https://github.com/latchset/jwcrypto/pull/289
https://github.com/latchset/jwcrypto/pull/289
https://github.com/pypa/distlib/commit/98b9b89f
https://github.com/pywbem/pywbem/pull/2895
https://github.com/sissaschool/elementpath/commit/bf869d9e
https://github.com/sissaschool/xmlschema/commit/42ea98f2
https://github.com/sqlalchemy/sqlalchemy/commit/5e88e6e8
|
closed
|
2024-01-11T07:15:28Z
|
2024-01-17T17:06:09Z
|
https://github.com/lepture/authlib/issues/615
|
[] |
kloczek
| 3
|
sanic-org/sanic
|
asyncio
| 2,544
|
Sanic server
|
I use the Sanic server in production.
How can I stop or reload it gracefully?
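A minimal sketch of the graceful-shutdown side (assumptions: a recent Sanic release and that `GRACEFUL_SHUTDOWN_TIMEOUT` is the relevant setting; reloading is a separate concern, usually handled by the process manager):

```python
from sanic import Sanic
from sanic.response import text

app = Sanic("MyApp")
# Give in-flight requests up to 15 s to finish after a stop signal is received.
app.config.GRACEFUL_SHUTDOWN_TIMEOUT = 15.0

@app.get("/")
async def handler(request):
    return text("ok")

# In production, stopping gracefully typically means sending SIGTERM/SIGINT to the
# worker process (e.g. `kill -TERM <pid>`, `docker stop`, or a systemd stop); Sanic
# stops accepting new connections and waits up to the timeout above before exiting.
```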
|
closed
|
2022-09-14T08:19:55Z
|
2022-09-14T09:51:40Z
|
https://github.com/sanic-org/sanic/issues/2544
|
[
"feature request"
] |
wjy1100
| 1
|
man-group/arctic
|
pandas
| 617
|
First, last, and closest
|
Is it possible to load the first and the last ticks from TickStore? And how do I query the row closest to a specified date?
|
closed
|
2018-08-28T04:30:29Z
|
2018-08-30T18:35:27Z
|
https://github.com/man-group/arctic/issues/617
|
[] |
kothique
| 1
|
OFA-Sys/Chinese-CLIP
|
computer-vision
| 181
|
COCO dataset
|
Hello author, I have obtained the coco-cn permission and would like to get the preprocessed COCO data. I have already sent a request to the email yangapku@gmail.com. Many thanks!
|
closed
|
2023-08-08T07:45:58Z
|
2023-08-08T10:25:44Z
|
https://github.com/OFA-Sys/Chinese-CLIP/issues/181
|
[] |
songtianhui
| 1
|
albumentations-team/albumentations
|
machine-learning
| 2,041
|
Shifting augmentation causes 'Tensors must have same number of dimensions: got 2 and 1'
|
Hello,
I've been receiving the following error when training a YOLOv8 model with Albumentations:
`RuntimeError: Tensors must have same number of dimensions: got 2 and 1`
Until now I've only seen it happen when using this specific augmentation (shifting the picture):
```python
A.ShiftScaleRotate(
    shift_limit=(-0.3, 0.3),  # ScaleFloatType
    scale_limit=(0, 0),  # ScaleFloatType
    rotate_limit=(0, 0),  # ScaleFloatType
    interpolation=1,  # <class 'int'>
    border_mode=0,  # int
    value=0,  # ColorType
    mask_value=0,  # ColorType
    shift_limit_x=None,  # ScaleFloatType | None
    shift_limit_y=None,  # ScaleFloatType | None
    rotate_method="largest_box",  # Literal['largest_box', 'ellipse']
    always_apply=None,  # bool | None
    p=1.0,  # float
)
```
I'm sadly not very sure when the error started appearing.
Console Log:
```
RuntimeError Traceback (most recent call last)
Cell In[6], line 35
32 current_params = kfold_params[i]
34 model = YOLO('yolov8x-seg.pt')
---> 35 model_results = model.train(data=os.path.join(ROOT_DIR, f'config{i}.yaml'), imgsz=512, batch=16, deterministic=True, plots=True,
36 close_mosaic = 0,
37 optimizer = current_params["optimizer"],
38 epochs = current_params["epochs"],
39 lr0 = current_params["lr"],
40 dropout = current_params["dropout"],
41 # disable built-in augmentation, instead use Albumentations Library
42 augment=False, hsv_h=0, hsv_s=0, hsv_v=0, degrees=0, translate=0,
43 scale=0, shear=0.0, perspective=0, flipud=0, fliplr=0, bgr=0,
44 mosaic=0, mixup=0, copy_paste=0, erasing=0, crop_fraction=0)
45 results = model.val()
47 print("\n" + "#" * 60)
File ~/.local/lib/python3.11/site-packages/ultralytics/engine/model.py:802, in Model.train(self, trainer, **kwargs)
799 self.model = self.trainer.model
801 self.trainer.hub_session = self.session # attach optional HUB session
--> 802 self.trainer.train()
803 # Update model and cfg after training
804 if RANK in {-1, 0}:
File ~/.local/lib/python3.11/site-packages/ultralytics/engine/trainer.py:207, in BaseTrainer.train(self)
204 ddp_cleanup(self, str(file))
206 else:
--> 207 self._do_train(world_size)
File ~/.local/lib/python3.11/site-packages/ultralytics/engine/trainer.py:367, in BaseTrainer._do_train(self, world_size)
365 pbar = TQDM(enumerate(self.train_loader), total=nb)
366 self.tloss = None
--> 367 for i, batch in pbar:
368 self.run_callbacks("on_train_batch_start")
369 # Warmup
File ~/.local/lib/python3.11/site-packages/tqdm/std.py:1181, in tqdm.__iter__(self)
1178 time = self._time
1180 try:
-> 1181 for obj in iterable:
1182 yield obj
1183 # Update and possibly print the progressbar.
1184 # Note: does not call self.update(1) for speed optimisation.
File ~/.local/lib/python3.11/site-packages/ultralytics/data/build.py:48, in InfiniteDataLoader.__iter__(self)
46 """Creates a sampler that repeats indefinitely."""
47 for _ in range(len(self)):
---> 48 yield next(self.iterator)
File ~/.local/lib/python3.11/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO(https://github.com/pytorch/pytorch/issues/76750)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
633 self._IterableDataset_len_called is not None and \
634 self._num_yielded > self._IterableDataset_len_called:
File ~/.local/lib/python3.11/site-packages/torch/utils/data/dataloader.py:1324, in _MultiProcessingDataLoaderIter._next_data(self)
1322 if len(self._task_info[self._rcvd_idx]) == 2:
1323 data = self._task_info.pop(self._rcvd_idx)[1]
-> 1324 return self._process_data(data)
1326 assert not self._shutdown and self._tasks_outstanding > 0
1327 idx, data = self._get_data()
File ~/.local/lib/python3.11/site-packages/torch/utils/data/dataloader.py:1370, in _MultiProcessingDataLoaderIter._process_data(self, data)
1368 self._try_put_index()
1369 if isinstance(data, ExceptionWrapper):
-> 1370 data.reraise()
1371 return data
File ~/.local/lib/python3.11/site-packages/torch/_utils.py:706, in ExceptionWrapper.reraise(self)
702 except TypeError:
703 # If the exception takes multiple arguments, don't try to
704 # instantiate since we don't know how to
705 raise RuntimeError(msg) from None
--> 706 raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "/home/user/.local/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 309, in _worker_loop
data = fetcher.fetch(index) # type: ignore[possibly-undefined]
^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
return self.collate_fn(data)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/ultralytics/data/dataset.py", line 240, in collate_fn
value = torch.cat(value, 0)
^^^^^^^^^^^^^^^^^^^
RuntimeError: Tensors must have same number of dimensions: got 2 and 1
```
|
open
|
2024-10-30T20:51:53Z
|
2025-01-13T17:25:08Z
|
https://github.com/albumentations-team/albumentations/issues/2041
|
[
"bug",
"Need check"
] |
armanivers
| 13
|
sherlock-project/sherlock
|
python
| 2,392
|
Most of the results are 'Error connecting"
|
### Installation method
PyPI (via pip)
### Package version
Sherlock v0.15.0
### Description
Hello community!
While everything was working fine before, for the last 1 or 2 hours I've been getting mostly incorrect results.
Here's an example (where I've replaced the real username with “username”).
For the username I'm looking for, I should be finding a Telegram, a VSCO, a Twitter... but all I'm left with are false positives.
```
[-] ResearchGate: Illegal Username Format For This Site!
[-] 1337x: Error Connecting
[-] 2Dimensions: Not Found!
[-] 3dnews: Error Connecting
[-] 7Cups: Not Found!
[-] 8tracks: Not Found!
[-] 9GAG: Not Found!
[-] APClips: Not Found!
[-] About.me: Not Found!
[-] Academia.edu: Not Found!
[-] AdmireMe.Vip: Not Found!
[-] Air Pilot Life: Not Found!
[-] Airbit: Not Found!
[-] Airliners: Not Found!
[-] All Things Worn: Not Found!
[-] AllMyLinks: Not Found!
[-] AniWorld: Not Found!
[-] Anilist: Not Found!
[-] Apple Developer: Not Found!
[-] Apple Discussions: Not Found!
[-] Archive of Our Own: Not Found!
[-] Archive.org: Not Found!
[-] ArtStation: Not Found!
[-] Asciinema: Error Connecting
[-] Ask Fedora: Not Found!
[-] AskFM: Error Connecting
[-] Atcoder: Not Found!
[-] Audiojungle: Not Found!
[-] Autofrage: Not Found!
[-] Avizo: Not Found!
[-] BLIP.fm: Error Connecting
[-] BOOTH: Not Found!
[-] Bandcamp: Not Found!
[-] Bazar.cz: Not Found!
[-] Behance: Not Found!
[-] Bezuzyteczna: Not Found!
[-] BiggerPockets: Not Found!
[-] BioHacking: Not Found!
[-] BitBucket: Not Found!
[-] Bitwarden Forum: Not Found!
[-] Blipfoto: Not Found!
[-] Blogger: Not Found!
[-] BoardGameGeek: Not Found!
[-] BongaCams: Not Found!
[-] Bookcrossing: Not Found!
[-] BraveCommunity: Not Found!
[-] BugCrowd: Not Found!
[-] BuyMeACoffee: Not Found!
[-] BuzzFeed: Not Found!
[-] CGTrader: Not Found!
[-] CNET: Not Found!
[-] CSSBattle: Not Found!
[-] CTAN: Not Found!
[-] Caddy Community: Not Found!
[-] Car Talk Community: Not Found!
[-] Carbonmade: Not Found!
[-] Career.habr: Not Found!
[-] Championat: Not Found!
[-] Chaos: Not Found!
[-] Chatujme.cz: Not Found!
[-] ChaturBate: Not Found!
[-] Chess: Not Found!
[-] Choice Community: Not Found!
[-] Clapper: Not Found!
[-] CloudflareCommunity: Not Found!
[-] Clozemaster: Not Found!
[-] Clubhouse: Not Found!
[-] Code Snippet Wiki: Not Found!
[-] Codeberg: Not Found!
[-] Codecademy: Not Found!
[-] Codechef: Not Found!
[-] Codeforces: Not Found!
[-] Codepen: Not Found!
[-] Coders Rank: Not Found!
[-] Coderwall: Not Found!
[-] Codewars: Not Found!
[-] Coinvote: Not Found!
[-] ColourLovers: Not Found!
[-] Contently: Not Found!
[-] Coroflot: Not Found!
[-] Cracked: Not Found!
[-] Crevado: Not Found!
[-] Crowdin: Not Found!
[-] Cryptomator Forum: Not Found!
[-] Cults3D: Not Found!
[-] CyberDefenders: Not Found!
[-] DEV Community: Not Found!
[-] DMOJ: Not Found!
[-] DailyMotion: Not Found!
[-] Dealabs: Not Found!
[-] DeviantART: Not Found!
[-] Discogs: Not Found!
[-] Discord: Not Found!
[-] Discuss.Elastic.co: Not Found!
[-] Disqus: Not Found!
[-] Docker Hub: Not Found!
[-] Dribbble: Not Found!
[-] Duolingo: Not Found!
[-] Eintracht Frankfurt Forum: Not Found!
[-] Empretienda AR: Error Connecting
[-] Envato Forum: Not Found!
[-] Erome: Not Found!
[-] Exposure: Not Found!
[-] exophase: Not Found!
[-] EyeEm: Not Found!
[-] F3.cool: Not Found!
[-] Fameswap: Not Found!
[-] Fandom: Not Found!
[-] Fanpop: Not Found!
[-] Finanzfrage: Not Found!
[-] Fiverr: Not Found!
[-] Flickr: Not Found!
[-] Flightradar24: Not Found!
[-] Flipboard: Not Found!
[-] Football: Not Found!
[-] FortniteTracker: Not Found!
[-] Forum Ophilia: Not Found!
[-] Fosstodon: Not Found!
[-] Freelance.habr: Not Found!
[-] Freelancer: Not Found!
[-] Freesound: Not Found!
[-] GNOME VCS: Not Found!
[-] GaiaOnline: Not Found!
[-] Gamespot: Not Found!
[-] GeeksforGeeks: Not Found!
[-] Genius (Artists): Not Found!
[-] Genius (Users): Not Found!
[-] Gesundheitsfrage: Not Found!
[-] GetMyUni: Not Found!
[-] Giant Bomb: Not Found!
[-] Giphy: Not Found!
[-] GitBook: Not Found!
[-] GitHub: Not Found!
[-] GitLab: Not Found!
[-] Gitea: Not Found!
[-] Gitee: Not Found!
[-] GoodReads: Not Found!
[-] Google Play: Not Found!
[-] Gradle: Not Found!
[-] Grailed: Not Found!
[-] Gravatar: Not Found!
[-] Gumroad: Not Found!
[-] Gutefrage: Not Found!
[-] HackTheBox: Not Found!
[-] Hackaday: Not Found!
[+] HackenProof (Hackers): https://hackenproof.com/hackers/username
[-] HackerEarth: Error Connecting
[-] HackerNews: Not Found!
[-] HackerOne: Not Found!
[-] HackerRank: Not Found!
[-] Harvard Scholar: Not Found!
[-] Hashnode: Not Found!
[-] Heavy-R: Not Found!
[-] Holopin: Not Found!
[-] Houzz: Not Found!
[-] HubPages: Not Found!
[-] Hubski: Not Found!
[-] HudsonRock: Not Found!
[-] IFTTT: Not Found!
[-] IRC-Galleria: Not Found!
[-] Icons8 Community: Not Found!
[-] Image Fap: Not Found!
[-] ImgUp.cz: Not Found!
[-] Imgur: Not Found!
[-] Instagram: Not Found!
[-] Instructables: Not Found!
[-] Intigriti: Not Found!
[-] Ionic Forum: Not Found!
[-] Issuu: Not Found!
[-] Itch.io: Not Found!
[-] Itemfix: Not Found!
[-] Jellyfin Weblate: Not Found!
[-] Jimdo: Not Found!
[-] Joplin Forum: Not Found!
[-] KEAKR: Error Connecting
[-] Kaggle: Not Found!
[-] kaskus: Not Found!
[-] Keybase: Not Found!
[-] Kick: Not Found!
[-] Kik: Not Found!
[-] Kongregate: Not Found!
[-] LOR: Not Found!
[-] Launchpad: Not Found!
[-] LeetCode: Not Found!
[-] LessWrong: Not Found!
[-] Letterboxd: Not Found!
[-] LibraryThing: Not Found!
[-] Lichess: Not Found!
[-] LinkedIn: Not Found!
[-] Linktree: Not Found!
[-] Listed: Not Found!
[-] LiveJournal: Not Found!
[-] Lobsters: Not Found!
[-] LottieFiles: Not Found!
[+] LushStories: https://www.lushstories.com/profile/username
[-] MMORPG Forum: Not Found!
[-] Mapify: Not Found!
[-] Medium: Not Found!
[-] Memrise: Not Found!
[-] Minecraft: Not Found!
[-] MixCloud: Not Found!
[-] Monkeytype: Not Found!
[-] Motherless: Not Found!
[-] Motorradfrage: Not Found!
[-] MyAnimeList: Not Found!
[-] MyMiniFactory: Not Found!
[-] Mydramalist: Not Found!
[-] Myspace: Not Found!
[-] NICommunityForum: Not Found!
[-] NationStates Nation: Not Found!
[-] NationStates Region: Not Found!
[-] Naver: Not Found!
[-] Needrom: Not Found!
[-] Newgrounds: Not Found!
[-] Nextcloud Forum: Not Found!
[-] Nightbot: Not Found!
[-] Ninja Kiwi: Not Found!
[-] NintendoLife: Not Found!
[-] NitroType: Not Found!
[-] NotABug.org: Not Found!
[-] Nyaa.si: Not Found!
[-] OGUsers: Error Connecting
[-] OpenStreetMap: Not Found!
[-] Opensource: Not Found!
[-] OurDJTalk: Error Connecting
[-] PCGamer: Not Found!
[-] PSNProfiles.com: Not Found!
[-] Packagist: Not Found!
[-] Pastebin: Not Found!
[-] Patreon: Not Found!
[-] PentesterLab: Not Found!
[-] PepperIT: Error Connecting
[+] Periscope: https://www.periscope.tv/username/
[-] Pinkbike: Not Found!
[-] PlayStore: Error Connecting
[-] PocketStars: Error Connecting
[-] Pokemon Showdown: Error Connecting
[-] Polarsteps: Error Connecting
[-] Polygon: Error Connecting
[-] Polymart: Error Connecting
[-] Pornhub: Error Connecting
[-] ProductHunt: Error Connecting
[-] PromoDJ: Error Connecting
[-] PyPi: Error Connecting
[-] Rajce.net: Error Connecting
[-] Rarible: Error Connecting
[-] Rate Your Music: Error Connecting
[-] Rclone Forum: Error Connecting
[-] RedTube: Error Connecting
[-] Redbubble: Error Connecting
[-] Reddit: Error Connecting
[-] Reisefrage: Error Connecting
[-] Replit.com: Error Connecting
[-] ReverbNation: Error Connecting
[-] Roblox: Error Connecting
[-] RocketTube: Error Connecting
[-] RoyalCams: Error Connecting
[-] RubyGems: Error Connecting
[-] Rumble: Error Connecting
[-] RuneScape: Error Connecting
[-] SWAPD: Error Connecting
[-] Sbazar.cz: Error Connecting
[-] Scratch: Error Connecting
[-] Scribd: Error Connecting
[-] ShitpostBot5000: Error Connecting
[-] Shpock: Error Connecting
[-] Signal: Error Connecting
[-] Sketchfab: Error Connecting
[-] Slack: Error Connecting
[-] Slant: Error Connecting
[-] Slashdot: Error Connecting
[-] SlideShare: Error Connecting
[-] Slides: Error Connecting
[-] SmugMug: Error Connecting
[-] Smule: Error Connecting
[-] Snapchat: Error Connecting
[-] SoundCloud: Error Connecting
[-] SourceForge: Error Connecting
[-] SoylentNews: Error Connecting
[-] Speedrun.com: Error Connecting
[-] Spells8: Error Connecting
[-] Splice: Error Connecting
[-] Splits.io: Error Connecting
[-] Sporcle: Error Connecting
[-] Sportlerfrage: Error Connecting
[-] SportsRU: Error Connecting
[-] Spotify: Error Connecting
[-] Star Citizen: Error Connecting
[-] Steam Community (Group): Error Connecting
[-] Steam Community (User): Error Connecting
[-] Strava: Error Connecting
[-] SublimeForum: Error Connecting
[-] TETR.IO: Error Connecting
[-] Tiendanube: Error Connecting
[-] TLDR Legal: Error Connecting
[-] Topcoder: Error Connecting
[-] TRAKTRAIN: Error Connecting
[-] Telegram: Error Connecting
[-] Tellonym.me: Error Connecting
[-] Tenor: Error Connecting
[-] ThemeForest: Error Connecting
[-] TnAFlix: Error Connecting
[-] TorrentGalaxy: Error Connecting
[-] TradingView: Error Connecting
[-] Trakt: Error Connecting
[-] TrashboxRU: Error Connecting
[-] Trawelling: Error Connecting
[-] Trello: Error Connecting
[-] TryHackMe: Error Connecting
[-] Tuna: Error Connecting
[-] Tweakers: Error Connecting
[-] Twitch: Error Connecting
[-] Twitter: Error Connecting
[-] Typeracer: Error Connecting
[-] Ultimate-Guitar: Error Connecting
[-] Unsplash: Error Connecting
[-] Untappd: Error Connecting
[-] VK: Error Connecting
[-] VSCO: Error Connecting
[-] Velog: Error Connecting
[-] Velomania: Error Connecting
[-] Venmo: Error Connecting
[-] Vero: Error Connecting
[-] Vimeo: Error Connecting
[-] VirusTotal: Error Connecting
[-] VLR: Error Connecting
[-] WICG Forum: Error Connecting
[-] Warrior Forum: Error Connecting
[-] Wattpad: Error Connecting
[-] WebNode: Error Connecting
[-] Weblate: Error Connecting
[-] Weebly: Error Connecting
[-] Wikidot: Error Connecting
[-] Wikipedia: Error Connecting
[-] Windy: Error Connecting
[-] Wix: Error Connecting
[-] WolframalphaForum: Error Connecting
[-] WordPress: Error Connecting
[-] WordPressOrg: Error Connecting
[-] Wordnik: Error Connecting
[-] Wykop: Error Connecting
[-] Xbox Gamertag: Error Connecting
[-] Xvideos: Error Connecting
[-] YandexMusic: Error Connecting
[-] YouNow: Error Connecting
[-] YouPic: Error Connecting
[-] YouPorn: Error Connecting
[-] YouTube: Error Connecting
[-] akniga: Error Connecting
[-] authorSTREAM: Error Connecting
[-] babyRU: Error Connecting
[-] babyblogRU: Error Connecting
[-] chaos.social: Error Connecting
[-] couchsurfing: Error Connecting
[-] d3RU: Error Connecting
[-] dailykos: Error Connecting
[-] datingRU: Error Connecting
[-] devRant: Error Connecting
[-] drive2: Error Connecting
[-] eGPU: Error Connecting
[-] eintracht: Error Connecting
[-] fixya: Error Connecting
[-] fl: Error Connecting
[-] forum_guns: Error Connecting
[-] freecodecamp: Error Connecting
[-] furaffinity: Error Connecting
[-] geocaching: Error Connecting
[-] gfycat: Error Connecting
[-] habr: Error Connecting
[-] hackster: Error Connecting
[-] hunting: Error Connecting
[-] iMGSRC.RU: Error Connecting
[-] igromania: Error Connecting
[-] interpals: Error Connecting
[-] irecommend: Error Connecting
[-] jbzd.com.pl: Error Connecting
[-] jeuxvideo: Error Connecting
[-] kofi: Error Connecting
[-] kwork: Error Connecting
[-] last.fm: Error Connecting
[-] leasehackr: Error Connecting
[-] livelib: Error Connecting
[-] mastodon.cloud: Error Connecting
[-] mastodon.social: Error Connecting
[-] mastodon.technology: Error Connecting
[-] mastodon.xyz: Error Connecting
[-] mercadolivre: Error Connecting
[-] minds: Error Connecting
[-] moikrug: Error Connecting
[-] mstdn.io: Error Connecting
[-] nairaland.com: Error Connecting
[-] nnRU: Error Connecting
[-] note: Error Connecting
[-] npm: Error Connecting
[-] opennet: Error Connecting
[-] osu!: Error Connecting
[-] phpRU: Error Connecting
[-] pikabu: Error Connecting
[-] pr0gramm: Error Connecting
[-] prog.hu: Error Connecting
[-] queer.af: Error Connecting
[-] satsisRU: Error Connecting
[-] sessionize: Error Connecting
[-] skyrock: Error Connecting
[-] social.tchncs.de: Error Connecting
[-] spletnik: Error Connecting
[-] svidbook: Error Connecting
[-] threads: Error Connecting
[-] toster: Error Connecting
[-] uid: Error Connecting
[-] wiki.vg: Error Connecting
[-] xHamster: Error Connecting
[-] znanylekarz.pl: Error Connecting
```
### Steps to reproduce
Run `sherlock username`.
### Additional information
_No response_
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct
|
open
|
2025-01-16T16:20:12Z
|
2025-03-17T17:41:12Z
|
https://github.com/sherlock-project/sherlock/issues/2392
|
[
"environment"
] |
babbyshark75
| 1
|