| repo_name (string, 9–75 chars) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
FactoryBoy/factory_boy
|
sqlalchemy
| 1,073
|
Protect `master` branch to prevent accidents
|
I saw this on the home page:

I assume this was an accidental oversight... or is there a reason this isn't currently enabled?
On other projects I help maintain, we typically set up the following (see the sketch after this list):
1. `master` is locked against pushes by everyone; merges happen only via PR.
2. PRs require at least 1 review.
3. Most CI checks are made mandatory. The exception is things like code-coverage bots, where it's sometimes okay if coverage drops slightly during a refactor etc.
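For reference, a minimal sketch of enabling these rules through the GitHub REST API, using `requests`. The token, review count, and check names are placeholders, not the project's actual settings:
```python
import requests

TOKEN = "ghp_..."                 # placeholder personal access token
REPO = "FactoryBoy/factory_boy"

# PUT /repos/{owner}/{repo}/branches/{branch}/protection
resp = requests.put(
    f"https://api.github.com/repos/{REPO}/branches/master/protection",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # Point 2: merging needs at least one approving review.
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        # Point 3: "ci" stands in for whatever check names the repo runs.
        "required_status_checks": {"strict": True, "contexts": ["ci"]},
        # Point 1: the lock applies to admins too.
        "enforce_admins": True,
        "restrictions": None,
    },
)
resp.raise_for_status()
```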
|
closed
|
2024-04-23T22:41:26Z
|
2024-08-17T15:53:39Z
|
https://github.com/FactoryBoy/factory_boy/issues/1073
|
[] |
jeffwidman
| 1
|
thtrieu/darkflow
|
tensorflow
| 726
|
Download datasets
|
I want to download images of apples. Where can I download such a dataset?
|
open
|
2018-04-18T05:37:16Z
|
2018-04-19T17:01:41Z
|
https://github.com/thtrieu/darkflow/issues/726
|
[] |
rucsacman
| 1
|
3b1b/manim
|
python
| 1,446
|
Error while installing manim
|
### Describe the error
Error while running the test program in Manim
### Code and Error
**Code**:
```
manimgl example_scenes.py OpeningManimExample
```
**Error**:
```
Traceback (most recent call last):
  File "/home/akhil/anaconda3/envs/manim/bin/manimgl", line 6, in <module>
    from manimlib.__main__ import main
  File "/home/akhil/manim_gl/manimlib/__init__.py", line 19, in <module>
    from manimlib.window import *
  File "/home/akhil/manim_gl/manimlib/window.py", line 2, in <module>
    from moderngl_window.context.pyglet.window import Window as PygletWindow
  File "/home/akhil/anaconda3/envs/manim/lib/python3.8/site-packages/moderngl_window/context/pyglet/__init__.py", line 1, in <module>
    from .keys import Keys  # noqa
  File "/home/akhil/anaconda3/envs/manim/lib/python3.8/site-packages/moderngl_window/context/pyglet/keys.py", line 12, in <module>
    from pyglet.window import key
  File "/home/akhil/anaconda3/envs/manim/lib/python3.8/site-packages/pyglet/window/__init__.py", line 1899, in <module>
    gl._create_shadow_window()
  File "/home/akhil/anaconda3/envs/manim/lib/python3.8/site-packages/pyglet/gl/__init__.py", line 206, in _create_shadow_window
    _shadow_window = Window(width=1, height=1, visible=False)
  File "/home/akhil/anaconda3/envs/manim/lib/python3.8/site-packages/pyglet/window/xlib/__init__.py", line 173, in __init__
    super(XlibWindow, self).__init__(*args, **kwargs)
  File "/home/akhil/anaconda3/envs/manim/lib/python3.8/site-packages/pyglet/window/__init__.py", line 600, in __init__
    raise NoSuchConfigException('No standard config is available.')
pyglet.window.NoSuchConfigException: No standard config is available.
```
### Environment
**OS System**: Ubuntu 18.04
**manim version**: manim_gl
**python version**: Python 3.8.8
|
closed
|
2021-03-24T12:35:35Z
|
2022-01-31T07:21:06Z
|
https://github.com/3b1b/manim/issues/1446
|
[] |
AkhilAkkapelli
| 0
|
google-research/bert
|
nlp
| 627
|
BUG: 'truncate_seq_pair'
|
Hi,
I found something confusing.
```
def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng):
  """Truncates a pair of sequences to a maximum sequence length."""
  while True:
    total_length = len(tokens_a) + len(tokens_b)
    if total_length <= max_num_tokens:
      break

    trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
    assert len(trunc_tokens) >= 1

    # We want to sometimes truncate from the front and sometimes from the
    # back to add more randomness and avoid biases.
    if rng.random() < 0.5:
      del trunc_tokens[0]
    else:
      trunc_tokens.pop()
```
In `truncate_seq_pair`, tokens are popped randomly from either the start or the end of `trunc_tokens`, and either is treated as fine. BUT, if `is_random_next == False`, `tokens_a` is the text that immediately precedes `tokens_b`.
Given that positional relation, if `trunc_tokens == tokens_a`, I think only `del trunc_tokens[0]` is valid. Equally, if `trunc_tokens == tokens_b`, only `trunc_tokens.pop()` is valid; otherwise text is removed from the middle of the pair.
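A minimal sketch of the suggested variant (hypothetical code, not from the repository): when the segments are contiguous, only trim `tokens_a` from the front and `tokens_b` from the back, so the pair stays adjacent:
```python
def truncate_seq_pair_contiguous(tokens_a, tokens_b, max_num_tokens):
  """Hypothetical variant for is_random_next == False: keep the two
  segments adjacent by only trimming the outer ends of the pair."""
  while len(tokens_a) + len(tokens_b) > max_num_tokens:
    if len(tokens_a) > len(tokens_b):
      del tokens_a[0]     # tokens_a precedes tokens_b: drop from its front
    else:
      tokens_b.pop()      # tokens_b follows tokens_a: drop from its back
```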
**e.g.**
>INFO:tensorflow:tokens: [CLS] - - the name is [MASK] ##im ##port ##ant to a monk - [MASK] pumped water night [MASK] that he might study by day , so i , the guardian [MASK] cloak ##s and para ##sol ##s [MASK] [MASK] the sacred doors of her lecture - room , im [MASK] ##be celestial knowledge . [MASK] my [MASK] i felt in me a soul [SEP] fallen star , i am , sir ! ' continued he , [MASK] ##sive ##ly , stroking his lean stomach - - [MASK] a fallen star ! [MASK] - fallen , if the dignity of philosophy will [MASK] of the [MASK] ##mile , among the snuck ##gs of the lower world - - indeed , even into the ho ##g - bucket [SEP]
**The truth:**
...
of cloaks and parasols, at the sacred doors of her lecture-room, imbibe celestial knowledge.
From my youth I felt in me a soul **above the matter-entangled herd.
She revealed to me the glorious fact, that I am a spark of Divinity itself.
A** fallen star, I am, sir!' continued he, pensively, stroking his lean stomach--'a fallen star!--fallen, if the dignity of philosophy will allow of the simile, among the hogs of the lower world--
...
You can see that **more than one line** has been dropped, and `tokens_a` is no longer adjacent to `tokens_b`.
If you have already considered this question, could you share your thoughts?
Thank you for your work.
|
open
|
2019-05-08T09:03:15Z
|
2019-05-08T09:03:15Z
|
https://github.com/google-research/bert/issues/627
|
[] |
moonlight1776
| 0
|
robotframework/robotframework
|
automation
| 4,980
|
DateTime library uses deprecated `datetime.utcnow()`
|
[datetime.datetime.utcnow()](https://docs.python.org/3/library/datetime.html#datetime.datetime.utcnow) has been deprecated in Python 3.12 and `datetime.datetime.now(datetime.UTC)` should be used instead. That `datetime.UTC` is new in Python 3.11, so we need to use either version or feature detection to avoid using it with earlier Python versions.
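A minimal feature-detection sketch (the helper name is illustrative). Note that `datetime.UTC`, added in 3.11, is just an alias for `datetime.timezone.utc`, and that `now()` with a timezone returns an aware datetime, so `replace(tzinfo=None)` is needed to keep the naive result `utcnow()` used to give:
```python
import datetime

# Feature detection: fall back to timezone.utc on Python < 3.11.
UTC = getattr(datetime, "UTC", datetime.timezone.utc)

def utc_now():
    # .replace(tzinfo=None) preserves the naive datetime that utcnow() returned.
    return datetime.datetime.now(UTC).replace(tzinfo=None)
```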
|
closed
|
2023-12-18T08:21:10Z
|
2023-12-20T23:39:01Z
|
https://github.com/robotframework/robotframework/issues/4980
|
[
"bug",
"priority: medium",
"rc 1",
"effort: small"
] |
pekkaklarck
| 1
|
jupyter/nbgrader
|
jupyter
| 1,555
|
`ImportError: cannot import name 'contextfilter' from 'jinja2'`
|
Using nbgrader 0.6.2 (also tried with 0.7.0-dev)
```
jupyter-nbconvert \
--Exporter.preprocessors=nbgrader.preprocessors.LockCells \
--Exporter.preprocessors=nbgrader.preprocessors.ClearSolutions \
--Exporter.preprocessors=nbgrader.preprocessors.ClearOutput \
--Exporter.preprocessors=nbgrader.preprocessors.CheckCellMetadata \
--Exporter.preprocessors=nbgrader.preprocessors.ClearHiddenTests \
--Exporter.preprocessors=nbgrader.preprocessors.ClearMarkScheme \
--Exporter.preprocessors=nbgrader.preprocessors.ComputeChecksums \
--stdout \
--to ipynb \
source.ipynb \
> release.ipynb
```
leads to:
```
Traceback (most recent call last):
File "/builds/lsi/inf2-notes/nbgrader/bin/jupyter-nbconvert", line 5, in <module>
from nbconvert.nbconvertapp import main
File "/builds/lsi/inf2-notes/nbgrader/lib/python3.10/site-packages/nbconvert/__init__.py", line 4, in <module>
from .exporters import *
File "/builds/lsi/inf2-notes/nbgrader/lib/python3.10/site-packages/nbconvert/exporters/__init__.py", line 3, in <module>
from .html import HTMLExporter
File "/builds/lsi/inf2-notes/nbgrader/lib/python3.10/site-packages/nbconvert/exporters/html.py", line 12, in <module>
from jinja2 import contextfilter
ImportError: cannot import name 'contextfilter' from 'jinja2'
```
Apparently Jinja2 removed `contextfilter` (deprecated in 3.0 and dropped in 3.1 in favor of `pass_context`); here is a related issue from another project:
https://github.com/great-expectations/great_expectations/issues/4538
Pinning `jinja2==3.0.3` resolves the issue.
Should this be fixed in `jupyter-nbconvert` or here?
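For reference, a minimal compatibility sketch of the rename behind the ImportError (the filter itself is illustrative):
```python
try:
    # Jinja2 >= 3.0; contextfilter was removed entirely in 3.1.
    from jinja2 import pass_context as context_decorator
except ImportError:
    # Jinja2 < 3.0
    from jinja2 import contextfilter as context_decorator

@context_decorator
def get_item(context, key):
    """Illustrative filter: receives the template context as its first argument."""
    return context.get(key)
```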
|
closed
|
2022-04-02T15:13:19Z
|
2022-04-04T16:45:05Z
|
https://github.com/jupyter/nbgrader/issues/1555
|
[] |
goekce
| 2
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 798
|
Training time breakdown
|
In the paper, the authors mention that "The majority (∼80%) of our training time is spent in Python code, since...". However, when I did a runtime breakdown of 3DGS training using `torch.cuda.Event()`, it turned out that most of the time is spent in backward propagation (which is implemented in CUDA). Can anyone explain whether there is a misunderstanding?
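For reference, a timing pattern along those lines (a toy linear model stands in for the 3DGS training step):
```python
import torch

model = torch.nn.Linear(256, 256).cuda()   # toy stand-in for the real model
x = torch.randn(64, 256, device="cuda")
loss = model(x).square().mean()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

start.record()
loss.backward()              # the CUDA-side stage being measured
end.record()

torch.cuda.synchronize()     # events resolve on the GPU timeline
print(f"backward: {start.elapsed_time(end):.2f} ms")
```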
|
closed
|
2024-05-09T13:47:00Z
|
2024-05-09T13:47:24Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/798
|
[] |
AmeYatoro
| 0
|
marimo-team/marimo
|
data-visualization
| 4,148
|
Notebook move to "no folder" does not work on dashboard project
|
### Describe the bug
If you select the "..." menu dropdown on a notebook in a project folder, choose "Move to folder", and select "No folder", nothing happens. I think it's supposed to move the notebook to the root folder of the project.
### Environment
This happened in `marimo.io/dashboard/projects?folder=****`.
### Code to reproduce
_No response_
|
open
|
2025-03-18T14:36:52Z
|
2025-03-18T14:36:52Z
|
https://github.com/marimo-team/marimo/issues/4148
|
[
"bug"
] |
dchassin
| 0
|
robotframework/robotframework
|
automation
| 5,219
|
Support stopping execution using `robot:exit-on-failure` tag
|
The command line argument `--exitonfailure` has been very useful. It would be great to have a reserved tag `robot:exit-on-failure` so that the same behavior can be enabled in an individual test suite. For example:
`suite1.robot`:
```
*** Settings ***
Test Tags       robot:exit-on-failure

*** Test Cases ***
Pass
    Pass Execution    Pass this

Scenario 1
    Simple Task 1
    Fail
    Simple Task 2    # Will not execute

Scenario 2    # Will not execute
    Simple Task 1
    Fail
    Simple Task 2

Scenario 3    # Will not execute
    Simple Task 1

*** Keywords ***
Simple Task 1
    Log To Console    ${\n}Simple Task 1...${\n}

Simple Task 2
    Log To Console    ${\n}Simple Task 2...${\n}
```
Output, something like this:
```
==============================================================================
Suite1
==============================================================================
Suite1.Test
==============================================================================
Pass | PASS |
Pass this
------------------------------------------------------------------------------
Scenario 1
Simple Task 1...
Scenario 1 | FAIL |
AssertionError
------------------------------------------------------------------------------
Scenario 2 | SKIP |
Skipped due to exit-on-failure
------------------------------------------------------------------------------
Scenario 3 | SKIP |
Skipped due to exit-on-failure
------------------------------------------------------------------------------
Suite1.Test | FAIL |
4 tests, 1 passed, 1 failed, 2 skipped
==============================================================================
Suite1 | FAIL |
4 tests, 1 passed, 1 failed, 2 skipped
==============================================================================
```
|
closed
|
2024-09-24T15:54:43Z
|
2024-12-18T13:33:02Z
|
https://github.com/robotframework/robotframework/issues/5219
|
[
"enhancement",
"priority: medium",
"beta 1",
"effort: small"
] |
douglasdotc
| 1
|
ymcui/Chinese-BERT-wwm
|
tensorflow
| 74
|
How well does it work for reading comprehension on dialogue text?
|
Thanks for open-sourcing this. I have seen good results on descriptive text, but how does it do on dialogue-style text?
|
closed
|
2019-11-13T10:39:27Z
|
2019-11-18T12:34:39Z
|
https://github.com/ymcui/Chinese-BERT-wwm/issues/74
|
[] |
victor-wzq
| 1
|
mage-ai/mage-ai
|
data-science
| 5,697
|
Search trigger by name in the UI
|
It would be very useful if we could search triggers by pipeline name on the Triggers page, especially when there are pages of triggers, as in the example below.

|
open
|
2025-02-14T11:20:34Z
|
2025-02-14T11:20:34Z
|
https://github.com/mage-ai/mage-ai/issues/5697
|
[] |
messerzen
| 0
|
microsoft/JARVIS
|
pytorch
| 7
|
Where is the code?
|
closed
|
2023-04-03T05:10:23Z
|
2023-04-03T06:41:58Z
|
https://github.com/microsoft/JARVIS/issues/7
|
[] |
tensorboy
| 1
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,166
|
Debugging
|
Hello, could you change the internal Bitvise software so that it automatically downloads and installs the latest version?
|
open
|
2024-02-13T06:27:22Z
|
2024-02-13T06:27:22Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1166
|
[] |
shayanspd
| 0
|
flaskbb/flaskbb
|
flask
| 378
|
ValueError: unsupported pickle protocol: 4
|
Can you help me? How can I solve this problem?
FlaskBB version:
```
(.flaskbb)[root@python flaskbb]# flaskbb --version
FlaskBB 1.0 using Flask 0.12.1 on Python 2.7.5 (default, Aug 4 2017, 00:39:18)
```
The error:
```
(.flaskbb)[root@python flaskbb]# flaskbb --config flaskbb.cfg run
/usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1287, u"'@@tx_isolation' is deprecated and will be removed in a future release. Please use '@@transaction_isolation' instead")
result = self._query(query)
--------------------------------------------------------------------------------
INFO in manager [/home/path/www/flaskbb/flaskbb/plugins/manager.py:37]:
Loading plugins under entrypoint flaskbb_plugins
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
INFO in manager [/home/path/www/flaskbb/flaskbb/plugins/manager.py:60]:
Loaded plugin: portal
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
INFO in manager [/home/path/www/flaskbb/flaskbb/plugins/manager.py:63]:
Loaded 1 plugins for entrypoint flaskbb_plugins
--------------------------------------------------------------------------------
/usr/lib64/python2.7/site-packages/sqlalchemy/sql/sqltypes.py:219: SAWarning: Unicode type received non-unicode bind param value '140214412517152'. (this warning may be suppressed after 10 occurrences)
(util.ellipses_string(value),))
2017-12-16 13:54:48,406 - werkzeug - INFO - * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
2017-12-16 13:55:04,110 - werkzeug - DEBUG - Initializing Flask-DebugToolbar log handler
2017-12-16 13:55:04,142 - werkzeug - INFO - 127.0.0.1 - - [16/Dec/2017 13:55:04] "GET / HTTP/1.1" 500 -
2017-12-16 13:55:04,244 - werkzeug - ERROR - Error on request:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/werkzeug/serving.py", line 209, in run_wsgi
    execute(self.server.app)
  File "/usr/lib/python2.7/site-packages/werkzeug/serving.py", line 197, in execute
    application_iter = app(environ, start_response)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1997, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1985, in wsgi_app
    response = self.handle_exception(e)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1540, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1610, in full_dispatch_request
    rv = self.preprocess_request()
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 1831, in preprocess_request
    rv = func()
  File "/home/mingdong.dan/www/flaskbb/flaskbb/app.py", line 282, in mark_current_user_online
    mark_online(request.remote_addr, guest=True)
  File "/home/mingdong.dan/www/flaskbb/flaskbb/utils/helpers.py", line 326, in mark_online
    expires = now + (flaskbb_config['ONLINE_LAST_MINUTES'] * 60) + 10
  File "/home/mingdong.dan/www/flaskbb/flaskbb/utils/settings.py", line 26, in __getitem__
    return Setting.as_dict()[key]
  File "/usr/lib/python2.7/site-packages/flask_caching/__init__.py", line 349, in decorated_function
    rv = f(*args, **kwargs)
  File "/home/mingdong.dan/www/flaskbb/flaskbb/management/models.py", line 130, in as_dict
    result = cls.query.all()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2703, in all
    return list(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 90, in instances
    util.raise_from_cause(err)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 75, in instances
    rows = [proc(row) for row in fetch]
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 437, in _instance
    loaded_instance, populate_existing, populators)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/loading.py", line 498, in _populate_full
    dict_[key] = getter(row)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/sqltypes.py", line 1540, in process
    return loads(value)
ValueError: unsupported pickle protocol: 4
```
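For context, this error means the cached value was pickled by Python 3 (pickle protocol 4 exists since Python 3.4) and is being read by Python 2, which only understands protocols 0-2. A minimal illustration (the key comes from the traceback above; the value is made up):
```python
import pickle

# Under Python 3: a settings value cached with protocol 4.
blob = pickle.dumps({"ONLINE_LAST_MINUTES": 15}, protocol=4)

# Loading `blob` under Python 2.7 raises:
#   ValueError: unsupported pickle protocol: 4
# Re-dumping with protocol=2, or regenerating the stored settings under the
# running interpreter, avoids the mismatch.
pickle.loads(blob)   # fine on Python 3; fails as above on Python 2
```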
|
closed
|
2017-12-16T05:58:27Z
|
2018-04-15T07:47:49Z
|
https://github.com/flaskbb/flaskbb/issues/378
|
[
"bug"
] |
582727501
| 4
|
mitmproxy/mitmproxy
|
python
| 7,423
|
Merge of Set-Cookie response headers in reverse mode causes Firefox browser to not recognize all cookies
|
#### Problem Description
#### Steps to reproduce the behavior:
In reverse proxy mode, the origin server's response contains multiple Set-Cookie headers, but mitmproxy automatically merges them, with the result that some websites only recognize the first cookie.
The website server uses HTTP/2.
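For reference, what actually reaches the client can be checked from a small inspection addon (a minimal sketch): `get_all` returns one entry per header line, so a single merged entry indicates the join happened:
```python
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    # Set-Cookie must stay one header line per cookie; a lone comma-joined
    # entry here means the values were merged on the way through.
    for value in flow.response.headers.get_all("set-cookie"):
        print("set-cookie:", value)
```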
#### System Information
Mitmproxy: 11.0.2
Python: 3.10.16
OpenSSL: OpenSSL 3.4.0 22 Oct 2024
Platform: Windows-10-10.0.26100-SP0
|
closed
|
2024-12-29T17:16:15Z
|
2024-12-30T09:21:41Z
|
https://github.com/mitmproxy/mitmproxy/issues/7423
|
[
"kind/triage"
] |
yinsel
| 4
|
3b1b/manim
|
python
| 1,925
|
Different expressions of pi creatures were found. Should we put them in the repository?
|
All of the pi creature expressions are now available here:
https://github.com/CaftBotti/manim_pi_creatures
They are from 3blue1brown.com; they may be the ones used in the 3b1b videos.
That repository also includes pi_creature.py. It comes from the cairo backend, but I changed it a little to work with the different pi creatures: you just need to put the pi creature SVGs in the assets folder and use the modified pi_creature.py instead of the original one.
Then you can write code with many pi creatures, just like 3b1b's code, and the resulting video will have the same effects as in 3b1b's videos.
So should we put the different pi creatures in this repository and/or manim-community? If we can use the cute pi in our own animations, we will be more glad to use manim.
|
closed
|
2022-12-10T09:03:28Z
|
2022-12-10T15:07:57Z
|
https://github.com/3b1b/manim/issues/1925
|
[] |
CaftBotti
| 1
|
fastapi-users/fastapi-users
|
fastapi
| 220
|
[feature request] route to generate tokens with equal or less access
|
The idea is that if a token carries some scopes (e.g. user:register, user:read, user:delete), its holder should be able to create a token with equal or narrower access.
Let's say I have a token that expires in 1 day and has the permissions `user:register, user:read`, and I want a new token with just `user:read`. I can then make a request to:
`www.host.com/token`
passing `permissions=['user:read']` or something similar, and the response is a token that is valid up to the same expiration (1 day) and grants access to any endpoint that depends on this scope or on no scope.
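A hypothetical sketch of such an endpoint (all names are illustrative, not part of fastapi-users): the requested scopes must be a subset of the authenticating token's scopes, and the derived token inherits the expiry:
```python
from fastapi import Depends, FastAPI, HTTPException

app = FastAPI()

def current_token():
    # Placeholder dependency; a real app would decode and verify a JWT here.
    return {"scopes": {"user:register", "user:read"}, "expires_at": 1735689600}

@app.post("/token")
def derive_token(permissions: list[str], token: dict = Depends(current_token)):
    requested = set(permissions)
    if not requested <= token["scopes"]:   # subset check
        raise HTTPException(status_code=403,
                            detail="Requested scopes exceed the current token")
    # The derived token carries only the requested scopes and the same expiry.
    return {"scopes": sorted(requested), "expires_at": token["expires_at"]}
```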
|
closed
|
2020-06-03T19:37:55Z
|
2020-06-15T12:31:20Z
|
https://github.com/fastapi-users/fastapi-users/issues/220
|
[
"duplicate"
] |
nullhack
| 2
|
davidteather/TikTok-Api
|
api
| 213
|
[BUG] - ByHashtag Not Working
|
I ran the byHashtag code and got the error below.
I have installed chromedriver and it works properly.
```
{'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.0 Safari/537.36)', 'accept-encoding': 'gzip, deflate, br', 'accept': 'application/json, text/plain, */*', 'Connection': 'keep-alive', 'authority': 'm.tiktok.com', 'method': 'GET', 'path': '/share/item/list?secUid=&id=5054&type=3&count=50&minCursor=0&maxCursor=0&shareUid=&lang=en&verifyFp=WWdzPV1IIfynsEDN&_signature=_02B4Z6wo00f01dpc3OQAAIBAiIevIjiGAd3aXNhAACmo7e', 'scheme': 'https', 'accept-language': 'en-US,en;q=0.9', 'referrer': 'https://www.tiktok.com/', 'sec-fetch-dest': 'empty', 'sec-fetch-mode': 'cors', 'sec-fetch-site': 'same-site'}
Converting response to JSON failed response is below (probably empty)
Traceback (most recent call last):
  File "C:\Python38\lib\site-packages\TikTokApi\tiktok.py", line 48, in getData
    return r.json()
  File "C:\Python38\lib\site-packages\requests\models.py", line 898, in json
    return complexjson.loads(self.text, **kwargs)
  File "C:\Python38\lib\json\__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "C:\Python38\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Python38\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\getbytag.py", line 41, in <module>
    Hashtag = api.byHashtag(tag, count=count)
  File "C:\Python38\lib\site-packages\TikTokApi\tiktok.py", line 255, in byHashtag
    res = self.getData(api_url, b, proxy=proxy, language=language)
  File "C:\Python38\lib\site-packages\TikTokApi\tiktok.py", line 54, in getData
    raise Exception('Invalid Response')
Exception: Invalid Response
```
|
closed
|
2020-08-11T15:27:14Z
|
2020-08-11T20:35:04Z
|
https://github.com/davidteather/TikTok-Api/issues/213
|
[
"bug"
] |
soleimanian
| 3
|
comfyanonymous/ComfyUI
|
pytorch
| 6,472
|
No module named 'ComfyUI-CCSR'
|
### Your question
I'm trying to run a simple workflow, and even though I have downloaded the correct model, I keep getting an error that doesn't make clear what I'm missing.

### Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node ID:** 2
- **Node Type:** CCSR_Model_Select
- **Exception Type:** ModuleNotFoundError
- **Exception Message:** No module named 'ComfyUI-CCSR'
## Stack Trace
File "/home/pef/ComfyUI/execution.py", line 328, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/execution.py", line 203, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/pef/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/nodes.py", line 177, in load_ccsr_checkpoint
self.model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/utils/common.py", line 18, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/utils/common.py", line 12, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/Python-3.11.11/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
```
## System Information
- **ComfyUI Version:** v0.3.9
- **Arguments:** /home/pef/ComfyUI/main.py --normalvram
- **OS:** posix
- **Python Version:** 3.11.11 (main, Jan 7 2025, 21:36:27) [GCC 13.3.0]
- **Embedded Python:** false
- **PyTorch Version:** 2.5.1+cu124
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4080 SUPER : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 16849829888
- **VRAM Free:** 14193524736
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
```
2025-01-14T19:04:34.929061 - ** ComfyUI startup time:2025-01-14T19:04:34.929081 - 2025-01-14T19:04:34.929099 - 2025-01-14 19:04:34.9292025-01-14T19:04:34.929116 -
2025-01-14T19:04:34.929138 - ** Platform:2025-01-14T19:04:34.929161 - 2025-01-14T19:04:34.929179 - Linux2025-01-14T19:04:34.929195 -
2025-01-14T19:04:34.929213 - ** Python version:2025-01-14T19:04:34.929228 - 2025-01-14T19:04:34.929248 - 3.11.11 (main, Jan 7 2025, 21:36:27) [GCC 13.3.0]2025-01-14T19:04:34.929264 -
2025-01-14T19:04:34.929280 - ** Python executable:2025-01-14T19:04:34.929295 - 2025-01-14T19:04:34.929310 - /home/pef/Python-3.11.11/bin/python3.112025-01-14T19:04:34.929324 -
2025-01-14T19:04:34.929339 - ** ComfyUI Path:2025-01-14T19:04:34.929354 - 2025-01-14T19:04:34.929368 - /home/pef/ComfyUI2025-01-14T19:04:34.929383 -
2025-01-14T19:04:34.929399 - ** User directory:2025-01-14T19:04:34.929413 - 2025-01-14T19:04:34.929427 - /home/pef/ComfyUI/user2025-01-14T19:04:34.929441 -
2025-01-14T19:04:34.929457 - ** ComfyUI-Manager config path:2025-01-14T19:04:34.929471 - 2025-01-14T19:04:34.929485 - /home/pef/ComfyUI/user/default/ComfyUI-Manager/config.ini2025-01-14T19:04:34.929500 -
2025-01-14T19:04:34.929520 - ** Log path:2025-01-14T19:04:34.929534 - 2025-01-14T19:04:34.929549 - /home/pef/ComfyUI/user/comfyui.log2025-01-14T19:04:34.929563 -
2025-01-14T19:04:35.874892 -
Prestartup times for custom nodes:2025-01-14T19:04:35.874923 -
2025-01-14T19:04:35.874962 - 0.0 seconds:2025-01-14T19:04:35.874975 - 2025-01-14T19:04:35.874987 - /home/pef/ComfyUI/custom_nodes/rgthree-comfy2025-01-14T19:04:35.875000 -
2025-01-14T19:04:35.875013 - 0.0 seconds:2025-01-14T19:04:35.875025 - 2025-01-14T19:04:35.875036 - /home/pef/ComfyUI/custom_nodes/comfyui-easy-use2025-01-14T19:04:35.875048 -
2025-01-14T19:04:35.875061 - 2.1 seconds:2025-01-14T19:04:35.875073 - 2025-01-14T19:04:35.875084 - /home/pef/ComfyUI/custom_nodes/comfyui-manager2025-01-14T19:04:35.875096 -
2025-01-14T19:04:35.875107 -
2025-01-14T19:04:37.276016 - Total VRAM 16069 MB, total RAM 88179 MB
2025-01-14T19:04:37.276141 - pytorch version: 2.5.1+cu124
2025-01-14T19:04:38.641684 - xformers version: 0.0.29.post1
2025-01-14T19:04:38.641870 - Set vram state to: NORMAL_VRAM
2025-01-14T19:04:38.641963 - Device: cuda:0 NVIDIA GeForce RTX 4080 SUPER : cudaMallocAsync
2025-01-14T19:04:38.785124 - Using xformers attention
2025-01-14T19:04:39.538847 - [Prompt Server] web root: /home/pef/ComfyUI/web
2025-01-14T19:04:40.260545 - Please install flash attention2025-01-14T19:04:40.260582 -
2025-01-14T19:04:41.191000 - Total VRAM 16069 MB, total RAM 88179 MB
2025-01-14T19:04:41.191120 - pytorch version: 2.5.1+cu124
2025-01-14T19:04:41.191167 - xformers version: 0.0.29.post1
2025-01-14T19:04:41.191268 - Set vram state to: NORMAL_VRAM
2025-01-14T19:04:41.191347 - Device: cuda:0 NVIDIA GeForce RTX 4080 SUPER : cudaMallocAsync
2025-01-14T19:04:41.215656 - [AnimateDiffEvo] - [0;31mERROR[0m - No motion models found. Please download one and place in: ['/home/pef/ComfyUI/custom_nodes/comfyui-animatediff-evolved/models', '/home/pef/ComfyUI/models/animatediff_models']
2025-01-14T19:04:42.131813 - --------------
2025-01-14T19:04:42.132008 - [91m ### Mixlab Nodes: [93mLoaded
2025-01-14T19:04:42.137265 - json_repair## OK2025-01-14T19:04:42.137295 -
2025-01-14T19:04:42.138535 - ChatGPT.available True
2025-01-14T19:04:42.138809 - edit_mask.available True
2025-01-14T19:04:42.138952 - simple_lama_inpainting## OK2025-01-14T19:04:42.138971 -
2025-01-14T19:04:42.139846 - ## lama torchscript model not found: /home/pef/ComfyUI/models/lama/big-lama.pt,pls download from https://github.com/enesmsahin/simple-lama-inpainting/releases/download/v0.1.0/big-lama.pt2025-01-14T19:04:42.139869 -
2025-01-14T19:04:42.139939 - LaMaInpainting.available True
2025-01-14T19:04:42.285287 - ## clip_interrogator_model not found: /home/pef/ComfyUI/models/clip_interrogator/Salesforce/blip-image-captioning-base, pls download from https://huggingface.co/Salesforce/blip-image-captioning-base2025-01-14T19:04:42.285320 -
2025-01-14T19:04:42.285446 - ClipInterrogator.available True
2025-01-14T19:04:42.326025 - ## text_generator_model not found: /home/pef/ComfyUI/models/prompt_generator/text2image-prompt-generator, pls download from https://huggingface.co/succinctly/text2image-prompt-generator/tree/main2025-01-14T19:04:42.326060 -
2025-01-14T19:04:42.326085 - ## zh_en_model not found: /home/pef/ComfyUI/models/prompt_generator/opus-mt-zh-en, pls download from https://huggingface.co/Helsinki-NLP/opus-mt-zh-en/tree/main2025-01-14T19:04:42.326099 -
2025-01-14T19:04:42.326417 - PromptGenerate.available True
2025-01-14T19:04:42.326482 - ChinesePrompt.available True
2025-01-14T19:04:42.326534 - RembgNode_.available True
2025-01-14T19:04:42.327016 - ffmpeg could not be found. Using ffmpeg from imageio-ffmpeg.2025-01-14T19:04:42.327039 -
2025-01-14T19:04:42.482943 - TripoSR.available
2025-01-14T19:04:42.483261 - MiniCPMNode.available
2025-01-14T19:04:42.493638 - Scenedetect.available
2025-01-14T19:04:42.559913 - FishSpeech.available
2025-01-14T19:04:42.563600 - SenseVoice.available
2025-01-14T19:04:42.579946 - Whisper.available False
2025-01-14T19:04:42.580353 - fal-client## OK2025-01-14T19:04:42.580378 -
2025-01-14T19:04:42.586062 - FalVideo.available
2025-01-14T19:04:42.586155 - [93m -------------- [0m
2025-01-14T19:04:42.587339 - ### Loading: ComfyUI-Impact-Pack (V8.3)2025-01-14T19:04:42.587362 -
2025-01-14T19:04:42.612804 - [Impact Pack] Wildcards loading done.2025-01-14T19:04:42.612856 -
2025-01-14T19:04:42.613394 - Adding2025-01-14T19:04:42.613420 - 2025-01-14T19:04:42.613436 - /home/pef/ComfyUI/custom_nodes2025-01-14T19:04:42.613450 - 2025-01-14T19:04:42.613462 - to sys.path2025-01-14T19:04:42.613476 -
2025-01-14T19:04:42.828360 -
[36mEfficiency Nodes:[0m Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...[92mSuccess![0m2025-01-14T19:04:42.828399 -
2025-01-14T19:04:42.829263 - Loaded Efficiency nodes from2025-01-14T19:04:42.829302 - 2025-01-14T19:04:42.829317 - /home/pef/ComfyUI/custom_nodes/efficiency-nodes-comfyui2025-01-14T19:04:42.829331 -
2025-01-14T19:04:42.831646 - Loaded ControlNetPreprocessors nodes from2025-01-14T19:04:42.831678 - 2025-01-14T19:04:42.831694 - /home/pef/ComfyUI/custom_nodes/comfyui_controlnet_aux2025-01-14T19:04:42.831707 -
2025-01-14T19:04:42.831959 - Could not find AdvancedControlNet nodes2025-01-14T19:04:42.831977 -
2025-01-14T19:04:42.832429 - Could not find AnimateDiff nodes2025-01-14T19:04:42.832451 -
2025-01-14T19:04:42.832698 - Could not find IPAdapter nodes2025-01-14T19:04:42.832717 -
2025-01-14T19:04:42.834260 - Could not find VideoHelperSuite nodes2025-01-14T19:04:42.834282 -
2025-01-14T19:04:42.834643 - Could not load ImpactPack nodes2025-01-14T19:04:42.834663 - 2025-01-14T19:04:42.834689 - Could not find ImpactPack nodes2025-01-14T19:04:42.834702 -
2025-01-14T19:04:42.845130 - # 😺dzNodes: LayerStyle -> [1;33mCannot import name 'guidedFilter' from 'cv2.ximgproc'
A few nodes cannot works properly, while most nodes are not affected. Please REINSTALL package 'opencv-contrib-python'.
For detail refer to [4mhttps://github.com/chflame163/ComfyUI_LayerStyle/issues/5[0m[m2025-01-14T19:04:42.845158 -
2025-01-14T19:04:43.277063 - [34mWAS Node Suite: [0mOpenCV Python FFMPEG support is enabled[0m2025-01-14T19:04:43.277103 -
2025-01-14T19:04:43.277160 - [34mWAS Node Suite [93mWarning: [0m`ffmpeg_bin_path` is not set in `/home/pef/ComfyUI/custom_nodes/pr-was-node-suite-comfyui-47064894/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.[0m2025-01-14T19:04:43.277177 -
2025-01-14T19:04:43.653949 - [34mWAS Node Suite: [0mFinished.[0m [32mLoaded[0m [0m218[0m [32mnodes successfully.[0m2025-01-14T19:04:43.653984 -
2025-01-14T19:04:43.654027 -
[3m[93m"Don't wait. The time will never be just right."[0m[3m - Napoleon Hill[0m
2025-01-14T19:04:43.654041 -
2025-01-14T19:04:43.658966 - Traceback (most recent call last):
File "/home/pef/ComfyUI/nodes.py", line 2073, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale/__init__.py", line 32, in <module>
from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "/home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale/nodes.py", line 6, in <module>
from usdu_patch import usdu
File "/home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale/usdu_patch.py", line 2, in <module>
from repositories import ultimate_upscale as usdu
File "/home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale/repositories/__init__.py", line 14, in <module>
spec.loader.exec_module(ultimate_upscale)
File "<frozen importlib._bootstrap_external>", line 936, in exec_module
File "<frozen importlib._bootstrap_external>", line 1073, in get_code
File "<frozen importlib._bootstrap_external>", line 1130, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py'
2025-01-14T19:04:43.659072 - Cannot import /home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale module for custom nodes: [Errno 2] No such file or directory: '/home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale/repositories/ultimate_sd_upscale/scripts/ultimate-upscale.py'
2025-01-14T19:04:43.662057 - [rvtools [0;32mINFO[0m] RvTools Version: 2.1.0
2025-01-14T19:04:43.681027 - Web extensions folder found at /home/pef/ComfyUI/web/extensions/ComfyLiterals2025-01-14T19:04:43.681054 -
2025-01-14T19:04:44.092124 - [36;20m[comfy-mtb] | INFO -> loaded [96m91[0m nodes successfuly[0m
2025-01-14T19:04:44.092440 - [36;20m[comfy-mtb] | INFO -> Some nodes (5) could not be loaded. This can be ignored, but go to http://127.0.0.1:8188/mtb if you want more information.[0m
2025-01-14T19:04:44.095199 - /home/pef2025-01-14T19:04:44.095224 -
2025-01-14T19:04:44.095239 - ############################################2025-01-14T19:04:44.095251 -
2025-01-14T19:04:44.095281 - /home/pef/custom_nodes/ComfyUI-NAI-styler/CSV2025-01-14T19:04:44.095300 -
2025-01-14T19:04:44.095319 - ############################################2025-01-14T19:04:44.095333 -
2025-01-14T19:04:44.095452 - []2025-01-14T19:04:44.095468 -
2025-01-14T19:04:44.095482 - ############################################2025-01-14T19:04:44.095495 -
2025-01-14T19:04:44.096799 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider2025-01-14T19:04:44.096822 -
2025-01-14T19:04:44.096840 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider2025-01-14T19:04:44.096854 -
2025-01-14T19:04:44.615178 - ### Loading: ComfyUI-Manager (V3.7.2)
2025-01-14T19:04:44.649512 - ### ComfyUI Revision: 2953 [418eb706] *DETACHED | Released on '2024-12-20'
2025-01-14T19:04:44.809832 - [34m[ComfyUI-Easy-Use] server: [0mv1.2.6 [92mLoaded[0m2025-01-14T19:04:44.809996 -
2025-01-14T19:04:44.810036 - [34m[ComfyUI-Easy-Use] web root: [0m/home/pef/ComfyUI/custom_nodes/comfyui-easy-use/web_version/v2 [92mLoaded[0m2025-01-14T19:04:44.810092 -
2025-01-14T19:04:44.817200 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-01-14T19:04:44.832631 - ----------Jake Upgrade Nodes Loaded----------2025-01-14T19:04:44.832663 -
2025-01-14T19:04:44.850642 - ------------------------------------------2025-01-14T19:04:44.850683 -
2025-01-14T19:04:44.850700 - [34mComfyroll Studio v1.76 : [92m 175 Nodes Loaded[0m2025-01-14T19:04:44.850713 -
2025-01-14T19:04:44.850728 - ------------------------------------------2025-01-14T19:04:44.850740 -
2025-01-14T19:04:44.850753 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2025-01-14T19:04:44.850766 -
2025-01-14T19:04:44.850780 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2025-01-14T19:04:44.850793 -
2025-01-14T19:04:44.850805 - ------------------------------------------2025-01-14T19:04:44.850826 -
2025-01-14T19:04:44.883096 - [Crystools [0;32mINFO[0m] Crystools version: 1.21.0
2025-01-14T19:04:44.901595 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-01-14T19:04:44.909939 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-01-14T19:04:44.923754 - [Crystools [0;32mINFO[0m] CPU: Intel(R) Core(TM) i9-7900X CPU @ 3.30GHz - Arch: x86_64 - OS: Linux 6.8.0-51-generic
2025-01-14T19:04:44.923984 - [Crystools [0;32mINFO[0m] Pynvml (Nvidia) initialized.
2025-01-14T19:04:44.924185 - [Crystools [0;32mINFO[0m] GPU/s:
2025-01-14T19:04:44.924340 - [Crystools [0;32mINFO[0m] 0) NVIDIA GeForce RTX 4080 SUPER
2025-01-14T19:04:44.924444 - [Crystools [0;32mINFO[0m] NVIDIA Driver: 550.120
2025-01-14T19:04:44.971645 - Failed to auto update `Quality of Life Suit` 2025-01-14T19:04:44.971695 -
2025-01-14T19:04:44.972940 - [33mQualityOfLifeSuit_Omar92_DIR:[0m /home/pef/ComfyUI/custom_nodes/ComfyUI-QualityOfLifeSuit_Omar922025-01-14T19:04:44.972976 -
2025-01-14T19:04:44.983425 -
2025-01-14T19:04:44.983484 - [92m[rgthree-comfy] Loaded 42 exciting nodes. 🎉[00m2025-01-14T19:04:44.983558 -
2025-01-14T19:04:44.983572 -
2025-01-14T19:04:45.030389 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-01-14T19:04:45.031225 - Traceback (most recent call last):
File "/home/pef/ComfyUI/nodes.py", line 2073, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/__init__.py", line 1, in <module>
from .py.nodes.node import SaveImageWithMetaData
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/py/__init__.py", line 3, in <module>
from .hook import pre_execute, pre_get_input_data
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/py/hook.py", line 1, in <module>
from .nodes.node import SaveImageWithMetaData
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/py/nodes/node.py", line 19, in <module>
from ..capture import Capture
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/py/capture.py", line 5, in <module>
from .defs.captures import CAPTURE_FIELD_LIST
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/py/defs/__init__.py", line 5, in <module>
from .captures import CAPTURE_FIELD_LIST
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/py/defs/captures.py", line 3, in <module>
from .formatters import (
File "/home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata/py/defs/formatters.py", line 10, in <module>
from comfy.sd2_clip import SD2Tokenizer
ModuleNotFoundError: No module named 'comfy.sd2_clip'
2025-01-14T19:04:45.031398 - Cannot import /home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata module for custom nodes: No module named 'comfy.sd2_clip'
2025-01-14T19:04:45.072048 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-01-14T19:04:45.073588 - FETCH DATA from: /home/pef/ComfyUI/user/default/ComfyUI-Manager/cache/2233941102_nodes_page_1_limit_1000.json2025-01-14T19:04:45.073619 - 2025-01-14T19:04:45.116898 - [DONE]2025-01-14T19:04:45.116944 -
2025-01-14T19:04:45.178091 - nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/cache
2025-01-14T19:04:45.178325 - FETCH DATA from: /home/pef/ComfyUI/user/default/ComfyUI-Manager/cache/1514988643_custom-node-list.json2025-01-14T19:04:45.178367 - 2025-01-14T19:04:45.182054 -
Import times for custom nodes:
2025-01-14T19:04:45.187149 - [DONE]2025-01-14T19:04:45.187295 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/websocket_image_save.py
2025-01-14T19:04:45.187332 -
2025-01-14T19:04:45.194629 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/teacachehunyuanvideo
2025-01-14T19:04:45.195442 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-mxtoolkit
2025-01-14T19:04:45.195782 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/stability-ComfyUI-nodes
2025-01-14T19:04:45.195911 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-HunyuanVideoMultiLora
2025-01-14T19:04:45.195952 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/efficiency-nodes-comfyui
2025-01-14T19:04:45.195987 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfy-image-saver
2025-01-14T19:04:45.196183 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-cliption
2025-01-14T19:04:45.196225 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/cg-use-everywhere
2025-01-14T19:04:45.196259 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/Comfyui_TTP_Toolset
2025-01-14T19:04:45.196294 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyLiterals
2025-01-14T19:04:45.196328 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-logic
2025-01-14T19:04:45.196362 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-Universal-Styler
2025-01-14T19:04:45.196395 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-image-saver
2025-01-14T19:04:45.196429 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui_controlnet_aux
2025-01-14T19:04:45.196461 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-GGUF
2025-01-14T19:04:45.196494 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-detail-daemon
2025-01-14T19:04:45.196526 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-wd14-tagger
2025-01-14T19:04:45.196558 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUi_PromptStylers
2025-01-14T19:04:45.196591 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI_JPS-Nodes
2025-01-14T19:04:45.196623 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-inpaint-nodes
2025-01-14T19:04:45.196655 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI_Noise
2025-01-14T19:04:45.196699 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-lora-auto-trigger-words
2025-01-14T19:04:45.196733 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-various
2025-01-14T19:04:45.196766 - 0.0 seconds (IMPORT FAILED): /home/pef/ComfyUI/custom_nodes/comfyui_ultimatesdupscale
2025-01-14T19:04:45.196809 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui_ipadapter_plus
2025-01-14T19:04:45.196843 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-QualityOfLifeSuit_Omar92
2025-01-14T19:04:45.196875 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-custom-scripts
2025-01-14T19:04:45.196908 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/teacache
2025-01-14T19:04:45.196941 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyMath
2025-01-14T19:04:45.196993 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/mikey_nodes
2025-01-14T19:04:45.197149 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-frame-interpolation
2025-01-14T19:04:45.197445 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui_essentials
2025-01-14T19:04:45.197531 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-kjnodes
2025-01-14T19:04:45.197602 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-advanced-controlnet
2025-01-14T19:04:45.197785 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/rgthree-comfy
2025-01-14T19:04:45.197837 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-dynamicprompts
2025-01-14T19:04:45.197898 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/cg-image-picker
2025-01-14T19:04:45.197945 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-JakeUpgrade
2025-01-14T19:04:45.197991 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-dream-project
2025-01-14T19:04:45.198037 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
2025-01-14T19:04:45.198084 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-RvTools
2025-01-14T19:04:45.198130 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-animatediff-evolved
2025-01-14T19:04:45.198175 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui_ttp_toolset
2025-01-14T19:04:45.198220 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale
2025-01-14T19:04:45.198265 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-videohelpersuite
2025-01-14T19:04:45.198310 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-impact-pack
2025-01-14T19:04:45.198354 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/wlsh_nodes
2025-01-14T19:04:45.198399 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-MediaMixer
2025-01-14T19:04:45.198444 - 0.0 seconds (IMPORT FAILED): /home/pef/ComfyUI/custom_nodes/comfyui-saveimagewithmetadata
2025-01-14T19:04:45.198494 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui_layerstyle
2025-01-14T19:04:45.198538 - 0.0 seconds: /home/pef/ComfyUI/custom_nodes/comfyui_segment_anything
2025-01-14T19:04:45.198583 - 0.1 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-fluxtrainer
2025-01-14T19:04:45.198627 - 0.1 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-crystools
2025-01-14T19:04:45.198680 - 0.1 seconds: /home/pef/ComfyUI/custom_nodes/derfuu_comfyui_moddednodes
2025-01-14T19:04:45.198728 - 0.1 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-manager
2025-01-14T19:04:45.198772 - 0.1 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-easy-use
2025-01-14T19:04:45.198816 - 0.1 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-YOLO
2025-01-14T19:04:45.198860 - 0.1 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-ollama
2025-01-14T19:04:45.198904 - 0.2 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-cogvideoxwrapper
2025-01-14T19:04:45.198947 - 0.2 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-ccsr
2025-01-14T19:04:45.198991 - 0.2 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-art-venture
2025-01-14T19:04:45.199036 - 0.4 seconds: /home/pef/ComfyUI/custom_nodes/comfy-mtb
2025-01-14T19:04:45.199081 - 0.8 seconds: /home/pef/ComfyUI/custom_nodes/pr-was-node-suite-comfyui-47064894
2025-01-14T19:04:45.199126 - 1.0 seconds: /home/pef/ComfyUI/custom_nodes/ComfyUI-PyramidFlowWrapper
2025-01-14T19:04:45.199169 - 1.4 seconds: /home/pef/ComfyUI/custom_nodes/comfyui-mixlab-nodes
2025-01-14T19:04:45.199214 -
2025-01-14T19:04:45.211551 - Starting server
2025-01-14T19:04:45.211966 - To see the GUI go to: http://127.0.0.1:8188
2025-01-14T19:04:56.341583 - got prompt
2025-01-14T19:04:56.490408 - !!! Exception during processing !!! No module named 'ComfyUI-CCSR'
2025-01-14T19:04:56.491909 - Traceback (most recent call last):
File "/home/pef/ComfyUI/execution.py", line 328, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/execution.py", line 203, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/pef/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/nodes.py", line 177, in load_ccsr_checkpoint
self.model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/utils/common.py", line 18, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/utils/common.py", line 12, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/Python-3.11.11/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'ComfyUI-CCSR'
2025-01-14T19:04:56.492899 - Prompt executed in 0.15 seconds
2025-01-14T19:05:11.046981 - got prompt
2025-01-14T19:05:11.085756 - !!! Exception during processing !!! No module named 'ComfyUI-CCSR'
(identical traceback repeated)
2025-01-14T19:05:11.089410 - Prompt executed in 0.04 seconds
2025-01-14T19:05:16.556613 - got prompt
2025-01-14T19:05:16.584832 - !!! Exception during processing !!! No module named 'ComfyUI-CCSR'
(identical traceback repeated)
2025-01-14T19:05:16.587012 - Prompt executed in 0.03 seconds
2025-01-14T19:05:41.258881 - got prompt
2025-01-14T19:05:41.303843 - !!! Exception during processing !!! No module named 'ComfyUI-CCSR'
(identical traceback repeated)
2025-01-14T19:05:41.307551 - Prompt executed in 0.04 seconds
2025-01-14T19:05:51.469600 - got prompt
2025-01-14T19:05:51.504077 - !!! Exception during processing !!! No module named 'ComfyUI-CCSR'
2025-01-14T19:05:51.505745 - Traceback (most recent call last):
File "/home/pef/ComfyUI/execution.py", line 328, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/execution.py", line 203, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/pef/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/nodes.py", line 177, in load_ccsr_checkpoint
self.model = instantiate_from_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/utils/common.py", line 18, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/ComfyUI/custom_nodes/comfyui-ccsr/utils/common.py", line 12, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/pef/Python-3.11.11/lib/python3.11/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1126, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'ComfyUI-CCSR'
2025-01-14T19:05:51.507246 - Prompt executed in 0.03 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":5,"last_link_id":5,"nodes":[{"id":5,"type":"Image Comparer (rgthree)","pos":[390,1530],"size":[330,78],"flags":{},"order":3,"mode":0,"inputs":[{"name":"image_a","type":"IMAGE","link":3,"dir":3},{"name":"image_b","type":"IMAGE","link":4,"dir":3}],"outputs":[],"properties":{"comparer_mode":"Slide"},"widgets_values":[[]]},{"id":1,"type":"LoadImage","pos":[-390,1410],"size":[330,314],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[2,3],"slot_index":0},{"name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["PEF__00077_.png","image"]},{"id":3,"type":"CCSR_Upscale","pos":[0,1290],"size":[330,390],"flags":{},"order":2,"mode":0,"inputs":[{"name":"ccsr_model","type":"CCSRMODEL","link":1},{"name":"image","type":"IMAGE","link":2}],"outputs":[{"name":"upscaled_image","type":"IMAGE","links":[4,5],"slot_index":0}],"properties":{"Node name for S&R":"CCSR_Upscale"},"widgets_values":["lanczos",4,45,0.6667,0.3333,"ccsr_tiled_vae_gaussian_weights",512,256,1024,1024,"adain",false,673343119022429,"randomize"]},{"id":4,"type":"SaveImage","pos":[390,1290],"size":[330,60],"flags":{},"order":4,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":5}],"outputs":[],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["pef-"]},{"id":2,"type":"CCSR_Model_Select","pos":[-390,1290],"size":[330,60],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"ccsr_model","type":"CCSRMODEL","links":[1],"slot_index":0}],"properties":{"Node name for S&R":"CCSR_Model_Select"},"widgets_values":["real-world_ccsr-fp16.safetensors"]}],"links":[[1,2,0,3,0,"CCSRMODEL"],[2,1,0,3,1,"IMAGE"],[3,1,0,5,0,"IMAGE"],[4,3,0,5,1,"IMAGE"],[5,3,0,4,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.4122927695244514,"offset":[468.9218651622283,-1061.8643901168634]},"node_versions":{"rgthree-comfy":"5d771b8b56a343c24a26e8cea1f0c87c3d58102f","comfy-core":"0.3.10","comfyui-ccsr":"5fb3cecf3a685e1b1274a4fb6b4dedce8343c74c"},"ue_links":[]},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
### Other
_No response_
|
closed
|
2025-01-15T01:12:25Z
|
2025-01-15T01:37:11Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6472
|
[
"User Support"
] |
dodgingspam
| 1
|
xonsh/xonsh
|
data-science
| 5,092
|
The way to disable environment variable validation
|
We have validation for `*DIRS`:
https://github.com/xonsh/xonsh/blob/076ea2583f14485bc910c9f9b74561bfc9ca0094/xonsh/environ.py#L1232
But there is a case in [Jupyter](https://stackoverflow.com/questions/74734191/how-to-set-the-environment-variable-jupyter-platform-dirs-1):
>DeprecationWarning: Jupyter is migrating its paths to use standard platformdirs
>given by the platformdirs library. To remove this warning and
>see the appropriate new directories, set the environment variable
>`JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`.
>The use of platformdirs will be the default in `jupyter_core` v6
and we need to disable validation to make it work:
```xsh
$JUPYTER_PLATFORM_DIRS=1
# TypeError: EnvPath cannot be initialized with items of type <class 'int'>
$JUPYTER_PLATFORM_DIRS
# KeyError: 'Unknown environment variable: $JUPYTER_PLATFORM_DIRS'
```
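A workaround sketch in the meantime (assumption: the `*DIRS` validator accepts strings, and a single-element `EnvPath` detypes back to the same string for child processes):
```xsh
# assign a string instead of an int so the EnvPath coercion for *DIRS
# names succeeds; the detyped value seen by subprocesses is then '1'
$JUPYTER_PLATFORM_DIRS = '1'
```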
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
|
open
|
2023-03-16T20:51:47Z
|
2023-03-25T19:59:23Z
|
https://github.com/xonsh/xonsh/issues/5092
|
[
"environ",
"priority-low"
] |
anki-code
| 0
|
pytest-dev/pytest-html
|
pytest
| 612
|
Implement better solution for DOM manipulation
|
We need a better solution/implementation for handling the DOM manipulation hooks.
They rely on low-level Python primitives, which is fragile and extremely hacky with the new way of generating the report.
See #611
|
open
|
2023-04-01T23:19:45Z
|
2023-04-01T23:19:49Z
|
https://github.com/pytest-dev/pytest-html/issues/612
|
[] |
BeyondEvil
| 0
|
keras-team/keras
|
pytorch
| 20,699
|
The default setting for aggregation='mean' for variables and optimizers is incorrect.
|
The default policy with aggregation='mean' is incorrect and should be set to 'none'. In a distributed context, the backend handles gradient reduction and variable updates, eliminating the need for intermediate aggregations. Using aggregation='mean' breaks moment estimation in the optimizer. Ref: in keras==2.15.0, the default policy was effectively equivalent to aggregation='none'.
|
closed
|
2024-12-28T16:40:30Z
|
2024-12-31T04:22:19Z
|
https://github.com/keras-team/keras/issues/20699
|
[] |
loveyou143j
| 0
|
ultralytics/yolov5
|
deep-learning
| 13,238
|
How can I save the detections Yolov5 makes when it's working with a camera source?
|
Hi! So what I want is for my Yolov5 model to save, in a folder, the detections it makes when working with a camera in real time on my Raspberry Pi 4B+.
I found this code made by glenn-jocher responding to user sanchaykasturey at the following link:
https://github.com/ultralytics/yolov5/issues/11102, and I tried to modify it to my needs, but I realised that although it takes the images and saves them, it doesn't do so because it detected any object or class: it just captures every frame without classifying anything... I tried changing the model to 'yolov5s' but then the code doesn't even run.
I'm very confused as I'm new to this, and I'm really not sure if this happens because I'm working on a Raspberry Pi or if it's a problem with the code. Could someone help me?
Here is the code I have modified slightly to test...
```python
import torch
from PIL import Image
import cv2
import datetime

CKPT_PATH = '/home/pi/yolov5/yolov5s.pt'
yolov5 = torch.hub.load('/home/pi/yolov5', 'custom', path=CKPT_PATH, source='local', force_reload=True)

vidcap = cv2.VideoCapture(0)
success, image = vidcap.read()

while success:
    # Convert image to PIL format
    img_pil = Image.fromarray(image)
    # Perform YOLOv5 inference
    results = yolov5(img_pil)
    # Check if any detections are made
    if len(results.pred) > 0:
        # Save the frame as an image
        timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
        image_name = f"image_{timestamp}.jpg"
        cv2.imwrite(image_name, image)
    # Read the next frame
    success, image = vidcap.read()

# Release the video capture
vidcap.release()
```
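For reference, a minimal fix sketch (assumption: with torch.hub YOLOv5 models, `results.pred` is a list with one detections tensor per input image, so `len(results.pred)` equals the batch size and is always 1 here, which is why every frame gets saved):
```python
# check the detections of the first (and only) image instead of the
# length of the per-image list itself
results = yolov5(img_pil)
detections = results.pred[0]  # tensor of shape (num_detections, 6)
if len(detections) > 0:
    timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    cv2.imwrite(f"image_{timestamp}.jpg", image)
```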
PS: Sorry if I didn't tag it correctly I'm new here and thought this didn't fit any tag.
|
closed
|
2024-08-01T03:47:26Z
|
2024-10-20T19:51:14Z
|
https://github.com/ultralytics/yolov5/issues/13238
|
[
"question"
] |
ComederoAVES2024
| 4
|
graphql-python/graphene-django
|
django
| 1,079
|
Make input fields not required for DRF Serializer
|
**Is your feature request related to a problem? Please describe.**
I have a model and DRF serializer.
```
class Center(models.Model):
name = models.CharField(max_length=255) # required
homepage = models.URLField(null=True, blank=True)
class CenterSerializer(serializers.ModelSerializer):
class Meta:
model = Center
fields = ['name', 'homepage']
```
I want to create an update mutation for existing object:
```
class CenterMutation(SerializerMutation):
class Meta:
serializer_class = CenterSerializer
model_operations = ('update')
lookup_field = 'id'
```
As you can see from model, `name` is required in the model and CenterMutationInput will be as follows:
```
CenterMutationInput {
name: String!
homepage: String
}
```
In update mutation it should be fully partial, meaning that we can skip name for the input. Is that achievable?
**Describe the solution you'd like**
I would like to introduce a parameter in meta to make some fields optional, like this:
```
class CenterMutation(SerializerMutation):
class Meta:
serializer_class = CenterSerializer
model_operations = ('update')
lookup_field = 'id'
optional_fields = ('name',) # mark fields optional
```
Is there any existing solution or alternative to the described approach?
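One workaround sketch on the plain DRF side (assumption: `extra_kwargs` on the serializer is honored when `SerializerMutation` builds its input type):
```python
from rest_framework import serializers

class CenterSerializer(serializers.ModelSerializer):
    class Meta:
        model = Center
        fields = ['name', 'homepage']
        # make 'name' optional on input without changing the model
        extra_kwargs = {'name': {'required': False}}
```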
|
closed
|
2020-12-27T10:52:44Z
|
2021-01-02T06:20:08Z
|
https://github.com/graphql-python/graphene-django/issues/1079
|
[
"✨enhancement"
] |
rganeyev
| 2
|
yunjey/pytorch-tutorial
|
pytorch
| 195
|
RNN input size question
|
I'm new to pytorch, Can anyone answer my question which confused me a lot:
[In RNN tutorial](https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/recurrent_neural_network/main.py)
images are reshaped into (batch, seq_len, input_size)
```
images = images.reshape(-1, sequence_length, input_size)
```
But what I learned is that the input dimensions should be (seq_len, batch, input_size)?
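(For context, a sketch of the difference; the tutorial's LSTM is, as far as I can tell, constructed with `batch_first=True`, which is what makes the `(batch, seq_len, input_size)` reshape correct:)
```python
import torch.nn as nn

# default input layout is (seq_len, batch, input_size); batch_first=True
# switches the expected input to (batch, seq_len, input_size)
rnn = nn.LSTM(input_size=28, hidden_size=128, num_layers=2, batch_first=True)
```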
|
closed
|
2019-11-20T00:39:00Z
|
2019-11-21T05:32:10Z
|
https://github.com/yunjey/pytorch-tutorial/issues/195
|
[] |
OrangeC93
| 2
|
aimhubio/aim
|
tensorflow
| 3,300
|
Switch GitHub PR workflows run on the latest python version.
|
Currently, it looks like pull requests are tested on [python 3.8](https://github.com/aimhubio/aim/blob/main/.github/workflows/pull-request.yml). Can we update this workflow to run on the latest supported python version for aim (3.12)?
|
closed
|
2025-03-05T21:31:52Z
|
2025-03-05T23:49:07Z
|
https://github.com/aimhubio/aim/issues/3300
|
[
"type / code-health"
] |
amifalk
| 1
|
huggingface/datasets
|
deep-learning
| 6,937
|
JSON loader implicitly coerces floats to integers
|
The JSON loader implicitly coerces floats to integers.
The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`.
See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446
```
=================================== FAILURES ===================================
___________________________ test_statistics_endpoint ___________________________
normal_user_public_json_dataset = 'DVUser/tmp-dataset-17170199043860'
def test_statistics_endpoint(normal_user_public_json_dataset: str) -> None:
dataset = normal_user_public_json_dataset
config, split = get_default_config_split()
statistics_response = poll_until_ready_and_assert(
relative_url=f"/statistics?dataset={dataset}&config={config}&split={split}",
check_x_revision=True,
dataset=dataset,
)
content = statistics_response.json()
assert len(content) == 3
assert sorted(content) == ["num_examples", "partial", "statistics"], statistics_response
statistics = content["statistics"]
num_examples = content["num_examples"]
partial = content["partial"]
assert isinstance(statistics, list), statistics
assert len(statistics) == 6
assert num_examples == 4
assert partial is False
string_label_column = statistics[0]
assert "column_name" in string_label_column
assert "column_statistics" in string_label_column
assert "column_type" in string_label_column
assert string_label_column["column_name"] == "col_1"
assert string_label_column["column_type"] == "string_label" # 4 unique values -> label
assert isinstance(string_label_column["column_statistics"], dict)
assert string_label_column["column_statistics"] == {
"nan_count": 0,
"nan_proportion": 0.0,
"no_label_count": 0,
"no_label_proportion": 0.0,
"n_unique": 4,
"frequencies": {
"There goes another one.": 1,
"Vader turns round and round in circles as his ship spins into space.": 1,
"We count thirty Rebel ships, Lord Vader.": 1,
"The wingman spots the pirateship coming at him and warns the Dark Lord": 1,
},
}
int_column = statistics[1]
assert "column_name" in int_column
assert "column_statistics" in int_column
assert "column_type" in int_column
assert int_column["column_name"] == "col_2"
assert int_column["column_type"] == "int"
assert isinstance(int_column["column_statistics"], dict)
assert int_column["column_statistics"] == {
"histogram": {"bin_edges": [0, 1, 2, 3, 3], "hist": [1, 1, 1, 1]},
"max": 3,
"mean": 1.5,
"median": 1.5,
"min": 0,
"nan_count": 0,
"nan_proportion": 0.0,
"std": 1.29099,
}
float_column = statistics[2]
assert "column_name" in float_column
assert "column_statistics" in float_column
assert "column_type" in float_column
assert float_column["column_name"] == "col_3"
> assert float_column["column_type"] == "float"
E AssertionError: assert 'int' == 'float'
E - float
E + int
tests/test_14_statistics.py:72: AssertionError
=========================== short test summary info ============================
FAILED tests/test_14_statistics.py::test_statistics_endpoint - AssertionError: assert 'int' == 'float'
- float
+ int
```
This bug was introduced after:
- #6914
We have reported the issue to pandas:
- https://github.com/pandas-dev/pandas/issues/58866
|
open
|
2024-05-31T08:09:12Z
|
2024-05-31T08:11:57Z
|
https://github.com/huggingface/datasets/issues/6937
|
[
"bug"
] |
albertvillanova
| 0
|
BeanieODM/beanie
|
pydantic
| 649
|
[BUG] ModuleNotFoundError: No module named 'pydantic_settings'
|
**Describe the bug**
Trying to import beanie raises a _ModuleNotFoundError_.
**To Reproduce**
```python
import beanie
```
**Expected behavior**
Successful import.
**Additional context**
I am also running pydantic v2.
Error:
`ModuleNotFoundError: No module named 'pydantic_settings'`
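A minimal fix sketch (pydantic v2 split `BaseSettings` into the separate `pydantic-settings` distribution, which is what this import chain is looking for):
```python
# after installing the package (pip install pydantic-settings),
# the import that beanie's import chain needs succeeds:
from pydantic_settings import BaseSettings

import beanie  # no more ModuleNotFoundError
```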
|
closed
|
2023-08-09T16:33:59Z
|
2023-10-01T01:50:22Z
|
https://github.com/BeanieODM/beanie/issues/649
|
[
"Stale"
] |
arynyklas
| 4
|
apify/crawlee-python
|
web-scraping
| 496
|
Throw an exception when we receive a 4xx status code
|
See https://github.com/apify/crawlee-python/issues/486 for motivation and details.
|
closed
|
2024-09-04T06:08:39Z
|
2024-09-06T13:36:13Z
|
https://github.com/apify/crawlee-python/issues/496
|
[
"bug",
"t-tooling"
] |
janbuchar
| 0
|
healthchecks/healthchecks
|
django
| 1,007
|
Return UUID in "List Existing Checks" response
|
Just a minor quality-of-life request: have you considered returning a field with just the UUID in the `GET https://healthchecks.io/api/v3/checks/` response (assuming a read-write API key is used)? It looks like the UUID can be extracted from the `ping_url` field, but parsing that is still slightly error-prone and potentially not future-proof.
Sorry if this is a duplicate; I searched existing issues and didn't see obvious other requests for this, aside from [the original issue](https://github.com/healthchecks/healthchecks/issues/370) that led to the `unique_key` field.
|
closed
|
2024-05-30T23:03:32Z
|
2024-07-18T16:31:15Z
|
https://github.com/healthchecks/healthchecks/issues/1007
|
[] |
chriselion
| 4
|
tflearn/tflearn
|
data-science
| 1,056
|
About examples/images/alexnet.py
|
in AlexNet,
the order of the first and second stages is (conv -> LRN -> maxPool),
but in your code:
```python
network = input_data(shape=[None, 227, 227, 3])
network = conv_2d(network, 96, 11, strides=4, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 256, 5, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
```
the order is (conv -> maxPool -> LRN).
Is the order of LRN and maxPool swapped?
|
open
|
2018-05-25T06:43:09Z
|
2018-05-25T07:05:23Z
|
https://github.com/tflearn/tflearn/issues/1056
|
[] |
SmokerX
| 1
|
ultralytics/ultralytics
|
python
| 18,718
|
Draw / visualize ground truth annotations for segmentation
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
As a standard step before training, I wanted to make sure that the ground truth is correct and that JSON2YOLO converted the data correctly.
I was not able to find any easy / straightforward way to do that. I had to write the following code just to draw the masks on top of the images
```python
formatter = Format(return_mask=True, mask_ratio=1)
dataset = YOLODataset(data=data_cfg, img_path=yolo_dir, task="segment")
num_annot_vis = min(dataset.ni / 10, 50)
step = round(dataset.ni / num_annot_vis)
for idx in range(0, dataset.ni, step):
img_label = formatter(dataset.get_image_and_label(idx))
mask = img_label['masks']
mask[mask != 0] = 1 # treat multiple segments as one
img = cv.imread(img_label['im_file'])
im_gpu = torch.as_tensor(img, dtype=torch.float16).permute(2, 0, 1).flip(0).contiguous() / 255.0
ann = Annotator(img)
ann.masks(mask, colors=[[255, 0, 0]], im_gpu=im_gpu)
annot_path = annot_vis_dir / Path(img_label['im_file']).name
ann.save(annot_path)
```
My question: Is there any better / easier way to visualize the segments in yolo format on top of the input images?
### Additional
_No response_
|
closed
|
2025-01-16T22:27:21Z
|
2025-01-25T00:14:03Z
|
https://github.com/ultralytics/ultralytics/issues/18718
|
[
"question",
"segment"
] |
duklin
| 8
|
encode/apistar
|
api
| 340
|
Misspelling in exception
|
apistar.exceptions.ConfigurationError: Route **wtih** name "welcome" exists more than once. Use an explicit name="..." on the Route to avoid a conflict.
apistar/components/router.py, line 27, in __init__
|
closed
|
2017-10-25T06:54:31Z
|
2018-03-13T14:25:36Z
|
https://github.com/encode/apistar/issues/340
|
[] |
vulnersCom
| 2
|
iperov/DeepFaceLab
|
deep-learning
| 956
|
Issue with dual GTX 1080TI cards
|
Hi everyone, I just bought 2 GTX 1080 Ti cards. I have installed them and they are recognized by the system, and the drivers are up to date. One is on PCIe 16x (which obviously works at 8x) and the second is on PCIe 4x (I have an Asus CrossFire motherboard).
NO SLI CONNECTION
When I launch DFL I immediately get this error and then everything crashes. The first card has about 8.5GB allocated (GPU0, which is on PCIe 16x), while the second only 4.5GB (GPU1, which is on PCIe 4x). For this reason, the program indicates that it lacks 4GB and stops responding. What's the problem? Should I try to use it in SLI? (There is a program that allows it, DifferentSliAuto.) I bought these cards on purpose; this would be a real disaster.
The really strange part is that if I run training using only one card (the 0,1 option), card0 does not work (same error), even though when using 2 GPUs it is the one with memory allocated, while card1 (which is on PCIe 4x) works fine. If I disable card1 and try again, card0 works again.
Do I have a ghost in my case? :)
Sort by yaw: 100%|######################################################################################################################################################################################| 128/128 [00:00<00:00, 1170.35it/s]
2020-11-23 23:46:26.065111: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:26.160303: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 3.60G (3865470464 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:26.253655: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 3.24G (3478923264 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:26.344427: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 2.92G (3131030784 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:26.469213: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:26.671608: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
================ Model Summary =================
== ==
== Model name: 320head_SAEHD ==
== ==
== Current iteration: 150000 ==
== ==
==-------------- Model Options ---------------==
== ==
== resolution: 320 ==
== face_type: head ==
== models_opt_on_gpu: True ==
== archi: df-ud ==
== ae_dims: 256 ==
== e_dims: 64 ==
== d_dims: 80 ==
== d_mask_dims: 32 ==
== masked_training: True ==
== eyes_prio: True ==
== uniform_yaw: True ==
== lr_dropout: y ==
== random_warp: False ==
== gan_power: 0.1 ==
== true_face_power: 0.01 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 1 ==
== write_preview_history: False ==
== target_iter: 200000 ==
== random_flip: True ==
== batch_size: 8 ==
== ==
==---------------- Running On ----------------==
== ==
== Device index: 0 ==
== Name: GeForce GTX 1080 Ti ==
== VRAM: 11.00GB ==
== Device index: 1 ==
== Name: GeForce GTX 1080 Ti ==
== VRAM: 11.00GB ==
== ==
================================================
Starting. Target iteration: 200000. Press "Enter" to stop training and save model.
2020-11-23 23:46:38.813222: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:38.914679: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:39.329281: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:39.593559: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2020-11-23 23:46:39.704984: E tensorflow/stream_executor/cuda/cuda_driver.cc:806] failed to allocate 4.00G (4294967296 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
|
closed
|
2020-11-24T00:05:24Z
|
2020-11-27T12:39:06Z
|
https://github.com/iperov/DeepFaceLab/issues/956
|
[] |
fuorissimo
| 0
|
airtai/faststream
|
asyncio
| 2,016
|
Feature: Manual commit Kafka
|
Is it possible to do a manual commit with Kafka? I can't find this feature in the docs.
|
closed
|
2025-01-02T14:12:25Z
|
2025-01-03T12:50:13Z
|
https://github.com/airtai/faststream/issues/2016
|
[
"enhancement"
] |
Renaud-ia
| 3
|
pytest-dev/pytest-cov
|
pytest
| 437
|
coverage starts too late with warning filter
|
# Summary
When trying to get coverage on something like `myproject`, while also running a warning filter on a warning within that project, say `myproject.MyWarning` coverage is started too late. For example:
```
python -m pytest -W error::myproject.MyWarning --cov=myproject tests/
```
will result in `myproject` being imported (to get `MyWarning`) before pytest-cov starts coverage. Any imports (and code that runs at import time) as a result of importing `myproject` will be missed. You do get the warning: `Coverage.py warning: Module myproject was previously imported, but not measured (module-not-measured)`
I'm not sure if anything can be done about it (since pytest is the one controlling when warning filters are set up), but at the very least it would be great to put this as a gotcha in the docs somewhere.
|
open
|
2020-09-30T06:35:16Z
|
2024-04-22T13:32:21Z
|
https://github.com/pytest-dev/pytest-cov/issues/437
|
[] |
dopplershift
| 3
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,068
|
what is the uwsgi config file for running websockets through nginx flask-socketio
|
uwsgi --http 0.0.0.0:8000 --gevent 1000 --http-websockets --master --wsgi-file socketexample/main.py --callable app
What is the uwsgi config file for this one? Do you have any suggestions?
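A direct ini translation of that command might look like this (a sketch; every key below maps 1:1 to the CLI flags above):
```ini
[uwsgi]
http = 0.0.0.0:8000
gevent = 1000
http-websockets = true
master = true
wsgi-file = socketexample/main.py
callable = app
```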
|
closed
|
2019-09-24T12:18:13Z
|
2019-09-24T12:39:55Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1068
|
[] |
Sudheertati
| 1
|
kymatio/kymatio
|
numpy
| 75
|
[BUG] ... typo that killed the unit tests...
|
https://github.com/eickenberg/scattering_transform/blob/master/scattering/scattering2d/tests/test_scattering.py#L17
Damn. It means there have been no unit tests for a while, and that it was as if we were in free fall for a while
🥇
|
closed
|
2018-10-29T21:14:38Z
|
2018-10-30T10:41:13Z
|
https://github.com/kymatio/kymatio/issues/75
|
[
"bug",
"high priority"
] |
edouardoyallon
| 1
|
deepfakes/faceswap
|
machine-learning
| 488
|
Extract with dlib-cnn doesn't work on macOS
|
**Note: Please only report bugs in this repository. Just because you are getting an error message does not automatically mean you have discovered a bug. If you don't have a lot of experience with this type of project, or if you need for setup help and other issues in using the faceswap tool, please refer to the [faceswap-playground](https://github.com/deepfakes/faceswap-playground/issues) instead. The faceswap-playground is also an excellent place to ask questions and submit feedback.**
## Expected behavior
*Describe, in some detail, what you are trying to do and what the output is that you expect from the program.*
## Actual behavior
*Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.*
## Steps to reproduce
*Describe, in some detail, the steps you tried that resulted in the behavior described above.*
## Other relevant information
- **Command lined used (if not specified in steps to reproduce)**: faceswap.py ...
- **Operating system and version:** Windows, macOS, Linux
- **Python version:** 2.7, 3.5, 3.6.4, ...
- **Faceswap version:** commit hash or version number
- **Faceswap method:** CPU/GPU
- **Other related issues:** #123, #124...
- ... (for example, installed packages that you can see with `pip freeze`)
Extract with dlib-cnn on macOS 10.13.5 failed:
```
Failed to extract from image: NoneType is not iterable
```
GPU: NVIDIA GTX 1080, 8GB
CUDA: 9.1
Python: 3.6.5
The reason is that the NVML library is not supported on macOS, so GPUStats.initialize() in [gpu_stats.py](https://github.com/deepfakes/faceswap/blob/master/lib/gpu_stats.py) fails with pynvml.NVMLError_LibraryNotFound, self.handles stays None, and get_free() raises "NoneType is not iterable" when extracting with dlib-cnn, so no face is detected.
Can we work around it?
|
closed
|
2018-09-03T02:38:06Z
|
2018-10-22T06:56:13Z
|
https://github.com/deepfakes/faceswap/issues/488
|
[
"bug"
] |
helloall1900
| 2
|
explosion/spaCy
|
deep-learning
| 12,012
|
Add an example on how to discover available extensions on a Doc/Span/Token object
|
New to spacy and I was searching the Doc API documentation ([https://spacy.io/api/doc](https://spacy.io/api/doc)) on extensions about how to "discover" what extensions would be available on a given doc/span/token object.
The use-case would have been discovering the available extensions of a doclike object returned by an (external) library function.
The options are either to go back and read the docs of the library or to have a way to list the available extensions in the shell/notebook.
After some searching and head-scratching it dawned on me that a classic python dir() actually does the trick:
(using the example from [Overwriting custom extension attributes](https://spacy.io/usage/linguistic-features#retokenization-extensions))
```
>>> print("list extensions:", dir(doc[0]._))
list extensions: ['get', 'has', 'is_musician', 'set']
```
or alternatively and a bit more involved but cleaner:
```
>>>print("list extensions:", [ext for ext in doc[0]._.__dict__['_extensions'].keys()])
list extensions: ['is_musician']
```
Would it be possible to add, to one of the examples about extensions, a one-liner on how to list the available extensions of a doc, span and/or token object, either in the API docs or in the usage docs [Overwriting custom extension attributes](https://spacy.io/usage/linguistic-features#retokenization-extensions)?
Would it make sense to have a `.list_extensions()` class method for the DOC, Span, Token classes to that effect?
Thanks
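As a stopgap, the global registries can also be read directly (a sketch relying on spaCy internals, so treat the attribute names as an assumption that may change between versions):
```python
from spacy.tokens.underscore import Underscore

# class-level dicts keyed by extension name
print(list(Underscore.token_extensions))  # e.g. ['is_musician']
print(list(Underscore.span_extensions))
print(list(Underscore.doc_extensions))
```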
|
closed
|
2022-12-21T16:08:57Z
|
2023-02-23T14:06:14Z
|
https://github.com/explosion/spaCy/issues/12012
|
[
"enhancement",
"feat / doc"
] |
mattkeanny
| 3
|
tfranzel/drf-spectacular
|
rest-api
| 598
|
Allow globally excluding certain content types
|
By default, (I believe because DRF supports the Browsable Web API) `drf-spectacular` will generate a schema with both `form-data` and `json` content types:
```yaml
requestBody:
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/REDACTED'
application/x-www-form-urlencoded:
schema:
type: array
items:
$ref: '#/components/schemas/REDACTED'
multipart/form-data:
schema:
type: array
items:
$ref: '#/components/schemas/REDACTED'
required: true
```
While the `form-data` type is used for the browseable web API, users might want to exclude it from the schema so that generated clients are simpler. Without an exclusion, multiple client code paths are generated which complicates the generated code and sometimes conflicts with one another:
- https://github.com/openapi-generators/openapi-python-client/issues/453
Adding a global setting to somehow disable certain content types could be useful to solve this issue:
```py
SPECTACULAR_SETTINGS = {
"EXCLUDE_CONTENT_TYPES": ["multipart/form-data"],
}
```
This would allow the browsable API to continue to work (by not actually changing any views) but allow the exported DRF schema to be much simpler.
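One possible shape for this, pending a built-in setting, is a postprocessing hook (sketch; `drop_form_content_types` is a hypothetical user-defined hook registered via `SPECTACULAR_SETTINGS['POSTPROCESSING_HOOKS']`, not part of the library):
```python
def drop_form_content_types(result, generator, request, public):
    # strip form content types from every operation's requestBody
    for path_item in result.get("paths", {}).values():
        for operation in path_item.values():
            if not isinstance(operation, dict):
                continue  # skip path-level keys like 'parameters'
            content = operation.get("requestBody", {}).get("content", {})
            content.pop("multipart/form-data", None)
            content.pop("application/x-www-form-urlencoded", None)
    return result
```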
|
closed
|
2021-11-09T16:18:13Z
|
2025-03-13T05:25:31Z
|
https://github.com/tfranzel/drf-spectacular/issues/598
|
[
"enhancement",
"fix confirmation pending"
] |
johnthagen
| 11
|
hpcaitech/ColossalAI
|
deep-learning
| 6,200
|
Does GeminiPlugin support DeepSeek V3?
|
How can we finetune or continue-pretrain DeepSeek V3 using GeminiPlugin?
Thanks a lot.
|
closed
|
2025-02-19T06:57:57Z
|
2025-02-20T04:01:13Z
|
https://github.com/hpcaitech/ColossalAI/issues/6200
|
[] |
ericxsun
| 1
|
man-group/arctic
|
pandas
| 663
|
Versions should include metadata for the arctic version used to create it
|
#### Arctic Version
```
1.72.0
```
#### Arctic Store
```
VersionStore
```
#### Description of problem and/or code sample that reproduces the issue
We need to have finer control and visibility over the version of arctic which was used to create a new version in VersionStore.
|
closed
|
2018-11-22T19:57:10Z
|
2018-11-27T14:55:37Z
|
https://github.com/man-group/arctic/issues/663
|
[
"enhancement",
"feature"
] |
dimosped
| 0
|
huggingface/datasets
|
pandas
| 7,147
|
IterableDataset strange deadlock
|
### Describe the bug
```
import datasets
import torch.utils.data
num_shards = 1024
def gen(shards):
for shard in shards:
if shard < 25:
yield {"shard": shard}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={"shards": list(range(num_shards))},
)
dataset = dataset.shuffle(buffer_size=1)
dataset = datasets.interleave_datasets(
[dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataset = dataset.shuffle(buffer_size=1)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=8,
)
for i, batch in enumerate(dataloader):
print(batch)
if i >= 10:
break
print()
if __name__ == "__main__":
for _ in range(100):
main()
```
### Steps to reproduce the bug
Running the script above, at some point it will freeze.
- Changing `num_shards` from 1024 to 25 avoids the issue
- Commenting out the final shuffle avoids the issue
- Commenting out the interleave_datasets call avoids the issue
As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets.
### Expected behavior
The script should not freeze.
### Environment info
- `datasets` version: 3.0.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.5
- `huggingface_hub` version: 0.24.7
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro.
|
closed
|
2024-09-12T18:59:33Z
|
2024-09-23T09:32:27Z
|
https://github.com/huggingface/datasets/issues/7147
|
[] |
jonathanasdf
| 6
|
NVIDIA/pix2pixHD
|
computer-vision
| 18
|
Perceptual Loss Issues
|
Hi: @mingyuliutw @junyanz
I have read your paper carefully, and I noticed you use a perceptual loss in the paper with hyperparameters λ=10 and weights = 1/Mi, where Mi is the number of elements of the i-th layer in VGG19. However, I checked the code you released, where the loss weights are [1.0/32, 1.0/16, 1.0/8, 1.0/4, 1.0], and I am confused by these numbers 32, 16... What are the elements you're referring to?
|
closed
|
2018-01-30T06:11:04Z
|
2018-05-20T08:16:06Z
|
https://github.com/NVIDIA/pix2pixHD/issues/18
|
[] |
iRmantou
| 3
|
lexiforest/curl_cffi
|
web-scraping
| 402
|
[BUG]ValueError: not enough values to unpack (expected 2, got 1)
|
**Describe the bug**
```
curl_cffi\requests\headers.py", line 132, in __init__
k, v = line.split(sep, maxsplit=1) # pyright: ignore
ValueError: not enough values to unpack (expected 2, got 1)
```
```
if isinstance(headers[0], (str, bytes)):
print(headers[0])
sep = ":" if isinstance(headers[0], str) else b":"
h = []
for line in headers:
k, v = line.split(sep, maxsplit=1) # pyright: ignore
h.append((k, v.strip()))
```
When accessing a certain website, one of the response header lines is `b'SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u3'`, which contains no colon, so the split raises an error.
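A defensive patch sketch for that loop (assumption: header lines without a separator, like the SSH banner above, can simply be skipped):
```python
for line in headers:
    if sep not in line:
        continue  # e.g. b'SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u3' has no colon
    k, v = line.split(sep, maxsplit=1)
    h.append((k, v.strip()))
```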
**Versions**
- OS: win11 64
- curl_cffi version [0.7.2]
|
closed
|
2024-10-04T12:19:46Z
|
2024-12-30T06:21:46Z
|
https://github.com/lexiforest/curl_cffi/issues/402
|
[
"bug",
"needs more info"
] |
wwang129
| 2
|
sktime/pytorch-forecasting
|
pandas
| 1,695
|
[MNT] `pre-commit` failing on `main`
|
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
`pre-commit` is failing on `main`, rendering it effectively unusable for other branches created from `main`.
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```shell
$ git checkout main
Already on 'main'
Your branch is up to date with 'origin/main'.
$ git pull
Already up to date.
$ git reset --hard
HEAD is now at 53a1c41 [MNT] Relax `numpy` bound to `numpy<3.0.0` (#1624)
$ pre-commit run --all-files
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
All `pre-commit` tests pass.
**Additional context**
`pre-commit` output:
```text
trim trailing whitespace.................................................Passed
fix end of files.........................................................Passed
check yaml...............................................................Passed
check python ast.........................................................Passed
flake8...................................................................Failed
- hook id: flake8
- exit code: 1
examples/ar.py:88:56: E226 missing whitespace around arithmetic operator
print(f"Number of parameters in network: {deepar.size()/1e3:.1f}k")
^
examples/nbeats.py:68:53: E226 missing whitespace around arithmetic operator
print(f"Number of parameters in network: {net.size()/1e3:.1f}k")
^
examples/stallion.py:122:53: E226 missing whitespace around arithmetic operator
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
^
pytorch_forecasting/data/timeseries.py:106:28: E226 missing whitespace around arithmetic operator
f"{na} ({na/tensor.size(0):.2%}) of {name} "
^
isort....................................................................Passed
black....................................................................Passed
nbqa-black...............................................................Passed
nbqa-isort...............................................................Passed
nbqa-flake8..............................................................Failed
- hook id: nbqa-flake8
- exit code: 1
docs/source/tutorials/stallion.ipynb:cell_7:25:53: E226 missing whitespace around arithmetic operator
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
^
docs/source/tutorials/stallion.ipynb:cell_9:29:53: E226 missing whitespace around arithmetic operator
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")
^
nbqa-check-ast...........................................................Passed
```
**Versions**
N/A
<!-- Thanks for contributing! -->
|
closed
|
2024-10-09T08:59:51Z
|
2024-10-10T10:12:04Z
|
https://github.com/sktime/pytorch-forecasting/issues/1695
|
[
"bug"
] |
ewth
| 1
|
jina-ai/clip-as-service
|
pytorch
| 270
|
Load fine-tuned pytorch model
|
Is it possible to load a model fine-tuned with [huggingface's](https://github.com/huggingface/pytorch-pretrained-BERT) pytorch implementation?
|
open
|
2019-03-13T15:46:30Z
|
2019-07-17T21:24:48Z
|
https://github.com/jina-ai/clip-as-service/issues/270
|
[] |
v1nc3nt27
| 2
|
sqlalchemy/sqlalchemy
|
sqlalchemy
| 11,250
|
Login fails when odbc_connect PWD= includes a plus sign
|
### Describe the bug
When `odbc_connect` is given a properly-formed ODBC connection string with a password that contains a plus sign, the plus sign is converted to a space and the login fails.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.1.0b1.dev0
### DBAPI (i.e. the database driver)
pyodbc
### Database Vendor and Major Version
MS SQL Server
### Python Version
3.8
### Operating system
all
### To Reproduce
```python
import pyodbc
import sqlalchemy as sa
connection_string = (
"Driver=ODBC Driver 17 for SQL Server;"
"Server=192.168.0.199;"
"Database=test;"
"UID=howie;"
"PWD=ab+cd;"
)
# ___ this works ___
cnxn = pyodbc.connect(connection_string)
crsr = cnxn.cursor()
result = crsr.execute("SELECT 'Connected.' AS foo").fetchval()
print(result)
# ___ but this fails ___
connection_url = sa.URL.create(
"mssql+pyodbc", query=dict(odbc_connect=connection_string)
)
engine = sa.create_engine(connection_url)
with engine.begin() as conn:
result = conn.exec_driver_sql("SELECT 'Connected.' AS foo").scalar()
print(result)
```
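A workaround sketch in the meantime: percent-encode the raw connection string before embedding it in the URL, so the `+` survives the plus-to-space decoding:
```python
import urllib.parse

params = urllib.parse.quote_plus(connection_string)  # '+' becomes %2B
engine = sa.create_engine(f"mssql+pyodbc:///?odbc_connect={params}")
```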
### Error
```
Traceback (most recent call last):
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 150, in __init__
self._dbapi_connection = engine.raw_connection()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 3298, in raw_connection
return self.pool.connect()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 449, in connect
return _ConnectionFairy._checkout(self)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 1263, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 712, in checkout
rec = pool._do_get()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/impl.py", line 180, in _do_get
self._dec_overflow()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/util/langhelpers.py", line 144, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/impl.py", line 177, in _do_get
return self._create_connection()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 390, in _create_connection
return _ConnectionRecord(self)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 674, in __init__
self.__connect()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 901, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/util/langhelpers.py", line 144, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 896, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/create.py", line 635, in connect
return dialect.connect(*cargs, **cparams)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/default.py", line 629, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
pyodbc.InterfaceError: ('28000', "[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'howie'. (18456) (SQLDriverConnect)")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/gord/git/sqla-gerrit/gord_test/so78294946.py", line 26, in <module>
with engine.begin() as conn:
File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 3238, in begin
with self.connect() as conn:
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 3274, in connect
return self._connection_cls(self)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 152, in __init__
Connection._handle_dbapi_exception_noconnection(
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 2438, in _handle_dbapi_exception_noconnection
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 150, in __init__
self._dbapi_connection = engine.raw_connection()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/base.py", line 3298, in raw_connection
return self.pool.connect()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 449, in connect
return _ConnectionFairy._checkout(self)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 1263, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 712, in checkout
rec = pool._do_get()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/impl.py", line 180, in _do_get
self._dec_overflow()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/util/langhelpers.py", line 144, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/impl.py", line 177, in _do_get
return self._create_connection()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 390, in _create_connection
return _ConnectionRecord(self)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 674, in __init__
self.__connect()
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 901, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/util/langhelpers.py", line 144, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/pool/base.py", line 896, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/create.py", line 635, in connect
return dialect.connect(*cargs, **cparams)
File "/home/gord/git/sqla-gerrit/lib/sqlalchemy/engine/default.py", line 629, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError) ('28000', "[28000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Login failed for user 'howie'. (18456) (SQLDriverConnect)")
(Background on this error at: https://sqlalche.me/e/21/rvf5)
```
### Additional context
_No response_
|
closed
|
2024-04-09T17:29:43Z
|
2024-05-30T17:47:47Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/11250
|
[
"bug",
"SQL Server",
"quagmire"
] |
gordthompson
| 9
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 89
|
Runtime error: Error running the bot: Error parsing YAML file.
|
Good afternoon! I went through all the steps and have matched my files to your example files and I am getting an error:
Runtime error: Error running the bot: Error parsing YAML file.
Refer to the configuration and troubleshooting guide: https://github.com/feder-cr/LinkedIn_AIHawk_automatic_job_application/blob/main/readme.md#configuration
when I run the python main.py command. I have run the YAMLs through a validator and did not get any useful feedback there. I have also gone line by line through the text and it matches, so I am not sure what the issue is.
|
closed
|
2024-08-27T17:22:05Z
|
2024-11-17T00:18:08Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/89
|
[
"bug"
] |
Securitydude3245
| 12
|
Lightning-AI/LitServe
|
rest-api
| 72
|
Add OpenAI as a pluggable spec
|
## 🚀 Feature
We should enable serving a model through a spec, without having to implement it manually in `decode_request` and `encode_response`. The spec (could be more than one) would:
- expose a route
- implement specific ways of decoding requests and encoding responses
- require the API to expose certain kinds of information (e.g. token used)
in a way that is pluggable at the LitServer level (`spec=OpenAISpec`) and independent from the API implementation itself.
### Motivation
We want to make it seamless for users to expose a model using one or more standard specs.
### Pitch
I define a `LitAPI` subclass, call `LitServer(api, spec=OpenAISpec, ...)`, and I will get a `v1/chat/completions` endpoint that behaves like an OpenAI-compatible endpoint.
### Alternatives
We subclass `LitServer` and `LitAPI`, but this wouldn't compose cleanly down the road with other pieces we want to factor out (e.g. kvcache management).
|
closed
|
2024-04-30T19:05:20Z
|
2024-05-23T15:45:03Z
|
https://github.com/Lightning-AI/LitServe/issues/72
|
[
"enhancement",
"help wanted"
] |
lantiga
| 1
|
nltk/nltk
|
nlp
| 2,805
|
proxy error
|
I used to get my output through nltk.set_proxy('http://user:password@ip:port') for about a year, but now this method gives an HTTP error, even when using the form given by the NLTK library, which is ('http://ip:port', user, password)

|
closed
|
2021-09-14T12:30:15Z
|
2021-09-26T22:38:39Z
|
https://github.com/nltk/nltk/issues/2805
|
[
"installation"
] |
mannan291
| 2
|
sinaptik-ai/pandas-ai
|
data-science
| 856
|
parquet file
|
### 🚀 The feature
When I checked the framework, I could not see any parquet file example, so I added one via a pull request.
### Motivation, pitch
It provides more examples of the kind users want to see on your examples page.
### Alternatives
_No response_
### Additional context
_No response_
|
closed
|
2024-01-08T10:06:11Z
|
2024-06-01T00:21:53Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/856
|
[] |
tanersekmen
| 0
|
developmentseed/lonboard
|
jupyter
| 661
|
Expose `highlightColor`
|
https://deck.gl/docs/api-reference/core/layer#highlightcolor
|
closed
|
2024-10-02T14:38:02Z
|
2024-10-03T18:01:49Z
|
https://github.com/developmentseed/lonboard/issues/661
|
[] |
kylebarron
| 0
|
nerfstudio-project/nerfstudio
|
computer-vision
| 3,260
|
ns-render BUG
|
**Describe the bug**
When I use the following command
`ns-render dataset --load-config ns_outputs/ori_splatfacto_do_dencify_distorted/splatfacto/2024-06-18_113728/config.yml --image-format png --split train+test --rendered-output-names rgb gt-rgb`
for rendering images and comparing with GT.
What I found was that there is a growing gap between the rendered views along the trajectory and the GT views.

The same goes for the `ns-render camera-path` command.
|
open
|
2024-06-26T09:14:24Z
|
2024-06-28T09:00:52Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/3260
|
[] |
JuliusQv
| 1
|
tflearn/tflearn
|
data-science
| 1,053
|
importing tensorflow.contrib produces a warning (about the retry module)
|
Describe the problem
Using tensorflow.contrib produces a warning. The warning is
WARNING:tensorflow:From C:\Users\xxxx\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
|
open
|
2018-05-19T13:01:36Z
|
2018-12-09T12:56:51Z
|
https://github.com/tflearn/tflearn/issues/1053
|
[] |
chanwing
| 2
|
sherlock-project/sherlock
|
python
| 2,322
|
Requesting support for: Velog
|
### Site URL
https://velog.io/
### Additional info
Velog is a popular development blogging platform in Korea.
Many Korean developers, especially junior developers, frequently use Velog to share their technical knowledge and experiences.
Sherlock can detect if a username exists on Velog by using:
1. URL Pattern: `https://velog.io/@{username}/posts` to check for profiles.
- Example: When querying a non-existent user:
```bash
curl -I "https://velog.io/@asdsgsthsd/posts"
```
Response:
```
HTTP/2 404
date: Wed, 09 Oct 2024 08:56:33 GMT
content-type: text/html; charset=utf-8
access-control-allow-credentials: true
access-control-allow-methods: GET,DELETE,PATCH,POST,PUT
access-control-allow-origin: *
vary: RSC, Next-Router-State-Tree, Next-Router-Prefetch, Accept-Encoding
link: <https://assets.velcdn.com/_next/static/media/498cd24af98ee1c5-s.p.woff2>; rel=preload; as="font"; crossorigin=""; type="font/woff2", <https://assets.velcdn.com/_next/static/media/8f32c48a86b1398a-s.p.woff2>; rel=preload; as="font"; crossorigin=""; type="font/woff2", <https://assets.velcdn.com/_next/static/media/e0c8a07f5438bca2-s.p.woff2>; rel=preload; as="font"; crossorigin=""; type="font/woff2"
x-powered-by: Next.js
cache-control: private, no-cache, no-store, max-age=0, must-revalidate
```
- Example: When querying an existing user:
```bash
curl -I "https://velog.io/@qlgks1/posts"
```
Response:
```
HTTP/2 200
date: Wed, 09 Oct 2024 08:59:17 GMT
content-type: text/html; charset=utf-8
access-control-allow-credentials: true
access-control-allow-methods: GET,DELETE,PATCH,POST,PUT
access-control-allow-origin: *
vary: RSC, Next-Router-State-Tree, Next-Router-Prefetch, Accept-Encoding
link: <https://assets.velcdn.com/_next/static/media/498cd24af98ee1c5-s.p.woff2>; rel=preload; as="font"; crossorigin=""; type="font/woff2", <https://assets.velcdn.com/_next/static/media/8f32c48a86b1398a-s.p.woff2>; rel=preload; as="font"; crossorigin=""; type="font/woff2", <https://assets.velcdn.com/_next/static/media/e0c8a07f5438bca2-s.p.woff2>; rel=preload; as="font"; crossorigin=""; type="font/woff2"
x-powered-by: Next.js
cache-control: private, no-cache, no-store, max-age=0, must-revalidate
```
2. As a result, Status Code 404 indicates that the profile does not exist and returns a "Not Found" error.
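A minimal detection sketch based on the behavior above (`velog_username_exists` is a hypothetical helper name; it only relies on the documented 200/404 distinction):
```python
import requests

def velog_username_exists(username: str) -> bool:
    resp = requests.head(f"https://velog.io/@{username}/posts", allow_redirects=True)
    return resp.status_code == 200
```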
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
closed
|
2024-10-09T09:02:17Z
|
2024-11-01T08:51:07Z
|
https://github.com/sherlock-project/sherlock/issues/2322
|
[
"site support request"
] |
Nuung
| 0
|
swisskyrepo/GraphQLmap
|
graphql
| 38
|
TypeError: can only concatenate str (not "NoneType") to str
|
```
_____ _ ____ _
/ ____| | | / __ \| |
| | __ _ __ __ _ _ __ | |__ | | | | | _ __ ___ __ _ _ __
| | |_ | '__/ _` | '_ \| '_ \| | | | | | '_ ` _ \ / _` | '_ \
| |__| | | | (_| | |_) | | | | |__| | |____| | | | | | (_| | |_) |
\_____|_| \__,_| .__/|_| |_|\___\_\______|_| |_| |_|\__,_| .__/
| | | |
|_| |_|
Author: @pentest_swissky Version: 1.0
GraphQLmap > dump_via_fragment
============= [SCHEMA] ===============
e.g: name[Type]: arg (Type!)
00: Query
getProperty[Property]: entity (String!), entity_id (String!), prop_keys (None!), Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/bin/graphqlmap", line 4, in <module>
__import__('pkg_resources').run_script('graphqlmap==0.0.1', 'graphqlmap')
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pkg_resources/__init__.py", line 651, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pkg_resources/__init__.py", line 1455, in run_script
exec(script_code, namespace, namespace)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/graphqlmap-0.0.1-py3.9.egg/EGG-INFO/scripts/graphqlmap", line 81, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/graphqlmap-0.0.1-py3.9.egg/EGG-INFO/scripts/graphqlmap", line 59, in __init__
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/graphqlmap-0.0.1-py3.9.egg/graphqlmap/attacks.py", line 74, in dump_schema
TypeError: can only concatenate str (not "NoneType") to str
```
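For context, a minimal sketch of the kind of guard that avoids this error (illustrative only; the actual GraphQLmap code and fix may differ):
```python
from typing import Optional

# Illustrative sketch, not the actual GraphQLmap source. The traceback points
# at a string concatenation where a GraphQL type name is None; coercing None
# to a placeholder before concatenating avoids the TypeError.
def format_arg(arg_name: str, type_name: Optional[str]) -> str:
    return f"{arg_name} ({type_name or 'Unknown'}!)"

print(format_arg("prop_keys", None))  # prints: prop_keys (Unknown!)
```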
|
open
|
2022-01-18T22:12:18Z
|
2022-01-20T22:49:46Z
|
https://github.com/swisskyrepo/GraphQLmap/issues/38
|
[] |
chevyphillip
| 1
|
aeon-toolkit/aeon
|
scikit-learn
| 2,619
|
[DOC] Cell Output Error in Interval Based Classification
|
### Describe the issue linked to the documentation


### Suggest a potential alternative/fix
_No response_
|
closed
|
2025-03-14T15:32:22Z
|
2025-03-16T14:40:59Z
|
https://github.com/aeon-toolkit/aeon/issues/2619
|
[
"documentation"
] |
kavya-r30
| 2
|
huggingface/transformers
|
tensorflow
| 36,550
|
size mismatch for lm_head when fintune QWEN2.5
|
### System Info
- `transformers` version: 4.49.0
- Platform: Linux-6.6.0-72.0.0.64.oe2403.x86_64-x86_64-with-glibc2.38
- Python version: 3.10.16
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.2.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA L40
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
I fine-tune Qwen2.5 using the following code:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
dataset = load_dataset("trl-lib/Capybara", split="train")
dataset = dataset.select(range(500))
MODEL_ID = 'Qwen/Qwen2.5-0.5B'
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules="all-linear",
modules_to_save=["lm_head", "embed_token"],
task_type="CAUSAL_LM",
)
args = SFTConfig(
output_dir="Qwen2.5-0.5B-SFT-Capybara", # directory to save and repository id
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=4, # batch size per device during training
gradient_accumulation_steps=4, # number of steps before performing a backward/update pass
gradient_checkpointing=True, # use gradient checkpointing to save memory
optim="adamw_torch_fused", # use fused adamw optimizer
logging_steps=10, # log every 10 steps
save_strategy="epoch", # save checkpoint every epoch
bf16=True, # use bfloat16 precision
tf32=True, # use tf32 precision
learning_rate=2e-4, # learning rate, based on QLoRA paper
max_grad_norm=0.3, # max gradient norm based on QLoRA paper
warmup_ratio=0.03, # warmup ratio based on QLoRA paper
lr_scheduler_type="constant", # use constant learning rate scheduler
push_to_hub=False, # push model to hub
# report_to="tensorboard", # report metrics to tensorboard
)
trainer = SFTTrainer(
MODEL_ID,
train_dataset=dataset,
args=args,
peft_config=peft_config
)
trainer.train()
print('end')
```
and I use the following code for inference:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
# peft_model_id = args.output_dir
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
peft_model_id,
device_map="auto",
torch_dtype=torch.float16
)
prompt = "3的5倍是多少"
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=200
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
print(1)
```
an error occurs when loading the model with AutoPeftModelForCausalLM:
```
Sliding Window Attention is enabled but not implemented for `sdpa`; unexpected results may be encountered.
Traceback (most recent call last):
File "/home/chenjq/.pycharm_helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/chenjq/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/chenjq/pythonWork/nlp/test14.py", line 11, in <module>
model = AutoPeftModelForCausalLM.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/auto.py", line 130, in from_pretrained
return cls._target_peft_class.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 581, in from_pretrained
load_result = model.load_adapter(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 1239, in load_adapter
load_result = set_peft_model_state_dict(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 451, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False)
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
size mismatch for base_model.model.lm_head.modules_to_save.default.weight: copying a param with shape torch.Size([151936, 896]) from checkpoint, the shape in current model is torch.Size([151665, 896]).
Process finished with exit code 1
```
### Expected behavior
Expected the model to predict normally.
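One possible workaround, a sketch under the assumption that the checkpoint simply has a larger vocabulary (151936 rows per the error) than the freshly loaded base model (151665):
```python
# Hedged sketch, not a confirmed fix: load the base model, resize its
# embeddings to the checkpoint's vocabulary size, then attach the adapter.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B", torch_dtype=torch.float16, device_map="auto"
)
base.resize_token_embeddings(151936)  # match the saved lm_head/embedding shape
model = PeftModel.from_pretrained(base, peft_model_id)  # peft_model_id as above
```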
|
closed
|
2025-03-05T03:54:51Z
|
2025-03-10T02:50:17Z
|
https://github.com/huggingface/transformers/issues/36550
|
[
"bug"
] |
minmie
| 8
|
smarie/python-pytest-cases
|
pytest
| 195
|
filter/skipif combination of cases from multiple parametrize_with_cases decorators
|
Hi, loving the library! I have an instance where I create a large number of test cases based on a combination of multiple `parametrize_with_cases` decorators; a simple example follows:
in test_package_cases.py:
```python
def encrypted_encrypted():
"""Return a key for a encrypted test."""
return KEY
def encrypted_unencrypted():
"""Return None for a unencrypted test."""
return None
def proto_tcp():
"""Return TCP protocol."""
return Protocol.TCP
def proto_udp():
"""Return UDP protocol."""
return Protocol.UDP
```
and in test_package.py
```python
@parametrize_with_cases("encrypted", prefix="encrypted_")
@parametrize_with_cases("protocol", prefix="proto_")
def test_encrypted_over_protocols(
        self,
        encrypted,
        protocol):
    ...  # <test code>
```
This is a simplified example of what I want to achieve (I have one test with 5 `parametrize_with_cases` decorators, producing a total of 80 tests, which I love!). Now let's say I want to exclude the TCP + encrypted case. I would have expected either a filter or something like `pytest.mark.skipif(protocol == TCP and encrypted is not None)` to do this, but neither seems to work, and the filter docs suggest a filter only applies to a single parametrize call, so you can't use both variables there.
The workaround right now is to skip in the code (as sketched below), but I don't really like that approach!
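For reference, the in-code skip workaround looks roughly like this (a sketch reusing the case names above):
```python
import pytest

@parametrize_with_cases("encrypted", prefix="encrypted_")
@parametrize_with_cases("protocol", prefix="proto_")
def test_encrypted_over_protocols(encrypted, protocol):
    if protocol == Protocol.TCP and encrypted is not None:
        pytest.skip("TCP + encrypted combination is excluded")
    ...  # <test code>
```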
|
closed
|
2021-03-31T12:09:11Z
|
2021-05-18T13:09:04Z
|
https://github.com/smarie/python-pytest-cases/issues/195
|
[
"has_workaround"
] |
eavanvalkenburg
| 8
|
microsoft/nni
|
deep-learning
| 5,457
|
Failed to create NNI experiment, error: AttributeError: 'dict' object has no attribute 'name'
|
**Describe the issue**:
When I created an NNI experiment using the official Hyperparameter Optimization example, I received the following error:
```
Traceback (most recent call last):
File "/home/sunze/anaconda3/bin/nnictl", line 8, in <module>
sys.exit(parse_args())
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/tools/nnictl/nnictl.py", line 497, in parse_args
args.func(args)
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/tools/nnictl/launcher.py", line 91, in create_experiment
exp.start(port, debug, RunMode.Detach)
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/experiment.py", line 135, in start
self._start_impl(port, debug, run_mode, None, [])
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/experiment.py", line 94, in _start_impl
config = self.config.canonical_copy()
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/config/base.py", line 166, in canonical_copy
canon._canonicalize([])
File "/home/sunze/anaconda3/lib/python3.9/site-packages/nni/experiment/config/experiment_config.py", line 124, in _canonicalize
if algo is not None and algo.name == '_none_': # type: ignore
AttributeError: 'dict' object has no attribute 'name'
```
**The NNI source code at the failing location is as follows:**
```
def _canonicalize(self, _parents):
if self.log_level is None:
self.log_level = 'debug' if self.debug else 'info'
self.tuner_gpu_indices = utils.canonical_gpu_indices(self.tuner_gpu_indices)
for algo_type in ['tuner', 'assessor', 'advisor']:
algo = getattr(self, algo_type)
# TODO: need a more universal solution for similar problems
if isinstance(algo, dict):
# the base class should have converted it to `_AlgorithmConfig` if feasible
# it is a `dict` here means an exception was raised during the convertion attempt
# we do the convertion again to show user the error message
_AlgorithmConfig(**algo) # pylint: disable=not-a-mapping
if algo is not None and algo.name == '_none_': # type: ignore
setattr(self, algo_type, None)
```
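A hedged reading of that snippet: `_AlgorithmConfig(**algo)` is only called to surface a conversion error, and its return value is discarded. If the conversion succeeds silently, `algo` is still a `dict` when `algo.name` is dereferenced, which is exactly the AttributeError above:
```python
# An illustrative reading of the snippet above, not a patch:
if isinstance(algo, dict):
    _AlgorithmConfig(**algo)  # return value is discarded; algo stays a dict
if algo is not None and algo.name == '_none_':  # dict has no .name attribute
    ...
```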
**I found that `algo` in the code should be of type `_AlgorithmConfig`, but at runtime it is a `dict`. I think this is the root of the problem, but I don't know how to solve it. Another Ubuntu server of mine runs normally with the same environment, yet this machine reports the error above. What should I do in this situation? I can't think of any other solutions. Thank you very much for your answer!**
**Environment**:
- NNI version: 2.9
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Python version: 3.9.13
- PyTorch version: 1.12.0
- Is conda/virtualenv/venv used?: conda used
- Is running in Docker?: No
**Configuration**:
```
experimentName: demo
searchSpace:
features:
_type: choice
_value: [ 128, 256, 512, 1024 ]
lr:
_type: loguniform
_value: [ 0.0001, 0.1 ]
momentum:
_type: uniform
_value: [ 0, 1 ]
trialCommand: python model.py
trialCodeDirectory: .
trialGpuNumber: 1
trialConcurrency: 4
maxTrialNumber: 80
tuner:
name: TPE
classArgs:
optimize_mode: maximize
trainingService:
platform: local
useActiveGpu: True
maxTrialNumberPerGpu: 1
```
**The very simple code of model.py**:
```
import nni
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
params = {
'features': 512,
'lr': 0.001,
'momentum': 0,
}
optimized_params = nni.get_next_parameter()
params.update(optimized_params)
training_data = datasets.FashionMNIST(root="data", train=True, download=True, transform=ToTensor())
test_data = datasets.FashionMNIST(root="data", train=False, download=True, transform=ToTensor())
batch_size = 64
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, params['features']),
nn.ReLU(),
nn.Linear(params['features'], params['features']),
nn.ReLU(),
nn.Linear(params['features'], 10)
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
model = NeuralNetwork().to(device)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=params['lr'], momentum=params['momentum'])
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
pred = model(X)
loss = loss_fn(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
return correct
epochs = 5
for t in range(epochs):
print(f"Epoch {t+1}\n-------------------------------")
train(train_dataloader, model, loss_fn, optimizer)
accuracy = test(test_dataloader, model, loss_fn)
nni.report_intermediate_result(accuracy)
nni.report_final_result(accuracy)
```
|
closed
|
2023-03-18T12:38:13Z
|
2023-03-21T04:48:49Z
|
https://github.com/microsoft/nni/issues/5457
|
[] |
sunze992
| 4
|
koxudaxi/datamodel-code-generator
|
pydantic
| 2,351
|
Add option to disable Field constraints
|
**Is your feature request related to a problem? Please describe.**
I'm in the unfortunate position of having to consume an API that has incorrect constraints in its OpenAPI documentation. Up until now, I have been manually removing the Field constraints, but I was wondering if this could be a useful CLI flag for anyone else.
**Describe the solution you'd like**
Some sort of `--no-constraints` CLI flag, for example as shown below.
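A hypothetical invocation (the flag name is the proposal, not an existing option):
```bash
# --no-constraints is the proposed flag, not an existing option; the other
# flags are existing datamodel-code-generator options.
datamodel-codegen --input openapi.yaml --input-file-type openapi \
  --output models.py --no-constraints
```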
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
|
open
|
2025-03-20T19:44:36Z
|
2025-03-21T15:53:23Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/2351
|
[] |
harrymconner
| 1
|
pytest-dev/pytest-xdist
|
pytest
| 1,004
|
How can the master node get info from worker nodes after pytest_collection_modifyitems is called?
|
Hello everyone!
I am using the TestRail plugin for reporting.
I need to know the list of test cases collected when `pytest_collection_modifyitems` is called on each node.
Does anyone know how I can get this?
Thanks.
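One pattern that may help, a sketch assuming the documented `workeroutput` mechanism: each worker records its collected test ids in `config.workeroutput`, and the controller reads them in the `pytest_testnodedown` hook:
```python
# conftest.py, sketch only
def pytest_collection_modifyitems(config, items):
    if hasattr(config, "workerinput"):  # we are on a worker node
        config.workeroutput["collected_ids"] = [item.nodeid for item in items]

def pytest_testnodedown(node, error):
    # Runs on the controller when a worker shuts down; workeroutput is the
    # dict the worker filled in (absent if the worker crashed).
    collected = getattr(node, "workeroutput", {}).get("collected_ids", [])
    print(f"worker {node.gateway.id} collected {len(collected)} tests")
```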
|
open
|
2024-01-10T14:11:28Z
|
2024-01-12T12:39:19Z
|
https://github.com/pytest-dev/pytest-xdist/issues/1004
|
[] |
voloxastik
| 8
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 1,105
|
BayesSearchCV multimetric scoring inconsistency: KeyError: 'mean_test_score'
|
I am working on a multilabel classification and for now I have primarily been relying on `RandomizedSearchCV` from `scikit-learn` to perform hyperparameter optimization. I now started experimenting with `BayesSearchCV` and ran into a potential bug when using multi-metric scoring, combined with the `refit` argument.
I created a full reproducible toy example below.
**Imports, data generation, pipeline:**
```
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.naive_bayes import MultinomialNB
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import RandomizedSearchCV
from skopt.searchcv import BayesSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
X, Y = make_multilabel_classification(
n_samples=10000,
n_features=20,
n_classes=10,
n_labels=3
)
pipe = Pipeline(
steps = [
('scaler', MinMaxScaler()),
('model', MultiOutputClassifier(MultinomialNB()))
]
)
```
**Step 1: Single metric with `refit = True` : works!**
```
search = BayesSearchCV(
estimator = pipe,
search_spaces = {'model__estimator__alpha': np.linspace(0.01,1,50)},
scoring = 'precision_macro',
refit = True,
cv = 5
).fit(X, Y)
```
**Step 2: Single metric with `refit = 'precision_macro'`: works!**
```
search = BayesSearchCV(
estimator = pipe,
search_spaces = {'model__estimator__alpha': np.linspace(0.01,1,50)},
scoring = 'precision_macro',
refit = 'precision_macro',
cv = 5
).fit(X, Y)
```
**Step 3: Multiple metrics with `refit = 'precision_macro'`: fails!**
```
search = BayesSearchCV(
estimator = pipe,
search_spaces = {'model__estimator__alpha': np.linspace(0.01,1,50)},
scoring = ['precision_macro', 'recall_macro', 'accuracy'],
refit = 'precision_macro',
cv = 5
).fit(X, Y)
```
(Note: adding `return_train_score=True` to `BayesSearchCV()` didn't make a difference.)
The error message is:
```
File "multioutput.py", line 46, in <module>
).fit(X, Y)
File ".venv\lib\site-packages\skopt\searchcv.py", line 466, in fit
super().fit(X=X, y=y, groups=groups, **fit_params)
File ".venv\lib\site-packages\sklearn\model_selection\_search.py", line 891, in fit
self._run_search(evaluate_candidates)
File ".venv\lib\site-packages\skopt\searchcv.py", line 514, in _run_search
evaluate_candidates, n_points=n_points_adjusted
File ".venv\lib\site-packages\skopt\searchcv.py", line 411, in _step
local_results = all_results["mean_test_score"][-len(params):]
KeyError: 'mean_test_score'
```
For comparison, I ran the same setup through `RandomizedSearchCV` from `scikit-learn`:
```
search = RandomizedSearchCV(
estimator = pipe,
param_distributions = {'model__estimator__alpha': np.linspace(0.01,1,50)},
scoring = ['precision_macro', 'recall_macro', 'accuracy'],
refit = 'precision_macro',
cv = 5
).fit(X, Y)
```
and it evaluated correctly.
My version of scikit-optimize: 0.9.0.
OS: Windows 10
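As a possible workaround (a sketch, not a confirmed fix), one can let `BayesSearchCV` optimize on the single refit metric and compute the remaining metrics afterwards with `cross_validate`, reusing the setup above:
```python
from sklearn.model_selection import cross_validate

search = BayesSearchCV(
    estimator=pipe,
    search_spaces={'model__estimator__alpha': np.linspace(0.01, 1, 50)},
    scoring='precision_macro',   # single metric drives the optimization
    refit=True,
    cv=5,
).fit(X, Y)

# Evaluate the remaining metrics on the tuned pipeline afterwards.
extra = cross_validate(search.best_estimator_, X, Y, cv=5,
                       scoring=['recall_macro', 'accuracy'])
```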
|
open
|
2022-02-06T16:38:09Z
|
2023-02-18T15:51:17Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/1105
|
[] |
leweex95
| 3
|
alirezamika/autoscraper
|
automation
| 13
|
Not very reasonable, a lot of things change
|
Not very reasonable, a lot of things change
|
closed
|
2020-09-10T07:31:31Z
|
2020-09-10T13:33:58Z
|
https://github.com/alirezamika/autoscraper/issues/13
|
[] |
abbabb123
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 815
|
Can someone link me some resources about model training?
|
I have a Polish voice database that is maybe not the best, but I still want to try to make a Polish model.
I have loaded my database into encoder training and there were some errors connected with "visdom_server".
Please help me find some resources on training it. Is there some kind of tutorial or guide?
|
closed
|
2021-08-10T07:55:58Z
|
2022-06-13T22:00:09Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/815
|
[] |
justkowal
| 2
|
google-research/bert
|
nlp
| 778
|
Did Chinese-only model strip out the accent mark? I mean the mark like "-"
|
Did the Chinese-only model strip out accent marks? I mean marks like "-". I need them. Should I use the Multilingual Cased model instead?
|
open
|
2019-07-20T09:34:03Z
|
2019-07-20T09:34:03Z
|
https://github.com/google-research/bert/issues/778
|
[] |
LiangYuHai
| 0
|
svc-develop-team/so-vits-svc
|
pytorch
| 120
|
Can the learning rate be changed mid-training?
|
Is it possible to use a larger learning rate (0.0004) at the start of training and switch to a normal one (0.0002, 0.0001) partway through?
|
closed
|
2023-04-04T09:19:12Z
|
2023-08-01T09:09:42Z
|
https://github.com/svc-develop-team/so-vits-svc/issues/120
|
[
"not urgent"
] |
TQG1997
| 6
|
comfyanonymous/ComfyUI
|
pytorch
| 6,796
|
CLIP/text encoder load device: cpu
|
### Your question
I am running ComfyUI on an RTX 3090. All drivers are updated and the ComfyUI environment is up to date. I don't have an issue with image generation speed (it runs on the GPU), but this line caught my attention:
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
Should I accept this as normal, or should I tweak something in the configuration?
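A hedged pointer (an assumption about ComfyUI's device placement, not official guidance): the text encoder is kept on CPU when ComfyUI decides VRAM is better spent elsewhere, and launch flags such as `--highvram` or `--gpu-only` exist to bias placement toward the GPU:
```bash
# Sketch only: bias model placement toward the GPU (flag availability may
# vary by ComfyUI version).
python main.py --highvram
```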
### Logs
```powershell
got prompt
Requested to load FluxClipModel_
loaded completely 9.5367431640625e+25 9319.23095703125 True
CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
clip missing: ['text_projection.weight']
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
Requested to load Flux
loaded completely 21526.4872109375 11350.067443847656 True
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:27<00:00, 1.37s/it]
Prompt executed in 140.29 seconds
```
### Other
_No response_
|
closed
|
2025-02-12T14:44:14Z
|
2025-02-28T15:36:28Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6796
|
[
"User Support"
] |
simartem
| 6
|
yeongpin/cursor-free-vip
|
automation
| 323
|
[Discussion]: I'm using a custom email address but haven't received the verification code. What should I do?
|
### Issue checklist
- [x] I understand that Issues are for feedback and problem solving, not a comment section, and I will provide as much information as possible to help resolve the problem.
- [x] I confirm that I want to raise and discuss a question, not file a bug report or feature request.
- [x] I have read the [Github Issues](https://github.com/yeongpin/cursor-free-vip/issues) and searched the existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20) without finding a similar problem.
### Platform
Windows x64
### Version
Latest version
### Your question
I'm using a custom email address but haven't received the verification code. What should I do? It is still stuck waiting for the verification code.

### Additional information
```shell
```
### Priority
High (blocking work)
|
closed
|
2025-03-20T00:33:23Z
|
2025-03-20T03:47:34Z
|
https://github.com/yeongpin/cursor-free-vip/issues/323
|
[
"question"
] |
MyMaskKing
| 1
|
ray-project/ray
|
data-science
| 50,823
|
[Train] Unable to gain long-term access to S3 storage for training state/checkpoints when running on AWS EKS
|
### What happened + What you expected to happen
I am using Ray Train + KubeRay to train on AWS EKS. I have configured EKS to give Ray pods a specific IAM role through Pod Identity Association. EKS Pod Identity Association provides a HTTP endpoint through which an application can "assume said IAM role" - i.e. gain a set of temporary credentials to AWS services. The `boto3` library, for example, is able to automatically use this method to obtain credentials without any special user action.
However, it seems `pyarrow.fs.S3FileSystem` is the only (built-in) vehicle for Ray Train to use S3 as storage. Unfortunately, `pyarrow.fs.S3FileSystem` is unable to automatically gain access through Pod Identity Association. I have filed a report about that in the Apache Arrow repository: https://github.com/apache/arrow/issues/45603
As mentioned in the Apache Arrow issue report, it is possible for the user to manually obtain a set of temporary credentials from Pod Identity Association and pass them to `S3FileSystem` in constructor arguments. However, Ray Train will keep using this same instance of `S3FileSystem` throughout training, during which time the temporary credentials may expire.
It is possible, however, for the user to store the temporary credentials into environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN and an `S3FileSystem` created with no arguments will pick up those credentials. Moreover, the user can also keep updating these environment variables with new credentials when an old set expires, and the same instance of `S3FileSystem` will use the new credentials in every method call.
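For illustration, a minimal sketch of that environment-variable approach (assuming boto3 resolves the Pod Identity credentials; the variable names are the standard AWS ones):
```python
import os
import boto3

def refresh_aws_env_credentials() -> None:
    # boto3 resolves credentials through EKS Pod Identity automatically.
    creds = boto3.Session().get_credentials().get_frozen_credentials()
    os.environ["AWS_ACCESS_KEY_ID"] = creds.access_key
    os.environ["AWS_SECRET_ACCESS_KEY"] = creds.secret_key
    os.environ["AWS_SESSION_TOKEN"] = creds.token

# Call this periodically (e.g. from a background thread or a Tune callback)
# before the temporary credentials expire.
```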
This method of manually updating temporary credentials in environment variables is the only way I have found to give Ray Train long-term access to S3 through Pod Identity Association. However, as mentioned in the Apache Arrow ticket, it has its own drawbacks:
- The environment variables affect the entire Python process - it is not possible to give specific credentials to S3FileSytem.
- The user needs to go through a significant effort to implement updating the environment variables in the context of a Ray Train session, and it is not clear whether it can be done reliably and safely at all. I have found the environment variables need to be updated in the process where TorchTrainer was created. I have found two ways to do this, neither ideal:
- The user can do it on a separate thread, potentially risking race conditions with S3FileSytem use by Ray Train.
- The user can implement a custom `ray.tune.Callback`, hoping to avoid thread safety issues that way. However:
- The existing callback methods are all related to the different points in the progression of training and so inherently coupled to the timing of it, rather than the timing constraints of the credential expiry.
- Moreover, it is still possible Ray Train uses S3FileSystem on a different thread under the hood, so it is not clear whether this is thread safe either.
While this issue could be resolved in `pyarrow.fs.S3FileSystem`, as suggested in the Apache Arrow bug report, I see possible changes in Ray Train that would alleviate the issue. Specifically:
- Ray Train could switch to accessing S3 through `boto3` instead of `pyarrow.fs.S3FileSystem`, since recent versions of `boto3` provide seamless integration with EKS Pod Identity Association.
- Alternatively, Ray Train could provide a specific, safe way for the user to periodically update the instance of `S3FileSystem` used by Ray Train. This way
1. an update period could be guaranteed independently of the progression of training, and
2. the user could pass temporary credentials to `S3FileSystem` constructor arguments, not affecting the rest of the Python process.
It is worth noting that the same issue occurs when using Ray Train in a context where AWS access is provided through SSO, and you will find reproduction instructions below for both cases. However, the main blocker for my production environment is using it on AWS EKS, so I am focusing this bug report on that.
### Versions / Dependencies
Ray version: 2.42.1
### Reproduction script
#### Alternative 1: KubeRay + EKS
Set up:
- an EKS cluster
- an IAM role providing access to an S3 bucket (let's say it's called `my-bucket`)
- set up an EKS Pod Identity Association assigning the IAM role to Ray pods (pods in a specific Kubernetes service account)
- Install KubeRay operator on the EKS cluster.
- Install a RayCluster on the EKS cluster.
Submit Ray job using entrypoint listed below using `ray job submit -- python3 entrypoint.py`
#### Alternative 2: Local Ray Cluster + AWS SSO
Set up access to AWS using an SSO profile - providing access to an S3 bucket (let's say it's called `my-bucket`)
Run the entrypoint listed below locally using `python3 entrypoint.py`, letting is create a local Ray cluster on the fly.
#### Entrypoint (entrypoint.py)
```
import ray.train.torch
import ray.train
import ray
ray.init()
trainer = ray.train.torch.TorchTrainer(
lambda: print("Hello"),
scaling_config=ray.train.ScalingConfig(num_workers=1),
run_config=ray.train.RunConfig(
storage_path="s3://my-bucket",
name="s3fs-bug",
)
)
result = trainer.fit()
```
The job errors with this message:
```
Status message: Job entrypoint command failed with exit code 1, last available logs (truncated to 20,000 chars):
result = trainer.fit()
File "/usr/local/lib/python3.10/site-packages/ray/train/base_trainer.py", line 589, in fit
storage = StorageContext(
File "/usr/local/lib/python3.10/site-packages/ray/train/_internal/storage.py", line 461, in __init__
self._create_validation_file()
File "/usr/local/lib/python3.10/site-packages/ray/train/_internal/storage.py", line 489, in _create_validation_file
self.storage_filesystem.create_dir(self.experiment_fs_path)
File "pyarrow/_fs.pyx", line 603, in pyarrow._fs.FileSystem.create_dir
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
OSError: When testing for existence of bucket 'my-bucket': AWS Error ACCESS_DENIED during HeadBucket operation: No response body.
```
### Issue Severity
Medium: It poses a significant difficulty though I can work around it.
|
open
|
2025-02-22T05:18:57Z
|
2025-03-05T01:25:36Z
|
https://github.com/ray-project/ray/issues/50823
|
[
"bug",
"P1",
"triage",
"train"
] |
jleben
| 1
|
graphdeco-inria/gaussian-splatting
|
computer-vision
| 503
|
Any suggestions on using lidar point clouds to initialize 3dgs to reconstruct large scenes
|
I'm trying to reconstruct a relatively large indoor scene using the 3DGS algorithm. In this case COLMAP can easily fail due to the large number of low-texture and repetitive areas indoors, so I tried exporting the point cloud and poses from Polycam and training 3DGS from there.
It is worth noting that the point cloud obtained from Polycam is about 50 times denser than the one from COLMAP and lies very close to the object surfaces.
But in all attempts so far, initializing 3DGS with the dense point cloud has not achieved satisfactory results. If anyone has relevant experience, it would be greatly appreciated!
|
closed
|
2023-11-29T09:36:36Z
|
2024-08-07T02:49:18Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/503
|
[] |
Bin-ze
| 4
|
vitalik/django-ninja
|
pydantic
| 1,227
|
[BUG] Primary keys are always marked as nullable
|
**Describe the bug**
As a result of #1181, all primary keys are now marked as nullable in ModelSchema, regardless of whether the field is set to nullable or not. I'm not sure why this was done, but I would expect it to defer to the field's own nullable setting instead if for some reason you have a nullable primary key. If the intention was to make this optional for request schemas, I think it should be marked in `fields_optional` instead in the Meta class.
This is a result of these lines here in fields.py
https://github.com/vitalik/django-ninja/blob/8a499928dafb68626e1bcf5c231f23933e4ec241/ninja/orm/fields.py#L153-L155
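For clarity, a sketch of the opt-in behavior described above (the `Item` model is hypothetical):
```python
from ninja import ModelSchema
from myapp.models import Item  # hypothetical model with a non-nullable pk

class ItemSchema(ModelSchema):
    class Meta:
        model = Item
        fields = ["id", "name"]
        fields_optional = ["id"]  # opt in to an optional pk explicitly
```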
**Versions (please complete the following information):**
- Python version: 3.12
- Django version: 5.0.7
- Django-Ninja version: 1.2.1
- Pydantic version: 2.8.2
|
open
|
2024-07-11T14:38:51Z
|
2024-08-02T18:22:47Z
|
https://github.com/vitalik/django-ninja/issues/1227
|
[] |
pmdevita
| 3
|
ydataai/ydata-profiling
|
pandas
| 1,068
|
tangled-up-in-unicode dependency means installing pandas-profiling takes ~2GB
|
### Current Behaviour
```
⊙ python3.10 -m venv venv && venv/bin/python -m pip install --quiet pandas-profiling
...
⊙ du -ha venv | sort -h | tail -n 20
25M venv/lib/python3.10/site-packages/numpy/.dylibs
45M venv/lib/python3.10/site-packages/statsmodels
54M venv/lib/python3.10/site-packages/numpy
56M venv/lib/python3.10/site-packages/pandas
103M venv/lib/python3.10/site-packages/scipy
194M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u13_0_0_data/__pycache__/prop_list_to_property.cpython-310.pyc
194M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u14_0_0_data/__pycache__/prop_list_to_property.cpython-310.pyc
318M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u13_0_0_data/__pycache__/unicode_data_to_name_start.cpython-310.pyc
318M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u14_0_0_data/__pycache__/unicode_data_to_name_start.cpython-310.pyc
540M venv/lib/python3.10/site-packages/tangled_up_in_unicode/__pycache__
540M venv/lib/python3.10/site-packages/tangled_up_in_unicode/__pycache__/tangled_up_in_unicode_12_0_1.cpython-310.pyc
588M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u13_0_0_data/__pycache__
588M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u14_0_0_data/__pycache__
602M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u13_0_0_data
602M venv/lib/python3.10/site-packages/tangled_up_in_unicode/u14_0_0_data
1.8G venv/lib/python3.10/site-packages/tangled_up_in_unicode
2.1G venv
2.1G venv/lib
2.1G venv/lib/python3.10
2.1G venv/lib/python3.10/site-packages
```
(This is filed upstream at dylan-profiler/tangled-up-in-unicode#10 which has some detail on why, but filing here for visibility)
### Expected Behaviour
A smaller unicode database :)
### Data Description
N/A
### Code that reproduces the bug
_No response_
### pandas-profiling version
3.3.0
### Dependencies
```Text
tangled-up-in-unicode 0.2.0
```
### OS
macOS 12.5.1 + Linux
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
closed
|
2022-09-23T11:54:14Z
|
2023-06-03T02:16:18Z
|
https://github.com/ydataai/ydata-profiling/issues/1068
|
[
"dependencies 🔗"
] |
Julian
| 4
|
dpgaspar/Flask-AppBuilder
|
flask
| 1,533
|
FilterEqualFunction on Mongoengine
|
Hi, I've seen that in the past you implemented `FilterEqualFunction` for MongoEngine in a minor release. Any chance of getting it into a new release?
Thanks
|
closed
|
2020-12-17T13:40:32Z
|
2021-06-29T00:56:13Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1533
|
[
"stale"
] |
amaioli
| 1
|
microsoft/Bringing-Old-Photos-Back-to-Life
|
pytorch
| 280
|
Out of memory error
|
Hello
XXXXXXXX-deskcomputer:~/Projects/photo_restoration$ python run.py --input_folder /home/XXXXXX/Videos/08281965BC/Test/ --output_folder /home/XXXXXX/Videos/08281965BC/ColorCorrected/ --GPU 0 --HR
Running Stage 1: Overall restoration
Mapping: You are using the mapping model without global restoration.
Traceback (most recent call last):
File "test.py", line 102, in <module>
model.initialize(opt)
File "/home/XXXXXX/Projects/photo_restoration/Global/models/mapping_model.py", line 147, in initialize
self.netG_A.cuda(opt.gpu_ids[0])
File "/home/XXXXXX/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 905, in cuda
return self._apply(lambda t: t.cuda(device))
File "/home/XXXXXX/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/home/XXXXXX/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply
module._apply(fn)
File "/home/XXXXXX/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 820, in _apply
param_applied = fn(param)
File "/home/XXXXXX/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 905, in <lambda>
return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Traceback (most recent call last):
File "run.py", line 102, in <module>
for x in os.listdir(stage_1_results):
FileNotFoundError: [Errno 2] No such file or directory: '/home/XXXXXX/Videos/08281965BC/ColorCorrected/stage_1_restore_output/restored_image'
System Details:
GPU NVIDIA GeForce GT 1030
NVidia driver 525.125.06
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
16 Gb ram
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 21
Model: 2
Model name: AMD FX(tm)-8350 Eight-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1400.000
5.3 MiB PNG file
Input file attached.

I'm willing to test code if asked.
Thank you for this project. I'm hoping it will work for my 8mm telecine (personal home videos).
Regards
Chris
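A hedged observation (based on the listed hardware, not a confirmed diagnosis): the GeForce GT 1030 has 2 GB of VRAM, which is usually far too little for the `--HR` high-resolution path. Dropping `--HR` may avoid the out-of-memory error:
```bash
# Sketch only: the same command without the high-resolution flag.
python run.py --input_folder /home/XXXXXX/Videos/08281965BC/Test/ \
  --output_folder /home/XXXXXX/Videos/08281965BC/ColorCorrected/ --GPU 0
```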
|
open
|
2023-09-10T13:28:24Z
|
2023-09-10T13:28:24Z
|
https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/280
|
[] |
c1arocque
| 0
|
gradio-app/gradio
|
data-visualization
| 10,532
|
dropdowns in apps built with gradio are not accessible for screenreaders
|
### Describe the bug
All apps built with Gradio that I've used so far have this issue. When using a screen reader such as NVDA:
https://www.nvaccess.org/
there is an issue that makes it impossible for blind or visually impaired screen-reader users to choose anything from dropdowns in the Gradio UI.
The list is presented as an editable field, i.e. a place where the user should input text. When pressing the up and down arrows to change the value, the screen reader only announces the current default value, while a list appears below that can't be focused with the keyboard and that the screen reader can't see.
This makes blind people like me unable to select anything from lists such as voice models in Applio or even the interface language; the same happens with other Gradio-powered apps like AudioCraft Plus, Whisper UI, etc.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
model_file = gr.Dropdown(
label=i18n("Voice Model"),
info=i18n("Select the voice model to use for the conversion."),
choices=sorted(names, key=lambda x: extract_model_and_epoch(x)),
interactive=True,
value=default_weight,
allow_custom_value=True,
)
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio 5.5.0 with gradio client 1.4.2
```
### Severity
Blocking usage of gradio
|
open
|
2025-02-06T23:14:40Z
|
2025-02-07T03:31:26Z
|
https://github.com/gradio-app/gradio/issues/10532
|
[
"bug",
"svelte"
] |
VIPPotato
| 0
|
plotly/dash
|
plotly
| 2,346
|
Allow a "cleanup" callback that is executed when a session ends
|
It would be nice to have something like this, sort of the opposite of a `serve_layout()` function that gets executed with `app.layout` for every new session. This would allow, for example, server-side storage (e.g. using Redis) that deletes session data when the user disconnects.
```python
def serve_layout():
user_id = uuid4()
redis_db[user_id] = <some data here>
...
return layout
app.layout = serve_layout
@dash.on_end_session
def cleanup_layout():
del redis_db[user_id]
```
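Until such a hook exists, one hedged workaround is to lean on the store's own expiry (a sketch assuming a Redis backend; the key and TTL are illustrative):
```python
import redis
from uuid import uuid4

redis_db = redis.Redis()

def serve_layout():
    user_id = uuid4()
    # setex stores the value with a TTL, so stale session data expires on its
    # own even without an end-of-session callback (the TTL is illustrative).
    redis_db.setex(str(user_id), 3600, b"<some data here>")
    ...
```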
|
open
|
2022-12-01T05:11:56Z
|
2024-08-13T19:23:42Z
|
https://github.com/plotly/dash/issues/2346
|
[
"feature",
"P3"
] |
xhluca
| 0
|
predict-idlab/plotly-resampler
|
plotly
| 132
|
FigureResampler not working when using backend parameter on ServersideOutputTransform
|
Hello and thank you for this awesome project.
I'm trying to specify the `backend` parameter on `ServersideOutputTransform` in order to set the `threshold` parameter on `FileSystemCache`, but when I do that my callback using `fig.construct_update_data(relayoutdata)` stops working. Any idea why this is happening?
Code:
```` python
from dash import html, dcc, Output, Input, ctx, State, no_update
import dash_bootstrap_components as dbc
import plotly.graph_objects as go
from dash_extensions.enrich import (
DashProxy,
ServersideOutput,
ServersideOutputTransform
)
from plotly_resampler import FigureResampler
from trace_updater import TraceUpdater
from flask_caching.backends import FileSystemCache
from settings import DROPDOWN_OPTIONS
from preprocess import Preprocess
# ----------------- Defining Global Variable with all data ---------------------
data = Preprocess()
data.preprocess_all_data()
def get_plot(plot_domain, acc, file):
return data.get_plot(plot_domain, acc, file)
def get_stats(plot_domain, acc, file):
return data.get_stats(plot_domain, acc, file)
# ---------------------- Create App ----------------------------
backend = FileSystemCache(cache_dir='file_system_store',
threshold=3)
app = DashProxy(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP],
transforms=[ServersideOutputTransform(backend=backend)])
def create_figure(files_list=['a_normal__2000rpm'], plot_domain='time', acc=0):
fig = FigureResampler(go.Figure(), default_n_shown_samples=10_000)
if plot_domain == 'fft':
for file in files_list:
fig.add_trace(get_plot(plot_domain, acc, file))
fig.update_layout(
yaxis_title='Amplitude',
xaxis_title='Frequency (Hz)',
width=1400,
height=600)
elif plot_domain == 'time':
for file in files_list:
fig.add_trace(get_plot(plot_domain, acc, file))
fig.update_layout(
yaxis_title='Amplitude (g)',
xaxis_title='Time (s)',
width=1400,
height=600)
return fig
app.layout = html.Div(
[
dbc.Row(
dbc.Col([
html.H1('Medições de Bancada')
], width={'size': 6, 'offset': 4}
),
align='center'
),
dbc.Row(
[
dbc.Col(width={'size': 3, 'offset': 1},
children=[
html.H3('Medições:'),
dcc.Dropdown(id='Arquivo',
options=DROPDOWN_OPTIONS,
multi=True,
optionHeight=50
),
html.H3('Tipo de Gráfico:'),
dcc.RadioItems(
id='plot_domain',
options=[
{'label': 'FFT', 'value': 'fft'},
{'label': 'Tempo', 'value': 'time'}
],
labelStyle={'display': 'block'},
value='time'
)
]
),
dbc.Col(width={'size': 5, 'offset': 2},
children=[
html.H4('Tempo de Aquisição: 30s'),
html.H4(
'Frequência de Amostragem: 25600 Hz'),
html.H3('Spot:'),
dcc.RadioItems(
id='accel',
options=[
{'label': 'Drive End Bearing',
'value': 0},
{'label': 'Non Drive End Bearing',
'value': 1},
{'label': 'Drive End Motor',
'value': 2},
{'label': 'Fan End Motor',
'value': 3}
],
labelStyle={'display': 'block'},
value=0
)
]
)
]
),
dbc.Row(
children=[
dcc.Graph(id="plot"),
dcc.Loading(dcc.Store(id='storage-data')),
TraceUpdater(id='dynamic-updater', gdID='plot')
]
),
dbc.Row(
id='stats-card-row'
)
]
)
# ----------------- App Callbacks -----------------------
@app.callback(
ServersideOutput('storage-data', 'data'),
Output('plot', 'figure'),
Input('Arquivo', 'value'),
Input('plot_domain', 'value'),
Input('accel', 'value'),
prevent_initial_call=True
)
def create_new_plot(dropdown_selection, plot_domain, acc):
fig = create_figure(
dropdown_selection, plot_domain=plot_domain, acc=acc)
return fig, fig
@app.callback(
Output('stats-card-row', 'children'),
Input('Arquivo', 'value'),
Input('plot_domain', 'value'),
Input('accel', 'value'),
prevent_initial_call=True
)
def create_stats_card(dropdown_selection, plot_domain, acc):
card = dbc.Card(
[
dbc.CardHeader(html.H2('Estatísticas das medições')),
dbc.CardBody(dbc.ListGroup(
[
dbc.ListGroupItem(children=[
html.H3(stats["name"]),
html.H4(f'Média: {stats["mean"]}'),
html.H4(f'Valor RMS: {stats["rms"]}'),
html.H4(f'Skewness: {stats["skewness"]}')
]
)
for stats in [get_stats(plot_domain, acc, file) for file in dropdown_selection]
],
flush=True,
),
style={"width": "100rem"},
)])
return card
@app.callback(
Output("dynamic-updater", "updateData"),
Input("plot", "relayoutData"),
State("storage-data", "data"),
prevent_initial_call=True,
memoize=True,
)
def update_fig(relayoutdata: dict, fig: FigureResampler):
if fig is not None:
return fig.construct_update_data(relayoutdata)
return no_update
# ----------- Run App --------------
if __name__ == '__main__':
app.run_server(debug=True)
````
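One thing worth checking (a hypothesis, not a confirmed diagnosis): `FileSystemCache` prunes entries once more than `threshold` items are stored, so with `threshold=3` the serialized figure can be evicted between the store callback and the update callback, leaving `construct_update_data` nothing to work with. Raising the threshold is a quick test:
```python
# Sketch only: a higher threshold so serialized figures are not pruned early.
backend = FileSystemCache(cache_dir='file_system_store', threshold=100)
```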
|
closed
|
2022-11-07T16:12:11Z
|
2023-10-25T22:59:20Z
|
https://github.com/predict-idlab/plotly-resampler/issues/132
|
[] |
VictorBauler
| 2
|
jupyter-book/jupyter-book
|
jupyter
| 1,528
|
Plotly and dollarmath don't work unless using mathjax2
|
### Describe the problem
I have noticed that after adding the myst extension "dollarmath" plotly graphs stopped rendering.
Looking at the browser console I could see these errors:

Going to the notebooks.html:325 line I see this:

### Link to your repository or website
https://github.com/raphaeltimbo/mynewbook
### Steps to reproduce
1. Clone https://github.com/raphaeltimbo/mynewbook
2. Build jupyter-book
3. Check the notebooks.html file (under the link "Content with notebooks") and see the rendered plot from the last cell
4. Add dollarmath to myst extension in the _config.yml file
5. Build the jupyter-book again and check notebooks.html with no plot rendered
### The version of Python you're using
3.7.9
### Your operating system
Ubuntu 20.04
### Versions of your packages
```
Jupyter Book : 0.12.0
External ToC : 0.2.3
MyST-Parser : 0.15.2
MyST-NB : 0.13.1
Sphinx Book Theme : 0.1.6
Jupyter-Cache : 0.4.3
NbClient : 0.5.3
```
### Additional context
_No response_
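For reference, the MathJax 2 fallback mentioned in the title can be configured like this (a sketch; the CDN URL is one commonly used MathJax 2 build, not a verified fix for this repo):
```yaml
# _config.yml
sphinx:
  config:
    mathjax_path: https://cdn.jsdelivr.net/npm/mathjax@2/MathJax.js?config=TeX-AMS-MML_HTMLorMML
```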
|
closed
|
2021-10-28T23:06:51Z
|
2023-10-13T00:04:42Z
|
https://github.com/jupyter-book/jupyter-book/issues/1528
|
[
"bug"
] |
raphaeltimbo
| 10
|
desec-io/desec-stack
|
rest-api
| 412
|
[Feature Request] DynDNS API can update subdomains
|
**Feature Description:**
Using the [dynDNS API] it is possible to update `A` and `AAAA` records of sub-domains, e.g. `sub.main.dedyn.io`.
One possible way to achieve this would be via the already existing `hostname` parameter in the API:
```
curl -u main.dedyn.io:<token> https://update.dedyn.io/?hostname=sub.main.dedyn.io
```
**Context:**
* according to [this comment](https://github.com/desec-io/desec-stack/issues/411#issuecomment-640259255) the current behavior does _not_ allow this
* according to [this comment](https://github.com/desec-io/desec-stack/issues/411#issuecomment-640806623) the required changes _could be a relatively easy change_.
* this feature request stems from issue #411.
[dynDNS API]: https://desec.readthedocs.io/en/latest/dyndns/update-api.html
|
closed
|
2020-06-10T12:08:57Z
|
2024-10-07T17:21:54Z
|
https://github.com/desec-io/desec-stack/issues/412
|
[
"enhancement",
"api",
"prio: high",
"easy"
] |
hcc23
| 8
|
fastapi-users/fastapi-users
|
asyncio
| 1,099
|
asyncio.exceptions.CancelledError
|
## Describe the bug
I am running the full example in a fastapi application and I am regularly running into `asyncio.exceptions.CancelledError`.
```
api_1 | INFO: 172.20.0.1:51026 - "POST /auth/jwt/login HTTP/1.1" 200 OK
api_1 | INFO: 172.20.0.1:51032 - "POST /auth/jwt/login HTTP/1.1" 200 OK
api_1 | INFO: 172.20.0.1:47926 - "POST /auth/jwt/login HTTP/1.1" 200 OK
api_1 | INFO: 172.20.0.1:47932 - "POST /auth/jwt/login HTTP/1.1" 200 OK
api_1 | ERROR: Exception during reset or similar (line:765)
api_1 | Traceback (most recent call last):
api_1 | File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 739, in _finalize_fairy
api_1 | fairy._reset(pool)
api_1 | File "/usr/local/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 988, in _reset
api_1 | pool._dialect.do_rollback(self)
api_1 | File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 682, in do_rollback
api_1 | dbapi_connection.rollback()
api_1 | File "/usr/local/lib/python3.10/site-packages/sqlalchemy/dialects/mysql/aiomysql.py", line 205, in rollback
api_1 | self.await_(self._connection.rollback())
api_1 | File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 68, in await_only
api_1 | return current.driver.switch(awaitable)
api_1 | File "/usr/local/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 121, in greenlet_spawn
api_1 | value = await result
api_1 | File "/usr/local/lib/python3.10/site-packages/aiomysql/connection.py", line 399, in rollback
api_1 | await self._read_ok_packet()
api_1 | File "/usr/local/lib/python3.10/site-packages/aiomysql/connection.py", line 372, in _read_ok_packet
api_1 | pkt = await self._read_packet()
api_1 | File "/usr/local/lib/python3.10/site-packages/aiomysql/connection.py", line 598, in _read_packet
api_1 | packet_header = await self._read_bytes(4)
api_1 | File "/usr/local/lib/python3.10/site-packages/aiomysql/connection.py", line 646, in _read_bytes
api_1 | data = await self._reader.readexactly(num_bytes)
api_1 | File "/usr/local/lib/python3.10/asyncio/streams.py", line 708, in readexactly
api_1 | await self._wait_for_data('readexactly')
api_1 | File "/usr/local/lib/python3.10/asyncio/streams.py", line 502, in _wait_for_data
api_1 | await self._waiter
api_1 | asyncio.exceptions.CancelledError
mariadb_1 | 2022-10-20 12:50:40 84 [Warning] Aborted connection 84 to db: 'api' user: 'api' host: '172.20.0.5' (Got an error reading communication packets)
```
Yes, I am running the application and MariaDB in Docker containers.
## To Reproduce
### Configuration
- Python version : 3.10
- FastAPI version : 0.85.1
- FastAPI Users version : 10.2.0
### FastAPI Users configuration
`users/users.py`
```py
#
# ENV
#
import os
from os.path import join, dirname
from dotenv import load_dotenv
dotenv_path = join(dirname(__file__), '../.env')
load_dotenv(dotenv_path)
#
# FASTAPI USERS
#
import uuid
from typing import AsyncGenerator, Optional
from fastapi import Depends, Request
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.ext.declarative import DeclarativeMeta, declarative_base
from sqlalchemy.orm import sessionmaker
from fastapi_users import schemas, BaseUserManager, UUIDIDMixin
from fastapi_users.db import SQLAlchemyBaseUserTableUUID, SQLAlchemyUserDatabase
from fastapi_users.authentication import CookieTransport, AuthenticationBackend, BearerTransport, JWTStrategy
from fastapi_users.authentication.strategy.db import AccessTokenDatabase, DatabaseStrategy
from fastapi_users_db_sqlalchemy.access_token import (
SQLAlchemyAccessTokenDatabase,
SQLAlchemyBaseAccessTokenTableUUID,
)
DATABASE_URL = "mysql+aiomysql://" + os.environ.get('DATABASE_URL')
SECRET = os.environ.get('FASTAPI_SECRET')
Base: DeclarativeMeta = declarative_base()
class User(SQLAlchemyBaseUserTableUUID, Base):
pass
class UserRead(schemas.BaseUser[uuid.UUID]):
pass
class UserCreate(schemas.BaseUserCreate):
pass
class UserUpdate(schemas.BaseUserUpdate):
pass
class AccessToken(SQLAlchemyBaseAccessTokenTableUUID, Base):
pass
engine = create_async_engine(DATABASE_URL)
async_session_maker = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
async def create_db_and_tables():
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.create_all)
async def get_async_session() -> AsyncGenerator[AsyncSession, None]:
async with async_session_maker() as session:
yield session
async def get_user_db(session: AsyncSession = Depends(get_async_session)):
yield SQLAlchemyUserDatabase(session, User)
async def get_access_token_db(session: AsyncSession = Depends(get_async_session),):
yield SQLAlchemyAccessTokenDatabase(session, AccessToken)
def get_database_strategy(access_token_db: AccessTokenDatabase[AccessToken] = Depends(get_access_token_db),) -> DatabaseStrategy:
return DatabaseStrategy(access_token_db, lifetime_seconds=3600)
cookie_transport = CookieTransport(cookie_max_age=86400)
bearer_transport = BearerTransport(tokenUrl="auth/jwt/login")
def get_jwt_strategy() -> JWTStrategy:
return JWTStrategy(secret=SECRET, lifetime_seconds=86400)
jwt_auth_backend = AuthenticationBackend(
name="jwt",
transport=bearer_transport,
get_strategy=get_jwt_strategy,
)
cookie_auth_backend = AuthenticationBackend(
name="cookie",
transport=cookie_transport,
get_strategy=get_jwt_strategy,
)
class UserManager(UUIDIDMixin, BaseUserManager[User, uuid.UUID]):
reset_password_token_secret = SECRET
verification_token_secret = SECRET
async def on_after_register(self, user: User, request: Optional[Request] = None):
print(f"User {user.id} has registered.")
async def on_after_forgot_password(self, user: User, token: str, request: Optional[Request] = None):
print(f"User {user.id} has forgot their password. Reset token: {token}")
async def on_after_request_verify(self, user: User, token: str, request: Optional[Request] = None):
print(f"Verification requested for user {user.id}. Verification token: {token}")
async def get_user_manager(user_db=Depends(get_user_db)):
yield UserManager(user_db)
```
and I import it into `main.py`
```py
from fastapi_users import FastAPIUsers
from .users.users import User, uuid, get_user_manager, jwt_auth_backend, cookie_auth_backend, create_db_and_tables, \
UserRead, UserCreate, UserUpdate
fastapi_users = FastAPIUsers[User, uuid.UUID](
get_user_manager,
[jwt_auth_backend, cookie_auth_backend],
)
current_active_user = fastapi_users.current_user(active=True)
```
Then I am going to just login to produce the above error to `$DOMAIN/auth/jwt/login`
## Additional context
The connection itself is processing the request normally, but I guess the connection does not get closed properly. Any idea why this is happening?
Thanks,
bert
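A hedged suggestion (an assumption, not a confirmed fix): enabling connection health checks and recycling on the async engine, reusing the setup above, often avoids stale-connection rollback errors against MariaDB:
```python
engine = create_async_engine(
    DATABASE_URL,
    pool_pre_ping=True,   # validate a connection before handing it out
    pool_recycle=3600,    # retire connections before MariaDB's idle timeout
)
```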
|
closed
|
2022-10-20T13:12:16Z
|
2022-10-23T07:47:59Z
|
https://github.com/fastapi-users/fastapi-users/issues/1099
|
[
"bug"
] |
bert2002
| 0
|
darrenburns/posting
|
rest-api
| 185
|
vim mode?
|
Are there any plans to add a vim mode?
|
open
|
2025-02-11T18:41:52Z
|
2025-03-17T01:10:40Z
|
https://github.com/darrenburns/posting/issues/185
|
[] |
samarth-na
| 5
|
pytest-dev/pytest-mock
|
pytest
| 174
|
xxx() missing 1 required positional argument: 'mocker'
|
I'm using pytest 5.3.0 with pytest-mock 1.12.1, and in a pytest class I want to temporarily mock a function of an object returned by some other module function. When I use `test_calls_operation(mocker)` outside a test class, the `mocker` arg gets filled in by pytest-mock as expected. However, when I declare the test as shown below, I get `test_calls_operation() missing 1 required positional argument: 'mocker'`. What am I doing wrong? I sifted through the documentation and especially StackOverflow, but I seem to miss the knack of getting the `mocker` arg automatically set as needed.
```
class KeysModuleTests(TestCase):
def test_calls_operation(self, mocker):
keys = mnt.keys.end()
mocker.spy(keys, 'op_end')
# ...
```
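For context, pytest does not inject fixtures into `unittest.TestCase` test methods, which matches the error here. A sketch of the same test as a plain pytest-style class (which does receive `mocker`):
```python
class KeysModuleTests:  # plain class, not a unittest.TestCase subclass
    def test_calls_operation(self, mocker):
        keys = mnt.keys.end()
        mocker.spy(keys, 'op_end')
        # ...
```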
|
closed
|
2019-12-07T15:10:47Z
|
2023-04-15T09:41:56Z
|
https://github.com/pytest-dev/pytest-mock/issues/174
|
[] |
thediveo
| 17
|
scikit-learn/scikit-learn
|
machine-learning
| 30,714
|
Version 1.0.2 requires numpy<2
|
### Describe the bug
Installing scikit-learn version 1.0.2 leads to the following error:
```bash
ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```
This seems to indicate a mismatch between this version of scikit-learn and numpy versions greater than 2.0 (Specifically 2.2.2 was being installed, following the only restriction of `numpy>=1.14.6`).
This can be solved by indicating to use a numpy version older than 2.0 by modifying step 1 to:
```bash
pip install "scikit-learn==1.0.2" "numpy<2"
```
## Additional references
https://stackoverflow.com/questions/66060487/valueerror-numpy-ndarray-size-changed-may-indicate-binary-incompatibility-exp
https://stackoverflow.com/questions/78650222/valueerror-numpy-dtype-size-changed-may-indicate-binary-incompatibility-expec
### Steps/Code to Reproduce
1. Install scikit-learn through pip
```bash
pip install "scikit-learn==1.0.2"
```
2. Use scikit-learn
````python
% path/to/script.py
...
from sklearn.datasets import load_iris
...
````
### Expected Results
No errors thrown
### Actual Results
Error is thrown:
```bash
path/to/script.py:2: in <module>
from sklearn.datasets import load_iris
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sklearn/__init__.py:82: in <module>
from .base import clone
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sklearn/base.py:17: in <module>
from .utils import _IS_32BIT
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sklearn/utils/__init__.py: in <module>
from .murmurhash import murmurhash3_32
sklearn/utils/murmurhash.pyx:1: in init sklearn.utils.murmurhash
???
E ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```
### Versions
```shell
OS: Ubuntu 24.10 (latest)
Python version 3.10
Scikit-learn version: 1.0.2
pip version: 24.3.1
setuptools version: 65.5.0
```
|
closed
|
2025-01-24T11:47:50Z
|
2025-01-24T15:00:00Z
|
https://github.com/scikit-learn/scikit-learn/issues/30714
|
[
"Bug",
"Needs Triage"
] |
grudloff
| 7
|
tflearn/tflearn
|
data-science
| 849
|
Model.predict giving the same prediction for every example
|
I have a 110-layer ResNet trained and validated on 4 classes. The training examples are in decent proportion (30%, 20%, 25%, 25%), and it has a validation accuracy of around 90%. When testing it on new examples, however, it always outputs the same class. I am giving a list of arrays as input to `model.predict`. I have attached the code below.
```python
from __future__ import division, print_function, absolute_import
import numpy as np
#import pandas as pd
import tflearn
import os
from glob import glob
import cv2
import csv
import pickle
from tflearn.data_utils import shuffle, to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.layers.normalization import local_response_normalization
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
import h5py
train_data = h5py.File('train_dataset_her2.h5', 'r')
X = train_data['X']
Y = train_data['Y']
test_data = h5py.File('test_dataset_her2.h5', 'r')
testX = test_data['X']
testY = test_data['Y']
# Real-time data preprocessing
img_prep = tflearn.ImagePreprocessing()
img_prep.add_featurewise_zero_center()
# Real-time data augmentation
img_aug = tflearn.ImageAugmentation()
img_aug.add_random_flip_leftright()
#network
n=18
network = input_data(shape=[None, 224, 224,3])#data_preprocessing=img_prep,data_augmentation=img_aug)
network=conv_2d(network,108,3,activation='relu')
network=max_pool_2d(network,2)
network=conv_2d(network,108,3,activation='relu')
network=max_pool_2d(network,2)
network = conv_2d(network,108, 3, activation='relu')
network = dropout(network, 0.8)
network = tflearn.conv_2d(network, 16, 3, regularizer='L2', weight_decay=0.0001)
network = tflearn.residual_block(network, n, 16)
network = tflearn.residual_block(network, 1, 32, downsample=True)
network = tflearn.residual_block(network, n-1, 32)
network = tflearn.residual_block(network, 1, 64, downsample=True)
network = tflearn.residual_block(network, n-1, 64)
network = tflearn.batch_normalization(network)
network = tflearn.activation(network, 'relu')
network = tflearn.global_avg_pool(network)
network = tflearn.fully_connected(network, 4, activation='softmax')
adam =tflearn.optimizers.Adam(learning_rate=0.002)
network = tflearn.regression(network, optimizer=adam,
loss='categorical_crossentropy')
model = tflearn.DNN(network, tensorboard_verbose=0)
model.load("dataset_adam_resnet.tfl.ckpt-4000")
print("Done loading model")
############################################################################
###Prediction
sub_folder = [temp[0] for temp in os.walk('Dataset_Test_Data')]
sub_folder = sub_folder[1:]
######################################################################
###predict without pickle
for f1 in range(1):#(len(sub_folder)):
list_images = sorted(glob(sub_folder[f1] + '/*.jpg'))
predictions = []
temp_m = sub_folder[f1].split("/")
print("OPerating '%s' folder" %temp_m[1])
for item in list_images:
print("predicting%s"%item)
predictions.append(model.predict_label((cv2.imread(item)).astype(float).reshape(1,224,224,3)))
writer = csv.writer(open('./HER2_Test_Data/'+temp_m[1]+'/Prediction_cnn_without_pickle' + temp_m[1] + '.csv', "w"))
writer.writerows(predictions)
`
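One detail that may matter here (a guess, since `data_preprocessing=img_prep` is commented out in `input_data` above): if the checkpoint was trained with featurewise zero-centering, the test images need the same shift before prediction. A minimal sketch, where `TRAIN_MEAN` is a placeholder for the mean actually computed on the training set:
```python
import cv2
import numpy as np

TRAIN_MEAN = 117.0  # placeholder: substitute the mean computed on the training set

img = cv2.imread(item).astype(np.float32) - TRAIN_MEAN  # same centering as training
pred = model.predict(img.reshape(1, 224, 224, 3))       # `item` and `model` as above
```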
|
open
|
2017-07-22T21:30:11Z
|
2019-05-24T14:20:06Z
|
https://github.com/tflearn/tflearn/issues/849
|
[] |
deepakanandece
| 2
|
gradio-app/gradio
|
data-visualization
| 10,354
|
KeyError in gradio_client stream_messages() pending_event_ids
|
### Describe the bug
```
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/gradio_client/client.py", line 277, in stream_messages
self.pending_event_ids.remove(event_id)
KeyError: 'a4306ceb969b452f8b4e2d6de58cd88f'
```
Hi, a `httpx.ReadTimeout` during `client.predict()` might prevent `self.client.pending_event_ids.add(event_id)` from executing in `make_predict()`.
This results in a missing `event_id` in `pending_event_ids`, causing the `KeyError` when `stream_messages()` attempts removal.
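A minimal sketch of the failure mode and a defensive fix (the names mirror the traceback but are illustrative, not a verbatim patch to `gradio_client`):
```python
pending_event_ids = set()  # the ReadTimeout meant add() never ran

event_id = "a4306ceb969b452f8b4e2d6de58cd88f"
# pending_event_ids.remove(event_id)   # KeyError, as in the traceback above
pending_event_ids.discard(event_id)    # no-op when the id was never added
```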
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
from time import sleep

import httpx
from gradio_client import Client

client = Client(SPACE_NAME, HF_TOKEN)

while True:
    try:
        result = client.predict(
            state="0",
            api_name="/check_status"
        )
        if result:
            break
        else:
            sleep(60)
            continue
    except (httpx.ReadTimeout, httpx.ConnectError) as exc:
        print('this line executed when httpx.ReadTimeout')
        sleep(60)
        continue
    except Exception as exc:
        print('this line executed when pending_event_ids KeyError')
        break
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
print(gradio_client.__version__)
1.5.2
```
### Severity
I can work around it
|
open
|
2025-01-14T14:57:49Z
|
2025-03-05T12:30:24Z
|
https://github.com/gradio-app/gradio/issues/10354
|
[
"bug"
] |
linzack
| 1
|
onnx/onnx
|
pytorch
| 6,533
|
[reference] Improve ConcatFromSequence reference implementation
|
> I still have the same question. I agree there is an ambiguity in the ONNX documentation ... but the ambiguity applies to values other than -1 also. Eg., -2 as well.
>
> I think that the spec means that the specified axis is interpreted with respect to the output's shape (not the input's shape). So, the specified axis in the output will be the newly inserted axis.
>
> If I understand the spec of `np.expand_dims`, it is similar: https://numpy.org/doc/stable/reference/generated/numpy.expand_dims.html ... so, something seems off here.
_Originally posted by @gramalingam in https://github.com/onnx/onnx/pull/6369#discussion_r1774104460_
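For reference, a minimal numpy analogue of `ConcatFromSequence` with `new_axis=1`, where `axis=-1` is interpreted against the output's rank, matching `np.expand_dims` semantics:
```python
import numpy as np

seq = [np.zeros((2, 3)), np.ones((2, 3))]
out = np.stack(seq, axis=-1)  # the new axis becomes the last axis of the output
print(out.shape)              # (2, 3, 2)
```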
|
open
|
2024-11-05T23:17:22Z
|
2024-11-05T23:18:34Z
|
https://github.com/onnx/onnx/issues/6533
|
[
"module: reference implementation",
"contributions welcome"
] |
justinchuby
| 0
|
tflearn/tflearn
|
data-science
| 782
|
wide and deep help.
|
Running with everything on defaults:
```
Training
---------------------------------
Run id: wide+deep
Log directory: /tmp/tflearn_logs/
INFO:tensorflow:Summary name BinaryAccuracy/wide_regression_0 (raw) is illegal; using BinaryAccuracy/wide_regression_0__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_1/deep_regression_0 (raw) is illegal; using BinaryAccuracy_1/deep_regression_0__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_2/central_bias_regression_0 (raw) is illegal; using BinaryAccuracy_2/central_bias_regression_0__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_3/wide_regression_1 (raw) is illegal; using BinaryAccuracy_3/wide_regression_1__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_4/deep_regression_1 (raw) is illegal; using BinaryAccuracy_4/deep_regression_1__raw_ instead.
INFO:tensorflow:Summary name BinaryAccuracy_5/central_bias_regression_1 (raw) is illegal; using BinaryAccuracy_5/central_bias_regression_1__raw_ instead.
---------------------------------
Training samples: 195366
Validation samples: 97686
--
Traceback (most recent call last):
File "<ipython-input-2-36cb65ca3534>", line 1, in <module>
runfile('/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py', wdir='/Users/shaotang/OneDrive/ebay_project/wide_n_deep')
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 426, in <module>
CommandLine()
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 374, in CommandLine
twad.train(n_epoch=FLAGS.n_epoch, snapshot_step=FLAGS.snapshot_step)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 315, in train
run_id=self.name,
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/models/dnn.py", line 215, in fit
callbacks=callbacks)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 336, in fit
show_metric)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/helpers/trainer.py", line 777, in _train
feed_batch)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 997, in _run
feed_dict_string, options, run_metadata)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
target_list, options, run_metadata)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)
InvalidArgumentError: Shape [-1,5] has negative dimensions
[[Node: wide_X_1/X = Placeholder[dtype=DT_FLOAT, shape=[?,5], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'wide_X_1/X', defined at:
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/ipython/start_kernel.py", line 227, in <module>
main()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/ipython/start_kernel.py", line 223, in main
kernel.start()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 474, in start
ioloop.IOLoop.instance().start()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tornado/ioloop.py", line 887, in start
handler_func(fd_obj, events)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 276, in dispatcher
return self.dispatch_shell(stream, msg)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
handler(stream, idents, msg)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 390, in execute_request
user_expressions, allow_stdin)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 501, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2717, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2827, in run_ast_nodes
if self.run_code(code, result):
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2881, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-36cb65ca3534>", line 1, in <module>
runfile('/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py', wdir='/Users/shaotang/OneDrive/ebay_project/wide_n_deep')
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
execfile(filename, namespace)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 426, in <module>
CommandLine()
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 369, in CommandLine
checkpoints_dir=FLAGS.checkpoints_dir)
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 57, in __init__
self.build_model([wide_learning_rate, deep_learning_rate])
File "/Users/shaotang/OneDrive/ebay_project/wide_n_deep/ebay_linux_SK.py", line 88, in build_model
wide_inputs = tflearn.input_data(shape=input_shape, name="wide_X")
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tflearn/layers/core.py", line 81, in input_data
placeholder = tf.placeholder(shape=shape, dtype=dtype, name="X")
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1530, in placeholder
return gen_array_ops._placeholder(dtype=dtype, shape=shape, name=name)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1954, in _placeholder
name=name)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/shaotang/anaconda/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Shape [-1,5] has negative dimensions
[[Node: wide_X_1/X = Placeholder[dtype=DT_FLOAT, shape=[?,5], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
```
Any fix?
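For what it's worth, a minimal TF 1.x snippet reproduces the same message when a placeholder with an unknown batch dimension is never fed (a guess at the cause, not a confirmed diagnosis of the wide+deep code above):
```python
import tensorflow as tf  # TF 1.x API, as in the report

x = tf.placeholder(tf.float32, shape=[None, 5], name="wide_X")
y = x * 2.0
with tf.Session() as sess:
    # sess.run(y)  # reproduces "Shape [-1,5] has negative dimensions" on affected builds
    print(sess.run(y, feed_dict={x: [[1., 2., 3., 4., 5.]]}))  # feeding the placeholder works
```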
|
open
|
2017-06-06T03:54:06Z
|
2017-06-06T03:54:06Z
|
https://github.com/tflearn/tflearn/issues/782
|
[] |
lancerts
| 0
|
scikit-image/scikit-image
|
computer-vision
| 6,915
|
Existing "inpainting" gallery example could use a better (more specific) title.
|
Creating this issue so we don't lose track of what's been discussed in the conversation. _Originally posted by @lagru in https://github.com/scikit-image/scikit-image/pull/6853#discussion_r1149741067_
> @mkcor, just wondering how this relates to [our existing inpainting example ](https://scikit-image.org/docs/dev/auto_examples/filters/plot_inpaint.html#sphx-glr-auto-examples-filters-plot-inpaint-py). I am assuming that the main benefit here is that it's a real world use case?
[...]
> Which prompts the idea that we should update the title of the existing example, so it's less generic than just "inpainting."
|
closed
|
2023-05-02T15:44:44Z
|
2023-06-03T16:51:32Z
|
https://github.com/scikit-image/scikit-image/issues/6915
|
[
":page_facing_up: type: Documentation"
] |
mkcor
| 0
|
PokeAPI/pokeapi
|
api
| 315
|
Pokemon Ultra Sun and Ultra Moon data needed
|
Pokémon Ultra Sun and Ultra Moon added some new Pokémon and forms which have yet to be added to the database, like the new forms of Necrozma and Dusk Lycanroc.
|
closed
|
2017-12-05T20:59:33Z
|
2018-05-21T08:43:36Z
|
https://github.com/PokeAPI/pokeapi/issues/315
|
[
"enhancement",
"veekun"
] |
Smethan
| 2
|
Gozargah/Marzban
|
api
| 1,034
|
ALPN bug in dev branch
|
Hello dear Gozargah team, and thank you very much for your help in bypassing internet restrictions.
In the latest version of the dev branch, when we set ALPN to the combination of http/1.1 and h2, the panel returns it as a single string (i.e., "h2,http/1.1") in the JSON config and does not split it into separate values for the ALPN array.
The ALPN key in the JSON config comes out as:
```json
"alpn": [
    "h2,http/1.1"
],
```
instead of:
```json
"alpn": [
    "h2",
    "http/1.1"
],
```
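A minimal sketch of the intended normalization on the panel side (names are illustrative, not Marzban internals):
```python
raw = "h2,http/1.1"  # value as currently saved by the panel
alpn = [part.strip() for part in raw.split(",") if part.strip()]
print(alpn)  # ['h2', 'http/1.1']
```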
Also, setting h3 (or any combination including h3) in the ALPN section prevents the host settings from being saved.
|
closed
|
2024-06-05T12:21:52Z
|
2024-07-04T12:24:41Z
|
https://github.com/Gozargah/Marzban/issues/1034
|
[
"Bug"
] |
develfishere
| 2
|
horovod/horovod
|
pytorch
| 2,998
|
Imported target "MPI::MPI_CXX" includes non-existent path
|
Hello,
I am having some issues installing Horovod with MPI; would anybody be able to offer suggestions? I have attached my setup below.
Thanks,
Yiltan
**Environment:**
1. Framework: TensorFlow
2. Framework version: 1.15.2
3. Horovod version: v0.20.3
4. MPI version: MVAPICH2-GDR
5. CUDA version: 10.1.243
6. NCCL version: n/a
7. Python version: 3.7
8. Spark / PySpark version: n/a
9. Ray version: n/a
10. OS and version: Red Hat
11. GCC version: 8.4.0
12. CMake version: 3.16.3
**Install Script**
```
IBM_REPO="https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/"
# Create the enviroment
conda create -y python=3.7 --prefix=$(pwd)/.conda/envs/horovod_tf/
eval "$(conda shell.bash hook)"
conda activate .conda/envs/horovod_tf/
conda install -y -c $IBM_REPO tensorflow-gpu==1.15.2 keras Pillow
# Horovod variables
export HOROVOD_WITH_TENSORFLOW=1
export HOROVOD_WITHOUT_PYTORCH=1
export HOROVOD_WITHOUT_MXNET=1
export HOROVOD_WITHOUT_GLOO=1
export HOROVOD_CUDA_HOME=$CUDA_HOME
export HOROVOD_GPU_OPERATIONS=MPI
export HOROVOD_WITH_MPI=1
cd packages/horovod
python setup.py install
```
**Output**
```
running clean
removing 'build/temp.linux-ppc64le-3.7' (and everything under it)
running install
running bdist_egg
running egg_info
writing horovod.egg-info/PKG-INFO
writing dependency_links to horovod.egg-info/dependency_links.txt
writing entry points to horovod.egg-info/entry_points.txt
writing requirements to horovod.egg-info/requires.txt
writing top-level names to horovod.egg-info/top_level.txt
reading manifest file 'horovod.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
no previously-included directories found matching '.eggs'
warning: no previously-included files found matching 'third_party/eigen/Eigen/src/IterativeSolvers/*'
warning: no previously-included files found matching 'third_party/eigen/unsupported/Eigen/FFT'
warning: no previously-included files found matching 'third_party/eigen/unsupported/Eigen/MPRealSupport'
warning: no previously-included files found matching 'third_party/eigen/doc/PreprocessorDirectives.dox'
warning: no previously-included files found matching 'third_party/eigen/doc/UsingIntelMKL.dox'
warning: no previously-included files found matching 'third_party/eigen/doc/SparseLinearSystems.dox'
warning: no previously-included files found matching 'third_party/eigen/COPYING.GPL'
warning: no previously-included files found matching 'third_party/eigen/COPYING.LGPL'
warning: no previously-included files found matching 'third_party/eigen/COPYING.README'
writing manifest file 'horovod.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-ppc64le/egg
running install_lib
running build_py
running build_ext
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 9.3.0
-- Check for working CXX compiler: /opt/base/gcc/9.3.0/bin/g++
-- Check for working CXX compiler: /opt/base/gcc/9.3.0/bin/g++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags:
-- Using command /project/.conda/envs/horovod_tf/bin/python
CMake Error in /project/packages/horovod/build/temp.linux-ppc64le-3.7/CMakeFiles/CMakeTmp/CMakeLists.txt:
Imported target "MPI::MPI_CXX" includes non-existent path
"/usr/tce/packages/cuda/cuda-10.1.243/include"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
CMake Error in /project/packages/horovod/build/temp.linux-ppc64le-3.7/CMakeFiles/CMakeTmp/CMakeLists.txt:
Imported target "MPI::MPI_CXX" includes non-existent path
"/usr/tce/packages/cuda/cuda-10.1.243/include"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
* The path was deleted, renamed, or moved to another location.
* An install or uninstall procedure did not complete successfully.
* The installation package was faulty and references files it does not
provide.
CMake Error at /opt/base/cmake/3.16.3/share/cmake-3.16/Modules/FindMPI.cmake:1194 (try_compile):
Failed to generate test project build system.
Call Stack (most recent call first):
/opt/base/cmake/3.16.3/share/cmake-3.16/Modules/FindMPI.cmake:1245 (_MPI_try_staged_settings)
/opt/base/cmake/3.16.3/share/cmake-3.16/Modules/FindMPI.cmake:1505 (_MPI_check_lang_works)
CMakeLists.txt:131 (find_package)
-- Configuring incomplete, errors occurred!
See also "/project/packages/horovod/build/temp.linux-ppc64le-3.7/CMakeFiles/CMakeOutput.log".
Traceback (most recent call last):
File "setup.py", line 193, in <module>
'horovodrun = horovod.runner.launch:run_commandline'
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 164, in run
cmd = self.call_command('install_lib', warn_dir=0)
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command
self.run_command(cmdname)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run
self.build()
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/command/install_lib.py", line 107, in build
self.run_command('build_ext')
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/project/.conda/envs/horovod_tf/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/project/.conda/envs/horovod_tf/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "setup.py", line 89, in build_extensions
cwd=self.build_temp)
File "/project/.conda/envs/horovod_tf/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '/project/packages/horovod', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/project/packages/horovod/build/lib.linux-ppc64le-3.7', '-DPYTHON_EXECUTABLE:FILEPATH=/project/mvapich-gdr/.conda/envs/horovod_tf/bin/python']' returned non-zero exit status 1.
```
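In case it helps with triage, here is a quick diagnostic (assuming the MVAPICH2 compiler wrappers are on PATH; MPICH-family wrappers support `-show`):
```bash
# Does the path CMake complains about actually exist?
ls /usr/tce/packages/cuda/cuda-10.1.243/include || echo "path missing"
# What include/link flags does the MPI wrapper hard-code into its compile line?
mpicxx -show
```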
|
closed
|
2021-06-25T21:50:13Z
|
2021-07-02T22:39:50Z
|
https://github.com/horovod/horovod/issues/2998
|
[
"bug"
] |
Yiltan
| 2
|
pallets/quart
|
asyncio
| 223
|
Quart::send_from_directory produces ( semanticly ) incorrect response object in case of 304 response
|
**Bug**
When serving files from a directory using `send_from_directory`, Quart sometimes returns an `304 Not Modified` response.
However the content in this response is not empty as it should be as dictated by RFC 7230
> Any response to a HEAD request and any response with a 1xx
(Informational), 204 (No Content), or 304 (Not Modified) status
code is always terminated by the first empty line after the
header fields, regardless of the header fields present in the
message, and thus cannot contain a message body.
https://www.rfc-editor.org/rfc/rfc7230#section-3.3.3
This can cause issues with certain ASGI web servers such as uvicorn, which rely on h11 ( https://github.com/tiangolo/fastapi/issues/2253#issuecomment-717357627 ).
**Replication**
```python
from quart import Quart, send_from_directory

app = Quart(__name__)

@app.route("/<path:path>", methods=['GET'])
async def serve_static(path):
    response = await send_from_directory("public", path)
    data = await response.get_data()   # `data` was undefined in the original snippet
    print(response.content_length)
    print(len(data))
    return response
```
Launch this application using any preferred method and request a non-empty file located in `./public` until it returns a 304; observe that both `content_length` and the actual data are not empty.
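For completeness, the 304 can also be provoked from the command line (URL and ETag values are illustrative):
```bash
# The first request records the validator; the second sends it back.
curl -sD - http://localhost:5000/example.txt -o /dev/null | grep -i etag
curl -sD - -H 'If-None-Match: "<etag-from-above>"' http://localhost:5000/example.txt
# Per RFC 7230 section 3.3.3, the 304 response must not carry a body.
```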
**Expected behavior**
```python
from quart import Quart, send_from_directory

app = Quart(__name__)

@app.route("/<path:path>", methods=['GET'])
async def serve_static(path):
    response = await send_from_directory("public", path)
    response.set_data(b"")
    return response
```
Manually setting the data to an empty byte string corrects the bug.
Environment:
- Python version: 3.10.4
- Quart version: 0.18.3
|
closed
|
2023-03-01T09:06:08Z
|
2023-10-01T00:20:34Z
|
https://github.com/pallets/quart/issues/223
|
[] |
SamCoutteauHybrid
| 1
|
coqui-ai/TTS
|
pytorch
| 3,833
|
[Bug] Error with torch.isin() in Docker Container with transformers Library
|
### Describe the bug
When running the application inside a Docker container, an error occurs related to the torch.isin() method within the transformers library. The error does not occur when running the application locally (outside of the container), suggesting a possible incompatibility or issue with the dependencies inside the Docker container.
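For context, the failing call pattern can be reproduced in isolation, and passing tensor arguments positionally avoids it (a sketch of the incompatibility, not a claim about where transformers should be patched):
```python
import torch

ids = torch.tensor([0, 5, 2])
eos_id = 2

# torch.isin(elements=ids, test_elements=eos_id)  # TypeError, as in the logs below:
#   no overload accepts a Python int for the keyword `test_elements`
mask = torch.isin(ids, torch.tensor(eos_id))      # Tensor/Tensor overload works
print(mask)  # tensor([False, False,  True])
```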
### To Reproduce
Build the Docker image using the provided Dockerfile.
Dockerfile:
```dockerfile
FROM python:3.11.8-slim
ENV PYTHONUNBUFFERED=1
# Install system dependencies and Rust
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
curl \
libsndfile1 \
libgomp1 \
pkg-config \
libssl-dev && \
curl https://sh.rustup.rs -sSf | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
ENV COQUI_TOS_AGREED=1
# Update pip to the latest version
RUN pip install --upgrade pip
# Install Python dependencies
RUN pip install --no-cache-dir fastapi uvicorn torch==2.2.0 torchaudio==2.2.0 transformers==4.43.1 numpy==1.24.3 TTS==0.22.0 sudachipy cutlet
RUN pip install --upgrade transformers
# Copy the FastAPI application code
COPY main.py /app/main.py
WORKDIR /app
EXPOSE 8001
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8001"]
```
main.py:
```python
import io
import os
import wave

import torch
import numpy as np
from fastapi import FastAPI, Request, Header, Body
from fastapi.responses import StreamingResponse

from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
from TTS.utils.generic_utils import get_user_data_dir
from TTS.utils.manage import ModelManager

# Set the number of threads and device
torch.set_num_threads(int(os.environ.get("NUM_THREADS", os.cpu_count())))
device = torch.device("cuda" if torch.cuda.is_available() and os.environ.get("USE_CPU", "0") == "0" else "cpu")

# Load custom model if available, otherwise download the default model
custom_model_path = os.environ.get("CUSTOM_MODEL_PATH", "/app/tts_models")
if os.path.exists(custom_model_path) and os.path.isfile(custom_model_path + "/config.json"):
    model_path = custom_model_path
    print("Loading custom model from", model_path, flush=True)
else:
    print("Loading default model", flush=True)
    model_name = "tts_models/multilingual/multi-dataset/xtts_v2"
    print("Downloading XTTS Model:", model_name, flush=True)
    ModelManager().download_model(model_name)
    model_path = os.path.join(get_user_data_dir("tts"), model_name.replace("/", "--"))
    print("XTTS Model downloaded", flush=True)

# Load model configuration and model
print("Loading XTTS", flush=True)
config = XttsConfig()
config.load_json(os.path.join(model_path, "config.json"))
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir=model_path, eval=True, use_deepspeed=True if device == "cuda" else False)
model.to(device)
print("XTTS Loaded.", flush=True)

# Initialize FastAPI
app = FastAPI(
    title="XTTS Streaming server",
    description="XTTS Streaming server",
    version="0.0.1",
    docs_url="/",
)

# Helper functions
def postprocess(wav):
    if isinstance(wav, list):
        wav = torch.cat(wav, dim=0)
    wav = wav.clone().detach().cpu().numpy()
    wav = wav[None, : int(wav.shape[0])]
    wav = np.clip(wav, -1, 1)
    wav = (wav * 32767).astype(np.int16)
    return wav

def wav_data_generator(frame_input, sample_rate=24000, sample_width=2, channels=1):
    wav_buf = io.BytesIO()
    with wave.open(wav_buf, "wb") as vfout:
        vfout.setnchannels(channels)
        vfout.setsampwidth(sample_width)
        vfout.setframerate(sample_rate)
        vfout.writeframes(frame_input)
    wav_buf.seek(0)
    return wav_buf.read()

# Streaming generator
def predict_streaming_generator(text, language, add_wav_header, stream_chunk_size):
    speaker_name = "Alison Dietlinde"
    speaker_raw = model.speaker_manager.speakers[speaker_name]["speaker_embedding"].cpu().squeeze().half().tolist()
    gpt_raw = model.speaker_manager.speakers[speaker_name]["gpt_cond_latent"].cpu().squeeze().half().tolist()
    speaker_embedding = torch.tensor(speaker_raw).unsqueeze(0).unsqueeze(-1)
    gpt_cond_latent = torch.tensor(gpt_raw).reshape((-1, 1024)).unsqueeze(0)
    chunks = model.inference_stream(
        text,
        language,
        gpt_cond_latent,
        speaker_embedding,
        stream_chunk_size=int(stream_chunk_size),
        enable_text_splitting=True
    )
    for i, chunk in enumerate(chunks):
        chunk = postprocess(chunk)
        if i == 0 and add_wav_header:
            yield wav_data_generator(b"")
            yield chunk.tobytes()
        else:
            yield chunk.tobytes()

# FastAPI endpoint for streaming
@app.post("/tts_stream")
async def predict_streaming_endpoint(
    text: str = Header(...),
    language: str = Header(...),
    add_wav_header: bool = Header(True),
    stream_chunk_size: str = Header("20")
):
    try:
        return StreamingResponse(
            predict_streaming_generator(text, language, add_wav_header, stream_chunk_size),
            media_type="audio/wav"
        )
    except Exception as e:
        raise

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8001)
```
Start the Docker container.
Make a POST request to the /tts_stream endpoint with the appropriate headers and data.
test.py:
```python
import argparse
import json
import shutil
import subprocess
import sys
import time
from typing import Iterator

import requests

def is_installed(lib_name: str) -> bool:
    lib = shutil.which(lib_name)
    if lib is None:
        return False
    return True

def save(audio: bytes, filename: str) -> None:
    with open(filename, "wb") as f:
        f.write(audio)

def stream_ffplay(audio_stream, output_file, save=True):
    if not save:
        ffplay_cmd = ["ffplay", "-nodisp", "-probesize", "1024", "-autoexit", "-"]
    else:
        print("Saving to ", output_file)
        ffplay_cmd = ["ffmpeg", "-probesize", "1024", "-i", "-", output_file]
    ffplay_proc = subprocess.Popen(ffplay_cmd, stdin=subprocess.PIPE)
    for chunk in audio_stream:
        if chunk is not None:
            ffplay_proc.stdin.write(chunk)
    # close on finish
    ffplay_proc.stdin.close()
    ffplay_proc.wait()

def tts(text, language, server_url, stream_chunk_size) -> Iterator[bytes]:
    start = time.perf_counter()
    headers = {
        "text": text,
        "language": language,
        "add_wav_header": "False",
        "stream_chunk_size": stream_chunk_size,
    }
    res = requests.post(
        f"{server_url}/tts_stream",
        headers=headers,
        stream=True
    )
    end = time.perf_counter()
    print(f"Time to make POST: {end-start}s", file=sys.stderr)
    if res.status_code != 200:
        print("Error:", res.text)
        sys.exit(1)
    first = True
    for chunk in res.iter_content(chunk_size=512):
        if first:
            end = time.perf_counter()
            print(f"Time to first chunk: {end-start}s", file=sys.stderr)
            first = False
        if chunk:
            yield chunk
    print("⏱️ response.elapsed:", res.elapsed)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--text",
        default="It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
        help="text input for TTS"
    )
    parser.add_argument(
        "--language",
        default="en",
        help="Language to use default is 'en' (English)"
    )
    parser.add_argument(
        "--output_file",
        default=None,
        help="Save TTS output to given filename"
    )
    parser.add_argument(
        "--ref_file",
        default=None,
        help="Reference audio file to use, when not given will use default"
    )
    parser.add_argument(
        "--server_url",
        default="http://localhost:8000",
        help="Server url http://localhost:8000 default, change to your server location "
    )
    parser.add_argument(
        "--stream_chunk_size",
        default="20",
        help="Stream chunk size , 20 default, reducing will get faster latency but may degrade quality"
    )
    args = parser.parse_args()

    with open("./default_speaker.json", "r") as file:
        speaker = json.load(file)
    if args.ref_file is not None:
        print("Computing the latents for a new reference...")
    audio = stream_ffplay(
        tts(
            args.text,
            args.language,
            args.server_url,
            args.stream_chunk_size
        ),
        args.output_file,
        save=bool(args.output_file)
    )
```
CMD:
```bash
python test.py --text "This is a Test." --language en --server_url "http://localhost:8001" --stream_chunk_size 145
```
### Expected behavior
_No response_
### Logs
```shell
TypeError: isin() received an invalid combination of arguments - got (test_elements=int, elements=Tensor, ), but expected one of:
* (Tensor elements, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
* (Number element, Tensor test_elements, *, bool assume_unique, bool invert, Tensor out)
* (Tensor elements, Number test_element, *, bool assume_unique, bool invert, Tensor out)
```
### Environment
```shell
transformers: 4.43.1
torch: 2.2.0
torchaudio: 2.2.0
TTS: 0.22.0
Platform: Docker
```
### Additional context
_No response_
|
closed
|
2024-07-23T21:26:45Z
|
2024-07-24T18:30:14Z
|
https://github.com/coqui-ai/TTS/issues/3833
|
[
"bug"
] |
Fledermaus-20
| 3
|
laughingman7743/PyAthena
|
sqlalchemy
| 56
|
SQLAlchemy doubles percent sign (%)
|
(This can be fixed by setting `self._double_percents = False` in `AthenaIdentifierPreparer.__init__`; see the fix below.)
Broken:
```python
>>> import sqlalchemy
>>> engine = sqlalchemy.create_engine(.....)
>>> session = engine.connect()
>>> sql = "select date_parse('20191030', '%Y%m%d' )"
>>> txt = sqlalchemy.sql.text(sql)
>>> result = session.execute(txt, {})
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pyathena/util.py", line 28, in _wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pyathena/cursor.py", line 46, in execute
raise OperationalError(query_execution.state_change_reason)
pyathena.error.OperationalError: INVALID_FUNCTION_ARGUMENT: Invalid format: "20191030"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 248, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pyathena/util.py", line 28, in _wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pyathena/cursor.py", line 46, in execute
raise OperationalError(query_execution.state_change_reason)
sqlalchemy.exc.OperationalError: (pyathena.error.OperationalError) INVALID_FUNCTION_ARGUMENT: Invalid format: "20191030" [SQL: "select date_parse('20191030', '%%Y%%m%%d' )"] (Background on this error at: http://sqlalche.me/e/e3q8)
```
(See how it has doubled the percent signs in the last line of the traceback?)
Fix by changing sqlalchemy_athena.py to have this:
```python
class AthenaIdentifierPreparer(IdentifierPreparer):
    """PrestoIdentifierPreparer
    https://github.com/dropbox/PyHive/blob/master/pyhive/sqlalchemy_presto.py"""

    reserved_words = UniversalSet()

    def __init__(self, dialect, initial_quote='"',
                 final_quote=None, escape_quote='"',
                 quote_case_sensitive_collations=True, omit_schema=False):
        super(AthenaIdentifierPreparer, self).__init__(
            dialect, initial_quote, final_quote, escape_quote,
            quote_case_sensitive_collations, omit_schema)
        self._double_percents = False
```
Fixed:
```python
>>> import sqlalchemy
>>> engine = sqlalchemy.create_engine(....., echo="debug")
>>> session = engine.connect()
>>> sql = "select date_parse('20191030', '%Y%m%d' )"
>>> txt = sqlalchemy.sql.text(sql)
>>> result = session.execute(txt, {})
2018-11-13 23:22:15,519 INFO sqlalchemy.engine.base.Engine select date_parse('20191030', '%Y%m%d' )
2018-11-13 23:22:15,519 INFO sqlalchemy.engine.base.Engine {}
2018-11-13 23:22:16,813 DEBUG sqlalchemy.engine.base.Engine Col ('_col0',)
>>> result.fetchall()
2018-11-13 23:22:27,171 DEBUG sqlalchemy.engine.base.Engine Row (datetime.datetime(2019, 10, 30, 0, 0),)
[(datetime.datetime(2019, 10, 30, 0, 0),)]
>>> session.execute(sqlalchemy.text("select :word"), {'word':"cat"}).fetchall()
2018-11-13 23:23:27,964 INFO sqlalchemy.engine.base.Engine select %(word)s
2018-11-13 23:23:27,965 INFO sqlalchemy.engine.base.Engine {'word': 'cat'}
2018-11-13 23:23:29,199 DEBUG sqlalchemy.engine.base.Engine Col ('_col0',)
2018-11-13 23:23:29,199 DEBUG sqlalchemy.engine.base.Engine Row ('cat',)
[('cat',)]
```
|
closed
|
2018-11-14T04:48:40Z
|
2018-11-26T04:27:16Z
|
https://github.com/laughingman7743/PyAthena/issues/56
|
[] |
mister-average
| 6
|
Kav-K/GPTDiscord
|
asyncio
| 81
|
Don't end the conversation if an api error occurs
|
Currently, when an API error occurs (overload, rate limit, etc.), the conversation ends. Fix this so that the user can simply retry.
|
closed
|
2023-01-10T08:22:55Z
|
2023-01-11T23:07:12Z
|
https://github.com/Kav-K/GPTDiscord/issues/81
|
[
"enhancement",
"high-prio"
] |
Kav-K
| 1
|
quantumlib/Cirq
|
api
| 6,336
|
Fix CI notebook tests for MacOS and Windows
|
**Description of the issue**
After activating MacOS and Windows testing in [ci-daily.yml](https://github.com/quantumlib/Cirq/blob/master/.github/workflows/ci-daily.yml) in #6331, the notebook tests failed on these platforms, [example](https://github.com/quantumlib/Cirq/actions/runs/6700241980).
As a temporary solution, #6335 restricts notebook testing to run on Linux only.
For a proper solution we need to fix notebook tests on Mac and Windows platforms.
The affected notebook tests are those touched in #6335, i.e., those decorated with
```
@pytest.mark.skipif(sys.platform != "linux", reason="Linux-only test")
```
**Cirq version**
1.3.0.dev at 34e8dab087c65ff62957e8fc33c418f19f47333a
|
open
|
2023-10-31T06:38:21Z
|
2024-11-13T01:40:28Z
|
https://github.com/quantumlib/Cirq/issues/6336
|
[
"good first issue",
"no QC knowledge needed",
"kind/health",
"triage/accepted"
] |
pavoljuhas
| 6
|
ultralytics/ultralytics
|
python
| 19,316
|
Inquiry Regarding Licensing for Commercial Use of YOLO with Custom Training Tool
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello,
I have developed a user-friendly tool (UI) that enables code-free training of YOLO models on custom datasets. I am planning to commercialize this tool and would like to clarify the licensing requirements.
Specifically, I would like to know:
Do I need to obtain a commercial license to use YOLO within my tool, which will be marketed to customers for training custom models?
If my customers use the tool to train models and implement them in their production systems, will they also require a separate license?
Your guidance on the licensing implications for both the tool provider (myself) and the end users (my customers) would be highly appreciated.
Thank you in advance for your assistance. I look forward to your response.
Dorra
### Additional
_No response_
|
open
|
2025-02-19T16:45:49Z
|
2025-02-24T07:08:11Z
|
https://github.com/ultralytics/ultralytics/issues/19316
|
[
"question",
"enterprise"
] |
DoBacc
| 2
|