| repo_name (string, 9–75 chars) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
lepture/authlib
|
django
| 376
|
CSRF Warning! State not equal in request and response
|
**Describe the bug**
It happens when using authlib to configure Keycloak for Apache Superset. Everything works perfectly up until redirecting back from Keycloak to Superset.
**Error Stacks**
```
Traceback (most recent call last):
File "/home/abc/.local/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/home/abc/.local/lib/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/abc/.local/lib/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/abc/.local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/home/abc/.local/lib/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/home/abc/.local/lib/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/abc/.local/lib/python3.8/site-packages/flask_appbuilder/security/views.py", line 681, in oauth_authorized
resp = self.appbuilder.sm.oauth_remotes[provider].authorize_access_token()
File "/usr/local/lib/python3.8/site-packages/authlib/integrations/flask_client/remote_app.py", line 74, in authorize_access_token
params = self.retrieve_access_token_params(flask_req, request_token)
File "/usr/local/lib/python3.8/site-packages/authlib/integrations/base_client/base_app.py", line 145, in retrieve_access_token_params
params = self._retrieve_oauth2_access_token_params(request, params)
File "/usr/local/lib/python3.8/site-packages/authlib/integrations/base_client/base_app.py", line 126, in _retrieve_oauth2_access_token_params
raise MismatchingStateError()
authlib.integrations.base_client.errors.MismatchingStateError: mismatching_state: CSRF Warning! State not equal in request and response.
```
**To Reproduce**
A minimal example to reproduce the behavior:
This is my code:
In `superset_config.py`
```
AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Public'
CSRF_ENABLED = True
ENABLE_PROXY_FIX = True
OAUTH_PROVIDERS = [
    {
        'name': 'keycloak',
        'token_key': 'access_token',
        'icon': 'fa-icon',
        'remote_app': {
            'client_id': CLIENT_ID,
            'client_secret': CLIENT_SECRET,
            'client_kwargs': {
                'scope': 'openid email profile'
            },
            'access_token_method': 'POST',
            'api_base_url': 'https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/openid-connect/',
            'access_token_url': 'https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/openid-connect/token',
            'authorize_url': 'https://KEYCLOAK_URL/auth/realms/REALM_NAME/protocol/openid-connect/auth',
        },
    }
]
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager
```
In `extended_security.py`
```
from superset.security import SupersetSecurityManager

class OIDCSecurityManager(SupersetSecurityManager):
    def get_oauth_user_info(self, provider, response=None):
        if provider == 'keycloak':
            me = self.appbuilder.sm.oauth_remotes[provider].get("userinfo")
            return {
                "first_name": me.data.get("given_name", ""),
                "last_name": me.data.get("family_name", ""),
                "email": me.data.get("email", "")
            }
```
**Expected behavior**
Superset redirects the user to the Keycloak authentication site as expected. Upon finishing authentication and getting redirected back to Superset, `CSRF Warning! State not equal in request and response` occurs.
**Environment:**
Docker:
- Python Version: `3.8`
- Authlib Version: `0.15.4`
**Additional context**
Tried different browsers (Chrome, Firefox, Edge, etc.), clearing cookies, etc., but the error is still there.
Would very much appreciate some help.
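For context, one frequent cause of `mismatching_state` is that the Flask session cookie, which carries the saved OAuth state, is not sent back on the redirect from Keycloak to Superset. Below is a minimal sketch of standard Flask session-cookie settings one might experiment with in `superset_config.py`; these are assumptions to try, not a confirmed fix for this report:
```python
# Standard Flask session-cookie settings (Superset's config is handed to Flask).
# The OAuth state is stored in the session, so this cookie must survive the
# round trip to Keycloak; the values here are assumptions to experiment with.
SESSION_COOKIE_SAMESITE = "Lax"   # "None" would require HTTPS plus Secure
SESSION_COOKIE_SECURE = True      # set to False if Superset is served over plain HTTP
SESSION_COOKIE_HTTPONLY = True
```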
|
closed
|
2021-08-18T12:06:46Z
|
2024-11-11T13:40:19Z
|
https://github.com/lepture/authlib/issues/376
|
[
"bug"
] |
nl2412
| 25
|
Miserlou/Zappa
|
flask
| 2,230
|
[Question] Is there a way to set the API Gateway stage on AWS different from the 'stage name' in the config?
|
I have 3 stages... dev, staging, and prod. But I would like each of the three API Gateway endpoints to have a single API Gateway stage name of 'api' so that my app is accessible at '/api' on all three instead of '/dev', '/staging', and '/prod'.
I know about custom domains, but this doesn't work for me because I share the domain with my front-end client through a CloudFront distribution.
Is there a technical reason that there isn't a configuration option (that I can see) for this?
|
open
|
2021-10-25T17:20:47Z
|
2021-10-25T17:20:47Z
|
https://github.com/Miserlou/Zappa/issues/2230
|
[] |
dsmurrell
| 0
|
aio-libs/aiomysql
|
asyncio
| 62
|
Enable by default utf8 encoding
|
Hi,
If I import this SQL file into a MySQL DB with this line: https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/config/create.sql#L110
When I retrieve this line with aiomysql, I get a **??????????????** string instead of **フレームワークのベンチマーク**.
Except if I add two parameters to the aiomysql connection:
```
charset='utf8',
use_unicode=True,
```
At least to me, it should be the default behaviour of aiomysql.
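For illustration, a minimal sketch of a connection that sets the two parameters; the host, credentials, database name, and query are placeholders:
```python
import asyncio
import aiomysql

async def main():
    # placeholder host/credentials/database
    conn = await aiomysql.connect(
        host="127.0.0.1", port=3306,
        user="root", password="", db="test",
        charset="utf8", use_unicode=True,  # the two parameters discussed above
    )
    cur = await conn.cursor()
    await cur.execute("SELECT 'フレームワークのベンチマーク'")
    print(await cur.fetchone())  # should come back intact, not as '??????'
    await cur.close()
    conn.close()

asyncio.run(main())
```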
What do you think?
Have a nice day.
|
closed
|
2016-02-14T22:00:19Z
|
2016-02-16T11:17:17Z
|
https://github.com/aio-libs/aiomysql/issues/62
|
[] |
ludovic-gasc
| 4
|
feature-engine/feature_engine
|
scikit-learn
| 41
|
Discretiser - return interval boundaries
|
I would like to suggest that the discretiser have an option to return interval boundaries instead of an integer; it makes the content of the output variable more understandable.
For example: outputs 1, 3, 2 become (0,10], (20,30], (10,20].
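A minimal sketch of the interval labels being requested, using plain pandas rather than the feature_engine API (the column values are made up):
```python
import pandas as pd

# made-up values falling into three equal-width bins
values = pd.Series([5, 27, 14])

# instead of integer codes 0, 2, 1, pd.cut labels each value with its interval
binned = pd.cut(values, bins=[0, 10, 20, 30])
print(binned.tolist())  # [Interval(0, 10, closed='right'), Interval(20, 30, ...), Interval(10, 20, ...)]
```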
Thanks in advance
|
closed
|
2020-05-12T14:33:21Z
|
2020-05-14T17:33:57Z
|
https://github.com/feature-engine/feature_engine/issues/41
|
[] |
pellanda
| 1
|
minimaxir/textgenrnn
|
tensorflow
| 31
|
ValueError: Layer #1 (named "embedding")
|
Hi,
After I train the model with the following config:
```
num_epochs: 1000
gen_epochs: 10
batch_size: 128
prop_keep: 1.0
new_model: True
model_config:
  rnn_layers: 2
  rnn_size: 128
  rnn_bidirectional: False
  max_length: 40
  dim_embeddings: 100
  word_level: False
```
I am trying to generate some text with the following code:
```
from textgenrnn import textgenrnn
textgen_2 = textgenrnn('weights.hdf5')
textgen_2.generate(3, temperature=1.0)
```
But every time I get the following error (I changed the dataset, the number of epochs, and trained a few times from scratch; the result is the same):
```
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    textgen_2 = textgenrnn('weights.hdf5')
  File "textgenrnn/textgenrnn.py", line 66, in __init__
    weights_path=weights_path)
  File "textgenrnn/model.py", line 38, in textgenrnn_model
    model.load_weights(weights_path, by_name=True)
  File "lib/python3.6/site-packages/keras/engine/network.py", line 1177, in load_weights
    reshape=reshape)
  File "lib/python3.6/site-packages/keras/engine/saving.py", line 1018, in load_weights_from_hdf5_group_by_name
    str(weight_values[i].shape) + '.')
ValueError: Layer #1 (named "embedding"), weight <tf.Variable 'embedding/embeddings:0' shape=(465, 100) dtype=float32_ref> has shape (465, 100), but the saved weight has shape (91, 100).
```
What might be the problem?
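For what it's worth, one plausible cause (an assumption, not confirmed in this thread) is that only the weights file is passed to the constructor, so the embedding is rebuilt with a different vocabulary size (465 here) than the 91-entry vocabulary saved during training. Training a new model also writes vocab and config files next to the weights; a sketch that loads all three, with placeholder file names:
```python
from textgenrnn import textgenrnn

# file names are placeholders for the artifacts written during training
textgen_2 = textgenrnn(weights_path='weights.hdf5',
                       vocab_path='weights_vocab.json',
                       config_path='weights_config.json')
textgen_2.generate(3, temperature=1.0)
```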
|
closed
|
2018-06-10T02:23:06Z
|
2018-06-10T02:29:46Z
|
https://github.com/minimaxir/textgenrnn/issues/31
|
[] |
ihavetoomanyquestions
| 1
|
Gozargah/Marzban
|
api
| 1,216
|
Connection problem on the Hamrah-e Aval (MCI) mobile operator
|
Friends, when I ping with CMD, both the IP and the domain are completely fine, but on Hamrah-e Aval (MCI) neither the panel loads for me nor do the servers work. The panel also doesn't load via the direct IP on MCI.
I can SSH into it just fine over MCI mobile internet,
but the X-UI panel I installed behaved the same way.
Is there a way to get around this MCI restriction?
|
closed
|
2024-08-03T21:33:44Z
|
2024-08-04T04:46:57Z
|
https://github.com/Gozargah/Marzban/issues/1216
|
[] |
MatinSenPai
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 499
|
Real Time Voice Cloning Issue
|
After I have done everything, when I run `python demo_toolbox.py` to launch the toolbox, it shows `ImportError: DLL load failed: The specified module could not be found.`
How can I fix this problem?
|
closed
|
2020-08-20T21:22:24Z
|
2020-08-23T13:50:29Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/499
|
[] |
Jason3018
| 3
|
hankcs/HanLP
|
nlp
| 739
|
Segmentation problem: the custom dictionary sometimes fails to take effect
|
<!--
Notes and the version number are required; otherwise the issue will not be answered. If you would like a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
  - [Home page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer either.
* I understand that the open-source community is a volunteer community built on shared interest and assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I have entered an x in these brackets to confirm the items above.
## Version
<!-- For a release version, state the jar file name without the extension; for the GitHub repository version, state whether it is the master or the portable branch -->
The current latest version is: hanlp-portable-1.5.3
The version I am using is: hanlp-portable-1.3.2
<!-- The items above are required; the rest is free-form -->
## My question
The custom dictionary fails to take effect in certain cases. Segmenting "海南省海口市龙华区春来早市场" gives a normal result, but adding a locative character 内 afterwards produces abnormal segmentation.
## Reproducing the problem
My custom dictionary:
```
海南省 dictrict1 1000
海口市 dictrict2 1000
龙华区 dictrict3 1000
春来早市场 resrge 1000
```
```
List<Term> termList = HanLP.segment("海南省海口市龙华区春来早市场");
// segmentation result: 海南省/dictrict1, 海口市/dictrict2, 龙华区/dictrict3, 春来早市场/resrge
```
This shows that "春来早市场" was loaded correctly at this point.
### Triggering code
```
List<Term> termList = HanLP.segment("海南省海口市龙华区春来早市场内");
```
### Expected output
```
海南省/dictrict1, 海口市/dictrict2, 龙华区/dictrict3, 春来早市场/resrge, 内/s
```
### Actual output
```
海南省/dictrict1, 海口市/dictrict2, 龙华区/dictrict3, 春/tg, 来/v, 早市/n, 场内/s
```
After adding the character 内 and re-segmenting, the segmentation result differs considerably from what was expected. How should this situation be handled?
|
closed
|
2018-01-10T07:07:05Z
|
2020-01-01T10:51:04Z
|
https://github.com/hankcs/HanLP/issues/739
|
[
"ignored"
] |
kjdongzh
| 6
|
biosustain/potion
|
sqlalchemy
| 140
|
Cross-resource fields.Inline does not update the schema accordingly
|
I have a resource with a Route that, instead of returning its own schema, should return the schema of another ModelResource. I'll explain with some code that is purely a demo but should give the idea:
```
class UserResource(ModelResource):
    class Meta:
        model = User


class AuthResource(Resource):
    class Meta:
        name = 'auth'

    @Route.PUT
    @jwt_required
    def link_facebook_account(self, fb_user_token: fields.String()) -> fields.Inline('user'):
        user = User.query.get(get_jwt_identity())
        link_fb_account(user, fb_user_token)
        return user
```
Since `link_facebook_account` annotates the `user` resource as its output value, I'm expecting the /auth/schema endpoint to describe the `targetSchema` for the `link-facebook-account` link like this:
```
"targetSchema": {
    "$ref": "/api/v1/user/schema"
}
```
But what I have is:
```
"targetSchema": {
    "$ref": "/api/v1/auth/schema"
}
```
So the targetSchema refers to AuthResource's own schema and not to the UserResource schema. In any case, calling the endpoint produces the correct result, returning the serialized user object.
|
closed
|
2018-07-18T15:16:02Z
|
2018-07-20T11:29:10Z
|
https://github.com/biosustain/potion/issues/140
|
[] |
dappiu
| 0
|
plotly/dash
|
plotly
| 2,583
|
Inline callback exceptions in Jupyter Notebook
|
closed
|
2023-07-04T21:20:27Z
|
2023-07-04T21:38:36Z
|
https://github.com/plotly/dash/issues/2583
|
[] |
mcstarioni
| 0
|
|
labmlai/annotated_deep_learning_paper_implementations
|
machine-learning
| 61
|
torch version to run a demo code for FNet
|
Hi guys! Thanks for the awesome work! I am excited to play with the code. Here is the gist
https://gist.github.com/ra312/f3c895aba6e8954985258de10e9be52f
On the first attempt, I encountered this exception:
<img width="863" alt="image" src="https://user-images.githubusercontent.com/12486277/122645152-bd598100-d13a-11eb-8a6b-15824f1f8f46.png">
I believe the right version of PyTorch should help to fix this!
These are my current dependencies
<img width="463" alt="image" src="https://user-images.githubusercontent.com/12486277/122645170-d104e780-d13a-11eb-8226-b1f38fb006f6.png">
If someone can advise on torch version, it would save me time and be super cool!
Many thanks, Rauan.
|
closed
|
2021-06-19T14:15:10Z
|
2022-07-02T10:03:29Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/61
|
[
"enhancement"
] |
ra312
| 6
|
ultralytics/yolov5
|
deep-learning
| 12,894
|
How to remove a detection layer?
|
### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am using YOLOv5 to create a detection model. The model will only see small objects (boxes of 10x10). Taking this into account, I have thought of removing some of the detection layers that are in charge of detecting bigger objects in the image (in particular, I am thinking of keeping layers until P3/8). I have two questions:
1. The main reason I want to do this is to reduce the number of parameters and, therefore, make the model faster to train. Would this be beneficial? Would it help to detect these small objects better?
2. How could I implement this? I have looked at other issues such as [Issue #1418](https://github.com/ultralytics/yolov5/issues/1418) and tried to implement the same thing in reverse (removing the layers in charge of those bigger objects), but I face some errors which I am not sure how to fix. I have made the following changes.
<img width="367" alt="Captura de pantalla 2024-04-08 a las 14 08 45" src="https://github.com/ultralytics/yolov5/assets/159000555/1747f712-19a4-45c2-bbe4-390c3687ec08">
Could someone please help me with this?
### Additional
_No response_
|
closed
|
2024-04-08T12:09:44Z
|
2024-10-20T19:43:12Z
|
https://github.com/ultralytics/yolov5/issues/12894
|
[
"question",
"Stale"
] |
nachoogriis
| 3
|
lux-org/lux
|
jupyter
| 240
|
Extend support for ordinal data type
|
Ordinal data are common in rating scales for surveys, as well as attributes like Age or number of years for X.
Ordinal data currently gets classified as categorical, especially if the column contains NaN values.
The [young people survey dataset](https://www.kaggle.com/miroslavsabo/young-people-survey) on Kaggle is a good example of this, since it contains lots of rating scale data.

This issue should extend support for ordinal data type detection, as well as better visualizations to display for the ordinal data type. For example, bar charts of ordinal data should be ordered by the category order instead of being sorted based on the measure values. In addition, correlations involving one or more ordinal attributes would be relevant to show.
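As a concrete illustration (plain pandas, not the Lux API), rating-scale answers like those in the survey dataset are naturally ordered categoricals, which is the kind of structure an ordinal type detection could exploit:
```python
import pandas as pd

# hypothetical 1-5 rating-scale answers with a missing response
answers = [5, 3, None, 4, 1]
ordinal = pd.Categorical(answers, categories=[1, 2, 3, 4, 5], ordered=True)
print(ordinal.codes)                 # [ 4  2 -1  3  0]; -1 marks the NaN
print(ordinal.min(), ordinal.max())  # 1 5 -- the order is meaningful, unlike nominal data
```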
|
open
|
2021-01-23T08:55:59Z
|
2021-03-02T13:09:40Z
|
https://github.com/lux-org/lux/issues/240
|
[
"easy"
] |
dorisjlee
| 1
|
activeloopai/deeplake
|
tensorflow
| 2,320
|
deeplake.util.exceptions.TransformError
|
## 🐛🐛 Bug Report
I'm attempting to load some Documents and get a `TransformError` - could someone please point me in the right direction? Thanks!
I'm afraid the traceback doesn't mean much to me.
```python
db = DeepLake(dataset_path=deeplake_path, embedding_function=embeddings)
db.add_documents(texts)
```
python 3.11
Windows 11
### ⚗️ Current Behavior
```
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (0,) float32 None
ids text (0,) str None
metadata json (0,) str None
text text (0,) str None
Evaluating ingest: 0%| | 0/1 [00:10<?
Traceback (most recent call last):
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1065, in extend
self._extend(samples, progressbar, pg_callback=pg_callback)
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1001, in _extend
self._samples_to_chunks(
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 824, in _samples_to_chunks
num_samples_added = current_chunk.extend_if_has_space(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 50, in extend_if_has_space
return self.extend_if_has_space_byte_compression(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 233, in extend_if_has_space_byte_compression
serialized_sample, shape = self.serialize_sample(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\base_chunk.py", line 342, in serialize_sample
incoming_sample, shape = serialize_text(
^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 505, in serialize_text
incoming_sample, shape = text_to_bytes(incoming_sample, dtype, htype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 458, in text_to_bytes
byts = json.dumps(sample, cls=HubJsonEncoder).encode()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
ValueError: Circular reference detected
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\util\transform.py", line 220, in _transform_and_append_data_slice
transform_dataset.flush()
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\transform\transform_dataset.py", line 154, in flush
raise SampleAppendError(name) from e
deeplake.util.exceptions.SampleAppendError: Failed to append a sample to the tensor 'metadata'. See more details in the traceback.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1065, in extend
self._extend(samples, progressbar, pg_callback=pg_callback)
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 1001, in _extend
self._samples_to_chunks(
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk_engine.py", line 824, in _samples_to_chunks
num_samples_added = current_chunk.extend_if_has_space(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 50, in extend_if_has_space
return self.extend_if_has_space_byte_compression(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\chunk_compressed_chunk.py", line 233, in extend_if_has_space_byte_compression
serialized_sample, shape = self.serialize_sample(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\chunk\base_chunk.py", line 342, in serialize_sample
incoming_sample, shape = serialize_text(
^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 505, in serialize_text
incoming_sample, shape = text_to_bytes(incoming_sample, dtype, htype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\serialize.py", line 458, in text_to_bytes
byts = json.dumps(sample, cls=HubJsonEncoder).encode()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
^^^^^^^^^^^
File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\charles\AppData\Local\Programs\Python\Python311\Lib\json\encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
ValueError: Circular reference detected
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\util\transform.py", line 177, in _handle_transform_error
transform_dataset.flush()
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\transform\transform_dataset.py", line 154, in flush
raise SampleAppendError(name) from e
deeplake.util.exceptions.SampleAppendError: Failed to append a sample to the tensor 'metadata'. See more details in the traceback.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\charles\Documents\GitHub\Chat-with-Github-Repo\venv\Lib\site-packages\deeplake\core\transform\transform.py", line 298, in eval
raise TransformError(
deeplake.util.exceptions.TransformError: Transform failed at index 0 of the input data. See traceback for more details.
```
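Since the inner error is `ValueError: Circular reference detected` while serializing the `metadata` tensor, one hedged way to narrow it down (not part of this report, and assuming LangChain-style `Document` objects in `texts`) is to check that every document's metadata is plain JSON before calling `add_documents`:
```python
import json

# hypothetical pre-flight check: find documents whose metadata cannot be JSON-encoded
for i, doc in enumerate(texts):
    try:
        json.dumps(doc.metadata)
    except (TypeError, ValueError) as exc:
        print(f"document {i} has non-JSON-serializable metadata: {exc}")
```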
|
closed
|
2023-04-27T09:10:31Z
|
2023-05-27T16:43:44Z
|
https://github.com/activeloopai/deeplake/issues/2320
|
[
"bug"
] |
CharlesFr
| 3
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 16,322
|
[Bug]: Adding "txt2img" to "Hidden UI tabs" makes img2img no longer function
|
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
After adding `txt2img` to `Settings -> User Interface -> Hidden UI` tabs and Reloading UI, pressing Generate in the `img2img` tab has no effect.
Browser's console displays JavaScript errors: `error running callback function () : TypeError: counter is null` on page load and `Uncaught (in promise) TypeError: P[x] is undefined` on pressing Generate
### Steps to reproduce the problem
1. Go to Settings -> User Interface -> Hidden UI
2. Add txt2img
3. Reload UI
4. Go to img2img
5. Load image
6. Press Generate
### What should have happened?
Images should be generated
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
```json
{
"Platform": "Windows-10-10.0.22631-SP0",
"Python": "3.10.6",
"Version": "v1.10.1",
"Commit": "82a973c04367123ae98bd9abdf80d9eda9b910e2",
"Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\t!start.bat\n\tUncategorized.lnk\n\nnothing added to commit but untracked files present (use \"git add\" to track)",
"Script path": "F:\\Programs\\SDiff\\2",
"Data path": "F:\\Programs\\SDiff\\2",
"Extensions dir": "F:\\Programs\\SDiff\\2\\extensions",
"Checksum": "f073c81935c43cef352eca83d552fb41ca9fe2b77624c4deaba9990c20fcc4e9",
"Commandline": [
"launch.py",
"--ckpt-dir",
"F:\\Programs\\SDiff\\! models",
"--no-half",
"--xformers",
"--theme",
"dark"
],
"Torch env info": {
"torch_version": "2.1.2+cu121",
"is_debug_build": "False",
"cuda_compiled_version": "12.1",
"gcc_version": null,
"clang_version": null,
"cmake_version": "version 3.30.0-rc3",
"os": "Microsoft Windows 11 Pro",
"libc_version": "N/A",
"python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.22631-SP0",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "560.70",
"nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 4090",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.0.post0",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture=9",
"CurrentClockSpeed=3701",
"DeviceID=CPU0",
"Family=107",
"L2CacheSize=6144",
"L2CacheSpeed=",
"Manufacturer=AuthenticAMD",
"MaxClockSpeed=3701",
"Name=AMD Ryzen 9 5900X 12-Core Processor ",
"ProcessorType=3",
"Revision=8448"
]
},
"Exceptions": [],
"CPU": {
"model": "AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD",
"count logical": 24,
"count physical": 12
},
"RAM": {
"total": "32GB",
"used": "15GB",
"free": "17GB"
},
"Extensions": [],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--ckpt-dir \"F:\\Programs\\SDiff\\! models\" --no-half --xformers --theme dark",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"sd_model_checkpoint": "somemodel.safetensors [xxxx]",
"sd_checkpoint_hash": "xxxx",
"outdir_samples": "",
"outdir_txt2img_samples": "Uncategorized",
"outdir_img2img_samples": "Uncategorized",
"outdir_extras_samples": "Uncategorized",
"outdir_grids": "",
"outdir_txt2img_grids": "Uncategorized",
"outdir_img2img_grids": "Uncategorized",
"outdir_save": "Uncategorized",
"outdir_init_images": "Uncategorized",
"samples_save": false,
"samples_format": "jpg",
"samples_filename_pattern": "",
"save_images_add_number": true,
"save_images_replace_action": "Replace",
"grid_save": false,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "",
"n_rows": -1,
"font": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": true,
"img_downscale_threshold": 4.0,
"target_side_length": 4000.0,
"img_max_size_mp": 200.0,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"save_write_log_csv": false,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"notification_audio": false,
"notification_volume": 100,
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"auto_backcompat": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"use_downcasted_alpha_bar": false,
"refiner_switch_by_sample_steps": false,
"lora_functional": false,
"extra_networks_show_hidden_directories": true,
"extra_networks_dir_button_function": false,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1,
"extra_networks_card_width": 0.0,
"extra_networks_card_height": 0.0,
"extra_networks_card_text_scale": 1,
"extra_networks_card_show_desc": true,
"extra_networks_card_description_is_html": false,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_tree_view_style": "Dirs",
"extra_networks_tree_view_default_enabled": true,
"extra_networks_tree_view_default_width": 180.0,
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_hypernetwork": "None",
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_bundled_ti_to_infotext": true,
"lora_show_all": false,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"lora_not_found_warning_console": false,
"lora_not_found_gradio_warning": false,
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0,
"s_min_uncond_all": false,
"token_merging_ratio": 0,
"token_merging_ratio_img2img": 0,
"token_merging_ratio_hr": 0,
"pad_cond_uncond": false,
"pad_cond_uncond_v0": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"fp8_storage": "Disable",
"cache_fp16_weight": false,
"hide_samplers": [],
"eta_ddim": 0,
"eta_ancestral": 1,
"ddim_discretize": "uniform",
"s_churn": 0,
"s_tmin": 0,
"s_tmax": 0,
"s_noise": 1,
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"sd_noise_schedule": "Default",
"skip_early_cond": 0,
"beta_dist_alpha": 0.6,
"beta_dist_beta": 0.6,
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_checkpoint_cache": 0,
"sd_unet": "Automatic",
"enable_quantization": false,
"emphasis": "Original",
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"sdxl_clip_l_skip": false,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": false,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"enable_prompt_comments": true,
"sd3_enable_t5": false,
"sdxl_crop_top": 0.0,
"sdxl_crop_left": 0.0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "Automatic",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision_bfloat16": false,
"auto_vae_precision": true,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1,
"initial_noise_multiplier": 1,
"img2img_extra_noise": 0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"img2img_batch_show_results_limit": 32,
"overlay_inpaint": true,
"return_grid": true,
"do_not_show_images": false,
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250.0,
"sd_webui_modal_lightbox_icon_opacity": 1,
"sd_webui_modal_lightbox_toolbar_opacity": 0.9,
"gallery_height": "",
"open_dir_button_choice": "Subdirectory",
"enable_pnginfo": true,
"save_txt": false,
"add_model_name_to_info": true,
"add_model_hash_to_info": true,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_skip_pasting": [],
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": true,
"live_previews_image_format": "png",
"show_progress_grid": true,
"show_progress_every_n_steps": 10,
"show_progress_type": "Approx NN",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Prompt",
"live_preview_refresh_period": 1000.0,
"live_preview_fast_interrupt": false,
"js_live_preview_in_modal_lightbox": false,
"prevent_screen_sleep_during_generation": true,
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ",
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"keyedit_move": true,
"disable_token_counters": false,
"include_styles_into_token_counters": true,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"compact_prompt_box": true,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"sd_checkpoint_dropdown_use_short": false,
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"interrupt_after_current": true,
"localization": "None",
"quicksettings_list": [
"sd_model_checkpoint"
],
"ui_tab_order": [],
"hidden_tabs": [
"txt2img"
],
"ui_reorder_list": [],
"gradio_theme": "Default",
"gradio_themes_cache": true,
"show_progress_in_title": true,
"send_seed": true,
"send_size": true,
"enable_reloading_ui_scripts": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"prioritized_callbacks_app_started": [],
"prioritized_callbacks_model_loaded": [],
"prioritized_callbacks_ui_settings": [],
"prioritized_callbacks_infotext_pasted": [],
"prioritized_callbacks_script_unloaded": [],
"prioritized_callbacks_before_ui": [],
"prioritized_callbacks_list_optimizers": [],
"prioritized_callbacks_before_token_counter": [],
"prioritized_callbacks_script_before_process": [],
"prioritized_callbacks_script_process": [],
"prioritized_callbacks_script_post_sample": [],
"prioritized_callbacks_script_on_mask_blend": [],
"prioritized_callbacks_script_postprocess_maskoverlay": [],
"profiling_enable": false,
"profiling_activities": [
"CPU"
],
"profiling_record_shapes": true,
"profiling_profile_memory": true,
"profiling_with_stack": true,
"profiling_filename": "trace.json",
"auto_launch_browser": "Disable",
"enable_console_prompts": false,
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"enable_upscale_progressbar": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"face_restoration": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"postprocessing_enable_in_main_ui": [],
"postprocessing_disable_in_extras": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"postprocessing_existing_caption_action": "Ignore",
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"dat_enabled_models": [
"DAT x2",
"DAT x3",
"DAT x4"
],
"DAT_tile": 192,
"DAT_tile_overlap": 8,
"set_scale_by_when_changing_upscaler": false,
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500.0,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120.0,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_shrink_brush": "Q",
"canvas_hotkey_grow_brush": "W",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
],
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": ""
},
"Startup": {
"total": 0.45717644691467285,
"records": {
"app reload callback": 0.0,
"scripts unloaded callback": 0.0,
"set samplers": 0.0,
"list extensions": 0.001501321792602539,
"restore config state file": 0.0,
"list SD models": 0.002532958984375,
"list localizations": 0.0005013942718505859,
"load scripts/custom_code.py": 0.003001689910888672,
"load scripts/img2imgalt.py": 0.0005006790161132812,
"load scripts/loopback.py": 0.0,
"load scripts/outpainting_mk_2.py": 0.0005006790161132812,
"load scripts/poor_mans_outpainting.py": 0.0,
"load scripts/postprocessing_codeformer.py": 0.0004999637603759766,
"load scripts/postprocessing_gfpgan.py": 0.0,
"load scripts/postprocessing_upscale.py": 0.0005004405975341797,
"load scripts/prompt_matrix.py": 0.0005006790161132812,
"load scripts/prompts_from_file.py": 0.0,
"load scripts/sd_upscale.py": 0.0005002021789550781,
"load scripts/xyz_grid.py": 0.0005004405975341797,
"load scripts/ldsr_model.py": 0.016555070877075195,
"load scripts/lora_script.py": 0.07069611549377441,
"load scripts/scunet_model.py": 0.012040138244628906,
"load scripts/swinir_model.py": 0.012031316757202148,
"load scripts/hotkey_config.py": 0.0,
"load scripts/extra_options_section.py": 0.0,
"load scripts/hypertile_script.py": 0.02559041976928711,
"load scripts/postprocessing_autosized_crop.py": 0.0005004405975341797,
"load scripts/postprocessing_caption.py": 0.0005004405975341797,
"load scripts/postprocessing_create_flipped_copies.py": 0.0,
"load scripts/postprocessing_focal_crop.py": 0.0004999637603759766,
"load scripts/postprocessing_split_oversized.py": 0.0005004405975341797,
"load scripts/soft_inpainting.py": 0.0,
"load scripts/comments.py": 0.013545036315917969,
"load scripts/refiner.py": 0.0004999637603759766,
"load scripts/sampler.py": 0.0,
"load scripts/seed.py": 0.0008001327514648438,
"load scripts": 0.160264253616333,
"load upscalers": 0.0005002021789550781,
"refresh VAE": 0.0010006427764892578,
"refresh textual inversion templates": 0.0,
"scripts list_optimizers": 0.0010008811950683594,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.0005011558532714844,
"initialize extra networks": 0.0020008087158203125,
"scripts before_ui_callback": 0.0005004405975341797,
"create ui": 0.2194669246673584,
"gradio launch": 0.06424379348754883,
"add APIs": 0.0031616687774658203,
"app_started_callback/lora_script.py": 0.0,
"app_started_callback": 0.0
}
},
"Packages": [
"accelerate==0.21.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohappyeyeballs==2.3.2",
"aiohttp==3.10.0",
"aiosignal==1.3.1",
"altair==5.3.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==23.2.0",
"blendmodes==2022",
"certifi==2024.7.4",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a",
"colorama==0.4.6",
"contourpy==1.2.1",
"cycler==0.12.1",
"deprecation==2.1.0",
"diskcache==5.6.3",
"einops==0.4.1",
"exceptiongroup==1.2.2",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.4.0",
"filelock==3.15.4",
"filterpy==1.4.5",
"fonttools==4.53.1",
"frozenlist==1.4.1",
"fsspec==2024.6.1",
"ftfy==6.2.0",
"gitdb==4.0.11",
"GitPython==3.1.32",
"gradio==3.41.2",
"gradio_client==0.5.0",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.24.3",
"idna==3.7",
"imageio==2.34.2",
"importlib_resources==6.4.0",
"inflection==0.5.1",
"Jinja2==3.1.4",
"jsonmerge==1.8.0",
"jsonschema==4.23.0",
"jsonschema-specifications==2023.12.1",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy_loader==0.4",
"lightning-utilities==0.11.6",
"llvmlite==0.43.0",
"MarkupSafe==2.1.5",
"matplotlib==3.9.1",
"mpmath==1.3.0",
"multidict==6.0.5",
"networkx==3.3",
"numba==0.60.0",
"numpy==1.26.2",
"omegaconf==2.2.3",
"open-clip-torch==2.20.0",
"opencv-python==4.10.0.84",
"orjson==3.10.6",
"packaging==24.1",
"pandas==2.2.2",
"piexif==1.1.3",
"Pillow==9.5.0",
"pillow-avif-plugin==1.4.3",
"pip==24.2",
"protobuf==3.20.0",
"psutil==5.9.5",
"pydantic==1.10.17",
"pydub==0.25.1",
"pyparsing==3.1.2",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.9",
"pytorch-lightning==1.9.4",
"pytz==2024.1",
"PyWavelets==1.6.0",
"PyYAML==6.0.1",
"referencing==0.35.1",
"regex==2024.7.24",
"requests==2.32.3",
"resize-right==0.0.2",
"rpds-py==0.19.1",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scipy==1.14.0",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.5.1",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"spandrel==0.3.4",
"spandrel_extra_arches==0.1.1",
"starlette==0.26.1",
"sympy==1.13.1",
"tifffile==2024.7.24",
"timm==1.0.8",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"toolz==0.12.1",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.0.post0",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121",
"tqdm==4.66.4",
"trampoline==0.1.2",
"transformers==4.30.2",
"typing_extensions==4.12.2",
"tzdata==2024.1",
"urllib3==2.2.2",
"uvicorn==0.30.3",
"wcwidth==0.2.13",
"websockets==11.0.3",
"xformers==0.0.23.post1",
"yarl==1.9.4"
]
}
```
### Console logs
```Shell
Launching Web UI with arguments: --ckpt-dir 'F:\Programs\SDiff\! models' --no-half --xformers --theme dark
Loading weights [xxxx] from F:\Programs\SDiff\! models\somemodel.safetensors
Creating model from config: F:\Programs\SDiff\2\configs\v1-inpainting-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
F:\Programs\SDiff\2\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 39.4s (prepare environment: 14.9s, import torch: 12.6s, import gradio: 5.5s, setup paths: 2.8s, initialize shared: 0.4s, other imports: 1.5s, list SD models: 0.1s, load scripts: 0.9s, create ui: 0.5s, gradio launch: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 14.5s (load weights from disk: 0.6s, create model: 0.5s, apply weights to model: 12.7s, apply dtype to VAE: 0.3s, calculate empty prompt: 0.3s).
Restarting UI...
Closing server running on port: 7860
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 0.5s (load scripts: 0.2s, create ui: 0.2s).
Restarting UI...
Closing server running on port: 7860
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 0.6s (load scripts: 0.2s, create ui: 0.4s).
```
### Additional information
_No response_
|
open
|
2024-08-02T19:34:02Z
|
2024-10-29T15:11:44Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16322
|
[
"bug"
] |
1cheez
| 2
|
mars-project/mars
|
pandas
| 2,930
|
[BUG] `dask.persist` cannot work on dask-on-mars
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
`dask.persist` cannot work on dask-on-mars.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```
In [1]: import mars
In [2]: mars.new_session()
Web service started at http://0.0.0.0:64573
Out[2]: <mars.deploy.oscar.session.SyncSession at 0x7fa0f8366d90>
In [3]: import dask
...: from mars.contrib.dask import mars_scheduler
In [4]: def inc(x):
...: return x + 1
...:
In [5]: dask_task = dask.delayed(inc)(1)
In [6]: dask_task.persist(scheduler=mars_scheduler)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-6-095d61a2acc0> in <module>
----> 1 dask_task.persist(scheduler=mars_scheduler)
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/base.py in persist(self, **kwargs)
259 dask.base.persist
260 """
--> 261 (result,) = persist(self, traverse=False, **kwargs)
262 return result
263
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/base.py in persist(traverse, optimize_graph, scheduler, *args, **kwargs)
831 postpersists.append((rebuild, a_keys, state))
832
--> 833 results = schedule(dsk, keys, **kwargs)
834 d = dict(zip(keys, results))
835 results2 = [r({k: d[k] for k in ks}, *s) for r, ks, s in postpersists]
~/Workspace/mars/mars/contrib/dask/scheduler.py in mars_scheduler(dsk, keys)
39 Computed values corresponding to the provided keys.
40 """
---> 41 res = reduce(mars_dask_get(dsk, keys)).execute().fetch()
42 if not isinstance(res, List):
43 return [[res]]
~/Workspace/mars/mars/contrib/dask/scheduler.py in mars_dask_get(dsk, keys)
86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
87
---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
~/Workspace/mars/mars/contrib/dask/scheduler.py in <listcomp>(.0)
86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
87
---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
~/Workspace/mars/mars/contrib/dask/scheduler.py in <listcomp>(.0)
86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
87
---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/highlevelgraph.py in __getitem__(self, key)
737 pass
738
--> 739 raise KeyError(key)
740
741 def __len__(self) -> int:
KeyError: 'i'
```
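For contrast, a minimal sketch (an assumption based on the dask-on-mars documentation, not something shown in this report) of the same delayed task going through `compute` instead of `persist`:
```python
import dask
import mars
from mars.contrib.dask import mars_scheduler

mars.new_session()

def inc(x):
    return x + 1

dask_task = dask.delayed(inc)(1)
# compute() is the documented dask-on-mars path; persist() is what fails above
print(dask_task.compute(scheduler=mars_scheduler))  # expected: 2
```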
|
closed
|
2022-04-18T13:35:17Z
|
2022-04-24T02:19:24Z
|
https://github.com/mars-project/mars/issues/2930
|
[
"type: bug",
"dask-on-mars"
] |
qinxuye
| 4
|
statsmodels/statsmodels
|
data-science
| 8,652
|
ENH/REF/SUMM: enhance, refactor power classes
|
triggered by #8646 and #8651
see also #8159 for power classes without effect size
related issues: ...
The semi-generic power classes were written initially based on effect sizes and packages GPower and R pwr.
Design decisions were based on those packages; with the generic structure it is often not "obvious" how to use them.
Target is to make them more directly usable and extend them to new cases, with possibly test specific power classes.
This should be based more on NCSS/PASS and new Stata pss, than on the previous packages.
The more recent power functions, especially for cases where the var/std differs between null and alternative as in rates and proportions, were heavily based on the NCSS/PASS docs.
Additionally, I want
- power classes or/and effect sizes based on test statistics, tstat, fstat, ..., with normalized noncentrality, nc / nobs.
- main difference is e.g. using std of test statistic instead of std of population as in Cohen's d family.
- ...
not clear yet
- options not yet included, e.g. unequal var in t_test
- more general: power computation that are specification robust,
e.g. robust cov_type, excess dispersion in poisson, kurtosis in variance hypothesis tests
- ...
specific todos
- review existing power classes for hidden assumptions especially what special cases they are designed for
- FTestPower, see comments in #8646
- TTestIndPower: assumes equal var, and cohen's d effect size
- maybe distinguish more clearly between keyword we can solve for and keywords that define setting or hypothesis test. For example we will likely need a `method` argument if we make classes for recently added power functions like those for rates and proportions.
- power classes for recently added power functions, rates, proportions, variances, ...
- power classes for TOST and other hypothesis tests that are not point hypothesis
- ...
- ...
I guess (not checked again): The basic power classes for one sample TTestPower can be used for generic case if std in effect size is the std of the (unstandardized) test statistic.
Why is there currently no NormalPower class? We only have NormalIndPower with same equal var assumption as TTestIndPower.
**update**
NormalIndPower can be used for one sample test if ratio=0
```
``ratio`` can be set to zero in order to get the power for a
one sample test.
```
It wouldn't cost much to add a specific NormalPower class.
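For instance, a quick sketch of the ratio=0 usage quoted above (the numbers are arbitrary):
```python
from statsmodels.stats.power import NormalIndPower

# power of a one-sample z-test, obtained from the two-sample class with ratio=0
power = NormalIndPower().power(effect_size=0.3, nobs1=50, alpha=0.05, ratio=0)
print(power)
```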
Aside:
NormalIndPower has an option in the `__init__` (`self.ddof = ddof`) instead of as a method keyword.
It's the only power class with an `__init__` method.
|
open
|
2023-02-06T15:39:16Z
|
2023-02-25T18:31:35Z
|
https://github.com/statsmodels/statsmodels/issues/8652
|
[
"type-enh",
"comp-stats"
] |
josef-pkt
| 4
|
comfyanonymous/ComfyUI
|
pytorch
| 7,286
|
Installing requirements.txt fails due to missing RECORD for tokenizers
|
### Your question
I have recently synced the latest ComfyUI (commit ID 50614f1b7), and I am trying to install and run using the instructions here: https://docs.comfy.org/installation/manual_install#nvidia
I am encountering an error when I run the `pip install -r requirements.txt` command. Here's my output:
```
...
Using cached comfyui_frontend_package-1.12.14-py3-none-any.whl (29.9 MB)
Using cached sympy-1.13.1-py3-none-any.whl (6.2 MB)
Using cached tokenizers-0.21.1-cp39-abi3-win_amd64.whl (2.4 MB)
Using cached sentencepiece-0.2.0-cp312-cp312-win_amd64.whl (991 kB)
Using cached aiohttp-3.11.14-cp312-cp312-win_amd64.whl (438 kB)
Using cached yarl-1.18.3-cp312-cp312-win_amd64.whl (90 kB)
Using cached av-14.2.0-cp312-cp312-win_amd64.whl (30.9 MB)
Using cached huggingface_hub-0.29.3-py3-none-any.whl (468 kB)
Using cached fsspec-2025.3.0-py3-none-any.whl (193 kB)
Using cached packaging-24.2-py3-none-any.whl (65 kB)
Using cached propcache-0.3.0-cp312-cp312-win_amd64.whl (44 kB)
Using cached trampoline-0.1.2-py3-none-any.whl (5.2 kB)
Installing collected packages: trampoline, sentencepiece, sympy, propcache, packaging, fsspec, comfyui-frontend-package, av, yarl, huggingface-hub, tokenizers, aiohttp
Attempting uninstall: sympy
Found existing installation: sympy 1.13.3
Uninstalling sympy-1.13.3:
Successfully uninstalled sympy-1.13.3
Attempting uninstall: tokenizers
Found existing installation: tokenizers None
error: uninstall-no-record-file
× Cannot uninstall tokenizers None
╰─> The package's contents are unknown: no RECORD file was found for tokenizers.
hint: You might be able to recover from this via: pip install --force-reinstall --no-deps tokenizers==0.19.1
```
It seems weird that it detects a cached version of tokenizers-0.21.1 then says it can't find a RECORD for tokenizers...
I tried the suggested command, and that didn't work. I tried it with the `--ignore-installed` flag, and that didn't work. I tried deleting and re-creating the conda environment, and even uninstalling and re-installing conda! Still the same issue... Can anyone suggest something to try to get past this?
### Logs
```powershell
```
### Other
_No response_
|
closed
|
2025-03-17T21:48:49Z
|
2025-03-18T15:30:59Z
|
https://github.com/comfyanonymous/ComfyUI/issues/7286
|
[
"User Support"
] |
sipkode
| 1
|
saulpw/visidata
|
pandas
| 1,557
|
2.10.2 unavailable in PyPI
|
<img width="1728" alt="Screen Shot 2022-10-08 at 1 21 43 PM" src="https://user-images.githubusercontent.com/11388735/194719825-b3e6cdeb-da92-4370-aa62-6e47cdf113a9.png">
I installed the dev branch of Visidata to get around [1550](https://github.com/saulpw/visidata/issues/1550).
```sh
pipx install git+https://github.com/saulpw/visidata.git
installed package visidata 2.11.dev0, installed using Python 3.10.6
These apps are now globally available
- vd
- visidata
done! ✨ 🌟 ✨
```
When I saw [bugfix for 1500](https://github.com/saulpw/visidata/commit/39cbe730f28b42d6e80ace3ae24fc812c6f8a567) and [the 2.10.2 release](https://github.com/saulpw/visidata/commit/bc39516261516343aefe1bde9e5728fe5b76afd7#diff-06572a96a58dc510037d5efa622f9bec8519bc1beab13c9f251e97e657a9d4edR3), I reinstalled:
```sh
pipx install visidata
installed package visidata 2.10.1, installed using Python 3.10.6
These apps are now globally available
- vd
- visidata
done! ✨ 🌟 ✨
```
I thought that maybe pipx was fetching 2.10.1 from its cache but after force upgrading I realized that 2.10.2 just doesn't seem to be available yet.
|
closed
|
2022-10-08T17:24:12Z
|
2022-10-08T17:56:19Z
|
https://github.com/saulpw/visidata/issues/1557
|
[
"question"
] |
zachvalenta
| 4
|
pykaldi/pykaldi
|
numpy
| 232
|
Clif build fails on Ubuntu 20.04
|
I tried several configurations to build clif and found that clif doesn't build on Ubuntu 20.04 with Python 3.6, 3.7, or 3.8.
I get the same error message on Ubuntu 20.04 with Python 3.6 and 3.7 on two different virtual machines:
```
[324/490] Building CXX object tools/llvm-as/CMakeFiles/llvm-as.dir/llvm-as.cpp.o
ninja: build stopped: subcommand failed.
```
With Ubuntu 18.04 and Python 3.6 it works fine.
|
closed
|
2020-08-15T10:58:49Z
|
2021-06-13T16:14:53Z
|
https://github.com/pykaldi/pykaldi/issues/232
|
[] |
Alienmaster
| 4
|
FlareSolverr/FlareSolverr
|
api
| 435
|
[mteamtp] (updating) Error connecting to FlareSolverr server
|
**Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
An error occurred while updating this indexer
Error connecting to FlareSolverr server: System.Net.Http.HttpRequestException: Connection refused (192.168.1.101:8191) ---> System.Net.Sockets.SocketException (111): Connection refused at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken) at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource.GetResult(Int16 token) at System.Net.Sockets.Socket.<ConnectAsync>g__WaitForConnectWithCancellation|277_0(AwaitableSocketAsyncEventArgs saea, ValueTask connectTask, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) --- End of inner exception stack trace --- at System.Net.Http.HttpConnectionPool.ConnectToTcpHostAsync(String host, Int32 port, HttpRequestMessage initialRequest, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.ConnectAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.CreateHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.AddHttp11ConnectionAsync(HttpRequestMessage request) at System.Threading.Tasks.TaskCompletionSourceWithCancellation`1.WaitWithCancellationAsync(CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.GetHttp11ConnectionAsync(HttpRequestMessage request, Boolean async, CancellationToken cancellationToken) at System.Net.Http.HttpConnectionPool.SendWithVersionDetectionAndRetryAsync(HttpRequestMessage request, Boolean async, Boolean doRequestAuth, CancellationToken cancellationToken) at System.Net.Http.DiagnosticsHandler.SendAsyncCo
### Screenshots
[Place any screenshots of the issue here if needed]
|
closed
|
2022-07-20T08:41:41Z
|
2022-07-24T03:22:15Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/435
|
[
"invalid",
"more information needed"
] |
bing81620
| 1
|
collerek/ormar
|
sqlalchemy
| 1,014
|
Lost connection to MySQL server during query
|
When my application throws an exception and I try to access an endpoint again, it returns the error "(2013, 'Lost connection to MySQL server during query ([Errno 104] Connection reset by peer)')".
It is as if it were not connected to the database, or as if the connection did not refresh. Using SQLAlchemy I set the parameters `pool_timeout=60, pool_recycle=280, pool_size=20, max_overflow=50` when calling `create_engine`, but in ormar I don't know how to do the equivalent.
Any idea how to do it?
Thanks!
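For reference, a minimal sketch of how those pool parameters are passed to plain SQLAlchemy (the connection URL is a placeholder, not taken from the issue; whether ormar exposes equivalent knobs is exactly the open question here):
```python
import sqlalchemy

# Hedged sketch: a standard SQLAlchemy engine with the pool settings mentioned above.
engine = sqlalchemy.create_engine(
    "mysql+pymysql://user:password@localhost/dbname",  # placeholder URL
    pool_timeout=60,    # seconds to wait for a connection from the pool
    pool_recycle=280,   # recycle connections before MySQL's wait_timeout closes them
    pool_size=20,
    max_overflow=50,
)
```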
|
closed
|
2023-02-15T08:59:56Z
|
2024-04-29T13:08:09Z
|
https://github.com/collerek/ormar/issues/1014
|
[
"bug"
] |
alexol91
| 1
|
arnaudmiribel/streamlit-extras
|
streamlit
| 136
|
🐛 [BUG] - Markdownlit
|
### Description
Markdownlit uses `st.experimental_memo`, which should be changed to `st.cache_data`. With `st.experimental_memo`, I've found that my app crashes on importing markdownlit (`from markdownlit import mdlit`) with the error "streamlit.errors.StreamlitAPIException: `set_page_config()` can only be called once per app".
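For illustration, a minimal sketch of the rename being requested (the function names are made-up examples):
```python
import streamlit as st

# Deprecated decorator currently used by markdownlit (triggers the warnings in the logs below):
@st.experimental_memo
def load_old(value):
    return value

# Suggested replacement with the same caching behavior (Streamlit >= 1.18):
@st.cache_data
def load_new(value):
    return value
```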
### Reproduction steps
```bash
1. pip install markdownlit
2. from markdownlit import mdlit
3. produce error; streamlit.errors.StreamlitAPIException: `set_page_config()` can only be called once per app, and must be called as the first Streamlit command in your script.
```
### Screenshots
_No response_
### Logs
```bash
2023-04-04 09:33:01.064 `st.experimental_memo` is deprecated. Please use the new command `st.cache_data` instead, which has the same behavior. More information [in our docs](https://docs.streamlit.io/library/advanced-features/caching).
2023-04-04 09:33:01.067 `st.experimental_memo` is deprecated. Please use the new command `st.cache_data` instead, which has the same behavior. More information [in our docs](https://docs.streamlit.io/library/advanced-features/caching).
2023-04-04 09:33:01.095 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\DitlevHome\Desktop\Python_Projekts\streamlit_projects\CV\CV\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "C:\Users\DitlevHome\Desktop\Python_Projekts\streamlit_projects\CV\CV\0_🏠_Home.py", line 40, in <module>
st.set_page_config(page_title=PAGE_TITLE, page_icon=PAGE_ICON, layout="wide")
ine 311, in wrapped_func
result = non_optional_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\DitlevHome\Desktop\Python_Projekts\streamlit_projects\CV\CV\.venv\Lib\site-packages\streamlit\commands\page_config.py", line 225, in set_page_config
ctx.enqueue(msg)
File "C:\Users\DitlevHome\Desktop\Python_Projekts\streamlit_projects\CV\CV\.venv\Lib\site-packages\streamlit\runtime\scriptrunner\script raise StreamlitAPIException(
streamlit.errors.StreamlitAPIException: `set_page_config()` can only be called once per app, and must be called as the first Streamlit command in your script.
For more information refer to the [docs](https://docs.streamlit.io/library/api-reference/utilities/st.set_page_config).
```
### Version of streamlit
1.20.0
### Version of streamlit-extras
0.2.7
|
closed
|
2023-04-04T07:41:13Z
|
2023-05-11T15:28:00Z
|
https://github.com/arnaudmiribel/streamlit-extras/issues/136
|
[
"bug"
] |
ditlevjoergensen
| 1
|
dsdanielpark/Bard-API
|
api
| 241
|
BardAsync with continuous conversation
|
```python
bard = BardAsync(token=tokenBard, session=session, timeout=60)
jawaban = await bard.get_answer(q)
return jawaban
# res: __init__() got an unexpected keyword argument 'session'
```
|
closed
|
2023-12-06T04:28:48Z
|
2024-01-01T13:33:09Z
|
https://github.com/dsdanielpark/Bard-API/issues/241
|
[] |
ahmadkeren
| 7
|
schemathesis/schemathesis
|
graphql
| 1,994
|
[FEATURE] graphql not required handling
|
Currently when fields in graphql schemas are not required, schemathesis can send `null` to them.
According to the graphql specs this is valid and it is useful to find bugs.
But sometimes it would be easier not to send null values.
Is there a way to turn the null-value-sending behavior off? It would be nice to have a simple switch for it.
|
closed
|
2024-01-25T12:08:38Z
|
2024-08-06T19:23:32Z
|
https://github.com/schemathesis/schemathesis/issues/1994
|
[
"Priority: Medium",
"Type: Feature",
"Specification: GraphQL",
"Difficulty: Intermediate"
] |
devkral
| 2
|
graphql-python/gql
|
graphql
| 304
|
Infinite Recursion Error
|
**Describe the bug**
So when constructing my gql client, I am able to do that successfully. When doing so, I also set `fetch_schema_from_transport=True`. But when I then try to access a query using dot notation, I get an infinite recursion error.
```
client = Client(
transport=RequestsHTTPTransport(
url=url,
use_json=True,
headers=headers
),
fetch_schema_from_transport=True,
)
with client as session:
ds = DSLSchema(client.schema)
product_query = ds.Query.products #this throws error `<class 'RecursionError'>-maximum recursion depth exceeded`
```
It looks like the schema is not actually getting populated into the `client` even with that variable being set to true. I checked by putting in
```
assert client.schema is not None
```
and it fails.
For what it's worth, I printed the schema using the gql-cli with the following:
`gql-cli https://server_url.com/graphql --print-schema`
and, within the schema, the Query type looks like this:
```
type Query {
....
products(id: ID...)
....
}
```
so I know that it's actually there
**Expected behavior**
I would expect this to not throw a recursion error and that I would at the very least be able to access the schema using dot notation.
**System info (please complete the following information):**
OS: mac OS big sur
Python version: 3.9.10
gql version: 3.0.0
graphql-core version: 3.2.0
|
closed
|
2022-02-23T20:10:14Z
|
2022-02-23T22:13:33Z
|
https://github.com/graphql-python/gql/issues/304
|
[
"type: question or discussion"
] |
alexmaguilar25
| 10
|
CorentinJ/Real-Time-Voice-Cloning
|
pytorch
| 1,200
|
webrtcvad wont install
|
When I try to install webrtcvad with `pip install webrtcvad`, it throws:
```
Collecting webrtcvad
Using cached webrtcvad-2.0.10.tar.gz (66 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: webrtcvad
Building wheel for webrtcvad (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [9 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
copying webrtcvad.py -> build\lib.win-amd64-cpython-39
running build_ext
building '_webrtcvad' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for webrtcvad
Running setup.py clean for webrtcvad
Failed to build webrtcvad
Installing collected packages: webrtcvad
Running setup.py install for webrtcvad ... error
error: subprocess-exited-with-error
× Running setup.py install for webrtcvad did not run successfully.
│ exit code: 1
╰─> [11 lines of output]
running install
X:\anaconda2\envs\voice-clone\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-39
copying webrtcvad.py -> build\lib.win-amd64-cpython-39
running build_ext
building '_webrtcvad' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> webrtcvad
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
```
|
open
|
2023-04-21T22:17:25Z
|
2024-08-06T16:59:22Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1200
|
[] |
zamonster
| 6
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 47
|
cannot import name 'cfg' from 'torchvision.models.vgg'
|
Hi, there is an error.
```
~/anaconda3/envs/dl/lib/python3.7/site-packages/segmentation_models_pytorch/encoders/vgg.py in <module>
2 from torchvision.models.vgg import VGG
3 from torchvision.models.vgg import make_layers
----> 4 from torchvision.models.vgg import cfg
ImportError: cannot import name 'cfg' from 'torchvision.models.vgg' (/Users/anaconda3/envs/dl/lib/python3.7/site-packages/torchvision/models/vgg.py)
```
my pytorch version is 1.2.0 and torchvision version is 0.4.0
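For reference, newer torchvision releases renamed `cfg` to `cfgs`, so a hedged compatibility import could look like this (a sketch only, not the library's official fix):
```python
# Hedged sketch: fall back between the old and new attribute names in torchvision.models.vgg.
try:
    from torchvision.models.vgg import cfgs as cfg  # newer torchvision (e.g. 0.4.x)
except ImportError:
    from torchvision.models.vgg import cfg          # older torchvision
```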
|
closed
|
2019-08-22T07:17:01Z
|
2019-08-26T14:44:13Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/47
|
[] |
JimWang97
| 1
|
Evil0ctal/Douyin_TikTok_Download_API
|
web-scraping
| 164
|
Found something pretty funny
|
words=FXXK_U_ByteDance
# New API extracted from the Android APK; currently working, supports parsing videos, image galleries, and notes (December 25, 2022)
api_url = f"https://www.iesdouyin.com/aweme/v1/web/aweme/detail/?aweme_id={video_id}&aid=1128&version_name=23.5.0&device_platform=android&os_version=2333&Github=Evil0ctal&words=FXXK_U_ByteDance"
|
closed
|
2023-03-06T09:33:44Z
|
2023-03-11T00:43:53Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/164
|
[
"enhancement"
] |
xyfacai
| 1
|
ultralytics/yolov5
|
machine-learning
| 12,949
|
How to change annotations indices in memory without changing the dataset locally?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello! 😊
I have a large dataset and I would like to change the annotations before starting training: my dataset is as follows: 0 indicates car, 1 indicates van, 2 indicates bicycle, 3 indicates people, and 4 indicates pedestrian. I would like to change these indices to merge the car class with the van class into a single class (0: car), keep bicycle as 1, and merge the people class with the pedestrian class into the people class (2: people). So, I'm wondering where I can make this change in the code without altering my dataset locally. Is there a way to change these indices in memory?
Thank you 😊
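For illustration only (not a pointer to a specific place in the YOLOv5 code), a remap of an in-memory label array along the lines described above could look like:
```python
import numpy as np

# Hypothetical sketch: merge {0: car, 1: van} -> 0 (car), map 2: bicycle -> 1,
# merge {3: people, 4: pedestrian} -> 2 (people) on a YOLO-format label array.
index_map = {0: 0, 1: 0, 2: 1, 3: 2, 4: 2}

def remap_labels(labels: np.ndarray) -> np.ndarray:
    """labels: (n, 5) array of [class, x_center, y_center, width, height]."""
    remapped = labels.copy()
    remapped[:, 0] = [index_map[int(c)] for c in labels[:, 0]]
    return remapped
```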
### Additional
_No response_
|
closed
|
2024-04-22T09:46:36Z
|
2024-06-02T00:23:54Z
|
https://github.com/ultralytics/yolov5/issues/12949
|
[
"question",
"Stale"
] |
killich8
| 4
|
hbldh/bleak
|
asyncio
| 1,507
|
KeyError: 'org.bluez.Device1'
|
- raspberry pi 4B
- debian bookworm
- bleak 0.20.2-1 from python3-bleak package
- bluetoothctl: 5.66
I'm having a very hard time getting my code to run stably; I'm basically round-robin polling about 60 Bluetooth devices (for details of my setup see #1500).
Apart from the occasional problem at the kernel/driver level, my code often runs into the error below:
```
ERROR:root:A message handler raised an exception: 'org.bluez.Device1'.
Traceback (most recent call last):
File "src/dbus_fast/message_bus.py", line 811, in dbus_fast.message_bus.BaseMessageBus._process_message
File "/usr/lib/python3/dist-packages/bleak/backends/bluezdbus/manager.py", line 854, in _parse_msg
condition_callback()
File "/usr/lib/python3/dist-packages/bleak/backends/bluezdbus/manager.py", line 709, in callback
self._properties[device_path][defs.DEVICE_INTERFACE][property_name]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'org.bluez.Device1'
```
I can't seem to relate this to anything I'm doing, as none of my code is in the traceback - what could be the cause of the above message, and how do I avoid this situation?
Thank you
|
closed
|
2024-02-12T13:54:37Z
|
2024-02-20T03:54:41Z
|
https://github.com/hbldh/bleak/issues/1507
|
[
"bug",
"Backend: BlueZ"
] |
zevv
| 11
|
dynaconf/dynaconf
|
django
| 1,157
|
[RFC]Change the semantics of layered environments
|
Dynaconf was designed to have multiple environments (sections) on the same file
```toml
[default]
a = 1
[dev]
a = 2
[prod]
a = 3
```
It looks like this approach is not the best one, and not the one adopted by industry. What I see in deployments is the use of settings files with the same name whose contents vary depending on the environment, orchestrated by installers and operators.
## Proposal
Deprecate the `environments` flag, how the settings are loaded will be controlled by the loaders defined on the schema, and schema will be only one.
Remove all the `env=..` variables that filters loaders and validators.
## How to keep using layered environments?
1. define `env_switcher` variable, example `="APP_MODE"`
2. Export `ENV_MODE=prod|dev|testing|anything`
3. On `loaders=[]` pass `FileLoader(["/etc/settings.yaml"], alternatives="{path}/{name}.{env}.{extension}")`
So the above will make Dynaconf load `/etc/settings.yaml` and immediately after (even if settings.yaml is not found) load `/etc/settings.testing.yaml`.
The same `alternatives="{path}/{name}.local.{extension}"` will be the way to load `/etc/settings.local.yaml`
## What about envs as sections on a file?
```
loaders=[FileLoader([...], filters=[EnvFilter])]
```
Then the setting is a loader property, not dynaconf.
|
open
|
2024-07-08T16:32:19Z
|
2024-07-08T18:38:26Z
|
https://github.com/dynaconf/dynaconf/issues/1157
|
[
"Not a Bug",
"RFC",
"4.0-breaking-change"
] |
rochacbruno
| 0
|
recommenders-team/recommenders
|
data-science
| 1,342
|
Cannot replicate LSTUR results for MIND large test
|
Hello, I cannot replicate the results of the LSTUR model on the MIND test set. I used the scripts provided to generate `embedding.npy`, `word_dict.pkl` and `uid2index.pkl` for the test set because they are not provided with MINDlarge_utils.zip.
I used the last lines of code in lstur_MIND.ipynb to make predictions on the test set, but the metric results on validation and test are very different.
For example, I obtained
`group_auc: 0.65, mean_mrr: 0.31, ndcg@5: 0.34, ndcg@10: 0.40` in validation and `auc: 0.5075, mrr: 0.2259, ndcg@5: 0.2309, nDCG@10: 0.2868` on the test set, with the model trained for 10 epochs.
|
closed
|
2021-03-11T21:03:05Z
|
2021-04-19T09:07:02Z
|
https://github.com/recommenders-team/recommenders/issues/1342
|
[] |
albertobezzon
| 3
|
kizniche/Mycodo
|
automation
| 454
|
Can't read DHT22 - power output error
|
## Mycodo Issue Report:
- Specific Mycodo Version: 6.0.4
#### Problem Description
- Reading input from a DHT22 on GPIO 2
- Fresh install of Raspbian Stretch Lite; ran update/upgrade/dist-upgrade before installing Mycodo
### Errors
```
2018-04-21 23:17:44,643 - mycodo.inputs.dht22_1 - INFO - Turning on sensor
2018-04-21 23:17:44,650 - mycodo.output - WARNING - Cannot turn on Output with ID 0. It doesn't exist
2018-04-21 23:17:46,655 - mycodo.input_8bed4119 - INFO - Activated in 2148.0 ms
2018-04-21 23:17:46,715 - mycodo.inputs.dht22_1 - ERROR - DHT22Sensor raised an exception when taking a reading: 'NoneType' object has no attribute 'is_on'
Traceback (most recent call last):
File "/var/mycodo-root/mycodo/inputs/dht22.py", line 212, in read
self._temperature) = self.get_measurement()
File "/var/mycodo-root/mycodo/inputs/dht22.py", line 155, in get_measurement
not db_retrieve_table_daemon(Output, unique_id=self.power_output_id).is_on()):
AttributeError: 'NoneType' object has no attribute 'is_on'
```
https://github.com/kizniche/Mycodo/blob/c6b7ae3ba5f5d8a852919d702a5131c594682742/mycodo/inputs/dht22.py#L154
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. Fresh Raspian install
2. Install Mycodo
3. Attach DHT22
4. Add Input and try to read data
### Additional Notes
Maybe i'm doing something wrong, but if you edit the dht22.py and comment out the lines 154 up to 160 save and restart the backend it will display the data from the sensor without any errors.
|
closed
|
2018-04-21T21:37:20Z
|
2018-04-28T01:05:13Z
|
https://github.com/kizniche/Mycodo/issues/454
|
[] |
MrDeadBeef
| 1
|
iMerica/dj-rest-auth
|
rest-api
| 545
|
Module import error with all-auth latest version
|
File "/home/fotoley-anup/blog-project/blogzine/venv/lib/python3.10/site-packages/dj_rest_auth/registration/urls.py", line 4, in <module>
from .views import RegisterView, VerifyEmailView, ResendEmailVerificationView
File "/home/fotoley-anup/blog-project/blogzine/venv/lib/python3.10/site-packages/dj_rest_auth/registration/views.py", line 21, in <module>
from dj_rest_auth.registration.serializers import (
File "/home/fotoley-anup/blog-project/blogzine/venv/lib/python3.10/site-packages/dj_rest_auth/registration/serializers.py", line 20, in <module>
raise ImportError('allauth needs to be added to INSTALLED_APPS.
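For context, the apps the error is asking for would normally be listed in settings.py roughly like this (a hedged sketch based on the dj-rest-auth registration docs, not the reporter's actual settings):
```python
# settings.py (sketch)
INSTALLED_APPS = [
    # ... Django and project apps ...
    "django.contrib.sites",
    "allauth",
    "allauth.account",
    "dj_rest_auth",
    "dj_rest_auth.registration",
]
SITE_ID = 1
```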
|
open
|
2023-09-05T13:12:37Z
|
2023-09-08T21:41:11Z
|
https://github.com/iMerica/dj-rest-auth/issues/545
|
[] |
anupsingh3292
| 1
|
facebookresearch/fairseq
|
pytorch
| 4,648
|
Question about implementation of sinusoidal positional embedding
|
Is there any reason for not interleaving the sin and cos values when calculating the positional embedding?
Because, to the best of my knowledge, "attention is all you need" used interleaved sinusoidal positional embedding values,
e.g. sin, cos, sin, cos ...
But the implementation looks like `torch.cat([sin, cos], dim=-1)`, which gives: sin, sin, sin, ..., cos, cos, cos ...
I would appreciate it if you could let me know. Thanks!
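For illustration, a small sketch of the two layouts being compared (the sizes are arbitrary assumptions):
```python
import torch

seq_len, d_model = 10, 8
pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)                 # (seq_len, 1)
inv_freq = 1.0 / (10000 ** (torch.arange(0, d_model, 2).float() / d_model))   # (d_model/2,)
angles = pos * inv_freq                                                        # (seq_len, d_model/2)

# "Attention Is All You Need" style: sin, cos, sin, cos, ...
interleaved = torch.stack([angles.sin(), angles.cos()], dim=-1).flatten(1)
# Concatenated style: sin, sin, ..., cos, cos, ...
concatenated = torch.cat([angles.sin(), angles.cos()], dim=-1)
# Both contain the same values; they differ only by a fixed permutation of the feature dimension.
```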
|
open
|
2022-08-12T09:50:43Z
|
2022-08-12T09:51:27Z
|
https://github.com/facebookresearch/fairseq/issues/4648
|
[
"question",
"needs triage"
] |
rlawjdghek
| 0
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,395
|
Questions in different languages
|
### What version of GlobaLeaks are you using?
4.11.0
### What browser(s) are you seeing the problem on?
_No response_
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
Hello, is it possible to offer the questions in different languages or do I have to open an extra channel for this?
### Proposed solution
_No response_
|
closed
|
2023-03-23T07:57:33Z
|
2023-03-23T08:49:52Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3395
|
[] |
nicogithub22
| 1
|
tox-dev/tox
|
automation
| 3,326
|
Document all config expansions in a single page
|
## Issue
I want to have one documentation page that lists all expansions made by tox when loading its config, with the kind of values they would have. Example:
- `basepython` - full path to python interpreter
- `env_name` - name of the environment
- `env_dir` - path to the environment
- `env_tmp_dir` - {work_dir}/{env_name}/tmp
- `passargs`- ...
It should be noted that most of these do have some documentation, but they are not listed in a centralized place, making it hard to determine what you could use.
I am curious if there is a programmatic way to generate these as I would also find it useful to list them with `--help`.
|
open
|
2024-08-13T18:34:58Z
|
2024-08-13T18:38:48Z
|
https://github.com/tox-dev/tox/issues/3326
|
[] |
ssbarnea
| 1
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 301
|
how to use UnetPlusPlus
|
model = smp.UnetPlusPlus()
AttributeError: module 'segmentation_models_pytorch' has no attribute 'UnetPlusPlus'
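For reference, a hedged usage sketch, assuming a segmentation_models_pytorch release recent enough to ship `UnetPlusPlus` (the attribute error above typically means an older version is installed; constructor arguments here are illustrative assumptions):
```python
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="resnet34",      # assumption: any supported encoder works
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,
)
```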
|
closed
|
2020-12-08T06:26:39Z
|
2020-12-08T09:35:16Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/301
|
[] |
picEmily
| 2
|
huggingface/peft
|
pytorch
| 1,824
|
Fix PeftMixedModel example
|
### Feature request
The `PeftMixedModel` docstring references `get_peft_model` and imports it in its example but doesn't use it:
https://github.com/huggingface/peft/blob/ad8f7cb59ee7ca4b9ca1c9048711038ac36b31b8/src/peft/mixed_model.py#L97-L107
### Motivation
The unused import in the docstring is confusing.
### Your contribution
Yes.
|
closed
|
2024-06-04T23:34:16Z
|
2024-06-12T12:27:16Z
|
https://github.com/huggingface/peft/issues/1824
|
[] |
ringohoffman
| 2
|
giotto-ai/giotto-tda
|
scikit-learn
| 659
|
Correlation dimension
|
Is there a way to compute the correlation dimension for a given embedding dimension in this framework?
Let's say that if I were to take `x` time series of the Lorenz system and compute the correlation dimension for each embedding dimension then I would get a plot where the correlation dimension would increase with the embedding dimension for a while but after a while correlation dimension would be independent of embedding dimension.
If I were to do the same thing for the noise then the correlation dimension would linearly increase with the embedding dimension for all along.
|
closed
|
2023-02-14T08:05:07Z
|
2023-02-14T08:27:01Z
|
https://github.com/giotto-ai/giotto-tda/issues/659
|
[] |
qubit0
| 0
|
tflearn/tflearn
|
tensorflow
| 447
|
GPUs
|
I've been following threads about using GPUs with tflearn---does that happen now automatically or is that feature to come? If not, how do I indicate I want GPUs to be used. Thanks.
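For what it's worth, a hedged sketch of how GPU usage is typically steered in tflearn/TensorFlow 1.x (the memory fraction and layer sizes are illustrative assumptions, not a recommendation):
```python
import tensorflow as tf
import tflearn

# tflearn/TensorFlow picks up an available GPU automatically when TensorFlow is built
# with GPU support; init_graph can cap how much GPU memory the session grabs.
tflearn.init_graph(gpu_memory_fraction=0.8)

with tf.device('/gpu:0'):  # explicit placement, useful when more than one device is present
    net = tflearn.input_data(shape=[None, 784])
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net)

model = tflearn.DNN(net)
```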
|
open
|
2016-11-07T04:59:00Z
|
2016-11-09T23:28:56Z
|
https://github.com/tflearn/tflearn/issues/447
|
[] |
arshavir
| 1
|
explosion/spaCy
|
nlp
| 13,248
|
Cannot train Arabic models with a custom tokenizer
|
This issue was initially about a possible bug in the _training pipeline_, related to the _parser_ (see below). But now I believe that posing preliminary questions is more appropriate:
- is it possible to create a completely _custom tokenizer_, which does not define custom rules and a few methods, but just redefines the main `__call__` method?
- in that case, where can I find documentation on how the tokenizer should use the Vocabulary API to feed the vocabulary while tokenizing?
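For the first question, a minimal sketch of a tokenizer that only overrides `__call__` (whitespace splitting here is just a placeholder for a real Arabic segmenter):
```python
from spacy.tokens import Doc

class CustomTokenizer:
    """Hypothetical minimal tokenizer: no rules, only __call__."""
    def __init__(self, vocab):
        # the shared Vocab; new strings are interned automatically when the Doc is built
        self.vocab = vocab

    def __call__(self, text):
        words = text.split()              # placeholder segmentation
        spaces = [True] * len(words)
        return Doc(self.vocab, words=words, spaces=spaces)

# Usage sketch: nlp.tokenizer = CustomTokenizer(nlp.vocab)
```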
### Some context information
In the discussion _Arabic language support_, comment _[I'm willing to prototype a spaCy language model for Arabic (SMA)](https://github.com/explosion/spaCy/discussions/7146#discussioncomment-8094879)_, I reported on the choice of a _training set_ and on the unsatisfactory training results obtained using the native spaCy _tokenizer_. Then, I reported on the integration/adaptation of an alternative tokenizer whose output, according to the printout of the _debug data_ command, shows a better alignment with the tokens in the training set (after a minor modification of the training set itself).
With the [subsequent comment](https://github.com/explosion/spaCy/discussions/7146#discussioncomment-8115239), in the same discussion, I reported on
1. an exception emitted by a parser-related module of the spaCy training software, when executing the _train_ command with the same data and configuration as _debug data_;
2. the very bad results (low overall _score_) obtained with a reduced configuration, excluding the parser.
Here below is an excerpt of the _Traceback_ related to the exception (point 1). You can find the full Traceback in the discussion to which I refer.
```(omissis)
⚠ Aborting and saving the final best model. Encountered exception:
KeyError("[E900] Could not run the full pipeline for evaluation. If you
specified frozen components, make sure they were already initialized and
trained. Full pipeline: ['tok2vec', 'tagger', 'morphologizer',
'trainable_lemmatizer', 'parser']")
Traceback (most recent call last):
File "C:\language310\lib\site-packages\spacy\training\loop.py", line 298, in evaluate
scores = nlp.evaluate(dev_corpus(nlp))
File "C:\language310\lib\site-packages\spacy\language.py", line 1459, in evaluate
for eg, doc in zip(examples, docs):
File "C:\language310\lib\site-packages\spacy\language.py", line 1618, in pipe
for doc in docs:
File "C:\language310\lib\site-packages\spacy\util.py", line 1685, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy\pipeline\transition_parser.pyx", line 255, in pipe
File "C:\language310\lib\site-packages\spacy\util.py", line 1704, in raise_error
raise e
File "spacy\pipeline\transition_parser.pyx", line 252, in spacy.pipeline.transition_parser.Parser.pipe
File "spacy\pipeline\transition_parser.pyx", line 345, in spacy.pipeline.transition_parser.Parser.set_annotations
File "spacy\pipeline\_parser_internals\nonproj.pyx", line 176, in spacy.pipeline._parser_internals.nonproj.deprojectivize
File "spacy\pipeline\_parser_internals\nonproj.pyx", line 181, in spacy.pipeline._parser_internals.nonproj.deprojectivize
File "spacy\strings.pyx", line 160, in spacy.strings.StringStore.__getitem__
KeyError: "[E018] Can't retrieve string for hash '8206900633647566924'. This usually refers to an issue with the `Vocab` or `StringStore`."
The above exception was the direct cause of the following exception:
(omissis)
```
### My Environment
* Operating System: Windows 11
* Python Version Used: 3.10
* spaCy Version Used: 3.7
|
open
|
2024-01-18T21:32:32Z
|
2024-02-09T22:44:40Z
|
https://github.com/explosion/spaCy/issues/13248
|
[
"lang / ar",
"feat / tokenizer"
] |
gtoffoli
| 3
|
pydantic/logfire
|
fastapi
| 264
|
Specifying `http_client` in openai client options causing `TypeError` with `instrument_httpx`
|
### Description
If I use `instrument_httpx()`, and then construct an OpenAI client with a custom httpx client, openai will raise an TypeError:
```py
TypeError: Invalid `http_client` argument; Expected an instance of `httpx.AsyncClient` but got <class 'httpx.AsyncClient'>
```
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="0.42.0"
platform="Windows-10-10.0.22631-SP0"
python="3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]"
[related_packages]
requests="2.32.3"
pydantic="2.7.4"
openai="1.34.0"
protobuf="4.25.3"
rich="13.7.1"
executing="2.0.1"
opentelemetry-api="1.25.0"
opentelemetry-exporter-otlp-proto-common="1.25.0"
opentelemetry-exporter-otlp-proto-http="1.25.0"
opentelemetry-instrumentation="0.46b0"
opentelemetry-instrumentation-asgi="0.46b0"
opentelemetry-instrumentation-fastapi="0.46b0"
opentelemetry-instrumentation-httpx="0.46b0"
opentelemetry-instrumentation-system-metrics="0.46b0"
opentelemetry-proto="1.25.0"
opentelemetry-sdk="1.25.0"
opentelemetry-semantic-conventions="0.46b0"
opentelemetry-util-http="0.46b0"
```
|
closed
|
2024-06-14T12:09:36Z
|
2024-07-18T07:02:11Z
|
https://github.com/pydantic/logfire/issues/264
|
[
"bug",
"OTel Issue"
] |
CNSeniorious000
| 5
|
Evil0ctal/Douyin_TikTok_Download_API
|
web-scraping
| 107
|
[BUG] One TikTok video always fails to download
|
***Platform where the error occurred?***
TikTok
***Endpoint where the error occurred?***
Tried all three with the same result: API-V1 / API-V2 / Web APP
***Submitted input value?***
https://vm.tiktok.com/ZMFHSEp6J/
https://www.tiktok.com/@user5875741980434/video/7166974134272970011
These two URLs point to the same video (a girl and a dog). However, the parsed video is not the original, and the downloaded video is not the original either.
***Did you try again?***
Yes, the error still persists X amount of time after it occurred.
***Did you check this project's README or API documentation?***
Yes, and I am quite sure the problem is caused by the program.
|
closed
|
2022-11-22T06:41:17Z
|
2022-11-22T21:46:07Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/107
|
[
"help wanted"
] |
wzboyer
| 1
|
huggingface/datasets
|
nlp
| 6,671
|
CSV builder raises deprecation warning on verbose parameter
|
CSV builder raises a deprecation warning on `verbose` parameter:
```
FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version.
```
See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450
|
closed
|
2024-02-16T14:23:46Z
|
2024-02-19T09:20:23Z
|
https://github.com/huggingface/datasets/issues/6671
|
[] |
albertvillanova
| 0
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 478
|
Multi language cloning
|
Hi,
Is it possible to train the model for multilingual voice cloning?
Is there any trained model available?
Or can you help with the parameters to train the model multilingually using a language database?
Thanks
|
closed
|
2020-08-10T05:28:05Z
|
2020-08-10T06:12:08Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/478
|
[] |
sid0791
| 1
|
mwaskom/seaborn
|
data-science
| 2,775
|
Inconsistent pointplot documentation/functionality
|
The pointplot documentation (https://seaborn.pydata.org/generated/seaborn.pointplot.html) suggests you can pass `**kwargs`, presumably to an underlying scatter or line plot, but there's no argument description for `**kwargs` on that page, and there's no `**kwargs` in the source code either. The `ax.scatter()` call just takes arguments from the constructor, which is limited to the arguments in the documentation.
|
closed
|
2022-03-30T13:34:35Z
|
2022-06-15T00:17:10Z
|
https://github.com/mwaskom/seaborn/issues/2775
|
[
"mod:categorical",
"ux"
] |
abpoll
| 2
|
ivy-llc/ivy
|
pytorch
| 28,253
|
Fix Ivy Failing Test: jax - shape.shape__radd__
|
closed
|
2024-02-12T15:46:44Z
|
2024-02-13T09:32:13Z
|
https://github.com/ivy-llc/ivy/issues/28253
|
[] |
fnhirwa
| 0
|
|
jonaswinkler/paperless-ng
|
django
| 1,545
|
[BUG] Ansible Installation fails at "get commit for target tag" task
|
**Describe the bug**
I am trying to install Paperless NG with Ansible.
The installation fails at the step "get commit for target tag" with the following error:
"You need to install \"jmespath\" prior to running json_query filter"
I tried to install jmespath on the target manually (pip3 install jmespath) but the error still persists, probably due to the use of a virtualenv.
**To Reproduce**
Steps to reproduce the behavior:
1. Install Paperless through Ansible Galaxy: ansible-galaxy install git+https://github.com/jonaswinkler/paperless-ng.git,ng-1.5.0
2. Run the Ansible playbook
3. The error occurs
**Expected behavior**
The installation goes through.
**Screenshots**
N.A.
**Webserver logs**
```
TASK [paperless-ng : get commit for target tag] ******************************************************************************
fatal: [10.0.0.24]: FAILED! => {"msg": "You need to install \"jmespath\" prior to running json_query filter"}
```
**Ansible Playbook**
```
- hosts: 10.0.0.24
become: yes
gather_facts: True
vars_files:
- vars/paperless-ng-vars.yml
roles:
- paperless-ng
```
**Ansible Vars**
```
ansible_python_interpreter: /usr/bin/python3
paperlessng_secret_key: <removed>
paperlessng_superuser_name: <removed>
paperlessng_superuser_email: <removed>
paperlessng_superuser_password: <removed>
paperlessng_ocr_languages:
- eng
- deu
```
**Relevant information**
- Host OS of the machine running paperless:
- Ansible Host: Debian 10
- Paperless Host: Ubuntu 20.04.3
- Browser N.A.
- Version Paperless-NG 1.5.0
- Installation method: Ansible
- Ansible version: 4.10.0
|
open
|
2022-01-13T10:54:50Z
|
2022-01-23T18:06:07Z
|
https://github.com/jonaswinkler/paperless-ng/issues/1545
|
[] |
moxli
| 1
|
pytest-dev/pytest-cov
|
pytest
| 117
|
Incorrect coverage report
|
Using `py.test --cov-config .coveragerc --cov nengo -n 6 nengo` a lot of lines that should be hit get reported as missed (like class and function definitions in a module). This might be related to #19 as the project has a conftest file importing other modules from the project.
Using `coverage run --rcfile .coveragerc --source nengo -m py.test nengo` instead a correct coverage report is generated, but this command does not support xdist.
|
closed
|
2016-05-11T22:04:47Z
|
2017-10-28T01:48:33Z
|
https://github.com/pytest-dev/pytest-cov/issues/117
|
[] |
jgosmann
| 21
|
seleniumbase/SeleniumBase
|
pytest
| 2,962
|
SeleniumBase A.I. detected a CF Turnstile change (update needed)
|
### SeleniumBase A.I. detected a CF Turnstile change (update needed)
<img width="610" alt="Screenshot" src="https://github.com/user-attachments/assets/b761801e-78fb-4b4e-b7ae-4f607f379837">
That change is preventing UC Mode from bypassing the CAPTCHAs.
The proposed update will be shipped in the next release to fix UC Mode.
|
closed
|
2024-07-25T18:59:43Z
|
2024-07-25T21:16:48Z
|
https://github.com/seleniumbase/SeleniumBase/issues/2962
|
[
"UC Mode / CDP Mode"
] |
mdmintz
| 3
|
jupyter/docker-stacks
|
jupyter
| 1,517
|
Make the start script fail loudly with an error where it makes sense
|
This issue represents the following comments from another PR, so we can mark them as resolved without forgetting them.
~https://github.com/jupyter/docker-stacks/pull/1512#discussion_r745520508~ fixed
https://github.com/jupyter/docker-stacks/pull/1512#discussion_r745526275
|
closed
|
2021-11-09T22:18:05Z
|
2024-01-07T15:08:28Z
|
https://github.com/jupyter/docker-stacks/issues/1517
|
[
"type:Enhancement"
] |
consideRatio
| 1
|
marcomusy/vedo
|
numpy
| 1,056
|
test_core2 fails: 'NoneType' object has no attribute 'GetData'
|
test_core2.py in vedo 2024.5.1 fails (with the debian build of vtk 9.1.0)
```
$ python3 test_core2.py
2024-02-19 14:01:08.512 ( 0.520s) [ 7FD21040] vtkXMLParser.cxx:375 ERR| vtkXMLDataParser (0x1e40440): Error parsing XML in stream at line 24, column 32, byte index 1540: not well-formed (invalid token)
2024-02-19 14:01:08.513 ( 0.520s) [ 7FD21040] vtkXMLReader.cxx:521 ERR| vtkXMLUnstructuredGridReader (0x22f6980): Error parsing input file. ReadXMLInformation aborting.
2024-02-19 14:01:08.513 ( 0.520s) [ 7FD21040] vtkExecutive.cxx:752 ERR| vtkCompositeDataPipeline (0x22cbe20): Algorithm vtkXMLUnstructuredGridReader(0x22f6980) returned failure for request: vtkInformation (0x2173980)
Debug: Off
Modified Time: 590164
Reference Count: 1
Registered Events: (none)
Request: REQUEST_INFORMATION
ALGORITHM_AFTER_FORWARD: 1
FORWARD_DIRECTION: 0
2024-02-19 14:01:08.513 ( 0.521s) [ 7FD21040] vtkXMLParser.cxx:375 ERR| vtkXMLDataParser (0x1e40440): Error parsing XML in stream at line 27, column 0, byte index 1583: not well-formed (invalid token)
2024-02-19 14:01:08.513 ( 0.521s) [ 7FD21040] vtkXMLReader.cxx:521 ERR| vtkXMLRectilinearGridReader (0x22fb4e0): Error parsing input file. ReadXMLInformation aborting.
2024-02-19 14:01:08.513 ( 0.521s) [ 7FD21040] vtkExecutive.cxx:752 ERR| vtkCompositeDataPipeline (0x22fbdd0): Algorithm vtkXMLRectilinearGridReader(0x22fb4e0) returned failure for request: vtkInformation (0x194c280)
Debug: Off
Modified Time: 590714
Reference Count: 1
Registered Events: (none)
Request: REQUEST_INFORMATION
ALGORITHM_AFTER_FORWARD: 1
FORWARD_DIRECTION: 0
-- TEST METHOD add_ids() -------------------
vedo.volume.Volume at (0x1795480)
name : Volume
filename : /projects/.cache/vedo/embryo.tif
dimensions : [125 80 107]
origin : (0, 0, 0)
center : (6450.40, 4109.53, 5514.05)
spacing : (104.039, 104.039, 104.039)
bounds : x=(0, 1.29e+4), y=(0, 8.22e+3), z=(0, 1.10e+4)
memory size : 16 MB
scalar size : 8 bytes (idtype)
scalar range : (0.0, 1069999.0)
vedo.grids.TetMesh at (0x22e0bb0)
nr. of verts : 0
nr. of tetras : 0
bounds : x=(1.00, -1.00), y=(1.00, -1.00), z=(1.00, -1.00)
vedo.grids.RectilinearGrid at (0x19c7270)
name : RectilinearGrid
filename : /projects/.cache/vedo/RectilinearGrid.vtr
dimensions : (0, 0, 0)
center : (0, 0, 0)
bounds : x=(0, 0), y=(0, 0), z=(0, 0)
memory size : 2.9e-3 MB
-- TEST METHOD average_size() -------------------
5256.014682725444
2024-02-19 14:01:08.621 ( 0.628s) [ 7FD21040]vtkDemandDrivenPipeline:756 ERR| vtkCompositeDataPipeline (0x1b3e740): Input for connection index 0 on input port index 0 for algorithm vtkImageToPoints(0x19624c0) is of type vtkUnstructuredGrid, but a vtkImageData is required.
/usr/lib/python3/dist-packages/numpy/core/fromnumeric.py:3464: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
/usr/lib/python3/dist-packages/numpy/core/_methods.py:192: RuntimeWarning: invalid value encountered in scalar divide
ret = ret.dtype.type(ret / rcount)
0.0
/usr/lib/python3/dist-packages/numpy/core/_methods.py:184: RuntimeWarning: invalid value encountered in divide
ret = um.true_divide(
0.0
--- TEST METHOD bounds() -------------------
[ 0. 12900.79467088 0. 8219.05466935
0. 11028.09867027]
(1.0, -1.0, 1.0, -1.0, 1.0, -1.0)
[0. 0. 0. 0. 0. 0.]
-- TEST METHOD cell_centers() -------------------
[[ 52.01933335 52.01933335 52.01933335]
[ 156.05800005 52.01933335 52.01933335]
[ 260.09666675 52.01933335 52.01933335]
...
[12640.69800413 8167.035336 10976.07933692]
[12744.73667083 8167.035336 10976.07933692]
[12848.77533753 8167.035336 10976.07933692]]
Traceback (most recent call last):
File "/projects/vedo/tests/common/test_core2.py", line 31, in <module>
print(tm.cell_centers)
^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/vedo/core.py", line 633, in cell_centers
return utils.vtk2numpy(vcen.GetOutput().GetPoints().GetData())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'GetData'
```
|
open
|
2024-02-19T12:57:09Z
|
2024-06-13T18:49:20Z
|
https://github.com/marcomusy/vedo/issues/1056
|
[
"possible bug"
] |
drew-parsons
| 3
|
eamigo86/graphene-django-extras
|
graphql
| 121
|
V2.6 Version is not compatible
|
Graphene-django 2.6 added the check "DjangoListField only accepts DjangoObjectType types", which causes a version incompatibility.
|
closed
|
2019-09-28T16:32:03Z
|
2019-10-16T09:24:53Z
|
https://github.com/eamigo86/graphene-django-extras/issues/121
|
[] |
grusirna
| 4
|
ultralytics/yolov5
|
machine-learning
| 12,597
|
imgsz,iou_thres and conf_thres
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello,
I have a few questions during training YOLO v5 and would like to ask for advice:
1. My images are mostly 1080 × 2340; what should I set the `--imgsz` parameter to during training?
2. I found that `conf_thres=0.001, iou_thres=0.6` in val.py when running the validation set, but `conf_thres=0.25, iou_thres=0.45` in detect.py when detecting. Why? And what should I set these two thresholds to when validating and detecting?
Thank you for your help.
### Additional
_No response_
|
closed
|
2024-01-08T10:39:13Z
|
2024-02-19T00:21:08Z
|
https://github.com/ultralytics/yolov5/issues/12597
|
[
"question",
"Stale"
] |
Gary55555
| 4
|
deeppavlov/DeepPavlov
|
nlp
| 1,164
|
How do I train slot filler using 'slotfill_dstc2.json' config on custom data
|
I have created a dataset following the structure of the dstc2 dataset.
Then I successfully trained gobot using the "gobot_simple_dstc2.json" slot filler component.
But when I train gobot with "slotfill_dstc2.json" I get an error. Why is that?
First I try to train the NER using "slotfill_dstc2.json" using the default parameters:
```
from deeppavlov import configs, train_model
ner_model = train_model(configs.ner.slotfill_dstc2, download=True)
```
Everything goes well and the training reaches 97% accuracy on the original dataset.
Then I replace the dataset in the downloads folder with my custom dataset, one file at a time. Then I point "MODEL_PATH" to an empty directory, so it should create a new model from scratch when I do the following:
```
from deeppavlov import configs, train_model
ner_model = train_model(configs.ner.slotfill_dstc2, download=True)
```
but this gives me the following error:
```
assert n_tags != 0, 'Number of classes equal 0! It seems that vocabularies is not loaded.'
```
I guess it does not build the tags dictionary, which I think it should create from the data itself.
So, how do I train slot filler using 'slotfill_dstc2.json' config on custom data ?
|
closed
|
2020-03-31T04:00:19Z
|
2020-03-31T07:13:10Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1164
|
[] |
Eugen2525
| 1
|
albumentations-team/albumentations
|
deep-learning
| 2,446
|
[Feature request] Add apply_to_images to BboxSafeRandomCrop
|
open
|
2025-03-11T01:21:28Z
|
2025-03-11T01:21:34Z
|
https://github.com/albumentations-team/albumentations/issues/2446
|
[
"enhancement",
"good first issue"
] |
ternaus
| 0
|
|
Johnserf-Seed/TikTokDownload
|
api
| 415
|
[BUG] Program crashes and closes instantly
|
May I ask what this problem is?

|
closed
|
2023-04-27T10:23:20Z
|
2023-05-27T10:28:10Z
|
https://github.com/Johnserf-Seed/TikTokDownload/issues/415
|
[
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] |
summer990
| 2
|
bmoscon/cryptofeed
|
asyncio
| 958
|
Too much request
|
**Describe the bug**
Connecting more than 400 pairs on Binance leads to a connection error that looks like: {"code":-1003,"msg":"Too much request weight used; current limit is 1200 request weight per 1 MINUTE. Please use WebSocket Streams for live updates to avoid polling the API."}
**To Reproduce**
```python
from cryptofeed import FeedHandler
from cryptofeed.exchanges import Binance
from cryptofeed.defines import L2_BOOK
f = FeedHandler()
async def callback(book, t):
print(book)
f.add_feed(Binance(symbols=Binance.symbols(), channels=[L2_BOOK], callbacks={L2_BOOK: callback}))
f.run()
```
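One purely illustrative sketch (not a confirmed fix for the rate limit; the batch size is an assumption) is to split the subscription into smaller batches across several feed objects rather than passing all 400+ pairs to a single `Binance(...)`:
```python
from cryptofeed import FeedHandler
from cryptofeed.exchanges import Binance
from cryptofeed.defines import L2_BOOK

async def callback(book, t):
    print(book)

f = FeedHandler()
symbols = Binance.symbols()

# Batch the symbols so each feed object issues fewer initial order-book snapshot requests.
batch = 100
for i in range(0, len(symbols), batch):
    f.add_feed(Binance(symbols=symbols[i:i + batch], channels=[L2_BOOK], callbacks={L2_BOOK: callback}))

f.run()
```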
**Operating System:**
- Windows 10
**Cryptofeed Version**
- Latest v2.3.1
|
closed
|
2023-03-04T12:20:45Z
|
2023-03-05T11:42:56Z
|
https://github.com/bmoscon/cryptofeed/issues/958
|
[
"bug"
] |
ghost
| 2
|
coqui-ai/TTS
|
python
| 2,590
|
[Bug] TTS loads a corrupted model instead of redownloading
|
### Describe the bug
If I interrupt model downloading and rerun the code, the corrupted model will be loaded instead of re-downloading. In that case, I need to find the cached file on the disk myself and delete it.
### To Reproduce
1. Run `tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=True, gpu=False)`
2. Interrupt when downloading
3. Rerun above code
4. See the error
### Expected behavior
The correct behavior would be re-downloading the model if a hash is different than expected for that model.
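As a stop-gap, a hedged sketch of deleting the cached model directory by hand so the next run re-downloads it (the cache path and the folder naming are assumptions for a default Linux install, not verified against this setup):
```python
import shutil
from pathlib import Path

# Assumed default cache location on Linux; the folder name mirrors the model name
# with "/" replaced by "--".
cache_dir = Path.home() / ".local" / "share" / "tts" / "tts_models--de--thorsten--tacotron2-DDC"
if cache_dir.exists():
    shutil.rmtree(cache_dir)
```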
### Logs
```shell
Traceback (most recent call last):
File "/home/adrian/PycharmProjects/pythonProject/tts.py", line 6, in <module>
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False)
File "/home/adrian/PycharmProjects/pythonProject/venv/lib/python3.9/site-packages/TTS/api.py", line 54, in __init__
self.load_model_by_name(model_name, gpu)
File "/home/adrian/PycharmProjects/pythonProject/venv/lib/python3.9/site-packages/TTS/api.py", line 114, in load_model_by_name
model_path, config_path, vocoder_path, vocoder_config_path = self.download_model_by_name(model_name)
File "/home/adrian/PycharmProjects/pythonProject/venv/lib/python3.9/site-packages/TTS/api.py", line 98, in download_model_by_name
model_path, config_path, model_item = self.manager.download_model(model_name)
File "/home/adrian/PycharmProjects/pythonProject/venv/lib/python3.9/site-packages/TTS/utils/manage.py", line 248, in download_model
output_model_path, output_config_path = self._find_files(output_path)
File "/home/adrian/PycharmProjects/pythonProject/venv/lib/python3.9/site-packages/TTS/utils/manage.py", line 271, in _find_files
raise ValueError(" [!] Model file not found in the output path")
ValueError: [!] Model file not found in the output path
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3060 Laptop GPU"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.0+cu117",
"TTS": "0.12.0",
"numpy": "1.21.6"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.9.16",
"version": "#42~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 18 17:40:00 UTC 2"
}
}
```
### Additional context
_No response_
|
closed
|
2023-05-05T15:11:35Z
|
2024-03-17T12:35:47Z
|
https://github.com/coqui-ai/TTS/issues/2590
|
[
"bug"
] |
adrianboguszewski
| 3
|
microsoft/nni
|
pytorch
| 5,333
|
Can't speed up model when pruning mT5 model
|
**Describe the issue**:
I use TaylorFOWeightPruner to prune an mT5-base model, but errors happen when speeding up the model.
```python
pruner = TaylorFOWeightPruner(attention_pruned_model, ffn_config_list, evaluator, taylor_pruner_steps)
_, ffn_masks = pruner.compress()
renamed_ffn_masks = {}
# rename the masks keys, because we only speedup the bert.encoder
for model_name, targets_mask in ffn_masks.items():
    renamed_ffn_masks[model_name] = targets_mask
pruner._unwrap_model()
attention_pruned_model.load_state_dict(check_point)
m_Speedup = ModelSpeedup(attention_pruned_model, (a.to(device), b.to(device), c.to(device), d.to(device)), renamed_ffn_masks)
m_Speedup.speedup_model()
optimizer = Adam(attention_pruned_model.parameters(), lr=init_lr)
```
Log error:
```
[2023-02-03 11:29:50] start to speedup the model
2023-02-03 11:29:50 - INFO: start to speedup the model
2023-02-03 11:29:54 - INFO: {}
2023-02-03 11:29:54 - WARNING: no multi-dimension masks found.
2023-02-03 11:29:54 - INFO: Dectected conv prune dim" 0
[2023-02-03 11:29:55] infer module masks...
2023-02-03 11:29:55 - INFO: infer module masks...
[2023-02-03 11:29:55] Update mask for encoder.aten::size.519
2023-02-03 11:29:55 - INFO: Update mask for encoder.aten::size.519
[2023-02-03 11:29:55] Update mask for encoder.aten::slice.521
2023-02-03 11:29:55 - INFO: Update mask for encoder.aten::slice.521
[2023-02-03 11:29:55] Slice dim:0, Slice obj:slice(0, 9223372036854775807, 1)
2023-02-03 11:29:55 - INFO: Slice dim:0, Slice obj:slice(0, 9223372036854775807, 1)
[2023-02-03 11:29:55] Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
2023-02-03 11:29:55 - INFO: Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
[2023-02-03 11:29:55] Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
2023-02-03 11:29:55 - INFO: Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
[2023-02-03 11:29:55] Update mask for decoder.aten::size.808
2023-02-03 11:29:55 - INFO: Update mask for decoder.aten::size.808
[2023-02-03 11:29:55] Update mask for decoder.aten::size.809
2023-02-03 11:29:55 - INFO: Update mask for decoder.aten::size.809
[2023-02-03 11:29:55] Update mask for decoder.aten::slice.825
2023-02-03 11:29:55 - INFO: Update mask for decoder.aten::slice.825
[2023-02-03 11:29:55] Slice dim:0, Slice obj:slice(0, 9223372036854775807, 1)
2023-02-03 11:29:55 - INFO: Slice dim:0, Slice obj:slice(0, 9223372036854775807, 1)
[2023-02-03 11:29:55] Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
2023-02-03 11:29:55 - INFO: Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
[2023-02-03 11:29:55] Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
2023-02-03 11:29:55 - INFO: Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
[2023-02-03 11:29:55] Update mask for decoder.aten::slice.833
2023-02-03 11:29:55 - INFO: Update mask for decoder.aten::slice.833
[2023-02-03 11:29:55] Slice dim:0, Slice obj:slice(0, 9223372036854775807, 1)
2023-02-03 11:29:55 - INFO: Slice dim:0, Slice obj:slice(0, 9223372036854775807, 1)
[2023-02-03 11:29:55] Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
2023-02-03 11:29:55 - INFO: Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
[2023-02-03 11:29:55] Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
2023-02-03 11:29:55 - INFO: Model has Slice operation, and the operand size=torch.Size([8, 10]), Slice object:(slice(0, 9223372036854775807, 1),)
[2023-02-03 11:29:55] Update mask for encoder.aten::view.520
2023-02-03 11:29:55 - INFO: Update mask for encoder.aten::view.520
[2023-02-03 11:29:55] WARNING: throw some args away when calling the function "view"
2023-02-03 11:29:55 - WARNING: throw some args away when calling the function "view"
[2023-02-03 11:29:55] WARNING: throw some args away when calling the function "view"
2023-02-03 11:29:55 - WARNING: throw some args away when calling the function "view"
[2023-02-03 11:29:55] Update mask for encoder.aten::unsqueeze.522
2023-02-03 11:29:55 - INFO: Update mask for encoder.aten::unsqueeze.522
[2023-02-03 11:29:55] Update mask for decoder.aten::view.810
2023-02-03 11:29:55 - INFO: Update mask for decoder.aten::view.810
[2023-02-03 11:29:55] WARNING: throw some args away when calling the function "view"
2023-02-03 11:29:55 - WARNING: throw some args away when calling the function "view"
[2023-02-03 11:29:55] WARNING: throw some args away when calling the function "view"
2023-02-03 11:29:55 - WARNING: throw some args away when calling the function "view"
[2023-02-03 11:29:55] Update mask for decoder.aten::arange.811
2023-02-03 11:29:55 - INFO: Update mask for decoder.aten::arange.811
Traceback (most recent call last):
File "attention_pruned_model.py", line 446, in <module>
main(config)
File "attention_pruned_model.py", line 387, in main
m_Speedup.speedup_model()
File "/home/dhnan/anaconda3/envs/nemo/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 546, in speedup_model
self.infer_modules_masks()
File "/home/dhnan/anaconda3/envs/nemo/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 383, in infer_modules_masks
self.update_direct_sparsity(curnode)
File "/home/dhnan/anaconda3/envs/nemo/lib/python3.8/site-packages/nni/compression/pytorch/speedup/compressor.py", line 237, in update_direct_sparsity
_auto_infer = AutoMaskInference(
File "/home/dhnan/anaconda3/envs/nemo/lib/python3.8/site-packages/nni/compression/pytorch/speedup/infer_mask.py", line 80, in __init__
self.output = self.module(*dummy_input)
File "/home/dhnan/anaconda3/envs/nemo/lib/python3.8/site-packages/nni/compression/pytorch/speedup/jit_translate.py", line 245, in __call__
result = self.func(*self.positional, **self.keyword)
TypeError: arange() received an invalid combination of arguments - got (Tensor, pin_memory=bool, device=torch.device, layout=NoneType, dtype=NoneType), but expected one of:
* (Number end, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (Number start, Number end, *, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (Number start, Number end, Number step, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```
Could you please tell me how to solve it?
Thanks very much!
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version: 3.8.0
- PyTorch/TensorFlow version: Pytorch 1.13.0
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
open
|
2023-02-03T03:31:20Z
|
2023-02-15T07:24:45Z
|
https://github.com/microsoft/nni/issues/5333
|
[] |
Kathrine94
| 4
|
roboflow/supervision
|
tensorflow
| 1,243
|
Request: PolygonZone determination using object recognition
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
I am currently using supervision in my thesis to analyse driving behavior in different videos, and it's super useful. But the PolygonZone array must be determined manually for each video.
Would it be possible to (semi-) automate this process with object recognition? By specifying an object that can be found in several places in a frame, the feature would then return the coordinates of the object from the frame and append them to an array.
### Use case
It would be very useful, for example, when determining the polygon zone, which is created on the basis of delineators. In this way, a road section can be recognized directly without having to enter an array manually.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-05-29T12:20:02Z
|
2024-05-29T12:44:24Z
|
https://github.com/roboflow/supervision/issues/1243
|
[
"enhancement"
] |
pasionline
| 1
|
deepinsight/insightface
|
pytorch
| 1,905
|
[arcface_torch] Problems with distributed training in arcface_torch
|
Good Afternoon,
I have a current setup of two machines on the same network, each with an RTX 3090 GPU. Despite initializing a process on each machine (checked with nvidia-smi), and the master machine printing "Training YYYY-MM-DD HH:MM:SS,MS-rank_id: 0", both processes seem to freeze (power usage drops on the GPUs) and later the following error is raised on the master machine:
```
python_env/lib64/python3.6/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
FutureWarning,
Traceback (most recent call last):
File "train.py", line 141, in <module>
main(parser.parse_args())
File "train.py", line 59, in main
module=backbone, broadcast_buffers=False, device_ids=[local_rank])
File "/home/nunes/python_env/lib64/python3.6/site-packages/torch/nn/parallel/distributed.py", line 578, in __init__
dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 123541) of binary: /home/nunes/python_env/bin/python
```
Both machines present the same NCCL (21.0.3) and Driver Versions (510.47.03).
(Fun fact: swapping the ranks and the master machine, the error still pops up on the same machine, implying the problem is with that machine.)
These are my running configurations:
Master (Machine 1) - Rank 0
```bash
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr="192.168.0.207" --master_port=1234 train.py configs/test_02
ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh
```
Machine 2 - Rank 1
```bash
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=0 --master_addr="192.168.0.207" --master_port=1234 train.py configs/test_02
ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh
```
Any ideas what might be the cause of this problem?
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Edit:
Okay, so I enabled NCCL debugging with `export NCCL_DEBUG=INFO`, and it seems like it is an NCCL connection problem, according to this output:
```
FutureWarning,
gr3090:166652:166652 [0] NCCL INFO Bootstrap : Using enp6s0:192.168.0.136<0>
gr3090:166652:166652 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
gr3090:166652:166652 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
gr3090:166652:166652 [0] NCCL INFO NET/Socket : Using [0]enp6s0:192.168.0.136<0>
gr3090:166652:166652 [0] NCCL INFO Using network Socket
gr3090:166652:166738 [0] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1
gr3090:166652:166738 [0] NCCL INFO Channel 00 : 0[1000] -> 1[9000] [receive] via NET/Socket/0
gr3090:166652:166738 [0] NCCL INFO Channel 01 : 0[1000] -> 1[9000] [receive] via NET/Socket/0
gr3090:166652:166738 [0] NCCL INFO Channel 00 : 1[9000] -> 0[1000] [send] via NET/Socket/0
gr3090:166652:166738 [0] NCCL INFO Channel 01 : 1[9000] -> 0[1000] [send] via NET/Socket/0
gr3090:166652:166738 [0] NCCL INFO Call to connect returned Connection timed out, retrying
gr3090:166652:166738 [0] NCCL INFO Call to connect returned Connection timed out, retrying
gr3090:166652:166738 [0] include/socket.h:409 NCCL WARN Net : Connect to 192.168.122.1<49891> failed : Connection timed out
gr3090:166652:166738 [0] NCCL INFO transport/net_socket.cc:316 -> 2
gr3090:166652:166738 [0] NCCL INFO include/net.h:21 -> 2
gr3090:166652:166738 [0] NCCL INFO transport/net.cc:210 -> 2
gr3090:166652:166738 [0] NCCL INFO transport.cc:111 -> 2
gr3090:166652:166738 [0] NCCL INFO init.cc:778 -> 2
gr3090:166652:166738 [0] NCCL INFO init.cc:904 -> 2
gr3090:166652:166738 [0] NCCL INFO group.cc:72 -> 2 [Async thread]
Traceback (most recent call last):
File "train.py", line 141, in <module>
main(parser.parse_args())
File "train.py", line 59, in main
module=backbone, broadcast_buffers=False, device_ids=[local_rank])
File "/home/nunes/penv/lib64/python3.6/site-packages/torch/nn/parallel/distributed.py", line 578, in __init__
dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:957, unhandled system error, NCCL version 21.0.3
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 166652) of binary: /home/nunes/penv/bin/python
```
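Based on the log above, NCCL is trying to reach the peer via 192.168.122.1, which looks like a libvirt/virtual bridge address rather than the enp6s0 interface used for bootstrap. Below is a minimal sketch of pinning the network interface (an assumption, not a confirmed fix; it must run on both machines before the process group is created, or be exported in the shell instead):
```python
import os

# Force NCCL (and gloo, if used) to use the physical NIC instead of the
# virtual bridge 192.168.122.x that appears in the log above.
os.environ["NCCL_SOCKET_IFNAME"] = "enp6s0"   # interface name taken from the NCCL log
os.environ["GLOO_SOCKET_IFNAME"] = "enp6s0"

import torch.distributed as dist

dist.init_process_group(backend="nccl", init_method="env://")
```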
|
closed
|
2022-02-07T20:20:53Z
|
2022-05-11T20:25:07Z
|
https://github.com/deepinsight/insightface/issues/1905
|
[] |
luisfmnunes
| 2
|
feder-cr/Jobs_Applier_AI_Agent_AIHawk
|
automation
| 181
|
Error running the bot: Failed to initialize browser: Message: unknown error: cannot find Chrome binary Stacktrace
|
Runtime error: Error running the bot: Failed to initialize browser: Message: unknown error: cannot find Chrome binary
Stacktrace:
Backtrace:
GetHandleVerifier [0x00B0A813+48355]
(No symbol) [0x00A9C4B1]
(No symbol) [0x009A5358]
(No symbol) [0x009C1A9E]
(No symbol) [0x009C0579]
(No symbol) [0x009F0C55]
(No symbol) [0x009F093C]
(No symbol) [0x009EA536]
(No symbol) [0x009C82DC]
(No symbol) [0x009C93DD]
GetHandleVerifier [0x00D6AABD+2539405]
GetHandleVerifier [0x00DAA78F+2800735]
GetHandleVerifier [0x00DA456C+2775612]
GetHandleVerifier [0x00B951E0+616112]
(No symbol) [0x00AA5F8C]
(No symbol) [0x00AA2328]
(No symbol) [0x00AA240B]
(No symbol) [0x00A94FF7]
BaseThreadInitThunk [0x76827BA9+25]
RtlInitializeExceptionChain [0x7760C10B+107]
RtlClearBits [0x7760C08F+191]
Refer to the configuration and troubleshooting guide: https://github.com/feder-cr/LinkedIn_AIHawk_automatic_job_application/blob/main/readme.md#configuration
|
closed
|
2024-08-31T07:48:49Z
|
2024-09-07T13:00:05Z
|
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/181
|
[] |
Jasmeer57
| 1
|
facebookresearch/fairseq
|
pytorch
| 5,460
|
How did you train the k-means clustering model on the HuBERT model?
|
## ❓ Questions and Help
### My question
Hello, due to my downstream task requirements, I need to perform k-means clustering on the output of the ContentVec model, which has the same structure as the HuBERT model but was trained with a different idea. I extracted features from my dataset with ContentVec and trained a clustering model using the code you provided. However, I found the clustering to be far less effective than the clustering model you provided for HuBERT.

Do you apply any special processing to the features (such as dimensionality reduction) before training the clustering model? Or is my dataset perhaps too small (7430431 × 768)? Any suggestions for improving my clustering would be much appreciated!
### The code I have tried for clustering:

|
open
|
2024-03-16T10:41:14Z
|
2024-07-08T11:40:36Z
|
https://github.com/facebookresearch/fairseq/issues/5460
|
[
"question",
"needs triage"
] |
Remaxic
| 1
|
allure-framework/allure-python
|
pytest
| 601
|
Multiple runs for same test case is showing as separate results
|
#### I'm submitting a ...
- [*] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
When the same test cases are run multiple times, they are shown as separate results instead of appearing in the History tab.
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
<img width="209" alt="Screenshot 2021-06-18 at 8 56 23 AM" src="https://user-images.githubusercontent.com/86087397/122491366-6a86a900-d016-11eb-9243-e4366e89245d.png">
Running the same case multiple times
#### What is the expected behavior?
Multiple runs of the same test case should aggregate into a single result, with previous runs appearing in the History tab.
#### What is the motivation / use case for changing the behavior?
This allows rerunning test cases so that the report eventually shows only the final result.
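In the meantime, the History tab can usually be kept populated by copying the `history` folder from the previous report into the results directory before regenerating the report; a minimal sketch (paths are illustrative):
```python
import shutil
from pathlib import Path

results_dir = Path("allure-results")   # where pytest writes results
report_dir = Path("allure-report")     # output of a previous `allure generate`

# Carry the trend/history data from the previous report into the new results,
# so the next `allure generate` can aggregate runs in the History tab.
prev_history = report_dir / "history"
if prev_history.exists():
    shutil.copytree(prev_history, results_dir / "history", dirs_exist_ok=True)
```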
#### Please tell us about your environment:
- Allure version: 2.14.0
- Test framework: pytest_bdd@4.02
- Allure adaptor: allure-pytest-bdd@2.9.43
#### Other information
|
closed
|
2021-06-18T01:23:55Z
|
2022-12-29T10:38:27Z
|
https://github.com/allure-framework/allure-python/issues/601
|
[
"theme:pytest-bdd"
] |
budsee
| 2
|
mirumee/ariadne-codegen
|
graphql
| 204
|
Add full support for skip/include directives
|
The GraphQL spec defines [skip](https://spec.graphql.org/June2018/#sec--skip) and [include](https://spec.graphql.org/June2018/#sec--include) directives. They are not fully supported because the generated pydantic types cannot parse server-side responses (e.g. the skipped part of the query is missing and pydantic raises a validation error).
**Example reproduction steps:**
Using https://beta.pokeapi.co/graphql/console/:
```graphql
query getPokemonByName($name: String!, $includeAbilities: Boolean!) {
pokemon_v2_pokemon(where: {
name: {
_eq: $name
}
}) {
id
name
pokemon_v2_pokemonabilities @include(if: $includeAbilities) {
pokemon_v2_ability {
id
name
}
}
}
}
```
```
>>> result = client.get_pokemon_by_name('pikachu', include_abilities=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jbacic/Documents/Private/ariadne-codegen/app/graphql_client/client.py", line 40, in get_pokemon_by_name
return GetPokemonByName.model_validate(data)
File "/Users/jbacic/Documents/Private/ariadne-codegen/.venv/lib/python3.10/site-packages/pydantic/main.py", line 504, in model_validate
return cls.__pydantic_validator__.validate_python(
pydantic_core._pydantic_core.ValidationError: 1 validation error for GetPokemonByName
pokemon_v2_pokemon.0.pokemon_v2_pokemonabilities
Field required [type=missing, input_value={'id': 25, 'name': 'pikachu'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.3/v/missing
```
The following server-side response:
```json
{
"data": {
"pokemon_v2_pokemon": [
{
"id": 25,
"name": "pikachu"
}
]
}
}
```
cannot be parsed by:
```python
class GetPokemonByName(BaseModel):
pokemon_v2_pokemon: List["GetPokemonByNamePokemonV2Pokemon"]
class GetPokemonByNamePokemonV2Pokemon(BaseModel):
id: int
name: str
pokemon_v2_pokemonabilities: Optional[
List["GetPokemonByNamePokemonV2PokemonPokemonV2Pokemonabilities"]
]
```
because the response doesn't contain the `pokemon_v2_pokemonabilities` key.
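For illustration, a sketch of what a skip/include-tolerant model could look like (hand-written here, not current codegen output): giving the conditionally included field a `None` default lets pydantic accept responses where the key is absent.
```python
from typing import List, Optional
from pydantic import BaseModel


class GetPokemonByNamePokemonV2PokemonPokemonV2Pokemonabilities(BaseModel):
    ...  # fields omitted for brevity


class GetPokemonByNamePokemonV2Pokemon(BaseModel):
    id: int
    name: str
    # Optional *and* defaulted: a missing key (the skipped/not-included part
    # of the query) no longer fails validation.
    pokemon_v2_pokemonabilities: Optional[
        List["GetPokemonByNamePokemonV2PokemonPokemonV2Pokemonabilities"]
    ] = None


class GetPokemonByName(BaseModel):
    pokemon_v2_pokemon: List["GetPokemonByNamePokemonV2Pokemon"]


print(GetPokemonByName.model_validate(
    {"pokemon_v2_pokemon": [{"id": 25, "name": "pikachu"}]}
))
```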
|
closed
|
2023-09-04T14:13:42Z
|
2023-09-07T12:44:29Z
|
https://github.com/mirumee/ariadne-codegen/issues/204
|
[] |
jakub-bacic
| 4
|
Yorko/mlcourse.ai
|
seaborn
| 408
|
Topic 2.2: For new versions, add sns.set() to apply styles throughout notebook
|

In the [article](https://mlcourse.ai/notebooks/blob/master/jupyter_english/topic02_visual_data_analysis/topic2_additional_seaborn_matplotlib_plotly.ipynb):
> Even by simply adding import seaborn in your code, the images of your plots will become much nicer.
This is correct for older versions of seaborn.
For newer versions, add `sns.set()` to apply the styles throughout the notebook.
From the [documentation](https://seaborn.pydata.org/tutorial/aesthetics.html):
> (Note that in versions of seaborn prior to 0.8, set() was called on import. On later versions, it must be explicitly invoked).
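A minimal example of the suggested change:
```python
import seaborn as sns

sns.set()  # on seaborn >= 0.8 this must be called explicitly to apply the default theme
```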
|
closed
|
2018-11-05T21:52:35Z
|
2018-11-10T16:18:38Z
|
https://github.com/Yorko/mlcourse.ai/issues/408
|
[
"minor_fix"
] |
ptaiga
| 1
|
pytest-dev/pytest-html
|
pytest
| 743
|
is it possible to create report with filtering ability based on log levels?
|
Looking for the ability to filter logs, when expanded, based on the chosen log level (debug, info, error, etc.). Is it possible to add that filter to the HTML report?
|
open
|
2023-09-29T05:32:47Z
|
2023-10-31T00:02:13Z
|
https://github.com/pytest-dev/pytest-html/issues/743
|
[] |
jdilipan
| 1
|
gee-community/geemap
|
streamlit
| 2,048
|
Add UI tests with Galata
|
Galata is a set of helpers and fixtures for JupyterLab UI testing. ipyleaflet and bqplot use Galata for UI tests; we can add UI tests using Galata as well.
- https://github.com/jupyterlab/jupyterlab/tree/main/galata
- https://github.com/jupyter-widgets/ipyleaflet/tree/master/ui-tests
- https://github.com/bqplot/bqplot/tree/master/ui-tests
|
open
|
2024-06-16T03:50:25Z
|
2024-06-16T03:50:25Z
|
https://github.com/gee-community/geemap/issues/2048
|
[
"Feature Request"
] |
giswqs
| 0
|
aio-libs/aiopg
|
sqlalchemy
| 911
|
Can't create a connection pool when running the example from the README
|
### Describe the bug
For some reason, running your example raises asyncio's NotImplementedError when it tries to create a connection pool. The PostgreSQL psycopg2 package works just fine when I use it directly without aiopg.
### To Reproduce
As already said, the code is just the first example of the README with the dsn modified to connect to my database.
```py
import asyncio
import aiopg
dsn = ...
async def go():
async with aiopg.create_pool(dsn) as pool:
async with pool.acquire() as conn:
async with conn.cursor() as cur:
await cur.execute("SELECT 1")
ret = []
async for row in cur:
ret.append(row)
assert ret == [(1,)]
loop = asyncio.get_event_loop()
loop.run_until_complete(go())
```
Running the example with psycopg2 works just fine.
```py
import psycopg2
dsn = ...
conn = psycopg2.connect(dsn=dsn)
cursor = conn.cursor()
cursor.execute("SELECT 1")
rows = cursor.fetchall() # contains [(1,)] as expected
cursor.close()
conn.close()
```
I've tried both Python 3.11 and 3.12, but get the same error either way. I even downgraded to Python 3.11 since I've seen you don't support 3.12 yet.
``python -m pip list`` for my 3.11 environment yields
```txt
Package Version
---------------- -----------
aiofiles 24.1.0
aiohappyeyeballs 2.4.0
aiohttp 3.10.5
aiopg 1.4.0
aiosignal 1.3.1
async-timeout 4.0.3
attrs 24.2.0
colorama 0.4.6
frozenlist 1.4.1
idna 3.7
multidict 6.0.5
numpy 1.26.4
pandas 2.2.0
pip 24.2
psycopg2-binary 2.9.9
python-dateutil 2.9.0.post0
pytz 2024.1
six 1.16.0
tqdm 4.66.2
tzdata 2024.1
yarl 1.9.4
```
### Expected behavior
Same result as in the example. Assertion passes.
### Logs/tracebacks
```python-traceback
Traceback (most recent call last):
File "C:\Users\mtroester\source\repos\kfz_webapi_migration\aiopg_test.py", line 22, in <module>
loop.run_until_complete(go())
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\mtroester\source\repos\kfz_webapi_migration\aiopg_test.py", line 12, in go
async with aiopg.create_pool(dsn) as pool:
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\utils.py", line 82, in __aenter__
self._obj = await self._coro
^^^^^^^^^^^^^^^^
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\pool.py", line 300, in from_pool_fill
await self._fill_free_pool(False)
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\pool.py", line 336, in _fill_free_pool
conn = await connect(
^^^^^^^^
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\connection.py", line 65, in connect
connection = Connection(
^^^^^^^^^^^
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\connection.py", line 772, in __init__
self._loop.add_reader(
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\events.py", line 534, in add_reader
raise NotImplementedError
NotImplementedError
Exception ignored in: <function Connection.__del__ at 0x0000015325BEA0C0>
Traceback (most recent call last):
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\connection.py", line 1188, in __del__
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\connection.py", line 995, in close
File "C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\aiopg\connection.py", line 977, in _close
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2544.0_x64__qbz5n2kfra8p0\Lib\asyncio\events.py", line 537, in remove_reader
NotImplementedError:
```
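The `add_reader` call failing with `NotImplementedError` is characteristic of the Windows Proactor event loop, which has been the asyncio default since Python 3.8 and does not implement `add_reader`/`remove_reader`. A sketch of the usual workaround (switching to the selector loop before creating the pool; this is an assumption about the cause, not a confirmed aiopg fix):
```python
import asyncio
import sys

import aiopg

dsn = "dbname=test user=postgres password=secret host=127.0.0.1"  # placeholder DSN

async def go():
    async with aiopg.create_pool(dsn) as pool:
        async with pool.acquire() as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1")
                print(await cur.fetchall())

if sys.platform == "win32":
    # aiopg relies on loop.add_reader(), which the default ProactorEventLoop
    # on Windows does not implement; use the selector-based loop instead.
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

asyncio.run(go())
```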
### Python Version
```console
$ python --version
Python 3.11.9
(it's the official Python 3.11 version installed by the Windows 11 App Store)
```
### aiopg Version
```console
$ python -m pip show aiopg
Name: aiopg
Version: 1.4.0
Summary: Postgres integration with asyncio.
Home-page: https://aiopg.readthedocs.io
Author: Andrew Svetlov
Author-email: andrew.svetlov@gmail.com
License: BSD
Location: C:\Users\mtroester\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages
Requires: async-timeout, psycopg2-binary
Required-by:
```
### OS
Windows 11 Pro
Version 23H2
### Additional context
I'm not sure whether I'm using the correct package versions together. When your base example doesn't work anymore, that's a huge deal IMO.
### Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
|
open
|
2024-08-21T09:27:56Z
|
2024-08-21T09:37:56Z
|
https://github.com/aio-libs/aiopg/issues/911
|
[
"bug"
] |
Bonifatius94
| 1
|
huggingface/diffusers
|
deep-learning
| 10,411
|
How to load the LoRA weights obtained from examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py
|
I followed the tutorial provided at https://github.com/huggingface/diffusers/tree/main/examples/consistency_distillation and trained the final LoRA weights, but I could not find a way to load them. Could you provide a demo of loading and running these weights? Thank you very much!
The training script:
```
#!/bin/bash
# Define the variables
PRETRAINED_TEACHER_MODEL="/ai/yzy/latent-consistency-model-main/stable-diffusion-v1-5"
OUTPUT_DIR="/ai/yzy/latent-consistency-model-main/output_sd001"
RESOLUTION=512
LORA_RANK=64
LEARNING_RATE=1e-6
LOSS_TYPE='huber'
ADAM_WEIGHT_DECAY=0.0
MAX_TRAIN_STEPS=1000
MAX_TRAIN_SAMPLES=4000000
DATALOADER_NUM_WORKERS=4
TRAIN_SHARDS_PATH_OR_URL='/ai/yzy/latent-consistency-model-main/00000.tar'
VALIDATION_STEPS=200
CHECKPOINTING_STEPS=200
CHECKPOINTS_TOTAL_LIMIT=10
TRAIN_BATCH_SIZE=8
GRADIENT_ACCUMULATION_STEPS=1
SEED=453645634
# Run the training script
python ./LCM_Training_Script/consistency_distillation/train_lcm_distill_lora_sd_wds.py \
--pretrained_teacher_model=$PRETRAINED_TEACHER_MODEL \
--output_dir=$OUTPUT_DIR \
--mixed_precision=fp16 \
--resolution=$RESOLUTION \
--lora_rank=$LORA_RANK \
--learning_rate=$LEARNING_RATE \
--loss_type=$LOSS_TYPE \
--adam_weight_decay=$ADAM_WEIGHT_DECAY \
--max_train_steps=$MAX_TRAIN_STEPS \
--max_train_samples=$MAX_TRAIN_SAMPLES \
--dataloader_num_workers=$DATALOADER_NUM_WORKERS \
--train_shards_path_or_url=$TRAIN_SHARDS_PATH_OR_URL \
--validation_steps=$VALIDATION_STEPS \
--checkpointing_steps=$CHECKPOINTING_STEPS \
--checkpoints_total_limit=$CHECKPOINTS_TOTAL_LIMIT \
--train_batch_size=$TRAIN_BATCH_SIZE \
--gradient_checkpointing \
--enable_xformers_memory_efficient_attention \
--gradient_accumulation_steps=$GRADIENT_ACCUMULATION_STEPS \
--use_8bit_adam \
--resume_from_checkpoint=latest \
--seed=$SEED
```
the output:

|
closed
|
2024-12-30T12:06:07Z
|
2024-12-31T07:21:40Z
|
https://github.com/huggingface/diffusers/issues/10411
|
[] |
yangzhenyu6
| 0
|
TencentARC/GFPGAN
|
pytorch
| 607
|
NameError: name 'fused_act_ext' is not defined
|
After setting `export BASICSR_JIT=True`, when running `python basicsr/train.py -opt ...`, the terminal gets stuck and doesn't proceed. Can anyone help me solve this issue?
|
closed
|
2025-03-08T09:55:14Z
|
2025-03-08T09:57:49Z
|
https://github.com/TencentARC/GFPGAN/issues/607
|
[] |
WangYJ-WYJ
| 0
|
microsoft/nni
|
pytorch
| 5,503
|
Load data once and use it afterwards for multiple trials
|
**What would you like to be added**: A data file that is loaded once and shared across multiple trials, instead of loading the data individually for each trial.
**Why is this needed**: As I have a lot of data for hyperparameter tuning, I don't want to load it for every trial. I was wondering whether this is possible in the current build; if not, can it be added to upcoming builds?
**Without this feature, how does current nni work**: For me, parallel trials don't work due to memory requirements.
**Components that may involve changes**: shared file/memory among NNI trials.
**Brief description of your proposal if any**:
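One generic workaround worth mentioning here (a sketch of a technique, not an existing NNI feature): preprocess the data once into a memory-mapped file and have every trial open it read-only, so concurrent trial processes share the same pages via the OS page cache instead of each loading its own copy.
```python
import numpy as np

# One-time preprocessing step (run once, outside the trials):
# np.save("/shared/hpo_data.npy", big_float32_array)

def load_shared_data(path="/shared/hpo_data.npy"):
    # mmap_mode="r" maps the file instead of reading it into each trial's heap,
    # so parallel trials share the same physical pages through the page cache.
    return np.load(path, mmap_mode="r")

data = load_shared_data()
print(data.shape, data.dtype)
```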
|
open
|
2023-04-04T11:15:24Z
|
2023-04-07T02:35:27Z
|
https://github.com/microsoft/nni/issues/5503
|
[] |
TayyabaZainab0807
| 0
|
PaddlePaddle/ERNIE
|
nlp
| 702
|
It has been more than a year; is the Chinese ERNIE-GEN still not going to be open-sourced?
|
closed
|
2021-06-21T06:32:57Z
|
2021-07-04T02:47:39Z
|
https://github.com/PaddlePaddle/ERNIE/issues/702
|
[] |
cedar33
| 1
|
|
ycd/manage-fastapi
|
fastapi
| 96
|
Remove support of python 3.6
|
Since the FastAPI version has been bumped to 0.85.0 and it drops support for Python 3.6, as mentioned here: https://fastapi.tiangolo.com/release-notes/, shouldn't we also drop that support within this package? (This would also make it possible to update packages such as Black, pydantic, ...)
|
closed
|
2022-10-03T09:33:31Z
|
2023-01-01T16:26:01Z
|
https://github.com/ycd/manage-fastapi/issues/96
|
[] |
kaieslamiri
| 2
|
joeyespo/grip
|
flask
| 373
|
Create CONTRIBUTION.md to guide open source developers to contribute to this project
|
In order to develop, people need to know:
- How to run code locally
- Program subsystems
- ...
Thanks for making this useful tool :)
|
open
|
2023-05-20T21:07:39Z
|
2023-05-20T21:09:42Z
|
https://github.com/joeyespo/grip/issues/373
|
[] |
anhtumai
| 0
|
python-security/pyt
|
flask
| 108
|
Feature Request: Whitelist lines ending in # nosec
|
So both [detect-secrets](https://github.com/Yelp/detect-secrets/blob/master/detect_secrets/plugins/high_entropy_strings.py#L41) and Bandit have the concept of whitelisting a line by putting a comment at the end, similar to how you've probably seen people do `# noqa: F401` or whatever, with pylint.
Let us steal once again, from Bandit, since they are most similar to us, [here are the relevant lines](https://github.com/openstack/bandit/blob/8f09d8b208f037b7d49ed6bc88f2ac200e7cc06c/bandit/core/manager.py#L270-L277), but we shall change `lineno + 1 for` to `enumerate(lines, start=1)` because it is more pythonic.
They also have the `--ignore-nosec` ("do not skip lines with `# nosec` comments") command line option, so we shall pass the set of whitelisted lines into the 2 calls to `find_vulnerabilities` [in \_\_main\_\_](https://github.com/python-security/pyt/blob/master/pyt/__main__.py#L53).
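A minimal sketch of the collection step described above (names are illustrative, not the final implementation):
```python
def find_nosec_lines(source_code: str) -> set:
    """Return the 1-indexed line numbers whose line ends with '# nosec'."""
    return {
        lineno
        for lineno, line in enumerate(source_code.splitlines(), start=1)
        if line.rstrip().endswith("# nosec")
    }
```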
|
closed
|
2018-04-14T18:27:20Z
|
2018-04-28T18:38:34Z
|
https://github.com/python-security/pyt/issues/108
|
[
"feature request"
] |
KevinHock
| 7
|
skforecast/skforecast
|
scikit-learn
| 495
|
Feature importance ForecasterAutoregMultiVariate - regressor LightGBM
|
Hi all,
When I try to retrieve the feature importances of a ForecasterAutoregMultiVariate model (direct multi-step forecasting, step=3), it works with all regressors except LightGBM.
Indeed, for the same data, the feature importances of the model trained with the XGBoost or Ridge regressors look fine (positive/negative values), but when the regressor is LightGBM all feature importances are equal to 0: `forecaster_trained.get_feature_importances(step=i)` works but returns a dataframe with every value equal to 0 when `forecaster_trained.regressor` is LightGBM.
Do you have any idea what the issue is?
Thanks a lot for your help!
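As a side note, here is a small standalone check that can help distinguish an skforecast issue from a LightGBM one (a sketch; parameter values are illustrative). With very small per-step training sets, LightGBM's defaults (e.g. `min_child_samples=20`) can produce trees with no splits at all, in which case both split- and gain-based importances are legitimately all zero.
```python
import numpy as np
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] * 2 + rng.normal(scale=0.1, size=200)

model = LGBMRegressor(n_estimators=100, min_child_samples=5).fit(X, y)

print(model.feature_importances_)                                 # split counts per feature
print(model.booster_.feature_importance(importance_type="gain"))  # total gain per feature
```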
|
closed
|
2023-06-26T19:24:06Z
|
2023-11-02T08:18:42Z
|
https://github.com/skforecast/skforecast/issues/495
|
[
"question"
] |
jouet-leon
| 1
|
ghtmtt/DataPlotly
|
plotly
| 94
|
Speeding up the plot with many points
|
Using `go.Scattergl` should help, but it doesn't seem to work nicely in DataPlotly; as a standalone script there is no problem.
To test in DataPlotly just change the https://github.com/ghtmtt/DataPlotly/blob/master/data_plotly_plot.py#L160 from `go.Scatter` to `go.Scattergl`.
Here a standalone snippet working in python:
```python
import plotly
import plotly.graph_objs as go
import numpy as np

N = 100000
trace = go.Scattergl(
    x=np.random.randn(N),
    y=np.random.randn(N),
    mode='markers',
    marker=dict(
        color='#FFBAD2',
        line=dict(width=1)
    )
)
layout = go.Layout(
    showlegend=True
)
data = [trace]
fig = go.Figure(data=data, layout=layout)
plotly.offline.plot(fig)
```
|
open
|
2019-03-08T16:59:08Z
|
2019-10-11T13:03:32Z
|
https://github.com/ghtmtt/DataPlotly/issues/94
|
[
"enhancement"
] |
ghtmtt
| 1
|
DistrictDataLabs/yellowbrick
|
scikit-learn
| 1,089
|
Parallel Plot with independent vline
|
**Describe the solution you'd like**
`ParallelCoordinates` plots each vertical line using a shared y scale. Instead of normalizing the values (using the `normalize` parameter), I'd like to keep the original values and allow each vertical line to have its own scale (scaling/moving each axis so that it fits the current figsize).
**Examples**
I believe I've seen this behavior in other packages (e.g. plotly).

|
closed
|
2020-07-28T17:38:09Z
|
2020-10-02T17:27:34Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1089
|
[
"type: question",
"level: expert"
] |
franperezlopez
| 1
|
FactoryBoy/factory_boy
|
django
| 908
|
Setup for flask - unexpected behavior probably due to session management
|
#### The problem
I'm trying to use _flask, flask-sqlalchemy, factory-boy,_ and _pytest,_ but I'm having some kind of trouble with sessions (I think). Could anyone help me?
I've prepared a minimal example to show the problem I'm having. The tests only work because I put the _factory-boy_ class (named _ExampleFactory_) inside a _pytest_ fixture, but I would like to have it in a module of its own, as described in the docs.
Here is the minimal code example of my situation:
https://gist.github.com/lmtani/53a5723795b3ee9f3ad4ae52cecbec75#file-test_factory_boy-py
When I use the factory outside the _pytest_ fixture, the second test unexpectedly contains the "metrics" field.
```python
def test_should_not_contain_metrics(app, db_):
# Arrange
ExampleFactory._meta.model = ExampleDB
ExampleFactory._meta.sqlalchemy_session = db_.session
rec = ExampleFactory()
> assert "metrics" not in rec.outputs
E AssertionError: assert 'metrics' not in {'metrics': {'a': 1}}
E + where {'metrics': {'a': 1}} = <ExampleDB (transient 140452939208496)>.outputs
```
#### Proposed solution
It would be nice to have an example setup for use with Flask.
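For discussion, here is a sketch of one way to keep the factory in its own module (assuming factory_boy >= 3.2, which added `sqlalchemy_session_factory`): resolving the session lazily at instantiation time avoids binding the factory class to a session that only exists inside a fixture.
```python
# factories.py -- hypothetical standalone module
import factory
from myapp import db, ExampleDB  # assumed application imports


class ExampleFactory(factory.alchemy.SQLAlchemyModelFactory):
    class Meta:
        model = ExampleDB
        # Resolved on each factory call, so it picks up whatever session the
        # test fixture has set up for the current app context.
        sqlalchemy_session_factory = lambda: db.session
        sqlalchemy_session_persistence = "commit"

    # A fresh dict per instance, so state cannot leak between tests.
    outputs = factory.LazyFunction(dict)
```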
|
closed
|
2022-02-24T12:54:54Z
|
2024-04-08T16:36:08Z
|
https://github.com/FactoryBoy/factory_boy/issues/908
|
[] |
lmtani
| 0
|
huggingface/datasets
|
deep-learning
| 7,457
|
Document the HF_DATASETS_CACHE env variable
|
### Feature request
Hello,
I have a use case where my team shares models and datasets in a shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mentions the `HF_HOME` environment variable and never `HF_DATASETS_CACHE`.
It would be nice to add `HF_DATASETS_CACHE` to the datasets documentation if it's an intended feature.
If it's not, I think a deprecation warning would be appreciated.
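For context, a minimal usage sketch of the variable in question (the shared path is illustrative):
```python
import os

# Point the datasets cache at a team-shared directory (must be set before
# `datasets` is imported, or exported in the environment instead).
os.environ["HF_DATASETS_CACHE"] = "/shared/hf/datasets"

from datasets import load_dataset

ds = load_dataset("imdb")  # cached files end up under /shared/hf/datasets
```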
### Motivation
This variable is fully working and does for datasets what `HF_HUB_CACHE` does for models, so it's nice to know that it exists. This seems like a quick change to implement.
### Your contribution
I could contribute, since this only affects a small portion of the documentation.
|
open
|
2025-03-17T12:24:50Z
|
2025-03-20T10:36:46Z
|
https://github.com/huggingface/datasets/issues/7457
|
[
"enhancement"
] |
LSerranoPEReN
| 4
|
freqtrade/freqtrade
|
python
| 11,090
|
problem with Hyperliquid, data-download and trade
|
## Describe your environment
* Operating system: Ubuntu 24.04.1 LTS
* Python Version: 3.12.3
* CCXT version: 4.4.35
* Freqtrade Version: 2024.11
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
Using Hyperliquid as a DEX with lower fees seems interesting. So I made a bot and used one of my strategies that works well on other exchanges. I changed USDT to USDC, started the bot in dry mode, and tried to download data, but:
1. In the past two weeks, no trades were made on Hyperliquid, while the other exchange (OKX) had about 200 trades in that period.
2. I could not download data from Hyperliquid.
### Steps to reproduce:
1. As per the statements on the Freqtrade website, I used `--dl-trades`, and these were the results with the command below:
```
docker-compose run --rm Bot2 download-data --dl-trades --pairs .*/USDC:USDC --days 30
```
### Observed Results:
As mentioned in the last line of the logs, no trade history is available.
1. Am I doing something wrong in these steps?
2. Are there other things that should be considered?
3. Is it normal that no trades were made on Hyperliquid in the past two weeks?
Lots of thanks and regards
### Relevant code exceptions or logs
Note: Please copy/paste text of the messages, no screenshots of logs please.
```
2024-12-14 14:16:55,862 - freqtrade - INFO - freqtrade 2024.11
2024-12-14 14:16:56,327 - numexpr.utils - INFO - NumExpr defaulting to 8 threads.
2024-12-14 14:16:57,563 - freqtrade.configuration.load_config - INFO - Using config: /home/user_data/bot_config.json ...
2024-12-14 14:16:57,563 - freqtrade.configuration.load_config - INFO - Using config: /home/user_data/mutual_settings.json ...
2024-12-14 14:16:57,564 - freqtrade.loggers - INFO - Verbosity set to 0
2024-12-14 14:16:57,564 - freqtrade.configuration.configuration - INFO - Using exchange hyperliquid
2024-12-14 14:16:57,575 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2024-12-14 14:16:57,575 - freqtrade.configuration.configuration - INFO - Using data directory: /home/user_data/data/hyperliquid ...
2024-12-14 14:16:57,575 - freqtrade.configuration.configuration - INFO - Using pairs ['.*/USDC:USDC']
2024-12-14 14:16:57,575 - freqtrade.configuration.configuration - INFO - Detected --days: 30
2024-12-14 14:16:57,575 - freqtrade.configuration.configuration - INFO - Detected --dl-trades: True
2024-12-14 14:16:57,577 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2024-12-14 14:16:57,587 - freqtrade.exchange.check_exchange - INFO - Exchange "hyperliquid" is officially supported by the Freqtrade development team.
2024-12-14 14:16:57,587 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2024-12-14 14:16:58,192 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2024-12-14 14:16:58,192 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.35
2024-12-14 14:16:58,192 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'enableRateLimit': True}
2024-12-14 14:16:58,196 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'options': {'defaultType': 'swap'}, 'enableRateLimit': True, 'rateLimit': 1000}
2024-12-14 14:16:58,200 - freqtrade.exchange.exchange - INFO - Using Exchange "Hyperliquid"
2024-12-14 14:16:58,200 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Hyperliquid'...
2024-12-14 14:16:58,200 - freqtrade.exchange.exchange - INFO - Markets were not loaded. Loading them now..
2024-12-14 14:17:40,369 - freqtrade.data.history.history_utils - INFO - About to download pairs: ['BTC/USDC:USDC', 'ETH/USDC:USDC', 'ATOM/USDC:USDC', 'MATIC/USDC:USDC', 'DYDX/USDC:USDC', 'SOL/USDC:USDC', 'AVAX/USDC:USDC', 'BNB/USDC:USDC', 'APE/USDC:USDC', 'OP/USDC:USDC', 'LTC/USDC:USDC', 'ARB/USDC:USDC', 'DOGE/USDC:USDC', 'INJ/USDC:USDC', 'SUI/USDC:USDC', 'kPEPE/USDC:USDC', 'CRV/USDC:USDC', 'LDO/USDC:USDC', 'LINK/USDC:USDC', 'STX/USDC:USDC', 'RNDR/USDC:USDC', 'CFX/USDC:USDC', 'FTM/USDC:USDC', 'GMX/USDC:USDC', 'SNX/USDC:USDC', 'XRP/USDC:USDC', 'BCH/USDC:USDC', 'APT/USDC:USDC', 'AAVE/USDC:USDC', 'COMP/USDC:USDC', 'MKR/USDC:USDC', 'WLD/USDC:USDC', 'FXS/USDC:USDC', 'HPOS/USDC:USDC', 'RLB/USDC:USDC', 'UNIBOT/USDC:USDC', 'YGG/USDC:USDC', 'TRX/USDC:USDC', 'kSHIB/USDC:USDC', 'UNI/USDC:USDC', 'SEI/USDC:USDC', 'RUNE/USDC:USDC', 'OX/USDC:USDC', 'FRIEND/USDC:USDC', 'SHIA/USDC:USDC', 'CYBER/USDC:USDC', 'ZRO/USDC:USDC', 'BLZ/USDC:USDC', 'DOT/USDC:USDC', 'BANANA/USDC:USDC', 'TRB/USDC:USDC', 'FTT/USDC:USDC', 'LOOM/USDC:USDC', 'OGN/USDC:USDC', 'RDNT/USDC:USDC', 'ARK/USDC:USDC', 'BNT/USDC:USDC', 'CANTO/USDC:USDC', 'REQ/USDC:USDC', 'BIGTIME/USDC:USDC', 'KAS/USDC:USDC', 'ORBS/USDC:USDC', 'BLUR/USDC:USDC', 'TIA/USDC:USDC', 'BSV/USDC:USDC', 'ADA/USDC:USDC', 'TON/USDC:USDC', 'MINA/USDC:USDC', 'POLYX/USDC:USDC', 'GAS/USDC:USDC', 'PENDLE/USDC:USDC', 'STG/USDC:USDC', 'FET/USDC:USDC', 'STRAX/USDC:USDC', 'NEAR/USDC:USDC', 'MEME/USDC:USDC', 'ORDI/USDC:USDC', 'BADGER/USDC:USDC', 'NEO/USDC:USDC', 'ZEN/USDC:USDC', 'FIL/USDC:USDC', 'PYTH/USDC:USDC', 'SUSHI/USDC:USDC', 'ILV/USDC:USDC', 'IMX/USDC:USDC', 'kBONK/USDC:USDC', 'GMT/USDC:USDC', 'SUPER/USDC:USDC', 'USTC/USDC:USDC', 'NFTI/USDC:USDC', 'JUP/USDC:USDC', 'kLUNC/USDC:USDC', 'RSR/USDC:USDC', 'GALA/USDC:USDC', 'JTO/USDC:USDC', 'NTRN/USDC:USDC', 'ACE/USDC:USDC', 'MAV/USDC:USDC', 'WIF/USDC:USDC', 'CAKE/USDC:USDC', 'PEOPLE/USDC:USDC', 'ENS/USDC:USDC', 'ETC/USDC:USDC', 'XAI/USDC:USDC', 'MANTA/USDC:USDC', 'UMA/USDC:USDC', 'ONDO/USDC:USDC', 'ALT/USDC:USDC', 'ZETA/USDC:USDC', 'DYM/USDC:USDC', 'MAVIA/USDC:USDC', 'W/USDC:USDC', 'PANDORA/USDC:USDC', 'STRK/USDC:USDC', 'PIXEL/USDC:USDC', 'AI/USDC:USDC', 'TAO/USDC:USDC', 'AR/USDC:USDC', 'MYRO/USDC:USDC', 'kFLOKI/USDC:USDC', 'BOME/USDC:USDC', 'ETHFI/USDC:USDC', 'ENA/USDC:USDC', 'MNT/USDC:USDC', 'TNSR/USDC:USDC', 'SAGA/USDC:USDC', 'MERL/USDC:USDC', 'HBAR/USDC:USDC', 'POPCAT/USDC:USDC', 'OMNI/USDC:USDC', 'EIGEN/USDC:USDC', 'REZ/USDC:USDC', 'NOT/USDC:USDC', 'TURBO/USDC:USDC', 'BRETT/USDC:USDC', 'IO/USDC:USDC', 'ZK/USDC:USDC', 'BLAST/USDC:USDC', 'LISTA/USDC:USDC', 'MEW/USDC:USDC', 'RENDER/USDC:USDC', 'kDOGS/USDC:USDC', 'POL/USDC:USDC', 'CATI/USDC:USDC', 'CELO/USDC:USDC', 'HMSTR/USDC:USDC', 'SCR/USDC:USDC', 'NEIROETH/USDC:USDC', 'kNEIRO/USDC:USDC', 'GOAT/USDC:USDC', 'MOODENG/USDC:USDC', 'GRASS/USDC:USDC', 'PURR/USDC:USDC', 'PNUT/USDC:USDC', 'XLM/USDC:USDC', 'CHILLGUY/USDC:USDC', 'SAND/USDC:USDC', 'IOTA/USDC:USDC', 'ALGO/USDC:USDC', 'HYPE/USDC:USDC', 'ME/USDC:USDC', 'MOVE/USDC:USDC'], intervals: ['1m', '5m'] to /home/user_data/data/hyperliquid
2024-12-14 14:17:40,370 - freqtrade - ERROR - Trade history not available for Hyperliquid. You cannot use --dl-trades for this exchange.
```
|
closed
|
2024-12-14T14:55:12Z
|
2024-12-16T09:09:08Z
|
https://github.com/freqtrade/freqtrade/issues/11090
|
[
"Question"
] |
0xh0551
| 5
|
dunossauro/fastapi-do-zero
|
sqlalchemy
| 304
|
Add more questions to the quizzes
|
Ideally, all quizzes would have the same number of questions. Since most have 10, we can standardize on that number. Some quizzes don't have 10 questions:
- 09, with 5
The ones not listed have already been covered.
|
closed
|
2025-02-10T09:48:12Z
|
2025-03-11T20:02:53Z
|
https://github.com/dunossauro/fastapi-do-zero/issues/304
|
[] |
dunossauro
| 2
|
ultralytics/ultralytics
|
computer-vision
| 19,151
|
locking on to a single person in a multi-person frame
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I am using yolo11-pose to assess power lifts (squats) against standards and decide whether or not the lift passes or fails based on keypoints in the critical frame.
In dummy videos with only one person (the lifter) in the frame, the model and application perform perfectly.
However the application is intended for competition, and in competition, while there is only one lifter on the platform at a time, there are multiple other people (spotters) visible in the frame. This creates undesirable model behavior.
The desired behavior is that the model should focus only on the lifter and assess the quality of his lift. While the derived result is correct, the skeleton overlay is unstable: sometimes it correctly overlays the skeleton on the lifter, while at other times during the lift the skeleton may be temporarily overlaid on a spotter or another person in the frame. This is a problem. I have attached images to illustrate.
I have tried to overcome this by specifying the lifter's person ID number:
```
results = model.track(
source=video_file,
device=device,
show=False,
# conf=0.7,
save=True,
max_det=1
)
```
I have also tried excluding IDs that are erroneously annotated, reducing the ROI, and experimenting with increasing the Euclidean distance and confidence weights.
```
lifter_selector:
expected_center: [0.5, 0.5]
roi: [0.3, 0.3, 0.7, 0.7]
distance_weight: 2.0
confidence_weight: 0.8
lifter_id: 4
excluded_ids: [1, 7, 10]
```
I am having no success, and I hope that someone can help me find a way to "fix" the bounding box and skeleton overlay to the lifter and prevent those annotations from appearing on non-lifters on the platform.
thank you
correct

incorrect

incorrect

### Additional
Please let me know if you'd like me to share the code via a GitHub repo; I am happy to do so. I am really hoping you can help me, and I thank you in advance. Please let me know if my explanation is unclear or if you require more information.
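In case it helps frame the question, here is a sketch of the kind of post-filtering I am describing (not a built-in Ultralytics feature): run tracking with `persist=True`, lock onto the track ID whose box is closest to the frame centre on the first usable frame, and only draw that track's keypoints afterwards. File names, the model choice, and thresholds are illustrative.
```python
import cv2
from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")
lifter_id = None

for result in model.track(source="lift.mp4", stream=True, persist=True):
    if result.boxes.id is None:
        continue
    ids = result.boxes.id.int().tolist()
    centers = result.boxes.xywhn[:, :2]  # normalized (cx, cy) per detection

    if lifter_id is None:
        # Lock onto the person closest to the frame centre on the first usable frame.
        dists = (centers[:, 0] - 0.5) ** 2 + (centers[:, 1] - 0.5) ** 2
        lifter_id = ids[int(dists.argmin())]

    frame = result.orig_img.copy()
    for i, tid in enumerate(ids):
        if tid == lifter_id:
            # Draw only the locked track's keypoints instead of relying on save=True.
            for x, y in result.keypoints.xy[i].tolist():
                cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("lifter only", frame)
    if cv2.waitKey(1) == 27:
        break
```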
|
closed
|
2025-02-10T05:05:48Z
|
2025-02-15T02:37:39Z
|
https://github.com/ultralytics/ultralytics/issues/19151
|
[
"question",
"track",
"pose"
] |
mkingopng
| 8
|
albumentations-team/albumentations
|
deep-learning
| 2,070
|
[Bug] RandomShadow + shadow_intensity
|
In `RandomShadow`, large values of `shadow_intensity` correspond to a lighter shadow, and vice versa.
|
closed
|
2024-11-07T21:20:03Z
|
2024-11-08T23:50:18Z
|
https://github.com/albumentations-team/albumentations/issues/2070
|
[
"bug"
] |
ternaus
| 0
|
mlfoundations/open_clip
|
computer-vision
| 875
|
[1,512] data
|
What is the meaning of the [1, 512] tensor obtained from the CLIP encoder?
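For context, a minimal sketch (the model and pretrained tag are illustrative): the image encoder maps one image to a single 512-dimensional embedding in the shared image–text space, so [1, 512] is (batch_size, embedding_dim).
```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")  # illustrative choice

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # shape [1, 3, 224, 224]
with torch.no_grad():
    features = model.encode_image(image)

print(features.shape)  # torch.Size([1, 512]) -> (batch_size, embedding_dim)
```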
|
closed
|
2024-05-09T01:30:09Z
|
2024-05-09T05:07:03Z
|
https://github.com/mlfoundations/open_clip/issues/875
|
[] |
lwtgithublwt
| 1
|
marshmallow-code/marshmallow-sqlalchemy
|
sqlalchemy
| 296
|
How can I deserialize from either of two JSON fields into the same DB field
|
Hi,
I have this problem that I've been trying to solve for a few hours.
I'm using marshmallow-sqlalchemy to deserialize json into SQLAlchemy models
So far, I had a schema that contained a nested relationship :
category = Nested(CategorySchema)
So far so good.
However, now the API evolves and has to accommodate a new format: either "category" is provided and is a single category, or "categories" is provided and is a list of categories. (When one is provided, the other will be missing.) In either case, the categories relationship has to be filled in the DB.
How can I make my marshmallow-sqlalchemy schema evolve to accommodate either of the fields? It's similar to a synonym, except that if "category" is provided, I have to "transform" it into a list of one object. (I hope my explanation makes sense.)
I can't find in which direction I have to look.
Thanks in advance for any hint or help!
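One direction worth considering is a `pre_load` hook that normalizes the single-category payload into the list form before deserialization; a minimal sketch with plain marshmallow schemas (in practice this would go on the SQLAlchemy-backed schema, and the names here are illustrative):
```python
from marshmallow import Schema, fields, pre_load


class CategorySchema(Schema):  # stand-in for the existing schema
    name = fields.Str()


class ArticleSchema(Schema):  # in practice, the marshmallow-sqlalchemy schema
    categories = fields.Nested(CategorySchema, many=True)

    @pre_load
    def normalize_category(self, data, **kwargs):
        # Accept either "category" (single object) or "categories" (a list);
        # fold the single form into the list form before deserialization.
        if "category" in data and "categories" not in data:
            data = dict(data)
            data["categories"] = [data.pop("category")]
        return data


print(ArticleSchema().load({"category": {"name": "tech"}}))
# {'categories': [{'name': 'tech'}]}
```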
|
closed
|
2020-03-18T15:33:21Z
|
2020-03-21T16:27:22Z
|
https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/296
|
[] |
eino
| 1
|
yt-dlp/yt-dlp
|
python
| 12,482
|
[youtube] SSAP interference - missing formats, only itag 18 available
|
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Canada
### Provide a description that is worded well enough to be understood
Had some reports of users only getting format 18.
This seems consistent with YouTube's SSAP (server-side ads) experiment where they appear to remove the playback links for `adaptiveFormats` in the player response (leaving only the SABR url).
It appears to be mostly targeting Canadian accounts.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
(Example log from user, redacted)
yt-dlp --cookies-from-browser firefox -S vcodec:h264,res,acodec:m4a "https://www.youtube.com/watch?v=aqz-KE-bpKQ" -vU
[debug] Command-line config: ['--cookies-from-browser', 'firefox', '-S', 'vcodec:h264,res,acodec:m4a', 'https://www.youtube.com/watch?v=aqz-KE-bpKQ', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.02.19.023542 from yt-dlp/yt-dlp-nightly-builds [4985a4041] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg (setts), ffprobe N
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-15.0
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\PC\AppData\Roaming\Mozilla\Firefox\Profiles\XXX.default-release\cookies.sqlite"
Extracted 2074 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1841 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.02.19.023542 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.02.19.023542 from yt-dlp/yt-dlp-nightly-builds)
[debug] [youtube] Found YouTube account cookies
[youtube] Extracting URL: https://www.youtube.com/watch?v=aqz-KE-bpKQ
[youtube] aqz-KE-bpKQ: Downloading webpage
[youtube] aqz-KE-bpKQ: Downloading tv client config
[youtube] aqz-KE-bpKQ: Downloading player d50f54ef
[youtube] aqz-KE-bpKQ: Downloading tv player API JSON
[debug] Loading youtube-nsig.d50f54ef from cache
[debug] [youtube] Decrypted nsig XXX => XXX
[debug] Loading youtube-nsig.d50f54ef from cache
[debug] [youtube] Decrypted nsig XXX-2c => XXX
[debug] [youtube] aqz-KE-bpKQ: web client https formats require a GVS PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a GVS PO Token for this client with --extractor-args "youtube:po_token=web.gvs+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[debug] Sort order given by user: vcodec:h264, res, acodec:m4a
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, vcodec:h264(7), res, acodec:m4a(9), quality, fps, hdr:12(7), source, channels, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] aqz-KE-bpKQ: Downloading 1 format(s): 18
[debug] Invoking http downloader on "https://rr5---sn-ab5sznzy.googlevideo.com/videoplayback?REDACTED"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: Big Buck Bunny 60fps 4K - Official Blender Foundation Short Film [aqz-KE-bpKQ].mp4
Another example: https://old.reddit.com/r/youtubedl/comments/1iwg2fj/ytdl_not_seeing_full_quality_of_videos/
```
|
open
|
2025-02-26T06:58:37Z
|
2025-02-26T06:58:37Z
|
https://github.com/yt-dlp/yt-dlp/issues/12482
|
[
"help-wanted",
"site-bug",
"needs-investigating",
"site:youtube"
] |
coletdjnz
| 0
|
remsky/Kokoro-FastAPI
|
fastapi
| 15
|
High VRAM usage
|
Running with `docker compose up`, I get 1020 MiB of VRAM usage (`/usr/bin/python3 1020MiB`).
Is this normal? Given the size of the model, I was expecting less VRAM usage and was wondering if there is a way to trim this.
|
closed
|
2025-01-08T18:57:05Z
|
2025-01-10T02:42:43Z
|
https://github.com/remsky/Kokoro-FastAPI/issues/15
|
[
"enhancement"
] |
cduk
| 3
|
matplotlib/mplfinance
|
matplotlib
| 121
|
Bug Report: When calculating the moving average, -1 needs to be converted into np.NaN
|
**Describe the bug**
In https://github.com/matplotlib/mplfinance/blob/master/src/mplfinance/plotting.py#L340
```python
mavprices = pd.Series(closes).rolling(mav).mean().values
```
I believe the code needs to filter out `-1`, for example
```python
mavprices = pd.Series(c if c >= 0 else np.NaN for c in closes).rolling(mav).mean().values
```
otherwise the moving average line will look like the image below when there is missing data

|
closed
|
2020-05-04T14:20:56Z
|
2020-05-04T14:26:29Z
|
https://github.com/matplotlib/mplfinance/issues/121
|
[
"bug"
] |
char101
| 1
|
piskvorky/gensim
|
nlp
| 3,325
|
AttributeError: 'KeyedVectors' object has no attribute 'add'
|
#### Problem description
```
Using backend: pytorch
Traceback (most recent call last):
File "train.py", line 88, in <module>
main(config)
File "train.py", line 17, in main
train_data_loader = config.initialize('train_data_loader', module_data, "train")
File "/gpfs/home/chenzhiqiang_stu/taxoE/parse_config.py", line 64, in initialize
return getattr(module, module_cfg['type'])(*args, **module_cfg['args'])
File "/gpfs/home/chenzhiqiang_stu/taxoE/data_loader/data_loaders.py", line 91, in __init__
msk_graph_dataset = MaskedGraphDataset(raw_graph_dataset, mode=mode, sampling_mode=sampling_mode, negative_size=negative_size, expand_factor=expand_factor, cache_refresh_time=cache_refresh_time, normalize_embed=normalize_embed, test_topk=test_topk)
File "/gpfs/home/chenzhiqiang_stu/taxoE/data_loader/dataset.py", line 229, in __init__
self.kv.add([str(i) for i in range(len(self.vocab))], self.node_features.numpy())
AttributeError: 'KeyedVectors' object has no attribute 'add'
```
#### Steps/code/corpus to reproduce
I build the KeyedVectors like this:
```
# add node feature vector
self.kv = KeyedVectors(vector_size=self.node_features.shape[1])
self.kv.add([str(i) for i in range(len(self.vocab))], self.node_features.numpy())
```
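For gensim 4.x, the method was renamed; a minimal sketch of the equivalent call (assuming gensim >= 4.0, where `add` became `add_vectors`; the feature array here is a stand-in):
```python
import numpy as np
from gensim.models import KeyedVectors

node_features = np.random.rand(10, 128).astype(np.float32)  # stand-in for self.node_features

kv = KeyedVectors(vector_size=node_features.shape[1])
# gensim >= 4.0: KeyedVectors.add() was renamed to add_vectors()
kv.add_vectors([str(i) for i in range(node_features.shape[0])], node_features)
```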
#### Versions
Please provide the output of:
```
python 3.6
gensim 4.1.2
```
|
closed
|
2022-04-13T14:01:25Z
|
2022-04-14T04:39:57Z
|
https://github.com/piskvorky/gensim/issues/3325
|
[] |
GitHubBoys
| 2
|
StratoDem/sd-material-ui
|
dash
| 57
|
Remove PropTypes from package in favor of newly-supported Flow metadata.json in dash
|
### Description
https://github.com/plotly/dash/pull/207 updates to support flow types in [v0.21.0](https://github.com/plotly/dash/blob/ff93d2c4331a576b445be87bb3b77576f18b030a/CHANGELOG.md)
Dash now supports React components that use Flow. To support Flow, component_loader now has the following behavior to create docstrings as determined in discussion in #187: 1. If a Dash component has PropTypes-generated typing, the docstring uses the PropTypes, regardless of whether the component also has Flow types (current behavior). 2. Otherwise if a Dash component has Flow types but not PropTypes, the docstring now uses the objects generated by react-docgen from the Flow types.
We can now remove `PropTypes` entirely from the package and, where necessary, update Flow to mimic the types.
|
closed
|
2018-02-22T13:59:33Z
|
2018-02-22T15:45:22Z
|
https://github.com/StratoDem/sd-material-ui/issues/57
|
[
"Tech: JS",
"Priority: Medium",
"Tech: Architecture",
"Type: Enhancement"
] |
mjclawar
| 0
|
gunthercox/ChatterBot
|
machine-learning
| 1,666
|
how to hide the elements which are in loop in either javascript or jquery
|
Hi everyone,
I want to know how to hide and show elements generated in a for loop using JavaScript/jQuery.
Thanks in advance....
Happy coding......
|
closed
|
2019-03-13T05:58:38Z
|
2020-02-06T03:22:34Z
|
https://github.com/gunthercox/ChatterBot/issues/1666
|
[] |
mady143
| 6
|
FujiwaraChoki/MoneyPrinter
|
automation
| 168
|
Stuck with an OSError
|

Can anybody help me fix this? I am kinda lost
|
closed
|
2024-02-11T00:15:29Z
|
2024-02-11T02:04:55Z
|
https://github.com/FujiwaraChoki/MoneyPrinter/issues/168
|
[] |
Kempius
| 0
|
paperless-ngx/paperless-ngx
|
machine-learning
| 9,121
|
[BUG] Webhook URL doesn't accept hostname without dots
|
### Description
I have both `paperless-ngx` and `paperless-ai` running in Docker containers. Trying to use `http://paperless-ai:3000/api/webhook/document` as a webhook URL throws "Enter a valid URL":
```
{"headers":{"normalizedNames":{},"lazyUpdate":null},"status":400,"statusText":"Bad Request","url":"http://danhome.int.d.sb:8001/api/workflows/1/","ok":false,"name":"HttpErrorResponse","message":"Http failure response for http://danhome.int.d.sb:8001/api/workflows/1/: 400 Bad Request","error":{"actions":[{"webhook":{"url":["Enter a valid URL."]}}]}}
```
However, using the name of a Docker container like this is the best way of communicating between two Docker containers, as opposed to hard-coding IP addresses. DNS resolution is provided by Docker itself.
paperless-ngx should allow URLs that do not contain a dot in their domain name.
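For reference, the "Enter a valid URL." message most likely comes from Django's `URLValidator`, which by default rejects single-label hostnames (no dot and not `localhost`). A small reproduction sketch (assuming a plain Django install; this is my reading of the cause, not a confirmed diagnosis):
```python
import django
from django.conf import settings
from django.core.exceptions import ValidationError
from django.core.validators import URLValidator

settings.configure()
django.setup()

try:
    URLValidator()("http://paperless-ai:3000/api/webhook/document")
except ValidationError as exc:
    print(exc.messages)  # ['Enter a valid URL.'] -- dotless hostname is rejected
```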
### Steps to reproduce
1. Go to Workflows -> Add Workflow
2. Add any trigger
3. Add webhook action
4. Try to use `http://paperless-ai:3000/api/webhook/document` as the URL
### Webserver logs
```bash
N/A
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.7
### Host OS
Unraid 7.0.0 amd64
### Installation method
Docker - official image
### System status
```json
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description.
|
closed
|
2025-02-16T02:41:03Z
|
2025-03-24T03:17:54Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/9121
|
[
"bug",
"backend"
] |
Daniel15
| 6
|
plotly/jupyterlab-dash
|
dash
| 29
|
jupyterlab-dash@0.1.0-alpha.2" is not compatible with the current JupyterLab 1.0
|
Hi,
I'm trying to install jupyterlab-dash with jupyterlab 1.0.0, but am getting the following error:
```
build 01-Jul-2019 14:15:16 The following NEW packages will be INSTALLED:
build 01-Jul-2019 14:15:16
build 01-Jul-2019 14:15:16 json5: 0.8.4-py_0
build 01-Jul-2019 14:15:16 jupyterlab: 1.0.0-py36_0
build 01-Jul-2019 14:15:16 jupyterlab_server: 1.0.0-py_0
build 01-Jul-2019 14:15:16
...
...
...
build 01-Jul-2019 14:15:21 [91mEnabling: jupyterlab
build 01-Jul-2019 14:15:21 - Writing config: /opt/conda/envs/lab/etc/jupyter
build 01-Jul-2019 14:15:21 [0m[91m - Validating...
build 01-Jul-2019 14:15:21 [0m[91m jupyterlab 1.0.0 [32mOK[0m
build 01-Jul-2019 14:15:29 [0mFetching package metadata .........................
build 01-Jul-2019 14:15:53 Solving package specifications: .
build 01-Jul-2019 14:15:53
build 01-Jul-2019 14:15:53 Package plan for installation in environment /opt/conda/envs/lab:
build 01-Jul-2019 14:15:53
build 01-Jul-2019 14:15:53 The following NEW packages will be INSTALLED:
build 01-Jul-2019 14:15:53
build 01-Jul-2019 14:15:53 aiohttp: 3.5.4-py36h14c3975_0
build 01-Jul-2019 14:15:53 async-timeout: 3.0.1-py_1000
build 01-Jul-2019 14:15:53 click: 7.0-py_0
build 01-Jul-2019 14:15:53 dash: 0.43.0-py_0
build 01-Jul-2019 14:15:53 dash-core-components: 0.48.0-py_0
build 01-Jul-2019 14:15:53 dash-html-components: 0.16.0-py_0
build 01-Jul-2019 14:15:53 dash-renderer: 0.24.0-py_0
build 01-Jul-2019 14:15:53 dash-table: 3.7.0-py_0
build 01-Jul-2019 14:15:53 flask: 1.0.3-py_0
build 01-Jul-2019 14:15:53 flask-compress: 1.4.0-py36_0
build 01-Jul-2019 14:15:53 idna_ssl: 1.1.0-py36_1000
build 01-Jul-2019 14:15:53 itsdangerous: 1.1.0-py_0
build 01-Jul-2019 14:15:53 jupyter-server-proxy: 1.1.0-py_0
build 01-Jul-2019 14:15:53 jupyterlab-dash: 0.1.0a2-py_0
build 01-Jul-2019 14:15:53 multidict: 4.5.2-py36h14c3975_1000
build 01-Jul-2019 14:15:53 plotly: 3.10.0-py_0
build 01-Jul-2019 14:15:53 retrying: 1.3.3-py_2
build 01-Jul-2019 14:15:53 simpervisor: 0.3-py_1
build 01-Jul-2019 14:15:53 typing_extensions: 3.7.4-py36_0
build 01-Jul-2019 14:15:53 werkzeug: 0.15.4-py_0
build 01-Jul-2019 14:15:53 yarl: 1.3.0-py36h14c3975_1000
build 01-Jul-2019 14:15:53
build 01-Jul-2019 14:15:53 Proceed ([y]/n)?
...
...
...
An error occured.
build 01-Jul-2019 14:17:12 ValueError:
build 01-Jul-2019 14:17:12 "jupyterlab-dash@0.1.0-alpha.2" is not compatible with the current JupyterLab
build 01-Jul-2019 14:17:12 Conflicting Dependencies:
build 01-Jul-2019 14:17:12 JupyterLab Extension Package
build 01-Jul-2019 14:17:12 >=1.0.0 <2.0.0 >=0.19.1 <0.20.0@jupyterlab/application
build 01-Jul-2019 14:17:12 >=1.0.0 <2.0.0 >=0.19.1 <0.20.0@jupyterlab/console
build 01-Jul-2019 14:17:12 >=1.0.0 <2.0.0 >=0.19.2 <0.20.0@jupyterlab/notebook
build 01-Jul-2019 14:17:12 See the log file for details: /tmp/jupyterlab-debug-zjcon8kt.log
build 01-Jul-2019 14:17:14 Removing intermediate container e9d21e6841f0
```
Is jupyterlab-dash compatible with the newest jupyterlab version?
|
closed
|
2019-07-01T20:09:58Z
|
2019-09-13T14:08:29Z
|
https://github.com/plotly/jupyterlab-dash/issues/29
|
[] |
cw6515
| 5
|
chiphuyen/stanford-tensorflow-tutorials
|
tensorflow
| 49
|
03_linear_regression_sol.py can't run with errors as follows
|
Traceback (most recent call last):
File "D:/stanford_tensorflow_tutorials/tf_oreilly/01_linear_regression_sol.py", line 19, in <module>
book = xlrd.open_workbook(DATA_FILE, encoding_override="utf-8")
File "C:\Anaconda3\lib\site-packages\xlrd\__init__.py", line 441, in open_workbook
ragged_rows=ragged_rows,
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 107, in open_workbook_xls
bk.fake_globals_get_sheet()
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 687, in fake_globals_get_sheet
self.get_sheets()
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 678, in get_sheets
self.get_sheet(sheetno)
File "C:\Anaconda3\lib\site-packages\xlrd\book.py", line 669, in get_sheet
sh.read(self)
File "C:\Anaconda3\lib\site-packages\xlrd\sheet.py", line 1475, in read
self.update_cooked_mag_factors()
File "C:\Anaconda3\lib\site-packages\xlrd\sheet.py", line 1543, in update_cooked_mag_factors
elif not (10 <= zoom <= 400):
TypeError: unorderable types: int() <= NoneType()
|
open
|
2017-08-12T13:35:06Z
|
2017-08-12T13:35:06Z
|
https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/49
|
[] |
WoNiuHu
| 0
|