| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pytest-dev/pytest-html
|
pytest
| 11
|
Show extras with screenshot for passed tests as well
|
Hi, Dave!
Is it possible to show screenshots for passed tests as well?
This is my code:
``` python
def pytest_runtest_makereport(__multicall__, item):
    _driver = item.funcargs['app'].driver
    browser_type = _driver.capabilities['browserName']
    pytest_html = item.config.pluginmanager.getplugin('html')
    report = __multicall__.execute()
    extra = getattr(report, 'extra', [])
    if report.when == 'call':
        url = _driver.current_url
        extra.append(pytest_html.extras.url(url))
        screenshot = _driver.get_screenshot_as_base64()
        extra.append(pytest_html.extras.image(screenshot, ''))
    report.extra = extra
    return report
```
Both passed and failed tests have a URL field;
however, only failed tests are displayed with a screenshot.
BR and thanks for the great plugin :)
Michael
|
closed
|
2015-07-14T16:19:15Z
|
2016-01-16T01:10:51Z
|
https://github.com/pytest-dev/pytest-html/issues/11
|
[
"bug"
] |
shat00n
| 13
|
stanfordnlp/stanza
|
nlp
| 1,370
|
Italian model runs out of memory on Colab A100 GPU
|
Trying to process small texts (300–500 KB) on a 40 GB GPU on Colab raises an OutOfMemoryError; the log is below. The English model, on the same text, does not. It happens even with processors='tokenize,lemma,pos'.
```
---------------------------------------------------------------------------
OutOfMemoryError                          Traceback (most recent call last)
<ipython-input-4-e215e2dbfcd3> in <cell line: 2>()
      1 s = open('KINGSLEY_TIASPETTOACENTRALPARL.txt').read()
----> 2 doc = nlp(s)

11 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py in forward(self, input)
    114
    115     def forward(self, input: Tensor) -> Tensor:
--> 116         return F.linear(input, self.weight, self.bias)
    117
    118     def extra_repr(self) -> str:

OutOfMemoryError: CUDA out of memory. Tried to allocate 9.50 GiB. GPU 0 has a total capacity of 15.77 GiB of which 5.42 GiB is free. Process 3295 has 10.35 GiB memory in use. Of the allocated memory 9.93 GiB is allocated by PyTorch, and 37.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
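A common workaround, independent of the model, is to feed the text in smaller pieces, e.g. splitting on paragraph boundaries and processing each chunk separately. The sketch below is an assumption on my part; the chunk size and the stanza call at the end are illustrative, not taken from stanza's docs.

```python
def chunk_text(text, max_chars=20_000):
    """Split text on paragraph boundaries into pieces of at most max_chars.

    A single paragraph longer than max_chars is kept whole rather than cut.
    """
    chunks, current, size = [], [], 0
    for para in text.split('\n\n'):
        if size + len(para) > max_chars and current:
            chunks.append('\n\n'.join(current))
            current, size = [], 0
        current.append(para)
        size += len(para) + 2  # account for the '\n\n' separator
    if current:
        chunks.append('\n\n'.join(current))
    return chunks

# hypothetical usage with stanza (not executed here):
# docs = [nlp(c) for c in chunk_text(s)]
```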
|
closed
|
2024-03-19T10:04:36Z
|
2024-04-20T19:01:48Z
|
https://github.com/stanfordnlp/stanza/issues/1370
|
[
"bug",
"fixed on dev"
] |
lucaducceschi
| 7
|
pandas-dev/pandas
|
data-science
| 61,076
|
PERF: why is nlargest so slow?
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
nlargest is very slow. I think this is more of a question than a bug, but maybe we should do something to improve it.
Here is my code; you can see nlargest takes many times longer than it should.

### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
### Prior Performance
_No response_
|
open
|
2025-03-07T14:31:23Z
|
2025-03-08T00:50:36Z
|
https://github.com/pandas-dev/pandas/issues/61076
|
[
"Performance",
"Needs Info"
] |
ZGarry
| 4
|
MaartenGr/BERTopic
|
nlp
| 1,720
|
flan-t5 for
|
Hi @MaartenGr,
I hope you are doing well. I am getting the following error when using the flan-t5 model for topic representation. Any solution for this? Thanks
```
from transformers import pipeline
from bertopic import BERTopic  # import needed for the last line
from bertopic.representation import TextGeneration

prompt = "I have a topic described by the following keywords: [KEYWORDS]. Based on the previous keywords, what is this topic about?"

# Create your representation model
generator = pipeline('text2text-generation', model='google/flan-t5-base')
representation_model = TextGeneration(generator)
topic_model_t5 = BERTopic(representation_model=representation_model)
```
Error:
```
----> 2 topics, probs = topic_model_t5.fit_transform(docs)
      3 print(topic_model_t5.get_topic_info())
[/usr/local/lib/python3.10/dist-packages/bertopic/representation/_textgeneration.py](https://localhost:8080/#) in extract_topics(self, topic_model, documents, c_tf_idf, topics)
    145
    146         # Prepare prompt
--> 147         truncated_docs = [truncate_document(topic_model, self.doc_length, self.tokenizer, doc) for doc in docs]
    148         prompt = self._create_prompt(truncated_docs, topic, topics)
    149         self.prompts_.append(prompt)

TypeError: 'NoneType' object is not iterable
```
|
open
|
2024-01-02T00:09:46Z
|
2024-01-06T22:08:49Z
|
https://github.com/MaartenGr/BERTopic/issues/1720
|
[] |
mjaved-nz
| 4
|
modoboa/modoboa
|
django
| 3,241
|
Documentation lacks using unix socket
|
# Impacted versions
* OS Type: Arch
* OS Version: current
* Database Type: MySQL
* Database version: 11.3.2
* Modoboa: 2.2.4
* installer used: No
* Webserver: Nginx
The documentation assumes that all connections to external services use TCP, but unix sockets may be used too.
I struggled to configure MySQL this way since it's not documented, and now that Redis is required, I can't figure out how to configure it either.
I use this in settings.py:
```
REDIS_HOST = '/run/redis/redis.sock'
REDIS_PORT = 6279
REDIS_QUOTA_DB = 0
REDIS_URL = 'unix://{}?db={}'.format(REDIS_HOST, REDIS_QUOTA_DB)
```
But Django seems to keep using REDIS_PORT, as I get this error:
```
ConnectionError at /
Error -2 connecting to /run/redis/redis.sock:6279. Name or service not known.
```
Overwriting REDIS_PORT with 0 or an empty value still throws errors.
So, how can I use a Redis unix socket?
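For what it's worth, redis-py accepts a `unix://` URL with no port component at all; the `redis.sock:6279` in the traceback suggests host and port are being joined somewhere before the URL is built. A sketch of the URL form (whether Modoboa's settings let you bypass REDIS_PORT entirely is an assumption on my part):

```python
from urllib.parse import urlparse

REDIS_SOCKET = '/run/redis/redis.sock'
REDIS_QUOTA_DB = 0
# a unix-domain socket URL carries no host:port pair at all
REDIS_URL = 'unix://{}?db={}'.format(REDIS_SOCKET, REDIS_QUOTA_DB)

parsed = urlparse(REDIS_URL)
```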
|
closed
|
2024-04-27T19:16:34Z
|
2024-05-10T10:52:00Z
|
https://github.com/modoboa/modoboa/issues/3241
|
[
"documentation"
] |
yannfill
| 4
|
httpie/cli
|
api
| 829
|
Cannot install with apt-get
|
I tried to install on Ubuntu 18.04.3 and got the following:
```bash
root@machine:~# apt-get install httpie
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package httpie is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'httpie' has no installation candidate
```
|
closed
|
2020-01-07T13:56:41Z
|
2020-01-07T14:59:30Z
|
https://github.com/httpie/cli/issues/829
|
[] |
tiagomsmagalhaes
| 4
|
Tinche/aiofiles
|
asyncio
| 99
|
about code generator
|
Your idea is very good (using an executor to run blocking tasks asynchronously).
However, nobody can hand-list all the functions.
So, what about using a code generator to generate the whole API?
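A minimal sketch of the idea: generate async wrappers for every public callable in a module by delegating to an executor. The target module here (`os.path`) is purely illustrative, not what aiofiles itself wraps.

```python
import asyncio
import functools
import inspect
import os.path

def make_async(func):
    """Wrap a blocking callable so it runs in the default executor."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            None, functools.partial(func, *args, **kwargs))
    return wrapper

def generate_api(module):
    """'Code-generate' an async facade over every public callable in module."""
    return {name: make_async(fn)
            for name, fn in inspect.getmembers(module, callable)
            if not name.startswith('_')}

aio_path = generate_api(os.path)
```

In aiofiles the real generation step would run once (at build time or import time) instead of hand-writing each wrapper.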
|
closed
|
2021-04-17T09:12:46Z
|
2022-01-25T14:41:36Z
|
https://github.com/Tinche/aiofiles/issues/99
|
[] |
GoddessLuBoYan
| 3
|
pydantic/pydantic
|
pydantic
| 10,699
|
Subclassing Pydantic models with Python 3.13 and mypy
|
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Hi Pydantic folks, I'm opening this issue because before reading the code I thought the problem was with Pydantic, but I'm not 100% sure, so please let me know if I need to open this in the mypy repository instead.
When I use models that inherit from other models and I try to narrow a field's type, e.g. `str` -> `Literal`, mypy complains because the type in the `__replace__` function is not valid.
```sh
(venv) ➜ leo313 mypy --pretty x.py
x.py:9: error: Signature of "__replace__" incompatible with supertype "MyBaseClass" [override]
class MyInheritedClass_A(MyBaseClass):
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
x.py:9: note: Superclass:
x.py:9: note: def __replace__(*, __pydantic_extra__: dict[str, Any] | None = ..., __pydantic_fields_set__: set[str] = ..., __pydantic_private__: dict[str, Any] | None = ..., RequestType: str = ...) -> MyBaseClass
x.py:9: note: Subclass:
x.py:9: note: def __replace__(*, __pydantic_extra__: dict[str, Any] | None = ..., __pydantic_fields_set__: set[str] = ..., __pydantic_private__: dict[str, Any] | None = ..., RequestType: Literal['Create'] = ..., PhysicalResourceId: str = ...) -> MyInheritedClass_A
x.py:13: error: Signature of "__replace__" incompatible with supertype "MyBaseClass" [override]
class MyInheritedClass_B(MyBaseClass):
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
x.py:13: note: Superclass:
x.py:13: note: def __replace__(*, __pydantic_extra__: dict[str, Any] | None = ..., __pydantic_fields_set__: set[str] = ..., __pydantic_private__: dict[str, Any] | None = ..., RequestType: str = ...) -> MyBaseClass
x.py:13: note: Subclass:
x.py:13: note: def __replace__(*, __pydantic_extra__: dict[str, Any] | None = ..., __pydantic_fields_set__: set[str] = ..., __pydantic_private__: dict[str, Any] | None = ..., RequestType: Literal['Delete'] = ..., PhysicalResourceId: str = ...) -> MyInheritedClass_B
```
If I run the same code in Python 3.12 it works without any problem.
```sh
(venv) ➜ leo313 mypy --version
mypy 1.13.0 (compiled: yes)
```
Can you please give me some direction on how to solve this problem?
Thank you
### Example Code
```Python
from typing import Literal
from pydantic import BaseModel, Field
class MyBaseClass(BaseModel):
request_type: str = Field(..., alias="RequestType")
class MyInheritedClass_A(MyBaseClass):
request_type: Literal["Create"] = Field(..., alias="RequestType")
physical_resource_id: str = Field(..., alias="PhysicalResourceId")
class MyInheritedClass_B(MyBaseClass):
request_type: Literal["Delete"] = Field(..., alias="RequestType")
physical_resource_id: str = Field(..., alias="PhysicalResourceId")
```
### Python, Pydantic & OS Version
```Text
(venv) ➜ leo313 python -c "import pydantic.version; print(pydantic.version.version_info())"
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /private/tmp/leo313/venv/lib/python3.13/site-packages/pydantic
python version: 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 15.0.0 (clang-1500.3.9.4)]
platform: macOS-14.7-arm64-arm-64bit-Mach-O
related packages: mypy-1.13.0 typing_extensions-4.12.2
commit: unknown
```
|
closed
|
2024-10-24T13:43:20Z
|
2025-01-15T08:47:47Z
|
https://github.com/pydantic/pydantic/issues/10699
|
[
"topic-type checking"
] |
leandrodamascena
| 3
|
Avaiga/taipy
|
automation
| 1,454
|
action_keys for date with_time
|
### What would you like to share or ask?
is there something similar to #action_keys but for date?
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional)
|
open
|
2024-06-26T17:00:10Z
|
2024-10-11T13:14:11Z
|
https://github.com/Avaiga/taipy/issues/1454
|
[
"🖰 GUI",
"❓ Question",
"🟨 Priority: Medium",
"💬 Discussion"
] |
carza06
| 6
|
dynaconf/dynaconf
|
flask
| 893
|
[RFC] Restructure docs pages hierarchy
|
## Is your feature request related to a problem? Please describe
The lack of hierarchy in the menu makes it harder to navigate. Related to 4.0.0 roadmap proposals.
## Describe the solution you'd like
**Add hierarchy to the pages**, nesting them in categories.
Eg:
- Home `(describe features in Home with links to other parts)`
- Getting started `(simple end to end tutorial covering most features)`
- Guides `(in-depth look at specific tasks)`
- Doing that ...
- How to ...
- Reference `(this may be generated from code?)`
- Configuration
- Validation
- Converters
- CLI
- Contributing `(may be from CONTRIBUTING.md)`
- FAQ
**Redirect old links to new or related ones**, so they would not break. Eg:
dynaconf.com/configuration -> dynaconf.com/reference/configuration
**Questions**
- If menus are foldable or always open
- If guides should be further subdivided
- Should such drastic (structural) changes be released just in 4.0.0? Or the existing content could be migrated to a similar format before that?
## Describe alternatives you've considered
There are many.
## Additional context
As there are proposed changes about the docs in the 4.0.0 roadmap, like adding a guides/tutorial format, this restructuring should be designed to accommodate those proposals too (if it is to be released before then).
|
open
|
2023-03-16T16:28:17Z
|
2023-08-21T19:47:46Z
|
https://github.com/dynaconf/dynaconf/issues/893
|
[
"Not a Bug",
"RFC",
"Docs"
] |
pedro-psb
| 1
|
littlecodersh/ItChat
|
api
| 397
|
Hello, is there a detailed tutorial aimed at complete beginners (no programming background), or a ready-to-run program?
|
Hello, I have only had brief contact with Python and am not very familiar with it. I still don't quite understand the explanations in the tutorial. For the problems mentioned there, is there a good solution? My goal is a personal WeChat auto-reply.
|
closed
|
2017-06-05T12:42:41Z
|
2017-06-13T02:56:15Z
|
https://github.com/littlecodersh/ItChat/issues/397
|
[
"question"
] |
lvbin8023
| 3
|
RobertCraigie/prisma-client-py
|
pydantic
| 52
|
Add support for full text search for PostgreSQL
|
[https://www.prisma.io/docs/concepts/components/prisma-client/full-text-search](https://www.prisma.io/docs/concepts/components/prisma-client/full-text-search)
## Suggested solution
Something like the following: `search` should be added to `StringFilter` and must be an instance of `String`.
It should be noted that I'm not set on `String` being the name.
```py
# NOTE: actual API is still TODO
from prisma.querying import String
await client.post.find_first(
where={
'content': {
'search': String.contains('cat', 'dog'),
},
},
)
await client.post.find_first(
where={
'content': {
# for anything we don't explicitly support
'search': String.raw('fox \| cat'),
},
},
)
```
|
closed
|
2021-08-26T07:55:37Z
|
2024-08-11T09:05:03Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/52
|
[
"kind/feature",
"process/candidate",
"topic: client",
"level/advanced",
"priority/high"
] |
RobertCraigie
| 4
|
matplotlib/matplotlib
|
data-visualization
| 28,855
|
[Bug]: Using an ipython config of `InteractiveShellApp.matplotlib='inline'` fails on macos for `matplotlib>3.9.0`
|
### Bug summary
I am using a Jupyter notebook in VS Code 1.93.1 on macOS Ventura 13.4.1 with Python 3.12.5.
With `matplotlib<=3.9.0`, I can plot visualizations inline within VS Code. The reported backend is by default `inline` and everything is fine.
Starting with version 3.9.1, the default backend apparently switched to `macosx`. When I try to plot something now, a new Python window pops up, but no plot is created unless I add `plt.show()`.
Is this the intended behavior? It breaks essentially all of my notebooks because I would have to go in and add `%matplotlib inline` to make things work again.
### Code for reproduction
```Python
import matplotlib.pyplot as plt
import matplotlib
print(matplotlib.get_backend())
plt.plot([0, 1, 2], [1, 2, 3])
```
### Actual outcome
With `matplotlib<=3.9.0`:
<img width="941" alt="image" src="https://github.com/user-attachments/assets/35144b43-2452-455c-a93a-2d4f63ea2123">
With `matplotlib>3.9.0`:
<img width="720" alt="image" src="https://github.com/user-attachments/assets/5c4d9e93-b7da-48a8-8551-d3b56874a450">
### Expected outcome
Default backend should be `inline` in `matplotlib>3.9.0`.
### Additional information
_No response_
### Operating system
MacOS Ventura 13.4.1
### Matplotlib Version
3.9.0, 3.9.1, 3.9.2
### Matplotlib Backend
inline, macosx
### Python version
3.12.5
### Jupyter version
6.5.1
### Installation
pip
|
open
|
2024-09-20T19:03:55Z
|
2025-02-21T19:09:02Z
|
https://github.com/matplotlib/matplotlib/issues/28855
|
[
"OS: Apple",
"third-party integration: jupyter"
] |
Simon-Stone
| 14
|
encode/apistar
|
api
| 180
|
Add render_form to jinja2
|
When I'm trying to use `render_form` in the template like this:
```
<html>
<body>
<h1>{{ message }}</h1>
{{ render_form('https://google.com') }}
</body>
</html>
```
I get an error message:
<img width="787" alt="jinja2_exceptions_undefinederror___render_form__is_undefined____werkzeug_debugger" src="https://cloud.githubusercontent.com/assets/3472902/26384435/0a559d82-3fee-11e7-9426-f84e2293c89a.png">
What am I doing wrong? (I haven't changed anything yet.)
|
closed
|
2017-05-24T02:29:57Z
|
2018-09-25T14:45:54Z
|
https://github.com/encode/apistar/issues/180
|
[] |
nirgn975
| 4
|
django-import-export/django-import-export
|
django
| 1,779
|
Issue with adding rows with generated field
|
**Describe the bug**
Import of rows via django admin breaks when I have a generated column, but works with that field commented out.
**To Reproduce**
Steps to reproduce the behavior:
Import of rows for a model with a generated field.
**Versions (please complete the following information):**
- django 5.0
- import export 3.3.7
**Expected behavior**
Import of rows for a table.
The import works fine when I have no generated column, but breaks when the table has a generated column
```python
class OperationsExportShipments(models.Model):
shipment_id = models.CharField(primary_key=True)
<details removed>
# cost = models.GeneratedField(
# expression=RawSQL('COALESCE(quote_cost, reconciled_cost)',[]),
# db_column='cost',
# output_field=models.DecimalField(max_digits=10, decimal_places=2),
# db_persist=True,
# )
```
Import of my spreadsheet works fine with the cost generated field commented out, but breaks when it is present
**Screenshots**
I can DM these but as it's company code and data I cannot post here.
|
closed
|
2024-03-29T15:55:06Z
|
2024-04-28T15:08:09Z
|
https://github.com/django-import-export/django-import-export/issues/1779
|
[
"bug"
] |
ghost
| 6
|
deepspeedai/DeepSpeed
|
machine-learning
| 6,978
|
[BUG] pdsh runner doesn't work with tqdm bar
|
**Describe the bug**
Training tqdm bar won't show in the rank0 console if pdsh is used as the launcher.
**To Reproduce**
Run any multi-node training on an example script; the tqdm bar from the local rank (localhost) won't show up, appearing only as a blank line.
I assume the root cause is that pdsh tries to ssh even into localhost, and the bar somehow cannot be rendered through an ssh session.
**Expected behavior**
Be able to see the tqdm bar.
**System info (please complete the following information):**
- OS: Ubuntu 22.04
- x8 GPUs each node
- Python version 3.12.8
- 4 Nodes, same network
**Launcher context**
PDSH
|
open
|
2025-01-29T08:03:44Z
|
2025-01-31T17:39:38Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6978
|
[
"bug",
"training"
] |
Superskyyy
| 3
|
PaddlePaddle/PaddleHub
|
nlp
| 2,259
|
Bug: on the Cambricon MLU (a domestic accelerator card), running an ERNIE sequence-labeling task fails to invoke MLU acceleration; hoping the official developers can reply, thank you very much!!
|
1) PaddlePaddle version: PaddlePaddle-mlu==0.0.0, in a docker environment
2) PaddleHub version: 2.1.0
3) System environment: CentOS 7.6 Linux
Task startup log:
```
/opt/py37env/lib/python3.7/site-packages/paddle/fluid/framework.py:193: UserWarning: We will fallback into legacy dygraph on NPU/XPU/MLU/IPU/ROCM devices. Because we only support new eager dygraph mode on CPU/GPU currently.
"We will fallback into legacy dygraph on NPU/XPU/MLU/IPU/ROCM devices. Because we only support new eager dygraph mode on CPU/GPU currently. "
[2023-05-29 16:37:34,910] [ INFO] - Already cached /root/.paddlenlp/models/ernie-1.0/ernie_v1_chn_base.pdparams
W0529 16:37:34.916133 5170 device_context.cc:48] Please NOTE: device: 0, MLU Compute Capability: 3.0, Driver API Version: 4.20.18, Runtime API Version: 6.0.2, Cnnl API Version: 1.13.0, MluOp API Version: 0.2.0
[2023-05-29 16:37:54,520] [ INFO] - Loaded parameters from /home/suntao_test/v2_nlp_es005/nlp_mod_005/best_model/model.pdparams
* Serving Flask app 'hub_serving'
* Debug mode: off
INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:8867
INFO:werkzeug:Press CTRL+C to quit
INFO:werkzeug:127.0.0.1 - - [29/May/2023 16:38:04] "POST /predict/NLP_ES005 HTTP/1.1" 200 -
INFO:werkzeug:127.0.0.1 - - [29/May/2023 16:38:05] "POST /predict/NLP_ES005 HTTP/1.1" 200 -
```
Error message:
```
{
"msg": "The device should not be 'gpu', since PaddlePaddle is not compiled with CUDA",
"status": "000",
"results": ""
}
```
|
open
|
2023-05-29T09:24:01Z
|
2024-02-26T04:59:30Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/2259
|
[] |
suntao2015005848
| 0
|
JaidedAI/EasyOCR
|
deep-learning
| 1,192
|
Unable to do text recognition with custom trained model
|
Hi
I followed the tutorial to retrain the model. I created my own dataset of around 700 images, placed them in the all_data folder, and started the training. Below are the training logs:
[20000/300000] Train loss: 0.14954, Valid loss: 0.09600, Elapsed_time: 1291.66819
Current_accuracy : 90.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
23 Church St | 23 Church St | 0.7626 True
049481 | 049481 | 0.9509 True
--------------------------------------------------------------------------------
[40000/300000] Train loss: 0.11850, Valid loss: 0.12338, Elapsed_time: 2585.50984
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
United States | United States | 0.8934 True
United States | United States | 0.8934 True
--------------------------------------------------------------------------------
[60000/300000] Train loss: 0.11800, Valid loss: 0.12152, Elapsed_time: 3880.77837
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
049481 | 049481 | 0.9912 True
California | Palieornia | 0.0043 False
--------------------------------------------------------------------------------
[80000/300000] Train loss: 0.11783, Valid loss: 0.09142, Elapsed_time: 5178.66774
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
255 Park Drive | 255 Park Drive | 0.9866 True
#10-01 | #10-01 | 0.9993 True
--------------------------------------------------------------------------------
[100000/300000] Train loss: 0.11770, Valid loss: 0.15695, Elapsed_time: 6477.48967
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
#10-01 | #10-01 | 0.9994 True
#10-01 | #10-01 | 0.9994 True
--------------------------------------------------------------------------------
[120000/300000] Train loss: 0.11767, Valid loss: 0.15748, Elapsed_time: 7776.41339
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
California | Patteorson | 0.0055 False
049481 | 049481 | 0.9948 True
--------------------------------------------------------------------------------
[140000/300000] Train loss: 0.11759, Valid loss: 0.09135, Elapsed_time: 9076.36160
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
95363-8876 | 95363-8876 | 0.9956 True
23 Church St | 23 Church St | 0.9874 True
--------------------------------------------------------------------------------
[160000/300000] Train loss: 0.11780, Valid loss: 0.15532, Elapsed_time: 10374.95994
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
23 Church St | 23 Church St | 0.8750 True
#10-01 | #10-01 | 0.9980 True
--------------------------------------------------------------------------------
[180000/300000] Train loss: 0.11749, Valid loss: 0.12327, Elapsed_time: 11673.57297
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
23 Church St | 23 Church St | 0.9194 True
Singapore | Singapore | 0.9681 True
--------------------------------------------------------------------------------
[200000/300000] Train loss: 0.11743, Valid loss: 0.09138, Elapsed_time: 12974.15161
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
Jenny Wilson | Jenny Wilson | 0.9762 True
255 Park Drive | 255 Park Drive | 0.9212 True
--------------------------------------------------------------------------------
[220000/300000] Train loss: 0.11754, Valid loss: 0.12260, Elapsed_time: 14272.45811
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
California | Patieorson | 0.0044 False
255 Park Drive | 255 Park Drive | 0.9711 True
--------------------------------------------------------------------------------
[240000/300000] Train loss: 0.11742, Valid loss: 0.12318, Elapsed_time: 15571.78331
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
United States | United States | 0.8852 True
Jenny Wilson | Jenny Wilson | 0.9856 True
--------------------------------------------------------------------------------
[260000/300000] Train loss: 0.11739, Valid loss: 0.09150, Elapsed_time: 16869.83790
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
95363-8876 | 95363-8876 | 0.9934 True
United States | United States | 0.8652 True
--------------------------------------------------------------------------------
[280000/300000] Train loss: 0.11738, Valid loss: 0.12504, Elapsed_time: 18167.31517
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
California | Patteorson | 0.0058 False
United States | United States | 0.8228 True
--------------------------------------------------------------------------------
[300000/300000] Train loss: 0.11735, Valid loss: 0.09141, Elapsed_time: 19466.37895
Current_accuracy : 80.000, Current_norm_ED : 0.9200
Best_accuracy : 90.000, Best_norm_ED : 0.9200
--------------------------------------------------------------------------------
Ground Truth | Prediction | Confidence Score & T/F
--------------------------------------------------------------------------------
#10-01 | #10-01 | 0.9994 True
255 Park Drive | 255 Park Drive | 0.9794 True
--------------------------------------------------------------------------------
I am using my Ubuntu machine. I have created the folders below:
~/.EasyOCR/model
~/.EasyOCR/user_network
I have renamed the best-accuracy .pt file to custom_model.pt and placed it in the model dir, and have also placed custom_model.py and custom_model.yaml there. But when doing inference with the code below, it shows no output and prints an empty list:
```python
import easyocr

img_path = '1a1.png'
reader = easyocr.Reader(['en'], recog_network='custom_model')
result = reader.readtext(img_path, detail=0)
print(result)
```
How do I solve this? Is there a problem with the model, or is there some other issue?
|
open
|
2023-12-26T14:52:56Z
|
2024-10-18T15:05:15Z
|
https://github.com/JaidedAI/EasyOCR/issues/1192
|
[] |
abhinavrawat27
| 2
|
microsoft/unilm
|
nlp
| 891
|
LayoutReader with LayoutLMv3 Pre-Training for Chinese
|
Hi Huang Yupan,
Thanks a lot for your code and the LayoutReader example.
You gave us LayoutReader with LayoutLM pre-training; could you please also provide LayoutReader with LayoutLMv3 pre-training for Chinese?
Thanks!
|
closed
|
2022-10-14T20:17:07Z
|
2022-11-03T14:55:22Z
|
https://github.com/microsoft/unilm/issues/891
|
[] |
hehuang139
| 1
|
ibis-project/ibis
|
pandas
| 10,780
|
bug(interactive-repr): interactive repr of null date scalar raises an exception
|
### What happened?
```
In [10]: from ibis.interactive import *
In [11]: s = ibis.null("date")
In [12]: s
```
raises an exception:
```
│ …/ibis/expr/types/pretty.py:139 in _ │
│ │
│ 136 @format_values.register(dt.Date) │
│ 137 def _(dtype, values, **fmt_kwargs): │
│ 138 │ dates = [v.date() if isinstance(v, datetime.datetime) else v for v in values] │
│ ❱ 139 │ return [Text.styled(d.isoformat(), "magenta") for d in dates] │
│ 140 │
│ 141 │
│ 142 @format_values.register(dt.Time) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'isoformat'
```
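A None-safe version of the formatting comprehension would avoid the crash. A simplified sketch of the guard, using plain strings where the real code uses rich's `Text.styled` (the `NULL` marker name is my own, for illustration):

```python
import datetime

NULL = "NULL"

def format_dates(values):
    """Render date-like values, passing None through as a NULL marker."""
    dates = [v.date() if isinstance(v, datetime.datetime) else v
             for v in values]
    return [NULL if d is None else d.isoformat() for d in dates]
```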
### What version of ibis are you using?
`main` @ 12a45cfeaa25129b58632f1ea2dccbf468a84fbb
### What backend(s) are you using, if any?
_No response_
### Relevant log output
```sh
```
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct
|
closed
|
2025-02-03T12:16:49Z
|
2025-02-05T15:43:36Z
|
https://github.com/ibis-project/ibis/issues/10780
|
[
"bug"
] |
cpcloud
| 0
|
AirtestProject/Airtest
|
automation
| 1,219
|
connect_device documentation example is wrong: when matching a connection by window title, single quotes should not be used
|
**Describe the bug**
Incorrect example: connect_device("windows:///?title_re='.*explorer.*'")
Correct example: connect_device("windows:///?title_re=.*explorer.*")
**Screenshots**

**Python version:** `python3.12`
**Airtest version:** `1.3.4`
|
open
|
2024-06-14T08:03:37Z
|
2024-06-14T08:03:37Z
|
https://github.com/AirtestProject/Airtest/issues/1219
|
[] |
skyfire1997
| 0
|
gee-community/geemap
|
jupyter
| 2,114
|
Help needed
|
```python
import ee
import geopandas as gpd
import pandas as pd
import numpy as np
from shapely.geometry import mapping
import geemap.foliumap as geemap  # note: shadows a plain `import geemap`

ee.Initialize()
region3_gdf = gpd.read_file("Untitled_layer.geojson")
aoi_total_bounds = region3_gdf.total_bounds
center_lat = (aoi_total_bounds[1] + aoi_total_bounds[3]) / 2
center_lon = (aoi_total_bounds[0] + aoi_total_bounds[2]) / 2
Map = geemap.Map(center=(center_lat, center_lon), zoom=10)
Map.add_gdf(region3_gdf, layer_name="Region AOI", style_dict={'color': 'blue', 'fillColor': 'none'})
geojson = region3_gdf.__geo_interface__
roi = ee.Geometry(geojson['features'][0]['geometry'])
modis_aod = ee.ImageCollection("MODIS/061/MCD19A2_GRANULES")
start_date = '2023-11-01'
end_date = '2023-11-11'
modisaod_filtered = modis_aod.filterDate(start_date, end_date).filterBounds(roi)
aod_mean_image = modisaod_filtered.mean().clip(roi)
aod_viz_params = {
    'min': 0.0,
    'max': 1.0,
    'palette': ['blue', 'green', 'yellow', 'red']
}
Map.addLayer(aod_mean_image.select('Optical_Depth_047'), aod_viz_params, 'MODIS AOD (Clipped to Polygon)')
# `grids` (SquareGridGenerator) comes from a helper module not shown in this snippet
grid_generator1k = grids.SquareGridGenerator(1_000)
grid_gdf1k = grid_generator1k.generate_grid(region3_gdf)
Map.add_gdf(grid_gdf1k, layer_name="Grid", style_dict={'color': '#95ddf1', 'fillColor': 'black', 'opacity': 0.5})
Map.addLayerControl()
Map
```
I want to extract date, time, latitude, longitude, and Optical_Depth_047 values for every cell inside the grid, but I am having trouble doing it. Please help, as it either raises a client-side error or the query aborts after 5000 elements.
|
closed
|
2024-08-16T05:42:35Z
|
2024-08-16T13:12:55Z
|
https://github.com/gee-community/geemap/issues/2114
|
[] |
sgindeed
| 0
|
chezou/tabula-py
|
pandas
| 238
|
WARNING: No Unicode mapping for C14 (1) in font ECONFB+AdvP4C4E74
|
<!--- Provide a general summary of your changes in the Title above -->
# Summary of your issue
I am trying to read a pdf file that has a table in it. Though the installation is successful, I encounter the below error when I try to extract the table from the pdf and it returns 0 records
<!-- Write the summary of your issue here -->
# Check list before submit
- [x] Did you read [FAQ](https://tabula-py.readthedocs.io/en/latest/faq.html)?
- [x] (Optional, but really helpful) Your PDF URL: [https://www.ijidonline.com/article/S1201-9712(20)30141-7/pdf](url)
- [x] Paste the output of `import tabula; tabula.environment_info()` on Python REPL: ?
`3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
Java version:
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
tabula-py version: 2.1.0
platform: Windows-10-10.0.17134-SP0
uname:
uname_result(system='Windows', node='1234-PC', release='10', version='10.0.17134', machine='AMD64', processor='Intel64 Family 6 Model 142 Stepping 9, GenuineIntel')
linux_distribution: ('', '', '')
mac_ver: ('', ('', '', ''), '')`
If not possible to execute `tabula.environment_info()`, please answer following questions manually.
- [x] Paste the output of `python --version` command on your terminal: yes, refer below
`Python version:
- [x] Paste the output of `java -version` command on your terminal: yes, refer below
`java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)`
- [x] Does `java -h` command work well?; Ensure your java command is included in `PATH`
- [x] Write your OS and it's version: windows 10,
# What did you do when you faced the problem?
## Code:
```
df = tabula.read_pdf('pd.pdf',multiple_tables=True)
```
## Expected behavior:
I expect to see the table from pdf in a dataframe
## Actual behavior:
empty dataframe with warning as given below
`'pages' argument isn't specified.Will extract only from page 1 by default.
Got stderr: May 22, 2020 12:11:26 AM org.apache.pdfbox.pdmodel.font.PDSimpleFont toUnicode
WARNING: No Unicode mapping for C14 (1) in font ECONFB+AdvP4C4E74
May 22, 2020 12:11:26 AM org.apache.pdfbox.rendering.PDFRenderer suggestKCMS
INFO: Your current java version is: 1.8.0_131
May 22, 2020 12:11:26 AM org.apache.pdfbox.rendering.PDFRenderer suggestKCMS
INFO: To get higher rendering speed on old java 1.8 or 9 versions,
May 22, 2020 12:11:26 AM org.apache.pdfbox.rendering.PDFRenderer suggestKCMS
INFO: update to the latest 1.8 or 9 version (>= 1.8.0_191 or >= 9.0.4),
May 22, 2020 12:11:26 AM org.apache.pdfbox.rendering.PDFRenderer suggestKCMS
INFO: or
May 22, 2020 12:11:26 AM org.apache.pdfbox.rendering.PDFRenderer suggestKCMS
INFO: use the option -Dsun.java2d.cmm=sun.java2d.cmm.kcms.KcmsServiceProvider
May 22, 2020 12:11:26 AM org.apache.pdfbox.rendering.PDFRenderer suggestKCMS
INFO: or call System.setProperty("sun.java2d.cmm", "sun.java2d.cmm.kcms.KcmsServiceProvider")
May 22, 2020 12:11:27 AM org.apache.pdfbox.pdmodel.font.PDSimpleFont toUnicode
WARNING: No Unicode mapping for C14 (1) in font ECONFB+AdvP4C4E74`
|
closed
|
2020-05-21T16:27:22Z
|
2020-06-04T02:26:03Z
|
https://github.com/chezou/tabula-py/issues/238
|
[] |
Ak784
| 1
|
PokeAPI/pokeapi
|
graphql
| 1,208
|
Sylveon's back sprite is a blank value
|
<!--
Thanks for contributing to the PokéAPI project. To make sure we're effective, please check the following:
- Make sure your issue hasn't already been submitted on the issues tab. (It has search functionality!)
- If your issue is one of outdated API data, please note that we get our data from [veekun](https://github.com/veekun/pokedex/). If they are not up to date either, please look for or create an issue there. Otherwise, feel free to create an issue here.
- Provide a clear description of the issue.
- Provide a clear description of the steps to reproduce.
- Provide a clear description of the expected behavior.
Thank you!
-->
When I tried to get the back sprite for Sylveon it didn't show up. I don't know whether this is normal or not (I'm new to this). Please don't get frustrated, I'm looking to help...
Thanks :)
|
open
|
2025-02-23T01:06:28Z
|
2025-03-17T00:09:09Z
|
https://github.com/PokeAPI/pokeapi/issues/1208
|
[] |
superL132
| 4
|
developmentseed/lonboard
|
jupyter
| 492
|
Adding a legend?
|
The examples on `deck.gl` don't have an integrated legend but I saw [this suggestion on Pydeck](https://github.com/visgl/deck.gl/issues/4850#issuecomment-1188226561) and wonder if something similar would work with `lonboard`? There isn't an equivalent argument of `description` in the layers but it does look like [there is a description card component in `deck.gl` Jupyter widget](https://github.com/visgl/deck.gl/blob/02d6813584aa10d108e8cb28b7cc84ee6d6a31a9/modules/jupyter-widget/src/lib/components/description-card.js#L5) though not sure how to call it through `lonboard`..
|
open
|
2024-04-29T23:29:20Z
|
2024-09-09T13:05:49Z
|
https://github.com/developmentseed/lonboard/issues/492
|
[] |
shriv
| 3
|
allenai/allennlp
|
pytorch
| 5,106
|
Setting `requires_grad = True` for optimizer parameters
|
**Is your feature request related to a problem? Please describe.**
I'd like the ability to set `requires_grad=True` in the optimizer parameter groups. For instance:
```
...
"text_field_embedder": {
"token_embedders": {
"tokens": {
"type": "pretrained_transformer",
"model_name": transformer_model,
"max_length": 512,
"train_parameters": false,
}
}
},
....
"optimizer": {
...
"parameter_groups": [
# This turns on grad for the attention query bias vectors and the intermediate MLP bias vectors.
# Since we set train_parameters to false in the token_embedder, these are the only weights that will be updated
# in the token_embedder.
[["^_text_field_embedder.token_embedder_tokens.transformer_model.*attention.self.query.bias$"], {"requires_grad": true}],
[["^_text_field_embedder.token_embedder_tokens.transformer_model.*intermediate.dense.bias$"], {"requires_grad": true}]
]
},
```
In this config, I set the token embedder `train_parameters` to false, so it's not trainable. However, i want to train some of the parameters (defined by the regex). The intended outcome is that the token embedder parameters are non-trainable (since `train_parameters = False`), but a subset of them _are_ trainable (defined by the regex).
The current behavior is that these groups are just ignored. This is because the non-trainable parameters aren't even passed to the optimizer, so the regexes don't match anything and accordingly can't have their `requires_grad` value changed.
(I realize that I can do this by setting `train_parameters = True` and then writing a regex to select all of the parameters that _don't_ match the regexes above and setting `{requires_grad: False}` on those; however, that regex is borderline unmaintainable and certainly not very readable.)
|
open
|
2021-04-09T01:01:56Z
|
2021-05-05T16:11:26Z
|
https://github.com/allenai/allennlp/issues/5106
|
[
"Contributions welcome",
"Feature request"
] |
nelson-liu
| 4
|
ray-project/ray
|
python
| 50,739
|
[Core] Negative available resources
|
### What happened + What you expected to happen
Currently available CPU resource can be negative due to the behavior that `ray.get()` will temporarily release and acquire CPU resources. We should make `NotifyDirectCallTaskUnblocked` message sync and only sends the response back after we can acquire back the CPU resource.
### Versions / Dependencies
master
### Reproduction script
N/A
### Issue Severity
None
|
open
|
2025-02-19T20:43:16Z
|
2025-03-22T00:57:14Z
|
https://github.com/ray-project/ray/issues/50739
|
[
"bug",
"P1",
"core"
] |
jjyao
| 0
|
PokeAPI/pokeapi
|
graphql
| 1,151
|
Some Generation VIII and all Generation IX moves are returning 404
|
Some `Generation VIII` and all `Generation IX` moves seem to be returning 404 since yesterday.
On the `Generation VIII` moves, the ones that seem to be returning 404 are also missing effect entries, for example:
- [apple-acid](https://pokeapi.co/api/v2/move/apple-acid)
- [court-change](https://pokeapi.co/api/v2/move/court-change)
Other moves from the same generation seem to work as usual, for example:
- [astral-barrage](https://pokeapi.co/api/v2/move/astral-barrage)
- [dragon-energy](https://pokeapi.co/api/v2/move/dragon-energy)
Example of a Generation IX move:
- [kowtow-cleave](https://pokeapi.co/api/v2/move/kowtow-cleave)
All of these move were returning data up until yesterday so I was wondering if anything has changed since then?
|
closed
|
2024-10-16T08:43:09Z
|
2024-10-20T10:57:50Z
|
https://github.com/PokeAPI/pokeapi/issues/1151
|
[] |
andreferreiradlw
| 5
|
aleju/imgaug
|
machine-learning
| 810
|
Upstream as much as possible from `parameters.py` into `scipy.stats`
|
I feel like some things from `parameters.py`, such as visualization and composition of distributions, do not really belong to this project and should probably live in `scipy.stats`.
|
open
|
2022-02-25T16:32:46Z
|
2022-02-25T16:32:46Z
|
https://github.com/aleju/imgaug/issues/810
|
[] |
KOLANICH
| 0
|
deeppavlov/DeepPavlov
|
nlp
| 1,511
|
Support Cuda 11
|
Hi,
I am trying to Run NER model on A100.
I get CUBLAS_STATUS_ALLOC_FAILED error.
Is it something to do with CUDA ?
A100 supports CUDA 11 - do you have a docker image compatible with CUDA 11?
Thanks
|
closed
|
2021-12-27T16:56:02Z
|
2022-04-01T11:13:26Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1511
|
[
"bug"
] |
lizush23
| 2
|
onnx/onnxmltools
|
scikit-learn
| 545
|
Can I change attribute type to get a smaller model file size?
|
I installed ONNX 1.10.0, ONNX Runtime 1.10.0 and ONNXMLTools 1.10.0
I converted a XGBoost classifier model (trained in Python 3.8.5) to an ONNX model by `onnxmltools.convert.convert_xgboost()`.
However I could not quantize the ONNX model by `onnxruntime.quantization.quantize_dynamic()`.
I found there was only the `TreeEnsembleClassifier` op in the ONNX model.
The `TreeEnsembleClassifier` op has many attributes of different types (ints, floats, ...). [reference](https://github.com/onnx/onnx/blob/main/docs/Changelog-ml.md#aionnxmltreeensembleclassifier-3)
Attribute types can be inspected by uploading the ONNX model to https://netron.app/
After uploading the ONNX model to https://netron.app/, the `floats` type is shown as `float32`,
as in the highlighted part of the screen capture below:

Is there any way to make the ONNX model (file size) smaller by changing the types of attributes (or other alternatives)?
Like `float32` to `float16`, `float8`, or `int8`?
I appreciate any help and guidance.
|
open
|
2022-05-04T09:32:31Z
|
2022-07-26T08:05:22Z
|
https://github.com/onnx/onnxmltools/issues/545
|
[] |
shcchen
| 1
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 7
|
Add a one-click button on the results page to download all watermark-free videos
|
If many links are submitted at once, the current results page feels more like a list of detail entries. Do you have any ideas for adding a button to download all videos / all background music in one click?
|
closed
|
2022-03-13T09:10:42Z
|
2022-11-09T21:10:47Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/7
|
[
"enhancement",
"Fixed"
] |
wanghaisheng
| 10
|
paperless-ngx/paperless-ngx
|
machine-learning
| 7,733
|
[BUG] PAPERLESS_ACCOUNT_SESSION_REMEMBER not being recognized
|
### Description
I'm using the latest stable release (2.12.1). Setting the environment variable PAPERLESS_ACCOUNT_SESSION_REMEMBER as implemented in #6105 makes no difference: regardless of whether the value is set to "true", "false", or left unset, the session cookie's expiry is always 14 days.
### Steps to reproduce
1. Clearing session cookie
2. Setting PAPERLESS_ACCOUNT_SESSION_REMEMBER to true --> cookie sessionid expiry: 14 days
3. Clearing session cookie
4. Setting PAPERLESS_ACCOUNT_SESSION_REMEMBER to false --> cookie sessionid expiry: 14 days
### Webserver logs
```bash
N/A
```
### Browser logs
_No response_
### Paperless-ngx version
2.12.1
### Host OS
Debian 12 aarch64
### Installation method
Docker - official image
### System status
_No response_
### Browser
Edge, Chrome, Safari
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-09-18T08:25:44Z
|
2024-10-21T03:11:27Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/7733
|
[
"not a bug"
] |
oeiber
| 13
|
comfyanonymous/ComfyUI
|
pytorch
| 7,007
|
Are ComfyUI and SDXL incompatible?
|
### Your question
The same prompt, model, parameters, and seed produce completely different pictures in WebUI and ComfyUI, and ComfyUI's result is worse.
ComfyUI works very well with SD3.5 and Flux, but the results with SDXL are poor. Is something missing in my ComfyUI setup? Or is it not compatible with SDXL?
### Logs
```powershell
```
### Other
_No response_
|
closed
|
2025-02-28T03:13:26Z
|
2025-02-28T06:45:10Z
|
https://github.com/comfyanonymous/ComfyUI/issues/7007
|
[
"User Support"
] |
Gaki1993
| 0
|
geopandas/geopandas
|
pandas
| 2,787
|
BUG: to_crs() does not recognize the current CRS
|
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
Let's create 2 GeoDataFrames using the same coordinates but different CRS (WGS84 and ED50):
```{python}
import pandas as pd
import geopandas as gpd
CRS_ED50 = 'EPSG:4230' # ED50, https://epsg.io/4230
CRS_WGS84 = 'EPSG:4326' # WGS84, https://epsg.io/4326
CRS_proj = 'EPSG:21500' # BD50 (Brussels) / Belge Lambert 50, https://epsg.io/21500
# Atomium
dfa = pd.DataFrame({'lat':[50.893989], 'lon':[4.340358]})
gdfa_wgs84 = (gpd.GeoDataFrame(dfa, crs = CRS_WGS84, geometry = gpd.points_from_xy(dfa.lon, dfa.lat)))
print(gdfa_wgs84.crs)
display(gdfa_wgs84)
gdfa_ed50 = (gpd.GeoDataFrame(dfa, crs = CRS_ED50, geometry = gpd.points_from_xy(dfa.lon, dfa.lat)))
print(gdfa_ed50.crs)
display(gdfa_ed50)
```
#### Problem description
Projecting both GeoDataFrames, the result is the same, even though the set CRS was different:
```python
gdfa_wgs84.to_crs(crs = CRS_proj)
```
<html>
<body>
<!--StartFragment-->
lat | lon | geometry
-- | -- | --
50.893989 | 4.340358 | POINT (148056.929 175811.492)
<!--EndFragment-->
</body>
</html>
```{python}
gdfa_ed50.to_crs(crs = CRS_proj)
```
<html>
<body>
<!--StartFragment-->
lat | lon | geometry
-- | -- | --
50.893989 | 4.340358 | POINT (148056.929 175811.492)
<!--EndFragment-->
</body>
</html>
I have to convert first to WGS84 to be able to project it correctly.
```{python}
gdfa_ed50.to_crs(crs = CRS_WGS84).to_crs(crs = CRS_proj)
```
<html>
<body>
<!--StartFragment-->
lat | lon | geometry
-- | -- | --
50.893989 | 4.340358 | POINT (147966.530 175718.918)
<!--EndFragment-->
</body>
</html>
It looks like the CRS set on the GeoDataFrame is **not recognized** by `to_crs()`.
---
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.10.9 | packaged by conda-forge | (main, Feb 2 2023, 20:14:58) [MSC v.1929 64 bit (AMD64)]
executable : c:\Users\USER\AppData\Local\miniconda3\envs\ENV\python.exe
machine : Windows-10-10.0.19044-SP0
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.11.1
GEOS lib : None
GDAL : 3.6.2
GDAL data dir: None
PROJ : 9.1.1
PROJ data dir: C:\Users\USER\AppData\Local\miniconda3\envs\ENV\Library\share\proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.2
numpy : 1.24.1
pandas : 1.5.3
pyproj : 3.4.1
shapely : 2.0.1
fiona : 1.9.0
geoalchemy2: None
geopy : None
matplotlib : 3.6.3
mapclassify: 2.5.0
pygeos : None
pyogrio : None
psycopg2 : None
pyarrow : None
rtree : 1.0.1
</details>
|
open
|
2023-02-11T19:01:42Z
|
2023-03-03T15:08:44Z
|
https://github.com/geopandas/geopandas/issues/2787
|
[
"bug",
"upstream issue"
] |
kbzsl
| 9
|
sunscrapers/djoser
|
rest-api
| 52
|
Feedback and questions
|
hi Guys,
I do like the product.
The following is the feedback and questions:
**/password/reset**
If you provide an invalid email it returns HTTP 200 and doesn't tell you that the email is not found in the database. I'd like to get a different response back, such as a 404 with "detail": "such email doesn't exist"
**How to integrate with custom model inherited from django.contrib.auth.user model?**
Would be awesome if small tutorial is provided
**Update profile - 3 methods?**
What's the motivation to make a 3 different methods to update profile /me and POST /username, POST /password? I think it's nicer to just call PUT /me method to update username, password. Otherwise a client should hit 3 different API endpoints.
**How to update other fields of profile with PUT /me?**
It seems right now it only updates email field and nothing else. What about first_name, last_name? What about other custom fields of the model (question #2 above)?
**HTTP Error codes (404, 422)**
For example when you do POST /login with non-existing username you get this
```
400
{
"non_field_errors": [
"Unable to login with provided credentials."
]
}
```
but it's a correct request, while the response says 400 (Bad Request). It would be nicer to have a 404 error.
Also when you do POST /register and provide non-unique username you get this:
```
400
{
"username": [
"This field must be unique."
]
}
```
however this is a business-logic error and code 422 is better for such a use case
**Generic message**
If I do POST /username and provide incorrect value of current_password I get this:
```
400
{
"current_password": [
"Invalid password."
]
}
```
This makes it hard for the client (say an Android mobile app), as I need to know and define maps ahead of time.
Wouldn't it be better to return something like this:
```
422
{
"errors":
[
{"field": "password", "message": "Invalid password provided"},
....
]
}
```
The same would go for registration and other methods. That way I don't need to define a model for each JSON response body, as they're all generic. Otherwise for each API call you have to create a custom parser (and you have to know all the fields ahead of time).
**Validation for Incorrect request**
If I do:
_PUT /me_
```
{
"email": "admin@admin.com",
"eeee": "jim",
"zzz": "smith"
}
```
I'll get 200 HTTP code back:
```
{
"email": "admin@admin.com",
"id": 1,
"username": "admin"
}
```
However, the example above submitted 2 unknown fields. I would appreciate it if the API returned something like this:
```
422 or 400
{
"errors":
[
{"field": "eee", "message": "Unknown field"},
{"field": "zzz", "message": "Unknown field"},
....
]
}
```
|
closed
|
2015-06-03T01:51:14Z
|
2023-04-15T13:50:42Z
|
https://github.com/sunscrapers/djoser/issues/52
|
[
"enhancement",
"question"
] |
dmitry-saritasa
| 4
|
aimhubio/aim
|
tensorflow
| 2,807
|
Problem of using Aim
|
## ❓Question
Dear author, I use Aim to track training metrics during Federated Learning, where multiple participants train a model collaboratively using their own data. Each participant has metrics such as training loss and training accuracy. I want to plot the same metric for all participants in the same figure so that I can conveniently observe the differences in behaviour among participants. Is there a way for me to do this?
|
open
|
2023-06-02T02:13:53Z
|
2023-06-05T19:49:11Z
|
https://github.com/aimhubio/aim/issues/2807
|
[
"type / question"
] |
lhq12
| 1
|
comfyanonymous/ComfyUI
|
pytorch
| 6,763
|
mat1 and mat2 shapes cannot be multiplied (1x768 and 2816x1280)
|
### Expected Behavior

### Actual Behavior
11111111
### Steps to Reproduce
1111111
### Debug Logs
```powershell
111111111
```
### Other
11111
|
closed
|
2025-02-10T10:40:52Z
|
2025-03-11T10:44:51Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6763
|
[
"Potential Bug"
] |
hitcaven
| 2
|
Lightning-AI/pytorch-lightning
|
deep-learning
| 19,861
|
Validation dataloader is added to train dataloader after first epoch
|
### Bug description
Validation dataloader is added to train dataloader after first epoch. What the f*ck ?
### What version are you seeing the problem on?
v1.8
### How to reproduce the bug
```python
trainer.fit(model, train_dataloader, valid_dataloader)
```
### Error messages and logs
```
# Error messages and logs here please
```
ewgtwag
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
egewsg
### More info

|
closed
|
2024-05-12T15:55:37Z
|
2024-05-12T16:57:23Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19861
|
[
"bug",
"needs triage",
"ver: 1.8.x"
] |
AndreyStille
| 0
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,205
|
Mac error
|
I keep getting this Mac error message when I try to run the Ultimate Vocal Remover.
I am using a MacBook M1 Sonoma 14.0
"
Ensemble Mode - 1_HP-UVR - Model 1/2
File 1/1 Loading model ... Done!
File 1/1 Running inference...
Process failed, please see error log
Time Elapsed: 00:00:03"
Can someone guide me on what the problem is? thanks
|
open
|
2024-02-25T13:28:28Z
|
2024-04-08T07:40:51Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1205
|
[] |
Gazzanic
| 1
|
gradio-app/gradio
|
machine-learning
| 10,268
|
gr.Image fullscreen button does not scale the image but scales the EventData
|
### Describe the bug
When I load an image into gr.Image that is smaller than my screen and press the fullscreen button, the image stays the same size, surrounded by black borders, rather than being enlarged to fill the screen. I would expect the fullscreen button to make the image larger; otherwise, what is the point?
However, I noticed that when in fullscreen mode, the EventData x/y coordinates DO get scaled as if the image is fullscreen, which breaks use cases where you need to obtain the clicked coordinates.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```
import gradio as gr
# Define a function to handle the image click event
def handle_click(image, event_data: gr.SelectData):
# Extract the coordinates from the event data
x, y = event_data.index
# Return the coordinates as a string
return f"Clicked at coordinates: ({x}, {y})"
# Create a Gradio Blocks app
with gr.Blocks() as demo:
# Add an Image component for the user to upload an image
image_input = gr.Image(label="Upload an Image")
# Add a Textbox to display the coordinates
coordinates_output = gr.Textbox(label="Coordinates")
# Attach the click event to the image input
image_input.select(handle_click, [image_input], coordinates_output)
# Launch the app
if __name__ == "__main__":
demo.launch(show_error=True)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Operating System: Windows
gradio version: 5.9.1
gradio_client version: 1.5.2
```
### Severity
I can work around it
|
open
|
2024-12-30T17:14:12Z
|
2025-03-23T21:27:20Z
|
https://github.com/gradio-app/gradio/issues/10268
|
[
"bug"
] |
Zarxrax
| 4
|
huggingface/datasets
|
deep-learning
| 6,621
|
deleted
|
...
|
closed
|
2024-01-27T16:59:58Z
|
2024-01-27T17:14:43Z
|
https://github.com/huggingface/datasets/issues/6621
|
[] |
kopyl
| 0
|
paperless-ngx/paperless-ngx
|
django
| 8,082
|
[BUG] TIF document is not displayed when editing
|
### Description
Hello,
As you can see, in the overview all TIF documents are displayed like the PDF documents:

However, if I edit a TIF document, it is not displayed on the right.

### Steps to reproduce
Upload a TIF file to Paperless
### Webserver logs
```bash
Which log is required here?
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.0
### Host OS
Synology 723+ / Container Manager / Docker
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.13.0",
"server_os": "Linux-4.4.302+-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 5749758619648,
"available": 2635827060736
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0027_mailaccount_expiration_mailaccount_account_type_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-10-28T17:38:44.082029+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2024-10-28T16:05:37.881750Z",
"classifier_error": null
}
}
```
### Browser
FireFox v131.0.3
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-10-28T16:47:58Z
|
2024-12-01T19:46:20Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/8082
|
[
"not a bug"
] |
OSch24
| 7
|
flairNLP/flair
|
nlp
| 2,657
|
TARS training error
|
Hi,
I would like to fine-tune the tars-ner few shot model with a few additional examples. I think my issue might be related to https://github.com/flairNLP/flair/issues/1540, but I'm getting a different error. I tried with the following code:
```
label_dict = corpus.make_label_dictionary(label_type='ner')
tars = TARSTagger.load('tars-ner')
tars.add_and_switch_to_new_task('med_device_ner',label_dictionary=label_dict, label_type='ner')
trainer = ModelTrainer(tars, corpus)
trainer.train(base_path='resources/taggers/continued_model', # path to store the model artifacts
learning_rate=0.02,
mini_batch_size=16,
mini_batch_chunk_size=4,
max_epochs=5,
)
```
The error I'm getting is:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/var/folders/st/9h9r98294q54zswbxdsg8z980000gp/T/ipykernel_31425/4102635640.py in <module>
6
7 labels = ['TRADENAME']
----> 8 tars.add_and_switch_to_new_task('ner',label_dictionary=label_dict, label_type='ner')
9
10 trainer = ModelTrainer(tars, corpus)
~/opt/anaconda3/envs/basic_env/lib/python3.9/site-packages/flair/models/tars_model.py in add_and_switch_to_new_task(self, task_name, label_dictionary, label_type, multi_label, force_switch)
    202         for tag in label_dictionary:
    203             if tag == '<unk>' or tag == 'O': continue
--> 204             if tag[1] == "-":
    205                 tag = tag[2:]
    206             tag_dictionary.add_item(tag)
IndexError: string index out of range
```
[dev.txt](https://github.com/flairNLP/flair/files/8197089/dev.txt)
I'm also attaching a small sample of the data I tried to train with. Thank you so much for your time!
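A defensive version of the prefix-stripping loop from the traceback (a sketch only; the `tag_dictionary` handling is simplified away, and function names here are invented for illustration) avoids indexing past the end of one-character tags:

```python
def strip_bio_prefix(tag):
    """Remove a BIO-style 'B-'/'I-' prefix, tolerating short tags (sketch)."""
    # `tag[1] == "-"` raises IndexError for one-character tags such as "O" or "B";
    # checking the length first makes the test safe for any input.
    if len(tag) > 2 and tag[1] == "-":
        return tag[2:]
    return tag

def collect_tags(label_dictionary):
    """Gather bare tag names, skipping the special '<unk>' and 'O' entries."""
    tags = []
    for tag in label_dictionary:
        if tag in ("<unk>", "O"):
            continue
        tags.append(strip_bio_prefix(tag))
    return tags
```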
|
closed
|
2022-03-07T10:57:43Z
|
2022-06-27T14:05:35Z
|
https://github.com/flairNLP/flair/issues/2657
|
[
"question"
] |
MagdalenaMladenova
| 3
|
deepset-ai/haystack
|
machine-learning
| 8,324
|
Fix broken md in ChatPromptBuilder API ref
|
closed
|
2024-09-04T15:30:19Z
|
2024-09-05T09:13:24Z
|
https://github.com/deepset-ai/haystack/issues/8324
|
[] |
dfokina
| 0
|
|
quantumlib/Cirq
|
api
| 6,440
|
Cannot serialize cirq_google.experimental.ops.coupler_pulse.CouplerPulse()
|
**Description of the issue**
Serialization isn't supported for `cirq_google.experimental.CouplerPulse()`
**How to reproduce the issue**
Run the following circuit:
```
this_circuit = cirq.Circuit(
cirq_google.experimental.CouplerPulse(hold_time=hold_time, coupling_mhz = 20)(*qubits),
cirq.measure(*qubits, key = 'thequbits')
)
```
<details>
Cannot serialize op cirq_google.experimental.ops.coupler_pulse.CouplerPulse(hold_time=5, coupling_mhz=20, rise_time=cirq.Duration(nanos=8), padding_time=cirq.Duration(picos=2500.0), q0_detune_mhz=0.0, q1_detune_mhz=0.0).on(cirq.GridQubit(4, 5), cirq.GridQubit(5, 5)) of type <class 'cirq_google.experimental.ops.coupler_pulse.CouplerPulse'>
</details>
**Cirq version**
1.3.0
|
closed
|
2024-02-02T19:44:34Z
|
2024-02-26T21:38:41Z
|
https://github.com/quantumlib/Cirq/issues/6440
|
[
"kind/bug-report",
"triage/accepted",
"area/google/qrp"
] |
aasfaw
| 2
|
huggingface/datasets
|
pytorch
| 7,147
|
IterableDataset strange deadlock
|
### Describe the bug
```
import datasets
import torch.utils.data
num_shards = 1024
def gen(shards):
for shard in shards:
if shard < 25:
yield {"shard": shard}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={"shards": list(range(num_shards))},
)
dataset = dataset.shuffle(buffer_size=1)
dataset = datasets.interleave_datasets(
[dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataset = dataset.shuffle(buffer_size=1)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=8,
)
for i, batch in enumerate(dataloader):
print(batch)
if i >= 10:
break
print()
if __name__ == "__main__":
for _ in range(100):
main()
```
### Steps to reproduce the bug
Running the script above, at some point it will freeze.
- Changing `num_shards` from 1024 to 25 avoids the issue
- Commenting out the final shuffle avoids the issue
- Commenting out the interleave_datasets call avoids the issue
As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets.
### Expected behavior
The script should not freeze.
### Environment info
- `datasets` version: 3.0.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.5
- `huggingface_hub` version: 0.24.7
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro.
|
closed
|
2024-09-12T18:59:33Z
|
2024-09-23T09:32:27Z
|
https://github.com/huggingface/datasets/issues/7147
|
[] |
jonathanasdf
| 6
|
deepset-ai/haystack
|
nlp
| 9,030
|
Enhance Type Validation to Handle Callable Bare Types
|
**Is your feature request related to a problem? Please describe.**
- The current implementation for bare type validation does not adequately support callable bare types due to their unique structure.
**Describe the solution you'd like**
- Update the type validation logic to include specific handling for callable bare types by incorporating `_check_callable_compatibility`. This ensures that callable types are validated correctly and consistently. For example, the test case `pytest.param(Callable[[Any], Any], Callable)` should pass with the updated implementation.
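For illustration, here is a minimal sketch of how such callable compatibility could be handled with stdlib `typing` helpers. The function name and logic are hypothetical, not Haystack's actual `_check_callable_compatibility` implementation:

```python
import collections.abc
from typing import Any, Callable, get_args, get_origin

def check_callable_compatibility(given, expected):
    # Hypothetical sketch: is `given` acceptable where `expected` is declared?
    if expected is Callable:
        # A bare Callable accepts any (parametrized) callable type.
        return given is Callable or get_origin(given) is collections.abc.Callable
    # Simplified rule: require matching argument/return annotations.
    return get_args(given) == get_args(expected)

# The test case from the request should now pass:
assert check_callable_compatibility(Callable[[Any], Any], Callable)
# A parameter mismatch is still rejected:
assert not check_callable_compatibility(Callable[[int], str], Callable[[Any], Any])
```

A real implementation would additionally apply subtyping rules (contravariant parameters, covariant return), which this equality check deliberately skips.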
|
open
|
2025-03-12T16:09:18Z
|
2025-03-14T11:41:20Z
|
https://github.com/deepset-ai/haystack/issues/9030
|
[] |
mdrazak2001
| 0
|
encode/apistar
|
api
| 465
|
KeyError: 'return_value' on handlers that annotate `http.Response` on return
|
I ran into this while upgrading my app to 0.5.
Minimal example:
```python
from apistar import App, Route, http
def index() -> http.Response:
return http.Response(b"hello", headers={
"content-type": "foo"
})
app = App(routes=[Route("/", "GET", index)])
app.serve(host="127.0.0.1", port=8000, debug=True)
```
Visiting `/` will raise a KeyError:
```
Traceback (most recent call last):
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/app.py", line 202, in __call__
return self.injector.run(funcs, state)
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/injector.py", line 104, in run
state[output_name] = func(**func_kwargs)
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/app.py", line 160, in finalize_wsgi
raise exc_info[0].with_traceback(exc_info[1], exc_info[2])
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/app.py", line 198, in __call__
return self.injector.run(funcs, state)
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/injector.py", line 104, in run
state[output_name] = func(**func_kwargs)
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/app.py", line 193, in __call__
return self.injector.run(funcs, state)
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/injector.py", line 102, in run
func_kwargs = {key: state[val] for key, val in kwargs.items()}
File "/Users/bogdan/.virtualenvs/blockfraud/lib/python3.6/site-packages/apistar/server/injector.py", line 102, in <dictcomp>
func_kwargs = {key: state[val] for key, val in kwargs.items()}
KeyError: 'return_value'
```
It works fine if you drop the `-> http.Response` annotation.
|
closed
|
2018-04-19T06:33:28Z
|
2018-04-19T08:47:55Z
|
https://github.com/encode/apistar/issues/465
|
[] |
Bogdanp
| 2
|
ultralytics/ultralytics
|
computer-vision
| 18,982
|
YOLOv11 vs SSD performance on 160x120 infrared images
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello,
in our previous project, we successfully implemented object detection in images from 160x120 infrared camera on Raspberry Pi 4. We used SqueezeNet-SSD network trained with CaffeSSD framework. Its performance was very good for our needs (about 60 ms per frame) with excellent accuracy on normal-sized objects (confidence mostly over 90%) but lower accuracy on smaller objects (mostly detected correctly with very low confidence 30%).
Later, we stripped SqueezeNet's fire modules to simple squeeze-expand blocks, added feature-fusion for the first SSD layer and modified priorboxes ratios to match our dataset. We reached detection speed of about 30 ms per frame and excellent accuracy for all objects.
In our upcoming project, we are continuing with similar task but we would like to use more innovative approach, because Caffe framework has not been maintained for years anymore. We're experimenting with Ultralytics framework and it looks very modern to us. We're also thinking about switching to Raspberry Pi 5, maybe with Hailo8 kit which is not supported by CaffeSSD so Ultralytics seems to be good way to go.
Our dataset consists of 5000 training grayscale images and 1000 testing images with a resolution of 160x120. Many augmented versions of each training image were added to the training dataset, so it has over 40000 images. We identify 5 types of objects - example: face (about 64x100) and eyes (45x10). It's exactly the same dataset that was used for training our SSD networks. Now we have trained several versions of YOLOv11 with a batch size of 128 for 300 epochs. The results are good, but not as good as our original SSD network. Here I would like to share our benchmarks with others:
**Detection speed**
```
RPi5 Rock 4B+ RPi4 RPi 5 + Hailo 8
----------------------------------------------------------------------------------------------------
SEnet-FSSD-160x120 7.542 ms 27.478 ms 29.263 ms -
SqueezeNet-SSD 10.074 ms 32.615 ms 38.491 ms -
Yolo11n-160 12.317 ms 49.212 ms 45.283 ms 4.252 ms
Yolo11n-320 48.207 ms 177.076 ms 178.268 ms 7.236 ms
Yolo11s-160 30.835 ms 129.767 ms 127.677 ms 10.999 ms
Yolo11m-320 313.738 ms 1121.319 ms 1180.839 ms 24.829 ms
```
As you can see, even the nano version of YOLOv11 is much slower than the original SqueezeNet-SSD. Although we would prefer better times, it is still usable for our needs, especially when we're thinking about Hailo8.
**Detection accuracy**
I don't have specific objective statistics here, but it is visually worse. Even the yolo11m-320 version produces worse results. Rectangles are not as exact, confidences are lower, and there is a somewhat higher number of false results. Just for illustration on 1000 validation images:
(mean wIoU is average of IoU for all detections with threshold of 50 weighted by the confidence)
SEnet-FSSD-160x120 - total detections: 2014, false positives: 6, false negatives: 9, mean wIoU: 0.892
SqueezeNet-SSD - total detections: 2029, false positives: 59, false negatives: 47, mean wIoU: 0.855
Yolo11n-160 - total detections: 2027, false positives: 28, false negatives: 18, mean wIoU: 0.851
Yolo11s-160 - total detections: 2023, false positives: 26, false negatives: 20, mean wIoU: 0.859
Yolo11m-320 - total detections: 2078, false positives: 71, false negatives: 12, mean wIoU: 0.845
```
$ ./test image.png senet-fssd-160x120
0: class = 3, confidence = 1.000000, [61, 10, 124, 107]
1: class = 1, confidence = 0.999990, [72, 49, 115, 57]
$ ./test image.png squeezenet-ssd-160x120
0: class = 3, confidence = 1.000000, [61, 10, 123, 108]
1: class = 1, confidence = 0.774772, [72, 49, 114, 57]
$ ./test image.png yolo11n-160.onnx
0: class = 3, confidence = 0.920182, [60, 11, 123, 99]
1: class = 1, confidence = 0.766865, [71, 49, 115, 57]
$ ./test image.png yolo11m-320.onnx
0: class = 3, confidence = 0.895741, [61, 12, 123, 103]
1: class = 1, confidence = 0.745349, [72, 50, 115, 56]
```
Maybe the problem lies in training hyperparameters. We just set batch size to 128 and number of epochs to 300. I would appreciate any ideas. Thank you!
Meanwhile, I've been trying to simulate our SEnet-FSSD using a YAML model in Ultralytics. I don't know if it is a good idea; I just would like to see if it changes anything. I made a pure copy of our network, but it is not possible to train it because of layer size mismatches. MaxPool2d layers don't seem to downscale the resolution in the same way as in the Caffe framework. There is also no Eltwise (element sum) layer, so I had to change it to a Concat layer. Adding padding=1 to all MaxPool2d layers works, but it automatically changes the input resolution to 192. The results are practically the same as the other YOLO models, not like our original network.
Here is the model that is 1:1 rewrite of our SSD network. Maybe, someone will be able to fix it:
```
nc: 5
activation: nn.ReLU6()
backbone:
- [-1, 1, Conv, [64, 3, 2, 0]] # 0,conv1
- [-1, 1, nn.MaxPool2d, [3, 2]] # 1,pool1
- [-1, 1, Conv, [32, 1, 1, 0] ] # 2,fire2 squeeze
- [-1, 1, Conv, [64, 3, 1, 1] ] # 3,fire2 expand
- [-1, 1, Conv, [32, 1, 1, 0] ] # 4,fire3 squeeze
- [-1, 1, Conv, [64, 3, 1, 1] ] # 5,fire3 expand
- [-1, 1, nn.MaxPool2d, [3, 2]] # 6,pool3
- [-1, 1, Conv, [64, 1, 1, 0] ] # 7,fire4 squeeze
- [-1, 1, Conv, [128, 3, 1, 1] ] # 8,fire4 expand
- [-1, 1, Conv, [64, 1, 1, 0] ] # 9,fire5 squeeze
- [-1, 1, Conv, [128, 3, 1, 1] ] # 10,fire5 expand
- [-1, 1, nn.MaxPool2d, [3, 2]] # 11,pool5
- [-1, 1, Conv, [96, 1, 1, 0] ] # 12,fire6 squeeze
- [-1, 1, Conv, [192, 3, 1, 1] ] # 13,fire6 expand
- [-1, 1, Conv, [96, 1, 1, 0] ] # 14,fire7 squeeze
- [-1, 1, Conv, [192, 3, 1, 1] ] # 15,fire7 expand
- [-1, 1, Conv, [64, 1, 1, 0] ] # 16,fire8 squeeze
- [-1, 1, Conv, [256, 3, 1, 1] ] # 17,fire8 expand
- [-1, 1, Conv, [64, 1, 1, 0] ] # 18,fire9 squeeze
- [-1, 1, Conv, [256, 3, 1, 1] ] # 19,fire9 expand
head:
- [-1, 1, nn.MaxPool2d, [3, 2]] # 20,pool9
- [-1, 1, Conv, [96, 1, 1, 0] ] # 21,fire10 squeeze
- [-1, 1, Conv, [384, 3, 1, 1] ] # 22,fire10 expand
- [-1, 1, nn.MaxPool2d, [3, 2]] # 23,pool10
- [-1, 1, Conv, [64, 1, 1, 0] ] # 24,fire11 squeeze
- [-1, 1, Conv, [256, 3, 1, 1] ] # 25,fire11 expand
# feature-fusion layers
- [19, 1, Conv, [128, 1, 1, 0] ] # 26
- [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 27
- [22, 1, Conv, [128, 1, 1, 0] ] # 28
- [-1, 1, nn.Upsample, [None, 4, "nearest"]] # 29
- [[29, 27, 10], 1, Concat, [1]] # 30
- [-1, 1, nn.BatchNorm2d, []] # 31
- [-1, 1, Conv, [64, 1, 1, 0] ] # 32
- [-1, 1, Conv, [128, 3, 1, 1] ] # 33
- [[25, 22, 19, 33], 1, Detect, [nc]]
```
### Additional
EDIT: I forgot to provide information that all detections are done using OpenCV in C++
|
open
|
2025-02-03T20:01:01Z
|
2025-02-27T22:16:34Z
|
https://github.com/ultralytics/ultralytics/issues/18982
|
[
"question",
"detect",
"embedded"
] |
BigMuscle85
| 32
|
apify/crawlee-python
|
automation
| 887
|
A proxy created with param proxy_urls crashes PlaywrightCrawler
|
Test program.
```python
import asyncio
from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext
from crawlee.proxy_configuration import ProxyConfiguration
# If these go out of service then replace them with your own.
proxies = ['http://178.48.68.61:18080', 'http://198.245.60.202:3128', 'http://15.204.240.177:3128',]
proxy_configuration_fails = ProxyConfiguration(proxy_urls=proxies)
proxy_configuration_succeeds = ProxyConfiguration(
tiered_proxy_urls=[
# No proxy tier. (Not needed, but optional in case you do not want to use any proxy on lowest tier.)
[None],
# lower tier, cheaper, preferred as long as they work
proxies,
# higher tier, more expensive, used as a fallback
]
)
async def main() -> None:
crawler = PlaywrightCrawler(
max_requests_per_crawl=5, # Limit the crawl to 5 requests.
headless=False, # Show the browser window.
browser_type='firefox', # Use the Firefox browser.
proxy_configuration = proxy_configuration_fails,
# proxy_configuration=proxy_configuration_succeeds,
)
# Define the default request handler, which will be called for every request.
@crawler.router.default_handler
async def request_handler(context: PlaywrightCrawlingContext) -> None:
context.log.info(f'Processing {context.request.url} ...')
# Enqueue all links found on the page.
await context.enqueue_links()
# Extract data from the page using Playwright API.
data = {
'url': context.request.url,
'title': await context.page.title(),
'content': (await context.page.content())[:100],
}
# Push the extracted data to the default dataset.
await context.push_data(data)
# Run the crawler with the initial list of URLs.
await crawler.run(['https://crawlee.dev'])
# Export the entire dataset to a JSON file.
await crawler.export_data('results.json')
# Or work with the data directly.
data = await crawler.get_data()
crawler.log.info(f'Extracted data: {data.items}')
if __name__ == '__main__':
asyncio.run(main())
```
Terminal output.
```
/Users/matecsaj/PycharmProjects/wat-crawlee/venv/bin/python /Users/matecsaj/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch_4.py
[crawlee._autoscaling.snapshotter] INFO Setting max_memory_size of this run to 8.00 GB.
[crawlee.crawlers._playwright._playwright_crawler] INFO Current request statistics:
┌───────────────────────────────┬──────────┐
│ requests_finished │ 0 │
│ requests_failed │ 0 │
│ retry_histogram │ [0] │
│ request_avg_failed_duration │ None │
│ request_avg_finished_duration │ None │
│ requests_finished_per_minute │ 0 │
│ requests_failed_per_minute │ 0 │
│ request_total_duration │ 0.0 │
│ requests_total │ 0 │
│ crawler_runtime │ 0.038974 │
└───────────────────────────────┴──────────┘
[crawlee._autoscaling.autoscaled_pool] INFO current_concurrency = 0; desired_concurrency = 2; cpu = 0.0; mem = 0.0; event_loop = 0.0; client_info = 0.0
[crawlee.crawlers._playwright._playwright_crawler] ERROR Request failed and reached maximum retries
Traceback (most recent call last):
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/crawlers/_basic/_context_pipeline.py", line 65, in __call__
result = await middleware_instance.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/crawlers/_playwright/_playwright_crawler.py", line 138, in _open_page
crawlee_page = await self._browser_pool.new_page(proxy_info=context.proxy_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/_utils/context.py", line 38, in async_wrapper
return await method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/browsers/_browser_pool.py", line 241, in new_page
return await self._get_new_page(page_id, plugin, proxy_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/browsers/_browser_pool.py", line 270, in _get_new_page
page = await asyncio.wait_for(
^^^^^^^^^^^^^^^^^^^^^^^
...<5 lines>...
)
^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/tasks.py", line 507, in wait_for
return await fut
^^^^^^^^^
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/browsers/_playwright_browser_controller.py", line 119, in new_page
self._browser_context = await self._create_browser_context(browser_new_context_options, proxy_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/browsers/_playwright_browser_controller.py", line 174, in _create_browser_context
if browser_new_context_options['proxy']:
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: 'proxy'
[crawlee._autoscaling.autoscaled_pool] INFO Waiting for remaining tasks to finish
[crawlee.crawlers._playwright._playwright_crawler] INFO Error analysis: total_errors=3 unique_errors=1
[crawlee.crawlers._playwright._playwright_crawler] INFO Final request statistics:
┌───────────────────────────────┬───────────┐
│ requests_finished │ 0 │
│ requests_failed │ 1 │
│ retry_histogram │ [0, 0, 1] │
│ request_avg_failed_duration │ 0.025703 │
│ request_avg_finished_duration │ None │
│ requests_finished_per_minute │ 0 │
│ requests_failed_per_minute │ 14 │
│ request_total_duration │ 0.025703 │
│ requests_total │ 1 │
│ crawler_runtime │ 4.189647 │
└───────────────────────────────┴───────────┘
[crawlee.storages._dataset] WARN Attempting to export an empty dataset - no file will be created
[crawlee.crawlers._playwright._playwright_crawler] INFO Extracted data: []
Process finished with exit code 0
```
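The traceback points at a plain subscript on a possibly-absent key. A minimal illustration of the failure mode and a `.get`-style guard (a hypothetical plain dict, not crawlee's real options object):

```python
browser_new_context_options = {}  # 'proxy' was never set for this tier

# Direct subscripting reproduces the crash:
crashed = False
try:
    _ = browser_new_context_options['proxy']
except KeyError:
    crashed = True
assert crashed

# Guarding with .get() treats a missing key the same as a falsy value:
if browser_new_context_options.get('proxy'):
    raise AssertionError("unreachable when no proxy is configured")
```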
|
closed
|
2025-01-09T01:26:37Z
|
2025-01-17T11:22:57Z
|
https://github.com/apify/crawlee-python/issues/887
|
[
"bug",
"t-tooling"
] |
matecsaj
| 3
|
apragacz/django-rest-registration
|
rest-api
| 51
|
Deprecate DefaultLoginSerializer.get_authenticated_user in favor of separate LOGIN_AUTHENTICATOR function
|
closed
|
2019-05-11T15:38:49Z
|
2020-09-29T14:42:28Z
|
https://github.com/apragacz/django-rest-registration/issues/51
|
[
"release:next-major-version"
] |
apragacz
| 0
|
|
tqdm/tqdm
|
pandas
| 887
|
update paper
|
Discuss updating [paper](https://github.com/tqdm/tqdm/blob/master/examples/paper.md) (archived at https://doi.org/10.21105/joss.01277) as per https://github.com/openjournals/joss-reviews/issues/1277.
- in particular, authors and acknowledgements
- upon request of Arfon Smith (@arfon) on behalf of the JOSS editorial team
- in addition to assignees; cc @noamraph, @Benjamin-Lee, @GregaVrbancic
|
open
|
2020-01-26T01:48:24Z
|
2020-03-09T12:36:17Z
|
https://github.com/tqdm/tqdm/issues/887
|
[
"question/docs ‽"
] |
casperdcl
| 94
|
wandb/wandb
|
data-science
| 9,435
|
[Bug-App]: The sweep search parameter in offline mode causes most of the results to repeat and loops these results.
|
### Describe the bug
<!--- Describe your issue here --->
When using a wandb sweep parameter search in offline mode, after a period of time (about 1 hour) the later searches repeat the results of the previous K runs and keep looping through those K results until the search ends at sweep_num. This happens even after I updated to the most recent version.
|
closed
|
2025-02-07T03:08:09Z
|
2025-02-13T22:04:28Z
|
https://github.com/wandb/wandb/issues/9435
|
[
"ty:bug",
"a:app"
] |
Tokenmw
| 3
|
cvat-ai/cvat
|
computer-vision
| 8,312
|
creating task issue
|
I recently applied for a data annotation position and was given some tasks to do, but I was unable to create the task.
|
closed
|
2024-08-16T11:20:11Z
|
2024-08-17T04:38:02Z
|
https://github.com/cvat-ai/cvat/issues/8312
|
[] |
ToshVits
| 1
|
vastsa/FileCodeBox
|
fastapi
| 194
|
Increase upload and download speed
|
I suggest adding LAN upload and download functionality later, with the option to choose. The advantage of the public network is convenience, but it is limited by the server's bandwidth. I suggest the developers try to follow up on this feature in the future, letting the user choose between public network and LAN when entering the page.
|
open
|
2024-08-15T07:30:08Z
|
2025-02-06T12:33:04Z
|
https://github.com/vastsa/FileCodeBox/issues/194
|
[] |
chenghua-x
| 2
|
PaddlePaddle/models
|
nlp
| 5,296
|
In CV video_tag, what are the 3000 labels, and what are the model's training AUC and accuracy?
|
I'd like to use PaddlePaddle's video tagging model, and I have a few questions:
https://github.com/PaddlePaddle/PaddleHub/tree/release/v1.7/hub_module/modules/video/classification/videotag_tsn_lstm
What I need to know:
1. What are the 3000 labels?
2. What are the model's training AUC and accuracy?
|
closed
|
2021-04-06T07:44:13Z
|
2021-04-07T06:55:09Z
|
https://github.com/PaddlePaddle/models/issues/5296
|
[] |
zhangweijiqn
| 2
|
dropbox/sqlalchemy-stubs
|
sqlalchemy
| 244
|
sql-alchemy 1.4 - using is_not on column leads to mypy warning, while using isnot does not lead to the warning
|
In SQLAlchemy 1.4, the method `isnot` was renamed to `is_not`, and the old method was kept for backward compatibility.
(see https://docs.sqlalchemy.org/en/14/core/sqlelement.html#sqlalchemy.sql.expression.ColumnOperators.is_not)
However, if one tries to use `is_not` on a column expression, something like:
```
...
.where(and_(pch_alias.c.New_Id.is_not(None))))
```
then mypy produces the following warning:
` error: "Column[Any]" has no attribute "is_not" `
However, if one uses the older `isnot` construct, mypy produces no warning.
As `is_not` is now the recommended method, I think the stubs should accept it without warnings.
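For illustration, the rename-with-alias pattern involved here reduces to a few lines of plain Python — the stubs just need to declare both names. This toy class is a stand-in, not SQLAlchemy's real implementation:

```python
class ColumnOperators:
    """Toy stand-in; only the alias pattern matters here."""

    def is_not(self, other):
        # New canonical name (SQLAlchemy 1.4+).
        return (self, "IS NOT", other)

    # Old spelling kept as a backward-compatible alias.
    isnot = is_not

col = ColumnOperators()
assert col.isnot(None) == col.is_not(None)
```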
|
open
|
2022-06-24T11:22:43Z
|
2022-10-20T16:56:17Z
|
https://github.com/dropbox/sqlalchemy-stubs/issues/244
|
[] |
serhiy-yevtushenko
| 1
|
huggingface/pytorch-image-models
|
pytorch
| 1,601
|
[FEATURE] Script to convert weight from Jax to PyTorch
|
**Is your feature request related to a problem? Please describe.**
I am trying to create multiple checkpoints of ViT at different iterations. Is there any systematic way to perform such a conversion?
**Describe the solution you'd like**
I would like to be able to convert JAX ViT model to a PyTorch model, similar to this model (https://huggingface.co/google/vit-base-patch16-224)
**Describe alternatives you've considered**
I have tried pre-training HF models on an A100, but so far have not been able to reach the same accuracy.
|
closed
|
2022-12-23T09:16:04Z
|
2023-02-08T22:56:52Z
|
https://github.com/huggingface/pytorch-image-models/issues/1601
|
[
"enhancement"
] |
yazdanbakhsh
| 6
|
youfou/wxpy
|
api
| 370
|
Sending a message fails
|
As shown below, why does the message-sending code throw an error? I'm a Python beginner.
`my_friends = bot.search().search('Shi',sex=MALE,)`
`my_friends.send("hi")`
Traceback (most recent call last):
File "C:/Users/Dell/PycharmProjects/WBot/venv/WBot_test.py", line 12, in <module>
my_friends.send("hi")
**AttributeError: 'Chats' object has no attribute 'send'**
|
open
|
2019-03-02T07:34:11Z
|
2019-04-05T13:33:25Z
|
https://github.com/youfou/wxpy/issues/370
|
[] |
ShihanZhang44419
| 1
|
CTFd/CTFd
|
flask
| 2,290
|
Update SQLAlchemy-Utils
|
The current version of SQLAlchemy-Utils that we are using was yanked for some reason. We should update this soon.
|
closed
|
2023-04-22T06:32:09Z
|
2023-04-27T05:38:33Z
|
https://github.com/CTFd/CTFd/issues/2290
|
[
"easy"
] |
ColdHeat
| 0
|
scrapy/scrapy
|
python
| 6,504
|
Fix typing for Twisted 24.10.0
|
This should be easy but we need this to be included in 2.12 so I've created this for visibility.
> scrapy/pipelines/media.py:209: error: Cannot assign to a method [method-assign]
`Failure.stack` is a special deprecated property thing in Twisted 24.10.0, and assigning to it is a no-op; maybe it's better to put the assignment under a Twisted version check instead of e.g. specifically silencing the typing warning.
|
closed
|
2024-10-22T14:40:29Z
|
2024-10-29T18:08:40Z
|
https://github.com/scrapy/scrapy/issues/6504
|
[
"bug",
"CI",
"typing"
] |
wRAR
| 4
|
deepset-ai/haystack
|
pytorch
| 8,143
|
Clean up AzureOpenAIChatGenerator docstrings
|
closed
|
2024-08-01T13:22:56Z
|
2024-08-02T12:25:57Z
|
https://github.com/deepset-ai/haystack/issues/8143
|
[] |
agnieszka-m
| 0
|
|
reloadware/reloadium
|
pandas
| 71
|
Can't get memory line profiler in PyCharm
|
**Describe the bug**
I just downloaded the pycharm plugin for reloadium after seeing it on reddit. Looks like a very useful tool, but I am having trouble replicating the memory line profiling shown in this post (and README doesn't include info on memory profiling yet): https://www.reddit.com/r/pycharm/comments/z0x4gu/memory_profiling_for_pycharm/
Apologies if I am just doing something wrong!
**To Reproduce**
Steps to reproduce the behavior:
1. Use my code below with breakpoints on each `pass`
2. Debug using reloadium
3. Open profiling details in the margin
4. Details are in units of time, not memory
5. In the Debug tab of pycharm, open the Reloadium sub-tab, change the profiler to "Memory"
6. Nothing changes
7. Repeat steps 2-3, but profiler setting from step 5 resets to Time and details are still in units of time
**Expected behavior**
Should be able to easily switch between units of time/memory, possibly having to rerun the debugger if technically necessary
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: Pop OS (Ubuntu)
- OS version: 22.04 LTS
- M1 chip: No
- Reloadium package version: 0.9.5
- PyCharm plugin version: 0.9.0
- Editor: PyCharm
- Python Version: 2022.2.2 Professional
- Python Architecture: 64bit
- Run mode: Debug
**Additional context**
My code:
```
import numpy as np
import pandas as pd
def foo():
data = np.random.randint(0, 100, size=(10, 1000))
df = pd.DataFrame(data)
df = pd.concat([df, df])
pass
def bar():
a = b"r" * 10_000
pass
if __name__ == '__main__':
foo()
bar()
```
https://user-images.githubusercontent.com/42096328/203413798-251af93b-c2d4-4df4-adf9-a33ffe569cc7.mov
|
closed
|
2022-11-22T20:27:55Z
|
2022-11-22T22:38:06Z
|
https://github.com/reloadware/reloadium/issues/71
|
[] |
danb27
| 2
|
run-llama/rags
|
streamlit
| 37
|
.
|
closed
|
2023-11-28T14:57:51Z
|
2023-11-28T15:12:28Z
|
https://github.com/run-llama/rags/issues/37
|
[] |
piyushyadav0191
| 0
|
|
amidaware/tacticalrmm
|
django
| 2,068
|
[Feature request] Tasks History
|
**Feature Request: Task History Logs**
**Is your feature request related to a problem? Please describe.**
Currently, there is no way to view historical logs for tasks, which makes it difficult to trace past executions and identify recurring issues.
**Describe the solution you'd like**
Implement a history log feature for tasks that records:
- Each execution timestamp
- The task status (e.g., success, failure, timeout)
- runtime
- Terminal output
This would make it possible to track tasks and troubleshoot issues.
**Describe alternatives you've considered**
Manually logging task output
**Additional context**

|
closed
|
2024-11-11T10:50:43Z
|
2024-11-11T23:25:48Z
|
https://github.com/amidaware/tacticalrmm/issues/2068
|
[] |
P6g9YHK6
| 4
|
microsoft/nni
|
machine-learning
| 5,665
|
Repeat is not supported
|
**Describe the issue**:
When I use "model_space = ProxylessNAS()" as the model_space of RetiariiExperiment, there is an error:
TypeError: Repeat is not supported

**Environment**:
- NNI version:2.10.1
- Training service (local|remote|pai|aml|etc):local
- Client OS:windows11
- Server OS (for remote mode only):
- Python version:3.9
- PyTorch/TensorFlow version:PyTorch
- Is conda/virtualenv/venv used?:colab
- Is running in Docker?:no
|
open
|
2023-08-17T12:39:13Z
|
2023-08-17T12:39:13Z
|
https://github.com/microsoft/nni/issues/5665
|
[] |
flower997
| 0
|
plotly/dash
|
dash
| 3,119
|
pattern-matched long callbacks cancelled incorrectly
|
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.18.2 /home/amorton/gh/dash
dash-core-components 2.0.0
dash_dangerously_set_inner_html 0.0.2
dash-flow-example 0.0.5
dash_generator_test_component_nested 0.0.1 /home/amorton/gh/dash/@plotly/dash-generator-test-component-nested
dash_generator_test_component_standard 0.0.1 /home/amorton/gh/dash/@plotly/dash-generator-test-component-standard
dash_generator_test_component_typescript 0.0.1 /home/amorton/gh/dash/@plotly/dash-generator-test-component-typescript
dash-html-components 2.0.0
dash-table 5.0.0
dash_test_components 0.0.1 /home/amorton/gh/dash/@plotly/dash-test-components
dash-testing-stub 0.0.2
```
**Describe the bug**
Pattern-matched long callbacks incorrectly cancelled based on wildcard output
Consider the following example app:
```python
import time
from dash import Dash, DiskcacheManager, callback, html, MATCH, Output, Input, State
def build_output_message_id(item_id_str: str) -> dict[str, str]:
return {
'component': 'output-message',
'item_id': item_id_str,
}
def build_output_item(item_id: float) -> html.Div:
return html.Div(
[
html.Span(f'{item_id:.2} sec delay:', style={'margin-right': '1rem'}),
html.Span(0, id=build_output_message_id(str(item_id))),
html.Br(),
],
style={"margin-top": "1rem"},
)
def build_app_layout() -> html.Div:
return html.Div([
html.Button('Fire!', id='button', n_clicks=0),
html.Br(),
*[build_output_item(i * 0.2) for i in range(20)],
], style={"display": "block"})
@callback(
Output(build_output_message_id(MATCH), 'children'),
Input('button', 'n_clicks'),
State(build_output_message_id(MATCH), 'children'),
State(build_output_message_id(MATCH), 'id'),
prevent_initial_call=True,
background=True,
interval=200,
)
def update_messages(_, current_value, id_dict):
delay_secs = float(id_dict["item_id"])
time.sleep(delay_secs)
return current_value + 1
app = Dash(
background_callback_manager=DiskcacheManager(),
)
app.layout = build_app_layout()
app.run(
host='0.0.0.0',
debug=True,
)
```
Upon pressing the button you should see many numbers _never_ increment, and many requests being made with a list of `oldJob` values.
This is unexpected, since the outputs don't correspond to the same concrete component.
The following patch resolves the issue in this example app.
```diff
diff --git a/dash/dash-renderer/src/actions/callbacks.ts b/dash/dash-renderer/src/actions/callbacks.ts
index 23da0a3f..a73af9d0 100644
--- a/dash/dash-renderer/src/actions/callbacks.ts
+++ b/dash/dash-renderer/src/actions/callbacks.ts
@@ -561,7 +561,7 @@ function handleServerside(
cacheKey: data.cacheKey as string,
cancelInputs: data.cancel,
progressDefault: data.progressDefault,
- output
+ output: JSON.stringify(payload.outputs),
};
dispatch(addCallbackJob(jobInfo));
job = data.job;
@@ -761,9 +761,10 @@ export function executeCallback(
let lastError: any;
const additionalArgs: [string, string, boolean?][] = [];
+ const jsonOutput = JSON.stringify(payload.outputs);
values(getState().callbackJobs).forEach(
(job: CallbackJobPayload) => {
- if (cb.callback.output === job.output) {
+ if (jsonOutput === job.output) {
// Terminate the old jobs that are not completed
// set as outdated for the callback promise to
// resolve and remove after.
```
|
open
|
2025-01-08T19:58:02Z
|
2025-01-09T19:19:29Z
|
https://github.com/plotly/dash/issues/3119
|
[
"bug",
"P2"
] |
apmorton
| 0
|
dynaconf/dynaconf
|
fastapi
| 876
|
[bug] Dynaconf doesn't load configuration file if cwd doesn't exist
|
**Describe the bug**
When the current working directory has been removed, dynaconf refuses to load configuration files.
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
<!-- Describe or use the command `$ tree -v` and paste below -->
<details>
<summary> Project structure </summary>
```bash
# /tmp/dyn.yaml
# /home/user/bug_dynaconf/
# app.py
# script.sh
```
</details>
2. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
<summary> Config files </summary>
**/tmp/dyn.yaml**
```yaml
offset: 24
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**app.py**
```python
from dynaconf import Dynaconf
settings = Dynaconf(
settings_files=["/tmp/dyn.yaml"]
)
print(settings.offset)
settings.validators.validate()
print(type(settings.offset))
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
**script.sh**
```bash
#!/bin/bash -x
python3 -m venv venv
source venv/bin/activate
pip install dynaconf==3.1.12
PARENT=$(realpath .)
mkdir nonexistent_dir
cd nonexistent_dir
rm -r ../nonexistent_dir
python $PARENT/app.py
```
</details>
**Expected behavior**
the `app.py` should have printed the type of `offset`, which is an `int`
**Actual behavior**
~~~Python
Traceback (most recent call last):
File "/home/mtarral/debug_dynaconf/app.py", line 7, in <module>
print(settings.offset)
File "/home/mtarral/debug_dynaconf/venv/lib/python3.8/site-packages/dynaconf/base.py", line 138, in __getattr__
value = getattr(self._wrapped, name)
File "/home/mtarral/debug_dynaconf/venv/lib/python3.8/site-packages/dynaconf/base.py", line 300, in __getattribute__
return super().__getattribute__(name)
AttributeError: 'Settings' object has no attribute 'OFFSET'
~~~
**Environment (please complete the following information):**
- OS: ubuntu 20.04
- Dynaconf Version 3.1.12
**Additional context**
Following https://github.com/dynaconf/dynaconf/issues/853, I tried to repro with 3.1.12, and found this issue now.
Thanks for dynaconf !
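The likely root cause (an assumption — I have not traced dynaconf's internals) is that resolving relative paths calls `os.getcwd()`, which raises once the working directory is gone:

```python
import os
import tempfile

# Recreate the script.sh situation: chdir into a directory, then delete it.
keep = os.getcwd()
doomed = tempfile.mkdtemp()
os.chdir(doomed)
os.rmdir(doomed)

try:
    os.getcwd()
    raised = False
except FileNotFoundError:
    raised = True
finally:
    os.chdir(keep)  # restore a valid cwd

assert raised  # any library resolving relative paths here would fail the same way
```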
|
closed
|
2023-03-03T13:40:35Z
|
2023-07-13T19:11:08Z
|
https://github.com/dynaconf/dynaconf/issues/876
|
[
"bug",
"Pending Release"
] |
Wenzel
| 8
|
mitmproxy/mitmproxy
|
python
| 7,379
|
Firebase: NetworkError when attempting to fetch resource
|
#### Problem Description
When the client connects to `POST https://firebaseremoteconfigrealtime.googleapis.com/v1/projects/<..>/namespaces/firebase:streamFetchInvalidations HTTP/1.1
`, the remote server seems to disconnect our connection. Based on [this](https://github.com/mitmproxy/mitmproxy/issues/3362#issuecomment-572694235), I guess missing http2 support might be the reason.
Is there a way to enable http2 for a specific set of domains? (e.g. firebaseremoteconfigrealtime.googleapis.com)
```
Error getting content: TypeError: NetworkError when attempting to fetch resource..
```
Edit:
Or maybe the problem is that it's a streaming response?
|
closed
|
2024-12-07T18:54:52Z
|
2025-01-09T08:33:02Z
|
https://github.com/mitmproxy/mitmproxy/issues/7379
|
[
"kind/triage"
] |
patrikschmidtke
| 1
|
davidteather/TikTok-Api
|
api
| 696
|
[BUG] - if not res["hasMore"] and not first: KeyError: 'hasMore'
|
Description:
Every time I try to use the by_username() function, I encounter this error. It was working last week; now I can't retrieve even a single post with this function. The failing line is `if not res["hasMore"] and not first:`, which raises the value
KeyError: 'hasMore'
buggy code:
The buggy code is in the by_username() function.
Expected:
The code is expected to return the user's TikTok post data.
Error Trace:
if not res["hasMore"] and not first:
KeyError: 'hasMore'
OS: Windows 10
TikTokApi Version : 4.0.1
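Until the upstream response parsing is fixed, a hedged workaround is to tolerate the missing key instead of indexing it directly (`has_more` is a hypothetical helper name, not part of TikTokApi):

```python
def has_more(res: dict) -> bool:
    # The TikTok response sometimes omits "hasMore" entirely, so use
    # dict.get() with a default instead of res["hasMore"], which raises
    # KeyError when the key is absent.
    return bool(res.get("hasMore", False))
```

The crashing check `if not res["hasMore"] and not first:` would then become `if not has_more(res) and not first:`.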
|
closed
|
2021-09-17T09:51:11Z
|
2021-09-21T18:18:00Z
|
https://github.com/davidteather/TikTok-Api/issues/696
|
[
"bug"
] |
Syahrol13
| 13
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 402
|
Impact of punctuation on QA results
|
### Detailed problem description
Running langchain_qa.py does not produce satisfactory results.
Is this a problem with the trained model or with post-processing? What approach should be used to solve it?
### Screenshot or log
As shown in the image below.

### Required checklist (of the first three items, keep only the one you are asking about)
- [ ] **Base model**: Alpaca-Plus-7B
- [ ] **Operating system**: Windows
- [ ] **Issue category**: output-quality issue
- [ ] (Required) Since the related dependencies are updated frequently, please make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [ ] (Required) I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched existing issues without finding a similar problem or solution
- [ ] (Required) Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is also recommended to look for solutions in the corresponding projects
|
closed
|
2023-05-22T03:41:45Z
|
2023-06-01T22:02:20Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/402
|
[
"stale"
] |
geniusnut
| 2
|
ionelmc/pytest-benchmark
|
pytest
| 99
|
need help with writing tox.ini
|
I need some assistance with getting tox to run the suite on PY27, PY36, and PYPY, and I don't know how to write it exactly. Could you help me out? It's not entirely benchmark-related.
```ini
[tox]
envlist =
    python36

[testenv]
deps =
    pytest
    pytest-benchmark
    pytest-cov
    enum34
    numpy
commands =
    py.test --benchmark-disable --showlocals {posargs}

[testenv:verb]
commands =
    py.test --benchmark-disable --fulltrace --showlocals --verbose {posargs}

[testenv:cover]
commands =
    py.test --benchmark-disable --cov {envsitepackagesdir}/construct --cov-report html --cov-report term --verbose {posargs}

[testenv:bench]
commands =
    py.test --benchmark-only --benchmark-columns=min,max,mean,stddev,median,rounds --benchmark-sort=name --benchmark-compare {posargs}

[testenv:benchsave]
commands =
    py.test --benchmark-only --benchmark-columns=min,max,mean,stddev,median,rounds --benchmark-sort=name --benchmark-autosave {posargs}
```
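For reference, tox resolves standard factor names like `py27`, `py36`, and `pypy` automatically, so listing them in `envlist` is usually enough. An untested sketch based on the file above (the `py27:` factor condition limits `enum34` to Python 2.7, where it is actually needed):

```ini
[tox]
envlist = py27, py36, pypy

[testenv]
deps =
    pytest
    pytest-benchmark
    pytest-cov
    py27: enum34
    numpy
commands =
    py.test --benchmark-disable --showlocals {posargs}
```

Running `tox` would then create one virtualenv per interpreter and run the suite in each.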
|
closed
|
2018-02-08T18:59:11Z
|
2018-02-11T05:00:00Z
|
https://github.com/ionelmc/pytest-benchmark/issues/99
|
[] |
arekbulski
| 6
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 445
|
How to apply cross dataset validation on custom torchreid model
|
Hi. I have a custom person re-id model that is currently not available in the torchreid library. I want to perform cross-dataset validation on that custom-designed model. The model was originally trained on the msmt17 dataset and I have its checkpoint saved. I want to evaluate this model on market1501 or dukemtmc. How can I do that? Can anybody please guide me?
|
open
|
2021-06-27T21:15:25Z
|
2021-06-29T04:06:21Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/445
|
[] |
FatimaZulfiqar
| 4
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,228
|
ValueError: Too many packets in payload
|
I'm trying to make an app that streams images from a webcam to the server.
I made these scripts to mimic the process by sending a 400x300x3 array as the image,
but I got this error:
ValueError: Too many packets in payload
client
```py
import socketio
import time
import numpy as np
import pickle

arr = np.random.randn(400, 300, 3)
arr = pickle.dumps(arr)

sio = socketio.Client()

@sio.event
def connect():
    print('connection established')

@sio.event
def my_message(data):
    print('message received with ', data)
    sio.emit('my response', {'response': 'my response'})

@sio.event
def disconnect():
    print('disconnected from server')

sio.connect('http://localhost:5000')
while True:
    # time.sleep(0.001)
    sio.emit("message", arr)
sio.wait()
```
server
```py
from flask import Flask, render_template
from flask_socketio import SocketIO

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)

import numpy as np
import pickle

arr = np.random.randn(300, 400, 3)
arr = pickle.dumps(arr)

@socketio.on('message')
def handle_message(message):
    socketio.emit("my_message", 'aaa')

if __name__ == '__main__':
    socketio.run(app)
```
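The tight `while True: sio.emit(...)` loop pushes large binary frames into the Engine.IO payload queue faster than they can be flushed, which is one way to hit this limit. A hedged mitigation is to throttle the emit rate; the sketch below is a plain-stdlib helper (`FrameThrottle` is a hypothetical name) that the client loop would call before each `emit`:

```python
import time

class FrameThrottle:
    """Cap how often frames are emitted (illustrative helper)."""

    def __init__(self, max_fps: float):
        self.interval = 1.0 / max_fps
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep the emit rate under max_fps.
        elapsed = time.monotonic() - self._last
        if elapsed < self.interval:
            time.sleep(self.interval - elapsed)
        self._last = time.monotonic()

# Usage in the client loop (sketch):
# throttle = FrameThrottle(max_fps=30)
# while True:
#     throttle.wait()
#     sio.emit("message", arr)
```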
|
closed
|
2020-04-01T14:39:31Z
|
2020-04-02T10:22:52Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1228
|
[] |
DiaaZiada
| 1
|
brightmart/text_classification
|
tensorflow
| 4
|
missing word2vec package
|
I am wondering whether the word2vec package is a user-defined package; it is not included in the project.
----
conda install word2vec
|
closed
|
2017-07-24T03:25:05Z
|
2017-07-25T07:59:31Z
|
https://github.com/brightmart/text_classification/issues/4
|
[] |
jxlijunhao
| 1
|
yunjey/pytorch-tutorial
|
deep-learning
| 232
|
Make your repo cloneable and not editable by anyone.
|
closed
|
2021-06-16T05:23:19Z
|
2021-06-23T11:01:52Z
|
https://github.com/yunjey/pytorch-tutorial/issues/232
|
[] |
Anindyadeep
| 0
|
|
pydantic/pydantic
|
pydantic
| 10,641
|
add environment option to disable disable Pydantic validations
|
### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.pydantic.dev) and still think this feature is missing
### Description
It would be good to have the ability to disable Pydantic validations, e.g. via an environment setting.
A lot of time can be spent tracking down Pydantic issues when upgrading code. Moreover, this places a burden when testing, such as when upgrading from version 1 to version 2. Although ultimately the code should be fixed, sometimes the code just needs to run.
This can be implemented transparently as sketched here:
```
$ git diff pydantic/main.py | grep '[+-]'
diff --git a/pydantic/main.py b/pydantic/main.py
--- a/pydantic/main.py
+++ b/pydantic/main.py
@@ -25,6 +25,8 @@ from typing import (
+from mezcla import debug, system
+
@@ -68,6 +70,8 @@ else:
# and https://youtrack.jetbrains.com/issue/PY-51428
+IGNORE_PYDANTIC_ERRORS = system.getenv_bool("IGNORE_PYDANTIC_ERRORS", False)
+
@@ -210,7 +214,9 @@ class BaseModel(metaclass=_model_construction.ModelMetaclass):
- validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
+ validated_self = self
+ if not IGNORE_PYDANTIC_ERRORS:
+ validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
@@ -970,7 +976,10 @@ class BaseModel(metaclass=_model_construction.ModelMetaclass):
- raise pydantic_core.ValidationError.from_exception_data(self.__class__.__name__, [error])
+ if IGNORE_PYDANTIC_ERRORS:
+ debug.trace(2, f"Warning: validation error in {__pydantic_self__}")
+ else:
+ raise pydantic_core.ValidationError.from_exception_data(self.__class__.__name__, [error])
def __getstate__(self) -> dict[Any, Any]:
@@ -1260,7 +1269,10 @@ class BaseModel(metaclass=_model_construction.ModelMetaclass):
- raise pydantic_core.ValidationError.from_exception_data(cls.__name__, [error])
+ if IGNORE_PYDANTIC_ERRORS:
+ debug.trace(2, f"Warning: validation error in {__pydantic_self__}")
+ else:
+ raise pydantic_core.ValidationError.from_exception_data(self.__class__.__name__, [error])
```
A simple example follows:
```
$ simple_pydantic_example.py
...
u2 = User(id="two", name="Jane Doe")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
pydantic_core._pydantic_core.ValidationError: 1 validation error for User
...
$ IGNORE_PYDANTIC_ERRORS=1 python
u1=User()
u2=User()
```
[simple_pydantic_example.py.txt](https://github.com/user-attachments/files/17406123/simple_pydantic_example.py.txt)
```
$ python -c "import pydantic.version; print(pydantic.version.version_info())"
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /Users/joe/virtual-environments/agvisor-py-3-12/lib/python3.12/site-packages/pydantic
python version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
platform: macOS-14.7-arm64-arm-64bit
related packages: typing_extensions-4.12.0 pydantic-settings-2.5.2 fastapi-0.104.1
commit: unknown
```
Of course, other changes would be needed to be complete. This is just to illustrate that it can be readily implemented.
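The environment-flag gating in the sketch can be isolated into a small stdlib-only helper for illustration (all names here are hypothetical, not pydantic API); note that pydantic already offers `model_construct()` for building a single instance without validation, which covers some of the same ground:

```python
import os

# Hypothetical module-level switch mirroring the proposed patch.
IGNORE_PYDANTIC_ERRORS = os.environ.get("IGNORE_PYDANTIC_ERRORS", "") == "1"

def maybe_validate(validate, data):
    """Run validate(data) unless validation is globally disabled."""
    if IGNORE_PYDANTIC_ERRORS:
        return data  # pass the data through unvalidated, as proposed
    return validate(data)
```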
### Affected Components
- [ ] [Compatibility between releases](https://docs.pydantic.dev/changelog/)
- [X] [Data validation/parsing](https://docs.pydantic.dev/concepts/models/#basic-model-usage)
- [ ] [Data serialization](https://docs.pydantic.dev/concepts/serialization/) - `.model_dump()` and `.model_dump_json()`
- [ ] [JSON Schema](https://docs.pydantic.dev/concepts/json_schema/)
- [ ] [Dataclasses](https://docs.pydantic.dev/concepts/dataclasses/)
- [ ] [Model Config](https://docs.pydantic.dev/concepts/config/)
- [ ] [Field Types](https://docs.pydantic.dev/api/types/) - adding or changing a particular data type
- [X] [Function validation decorator](https://docs.pydantic.dev/concepts/validation_decorator/)
- [ ] [Generic Models](https://docs.pydantic.dev/concepts/models/#generic-models)
- [ ] [Other Model behaviour](https://docs.pydantic.dev/concepts/models/) - `model_construct()`, pickling, private attributes, ORM mode
- [ ] [Plugins](https://docs.pydantic.dev/) and integration with other tools - mypy, FastAPI, python-devtools, Hypothesis, VS Code, PyCharm, etc.
|
closed
|
2024-10-17T04:55:05Z
|
2024-10-18T03:21:58Z
|
https://github.com/pydantic/pydantic/issues/10641
|
[
"feature request"
] |
thomas-paul-ohara
| 3
|
ultralytics/yolov5
|
machine-learning
| 13,231
|
How to modify the network structure of the YOLOv5 classification model
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello, I tried to change the YOLOv5 example `model=YOLO('yolov5s-cls.yaml').load('yolov5s-cls.pt')` to use the corresponding YOLOv5-cls yaml and pt files, but encountered the error `AttributeError: Can't get attribute 'ClassificationModel' on <module 'models.yolo' from 'D:\\anaco\\envs\\yolo8\\Lib\\site-packages\\ultralytics\\models\\yolo\\__init__.py'>`. How can I solve this problem? I tried importing only the YAML file and it ran correctly, but that cannot read the weights from the PT file. Alternatively, is there a way to modify the YOLOv5 classification network structure within yolov5-master?

### Additional
_No response_
|
open
|
2024-07-31T00:23:10Z
|
2024-10-20T19:51:05Z
|
https://github.com/ultralytics/yolov5/issues/13231
|
[
"question"
] |
lielumao
| 4
|
babysor/MockingBird
|
deep-learning
| 45
|
Where can I download aidatatang_200zh dataset?
|
Where can I download aidatatang_200zh dataset?
|
closed
|
2021-08-24T03:39:19Z
|
2021-08-24T03:59:22Z
|
https://github.com/babysor/MockingBird/issues/45
|
[] |
duke91
| 2
|
python-gitlab/python-gitlab
|
api
| 2,825
|
ProjectMergeRequestApprovalManager.set_approvers cannot handle too many approval rules
|
## Description of the problem, including code/CLI snippet
In v4/objects/merge_request_approvals.py ~ line 121
# update any existing approval rule matching the name
existing_approval_rules = approval_rules.list()
for ar in existing_approval_rules:
if ar.name == approval_rule_name:
ar.user_ids = data["user_ids"]
ar.approvals_required = data["approvals_required"]
ar.group_ids = data["group_ids"]
ar.save()
return ar
# if there was no rule matching the rule name, create a new one
return approval_rules.create(data=data, **kwargs)
An existing approval rule will not be found if it exceeds the default pagination count for list(). It will instead attempt to create it (which will fail because it already exists)
## Expected Behavior
approval rule should be found in the list of existing approval rules and be modified.
## Actual Behavior
approval rule is not found and creation of new rule fails.
## Specifications
I believe the fix is to change
approval_rules.list()
to
approval_rules.list(all=True)
- python-gitlab version: 4.4.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com):
|
closed
|
2024-03-19T20:55:19Z
|
2024-05-17T10:22:02Z
|
https://github.com/python-gitlab/python-gitlab/issues/2825
|
[] |
liu15
| 0
|
microsoft/MMdnn
|
tensorflow
| 41
|
Error when converting inception_v3 from TensorFlow IR to MXNet
|
OS: CentOS7.0
tensorflow: 1.4.1
mxnet: 1.0.0
I downloaded the pre-trained model from [inception_v3_2016_08_28.tar.gz](http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz)
I followed these steps:
1. Use python to extract both network architecture and weights
`python -m mmdnn.conversion.examples.tensorflow.extract_model -n inception_v3 -ckpt ./inception_v3.ckpt`
2. Convert model (including architecture and weights) from Tensorflow to IR
`python -m mmdnn.conversion._script.convertToIR -f tensorflow -d inception_v3 -n imagenet_inception_v3.ckpt.meta -w inception_v3.ckpt --dstNodeName Squeeze`
3. Convert models from IR to MXNet code snippet and weights
`python -m mmdnn.conversion._script.IRToCode -f mxnet --IRModelPath inception_v3.pb --dstModelPath mxnet_inception_v3.py --IRWeightPath inception_v3.npy -dw ./mxnet_inception_v3-0000.params`
4. Convert models from IR to MXNet checkpoint file
`python -m mmdnn.conversion.examples.mxnet.imagenet_test -n mxnet_inception_v3 -w mxnet_inception_v3-0000.params --dump inception_v3`
Step 1-3 works well, but step 4 reports some warning:
`/usr/lib/python2.7/site-packages/mxnet/module/base_module.py:53: UserWarning: You created Module with Module(..., label_names=['softmax_label']) but input with name 'softmax_label' is not found in symbol.list_arguments(). Did you mean one of:
input
warnings.warn(msg)
/usr/lib/python2.7/site-packages/mxnet/module/base_module.py:65: UserWarning: Data provided by label_shapes don't match names specified by label_names ([] vs. ['softmax_label'])
warnings.warn(msg)
MXNet checkpoint file is saved with prefix [inception_v3] and iteration 0, generated by [mxnet_inception_v3.py] and [mxnet_inception_v3-0000.params].`
After step 4, `inception_v3-symbol.json` and `inception_v3-0000.params` are generated. When I test these files using `sym, arg_params, aux_params = mx.model.load_checkpoint('inception_v3', 0)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)`, the error occurs:
`Traceback (most recent call last):
File "inception_v3-loading.py", line 10, in <module>
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
File "/usr/lib/python2.7/site-packages/mxnet/module/module.py", line 93, in __init__
_check_input_names(symbol, data_names, "data", True)
File "/usr/lib/python2.7/site-packages/mxnet/module/base_module.py", line 51, in _check_input_names
raise ValueError(msg)
ValueError: You created Module with Module(..., data_names=['data']) but input with name 'data' is not found in symbol.list_arguments(). Did you mean one of:
input`
Could someone help me?
|
closed
|
2018-01-03T03:42:00Z
|
2018-01-04T02:53:25Z
|
https://github.com/microsoft/MMdnn/issues/41
|
[] |
mapicccy
| 2
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 4,192
|
Closed and reopened reports are not automatically deleted
|
### What version of GlobaLeaks are you using?
4.15.9
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Windows, iOS
### Describe the issue
When the report status is changed to closed, or a closed report is reopened, it remains on the platform even after the expiration date.
The automatic e-mail notification reminding of the expiration is not sent.
In the Changelog I can see that the reopening issue was fixed in 4.15.9, but this doesn't work for me. Could this be related to the issue I reported last month? https://github.com/globaleaks/whistleblowing-software/issues/4158#issuecomment-2301243390
And what about the closed report not deleted? I cannot find documentation about this but I presume that it should be automatically deleted on the expiration date. Is this a bug?
Thanks
### Proposed solution
_No response_
|
closed
|
2024-09-17T08:52:10Z
|
2024-10-07T14:34:49Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4192
|
[] |
eleibr
| 4
|
sepandhaghighi/samila
|
matplotlib
| 10
|
Add Projection Parameters
|
closed
|
2021-09-27T15:04:17Z
|
2021-09-29T05:14:13Z
|
https://github.com/sepandhaghighi/samila/issues/10
|
[
"enhancement"
] |
sadrasabouri
| 0
|
|
deepspeedai/DeepSpeed
|
pytorch
| 6,935
|
nv-nightly CI test failure
|
The Nightly CI for https://github.com/deepspeedai/DeepSpeed/actions/runs/13556042840 failed.
|
closed
|
2025-01-09T01:13:10Z
|
2025-02-27T19:02:09Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6935
|
[
"ci-failure"
] |
github-actions[bot]
| 1
|
deezer/spleeter
|
deep-learning
| 551
|
isolating a guitar solo
|
Goal is to create a track without the guitar solo.
Would spleeter have the ability to isolate a guitar solo from a song if a training track was created consisting of just the actual guitar solo (performed and recorded by me as well as possible)?
|
closed
|
2021-01-07T16:49:43Z
|
2021-01-08T13:21:35Z
|
https://github.com/deezer/spleeter/issues/551
|
[
"question"
] |
yamtssfa
| 1
|
miguelgrinberg/python-socketio
|
asyncio
| 169
|
Running server and client in one application
|
Hi Miguel,
Thanks for the fantastic library! I got it running very quickly.
But now I run into a problem with the idea I had: I am writing a sort of broker between socketio clients and server. It should accept connections from the clients and pass their requests on to the server. It should also pass on data coming from the server to the clients. Whereas this does not make sense per se (the clients could connect to the server directly), this broker could (and will!) mangle/change/translate the data. (I hope that makes sense.)
For the socketio client I used socketIO-client-2 (https://pypi.python.org/pypi/socketIO-client-2/, https://github.com/feus4177/socketIO-client-2).
I use Python 3.4.
I started with eventlet:
```python
import eventlet
eventlet.monkey_patch()

import socketio
from socketIO_client import SocketIO, LoggingNamespace

class Broker:
    def __init__(self):
        # Connection to socket.IO server as a client
        self.client = SocketIO('localhost', 3000, LoggingNamespace)
        self.setup_client_callbacks()
        # Our own socket.IO server
        self.server = socketio.Server(async_mode='eventlet')
        self.setup_server_callbacks()

    def run(self):
        # Start client
        self.client.wait_for_callbacks()
        print('*** CLIENT ***')
        # Start server
        app = socketio.Middleware(self.server)
        eventlet.wsgi.server(eventlet.listen(('', 3001)), app)
        print('*** SERVER ***')

    def setup_server_callbacks(self):
        self.server.on('connect', self.on_connect)
        self.server.on('disconnect', self.on_disconnect)
        self.server.on('getState', self.on_getState)

    def setup_client_callbacks(self):
        self.client.on('pushState', self.on_pushState)

    # Server callbacks
    def on_connect(self, sid, environ):
        print('** connect: {} {}'.format(sid, environ))

    def on_getState(self, sid, data):
        print('** getState: {}'.format(data))
        self.client.emit('getState', data)
        print('** getState emitted')

    def on_disconnect(self, sid):
        print('** disconnect: {}'.format(sid))

    # Client callbacks
    def on_pushState(self, data):
        print('** pushState: {}'.format(data))
        self.server.emit('pushState', data)

if __name__ == '__main__':
    b = Broker()
    b.run()
```
The external server (at port 3000) answers a `getState, ''` message with a `pushState, <data>` message.
When I run the above program, the following happens:
1. The client part connects to the server at port 3000.
2. `** CLIENT` is printed to the console.
3. The server part starts and listens to port 3001 (and prints some messages).
- Now my external client connects to this server at 3001.
4. `** connect: <sid> <environ>` is printed to the console (confirming that the external client connected).
- Now my external client emits a `getState, ''` message.
5. `** getState: ` is printed to the console (confirming that the external client emitted that message).
6. `** getState emitted` is printed to the console (confirming that the `getState` message is passed onto the external server).
- **Here I would now expect for the server to respond with a `pushState, <data>` message and triggering the `on_pushState()` callback method. This method is never called (does not print `** pushState: {}` to the console).**
- I shut down the external client.
7. `** disconnect: <sid>` is printed to the console (confirming that the external client disconnected).
- I press `CTRL-C` to shut down my server/client.
8. `** SERVER` is printed to the console.
I guessed that there might be a problem between `eventlet` and `threading`, which is used by the `socketIO-client-2` module. Hence I added the `eventlet.monkey_patch()` line.
I also tried to use just the `threading` module (here are only the changes):
```python
from flask import Flask
import socketio
from socketIO_client import SocketIO, LoggingNamespace

class Broker:
    def __init__(self):
        # Connection to socket.IO server as a client
        self.client = SocketIO('localhost', 3000, LoggingNamespace)
        self.setup_client_callbacks()
        # Our own socket.IO server
        self.server = socketio.Server(async_mode='threading')
        self.setup_server_callbacks()

    def run(self):
        # Start client
        self.client.wait_for_callbacks()
        print('*** CLIENT ***')
        # Start server
        app = Flask(__name__)
        app.config['SERVER_NAME'] = 'localhost:3001'
        app.wsgi_app = socketio.Middleware(self.server, app.wsgi_app)
        app.run(threaded=True)
        print('*** SERVER ***')

    ...
```
but no luck here either.
Could you please be so kind as to point me in the right direction on how to make this work (or suggest a better way to implement this broker)?
Thank you for your kind help!!
|
closed
|
2018-02-22T10:39:54Z
|
2019-01-15T09:00:42Z
|
https://github.com/miguelgrinberg/python-socketio/issues/169
|
[
"question"
] |
sphh
| 4
|
dynaconf/dynaconf
|
flask
| 1,134
|
[RFC]typed: Add support for Any type
|
```python
field: Any
field: Annotated[Any]
field: tuple[str, Any]
field: dict[str, Any]
```
Add support for those types and other variations on `is_type_of` function.
|
open
|
2024-07-06T15:17:22Z
|
2024-07-08T18:38:18Z
|
https://github.com/dynaconf/dynaconf/issues/1134
|
[
"Not a Bug",
"RFC",
"typed_dynaconf"
] |
rochacbruno
| 0
|
apragacz/django-rest-registration
|
rest-api
| 89
|
Extend views
|
Is there any way to extend the rest registration views and add other functionality?
|
closed
|
2019-11-26T06:39:40Z
|
2019-11-26T10:02:46Z
|
https://github.com/apragacz/django-rest-registration/issues/89
|
[
"closed-as:duplicate",
"type:feature-request"
] |
itzmanish
| 2
|
fastapi/sqlmodel
|
sqlalchemy
| 530
|
Relationship type annotations disappear after class definition is evaluated
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from sqlmodel import Relationship, SQLModel, Field
from pydantic.main import BaseModel

class SpecialFieldValidator(BaseModel):
    id: int = Field(primary_key=True, index=True)

    def __init_subclass__(cls) -> None:
        # Try to figure out what type the special_field relation has
        print("special_field" in cls.__annotations__)  # False
        print("special_field" in cls.__fields__)  # False
        print("special_field" in cls.__sqlmodel_relationships__)  # True, but no type info via this dict, just a plain RelationshipInfo() instance:
        print(dict(cls.__sqlmodel_relationships__["special_field"].__dict__))  # {'back_populates': None, 'link_model': None, 'sa_relationship': None, 'sa_relationship_args': None, 'sa_relationship_kwargs': None}
        return super().__init_subclass__()

class MyModelA(SQLModel, SpecialFieldValidator, table=True):
    id: int = Field(primary_key=True)
    special_field_id: int = Field(foreign_key="mymodelb.id")
    special_field: "MyModelB" = Relationship()

class MyModelB(SQLModel, SpecialFieldValidator, table=True):
    id: int = Field(primary_key=True)
    special_field_id: int = Field(foreign_key="mymodela.id")
    special_field: MyModelA = Relationship()
```
### Description
I'm trying to write a class (SpecialFieldValidator) that is supposed to check whether a specific field, which is always a Relationship, was annotated with a particular type.
However, the actual type annotation seems to be getting erased: looking in `__fields__` and `__annotations__` yields nothing, and the data in `__sqlmodel_relationships__` does not include a type.
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.11.0
### Additional Context
SQLAlchemy version is 1.4.41
I found this issue: https://github.com/tiangolo/sqlmodel/issues/255 that seems related, saying SQLAlchemy 1.4.36 breaks relationship, but SQLModels dependencies were officially updated to >=1.4.41 more recently than that so I figured this is a new issue.
|
open
|
2023-01-15T23:05:48Z
|
2024-12-02T15:37:34Z
|
https://github.com/fastapi/sqlmodel/issues/530
|
[
"question"
] |
phdowling
| 1
|
MaartenGr/BERTopic
|
nlp
| 1,043
|
Topic modeling regression in 0.14.0 with nr_topics
|
I have noticed a reduction in the quality of topic modeling in 0.14.0 when specifying the nr_topics parameter.
Here is my test script:
```
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups
from bertopic.vectorizers import ClassTfidfTransformer
newsgroups_train = fetch_20newsgroups(subset='train', remove=('headers', 'footers', 'quotes'))
ctfidf_model = ClassTfidfTransformer(reduce_frequent_words=True)
topic_model = BERTopic(nr_topics=len(newsgroups_train['target_names']),
                       ctfidf_model=ctfidf_model,
                       calculate_probabilities=True)
topic_model.fit(newsgroups_train['data'])
print(topic_model.get_topic_info())
```
With bertopic==0.13.0:
```
Topic Count Name
0 -1 4688 -1_maxaxaxaxaxaxaxaxaxaxaxaxaxaxax_for_on_you
1 0 700 0_car_bike_cars_my
2 1 638 1_drive_scsi_drives_disk
3 2 575 2_gun_guns_militia_firearms
4 3 547 3_key_encryption_clipper_chip
5 4 539 4_team_hockey_550_game
6 5 527 5_patients_msg_medical_disease
7 6 483 6_year_baseball_pitching_he
8 7 405 7_card_monitor_video_vga
9 8 375 8_israel_turkish_jews_israeli
10 9 317 9_ditto_ites_hello_hi
11 10 199 10_god_jesus_hell_he
12 11 182 11_window_widget_colormap_server
13 12 173 12_morality_truth_god_moral
14 13 172 13_fbi_koresh_compound_batf
15 14 171 14_amp_condition_scope_offer
16 15 141 15_atheists_atheism_god_universe
17 16 131 16_printer_fonts_font_print
18 17 118 17_ted_post_challenges_you
19 18 118 18_windows_dos_cview_swap
20 19 115 19_xfree86_libxmulibxmuso_symbol_undefined
```
And with bertopic==0.14.0:
```
Topic Count Name
0 -1 3334 -1_you_it_for_is
1 0 4402 0_for_with_on_be
2 1 620 1_god_stephanopoulos_that_mr
3 2 559 2_patients_medical_msg_health
4 3 437 3_space_launch_nasa_lunar
5 4 436 4_israel_were_turkish_armenian
6 5 376 5_car_bike_cars_dog
7 6 296 6_gun_guns_firearms_militia
8 7 230 7_morality_objective_gay_moral
9 8 139 8_symbol_xterm_libxmulibxmuso_server
10 9 119 9_printer_ink_print_hp
11 10 94 10_requests_send_address_list
12 11 88 11_radar_detector_detectors_radio
13 12 42 12_church_pope_schism_mormons
14 13 40 13_ground_battery_grounding_conductor
15 14 36 14_tax_taxes_deficit_income
16 15 24 15_marriage_married_ceremony_commitment
17 16 20 16_maxaxaxaxaxaxaxaxaxaxaxaxaxaxax_mg9vg9vg9vg...
18 17 12 17_ditto_hello_hi_too
19 18 10 18_professors_tas_phds_teaching
```
|
open
|
2023-02-24T16:01:39Z
|
2023-02-25T07:02:27Z
|
https://github.com/MaartenGr/BERTopic/issues/1043
|
[] |
damosuzuki
| 1
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 16,586
|
[Bug]: (ZLUDA) Final step when processing in SD 1.0 takes longer than all previous steps combined
|
### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Recently did a fresh install of A1111 with ZLUDA config (previously using DirectML) and saw a massive reduction in step time when generating images, but the final(?) step takes longer than all other steps combined before finishing. The image *does* eventually process, and I'm generally pleased with the overall reduction in time, but the last step is hampering the overall time reduction.
I'm on a full AMD setup with a Radeon VII (16GB VRAM), so when I was using DirectML I was getting around 5-7 seconds per iteration at 1024X1024. With ZLUDA, it appears I'm getting somewhere between 6-10 iterations per second (depending on checkpoint and LoRA usage), but then there's a very long hang before the process finishes. So, steps 1-19 will process in 2-3 seconds and then step 20 (or sometimes after step 20 is reported as 'finished') will take 2-3 minutes. I'm seeing 95-100% GPU usage (Compute1) during the last step, and if I have live previews enabled it's a little longer but the preview will only update once or twice.
I've tried different checkpoints, VAEs, as well as disabling live previews but I've seen no change in behavior. The strange thing is I don't see this with SDXL checkpoints. I get 2-3 iterations per second, steps proceed at a steady pace, and the final step isn't much different than the previous ones (30-40 seconds overall). That's incredible as I wasn't able to even load SDXL checkpoints before ZLUDA, but a lot of the LoRAs I prefer to use are for SD1 and I'd like to be able to use SD1 checkpoints when needed.
### Steps to reproduce the problem
1) Start A1111 with --use-zluda
2) Load a SD1 checkpoint, any VAE
3) Begin a Text2Img or Img2Img process at 1024x1024
4) see nearly all steps running at 5-10 iterations per second
5) see last step take 2-3 minutes while soaking the GPU
6) process finishes, provides image as expected
### What should have happened?
I would expect the final step to not take significantly longer than previous steps when using SD1 checkpoints, especially since SDXL checkpoints do not seem to have this issue.
### What browsers do you use to access the UI ?
Other
### Sysinfo
[sysinfo-2024-10-24-21-06.json](https://github.com/user-attachments/files/17512814/sysinfo-2024-10-24-21-06.json)
### Console logs
```Shell
venv "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.10.1-amd-12-gae5ff7a2
Commit hash: ae5ff7a232cd898f653e4fffb36f507b54de8b72
Using ZLUDA in C:\SD-Zluda\a1111\stable-diffusion-webui-directml\.zluda
ROCm agents: ['gfx906:sramecc-:xnack-'], using gfx906:sramecc-:xnack-
Installing requirements
[Auto-Photoshop-SD] Attempting auto-update...
[Auto-Photoshop-SD] switch branch to extension branch.
checkout_result: Your branch is up to date with 'origin/master'.
[Auto-Photoshop-SD] Current Branch.
branch_result: * master
[Auto-Photoshop-SD] Fetch upstream.
fetch_result:
[Auto-Photoshop-SD] Pull upstream.
pull_result: Already up to date.
Installing requirements for diffusers
Requirement already satisfied: insightface==0.7.3 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from -r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (0.7.3)
Collecting onnx==1.14.0 (from -r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 2))
Using cached onnx-1.14.0-cp310-cp310-win_amd64.whl.metadata (15 kB)
Requirement already satisfied: onnxruntime==1.15.0 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from -r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (1.15.0)
Requirement already satisfied: opencv-python==4.7.0.72 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from -r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 4)) (4.7.0.72)
Requirement already satisfied: ifnude in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from -r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 5)) (0.0.3)
Requirement already satisfied: cython in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from -r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 6)) (3.0.11)
Requirement already satisfied: numpy in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.26.2)
Requirement already satisfied: tqdm in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (4.66.5)
Requirement already satisfied: requests in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (2.32.3)
Requirement already satisfied: matplotlib in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (3.9.2)
Requirement already satisfied: Pillow in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (9.5.0)
Requirement already satisfied: scipy in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.14.1)
Requirement already satisfied: scikit-learn in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.5.2)
Requirement already satisfied: scikit-image in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (0.21.0)
Requirement already satisfied: easydict in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.13)
Requirement already satisfied: albumentations in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.4.3)
Requirement already satisfied: prettytable in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (3.11.0)
Collecting protobuf>=3.20.2 (from onnx==1.14.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 2))
Using cached protobuf-5.28.3-cp310-abi3-win_amd64.whl.metadata (592 bytes)
Requirement already satisfied: typing-extensions>=3.6.2.1 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnx==1.14.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 2)) (4.12.2)
Requirement already satisfied: coloredlogs in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime==1.15.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (15.0.1)
Requirement already satisfied: flatbuffers in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime==1.15.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (24.3.25)
Requirement already satisfied: packaging in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime==1.15.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (24.1)
Requirement already satisfied: sympy in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from onnxruntime==1.15.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (1.13.1)
Requirement already satisfied: opencv-python-headless>=4.5.1.48 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from ifnude->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 5)) (4.10.0.84)
Requirement already satisfied: PyYAML in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from albumentations->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (6.0.2)
Requirement already satisfied: networkx>=2.8 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (3.2.1)
Requirement already satisfied: imageio>=2.27 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (2.36.0)
Requirement already satisfied: tifffile>=2022.8.12 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (2024.9.20)
Requirement already satisfied: PyWavelets>=1.1.1 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.7.0)
Requirement already satisfied: lazy_loader>=0.2 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from scikit-image->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (0.4)
Requirement already satisfied: joblib>=1.2.0 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from scikit-learn->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.4.2)
Requirement already satisfied: threadpoolctl>=3.1.0 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from scikit-learn->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (3.5.0)
Requirement already satisfied: humanfriendly>=9.1 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from coloredlogs->onnxruntime==1.15.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (10.0)
Requirement already satisfied: contourpy>=1.0.1 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.3.0)
Requirement already satisfied: cycler>=0.10 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (4.54.1)
Requirement already satisfied: kiwisolver>=1.3.1 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.4.7)
Requirement already satisfied: pyparsing>=2.3.1 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (3.2.0)
Requirement already satisfied: python-dateutil>=2.7 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from matplotlib->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (2.9.0.post0)
Requirement already satisfied: wcwidth in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from prettytable->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (0.2.13)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from requests->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from requests->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from requests->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from requests->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (2024.8.30)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from sympy->onnxruntime==1.15.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (1.3.0)
Requirement already satisfied: colorama in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from tqdm->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (0.4.6)
Requirement already satisfied: pyreadline3 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from humanfriendly>=9.1->coloredlogs->onnxruntime==1.15.0->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 3)) (3.5.4)
Requirement already satisfied: six>=1.5 in c:\sd-zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages (from python-dateutil>=2.7->matplotlib->insightface==0.7.3->-r C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\requirements.txt (line 1)) (1.16.0)
Using cached onnx-1.14.0-cp310-cp310-win_amd64.whl (13.3 MB)
Using cached protobuf-5.28.3-cp310-abi3-win_amd64.whl (431 kB)
Installing collected packages: protobuf, onnx
Attempting uninstall: protobuf
Found existing installation: protobuf 3.20.1
Uninstalling protobuf-3.20.1:
Successfully uninstalled protobuf-3.20.1
Attempting uninstall: onnx
Found existing installation: onnx 1.16.2
Uninstalling onnx-1.16.2:
Successfully uninstalled onnx-1.16.2
Successfully installed onnx-1.14.0 protobuf-5.28.3
C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda
No ROCm runtime is found, using ROCM_HOME='C:\Program Files\AMD\ROCm\5.7'
ONNX failed to initialize: Failed to import diffusers.pipelines.pipeline_utils because of the following error (look up to see its traceback):
Failed to import diffusers.models.autoencoders.autoencoder_kl because of the following error (look up to see its traceback):
Failed to import diffusers.loaders.unet because of the following error (look up to see its traceback):
Building PyTorch extensions using ROCm and Windows is not supported.
python_server_full_path: C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\Auto-Photoshop-StableDiffusion-Plugin\server/python_server
CHv1.8.11: Get Custom Model Folder
[-] ADetailer initialized. version: 24.9.0, num models: 10
CivitAI Browser+: Aria2 RPC started
2024-10-24 16:14:37,786 - roop - INFO - roop v0.0.2
2024-10-24 16:14:37,936 - roop - INFO - roop v0.0.2
Loading weights [6ce0161689] from C:\SD-Zluda\a1111\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\SD-Zluda\a1111\stable-diffusion-webui-directml\configs\v1-inference.yaml
CHv1.8.11: Set Proxy:
creating model quickly: OSError
Traceback (most recent call last):
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 862, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 969, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _raise_on_head_call_error
raise head_call_error
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 1296, in get_hf_file_metadata
r = _request_wrapper(
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 277, in _request_wrapper
response = _request_wrapper(
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\file_download.py", line 301, in _request_wrapper
hf_raise_for_status(response)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-671ab8be-2a4cdf0f0b47a4405505c709;d2ce9332-101d-404d-bbbd-4c245ea1e656)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\shared_items.py", line 190, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\sd_models.py", line 831, in load_model
sd_model = instantiate_from_config(sd_config.model, state_dict)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\sd_models.py", line 775, in instantiate_from_config
return constructor(**params)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\encoders\modules.py", line 104, in __init__
self.transformer = CLIPTextModel.from_pretrained(version)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\modeling_utils.py", line 3301, in from_pretrained
resolved_config_file = cached_file(
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
Failed to create model quickly; will retry using slow method.
C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\scripts\faceswap.py:38: GradioDeprecationWarning: Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components
img = gr.inputs.Image(type="pil")
C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\gradio_extensons.py:25: GradioDeprecationWarning: `optional` parameter is deprecated, and it has no effect
res = original_IOComponent_init(self, *args, **kwargs)
C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\scripts\faceswap.py:55: GradioDeprecationWarning: Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components
upscaler_name = gr.inputs.Dropdown(
C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\sd-webui-roop\scripts\faceswap.py:74: GradioDeprecationWarning: Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components
model = gr.inputs.Dropdown(
*** Error executing callback ui_tabs_callback for C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\facechain\scripts\facechain_sdwebui.py
Traceback (most recent call last):
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\modules\script_callbacks.py", line 283, in ui_tabs_callback
res += c.callback() or []
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\facechain\scripts\facechain_sdwebui.py", line 15, in on_ui_tabs
import app
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\facechain\app.py", line 16, in <module>
from facechain.inference_fact import GenPortrait
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\extensions\facechain\facechain\inference_fact.py", line 20, in <module>
from modelscope.pipelines import pipeline
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\pipelines\__init__.py", line 4, in <module>
from .base import Pipeline
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\pipelines\base.py", line 16, in <module>
from modelscope.msdatasets import MsDataset
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\__init__.py", line 2, in <module>
from modelscope.msdatasets.ms_dataset import MsDataset
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\ms_dataset.py", line 16, in <module>
from modelscope.msdatasets.data_loader.data_loader_manager import (
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\data_loader\data_loader_manager.py", line 12, in <module>
from modelscope.msdatasets.data_loader.data_loader import OssDownloader
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\data_loader\data_loader.py", line 15, in <module>
from modelscope.msdatasets.data_files.data_files_manager import \
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\data_files\data_files_manager.py", line 11, in <module>
from modelscope.msdatasets.download.dataset_builder import (
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\download\dataset_builder.py", line 24, in <module>
from modelscope.msdatasets.download.download_manager import \
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\download\download_manager.py", line 9, in <module>
from modelscope.msdatasets.utils.oss_utils import OssUtilities
File "C:\SD-Zluda\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\modelscope\msdatasets\utils\oss_utils.py", line 7, in <module>
import oss2
ModuleNotFoundError: No module named 'oss2'
---
[ERROR]: Config states C:\SD-Zluda\a1111\stable-diffusion-webui-directml\config_states\civitai_subfolders.json, "created_at" does not exist
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
COMMANDLINE_ARGS does not contain --api, API won't be mounted.
Startup time: 83.4s (prepare environment: 84.3s, initialize shared: 3.3s, other imports: 0.1s, list SD models: 0.2s, load scripts: 2.9s, create ui: 4.5s, gradio launch: 0.8s).
Applying attention optimization: Doggettx... done.
Model loaded in 8.8s (create model: 1.9s, apply weights to model: 4.5s, apply half(): 0.3s, move model to device: 0.1s, load textual inversion embeddings: 0.4s, calculate empty prompt: 1.5s).
100%|██████████████████████████████████████| 20/20 [00:23<00:00, 1.15s/it]
Total progress: 100%|██████████████████████| 20/20 [04:29<00:00, 13.47s/it]
Total progress: 100%|██████████████████████| 20/20 [04:29<00:00, 12.15it/s]
```
### Additional information
Installed latest AMD GPU drivers for Radeon VII
using ROCm 5.7

image attached is an example of GPU resource monitoring during the long 'final step' phase.
|
closed
|
2024-10-24T21:24:57Z
|
2024-10-25T03:13:31Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16586
|
[
"bug-report"
] |
Spudsman2600
| 3
|
netbox-community/netbox
|
django
| 18,449
|
Clean up formatting errors
|
### Proposed Changes
```
netbox\dcim\base_filtersets.py:56:8: E121 continuation line under-indented for hanging indent
netbox\dcim\filtersets.py:1655:17: E126 continuation line over-indented for hanging indent
netbox\extras\tests\test_customfields.py:663:31: E203 whitespace before ','
netbox\ipam\models\vlans.py:364:13: E131 continuation line unaligned for hanging indent
netbox\ipam\models\vlans.py:372:13: E131 continuation line unaligned for hanging indent
```
### Justification
These errors were caught by pycodestyle:
pycodestyle --ignore=W504,E501 --exclude=node_modules netbox/
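For reference, a small sketch (illustrative code, not taken from the NetBox tree) of the continuation-line and whitespace styles these error codes enforce:

```python
# Illustrative only -- these snippets show the formatting pycodestyle accepts
# for the error classes listed above; they are not NetBox source code.

# E121/E126/E131: with a hanging indent, every continuation line must sit at
# one consistent extra indentation level.
total = sum(
    value * 2
    for value in (1, 2, 3)
)

# E203: no whitespace before a comma, semicolon, or colon.
pairs = [(1, 2), (3, 4)]  # the flagged form would be: [(1 , 2), (3 , 4)]

print(total, pairs)
```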
|
closed
|
2025-01-21T16:23:22Z
|
2025-01-24T00:51:44Z
|
https://github.com/netbox-community/netbox/issues/18449
|
[
"status: accepted",
"type: housekeeping",
"severity: low",
"complexity: low"
] |
DanSheps
| 0
|
daleroberts/itermplot
|
matplotlib
| 29
|
Plots turn brown when resizing the iTerm2 window
|
As the title says, the plots turn brown when the iTerm2 window is resized.
This is on macOS 10.13, iTerm2 (3.1.5) and Python 3.6.1 using itermplot 0.20.

|
open
|
2018-01-22T16:01:13Z
|
2018-06-27T08:50:28Z
|
https://github.com/daleroberts/itermplot/issues/29
|
[] |
tamasgal
| 1
|
jonaswinkler/paperless-ng
|
django
| 285
|
[Feature request] bulk export documents
|
Already requested... sorry...
|
closed
|
2021-01-06T22:34:21Z
|
2021-01-06T22:39:16Z
|
https://github.com/jonaswinkler/paperless-ng/issues/285
|
[] |
Philmo67
| 1
|
0b01001001/spectree
|
pydantic
| 98
|
flask-restful problem: request body and parameters doesn't appear in documentation
|

When I decorate the `post` method in a resource class with `api.validate(json=User)`, the request body and parameters don't appear in the documentation.
```
@api.validate(json=User, resp=Response(HTTP_200=None, HTTP_400=None), tags=['UserSignUp'])
def post(self):
print(reqparse.request.context.json)
self.__add_parser_argument()
user_args = self.parser.parse_args()
```
|
closed
|
2021-01-02T13:56:01Z
|
2021-01-15T06:39:43Z
|
https://github.com/0b01001001/spectree/issues/98
|
[] |
arminsoltan
| 2
|
fugue-project/fugue
|
pandas
| 276
|
[FEATURE] Get rid of FUGUE_SQL_CONF_SIMPLE_ASSIGN
|
It seems simple assignment in Fugue SQL is the way to go; `:=` does not make sense as another option, so we will just remove it.
|
closed
|
2021-12-31T05:36:04Z
|
2022-01-02T06:48:09Z
|
https://github.com/fugue-project/fugue/issues/276
|
[
"enhancement",
"Fugue SQL",
"behavior change"
] |
goodwanghan
| 0
|
harry0703/MoneyPrinterTurbo
|
automation
| 442
|
Help needed: ImageMagick was downloaded correctly and the file path has also been changed, but the error persists
|
OSError: MoviePy Error: creation of None failed because of the following error: [WinError 2] The system cannot find the file specified. This error can be due to the fact that ImageMagick is not installed on your computer, or (for Windows users) that you didn't specify the path to the ImageMagick binary. Check the documentation.
Traceback:
File "D:\ai\anaaconda\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 584, in _run_script
exec(code, module.__dict__)
File "C:\Users\许\MoneyPrinterTurbo\webui\Main.py", line 658, in <module>
result = tm.start(task_id=task_id, params=params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\许\MoneyPrinterTurbo\app\services\task.py", line 184, in start
video.generate_video(video_path=combined_video_path,
File "C:\Users\许\MoneyPrinterTurbo\app\services\video.py", line 257, in generate_video
sub = SubtitlesClip(subtitles=subtitle_path, encoding='utf-8')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ai\anaaconda\Lib\site-packages\moviepy\video\tools\subtitles.py", line 104, in __init__
hasmask = bool(self.make_textclip("T").mask)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ai\anaaconda\Lib\site-packages\moviepy\video\tools\subtitles.py", line 58, in make_textclip
return TextClip(
^^^^^^^^^
File "<decorator-gen-102>", line 2, in __init__
File "D:\ai\anaaconda\Lib\site-packages\moviepy\decorators.py", line 89, in wrapper
return f(*new_a, **new_kw)
^^^^^^^^^^^^^^^^^^^
File "D:\ai\anaaconda\Lib\site-packages\moviepy\video\VideoClip.py", line 1272, in __init__
raise IOError(error)
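Not part of the original report: on Windows, `[WinError 2]` at this point usually means MoviePy cannot resolve the ImageMagick binary it was configured with. A hedged diagnostic sketch (the helper function and messages are my own; only `moviepy.config.change_settings` referenced in the comment is MoviePy's API) to check the binary before running the pipeline:

```python
import shutil

def find_imagemagick(candidates=("magick", "convert")):
    """Return the first ImageMagick executable resolvable on PATH, or None.

    "magick" is the modern binary name; "convert" covers legacy installs.
    """
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None

binary = find_imagemagick()
if binary is None:
    # If nothing resolves, point MoviePy at the binary explicitly
    # (the path below is an example -- use your actual install location):
    #   from moviepy.config import change_settings
    #   change_settings({"IMAGEMAGICK_BINARY": r"C:\Program Files\ImageMagick\magick.exe"})
    print("ImageMagick not found on PATH")
else:
    print("ImageMagick resolved at:", binary)
```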
|
closed
|
2024-07-08T10:28:23Z
|
2024-07-15T06:13:02Z
|
https://github.com/harry0703/MoneyPrinterTurbo/issues/442
|
[] |
xdf927
| 1
|
jumpserver/jumpserver
|
django
| 14,213
|
[Question] Account Push failed with SSH Key
|
### Product Version
v4.1.0
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
We are using JumpServer in cluster mode on Ubuntu 22.04.
### 🤔 Question Description
What could be the problem with pushing the account correctly with a randomly generated SSH key?
### Expected Behavior
_No response_
### Additional Information
We have a problem pushing an account with a randomly generated SSH key to assets, using two methods: pushing the account directly, or creating a template and pushing it via Authorization. In both cases we get an "Authentication failed" error. We checked the asset and found that after the user is created, there is no public key in the authorized_keys file in their home directory.
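Independent of JumpServer, a small sketch (helper name and logic are my own, not part of JumpServer) for checking whether a pushed public key actually landed in a target user's `authorized_keys`, comparing only the key type and base64 blob so differing comments don't cause false negatives:

```python
def key_is_authorized(pub_key_line, authorized_keys_text):
    """True if the key's type + base64 blob appear in the authorized_keys text."""
    wanted = pub_key_line.split()
    for line in authorized_keys_text.splitlines():
        line = line.strip()
        # skip blank lines and comments
        if not line or line.startswith("#"):
            continue
        # compare key type and blob only; ignore the trailing comment field
        if line.split()[:2] == wanted[:2]:
            return True
    return False
```

Running this against the asset's `~/.ssh/authorized_keys` right after a push makes it easy to tell whether the push wrote nothing (as described above) or wrote a key that doesn't match the one JumpServer is authenticating with.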
|
closed
|
2024-09-20T11:51:45Z
|
2024-10-07T13:37:11Z
|
https://github.com/jumpserver/jumpserver/issues/14213
|
[
"🤔 Question"
] |
obgranat
| 5
|
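The reporter's "Authentication failed" symptom matches a push that created the user but never installed the public key. As a reference point for what a successful push should leave on the asset — this is an illustrative sketch, not JumpServer's actual push implementation — the key must land in `~/.ssh/authorized_keys` with the permissions sshd insists on (0700 on the directory, 0600 on the file):

```python
import os
import stat

def install_authorized_key(home_dir, public_key):
    """Append a public key to <home_dir>/.ssh/authorized_keys with the
    strict permissions sshd requires, returning the file path.

    sshd silently rejects key auth when the directory or file is
    group/world writable, so the chmod calls matter as much as the write.
    """
    ssh_dir = os.path.join(home_dir, ".ssh")
    os.makedirs(ssh_dir, exist_ok=True)
    os.chmod(ssh_dir, stat.S_IRWXU)  # 0700
    auth_file = os.path.join(ssh_dir, "authorized_keys")
    with open(auth_file, "a", encoding="utf-8") as f:
        # Normalize to exactly one trailing newline per key entry.
        f.write(public_key.rstrip("\n") + "\n")
    os.chmod(auth_file, stat.S_IRUSR | stat.S_IWUSR)  # 0600
    return auth_file
```

Checking these three things on the asset (file exists, key present, permissions strict) narrows down whether the push or the SSH login is the failing step.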
repo_name: mckinsey/vizro
topic: data-visualization
issue_number: 1067
title: Enable zoom-based interactions between time-series Graphs
body:
### Question
Hey there!
I have a dashboard that has two components. For the sake of argument, let's say it is two time-series figures captured into a `vm.Graph`. They both plot a continuous value against the same `pd.DatetimeIndex`. I would like that whenever I zoom on the source plot, the target plot focuses on the same x-range. This is very similar to #936, but:
- Filtering on datetime values does not seem to be supported
- I use the figure's `relayoutData` instead of `selectedData`, as I only want to enable this behavior when I zoom in/out and not when I select.
### Code/Examples
I have started to implement a solution based on creating an InvisibleRangeSlider and having it map to the `relayoutData` property of my plot using a client-side callback, in a way similar to what is recommended in #936.
As datetime-based filtering does not seem to be supported, I have tried a few workarounds, but none of them has worked. My experiments are available on this [PyCafe link](https://app.py.cafe/snippet/vizro/v1?#c=H4sIAC0m2GcEA61YC2_jRBD-K4ur0kRKrfZoEQr4BHcHXCU4oXuAUBz5NvYmWep4jb3Og6r_nW92vbaTummR2FSpvfP-ZmYfffNilQhv7IXmM5epFkUkM3zzWEuVsY3US1aKVMRaJG-45r5lDbN5oVZM73KZLZhc5arQ7BdJgmmYhVk9k_Ms4SXDX560k6nS6c5fFDxfRmr2F3QbloVqWNbyn0L5KziXGtJ6dUCqdYhtXojSGtiSXeMVTC6dT0kc98zGqRSZLmUiopin6YzHtyN2k-WVbpi10HIlnIB7r8nGCUf7nV66BN-CV_pRzgu-EgRqjWct0hL25GzAPlAVpWONea6rAobpc8JeV6UG_3ueLcSHFAEULFbgzBAP00uumYRotpalnKWQilMOfG7cREdusF75ndfhOMwYBhkfu1ROQq_RFRXEHJWGO_SmLGCPU8lZ0paIOZtVMk0GKKK5s0GjrHLAgvRDj3keDH3LOexh8ku9SwVY70IvkWWe8l3ojeFAhshD774VKQTgylpJB9yPGYebJTASrCoFU3Pz-LkPms8dTFFaHPXn_8YXAtOZLhQqvJ7weZJEhNggRCsZWhl6VEoPlSIsI1UnvitqG0wBt0dF6ZPMAUC-9VGMHICo-LYksJI50kQFapOSJ75WkSvYQZdqmP1SaPR4IraDM5o_G6FeAGgsgo9FJWpb39d1B-9Mo4YeCSOb1HkRemYAWR7NKRqX1rlcwP5C-T_JBYlCwkwrqtEUVhis46laZeVkfH457ZQDZC0iWHnEADo-xFwDpsE2aA35xu0R23XmJlA4HTFqnSD0Vry4FQVyMDRxOM1VDgERbflWlIPWqKlZW7JBImM9qKG3QIwsY6Oorizoo5kwOw4RqqcD0YhRMmyTBO9Ac5jlBZZbaPhEHtJaalSZdZfKuysH7SQhUbjNHHV7pjQjlR00S80LPWIiS5CQlrtlgBLD84g8DULMiFrGoJlA6RU5PQxqM6F3-uf56er8NGGnb8env45PPxgsjJp6QLiGwmpy2etgZLPLXraWrOnh1KpqFcJ7Cu0ZvlsE3GvrOeb_b7-_a-2QWfLaqrLfKBto2GsOS6DWgGjbGWiMRqTTE3aORqc72kkafb2yz9HTOfsMh33UUmtgGr-77UDvNN9pCbvhGTxNT1CdRwYcU4qD9vGwFT6CYlvTbX5JTys8YQ673nFrtfstwarMaZkP3IJf46ulxqIQeq-XKMeSfbqBN5bS7BNlMLEzNCD9M7XxQCaQKlVVxCKKSZjCAGLwOWgX0vnQrTU09oXRAAus173CMNwVnjY-2Y2o61HfrnJQPNYc4Di3cLgQ3bAI72l1Ay7_YKgHKt2YV5khBw-qwPl-OLAjVQbTjkf-mqcVbWM9Mn16TtgNm4uUjkVYKcqlqtKEbVRxixMJFj4c6hTOA1hB-kSPhXRivpug2sPco7w0bCZtTJ2k-i0aCO0xBShr878zTnqDPuY3jcbrww7pU0bjv2aChqp0I_RoqD2yhz50eToInLB3r8bsI2V1Y5KaF2rGZ-mOzYQ51G34jmmFddJluhRihWMxbXddNbR7rATHKQ_cpjLMfcct411W4Noc3TrQ2uw8K7WdWKyUO_UFN9nR5rTcNI61qOXqImhnVjILsLlI2g18vOCYjeNhvnNB7iW-FuHYSBoRvn1CpM4MahdPdNR-i-sU8oC7CI6a9sRtCsadufvi7aykbLMUWJjBaFdOZsCk3d4iJnCV7LnA1YiF9f2Unl2xDwqBCwOqkm6wQ9whDJUGThJ7RPbll9gZ2vfJGc6MsvQN3pOL6dmUfREErKJNRmYi2VNGg26ZUeudOWyjPHMcO_czx-4MJmM2OW5vdMSfS9Cn98NvWxceXoQ2SKPa4M6w71emInsoroVrQSCHz8g8mglzKYbvh5tY6HXdwr5blwTu42vYwAVDaslTkx1znAbdlEd9rTUX3jCze769_UbRGkcOpCuKwEkOzxQv6PyG5nvjXge0Q6PV6B9daMztGxVq746N1NAvKtS6N4Kff1eyECuKHD91GFsvgwv_0n9xFWZgoEuYI-A13-mlyjCR71QCqM7XF_6Lr_1LkGzA3vjOqz31xi9gQCn9Xm1oulZV4G3kxUt4VAgwTRqK5jNUBIgbmeilN768vhh56Mo_7OtX9u2tkIsl7NCrTCCGX2bEK2gtRfEauztH6RWPWCDW85nlBUvOSa_n3U_vRw1P40WfAhBbOZ7nfr7bt35OdHUAl9fX_nV3XD0n2tr7pyJ1Qe4598CFqyv_am-MvKXz7ptnuYMVH9M8fcofx0cO0ed-ZJKBoptM7_8F3sW1tmsTAAA).
Thank you very much for your help! :)
### Which package?
vizro
### Code of Conduct
- [x] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
state: open
created_at: 2025-03-17T13:56:07Z
updated_at: 2025-03-24T10:09:20Z
url: https://github.com/mckinsey/vizro/issues/1067
labels: ["General Question :question:"]
user_login: gtauzin
comments_count: 7

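Independently of the client-side route the reporter describes, the core of any zoom-sync is parsing the event payload: a Plotly `relayoutData` dict carries the new x-window under the keys `"xaxis.range[0]"`/`"xaxis.range[1]"` after a zoom or pan, and `"xaxis.autorange"` after a double-click reset. A minimal, framework-free sketch of that parsing step (the callback wiring to the target `Graph` is deliberately left out, and the function name is invented for this example):

```python
def extract_x_range(relayout_data):
    """Pull the zoomed x-window out of a Plotly relayoutData payload.

    Returns "auto" when the user reset the axes (double-click),
    a (start, end) tuple after an x-axis zoom or pan, or None for
    unrelated events (e.g. pure y-axis changes, or an empty payload).
    Datetime axes deliver the bounds as ISO-format strings, which can
    be passed straight back into the target figure's x-axis range.
    """
    if not relayout_data:
        return None
    if relayout_data.get("xaxis.autorange"):
        return "auto"
    start = relayout_data.get("xaxis.range[0]")
    end = relayout_data.get("xaxis.range[1]")
    if start is not None and end is not None:
        return (start, end)
    return None

# The target figure would then be updated with, for example:
#   fig.update_xaxes(range=list(window))   # when window is a (start, end) tuple
#   fig.update_xaxes(autorange=True)       # when window == "auto"
```

Because the datetime bounds stay as strings end to end, this sidesteps the datetime-filtering limitation the issue mentions.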