Dataset columns (one record per GitHub issue; fields are delimited by `|` lines below):
- repo_name: string (length 9-75)
- topic: string (30 classes)
- issue_number: int64 (1-203k)
- title: string (length 1-976)
- body: string (length 0-254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38-105)
- labels: list (length 0-9)
- user_login: string (length 1-39)
- comments_count: int64 (0-452)
Textualize/rich
|
python
| 3,068
|
[REQUEST]
|
Please add a way to center/align the Progress module. Currently, from what I've seen, it can only be aligned to the left, but I would love for it to be able to be centered as well.
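A hedged, untested sketch of a possible interim workaround, assuming a Progress instance can be nested inside other renderables: wrap it in `Align.center` and drive it from a `Live` display.
```python
import time

from rich.align import Align
from rich.live import Live
from rich.progress import Progress

progress = Progress()
task = progress.add_task("Working...", total=100)

# Assumption: Progress can be nested as a renderable; Live refreshes the
# centered view while the task advances.
with Live(Align.center(progress), refresh_per_second=10):
    for _ in range(100):
        progress.advance(task)
        time.sleep(0.02)
```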
|
closed
|
2023-07-30T09:04:38Z
|
2023-07-30T20:53:16Z
|
https://github.com/Textualize/rich/issues/3068
|
[
"Needs triage"
] |
KingKDot
| 4
|
deezer/spleeter
|
deep-learning
| 290
|
Command not found: Spleeter
|
<img width="568" alt="Screen Shot 2020-03-12 at 9 33 25 AM" src="https://user-images.githubusercontent.com/58147163/76526825-87602400-6444-11ea-9ec2-a16ea279bd02.png">
Any ideas? I followed all the instructions.
|
closed
|
2020-03-12T13:34:49Z
|
2020-05-25T19:25:21Z
|
https://github.com/deezer/spleeter/issues/290
|
[
"bug",
"invalid"
] |
chrisgauthier9
| 2
|
ultralytics/ultralytics
|
computer-vision
| 18,758
|
Cannot export model to edgetpu on linux machine
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Other, Export
### Bug
The export to edgetpu works in Google Colab; however, I cannot reproduce the model export on a personal computer.
```python
from ultralytics import YOLO
model = YOLO('yolo11n.pt')  # load a pretrained model (recommended for training)
results = model.export(format='edgetpu')  # export the model to Edge TPU format
```
TensorFlow SavedModel: export failure ❌ 77.2s: No module named 'imp'
Traceback (most recent call last):
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/ultralytics/engine/exporter.py", line 1285, in _add_tflite_metadata
from tensorflow_lite_support.metadata import metadata_schema_py_generated as schema # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'tensorflow_lite_support'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/01.py", line 3, in <module>
results = model.export(format='edgetpu') # export the model to ONNX format
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/ultralytics/engine/model.py", line 738, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/ultralytics/engine/exporter.py", line 403, in __call__
f[5], keras_model = self.export_saved_model()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/ultralytics/engine/exporter.py", line 176, in outer_func
raise e
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/ultralytics/engine/exporter.py", line 171, in outer_func
f, model = inner_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/ultralytics/engine/exporter.py", line 1034, in export_saved_model
f.unlink() if "quant_with_int16_act.tflite" in str(f) else self._add_tflite_metadata(file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/ultralytics/engine/exporter.py", line 1288, in _add_tflite_metadata
from tflite_support import metadata # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/tflite_support/metadata.py", line 28, in <module>
from tflite_support import flatbuffers
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/tflite_support/flatbuffers/__init__.py", line 15, in <module>
from .builder import Builder
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/tflite_support/flatbuffers/builder.py", line 15, in <module>
from . import number_types as N
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/tflite_support/flatbuffers/number_types.py", line 18, in <module>
from . import packer
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/tflite_support/flatbuffers/packer.py", line 22, in <module>
from . import compat
File "/home/andrea/venv/myenv2/lib/python3.12/site-packages/tflite_support/flatbuffers/compat.py", line 19, in <module>
import imp
ModuleNotFoundError: No module named 'imp'
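A hedged note on the failure above: the stdlib `imp` module was removed in Python 3.12, and the traceback shows `tflite_support` still importing it, so one possible (untested) workaround is to run the export under Python 3.11 or earlier, for example guarded like this:
```python
import sys

# 'imp' was removed in Python 3.12; tflite_support (used for TFLite metadata)
# still imports it, so fail early with a clearer message on newer interpreters.
if sys.version_info >= (3, 12):
    raise RuntimeError(
        "tflite_support needs the removed 'imp' module; "
        "run this export in a Python 3.11 (or older) environment."
    )

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="edgetpu")
```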
### Environment
Ultralytics 8.3.63 🚀 Python-3.12.3 torch-2.5.1+cu124 CUDA:0 (NVIDIA GeForce RTX 3060, 11938MiB)
Setup complete ✅ (12 CPUs, 15.5 GB RAM, 360.8/467.9 GB disk)
os: ubuntu
python version: 3.12
package onnx2tf version: 1.26.3
### Minimal Reproducible Example
```python
from ultralytics import YOLO
model = YOLO('yolo11n.pt')  # load a pretrained model (recommended for training)
results = model.export(format='edgetpu')  # export the model to Edge TPU format
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2025-01-19T11:22:22Z
|
2025-01-20T15:54:53Z
|
https://github.com/ultralytics/ultralytics/issues/18758
|
[
"bug",
"exports"
] |
Lorsu
| 3
|
graphql-python/graphene-django
|
django
| 1,515
|
Documentation Mismatch for Testing API Calls with Django
|
**What is the current behavior?**
The documentation for testing API calls with Django on the official website of [Graphene-Django](https://docs.graphene-python.org/projects/django/en/latest/testing/) shows incorrect usage of a parameter named **op_name** in the code examples for unit tests and pytest integration. However, upon inspecting the corresponding documentation in the [GitHub repository](https://github.com/graphql-python/graphene-django/blob/main/docs/testing.rst), the correct parameter operation_name is used.
**Steps to Reproduce**
1. Visit the [Graphene-Django](https://docs.graphene-python.org/projects/django/en/latest/testing/) documentation website's section on testing API calls with Django.
2. Observe the use of op_name in the example code blocks.
3. Compare with the content in the [testing.rst](https://github.com/graphql-python/graphene-django/blob/main/docs/testing.rst) file in the docs folder of the Graphene-Django GitHub repository, where operation_name is correctly used.
**Expected Behavior**
The online documentation should reflect the same parameter name, operation_name, as found in the GitHub repository documentation, ensuring consistency and correctness for developers relying on these docs for implementing tests.
**Motivation / Use Case for Changing the Behavior**
Ensuring the documentation is accurate and consistent across all platforms is crucial for developer experience and adoption. Incorrect documentation can lead to confusion and errors in implementation, especially for new users of Graphene-Django.
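For reference, a minimal sketch of the corrected usage (per the repository's testing.rst), assuming a recent graphene-django where `GraphQLTestCase.query` accepts `operation_name`, intended to run inside a Django project's test suite:
```python
from graphene_django.utils.testing import GraphQLTestCase


class PetQueryTests(GraphQLTestCase):
    def test_pets_query(self):
        response = self.query(
            """
            query pets {
                pets {
                    id
                    name
                }
            }
            """,
            operation_name="pets",  # the website docs show the outdated 'op_name'
        )
        self.assertResponseNoErrors(response)
```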
|
open
|
2024-04-06T05:32:32Z
|
2024-07-03T18:44:00Z
|
https://github.com/graphql-python/graphene-django/issues/1515
|
[
"🐛bug"
] |
hamza-m-farooqi
| 1
|
FlareSolverr/FlareSolverr
|
api
| 1,288
|
[yggtorrent] (testing) Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:3.3.21
- Last working FlareSolverr version:3.3.21
- Operating system: Linux-6.1.79-Unraid-x86_64-with-glibc2.31
- Are you using Docker: yes
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one:
- URL to test this issue:
https://www.ygg.re/
```
### Description
Hello guys, I hope you are having a nice day. I wanted to thank you all very much for your work.
Regarding the issue: I'm having the same problem we had a few days ago with yggtorrent.
I already applied the patch published here: https://github.com/FlareSolverr/FlareSolverr/issues/1253
However, it worked for a few days and stopped working today.
### Logged Error Messages
```text
2024-07-25 18:49:37 INFO ReqId 23039076726592 FlareSolverr 3.3.21
2024-07-25 18:49:37 DEBUG ReqId 23039076726592 Debug log enabled
2024-07-25 18:49:37 INFO ReqId 23039076726592 Testing web browser installation...
2024-07-25 18:49:37 INFO ReqId 23039076726592 Platform: Linux-6.1.79-Unraid-x86_64-with-glibc2.31
2024-07-25 18:49:37 INFO ReqId 23039076726592 Chrome / Chromium path: /usr/bin/chromium
2024-07-25 18:49:41 INFO ReqId 23039076726592 Chrome / Chromium major version: 120
2024-07-25 18:49:41 INFO ReqId 23039076726592 Launching web browser...
2024-07-25 18:49:41 DEBUG ReqId 23039076726592 Launching web browser...
2024-07-25 18:49:42 DEBUG ReqId 23039076726592 Started executable: `/app/chromedriver` in a child process with pid: 32
2024-07-25 18:49:43 INFO ReqId 23039076726592 FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
2024-07-25 18:49:43 INFO ReqId 23039076726592 Test successful!
2024-07-25 18:49:43 INFO ReqId 23039076726592 Serving on http://0.0.0.0:8191
2024-07-25 18:49:51 INFO ReqId 23039035483904 Incoming request => POST /v1 body: {'maxTimeout': 55000, 'cmd': 'request.get', 'url': 'https://www.ygg.re/engine/search?do=search&order=desc&sort=publish_date&category=all'}
2024-07-25 18:49:51 DEBUG ReqId 23039035483904 Launching web browser...
2024-07-25 18:49:51 DEBUG ReqId 23039035483904 Started executable: `/app/chromedriver` in a child process with pid: 177
2024-07-25 18:49:51 DEBUG ReqId 23039035483904 New instance of webdriver has been created to perform the request
2024-07-25 18:49:51 DEBUG ReqId 23039027078912 Navigating to... https://www.ygg.re/engine/search?do=search&order=desc&sort=publish_date&category=all
2024-07-25 18:49:54 INFO ReqId 23039027078912 Challenge detected. Title found: Just a moment...
2024-07-25 18:49:54 DEBUG ReqId 23039027078912 Waiting for title (attempt 1): Just a moment...
2024-07-25 18:49:55 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:49:55 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:49:55 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:49:55 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:49:55 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:49:57 DEBUG ReqId 23039027078912 Waiting for title (attempt 2): Just a moment...
2024-07-25 18:49:58 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:49:58 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:49:58 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:49:58 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:49:58 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:01 DEBUG ReqId 23039027078912 Waiting for title (attempt 3): Just a moment...
2024-07-25 18:50:02 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:02 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:02 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:02 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:02 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:04 DEBUG ReqId 23039027078912 Waiting for title (attempt 4): Just a moment...
2024-07-25 18:50:05 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:05 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:05 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:05 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:05 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:07 DEBUG ReqId 23039027078912 Waiting for title (attempt 5): Just a moment...
2024-07-25 18:50:08 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:08 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:08 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:08 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:08 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:10 DEBUG ReqId 23039027078912 Waiting for title (attempt 6): Just a moment...
2024-07-25 18:50:11 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:11 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:11 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:11 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:11 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:13 DEBUG ReqId 23039027078912 Waiting for title (attempt 7): Just a moment...
2024-07-25 18:50:14 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:14 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:14 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:14 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:14 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:16 DEBUG ReqId 23039027078912 Waiting for title (attempt 8): Just a moment...
2024-07-25 18:50:17 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:17 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:17 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:17 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:17 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:19 DEBUG ReqId 23039027078912 Waiting for title (attempt 9): Just a moment...
2024-07-25 18:50:20 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:20 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:20 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:20 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:20 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:22 DEBUG ReqId 23039027078912 Waiting for title (attempt 10): Just a moment...
2024-07-25 18:50:23 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:23 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:23 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:23 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:23 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:25 DEBUG ReqId 23039027078912 Waiting for title (attempt 11): Just a moment...
2024-07-25 18:50:26 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:26 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:26 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:26 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:26 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:28 DEBUG ReqId 23039027078912 Waiting for title (attempt 12): Just a moment...
2024-07-25 18:50:29 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:29 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:29 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:29 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:29 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:31 DEBUG ReqId 23039027078912 Waiting for title (attempt 13): Just a moment...
2024-07-25 18:50:32 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:32 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:32 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:32 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:32 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:34 DEBUG ReqId 23039027078912 Waiting for title (attempt 14): Just a moment...
2024-07-25 18:50:35 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:35 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:35 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:35 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:35 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:37 DEBUG ReqId 23039027078912 Waiting for title (attempt 15): Just a moment...
2024-07-25 18:50:38 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:38 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:38 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:38 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:38 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:40 DEBUG ReqId 23039027078912 Waiting for title (attempt 16): Just a moment...
2024-07-25 18:50:41 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:41 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:41 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:41 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:41 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:43 DEBUG ReqId 23039027078912 Waiting for title (attempt 17): Just a moment...
2024-07-25 18:50:44 DEBUG ReqId 23039027078912 Timeout waiting for selector
2024-07-25 18:50:44 DEBUG ReqId 23039027078912 Try to find the Cloudflare verify checkbox...
2024-07-25 18:50:44 DEBUG ReqId 23039027078912 Cloudflare verify checkbox not found on the page.
2024-07-25 18:50:44 DEBUG ReqId 23039027078912 Try to find the Cloudflare 'Verify you are human' button...
2024-07-25 18:50:44 DEBUG ReqId 23039027078912 The Cloudflare 'Verify you are human' button not found on the page.
2024-07-25 18:50:46 DEBUG ReqId 23039027078912 Waiting for title (attempt 18): Just a moment...
2024-07-25 18:50:46 DEBUG ReqId 23039035483904 A used instance of webdriver has been destroyed
2024-07-25 18:50:46 ERROR ReqId 23039035483904 Error: Error solving the challenge. Timeout after 55.0 seconds.
2024-07-25 18:50:46 DEBUG ReqId 23039035483904 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 55.0 seconds.', 'startTimestamp': 1721933391173, 'endTimestamp': 1721933446914, 'version': '3.3.21'}
2024-07-25 18:50:46 INFO ReqId 23039035483904 Response in 55.741 s
2024-07-25 18:50:46 INFO ReqId 23039035483904 172.17.0.1 POST http://10.1.10.240:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_
|
closed
|
2024-07-25T18:54:10Z
|
2024-07-25T20:19:26Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/1288
|
[
"duplicate"
] |
touzenesmy
| 4
|
PaddlePaddle/PaddleHub
|
nlp
| 2,254
|
AttributeError: module 'paddle' has no attribute '__version__'
|
Welcome, and thank you very much for reporting a PaddleHub usage issue and for your contribution to PaddleHub!
When posting your issue, please also provide the following information:
- Version and environment information
1) PaddleHub and PaddlePaddle versions: please provide your PaddleHub and PaddlePaddle version numbers, e.g. PaddleHub 1.4.1, PaddlePaddle 1.6.2
2) System environment: please describe the system type (e.g. Linux/Windows/MacOS) and the Python version
- Reproduction information: if reporting an error, please provide the reproduction environment and steps
|
closed
|
2023-05-15T06:40:12Z
|
2023-09-15T08:45:13Z
|
https://github.com/PaddlePaddle/PaddleHub/issues/2254
|
[] |
HuangXinzhe
| 3
|
Kanaries/pygwalker
|
plotly
| 252
|
When deploying the Streamlit application, the graph is not visible in the browser; the API calls go to st_core, which is not present in the directory
|
closed
|
2023-09-29T02:17:29Z
|
2023-10-17T11:37:29Z
|
https://github.com/Kanaries/pygwalker/issues/252
|
[
"bug",
"fixed but needs feedback"
] |
JeevankumarDharmalingam
| 13
|
|
geex-arts/django-jet
|
django
| 289
|
Page not found (404)
|
Hi, I downloaded the project recently; when I try to run it and browse to the dashboard, I get the following issue.
Using the URLconf defined in jet.tests.urls, Django tried these URL patterns, in this order:
url: http://127.0.0.1:8000/admin/jet
^jet/
^jet/dashboard/
^admin/doc/
^admin/ ^$ [name='index']
^admin/ ^login/$ [name='login']
^admin/ ^logout/$ [name='logout']
^admin/ ^password_change/$ [name='password_change']
^admin/ ^password_change/done/$ [name='password_change_done']
^admin/ ^jsi18n/$ [name='jsi18n']
^admin/ ^r/(?P<content_type_id>\d+)/(?P<object_id>.+)/$ [name='view_on_site']
^admin/ ^auth/user/
^admin/ ^tests/testmodel/
^admin/ ^auth/group/
^admin/ ^sites/site/
^admin/ ^tests/relatedtotestmodel/
^admin/ ^(?P<app_label>auth|tests|sites)/$ [name='app_list']
The current path, admin/jet, didn't match any of these.
Can somebody help me, please?
|
open
|
2018-01-31T13:36:45Z
|
2019-01-24T20:31:13Z
|
https://github.com/geex-arts/django-jet/issues/289
|
[] |
JorgeSevilla
| 10
|
ultralytics/ultralytics
|
python
| 19,292
|
Colab default setting could not convert to tflite
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
A few weeks ago I could convert yolo11n.pt to tflite, but now the Colab default Python version is 3.11.11.
With this version, the tflite model cannot be converted by onnx2tf.
How can I fix it?

```
from ultralytics import YOLO
model = YOLO('yolo11n.pt')
model.export(format='tflite', imgsz=192, int8=True)
model = YOLO('yolo11n_saved_model/yolo11n_full_integer_quant.tflite')
res = model.predict(imgsz=192)
res[0].plot(show=True)
```
```
Downloading https://ultralytics.com/assets/Arial.ttf to '/root/.config/Ultralytics/Arial.ttf'...
100%|██████████| 755k/755k [00:00<00:00, 116MB/s]
Scanning /content/datasets/coco8/labels/val... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4 [00:00<00:00, 117.40it/s]New cache created: /content/datasets/coco8/labels/val.cache
TensorFlow SavedModel: WARNING ⚠️ >300 images recommended for INT8 calibration, found 4 images.
TensorFlow SavedModel: starting TFLite export with onnx2tf 1.26.3...
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
ERROR: The trace log is below.
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 312, in print_wrapper_func
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 385, in inverted_operation_enable_disable_wrapper_func
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 55, in get_replacement_parameter_wrapper_func
func(*args, **kwargs)
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/ops/Mul.py", line 245, in make_node
correction_process_for_accuracy_errors(
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 5894, in correction_process_for_accuracy_errors
min_abs_err_perm_1: int = [idx for idx in range(len(validation_data_1.shape))]
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'shape'
ERROR: input_onnx_file_path: yolo11n.onnx
ERROR: onnx_op_name: wa/model.10/m/m.0/attn/Mul
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 312, in print_wrapper_func
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 385, in inverted_operation_enable_disable_wrapper_func
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 55, in get_replacement_parameter_wrapper_func
func(*args, **kwargs)
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/ops/Mul.py", line 245, in make_node
correction_process_for_accuracy_errors(
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 5894, in correction_process_for_accuracy_errors
min_abs_err_perm_1: int = [idx for idx in range(len(validation_data_1.shape))]
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'shape'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-6-da2eaec26985>", line 3, in <cell line: 0>
model.export(format='tflite', imgsz=192, int8=True)
File "/usr/local/lib/python3.11/dist-packages/ultralytics/engine/model.py", line 741, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/ultralytics/engine/exporter.py", line 418, in __call__
f[5], keras_model = self.export_saved_model()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/ultralytics/engine/exporter.py", line 175, in outer_func
f, model = inner_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/ultralytics/engine/exporter.py", line 1036, in export_saved_model
keras_model = onnx2tf.convert(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/onnx2tf.py", line 1141, in convert
op.make_node(
File "/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py", line 378, in print_wrapper_func
sys.exit(1)
SystemExit: 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py", line 1101, in get_records
return _fixed_getinnerframes(etb, number_of_lines_of_context, tb_offset)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py", line 248, in wrapped
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py", line 281, in _fixed_getinnerframes
records = fix_frame_records_filenames(inspect.getinnerframes(etb, context))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/inspect.py", line 1739, in getinnerframes
traceback_info = getframeinfo(tb, context)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/inspect.py", line 1671, in getframeinfo
lineno = frame.f_lineno
^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'f_lineno'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/onnx2tf/utils/common_functions.py](https://localhost:8080/#) in print_wrapper_func(*args, **kwargs)
311 try:
--> 312 result = func(*args, **kwargs)
313
18 frames
AttributeError: 'NoneType' object has no attribute 'shape'
During handling of the above exception, another exception occurred:
SystemExit Traceback (most recent call last)
[... skipping hidden 1 frame]
SystemExit: 1
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
[... skipping hidden 1 frame]
[/usr/local/lib/python3.11/dist-packages/IPython/core/ultratb.py](https://localhost:8080/#) in find_recursion(etype, value, records)
380 # first frame (from in to out) that looks different.
381 if not is_recursion_error(etype, value, records):
--> 382 return len(records), 0
383
384 # Select filename, lineno, func_name to track frames with
TypeError: object of type 'NoneType' has no len()
```
Thanks,
Kris
### Environment
colab default Environment
### Minimal Reproducible Example
https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2025-02-18T09:09:13Z
|
2025-02-21T01:35:52Z
|
https://github.com/ultralytics/ultralytics/issues/19292
|
[
"bug",
"non-reproducible",
"exports"
] |
kris-himax
| 5
|
koxudaxi/datamodel-code-generator
|
fastapi
| 2,232
|
[PydanticV2] Add parameter to use Python Regex Engine in order to support look-around
|
**Is your feature request related to a problem? Please describe.**
1. I wrote a valid JSON Schema with many properties.
2. Some properties' patterns make use of look-ahead and look-behind, which are supported by the JSON Schema specification.
   See: [JSON Schema supported patterns](https://json-schema.org/understanding-json-schema/reference/regular_expressions)
3. `datamodel-code-generator` generated the PydanticV2 models.
4. PydanticV2 doesn't support look-around, look-ahead and look-behind by default (see https://github.com/pydantic/pydantic/issues/7058)
```Python traceback
ImportError while loading [...]: in <module>
[...]
.venv/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py:205: in __new__
complete_model_class(
.venv/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py:552: in complete_model_class
cls.__pydantic_validator__ = create_schema_validator(
.venv/lib/python3.12/site-packages/pydantic/plugin/_schema_validator.py:50: in create_schema_validator
return SchemaValidator(schema, config)
E pydantic_core._pydantic_core.SchemaError: Error building "model" validator:
E SchemaError: Error building "model-fields" validator:
E SchemaError: Field "version":
E SchemaError: Error building "str" validator:
E SchemaError: regex parse error:
E ^((?!0[0-9])[0-9]+(\.(?!$)|)){2,4}$
E ^^^
E error: look-around, including look-ahead and look-behind, is not supported
```
**Describe the solution you'd like**
PydanticV2 supports look-around, look-ahead and look-behind using Python as regex engine: https://github.com/pydantic/pydantic/issues/7058#issuecomment-2156772918
I'd like to have a configuration parameter to use `python-re` as regex engine for Pydantic V2.
**Describe alternatives you've considered**
Workaround:
1. Create a custom BaseModel
```python
from pydantic import BaseModel, ConfigDict
class _BaseModel(BaseModel):
model_config = ConfigDict(regex_engine='python-re')
```
2. Use that class as BaseModel:
```sh
datamodel-codegen --base-model "module.with.basemodel._BaseModel"
```
EDIT:
Configuration used:
```ini
[tool.datamodel-codegen]
# Options
input = "<project>/data/schemas/"
input-file-type = "jsonschema"
output = "<project>/models/"
output-model-type = "pydantic_v2.BaseModel"
# Typing customization
base-class = "<project>.models._base_model._BaseModel"
enum-field-as-literal = "all"
use-annotated = true
use-standard-collections = true
use-union-operator = true
# Field customization
collapse-root-models = true
snake-case-field = true
use-field-description = true
# Model customization
disable-timestamp = true
enable-faux-immutability = true
target-python-version = "3.12"
use-schema-description = true
# OpenAPI-only options
#
# We may not want to use these options as we are not generating from OpenAPI schemas
# but this is a workaround to avoid `type | None` in when we have a default value.
#
# The author of the tool doesn't know why he flagged this option as OpenAPI only.
# Reference: https://github.com/koxudaxi/datamodel-code-generator/issues/1441
strict-nullable = true
```
|
open
|
2024-12-17T15:39:44Z
|
2025-02-06T20:06:51Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/2232
|
[
"bug",
"help wanted"
] |
ilovelinux
| 4
|
deezer/spleeter
|
deep-learning
| 88
|
[Bug] Colab Runtime Disconnected
|
<!-- PLEASE READ THIS CAREFULLY :
- Any issue which does not respect following template or lack of information will be considered as invalid and automatically closed
- First check FAQ from wiki to see if your problem is not already known
-->
## Description
I was having trouble installing it for local use, so I decided to try the Colab link.
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Opened Colab environment
2. Replaced the wget command with
```
from google.colab import files
uploaded = files.upload()
```
3. Ran all steps in order
4. Uploaded a local mp3 file
5. Ran the rest of the steps
6. Got a successful message from the actual separate function, and files exist in the output directory
7. Tried to listen to the output file and got a "runtime disconnected" error
## Output
`!spleeter separate -i keenanvskel.mp3 -o output/`
Output from the split function
```bash
INFO:spleeter:Loading audio b'keenanvskel.mp3' from 0.0 to 600.0
INFO:spleeter:Audio data loaded successfully
INFO:spleeter:File output/keenanvskel/vocals.wav written
INFO:spleeter:File output/keenanvskel/accompaniment.wav written
```
```
!ls output/keenanvskel
accompaniment.wav vocals.wav
```
Then I ran this
`Audio('output/keenanvskel/vocals.wav')`
And got this response
`Runtime Disconnected`
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | My laptop is running MacOS, but using Colab |
| Installation type | pip |
| RAM available | Got the green checkmark for both RAM and Disk in the Colab environment |
| Hardware spec | unsure |
## Additional context
Not sure if this is an issue with spleeter or with Colab. I love this project conceptually, so I was excited to try it out. I saw some other posts online about runtime disconnect errors in Colab, but everything I found seems to occur only when it has been running for 15 minutes or more. This whole process took less than 5 minutes, and I've tried multiple times and continually get the same error.
If this is not the correct place for this type of error, or if I am doing something wrong, feel free to close this bug and let me know what next steps I might take.
Will also attach some screenshots of the environment.
<img width="1273" alt="Screen Shot 2019-11-13 at 6 42 28 PM" src="https://user-images.githubusercontent.com/11081954/68817684-0e90bd00-0648-11ea-8f55-8228f81970d5.png">
<img width="1291" alt="Screen Shot 2019-11-13 at 6 42 41 PM" src="https://user-images.githubusercontent.com/11081954/68817685-0e90bd00-0648-11ea-8c1b-c6a94dafd747.png">
<img width="1277" alt="Screen Shot 2019-11-13 at 6 42 54 PM" src="https://user-images.githubusercontent.com/11081954/68817686-0e90bd00-0648-11ea-8738-3bec1b194081.png">
<img width="1301" alt="Screen Shot 2019-11-13 at 6 43 22 PM" src="https://user-images.githubusercontent.com/11081954/68817688-13ee0780-0648-11ea-8a8e-c468bd32bf79.png">
|
closed
|
2019-11-14T01:02:26Z
|
2024-03-19T16:51:17Z
|
https://github.com/deezer/spleeter/issues/88
|
[
"bug",
"invalid",
"wontfix"
] |
tiwonku
| 3
|
proplot-dev/proplot
|
data-visualization
| 332
|
ticklabels of log scale axis should be 'log' by default?
|
<!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
I feel it might be more natural to use 'log' for ticklabels when log scale is used.
### Steps to reproduce
```python
import proplot as pplt
fig = pplt.figure()
ax = fig.subplot(xlabel='x axis', ylabel='y axis')
ax.format(yscale='log', ylim=(1e-5, 1e5))
```
**Expected behavior**: [What you expected to happen]

**Actual behavior**: [What actually happened]

### Steps for expected behavior
I can fix it by hand with `yticklabels='log'`
```python
import proplot as pplt
fig = pplt.figure()
ax = fig.subplot(xlabel='x axis', ylabel='y axis')
ax.format(yscale='log', ylim=(1e-5, 1e5), yticklabels='log')
```
### Proplot version
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)` here.
3.4.3
0.9.5
|
open
|
2022-01-29T16:56:23Z
|
2022-01-29T18:41:10Z
|
https://github.com/proplot-dev/proplot/issues/332
|
[
"enhancement"
] |
syrte
| 6
|
flasgger/flasgger
|
flask
| 512
|
Flasgger Not Working
|
I tried to use the colors.py example app, but it isn't working


It shows an infinite loader

On checking http://localhost:5000/spec, I am seeing the following error

After googling, I tried fixing the error by switching to yaml.safe_load

I then get stuck at the following screen

I tried another example from https://github.com/flasgger/flasgger/blob/master/examples/restful.py
In this one, the swag_from import is not available in flasgger; after removing it, I see the following empty Swagger screen.

But the spec file has all the required info

I also tried using this example https://github.com/flasgger/flasgger/blob/master/examples/marshmallow_apispec.py, but the imports don't seem to be available in flasgger

|
open
|
2021-12-16T12:24:48Z
|
2021-12-16T12:37:16Z
|
https://github.com/flasgger/flasgger/issues/512
|
[] |
akhilgargjosh
| 0
|
noirbizarre/flask-restplus
|
api
| 211
|
Unhelpful error when using validation and request has no Content-Type header
|
When no `content-type='application/json'` header is present, validation fails with `"": "None is not of type u'object'"`
Some examples:
```
~> curl http://localhost:5000/hello -X POST -d '{"name": "test"}'
{
"errors": {
"": "None is not of type u'object'"
},
"message": "Input payload validation failed"
}
~> curl http://localhost:5000/hello -X POST
{
"errors": {
"": "None is not of type u'object'"
},
"message": "Input payload validation failed"
}
```
And a successful request:
```
~> curl http://localhost:5000/hello -X POST -d '{"name": "test"}' -H "Content-Type: application/json"
{
"hello": "world"
}
```
This is the snippet I used, barely changed from the tutorials:
``` python
from flask import Flask
from flask_restplus import Resource, Api, fields
app = Flask(__name__)
api = Api(app)
resource_fields = api.model('Resource', {
'name': fields.String(required=True),
})
@api.route('/hello')
class HelloWorld(Resource):
@api.expect(resource_fields, validate=True)
def post(self):
return {'hello': 'world'}
if __name__ == '__main__':
app.run(debug=True)
```
|
open
|
2016-10-25T16:53:26Z
|
2019-08-16T13:31:35Z
|
https://github.com/noirbizarre/flask-restplus/issues/211
|
[
"bug"
] |
nfvs
| 3
|
vastsa/FileCodeBox
|
fastapi
| 198
|
For some reason I cannot log in to the admin backend
|
![Uploading Snipaste_2024-08-22_16-20-16.png…]()
|
closed
|
2024-08-22T08:20:58Z
|
2024-08-22T08:21:14Z
|
https://github.com/vastsa/FileCodeBox/issues/198
|
[] |
qiuyu2547
| 0
|
OFA-Sys/Chinese-CLIP
|
computer-vision
| 192
|
How can I use this tool for image captioning tasks?
|
closed
|
2023-08-28T07:33:08Z
|
2023-08-28T13:01:42Z
|
https://github.com/OFA-Sys/Chinese-CLIP/issues/192
|
[] |
yazheng0307
| 2
|
|
sktime/sktime
|
scikit-learn
| 7,134
|
[ENH] `polars` schema checks - address performance warnings
|
The current schema checks for lazy `polars` based data types raise performance warnings, e.g.,
```
sktime/datatypes/tests/test_check.py::test_check_metadata_inference[Table-polars_lazy_table-fixture:1]
/home/runner/work/sktime/sktime/sktime/datatypes/_adapter/polars.py:234: PerformanceWarning: Determining the width of a LazyFrame requires resolving its schema, which is a potentially expensive operation. Use `LazyFrame.collect_schema().len()` to get the width without this warning.
metadata["n_features"] = obj.width - len(index_cols)
```
These should be addressed.
The tests to execute to check whether these warnings persist are those in the `datatypes` module - these are automatically executed for a change in the impacted file, on remote CI.
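For illustration, a minimal sketch of the change the warning itself suggests, assuming a polars version where `LazyFrame.collect_schema()` is available:
```python
import polars as pl

lf = pl.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]}).lazy()
index_cols = ["a"]

# Emits the PerformanceWarning: resolving .width forces schema resolution.
# n_features = lf.width - len(index_cols)

# Form suggested by the warning text, which avoids it:
n_features = lf.collect_schema().len() - len(index_cols)
print(n_features)  # 2
```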
|
closed
|
2024-09-18T12:55:28Z
|
2024-10-06T19:35:47Z
|
https://github.com/sktime/sktime/issues/7134
|
[
"good first issue",
"module:datatypes",
"enhancement"
] |
fkiraly
| 0
|
chatanywhere/GPT_API_free
|
api
| 200
|
Paid API key returns a 429 error
|
**Describe the bug 描述bug**
The paid API key returns a 429 error.
**To Reproduce 复现方法**
It occurred after sending one message and then asking a follow-up question. I have not encountered this problem before.
**Screenshots 截图**

**Tools or Programming Language 使用的工具或编程语言**
Botgem
**Additional context 其他内容**
Add any other context about the problem here.
|
closed
|
2024-03-25T05:31:15Z
|
2024-03-25T19:30:00Z
|
https://github.com/chatanywhere/GPT_API_free/issues/200
|
[] |
Sanguine-00
| 1
|
open-mmlab/mmdetection
|
pytorch
| 11,411
|
infer error about visualizer
|
mmdetection/mmdet/visualization/local_visualizer.py", line 153, in _draw_instances
label_text = classes[
IndexError: tuple index out of range
https://github.com/open-mmlab/mmdetection/blob/44ebd17b145c2372c4b700bfb9cb20dbd28ab64a/mmdet/visualization/local_visualizer.py#L153-L154
|
open
|
2024-01-19T19:15:23Z
|
2024-01-20T04:53:14Z
|
https://github.com/open-mmlab/mmdetection/issues/11411
|
[] |
dongfeicui
| 1
|
peerchemist/finta
|
pandas
| 31
|
EMA calculation not matching with Tv output
|
Hi
Can you please check the EMA calculation? The output is close but does not match TV's output.
It would be better to test this against TV and realign the code.
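A hedged illustration (plain pandas, not finta's code) of one common source of small EMA discrepancies, the smoothing convention; charting tools typically use the recursive form:
```python
import pandas as pd

close = pd.Series([10.0, 10.5, 11.0, 10.8, 11.2, 11.5])
period = 3

# adjust=True weights the full history; adjust=False is the recursive EMA that
# most charting tools use. The two converge but differ early in the series,
# which can explain "close but not matching" values.
ema_weighted = close.ewm(span=period, adjust=True).mean()
ema_recursive = close.ewm(span=period, adjust=False).mean()
print(pd.concat({"adjust=True": ema_weighted, "adjust=False": ema_recursive}, axis=1))
```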
|
closed
|
2019-06-08T08:41:44Z
|
2020-01-19T12:56:15Z
|
https://github.com/peerchemist/finta/issues/31
|
[] |
mbmarx
| 2
|
lepture/authlib
|
flask
| 408
|
authorize_access_token broken POST?
|
Hello, I'm really not sure if this is a bug, but when I try to use authorize_access_token there are no params in the POST body; as I understand it, they are sent in a new TCP packet (see the tcpdump picture)?
```
oauth = OAuth()
oauth.register(
'zoom',
client_id=os.environ['ZOOM_CLIENT_ID'],
client_secret=os.environ['ZOOM_CLIENT_SECRET'],
access_token_url='http://zoom.us/oauth/token',
refresh_token_url='http://zoom.us/oauth/token',
authorize_url='http://zoom.us/oauth/authorize',
client_kwargs={'token_endpoint_auth_method': 'client_secret_post'},
)
def zoom_login(request):
redirect_uri = request.build_absolute_uri('/zoom_authorize')
return oauth.zoom.authorize_redirect(request, redirect_uri)
def zoom_authorize(request):
print(request)
token = oauth.zoom.authorize_access_token(request, body='aaaa=bbbbb')
#resp = oauth.zoom.get('user', token=token)
#resp.raise_for_status()
#profile = resp.json()
# do something with the token and profile
#return profile
```
I've started tcpdump and can see this:

I think it should be something like this?
```
POST /oauth/token HTTP/1.1
Host: zoom.us
User-Agent: Authlib/0.15.5 (+https://authlib.org/)
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded;charset=UTF-8
Content-Length: 216
aaa=bbb&client_id=xaaaaa&client_secret=aaaaaaaa&redirect_uri=ffsadfasd
```
|
closed
|
2021-12-04T07:35:07Z
|
2022-03-15T10:28:31Z
|
https://github.com/lepture/authlib/issues/408
|
[
"bug"
] |
akam-it
| 2
|
flasgger/flasgger
|
flask
| 497
|
TypeError: issubclass() arg 2 must be a class or tuple of classes (when apispec not installed)
|
I defined a Schema using marshmallow; however, I was not able to see the Swagger UI (having not installed apispec), and ended up with a very vague error:
```
File "C:\gryton\projects\turbo_auta_z_usa\venv\Lib\site-packages\flasgger\marshmallow_apispec.py", line 118, in convert_schemas
    if inspect.isclass(v) and issubclass(v, Schema):
TypeError: issubclass() arg 2 must be a class or tuple of classes
```
Debugging the issue, it seems this case was even considered before, as in marshmallow_apispec:
```python
if Schema is None:
raise RuntimeError('Please install marshmallow and apispec')
```
But this check happens too late, since `issubclass(v, Schema)` is evaluated first.
I would move this check to the start of `convert_schemas`, so that the required libraries are verified once per function call and a user who makes the same mistake as me receives a more meaningful error.
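A hedged sketch of the reordering proposed above (illustrative only; the surrounding scaffold mirrors this issue rather than flasgger's actual module layout):
```python
import inspect

try:
    from marshmallow import Schema
except ImportError:  # optional dependency not installed
    Schema = None


def convert_schemas(d, definitions=None):
    # Proposed: check the optional dependencies once, at the top, so the user
    # gets a clear RuntimeError instead of issubclass() failing on Schema=None.
    if Schema is None:
        raise RuntimeError("Please install marshmallow and apispec")
    definitions = definitions if definitions is not None else {}
    for k, v in d.items():
        if inspect.isclass(v) and issubclass(v, Schema):
            definitions[v.__name__] = v
    return d, definitions
```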
|
open
|
2021-09-23T14:21:16Z
|
2021-09-23T14:21:16Z
|
https://github.com/flasgger/flasgger/issues/497
|
[] |
Gryton
| 0
|
serengil/deepface
|
machine-learning
| 897
|
Got an assertion error when installing deepface
|
I got an error like this when installing deepface:
```
#13 138.8 Building wheel for fire (setup.py): finished with status 'done'
#13 138.8 Created wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116951 sha256=f36f3a2c5b1987dda9e4c020d73b1617adb6c59ae81200fcf4920bd419c4a21f
#13 138.8 Stored in directory: /root/.cache/pip/wheels/90/d4/f7/9404e5db0116bd4d43e5666eaa3e70ab53723e1e3ea40c9a95
#13 138.8 Successfully built imface fire
#13 138.8 ERROR: Exception:
#13 138.8 Traceback (most recent call last):
#13 138.8 File "/usr/lib/python3/dist-packages/pip/_internal/cli/base_command.py", line 165, in exc_logging_wrapper
#13 138.8 status = run_func(*args)
#13 138.8 File "/usr/lib/python3/dist-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
#13 138.8 return func(self, options, args)
#13 138.8 File "/usr/lib/python3/dist-packages/pip/_internal/commands/install.py", line 389, in run
#13 138.8 to_install = resolver.get_installation_order(requirement_set)
#13 138.8 File "/usr/lib/python3/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 188, in get_installation_order
#13 138.8 weights = get_topological_weights(
#13 138.8 File "/usr/lib/python3/dist-packages/pip/_internal/resolution/resolvelib/resolver.py", line 276, in get_topological_weights
#13 138.8 assert len(weights) == expected_node_count
#13 138.8 AssertionError
```
|
closed
|
2023-11-29T05:12:49Z
|
2023-11-29T05:13:08Z
|
https://github.com/serengil/deepface/issues/897
|
[] |
ghost
| 0
|
tfranzel/drf-spectacular
|
rest-api
| 1,382
|
Inconsistency between [AllowAny] and [] with permission_classes in schema generation
|
**Describe the bug**
When using AllowAny or IsAuthenticatedOrReadOnly, opening the openapi doc using swagger looks like this:

**To Reproduce**
Create an APIView or an @api_view that overrides the permission_classes with [AllowAny]
**Expected behavior**
Instead, the swagger page should look like this for those endpoints, because they don't require any authentication:

**Workaround**
In Django, omitting AllowAny when it's the only permission class doesn't change any behavior, but in drf-spectacular it fixes this issue and properly displays the open-lock icon in Swagger.
**Insight**
I think the issue comes from these lines:
https://github.com/tfranzel/drf-spectacular/blob/205f898ff38f3470bf877f709fcf7e582c83c14d/drf_spectacular/openapi.py#L365-L368
where adding {} as the schemes has the effect of requiring auth in Swagger
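A minimal sketch of a view that reproduces this (assuming a standard DRF setup; the view itself is hypothetical):
```python
from rest_framework.permissions import AllowAny
from rest_framework.response import Response
from rest_framework.views import APIView


class PublicPingView(APIView):
    # With [AllowAny], the generated schema still lists security schemes,
    # so Swagger shows the closed-lock icon; with permission_classes = []
    # the endpoint is rendered with the open lock.
    permission_classes = [AllowAny]

    def get(self, request):
        return Response({"ok": True})
```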
|
open
|
2025-02-17T12:46:12Z
|
2025-02-17T17:50:07Z
|
https://github.com/tfranzel/drf-spectacular/issues/1382
|
[] |
ldeluigi
| 6
|
Lightning-AI/LitServe
|
rest-api
| 195
|
move `wrap_litserve_start` to utils
|
> I would rather reserve `conftest` for fixtures and this (seems to be) general functionality move to another utils module
_Originally posted by @Borda in https://github.com/Lightning-AI/LitServe/pull/190#discussion_r1705957686_
|
closed
|
2024-08-06T18:39:12Z
|
2024-08-12T10:49:09Z
|
https://github.com/Lightning-AI/LitServe/issues/195
|
[
"good first issue",
"help wanted"
] |
aniketmaurya
| 5
|
proplot-dev/proplot
|
data-visualization
| 301
|
Colormaps Docs Error
|
<!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
The latest docs have an error in the Colormaps section: https://proplot.readthedocs.io/en/latest/colormaps.html#
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
import proplot as pplt
fig, axs = pplt.show_cmaps(rasterize=True)
```
**Expected behavior**: [What you expected to happen]
List of included colormaps
**Actual behavior**: [What actually happened]
```TypeError: '>=' not supported between instances of 'list' and 'float'```
|
closed
|
2021-11-06T19:09:17Z
|
2021-11-09T18:19:22Z
|
https://github.com/proplot-dev/proplot/issues/301
|
[
"bug"
] |
pratiman-91
| 2
|
getsentry/sentry
|
django
| 87,159
|
Solve bolding of starred projects
|
closed
|
2025-03-17T10:12:00Z
|
2025-03-17T12:52:06Z
|
https://github.com/getsentry/sentry/issues/87159
|
[] |
matejminar
| 0
|
|
Avaiga/taipy
|
data-visualization
| 2,388
|
Add Taipy-specific and hidden parameters to navigate()
|
### Description
We had a request to allow a Taipy application to query another Taipy application page, potentially served by another Taipy GUI application.
This can be done using the *params* parameter to `navigate()`, but then the URL is extended with a query string that the requestee wants to keep hidden, for security reasons.
### Solution Proposed
`navigate()` will gain a *hidden_params* parameter, with the same semantics as *params*: a dictionary that holds string keys and serializable values.
The `on_navigate` callback will receive an additional *hidden_params* parameter, set to a dictionary that reflects what was provided in the call to `navigate()`.
### Impact of Solution
Those parameters are encoded in the GET query header, so no POST is required.
In order not to unintentionally overwrite the original headers configuration, each hidden parameter is propagated with an internal (and undocumented) prefix that identifies it before it is encoded in the query header.
Taipy GUI will decipher all of that before `on_navigate` is triggered.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
open
|
2025-01-09T13:55:18Z
|
2025-01-31T13:27:58Z
|
https://github.com/Avaiga/taipy/issues/2388
|
[
"🖰 GUI",
"🟧 Priority: High",
"✨New feature",
"📝Release Notes"
] |
FabienLelaquais
| 1
|
marimo-team/marimo
|
data-visualization
| 3,869
|
Code Is Repeated after Minimizing Marimo Interface
|
### Describe the bug
Whenever I am multitasking with marimo and other tools, once I go back to marimo after minimizing it, the code in all cells is repeated multiple times, making marimo not very friendly to use.
I tried to upgrade to a newer version; the same issue persists.
### Environment
<details>
```
{
"marimo": "0.11.7",
"OS": "Windows",
"OS Version": "11",
"Processor": "Intel64 Family 6 Model 186 Stepping 2, GenuineIntel",
"Python Version": "3.12.4",
"Binaries": {
"Browser": "132.0.6834.196",
"Node": "--"
},
"Dependencies": {
"click": "8.1.7",
"docutils": "0.18.1",
"itsdangerous": "2.2.0",
"jedi": "0.18.1",
"markdown": "3.7",
"narwhals": "1.16.0",
"packaging": "23.2",
"psutil": "5.9.0",
"pygments": "2.15.1",
"pymdown-extensions": "10.12",
"pyyaml": "6.0.1",
"ruff": "0.8.2",
"starlette": "0.41.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.11.0",
"uvicorn": "0.32.1",
"websockets": "14.1"
},
"Optional Dependencies": {
"altair": "5.0.1",
"anywidget": "0.9.13",
"duckdb": "1.1.3",
"pandas": "2.2.2",
"polars": "1.22.0",
"pyarrow": "14.0.2"
},
"Experimental Flags": {
"multi_column": true,
"tracing": true,
"rtc": true
}
}
```
</details>
### Code to reproduce
import pandas as pd
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
df = pd.read_excel(r"C:\Users\ASUS\Desktop\...Pigging Aug\TARC Frac Job Analysis\Lone Star All.xlsx")
|
closed
|
2025-02-21T10:04:25Z
|
2025-02-25T02:00:27Z
|
https://github.com/marimo-team/marimo/issues/3869
|
[
"bug"
] |
Nashat90
| 1
|
huggingface/transformers
|
python
| 36,769
|
Add Audio inputs available in apply_chat_template
|
### Feature request
Hello, I would like to request support for audio processing in the apply_chat_template function.
### Motivation
With the rapid advancement of multimodal models, audio processing has become increasingly crucial alongside image and text inputs. Models like Qwen2-Audio, Phi-4-multimodal, and various models now support audio understanding, making this feature essential for modern AI applications.
Supporting audio inputs would enable:
```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "audio", "audio": "https://huggingface.co/microsoft/Phi-4-multimodal-instruct/resolve/main/examples/what_is_shown_in_this_image.wav"},
            {"type": "text", "text": "Follow the instruction in the audio with this image."}
        ]
    }
]
```
This enhancement would significantly expand the capabilities of the library to handle the full spectrum of multimodal inputs that state-of-the-art models now support, keeping the transformers library at the forefront of multimodal AI development.
### Your contribution
I've tested this implementation with several multimodal models and it works well for processing audio inputs alongside images and text. I'd be happy to contribute this code to the repository if there's interest.
|
open
|
2025-03-17T17:05:46Z
|
2025-03-17T20:41:41Z
|
https://github.com/huggingface/transformers/issues/36769
|
[
"Feature request"
] |
junnei
| 1
|
mlfoundations/open_clip
|
computer-vision
| 761
|
Gradient accumulation may requires scaling before backward
|
In the function train_one_epoch, in the file [src/training/train.py](https://github.com/mlfoundations/open_clip/blob/main/src/training/train.py#L159), from line 156 to 162, as shown below:
```python
losses = loss(**inputs, **inputs_no_accum, output_dict=True)
del inputs
del inputs_no_accum
total_loss = sum(losses.values())
losses["loss"] = total_loss
backward(total_loss, scaler)
```
Shouldn't we take the average of loss for gradient accumulation before calling backward()?
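For reference, the scaling the question has in mind would look roughly like the sketch below. `args.accum_freq` is assumed to be the number of accumulated micro-batches; whether the division is actually needed depends on how the accumulated loss is computed over the cached features, so treat this purely as an illustration, not a proposed patch:
```python
losses = loss(**inputs, **inputs_no_accum, output_dict=True)
del inputs
del inputs_no_accum
# Illustration only: divide by the accumulation count so the summed gradients
# match the average over the effective (accumulated) batch.
total_loss = sum(losses.values()) / args.accum_freq
losses["loss"] = total_loss
backward(total_loss, scaler)
```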
|
open
|
2023-12-12T17:46:49Z
|
2023-12-22T01:25:45Z
|
https://github.com/mlfoundations/open_clip/issues/761
|
[] |
yonghanyu
| 1
|
widgetti/solara
|
jupyter
| 206
|
Add release notes
|
As the Solara package gains wider adoption, it becomes increasingly important to provide users with a convenient way to track changes and make informed decisions when selecting versions. Adding release notes would greatly benefit users.
|
open
|
2023-07-11T12:39:34Z
|
2023-10-02T17:01:31Z
|
https://github.com/widgetti/solara/issues/206
|
[] |
lp9052
| 3
|
3b1b/manim
|
python
| 2,125
|
No video is generated
|
Showing the following error
Manim Extension XTerm
Serves as a terminal for logging purpose.
Extension Version 0.2.13
MSV d:\All document\VS Code\Python>"manim" "d:\All document\VS Code\Python\new2.py" CreateCircle
Manim Community v0.18.0
Animation 0: Create(Circle): 0%| | 0/60 [00:00<?, ?it/s][rawvideo @ 0194dfa0]Estimating duration from bitrate, this may be inaccurate
Input #0, rawvideo, from 'pipe:':
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0.0: Video: rawvideo, rgba, 1920x1080, 60 tbr, 60 tbn, 60 tbc
Unknown encoder 'libx264'
+--------------------- Traceback (most recent call last) ---------------------+
| C:\tools\Manim\Lib\site-packages\manim\cli\render\commands.py:115 in render |
| |
| 112 try: |
| 113 with tempconfig({}): |
| 114 scene = SceneClass() |
| > 115 scene.render() |
| 116 except Exception: |
| 117 error_console.print_exception() |
| 118 sys.exit(1) |
| |
| C:\tools\Manim\Lib\site-packages\manim\scene\scene.py:223 in render |
| |
| 220 """ |
| 221 self.setup() |
| 222 try: |
| > 223 self.construct() |
| 224 except EndSceneEarlyException: |
| 225 pass |
| 226 except RerunSceneException as e: |
| |
| d:\All document\VS Code\Python\new2.py:6 in construct |
| |
| 3 def construct(self): |
| 4 circle = Circle() # create a circle |
| 5 circle.set_fill(PINK, opacity=0.5) # set the color and transpa |
| > 6 self.play(Create(circle)) # show the circle on screen |
| 7 self.wait(1) |
| 8 |
| |
| C:\tools\Manim\Lib\site-packages\manim\scene\scene.py:1080 in play |
| |
| 1077 return |
| 1078 |
| 1079 start_time = self.renderer.time |
| > 1080 self.renderer.play(self, *args, **kwargs) |
| 1081 run_time = self.renderer.time - start_time |
| 1082 if subcaption: |
| 1083 if subcaption_duration is None: |
| |
| C:\tools\Manim\Lib\site-packages\manim\renderer\cairo_renderer.py:104 in |
| play |
| |
| 101 # In this case, as there is only a wait, it will be the l |
| 102 self.freeze_current_frame(scene.duration) |
| 103 else: |
| > 104 scene.play_internal() |
| 105 self.file_writer.end_animation(not self.skip_animations) |
| 106 |
| 107 self.num_plays += 1 |
| |
| C:\tools\Manim\Lib\site-packages\manim\scene\scene.py:1245 in play_internal |
| |
| 1242 for t in self.time_progression: |
| 1243 self.update_to_time(t) |
| 1244 if not skip_rendering and not self.skip_animation_previe |
| > 1245 self.renderer.render(self, t, self.moving_mobjects) |
| 1246 if self.stop_condition is not None and self.stop_conditi |
| 1247 self.time_progression.close() |
| 1248 break |
| |
| C:\tools\Manim\Lib\site-packages\manim\renderer\cairo_renderer.py:150 in |
| render |
| |
| 147 |
| 148 def render(self, scene, time, moving_mobjects): |
| 149 self.update_frame(scene, moving_mobjects) |
| > 150 self.add_frame(self.get_frame()) |
| 151 |
| 152 def get_frame(self): |
| 153 """ |
| |
| C:\tools\Manim\Lib\site-packages\manim\renderer\cairo_renderer.py:180 in |
| add_frame |
| |
| 177 return |
| 178 self.time += num_frames * dt |
| 179 for _ in range(num_frames): |
| > 180 self.file_writer.write_frame(frame) |
| 181 |
| 182 def freeze_current_frame(self, duration: float): |
| 183 """Adds a static frame to the movie for a given duration. The |
| |
| C:\tools\Manim\Lib\site-packages\manim\scene\scene_file_writer.py:391 in |
| write_frame |
| |
| 388 elif config.renderer == RendererType.CAIRO: |
| 389 frame = frame_or_renderer |
| 390 if write_to_movie(): |
| > 391 self.writing_process.stdin.write(frame.tobytes()) |
| 392 if is_png_format() and not config["dry_run"]: |
| 393 self.output_image_from_array(frame) |
| 394 |
+-----------------------------------------------------------------------------+
OSError: [Errno 22] Invalid argument
[19512] Execution returned code=1 in 2.156 seconds returned signal null
|
closed
|
2024-04-26T19:50:07Z
|
2024-04-26T20:16:04Z
|
https://github.com/3b1b/manim/issues/2125
|
[] |
ghost
| 0
|
ultralytics/ultralytics
|
deep-learning
| 19,632
|
about precision --Yolov11-- from docker
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
# About precision
It's my first time using Ultralytics via Docker. I'm confused about the yolo11 weights provided by default in the Docker image: the validation results seem quite low:
root@71643adeada8:/ultralytics# yolo val detect data=coco.yaml device=0 batch=8
Ultralytics 8.3.86 🚀 Python-3.11.10 torch-2.5.1+cu124 CUDA:0 (NVIDIA GeForce RTX 4070 Ti SUPER, 16376MiB)
YOLO11n summary (fused): 100 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs
val: Scanning /datasets/coco/labels/val2017.cache... 4952 images, 48 backgrounds, 0 corrupt: 100%|██████████| 5000/5000 [00:00<?, ?it/s]
Class Images Instances Box(P R mAP50 mAP50-95): 100%|██████████| 625/625 [00:50<00:00, 12.41it/s]
all 5000 36335 0.652 0.505 0.549 0.393
person 2693 10777 0.759 0.676 0.753 0.524
bicycle 149 314 0.682 0.404 0.478 0.276
car 535 1918 0.661 0.533 0.585 0.376
motorcycle 159 367 0.768 0.605 0.677 0.445
airplane 97 143 0.803 0.797 0.867 0.69
bus 189 283 0.772 0.707 0.774 0.651
train 157 190 0.832 0.805 0.862 0.681
truck 250 414 0.586 0.415 0.477 0.323
boat 121 424 0.579 0.351 0.413 0.22
traffic light 191 634 0.646 0.347 0.416 0.218
fire hydrant 86 101 0.867 0.743 0.806 0.637
stop sign 69 75 0.712 0.613 0.662 0.601
parking meter 37 60 0.698 0.55 0.613 0.467
bench 235 411 0.575 0.28 0.325 0.217
### Additional
_No response_
|
open
|
2025-03-11T04:57:45Z
|
2025-03-11T05:15:52Z
|
https://github.com/ultralytics/ultralytics/issues/19632
|
[
"question",
"detect"
] |
deadwoodeatmoon
| 2
|
sigmavirus24/github3.py
|
rest-api
| 641
|
Strict option for branch protection
|
The [protected branches API](https://developer.github.com/v3/repos/branches/) now contains a new option called _strict_, which defines whether the branch that is being merged into a protected branch has to be rebased before. Since github3.py doesn't have this feature implemented yet, it is automatically set to _true_, which is quite limiting.
It would be very practical to implement this feature. This should be easy; I'll try to take a look into it and provide a pull request shortly.
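For context, a minimal sketch of the raw API call with the new option, using `requests` rather than github3.py; OWNER, REPO, BRANCH and TOKEN are placeholders, and the preview media type is the one the protected-branches API required at the time:
```python
import requests

payload = {
    "required_status_checks": {"strict": True, "contexts": ["continuous-integration"]},
    "enforce_admins": True,
    "required_pull_request_reviews": None,
    "restrictions": None,
}
resp = requests.put(
    "https://api.github.com/repos/OWNER/REPO/branches/BRANCH/protection",
    json=payload,
    headers={
        "Authorization": "token TOKEN",
        # Preview media type for the protected-branches API at the time of writing.
        "Accept": "application/vnd.github.loki-preview+json",
    },
)
resp.raise_for_status()
```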
##
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/38540587-strict-option-for-branch-protection?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github).
</bountysource-plugin>
|
closed
|
2016-10-20T14:37:30Z
|
2021-11-01T01:08:43Z
|
https://github.com/sigmavirus24/github3.py/issues/641
|
[] |
kristian-lesko
| 7
|
d2l-ai/d2l-en
|
tensorflow
| 2,308
|
Add chapter on diffusion models
|
GANs are the most advanced topic currently discussed in d2l.ai. However, diffusion models have taken the image generation mantle from GANs. Holding up GANs as the final chapter of the book and not discussing diffusion models at all feels like a clear indication of dated content. Additionally, fast.ai just announced that they will be adding diffusion models to their intro curriculum, as I imagine will many other intro courses. Want to make sure there's at least a discussion about this here if the content isn't already in progress or roadmapped.
|
open
|
2022-09-16T21:41:12Z
|
2023-05-15T13:48:32Z
|
https://github.com/d2l-ai/d2l-en/issues/2308
|
[
"feature request"
] |
dmarx
| 4
|
litestar-org/litestar
|
pydantic
| 3,601
|
Bug: Custom plugin with lifespan breaks channel startup hook
|
### Description
I wrote a simple plugin with a lifespan based on a context manager. When I add the custom plugin and the channels plugin, the channels plugin does not initialize properly. If I remove the lifespan from the custom plugin it works.
### URL to code causing the issue
_No response_
### MCVE
```python
from litestar import Litestar, post
from litestar.testing import TestClient
from litestar.channels import ChannelsPlugin
from litestar.channels.backends.memory import MemoryChannelsBackend
from litestar.plugins import InitPluginProtocol
from contextlib import asynccontextmanager
from collections.abc import AsyncGenerator
import logging


@post("/publish_something")
async def handler(channels: ChannelsPlugin) -> None:
    channels.publish("test", "my_new_channel")


@asynccontextmanager
async def some_lifecycle(app: "Litestar") -> "AsyncGenerator[None, None]":
    logging.info("some startup")
    try:
        yield
    finally:
        logging.info("some shutdown")


class SomePlugin(InitPluginProtocol):
    def on_app_init(self, app_config: "AppConfig") -> "AppConfig":
        # comment this out and the startup works
        app_config.lifespan = [some_lifecycle]
        return app_config


with TestClient(
    app=Litestar(
        debug=True,
        plugins=[
            ChannelsPlugin(backend=MemoryChannelsBackend(history=0), channels=[], arbitrary_channels_allowed=True),
            SomePlugin(),
        ],
        route_handlers=[handler],
    )
) as client:
    client.post("/publish_something")
```
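For reference, a hedged sketch of one possible workaround: append to the existing lifespan list rather than replacing it, so that hooks registered by other plugins (such as ChannelsPlugin) are preserved. Treat this as a guess at the cause, not a confirmed fix:
```python
class SomePlugin(InitPluginProtocol):
    def on_app_init(self, app_config: "AppConfig") -> "AppConfig":
        # Append instead of assigning a new list, so lifespans added by other
        # plugins are not discarded (assumption: ChannelsPlugin registers one).
        app_config.lifespan.append(some_lifecycle)
        return app_config
```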
### Steps to reproduce
```bash
1. Run the script
2. It errors out when hitting the publish_something route
```
### Screenshots
```bash
""
```
### Logs
```bash
INFO - 2024-06-26 12:15:33,244 - root - bug - some startup
ERROR - 2024-06-26 12:15:33,255 - litestar - config - Uncaught exception (connection_type=http, path=/publish_something):
Traceback (most recent call last):
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/channels/plugin.py", line 144, in publish
self._pub_queue.put_nowait((data, list(channels))) # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'put_nowait'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/_asgi/asgi_router.py", line 99, in __call__
await asgi_app(scope, receive, send)
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 80, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 132, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 152, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 200, in _get_response_data
else await route_handler.fn(**parsed_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/innovation-framework/bug.py", line 14, in handler
channels.publish("test", "my_new_channel")
File "/root/innovation-framework/.venv/lib/python3.12/site-packages/litestar/channels/plugin.py", line 146, in publish
raise RuntimeError("Plugin not yet initialized. Did you forget to call on_startup?") from e
RuntimeError: Plugin not yet initialized. Did you forget to call on_startup?
INFO - 2024-06-26 12:15:33,264 - httpx - _client - HTTP Request: POST http://testserver.local/publish_something "HTTP/1.1 500 Internal Server Error"
INFO - 2024-06-26 12:15:33,266 - root - bug - some shutdown
```
### Litestar Version
2.9.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
|
closed
|
2024-06-26T10:19:20Z
|
2025-03-20T15:54:48Z
|
https://github.com/litestar-org/litestar/issues/3601
|
[
"Bug :bug:"
] |
TheZwieback
| 1
|
cleanlab/cleanlab
|
data-science
| 400
|
Update links between tutorial notebook and example notebook
|
Update links between [tutorial notebook](https://github.com/cleanlab/cleanlab/blob/master/docs/source/tutorials/token_classification.ipynb) and [example notebook](https://github.com/cleanlab/examples/pull/7) so these properly point at each other (across all future versions of cleanlab).
|
closed
|
2022-09-06T00:45:17Z
|
2022-09-15T13:42:01Z
|
https://github.com/cleanlab/cleanlab/issues/400
|
[] |
elisno
| 1
|
pytorch/pytorch
|
machine-learning
| 149,275
|
torch.onnx.export Dynamic shapes output not working
|
### 🐛 Describe the bug
export not working when using `nvidia/nv-embed-v2` model, if i remove `sentence_embeddings` from the input, it would make the input static
```python
!pip install datasets==3.3.2
!pip install onnx==1.17.0 onnxscript==0.2.2 torch==2.6.0 torchvision==0.21.0 onnxruntime-gpu==1.21.0
!git clone https://github.com/ducknificient/transformers.git
!pip install /content/transformers

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model = AutoModel.from_pretrained(
    'nvidia/nv-embed-v2',
    trust_remote_code=True,
)
model.eval()

batch_size = 4
dummy_input_ids = torch.randint(0, 32000, (batch_size, 128))  # Batch size 2, sequence length 128
dummy_attention_mask = torch.ones((batch_size, 128), dtype=torch.int64)
dummy_pool_mask = torch.ones((batch_size, 128), dtype=torch.int64)

from torch.export import Dim

dynamic_shapes = {
    "input_ids": (Dim.DYNAMIC, Dim.DYNAMIC),
    "attention_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
    "pool_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
    "sentence_embeddings": (Dim.DYNAMIC, Dim.DYNAMIC),
}

# with torch.inference_mode():
torch.onnx.export(
    model,  # PyTorch model
    # (features,),  # Model inputs
    (dummy_input_ids, dummy_attention_mask, dummy_pool_mask),
    output_path,  # Output file
    export_params=True,  # Store the trained weights
    opset_version=14,  # ONNX opset version
    input_names=['input_ids', 'attention_mask', 'pool_mask'],  # Input names
    output_names=['sentence_embeddings'],  # Output names
    dynamic_shapes=dynamic_shapes,  # Dynamic axes
    dynamo=True,
    verbose=True  # Detailed output
)
print(f"Model exported to {output_path}")
```
this is the error
```
/usr/local/lib/python3.11/dist-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.Op.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
/usr/local/lib/python3.11/dist-packages/onnxscript/converter.py:823: FutureWarning: 'onnxscript.values.OnnxFunction.param_schemas' is deprecated in version 0.1 and will be removed in the future. Please use '.op_signature' instead.
param_schemas = callee.param_schemas()
`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export(..., strict=False)`...
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export(..., strict=False)`... ❌
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export`...
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with `torch.export.export`... ❌
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with Torch Script...
/usr/lib/python3.11/contextlib.py:105: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
W0316 15:11:47.871000 14360 torch/fx/experimental/symbolic_shapes.py:6307] failed during evaluate_expr(Eq((u0//128), 1), hint=None, size_oblivious=True, forcing_spec=False
E0316 15:11:47.872000 14360 torch/fx/experimental/recording.py:299] failed while running evaluate_expr(*(Eq((u0//128), 1), None), **{'fx_node': False, 'size_oblivious': True})
W0316 15:11:47.918000 14360 torch/fx/experimental/symbolic_shapes.py:6830] Unable to find user code corresponding to {u0}
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with Torch Script... ❌
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with internal Dynamo apis...
[torch.onnx] Obtain model graph for `NVEmbedModel([...]` with internal Dynamo apis... ❌
---------------------------------------------------------------------------
UserError Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/torch/onnx/_internal/exporter/_capture_strategies.py](https://localhost:8080/#) in __call__(self, model, args, kwargs, dynamic_shapes)
109 try:
--> 110 exported_program = self._capture(model, args, kwargs, dynamic_shapes)
111 except Exception as e:
17 frames
UserError: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['input_ids', 'attention_mask', 'pool_mask'] of `inputs`, but here they are ['input_ids', 'attention_mask', 'pool_mask', 'sentence_embeddings']. Alternatively, you could also ignore arg names entirely and specify `dynamic_shapes` as a list/tuple matching `inputs`. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
The above exception was the direct cause of the following exception:
TorchExportError Traceback (most recent call last)
[/usr/local/lib/python3.11/dist-packages/torch/onnx/_internal/exporter/_core.py](https://localhost:8080/#) in export(model, args, kwargs, registry, dynamic_shapes, input_names, output_names, report, verify, profile, dump_exported_program, artifacts_dir, verbose)
1290 # torch.jit.trace is due to the fallback and can be confusing to users.
1291 # We save all errors in the error report.
-> 1292 raise _errors.TorchExportError(
1293 _STEP_ONE_ERROR_MESSAGE
1294 + (
TorchExportError: Failed to export the model with torch.export. This is step 1/3 of exporting the model to ONNX. Next steps:
- Modify the model code for `torch.export.export` to succeed. Refer to https://pytorch.org/docs/stable/generated/exportdb/index.html for more information.
- Debug `torch.export.export` and summit a PR to PyTorch.
- Create an issue in the PyTorch GitHub repository against the *torch.export* component and attach the full error stack as well as reproduction scripts.
## Exception summary
<class 'torch._dynamo.exc.UserError'>: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['input_ids', 'attention_mask', 'pool_mask'] of `inputs`, but here they are ['input_ids', 'attention_mask', 'pool_mask', 'sentence_embeddings']. Alternatively, you could also ignore arg names entirely and specify `dynamic_shapes` as a list/tuple matching `inputs`. For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation
(Refer to the full stack trace above for more information.)
```
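The exception itself points at the likely fix: the keys of `dynamic_shapes` must match the positional input names only, so the output entry has to be dropped. A minimal sketch of the corrected mapping (untested against this particular model):
```python
from torch.export import Dim

# dynamic_shapes must only describe the model *inputs*; outputs such as
# "sentence_embeddings" are not part of this mapping.
dynamic_shapes = {
    "input_ids": (Dim.DYNAMIC, Dim.DYNAMIC),
    "attention_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
    "pool_mask": (Dim.DYNAMIC, Dim.DYNAMIC),
}
```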
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.1.85+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 550.54.15
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 7
BogoMIPS: 4400.48
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 6 MiB (6 instances)
L3 cache: 38.5 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.11
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.21.0
[pip3] onnxscript==0.2.2
[pip3] optree==0.14.1
[pip3] pynvjitlink-cu12==0.5.2
[pip3] torch==2.6.0+cu124
[pip3] torchaudio==2.6.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.21.0+cu124
[pip3] triton==3.2.0
[conda] Could not collect
|
closed
|
2025-03-16T15:19:49Z
|
2025-03-17T03:30:06Z
|
https://github.com/pytorch/pytorch/issues/149275
|
[] |
ducknificient
| 3
|
keras-team/keras
|
deep-learning
| 20,475
|
Cannot generate heatmaps using Gradcam library
|
```python
import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM
import matplotlib.pyplot as plt

# Initialize Grad-CAM explainer
explainer = GradCAM()

# Specify the layer for Grad-CAM in the VGG16 model (vgg16_tr)
conv_layer_name = 'block5_conv2'  # Adjusted to use 'block5_conv2'

# Get a batch from the validation generator
X_val, y_val = val_generator.__getitem__(0)

# Process each image in the batch
for i in range(len(X_val)):
    # Add batch dimension to match the expected input shape for Grad-CAM
    image_to_explain = np.expand_dims(X_val[i], axis=0)  # Shape: (1, 128, 128, 3)
    label_to_explain = np.expand_dims(y_val[i], axis=0)  # Shape: (1, num_classes)
    true_class = np.argmax(y_val[i])  # Get the true class index from the label

    # Ensure the image is passed as a tf.Tensor (not a NumPy array)
    image_to_explain = tf.convert_to_tensor(image_to_explain, dtype=tf.float32)

    # Generate the Grad-CAM heatmap
    heatmap = explainer.explain(
        validation_data=(image_to_explain, label_to_explain),  # Pass the single image and its label
        model=vgg16_tr,
        class_index=true_class,
        layer_name=conv_layer_name
    )

    # Plot the original image and Grad-CAM heatmap
    fig, ax = plt.subplots(1, 2, figsize=(10, 5))

    # Original image
    ax[0].imshow(X_val[i])
    ax[0].set_title(f'Original Image {i+1}')
    ax[0].axis('off')

    # Grad-CAM heatmap overlay
    ax[1].imshow(X_val[i])  # Show the original image first
    ax[1].imshow(heatmap, cmap='jet', alpha=0.3)  # Overlay heatmap with transparency
    ax[1].set_title(f'Grad-CAM Heatmap {i+1}')
    ax[1].axis('off')

    plt.tight_layout()
    plt.show()
```
I'm trying to generate heatmaps using the built-in Grad-CAM function for the CelebA dataset, but I get the error below. I have checked the shape of the input going into Grad-CAM; it is (1, 128, 128, 3).
error: When providing `inputs` as a list/tuple, all values in the list/tuple must be KerasTensors. Received: inputs=[[<KerasTensor shape=(None, 128, 128, 3), dtype=float32, sparse=None, name=keras_tensor>]] including invalid value [<KerasTensor shape=(None, 128, 128, 3), dtype=float32, sparse=None, name=keras_tensor>] of type <class 'list'>
If anyone could help me debug this, it would be much appreciated.
|
open
|
2024-11-08T23:10:43Z
|
2024-12-03T08:15:10Z
|
https://github.com/keras-team/keras/issues/20475
|
[
"type:Bug"
] |
jeev0306
| 4
|
modin-project/modin
|
pandas
| 7,023
|
`test_corr_non_numeric` failed due to different exception messages
|
Also `test_cov_numeric_only`.
Found in https://github.com/modin-project/modin/pull/6954
|
open
|
2024-03-07T12:02:02Z
|
2024-03-07T14:11:24Z
|
https://github.com/modin-project/modin/issues/7023
|
[
"bug 🦗",
"pandas concordance 🐼",
"P2"
] |
anmyachev
| 0
|
erdewit/ib_insync
|
asyncio
| 243
|
examples/tk.py fails with "RuntimeError: Cannot run the event loop while another loop is running"
|
Hi Ewald,
seems to be very nice library which I am starting to use now. Running the examples with single modification of my local host, port, and client_id in the IB.connect() line.
examples/qt_ticker_table.py runs fine
examples/tk.py fails:
```
rusakov@dfk78:~/mcai/github/ib_insync (master)$ python examples/tk.py
Traceback (most recent call last):
File "examples/tk.py", line 47, in <module>
app.run()
File "examples/tk.py", line 36, in run
self.loop.run_forever()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 531, in run_forever
self._check_runnung()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 526, in _check_runnung
'Cannot run the event loop while another loop is running')
RuntimeError: Cannot run the event loop while another loop is running
```
-----------
MacOS: 10.14.6
Python 3.7.7
ib_insync: latest, installed from git, hash f4e0525
nest-asyncio: 1.3.2
eventkit: 0.8.6
|
closed
|
2020-04-29T08:41:56Z
|
2020-05-04T11:37:52Z
|
https://github.com/erdewit/ib_insync/issues/243
|
[] |
drusakov778
| 6
|
NullArray/AutoSploit
|
automation
| 584
|
Unhandled Exception (55ded4728)
|
Autosploit version: `3.0`
OS information: `Linux-4.18.0-kali2-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `argument of type 'NoneType' is not iterable`
Error traceback:
```
Traceback (most recent call):
File "/root/Desktop/AutoSploit/autosploit/main.py", line 117, in main
terminal.terminal_main_display(loaded_tokens)
File "/root/Desktop/AutoSploit/lib/term/terminal.py", line 474, in terminal_main_display
if "help" in choice_data_list:
TypeError: argument of type 'NoneType' is not iterable
```
Metasploit launched: `False`
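Based on the traceback alone, a hedged sketch of a defensive check around lib/term/terminal.py:474 (handling the `None` case further upstream may be the better fix):
```python
# choice_data_list can be None when no command was parsed, so guard before "in".
if choice_data_list is not None and "help" in choice_data_list:
    pass  # existing help handling goes here
```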
|
closed
|
2019-03-24T02:05:49Z
|
2019-04-02T20:25:18Z
|
https://github.com/NullArray/AutoSploit/issues/584
|
[] |
AutosploitReporter
| 0
|
marcomusy/vedo
|
numpy
| 1,121
|
Smooth boundary line of a mesh
|
Hi, I'm looking to smooth just the boundary line of a mesh (not the entire mesh), I was wondering if there's any functionality in vedo to achieve this? Thanks!
|
closed
|
2024-05-17T01:26:33Z
|
2024-06-13T18:35:26Z
|
https://github.com/marcomusy/vedo/issues/1121
|
[] |
sean-d-zydex
| 2
|
dpgaspar/Flask-AppBuilder
|
rest-api
| 1,567
|
Running tests: DB Creation and initialization failed
|
I don't know if this is a flask_appbuilder issue, probably not. But I do need to solve this in order to be able to run tests via bash (so I can write and test tests and finish my pull request according to the new reset password functions).
### Environment
Flask-Appbuilder version: 3.1.1
pip freeze output:
-f /usr/share/pip-wheels
apispec==3.3.2
attrs==20.2.0
Babel==2.8.0
blinker==1.4
Brotli==1.0.9
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
colorama==0.4.3
dash==1.17.0
dash-core-components==1.13.0
dash-html-components==1.1.1
dash-renderer==1.8.3
dash-table==4.11.0
defusedxml==0.6.0
dnspython==2.0.0
email-validator==1.1.1
Flask==1.1.2
Flask-AppBuilder==3.1.1
Flask-Babel==1.0.0
Flask-Compress==1.8.0
Flask-JWT-Extended==3.24.1
Flask-Limiter==1.4
Flask-Login==0.4.1
Flask-Mail==0.9.1
Flask-Maintenance==0.0.1
Flask-OpenID==1.2.5
Flask-ReCaptcha==0.4.2
Flask-SQLAlchemy==2.4.4
Flask-WTF==0.14.3
future==0.18.2
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jsonschema==3.2.0
limits==1.5.1
lxml==4.6.1
MarkupSafe==1.1.1
marshmallow==3.8.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
mysqlclient==2.0.1
numpy==1.19.4
oauthlib==3.1.0
onetimepass==1.0.1
pandas==1.1.4
pandas-datareader==0.9.0
pathlib==1.0.1
plotly==4.12.0
prison==0.1.3
PyJWT==1.7.1
PyQRCode==1.2.1
pyrsistent==0.17.3
python-dateutil==2.8.1
python-dotenv==0.14.0
python3-openid==3.2.0
pytz==2020.1
PyYAML==5.3.1
requests==2.24.0
requests-oauthlib==1.3.0
retrying==1.3.3
six==1.15.0
SQLAlchemy==1.3.19
SQLAlchemy-Utils==0.36.8
urllib3==1.25.10
Werkzeug==1.0.1
WTForms==2.3.3
### Describe the expected results
```python
Running the flask_appbuilder/tests without any error
```
### Describe the actual results
```pytb
ERROR:flask_appbuilder.security.sqla.manager:DB Creation and initialization failed: 'NoneType' object has no attribute 'drivername'
(No creation and initialization of the db causes all tests to fail)
Traceback (most recent call last):
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/sqlalchemy/util/_collections.py", line 1020, in __call__
return self.registry[key]
KeyError: 140584349521664
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/flask_appbuilder/security/sqla/manager.py", line 99, in create_db
engine = self.get_session.get_bind(mapper=None, clause=None)
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/sqlalchemy/orm/scoping.py", line 163, in do
return getattr(self.registry(), name)(*args, **kwargs)
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/sqlalchemy/util/_collections.py", line 1022, in __call__
return self.registry.setdefault(key, self.createfunc())
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 3286, in __call__
return self.class_(**local_kw)
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 138, in __init__
bind = options.pop('bind', None) or db.engine
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 943, in engine
return self.get_engine()
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 962, in get_engine
return connector.get_engine()
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 555, in get_engine
options = self.get_options(sa_url, echo)
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 570, in get_options
self._sa.apply_driver_hacks(self._app, sa_url, options)
File "/home/username/.virtualenvs/venv/lib/python3.8/site-packages/flask_sqlalchemy/__init__.py", line 883, in apply_driver_hacks
if sa_url.drivername.startswith('mysql'):
AttributeError: 'NoneType' object has no attribute 'drivername'
```
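For what it's worth, the final `AttributeError` means Flask-SQLAlchemy received `SQLALCHEMY_DATABASE_URI = None`; a minimal sketch of the config values the test app needs (how the FAB test suite picks up its config is an assumption here):
```python
# Hypothetical test configuration: the URI must not be None, otherwise engine
# creation fails with "'NoneType' object has no attribute 'drivername'".
SQLALCHEMY_DATABASE_URI = "sqlite:///:memory:"
SECRET_KEY = "testing"
CSRF_ENABLED = False
```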
### Steps to reproduce
1) startup Bash console on Pythonanywhere
2) workon virtualenvironment
3) copy ........site-packages/flask_appbuilder/test dir to /home/username/app/
4) cd /home/username/app
5) python -m unittest discover
--
Thanks!
|
closed
|
2021-02-11T12:10:07Z
|
2021-02-12T08:23:25Z
|
https://github.com/dpgaspar/Flask-AppBuilder/issues/1567
|
[] |
runoutnow
| 1
|
biolab/orange3
|
pandas
| 6,675
|
File widget: add option to skip a range of rows not holding headers or data
|
<!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
Many datasets that are available online as open data have rows that are not column headers or actual records holding the data. Often they hold descriptions of the features and/or other general data such as licensing info. These could be either above or below the actual data. Also, some datasets have column headers spanning more than one row.
When importing these datasets in Orange, these rows confuse the mechanisms to recognize variables and variable types.
**What's your proposed solution?**
It would be nice to be able to specify a range of rows that has to be disregarded, e.g. rows 3-5, when importing a file
**Are there any alternative solutions?**
Using a spreadsheet to do it manually.
|
closed
|
2023-12-13T14:28:02Z
|
2024-01-05T10:38:14Z
|
https://github.com/biolab/orange3/issues/6675
|
[] |
wvdvegte
| 5
|
pyppeteer/pyppeteer
|
automation
| 136
|
Downloading a file (into memory) with pyppeteer
|
I'm navigating a site in pyppeteer that contains logic in which file download links must be followed in the same session in which they're surfaced. It's complex enough such that simply replaying the requests with postman fails and is not worth reverse-engineering. This means that I need to use pyppeteer to follow a direct link to download a file.
I've looked at these links for how this is done in puppeteer:
https://stackoverflow.com/questions/46966341/puppeteer-how-to-set-download-location
https://stackoverflow.com/questions/55408302/how-to-get-the-download-stream-buffer-using-puppeteer
and I'm a bit confused because
a) it appears that [setDownloadBehavior](https://chromedevtools.github.io/devtools-protocol/tot/Page/#method-setDownloadBehavior) is deprecated
b) if I were to use `page.evaluate` to use puppeteer-type logic, I'm not sure how I would send data back into my python script. Ideally I would read the file into memory to upload to s3 and skip writing to disk.
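One hedged sketch of keeping the bytes in memory by reading them off the navigation response, which sidesteps `setDownloadBehavior` entirely; this assumes the link returns the file body directly rather than triggering a browser-managed download:
```python
import asyncio
from pyppeteer import launch

async def fetch_file(url: str) -> bytes:
    browser = await launch()
    page = await browser.newPage()
    # ... perform the login/navigation steps that establish the session ...
    response = await page.goto(url)   # follow the direct download link
    data = await response.buffer()    # raw bytes, never written to disk
    await browser.close()
    return data

# data = asyncio.get_event_loop().run_until_complete(fetch_file("https://example.com/export"))
```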
|
open
|
2020-06-13T15:56:26Z
|
2020-10-21T01:08:32Z
|
https://github.com/pyppeteer/pyppeteer/issues/136
|
[
"bug",
"fixed-in-2.1.1"
] |
jessrosenfield
| 5
|
waditu/tushare
|
pandas
| 1,039
|
[BUG] Forward-adjusted (qfq) data is inconsistent with Tonghuashun
|
Code:
df_bar = ts.pro_bar(ts_code='603156.SH', adj='qfq', start_date='20190507', end_date='20190509')
print(df_bar)
Result:
ts_code trade_date open high ... change pct_chg vol amount
0 603156.SH 20190509 35.75 35.77 ... -1.18 -3.21 25671.86 89925.308
1 603156.SH 20190508 35.77 37.35 ... 0.61 1.69 35060.53 190153.269
2 603156.SH 20190507 34.56 36.31 ... 1.96 5.73 33827.22 177132.146
The forward-adjusted open, high, etc. in the second row are inconsistent with Tonghuashun (Tonghuashun shows open = 35.71).

(My registration link: https://tushare.pro/register?reg=265109)
|
closed
|
2019-05-12T00:45:49Z
|
2019-05-12T12:26:05Z
|
https://github.com/waditu/tushare/issues/1039
|
[] |
ghost0917
| 2
|
waditu/tushare
|
pandas
| 1,231
|
Is there an access restriction? A Google VM cannot reach http://file.tushare.org
|
I only call get_stock_basics once a day, with no aggressive crawling, and today it suddenly stopped working: "connection reset by peer". From a network inside China there is no problem.
|
closed
|
2019-12-23T05:12:41Z
|
2019-12-24T01:46:57Z
|
https://github.com/waditu/tushare/issues/1231
|
[] |
sidazhang123
| 0
|
MycroftAI/mycroft-core
|
nlp
| 2,363
|
Error during pairing
|
After I pair, it gives me this error
```bash
21:01:28.914 | ERROR | 5174 | mycroft.skills.settings:_issue_api_call:310 | Failed to upload skill settings meta for mycroft-pairing|19.08
Traceback (most recent call last):
File "/root/mycroft-core/mycroft/skills/settings.py", line 307, in _issue_api_call
self.api.upload_skill_metadata(self.settings_meta)
File "/root/mycroft-core/mycroft/api/__init__.py", line 382, in upload_skill_metadata
"json": settings_meta
File "/root/mycroft-core/mycroft/api/__init__.py", line 69, in request
return self.send(params)
File "/root/mycroft-core/mycroft/api/__init__.py", line 152, in send
return self.get_response(response, no_refresh)
File "/root/mycroft-core/mycroft/api/__init__.py", line 174, in get_response
raise HTTPError(data, response=response)
requests.exceptions.HTTPError: {'skill_gid': ['Received skill setting definition before manifest for skill mycroft-pairing|19.08']}
~~~~4 | mycroft.skills.settings:_update_settings_meta:301 | DEPRECATION WARNING: The "name" attribute in the settingsmeta file is no longer supported.
```
I am running ```mycroft-core 19.8.1``` on my Linux raspberry pi 1 B+ (I didn't install the PiCroft but install as normal Linux)
|
closed
|
2019-10-12T02:04:23Z
|
2019-11-18T21:46:20Z
|
https://github.com/MycroftAI/mycroft-core/issues/2363
|
[
"Type: Bug - complex"
] |
weathon
| 6
|
biolab/orange3
|
scikit-learn
| 6,835
|
Orange3 version 3.37.0 of the graph_ranks function is replaced by what
|
<!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system:
- Orange version:
- How you installed Orange:
|
closed
|
2024-06-15T02:54:18Z
|
2024-07-05T07:12:15Z
|
https://github.com/biolab/orange3/issues/6835
|
[
"bug report"
] |
CandB1314
| 1
|
developmentseed/lonboard
|
jupyter
| 293
|
Troubleshooting docs page
|
- if you see an error like "model not found", reload your _browser tab_ (not the Jupyter kernel) and try again
|
closed
|
2023-12-06T18:44:28Z
|
2024-09-24T21:06:18Z
|
https://github.com/developmentseed/lonboard/issues/293
|
[] |
kylebarron
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 37
|
In a BaoTa panel deployment only one startup file can be selected
|
The Web startup file is set to web_zh.py.
The API startup file is set to web_api.py.
I tried creating two projects, but the second one does not seem to start successfully.
Does that mean the Web and API parts have to be deployed on two separate servers?
|
closed
|
2022-06-08T08:31:33Z
|
2022-06-11T02:42:41Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/37
|
[] |
chris-ss
| 2
|
ray-project/ray
|
data-science
| 51,496
|
CI test windows://python/ray/tests:test_advanced_4 is consistently_failing
|
CI test **windows://python/ray/tests:test_advanced_4** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_advanced_4-END
Managed by OSS Test Policy
|
closed
|
2025-03-19T00:05:32Z
|
2025-03-19T21:52:09Z
|
https://github.com/ray-project/ray/issues/51496
|
[
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] |
can-anyscale
| 3
|
tflearn/tflearn
|
tensorflow
| 1,131
|
QuickStart tutorial can use columns_to_ignore in load_csv() function call
|
The load_csv line can be written like this, since the `columns_to_ignore` parameter is supported:
```
data, labels = load_csv('titanic_dataset.csv', target_column=0, columns_to_ignore=[2, 7], categorical_labels=True, n_classes=2)
```
and we don't need to do that in preprocess()
```
def preprocess(passengers):
    for i in range(len(passengers)):
        passengers[i][1] = 1. if passengers[i][1] == 'female' else 0.
    return np.array(passengers, dtype=np.float32)
```
Thus, calling preprocess could be
```
data = preprocess(data)
```
as well as predict part becomes
```
dicaprio, winslet = preprocess([dicaprio, winslet])
```
|
open
|
2019-06-13T17:36:47Z
|
2019-06-13T17:36:47Z
|
https://github.com/tflearn/tflearn/issues/1131
|
[] |
nicechester
| 0
|
idealo/image-super-resolution
|
computer-vision
| 187
|
Different scaling factors depending on model used
|
I've tried running the code provided here https://github.com/idealo/image-super-resolution#prediction on a sample image. My code is as follows
```python
import numpy as np
from PIL import Image
from ISR.models import RDN, RRDN

img = Image.open('data/input/0001.png')
lr_img = np.array(img)

model = RDN(weights='noise-cancel')
sr_img = model.predict(lr_img)

img = Image.fromarray(sr_img)
img.save('result.png', 'PNG')
```
When using `model = RRDN(weights='gans')` the resulting image is 4 times the size of the input, but with RDN models, it is twice the size. Is it possible to run prediction with different scales? It is difficult to truly compare the results of the different models because of that, but I might have missed something.
If this is the intended behavior, what is the best approach to target a specific resolution? (in regard to chaining models together perhaps)
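Regarding chaining, a minimal sketch of running a 2x RDN model twice to reach an effective 4x; whether this is the recommended way to target a specific resolution is an assumption on my part:
```python
import numpy as np
from PIL import Image
from ISR.models import RDN

model = RDN(weights='noise-cancel')                    # 2x model per the observation above
lr_img = np.array(Image.open('data/input/0001.png'))

sr_2x = model.predict(lr_img)                          # first pass: 2x
sr_4x = model.predict(sr_2x)                           # second pass on the 2x output: ~4x
Image.fromarray(sr_4x).save('result_4x.png', 'PNG')
```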
|
open
|
2021-03-14T12:43:59Z
|
2021-03-14T12:43:59Z
|
https://github.com/idealo/image-super-resolution/issues/187
|
[] |
axymeus
| 0
|
pytest-dev/pytest-cov
|
pytest
| 290
|
Ok, it appears to be as you explained in https://github.com/pytest-dev/pytest-cov/issues/285#issuecomment-488733091
|
Ok, it appears to be as you explained in https://github.com/pytest-dev/pytest-cov/issues/285#issuecomment-488733091
in fact against current master the following patch is sufficient to not cause an error:
```diff
diff --git a/proj/app/__init__.py b/proj/app/__init__.py
index e69de29..7ec32ee 100644
--- a/proj/app/__init__.py
+++ b/proj/app/__init__.py
@@ -0,0 +1 @@
+from . import templatetags
diff --git a/proj/app/templatetags/__init__.py b/proj/app/templatetags/__init__.py
index e69de29..2e046b5 100644
--- a/proj/app/templatetags/__init__.py
+++ b/proj/app/templatetags/__init__.py
@@ -0,0 +1 @@
+from . import tags
```
`pytest --cov -n2` completes green with the following result:
```
============================================================================ test session starts =============================================================================
platform linux -- Python 3.6.8, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
Django settings: proj.settings (from environment variable)
rootdir: /home/friedel/git/pytest-django-xdist-cov-bug
plugins: xdist-1.28.0, forked-1.0.2, django-3.4.8, cov-2.7.1
gw0 [2] / gw1 [2]
.. [100%]
----------- coverage: platform linux, python 3.6.8-final-0 -----------
Name Stmts Miss Cover
-------------------------------------------------------
proj/app/__init__.py 1 0 100%
proj/app/admin.py 1 0 100%
proj/app/models.py 1 0 100%
proj/app/templatetags/__init__.py 1 0 100%
proj/app/templatetags/tags.py 4 0 100%
proj/app/views.py 3 0 100%
proj/proj/__init__.py 0 0 100%
proj/proj/urls.py 4 0 100%
-------------------------------------------------------
TOTAL 15 0 100%
========================================================================== 2 passed in 2.16 seconds ==========================================================================
```
But the result raises more questions (which is probably a good thing):
1.) home.html is not only reported with coverage of 0% (probably because of https://github.com/nedbat/django_coverage_plugin/issues/36 ) but omitted completely.
2.) https://docs.djangoproject.com/en/2.2/howto/custom-template-tags/ (or 1.11 or any version) does not mention that templatetags libraries need to be imported anywhere. I'm not sure if any of the relevant modules mentions that (pytest-cov, pytest-xdist, pytest-django or django_coverage_plugin). Since production code runs without those imports (and `--cov` and `-n2` on their own as well) I suspect there's still a bug somewhere and importing those modules explicitly is just a workaround, with the advantage that it's simpler than my initial workaround of moving the busines code of the template tags and filters out of the tag library module and testing it separately.
You seem sure that there's no bug in pytest-cov, so I suspect there might be one in pytest-django or pytest_coverage_plugin.
In any case, thanks for your assistance!
_Originally posted by @TauPan in https://github.com/pytest-dev/pytest-cov/issues/285#issuecomment-489447382_
|
closed
|
2019-05-13T12:57:15Z
|
2019-05-13T12:57:43Z
|
https://github.com/pytest-dev/pytest-cov/issues/290
|
[] |
TauPan
| 1
|
Miserlou/Zappa
|
flask
| 2,130
|
documentation link is not working
|
*Zappa documentation link is not working*
I tried to access https://blog.zappa.io/, but it's not loading.
|
closed
|
2020-06-30T06:57:55Z
|
2021-02-08T17:45:07Z
|
https://github.com/Miserlou/Zappa/issues/2130
|
[] |
avinash-bhosale
| 8
|
cvat-ai/cvat
|
computer-vision
| 9,157
|
[Question] how can i fetch frames with annotation via APIs?
|
So, I have CVAT running locally, and for a certain Job X I want to fetch all the frames together with their annotations via the APIs.

|
closed
|
2025-02-27T19:47:49Z
|
2025-03-03T09:12:33Z
|
https://github.com/cvat-ai/cvat/issues/9157
|
[
"question"
] |
prabaljainn
| 4
|
apify/crawlee-python
|
web-scraping
| 549
|
Add init script for Playwright browser context
|
- For now, let's use a data file containing fingerprints (or at minimum user agents) from the Apify fingerprint dataset.
- Use the init script from the [fingerprint suite](https://github.com/apify/fingerprint-suite/blob/master/packages/fingerprint-injector/src/utils.js).
- A fingerprint, selected based on parameters such as OS or browser type, should be passed to the init script.
- Load the JS init script from a file and convert it into a string for injection.
- Inject the script into the Playwright browser context using `BrowserContext.add_init_script()` (a rough sketch follows this list).
- Possible inspiration:
  - [undetected-playwright-python](https://github.com/kaliiiiiiiiii/undetected-playwright-python),
  - [playwright_stealth](https://github.com/AtuboDad/playwright_stealth).
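A rough sketch of the injection step described above; the fingerprint-to-script wiring and file names are illustrative only:
```python
from pathlib import Path
from playwright.async_api import async_playwright

async def context_with_init_script(init_script_path: str, fingerprint_json: str):
    # Load the JS init script from a file and expose the selected fingerprint
    # to it through a global; both names here are placeholders.
    script = f"window.__fingerprint = {fingerprint_json};\n" + Path(init_script_path).read_text()

    pw = await async_playwright().start()
    browser = await pw.chromium.launch()
    context = await browser.new_context()
    await context.add_init_script(script=script)  # runs before any page script
    return pw, browser, context
```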
|
closed
|
2024-09-27T18:02:14Z
|
2025-02-05T15:04:06Z
|
https://github.com/apify/crawlee-python/issues/549
|
[
"enhancement",
"t-tooling",
"product roadmap"
] |
vdusek
| 0
|
nerfstudio-project/nerfstudio
|
computer-vision
| 2,998
|
Train model from custom data without COLMAP
|
Hi, I'm trying to build some NeRF models from images acquired with the [PyRep](https://github.com/stepjam/PyRep) simulator and I cannot use COLMAP to extract camera intrinsics and extrinsics.
I'm currently filling a transform.json with intrinsic and extrinsic parameters from the simulator, but I don't manage to train a model with such data. I attach a screenshot of a trained model, which presents several inconsistencies.

I rendered the images from PyRep in the OpenGL mode, so according to the [documentation](https://docs.nerf.studio/quickstart/data_conventions.html) the data should be in the right format.
I attach [here](https://github.com/nerfstudio-project/nerfstudio/files/14571277/pyrep_images.zip) the data that I am using for training.
Thanks in advance for the support.
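For comparison, a minimal sketch of the transforms.json layout nerfstudio reads, written here as a Python dict; all numbers are placeholders and `transform_matrix` is the OpenGL-convention camera-to-world matrix:
```python
transforms = {
    "fl_x": 554.26, "fl_y": 554.26,      # focal lengths in pixels (placeholder)
    "cx": 320.0, "cy": 240.0,            # principal point (placeholder)
    "w": 640, "h": 480,                  # image size (placeholder)
    "frames": [
        {
            "file_path": "images/frame_00001.png",
            "transform_matrix": [        # 4x4 camera-to-world, OpenGL convention
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 2.0],
                [0.0, 0.0, 0.0, 1.0],
            ],
        },
    ],
}
```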
|
closed
|
2024-03-12T11:12:37Z
|
2024-08-06T07:15:10Z
|
https://github.com/nerfstudio-project/nerfstudio/issues/2998
|
[] |
fedeceola
| 2
|
LibreTranslate/LibreTranslate
|
api
| 130
|
Submit translations
|
How can translations be provided to improve the LibreTranslate application?
| English | magyar (Hungarian) incorrect | magyar (Hungarian) correct |
|----|----|----|
| bootstrap | bootstrap | rendszerindítás |
| queue | sor | várakozási sor |
| script | szkript | parancsfájl |
| server | szerver | kiszolgáló |
| What’s the time? | Mennyi az idő? | Hany óra? |
Translations could be added from:
1. [Angol-magyar informatikai szótár](https://regi.tankonyvtar.hu/en/tartalom/tkt/angol-magyar/adatok.html). [PDF version](https://regi.tankonyvtar.hu/en/tartalom/tkt/angol-magyar/angol-magyar.pdf)
2. [SZTAKI dictionary](https://szotar.sztaki.hu/en)
What do you think?
Thank you
|
closed
|
2021-09-05T23:24:30Z
|
2021-10-09T14:13:25Z
|
https://github.com/LibreTranslate/LibreTranslate/issues/130
|
[
"enhancement"
] |
ovari
| 2
|
ScrapeGraphAI/Scrapegraph-ai
|
machine-learning
| 411
|
Groq example results in context_length_exceeded error
|
**Describe the bug**
I'm getting an error when running the Groq example on the repo. I confirmed my Groq key works when making a normal request.
*Code*
```python
from scrapegraphai.graphs import SearchGraph
import os
groq_key = os.getenv("GROQ_API_KEY")
# Define the configuration for the graph
graph_config = {
    "llm": {
        "model": "groq/gemma-7b-it",
        "api_key": groq_key,
        "temperature": 0
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "base_url": "http://localhost:11434",  # set ollama URL arbitrarily
    },
    "max_results": 5,
}
# Create the SearchGraph instance
search_graph = SearchGraph(
    prompt="List me all the traditional recipes from Chioggia",
    config=graph_config
)
# Run the graph
result = search_graph.run()
print(result)
```
*Error*
```text
groq.BadRequestError: Error code: 400 - {'error': {'message': 'Please reduce the length of the messages or completion.', 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
```
**Desktop (please complete the following information):**
- OS: MacOS 13.6.7
- Version: 1.7.4
|
closed
|
2024-06-26T22:13:20Z
|
2024-07-10T10:36:02Z
|
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/411
|
[] |
yngwz
| 2
|
taverntesting/tavern
|
pytest
| 21
|
Logger format issue in python 2.7
|
In the **entry.py** file, **line 63**, the logger format is as follows:
`{asctime:s} [{levelname:s}]: ({name:s}:{lineno:d}) {message:s}`
Any logging output will just print the line above and not the actual variables.
A quick fix is by replacing it with the following:
`%(asctime)s [%(levelname)s]: (%(name)s:%(lineno)d) %(message)s`
I have only tested this on python 2.7.
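For completeness, the brace-style format string only works when the formatter is constructed with style="{", which the logging module supports from Python 3.2 onwards; a minimal sketch of both variants:
```python
import logging

# Python 2.7-compatible (percent style):
fmt_py2 = logging.Formatter(
    "%(asctime)s [%(levelname)s]: (%(name)s:%(lineno)d) %(message)s"
)

# Python 3.2+ only: brace style must be requested explicitly.
fmt_py3 = logging.Formatter(
    "{asctime:s} [{levelname:s}]: ({name:s}:{lineno:d}) {message:s}", style="{"
)
```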
|
closed
|
2018-01-25T14:07:11Z
|
2018-01-25T16:32:05Z
|
https://github.com/taverntesting/tavern/issues/21
|
[] |
JadedLlama
| 1
|
Textualize/rich
|
python
| 2,762
|
[BUG] rich.progress.Progress transient option not working when exception is thrown within context
|
- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
When an exception is thrown from within the Progress context, the progress bar fails to exit correctly and leaves a blank line even when transient=True.
To be fair, my use case is pretty niche.
```python
class DownloadContext:
    """Context for Downloader."""

    def __init__(self, progress: Progress):
        """Context for Downloader."""
        self.progress = progress

    def foobar(path):
        # if file exists then skip download
        if path.is_file():
            raise SkipDownloadException
        # do download


class Downloader:
    """Download files or webpages with rich progressbar."""

    @contextmanager
    def begin(self):
        status: DownloaderExitStatus
        with Progress(transient=True) as prog:
            try:
                yield DownloadContext(prog)
            except SkipDownloadException:
                status = DownloaderExitStatus.SUCCESS_SKIPPED
            else:
                status = DownloaderExitStatus.SUCCESS_DOWNLOAD
            # print appropriate message based on status


downloader = Downloader()
with downloader.begin() as context:
    context.foobar()
```
When ``foobar()`` runs without throwing an exception, Python exits my ``with Downloader.begin() as context`` block, evaluates the ``else`` part of my try/except/else block, and the progress bars disappear like they're supposed to. But when ``SkipDownloadException`` is thrown in ``foobar()``, execution goes to the ``except SkipDownloadException`` statement as expected and prints my messages, yet the progress bars leave behind a blank line. I think this might just be a unique edge case, but maybe a more seasoned Python programmer can tell me if I'm misunderstanding the ``with`` statement, generators, or exceptions.
I tried using [this hack](https://github.com/Textualize/rich/issues/110#issuecomment-642805102) as a workaround, but I got this error: ``AttributeError: 'str' object has no attribute 'segment'``
**Edit 1:**
I was able to work around the blank lines by using this command ``console.control(Control.move(0,-1))`` but the issue still persists.
**Edit 2:**
After continuing to work on my project, I realized that raising an exception isn't the problem. The blanks lines occur because when the skip exception occurs, no tasks are added to the progress bar, so it never goes away. Adding something like this ``progress.add_task("", completed=100.0)`` solves the problem with the quirk that if you set visible=False, the problem reappears. I'm going to leave this issue open because it seems weird that the progress bar won't go away without adding an already completed task but, if this is desired behavior, then it's okay to close it.
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
> Linux
I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
If you're using Rich in a terminal:
```
python -m rich.diagnose
pip freeze | grep rich
```
```bash
$ python -m rich.diagnose
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=188 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 42 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=188, height=42), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=188, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=42, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=188, height=42) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 188 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-256color', │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
$ pip freeze | grep rich
rich==12.6.0
```
If you're using Rich in a Jupyter Notebook, run the following snippet in a cell
and paste the output in your bug report.
```python
from rich.diagnose import report
report()
```
**N/A**
</details>
|
open
|
2023-01-18T22:40:05Z
|
2023-01-21T03:19:54Z
|
https://github.com/Textualize/rich/issues/2762
|
[
"Needs triage"
] |
TerranWorks
| 1
|
babysor/MockingBird
|
pytorch
| 996
|
Dear author, how should the encoder be trained?
|
After reading the tutorial linked on Zhihu, I tried training the encoder.

After training for quite a while, the results don't seem to change much.


The dataset is self-built, a bit over 2 GB.
Question 1: Is this normal? If not, what could be causing it?
Question 2: According to a post on Zhihu ("**In one test, when training the synthesizer, attention converged at around 4000 steps and the loss reached 0.35 by 22k steps, so fine-tuning could start quickly, which exceeded expectations.**"), how do I bring the encoder into training the synthesizer?
|
open
|
2024-04-30T01:59:55Z
|
2024-04-30T09:22:32Z
|
https://github.com/babysor/MockingBird/issues/996
|
[] |
onedotone-wei
| 2
|
autogluon/autogluon
|
computer-vision
| 4,195
|
[timeseries] When saving predictor to a folder with an existing predictor, delete the old predictor
|
## Description
- When the user sets `TimeSeriesPredictor(path="folder/with/existing/predictor")`, a lot of weird undocumented behaviors may occur (e.g., #4150). We currently log a warning in this case, but it's often ignored by the users, leading to confusion. A cleaner option would be to delete all the files related to the old predictor.
|
closed
|
2024-05-14T07:50:06Z
|
2024-06-27T09:24:45Z
|
https://github.com/autogluon/autogluon/issues/4195
|
[
"enhancement",
"module: timeseries"
] |
shchur
| 2
|
dunossauro/fastapi-do-zero
|
pydantic
| 180
|
Project link
|
https://github.com/HulitosCode/fast_zero
|
closed
|
2024-06-18T11:54:23Z
|
2024-06-18T12:00:02Z
|
https://github.com/dunossauro/fastapi-do-zero/issues/180
|
[] |
HulitosCode
| 0
|
localstack/localstack
|
python
| 11,537
|
bug: EventBridge pattern using suffix generates incorrect rule
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I'm trying to create an EventBridge rule using the `suffix` matcher, with CDK:
```javascript
new Rule(this, "TestRule", {
eventPattern: {
source: ["aws.s3"],
detailType: ["Object Created"],
detail: {
bucket: { name: ["test-s3-bucket"] },
object: { key: [{ suffix: "/test.json" }] },
},
},
targets: [...],
});
```
Creating the rule succeeds, but when an event arrives it throws an error saying the rule is invalid (something like: suffix value must be a string, not an array). Upon further inspection, the rule that ends up being created contains the pattern: `{ suffix: ["/test.json"] }`
However If I change this to `prefix`, it creates the rule correctly without the array: `{ prefix: "/test.json" }`
### Expected Behavior
In the rule created by CloudFormation, the value `"/test.json"` should not be inside an array.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
- create an evenbridge rule using CDK/cloudformation and use the `suffix` matcher in the event pattern
- call `awslocal events describe-rule --name {rule name here}`
- the EventPattern in the rule will have the suffix value inside an array
### Environment
```markdown
- OS: Mac OS 14
- LocalStack:
LocalStack version: 3.7.2
LocalStack Docker image sha: sha256:6121fe59d0c05ccc30bee5d4f24fce23c3e3148a4438551982a5bf60657a9e8d
LocalStack build date: 2024-09-06
LocalStack build git hash: 854016a0
```
### Anything else?
_No response_
|
closed
|
2024-09-18T18:41:45Z
|
2024-12-05T13:54:19Z
|
https://github.com/localstack/localstack/issues/11537
|
[
"type: bug",
"aws:cloudformation",
"status: backlog"
] |
arshsingh
| 3
|
localstack/localstack
|
python
| 12,019
|
bug: Secret exists but is not found
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
We have a couple of secrets that we store in Secrets Manager using a LocalStack image. When we fetch a secret through the AWS SDK, it returns a 400 ResourceNotFoundException.

### Expected Behavior
For it to return the secrets that exist in the Secrets Manager's localstack image.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker compose up --build -d
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
##Inside localstack container works
awslocal secretsmanager get-secret-value --secret-id <secret-name>
##Outside container in the same network as localstack doesn't
secretsManagerClient.GetSecretValue(context.Background(), secretValueInput)
### Environment
```markdown
- OS: alpine:latest
- LocalStack
LocalStack version:stable
LocalStack Docker image sha:sha256:518e0fe312f173c8c5268d5e68520b0006716156582c035d9b633db460e73865
LocalStack build date:Nov 29, 2024 at 8:59 pm
LocalStack build git hash:
```
### Anything else?
The latest tag without this issue that we found was 4.0.2.
|
open
|
2024-12-11T11:12:42Z
|
2025-01-03T10:07:33Z
|
https://github.com/localstack/localstack/issues/12019
|
[
"type: bug",
"aws:secretsmanager",
"status: backlog"
] |
bernardo-rodrigues-vwds
| 13
|
ivy-llc/ivy
|
pytorch
| 27,892
|
clip_vector_norm
|
closed
|
2024-01-10T04:05:38Z
|
2024-01-25T23:01:01Z
|
https://github.com/ivy-llc/ivy/issues/27892
|
[
"Sub Task",
"Stale"
] |
JoshOlam
| 2
|
|
plotly/dash
|
data-science
| 2,266
|
Typo in method for context_value upon create
|
https://github.com/plotly/dash/blob/2bd83a98322f95243359d846ec9e97a5b542a36e/dash/_pages.py#L254
I think this may be a typo. It would error out:
```
File "/opt/homebrew/lib/python3.10/site-packages/dash/_pages.py", line 254, in register_page
    if context_value.get().get("ignore_register_page"):
LookupError: <ContextVar name='callback_context' at 0x11f7bff10>
```
removing the '.get()' method fixed it.
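For context, a small stdlib-only sketch of why that line blows up: ``ContextVar.get()`` with no default raises ``LookupError`` when the variable has never been set, and passing a default is one way to avoid it (an assumption of this sketch, not necessarily the fix Dash shipped).
```python
from contextvars import ContextVar

callback_context = ContextVar("callback_context")  # never set in this sketch

try:
    callback_context.get().get("ignore_register_page")
except LookupError:
    print("ContextVar.get() without a default raises LookupError when unset")

# Supplying a default sidesteps the error:
if callback_context.get({}).get("ignore_register_page"):
    print("ignore_register_page is set")
```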
|
closed
|
2022-10-09T21:15:14Z
|
2023-03-21T18:42:12Z
|
https://github.com/plotly/dash/issues/2266
|
[] |
SKnight79
| 1
|
ageitgey/face_recognition
|
python
| 1,199
|
Problem regarding installation of face_recognition API
|
* face_recognition version: 1.3.0
* Python version: 3.8.2
* Operating System: Ubuntu 20.04
### Description
I wanted to install `face_recognition` api with the command `pip3 install face_recognition` and every time I do this, I get an error as following:
```
Collecting face_recognition
Using cached face_recognition-1.3.0-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: face-recognition-models>=0.3.0 in /home/pritam_pagla/.local/lib/python3.8/site-packages (from face_recognition) (0.3.0)
Requirement already satisfied: numpy in /home/pritam_pagla/.local/lib/python3.8/site-packages (from face_recognition) (1.19.1)
Requirement already satisfied: Click>=6.0 in /usr/lib/python3/dist-packages (from face_recognition) (7.0)
Requirement already satisfied: Pillow in /usr/lib/python3/dist-packages (from face_recognition) (7.0.0)
Collecting dlib>=19.7
Using cached dlib-19.21.0.tar.gz (3.2 MB)
Building wheels for collected packages: dlib
Building wheel for dlib (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-x8p3gvgy/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-x8p3gvgy/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-8yvcs9nr
cwd: /tmp/pip-install-x8p3gvgy/dlib/
Complete output (53 lines):
running bdist_wheel
running build
running build_py
package init file 'tools/python/dlib/__init__.py' not found (or not a regular file)
running build_ext
Traceback (most recent call last):
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 120, in get_cmake_version
out = subprocess.check_output(['cmake', '--version'])
File "/usr/lib/python3.8/subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.8/subprocess.py", line 489, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/lib/python3.8/subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.8/subprocess.py", line 1702, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 223, in <module>
setup(
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 144, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/wheel/bdist_wheel.py", line 223, in run
self.run_command('build')
File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3.8/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 129, in run
cmake_version = self.get_cmake_version()
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 122, in get_cmake_version
raise RuntimeError("\n*******************************************************************\n" +
RuntimeError:
*******************************************************************
CMake must be installed to build the following extensions: _dlib_pybind11
*******************************************************************
----------------------------------------
ERROR: Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib, face-recognition
Running setup.py install for dlib ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-x8p3gvgy/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-x8p3gvgy/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-czi36j35/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/pritam_pagla/.local/include/python3.8/dlib
cwd: /tmp/pip-install-x8p3gvgy/dlib/
Complete output (55 lines):
running install
running build
running build_py
package init file 'tools/python/dlib/__init__.py' not found (or not a regular file)
running build_ext
Traceback (most recent call last):
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 120, in get_cmake_version
out = subprocess.check_output(['cmake', '--version'])
File "/usr/lib/python3.8/subprocess.py", line 411, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.8/subprocess.py", line 489, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/lib/python3.8/subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.8/subprocess.py", line 1702, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'cmake'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 223, in <module>
setup(
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 144, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/lib/python3.8/distutils/command/install.py", line 589, in run
self.run_command('build')
File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/lib/python3.8/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 129, in run
cmake_version = self.get_cmake_version()
File "/tmp/pip-install-x8p3gvgy/dlib/setup.py", line 122, in get_cmake_version
raise RuntimeError("\n*******************************************************************\n" +
RuntimeError:
*******************************************************************
CMake must be installed to build the following extensions: _dlib_pybind11
*******************************************************************
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-x8p3gvgy/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-x8p3gvgy/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-czi36j35/install-record.txt --single-version-externally-managed --user --prefix= --compile --install-headers /home/pritam_pagla/.local/include/python3.8/dlib Check the logs for full command output.
```
Can somebody please provide me with the solution required!
|
open
|
2020-08-12T17:43:29Z
|
2020-08-13T02:10:40Z
|
https://github.com/ageitgey/face_recognition/issues/1199
|
[] |
Pritam-Pagla
| 1
|
deeppavlov/DeepPavlov
|
tensorflow
| 844
|
Setting `DP_MODELS_PATH` and `DP_DOWNLOADS_PATH` variables doesn't really work
|
Every config in the DeepPavlov library writes something different into `MODELS_PATH`:
* some configs have just `{ROOT_PATH}/models`
* some configs have one more folder, ex. `{ROOT_PATH}/models/kbqa_mix_lowercase`
* some configs have a lot of additional folders, ex. `{ROOT_PATH}/models/classifiers/insults_kaggle_v3`
This makes setting the `DP_MODELS_PATH` environment variable pointless, because it overwrites `MODELS_PATH`, and all paths that include additional folders can no longer be found.
I think it could be fixed by standardizing `MODELS_PATH` in all configs as `{ROOT_PATH}/models`, with any additional folders kept in a separate variable (e.g. `MY_FOLDER = {MODELS_PATH}/whatever/folders`).
|
closed
|
2019-05-15T17:32:06Z
|
2019-06-26T11:05:14Z
|
https://github.com/deeppavlov/DeepPavlov/issues/844
|
[
"bug"
] |
my-master
| 1
|
scikit-learn/scikit-learn
|
machine-learning
| 31,032
|
`weighted_percentile` should error/warn when all sample weights 0
|
### Describe the bug
Noticed while working on #29431
### Steps/Code to Reproduce
See the following test:
https://github.com/scikit-learn/scikit-learn/blob/cd0478f42b2c873853e6317e3c4f2793dc149636/sklearn/utils/tests/test_stats.py#L67-L73
### Expected Results
An error or warning should probably be given. You're effectively asking for a quantile of an empty array.
### Actual Results
When all sample weights are 0, what happens is that `percentile_in_sorted` (i.e. the index of the desired observation in the array) ends up being `101` (the last item). We should probably add a check and give a warning when `sample_weights` is all zeros.
cc @ogrisel @glemaitre
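To make the degenerate case concrete, here is a tiny NumPy-only reference implementation (not scikit-learn's internal code) of an inverse-CDF weighted percentile; with all-zero weights the normalising sum is zero and the quantile is simply undefined, which is why an explicit error or warning seems warranted.
```python
import numpy as np

def weighted_percentile(values, weights, q):
    """Toy inverse-CDF weighted percentile (illustration only)."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    total = weights.sum()
    if total == 0:
        # The case discussed above: every observation carries zero weight,
        # so the weighted distribution is empty and the quantile is undefined.
        raise ValueError("all sample weights are zero")
    cdf = np.cumsum(weights) / total
    return values[np.searchsorted(cdf, q / 100.0)]

print(weighted_percentile([1, 2, 3], [1, 1, 1], 50))  # 2.0
# weighted_percentile([1, 2, 3], [0, 0, 0], 50)       # raises ValueError
```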
### Versions
```shell
n/a
```
|
open
|
2025-03-20T01:57:45Z
|
2025-03-21T17:30:17Z
|
https://github.com/scikit-learn/scikit-learn/issues/31032
|
[
"Bug"
] |
lucyleeow
| 1
|
nteract/papermill
|
jupyter
| 660
|
nbclient.exceptions.DeadKernelError if Python Warnings Filter set to 'default'
|
## Question
<!-- A clear and concise description of what the bug is. -->
:wave: Hi. While trying to get a nightly notebook `pytest` test working again with modern `papermill` and `scrapbook` (c.f. https://github.com/scikit-hep/pyhf/pull/1841) I noticed that if I have [Python filter warnings](https://docs.python.org/3/library/warnings.html#the-warnings-filter) set to `'default'`, I can get a `nbclient.exceptions.DeadKernelError` from `papermill` if the notebook being executed has any IPython/Jupyter [magics](https://ipython.readthedocs.io/en/stable/interactive/magics.html) like shell evaluation with `!`.
Example from https://github.com/scikit-hep/pyhf/pull/1841:
```yaml
- name: Test example notebooks
shell: bash
run: |
# Override the ini option for filterwarnings with an empty list to disable error
# on filterwarnings as testing for notebooks to run with the latest API, not if
# Jupyter infrastructure is warning free.
# Though still show warnings by setting warning control to 'default'.
export PYTHONWARNINGS='default'
pytest --override-ini filterwarnings= tests/test_notebooks.py
```
Do you have any ideas why setting [`PYTHONWARNINGS='default'`](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONWARNINGS) which will
> print the first occurrence of matching warnings for each location (module + line number) where the warning is issued
would cause a `nbclient.exceptions.DeadKernelError`, which goes away if `PYTHONWARNINGS` is not set?
## Log files
There is a log file for
```console
$ PYTHONWARNINGS='default' pytest --override-ini filterwarnings= tests/test_notebooks.py -k test_xml_importexport &> /tmp/pythonwarnings_log.txt
```
at https://github.com/scikit-hep/pyhf/issues/1840#issuecomment-1089982787, which I can link here: https://github.com/scikit-hep/pyhf/files/8425002/pythonwarnings_log.txt
### Related Issues and PRs
* https://github.com/scikit-hep/pyhf/issues/1840
* https://github.com/scikit-hep/pyhf/pull/1841
|
open
|
2022-04-06T08:40:12Z
|
2022-04-06T14:29:06Z
|
https://github.com/nteract/papermill/issues/660
|
[
"bug",
"help wanted"
] |
matthewfeickert
| 0
|
ultralytics/yolov5
|
deep-learning
| 12,519
|
index: 2 Got: 384 Expected: 640
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have been using the Ultralytics Python module to use my yolov5 model made with Ultralytics Hub
I switched to doing it this way (it gives more customization):
[https://docs.ultralytics.com/yolov5/tutorials/train_custom_data](url)
However if I try to use the Ultralytics module to load the model, I get this:
```
TypeError: ERROR best.pt appears to be an Ultralytics YOLOv5 model originally trained with https://github.com/ultralytics/yolov5.
This model is NOT forwards compatible with YOLOv8 at https://github.com/ultralytics/ultralytics.
Recommend fixes are to train a new model using the latest 'ultralytics' package or to run a command with an official YOLOv8 model, i.e. 'yolo predict model=yolov8n.pt'
```
So I switched to using torch.hub.load()
It loads the model fine; however, when it tries to get detection results, I get this error:
```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: images for the following indices
index: 2 Got: 384 Expected: 640
Please fix either the inputs or the model.
```
I have gotten this error before when using the Ultralytics module; I just changed the imgsz argument and that fixed it.
It appears that torch doesn't support that argument, and I'm unsure how to go about fixing this.
Ive tried:
- imgsz
- imgsize
- size
And nothing works
I've looked through this page about torch.hub.load() to no avail:
[https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/](url)
I am unsure of what to do from here, any help is appreciated.
### Code
```
import os
import dxcam
import cv2
import torch
from ultralytics import YOLO
os.chdir(os.path.dirname(__file__))
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.onnx', _verbose=False)
# Initzialize a DXcam camera
camera = dxcam.create(device_idx=0, output_color="BGR")
left = 0
top = 0
right = 1360
bottom = 768
cv2.namedWindow("test", cv2.WINDOW_NORMAL)
cv2.resizeWindow("test", 400, 200)
cv2.setWindowProperty("test", cv2.WND_PROP_TOPMOST, 1)
while True:
frame = camera.grab(region=(left,top,right,bottom))
if frame is None:
continue
results = model(frame, imgsz=640)
cv2.imshow("test", frame)
cv2.waitKey(1)
```
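One possible workaround, under the assumption that the ONNX file was exported with a fixed 640x640 input (so the 1360x768 capture gets letterboxed to 384x640 and no longer matches), is to pad the frame to a 640x640 square before calling the model:
```python
import cv2
import numpy as np

def letterbox_square(frame, size=640, pad_value=114):
    """Resize keeping aspect ratio, then pad to a size x size square
    (simplified sketch, not YOLOv5's exact letterbox)."""
    h, w = frame.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(frame, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((size, size, 3), pad_value, dtype=resized.dtype)
    canvas[: resized.shape[0], : resized.shape[1]] = resized
    return canvas

# inside the capture loop:
# results = model(letterbox_square(frame))
```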
|
closed
|
2023-12-17T17:45:54Z
|
2024-02-09T04:05:16Z
|
https://github.com/ultralytics/yolov5/issues/12519
|
[
"question",
"Stale"
] |
DylDevs
| 13
|
allenai/allennlp
|
pytorch
| 5,274
|
allennlp.common.checks.ConfigurationError: srl not in acceptable choices for dataset_reader
|
```
from allennlp.models import load_archive
from allennlp.predictors.predictor import Predictor
import allennlp
def get_srl_predictor():
if torch.cuda.is_available():
archive = allennlp.models.archival.load_archive("https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz", cuda_device=0)
else:
archive = load_archive("https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz")
return Predictor.from_archive(archive)
```
Loading the above-mentioned model throws this error:
allennlp.common.checks.ConfigurationError: srl not in acceptable choices for dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'sequence_tagging', 'sharded', 'text_classification_json', 'multitask_shim']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
I tried using Predictor.path('https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz'), but I am getting the same error.
Can anyone please help me?
Thank you
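A possible fix, assuming a recent AllenNLP (>= 1.x) is in use: the SRL reader and model moved into the separate `allennlp-models` package, so it has to be installed (`pip install allennlp-models`) and imported before the archive is loaded so that "srl" gets registered. Note the 2019 archive itself may still require a matching older AllenNLP version, so treat this as a sketch rather than a guaranteed solution.
```python
# pip install allennlp allennlp-models
import allennlp_models.structured_prediction  # noqa: F401  (registers the "srl" reader/model)
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "https://s3-us-west-2.amazonaws.com/allennlp/models/bert-base-srl-2019.06.17.tar.gz"
)
print(predictor.predict(sentence="The keys were left on the table."))
```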
|
closed
|
2021-06-20T07:10:09Z
|
2022-06-01T21:21:45Z
|
https://github.com/allenai/allennlp/issues/5274
|
[
"question"
] |
jeevana28
| 6
|
mckinsey/vizro
|
plotly
| 584
|
Custom data does not get picked up if directly defined inside custom chart
|
### Which package?
vizro
### Package version
0.1.18
### Description
If this is supposed to be like this, then I think we need to raise an error earlier or document it somewhere. Optimally this should work though: https://py.cafe/huong-li-nguyen/vizro-bug-filter-interaction
If the custom data argument is defined directly inside the custom data chart code, it doesn't get picked up (see example app and screenshot). So this **does not work**:
```
@capture("graph")
def custom_chart(x: str, y: str, color: str, data_frame: pd.DataFrame = None):
fig = px.box(
df_gapminder,
x=x,
y=y,
color=color,
custom_data=[color], # this does not get picked up if defined here?
)
return fig
```
It only gets picked up if the custom_data argument is part of the custom chart function's signature, so **this works**:
```
@capture("graph")
def custom_chart(x: str, y: str, color: str, custom_data: str, data_frame: pd.DataFrame = None):
    fig = px.box(
        df_gapminder,
        x=x,
        y=y,
        color=color,
        custom_data=[custom_data],
    )
    return fig
```
<img width="1223" alt="Screenshot 2024-07-14 at 20 09 33" src="https://github.com/user-attachments/assets/a9d9ed8d-6e27-4450-8ba1-8ccd42dd564c">
Is it supposed to be like this? If yes, shall we raise a validation error earlier?
### How to Reproduce
https://py.cafe/huong-li-nguyen/vizro-bug-filter-interaction
### Output
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
|
closed
|
2024-07-14T18:17:07Z
|
2024-07-15T06:28:43Z
|
https://github.com/mckinsey/vizro/issues/584
|
[
"Bug Report :bug:",
"Needs triage :mag:"
] |
huong-li-nguyen
| 1
|
christabor/flask_jsondash
|
plotly
| 85
|
Clear api results field on modal popup
|
Just to prevent any confusion as to what is loaded there.
|
closed
|
2017-03-01T19:04:52Z
|
2017-03-02T20:28:18Z
|
https://github.com/christabor/flask_jsondash/issues/85
|
[
"enhancement"
] |
christabor
| 0
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 630
|
Got this error getting vocals from long webm video
|
Last Error Received:
Process: MDX-Net
The application was unable to allocate enough system memory to use this model.
Please do the following:
1. Restart this application.
2. Ensure any CPU intensive applications are closed.
3. Then try again.
Please Note: Intel Pentium and Intel Celeron processors do not work well with this application.
If the error persists, the system may not have enough RAM, or your CPU might not be supported.
Raw Error Details:
RuntimeError: "[enforce fail at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 15500083200 bytes."
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 287, in seperate
File "separate.py", line 360, in demix_base
File "separate.py", line 350, in initialize_mix
"
Error Time Stamp [2023-06-23 13:40:33]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 1
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
|
open
|
2023-06-23T17:43:18Z
|
2023-06-23T17:43:18Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/630
|
[] |
Comput3rUs3r
| 0
|
NVlabs/neuralangelo
|
computer-vision
| 207
|
Direct point cloud extraction. Mesh extraction: addressing different results
|
Hi,
I was wondering if there is a way to directly extract a point cloud without extracting the mesh first?
I am seeing different results in the mesh during training and after extracting the mesh. Do you have any idea why this is happening?
Thank you!
|
open
|
2024-07-29T15:11:36Z
|
2024-07-29T15:11:36Z
|
https://github.com/NVlabs/neuralangelo/issues/207
|
[] |
camolina2
| 0
|
cleanlab/cleanlab
|
data-science
| 638
|
Can cleanlab be used for graph link prediction?
|
As the official documentation says, cleanlab can handle image, text, audio, or tabular data, so can it be used for graph data, like link prediction?
|
closed
|
2023-02-19T06:50:46Z
|
2023-05-31T22:05:21Z
|
https://github.com/cleanlab/cleanlab/issues/638
|
[
"question"
] |
xioacd99
| 2
|
django-import-export/django-import-export
|
django
| 1,168
|
What is the right way to import a specific sheet in Excel?
|
I would like to import a specific sheet from an Excel file...
What is the correct way to implement this?
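One possible approach, sketched below with hypothetical names (`BookResource`, `books.xlsx` and `Sheet2` are placeholders): load the desired worksheet yourself with openpyxl, build a `tablib.Dataset` from it, and hand that to the resource's `import_data()`.
```python
import tablib
from openpyxl import load_workbook

def import_sheet(resource, path, sheet_name, dry_run=True):
    """Import a single named worksheet through a django-import-export Resource."""
    ws = load_workbook(path, read_only=True)[sheet_name]
    rows = ws.iter_rows(values_only=True)
    dataset = tablib.Dataset()
    dataset.headers = list(next(rows))  # first row becomes the column headers
    for row in rows:
        dataset.append(row)
    return resource.import_data(dataset, dry_run=dry_run)

# result = import_sheet(BookResource(), "books.xlsx", "Sheet2")
# print(result.has_errors())
```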
|
closed
|
2020-07-28T15:19:40Z
|
2020-07-31T20:13:55Z
|
https://github.com/django-import-export/django-import-export/issues/1168
|
[
"question"
] |
rmControls
| 1
|
pallets-eco/flask-wtf
|
flask
| 345
|
cannot import name 'FlaskForm'
|
When I run
`from flask_wtf import FlaskForm`
I get:
```pytb
Traceback (most recent call last):
File "F:/Dokumente/Programmierung/Python/wtforms/wtforms.py", line 5, in <module>
from flask_wtf import FlaskForm
File "C:\Users\Niklas\Documents\venvs\testenv\lib\site-packages\flask_wtf\__init__.py", line 15, in <module>
from .csrf import CSRFProtect, CsrfProtect
File "C:\Users\Niklas\Documents\venvs\testenv\lib\site-packages\flask_wtf\csrf.py", line 11, in <module>
from wtforms import ValidationError
File "F:\Dokumente\Programmierung\Python\wtforms\wtforms.py", line 5, in <module>
from flask_wtf import FlaskForm
ImportError: cannot import name 'FlaskForm'`
```
The versions I use are:
Flask-WTF 2.2.1
Flask 1.0.2
Python 3.6.5
|
closed
|
2018-07-29T17:44:17Z
|
2020-04-21T06:24:40Z
|
https://github.com/pallets-eco/flask-wtf/issues/345
|
[] |
Toxenum
| 1
|
deepset-ai/haystack
|
pytorch
| 8,483
|
Use of meta data of documents inside components
|
**Problem Description**
When using a set of documents in a component like DocumentSplitter, especially in a pipeline, the current behavior is that the same component parameters (split_by, split_length, etc.) are applied to all documents. But that may not always be what is wanted, as in my case.
**Suggested Solution**
The suggestion is to use the meta properties of the document as a way for the developer to pass dynamic parameters. If the meta data contains a parameter with the same name as a component parameter (e.g. "split_by"), then that value is used for that document. Since processing already works on the "content" field of each document, it could be extended to look at the meta fields when they exist.
**Current Alternative Solution**
What I am building is a typical RAG pipeline. But because each file may need to be split with a different strategy, I am constrained to treat each document and its pre- and post-processing as its own batch and loop through the documents. Thus, it is not a batch of documents in one pipeline but a batch of pipelines with one document each.
**Additional context**
I was told by some data scientists that the choice of a splitting strategy is based on the contents of the document and in their opinion a standard process to follow.
Thanks.
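For illustration, a rough sketch (assuming the Haystack 2.x `DocumentSplitter` API; this is not an existing feature) of a helper that reads the split settings from each document's meta, groups documents by those settings, and runs one splitter per group:
```python
from collections import defaultdict

from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

def split_per_document(documents, default_split_by="word", default_split_length=200):
    """Split each document using the split_by/split_length found in its meta."""
    groups = defaultdict(list)
    for doc in documents:
        key = (
            doc.meta.get("split_by", default_split_by),
            doc.meta.get("split_length", default_split_length),
        )
        groups[key].append(doc)

    out = []
    for (split_by, split_length), docs in groups.items():
        splitter = DocumentSplitter(split_by=split_by, split_length=split_length)
        out.extend(splitter.run(documents=docs)["documents"])
    return out

# docs = [Document(content="...", meta={"split_by": "sentence", "split_length": 5})]
# chunks = split_per_document(docs)
```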
|
open
|
2024-10-23T13:17:16Z
|
2025-01-09T07:30:31Z
|
https://github.com/deepset-ai/haystack/issues/8483
|
[
"P3"
] |
rmrbytes
| 2
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 4,067
|
Options suggestions to disable reminder emails
|
### Proposal
This suggestion is to request the possibility to:
1. stop sending reminder notification (report settings) email when a report is in "closed" status, or checkbox option to set it in the status change panel.
2. stop sending the default notification reminder (channel settings) email when at least one receiver will open the report.
### Motivation and context
1. Since the notification email has generic text with no reference to a single report, manually deactivating the notification setting for every report could become quite confusing, because recipients would have to scroll through every report to see which one still has an active notification. Also, in all our cases receivers don't expect to receive reminder emails when reports are closed.
2. In our experience, some receivers are listed among the default context receivers but, for various reasons, don't manage or even open the reports, so other receivers handle them. Those receivers keep getting the daily "unread report" email even though the report has already been read.
Considering that those emails are sent daily, this mechanism could create confusion and lead receivers to stop using the feature altogether.
Thanks for considering.
|
open
|
2024-05-06T13:15:01Z
|
2024-08-05T16:06:35Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4067
|
[
"C: Client",
"C: Backend",
"T: Feature"
] |
larrykind
| 7
|
paperless-ngx/paperless-ngx
|
django
| 8,096
|
Document export via NAS shows a warning
|
### Description
I am getting the following warning.
What could be causing it?
WARNINGS:
?: Filename format {created_year}/{correspondent}/{title} is using the old style, please update to use double curly brackets
HINT: {{ created_year }}/{{ correspondent }}/{{ title }}
### Steps to reproduce
After every backup
### Webserver logs
```bash
none
```
### Browser logs
_No response_
### Paperless-ngx version
2.13.0
### Host OS
lxc container
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
|
closed
|
2024-10-29T06:58:54Z
|
2024-12-04T03:17:50Z
|
https://github.com/paperless-ngx/paperless-ngx/issues/8096
|
[
"not a bug"
] |
totocotonio
| 3
|
allure-framework/allure-python
|
pytest
| 730
|
`allure.dynamic.title` does not create an appropriate title if fixture steps fail
|
**Describe the bug**
No appropriate title is generated when a test's fixture fails
**To Reproduce**
1. Create failed fixture
2. Add fixture to the test
3. Add parametrized test with title param
4. Add `allure.dynamic.title` based on parametrized title
5. Run the test
**Expected behavior**
Title of the failed test same as title in param
**Screenshots**
Normally passed test:
<img width="1440" alt="allure-dynamic-test-pass" src="https://user-images.githubusercontent.com/18244131/217164909-7e5d4b6f-4415-4065-9665-402d3ed73953.png">
Test failed in test body (like `assert False` statement inside the `test_case()`):
<img width="1440" alt="allure-dynamic-assert-fail" src="https://user-images.githubusercontent.com/18244131/217164919-34923e2f-d45a-4c28-9cdd-34c3d4e99c44.png">
**Actual behavior**
Test failed in any fixture passed to the `test_case(failed_fixture)`:
<img width="1440" alt="allure-dynamic-fixture-fail" src="https://user-images.githubusercontent.com/18244131/217164930-6a0166e7-41b8-4aed-ba6d-bd784a73c953.png">
Source code example including `allure.dynamic.title` with test params and fixtures:
<img width="646" alt="allure-dynamic-source" src="https://user-images.githubusercontent.com/18244131/217164955-f05e4570-071d-4614-839f-d6473445ef62.png">
**Environment:**
| Allure version | 2.20.1 |
| --- | --- |
| Test framework | pytest@7.2.1 |
| Allure adaptor | allure-pytest@2.12.0 |
|
closed
|
2023-02-07T06:30:19Z
|
2023-02-08T09:33:17Z
|
https://github.com/allure-framework/allure-python/issues/730
|
[
"theme:pytest"
] |
storenth
| 3
|
supabase/supabase-py
|
flask
| 681
|
Advice or example of mocking the client for tests
|
**Is your feature request related to a problem? Please describe.**
- I'm working on a FastAPI project that uses the `supabase-py` client.
- I am writing tests focused on API interaction (payload validation, return structure, etc).
- Many of the endpoints interact with the supabase client to fetch data, perform some transforms and return it.
- I want to mock the client and fake a query return - without actually calling Supabase.
- However, I cannot find an example of best practice on how to do this.
**Describe the solution you'd like**
- An example of how to mock the client to provide mocked query returns.
**Describe alternatives you've considered**
- Considered running local DB to run tests against - but feels heavy
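Not an official recommendation, but one pattern that works for pure API tests is a plain `MagicMock` that mimics the client's fluent query chain; every chained call returns the same mock, so any combination of `.select()/.eq()/...` still ends in `.execute()` returning canned data:
```python
from unittest.mock import MagicMock

def make_fake_supabase(rows):
    """Build a stand-in for the supabase client usable in FastAPI tests."""
    client = MagicMock()
    response = MagicMock()
    response.data = rows
    client.table.return_value = client   # .table(...) -> same mock
    client.select.return_value = client  # .select(...) -> same mock
    client.eq.return_value = client      # .eq(...) -> same mock
    client.execute.return_value = response
    return client

def test_query_returns_canned_rows():
    fake = make_fake_supabase([{"id": 1, "name": "alice"}])
    result = fake.table("users").select("*").eq("id", 1).execute()
    assert result.data[0]["name"] == "alice"
```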
|
closed
|
2024-02-02T00:16:46Z
|
2024-10-26T08:14:12Z
|
https://github.com/supabase/supabase-py/issues/681
|
[] |
philliphartin
| 6
|
babysor/MockingBird
|
deep-learning
| 614
|
HiFi-GAN training cannot run directly on CPU, and after modifying the code training cannot be resumed
|
**Summary [one-sentence description of the issue]**
A clear and concise description of what the issue is.
HiFi-GAN training cannot run directly on CPU, and after modifying the code training cannot be resumed.
**Env & To Reproduce [environment and reproduction]**
Describe the environment, code version, and model you used.
Latest environment and code version; model: hifigan.
**Screenshots [if any]**
If applicable, add screenshots to help
In MockingBird-main\vocoder\hifigan\train.py I changed line 41 from torch.cuda.manual_seed(h.seed) to torch.manual_seed(h.seed),
and line 42 from device = torch.device('cuda:{:d}'.format(rank)) to device = torch.device('cuda' if torch.cuda.is_available() else 'cpu').
After that, training runs on CPU, but each time the same code is run it cannot resume from the previous training run.
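A minimal device-agnostic sketch of those two changes, plus checkpoint loading with `map_location` so a run saved on GPU can be resumed on CPU (the checkpoint filename and state-dict key below are assumptions, not MockingBird's actual names):
```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
torch.manual_seed(1234)                 # stands in for h.seed
if torch.cuda.is_available():
    torch.cuda.manual_seed(1234)

# Resuming: map tensors saved on GPU onto the current device, otherwise a
# CPU-only machine cannot load a checkpoint written during a CUDA run.
# state = torch.load("g_latest.pt", map_location=device)
# generator.load_state_dict(state["generator"])
# generator.to(device)
```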
|
open
|
2022-06-12T01:52:36Z
|
2022-07-02T04:57:25Z
|
https://github.com/babysor/MockingBird/issues/614
|
[] |
HSQhere
| 2
|
huggingface/diffusers
|
deep-learning
| 10,872
|
[Feature request] Please add from_single_file support in SanaTransformer2DModel to support first Sana Apache licensed model
|
**Is your feature request related to a problem? Please describe.**
We all know the Sana model is very good, but unfortunately its license is restrictive.
Recently a Sana fine-tuned model was released under the Apache license. Unfortunately, SanaTransformer2DModel does not support from_single_file, so it cannot be used.
**Describe the solution you'd like.**
```python
import torch
from diffusers import SanaPipeline
from diffusers import SanaTransformer2DModel
model_path = "Efficient-Large-Model/Sana_1600M_1024px_MultiLing"
dtype = torch.float16
transformer = SanaTransformer2DModel.from_single_file (
"Swarmeta-AI/Twig-v0-alpha/Twig-v0-alpha-1.6B-2048x-fp16.pth",
torch_dtype=dtype,
)
pipe = SanaPipeline.from_pretrained(
pretrained_model_name_or_path=model_path,
transformer=transformer,
torch_dtype=dtype,
use_safetensors=True,
)
pipe.to("cuda")
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
inference_params = {
"prompt": "rose flower",
"negative_prompt": "",
"height": 1024,
"width": 1024,
"guidance_scale": 4.0,
"num_inference_steps": 20,
}
image = pipe(**inference_params).images[0]
image.save("sana.png")
```
```
(venv) C:\aiOWN\diffuser_webui>python sana_apache.py
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\sana_apache.py", line 6, in <module>
transformer = SanaTransformer2DModel.from_single_file (
AttributeError: type object 'SanaTransformer2DModel' has no attribute 'from_single_file'
```
**Describe alternatives you've considered.**
No alternatives available as far as I know
**Additional context.**
N.A.
|
closed
|
2025-02-23T11:36:21Z
|
2025-03-10T03:08:32Z
|
https://github.com/huggingface/diffusers/issues/10872
|
[
"help wanted",
"Good second issue",
"contributions-welcome",
"roadmap"
] |
nitinmukesh
| 5
|
SALib/SALib
|
numpy
| 102
|
Negative sobol indices
|
Hi,
I wonder how to interpret negative numbers for the values of sensitivity indices. For example in one of my results, a parameter x has S1 = -71.6 and ST = 0.52. Likewise I saw that the example here gives negative indices:
https://waterprogramming.wordpress.com/2013/08/05/running-sobol-sensitivity-analysis-using-salib/
I thought that Sobol sensitivity indices were by definition positive, since they are defined as a quotient of two variances (see https://en.wikipedia.org/wiki/Variance-based_sensitivity_analysis). Did I misunderstand something?
Best,
Jonatan
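For what it's worth, a small sketch (assuming the classic Saltelli sampling API; exact module paths may differ between SALib versions) showing that slightly negative first-order indices typically shrink toward zero as the sample size grows, which is the usual sign that they are numerical noise rather than meaningful values:
```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[0.0, 1.0]] * 3,
}

for n in (64, 4096):
    X = saltelli.sample(problem, n)
    Y = X[:, 0] + 0.1 * X[:, 1]          # x3 has no influence at all
    Si = sobol.analyze(problem, Y, print_to_console=False)
    # With small n the estimate for x3 can dip below zero; with large n it
    # settles near zero, as the theory says it should.
    print(n, np.round(Si["S1"], 4))
```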
|
closed
|
2016-09-26T08:38:39Z
|
2016-10-25T15:50:11Z
|
https://github.com/SALib/SALib/issues/102
|
[
"question"
] |
rJonatan
| 4
|
yzhao062/pyod
|
data-science
| 2
|
AUC score and precision score are different, why are they not the same?
|
```
from pyod.utils.data import evaluate_print
# evaluate and print the results
print("\nOn Training Data:")
evaluate_print(clf_name, y_true, y_scores)

# Output:
# On Training Data:
# KNN ROC:0.9352, precision @ rank n:0.568

from sklearn import metrics
print("Accuracy Score", round(metrics.accuracy_score(y_true, y_pred), 2))
print("Precision Score", round(metrics.precision_score(y_true, y_pred), 2))
print("Recall Score", round(metrics.recall_score(y_true, y_pred), 2))
print("F1 Score", round(metrics.f1_score(y_true, y_pred), 2))
print("Roc Auc score", round(metrics.roc_auc_score(y_true, y_pred), 2))

# Output:
# Accuracy Score 0.92
# Precision Score 0.55
# Recall Score 0.59
# F1 Score 0.57
# Roc Auc score 0.77
```
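Two things differ here, and the small sketch below (synthetic data, not your dataset) illustrates the main one: `evaluate_print` scores the raw outlier scores, while the sklearn calls above score the thresholded labels, so the ROC AUC computed on `y_scores` keeps the full ranking and the one computed on `y_pred` throws most of it away. (`precision @ rank n` is also a different metric from sklearn's `precision_score`.)
```python
import numpy as np
from sklearn import metrics

y_true = np.array([0, 0, 0, 1, 1])
y_scores = np.array([0.10, 0.30, 0.40, 0.35, 0.90])   # continuous outlier scores
y_pred = (y_scores > 0.5).astype(int)                  # thresholded labels

print(metrics.roc_auc_score(y_true, y_scores))  # ~0.83, uses the full ranking
print(metrics.roc_auc_score(y_true, y_pred))    # 0.75, ranking collapsed to 0/1
```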
|
closed
|
2018-06-04T19:35:54Z
|
2022-01-07T03:12:09Z
|
https://github.com/yzhao062/pyod/issues/2
|
[
"question"
] |
vchouksey
| 8
|
jupyterhub/repo2docker
|
jupyter
| 682
|
Provide 'podman' as a build backend
|
### Proposed change
Currently, only docker is available to build images. We should:
1. Abstract the interface that builds a Dockerfile
2. Implement docker (via docker-py) as the default implementation
3. Allow [podman](https://podman.io/) to be the second implementation, so we can make sure our design isn't docker specific.
podman supports building Dockerfiles directly, so this should be a fairly minimal change.
### Alternative options
We do (1) and (2) in core, but leave (3) to be a plugin.
### Who would use this feature?
Anyone who doesn't want to use the docker daemon. There are good technical and social reasons for this.
### How much effort will adding it take?
Abstracting the current docker calls behind an interface shouldn't be too much work. Podman is going to be interesting. It seems pretty production ready, but treats running on ubuntu as an extreme second class citizen - I haven't been able to get it to run yet. So that might be more challenging - making the case for keeping the podman build bits in an extension.
### Who can do this work?
You'd need some familiarity with the docker python client and Dockerfile.
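A very rough sketch of the abstraction proposed in (1)-(3), with illustrative class names that are not repo2docker's actual ones:
```python
import subprocess
from abc import ABC, abstractmethod

class ContainerBuilder(ABC):
    """Interface behind which repo2docker could hide the build engine."""

    @abstractmethod
    def build(self, context_dir: str, dockerfile: str, tag: str) -> None:
        ...

class DockerBuilder(ContainerBuilder):
    def build(self, context_dir, dockerfile, tag):
        import docker  # docker-py, the current default engine
        client = docker.from_env()
        client.images.build(path=context_dir, dockerfile=dockerfile, tag=tag)

class PodmanBuilder(ContainerBuilder):
    def build(self, context_dir, dockerfile, tag):
        # podman understands Dockerfiles directly, so the CLI is enough here
        subprocess.run(
            ["podman", "build", "-f", dockerfile, "-t", tag, context_dir],
            check=True,
        )
```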
|
closed
|
2019-05-20T20:42:47Z
|
2021-07-02T12:44:27Z
|
https://github.com/jupyterhub/repo2docker/issues/682
|
[
"enhancement",
"needs: discussion"
] |
yuvipanda
| 3
|
WZMIAOMIAO/deep-learning-for-image-processing
|
pytorch
| 574
|
yolo spp
|
Hi, I downloaded your project in full and ran the YOLO-SPP code; why does this part show as unresolved (highlighted in red)?

I changed it to `import torch_utils`, and the red highlighting did go away,
but when I run it, it turns into this:

What is causing this?
|
closed
|
2022-06-20T06:18:00Z
|
2022-06-22T08:45:22Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/574
|
[] |
yinshuisiyuan123
| 2
|
TarrySingh/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
|
scikit-learn
| 421
|
Will close as no description has been provided.
|
Will close as no description has been provided.
_Originally posted by @lorenzodb1 in https://github.com/Yelp/detect-secrets/issues/801#issuecomment-2064959387_
|
open
|
2024-04-26T17:42:54Z
|
2024-04-26T17:42:54Z
|
https://github.com/TarrySingh/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials/issues/421
|
[] |
Geraldine-Maes
| 0
|
ageitgey/face_recognition
|
machine-learning
| 605
|
Confidence matrix for recognition
|
* face_recognition version:1.2.3
* Python version: 3.5.2
* Operating System: Ubuntu 16.0.4
### Description
### What I Did
I built the model from the KNN classifier example file.
### Question
Can we have an API that provides the confidence of a detected face, so that we only mark a face as recognised above, say, 95% confidence?
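In the meantime, a rough sketch of how a confidence-like number can be derived from `face_recognition.face_distance`; the 0.6 tolerance and the linear mapping below are assumptions of this example, not part of the library:
```python
import numpy as np
import face_recognition

def best_match_confidence(known_encodings, unknown_encoding, tolerance=0.6):
    """Return (index of closest known face, crude confidence in [0, 1])."""
    distances = face_recognition.face_distance(known_encodings, unknown_encoding)
    best = int(np.argmin(distances))
    confidence = max(0.0, 1.0 - distances[best] / tolerance)  # heuristic mapping
    return best, confidence

# idx, conf = best_match_confidence(known_encodings, encoding)
# if conf >= 0.95:
#     print("recognised as", known_names[idx])
```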
|
open
|
2018-08-28T11:03:59Z
|
2018-08-28T11:03:59Z
|
https://github.com/ageitgey/face_recognition/issues/605
|
[] |
hemanthkumar3111
| 0
|
plotly/dash
|
data-visualization
| 2,288
|
Dynamic dcc.Dropdown clears `search_value`
|
**Describe your context**
```
dash 2.6.2
dash-bootstrap-components 1.2.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
Dynamic dcc.Dropdown clears `search_value` immediately when trying to search after the first value is selected. Moreover, it hides the selected value when searching for something different (e.g. searching 'a' hides the selected 'Item 3').
**Expected behavior**
dcc.Dropdown behaves in the same way when searching for the first and second time.
**Screenshots**
https://user-images.githubusercontent.com/20649678/198055941-86374cd2-2225-45bf-bf8b-b0a8154fbc50.mp4
**Minimal example**
```python
from dash import Dash, html, dcc, Input, Output, State
from dash.exceptions import PreventUpdate
app = Dash(__name__,)
items = [{'label': f'Item {i}', 'value': i, 'search': f'{i}'} for i in range(100)]
app.layout = html.Div(
children=[
dcc.Dropdown(
options=items[:10],
placeholder='Type to search',
multi=True,
id='dropdown',
style={'width': '350px'},
),
html.Div(
['Values: ', html.Br(), 'Search value: ',],
id='value-container',
style={'margin-top': '250px'},
),
]
)
@app.callback(
[
Output('dropdown', 'options'),
Output('dropdown', 'value'),
Output('dropdown', 'search_value'),
],
[Input('dropdown', 'search_value'),],
[State('dropdown', 'options'), State('dropdown', 'value'),],
)
def update_options(search_value, items_list, chosen_list):
if not search_value:
raise PreventUpdate
items_return = []
for i in items:
if (
search_value.lower() in i['search'].lower()
or search_value.lower() in i['label'].lower()
):
items_return.append(i)
if len(items_return) > 10:
return [items_return, chosen_list, search_value]
return [items_return, chosen_list, search_value]
@app.callback(
Output('value-container', 'children'),
[
Input('dropdown', 'options'),
Input('dropdown', 'value'),
Input('dropdown', 'search_value'),
],
)
def update_options2(options, value, search_value):
value = value if value else []
search_value = search_value if search_value else ''
return [
'Values: ' + ' '.join(str(v) for v in value),
html.Br(),
'Search value: ' + str(search_value),
]
if __name__ == '__main__':
app.run_server(debug=True)
```
|
closed
|
2022-10-26T14:49:59Z
|
2022-11-06T15:33:22Z
|
https://github.com/plotly/dash/issues/2288
|
[] |
AlimU11
| 2
|
PablocFonseca/streamlit-aggrid
|
streamlit
| 249
|
ColumnsAutoSizeMode.FIT_CONTENTS not work in second tab of st.tabs
|
Using the columns_auto_size_mode=ColumnsAutoSizeMode.FIT_CONTENTS parameter in the AgGrid() call seems to work for all my tables on my first tab;
but in my second tab, it sometimes reduces the first few columns of my AgGrid to minimum width.
And if I use the suppressColumnVirtualisation:True option, it reduces all the columns to minimum width.
**my first tab:**

**my second tab:**

here is my code(streamlit==1.29.0, streamlit-aggrid==0.3.3):
```
import streamlit as st
import pandas as pd
from st_aggrid import GridOptionsBuilder, AgGrid, ColumnsAutoSizeMode
data = {
'System Name': ['System A', 'System B', 'System C', 'System D'],
'Value 1': [10, 20, 30, 40],
'Value 2': [1, 2, 3, 4],
'Value 3': [10, 20, 30, 40],
'Value 4': [1, 2, 3, 4],
'Value 5': [10, 20, 30, 40],
'Value 6': [1, 2, 3, 4],
'Value 7': [10, 20, 30, 40],
'Value 8': [1, 2, 3, 4],
'Value 9': [10, 20, 30, 40],
'Value 10': [1, 2, 3, 4],
'Value 11': [10, 20, 30, 40],
'Value 12': [1, 2, 3, 4],
'Value 13': [10, 20, 30, 40],
'Value 14': [1, 2, 3, 4],
'Value 15': [10, 20, 30, 40],
'Value 16': [1, 2, 3, 4],
}
df = pd.DataFrame(data)
# gridOptions1
gb1 = GridOptionsBuilder.from_dataframe(df)
gridOptions1 = gb1.build()
# gridOptions2
gb2 = GridOptionsBuilder.from_dataframe(df)
other_options = {'suppressColumnVirtualisation': True}
gb2.configure_grid_options(**other_options)
gridOptions2 = gb2.build()
tab1, tab2 = st.tabs(['tab1', 'tab2'])
with tab1:
AgGrid(
df,
gridOptions=gridOptions1,
columns_auto_size_mode=ColumnsAutoSizeMode.FIT_CONTENTS,
key='1',
)
with tab2:
st.write('no FIT_CONTENTS and no suppressColumnVirtualisation')
AgGrid(
df,
gridOptions=gridOptions1,
key='2-1',
)
st.write('FIT_CONTENTS and no suppressColumnVirtualisation')
AgGrid(
df,
gridOptions=gridOptions1,
columns_auto_size_mode=ColumnsAutoSizeMode.FIT_CONTENTS,
key='2-2',
)
st.write('FIT_CONTENTS and suppressColumnVirtualisation is true')
AgGrid(
df,
gridOptions=gridOptions2,
columns_auto_size_mode=ColumnsAutoSizeMode.FIT_CONTENTS,
key='2-3',
)
```
|
open
|
2024-02-20T06:59:30Z
|
2024-03-21T17:22:32Z
|
https://github.com/PablocFonseca/streamlit-aggrid/issues/249
|
[
"bug"
] |
spiritdncyer
| 0
|