**Dataset columns**

| column | type | range |
|---|---|---|
| repo_name | string | length 9–75 |
| topic | string | 30 classes |
| issue_number | int64 | 1–203k |
| title | string | length 1–976 |
| body | string | length 0–254k |
| state | string | 2 classes |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| url | string | length 38–105 |
| labels | list | length 0–9 |
| user_login | string | length 1–39 |
| comments_count | int64 | 0–452 |
## ARM-DOE/pyart · data-visualization · #872: Speed up of "estimate_noise_hs74"
Hi all,
In my own version of Py-ART I have modified the code of pyart/util/hildebrand_sekhon.py in order to speed it up. Check out:
https://github.com/meteoswiss-mdr/pyart/blob/master/pyart/util/hildebrand_sekhon.py
The resulting code is significantly faster. You may want to add it to Py-ART.
Greetings from Switzerland!
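For context, the kind of vectorization such a speed-up typically relies on can be sketched as follows. This is a minimal sketch of the Hildebrand–Sekhon (1974) white-noise criterion, not the actual code from either repository; the function name and the `navg` parameter are assumptions:

```python
import numpy as np

def hs74_noise_sketch(spectrum, navg=1):
    """Vectorized sketch of Hildebrand-Sekhon noise estimation.

    Sort the spectral powers, then use cumulative sums so the
    white-noise test needs no Python-level loop.
    """
    sorted_spec = np.sort(np.asarray(spectrum, dtype=float))
    n = np.arange(1, sorted_spec.size + 1)
    cum_mean = np.cumsum(sorted_spec) / n
    cum_var = np.cumsum(sorted_spec ** 2) / n - cum_mean ** 2
    # For white noise averaged over `navg` spectra: variance <= mean^2 / navg.
    is_noise = navg * cum_var <= cum_mean ** 2
    # Largest prefix of the sorted powers that still satisfies the criterion.
    idx = np.nonzero(is_noise)[0]
    k = int(idx.max()) if idx.size else 0
    return cum_mean[k]  # estimated mean noise power
```

Replacing a per-point Python loop with `np.cumsum` over the sorted spectrum is the usual source of the speed-up.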
*closed · created 2019-10-18T11:47:35Z · updated 2019-11-11T15:36:59Z · labels [] · meteoswiss-mdr · 3 comments*
https://github.com/ARM-DOE/pyart/issues/872

---
## wagtail/wagtail · django · #12718: Sass @import to @use module system migration
Part of #12717. We need to migrate our Sass code to the language’s `@use` module system, as `@import` is deprecated. `@use` is more explicit in its behavior and more featureful, but there are some aspects of how we use `@import` that will require more than a 1:1 refactoring.
See [wagtail.org#483](https://github.com/wagtail/wagtail.org/pull/483) as an example of conducting this migration.
More info from Sass: https://sass-lang.com/d/import
### Working on this
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
The first step we would recommend is to review the project’s use of `@import` and the documentation on `@use`, and devise a migration plan here in the comments. The majority of the `@import` rules in Wagtail can be switched to `@use` as-is, but there are a few cases where this won’t be possible:
- For our loading of variables, mixins, and functions, we will need to add many `@use` rules to move away from loading them globally.
- For dependencies that also use Sass, we may have to review their support for the new module system and make more involved changes.
*closed · created 2024-12-19T13:32:03Z · updated 2025-02-24T15:17:12Z · labels ["type:Cleanup/Optimisation", "Compatibility"] · thibaudcolas · 7 comments*
https://github.com/wagtail/wagtail/issues/12718

---
## wkentaro/labelme · deep-learning · #1002: Program shuts down after clicking ‘OK' to change the label (latest Windows release)
The program shuts down after clicking ‘OK' to change the label in the latest Windows release. It happened on all of my Windows 10 devices.
```
Traceback (most recent call last):
File "d:\libraries\anaconda3\lib\site-packages\labelme\app.py", line 1075, in editLabel
self._update_shape_color(shape)
File "d:\libraries\anaconda3\lib\site-packages\labelme\app.py", line 1155, in _update_shape_color
r, g, b = self._get_rgb_by_label(shape.label)
File "d:\libraries\anaconda3\lib\site-packages\labelme\app.py", line 1165, in _get_rgb_by_label
item = self.uniqLabelList.findItemsByLabel(label)[0]
IndexError: list index out of range
```
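The final frame suggests `findItemsByLabel` returned an empty list for the edited label, so the bare `[0]` raised `IndexError`. A generic guard for this pattern (a sketch only; `find_items` and the fallback value are hypothetical stand-ins, not labelme's actual fix):

```python
def safe_first_item(find_items, label, fallback=None):
    """Return the first lookup result for `label`, or `fallback`
    instead of letting an unguarded [0] raise IndexError when the
    lookup comes back empty."""
    items = find_items(label)
    return items[0] if items else fallback
```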
*open · created 2022-03-14T01:53:06Z · updated 2024-12-30T12:47:56Z · labels ["issue::bug"] · Zaoyee · 1 comment*
https://github.com/wkentaro/labelme/issues/1002

---
## flasgger/flasgger · api · #347: Return list as response
Hello everyone,
the example at https://github.com/flasgger/flasgger/blob/master/README.md#using-marshmallow-schemas shows how to use a nested field to define a list in the response. It returns e.g.
```
{
'cmyk': ['cian', 'magenta', 'yellow', 'black']
}
```
How can I modify it such that it returns directly a list? For the above example, the expected return value would be
```
['cian', 'magenta', 'yellow', 'black']
```
Is that something that would be considered wrong from a design perspective?
I found a way to do it, but it breaks the apidocs overview. I want a view that returns a list of orders. Currently, this is what I did
```python
from flask import jsonify
from flasgger import Schema, SwaggerView, fields

class OrderSchema(Schema):
    id = fields.String(description="Identifier")

class OrdersView(SwaggerView):
    decorators = None
    tags = ["user"]
    summary = "List orders"
    description = "Returns a list of all orders"
    parameters = None
    responses = {
        200: {"description": "OK", "schema": OrderSchema}
    }
    validation = False

    def get(self):
        orders = Orders()
        result = OrderSchema().dump(orders, many=True)
        return jsonify(result)
```
It works, but in the apidocs, the response value is now not a list, but only one OrderSchema object.
I could introduce an OrderListSchema that has a nested OrderSchema field, but then the return value would not be the list directly, but a dict with the list under the field's name.
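One possible way to document a top-level array response is to declare the schema as a plain Swagger `array` type (a sketch in raw Swagger 2.0 dict form; whether flasgger resolves a nested marshmallow schema inside an `items` key may depend on the version, so the dict-based item schema here is an assumption, not the library's confirmed behavior):

```python
# Plain-dict response spec: the 200 response is a JSON array whose
# items follow the order schema.
order_item = {
    "type": "object",
    "properties": {"id": {"type": "string", "description": "Identifier"}},
}

responses = {
    200: {
        "description": "OK",
        "schema": {"type": "array", "items": order_item},
    }
}
```

With `type: array` at the top level, the documented response body is the list itself rather than an object wrapping it.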
*closed · created 2019-11-20T09:40:29Z · updated 2019-12-18T08:51:17Z · labels [] · sschiessl-bcp · 2 comments*
https://github.com/flasgger/flasgger/issues/347

---
## babysor/MockingBird · deep-learning · #735: Could not load symbol cublasGetSmCountTarget from cublas64_11.dll. Error code 127
During training I get the message:
Could not load symbol cublasGetSmCountTarget from cublas64_11.dll. Error code 127
Training still continues, though. What is going on here?
*open · created 2022-09-06T15:03:56Z · updated 2022-09-06T15:03:56Z · labels [] · kensukwok · 0 comments*
https://github.com/babysor/MockingBird/issues/735

---
## matplotlib/matplotlib · matplotlib · #29319: [Bug]: Legend with location set to ‘best’ overlaps with the title when the title is moved down
### Bug summary
If I adjust the y-position of a plot title to move it down and then add a legend with `loc` set to `best`, the legend overlaps with the title.
### Code for reproduction
```Python
import matplotlib.pyplot as plt
plt.close('all')
plt.plot((1,2,3), label='Just a very long legend')
plt.title('Just a very long title 1234567890', y=0.9)
plt.legend(loc='best')
plt.show()
```
### Actual outcome

### Expected outcome
I expect the legend to be placed in a location that does not overlap with the title.
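Until the `'best'` placement logic accounts for titles, one workaround (a sketch, not an official fix) is to pin the legend explicitly, since `loc='best'` only avoids plotted artists:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot((1, 2, 3), label="Just a very long legend")
ax.set_title("Just a very long title 1234567890", y=0.9)
# Pin the legend away from the lowered title instead of loc='best'.
legend = ax.legend(loc="lower right")
```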
### Additional information
_No response_
### Operating system
Linux
### Matplotlib Version
3.9.4
### Matplotlib Backend
qtagg
### Python version
3.12.7
### Jupyter version
_No response_
### Installation
pip
*open · created 2024-12-16T08:25:31Z · updated 2024-12-17T10:14:19Z · labels ["Documentation: API"] · aweinstein · 6 comments*
https://github.com/matplotlib/matplotlib/issues/29319

---
## AUTOMATIC1111/stable-diffusion-webui · deep-learning · #16199: [Feature Request]: Add Ascend NPU npu_fusion_attention to accelerate training
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
- a simple description of npu_fusion_attention operator
- add Ascend NPU npu_fusion_attention to accelerate training
### Proposed workflow
1. Add a description of the npu_fusion_attention operator
2. Add Ascend NPU npu_fusion_attention to accelerate training
### Additional information
_No response_
*open · created 2024-07-12T08:52:41Z · updated 2024-07-24T08:48:30Z · labels ["enhancement"] · kevin19891229 · 2 comments*
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16199

---
## InstaPy/InstaPy · automation · #5789: Bot not working at all with docker-compose
See error log below (username and password are correct):
```
InstaPy Version: 0.6.10
._. ._. ._. ._. ._. ._. ._.
Workspace in use: "/code/InstaPyN"
2445kb [00:03, 754.76kb/s]
Traceback (most recent call last):
File "docker_quickstart.py", line 19, in <module>
bot = InstaPy(username=insta_username, password=insta_password, headless_browser=False)
File "/usr/local/lib/python3.7/site-packages/instapy/instapy.py", line 323, in __init__
self.logger,
File "/usr/local/lib/python3.7/site-packages/instapy/browser.py", line 120, in set_selenium_local_session
driver_path = geckodriver_path or get_geckodriver()
File "/usr/local/lib/python3.7/site-packages/instapy/browser.py", line 36, in get_geckodriver
sym_path = gdd.download_and_install()[1]
File "/usr/local/lib/python3.7/site-packages/webdriverdownloader/webdriverdownloader.py", line 215, in download_and_install
os.symlink(symlink_src, symlink_target)
OSError: [Errno 71] Protocol error: '/code/InstaPyN/assets/gecko/v0.27.0/geckodriver-v0.27.0-linux64/geckodriver' -> '/code/InstaPyN/assets/geckodriver'
InstaPy Version: 0.6.10
._. ._. ._. ._. ._. ._. ._.
Workspace in use: "/code/InstaPyN"
Traceback (most recent call last):
File "docker_quickstart.py", line 19, in <module>
bot = InstaPy(username=insta_username, password=insta_password, headless_browser=False)
File "/usr/local/lib/python3.7/site-packages/instapy/instapy.py", line 323, in __init__
self.logger,
File "/usr/local/lib/python3.7/site-packages/instapy/browser.py", line 120, in set_selenium_local_session
driver_path = geckodriver_path or get_geckodriver()
File "/usr/local/lib/python3.7/site-packages/instapy/browser.py", line 36, in get_geckodriver
sym_path = gdd.download_and_install()[1]
File "/usr/local/lib/python3.7/site-packages/webdriverdownloader/webdriverdownloader.py", line 215, in download_and_install
os.symlink(symlink_src, symlink_target)
OSError: [Errno 71] Protocol error: '/code/InstaPyN/assets/gecko/v0.27.0/geckodriver-v0.27.0-linux64/geckodriver' -> '/code/InstaPyN/assets/geckodriver'
```
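The failure is `os.symlink` raising `OSError` 71 on the mounted volume, which some Docker bind mounts and network filesystems do not support. A generic fallback for this pattern (a sketch of the idea only, not InstaPy's or webdriverdownloader's actual code):

```python
import os
import shutil

def symlink_or_copy(src, dst):
    """Try to symlink src -> dst; on filesystems that reject symlinks
    (e.g. some Docker bind mounts), fall back to copying the file."""
    try:
        os.symlink(src, dst)
    except OSError:
        shutil.copy2(src, dst)
```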
*closed · created 2020-09-19T16:01:34Z · updated 2021-05-10T04:46:53Z · labels ["wontfix"] · wsdt · 2 comments*
https://github.com/InstaPy/InstaPy/issues/5789

---
## dnouri/nolearn · scikit-learn · #275: Convolutional Autoencoder NaN
Hi! I'm trying to make a convolutional autoencoder based off of VGG-S (https://github.com/Lasagne/Recipes/blob/master/modelzoo/vgg_cnn_s.py).
For some reason, learning always converges to NaN almost immediately. I think my architecture correctly follows VGG-S, so I'm not sure why this is happening.
<img width="504" alt="screenshot 2016-06-09 10 07 57" src="https://cloud.githubusercontent.com/assets/1855931/15938905/23c4c5a8-2e2a-11e6-948a-926c32976131.png">
Here's my code (https://gist.github.com/sampepose/ccb58557271cff10d182f4ab8282f3b4).
*closed · created 2016-06-09T17:09:39Z · updated 2016-08-28T03:10:50Z · labels [] · sampepose · 2 comments*
https://github.com/dnouri/nolearn/issues/275

---
## datapane/datapane · data-visualization · #30: Styling a comma-formatted pandas DataFrame
Hi,
I am using a Datapane table populated with a comma-formatted pandas DataFrame. The issue: out of 6 columns, one is transformed into a date. The other 5 columns display correctly with comma formatting.
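A minimal sketch of one way to keep the comma formatting stable (the column names are invented for illustration, and this does not claim to be Datapane's fix): format the numbers explicitly as strings, so the renderer has no raw numeric value left to reinterpret as a date.

```python
import pandas as pd

df = pd.DataFrame({"sales": [1234567, 89012]})
# Explicit string formatting; the resulting object-dtype column gives
# a downstream renderer nothing to reinterpret as another type.
df["sales_fmt"] = df["sales"].map("{:,}".format)
```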
*closed · created 2020-10-23T16:22:21Z · updated 2021-01-12T16:48:34Z · labels ["bug"] · kumarmisra · 2 comments*
https://github.com/datapane/datapane/issues/30

---
## StackStorm/st2 · automation · #6215: Inquiries with invalid schema go to blank page
## SUMMARY
If there is any issue with the JSON schema for an inquiry, clicking on the inquiry in the UI sends you to a blank page.
### STACKSTORM VERSION
st2 3.8.1, on Python 3.9.13
##### OS, environment, install method
Docker on Ubuntu server
## Steps to reproduce the problem
Provide an invalid JSON format to an Inquiry, then go to Inquiries in the web UI and click on that newly created Inquiry
## Expected Results
Unsure, but I'd at least expect an error to be shown on the page
## Actual Results
Just a blank page
Thanks!
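In the meantime, a pre-flight check before creating the inquiry avoids handing the UI a schema that does not even parse. A sketch using only the standard library (st2 itself may validate far more than this):

```python
import json

def parses_as_json(text):
    """Return True if `text` is syntactically valid JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False
```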
*open · created 2024-06-26T06:36:47Z · updated 2024-06-26T11:03:13Z · labels [] · lexiismadd · 1 comment*
https://github.com/StackStorm/st2/issues/6215

---
## scikit-learn-contrib/metric-learn · scikit-learn · #222: Deprecate use_pca for LMNN
There is still a `use_pca` attribute for LMNN, that needs to be deprecated
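The usual scikit-learn-style deprecation cycle could look like the following sketch (the sentinel default and the warning message are assumptions, not metric-learn's actual implementation):

```python
import warnings

class LMNN:
    def __init__(self, use_pca="deprecated"):
        # Warn only when the caller explicitly passes use_pca;
        # the parameter no longer has any effect.
        if use_pca != "deprecated":
            warnings.warn(
                "'use_pca' is deprecated and will be removed in a "
                "future release; it has no effect.",
                DeprecationWarning,
            )
```

The string sentinel distinguishes "caller passed the argument" from "caller relied on the default", so existing code keeps working for one release while being nudged to stop passing the parameter.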
*closed · created 2019-06-17T07:15:10Z · updated 2019-07-04T06:48:28Z · labels [] · wdevazelhes · 0 comments*
https://github.com/scikit-learn-contrib/metric-learn/issues/222

---
## proplot-dev/proplot · data-visualization · #137: Would you add the "readshapefile" method in proplot?
### Description
[Description of the bug or feature.]
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
# your code here
# we should be able to copy-paste this into python and exactly reproduce your bug
```
**Expected behavior**: [What you expected to happen]
**Actual behavior**: [What actually happened]
### Equivalent steps in matplotlib
Please make sure this bug is related to a specific proplot feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
# your code here, if applicable
```
### Proplot version
Paste the result of `import proplot; print(proplot.version)` here.
*closed · created 2020-04-07T04:14:20Z · updated 2020-04-22T23:15:45Z · labels ["feature"] · sfhua · 2 comments*
https://github.com/proplot-dev/proplot/issues/137

---
## JaidedAI/EasyOCR · pytorch · #386: build failed on AArch64, Fedora 33
[jw@cn05 easyocr]$ sudo python3 setup.py install --verbose
running install
running bdist_egg
running egg_info
writing easyocr.egg-info/PKG-INFO
writing dependency_links to easyocr.egg-info/dependency_links.txt
writing entry points to easyocr.egg-info/entry_points.txt
writing requirements to easyocr.egg-info/requires.txt
writing top-level names to easyocr.egg-info/top_level.txt
reading manifest file 'easyocr.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'LICENSE.txt'
writing manifest file 'easyocr.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-aarch64/egg
running install_lib
running build_py
creating build/bdist.linux-aarch64/egg
creating build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/__init__.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/cli.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/config.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/craft.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/craft_utils.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/detection.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/easyocr.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/imgproc.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/recognition.py -> build/bdist.linux-aarch64/egg/easyocr
copying build/lib/easyocr/utils.py -> build/bdist.linux-aarch64/egg/easyocr
creating build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ab_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/abq_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ady_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/af_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ang_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ar_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/as_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ava_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/az_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/be_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/bg_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/bh_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/bho_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/bn_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/bs_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ch_pin_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ch_sim_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ch_tra_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/che_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/cs_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/cy_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/da_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/dar_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/de_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/en_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/es_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/et_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/fa_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/fr_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ga_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/gom_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/he_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/hi_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/hr_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/hu_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/id_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/inh_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/is_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/it_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ja_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ja_char2.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ja_punc.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/kbd_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/kn.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/kn_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ko_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ku_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/la_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/lbe_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/lez_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/lt_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/lv_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/mah_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/mai_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/mi_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ml_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/mn_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/mr_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ms_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/mt_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ne_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/new_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/nl_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/no_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/oc_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/pb_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/pi_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/pl_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/pt_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ro_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/rs_cyrillic_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/rs_latin_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ru_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/sck_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/sk_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/sl_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/sq_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/sv_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/sw_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ta_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/tab_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/te.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/te_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/th_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/tl_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/tr_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ug_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/uk_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/ur_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/uz_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
copying build/lib/easyocr/character/vi_char.txt -> build/bdist.linux-aarch64/egg/easyocr/character
creating build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ab.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/af.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ar.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/az.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/be.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/bg.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/bn.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/bs.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ch-pin-syl.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ch_pin.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/cs.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/cy.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/da.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/de.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/en.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/es.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/et.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/fa.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/fr.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ga.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/he.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/hi.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/hr.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/hu.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/id.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/is.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/it.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ja.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/kn.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ko.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ku.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/la.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/lt.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/lv.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/mi.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ml.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/mn.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/mr.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ms.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/mt.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ne.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/nl.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/no.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/oc.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/pb.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/pi.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/pl.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/pt.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ro.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/rs_cyrillic.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/rs_latin.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ru.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/sk.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/sl.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/sq.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/sv.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/sw.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ta.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/te.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/th.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/tl.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/tr.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ug.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/uk.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/ur.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/uz.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
copying build/lib/easyocr/dict/vi.txt -> build/bdist.linux-aarch64/egg/easyocr/dict
creating build/bdist.linux-aarch64/egg/easyocr/model
copying build/lib/easyocr/model/__init__.py -> build/bdist.linux-aarch64/egg/easyocr/model
copying build/lib/easyocr/model/model.py -> build/bdist.linux-aarch64/egg/easyocr/model
copying build/lib/easyocr/model/modules.py -> build/bdist.linux-aarch64/egg/easyocr/model
copying build/lib/easyocr/model/vgg_model.py -> build/bdist.linux-aarch64/egg/easyocr/model
byte-compiling build/bdist.linux-aarch64/egg/easyocr/__init__.py to __init__.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/cli.py to cli.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/config.py to config.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/craft.py to craft.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/craft_utils.py to craft_utils.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/detection.py to detection.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/easyocr.py to easyocr.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/imgproc.py to imgproc.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/recognition.py to recognition.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/utils.py to utils.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/model/__init__.py to __init__.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/model/model.py to model.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/model/modules.py to modules.cpython-39.pyc
byte-compiling build/bdist.linux-aarch64/egg/easyocr/model/vgg_model.py to vgg_model.cpython-39.pyc
creating build/bdist.linux-aarch64/egg/EGG-INFO
copying easyocr.egg-info/PKG-INFO -> build/bdist.linux-aarch64/egg/EGG-INFO
copying easyocr.egg-info/SOURCES.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying easyocr.egg-info/dependency_links.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying easyocr.egg-info/entry_points.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying easyocr.egg-info/requires.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
copying easyocr.egg-info/top_level.txt -> build/bdist.linux-aarch64/egg/EGG-INFO
zip_safe flag not set; analyzing archive contents...
easyocr.__pycache__.config.cpython-39: module references __file__
creating 'dist/easyocr-1.2.5.1-py3.9.egg' and adding 'build/bdist.linux-aarch64/egg' to it
removing 'build/bdist.linux-aarch64/egg' (and everything under it)
Processing easyocr-1.2.5.1-py3.9.egg
removing '/usr/local/lib/python3.9/site-packages/easyocr-1.2.5.1-py3.9.egg' (and everything under it)
creating /usr/local/lib/python3.9/site-packages/easyocr-1.2.5.1-py3.9.egg
Extracting easyocr-1.2.5.1-py3.9.egg to /usr/local/lib/python3.9/site-packages
easyocr 1.2.5.1 is already the active version in easy-install.pth
Installing easyocr script to /usr/local/bin
Installed /usr/local/lib/python3.9/site-packages/easyocr-1.2.5.1-py3.9.egg
Processing dependencies for easyocr==1.2.5.1
Searching for PyYAML
Reading https://pypi.org/simple/PyYAML/
Download error on https://pypi.org/simple/PyYAML/: [Errno -2] Name or service not known -- Some packages may not be found!
Couldn't find index page for 'PyYAML' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
Download error on https://pypi.org/simple/: [Errno -2] Name or service not known -- Some packages may not be found!
No local packages or working download links found for PyYAML
error: Could not find suitable distribution for Requirement.parse('PyYAML')
[jw@cn05 easyocr]$ pip3 install pyyaml
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: pyyaml in /home/jw/.local/lib/python3.9/site-packages (5.1)
[jw@cn05 easyocr]$
|
closed
|
2021-03-03T10:52:30Z
|
2021-06-22T10:34:25Z
|
https://github.com/JaidedAI/EasyOCR/issues/386
|
[] |
LutzWeischerFujitsu
| 1
|
matplotlib/matplotlib
|
data-visualization
| 29,534
|
[Bug]: missing graph
|
### Bug summary
Good day, I'm having an issue with my graphs not showing after running my command; I only get axes but no graph

### Code for reproduction
```Python
Gby_plt.plot()
```
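For context, a minimal sketch of the same call pattern (hedged: `Gby_plt` is assumed to be a pandas groupby/Series result, which the original code does not show). In a plain script the figure only appears after `plt.show()` is called:

```python
# Minimal sketch; "Gby_plt" here is a hypothetical stand-in for the
# user's grouped data, built from toy values.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"year": [2020, 2020, 2021, 2021], "value": [1, 2, 3, 4]})
Gby_plt = df.groupby("year")["value"].sum()

ax = Gby_plt.plot()                 # draws the line onto the current axes
assert len(ax.get_lines()) == 1     # data is plotted even before display
plt.show()                          # in a script, this is what displays the figure
```

If the axes render but no line appears, it is also worth checking that the grouped result is numeric and non-empty.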
### Actual outcome

### Expected outcome

### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.9.2
### Matplotlib Backend
_No response_
### Python version
3.10.1
### Jupyter version
_No response_
### Installation
None
|
open
|
2025-01-28T16:37:05Z
|
2025-01-29T17:58:27Z
|
https://github.com/matplotlib/matplotlib/issues/29534
|
[
"Community support"
] |
Gidman21
| 2
|
tflearn/tflearn
|
tensorflow
| 827
|
The published RNN pixels example is not working, kindly advise
|
The tflearn library example is failing at creating the first LSTM layer itself.
How to reproduce the error:
Run RNN pixels example https://github.com/tflearn/tflearn/blob/master/examples/images/rnn_pixels.py
Installations:
tensorflow==1.1.0-rc2
tflearn==0.3.1 and 0.3.2 (both tested)
Error:
```
>>> net = tflearn.lstm(net, 128, return_seq=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pandit/.local/lib/python2.7/site-packages/tflearn/layers/recurrent.py", line 222, in lstm
restore=restore, reuse=reuse)
File "/home/pandit/.local/lib/python2.7/site-packages/tflearn/layers/recurrent.py", line 499, in __init__
self.trainable = trainable
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/layers/base.py", line 124, in __setattr__
super(_Layer, self).__setattr__(name, value)
AttributeError: can't set attribute
>>> net = tflearn.lstm(net, 128)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/pandit/.local/lib/python2.7/site-packages/tflearn/layers/recurrent.py", line 222, in lstm
restore=restore, reuse=reuse)
File "/home/pandit/.local/lib/python2.7/site-packages/tflearn/layers/recurrent.py", line 499, in __init__
self.trainable = trainable
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/layers/base.py", line 124, in __setattr__
super(_Layer, self).__setattr__(name, value)
AttributeError: can't set attribute
```
My own similar script works fine within a virtual environment with tf 0.12 and tflearn 0.2.1. Apparently tflearn 0.3 needs updating in this regard?
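The failure pattern itself can be reproduced without TensorFlow: TF 1.1's layer base class defines `trainable` as a read-only property, so tflearn's plain assignment `self.trainable = trainable` raises. A hedged pure-Python sketch (class names hypothetical):

```python
# Hypothetical sketch of the conflict: a base-class property with no setter
# makes plain attribute assignment in a subclass raise AttributeError,
# mirroring TF 1.1's _Layer.trainable vs tflearn's `self.trainable = ...`.
class Base:
    @property
    def trainable(self):     # read-only property, like TF 1.1's _Layer
        return True

class Sub(Base):
    def __init__(self):
        try:
            self.trainable = False   # the same kind of line that fails in tflearn
        except AttributeError as e:
            self.error = str(e)      # e.g. "can't set attribute" on older Pythons

s = Sub()
```

Upgrading to a tflearn release built against your TensorFlow version, or pinning TensorFlow, are the usual workarounds.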
|
open
|
2017-07-09T16:46:21Z
|
2017-07-10T16:01:22Z
|
https://github.com/tflearn/tflearn/issues/827
|
[] |
vedhas
| 0
|
matterport/Mask_RCNN
|
tensorflow
| 2,820
|
Iou mask
|
My question is about evaluating mask IoU when computing AP at different thresholds.
Should the predicted 28x28 mask be used directly to evaluate AP, or should it be resized to the original size of the ROI before computing AP?
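For reference, COCO-style mask AP is computed on masks at image/ROI resolution: the 28x28 prediction is resized to the ROI, thresholded, and only then compared with the ground truth. A small numpy sketch of binary mask IoU (toy masks, hypothetical sizes):

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU of two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# toy 28x28-style mask upscaled 4x to a hypothetical 112x112 ROI
small = np.zeros((28, 28), np.uint8)
small[7:21, 7:21] = 1
roi_pred = np.kron(small, np.ones((4, 4), np.uint8))  # crude nearest-neighbour upscale
roi_gt = np.zeros((112, 112), np.uint8)
roi_gt[28:84, 28:84] = 1
print(mask_iou(roi_pred, roi_gt))  # 1.0 here, since the upscaled mask matches exactly
```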
|
closed
|
2022-05-01T13:37:25Z
|
2022-05-01T13:43:05Z
|
https://github.com/matterport/Mask_RCNN/issues/2820
|
[] |
Nawaffarhan
| 1
|
nonebot/nonebot2
|
fastapi
| 3,221
|
Plugin: SuggarChat OpenAI-protocol chat plugin
|
### PyPI project name
nonebot-plugin-suggarchat
### Plugin import package name
nonebot_plugin_suggarchat
### Tags
[{"label":"ChatBot","color":"#ea5252"},{"label":"OpenAI","color":"#00ffe3"},{"label":"聊天","color":"#0067ff"}]
### Plugin test
- [ ] Tick the checkbox on the left to re-run the plugin test
|
closed
|
2024-12-29T13:55:28Z
|
2025-02-20T14:56:23Z
|
https://github.com/nonebot/nonebot2/issues/3221
|
[
"Plugin",
"Publish"
] |
JohnRichard4096
| 11
|
strawberry-graphql/strawberry
|
asyncio
| 3,343
|
Schema visibility filters
|
I'm opening a new issue to spec out the API, and also to write down what we should take into account to make this happen.
Visibility filters will be a feature that allows hiding fields (or types) based on the current request. This is different from #3274, where we allow people to customise fields at schema build time.
## API
This is to be defined, I just put a random thought here for now:
```python
@strawberry.type
class MyUser:
id: str
email: str = strawberry.field(...) # do we use tags? or do we go with functions?
```
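Whatever the final syntax ends up being, the underlying mechanism is a per-request predicate per field; a hedged pure-Python sketch (names hypothetical, not Strawberry's API):

```python
# Hypothetical sketch of request-time field visibility (not Strawberry's API):
# each field carries a predicate over the request context; hidden fields are
# dropped both from execution and from introspection-style listings.
FIELD_VISIBILITY = {
    "id": lambda ctx: True,
    "email": lambda ctx: ctx.get("is_staff", False),
}

def visible_fields(ctx):
    return [name for name, pred in FIELD_VISIBILITY.items() if pred(ctx)]

print(visible_fields({"is_staff": False}))  # ['id']
print(visible_fields({"is_staff": True}))   # ['id', 'email']
```

The performance concern below then becomes: can this predicate be evaluated cheaply at request time without rebuilding the schema?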
## Considerations
This should be fast, we can't afford to do a lot of work at request time, which I guess means we can't build the schema at request time. @erikwrede @skilkis @bellini666 this could be a cool conversation to have 😊
Security: this should be secure. We should make sure we cover all the execution paths and never allow running resolvers when they should be hidden. We also need to make sure we do the right thing when introspecting the schema **and** when printing the schema (where we won't have the request context)
## Help us
If you want this feature, please let us know, I'm also happy to hop on calls, and please consider sponsoring us, especially if your company needs this, there's a badge at the bottom to sponsor this issue, but we also have GitHub sponsorship enabled 😊
Related #361 #3274 #2609
|
open
|
2024-01-16T11:36:50Z
|
2025-03-20T15:56:34Z
|
https://github.com/strawberry-graphql/strawberry/issues/3343
|
[
"feature-request"
] |
patrick91
| 0
|
ultralytics/ultralytics
|
computer-vision
| 18,896
|
Excuse me, how can I solve the problem that the confidence level is only 0.1 after switching to the ONNX model?
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug

### Environment
[2025/01/26 15:52:38] ppocr DEBUG: Namespace(alpha=1.0, alphacolor=(255, 255, 255), benchmark=False, beta=1.0, binarize=False, cls_batch_num=6, cls_image_shape='3, 48, 192', cls_model_dir='./weights/ocr/ch_ppocr_mobile_v2.0_cls_infer/', cls_thresh=0.9, cpu_threads=10, crop_res_save_dir='./output', det=True, det_algorithm='DB', det_box_type='quad', det_db_box_thresh=0.6, det_db_score_mode='fast', det_db_thresh=0.3, det_db_unclip_ratio=1.5, det_east_cover_thresh=0.1, det_east_nms_thresh=0.2, det_east_score_thresh=0.8, det_limit_side_len=960, det_limit_type='max', det_model_dir='/home/tony/.paddleocr/whl/det/en/en_PP-OCRv3_det_infer', det_pse_box_thresh=0.85, det_pse_min_area=16, det_pse_scale=1, det_pse_thresh=0, det_sast_nms_thresh=0.2, det_sast_score_thresh=0.5, draw_img_save_dir='./inference_results', drop_score=0.5, e2e_algorithm='PGNet', e2e_char_dict_path='./ppocr/utils/ic15_dict.txt', e2e_limit_side_len=768, e2e_limit_type='max', e2e_model_dir=None, e2e_pgnet_mode='fast', e2e_pgnet_score_thresh=0.5, e2e_pgnet_valid_set='totaltext', enable_mkldnn=False, formula=False, formula_algorithm='LaTeXOCR', formula_batch_num=1, formula_char_dict_path=None, formula_model_dir=None, fourier_degree=5, gpu_id=0, gpu_mem=500, help='==SUPPRESS==', image_dir=None, image_orientation=False, invert=False, ir_optim=True, kie_algorithm='LayoutXLM', label_list=['0', '180'], lang='en', layout=True, layout_dict_path=None, layout_model_dir=None, layout_nms_threshold=0.5, layout_score_threshold=0.5, max_batch_size=10, max_text_length=25, merge_no_span_structure=True, min_subgraph_size=15, mode='structure', ocr=True, ocr_order_method=None, ocr_version='PP-OCRv4', output='./output', page_num=0, precision='fp32', process_id=0, re_model_dir=None, rec=True, rec_algorithm='SVTR_LCNet', rec_batch_num=6, rec_char_dict_path='./weights/ocr/ppocr_keys_v1_fhhx.txt', rec_image_inverse=True, rec_image_shape='3, 48, 320', rec_model_dir='./weights/ocr/0510/', recovery=False, recovery_to_markdown=False, 
return_word_box=False, save_crop_res=False, save_log_path='./log_output/', savefile=False, scales=[8, 16, 32], ser_dict_path='../train_data/XFUND/class_list_xfun.txt', ser_model_dir=None, show_log=True, sr_batch_num=1, sr_image_shape='3, 32, 128', sr_model_dir=None, structure_version='PP-StructureV2', table=True, table_algorithm='TableAttn', table_char_dict_path=None, table_max_len=488, table_model_dir=None, total_process_num=1, type='ocr', use_angle_cls=False, use_dilation=False, use_gpu=True, use_mlu=False, use_mp=False, use_npu=False, use_onnx=False, use_pdf2docx_api=False, use_pdserving=False, use_space_char=True, use_tensorrt=False, use_visual_backbone=True, use_xpu=False, vis_font_path='./doc/fonts/simfang.ttf', warmup=False)
[2025/01/26 15:52:38] ppocr WARNING: The first GPU is used for inference by default, GPU ID: 0
[2025/01/26 15:52:39] ppocr WARNING: The first GPU is used for inference by default, GPU ID: 0
Ultralytics 8.3.66 🚀 Python-3.8.8 torch-1.13.1+cu117 CUDA:0 (NVIDIA GeForce RTX 3090, 24132MiB)
### Minimal Reproducible Example
```python
import cv2
import math
import copy
import torch
import time
import os
import onnxruntime as ort
from paddleocr import PaddleOCR
import concurrent.futures
# Convert the YOLO model to an ONNX model
def export_to_onnx(weights):
from ultralytics import YOLO
model = YOLO(weights)
try:
model.export(format='onnx')
print("ONNX model export succeeded.")
except Exception as e:
print(f"ONNX model export failed: {e}")
class YOLO_det:
def __init__(self, weights, imgsz=640, conf_thres=0.1, iou_thres=0.25, max_det=1000): # raise the confidence threshold
if torch.cuda.is_available() and torch.cuda.device_count() > 0:
providers = [
('CUDAExecutionProvider', {
'device_id': 0,
'arena_extend_strategy': 'kNextPowerOfTwo',
'gpu_mem_limit': 4 * 1024 * 1024 * 1024, # 4 GB memory limit
'cudnn_conv_algo_search': 'EXHAUSTIVE',
'do_copy_in_default_stream': True,
})
]
else:
providers = ['CPUExecutionProvider']
onnx_weights = weights.replace('.pt', '.onnx')
if not os.path.exists(onnx_weights):
export_to_onnx(weights)
self.session = ort.InferenceSession(onnx_weights, providers=providers)
self.imgsz = imgsz
self.conf = conf_thres
self.iou = iou_thres
self.max_det = max_det
def preprocess(self, img):
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (self.imgsz, self.imgsz))
img = img.transpose(2, 0, 1)
img = img[None]
img = img.astype('float32') / 255.0
return img
def detect(self, img):
input_name = self.session.get_inputs()[0].name
input_img = self.preprocess(img)
try:
outputs = self.session.run(None, {input_name: input_img})
print(f"Inference output shapes: {[o.shape for o in outputs]}") # print the output shapes
print(f"Partial inference output: {outputs[0][0, :5, :]}") # print part of the output
except Exception as e:
print(f"Exception during inference: {e}")
return []
# Assume there is only one output array; parse it according to the actual format
output = outputs[0]
boxes = []
confidences = []
# Adjust the parsing logic to the actual output format
if output.ndim == 3:
num_detections = output.shape[1]
for i in range(num_detections):
# Assume the first 4 columns are box coordinates and the 5th is the confidence
box = output[0, i, :4]
conf = output[0, i, 4]
boxes.append(box)
confidences.append(conf)
else:
print(f"Unsupported output shape: {output.shape}")
return []
return_list = []
for box, conf in zip(boxes, confidences):
if conf > self.conf:
xyxy = box
x1 = math.ceil(xyxy[0])
y1 = math.ceil(xyxy[1])
x2 = math.ceil(xyxy[2])
y2 = math.ceil(xyxy[1])
x3 = math.ceil(xyxy[2])
y3 = math.ceil(xyxy[3])
x4 = math.ceil(xyxy[0])
y4 = math.ceil(xyxy[3])
return_list.append([[x1, y1], [x2, y2], [x3, y3], [x4, y4]])
if not return_list:
print("No objects detected.")
return return_list
def sorted_boxes(dt_boxes):
sorted_boxes = sorted(dt_boxes, key=lambda x: (x[0][1], x[0][0]))
_boxes = [[[int(num) for num in sub_list] for sub_list in main_list]
for main_list in sorted_boxes]
for i in range(len(dt_boxes) - 1):
for j in range(i, -1, -1):
if abs(_boxes[j + 1][0][1] - _boxes[j][0][1]) < 10 and (
_boxes[j + 1][0][0] < _boxes[j][0][0]
):
tmp = _boxes[j]
_boxes[j] = _boxes[j + 1]
_boxes[j + 1] = tmp
else:
break
return _boxes
def _4point2xyxy(points):
list_out_xyxy = []
for point in points:
x_coords, y_coords = zip(*point)
min_x, max_x = min(x_coords), max(x_coords)
min_y, max_y = min(y_coords), max(y_coords)
rectangle = [int(min_x), int(min_y), int(max_x), int(max_y)]
list_out_xyxy.append(rectangle)
return list_out_xyxy
def process_image(index, image_path, ocr_rec, yolo_det):
start = time.time() # record start time
try:
det_img = cv2.imread(image_path)
if det_img is None:
print(f"Unable to read image: {image_path}")
return
except Exception as e:
print(f"Error reading image {image_path}: {e}")
return
out_yolo_det = yolo_det.detect(det_img)
if not out_yolo_det:
print(f"No objects detected in image {image_path}.")
out_yolo_det = sorted_boxes(out_yolo_det)
list_ocr_det_bbox_xyxy = _4point2xyxy(out_yolo_det)
show_img = copy.deepcopy(det_img)
for i in range(len(list_ocr_det_bbox_xyxy)):
# xyxy coordinates
x1, y1, x2, y2 = list_ocr_det_bbox_xyxy[i]
# check that the cropped region is valid
if x2 > x1 and y2 > y1:
# crop the small text image
ocr_rec_det_img = det_img[y1:y2, x1:x2]
one_ocr_rec_out = ocr_rec.ocr(ocr_rec_det_img, det=False, cls=False)
print(one_ocr_rec_out)
# draw the bbox
show_img = cv2.rectangle(show_img, (x1, y1), (x2, y2), (0, 0, 255), 1)
# set the font, size, color and line thickness
font = cv2.FONT_HERSHEY_SIMPLEX
# draw the text
show_img = cv2.putText(show_img, str(i), (x1, y1 + 20), font, 0.8, (0, 255, 0), 2)
# make sure the output folder exists
output_folder = 'output_onnx'
if not os.path.exists(output_folder):
try:
os.makedirs(output_folder)
print(f"Successfully created output folder: {output_folder}")
except OSError as e:
print(f"Error creating output folder: {e}")
return
# save the image
output_path = os.path.join(output_folder, f'out_image{index + 1}.jpg')
if cv2.imwrite(output_path, show_img):
print(f"Image saved successfully to: {output_path}")
else:
print(f"Unable to save image to: {output_path}; please check file permissions and the path.")
end = time.time() # record end time
elapsed = end - start # time spent processing this image
print(f"Image {image_path} processed in {elapsed:.2f} s")
print("\n", "=" * 200, "\n")
if __name__ == "__main__":
start_time = time.time()
# text recognition weights
rec_model_dir = './weights/ocr/0510/'
# text dictionary
rec_char_dict_path = './weights/ocr/ppocr_keys_v1_fhhx.txt'
# orientation classifier
cls_model_dir = './weights/ocr/ch_ppocr_mobile_v2.0_cls_infer/'
# yolo_det weights directory
weighs = './weights/best.pt'
# load the text detection weights...
ocr_rec = PaddleOCR(lang='en', rec_model_dir=rec_model_dir, rec_char_dict_path=rec_char_dict_path,
cls_model_dir=cls_model_dir, use_gpu=True) # explicitly use the GPU
# load the yolo_det weights...
yolo_det = YOLO_det(weighs, imgsz=640)
# collect all image paths under the input_images folder
input_folder = 'input_images'
if not os.path.exists(input_folder):
print(f"Input folder {input_folder} does not exist; please check the path.")
else:
image_files = [os.path.join(input_folder, f) for f in os.listdir(input_folder)
if f.endswith(('.png', '.jpg', '.jpeg'))]
if not image_files:
print(f"No valid image files found in {input_folder}; please check the folder contents.")
else:
# process images in parallel with a thread pool of 10 workers
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
futures = []
for index, image_path in enumerate(image_files):
future = executor.submit(process_image, index, image_path, ocr_rec, yolo_det)
futures.append(future)
# wait for all tasks to complete
concurrent.futures.wait(futures)
end_time = time.time()
elapsed_time = end_time - start_time
print(f"Total run time: {elapsed_time:.2f} s")
# any follow-up cleanup work could go here
# e.g. releasing resources (although nothing in the current code needs manual release)
# or extra work such as statistics and logging
# below is a simple example that records the total run time of this run to a log file
log_file_path = "run_log.txt"
try:
with open(log_file_path, "a") as log_file:
log_file.write(f"Run started at {time.strftime('%Y-%m-%d %H:%M:%S', time.localtime())}, took {elapsed_time:.2f} s.\n")
except Exception as e:
print(f"Error writing to log file: {e}")
```
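One likely cause (hedged; it depends on the exported model): YOLOv8-style ONNX exports produce shape `(1, 4+nc, 8400)` with boxes as cx,cy,w,h and no objectness column, so reading `output[0, i, 4]` as a confidence and `[:4]` as xyxy misreads the tensor. A numpy sketch of the usual decoding, with a toy tensor:

```python
import numpy as np

def decode_yolov8(output, conf_thres=0.25):
    """Decode a raw YOLOv8-style ONNX output of shape (1, 4+nc, N).

    Assumption (hedged): rows 0-3 are cx, cy, w, h; remaining rows are
    per-class scores. Returns xyxy boxes, scores and class ids above threshold.
    """
    pred = output[0].T                      # (N, 4+nc)
    boxes_cxcywh = pred[:, :4]
    scores = pred[:, 4:]
    cls = scores.argmax(axis=1)
    conf = scores.max(axis=1)
    keep = conf > conf_thres
    cx, cy, w, h = boxes_cxcywh[keep].T
    xyxy = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    return xyxy, conf[keep], cls[keep]

# hypothetical raw output: 1 image, 2 classes, 3 candidate boxes
raw = np.zeros((1, 6, 3), np.float32)
raw[0, :, 0] = [50, 50, 20, 10, 0.05, 0.9]   # one confident class-1 box
xyxy, conf, cls = decode_yolov8(raw)
print(xyxy, conf, cls)  # one kept box: [40, 45, 60, 55], class 1, conf ~0.9
```

Coordinates also need rescaling from the 640x640 letterboxed input back to the original image size before drawing.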
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
open
|
2025-01-26T07:53:15Z
|
2025-01-26T13:48:03Z
|
https://github.com/ultralytics/ultralytics/issues/18896
|
[
"exports"
] |
CanhaoL
| 2
|
fastapi/fastapi
|
pydantic
| 12,290
|
Chrome does not display Swagger UI
|
### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content

Chrome does not display Swagger UI, but Edge can.
Is this a bug?
|
closed
|
2024-09-28T13:51:40Z
|
2024-09-28T13:55:37Z
|
https://github.com/fastapi/fastapi/issues/12290
|
[] |
soevai
| 0
|
tfranzel/drf-spectacular
|
rest-api
| 1,271
|
How is it possible to close all groups at Swagger's opening?
|
Maybe it's a silly question, but how can I tell DRF-Spectacular to have all groups closed when Swagger opens? Every time I open Swagger it expands all groups, and since I have a lot of APIs, navigation becomes a real disaster. Is there a setting for this?
Thanks for your great module. Best regards.
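For reference, Swagger UI exposes a `docExpansion` option for this, and drf-spectacular forwards such options through `SWAGGER_UI_SETTINGS`; a settings.py sketch (hedged against your drf-spectacular version):

```python
# settings.py fragment: Swagger UI's docExpansion controls how operation
# groups open on load; "none" starts with everything collapsed.
SPECTACULAR_SETTINGS = {
    "SWAGGER_UI_SETTINGS": {
        "docExpansion": "none",  # "list" shows groups open, "full" expands operations
    },
}
```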
|
closed
|
2024-08-05T20:32:28Z
|
2024-08-09T20:43:31Z
|
https://github.com/tfranzel/drf-spectacular/issues/1271
|
[] |
amirhoseinbidar
| 2
|
sgl-project/sglang
|
pytorch
| 3,719
|
[Bug] v0.4.3 performance degradation 2x8xH100
|
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
sglang was upgraded to version v0.4.3.post2-cu125, and the performance was found to be seriously degraded
### Reproduction
sglang_launch_server
```
python3 -m sglang.launch_server
--model-path /root/.cache/modelscope/DeepSeek-R1
--served-model-name deepseek-r1
--tp 16
--dist-init-addr $LWS_LEADER_ADDRESS:20000
--nnodes $LWS_GROUP_SIZE
--node-rank 0
--trust-remote-code
--context-length 131072
--enable-metrics
--host 0.0.0.0
--port 8000
--disable-cuda-graph
env:
- name: GLOO_SOCKET_IFNAME
value: eth0
- name: NCCL_IB_HCA
value: "mlx5_0,mlx5_1,mlx5_4,mlx5_5"
- name: NCCL_P2P_LEVEL
value: "NVL"
- name: NCCL_IB_GID_INDEX
value: "0"
- name: NCCL_IB_CUDA_SUPPORT
value: "1"
- name: NCCL_IB_DISABLE
value: "0"
- name: NCCL_SOCKET_IFNAME
value: "eth0"
- name: NCCL_DEBUG
value: "INFO"
- name: NCCL_NET_GDR_LEVEL
value: "2"
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: SGLANG_USE_MODELSCOPE
value: "true"
```
v0.4.3
```
python3 -m sglang.bench_one_batch_server --model None --base-url http://127.0.0.1:8000 --batch-size 10 --input-len 1280 --output-len 1280
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use `slow_image_processor_class`, or `fast_image_processor_class` instead
warnings.warn(
INFO 02-20 11:31:59 __init__.py:190] Automatically detected platform cuda.
batch size: 16
latency: 8.34 s
output throughput: 30.68 token/s
(input + output) throughput: 1994.45 token/s
batch size: 10
latency: 312.19 s
output throughput: 41.00 token/s
(input + output) throughput: 82.00 token/s
Results are saved to result.jsonl
```
v0.4.2
```
python3 -m sglang.bench_one_batch_server --model None --base-url http://127.0.0.1:8000 --batch-size 10 --input-len 1280 --output-len 1280
INFO 02-18 18:10:55 __init__.py:190] Automatically detected platform cuda.
batch size: 16
latency: 2.50 s
output throughput: 102.31 token/s
(input + output) throughput: 6650.28 token/s
batch size: 10
latency: 58.57 s
output throughput: 218.54 token/s
(input + output) throughput: 437.07 token/s
```
### Environment
```
python3 -m sglang.check_env
INFO 02-20 11:44:59 __init__.py:190] Automatically detected platform cuda.
Python: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 560.35.03
PyTorch: 2.5.1+cu124
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.1.post2+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.12
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.28.1
interegular: 0.3.3
modelscope: 1.23.0
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.63.2
tiktoken: 0.9.0
anthropic: 0.45.2
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE SYS SYS NODE 0-47,96-143 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE NODE SYS SYS NODE 0-47,96-143 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE PIX SYS SYS NODE 0-47,96-143 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE SYS SYS PIX 0-47,96-143 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS PIX NODE SYS 48-95,144-191 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS NODE NODE SYS 48-95,144-191 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS NODE PIX SYS 48-95,144-191 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS NODE NODE SYS 48-95,144-191 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE SYS SYS NODE
NIC1 NODE NODE PIX NODE SYS SYS SYS SYS NODE X SYS SYS NODE
NIC2 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS X NODE SYS
NIC3 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS NODE X SYS
NIC4 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_4
NIC3: mlx5_5
NIC4: mlx5_bond_0
ulimit soft: 1048576
```
|
open
|
2025-02-20T03:48:02Z
|
2025-02-20T05:04:39Z
|
https://github.com/sgl-project/sglang/issues/3719
|
[] |
Hugh-yw
| 2
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 16,074
|
[Bug]: 283 settings changed after click to save
|
### Checklist
- [X] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
After a clean install, when I click to save my configs, it says:
"283 settings changed: outdir_samples, outdir_txt2img_samples, outdir_img2img_samples, outdir_extras_samples, outdir_grids, outdir_txt2img_grids, outdir_img2img_grids, outdir_save, outdir_init_images, samples_save, samples_format, samples_filename_pattern, save_images_add_number, save_images_replace_action, grid_save, grid_format, grid_extended_filename, grid_only_if_multiple, grid_prevent_empty_spots, grid_zip_filename_pattern, n_rows, font, grid_text_active_color, grid_text_inactive_color, grid_background_color, save_images_before_face_restoration, save_images_before_highres_fix, save_images_before_color_correction, save_mask, save_mask_composite, jpeg_quality, webp_lossless, export_for_4chan, img_downscale_threshold, target_side_length, img_max_size_mp, use_original_name_batch, use_upscaler_name_as_suffix, save_selected_only, save_init_img, temp_dir, clean_temp_dir_at_start, save_incomplete_images, notification_audio, notification_volume, save_to_dirs, grid_save_to_dirs, use_save_to_dirs_for_ui, directories_filename_pattern, directories_max_prompt_words, auto_backcompat, use_old_emphasis_implementation, use_old_karras_scheduler_sigmas, no_dpmpp_sde_batch_determinism, use_old_hires_fix_width_height, dont_fix_second_order_samplers_schedule, hires_fix_use_firstpass_conds, use_old_scheduling, use_downcasted_alpha_bar, lora_functional, extra_networks_show_hidden_directories, extra_networks_dir_button_function, extra_networks_hidden_models, extra_networks_default_multiplier, extra_networks_card_width, extra_networks_card_height, extra_networks_card_text_scale, extra_networks_card_show_desc, extra_networks_card_description_is_html, extra_networks_card_order_field, extra_networks_card_order, extra_networks_tree_view_default_enabled, extra_networks_add_text_separator, ui_extra_networks_tab_reorder, textual_inversion_print_at_load, textual_inversion_add_hashes_to_infotext, sd_hypernetwork, sd_lora, lora_preferred_name, lora_add_hashes_to_infotext, lora_show_all, 
lora_hide_unknown_for_versions, lora_in_memory_limit, lora_not_found_warning_console, lora_not_found_gradio_warning, cross_attention_optimization, s_min_uncond, token_merging_ratio, token_merging_ratio_img2img, token_merging_ratio_hr, pad_cond_uncond, pad_cond_uncond_v0, persistent_cond_cache, batch_cond_uncond, fp8_storage, cache_fp16_weight, hide_samplers, eta_ddim, eta_ancestral, ddim_discretize, s_churn, s_tmin, s_tmax, s_noise, k_sched_type, sigma_min, sigma_max, rho, eta_noise_seed_delta, always_discard_next_to_last_sigma, sgm_noise_multiplier, uni_pc_variant, uni_pc_skip_type, uni_pc_order, uni_pc_lower_order_final, sd_noise_schedule, sd_checkpoints_limit, sd_checkpoints_keep_in_cpu, sd_checkpoint_cache, sd_unet, enable_quantization, emphasis, enable_batch_seeds, comma_padding_backtrack, CLIP_stop_at_last_layers, upcast_attn, randn_source, tiling, hires_fix_refiner_pass, enable_prompt_comments, sdxl_crop_top, sdxl_crop_left, sdxl_refiner_low_aesthetic_score, sdxl_refiner_high_aesthetic_score, sd_vae_checkpoint_cache, sd_vae, sd_vae_overrides_per_model_preferences, auto_vae_precision_bfloat16, auto_vae_precision, sd_vae_encode_method, sd_vae_decode_method, inpainting_mask_weight, initial_noise_multiplier, img2img_extra_noise, img2img_color_correction, img2img_fix_steps, img2img_background_color, img2img_editor_height, img2img_sketch_default_brush_color, img2img_inpaint_mask_brush_color, img2img_inpaint_sketch_default_brush_color, return_mask, return_mask_composite, img2img_batch_show_results_limit, overlay_inpaint, return_grid, do_not_show_images, js_modal_lightbox, js_modal_lightbox_initially_zoomed, js_modal_lightbox_gamepad, js_modal_lightbox_gamepad_repeat, sd_webui_modal_lightbox_icon_opacity, sd_webui_modal_lightbox_toolbar_opacity, gallery_height, open_dir_button_choice, enable_pnginfo, save_txt, add_model_name_to_info, add_model_hash_to_info, add_vae_name_to_info, add_vae_hash_to_info, add_user_name_to_info, add_version_to_infotext, 
disable_weights_auto_swap, infotext_skip_pasting, infotext_styles, show_progressbar, live_previews_enable, live_previews_image_format, show_progress_grid, show_progress_every_n_steps, show_progress_type, live_preview_allow_lowvram_full, live_preview_content, live_preview_refresh_period, live_preview_fast_interrupt, js_live_preview_in_modal_lightbox, keyedit_precision_attention, keyedit_precision_extra, keyedit_delimiters, keyedit_delimiters_whitespace, keyedit_move, disable_token_counters, include_styles_into_token_counters, extra_options_txt2img, extra_options_img2img, extra_options_cols, extra_options_accordion, compact_prompt_box, samplers_in_dropdown, dimensions_and_batch_together, sd_checkpoint_dropdown_use_short, hires_fix_show_sampler, hires_fix_show_prompts, txt2img_settings_accordion, img2img_settings_accordion, interrupt_after_current, localization, quicksettings_list, ui_tab_order, hidden_tabs, ui_reorder_list, gradio_theme, gradio_themes_cache, show_progress_in_title, send_seed, send_size, api_enable_requests, api_forbid_local_requests, api_useragent, auto_launch_browser, enable_console_prompts, show_warnings, show_gradio_deprecation_warnings, memmon_poll_rate, samples_log_stdout, multiple_tqdm, enable_upscale_progressbar, print_hypernet_extra, list_hidden_files, disable_mmap_load_safetensors, hide_ldm_prints, dump_stacks_on_signal, face_restoration, face_restoration_model, code_former_weight, face_restoration_unload, postprocessing_enable_in_main_ui, postprocessing_operation_order, upscaling_max_images_in_cache, postprocessing_existing_caption_action, ESRGAN_tile, ESRGAN_tile_overlap, realesrgan_enabled_models, dat_enabled_models, DAT_tile, DAT_tile_overlap, unload_models_when_training, pin_memory, save_optimizer_state, save_training_settings_to_txt, dataset_filename_word_regex, dataset_filename_join_string, training_image_repeats_per_epoch, training_write_csv_every, training_xattention_optimizations, training_enable_tensorboard, 
training_tensorboard_save_images, training_tensorboard_flush_every, canvas_hotkey_zoom, canvas_hotkey_adjust, canvas_hotkey_shrink_brush, canvas_hotkey_grow_brush, canvas_hotkey_move, canvas_hotkey_fullscreen, canvas_hotkey_reset, canvas_hotkey_overlap, canvas_show_tooltip, canvas_auto_expand, canvas_blur_prompt, canvas_disabled_functions, interrogate_keep_models_in_memory, interrogate_return_ranks, interrogate_clip_num_beams, interrogate_clip_min_length, interrogate_clip_max_length, interrogate_clip_dict_limit, interrogate_clip_skip_categories, interrogate_deepbooru_score_threshold, deepbooru_sort_alpha, deepbooru_use_spaces, deepbooru_escape, deepbooru_filter_tags.
**and my Lora filtering options go off; does anybody know how to fix this?**
### Steps to reproduce the problem
1 - clean install
2 - launch
3 - apply new configs
|
closed
|
2024-06-23T02:10:13Z
|
2024-06-24T08:36:29Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16074
|
[
"asking-for-help-with-local-system-issues"
] |
HenryEvan
| 5
|
miguelgrinberg/Flask-SocketIO
|
flask
| 990
|
Getting 404 error when using gunicorn/eventlet in prod
|
Hi, I've spent several hours looking online and reading through the issues posted but have not found a solution.
This totally works in dev without gunicorn and eventlet.
WebSocket requests return the following error:
~~~~
{
"error": "404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.",
"traceback": [
" File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 1813, in full_dispatch_request\n rv = self.dispatch_request()\n",
" File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 1791, in dispatch_request\n self.raise_routing_exception(req)\n",
" File \"/usr/local/lib/python3.7/site-packages/flask/app.py\", line 1774, in raise_routing_exception\n raise request.routing_exception\n",
" File \"/usr/local/lib/python3.7/site-packages/flask/ctx.py\", line 336, in match_request\n self.url_adapter.match(return_rule=True)\n",
" File \"/usr/local/lib/python3.7/site-packages/werkzeug/routing.py\", line 1786, in match\n raise NotFound()\n"
]
}
~~~~
I'm running gunicorn like `gunicorn --worker-class eventlet --bind :5000 wsgi:app --log-level=debug --log-file=-`
my wsgi.py file looks like
~~~
from app import init_app
from flask_socketio import SocketIO
app = init_app()
if __name__ == '__main__':
socketio = SocketIO(app)
socketio.run(app)
~~~
the init_app func is just a wrapper for the usual `app = Flask(__name__)` no magic there other than setting some variables and environment configs
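A likely cause of the 404: under gunicorn the `if __name__ == '__main__':` block never executes, so `SocketIO(app)` is never instantiated and the `/socket.io` routes are never registered on the app. A deployment-config sketch of the usual layout (assuming the standard Flask-SocketIO pattern; shown for illustration, not tested here):

```python
# wsgi.py -- instantiate SocketIO at module import time so it wraps the
# app before gunicorn starts serving
from app import init_app
from flask_socketio import SocketIO

app = init_app()
socketio = SocketIO(app)  # registers the /socket.io handlers

if __name__ == '__main__':
    socketio.run(app)  # dev server only; gunicorn never enters this block
```

gunicorn can keep targeting `wsgi:app` (e.g. `gunicorn --worker-class eventlet --bind :5000 wsgi:app`), since `SocketIO(app)` installs its middleware directly on the Flask app.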
my nginx config for this is
~~~~
location /socket.io {
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass http://api:5000/socket.io;
}
~~~~
Python packages
~~~
$ pip freeze
aniso8601==6.0.0
blinker==1.4
Click==7.0
dnspython==1.16.0
eventlet==0.24.1
Flask==1.0.2
Flask-Caching==1.5.0
Flask-Mail==0.9.1
Flask-RESTful==0.3.7
Flask-SocketIO==3.2.1
Flask-SQLAlchemy==2.3.2
greenlet==0.4.15
gunicorn==19.9.0
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
monotonic==1.5
mysqlclient==1.3.14
PyJWT==1.7.1
python-engineio==3.7.0
python-socketio==4.0.3
pytz==2019.1
six==1.12.0
SQLAlchemy==1.3.4
Werkzeug==0.15.4
~~~
The error output is obviously being generated by flask.
Otherwise my endpoints work fine.
Any ideas?
|
closed
|
2019-05-31T02:26:32Z
|
2019-05-31T15:00:51Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/990
|
[
"question"
] |
jcuna
| 6
|
opengeos/leafmap
|
plotly
| 661
|
NAIP STAC Item added to map as layer disappears on zoom out, needs a very close zoom level to appear.
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.30.1
- Python version: 3.10
- Operating System: Ubuntu
### Description
I want to zoom out and see my image on the map, but it disappears at lower (zoomed-out) zoom levels. Also, the default zoom level set when the map opens won't show the image.
### What I Did
```python
import pystac_client
import planetary_computer
from shapely.geometry import Point
area_of_interest = Point((-121.034, 36.990)) # wright solar farm lon lat
catalog = pystac_client.Client.open(
"https://planetarycomputer.microsoft.com/api/stac/v1",
modifier=planetary_computer.sign_inplace,
)
range_old = "2010-01-01/2013-01-01"
range_new = "2020-01-01/2021-01-01"
search_old = catalog.search(
collections=["naip"], intersects=area_of_interest, datetime=range_old
)
search_new = catalog.search(
collections=["naip"], intersects=area_of_interest, datetime=range_new
)
items_old = search_old.item_collection()
items_new = search_new.item_collection()
print(f"{len(items_old)} Items found in the 'old' range")
print(f"{len(items_new)} Items found in the 'new' range")
map = leafmap.Map()
leafmap.stac_assets(collection="naip", item=items_old[0].id, titiler_endpoint="pc")
m = leafmap.Map()
m.add_stac_layer(
collection="naip",
item='ca_m_3612108_ne_10_1_20120622_20120904',
assets=["image"],
name="Old image 2012 before solar development",
)
m
```
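One likely explanation is that the tile endpoint only serves NAIP tiles near the imagery's native resolution, so the layer vanishes at low zooms; a workaround is to zoom the map to the item's footprint. A rough, self-contained web-mercator zoom estimate from a bbox (hypothetical helper; with leafmap the result, or the bbox itself, could be fed to the map's centering/zoom methods):

```python
import math

def zoom_for_bbox(minx, miny, maxx, maxy, map_px=768, tile_px=256):
    # Rough web-mercator zoom level whose tiles fit the bbox width in a
    # viewport map_px pixels wide. Assumption: longitude span drives the
    # fit, which is fine for small AOIs like a single NAIP item.
    span = max(maxx - minx, 1e-9)
    return int(math.log2(360.0 * map_px / (span * tile_px)))
```

For the NAIP item above, the bbox comes from `items_old[0].bbox`, and the computed zoom is close enough to native resolution that the tiles render.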
|
closed
|
2024-01-16T22:29:36Z
|
2024-02-06T15:32:44Z
|
https://github.com/opengeos/leafmap/issues/661
|
[
"bug"
] |
rbavery
| 1
|
miguelgrinberg/python-socketio
|
asyncio
| 554
|
Update connect_error documentation
|
Hello,
I would like to suggest you mention in the documentation that the `connect_error` handler can get arguments. Now, it is only shown not getting arguments, [here](https://python-socketio.readthedocs.io/en/latest/client.html?highlight=connect_error#defining-event-handlers).
As you stated in issue [#508](https://github.com/miguelgrinberg/python-socketio/issues/508#issuecomment-646352384), there are some cases where this handler can be invoked with multiple or none arguments, depending on the server, and some cases where it can get one argument.
In my opinion, it is worth mentioning this in the documentation as it can lead to confusion and bugs.
|
closed
|
2020-10-13T10:53:03Z
|
2021-05-04T22:09:46Z
|
https://github.com/miguelgrinberg/python-socketio/issues/554
|
[
"documentation"
] |
turicfr
| 1
|
lexiforest/curl_cffi
|
web-scraping
| 17
|
Bug: Request header is 'application/x-www-form-urlencoded' but use json as request body
|
The request header is 'application/x-www-form-urlencoded' but the JSON payload is used as the request body when `requests.post` receives both the `json` and `data` parameters.
Here is the code:
```python
requests.post("https://httpbin.org/post", data={"data": 1}, json={"json": 1}).json()
```
Here is the output:
```json
{'args': {},
'data': '',
'files': {},
'form': {'{"json": 1}': ''},
'headers': {'Accept': '*/*',
'Accept-Encoding': 'gzip, deflate, br',
'Content-Length': '11',
'Content-Type': 'application/x-www-form-urlencoded',
'Host': 'httpbin.org',
'X-Amzn-Trace-Id': 'Root=1-6401f15c-60b4e9a57970941f002fe1af'},
'json': None,
'origin': '103.116.72.5',
'url': 'https://httpbin.org/post'}
```
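For reference, `requests` resolves this conflict by letting `data` take precedence and dropping `json`; the bug above sends the JSON text while keeping the form header. A stdlib sketch of the consistent body/header pairing (hypothetical helper name):

```python
import json as jsonlib
from urllib.parse import urlencode

def build_body(data=None, json=None):
    # Expected precedence (mirroring `requests`): when both are given,
    # `data` wins and `json` is ignored. The Content-Type must match
    # whichever body is actually sent.
    if data is not None:
        return urlencode(data), "application/x-www-form-urlencoded"
    if json is not None:
        return jsonlib.dumps(json), "application/json"
    return "", None
```

The bug report shows the opposite: a JSON body (`{"json": 1}`) paired with the form-encoded Content-Type.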
|
closed
|
2023-03-03T13:13:08Z
|
2023-09-29T12:25:30Z
|
https://github.com/lexiforest/curl_cffi/issues/17
|
[] |
MagicalBomb
| 3
|
gevent/gevent
|
asyncio
| 1,419
|
ImportError: cannot import name _corecffi
|
I installed gevent using `pip install gevent` and have the latest version 1.4.0. I would like to compare the speed between all event loops but don't manage to use libuv. Is there a specific installation to do ?
```
>>> import gevent
>>> gevent.config.loop = 'libuv'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/gevent/_config.py", line 197, in __setattr__
self.set(name, value)
File "/usr/local/lib/python2.7/dist-packages/gevent/_config.py", line 204, in set
self.settings[name].set(value)
File "/usr/local/lib/python2.7/dist-packages/gevent/_config.py", line 150, in set
self.value = self.validate(self._convert(val))
File "/usr/local/lib/python2.7/dist-packages/gevent/_config.py", line 248, in validate
return self._import_one_of([self.shortname_map.get(x, x) for x in value])
File "/usr/local/lib/python2.7/dist-packages/gevent/_config.py", line 223, in _import_one_of
return self._import_one(candidates[-1])
File "/usr/local/lib/python2.7/dist-packages/gevent/_config.py", line 237, in _import_one
module = importlib.import_module(module)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/usr/local/lib/python2.7/dist-packages/gevent/libuv/loop.py", line 15, in <module>
from gevent.libuv import _corecffi # pylint:disable=no-name-in-module,import-error
ImportError: cannot import name _corecffi
```
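The traceback suggests gevent's libuv cffi extension was never built, which typically happens when `cffi` is absent at install time; the likely fix is to install `cffi` first and then reinstall gevent so `gevent.libuv._corecffi` gets compiled. A small stdlib helper (hypothetical) for checking that a backend module is importable before selecting it:

```python
import importlib.util

def backend_importable(name):
    # Returns True only if the dotted module path can actually be
    # resolved; selecting gevent.config.loop = 'libuv' without this
    # check raises the ImportError shown above when the cffi extension
    # is missing.
    try:
        return importlib.util.find_spec(name) is not None
    except ImportError:
        return False
```

For example, `backend_importable("gevent.libuv._corecffi")` would report whether the libuv loop can be used in the current environment.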
|
closed
|
2019-05-04T15:50:43Z
|
2019-05-04T16:27:03Z
|
https://github.com/gevent/gevent/issues/1419
|
[] |
maingoh
| 2
|
nolar/kopf
|
asyncio
| 227
|
[PR] Switch to `aiohttp` and full asynchronous I/O in the core
|
> <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-11-13 10:31:58+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/227
> Merged by [nolar](https://github.com/nolar) at _2019-11-13 11:00:51+00:00_
Remove any synchronous Kubernetes API clients or asyncio executors. Make Kopf fully and truly asynchronous.
> Issue : #142, #140, maybe #204, maybe #169
> Replaces: #176 #152 #143, #141
## Problem
`pykube-ng`, `kubernetes`, `requests`, and any other synchronous client libraries use the streaming responses of the built-in `urllib3` and `http` for watching over the k8s-events.
These streaming requests/responses can be closed when a chunk/line is yielded to the consumer, the control flow is returned to the caller, and the streaming socket itself is idling. E.g., for `requests`: https://2.python-requests.org/en/master/user/advanced/#streaming-requests
However, if nothing happens on the k8s-event stream (i.e. no k8s resources are added/modified/deleted), the streaming response spends most of its time in the blocking `read()` operation on a socket. It can remain there for long time — minutes, hours — until some data arrives on the socket.
If the streaming response runs in a thread, while the main thread is used for an asyncio event-loop, such stream cannot be closed/cancelled/terminated (because of the blocking `read()`). This, in turn, makes the application to hang on exit, holding its pod from restarting, since the thread is not finished until the `read()` call is finished.
There is no easy way to terminate the blocking `read()` operation on a socket. One way is a dirty hack with the OS-level process signals, which interrupt the I/O operations on low level (OS&libc&co) — see #152.
## Solution
The proper solution, however, is to use async i/o inside of the async app.
**This PR** converts all i/o to/from Kubernetes API to `aiohttp`. It is already present in the dependencies indirectly.
This efficiently removes `pykube-ng` (or any other clients) from the Kopf's core. There is no much need in them, as the main purpose of the client libraries is to provide a convenient DSL (domain-specific language) for the Kubernetes objects manipulation. In Kopf, all manipulation is unified, used only internally, not exposed as a public interface, and implemented so that the kind of objects being handled is not so important.
*An attempt to do this was already made in #176, but it contained a large part about authentication methods. Custom authentication and piggybacking were implemented separately in #226; this new PR now contains only the I/O-related changes.*
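The shutdown problem described above can be illustrated with a minimal sketch (not Kopf's actual code): an awaited stream read yields control to the event loop at every line, so the watching task stays cancellable even while the Kubernetes event stream idles, whereas a blocking socket `read()` in a thread cannot be interrupted.

```python
import asyncio

async def watch_stream(reader, on_line):
    # Awaiting readline() suspends the coroutine instead of blocking a
    # thread, so asyncio can cancel this task at any await point.
    while True:
        line = await reader.readline()
        if not line:  # EOF
            break
        on_line(line)

async def demo():
    # Feed a fake k8s event stream through an asyncio.StreamReader.
    reader = asyncio.StreamReader()
    reader.feed_data(b"ADDED\nMODIFIED\n")
    reader.feed_eof()
    events = []
    await watch_stream(reader, events.append)
    return events
```

With aiohttp, `reader` would be the streaming response body instead of a hand-fed `StreamReader`.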
## Testing
**For testing,** [aresponses](https://github.com/CircleUp/aresponses) is used instead of mocked `requests`. It runs an actual web server locally, and intercepts all aiohttp outgoing requests to be redirected to that web-server.
This switch led to almost full rewrite of all tests for `kopf.clients` module (all API communication) — which makes a half of this PR's size (while keeping the same semantics of the tests).
## Types of Changes
- Bug fix (non-breaking change which fixes an issue)
- Refactor/improvements
## Review
_List of tasks the reviewer must do to review the PR_
- [ ] Tests
- [ ] Documentation
|
closed
|
2020-08-18T20:01:14Z
|
2020-08-23T20:51:34Z
|
https://github.com/nolar/kopf/issues/227
|
[
"enhancement",
"archive",
"refactoring"
] |
kopf-archiver[bot]
| 0
|
shaikhsajid1111/social-media-profile-scrapers
|
web-scraping
| 10
|
The pinterest scraper doesn't work.
|
it returns:
'country'
None
|
open
|
2022-07-06T10:57:07Z
|
2022-07-06T17:07:16Z
|
https://github.com/shaikhsajid1111/social-media-profile-scrapers/issues/10
|
[] |
meatloaf4u
| 2
|
reloadware/reloadium
|
pandas
| 52
|
Plugin 0.8.6 (with Reloadium 0.9.3) breaks with PyCharm 2022.2.3
|
**Describe the bug**
Reloadium breaks.
I had Reloadium installed and upgraded both the PyCharm and Reloadium versions.
After the upgrade, the plugin fails when running. Because the code is obfuscated I cannot see where it breaks, but I attached the console log.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows 11
- Reloadium package version: 0.8.6
- Editor: PyCharm 2022.2.3 (Professional Edition) Build #PY-222.4345.23, built on October 10, 2022
- Run mode: Run
**Additional context**
Add any other context about the problem here.
|
closed
|
2022-10-20T11:58:54Z
|
2022-10-24T12:37:28Z
|
https://github.com/reloadware/reloadium/issues/52
|
[] |
laurapons
| 9
|
2noise/ChatTTS
|
python
| 225
|
Requesting a cURL example for calling the API
|
closed
|
2024-06-03T08:38:55Z
|
2024-06-04T05:06:07Z
|
https://github.com/2noise/ChatTTS/issues/225
|
[] |
Huixxi
| 1
|
|
scikit-image/scikit-image
|
computer-vision
| 7,620
|
Expose clip_negative as a parameter of rescale_intensity
|
### Description:
`skimage.exposure.rescale_intensity(image, in_range='dtype', out_range='float32')` will take an input image with a `dtype` of `int[8,16,32,64]` and scale it to the range `[-1, 1]`. This is the normal and expected behavior, but I would actually like it to scale to the range `[0, 1]` (as is the case for unsigned integers). I realize this is strange. Why have a signed image as input if I don't want to preserve negative values? The reason is that I am normalizing images from users / external sources, and I *need* all of my images to be in the range `[0, 1]` for subsequent operations. I currently just do the correction after the fact, but would prefer to avoid the extra cpu cycles.
As far as I can tell, the difference in behavior for unsigned and signed integers is captured by the `clip_negative` value that is passed on to `intensity_range`.
```
omin, omax = map(float, intensity_range(image, out_range,
clip_negative=(imin >= 0)))
```
It would be nice to expose this as an overridable parameter to `rescale_intensity` itself, so that I can force it to be `True` regardless of `imin`'s value.
I would be happy to open a PR. Just want to first make sure this is acceptable, given that it is an esoteric request, before going through the trouble.
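The requested behavior can be approximated after the fact with a small sketch (hypothetical helper, assuming integer input): negatives clip to 0 and the dtype maximum maps to 1.0, mimicking what `clip_negative=True` would do for `in_range='dtype'`.

```python
import numpy as np

def rescale_clip_negative(image):
    # Mimic clip_negative=True on in_range='dtype': clip negative values
    # to 0, then scale so the dtype maximum lands at exactly 1.0.
    info = np.iinfo(image.dtype)
    return np.clip(image.astype(np.float32), 0, None) / np.float32(info.max)
```

Exposing the flag on `rescale_intensity` itself would avoid this second pass over the data.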
|
closed
|
2024-11-25T14:59:16Z
|
2024-11-27T12:22:51Z
|
https://github.com/scikit-image/scikit-image/issues/7620
|
[
":pray: Feature request"
] |
gnodar01
| 2
|
mage-ai/mage-ai
|
data-science
| 4,882
|
[BUG] cannot cancel closing unsaved modified file
|
### Mage version
v0.9.68
### Describe the bug
After having modified a file, and then trying to close it without saving the changes, a pop-up window asks if I'm sure I want to close it without saving. If I answer OK, the expected behaviour occurs: the file is closed without saving the changes. But if I click the Cancel button, the same behaviour occurs, whereas we would expect the file not to be closed, so that we don't lose our changes.
### To reproduce
- Open a file in the 'Files' window
- modify the file (like adding a word or whatever)
- click on the cross to close the file
- click the Cancel button (or the equivalent, my system is not in english)
### Expected behavior
I expect the file not to close, and not to be modified by clicking the Cancel button
### Screenshots
_No response_
### Operating system
- OS : Windows 11
- Browser : Chrome version 123.0.6312.86
### Additional context
_No response_
|
closed
|
2024-04-03T20:31:43Z
|
2024-04-29T18:48:30Z
|
https://github.com/mage-ai/mage-ai/issues/4882
|
[
"bug"
] |
gtentillier
| 2
|
BeanieODM/beanie
|
asyncio
| 90
|
[examples] Update example projects
|
There two official example projects:
- [FastAPI Demo](https://github.com/roman-right/beanie-fastapi-demo) - Beanie and FastAPI collaboration demonstration. CRUD and Aggregation.
- [Indexes Demo](https://github.com/roman-right/beanie-index-demo) - Regular and Geo Indexes usage example wrapped to a microservice.
Both should:
- Show, how to use current Beanie syntax
- Contain unit tests
|
closed
|
2021-07-10T20:13:54Z
|
2023-04-16T02:26:00Z
|
https://github.com/BeanieODM/beanie/issues/90
|
[
"good first issue",
"Stale"
] |
roman-right
| 3
|
allenai/allennlp
|
data-science
| 5,043
|
bug
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [ ] I have verified that the issue exists against the `master` branch of AllenNLP.
- [ ] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [ ] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [ ] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [ ] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [ ] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [ ] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [ ] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [ ] I have included in the "Environment" section below the output of `pip freeze`.
- [ ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS:
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version:
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
```
</p>
</details>
|
closed
|
2021-03-07T20:55:37Z
|
2021-03-08T19:06:41Z
|
https://github.com/allenai/allennlp/issues/5043
|
[
"bug"
] |
apsiriwat
| 0
|
ghtmtt/DataPlotly
|
plotly
| 244
|
Overlay two graphics on atlas
|
**Describe the bug**
I am trying to overlay two graphics in the atlas.
I think that should be done from element properties by adding two graphics (screenshot 1).
I have an atlas with 100 points. I am interested in showing all 100 points in the same plot, with the point the current atlas page refers to superimposed in another color. As you can see in screenshot 2, I have represented the same two points twice, but in screenshot 3 you can see all the points.
I have activated "use only features inside atlas feature" in only one of them, but it seems that activating this box affects both plots and not each one independently.
Is it a bug, or am I wrong about something?
Thanks

----------------------------------------------------------------------

----------------------------------------------------------------------------------------------------------

|
closed
|
2020-12-19T12:30:27Z
|
2021-03-18T07:24:32Z
|
https://github.com/ghtmtt/DataPlotly/issues/244
|
[
"bug"
] |
cesarcorreo
| 12
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 25
|
How to set the parameters of xent+htri that use the densenet-121
|
Thanks for providing the elegant code.
When I train DenseNet-121 with the xent+htri loss, I set 80 epochs.
I trained it three times but did not get good results:
batch size = 32,epoch = 80 :rank1 = 60.6%
batch size = 16,epoch = 80 :rank1 = 61.2%
batch size = 48,epoch = 60 :rank1 = 58.4%
I don't know why my results are not as good as yours. Could you share how you set the parameters when you trained DenseNet-121?
Thanks
|
closed
|
2018-06-13T09:56:53Z
|
2018-06-21T22:27:33Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/25
|
[] |
jianwu585218
| 4
|
ycd/manage-fastapi
|
fastapi
| 147
|
Is this repo still maintained?
|
Hey @ycd , @Kludex 👋🏻
I was curious about what your plans are for this repo. It looks like maintenance stopped a year ago, there are [some important issues ](https://github.com/ycd/manage-fastapi/issues/146)that make the tool practically unusable, and multiple PRs have been waiting open for a while.
If you don't plan to sunset the tool, I'd like to help with its cleaning and updates. I can start with triaging/reviewing open PRs, and then continue with updating supported Python versions.
|
open
|
2024-10-24T14:28:19Z
|
2024-10-24T14:28:19Z
|
https://github.com/ycd/manage-fastapi/issues/147
|
[] |
ulgens
| 0
|
jschneier/django-storages
|
django
| 963
|
mistake
|
sorry, mistake to open. Please delete...
|
closed
|
2020-12-11T08:28:05Z
|
2020-12-11T08:52:29Z
|
https://github.com/jschneier/django-storages/issues/963
|
[] |
sakimyto
| 0
|
python-gino/gino
|
asyncio
| 636
|
Query Filters, Pagination and Sorting
|
* GINO version: 0.8.6
* Python version: 3.7.0
* asyncpg version: 0.20.1
* aiocontextvars version: 0.2.2
* PostgreSQL version: 11
### Description
I'm trying to find a solution in **Gino** for **filtering**, **pagination**, and **sorting**, like this one for **SQLAlchemy**:
[sqlalchemy-filters](https://pypi.org/project/sqlalchemy-filters/)
I know that **GINO** doesn't work with the **sqlalchemy.orm** part, but I want to ask whether there is some way to adapt it to work with **GINO**, or whether there is any similar solution for it.
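Since GINO queries are plain SQLAlchemy Core selects, pagination can be applied with `.limit(...)` / `.offset(...)` and sorting with `.order_by(...)`. A stdlib sketch of translating query-string parameters into those values (hypothetical helper, not part of GINO):

```python
def paginate_params(args, default_size=20, max_size=100):
    # Turn ?page=&size= query args into (limit, offset) values suitable
    # for query.limit(limit).offset(offset). Clamps size to sane bounds.
    page = max(1, int(args.get("page", 1)))
    size = min(max_size, max(1, int(args.get("size", default_size))))
    return size, (page - 1) * size
```

For example, `limit, offset = paginate_params(request.query)` followed by `User.query.order_by(User.id).limit(limit).offset(offset).gino.all()` would page through results.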
|
closed
|
2020-03-10T15:15:23Z
|
2020-09-07T21:08:06Z
|
https://github.com/python-gino/gino/issues/636
|
[
"question"
] |
Psykepro
| 8
|
predict-idlab/plotly-resampler
|
plotly
| 274
|
Python 3.12 support
|
closed
|
2023-11-22T18:29:44Z
|
2024-02-05T15:30:35Z
|
https://github.com/predict-idlab/plotly-resampler/issues/274
|
[
"enhancement",
"installation"
] |
jvdd
| 4
|
|
huggingface/transformers
|
tensorflow
| 36,561
|
Improving expected test results
|
Several tests use the concept of "expected" results.
Sometimes the expected results are dependent on the environment.
We've used `torch.cuda.get_device_capability()` to differentiate between different cuda environments, and this has worked fairly well so far.
I recently started adding expected results for AMD devices ([ref](https://github.com/huggingface/transformers/pull/36535/files)) and realized we will be running into conflicts fairly soon and so we need a better approach.
My idea is to add a couple of utility concepts that can make expected results more generic.
The current solution is basically this:
```python
result = "2"
major, minor = torch.cuda.get_device_capability()
EXPECTED_RESULTS= {
8: "1",
9: "2",
}
assert result == EXPECTED_RESULTS[major]
```
I want to end up with functionality that looks like this:
```python
result = "2"
expectations = Expectations(
Expectation.default("1"),
Expectation(Properties("cuda", 8, 1), "2"),
Expectation(Properties("cuda", 7, 0), "3"),
Expectation(Properties("rocm"), "4"),
)
expected = expectations.find_expected()
assert result == expected.result
```
This is a best effort approach. If we are on cuda 7.5 we get "3", but if we are on rocm 7.0 we should get "4".
The `default` case should apply if the best result has a score of 0. In other words if we are on xpu we get "1".
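A minimal sketch of the best-effort scoring this implies (class shape and scoring weights are assumptions here, not the final API): a device match scores 1, a matching major version adds 2, and a matching minor version adds 1 more, with the default returned when every case scores 0.

```python
class Expectations:
    def __init__(self, default, cases):
        # cases maps (device, major, minor) -> expected result; None acts
        # as a wildcard component for that slot.
        self.default = default
        self.cases = cases

    def find_expected(self, device, major=None, minor=None):
        best, best_score = self.default, 0
        for (dev, maj, mnr), result in self.cases.items():
            if dev != device:
                continue
            score = 1  # device matched
            if maj is not None and maj == major:
                score += 2  # major version matched
                if mnr is not None and mnr == minor:
                    score += 1  # minor version matched too
            if score > best_score:
                best_score, best = score, result
        return best
```

Under this scoring, cuda 7.5 picks the cuda 7.0 entry, rocm 7.0 picks the version-less rocm entry, and xpu falls through to the default.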
|
open
|
2025-03-05T13:34:16Z
|
2025-03-05T13:34:16Z
|
https://github.com/huggingface/transformers/issues/36561
|
[] |
ivarflakstad
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 764
|
Training Time
|
Hi,
I have been training pix2pix with unet256 as generator. My training data is 10000 images of 256*256 resolution.
Each epoch is taking around 25000 seconds (7.25 hrs)
I am running experiments on two 1080 Ti cards with batch size 32
May I know if it is usually the case?
Thanks for any help,
|
closed
|
2019-09-11T05:36:50Z
|
2024-05-25T12:17:03Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/764
|
[] |
jsaisagar
| 3
|
miguelgrinberg/flasky
|
flask
| 82
|
Errata: if current_user.is_authenticated()
|
`Tag: 12a`
In the book, the `if current_user.is_authenticated()` check in the template `user.html` contains parentheses.
The correct form is without the parentheses.
It causes the error:

The code on GitHub is correct :+1:
Ps: My English is very poor! Sorry!
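The underlying reason is that Flask-Login 0.3 turned `is_authenticated` (and its siblings) from methods into properties, so calling it with parentheses tries to call the returned boolean. A standalone illustration (mock class, no Flask-Login needed):

```python
class User:
    @property
    def is_authenticated(self):
        # Modeled on Flask-Login >= 0.3, where this is a property.
        return True

user = User()
result = user.is_authenticated  # correct: no parentheses
# user.is_authenticated() would raise TypeError: 'bool' object is not callable
```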
|
closed
|
2015-10-14T18:59:21Z
|
2017-03-17T18:54:27Z
|
https://github.com/miguelgrinberg/flasky/issues/82
|
[] |
ghost
| 4
|
deepinsight/insightface
|
pytorch
| 2,073
|
[Inference using model trained on mnet25 backbone] Operands cannot be broadcasted together
|
insightface->detection->retinaface->retinaface.py (line 464)
bbox_pred (line 761)
Dimension issue

|
open
|
2022-08-10T12:24:50Z
|
2022-08-10T12:24:50Z
|
https://github.com/deepinsight/insightface/issues/2073
|
[] |
iqraJilani
| 0
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 16,378
|
[Feature Request]: Please add support for the FLUX model, thank you!
|
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
The FLUX model's hand processing and prompt accuracy are incredibly powerful, and it's been super popular recently!
### Proposed workflow
1. thank you!
2. thank you!
3. thank you!
### Additional information
_No response_
|
open
|
2024-08-13T08:33:37Z
|
2024-12-06T01:08:19Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16378
|
[
"enhancement"
] |
divineblessing
| 14
|
syrupy-project/syrupy
|
pytest
| 333
|
Incorrect unused snapshot detection for targeting single case in parameterized test
|
**To Reproduce**
- Create a parameterized test case, for example test_dict in tests/test_extension_amber.py.
- Run pytest targeting one case: `pytest tests/test_extension_amber.py::test_dict[actual4]`
Syrupy says 1 snapshot passed, and the rest are unused.
**Expected behavior**
One snapshot should pass. Nothing should be mentioned about the non-targeted test cases.
---
Syrupy==0.6.1
|
closed
|
2020-08-24T20:52:29Z
|
2020-10-30T02:58:21Z
|
https://github.com/syrupy-project/syrupy/issues/333
|
[
"bug",
"released"
] |
noahnu
| 1
|
ufoym/deepo
|
jupyter
| 133
|
theano gpu not working
|
Hi,
I've followed the instructions for running Theano on GPU using deepo; unfortunately I'm not able to run the Theano code on the GPU. It uses the CPU instead.
steps that I took
`docker run --gpus all -it ufoym/deepo:theano bash`
and I'm running the following test code (from theano documentations)
```
# Code to test if theano is using GPU or CPU
# Reference: https://stackoverflow.com/questions/34328467/how-can-i-use-my-gpu-on-ipython-notebook
import os
os.environ["MKL_THREADING_LAYER"] = "GNU"
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
# if you're using Python3, rename `xrange` to `range` in the following line
for i in range(iters):
r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
print('Used the gpu')
```
I'm getting the following output which says it have used cpu instead of gpu:
```
/usr/local/lib/python3.6/dist-packages/theano/gpuarray/dnn.py:184: UserWarning: Your cuDNN version is more recent than Theano. If you encounter problems, try updating Theano or downgrading cuDNN to a version >= v5 and <= v7.
warnings.warn("Your cuDNN version is more recent than "
Using cuDNN version 7605 on context None
Mapped name None to device cuda0: TITAN RTX (0000:65:00.0)
[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, vector)>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.180456 seconds
Result is [1.2317803 1.6187935 1.5227807 ... 2.2077181 2.2996776 1.623233 ]
Used the cpu
```
here is my nvidia-smi inside my docker
```
Tue May 5 19:41:17 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN X (Pascal) Off | 00000000:17:00.0 Off | N/A |
| 23% 28C P8 8W / 250W | 2MiB / 12196MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 TITAN RTX Off | 00000000:65:00.0 On | N/A |
| 41% 33C P8 1W / 280W | 611MiB / 24217MiB | 4% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
any help is appreciated
|
closed
|
2020-05-05T19:42:47Z
|
2021-12-27T14:58:36Z
|
https://github.com/ufoym/deepo/issues/133
|
[] |
sedghi
| 1
|
pyg-team/pytorch_geometric
|
pytorch
| 9,530
|
None edge_attr assertion in GeneralConv
|
### 🐛 Describe the bug
The `GeneralConv` layer is raising an assertion when only a node array (`x`) and adjacency (`edge_index`) are provided. I expect the layer to return a result when I don't provide `edge_attr` (default is None).
Context: I'm writing unit tests for a model that is composed of many layers. I've worked back to a core torch_geometric layer that is raising an assert. I get the assertion when providing a basic data input. For example,
```python
from torch_geometric.datasets import FakeDataset
from torch_geometric.nn import GeneralConv
gnn = GeneralConv(in_channels=100, out_channels=100)
dataset = FakeDataset(
num_graphs=32 * 4, # 4 batches of 32
avg_num_nodes=20,
num_channels=100,
num_classes=2,
edge_dim=1,
is_undirected=False,
)
gnn(dataset[0].x, dataset[0].edge_index)
```
The forward pass to `gnn` is raising an AssertionError via `assert edge_attr is not None`. I'm having trouble locating the file asserting the error. Is this a legit bug or user error? Any help would be much appreciated! Thanks!
Here's the traceback from the above test.
```bash
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/dev/.venvs/project/lib/python3.11/site-packages/torch/nn/modules/module.py:1532: in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
/home/dev/.venvs/project/lib/python3.11/site-packages/torch/nn/modules/module.py:1541: in _call_impl
return forward_call(*args, **kwargs)
/home/dev/.venvs/project/lib/python3.11/site-packages/torch_geometric/nn/conv/general_conv.py:155: in forward
out = self.propagate(edge_index, x=x, size=size, edge_attr=edge_attr)
/tmp/torch_geometric.nn.conv.general_conv_GeneralConv_propagate_7jgj7uxt.py:163: in propagate
kwargs = self.collect(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GeneralConv(100, 100)
edge_index = tensor([[ 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2,
2, 2, 3, 3, 3, 3, 3, ... 5, 6, 10, 16, 0, 3, 5, 9, 11, 14, 16, 5, 7, 8, 10, 13,
15, 1, 4, 6, 10, 0, 7, 10, 14, 16, 17]])
x = (tensor([[-1.5406, 1.4097, 1.5205, ..., -0.8308, 1.1799, 4.4395],
[-0.4764, 1.4493, 1.8234, ..., 1.57...259, 2.4132, ..., 1.5033, 1.0380, 1.5486],
[ 1.4862, 1.8833, 2.0412, ..., 0.6160, -0.9966, 1.6773]]))
edge_attr = None, size = [None, None]
def collect(
self,
edge_index: Union[Tensor, SparseTensor],
x: OptPairTensor,
edge_attr: OptTensor,
size: List[Optional[int]],
) -> CollectArgs:
i, j = (1, 0) if self.flow == 'source_to_target' else (0, 1)
# Collect special arguments:
if isinstance(edge_index, Tensor):
if is_torch_sparse_tensor(edge_index):
adj_t = edge_index
if adj_t.layout == torch.sparse_coo:
edge_index_i = adj_t.indices()[0]
edge_index_j = adj_t.indices()[1]
ptr = None
elif adj_t.layout == torch.sparse_csr:
ptr = adj_t.crow_indices()
edge_index_j = adj_t.col_indices()
edge_index_i = ptr2index(ptr, output_size=edge_index_j.numel())
else:
raise ValueError(f"Received invalid layout '{adj_t.layout}'")
if edge_attr is None:
_value = adj_t.values()
edge_attr = None if _value.dim() == 1 else _value
else:
edge_index_i = edge_index[i]
edge_index_j = edge_index[j]
ptr = None
elif isinstance(edge_index, SparseTensor):
adj_t = edge_index
edge_index_i, edge_index_j, _value = adj_t.coo()
ptr, _, _ = adj_t.csr()
if edge_attr is None:
edge_attr = None if _value is None or _value.dim() == 1 else _value
else:
raise NotImplementedError
> assert edge_attr is not None
E AssertionError
/tmp/torch_geometric.nn.conv.general_conv_GeneralConv_propagate_7jgj7uxt.py:78: AssertionError
```
### Versions
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (aarch64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.6.32-linuxkit-aarch64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 14
On-line CPU(s) list: 0-13
Vendor ID: Apple
Model: 0
Thread(s) per core: 1
Core(s) per cluster: 14
Socket(s): -
Cluster(s): 1
Stepping: 0x0
BogoMIPS: 48.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp flagm2 frint
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.1
[pip3] torch_geometric==2.5.3
[pip3] torchmetrics==1.4.0.post0
[conda] Could not collect
|
open
|
2024-07-22T15:53:49Z
|
2024-08-19T14:56:46Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9530
|
[
"bug"
] |
bgeier
| 2
|
frappe/frappe
|
rest-api
| 31,183
|
Add a Bidirectional Link FieldType for automatic Two-Way Relationships Between DocTypes
|
**Is your feature request related to a problem? Please describe.**
Right now, there’s no easy way to create a two-way link between DocTypes in Frappe. For example, let’s say I have a `Job` DocType and a `Task` DocType:
- In the `Job` form, I have a `child_tasks` field where I can select multiple tasks.
- In the `Task` form, there’s a `parent_job` field linking back to its job.
The problem is, if I update the `child_tasks` field in a `Job`, the `parent_job` field in each `Task` doesn’t update automatically. And if I update the `parent_job` field in a `Task`, the `child_tasks` field in the related `Job` doesn’t change either. This means I have to manually keep them in sync, which is a pain.
**Describe the solution you'd like**
I’d love to have a new FieldType called **Bidirectional Link**. This would make sure that when I link two DocTypes, the relationship stays updated on both sides—automatically!
Using the `Job` and `Task` example:
- If I add or remove tasks in a `Job`, the `parent_job` field in each `Task` should update automatically.
- If I change the `parent_job` field in a `Task`, the `child_tasks` field in the corresponding `Job` should update too.
Notion has something similar with **two-way relations**. When you link databases, Notion automatically mirrors the relationship in both places, making sure everything stays in sync.

**Describe alternatives you've considered**
The only way to do this right now is by writing custom scripts that manually update linked fields whenever something changes. But that’s extra work, can get messy, and is easy to break as things grow.
**Additional context**
Having a **Bidirectional Link** field in Frappe would make life so much easier! It would keep related DocTypes in sync automatically, without needing extra scripts. This would be super useful for any app where different records need to stay connected, and it would save developers a lot of time.
|
open
|
2025-02-07T13:52:50Z
|
2025-02-13T07:12:02Z
|
https://github.com/frappe/frappe/issues/31183
|
[
"feature-request"
] |
Waishnav
| 1
|
jonaswinkler/paperless-ng
|
django
| 381
|
Mail consumer - "It is not a file"
|
Couldn't find this reported in another issue, as far as I can see.
I've just set up the IMAP consumer to pull from a folder, with an action to move processed mail to another one. It's processing attachments only, and using the attachment filename as the document title.
The following appears in the logs for all the files. These files consume fine when uploading the PDF through the UI.
```
webserver_1 | ERROR 2021-01-18 16:45:13,629 loggers Cannot consume /tmp/paperless/paperless-mail-e6fpeh_h: It is not a file.
webserver_1 | 16:45:13 [Q] INFO Process-1:1 processing [19 11 TV TERMS OF BUSINESS inc PRIVACY NOTICE.pdf]
webserver_1 | 16:45:13 [Q] INFO Process-1:3 processing [19 04 Distance Selling.pdf]
webserver_1 | ERROR 2021-01-18 16:45:13,634 loggers Cannot consume /tmp/paperless/paperless-mail-ii6js02q: It is not a file.
webserver_1 | 16:45:13 [Q] ERROR Failed [CCL Knight.pdf] - Cannot consume /tmp/paperless/paperless-mail-e6fpeh_h: It is not a file : Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker
webserver_1 | res = f(*task["args"], **task["kwargs"])
webserver_1 | File "/usr/src/paperless/src/documents/tasks.py", line 73, in consume_file
webserver_1 | override_tag_ids=override_tag_ids)
webserver_1 | File "/usr/src/paperless/src/documents/consumer.py", line 138, in try_consume_file
webserver_1 | self.pre_check_file_exists()
webserver_1 | File "/usr/src/paperless/src/documents/consumer.py", line 48, in pre_check_file_exists
webserver_1 | self.path))
webserver_1 | documents.consumer.ConsumerError: Cannot consume /tmp/paperless/paperless-mail-e6fpeh_h: It is not a file
webserver_1 |
webserver_1 | ERROR 2021-01-18 16:45:13,636 loggers Cannot consume /tmp/paperless/paperless-mail-_qbqazmv: It is not a file.
webserver_1 | 16:45:13 [Q] ERROR Failed [19 11 TV TERMS OF BUSINESS inc PRIVACY NOTICE.pdf] - Cannot consume /tmp/paperless/paperless-mail-ii6js02q: It is not a file : Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker
webserver_1 | res = f(*task["args"], **task["kwargs"])
webserver_1 | File "/usr/src/paperless/src/documents/tasks.py", line 73, in consume_file
webserver_1 | override_tag_ids=override_tag_ids)
webserver_1 | File "/usr/src/paperless/src/documents/consumer.py", line 138, in try_consume_file
webserver_1 | self.pre_check_file_exists()
webserver_1 | File "/usr/src/paperless/src/documents/consumer.py", line 48, in pre_check_file_exists
webserver_1 | self.path))
webserver_1 | documents.consumer.ConsumerError: Cannot consume /tmp/paperless/paperless-mail-ii6js02q: It is not a file
```
|
closed
|
2021-01-18T16:52:00Z
|
2021-01-22T11:15:16Z
|
https://github.com/jonaswinkler/paperless-ng/issues/381
|
[
"bug"
] |
rknightion
| 10
|
flaskbb/flaskbb
|
flask
| 163
|
Ask for confirmation before deleting things?
|
I love how it asks "Are you sure?" before letting you unban a banned user but will cheerfully blow away a category and all forums below it with a single click.
Would you like me to have a bit of a poke at the admin section and add some "Are you sure?" dialogues?
|
closed
|
2015-12-30T03:28:05Z
|
2018-04-15T07:47:37Z
|
https://github.com/flaskbb/flaskbb/issues/163
|
[] |
gordonjcp
| 1
|
google-research/bert
|
tensorflow
| 858
|
when using c++ to do inference, tensorflow::session->Run error
|
open
|
2019-09-18T08:26:54Z
|
2019-12-01T13:26:01Z
|
https://github.com/google-research/bert/issues/858
|
[] |
Jiayuforfreeo
| 0
|
|
plotly/plotly.py
|
plotly
| 4,507
|
go.Scatter3d doesn't display a given tensor
|
# Issue
When plotting an np.ndarray of type fp64, nothing is displayed.
Oddly, we could only reproduce it with one specific array.
Things that make the script work:
- If we add epsilon (as in the commented line), then the point cloud is displayed properly.
- If we cast to fp32, it works.
- If we add a small value (0.000000000001 for example), it works.
Things that don't work:
- If we add or subtract big values, like 0.4, 1 or 2, it doesn't work
- If we subtract epsilon, it doesn't work
- If we add numbers larger than 0.000000000001, it doesn't work even if they are small (0.000001 for example)
# Environment
We tried this (and reproduced it on at least 2 computers) with numpy 1.26.3 and plotly 5.18.0 on Ubuntu 22
```
import numpy as np
import plotly.graph_objects as go
init = np.array(
[
[
-0.063,
-0.063,
0.0,
],
[
-0.063,
-0.021,
0.0,
],
[
-0.063,
0.021,
0.0,
],
[
-0.063,
0.063,
0.0,
],
[
-0.021,
-0.021,
0.0,
],
[
-0.021,
-0.063,
0.0,
],
[
-0.021,
0.021,
0.0,
],
[
-0.021,
0.063,
0.0,
],
[
0.021,
-0.063,
0.0,
],
[
0.021,
-0.021,
0.0,
],
[
0.021,
0.021,
0.0,
],
[
0.021,
0.063,
0.0,
],
[
0.063,
-0.063,
0.0,
],
[
0.063,
-0.021,
0.0,
],
[
0.063,
0.021,
0.0,
],
[
0.063,
0.063,
0.0,
],
]
)
rot_mat = [
[4.93038066e-32, -1.00000000e00, 2.22044605e-16],
[2.22044605e-16, 2.22044605e-16, 1.00000000e00],
[-1.00000000e00, 0.00000000e00, 2.22044605e-16],
]
transformed = (rot_mat @ init.T).T + np.array([0.5, 0.5, 0.5])
x, y, z = 0, 1, 2
idx = list(transformed.shape).index(3)
if idx < 0:
raise ValueError("Array must be [X,Y,Z] x N")
elif idx == 1:
array = transformed.transpose()
default_kwargs = dict(
mode="markers",
marker=dict(size=3, color="black"),
)
print(array.dtype)
# array[y]+=np.finfo(np.float64).eps
print(array.dtype)
print(array)
plot = go.Scatter3d(x=array[x], y=array[y], z=array[z], **default_kwargs)
default_kwargs = dict(
scene=dict(
xaxis=dict(title="X", range=[0, 2]),
yaxis=dict(title="Y", range=[0, 2]),
zaxis=dict(title="Z", range=[0, 2]),
),
)
layout = go.Layout(**default_kwargs)
fig = go.Figure(data=plot, layout=layout)
fig.show()
```
|
open
|
2024-02-06T16:07:02Z
|
2024-08-13T13:08:34Z
|
https://github.com/plotly/plotly.py/issues/4507
|
[
"bug",
"sev-2",
"P3"
] |
JuanFMontesinos
| 3
|
chainer/chainer
|
numpy
| 7,797
|
Release Tasks for v7.0.0b3 / v6.3.0
|
This is an issue to track-down release-blocker tasks.
- [x] #7741 NumPy 1.17 support
- [x] Python 2 drop
- [x] chainer #7826
- [x] cupy https://github.com/cupy/cupy/pull/2343
- [x] blog https://github.com/chainer/chainer.org/pull/110
Merge after release:
- ~Separate parameter combinations between master and stable https://github.com/chainer/chainer-test/issues/490~ will be handled in #8006
|
closed
|
2019-07-23T11:14:21Z
|
2019-10-01T07:21:06Z
|
https://github.com/chainer/chainer/issues/7797
|
[
"release-blocker",
"prio:high"
] |
kmaehashi
| 0
|
xinntao/Real-ESRGAN
|
pytorch
| 169
|
Unexpected key(s) in state_dict error
|
I trained a RealESRGAN model following the Training.md tutorial.
Then I swapped the model in `python inference_realesrgan.py --model_path experiments/pretrained_models/RealESRGAN_x4plus.pth --input inputs --face_enhance`
for my own trained model
and got this error.
|
closed
|
2021-11-29T20:30:04Z
|
2021-11-29T20:35:08Z
|
https://github.com/xinntao/Real-ESRGAN/issues/169
|
[] |
610821216
| 0
|
mwaskom/seaborn
|
data-science
| 3,373
|
Standard seaborn.objects printouts are inaccessible in some ways on Macs
|
Alright, this one involves like four different pieces of software to isolate. But I *think* the issue here is in **seaborn** rather than one of those other places. Here's the issue:
1. I have a Jupyter notebook containing two **seaborn.objects** graphs. The first one is printed using **matplotlib** (`fig = plt.figure()`, then `so.Plot().on(fig)`). The second one is printed using `seaborn.objects` directly (without `.on(fig)`). These graphs render properly in Jupyter.
2. I render that notebook to a document (HTML, Word, PDF, Powerpoint, any of them), using Quarto.
3. The first graph renders properly in the resulting document. The second one does not appear.
Note:
1. This issue occurs _only on Mac_. I have tested this on two Mac machines and two Windows machines. On both Windows machines, both graphs render properly. On both Macs, only the first renders and the second does not appear. I haven't tested Linux.
2. This does not produce an error or anything, the graph simply does not appear in the resulting document.
3. This issue does not occur with non-`objects` **seaborn** graphs. `sns.lineplot()` works fine.
4. The **matplotlib** mode (`inline` or `notebook`) doesn't seem to matter.
5. Versions: **matplotlib** 3.7.1, **seaborn** 0.12.2
I suspect there is something different in the way that **seaborn.objects** prints things as opposed to how **matplotlib** prints things (specifically on Mac, I guess?) that is causing this, which is what makes me think this is a **seaborn.objects** issue as opposed to, say, a Quarto issue.
Here is the code for a Jupyter notebook that exhibits the issue. Note that I am using `p.plot()` here, but the same issue occurs if you don't save the plot as `p` and instead just have `so.Plot()` on a line by itself.
(code chunk 1, this renders properly on both Windows and Mac)
```python
import pandas as pd
import seaborn.objects as so
import matplotlib.pyplot as plt
dat = pd.DataFrame({'a':[1,2],'b':[3,4]})
fig = plt.figure()
p = so.Plot(dat, x = 'a', y = 'b').on(fig).add(so.Dot())
p.plot()
```
(code chunk 2, this does not show up in the resulting document on Mac, but it works fine on Windows)
```python
p = so.Plot(dat, x = 'a', y = 'b').add(so.Dot())
p.plot()
```
|
closed
|
2023-05-25T22:46:46Z
|
2023-05-26T16:49:43Z
|
https://github.com/mwaskom/seaborn/issues/3373
|
[] |
NickCH-K
| 2
|
slackapi/bolt-python
|
fastapi
| 327
|
Custom Select Menu-- Payload Too Big
|
I'm using a custom select menu in socket mode, like this: https://slack.dev/bolt-python/concepts#options and am getting the error:
```
slack_sdk/socket_mode/builtin/internals.py", line 411, in _build_data_frame_for_sending
header += struct.pack("!BH", b2, payload_length)
struct.error: 'H' format requires 0 <= number <= 65535.
```
I think this is because I'm sending a long list of things in the options format:
```python
options = [
{
"text": {"type": "plain_text", "text": "Option 1"},
"value": "1-1",
},
{
"text": {"type": "plain_text", "text": "Option 2"},
"value": "1-2",
},
]
```
Imagine there are many text and value entries here. Is there a way around this payload requirement and if not is there any other way to load my external data into some sort of dropdown format? I basically just want to send in a list of names, but this options format is making the length of the data longer than necessary. Thank you!
#### The `slack_bolt` version
slack-sdk==3.5.0
slack-bolt==1.5.0
#### Python runtime version
python==2.7.16
#### OS info
ProductName: Mac OS X
ProductVersion: 10.15.7
BuildVersion: 19H524
Darwin Kernel Version 19.6.0
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
|
closed
|
2021-05-07T21:32:17Z
|
2021-05-10T16:31:40Z
|
https://github.com/slackapi/bolt-python/issues/327
|
[
"question"
] |
mariebarrramsey
| 5
|
explosion/spaCy
|
nlp
| 12,566
|
CLI benchmark accuracy doesn't save rendered displacy htmls
|
The accuracy benchmark of my model does not save the rendered displacy HTMLs as requested. The benchmark itself runs, which is why I'm confused. The model contains only **transformers** and **spancat** components. Is **spancat** not yet supported? 😞
The DocBin does not contain any empty docs.
CLI output:
```powershell
$ python -m spacy benchmark accuracy data/models/pl_spancat_acc/model-best/ data/test.spacy --output results/spacy/metrics.json --gpu-id 0 --displacy-path results/spacy/benchmark_acc_test_displacy
ℹ Using GPU: 0
================================== Results ==================================
TOK 100.00
SPAN P 79.31
SPAN R 54.19
SPAN F 64.38
SPEED 3752
============================== SPANS (per type) ==============================
P R F
nam_loc_gpe_city 77.29 74.42 75.83
nam_pro_software 82.35 36.84 50.91
nam_org_institution 63.11 50.78 56.28
nam_liv_person 87.34 82.64 84.93
nam_loc_gpe_country 95.24 85.37 90.03
. . .
nam_pro 0.00 0.00 0.00
✔ Generated 25 parses as HTML
results/spacy/benchmark_acc_test_displacy
✔ Saved results to
results/spacy/benchmark_acc_test_metrics.json
```
Random doc.to_json() from test DocBin:
```python
{'ents': [{'end': 54, 'label': 'nam_adj_country', 'start': 44},
{'end': 83, 'label': 'nam_liv_person', 'start': 69},
{'end': 100, 'label': 'nam_pro_title_book', 'start': 86}],
'spans': {'sc': [{'end': 54,
'kb_id': '',
'label': 'nam_adj_country',
'start': 44},
{'end': 83,
'kb_id': '',
'label': 'nam_liv_person',
'start': 69},
{'end': 100,
'kb_id': '',
'label': 'nam_pro_title_book',
'start': 86}]},
'text': 'Niedawno czytał em nową książkę znakomitego szkockiego medioznawcy , '
'Briana McNaira - Cultural Chaos .',
'tokens': [{'end': 8, 'id': 0, 'start': 0},
{'end': 15, 'id': 1, 'start': 9},
{'end': 18, 'id': 2, 'start': 16},
{'end': 23, 'id': 3, 'start': 19},
{'end': 31, 'id': 4, 'start': 24},
{'end': 43, 'id': 5, 'start': 32},
{'end': 54, 'id': 6, 'start': 44},
{'end': 66, 'id': 7, 'start': 55},
{'end': 68, 'id': 8, 'start': 67},
{'end': 75, 'id': 9, 'start': 69},
{'end': 83, 'id': 10, 'start': 76},
{'end': 85, 'id': 11, 'start': 84},
{'end': 94, 'id': 12, 'start': 86},
{'end': 100, 'id': 13, 'start': 95},
{'end': 102, 'id': 14, 'start': 101}]}
```
<details><summary><b>Model config</b></summary>
<p>
```ini
[paths]
train = null
dev = null
vectors = null
init_tok2vec = null
[system]
gpu_allocator = "pytorch"
seed = 0
[nlp]
lang = "pl"
pipeline = ["transformer","spancat"]
batch_size = 128
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components]
[components.spancat]
factory = "spancat"
max_positive = null
scorer = {"@scorers":"spacy.spancat_scorer.v1"}
spans_key = "sc"
threshold = 0.5
[components.spancat.model]
@architectures = "spacy.SpanCategorizer.v1"
[components.spancat.model.reducer]
@layers = "spacy.mean_max_reducer.v1"
hidden_size = 128
[components.spancat.model.scorer]
@layers = "spacy.LinearLogistic.v1"
nO = null
nI = null
[components.spancat.model.tok2vec]
@architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
pooling = {"@layers":"reduce_mean.v1"}
upstream = "*"
[components.spancat.suggester]
@misc = "spacy.ngram_suggester.v1"
sizes = [1,2,3]
[components.transformer]
factory = "transformer"
max_batch_items = 4096
set_extra_annotations = {"@annotation_setters":"spacy-transformers.null_annotation_setter.v1"}
[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "dkleczek/bert-base-polish-cased-v1"
mixed_precision = false
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96
[components.transformer.model.grad_scaler_config]
[components.transformer.model.tokenizer_config]
use_fast = true
[components.transformer.model.transformer_config]
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = []
annotating_components = []
before_to_disk = null
before_update = null
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
get_length = null
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 0.00005
[training.score_weights]
spans_sc_f = 1.0
spans_sc_p = 0.0
spans_sc_r = 0.0
[pretraining]
[initialize]
vectors = ${paths.vectors}
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
```
</p>
</details>
|
closed
|
2023-04-23T21:40:47Z
|
2023-05-29T00:02:17Z
|
https://github.com/explosion/spaCy/issues/12566
|
[
"bug",
"feat / cli",
"feat / spancat"
] |
jamnicki
| 3
|
activeloopai/deeplake
|
computer-vision
| 2,848
|
[BUG] I cannot create an empty dataset on custom s3 location due to signed header
|
### Severity
P0 - Critical breaking issue or missing functionality
### Current Behavior
Trying to create an empty dataset using the s3 provider with a custom endpoint fails with the following error:
```
Traceback (most recent call last):
File "~/.virtualenvs/exchfmt/lib/python3.10/site-packages/deeplake/core/storage/s3.py", line 275, in get_bytes
return self._get_bytes(path, start_byte, end_byte)
File "~/.virtualenvs/exchfmt/lib/python3.10/site-packages/deeplake/core/storage/s3.py", line 247, in _get_bytes
resp = self.client.get_object(Bucket=self.bucket, Key=path, Range=range)
File "~/.virtualenvs/exchfmt/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
File "~/.virtualenvs/exchfmt/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the GetObject operation: V4 authentication signed header not found: range
```
### Steps to Reproduce
The following code fails to create an empty dataset:
```python
import deeplake
import configparser
from pathlib import Path

cfgprs = configparser.ConfigParser()
cfgprs.read_string(Path("~/.aws/credentials").expanduser().read_text())

ds = deeplake.empty(
    "s3://koolbucket/deeplake/datasets",
    public=False,
    creds=dict(
        **dict(cfgprs["default"]),
        endpoint_url="https://koolcustom.s3.server.com",
        region="eu-north-1",
    ),
)
```
### Expected/Desired Behavior
I expect this to work. I have full access to this s3 instance, but apparently we haven't used s3 signed requests before.
### Python Version
Python 3.10.12
### OS
Ubuntu 22.04
### IDE
vscode
### Packages
_No response_
### Additional Context
_No response_
### Possible Solution
Remove the `Range=range` keyword as shown below
```diff
 class S3Provider(StorageProvider):
     ...
     def _get_bytes():
         ...
-        resp = self.client.get_object(Bucket=self.bucket, Key=path, Range=range)
+        resp = self.client.get_object(Bucket=self.bucket, Key=path)
         ...
```
### Are you willing to submit a PR?
- [X] I'm willing to submit a PR (Thank you!)
|
closed
|
2024-05-08T14:24:41Z
|
2024-05-10T15:13:03Z
|
https://github.com/activeloopai/deeplake/issues/2848
|
[
"bug"
] |
hoshimura
| 5
|
pydata/xarray
|
numpy
| 10,157
|
Selecting point closest to (lon0, lat0) when lon,lat coordinates are 2D
|
It's very common to want to extract a time series at a specified coordinate location, and I'm wondering whether xarray could support this directly without using xoak. Currently I'm using xoak, as in this reproducible example:
``` python
import xarray as xr
import intake
import xoak
intake_catalog_url = 'https://usgs-coawst.s3.amazonaws.com/useast-archive/coawst_intake.yml'
cat = intake.open_catalog(intake_catalog_url)
ds = cat['COAWST-USEAST'].to_dask()
lat, lon = 42.5, -70.0 # Gulf of Maine, 100km east of Boston, MA
da = ds['Hwave']
da.xoak.set_index(['lat_rho', 'lon_rho'], 'scipy_kdtree')
ds_point = xr.Dataset({"lon": ("point", [lon]), "lat": ("point", [lat])})
da.xoak.sel(lat_rho=ds_point.lat, lon_rho=ds_point.lon).sel(ocean_time='2012-10-01')
```
which produces:

Would it be possible to enable this in xarray or is this too specific a functionality to consider?
|
open
|
2025-03-20T15:42:46Z
|
2025-03-21T10:14:00Z
|
https://github.com/pydata/xarray/issues/10157
|
[
"enhancement"
] |
rsignell
| 5
|
ploomber/ploomber
|
jupyter
| 805
|
add did you mean feature to `ploomber examples`
|
We should add the "did you mean?" feature when executing `ploomber examples`
```sh
ploomber examples -n cookbook/fileclient -o fileclient
```
```txt
There is no example named "cookbook/fileclient", did you mean "cookbook/file-client"?
```
for reference: We already have this built-in in other places https://github.com/ploomber/ploomber/blob/907c5ed798354efbedc5f4cfd397179e75812ec5/src/ploomber_cli/cli.py#L17
|
closed
|
2022-05-22T04:44:22Z
|
2022-07-19T18:08:47Z
|
https://github.com/ploomber/ploomber/issues/805
|
[] |
edublancas
| 1
|
jackmpcollins/magentic
|
pydantic
| 38
|
Proposal: Custom base url/parameters environment variables for AI gateways
|
Would be neat to support environment variables for base url and necessary key/value parameters to support AI gateways, like Cloudflare's offering!
> ### AI Gateway
> [Cloudflare AI Gateway Documentation](https://developers.cloudflare.com/ai-gateway/)
> Cloudflare’s AI Gateway allows you to gain visibility and control over your AI apps. By connecting your apps to AI Gateway, you can gather insights on how people are using your application with analytics and logging and then control how your application scales with features such as caching, rate limiting, as well as request retries, model fallback, and more. Better yet - it only takes one line of code to get started.
|
closed
|
2023-10-04T14:46:05Z
|
2024-03-04T01:03:12Z
|
https://github.com/jackmpcollins/magentic/issues/38
|
[] |
peteallport
| 3
|
hankcs/HanLP
|
nlp
| 1,236
|
Running `from pyhanlp import *` crashes with "A fatal error has been detected by the Java Runtime Environment:"
|
<!--
The checklist and version number are required, otherwise no reply. For a faster response, please fill in the template carefully. Thanks for your cooperation.
-->
## Checklist
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
- [Home page](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer.
* I understand that the open-source community is a volunteer community built on shared interest, bearing no responsibilities or obligations. I will be polite and thank everyone who helps me.
* [x] I put an x in these brackets to confirm the above.
## Version
<!-- For release builds, give the jar file name without its extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is: hanlp-1.7.4
The version I am using is: hanlp-1.7.4
<!-- The above is required; feel free to elaborate below -->
Running `from pyhanlp import *` raises an error when importing pyhanlp.
## My question
Python 3.6.3 |Anaconda, Inc.| (default, Oct 6 2017, 12:04:38)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from pyhanlp import *
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
## Reproducing the problem
<!-- What did you do to cause the problem? e.g. modified the code? modified a dictionary or model? -->
### Steps
1. Open the macOS Terminal and run "python3"
2. Then run "from pyhanlp import *"
### Trigger code
```
from pyhanlp import *
```
### Expected output
```
The program runs normally with no error messages
```
### Actual output
```
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000001124355c0, pid=20972, tid=775
#
# JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 1.7.0_79-b15)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# V [libjvm.dylib+0x30f5c0] jni_invoke_nonstatic(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*)+0x1b
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/www/hs_err_pid20972.log
```
## Other information
<!-- Any information that might help, including screenshots, logs, config files, related issues, etc. -->

|
closed
|
2019-07-09T12:47:07Z
|
2020-01-01T10:49:17Z
|
https://github.com/hankcs/HanLP/issues/1236
|
[
"ignored"
] |
ferrior30
| 3
|
AntonOsika/gpt-engineer
|
python
| 462
|
Statistics: collection of learnings did not work as intended
|
this line passes 2 arguments, but the OpenSSL-backed `sha256()` here accepts only 1; the fix might need to account for the OpenSSL/Python version
https://github.com/AntonOsika/gpt-engineer/blob/main/gpt_engineer/collect.py#L39
## Expected Behavior
Collection of data should have been submitted
## Current Behavior
What is the current behavior?
To help gpt-engineer learn, please answer 3 questions:
```python
Did the generated code run at all? y/n/u(ncertain): y
Did the generated code do everything you wanted? y/n/u(ncertain): y
Thank you
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniconda/base/bin/gpt-engineer", line 8, in <module>
sys.exit(app())
File "/Users/eleijonmarck/dev/gpt-engineer/gpt_engineer/main.py", line 67, in main
collect_learnings(model, temperature, steps, dbs)
File "/Users/eleijonmarck/dev/gpt-engineer/gpt_engineer/collect.py", line 31, in collect_learnings
model, temperature, steps, dbs, steps_file_hash=steps_file_hash()
File "/Users/eleijonmarck/dev/gpt-engineer/gpt_engineer/collect.py", line 39, in steps_file_hash
return hashlib.sha256(content.encode("utf-8"), usedforsecurity=False).hexdigest()
TypeError: openssl_sha256() takes at most 1 argument (2 given)
```
### Steps to Reproduce
Please provide detailed steps for reproducing the issue.
1. generate project
2. run code
3. CTRL+C (end the program running)
4. answer questions
pyproject.toml
```toml
[build-system]
requires = ["setuptools", "wheel"]
[project]
name = "gpt-engineer"
version = "0.0.7"
description = "Specify what you want it to build, the AI asks for clarification, and then builds it."
readme = "README.md"
requires-python = ">=3.8"
dependencies = [
'black == 23.3.0',
'click >= 8.0.0',
'mypy == 1.3.0',
'openai == 0.27.8',
'pre-commit == 3.3.3',
'pytest == 7.3.1',
'ruff == 0.0.272',
'termcolor==2.3.0',
'typer >= 0.3.2',
'rudder-sdk-python == 2.0.2',
'dataclasses-json == 0.5.7',
]
classifiers = [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"License :: OSI Approved :: MIT License",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
]
[project.scripts]
gpt-engineer = 'gpt_engineer.main:app'
[tool.setuptools]
packages = ["gpt_engineer"]
[tool.ruff]
select = ["F", "E", "W", "I001"]
line-length = 90
show-fixes = false
target-version = "py311"
task-tags = ["TODO", "FIXME"]
exclude = [
".bzr",
".direnv",
".eggs",
".git",
".ruff_cache",
".svn",
".tox",
".venv",
"__pypackages__",
"_build",
"buck-out",
"build",
"dist",
"node_modules",
"venv",
]
[project.urls]
"Homepage" = "https://github.com/AntonOsika/gpt-engineer"
"Bug Tracker" = "https://github.com/AntonOsika/gpt-engineer/issues"
[tool.ruff.isort]
known-first-party = []
known-third-party = []
section-order = [
"future",
"standard-library",
"third-party",
"first-party",
"local-folder",
]
combine-as-imports = true
split-on-trailing-comma = false
lines-between-types = 1
[tool.black]
line-length = 90
target-version = ["py311"]
include = '\.pyi?$'
exclude = '''
(
/(
\.direnv
| \.eggs
| \.git
| \.tox
| \.venv
| _build
| build
| dist
| venv
)/
)
'''
```
|
closed
|
2023-07-01T15:39:58Z
|
2023-07-02T15:37:37Z
|
https://github.com/AntonOsika/gpt-engineer/issues/462
|
[] |
eleijonmarck
| 2
|
huggingface/datasets
|
deep-learning
| 6,851
|
load_dataset('emotion') raises UnicodeDecodeError
|
### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
Success
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.19.0
|
open
|
2024-04-30T09:25:01Z
|
2024-09-05T03:11:04Z
|
https://github.com/huggingface/datasets/issues/6851
|
[] |
L-Block-C
| 2
|
ExpDev07/coronavirus-tracker-api
|
rest-api
| 488
|
Coronavirus data missing for Finland
|
Having downloaded the data from the beginning of the pandemic, I have noticed that in the last 10 days or so the Finland data are zero. No data from the 15th January to present day. (15-16 January would normally be zero as it is a weekend.)
just thought I would point it out.
KEEP UP THE GOOD WORK AND THANKS.
|
closed
|
2022-01-26T09:58:39Z
|
2023-09-10T11:24:33Z
|
https://github.com/ExpDev07/coronavirus-tracker-api/issues/488
|
[] |
PRRH
| 1
|
Farama-Foundation/PettingZoo
|
api
| 1,262
|
[Proposal] Flexibility with _was_dead_step
|
### Proposal
Hello,
I propose to modify how AECEnv (and possibly ParallelEnv, but I have not worked with it so I'm not sure how it's handled there) handles dropping out of agents.
I've been working with the current method (_was_dead_step) for the past couple of weeks and I think I understand its purpose and the strategy behind its design. My suggestion is to change how the environment handles the dropping of dead agents such that the "already dead" agent no longer needs to pass a None action to the step function. Not only is this unintuitive, but it adds unnecessary complexity to the training loop. For example, AgileRL memory buffers (and likely others) do not support None values (pytorch backend I think?), so my training loop currently has to handle the None actions.
I've currently implemented a simple way to do this.
1. I've changed _was_dead_step to no longer require an action, and instead require an agent parameter:
```python
def _was_dead_step(self, agent) -> None:
```
2. I create a wrapper function for checking whether _was_dead_step needs to be executed:
```python
def remove_dead_agents(self, agent):
    if self.terminations[agent] or self.truncations[agent]:  # for when you died not on your turn
        # print(f'Current agent is dead, {self.agent_selection}, skipping action')
        self._was_dead_step(agent)
        # Handle the case where the agent dies mid round or something. I'm pretty
        # sure I need to set the next action type to be a base action
        self.infos[agent] = {"next_action_type": self.game.turn.next_action_type}
```
3. I call remove_dead_agents on all of my agent ids at THE END of my step function:
```python
[self.remove_dead_agents(agent) for agent in self.agents]
```
I have not tested this thoroughly across the code-base, and maybe there are implications for this that I'm not aware of. But it seems to be working so far.
### Motivation
Currently, _was_dead_step() is suggested to be called at the top of the .step() function. What this means is that at the beginning of the step function, if the agent is dead (terminated = True), then the "action" must default to a "None" action.
I think the handling of dead agents could be more intuitively handled with decreased complexity.
My approach was motivated by the fact that I have an environment which terminates when there is only 1 agent left. This was making it very difficult to set up my termination flags and how everything interacts with the agent, because I was determining if the game was over at the END of the step function (which I think is intuitive). However, the actual deletion of the agent occurs at the beginning of the NEXT step function. This got me thinking, and I think it is more intuitive in general to **check at the end of a step function if the game is over or not**, at least for turn-based board games like what I'm working with.
Additionally I was running into problems using AgileRL because the None values were being added to my memory buffer. This is not allowed within the pytorch framework as far as I understand, leading to errors. To handle this I added code to handle the None actions by turning them into non-None values, but this adds unnecessary complexity that I think could just be handled by an alternative strategy.
### Pitch
Change the way that AECEnv handles the dropping out of dead agents by checking the game state at the end of each step (from the beginning of the next step)
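A minimal, self-contained sketch of the proposed end-of-step sweep (the env class, agent names, and termination logic here are all hypothetical stand-ins, not PettingZoo's actual implementation):

```python
# Hypothetical toy AEC-style env: terminations are checked at the END of
# step(), so a dead agent never has to submit a placeholder None action.
class TinyEnv:
    def __init__(self):
        self.agents = ["a0", "a1"]
        self.terminations = {a: False for a in self.agents}
        self.truncations = {a: False for a in self.agents}

    def _was_dead_step(self, agent):
        # drop the dead agent immediately, no None action required
        self.agents.remove(agent)

    def step(self, action):
        # ... apply the action, update game state ...
        self.terminations["a1"] = True  # pretend a1 died this turn
        # sweep all agents at the end of the step
        for agent in list(self.agents):
            if self.terminations[agent] or self.truncations[agent]:
                self._was_dead_step(agent)

env = TinyEnv()
env.step(0)
print(env.agents)  # ['a0']
```

Iterating over a copy (`list(self.agents)`) lets the sweep remove agents without mutating the list being iterated.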
### Alternatives
I could be missing something here because I think the documentation for understanding how _was_dead_step handles dead agents, and also how it is truly intended to be used is lacking
### Additional context
_No response_
### Checklist
- [x] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
|
open
|
2025-02-10T23:36:08Z
|
2025-02-10T23:36:08Z
|
https://github.com/Farama-Foundation/PettingZoo/issues/1262
|
[
"enhancement"
] |
AlexAdrian-Hamazaki
| 0
|
tensorflow/datasets
|
numpy
| 5,448
|
etils.epy.lazy_imports not found
|
When running the import of version 4.9.5,
it complains that etils.epy.lazy_imports() is not found.
My version of etils is 1.7.0.
(etils==1.9.0 would require Python 3.11 and I still have Python 3.10.)
Importing version 4.9.4 works fine --- I notice it is also the current default in Colab.
|
closed
|
2024-06-04T00:51:52Z
|
2024-06-04T16:15:50Z
|
https://github.com/tensorflow/datasets/issues/5448
|
[
"bug"
] |
hhoppe
| 5
|
DistrictDataLabs/yellowbrick
|
matplotlib
| 1,065
|
issue in installation in ubuntu
|
```python
try:
    import deepmatcher
except:
    !pip install -qqq deepmatcher
```
While running the above code in Python 3.6 I am getting the error mentioned below:
```
File "/home/vikrant/anaconda2/lib/python2.7/site-packages/deepmatcher/data/field.py", line 163
    def build_vocab(self, *args, vectors=None, cache=None, **kwargs):
                                 ^
SyntaxError: invalid syntax
```
Showing error at `vectors`.
@DistrictDataLabs/team-oz-maintainers
|
closed
|
2020-05-13T15:25:16Z
|
2020-05-13T15:49:27Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1065
|
[
"invalid"
] |
parulmishra19
| 1
|
mitmproxy/pdoc
|
api
| 127
|
After installing pdoc with pip, it is not recognised as an executable on Windows
|
Tried navigating to \Scripts, same result.
|
closed
|
2017-04-14T07:26:29Z
|
2021-02-26T00:02:35Z
|
https://github.com/mitmproxy/pdoc/issues/127
|
[] |
epogrebnyak
| 16
|
ranaroussi/yfinance
|
pandas
| 1,425
|
_get_decryption_keys_from_yahoo_js(soup) got yfinance failed to decrypt Yahoo data response error
|
# IMPORTANT
Confirm by running:
yfinance version : 0.2.12
python version : 3.9.7
using ticker: "AAPL"
Thank you for the quick update 0.2.11 -> 0.2.12,
but I found an error " " in scraper.py:
tk = TickerData("AAPL")
tk._get_decryption_keys_from_yahoo_js(soup)
```python
import json
import requests
from bs4 import BeautifulSoup
from yfinance.data import decrypt_cryptojs_aes_stores, TickerData
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36'
}
url = 'https://finance.yahoo.com/calendar/earnings'
response = requests.get(url, headers=headers)
page_content = response.content.decode(encoding='utf-8',errors='strict')
soup = BeautifulSoup(response.content, "html.parser")
td = TickerData("AAPL")
keys = td._get_decryption_keys_from_yahoo_js(soup)
page_data_string = [row for row in page_content.split(
'\n') if row.startswith('root.App.main = ')][0][:-1]
page_data_string = page_data_string.split('root.App.main = ', 1)[1]
decrypt_cryptojs_aes_stores(json.loads(page_data_string),keys) # yfinance failed to decrypt Yahoo data response
print(keys) # []
```
im using yfinance with https://github.com/wenboyu2/yahoo-earnings-calendar/pull/35
Thank you for sharing this good repo.
|
closed
|
2023-02-17T01:47:32Z
|
2023-02-18T11:38:58Z
|
https://github.com/ranaroussi/yfinance/issues/1425
|
[] |
seohyunjun
| 2
|
pyg-team/pytorch_geometric
|
pytorch
| 9,520
|
Takes too long to install PyG on Colab
|
### 😵 Describe the installation problem
I used to be able to install the required packages on Colab to run PyG using the following code within 2 minutes.
```
import torch
def format_pytorch_version(version):
return version.split('+')[0]
TORCH_version = torch.__version__
TORCH = format_pytorch_version(TORCH_version)
def format_cuda_version(version):
return 'cu' + version.replace('.', '')
CUDA_version = torch.version.cuda
CUDA = format_cuda_version(CUDA_version)
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-geometric
```
However, when I tried to run the same code on Colab today, it took 15 minutes to install torch-scatter, and after 30 minutes, I am still waiting for the second installation of torch-sparse to finish (it's taking a very long time at _Building wheels for collected packages: torch-sparse_). Is this due to recent updates to the packages? How can I install the required packages more quickly? Thank you very much!
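For what it's worth, the same helper functions from the snippet above can be pointed at the data.pyg.org wheel index, which hosts prebuilt wheels so pip does not build from source; the version strings below are examples matching the environment info, and the index URL is an assumption based on the current PyG install docs:

```python
def format_pytorch_version(version: str) -> str:
    return version.split('+')[0]

def format_cuda_version(version: str) -> str:
    return 'cu' + version.replace('.', '')

torch_version = "2.3.1+cu121"  # example value; normally torch.__version__
cuda_version = "12.1"          # example value; normally torch.version.cuda

url = (f"https://data.pyg.org/whl/"
       f"torch-{format_pytorch_version(torch_version)}"
       f"+{format_cuda_version(cuda_version)}.html")
print(url)  # https://data.pyg.org/whl/torch-2.3.1+cu121.html
```

That URL can then be passed to `pip install torch-scatter torch-sparse ... -f <url>` in place of the pytorch-geometric.com index.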
### Environment
* PyG version:
* PyTorch version: 2.3.1
* OS:
* Python version: Python 3.10.12
* CUDA/cuDNN version: 12.1
* How you installed PyTorch and PyG (`conda`, `pip`, source):
* Any other relevant information (*e.g.*, version of `torch-scatter`):
|
open
|
2024-07-19T03:41:35Z
|
2024-09-19T15:48:02Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9520
|
[
"installation"
] |
xubingze
| 4
|
InstaPy/InstaPy
|
automation
| 6,657
|
Login XPaths are broken
|
## Expected Behavior
Can login and execute program
## Current Behavior
Cannot login and program fails. Console logs:
```
InstaPy Version: 0.6.16
._. ._. ._. ._. ._. ._. ._. ._. ._.
Workspace in use: "C:/Users/admin/InstaPy"
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-11-30 11:36:02] [tardeo.bar] Session started!
oooooooooooooooooooooooooooooooooooooooooooooooooooooo
INFO [2022-11-30 11:36:06] [tardeo.bar] - Cookie file not found, creating cookie...
WARNING [2022-11-30 11:36:16] [tardeo.bar] Login A/B test detected! Trying another string...
WARNING [2022-11-30 11:36:21] [tardeo.bar] Could not pass the login A/B test. Trying last string...
ERROR [2022-11-30 11:36:26] [tardeo.bar] Login A/B test failed!
b"Message: Unable to locate element: //div[text()='Log In']\nStacktrace:\nRemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8\nWebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5\nNoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5\nelement.find/</<@chrome://remote/content/marionette/element.sys.mjs:280:16\n"
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\instapy-0.6.16-py3.10.egg\instapy\login_util.py", line 337, in login_user
login_elem = browser.find_element(
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //button[text()='Log In']
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5
element.find/</<@chrome://remote/content/marionette/element.sys.mjs:280:16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\instapy-0.6.16-py3.10.egg\instapy\login_util.py", line 343, in login_user
login_elem = browser.find_element(
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //a[text()='Log in']
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5
element.find/</<@chrome://remote/content/marionette/element.sys.mjs:280:16
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\instapy-0.6.16-py3.10.egg\instapy\login_util.py", line 350, in login_user
login_elem = browser.find_element(
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "C:\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //div[text()='Log In']
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5
NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5
element.find/</<@chrome://remote/content/marionette/element.sys.mjs:280:16
........................................................................................................................
CRITICAL [2022-11-30 11:36:26] [tardeo.bar] Unable to login to Instagram! You will find more information in the logs above.
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
INFO [2022-11-30 11:36:29] [tardeo.bar] Sessional Live Report:
|> No any statistics to show
[Session lasted 35.04 seconds]
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
INFO [2022-11-30 11:36:29] [tardeo.bar] Session ended!
ooooooooooooooooooooooooooooooooooooooooooooooooooooooo
```
## Possible Solution (optional)
- Update the XPath
- Review PR in Github, some developers update that.
## InstaPy configuration
```python
import random
import os
from dotenv import load_dotenv
from instapy import InstaPy
from instapy.util import smart_run
load_dotenv()
# login credentials
insta_username = os.getenv('SERVICE_USERNAME')
insta_password = os.getenv('SERVICE_PASSWORD')
# restriction data
dont_like_list = os.getenv('DONT_LIKE_WORD_LIST').split(',')
ignore_user_list = os.getenv('IGNORE_USER_LIST').split(',')
# Prevent commenting on and unfollowing friend_user_list
friend_user_list = os.getenv('FRIEND_USER_LIST').split(',')
# Prevent posts that contains next words
ignore_word_list = os.getenv('IGNORE_WORD_LIST').split(',')
# Set similar accounts and influencers from your niche to target...
target_user_list = os.getenv('TARGET_USER_LIST').split(',')
# Skip all business accounts, except from list given...
target_business_categories = os.getenv(
'TARGET_BUSINESS_CATEGORY_LIST').split(',')
# InstaPy session
session = InstaPy(username=insta_username,
password=insta_password,
headless_browser=False,
disable_image_load=True,
multi_logs=True,
want_check_browser=False,
browser_executable_path=r"C:\Program Files\Mozilla Firefox\firefox.exe")
# Main function
with smart_run(session):
# HEY HO LETS GO
# general settings
session.set_dont_include(friend_user_list)
session.set_dont_like(dont_like_list)
session.set_ignore_if_contains(ignore_word_list)
session.set_ignore_users(ignore_user_list)
session.set_simulation(enabled=True)
session.set_relationship_bounds(enabled=True,
potency_ratio=None,
delimit_by_numbers=True,
max_followers=7500,
max_following=3000,
min_followers=1,
min_following=1,
min_posts=1)
session.set_skip_users(skip_private=True,
skip_no_profile_pic=True,
skip_business=False,
dont_skip_business_categories=[target_business_categories])
session.set_user_interact(amount=3, randomize=True,
percentage=80, media='Photo')
session.set_do_like(enabled=True, percentage=90)
session.set_do_follow(enabled=True, percentage=40, times=1)
# activities
# FOLLOW+INTERACTION on TARGETED accounts
""" Select users form a list of a predefined target_user_list...
"""
number = random.randint(3, 5)
random_targets = target_user_list
if len(target_user_list) <= number:
random_targets = target_user_list
else:
random_targets = random.sample(target_user_list, number)
# Interact with the chosen target_user_list
session.follow_user_followers(random_targets, amount=random.randint(
30, 60), randomize=True, sleep_delay=600, interact=True)
# UNFOLLOW activity
# Unfollow nonfollowers after one day...
session.unfollow_users(amount=random.randint(75, 100), instapy_followed_enabled=True,
instapy_followed_param="all", style="FIFO", unfollow_after=48*60*60, sleep_delay=600)
# Unfollow all users followed by InstaPy after one week to keep the following-level clean...
session.unfollow_users(amount=random.randint(75, 100), instapy_followed_enabled=True,
instapy_followed_param="all", style="FIFO", unfollow_after=168*60*60, sleep_delay=600)
```
|
open
|
2022-11-30T16:41:17Z
|
2023-03-11T16:23:10Z
|
https://github.com/InstaPy/InstaPy/issues/6657
|
[] |
thEpisode
| 4
|
AUTOMATIC1111/stable-diffusion-webui
|
pytorch
| 16,650
|
[Bug]: Error when loading v-pred model on dev branch
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
An error is thrown when trying to load a v-pred model using the dev branch.
### Steps to reproduce the problem
1. Pull dev branch
2. Start WebUI
3. Try to load v-pred model
### What should have happened?
WebUI should successfully load the model.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
• version: [v1.10.1-42-g7799859f]
• python: 3.10.6
• torch: 2.3.0+cu121
• xformers: 0.0.26.post1
• gradio: 3.41.2
### Console logs
```Shell
changing setting sd_model_checkpoint to noobaiXLNAIXL_vPred05Version.safetensors [78748f163e]: ValueError
Traceback (most recent call last):
File "A:\AI\stable-diffusion-webui\modules\options.py", line 165, in set
option.onchange()
File "A:\AI\stable-diffusion-webui\modules\call_queue.py", line 14, in f
res = func(*args, **kwargs)
File "A:\AI\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "A:\AI\stable-diffusion-webui\modules\sd_models.py", line 972, in reload_model_weights
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
File "A:\AI\stable-diffusion-webui\modules\sd_models.py", line 344, in get_checkpoint_state_dict
res = read_state_dict(checkpoint_info.filename)
File "A:\AI\stable-diffusion-webui\modules\sd_models.py", line 320, in read_state_dict
pl_sd = safetensors.torch.load(open(checkpoint_file, 'rb').read())
File "A:\AI\stable-diffusion-webui\venv\lib\site-packages\safetensors\torch.py", line 338, in load
return _view2torch(flat)
File "A:\AI\stable-diffusion-webui\venv\lib\site-packages\safetensors\torch.py", line 386, in _view2torch
arr = torch.frombuffer(v["data"], dtype=dtype).reshape(v["shape"])
ValueError: both buffer length (0) and count (-1) must not be 0
```
### Additional information
I have tried manually loading the checkpoint using the WebUI env using Diffusers via the following snippet and it successfully loads the model and generates an image: https://huggingface.co/Laxhar/noobai-XL-Vpred-0.5#method-iv-diffusers
I've also tried loading it via the safetensors module (the thing throwing the error above) and it loads fine that way too:
```
import safetensors.torch
import torch
model_path = "./models/Stable-diffusion/noobaiXLNAIXL_vPred05Version.safetensors"
try:
model = safetensors.torch.load_file(model_path, device='cuda' if torch.cuda.is_available() else 'cpu')
print("V-pred model loaded successfully.")
except Exception as e:
print(f"Failed to load V-pred model: {e}")
```
This would seem to indicate that the issue is within WebUI itself.
|
closed
|
2024-11-13T01:22:58Z
|
2024-11-19T07:00:06Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16650
|
[
"bug",
"upstream"
] |
quicks1lver42
| 5
|
scikit-hep/awkward
|
numpy
| 2,936
|
Test against NumPy 2.0
|
Ruff has tools for this, and there are NumPy 2.0 prereleases (or there will be soon).
|
closed
|
2024-01-11T16:28:10Z
|
2024-04-01T18:18:40Z
|
https://github.com/scikit-hep/awkward/issues/2936
|
[] |
jpivarski
| 8
|
lundberg/respx
|
pytest
| 36
|
Fixture / global level mocking
|
Is it possible to do the respx setup in a fixture so that it can be used by all test functions instead of one setup per function?
Thanks and excellent work on this.
|
closed
|
2019-12-30T23:42:46Z
|
2020-01-27T09:09:14Z
|
https://github.com/lundberg/respx/issues/36
|
[
"documentation"
] |
dave-brennan
| 3
|
minimaxir/textgenrnn
|
tensorflow
| 230
|
Why does train_from_file generate text?
|
look at the name
|
closed
|
2021-05-18T16:32:47Z
|
2021-05-24T16:16:12Z
|
https://github.com/minimaxir/textgenrnn/issues/230
|
[] |
SomeMinecraftModder
| 0
|
jupyter/nbgrader
|
jupyter
| 961
|
Document how to set up nbgrader for multiple graders when running without JupyterHub
|
There is already documentation on how to use nbgrader with [multiple graders with JupyterHub](http://nbgrader.readthedocs.io/en/master/configuration/jupyterhub_config.html#example-use-case-one-class-multiple-graders), but not when *not* using JupyterHub.
Briefly, the answer is that you still need access to a shared server, which would host the course directory. You would then run a password-protected version of the notebook on a public port on that server, and give the link to your graders so they can access the formgrader.
|
open
|
2018-05-09T20:34:30Z
|
2022-12-02T14:20:19Z
|
https://github.com/jupyter/nbgrader/issues/961
|
[
"documentation"
] |
jhamrick
| 1
|
Farama-Foundation/PettingZoo
|
api
| 1,181
|
[Bug Report] AgileRL tutorials broken
|
### Describe the bug
AgileRL updated to version 0.1.20 a couple days ago. The changes break the example tutorials in PettingZoo
for example: `python agilerl_maddpg.py`
gives
```
Traceback (most recent call last):
File "/opt/home/code/PettingZoo/tutorials/AgileRL/agilerl_maddpg.py", line 86, in <module>
pop = initialPopulation(
File "/opt/conda/lib/python3.9/site-packages/agilerl/utils/utils.py", line 245, in initialPopulation
lr_actor=INIT_HP["LR_ACTOR"],
KeyError: 'LR_ACTOR'
```
I didn't try all of the tutorials so I don't know what else is broken or what's involved in fixing them.
### Code example
_No response_
### System info
```
>>> import sys; sys.version
'3.9.12 (main, Apr 5 2022, 06:56:58) \n[GCC 7.5.0]'
>>> pettingzoo.__version__
'1.24.3'
```
### Additional context
Pinning the version to 0.1.19 works as a temporary fix but it would be nice to fix these.
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
|
closed
|
2024-02-11T22:44:09Z
|
2024-03-13T18:35:22Z
|
https://github.com/Farama-Foundation/PettingZoo/issues/1181
|
[
"bug"
] |
dm-ackerman
| 1
|
aeon-toolkit/aeon
|
scikit-learn
| 2,211
|
[BUG] RandomIntervalClassifier and SupervisedIntervalClassifier do not set n_jobs in contained scikit learn estimator
|
### Describe the bug
Found when writing tests. These classifiers set the contained estimator's n_jobs as follows:
```python
self._estimator = _clone_estimator(
(
RandomForestClassifier(n_estimators=200)
if self.estimator is None
else self.estimator
),
self.random_state,
)
m = getattr(self._estimator, "n_jobs", None)
if m is not None:
self._estimator.n_jobs = self._n_jobs
```
I assume the problem is that `m = getattr(self._estimator, "n_jobs", None)` returns None if the attribute is present (which it is) but set to the value None.

### Steps/Code to reproduce the bug
```python
import pytest
from aeon.classification.interval_based import RandomIntervalClassifier, SupervisedIntervalClassifier
from aeon.testing.data_generation import make_example_3d_numpy
from sklearn.svm import SVC
@pytest.mark.parametrize("cls",[SupervisedIntervalClassifier, RandomIntervalClassifier])
def test_random_interval_classifier(cls):
X,y = make_example_3d_numpy(n_cases=5, n_channels=1, n_timepoints=12)
r = cls(estimator=SVC())
r.fit(X, y)
p = r.predict_proba(X)
assert p.shape == (5, 2)
r = cls(n_jobs=2)
r.fit(X, y)
assert r._estimator.n_jobs == 2
```
### Expected results
test should pass
### Actual results
test_interval_pipelines.py::test_random_interval_classifier[SupervisedIntervalClassifier] FAILED [ 50%]
aeon\classification\interval_based\tests\test_interval_pipelines.py:7 (test_random_interval_classifier[SupervisedIntervalClassifier])
None != 2
Expected :2
Actual :None
### Versions
_No response_
|
closed
|
2024-10-16T10:27:15Z
|
2024-10-18T13:18:36Z
|
https://github.com/aeon-toolkit/aeon/issues/2211
|
[
"bug",
"classification"
] |
TonyBagnall
| 2
|
hpcaitech/ColossalAI
|
deep-learning
| 5,359
|
[DOC]: Fix layout for 1D 张量并行 (tensor parallelism)
|
### 📚 The doc issue
The sentence "这就是所谓的行并行方式" ("this is the so-called row-parallel approach") should be placed on a new line:

In the English documentation, the layout is correct:

|
closed
|
2024-02-05T07:45:51Z
|
2024-02-19T08:53:30Z
|
https://github.com/hpcaitech/ColossalAI/issues/5359
|
[
"documentation"
] |
yixiaoer
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
deep-learning
| 1,446
|
using pre-trained day2night for own dataset
|
Hello
I use `!python /content/pytorch-CycleGAN-and-pix2pix/test.py --dataroot /tmp/pitts30k/images/test/ --name day2night_pretrained --model test --no_dropout` to create day-to-night images with our dataset, and we received this error:
`AttributeError: 'Sequential' object has no attribute 'model'`
How can I solve the problem?
|
open
|
2022-07-05T18:56:23Z
|
2024-03-18T10:04:48Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1446
|
[] |
saeedehj
| 1
|
reloadware/reloadium
|
pandas
| 10
|
Limit files watched
|
Is there any way to limit what is watched? It's watching even the `.git` directory.
|
closed
|
2022-05-06T11:02:30Z
|
2022-05-06T12:18:21Z
|
https://github.com/reloadware/reloadium/issues/10
|
[] |
iarp
| 8
|
streamlit/streamlit
|
data-visualization
| 10,521
|
Slow download of csv file when using the built-in download-as-csv function for tables displayed as dataframes in Edge Browser
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Issue is only in MS Edge Browser:
When pressing "download as csv" on a table, the download is really slow. I run on a ThinkPad P16 Gen 1.
15 columns x 20 k rows takes 9-11 sec
15 columns x 50 k rows takes 19-22 sec
When I do it with my own function using to_csv from the pandas library, I can do it in less than 1 sec for both 20 k and 50 k.
**Issue only occurs in Edge browser.**
Brave and Firefox work just fine with the built-in function.
### Reproducible Code Example
```Python
import streamlit as st
import pandas as pd
import numpy as np
# tested in both 1.38 and 1.42.2
# Name: streamlit
# Version: 1.39.0 / 1.42.2
# Define number of rows and columns
num_rows = 20000 # 20 k rows takes 9-11 sec to download via the built-in download-as-csv
# num_rows = 50000 # 50 k rows takes 19-22 sec to download via the built-in download-as-csv
num_cols = 15
# Generate random data
data = {
f"Col_{i+1}": np.random.choice(['A', 'B', 'C', 'D', 1, 2, 3, 4, 5, 10.5, 20.8, 30.1], num_rows)
for i in range(num_cols)
}
data = pd.DataFrame(data)
st.write(data) # the same issue when using st.dataframe(data)
# the below method takes less than a second for both 20 k and 50 k rows
# to_csv() is from the pandas library, which is also used in the streamlit package.
csv = data.to_csv(index=False).encode('utf-8')
# Download button
st.download_button(
label="Download as CSV OWN",
data=csv,
file_name='data.csv',
mime='text/csv',
)
```
### Steps To Reproduce
Hover over the table, click "download as csv", and watch your download folder for how slowly it loads: only 50-100 kB a sec.
Then try using the custom-made button "Download as CSV OWN": it downloads instantly.
### Expected Behavior
I would expect the built-in download-as-csv function to be as fast as the pandas.to_csv() function.
I tried it on a ThinkPad T14 Gen 3, a P16 Gen 1, and on a Linux server; all have the same issue.

### Current Behavior
No error message, it is just super slow.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.39.0 and 1.42.2
- Python version: 3.12.1
- Operating System: Windows 11 / windows 10, Linux server
- Browser: Edge for business: Version 133.0.3065.82 (Official build) (64-bit)
### Additional Information
_No response_
|
open
|
2025-02-26T08:43:56Z
|
2025-03-03T11:49:27Z
|
https://github.com/streamlit/streamlit/issues/10521
|
[
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P3",
"feature:st.download_button",
"feature:st.data_editor"
] |
LazerLars
| 2
|
axnsan12/drf-yasg
|
rest-api
| 299
|
Support nested coreschema in CoreAPI compat layer
|
```python
class MyFilterBackend(BaseFilterBackend):
def get_schema_fields(self, view):
return [coreapi.Field(
name="values",
required=False,
schema=coreschema.Array(items=coreschema.Integer(), unique_items=True),
location='query'
)]
```
Result:

|
open
|
2019-01-22T08:18:42Z
|
2025-03-07T12:16:45Z
|
https://github.com/axnsan12/drf-yasg/issues/299
|
[
"triage"
] |
khomyakov42
| 1
|
AntonOsika/gpt-engineer
|
python
| 594
|
Issue with tiktoken: "Could not automatically map gpt-4 to a tokeniser. Please use `tiktoken.get_encoding` to explicitly get the tokeniser you expect."
|
I have tried using both the dev and production versions and get the same error. I have followed the Windows guide for setting env variables and installed all dependencies. I am on Windows 10 and Python 3.11. Full version of the error:

|
closed
|
2023-08-15T18:39:22Z
|
2023-09-28T14:20:09Z
|
https://github.com/AntonOsika/gpt-engineer/issues/594
|
[] |
Wulf-Steppen
| 2
|
piskvorky/gensim
|
nlp
| 2,755
|
Doc2Vec.clear_sims bug
|
I was reading the Doc2Vec source code and noticed a probable bug in the clear_sims method.
https://github.com/RaRe-Technologies/gensim/blob/8d79794118a3adeda8cf9c873eb205cecf47cfef/gensim/models/doc2vec.py#L387
It sets the vectors_docs_norm attribute of Word2VecKeyedVectors to None. However, Word2VecKeyedVectors does not have this attribute, so I think this line should be:
`self.docvecs.vectors_docs_norm = None`
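A tiny stub-based sketch (not actual gensim code) of what the proposed one-line fix does: clear the cached norms on the `docvecs` object, where the attribute actually lives.

```python
class DocKeyedVectorsStub:
    """Stand-in for the doc-vector KeyedVectors that owns vectors_docs_norm."""
    def __init__(self):
        self.vectors_docs_norm = [0.1, 0.2]  # pretend cached L2 norms

class Doc2VecStub:
    """Stand-in for Doc2Vec, holding its document vectors under .docvecs."""
    def __init__(self):
        self.docvecs = DocKeyedVectorsStub()

    def clear_sims(self):
        # The fix proposed in the issue: target self.docvecs, where the
        # attribute exists, rather than the word-vector object.
        self.docvecs.vectors_docs_norm = None

model = Doc2VecStub()
model.clear_sims()
```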
|
open
|
2020-02-17T08:23:04Z
|
2020-02-18T21:19:31Z
|
https://github.com/piskvorky/gensim/issues/2755
|
[] |
pavellevap
| 1
|
ultralytics/ultralytics
|
computer-vision
| 18,755
|
cifar100 dataset cannot be loaded
|
closed
|
2025-01-18T15:19:31Z
|
2025-01-18T17:34:26Z
|
https://github.com/ultralytics/ultralytics/issues/18755
|
[
"bug",
"dependencies",
"detect"
] |
cainiao123s
| 2
|
|
widgetti/solara
|
flask
| 950
|
Typing issue with solara `component`
|
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
`solara.component` resolves and has proper autocomplete
## Current Behavior
Typing and autocomplete cannot find `solara.component`


## Steps to Reproduce the Problem
<!--- Provide a link to a live example, for example via [PyCafe](https://py.cafe), and/or an unambiguous -->
<!--- set of steps to reproduce this bug. Include code, if relevant -->
import solara, try to use component (vscode)
## Specifications
- Solara Version: 1.43.0
- Platform: MacOS with vscode
- Affected Python Versions: 3.10
Note that my code runs and `component` is available; it's just a typing/autocomplete issue.
|
open
|
2024-12-23T14:55:54Z
|
2024-12-23T15:24:32Z
|
https://github.com/widgetti/solara/issues/950
|
[] |
Ben-Epstein
| 3
|
microsoft/unilm
|
nlp
| 1,106
|
VALL-E demo page missing/404
|
The VALL-E demo page is missing (404).
It was working initially.
Could you please fix it?
https://valle-demo.github.io/
|
closed
|
2023-05-27T05:46:01Z
|
2023-06-14T11:51:15Z
|
https://github.com/microsoft/unilm/issues/1106
|
[] |
rickkadamss
| 0
|
TheKevJames/coveralls-python
|
pytest
| 232
|
Python coverage not reported to https://coveralls.io/
|
I'm running:
```bash
coverage run --source=. -m pytest cvise/tests/
coverage report -m
COVERALLS_REPO_TOKEN=xyz coveralls -n
```
where I see:
```
============================= test session starts ==============================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /usr/src/cvise/objdir
collected 67 items
cvise/tests/test_balanced.py .................. [ 26%]
cvise/tests/test_comments.py .... [ 32%]
cvise/tests/test_ifs.py . [ 34%]
cvise/tests/test_ints.py ...... [ 43%]
cvise/tests/test_line_markers.py .. [ 46%]
cvise/tests/test_nestedmatcher.py ................ [ 70%]
cvise/tests/test_peep.py .......... [ 85%]
cvise/tests/test_special.py .... [ 91%]
cvise/tests/test_ternary.py ...... [100%]
============================== 67 passed in 1.59s ==============================
Name Stmts Miss Cover Missing
-----------------------------------------------------------------
cvise-delta.py 6 6 0% 3-11
cvise.py 200 200 0% 3-291
cvise/__init__.py 1 0 100%
cvise/cvise.py 102 74 27% 43-44, 48-54, 59-107, 110-131, 135-138, 141-145, 149-160
cvise/passes/__init__.py 17 0 100%
cvise/passes/abstract.py 101 41 59% 20, 25, 33, 39, 42-53, 56-62, 75-78, 81-87, 90, 93, 96, 99, 102, 111-112, 120, 123, 126
cvise/passes/balanced.py 87 30 66% 7, 43-44, 46-47, 52-53, 55-56, 58-59, 61-62, 67-68, 70-71, 73-75, 79-89
cvise/passes/blank.py 37 24 35% 10, 13, 16, 19, 23-39, 42-53
cvise/passes/clang.py 31 20 35% 10, 13, 16, 19, 22-41
cvise/passes/clangbinarysearch.py 73 56 23% 13, 16-29, 32-33, 36, 39-43, 46-61, 64-70, 73-92
cvise/passes/clex.py 24 14 42% 9, 12, 15, 18, 21-31
cvise/passes/comments.py 25 1 96% 7
cvise/passes/ifs.py 61 13 79% 11, 23-26, 31, 44-45, 49, 59-62, 69, 76
cvise/passes/includeincludes.py 38 27 29% 10, 13, 16, 19, 22-53
cvise/passes/includes.py 33 22 33% 10, 13, 16, 19, 22-47
cvise/passes/indent.py 30 22 27% 6, 9, 12, 15, 18-43
cvise/passes/ints.py 54 2 96% 10, 42
cvise/passes/line_markers.py 35 4 89% 12, 26, 29, 39
cvise/passes/lines.py 45 32 29% 10, 13-26, 29-31, 35-38, 41, 44, 47-60
cvise/passes/peep.py 107 6 94% 125, 137-140, 164, 227
cvise/passes/special.py 56 13 77% 8, 31, 36-43, 56-60
cvise/passes/ternary.py 39 3 92% 23, 52, 62
cvise/passes/unifdef.py 42 30 29% 11, 14, 17, 20, 23-58
cvise/tests/__init__.py 0 0 100%
cvise/tests/test_balanced.py 205 0 100%
cvise/tests/test_comments.py 55 0 100%
cvise/tests/test_ifs.py 21 0 100%
cvise/tests/test_ints.py 86 0 100%
cvise/tests/test_line_markers.py 28 0 100%
cvise/tests/test_nestedmatcher.py 57 0 100%
cvise/tests/test_peep.py 102 0 100%
cvise/tests/test_special.py 51 0 100%
cvise/tests/test_ternary.py 79 0 100%
cvise/tests/testabstract.py 8 0 100%
cvise/utils/__init__.py 0 0 100%
cvise/utils/error.py 70 38 46% 8, 11, 15-16, 19, 23-24, 27-34, 37, 41, 45, 48, 52-53, 56-66, 73, 89-93, 96, 100-102, 105-122
cvise/utils/nestedmatcher.py 126 9 93% 19, 27, 35, 70, 86-88, 102-103, 118
cvise/utils/readkey.py 28 28 0% 1-40
cvise/utils/statistics.py 41 41 0% 1-50
cvise/utils/testing.py 385 385 0% 1-506
-----------------------------------------------------------------
TOTAL 2586 1141 56%
{'message': 'Job ##1.11', 'url': 'https://coveralls.io/jobs/65487502'}
```
but the report the `coveralls` command sends does not mention any `.py` file.
Can you please help me?
|
closed
|
2020-07-23T08:12:02Z
|
2020-07-25T00:29:54Z
|
https://github.com/TheKevJames/coveralls-python/issues/232
|
[] |
marxin
| 4
|
microsoft/hummingbird
|
scikit-learn
| 237
|
Random forest in LightGBM
|
I want to clarify: does Hummingbird currently not support random forest in LightGBM? Is support planned?
When I convert this model from LightGBM to ONNX:
```python
lgb.LGBMClassifier(boosting_type='rf', n_estimators=128, max_depth=5, subsample=0.3, bagging_freq=1)
```
I get this error:
```
File "/venv/lib/python3.6/site-packages/hummingbird/ml/_parse.py", line 242, in <listcomp>
    this_operator.inputs = [scope.variables[in_] for in_ in input_names]
KeyError: 'col_index'
```
|
open
|
2020-08-17T13:25:46Z
|
2020-11-11T01:51:43Z
|
https://github.com/microsoft/hummingbird/issues/237
|
[
"bug"
] |
arfangeta
| 12
|
dmlc/gluon-cv
|
computer-vision
| 835
|
transfer learning for classification
|
Hello @zhreshold,
As you told me in issue #746, to fine-tune 'resnet50_v1b' I should replace 'finetune_net.output' with 'finetune_net.fc', and that works. But for some classifiers I need to use 'finetune_net.output', so could you explain the difference between 'output' and 'fc', and why some classifiers need 'output' while others need 'fc'?
Thank you in advance.
|
closed
|
2019-06-25T07:33:56Z
|
2019-07-02T08:02:52Z
|
https://github.com/dmlc/gluon-cv/issues/835
|
[] |
FAFACHR
| 8
|
yihong0618/running_page
|
data-visualization
| 121
|
Feature suggestion: add a monthly statistics display
|
Hi yihong,
While looking at @geekplux's running page, I noticed that he added a monthly statistics bar chart to the home page, mainly showing monthly totals for running, hiking, and cycling.

In my opinion, compared with yearly and daily statistics, **monthly statistics present the data at a more balanced frequency and also enrich the display**.
So, following geekplux's own repository, I tried to port this bar chart, but since I have never studied the React framework in depth, I ran into many errors.
So I would like to ask: do you have any plans to port this monthly statistics bar chart into the Running Page repository?
|
closed
|
2021-04-17T10:47:51Z
|
2021-04-19T08:56:27Z
|
https://github.com/yihong0618/running_page/issues/121
|
[] |
MFYDev
| 3
|
flasgger/flasgger
|
rest-api
| 586
|
Async/await in Flask 2.0+ breaks due to decorator order
|
| Name | Version |
|--|--|
| Flasgger | 0.9.7.1 |
| Flask | 2.3.2 |
| Python | 3.9, 3.10 |
| OS | macOS 13.3 |
I'm setting up a basic Flask project with an asynchronous route. I want to fetch information online using the `selenium` package, and this requires the Flask route to await the task before returning.
I've been using `@swag_from()` decorators to document my code and make it easier to work with, here is an idea of what my code looked like:
```py
@website_api.route('/get-data-from-page', methods=['POST'])
@swag_from('swagger/get-data-from-page.yml')
async def get_data_from_page():
data = req.get_json()
res = await common.get_data_from_page(data["url"])
return json.dumps({ "data": res }), 201
```
For some reason, this kept throwing: _TypeError: The view function did not return a valid response. The return type must be a string, dict, list, tuple with headers or status, Response instance, or WSGI callable, but it was a coroutine._
After a couple of hours of trying everything, re-installing Python and both libraries, I found the issue.
### This causes the error:
```py
@app.route(<route>)
@swag_from(<file>)
async def route():
# ...
```
### This doesn't:
```py
@swag_from(<file>)
@app.route(<route>)
async def route():
# ...
```
_(Note the order of the decorators)_
Now, this might not be a bug at all; it might not even have anything to do with flasgger and could be a problem in Flask directly. But the README of this project says to do it like so:
```py
@app.route('/colors/<palette>/')
@swag_from(specs_dict)
def colors(palette):
# ...
```
Maybe swapping those two lines in the README and any documentation would be worthwhile?
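The decorator-order behaviour can be demonstrated without Flask at all. The `fake_swag_from` below is a hypothetical stand-in (not flasgger's actual implementation) that wraps the view with a plain sync wrapper, which is enough to reproduce the "returned a coroutine" symptom: decorators apply bottom-up, so the decorator listed first receives the result of the one below it.

```python
import asyncio
import functools

def fake_swag_from(spec):
    """Hypothetical stand-in for @swag_from using a naive sync wrapper."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # For an async func this returns an un-awaited coroutine object.
            return func(*args, **kwargs)
        return wrapper
    return decorator

@fake_swag_from("swagger/get-data-from-page.yml")
async def view():
    return "ok"

result = view()
print(asyncio.iscoroutine(result))  # True: the sync wrapper leaked a coroutine
asyncio.run(result)  # await it so Python doesn't warn about the coroutine
```

If the route decorator were listed below the wrapper instead, Flask would register the original coroutine function and handle the awaiting itself, which matches the observed fix.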
|
open
|
2023-06-30T14:00:46Z
|
2024-12-02T19:16:39Z
|
https://github.com/flasgger/flasgger/issues/586
|
[] |
phil-chp
| 2
|
mckinsey/vizro
|
plotly
| 630
|
[Docs] Py.Cafe code snippets to-do list
|
Here's an issue (public, so we can ask for contributions to it from our readers) to record the bits and pieces left to do on following the [introduction of py.cafe to our docs' examples](https://github.com/mckinsey/vizro/pull/569).
- [ ] Change the requirement in `hatch.toml` when py.cafe release their mkdocs plugin (separate issue, stored as 1204 internally)
- [ ] Update the code examples in the `data.md` how-to guide to use py.cafe once we can solve how to illustrate data handling without creating multiple projects in a profile to store the `iris.csv` dataset.
- We considered using the function to create the dataframe directly for this example but disregarded it because it doesn't illustrate data loading.
- We could code the data download from storage in this GitHub repo
- We could make a bunch of projects like I did for other examples using custom css, for example, but there are a lot and it results in code example duplication, so it's not the optimum solution for any of the examples.
- Best longer-term solution is to either change the example to call an API to get a dataset (which varies over time, as that's what the example is about) or to wait to see if we can work something out with py.cafe to add files to projects dynamically via the plugin (a bit like we specify requirements files as `extra-requirements`)
- [ ] If a solution arises whereby we no longer need to create examples and link to them (rather than use the plugin) we can delete the examples I've created so far. But long-term, if we have these examples as part of our documentation, we should store them under a new profile in py.cafe that uses a shared email account so the team can update those projects as needed, rather than use my login.
|
open
|
2024-08-15T08:46:02Z
|
2025-01-14T09:40:02Z
|
https://github.com/mckinsey/vizro/issues/630
|
[
"Docs :spiral_notepad:"
] |
stichbury
| 1
|
ivy-llc/ivy
|
pytorch
| 28,517
|
Fix Frontend Failing Test: torch - math.paddle.heaviside
|
To-do List: https://github.com/unifyai/ivy/issues/27498
|
closed
|
2024-03-09T14:58:00Z
|
2024-03-14T21:29:22Z
|
https://github.com/ivy-llc/ivy/issues/28517
|
[
"Sub Task"
] |
ZJay07
| 0
|