Dataset columns (name, feature type, observed range):

- `repo_name`: string, length 9–75
- `topic`: string, 30 distinct values
- `issue_number`: int64, 1–203k
- `title`: string, length 1–976
- `body`: string, length 0–254k
- `state`: string, 2 distinct values
- `created_at`: string, length 20
- `updated_at`: string, length 20
- `url`: string, length 38–105
- `labels`: list, length 0–9
- `user_login`: string, length 1–39
- `comments_count`: int64, 0–452
Anjok07/ultimatevocalremovergui
pytorch
669
Install errors
No programmer here; I tried the **README.md** instructions:

~~~
> git clone https://github.com/Anjok07/ultimatevocalremovergui.git
> pip3 install -r requirements.txt
~~~

Then I get the following errors:

~~~
ERROR: Ignored the following versions that require a different python version: 0.36.0 Requires-Python >=3.6,<3.10; 0.37.0 Requires-Python >=3.7,<3.10; 0.52.0 Requires-Python >=3.6,<3.9; 0.52.0rc3 Requires-Python >=3.6,<3.9; 0.53.0 Requires-Python >=3.6,<3.10; 0.53.0rc1.post1 Requires-Python >=3.6,<3.10; 0.53.0rc2 Requires-Python >=3.6,<3.10; 0.53.0rc3 Requires-Python >=3.6,<3.10; 0.53.1 Requires-Python >=3.6,<3.10; 0.54.0 Requires-Python >=3.7,<3.10; 0.54.0rc2 Requires-Python >=3.7,<3.10; 0.54.0rc3 Requires-Python >=3.7,<3.10; 0.54.1 Requires-Python >=3.7,<3.10
ERROR: Could not find a version that satisfies the requirement onnxruntime==1.13.1 (from versions: none)
ERROR: No matching distribution found for onnxruntime==1.13.1
~~~

Because [#396](https://github.com/Anjok07/ultimatevocalremovergui/issues/396) suggested running this:

~~~
> python3 UVR.py
~~~

I got:

~~~
Traceback (most recent call last):
  File "/home/box/GIT/ultimatevocalremovergui/UVR.py", line 2, in <module>
    import audioread
ModuleNotFoundError: No module named 'audioread'
~~~

---

- My system was fully updated.
- Because of the error, I installed onnxruntime from the AUR (with the same results):
  - onnxruntime-bin (1.14.1-3)
  - onnxruntime-git (1.16.0)
- Is there another easy way to install and try this? (maybe AppImage, Flatpak?)

---

OS: Arch Linux x86_64
Host: MS-7816 2.0
Kernel: 6.1.14-1-lts
Shell: fish 3.6.1
Resolution: 1920x1080
WM: dwm
Terminal: tmux
CPU: Intel i7-4790 (8) @ 3.600GHz
GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590
Memory: 3711MiB / 15939MiB
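A note on the errors above: the `Requires-Python >=3.x,<3.10` lines and the missing `onnxruntime==1.13.1` wheel both point at the system interpreter being newer than the pinned wheels support, which is likely on a rolling-release Arch install. A minimal sketch (not part of UVR) that surfaces the mismatch before running pip:

```python
# Hedged sketch: report whether the running interpreter falls inside the
# Requires-Python range shown in the pip output above (>=3.7,<3.10).
# Nothing here is UVR code; it only makes the version mismatch visible.
import sys

lo, hi = (3, 7), (3, 10)
ok = lo <= sys.version_info[:2] < hi
print(f"Python {sys.version_info.major}.{sys.version_info.minor}:",
      "within the pinned range" if ok else
      "outside the pinned range -- pip will report 'from versions: none'; "
      "a virtual environment built on a 3.9 interpreter is one way around this")
```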
open
2023-07-17T15:36:47Z
2023-07-30T19:23:39Z
https://github.com/Anjok07/ultimatevocalremovergui/issues/669
[]
Disonantemus
1
huggingface/datasets
numpy
6,436
TypeError: <lambda>() takes 0 positional arguments but 1 was given
### Describe the bug ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import Dataset 9 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.15.0" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 61 import pyarrow.compute as pc 62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi ---> 63 from multiprocess import Pool 64 from requests import HTTPError 65 [/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module> 31 32 import sys ---> 33 from . import context 34 35 # [/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module> 4 5 from . import process ----> 6 from . import reduction 7 8 __all__ = () [/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module> 14 import os 15 try: ---> 16 import dill as pickle 17 except ImportError: 18 import pickle [/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module> 24 25 ---> 26 from ._dill import ( 27 dump, dumps, load, loads, copy, 28 Pickler, Unpickler, register, pickle, pickles, check, [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module> 166 try: 167 from _pyio import open as _open --> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open) 169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open) 170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open) [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs) 154 def get_file_type(*args, **kwargs): 155 open = kwargs.pop("open", __builtin__.open) --> 156 f = open(os.devnull, *args, **kwargs) 157 t = type(f) 158 f.close() [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener) 280 return result 281 encoding = text_encoding(encoding) --> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering) 283 result = text 284 text.mode = mode [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through) 2043 encoding = "utf-8" 2044 else: -> 2045 encoding = locale.getpreferredencoding(False) 2046 2047 if not isinstance(encoding, str): TypeError: <lambda>() takes 0 positional arguments but 1 was given ``` or ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-36-652e886d387f>](https://localhost:8080/#) in <cell line: 1>() ----> 1 import datasets 9 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.15.0" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder 
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 61 import pyarrow.compute as pc 62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi ---> 63 from multiprocess import Pool 64 from requests import HTTPError 65 [/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module> 31 32 import sys ---> 33 from . import context 34 35 # [/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module> 4 5 from . import process ----> 6 from . import reduction 7 8 __all__ = () [/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module> 14 import os 15 try: ---> 16 import dill as pickle 17 except ImportError: 18 import pickle [/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module> 24 25 ---> 26 from ._dill import ( 27 dump, dumps, load, loads, copy, 28 Pickler, Unpickler, register, pickle, pickles, check, [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module> 166 try: 167 from _pyio import open as _open --> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open) 169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open) 170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open) [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs) 154 def get_file_type(*args, **kwargs): 155 open = kwargs.pop("open", __builtin__.open) --> 156 f = open(os.devnull, *args, **kwargs) 157 t = type(f) 158 f.close() [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener) 280 return result 281 encoding = text_encoding(encoding) --> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering) 283 result = text 284 text.mode = mode [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through) 2043 encoding = "utf-8" 2044 else: -> 2045 encoding = locale.getpreferredencoding(False) 2046 2047 if not isinstance(encoding, str): TypeError: <lambda>() takes 0 positional arguments but 1 was given ``` ### Steps to reproduce the bug `import datasets` on colab ### Expected behavior work fine ### Environment info colab `!pip install datasets`
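The failing call in both tracebacks is `locale.getpreferredencoding(False)` hitting a zero-argument lambda, which suggests something earlier in the Colab session replaced that function. A hedged workaround sketch (an assumption about the notebook state, not a `datasets` API):

```python
# Restore a signature-compatible locale.getpreferredencoding before importing
# datasets, so dill's import-time call with one argument succeeds again.
import locale

locale.getpreferredencoding = lambda do_setlocale=True: "UTF-8"

import datasets  # should now import without the TypeError
print(datasets.__version__)
```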
closed
2023-11-19T13:10:20Z
2024-06-25T06:00:31Z
https://github.com/huggingface/datasets/issues/6436
[]
ahmadmustafaanis
2
seleniumbase/SeleniumBase
web-scraping
2,429
chromium_arg parameter "--disable-features" passed to seleniumbase.Driver() being ignored.
I'm creating a driver instance with this code:

```python
dr = seleniumbase.Driver(uc=True,
                         proxy="user:pass@127.0.0.1:8090",
                         user_data_dir=user_data_dir,
                         extension_dir=ext_dirs,
                         agent=agent_string,
                         uc_subprocess=True,
                         chromium_arg='--disable-features=UserAgentClientHint')
```

https://httpbin.org/anything shows that `--disable-features=UserAgentClientHint` is ignored, since it shows `Sec-Ch-*` request headers:

```json
"headers": {
    ...
    "Sec-Ch-Ua": "\"Not_A Brand\";v=\"8\", \"Chromium\";v=\"120\"",
    "Sec-Ch-Ua-Mobile": "?0",
    "Sec-Ch-Ua-Platform": "\"Linux\"",
    ...
},
```

The same check for a browser launched directly with the parameters `chromium disable-features=UserAgentClientHint` shows that no such headers are sent with the request.

It seems that the problem is in the function `_set_chrome_options` in `seleniumbase/core/browser_launcher.py` (line 1094 and onwards):

```python
if user_data_dir:
    chrome_options.add_argument(
        "--disable-features=OptimizationHintsFetching,Translate,"
        "OptimizationTargetPrediction,PrivacySandboxSettings4,"
        "DownloadBubble,DownloadBubbleV2"
    )
else:
    chrome_options.add_argument(
        "--disable-features=OptimizationHintsFetching,Translate,"
        "OptimizationTargetPrediction,DownloadBubble,DownloadBubbleV2"
    )
```

So the driver instance's ChromeOptions ends up with two `--disable-features` arguments:

```python
dr = seleniumbase.Driver(uc=True,
                         proxy="user:pass@127.0.0.1:8090",
                         user_data_dir=user_data_dir,
                         extension_dir=ext_dirs,
                         agent=agent_string,
                         uc_subprocess=True,
                         chromium_arg='--disable-features=UserAgentClientHint')
print('\n'.join(dr.options.arguments))
```

Output:

```
--window-size=1280,840
...
--disable-features=UserAgentClientHint
...
--disable-features=OptimizationHintsFetching,Translate,OptimizationTargetPrediction,PrivacySandboxSettings4,DownloadBubble,DownloadBubbleV2
...
```

As a test, I've added `UserAgentClientHint` to the disabled features in `browser_launcher.py`:

```python
if user_data_dir:
    chrome_options.add_argument(
        "--disable-features=OptimizationHintsFetching,Translate,"
        "OptimizationTargetPrediction,PrivacySandboxSettings4,"
        "DownloadBubble,DownloadBubbleV2,UserAgentClientHint"
    )
else:
    chrome_options.add_argument(
        "--disable-features=OptimizationHintsFetching,Translate,"
        "OptimizationTargetPrediction,DownloadBubble,DownloadBubbleV2,UserAgentClientHint"
    )
```

and everything works as it should.
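One possible fix, sketched below: merge whatever `--disable-features` values the caller passes through `chromium_arg` with the ones `_set_chrome_options` adds, so Chrome receives a single combined switch. `merge_disable_features` is a hypothetical helper for illustration, not an existing SeleniumBase function.

```python
def merge_disable_features(existing_args, extra_features):
    """Combine every --disable-features=... argument into one string."""
    features = []
    for arg in existing_args:
        if arg.startswith("--disable-features="):
            features.extend(arg.split("=", 1)[1].split(","))
    features.extend(extra_features)
    deduped = list(dict.fromkeys(f for f in features if f))  # keep order, drop repeats
    return "--disable-features=" + ",".join(deduped)


print(merge_disable_features(
    ["--disable-features=UserAgentClientHint"],
    ["OptimizationHintsFetching", "Translate", "DownloadBubble"],
))
# --disable-features=UserAgentClientHint,OptimizationHintsFetching,Translate,DownloadBubble
```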
closed
2024-01-14T20:08:16Z
2024-01-19T03:22:42Z
https://github.com/seleniumbase/SeleniumBase/issues/2429
[ "bug" ]
agp22888
2
yeongpin/cursor-free-vip
automation
337
[Bug]: machineid variable and not working on linux (ubuntu 24 LTS)
### Commit before submitting

- [x] I understand that Issues are used to provide feedback and solve problems, not to complain in the comments section, and will provide more information to help solve the problem.
- [x] I have checked the top Issue and searched for existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar issues.
- [x] I have filled out a short and clear title, so that developers can quickly determine the general problem when browsing the Issue list. Not "a suggestion", "stuck", etc.

### Platform

Linux x64

### Version

v0.47.x

### Description

Please replace `machineId` with `machineid`. On Linux the file is `~/.config/cursor/machineid`. The tool is not replacing the machine id on this machine, and it throws errors while running Chrome or Chromium. Please resolve this!

### Related log output

```shell
```

### Additional information

_No response_
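For illustration, a hedged sketch of platform-aware path selection based on the Linux location the reporter gives; the Windows and macOS paths shown are assumptions for illustration only, not verified against the cursor-free-vip codebase:

```python
import sys
from pathlib import Path

def cursor_machine_id_path() -> Path:
    """Return the per-OS location of Cursor's machine id file (sketch)."""
    home = Path.home()
    if sys.platform.startswith("linux"):
        # Path reported in the issue: lowercase "machineid" on Linux.
        return home / ".config" / "cursor" / "machineid"
    if sys.platform == "darwin":
        # Assumed location, shown only for contrast with the Linux path.
        return home / "Library" / "Application Support" / "Cursor" / "machineId"
    # Assumed Windows location.
    return home / "AppData" / "Roaming" / "Cursor" / "machineId"

print(cursor_machine_id_path())
```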
open
2025-03-20T18:44:13Z
2025-03-24T10:00:17Z
https://github.com/yeongpin/cursor-free-vip/issues/337
[ "bug" ]
itsfarhan
2
jupyter-incubator/sparkmagic
jupyter
323
[Question] ModuleNotFoundError when enabling the serverextension
I successfully invoked the following:

pip install sparkmagic
pip show sparkmagic
jupyter-kernelspec install sparkmagic/kernels/sparkkernel
jupyter-kernelspec install sparkmagic/kernels/pysparkkernel
jupyter-kernelspec install sparkmagic/kernels/pyspark3kernel
jupyter-kernelspec install sparkmagic/kernels/sparkrkernel

But when running:

jupyter serverextension enable --py sparkmagic

there is a long stack trace, at the end of which we see:

ModuleNotFoundError: No module named 'sparkmagic'

Here is the full stack trace:

$ jupyter serverextension enable --py sparkmagic
Traceback (most recent call last):
  File "/usr/local/bin/jupyter-serverextension", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/site-packages/jupyter_core/application.py", line 267, in launch_instance
    return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
    app.start()
  File "/usr/local/lib/python3.6/site-packages/notebook/serverextensions.py", line 300, in start
    super(ServerExtensionApp, self).start()
  File "/usr/local/lib/python3.6/site-packages/jupyter_core/application.py", line 256, in start
    self.subapp.start()
  File "/usr/local/lib/python3.6/site-packages/notebook/serverextensions.py", line 217, in start
    self.toggle_server_extension_python(arg)
  File "/usr/local/lib/python3.6/site-packages/notebook/serverextensions.py", line 206, in toggle_server_extension_python
    m, server_exts = _get_server_extension_metadata(package)
  File "/usr/local/lib/python3.6/site-packages/notebook/serverextensions.py", line 334, in _get_server_extension_metadata
    m = import_item(module)
  File "/usr/local/lib/python3.6/site-packages/traitlets/utils/importstring.py", line 42, in import_item
    return __import__(parts[0])

In addition, I am unable to load sparkmagic inside ipython:

$ ipython
Python 2.7.12 (default, Oct 11 2016, 05:24:00)
IPython 4.2.1 -- An enhanced Interactive Python.

In [1]: %load_ext sparkmagic.magics

There is a very long stack trace that includes, in the middle:

/usr/local/lib/python2.7/site-packages/IPython/core/extensions.pyc in _call_load_ipython_extension(self, mod)
    131     def _call_load_ipython_extension(self, mod):
    132         if hasattr(mod, 'load_ipython_extension'):
--> 133             mod.load_ipython_extension(self.shell)
    134             return True
    135

/usr/local/lib/python2.7/site-packages/sparkmagic/magics/remotesparkmagics.pyc in load_ipython_extension(ip)
    186
    187 def load_ipython_extension(ip):
--> 188     ip.register_magics(RemoteSparkMagics)

So what step(s) am I missing? Thanks
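Worth noting when reading the two traces: the server-extension command ran under Python 3.6, while the IPython session is Python 2.7, so the package may be installed for only one of those interpreters. A small diagnostic sketch (not part of sparkmagic) that can be run in each environment:

```python
# Print which interpreter is running and whether it can see sparkmagic.
# If the IPython interpreter differs from the one pip installed into,
# that alone explains the ModuleNotFoundError.
import sys

try:
    import sparkmagic
    print(sys.executable, "-> sparkmagic importable")
except ImportError:
    print(sys.executable, "-> sparkmagic NOT importable here")
```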
closed
2017-01-20T05:22:11Z
2017-01-20T21:40:59Z
https://github.com/jupyter-incubator/sparkmagic/issues/323
[]
javadba
3
horovod/horovod
deep-learning
3,650
Can't pass-in edit_fields for TransformSpec in pytorch_lightning
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) pytorch_lightning
2. Framework version: 1.5.0
3. Horovod version: 0.25.0
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version:
8. Spark / PySpark version: 3.2.0
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:

**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?

**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.

In the PetastormDataModule defined in the horovod/spark/lightning/datamodule.py file:

<img width="563" alt="image" src="https://user-images.githubusercontent.com/82044803/184599194-fe8b66e6-ca38-4d27-be06-b74235eb9f6c.png">

it only accepts `self.transformation` as the func param for the `TransformSpec` class, and this limit means that you can only modify a row and return it with the same keys. The ideal design would be to let you pass in all the fields that the `TransformSpec` class can accept here:

<img width="669" alt="image" src="https://user-images.githubusercontent.com/82044803/184599421-aacd91a2-6ff6-4de6-9684-ffd26e946641.png">

Or at least `edit_fields`, because the latter two, `removed_fields` and `selected_fields`, can both be handled inside a function or in the training steps. So I'm hoping we can expose a param named, for example, `transform_edit_fields` so that we can add more fields to the schema besides the existing ones. I'm happy to contribute this feature. Please let me know if this proposal makes sense to you, thanks!
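A hedged sketch of the requested change, using petastorm's documented `TransformSpec(func, edit_fields, ...)` form; `transform_edit_fields` is the hypothetical new parameter name proposed above, not an existing Horovod argument:

```python
from petastorm import TransformSpec

def build_transform_spec(transformation=None, transform_edit_fields=None):
    """Build a TransformSpec that can both transform rows and extend the schema.

    transform_edit_fields is expected as a list of
    (name, numpy_dtype, shape, is_nullable) tuples, per petastorm's docs.
    """
    if transformation is None and transform_edit_fields is None:
        return None
    return TransformSpec(transformation, edit_fields=transform_edit_fields)
```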
closed
2022-08-15T08:10:36Z
2022-08-17T22:09:45Z
https://github.com/horovod/horovod/issues/3650
[ "bug" ]
serena-ruan
1
DistrictDataLabs/yellowbrick
matplotlib
714
Add spec and verbose back to test configuration
See #712 for a more detailed discussion and the changes that relate to this issue. The update to pytest 4.2, unfortunately, broke our tests in two ways: 1. The test object no longer has a `_genid` attribute, hurting our verbose names with parametrize 2. pytest-spec issues 1142 warnings making the test output unreadable. Temporarily, we have removed pytest-spec and have lost our nice test report output, but I would like to return to verbose mode and pytest-spec when we fix the above bugs. Note that the user warnings have been reported here: https://github.com/pchomik/pytest-spec/issues/21
closed
2019-02-01T01:45:27Z
2020-06-21T02:17:36Z
https://github.com/DistrictDataLabs/yellowbrick/issues/714
[ "type: technical debt", "review" ]
bbengfort
1
man-group/arctic
pandas
439
pymongo.errors.CursorNotFound when iterating over and processing large data
#### Arctic Version

1.54.0

#### Arctic Store

ChunkStore

#### Platform and version

Arch Linux x64, Python 3

#### Description of problem and/or code sample that reproduces the issue

I have a store and I'm storing three keys with chunk sizes 'M', 'W' and 'D' respectively. I have data in each key that spans from 2015 to the present. I iterate over the three keys at once, extracting 5-minute intervals from each key. When a key's currently loaded chunk doesn't contain the interval I'm trying to extract, I load the next chunk from that key. I'm using `ChunkStore.iterator()` to iterate.

However, at one point while iterating over my whole dataset, I get a `pymongo.errors.CursorNotFound: Cursor not found, cursor id: 19102652939` exception when I call `newChunkOfAKey = next(iteratorOfACertainKey)`. In previous passes over the whole data, when the processing of the 5-minute intervals took significantly less time, the issue did not show up. I added an artificial wait to my processing and it throws the exception much earlier while iterating. As far as I can tell, this has to do with MongoDB imposing a cursor time limit: if iterating over the data takes longer than that, the exception is thrown.

Full stack trace: https://pastebin.com/iKNQEGqc

Is there something that arctic can do to avoid this limit and issue? I don't have enough memory to load all the data at once and process it later. If not, can I disable this limit, and will arctic work fine with it disabled?
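For context, a hedged sketch of the underlying pymongo pattern (this is not an arctic API, and whether `ChunkStore.iterator()` exposes cursor options is not confirmed here): MongoDB drops idle server-side cursors after roughly ten minutes by default, and the usual workarounds are an untimed cursor that is closed explicitly, or reading in smaller eager batches.

```python
from pymongo import MongoClient

client = MongoClient()                       # placeholder connection
collection = client["arctic"]["my_library"]  # placeholder names

# Option 1: ask the server not to time the cursor out, and close it explicitly.
cursor = collection.find({}, no_cursor_timeout=True)
try:
    for doc in cursor:
        pass  # slow per-document processing goes here
finally:
    cursor.close()

# Option 2: with ChunkStore, read one date range eagerly (e.g. read() with a
# chunk_range), process it, then move on, so no cursor stays open while the
# slow processing runs.
```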
closed
2017-10-15T13:52:02Z
2020-07-20T14:36:41Z
https://github.com/man-group/arctic/issues/439
[]
Zvezdin
7
pallets-eco/flask-sqlalchemy
sqlalchemy
501
[API request] Proxy to current Flask-SQLAlchemy instance
Hi, It seems that there isn't an easy way to access the current Flask-SQLAlchemy instance from separate python files (e.g. Pluggable Views). Will there be a plan for a proxy similar to Flask's [current_app](http://flask.pocoo.org/docs/0.12/api/#flask.current_app) and Flask-Login's [current_user](https://flask-login.readthedocs.io/en/latest/#flask_login.current_user)? Thanks!
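A hedged sketch of how such a proxy could be built today with Werkzeug's `LocalProxy`, analogous to `current_app`: Flask-SQLAlchemy registers its state under `current_app.extensions["sqlalchemy"]`, though the exact shape of that entry has varied between versions, so treat this as an illustration rather than a supported API.

```python
from flask import current_app
from werkzeug.local import LocalProxy

# Resolves to the Flask-SQLAlchemy state of whichever app is handling the
# current request/app context; only valid inside an application context.
current_db = LocalProxy(lambda: current_app.extensions["sqlalchemy"])
```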
closed
2017-05-20T11:17:05Z
2020-12-05T20:55:49Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/501
[]
roniemartinez
2
ranaroussi/yfinance
pandas
1,700
Importing yf fails if ~/.cache directory doesn't exist
### Describe bug

Even though yf has a .set_tz_cache_location, simply importing yfinance fails if the `~/.cache` directory doesn't exist.

Simply importing yf should not have side effects like creating a database, and especially should not fail before there is a chance to override the cache directory. This is particularly problematic when it's not possible to create the .cache directory because the app is being deployed to a 3rd-party system.

### Simple code that reproduces your problem

```python
# Use a non-existing directory for ease of testing
import appdirs as ad
ad.user_cache_dir = lambda *args: ".cache"

# If the ad.user_cache_dir directory doesn't exist, this fails
import yfinance as yf

# This line is never reached
yf.set_tz_cache_location(".cache")
```

### Debug log

```
python test.py
Traceback (most recent call last):
  File "/private/tmp/repos/yf/test.py", line 5, in <module>
    import yfinance as yf  # noqa
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/__init__.py", line 23, in <module>
    from .ticker import Ticker
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/ticker.py", line 29, in <module>
    from .base import TickerBase
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/base.py", line 38, in <module>
    from . import utils
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/utils.py", line 991, in <module>
    class _KV(_peewee.Model):
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/utils.py", line 995, in _KV
    class Meta:
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/utils.py", line 996, in Meta
    database = _DBManager.get_database()
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/utils.py", line 953, in get_database
    cls._initialise()
  File "/private/tmp/repos/yf/.direnv/python-3.10/lib/python3.10/site-packages/yfinance/utils.py", line 972, in _initialise
    _os.mkdir(cls._cache_dir)
FileNotFoundError: [Errno 2] No such file or directory: '.cache/py-yfinance'
```

### Bad data proof

_No response_

### `yfinance` version

0.2.30

### Python version

3.10.3

### Operating system

OSX
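Until the import-time side effect is removed, one hedged workaround is to make sure the cache directory exists before yfinance is imported; this mirrors the reproduction above and is not an official yfinance recommendation:

```python
import os

import appdirs as ad

# Point appdirs at a writable location and create it up front, so the parent
# directory exists by the time yfinance's import-time code calls mkdir.
ad.user_cache_dir = lambda *args: ".cache"
os.makedirs(".cache", exist_ok=True)

import yfinance as yf

yf.set_tz_cache_location(".cache")
```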
closed
2023-09-25T14:40:44Z
2023-10-25T13:02:52Z
https://github.com/ranaroussi/yfinance/issues/1700
[]
blackary
14
ivy-llc/ivy
pytorch
28,312
householder_product
I will implement this as a composition function. #28311 will be good to have for better implementation. Conversation of torch.linealg locked!
closed
2024-02-17T17:29:27Z
2024-02-17T17:32:46Z
https://github.com/ivy-llc/ivy/issues/28312
[ "Sub Task" ]
ZenithFlux
0
tensorflow/datasets
numpy
5,044
Amazon dataset URLs are invalid!
Amazon reviews dataset is not accessible from the following URL:

```
https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Books_v1_02.tsv.gz
```

So, TensorFlow Datasets cannot load the dataset:

```
DownloadError: Failed to get url https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Books_v1_02.tsv.gz. HTTP code: 403.
```
open
2023-08-09T17:20:18Z
2023-08-10T22:46:48Z
https://github.com/tensorflow/datasets/issues/5044
[ "bug" ]
xei
2
serengil/deepface
deep-learning
862
[Feature Request]: Allow users to provide images of different formats other than "jpeg" and "png"
A long time ago I created a PR that would have refactored which image files (by format) are accepted as candidates within the `db_path` directory. Specifically, these lines would have been subject to change:

https://github.com/serengil/deepface/blob/fb8924e9849a943943ebac992eb5d3e175504981/deepface/DeepFace.py#L477C1-L484C49

I don't see a reason why we should restrict users to providing only `png` and/or `jpeg`. Or is there a good reason I am not aware of?
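For illustration, a hedged sketch of a more permissive candidate filter than the `png`/`jpeg` check the issue links to; the allow-list below is an assumption about which formats would be useful, not DeepFace's current behaviour:

```python
from pathlib import Path

# Formats chosen for illustration; anything OpenCV/Pillow can decode could
# reasonably be allowed here.
ALLOWED_SUFFIXES = {".jpg", ".jpeg", ".png", ".bmp", ".webp", ".tiff"}

def candidate_images(db_path):
    """Return image files under db_path whose extension is in the allow-list."""
    return [
        p for p in Path(db_path).rglob("*")
        if p.suffix.lower() in ALLOWED_SUFFIXES
    ]
```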
closed
2023-10-17T12:08:23Z
2023-10-18T17:06:50Z
https://github.com/serengil/deepface/issues/862
[ "question" ]
ekkolon
6
dsdanielpark/Bard-API
api
60
Getting errors even on first request, even with large timeout
I tried running this, and I am getting the same kind of error. I didn't make a lot of requests, and this error persists whether I vary the following factors:

- session: Tried with and without
- timeout: Tried with and without

Here's what the errors are:

```
Session and timeout: Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[9]]]\n57\n[["di",207],["af.httprm",206,"-5973836830292919317",0]]\n25\n[["e",4,null,null,132]]\n'.
Session: Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[9]]]\n57\n[["di",158],["af.httprm",157,"-7097826725018575708",1]]\n25\n[["e",4,null,null,132]]\n'.
Timeout: Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[9]]]\n57\n[["di",188],["af.httprm",187,"-7191527685766192777",2]]\n25\n[["e",4,null,null,132]]\n'.
No Sesh No Timeout: Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[9]]]\n56\n[["di",170],["af.httprm",170,"3959452704966963607",1]]\n25\n[["e",4,null,null,131]]\n'.
```

```
!pip install bardapi

from bardapi import Bard
import os
import requests

# Get this from going to
# Bard web UI -->
# Developer tools -->
# Applications -->
# Storage tab -->
# cookies -->
# Secure-1PSID
token='MYCOOKIE.'

session = requests.Session()
session.headers = {
    "Host": "bard.google.com",
    "X-Same-Domain": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
    "Origin": "https://bard.google.com",
    "Referer": "https://bard.google.com/",
}
session.cookies.set("__Secure-1PSID", token)

# TRY DIFFERENT COMBOS OF SESSION AND TIMEOUT

# Session and timeout
bard = Bard(token=token, session=session, timeout=500)
ans = bard.get_answer("What's a creative use for a hat?")['content']
print("Session and timeout:", ans)

# Session
bard = Bard(token=token, session=session)
ans = bard.get_answer("What's a creative use for a hat?")['content']
print("Session:", ans)

# Timeout
bard = Bard(token=token, timeout=500)
ans = bard.get_answer("What's a creative use for a hat?")['content']
print("Timeout:", ans)

# No Session, No Timeout
bard = Bard(token=token)
ans = bard.get_answer("What's a creative use for a hat?")['content']
print("No Sesh No Timeout:", ans)
```
closed
2023-06-12T18:17:16Z
2023-06-13T18:01:26Z
https://github.com/dsdanielpark/Bard-API/issues/60
[]
josh-ashkinaze
6
gradio-app/gradio
data-science
10,523
[Dynamic components] - Calling dynamic rendering area events from the static code area
### Describe the bug I have a question that is really impacting me at the moment. I decided to work with dynamic components, and I don't know if I made the right choice. In fact, my interface became very complex and I had to do a lot of workarounds to get around certain behaviors. I managed to solve most of them, but there is one scenario that I still haven't been able to solve. Here is a very summarized version of my current scenario, as it would be very complex to replicate everything here: I have a Dropdown that is rendered dynamically. For refactoring purposes, the rendering code is not inside the gr.Blocks() block but in an external function that is called inside gr.Blocks(), in the example render_this(). Well, in the static area, I have a checkbox that will have its visibility changed in the change event of that dropdown. So far so good, I managed to reference the checkbox component in the output of the dropdown event. So if I change the Dropdown values ​​to "Fill", it should show the checkbox, otherwise it should hide it. Now imagine that I have a button to load presets on my screen that will set the values ​​of these components from the loaded preset. I placed the "Load preset" button simulating the loading of the values ​​and I want it to set the dropdown with the value "Fill". Next, I want this triggered event to trigger the same change event of the dropdown to trigger the visibility of the checkbox. Well, you may be thinking "just put the checkbox in the output of this event too". In a simple interface this would be practical, but in my case I have countless parameters being reloaded from the preset and I don't want to have to code all the visibility logic for each parameter again in this event. Instead, I want to reuse the logic of the events already implemented by simply calling them again in the .then() of the preset load event. This method has saved me a lot of effort and has worked well, I just haven't been able to handle it well when it comes to dynamic components. You will see that in my example I tried to leave the event in the global scope as events["event_drop_change"] to try to reuse it but I believe that it was assigned to the preset button before even receiving the real event when it is dynamically rendered. ### Have you searched existing issues? 
🔎 - [x] I have searched and found no existing issues ### Reproduction ```python import gradio as gr dict_components={"merge_image": gr.State(value=None)} render_inputs=[] loaded_values = {"drop": None} loaded_values_state=None events={"event_drop_change": {}} def render_this(values): global events drop = gr.Dropdown(label="Tile Mode", choices=["Fill", "Deblur"], value="Deblur" if values["drop"] is None else values["drop"]) def drop_change(value): global loaded_values loaded_values["drop"]=value if value == "Fill": return gr.update(visible=True) return gr.update(visible=False) events["event_drop_change"]={"fn":drop_change, "inputs":drop, "outputs":[dict_components["merge_image"]]} drop.select(**events["event_drop_change"]) if drop not in render_inputs: render_inputs.append(drop) with gr.Blocks() as demo: loaded_values_state=gr.State(value=loaded_values) with gr.Row(): @gr.render(inputs=loaded_values_state) def render(values) -> None: return render_this(values) with gr.Row(): merge_chk = dict_components["merge_image"] = gr.Checkbox("Merge image", visible=False) with gr.Row(): button_load_preset = gr.Button("Load preset", interactive=True) def load_preset(): loaded_values["drop"] = "Fill" return loaded_values button_load_preset.click(fn=load_preset, outputs=loaded_values_state).then(**events["event_drop_change"]) demo.launch(inbrowser=True) ``` ### Screenshot _No response_ ### Logs ```shell N/A ``` ### System Info ```shell Gradio Environment Information: ------------------------------ Operating System: Windows gradio version: 5.15.0 gradio_client version: 1.7.0 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.4.0 audioop-lts is not installed. fastapi: 0.115.4 ffmpy: 0.4.0 gradio-client==1.7.0 is not installed. httpx: 0.27.0 huggingface-hub: 0.28.1 jinja2: 3.1.3 markupsafe: 2.1.5 numpy: 1.26.3 orjson: 3.10.6 packaging: 24.1 pillow: 11.0.0 pydantic: 2.8.2 pydub: 0.25.1 python-multipart: 0.0.19 pyyaml: 6.0.1 ruff: 0.9.4 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.41.2 tomlkit: 0.12.0 typer: 0.12.3 typing-extensions: 4.12.2 urllib3: 2.2.2 uvicorn: 0.30.5 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2024.2.0 httpx: 0.27.0 huggingface-hub: 0.28.1 packaging: 24.1 typing-extensions: 4.12.2 websockets: 12.0 ``` ### Severity I can work around it
open
2025-02-05T23:53:36Z
2025-02-05T23:55:26Z
https://github.com/gradio-app/gradio/issues/10523
[ "bug" ]
elismasilva
0
Skyvern-AI/skyvern
api
991
Hi, can we support openrouter? openrouter has many visual models that can be used to test specific models on tasks of varying difficulty. Please consider it
open
2024-10-17T02:05:20Z
2024-10-17T02:06:10Z
https://github.com/Skyvern-AI/skyvern/issues/991
[ "good first issue" ]
vencentml
0
Lightning-AI/pytorch-lightning
pytorch
19,965
Loading saved config file fails because of InterpolationMode
### Bug description In the `SaveConfigCallback`, the config file that was used to run the current experiment is saved to `config.yaml` with this command: ``` self.parser.save( self.config, config_path, skip_none=False, overwrite=self.overwrite, multifile=self.multifile ) ``` If I try to reproduce the experiment using the config file that pytorch lightning saved, the loading of the config file fails because `interpolation`, an argument to a few torchvision transforms, such as torchvision.transforms.Resize. is not correctly set. ### What version are you seeing the problem on? v2.2 ### How to reproduce the bug ```python Any config file that contains a transform such as - class_path: torchvision.transforms.Resize init_args: size: [768, 1024] gets converted to this upon saving by pytorch lightning: - class_path: torchvision.transforms.Resize init_args: size: - 768 - 1024 interpolation: bilinear max_size: null antialias: warn Trying to load this converted config file fails. ``` ### Error messages and logs ``` [19:44:29] ERROR | Rank 0 | Exception occurred: Caught TypeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop data = fetcher.fetch(index) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = self.dataset.__getitems__(possibly_batched_index) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/dataset.py", line 364, in __getitems__ return [self.dataset[self.indices[idx]] for idx in indices] File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/dataset.py", line 364, in <listcomp> return [self.dataset[self.indices[idx]] for idx in indices] File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/datasets/folder.py", line 231, in __getitem__ sample = self.transform(sample) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 95, in __call__ img = t(img) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 361, in forward return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 461, in resize raise TypeError( TypeError: Argument interpolation should be a InterpolationMode or a corresponding Pillow integer constant 
Traceback (most recent call last): File "/Users/iamiulialex/Documents/kaiko-eng/libs/ml_framework/kaiko/ml_framework/utils/log_redirect/decorator.py", line 54, in wrapper result = func(*args, **kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/libs/ml_framework/kaiko/ml_framework/__main__.py", line 108, in main return cli.CLI() File "/Users/iamiulialex/Documents/kaiko-eng/libs/ml_framework/kaiko/ml_framework/cli/cli.py", line 47, in __init__ super().__init__( File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/cli.py", line 388, in __init__ self._run_subcommand(self.subcommand) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/cli.py", line 679, in _run_subcommand fn(**fn_kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 544, in fit call._call_and_handle_interrupt( File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 580, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 987, in _run results = self._run_stage() File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1031, in _run_stage self._run_sanity_check() File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1060, in _run_sanity_check val_loop.run() File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/loops/utilities.py", line 182, in _decorator return loop_run(self, *args, **kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/loops/evaluation_loop.py", line 128, in run batch, batch_idx, dataloader_idx = next(data_fetcher) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/loops/fetchers.py", line 133, in __next__ batch = super().__next__() File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/loops/fetchers.py", line 60, in __next__ batch = next(self.iterator) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/utilities/combined_loader.py", line 341, in __next__ out = next(self._iterator) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/lightning/pytorch/utilities/combined_loader.py", line 142, in __next__ out = 
next(self.iterators[0]) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in __next__ data = self._next_data() File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data return self._process_data(data) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data data.reraise() File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/_utils.py", line 694, in reraise raise exception TypeError: Caught TypeError in DataLoader worker process 0. Original Traceback (most recent call last): File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop data = fetcher.fetch(index) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = self.dataset.__getitems__(possibly_batched_index) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/dataset.py", line 364, in __getitems__ return [self.dataset[self.indices[idx]] for idx in indices] File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/utils/data/dataset.py", line 364, in <listcomp> return [self.dataset[self.indices[idx]] for idx in indices] File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/datasets/folder.py", line 231, in __getitem__ sample = self.transform(sample) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 95, in __call__ img = t(img) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 361, in forward return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias) File "/Users/iamiulialex/Documents/kaiko-eng/dist/export/python/virtualenvs/python-default/3.10.13/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 461, in resize raise TypeError( TypeError: Argument interpolation should be a InterpolationMode or a corresponding Pillow integer constant ``` ### Environment <details> <summary>Current environment</summary> * CUDA: - GPU: None - available: False - version: None * Lightning: - info-nce-pytorch: 0.1.4 - lightning: 2.2.1 - lightning-utilities: 0.11.2 - 
pytorch-lightning: 2.2.5 - torch: 2.1.1+cpu - torchdata: 0.7.1 - torchmetrics: 1.4.0.post0 - torchtext: 0.16.1+cpu - torchvision: 0.16.1+cpu * Packages: - aadict: 0.2.3 - absl-py: 2.1.0 - accelerate: 0.30.1 - adlfs: 2023.9.0 - aenum: 3.1.15 - affine: 2.4.0 - agate: 1.9.1 - aiobotocore: 2.5.4 - aiofiles: 23.2.1 - aiohttp: 3.9.5 - aiohttp-cors: 0.7.0 - aioitertools: 0.11.0 - aiosignal: 1.3.1 - albumentations: 1.3.1 - alembic: 1.13.1 - altair: 5.3.0 - aniso8601: 9.0.1 - antlr4-python3-runtime: 4.9.3 - anyio: 4.4.0 - asciitree: 0.3.3 - asgiref: 3.8.1 - asset: 0.6.13 - asttokens: 2.4.1 - async-timeout: 4.0.3 - asyncpg: 0.29.0 - attrs: 23.2.0 - azure-ai-formrecognizer: 3.3.3 - azure-common: 1.1.28 - azure-core: 1.30.1 - azure-data-tables: 12.5.0 - azure-datalake-store: 0.0.53 - azure-functions: 1.19.0 - azure-identity: 1.16.0 - azure-keyvault-secrets: 4.8.0 - azure-storage-blob: 12.20.0 - azure-storage-file-datalake: 12.15.0 - azure-storage-queue: 12.10.0 - babel: 2.15.0 - backoff: 2.2.1 - backports.tarfile: 1.2.0 - bcrypt: 4.1.3 - biopython: 1.83 - bitsandbytes: 0.41.1 - blinker: 1.8.2 - blis: 0.7.11 - boto3: 1.28.17 - botocore: 1.31.17 - braceexpand: 0.1.7 - build: 1.2.1 - cachetools: 5.3.3 - catalogue: 2.0.10 - certifi: 2024.6.2 - cffi: 1.16.0 - chardet: 5.2.0 - charset-normalizer: 3.3.2 - chispa: 0.9.4 - chroma-hnswlib: 0.7.3 - chromadb: 0.5.0 - click: 8.1.7 - click-plugins: 1.1.1 - cligj: 0.7.2 - cloudpathlib: 0.18.1 - cloudpickle: 3.0.0 - colorama: 0.4.6 - coloredlogs: 14.0 - colorful: 0.5.6 - confection: 0.1.5 - contourpy: 1.2.1 - coverage: 7.5.3 - croniter: 2.0.5 - cryptography: 42.0.8 - cycler: 0.12.1 - cymem: 2.0.8 - dacite: 1.8.1 - daff: 1.3.46 - dagster: 1.7.8 - dagster-azure: 0.23.8 - dagster-dbt: 0.23.8 - dagster-deltalake: 0.23.8 - dagster-graphql: 1.7.8 - dagster-k8s: 0.23.8 - dagster-pipes: 1.7.8 - dagster-postgres: 0.23.8 - dagster-prometheus: 0.23.8 - dagster-shell: 0.23.8 - dagster-slack: 0.23.8 - dagster-webserver: 1.7.8 - databricks-sdk: 0.17.0 - databricks-sql-connector: 3.1.2 - dataclasses: 0.6 - dataproperty: 1.0.1 - datasets: 2.19.2 - dbt-adapters: 1.2.1 - dbt-common: 1.3.0 - dbt-core: 1.8.2 - dbt-databricks: 1.8.1 - dbt-extractor: 0.5.1 - dbt-semantic-interfaces: 0.5.1 - dbt-spark: 1.8.0 - decorator: 5.1.1 - deepdiff: 7.0.1 - deepspeed: 0.12.6 - defusedxml: 0.7.1 - delta-spark: 3.2.0 - deltalake: 0.17.4 - deprecated: 1.2.14 - dicomweb-client: 0.59.1 - dill: 0.3.8 - distlib: 0.3.8 - distro: 1.9.0 - dlup: 0.3.38 - dm-tree: 0.1.8 - dnspython: 2.6.1 - docker: 7.1.0 - docker-pycreds: 0.4.0 - docstring-parser: 0.16 - email-validator: 2.1.1 - et-xmlfile: 1.1.0 - evaluate: 0.4.2 - exceptiongroup: 1.2.1 - execnet: 2.1.1 - executing: 2.0.1 - farama-notifications: 0.0.4 - fastapi: 0.111.0 - fastapi-cli: 0.0.4 - fasteners: 0.19 - filelock: 3.14.0 - fiona: 1.9.6 - flask: 3.0.3 - flatbuffers: 24.3.25 - fonttools: 4.53.0 - formenergy-observability: 0.3.2 - frozenlist: 1.4.1 - fsspec: 2023.9.2 - geopandas: 0.14.4 - ghp-import: 2.1.0 - gitdb: 4.0.11 - gitpython: 3.1.43 - giturlparse: 0.12.0 - globre: 0.1.5 - google-api-core: 2.19.0 - google-auth: 2.29.0 - googleapis-common-protos: 1.63.1 - gql: 3.5.0 - graphene: 3.3 - graphql-core: 3.2.3 - graphql-relay: 3.2.0 - greenlet: 3.0.3 - griffe: 0.45.2 - grpcio: 1.64.1 - grpcio-health-checking: 1.62.2 - gymnasium: 0.28.1 - h11: 0.14.0 - h2: 4.1.0 - h5py: 3.11.0 - highdicom: 0.22.0 - hjson: 3.1.0 - hpack: 4.0.0 - httpcore: 1.0.5 - httptools: 0.6.1 - httpx: 0.27.0 - huggingface-hub: 0.23.3 - humanfriendly: 10.0 - hydra-core: 1.3.2 - hyperframe: 
6.0.1 - hypothesis: 6.54.6 - idna: 3.7 - imagecodecs: 2023.7.10 - imageio: 2.34.1 - importlib-metadata: 6.11.0 - importlib-resources: 6.4.0 - info-nce-pytorch: 0.1.4 - iniconfig: 2.0.0 - ipython: 8.25.0 - isodate: 0.6.1 - itsdangerous: 2.2.0 - jaraco.classes: 3.4.0 - jaraco.context: 5.3.0 - jaraco.functools: 4.0.1 - jax-jumpy: 1.0.0 - jedi: 0.19.1 - jinja2: 3.1.4 - jmespath: 1.0.1 - joblib: 1.4.2 - jsonargparse: 4.28.0 - jsonlines: 4.0.0 - jsonschema: 4.22.0 - jsonschema-specifications: 2023.12.1 - jwt: 1.3.1 - kaiko-cfmpb: 0.0.1 - kaiko-conductor: 0.0.4 - kaiko-dagster: 0.0.51 - kaiko-data-io: 0.0.15 - kaiko-data-loading: 0.0.36 - kaiko-databits: 0.0.8 - kaiko-fm-pipeline: 0.0.1 - kaiko-fsspec-utils: 0.0.8 - kaiko-geometry: 0.0.40 - kaiko-image-annotation: 0.0.38 - kaiko-image-augmentation: 0.0.27 - kaiko-image-data: 0.0.49 - kaiko-image-processing: 0.0.40 - kaiko-incognito: 0.0.1 - kaiko-inference-engine: 0.0.16 - kaiko-inference-tools: 0.0.8 - kaiko-ingestion: 0.0.24 - kaiko-llm-dev-tool: 0.0.1 - kaiko-llm-serve: 0.0.1 - kaiko-llm-triage: 0.0.64 - kaiko-lmm-bench: 0.0.11 - kaiko-mirax: 0.0.10 - kaiko-ml-framework: 0.0.251 - kaiko-multiprocessing: 0.0.6 - kaiko-nki-dbt: 0.0.24 - kaiko-nlp: 1.0.84 - kaiko-online-patching: 0.0.70 - kaiko-ray-plugins: 0.0.1 - kaiko-test-services: 1.0.20 - kaiko-vef: 0.0.31 - kaiko-wsi: 0.0.46 - kerchunk: 0.2.5 - keyring: 25.2.1 - kiwisolver: 1.4.5 - kornia: 0.7.2 - kornia-rs: 0.1.3 - kubernetes: 29.0.0 - langcodes: 3.4.0 - language-data: 1.2.0 - lark: 1.1.9 - lazy-loader: 0.4 - leather: 0.4.0 - lightly: 1.5.6 - lightly-utils: 0.0.2 - lightning: 2.2.1 - lightning-utilities: 0.11.2 - linkify-it-py: 2.0.3 - llvmlite: 0.42.0 - lm-eval: 0.4.2 - logbook: 1.5.3 - loguru: 0.7.2 - lxml: 5.2.2 - lz4: 4.3.3 - mako: 1.3.5 - marisa-trie: 1.2.0 - markdown: 3.6 - markdown-it-py: 3.0.0 - markupsafe: 2.1.5 - mashumaro: 3.13 - matplotlib: 3.9.0 - matplotlib-inline: 0.1.7 - mbstrdecoder: 1.1.3 - mdit-py-plugins: 0.4.1 - mdurl: 0.1.2 - memray: 1.12.0 - mergedeep: 1.3.4 - minimal-snowplow-tracker: 0.0.2 - mkdocs: 1.6.0 - mkdocs-autorefs: 1.0.1 - mkdocs-gen-files: 0.5.0 - mkdocs-get-deps: 0.2.0 - mkdocs-literate-nav: 0.6.1 - mkdocs-material: 9.5.26 - mkdocs-material-extensions: 1.3.1 - mkdocstrings: 0.25.1 - mkdocstrings-python: 1.10.3 - mmh3: 4.1.0 - monai: 1.3.1 - monai-deploy-app-sdk: 0.5.0 - monotonic: 1.6 - more-itertools: 10.2.0 - mpmath: 1.3.0 - msal: 1.28.0 - msal-extensions: 1.1.0 - msgpack: 1.0.8 - msrest: 0.7.1 - multidict: 6.0.5 - multiprocess: 0.70.16 - murmurhash: 1.0.10 - networkx: 3.2.1 - nibabel: 4.0.2 - ninja: 1.11.1.1 - nltk: 3.8.1 - numba: 0.59.1 - numcodecs: 0.12.1 - numexpr: 2.10.0 - numpy: 1.23.5 - oauthlib: 3.2.2 - omegaconf: 2.3.0 - onnx: 1.16.1 - onnxruntime: 1.15.1 - openai: 1.31.1 - opencensus: 0.11.4 - opencensus-context: 0.1.3 - opencensus-ext-azure: 1.1.13 - opencv-python: 4.10.0.82 - opencv-python-headless: 4.10.0.82 - openpyxl: 3.1.3 - openslide-python: 1.3.1 - opentelemetry-api: 1.25.0 - opentelemetry-exporter-otlp: 1.25.0 - opentelemetry-exporter-otlp-proto-common: 1.25.0 - opentelemetry-exporter-otlp-proto-grpc: 1.25.0 - opentelemetry-exporter-otlp-proto-http: 1.25.0 - opentelemetry-instrumentation: 0.46b0 - opentelemetry-instrumentation-asgi: 0.46b0 - opentelemetry-instrumentation-fastapi: 0.46b0 - opentelemetry-instrumentation-requests: 0.46b0 - opentelemetry-instrumentation-sqlalchemy: 0.46b0 - opentelemetry-proto: 1.25.0 - opentelemetry-sdk: 1.25.0 - opentelemetry-semantic-conventions: 0.46b0 - opentelemetry-util-http: 0.46b0 - 
ordered-set: 4.1.0 - orjson: 3.10.3 - overrides: 7.7.0 - packaging: 21.3 - paginate: 0.5.6 - pandas: 2.1.4 - paramiko: 3.4.0 - parsedatetime: 2.6 - parso: 0.8.4 - pathspec: 0.12.1 - pathvalidate: 3.2.0 - pdf2image: 1.17.0 - peft: 0.7.0 - pendulum: 3.0.0 - pex: 2.3.2 - pexpect: 4.9.0 - phonenumbers: 8.13.38 - pillow: 10.3.0 - pillow-jpls: 1.3.2 - pip: 23.0.1 - pkgconfig: 1.5.5 - platformdirs: 4.2.2 - playwright: 1.44.0 - plotly: 5.22.0 - pluggy: 1.5.0 - portalocker: 2.8.2 - posthog: 3.5.0 - preshed: 3.0.9 - presidio-analyzer: 2.2.354 - presidio-image-redactor: 0.0.52 - prometheus-client: 0.20.0 - prompt-toolkit: 3.0.46 - proto-plus: 1.23.0 - protobuf: 4.25.3 - psutil: 5.9.8 - psycopg2-binary: 2.9.9 - ptyprocess: 0.7.0 - pure-eval: 0.2.2 - py: 1.11.0 - py-cpuinfo: 9.0.0 - py-spy: 0.3.14 - py4j: 0.10.9.7 - pyaml: 23.9.7 - pyarrow: 14.0.2 - pyarrow-hotfix: 0.6 - pyasn1: 0.6.0 - pyasn1-modules: 0.4.0 - pybind11: 2.12.0 - pycocotools: 2.0.7 - pycparser: 2.22 - pydantic: 1.10.15 - pydeck: 0.9.1 - pydicom: 2.4.4 - pyee: 11.1.0 - pyfaidx: 0.8.1.1 - pygments: 2.18.0 - pyjwt: 2.8.0 - pymdown-extensions: 10.8.1 - pymonad: 2.4.0 - pynacl: 1.5.0 - pynvml: 11.5.0 - pyopenssl: 24.1.0 - pyparsing: 3.1.2 - pypdf2: 3.0.1 - pypika: 0.48.9 - pypng: 0.20220715.0 - pyproj: 3.6.1 - pyproject-hooks: 1.1.0 - pysankeybeta: 1.4.0 - pyspark: 3.5.0 - pytablewriter: 1.2.0 - pytesseract: 0.3.10 - pytest: 7.4.4 - pytest-asyncio: 0.23.7 - pytest-check: 2.3.1 - pytest-cov: 4.1.0 - pytest-forked: 1.6.0 - pytest-lazy-fixture: 0.6.3 - pytest-mock: 3.14.0 - pytest-repeat: 0.9.3 - pytest-rerunfailures: 14.0 - pytest-timeout: 2.3.1 - pytest-xdist: 2.5.0 - python-dateutil: 2.9.0.post0 - python-dotenv: 1.0.1 - python-multipart: 0.0.9 - python-slugify: 8.0.4 - pytimeparse: 1.1.8 - pytorch-lightning: 2.2.5 - pytz: 2024.1 - pyvips: 2.2.3 - pyyaml: 6.0.1 - pyyaml-env-tag: 0.1 - qudida: 0.0.4 - rapidfuzz: 3.9.3 - rasterio: 1.3.10 - ray: 2.11.0 - ray-cpp: 2.11.0 - ray-pex-env: 0.0.11 - referencing: 0.35.1 - regex: 2024.5.15 - requests: 2.32.3 - requests-file: 2.1.0 - requests-mock: 1.12.1 - requests-oauthlib: 2.0.0 - requests-toolbelt: 1.0.0 - retrying: 1.3.4 - rich: 13.7.1 - rouge-metric: 1.0.1 - rouge-score: 0.1.2 - rpds-py: 0.18.1 - rsa: 4.9 - rt-utils: 1.2.7 - ruamel.yaml: 0.18.6 - ruamel.yaml.clib: 0.2.8 - s3fs: 2023.9.2 - s3transfer: 0.6.2 - sacrebleu: 2.4.2 - safetensors: 0.4.3 - scikit-image: 0.23.2 - scikit-learn: 1.3.2 - scipy: 1.13.1 - seaborn: 0.13.2 - segment-anything: 1.0 - sentry-sdk: 2.5.0 - setproctitle: 1.3.3 - setuptools: 69.5.1 - shapely: 2.0.4 - shellingham: 1.5.4 - shtab: 1.7.1 - simpleitk: 2.3.1 - six: 1.16.0 - skorch: 1.0.0 - slack-sdk: 3.27.2 - smart-open: 7.0.4 - smmap: 5.0.1 - sniffio: 1.3.1 - snuggs: 1.4.7 - sortedcontainers: 2.4.0 - spacy: 3.7.5 - spacy-legacy: 3.0.12 - spacy-loggers: 1.0.5 - speechrecognition: 3.10.4 - sqlalchemy: 2.0.30 - sqlglot: 25.0.2 - sqlglotrs: 0.2.5 - sqlitedict: 2.1.0 - sqlparams: 6.0.1 - sqlparse: 0.5.0 - srsly: 2.4.8 - sshtunnel: 0.4.0 - st-copy-to-clipboard: 0.1.6 - st-pages: 0.4.5 - stack-data: 0.6.3 - starlette: 0.37.2 - streamlit: 1.35.0 - streamlit-chat: 0.1.1 - striprtf: 0.0.26 - structlog: 23.3.0 - sympy: 1.12.1 - tabledata: 1.3.3 - tabulate: 0.9.0 - tcolorpy: 0.1.6 - tenacity: 8.3.0 - tensorboard: 2.16.2 - tensorboard-data-server: 0.7.2 - tensorboardx: 2.6.2.2 - text-unidecode: 1.3 - textual: 0.65.1 - thinc: 8.2.4 - threadpoolctl: 3.5.0 - thrift: 0.16.0 - tifffile: 2024.5.22 - tiffslide: 2.4.0 - tifftools: 1.5.2 - tiktoken: 0.7.0 - time-machine: 2.14.1 - timm: 0.9.16 - 
tldextract: 5.1.2 - tokenizers: 0.19.1 - toml: 0.10.2 - tomli: 2.0.1 - toolz: 0.12.1 - toposort: 1.10 - torch: 2.1.1+cpu - torchdata: 0.7.1 - torchmetrics: 1.4.0.post0 - torchtext: 0.16.1+cpu - torchvision: 0.16.1+cpu - tornado: 6.4 - tqdm: 4.66.4 - tqdm-multiprocess: 0.0.11 - traitlets: 5.14.3 - transformers: 4.41.2 - trl: 0.9.3 - typedspark: 1.4.3 - typeguard: 4.3.0 - typepy: 1.3.2 - typer: 0.12.3 - typeshed-client: 2.5.1 - typing-extensions: 4.12.1 - tyro: 0.8.4 - tzdata: 2024.1 - uc-micro-py: 1.0.3 - ujson: 5.10.0 - universal-pathlib: 0.2.2 - urllib3: 1.26.18 - uvicorn: 0.30.1 - uvloop: 0.19.0 - validators: 0.20.0 - virtualenv: 20.26.2 - wandb: 0.17.0 - wasabi: 1.1.3 - watchdog: 4.0.1 - watchfiles: 0.22.0 - wcwidth: 0.2.13 - weasel: 0.4.1 - webdataset: 0.2.86 - websocket-client: 1.8.0 - websockets: 12.0 - werkzeug: 3.0.3 - word2number: 1.1 - wrapt: 1.16.0 - xformers: 0.0.23 - xlsxwriter: 3.2.0 - xmltodict: 0.13.0 - xxhash: 3.4.1 - yarl: 1.9.4 - zarr: 2.18.2 - zipp: 3.19.2 - zstandard: 0.22.0 * System: - OS: Darwin - architecture: - 64bit - - processor: arm - python: 3.10.13 - release: 23.5.0 - version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000 </details> ### More info _No response_
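A small hedged illustration of why the saved config fails to round-trip: the YAML stores the enum as the plain string `bilinear`, while torchvision's `Resize`/`F.resize` expect an `InterpolationMode`. The enum can be rebuilt from that string, which is what a custom jsonargparse serializer/deserializer (or a post-load fix-up) would need to do; whether that is the fix the maintainers prefer is not established here.

```python
from torchvision.transforms import InterpolationMode

# The saved config contains the string "bilinear"; the string-valued enum
# converts back directly:
mode = InterpolationMode("bilinear")
print(mode)  # InterpolationMode.BILINEAR
```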
closed
2024-06-10T18:08:05Z
2024-06-11T13:24:25Z
https://github.com/Lightning-AI/pytorch-lightning/issues/19965
[ "bug", "needs triage" ]
iulialexandra
1
vastsa/FileCodeBox
fastapi
55
.
closed
2023-03-03T03:18:25Z
2023-03-03T03:44:44Z
https://github.com/vastsa/FileCodeBox/issues/55
[]
uu-xixi
0
saulpw/visidata
pandas
1,497
[alt-shift- ] The open-memos keyboard shortcut doesn't work
**Small description**

The open-memos keyboard shortcut doesn't work.

**Expected result**

The open-memos keyboard shortcut would open the memo sheet.

**Actual result with screenshot**

https://asciinema.org/a/iFDKQSKcR6Sn9XTPsRCb3hb2f

**Steps to reproduce with sample data and a .vd**

`vd` then press `Alt+Shift+M`

VisiData is calling it `Alt+Shift+M`, while the shortcut is called `Alt+M`.

**Additional context**

Please include the version of VisiData.

Latest develop version.
closed
2022-08-28T16:54:06Z
2022-09-04T21:14:44Z
https://github.com/saulpw/visidata/issues/1497
[ "bug", "fixed" ]
frosencrantz
4
Avaiga/taipy
data-visualization
1,612
[🐛 BUG] Can't submit scenario, "datanode not written" but it is
### What went wrong? 🤔

I have a Taipy application that creates a scenario, which triggers a callback that writes data to the "demand" data node. The scenario viewer tells me I can't run the scenario because the "demand" data node is not written, even though it is. I can even see the written data in the data node viewer:

![image](https://github.com/user-attachments/assets/86d558d8-3520-4475-a830-c46627f9642b)

Refreshing does not fix the issue.

### Expected Behavior

I should be able to submit that scenario.

### Steps to Reproduce Issue

1. Clone the demo repo

```bash
git clone https://github.com/Avaiga/demo-workforce-plan.git
```

2. Install the develop version of Taipy

```bash
pip install git+https://github.com/Avaiga/taipy.git
```

3. Install requirements

```bash
pip install -r requirements.txt
```

4. Run main.py

```
python main.py
```

5. Create a new scenario using the scenario selector and try to submit it with the scenario viewer

### Solution Proposed

_No response_

### Screenshots

_No response_

### Runtime Environment

_No response_

### Browsers

Chrome

### OS

Windows

### Version of Taipy

develop

### Additional Context

_No response_

### Acceptance Criteria

- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.

### Code of Conduct

- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
closed
2024-07-30T17:22:29Z
2024-07-31T09:06:03Z
https://github.com/Avaiga/taipy/issues/1612
[ "Core", "🖰 GUI", "💥Malfunction", "🟧 Priority: High" ]
AlexandreSajus
1
mars-project/mars
pandas
3,132
[BUG] mars.learn.metrics.roc_curve can't execute when chunks is > 1
<!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> **Describe the bug** When invokng mars.learn.metrics.roc_curve with inputs whose chunks is greater than 1, it throws `operands could not be broadcast together with shapes (0,) (600000,) ` error **To Reproduce** To help us reproducing this bug, please provide information below: 1. Your Python version: 3.7.9 2. The version of Mars you use:0.9.0 3. Versions of crucial packages, such as numpy, scipy and pandas: 1.21.6, 1.3.5, 1.7.3 4. Full stack of the error. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_11230/4067684078.py in <module> 33 columns=['fst3_term_30dovd_flag', 'original_score']).execute() 34 pd_calc_ks(data_md_test.to_pandas(), 'fst3_term_30dovd_flag', 'original_score', 'brainfull') ---> 35 ks_md_test = md_calc_ks(data_md_test, 'fst3_term_30dovd_flag', 'original_score', 'brainfull') /tmp/ipykernel_11230/4067684078.py in md_calc_ks(data, y_col, pred_col, cust_group_name) 12 p_pred = mt.array(data[pred_col], dtype=float).execute() 13 y = mt.array(data[y_col], dtype=int).execute() ---> 14 fpr, tpr, thresholds = metrics.roc_curve(y_true=y, y_score=p_pred) 15 ks = max(tpr.to_numpy() - fpr.to_numpy()) 16 print('ks:',ks) ~/miniconda3/lib/python3.7/site-packages/mars/learn/metrics/_ranking.py in roc_curve(y_true, y_score, pos_label, sample_weight, drop_intermediate, session, run_kwargs) 723 sample_weight=sample_weight, 724 session=session, --> 725 run_kwargs=run_kwargs, 726 ) 727 ~/miniconda3/lib/python3.7/site-packages/mars/learn/metrics/_ranking.py in _binary_clf_curve(y_true, y_score, pos_label, sample_weight, session, run_kwargs) 223 tps, thresholds = mt.stack([temp_tps, y_score])[:, threshold_idxs] 224 fps = 1 + threshold_idxs - tps --> 225 return _execute([fps, tps, thresholds], session=session, **(run_kwargs or dict())) 226 227 ~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in execute(tileable, session, wait, new_session_kwargs, show_progress, progress_update_interval, *tileables, **kwargs) 1862 show_progress=show_progress, 1863 progress_update_interval=progress_update_interval, -> 1864 **kwargs, 1865 ) 1866 ~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in execute(self, tileable, show_progress, warn_duplicated_execution, *tileables, **kwargs) 1651 try: 1652 execution_info: ExecutionInfo = fut.result( -> 1653 timeout=self._isolated_session.timeout 1654 ) 1655 except KeyboardInterrupt: # pragma: no cover ~/miniconda3/lib/python3.7/concurrent/futures/_base.py in result(self, timeout) 433 raise CancelledError() 434 elif self._state == FINISHED: --> 435 return self.__get_result() 436 else: 437 raise TimeoutError() ~/miniconda3/lib/python3.7/concurrent/futures/_base.py in __get_result(self) 382 def __get_result(self): 383 if self._exception: --> 384 raise self._exception 385 else: 386 return self._result ~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in _execute(session, wait, show_progress, progress_update_interval, cancelled, *tileables, **kwargs) 1836 # set cancelled to avoid wait task leak 1837 cancelled.set() -> 1838 await execution_info 1839 else: 1840 return execution_info ~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in wait() 105 106 async def wait(): --> 107 return await self._aio_task 108 109 self._future_local.future = fut = 
asyncio.run_coroutine_threadsafe( ~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in _run_in_background(self, tileables, task_id, progress, profiling) 955 ) 956 if task_result.error: --> 957 raise task_result.error.with_traceback(task_result.traceback) 958 if cancelled: 959 return /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py in run() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/task/supervisor/processor.py in _process_stage_chunk_graph() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/task/execution/mars/executor.py in execute_subtask_graph() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/task/execution/mars/stage.py in run() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/scheduling/worker/execution.py in internal_run_subtask() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/scheduling/worker/execution.py in _retry_run_subtask() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/scheduling/worker/execution.py in _retry_run() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/scheduling/worker/execution.py in _retry_run() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/scheduling/worker/execution.py in _run_subtask_once() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/debug.py in task_with_ex_logged() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/subtask/api.py in run_subtask_in_slot() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py in send() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py in _process_result_message() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py in send() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py in _run_coro() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/api.py in __on_receive__() ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in __on_receive__() 508 debug_logger.exception('Got unhandled error when handling message %.500r ' 509 'in actor %s at %s', message, self.uid, self.address) --> 510 raise ex 511 512 ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor.__on_receive__() 501 raise ValueError(f'call_method {call_method} not valid') 502 --> 503 return await self._handle_actor_result(result) 504 except Exception as ex: 505 if _log_unhandled_errors: ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in _handle_actor_result() 386 # asyncio.wait as it introduces much overhead 387 if len(coros) == 1: --> 388 task_result = await coros[0] 389 if extract_tuple: 390 result = task_result ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 429 res = None 430 while True: --> 431 async with self._lock: 432 with debug_async_timeout('actor_lock_timeout', 433 'async_generator %r hold lock timeout', gen): ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 430 while True: 431 
async with self._lock: --> 432 with debug_async_timeout('actor_lock_timeout', 433 'async_generator %r hold lock timeout', gen): 434 if not is_exception: ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 435 res = await gen.asend(res) 436 else: --> 437 res = await gen.athrow(*res) 438 try: 439 if _log_cycle_send: /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/runner.py in run_subtask() ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 440 message_trace = pop_message_trace() 441 --> 442 res = await self._handle_actor_result(res) 443 is_exception = False 444 except: ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in _handle_actor_result() 360 361 if inspect.isawaitable(result): --> 362 result = await result 363 elif is_async_generator(result): 364 result = (result,) /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py in send() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/context.py in _process_result_message() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py in send() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/backends/pool.py in _run_coro() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/api.py in __on_receive__() ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in __on_receive__() 508 debug_logger.exception('Got unhandled error when handling message %.500r ' 509 'in actor %s at %s', message, self.uid, self.address) --> 510 raise ex 511 512 ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor.__on_receive__() 501 raise ValueError(f'call_method {call_method} not valid') 502 --> 503 return await self._handle_actor_result(result) 504 except Exception as ex: 505 if _log_unhandled_errors: ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in _handle_actor_result() 386 # asyncio.wait as it introduces much overhead 387 if len(coros) == 1: --> 388 task_result = await coros[0] 389 if extract_tuple: 390 result = task_result ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 429 res = None 430 while True: --> 431 async with self._lock: 432 with debug_async_timeout('actor_lock_timeout', 433 'async_generator %r hold lock timeout', gen): ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 430 while True: 431 async with self._lock: --> 432 with debug_async_timeout('actor_lock_timeout', 433 'async_generator %r hold lock timeout', gen): 434 if not is_exception: ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 435 res = await gen.asend(res) 436 else: --> 437 res = await gen.athrow(*res) 438 try: 439 if _log_cycle_send: /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py in run() ~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in mars.oscar.core._BaseActor._run_actor_async_generator() 440 message_trace = pop_message_trace() 441 --> 442 res = await self._handle_actor_result(res) 443 is_exception = False 444 except: 
~/miniconda3/lib/python3.7/site-packages/mars/oscar/core.pyx in _handle_actor_result() 360 361 if inspect.isawaitable(result): --> 362 result = await result 363 elif is_async_generator(result): 364 result = (result,) /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/debug.py in task_with_ex_logged() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py in run() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py in _execute_graph() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/oscar/debug.py in task_with_ex_logged() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/lib/aio/_threads.py in to_thread() /usr/local/python3/lib/python3.7/concurrent/futures/thread.py in run() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/core/mode.py in _inner() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py in _execute_operand() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/services/subtask/worker/processor.py in _execute_operand() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/core/operand/core.py in execute() /home/admin/ray-pack/tmp/job/05010080/pyenv/lib/python3.7/site-packages/mars/tensor/arithmetic/add.py in execute() ValueError: Failed to rerun the <function SubtaskExecutionActor._retry_run_subtask.<locals>._run_subtask_once at 0x7f56e06df950> of subtask e8gGgJ0i5Vp0duhzn2fmbmdY, num_retries: 0, max_retries: 3 due to unhandled exception: [address=ray://mars_cluster_1654843657/4/3, pid=412599] operands could not be broadcast together with shapes (0,) (600000,) . ``` 5. Minimized code to reproduce the error. ```python import sklearn import mars import numpy as np import pandas as pd import mars.dataframe as md import mars.tensor as mt def md_calc_ks(data, y_col, pred_col, cust_group_name): from mars.learn import metrics data = data[~md.isnull(data[y_col])].execute() print(data.shape) p_pred = mt.array(data[pred_col], dtype=float).execute() y = mt.array(data[y_col], dtype=int).execute() fpr, tpr, thresholds = metrics.roc_curve(y_true=y, y_score=p_pred) ks = max(tpr.to_numpy() - fpr.to_numpy()) print('ks:',ks) return ks def pd_calc_ks(data, y_col, pred_col, cust_group_name): from sklearn import metrics data = data[~pd.isnull(data[y_col])] print(data.shape) p_pred = data[pred_col].values y = data[y_col].values fpr, tpr, thresholds = sklearn.metrics.roc_curve(y_true=y, y_score=p_pred) ks = max(tpr - fpr) print('ks:',ks) return ks data_md_test = md.DataFrame( mt.random.randint(0, high=2, size=(60_0000, 2), chunk_size=50_0000), columns=['fst3_term_30dovd_flag', 'original_score']).execute() pd_calc_ks(data_md_test.to_pandas(), 'fst3_term_30dovd_flag', 'original_score', 'brainfull') ks_md_test = md_calc_ks(data_md_test, 'fst3_term_30dovd_flag', 'original_score', 'brainfull') ``` **Expected behavior** A clear and concise description of what you expected to happen. **Additional context** Add any other context about the problem here.
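A possible workaround (a sketch only, under the assumption that the failure is specific to the multi-chunk code path, which the title and the single-chunk behaviour suggest): rechunk the label and score tensors into one chunk before calling `roc_curve`. `rechunk` is a standard Mars tensor method, but this trades away parallelism for that one call.

```python
import mars.tensor as mt
from mars.learn import metrics

# Sketch: force the label and score tensors into a single chunk so the
# multi-chunk broadcast error described above is not hit. This gives up
# parallelism for this one call; chunk-size handling may differ per version.
n = 600_000
data = mt.random.randint(0, high=2, size=(n, 2), chunk_size=100_000)

y = data[:, 0].astype(int).rechunk(n).execute()
p_pred = data[:, 1].astype(float).rechunk(n).execute()

fpr, tpr, thresholds = metrics.roc_curve(y_true=y, y_score=p_pred)
ks = max(tpr.to_numpy() - fpr.to_numpy())
print('ks:', ks)
```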
closed
2022-06-10T06:59:32Z
2022-06-12T01:04:52Z
https://github.com/mars-project/mars/issues/3132
[ "type: bug", "mod: learn" ]
chaokunyang
0
PablocFonseca/streamlit-aggrid
streamlit
218
I can use only one AgGrid table on a page. Is it bug?
Hi, today I'm facing a problem when using multiple AgGrid tables on a Streamlit page. Specifically, my code is as below: ``` # df1, df2 get from any source tab_1, tab_2 = st.tabs(['Tab 1', 'Tab 2']) with tab_1: response_1 = AgGrid(df1, data_return_mode=DataReturnMode.FILTERED_AND_SORTED, height = 300) dataPlot_1 = response_1['data'] # Plotly Figure show..... with tab_2: response_2 = AgGrid(df2, data_return_mode=DataReturnMode.FILTERED_AND_SORTED, height = 300) dataPlot_2 = response_2['data'] # Plotly Figure show..... ``` In the interface, when I filter some columns in the table of ```tab_1``` and then move to ```tab_2``` to use the other AgGrid table, the table on ```tab_2``` has disappeared, and vice versa. This means that whenever I want to use the other table, I need to refresh the whole page. How can I solve this problem?
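For what it's worth, one thing that often helps when several interactive grids live on one page is giving each `AgGrid` call its own stable `key`, so Streamlit keeps their component states apart across reruns. This is only a sketch under that assumption (whether `key` behaves as expected may depend on the installed streamlit-aggrid version), and `df1`/`df2` are placeholders here.

```python
import pandas as pd
import streamlit as st
from st_aggrid import AgGrid, DataReturnMode

# Placeholder data; in the original report df1/df2 come from another source.
df1 = pd.DataFrame({"a": [1, 2, 3]})
df2 = pd.DataFrame({"b": [4, 5, 6]})

tab_1, tab_2 = st.tabs(["Tab 1", "Tab 2"])

with tab_1:
    # A unique, stable key keeps this grid's state separate from the other grid.
    response_1 = AgGrid(
        df1,
        data_return_mode=DataReturnMode.FILTERED_AND_SORTED,
        height=300,
        key="grid_tab_1",
    )

with tab_2:
    response_2 = AgGrid(
        df2,
        data_return_mode=DataReturnMode.FILTERED_AND_SORTED,
        height=300,
        key="grid_tab_2",
    )
```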
open
2023-05-09T11:12:55Z
2024-03-21T18:32:24Z
https://github.com/PablocFonseca/streamlit-aggrid/issues/218
[ "bug" ]
johnnyb1509
1
AUTOMATIC1111/stable-diffusion-webui
deep-learning
15,482
dreambooth plugin do not have test tab.
### Checklist - [X] The issue exists after disabling all extensions - [ ] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [ ] The issue exists in the current version of the webui - [ ] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? ![image](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/17873056/0c913bcc-0997-4a6d-8985-28d342307321) I have trained one model, but I do not know how to use it because there is no test tab. Alternatively, please tell me how to write inference code. Thanks. ### Steps to reproduce the problem NO ### What should have happened? Show a test tab. ### What browsers do you use to access the UI ? _No response_ ### Sysinfo I cloned the code today ### Console logs ```Shell no ``` ### Additional information _No response_
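In case it helps while there is no test tab: if the training run exported a checkpoint file into `models/Stable-diffusion`, it should appear in the normal txt2img checkpoint dropdown; if it instead produced a diffusers-format output directory, a minimal standalone inference sketch could look like the one below. The path, prompt and dtype are assumptions, and this requires the `diffusers` package.

```python
# Minimal inference sketch, assuming the dreambooth run saved a
# diffusers-format model directory (path is a placeholder).
import torch
from diffusers import StableDiffusionPipeline

model_dir = "/path/to/dreambooth/output"  # hypothetical location

pipe = StableDiffusionPipeline.from_pretrained(model_dir, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photo of sks person, studio lighting").images[0]
image.save("test.png")
```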
open
2024-04-11T04:08:12Z
2024-04-11T12:51:47Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15482
[ "bug-report" ]
whk6688
1
huggingface/datasets
pandas
7,470
Is it possible to shard a single-sharded IterableDataset?
I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not. Say we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs more cost too). But the results are also big enough that we don't want to materialize it entirely and instead stream it with an IterableDataset. But after we have the results we want to split it up across workers to parallelize processing. Is something like this possible to do? Here's a failed attempt. The end result should be that each of the shards has unique data, but unfortunately with this attempt the generator gets run once in each shard and the results end up with duplicates... ``` import random import datasets def gen(): print('RUNNING GENERATOR!') items = list(range(10)) random.shuffle(items) yield from items ds = datasets.IterableDataset.from_generator(gen) print('dataset contents:') for item in ds: print(item) print() print('dataset contents (2):') for item in ds: print(item) print() num_shards = 3 def sharded(shard_id): for i, example in enumerate(ds): if i % num_shards in shard_id: yield example ds1 = datasets.IterableDataset.from_generator( sharded, gen_kwargs={'shard_id': list(range(num_shards))} ) for shard in range(num_shards): print('shard', shard) for item in ds1.shard(num_shards, shard): print(item) ```
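One possible pattern, sketched below with the standard library only (this is not an official `datasets` feature): run the expensive generator exactly once in a background thread and fan its items out to per-shard queues, then wrap each queue in its own `IterableDataset`. Caveats: each shard can only be consumed once, the bounded queues assume the consumers keep up, and the examples are wrapped in dicts to match the usual `from_generator` interface.

```python
import queue
import random
import threading

import datasets

num_shards = 3

def expensive_query():
    # Stand-in for the database query; with this pattern it runs exactly once.
    print('RUNNING GENERATOR!')
    items = list(range(10))
    random.shuffle(items)
    yield from items

shard_queues = [queue.Queue(maxsize=16) for _ in range(num_shards)]
_SENTINEL = None  # None never appears as real data in this sketch

def producer():
    for i, item in enumerate(expensive_query()):
        shard_queues[i % num_shards].put({'value': item})
    for q in shard_queues:
        q.put(_SENTINEL)

threading.Thread(target=producer, daemon=True).start()

def shard_gen(shard_id):
    q = shard_queues[shard_id]
    while True:
        example = q.get()
        if example is _SENTINEL:
            break
        yield example

shards = [
    datasets.IterableDataset.from_generator(shard_gen, gen_kwargs={'shard_id': s})
    for s in range(num_shards)
]

for s, ds in enumerate(shards):
    print('shard', s, [ex['value'] for ex in ds])
```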
open
2025-03-21T04:33:37Z
2025-03-21T04:33:37Z
https://github.com/huggingface/datasets/issues/7470
[]
jonathanasdf
0
miguelgrinberg/python-socketio
asyncio
1,351
AsyncClient automatically disconnects if not explicitly passing 'namespaces'
I'm testing out `python-socketio==5.11.2` with a Python client before plugging it into my real application. At first, I thought events other than `message` don't work, but after logging I found that the client automatically disconnect after the `connect` event handler finishes. One more problem I found is that the catch-all event handlers don't work for both client and server. My server code with `uvicorn test_server:app`: ``` import socketio sio = socketio.AsyncServer(logger=True, async_mode='asgi', engineio_logger=True) app = socketio.ASGIApp(socketio_server=sio) @sio.event async def connect(sid, environ, auth): await sio.emit("message", data={"data": "message connected"}) await sio.emit("chat", data={"data": "chat connected"}) @sio.event async def disconnect(sid): print(f"Client disconnected - Session ID: {sid}") @sio.event async def message(sid, data): print(f"message {data}") @sio.on("chat") async def on_chat(sid, data): print(f"chat {data}") @sio.on('*', namespace='*') async def any_event_any_namespace(event, namespace, sid, data): print(f"Event {event} - Namespace {namespace} - SID {sid} - Data {data}") ``` My client code: ``` import socketio import asyncio # asyncio sio = socketio.AsyncClient(logger=True, engineio_logger=True) @sio.event async def connect(): print("Connected to http://127.0.0.1:8000") await sio.emit("message", {"message": "connected"}) await sio.emit("chat", data={"data": "chat connected"}) print("disconnect here?") @sio.event def connect_error(data): print(f"Connection error: {data}") @sio.event def disconnect(): print("Disconnected!") @sio.on('*', namespace='*') async def any_event_any_namespace(event, namespace, sid, data): print(f"Event {event} - Namespace {namespace} - SID {sid} - Data {data}") @sio.event async def message(data): print(f"message {data}") @sio.event async def chat(data): print(f"chat {data}") async def main(): try: await sio.connect('http://127.0.0.1:8000', retry=True) except: print("refused") print(1) await sio.emit("message", {"message": "1"}) print(2) await sio.emit("chat", {"chat": "2"}) print(3) await sio.emit("chat", {"chat": "3"}) print(sio.transport()) await asyncio.sleep(1) await sio.disconnect() if __name__ == '__main__': asyncio.run(main()) ``` The logs in this case are: ``` ###Server 9gmgwsDDMXSZarbbAAAA: Sending packet OPEN data {'sid': '9gmgwsDDMXSZarbbAAAA', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000} INFO: 127.0.0.1:35558 - "GET /socket.io/?transport=polling&EIO=4&t=1718734956.0483625 HTTP/1.1" 200 OK 9gmgwsDDMXSZarbbAAAA: Received request to upgrade to websocket INFO: ('127.0.0.1', 35558) - "WebSocket /socket.io/?transport=websocket&EIO=4&sid=9gmgwsDDMXSZarbbAAAA&t=1718734956.050288" [accepted] INFO: connection open 9gmgwsDDMXSZarbbAAAA: Upgrade to websocket successful 9gmgwsDDMXSZarbbAAAA: Received packet MESSAGE data 0*,{} message async handler error Traceback (most recent call last): File "/home/mp/.local/lib/python3.10/site-packages/engineio/async_server.py", line 483, in run_async_handler return await self.handlers[event](*args) File "/home/mp/.local/lib/python3.10/site-packages/socketio/async_server.py", line 667, in _handle_eio_message pkt = self.packet_class(encoded_packet=data) File "/home/mp/.local/lib/python3.10/site-packages/socketio/packet.py", line 43, in __init__ self.attachment_count = self.decode(encoded_packet) or 0 File "/home/mp/.local/lib/python3.10/site-packages/socketio/packet.py", line 114, in decode self.data = self.json.loads(ep) File 
"/home/mp/.local/lib/python3.10/site-packages/engineio/json.py", line 16, in loads return original_loads(*args, **kwargs) File "/usr/lib/python3.10/json/__init__.py", line 359, in loads return cls(**kw).decode(s) File "/usr/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) 9gmgwsDDMXSZarbbAAAA: Received packet MESSAGE data 0{} emitting event "message" to all [/] 9gmgwsDDMXSZarbbAAAA: Sending packet MESSAGE data 2["message",{"data":"message connected"}] emitting event "chat" to all [/] 9gmgwsDDMXSZarbbAAAA: Sending packet MESSAGE data 2["chat",{"data":"chat connected"}] 9gmgwsDDMXSZarbbAAAA: Sending packet MESSAGE data 0{"sid":"Fz9W8X_nDDAi1eIzAAAB"} 9gmgwsDDMXSZarbbAAAA: Received packet MESSAGE data 2["message",{"message":"connected"}] received event "message" from Fz9W8X_nDDAi1eIzAAAB [/] message {'message': 'connected'} 9gmgwsDDMXSZarbbAAAA: Received packet MESSAGE data 2["chat",{"data":"chat connected"}] received event "chat" from Fz9W8X_nDDAi1eIzAAAB [/] chat {'data': 'chat connected'} INFO: connection closed Client disconnected - Session ID: Fz9W8X_nDDAi1eIzAAAB ``` ``` ### Client Attempting polling connection to http://127.0.0.1:8000/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': '9gmgwsDDMXSZarbbAAAA', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000} Engine.IO connection established Sending packet MESSAGE data 0*,{} Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to ws://127.0.0.1:8000/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful Received packet NOOP data Received packet MESSAGE data 2["message",{"data":"message connected"}] Received event "message" [/] message {'data': 'message connected'} Received packet MESSAGE data 2["chat",{"data":"chat connected"}] Received event "chat" [/] chat {'data': 'chat connected'} Received packet MESSAGE data 0{"sid":"Fz9W8X_nDDAi1eIzAAAB"} Namespace / is connected Connected to http://127.0.0.1:8000 Emitting event "message" [/] Sending packet MESSAGE data 2["message",{"message":"connected"}] Emitting event "chat" [/] Sending packet MESSAGE data 2["chat",{"data":"chat connected"}] disconnect here? Sending packet MESSAGE data 1 Sending packet CLOSE data None Engine.IO connection dropped Write loop: WebSocket connection was closed, aborting Exiting write loop task Server sent close packet data None, aborting Waiting for write loop task to end Exiting read loop task refused 1 Emitting event "message" [/] 2 Emitting event "chat" [/] 3 Emitting event "chat" [/] websocket ``` The disconnect message is sent immediately after the connect event handler finishes. The server couldn't handle this disconnect message and I couldn't even trace which caused that `message async handler error` if I didn't have the client logs. When I add `namespaces="/"` to `sio.connect` the client works fine. This, however, does not make sense because the logs in the previous case state that `Namespace / is connected`. 
``` Attempting polling connection to http://127.0.0.1:8000/socket.io/?transport=polling&EIO=4 Polling connection accepted with {'sid': 'rMAzXhR3GSk2y4g6AAAC', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000} Engine.IO connection established Sending packet MESSAGE data 0{} Attempting WebSocket upgrade to ws://127.0.0.1:8000/socket.io/?transport=websocket&EIO=4 WebSocket upgrade was successful Received packet NOOP data Received packet MESSAGE data 2["message",{"data":"message connected"}] Received event "message" [/] message {'data': 'message connected'} Received packet MESSAGE data 2["chat",{"data":"chat connected"}] Received event "chat" [/] chat {'data': 'chat connected'} Received packet MESSAGE data 0{"sid":"JiKwU6DpIR1chjXxAAAD"} Namespace / is connected Connected to http://127.0.0.1:8000 Emitting event "message" [/] Sending packet MESSAGE data 2["message",{"message":"connected"}] Emitting event "chat" [/] Sending packet MESSAGE data 2["chat",{"data":"chat connected"}] disconnect here? 1 Emitting event "message" [/] Sending packet MESSAGE data 2["message",{"message":"1"}] 2 Emitting event "chat" [/] Sending packet MESSAGE data 2["chat",{"chat":"2"}] 3 Emitting event "chat" [/] Sending packet MESSAGE data 2["chat",{"chat":"3"}] websocket Sending packet MESSAGE data 1 Sending packet CLOSE data None Engine.IO connection dropped Disconnected! Write loop: WebSocket connection was closed, aborting Exiting write loop task Server sent close packet data None, aborting Waiting for write loop task to end Exiting read loop task ``` Server logs are correct: ``` Server initialized for asgi. INFO: Started server process [144165] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) kUs1XS7BRhj9aPAWAAAA: Sending packet OPEN data {'sid': 'kUs1XS7BRhj9aPAWAAAA', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000} INFO: 127.0.0.1:58698 - "GET /socket.io/?transport=polling&EIO=4&t=1718735272.3950653 HTTP/1.1" 200 OK kUs1XS7BRhj9aPAWAAAA: Received request to upgrade to websocket INFO: ('127.0.0.1', 58698) - "WebSocket /socket.io/?transport=websocket&EIO=4&sid=kUs1XS7BRhj9aPAWAAAA&t=1718735272.3968916" [accepted] INFO: connection open kUs1XS7BRhj9aPAWAAAA: Upgrade to websocket successful kUs1XS7BRhj9aPAWAAAA: Received packet MESSAGE data 0{} emitting event "message" to all [/] kUs1XS7BRhj9aPAWAAAA: Sending packet MESSAGE data 2["message",{"data":"message connected"}] emitting event "chat" to all [/] kUs1XS7BRhj9aPAWAAAA: Sending packet MESSAGE data 2["chat",{"data":"chat connected"}] kUs1XS7BRhj9aPAWAAAA: Sending packet MESSAGE data 0{"sid":"JI7P4AqFkkaE2u9DAAAB"} kUs1XS7BRhj9aPAWAAAA: Received packet MESSAGE data 2["message",{"message":"connected"}] received event "message" from JI7P4AqFkkaE2u9DAAAB [/] message {'message': 'connected'} kUs1XS7BRhj9aPAWAAAA: Received packet MESSAGE data 2["chat",{"data":"chat connected"}] received event "chat" from JI7P4AqFkkaE2u9DAAAB [/] chat {'data': 'chat connected'} kUs1XS7BRhj9aPAWAAAA: Received packet MESSAGE data 2["message",{"message":"1"}] received event "message" from JI7P4AqFkkaE2u9DAAAB [/] message {'message': '1'} kUs1XS7BRhj9aPAWAAAA: Received packet MESSAGE data 2["chat",{"chat":"2"}] received event "chat" from JI7P4AqFkkaE2u9DAAAB [/] chat {'chat': '2'} kUs1XS7BRhj9aPAWAAAA: Received packet MESSAGE data 2["chat",{"chat":"3"}] received event "chat" from JI7P4AqFkkaE2u9DAAAB [/] chat {'chat': '3'} INFO: connection closed Client 
disconnected - Session ID: JI7P4AqFkkaE2u9DAAAB ```
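For anyone hitting the same thing: the logs above show that, without `namespaces=`, the client also emits a connect packet for the literal `*` namespace (`0*,{}`), which the server-side packet decoder cannot parse, and the report already notes that passing the namespace explicitly avoids the immediate disconnect. A minimal sketch of that client-side change (everything else as in the original client):

```python
import asyncio
import socketio

sio = socketio.AsyncClient()

async def main():
    # Passing namespaces='/' explicitly avoids the implicit connect to the
    # '*' namespace (and the immediate disconnect) described above.
    await sio.connect('http://127.0.0.1:8000', namespaces='/', retry=True)
    await sio.emit('chat', {'chat': 'hello'})
    await asyncio.sleep(1)
    await sio.disconnect()

if __name__ == '__main__':
    asyncio.run(main())
```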
closed
2024-06-18T18:42:45Z
2024-06-18T22:39:13Z
https://github.com/miguelgrinberg/python-socketio/issues/1351
[ "bug" ]
notnitsuj
0
pydata/xarray
numpy
9,379
Simplify signature of `xr.open_dataset` using new `decoding_kwargs` dict
### What is your issue? The signature of [`xr.open_dataset`](https://docs.xarray.dev/en/stable/generated/xarray.open_dataset.html) is quite complicated, but many of the kwargs are really just immediately passed on to the public [`xr.decode_cf`](https://docs.xarray.dev/en/latest/generated/xarray.decode_cf.html) function internally. Specifically `mask_and_scale`, `decode_times`, `decode_timedelta`, `use_cftime`, `concat_characters`, `decode_coords`, and `drop_variables` are all passed on. Whether or not `xr.decode_cf` is used at all is controlled by the `decode_cf` kwarg, which is currently a boolean. We could instead group all of these kwargs into a single `decoding_kwargs` dictionary keyword argument. We could also replace the `decode_cf` kwarg with a general `decode` or `decode_func` kwarg, with a type something like `Callable[Dataset | AbstractDataStore, Dataset]`, which by default would point to the `xr.decode_cf` function. This would: - Greatly simplify the signature of `xr.open_dataset`, - More clearly separate concerns (opening data and decoding are separate steps), - Allow users to define their own decoding functions, even with existing backends (e.g. a netCDF file that follows non-CF conventions, such as can be found in plasma physics), - Follow the same pattern we already use in `open_dataset` for `from_array_kwargs` and `backend_kwargs`, - Avoid this old issue https://github.com/pydata/xarray/issues/3020 (because once the deprecation cycle was complete you wouldn't be able to pass a specific decoding kwarg whilst also specifying that no decoding is to happen). The downside of this is that there would be a fairly significant blast radius of warnings raised during the deprecation cycle.
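Purely as an illustration of the proposal (none of these kwargs exist yet; `decode` and `decoding_kwargs` are the names suggested above, and the file path is a placeholder), the user-facing call could look something like this:

```python
import xarray as xr

# Hypothetical future API following the proposal above: opening and decoding
# become separate, composable steps.
ds = xr.open_dataset(
    "file.nc",                               # placeholder path
    decode=xr.decode_cf,                     # default decoder, a plain callable
    decoding_kwargs={"decode_times": False,  # forwarded to the decoder
                     "mask_and_scale": True},
)

# Non-CF conventions: supply a custom decoder instead of xr.decode_cf.
def decode_my_conventions(ds):
    # project-specific decoding logic would go here
    return ds

ds2 = xr.open_dataset("file.nc", decode=decode_my_conventions)
```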
closed
2024-08-18T20:26:01Z
2024-08-19T16:07:41Z
https://github.com/pydata/xarray/issues/9379
[ "API design", "topic-backends", "topic-CF conventions" ]
TomNicholas
3
unionai-oss/pandera
pandas
1,049
MultiIndex with a str dtype schemas can produce invalid examples
Problem: If you make a schema for a MultiIndex dataframe and some index levels are `str`s but others are not, the example produced could have incorrect dtypes for the non-`str` indices, and the resulting dataframe from `schema.example()` will not be valid according to the schema. - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the latest version of pandera. - [x] (optional) I have confirmed this bug exists on the master branch of pandera. ```python import pandera as pa # Make example schema schema = pa.DataFrameSchema( index=pa.MultiIndex([ pa.Index(str, name="index1"), pa.Index(float, name="index2"), pa.Index(float, name="index3"), ] )) example = schema.example() schema.validate(example) ``` #### Expected behavior No error should be thrown, but instead the above code results in: `pandera.errors.SchemaError: expected series 'index3' to have type float64, got object`.
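One possible stop-gap while the bug exists (untested, and only a sketch): mark the non-str index components with `coerce=True`, which asks pandera to cast those levels during validation and may absorb the object-dtyped example described above.

```python
import pandera as pa

schema = pa.DataFrameSchema(
    index=pa.MultiIndex([
        pa.Index(str, name="index1"),
        # coerce=True asks pandera to cast the level to float during
        # validation, which may work around the object-dtype example.
        pa.Index(float, name="index2", coerce=True),
        pa.Index(float, name="index3", coerce=True),
    ])
)

example = schema.example()
validated = schema.validate(example)
print(validated.index)
```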
closed
2022-12-11T22:00:03Z
2024-02-19T04:11:05Z
https://github.com/unionai-oss/pandera/issues/1049
[ "bug" ]
gsugar87
1
microsoft/qlib
deep-learning
1,729
Data is fetched twice in CSI500 index collector
https://github.com/microsoft/qlib/blob/98f569eed2252cc7fad0c120cad44f6181c3acf6/scripts/data_collector/cn_index/collector.py#L401-L408 `result` is overwritten with the call to `self.get_data_from_baostock(date)`, and that function contains exactly the same code as the lines above the overwrite. Might be an oversight during refactoring.
open
2024-01-08T07:16:12Z
2024-01-11T12:31:34Z
https://github.com/microsoft/qlib/issues/1729
[]
Chlorie
1
simple-login/app
flask
1,813
Strange source code
Hello! First, I am writing here because I have some private concerns about Zendesk, so I don’t want to use it. I found the following lines in the SimpleLogin code: ``` "class ErrContactErrorUpgradeNeeded(SLException): """raised when user cannot create a contact because the plan doesn't allow it""""" def error_for_user(self) -> str: return "Please upgrade to premium to create reverse-alias"""" ``` https://github.com/simple-login/app/blob/master/app/errors.py (line 70) What is this about? Are there any restrictions on reverse aliases for free accounts?
open
2023-07-20T21:51:22Z
2023-08-07T11:06:04Z
https://github.com/simple-login/app/issues/1813
[]
ghost
2
HumanSignal/labelImg
deep-learning
696
How can I use LabelImg on ARM64 Ubuntu18.04?
Hey. I want to compile this repo under a Jetson Xavier board, which equiped with a ARM64 ubuntu18.04 OS. I suffered a lot for the problem below. I'm hoping for your help, sincerely. - **OS: ARM64, ubuntu18.04** - **PyQt version: 5.12.2** ``` bafs@bafs-xavier:~/installer/labelImg$ sudo pip3 install -r requirements/requirements-linux-python3.txt [sudo] password for bafs: WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip. Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue. To avoid this problem you can invoke Python with '-m pip' instead of running pip directly. WARNING: The directory '/home/bafs/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple Requirement already satisfied: pyqt5>=5.0 in /usr/local/lib/python3.6/dist-packages (from -r requirements/requirements-linux-python3.txt (line 1)) (5.15.2) WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f83d8e438>: Failed to establish a new connection: [Errno 101] Network is unreachable',)': /simple/lxml/ Collecting lxml==4.2.4 Downloading https://pypi.tuna.tsinghua.edu.cn/packages/ca/63/139b710671c1655aed3b20c1e6776118c62e9f9311152f4c6031e12a0554/lxml-4.2.4.tar.gz (2.5 MB) |████████████████████████████████| 2.5 MB 14.5 MB/s Requirement already satisfied: PyQt5-sip<13,>=12.8 in /usr/local/lib/python3.6/dist-packages (from pyqt5>=5.0->-r requirements/requirements-linux-python3.txt (line 1)) (12.8.1) Building wheels for collected packages: lxml Building wheel for lxml (setup.py) ... error ERROR: Command errored out with exit status 1: command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-wgax4awv cwd: /tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/ Complete output (93 lines): Building lxml version 4.2.4. Building without Cython. 
ERROR: b'/bin/sh: 1: xslt-config: not found\n' ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt running bdist_wheel running build running build_py creating build creating build/lib.linux-aarch64-3.6 creating build/lib.linux-aarch64-3.6/lxml copying src/lxml/pyclasslookup.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/usedoctest.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/builder.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/sax.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/__init__.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/cssselect.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/_elementpath.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/doctestcompare.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/ElementInclude.py -> build/lib.linux-aarch64-3.6/lxml creating build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/__init__.py -> build/lib.linux-aarch64-3.6/lxml/includes creating build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/html5parser.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/_html5builder.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/soupparser.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/_diffcommand.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/clean.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/_setmixin.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/usedoctest.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/builder.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/ElementSoup.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/formfill.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/diff.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/__init__.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/defs.py -> build/lib.linux-aarch64-3.6/lxml/html creating build/lib.linux-aarch64-3.6/lxml/isoschematron copying src/lxml/isoschematron/__init__.py -> build/lib.linux-aarch64-3.6/lxml/isoschematron copying src/lxml/etree.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/etree_api.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/lxml.etree.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/lxml.etree_api.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/includes/schematron.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/c14n.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xmlschema.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xmlerror.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/htmlparser.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xslt.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/__init__.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/config.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xpath.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xinclude.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/dtdvalid.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xmlparser.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/uri.pxd -> 
build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/tree.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/relaxng.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/etreepublic.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/etree_defs.h -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/lxml-version.h -> build/lib.linux-aarch64-3.6/lxml/includes creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/rng copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/rng creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build/temp.linux-aarch64-3.6 creating build/temp.linux-aarch64-3.6/src creating build/temp.linux-aarch64-3.6/src/lxml aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc/lxml/includes -I/usr/include/python3.6m -c src/lxml/etree.c -o build/temp.linux-aarch64-3.6/src/lxml/etree.o -w In file included from src/lxml/etree.c:662:0: src/lxml/includes/etree_defs.h:14:10: fatal error: libxml/xmlversion.h: No such file or directory #include "libxml/xmlversion.h" ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. 
Compile failed: command 'aarch64-linux-gnu-gcc' failed with exit status 1 creating tmp cc -I/usr/include/libxml2 -c /tmp/xmlXPathInitgatmzszw.c -o tmp/xmlXPathInitgatmzszw.o /tmp/xmlXPathInitgatmzszw.c:2:1: warning: return type defaults to ‘int’ [-Wimplicit-int] main (int argc, char **argv) { ^~~~ cc tmp/xmlXPathInitgatmzszw.o -lxml2 -o a.out error: command 'aarch64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for lxml Running setup.py clean for lxml Failed to build lxml Installing collected packages: lxml Attempting uninstall: lxml Found existing installation: lxml 4.2.1 Uninstalling lxml-4.2.1: Successfully uninstalled lxml-4.2.1 Running setup.py install for lxml ... error ERROR: Command errored out with exit status 1: command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-mf6xmurg/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.6/lxml cwd: /tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/ Complete output (92 lines): Building lxml version 4.2.4. Building without Cython. ERROR: b'/bin/sh: 1: xslt-config: not found\n' ** make sure the development packages of libxml2 and libxslt are installed ** Using build configuration of libxslt running install running build running build_py creating build creating build/lib.linux-aarch64-3.6 creating build/lib.linux-aarch64-3.6/lxml copying src/lxml/pyclasslookup.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/usedoctest.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/builder.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/sax.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/__init__.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/cssselect.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/_elementpath.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/doctestcompare.py -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/ElementInclude.py -> build/lib.linux-aarch64-3.6/lxml creating build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/__init__.py -> build/lib.linux-aarch64-3.6/lxml/includes creating build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/html5parser.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/_html5builder.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/soupparser.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/_diffcommand.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/clean.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/_setmixin.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/usedoctest.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/builder.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/ElementSoup.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/formfill.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/diff.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/__init__.py -> build/lib.linux-aarch64-3.6/lxml/html copying src/lxml/html/defs.py 
-> build/lib.linux-aarch64-3.6/lxml/html creating build/lib.linux-aarch64-3.6/lxml/isoschematron copying src/lxml/isoschematron/__init__.py -> build/lib.linux-aarch64-3.6/lxml/isoschematron copying src/lxml/etree.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/etree_api.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/lxml.etree.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/lxml.etree_api.h -> build/lib.linux-aarch64-3.6/lxml copying src/lxml/includes/schematron.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/c14n.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xmlschema.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xmlerror.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/htmlparser.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xslt.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/__init__.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/config.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xpath.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xinclude.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/dtdvalid.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/xmlparser.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/uri.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/tree.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/relaxng.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/etreepublic.pxd -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/etree_defs.h -> build/lib.linux-aarch64-3.6/lxml/includes copying src/lxml/includes/lxml-version.h -> build/lib.linux-aarch64-3.6/lxml/includes creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/rng copying src/lxml/isoschematron/resources/rng/iso-schematron.rng -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/rng creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/RNG2Schtrn.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl copying src/lxml/isoschematron/resources/xsl/XSD2Schtrn.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl creating build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_dsdl_include.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_abstract_expand.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_svrl_for_xslt1.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_skeleton_for_xslt1.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/iso_schematron_message.xsl -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 copying 
src/lxml/isoschematron/resources/xsl/iso-schematron-xslt1/readme.txt -> build/lib.linux-aarch64-3.6/lxml/isoschematron/resources/xsl/iso-schematron-xslt1 running build_ext building 'lxml.etree' extension creating build/temp.linux-aarch64-3.6 creating build/temp.linux-aarch64-3.6/src creating build/temp.linux-aarch64-3.6/src/lxml aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DCYTHON_CLINE_IN_TRACEBACK=0 -Isrc -Isrc/lxml/includes -I/usr/include/python3.6m -c src/lxml/etree.c -o build/temp.linux-aarch64-3.6/src/lxml/etree.o -w In file included from src/lxml/etree.c:662:0: src/lxml/includes/etree_defs.h:14:10: fatal error: libxml/xmlversion.h: No such file or directory #include "libxml/xmlversion.h" ^~~~~~~~~~~~~~~~~~~~~ compilation terminated. Compile failed: command 'aarch64-linux-gnu-gcc' failed with exit status 1 cc -I/usr/include/libxml2 -c /tmp/xmlXPathInitiwi4t88k.c -o tmp/xmlXPathInitiwi4t88k.o /tmp/xmlXPathInitiwi4t88k.c:2:1: warning: return type defaults to ‘int’ [-Wimplicit-int] main (int argc, char **argv) { ^~~~ cc tmp/xmlXPathInitiwi4t88k.o -lxml2 -o a.out error: command 'aarch64-linux-gnu-gcc' failed with exit status 1 ---------------------------------------- Rolling back uninstall of lxml Moving to /usr/lib/python3/dist-packages/lxml from /usr/lib/python3/dist-packages/~xml Moving to /usr/lib/python3/dist-packages/lxml-4.2.1.egg-info from /usr/lib/python3/dist-packages/~xml-4.2.1.egg-info ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4fobio7k/lxml_1febb099da3746949eff9e364a118d74/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-mf6xmurg/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.6/lxml Check the logs for full command output. ```
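For what it's worth, the build log itself points at the missing system headers (`xslt-config: not found` and `libxml/xmlversion.h: No such file or directory`). On Ubuntu 18.04 those normally come from the `libxml2-dev` and `libxslt1-dev` packages (for example `sudo apt-get install libxml2-dev libxslt1-dev`), after which the `lxml` wheel build should be retried; this is a general lxml build requirement rather than anything specific to labelImg or ARM64.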
open
2021-01-08T12:31:25Z
2021-01-08T12:32:17Z
https://github.com/HumanSignal/labelImg/issues/696
[]
wbzhang233
0
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,472
How to use a pre-trained model for training own dataset?
How can I use a pre-trained model for training on my own dataset?
open
2022-08-23T06:55:03Z
2022-09-06T20:35:34Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1472
[]
TinkingLoeng
1
huggingface/transformers
deep-learning
36,584
Significant Increase in Computation Time When Using Attention Mask in SDPA Attention
### System Info `transformers` version: 4.46.3 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.10 - Python version: 3.8.18 - Huggingface_hub version: 0.25.2 - Safetensors version: 0.4.5 - Accelerate version: 1.0.1 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: DEEPSPEED - use_cpu: False - debug: False - num_processes: 8 - machine_rank: 0 - num_machines: 1 - rdzv_backend: static - same_network: True - main_training_function: main - enable_cpu_affinity: False - PyTorch version (GPU?): 2.4.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using distributed or parallel set-up in script?: False - Using GPU in script?: True - GPU type: NVIDIA A800-SXM4-40GB ### Who can help? @ylacombe, @eustlb ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Hi, I am experiencing a significant increase in computation time when using an attention mask with the WhisperSdpaAttention in the transformers library. I am not sure if this is expected behavior or a potential bug. Below is the code I used to test this: ``` import torch import time from transformers.models.whisper.modeling_whisper import WhisperSdpaAttention def build_mask(x, x_lens): batch_size = x_lens.size(0) max_seq_len = x_lens.max() # Create a sequence tensor of shape (batch_size, max_seq_len) seq_range = ( torch.arange( 0, max_seq_len, dtype=x_lens.dtype, device=x_lens.device, ) .unsqueeze(0) .expand(batch_size, max_seq_len) ) lengths_expand = x_lens.unsqueeze(1).expand(batch_size, max_seq_len) # Create mask padding_mask = seq_range >= lengths_expand audio_attention_mask_ = padding_mask.view(batch_size, 1, 1, max_seq_len).expand( batch_size, 1, max_seq_len, max_seq_len ) audio_attention_mask = audio_attention_mask_.to( dtype=x.dtype, device=x_lens.device, ) audio_attention_mask[audio_attention_mask_] = float("-inf") return audio_attention_mask device = torch.device("cuda:0") x = torch.randn(2, 200, 128).half().to(device) x_lens = torch.tensor([200, 160]).long().to(device) attn1 = WhisperSdpaAttention(embed_dim=128, num_heads=1, is_causal=False) attn1.to(device).half() with torch.no_grad(): begin = time.time() z = attn1(x) print("sdpa without mask: ", time.time() - begin) begin = time.time() mask = build_mask(x, x_lens).to(device) out = attn1(x, attention_mask=mask) print("sdpa with mask: ", time.time() - begin) ``` The output times are as follows: SDPA without mask: 0.028657197952270508 SDPA with mask: 0.13893771171569824 ### Expected behavior As you can see, the computation time increases significantly when an attention mask is used. Could you please let me know if this is expected behavior or if there might be an issue with the implementation? Thank you!
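One caveat worth adding to this benchmark (a general CUDA point, not specific to transformers): kernel launches are asynchronous, so wall-clock timings taken without an explicit synchronization and without a warm-up pass can be misleading. A small timing helper in that style is sketched below; it may or may not change the conclusion, but it makes the numbers more trustworthy.

```python
import time
import torch

def timed(fn, iters=50):
    # Warm-up so one-time kernel/initialization cost is not measured.
    for _ in range(5):
        fn()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn()
    # Kernels are launched asynchronously; synchronize before reading the clock.
    torch.cuda.synchronize()
    return (time.time() - start) / iters

# Usage sketch with the module and tensors from the snippet above:
# print("sdpa without mask:", timed(lambda: attn1(x)))
# print("sdpa with mask:   ", timed(lambda: attn1(x, attention_mask=mask)))
```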
closed
2025-03-06T12:21:38Z
2025-03-08T04:11:34Z
https://github.com/huggingface/transformers/issues/36584
[ "bug" ]
tartarleft
4
matplotlib/cheatsheets
matplotlib
109
'right' and 'top' in plt.subplots_adjust() are not directly padding size
Very thankful for the cheatsheet, but I think there's a little problem. In 'Axes adjustments' of the second cheatsheet, 'right' is presented as the padding between the **subplots' right edge** and the **figure's right edge**. However, it actually refers to the relative distance between the **subplots' right edge** and the **figure's left edge**. The same also applies to 'top'.
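A tiny example of the behaviour being described (standard matplotlib semantics: left/right/top/bottom are positions in figure-fraction coordinates, not padding sizes):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# right=0.7 places the right edge of the subplot area at 70% of the figure
# width, measured from the figure's LEFT edge; it is not a 0.7-wide pad.
# Likewise top=0.8 places the top edge at 80% of the figure height.
plt.subplots_adjust(left=0.1, right=0.7, bottom=0.1, top=0.8)
ax.plot([0, 1], [0, 1])
plt.savefig("adjust_demo.png")
```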
open
2022-05-05T15:11:21Z
2022-05-07T06:48:51Z
https://github.com/matplotlib/cheatsheets/issues/109
[ "cheatsheet" ]
K-gihu
3
proplot-dev/proplot
data-visualization
123
Add back "miscellaneous" matplotlib colormaps
It would be nice to have all the colormaps from Matplotlib available by default (https://matplotlib.org/examples/color/colormaps_reference.html), in particular `jet`, which I was used to picking because it is more visible than the `Spectral` one in Proplot. The `terrain` one is also useful, etc. I don't know whether this choice was made out of particular attention to colorblind people (I don't have any knowledge on that)?
closed
2020-02-14T09:51:38Z
2020-05-09T23:10:03Z
https://github.com/proplot-dev/proplot/issues/123
[ "feature" ]
mickaellalande
3
quantmind/pulsar
asyncio
154
Application can not call other module because pickle, is that in design?
I have the following files: main.py: ``` # -*- coding: utf-8 -*- from pulsar.apps import wsgi, Application import Run import Proc def app(environ, start_response): print(Run) print(Run.proc) #Run.proc.do_something() #need use the Run.proc object start_response('200 OK', [('Content-type', 'text/plain')]) return [b'OK'] if __name__ == '__main__': Run.proc = Proc.Proc() wsgi_server = wsgi.WSGIServer(callable=app, bind='127.0.0.1:12456') wsgi_server.start() ``` Run.py: (used as program-wide object storage) ``` # -*- coding: utf-8 -*- proc = None ``` Proc.py: (some other module) ``` # -*- coding: utf-8 -*- class Proc(object): def do_something(self, s): print(s) ``` When I post anything to the wsgi_server, I get this output: < module 'Run' from 'F:/test\Run.py'> None So the proc in the Run.py module is None in the Application function. I found it may be because pulsar pickles the Application function; is there any way we can deal with this?
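A workaround consistent with the pickling behaviour described above (a sketch, not an official pulsar recipe): initialize the shared object lazily inside the worker process instead of in the parent, so it does not need to survive pickling. `Proc` here is the same class from Proc.py.

```python
# -*- coding: utf-8 -*-
# Sketch: build the Proc instance lazily in whichever worker process ends up
# serving requests, instead of assigning it in __main__ (that assignment is
# lost when the WSGI callable is pickled and sent to the worker).
from pulsar.apps import wsgi

import Run
import Proc


def app(environ, start_response):
    if Run.proc is None:
        Run.proc = Proc.Proc()  # created once per worker process
    Run.proc.do_something('hello')
    start_response('200 OK', [('Content-type', 'text/plain')])
    return [b'OK']


if __name__ == '__main__':
    wsgi.WSGIServer(callable=app, bind='127.0.0.1:12456').start()
```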
closed
2015-07-16T03:02:34Z
2015-07-20T07:51:33Z
https://github.com/quantmind/pulsar/issues/154
[]
sbant
1
PokeAPI/pokeapi
api
1,129
Undocumented fields in API response
I have noticed that these fields in the JSON response appear to be undocumented at https://pokeapi.co/docs/v2. - `PokemonSprites.other` - `Type.sprites` - `Pokemon.past_abilities` You can see them in https://pokeapi.co/api/v2/type/1/ and https://pokeapi.co/api/v2/pokemon/25/
open
2024-09-14T23:46:44Z
2024-09-14T23:46:44Z
https://github.com/PokeAPI/pokeapi/issues/1129
[]
lunik1
0
aws/aws-sdk-pandas
pandas
2,866
Add the NotebookVersion parameter as specified in the official AWS docs; this parameter is necessary to create a session using Spark
### Describe the bug
Please add the NotebookVersion parameter as specified in the official AWS docs (https://docs.aws.amazon.com/athena/latest/APIReference/API_StartSession.html#athena-StartSession-request-NotebookVersion); this parameter is necessary to create a session using Spark.

Functions called from the library:
- run_spark_calculation
- create_spark_session

The parameter is missing from the Boto3 StartSession call. After calling those two functions this error is raised:

InvalidRequestException: An error occurred (InvalidRequestException) when calling the StartSession operation: NotebookVersion is required when NotebookId is provided.

### Relates
- (https://docs.aws.amazon.com/athena/latest/APIReference/API_StartSession.html#athena-StartSession-request-NotebookVersion)

### How to Reproduce
```
import awswrangler as wr

df = wr.athena.run_spark_calculation(
    code="print(spark)",
    workgroup="...",
)
```

### Expected behavior
Creation of a session on Athena / start of execution on Athena

### Your project
_No response_

### Screenshots
![image](https://github.com/aws/aws-sdk-pandas/assets/33525392/155ca585-0f99-4ca7-b006-8394dd89a2ab)

### OS
Unix/Linux/Mac/Win

### Python version
3.11

### AWS SDK for pandas version
latest

### Additional context
_No response_
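For reference, a hedged sketch of the underlying Boto3 call the library would need to make; the workgroup, DPU sizing and the exact placement of `NotebookId` inside `EngineConfiguration` are assumptions here, only `NotebookVersion` follows the linked StartSession docs.

```python
import boto3

athena = boto3.client("athena")

# Illustrative StartSession call: NotebookVersion must be passed whenever a
# NotebookId is provided (see the API reference linked above). All other
# values below are placeholders, not recommendations.
response = athena.start_session(
    WorkGroup="my-spark-workgroup",
    EngineConfiguration={
        "MaxConcurrentDpus": 20,
        "AdditionalConfigs": {"NotebookId": "example-notebook-id"},
    },
    NotebookVersion="Athena notebook version 1",
)
print(response["SessionId"])
```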
closed
2024-06-21T10:12:56Z
2024-06-24T14:02:39Z
https://github.com/aws/aws-sdk-pandas/issues/2866
[ "bug" ]
DaxterXS
0
python-gino/gino
asyncio
299
Checking for the existence of a row
* GINO version: 0.7.5 * Python version: 3.7.0 * asyncpg version: 0.17.0 * aiocontextvars version: 0.1.2 * PostgreSQL version: 10.4 ### Description Checking for the existence of a row ### What I Did ``` is_exists = await db.SomeModel.query.where(db.SomeModel.name == some_name).gino.first() ``` ``` Traceback (most recent call last): File "/home/ape364/myproj/venv/lib/python3.7/site-packages/aiogram/dispatcher/__init__.py", line 256, in _process_polling_updates for responses in itertools.chain.from_iterable(await self.process_updates(updates)): File "/home/ape364/myproj/venv/lib/python3.7/site-packages/aiogram/dispatcher/__init__.py", line 119, in process_updates return await asyncio.gather(*tasks) File "/home/ape364/myproj/venv/lib/python3.7/site-packages/aiogram/dispatcher/handler.py", line 74, in notify response = await handler(*args) File "/home/ape364/myproj/venv/lib/python3.7/site-packages/aiogram/dispatcher/__init__.py", line 137, in process_update return await self.message_handlers.notify(update.message) File "/home/ape364/myproj/venv/lib/python3.7/site-packages/aiogram/dispatcher/handler.py", line 74, in notify response = await handler(*args) File "/home/ape364/myproj/tgbot/handlers/test.py", line 32, in cmd_test is_exists = await db.SomeModel.query.where(db.SomeModel.name == some_name).gino.first() File "/home/ape364/myproj/venv/lib/python3.7/site-packages/gino/api.py", line 134, in first return await self._query.bind.first(self._query, *multiparams, AttributeError: 'NoneType' object has no attribute 'first' ```
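The `'NoneType' object has no attribute 'first'` at the bottom of the traceback usually means the engine was never bound; a minimal sketch of what typically resolves it (the DSN is a placeholder, and the code has to run inside an async function):

```python
# Somewhere in application startup, before any query is executed:
await db.set_bind('postgresql://user:password@localhost/dbname')  # placeholder DSN

# Existence check: fetch at most one matching row and test for None.
row = await db.SomeModel.query.where(db.SomeModel.name == some_name).gino.first()
is_exists = row is not None
```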
closed
2018-08-07T05:32:37Z
2018-08-15T10:32:31Z
https://github.com/python-gino/gino/issues/299
[ "invalid" ]
ape364
2
modelscope/modelscope
nlp
844
ImportError: cannot import name 'VerificationMode' from 'datasets'
Thanks for your error report and we appreciate it a lot. **Checklist** * I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs) * I have searched related issues but cannot get the expected help. * The bug has not been fixed in the latest version. **Describe the bug** root@f3db3488eed3:/ocr/modelscope# python demo.py 2024-04-25 16:25:52,412 - modelscope - INFO - PyTorch version 1.11.0+cpu Found. 2024-04-25 16:25:52,415 - modelscope - INFO - Loading ast index from /mnt/workspace/.cache/modelscope/ast_indexer 2024-04-25 16:25:52,415 - modelscope - INFO - No valid ast index found from /mnt/workspace/.cache/modelscope/ast_indexer, generating ast index from prebuilt! 2024-04-25 16:25:52,526 - modelscope - INFO - AST-Scanning the path "/ocr/modelscope/modelscope" with the following sub folders ['models', 'metrics', 'pipelines', 'preprocessors', 'trainers', 'msdatasets', 'exporters'] 2024-04-25 16:26:21,531 - modelscope - INFO - Scanning done! A number of 976 components indexed or updated! Time consumed 29.115567445755005s 2024-04-25 16:26:21,615 - modelscope - INFO - Loading done! Current index file version is 1.9.4, with md5 b2983ccde06655d34203dbe3e7355438 and a total number of 976 components indexed ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /ocr/modelscope/demo.py:1 in <module> │ │ │ │ ❱ 1 from modelscope.pipelines import pipeline │ │ 2 from modelscope.utils.constant import Tasks │ │ 3 import cv2 │ │ 4 │ │ │ │ /ocr/modelscope/modelscope/pipelines/__init__.py:4 in <module> │ │ │ │ 1 # Copyright (c) Alibaba, Inc. and its affiliates. │ │ 2 │ │ 3 from . import audio, cv, multi_modal, nlp │ │ ❱ 4 from .base import Pipeline │ │ 5 from .builder import pipeline │ │ 6 │ │ │ │ /ocr/modelscope/modelscope/pipelines/base.py:16 in <module> │ │ │ │ 13 from packaging import version │ │ 14 │ │ 15 from modelscope.models.base import Model │ │ ❱ 16 from modelscope.msdatasets import MsDataset │ │ 17 from modelscope.outputs import TASK_OUTPUTS, ModelOutputBase │ │ 18 from modelscope.pipeline_inputs import TASK_INPUTS, check_input_type │ │ 19 from modelscope.preprocessors import Preprocessor │ │ │ │ /ocr/modelscope/modelscope/msdatasets/__init__.py:2 in <module> │ │ │ │ 1 # Copyright (c) Alibaba, Inc. and its affiliates. 
│ │ ❱ 2 from .ms_dataset import MsDataset │ │ 3 │ │ │ │ /ocr/modelscope/modelscope/msdatasets/ms_dataset.py:24 in <module> │ │ │ │ 21 from modelscope.msdatasets.dataset_cls.custom_datasets.builder import \ │ │ 22 │ build_custom_dataset │ │ 23 from modelscope.msdatasets.utils.delete_utils import DatasetDeleteManager │ │ ❱ 24 from modelscope.msdatasets.utils.hf_datasets_util import load_dataset_with_ctx │ │ 25 from modelscope.msdatasets.utils.upload_utils import DatasetUploadManager │ │ 26 from modelscope.preprocessors import build_preprocessor │ │ 27 from modelscope.utils.config import Config, ConfigDict │ │ │ │ /ocr/modelscope/modelscope/msdatasets/utils/hf_datasets_util.py:15 in <module> │ │ │ │ 12 from urllib.parse import urlencode │ │ 13 │ │ 14 import requests │ │ ❱ 15 from datasets import (BuilderConfig, Dataset, DatasetBuilder, DatasetDict, │ │ 16 │ │ │ │ │ DownloadConfig, DownloadManager, DownloadMode, Features, │ │ 17 │ │ │ │ │ IterableDataset, IterableDatasetDict, Split, │ │ 18 │ │ │ │ │ VerificationMode, Version, config, data_files) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ImportError: cannot import name 'VerificationMode' from 'datasets' (/opt/conda/lib/python3.7/site-packages/datasets/__init__.py) **To Reproduce** * What command or script did you run? Using docker environment: registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu20.04-py37-torch1.11.0-tf1.15.5-1.6.1 Run python script: from modelscope.pipelines import pipeline from modelscope.utils.constant import Tasks import cv2 ocr_recognition = pipeline(Tasks.ocr_recognition, model='damo/cv_convnextTiny_ocr-recognition-general_damo') ### 使用url img_url = 'http://duguang-labelling.oss-cn-shanghai.aliyuncs.com/mass_img_tmp_20220922/ocr_recognition.jpg' result = ocr_recognition(img_url) print(result) **Your Environments (__required__)** * NAME="Ubuntu" * VERSION="20.04.6 LTS (Focal Fossa)" * ID=ubuntu * ID_LIKE=debian * PRETTY_NAME="Ubuntu 20.04.6 LTS" * VERSION_ID="20.04" Please @ corresponding people according to your problem: Model related: @wenmengzhou @tastelikefeet Model hub related: @liuyhwangyh Dataset releated: @wangxingjun778 Finetune related: @tastelikefeet @Jintao-Huang Pipeline related: @Firmament-cyou @wenmengzhou Contribute your model: @zzclynn
closed
2024-04-25T09:02:53Z
2024-06-01T01:51:53Z
https://github.com/modelscope/modelscope/issues/844
[ "Stale" ]
StochasticGame
4
yunjey/pytorch-tutorial
deep-learning
3
Small understandability improvement in Pytorch basics
Hi! First of all, really nice resource for learning pytorch and neural nets! I am following your tutorial while preparing a short hands-on tutorial. During this process, I have identified a very small understandability issue here https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/00%20-%20PyTorch%20Basics/main.py#L74 If I understand correctly, your intention is to show how the loss goes down after the first optimization step, but you do not make a new forward pass with the new weights, so the printed loss is the same as before. Calling linear(x) again in the print call, or more explicitly reassigning pred = linear(x) before recomputing the loss, would help to show how the loss is reduced (see the sketch below). As I said, it is a minor issue but I thought it might help! Thank you!
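A minimal sketch of the suggested change (my own reconstruction, not the tutorial's exact code): do one more forward pass with the updated weights before printing the loss again.

```python
import torch
import torch.nn as nn

x = torch.randn(10, 3)
y = torch.randn(10, 2)

linear = nn.Linear(3, 2)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(linear.parameters(), lr=0.01)

pred = linear(x)
loss = criterion(pred, y)
print('loss before step:', loss.item())

loss.backward()
optimizer.step()

# Forward pass again with the updated weights; otherwise the printed loss is unchanged.
pred = linear(x)
loss = criterion(pred, y)
print('loss after 1 step of optimization:', loss.item())
```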
closed
2017-03-13T10:06:28Z
2017-03-13T10:18:21Z
https://github.com/yunjey/pytorch-tutorial/issues/3
[]
dvsrepo
1
lanpa/tensorboardX
numpy
181
Display of images in embedding visualization provides weird result
Hello, when using the add_embedding functionality of the writer class, the following problem appears: I am feeding in RGB images of size (3, 480, 640) that I resize to (3, 299, 299). I collect a bunch of those and feed them into the add_embedding function, but instead of displaying the correctly colored sprites, the following appears. ![image](https://user-images.githubusercontent.com/32754403/42119083-84c1ecfe-7bd6-11e8-946f-451af9a13851.png) How do I make the sprites display correctly?
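A hedged guess at the usual culprit: `label_img` is expected as a float NCHW batch with values in [0, 1], so feeding 0-255 pixel values tends to produce broken-looking sprites. A minimal sketch with dummy data standing in for the resized images:

```python
import torch
from tensorboardX import SummaryWriter

writer = SummaryWriter()

# Fake batch standing in for the resized RGB images: (N, 3, 299, 299) with 0-255 values.
images = torch.randint(0, 256, (16, 3, 299, 299)).float()

# add_embedding expects label_img as floats in [0, 1]; rescale if needed.
if images.max() > 1.0:
    images = images / 255.0

features = images.view(images.size(0), -1)  # placeholder embedding vectors
writer.add_embedding(features, label_img=images)
writer.close()
```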
closed
2018-06-29T23:56:50Z
2018-08-10T14:12:28Z
https://github.com/lanpa/tensorboardX/issues/181
[]
msieb1
1
SciTools/cartopy
matplotlib
1,518
Adopt NEP29: minimum dependency policy
After Cartopy 0.18, we will be dropping support for Python 2. Currently, for Python 3, we have a minimum of 3.5. This is somewhat problematic on conda-forge because they've already dropped support for it. For the future, we should implement something like [NEP 29](https://numpy.org/neps/nep-0029-deprecation_policy.html) as a minimum dependency policy. For Python, this is a 42-month window and for NumPy, it is a 24-month window. Assuming that 0.18 is out by the end of the month, and 0.19 takes at least 3 months, that'd be end of July. Extending that policy to other dependencies using 42 months (January 30, 2017), that would mean: * Python 3.7 * NumPy 1.16 * Cython 0.26 (older than current requirement) * Proj 5.0.0 * GEOS 3.6.2 * Shapely 1.6b4 * pyshp 1.2.11 (older than current requirement) * pyepsg 0.3.2 (older than current requirement) * OWSLib 0.15.0 * Pillow 4.1.0 * Matplotlib 2.0.1 * GDAL 2.2.0 * SciPy 0.19.0 * pytest 3.0.7 (older than current requirement) Extending that policy to other dependencies using 24 months (July 30, 2018) instead would mean: * Python 3.7 * NumPy 1.16 * Cython 0.28.5 * Proj 5.2.0 * GEOS 3.6.3 * Shapely 1.7a1 * pyshp 2.0.0 * pyepsg 0.4.0 * OWSLib 0.17.0 * Pillow 5.3.0 * Matplotlib 2.2.3 * GDAL 2.3.2 * SciPy 1.2.0 * pytest 3.7.0 Though note, we probably shouldn't switch minimums to alphas or betas; that's just there for reference. We may also want slightly different policy for system deps like Proj and GEOS.
closed
2020-04-12T05:03:51Z
2021-09-17T07:55:55Z
https://github.com/SciTools/cartopy/issues/1518
[ "Type: Enhancement", "Type: Infrastructure" ]
QuLogic
3
google-research/bert
tensorflow
917
Eval every 100 steps during training.
How can this be implemented? It seems that evaluation is only done once, after all training epochs have ended. See https://github.com/google-research/bert/issues/636 (a rough sketch of one approach is below).
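A rough sketch of one approach (not from the BERT repo, and using plain `tf.estimator` rather than `TPUEstimator`): save a checkpoint every 100 steps and let `train_and_evaluate` run evaluation on each new checkpoint. Names such as `model_fn`, `train_input_fn`, `eval_input_fn`, `output_dir` and `num_train_steps` are placeholders for the ones built in run_classifier.py.

```python
import tensorflow as tf

run_config = tf.estimator.RunConfig(
    model_dir=output_dir,            # placeholder
    save_checkpoints_steps=100,      # evaluation is triggered on each new checkpoint
)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=num_train_steps)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=None,        # evaluate on the full eval set
    throttle_secs=0,   # do not wait between evaluations
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
```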
closed
2019-11-14T03:46:26Z
2021-03-12T03:35:52Z
https://github.com/google-research/bert/issues/917
[]
guotong1988
3
RobertCraigie/prisma-client-py
pydantic
34
Add support for setting a field to null
## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Optional database fields can be set to null, we currently don't support this as our query builder automatically removes `None` values and does not include them in the generated query. ## Suggested solutions <!-- A clear and concise description of what you want to happen. --> There are currently three solutions being considered ### Refactor the query builder to include type information We will then be able to decide whether or not the field should be removed or not depending on the current context. ### Refactor TypedDict optional fields We currently mark a lot of fields as `Optional` when maybe they shouldn't be. After this we can include all `None` values in generated queries as we can assume them to be valid. ### Add a Null type Whenever the query builder encounters this type it will include it in the generated query instead of discarding it. A disadvantage of this approach is that it could be confusing for users as fields that they are explicitly setting to `None` would be discarded from the query. ```py from prisma.types import Null await client.post.update( data={ 'nullable_field': Null(), }, where={ 'id': 'abc', }, ) ```
closed
2021-07-11T15:06:46Z
2021-08-20T21:13:54Z
https://github.com/RobertCraigie/prisma-client-py/issues/34
[ "kind/feature" ]
RobertCraigie
0
deezer/spleeter
deep-learning
425
[Discussion] How to set GPU device
I have multiple GPUs but I can't figure out how to set which GPU device the separator should use. In my python script I've tried adding `os.environ['CUDA_VISIBLE_DEVICES'] = "1"` but it does nothing. I've also tried adding `device_count={'GPU': 1}` to `ConfigProto`, but that does nothing either. Has anyone got a clue how to set which device to use? I'm on Windows 10 with an RTX 2080.
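A sketch of the usual workaround (assuming the Python API is used): `CUDA_VISIBLE_DEVICES` only takes effect if it is set before TensorFlow is first imported, so set it at the very top of the script, before importing spleeter.

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # must happen before TensorFlow/spleeter are imported

from spleeter.separator import Separator

separator = Separator("spleeter:2stems")
separator.separate_to_file("song.mp3", "output/")
```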
open
2020-06-18T15:35:58Z
2020-06-22T13:01:50Z
https://github.com/deezer/spleeter/issues/425
[ "question" ]
aidv
2
deepset-ai/haystack
pytorch
8,863
Explain deprecation of `dataframe` field in documentation
We should explain here that dataframe is deprecated. https://docs.haystack.deepset.ai/docs/data-classes#document We will need to fully remove the dataframe field from the explanation after the 2.11 release.
closed
2025-02-14T15:40:53Z
2025-02-17T12:20:19Z
https://github.com/deepset-ai/haystack/issues/8863
[ "type:documentation", "P1" ]
julian-risch
0
piskvorky/gensim
machine-learning
3,496
Is the summarization module removed in the newest version of gensim? I can find it nowhere in the documentation.
I actually want the keywords functionality
closed
2023-09-15T04:00:15Z
2023-09-15T15:11:21Z
https://github.com/piskvorky/gensim/issues/3496
[]
zcsh
1
deepinsight/insightface
pytorch
2,240
Ask for the official code to count FLOPS
May I ask for the official code to count the FLOPs of a model? Thanks in advance for your reply!
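Not the official insightface code, but a generic sketch using the third-party `thop` package; the network below is a stand-in for the actual recognition backbone and the 112x112 input size is an assumption.

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop

# Stand-in network; replace with the actual insightface backbone (e.g. iresnet50).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 512),
)
dummy = torch.randn(1, 3, 112, 112)

macs, params = profile(model, inputs=(dummy,))
# thop counts multiply-accumulate operations; FLOPs are often quoted as 2 * MACs.
print(f"MACs: {macs / 1e9:.3f} G, params: {params / 1e6:.3f} M")
```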
open
2023-02-13T12:29:51Z
2023-02-13T14:30:27Z
https://github.com/deepinsight/insightface/issues/2240
[]
5RJ
2
horovod/horovod
machine-learning
3,404
When will the next release of Horovod be available?
Hello Horovod Team, just curious: I'm wondering when the next release of Horovod will be available on PyPI, and whether it will be a major or minor release? Thank you very much, and feel free to correct anything.
closed
2022-02-08T06:36:35Z
2022-03-02T16:23:52Z
https://github.com/horovod/horovod/issues/3404
[]
Tony-Feng
7
noirbizarre/flask-restplus
flask
566
Is there a way to disable automatically documented models in the Swagger UI?
Hello, I use models to describe some JSON inputs and outputs of different methods. I would like to disable the list of models in the Swagger documentation.
open
2018-12-21T10:29:44Z
2019-03-26T20:58:51Z
https://github.com/noirbizarre/flask-restplus/issues/566
[ "question" ]
guissart
0
ray-project/ray
deep-learning
51,423
Ray on kubernetes with custom image_uri is broken
### What happened + What you expected to happen
Hi, I am trying to use a custom image on a kubernetes cluster. I am using this cluster: `https://github.com/ray-project/kuberay/blob/master/ray-operator/config/samples/ray-cluster.autoscaler.yaml`. Unfortunately, it seems that ray uses podman to launch custom images (`https://github.com/ray-project/ray/blame/master/python/ray/_private/runtime_env/image_uri.py#L16C10-L18C15`) (by @zcin); however, podman does not seem to be installed in the official ray image, so I get errors saying that podman is not installed. I have tried installing podman manually, but then I get the error `WARN[0000] "/" is not a shared mount, this could cause issues or missing mounts with rootless containers`. In my opinion, the best solution for this would be to completely remove the podman dependency, as it seems to be causing many issues. Is there a workaround for this right now? I'm completely blocked as things stand.

### Versions / Dependencies
latest

### Reproduction script
```python
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient(args.address)
job_id = client.submit_job(
    entrypoint="""
    cat /etc/hostname;
    echo "import ray; print(ray.__version__); print('hello'); import time; time.sleep(100); print('done');" > main.py;
    python main.py
    """,
    runtime_env={
        "image_uri": "<choose an image here>",
    },
)
print(job_id)
```

### Issue Severity
High: It blocks me from completing my task.
open
2025-03-17T15:07:30Z
2025-03-21T23:06:12Z
https://github.com/ray-project/ray/issues/51423
[ "bug", "triage", "core" ]
CowKeyMan
5
thp/urlwatch
automation
146
Make subfilter parameters configurable
For any given set of different method parameters, we would have to create a new function to support it, as is the case with https://github.com/thp/urlwatch/pull/145 for just one simple extra configuration. Maybe it would be better if we made it configurable, for example:

    filter: html2text:pyhtml2text:body-width=0:protect-links=1:images-with-size=1
    filter: html2text:lynx:list_inline=1:unique_urls=1

Is that a good idea? I would be happy to implement it.
closed
2017-04-07T13:30:48Z
2017-06-28T10:26:25Z
https://github.com/thp/urlwatch/issues/146
[]
vmassuchetto
1
tableau/server-client-python
rest-api
1,564
ImageRequestOptions ignoring viz_height and viz_width
**Describe the bug** The viz_height and viz_width options were added to ImageRequestOptions (this is the [merge](https://github.com/tableau/server-client-python/commit/d09a9ceeae33400536abcbc6c60393887fa03e04)). The resulting class has the properties, but it doesn't actually control the image resolution output. **Versions** Details of your environment, including: - Tableau Server version (or note if using Tableau Online): 2024.2.2 - Python version: 3.11.2 - TSC library version: v0.36 **To Reproduce** Code snippet: `image_req_option = TSC.ImageRequestOptions(viz_height=image_height,viz_width=image_width,maxage=-1)` `print(f"TSC Image Height: {image_req_option.viz_height} TSC Image Width: {image_req_option.viz_width}")` `self.server.views.populate_image(view, image_req_option)` I set the properties in the req_option, I print the properties to show they are set, then I populate the image with the given image_req_option. **Results** The resulting image only abides by the default behavior, or the High resolution request option. Maybe there's something I'm missing? Print result: ![Image](https://github.com/user-attachments/assets/fb7bc836-3b02-4293-b944-68eb8bac1ad4) Output resolution: ![Image](https://github.com/user-attachments/assets/02543d41-31e8-4cb0-a4b7-8ff48dad51d4)
closed
2025-02-06T21:00:04Z
2025-02-07T16:19:05Z
https://github.com/tableau/server-client-python/issues/1564
[]
bnoffke-uwcu
3
polakowo/vectorbt
data-visualization
562
Maybe a bug in `nb.sort_call_seq_nb` when providing a `size` of group length.
I am using `.from_order_func`. My strategy is:

1. select the top 20 stocks based on rolling 10 `amount` (price*vol) every 100 days (controlled by select_interval below)
2. filter the stocks by `entries` every 10 days (controlled by segment_mask)
3. rebalance the portfolio (size)

```python
@njit
def topk_indices(arr, k):
    indices = np.argsort(arr)[-k:]
    indices = indices[::-1]
    return indices

@njit
def fill_size_on_topk_values(arr, k, portion):
    size = np.full(arr.shape[0], 0, dtype=np.float_)
    indices = topk_indices(arr, k)
    for i in indices:
        size[i] = portion
    return size

@njit
def pre_segment_func_nb(c, amount, entries, select_interval, topk, portion, size_type, direction):
    for col in range(c.from_col, c.to_col):
        # Here we use order price for group valuation
        c.last_val_price[col] = nb.get_col_elem_nb(c, col, c.close)
    # custom
    # select_interval = 100
    size = fill_size_on_topk_values(amount[c.i // select_interval * select_interval, c.from_col:c.to_col], topk, portion)
    size[~entries[c.i, c.from_col:c.to_col]] = 0
    order_value_out = np.empty(c.group_len, dtype=np.float_)
    nb.sort_call_seq_nb(c, size, size_type, direction, order_value_out)
    return (size,)
```

The above code works fine **without group_by**.

---

However, when testing with range_split, which creates groups, it fails with a wrong index.

As the above code shows, the length of `size` is in fact the group length.

With groups:
```
split_close, split_close_ind = close.vbt.range_split(**range_split_kwargs)
vbt.Portfolio.from_order_func(
    split_close,
    ...
    group_by=split_close.columns.get_level_values(0),
    ...
)
```

`nb.sort_call_seq_nb` got the error `IndexError: index 3 is out of bounds for axis 0 with size 3`.

The key problem detail:
![7GAV{``} AH~D_3PH3K_~ F](https://user-images.githubusercontent.com/3938751/218001981-291a798a-3a4c-4bed-bdf1-9613102d7558.png)

Because the hierarchy is `group -> segment`, I naturally assumed the length of `size` equals `c.group_len`.
open
2023-02-10T04:39:17Z
2024-02-26T01:48:30Z
https://github.com/polakowo/vectorbt/issues/562
[ "stale" ]
eromoe
4
sqlalchemy/sqlalchemy
sqlalchemy
10,821
Deprecate array any/all implementation specific implementations
These methods have a peculiar signature that's quite different from the rest of the API. Since there are alternatives, these should be deprecated in 2.1.
open
2024-01-03T00:03:24Z
2024-01-03T00:03:25Z
https://github.com/sqlalchemy/sqlalchemy/issues/10821
[ "task", "sql", "datatypes" ]
CaselIT
0
microsoft/nni
data-science
5,456
'Trainer' object has no attribute 'optimizer_frequencies'
**Describe the issue**: when running the experiment via experiment.run(config), this error throws : AttributeError Traceback (most recent call last) [<ipython-input-32-1486538249e8>](https://localhost:8080/#) in <module> 6 config = RetiariiExeConfig(execution_engine='oneshot') 7 experiment = RetiariiExperiment(model_space, evaluator=evaluator, strategy=strategy) ----> 8 experiment.run(config) 16 frames [/usr/local/lib/python3.9/dist-packages/nni/nas/oneshot/pytorch/base_lightning.py](https://localhost:8080/#) in advance_optimization(self, loss, batch_idx, gradient_clip_val, gradient_clip_algorithm) 442 raise ValueError('This method should not be used when automatic optimization is turned on.') 443 --> 444 if self.trainer.optimizer_frequencies: 445 warnings.warn('optimizer_frequencies is not supported in NAS. It will be ignored.', UserWarning) 446 AttributeError: 'Trainer' object has no attribute 'optimizer_frequencies' **Environment**: - NNI version: latest - Is conda/virtualenv/venv used?: yes - Is running in Docker?: no **Configuration**: - Experiment config (remember to remove secrets!): config = RetiariiExeConfig(execution_engine='oneshot') - Search space: nni.retiarii.hub.pytorch.DARTS
open
2023-03-16T16:55:02Z
2023-05-29T02:15:29Z
https://github.com/microsoft/nni/issues/5456
[]
yasmineLalabouali
2
huggingface/transformers
pytorch
36,730
On MoE implementation in HuggingFace
On the Mixtral MoE implementation, I saw it mentioned that it is equivalent to `standard MoE with full capacity (no dropped tokens)`. I just wonder where the token dropless logic is implemented? Code reference: https://github.com/huggingface/transformers/blob/2c2495cc7b0e3e2942a9310f61548f40a2bc8425/src/transformers/models/mixtral/modeling_mixtral.py#L89C28-L90C20 CC @ArthurZucker if you have any insights. Thank you!
closed
2025-03-14T20:31:26Z
2025-03-17T09:20:55Z
https://github.com/huggingface/transformers/issues/36730
[]
Neo9061
2
deeppavlov/DeepPavlov
tensorflow
1,228
Download models data from s3
Hi. I upload trained models to S3 so that I can later add them to the config under `"download"`. The problem is that DeepPavlov cannot download data from S3 links. Data from S3 can be fetched by creating a temporary link to the file or by making the file public, but that is not very convenient. It would be great if data could be downloaded directly via S3, for example by specifying an S3 link to the trained model in the config.
```
"download": [
    {
        "url": "s3://bucket/model.tar.gz",
        "subdir": "{MODELS_PATH}/model"
    }
]
```
closed
2020-05-25T07:30:50Z
2020-07-03T10:31:07Z
https://github.com/deeppavlov/DeepPavlov/issues/1228
[]
pituganov
2
krish-adi/barfi
streamlit
50
[Bug] - the commands argument in the st_flow function does not impact the UI.
As stated in the description. If the commands list is changed to just `commands = ["execute"]` the st_flow widget still has the Execute and Save buttons, both of which still function if you click on them.
open
2025-02-12T14:08:40Z
2025-02-12T14:08:40Z
https://github.com/krish-adi/barfi/issues/50
[]
nwshell
0
unit8co/darts
data-science
2,500
[BUG] How to prevent darts from caring about the date frequency
**Describe the bug** I am trying to train TFT to predict the price movement of a stock given 10 other stocks. I removed after-hours trading and non-weekdays from my dataframe, so I only have data from Monday to Friday, 9:30am to 4pm. I do not need darts to populate these after-hours dates with NaN values. How can I get rid of this? Thanks
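Not an official answer, but one common workaround is to drop the datetime index and build the `TimeSeries` over a plain integer `RangeIndex`, so darts has no calendar frequency to infer or fill; a minimal sketch with toy data standing in for the trading-hours-only frame (column name is a placeholder):

```python
import pandas as pd
from darts import TimeSeries

# Toy stand-in for the trading-hours-only data; the real frame would keep its
# timestamps in a regular column and use a plain integer index.
df = pd.DataFrame({"close": [101.2, 101.5, 100.9, 101.1, 101.8]})

# With a RangeIndex there is no date frequency, so no after-hours rows get
# created or NaN-filled.
series = TimeSeries.from_dataframe(df, value_cols=["close"])
```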
closed
2024-08-14T16:06:00Z
2024-08-15T10:43:30Z
https://github.com/unit8co/darts/issues/2500
[ "question" ]
valentin-fngr
2
pytorch/vision
computer-vision
8,382
Regarding IMAGENET1K_V1 and IMAGENET1K_V2 weights
### 🐛 Describe the bug I found a very strange "bug" while I was trying to find similiar instances in a vector database of pictures. The model I used is ResNet50. The problem occurs only when using the` IMAGENET1K_V2` weights, but does not appear when using the legacy `V1` weights (referring to https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/). When I calculate the **cosine similarity** with `V1` weights for two almost identical pictures I get `values > 0.95`, however when I use `V2` weights with the same pictures I get `values < 0.7`. In layman terms with `V2` identical pictures are not recognized as such anymore. I gave you two example pictures below and the code to reproduce the problem. Does somebody have a concise explanation for this behaviour? When you increase the size in your `transform.resize((x, y))` the problem gradually begins to vanish, however this is not really a good solution since it produces overhead during inference. Would be happy for any insights on this topic :) ``` from torchvision import models from torchvision.models import ResNet50_Weights import torchvision.io from torch import nn import numpy as np from numpy.linalg import norm class Identity(nn.Module): def __init__(self): super(Identity, self).__init__() def forward(self, x): return x # Get weights weights = ResNet50_Weights.IMAGENET1K_V1 preprocess = weights.transforms() model = models.resnet50(weights=ResNet50_Weights.IMAGENET1K_V1).to("cuda:0") model.fc = Identity() a = model(preprocess(torchvision.io.read_image("/raid/..../datasets/lion/lion_ori_small.jpg").unsqueeze(dim=0).to("cuda:0"))).cpu().detach().numpy().squeeze() b = model(preprocess(torchvision.io.read_image("/raid/.../datasets/lion/lion_fake_small.jpg").unsqueeze(dim=0).to("cuda:0"))).cpu().detach().numpy().squeeze() cosine = np.dot(a,b)/(norm(a)*norm(b)) ``` ![lion_fake](https://github.com/pytorch/vision/assets/138434950/36983e9d-61af-41bf-9e88-793d149c0188) ![lion_ori](https://github.com/pytorch/vision/assets/138434950/095e9a5b-0fbe-49eb-820b-41b500f116a8) ### Versions torchvision 0.19
open
2024-04-17T09:30:50Z
2024-04-17T09:33:44Z
https://github.com/pytorch/vision/issues/8382
[]
asusdisciple
0
ijl/orjson
numpy
555
Support for reading/writing directly to file objects
I see this has already been mentioned in #516, but it did not get any comments there, so I'm opening a new issue. I'm in a scenario where having `load`/`dump` support would probably be very helpful for us. I'm willing to contribute the work myself, but don't want to spend time on it unless you'd be willing to accept the change. I could see this working either like `load` and `dump` in the json module, passing a python file object to the function, or as in some other json libraries where the filename/path is passed to the function. Is this something you'd consider? If so, do you have opinions on how the implementation should look? I've had a look at the code and have a few ideas but no strong opinions.
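For reference, a tiny userland sketch of what `dump`/`load` could look like as thin wrappers while the design is discussed (file-object style, mirroring the stdlib json module; note orjson works with bytes, so files must be opened in binary mode):

```python
import orjson

def dump(obj, fp, option=None):
    # orjson.dumps returns bytes, so fp must be opened in binary mode.
    fp.write(orjson.dumps(obj, option=option))

def load(fp):
    return orjson.loads(fp.read())

# Usage
with open("data.json", "wb") as f:
    dump({"a": [1, 2, 3]}, f, option=orjson.OPT_INDENT_2)

with open("data.json", "rb") as f:
    print(load(f))
```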
closed
2025-02-22T17:33:46Z
2025-03-06T08:02:25Z
https://github.com/ijl/orjson/issues/555
[ "Stale" ]
joburatti
4
jmcnamara/XlsxWriter
pandas
612
Feature request: Option to lengthen worksheet names
Can an option be added to control the max length of the worksheet name? Right now it's hard-coded to a limit of 31 characters (the Excel display limit), but I think the file format limit is 255 characters, and LibreOffice displays longer names.
closed
2019-03-26T16:16:51Z
2019-03-26T16:37:31Z
https://github.com/jmcnamara/XlsxWriter/issues/612
[ "question" ]
kk49
1
ydataai/ydata-profiling
pandas
1,490
Bug Report - comparison of two time series reports
### Current Behaviour Hi, I'm trying to compare two multivariate/univariate time series, with version 4.6.0 and python 3.10.13. when I run `compare` I get an error ``` UnionMatchError: can not match type "list" to any type of "time_index_analysis.period" union: typing.Union[float, typing.List[float]] ``` Full error trace: ``` --------------------------------------------------------------------------- UnionMatchError Traceback (most recent call last) test_notebook.ipynb Cell 19 line 4 30 latest_training_report = ProfileReport( 31 prod_data, 32 title="Latest", 33 tsmode=True, 34 ) 35 production_training_report = ProfileReport( 36 train_data, 37 title="Production", 38 tsmode=True, 39 ) ---> 41 comparison_report = compare([latest_training_report, production_training_report]) File ~/path_to_dir/.venv/lib/python3.10/site-packages/ydata_profiling/compare_reports.py:363, in compare(reports, config, compute) 361 res["time_index_analysis"] = None 362 profile = ProfileReport(None, config=_config) --> 363 profile._description_set = from_dict(data_class=BaseDescription, data=res) 364 return profile File ~/path_to_dir/.venv/lib/python3.10/site-packages/dacite/core.py:64, in from_dict(data_class, data, config) 62 try: 63 field_data = data[field.name] ---> 64 value = _build_value(type_=field_type, data=field_data, config=config) 65 except DaciteFieldError as error: 66 error.update_path(field.name) File ~/path_to_dir/.venv/lib/python3.10/site-packages/dacite/core.py:95, in _build_value(type_, data, config) 93 return data 94 if is_union(type_): ---> 95 data = _build_value_for_union(union=type_, data=data, config=config) 96 elif is_generic_collection(type_): 97 data = _build_value_for_collection(collection=type_, data=data, config=config) File ~/path_to_dir/.venv/lib/python3.10/site-packages/dacite/core.py:113, in _build_value_for_union(union, data, config) 111 types = extract_generic(union) 112 if is_optional(union) and len(types) == 2: --> 113 return _build_value(type_=types[0], data=data, config=config) 114 union_matches = {} 115 for inner_type in types: File ~/path_to_dir/.venv/lib/python3.10/site-packages/dacite/core.py:99, in _build_value(type_, data, config) 97 data = _build_value_for_collection(collection=type_, data=data, config=config) 98 elif cache(is_dataclass)(type_) and isinstance(data, Mapping): ---> 99 data = from_dict(data_class=type_, data=data, config=config) 100 for cast_type in config.cast: 101 if is_subclass(type_, cast_type): File ~/path_to_dir/.venv/lib/python3.10/site-packages/dacite/core.py:64, in from_dict(data_class, data, config) 62 try: 63 field_data = data[field.name] ---> 64 value = _build_value(type_=field_type, data=field_data, config=config) 65 except DaciteFieldError as error: 66 error.update_path(field.name) File ~/path_to_dir/.venv/lib/python3.10/site-packages/dacite/core.py:95, in _build_value(type_, data, config) 93 return data 94 if is_union(type_): ---> 95 data = _build_value_for_union(union=type_, data=data, config=config) 96 elif is_generic_collection(type_): 97 data = _build_value_for_collection(collection=type_, data=data, config=config) File ~/path_to_dir/.venv/lib/python3.10/site-packages/dacite/core.py:135, in _build_value_for_union(union, data, config) 133 if not config.check_types: 134 return data --> 135 raise UnionMatchError(field_type=union, value=data) UnionMatchError: can not match type "list" to any type of "time_index_analysis.period" union: typing.Union[float, typing.List[float]] ``` ### Expected Behaviour Comparison works as expected and I get a 
report ### Data Description The data supplied here is two pandas series, with 5 rows each. these are samples from my datasets. ### Code that reproduces the bug Here's a code including data samples that reproduce the error. ```Python from ydata_profiling import ProfileReport, compare import pandas as pd from pandas import Timestamp pd.options.plotting.backend = "matplotlib" prod_data_dict = { "feature_A": { Timestamp("2023-04-03 00:00:00", freq="H"): 53321.6700520833, Timestamp("2023-04-03 01:00:00", freq="H"): 53552.70312500002, Timestamp("2023-04-03 02:00:00", freq="H"): 48905.89615885409, Timestamp("2023-04-03 03:00:00", freq="H"): 46832.90592447904, Timestamp("2023-04-03 04:00:00", freq="H"): 51819.66223958326, } } train_data_dict = { "feature_A": { Timestamp("2023-04-03 00:00:00", freq="H"): 53321.6700520833, Timestamp("2023-04-03 01:00:00", freq="H"): 53552.70312500002, Timestamp("2023-04-03 02:00:00", freq="H"): 48905.89615885409, Timestamp("2023-04-03 03:00:00", freq="H"): 46832.90592447904, Timestamp("2023-04-03 04:00:00", freq="H"): 51819.66223958326, } } prod_data = pd.DataFrame.from_dict(prod_data.to_dict()) train_data = pd.DataFrame.from_dict(train_data.to_dict()) latest_training_report = ProfileReport( prod_data, title="Latest", tsmode=True, ) production_training_report = ProfileReport( train_data, title="Production", tsmode=True, ) comparison_report = compare([latest_training_report, production_training_report]) ``` When checking the content of `res['time_index_analysis']` from the method `compare ` under `compare_reports.py` I am getting: ``` { "n_series": [0, 0], "length": [5, 5], "start": [Timestamp("2023-04-03 00:00:00"), Timestamp("2023-04-03 00:00:00")], "end": [Timestamp("2023-04-03 04:00:00"), Timestamp("2023-04-03 04:00:00")], "period": [Timedelta("0 days 01:00:00"), Timedelta("0 days 01:00:00")], "frequency": [None, None], } ``` ### pandas-profiling version v4.6.0 ### Dependencies ```Text aiobotocore 2.4.2 Async client for aws services using botocore and aiohttp aiohttp 3.8.6 Async http client/server framework (asyncio) aioitertools 0.11.0 itertools and builtins for AsyncIO and mixed iterables aiosignal 1.3.1 aiosignal: a list of registered asynchronous callbacks analytics-python 1.4.post1 The hassle-free way to integrate analytics into any python application. appnope 0.1.3 Disable App Nap on macOS >= 10.9 apscheduler 3.10.4 In-process task scheduler with Cron-like capabilities asttokens 2.4.0 Annotate AST trees with source code positions async-timeout 4.0.3 Timeout context manager for asyncio programs attrs 23.1.0 Classes Without Boilerplate backcall 0.2.0 Specifications for callback functions passed in to an API backoff 1.10.0 Function decoration for backoff and retry beautifulsoup4 4.12.2 Screen-scraping library bleach 6.1.0 An easy safelist-based HTML-sanitizing tool. boto3 1.24.59 The AWS SDK for Python botocore 1.27.59 Low-level, data-driven core of boto 3. cachetools 5.3.2 Extensible memoizing collections and decorators category-encoders 2.6.2 A collection of sklearn transformers to encode categorical variables as numeric certifi 2023.7.22 Python package for providing Mozilla's CA Bundle. charset-normalizer 3.3.1 The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. click 8.1.7 Composable command line interface toolkit cloudpathlib 0.16.0 pathlib-style classes for cloud storage services. comm 0.1.4 Jupyter Python Comm implementation, for usage in ipykernel, xeus-python etc. 
commonmark 0.9.1 Python parser for the CommonMark Markdown spec contourpy 1.1.1 Python library for calculating contours of 2D quadrilateral grids cramjam 2.7.0 Thin Python bindings to de/compression algorithms in Rust cycler 0.12.1 Composable style cycles dacite 1.8.1 Simple creation of data classes from dictionaries. debugpy 1.8.0 An implementation of the Debug Adapter Protocol for Python decorator 5.1.1 Decorators for Humans defusedxml 0.7.1 XML bomb protection for Python stdlib modules exceptiongroup 1.1.3 Backport of PEP 654 (exception groups) executing 2.0.0 Get the currently executing AST node of a frame, and other information fastjsonschema 2.18.1 Fastest Python implementation of JSON schema fastparquet 2023.8.0 Python support for Parquet file format flaml 1.2.4 A fast library for automated machine learning and tuning fonttools 4.43.1 Tools to manipulate font files frozenlist 1.4.0 A list-like structure which implements collections.abc.MutableSequence fsql 0.23.0 Metastore-like capabilities for various filesystems fsspec 2022.11.0 File-system specification gcsfs 2022.11.0 Convenient Filesystem interface over GCS google-api-core 2.12.0 Google API client core library google-auth 2.23.3 Google Authentication Library google-auth-oauthlib 1.1.0 Google Authentication Library google-cloud-core 2.3.3 Google Cloud API client core library google-cloud-storage 2.12.0 Google Cloud Storage API client library google-crc32c 1.5.0 A python wrapper of the C library 'Google CRC32C' google-resumable-media 2.6.0 Utilities for Google Media Downloads and Resumable Uploads googleapis-common-protos 1.61.0 Common protobufs used in Google APIs htmlmin 0.1.12 An HTML Minifier idna 3.4 Internationalized Domain Names in Applications (IDNA) imagehash 4.3.1 Image Hashing library ipykernel 6.26.0 IPython Kernel for Jupyter ipython 8.16.1 IPython: Productive Interactive Computing ipywidgets 8.1.1 Jupyter interactive widgets jedi 0.19.1 An autocompletion tool for Python that can be used for text editors. jinja2 3.0.3 A very fast and expressive template engine. jmespath 1.0.1 JSON Matching Expressions joblib 1.3.2 Lightweight pipelining with Python functions jsonschema 4.19.1 An implementation of JSON Schema validation for Python jsonschema-specifications 2023.7.1 The JSON Schema meta-schemas and vocabularies, exposed as a Registry jupyter-client 8.5.0 Jupyter protocol implementation and client libraries jupyter-core 5.4.0 Jupyter core package. A base package on which Jupyter projects rely. jupyterlab-pygments 0.2.2 Pygments theme using JupyterLab CSS variables jupyterlab-widgets 3.0.9 Jupyter interactive widgets for JupyterLab kaleido 0.2.1 Static image export for web-based visualization libraries with zero dependencies kiwisolver 1.4.5 A fast implementation of the Cassowary constraint solver lightgbm 3.3.5 LightGBM Python Package llvmlite 0.41.1 lightweight wrapper around basic LLVM functionality markupsafe 2.1.3 Safely add untrusted strings to HTML/XML markup. matplotlib 3.7.3 Python plotting package matplotlib-inline 0.1.6 Inline Matplotlib backend for Jupyter mistune 3.0.2 A sane and fast Markdown parser with useful plugins and renderers monotonic 1.6 An implementation of time.monotonic() for Python 2 & < 3.3 multidict 6.0.4 multidict implementation multimethod 1.10 Multiple argument dispatching. nannyml 0.9.1 NannyML, Your library for monitoring model performance. nbclient 0.8.0 A client library for executing notebooks. Formerly nbconvert's ExecutePreprocessor. 
nbconvert 7.9.2 Converting Jupyter Notebooks nbformat 5.9.2 The Jupyter Notebook format nest-asyncio 1.5.8 Patch asyncio to allow nested event loops networkx 3.2 Python package for creating and manipulating graphs and networks numba 0.58.1 compiling Python code using LLVM numpy 1.24.4 Fundamental package for array computing in Python oauthlib 3.2.2 A generic, spec-compliant, thorough implementation of the OAuth request-signing logic packaging 23.2 Core utilities for Python packages pandas 1.5.3 Powerful data structures for data analysis, time series, and statistics pandocfilters 1.5.0 Utilities for writing pandoc filters in python parso 0.8.3 A Python Parser patsy 0.5.3 A Python package for describing statistical models and for building design matrices. pexpect 4.8.0 Pexpect allows easy control of interactive console applications. phik 0.12.3 Phi_K correlation analyzer library pickleshare 0.7.5 Tiny 'shelve'-like database with concurrency support pillow 10.1.0 Python Imaging Library (Fork) platformdirs 3.11.0 A small Python package for determining appropriate platform-specific dirs, e.g. a "user data dir". plotly 5.17.0 An open-source, interactive data visualization library for Python prompt-toolkit 3.0.39 Library for building powerful interactive command lines in Python protobuf 4.24.4 psutil 5.9.6 Cross-platform lib for process and system monitoring in Python. psycopg2-binary 2.9.9 psycopg2 - Python-PostgreSQL Database Adapter ptyprocess 0.7.0 Run a subprocess in a pseudo terminal pure-eval 0.2.2 Safely evaluate AST nodes without side effects pyarrow 12.0.1 Python library for Apache Arrow pyasn1 0.5.0 Pure-Python implementation of ASN.1 types and DER/BER/CER codecs (X.208) pyasn1-modules 0.3.0 A collection of ASN.1-based protocols modules pydantic 1.10.13 Data validation and settings management using python type hints pyfiglet 0.8.post1 Pure-python FIGlet implementation pygments 2.16.1 Pygments is a syntax highlighting package written in Python. pyparsing 3.1.1 pyparsing module - Classes and methods to define and execute parsing grammars python-dateutil 2.8.2 Extensions to the standard Python datetime module python-dotenv 0.21.1 Read key-value pairs from a .env file and set them as environment variables pytz 2023.3.post1 World timezone definitions, modern and historical pywavelets 1.4.1 PyWavelets, wavelet transform module pyyaml 6.0.1 YAML parser and emitter for Python pyzmq 25.1.1 Python bindings for 0MQ referencing 0.30.2 JSON Referencing + Python requests 2.31.0 Python HTTP for Humans. requests-oauthlib 1.3.1 OAuthlib authentication support for Requests. rich 12.6.0 Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal rpds-py 0.10.6 Python bindings to Rust's persistent data structures (rpds) rsa 4.9 Pure-Python RSA implementation s3fs 2022.11.0 Convenient Filesystem interface over S3 s3transfer 0.6.2 An Amazon S3 Transfer Manager scikit-learn 1.3.2 A set of python modules for machine learning and data mining scipy 1.11.3 Fundamental algorithms for scientific computing in Python seaborn 0.11.2 seaborn: statistical data visualization setuptools 68.2.2 Easily download, build, install, upgrade, and uninstall Python packages setuptools-scm 8.0.4 the blessed package to manage your versions by scm tags six 1.16.0 Python 2 and 3 compatibility utilities soupsieve 2.5 A modern CSS selector implementation for Beautiful Soup. 
sqlalchemy 1.4.41 Database Abstraction Library sqlalchemy2-stubs 0.0.2a35 Typing Stubs for SQLAlchemy 1.4 sqlmodel 0.0.8 SQLModel, SQL databases in Python, designed for simplicity, compatibility, and robustness. stack-data 0.6.3 Extract data from python stack frames and tracebacks for informative displays statsmodels 0.14.0 Statistical computations and models for Python tangled-up-in-unicode 0.2.0 Access to the Unicode Character Database (UCD) tenacity 8.2.3 Retry code until it succeeds threadpoolctl 3.2.0 threadpoolctl tinycss2 1.2.1 A tiny CSS parser tomli 2.0.1 A lil' TOML parser tornado 6.3.3 Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. tqdm 4.66.1 Fast, Extensible Progress Meter traitlets 5.12.0 Traitlets Python configuration system typeguard 4.1.5 Run-time type checker for Python types-python-dateutil 2.8.19.14 Typing stubs for python-dateutil types-pyyaml 6.0.12.12 Typing stubs for PyYAML typing-extensions 4.8.0 Backported and Experimental Type Hints for Python 3.8+ tzlocal 5.2 tzinfo object for the local timezone urllib3 1.26.18 HTTP library with thread-safe connection pooling, file post, and more. visions 0.7.5 Visions wcwidth 0.2.8 Measures the displayed width of unicode strings in a terminal webencodings 0.5.1 Character encoding aliases for legacy web content wheel 0.41.2 A built-package format for Python widgetsnbextension 4.0.9 Jupyter interactive widgets for Jupyter Notebook wordcloud 1.9.2 A little word cloud generator wrapt 1.15.0 Module for decorators, wrappers and monkey patching. xgboost 2.0.1 XGBoost Python Package yarl 1.9.2 Yet another URL library ydata-profiling 4.6.0 Generate profile report for pandas DataFrame ``` ### OS Mac os 14.1 ### Checklist - [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues) - [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report. - [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
closed
2023-10-30T10:09:15Z
2024-01-08T12:18:13Z
https://github.com/ydataai/ydata-profiling/issues/1490
[ "bug 🐛", "needs-triage" ]
dean-sh
0
onnx/onnx
scikit-learn
6,069
[Spec] DepthToSpace `mode` attribute is counter-intuitive
I want to discuss [our spec of the DepthToSpace operator](https://onnx.ai/onnx/operators/onnx__DepthToSpace.html#depthtospace-13). Please help correct me if there is any misunderstanding. Thanks! (I don't want to mark this as a _bug_ as it is not.)

For this operator, the fact that the `mode` attribute defaults to `DCR` is counter-intuitive. IMO, there are 2 problems.

## The two problems

### 1. We should use `CRD` as the default

The DepthToSpace default mode should be compatible with the framework that defines it. In practice, if a framework's default tensor layout is `NCHW`, the DepthToSpace operator should be assumed to rearrange the `C` dimension into the `HW` dimensions in an `NCHW` fashion (i.e. `CRD`). [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/nn/depth_to_space) is an example of defaulting to the `NHWC` tensor layout. So, for ONNX, we should use `CRD` as the default.

```
tf.nn.depth_to_space(
    input, block_size, data_format='NHWC', name=None
)
```

### 2. Our `mode` spec is confusing

In our spec:

> By default, mode = DCR. In the DCR mode, elements along the depth dimension from the input tensor are rearranged in the following order: depth, column, and then row.

It is not wrong, but DepthToSpace and SpaceToDepth manipulate the `C` dimension together with the `HW` dimensions. Using "depth, column, and then row" is confusing as it doesn't make sense for a 3D image. We can argue that 3D is unsupported, but that doesn't solve the problem. Using something like `NCHW` and `NHWC` is easier to understand.

## What to do?

I suggest:

1. Rename the attribute values to `NCHW` and `NHWC`. We can provide compatibility implicitly, and deprecate/remove the legacy ones in the long term.
2. Make `NCHW` the default.

I am not sure if we have done this before; it's kind of tricky as it would impact the default API behavior...
open
2024-04-08T03:10:35Z
2025-03-10T06:09:46Z
https://github.com/onnx/onnx/issues/6069
[ "question", "topic: operator", "topic: documentation", "module: spec", "topic: spec clarification", "contributions welcome" ]
zhenhuaw-me
1
tatsu-lab/stanford_alpaca
deep-learning
107
How to run the finetuning code using the Slurm launcher on a cluster?
open
2023-03-20T13:02:59Z
2023-03-20T13:02:59Z
https://github.com/tatsu-lab/stanford_alpaca/issues/107
[]
tongwwt
0
holoviz/panel
plotly
7,248
Create Templates composed solely of Panel components
@philippjfr mentioned that the current generation of Panel templates are backed by Jinja2 templates initially because of a limitation in Bokeh where there can't be too many nested shadow roots(?), or else it would lag. Now, I think this limitation is fixed(?). If so, we can steer away from Jinja2 templates and re-create them using native Panel components, with the benefit that it's usable in notebooks and there's no need to learn a separate template concept. Also, we can experiment with much more variety of templates easily. The few things that I can think of that are not available are a collapsible sidebar component and ability to swap between light/dark theme. cc @MarcSkovMadsen
open
2024-09-10T09:21:04Z
2024-10-04T10:59:19Z
https://github.com/holoviz/panel/issues/7248
[ "type: feature" ]
ahuang11
10
jonaswinkler/paperless-ng
django
236
Non-existing pre and post hooks stop consumption
Hi :wave:, while I reconfigured my paperless-ng instance, I wrongly linked the pre and post hooks to a host path instead of a mapped docker container path. Today, I tried to import new documents and it occurred to me that those documents were not imported. The web UI logs as follows: ```bash 12/31/20, 3:51 PM INFO Consuming 20190122Z - corr1 - title1 - tax,work.pdf 12/31/20, 3:47 PM INFO Consuming 20180220Z - corr2 - title2.pdf ... ``` The docker logs then reveled the actual problem: ```bash 15:58:05 [Q] ERROR Failed [20190122Z - corr1 - title1 - tax,work.pdf] - [Errno 2] No such file or directory: '/mnt/user/appdata/paperless-ng/data/pre': '/mnt/user/appdata/paperless-ng/data/pre' : Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker res = f(*task["args"], **task["kwargs"]) File "/usr/src/paperless/src/documents/tasks.py", line 73, in consume_file override_tag_ids=override_tag_ids) File "/usr/src/paperless/src/documents/consumer.py", line 119, in try_consume_file logging_group=self.logging_group File "/usr/local/lib/python3.7/site-packages/django/dispatch/dispatcher.py", line 179, in send for receiver in self._live_receivers(sender) File "/usr/local/lib/python3.7/site-packages/django/dispatch/dispatcher.py", line 179, in <listcomp> for receiver in self._live_receivers(sender) File "/usr/src/paperless/src/documents/signals/handlers.py", line 155, in run_pre_consume_script Popen((settings.PRE_CONSUME_SCRIPT, filename)).wait() File "/usr/local/lib/python3.7/subprocess.py", line 800, in __init__ restore_signals, start_new_session) File "/usr/local/lib/python3.7/subprocess.py", line 1551, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: '/mnt/user/appdata/paperless-ng/data/pre': '/mnt/user/appdata/paperless-ng/data/pre' ``` After fixing the path it works as expected. Now, I have the following questions: * Is it intended that the web UI logs do not tell that there is a problem with the consumption? * Should the consumption be blocked when those scripts are not existing, but configured?
closed
2020-12-31T15:17:37Z
2022-06-25T13:07:45Z
https://github.com/jonaswinkler/paperless-ng/issues/236
[ "bug", "fixed in next release" ]
Tooa
4
hzwer/ECCV2022-RIFE
computer-vision
339
Bad video output
Hi, I just tried to use RIFE for the first time and the output video is simply broken. I followed the instructions from the README and ran the command `python3 inference_video.py --exp=2 --video=video.mp4`. I'm attaching the [Google Drive link here](https://drive.google.com/drive/folders/1A5eMb9SU51tEp6RmHQLPHEkRB2cCNKZ6?usp=share_link), which contains the original video, the interpolated video, and the console output screenshot. What could be the problem here? Am I simply using the wrong pre-trained model, or is there more to it? Let me know if I can provide any more information on this. Thank you!
open
2023-10-18T16:31:46Z
2023-10-18T16:31:46Z
https://github.com/hzwer/ECCV2022-RIFE/issues/339
[]
matejhacin
0
python-visualization/folium
data-visualization
1,986
Add zoomSnap parameter
I use folium to render maps for automatically generating figures but the lack of zoom granularity makes it difficult to generate nice figures that contain the bounds of the region of interest. Add a parameter to folium.Map() to edit zoomSnap value https://leafletjs.com/reference.html#map-zoomsnap
closed
2024-07-10T10:11:59Z
2024-07-28T10:27:06Z
https://github.com/python-visualization/folium/issues/1986
[]
Chris-airseed
2
2noise/ChatTTS
python
705
How to keep the voice timbre fixed?
Since my corpus is rather long, it has to be fed in over multiple calls, and when converting to speech the different segments come out with different timbre values, so the voice is inconsistent.
closed
2024-08-20T09:50:48Z
2024-12-16T04:01:38Z
https://github.com/2noise/ChatTTS/issues/705
[ "documentation", "stale" ]
Shengrun2020
6
coqui-ai/TTS
pytorch
4,006
[Bug] Process *Killed* when executing parallel tts commands on different containers (Docker version)
### Describe the bug
Trying to experiment a little bit with running multiple TTS instances at the same time using the docker image, I created 5 different containers and tried to execute a TTS command on each of those running containers, but only 2 out of 5 produce an actual output, while the others simply log "Killed" and terminate. Is there a limitation on a shared resource between those 5 separate containers? I thought they were running in separate environments.

### To Reproduce
1. Download the docker image.
2. Create 5 different containers with the docker image.
3. Open each container in a different cmd window.
4. On each container, paste and run this command simultaneously: `tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 --speaker_idx "Daisy Studious" --language_idx en --text "Hello world." --out_path "out.pcm" --use_cuda true`.

### Expected behavior
All 5 containers produce 5 outputs separately. Instead, 3 get killed and only 2 produce the outputs.

### Logs
```shell
> tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
> Using model: xtts
Killed
```

### Environment
```shell
{
    "CUDA": {
        "GPU": [
            "NVIDIA GeForce RTX 3060"
        ],
        "available": true,
        "version": "11.8"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.1.1+cu118",
        "TTS": "0.22.0",
        "numpy": "1.22.0"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            ""
        ],
        "processor": "x86_64",
        "python": "3.10.12",
        "version": "#1 SMP Fri Mar 29 23:14:13 UTC 2024"
    }
}
```

### Additional context
_No response_
closed
2024-10-04T16:25:01Z
2024-12-28T11:58:13Z
https://github.com/coqui-ai/TTS/issues/4006
[ "bug", "wontfix" ]
khaldi-yass
2
jupyter/nbgrader
jupyter
1,306
missing
<!-- Thanks for helping to improve nbgrader! If you are submitting a bug report or looking for support, please use the below template so we can efficiently solve the problem. If you are requesting a new feature, feel free to remove irrelevant pieces of the issue template. --> ### Operating system Ubuntu 18.04 ### `nbgrader --version` 0.6.1 ### `jupyterhub --version` (if used with JupyterHub) 1.0.0b1 ### `jupyter notebook --version` 5.7.8 ### Expected behavior I expected to generate feedback. ### Actual behavior Instead I received the error messages below: [ERROR] There was an error processing assignment: /home/jovyan/bootcamp/autograded/test/ex1 [ERROR] Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/nbgrader/converters/base.py", line 336, in convert_notebooks self.convert_single_notebook(notebook_filename) File "/opt/conda/lib/python3.7/site-packages/nbgrader/converters/base.py", line 292, in convert_single_notebook output, resources = self.exporter.from_filename(notebook_filename, resources=resources) File "/opt/conda/lib/python3.7/site-packages/nbconvert/exporters/exporter.py", line 179, in from_filename return self.from_file(f, resources=resources, **kw) File "/opt/conda/lib/python3.7/site-packages/nbconvert/exporters/exporter.py", line 197, in from_file return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw) File "/opt/conda/lib/python3.7/site-packages/nbconvert/exporters/html.py", line 100, in from_notebook_node return super(HTMLExporter, self).from_notebook_node(nb, resources, **kw) File "/opt/conda/lib/python3.7/site-packages/nbconvert/exporters/templateexporter.py", line 357, in from_notebook_node output = self.template.render(nb=nb_copy, resources=resources) File "/opt/conda/lib/python3.7/site-packages/jinja2/asyncsupport.py", line 76, in render return original_render(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/jinja2/environment.py", line 1008, in render return self.environment.handle_exception(exc_info, True) File "/opt/conda/lib/python3.7/site-packages/jinja2/environment.py", line 780, in handle_exception reraise(exc_type, exc_value, tb) File "/opt/conda/lib/python3.7/site-packages/jinja2/_compat.py", line 37, in reraise raise value.with_traceback(tb) File "/opt/conda/lib/python3.7/site-packages/nbgrader/server_extensions/formgrader/templates/feedback.tpl", line 1, in top-level template code {%- extends 'basic.tpl' -%} jinja2.exceptions.TemplateNotFound: basic.tpl I tried to find such file, but I only found the base.tpl one, which I guess is not the right one to import. How could I solve this?
closed
2020-01-22T22:54:17Z
2021-03-25T22:09:35Z
https://github.com/jupyter/nbgrader/issues/1306
[ "bug", "duplicate" ]
alexlopespereira
8
huggingface/datasets
numpy
6,950
`Dataset.with_format` behaves inconsistently with documentation
### Describe the bug

The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.

https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays

> If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists.
> In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor.
> A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor.

But I get a single tensor by default, which is inconsistent with the description. Actually the current behavior seems more reasonable to me. Therefore, the document needs to be modified.

### Steps to reproduce the bug

```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': tensor([[1, 2],
        [3, 4]])}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.Tensor: shape=(2, 2), dtype=int64, numpy=
array([[1, 2],
       [3, 4]])>}
```

### Expected behavior

```python
>>> from datasets import Dataset
>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]
>>> ds = Dataset.from_dict({"data": data})
>>> ds = ds.with_format("torch")
>>> ds[0]
{'data': [tensor([1, 2]), tensor([3, 4])]}
>>> ds = ds.with_format("tf")
>>> ds[0]
{'data': <tf.RaggedTensor [[1, 2], [3, 4]]>}
```

### Environment info

datasets==2.19.1
torch==2.1.0
tensorflow==2.13.1
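For comparison, a minimal sketch (not from the original report) of the case where the documented behaviour does still apply: when the nested lists are ragged, they cannot be stacked into one fixed-shape tensor, so lists of tensors come back instead. The expected output in the comment is an assumption based on datasets 2.19.x behaviour.

```python
from datasets import Dataset

# Ragged data: inner lists have different lengths, so no single tensor can hold a row.
ragged = [[[1, 2], [3, 4, 5]], [[6], [7, 8]]]
ds = Dataset.from_dict({"data": ragged}).with_format("torch")

# Expected (assumption): a list of 1-D tensors rather than one 2-D tensor,
# e.g. {'data': [tensor([1, 2]), tensor([3, 4, 5])]}
print(ds[0])
```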
closed
2024-06-04T09:18:32Z
2024-06-25T08:05:49Z
https://github.com/huggingface/datasets/issues/6950
[ "documentation" ]
iansheng
2
autogluon/autogluon
data-science
4,195
[timeseries] When saving predictor to a folder with an existing predictor, delete the old predictor
## Description

- When the user sets `TimeSeriesPredictor(path="folder/with/existing/predictor")`, a lot of weird undocumented behaviors may occur (e.g., #4150). We currently log a warning in this case, but it's often ignored by the users, leading to confusion. A cleaner option would be to delete all the files related to the old predictor.
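A minimal sketch of what the proposed cleanup could look like; this is illustrative only, and the function name and layout are assumptions rather than AutoGluon's actual implementation.

```python
import logging
import shutil
from pathlib import Path

logger = logging.getLogger(__name__)

def prepare_predictor_path(predictor_path: str) -> Path:
    """Delete any previously saved predictor at `predictor_path` before reusing it."""
    path = Path(predictor_path)
    if path.is_dir() and any(path.iterdir()):
        # Instead of only warning, remove the stale predictor files entirely.
        logger.warning("Found an existing predictor in %s; deleting it.", path)
        shutil.rmtree(path)
    path.mkdir(parents=True, exist_ok=True)
    return path
```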
closed
2024-05-14T07:50:06Z
2024-06-27T09:24:45Z
https://github.com/autogluon/autogluon/issues/4195
[ "enhancement", "module: timeseries" ]
shchur
2
bauerji/flask-pydantic
pydantic
45
Is there any way to make custom response?
Hello! Thanks for the awesome package. I want to make a custom response.

AS-IS:

```
{
    "validation_error": {
        "query_params": [
            {
                "loc": ["age"],
                "msg": "value is not a valid integer",
                "type": "type_error.integer"
            }
        ]
    }
}
```

TO-BE (or some other format, maybe):

```
{"error" : "validation_error", "desc" : "some_my_custome_message"}
```

Thanks for reading this.
closed
2021-12-11T14:42:35Z
2022-09-25T08:04:27Z
https://github.com/bauerji/flask-pydantic/issues/45
[]
matthew-cupist
2
BeastByteAI/scikit-llm
scikit-learn
121
Add Structured Output support
Structured outputs allow users to define an output schema using pydantic. OpenAI and most others support this now (see e.g. [OpenAI](https://openai.com/index/introducing-structured-outputs-in-the-api/) and their [docs](https://platform.openai.com/docs/guides/structured-outputs)).

~~~python
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI()

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,
)

event = completion.choices[0].message.parsed
~~~

In my own tests using scikit-ollama I found the models to adhere much better to the output schema. It barely ever had to fall back to the default label, which made it overall more accurate. It was also usually faster.
open
2025-01-23T15:03:11Z
2025-01-23T15:03:11Z
https://github.com/BeastByteAI/scikit-llm/issues/121
[]
AndreasKarasenko
0
robotframework/robotframework
automation
4,912
Parsing model: Move `type` and `tokens` from `_fields` to `_attributes`
Our parsing model is based on Python's [ast](https://docs.python.org/3/library/ast.html). The `Statement` base class currently has `type` and `token` listed in its `_fields`. According to the [documentation](https://docs.python.org/3/library/ast.html#ast.AST._fields), `_fields` should contain names of the child nodes, and neither `type` nor `token` is such a node. A better place to include them is `_attributes` (which doesn't seem to be documented), which we already use for things like `lineno` and `source`.

In addition to not using `_fields` for semantically wrong purposes, this change seems to increase the performance of visiting the model. The [generic_visit](https://docs.python.org/3/library/ast.html#ast.NodeVisitor.generic_visit) goes through child nodes listed in `_fields` and thus traverses all tokens unnecessarily. This most likely won't have a big impact in data parsing with Robot itself, but external tools using the parser model can benefit more. See the PR #4911 for more information and other related performance enhancements. This change was also initially part of that PR, but because there's a small behavior change, a separate issue was created for it.

This change affects logging/debugging a bit. When using [ast.dump](https://docs.python.org/3/library/ast.html#ast.dump) with default arguments, it shows all fields but doesn't show attributes. Thus the type and the tokens are nowadays shown with statements, but they won't be included anymore in the future. It is, however, possible to use `include_attributes=True` to get attributes listed. Not showing tokens by default is actually mostly a good thing with bigger and more complicated models.

This change also affects anyone who inspects `_fields` or `_attributes`. I don't see why anyone would do that, but it's anyway a good idea to give this issue the `backwards incompatible` label and mention the change in the release notes.
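For readers less familiar with these `ast` conventions, a small illustration using Python's own `ast` module (not Robot Framework's model) of how `_fields` and `_attributes` behave with `ast.dump` and `generic_visit`:

```python
import ast

tree = ast.parse("x = 1 + 2")
assign = tree.body[0]

# _fields lists child nodes; _attributes lists metadata such as positions.
print(ast.Assign._fields)      # ('targets', 'value', 'type_comment')
print(ast.Assign._attributes)  # ('lineno', 'col_offset', 'end_lineno', 'end_col_offset')

# dump() shows fields by default; attributes only when explicitly requested.
print(ast.dump(assign))
print(ast.dump(assign, include_attributes=True))

# generic_visit() only walks nodes reachable through _fields, which is why
# keeping non-node data (like tokens) out of _fields avoids visiting it.
class Counter(ast.NodeVisitor):
    def __init__(self):
        self.count = 0
    def generic_visit(self, node):
        self.count += 1
        super().generic_visit(node)

counter = Counter()
counter.visit(tree)
print(counter.count)
```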
closed
2023-10-24T16:11:04Z
2023-11-07T09:15:03Z
https://github.com/robotframework/robotframework/issues/4912
[ "enhancement", "priority: medium", "backwards incompatible", "alpha 1", "effort: small" ]
pekkaklarck
0
httpie/cli
python
1,428
Support different keywords for Bearer authentication
## Checklist

- [x] I've searched for similar feature requests.

---

## Enhancement request

Add support for keywords other than "Bearer" for Bearer authentication.

---

## Problem it solves

The default keyword in Falcon and Django REST Framework is "Token" instead of "Bearer". In HTTPie the only possible keyword is Bearer. Because of this, it is not possible to use HTTPie with projects that do not explicitly use the keyword Bearer.
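For context, a small illustration of the two header shapes involved, written with `requests` rather than HTTPie internals; the endpoint and token are placeholders.

```python
import requests

token = "abc123"  # placeholder value

# What HTTPie's bearer auth produces today:
requests.get("https://api.example.com/items",
             headers={"Authorization": f"Bearer {token}"})

# What Falcon / Django REST Framework's TokenAuthentication expects by default,
# which HTTPie can currently only send as a raw header:
requests.get("https://api.example.com/items",
             headers={"Authorization": f"Token {token}"})
```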
open
2022-08-19T20:09:07Z
2022-11-30T21:53:57Z
https://github.com/httpie/cli/issues/1428
[ "enhancement", "new" ]
eraxeg
1
kaliiiiiiiiii/Selenium-Driverless
web-scraping
289
[BUG] does not collect Request after page load + does not always find the element, although it is loaded
Two problems:

1) It does not collect the request after pressing the button, although the request happens and is displayed in the network tab.
2) The button is only found once in a while, although it is always loaded. After an error I check in the code that the element paths coincide.

Requests after the button:
![image](https://github.com/user-attachments/assets/063376c9-fb79-4110-84bb-00029d14daaf)

```
import asyncio
from selenium_driverless import webdriver
#from module_driver import browser_executable_path
from selenium_driverless.scripts.network_interceptor import NetworkInterceptor


async def main():
    opt = webdriver.ChromeOptions()
    #opt._binary_location = browser_executable_path
    driver = await webdriver.Chrome(options=opt)
    await driver.maximize_window()
    await driver.get("https://www.deepl.com/ru/translator", wait_load=True)

    async with NetworkInterceptor(driver) as network:
        button = await driver.find_element("xpath", "/html/body/div[1]/div[1]/div/div[3]/div[2]/div[2]/div[2]/div[1]/main/div[2]/nav/div/div[1]/div/div[1]/a[2]/div/div", timeout=10)
        await button.click()
        async for request in network:
            print(request)

if __name__ == '__main__':
    asyncio.run(main())
```
closed
2024-11-30T20:29:22Z
2024-12-01T10:13:49Z
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/289
[ "invalid" ]
Toxenskiy
3
ccxt/ccxt
api
25,371
vertex, bitopro, watch_order_book, error
### Operating System debian 12 ### Programming Languages _No response_ ### CCXT Version 4.4.62 ### Description vertex ``` watch_order_book(symbol="BTC/USDC:USDC") ccxt.base.errors.NetworkError: 403, message='Invalid response status', url='wss://gateway.prod.vertexprotocol.com/v1/subscribe' ``` bitopro ``` watch_order_book(symbol="BTC/USDT", limit=10) TypeError: can only concatenate str (not "int") to str ``` ``` watch_order_book(symbol="BTC/USDT", limit="10") ccxt.base.errors.ExchangeError: bitopro watchOrderBook limit argument must be None, 5, 10, 20, 50, 100, 500 or 1000 ``` ### Code ```    ```
closed
2025-02-27T11:28:34Z
2025-03-02T13:28:40Z
https://github.com/ccxt/ccxt/issues/25371
[ "bug" ]
williamyizhu
3
ultralytics/yolov5
deep-learning
13,066
Labelling Objects Occluded objects in Extreme Environment
### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.

### Question

I am working on detection of objects with similar colors that occlude each other due to the nature of the manufacturing process. I am trying to capture multiple views of the occluded object from 3 directions to alleviate this problem. I am not sure about this approach; could you suggest an alternative solution?

### Additional

_No response_
closed
2024-06-03T15:46:46Z
2024-10-20T19:47:14Z
https://github.com/ultralytics/yolov5/issues/13066
[ "question", "Stale" ]
Avv22
5
TencentARC/GFPGAN
pytorch
518
The fire of Falgun has kindled in young hearts; today spring is adorned in the colours of the palash flower.
open
2024-02-16T17:06:05Z
2024-02-16T19:17:52Z
https://github.com/TencentARC/GFPGAN/issues/518
[]
lavlu2004
2
deepfakes/faceswap
machine-learning
1,002
Convert Images has no face
Hello, author. I have a problem when I convert my images: the output image has no face. Can you help me?

The output picture looks like this:
![0](https://user-images.githubusercontent.com/27938135/77992017-cdc7e500-7357-11ea-877b-c46c3df88b20.png)

The command I run is:
`python faceswap.py convert -i=./data/src -o=./data/convert_out -m=models`

and the models folder looks like this:
![1](https://user-images.githubusercontent.com/27938135/77992515-c94ffc00-7358-11ea-9d9b-79be934a6f31.png)

Am I using the wrong command?
closed
2020-03-31T06:07:11Z
2020-03-31T09:01:59Z
https://github.com/deepfakes/faceswap/issues/1002
[]
Tian14267
4
gradio-app/gradio
data-science
10,710
API requests error
### Describe the bug Hello! I am using SDK Gradio on on my Public space on Huggingface. I am deployed AI chat bot. WEB works OK. All requests and responses via terminal Bash - OK. But when I try to connect my telegram-bot to Gradio API - then i have error in log: server rejected WebSocket connection: HTTP 403. Could you please advice me what is wrong with requests via telegram? Thanks! On Hugging face I am using: gradio_client==0.6.1 huggingface-hub>=0.19.3,<1.0 gradio==4.26.0 All API requests are done according to Gradio: api_name: /chat from gradio_client import Client client = Client("dkmanika/nika_prop") result = client.predict( message="Hello!!", api_name="/chat" ) print(result) ### Have you searched existing issues? 🔎 - [x] I have searched and found no existing issues ### Reproduction ```python import os import asyncio import logging import json import random import psutil from telegram import Update, InlineKeyboardButton, InlineKeyboardMarkup from telegram.ext import ( ApplicationBuilder, CommandHandler, MessageHandler, CallbackQueryHandler, filters, CallbackContext, ) from dotenv import load_dotenv from gradio_client import Client ``` #request to Gradio API, telegram-bot code, part, Python 3.19. async def send_request_to_gradio(query: str, state: str = None) -> str: try: await asyncio.sleep(random.uniform(1.0, 2.0)) client = Client(HF_SPACE_NAME, hf_token=HF_TOKEN) result = client.predict([ query, state ], api_name="/chat") response = result[0] if isinstance(result, list) and result else "Не найдено" return response except Exception as e: logging.error(f"ERROR requests to Gradio: {e}") return "ERROR requests to Gradio. Try again." ### Screenshot _No response_ ### Logs ```shell ===== Application Startup at 2025-02-28 12:28:46 ===== tokenizer_config.json: 0%| | 0.00/453 [00:00<?, ?B/s] tokenizer_config.json: 100%|██████████| 453/453 [00:00<00:00, 3.56MB/s] tokenizer.json: 0%| | 0.00/16.3M [00:00<?, ?B/s] tokenizer.json: 100%|██████████| 16.3M/16.3M [00:00<00:00, 148MB/s] added_tokens.json: 0%| | 0.00/23.0 [00:00<?, ?B/s] added_tokens.json: 100%|██████████| 23.0/23.0 [00:00<00:00, 197kB/s] special_tokens_map.json: 0%| | 0.00/173 [00:00<?, ?B/s] special_tokens_map.json: 100%|██████████| 173/173 [00:00<00:00, 1.61MB/s] config.json: 0%| | 0.00/879 [00:00<?, ?B/s] config.json: 100%|██████████| 879/879 [00:00<00:00, 8.16MB/s] model.safetensors: 0%| | 0.00/1.11G [00:00<?, ?B/s] model.safetensors: 3%|▎ | 32.9M/1.11G [00:01<00:40, 26.5MB/s] model.safetensors: 43%|████▎ | 473M/1.11G [00:02<00:02, 244MB/s] model.safetensors: 100%|█████████▉| 1.11G/1.11G [00:02<00:00, 380MB/s] /usr/local/lib/python3.10/site-packages/gradio/components/chatbot.py:228: UserWarning: The 'tuples' format for chatbot messages is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style 'role' and 'content' keys. warnings.warn( * Running on local URL: http://0.0.0.0:7860, with SSR ⚡ To create a public link, set `share=True` in `launch()`. ``` ### System Info ```shell I cant find the reason of error ``` ### Severity Blocking usage of gradio
closed
2025-03-02T18:36:50Z
2025-03-04T09:09:35Z
https://github.com/gradio-app/gradio/issues/10710
[ "bug", "pending clarification" ]
brokerelcom
7
numpy/numpy
numpy
28,434
DOC: PyArray_CHKFLAGS prototype is wrong in the documentation
### Issue with current documentation: PyArray_CHKFLAGS is shown to receive a `PyObject *` argument in the documentation, but the actual function prototype is `PyArray_CHKFLAGS(const PyArrayObject *arr, int flags)` ![Image](https://github.com/user-attachments/assets/c0328370-bdfb-4a08-9e64-e80cae7f5299) This results in failed compilation (please see this example https://github.com/danielhrisca/asammdf/actions/runs/13681474652/job/38254696040) As a result I think all the macros that are built using PyArray_CHKFLAGS have wrong prototypes in the documentation ### Idea or request for content: _No response_
closed
2025-03-05T17:25:53Z
2025-03-07T15:28:38Z
https://github.com/numpy/numpy/issues/28434
[ "04 - Documentation" ]
danielhrisca
2
encode/apistar
api
379
Test client uses ImmutableDict from requests.Session for data
When using the apistar test client, data is passed as an `ImmutableDict`, so if you try to modify the `http.RequestData` in a view, it will fail during tests. Only in a test situation like this would the data be passed as an `ImmutableDict` structure:

```python
def create_account(request: http.Request, auth: Auth, session: Session, data: http.RequestData):
    if not hasattr(data, 'owner'):
        data['owner'] = auth.user['id']  # throws error from trying to modify an ImmutableDict
```

A similar issue is that if you pass `{}` or `None` as the data in the test client, the default parsers don't kick in to return an error response:

```python
res = client.post('/accounts/', headers=auth_header, data={})
assert res.status_code == 400
assert res.json() == {'message': 'Empty JSON'}  # FAILS
```

Instead, the `create_account` function posted above receives an empty `ImmutableDict()` as the `data`. I don't want to add error handling just for the way the test client passes information, since it gets around pieces of the apistar framework that normal requests could never get around.
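A possible workaround sketch, an assumption rather than an apistar-endorsed pattern: copy the incoming data into a plain dict before mutating it, so the view behaves the same with real requests and with the test client's `ImmutableDict`.

```python
from apistar import http

# Auth and Session are the reporter's own components, reused here unchanged.
def create_account(request: http.Request, auth: Auth, session: Session, data: http.RequestData):
    payload = dict(data or {})  # mutable copy; also tolerates an empty body
    if 'owner' not in payload:
        payload['owner'] = auth.user['id']
    return payload
```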
closed
2017-12-26T22:37:43Z
2018-04-17T12:49:56Z
https://github.com/encode/apistar/issues/379
[]
audiolion
0
nikitastupin/clairvoyance
graphql
1
Probe for Input Object type
Now we're probing only for (1) the argument name and (2) its type. However, if an argument is of [INPUT_OBJECT](https://spec.graphql.org/June2018/#sec-Input-Object-Values) type, we can probe for (3) its fields too.
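A rough sketch of how such field probing could work; the endpoint, the mutation, the guessed field name, and the exact wording of the server's suggestions are all assumptions that vary between GraphQL implementations.

```python
import requests

ENDPOINT = "https://example.com/graphql"  # placeholder target

def probe_input_object_field(guess: str) -> list[str]:
    """Send a deliberately wrong field inside an input object and harvest
    the server's validation errors (often containing 'Did you mean ...')."""
    query = f'mutation {{ createUser(input: {{ {guess}: "x" }}) {{ id }} }}'
    resp = requests.post(ENDPOINT, json={"query": query}, timeout=10)
    errors = resp.json().get("errors", [])
    # Typical graphql-js wording: "Field 'nmae' is not defined by type
    # 'CreateUserInput'. Did you mean 'name'?" -- parse whatever appears.
    return [e.get("message", "") for e in errors]

print(probe_input_object_field("nmae"))
```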
open
2020-10-23T14:31:12Z
2021-03-22T10:35:32Z
https://github.com/nikitastupin/clairvoyance/issues/1
[ "enhancement" ]
nikitastupin
2
automl/auto-sklearn
scikit-learn
1,704
calling model.show_models() gives an error
## Describe the bug ## Please describe the bug you're experiencing is precise as possible. ## To Reproduce ## Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error ## Expected behavior ## A clear and concise description of what you expected to happen. ## Actual behavior, stacktrace or logfile ## Please describe the expected behavior here. If there is a stacktrace, please paste it here. If there is no stacktrace printed, please upload the logfile which was stored in the `tmp_folder` ## Environment and installation: ## Please give details about your installation: * OS * Is your installation in a virtual environment or conda environment? * Python version * Auto-sklearn version
open
2023-11-13T07:10:25Z
2023-11-13T07:10:25Z
https://github.com/automl/auto-sklearn/issues/1704
[]
AhangarAamir
0
vitalik/django-ninja
pydantic
381
AttributeError when running export_openapi_schema
Running ``` ./manage.py export_openapi_schema ``` Results in the following error: ``` Traceback (most recent call last): File "./manage.py", line 22, in <module> main() File "./manage.py", line 18, in main execute_from_command_line(sys.argv) File ".../venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line utility.execute() File ".../venv/lib/python3.7/site-packages/django/core/management/__init__.py", line 413, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File ".../venv/lib/python3.7/site-packages/django/core/management/base.py", line 354, in run_from_argv self.execute(*args, **cmd_options) File ".../venv/lib/python3.7/site-packages/django/core/management/base.py", line 398, in execute output = self.handle(*args, **options) File ".../venv/lib/python3.7/site-packages/ninja/management/commands/export_openapi_schema.py", line 45, in handle api = self._get_api_instance(options["api"]) File ".../venv/lib/python3.7/site-packages/ninja/management/commands/export_openapi_schema.py", line 17, in _get_api_instance return resolve("/api/").func.keywords["api"] # type: ignore AttributeError: 'function' object has no attribute 'keywords' ``` Debugging with pdb on the corresponding line: ``` -> return resolve("/api/").func.keywords["api"] # type: ignore (Pdb) m = resolve("/api/").func (Pdb) m <bound method PathView._sync_view of <ninja.operation.PathView object at 0x7f6b81f7a5c0>> (Pdb) dir(m) ['__call__', '__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__func__', '__ge__', '__get__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__self__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'csrf_exempt'] ``` So it seems the original intention of this line was to determine the NinjaAPI instance. Yet, I fail to understand how a `keywords` attribute could be present on a bound method object. Wouldn't it be better to get the API instance like this? ``` return resolve("/api/").func.__self__.api ``` **Versions:** - Python version: 3.7.3 - Django version: 3.2.12 - Django-Ninja version: 0.17.0 - Related work: https://github.com/vitalik/django-ninja/pull/288 <details> <summary>requirements.txt</summary> <pre> asgiref==3.5.0 async-generator==1.10 attrs==21.4.0 beautifulsoup4==4.10.0 bleach==4.1.0 certifi==2021.10.8 cffi==1.15.0 charset-normalizer==2.0.12 colorama==0.4.4 configparser==5.2.0 crayons==0.4.0 cryptography==36.0.1 da-vinci==0.3.0 Django==3.2.12 django-appconf==1.0.5 django-compressor==3.1 django-cors-headers==3.11.0 django-libsass==0.9 django-markdownify==0.9.0 django-ninja==0.17.0 django-thumbnails==0.7.0 ecdsa==0.17.0 h11==0.13.0 idna==3.3 importlib-metadata==4.11.0 libsass==0.21.0 lxml==4.8.0 Markdown==3.3.6 outcome==1.1.0 packaging==21.3 Pillow==9.0.1 pycparser==2.21 pydantic==1.9.0 pymarkdown-video==1.0.0 pyOpenSSL==22.0.0 pyparsing==3.0.7 pytz==2021.3 rcssmin==1.1.0 requests==2.27.1 rjsmin==1.2.0 selenium==4.1.0 shortuuid==1.0.8 six==1.16.0 sniffio==1.2.0 sortedcontainers==2.4.0 soupsieve==2.3.1 sqlparse==0.4.2 trio==0.19.0 trio-websocket==0.9.2 typing_extensions==4.1.0 urllib3==1.26.8 webdriver-manager==3.5.3 webencodings==0.5.1 wsproto==1.0.0 zipp==3.7.0 </pre> </details>
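Based only on the workaround suggested above (untested here), the command's lookup could resolve the bound method's instance instead of expecting a `functools.partial`:

```python
from django.urls import resolve

from ninja import NinjaAPI

def get_api_instance(api_path: str = "/api/") -> NinjaAPI:
    # resolve() returns the PathView method bound to its view object,
    # and that view object keeps a reference back to the NinjaAPI instance.
    view_func = resolve(api_path).func
    return view_func.__self__.api  # rather than view_func.keywords["api"]
```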
closed
2022-03-01T20:57:51Z
2023-02-07T09:27:51Z
https://github.com/vitalik/django-ninja/issues/381
[]
lausek
1
Sanster/IOPaint
pytorch
213
I have a problem running "lama-clean"
I can't run "lama-clean". Could you help me, please? ![cmd_b9TZhKsJgm](https://user-images.githubusercontent.com/125296774/218526700-669d77c5-a33d-4058-95bc-e5f4bcd2b7bb.png) ![cmd_0CqHgYeFUd](https://user-images.githubusercontent.com/125296774/218526410-713ecc77-90c2-4f80-a824-ead803e04f26.png)
open
2023-02-13T17:12:41Z
2023-04-21T02:18:36Z
https://github.com/Sanster/IOPaint/issues/213
[ "bug", "help wanted" ]
Ostruy
12
ymcui/Chinese-LLaMA-Alpaca
nlp
58
Asking about the principle behind the model-merging script merge_llama_with_chinese_lora.py and the source code details of Chinese LLaMA training
Hello, I have read merge_llama_with_chinese_lora but don't quite understand the principle behind translate_state_dict_key. I would also like to ask for the source code of the vocabulary expansion, pre-training, and instruction fine-tuning steps used for Chinese LLaMA, so I can study it. Thanks!
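On the merging question, a generic sketch of what a LoRA merge does mathematically; this is the standard LoRA formulation, not the project's exact code, and the shapes and scaling are assumptions.

```python
import numpy as np

def merge_lora_weight(base_weight: np.ndarray,
                      lora_A: np.ndarray,
                      lora_B: np.ndarray,
                      lora_alpha: float,
                      r: int) -> np.ndarray:
    """Fold a LoRA adapter back into the base weight: W' = W + (alpha / r) * B @ A.

    base_weight: (out_dim, in_dim), lora_A: (r, in_dim), lora_B: (out_dim, r).
    Key-renaming helpers such as translate_state_dict_key only map checkpoint
    key names (e.g. Hugging Face naming vs. original LLaMA naming) so that each
    LoRA pair is matched with the right base tensor before this sum is applied.
    """
    scaling = lora_alpha / r
    return base_weight + scaling * (lora_B @ lora_A)

# Tiny usage example with random shapes.
W = np.zeros((8, 4))
A = np.random.randn(2, 4)
B = np.random.randn(8, 2)
print(merge_lora_weight(W, A, B, lora_alpha=16.0, r=2).shape)  # (8, 4)
```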
closed
2023-04-04T09:48:44Z
2023-04-18T14:07:44Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/58
[]
jiangliqin
1
marcomusy/vedo
numpy
638
mesh texture
DATA (rename `txt` to `zip`): [mesh_tex.txt](https://github.com/marcomusy/vedo/files/8618771/mesh_tex.txt)

CODE:
```
from vedo import *

path_pfx = "/home/lab0/Pictures/mesh_tex"
path_obj = path_pfx + "/pro_img_ds_HR_00000_0.obj"
path_ply = path_pfx + "/cv_img_du_00000_0_mesh_inC.ply"
path_opt = path_pfx + "/pro_img_ds_HR_00000_0_opt.ply"
path_tex = path_pfx + "/img_c_ortho_HR_00000_0.png"

plt = Plotter(shape=(1, 3), size=(800, 600))
M1 = Mesh(path_obj).texture(path_tex)
M2 = Mesh(path_ply).c((200, 200, 200))
M3 = Mesh(path_opt).c((200, 200, 200))
plt.show(M1, at=0)
plt.show(M2, at=1)
plt.show(M3, at=2)
plt.show(interactive=True, zoom=1.0)
```

RESULT:
![screenshot](https://user-images.githubusercontent.com/34391447/166645228-b36cae54-4551-4ada-976d-b12e5aea1bcb.png)

QUESTIONS:
1. how to fix the texture mapping
2. how to remove the two vertical lines (borderless mode)
3. how to set zoom to fit the height like the following (I adjust it manually)

![screenshot](https://user-images.githubusercontent.com/34391447/166645593-89b650ce-06f4-4fda-be49-0ca26428bd03.png)
closed
2022-05-04T08:27:22Z
2022-05-04T13:31:10Z
https://github.com/marcomusy/vedo/issues/638
[ "enhancement" ]
LogWell
5
frol/flask-restplus-server-example
rest-api
56
Update Swagger UI to 3.x
The changes are quite radical, so it will take some time to get things going. Just for the reference: https://github.com/noirbizarre/flask-restplus/issues/267
open
2017-04-04T15:42:03Z
2019-02-06T10:33:11Z
https://github.com/frol/flask-restplus-server-example/issues/56
[]
frol
2
alyssaq/face_morpher
numpy
59
"Invalid HAAR feature" and no face in image error
Attempting to run facemorpher in a docker container [as described in the readme](https://github.com/alyssaq/face_morpher#try-out-in-a-docker-container), ``` root@6411e86f84bd:/# facemorpher --src=/images/alyssa.jpg --dest=/images/ian.jpg --plot Failed finding face points: cascadedetect.cpp(569) : Unspecified error : > Invalid HAAR feature (expected: 'rw.r.x + rw.r.width <= W'), where > 'rw.r.x + rw.r.width' is 15 > must be less than or equal to > 'W' is 12 No face in /images/alyssa.jpg Failed finding face points: cascadedetect.cpp(569) : Unspecified error : > Invalid HAAR feature (expected: 'rw.r.x + rw.r.width <= W'), where > 'rw.r.x + rw.r.width' is 15 > must be less than or equal to > 'W' is 12 No face in /images/ian.jpg Traceback (most recent call last): File "/usr/local/bin/facemorpher", line 10, in <module> sys.exit(main()) File "/usr/local/lib/python3.7/site-packages/facemorpher/morpher.py", line 149, in main args['--plot'], args['--background']) File "/usr/local/lib/python3.7/site-packages/facemorpher/morpher.py", line 134, in morpher src_img, src_points = next(images_points_gen) StopIteration ``` The inputs are jpg files that open fine, and I've even grabbed `alyssa.jpg` from the README to ensure that my particular face images are not the problem.
open
2019-10-18T03:54:04Z
2020-05-03T08:45:06Z
https://github.com/alyssaq/face_morpher/issues/59
[]
jstray
3
mwaskom/seaborn
data-science
3,598
DOC: seaborn.set_context should note that it sets the global defaults for all plots using the matplotlib rcParams system
Unless I'm mistaken, the doc for [`seaborn.set_context`](https://seaborn.pydata.org/generated/seaborn.set_context.html) should note that it sets the global defaults for all plots using the matplotlib `rcParams` system.
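A short check illustrating the point, written against recent seaborn behaviour; the exact parameter values printed are illustrative only.

```python
import matplotlib as mpl
import seaborn as sns

print(mpl.rcParams["font.size"], mpl.rcParams["lines.linewidth"])

sns.set_context("talk")  # no figure involved, yet...

# ...the global matplotlib defaults have changed, so every later plot is affected.
print(mpl.rcParams["font.size"], mpl.rcParams["lines.linewidth"])

# The context-manager form scopes the change to one block instead:
with sns.plotting_context("poster"):
    print(mpl.rcParams["font.size"])
```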
closed
2023-12-21T23:59:02Z
2023-12-22T22:36:35Z
https://github.com/mwaskom/seaborn/issues/3598
[]
rootsmusic
6
ultrafunkamsterdam/undetected-chromedriver
automation
1,667
driver.quit not working
driver.quit closes the driver window but leaves the Chrome processes running in Task Manager, and they keep eating CPU.
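A hedged cleanup sketch using `psutil`; the `browser_pid` attribute is an assumption about the driver object and may differ between versions, so adapt the PID lookup to whatever your driver actually exposes.

```python
import psutil
import undetected_chromedriver as uc

driver = uc.Chrome()
browser_pid = getattr(driver, "browser_pid", None)  # assumption: attribute name may vary

driver.quit()

# If Chrome children survive quit(), terminate the remaining process tree ourselves.
if browser_pid is not None and psutil.pid_exists(browser_pid):
    proc = psutil.Process(browser_pid)
    for child in proc.children(recursive=True):
        child.terminate()
    proc.terminate()
    psutil.wait_procs([proc], timeout=5)
```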
open
2023-11-18T09:26:49Z
2024-08-22T14:10:26Z
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1667
[]
tromotixc
10
tflearn/tflearn
tensorflow
192
Dropout in RNNs
It seems there is a bug where dropout in RNNs is still applied at prediction time; investigating whether it comes from tflearn or tensorflow.
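For reference, a framework-agnostic sketch of the expected behaviour (plain NumPy, not tflearn's implementation): inverted dropout masks activations only while training and becomes the identity at prediction time.

```python
import numpy as np

def dropout(x: np.ndarray, keep_prob: float, training: bool) -> np.ndarray:
    """Inverted dropout: scale at train time so inference needs no correction."""
    if not training:
        return x  # prediction time: dropout must be a no-op
    mask = (np.random.rand(*x.shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob

x = np.ones((2, 4), dtype=np.float32)
print(dropout(x, keep_prob=0.5, training=True))   # roughly half the units zeroed, rest scaled to 2.0
print(dropout(x, keep_prob=0.5, training=False))  # unchanged -- the behaviour the bug violates
```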
closed
2016-07-10T12:23:26Z
2016-07-11T08:40:39Z
https://github.com/tflearn/tflearn/issues/192
[ "bug" ]
aymericdamien
2
xlwings/xlwings
automation
2,125
How to catch exceptions in VBA thrown from Python when using RunPython?
I'm using `RunPython` because it is simpler to set up. I haven't had luck running UDFs as described here https://docs.xlwings.org/en/stable/udfs.html; VBA always says `Syntax error` on the unknown UDF Python function.

So I do, from Python:

```
def test():
    raise ValueError("test catching this")
```

then from VBA:

```
Sub Test()
    On Error GoTo PythonError
    RunPython ("import module; module.test()")
    Exit Sub
PythonError:
    'Do something sensible
End Sub
```

The `On Error GoTo PythonError` is bypassed and the exception is propagated all the way to Excel and shown to the user. How can I get this to work? Do we have to raise a specific xlwings exception for VBA to be able to catch it?
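One hedged alternative, a sketch rather than the official xlwings answer: catch the exception on the Python side and hand VBA something it can test, for example by writing the error into a cell that the macro checks afterwards. The module, sheet, and cell names below are assumptions.

```python
import traceback

import xlwings as xw

def test():
    raise ValueError("test catching this")

def run_test_safely():
    """Entry point for RunPython("import module; module.run_test_safely()")."""
    wb = xw.Book.caller()             # the workbook that invoked RunPython
    status_cell = wb.sheets[0]["A1"]  # assumed location for the error flag
    try:
        test()
        status_cell.value = "OK"
    except Exception:
        status_cell.value = "ERROR: " + traceback.format_exc(limit=1)
```

The VBA side can then check the cell (e.g. `If Range("A1").Value Like "ERROR*" Then ...`) instead of relying on `On Error` catching an exception that never reaches it.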
closed
2022-12-20T10:19:45Z
2023-01-18T18:18:38Z
https://github.com/xlwings/xlwings/issues/2125
[]
bravegag
7
activeloopai/deeplake
tensorflow
2,976
[BUG] V4.0 breaking?
### Severity P0 - Critical breaking issue or missing functionality ### Current Behavior Neither of these imports work with v4.0 ``` from deeplake.core.vectorstore.deeplake_vectorstore import VectorStore from deeplake.core.vectorstore import VectorStore ``` It looks like the docs aren't updated either. Is there a new import? Is v4.0 broken? ### Steps to Reproduce from deeplake.core.vectorstore.deeplake_vectorstore import VectorStore from deeplake.core.vectorstore import VectorStore ### Expected/Desired Behavior Import works ### Python Version _No response_ ### OS _No response_ ### IDE _No response_ ### Packages _No response_ ### Additional Context _No response_ ### Possible Solution _No response_ ### Are you willing to submit a PR? - [ ] I'm willing to submit a PR (Thank you!)
closed
2024-10-25T16:22:14Z
2024-10-25T17:30:23Z
https://github.com/activeloopai/deeplake/issues/2976
[ "bug" ]
logan-markewich
3