Dataset schema:
- repo_name: string (length 9 to 75)
- topic: string (30 classes)
- issue_number: int64 (1 to 203k)
- title: string (length 1 to 976)
- body: string (length 0 to 254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38 to 105)
- labels: list (length 0 to 9)
- user_login: string (length 1 to 39)
- comments_count: int64 (0 to 452)
serengil/deepface
machine-learning
924
Unable to find good face match with deepface and Annoy
Hi, I am using the following code from Serengil's YouTube tutorial to find the best face match. The embeddings come from deepface, and Annoy's ANN-based search is used to find the best-matching face. This code does not give a good matching face, unlike what was highlighted in Serengil's YouTube video. Looking for help about why this code would not give the best matching face. Thanks.

```python
import os
from deepface.commons import functions
from keras_facenet import FaceNet
from deepface.basemodels import Facenet
from annoy import AnnoyIndex
import matplotlib.pyplot as plt
import random
import time

facial_images = []
for root, directories, files in os.walk("../../deepface/deepface/tests/dataset"):
    for file in files:
        if ('.jpg' in files):
            exact_path = root + files
            facial_images.append(exact_path)

embedder = FaceNet()
model = Facenet.loadModel()

representations = []
min = 100000000000
max = -10000000000
for face in facial_images:
    img = functions.preprocess_face(img = face, target_size = (160, 160))
    # embedding = embedder.embeddings(img)
    embedding = model.predict(img)[0, :]
    temp = embedding.min()
    if (min > temp):
        min = temp
    temp = embedding.max()
    if (max < temp):
        max = temp
    representation = []
    representation.append(img)
    representation.append(embedding)
    representations.append(representation)

# synthetic data to enlarge data size
for i in range(len(representations), 100000):
    filename = "dummy_%d.jpg" % i
    vector = [random.gauss(min, max) for z in range(128)]
    dummy_item = []
    dummy_item.append(filename)
    dummy_item.append(vector)
    representations.append(dummy_item)

t = AnnoyIndex(128, 'euclidean')
for i in range(len(representations)):
    vector = representations[i][1]
    t.add_item(i, vector)
t.build(3)

idx = 0
k = 2
start = time.time()
neighbors = t.get_nns_by_item(idx, k)
end = time.time()
print("get_nns_by_item took", end - start, "seconds")
print(neighbors)
```
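One likely culprit is the file-collection loop: it checks `'.jpg' in files` (the whole list) instead of `file`, and concatenates `root + files`, so `facial_images` ends up empty and the index is built almost entirely from the synthetic random vectors. Note also that `random.gauss(min, max)` interprets the observed min/max as mean and standard deviation, which is probably not intended. A corrected sketch of just the directory walk (the dataset path is whatever your local checkout uses):

```python
import os

def collect_jpgs(dataset_dir):
    """Collect .jpg paths recursively; note the per-file check uses `file`, not `files`."""
    paths = []
    for root, _dirs, files in os.walk(dataset_dir):
        for file in files:
            if file.lower().endswith(".jpg"):
                # join root and the individual file name, not the whole list
                paths.append(os.path.join(root, file))
    return paths
```

With real embeddings actually in the index, `get_nns_by_item` should return meaningful neighbors again.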
closed
2023-12-20T08:22:34Z
2023-12-20T08:46:59Z
https://github.com/serengil/deepface/issues/924
[ "question" ]
dumbogeorge
2
mwaskom/seaborn
matplotlib
2,821
Calling `sns.heatmap()` changes matplotlib rcParams
See the following example

```python
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns

mpl.rcParams["figure.dpi"] = 120
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["figure.figsize"] = (9, 6)

data = sns.load_dataset("iris")

print(mpl.rcParams["figure.dpi"])
print(mpl.rcParams["figure.facecolor"])
print(mpl.rcParams["figure.figsize"])
# 120.0
# white
# [9.0, 6.0]

fig, ax = plt.subplots()
sns.heatmap(data.corr(), vmin=-1, vmax=1, center=0, annot=True, linewidths=4, ax=ax);

print(mpl.rcParams["figure.dpi"])
print(mpl.rcParams["figure.facecolor"])
print(mpl.rcParams["figure.figsize"])
# 72.0
# (1, 1, 1, 0)
# [6.0, 4.0]
```

If I call again

```python
mpl.rcParams["figure.dpi"] = 120
mpl.rcParams["figure.facecolor"] = "white"
mpl.rcParams["figure.figsize"] = (9, 6)
```

then it works fine, but I don't know why it changes the rcParams.

**Edit** These are the versions being used

```
Last updated: Wed May 25 2022

Python implementation: CPython
Python version       : 3.9.12
IPython version      : 8.3.0

matplotlib: 3.5.2
seaborn   : 0.11.2
sys       : 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:25:59) [GCC 10.3.0]

Watermark: 2.3.0
```
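Whatever the internal cause, a defensive guard around the plotting call keeps global config intact. matplotlib itself ships `matplotlib.rc_context()` for exactly this; a library-agnostic sketch of the same snapshot/restore pattern (works on any mutable mapping such as `mpl.rcParams`):

```python
from contextlib import contextmanager

@contextmanager
def preserve_params(params):
    """Snapshot a mutable config mapping (e.g. mpl.rcParams) and restore it on exit."""
    saved = dict(params)          # shallow copy of the current settings
    try:
        yield params
    finally:
        params.clear()            # undo whatever the wrapped code changed
        params.update(saved)
```

Wrapping the `sns.heatmap(...)` call in `with preserve_params(mpl.rcParams): ...` would leave `figure.dpi` and friends untouched afterwards.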
closed
2022-05-25T19:16:45Z
2022-05-27T11:13:29Z
https://github.com/mwaskom/seaborn/issues/2821
[]
tomicapretto
2
strawberry-graphql/strawberry
fastapi
3,790
Incorrect typing for the `type` decorator
## Describe the Bug

The `type` function is decorated with

```
@dataclass_transform(
    order_default=True, kw_only_default=True, field_specifiers=(field, StrawberryField)
)
```

Therefore mypy treats classes decorated with `type` as being dataclasses with ordering functions. In particular, defining `__gt__` on such a class will be treated as an error by mypy. However, `type` (that is, the underlying `_wrap_dataclass` function) does not do anything to ensure the dataclass actually has ordering functions defined.

I see multiple solutions:
- Removing the `order_default=True` part of the `dataclass_transform` decorating `type`
- Enforcing `order=True` in `_wrap_dataclass`
- Allowing the caller to pass dataclass kwargs (as per [my previous issue](https://github.com/strawberry-graphql/strawberry/issues/2688))

## System Information

- Operating system: Ubuntu 24.04
- Strawberry version (if applicable): 0.256.1

## Additional Context

Code samples to be clear on the issue

```
@strawberry.type
class MyClass:
    attr: str

k = MyClass(attr="abc")
j = MyClass(attr="def")
j > k  # TypeError: '<' not supported between instances of 'MyClass' and 'MyClass'
```

```
@strawberry.type
class MyClass:
    attr: str

    def __gt__(self, other):
        return self.attr > other.attr

k = MyClass(attr="abc")
j = MyClass(attr="def")
j > k  # True
# When running mypy
# error: You may not have a custom "__gt__" method when "order" is True  [misc]
```
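The type-checker/runtime mismatch can be reproduced with plain stdlib dataclasses: `order_default=True` tells the checker that comparison methods exist, but unless `order=True` is actually passed to `dataclasses.dataclass`, the runtime raises. A minimal sketch (class names are illustrative):

```python
from dataclasses import dataclass

@dataclass          # runtime default: order=False, so no __gt__/__lt__ are generated
class NoOrder:
    attr: str

@dataclass(order=True)   # what order_default=True implies to the type checker
class WithOrder:
    attr: str
```

`NoOrder("b") > NoOrder("a")` raises `TypeError` at runtime even though mypy, told via `dataclass_transform(order_default=True)`, believes the comparison is fine; `WithOrder` behaves as the annotation promises.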
open
2025-02-21T11:26:35Z
2025-02-21T11:28:42Z
https://github.com/strawberry-graphql/strawberry/issues/3790
[ "bug" ]
Corentin-Bravo
0
521xueweihan/HelloGitHub
python
2,078
[Open-source self-recommendation] trzsz (trz / tsz), a file transfer tool similar to rz / sz with tmux support
## Project recommendation

- Project: https://github.com/trzsz/trzsz
- Category: Python
- Planned updates:
  * Support platforms other than Mac (e.g. Windows); this requires SecureCRT, Xshell, etc. to support [coprocesses](https://iterm2.com/documentation-coprocesses.html) the way iTerm2 does.
  * Turn the progress bar into a modal dialog, so that stray keystrokes during a transfer cannot abort it; this requires iTerm2 to support displaying the [macOS progress bar](https://developer.apple.com/library/archive/documentation/LanguagesUtilities/Conceptual/MacAutomationScriptingGuide/DisplayProgress.html).
  * Support binary upload and download in tmux control mode; this needs some features from tmux and iTerm2.
  * In tmux normal mode, binary download is already supported; binary upload needs tmux to accept input set to the latin1 charset, or a stable, reliable way to convert data that tmux has re-encoded to UTF-8 back to the original binary.
- Description: [trzsz](https://trzsz.github.io/) is a simple file transfer tool, similar to lrzsz (rz / sz) but with tmux support; it works with iTerm2 and has a nice progress bar.
- Why recommend it:
  * When logging into remote machines, tmux keeps the session alive, but tmux does not support uploading and downloading files with rz / sz, which is inconvenient.
  * Since tmux will not support rz / sz, we designed a new trz / tsz ([trzsz](https://github.com/trzsz/trzsz)) that does support tmux.
  * No matter how many hops it takes to reach the remote server, you can conveniently upload and download files, without the hassle of relaying through scp.
  * The original rz / sz implementation is rather bare-bones; there is not even a progress bar, so you cannot tell how long an upload or download will take, or whether it has stalled.
  * trzsz (trz / tsz) has a nice progress bar; it is not a modal dialog yet, but it clearly shows upload and download progress.
- Screenshots:
  * Upload example ![upload example](https://user-images.githubusercontent.com/20320324/149653033-988d4c54-3787-4530-b02b-f3008f99dd9a.gif)
  * Download example ![download example](https://user-images.githubusercontent.com/20320324/149653078-bdbe3ccc-4ac5-49bc-b2df-205cf184edbb.gif)
closed
2022-01-16T08:50:39Z
2022-01-28T01:21:25Z
https://github.com/521xueweihan/HelloGitHub/issues/2078
[ "已发布", "Python 项目" ]
lonnywong
1
DistrictDataLabs/yellowbrick
scikit-learn
1,014
Using your own models with yellow brick
**Describe the issue** If I create my own clustering algorithm that follows the sklearn pattern, is there anything I need to know to let users apply this package's clustering visualizers to my own sklearn-like models?
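Tools built around the sklearn pattern generally duck-type the estimator: a `fit(X, y=None)` that stores learned state in trailing-underscore attributes and returns `self`, plus `predict` and `get_params`. A minimal illustrative sketch (the class name and the exact attributes any given visualizer inspects are assumptions, not Yellowbrick's documented contract):

```python
class MedianSplitClusterer:
    """Toy sklearn-style clusterer: splits 1-D samples at the median of the fit data."""

    def __init__(self, n_clusters=2):
        self.n_clusters = n_clusters      # constructor only stores hyperparameters

    def fit(self, X, y=None):
        values = [row[0] for row in X]
        self.threshold_ = sorted(values)[len(values) // 2]   # learned state ends in "_"
        self.labels_ = [0 if v < self.threshold_ else 1 for v in values]
        return self                        # sklearn convention: fit returns self

    def predict(self, X):
        return [0 if row[0] < self.threshold_ else 1 for row in X]

    def get_params(self, deep=True):
        return {"n_clusters": self.n_clusters}
```

An estimator shaped like this can usually be passed wherever a `KMeans`-style model is expected, as long as the visualizer only touches the attributes it documents.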
closed
2020-01-29T22:48:21Z
2020-02-26T14:28:46Z
https://github.com/DistrictDataLabs/yellowbrick/issues/1014
[ "type: question" ]
achapkowski
3
fa0311/TwitterInternalAPIDocument
graphql
660
Any idea how long each guest token is valid for?
I am using endpoints which can be viewed in incognito mode. I see each IP has a 95-request limit for a 13-minute window. But any idea how long a guest token remains valid before it starts giving 403? I am caching the guest token in order to minimize requests, but in production I ended up getting 403 errors after some time. Any idea @fa0311?
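Whatever the actual (undocumented) lifetime turns out to be, a robust pattern is to treat the cached token as expendable: expire it proactively after an assumed age, and invalidate it the moment a request comes back 403. A hedged sketch with the fetch function injected (the 1-hour default is an assumption, not Twitter's documented behavior):

```python
import time

class GuestTokenCache:
    """Cache a token, expire it after max_age_s, and refresh on a 403-style failure."""

    def __init__(self, fetch_token, max_age_s=3600):
        self._fetch = fetch_token      # callable returning a fresh token
        self._max_age = max_age_s      # assumed lifetime; tune from observed 403s
        self._token = None
        self._born = 0.0

    def get(self):
        if self._token is None or time.time() - self._born > self._max_age:
            self._refresh()
        return self._token

    def invalidate(self):
        """Call this when a request using the token came back 403."""
        self._token = None

    def _refresh(self):
        self._token = self._fetch()
        self._born = time.time()
```

The caller's retry loop then becomes: `token = cache.get()`; on 403, `cache.invalidate()` and retry once with the fresh token.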
open
2024-10-14T04:49:27Z
2024-10-14T07:53:56Z
https://github.com/fa0311/TwitterInternalAPIDocument/issues/660
[]
abhranil26
3
httpie/cli
python
1,402
can httpie support JSON5 (JSON for Humans) input?
## Checklist
- [x] I've searched for similar feature requests.

---

## Enhancement request

Trying to call httpie with a POST body including raw, valid JS objects:

```
http <url> metrics:='[{name: "activeUsers"}]'
```

got this error:

```
'metrics:=[{name: "activeUsers"}]': Expecting property name enclosed in double quotes: line 1 column 3 (char 2)
```

as expected; but one would always expect tools like `httpie` to accept more formats for flexibility, and there is a project for that, called JSON5 (https://json5.org/), JSON for Humans. It allows many features, and some would be most useful for `httpie`:

1. Object keys may be an ECMAScript 5.1 IdentifierName, with single quotes, or without quotes if unambiguous, like my example `metrics:='[{name: "activeUsers"}]'`; I hope `httpie` can recognize it and auto-translate it to valid JSON,
2. Strings may be single quoted,
3. Numbers may be hexadecimal, or in scientific format, like `1e3` or `28e6`?

## Additional information, screenshots, or code examples

JSON5 Python libraries:
1. https://pypi.org/project/json5/
2. https://pyjson5.readthedocs.io/
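The quoted error comes straight from the strict stdlib `json` parser, which is what rejects the unquoted `name:` key; a JSON5 layer would sit in front of it and normalize such input before httpie builds the request body. A quick stdlib-only check of which inputs are strict JSON (the `json5`/`pyjson5` APIs themselves are not shown here):

```python
import json

def is_strict_json(text):
    """True if text parses as strict (RFC 8259) JSON -- what := currently requires."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False
```

`is_strict_json('[{name: "activeUsers"}]')` is `False` because of the unquoted key, which is exactly the JSON5 extension this request asks httpie to accept; the double-quoted variant parses fine.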
open
2022-05-16T07:35:20Z
2022-05-16T07:52:38Z
https://github.com/httpie/cli/issues/1402
[ "enhancement", "needs product design" ]
tx0c
1
trevorstephens/gplearn
scikit-learn
32
Include logic regression
New estimator; needs much more research to see how/if it fits into `gplearn`'s API. No milestone yet. [Citation](http://kooperberg.fhcrc.org/logic/documents/logic-regression.pdf)

Add boolean/logical functions, conditional functions, and the ability to input a binary input dataset.
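The boolean and conditional primitives that logic regression needs can be written in the same closed-function style as arithmetic primitives, operating element-wise over binary inputs. A minimal sketch in plain Python (not `gplearn`'s actual function-registration API):

```python
def logical_and(x, y):
    """Element-wise AND over 0/1 sequences."""
    return [int(a and b) for a, b in zip(x, y)]

def logical_or(x, y):
    """Element-wise OR over 0/1 sequences."""
    return [int(a or b) for a, b in zip(x, y)]

def if_then_else(cond, x, y):
    """Conditional primitive: picks from x where cond is truthy, else from y."""
    return [a if c else b for c, a, b in zip(cond, x, y)]
```

Primitives of this shape compose into the boolean trees the cited paper evolves; fitting them into `gplearn` would mean wrapping them with whatever arity/closure bookkeeping its function set expects.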
closed
2017-04-27T10:29:19Z
2020-02-13T11:32:51Z
https://github.com/trevorstephens/gplearn/issues/32
[ "enhancement" ]
trevorstephens
3
django-oscar/django-oscar
django
3,921
Unable to access oscar on https://example.com:8443/oscar
### Issue Summary

I followed [this guide](https://worldoscar.org/knowledge-base/oscar-19-installation/?epkb_post_type_1=oscar-19-installation) and I managed to install the most recent version, [oscar_emr19-66~1881.deb](https://sourceforge.net/projects/oscarmcmaster/files/Oscar%20Debian%2BUbuntu%20deb%20Package/oscar_emr19-66~1881.deb/download), on Ubuntu 22.04. After a successful installation, I was able to arrive at the below stages:

![image](https://user-images.githubusercontent.com/7415824/166155088-3adb7b18-7208-4830-9fe0-a80739b51847.png)
![image](https://user-images.githubusercontent.com/7415824/166154636-cd429909-6936-4df2-8343-83fc13d22124.png)

When I ran `less /usr/share/oscar-emr/README.txt`, I got:

![image](https://user-images.githubusercontent.com/7415824/166154724-e6272921-de8c-4795-b4ce-a92bfd8e087a.png)

### The problem

When I visited https://example.com:8443/oscar I got:

![image](https://user-images.githubusercontent.com/7415824/166155558-da0b157a-87f4-48fb-9eee-d3dea6a6d078.png)

### Technical details

```
java -version
openjdk version "11.0.15" 2022-04-19
OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1)
OpenJDK 64-Bit Server VM (build 11.0.15+10-Ubuntu-0ubuntu0.20.04.1, mixed mode, sharing)
```

* Python version: When I run `python --version`, I get `-bash: python: command not found`. But `pip3 --version` returned `pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)`
* Django version: When I run `pip show django | grep Version`, I get `WARNING: Package(s) not found: django`
* Oscar version: When I run `pip show django-oscar | grep Version`, I get `WARNING: Package(s) not found: django-oscar`

### Help needed

How can I fix my installation to get this up and running?
closed
2022-05-01T16:35:58Z
2022-05-02T04:03:02Z
https://github.com/django-oscar/django-oscar/issues/3921
[]
jessicana
1
dgtlmoon/changedetection.io
web-scraping
2,174
[feature] Sort tags / groups by alphabet
**Version and OS**
0.45.14 on Linux/Docker

**Is your feature request related to a problem? Please describe.**
To keep an overview of my watch jobs, I added tags / groups to them. In the meantime I work with 14 tags, and they are sorted by date created.

**Describe the solution you'd like**
It would be helpful if the tags were sorted alphabetically by default, or if we had at least the ability to sort them manually.

**Describe the use-case and give concrete real-world examples**
<img width="714" alt="Bildschirmfoto 2024-02-10 um 09 20 11" src="https://github.com/dgtlmoon/changedetection.io/assets/3201804/ccf5be77-34a3-4385-9519-a5e1bd8b5b91">
<img width="694" alt="Bildschirmfoto 2024-02-10 um 09 20 42" src="https://github.com/dgtlmoon/changedetection.io/assets/3201804/c84d57d9-23c5-4e21-b6a2-1dd7aaa395a1">

**Additional context**
BTW, isn't "tags" the better term here than "groups"? From my experience, an item can have multiple tags attached, but can only be in one group at the same time.
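The requested behavior is a one-liner wherever the tag list is rendered: sort by name, case-insensitively, instead of by creation order. A sketch (function name is illustrative, not the project's code):

```python
def sort_tags(tags):
    """Sort tag names alphabetically and case-insensitively, ignoring creation order."""
    return sorted(tags, key=str.casefold)
```

`str.casefold` keeps `apple` and `Apple` adjacent, which matters once tag names mix cases.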
closed
2024-02-10T08:21:00Z
2024-03-10T10:10:57Z
https://github.com/dgtlmoon/changedetection.io/issues/2174
[ "enhancement" ]
plangin
3
deezer/spleeter
deep-learning
186
Please help, I am getting this error
![2019-12-16 17_46_18-Anaconda Prompt (Anaconda3) - deactivate - spleeter separate -i spleeter_audio_](https://user-images.githubusercontent.com/58925994/70949604-33ec5e80-202c-11ea-807e-4ba8bb6a1cf4.png)
closed
2019-12-16T22:48:17Z
2019-12-18T14:18:10Z
https://github.com/deezer/spleeter/issues/186
[ "bug", "invalid" ]
excel77
1
predict-idlab/plotly-resampler
data-visualization
60
`FigureResampler` replace not working as it should when using a `go.Figure`
![image](https://user-images.githubusercontent.com/38005924/168543433-67db7ba1-0916-4bfd-af5d-0106fb9275a1.png)
closed
2022-05-16T07:44:20Z
2022-05-16T15:34:17Z
https://github.com/predict-idlab/plotly-resampler/issues/60
[ "bug" ]
jonasvdd
1
robotframework/robotframework
automation
4,803
Async support to dynamic and hybrid library APIs
For our library we are using a framework that is async, and we have to await results. Our library works similarly to the Remote library to get the keywords and arguments, but communication is done using asyncio. With RF 6.1, `run_keyword` can be made async, but the functions for getting keywords/arguments/etc. can't. As a workaround I'm getting the context with `EXECUTION_CONTEXTS.current` and running async operations with `context.asynchronous.run_until_complete()`. This works, but if the implementation of `EXECUTION_CONTEXTS` changes, the library will break. Would it be possible to add async support to the `get_*` functions in the Dynamic & Hybrid library interfaces?
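An alternative workaround that avoids reaching into `EXECUTION_CONTEXTS` is to drive the coroutine on a private event loop inside each synchronous `get_*` call. A hedged sketch (the library and method names are illustrative; real code would use its actual async client):

```python
import asyncio

def run_sync(coro):
    """Drive a coroutine to completion from synchronous code on a private event loop."""
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

class HybridLibrary:
    async def _fetch_keyword_names(self):
        await asyncio.sleep(0)           # stands in for real async I/O
        return ["Open Session", "Close Session"]

    def get_keyword_names(self):         # Robot Framework calls this synchronously
        return run_sync(self._fetch_keyword_names())
```

The trade-off: a private loop cannot share connections with RF's own loop, which is exactly why native async `get_*` support in the dynamic/hybrid APIs would be cleaner.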
closed
2023-06-22T09:26:53Z
2023-11-27T12:28:17Z
https://github.com/robotframework/robotframework/issues/4803
[ "enhancement", "priority: high", "alpha 2", "acknowledge" ]
WisniewskiP
9
autogluon/autogluon
scikit-learn
4,900
[tabular] Add `num_cpus`, `num_gpus` to `predictor.predict`
Related: #4871 We should add ways for user to control num_cpus and num_gpus during model inference. This also ties into adding parallel inference support.
open
2025-02-17T21:05:37Z
2025-02-17T21:05:37Z
https://github.com/autogluon/autogluon/issues/4900
[ "enhancement", "module: tabular" ]
Innixma
0
plotly/dash
data-visualization
2,765
When moving the cursor, it will sometimes get stuck
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.

16-core, 32-thread dual-CPU server, view selected to show all cores. Running in Docker on Ubuntu Server.

- if frontend related, tell us your Browser, Version and OS
  - OS: Windows
  - Browser: Chrome
  - Version: 123.0.6308.0

**Describe the bug**
When moving the cursor around on the preview, especially on machines with many cores/threads, the cursor will sometimes get stuck. Video attached.

**Expected behavior**
The graph cursor should follow the mouse and not stay in a previous position.

**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.
https://github.com/plotly/dash/assets/46653946/3576cae4-67ba-4d46-9b21-7af5febac7b3
closed
2024-02-18T19:45:58Z
2024-05-31T20:09:58Z
https://github.com/plotly/dash/issues/2765
[ "bug", "sev-2" ]
Spillebulle
1
deeppavlov/DeepPavlov
nlp
1,008
Readme for /examples
Please add a README with a short description of the provided examples in https://github.com/deepmipt/DeepPavlov/tree/master/examples

Please also add to the README links to other resources to learn DeepPavlov:
- https://github.com/deepmipt/dp_tutorials
- https://github.com/deepmipt/dp_notebooks
closed
2019-09-21T10:24:24Z
2019-09-26T13:54:55Z
https://github.com/deeppavlov/DeepPavlov/issues/1008
[]
DeepPavlovAdmin
1
vitalik/django-ninja
rest-api
818
How to create Generic Schema for openapi?
### Discussed in https://github.com/vitalik/django-ninja/discussions/817

<div type='discussions-op-text'>

<sup>Originally posted by **suuperhu** August 8, 2023</sup>

**I have the following piece of code, but there is a problem with the openapi page. How should I solve it?**

_Code environment: ubuntu 20.04, python 3.10.11, django 3.2.15, django-ninja 0.22.2, pydantic 1.10.12_

```python
from typing import TypeVar, Generic, List

from django.http import HttpRequest
from ninja import Router, Query, Schema
from pydantic import Field

T1 = TypeVar('T1')
T2 = TypeVar('T2')

router = Router()


class Params(Schema):
    name: str
    age: int = Field(0, ge=0, le=120)


class Response(Schema, Generic[T1, T2]):
    code: int
    data: T1
    message: T2


class Data(Schema):
    a: int
    b: List[str]


@router.get('demo/', response=Response[Data, str])
def list_level(request: HttpRequest, params: Params = Query(...)):
    return {
        'code': 200,
        'data': {'a': 1, 'b': ['a', 'b', 'c']},
        'message': 'good'
    }
```

The openapi page:

![图片](https://github.com/vitalik/django-ninja/assets/34603874/661da3f9-9a01-474e-8cea-31b2e8e29b8a)

So, how should I make the openapi page's 200 response look like this:

```json
{
  "code": 0,
  "data": {
    "a": 0,
    "b": [
      "string",
      "string",
      "..."
    ]
  },
  "message": "string"
}
```
</div>
closed
2023-08-08T05:04:16Z
2023-08-08T06:46:52Z
https://github.com/vitalik/django-ninja/issues/818
[]
hushoujier
1
huggingface/datasets
nlp
7,041
`sort` after `filter` unreasonably slow
### Describe the bug

As the title says...

### Steps to reproduce the bug

`sort` on its own seems to be normal.

```python
from datasets import Dataset
import random

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("finish sort")
```

but `sort` after `filter` is extremely slow.

```python
from datasets import Dataset
import random

nums = [{"k": random.choice(range(0, 1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")
print("start sort")
ds = ds.sort("k")
print("finish sort")
```

### Expected behavior

Is this a bug, or is it a misuse of the `sort` function?

### Environment info

- `datasets` version: 2.20.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0
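A plausible explanation (hedged, not confirmed from the library internals): `filter` returns a dataset backed by an indices mapping over the original Arrow table, and a subsequent `sort` has to work through that indirection row by row instead of over contiguous columns; calling `Dataset.flatten_indices()` before sorting materializes the filtered rows first. A pure-Python analogy of the cost difference, not the `datasets` internals:

```python
# A list plus an indices mapping, mimicking the result of .filter()
data = [9, 3, 7, 1, 8]
keep = [i for i, v in enumerate(data) if v > 2]          # indices mapping

# sorting "through" the mapping dereferences data[i] on every comparison
through_mapping = sorted(keep, key=lambda i: data[i])

# materializing first (like .flatten_indices()) sorts a plain contiguous list
materialized = sorted(data[i] for i in keep)
```

Both routes give the same order; at scale, the contiguous version avoids per-row indirection, which is the kind of overhead that could make `sort` after `filter` feel disproportionately slow.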
open
2024-07-12T03:29:27Z
2024-07-22T13:55:17Z
https://github.com/huggingface/datasets/issues/7041
[]
Tobin-rgb
1
piskvorky/gensim
machine-learning
2,873
Further focus/slim keyedvectors.py module
Pre-#2698, `keyedvectors.py` was 2500+ lines, including functionality over-specific to other models, & redundant classes. Post-#2698, with some added generic functionality, it's still over 1800 lines. It should shed some other grab-bag utility functions that have accumulated & don't logically fit inside the `KeyedVectors` class.

In particular, the evaluation (analogies, word_ranks) helpers could move to their own module that takes a KV instance as an argument. (If other more-sophisticated evaluations can be contributed, as would be welcome, they should also live alongside those, rather than bloating `KeyedVectors`.)

The `get_keras_embedding` method, as its utility is narrow to very specific uses, and it is conditional on a not-necessarily-installed package, could go elsewhere too: either a keras-focused utilities module, or even just documentation/example code about how to convert to/from keras from `KeyedVectors`.

Some of the more advanced word-vector-**using** calculations, like 'Word Mover's Distance' or 'Soft Cosine Similarity', could move to method-specific modules that are then better documented/self-contained/optimized, without bloating the generic 'set of vectors' module. (They might be more discoverable there, as well.)

And finally, some of the existing calculations could be unified/streamlined (especially the two variants of `most_similar()`, and some of the steps shared by multiple operations). My hope would be that the module is eventually <1000 lines.
open
2020-07-06T20:00:39Z
2021-03-09T07:59:52Z
https://github.com/piskvorky/gensim/issues/2873
[]
gojomo
8
3b1b/manim
python
1,442
Error installing manim and running a test program
### Describe the error

I installed ffmpeg from the APT package manager and PyOpenGL from pip3 in a virtual environment. Then, when I tried to install manim in the same virtual environment, it gave the following error. However, in the end, it said that it successfully installed manim and its dependencies. Then, I tried to run a simple program to see if it works, but it just keeps running with no errors (like it is frozen).

### Code and Error

**Code**:

```
class SquareImage(Scene):
    def construct(self):
        square = Square()
        self.add(square)
```

```
manimgl manim_test.py SquareImage -s
```

**Error**:

```
ERROR: Command errored out with exit status 1:
 command: /home/karan/manim/venv/bin/python /home/karan/manim/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py get_requires_for_build_wheel /tmp/tmp8troxa62
     cwd: /tmp/pip-install-vtkpor5m/manimpango_d759c30457a744afbb343d792b24d300
Complete output (33 lines):
Package pangocairo was not found in the pkg-config search path.
Perhaps you should add the directory containing `pangocairo.pc'
to the PKG_CONFIG_PATH environment variable
No package 'pangocairo' found
Traceback (most recent call last):
  File "setup.py", line 124, in check_min_version
    check_call(command, stdout=subprocess.DEVNULL)
  File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['pkg-config', '--print-errors', '--atleast-version', '1.30.0', 'pangocairo']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/karan/manim/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
    main()
  File "/home/karan/manim/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
  File "/home/karan/manim/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py", line 114, in get_requires_for_build_wheel
    return hook(config_settings)
  File "/tmp/pip-build-env-vytjtzxg/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 149, in get_requires_for_build_wheel
    return self._get_build_requires(
  File "/tmp/pip-build-env-vytjtzxg/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 130, in _get_build_requires
    self.run_setup()
  File "/tmp/pip-build-env-vytjtzxg/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 253, in run_setup
    super(_BuildMetaLegacyBackend,
  File "/tmp/pip-build-env-vytjtzxg/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 145, in run_setup
    exec(compile(code, __file__, 'exec'), locals())
  File "setup.py", line 190, in <module>
    _pkg_config.check_min_version(MINIMUM_PANGO_VERSION)
  File "setup.py", line 127, in check_min_version
    raise RequiredDependencyException(f"{self.name} >= {version} is required")
__main__.RequiredDependencyException: pangocairo >= 1.30.0 is required
----------------------------------------
```

### Environment

**OS System**: Linux Mint
**manim version**: master
**python version**: 3.8.5
open
2021-03-21T09:19:49Z
2022-06-13T16:52:39Z
https://github.com/3b1b/manim/issues/1442
[]
kkin1995
6
tensorlayer/TensorLayer
tensorflow
680
Failed: TensorLayer (bfffb588)
*Sent by Read the Docs (readthedocs@readthedocs.org). Created by [fire](https://fire.fundersclub.com/).*

---

Build Failed for TensorLayer (latest)

You can find out more about this failure here: [TensorLayer build #7291120](https://readthedocs.org/projects/tensorlayer/builds/7291120/) - failed

If you have questions, a good place to start is the FAQ: <https://docs.readthedocs.io/en/latest/faq.html>

You can unsubscribe from these emails in your [Notification Settings](https://readthedocs.org/dashboard/tensorlayer/notifications/)

Keep documenting,
Read the Docs <https://readthedocs.org>
closed
2018-06-04T14:24:24Z
2018-06-04T14:56:34Z
https://github.com/tensorlayer/TensorLayer/issues/680
[]
fire-bot
0
RayVentura/ShortGPT
automation
70
🐛 [Bug]:
### What happened?

```
Step 9 _prepareBackgroundAssets
{'voiceover_audio_url': '.editing_assets/reddit_shorts_assets/b32f7d0a87944aa99dd1b826/audio_voice.wav', 'video_duration': None, 'background_video_url': 'https://rr3---sn-npoe7nez.googlevideo.com/videoplayback?expire=1690975098&ei=GufJZN7mJaei9fwPveCisA8&ip=194.156.163.133&id=o-AC0PipqKARrGGjkNooYfuCxs-noXKlpCEhSXMUGnVmtJ&itag=335&source=youtube&requiressl=yes&mh=ww&mm=31%2C26&mn=sn-npoe7nez%2Csn-ntqe6n76&ms=au%2Conr&mv=m&mvi=3&pl=24&initcwndbps=70318750&vprv=1&svpuc=1&mime=video%2Fwebm&gir=yes&clen=463772648&dur=545.611&lmt=1629834060752995&mt=1690953169&fvip=5&keepalive=yes&fexp=24007246%2C51000011%2C51000023&c=IOS&txp=5511222&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cvprv%2Csvpuc%2Cmime%2Cgir%2Cclen%2Cdur%2Clmt&sig=AOq0QJ8wRgIhANUzWaBjEcKzgNA2yiHX_42W4mkpVTLPv_64hiw9laDRAiEA2VYtKYeThTpaZIHsqSbBU8hsPz9Rkpwyb5_VXocQfDE%3D&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AG3C_xAwRgIhAKQyMfTyaa5QifrExwyXHxaomTdA5wP4q5aICytBfRuFAiEAn7pYfppbDvlnRTNZQfRstO3gD894BpmVRo1fPKp8am0%3D', 'music_url': 'public/Music dj quads.wav'}

Error
  File "C:\Users\EDY\Desktop\c\shortgpt\gui\short_automation_ui.py", line 107, in create_short
    for step_num, step_info in shortEngine.makeContent():
  File "C:\Users\EDY\Desktop\c\shortgpt\shortGPT\engine\abstract_content_engine.py", line 70, in makeContent
    self.stepDict[currentStep]()
  File "C:\Users\EDY\Desktop\c\shortgpt\shortGPT\engine\content_short_engine.py", line 97, in _prepareBackgroundAssets
    self.verifyParameters(
  File "C:\Users\EDY\Desktop\c\shortgpt\shortGPT\engine\abstract_content_engine.py", line 55, in verifyParameters
    raise Exception(f"Parameter :{key} is null")
```

### What type of browser are you seeing the problem on?

Chrome

### What type of Operating System are you seeing the problem on?

Windows

### Python Version

3.10

### Application Version

v0.0.1

### Expected Behavior

### Error Message

_No response_

### Code to produce this issue.

_No response_

### Screenshots/Assets/Relevant links

_No response_
open
2023-08-02T05:37:02Z
2023-08-02T05:37:02Z
https://github.com/RayVentura/ShortGPT/issues/70
[ "bug" ]
rpp-Little-pig
0
raphaelvallat/pingouin
pandas
209
pairwise_nonparametric()
Hi, I was looking for non-parametric pairwise tests and only found the `parametric=False` parameter of `pairwise_ttests()` after some time spent with the flowcharts. This seems confusing to me. I'd suggest adding a `pairwise_nonparametric()` function for this purpose, mainly for discoverability and clarity. Also, please note that the documentation for `pairwise_ttests()` just says: "Pairwise T-tests.". I'd be happy to prepare a PR if that helps.
closed
2021-11-10T10:45:33Z
2022-03-12T23:51:51Z
https://github.com/raphaelvallat/pingouin/issues/209
[ "docs/testing :book:", "IMPORTANT❗" ]
michalkahle
5
ymcui/Chinese-BERT-wwm
nlp
146
Could the MLM weights be added for RoBERTa-wwm-ext-large?
A lot of recent research shows that MLM is itself a quite useful language model, not purely a leftover of pre-training, so could you please add the MLM weights? And what I really can't understand is: if the MLM weights were going to be dropped, fine, but why randomly initialize an MLM head and leave it there? Isn't that likely to mislead people?
closed
2020-09-21T06:22:18Z
2020-09-21T07:01:24Z
https://github.com/ymcui/Chinese-BERT-wwm/issues/146
[ "wontfix" ]
bojone
1
newpanjing/simpleui
django
43
After collapsing the left sidebar, the content area does not expand to full width
**Bug description**
A brief description of the bug encountered:

**Steps to reproduce**
1.
2.
3.

**Environment**
1. Operating system:
2. Python version:
3. Django version:
4. SimpleUI version:

**Other notes**
closed
2019-05-21T01:38:45Z
2019-05-21T02:30:57Z
https://github.com/newpanjing/simpleui/issues/43
[ "bug" ]
Qianzujin
1
tqdm/tqdm
pandas
749
Logging from separate threads pushes tqdm bar up
Versions I'm using:

```
tqdm = 4.31.1
python = 3.7.3
OS: Ubuntu 19.04 (Linux: 5.0)
```

If we're processing a list of elements with a multiprocessing function, say applying `f` to the list with `pool.imap_unordered`, and log a message from within `f`, then the progress bar will be pushed up for each message logged. An example script that produces this behavior:

```python
import logging
import multiprocessing
import time

import tqdm


# Set up tqdm logging handler
class TqdmLoggingHandler(logging.StreamHandler):
    def __init__(self, stream=None):
        super().__init__(stream)

    def emit(self, record):
        try:
            msg = self.format(record)
            tqdm.tqdm.write(msg)
            self.flush()
        except (KeyboardInterrupt, SystemExit):
            raise
        except:
            self.handleError(record)


logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.handlers = []
logger.addHandler(TqdmLoggingHandler())


def func_mult(x):
    logger.info('processing %s', x)  # <- log from other threads
    time.sleep(1)
    return x


array = list(range(6))
with multiprocessing.Pool(2) as pool:
    for xx in tqdm.tqdm(
            pool.imap_unordered(func_mult, array), total=len(array)):
        logger.info('getting result: %s', xx)  # <- log from current thread
```

This is the output I get:

![image](https://user-images.githubusercontent.com/3422347/58253825-94067500-7d69-11e9-9878-8f0970b92846.png)

Note that the `getting result` messages are printed correctly (above the bar), but the `processing` messages issued from other threads don't respect the bar and are added to the bottom of the screen instead. Is this behavior expected, or should log messages issued from other threads respect tqdm?
open
2019-05-23T12:51:57Z
2019-06-17T16:43:02Z
https://github.com/tqdm/tqdm/issues/749
[ "p3-enhancement 🔥", "help wanted 🙏", "need-feedback 📢", "p2-bug-warning ⚠", "synchronisation ⇶" ]
tupini07
1
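The handler pattern in the tqdm report above (routing formatted log records through `tqdm.write` so messages print above the bar) can be sketched generically. In this sketch the write function is injected, so it runs without tqdm installed; `WriteFnHandler` and `write_fn` are illustrative names, not tqdm API.

```python
import logging

class WriteFnHandler(logging.Handler):
    """Route formatted log records through an arbitrary write function.

    With write_fn=tqdm.tqdm.write this mirrors the TqdmLoggingHandler
    from the report, so messages print above an active progress bar.
    """
    def __init__(self, write_fn):
        super().__init__()
        self.write_fn = write_fn

    def emit(self, record):
        try:
            self.write_fn(self.format(record))
        except (KeyboardInterrupt, SystemExit):
            raise
        except Exception:
            self.handleError(record)

# Demonstrate with a list as the "output device" instead of tqdm.
lines = []
logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.handlers = [WriteFnHandler(lines.append)]
logger.info("processing %s", 3)
print(lines)  # ['processing 3']
```

Note that this only covers the single-process case: with `multiprocessing.Pool`, worker processes get their own copy of the logging tree, so their writes are not synchronized with the bar in the parent; forwarding worker records to the parent via `logging.handlers.QueueHandler` is one common workaround.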
huggingface/transformers
nlp
36217
Albert does not use SDPA's Flash Attention since attention mask is always created
### System Info NA ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://github.com/huggingface/transformers/blob/dd16acb8a3e93b643aa374c9fb80749f5235c1a6/src/transformers/models/albert/modeling_albert.py#L772-L773 Since attention mask is always used by `AlbertModel`, SDPA will dispatch memory-efficient kernel instead of flash attention. The inner classes (`AlbertTransformer`, `AlbertLayerGroup`, ...) seem to handle attention_mask=None correctly, so it's only the outermost class is problematic. ### Expected behavior Flash attention should be used when attention_mask=None
closed
2025-02-15T15:40:58Z
2025-02-16T02:36:30Z
https://github.com/huggingface/transformers/issues/36217
[ "bug" ]
gau-nernst
1
plotly/dash-component-boilerplate
dash
62
capitalize component name
Hi, while creating a new project it asks for component_name; if I give a name that starts with lowercase, it creates the React component with a lowercase name, which violates React component standards. React component names must start with uppercase.
open
2019-03-13T01:14:51Z
2019-03-13T01:14:51Z
https://github.com/plotly/dash-component-boilerplate/issues/62
[]
rajeevmaster
0
igorbenav/FastAPI-boilerplate
sqlalchemy
138
DB session from a worker function?
Hey, I'm trying for 2 days now to make this work. How do I pass the session from an endpoint to the background worker function? My idea is to insert a record in the db from my endpoint and then process it from the background function. *Edit* I made it work by creating another function with contextmanager ``` @asynccontextmanager async def async_get_context() -> AsyncSession: async_session = local_session async with async_session() as db: yield db ```
closed
2024-05-21T13:16:39Z
2024-06-02T14:05:30Z
https://github.com/igorbenav/FastAPI-boilerplate/issues/138
[ "documentation", "enhancement" ]
kaStoyanov
4
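The workaround in the FastAPI-boilerplate issue above (wrapping the session factory in an `@asynccontextmanager` so a background worker can open its own session instead of borrowing the request-scoped one) can be sketched in isolation. `FakeSession` and `fake_session_factory` below stand in for the boilerplate's `AsyncSession`/`local_session` and are assumptions for the sketch, not the project's actual API.

```python
import asyncio
from contextlib import asynccontextmanager

class FakeSession:
    """Stand-in for an AsyncSession; tracks closed state for the demo."""
    def __init__(self):
        self.closed = False
    async def __aenter__(self):
        return self
    async def __aexit__(self, *exc):
        self.closed = True

def fake_session_factory():
    # Stand-in for the boilerplate's `local_session` sessionmaker.
    return FakeSession()

@asynccontextmanager
async def async_get_context():
    # Mirrors the issue's workaround: each caller opens and closes
    # its own session rather than sharing the endpoint's session.
    async with fake_session_factory() as db:
        yield db

async def worker(record_id, results):
    async with async_get_context() as db:
        # Session is open inside the block; process the record here.
        results.append((record_id, db.closed))

results = []
asyncio.run(worker(42, results))
print(results)  # [(42, False)] — session was open while the worker ran
```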
ultralytics/yolov5
machine-learning
13519
Detection with Torch Hub Failing
### Search before asking - [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question I have an ML project on Python 3.7.8; it was working fine, but recently I am getting an error when loading the model with torch.hub.load File "C:\Users\Strange/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 39, in <module> from utils.dataloaders import exif_transpose, letterbox File "C:\Users\Strange/.cache\torch\hub\ultralytics_yolov5_master\utils\dataloaders.py", line 776 if mosaic := self.mosaic and random.random() < hyp["mosaic"]: I know yolov5 requires Python > 3.8, but all current dependencies in my project are for 3.7.8; please suggest a workaround to make it work ### Additional _No response_
closed
2025-02-25T04:56:37Z
2025-02-27T23:58:53Z
https://github.com/ultralytics/yolov5/issues/13519
[ "question", "dependencies", "detect" ]
llavkush
4
WeblateOrg/weblate
django
14232
Expose 'add comment' API endpoint
### Describe the problem We're migrating from another platform to Weblate, and would like to take along our comments. It's currently not possible to do this easily/programmatically. Consequently, we're losing valuable contributor feedback on our strings. ### Describe the solution you would like We'd like to have an API endpoint that allows posting a comment on a given string. Matching: * If we provide the email address of the poster, and it exists as a user on Weblate, the comment should be associated with that user. * We'd like to provide the string key to identify the relevant string on Weblate. ### Describe alternatives you have considered I wondered whether to request the following, but concluded that the benefits don't outweigh the (technical) challenges: If we provide a timestamp (in the past), it is maintained so that when viewing the comment in context there is additional metadata to consider whether the comment is still relevant. ### Screenshots _No response_ ### Additional context _No response_
open
2025-03-16T11:51:08Z
2025-03-17T18:52:56Z
https://github.com/WeblateOrg/weblate/issues/14232
[ "enhancement", "undecided", "Area: API" ]
keunes
2
graphql-python/graphene-django
graphql
1116
graphene-neo4j
I will be glad if you update graphene-neo4j with Django 3.
closed
2021-02-15T19:17:27Z
2021-02-16T06:12:03Z
https://github.com/graphql-python/graphene-django/issues/1116
[ "🐛bug" ]
MajidHeydari
1
3b1b/manim
python
2097
Example Gallery Bug when Rendering High Quality
### Describe the bug For the Gallery Example with https://docs.manim.community/en/stable/examples.html#pointwithtrace **Code**: ```py from manim import * class PointWithTrace(Scene): def construct(self): path = VMobject() dot = Dot() path.set_points_as_corners([dot.get_center(), dot.get_center()]) def update_path(path): previous_path = path.copy() previous_path.add_points_as_corners([dot.get_center()]) path.become(previous_path) path.add_updater(update_path) self.add(path, dot) self.play(Rotating(dot, radians=PI, about_point=RIGHT, run_time=2)) self.wait() self.play(dot.animate.shift(UP)) self.play(dot.animate.shift(LEFT)) self.wait() ``` **Wrong display or Error traceback**: ![image](https://github.com/3b1b/manim/assets/58462210/5a1f3f2d-9ec0-43cb-95f4-da91fcb74170) ### Additional context The circular arc is drawn correctly, but once it is done, it deletes the whole arc and continuous as if the path started from the origin completing the drawing. This doesn't happen with low quality, but happens when you use high quality.
open
2024-01-29T04:51:34Z
2024-01-29T04:53:52Z
https://github.com/3b1b/manim/issues/2097
[ "bug" ]
wmstack
1
ivy-llc/ivy
pytorch
28339
Fix Frontend Failing Test: paddle - math.tensorflow.math.argmin
To-do List: https://github.com/unifyai/ivy/issues/27500
closed
2024-02-20T08:12:51Z
2024-02-20T10:21:20Z
https://github.com/ivy-llc/ivy/issues/28339
[ "Sub Task" ]
Sai-Suraj-27
0
gevent/gevent
asyncio
1962
ImportError: cannot import name 'match_hostname' from 'ssl' (/usr/lib/python3.12/ssl.py)
* gevent version: 22.10.2 - fedora package * Python version: 3.12.0b3 * Operating System: Fedora rawhide ### Description: While trying to build the sphinx documentation for the x2go python module: ``` + make -C docs SPHINXBUILD=/usr/bin/sphinx-build-3 html make: Entering directory '/builddir/build/BUILD/python-x2go-0.6.1.3/docs' /usr/bin/sphinx-build-3 -b html -d build/doctrees source build/html Running Sphinx v6.1.3 Configuration error: There is a programmable error in your configuration file: Traceback (most recent call last): File "/usr/lib/python3.12/site-packages/sphinx/config.py", line 351, in eval_config_file exec(code, namespace) # NoQA: S102 ^^^^^^^^^^^^^^^^^^^^^ File "/builddir/build/BUILD/python-x2go-0.6.1.3/docs/source/conf.py", line 22, in <module> import x2go File "/builddir/build/BUILD/python-x2go-0.6.1.3/x2go/__init__.py", line 42, in <module> monkey.patch_all() File "/usr/lib/python3.12/site-packages/gevent/monkey.py", line 1279, in patch_all patch_ssl(_warnings=_warnings, _first_time=first_time) File "/usr/lib/python3.12/site-packages/gevent/monkey.py", line 200, in ignores return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/gevent/monkey.py", line 1044, in patch_ssl gevent_mod, _ = _patch_module('ssl', _warnings=_warnings) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/gevent/monkey.py", line 462, in _patch_module gevent_module, target_module, target_module_name = _check_availability(name) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/gevent/monkey.py", line 448, in _check_availability gevent_module = getattr(__import__('gevent.' + name), name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/gevent/ssl.py", line 32, in <module> from gevent import _ssl3 as _source # pragma: no cover ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.12/site-packages/gevent/_ssl3.py", line 53, in <module> from ssl import match_hostname ImportError: cannot import name 'match_hostname' from 'ssl' (/usr/lib/python3.12/ssl.py) ```
closed
2023-06-26T23:33:53Z
2023-07-10T15:30:56Z
https://github.com/gevent/gevent/issues/1962
[]
opoplawski
1
sinaptik-ai/pandas-ai
data-science
686
TypeError: 'NoneType' object is not callable . Retrying Unfortunately, I was not able to answer your question, because of the following error: No code found in the response
#### Error:-/usr/local/lib/python3.10/dist-packages/pandasai/llm/starcoder.py:28: UserWarning: Starcoder is deprecated and will be removed in a future release. Please use langchain.llms.HuggingFaceHub instead, although please be aware that it may perform poorly. warnings.warn( /usr/local/lib/python3.10/dist-packages/pandasai/llm/falcon.py:29: UserWarning: Falcon is deprecated and will be removed in a future release. Please use langchain.llms.HuggingFaceHub instead, although please be aware that it may perform poorly. warnings.warn( WARNING:pandasai.helpers.logger:Error of executing code WARNING:pandasai.helpers.logger:Failed to execute code with a correction framework [retry number: 1] ERROR:pandasai.helpers.logger:Failed with error: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/pandasai/smart_datalake/__init__.py", line 394, in chat result = self._code_manager.execute_code( File "/usr/local/lib/python3.10/dist-packages/pandasai/helpers/code_manager.py", line 276, in execute_code return analyze_data(self._get_originals(dfs)) TypeError: 'NoneType' object is not callable . Retrying Unfortunately, I was not able to answer your question, because of the following error: No code found in the response Unfortunately, I was not able to answer your question, because of the following error: #### Code:-- ```python df = pd.DataFrame({ "country": [ "United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China", ], "gdp": [ 19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064, ], "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12], }) from pandasai import SmartDataframe from pandasai.llm import Starcoder, Falcon starcoder_llm = Starcoder(api_token=token) falcon_llm = Falcon(api_token=token) df1 = SmartDataframe(df, config={"llm": starcoder_llm}) df2 = SmartDataframe(df, config={"llm": falcon_llm}) print(df1.chat("Which country has the highest GDP?")) print(df2.chat("Which one is the unhappiest country?"))```
closed
2023-10-25T12:07:05Z
2024-06-01T00:20:12Z
https://github.com/sinaptik-ai/pandas-ai/issues/686
[]
jaysinhpadhiyar
1
matplotlib/matplotlib
data-science
29489
[Bug]: Systematic test failures with ubuntu-22.04-arm pipeline
### Bug summary ``` __________________________ test_errorbar_limits[svg] ___________________________ [gw2] linux -- Python 3.12.8 /opt/hostedtoolcache/Python/3.12.8/arm64/bin/python args = () kwds = {'extension': 'svg', 'request': <FixtureRequest for <Function test_errorbar_limits[svg]>>} @wraps(func) def inner(*args, **kwds): with self._recreate_cm(): > return func(*args, **kwds) E matplotlib.testing.exceptions.ImageComparisonFailure: images not close (RMS 0.002): E result_images/test_axes/errorbar_limits_svg.png E result_images/test_axes/errorbar_limits-expected_svg.png E result_images/test_axes/errorbar_limits_svg-failed-diff.png /opt/hostedtoolcache/Python/3.12.8/arm64/lib/python3.12/contextlib.py:81: ImageComparisonFailure ``` and ``` _____________________________ test_get_font_names ______________________________ [gw3] linux -- Python 3.12.8 /opt/hostedtoolcache/Python/3.12.8/arm64/bin/python @pytest.mark.skipif(sys.platform == 'win32', reason='Linux or OS only') def test_get_font_names(): paths_mpl = [cbook._get_data_path('fonts', subdir) for subdir in ['ttf']] fonts_mpl = findSystemFonts(paths_mpl, fontext='ttf') fonts_system = findSystemFonts(fontext='ttf') ttf_fonts = [] for path in fonts_mpl + fonts_system: try: font = ft2font.FT2Font(path) prop = ttfFontProperty(font) ttf_fonts.append(prop.name) except Exception: pass available_fonts = sorted(list(set(ttf_fonts))) mpl_font_names = sorted(fontManager.get_font_names()) > assert set(available_fonts) == set(mpl_font_names) E AssertionError: assert {'C059', 'D05...ns Mono', ...} == {'C059', 'D05...ns Mono', ...} E E Extra items in the right set: E 'Liberation Serif' E 'Liberation Sans Narrow' E 'Liberation Sans' E 'Liberation Mono' [...] ```
closed
2025-01-20T11:12:23Z
2025-01-21T23:05:49Z
https://github.com/matplotlib/matplotlib/issues/29489
[ "CI: testing" ]
timhoffm
3
Gerapy/Gerapy
django
74
TypeError when adding or modifying users and groups via admin
Front end shows: ![image](https://user-images.githubusercontent.com/27114287/43241572-8fcf3ce8-90ce-11e8-8d31-2303f881caec.png) Back-end error: ![image](https://user-images.githubusercontent.com/27114287/43241652-f6aee4cc-90ce-11e8-8dc6-9d9e90ace82c.png) The main cause is that /server/core/TransformMiddleware converts the format of every request, so a type error occurs in Django's built-in handling. My current fix is to add a check that skips requests coming from admin; preliminary testing shows it works. > class TransformMiddleware(MiddlewareMixin): def __call__(self, request): """ Change request body to str type :param request: :return: """ if isinstance(request.body, bytes): if not 'admin' in request.path: data = getattr(request, '_body', request.body) request._body = data.decode('utf-8') response = self.get_response(request) return response However, this fix only targets admin; I don't know whether there are other similar problems.
open
2018-07-26T04:37:27Z
2018-07-26T04:37:27Z
https://github.com/Gerapy/Gerapy/issues/74
[]
Fesbruk
0
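The guard added in the Gerapy middleware above boils down to "decode the bytes body to str only for non-admin paths". That decision can be isolated in a plain helper and checked without Django; the function name below is illustrative, not part of the project.

```python
def transform_body(path: str, body):
    """Decode a bytes request body to str unless the request targets admin.

    Mirrors the check from the issue: Django's admin views expect the raw
    bytes body, so requests whose path contains 'admin' are left untouched.
    """
    if isinstance(body, bytes) and 'admin' not in path:
        return body.decode('utf-8')
    return body

# Non-admin API request: body is decoded for downstream JSON handling.
print(transform_body('/api/project/', b'{"a": 1}'))   # {"a": 1} (as str)

# Admin request: bytes pass through unchanged.
print(transform_body('/admin/auth/user/', b'data'))   # b'data'
```

As the issue notes, matching on a substring of the path is a blunt check; a tighter guard (e.g. `path.startswith('/admin/')`) would avoid accidentally skipping non-admin URLs that happen to contain "admin".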
MilesCranmer/PySR
scikit-learn
2
[Question] Pure Julia package
Hi, Is there a plan to have a pure Julia API and expose it as a Julia package?
closed
2020-09-24T22:42:20Z
2021-01-18T09:24:06Z
https://github.com/MilesCranmer/PySR/issues/2
[ "implemented" ]
sheevy
13
hbldh/bleak
asyncio
1419
Qt timer can't stop bleak's notify.
* bleak version:0.20.0 * Python version: 3.11.4 * Operating System: windows 11 ### Description I integrated Bleak into a Qt application. I found that when using Qtimer to close the notification, there will be a problem and notify cannot be stopped. ### What I Did My code ```python class MainWindow(QMainWindow): # device discovery signal devices_discovery_signal = Signal() devices_discovery_end_signal = Signal() connect_signal = Signal() disconnect_signal = Signal() start_notify_signal = Signal() start_notify_end_signal = Signal() stop_notify_signal = Signal() stop_notify_end_signal = Signal() def __init__(self): super().__init__() self.devices = [] self.devices_addr = [] self.ui = Ui_VTCSensorEvaluation() self.ui.setupUi(self) self.ui.pushButton_3.clicked.connect(self.async_start_device_discovery) self.ui.pushButton.clicked.connect(self.connect_device_click) self.ui.pushButton_2.clicked.connect(self.start_notify_click) self.ui.pushButton_2.setEnabled(False) self.myclass = myclass() self.update_timer = QTimer(self) self.ui.pushButton_5.clicked.connect(self.calculate_data_bias) self.bias_num = 0 self.call_num = 0 self.Ble = Ble_inst(self.update_device_list ,self.connect_status_update,self.receive_data_cb) self.ui.ini_ui() def update_data_timer_func(self): datas = self.myclass.ret_cali_data() if(datas == None): return self.ui.sensor_message_update("acc = {},{},{}".format(round(datas[0],2),round(datas[1],2),round(datas[2],2))) self.ui.sensor_message_update("mag = {},{},{}".format(round(datas[3],2),round(datas[4],2),round(datas[5],2))) self.ui.sensor_message_update("gyro = {},{},{}".format(round(datas[6],2),round(datas[7],2),round(datas[8],2))) self.ui.sensor_message_update("") return def close_notify(self): self.start_notify_end_signal.emit() self.stop_notify_signal.emit() self.stop_notify_end_signal.emit() return def update_device_list(self,dev_list): self.ui.common_message_update("search the device") self.ui.comboBox.clear() self.ui.comboBox.addItems(dev_list) self.devices_discovery_end_signal.emit() return @Slot() def async_start_device_discovery(self): self.devices_discovery_signal.emit() return def connect_status_update(self,flag): self.ui.pushButton.setEnabled(flag) self.ui.pushButton_2.setEnabled(not(flag)) self.ui.pushButton_2.setText("Run Compass") if(flag == False): self.ui.common_message_update("Connect Successfully....") else : self.ui.common_message_update("Connect Failed.....") return def receive_data_cb(self,ax,ay,az,mx,my,mz,gx,gy,gz): self.myclass.do_cali_data(ax,ay,az,mx,my,mz,gx,gy,gz) # print("acc data:",round(ax,2),round(ay,2),round(az,2)) # print("mag data:",round(mx,2),round(my,2),round(mz,2)) # print("gyro data:",round(gx,2),round(gy,2),round(gz,2)) return def connect_device_click(self): self.ui.pushButton.setEnabled(False) ind = self.ui.comboBox.currentIndex() if(ind == 0): return self.Ble.set_con_index(ind) self.ui.common_message_update("connect the device({})".format(self.ui.comboBox.currentText())) self.connect_signal.emit() return def start_notify_click(self): print("click Run") if(self.ui.pushButton_2.text() == "Run Compass"): print("Run select A") self.ui.common_message_update("start the notification....") self.stop_notify_end_signal.emit() self.start_notify_signal.emit() self.ui.pushButton_2.setText("Stop Compass") self.update_timer.timeout.connect(self.update_data_timer_func) self.update_timer.start(10) # self.start_notify_end_signal.emit() elif(self.ui.pushButton_2.text() == "Stop Compass"): print("Run select B") self.ui.common_message_update("stop the notification....") self.close_notify() self.ui.pushButton_2.setText("Run Compass") self.update_timer.stop() return def calculate_data_bias(self): tmp_txt = self.ui.plainTextEdit_30.toPlainText() try: self.bias_num = int(tmp_txt) if(self.bias_num > 1024): self.ui.common_message_update("The number is bigger than 1024.") return if(self.bias_num <= 0): self.ui.common_message_update("The number is less than 1.") return except: self.ui.common_message_update("Text cannot be converted to a number") return self.call_num = 0 self.myclass.init_bias_container() self.start_notify_signal.emit() self.update_timer.timeout.connect(self.close_notify) self.update_timer.start(1000) return class Ble_inst(QObject): def __init__(self, update_func ,connect_status_func,rec_func): super().__init__() self.devices_addr = [] self.devices = [] self.client = bleak.BleakClient("FF:FF:FF:FF:FF:FF") self.list_upadte = update_func self.con_status_func = connect_status_func self.con_index = -1 self.recieve_data = rec_func return async def add_device(self): devices = await bleak.BleakScanner.discover() self.devices = [""] self.devices_addr = [""] for d in devices: if(d.name != None): self.devices.append(d.name) self.devices_addr.append(d.address) print(d.name) self.list_upadte(self.devices) def disconnect_cb(self,client): self.con_status_func(True) print("connect loss....") return def set_con_index(self,index): self.con_index = index return async def connect_device_func(self): self.client = bleak.BleakClient(self.devices_addr[self.con_index],disconnected_callback=self.disconnect_cb) await self.client.connect() if(self.client.is_connected): self.con_status_func(False) return def receive_notify_cb(self,uuid,barray): ........ return async def stop_notify(self): print("stop notify....") await self.client.stop_notify(target_char) return async def start_notify(self): print("start notify.....") await self.client.start_notify(target_char,self.receive_notify_cb) return ``` ### Logs Error Log ```console Task exception was never retrieved future: <Task finished name='Task-38' coro=<Ble_inst.stop_notify() done, defined at d:\workstation\VTC_display\Ble_instance.py:71> exception=KeyError(15)> Traceback (most recent call last): File "d:\workstation\VTC_display\Ble_instance.py", line 73, in stop_notify await self.client.stop_notify(target_char) File "C:\Users\isly8\anaconda3\Lib\site-packages\bleak\__init__.py", line 852, in stop_notify await self._backend.stop_notify(char_specifier) File "C:\Users\isly8\anaconda3\Lib\site-packages\bleak\backends\winrt\client.py", line 1020, in stop_notify event_handler_token = self._notification_callbacks.pop(characteristic.handle) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ KeyError: 15 Task exception was never retrieved future: <Task finished name='Task-39' coro=<Ble_inst.stop_notify() done, defined at d:\workstation\VTC_display\Ble_instance.py:71> exception=KeyError(15)> Traceback (most recent call last): File "d:\workstation\VTC_display\Ble_instance.py", line 73, in stop_notify await self.client.stop_notify(target_char) File "C:\Users\isly8\anaconda3\Lib\site-packages\bleak\__init__.py", line 852, in stop_notify await self._backend.stop_notify(char_specifier) File "C:\Users\isly8\anaconda3\Lib\site-packages\bleak\backends\winrt\client.py", line 1020, in stop_notify event_handler_token = self._notification_callbacks.pop(characteristic.handle) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ KeyError: 15 ```
closed
2023-09-15T01:26:02Z
2023-09-15T01:48:48Z
https://github.com/hbldh/bleak/issues/1419
[]
ChienHao-Hung
0
microsoft/nni
pytorch
5019
where is scripts.compression_mnist_model
**Describe the issue**: When I tried to run the demo from the doc here https://nni.readthedocs.io/en/stable/tutorials/quantization_speedup.html, I could not find `scripts.compression_mnist_model` in `from scripts.compression_mnist_model import TorchModel, trainer, evaluator, device, test_trt` **Environment**: - NNI version: 2.8 - Training service (local|remote|pai|aml|etc): - Client OS: linux - Server OS (for remote mode only): - Python version: 3.8 - PyTorch/TensorFlow version: - Is conda/virtualenv/venv used?: - Is running in Docker?: **Configuration**: - Experiment config (remember to remove secrets!): - Search space: **Log message**: - nnimanager.log: - dispatcher.log: - nnictl stdout and stderr: <!-- Where can you find the log files: LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout --> **How to reproduce it?**:
closed
2022-07-25T23:37:38Z
2022-11-23T03:09:31Z
https://github.com/microsoft/nni/issues/5019
[ "user raised", "documentation", "support", "ModelSpeedup", "v2.9.1" ]
james20141606
4
neuml/txtai
nlp
544
Add support for custom scoring instances
Add support for custom scoring instances. See implementations in ANNFactory, DatabaseFactory and GraphFactory.
closed
2023-09-06T21:21:06Z
2023-09-06T21:25:06Z
https://github.com/neuml/txtai/issues/544
[]
davidmezzetti
0
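The factory pattern the txtai issue above references (ANNFactory, DatabaseFactory, GraphFactory) typically resolves a config value to either a built-in implementation name or a dotted path to a custom class. A generic, hedged sketch of that resolution step follows; the names `BUILTIN` and `create_scoring` are illustrative, not txtai's actual code.

```python
import importlib

# Registry of built-in scoring methods (placeholder value for the sketch).
BUILTIN = {"bm25": "BM25Scorer-placeholder"}

def create_scoring(method):
    """Resolve a scoring method setting.

    A known name returns the built-in implementation; otherwise the
    setting is treated as a 'module.Class' path to a custom instance.
    """
    if method in BUILTIN:
        return BUILTIN[method]
    module_name, _, class_name = method.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)()

# Built-in lookup by name.
print(create_scoring("bm25"))                          # BM25Scorer-placeholder

# Custom class resolved from a dotted path (stdlib class used as a demo).
obj = create_scoring("collections.Counter")
print(type(obj).__name__)                              # Counter
```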
LAION-AI/Open-Assistant
python
3630
Add Persian QA Dataset
After fine-tuning on [Farsi data](https://github.com/LAION-AI/Open-Assistant/pull/3629), I think adding QA Datasets like [this one](https://github.com/sajjjadayobi/PersianQA) can be helpful. If the teams want to add support for Farsi, I will be glad to contribute and add this dataset in the standard format.
closed
2023-08-02T10:30:53Z
2023-08-03T19:24:31Z
https://github.com/LAION-AI/Open-Assistant/issues/3630
[]
pourmand1376
0
adbar/trafilatura
web-scraping
712
setup: use `pyproject.toml` file
This is now standard for Python 3.8+ versions.
closed
2024-10-07T10:33:18Z
2024-10-10T11:12:05Z
https://github.com/adbar/trafilatura/issues/712
[ "maintenance" ]
adbar
0
Ehco1996/django-sspanel
django
925
An unknown request cannot be found, returning 404
**Problem description** There is a request of unknown origin that cannot be found, returning 404 **Project configuration file** **How to reproduce** **Related screenshots/log** ![image](https://github.com/Ehco1996/django-sspanel/assets/39944221/407c67d4-6098-436a-8ebe-1edbe14ea3f0) **Other information**
closed
2024-03-06T08:39:24Z
2024-03-10T00:20:20Z
https://github.com/Ehco1996/django-sspanel/issues/925
[]
wangxingsheng
1
tableau/server-client-python
rest-api
861
Extracting "Who has seen the view"?
Hi, I looked through the documentation; maybe it's there, but I couldn't find it. Is there a way to extract the breakdown of "Who has seen this view" for each view? Thanks in advance!
open
2021-07-12T08:23:19Z
2021-11-08T06:13:15Z
https://github.com/tableau/server-client-python/issues/861
[ "enhancement", "Server-Side Enhancement" ]
Zestsx
2
jumpserver/jumpserver
django
14217
[Bug] LDAP configuration reverts automatically
### Product version v4.1.0 ### Edition type - [X] Community - [ ] Enterprise - [ ] Enterprise trial ### Installation method - [ ] Online install (one-command install) - [ ] Offline package install - [ ] All-in-One - [ ] 1Panel - [X] Kubernetes - [ ] Source install ### Environment info Installed via the helm chart ### 🐛 Bug description After logging in with the admin account, configure LDAP login. Fill in the configuration, enable the LDAP feature, and submit. The connection test works and returns the user count; testing an LDAP user login reports Authentication failed (before login check failed): not enabled auth ldap. Refreshing the page shows the LDAP switch has been turned off and the configuration has been reverted. ### Steps to reproduce After logging in with the admin account, configure LDAP login. Fill in the configuration, enable the LDAP feature, and submit. The connection test works and returns the user count; testing an LDAP user login reports Authentication failed (before login check failed): not enabled auth ldap. Refreshing the page shows the LDAP switch has been turned off and the configuration has been reverted. ### Expected result LDAP settings can be saved correctly and are not reverted. ### Additional info _No response_ ### Attempted solutions _No response_
closed
2024-09-23T06:45:24Z
2024-09-26T10:01:03Z
https://github.com/jumpserver/jumpserver/issues/14217
[ "🐛 Bug" ]
yulinor
6
strawberry-graphql/strawberry
asyncio
3430
Schema basics docs
https://strawberry.rocks/docs/general/schema-basics returns a 500 error
closed
2024-04-02T08:58:46Z
2025-03-20T15:56:39Z
https://github.com/strawberry-graphql/strawberry/issues/3430
[ "bug" ]
lorddaedra
2
dmlc/gluon-cv
computer-vision
1582
Top-1 accuracy on UCF101 dataset is bad?
I trained `i3d_nl5_resnet50_v1` on the `UCF101` dataset with `Kinetics400` pretraining; the accuracy is `top-1=85.2%, top-5=95.4%`. Params: `clip_len=32, input=224x224, lr=0.01, batch_size=8`. It seems not so good; what is the possible reason?
closed
2021-01-07T03:19:58Z
2021-01-08T02:41:31Z
https://github.com/dmlc/gluon-cv/issues/1582
[]
Tramac
3
HIT-SCIR/ltp
nlp
261
After compiling, segmentor_jni.so does not appear in the lib directory?
After ltp finishes compiling there is a lib directory containing many .so files, but the .so files used by ltp4j, such as segmentor_jni.so, are nowhere to be seen. Why is that? Is it because ltp4j hasn't been maintained for a long time?
closed
2017-11-11T16:07:55Z
2018-01-23T12:56:06Z
https://github.com/HIT-SCIR/ltp/issues/261
[]
ambjlon
1
PeterL1n/RobustVideoMatting
computer-vision
127
CUA
closed
2022-01-12T11:35:10Z
2022-01-14T06:39:23Z
https://github.com/PeterL1n/RobustVideoMatting/issues/127
[]
HarrytheOrange
0
plotly/dash-core-components
dash
475
box plot issue with dcc.Graph?
Getting a few reports in the community forum that look valid, but I haven't attempted reproducing yet: https://community.plot.ly/t/boxplot-in-dash/19623/4
open
2019-03-05T15:00:28Z
2019-03-05T15:00:28Z
https://github.com/plotly/dash-core-components/issues/475
[]
chriddyp
0
gradio-app/gradio
python
10255
Bug of gr.ImageMask save image
### Describe the bug Hi, author. I hit a bug when using gr.ImageMask and hope you can fix it. I set the width and height of ImageMask, and when saving the image from gr.ImageMask on the page, the image size is compressed instead of keeping the width and height of the original upload. ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Reproduction ```python import gradio as gr with gr.Blocks() as demo: with gr.Row(): im = gr.ImageMask( type="numpy", interactive=True, height=150, width=500 ) ``` ### Screenshot _No response_ ### Logs _No response_ ### System Info ```shell gradio:5.9.1 ``` ### Severity I can work around it
closed
2024-12-26T09:42:37Z
2025-01-22T18:35:03Z
https://github.com/gradio-app/gradio/issues/10255
[ "bug", "🖼️ ImageEditor" ]
yaosheng216
1
satwikkansal/wtfpython
python
137
Is copy or reference in self recursion?
Hello, I did not find this behavior covered in this repository, so I am opening this issue. My CPython version is 3.7.1 - list recursion (reference) ```python class C: def f1(self, a): if a[0][0] == 0: return a else: a[0][0] -= 1 self.f1(a) return a print(C().f1([[5, 2],[3, 4]])) ``` result: ```bash >>> [[0, 2], [3, 4]] ``` - variable recursion (copy) ```python class C: def f1(self, a): if a == 0: return a else: a -= 1 self.f1(a) return a print(C().f1(3)) ``` result ```bash >>> 2 ``` The same behavior occurs with plain functions, too. Thank you.
closed
2019-09-09T04:14:31Z
2019-12-21T17:08:04Z
https://github.com/satwikkansal/wtfpython/issues/137
[ "new snippets" ]
daidai21
1
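The difference the wtfpython question above observes is Python's pass-by-object-reference: mutating a list through a parameter is visible to the caller, while rebinding an int (`a -= 1`) only changes the local name. A minimal demonstration without the class wrapper:

```python
def mutate_list(a):
    a[0][0] -= 1      # in-place mutation: the caller's object changes
    return a

def rebind_int(a):
    a -= 1            # rebinds the local name only; caller is untouched
    return a

grid = [[5, 2], [3, 4]]
mutate_list(grid)
print(grid)           # [[4, 2], [3, 4]] — the original list was mutated

n = 3
rebind_int(n)
print(n)              # 3 — ints are immutable; n was never changed
```

This also explains both results in the issue: the first snippet decrements `a[0][0]` all the way to 0 because every recursive call mutates the same list, while in the second snippet the return values of the recursive calls are discarded, so `f1(3)` returns 2 (only the outermost `a -= 1` matters).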
plotly/dash-cytoscape
plotly
175
Graph nodes flocking to single point
#### Description Nodes in graphs are shown correctly for a second before flocking to a single point, usually happens when inserting Cyto-components using callbacks. It is independent of any particular layout. This happens often. Looks like this: <img src="https://i.ibb.co/6WV2BF3/Screenshot-2022-05-17-at-16-15-32.png" alt="Screenshot-2022-05-17-at-16-15-32" border="0"> #### Steps/Code to Reproduce Not deterministic. Happens when having multiple graphs, shown at different times in the Dash app. #### Expected Results A graph with properly placed nodes. #### Actual Results This. <img src="https://i.ibb.co/6WV2BF3/Screenshot-2022-05-17-at-16-15-32.png" alt="Screenshot-2022-05-17-at-16-15-32" border="0"> #### Versions ``` Dash 2.3.1 /Users/niels/Documents/GitHub/test/test.py:3: UserWarning: The dash_html_components package is deprecated. Please replace `import dash_html_components as html` with `from dash import html` import dash_html_components; print("Dash Core Components", dash_html_components.__version__) Dash Core Components 2.0.2 /Users/niels/Documents/GitHub/test/test.py:4: UserWarning: The dash_core_components package is deprecated. Please replace `import dash_core_components as dcc` with `from dash import dcc` import dash_core_components; print("Dash HTML Components", dash_core_components.__version__) Dash HTML Components 2.3.0 Dash Renderer 1.9.1 Dash HTML Components 0.2.0 ```
open
2022-05-17T14:21:44Z
2022-05-24T15:48:35Z
https://github.com/plotly/dash-cytoscape/issues/175
[]
nilq
3
aio-libs/aiomysql
asyncio
979
aiomysql raise InvalidRequestError: Cannot release a connection with not finished transaction
### Describe the bug When the database connection runs out and a new request is sent, an error occurd and the whole application goes down! in the aiomysql/utils.py, line 137 ```python3 async def __aexit__(self, exc_type, exc, tb): try: await self._pool.release(self._conn) finally: self._pool = None self._conn = None ``` why release the whole connection pool? ### To Reproduce ```python3 import asyncio import base64 import aiohttp.web_routedef from aiohttp import web from aiomysql.sa import create_engine route = aiohttp.web.RouteTableDef() @route.get('/') async def test(request): async with request.app.db.acquire() as conn: trans = await conn.begin() await asyncio.sleep(60) await trans.commit() return web.Response(text="ok") @route.get('/test') async def test(request): async with request.app.db.acquire() as conn: trans = await conn.begin() await trans.close() return web.Response(text="not ok") async def mysql_context(app): engine = await create_engine( db='fmp_new', user='root', password="lyp82nlf", host="localhost", maxsize=1, echo=True, pool_recycle=1 ) app.db = engine yield app.db.close() await app.db.wait_closed() async def init_app(): app = web.Application() app.add_routes(route) app.cleanup_ctx.append(mysql_context) return app def main(): app = init_app() web.run_app(app, port=9999) if __name__ == '__main__': main() ``` ### Expected behavior when a new request arrives , If the connection is exhausted, then wait for it release ### Logs/tracebacks ```python-traceback Error handling request Traceback (most recent call last): File "test_aiomysql.py", line 20, in test await asyncio.sleep(60) File "/usr/lib/python3.7/asyncio/tasks.py", line 595, in sleep return await future concurrent.futures._base.CancelledError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/yangshen/Envs/fmp_37/lib/python3.7/site-packages/aiohttp/web_protocol.py", line 422, in _handle_request resp = await 
self._request_handler(request) File "/home/yangshen/Envs/fmp_37/lib/python3.7/site-packages/aiohttp/web_app.py", line 499, in _handle resp = await handler(request) File "test_aiomysql.py", line 21, in test await trans.commit() File "/home/yangshen/Envs/fmp_37/lib/python3.7/site-packages/aiomysql/utils.py", line 139, in __aexit__ await self._pool.release(self._conn) File "/home/yangshen/Envs/fmp_37/lib/python3.7/site-packages/aiomysql/sa/engine.py", line 163, in release raise InvalidRequestError("Cannot release a connection with " aiomysql.sa.exc.InvalidRequestError: Cannot release a connection with not finished transaction ``` ### Python Version ```console $ python --version 3.7.5 ``` ### aiomysql Version ```console $ python -m pip show aiomysql 0.1.0 ``` ### PyMySQL Version ```console $ python -m pip show PyMySQL 1.1.0 ``` ### SQLAlchemy Version ```console $ python -m pip show sqlalchemy 1.4.49 ``` ### OS ubuntu 18.04 ### Database type and version ```console 5.7.39-0ubuntu0.18.04.2 ``` ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow the aio-libs Code of Conduct
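For what it's worth, the waiting behavior the report asks for can be sketched with a toy pool built on `asyncio.Semaphore` — all names here are hypothetical stand-ins, not the real aiomysql API: when the pool is exhausted, `acquire()` simply waits until another handler releases its connection instead of raising.

```python
import asyncio

class Pool:
    """Toy pool sketch: acquire() waits when exhausted instead of raising.

    Hypothetical names for illustration only -- not the real aiomysql API.
    """

    def __init__(self, maxsize=1):
        self._sem = asyncio.Semaphore(maxsize)

    async def acquire(self):
        await self._sem.acquire()  # block until another handler releases
        return object()            # stand-in for a connection object

    def release(self, conn):
        self._sem.release()

async def handler(pool, results):
    conn = await pool.acquire()
    try:
        await asyncio.sleep(0.01)  # pretend to run a transaction
        results.append("ok")
    finally:
        pool.release(conn)         # transaction is finished before release

async def main():
    pool, results = Pool(maxsize=1), []
    # two concurrent requests against a pool of one: the second one waits
    await asyncio.gather(handler(pool, results), handler(pool, results))
    return results

print(asyncio.run(main()))  # ['ok', 'ok']
```

Because the semaphore queues waiters, exhausting the pool never takes the application down; the second handler just blocks until the first finishes its transaction and releases.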
open
2024-03-07T09:01:24Z
2024-03-07T09:06:15Z
https://github.com/aio-libs/aiomysql/issues/979
[ "bug" ]
ShownYoung
0
KevinMusgrave/pytorch-metric-learning
computer-vision
370
Does the library support loading data from torch.utils.data.Dataset?
closed
2021-10-05T07:14:17Z
2021-10-05T09:49:20Z
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/370
[]
KennyTC
1
wandb/wandb
tensorflow
9,198
[Bug-App]: Sweep UI (Parallel Coordinates Plot, Parameter Importance Plot, etc.) Missing in W&B Dashboard
### Describe the bug Description: I ran the following code from the sweep tutorial using the W&B library. While the logged score is visible on the W&B website, I cannot see the Sweep UI features such as the parallel coordinates plot or parameter importance plot. Code: ``` # Import the W&B Python Library and log into W&B import wandb wandb.login() # 1: Define objective/training function def objective(config): score = config.x**3 + config.y return score def main(): wandb.init(project="my-first-sweep") score = objective(wandb.config) wandb.log({"score": score}) # 2: Define the search space sweep_configuration = { "method": "random", "metric": {"goal": "minimize", "name": "score"}, "parameters": { "x": {"max": 0.1, "min": 0.01}, "y": {"values": [1, 3, 7]}, }, } # 3: Start the sweep sweep_id = wandb.sweep(sweep=sweep_configuration, project="my-first-sweep") wandb.agent(sweep_id, function=main, count=10) ``` Here is the capture of the sweep web UI. <img width="1845" alt="Image" src="https://github.com/user-attachments/assets/5e15e4fc-236c-407d-a374-0016c91058d4" />
closed
2025-01-07T12:15:07Z
2025-01-09T12:36:42Z
https://github.com/wandb/wandb/issues/9198
[ "ty:bug", "c:sweeps", "a:app" ]
hspark1212
2
tartiflette/tartiflette
graphql
549
Setup/teardown is failing when doing automated tests
This is a bug report about doing setup/teardown for running automated tests with Tartiflette (and tartiflette-aiohttp). It seems that the first test succeeds, but subsequent tests fail. I've created a repository with a minimal reproducible test case, which is here: https://github.com/singingwolfboy/tartiflette-test-bug * [x] **Explain with a simple sentence the expected behavior** (above) * [x] **Tartiflette version:** 1.4.1 * [x] **Python version:** 3.10.0 * [x] **Executed in docker:** Yes * [x] **Dockerfile sample:** https://github.com/singingwolfboy/tartiflette-test-bug/blob/main/Dockerfile * [x] **GraphQL Schema & Query:** unrelated * [x] **Is it a regression from a previous versions?** unknown * [x] **Stack trace**: in the README of the repo (click on "Here's an example of the test failure output" to see it)
open
2021-12-02T17:54:09Z
2021-12-02T17:54:09Z
https://github.com/tartiflette/tartiflette/issues/549
[]
singingwolfboy
0
httpie/cli
python
966
[Docs] `cv@` & `=@` are getting replaced in docsite with [email-protected]
![image (3)](https://user-images.githubusercontent.com/1513265/93520987-5b46c380-f8fd-11ea-8955-1f32519f5cbb.png) Clicking on the link in `[email-protected]` takes me to https://httpie.org/cdn-cgi/l/email-protection
closed
2020-09-17T19:50:28Z
2020-09-20T06:36:16Z
https://github.com/httpie/cli/issues/966
[]
jskrzypek
1
netbox-community/netbox
django
17,940
"Tagged VLANs" multiselect in Interface incomplete
### Deployment Type Self-hosted ### Triage priority N/A ### NetBox Version v4.1.4 ### Python Version 3.10 ### Steps to Reproduce Create more than 100 VLANs. Create a Device with an Interface. Set the 802.1q mode to tagged. Open the "Tagged VLANs" multiselect and select VLANs. ### Expected Behavior All existing VLANs show up and can be selected. ### Observed Behavior VLANS 1 to 100 show up. VLAN 101 and above do not show up.
closed
2024-11-06T09:23:59Z
2025-02-06T03:03:46Z
https://github.com/netbox-community/netbox/issues/17940
[]
georg-again
2
matplotlib/mplfinance
matplotlib
368
Point and figure [pnf] - Reversal param
Hello, Is it possible to change the PNF param "reversal" the same way as I can with "box_size"? `mpf.plot(df, type='pnf', pnf_params=dict(box_size=7))`
closed
2021-04-01T14:54:53Z
2021-04-25T18:06:50Z
https://github.com/matplotlib/mplfinance/issues/368
[ "enhancement", "released" ]
heytechv
4
amisadmin/fastapi-amis-admin
sqlalchemy
179
Error after adding fuzzy-search fields via search_fields
Without search_fields the page displays normally; after adding it the page is blank and raises an error. ![1](https://github.com/user-attachments/assets/2819b6c1-4ea9-4b3b-8592-06e39d3e918a) The variable turns into the [~]$ style. ![Snipaste_2024-08-01_11-25-57](https://github.com/user-attachments/assets/13cb1cc4-cdf1-4641-ae7d-8d45f2af851d)
closed
2024-07-25T09:01:58Z
2024-08-02T02:12:03Z
https://github.com/amisadmin/fastapi-amis-admin/issues/179
[]
zeroChen00
1
pandas-dev/pandas
python
60,656
ENH: Add `date_format` and `date_unit` to `to_dict` similar to what exists in `to_json`
### Feature Type - [X] Adding new functionality to pandas - [ ] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description `df.to_dict` is often (maybe even mostly) used to post data via a REST API. My usual workflow for posting data looks like this: ``` df = pd.DataFrame({"date": pd.date_range("2021-01-01", "2024-01-01")}) df["values"] = 1 df["date"] = df["date"].map(lambda x: x.isoformat()) records = df.to_dict("records") payload = {"data": records, "some_key": "some_value"} response = requests.post(url, json=payload) ``` Ideally, the datetime-to-string conversion could happen inside of `to_dict` via optional `date_format` and `date_unit` parameters similar to `df.to_json`. However, `df.to_json` is not a suitable alternative because often I need to add an additional key as seen above. If this seems like an appropriate new feature, I will dig into how to implement this. ### Feature Description Add 2 new parameters to `to_dict`: 1. `date_format`. Possible values: None, 'epoch', 'iso' (as in `to_json`). Default: None. If None, do nothing. 2. `date_unit`. One of 's', 'ms', 'us', 'ns'. Default: 'ms'. ### Alternative Solutions None really ### Additional Context Code for to_dict: https://github.com/pandas-dev/pandas/blob/1be26374dd7ef43bc709c4bc6db2daf7bfd606c8/pandas/core/methods/to_dict.py#L56 to_json: https://github.com/pandas-dev/pandas/blob/v2.2.3/pandas/core/generic.py#L2428-L2717
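To make the asymmetry concrete, here is a runnable sketch comparing `to_json`'s built-in date handling with the manual step `to_dict` currently requires (the network call is omitted, and pandas is assumed to be installed):

```python
import json
import pandas as pd

df = pd.DataFrame({"date": pd.date_range("2021-01-01", periods=2), "values": [1, 1]})

# to_json already accepts date_format and date_unit...
as_json = df.to_json(orient="records", date_format="iso")

# ...while to_dict needs a manual map() before the records are serializable
df["date"] = df["date"].map(lambda ts: ts.isoformat())
records = df.to_dict("records")
payload = {"data": records, "some_key": "some_value"}  # extra key lives beside the records

print(json.loads(as_json)[0])         # date already stringified by to_json
print(records[0]["date"])             # '2021-01-01T00:00:00' via the manual step
```

The proposal would fold the `map()` line into `to_dict` itself, e.g. `df.to_dict("records", date_format="iso")`.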
open
2025-01-04T10:06:33Z
2025-01-04T14:19:10Z
https://github.com/pandas-dev/pandas/issues/60656
[ "Enhancement", "IO Data", "Closing Candidate" ]
lucasjamar
1
huggingface/datasets
computer-vision
6,689
.load_dataset() method defaults to zstandard
### Describe the bug Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets. This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it happens on datasets that are uploaded in json format too, meaning the dataset loader will attempt to convert the data to a zstandard compatible format, and THEN try to unpackage it. My 4tb drive runs out of room when using zstandard on slimpajama. It loads fine on 1.5tb when using json, however I lack the understanding of the "magic numbers" system used to select the unpackaging algorithm, so I can't push a change myself. Commenting out this line in "/datasets/utils/extract.py" fixes the issue, and causes SlimPajama to properly extract using rational amounts of storage, however it completely disables zstandard, which is probably undesirable behavior. Someone with an understanding of the "magic numbers" system should probably take a pass over this issue. ``` class Extractor: # Put zip file to the last, b/c it is possible wrongly detected as zip (I guess it means: as tar or gzip) extractors: Dict[str, Type[BaseExtractor]] = { "tar": TarExtractor, "gzip": GzipExtractor, "zip": ZipExtractor, "xz": XzExtractor, #"zstd": ZstdExtractor, # This line needs to go, in order for datasets to work w/o non-dependent packages "rar": RarExtractor, "bz2": Bzip2Extractor, "7z": SevenZipExtractor, # <Added version="2.4.0"/> "lz4": Lz4Extractor, # <Added version="2.4.0"/> } ``` ### Steps to reproduce the bug ``` from datasets import load_dataset load_dataset(path="cerebras/SlimPajama-627B") ``` This alone should trigger the error on any system that does not have zstandard pip installed. ### Expected behavior This repository (which is encoded in json format, not zstandard) should check whether zstandard is installed before defaulting to it.
Additionally, using zstandard should not use more than 3x the required space that other extraction mechanisms use. ### Environment info - `datasets` version: 2.17.1 - Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35 - Python version: 3.12.0 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
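A guard like the one the report asks for can be sketched in plain Python — hypothetical names, not the actual `datasets` layout: probe each optional backend with `importlib.util.find_spec` and only register extractors whose backing package is importable, so a missing optional dependency never interrupts `load_dataset()`.

```python
import importlib.util

def available_extractors():
    # Sketch only: register an optional-format extractor only when its
    # backing package is importable (names here are illustrative).
    optional = {"zstd": "zstandard", "rar": "rarfile", "7z": "py7zr", "lz4": "lz4"}
    stdlib_backed = ["tar", "gzip", "zip", "xz", "bz2"]  # always available
    return stdlib_backed + [
        name for name, pkg in optional.items()
        if importlib.util.find_spec(pkg) is not None
    ]

print(available_extractors())  # stdlib formats always listed; optional ones only if installed
```

With this shape, the "magic numbers" sniffing can stay as-is — formats whose package is absent simply never enter the candidate list.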
closed
2024-02-22T17:39:27Z
2024-03-07T14:54:16Z
https://github.com/huggingface/datasets/issues/6689
[]
ElleLeonne
4
scrapfly/scrapfly-scrapers
web-scraping
5
Failed to register account
Reminder: registration requires a work email.
closed
2023-10-12T08:24:41Z
2023-10-12T08:56:40Z
https://github.com/scrapfly/scrapfly-scrapers/issues/5
[ "question" ]
lovelifelovejava
0
microsoft/UFO
automation
129
AttributeError: 'AppAgentProcessor' object has no attribute 'update_step'. Did you mean: 'update_cost'?
I tried setting up and running the UFO first time with a basic task of sending an email to a gmail address with a simple question. UFO does everything as expected until the last step when it asks the user for permission to send the email. That's where I received the error. Full traceback is pasted below: [Input Required:] UFO🛸 will apply click_input(button='left', double=False) on the [Send] item. Please confirm whether to proceed or not. Please input Y or N. Y Traceback (most recent call last): File "C:\Users\anaconda3\envs\ufo\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Users\anaconda3\envs\ufo\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "C:\Users\code\UFO\ufo\__main__.py", line 7, in <module> ufo.main() File "C:\Users\code\UFO\ufo\ufo.py", line 56, in main clients.run_all() File "C:\Users\code\UFO\ufo\module\client.py", line 28, in run_all session.run() File "C:\Users\code\UFO\ufo\module\sessions\session.py", line 103, in run super().run() File "C:\Users\code\UFO\ufo\module\basic.py", line 355, in run round.run() File "C:\Users\code\UFO\ufo\module\basic.py", line 104, in run self.agent.handle(self.context) File "C:\Users\code\UFO\ufo\agents\agent\basic.py", line 224, in handle self.state.handle(self, context) File "C:\Users\code\UFO\ufo\agents\states\app_agent_state.py", line 315, in handle agent.process_resume() File "C:\Users\code\UFO\ufo\agents\agent\basic.py", line 237, in process_resume self.processor.resume() File "C:\Users\code\UFO\ufo\agents\processors\basic.py", line 152, in resume self.update_step() AttributeError: 'AppAgentProcessor' object has no attribute 'update_step'. Did you mean: 'update_cost'? Currently on Windows 11 using Anaconda Virtual Env with python 3.10.15 Also tried python 3.13 but same error. Any assistance would be appreciated, thanks
open
2024-10-27T10:39:11Z
2024-11-23T13:00:25Z
https://github.com/microsoft/UFO/issues/129
[]
HamzaAsiff
8
adbar/trafilatura
web-scraping
786
Slow extraction after recent PRs
Hi @unsleepy22, I just ran tests on the benchmark with `pyinstrument tests/comparison_small.py` and there seems to be an issue with your PR which improve the results but have a major cost in terms of timing (say 3-4x as slow overall). The problem seems to be in `determine_returnstring` > `xmltotxt` > `process_element`. My guess is that a slow test with descendants or ancestors is run recursively although there is no need. The function `is_in_table_cell()` is a potential culprit, its use should be limited to cases where it's absolutely necessary and/or there should be a better way to perform the test after all.
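One classic way to avoid a per-element ancestor walk like `is_in_table_cell()` is to precompute membership in a single pass, making each later check O(1). A rough sketch with stdlib `ElementTree` and illustrative tag names (not trafilatura's actual code):

```python
import xml.etree.ElementTree as ET

def elements_in_cells(root):
    # One traversal instead of a recursive ancestor test per element:
    # collect every element that sits under a <cell>.
    inside = set()
    for cell in root.iter("cell"):
        for el in cell.iter():  # includes the cell itself and all descendants
            inside.add(el)
    return inside

root = ET.fromstring("<row><cell><p>a</p></cell><p>b</p></row>")
cells = elements_in_cells(root)
p_in, p_out = root.find("cell/p"), root.findall("p")[0]
print(p_in in cells, p_out in cells)  # True False
```

The precomputed set is built once per document in `xmltotxt`, so `process_element` never has to climb the tree.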
closed
2025-02-10T16:38:09Z
2025-02-17T16:29:13Z
https://github.com/adbar/trafilatura/issues/786
[ "bug" ]
adbar
0
piskvorky/gensim
data-science
3,295
Get coverage to work properly under Github Actions
See the suggestions here: https://github.com/RaRe-Technologies/gensim/pull/3286#issuecomment-1050561253
open
2022-02-25T07:01:04Z
2022-02-25T07:13:39Z
https://github.com/piskvorky/gensim/issues/3295
[ "housekeeping" ]
mpenkov
0
miguelgrinberg/flasky
flask
148
Fix manage.py path problem on windows
> covdir = os.path.join(basedir, 'tmp/coverage') > COV.html_report(directory=covdir) > print('HTML version: file://%s/index.html' % covdir) There are some problems on Windows, because the path separator is '\\'.
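A hedged sketch of a portable variant, assuming a `basedir` as in the book's code: passing the path segments to `os.path.join` separately picks the right separator per OS, and `pathlib.Path.as_uri()` builds a well-formed file:// URL on both Windows and POSIX.

```python
import os
from pathlib import Path

# Fall back to the working directory when __file__ is unavailable (e.g. a REPL)
basedir = (os.path.abspath(os.path.dirname(__file__))
           if "__file__" in globals() else os.getcwd())

# Join segments separately; never hard-code 'tmp/coverage' with a '/'
covdir = os.path.join(basedir, "tmp", "coverage")

# as_uri() emits forward slashes and the file:// scheme on every platform
print("HTML version: %s/index.html" % Path(covdir).as_uri())
```

On Windows this prints a `file:///C:/...` style URL instead of a broken mix of separators.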
closed
2016-05-29T04:44:46Z
2017-12-10T20:04:19Z
https://github.com/miguelgrinberg/flasky/issues/148
[ "bug" ]
viprs
1
Asabeneh/30-Days-Of-Python
matplotlib
437
Python Program
closed
2023-08-23T10:27:38Z
2023-08-23T10:28:38Z
https://github.com/Asabeneh/30-Days-Of-Python/issues/437
[]
xozayn
0
suitenumerique/docs
django
332
Web accessibility problems on document edition page
## Bug Report **Problematic behavior** Here is the detailed analysis by Antoine : https://www.loom.com/share/3c9642546c2c4e5391b2ce04a5c3df93
open
2024-10-14T08:10:00Z
2024-12-11T14:11:07Z
https://github.com/suitenumerique/docs/issues/332
[ "good first issue", "frontend" ]
virgile-dev
1
tfranzel/drf-spectacular
rest-api
904
How can I use pagination in extend_schema_field?
I have a field in my serializer: ``` messages = serializers.SerializerMethodField() ``` and corresponding method: ``` def get_messages(self, obj): ``` that returns a list of paginated messages. I use `LimitOffsetPagination` class and my custom `MessageSerializer` serializer. I wish to define a type for this method, by using `extend_schema_field` decorator. what I've done is: ``` @extend_schema_field(MessageSerializer(many=True)) def get_messages(self, obj): ``` but this type is incorrect because data returned by this method is paginated. How can I declare the type for this method taking into the account paginator class?
closed
2022-12-23T09:18:55Z
2022-12-23T16:06:27Z
https://github.com/tfranzel/drf-spectacular/issues/904
[]
adybionka
2
coqui-ai/TTS
deep-learning
2,350
ValueError(" [!] Unkown encoder type.")
### Describe the bug I am trying to load the pretrained glowtts model from the repo but I am stuck with this error when ***setup_model()*** function runs. ***ValueError(" [!] Unkown encoder type.")*** ### To Reproduce to reproduce simply load the config for the glowtts model and run setup_model: config = load_config(path_to_config) model = setup_model(config) ### Expected behavior Traceback (most recent call last): File "C:\TTS\run_glowtts_hifigan.py", line 72, in <module> model = setup_model(C) File "c:\TTS\tts\models\__init__.py", line 13, in setup_model model = MyModel.init_from_config(config, samples) File "c:\tts\TTS\tts\models\glow_tts.py", line 557, in init_from_config return GlowTTS(new_config, ap, tokenizer, speaker_manager) File "c:\TTS\tts\models\glow_tts.py", line 80, in __init__ self.encoder = Encoder( File "c:\TTS\tts\layers\glow_tts\encoder.py", line 132, in __init__ raise ValueError(" [!] Unkown encoder type.") ValueError: [!] Unkown encoder type. ### Logs _No response_ ### Environment ```shell Env: Cuda 11.6 Conda 4.5.12 Python 3.8 Pytorch 1.13.1 TTS version 0.11.1 ``` ### Additional context Anyone who can explain this behavior?
closed
2023-02-16T15:34:23Z
2023-03-25T23:58:54Z
https://github.com/coqui-ai/TTS/issues/2350
[ "bug", "wontfix" ]
nicholasguimaraes
1
onnx/onnx
deep-learning
5,869
Cannot install on windows 10 with pip - `test_data_set_0` folder is missing
# Bug Report ### Is the issue related to model conversion? <!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. --> ### Describe the bug <!-- Please describe the bug clearly and concisely --> When trying to install it via `pip install onnx`, I get the following error: ``` ERROR: Could not install packages due to an OSError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\hrger\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python310\\site-packages\\onnx\\backend\\test\\data\\node\\test_averagepool_3d_dilations_large_count_include_pad_is_0_ceil_mode_is_False\\test_data_set_0' ``` upon `cd`ing to `C:\\Users\\hrger\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python310\\site-packages\\onnx\\backend\\test\\data\\node\\test_averagepool_3d_dilations_large_count_include_pad_is_0_ceil_mode_is_False`, it has only one file `model.onnx`: ``` Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 19/01/2024 20:05 303 model.onnx ``` ### System information <!-- - OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): - ONNX version (*e.g. 1.13*): - Python version: - GCC/Compiler version (if compiling from source): - CMake version: - Protobuf version: - Visual Studio version (if applicable):--> - OS Platform and Distribution: WIndows 10 Professional 22H2 - ONNX version: 1.15.0 - Python version: 3.10.11 ### Reproduction instructions <!-- - Describe the code to reproduce the behavior. ``` import onnx model = onnx.load('model.onnx') ... 
``` - Attach the ONNX model to the issue (where applicable)--> `pip install onnx` ### Expected behavior <!-- A clear and concise description of what you expected to happen. --> It should install successfully. ### Notes <!-- Any additional information -->
closed
2024-01-19T20:11:58Z
2024-01-25T14:20:05Z
https://github.com/onnx/onnx/issues/5869
[ "bug" ]
Grsz
3
plotly/dash
dash
2,497
select certain columns / rows that are False when cell_selectable=False
When I use DataTable with cell_selectable=False, no cells are selectable. I am wondering if we can have an option to make only certain columns/rows non-selectable. In my case, I want only one column to be selectable; the other columns should be disabled. Using dash 2.9
open
2023-04-05T05:42:18Z
2024-08-13T19:31:23Z
https://github.com/plotly/dash/issues/2497
[ "feature", "P3" ]
slyin87
0
chiphuyen/stanford-tensorflow-tutorials
tensorflow
30
advise to add a Jupyter Notebook
Hi, here is a suggestion: how about adding a Jupyter Notebook version in the examples path?
open
2017-06-18T09:05:43Z
2017-07-11T17:47:48Z
https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/30
[]
DoneHome
1
satwikkansal/wtfpython
python
219
Explain meaning of asterisk in heading titles
A lot of the headings end in a asterisk e.g. "First things first! *" does but "Strings can be tricky sometimes" does not. This suggests a footnote but after quite a bit of searching I can't find one. Perhaps it means something else like "new" or "reader contributed". Please explain on the page otherwise there's not much point it being there and it's a bit frustrating.
closed
2020-08-24T09:24:05Z
2020-08-24T14:41:05Z
https://github.com/satwikkansal/wtfpython/issues/219
[]
arthur-tacca
1
onnx/onnx
pytorch
6,168
Any operation to convert "Tile" to "Slice"
I am working on some Low-Rank Adaptation (LoRA) models, in which two sets of `torch.nn.Parameter` are initialized and then multiplied into a complete weight matrix of the Conv/Attention during inference. I noticed that the weight of the original Conv is converted into the `Slice` type, while the `torch.nn.Parameter`s designed for the LoRA Conv/Attention are converted into the `Tile` type. Is it possible to convert the self-designed `torch.nn.Parameter`s into the `Slice` type, or is it possible to convert any `Tile` type into the `Slice` type?
closed
2024-06-11T07:29:08Z
2024-07-10T05:41:22Z
https://github.com/onnx/onnx/issues/6168
[ "question" ]
zw-xxx
2
pyg-team/pytorch_geometric
deep-learning
9,734
`SetTransformerAggregation` has parameter incorrectly written
### 📚 Describe the documentation issue The page for the `nn.aggr.SetTransformerAggregation` class has a typo, where the parameter `layer_norm` is shown as `norm`. ### Suggest a potential alternative/fix I will open a PR with an edit to that particular docstring.
closed
2024-10-25T19:20:18Z
2024-10-28T13:59:57Z
https://github.com/pyg-team/pytorch_geometric/issues/9734
[ "documentation" ]
eurunuela
1
BeanieODM/beanie
pydantic
922
Concerns and Suggestions Regarding Beanie Library
I wanted to raise some concerns regarding the current state of the Beanie library, particularly in its role as an async ODM in Python. As a user of the library, I've noticed that the maintainer seems to be unavailable, which has raised some concerns about its suitability for use in our production environment. Additionally, I've observed that there are numerous open issues and pull requests that often take a month or longer to be reviewed. In light of these observations, I believe it might be beneficial to explore some potential solutions to improve the maintenance and responsiveness of the project. Here are a few suggestions: 1. **Consider Turning the Project into an Organization:** By transitioning the project to an organization, it could attract more contributors and potentially alleviate the burden on a single maintainer. 2. **Engage with MongoDB:** Since Beanie is closely related to MongoDB, it might be worthwhile to reach out to the MongoDB community for support or collaboration opportunities. 3. **Expand the Maintainer Team:** Adding more maintainers to the project could help distribute the workload and improve the speed of issue resolution and pull request reviews. I believe implementing one or more of these suggestions could greatly benefit the Beanie library and its users. I'm eager to hear your thoughts and discuss potential next steps.
closed
2024-04-23T17:43:49Z
2024-10-08T17:06:22Z
https://github.com/BeanieODM/beanie/issues/922
[ "Stale" ]
alm0ra
10
seleniumbase/SeleniumBase
pytest
3,351
how can I keep the browser open and keep it open?
How can I keep the browser open between operations, so that I can continue follow-up operations without having to restart the browser every time?
closed
2024-12-19T04:23:13Z
2025-01-12T14:16:55Z
https://github.com/seleniumbase/SeleniumBase/issues/3351
[ "duplicate", "question" ]
quyunet
6
ymcui/Chinese-BERT-wwm
nlp
111
Can the MaskedLM head be open-sourced?
The models currently provided only include the BERT model after WWM fine-tuning. Could you also provide the linear head of the MLM used for fine-tuning in the paper?
closed
2020-04-27T00:45:34Z
2020-04-28T16:46:09Z
https://github.com/ymcui/Chinese-BERT-wwm/issues/111
[]
snakeztc
3
ckan/ckan
api
8,409
Accessibility: <a> tags without href
Hi, there are a couple of `<a>` tags without `href` attributes in some of the templates. For example, https://github.com/ckan/ckan/blob/5117fbaeee5a60cf62e42abde2dcef19c747505a/ckan/templates/organization/snippets/info.html#L67 As per https://accessibleweb.com/question-answer/link-element-still-accessible-without-href/ > Links that are built with anchor elements (`<a>`) are not accessible without an `href` value. The `href` value determines what page or content a user will be directed to once the link is activated. If the `href` value is left blank, then the link may appear to be broken to users and may also cause confusing screen reader announcements. Instead of using `<a>` for these buttons, the best option here would be to use a `<form>` tag: ```html <form method=post action=... hx-post=...> <input class="btn btn-danger" type=submit value=Unfollow> </form> ``` This ensures two things: 1. Accessiblity. Screen readers 'understand' and announce forms as actionable items on the page. 2. Progressive enhancement. The follow/unfollow actions will work even with JavaScript disabled (they will just be normal form submits).
open
2024-08-22T04:32:40Z
2024-08-22T13:02:19Z
https://github.com/ckan/ckan/issues/8409
[]
yawaramin
0
iperov/DeepFaceLab
deep-learning
847
Inconsistent iterations times and issues with RW override in new version.
Expected behavior: Correct me if I'm wrong but the recent updates should be overriding random warp setting state to disabled (n) as long as pretrain is enabled (y). Actual behavior: Despite this override update it seems like the behavior is not how one would expect, either explanation or bug fix is required, whenever random warp is set to disabled (n) overall VRAM usage is at about 9,3GB (batch 14, 224 DF-UD 288/80/80/16, optimizer set to GPU, LR disabled, GTX 1080Ti 11GB, Windows 10, 2004 update, display running of the same GPU) whereas when random warp is enabled (y) VRAM usage is close to 10,6GB. I also tested this very same model on older version of DFL before the RW override during pretrain (before July 26), I've tested it being enabled/disabled couple times just to be sure and sure enough, old version same VRAM usage, new one with RW enabled, 10,6 even saw 10,7GB VRAM usage after a minute of training. Also overall iterations since few updated ago have the issue where every nth one will be higher, usually twice but not always, for me it was usually all consistent -/+ 50-100ms, after some update it changed and then every 16th iteration was about twice of the previous 15 (600ms for 15 iterration, 1200ms at 16 and then back to 600ms) and now it seems to have gotten even worse, CPU usage is pinned at 100% (before it was lower) and while at the beginning it still behaves the same (16th iter twice of the rest) after few minutes it gets worse and then either every 3rd or 4th iteration is like 1000ms, 1500ms, even 2200 or up to 3000ms. It's worse with new update and RW enabled, bit better with it disabled or enabled/disabled on older version (probably due to that lower vram usage). I'm thinking it might be the issue with my CPU being tad weak since it's pinned at 95-100% all the time, but this wasn't the case before, is the -U or -D (or -UD) heavier on CPU? Or some other thing made it heavier on CPU overall? 
Shouldn't then this override update keep RW disabled all the time when pretrain is enabled? Why higher VRAM usage? What's with the iteration times being all over the place, this is not an issue just for me, there have been many people who reported the same issue over past month. UPDATE: It seems like I was able to resolve this issue, not sure why but after I've downloaded the newest 08.02 release and ran it once it fixed it in the version I had and now VRAM usage after RW override update is the same no matter the RW setting (about 9,3-9,4 GB). I will now leave it to run for few minutes to see if maybe the iteration time issue was fixed or at least improved. Steps to reproduce: Download newest version of DFL (08.02) and run a model with or without random warp enabled, observe VRAM usage and iteration times. Other issues/questions: Recent updates also changed behavior of random warp, changed value in warp.py, introduced more changes in the code as well as forced it to be disable during pretraining without even a simple explanation as to why, this raises couple questions about how models should be now correctly trained: 1. Does this mean pretrain is not required at all or just for pretraining? 2. How does this change affect lower resolution models that still use base DF and LIAE as well as -U variants? 3. Do models still correctly generalize with RW being disabled during pretraining? 4. Is changing batch size still allowed during pretraining/training or should it remain now at mostly the same setting throughout the training? 5. How should LR be used now that RW is disabled, previous updated mentioned workflow like this: RW:Y LR:N -> RW:Y LR:Y -> RW:N LR:???? (enable before GAN). Should LR be used at all during pretraining? Other relevant information: GTX 1080Ti, 16GB of RAM, 4 core, 8 thread i7 4,1GHZ when load on all 8 threads Windows 10 64bit with 2004 update, display running of the GPU DF-UD 224 Full Face model, dims: 288/80/80/16, Pretrain mode enabled
open
2020-08-03T12:39:58Z
2023-06-08T23:12:09Z
https://github.com/iperov/DeepFaceLab/issues/847
[]
ThomasBardem
6
xonsh/xonsh
data-science
4,960
Web config tool includes \r in prompt
When using the web config (xonfig web), newlines are interpreted as "\r\n". On macOS (and Linux, I assume) this adds a "^M" to the prompt, before the newline. ## xonfig ``` +------------------+--------------------------------+ | xonsh | 0.13.3 | | Python | 3.10.7 | | PLY | 3.11 | | have readline | True | | prompt toolkit | 3.0.31 | | shell type | prompt_toolkit | | history backend | json | | pygments | 2.13.0 | | on posix | True | | on linux | False | | on darwin | True | | on windows | False | | on cygwin | False | | on msys2 | False | | is superuser | False | | default encoding | utf-8 | | xonsh encoding | utf-8 | | encoding errors | surrogateescape | | xontrib | [] | +------------------+--------------------------------+ ``` ## Expected Behavior I would expect the "\r" to be stripped from the text field before the prompt is set. ## Current Behavior I was able to confirm that the "\r" is being added to the prompt variable by checking my .xonshrc after using the web config. Removing this value removes the "^M" from my prompt. ## Steps to Reproduce * Run xonfig web from your shell. * Select a multi-line prompt, and set your .xonshrc * Exit and restart xonsh. You'll see a "^M" before the new line. ## For community ⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
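A minimal sketch of the stripping the report expects the web config to do before writing the prompt (function name is illustrative, not xonsh internals): normalize any CRLF or bare CR a browser form submits down to plain LF.

```python
def normalize_prompt(text: str) -> str:
    # Web forms submit textarea content with CRLF line endings; strip the
    # carriage returns so the prompt written to .xonshrc contains no '\r'.
    return text.replace("\r\n", "\n").replace("\r", "\n")

assert normalize_prompt("line1\r\nline2") == "line1\nline2"
assert "\r" not in normalize_prompt("a\rb\r\nc")
```

Running this over the text field before setting `$PROMPT` would remove the `^M` from the rendered prompt.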
closed
2022-10-04T22:43:21Z
2022-10-25T14:07:21Z
https://github.com/xonsh/xonsh/issues/4960
[ "good first issue", "xonfig-web" ]
JoBrad
2
SYSTRAN/faster-whisper
deep-learning
1,231
Problem installing on macOS with Python 3.13.1
This seems to be an issue with the av library being locked to a specific version. I can install av 14.1.0 just fine, but I believe faster-whisper is requiring a version below 13. I am on macOS in a virtualenv with nothing else installed and python 3.13.1. ![Image](https://github.com/user-attachments/assets/4c0dcd36-0cef-43f1-9097-c57cee79bd8d)
open
2025-01-28T12:02:16Z
2025-01-30T16:58:02Z
https://github.com/SYSTRAN/faster-whisper/issues/1231
[]
creyD
1
PokeAPI/pokeapi
graphql
1,178
Title: is_default Field Incorrect for Pokémon Form 10441
# `is_default` Field Incorrect for Pokémon Form `10441` ## Description The `is_default` field in the `/api/v2/pokemon-form/10441/` endpoint is incorrectly set to `false`, despite being the only form associated with Pokémon `10272` (`ursaluna-bloodmoon`). According to the [documentation for Pokémon forms](https://pokeapi.co/docs/v2#pokemon-forms), the `is_default` field should be `true` for exactly one form per Pokémon. Since this Pokémon has only one form, it should logically be marked as the default form. --- ## Steps to Reproduce 1. Fetch the Pokémon data from `/api/v2/pokemon/10272/`: ```json { "forms": [ { "name": "ursaluna-bloodmoon", "url": "https://pokeapi.co/api/v2/pokemon-form/10441/" } ] } ``` 2. Fetch the form data from `/api/v2/pokemon-form/10441/`: ```json { "id": 10441, "is_battle_only": false, "is_default": false } ``` 3. Observe that the `is_default` field is set to `false`. --- ## Expected Behavior Since form `10441` is the only form for Pokémon `10272`, the `is_default` field should be `true` to align with the documentation. --- ## Observed Behavior The `is_default` field is incorrectly set to `false`. --- ## Suggestion Update the `is_default` field for `/api/v2/pokemon-form/10441/` to `true` to reflect the correct behavior for the only form of this Pokémon.
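The documented invariant is easy to state as a check — a small illustrative helper, not part of PokeAPI itself, fed with the observed data rather than a live API call:

```python
def default_form_ok(forms):
    """Per the docs, exactly one form per Pokemon must have is_default=True."""
    return sum(1 for f in forms if f.get("is_default")) == 1

# Observed data for Pokemon 10272: its only form is not marked default
observed = [{"name": "ursaluna-bloodmoon", "is_default": False}]
print(default_form_ok(observed))  # False -> violates the documented invariant
```

Flipping `is_default` to `true` for form `10441` would make this check pass.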
open
2024-12-25T20:25:03Z
2024-12-26T09:41:46Z
https://github.com/PokeAPI/pokeapi/issues/1178
[]
Ferlow
1
Farama-Foundation/PettingZoo
api
1,143
[Bug Report] MPE SimpleEnv continuous actions are the "other way"
### Describe the bug At the moment the simple env action.u computations are opposite to the discrete environment's. In the current setup, when the agents receive [0, 1, 0, 1, 0] for example, they start moving to the top right instead of the expected bottom left, based on `Agent and adversary action space: [no_action, move_left, move_right, move_down, move_up]` ### Code example ```shell # simple_env.py from line 206 if self.continuous_actions: # Process continuous action as in OpenAI MPE agent.action.u[0] += action[0][1] - action[0][2] # Here agent.action.u[1] += action[0][3] - action[0][4] # And here else: # process discrete action if action[0] == 1: agent.action.u[0] = -1.0 if action[0] == 2: agent.action.u[0] = +1.0 if action[0] == 3: agent.action.u[1] = -1.0 if action[0] == 4: agent.action.u[1] = +1.0 ``` ### System info _No response_ ### Additional context _No response_ ### Checklist - [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
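The sign mismatch the report describes can be checked without PettingZoo. This stdlib-only sketch mirrors the two branches quoted above (the outer `action[0]` indexing is dropped for simplicity; everything else follows the snippet):

```python
# Action layout: [no_action, move_left, move_right, move_down, move_up]

def continuous_u(action):
    # Continuous branch as currently written in simple_env.py:
    # u[0] += action[1] - action[2], u[1] += action[3] - action[4]
    return [action[1] - action[2], action[3] - action[4]]

def discrete_u(index):
    # Discrete branch: 1 -> left (-x), 2 -> right (+x),
    #                  3 -> down (-y), 4 -> up (+y)
    u = [0.0, 0.0]
    if index == 1:
        u[0] = -1.0
    elif index == 2:
        u[0] = +1.0
    elif index == 3:
        u[1] = -1.0
    elif index == 4:
        u[1] = +1.0
    return u

# "move_left + move_down" expressed both ways:
print(continuous_u([0, 1, 0, 1, 0]))  # [1, 1] -> pushes right and up
print(discrete_u(1))                  # [-1.0, 0.0] -> pushes left
```

A one-hot "move_left" contribution of `+1` in slot 1 produces `+x` in the continuous branch but `-x` in the discrete branch, which is exactly the reported inversion.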
closed
2023-12-02T13:14:48Z
2023-12-07T14:42:28Z
https://github.com/Farama-Foundation/PettingZoo/issues/1143
[ "bug" ]
mrxaxen
3
Nike-Inc/koheesio
pydantic
136
[MAJOR] rename SynchronizeDeltaToSnowflakeTask to SynchronizeDeltaToSnowflakeStep in a future major release
Does it make sense to rename it SynchronizeDeltaToSnowflakeStep in a future major release? _Originally posted by @riccamini in https://github.com/Nike-Inc/koheesio/pull/97#discussion_r1860832343_
open
2024-11-29T14:07:41Z
2024-11-29T14:07:56Z
https://github.com/Nike-Inc/koheesio/issues/136
[ "postponed" ]
dannymeijer
0
flairNLP/flair
nlp
3,626
[Question]: Is it possible to add UMLS Metathesaurus as custom linking model
### Question On https://flairnlp.github.io/flair/v0.15.1/tutorial/tutorial-hunflair2/customize-linking.html there is an example of how to add the Human Phenotype Ontology. Would it be possible to add the UMLS Metathesaurus, and how?
closed
2025-03-04T17:35:28Z
2025-03-22T11:01:25Z
https://github.com/flairNLP/flair/issues/3626
[ "question" ]
darije
3
PrefectHQ/prefect
data-science
17,042
Allow a List of Inputs for Prefect ECS Work Pool Start Commands
### Bug summary ## Issue Currently Prefect doesn't support a list of inputs as a custom start command. This functionality is natively supported by Docker and ECS. ## Example Command: `['/bin/sh', '-c', 'python -m some.module', '&&', 'prefect flow-run execute']` If I pass the above start command list to a Prefect ECS work pool. I get the following error: Stacktrace: ``` Failed to submit flow run '89015a3e-c72b-4e8c-af78-c90d7521eea5' to infrastructure. Traceback (most recent call last): File "/usr/local/lib/python3.12/site-packages/prefect/workers/base.py", line 1007, in _submit_run_and_capture_errors configuration = await self._get_configuration(flow_run) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/prefect/workers/base.py", line 1105, in _get_configuration configuration = await self.job_configuration.from_template_and_values( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/prefect/client/utilities.py", line 99, in with_injected_client return await fn(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/prefect_aws/workers/ecs_worker.py", line 422, in from_template_and_values return cls(**populated_configuration) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/site-packages/pydantic/main.py", line 214, in __init__ validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pydantic_core._pydantic_core.ValidationError: 1 validation error for ECSJobConfiguration command Input should be a valid string [type=string_type, input_value=['/bin/sh', '-c', 'python...efect flow-run execute'], input_type=list] For further information visit https://errors.pydantic.dev/2.10/v/string_type ``` I can't pass in the commands as a single string because Docker will only execute the first command in front of the `&&`. 
For example, `python -m some.module && prefect flow-run execute` will result in only `python -m some.module` being executed. ### Version info ```Text Version: 3.1.8 API version: 0.8.4 Python version: 3.11.11 Git commit: 53a83ebc Built: Tue, Dec 17, 2024 10:20 AM OS/Arch: linux/x86_64 Profile: ephemeral Server type: ephemeral Pydantic version: 2.9.2 Server: Database: sqlite SQLite version: 3.34.1 Integrations: prefect-shell: 0.3.0 prefect-dask: 0.3.2 ``` ### Additional context _No response_
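The `&&` behaviour described above (only the first command running) is the difference between exec form and shell form of a container command; a quick local sketch with `subprocess` shows both, assuming a POSIX `/bin/sh` (the `echo` commands stand in for the real start commands):

```python
import subprocess

# Exec form: every list element is a literal argv entry, so '&&' is handed
# to `echo` as plain text and the second command never runs as a command.
exec_form = subprocess.run(
    ["echo", "first", "&&", "echo", "second"],
    capture_output=True, text=True,
)
print(exec_form.stdout.strip())  # first && echo second

# Shell form: the whole chain is one string interpreted by `sh -c`, so '&&'
# sequences the two commands as intended.
shell_form = subprocess.run(
    ["/bin/sh", "-c", "echo first && echo second"],
    capture_output=True, text=True,
)
print(shell_form.stdout)  # first / second on separate lines
```

This is why the usual workaround is to put the entire chain into a single string behind `['/bin/sh', '-c', ...]`, which in turn requires the work pool to accept a list-valued command rather than only a string.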
open
2025-02-07T14:53:33Z
2025-02-07T14:53:33Z
https://github.com/PrefectHQ/prefect/issues/17042
[ "bug" ]
skohlleffel
0
Ehco1996/django-sspanel
django
717
Cannot navigate to the data backend
**Problem description** I have the page features up and running, and login works fine overall. However, while following the backend integration tutorial, I found that the data backend cannot be navigated to. Looking at the code, there is no redirect link written at the data-backend jump point, and the HTML code also lacks the data backend's features such as adding SS links. **Project configuration file** **How to reproduce** **Related screenshots/log** ![image](https://user-images.githubusercontent.com/13774826/188256622-b121788d-735c-4632-aa39-0f76697047ac.png) ![image](https://user-images.githubusercontent.com/13774826/188256636-bf379800-272d-46b2-be2e-2cc407bcbf62.png) **Other information**
closed
2022-09-03T05:05:56Z
2022-11-22T04:10:04Z
https://github.com/Ehco1996/django-sspanel/issues/717
[ "bug" ]
a807966224
2
danimtb/dasshio
dash
21
Dasshio randomly stops
My Dasshio add-on service randomly stops. Sometimes the dash buttons seem to stop working, but on inspecting the add-on I find that it has changed state to "stopped", with an error in the log. The last time, this is what happened: - I pressed a dash button once - it correctly performed the associated action - then I pressed it again but it didn't work - I checked the Hass.io web interface and the add-on was in "**state: stopped**" - I started the add-on and then it worked correctly in detecting button presses several times. This is the log: ``` 2017-12-13 07:01:35,704 | INFO | Gillette button pressed! 2017-12-13 07:01:35,705 | INFO | Request: http://homeassistant:8123/api/services/light/toggle 2017-12-13 07:01:35,810 | INFO | Status Code: 200 2017-12-13 07:01:35,811 | INFO | Successful request Traceback (most recent call last): File "dasshio.py", line 81, in <module> sniff(stop_filter=arp_display, filter='arp or (udp and src port 68 and dst port 67 and src host 0.0.0.0)', store=0, count=0) File "/usr/lib/python3.6/site-packages/scapy/sendrecv.py", line 617, in sniff s.close() File "/usr/lib/python3.6/site-packages/scapy/arch/linux.py", line 499, in close set_promisc(self.ins, i, 0) File "/usr/lib/python3.6/site-packages/scapy/arch/linux.py", line 151, in set_promisc mreq = struct.pack("IHH8s", get_if_index(iff), PACKET_MR_PROMISC, 0, b"") File "/usr/lib/python3.6/site-packages/scapy/arch/linux.py", line 294, in get_if_index return int(struct.unpack("I",get_if(iff, SIOCGIFINDEX)[16:20])[0]) File "/usr/lib/python3.6/site-packages/scapy/arch/linux.py", line 288, in get_if ifreq = ioctl(s, cmd, struct.pack("16s16x",bytes(iff,'utf-8'))) OSError: [Errno 19] No such device Connection lost. Reconnecting… ``` This is the add-on configuration. Notice that at the moment I am actively using only the first configured button, "Gillette". 
``` Description Use Amazon Dash Buttons in Home Assistant Version 0.0.9 State stopped Boot auto Auto update X Uses host network V Builds locally V Detached V Options { "buttons": [ { "name": "Gillette", "address": "**:**:**:**:**:ea", "url": "http://homeassistant:8123/api/services/light/toggle", "headers": "{\"x-ha-access\": \"*******\"}", "body": "{\"entity_id\": \"light.abatjour\"}" }, { "name": "Angelica", "address": "**:**:**:**:**:B8", "url": "http://homeassistant:8123/api/services/persistent_notification/create", "headers": "{\"x-ha-access\": \"*******\"}", "body": "{\"message\": \"prova angelica\"}" }, { "name": "Finish", "address": "**:**:**:**:**:B9", "url": "http://homeassistant:8123/api/services/persistent_notification/create", "headers": "{\"x-ha-access\": \"*******\"}", "body": "{\"message\": \"prova finish\"}" }, { "name": "Duracell", "address": "**:**:**:**:**:A0", "url": "http://homeassistant:8123/api/services/automation/trigger", "headers": "{\"x-ha-access\": \"*******\"}", "body": "{\"entity_id\":\"automation.demo_audio_e_luce\"}" }, { "name": "Barilla", "address": "**:**:**:**:**:0D", "url": "http://homeassistant:8123/api/services/persistent_notification/create", "headers": "{\"x-ha-access\": \"*******\"}", "body": "{\"message\": \"prova barilla\"}" } ] } ``` Hass.io version is 0.59.2 running on Raspberry Pi 3.
closed
2017-12-14T09:05:47Z
2018-03-19T23:07:21Z
https://github.com/danimtb/dasshio/issues/21
[ "bug" ]
amagnolo
47
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
1,485
AttributeError: Can't pickle local object 'get_transform.<locals>.<lambda>'
need some help with this problem 😭
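For context: this error usually means PyTorch's `DataLoader` tried to pickle a transform containing a lambda defined inside `get_transform` (in `data/base_dataset.py`) for a worker process, which happens when `num_workers > 0` on platforms using the `spawn` start method (Windows/macOS). Common workarounds are running with `--num_threads 0` or replacing the lambdas with module-level functions or `functools.partial`. A minimal reproduction of the failure and the fix (function names here are illustrative, not the repo's actual code):

```python
import pickle

def get_transform_broken():
    # A lambda created inside a function is a "local object"; pickling it
    # raises the same error as in the title.
    return lambda img: img

def identity(img):
    return img

def get_transform_fixed():
    # A module-level function pickles by reference, so DataLoader worker
    # processes can receive it.
    return identity

try:
    pickle.dumps(get_transform_broken())
    lambda_picklable = True
except (pickle.PicklingError, AttributeError, TypeError):
    lambda_picklable = False

restored = pickle.loads(pickle.dumps(get_transform_fixed()))
print(lambda_picklable)      # False
print(restored is identity)  # True
```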
open
2022-09-19T14:45:31Z
2022-09-20T20:25:16Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1485
[]
Zhou248
1
Lightning-AI/pytorch-lightning
pytorch
19,553
Permission denied: '/usr/share/lmod/lmod/init/ksh_funcs/libcudart.so'
### Bug description When using bitsandbytes==0.41, getting the following error ``` permissionError: [Errno 13] Permission denied: '/usr/share/lmod/lmod/init/ksh_funcs/libcudart.so' ``` which is also mentioned here: https://github.com/TimDettmers/bitsandbytes/issues/809 I think the issue is still here because Lightning is one version behind. ### What version are you seeing the problem on? v2.2 ### How to reproduce the bug _No response_ ### Error messages and logs ``` permissionError: [Errno 13] Permission denied: '/usr/share/lmod/lmod/init/ksh_funcs/libcudart.so' ``` ### Environment <details> <summary>Current environment</summary> ``` #- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): #- PyTorch Lightning Version (e.g., 1.5.0): #- Lightning App Version (e.g., 0.5.2): #- PyTorch Version (e.g., 2.0): #- Python version (e.g., 3.9): #- OS (e.g., Linux): #- CUDA/cuDNN version: #- GPU models and configuration: #- How you installed Lightning(`conda`, `pip`, source): #- Running environment of LightningApp (e.g. local, cloud): ``` </details> ### More info _No response_ cc @carmocca @awaelchli
closed
2024-03-01T04:55:32Z
2024-03-04T15:11:32Z
https://github.com/Lightning-AI/pytorch-lightning/issues/19553
[ "bug", "precision: bnb" ]
dipta007
2
developmentseed/lonboard
data-visualization
313
[EPIC] Javascript enhancements
## Context Interactive maps need to be reactive in any and all environments. ## Issue Let's implement some JS-based improvements. ## Acceptance-Criteria These are the tasks that need to be completed or artifacts that need to be produced. - [ ] https://github.com/developmentseed/lonboard/issues/148 - [x] https://github.com/developmentseed/lonboard/issues/264 - [x] https://github.com/developmentseed/lonboard/issues/112
open
2024-01-11T15:46:32Z
2024-04-19T15:03:14Z
https://github.com/developmentseed/lonboard/issues/313
[ "javascript" ]
emmalu
1
graphql-python/graphene-sqlalchemy
sqlalchemy
112
Generate Input Arguments from SQLAlchemy Class?
Hello, Do you know if it's possible to generate input arguments dynamically from the SQLAlchemy class that will be transformed by the mutation? Example: My input arguments for a `CreatePerson` mutation look like this: ```python class CreatePersonInput(graphene.InputObjectType): """Arguments to create a person.""" name = graphene.String(required=True, description="Name of the person to be created.") height = graphene.String(default_value="unknown", description="Height of the person to be created.") mass = graphene.String(default_value="unknown", description="Mass of the person to be created.") hair_color = graphene.String(default_value="unknown", description="Hair color of the person to be created.") skin_color = graphene.String(default_value="unknown", description="Skin color of the person to be created.") eye_color = graphene.String(default_value="unknown", description="Eye color of the person to be created.") birth_year = graphene.String(default_value="unknown", description="Birth year of the person to be created.") gender = graphene.String(default_value="unknown", description="Gender of the person to be created.") planet_id = graphene.ID(default_value="unknown", description="Global Id of the planet from which the person to be created comes from.") url = graphene.String(default_value="unknown", description="URL of the person in the Star Wars API.") class CreatePerson(graphene.Mutation): """Mutation to create a person.""" person = graphene.Field(lambda: People, description="Person created by this mutation.") class Arguments: input = CreatePersonInput(required=True) ... 
``` In the meantime, the input arguments for my `UpdatePerson` mutation look like this: ```python class UpdatePersonInput(graphene.InputObjectType): """Arguments to update a person.""" id = graphene.ID(required=True) name = graphene.String() height = graphene.String() mass = graphene.String() hair_color = graphene.String() skin_color = graphene.String() eye_color = graphene.String() birth_year = graphene.String() gender = graphene.String() planet_id = graphene.ID() url = graphene.String() class UpdatePerson(graphene.Mutation): """Update a person.""" person = graphene.Field(lambda: People, description="Person updated by this mutation.") class Arguments: input = UpdatePersonInput(required=True) ... ``` Finally, my SQLAlchemy class look like this: ```python class ModelPeople(Base): """People model.""" __tablename__ = 'people' id = Column('id', Integer, primary_key=True) name = Column('name', String) height = Column('height', String) mass = Column('mass', String) hair_color = Column('hair_color', String) skin_color = Column('skin_color', String) eye_color = Column('eye_color', String) birth_year = Column('birth_year', String) gender = Column('gender', String) planet_id = Column('planet_id', Integer, ForeignKey('planet.id')) created = Column('created', String) edited = Column('edited', String) url = Column('url', String) ... ``` This is all pretty redundant and it would be ideal if we could just reuse the SQLAlchemy class attributes in the `InputObjectType`
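The deduplication the question asks for, deriving each mutation's field list from one column description instead of writing it three times, can be sketched with the stdlib alone. In a real project `COLUMNS` would come from `sqlalchemy.inspect(ModelPeople).columns` and the field values would be graphene scalar instances; everything below is illustrative:

```python
# One source of truth for the model's columns (stand-in for SQLAlchemy
# introspection; types are strings here instead of graphene scalars).
COLUMNS = {
    "id":        {"graphene_type": "ID",     "primary_key": True},
    "name":      {"graphene_type": "String", "primary_key": False},
    "height":    {"graphene_type": "String", "primary_key": False},
    "planet_id": {"graphene_type": "ID",     "primary_key": False},
}

def make_input_type(name, columns, include_pk):
    """Dynamically build a class with one attribute dict per column.

    type() plays the role that graphene's InputObjectType metaclass would
    play in the real version.
    """
    fields = {
        col: meta["graphene_type"]
        for col, meta in columns.items()
        if include_pk or not meta["primary_key"]
    }
    return type(name, (), {"fields": fields})

# Create mutations omit the primary key; update mutations require it.
CreatePersonInput = make_input_type("CreatePersonInput", COLUMNS, include_pk=False)
UpdatePersonInput = make_input_type("UpdatePersonInput", COLUMNS, include_pk=True)

print(sorted(CreatePersonInput.fields))  # ['height', 'name', 'planet_id']
print("id" in UpdatePersonInput.fields)  # True
```

The same loop, pointed at `inspect(ModelPeople).columns` and a SQLAlchemy-to-graphene type map, would remove the duplication between the model and both input classes.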
open
2018-02-09T05:44:41Z
2018-04-24T18:13:29Z
https://github.com/graphql-python/graphene-sqlalchemy/issues/112
[]
alexisrolland
3
schemathesis/schemathesis
pytest
2,504
Specifying --hypothesis-seed=# does not recreate tests with the same data
### Checklist - [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation - [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues) - [x] I am using the latest version of Schemathesis ### Describe the bug When running the CLI Schemathesis tests, we cannot reproduce errors due to the changing data sent to the API. When specifying the seed, the initial several tests do use the same data, but the runs eventually diverge into completely different data sets unique to each run. ### To Reproduce 1. Run this command twice: st run --base-url=http://host.docker.internal:3050/api --checks=all --data-generation-method=all --target=all --max-failures=5000 --validate-schema=true --debug-output-file=/mnt/debug.json --show-trace --code-sample-style=curl --cassette-path /mnt/cassette.yaml --fixups=all --stateful=links --sanitize-output=true --contrib-unique-data --contrib-openapi-fill-missing-examples --hypothesis-max-examples=25 --hypothesis-phases=generate,explicit --hypothesis-verbosity=verbose --experimental=openapi-3.1 --generation-allow-x00=false --schemathesis-io-telemetry=false --hypothesis-seed=115111099930844871414873197742186425200 -H Authorization: Bearer TOKEN /mnt/api-specification.json' 2. The cassette files are different lengths, a different number of tests were run, and different data was sent to the API Please include a minimal API schema causing this issue: ```yaml { "openapi": "3.0.1", "info": { "title": "System APIs", "description": "DESCRIPTION REDACTED", "version": "1.00.00" }, "servers": [ { "url": "/api" } ], "paths": { .... ``` ### Expected behavior The documentation for the run seed is sparse, but it indicates that the seed should be used to reproduce test results. We are not seeing this behavior.
### Environment ``` - OS: [Windows in Docker] - Python version: [Using schemathesis/schemathesis:stable] - Schemathesis version: [Using schemathesis/schemathesis:stable] - Spec version: [e.g. Open API 3.0.1] ``` ### Additional context Where the two test runs begin to diverge, one test run will send URLs like: uri: '[http://host.docker.internal:3050/api/PATHREDACTED?0=False&0=False&=true&%C2%94%F1%B3%9C%B3%C2%BC=0'] Whereas the equivalent other test run at the same line in the cassette file will be: uri: '[http://host.docker.internal:3050/api/PATHREDACTED?ID=0']
open
2024-10-09T20:26:42Z
2024-10-20T01:56:38Z
https://github.com/schemathesis/schemathesis/issues/2504
[ "Type: Bug", "Status: Needs Triage" ]
hydroculator
2
pyg-team/pytorch_geometric
pytorch
8,853
Add Support for Pytorch 2.2
### 🚀 The feature, motivation and pitch Pytorch 2.2 was released: [Blog](https://pytorch.org/blog/pytorch2-2/) [Release Notes](https://github.com/pytorch/pytorch/releases/tag/v2.2.0) ### Alternatives _No response_ ### Additional context _No response_
closed
2024-02-02T09:40:55Z
2024-02-03T10:13:11Z
https://github.com/pyg-team/pytorch_geometric/issues/8853
[ "feature" ]
Foisunt
2