| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ShishirPatil/gorilla
|
api
| 567
|
How to add a model that is not in vllm
|
Hi,
I am evaluating a new model that is not in vllm. How can I generate responses with this model, given that vllm seems to be the only supported backend for `generate()`?
Thank you!
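Not Gorilla's actual API, but the usual shape of such an extension: the BFCL harness dispatches generation through per-model handler classes, so supporting a model outside vllm typically means adding a handler with its own `generate`. A hypothetical minimal sketch (all names illustrative, not from the repo):

```python
class BaseHandler:
    """Hypothetical minimal handler interface: maps a prompt to the
    model's text response."""
    def __init__(self, model_name: str):
        self.model_name = model_name

    def generate(self, prompt: str) -> str:
        raise NotImplementedError


class MyLocalModelHandler(BaseHandler):
    """Wraps any callable backend (an HF pipeline, raw PyTorch code,
    an HTTP endpoint, ...) instead of vllm."""
    def __init__(self, model_name: str, backend):
        super().__init__(model_name)
        self.backend = backend  # e.g. lambda prompt: pipe(prompt)[0]["generated_text"]

    def generate(self, prompt: str) -> str:
        return self.backend(prompt)


# toy backend standing in for a real model
handler = MyLocalModelHandler("my-new-model", backend=lambda p: p.upper())
print(handler.generate("call get_weather(city='Paris')"))
```

The evaluation loop then only ever calls `handler.generate(...)`, so the backend behind it is free.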
|
closed
|
2024-08-04T22:00:33Z
|
2025-02-20T22:38:10Z
|
https://github.com/ShishirPatil/gorilla/issues/567
|
[] |
shizhediao
| 5
|
allenai/allennlp
|
pytorch
| 5,211
|
Models loaded using the `from_archive` method need to be saved with original config
|
When `allennlp train` is used to fine-tune a pretrained model (`model A`) via `from_archive(path_to_A)`, the fine-tuned model (`model B`) is saved with a config that still contains `from_archive`. If you then try to fine-tune `model B`, it needs the original `model A` at the exact `path_to_A`, as well as `model B`. In the normal use case, this fails if the user does not have access to the original `model A`. On beaker, depending on how the code is set up, if the path to the pretrained model stays the same across `experiment A -> B` and `experiment B -> C`, it causes a `maximum recursion depth` error.
A potential solution is to store the original configuration when saving a fine-tuned model (i.e., in the `from_archive` case).
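A minimal sketch of that fix (hypothetical config keys; the real AllenNLP config layout differs): when archiving a fine-tuned model, replace the `from_archive` stanza with the original model's resolved config so the archive is self-contained.

```python
def resolve_config(config: dict, load_original) -> dict:
    """If the model section points at an archive, swap in the original
    model's config before saving, so loading model B never needs the
    archive of model A on disk. `load_original` maps an archive path to
    that archive's resolved config (illustrative helper)."""
    model_cfg = config.get("model", {})
    if model_cfg.get("type") == "from_archive":
        original = load_original(model_cfg["archive_file"])
        config = {**config, "model": original["model"]}
    return config


# toy archives table standing in for archives on disk
archives = {"path_to_A": {"model": {"type": "my_classifier", "hidden": 8}}}
cfg_b = {"model": {"type": "from_archive", "archive_file": "path_to_A"},
         "trainer": {"num_epochs": 3}}
print(resolve_config(cfg_b, archives.__getitem__))
```

With this, fine-tuning `model B` into `model C` reads a plain model config and never recurses back through `path_to_A`.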
|
open
|
2021-05-18T19:28:40Z
|
2021-05-28T16:33:03Z
|
https://github.com/allenai/allennlp/issues/5211
|
[
"bug"
] |
AkshitaB
| 1
|
modelscope/modelscope
|
nlp
| 587
|
Model Export Error. AttributeError: 'dict' object has no attribute 'model_dir'
|
**Description:**
When trying to use the `Model.from_pretrained()` method with the following code:
```python
from modelscope.models import Model
from modelscope.exporters import Exporter
model_id = 'damo/cv_unet_skin-retouching'
model = Model.from_pretrained(model_id)
output_files = Exporter.from_model(model).export_onnx(opset=13, output_dir='/tmp', ...)
print(output_files)
```
I encounter the following error:
```
AttributeError Traceback (most recent call last)
----> model = Model.from_pretrained(model_id)
/usr/local/lib/python3.10/dist-packages/modelscope/models/base/base_model.py in from_pretrained(cls, model_name_or_path, revision, cfg_dict, device, **kwargs)
    168 return model
    169 # use ms
--> 170 model_cfg.model_dir = local_model_dir
    171
    172 # install and import remote repos before build
AttributeError: 'dict' object has no attribute 'model_dir'
```
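The root cause is generic Python behavior: a plain `dict` does not support attribute assignment, so a config passed around as a bare `dict` fails at `model_cfg.model_dir = ...`. A minimal illustration, with `types.SimpleNamespace` standing in for modelscope's attribute-style `Config` wrapper (an assumption for illustration, not modelscope's actual class):

```python
from types import SimpleNamespace

cfg_as_dict = {"model": {"type": "unet"}}
try:
    cfg_as_dict.model_dir = "/tmp/model"   # same failure mode as the traceback
except AttributeError as exc:
    print(exc)

cfg_as_ns = SimpleNamespace(**cfg_as_dict)  # attribute-style wrapper
cfg_as_ns.model_dir = "/tmp/model"          # works
print(cfg_as_ns.model_dir)
```

So the fix is on the library side: ensure the object reaching that line is the config wrapper, not a raw `dict`.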
|
closed
|
2023-10-16T05:53:45Z
|
2023-10-20T09:40:18Z
|
https://github.com/modelscope/modelscope/issues/587
|
[] |
chiragsamal
| 3
|
Farama-Foundation/PettingZoo
|
api
| 1,215
|
[Proposal] Integration of gfootball
|
### Proposal
[gfootball](https://github.com/google-research/football) is widely used in SOTA MARL algorithms, e.g., https://arxiv.org/abs/2103.01955, https://arxiv.org/abs/2302.06205.
Will it be integrated in future releases?
### Motivation
_No response_
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
|
closed
|
2024-06-20T19:40:11Z
|
2024-06-24T04:19:12Z
|
https://github.com/Farama-Foundation/PettingZoo/issues/1215
|
[
"enhancement"
] |
xihuai18
| 2
|
dbfixtures/pytest-postgresql
|
pytest
| 303
|
Windows support
|
### What action do you want to perform
Hi, we want to use the `postgresql_proc` fixture in our test suite and ran into a few errors: version 2.4.0 on Windows 10 with PostgreSQL 11.
### What are the results
On creation of `PostgreSQLExecutor`, it errors when calling `pg_ctl` because of the quotes around stderr [on this line](https://github.com/ClearcodeHQ/pytest-postgresql/blob/master/src/pytest_postgresql/executor.py#L47); Windows does not accept them. After removing the quotes it managed to set up the database and run the test suite.
However, it then fails to stop the server with `os.killpg`, as that function doesn't exist on Windows.
```
@pytest.fixture(scope='session')
def postgresql_proc_fixture(request, tmpdir_factory):
"""
Process fixture for PostgreSQL.
:param FixtureRequest request: fixture request object
:rtype: pytest_dbfixtures.executors.TCPExecutor
:returns: tcp executor
"""
.
.
.
# start server
with postgresql_executor:
postgresql_executor.wait_for_postgres()
> yield postgresql_executor
..\..\appdata\local\pypoetry\cache\virtualenvs\trase-iwsqw52c-py3.7\lib\site-packages\pytest_postgresql\factories.py:200:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\..\appdata\local\pypoetry\cache\virtualenvs\trase-iwsqw52c-py3.7\lib\site-packages\mirakuru\base.py:179: in __exit__
self.stop()
..\..\appdata\local\pypoetry\cache\virtualenvs\trase-iwsqw52c-py3.7\lib\site-packages\pytest_postgresql\executor.py:220: in stop
super().stop(sig, exp_sig)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pytest_postgresql.executor.PostgreSQLExecutor: "C:/PROGRA~..." 0x194068a34a8>, sig = <Signals.SIGTERM: 15>, exp_sig = None
def stop(
self: SimpleExecutorType,
sig: int = None,
exp_sig: int = None
) -> SimpleExecutorType:
.
.
.
try:
> os.killpg(self.process.pid, sig)
E AttributeError: module 'os' has no attribute 'killpg'
```
### What are the expected results
We were wondering if others have run into this issue too and if there's a way to get a fix in. Any help is appreciated, as it's a great plugin to use!
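Not the plugin's actual fix, but a sketch of the usual workaround: fall back to `Popen.terminate()` on platforms where process groups (and `os.killpg`) are unavailable. The process is spawned in its own session on POSIX so the group signal does not reach the parent:

```python
import os
import signal
import subprocess
import sys


def stop_process(proc: subprocess.Popen, sig: int = signal.SIGTERM) -> None:
    """Signal the whole process group on POSIX; fall back to
    terminate() on Windows, which has no os.killpg."""
    if hasattr(os, "killpg"):
        try:
            os.killpg(os.getpgid(proc.pid), sig)
        except ProcessLookupError:
            pass  # already gone
    else:
        proc.terminate()  # Windows: maps to TerminateProcess


# demo: start a long-running child in its own session, then stop it
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    start_new_session=hasattr(os, "setsid"),
)
stop_process(proc)
proc.wait(timeout=10)
print(proc.returncode is not None)
```

On Windows there is no clean SIGTERM equivalent for a whole tree, so `terminate()` (or `signal.CTRL_BREAK_EVENT` with `CREATE_NEW_PROCESS_GROUP`) is about as graceful as it gets.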
|
open
|
2020-07-16T13:51:47Z
|
2022-08-01T17:01:33Z
|
https://github.com/dbfixtures/pytest-postgresql/issues/303
|
[
"enhancement",
"help wanted"
] |
pernlofgren
| 5
|
coqui-ai/TTS
|
deep-learning
| 3,587
|
[Feature request] Move to MIT License
|
The company is shutting down and can no longer license this project for commercial purposes or benefit from such licensing. Suggest moving to MIT license for more permissive modification and redistribution by the community.
|
closed
|
2024-02-17T11:23:38Z
|
2025-01-06T02:42:59Z
|
https://github.com/coqui-ai/TTS/issues/3587
|
[
"wontfix",
"feature request"
] |
geofurb
| 22
|
modin-project/modin
|
pandas
| 6,783
|
ModuleNotFoundError: No module named 'modin.pandas.testing'
|
This module is public and is used quite often.
It shouldn't be difficult to maintain, as it has a few functions:
```python
__all__ = [
"assert_extension_array_equal",
"assert_frame_equal",
"assert_series_equal",
"assert_index_equal",
]
```
|
closed
|
2023-11-29T22:16:12Z
|
2024-03-11T19:15:46Z
|
https://github.com/modin-project/modin/issues/6783
|
[
"new feature/request 💬",
"pandas concordance 🐼",
"P2"
] |
anmyachev
| 3
|
marshmallow-code/marshmallow-sqlalchemy
|
sqlalchemy
| 347
|
Dependabot couldn't authenticate with https://pypi.python.org/simple/
|
Dependabot couldn't authenticate with https://pypi.python.org/simple/.
You can provide authentication details in your [Dependabot dashboard](https://app.dependabot.com/accounts/marshmallow-code) by clicking into the account menu (in the top right) and selecting 'Config variables'.
[View the update logs](https://app.dependabot.com/accounts/marshmallow-code/update-logs/48681631).
|
closed
|
2020-09-25T05:16:36Z
|
2020-09-28T05:15:05Z
|
https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/347
|
[] |
dependabot-preview[bot]
| 0
|
serengil/deepface
|
deep-learning
| 556
|
Lot of False positives in Deepface
|
Hi @serengil,
Kindly share your thoughts on why I am repeatedly getting false positives on these images.
I have observed that setting the threshold below 0.40 yields a lot of matches. I extracted representations and calculated the cosine score for the two images below (one of a girl, the other of a boy), but the results are poor:


```(0.22019625290381495, 'C:/Users/Risin/Desktop/id_photo/omkar.jpg')```
How can this be explained? Do you think such results are possible? Is there anything I might have missed? My code is below:
```python
from mtcnn import MTCNN
import cv2
import os
from deepface import DeepFace
import numpy as np
import mtcnn
from mtcnn import MTCNN
from deepface.detectors import FaceDetector
from deepface.detectors import OpenCvWrapper, SsdWrapper, MtcnnWrapper, RetinaFaceWrapper, MediapipeWrapper
from numpy import asarray
import multiprocessing as multi
from DeepFace import *
from commons import functions, distance, realtime
from functions import *
from commons.distance import *
import time
import pickle
import pathlib
from pathlib import Path
from threading import Thread
from queue import Queue
import math
# from folder_alex import *  # apicall() imported for getting link to save images after attendance
# from search_folders import *
from datetime import datetime, timedelta
from scipy import spatial

backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
models = ["VGG-Face", "Facenet", "Facenet512", "OpenFace", "DeepFace", "DeepID", "ArcFace", "Dlib"]
models = {
    'VGG-Face': VGGFace.loadModel,
    'OpenFace': OpenFace.loadModel,
    'Facenet': Facenet.loadModel,
    'Facenet512': Facenet512.loadModel,
    'DeepFace': FbDeepFace.loadModel,
    'DeepID': DeepID.loadModel,
    # 'Dlib': DlibWrapper.loadModel,
    'ArcFace': ArcFace.loadModel,
    'Emotion': Emotion.loadModel,
    'Age': Age.loadModel,
    'Gender': Gender.loadModel,
    'Race': Race.loadModel
}
backends = {
    'opencv': OpenCvWrapper.build_model,
    'ssd': SsdWrapper.build_model,
    # 'dlib': DlibWrapper.build_model,
    'mtcnn': MtcnnWrapper.build_model,
    'retinaface': RetinaFaceWrapper.build_model,
    'mediapipe': MediapipeWrapper.build_model
}

# neural network models can be selected from above
model = build_model('VGG-Face')

# read face image extracted from the mtcnn() method
img = cv2.imread('C:/Users/Risin/Desktop/id_photo/yashila_1.jpeg')
# img = cv2.imread("C:/Users/Risin/Desktop/FR_ats/Students/Student/1011/1017/Achyatanandmadhukar_102_1.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# get its representation
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.Laplacian(gray, cv2.CV_64F).var()
print("blur_score", blur)
# img = cv2.detailEnhance(img, sigma_s=10, sigma_r=0.15)
cv2.imshow("test_image", img)
cv2.waitKey(0)
embeddings1 = represent(img, model_name='VGG-Face', model=model, enforce_detection=False,
                        detector_backend='mtcnn', align=True, normalization='VGG-Face')

# read 2nd image from the database with which it shows similarity in the real run (face_detector.py)
detector = MTCNN()
img2 = cv2.imread('C:/Users/Risin/Desktop/id_photo/omkar.jpg')
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
embeddings2 = represent(img2, model_name='VGG-Face', model=model, enforce_detection=False,
                        detector_backend='mtcnn', align=True, normalization='base')
print(len(embeddings2))

# -------------------- find cosine distance --------------------
r = findCosineDistance(embeddings1, embeddings2)
print("VALUE WHEN COMPARED ALONE WITH IMAGE", r)

# scipy cosine
distance_v2 = spatial.distance.cosine(embeddings1, embeddings2)
print("distance_v2", distance_v2)


def findEuclideanDistance(source_representation, test_representation):
    if type(source_representation) == list:
        source_representation = np.array(source_representation)
    if type(test_representation) == list:
        test_representation = np.array(test_representation)
    euclidean_distance = source_representation - test_representation
    euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
    euclidean_distance = np.sqrt(euclidean_distance)
    return euclidean_distance


euclidean_distance = findEuclideanDistance(embeddings1, embeddings2)
print("euclidean_distance", euclidean_distance)


def l2_normalize(x):
    return x / np.sqrt(np.sum(np.multiply(x, x)))


# find euclidean L2, as it is a better measure of accuracy
detections = detector.detect_faces(img)
print("detections:=================", detections)
for detection in detections:
    if detection['confidence'] > 0.0:
        x, y, w, h = detection["box"]
        face_area_db_img = w * h
        # w_ex, h_ex, _ = img.shape
        # find whether the image is a frontal face or a side face
        ratio_of_height2width = h / w
        print("ratio of Height to width ", ratio_of_height2width)
        # img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)
        detected_face = img[int(y):int(y + h), int(x):int(x + w)]
        start_point = (int(x), int(y))
        end_point = (int(x + w), int(y + h))
        color = (0, 255, 0)
        thickness = 1
        font = cv2.FONT_HERSHEY_SIMPLEX
        fontScale = .3
        # img = cv2.putText(img, "fps" + str(int(fps)), (50, 50), font, fontScale, color, thickness, cv2.LINE_AA)
        img = cv2.rectangle(img, start_point, end_point, color, thickness)
        # path_decided_ = "C:/Users/Risin/Desktop/FR_ats/deepface-master/deepface-master/deepface/images/face_detected"
        # img_to_save_ = ("{}" + ".jpeg").format(counter)
        # cv2.imwrite(path__, detected_face)
        # embeddings2 = represent(img2, model_name='VGG-Face', model=model, enforce_detection=False,
        #                         detector_backend='mtcnn', align=True, normalization='VGG-Face')
        # print(type(embeddings2))
        # result = findCosineDistance(embeddings1, embeddings2)
        # result2 = findEuclideanDistance(embeddings1, embeddings2)
        # print("result", result)
        # print("result_eculidean", result2)
        detected_face = cv2.resize(detected_face, (200, 200))
        img = cv2.resize(img, (200, 200))
        final_result = cv2.hconcat([detected_face, img])
        cv2.imshow("img", final_result)
        if cv2.waitKey(0) & 0xFF == ord('q'):
            break
        # if result < 0.40:
        #     detected_face = cv2.resize(detected_face, (200, 200))
        #     img = cv2.resize(img, (200, 200))
        #     final_result = cv2.hconcat([detected_face, img])
        #     cv2.imshow("img", final_result)
        #     if cv2.waitKey(0) & 0xFF == ord('q'):
        #         break

# compare the embedding of img against the pickled database
final_result = []
address_image = []
probable = []
with open("representation_ATS.pkl", 'rb') as file:
    data = pickle.load(file)
for d in data:  # iterate over (address, vector) pairs
    # print(data[i])
    address, vector_db = d
    result = findCosineDistance(embeddings1, vector_db)
    print("COMPARED WITH PICKLE VECTOR", result, address)
    # final_result.append(result)
    # address_image.append(address)
    if result < 0.40:
        probable.append(result)
        address_image.append(address)
        explanation = ("{}" + "........Matching File Name" + "{}").format(result, address)
        print(explanation)
combined = list(zip(probable, address_image))
print("< 0.40 match", combined)
best_candidate = min(combined)
print(best_candidate)
final = cv2.imread(best_candidate[1])
final = cv2.resize(final, (200, 200))
cv2.imshow("bestmatch", final)
cv2.waitKey(0)
```
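On the metric itself: the euclidean-L2 distance the snippet favors and cosine distance are monotonically related, so switching between them alone will not remove false positives. For L2-normalized embeddings, squared euclidean distance equals twice the cosine distance. A quick self-contained numpy check, using the same formulas as the snippet above:

```python
import numpy as np


def find_cosine_distance(a, b):
    # 1 - cos(theta), as in deepface-style cosine distance
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))


def l2_normalize(x):
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)


rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=128), rng.normal(size=128)

cos_d = find_cosine_distance(e1, e2)
eucl_l2 = np.linalg.norm(l2_normalize(e1) - l2_normalize(e2))

# identity: ||x_hat - y_hat||^2 == 2 * cosine_distance(x, y)
print(np.isclose(eucl_l2 ** 2, 2.0 * cos_d))  # True
```

One thing the snippet does that could by itself explain the low score: `embeddings1` uses `normalization='VGG-Face'` while `embeddings2` uses `normalization='base'`, so the two vectors are not directly comparable.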
|
closed
|
2022-09-06T11:21:25Z
|
2022-09-06T13:54:39Z
|
https://github.com/serengil/deepface/issues/556
|
[
"question"
] |
Risingabhi
| 3
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 655
|
???
|
Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
ValueError: "zero-size array to reduction operation maximum which has no identity"
Traceback Error: "
File "UVR.py", line 4719, in process_start
File "separate.py", line 683, in seperate
File "lib_v5/spec_utils.py", line 112, in normalize
File "numpy/core/_methods.py", line 40, in _amax
"
Error Time Stamp [2023-07-08 12:20:27]
Full Application Settings:
vr_model: 3_HP-Vocal-UVR
aggression_setting: 10
window_size: 512
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 1
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
|
open
|
2023-07-08T04:21:23Z
|
2023-07-08T04:21:23Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/655
|
[] |
ToBeAnUncle
| 0
|
roboflow/supervision
|
pytorch
| 1,720
|
The character's ID changed after a brief loss
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hello, I am using supervision's ByteTrack. In the video, the male's ID is 2 and the female's ID is 3. However, when the female continues to move forward her track is briefly lost; when she is detected again, her ID is no longer 3, and the male and female IDs have swapped. What should I do? Thank you
Male ID: 2
Girl ID: 3


Male ID: 3
Female ID: 2

### Additional
_No response_
|
closed
|
2024-12-09T11:06:03Z
|
2024-12-09T11:12:54Z
|
https://github.com/roboflow/supervision/issues/1720
|
[
"question"
] |
DreamerYinYu
| 1
|
sammchardy/python-binance
|
api
| 604
|
Does this repo support coin futures?
|
Does this repo support coin futures? I don't see any implementation related to **/dapi/v1/**
|
open
|
2020-10-13T10:47:34Z
|
2020-10-23T18:24:27Z
|
https://github.com/sammchardy/python-binance/issues/604
|
[] |
keerthankumar
| 1
|
huggingface/transformers
|
python
| 36,774
|
Please support GGUF format for UMT5EncoderModel
|
### Feature request
```python
import torch
from transformers import UMT5EncoderModel
from huggingface_hub import hf_hub_download
path = hf_hub_download(
repo_id="city96/umt5-xxl-encoder-gguf", filename="umt5-xxl-encoder-Q8_0.gguf"
)
text_encoder = UMT5EncoderModel.from_pretrained(
"Wan-AI/Wan2.1-I2V-14B-480P-Diffusers", # or any Hub path with the correct config
subfolder="text_encoder",
gguf_file=path,
torch_dtype=torch.bfloat16,
)
```
```
umt5-xxl-encoder-Q8_0.gguf: 100%|████████████████████████████| 6.04G/6.04G [07:46<00:00, 12.9MB/s]
config.json: 100%|███████████████████████████████████████████████████████| 854/854 [00:00<?, ?B/s]
Traceback (most recent call last):
File "C:\aiOWN\diffuser_webui\UMT.py", line 9, in <module>
text_encoder = UMT5EncoderModel.from_pretrained(
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\transformers\modeling_utils.py", line 271, in _wrapper
return func(*args, **kwargs)
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\transformers\modeling_utils.py", line 4158, in from_pretrained
state_dict = load_gguf_checkpoint(gguf_path, return_tensors=True, model_to_load=dummy_model)["tensors"]
File "C:\aiOWN\diffuser_webui\venv\lib\site-packages\transformers\modeling_gguf_pytorch_utils.py", line 399, in load_gguf_checkpoint
raise ValueError(f"GGUF model with architecture {architecture} is not supported yet.")
ValueError: GGUF model with architecture t5encoder is not supported yet.
```
### Motivation
It will help low VRAM users to run WAN
### Your contribution
I am not a developer so can't contribute
|
open
|
2025-03-17T19:32:17Z
|
2025-03-19T07:24:19Z
|
https://github.com/huggingface/transformers/issues/36774
|
[
"Feature request"
] |
nitinmukesh
| 2
|
ivy-llc/ivy
|
numpy
| 28,118
|
Fix Frontend Failing Test: torch - tensor.torch.Tensor.reshape_as
|
To-do List: https://github.com/unifyai/ivy/issues/27498
|
closed
|
2024-01-30T09:55:24Z
|
2024-01-30T10:01:24Z
|
https://github.com/ivy-llc/ivy/issues/28118
|
[
"Sub Task"
] |
Aryan8912
| 1
|
labmlai/annotated_deep_learning_paper_implementations
|
deep-learning
| 275
|
How to Contribute to This Repository
|
Hello,
I’ve been learning various AI/ML algorithms recently, and my notes are quite similar to the content of your repository. This excellent work has also helped me understand some of the algorithms, and I’d love to contribute by adding papers such as those on VAE, CLIP, BLIP, etc.
Before I start, I have a question about the implementation: what tools did you use to generate the web pages for this repository? Are they hard-coded, or did you use a framework?
Looking forward to your response and to contributing to this project!
|
open
|
2024-09-15T08:51:24Z
|
2025-02-17T20:28:55Z
|
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/275
|
[] |
terancejiang
| 4
|
iperov/DeepFaceLab
|
machine-learning
| 695
|
Merge Quick96 produces frames with original dst face
|
Hi, everything seems to have worked up to step 6 (train Quick96). I did 120k iterations. When I run step 7 to merge, it simply keeps the original dst video face; it doesn't use the src face on any of the frames. I get the line "no faces found for xxxxx.png, copying without faces" for every frame.
Thanks
|
closed
|
2020-04-04T21:42:58Z
|
2023-09-21T04:25:07Z
|
https://github.com/iperov/DeepFaceLab/issues/695
|
[] |
glueydoob1
| 1
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 20,209
|
ImportError: cannot import name '_TORCHMETRICS_GREATER_EQUAL_1_0_0' from 'pytorch_lightning.utilities.imports'
|
### Bug description
─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py:3553 in run_code │
│ │
│ 3550 │ │ │ │ elif async_ : │
│ 3551 │ │ │ │ │ await eval(code_obj, self.user_global_ns, self.user_ns) │
│ 3552 │ │ │ │ else: │
│ ❱ 3553 │ │ │ │ │ exec(code_obj, self.user_global_ns, self.user_ns) │
│ 3554 │ │ │ finally: │
│ 3555 │ │ │ │ # Reset our crash handler in place │
│ 3556 │ │ │ │ sys.excepthook = old_excepthook │
│ in <cell line: 6>:6 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/__init__.py:27 in <module> │
│ │
│ 24 │
│ 25 from lightning_fabric.utilities.seed import seed_everything # noqa: E402 │
│ 26 from lightning_fabric.utilities.warnings import disable_possible_user_warnings # noqa: │
│ ❱ 27 from pytorch_lightning.callbacks import Callback # noqa: E402 │
│ 28 from pytorch_lightning.core import LightningDataModule, LightningModule # noqa: E402 │
│ 29 from pytorch_lightning.trainer import Trainer # noqa: E402 │
│ 30 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/__init__.py:29 in <module> │
│ │
│ 26 from pytorch_lightning.callbacks.on_exception_checkpoint import OnExceptionCheckpoint │
│ 27 from pytorch_lightning.callbacks.prediction_writer import BasePredictionWriter │
│ 28 from pytorch_lightning.callbacks.progress import ProgressBar, RichProgressBar, TQDMProgr │
│ ❱ 29 from pytorch_lightning.callbacks.pruning import ModelPruning │
│ 30 from pytorch_lightning.callbacks.rich_model_summary import RichModelSummary │
│ 31 from pytorch_lightning.callbacks.spike import SpikeDetection │
│ 32 from pytorch_lightning.callbacks.stochastic_weight_avg import StochasticWeightAveraging │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/pruning.py:32 in <module> │
│ │
│ 29 │
│ 30 import pytorch_lightning as pl │
│ 31 from pytorch_lightning.callbacks.callback import Callback │
│ ❱ 32 from pytorch_lightning.core.module import LightningModule │
│ 33 from pytorch_lightning.utilities.exceptions import MisconfigurationException │
│ 34 from pytorch_lightning.utilities.rank_zero import rank_zero_debug, rank_zero_only │
│ 35 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/core/__init__.py:16 in <module> │
│ │
│ 13 # limitations under the License. │
│ 14 │
│ 15 from pytorch_lightning.core.datamodule import LightningDataModule │
│ ❱ 16 from pytorch_lightning.core.module import LightningModule │
│ 17 │
│ 18 __all__ = ["LightningDataModule", "LightningModule"] │
│ 19 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/core/module.py:62 in <module> │
│ │
│ 59 from pytorch_lightning.core.optimizer import LightningOptimizer │
│ 60 from pytorch_lightning.core.saving import _load_from_checkpoint │
│ 61 from pytorch_lightning.loggers import Logger │
│ ❱ 62 from pytorch_lightning.trainer import call │
│ 63 from pytorch_lightning.trainer.connectors.logger_connector.fx_validator import _FxValida │
│ 64 from pytorch_lightning.trainer.connectors.logger_connector.result import _get_default_dt │
│ 65 from pytorch_lightning.utilities import GradClipAlgorithmType │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/__init__.py:17 in <module> │
│ │
│ 14 """""" │
│ 15 │
│ 16 from lightning_fabric.utilities.seed import seed_everything │
│ ❱ 17 from pytorch_lightning.trainer.trainer import Trainer │
│ 18 │
│ 19 __all__ = ["Trainer", "seed_everything"] │
│ 20 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py:45 in <module> │
│ │
│ 42 from pytorch_lightning.loggers.csv_logs import CSVLogger │
│ 43 from pytorch_lightning.loggers.tensorboard import TensorBoardLogger │
│ 44 from pytorch_lightning.loggers.utilities import _log_hyperparams │
│ ❱ 45 from pytorch_lightning.loops import _PredictionLoop, _TrainingEpochLoop │
│ 46 from pytorch_lightning.loops.evaluation_loop import _EvaluationLoop │
│ 47 from pytorch_lightning.loops.fit_loop import _FitLoop │
│ 48 from pytorch_lightning.loops.utilities import _parse_loop_limits, _reset_progress │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/__init__.py:15 in <module> │
│ │
│ 12 # See the License for the specific language governing permissions and │
│ 13 # limitations under the License. │
│ 14 from pytorch_lightning.loops.loop import _Loop # noqa: F401 isort: skip (avoids circula │
│ ❱ 15 from pytorch_lightning.loops.evaluation_loop import _EvaluationLoop # noqa: F401 │
│ 16 from pytorch_lightning.loops.fit_loop import _FitLoop # noqa: F401 │
│ 17 from pytorch_lightning.loops.optimization import _AutomaticOptimization, _ManualOptimiza │
│ 18 from pytorch_lightning.loops.prediction_loop import _PredictionLoop # noqa: F401 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/loops/evaluation_loop.py:39 in │
│ <module> │
│ │
│ 36 │ _request_dataloader, │
│ 37 │ _resolve_overfit_batches, │
│ 38 ) │
│ ❱ 39 from pytorch_lightning.trainer.connectors.logger_connector.result import _OUT_DICT, _Res │
│ 40 from pytorch_lightning.trainer.states import RunningStage, TrainerFn │
│ 41 from pytorch_lightning.utilities.combined_loader import CombinedLoader │
│ 42 from pytorch_lightning.utilities.data import has_len_all_ranks │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/__ │
│ init__.py:1 in <module> │
│ │
│ ❱ 1 from pytorch_lightning.trainer.connectors.logger_connector.logger_connector import _Logg │
│ 2 │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/lo │
│ gger_connector.py:25 in <module> │
│ │
│ 22 from lightning_fabric.utilities import move_data_to_device │
│ 23 from lightning_fabric.utilities.apply_func import convert_tensors_to_scalars │
│ 24 from pytorch_lightning.loggers import CSVLogger, Logger, TensorBoardLogger │
│ ❱ 25 from pytorch_lightning.trainer.connectors.logger_connector.result import _METRICS, _OUT_ │
│ 26 from pytorch_lightning.utilities.rank_zero import WarningCache │
│ 27 │
│ 28 warning_cache = WarningCache() │
│ │
│ /usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/connectors/logger_connector/re │
│ sult.py:29 in <module> │
│ │
│ 26 from lightning_fabric.utilities.distributed import _distributed_is_initialized │
│ 27 from pytorch_lightning.utilities.data import extract_batch_size │
│ 28 from pytorch_lightning.utilities.exceptions import MisconfigurationException │
│ ❱ 29 from pytorch_lightning.utilities.imports import _TORCHMETRICS_GREATER_EQUAL_1_0_0 │
│ 30 from pytorch_lightning.utilities.memory import recursive_detach │
│ 31 from pytorch_lightning.utilities.rank_zero import WarningCache, rank_zero_warn │
│ 32 from pytorch_lightning.utilities.warnings import PossibleUserWarning │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name '_TORCHMETRICS_GREATER_EQUAL_1_0_0' from 'pytorch_lightning.utilities.imports'
(/usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/imports.py)
### What version are you seeing the problem on?
v2.2, v2.3, v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version: (2.3.1)
#- PyTorch Version: (2.3.1)
#- Python version: (3.10.12)
#- OS: Google Collab
#- CUDA/cuDNN version: 12.1
#- How you installed Lightning: (`pip`)
```
</details>
### More info
Also, in `imports.py` in the utilities folder there is no `_TORCHMETRICS_GREATER_EQUAL_1_0_0`.
Is there another version of pytorch-lightning that includes it, or how can I solve this? Please help me!
|
open
|
2024-08-17T18:48:44Z
|
2024-08-17T18:48:58Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20209
|
[
"bug",
"needs triage",
"ver: 2.2.x",
"ver: 2.4.x",
"ver: 2.3.x"
] |
Horizon-369
| 0
|
kennethreitz/responder
|
flask
| 392
|
Project status - doc URL broken
|
Is this repo the official one now?
https://python-responder.org/ is not responding
|
closed
|
2019-09-25T00:50:51Z
|
2019-09-27T21:55:12Z
|
https://github.com/kennethreitz/responder/issues/392
|
[] |
michela
| 1
|
Lightning-AI/pytorch-lightning
|
machine-learning
| 19,780
|
Does `fabric.save()` save on rank 0?
|
### Bug description
I'm trying to save a simple object using `fabric.save()` but always get the same error, and I don't know if I'm missing something about the way checkpoints are saved and loaded or if it's a bug. The error occurs when saving; with the `fabric.barrier()` in place the state.pkl file is written correctly, yet I still always get `RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [10.1.103.33]:62095`.
I've already read the [documentation](https://lightning.ai/docs/fabric/stable/guide/checkpoint/distributed_checkpoint.html) but I still don't understand why this is happening.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
import lightning as L
def setup():
fabric = L.Fabric(accelerator="cpu", devices=2)
fabric.launch(main)
def main(fabric):
state = {"a": 1, "b": 2}
if fabric.global_rank == 0:
fabric.save("state.pkl", state)
fabric.barrier()
if __name__ == "__main__":
setup()
```
### Error messages and logs
```
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
----------------------------------------------------------------------------------------------------
distributed_backend=gloo
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "/mnt/DATOS-KOALA/Documents/Doctorado/test_fabric/main.py", line 20, in <module>
setup()
File "/mnt/DATOS-KOALA/Documents/Doctorado/test_fabric/main.py", line 7, in setup
fabric.launch(main)
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 859, in launch
return self._wrap_and_launch(function, self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 944, in _wrap_and_launch
return launcher.launch(to_run, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/strategies/launchers/subprocess_script.py", line 107, in launch
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 950, in _wrap_with_setup
return to_run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/Documents/Doctorado/test_fabric/main.py", line 16, in main
fabric.barrier()
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/fabric.py", line 545, in barrier
self._strategy.barrier(name=name)
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/lightning/fabric/strategies/ddp.py", line 162, in barrier
torch.distributed.barrier()
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 72, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/DATOS-KOALA/anaconda3/envs/llmcal/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 3446, in barrier
work.wait()
RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [10.1.103.33]:62095
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: 12.1
* Lightning:
- lightning: 2.2.2
- lightning-utilities: 0.11.2
- pytorch-lightning: 2.2.2
- torch: 2.2.2
- torchmetrics: 1.3.2
* Packages:
- aiohttp: 3.9.4
- aiosignal: 1.3.1
- attrs: 23.2.0
- filelock: 3.13.4
- frozenlist: 1.4.1
- fsspec: 2024.3.1
- idna: 3.7
- jinja2: 3.1.3
- lightning: 2.2.2
- lightning-utilities: 0.11.2
- markupsafe: 2.1.5
- mpmath: 1.3.0
- multidict: 6.0.5
- networkx: 3.3
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.19.3
- nvidia-nvjitlink-cu12: 12.4.127
- nvidia-nvtx-cu12: 12.1.105
- packaging: 24.0
- pip: 23.3.1
- pytorch-lightning: 2.2.2
- pyyaml: 6.0.1
- setuptools: 68.2.2
- sympy: 1.12
- torch: 2.2.2
- torchmetrics: 1.3.2
- tqdm: 4.66.2
- typing-extensions: 4.11.0
- wheel: 0.41.2
- yarl: 1.9.4
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.12.2
- release: 6.5.0-26-generic
- version: #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Mar 12 10:22:43 UTC 2
</details>
### More info
_No response_
cc @carmocca @justusschock @awaelchli
|
closed
|
2024-04-15T20:05:40Z
|
2024-04-16T11:45:38Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19780
|
[
"question",
"fabric"
] |
LautaroEst
| 3
|
vitalik/django-ninja
|
django
| 391
|
Using a schema outside of Ninja
|
Firstly, I hope you're well and safe!
I may have overlooked something, but I am wondering how I can use a schema outside of Ninja.
My use case involves channels; it could just be that it's late here, but I couldn't get it to parse. Any ideas?
|
closed
|
2022-03-15T20:15:01Z
|
2022-03-21T08:18:20Z
|
https://github.com/vitalik/django-ninja/issues/391
|
[] |
bencleary
| 2
|
Nemo2011/bilibili-api
|
api
| 130
|
[Suggestion] Move the Picture class out of session.py to another location
|
Bilibili video comments can now include pictures, and I posted a test picture under one of my own videos.
The following code fetches the picture list from a comment, and the key/value pairs can be mapped onto a Picture object.
```python
from bilibili_api import comment, sync, bvid2aid
from bilibili_api.session import Picture
async def main():
c = await comment.get_comments(bvid2aid("BV1vU4y1r7VD"), comment.CommentResourceType.VIDEO, 1)
for cmt in c["replies"]:
if cmt["content"].get("pictures"):
for p in cmt['content']["pictures"]:
print(p)
print(Picture(
height=p["img_height"],
imageType=p["img_src"].split(".")[-1],
original=1,
size=p["img_size"],
url=p["img_src"],
width=p["img_width"]
))
sync(main())
```
The result is as follows:
```python
{'img_src': 'https://i0.hdslb.com/bfs/new_dyn/e54fc3170aa56ccb9e735a2b6ed9d5fe188888131.png', 'img_width': 709, 'img_height': 709, 'img_size': 133.85}
Picture(height=709, imageType='png', original=1, size=133.85, url='https://i0.hdslb.com/bfs/new_dyn/e54fc3170aa56ccb9e735a2b6ed9d5fe188888131.png', width=709)
```
Actually, not moving it wouldn't be a big problem either, since this class only implements a single feature: downloading to local disk.
We could first discuss whether the move is necessary, or add other features to this class, e.g. converting to a PIL Image object, or converting to the picture-stream format used for posting dynamics.
|
closed
|
2023-01-04T10:02:02Z
|
2023-01-08T11:38:43Z
|
https://github.com/Nemo2011/bilibili-api/issues/130
|
[] |
Drelf2018
| 11
|
huggingface/transformers
|
python
| 36,574
|
After tokenizers upgrade, the length of the token does not correspond to the length of the model
|
### System Info
#36532
1. My version is 4.48.1, a relatively new version. After referring to the document and executing the code, my inference is still abnormal and the result is the same as the original inference.
```python
import torch
import transformers
from transformers import PegasusForConditionalGeneration

# Load the Pegasus model
#model = PegasusForConditionalGeneration.from_pretrained('google/pegasus-large')

# Get the model parameters
params = model.state_dict()
print(params)

# Get the word-embedding weights (Pegasus stores them in `model.shared.weight`)
embeddings = params['model.shared.weight']
print(embeddings)

# Compute statistics of the pre-expansion embeddings
pre_expansion_embeddings = embeddings[:-3, :]  # exclude the last 3 tokens
mu = torch.mean(pre_expansion_embeddings, dim=0)  # mean
n = pre_expansion_embeddings.size()[0]  # number of embeddings
sigma = ((pre_expansion_embeddings - mu).T @ (pre_expansion_embeddings - mu)) / n  # covariance matrix

# Define a multivariate normal distribution
dist = torch.distributions.multivariate_normal.MultivariateNormal(
    mu, covariance_matrix=1e-5 * sigma
)

# Sample new embeddings
new_embeddings = torch.stack(tuple((dist.sample() for _ in range(3))), dim=0)
print(new_embeddings)

# Replace the last 3 token embeddings in the embedding layer
embeddings[-3:, :] = new_embeddings
params['model.shared.weight'][-3:, :] = new_embeddings

# Load the updated parameters
model.load_state_dict(params)
```
3. I also found that on NPU, although the token has changed, there are no errors and inference is normal. The transformers and tokenizers versions are the same; only the torch version is different. On NPU it is 2.1.0.
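The statistics in the snippet above can be checked with a plain NumPy sketch (toy shapes and hypothetical values), independent of the model or the tokenizers version:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy embedding matrix standing in for model.shared.weight: 10 tokens, dim 4.
emb = rng.normal(size=(10, 4))

# Mean and covariance of the pre-expansion rows (all but the last 3 tokens)
pre = emb[:-3]
mu = pre.mean(axis=0)
centered = pre - mu
sigma = centered.T @ centered / pre.shape[0]

# Draw 3 replacement embeddings from N(mu, 1e-5 * sigma); the tiny scale
# keeps the new rows close to the mean of the existing ones.
new_rows = rng.multivariate_normal(mu, 1e-5 * sigma, size=3)
emb[-3:] = new_rows

print(emb.shape)  # (10, 4)
```

If the new rows end up far from the mean of the old ones, the covariance computation (rather than the tokenizer) is the first thing to suspect.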
### Who can help?
hi,@Rocketknight1
Please don't close it yet; how can I adjust and adapt to the new version? Thanks.
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Please don't close it yet; how can I adjust and adapt to the new version? Thanks.
### Expected behavior
Please don't close it yet; how can I adjust and adapt to the new version? Thanks.
|
open
|
2025-03-06T06:18:28Z
|
2025-03-06T06:18:28Z
|
https://github.com/huggingface/transformers/issues/36574
|
[
"bug"
] |
CurtainRight
| 0
|
lukas-blecher/LaTeX-OCR
|
pytorch
| 27
|
The problms of mismatched evaluation metrics
|
Hi, thank you for your excellent work. I reproduced your work with the config file named default.yaml, but cannot get the same result (BLEU=0.74). I also found that the training loss increased after a few epochs. Can you give some advice?

|
closed
|
2021-07-20T02:50:03Z
|
2022-01-09T17:21:49Z
|
https://github.com/lukas-blecher/LaTeX-OCR/issues/27
|
[] |
Root970103
| 4
|
schemathesis/schemathesis
|
pytest
| 1,848
|
[schemathesis] Why is there no setup.py in v3.19.7?
|
Why is there no setup.py in v3.19.7?
https://pypi.org/simple/schemathesis/
How can I install it using `python setup.py install`?
(Background: I couldn't install using the pip command due to network limitations and had to install using the python command.)
|
closed
|
2023-10-18T10:21:25Z
|
2023-10-18T10:30:03Z
|
https://github.com/schemathesis/schemathesis/issues/1848
|
[] |
RayLXing
| 2
|
InstaPy/InstaPy
|
automation
| 6,595
|
No such file _followedPool.csv [Win]
|
## Expected Behavior
I expect to **be able to save and load** the followed pool into a csv, but it's not working on Windows
## Current Behavior
**Doesn't save the followed pool list.**
```
ERROR [2022-04-26 23:27:00] [XXXXXX] Error occurred while generating a user list from the followed pool!
b"[Errno 2] No such file or directory: 'C:\\\\Users\\\\XXXXX\\\\InstaPy\\\\logs\\\\\\\\XXXXXXXXX_followedPool.csv'"
```
The file does not exist
## Possible Solution (optional)
Could someone **point me** to the file I should check to debug?
## InstaPy configuration
pip install instapy
**Nothing else.**
Thanks
|
open
|
2022-04-27T04:20:55Z
|
2022-04-27T04:22:45Z
|
https://github.com/InstaPy/InstaPy/issues/6595
|
[] |
thekorko
| 0
|
home-assistant/core
|
python
| 141,124
|
ZHA doesn't connect to Sonoff Zigbee dongle. Baud rate problem?
|
### The problem
Can't get ZHA to connect to usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0. I noticed in the logs it is trying to connect at 460800 baud. I don't know if that has anything to do with it.
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
ZHA
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/zha/
### Diagnostics information
[zha.log](https://github.com/user-attachments/files/19404059/zha.log)
### Example YAML snippet
```yaml
I don't see any YAML for ZHA. I can tell you the port:
/dev/serial/by-id/usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0 -> ../../ttyUSB0
radio type: EZSP
baud rate: 115200
Flow Control: Software
```
### Anything in the logs that might be useful for us?
```txt
2025-03-22 11:25:04.105 DEBUG (MainThread) [homeassistant.components.zha] Failed to set up ZHA
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/bellows/uart.py", line 109, in reset
return await self._reset_future
^^^^^^^^^^^^^^^^^^^^^^^^
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/zha/__init__.py", line 156, in async_setup_entry
await zha_gateway.async_initialize()
File "/usr/local/lib/python3.13/site-packages/zha/application/gateway.py", line 271, in async_initialize
await self._async_initialize()
File "/usr/local/lib/python3.13/site-packages/zha/application/gateway.py", line 254, in _async_initialize
await self.application_controller.startup(auto_form=True)
File "/usr/local/lib/python3.13/site-packages/zigpy/application.py", line 220, in startup
await self.connect()
File "/usr/local/lib/python3.13/site-packages/bellows/zigbee/application.py", line 153, in connect
await self._ezsp.connect(use_thread=self.config[CONF_USE_THREAD])
File "/usr/local/lib/python3.13/site-packages/bellows/ezsp/__init__.py", line 138, in connect
await self.startup_reset()
File "/usr/local/lib/python3.13/site-packages/bellows/ezsp/__init__.py", line 127, in startup_reset
await self.reset()
File "/usr/local/lib/python3.13/site-packages/bellows/ezsp/__init__.py", line 146, in reset
await self._gw.reset()
File "/usr/local/lib/python3.13/site-packages/bellows/uart.py", line 108, in reset
async with asyncio_timeout(RESET_TIMEOUT):
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/asyncio/timeouts.py", line 116, in __aexit__
raise TimeoutError from exc_val
TimeoutError
Later in the logs:
2025-03-22 11:25:04.106 DEBUG (MainThread) [zigpy.serial] Opening a serial connection to '/dev/serial/by-id/usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0' (baudrate=115200, xonxoff=False, rtscts=False)
2025-03-22 11:25:06.115 DEBUG (MainThread) [zigpy.serial] Opening a serial connection to '/dev/serial/by-id/usb-Itead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_V2_9c414fdf5bd9ee118970b24c37b89984-if00-port0' (baudrate=460800, xonxoff=False, rtscts=False)
```
### Additional information
_No response_
|
open
|
2025-03-22T16:25:57Z
|
2025-03-23T15:39:35Z
|
https://github.com/home-assistant/core/issues/141124
|
[
"integration: zha"
] |
ncp1113
| 4
|
sktime/sktime
|
scikit-learn
| 7,489
|
[ENH] improved methodology test coverage for `DirectReductionForecaster` and `RecursiveReductionForecaster`
|
We should increase methodology test coverage for `DirectReductionForecaster` and `RecursiveReductionForecaster` - that is, testing across cases to confirm that the expected outputs are indeed produced.
My suggestion would be to do this in simple cases where we know the result and can calculate it by hand.
Example 1: window length of 2, data size of 4, no exogenous data.
Compare the output of fit/predict with that of manually performing linear regression.
Example 2: window length of 2, data size of 4, two rows of exogenous data.
Compare the output of fit/predict with that of manually performing linear regression - in concurrent and shifted `X` handling case.
Example 1b, 2b: the same, but pooled - two time series of size 4.
In each case, we compute the feature matrices manually and enter them as a fixed `np.ndarray`, and compare the outputs of `predict` to manual application of the linear regression forecaster.
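A hand-computable sketch of Example 1 (hypothetical values): the lagged feature matrix is built by hand and fit with ordinary least squares, which is what a reduction forecaster wrapping a linear regressor does internally.

```python
import numpy as np

# Window length 2, series of size 4, no exogenous data.
y = np.array([1.0, 2.0, 4.0, 7.0])

# Windowed design: each row is (y[t-2], y[t-1]) with target y[t]
X = np.array([[y[0], y[1]],
              [y[1], y[2]]])
targets = np.array([y[2], y[3]])

# Fit y[t] ~ a*y[t-2] + b*y[t-1] + c. With only two windows the system is
# underdetermined, so lstsq returns the minimum-norm solution.
A = np.hstack([X, np.ones((2, 1))])
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)

# One-step-ahead prediction from the last window (y[2], y[3])
pred = coef[0] * y[2] + coef[1] * y[3] + coef[2]
print(float(pred))  # approximately 11.8
```

Entering `A` as a fixed `np.ndarray` in the test, as proposed above, keeps the expected value exact and independent of the forecaster's own windowing code.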
|
closed
|
2024-12-06T20:25:17Z
|
2025-03-22T14:12:19Z
|
https://github.com/sktime/sktime/issues/7489
|
[
"module:forecasting",
"enhancement"
] |
fkiraly
| 2
|
lexiforest/curl_cffi
|
web-scraping
| 34
|
Errors occur when used with sanic
|
```
Exception in callback <bound method AsyncCurl.process_data of <curl_cffi.aio.AsyncCurl object at 0x7f1738a64310>>
handle: <TimerHandle AsyncCurl.process_data created at /home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py:39>
source_traceback: Object created at (most recent call last):
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/alex/workspace/parse_server/main.py", line 54, in <module>
app.run(host="0.0.0.0", port=15368, debug=True, auto_reload=True, workers=1, access_log=False)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 145, in run
self.__class__.serve(primary=self) # type: ignore
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 578, in serve
serve_single(primary_server_info.settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 206, in serve_single
serve(**server_settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 155, in serve
loop.run_forever()
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 117, in process_data
self.socket_action(sockfd, ev_bitmask)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 39, in timer_function
async_curl.timer = async_curl.loop.call_later(
Traceback (most recent call last):
File "uvloop/cbhandles.pyx", line 251, in uvloop.loop.TimerHandle._run
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 117, in process_data
self.socket_action(sockfd, ev_bitmask)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
TypeError: initializer for ctype 'void *' must be a cdata pointer, not NoneType
Exception in callback <bound method AsyncCurl.process_data of <curl_cffi.aio.AsyncCurl object at 0x7f1738a642b0>>
handle: <TimerHandle AsyncCurl.process_data created at /home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py:39>
source_traceback: Object created at (most recent call last):
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/alex/workspace/parse_server/main.py", line 54, in <module>
app.run(host="0.0.0.0", port=15368, debug=True, auto_reload=True, workers=1, access_log=False)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 145, in run
self.__class__.serve(primary=self) # type: ignore
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 578, in serve
serve_single(primary_server_info.settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 206, in serve_single
serve(**server_settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 155, in serve
loop.run_forever()
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 117, in process_data
self.socket_action(sockfd, ev_bitmask)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 39, in timer_function
async_curl.timer = async_curl.loop.call_later(
Traceback (most recent call last):
File "uvloop/cbhandles.pyx", line 251, in uvloop.loop.TimerHandle._run
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 117, in process_data
self.socket_action(sockfd, ev_bitmask)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
TypeError: initializer for ctype 'void *' must be a cdata pointer, not NoneType
Exception in callback <bound method AsyncCurl.process_data of <curl_cffi.aio.AsyncCurl object at 0x7f1738a64f40>>
handle: <TimerHandle AsyncCurl.process_data created at /home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py:39>
source_traceback: Object created at (most recent call last):
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/alex/workspace/parse_server/main.py", line 54, in <module>
app.run(host="0.0.0.0", port=15368, debug=True, auto_reload=True, workers=1, access_log=False)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 145, in run
self.__class__.serve(primary=self) # type: ignore
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 578, in serve
serve_single(primary_server_info.settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 206, in serve_single
serve(**server_settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 155, in serve
loop.run_forever()
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 98, in _force_timeout
self.socket_action(CURL_SOCKET_TIMEOUT, CURL_POLL_NONE)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 39, in timer_function
async_curl.timer = async_curl.loop.call_later(
Traceback (most recent call last):
File "uvloop/cbhandles.pyx", line 251, in uvloop.loop.TimerHandle._run
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 117, in process_data
self.socket_action(sockfd, ev_bitmask)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
TypeError: initializer for ctype 'void *' must be a cdata pointer, not NoneType
Exception in callback <bound method AsyncCurl.process_data of <curl_cffi.aio.AsyncCurl object at 0x7f1738a64f40>>
handle: <TimerHandle AsyncCurl.process_data created at /home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py:39>
source_traceback: Object created at (most recent call last):
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/alex/pycharm-2022.3.2/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/alex/workspace/parse_server/main.py", line 54, in <module>
app.run(host="0.0.0.0", port=15368, debug=True, auto_reload=True, workers=1, access_log=False)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 145, in run
self.__class__.serve(primary=self) # type: ignore
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/mixins/runner.py", line 578, in serve
serve_single(primary_server_info.settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 206, in serve_single
serve(**server_settings)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/sanic/server/runners.py", line 155, in serve
loop.run_forever()
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 117, in process_data
self.socket_action(sockfd, ev_bitmask)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 39, in timer_function
async_curl.timer = async_curl.loop.call_later(
Traceback (most recent call last):
File "uvloop/cbhandles.pyx", line 251, in uvloop.loop.TimerHandle._run
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 117, in process_data
self.socket_action(sockfd, ev_bitmask)
File "/home/alex/workspace/parse_server/venv/lib/python3.8/site-packages/curl_cffi/aio.py", line 113, in socket_action
lib.curl_multi_socket_action(self._curlm, sockfd, ev_bitmask, running_handle)
TypeError: initializer for ctype 'void *' must be a cdata pointer, not NoneType
```
|
closed
|
2023-03-29T16:39:24Z
|
2023-04-18T14:54:05Z
|
https://github.com/lexiforest/curl_cffi/issues/34
|
[] |
alexliyu7352
| 6
|
oegedijk/explainerdashboard
|
plotly
| 114
|
AttributeError: 'str' object has no attribute 'shape'
|
* After running the following code (I also showed the data structure of the input parameters in the second cell):

* I get the following error:

```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-144-a0b685ad6516> in <module>
1 db = ExplainerDashboard(explainer,
2 title="Loan Defaults Explainer",
----> 3 shap_interaction=False
4 )
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\dashboards.py in __init__(self, explainer, tabs, title, name, description, hide_header, header_hide_title, header_hide_selector, hide_poweredby, block_selector_callbacks, pos_label, fluid, mode, width, height, bootstrap, external_stylesheets, server, url_base_pathname, responsive, logins, port, importances, model_summary, contributions, whatif, shap_dependence, shap_interaction, decision_trees, **kwargs)
481 block_selector_callbacks=self.block_selector_callbacks,
482 pos_label=self.pos_label,
--> 483 fluid=fluid))
484 else:
485 tabs = self._convert_str_tabs(tabs)
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\dashboards.py in __init__(self, explainer, tabs, title, description, header_hide_title, header_hide_selector, hide_poweredby, block_selector_callbacks, pos_label, fluid, **kwargs)
85
86 self.selector = PosLabelSelector(explainer, name="0", pos_label=pos_label)
---> 87 self.tabs = [instantiate_component(tab, explainer, name=str(i+1), **kwargs) for i, tab in enumerate(tabs)]
88 assert len(self.tabs) > 0, 'When passing a list to tabs, need to pass at least one valid tab!'
89
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\dashboards.py in <listcomp>(.0)
85
86 self.selector = PosLabelSelector(explainer, name="0", pos_label=pos_label)
---> 87 self.tabs = [instantiate_component(tab, explainer, name=str(i+1), **kwargs) for i, tab in enumerate(tabs)]
88 assert len(self.tabs) > 0, 'When passing a list to tabs, need to pass at least one valid tab!'
89
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\dashboard_methods.py in instantiate_component(component, explainer, name, **kwargs)
625 kwargs = {k:v for k,v in kwargs.items() if k in init_argspec.args + init_argspec.kwonlyargs}
626 if "name" in init_argspec.args+init_argspec.kwonlyargs:
--> 627 component = component(explainer, name=name, **kwargs)
628 else:
629 print(f"ExplainerComponent {component} does not accept a name parameter, "
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\dashboard_components\composites.py in __init__(self, explainer, title, name, hide_predindexselector, hide_predictionsummary, hide_contributiongraph, hide_pdp, hide_contributiontable, hide_title, hide_selector, index_check, **kwargs)
274 hide_selector=hide_selector, **kwargs)
275 self.pdp = PdpComponent(explainer, name=self.name+"3",
--> 276 hide_selector=hide_selector, **kwargs)
277 self.contributions_list = ShapContributionsTableComponent(explainer, name=self.name+"4",
278 hide_selector=hide_selector, **kwargs)
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\dashboard_components\overview_components.py in __init__(self, explainer, title, name, subtitle, hide_col, hide_index, hide_title, hide_subtitle, hide_footer, hide_selector, hide_popout, hide_dropna, hide_sample, hide_gridlines, hide_gridpoints, hide_cats_sort, index_dropdown, feature_input_component, pos_label, col, index, dropna, sample, gridlines, gridpoints, cats_sort, description, **kwargs)
309
310 if self.col is None:
--> 311 self.col = self.explainer.columns_ranked_by_shap()[0]
312
313 if self.feature_input_component is not None:
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\explainers.py in inner(self, *args, **kwargs)
64 else:
65 kwargs.update(dict(pos_label=self.pos_label))
---> 66 return func(self, **kwargs)
67
68 return inner
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\explainers.py in columns_ranked_by_shap(self, pos_label)
990
991 """
--> 992 return self.mean_abs_shap_df(pos_label).Feature.tolist()
993
994 @insert_pos_label
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\explainers.py in inner(self, *args, **kwargs)
64 else:
65 kwargs.update(dict(pos_label=self.pos_label))
---> 66 return func(self, **kwargs)
67
68 return inner
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\explainers.py in mean_abs_shap_df(self, pos_label)
2356 .sort_values(ascending=False)
2357 .to_frame().rename_axis(index="Feature").reset_index()
-> 2358 .rename(columns={0:"MEAN_ABS_SHAP"}) for pos_label in self.labels]
2359 return self._mean_abs_shap_df[pos_label]
2360
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\explainers.py in <listcomp>(.0)
2356 .sort_values(ascending=False)
2357 .to_frame().rename_axis(index="Feature").reset_index()
-> 2358 .rename(columns={0:"MEAN_ABS_SHAP"}) for pos_label in self.labels]
2359 return self._mean_abs_shap_df[pos_label]
2360
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\explainers.py in inner(self, *args, **kwargs)
64 else:
65 kwargs.update(dict(pos_label=self.pos_label))
---> 66 return func(self, **kwargs)
67
68 return inner
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\explainerdashboard\explainers.py in get_shap_values_df(self, pos_label)
2257 return self._shap_values_df
2258 elif pos_label == 0:
-> 2259 return self._shap_values_df.multiply(-1)
2260 else:
2261 raise ValueError(f"pos_label={pos_label}, but should be either 1 or 0!")
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\ops\__init__.py in f(self, other, axis, level, fill_value)
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\ops\__init__.py in dispatch_to_series(left, right, func, str_rep, axis)
379 """
380 Wrapper function for Series arithmetic operations, to avoid
--> 381 code duplication.
382 """
383 assert special # non-special uses _flex_method_SERIES
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\internals\managers.py in apply(self, f, filter, **kwargs)
438 interpolation : type of interpolation, default 'linear'
439 qs : a scalar or list of the quantiles to be computed
--> 440 numeric_only : ignored
441
442 Returns
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\internals\blocks.py in apply(self, func, **kwargs)
388 # equivalent: _try_coerce_args(value) would not raise
389 blocks = self.putmask(mask, value, inplace=inplace)
--> 390 return self._maybe_downcast(blocks, downcast)
391
392 # we can't process the value, but nothing to do
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\ops\array_ops.py in arithmetic_op(left, right, op, str_rep)
195 def comparison_op(left: ArrayLike, right: Any, op) -> ArrayLike:
196 """
--> 197 Evaluate a comparison operation `=`, `!=`, `>=`, `>`, `<=`, or `<`.
198
199 Parameters
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\ops\array_ops.py in na_arithmetic_op(left, right, op, str_rep)
147 # In this case we do not fall back to the masked op, as that
148 # will handle complex numbers incorrectly, see GH#32047
--> 149 raise
150 result = masked_arith_op(left, right, op)
151
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\computation\expressions.py in evaluate(op, a, b, use_numexpr)
231 use_numexpr = use_numexpr and _bool_arith_check(op_str, a, b)
232 if use_numexpr:
--> 233 return _evaluate(op, op_str, a, b) # type: ignore
234 return _evaluate_standard(op, op_str, a, b)
235
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\computation\expressions.py in _evaluate_numexpr(op, op_str, a, b)
98 result = None
99
--> 100 if _can_use_numexpr(op, op_str, a, b, "evaluate"):
101 is_reversed = op.__name__.strip("_").startswith("r")
102 if is_reversed:
c:\users\firo obeid\anaconda3\envs\pushstashenv\lib\site-packages\pandas\core\computation\expressions.py in _can_use_numexpr(op, op_str, a, b, dtype_check)
74
75 # required min elements (otherwise we are adding overhead)
---> 76 if np.prod(a.shape) > _MIN_ELEMENTS:
77 # check for dtype compatibility
78 dtypes = set()
AttributeError: 'str' object has no attribute 'shape'
```
|
closed
|
2021-04-30T04:00:12Z
|
2021-04-30T17:32:20Z
|
https://github.com/oegedijk/explainerdashboard/issues/114
|
[] |
firobeid
| 2
|
schemathesis/schemathesis
|
pytest
| 1,754
|
[DOCUMENTATION] how to use graphql schema get_case_strategy
|
I found no documentation on how to use `get_case_strategy` on a GraphQL schema to test only queries, or just a subset of them.
Can you please expand the documentation or point out where I can find this information?
|
closed
|
2023-07-08T10:40:40Z
|
2023-10-11T06:52:50Z
|
https://github.com/schemathesis/schemathesis/issues/1754
|
[
"Type: Bug",
"Status: Needs Triage"
] |
devkral
| 2
|
jupyterlab/jupyter-ai
|
jupyter
| 737
|
jupyter-ai for a geospatial foundation model -- allow rendering of geojson file types?
|
### Problem
Hi - I'm working with [Clay](https://clay-foundation.github.io/model/index.html), a foundation model for Earth Observation data. I'm exploring what an integration with jupyter-ai would look like, but I have a couple of questions due to the differences between a vision-based model (Clay) and the language models used by all the integrations currently available in jupyter-ai.
There are a couple of tasks I would like to accomplish with jupyter-ai and Clay, to start. First, creating embeddings, which is currently possible via an API: the input is a GeoJSON polygon and the output is a list of GeoJSONs.
The second task is querying embeddings (similarity search): the user provides the ID of an embedding as input to the API (created in the task above, or otherwise) and receives back a list of recommended similar embeddings (which can then be converted to GeoJSON).
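To make the second task concrete, here is a minimal, dependency-free sketch of converting a list of similarity hits into a GeoJSON `FeatureCollection` that any GeoJSON renderer could display. The field names (`id`, `score`, `ring`) are assumptions for illustration, not Clay's actual response format:

```python
import json

# Hypothetical similarity-search response: each hit has an embedding id,
# a similarity score, and a polygon ring of [lon, lat] pairs. The field
# names are illustrative, not Clay's actual API.
hits = [
    {"id": "emb-1", "score": 0.97,
     "ring": [[106.8, -6.2], [106.9, -6.2], [106.9, -6.1], [106.8, -6.2]]},
    {"id": "emb-2", "score": 0.93,
     "ring": [[107.0, -6.3], [107.1, -6.3], [107.1, -6.2], [107.0, -6.3]]},
]

def hits_to_geojson(hits):
    """Wrap similarity hits in a GeoJSON FeatureCollection."""
    features = [
        {
            "type": "Feature",
            "geometry": {"type": "Polygon", "coordinates": [h["ring"]]},
            "properties": {"id": h["id"], "score": h["score"]},
        }
        for h in hits
    ]
    return {"type": "FeatureCollection", "features": features}

collection = hits_to_geojson(hits)
print(len(collection["features"]), "features")  # 2 features
json.dumps(collection)  # serializes cleanly for a .geojson file
```

A payload shaped like this is what a notebook-side GeoJSON renderer would consume, regardless of how the embeddings themselves are produced.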
### Proposed Solution
I can think of a few ways to make this happen:
- on Clay's end, we're working on translating text to Earth embeddings, which means in the future a language based query "identify plastic pollution sites in Indonesia" would be functional. Outputs would still need to be in a geojson format to be most useful -- this is a WIP and not ready for integration yet
- including [geojson-extension](https://github.com/jupyterlab/jupyter-renderers) to render geojson files within the notebook, and allowing it as valid output when using Clay as the model
- from the UI perspective, there are some mentions of language models explicitly (selecting the model) that could be changed to be more agnostic, or the selection of Clay could operate as a different dropdown / UX
### Additional context
I understand that jupyter-ai is meant to be vendor-agnostic, so perhaps the best option is to stick with text-based outputs, as best possible (within Clay, we’re working on translating EO data into text formats, which would be helpful, but is still a ways off), but I think it would be a loss if we didn’t consider the ways in which non-text inputs and outputs can be made available. This is probably part of a larger conversation about how jupyter-ai is set up, and so I wanted to create an issue to hear if there are other thoughts on the subject. Happy to provide more context as needed.
|
open
|
2024-04-19T18:01:35Z
|
2024-04-19T18:01:39Z
|
https://github.com/jupyterlab/jupyter-ai/issues/737
|
[
"enhancement"
] |
k-y-le
| 1
|
itamarst/eliot
|
numpy
| 461
|
Add support for Python 3.9
|
Hopefully just need to run tests.
|
closed
|
2020-12-15T18:33:37Z
|
2020-12-15T18:51:18Z
|
https://github.com/itamarst/eliot/issues/461
|
[] |
itamarst
| 0
|
fastapi/sqlmodel
|
fastapi
| 312
|
TypeError: issubclass() arg 1 must be a class
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import date
from typing import Optional, Union
from sqlmodel import SQLModel
class Person(SQLModel):
name: str
birthdate: Optional[Union[date, str]]
....
```
### Description
I want to store information about some people in a MySQL database. Due to the nature of the information, birth dates can be full dates (1940-05-02), month and year (1932-07), or years only (1965). The pydantic documentation says to use `Union` to accept multiple data types. However, when I try this, SQLModel raises `TypeError: issubclass() arg 1 must be a class`. I know the issue comes from the `Union`, because if I remove it the code works just fine.
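The error mechanism can be illustrated with the standard library alone. The sketch below assumes (as the traceback suggests) that SQLModel maps an annotation to a SQL column type via an `issubclass`-style check, which a `Union` annotation cannot satisfy because it is not a class:

```python
from datetime import date
from typing import Optional, Union, get_args, get_origin

ann = Optional[Union[date, str]]  # normalizes to Union[date, str, None]

# A Union alias is not a class, so a naive issubclass check raises the
# same TypeError as in the report:
try:
    issubclass(ann, date)
except TypeError as exc:
    print(exc)  # issubclass() arg 1 must be a class

# Introspecting the annotation first avoids the crash:
assert get_origin(ann) is Union
assert date in get_args(ann) and str in get_args(ann)
```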
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10.2
### Additional Context
_No response_
|
open
|
2022-04-23T23:47:10Z
|
2022-04-24T21:43:04Z
|
https://github.com/fastapi/sqlmodel/issues/312
|
[
"question"
] |
Maypher
| 1
|
vitalik/django-ninja
|
rest-api
| 444
|
Automated related ForeignKey Models in ModelSchema
|
Can related models be handled automatically?
**Model**
```
class ExampleModel(models.Model):
relation = models.ForeignKey(SomeOtherModel, on_delete=models.PROTECT, related_name='example')
```
**ModelSchema**
```
class ExampleSchema(ModelSchema):
class Config:
model = models.ExampleModel
model_fields = ['id', 'relation']
```
**API**
```
@api.post('/example')
def create_example(request, payload: schemas.ExampleSchema):
pl = payload.dict()
relation = get_object_or_404(SomeOtherModel, id=payload.relation)
pl['relation'] = relation
example = models.ExampleModel.objects.create(**pl)
return {'id': example.id}
```
The above example works with the payload `{'relation_id': 4}` if the object with ID 4 exists.
**Question 1**
Is there any specific reason why the field has to be named `relation_id` in the payload and not just `relation`?
**Question 2**
In the API, is it really necessary to do
```
pl = payload.dict()
relation = get_object_or_404(SomeOtherModel, id=payload.relation)
pl['relation'] = relation
```
or can that be somehow done automatically?
It seems that writing the `get_object_or_404()` call is currently needed; can it be omitted with another approach?
|
open
|
2022-05-11T12:07:11Z
|
2024-11-18T11:56:27Z
|
https://github.com/vitalik/django-ninja/issues/444
|
[] |
21adrian1996
| 8
|
coqui-ai/TTS
|
pytorch
| 3,308
|
[Bug] torch `weight_norm` issue
|
### Describe the bug
In some of the model files, we use the `weight_norm` function, which is imported as follows:
```python
from torch.nn.utils.parametrizations import weight_norm
```
which is giving me the error:
```
ImportError: cannot import name 'weight_norm' from 'torch.nn.utils.parametrizations' (/data/saiakarsh/envs/coqui/lib/python3.10/site-packages/torch/nn/utils/parametrizations.py)
```
but by changing it to
```python
from torch.nn.utils import weight_norm
```
the issue is solved.
I checked the commit history and found that `parametrizations` was added recently in commit #3176.
So, assuming it was intentional, am I missing something here? For example, is a torch update necessary?
### To Reproduce
for example you can go to [hifigan_generator.py](https://github.com/coqui-ai/TTS/blob/dev/TTS/vocoder/models/hifigan_generator.py), and check:
```python
from torch.nn.utils.parametrizations import weight_norm
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA A100 80GB PCIe",
"NVIDIA A100 80GB PCIe",
"NVIDIA A100 80GB PCIe",
"NVIDIA A100 80GB PCIe"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cu117",
"TTS": "0.20.6",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.13",
"version": "#97-Ubuntu SMP Mon Oct 2 21:09:21 UTC 2023"
}
}
```
### Additional context
_No response_
|
closed
|
2023-11-25T07:30:43Z
|
2024-02-08T11:03:38Z
|
https://github.com/coqui-ai/TTS/issues/3308
|
[
"bug"
] |
saiakarsh193
| 2
|
mwaskom/seaborn
|
pandas
| 3,383
|
Doc: Inheritance informations not shown
|
As an example, see the [docs about `FacetGrid`](https://seaborn.pydata.org/generated/seaborn.FacetGrid.html?highlight=grid#seaborn.FacetGrid). There you can't see which class it inherits from.
It is `Grid`.
That information should be added somehow.
|
closed
|
2023-06-14T10:27:38Z
|
2023-06-14T11:28:37Z
|
https://github.com/mwaskom/seaborn/issues/3383
|
[] |
buhtz
| 2
|
Kludex/mangum
|
asyncio
| 101
|
Document an example project that uses WebSockets
|
Probably will use Serverless Framework for this in a separate repo. Not sure yet.
|
closed
|
2020-05-04T08:18:20Z
|
2020-06-28T01:52:35Z
|
https://github.com/Kludex/mangum/issues/101
|
[
"docs",
"websockets"
] |
jordaneremieff
| 1
|
nolar/kopf
|
asyncio
| 368
|
[PR] Fix an issue with client connection errors raised from initial request
|
> <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2020-05-26 09:59:06+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/368
>
# What do these changes do?
Suppress all connection errors from the initial streaming request the same way as all connection errors from the continuous stream itself.
## Description
An issue was caused by an `aiohttp.client_exceptions.ServerDisconnectedError` raised from the initial streaming request:
```
[2020-05-25 10:44:44,924] kopf.reactor.running [ERROR ] Root task 'watcher of pods' is failed:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/kopf/reactor/running.py", line 453, in _root_task_checker
await coro
File "/usr/local/lib/python3.7/dist-packages/kopf/reactor/queueing.py", line 109, in watcher
async for raw_event in stream:
File "/usr/local/lib/python3.7/dist-packages/kopf/clients/watching.py", line 75, in infinite_watch
async for raw_event in stream:
File "/usr/local/lib/python3.7/dist-packages/kopf/clients/watching.py", line 112, in streaming_watch
async for raw_event in stream:
File "/usr/local/lib/python3.7/dist-packages/kopf/clients/watching.py", line 146, in continuous_watch
async for raw_input in stream:
File "/usr/local/lib/python3.7/dist-packages/kopf/clients/auth.py", line 78, in wrapper
async for item in fn(*args, **kwargs, context=context):
File "/usr/local/lib/python3.7/dist-packages/kopf/clients/watching.py", line 215, in watch_objs
sock_connect=settings.watching.connect_timeout,
File "/usr/local/lib/python3.7/dist-packages/aiohttp/client.py", line 504, in _request
await resp.start(conn)
File "/usr/local/lib/python3.7/dist-packages/aiohttp/client_reqrep.py", line 847, in start
message, payload = await self._protocol.read() # type: ignore # noqa
File "/usr/local/lib/python3.7/dist-packages/aiohttp/streams.py", line 591, in read
await self._waiter
aiohttp.client_exceptions.ServerDisconnectedError
```
The issue is not that the stream itself is broken, but that the initial streaming handshake is broken.
This kind of error should be treated the same as streaming errors, i.e. ignored, with the stream request restarted.
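The intended behaviour can be sketched with a small stdlib-only model. The exception class and stream opener below are stand-ins for aiohttp and the Kubernetes stream, not kopf's actual internals; the point is that an error raised while opening the stream is swallowed exactly like a mid-stream error, and the watch restarts:

```python
import asyncio

class ServerDisconnectedError(Exception):
    """Stand-in for aiohttp.client_exceptions.ServerDisconnectedError."""

async def open_stream(attempt: int):
    # Hypothetical stream: the first attempt dies during the initial
    # request, mimicking the broken handshake from the traceback above.
    if attempt == 0:
        raise ServerDisconnectedError("handshake failed")
    for item in ("event-1", "event-2"):
        yield item

async def infinite_watch(max_attempts: int = 3):
    events = []
    for attempt in range(max_attempts):
        try:
            async for event in open_stream(attempt):
                events.append(event)
            return events  # stream ended normally
        except ServerDisconnectedError:
            continue  # same handling as mid-stream errors: restart
    return events

print(asyncio.run(infinite_watch()))  # ['event-1', 'event-2']
```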
## Issues/PRs
> Issues: fixes #369
## Type of changes
- New feature (non-breaking change which adds functionality)
- Bug fix (non-breaking change which fixes an issue)
- Refactoring (non-breaking change which does not alter the behaviour)
- Mostly documentation and examples (no code changes)
- Mostly CI/CD automation, contribution experience
## Checklist
- [ ] The code addresses only the mentioned problem, and this problem only
- [ ] I think the code is well written
- [ ] Unit tests for the changes exist
- [ ] Documentation reflects the changes
- [ ] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-08-20 20:13:13+00:00_
>
Closed in favor of https://github.com/nolar/kopf/pull/512
|
closed
|
2020-08-18T20:04:50Z
|
2020-09-09T22:04:19Z
|
https://github.com/nolar/kopf/issues/368
|
[
"bug",
"archive"
] |
kopf-archiver[bot]
| 1
|
d2l-ai/d2l-en
|
machine-learning
| 1,701
|
suggestion: rename train_ch3 to something more generic
|
The [train_ch3](https://github.com/d2l-ai/d2l-en/blob/master/d2l/torch.py#L326) function could be renamed to 'train_loop_v1' or something like that, since it is quite generic, and is used in several chapters (eg sec 4.6 on dropout).
I would also suggest removing the hard-coded assertions about train/test accuracy from this function, since they may fail if different arguments are passed.
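A minimal, framework-free sketch of both suggestions (the name `train_loop_v1` and the `step_fn` callback are hypothetical): the accuracy check becomes an opt-in argument rather than a hard-coded assert:

```python
def train_loop_v1(step_fn, num_epochs, *, assert_min_acc=None):
    """Generic training loop (name hypothetical, per the suggestion).

    step_fn(epoch) runs one epoch and returns its accuracy. The accuracy
    check is an opt-in argument instead of a hard-coded assert, so the
    loop no longer fails when callers pass different arguments.
    """
    history = []
    for epoch in range(num_epochs):
        acc = step_fn(epoch)
        history.append(acc)
        if assert_min_acc is not None:
            assert acc >= assert_min_acc, f"epoch {epoch}: acc {acc}"
    return history

# Toy usage with an integer "accuracy" that improves each epoch:
print(train_loop_v1(lambda e: 50 + 10 * e, 3))  # [50, 60, 70]
```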
Similarly I would rename [predict_ch3](https://github.com/d2l-ai/d2l-en/blob/master/d2l/torch.py#L341) to something like 'predict_fashion_mnist', since you hard-code the assumption about the dataset inside the function.
|
closed
|
2021-03-29T20:13:53Z
|
2021-03-31T17:44:43Z
|
https://github.com/d2l-ai/d2l-en/issues/1701
|
[] |
murphyk
| 1
|
mars-project/mars
|
scikit-learn
| 2,917
|
[BUG] mars client timeout after cancel subtask in notebook
|
**Describe the bug**
When running the following code in a notebook cell, cancelling it midway, and re-executing it, Mars throws a timeout error:
```
urldf = df.groupby(["id"])["trd_longitude","trd_latitude","id"].apply(lambda x: x.sum()).reset_index().execute()
print(urldf.head(2).execute())
```
timeout error stack:
```
/root/miniconda3/lib/python3.7/site-packages/mars/dataframe/groupby/getitem.py:48: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
indexed = groupby.op.build_mock_groupby()[self.selection]
2022-04-13 20:38:17,152 - mars.deploy.oscar.session - INFO - Time consuming to generate a tileable graph is 0.004700660705566406s with address http://11.72.5.50:56741, session id xofsti2ZB62dPMp5X8w5BHEx
2022-04-13 20:38:21,303 - mars.services.web.core - WARNING - Request http://11.72.5.50:56741/api/session/xofsti2ZB62dPMp5X8w5BHEx/task/3bsProIFVZFc3VZTqYqToBy1 timeout, requests params is {'params': {'action': 'progress'}}, ex is 'Timeout during request'. sleep 20 seconds and retry 2rd times again.
---------------------------------------------------------------------------
TimeoutError Traceback (most recent call last)
/tmp/ipykernel_88934/2699780381.py in <module>
18 #lambda x: pd.Series([0, 0], index=['pred', 'scores'])
19
---> 20 main()
/tmp/ipykernel_88934/2699780381.py in main()
2
3
----> 4 urldf = df.groupby(["id"])["trd_longitude","trd_latitude","id"].apply(lambda x: x.sum()).reset_index().execute()
5 print(urldf.head(2).execute())
6
~/miniconda3/lib/python3.7/site-packages/mars/core/entity/tileables.py in execute(self, session, **kw)
462
463 def execute(self, session=None, **kw):
--> 464 result = self.data.execute(session=session, **kw)
465 if isinstance(result, TILEABLE_TYPE):
466 return self
~/miniconda3/lib/python3.7/site-packages/mars/core/entity/executable.py in execute(self, session, **kw)
136
137 session = _get_session(self, session)
--> 138 return execute(self, session=session, **kw)
139
140 def _check_session(self, session: SessionType, action: str):
~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in execute(tileable, session, wait, new_session_kwargs, show_progress, progress_update_interval, *tileables, **kwargs)
1865 show_progress=show_progress,
1866 progress_update_interval=progress_update_interval,
-> 1867 **kwargs,
1868 )
1869
~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in execute(self, tileable, show_progress, warn_duplicated_execution, *tileables, **kwargs)
1654 try:
1655 execution_info: ExecutionInfo = fut.result(
-> 1656 timeout=self._isolated_session.timeout
1657 )
1658 except KeyboardInterrupt: # pragma: no cover
~/miniconda3/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
433 raise CancelledError()
434 elif self._state == FINISHED:
--> 435 return self.__get_result()
436 else:
437 raise TimeoutError()
~/miniconda3/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in _execute(session, wait, show_progress, progress_update_interval, cancelled, *tileables, **kwargs)
1811 **kwargs,
1812 ):
-> 1813 execution_info = await session.execute(*tileables, **kwargs)
1814
1815 def _attach_session(future: asyncio.Future):
~/miniconda3/lib/python3.7/site-packages/mars/deploy/oscar/session.py in execute(self, *tileables, **kwargs)
997 task_name=task_name,
998 fuse_enabled=fuse_enabled,
--> 999 extra_config=extra_config,
1000 )
1001
~/miniconda3/lib/python3.7/site-packages/mars/services/task/api/web.py in submit_tileable_graph(self, graph, task_name, fuse_enabled, extra_config)
229 method="POST",
230 headers={"Content-Type": "application/octet-stream"},
--> 231 data=body,
232 )
233 return res.body.decode().strip()
~/miniconda3/lib/python3.7/site-packages/mars/services/web/core.py in _request_url(self, method, path, wrap_timeout_exception, **kwargs)
240 except HTTPTimeoutError as ex:
241 if wrap_timeout_exception:
--> 242 raise TimeoutError(str(ex)) from None
243 else:
244 raise ex
TimeoutError: Timeout during request
```
The Mars cluster is a 100-node cluster with 8 cores per node, and I'm using the Mars web API to connect to it.
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: 3.7.9
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
|
open
|
2022-04-13T12:40:15Z
|
2022-04-27T07:11:19Z
|
https://github.com/mars-project/mars/issues/2917
|
[] |
chaokunyang
| 0
|
microsoft/nni
|
tensorflow
| 5,325
|
All Trial jobs failed from MNIST example of nni
|
**Describe the issue**:
All trail failed on "nnictl create --config nni-2.10\examples\trials\mnist-pytorch\config_detailed.yml".
But I can run it properly without nni
(base) C:\Users\vanil>python nni-2.10\examples\trials\mnist-pytorch\mnist.py
C:\Users\vanil\anaconda3\lib\site-packages\nni\runtime\platform\standalone.py:32: RuntimeWarning: Running trial code without runtime. Please check the tutorial if you are new to NNI: https://nni.readthedocs.io/en/stable/tutorials/hpo_quickstart_pytorch/main.html
warnings.warn(warning_message, RuntimeWarning)
{'data_dir': './data', 'batch_size': 64, 'batch_num': None, 'hidden_size': 512, 'lr': 0.01, 'momentum': 0.5, 'epochs': 10, 'seed': 1, 'no_cuda': False, 'log_interval': 1000}
[2023-01-29 17:30:13] ←[32mIntermediate result: 97.01 (Index 0)←[0m
[2023-01-29 17:31:03] ←[32mIntermediate result: 98.1 (Index 1)←[0m
[2023-01-29 17:31:56] ←[32mIntermediate result: 98.42 (Index 2)←[0m
[2023-01-29 17:33:06] ←[32mIntermediate result: 98.56 (Index 3)←[0m
**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Windows 10
- Server OS (for remote mode only):
- Python version: 3.8
- PyTorch/TensorFlow version: 1.10
- Is conda/virtualenv/venv used?: Yes
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
{
"params": {
"experimentType": "hpo",
"trialCommand": "python mnist.py",
"trialCodeDirectory": "C:\\Users\\vanil\\nni-2.10\\examples\\trials\\mnist-pytorch",
"trialConcurrency": 4,
"trialGpuNumber": 0,
"maxExperimentDuration": "1h",
"maxTrialNumber": 10,
"useAnnotation": false,
"debug": false,
"logLevel": "info",
"experimentWorkingDirectory": "C:\\Users\\vanil\\nni-experiments",
"tuner": {
"name": "TPE",
"classArgs": {
"optimize_mode": "maximize"
}
},
"trainingService": {
"platform": "local",
"trialCommand": "python mnist.py",
"trialCodeDirectory": "C:\\Users\\vanil\\nni-2.10\\examples\\trials\\mnist-pytorch",
"trialGpuNumber": 0,
"debug": false,
"useActiveGpu": true,
"maxTrialNumberPerGpu": 1,
"reuseMode": false
}
},
"execDuration": "42s",
"nextSequenceId": 12,
"revision": 17
}
**Log message**:
- nnimanager.log:
[2023-01-29 17:51:20] INFO (main) Start NNI manager
[2023-01-29 17:51:20] INFO (NNIDataStore) Datastore initialization done
[2023-01-29 17:51:20] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2023-01-29 17:51:20] INFO (RestServer) REST server started.
[2023-01-29 17:51:20] INFO (NNIManager) Starting experiment: ae130u9f
[2023-01-29 17:51:20] INFO (NNIManager) Setup training service...
[2023-01-29 17:51:20] INFO (LocalTrainingService) Construct local machine training service.
[2023-01-29 17:51:20] INFO (NNIManager) Setup tuner...
[2023-01-29 17:51:20] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2023-01-29 17:51:22] INFO (NNIManager) Add event listeners
[2023-01-29 17:51:22] INFO (LocalTrainingService) Run local machine training service.
[2023-01-29 17:51:22] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2023-01-29 17:51:22] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size": 128, "lr": 0.001, "momentum": 0.8719471722758799}, "parameter_index": 0}
[2023-01-29 17:51:22] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"batch_size": 64, "hidden_size": 1024, "lr": 0.1, "momentum": 0.9431249121895658}, "parameter_index": 0}
[2023-01-29 17:51:22] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size": 512, "lr": 0.0001, "momentum": 0.4171615260590785}, "parameter_index": 0}
[2023-01-29 17:51:22] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size": 512, "lr": 0.0001, "momentum": 0.3612376708356295}, "parameter_index": 0}
[2023-01-29 17:51:27] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size": 128, "lr": 0.001, "momentum": 0.8719471722758799}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-01-29 17:51:27] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 1,
hyperParameters: {
value: '{"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"batch_size": 64, "hidden_size": 1024, "lr": 0.1, "momentum": 0.9431249121895658}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-01-29 17:51:27] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 2,
hyperParameters: {
value: '{"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size": 512, "lr": 0.0001, "momentum": 0.4171615260590785}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-01-29 17:51:27] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 3,
hyperParameters: {
value: '{"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size": 512, "lr": 0.0001, "momentum": 0.3612376708356295}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-01-29 17:51:37] INFO (NNIManager) Trial job WzycO status changed from WAITING to FAILED
[2023-01-29 17:51:37] INFO (NNIManager) Trial job BaFxU status changed from WAITING to FAILED
[2023-01-29 17:51:37] INFO (NNIManager) Trial job NW7SR status changed from WAITING to FAILED
[2023-01-29 17:51:38] INFO (NNIManager) Trial job ASmzF status changed from WAITING to FAILED
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**:
|
closed
|
2023-01-30T01:58:07Z
|
2023-02-08T22:03:44Z
|
https://github.com/microsoft/nni/issues/5325
|
[] |
yiqiaoc11
| 3
|
openapi-generators/openapi-python-client
|
rest-api
| 958
|
Invalid code generated for nullable discriminated union
|
**Describe the bug**
Given the schema below, using the generated code like:
```python
from test_client.models.demo import Demo
from test_client.models.a import A
Demo(example_union=A()).to_dict()
```
fails with:
```
Traceback (most recent call last):
File "/Users/eric/Desktop/test/test.py", line 4, in <module>
Demo(example_union=A()).to_dict()
File "/Users/eric/Desktop/test/test_client/models/demo.py", line 32, in to_dict
elif isinstance(self.example_union, Union["A", "B"]):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1564, in __instancecheck__
return self.__subclasscheck__(type(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1568, in __subclasscheck__
if issubclass(cls, arg):
^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 2 must be a class, a tuple of classes, or a union
```
**OpenAPI Spec File**
```yaml
openapi: 3.0.3
info:
title: Test
version: 0.0.0
description: test
paths: {}
components:
schemas:
Demo:
type: object
properties:
example_union:
allOf:
- $ref: '#/components/schemas/ExampleUnion'
nullable: true # <-- bug does not happen if this line is removed
A:
type: object
properties:
type:
type: string
B:
type: object
properties:
type:
type: string
ExampleUnion:
oneOf:
- $ref: '#/components/schemas/A'
- $ref: '#/components/schemas/B'
discriminator:
propertyName: type
mapping:
a: '#/components/schemas/A'
b: '#/components/schemas/B'
```
**Desktop**
- OS: macOS 14.3
- Python Version: 3.12.1
- openapi-python-client version: 0.17.2
**Additional context**
The failing generated code is:
```python
isinstance(self.example_union, Union["A", "B"])
```
Using `isinstance` on a `Union` with quoted types is not allowed:
```
>>> class A:
... pass
...
>>> class B:
... pass
...
>>> from typing import Union
>>> isinstance(None, Union[A, B])
False
>>> isinstance(None, Union["A", "B"])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1564, in __instancecheck__
return self.__subclasscheck__(type(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/eric/.pyenv/versions/3.12.1/lib/python3.12/typing.py", line 1568, in __subclasscheck__
if issubclass(cls, arg):
^^^^^^^^^^^^^^^^^^^^
TypeError: issubclass() arg 2 must be a class, a tuple of classes, or a union
```
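For illustration, here is a stdlib-only reproduction of the failure together with one possible workaround, checking against a plain tuple of resolved classes, which `isinstance` accepts on every Python version. This is only a sketch of the problem, not the generator's actual fix:

```python
from typing import Union

class A:
    pass

class B:
    pass

# Quoted members stay typing.ForwardRef objects, which are not classes,
# so the isinstance check blows up exactly as in the report above:
try:
    isinstance(A(), Union["A", "B"])
except TypeError as exc:
    print(type(exc).__name__)  # TypeError

# A tuple of resolved classes works everywhere:
assert isinstance(A(), (A, B))
assert not isinstance(42, (A, B))
```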
|
closed
|
2024-02-09T18:53:43Z
|
2024-02-20T01:11:40Z
|
https://github.com/openapi-generators/openapi-python-client/issues/958
|
[] |
codebutler
| 0
|
httpie/cli
|
api
| 894
|
HTML docs: anchor links broken on mobile
|
In the HTTPie website's [documentation](https://httpie.org/doc) there are numerous instances of links to other sections of the documentation that use a standard `<a href="#section-name">` anchor link. Unfortunately, on at least the Firefox 68.7.0 and Chrome 80 mobile browsers, the links don't jump to the appropriate section when clicked on. The URL of the browser is changed as expected, and reloading the page after clicking on the link loads the page at the appropriate section.
I've done some digging around in the site's HTML; in `/static/dist/app.js` there are two methods that I think may have something to do with the issue: `handleLinkClick` and `handleHashChange`. I suspect that this problem affects any website that uses whatever static site generator HTTPie uses, so a fix here should probably be sent upstream also.
I'll try having a go at fixing the issue once I've slept (it's 5am here) and I'm back at my computer (debugging JS on mobile sounds about as fun as a root canal). If anyone knows what static site generator or HTML documentation generator was used, that would be a big help in getting the fix upstreamed.
|
closed
|
2020-04-10T08:30:38Z
|
2020-04-13T15:19:12Z
|
https://github.com/httpie/cli/issues/894
|
[] |
shanepelletier
| 1
|
airtai/faststream
|
asyncio
| 1,159
|
Eliminate use of pytest-retry and pytest-timeout in tests/brokers/confluent/test_parser.py
|
closed
|
2024-01-22T07:05:41Z
|
2024-02-05T08:44:00Z
|
https://github.com/airtai/faststream/issues/1159
|
[] |
davorrunje
| 0
|
|
microsoft/JARVIS
|
deep-learning
| 19
|
suggestion to requirement.txt install failure
|
pip install controlnet-aux==0.0.1
If the first three entries in requirements.txt fail to install, you can download them manually and copy them into the JARVIS environment directory.
For the third one, install it directly with the pip command above.
|
closed
|
2023-04-04T08:05:12Z
|
2023-04-06T19:41:23Z
|
https://github.com/microsoft/JARVIS/issues/19
|
[] |
samqin123
| 2
|
davidsandberg/facenet
|
computer-vision
| 538
|
How limit RAM usage
|
I'm following the topic: Classifier training of inception resnet v1
When I run step 5 my RAM isn't enough. Is it possible to limit the RAM usage? Is there any parameter to configure this?
|
closed
|
2017-11-16T16:17:27Z
|
2017-12-13T11:37:14Z
|
https://github.com/davidsandberg/facenet/issues/538
|
[] |
ramonamorim
| 2
|
numpy/numpy
|
numpy
| 28,042
|
BUG: Race in PyArray_UpdateFlags under free threading
|
### Describe the issue:
thread-sanitizer reports a race in PyArray_UpdateFlags under free threading.
It may take a few runs to reproduce this race.
### Reproduce the code example:
```python
import concurrent.futures
import functools
import threading
import numpy as np
num_threads = 8
def closure(b, x):
b.wait()
for _ in range(100):
list(x.flat)
with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
for _ in range(100):
b = threading.Barrier(num_threads)
x = np.arange(20).reshape(5, 4).T
for _ in range(num_threads):
executor.submit(functools.partial(closure, b, x))
```
### Error message:
```shell
WARNING: ThreadSanitizer: data race (pid=409824)
Write of size 4 at 0x7fadb72e6db0 by thread T135:
#0 PyArray_UpdateFlags <null> (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x296e4b) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#1 PyArray_RawIterBaseInit <null> (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x2ac9cd) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#2 PyArray_IterNew <null> (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x2ad200) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#3 array_flat_get getset.c (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x29a045) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#4 getset_get /usr/local/google/home/phawkins/p/cpython/Objects/descrobject.c:193:16 (python3.13+0x1ffa68) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#5 _PyObject_GenericGetAttrWithDict /usr/local/google/home/phawkins/p/cpython/Objects/object.c:1635:19 (python3.13+0x295de1) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#6 PyObject_GenericGetAttr /usr/local/google/home/phawkins/p/cpython/Objects/object.c:1717:12 (python3.13+0x295c32) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#7 PyObject_GetAttr /usr/local/google/home/phawkins/p/cpython/Objects/object.c:1231:18 (python3.13+0x294597) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#8 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:3766:28 (python3.13+0x3eddf0) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#9 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de62a) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#10 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1811:12 (python3.13+0x3de62a)
#11 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb61f) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#12 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x571bb2) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#13 partial_vectorcall /usr/local/google/home/phawkins/p/cpython/./Modules/_functoolsmodule.c:252:16 (python3.13+0x571bb2)
#14 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb293) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#15 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb293)
#16 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb315) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#17 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:1355:26 (python3.13+0x3e46e2) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#18 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de62a) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#19 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1811:12 (python3.13+0x3de62a)
#20 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb61f) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#21 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ef5ef) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#22 method_vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/classobject.c:70:20 (python3.13+0x1ef5ef)
#23 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb293) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#24 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb293)
#25 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb315) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#26 thread_run /usr/local/google/home/phawkins/p/cpython/./Modules/_threadmodule.c:337:21 (python3.13+0x564292) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#27 pythread_wrapper /usr/local/google/home/phawkins/p/cpython/Python/thread_pthread.h:243:5 (python3.13+0x4bd637) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
Previous write of size 4 at 0x7fadb72e6db0 by thread T132:
#0 PyArray_UpdateFlags <null> (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x296e4b) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#1 PyArray_RawIterBaseInit <null> (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x2ac9cd) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#2 PyArray_IterNew <null> (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x2ad200) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#3 array_flat_get getset.c (_multiarray_umath.cpython-313t-x86_64-linux-gnu.so+0x29a045) (BuildId: a138a702a237ca030803af4168ee423ada9702f7)
#4 getset_get /usr/local/google/home/phawkins/p/cpython/Objects/descrobject.c:193:16 (python3.13+0x1ffa68) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#5 _PyObject_GenericGetAttrWithDict /usr/local/google/home/phawkins/p/cpython/Objects/object.c:1635:19 (python3.13+0x295de1) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#6 PyObject_GenericGetAttr /usr/local/google/home/phawkins/p/cpython/Objects/object.c:1717:12 (python3.13+0x295c32) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#7 PyObject_GetAttr /usr/local/google/home/phawkins/p/cpython/Objects/object.c:1231:18 (python3.13+0x294597) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#8 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:3766:28 (python3.13+0x3eddf0) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#9 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de62a) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#10 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1811:12 (python3.13+0x3de62a)
#11 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb61f) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#12 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x571bb2) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#13 partial_vectorcall /usr/local/google/home/phawkins/p/cpython/./Modules/_functoolsmodule.c:252:16 (python3.13+0x571bb2)
#14 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb293) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#15 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb293)
#16 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb315) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#17 _PyEval_EvalFrameDefault /usr/local/google/home/phawkins/p/cpython/Python/generated_cases.c.h:1355:26 (python3.13+0x3e46e2) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#18 _PyEval_EvalFrame /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_ceval.h:119:16 (python3.13+0x3de62a) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#19 _PyEval_Vector /usr/local/google/home/phawkins/p/cpython/Python/ceval.c:1811:12 (python3.13+0x3de62a)
#20 _PyFunction_Vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/call.c (python3.13+0x1eb61f) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#21 _PyObject_VectorcallTstate /usr/local/google/home/phawkins/p/cpython/./Include/internal/pycore_call.h:168:11 (python3.13+0x1ef5ef) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#22 method_vectorcall /usr/local/google/home/phawkins/p/cpython/Objects/classobject.c:70:20 (python3.13+0x1ef5ef)
#23 _PyVectorcall_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:273:16 (python3.13+0x1eb293) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#24 _PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:348:16 (python3.13+0x1eb293)
#25 PyObject_Call /usr/local/google/home/phawkins/p/cpython/Objects/call.c:373:12 (python3.13+0x1eb315) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#26 thread_run /usr/local/google/home/phawkins/p/cpython/./Modules/_threadmodule.c:337:21 (python3.13+0x564292) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
#27 pythread_wrapper /usr/local/google/home/phawkins/p/cpython/Python/thread_pthread.h:243:5 (python3.13+0x4bd637) (BuildId: 9c1c16fb1bb8a435fa6fa4c6944da5d41f654e96)
```
### Python and NumPy Versions:
2.3.0.dev0+git20241219.35b2c4a
3.13.1 experimental free-threading build (tags/v3.13.1:06714517797, Dec 15 2024, 15:38:01) [Clang 18.1.8 (11)]
### Runtime Environment:
[{'numpy_version': '2.3.0.dev0+git20241219.35b2c4a',
'python': '3.13.1 experimental free-threading build '
'(tags/v3.13.1:06714517797, Dec 15 2024, 15:38:01) [Clang 18.1.8 '
'(11)]',
'uname': uname_result(system='Linux', node='', release='', version='https://github.com/numpy/numpy/pull/1 SMP PREEMPT_DYNAMIC Debian 6.redacted (2024-10-16)', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2'],
'not_found': ['AVX512F',
'AVX512CD',
'AVX512_KNL',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL']}},
{'architecture': 'Zen',
'filepath': '/usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.27.so',
'internal_api': 'openblas',
'num_threads': 128,
'prefix': 'libopenblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.27'}]
### Context for the issue:
Found when working on free-threading support in JAX.
|
closed
|
2024-12-19T17:30:27Z
|
2025-01-10T21:37:58Z
|
https://github.com/numpy/numpy/issues/28042
|
[
"00 - Bug",
"sprintable - C",
"39 - free-threading"
] |
hawkinsp
| 1
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,084
|
The Close button in the "Request support" window is not translated.
|
When any language other than English is set, the "Request support" window is translated except for the Close button, which is always in English.

The "Request support" window is accessible via lifebuoy icon from backend (next to logout) and frontend, on the Password reset page.
GlobaLeaks version: 4.4.4
|
closed
|
2021-11-02T14:41:43Z
|
2021-11-03T08:53:29Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3084
|
[
"T: Bug",
"C: Client"
] |
JackBRN
| 1
|
microsoft/nni
|
tensorflow
| 5,714
|
Quantization with NAS
|
Does NNI support using NAS to quantize neural networks, i.e. to find the optimal bit width for each layer?
|
open
|
2023-11-23T04:50:21Z
|
2023-11-23T04:50:21Z
|
https://github.com/microsoft/nni/issues/5714
|
[] |
lightup666
| 0
|
ymcui/Chinese-LLaMA-Alpaca-2
|
nlp
| 125
|
Precision (dtype) mismatch error between base_model and lora_model during inference
|
### Required checks before submitting
- [X] Please make sure you are using the latest code from this repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project docs](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin problems: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui); it is recommended to also look for solutions in the corresponding projects.
### Problem type
Model inference
### Base model
Alpaca-2-7B
### Operating system
Linux
### Detailed description of the problem
```
# Paste the code you ran here (delete this block if not applicable)
```
python scripts/inference/inference_hf.py --base_model /workspace/model/chinese-alpaca-2-7b-hf --lora_model /workspace/Chinese-LLaMA-Alpaca-2/output/sft_lora_model --with_prompt --interactive errors out (see log below)
python scripts/inference/inference_hf.py --base_model /workspace/model/chinese-alpaca-2-7b-hf --lora_model /workspace/Chinese-LLaMA-Alpaca-2/output/sft_lora_model --with_prompt --interactive --load_in_8bit runs inference normally
python scripts/inference/inference_hf.py --base_model /workspace/model/chinese-alpaca-2-7b-hf --with_prompt --interactive runs inference normally
### Dependencies (must be provided for code issues)
```
# Paste your dependency information here
```
Training used run_sft.sh to train brand-new instruction fine-tuned LoRA weights based on Chinese-Alpaca-2
root@gpu-xdev-test02:/workspace/Chinese-LLaMA-Alpaca-2# cat scripts/training/run_sft.sh
lr=1e-4
lora_rank=64
lora_alpha=128
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/workspace/model/chinese-alpaca-2-7b-hf/
chinese_tokenizer_path=/workspace/model/chinese-alpaca-2-7b-hf/
dataset_dir=/workspace/Chinese-LLaMA-Alpaca-2/data/
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=8
output_dir=/workspace/Chinese-LLaMA-Alpaca-2/output
peft_model=path/to/peft/model/dir
validation_file=/workspace/Chinese-LLaMA-Alpaca-2/data/xcloud.json
RANDOM=100
deepspeed_config_file=ds_zero2_no_offload.json
torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--validation_split_percentage 0.001 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--do_eval \
--seed $RANDOM \
--fp16 \
--num_train_epochs 300 \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.03 \
--weight_decay 0 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--evaluation_strategy steps \
--eval_steps 100 \
--save_steps 200 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--max_seq_length 1024 \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--ddp_find_unused_parameters False
### Run log or screenshot
```
# Paste your run log here
```
root@gpu-xdev-test02:/workspace/Chinese-LLaMA-Alpaca-2# python scripts/inference/inference_hf.py --base_model /workspace/model/chinese-alpaca-2-7b-hf --lora_model /workspace/Chinese-LLaMA-Alpaca-2/output/sft_lora_model --with_prompt --interactive
[2023-08-13 10:33:25,536] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Xformers is not installed correctly. If you want to use memory_efficient_attention use the following command to install Xformers
pip install xformers.
USE_MEM_EFF_ATTENTION: False
STORE_KV_BEFORE_ROPE: False
Apply NTK scaling with ALPHA=1.0
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:15<00:00, 7.61s/it]
Vocab of the base model: 55296
Vocab of the tokenizer: 55296
loading peft model
Start inference with instruction mode.
=====================================================================================
+ 该模式下仅支持单轮问答,无多轮对话能力。
+ 如要进行多轮对话,请使用llama.cpp或本项目中的gradio_demo.py。
-------------------------------------------------------------------------------------
+ This mode only supports single-turn QA.
+ If you want to experience multi-turn dialogue, please use llama.cpp or gradio_demo.py.
=====================================================================================
Input:介绍下xcloud是什么
Traceback (most recent call last):
File "/workspace/Chinese-LLaMA-Alpaca-2/scripts/inference/inference_hf.py", line 182, in <module>
generation_output = model.generate(
File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 581, in generate
outputs = self.base_model.generate(**kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 1588, in generate
return self.sample(
File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2642, in sample
outputs = self(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 806, in forward
outputs = self.model(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 693, in forward
layer_outputs = decoder_layer(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/workspace/Chinese-LLaMA-Alpaca-2/scripts/attn_and_long_ctx_patches.py", line 44, in xformers_forward
query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/peft/tuners/lora.py", line 358, in forward
result += self.lora_B(self.lora_A(self.lora_dropout(x))) * self.scaling
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Half but found Float
root@gpu-xdev-test02:/workspace/Chinese-LLaMA-Alpaca-2# python scripts/inference/inference_hf.py --base_model /workspace/model/chinese-alpaca-2-7b-hf --lora_model /workspace/Chinese-LLaMA-Alpaca-2/output/sft_lora_model --with_prompt --interactive --load_in_8bit
[2023-08-13 10:36:02,731] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Xformers is not installed correctly. If you want to use memory_efficient_attention use the following command to install Xformers
pip install xformers.
USE_MEM_EFF_ATTENTION: False
STORE_KV_BEFORE_ROPE: False
Apply NTK scaling with ALPHA=1.0
You are using the legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This means that tokens that come after special tokens will not be properly handled. We recommend you to read the related pull request available at https://github.com/huggingface/transformers/pull/24565
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:12<00:00, 6.49s/it]
Vocab of the base model: 55296
Vocab of the tokenizer: 55296
loading peft model
Start inference with instruction mode.
=====================================================================================
+ 该模式下仅支持单轮问答,无多轮对话能力。
+ 如要进行多轮对话,请使用llama.cpp或本项目中的gradio_demo.py。
-------------------------------------------------------------------------------------
+ This mode only supports single-turn QA.
+ If you want to experience multi-turn dialogue, please use llama.cpp or gradio_demo.py.
=====================================================================================
Input:介绍下xxx是什么
Response: xxx提供自动驾驶数据服务、自动驾驶训练服务、自动驾驶仿真服务品。
|
closed
|
2023-08-13T11:04:51Z
|
2023-08-28T22:04:20Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/125
|
[
"stale"
] |
Celester
| 2
|
NullArray/AutoSploit
|
automation
| 1,310
|
Unhandled Exception (15515d351)
|
Autosploit version: `4.0`
OS information: `Linux-5.10.0-kali9-amd64-x86_64-with-debian-kali-rolling`
Running context: `autosploit.py`
Error message: `can only concatenate list (not "str") to list`
Error traceback:
```
Traceback (most recent call):
File "/home/s/AutoSploit/AutoSploit/lib/term/terminal.py", line 719, in terminal_main_display
self.do_nmap_scan(target, arguments)
File "/home/s/AutoSploit/AutoSploit/lib/term/terminal.py", line 498, in do_nmap_scan
output, warnings, errors = lib.scanner.nmap.do_scan(target, nmap_path, arguments=passable_arguments)
File "/home/s/AutoSploit/AutoSploit/lib/scanner/nmap.py", line 127, in do_scan
] + arguments
TypeError: can only concatenate list (not "str") to list
```
Metasploit launched: `False`
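The traceback boils down to concatenating a list with a string. A hedged sketch follows: the variable names mirror the traceback, but the fix shown (splitting the argument string before concatenation) is an assumption about the intended behavior, not the project's actual patch.

```python
import shlex

base = ["nmap", "-sV"]
arguments = "-p 80,443"  # user-supplied string, as in the traceback

try:
    cmd = base + arguments  # reproduces: can only concatenate list (not "str") to list
except TypeError as e:
    print(e)

# Likely fix: split the string into a list before concatenating.
cmd = base + shlex.split(arguments)
print(cmd)  # ['nmap', '-sV', '-p', '80,443']
```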
|
open
|
2022-01-24T16:48:18Z
|
2022-01-24T16:48:18Z
|
https://github.com/NullArray/AutoSploit/issues/1310
|
[] |
AutosploitReporter
| 0
|
lexiforest/curl_cffi
|
web-scraping
| 144
|
[BUG] Custom User-Agent set via headers is automatically converted to lowercase
|
**Describe the bug**
Hi, I'm not sure whether this is intended behavior. When I set a custom User-Agent through headers, curl_cffi automatically converts it to lowercase, which causes me to get blocked when visiting some websites. I'd appreciate a solution, thank you.
curl_cffi/requests/headers.py, line 81:
```py
self._list = [
(
normalize_header_key(k, lower=False, encoding=encoding),
normalize_header_key(k, lower=True, encoding=encoding),
normalize_header_value(v, encoding),
)
for k, v in headers.items()
]
```
This builds the header list with both the original-case and lowercase keys.
curl_cffi/requests/headers.py, line 147:
```py
{key.decode(self.encoding): None for _, key, _ in self._list}.keys()
```
This code always picks the lowercase key as the value that gets passed on.
**Versions**
- OS: [e.g. windows 10]
- curl_cffi version [0.5.9]
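The reported behavior can be reproduced with a small stand-alone sketch. The helper below only mimics the snippets quoted above; it is not curl_cffi's actual implementation.

```python
# Stand-in for the normalize_header_key helper quoted above (illustration only).
def normalize_header_key(key, lower):
    k = key.encode() if isinstance(key, str) else key
    return k.lower() if lower else k

headers = {"User-Agent": "MyBrowser/1.0"}

# Mirrors headers.py line 81: store (original-case, lowercase, value) tuples.
_list = [
    (
        normalize_header_key(k, lower=False),
        normalize_header_key(k, lower=True),
        v.encode(),
    )
    for k, v in headers.items()
]

# Mirrors headers.py line 147: only the lowercase key survives.
keys = {key.decode(): None for _, key, _ in _list}.keys()
print(list(keys))  # ['user-agent'] (the original casing is lost)
```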
|
closed
|
2023-10-18T08:44:32Z
|
2023-11-02T09:20:40Z
|
https://github.com/lexiforest/curl_cffi/issues/144
|
[
"enhancement",
"good first issue"
] |
AboutDark
| 3
|
amdegroot/ssd.pytorch
|
computer-vision
| 296
|
When I train on my dataset the loss is very high and doesn't converge. Please help.
|
open
|
2019-03-04T10:59:56Z
|
2019-03-05T04:35:03Z
|
https://github.com/amdegroot/ssd.pytorch/issues/296
|
[] |
tangdong1994
| 0
|
|
nok/sklearn-porter
|
scikit-learn
| 26
|
Can't test accuracy in python and exported java code gives bad accuracy!
|
Hi, I started using your code to port a random forest estimator. First off, I can't call the porter.integrity_score() function because I get the following error:
```
Traceback (most recent call last):
File "C:/Python Project/Euler.py", line 63, in <module>
accuracy = porter.integrity_score(test_X)
File "C:\Python\lib\site-packages\sklearn_porter\Porter.py", line 440, in integrity_score
keep_tmp_dir=True, num_format=num_format)
File "C:\Python\lib\site-packages\sklearn_porter\Porter.py", line 342, in predict
self._test_dependencies()
File "C:\Python\lib\site-packages\sklearn_porter\Porter.py", line 454, in _test_dependencies
raise EnvironmentError(error)
OSError: The required dependencies aren't available on Windows.
```
So I can't check the accuracy in Python, and when I use the Java code in Eclipse it gives me very bad accuracy: the original scikit-learn model gave me about 69% accuracy, whereas the accuracy from the Java code is less than 10%.
I need the code for an important project and would really appreciate some help on this.
|
open
|
2018-01-23T15:51:14Z
|
2019-08-10T13:16:24Z
|
https://github.com/nok/sklearn-porter/issues/26
|
[
"question",
"1.0.0"
] |
Gizmomens
| 13
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 1,178
|
Warnings, exceptions and random crashes
|
I would really appreciate some help here. I wanted to use this to help me with my dyslexia, but it's too unreliable.
`
OS: Linux Mint 20.2 x86_64
Host: GA-78LMT-USB3 6.0
Kernel: 5.4.0-144-generic
Uptime: 5 hours, 29 mins
Packages: 2868 (dpkg), 7 (flatpak), 14 (snap)
Shell: bash 5.0.17
Resolution: 1600x900
DE: Cinnamon
WM: Mutter (Muffin)
WM Theme: Adapta-Nokto (Adapta-Nokto)
Icons: Mint-Y-Aqua [GTK2/3]
Terminal: gnome-terminal
CPU: AMD FX-8320 (8) @ 3.500GHz
GPU: NVIDIA GeForce GTX 1060 6GB
Memory: 4272MiB / 7938MiB
`
**python3 demo_toolbox.py**
`/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.7) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Arguments:
datasets_root: None
models_dir: saved_models
cpu: False
seed: None
Warning: you did not pass a root directory for datasets as argument.
The recognized datasets are:
LibriSpeech/dev-clean
LibriSpeech/dev-other
LibriSpeech/test-clean
LibriSpeech/test-other
LibriSpeech/train-clean-100
LibriSpeech/train-clean-360
LibriSpeech/train-other-500
LibriTTS/dev-clean
LibriTTS/dev-other
LibriTTS/test-clean
LibriTTS/test-other
LibriTTS/train-clean-100
LibriTTS/train-clean-360
LibriTTS/train-other-500
LJSpeech-1.1
VoxCeleb1/wav
VoxCeleb1/test_wav
VoxCeleb2/dev/aac
VoxCeleb2/test/aac
VCTK-Corpus/wav48
Feel free to add your own. You can still use the toolbox by recording samples yourself.
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
`
random crash with
`Traceback (most recent call last):
File "/home/strings/Real-Time-Voice-Cloning/toolbox/__init__.py", line 170, in record
self.add_real_utterance(wav, name, speaker_name)
File "/home/strings/Real-Time-Voice-Cloning/toolbox/__init__.py", line 174, in add_real_utterance
spec = Synthesizer.make_spectrogram(wav)
File "/home/strings/Real-Time-Voice-Cloning/synthesizer/inference.py", line 152, in make_spectrogram
mel_spectrogram = audio.melspectrogram(wav, hparams).astype(np.float32)
File "/home/strings/Real-Time-Voice-Cloning/synthesizer/audio.py", line 60, in melspectrogram
D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams)
File "/home/strings/Real-Time-Voice-Cloning/synthesizer/audio.py", line 121, in _stft
return librosa.stft(y=y, n_fft=hparams.n_fft, hop_length=get_hop_size(hparams), win_length=hparams.win_size)
File "/home/strings/.local/lib/python3.8/site-packages/librosa/core/spectrum.py", line 217, in stft
util.valid_audio(y)
File "/home/strings/.local/lib/python3.8/site-packages/librosa/util/utils.py", line 310, in valid_audio
raise ParameterError("Audio buffer is not finite everywhere")
librosa.util.exceptions.ParameterError: Audio buffer is not finite everywhere
Loaded encoder "encoder.pt" trained to step 1564501
python3: src/hostapi/alsa/pa_linux_alsa.c:3641: PaAlsaStreamComponent_BeginPolling: Assertion `ret == self->nfds' failed.
Aborted (core dumped)
`
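For what it's worth, a minimal sketch (assuming the recorded buffer reaches the synthesizer containing NaN/Inf samples; the `sanitize_wav` helper name is hypothetical) of scrubbing the waveform before computing the spectrogram, so librosa's `valid_audio` check stops raising:

```python
import numpy as np

def sanitize_wav(wav):
    # Hypothetical helper: replace non-finite samples (NaN/Inf) with
    # zeros so librosa's util.valid_audio check no longer raises
    # "Audio buffer is not finite everywhere".
    return np.nan_to_num(np.asarray(wav, dtype=np.float32),
                         nan=0.0, posinf=0.0, neginf=0.0)
```

This only masks the symptom; the non-finite samples presumably come from the recording path and may indicate a deeper capture problem.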
|
open
|
2023-03-21T16:35:51Z
|
2023-03-22T14:06:14Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1178
|
[] |
007Srings
| 1
|
gevent/gevent
|
asyncio
| 2,086
|
Question: How is the alignment of a thread and a hub in gevent? (Porting from asyncio)
|
* gevent version: gevent-24.11.2.dev0 (locally built from latest gevent sources, clang 17)
* Python version: 3.11.9, pyenv build
* Operating System: openSUSE LEAP 15.6, Linux kernel 6.4.0, x86_64
### Description:
If this question is not too general: could you clarify how threads and hubs are aligned in gevent?
I had hoped I'd worked out how to detect whether a Greenlet is running in the same hub, if not also the same thread, as a calling function: simply check the Greenlet's `parent` property and test whether it is the same object as the one returned by `get_hub()` for the caller. From later experience, I wonder whether that test is insufficient for this case. In the process of discovering this, I developed the following code sample.
The following was an effort to develop a reproducible example for a segfault I'd been seeing with the libev loop, Python 3.11, and gevent on openSUSE 15.6. In the process of developing this code sample, I may have discovered some possible issues in the original test code. For instance, I may have been using what may have been the wrong loop.run_callback function for this case, to call Greenlet.throw in throwing an exception to a destination Greenlet, across threads.
The original idea had been to implement a sort of `cancel()` method on a Greenlet class. I'd discovered the initial segfault while developing a test case for one approach to this, then testing the `cancel()` method for a cross-threads scenario.
In fact, I wasn't able to reproduce the original segfault outside of the original code. I've now updated the original code after this sample case. The segfault is no longer appearing as such.
I'm not certain if I've actually understood how Greenlets, the hub, and the calling / destination thread might be aligned though, and thus my question.
```python
# an effort at developing an independent test case for a certain error.
#
# this is a port of an unpublished test file, here released under public domain.
from collections.abc import Iterator
from contextlib import contextmanager
from gevent import get_hub, Greenlet, sleep
from gevent._hub_local import get_hub_class, set_hub
import sys
from threading import Thread
from gevent.event import AsyncResult
@contextmanager
def future_catch(future: AsyncResult) -> Iterator[AsyncResult]:
try:
yield future
except BaseException as exc:
info = sys.exc_info()
try:
if not future.exception:
future.set_exception(exc, exc_info=info)
except BaseException:
pass
raise
def test_cancel_cross_thread():
hub = get_hub()
def thr_func(gl_future: AsyncResult, rslt_future: AsyncResult, test_future: AsyncResult):
print("-- DBG THR_FUNC")
try:
run_state = None
# try to ensure a hub is created specifically for this thread ...
# (This is only a guess, as to how one might approach the same)
cls = get_hub_class()
hub = cls()
set_hub(hub)
def run_func(future: AsyncResult):
nonlocal run_state
print("-- DBG RUN_FUNC") # reached presently
with future_catch(future):
run_state = -2
## try to ensure this waits until cancelled
#
# it was a troublesome call, here, in some modes of placement,
# centering on the call to get_hub().switch() and a subsequent
# segfault
#
# this was where an exception might have been received from a non-local,
# apparently non-thread-safe throw under another function
#
# the cancel() method ideally would have detected that the
# destination Greenlet was running under a different thread,
# and so used the thread-safe callback function for the throw.
# It may not have detected that accurately, however.
#
# The cancel() method has since been updated to always use
# the thread-safe callback to throw the Cancelled() exception ...
#
try:
print("-- DBG SWITCH")
get_hub().switch()
except BaseException as exc:
future.set_exception(exc)
return
# this expression should not be reached, in the test case:
run_state = -3
gl = Greenlet(run_func, rslt_future)
gl_future.set_result(gl)
gl.start()
gl.join(2)
assert run_state == -2
test_future.set_result(0)
except BaseException as exc:
try:
test_future.set_exception(exc)
except BaseException:
pass
raise exc
gl_future = AsyncResult()
rslt_future = AsyncResult()
test_future = AsyncResult()
thr = Thread(
target=thr_func,
args=(gl_future, rslt_future, test_future),
daemon=True)
thr.start()
interval = hub.loop.approx_timer_resolution * 2
while not gl_future.done():
try:
gl_future.wait()
# spin until the item arrives at a completed state
#
# It's not a particularly event-oriented approach,
# in this short sample code ...
except BaseException:
sleep(interval)
gl = gl_future.result(0)
# The original goal for the initial test case:
# Try to cancel the destination Greenlet, across threads
gl.parent.loop.run_callback_threadsafe(gl.throw, RuntimeError("Cancel"))
# ^ there may have been one issue here, in the original source code,
# in which the exception may have been thrown with the non-thread-safe
# callback method on the loop. Once received in the destination Greenlet,
# there was a segfault (illustrated below)
thr.join()
if __name__ == "__main__":
test_cancel_cross_thread()
```
In the `cancel()` method that the original source code was supposed to test, in order to detect whether the greenlet was running under a different thread than the one where `cancel()` was called, I simply compared the destination greenlet's `Greenlet.parent` property to the object returned from `get_hub()` for the caller. I thought it was a sufficient indication that the destination Greenlet was running under the same thread if its `parent` object was the same as the one returned by `get_hub()`. I'm not sure that was actually the effect of that test, though, and hence the question here.
I'm not sure if it may be too general a question, candidly. I thought I should try to figure this out in a follow-through after the earlier debugging.
For the purpose of porting from asyncio generally: how are threads and hubs aligned here?
Might there be any way to detect if a Greenlet is running under a different thread than the caller?
I'm not certain whether it would be any more expensive to always use a thread-safe callback in gevent when one may be needed. I thought I should try to find clarification about this, however, with apologies that the source code above might not be the most succinct example possible for this question.
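On the detection question: gevent's `Hub` records the ident of the OS thread it was created in (visible as `thread_ident` in the gdb dump below), so one possible check, sketched here with a toy stdlib class rather than gevent itself, is to compare thread idents:

```python
import threading

class ThreadBoundHub:
    """Toy stand-in (not gevent) for a hub that remembers the OS thread
    it was created in, as gevent's Hub does via `thread_ident`."""
    def __init__(self):
        self.thread_ident = threading.get_ident()

    def in_owning_thread(self):
        # If the caller's ident differs, any throw/switch into greenlets
        # owned by this hub would need the thread-safe callback path.
        return threading.get_ident() == self.thread_ident
```

Applied to gevent, the analogous check would presumably be `gl.parent.thread_ident == threading.get_ident()` before choosing between `loop.run_callback` and `loop.run_callback_threadsafe`.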
The description template here had said something about backtrace. The following was a backtrace that I'd seen under the original code, when using the non-thread-safe `Hub.loop.run_callback` to call `Greenlet.throw` to throw an exception to the Greenlet-to-be-cancelled. I've not been able to reproduce this outside of the original, unpublished, presently revised source code. I'm not sure if it may help to clarify much. With the revised code always using the thread-safe function to call Greenlet.throw, this isn't showing up now.
```python-traceback
$ env PYTHONMALLOC=debug PYTHONASYNCIODEBUG=1 PYTHONWARNINGS=default PYTHONFAULTHANDLER=1 GEVENT_LOOP=libev-cffi gdb --args python -m gevent.monkey --module pytest -s
GNU gdb (GDB; SUSE Linux Enterprise 15) 13.2
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-suse-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://bugs.opensuse.org/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from python...
(gdb) run
Starting program: /home/user/project_xyzzy/env/bin/python -m gevent.monkey --module pytest -s
Missing separate debuginfos, use: zypper install glibc-debuginfo-2.38-150600.12.1.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
============================================================ test session starts ============================================================
platform linux -- Python 3.11.9, pytest-8.3.3, pluggy-1.5.0
rootdir: /home/user/project_xyzzy
configfile: pyproject.toml
testpaths: test
plugins: dependency-0.6.0, anyio-4.6.2.post1, asyncio-0.24.0
asyncio: mode=Mode.AUTO, default_loop_scope=function
collecting ... Missing separate debuginfo for /home/user/project_xyzzy/env/lib/python3.11/site-packages/numpy/_core/../../numpy.libs/libgfortran-040039e1-0352e75f.so.5.0.0
Try: zypper install -C "debuginfo(build-id)=5bbe74eb6855e0a2c043c0bec2f484bf3e9f14c0"
Missing separate debuginfo for /home/user/project_xyzzy/env/lib/python3.11/site-packages/numpy/_core/../../numpy.libs/libquadmath-96973f99-934c22de.so.0.0.0
Try: zypper install -C "debuginfo(build-id)=549b4c82347785459571c79239872ad31509dcf4"
[New Thread 0x7ffff19ff6c0 (LWP 27920)]
[New Thread 0x7ffff11fe6c0 (LWP 27921)]
[New Thread 0x7fffee9fd6c0 (LWP 27922)]
[New Thread 0x7fffec1fc6c0 (LWP 27923)]
[New Thread 0x7fffe79fb6c0 (LWP 27924)]
[New Thread 0x7fffe71fa6c0 (LWP 27925)]
[New Thread 0x7fffe49f96c0 (LWP 27926)]
[New Thread 0x7fffe21f86c0 (LWP 27927)]
[New Thread 0x7fffdf9f76c0 (LWP 27928)]
[New Thread 0x7fffdd1f66c0 (LWP 27929)]
[New Thread 0x7fffd89f56c0 (LWP 27930)]
[New Thread 0x7fffd81f46c0 (LWP 27931)]
[New Thread 0x7fffd39f36c0 (LWP 27932)]
[New Thread 0x7fffd31f26c0 (LWP 27933)]
[New Thread 0x7fffd09f16c0 (LWP 27934)]
collected 29 items
test/test_click_deco.py ....
test/test_corolet_proto.py -- DBG running in test_corolet_main
-- DBG corolet_main running in /home/user/project_xyzzy/test/corolet_main.py
-- DBG corolet_main start @ <Corolet at 0x7fffcbc90380: _run>
-- DBG corolet_main join @ <Corolet at 0x7fffcbc90380: _run>
-- DBG Corolet: new loop <_UnixSelectorEventLoop running=False closed=False debug=True>
-- DBG Corolet: run loop [closing]: <_UnixSelectorEventLoop running=False closed=False debug=True>
-- DBG corolet_main.thunk
12321
.
test/test_gcondition.py ....
test/test_gconfig.py .
test/test_glaunch_click.py ......
test/test_green_generics.py .
Thread 1 "python" received signal SIGSEGV, Segmentation fault.
ev_feed_event (loop=loop@entry=0x7ffff76b9020 <default_loop_struct>, w=0x3160a20, revents=revents@entry=16384) at deps/libev/ev.c:2319
2319 pendings [pri][w_->pending - 1].events |= revents;
Missing separate debuginfos, use: zypper install libbz2-1-debuginfo-1.0.8-150400.1.122.x86_64 libffi7-debuginfo-3.2.1.git259-10.8.x86_64 libgcc_s1-debuginfo-13.2.1+git8285-150000.1.9.1.x86_64 libjitterentropy3-debuginfo-3.4.0-150000.1.9.1.x86_64 liblzma5-debuginfo-5.4.1-150600.1.2.x86_64 libopenssl1_1-debuginfo-1.1.1w-150600.3.10.x86_64 libsqlite3-0-debuginfo-3.44.0-150000.3.23.1.x86_64 libstdc++6-debuginfo-13.2.1+git8285-150000.1.9.1.x86_64 libuuid1-debuginfo-2.39.3-150600.2.1.x86_64 libz1-debuginfo-1.2.13-150500.4.3.1.x86_64
(gdb) bt
#0 ev_feed_event (loop=loop@entry=0x7ffff76b9020 <default_loop_struct>, w=0x3160a20, revents=revents@entry=16384) at deps/libev/ev.c:2319
#1 0x00007ffff76a4081 in queue_events (loop=0x7ffff76b9020 <default_loop_struct>, events=0x7ffff1de6b00, type=16384,
eventcnt=<optimized out>) at deps/libev/ev.c:2352
#2 ev_run (loop=0x7ffff76b9020 <default_loop_struct>, flags=0) at deps/libev/ev.c:4062
#3 0x00007ffff76aee1a in _cffi_f_ev_run (self=<optimized out>, args=<optimized out>)
at build/temp.linux-x86_64-cpython-311/gevent.libev._corecffi.c:2967
#4 0x00007ffff7ab3d02 in cfunction_call (func=<built-in method ev_run of _cffi_backend.Lib object at remote 0x7ffff6e98b40>,
args=(<_cffi_backend._CDataBase at remote 0x7ffff6f4b700>, 0), kwargs=0x0) at Objects/methodobject.c:553
#5 0x00007ffff7a49204 in _PyObject_MakeTpCall (tstate=0x7ffff7f78a78 <_PyRuntime+166328>,
callable=<built-in method ev_run of _cffi_backend.Lib object at remote 0x7ffff6e98b40>, args=0x7ffff6619130, nargs=2, keywords=0x0)
at Objects/call.c:214
#6 0x00007ffff7a488ff in _PyObject_VectorcallTstate (tstate=0x7ffff7f78a78 <_PyRuntime+166328>,
callable=<built-in method ev_run of _cffi_backend.Lib object at remote 0x7ffff6e98b40>, args=0x7ffff6619130,
nargsf=9223372036854775810, kwnames=0x0) at ./Include/internal/pycore_call.h:90
#7 0x00007ffff7a495bb in PyObject_Vectorcall (callable=<built-in method ev_run of _cffi_backend.Lib object at remote 0x7ffff6e98b40>,
args=0x7ffff6619130, nargsf=9223372036854775810, kwnames=0x0) at Objects/call.c:299
#8 0x00007ffff7b9d93b in _PyEval_EvalFrameDefault (tstate=0x7ffff7f78a78 <_PyRuntime+166328>, frame=0x7ffff66190b8, throwflag=0)
at Python/ceval.c:4769
#9 0x00007ffff7b86748 in _PyEval_EvalFrame (tstate=0x7ffff7f78a78 <_PyRuntime+166328>, frame=0x7ffff6619020, throwflag=0)
at ./Include/internal/pycore_ceval.h:73
#10 0x00007ffff7ba5ebd in _PyEval_Vector (tstate=0x7ffff7f78a78 <_PyRuntime+166328>, func=0x7ffff6955d30, locals=0x0, args=0x7ffffffdf8e0,
argcount=1, kwnames=0x0) at Python/ceval.c:6434
#11 0x00007ffff7a499a1 in _PyFunction_Vectorcall (func=<function at remote 0x7ffff6955d30>, stack=0x7ffffffdf8e0, nargsf=1, kwnames=0x0)
at Objects/call.c:393
#12 0x00007ffff7a4cbd4 in _PyObject_VectorcallTstate (tstate=0x7ffff7f78a78 <_PyRuntime+166328>,
callable=<function at remote 0x7ffff6955d30>, args=0x7ffffffdf8e0, nargsf=1, kwnames=0x0) at ./Include/internal/pycore_call.h:92
#13 0x00007ffff7a4d2af in method_vectorcall (method=<method at remote 0x7fffcbc4e870>, args=0x7ffff7f5e6f0 <_PyRuntime+58928>, nargsf=0,
kwnames=0x0) at Objects/classobject.c:67
#14 0x00007ffff7a49380 in _PyVectorcall_Call (tstate=0x7ffff7f78a78 <_PyRuntime+166328>, func=0x7ffff7a4d14d <method_vectorcall>,
callable=<method at remote 0x7fffcbc4e870>, tuple=(), kwargs=0x0) at Objects/call.c:245
#15 0x00007ffff7a496f1 in _PyObject_Call (tstate=0x7ffff7f78a78 <_PyRuntime+166328>, callable=<method at remote 0x7fffcbc4e870>, args=(),
kwargs=0x0) at Objects/call.c:328
#16 0x00007ffff7a497e3 in PyObject_Call (callable=<method at remote 0x7fffcbc4e870>, args=(), kwargs=0x0) at Objects/call.c:355
#17 0x00007ffff767db95 in greenlet::UserGreenlet::inner_bootstrap (this=this@entry=0x7ffff4f035a0, origin_greenlet=<optimized out>,
run=run@entry=<method at remote 0x7fffcbc4e870>) at src/greenlet/TUserGreenlet.cpp:458
#18 0x00007ffff767f0da in greenlet::UserGreenlet::g_initialstub (this=0x7ffff4f035a0, mark=0x7ffffffdfc18)
at src/greenlet/TUserGreenlet.cpp:305
#19 0x00007ffff767d627 in greenlet::UserGreenlet::g_switch (this=0x7ffff4f035a0) at src/greenlet/TUserGreenlet.cpp:173
#20 0x00007ffff7aeaa5c in PyUnicode_IS_READY (op=<error reading variable: Cannot access memory at address 0xffffff8>)
at ./Include/cpython/unicodeobject.h:269
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb) py-bt
Traceback (most recent call first):
<built-in method ev_run of _cffi_backend.Lib object at remote 0x7ffff6e98b40>
File "/home/user/project_xyzzy/env/lib/python3.11/site-packages/gevent/libev/corecffi.py", line 366, in run
libev.ev_run(self._ptr, flags)
File "/home/user/project_xyzzy/env/lib/python3.11/site-packages/gevent/hub.py", line 647, in run
loop.run()
(gdb) py-up
#8 Frame 0x7ffff66190b8, for file /home/user/project_xyzzy/env/lib/python3.11/site-packages/gevent/libev/corecffi.py, line 366, in run (self=<loop(_ffi=<_cffi_backend.FFI at remote 0x7ffff7079e50>, _lib=<_cffi_backend.Lib at remote 0x7ffff6e98b40>, _ptr=<_cffi_backend._CDataBase at remote 0x7ffff6f4b700>, _handle_to_self=<_cffi_backend.__CDataOwnGC at remote 0x7fffcbc4c0b0>, _watchers=<module at remote 0x7ffff6ef9970>, _in_callback=False, _callbacks=<collections.deque at remote 0x7fffcbd7b850>, _keepaliveset={<async_(_flags=6, loop=<...>, _watcher__init_priority=None, _watcher__init_args=(), _watcher__init_ref=False, _watcher=<_cffi_backend.__CDataOwn at remote 0x4d503d0>, _callback=<function at remote 0x7ffff6910c00>, _args=(...), _handle=<_cffi_backend.__CDataOwnGC at remote 0x7ffff7238b30>) at remote 0x7ffff6f7a130>}, _check=<_cffi_backend.__CDataOwn at remote 0x377ae20>, _prepare=<_cffi_backend.__CDataOwn at remote 0x376dc70>, _timer0=<_cffi_backend.__CDataOwn at remote 0x38f8330>, _threadsafe_a...(truncated)
libev.ev_run(self._ptr, flags)
#8 Frame 0x7ffff6619020, for file /home/user/project_xyzzy/env/lib/python3.11/site-packages/gevent/hub.py, line 647, in run (self=<Hub(spawning_greenlet=<weakref.ReferenceType at remote 0x7ffff1db5e80>, spawn_tree_locals={}, thread_ident=140737345099008, _resolver=None, _threadpool=None, format_context=<function at remote 0x7ffff59a9de0>, minimal_ident=2, name='GenericGreenlet.dispatch+', exception_stream=None) at remote 0x7fffcbd0a810>, loop=<loop(_ffi=<_cffi_backend.FFI at remote 0x7ffff7079e50>, _lib=<_cffi_backend.Lib at remote 0x7ffff6e98b40>, _ptr=<_cffi_backend._CDataBase at remote 0x7ffff6f4b700>, _handle_to_self=<_cffi_backend.__CDataOwnGC at remote 0x7fffcbc4c0b0>, _watchers=<module at remote 0x7ffff6ef9970>, _in_callback=False, _callbacks=<collections.deque at remote 0x7fffcbd7b850>, _keepaliveset={<async_(_flags=6, loop=<...>, _watcher__init_priority=None, _watcher__init_args=(), _watcher__init_ref=False, _watcher=<_cffi_backend.__CDataOwn at remote 0x4d503d0>, _callback=<function at rem...(truncated)
loop.run()
(gdb) py-up
Unable to find an older python frame
(gdb) quit
A debugging session is active.
Inferior 1 [process 26918] will be killed.
Quit anyway? (y or n) EOF [assumed Y]
```
|
closed
|
2024-12-17T03:48:09Z
|
2024-12-18T21:05:46Z
|
https://github.com/gevent/gevent/issues/2086
|
[
"Type: Question"
] |
spchamp
| 1
|
arogozhnikov/einops
|
tensorflow
| 55
|
New pypi release (repeat not part of version installed by pip)
|
Is it possible to get out a new release of einops on pypi?
It seems like the version installable by pip doesn't include `repeat` (which is a very useful op).
|
closed
|
2020-08-06T14:51:15Z
|
2020-09-11T06:01:58Z
|
https://github.com/arogozhnikov/einops/issues/55
|
[] |
Froskekongen
| 4
|
davidsandberg/facenet
|
computer-vision
| 472
|
OutOfRangeError (see above for traceback): FIFOQueue '_2_batch_join/fifo_queue' is closed and has insufficient elements (requested 1, current size 0)
|
**Notice: I need to run on grayscale pictures.**
**First:** I tested that my picture can be read and displayed.

**Secondly:** I changed `channels=3` to `channels=1` (in all the places that involve `channels=3`).
Meanwhile, I set `batch_size` to 1 and `max_nrof_epochs` to 10 (small enough to avoid having too few pictures).
But then I hit this error: `OutOfRangeError (see above for traceback): FIFOQueue '_2_batch_join/fifo_queue' is closed and has insufficient elements (requested 1, current size 0)`
That means I can't read my .png files.
What other places do I need to change besides `channels`, `batch_size` and `epoch_size`?
|
closed
|
2017-09-28T08:20:08Z
|
2017-11-29T12:15:05Z
|
https://github.com/davidsandberg/facenet/issues/472
|
[] |
rainy1798
| 1
|
graphql-python/graphene-django
|
django
| 1,071
|
Filtering based off of ManyToManyField
|
I'm trawling through the documentation and looking up what I know regarding Django Filters and Django, but there doesn't seem to be a way of filtering a `Node` object on a `ManyToMany` instance attribute; e.g. `RelatedName.name` doesn't seem to be filterable within `filter_fields = {}`, such as:
```
filter_fields = {
'categories__name': ['exact', 'icontains', 'istartswith'],
}
```
I just seem to get the following error when attempting to query by `category.name`:
```
{
"errors": [
{
"message": "Unknown argument \"categoriesName\" on field \"allIngredients\" of type \"Query\".",
"locations": [
{
"line": 149,
"column": 15
}
]
}
]
}
```
What would be the best way to achieve filtering based on a ManyToManyField instance attribute?
|
open
|
2020-11-30T13:12:44Z
|
2020-11-30T13:12:44Z
|
https://github.com/graphql-python/graphene-django/issues/1071
|
[
"✨enhancement"
] |
asencis
| 0
|
3b1b/manim
|
python
| 1,976
|
TypeError when running "manimgl example_scenes.py OpeningManimExample"
|
### Describe the error
After a fresh install (macOS Ventura 13.1), the starting example does not work for me.
### Code and Error
**Code**:
```
manimgl example_scenes.py OpeningManimExample
```
**Error**:
```
ManimGL v1.6.1
[18:49:08] INFO Using the default configuration file, which you can modify in config.py:348
`/Users/bsuberca/Research/manim/manimlib/default_config.yml`
INFO If you want to create a local configuration file, you can create a file named config.py:349
`custom_config.yml`, or run `manimgl --config`
2023-01-28 18:49:09.002 Python[75224:37743336] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/vx/lhs1msj56h715m0q_sjnbnmm0000gr/T/org.python.python.savedState
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/bin/manimgl", line 8, in <module>
sys.exit(main())
File "/Users/bsuberca/Research/manim/manimlib/__main__.py", line 25, in main
scene.run()
File "/Users/bsuberca/Research/manim/manimlib/scene/scene.py", line 148, in run
self.construct()
File "/Users/bsuberca/Research/manim/example_scenes.py", line 21, in construct
self.play(Write(intro_words))
File "/Users/bsuberca/Research/manim/manimlib/scene/scene.py", line 617, in play
self.progress_through_animations(animations)
File "/Users/bsuberca/Research/manim/manimlib/scene/scene.py", line 590, in progress_through_animations
self.update_frame(dt)
File "/Users/bsuberca/Research/manim/manimlib/scene/scene.py", line 313, in update_frame
self.camera.capture(*self.mobjects)
File "/Users/bsuberca/Research/manim/manimlib/camera/camera.py", line 225, in capture
mobject.render(self.ctx, self.uniforms)
File "/Users/bsuberca/Research/manim/manimlib/mobject/mobject.py", line 1941, in render
self.shader_wrappers = self.get_shader_wrapper_list(ctx)
File "/Users/bsuberca/Research/manim/manimlib/mobject/types/vectorized_mobject.py", line 1306, in get_shader_wrapper_list
self.back_stroke_shader_wrapper.read_in(
File "/Users/bsuberca/Research/manim/manimlib/shader_wrapper.py", line 193, in read_in
np.concatenate(data_list, out=self.vert_data)
File "<__array_function__ internals>", line 5, in concatenate
TypeError: Cannot cast array data from dtype({'names':['point','fill_rgba','fill_border_width','joint_product'], 'formats':[('<f4', (3,)),('<f4', (4,)),('<f4', (1,)),('<f4', (4,))], 'offsets':[0,48,88,32], 'itemsize':92}) to dtype([('point', '<f4', (3,)), ('stroke_rgba', '<f4', (4,)), ('stroke_width', '<f4', (1,)), ('joint_product', '<f4', (4,))]) according to the rule 'same_kind'
```
### Environment
**OS System**: MacOS Ventura 13.1
**manim version**: master v1.6.1-997-gf296dd8d
**python version**: 3.10.
|
closed
|
2023-01-29T02:50:47Z
|
2023-02-03T22:54:54Z
|
https://github.com/3b1b/manim/issues/1976
|
[] |
bsubercaseaux
| 2
|
scrapy/scrapy
|
python
| 5,912
|
Update Parsel output in docs/topics/selectors.rst
|
`docs/topics/selectors.rst` still contains `[<Selector xpath='//title/text()' data='Example website'>]` etc., it should be updated for the new Parsel version. Also I think it makes sense to modify it so that these examples are checked by tests like in docs/intro/tutorial.rst (probably an invisible code block loading docs/_static/selectors-sample1.html should be enough).
|
closed
|
2023-04-27T10:06:56Z
|
2023-05-04T13:30:35Z
|
https://github.com/scrapy/scrapy/issues/5912
|
[
"enhancement",
"docs"
] |
wRAR
| 0
|
robinhood/faust
|
asyncio
| 239
|
Feature Pattern Matching / Join
|
Hi,
first of all a very interessting project of high quality. Good work !
One question concering planned features:
Are there any plans (and if yes, when? or do you have a roadmap or something similar) to implement some sort of complex event processing, like joining multiple streams based on a correlation id in a given time window and doing pattern matching on it?
greets Florian
|
open
|
2018-12-18T12:27:03Z
|
2020-10-20T15:42:14Z
|
https://github.com/robinhood/faust/issues/239
|
[
"Issue-Type: Feature Request"
] |
FlorianKuckelkorn
| 2
|
tqdm/tqdm
|
pandas
| 639
|
tensorflow/core/kernels/mkl_concat_op.cc:363] Check failed: dnn Concat Create_F32(&mkl_context.prim_concat, __null, N, &mkl_context.lt_inputs[0]) == E_SUCCESS (-1 vs. 0)
|
I am new to TensorFlow. When I ran a deep neural network program, this error occurred. I don't know what to do; can you help me?
|
closed
|
2018-11-11T13:36:54Z
|
2018-11-14T01:35:26Z
|
https://github.com/tqdm/tqdm/issues/639
|
[
"invalid ⛔"
] |
yjyGo
| 3
|
flairNLP/flair
|
nlp
| 2,875
|
Automatic mixed precision for TARS model
|
Hi, can you please add `apex=True` as a parameter for training TARS as well? I have seen it enabled for other models, as described here: https://github.com/flairNLP/flair/pull/934#issuecomment-516775884
If such a feature already exists please provide appropriate documentation.
Thanks and regards,
Ujwal.
|
closed
|
2022-07-26T12:34:03Z
|
2023-01-07T13:48:06Z
|
https://github.com/flairNLP/flair/issues/2875
|
[
"wontfix"
] |
Ujwal2910
| 1
|
Yorko/mlcourse.ai
|
pandas
| 410
|
topic 5 part 1 summation sign
|
[comment in ODS](https://opendatascience.slack.com/archives/C39147V60/p1541584422610100)
|
closed
|
2018-11-07T10:57:25Z
|
2018-11-10T16:18:10Z
|
https://github.com/Yorko/mlcourse.ai/issues/410
|
[
"minor_fix"
] |
Yorko
| 1
|
Lightning-AI/pytorch-lightning
|
data-science
| 19,801
|
Construct objects from yaml by classmethod
|
### Description & Motivation
Sometimes I want to construct an object from a classmethod (e.g. `from_pretrained`) instead of `__init__`, but it looks like Lightning does not currently support this.
### Pitch
```
model:
class_path: Model.from_pretrained
init_args:
...
```
### Alternatives
_No response_
### Additional context
_No response_
cc @borda
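As a rough sketch of what resolving such a `class_path` could look like (the `instantiate` helper below is hypothetical; it is not Lightning or jsonargparse API):

```python
import importlib

def instantiate(class_path: str, init_args: dict):
    """Resolve 'pkg.Class' or 'pkg.Class.classmethod' and call it."""
    module_path, _, attr = class_path.rpartition(".")
    try:
        target = getattr(importlib.import_module(module_path), attr)
    except (ImportError, AttributeError):
        # attr was not found at module level, so treat the last two
        # components as Class.classmethod instead.
        module_path, _, cls_name = module_path.rpartition(".")
        cls = getattr(importlib.import_module(module_path), cls_name)
        target = getattr(cls, attr)
    return target(**init_args)
```

With something like this, `class_path: Model.from_pretrained` would dispatch to the classmethod while plain `class_path: Model` keeps calling the constructor.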
|
open
|
2024-04-22T19:48:52Z
|
2024-04-22T19:49:14Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/19801
|
[
"feature",
"needs triage"
] |
Boltzmachine
| 0
|
plotly/plotly.py
|
plotly
| 4,298
|
scatter plots with datetime x-axis and more than 1000 data points do not render all points
|
I believe this issue is similar to https://github.com/plotly/plotly_express/issues/145 .
I am using plotly 5.15.0 in jupyter lab 3.6.3.
My browsers (have tried firefox and chrome) support webgl.
When I render a scatter plot where the x-axis values are datetime objects,
if the plot has 1000 points, it renders fine:
```
size = 1000
x = np.random.randint(1690332167190, 1690332667190, size=size)
y = np.random.randint(0, 10000, size=size)
df = pd.DataFrame()
df['x'] = pd.to_datetime(x, unit='ms')
df['y'] = y
fig = px.scatter(df, x='x', y='y')
fig
```

but if I change size to 1001, the render only shows a few clusters of values on the x-axis:

the hover/mouseover behavior makes it clear that there are a bunch of points that exist in the spaces between what is plotted on the graph:

but the individual points don't render (or are rendered with the wrong x-value, not sure which).
|
closed
|
2023-07-27T15:19:11Z
|
2024-02-23T16:06:52Z
|
https://github.com/plotly/plotly.py/issues/4298
|
[] |
cprice404
| 4
|
hankcs/HanLP
|
nlp
| 572
|
Error with the portable jar under JDK 1.6
|
## Version
The current latest version is: hanlp-portable-1.3.4.jar
The version I am using is: hanlp-portable-1.3.4.jar
## My problem
I created a new project, added a reference to hanlp-portable-1.3.4.jar, and under JDK 1.6 the following code throws an error.
Error message:
```
2017-6-29 9:31:53 com.hankcs.hanlp.dictionary.TransformMatrixDictionary load
警告: 读取data/dictionary/CoreNatureDictionary.tr.txt失败java.lang.NullPointerException: Inflater has been closed
Exception in thread "main" java.lang.NullPointerException
at com.hankcs.hanlp.algorithm.Viterbi.compute(Viterbi.java:121)
at com.hankcs.hanlp.seg.WordBasedGenerativeModelSegment.speechTagging(WordBasedGenerativeModelSegment.java:533)
at com.hankcs.hanlp.seg.Viterbi.ViterbiSegment.segSentence(ViterbiSegment.java:120)
at com.hankcs.hanlp.seg.Segment.seg(Segment.java:498)
at Main.main(Main.java:19)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
```
### Triggering code
```
Segment segment = new ViterbiSegment(){{
enableIndexMode(false);
enableOffset(true);
enableNumberQuantifierRecognize(false);
enableOrganizationRecognize(true);
enableCustomDictionary(true);
enablePlaceRecognize(true);
enableNameRecognize(true);
enableJapaneseNameRecognize(false);
enableTranslatedNameRecognize(false);
enablePartOfSpeechTagging(true);
}};
System.out.println(segment.seg("看一下小王的日报"));
```
## Other information
1. Commenting out `enablePlaceRecognize(true);` makes it run normally;
2. Debugging with the portable source code raises no error.
|
closed
|
2017-06-29T01:39:13Z
|
2018-04-10T08:51:33Z
|
https://github.com/hankcs/HanLP/issues/572
|
[
"ignored"
] |
AnyListen
| 8
|
sigmavirus24/github3.py
|
rest-api
| 527
|
content-type: text/plain overwritten to None for GitHub.markdown()
|
When `raw=True` for `GitHub.markdown()`, `Content-Type` should be `text/plain`. However, `github3.models` overwrites the `Content-Type` on POST requests. I encountered this while completing the migration of `tests_github.py`.
### Unit Test assertion for markdown with raw=True
```
AssertionError: Expected call: post('https://api.github.com/markdown', 'Hello', headers={'Content-Type': 'text/plain'})
Actual call: post('https://api.github.com/markdown/raw', 'Hello', headers=None)
```
### Setting Content-Type to text/plain
https://github.com/sigmavirus24/github3.py/blob/develop/github3/github.py#L781-L784
### Overwriting Content-Type on POST
https://github.com/sigmavirus24/github3.py/blob/develop/github3/models.py#L202-L209
## API Docs
https://developer.github.com/v3/markdown/
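One possible shape for a fix (a sketch, not github3.py's actual code): apply the library's default headers only where the caller hasn't already set one, so an explicit `text/plain` survives the POST:

```python
def merge_headers(defaults, overrides=None):
    # Caller-supplied headers (e.g. Content-Type: text/plain for the
    # raw markdown endpoint) take precedence over library defaults
    # instead of being overwritten to None.
    merged = dict(defaults)
    merged.update(overrides or {})
    return merged
```

The hypothetical `merge_headers` helper would replace the unconditional header overwrite in `models.py` linked above.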
|
closed
|
2015-12-31T16:52:00Z
|
2016-01-05T18:04:18Z
|
https://github.com/sigmavirus24/github3.py/issues/527
|
[] |
itsmemattchung
| 4
|
taverntesting/tavern
|
pytest
| 303
|
Length of returned list was different than expected - expected 1 items, got 2
|
Can anyone tell me how I can match the first element of the `Categories` array (`code: "1"`)?
Currently, I get this error:
```
failed:
E - Value mismatch in body: Length of returned list was different than expected - expected 1 items, got 2 (expected["data"]["Categories"] = '[{'code': <tavern.util.loader.AnythingSentinel object at 0x1068f7a90>}]
```
The JSON is
```
"data": {
"title": "hello",
"Categories": [
{
"code": "1",
"name": "iphone"
},
{
"code": "2",
"name": "iphone X"
}
```
the tavern test is
```
response:
body:
data:
title: "hello"
Categories:
-
code: "1"
```
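For what it's worth, here is a Python sketch of the "match any element" semantics the test above seems to want; Tavern compares the full list (including its length), which is why the mismatch error appears. The `list_contains` helper name is hypothetical:

```python
def list_contains(expected_items, actual_list):
    """True if every expected dict matches, as a subset, some element
    of the actual list, regardless of list length or order."""
    return all(
        any(all(actual.get(k) == v for k, v in item.items())
            for actual in actual_list)
        for item in expected_items
    )
```

In Tavern itself, relaxing the strict list comparison (e.g. via its `strict` response settings) may be the supported route rather than a custom matcher.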
|
closed
|
2019-03-08T20:01:09Z
|
2019-04-20T17:26:47Z
|
https://github.com/taverntesting/tavern/issues/303
|
[] |
varayal
| 2
|
pallets/flask
|
python
| 4,466
|
TaggedJSONSerializer doesn't round-trip naive datetimes
|
Since https://github.com/pallets/werkzeug/commit/265bad7, `parse_date` always returns an aware datetime. This can't be compared with naive datetimes, so will break a bunch of existing code.
```
>>> from flask.json.tag import TaggedJSONSerializer
>>> s = TaggedJSONSerializer()
>>> datetime.utcnow()
datetime.datetime(2022, 2, 25, 1, 52, 43, 79586)
>>> s.loads(s.dumps(datetime.utcnow()))
datetime.datetime(2022, 2, 25, 1, 52, 44, tzinfo=datetime.timezone.utc)
```
This probably makes sense for werkzeug, but not for a JSON serialiser.
It might be enough to replace the calls to `http_date` and `parse_date` with `value.isoformat()` and `datetime.fromisoformat(value)`, which will work fine for aware and naive datetimes.
Environment:
- Python version: 3.9
- Flask version: 2.0.2
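A minimal sketch of the suggested replacement, using plain `datetime` methods rather than the actual Flask serializer code: `isoformat()`/`fromisoformat()` round-trip both naive and aware values unchanged:

```python
from datetime import datetime, timezone

def dump_datetime(value):
    # stands in for http_date: preserves the naive vs. aware distinction
    return value.isoformat()

def load_datetime(value):
    # stands in for parse_date
    return datetime.fromisoformat(value)

naive = datetime(2022, 2, 25, 1, 52, 43, 79586)
aware = datetime(2022, 2, 25, 1, 52, 43, tzinfo=timezone.utc)
assert load_datetime(dump_datetime(naive)) == naive  # stays naive
assert load_datetime(dump_datetime(aware)) == aware  # stays aware
```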
|
closed
|
2022-02-25T02:01:22Z
|
2022-03-12T00:04:45Z
|
https://github.com/pallets/flask/issues/4466
|
[] |
marksteward
| 5
|
hzwer/ECCV2022-RIFE
|
computer-vision
| 114
|
Optical Flow Labels
|
Good evening!
Can you tell me how you generated the optical flow labels in the 100 sample subset of the vimeo-90k dataset?
I cannot reproduce these in the same way with [pytorch-liteflownet](https://github.com/sniklaus/pytorch-liteflownet)
This is the one I can generate with liteflownet
`python run.py --model default --first ./images/first.png --second ./images/second.png --out ./out.flo
`

and this is the one that is used in the trainings sample net

The optical flow I get is much more blurry.
Did you use special code or a different model than the pretrained ones?
I already used different Input image sizes.
Rescaled from 1x to 5x, but all are blurry.
Thank you in advance!
|
closed
|
2021-02-24T21:46:53Z
|
2021-02-28T16:57:09Z
|
https://github.com/hzwer/ECCV2022-RIFE/issues/114
|
[] |
niklasAust
| 2
|
dynaconf/dynaconf
|
flask
| 768
|
[RFC] Resolve deprecation warning for deprecated property kv
|
**Is your feature request related to a problem? Please describe.**
Yes. Currently we are hitting the deprecation warning in hvac 0.11, since the kv property is deprecated and we are advised to use it from `Client.secrets`.
Clear Warning:
DeprecationWarning: Call to deprecated property 'kv'. This property will be removed in version '0.9.0' Please use the 'kv' property on the 'Client.secrets' attribute moving forward
**Describe the solution you'd like**
Remove the direct usage of the kv property in dynaconf and use it from `Client.secrets`.
**Describe alternatives you've considered**
The alternative is not required.
|
closed
|
2022-07-15T09:11:08Z
|
2022-07-16T19:03:29Z
|
https://github.com/dynaconf/dynaconf/issues/768
|
[
"Not a Bug",
"RFC"
] |
jyejare
| 0
|
lukas-blecher/LaTeX-OCR
|
pytorch
| 42
|
how to ocr low width latex image
|


If the image width is too low, the OCR result will be useless.
I have try to reduce patch_size to 8, but the error occured:
`Exception has occurred: RuntimeError
The size of tensor a (33) must match the size of tensor b (129) at non-singleton dimension 1
File "F:\code\LaTeX-OCR\models.py", line 81, in forward_features
x += self.pos_embed[:, pos_emb_ind]
File "F:\code\LaTeX-OCR\train.py", line 48, in train
encoded = encoder(im.to(device))
File "F:\code\LaTeX-OCR\train.py", line 88, in <module>
train(args)`
I have struggled with this issue for several days. Please tell me what I can do in this situation.
Thank you very much!
|
closed
|
2021-10-18T02:06:09Z
|
2021-10-28T02:25:17Z
|
https://github.com/lukas-blecher/LaTeX-OCR/issues/42
|
[] |
daassh
| 1
|
ets-labs/python-dependency-injector
|
flask
| 551
|
Dependencies are not injected when creating injections into a module using "providers.DependenciesContainer"
|
hello.
The following code tries to use SubContainer to perform dependency injection into a Service, but this code cannot be executed with the exception "exception: no description".
```python
from dependency_injector import containers, providers
from dependency_injector.wiring import Provide


class Service:
    ...


class Container(containers.DeclarativeContainer):
    sub_container = providers.DependenciesContainer()


class SubContainer(containers.DeclarativeContainer):
    service = providers.Factory(Service)


service: Service = Provide[Container.sub_container.service]


if __name__ == "__main__":
    container = Container()
    container.wire(modules=[__name__])

    assert isinstance(service, Service)
```
How can I perform dependency injection into a SubContainer?
|
open
|
2022-01-21T02:45:17Z
|
2022-03-25T17:57:57Z
|
https://github.com/ets-labs/python-dependency-injector/issues/551
|
[] |
satodaiki
| 1
|
sqlalchemy/alembic
|
sqlalchemy
| 1,244
|
v1.10.4 -> v1.11.0: pyright: "__setitem__" method not defined on type "Mapping[str, str]"
|
Hi Alembic and all,
There seems to be a type change (for better or for worse) that's being caught by `pyright` -- possibly others.
We have a migration `env.py` with a statement like `configuration["sqlalchemy.url"] = database` that's throwing the error when `pyright` runs. With v1.10.4, there's no such error. I'm not sure if this is part of autogenerated output, if the code was a bit invalid from the start, or if there's something else going on.
In this case, `configuration = config.get_section(config.config_ini_section)`, `config = context.config`, and `from alembic import context`.
Let me know if I can provide anything else to help debug.
Thanks!
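One workaround that satisfies the new `Mapping[str, str]` return annotation (sketched with placeholder values, not the real `env.py`): copy the section into a mutable `dict` before assigning:

```python
from typing import Mapping

def get_section() -> Mapping[str, str]:
    # stands in for config.get_section(config.config_ini_section)
    return {"sqlalchemy.url": "placeholder"}

# Mapping has no __setitem__; a dict copy does, and pyright accepts it.
configuration = dict(get_section() or {})
configuration["sqlalchemy.url"] = "postgresql://localhost/db"
```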
|
closed
|
2023-05-16T14:19:51Z
|
2023-05-16T17:14:56Z
|
https://github.com/sqlalchemy/alembic/issues/1244
|
[
"bug",
"pep 484"
] |
gitpushdashf
| 4
|
SciTools/cartopy
|
matplotlib
| 2,339
|
Cartopy 0.23 release
|
### Description
Our last release was in August 2023, which means we haven't released since Python 3.12 came out, which I did not realize 😮
https://github.com/SciTools/cartopy/releases
I think we should aim for a release soon. Numpy 2.0 is coming out soon as well, so we will for sure need one after that. Should we wait for that, or should we do a release now, pinned to `numpy<2`, and then release another one right after numpy 2.0 is out?
Are there any PRs / issues we need to address before the release?
Two issues in the milestone that aren't really dealbreakers: https://github.com/SciTools/cartopy/issues?q=is%3Aopen+is%3Aissue+milestone%3A%22Next+Release%22
There are a few PRs, but all of these have been around for a while at this point so probably not necessary to hold up the release for either: https://github.com/SciTools/cartopy/pulls?q=is%3Aopen+is%3Apr+milestone%3A%22Next+Release%22
|
closed
|
2024-03-05T01:17:42Z
|
2024-04-10T18:25:33Z
|
https://github.com/SciTools/cartopy/issues/2339
|
[] |
greglucas
| 3
|
JaidedAI/EasyOCR
|
pytorch
| 1,234
|
Output of reader not shown in command window
|
I am using this model by embedding Python into C++, and the result of running the reader function does not show anything in the command line. This is how I run it:
```
PyRun_SimpleString("import easyocr");
PyRun_SimpleString("reader = easyocr.Reader(['ja'], gpu=True)");
PyRun_SimpleString("result = reader.readtext('./Input/2161700_20240202182712_1.png')");
```
I am new to all of this so I'm unsure how to debug this.
|
open
|
2024-03-25T17:58:13Z
|
2024-03-25T17:58:13Z
|
https://github.com/JaidedAI/EasyOCR/issues/1234
|
[] |
danielzx2
| 0
|
feature-engine/feature_engine
|
scikit-learn
| 845
|
expand probe feature selection to derive importance with additional methods
|
At the moment it supports embedded methods and single feature models if i remember correctly. We could also rank features based on anova, correlation and mi and permutation.
|
open
|
2025-02-03T15:42:57Z
|
2025-02-20T07:37:13Z
|
https://github.com/feature-engine/feature_engine/issues/845
|
[] |
solegalli
| 1
|
sinaptik-ai/pandas-ai
|
data-visualization
| 721
|
Inconsistent datatype formation
|
### System Info
Please share your system info with us.
OS version:Windows
Python version:3.11
The current version of pandasai being used:1.4
### 🐛 Describe the bug
Using the DataLake `chat` function, the output type is inconsistent: even after specifying the output type, the output comes back as a different one.
Example:
dl.chat('show count of Codes on basis of year', output_type='DataFrame')
response - 'There were 1809 codes in 2020 and 1336 codes in 2021.'
|
closed
|
2023-11-02T06:59:10Z
|
2023-11-15T10:23:36Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/721
|
[] |
MuthusivamGS
| 1
|
sinaptik-ai/pandas-ai
|
pandas
| 1,392
|
TypeError: e?.map is not a function
|
### System Info
pandasai:2.2.15
python:3.12.4
macos
### 🐛 Describe the bug
```
> client@0.1.0 build
> next build
▲ Next.js 14.2.15
- Environments: .env
Creating an optimized production build ...
✓ Compiled successfully
Skipping linting
✓ Checking validity of types
✓ Collecting page data
Generating static pages (7/15) [= ]TypeError: e?.map is not a function
at c (/Users/tom_tang/Projects/pandas-ai/client/build/server/app/settings/workspaces/page.js:1:10168)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
digest: '3282278517'
}
TypeError: e?.map is not a function
at c (/Users/tom_tang/Projects/pandas-ai/client/build/server/app/settings/workspaces/page.js:1:10168)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
digest: '3282278517'
}
Error occurred prerendering page "/settings/workspaces". Read more: https://nextjs.org/docs/messages/prerender-error
TypeError: e?.map is not a function
at c (/Users/tom_tang/Projects/pandas-ai/client/build/server/app/settings/workspaces/page.js:1:10168)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
✓ Generating static pages (15/15)
> Export encountered errors on following paths:
/settings/workspaces/page: /settings/workspaces
```
|
closed
|
2024-10-12T03:53:51Z
|
2025-01-20T16:00:19Z
|
https://github.com/sinaptik-ai/pandas-ai/issues/1392
|
[
"bug"
] |
tangfei
| 3
|
graphql-python/graphene
|
graphql
| 1,400
|
Python 3.10 support in v3
|
* **What is the current behavior?**
Two tests fail on Python 3.10, with slightly different output on Python 3.10, that the tests do not expect.
```py
[ 101s] =================================== FAILURES ===================================
[ 101s] ___________________ test_objecttype_as_container_extra_args ____________________
[ 101s]
[ 101s] def test_objecttype_as_container_extra_args():
[ 101s] with raises(TypeError) as excinfo:
[ 101s] Container("1", "2", "3")
[ 101s]
[ 101s] > assert "__init__() takes from 1 to 3 positional arguments but 4 were given" == str(
[ 101s] excinfo.value
[ 101s] )
[ 101s] E AssertionError: assert '__init__() t... 4 were given' == 'Container.__... 4 were given'
[ 101s] E - Container.__init__() takes from 1 to 3 positional arguments but 4 were given
[ 101s] E ? ----------
[ 101s] E + __init__() takes from 1 to 3 positional arguments but 4 were given
[ 101s]
[ 101s] graphene/types/tests/test_objecttype.py:197: AssertionError
[ 101s] _________________ test_objecttype_as_container_invalid_kwargs __________________
[ 101s]
[ 101s] def test_objecttype_as_container_invalid_kwargs():
[ 101s] with raises(TypeError) as excinfo:
[ 101s] Container(unexisting_field="3")
[ 101s]
[ 101s] > assert "__init__() got an unexpected keyword argument 'unexisting_field'" == str(
[ 101s] excinfo.value
[ 101s] )
[ 101s] E assert "__init__() g...isting_field'" == "Container.__...isting_field'"
[ 101s] E - Container.__init__() got an unexpected keyword argument 'unexisting_field'
[ 101s] E ? ----------
[ 101s] E + __init__() got an unexpected keyword argument 'unexisting_field'
[ 101s]
[ 101s] graphene/types/tests/test_objecttype.py:206: AssertionError
```
* **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** via
a github repo, https://repl.it or similar.
* **What is the expected behavior?**
Tests pass on Python 3.10
* **What is the motivation / use case for changing the behavior?**
* **Please tell us about your environment:**
- Version:
- Platform:
* **Other information** (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow)
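One way to make the assertions tolerant of both message formats (a sketch, not necessarily the fix graphene adopted): Python 3.10 prefixes the qualified class name, so matching on the common suffix works on both versions:

```python
class Container:
    def __init__(self, a=None, b=None):
        pass

try:
    Container("1", "2", "3")
except TypeError as exc:
    # <=3.9: "__init__() takes ...", 3.10+: "Container.__init__() takes ..."
    assert str(exc).endswith(
        "takes from 1 to 3 positional arguments but 4 were given"
    )
```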
|
closed
|
2022-01-11T03:33:20Z
|
2022-06-14T14:57:23Z
|
https://github.com/graphql-python/graphene/issues/1400
|
[
"🐛 bug"
] |
jayvdb
| 3
|
piskvorky/gensim
|
data-science
| 3,493
|
Search feature on website is broken
|
The title.
|
open
|
2023-08-30T04:34:26Z
|
2023-09-02T02:38:41Z
|
https://github.com/piskvorky/gensim/issues/3493
|
[
"housekeeping"
] |
mbarberry
| 1
|
deepset-ai/haystack
|
machine-learning
| 8,621
|
Update materials to access `ChatMessage` `text` (instead of `content`)
|
part of https://github.com/deepset-ai/haystack/issues/8583
Since `content` will be removed in future in favor of `text` (and other types of content),
we are already transitioning to `text` to smooth the transition process.
```[tasklist]
### Materials to update
- [x] tutorials
- [x] cookbook (in review...)
- [x] integration pages (in review...)
- [x] blogposts (in review...)
```
|
closed
|
2024-12-10T14:27:24Z
|
2024-12-12T11:37:59Z
|
https://github.com/deepset-ai/haystack/issues/8621
|
[] |
anakin87
| 2
|
davidsandberg/facenet
|
tensorflow
| 544
|
A question about how to make a face verification?
|
Hello, I now have a new question: if I have an array of one person's face embeddings, how can I use these embeddings to do face verification?
I mean, do I need to train a model on these embeddings and then verify a new embedding against it? Maybe there is a threshold: if the probability is greater than the threshold they are the same person, otherwise they are two different people.
How do I choose the model? (Apparently the SVM model can't be used under these circumstances.)
Looking forward to your reply.
Thanks @davidsandberg
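A minimal sketch of threshold-based verification (the 1.1 threshold is an assumption for L2-normalized facenet-style embeddings and should be tuned on a validation set): compare the Euclidean distance between two embeddings against a threshold instead of training a classifier:

```python
import math

def l2_distance(emb_a, emb_b):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))

def same_person(emb_a, emb_b, threshold=1.1):
    # below the threshold -> same identity, above -> different people
    # (threshold value is an assumption, tune it on your own data)
    return l2_distance(emb_a, emb_b) < threshold

assert same_person([0.1, 0.2, 0.3], [0.12, 0.18, 0.31])
assert not same_person([0.1, 0.2, 0.3], [1.5, -1.0, 1.2])
```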
|
closed
|
2017-11-22T10:47:47Z
|
2017-11-23T08:05:16Z
|
https://github.com/davidsandberg/facenet/issues/544
|
[] |
xvdehao
| 4
|
pyppeteer/pyppeteer
|
automation
| 479
|
is this supported by centOS?
|
Im asking this because playwright it's not supported by centOS
|
open
|
2024-06-30T13:01:53Z
|
2024-06-30T13:02:16Z
|
https://github.com/pyppeteer/pyppeteer/issues/479
|
[] |
juanfrilla
| 0
|
iperov/DeepFaceLab
|
machine-learning
| 5,262
|
Graphic Card Recommendation
|
Hi there, I'm just beginner and want to upgrade my computer.
I was wondering whether a GeForce or a Radeon card would perform better; in the same price range I could get a GeForce with 8/10 GB of VRAM or a Radeon with 16 GB of VRAM.
Any recommendations?
|
open
|
2021-01-26T10:07:37Z
|
2023-06-08T21:42:04Z
|
https://github.com/iperov/DeepFaceLab/issues/5262
|
[] |
Spoko-luzik
| 2
|
coqui-ai/TTS
|
python
| 3,235
|
Slow deep voice after Vits training
|
### Describe the bug
Hi,
I'm trying to train a model with my own voice. I recorded 150 samples in studio quality (44100 Hz), which I downsampled to 22050 Hz for training. After 10000 epochs, I get a slow, deep voice far from the original one.
What do I need to change to get a render close to the original voice?
Here is an example render with best_model_80000.pth after 10000 epochs, with "Vous écoutez COQUI" (You are listening to COQUI):
https://github.com/coqui-ai/TTS/assets/62831563/c165133d-e443-4e68-804f-a7e161eb53c8
```
config = VitsConfig(
    batch_size=15,
    eval_batch_size=10,
    num_loader_workers=4,
    num_eval_loader_workers=4,
    run_eval=True,
    test_delay_epochs=-1,
    epochs=10000,
    save_step=1000,
    text_cleaner="french_cleaners",
    use_phonemes=True,
    phoneme_language="fr-fr",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    compute_input_seq_cache=True,
    print_step=25,
    print_eval=True,
    mixed_precision=True,
    output_path=output_path,
    characters=characters_config,
    datasets=[dataset_config],
    audio=audio_config,
    cudnn_benchmark=True,
    test_sentences=[
        ["Vous écouter COQUI"]
    ]
)
```
|
closed
|
2023-11-16T05:21:48Z
|
2023-11-16T13:14:07Z
|
https://github.com/coqui-ai/TTS/issues/3235
|
[
"bug"
] |
o3web
| 7
|
jschneier/django-storages
|
django
| 621
|
django-storage with dropbox breaks browser cache
|
I'm facing the same issue described [here](https://stackoverflow.com/questions/15668443/django-storage-with-s3-boto-break-browser-cache) but instead of Amazon storage, I'm using the Dropbox option. The current configuration works for a while with my cache configuration (cache_page), but after a while all the media files return 404 until I clear the cache again.
Is there any specific configuration for Dropbox or using `AWS_QUERYSTRING_AUTH = False` will also work?
|
closed
|
2018-10-24T15:33:05Z
|
2018-10-25T13:56:26Z
|
https://github.com/jschneier/django-storages/issues/621
|
[
"bug",
"dropbox"
] |
JoabMendes
| 2
|
scikit-optimize/scikit-optimize
|
scikit-learn
| 1,007
|
[QUESTION] Is existence of points provided by sampling method checked if already provided with x0/y0?
|
Hi,
I am sorry, I am not sure how to check this in the source code.
When using `forest_minimize`, I intend to use both options of:
- sampling the search space through Latin Hypercube
- providing known points by use of x0/y0 parameters
Because `func` is expensive to evaluate, it would be best if, from the list of points provided by the sampling function, those for which the result of `func` is already known were removed.
Is this already the case, please?
Thanks for your help.
Have a good day,
Bests
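As far as I can tell this needs to be handled by the caller; a sketch (toy values, not skopt internals) of filtering sampled candidates against the already-evaluated `x0` before calling `func`:

```python
def drop_known_points(candidates, x0):
    # remove sampled candidates whose func result is already known via x0/y0
    known = {tuple(p) for p in x0}
    return [p for p in candidates if tuple(p) not in known]

candidates = [[0.1, 2.0], [0.5, 1.0], [0.9, 3.0]]  # e.g. Latin Hypercube samples
x0 = [[0.5, 1.0]]                                  # already evaluated
assert drop_known_points(candidates, x0) == [[0.1, 2.0], [0.9, 3.0]]
```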
|
open
|
2021-02-26T12:40:06Z
|
2021-06-25T12:52:04Z
|
https://github.com/scikit-optimize/scikit-optimize/issues/1007
|
[
"Question"
] |
yohplala
| 1
|
pytest-dev/pytest-django
|
pytest
| 914
|
docs(readme): update readme broken-links.
|
- line 58
- Manage test dependencies with pytest fixtures. <https://pytest.org/en/latest/fixture.html>
- line 62
- Make use of other `pytest plugins <https://pytest.org/en/latest/plugins.html>
Will submit a pull request for this.
|
closed
|
2021-03-16T21:05:31Z
|
2021-04-02T19:51:27Z
|
https://github.com/pytest-dev/pytest-django/issues/914
|
[] |
wearypossum4770
| 1
|
openapi-generators/openapi-python-client
|
fastapi
| 773
|
generated client doesn't support proxies
|
**Is your feature request related to a problem? Please describe.**
The generated client is nice - clean, ergonomic, and lean - but it doesn't provide the full ability to pass arguments through to `httpx`. In the case I care about right now, I need to pass a proxy configuration to it (and I need to do so in code, I cannot rely on environment variables for an uninteresting reason, which is that I need one application to use different proxies in different places).
**Describe the solution you'd like**
I believe something like #202 would help, but I'm open to other ideas. It occurs to me that the format of the proxies dict that `httpx` accepts is actually itself an implementation detail specific to `httpx`. `requests`, for example, uses keys like `http` and `https` while `httpx`'s proxy keys are `http://` and `https://`.
**Describe alternatives you've considered**
So far I've customized the `client.py.jinja` and `endpoint_module.py.jinja` templates in the obvious way, and it works, but I don't want to be subject to bugs if the templates change in future versions, I'd rather `openapi-python-client` intentionally support some form of proxy config.
|
closed
|
2023-06-27T02:09:34Z
|
2023-07-23T19:38:28Z
|
https://github.com/openapi-generators/openapi-python-client/issues/773
|
[
"✨ enhancement"
] |
leifwalsh
| 3
|
python-visualization/folium
|
data-visualization
| 1,178
|
Folium layer properties method
|
#### Please add a code sample or a nbviewer link, copy-pastable if possible
```python
mc = MarkerCluster(name='Stations')
for station in stationsgeo.iterrows():
    popupstation = 'Name : {} ID : {}'.format(station[1]['englishNameStr'], station[1]['stationIDInt'])
    mc.add_child(Marker(
        [station[1]['latitudeFloat'], station[1]['longitudeFloat']],
        popup=popupstation,
        tooltip=folium.map.Tooltip(text=popupstation),
        icon=folium.Icon(color='green', icon='cloud'),
    ))
m.add_child(mc)

stationsearch = Search(
    layer=mc,
    geom_type='Point',
    placeholder='Search for a station',
    collapsed=False,
    search_label='englishNameStr',
    weight=3
).add_to(m)
```
#### Problem description
The MarkerCluster layer mc is not searchable because I cannot find the search_label that needs to be passed. I have looked into the documentation of both MarkerCluster and Search (https://python-visualization.github.io/folium/plugins.html) as well as the Leaflet.js documentation and source code, but I cannot figure out how to get a list of the viable search_labels for a MarkerCluster.
The GeoDataframe stationsgeo.head(1)
| | stationIDInt | englishNameStr | latitudeFloat | longitudeFloat | geometry | poly |
| -- | -- | -- | -- | -- | -- | -- |
| 0 | 10102 | St. John's | 47.560000 | -52.711389 | POINT (47.56 -52.71138888888891) | POLYGON ((47.561 -52.71138888888891, 47.560995... |
#### Expected Output
Expected to be searchable just like my other GeoJson layers.
#### Output of ``folium.__version__``
0.9.1
|
closed
|
2019-07-16T19:37:31Z
|
2020-03-28T22:32:14Z
|
https://github.com/python-visualization/folium/issues/1178
|
[] |
itsgifnotjiff
| 0
|
scrapy/scrapy
|
web-scraping
| 6,188
|
install scrapy for raspberry
|

|
closed
|
2023-12-26T16:03:07Z
|
2023-12-26T16:04:14Z
|
https://github.com/scrapy/scrapy/issues/6188
|
[] |
WangXinis
| 0
|
dunossauro/fastapi-do-zero
|
pydantic
| 126
|
[IMPROVEMENTS] Simplifying interactions with the database via SQLAlchemy
|
The main idea of this issue is to make the interactions with the database and SQLAlchemy more palatable for beginners.
### The inheritance problem
Example from lesson 04:
```python
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = 'users'

    id: Mapped[int] = mapped_column(primary_key=True)
    username: Mapped[str]
    password: Mapped[str]
    email: Mapped[str]
```
In lesson 09:
```python
class Todo(Base):
    __tablename__ = 'todos'

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    description: Mapped[str]
    state: Mapped[TodoState]
    user_id: Mapped[int] = mapped_column(ForeignKey('users.id'))
    user: Mapped[User] = relationship(back_populates='todos')
```
The original model defined in the course relies on some complex concepts, such as `DeclarativeBase` with an empty class to be inherited from.
The `Base` class is only used in the migrations and in the tests.
### In the migrations
The `Base` class is used for the target:
```python
from fast_zero.models import Base
...
target_metadata = Base.metadata
```
### In the tests
```python
@pytest.fixture()
def session():
    engine = create_engine('sqlite:///:memory:')
    Session = sessionmaker(bind=engine)
    Base.metadata.create_all(engine)

    yield Session()

    Base.metadata.drop_all(engine)
```
## The fixture problem
The fixtures are a separate complexity problem. Using `sessionmaker` adds no value in these usage scenarios. Although it is the standard for more complex setups, in the context of the course it could be replaced by the fixture's own context manager.
# Possible solutions
Some basic ideas to improve readability and understanding.
## The inheritance case
`DeclarativeBase` could be replaced by the `registry`, turning the models into `dataclasses`:
```python
from sqlalchemy.orm import Mapped, mapped_column, registry

reg = registry()


@reg.mapped_as_dataclass
class User:
    __tablename__ = 'users'

    id: Mapped[int] = mapped_column(primary_key=True)
    username: Mapped[str]
    password: Mapped[str]
    email: Mapped[str]
```
This would make debugging easier, since dataclasses have a `__repr__` by default, and it also simplifies the inheritance case, even making it more descriptive.
## The tests case
We can use `Session` itself, without the noise left by `sessionmaker`:
```python
@pytest.fixture()
def session():
    engine = create_engine('sqlite:///:memory:')
    reg.metadata.create_all(engine)

    with Session(engine) as session:
        yield session

    reg.metadata.drop_all(engine)
```
This would make things easier to understand, since the session is used this way in the source code, without `sessionmaker`.
|
closed
|
2024-04-18T21:54:10Z
|
2024-05-05T06:39:41Z
|
https://github.com/dunossauro/fastapi-do-zero/issues/126
|
[] |
dunossauro
| 1
|
learning-at-home/hivemind
|
asyncio
| 575
|
How well does it scale?
|
Hello,
I am researching P2P solutions and am wondering how well Hivemind scales?
Thanks
|
open
|
2023-07-20T01:37:29Z
|
2023-07-20T20:50:52Z
|
https://github.com/learning-at-home/hivemind/issues/575
|
[] |
lonnietc
| 2
|
jina-ai/serve
|
machine-learning
| 5,622
|
refactor: extract GrpcConnectionPool implementation into a package
|
Currently the `GrpcConnectionPool` class, which handles gRPC channels and sending requests to targets, lives in a single giant file. The goals of the refactor are:
- create a new networking package and extract the inner classes into it.
- add unit tests for each inner class implementation.
- use correct synchronization (asyncio or thread) when resetting or recreating gRPC channels and stubs, which are stored in a dict.
|
closed
|
2023-01-25T08:33:30Z
|
2023-02-01T12:26:01Z
|
https://github.com/jina-ai/serve/issues/5622
|
[
"epic/gRPCTransport"
] |
girishc13
| 1
|
davidsandberg/facenet
|
tensorflow
| 1,172
|
Get model graph in JSON
|
Hello, David! Your repo has been really useful in my study of neural networks. I have done softmax training on my own dataset. But to use a model with an identical structure it's necessary to get the graph in JSON format. The model used is 20180402-114759 (trained on VGGFace). Please help me get it, or show me the way to do it...
|
open
|
2020-09-18T10:11:57Z
|
2021-10-26T20:05:28Z
|
https://github.com/davidsandberg/facenet/issues/1172
|
[] |
Julia2505
| 1
|
vitalik/django-ninja
|
pydantic
| 320
|
Docs - file uploads
|
Since this question pops up very often, add here (/tutorial/file-params/):
1) an example of how to store an uploaded file to a model
2) an example of File together with form data (schema)
|
open
|
2022-01-10T10:33:07Z
|
2022-07-02T15:28:37Z
|
https://github.com/vitalik/django-ninja/issues/320
|
[
"documentation"
] |
vitalik
| 0
|
babysor/MockingBird
|
deep-learning
| 686
|
Error when training again on top of the author's model
|

Downgrading from CUDA 11.6 to CUDA 11.3 still doesn't work.

|
open
|
2022-07-28T23:12:53Z
|
2022-07-29T04:32:49Z
|
https://github.com/babysor/MockingBird/issues/686
|
[] |
yunqi777
| 1
|