| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
vllm-project/vllm
|
pytorch
| 14,403
|
[Bug]: Error when Run Image Docker Vllm v0.7.3 - Unexpected error from cudaGetDeviceCount(). ....
|
### Your current environment
<details>
<summary>
I have a problem when starting the Docker image with vLLM v0.7.3 (the latest right now).

My docker-compose.yml file
### Docker Compose Configuration
```yaml
version: "3.8"
services:
vllm-openai:
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities:
- gpu
volumes:
- ~/.cache/huggingface:/root/.cache/huggingface
environment:
- HUGGING_FACE_HUB_TOKEN=<...>
ports:
- 8000:8000
ipc: host
image: vllm/vllm-openai:latest
runtime: nvidia
command: --model deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
```
This is my output when running `nvidia-smi`:

and from `nvcc --version`:

</summary>
</details>
### 🐛 Describe the bug

### Before submitting a new issue...
- [ ] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
|
open
|
2025-03-07T04:24:55Z
|
2025-03-10T17:07:25Z
|
https://github.com/vllm-project/vllm/issues/14403
|
[
"bug"
] |
duytran1999
| 2
|
ets-labs/python-dependency-injector
|
asyncio
| 692
|
Selector should be able to select between different Configurations
|
Let's say I have this yaml configuration:
```
selected: option1
option1:
  param1: ...
  param2: ...
option2:
  param1: ...
  param2: ...
```
I want to be able to do something like this:
```
class Container(containers.DeclarativeContainer):
    config = providers.Configuration()
    parameters = providers.Selector(
        config.selected,
        option1=config.option1,
        option2=config.option2
    )
    foo = providers.Factory(
        SomeClass,
        param1=parameters.param1,
        param2=parameters.param2
    )
```
|
open
|
2023-03-31T21:09:27Z
|
2023-04-01T13:12:34Z
|
https://github.com/ets-labs/python-dependency-injector/issues/692
|
[] |
andresi
| 1
|
davidsandberg/facenet
|
tensorflow
| 429
|
Why the dataset has to be aligned
|
I ran the following script:
`sudo python align_dataset_mtcnn.py /datasets/lfw/raw /datasets/lfw/lfw_mtcnnpy_160 --image_size 160 --margin 32 --random_order --gpu_memory_fraction 0.25 `
May I know what this script basically does? why do we have to align it? [This](https://github.com/davidsandberg/facenet/wiki/Validate-on-lfw#4-align-the-lfw-dataset) doesn't explain why alignment is necessary
|
closed
|
2017-08-20T11:52:26Z
|
2017-08-23T17:35:39Z
|
https://github.com/davidsandberg/facenet/issues/429
|
[] |
Zumbalamambo
| 2
|
amdegroot/ssd.pytorch
|
computer-vision
| 56
|
RunTime Error in Training with default values
|
```
python train.py
Loading base network...
Initializing weights...
Loading Dataset...
Training SSD on VOC0712
Traceback (most recent call last):
File "train.py", line 232, in <module>
train()
File "train.py", line 181, in train
out = net(images)
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 70, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
raise output
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 42, in _worker
output = module(*input, **kwargs)
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/data/gpu/utkrsh/code/ssd.pytorch/ssd.py", line 76, in forward
s = self.L2Norm(x)
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "/data/gpu/utkrsh/code/ssd.pytorch/layers/modules/l2norm.py", line 21, in forward
x/=norm.expand_as(x)
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/variable.py", line 725, in expand_as
return Expand.apply(self, (tensor.size(),))
File "/users/gpu/utkrsh/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py", line 111, in forward
result = i.expand(*new_size)
RuntimeError: The expanded size of the tensor (512) must match the existing size (8) at non-singleton dimension 1. at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/THC/generic/THCTensor.c:323
```
I am getting the above stack trace after running train.py for default values. The dataset and weights were downloaded in the default location.
I am using python 3.6 and pytorch 0.2.0
I do understand the meaning of the error, I am just not able to find the source. Can anyone point in the right direction?
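For context, a minimal sketch of the usual shape-mismatch culprit behind this kind of error (illustrative only, not necessarily this repo's exact L2Norm code): the channel-wise norm needs to keep its singleton dimension so it can expand back over the input.

```python
import torch

def l2_normalize(x: torch.Tensor, eps: float = 1e-10) -> torch.Tensor:
    # x has shape (N, C, H, W); summing over dim=1 with keepdim=True keeps the
    # norm at (N, 1, H, W), so it can be expanded/broadcast back onto x.
    norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + eps
    return x / norm

x = torch.randn(8, 512, 38, 38)
print(l2_normalize(x).shape)  # torch.Size([8, 512, 38, 38])
```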
|
closed
|
2017-08-18T12:59:08Z
|
2019-05-21T06:39:46Z
|
https://github.com/amdegroot/ssd.pytorch/issues/56
|
[] |
chauhan-utk
| 13
|
graphql-python/graphene-sqlalchemy
|
graphql
| 133
|
Nested inputs in mutations
|
I'm failing to convert nested inputs into sqlalchemy models in mutations. Here's my example:
Let's say I want to create a quiz. For that I have the following code:
```python
'''
GraphQL Models
'''
class Quiz(SQLAlchemyObjectType):
    '''
    GraphQL representation of a Quiz
    '''
    class Meta:
        model = QuizModel


class Question(SQLAlchemyObjectType):
    '''
    GraphQL representation of a Question
    '''
    class Meta:
        model = QuestionModel


'''
Inputs
'''
class QuestionInput(graphene.InputObjectType):
    text = graphene.String()
    media = graphene.String()


class QuizInput(graphene.InputObjectType):
    name = graphene.String()
    creator_id = graphene.Int()
    description = graphene.String()
    media = graphene.String()
    id = graphene.Int()
    debugging_questions = graphene.InputField(graphene.List(QuestionInput))
    questions = graphene.InputField(graphene.List(QuestionInput))


'''
Mutation
'''
class CreateQuiz(graphene.Mutation):
    class Arguments:
        quiz = QuizInput(required=True)

    quiz = graphene.Field(lambda: Quiz)

    def mutate(self, info, **kwargs):
        quiz_attributes = dict(kwargs['quiz'])
        if quiz_attributes.get('id'):
            quiz = db_session.query(QuizModel).filter((QuizModel.id == quiz_attributes.get('id'))).one()
            quiz_attributes.pop('id')
            for key, value in quiz_attributes.items():
                setattr(quiz, key, value)
        else:
            quiz = QuizModel(**quiz_attributes)
        db_session.add(quiz)
        db_session.commit()
        return CreateQuiz(quiz=quiz)
```
My GraphQL query is the following:
```
mutation creatingQuiz($quiz: QuizInput!) {
createQuiz(quiz: $quiz) {
quiz {
id,
name,
description,
media,
creatorId,
questions {
id,
text,
media,
}
}
}
}
```
Note the relation between the global variables and the response:
Example A - returns a response, obviously doesn't add any quizzes because it uses `debuggingQuestions`.
```
Global variables:
{
"quiz": {
"id": 136,
"name": "fake name",
"description": "simple desc",
"creatorId": 1,
"media": "img.jpg",
"debuggingQuestions": [{
"media": "media",
"text": "text"
}]
}
}
Response:
{
"data": {
"createQuiz": {
"quiz": {
"id": "136",
"name": "fake name",
"description": "simple desc",
"media": "img.jpg",
"creatorId": 1,
"questions": []
}
}
}
}
```
Now if I try and pass question data in the `questions` field instead of `debuggingQuestions`:
```
Global variables:
{
"quiz": {
"id": 136,
"name": "fake name",
"description": "simple desc",
"creatorId": 1,
"media": "img.jpg",
"questions": [{
"media": "media",
"text": "text"
}]
}
}
Response:
{
"errors": [{
"message": "unhashable type: 'QuestionInput'",
"locations": [{
"line": 2,
"column": 3
}]
}],
"data": {
"createQuiz": null
}
}
```
What step am I missing so that `QuestionInput` is automatically converted into a Question sqlalchemy model?
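For illustration, one hedged workaround (a sketch using the names from the snippet above, not a confirmed graphene-sqlalchemy feature) is to convert the nested inputs explicitly inside `mutate` instead of relying on automatic conversion:

```python
# Hypothetical explicit conversion inside mutate(); QuizModel, QuestionModel
# and db_session are the names from the code above.
def mutate(self, info, **kwargs):
    quiz_attributes = dict(kwargs['quiz'])
    questions_data = quiz_attributes.pop('questions', None) or []
    quiz_attributes.pop('debugging_questions', None)
    quiz = QuizModel(**quiz_attributes)
    # Build QuestionModel rows from the QuestionInput payloads by hand.
    quiz.questions = [QuestionModel(**dict(q)) for q in questions_data]
    db_session.add(quiz)
    db_session.commit()
    return CreateQuiz(quiz=quiz)
```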
|
open
|
2018-05-22T19:21:42Z
|
2018-05-22T19:49:08Z
|
https://github.com/graphql-python/graphene-sqlalchemy/issues/133
|
[] |
NathanBWaters
| 2
|
ultralytics/ultralytics
|
deep-learning
| 19,574
|
Use ray to tune
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
Hi guys, I need your help with an issue I'm facing when using Ray to tune my YOLO model.
When using Ray, some processes run normally while others fail.
The error I'm encountering is:
```
Failure # 1 (occurred at 2025-03-08_15-26-26)
[36mray::ImplicitFunc.train()[39m (pid=499864, ip=192.168.5.3, actor_id=2c6a9084244fbf1b3f754eb001000000, repr=_tune)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/tune/trainable/trainable.py", line 330, in train
raise skipped from exception_cause(skipped)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/air/_internal/util.py", line 107, in run
self._ret = self._target(*self._args, **self._kwargs)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 45, in <lambda>
training_func=lambda: self._trainable_func(self.config),
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 261, in _trainable_func
output = fn()
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/utils/tuner.py", line 106, in _tune
results = model_to_train.train(**config)
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/engine/model.py", line 810, in train
self.trainer.train()
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/engine/trainer.py", line 203, in train
raise e
File "/home/aiwork/csn/Projects/ultralytics/ultralytics/engine/trainer.py", line 201, in train
subprocess.run(cmd, check=True)
File "/home/aiwork/anaconda3/envs/py39/lib/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/home/aiwork/anaconda3/envs/py39/bin/python', '-m', 'torch.distributed.run', '--nproc_per_node', '2', '--master_port', '43925', '/home/aiwork/.config/Ultralytics/DDP/_temp_aqpask7_133441279378960.py']' returned non-zero exit status 1.
```
### Environment
My server configuration is as follows:
- System: Ubuntu 20.04
- CPU: 80 cores
- GPU: 2 x NVIDIA 3090
- Python: 3.9
- I'm using the latest versions of Ultralytics and Ray.
### Minimal Reproducible Example
Here's my code:
```python
# test_model_tune.py
import warnings
warnings.filterwarnings('ignore')

from ultralytics import YOLO
import ray
import os

if __name__ == '__main__':
    # Initialize Ray
    weights_path = os.path.abspath('./weights/yolo11n.pt')
    model = YOLO(weights_path)  # Need to modify
    print(f"model.ckpt_path:{model.ckpt_path}")
    ray.init(num_cpus=20, num_gpus=2)  # Adjust according to your hardware configuration
    result_grid = model.tune(
        data=r'./custom_configs/dateset/image_split.yaml',  # Need to modify
        imgsz=2560,
        epochs=10,
        batch=8,
        device='0,1',
        optimizer='SGD',
        project='runs/tune',
        iterations=10,
        name='exp',
        use_ray=True
    )
    for i, result in enumerate(result_grid):
        print(f"Trial #{i}: Configuration: {result.config}, Last Reported Metrics: {result.metrics}")
    # Shutdown Ray
    ray.shutdown()
```
```python
# tuner.py
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license
from ultralytics.cfg import TASK2DATA, TASK2METRIC, get_cfg, get_save_dir
from ultralytics.utils import DEFAULT_CFG, DEFAULT_CFG_DICT, LOGGER, NUM_THREADS, checks


def run_ray_tune(
    model,
    space: dict = None,
    grace_period: int = 10,
    gpu_per_trial: int = None,
    max_samples: int = 10,
    **train_args,
):
    """
    Runs hyperparameter tuning using Ray Tune.

    Args:
        model (YOLO): Model to run the tuner on.
        space (dict, optional): The hyperparameter search space. Defaults to None.
        grace_period (int, optional): The grace period in epochs of the ASHA scheduler. Defaults to 10.
        gpu_per_trial (int, optional): The number of GPUs to allocate per trial. Defaults to None.
        max_samples (int, optional): The maximum number of trials to run. Defaults to 10.
        train_args (dict, optional): Additional arguments to pass to the `train()` method. Defaults to {}.

    Returns:
        (dict): A dictionary containing the results of the hyperparameter search.

    Example:
        ```python
        from ultralytics import YOLO

        # Load a YOLO11n model
        model = YOLO("yolo11n.pt")

        # Start tuning hyperparameters for YOLO11n training on the COCO8 dataset
        result_grid = model.tune(data="coco8.yaml", use_ray=True)
        ```
    """
    LOGGER.info("💡 Learn about RayTune at https://docs.ultralytics.com/integrations/ray-tune ")
    if train_args is None:
        train_args = {}

    try:
        checks.check_requirements("ray[tune]")

        import ray
        from ray import tune
        from ray.air import RunConfig
        from ray.air.integrations.wandb import WandbLoggerCallback
        from ray.tune.schedulers import ASHAScheduler
    except ImportError:
        raise ModuleNotFoundError('Ray Tune required but not found. To install run: pip install "ray[tune]"')

    try:
        import wandb

        assert hasattr(wandb, "__version__")
    except (ImportError, AssertionError):
        wandb = False

    checks.check_version(ray.__version__, ">=2.0.0", "ray")
    default_space = {
        # 'optimizer': tune.choice(['SGD', 'Adam', 'AdamW', 'NAdam', 'RAdam', 'RMSProp']),
        "lr0": tune.uniform(1e-5, 1e-1),
        "lrf": tune.uniform(0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
        "momentum": tune.uniform(0.6, 0.98),  # SGD momentum/Adam beta1
        "weight_decay": tune.uniform(0.0, 0.001),  # optimizer weight decay 5e-4
        "warmup_epochs": tune.uniform(0.0, 5.0),  # warmup epochs (fractions ok)
        "warmup_momentum": tune.uniform(0.0, 0.95),  # warmup initial momentum
        "box": tune.uniform(0.02, 0.2),  # box loss gain
        "cls": tune.uniform(0.2, 4.0),  # cls loss gain (scale with pixels)
        "hsv_h": tune.uniform(0.0, 0.1),  # image HSV-Hue augmentation (fraction)
        "hsv_s": tune.uniform(0.0, 0.9),  # image HSV-Saturation augmentation (fraction)
        "hsv_v": tune.uniform(0.0, 0.9),  # image HSV-Value augmentation (fraction)
        "degrees": tune.uniform(0.0, 45.0),  # image rotation (+/- deg)
        "translate": tune.uniform(0.0, 0.9),  # image translation (+/- fraction)
        "scale": tune.uniform(0.0, 0.9),  # image scale (+/- gain)
        "shear": tune.uniform(0.0, 10.0),  # image shear (+/- deg)
        "perspective": tune.uniform(0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001
        "flipud": tune.uniform(0.0, 1.0),  # image flip up-down (probability)
        "fliplr": tune.uniform(0.0, 1.0),  # image flip left-right (probability)
        "bgr": tune.uniform(0.0, 1.0),  # image channel BGR (probability)
        "mosaic": tune.uniform(0.0, 1.0),  # image mixup (probability)
        "mixup": tune.uniform(0.0, 1.0),  # image mixup (probability)
        "copy_paste": tune.uniform(0.0, 1.0),  # segment copy-paste (probability)
    }

    # Put the model in ray store
    task = model.task
    model_in_store = ray.put(model)

    def _tune(config):
        """
        Trains the YOLO model with the specified hyperparameters and additional arguments.

        Args:
            config (dict): A dictionary of hyperparameters to use for training.

        Returns:
            None
        """
        model_to_train = ray.get(model_in_store)  # get the model from ray store for tuning
        model_to_train.reset_callbacks()
        config.update(train_args)
        results = model_to_train.train(**config)
        if results is not None:
            print(results)
            return results.results_dict
        else:
            print("_tune::results is None")
            return None

    # Get search space
    if not space:
        space = default_space
        LOGGER.warning("WARNING ⚠️ search space not provided, using default search space.")

    # Get dataset
    data = train_args.get("data", TASK2DATA[task])
    space["data"] = data
    if "data" not in train_args:
        LOGGER.warning(f'WARNING ⚠️ data not provided, using default "data={data}".')

    # modified by chenshining
    # Define the trainable function with allocated resources
    # trainable_with_resources = tune.with_resources(_tune, {"cpu": NUM_THREADS, "gpu": gpu_per_trial or 0})
    trainable_with_resources = tune.with_resources(_tune, {"cpu": 4, "gpu": gpu_per_trial or 1})

    # Define the ASHA scheduler for hyperparameter search
    asha_scheduler = ASHAScheduler(
        time_attr="epoch",
        metric=TASK2METRIC[task],
        mode="max",
        max_t=train_args.get("epochs") or DEFAULT_CFG_DICT["epochs"] or 100,
        grace_period=grace_period,
        reduction_factor=3,
    )

    # Define the callbacks for the hyperparameter search
    tuner_callbacks = [WandbLoggerCallback(project="YOLOv8-tune")] if wandb else []

    # Create the Ray Tune hyperparameter search tuner
    tune_dir = get_save_dir(
        get_cfg(DEFAULT_CFG, train_args), name=train_args.pop("name", "tune")
    ).resolve()  # must be absolute dir
    tune_dir.mkdir(parents=True, exist_ok=True)

    # modified by chenshining
    tuner = tune.Tuner(
        trainable_with_resources,
        param_space=space,
        tune_config=tune.TuneConfig(scheduler=asha_scheduler, num_samples=max_samples, max_concurrent_trials=4),
        run_config=RunConfig(name="memory_optimized_tune", callbacks=tuner_callbacks, storage_path=tune_dir),
    )

    # Run the hyperparameter search
    tuner.fit()

    # Get the results of the hyperparameter search
    results = tuner.get_results()

    # Shut down Ray to clean up workers
    ray.shutdown()
    return results
```
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR!
|
closed
|
2025-03-08T08:28:18Z
|
2025-03-24T09:03:21Z
|
https://github.com/ultralytics/ultralytics/issues/19574
|
[
"bug",
"enhancement"
] |
csn223355
| 12
|
saulpw/visidata
|
pandas
| 1,462
|
Copy multiple columns across different files
|
Is there a way for me to copy multiple consecutive columns from one file and paste/insert them into another file? Thanks!
|
closed
|
2022-08-09T23:22:38Z
|
2022-08-09T23:29:03Z
|
https://github.com/saulpw/visidata/issues/1462
|
[
"wishlist"
] |
jingxixu
| 1
|
tqdm/tqdm
|
pandas
| 1,283
|
`ipywidgets` variant broken
|
```python
import sys, time, tqdm
for j in tqdm.trange(100, file=sys.stdout, leave=False, unit_scale=True, desc='loop'):
time.sleep(1)
```
works, but
```python
for j in tqdm.auto.tqdm(range(100), file=sys.stdout, leave=False, unit_scale=True, desc='loop'):
time.sleep(1)
```
shows a frozen progress bar and no percent update:
```
loop: 0%| | 0.00/100 [00:00<?, ?it/s]
```
<details><summary><b>conda list</b></summary>
```
# packages in environment at D:\Anaconda\envs\pyt:
#
# Name Version Build Channel
absl-py 0.15.0 pyhd8ed1ab_0 conda-forge
aiohttp 3.7.4.post0 py38h294d835_1 conda-forge
alabaster 0.7.12 py_0 conda-forge
anyio 3.3.3 py38haa244fe_0 conda-forge
appdirs 1.4.4 pyh9f0ad1d_0 conda-forge
argh 0.26.2 pyh9f0ad1d_1002 conda-forge
argon2-cffi 21.1.0 py38h294d835_0 conda-forge
arrow 1.2.0 pyhd8ed1ab_0 conda-forge
astroid 2.5.8 py38haa244fe_0 conda-forge
async-timeout 3.0.1 py_1000 conda-forge
async_generator 1.10 py_0 conda-forge
atomicwrites 1.4.0 pyh9f0ad1d_0 conda-forge
attrs 21.2.0 pyhd8ed1ab_0 conda-forge
audioread 2.1.9 py38haa244fe_0 conda-forge
autopep8 1.6.0 pyhd8ed1ab_1 conda-forge
babel 2.9.1 pyh44b312d_0 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 py_2 conda-forge
backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge
bcrypt 3.2.0 py38h294d835_1 conda-forge
binaryornot 0.4.4 py_1 conda-forge
black 21.9b0 pyhd8ed1ab_0 conda-forge
blas 1.0 mkl
bleach 4.1.0 pyhd8ed1ab_0 conda-forge
blinker 1.4 py_1 conda-forge
brotli-python 1.0.9 py38h885f38d_5 conda-forge
brotlipy 0.7.0 py38h294d835_1001 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
ca-certificates 2021.10.26 haa95532_2
cached-property 1.5.2 hd8ed1ab_1 conda-forge
cached_property 1.5.2 pyha770c72_1 conda-forge
cachetools 4.2.4 pyhd8ed1ab_0 conda-forge
certifi 2021.10.8 py38haa244fe_1 conda-forge
cffi 1.14.6 py38hd8c33c5_1 conda-forge
chardet 4.0.0 py38haa244fe_1 conda-forge
charset-normalizer 2.0.0 pyhd8ed1ab_0 conda-forge
click 7.1.2 pyh9f0ad1d_0 conda-forge
cloudpickle 2.0.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.4 pyh9f0ad1d_0 conda-forge
conda 4.11.0 py38haa244fe_0 conda-forge
conda-package-handling 1.7.3 py38h31c79cd_1 conda-forge
configparser 5.1.0 pyhd8ed1ab_0 conda-forge
cookiecutter 1.6.0 py38_1000 conda-forge
cryptography 3.4.7 py38hd7da0ea_0 conda-forge
cudatoolkit 11.3.1 h59b6b97_2
cupy 9.5.0 py38hf95616d_1 conda-forge
cycler 0.10.0 py_2 conda-forge
cython 0.29.24 py38h885f38d_0 conda-forge
dash 2.0.0 pyhd8ed1ab_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
debugpy 1.4.1 py38h885f38d_0 conda-forge
decorator 5.1.0 pyhd8ed1ab_0 conda-forge
defusedxml 0.7.1 pyhd8ed1ab_0 conda-forge
diff-match-patch 20200713 pyh9f0ad1d_0 conda-forge
docker-pycreds 0.4.0 py_0 conda-forge
docutils 0.17.1 py38haa244fe_0 conda-forge
entrypoints 0.3 pyhd8ed1ab_1003 conda-forge
fastrlock 0.8 py38h885f38d_1 conda-forge
fftw 3.3.10 nompi_hea9a5d6_101 conda-forge
flake8 4.0.1 pyhd8ed1ab_1 conda-forge
flask 2.0.2 pyhd8ed1ab_0 conda-forge
flask-compress 1.10.1 pyhd8ed1ab_0 conda-forge
freetype 2.10.4 h546665d_1 conda-forge
fsspec 2021.10.1 pyhd8ed1ab_0 conda-forge
future 0.18.2 py38haa244fe_3 conda-forge
gitdb 4.0.9 pyhd8ed1ab_0 conda-forge
gitpython 3.1.24 pyhd8ed1ab_0 conda-forge
google-auth 1.35.0 pyh6c4a22f_0 conda-forge
google-auth-oauthlib 0.4.6 pyhd8ed1ab_0 conda-forge
grpcio 1.41.1 py38he5377a8_1 conda-forge
h5py 3.6.0 nompi_py38hde0384b_100 conda-forge
hdf5 1.12.1 nompi_h2a0e4a3_103 conda-forge
icu 68.1 h0e60522_0 conda-forge
idna 3.1 pyhd3deb0d_0 conda-forge
imagesize 1.2.0 py_0 conda-forge
importlib-metadata 4.2.0 py38haa244fe_0 conda-forge
importlib_metadata 4.2.0 hd8ed1ab_0 conda-forge
inflection 0.5.1 pyh9f0ad1d_0 conda-forge
iniconfig 1.1.1 pyh9f0ad1d_0 conda-forge
intel-openmp 2021.3.0 h57928b3_3372 conda-forge
intervaltree 3.0.2 py_0 conda-forge
ipykernel 6.4.1 py38h595d716_0 conda-forge
ipython 7.28.0 py38h595d716_0 conda-forge
ipython_genutils 0.2.0 py_1 conda-forge
ipywidgets 7.6.5 pyhd8ed1ab_0 conda-forge
isort 5.9.3 pyhd8ed1ab_0 conda-forge
itsdangerous 2.0.1 pyhd8ed1ab_0 conda-forge
jbig 2.1 h8d14728_2003 conda-forge
jedi 0.18.0 py38haa244fe_2 conda-forge
jellyfish 0.8.9 py38h294d835_2 conda-forge
jinja2 3.0.2 pyhd8ed1ab_0 conda-forge
jinja2-time 0.2.0 py_2 conda-forge
joblib 1.1.0 pyhd8ed1ab_0 conda-forge
jpeg 9d h8ffe710_0 conda-forge
json5 0.9.6 pyhd3eb1b0_0
jsonschema 4.1.0 pyhd8ed1ab_0 conda-forge
jupyter-console 6.4.0 pypi_0 pypi
jupyter_client 6.1.12 pyhd8ed1ab_0 conda-forge
jupyter_core 4.8.1 py38haa244fe_0 conda-forge
jupyter_server 1.11.1 pyhd8ed1ab_0 conda-forge
jupyterlab 3.2.1 pyhd8ed1ab_0 conda-forge
jupyterlab-server 1.2.0 pypi_0 pypi
jupyterlab_pygments 0.1.2 pyh9f0ad1d_0 conda-forge
jupyterlab_server 2.8.2 pyhd8ed1ab_0 conda-forge
jupyterlab_widgets 1.0.2 pyhd8ed1ab_0 conda-forge
keyring 23.2.1 py38haa244fe_0 conda-forge
kiwisolver 1.3.2 py38hbd9d945_0 conda-forge
krb5 1.19.2 h20d022d_3 conda-forge
lazy-object-proxy 1.6.0 py38h294d835_0 conda-forge
lcms2 2.12 h2a16943_0 conda-forge
lerc 2.2.1 h0e60522_0 conda-forge
libarchive 3.5.2 hb45042f_1 conda-forge
libblas 3.9.0 11_win64_mkl conda-forge
libcblas 3.9.0 11_win64_mkl conda-forge
libclang 11.1.0 default_h5c34c98_1 conda-forge
libcurl 7.80.0 h789b8ee_1 conda-forge
libdeflate 1.7 h8ffe710_5 conda-forge
libflac 1.3.3 h0e60522_1 conda-forge
libiconv 1.16 he774522_0 conda-forge
liblapack 3.9.0 11_win64_mkl conda-forge
libmamba 0.19.1 h44daa3b_0 conda-forge
libmambapy 0.19.1 py38h2bfd5b9_0 conda-forge
libogg 1.3.5 h2bbff1b_1
libopus 1.3.1 h8ffe710_1 conda-forge
libpng 1.6.37 h1d00b33_2 conda-forge
libprotobuf 3.19.1 h7755175_0 conda-forge
librosa 0.8.1 pyhd8ed1ab_0 conda-forge
libsndfile 1.0.31 h0e60522_1 conda-forge
libsodium 1.0.18 h8d14728_1 conda-forge
libsolv 0.7.19 h7755175_5 conda-forge
libspatialindex 1.9.3 h39d44d4_4 conda-forge
libssh2 1.10.0 h680486a_2 conda-forge
libtiff 4.3.0 h0c97f57_1 conda-forge
libuv 1.40.0 he774522_0
libvorbis 1.3.7 ha925a31_0 conda-forge
libxml2 2.9.12 hf5bbc77_1 conda-forge
libzlib 1.2.11 h8ffe710_1013 conda-forge
llvmlite 0.36.0 py38h57a6900_0 conda-forge
lz4-c 1.9.3 h8ffe710_1 conda-forge
lzo 2.10 hfa6e2cd_1000 conda-forge
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
mamba 0.19.1 py38hecfeebb_0 conda-forge
markdown 3.3.4 pyhd8ed1ab_0 conda-forge
markupsafe 2.0.1 py38h294d835_0 conda-forge
matplotlib 3.4.3 py38haa244fe_1 conda-forge
matplotlib-base 3.4.3 py38h1f000d6_1 conda-forge
matplotlib-inline 0.1.3 pyhd8ed1ab_0 conda-forge
mccabe 0.6.1 py_1 conda-forge
menuinst 1.4.18 py38haa244fe_1 conda-forge
mistune 0.8.4 py38h294d835_1004 conda-forge
mkl 2021.3.0 hb70f87d_564 conda-forge
more-itertools 8.10.0 pyhd8ed1ab_0 conda-forge
mpmath 1.2.1 pyhd8ed1ab_0 conda-forge
msys2-conda-epoch 20160418 1 conda-forge
multidict 5.2.0 py38h294d835_1 conda-forge
mypy_extensions 0.4.3 py38haa244fe_3 conda-forge
nbclassic 0.3.2 pyhd8ed1ab_0 conda-forge
nbclient 0.5.4 pyhd8ed1ab_0 conda-forge
nbconvert 5.6.1 pypi_0 pypi
nbformat 5.1.3 pyhd8ed1ab_0 conda-forge
nest-asyncio 1.5.1 pyhd8ed1ab_0 conda-forge
ninja 1.10.2 h6d14046_1
notebook 6.4.4 pyha770c72_0 conda-forge
numba 0.53.0 py38h5c177ec_0 conda-forge
numpy 1.21.2 py38h089cfbf_0 conda-forge
numpydoc 1.1.0 py_1 conda-forge
oauthlib 3.1.1 pyhd8ed1ab_0 conda-forge
olefile 0.46 pyh9f0ad1d_1 conda-forge
openjpeg 2.4.0 hb211442_1 conda-forge
openssl 1.1.1l h8ffe710_0 conda-forge
packaging 21.0 pyhd8ed1ab_0 conda-forge
pandas 1.3.3 py38h5d928e2_0 conda-forge
pandoc 2.14.2 h8ffe710_0 conda-forge
pandocfilters 1.5.0 pyhd8ed1ab_0 conda-forge
paramiko 2.7.2 pyh9f0ad1d_0 conda-forge
parso 0.8.2 pyhd8ed1ab_0 conda-forge
pathspec 0.9.0 pyhd8ed1ab_0 conda-forge
pathtools 0.1.2 py_1 conda-forge
pdfkit 0.6.1 pypi_0 pypi
pexpect 4.8.0 pyh9f0ad1d_2 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 8.3.2 py38h794f750_0 conda-forge
pip 21.2.4 pyhd8ed1ab_0 conda-forge
platformdirs 2.3.0 pyhd8ed1ab_0 conda-forge
plotly 5.3.1 py_0 plotly
pluggy 1.0.0 py38haa244fe_1 conda-forge
pooch 1.5.2 pyhd8ed1ab_0 conda-forge
poyo 0.5.0 py_0 conda-forge
prometheus_client 0.11.0 pyhd8ed1ab_0 conda-forge
promise 2.3 py38haa244fe_5 conda-forge
prompt-toolkit 3.0.20 pyha770c72_0 conda-forge
protobuf 3.19.1 py38h885f38d_1 conda-forge
psutil 5.8.0 py38h294d835_1 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
py 1.10.0 pyhd3deb0d_0 conda-forge
py-lz4framed 0.14.0 pypi_0 pypi
pyasn1 0.4.8 py_0 conda-forge
pyasn1-modules 0.2.8 py_0
pybind11-abi 4 hd8ed1ab_3 conda-forge
pycodestyle 2.8.0 pyhd8ed1ab_0 conda-forge
pycosat 0.6.3 py38h294d835_1009 conda-forge
pycparser 2.20 pyh9f0ad1d_2 conda-forge
pydeprecate 0.3.1 pyhd8ed1ab_0 conda-forge
pydocstyle 6.1.1 pyhd8ed1ab_0 conda-forge
pyfftw 0.12.0 py38h46b76f8_3 conda-forge
pyflakes 2.4.0 pyhd8ed1ab_0 conda-forge
pygments 2.10.0 pyhd8ed1ab_0 conda-forge
pyjwt 2.3.0 pyhd8ed1ab_0 conda-forge
pylint 2.7.2 py38haa244fe_0 conda-forge
pyls-spyder 0.4.0 pyhd8ed1ab_0 conda-forge
pynacl 1.4.0 py38h31c79cd_2 conda-forge
pyopenssl 21.0.0 pyhd8ed1ab_0 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pypiwin32 223 pypi_0 pypi
pyqt 5.12.3 py38haa244fe_7 conda-forge
pyqt-impl 5.12.3 py38h885f38d_7 conda-forge
pyqt5-sip 4.19.18 py38h885f38d_7 conda-forge
pyqtchart 5.12 py38h885f38d_7 conda-forge
pyqtwebengine 5.12.1 py38h885f38d_7 conda-forge
pyrsistent 0.17.3 py38h294d835_2 conda-forge
pysocks 1.7.1 py38haa244fe_3 conda-forge
pysoundfile 0.10.3.post1 pyhd3deb0d_0 conda-forge
pytest 6.2.5 py38haa244fe_0 conda-forge
python 3.8.12 h7840368_1_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-lsp-black 1.0.0 pyhd8ed1ab_0 conda-forge
python-lsp-jsonrpc 1.0.0 pyhd8ed1ab_0 conda-forge
python-lsp-server 1.3.3 pyhd8ed1ab_0 conda-forge
python_abi 3.8 2_cp38 conda-forge
pytorch 1.10.0 py3.8_cuda11.3_cudnn8_0 pytorch
pytorch-lightning 1.5.6 pyhd8ed1ab_0 conda-forge
pytorch-mutex 1.0 cuda pytorch
pytz 2021.3 pyhd8ed1ab_0 conda-forge
pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge
pywin32 301 py38h294d835_0 conda-forge
pywin32-ctypes 0.2.0 py38haa244fe_1003 conda-forge
pywinpty 1.1.4 py38hd3f51b4_0 conda-forge
pyyaml 5.4.1 py38h294d835_1 conda-forge
pyzmq 22.3.0 py38h09162b1_0 conda-forge
qdarkstyle 3.0.2 pyhd8ed1ab_0 conda-forge
qstylizer 0.2.1 pyhd8ed1ab_0 conda-forge
qt 5.12.9 h5909a2a_4 conda-forge
qtawesome 1.0.3 pyhd8ed1ab_0 conda-forge
qtconsole 5.2.2 pyhd8ed1ab_0 conda-forge
qtpy 1.11.2 pyhd8ed1ab_0 conda-forge
regex 2021.10.8 py38h294d835_0 conda-forge
reproc 14.2.3 h8ffe710_0 conda-forge
reproc-cpp 14.2.3 h0e60522_0 conda-forge
requests 2.26.0 pyhd8ed1ab_0 conda-forge
requests-oauthlib 1.3.0 pyh9f0ad1d_0 conda-forge
requests-unixsocket 0.2.0 py_0 conda-forge
resampy 0.2.2 py_0 conda-forge
rope 0.20.1 pyhd8ed1ab_0 conda-forge
rsa 4.7.2 pyh44b312d_0 conda-forge
rtree 0.9.7 py38h8b54edf_2 conda-forge
ruamel_yaml 0.15.100 py38h2bbff1b_0
scikit-learn 1.0 py38h8224a6f_1 conda-forge
scipy 1.7.1 py38ha1292f7_0 conda-forge
send2trash 1.8.0 pyhd8ed1ab_0 conda-forge
sentry-sdk 1.5.0 pyhd8ed1ab_0 conda-forge
setuptools 58.2.0 py38haa244fe_0 conda-forge
shortuuid 1.0.8 py38haa244fe_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
smmap 3.0.5 pyh44b312d_0 conda-forge
sniffio 1.2.0 py38haa244fe_1 conda-forge
snowballstemmer 2.1.0 pyhd8ed1ab_0 conda-forge
sortedcontainers 2.4.0 pyhd8ed1ab_0 conda-forge
sounddevice 0.4.3 pypi_0 pypi
sphinx 4.2.0 pyh6c4a22f_0 conda-forge
sphinxcontrib-applehelp 1.0.2 py_0 conda-forge
sphinxcontrib-devhelp 1.0.2 py_0 conda-forge
sphinxcontrib-htmlhelp 2.0.0 pyhd8ed1ab_0 conda-forge
sphinxcontrib-jsmath 1.0.1 py_0 conda-forge
sphinxcontrib-qthelp 1.0.3 py_0 conda-forge
sphinxcontrib-serializinghtml 1.1.5 pyhd8ed1ab_0 conda-forge
spyder 5.2.1 py38haa244fe_0 conda-forge
spyder-kernels 2.2.0 py38haa244fe_0 conda-forge
sqlite 3.36.0 h8ffe710_2 conda-forge
subprocess32 3.5.4 py_1 conda-forge
sympy 1.9 py38haa244fe_0 conda-forge
tbb 2021.3.0 h2d74725_0 conda-forge
tenacity 8.0.1 py38haa95532_0
tensorboard 2.6.0 pyhd8ed1ab_1 conda-forge
tensorboard-data-server 0.6.0 py38haa244fe_1 conda-forge
tensorboard-plugin-wit 1.8.0 pyh44b312d_0 conda-forge
termcolor 1.1.0 py_2 conda-forge
terminado 0.12.1 py38haa244fe_0 conda-forge
testpath 0.5.0 pyhd8ed1ab_0 conda-forge
textdistance 4.2.1 pyhd8ed1ab_0 conda-forge
threadpoolctl 3.0.0 pyh8a188c0_0 conda-forge
three-merge 0.1.1 pyh9f0ad1d_0 conda-forge
tinycss2 1.1.0 pyhd8ed1ab_0 conda-forge
tk 8.6.11 h8ffe710_1 conda-forge
toml 0.10.2 pyhd8ed1ab_0 conda-forge
tomli 1.2.1 pyhd8ed1ab_0 conda-forge
torchinfo 1.5.4 pyhd8ed1ab_0 conda-forge
torchmetrics 0.6.0 pyhd8ed1ab_0 conda-forge
torchsummary 1.5.1 pypi_0 pypi
torchvision 0.11.1 py38_cu113 pytorch
tornado 6.1 py38h294d835_1 conda-forge
tqdm 4.62.3 pyhd8ed1ab_0 conda-forge
traitlets 4.3.3 pypi_0 pypi
typed-ast 1.4.3 py38h294d835_0 conda-forge
typing-extensions 3.10.0.2 hd8ed1ab_0 conda-forge
typing_extensions 3.10.0.2 pyha770c72_0 conda-forge
ucrt 10.0.20348.0 h57928b3_0 conda-forge
ujson 4.2.0 py38h885f38d_0 conda-forge
urllib3 1.26.7 pyhd8ed1ab_0 conda-forge
vc 14.2 hb210afc_5 conda-forge
vs2015_runtime 14.29.30037 h902a5da_5 conda-forge
wandb 0.12.9 pyhd8ed1ab_0 conda-forge
watchdog 2.1.6 py38haa244fe_0 conda-forge
wcwidth 0.2.5 pyh9f0ad1d_2 conda-forge
webencodings 0.5.1 py_1 conda-forge
websocket-client 0.58.0 py38haa95532_4
werkzeug 2.0.1 pyhd8ed1ab_0 conda-forge
wheel 0.37.0 pyhd8ed1ab_1 conda-forge
whichcraft 0.6.1 py_0 conda-forge
widgetsnbextension 3.5.2 py38haa244fe_0 conda-forge
win10toast 0.9 pypi_0 pypi
win_inet_pton 1.1.0 py38haa244fe_2 conda-forge
winpty 0.4.3 4 conda-forge
wrapt 1.12.1 py38h294d835_3 conda-forge
xz 5.2.5 h62dcd97_1 conda-forge
yaml 0.2.5 he774522_0 conda-forge
yaml-cpp 0.6.3 ha925a31_4 conda-forge
yapf 0.31.0 pyhd8ed1ab_0 conda-forge
yarl 1.7.2 py38h294d835_1 conda-forge
yaspin 2.1.0 pyhd8ed1ab_0 conda-forge
zeromq 4.3.4 h0e60522_1 conda-forge
zipp 3.6.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.11 h8ffe710_1013 conda-forge
zstd 1.5.0 h6255e5f_0 conda-forge
```
</details>
<details><summary><b>conda info</b></summary>
```
active environment : pyt
active env location : D:\Anaconda\envs\pyt
shell level : 2
user config file : C:\Users\OverL\.condarc
populated config files : C:\Users\OverL\.condarc
conda version : 4.10.3
conda-build version : 3.18.11
python version : 3.8.3.final.0
virtual packages : __cuda=11.5=0
__win=0=0
__archspec=1=x86_64
base environment : D:\Anaconda (writable)
conda av data dir : D:\Anaconda\etc\conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : D:\Anaconda\pkgs
C:\Users\OverL\.conda\pkgs
C:\Users\OverL\AppData\Local\conda\conda\pkgs
envs directories : D:\Anaconda\envs
C:\Users\OverL\.conda\envs
C:\Users\OverL\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.10.3 requests/2.24.0 CPython/3.8.3 Windows/10 Windows/10.0.19041
administrator : False
netrc file : C:\Users\OverL/.netrc
offline mode : False
```
</details>
Discovered in [PL](https://github.com/PyTorchLightning/pytorch-lightning/issues/11208)
|
open
|
2021-12-22T22:20:23Z
|
2022-09-14T15:03:40Z
|
https://github.com/tqdm/tqdm/issues/1283
|
[
"invalid ⛔",
"need-feedback 📢",
"p2-bug-warning ⚠",
"submodule-notebook 📓"
] |
OverLordGoldDragon
| 5
|
databricks/koalas
|
pandas
| 1,305
|
Index.to_series() does not work properly
|
When converting an Index to a Series for a comparison operation like the one below,
there is a problem.
```python
>>> pidx = pd.Index([1, 2, 3, 4, 5])
>>> kidx1 = ks.Index([1, 2, 3, 4, 5])
>>> kidx2 = ks.Index(pidx)
>>> kidx3 = ks.from_pandas(pidx)
>>> kidx1.to_series() == kidx2.to_series() == kidx3.to_series()
Traceback (most recent call last):
...
AssertionError: (1, 0)
```
The existing implementation seems to be able to convert an index to a series properly only when the index came from a DataFrame, like below.
```python
>>> df = ks.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)],
... columns=['dogs', 'cats'],
... index=list('abcd'))
>>> df['dogs'].index.to_series() == df['cats'].index.to_series()
a True
b True
c True
d True
Name: 0, dtype: bool
```
|
closed
|
2020-02-24T17:35:58Z
|
2020-03-02T18:50:31Z
|
https://github.com/databricks/koalas/issues/1305
|
[
"bug"
] |
itholic
| 0
|
django-cms/django-cms
|
django
| 7,794
|
[DOC] code update
|
Do you also want to take a look at https://github.com/django-cms/django-cms/blob/develop-4/docs/contributing/code.rst? There's still reference to aldryn-boilerplate (a bootstrap3 thing)
|
open
|
2024-01-29T11:57:19Z
|
2025-02-22T18:27:01Z
|
https://github.com/django-cms/django-cms/issues/7794
|
[
"component: documentation"
] |
marksweb
| 1
|
vitalik/django-ninja
|
rest-api
| 593
|
How to define responses for the swagger
|
Hello!
I'm mapping my API responses to schemas so that the corresponding responses and status codes appear in the Swagger documentation.
For endpoints that return a list of keyed objects I have no problem, but for those that return plain text, or a list without keys, how can I define the response?
For example, an endpoint returning the services available at a location, the current response is:
`['Internet', 'Aplicaciones', 'Wifi' .... ]`
Or an endpoint that returns the status of the API, it is currently returning "OK"
Regards!
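As a hedged illustration (a sketch assuming django-ninja's typing-based response declarations; the paths and handler names here are made up), plain Python types can be used in `response` for such cases:

```python
from typing import List

from ninja import NinjaAPI

api = NinjaAPI()

@api.get("/location/services", response={200: List[str]})
def list_services(request):
    # e.g. ['Internet', 'Aplicaciones', 'Wifi', ...]
    return ["Internet", "Aplicaciones", "Wifi"]

@api.get("/status", response={200: str})
def api_status(request):
    return "OK"
```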
|
closed
|
2022-10-17T14:52:38Z
|
2023-01-13T10:03:27Z
|
https://github.com/vitalik/django-ninja/issues/593
|
[] |
JFeldaca
| 1
|
huggingface/datasets
|
machine-learning
| 7,442
|
Flexible Loader
|
### Feature request
Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset?
It can be something as simple as this one:
```python
import os

from datasets import load_dataset, load_from_disk


def load_hf_dataset(path_or_name):
    if os.path.exists(path_or_name):
        return load_from_disk(path_or_name)
    else:
        return load_dataset(path_or_name)
```
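Illustrative usage of the proposed helper (the dataset name and path are placeholders):

```python
ds_remote = load_hf_dataset("imdb")         # falls through to load_dataset
ds_local = load_hf_dataset("./saved/imdb")  # resolved via load_from_disk
```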
### Motivation
This can be done inside the user codebase, too, but in my experience, it becomes repetitive code.
### Your contribution
I can open a pull request.
|
open
|
2025-03-09T16:55:03Z
|
2025-03-17T20:35:07Z
|
https://github.com/huggingface/datasets/issues/7442
|
[
"enhancement"
] |
dipta007
| 2
|
zama-ai/concrete-ml
|
scikit-learn
| 852
|
[Feature Request] Support for threshold decryption
|
## Feature request
Hi. Is there any plan to support Concrete-ML (or Concrete) with threshold decryption?
## Motivation
I came across this paper [https://eprint.iacr.org/2023/815.pdf](url). It seems that there has already been some research done by Zama about threshold decryption on TFHE. It would be good to also have Concrete-ML (and Concrete) support this feature. Thanks!
|
open
|
2024-09-02T09:01:17Z
|
2024-09-02T13:48:49Z
|
https://github.com/zama-ai/concrete-ml/issues/852
|
[] |
gy-cao
| 1
|
pydantic/logfire
|
fastapi
| 907
|
Emit severity text in logRecord?
|
### Question
Hello,
I'm trying to use logfire with an alternative backend, however I am having issues getting the severity level to show up in loki. When looking at the otel endpoint, the following is found in the trace attributes: `logfire.level_num`. However this doesn't translate to anything concrete. Is it possible to add another attribute when emitting the log?
I am also using the loguru integration if that changes things. I can see the number being set here: https://github.com/pydantic/logfire/blob/06b5531896dbae3bfc43e21e733fcdc208312c7a/logfire/integrations/logging.py#L84
Let me know if this is something that should be supported and I'll whip up a PR! Thanks for any help.
|
closed
|
2025-03-04T20:21:32Z
|
2025-03-05T16:11:41Z
|
https://github.com/pydantic/logfire/issues/907
|
[
"Question"
] |
jonas-meyer
| 5
|
aimhubio/aim
|
data-visualization
| 2,659
|
Unable to access Aim instance with a domain without specifying a port number
|
### Describe the bug
When trying to access an Aim instance using a domain, the current implementation expects a port number to be included in the URL. However, in some cases, the port number might not be required, especially when using default ports (e.g., port 80 for HTTP). The current implementation of the _separate_paths() method in the Client class does not handle cases where no port number is provided, causing a ValueError.
### To Reproduce
Here's an example of the problematic code:
```py
aim_run_remote = Run(repo='aim://aim-server.domain.com', experiment='test-remote')
```
The above code raises the following exception:
`ValueError: not enough values to unpack (expected 2, got 1)`
### Expected behavior
Aim should be able to handle cases where no port number is provided in the URL, using a default port or handling it in a more graceful manner.
https://github.com/aimhubio/aim/blob/4a934662e42c4d250dc2d0395fb12e0b302b2604/aim/ext/transport/client.py#L72
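For illustration, a minimal sketch (not Aim's actual implementation; the default port value here is only an assumption) of parsing the remote address with an optional port:

```python
def split_host_port(address: str, default_port: int = 53800) -> tuple:
    # "aim-server.domain.com"       -> ("aim-server.domain.com", 53800)
    # "aim-server.domain.com:9000"  -> ("aim-server.domain.com", 9000)
    host, _, port = address.partition(":")
    return host, int(port) if port else default_port
```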
|
open
|
2023-04-18T03:40:02Z
|
2023-05-02T06:45:21Z
|
https://github.com/aimhubio/aim/issues/2659
|
[] |
ds-sebastian
| 1
|
praw-dev/praw
|
api
| 1,107
|
Allow creating a userflair template with both a CSS class and a background color
|
## Issue Description
When support was added for v2 flairs back in January (#1018), the Reddit API did not support defining new userflair templates that had both CSS classes and background colors defined. There are some checks in the code currently that throw errors if you try to do this (eg. praw/models/reddit/subreddit.py, line 956).
After doing some testing myself, I found that the API does support this now, so this limitation is no longer necessary.
Reading through the other v2 flair code, it looks like there are a lot of similar limitations in other places. I haven't had the time to test those yet, but some of those might also be able to be removed.
This should be a very simple change. I might try to put together a pull request if I can find the time.
## System Information
- PRAW Version: 6.3.2.dev0
- Python Version: 3.5.2
- Operating System: Linux Mint 18.3
|
closed
|
2019-07-15T21:20:55Z
|
2019-07-29T02:08:40Z
|
https://github.com/praw-dev/praw/issues/1107
|
[] |
jenbanim
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 1,274
|
How can I decrease the time taken for cloning?
|
I am using this model on an AWS EC2 instance, but it takes approximately 30 seconds to clone, and I want it to clone faster.
I have tried changing the instance types, but that didn't work.
I have tried g4, g5 and p3 instances, but the time taken was the same in all of them.
|
open
|
2023-11-26T16:06:22Z
|
2023-11-26T16:06:22Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1274
|
[] |
Satyam206
| 0
|
arogozhnikov/einops
|
tensorflow
| 74
|
[Feature suggestion] Add layer 'repeat_as'
|
In PyTorch, we have 'expand_as', which checks dims before expanding.
I'm aware of the 'repeat' layer as a replacement for 'expand', but could you add a 'repeat_as' layer as an equivalent of 'expand_as'?
Thanks.
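In the meantime, a hedged sketch of emulating this with the existing `repeat` (assuming the target differs only by leading axes; the axis names are illustrative):

```python
import torch
from einops import repeat

x = torch.randn(3, 5)          # source tensor
target = torch.randn(8, 3, 5)  # tensor whose shape we want to match

# A manual "repeat_as": repeat x along a new leading axis sized from target.
x_rep = repeat(x, "c d -> b c d", b=target.shape[0])
assert x_rep.shape == target.shape
```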
|
closed
|
2020-10-21T03:31:57Z
|
2024-09-16T18:49:01Z
|
https://github.com/arogozhnikov/einops/issues/74
|
[] |
NguyenVanThanhHust
| 1
|
marshmallow-code/flask-smorest
|
rest-api
| 177
|
Document file download
|
How can I document a file download (done with send_file from flask) in openapi.json?
|
closed
|
2020-08-06T11:06:34Z
|
2020-08-14T02:57:49Z
|
https://github.com/marshmallow-code/flask-smorest/issues/177
|
[] |
kettenbach-it
| 1
|
noirbizarre/flask-restplus
|
api
| 267
|
upgrade swagger to 3.0
|
https://github.com/swagger-api/swagger-ui/blob/master/dist/index.html
need a little fix on css

|
closed
|
2017-03-29T12:40:01Z
|
2022-06-24T02:56:03Z
|
https://github.com/noirbizarre/flask-restplus/issues/267
|
[] |
tkizm1
| 3
|
davidsandberg/facenet
|
computer-vision
| 964
|
VGG19 Model
|
Can someone share a VGG19 architecture, modified to run with the facenet code?
|
open
|
2019-01-29T09:44:52Z
|
2019-03-20T12:21:00Z
|
https://github.com/davidsandberg/facenet/issues/964
|
[] |
Shahnawazgrewal
| 1
|
akfamily/akshare
|
data-science
| 5,593
|
AKShare API issue report | ak.stock_zh_a_spot_em now only fetches 200 stocks
|
ak.stock_zh_a_spot_em could still fetch more than 5,000 stocks this morning, but since this afternoon it only fetches 200. What is the problem? Even after downgrading my akshare version to 1.15.84, it still only fetches 200.
序号 代码 名称 最新价 涨跌幅 涨跌额 成交量 ... 市净率 总市值 流通市值 涨速 5分钟涨跌 60日涨跌幅 年初至今涨跌幅
0 1 873167 新赣江 28.08 30.00 6.48 112559 ... 4.22 1989783900 1173234825 0.00 0.00 64.98 99.57
1 2 835305 云创数据 45.13 29.98 10.41 264575 ... 7.70 5974134521 3660082489 0.00 0.00 97.16 122.21
2 3 430300 辰光医疗 16.70 29.96 3.85 218519 ... 5.25 1433647004 1111078655 0.00 0.00 21.19 43.97
3 4 300478 杭州高新 12.94 20.04 2.16 310775 ... 21.53 1639148620 1639148620 0.00 0.00 18.72 43.62
4 5 300287 飞利信 7.19 20.03 1.20 3634914 ... 8.20 10319618680 9424298834 0.00 0.00 26.14 71.19
.. ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
195 196 300036 超图软件 19.19 6.49 1.17 456369 ... 3.31 9456191380 8390385305 0.00 -0.05 -4.53 13.02
196 197 300143 盈康生命 10.18 6.49 0.62 310752 ... 3.20 7629417329 6529214602 0.10 -0.10 -3.78 10.65
197 198 688302 海创药业-U 31.79 6.46 1.93 17801 ... 2.56 3147705860 2311910552 0.22 0.35 -19.58 3.52
198 199 300460 惠伦晶体 12.89 6.44 0.78 275660 ... 4.06 3619566795 3619566795 0.23 0.08 -21.59 6.18
199 200 836504 博迅生物 20.86 6.43 1.26 23721 ... 4.86 903928466 267794485 -0.10 0.10 -7.33 17.92
[200 rows x 23 columns]
|
closed
|
2025-02-15T07:49:04Z
|
2025-02-15T14:25:06Z
|
https://github.com/akfamily/akshare/issues/5593
|
[
"bug"
] |
diana518516
| 5
|
deezer/spleeter
|
tensorflow
| 917
|
[Bug] protobuf incompatibility
|
- [ ✅] I didn't find a similar issue already open.
- [ ✅] I read the documentation (README AND Wiki)
- [ ✅] I have installed FFMpeg
- [ ❌] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
I'm trying to build a Streamlit web application with Streamlit 1.39.0, which requires protobuf<6,>=3.20, but the latest spleeter version, which I installed from GitHub using `pip install git+https://github.com/deezer/spleeter`, requires protobuf<3.20,>=3.9.2.
Could you consider adapting spleeter to newer versions of protobuf so I could use it easily?
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Installed using `pip install git+https://github.com/deezer/spleeter`
2. Run as `user`
3. Got `streamlit 1.39.0 requires protobuf<6,>=3.20, but you have protobuf 3.19.6 which is incompatible.` error
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 11 |
| Python | 3.10.15 |
| Installation type | pip install git+https://github.com/deezer/spleeter |
| RAM available | 16GB |
| Hardware spec | 12gen intel corei9 with 14 cores |
|
open
|
2024-11-04T08:35:15Z
|
2024-11-04T08:35:15Z
|
https://github.com/deezer/spleeter/issues/917
|
[
"bug",
"invalid"
] |
sahandkh1419
| 0
|
Gozargah/Marzban
|
api
| 1,058
|
nodes
|
Hello, good day.
The problem I have is that sometimes my node starts disconnecting and reconnecting, and I have to SSH into the node server and run a restart command:
```
docker compose down --remove-orphans; docker compose up -d
```
before the problem goes away.
My node server's Docker config is as follows:
```
services:
  marzban-node:
    # build: .
    image: gozargah/marzban-node:latest
    restart: always
    network_mode: host
    environment:
      XRAY_EXECUTABLE_PATH: /var/lib/marzban/xray-core/xray
      SSL_CLIENT_CERT_FILE: /var/lib/marzban-node/ssl_client_cert.pem
    volumes:
      - /var/lib/marzban-node:/var/lib/marzban-node
      - /var/lib/marzban:/var/lib/marzban
```
|
closed
|
2024-06-22T21:31:25Z
|
2024-06-26T04:41:20Z
|
https://github.com/Gozargah/Marzban/issues/1058
|
[
"Bug"
] |
xmohammad1
| 0
|
onnx/onnx
|
tensorflow
| 5,954
|
How to export a yolov8 model to onnx format
|
open
|
2024-02-23T03:23:33Z
|
2024-02-23T03:23:33Z
|
https://github.com/onnx/onnx/issues/5954
|
[
"question"
] |
LeoYoung6k
| 0
|
|
jupyter/docker-stacks
|
jupyter
| 1,775
|
[BUG] - java & pyspark not pre-installed?
|
### What docker image(s) are you using?
pyspark-notebook
### OS system and architecture running docker image
windows 11
### What Docker command are you running?
docker run --name pyspark -p 8888:8888 jupyter/scipy-notebook:latest
### How to Reproduce the problem?
from pyspark.sql import *
spark = SparkSession.builder.appName('PySpark Read CSV').getOrCreate()
# Reading csv file
df = spark.read.csv("users.csv")
df.printSchema()
df.show()
### Command output
```bash session
JAVA_HOME is not set
```
### Expected behavior
data frame created successfully
### Actual behavior
It seems that java is not installed on the docker container and the JAVA_HOME environment variable is not set
### Anything else?
I was able to fix this issue, but I had to pip install pyspark and install java through conda and set the environment variable.
I'm not sure if this is a bug or a feature but, I would image you would want the container to have java and pyspark pre installed?
|
closed
|
2022-08-21T16:06:07Z
|
2022-08-21T16:30:34Z
|
https://github.com/jupyter/docker-stacks/issues/1775
|
[
"type:Bug"
] |
TBrannan
| 3
|
keras-team/keras
|
deep-learning
| 20,048
|
keras.ops.map can't handle nested structures for TensorFlow backend
|
Keras: 3.4.1
TensorFlow: 2.17.0
As background, I am looking to leverage both `keras.ops.map` as well as `keras.ops.vectorize_map` for custom preprocessing layers. Certain layers require sequential mapping hence I use `keras.ops.map`. If I pass a nested input, `keras.ops.map` will fail when using TensorFlow backend.
I believe [this line](https://github.com/keras-team/keras/blob/7d92e9eea354da51e7c2a3edd679839ca0315a02/keras/src/backend/tensorflow/core.py#L233) is an issue as it assumes the input is not nested:
```python
def map(f, xs):
    xs = tree.map_structure(convert_to_tensor, xs)

    def get_fn_output_signature(x):
        out = f(x)
        return tree.map_structure(tf.TensorSpec.from_tensor, out)

    fn_output_signature = get_fn_output_signature(xs[0])
    return tf.map_fn(f, xs, fn_output_signature=fn_output_signature)
```
From what I can tell, it is trying to determine the output signature (which might not match the input) by feeding the function a single element (e.g. `xs[0]`) which won't work on nested inputs. I was able to fix it by updating the function as follows (note: I've done only limited testing).
```python
def map(f, xs):
    xs = tree.map_structure(convert_to_tensor, xs)

    def get_fn_output_signature(x):
        out = f(x)
        return tree.map_structure(tf.TensorSpec.from_tensor, out)

    # Grab single element by unpacking and repacking a single element
    x = tf.nest.pack_sequence_as(xs, [x[0] for x in tf.nest.flatten(xs)])
    fn_output_signature = get_fn_output_signature(x)
    return tf.map_fn(f, xs, fn_output_signature=fn_output_signature)
```
Test case:
```python
import os

os.environ["KERAS_BACKEND"] = "tensorflow"

import keras
import numpy as np
import tensorflow as tf


def my_fn(inputs):
    outputs = dict(inputs)
    outputs['x'] = inputs['x'][:, 0]
    outputs['y'] = inputs['y'] + 1
    return outputs


xs = {
    'x': tf.convert_to_tensor(np.random.rand(4, 100, 3), dtype=tf.float32),
    'y': tf.convert_to_tensor(np.random.randint(0, 10, size=(4, 1)), dtype=tf.int32)
}
```
Calling `keras.ops.map`:
```python
ys = keras.ops.map(my_fn, xs)
```
produces error:
```bash
225 out = f(x)
226 return tree.map_structure(tf.TensorSpec.from_tensor, out)
--> 228 fn_output_signature = get_fn_output_signature(xs[0])
229 return tf.map_fn(f, xs, fn_output_signature=fn_output_signature)
KeyError: 0"
```
Calling custom `map`:
```python
ys = map(my_fn, xs)
print(ys['x'].shape)
```
produces correct result:
```bash
(4, 100)
```
|
closed
|
2024-07-26T14:59:47Z
|
2024-08-11T22:24:16Z
|
https://github.com/keras-team/keras/issues/20048
|
[
"type:Bug",
"backend:tensorflow"
] |
apage224
| 2
|
BMW-InnovationLab/BMW-YOLOv4-Training-Automation
|
rest-api
| 27
|
FileNotFoundError: [Errno 2] No such file or directory: 'config/darknet/yolov4_default_weights/yolov4.weights'
|
Hi, I am facing this issue

Though we can see that the file exists there:

|
closed
|
2021-09-03T09:16:15Z
|
2021-09-03T10:38:08Z
|
https://github.com/BMW-InnovationLab/BMW-YOLOv4-Training-Automation/issues/27
|
[] |
boredomed
| 1
|
encode/databases
|
asyncio
| 424
|
Clarification on transaction isolation and management
|
Consider the following simulation of concurrent access:
```python
# pylint: skip-file
import asyncio
import os

from databases import Database


async def tx1(db):
    async with db.transaction():
        await db.execute("INSERT INTO foo VALUES (1)")
        await asyncio.sleep(1.5)


async def tx2(db):
    async with db.transaction():
        await asyncio.sleep(0.5)
        result = await db.execute("SELECT * FROM foo")
        assert result is None, result
        await asyncio.sleep(1)


async def main():
    db = Database("postgresql://rdbms:rdbms@localhost")
    await db.connect()
    await db.execute("CREATE TABLE IF NOT EXISTS foo (bar int4)")
    await db.execute("TRUNCATE foo CASCADE")
    await asyncio.gather(
        tx1(db.connection()),
        tx2(db.connection())
    )


if __name__ == '__main__':
    asyncio.run(main())
```
This code should exit successfully, but it either fails with `cannot perform operation: another operation is in progress` (which is also weird because a new connection is requested) or at the `assert` statement. Please provide some clarification regarding the expected transactional behavior and isolation of this module.
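For what it's worth, a hedged sketch (my reading of the `databases` API, not an authoritative answer to the isolation question) of acquiring a connection inside each task instead of passing the result of `db.connection()` from the parent task:

```python
async def tx1(db):
    # Acquire the connection inside the task, so tx1 and tx2 don't share one.
    async with db.connection() as conn:
        async with conn.transaction():
            await conn.execute("INSERT INTO foo VALUES (1)")
            await asyncio.sleep(1.5)

async def tx2(db):
    async with db.connection() as conn:
        async with conn.transaction():
            await asyncio.sleep(0.5)
            result = await conn.fetch_all("SELECT * FROM foo")
            await asyncio.sleep(1)

# in main():
#     await asyncio.gather(tx1(db), tx2(db))  # pass the Database, not a Connection
```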
|
closed
|
2021-11-16T01:38:20Z
|
2023-08-28T14:44:24Z
|
https://github.com/encode/databases/issues/424
|
[
"clean up"
] |
cochiseruhulessin
| 6
|
freqtrade/freqtrade
|
python
| 11,218
|
ModuleNotFoundError: No module named 'freqtrade'
|
* Operating system: ____Linux 5.14.0-427.37.1.el9_4.x86_64
* Python Version: _____in openshift
* CCXT version: _____in openshift
* Freqtrade Version: ____ image: freqtradeorg/freqtrade:stable
Trying to install in OpenShift with `oc new-app freqtradeorg/freqtrade:stable` fails with the following error:
Traceback (most recent call last):
File "/home/ftuser/.local/bin/freqtrade", line 5, in <module>
from freqtrade.main import main
ModuleNotFoundError: No module named 'freqtrade'
Steps to reproduce:
Run :
"oc new-app freqtradeorg/freqtrade:stable"
|
closed
|
2025-01-12T07:28:17Z
|
2025-01-15T02:40:16Z
|
https://github.com/freqtrade/freqtrade/issues/11218
|
[
"Question",
"Install"
] |
chmj
| 2
|
zappa/Zappa
|
django
| 569
|
[Migrated] Flask 1.0 is out
|
Originally from: https://github.com/Miserlou/Zappa/issues/1493 by [mnp](https://github.com/mnp)
<!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
There's a new 1.0 release of Flask: https://www.palletsprojects.com/blog/flask-1-0-released/
## Expected Behavior
<!--- Tell us what should happen -->
It offers some new features which might need to be evaluated and integrated.
## Actual Behavior
<!--- Tell us what happens instead -->
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
* Operating System and Python version:
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
|
closed
|
2021-02-20T12:22:54Z
|
2022-07-16T07:04:54Z
|
https://github.com/zappa/Zappa/issues/569
|
[] |
jneves
| 1
|
Neoteroi/BlackSheep
|
asyncio
| 51
|
Enrich the API for OpenAPI Docs
|
* support defining common responses to be shared across all operations
* support defining ~~security and~~ servers settings without subclassing `OpenAPIHandler`
|
closed
|
2020-11-30T19:51:37Z
|
2020-12-27T11:48:12Z
|
https://github.com/Neoteroi/BlackSheep/issues/51
|
[
"enhancement"
] |
RobertoPrevato
| 0
|
jazzband/django-oauth-toolkit
|
django
| 594
|
[Question]: What is music?
|
I have questions
1. What is `scope` in the document context?
```python
OAUTH2_PROVIDER = {
# this is the list of available scopes
'SCOPES': {'read': 'Read scope', 'write': 'Write scope', 'groups': 'Access to your groups'}
}
```
Because `scopes` contain both verbs and plural nouns, I am confused about their usage and the key idea behind them.
2. `required_scopes = ['music']`: what is `music`? Is it a model?
3. What is the relation between `music` and `song`? What is the model relation?
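For illustration, a hedged sketch of how the docs' example fits together (my reading of the tutorial: 'music' is just an example scope string and 'song' an example resource, neither is a model defined by the library):

```python
from django.http import JsonResponse
from oauth2_provider.views.generic import ScopedProtectedResourceView

class SongListView(ScopedProtectedResourceView):
    # The access token must have been granted the "music" scope, i.e. one of
    # the plain permission labels configured in OAUTH2_PROVIDER['SCOPES'].
    required_scopes = ["music"]

    def get(self, request, *args, **kwargs):
        return JsonResponse({"songs": ["..."]})
```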
|
closed
|
2018-05-11T06:48:06Z
|
2018-05-19T10:08:20Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/594
|
[
"question"
] |
elcolie
| 2
|
sktime/pytorch-forecasting
|
pandas
| 1,753
|
[BUG] temporal fusion transformer trained with GPU's then loaded with map_locations=torch.device('cpu') does not apply the correct device to loss metric
|
**Describe the bug**
<!--
A clear and concise description of what the bug is.
-->
**To Reproduce**
<!--
Add a Minimal, Complete, and Verifiable example (for more details, see e.g. https://stackoverflow.com/help/mcve
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com
-->
```python
<Paste your code here>
```
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
</details>
<!-- Thanks for contributing! -->
|
closed
|
2025-01-15T03:56:04Z
|
2025-01-15T04:02:01Z
|
https://github.com/sktime/pytorch-forecasting/issues/1753
|
[
"bug"
] |
arizzuto
| 0
|
koaning/scikit-lego
|
scikit-learn
| 630
|
[DOCS] Document KlusterFoldValidation
|
Related to https://www.linkedin.com/feed/update/urn:li:activity:7176859386554789888?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7176859386554789888%2C7176877653679894528%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287176877653679894528%2Curn%3Ali%3Aactivity%3A7176859386554789888%29
It doesn't help that it is misspelled but the docs are also just plain missing. No bueno. Will pick this up during the pyladies sprint tomorrow.
|
closed
|
2024-03-22T16:28:44Z
|
2024-03-24T14:11:10Z
|
https://github.com/koaning/scikit-lego/issues/630
|
[
"documentation"
] |
koaning
| 3
|
tfranzel/drf-spectacular
|
rest-api
| 996
|
Defining a static dict for an error response in @extend_schema returns "string" as the response body
|
**Describe the bug**
In instances where there is no available serializer, or the response returns a dict, I would like to be able to specify that dict as the response in my responses list under `extend_schema`. I understand this is maybe not how it should be expected to behave, but I was also unable to figure out a solution for this via the documentation.
**To Reproduce**
```
class RequestAPIView(APIView):
@extend_schema(
responses={
200: ResponseSerializer,
404: {"id": "not_found", "message": "User not found"}
},
)
def get(self, request, format=None):
...
except:
return Response(data={"id": "not_found", "message": "User not found"})
....
```
outputs the following OpenAPI spec:
<img width="547" alt="image" src="https://github.com/tfranzel/drf-spectacular/assets/1347347/ab0d801f-e171-4ef8-80d6-8c7c12845cd1">
**Expected behavior**
It would be nice to either have documentation on how to return a static dict response such as this, or to be able to specify as defined.
Thanks in advance for any and all help
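For reference, the closest I could get so far is attaching the dict as an example on an `OpenApiResponse` (a hedged sketch; I'm not sure this is the intended way):
```python
from drf_spectacular.utils import OpenApiExample, OpenApiResponse, extend_schema
from rest_framework.views import APIView


class RequestAPIView(APIView):
    @extend_schema(
        responses={
            200: ResponseSerializer,  # serializer from the snippet above
            404: OpenApiResponse(
                description="User not found",
                examples=[
                    OpenApiExample(
                        "not_found",
                        value={"id": "not_found", "message": "User not found"},
                    )
                ],
            ),
        },
    )
    def get(self, request, format=None):
        ...
```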
|
closed
|
2023-05-31T19:50:27Z
|
2023-06-11T18:55:56Z
|
https://github.com/tfranzel/drf-spectacular/issues/996
|
[] |
dashdanw
| 3
|
babysor/MockingBird
|
pytorch
| 310
|
I'm a beginner. During voice synthesis I hit "Error(s) in loading state_dict for Tacotron"; looking for a solution
|

|
closed
|
2022-01-03T05:10:55Z
|
2022-01-03T05:51:36Z
|
https://github.com/babysor/MockingBird/issues/310
|
[] |
Kristen-PRC
| 3
|
deepspeedai/DeepSpeed
|
pytorch
| 6,951
|
[REQUEST] Pipeline Parallelism support multi optimizer to train
|
**Is your feature request related to a problem? Please describe.**
I want to train a large GAN model that must be sharded across multiple GPUs, but `pipeline_module` does not seem to support multiple optimizers.
**Describe the solution you'd like**
Support for multiple optimizers when training with pipeline parallelism.
**Describe alternatives you've considered**
Being able to control which layers the flow goes through.
**Additional context**
|
open
|
2025-01-15T11:48:23Z
|
2025-01-15T11:48:23Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6951
|
[
"enhancement"
] |
whcjb
| 0
|
amdegroot/ssd.pytorch
|
computer-vision
| 251
|
eval.py Error [ ValueError: not enough values to unpack (expected 2, got 0) ]
|
../ssd.pytorch/layers/functions/detection.py", line 54, in forward
ids, count = nms(boxes, scores, self.nms_thresh, self.top_k)
ValueError: not enough values to unpack (expected 2, got 0)
What does that error mean?
How can I solve this problem?
|
closed
|
2018-10-19T09:57:23Z
|
2020-02-11T11:57:40Z
|
https://github.com/amdegroot/ssd.pytorch/issues/251
|
[] |
MakeToast
| 7
|
microsoft/nlp-recipes
|
nlp
| 74
|
[Example] Named Entity Recognition using MT-DNN
|
closed
|
2019-05-28T16:51:13Z
|
2020-01-14T17:51:50Z
|
https://github.com/microsoft/nlp-recipes/issues/74
|
[
"example"
] |
saidbleik
| 3
|
|
gunthercox/ChatterBot
|
machine-learning
| 1,950
|
SpecificResponseAdapter "Not processing the statement"
|
I'm confused about how this adapter works. I'm using the example. The only thing I'm doing differently is using Mongo as storage.
Here is the log:

Am I missing something here? Do I need to set read only to False, or train my bot with the response adapter before I call it? Not likely, since the example should work without any problems as it is, right?
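For reference, this is essentially the example I'm running (copied from the docs as far as I remember, so treat the exact parameter names as an assumption), only with the storage adapter switched to Mongo:
```python
from chatterbot import ChatBot

bot = ChatBot(
    "Example Bot",
    storage_adapter="chatterbot.storage.MongoDatabaseAdapter",
    logic_adapters=[
        {
            "import_path": "chatterbot.logic.SpecificResponseAdapter",
            # Only an exact match of this input should trigger the fixed output.
            "input_text": "Help me!",
            "output_text": "Ok, here is a link: http://chatterbot.rtfd.org",
        }
    ],
)

print(bot.get_response("Help me!"))
```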
|
open
|
2020-04-19T01:20:33Z
|
2020-04-19T01:20:44Z
|
https://github.com/gunthercox/ChatterBot/issues/1950
|
[] |
FallenSpaces
| 0
|
babysor/MockingBird
|
pytorch
| 994
|
Preprocessing with pre.py throws an error
|
Traceback (most recent call last):
File "/home_1/gaoyiyao/MockingBird-main/pre.py", line 74, in <module>
preprocess_dataset(**vars(args))
File "/home_1/gaoyiyao/MockingBird-main/models/synthesizer/preprocess.py", line 86, in preprocess_dataset
for speaker_metadata in tqdm(job, dataset, len(speaker_dirs), unit="speakers"):
File "/home_1/gaoyiyao/anaconda3/envs/sound/lib/python3.9/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
File "/home_1/gaoyiyao/anaconda3/envs/sound/lib/python3.9/multiprocessing/pool.py", line 870, in next
raise value
FileNotFoundError: [Errno 2] No such file or directory: 'data/SV2TTS/synthesizer/audio/audio-SSB03850064.wav_00.npy'
Does anyone know how to fix this?
|
open
|
2024-04-20T13:46:15Z
|
2024-04-25T12:29:55Z
|
https://github.com/babysor/MockingBird/issues/994
|
[] |
gaoyiyao
| 3
|
geopandas/geopandas
|
pandas
| 2,872
|
BUG: Different results between `GeoSeries.fillna` and `GeoDataFrame.fillna`
|
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [x] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
>>> import geopandas as gpd
>>> from shapely.geometry import Polygon
>>> s = gpd.GeoSeries(
... [
... Polygon([(0, 0), (1, 1), (0, 1)]),
... None,
... Polygon([(0, 0), (-1, 1), (0, -1)]),
... ]
... )
>>> s.fillna() # no error
0 POLYGON ((0.00000 0.00000, 1.00000 1.00000, 0....
1 GEOMETRYCOLLECTION EMPTY
2 POLYGON ((0.00000 0.00000, -1.00000 1.00000, 0...
dtype: geometry
>>> df = s.to_frame("geometry")
>>> type(df)
geopandas.geodataframe.GeoDataFrame
>>> df.fillna() # get an error
```
<details>
<summary>error</summary>
```python
File C:\Software\miniforge3\envs\dtoolkit\lib\site-packages\pandas\core\frame.py:5501, in DataFrame.fillna(self, value, method, axis, inplace, limit, downcast)
5490 @doc(NDFrame.fillna, **_shared_doc_kwargs)
5491 def fillna(
5492 self,
(...)
5499 downcast: dict | None = None,
5500 ) -> DataFrame | None:
-> 5501 return super().fillna(
5502 value=value,
5503 method=method,
5504 axis=axis,
5505 inplace=inplace,
5506 limit=limit,
5507 downcast=downcast,
5508 )
File C:\Software\miniforge3\envs\dtoolkit\lib\site-packages\pandas\core\generic.py:6866, in NDFrame.fillna(self, value, method, axis, inplace, limit, downcast)
6753 """
6754 Fill NA/NaN values using the specified method.
6755
(...)
6863 Note that column D is not affected since it is not present in df2.
6864 """
6865 inplace = validate_bool_kwarg(inplace, "inplace")
-> 6866 value, method = validate_fillna_kwargs(value, method)
6868 # set the default here, so functions examining the signaure
6869 # can detect if something was set (e.g. in groupby) (GH9221)
6870 if axis is None:
File C:\Software\miniforge3\envs\dtoolkit\lib\site-packages\pandas\util\_validators.py:288, in validate_fillna_kwargs(value, method, validate_scalar_dict_value)
285 from pandas.core.missing import clean_fill_method
287 if value is None and method is None:
--> 288 raise ValueError("Must specify a fill 'value' or 'method'.")
289 if value is None and method is not None:
290 method = clean_fill_method(method)
ValueError: Must specify a fill 'value' or 'method'.
```
</details>
#### Problem description
`GeoSeries.fillna()` returns a result, but `GeoDataFrame.fillna()` raises a `ValueError`.
This is because `GeoDataFrame` doesn't have the following lines:
https://github.com/geopandas/geopandas/blob/76403be5b772ca13802b8f57f1ff803dc1a81f4b/geopandas/geoseries.py#L825-L827
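As a stopgap on my side, routing the fill through the GeoSeries path works, since (as shown above) `GeoSeries.fillna()` accepts no arguments:
```python
# Workaround: fill the geometry column directly via GeoSeries.fillna,
# which supplies a default empty-geometry fill value.
df["geometry"] = df["geometry"].fillna()
```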
#### Expected Output
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:18:12) [MSC v.1929 64 bit (AMD64)]
executable : C:\Software\miniforge3\envs\dtoolkit\python.exe
machine : Windows-10-10.0.22621-SP0
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.11.1
GEOS lib : None
GDAL : 3.6.2
GDAL data dir: None
PROJ : 9.1.1
PROJ data dir: C:\Software\miniforge3\envs\dtoolkit\Library\share\proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.2
numpy : 1.22.3
pandas : 2.0.0
pyproj : 3.4.1
shapely : 2.0.1
fiona : 1.8.22
geoalchemy2: None
geopy : 2.2.0
matplotlib : 3.5.1
mapclassify: 2.4.3
pygeos : None
pyogrio : 0.5.1
psycopg2 : None
pyarrow : None
rtree : 1.0.0
</details>
|
open
|
2023-04-16T12:00:45Z
|
2023-04-17T09:11:59Z
|
https://github.com/geopandas/geopandas/issues/2872
|
[
"bug",
"enhancement"
] |
Zeroto521
| 1
|
coqui-ai/TTS
|
deep-learning
| 2,642
|
[Bug]
|
### Describe the bug
Problem when running the YourTTS training recipe with `use_phonemes=True`.
Error:
```
> TRAINING (2023-05-30 17:28:48)
! Run is kept in /newvolume/souvik/yourtts_exp/TTS/recipes/vctk/yourtts/YourTTS-EN-VCTK-May-30-2023_05+28PM-2071088b
Traceback (most recent call last):
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/trainer/trainer.py", line 1591, in fit
self._fit()
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/trainer/trainer.py", line 1544, in _fit
self.train_epoch()
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/trainer/trainer.py", line 1308, in train_epoch
for cur_step, batch in enumerate(self.train_loader):
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 634, in __next__
data = self._next_data()
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data
return self._process_data(data)
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data
data.reraise()
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/torch/_utils.py", line 644, in reraise
raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/newvolume/souvik/yourtts_exp/TTS/TTS/tts/models/vits.py", line 272, in __getitem__
token_ids = self.get_token_ids(idx, item["text"])
File "/newvolume/souvik/yourtts_exp/TTS/TTS/tts/datasets/dataset.py", line 240, in get_token_ids
token_ids = self.get_phonemes(idx, text)["token_ids"]
File "/newvolume/souvik/yourtts_exp/TTS/TTS/tts/datasets/dataset.py", line 217, in get_phonemes
out_dict = self.phoneme_dataset[idx]
File "/newvolume/souvik/yourtts_exp/TTS/TTS/tts/datasets/dataset.py", line 607, in __getitem__
ids = self.compute_or_load(string2filename(item["audio_unique_name"]), item["text"], item["language"])
File "/newvolume/souvik/yourtts_exp/TTS/TTS/tts/datasets/dataset.py", line 620, in compute_or_load
cache_path = os.path.join(self.cache_path, file_name + file_ext)
File "/newvolume/anaconda3/envs/yourtts/lib/python3.9/posixpath.py", line 76, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
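For what it's worth, the last frame suggests `self.cache_path` ends up `None` when phonemes are enabled. A hedged sketch of the config fields I believe feed into it (untested assumption on my side):
```python
from TTS.tts.configs.vits_config import VitsConfig

# Assumption: phoneme_cache_path is what PhonemeDataset joins into cache_path.
config = VitsConfig(
    use_phonemes=True,
    phoneme_language="en",
    phoneme_cache_path="phoneme_cache",  # hypothetical path; any writable directory
)
```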
### To Reproduce
Run the same recipe, just change `use_phonemes=True` in the VITS config.
### Expected behavior
It should have run using phonemes.
### Logs
_No response_
### Environment
```shell
Updated tts version
```
### Additional context
_No response_
|
closed
|
2023-05-30T17:47:11Z
|
2023-06-05T08:05:31Z
|
https://github.com/coqui-ai/TTS/issues/2642
|
[
"bug"
] |
souvikg544
| 2
|
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 1,864
|
[Nodriver] Why does the number of browser.tabs remain unchanged after "Page" close?
|
browser = await uc.start()
page = await browser.get('url')
url2 = 'example.com'
#For special reasons, you need to use evaluate window.open to bypass Cloudflare
await page.evaluate(f'''window.open("{url2}", "_blank"); ''')
await page.close()
print(str(len(browser.tabs)))
#output len 2
page = browser.tabs[len(browser.tabs) - 1]
What I actually want is for `page` to become the last tab in the browser after closing the current page.
But this doesn't seem to work correctly; any suggestions?
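A minimal sketch of what I'm trying (plain list indexing only; the short sleep is just my assumption that the driver needs a moment to drop the closed target, not something from the docs):
```python
import asyncio

await page.evaluate(f'window.open("{url2}", "_blank");')
await page.close()
await asyncio.sleep(0.5)        # assumed settling time
page = browser.tabs[-1]         # last remaining tab
print(len(browser.tabs))
```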
|
open
|
2024-05-04T01:52:52Z
|
2024-05-04T01:52:52Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1864
|
[] |
mashien0201
| 0
|
LibrePhotos/librephotos
|
django
| 664
|
My Albums disappear and new ones are no longer created/visible : Showing 0 user created albums
|
# 🐛 Bug Report
* [X] 📁 I've Included a ZIP file containing my librephotos `log` files [gunicorn_django.log](https://github.com/LibrePhotos/librephotos/files/9791059/gunicorn_django.log)
* [X] ❌ I have looked for similar issues (including closed ones)
* [X] 🎬 (If applicable) I've provided pictures or links to videos that clearly demonstrate the issue [Example of behavior](https://gfycat.com/officialadoredbufflehead)
## 📝 Description of issue:
After uploading photos and then adding them to an album, any previously created albums disappear. Attempts to recreate previous albums or create new ones still result in **Showing 0 user created albums**.
All pictures are still present.
Albums shared with a second account are still visible to the second account.
Sometimes I'm able to set up 4 or 5 new albums, but at some point it always stops showing any albums.
I'm not sure which logs would be helpful here.
## 🔁 How can we reproduce it:
### [Following Docker install steps](https://docs.librephotos.com/1/standard_install/)
`cd librephotos-docker`
`cp librephotos.env .env`
### Edit .env the following variables
```
scanDirectory=~/scanDirectory
data=./librephotos/data
logs=~/logs
dbName=librephotos
dbUser=librephotos
dbPass=<pass>
```
### From docker-compose.yml
```
backend:
image: reallibrephotos/librephotos:${tag}
container_name: backend
restart: unless-stopped
volumes:
- ${scanDirectory}:/data
- ${data}/protected_media:/protected_media
- ${logs}:/logs
- ${data}/cache:/root/.cache
```
### docker-compose up
- Begin uploading photos from localhost:3000
- Wait until worker is green and available (upper right)
- Select uploaded photos and click the + to add to an album
- Go to Albums > My albums > refresh page to see new data
## Please provide additional information:
- 💻 Operating system:
Debian 11 Bullseye (Standard), from Proxmox templates
- ⚙ Architecture (x86 or ARM):
x86_64
- 🔢 Librephotos version:
reallibrephotos/librephotos-frontend latest
reallibrephotos/librephotos latest
reallibrephotos/librephotos-proxy
- 📸 Librephotos installation method (Docker, Kubernetes, .deb, etc.):
Docker : latest
- 📁 How is you picture library mounted (Local file system (Type), NFS, SMB, etc.):
Cephfs with a symlink from cephfs to workingdir \librephotos-docker
scanDirectory -> /mnt/pictures/librephotos/scanDirectory/
- ☁ If you are virtualizing librephotos, Virtualization platform (Proxmox, Xen, HyperV, etc.):
Proxmox VE 7.2
|
closed
|
2022-10-14T18:35:44Z
|
2023-04-13T08:21:34Z
|
https://github.com/LibrePhotos/librephotos/issues/664
|
[
"bug"
] |
Circenn5130
| 7
|
ultralytics/yolov5
|
machine-learning
| 13,453
|
conv2d() received an invalid combination of arguments
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
## environment
windows10
python3.8
## question
I used the trained model to run detection. The following code throws an error:
```
import pathlib
import torch
from PIL import Image
import numpy as np
from pathlib import Path
pathlib.PosixPath = pathlib.WindowsPath
model = torch.load(r'D:\py\yolo\yolov5\mymodel\testbest.pt', map_location=torch.device('cpu'))['model'].float()
model.eval()
results = model(r'D:\py\code\dnfm-yolo-tutorial\naima\28.png')
results.print()
results.show()
```
The error:
```
Traceback (most recent call last):
File "D:/py/PyCharm 2024.1.6/plugins/python/helpers/pydev/pydevd.py", line 1551, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "D:\py\PyCharm 2024.1.6\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:\py\yolo\yolov5\test.py", line 13, in <module>
results = model(r'D:\py\code\dnfm-yolo-tutorial\naima\28.png')
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\py\yolo\yolov5\models\yolo.py", line 267, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "D:\py\yolo\yolov5\models\yolo.py", line 167, in _forward_once
x = m(x) # run
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\py\yolo\yolov5\models\common.py", line 86, in forward
return self.act(self.bn(self.conv(x)))
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\conv.py", line 458, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\py\yolo\yolov5\venv\lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
TypeError: conv2d() received an invalid combination of arguments - got (str, Parameter, NoneType, tuple, tuple, tuple, int), but expected one of:
* (Tensor input, Tensor weight, Tensor bias = None, tuple of ints stride = 1, tuple of ints padding = 0, tuple of ints dilation = 1, int groups = 1)
didn't match because some of the arguments have invalid types: (!str!, !Parameter!, !NoneType!, !tuple of (int, int)!, !tuple of (int, int)!, !tuple of (int, int)!, !int!)
* (Tensor input, Tensor weight, Tensor bias = None, tuple of ints stride = 1, str padding = "valid", tuple of ints dilation = 1, int groups = 1)
didn't match because some of the arguments have invalid types: (!str!, !Parameter!, !NoneType!, !tuple of (int, int)!, !tuple of (int, int)!, !tuple of (int, int)!, !int!)
```
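From the traceback, the string path reaches `conv2d` directly, so the bare `nn.Module` is being called with a file path instead of an image tensor. A hedged sketch of loading through `torch.hub`, which wraps the weights in AutoShape so paths and PIL images are accepted (paths reused from the report above):
```python
import torch

# AutoShape-wrapped model: handles image loading, letterboxing and NMS internally.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path=r"D:\py\yolo\yolov5\mymodel\testbest.pt")
results = model(r"D:\py\code\dnfm-yolo-tutorial\naima\28.png")
results.print()
results.show()
```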
### Additional
_No response_
|
open
|
2024-12-08T13:24:11Z
|
2024-12-13T10:18:27Z
|
https://github.com/ultralytics/yolov5/issues/13453
|
[
"question",
"detect"
] |
niusme
| 3
|
nltk/nltk
|
nlp
| 2,858
|
Manually wrap output lines in HOWTO files
|
cf https://github.com/nltk/nltk/pull/2856#issuecomment-945193387
|
open
|
2021-10-18T10:02:18Z
|
2024-10-07T16:34:36Z
|
https://github.com/nltk/nltk/issues/2858
|
[
"good first issue",
"documentation"
] |
stevenbird
| 3
|
wkentaro/labelme
|
computer-vision
| 1,448
|
labels.txt not able to read the label 'car'
|
### Provide environment information
python labelme2voc.py /home/amodpatil/semantic_dataset/segmentation /home/amodpatil/semantic_dataset/am --labels /home/amodpatil/pytorch-segmentation/datasets/labels.txt
Creating dataset: /home/amodpatil/semantic_dataset/am
Generating dataset from: /home/amodpatil/semantic_dataset/segmentation/left2306.json
Label name to value dictionary: {'vegetation': 1, 'traffic-light': 2, 'road': 3, 'sidewalk': 4, 'parking': 5, 'building': 6}
Shapes: [{'label': 'sidewalk', 'points': [[586.0, 413.5], [552.0, 412.5], [551.0, 411.5], [547.0, 411.5], [546.0, 410.5], [533.0, 410.5], [532.0, 409.5], [512.0, 408.5], [509.0, 406.5], [507.0, 407.5], [506.0, 406.5], [492.0, 406.5], [491.0, 405.5], [482.0, 405.5], [481.0, 404.5], [477.0, 403.5], [475.5, 402.0], [475.5, 399.0], [481.5, 394.0], [481.5, 392.0], [480.5, 391.0], [481.5, 386.0], [480.5, 384.0], [476.5, 381.0], [473.5, 375.0], [473.5, 372.0], [471.5, 368.0], [470.5, 361.0], [469.5, 360.0], [469.5, 354.0], [467.5, 350.0], [467.5, 342.0], [466.5, 341.0], [466.5, 338.0], [463.0, 333.5], [461.0, 333.5], [458.5, 332.0], [456.5, 330.0], [456.5, 328.0], [460.0, 326.5], [475.0, 326.5], [476.0, 327.5], [487.0, 327.5], [488.0, 328.5], [511.0, 328.5], [512.0, 329.5], [525.0, 329.5], [526.0, 328.5], [535.0, 328.5], [536.0, 327.5], [542.0, 327.5], [543.0, 328.5], [545.0, 328.5], [546.0, 327.5], [553.0, 327.5], [554.0, 328.5], [556.0, 327.5], [559.0, 327.5], [560.0, 328.5], [575.0, 329.5], [575.0, 331.5], [577.0, 331.5], [578.0, 332.5], [583.0, 332.5], [639.0, 332.70833333333337], [639.0, 338.95833333333337], [639.0, 347.2916666666667], [639.0, 363.95833333333337], [639.0, 372.2916666666667], [639.0, 368.12500000000006], [639.0, 386.0416666666667], [639.0, 383.5416666666667], [639.0, 393.95833333333337], [639.0, 405.62500000000006], [637.2916666666666, 419.37500000000006], [617.7083333333334, 419.7916666666667], [589.5, 412.0]], 'shape_type': 'polygon', 'flags': {}, 'description': '', 'group_id': None, 'mask': None, 'other_data': {}}, {'label': 'sidewalk', 'points': [[412.0, 393.5], [391.0, 393.5], [390.0, 392.5], [379.0, 392.5], [378.0, 391.5], [371.0, 391.5], [370.0, 390.5], [362.0, 390.5], [361.0, 389.5], [347.0, 389.5], [346.0, 388.5], [337.0, 388.5], [336.0, 387.5], [325.0, 387.5], [324.0, 386.5], [281.0, 385.5], [280.0, 384.5], [272.0, 383.5], [270.0, 381.5], [267.0, 381.5], [265.5, 380.0], [269.0, 378.5], [269.5, 377.0], [275.0, 372.5], [277.0, 372.5], [291.0, 365.5], [293.0, 365.5], [295.0, 363.5], [302.0, 362.5], [305.0, 359.5], [307.0, 358.5], [310.0, 358.5], [315.0, 355.5], [317.0, 355.5], [325.0, 350.5], [328.0, 350.5], [329.0, 349.5], [332.0, 349.5], [335.0, 347.5], [337.0, 347.5], [340.0, 344.5], [349.0, 340.5], [354.0, 336.5], [362.0, 336.5], [363.0, 335.5], [368.0, 334.5], [371.0, 332.5], [373.0, 332.5], [376.0, 330.5], [378.0, 330.5], [381.0, 328.5], [383.0, 328.5], [384.0, 327.5], [389.0, 327.5], [396.0, 333.5], [401.0, 335.5], [402.5, 337.0], [403.5, 343.0], [404.5, 344.0], [402.5, 347.0], [402.5, 365.0], [403.5, 366.0], [404.5, 372.0], [406.5, 375.0], [406.5, 377.0], [409.5, 384.0], [409.5, 390.0], [412.5, 392.0]], 'shape_type': 'polygon', 'flags': {}, 'description': '', 'group_id': None, 'mask': None, 'other_data': {}}, {'label': 'road', 'points': [[521.0, 479.0], [506.0, 475.5], [502.0, 467.5], [499.5, 469.0], [502.5, 476.0], [496.0, 476.5], [493.5, 461.0], [463.0, 433.5], [452.0, 438.5], [449.0, 432.5], [442.0, 434.5], [439.0, 430.5], [410.5, 429.0], [440.0, 426.5], [399.0, 423.5], [324.0, 427.5], [283.0, 433.5], [285.0, 437.5], [281.0, 437.5], [279.0, 432.5], [274.5, 434.0], [277.5, 436.0], [275.0, 437.5], [241.0, 432.5], [179.0, 434.5], [136.0, 444.5], [69.0, 448.5], [1.0, 465.5], [0.5, 365.0], [9.0, 360.5], [11.0, 366.5], [21.0, 369.5], [104.0, 368.5], [105.0, 371.5], [82.0, 369.5], [80.5, 373.0], [112.0, 377.5], [140.0, 377.5], [144.0, 372.5], [159.0, 371.5], [160.5, 376.0], [155.5, 380.0], [158.0, 381.5], [217.0, 386.5], [229.0, 381.5], [248.0, 380.5], [249.0, 
377.5], [253.0, 380.5], [254.0, 376.5], [256.0, 382.5], [272.0, 381.5], [282.0, 386.5], [310.0, 387.5], [312.0, 390.5], [389.0, 394.5], [421.0, 398.5], [422.0, 401.5], [479.0, 406.5], [481.0, 410.5], [553.0, 413.5], [549.5, 417.0], [559.0, 417.5], [561.0, 421.5], [611.0, 420.5], [620.5, 422.0], [615.0, 425.5], [635.5, 427.0], [620.0, 428.5], [619.0, 431.5], [616.0, 428.5], [597.0, 431.5], [594.0, 428.5], [590.0, 431.5], [584.0, 428.5], [577.0, 431.5], [572.0, 427.5], [548.0, 431.5], [541.0, 428.5], [540.0, 431.5], [521.0, 431.5], [519.0, 434.5], [512.0, 430.5], [509.5, 440.0], [502.0, 438.5], [499.0, 446.5], [492.0, 439.5], [490.5, 450.0], [502.0, 461.5], [514.0, 466.5], [520.5, 462.0]], 'shape_type': 'polygon', 'flags': {}, 'description': '', 'group_id': None, 'mask': None, 'other_data': {}}, {'label': 'car', 'points': [[276.0, 356.5], [259.0, 356.5], [251.0, 348.5], [238.0, 348.5], [237.0, 347.5], [231.0, 348.5], [229.0, 346.5], [226.0, 346.5], [224.0, 348.5], [222.0, 346.5], [217.0, 345.5], [216.0, 346.5], [207.0, 345.5], [206.0, 346.5], [197.0, 345.5], [193.0, 346.5], [180.0, 344.5], [167.0, 344.5], [165.0, 342.5], [137.0, 341.5], [133.5, 338.0], [130.5, 330.0], [130.5, 308.0], [133.5, 294.0], [136.5, 288.0], [140.5, 284.0], [141.5, 278.0], [145.0, 274.5], [152.0, 271.5], [170.0, 269.5], [171.0, 267.5], [178.0, 267.5], [181.0, 265.5], [186.0, 265.5], [193.0, 262.5], [207.0, 247.5], [213.0, 244.5], [220.0, 237.5], [231.0, 235.5], [246.0, 235.5], [247.0, 234.5], [253.0, 235.5], [264.0, 233.5], [280.0, 235.5], [295.0, 234.5], [318.0, 237.5], [334.0, 242.5], [351.5, 261.0], [353.5, 266.0], [356.5, 288.0], [359.5, 295.0], [360.5, 302.0], [360.5, 317.0], [358.5, 324.0], [354.0, 327.5], [346.0, 328.5], [339.0, 327.5], [334.0, 323.5], [330.0, 323.5], [291.0, 334.5], [287.5, 339.0], [285.5, 348.0], [283.5, 351.0]], 'shape_type': 'polygon', 'flags': {}, 'description': '', 'group_id': None, 'mask': None, 'other_data': {}}, {'label': 'car', 'points': [[69.0, 334.5], [62.0, 334.5], [54.0, 331.5], [47.0, 331.5], [39.0, 329.5], [29.5, 318.0], [27.5, 313.0], [23.0, 308.5], [19.0, 307.5], [14.5, 304.0], [15.5, 302.0], [13.5, 299.0], [13.5, 296.0], [10.5, 294.0], [12.5, 293.0], [12.0, 291.5], [5.0, 291.5], [4.0, 292.5], [0.5, 291.0], [0.5, 270.0], [8.0, 268.5], [10.0, 269.5], [18.0, 269.5], [19.0, 268.5], [20.0, 269.5], [35.0, 269.5], [44.0, 267.5], [47.0, 265.5], [52.0, 265.5], [55.0, 264.5], [56.0, 262.5], [62.0, 260.5], [68.0, 254.5], [74.0, 252.5], [77.0, 249.5], [79.0, 249.5], [89.0, 243.5], [92.0, 244.5], [98.0, 242.5], [101.0, 243.5], [112.0, 242.5], [115.5, 241.0], [114.5, 238.0], [116.0, 236.5], [121.5, 234.0], [105.0, 233.5], [105.0, 231.5], [121.0, 230.5], [122.0, 229.5], [184.0, 228.5], [185.0, 229.5], [206.0, 229.5], [215.5, 232.0], [216.5, 236.0], [203.0, 247.5], [197.0, 251.5], [191.0, 251.5], [187.0, 253.5], [182.0, 263.5], [162.0, 266.5], [144.0, 272.5], [139.5, 277.0], [137.5, 284.0], [132.5, 291.0], [124.5, 320.0], [117.0, 323.5], [114.0, 328.5], [110.0, 330.5], [106.0, 330.5], [104.0, 333.5], [101.0, 330.5], [96.0, 331.5], [92.0, 330.5], [81.0, 331.5], [79.0, 330.5], [77.0, 333.5], [75.0, 331.5], [73.0, 333.5]], 'shape_type': 'polygon', 'flags': {}, 'description': '', 'group_id': None, 'mask': None, 'other_data': {}}, {'label': 'parking', 'points': [[227.0, 383.5], [219.0, 380.5], [204.0, 380.5], [182.0, 377.5], [158.0, 377.5], [154.0, 379.5], [150.0, 375.5], [146.0, 375.5], [139.0, 378.5], [116.0, 378.5], [84.5, 374.0], [86.0, 372.5], [96.5, 373.0], [93.0, 370.5], [80.0, 370.5], 
[77.0, 372.5], [38.0, 372.5], [16.0, 370.5], [7.0, 365.5], [0.5, 366.0], [0.5, 339.0], [13.0, 341.5], [53.0, 341.5], [77.0, 345.5], [97.0, 345.5], [115.0, 340.5], [124.0, 340.5], [128.0, 345.5], [134.0, 347.5], [147.0, 348.5], [160.0, 352.5], [200.0, 352.5], [209.0, 353.5], [214.0, 356.5], [223.0, 355.5], [229.0, 357.5], [231.0, 355.5], [242.0, 355.5], [260.5, 363.0], [255.0, 365.5], [266.0, 366.5], [270.5, 370.0], [269.0, 375.5], [258.0, 381.5], [241.0, 380.5], [229.0, 381.5]], 'shape_type': 'polygon', 'flags': {}, 'description': '', 'group_id': None, 'mask': None, 'other_data': {}}, {'label': 'person', 'points': [[462.0, 397.5], [460.5, 397.0], [460.5, 390.0], [459.5, 389.0], [460.5, 388.0], [460.5, 387.0], [459.5, 386.0], [459.5, 385.0], [456.5, 382.0], [456.5, 380.0], [455.0, 379.5], [453.5, 378.0], [453.5, 376.0], [451.5, 374.0], [451.5, 373.0], [448.5, 370.0], [448.5, 368.0], [447.5, 367.0], [447.5, 365.0], [447.0, 364.5], [445.0, 364.5], [444.0, 363.5], [440.0, 363.5], [439.0, 364.5], [436.0, 364.5], [435.5, 364.0], [435.5, 360.0], [434.5, 359.0], [434.5, 356.0], [432.5, 354.0], [432.5, 352.0], [431.0, 350.5], [430.0, 350.5], [426.5, 347.0], [426.5, 346.0], [425.5, 345.0], [424.5, 342.0], [421.5, 339.0], [421.5, 338.0], [419.5, 335.0], [419.5, 334.0], [418.5, 333.0], [418.5, 327.0], [419.5, 326.0], [419.5, 323.0], [420.5, 322.0], [420.5, 321.0], [419.0, 320.5], [416.5, 318.0], [416.5, 314.0], [417.5, 313.0], [417.5, 310.0], [418.5, 309.0], [418.5, 308.0], [421.0, 305.5], [427.0, 305.5], [429.5, 307.0], [429.5, 308.0], [430.5, 309.0], [431.5, 312.0], [436.5, 317.0], [435.5, 318.0], [435.5, 320.0], [437.0, 320.5], [445.5, 329.0], [445.5, 330.0], [447.0, 331.5], [448.0, 331.5], [450.0, 333.5], [452.0, 333.5], [453.0, 334.5], [454.0, 334.5], [455.5, 336.0], [454.0, 336.5], [452.5, 338.0], [452.5, 339.0], [457.5, 342.0], [459.5, 348.0], [463.0, 351.5], [464.5, 352.0], [464.5, 360.0], [463.5, 361.0], [463.5, 364.0], [464.0, 364.5], [467.0, 364.5], [467.5, 365.0], [467.5, 367.0], [469.5, 370.0], [469.5, 375.0], [468.0, 375.5], [465.5, 378.0], [468.5, 381.0], [469.0, 382.5], [473.0, 382.5], [475.5, 386.0], [475.5, 388.0], [474.5, 389.0], [474.5, 392.0], [473.0, 393.5], [469.0, 393.5], [468.0, 392.5], [467.0, 392.5], [465.5, 391.0], [465.5, 389.0], [464.0, 388.5], [463.5, 389.0], [463.5, 391.0], [462.5, 392.0], [462.5, 397.0]], 'shape_type': 'polygon', 'flags': {}, 'description': '', 'group_id': None, 'mask': None, 'other_data': {}}]
Traceback (most recent call last):
File "/home/amodpatil/pytorch-segmentation/labelme2voc.py", line 103, in <module>
main()
File "/home/amodpatil/pytorch-segmentation/labelme2voc.py", line 83, in main
lbl, _ = labelme.utils.shapes_to_label(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/amodpatil/miniconda3/lib/python3.12/site-packages/labelme/utils/shape.py", line 68, in shapes_to_label
cls_id = label_name_to_value[cls_name]
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
KeyError: 'car'
### What OS are you using?
Ubuntu 20.04
### Describe the Bug
Same command, label dictionary, shapes dump, and `KeyError: 'car'` traceback as shown in the environment section above.
### Expected Behavior
not providing an output
### To Reproduce
It should read the labels.txt with all the labels in it.
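For example, a labels.txt that covers every label used in the JSON (including `car` and `person`) might look like this; the `__ignore__` and `_background_` entries are my guess at the usual labelme convention:
```
__ignore__
_background_
vegetation
traffic-light
road
sidewalk
parking
building
car
person
```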
|
open
|
2024-06-11T11:44:14Z
|
2024-06-11T11:44:14Z
|
https://github.com/wkentaro/labelme/issues/1448
|
[
"issue::bug"
] |
amodpatil1
| 0
|
biolab/orange3
|
numpy
| 6,553
|
Nomogram and Predictions widget report different class probabilities for the same naive Bayesian model
|
**What's wrong?**
Given the same feature-value pairs for the data instances and the same model, nomogram should predict the same class probability as the Predict widget. It does not, and the numbers are different.
The attached workflow exemplifies this problem. I considered the Titanic data set and created an instance with sex=female and status=third. Nomogram predicts a survival probability of 66%, whereas Predictions shows 82%.

I get a similar problem on other data sets (e.g., Attrition), where sometimes the differences are even more pronounced.
If I use logistic regression, the predictions in the Nomogram and Predictions match (the comparison requires that all feature values are defined).
**How can we reproduce the problem?**
See the attached workflow.
[nomogram-vs-predictions.ows.zip](https://github.com/biolab/orange3/files/12445283/nomogram-vs-predictions.ows.zip)
**What's your environment?**
OS X, Orange 3.35, dmg.
|
closed
|
2023-08-26T11:54:15Z
|
2023-09-01T15:16:13Z
|
https://github.com/biolab/orange3/issues/6553
|
[
"bug"
] |
BlazZupan
| 3
|
AirtestProject/Airtest
|
automation
| 486
|
find_element_by_xpath("//button[@type='submit']") fails to execute
|
(Please fill in the sections below as completely as possible; it helps us locate and resolve the problem quickly. Thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE usage problems in the test development environment -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree, or poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control problems -> follow the steps below
**Describe the bug**
(Briefly and clearly summarize the problem, or paste the error traceback.)
1. When running web automation and using AirtestIDE to capture the login button from the UI,
the generated script line is:
driver.find_element_by_xpath("//button[@type='submit']")
2. While executing the script, the following error appears:
(Paste the traceback or other error output here)
screenshot does not match file type. It should end with a `.png` extension
"type. It should end with a `.png` extension", UserWarning)
custom tearDown
**Relevant screenshots**
(Attach screenshots of the problem, if available)
(For image and device problems produced inside AirtestIDE, please paste the related errors from the AirtestIDE console window)
**Steps to reproduce**
Open our internal website
1. driver.find_element_by_xpath("//button[@type='submit']")
**Expected behavior**
(What you expect to get or see)
The script captured from the UI should execute normally
**Python version:** `python3.5`
**airtest version:** `1.0.69`
> The airtest version can be found with `pip freeze`
**Device:**
- Model: [e.g. google pixel 2]
- System: [e.g. Android 8.1]
- (other details)
**Other relevant environment info**
(Other runtime environments, e.g. fails on Linux Ubuntu 16.04 but works on Windows.)
|
closed
|
2019-08-03T06:21:23Z
|
2019-08-16T01:37:54Z
|
https://github.com/AirtestProject/Airtest/issues/486
|
[] |
chen072086
| 1
|
google-research/bert
|
tensorflow
| 1,376
|
Bert pre training approach
|
Hi,
I have started working on the BERT model. Does anyone know what BERT's pre-training accuracy (not fine-tuned) was with the 100-0-0 masking approach vs. the 80-10-10 approach? I could not find it anywhere.
Basically, I understand why the 80-10-10 approach is implemented, but did they run any experiments to figure this out?
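For anyone else reading, my understanding of the 80-10-10 rule in rough pseudocode (this is just an illustration, not the repo's actual implementation):
```python
import random

def choose_replacement(token, vocab, mask_token="[MASK]"):
    """Illustrative 80-10-10 rule applied to a token already selected for masking."""
    r = random.random()
    if r < 0.8:
        return mask_token             # 80%: replace with [MASK]
    elif r < 0.9:
        return random.choice(vocab)   # 10%: replace with a random vocabulary token
    else:
        return token                  # 10%: keep the original token
```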
|
open
|
2022-12-23T14:22:04Z
|
2022-12-23T14:22:17Z
|
https://github.com/google-research/bert/issues/1376
|
[] |
shu1273
| 0
|
deezer/spleeter
|
deep-learning
| 920
|
[Discussion] Is there any way to use Spleeter in Flutter?
|
Flutter has a TensorFlow Lite package.
https://pub-web.flutter-io.cn/packages/tflite_flutter
|
open
|
2024-12-09T12:25:23Z
|
2024-12-09T12:25:23Z
|
https://github.com/deezer/spleeter/issues/920
|
[
"question"
] |
badboy-tian
| 0
|
jupyter/nbviewer
|
jupyter
| 272
|
Notebooks with accents in the filename do not render
|
It seems that notebooks with accented characters in the filename do not render correctly. For example:
https://github.com/computo-fc/python_cientifico/blob/master/0.%20Por%20que%CC%81%20Python.ipynb
gives the following error:
```
404 : Not Found
You are requesting a page that does not exist!"
The remote resource was not found.
```
This also carries over to not showing the containing directory.
Other notebooks in the same directory with filenames not containing accents are fine:
http://nbviewer.ipython.org/github/computo-fc/python_cientifico/blob/master/2.%20Estructuras%20de%20control.ipynb
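For what it's worth, the failing filename uses a decomposed accent (`e` followed by a combining acute), which percent-encodes differently from the precomposed form; a small illustration:
```python
import unicodedata
import urllib.parse

name = "Por que\u0301 Python"  # NFD: 'e' + combining acute, as in the failing URL
print(urllib.parse.quote(name))
# -> 'Por%20que%CC%81%20Python'
print(urllib.parse.quote(unicodedata.normalize("NFC", name)))
# -> 'Por%20qu%C3%A9%20Python'
```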
|
closed
|
2014-05-05T21:57:30Z
|
2014-05-06T21:03:19Z
|
https://github.com/jupyter/nbviewer/issues/272
|
[] |
dpsanders
| 9
|
PokemonGoF/PokemonGo-Bot
|
automation
| 5,859
|
Automatic Installation
|
I don't understand why this is called an "automatic" installation when we have to search for and download some unfindable DLLs ourselves (encrypt.so and encrypt.dll or encrypt_64.dll), for which you cannot give us links.
|
closed
|
2017-01-05T21:22:33Z
|
2017-01-08T12:29:19Z
|
https://github.com/PokemonGoF/PokemonGo-Bot/issues/5859
|
[] |
mcferson
| 6
|
roboflow/supervision
|
deep-learning
| 686
|
Class "none person": how to remove it?
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I'm running this code on my Raspberry Pi 4 with a Pi camera to detect and count people using yolov8n, but sometimes it detects a class called "none person" and then shows several boxes. When these boxes cross the line they get counted as "none person", which throws off the count. I couldn't find anything in the documentation about this "none person" class; how do I disable it? (A filtering sketch follows the code below.)
```python
import cv2
import json
import numpy as np
from picamera2 import Picamera2
from ultralytics import YOLO
import supervision as sv
import os
class PiLineCounter:
def __init__(self, lines_json_path, model_path):
with open(lines_json_path, 'r') as f:
self.lines_data = json.load(f)
self.model = YOLO(model_path)
# Inicialização dos anotadores
self.line_annotator = sv.LineZoneAnnotator(
thickness=1,
text_thickness=1,
text_scale=1,
custom_in_text="entrando",
custom_out_text="saindo"
)
self.box_annotator = sv.BoxAnnotator(
thickness=2,
text_thickness=1,
text_scale=0.5
)
# Inicialização da PiCamera2
self.picam2 = Picamera2()
preview_config = self.picam2.create_preview_configuration()
self.picam2.configure(preview_config)
self.picam2.start()
# Inicialização dos contadores de linha
self.line_counters = {}
self.initialize_counters()
def initialize_counters(self):
for line_key, line_value in self.lines_data.items():
# Usar as coordenadas das linhas diretamente do JSON
start_point_x, start_point_y = line_value['points'][0]
end_point_x, end_point_y = line_value['points'][1]
start_point = sv.Point(start_point_x, start_point_y)
end_point = sv.Point(end_point_x, end_point_y)
self.line_counters[line_key] = sv.LineZone(start=start_point, end=end_point)
def run(self):
while True:
frame = self.picam2.capture_array()
frame = cv2.cvtColor(np.array(frame), cv2.COLOR_RGB2BGR)
results = self.model.track(frame, show=False, stream=False, agnostic_nms=True, imgsz=320)
print(f"Número de resultados de detecção: {len(results)}")
for result in results:
detections = sv.Detections.from_ultralytics(result)
if detections is None or len(detections.xyxy) == 0:
print("Nenhuma detecção neste frame. Pulando...")
continue
# Imprimir todas as detecções e seus respectivos class_id, labels e confianças
for d in detections:
class_id = d[3]
label = self.model.model.names[class_id]
confidence = d[2]
print(f"Detecção: class_id={class_id}, label={label}, confiança={confidence:.2f}")
detections_filtered = [d for d in detections if d[3] == 0]
print(f"Número de detecções de pessoas: {len(detections_filtered)}")
labels = [f"{d[4]} {self.model.model.names[d[3]]} {d[2]:0.2f}" for d in detections_filtered]
for line_key in self.line_counters.keys():
in_count, out_count = self.line_counters[line_key].trigger(detections=detections_filtered)
print(f"Linha {line_key}: Entrando - {in_count}, Saindo - {out_count}")
# Criar um objeto Detections que possa ser usado pelo BoxAnnotator
if len(detections_filtered) > 0:
xyxy = np.array([d[0] for d in detections_filtered])
confidences = np.array([d[2] for d in detections_filtered])
class_ids = np.array([d[3] for d in detections_filtered])
tracker_ids = np.array([d[4] for d in detections_filtered])
detections_for_annotation = sv.Detections(xyxy=xyxy, confidence=confidences, class_id=class_ids, tracker_id=tracker_ids)
frame = self.box_annotator.annotate(
scene=frame,
detections=detections_for_annotation,
labels=labels
)
else:
print("Nenhuma detecção de pessoas neste frame.")
for line_key in self.line_counters.keys():
self.line_annotator.annotate(frame=frame, line_counter=self.line_counters[line_key])
# Exibir o frame original sem redimensionamento
cv2.imshow("PiCamera Line Counter", frame)
#cv2.imwrite('/dev/shm/frame.jpg', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cv2.destroyAllWindows()
if __name__ == '__main__':
lines_json_path = "lines_with_doubled_data.json"
model_path = "yolov8n.pt"
pi_line_counter = PiLineCounter(
lines_json_path=lines_json_path,
model_path=model_path
)
pi_line_counter.run()
```
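Following up on the filtering question above: my current list-comprehension filter turns the detections into a plain Python list, so a hedged alternative (mirroring the calls already in the script) is to keep everything as a `Detections` object and mask on `class_id` (0 = person in the COCO classes):
```python
detections = sv.Detections.from_ultralytics(result)
person_detections = detections[detections.class_id == 0]  # numpy-style boolean mask

labels = [
    f"{tracker_id} {self.model.model.names[class_id]} {confidence:0.2f}"
    for class_id, confidence, tracker_id in zip(
        person_detections.class_id,
        person_detections.confidence,
        person_detections.tracker_id,
    )
]
in_count, out_count = self.line_counters[line_key].trigger(detections=person_detections)
```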
### Additional
[lines_with_doubled_data.json](https://github.com/roboflow/supervision/files/13736268/lines_with_doubled_data.json)
|
closed
|
2023-12-21T05:05:40Z
|
2023-12-28T12:25:28Z
|
https://github.com/roboflow/supervision/issues/686
|
[
"question"
] |
Rasantis
| 1
|
plotly/dash-cytoscape
|
plotly
| 162
|
Background image
|
Is there a way to set a background image?
|
open
|
2022-01-06T16:20:15Z
|
2022-01-06T16:20:15Z
|
https://github.com/plotly/dash-cytoscape/issues/162
|
[] |
hitnik
| 0
|
gradio-app/gradio
|
machine-learning
| 10,021
|
gr.State serializes pydantic BaseModel objects at initialization
|
### Describe the bug
When passing an object based on a Pydantic BaseModel to gr.State during initialization of the gr.State, the object gets serialized into a dictionary.
This doesn't happen for a regular class object.
Interestingly, when passing a Pydantic object into an already initialized gr.State the serialization does not occur (at least in a certain scenario).
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
from pydantic import BaseModel
class TestRegular:
def __init__(self, name):
self.name = name
class TestPydantic(BaseModel):
name: str
t_reg = TestRegular("hey there")
t_pyd = TestPydantic(name="wassup")
state_reg = gr.State(t_reg)
state_pyd = gr.State(t_pyd)
print("== regular class remains unchanged ==")
print(f"{t_reg=}")
print(f"{state_reg.value=}")
print("== pydantic is serialized ==")
print(f"{t_pyd=}")
print(f"{state_pyd.value=}")
```
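One workaround I'm considering, based purely on the observation above that plain class instances are left untouched (untested sketch; `Holder` is just a name I made up):
```python
class Holder:
    """Plain wrapper so gr.State's initial-value handling sees a regular object."""
    def __init__(self, obj):
        self.obj = obj

state_pyd_wrapped = gr.State(Holder(t_pyd))
print(state_pyd_wrapped.value.obj)  # expected to still be a TestPydantic instance
```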
### Screenshot

The output of running the above code snippet.
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.2
huggingface-hub: 0.25.2
jinja2: 3.1.4
markupsafe: 2.1.3
numpy: 1.26.4
orjson: 3.10.7
packaging: 24.1
pandas: 2.2.2
pillow: 10.4.0
pydantic: 2.10.1
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.6.7
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.6
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.5.0
httpx: 0.27.2
huggingface-hub: 0.25.2
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it
|
closed
|
2024-11-22T15:23:34Z
|
2024-11-27T19:25:03Z
|
https://github.com/gradio-app/gradio/issues/10021
|
[
"bug"
] |
filiso
| 0
|
huggingface/transformers
|
python
| 36,014
|
[Bug-Qwen2VL] Error when calling generate with fsdp2
|
I have split Qwen2VL-2B using fsdp2 across 8 GPUs. Following the [official example](https://github.com/huggingface/transformers/blob/main/tests/generation/test_fsdp.py#L81-L101), I call the generate function, but encounter the following error:
```markdown
[rank7]: File "/software/mamba/envs/mm/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1791, in prepare_inputs_for_generation
[rank7]: input_ids = input_ids[:, cache_position]
[rank7]: ~~~~~~~~~^^^^^^^^^^^^^^^^^^^
[rank7]: RuntimeError: CUDA error: device-side assert triggered
[rank7]: Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions
```
Upon inspecting the source code of `modeling_qwen2_vl.py`, I found that the `cache_position` at [line 1735](https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L1735), `input_ids = input_ids[:, cache_position]`, is out of bounds.
It seems that the generation has already stopped and `input_ids` is no longer being extended, but `cache_position` is still increasing. I am unsure of the deeper cause of this issue.
A quick fix I found is to add `cache_position[0] = min(cache_position[0], input_ids.shape[1] - 1)` above this line.
I'd like to ask for your help: does this seem like a viable fix? If not, is there a way to make generate work correctly under fsdp2 for Qwen2VL without modifying the source code?
Here is a minimal reproduction script (`transformers.__version__ == 4.47.0`). After the above fix, this script should run correctly on 8 GPUs.
```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch
import torch.distributed
from torch.distributed._composable.fsdp import fully_shard, register_fsdp_forward_method
from torch.distributed.device_mesh import init_device_mesh
from transformers.models.qwen2_vl.modeling_qwen2_vl import Qwen2VLDecoderLayer
queries = [
"What is the main object shown in the picture?",
"What type of plants are present in the image?",
"What items can be seen on the table?",
"Is the view outside the window sunny or cloudy?",
"Can you spot any drinks in the image?",
"Are there any decorations or paintings on the wall?",
"What is the color and material of the chair?",
"Is the scene set indoors or outdoors?"
]
data = []
for query in queries:
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": query},
],
}
]
data.append(messages)
def fsdp2_generate():
torch.cuda.set_device(device := torch.device(rank := torch.distributed.get_rank()))
model = Qwen2VLForConditionalGeneration.from_pretrained(
"Qwen2/Qwen2-VL-2B-Instruct",
device_map="cpu"
)
model.to(device)
mesh = init_device_mesh("cuda", (torch.distributed.get_world_size(),))
for submodule in model.modules():
if isinstance(submodule, Qwen2VLDecoderLayer):
fully_shard(submodule, mesh=mesh)
fully_shard(model, mesh=mesh)
register_fsdp_forward_method(model, "generate")
processor = AutoProcessor.from_pretrained("Qwen2/Qwen2-VL-2B-Instruct")
messages = data[rank]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, _ = process_vision_info(messages)
batch = processor(
text=[text],
images=image_inputs,
return_tensors="pt",
).to(device)
query_responses = model.generate(
**batch,
max_new_tokens=128,
)
context_length = batch.input_ids.shape[1]
responses = query_responses[:, context_length:]
response_texts = processor.batch_decode(
responses, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
rank = torch.distributed.get_rank()
print()
print(f"Rank {rank}: {response_texts}")
print()
if __name__ == "__main__":
torch.distributed.init_process_group(backend='nccl', world_size=torch.cuda.device_count())
fsdp2_generate()
torch.distributed.destroy_process_group()
```
### Who can help?
@amyeroberts @qubvel @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The same minimal reproduction script as shown in the description above.
### Expected behavior
All ranks can generate correct outputs.
|
closed
|
2025-02-03T13:35:57Z
|
2025-02-04T01:15:16Z
|
https://github.com/huggingface/transformers/issues/36014
|
[
"bug",
"VLM"
] |
mantle2048
| 2
|
davidsandberg/facenet
|
computer-vision
| 404
|
where to find the model file?
|
Hi, experts,
I have tried to train the model. After training finishes, I can find some files under the models folder, including .meta, .index, .data-00000-of-00001 and checkpoint, but I cannot find the .pb file.
Where can I get the .pb file, which can be used for comparison?
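For reference, the repository also ships a `freeze_graph.py` script for exactly this step. As a rough hand-rolled sketch of the same idea with TensorFlow 1.x (the output node name `"embeddings"` and the paths are assumptions; adjust them to your graph):
```python
# Rough sketch: freeze a TF1 checkpoint into a .pb file.
import tensorflow as tf

def freeze_checkpoint(meta_path, ckpt_dir, output_pb, output_node="embeddings"):
    with tf.Session() as sess:
        # Rebuild the graph from the .meta file and restore the latest checkpoint.
        saver = tf.train.import_meta_graph(meta_path)
        saver.restore(sess, tf.train.latest_checkpoint(ckpt_dir))
        # Bake the variables into constants so the graph is self-contained.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, [output_node]
        )
        with tf.gfile.GFile(output_pb, "wb") as f:
            f.write(frozen.SerializeToString())
```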
|
closed
|
2017-08-01T02:33:48Z
|
2017-10-21T11:34:53Z
|
https://github.com/davidsandberg/facenet/issues/404
|
[] |
tonybaigang
| 2
|
jofpin/trape
|
flask
| 57
|
trape showing issue
|
[2018-10-03 11:14:38,961] ERROR in app: Exception on /rl [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/root/trape/core/victim.py", line 36, in homeVictim
html = victim_inject_code(opener.open(trape.url_to_clone).read(), 'lure')
File "/usr/lib/python2.7/urllib2.py", line 421, in open
protocol = req.get_type()
File "/usr/lib/python2.7/urllib2.py", line 283, in get_type
raise ValueError, "unknown url type: %s" % self.__original
ValueError: unknown url type: rl
|
closed
|
2018-10-03T15:15:21Z
|
2018-11-24T01:54:59Z
|
https://github.com/jofpin/trape/issues/57
|
[] |
asciiterminal
| 1
|
xuebinqin/U-2-Net
|
computer-vision
| 308
|
3D Photo: New app on iOS using U-2-Net
|
Hey everyone,
Happy to share that we finished last week a new app using U-2-Net, that lets anyone create engaging animated video from any static photo. The app offers dozens of animated 3D motion styles. In seconds, you can turn a photo (or Live Photo) into an animated video using U-2-Net. Optionally add 3D layers, apply filters, animated overlays, music, add text / stickers, and save & share the video.
I hope you will like it, and add it to the README of the project @xuebinqin
https://apps.apple.com/us/app/3d-photo-creator/id1619676262
[Demo GIF](https://i.giphy.com/media/JAzFTPLZ4lSH1khb15/giphy-downsized-large.gif)


Have fun :)
|
open
|
2022-06-03T07:37:32Z
|
2022-06-03T07:37:32Z
|
https://github.com/xuebinqin/U-2-Net/issues/308
|
[] |
adirkol
| 0
|
lexiforest/curl_cffi
|
web-scraping
| 325
|
[BUG] KEY_USAGE_BIT_INCORRECT CHECK PLS PLS
|
https://github.com/yifeikong/curl_cffi/issues/323
Look, we need to solve this problem, I'm willing to pay 100-200$. Give me your contacts and we'll talk!
Here's the workaround with httpx, but I really want it to work in your library! It's all about the certificate; we need openssl/certifi.
import asyncio
import ssl
import httpx
ssl_context = ssl.create_default_context()
ssl_context.set_ciphers(':HIGH:!DH:!aNULL')
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
LOCK = asyncio.Lock()
transport = httpx.AsyncHTTPTransport(retries=3, verify=ssl_context)
async with httpx.AsyncClient(
    headers=headers,
    transport=transport,
    timeout=300.0,
    follow_redirects=True
) as session:
    ...  # requests are made with `session` inside an async function
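For comparison, a minimal curl_cffi sketch under the assumption that the goal is simply to skip certificate verification, mirroring the httpx workaround above (the URL and impersonation target are placeholders):
```python
from curl_cffi import requests

# Placeholder URL; verify=False disables certificate verification,
# which is what the httpx workaround above effectively does.
resp = requests.get(
    "https://example.com",
    impersonate="chrome110",
    verify=False,
    timeout=300,
)
print(resp.status_code)
```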
|
closed
|
2024-06-12T13:16:21Z
|
2024-06-13T06:42:13Z
|
https://github.com/lexiforest/curl_cffi/issues/325
|
[
"bug"
] |
viskok-yuri
| 1
|
JaidedAI/EasyOCR
|
deep-learning
| 520
|
How to train the recognition model?
|
I have a set of pictures, but the recognition accuracy of the existing model is not high. How can I train my own recognition model?
|
closed
|
2021-08-18T09:30:42Z
|
2022-03-02T09:25:33Z
|
https://github.com/JaidedAI/EasyOCR/issues/520
|
[] |
yourstar9
| 6
|
localstack/localstack
|
python
| 11,454
|
[INC-16] Certificate revocation for localhost.localstack.cloud
|
Updates for the ongoing issue (`INC-16`) with the certificate revocation for localhost.localstack.cloud[[1](https://localstack.statuspage.io/incidents/qcft2h8sffsb)][[2](https://localstack.statuspage.io/incidents/lpwmzs8x47y8)].
> [!IMPORTANT]
> We recommend **[updating to the latest LocalStack version](https://docs.localstack.cloud/getting-started/installation/#updating)** for the most reliable and seamless experience.
<details>
<summary>🟢 Sep 6, 2024: Incident resolved with the LocalStack 3.7.2 patch release.</summary>
### Summary
- No further service degradation observed for the past 48 hours.
- All fixes are applied in the new 3.7.2 patch release. Make sure to [update Docker images](https://docs.localstack.cloud/references/docker-images/) using `latest` or `3.7.2` tag.
- If you are using an older version, you may still encounter certificate revocation issues. Please, update to the latest version or see [How do I resolve SSL issues due to revoked local certificate for localhost.localstack.cloud](https://docs.localstack.cloud/getting-started/faq/#how-do-i-resolve-ssl-issues-due-to-revoked-local-certificate-for-localhostlocalstackcloud) in the docs.
</details>
<details>
<summary>🟢 Sep 5, 2024: No service degradation observed for the past 24 hours.</summary>
### Summary
Incident under control with short and mid-term solutions in the latest LocalStack version.
</details>
<details>
<summary>🟡 Sep 4, 2024: Recommended action: update LocalStack CLI to the latest version (3.7)</summary>
### Summary
- **CI/CLI usage**: Older images were encountering issues downloading the certificate from GitHub and the CDN, resulting in a fallback to a self-signed certificate that affected CI/CLI functionality. We have implemented a fix to restore certificate downloads from GitHub, resolving the CI/CLI issues with older LS images.
</details>
<details>
<summary>🟡 Sep 3, 2024: Incident contained, with temporary workarounds. We're actively working on a long-term solution</summary>
### Summary
- **Incident contained**: We’ve implemented short-term fixes to contain the issue. The incident isn’t fully resolved yet, but we’re working on it.
- **Temporary workarounds**: If you still experience certificate revocation issues:
1. Set the environment variable `SKIP_SSL_CERT_DOWNLOAD=1` to use a self-signed SSL certificate.
2. Use `http://` instead of `https://` where possible.
- **Long-term solution**: We’re working on a permanent fix and will update you as we progress.
- **Recent DNS resolution issues**: Some customers experienced DNS issues from yesterday afternoon until this morning (CET). This has been fixed, and certificate renewals should no longer impact DNS resolution.
</details>
---
We are actively working on a long-term solution and will keep you updated. 🐿️
Follow [status.localstack.cloud](https://status.localstack.cloud/) for more updates, and thanks for your patience! 💛
|
closed
|
2024-09-03T15:01:52Z
|
2024-09-25T08:01:41Z
|
https://github.com/localstack/localstack/issues/11454
|
[] |
gtsiolis
| 0
|
sebp/scikit-survival
|
scikit-learn
| 473
|
Histogram-based Gradient Boosting survival models
|
It would be great to have Histogram-based Gradient Boosting models in addition to the existing ones, as they are much more scalable:
They are supported by scikit-learn:
- https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.HistGradientBoostingRegressor.html#sklearn.ensemble.HistGradientBoostingRegressor
|
open
|
2024-08-08T07:48:50Z
|
2025-02-25T19:59:00Z
|
https://github.com/sebp/scikit-survival/issues/473
|
[
"enhancement"
] |
ogencoglu
| 2
|
matplotlib/mplfinance
|
matplotlib
| 523
|
How to draw a hidden K-line with mplfinance?
|
@DanielGoldfarb :
When there is a lot of K-line data, the chart is not intuitive to read. I want to draw hidden K-lines.
The following figure shows the effect drawn with Matplotlib.
How can I draw such an effect with mplfinance?

|
closed
|
2022-04-21T02:30:38Z
|
2022-06-09T11:40:51Z
|
https://github.com/matplotlib/mplfinance/issues/523
|
[
"question"
] |
lyl1836
| 4
|
sigmavirus24/github3.py
|
rest-api
| 705
|
Support for fetching README rendered as HTML
|
The Github API supports fetching a repository's README rendered as HTML, i.e. as displayed in the Github web interface. This feature is described here [Contents: custom media types](https://developer.github.com/v3/repos/contents/#custom-media-types). In fact that seems to work for any file although I haven't actually tested this.
With a bit of trickery this is already possible with *github3.py*,
```python
import github3
repo = github3.repository('sigmavirus24', 'github3.py')
# Here come the clever bits
repo._session.headers['Accept'] = 'application/vnd.github.v3.html'
url = repo._build_url('readme', base_url=repo._api)
response = repo._get(url)
print response.content
# Don't forget to reset the header!!!
repo._session.headers['Accept'] = 'application/vnd.github.v3.full+json'
```
This could be added to the API by either
* Adding a `Repository.readme_as_html()` method, or
* Adding a `format="html"` argument to `Repository.readme()` and `Repository.contents()` methods.
Any thoughts?
Markus
|
open
|
2017-05-17T10:47:40Z
|
2018-03-22T02:35:33Z
|
https://github.com/sigmavirus24/github3.py/issues/705
|
[] |
mjuenema
| 1
|
coqui-ai/TTS
|
deep-learning
| 3,488
|
next steps after shutdown
|
According to the main site https://coqui.ai/ , coqui is shutting down, which is unfortunate as these open source libraries are great and could still be maintained. I'm wondering if there are any next steps to proceed, like if the license should allow for commercial use and the open source community could fork this repository to keep it alive. Hoping we can still get use out of it because I'd say it is currently the best open-source voice synthesis and cloning toolkit out there at the moment.
For reference, I also have tried standalone bark, StyleTTS 2 and OpenVoice as alternatives, but I find the voice cloning was not as good as this library (general synthesis is pretty good, but cloning in particular is hard to get right).
|
open
|
2024-01-03T14:38:14Z
|
2025-03-15T11:51:33Z
|
https://github.com/coqui-ai/TTS/issues/3488
|
[
"feature request"
] |
bachittle
| 31
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,710
|
I have a problem creating the stems for every song
|
Last Error Received:
Process: Demucs
If this error persists, please contact the developers with the error details.
Raw Error Details:
RuntimeError: "Could not allocate tensor with 231211008 bytes. There is not enough GPU video memory available!"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 855, in seperate
File "separate.py", line 1000, in demix_demucs
File "demucs\apply.py", line 196, in apply_model
File "demucs\apply.py", line 222, in apply_model
File "demucs\apply.py", line 256, in apply_model
File "demucs\utils.py", line 490, in result
File "demucs\apply.py", line 271, in apply_model
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "demucs\htdemucs.py", line 593, in forward
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "demucs\transformer.py", line 667, in forward
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "demucs\transformer.py", line 365, in forward
File "torch\nn\modules\transformer.py", line 581, in _sa_block
File "torch\nn\modules\module.py", line 1501, in _call_impl
File "torch\nn\modules\activation.py", line 1189, in forward
File "torch\nn\functional.py", line 5334, in multi_head_attention_forward
"
Error Time Stamp [2025-01-22 16:51:48]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 5
window_size: 512
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: True
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems
|
open
|
2025-01-22T15:55:08Z
|
2025-02-02T20:10:26Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1710
|
[] |
Flavioalex75
| 2
|
keras-team/keras
|
deep-learning
| 20,108
|
Bug in `keras.src.saving.saving_lib._save_model_to_dir`
|
`tf.keras.__version__` -> "3.4.1"
If the model has already been saved, the call chain `keras.src.models.model.Model.save` -> `keras.src.saving.saving_lib._save_model_to_dir` hits `asset_store = DiskIOStore(assert_dirpath, mode="w")` ([Line - 178](https://github.com/keras-team/keras/blob/master/keras/src/saving/saving_lib.py#L179)), which raises `FileExistsError`. While that exception is being handled, the `finally` clause line `asset_store.close()` ([Line - 189](https://github.com/keras-team/keras/blob/master/keras/src/saving/saving_lib.py#L189)) then causes `UnboundLocalError: local variable 'asset_store' referenced before assignment`, because `asset_store` was never defined.
```shell
FileExistsError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _save_model_to_dir(model, dirpath, weights_format)
139 )
--> 140 asset_store = DiskIOStore(assert_dirpath, mode="w")
141 _save_state(
FileExistsError: [Errno 17] File exists: '/content/.../model_weights/assets'
During handling of the above exception, another exception occurred:
UnboundLocalError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _save_model_to_dir(model, dirpath, weights_format)
148 finally:
149 weights_store.close()
--> 150 asset_store.close()
151
152
UnboundLocalError: local variable 'asset_store' referenced before assignment
```
Solution: move `asset_store.close()` out of the `finally` clause into the `try` clause, or only call `asset_store.close()` when `asset_store` is defined (update lines 158 to 189, i.e., https://github.com/keras-team/keras/blob/master/keras/src/saving/saving_lib.py#L158-L189)
```python
def _save_model_to_dir(model, dirpath, weights_format):
if not file_utils.exists(dirpath):
file_utils.makedirs(dirpath)
config_json, metadata_json = _serialize_model_as_json(model)
with open(file_utils.join(dirpath, _METADATA_FILENAME), "w") as f:
f.write(metadata_json)
with open(file_utils.join(dirpath, _CONFIG_FILENAME), "w") as f:
f.write(config_json)
weights_filepath = file_utils.join(dirpath, _VARS_FNAME_H5)
assert_dirpath = file_utils.join(dirpath, _ASSETS_DIRNAME)
try:
if weights_format == "h5":
weights_store = H5IOStore(weights_filepath, mode="w")
elif weights_format == "npz":
weights_store = NpzIOStore(weights_filepath, mode="w")
else:
raise ValueError(
"Unknown `weights_format` argument. "
"Expected 'h5' or 'npz'. "
f"Received: weights_format={weights_format}"
)
asset_store = DiskIOStore(assert_dirpath, mode="w")
_save_state(
model,
weights_store=weights_store,
assets_store=asset_store,
inner_path="",
visited_saveables=set(),
)
finally:
weights_store.close()
        if ('asset_store' in locals()): asset_store.close()  # close only if `asset_store` was defined
```
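An equivalent variant (just a sketch, not the proposed patch above) is to initialize both store variables to `None` before the `try` block and close only the handles that were actually created:
```python
# Sketch of an alternative guard: pre-initialize, then close conditionally.
weights_store = None
asset_store = None
try:
    ...  # create the stores and save state as above
finally:
    if weights_store is not None:
        weights_store.close()
    if asset_store is not None:
        asset_store.close()
```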
|
closed
|
2024-08-10T13:12:49Z
|
2024-08-15T05:33:26Z
|
https://github.com/keras-team/keras/issues/20108
|
[
"stat:awaiting response from contributor",
"type:Bug"
] |
MegaCreater
| 6
|
deeppavlov/DeepPavlov
|
tensorflow
| 896
|
Bert_Squad context length longer than 512 tokens sequences
|
Hi,
Currently I have a long context, and the answer cannot be extracted when the context exceeds a certain length. Is there any Python code to deal with long contexts?
P.S. I am using a BERT-based model for context question answering.
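A common workaround is to slide an overlapping window over the long context and keep the best-scoring answer. A minimal, model-agnostic sketch, where `answer_fn(context, question) -> (answer, score)` is an assumed callable wrapping whatever QA model you use:
```python
# Sliding-window sketch for QA over contexts longer than the model limit.
def answer_long_context(context, question, answer_fn, max_chars=1500, stride=500):
    best_answer, best_score = "", float("-inf")
    start = 0
    while start < len(context):
        chunk = context[start:start + max_chars]
        answer, score = answer_fn(chunk, question)
        if score > best_score:
            best_answer, best_score = answer, score
        if start + max_chars >= len(context):
            break
        start += max_chars - stride  # overlap so answers near chunk borders are not lost
    return best_answer, best_score
```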
|
closed
|
2019-06-23T14:37:55Z
|
2020-05-13T11:41:45Z
|
https://github.com/deeppavlov/DeepPavlov/issues/896
|
[] |
Chunglwc
| 2
|
litestar-org/litestar
|
api
| 3,801
|
Docs: improve "Improving performance with the codegen backend" docs in "DTOs"
|
### Summary
Link: https://docs.litestar.dev/latest/usage/dto/0-basic-use.html#improving-performance-with-the-codegen-backend
Why do I think that it needs a refactor?
1. It was written when `experimental_codegen_backend` was not enabled by default
2. Right now it makes more sense to show how to turn it off
3. We can still leave a note for older version on how to turn this on, but it should not be the default intention
For example: `app = Litestar(experimental_features=[ExperimentalFeatures.DTO_CODEGEN])` will now produce a warning:
```python
if ExperimentalFeatures.DTO_CODEGEN in self.experimental_features:
warnings.warn(
"Use of redundant experimental feature flag DTO_CODEGEN. "
"DTO codegen backend is enabled by default since Litestar 2.8. The "
"DTO_CODEGEN feature flag can be safely removed from the configuration "
"and will be removed in version 3.0.",
category=LitestarWarning,
stacklevel=2,
)
```
Plus, I can see several typos there :)
PR is incoming! 🏎️
|
closed
|
2024-10-15T13:44:27Z
|
2025-03-20T15:54:58Z
|
https://github.com/litestar-org/litestar/issues/3801
|
[
"Documentation :books:",
"DTOs"
] |
sobolevn
| 0
|
geex-arts/django-jet
|
django
| 103
|
The revenue model of Django JET
|
I think Django JET has the most potential of all the Django admin extensions available to date. Simply because it has the most complete responsive experience with easy integration.
That said, I question the revenue model of Django JET, Mr. @f1nality.
Do you want this to be the product of a one-man army?
Or do you prefer to have a community around Django JET?
I just can't see a reason why people would want to contribute to a product, where you would have to buy a license to use the product commercially. I honestly think there are other revenue models like BountySource or donations that would not eliminate the possibility to make this a community product.
|
closed
|
2016-08-19T09:49:34Z
|
2016-08-27T11:30:32Z
|
https://github.com/geex-arts/django-jet/issues/103
|
[] |
Zundrium
| 2
|
marshmallow-code/flask-smorest
|
rest-api
| 65
|
Path parameters: document converters parameters
|
In FlaskPlugin, add min/max to number converters.
Manage negative values introduced in Werkzeug 0.15 (https://github.com/pallets/werkzeug/pull/1355).
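For context, a rough sketch of the kind of route this concerns; the ask is for FlaskPlugin to reflect converter arguments such as `min`/`max` in the generated OpenAPI parameter schema (blueprint name and view below are illustrative only):
```python
from flask.views import MethodView
from flask_smorest import Blueprint

blp = Blueprint("pets", __name__)

# Werkzeug's int converter accepts min/max; these bounds should end up
# as minimum/maximum on the documented path parameter.
@blp.route("/pets/<int(min=1, max=1000):pet_id>")
class PetById(MethodView):
    def get(self, pet_id):
        return {"id": pet_id}
```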
|
closed
|
2019-05-03T07:54:56Z
|
2020-10-01T21:32:17Z
|
https://github.com/marshmallow-code/flask-smorest/issues/65
|
[
"enhancement",
"help wanted",
"backwards incompat"
] |
lafrech
| 1
|
iperov/DeepFaceLab
|
machine-learning
| 5,232
|
pressing "L" key while in training preview switches to command window
|
## Expected behavior
2021 releases - pressing "L" key is supposed to change the graph granularity in the training preview window
## Actual behavior
"L" key now just switches to command window or brings it to the front. Can get around by pressing shift+L in preview window
## Steps to reproduce
Just hit L at the preview window.
## Other relevant information
|
closed
|
2021-01-04T23:25:06Z
|
2021-01-07T17:21:50Z
|
https://github.com/iperov/DeepFaceLab/issues/5232
|
[] |
frighte
| 2
|
iMerica/dj-rest-auth
|
rest-api
| 330
|
Allauth creates migrations in site packages - Django 3.2.4
|
Hi, it seems that Django 3.2.X is not properly supported due to a bug which I guess comes from allauth and seems to be fixed in the latest version, as per https://github.com/pennersr/django-allauth/issues/2971
This results in migrations being created inside the `site-packages` of the virtual environment.
This has been already reported https://github.com/pennersr/django-allauth/issues/2891
According to release notes of AllAuth Django3.2 compatibility has been released just 2 days ago (as of writing this) https://github.com/pennersr/django-allauth/blob/0.46.0/ChangeLog.rst
Steps to reproduce:
1) Create venv
2) `pip install "dj-rest-auth[with_social]==2.1.11"`
3) `django-admin startproject config`
4) `cd config`
5) update `config/settings.py`
```
# all auth
ACCOUNT_UNIQUE_EMAIL = False
ACCOUNT_EMAIL_REQUIRED = True
# ACCOUNT_EMAIL_VERIFICATION = "mandatory"
ACCOUNT_EMAIL_VERIFICATION = True
ACCOUNT_EMAIL_CONFIRMATION_HMAC = False
SITE_ID = 1
INSTALLED_APPS += [
"django.contrib.sites",
"allauth",
"allauth.account",
# "allauth.socialaccount",
"dj_rest_auth",
"dj_rest_auth.registration",
"rest_framework",
"rest_framework.authtoken",
]
```
6) `./manage.py makemigrations`
7) Migration created in virtualenv
```
Migrations for 'account':
/venv/lib/python3.8/site-packages/allauth/account/migrations/0003_auto_20211117_1455.py
- Alter field id on emailaddress
- Alter field id on emailconfirmation
```
|
open
|
2021-11-17T14:57:57Z
|
2021-11-17T14:57:57Z
|
https://github.com/iMerica/dj-rest-auth/issues/330
|
[] |
1oglop1
| 0
|
jazzband/django-oauth-toolkit
|
django
| 1,018
|
access django request in OAuth2Validator.get_additional_claims
|
In get_additional_claims I want to return the URL of the user's avatar, something like Google's claims...
But the Django image field only gives the file path, without the domain.
Based on https://stackoverflow.com/questions/1451138/how-can-i-get-the-domain-name-of-my-site-within-a-django-template
I have to somehow access the Django request to find the site domain and build the avatar's full URL.
But based on https://django-oauth-toolkit.readthedocs.io/en/1.5.0/oidc.html?highlight=OAuth2Validator#adding-claims-to-the-id-token
the request object that is passed to get_additional_claims isn't a Django request object and seems to have no information about the site domain and its scheme for building the full URL.
I know I can set a variable in settings like `SITE_URL` or use `contrib.sites` to build the full URL (and I currently use `SITE_URL`), but using the Django request object is a far better solution because it won't break when the domain changes.
So is there any way to access the Django request object, or can you provide an interface (or anything) for it?
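For reference, a sketch of the sites-framework approach mentioned above; it assumes `request.user` is the Django user and that `avatar` is an `ImageField` on it, and it hard-codes the scheme, which is exactly the limitation being discussed:
```python
from django.contrib.sites.models import Site
from oauth2_provider.oauth2_validators import OAuth2Validator

class CustomOAuth2Validator(OAuth2Validator):
    def get_additional_claims(self, request):
        # `request` here is the oauthlib request; request.user is the Django user.
        domain = Site.objects.get_current().domain
        return {
            "picture": f"https://{domain}{request.user.avatar.url}",
        }
```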
|
closed
|
2021-10-01T14:42:47Z
|
2023-10-04T15:01:06Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/1018
|
[
"question"
] |
amirhoseinbidar
| 1
|
PokeAPI/pokeapi
|
api
| 734
|
Add Pokemon strengths and weaknesses
|
Hey,
During my use of the API I also discovered that in a normal [Pokémon Request](https://pokeapi.co/api/v2/pokemon/1) no strengths and weaknesses are provided in the response JSON.
It would be really cool if you could expand on this.
Best Regards
|
open
|
2022-07-18T06:55:52Z
|
2022-11-11T04:07:24Z
|
https://github.com/PokeAPI/pokeapi/issues/734
|
[] |
bidery
| 1
|
mherrmann/helium
|
web-scraping
| 52
|
How to do multiple select?
|
Hi,
I am trying to select more than one option from a multi select element.
Right now, I've achieved this by writing a select command for each option like
``` Python
select(ComboBox('multi select element'), 'option 1')
select(ComboBox('multi select element'), 'option 2')
select(ComboBox('multi select element'), 'option 4')
```
Is there any better or easier way to do this?
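A slightly tidier variant of the same idea (as far as I know, helium has no single multi-select call, so looping is still the way to do it):
```python
from helium import ComboBox, select

# Loop over the options instead of repeating the call by hand.
for option in ['option 1', 'option 2', 'option 4']:
    select(ComboBox('multi select element'), option)
```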
|
closed
|
2020-12-19T19:15:33Z
|
2020-12-21T04:14:48Z
|
https://github.com/mherrmann/helium/issues/52
|
[] |
some-sh
| 1
|
CanopyTax/asyncpgsa
|
sqlalchemy
| 60
|
pip failed to install package
|
pip can't install **_asyncpgsa_** if it's part of a requirements list alongside **_asyncpg_** (in other words, it requires _asyncpg_ to be installed before installing _asyncpgsa_). Is there a way to change how the version is determined?
```
Collecting asyncpg==0.12.0 (from -r .meta/packages (line 5))
....
Collecting asyncpgsa==0.18.1 (from -r .meta/packages (line 6))
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-gcdgh8ov/asyncpgsa/setup.py", line 6, in <module>
version=__import__('asyncpgsa').__version__,
File "/tmp/pip-build-gcdgh8ov/asyncpgsa/asyncpgsa/__init__.py", line 1, in <module>
from .pool import create_pool
File "/tmp/pip-build-gcdgh8ov/asyncpgsa/asyncpgsa/pool.py", line 3, in <module>
import asyncpg
ModuleNotFoundError: No module named 'asyncpg'
```
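The usual workaround (a sketch, assuming `__version__` is defined in `asyncpgsa/__init__.py`) is for `setup.py` to read the version string with a regex instead of importing the package, so asyncpg does not have to be installed at build time:
```python
# setup.py sketch: parse __version__ without importing asyncpgsa.
import re

def read_version():
    with open("asyncpgsa/__init__.py") as f:
        match = re.search(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", f.read())
    return match.group(1)
```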
|
closed
|
2017-12-15T15:57:52Z
|
2018-02-13T00:13:09Z
|
https://github.com/CanopyTax/asyncpgsa/issues/60
|
[] |
vayw
| 2
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 90
|
Cannot find the params.json file; it is not in the model files either
|
FileNotFoundError: [Errno 2] No such file or directory:
'/content/drive/MyDrive/model/7B/params.json'
|
closed
|
2023-04-09T05:53:20Z
|
2023-08-24T13:16:47Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/90
|
[] |
Song367
| 4
|
ymcui/Chinese-LLaMA-Alpaca-2
|
nlp
| 474
|
Questions about training data for position interpolation
|
### Checklist before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g., [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui); it is recommended to look for solutions in the corresponding projects as well.
### Issue type
Other
### Base model
Others
### Operating system
Linux
### Detailed description
Hi all, I did not get a reply when asking in the discussion area, so I am also opening an issue here; apologies for the duplication.
I have a model with an 8k context length and would like to extend it to 16k. I have already tested direct inference with NTK scaling, but I want to train the model with linear position interpolation and evaluate that as well; the linear interpolation code is already implemented (a rough configuration sketch follows after the questions below).
So I would like to ask a few questions:
1. After adding position interpolation, should the training be SFT or continued pre-training?
2. After adding position interpolation, what training corpus length is usually most appropriate? For example, if the existing model is 8k and the goal is to train the context length to 16k, should the training samples be 16k long?
3. According to the [PI](https://arxiv.org/abs/2306.15595) paper, position interpolation training can reach good results within on the order of a thousand steps. I would like to learn from your experience: how many steps did you train for when doing position interpolation training for Chinese-LLaMA-Alpaca?
4. If the training data is open source, could you share a link to it?
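As mentioned above, a rough sketch of what a linear-interpolation setup can look like with a recent transformers release; the `rope_scaling` field and the factor of 2 (for 8k -> 16k) reflect my assumptions about the setup, not this project's official recipe:
```python
from transformers import AutoConfig, AutoModelForCausalLM

# Extend an 8k model to 16k via linear RoPE interpolation (factor = 16k / 8k = 2),
# then continue pre-training / SFT on long sequences as usual.
config = AutoConfig.from_pretrained("path/to/8k-model")
config.rope_scaling = {"type": "linear", "factor": 2.0}
model = AutoModelForCausalLM.from_pretrained("path/to/8k-model", config=config)
```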
### Dependencies (required for code-related issues)
```
# Paste dependency information here (inside this code block)
```
### Run logs or screenshots
```
# Paste run logs here (inside this code block)
```
|
closed
|
2023-12-14T07:35:44Z
|
2023-12-28T23:46:51Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/474
|
[
"stale"
] |
KyrieXu11
| 5
|
plotly/dash-cytoscape
|
dash
| 206
|
[BUG] CyLeaflet: Updating tile layer causes map to be initially blue before pan/zoom
|
#### Description
When the tile layer of a CyLeaflet component is updated via callback, the map shows initially blue before manual pan/zoom. After manual pan/zoom, the map renders normally.
This happens whether the tile layer is updated by re-instantiating the entire CyLeaflet component, or by using a callback to update just the `children` of the underlying Leaflet component.
Initially (after callback):

After zooming out then in:

#### Steps/Code to Reproduce
```python
import dash
from dash import html, dcc, callback, Input, Output
import dash_cytoscape as cyto
import dash_leaflet as dl
CARTO_TILES = dl.TileLayer(
url="https://{s}.basemaps.cartocdn.com/rastertiles/voyager_labels_under/{z}/{x}/{y}{r}.png",
maxZoom = 30,
attribution='© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors © <a href="https://carto.com/attributions">CARTO</a>',
)
ELEMENTS = [
{"data": {"id": "a", "label": "Node A", "lat": 45.519, "lon": -73.576}},
{"data": {"id": "b", "label": "Node B", "lat": 45.521, "lon": -73.574}},
{"data": {"id": "c", "label": "Node C", "lat": 45.520, "lon": -73.572}},
{"data": {"id": "ab", "source": "a", "target": "b"}},
]
cyleaflet_leaflet_id= {
"id":"cyleaflet_tiles_from_callback",
"component":"cyleaflet",
"sub": "leaf",
}
def serve_layout():
return html.Div(
children=[
html.Div('Tiles dropdown'),
dcc.Dropdown(id='tiles_dropdown',
options=[{'label': x, 'value': x} for x in ['OSM', 'CARTO']],
value='CARTO',
),
cyto.CyLeaflet(
id="cyleaflet_tiles_from_callback",
cytoscape_props=dict(
elements=ELEMENTS,
),
),
],
)
app = dash.Dash(__name__)
server = app.server
app.layout = serve_layout
@callback(
Output(cyleaflet_leaflet_id, "children"),
Input("tiles_dropdown", "value"),
)
def update_tiles(tiles):
if tiles == 'OSM':
return cyto.CyLeaflet.OSM
else:
return CARTO_TILES
if __name__ == "__main__":
app.run_server(debug=True)
```
#### Versions
`dash_cytoscape==1.0.0`
|
closed
|
2024-02-07T15:56:30Z
|
2024-07-11T09:10:39Z
|
https://github.com/plotly/dash-cytoscape/issues/206
|
[] |
emilykl
| 2
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 10
|
When downloading multiple video links it inexplicably gets stuck on the sixth one, with no error.
|
```
127.0.0.1 - - [17/Mar/2022 20:15:05] "GET /?app=index HTTP/1.1" 200 -
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7074510252833606925
Type = video
http://v95-a.douyinvod.com/3ae2fc443a268604e866b91136d7f97c/6233347a/video/tos/cn/tos-cn-ve-15c001-alinc2/0515dd92f27f421b874fbd0009f2672e/?a=1128&br=1117&bt=1117&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsd38yytqY&l=202203172015030102080971031F05DDA3&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=M2drOzU6ZnV1OzMzNGkzM0ApNjw2PGY4Nzw5N2g8NGdkN2cpaGRqbGRoaGRmYmctNXI0MDVqYC0tZC0vc3MxMDBhLzMzYTZjMC01LV5fOmNwb2wrbStqdDo%3D&vl=&vr=
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7074510303102552869.mp3
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7074510303102552869.mp3
惊不惊喜,意不意外#搞笑 #沙雕 @磁铁李飞(沙雕村)
彭恰恰(沙雕村)
pengqq88888
getting douyin result ['http://v95-a.douyinvod.com/3ae2fc443a268604e866b91136d7f97c/6233347a/video/tos/cn/tos-cn-ve-15c001-alinc2/0515dd92f27f421b874fbd0009f2672e/?a=1128&br=1117&bt=1117&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsd38yytqY&l=202203172015030102080971031F05DDA3&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=M2drOzU6ZnV1OzMzNGkzM0ApNjw2PGY4Nzw5N2g8NGdkN2cpaGRqbGRoaGRmYmctNXI0MDVqYC0tZC0vc3MxMDBhLzMzYTZjMC01LV5fOmNwb2wrbStqdDo%3D&vl=&vr=', 'https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7074510303102552869.mp3', '惊不惊喜,意不意外#搞笑 #沙雕 @磁铁李飞(沙雕村)', '彭恰恰(沙雕村)', 'pengqq88888', 'https://www.douyin.com/video/7074510252833606925\n']
getting video info https://www.douyin.com/video/7073080385739033887
127.0.0.1 - - [17/Mar/2022 20:15:06] "GET /?app=index HTTP/1.1" 200 -
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7073080385739033887
127.0.0.1 - - [17/Mar/2022 20:15:07] "GET /?app=index HTTP/1.1" 200 -
127.0.0.1 - - [17/Mar/2022 20:15:08] "GET /?app=index HTTP/1.1" 200 -
getting video info https://www.douyin.com/video/7073080385739033887
127.0.0.1 - - [17/Mar/2022 20:15:09] "GET /?app=index HTTP/1.1" 200 -
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7073080385739033887
Type = video
http://v99-cold.douyinvod.com/01e01753cae1e96b28a702510487bae3/623334d5/video/tos/cn/tos-cn-ve-15c001-alinc2/67b6db9ab3d14e5baa9c9bec7a72727f/?a=1128&br=2037&bt=2037&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdG8yytqY&l=202203172015070102091570483105DB89&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=Mzp4cTs6Zjw8OzMzNGkzM0ApaDk0OGRnOWVmNzk2Zjo4ZmcpaGRqbGRoaGRmNTBlL3I0X25oYC0tZC0vc3MwLV42YDUyMjBjYzEtLmEvOmNwb2wrbStqdDo%3D&vl=&vr=
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7073080454420630302.mp3
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7073080454420630302.mp3
又是斗志斗勇的一天#凡尔赛式退货#搞笑 #沙雕
彭恰恰(沙雕村)
pengqq88888
getting douyin result ['http://v99-cold.douyinvod.com/01e01753cae1e96b28a702510487bae3/623334d5/video/tos/cn/tos-cn-ve-15c001-alinc2/67b6db9ab3d14e5baa9c9bec7a72727f/?a=1128&br=2037&bt=2037&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdG8yytqY&l=202203172015070102091570483105DB89&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=Mzp4cTs6Zjw8OzMzNGkzM0ApaDk0OGRnOWVmNzk2Zjo4ZmcpaGRqbGRoaGRmNTBlL3I0X25oYC0tZC0vc3MwLV42YDUyMjBjYzEtLmEvOmNwb2wrbStqdDo%3D&vl=&vr=', 'https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7073080454420630302.mp3', '又是斗志斗勇的一天#凡 尔赛式退货#搞笑 #沙雕', '彭恰恰(沙雕村)', 'pengqq88888', 'https://www.douyin.com/video/7073080385739033887\n']
getting video info https://www.douyin.com/video/7072296901554474247
127.0.0.1 - - [17/Mar/2022 20:15:10] "GET /?app=index HTTP/1.1" 200 -
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7072296901554474247
127.0.0.1 - - [17/Mar/2022 20:15:11] "GET /?app=index HTTP/1.1" 200 -
getting video info https://www.douyin.com/video/7072296901554474247
127.0.0.1 - - [17/Mar/2022 20:15:12] "GET /?app=index HTTP/1.1" 200 -
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7072296901554474247
Type = video
http://v5-coldy.douyinvod.com/e10a1263277990bb0c0aa3e9bba15dbf/62333492/video/tos/cn/tos-cn-ve-15-alinc2/96ddf1dc899a4a6795df4573b3872958/?a=1128&br=1672&bt=1672&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdC8yytqY&l=202203172015100102101860444A060559&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=M3I4eTo6Zml3OzMzNGkzM0ApMzxlaTc6ZDw5N2ZnZ2k7ZWcpaGRqbGRoaGRmMGhgNHI0Z2RmYC0tZC0vc3MyYjExYTMwNV4xX15gLzIzOmNwb2wrbStqdDo%3D&vl=&vr=
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7072296930944043807.mp3
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7072296930944043807.mp3
遭了!昨晚玩游戏忘充电了😱#搞笑 #沙雕@磁铁李飞(沙雕村)
彭恰恰(沙雕村)
pengqq88888
getting douyin result ['http://v5-coldy.douyinvod.com/e10a1263277990bb0c0aa3e9bba15dbf/62333492/video/tos/cn/tos-cn-ve-15-alinc2/96ddf1dc899a4a6795df4573b3872958/?a=1128&br=1672&bt=1672&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdC8yytqY&l=202203172015100102101860444A060559&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=M3I4eTo6Zml3OzMzNGkzM0ApMzxlaTc6ZDw5N2ZnZ2k7ZWcpaGRqbGRoaGRmMGhgNHI0Z2RmYC0tZC0vc3MyYjExYTMwNV4xX15gLzIzOmNwb2wrbStqdDo%3D&vl=&vr=', 'https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7072296930944043807.mp3', '遭了!昨晚玩游戏忘充电了😱#搞笑 #沙雕@磁铁李飞(沙雕村)', '彭恰恰(沙雕村)', 'pengqq88888', 'https://www.douyin.com/video/7072296901554474247\n']
127.0.0.1 - - [17/Mar/2022 20:15:13] "GET /?app=index HTTP/1.1" 200 -
getting video info https://www.douyin.com/video/7071923678782557477
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7071923678782557477
127.0.0.1 - - [17/Mar/2022 20:15:14] "GET /?app=index HTTP/1.1" 200 -
127.0.0.1 - - [17/Mar/2022 20:15:15] "GET /?app=index HTTP/1.1" 200 -
getting video info https://www.douyin.com/video/7071923678782557477
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7071923678782557477
Type = video
http://v26-cold.douyinvod.com/4988cd1d8baaa575038689dc0d2d5237/6233348d/video/tos/cn/tos-cn-ve-15-alinc2/926e645571fd4b4d977bfec4a67239a7/?a=1128&br=1327&bt=1327&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdO8yytqY&l=202203172015130102120980964605C56C&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=M3Q5czo6ZnY7OzMzNGkzM0ApPGU1ZWc0NDxpNzU7NzhlaGcpaGRqbGRoaGRmYDNxMXI0ZzZmYC0tZC0vc3MtMDUuLzI0NV9iNS02LTExOmNwb2wrbStqdDo%3D&vl=&vr=
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7071923708365261604.mp3
https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7071923708365261604.mp3
这个不说话的男人回来了#搞笑 #沙雕 @磁铁李飞(沙雕村)
彭恰恰(沙雕村)
pengqq88888
getting douyin result ['http://v26-cold.douyinvod.com/4988cd1d8baaa575038689dc0d2d5237/6233348d/video/tos/cn/tos-cn-ve-15-alinc2/926e645571fd4b4d977bfec4a67239a7/?a=1128&br=1327&bt=1327&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdO8yytqY&l=202203172015130102120980964605C56C&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=M3Q5czo6ZnY7OzMzNGkzM0ApPGU1ZWc0NDxpNzU7NzhlaGcpaGRqbGRoaGRmYDNxMXI0ZzZmYC0tZC0vc3MtMDUuLzI0NV9iNS02LTExOmNwb2wrbStqdDo%3D&vl=&vr=', 'https://sf6-cdn-tos.douyinstatic.com/obj/ies-music/7071923708365261604.mp3', '这个不说话的男人回来了#搞 笑 #沙雕 @磁铁李飞(沙雕村)', '彭恰恰(沙雕村)', 'pengqq88888', 'https://www.douyin.com/video/7071923678782557477\n']
getting video info https://www.douyin.com/video/7070123234179517733
127.0.0.1 - - [17/Mar/2022 20:15:16] "GET /?app=index HTTP/1.1" 200 -
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7070123234179517733
127.0.0.1 - - [17/Mar/2022 20:15:17] "GET /?app=index HTTP/1.1" 200 -
Type = video
http://v95-a.douyinvod.com/787e5645a4d3b712f94ff7b3b729760b/623334e5/video/tos/cn/tos-cn-ve-15-alinc2/e4bbb663a15442718744e83fccbe1206/?a=1128&br=2184&bt=2184&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdz8yytqY&l=2022031720151501021203810940054DFF&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=anR2Ojk6ZjU0OzMzNGkzM0ApZ2c4NTk1ZGQ1N2c1OjczZ2cpaGRqbGRoaGRmYGlsMnI0Z3JjYC0tZC0vc3NiNTQzNi5hMzYzLWA0MWNfOmNwb2wrbStqdDo%3D&vl=&vr=
getting video info https://www.douyin.com/video/7070123234179517733
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7070123234179517733
127.0.0.1 - - [17/Mar/2022 20:15:18] "GET /?app=index HTTP/1.1" 200 -
127.0.0.1 - - [17/Mar/2022 20:15:19] "GET /?app=index HTTP/1.1" 200 -
getting video info https://www.douyin.com/video/7070123234179517733
Sending request to:
https://www.iesdouyin.com/web/api/v2/aweme/iteminfo/?item_ids=7070123234179517733
Type = video
http://v95-a.douyinvod.com/c4fc2102337b736514f9e8900846f1b7/623334e8/video/tos/cn/tos-cn-ve-15-alinc2/e4bbb663a15442718744e83fccbe1206/?a=1128&br=2184&bt=2184&cd=0%7C0%7C0%7C0&ch=0&cr=0&cs=0&cv=1&dr=0&ds=3&er=&ft=gGf_l88-oU-DYlnt7TQ_plXxuhsdT8yytqY&l=2022031720151801021207406923058A72&lr=&mime_type=video_mp4&net=0&pl=0&qs=0&rc=anR2Ojk6ZjU0OzMzNGkzM0ApZ2c4NTk1ZGQ1N2c1OjczZ2cpaGRqbGRoaGRmYGlsMnI0Z3JjYC0tZC0vc3NiNTQzNi5hMzYzLWA0MWNfOmNwb2wrbStqdDo%3D&vl=&vr=
127.0.0.1 - - [17/Mar/2022 20:15:20] "GET /?app=index HTTP/1.1" 200 -
```
|
closed
|
2022-03-17T12:22:32Z
|
2022-03-17T18:10:21Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/10
|
[] |
wanghaisheng
| 0
|
jupyter/nbviewer
|
jupyter
| 554
|
Slideviewer 400 errors.
|
Hi guys,
I'm attempting to view some test notebooks to see if the slideviewer app is working (I remember Damian having said it was in development, but I haven't kept up to date on whether it's still functional).
I was unable to get these notebooks to render a slideshow. I'm unsure if I'm just doing it incorrectly:
https://raw.githubusercontent.com/mburke05/work_notebooks/master/test_three.ipynb
https://github.com/mburke05/work_notebooks/blob/master/slide_test.ipynb
I figured maybe it could have something to do with rendering the lightning-viz objects (though they render fine in nbviewer). Hence the test with just inline code in test_three.ipynb
Matt
|
closed
|
2015-12-21T06:28:28Z
|
2015-12-21T14:09:17Z
|
https://github.com/jupyter/nbviewer/issues/554
|
[] |
mburke05
| 2
|
horovod/horovod
|
tensorflow
| 2,956
|
When using the ring-of-rings branch of Horovod, a segmentation fault (MPI) appears when running distributed programs.
|
1. Framework: (using TensorFlow v1 (1.15) with Keras2.2.4)
2. OS and version: Ubuntu16.04 LTS
3. Horovod version: Branch ring-of-rings(horovod==0.12.2.dev0)
4. MPI version: OpenMPI 4.0.0
5. CUDA version:10.0
6. NCCL version:2.5.6
7. Python version:3.6.13(conda)
8. GCC version:5.4.0
9. CMake version:3.18.4
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
No
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
Yes
**Bug report:**
First I compile the source code of the ring-of-rings branch, then I enter the conda environment. Finally, when I attempt to train the ResNet50 and VGG models, an MPI-related problem occurs: Segmentation fault (mpirun noticed that process rank 0 with PID 0 on node node06 exited on signal 11 (Segmentation fault)).
**The instruction to run the distributed program is: mpirun -np 2 python cifar10_resnet50.py**
The specific error information is as follows:
[node06:24453] *** Process received signal ***
[node06:24453] Signal: Segmentation fault (11)
[node06:24453] Signal code: Invalid permissions (2)
[node06:24453] Failing at address: 0x230f589800
[node06:24453] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980)[0x7fed0c311980]
[node06:24453] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x18ec21)[0x7fed0c09cc21]
[node06:24453] [ 2] /usr/local/lib/openmpi/mca_btl_vader.so(+0x2ed0)[0x7fec925a5ed0]
[node06:24453] [ 3] /usr/local/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send_request_start_prepare+0x51)[0x7fec917803e1]
[node06:24453] [ 4] /usr/local/lib/openmpi/mca_pml_ob1.so(mca_pml_ob1_send+0x14e3)[0x7fec9176ddf3]
[node06:24453] [ 5] /usr/local/lib/libmpi.so.40(ompi_coll_base_bcast_intra_split_bintree+0x6ef)[0x7feca20cdc0f]
[node06:24453] [ 6] /usr/local/lib/openmpi/mca_coll_tuned.so(ompi_coll_tuned_bcast_intra_dec_fixed+0x126)[0x7fec90702386]
[node06:24453] [ 7] /usr/local/lib/libmpi.so.40(MPI_Bcast+0x199)[0x7feca208f079]
[node06:24453] [ 8] /home/antl/anaconda3/envs/tf1.15-test/lib/python3.6/site-packages/horovod-0.12.2.dev0-py3.6-linux-x86_64.egg/horovod/common/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x47650)[0x7feca2605650]
[node06:24453] [ 9] /home/antl/anaconda3/envs/tf1.15-test/lib/python3.6/site-packages/horovod-0.12.2.dev0-py3.6-linux-x86_64.egg/horovod/common/mpi_lib.cpython-36m-x86_64-linux-gnu.so(+0x4ff31)[0x7feca260df31]
[node06:24453] [10] /home/antl/anaconda3/envs/tf1.15-test/bin/../lib/libstdc++.so.6(+0xc819d)[0x7fecb833b19d]
[node06:24453] [11] /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7fed0c3066db]
[node06:24453] [12] /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fed0c02f71f]
[node06:24453] *** End of error message ***
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
**What confuses me is that before 2021.05 there was no problem using the ring-of-rings branch to run distributed programs, but since May, unless MPI is compiled with CUDA-aware support, an MPI segmentation fault appears!**
|
open
|
2021-06-07T09:53:38Z
|
2021-06-07T09:53:38Z
|
https://github.com/horovod/horovod/issues/2956
|
[
"bug"
] |
Frank00001
| 0
|
Skyvern-AI/skyvern
|
api
| 1,190
|
Can you add a feature to capture specific network requests?
|
Can you add a feature to capture specific network requests? I need to get information from the network request headers. Also, it seems the F12 developer tools and bookmark creation are not supported.
|
closed
|
2024-11-14T14:35:07Z
|
2024-11-26T01:46:29Z
|
https://github.com/Skyvern-AI/skyvern/issues/1190
|
[] |
chaoqunxie
| 8
|
mwaskom/seaborn
|
matplotlib
| 3,365
|
Change in Behavior for Python 3.12 for TestRegressionPlotter
|
Hi Team,
I would like to bring to your attention there is an expected change in behavior of of how variables are scoped inside comprehensions inside a class scope, details are here: https://discuss.python.org/t/pep-709-one-behavior-change-that-was-missed-in-the-pep
It has been identified in that thread (thanks to the work of Hugo van Kemenade and Carl Meyer) that [this line](https://github.com/mwaskom/seaborn/blob/v0.12.2/tests/test_regression.py#L122) in TestRegressionPlotter is affected by this behavioral change:
```python
df["c"] = [rs.binomial(1, p_i) for p_i in p]
```
Currently `rs` is sourced from the global scope and in Python 3.12 it will instead source it from the class scope.
Reading the context of the code it seems like sourcing from the class scope is actually the intended behavior but I thought I'd raise this issue just to inform you anyway. Please feel free to close this issue if this is actually the intended behavior.
|
closed
|
2023-05-14T13:35:54Z
|
2023-05-15T23:17:55Z
|
https://github.com/mwaskom/seaborn/issues/3365
|
[] |
notatallshaw
| 1
|
alteryx/featuretools
|
data-science
| 2,368
|
Revert changes for local docs build once related sphinx issue is closed
|
In MR #2367, changes were made in `docs/Makefile` to allow docs to be built locally using the `make html` command. This was needed due to errors that happened when attempting to build the docs with Featuretools installed in editable mode. Docs builds failing in editable mode *might* be related to an issue with sphinx.
When sphinx issue 10943 (https://github.com/sphinx-doc/sphinx/issues/10943) has been closed and resolved, we should revert the changes that were made to the Makefile, as indicated by the comments here:
```
.PHONY: html
html:
# Remove the following line when sphinx issue (https://github.com/sphinx-doc/sphinx/issues/10943) is closed
python -m pip install .. --quiet --no-dependencies
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html $(SPHINXOPTS)
# Remove the following line when sphinx issue (https://github.com/sphinx-doc/sphinx/issues/10943) is closed
python -m pip install -e .. --quiet --no-dependencies
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
```
|
open
|
2022-11-09T16:59:06Z
|
2024-04-10T20:04:36Z
|
https://github.com/alteryx/featuretools/issues/2368
|
[
"good first issue",
"documentation"
] |
thehomebrewnerd
| 10
|
mljar/mercury
|
data-visualization
| 101
|
Return URL address of HTML/PDF notebook after REST API execution
|
There should be an option to execute the notebook with REST API and return the address of the resulting HTML/PDF notebook.
It will create a lot of new possibilities for creating dynamic reports.
### The workflow
1. Create a notebook with variables editable in Mercury (variables in YAML header).
2. Share notebook as REST API endpoint in Mercury.
3. Execute notebook with REST API, send variables in JSON request.
4. Run notebook with new parameters, and create HTML / PDF outputs.
5. Return address to HTML / PDF outputs (an illustrative request/response sketch is shown below).
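Purely as an illustration of the proposed workflow; the endpoint path and response field below are hypothetical, not an existing Mercury API:
```python
import requests

# Hypothetical endpoint and payload: execute the notebook with new variables...
resp = requests.post(
    "https://mercury.example.com/api/v1/run/my-notebook",
    json={"variables": {"year": 2022, "region": "EU"}},
)
# ...and receive back the address of the rendered HTML/PDF output.
report_url = resp.json()["html_url"]  # hypothetical response field
print(report_url)
```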
|
closed
|
2022-05-18T06:51:35Z
|
2023-02-15T10:13:23Z
|
https://github.com/mljar/mercury/issues/101
|
[
"enhancement",
"help wanted"
] |
pplonski
| 0
|
huggingface/diffusers
|
pytorch
| 10,050
|
Is there any img2img KDiffusion equivalent of StableDiffusionKDiffusionPipeline?
|
### Model/Pipeline/Scheduler description
I'm working on result alignment between diffusers and A1111 webui.
In txt2img scene, I can achieve via `StableDiffusionKDiffusionPipeline`, refer to https://github.com/huggingface/diffusers/issues/3253.
But in img2img scene, is there any KDiffusion pipeline equivalent?
I'm also trying to implement this by merging `StableDiffusionKDiffusionPipeline` and `StableDiffusionImg2ImgPipeline` together.
Any clarification and help is appreciated.
### Open source status
- [ ] The model implementation is available.
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links for the implementation
_No response_
|
open
|
2024-11-29T07:47:11Z
|
2024-12-29T15:03:05Z
|
https://github.com/huggingface/diffusers/issues/10050
|
[
"stale"
] |
juju812
| 2
|
JoeanAmier/XHS-Downloader
|
api
| 39
|
How can I save a note's caption text into the corresponding note's folder?
|
How can I save a note's caption text into the corresponding note's folder?
|
open
|
2024-01-05T02:43:23Z
|
2024-01-07T17:17:42Z
|
https://github.com/JoeanAmier/XHS-Downloader/issues/39
|
[] |
hackettk
| 5
|
mwaskom/seaborn
|
data-visualization
| 3,423
|
Issue with lineplot - conflict with pandas
|
When trying to create a lineplot - with only x (datetime64 or simple int64) and y without any sophisticated arguments - the following error is raised:
> OptionError: No such keys(s): 'mode.use_inf_as_null'
This is the detailed reference to pandas:
> File [~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/seaborn/_core.py:1054](https://file+.vscode-resource.vscode-cdn.net/Users/kathse/Documents/NeueFische/4_Capstone/capstone_solar_energy/notebooks/~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/seaborn/_core.py:1054), in VectorPlotter.comp_data(self)
> 1050 axis = getattr(ax, f"{var}axis")
> 1052 # Use the converter assigned to the axis to get a float representation
> 1053 # of the data, passing np.nan or pd.NA through (pd.NA becomes np.nan)
> -> 1054 with pd.option_context('mode.use_inf_as_null', True):
> 1055 orig = self.plot_data[var].dropna()
> 1056 comp_col = pd.Series(index=orig.index, dtype=float, name=var)
>
> File [~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/pandas/_config/config.py:441](https://file+.vscode-resource.vscode-cdn.net/Users/kathse/Documents/NeueFische/4_Capstone/capstone_solar_energy/notebooks/~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/pandas/_config/config.py:441), in option_context.__enter__(self)
> 440 def __enter__(self) -> None:
> --> 441 self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
> 443 for pat, val in self.ops:
> 444 _set_option(pat, val, silent=True)
> File [~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/pandas/_config/config.py:441](https://file+.vscode-resource.vscode-cdn.net/Users/kathse/Documents/NeueFische/4_Capstone/capstone_solar_energy/notebooks/~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/pandas/_config/config.py:441), in (.0)
> 440 def __enter__(self) -> None:
> --> 441 self.undo = [(pat, _get_option(pat, silent=True)) for pat, val in self.ops]
> 443 for pat, val in self.ops:
> 444 _set_option(pat, val, silent=True)
>
> File [~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/pandas/_config/config.py:135](https://file+.vscode-resource.vscode-cdn.net/Users/kathse/Documents/NeueFische/4_Capstone/capstone_solar_energy/notebooks/~/Documents/NeueFische/4_Capstone/capstone_solar_energy/.venv/lib/python3.11/site-packages/pandas/_config/config.py:135), in _get_option(pat, silent)
> 134 def _get_option(pat: str, silent: bool = False) -> Any:
> --> 135 key = _get_single_key(pat, silent)
> 137 # walk the nested dict
> 138 root, k = _get_root(key)
I tried multiple seaborn versions, including 0.12.1 and 0.12.2. For pandas, I tried 2.0.2 and 2.0.1. Matplotlib is version 3.7.1.
I would appreciate your help and also I would like to thank you for your awesome work!!
|
closed
|
2023-07-20T22:31:12Z
|
2023-08-02T11:20:03Z
|
https://github.com/mwaskom/seaborn/issues/3423
|
[] |
KathSe1984
| 3
|
huggingface/datasets
|
tensorflow
| 7,073
|
CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError
|
See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.
Invalid rev id: refs/pr/1
```
```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet
dataset.push_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub
api.preupload_lfs_files(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files
_fetch_upload_modes(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn
return fn(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes
hf_raise_for_status(resp)
```
|
closed
|
2024-07-26T08:27:41Z
|
2024-07-27T05:48:02Z
|
https://github.com/huggingface/datasets/issues/7073
|
[] |
albertvillanova
| 9
|
graphistry/pygraphistry
|
jupyter
| 321
|
install pygraphistry to Google Colab
|
Hi,
I have used it successfully on my personal laptop, but I need to use it in Colab. How can I use pygraphistry in Colab?
I have tried to install it in Google Colab with
`!pip install graphistry`
`!apt install graphistry`
Every time it prints that it was successfully installed, but then when running:
`import graphistry`
I receive the error:
> ModuleNotFoundError: No module named 'graphistry'
thanks for the help
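For what it's worth, a minimal Colab-style sketch of the usual install flow. `%pip` installs into the same environment the notebook kernel uses, which avoids the most common cause of a `ModuleNotFoundError` after a seemingly successful install (the `apt` attempt above is not needed). The server URL and credentials are placeholders.

```python
# In one Colab cell (note %pip rather than !pip, so the package lands
# in the kernel's own environment):
%pip install --quiet graphistry

# In a following cell (restart the runtime first if the import still fails):
import graphistry

# Registration is only needed to render plots against a Graphistry server;
# the hub URL and credentials here are placeholders.
graphistry.register(api=3, server="hub.graphistry.com",
                    username="YOUR_USER", password="YOUR_PASS")
```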
|
closed
|
2022-03-18T14:15:22Z
|
2022-03-21T20:12:49Z
|
https://github.com/graphistry/pygraphistry/issues/321
|
[] |
SalvatoreRa
| 5
|
PaddlePaddle/ERNIE
|
nlp
| 249
|
Small error in the BERT pretraining code
|
ERNIE/BERT/reader/pretraining.py
Lines 91 and 162 should be made consistent.
Line 162 should use a greater-than sign.
|
closed
|
2019-08-01T11:25:24Z
|
2019-08-19T02:54:35Z
|
https://github.com/PaddlePaddle/ERNIE/issues/249
|
[] |
zle1992
| 2
|
freqtrade/freqtrade
|
python
| 11,216
|
Problem with order/trade open price (wrong price) in dry run
|
Describe your environment
Operating system: Ubuntu 22.04.1 LTS
Python Version: > 3.10
Freqtrade 2024.7.1
Freqtrade running in docker
Exchange: Binance
Dry-run mode without new BNFCR features
I found a problem with the order/trade open price.
The strategy runs on the 1m timeframe.
A limit order was sent for TRX/USDT:USDT at open_rate: 0.24442.
But at the time my order was sent and filled, the open price on the 1-minute candle was between 0.24271 and 0.24282. We can see current_rate: 0.24274.
2025-01-11 07:19:39,888 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'trade_id': 54, 'type': entry_fill, 'buy_tag': 'long htf', 'enter_tag': 'long htf', 'exchange': 'Binance', 'pair': 'TRX/USDT:USDT', 'leverage': 1.0, 'direction': 'Long', 'limit': 0.24442, 'open_rate': 0.24442, 'order_type': 'limit', 'stake_amount': 1435.9675, 'stake_currency': 'USDT', 'base_currency': 'TRX', 'quote_currency': 'USDT', 'fiat_currency': 'USD', 'amount': 5875.0, 'open_date': datetime.datetime(2025, 1, 11, 7, 19, 39, 47685, tzinfo=datetime.timezone.utc), 'current_rate': 0.24274, 'sub_trade': False}
But my order and trade show the initial limit price 0.24442, not the real price 0.24274 that should be recorded on the order and trade:
/status 54
Trade ID: 54 (since 2025-01-11 07:19:39)
Current Pair: TRX/USDT:USDT
Direction: Long (1.0x)
Amount: 5875.0 (1435.967 USDT)
Total invested: 1435.967 USDT
Enter Tag: long htf
Number of Entries: 1
Number of Exits: 0
Open Rate: 0.24442
Open Date: 2025-01-11 07:19:39
Current Rate: 0.24248
Unrealized Profit: -0.89% (-12.828 USDT)
Stoploss: 0.2189 (-9.81%)
Stoploss distance: -0.02358 (-9.72%)
/order 54
Order List for Trade #54
Entry #1:
Amount: 5875 (1435.967 USDT)
Average Price: 0.24442
```
2025-01-11 07:19:38,801 - freqtrade.freqtradebot - INFO - Long signal found: about create a new trade for TRX/USDT:USDT with stake_amount: 1436.0752056404247 ...
2025-01-11 07:19:39,047 - freqtrade.freqtradebot - INFO - Order dry_run_buy_TRX/USDT:USDT_1736579978.801997 was created for TRX/USDT:USDT and status is closed.
2025-01-11 07:19:39,623 - freqtrade.wallets - INFO - Wallets synced.
2025-01-11 07:19:39,623 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'trade_id': 54, 'type': entry, 'buy_tag': 'long htf', 'enter_tag': 'long htf', 'exchange': 'Binance', 'pair': 'TRX/USDT:USDT', 'leverage': 1.0, 'direction': 'Long', 'limit': 0.24442, 'open_rate': 0.24442, 'order_type': 'limit', 'stake_amount': 1435.9675, 'stake_currency': 'USDT', 'base_currency': 'TRX', 'quote_currency': 'USDT', 'fiat_currency': 'USD', 'amount': 5875.0, 'open_date': datetime.datetime(2025, 1, 11, 7, 19, 39, 47685, tzinfo=datetime.timezone.utc), 'current_rate': 0.24274, 'sub_trade': False}
2025-01-11 07:19:39,623 - freqtrade.rpc.telegram - INFO - Notification 'entry' not sent.
2025-01-11 07:19:39,624 - freqtrade.freqtradebot - INFO - Found open order for Trade(id=54, pair=TRX/USDT:USDT, amount=5875.00000000, is_short=False, leverage=1.0, open_rate=0.24442000, open_since=2025-01-11 07:19:39)
2025-01-11 07:19:39,627 - freqtrade.freqtradebot - INFO - Fee for Trade Trade(id=54, pair=TRX/USDT:USDT, amount=5875.00000000, is_short=False, leverage=1.0, open_rate=0.24442000, open_since=2025-01-11 07:19:39) [buy]: 0.71798375 USDT - rate: 0.0005
2025-01-11 07:19:39,627 - freqtrade.persistence.trade_model - INFO - Updating trade (id=54) ...
2025-01-11 07:19:39,628 - freqtrade.persistence.trade_model - INFO - LIMIT_BUY has been fulfilled for Trade(id=54, pair=TRX/USDT:USDT, amount=5875.00000000, is_short=False, leverage=1.0, open_rate=0.24442000, open_since=2025-01-11 07:19:39).
2025-01-11 07:19:39,883 - freqtrade.wallets - INFO - Wallets synced.
2025-01-11 07:19:39,888 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'trade_id': 54, 'type': entry_fill, 'buy_tag': 'long htf', 'enter_tag': 'long htf', 'exchange': 'Binance', 'pair': 'TRX/USDT:USDT', 'leverage': 1.0, 'direction': 'Long', 'limit': 0.24442, 'open_rate': 0.24442, 'order_type': 'limit', 'stake_amount': 1435.9675, 'stake_currency': 'USDT', 'base_currency': 'TRX', 'quote_currency': 'USDT', 'fiat_currency': 'USD', 'amount': 5875.0, 'open_date': datetime.datetime(2025, 1, 11, 7, 19, 39, 47685, tzinfo=datetime.timezone.utc), 'current_rate': 0.24274, 'sub_trade': False}
2025-01-11 07:19:39,889 - freqtrade.worker - INFO - Bot heartbeat. PID=1, version='2024.7.1', state='RUNNING'
```

|
closed
|
2025-01-11T07:52:04Z
|
2025-01-11T15:21:22Z
|
https://github.com/freqtrade/freqtrade/issues/11216
|
[
"Question"
] |
dobremha
| 2
|
AirtestProject/Airtest
|
automation
| 553
|
Question about the usage of text() in the iOS environment
|
I want the same behavior as set_text(): if the input field is empty, write the text; if it already has content, overwrite it directly.
However, iOS does not support set_text(), nor does it support deleting the content with keyevent("KEYCODE_DEL"). Is there any other way to directly overwrite or clear the contents of an input field?
|
open
|
2019-10-11T07:07:05Z
|
2020-09-06T13:19:28Z
|
https://github.com/AirtestProject/Airtest/issues/553
|
[] |
appp-deng
| 3
|
stanfordnlp/stanza
|
nlp
| 825
|
Stanza Document model to dataframe
|
Hi, I got the following output from the NER process. I want this in the form of a dataframe. In that case, "id, text, upos, xpos, ner" should be the column names. Is it possible to convert it into a dataframe?
[
[
{
"id": 1,
"text": "[",
"upos": "PUNCT",
"xpos": "-LRB-",
"start_char": 0,
"end_char": 1,
"ner": "O"
},
{
"id": 2,
"text": "'",
"upos": "PUNCT",
"xpos": "''",
"start_char": 1,
"end_char": 2,
"ner": "O"
}
],
[
{
"id": 1,
"text": "OLD",
"upos": "ADJ",
"xpos": "NNP",
"feats": "Degree=Pos",
"start_char": 2,
"end_char": 5,
"ner": "B-FAC"
},
{
"id": 2,
"text": "COAST",
"upos": "PROPN",
"xpos": "NNP",
"feats": "Number=Sing",
"start_char": 6,
"end_char": 11,
"ner": "I-FAC"
},
{
"id": 3,
"text": "BRIDGE",
"upos": "PROPN",
"xpos": "NNP",
"feats": "Number=Sing",
"start_char": 12,
"end_char": 18,
"ner": "I-FAC"
},
{
"id": 4,
"text": "1",
"upos": "NUM",
"xpos": "CD",
"feats": "NumForm=Digit|NumType=Card",
"start_char": 19,
"end_char": 20,
"ner": "E-FAC"
},
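A minimal sketch of one way to flatten that structure (the output above appears truncated) into a DataFrame, assuming it came from something like `doc.to_dict()` on a stanza Document, i.e. a list of sentences where each sentence is a list of word dicts:

```python
import pandas as pd

# `nested` stands in for the list-of-lists-of-dicts shown above,
# e.g. nested = doc.to_dict() for a stanza Document `doc`.
def ner_output_to_dataframe(nested):
    rows = []
    for sent_id, sentence in enumerate(nested):
        for word in sentence:
            rows.append({"sentence_id": sent_id, **word})
    # Keep the requested columns plus the sentence index.
    cols = ["sentence_id", "id", "text", "upos", "xpos", "ner"]
    return pd.DataFrame(rows).reindex(columns=cols)

# df = ner_output_to_dataframe(doc.to_dict())
# print(df.head())
```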
|
open
|
2021-10-08T08:52:47Z
|
2022-07-14T20:27:04Z
|
https://github.com/stanfordnlp/stanza/issues/825
|
[
"enhancement",
"question"
] |
sangeethsn
| 10
|