Dataset schema (column name: type, observed range):

- repo_name: string, length 9–75
- topic: string, 30 classes
- issue_number: int64, 1–203k
- title: string, length 1–976
- body: string, length 0–254k
- state: string, 2 classes
- created_at: string, length 20
- updated_at: string, length 20
- url: string, length 38–105
- labels: list, length 0–9
- user_login: string, length 1–39
- comments_count: int64, 0–452
HumanSignal/labelImg
deep-learning
147
Unable to select a small bounding box
I am working on Ubuntu 16.04. I am able to select bounding boxes and label them, but if an object is small (about 10 × 10 px) I am not able to select its bounding box, and because of that I cannot edit its label. Selection works for larger bounding boxes, but not for smaller ones.
closed
2017-08-21T09:18:38Z
2017-08-25T05:46:17Z
https://github.com/HumanSignal/labelImg/issues/147
[]
ghost
0
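The selection failure above is typical of hit-testing that shrinks with the shape. A minimal sketch of edge hit-testing with a fixed pixel tolerance — an assumed approach for illustration, not labelImg's actual code:

```python
def near_rect_edge(px, py, rect, epsilon=8.0):
    """Return True if point (px, py) is within `epsilon` pixels of the
    rectangle's edge. A fixed tolerance (rather than one derived from the
    shape's size) keeps tiny boxes selectable.
    `rect` is (x0, y0, x1, y1) with x0 < x1 and y0 < y1."""
    x0, y0, x1, y1 = rect
    # A point near the edge lies inside the rectangle expanded outward by
    # epsilon, but outside the rectangle shrunk inward by epsilon.
    in_outer = (x0 - epsilon <= px <= x1 + epsilon) and (y0 - epsilon <= py <= y1 + epsilon)
    in_inner = (x0 + epsilon < px < x1 - epsilon) and (y0 + epsilon < py < y1 - epsilon)
    return in_outer and not in_inner
```

For a 10 × 10 box the "inner" rectangle is degenerate at epsilon=8, so any click on or near the box registers as an edge hit.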
SYSTRAN/faster-whisper
deep-learning
722
[INFO] Getting encoder output
I would like to extract the encoder output. So far, I did this:

```python
from faster_whisper import WhisperModel
from datasets import load_dataset
from datasets import Audio

minds = load_dataset("PolyAI/minds14", name="de-DE", split="train")
minds = minds.cast_column("audio", Audio(sampling_rate=16_000))

model_size = "tiny"
model = WhisperModel(model_size, device="cuda", compute_type="int8")

features = model.feature_extractor(minds[0]["audio"]["array"])
segment = features[:, : model.feature_extractor.nb_max_frames]
encoder_output = model.encode(segment)
```

The output is of type `ctranslate2._ext.StorageView`. Any idea how to get `encoder_output` as a NumPy array, please?
closed
2024-02-28T20:49:59Z
2024-07-08T02:23:57Z
https://github.com/SYSTRAN/faster-whisper/issues/722
[]
ManilBen
2
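For CPU-resident data, a `StorageView` generally exposes the array protocol, so `numpy.asarray` is enough; this sketch assumes that behavior and deliberately does not handle GPU-resident outputs (those would first need a host copy, e.g. via a framework that consumes `__cuda_array_interface__`):

```python
import numpy as np

def storage_view_to_numpy(sv):
    """Best-effort conversion of a CPU StorageView-like object to a numpy
    array. Assumption: the object supports numpy's array protocol, which
    CPU ctranslate2 StorageViews do. GPU outputs must be moved to the host
    first; this sketch does not attempt that."""
    return np.asarray(sv)
```

Treat this as a sketch of the conversion pattern; check the CTranslate2 documentation for the supported device-transfer route in your version.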
Lightning-AI/pytorch-lightning
data-science
20231
torch.cuda.OutOfMemoryError after running tuner.scale_batch_size() in "binsearch" mode
### Bug description

I am encountering a `torch.cuda.OutOfMemoryError` after using the `tuner.scale_batch_size()` method in "binsearch" mode to find the optimal batch size for my model. In contrast, when I use `mode="power"`, the tuning process works without any issues and I can successfully find an optimal batch size without running into memory errors. I'm using an NVIDIA RTX 6000 Ada Generation GPU with 48 GB of VRAM.

### What version are you seeing the problem on?

master

### How to reproduce the bug

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split
import numpy as np
from lightning.pytorch.trainer import Trainer
from lightning.pytorch import LightningDataModule, LightningModule
from lightning.pytorch.callbacks import BatchSizeFinder
from torchmetrics import Accuracy
from lightning.pytorch.tuner import Tuner


class SimpleModel(LightningModule):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3)
        self.fc = nn.Linear(16 * 766 * 766, 10)
        self.train_accuracy = Accuracy(task="multiclass", num_classes=10)
        self.val_accuracy = Accuracy(task="multiclass", num_classes=10)
        self.test_accuracy = Accuracy(task="multiclass", num_classes=10)

    def forward(self, x):
        x = self.conv(x)
        x = nn.functional.relu(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = nn.functional.cross_entropy(y_hat, y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = nn.functional.cross_entropy(y_hat, y)
        self.log("val_loss", loss)
        self.val_accuracy(y_hat, y)

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = nn.functional.cross_entropy(y_hat, y)
        self.log("test_loss", loss)
        self.test_accuracy(y_hat, y)

    def on_train_epoch_end(self):
        self.log("train_accuracy", self.train_accuracy.compute())
        self.train_accuracy.reset()

    def on_validation_epoch_end(self):
        self.log("val_accuracy", self.val_accuracy.compute())
        self.val_accuracy.reset()

    def on_test_epoch_end(self):
        self.log("test_accuracy", self.test_accuracy.compute())
        self.test_accuracy.reset()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)


class MyDataModule(LightningDataModule):
    def __init__(self, batch_size=32, num_samples=10000, val_split=0.2, test_split=0.3):
        super().__init__()
        self.batch_size = batch_size
        self.num_samples = num_samples
        self.val_split = val_split
        self.test_split = test_split
        self.dataset = None
        self.train_dataset = None
        self.val_dataset = None
        self.test_dataset = None

    def create_dataset(self):
        x = np.random.rand(self.num_samples, 3, 768, 768).astype(
            np.float32
        )  # 3 channels, 768x768 images
        y = np.random.randint(0, 10, size=(self.num_samples,))  # 10 classes
        return TensorDataset(torch.tensor(x), torch.tensor(y))

    def setup(self, stage=None):
        self.dataset = self.create_dataset()
        total_size = len(self.dataset)
        val_size = int(total_size * self.val_split)
        test_size = int(total_size * self.test_split)
        train_size = total_size - val_size - test_size
        self.train_dataset, self.val_dataset, self.test_dataset = random_split(
            self.dataset, [train_size, val_size, test_size]
        )

    def train_dataloader(self):
        return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_dataset, batch_size=self.batch_size)

    def test_dataloader(self):
        return DataLoader(self.test_dataset, batch_size=self.batch_size)


def main():
    data_module = MyDataModule()
    model = SimpleModel()
    trainer = Trainer(max_epochs=1, devices=1)
    tuner = Tuner(trainer)
    # Auto-scale batch size with binary search
    tuner.scale_batch_size(model, mode="binsearch", datamodule=data_module)
    trainer.fit(model, datamodule=data_module, ckpt_path=None)
    trainer.test(datamodule=data_module, ckpt_path="best")


if __name__ == "__main__":
    main()
```

### Error messages and logs

```
/home/rittik/Rough/issue.py
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:75: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
You are using a CUDA device ('NVIDIA RTX 6000 Ada Generation') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=63` in the `DataLoader` to improve performance.
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/data_connector.py:441: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument to `num_workers=63` in the `DataLoader` to improve performance.
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torchmetrics/utilities/prints.py:43: UserWarning: The ``compute`` method of metric MulticlassAccuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
  warnings.warn(*args, **kwargs)  # noqa: B028
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 2 succeeded, trying batch size 4
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 4 succeeded, trying batch size 8
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 8 succeeded, trying batch size 16
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 16 succeeded, trying batch size 32
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 32 succeeded, trying batch size 64
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 64 succeeded, trying batch size 128
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (40) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 128 succeeded, trying batch size 256
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (20) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 256 succeeded, trying batch size 512
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (10) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
Batch size 512 failed, trying batch size 384
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (14) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 384 succeeded, trying batch size 448
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (12) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
Batch size 448 failed, trying batch size 416
/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py:298: The number of training batches (13) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
Batch size 416 failed, trying batch size 400
Batch size 400 failed, trying batch size 392
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 392 succeeded, trying batch size 396
`Trainer.fit` stopped: `max_steps=3` reached.
Batch size 396 succeeded, trying batch size 398
Batch size 398 failed, trying batch size 397
Batch size 397 failed, trying batch size 396
Finished batch size finder, will continue with full run using batch size 396
Restoring states from the checkpoint path at /home/rittik/EUS_ML/.scale_batch_size_41bc8e23-6280-4976-be47-3f335c76444f.ckpt
Restored all states from the checkpoint at /home/rittik/EUS_ML/.scale_batch_size_41bc8e23-6280-4976-be47-3f335c76444f.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3]

  | Name           | Type               | Params
------------------------------------------------------
0 | conv           | Conv2d             | 448
1 | fc             | Linear             | 93.9 M
2 | train_accuracy | MulticlassAccuracy | 0
3 | val_accuracy   | MulticlassAccuracy | 0
4 | test_accuracy  | MulticlassAccuracy | 0
------------------------------------------------------
93.9 M    Trainable params
0         Non-trainable params
93.9 M    Total params
375.526   Total estimated model params size (MB)

Epoch 0:   0%|          | 0/13 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/rittik/Rough/issue.py", line 135, in <module>
    main()
  File "/home/rittik/Rough/issue.py", line 130, in main
    trainer.fit(model, datamodule=data_module, ckpt_path=None)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 987, in _run
    results = self._run_stage()
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1033, in _run_stage
    self.fit_loop.run()
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 205, in run
    self.advance()
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 250, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 190, in run
    self._optimizer_step(batch_idx, closure)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 268, in _optimizer_step
    call._call_lightning_module_hook(
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 157, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 1303, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/core/optimizer.py", line 152, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 239, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/plugins/precision/precision.py", line 122, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torch/optim/optimizer.py", line 391, in wrapper
    out = func(*args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torch/optim/optimizer.py", line 76, in _use_grad
    ret = func(self, *args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torch/optim/adam.py", line 148, in step
    loss = closure()
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/plugins/precision/precision.py", line 108, in _wrap_closure
    closure_result = closure()
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 144, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 138, in closure
    self._backward_fn(step_output.closure_loss)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 239, in backward_fn
    call._call_strategy_hook(self.trainer, "backward", loss, optimizer)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 309, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 213, in backward
    self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/plugins/precision/precision.py", line 72, in backward
    model.backward(tensor, *args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 1090, in backward
    loss.backward(*args, **kwargs)
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torch/_tensor.py", line 525, in backward
    torch.autograd.backward(
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 267, in backward
    _engine_run_backward(
  File "/home/rittik/.cache/pypoetry/virtualenvs/eus-ml-iKodjVHl-py3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 13.85 GiB.
GPU
Epoch 0:   0%|          | 0/13 [00:06<?, ?it/s]
```

### Environment

<details>
  <summary>Current environment</summary>

* CUDA:
  - GPU:
    - NVIDIA RTX 6000 Ada Generation
  - available: True
  - version: 12.1
* Lightning:
  - lightning: 2.2.4
  - lightning-utilities: 0.11.2
  - pytorch-lightning: 2.2.4
  - pytorch-metric-learning: 2.5.0
  - torch: 2.3.0
  - torchmetrics: 1.4.0.post0
  - torchvision: 0.18.0
* Packages:
  - aiohttp: 3.9.5 - aiosignal: 1.3.1 - albumentations: 1.4.7 - annotated-types: 0.6.0 - annoy: 1.17.3 - asttokens: 2.4.1 - async-timeout: 4.0.3 - attrs: 23.2.0 - black: 24.4.2 - boto3: 1.34.105 - botocore: 1.34.105 - certifi: 2024.2.2 - charset-normalizer: 3.3.2 - click: 8.1.7 - comm: 0.2.2 - contourpy: 1.2.1 - cycler: 0.12.1 - debugpy: 1.8.1 - decorator: 5.1.1 - docker-pycreds: 0.4.0 - exceptiongroup: 1.2.1 - executing: 2.0.1 - filelock: 3.14.0 - fonttools: 4.51.0 - frozenlist: 1.4.1 - fsspec: 2024.3.1 - gitdb: 4.0.11 - gitpython: 3.1.43 - huggingface-hub: 0.23.0 - idna: 3.7 - imageio: 2.34.1 - ipykernel: 6.29.4 - ipython: 8.24.0 - jedi: 0.19.1 - jinja2: 3.1.4 - jmespath: 1.0.1 - joblib: 1.4.2 - jupyter-client: 8.6.1 - jupyter-core: 5.7.2 - kiwisolver: 1.4.5 - lazy-loader: 0.4 - lightning: 2.2.4 - lightning-utilities: 0.11.2 - llvmlite: 0.42.0 - markdown-it-py: 3.0.0 - markupsafe: 2.1.5 - matplotlib: 3.8.4 - matplotlib-inline: 0.1.7 - mdurl: 0.1.2 - mpmath: 1.3.0 - multidict: 6.0.5 - mypy-extensions: 1.0.0 - nest-asyncio: 1.6.0 - networkx: 3.3 - numba: 0.59.1 - numpy: 1.26.4 - nvidia-cublas-cu12: 12.1.3.1 - nvidia-cuda-cupti-cu12: 12.1.105 - nvidia-cuda-nvrtc-cu12: 12.1.105 - nvidia-cuda-runtime-cu12: 12.1.105 - nvidia-cudnn-cu12: 8.9.2.26 - nvidia-cufft-cu12: 11.0.2.54 - nvidia-curand-cu12: 10.3.2.106 - nvidia-cusolver-cu12: 11.4.5.107 - nvidia-cusparse-cu12: 12.1.0.106 - nvidia-nccl-cu12: 2.20.5 - nvidia-nvjitlink-cu12: 12.4.127 - nvidia-nvtx-cu12: 12.1.105 - opencv-python: 4.9.0.80 - opencv-python-headless: 4.9.0.80 - packaging: 24.0 - pacmap: 0.7.2 - pandas: 2.2.2 - parso: 0.8.4 - pathspec: 0.12.1 - pexpect: 4.9.0 - pillow: 10.3.0 - pip: 24.0 - platformdirs: 4.2.2 - prettytable: 3.10.0 - prompt-toolkit: 3.0.43 - protobuf: 4.25.3 - psutil: 5.9.8 - ptyprocess: 0.7.0 - pure-eval: 0.2.2 - pydantic: 2.7.1 - pydantic-core: 2.18.2 - pygments: 2.18.0 - pyparsing: 3.1.2 - python-dateutil: 2.9.0.post0 - pytorch-lightning: 2.2.4 - pytorch-metric-learning: 2.5.0 - pytz: 2024.1 - pyyaml: 6.0.1 - pyzmq: 26.0.3 - requests: 2.31.0 - rich: 13.7.1 - s3transfer: 0.10.1 - safetensors: 0.4.3 - scikit-image: 0.23.2 - scikit-learn: 1.4.2 - scipy: 1.13.0 - seaborn: 0.13.2 - sentry-sdk: 2.1.1 - setproctitle: 1.3.3 - setuptools: 69.5.1 - six: 1.16.0 - smmap: 5.0.1 - stack-data: 0.6.3 - sympy: 1.12 - thop: 0.1.1.post2209072238 - threadpoolctl: 3.5.0 - tifffile: 2024.5.10 - timm: 0.9.16 - tomli: 2.0.1 - torch: 2.3.0 - torchmetrics: 1.4.0.post0 - torchvision: 0.18.0 - tornado: 6.4 - tqdm: 4.66.4 - traitlets: 5.14.3 - triton: 2.3.0 - typing-extensions: 4.11.0 - tzdata: 2024.1 - urllib3: 2.2.1 - wandb: 0.17.0 - wcwidth: 0.2.13 - yarl: 1.9.4
* System:
  - OS: Linux
  - architecture:
    - 64bit
    - ELF
  - processor: x86_64
  - python: 3.10.12
  - release: 5.15.0-119-generic
  - version: #129-Ubuntu SMP Fri Aug 2 19:25:20 UTC 2024

</details>

### More info

I also tried to tackle this issue using the BatchSizeFinder callback, but it persists:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split
import numpy as np
from lightning.pytorch.trainer import Trainer
from lightning.pytorch import LightningDataModule, LightningModule
from lightning.pytorch.callbacks import BatchSizeFinder
from torchmetrics import Accuracy
from lightning.pytorch.tuner import Tuner


class SimpleModel(LightningModule):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3)
        self.fc = nn.Linear(16 * 766 * 766, 10)
        self.train_accuracy = Accuracy(task="multiclass", num_classes=10)
        self.val_accuracy = Accuracy(task="multiclass", num_classes=10)
        self.test_accuracy = Accuracy(task="multiclass", num_classes=10)

    def forward(self, x):
        x = self.conv(x)
        x = nn.functional.relu(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = nn.functional.cross_entropy(y_hat, y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = nn.functional.cross_entropy(y_hat, y)
        self.log("val_loss", loss)
        self.val_accuracy(y_hat, y)

    def test_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self(x)
        loss = nn.functional.cross_entropy(y_hat, y)
        self.log("test_loss", loss)
        self.test_accuracy(y_hat, y)

    def on_train_epoch_end(self):
        self.log("train_accuracy", self.train_accuracy.compute())
        self.train_accuracy.reset()

    def on_validation_epoch_end(self):
        self.log("val_accuracy", self.val_accuracy.compute())
        self.val_accuracy.reset()

    def on_test_epoch_end(self):
        self.log("test_accuracy", self.test_accuracy.compute())
        self.test_accuracy.reset()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.001)


class MyDataModule(LightningDataModule):
    def __init__(self, batch_size=32, num_samples=10000, val_split=0.2, test_split=0.3):
        super().__init__()
        self.batch_size = batch_size
        self.num_samples = num_samples
        self.val_split = val_split
        self.test_split = test_split
        self.dataset = None
        self.train_dataset = None
        self.val_dataset = None
        self.test_dataset = None

    def create_dataset(self):
        x = np.random.rand(self.num_samples, 3, 768, 768).astype(
            np.float32
        )  # 3 channels, 768x768 images
        y = np.random.randint(0, 10, size=(self.num_samples,))  # 10 classes
        return TensorDataset(torch.tensor(x), torch.tensor(y))

    def setup(self, stage=None):
        self.dataset = self.create_dataset()
        total_size = len(self.dataset)
        val_size = int(total_size * self.val_split)
        test_size = int(total_size * self.test_split)
        train_size = total_size - val_size - test_size
        self.train_dataset, self.val_dataset, self.test_dataset = random_split(
            self.dataset, [train_size, val_size, test_size]
        )

    def train_dataloader(self):
        return DataLoader(self.train_dataset, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_dataset, batch_size=self.batch_size)

    def test_dataloader(self):
        return DataLoader(self.test_dataset, batch_size=self.batch_size)


def main():
    data_module = MyDataModule()
    model = SimpleModel()
    batch_size_finder_callback = BatchSizeFinder(
        mode="binsearch",
        steps_per_trial=3,
        init_val=2,
        max_trials=25,
        batch_arg_name="batch_size",
    )
    trainer = Trainer(max_epochs=1, devices=1, callbacks=[batch_size_finder_callback])
    # Auto-scale batch size with binary search
    trainer.fit(model, datamodule=data_module, ckpt_path=None)
    trainer.test(datamodule=data_module, ckpt_path="best")


if __name__ == "__main__":
    main()
```
open
2024-08-26T20:13:58Z
2024-10-23T17:32:09Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20231
[ "bug", "needs triage", "ver: 2.4.x" ]
rittik9
0
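A pattern worth noting in the report above: each tuner trial runs only `steps_per_trial=3` optimizer steps, so the largest "succeeding" batch size (396, barely below the 398 that failed) leaves almost no headroom for the full run, where validation, metric state, and allocator fragmentation add memory. A common workaround (an assumption for illustration, not a documented Lightning fix) is to apply a safety margin to the tuner's result before fitting:

```python
def with_safety_margin(found_batch_size, margin=0.9):
    """Scale a tuner-suggested batch size down by a safety margin so the
    full training run has headroom that short tuning trials lack.
    Always returns at least 1."""
    return max(1, int(found_batch_size * margin))
```

For the run above this would mean setting `data_module.batch_size = with_safety_margin(396)` before calling `trainer.fit`.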
mirumee/ariadne
graphql
660
snake_case_fallback_resolvers not calling obj.get(attr_name)
**Ariadne version:** 0.13.0
**Python version:** 3.8.11

Hello. I am using the [databases](https://www.encode.io/databases/) package with an [asyncpg](https://magicstack.github.io/asyncpg/current/) backend to interact with a PostgreSQL database. The objects returned from my queries are of the type `databases.backends.postgres.Record`. The desired attributes can only be accessed via the `get` method. However, when I use `snake_case_fallback_resolvers`, Ariadne has trouble resolving the requested fields and I receive the following error: `Cannot return null for non-nullable field`.

If I instead use the regular `fallback_resolvers` (adjusting my schema's naming conventions), Ariadne is able to resolve the requested fields. Is this a bug, or am I doing something wrong? Thank you for your time.
closed
2021-08-31T22:54:18Z
2021-09-03T22:52:35Z
https://github.com/mirumee/ariadne/issues/660
[ "enhancement", "roadmap" ]
RodrigoTMOLima
1
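The behavior the reporter expected can be sketched as a resolver that converts the camelCase field name to snake_case, tries attribute access, and then falls back to `.get()` for mapping-like records. This is an illustrative sketch of that expectation, not Ariadne's actual implementation (and the real resolver signature also receives a GraphQL `info` object):

```python
import re

def to_snake_case(name):
    """Convert a camelCase GraphQL field name to snake_case."""
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()

def resolve_field(obj, field_name):
    """Try attribute access first, then fall back to obj.get() for
    mapping-like records such as databases.backends.postgres.Record."""
    attr = to_snake_case(field_name)
    try:
        return getattr(obj, attr)
    except AttributeError:
        get = getattr(obj, "get", None)
        return get(attr) if callable(get) else None
```

The `.get()` fallback is exactly what the issue title asks for: without it, dict-like records resolve to `None` and non-nullable fields fail.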
biolab/orange3
data-visualization
6001
Recursive imputer
Use case: We have a model with 200 features. We apply a "Model-based imputer (simple tree)" to fill in missing data.

Problem: The Impute widget fills in only part of the missing data (that's the limit of the default 1-NN regressor used).

Current workaround: We have chained 5 instances of the same imputer — that is, as many as needed to "complete" the imputing procedure for our data. At each stage more data are imputed, up until the point where the regressor cannot produce further predictions.

Proposed solution: Add a check-box in the Impute widget to activate iteration. The process will be repeated, leveraging the imputed data from the previous iteration, looping until no more changes are produced.
closed
2022-06-02T15:41:36Z
2022-09-30T08:49:45Z
https://github.com/biolab/orange3/issues/6001
[ "bug" ]
hydrastarmaster
5
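The proposed iteration can be sketched generically: apply one imputation pass, and repeat while a pass still fills values, stopping when nothing changes or everything is filled. This is a hedged sketch of the loop's termination logic, not Orange's actual imputer code; `None` stands in for a missing entry and `impute_pass` for one run of the widget:

```python
def impute_until_stable(rows, impute_pass, max_iters=10):
    """Repeatedly apply a single imputation pass until it stops filling
    values. `impute_pass` takes and returns a list of rows; None marks a
    missing entry. This mirrors chaining several Impute widgets."""
    for _ in range(max_iters):
        missing_before = sum(v is None for row in rows for v in row)
        rows = impute_pass(rows)
        missing_after = sum(v is None for row in rows for v in row)
        # Stop when a pass makes no progress, or nothing is left to fill.
        if missing_after == missing_before or missing_after == 0:
            break
    return rows
```

The `max_iters` cap guards against an imputer that keeps revising values without converging.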
QuivrHQ/quivr
api
3,192
Remove knowledge from brain
* Shouldn't remove the knowledge from the KMS.
* If the KM is a folder, remove all of its children from the brain as well.
* Check what rules apply for syncs vs. local knowledge.
closed
2024-09-11T12:53:40Z
2024-12-30T00:26:22Z
https://github.com/QuivrHQ/quivr/issues/3192
[ "Stale", "area: backend" ]
linear[bot]
2
mlfoundations/open_clip
computer-vision
505
Information about BiomedCLIP-PubMedBERT config
I saw that a pull request had been made for the BiomedCLIP model, and I can see that it has been merged. However, if I try to load the model using `model, _, preprocess_val = open_clip.create_model_and_transforms("BiomedCLIP-PubMedBERT_256-vit_base_patch16_224")` I get the error `RuntimeError: Model config for BiomedCLIP-PubMedBERT_256-vit_base_patch16_224 not found.`

On a related note, I can see that the model is available among the Hugging Face pretrained checkpoints. However, it seems that when these models are converted to the Hugging Face format, some additional wrappers are created. Is it possible to do the reverse, i.e. download the Hugging Face CLIP model weights and convert them to OpenCLIP? Thanks.
closed
2023-04-22T13:36:55Z
2023-06-29T20:37:44Z
https://github.com/mlfoundations/open_clip/issues/505
[]
chinmay5
4
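The `Model config ... not found` error is consistent with the model living on the Hugging Face Hub rather than in open_clip's built-in config registry; open_clip can load Hub-hosted models through an `hf-hub:`-prefixed identifier. The repo id below is an assumption for illustration; check the model card for the exact id:

```python
def hf_hub_model_id(repo_id):
    """Build the identifier open_clip expects for models hosted on the
    Hugging Face Hub instead of its built-in config registry."""
    return f"hf-hub:{repo_id}"

# Hypothetical usage (requires open_clip and network access):
# import open_clip
# model, _, preprocess = open_clip.create_model_and_transforms(
#     hf_hub_model_id("microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224")
# )
```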
robotframework/robotframework
automation
4,931
Collapse long failure messages in log and report
Currently, long failure messages (over 40 lines by default, configurable with `--max-error-lines` (#2576)) are cut from the middle. This is done to avoid huge messages messing up logs and reports, but the problem is that some valuable information may be lost. Another issue is that even the resulting messages are somewhat long and take a lot of space.

The above is an old problem, but the situation is getting worse in RF 7.0 due to failure messages being shown not only with tests, but also with each keyword and control structure. Earlier, keywords and control structures in the result model didn't have a message at all, but it was added as part of the result model cleanup (#4883). The motivation was this:

- We are adding a JSON representation to the result model (#4847) and want the model to be as stable and future-proof as possible.
- We likely want to allow, in the future, running individual keywords outside Robot core. At that point we want the result model to have a message, not only a status as earlier.
- In some special cases (at least with `--flatten-keywords` and `--remove-keywords`) we want to add some extra notes to result objects. Earlier we used the documentation for that, but it was odd because control structures such as FOR loops cannot otherwise have documentation. Using the message for this purpose works much better.

Now that keywords and control structures also have a message, the same message can be shown on multiple levels in the log file. That's rather annoying in general, but gets especially irritating if the message is long. To mitigate this issue, and to fix the old issue with long messages, I propose we do the following:

1. Show only the beginning of long failure messages in the log and report by default. I believe we should show enough that typical messages appear in full, but considerably less than the 40 lines that is the current maximum. We could possibly also show more with tests than with keywords and control structures.
2. Have some way to show the full message. Probably a simple "Show more" button/link would be fine.
3. Stop cutting long messages otherwise. This can increase output file sizes, but I doubt the difference is too big.
open
2023-11-06T17:11:28Z
2024-03-25T22:19:45Z
https://github.com/robotframework/robotframework/issues/4931
[ "enhancement", "priority: medium", "effort: medium" ]
pekkaklarck
3
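The proposal's core mechanic — show the head of a long message and collapse the rest behind a "Show more" control, discarding nothing — can be sketched as a simple split helper (an illustrative sketch, not Robot Framework's code; the `head_lines` default is an assumption):

```python
def collapse_message(message, head_lines=8):
    """Split a failure message into the part shown by default and the
    remainder hidden behind a "Show more" control. Unlike the current
    --max-error-lines behavior, nothing is discarded: the tail is only
    collapsed, so no information is lost."""
    lines = message.splitlines()
    if len(lines) <= head_lines:
        return message, ""
    return "\n".join(lines[:head_lines]), "\n".join(lines[head_lines:])
```

A renderer would show the first element always and reveal the second on demand.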
huggingface/datasets
deep-learning
7,320
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
### Describe the bug

I am trying to create a PEFT model from a DistilBERT model and run a training loop. However, `trainer.train()` is giving me this error:

`ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']`

Here is my code:

### Steps to reproduce the bug

```python
# Creating a PEFT config
from peft import LoraConfig
from peft import get_peft_model
from transformers import AutoTokenizer, AutoModelForSequenceClassification

lora_config = LoraConfig(
    task_type="SEQ_CLASS",
    r=8,
    lora_alpha=32,
    target_modules=["q_lin", "k_lin", "v_lin"],
    lora_dropout=0.01,
)

# Converting a Transformers model into a PEFT model
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # Binary classification, 1 = positive, 0 = negative
)
lora_model = get_peft_model(model, lora_config)
print(lora_model)

# Tokenize the dataset
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Load the train and test splits
dataset = load_dataset("fancyzhx/amazon_polarity")

# Create a smaller subset for train and test
subset_size = 5000
small_train_dataset = dataset["train"].shuffle(seed=42).select(range(subset_size))
small_test_dataset = dataset["test"].shuffle(seed=42).select(range(subset_size))

# Tokenize data
def tokenize_function(example):
    return tokenizer(example["content"], padding="max_length", truncation=True)

tokenized_train_dataset = small_train_dataset.map(tokenize_function, batched=True)
tokenized_test_dataset = small_test_dataset.map(tokenize_function, batched=True)

train_lora = tokenized_train_dataset.rename_column("label", "labels")
test_lora = tokenized_test_dataset.rename_column("label", "labels")

print(tokenized_train_dataset.column_names)
print(tokenized_test_dataset.column_names)

# Train the PEFT model
import numpy as np
from transformers import Trainer, TrainingArguments, default_data_collator, DataCollatorWithPadding

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {"accuracy": (predictions == labels).mean()}

trainer = Trainer(
    model=lora_model,
    args=TrainingArguments(
        output_dir=".",
        learning_rate=2e-3,
        # Reduce the batch size if you don't have enough memory
        per_device_train_batch_size=1,
        per_device_eval_batch_size=1,
        num_train_epochs=3,
        weight_decay=0.01,
        evaluation_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,
    ),
    train_dataset=tokenized_train_dataset,
    eval_dataset=tokenized_test_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt"),
    compute_metrics=compute_metrics,
)

trainer.train()
```

### Expected behavior

Example of output:

```
[558/558 01:04, Epoch XX]
Epoch | Training Loss | Validation Loss | Accuracy
1     | No log        | 0.046478        | 0.988341
2     | 0.052800      | 0.048840        | 0.988341
```

### Environment info

Using Python and a Jupyter notebook.
closed
2024-12-10T20:23:11Z
2024-12-10T23:22:23Z
https://github.com/huggingface/datasets/issues/7320
[]
atrompeterog
1
zappa/Zappa
django
510
[Migrated] Be able to provide the path of the task as a string
Originally from: https://github.com/Miserlou/Zappa/issues/1333 by [michelorengo](https://github.com/michelorengo) ## Description This is to be able to provide the function/task path as a string (and not rely on inspection) ## GitHub Issues #1332
closed
2021-02-20T09:43:42Z
2022-07-16T07:20:49Z
https://github.com/zappa/Zappa/issues/510
[]
jneves
1
mwaskom/seaborn
matplotlib
3,544
`CategoricalPlotter` broke statannotations
Hi, I am using [`statannotations`](https://github.com/trevismd/statannotations) toolbox to perform statistics and add p<sub>value</sub> on a violin plot but it stopped working with `seaborn v0.13`. Here is the issue faced: `statannotations` relies on `seabon v0.11`, you can push up to `v0.12.2` but not `v0.13` as this last version introduced quite some significant changes. The error message given by the package is the following [Categorical issue seaborn #133](https://github.com/trevismd/statannotations/issues/133). Here is the code that isn't working with `seaborn v0.13`: ```python plotter = sns.categorical._ViolinPlotter(x, y, hue, data, order, hue_order, bw = plot_params.get("bw", "scott"), cut = plot_params.get("cut", 2), scale = plot_params.get("scale", "area"), scale_hue = plot_params.get("scale_hue", True), gridsize = plot_params.get("gridsize", 100), width = plot_params.get("width", 0.8), inner = plot_params.get("inner", None), split = plot_params.get("split", False), dodge = True, orient=plot_params.get("orient"), linewidth = plot_params.get("linewidth"), color = None, palette = None, saturation = .75) ``` Now I tried to make it work with `seaborn v0.13`, without success: ```python plotter = sns.categorical._CategoricalPlotter.plot_violins(self = sns.categorical._CategoricalPlotter(), dodge = True, common_norm = True, density_norm = "area", inner = None, inner_kws = True, kde_kws = True, plot_kws = True, fill = False, split = True, linewidth = .5, linecolor = "black", color = 'palette', gap = True, width = 0.8) ``` If anyone could help me with this, or if anyone has another method to perform a statistical test (t-test and ANOVA) and then add the p<sub>value</sub> on a violin plot, I'ld take it! Thank you
closed
2023-11-02T16:18:18Z
2024-09-04T14:15:31Z
https://github.com/mwaskom/seaborn/issues/3544
[]
sjg2203
5
keras-team/keras
deep-learning
20,390
Empty model history logs when training with large `steps_per_execution`
Using the tensorflow backend, training with `steps_per_execution` larger than the training set results in empty logs. Code to reproduce: `!pip install keras-nightly` ```python import numpy as np import os os.environ["KERAS_BACKEND"] = "tensorflow" import keras x = np.ones((10, 4)) y = np.ones((10, 1)) input = keras.Input(shape=[4]) output = keras.layers.Dense(1, activation='relu')(input) model = keras.Model(inputs=input, outputs=output) model.compile( loss="mse", optimizer="adam", steps_per_execution=20, ) epochs = 2 history = model.fit( x=x, y=y, batch_size=2, epochs=epochs, verbose=0, ) print(history.history) ``` Output: ``` {} /usr/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset. self.gen.throw(typ, value, traceback) ```
closed
2024-10-22T09:21:17Z
2024-10-22T23:36:49Z
https://github.com/keras-team/keras/issues/20390
[]
nicolaspi
0
python-restx/flask-restx
api
172
No swagger related files after packaging
**Ask a question** No swagger related files after packaging **Additional context** I pulled the source code of the project, packaged it through "python setup.py sdist bdist_wheel", and found that there are no swagger related files in the package. As a result, http://127.0.0.1:8080/swagger is not accessible.
open
2020-07-09T06:38:08Z
2020-09-14T19:45:01Z
https://github.com/python-restx/flask-restx/issues/172
[ "question" ]
somta
1
babysor/MockingBird
deep-learning
388
Does this only learn the voice timbre? Can it also learn speaking style and tone?
open
2022-02-14T04:39:02Z
2022-02-17T09:05:41Z
https://github.com/babysor/MockingBird/issues/388
[]
SeedKunY
1
Sanster/IOPaint
pytorch
292
[BUG] When ControlNet is enabled I only get glitched/corrupted results in the inpainting area
**Model** Realistic Vision 1.4 **Describe the bug** When ControlNet is enabled I only get glitched/corrupted results in the inpainting area **System Info** Software version used - lama-cleaner: 1.1.2 - pytorch: 2.0.0 I'm using mps, latest Mac OS Ventura 13.3.1 Screenshot: ![Arc - lama-cleaner - Image inpainting powered by SOTA AI model - 23-04-2023 - 20 57 15@2x](https://user-images.githubusercontent.com/11060179/233862397-91b0ca49-809b-4e80-aa09-26ef010002b3.jpg) Question: What ControlNet model is being used? I've used ControlNet with Auto1111 since it first became available and find they all have their uses! Would be good to get more control. Thanks
open
2023-04-23T19:58:43Z
2023-04-25T01:26:25Z
https://github.com/Sanster/IOPaint/issues/292
[]
alexzadeh
1
huggingface/transformers
python
36,124
Speaker Verification: All Speakers Getting Perfect 1.000 Similarity Scores
### System Info ### Bug Report <!-- Important information --> Model name (e.g. bert-base-cased): pyannote/embedding Language (if applicable): English Framework (PyTorch, TensorFlow, etc...): PyTorch ### Description Using pyannote/embedding for speaker verification, getting perfect similarity scores (1.000) for all speakers, even between obviously different voices in an audiobook. ### Code To Reproduce The Issue python import torch import torchaudio from pyannote.audio import Model import torch.nn.functional as F Setup device = torch.device("cuda") embedding_model = Model.from_pretrained("pyannote/embedding", use_auth_token='xxx').to(device) Load and process reference audio reference_waveform, sample_rate = torchaudio.load("reference.flac") reference_waveform = reference_waveform.mean(dim=0, keepdim=True).to(device) reference_features = embedding_model(reference_waveform.unsqueeze(0)) reference_features = F.normalize(reference_features, p=2, dim=1) Load test audio segment test_waveform, = torchaudio.load("test.flac") test_waveform = test_waveform.mean(dim=0, keepdim=True).to(device) test_embedding = embedding_model(test_waveform.unsqueeze(0)) test_embedding = F.normalize(test_embedding, p=2, dim=1) Calculate similarity similarity = F.cosine_similarity(reference_features, test_embedding, dim=1).mean() print(f"Similarity: {similarity.item():.6f}") ### Expected Results Different speakers should have varying similarity scores below 1.000 ### Actual Results All speakers get perfect 1.000 similarity scores: - Speaker A vs Reference: 1.000000 - Speaker B vs Reference: 0.999998 - Speaker C vs Reference: 1.000000 ### Environment - pyannote.audio: 3.1.1 - torch: 2.5.1+cu124 - Platform: Google Colab (Ubuntu Linux) - CUDA: Yes - GPU: Tesla T4 - Python: 3.11 - torchaudio: 2.5.1+cu124 ### Additional Context - Using professional audiobook with distinct voices - Reference is 10-minute high-quality audio - Testing with 4-hour audiobook - Consistent 1.000 similarity across all 
different speakers ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Install dependencies: pip install pyannote.audio==3.1.1 torch==2.5.1+cu124 torchaudio==2.5.1+cu124 2. Use reference audio (10-minute FLAC file) and test audio (different speaker, FLAC file) 3. Run the provided code: - Load model and audio files - Extract embeddings - Calculate similarity 4. Observe that similarity scores are always 1.000 regardless of speaker differences Full code provided in the description above. This can be reproduced with any two different speakers' audio files. ### Expected behavior The similarity scores should: - Be less than 1.000 for different speakers - Show variation between different voices - Have lower scores for more dissimilar voices - Only approach 1.000 for the same speaker Instead, we're getting perfect 1.000 similarity scores for all speakers, even between obviously different voices (male/female) from a professional audiobook.
closed
2025-02-10T20:58:01Z
2025-03-21T08:04:37Z
https://github.com/huggingface/transformers/issues/36124
[ "bug" ]
misterpathologist
2
alpacahq/alpaca-trade-api-python
rest-api
660
[Bug]: asyncio.run() cannot be called from a running event loop
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior When trying to run historic_async.py in Spyder, I keep getting the asyncio.run() running event loop error. I am running Python 3.8. Could you kindly advise and help? ### Expected Behavior Expecting price data for 200 symbols in the list [symbols] ### SDK Version I encountered this issue in python version 3.8.5 ### Steps To Reproduce ```markdown 1.running historic_async.py 2.getting error RuntimeError: asyncio.run() cannot be called from a running event loop ``` ### Filled out the Steps to Reproduce section? - [X] I have entered valid steps to reproduce my issue or have attached a minimally reproducible case in code that shows my issue happening; and understand that without this my issue will be flagged as invalid and closed after 30 days. ### Anything else? _No response_
open
2022-10-03T14:39:21Z
2023-01-26T07:12:43Z
https://github.com/alpacahq/alpaca-trade-api-python/issues/660
[]
mehtap198
2
LibreTranslate/LibreTranslate
api
687
HelloGitHub Badge
Hi, We're thrilled to share that [your project](https://hellogithub.com/en/repository/a414dc09995f4b5188cf5acbe54c9107) has caught the attention of the HelloGitHub community and has been recognized for its merit. Your work is truly inspiring, and we'd like to invite you to participate in our [HelloGitHub Badge Program](https://hellogithub.com/en/repository/a414dc09995f4b5188cf5acbe54c9107/embed). By joining, you'll enjoy a few benefits that we believe will complement your project's journey: 1. **Community Acknowledgement**: The badge serves as a mark of distinction, indicating that your project has met the community's criteria for recommendation. 2. **Visibility Boost**: Wearing the badge can help increase the visibility of your project, potentially drawing the attention of new users and contributors. 3. **Easier Engagement**: The badge acts as a quick reference for users to understand your project, facilitating interaction through likes, comments, and bookmarks. 4. **Feedback Opportunities**: It opens the door for valuable feedback from a diverse user base, which can be instrumental in your project's growth and refinement. 5. **Notable Recognition**: Verified participants will receive a special identifier, highlighting their contributions within the community. We invite you to consider joining the [HelloGitHub Badge Program](https://hellogithub.com/en/repository/a414dc09995f4b5188cf5acbe54c9107/embed) as a way to further engage with the community and enhance your project's presence. It's completely up to you, and we respect your decision either way. Thank you for being part of this journey with us. Warm regards, The HelloGitHub Team
closed
2024-10-03T03:53:54Z
2024-10-03T14:59:33Z
https://github.com/LibreTranslate/LibreTranslate/issues/687
[]
521xueweihan
0
mage-ai/mage-ai
data-science
5,408
[BUG] Block fails to update metadatabase when running time is greater than idle_in_transaction_session_timeout
### Mage version 0.9.73 ### Describe the bug We are using PostgreSQL as metadata storage for MageAI, our current orchestration tool. In the orchestrated pipelines, we have some blocks that can run for hours, in extreme cases, up to one day, as they orchestrate the processing of a massive amount of data. What has been happening is that when a pipeline block runs for more than an hour, it ends up returning an idle-transaction error, and Mage is unable to update the block to the "SUCCESS" status, so the block restarts. It was found that the idle_in_transaction_session_timeout parameter is configured for 1 hour on the instance in use. Is it possible to change the behaviour of those sessions opened by blocks to update block status in the MageAI metadata database? ### To reproduce Create a block that runs for longer than the time set in the `idle_in_transaction_session_timeout` parameter in the Mage database. ### Expected behavior _No response_ ### Screenshots ![image](https://github.com/user-attachments/assets/04233b46-1f5b-4910-9c1f-bae64de9e067) ![image](https://github.com/user-attachments/assets/0015f5d7-4ceb-4a37-b44f-b6d00f0cf667) ### Operating system _No response_ ### Additional context _No response_
open
2024-09-11T21:11:58Z
2024-09-13T12:00:07Z
https://github.com/mage-ai/mage-ai/issues/5408
[ "bug" ]
messerzen
0
geopandas/geopandas
pandas
3,239
REGR: incorrect order of left sjoin with within predicate
I think that #2353 has caused a regression when doing left join with within predicate: ```py pts = gpd.GeoDataFrame(geometry=gpd.points_from_xy(*np.random.rand(2, 10))) polys = gpd.GeoDataFrame({"id":[1, 2, 3, 4]}, geometry=[ box(0, 0, .5, .5), box(0, .5, .5, 1), box(.5, 0, 1, .5), box(.5, .5, 1, 1) ]) pts.sjoin(polys, predicate="within", how="left") geometry index_right id 9 POINT (0.12418 0.11896) 0 1 1 POINT (0.22954 0.40218) 0 1 5 POINT (0.42634 0.49957) 0 1 8 POINT (0.34231 0.94855) 1 2 4 POINT (0.93312 0.02055) 2 3 2 POINT (0.99042 0.32572) 2 3 3 POINT (0.6554 0.69657) 3 4 6 POINT (0.82468 0.79515) 3 4 7 POINT (0.59083 0.88989) 3 4 0 POINT (0.79659 0.89147) 3 4 ``` The join is correct but the order of rows is not. Any other combination I tried seems okay. The result is apparently sorted by the index of right rather than the index of left, so it seems that the swap of left/right we do for `"within"` is incorrect at some point. We should try to fix this before 1.0-alpha1.
closed
2024-04-02T15:24:08Z
2024-04-13T13:36:14Z
https://github.com/geopandas/geopandas/issues/3239
[]
martinfleis
1
napari/napari
numpy
7,701
Review these items for 0.6.0a1
# Follow up items - [ ] remove deprecated items #7550 - [ ] #7355 - [ ] Review feedback from #7700 - [ ] #7683 - [ ] #7665 - [ ] #7149
open
2025-03-14T23:48:11Z
2025-03-15T01:14:51Z
https://github.com/napari/napari/issues/7701
[ "feature" ]
willingc
0
BlinkDL/RWKV-LM
pytorch
290
RWKV-6 and newer
To bring more awareness and adoption of RWKV, would it be possible to get benchmark scores on the Huggingface LLM leaderboard or on the model cards themselves (for RWKV-6 and newer)? https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/ Looks like they currently track IFEval, BBH, MATH, GPQA, MUSR and MMLU-Pro
open
2025-02-09T23:00:45Z
2025-02-16T10:02:28Z
https://github.com/BlinkDL/RWKV-LM/issues/290
[]
DIGist
1
strawberry-graphql/strawberry
django
3,466
Add better support for nested generics
This snippet is broken, because we don't check nested type var, we have done something similar here: https://github.com/strawberry-graphql/strawberry/pull/3463 ```python import strawberry @strawberry.type class Wrapper[T]: value: T @strawberry.type class BlockRow[T]: item: Wrapper[T] @strawberry.type class Query: @strawberry.field def blocks(self) -> list[BlockRow[str] | BlockRow[int]]: return [ BlockRow(item=Wrapper(value="Hello")), BlockRow(item=Wrapper(value=1)), ] schema = strawberry.Schema(query=Query) ```
open
2024-04-20T20:11:11Z
2025-03-20T15:56:42Z
https://github.com/strawberry-graphql/strawberry/issues/3466
[]
patrick91
0
paperless-ngx/paperless-ngx
django
8,317
[BUG] trying to modify a specific correspondent item freezes the web page
### Description I have a correspondent in my list that causes the web page to freeze whenever I try to edit any correspondent. ![paperless](https://github.com/user-attachments/assets/01c9975c-6bb5-427b-a114-58137d8e0d3b) In the screenshot above, it's the last one. As soon as I apply a filter to remove this correspondent from the list, I can edit other items without any issues. ![paperless2](https://github.com/user-attachments/assets/fbe26bb2-235a-413d-b7ba-21868d293fc7) Since I added this correspondent (the one causing the freeze), any newly added correspondents are no longer appearing in the list, even though they are used elsewhere in the app. ### Steps to reproduce 1. go to the correspondant List 2. by default try to edit or open a correspondant 3. the interface freeze (not repsond anymore) 4. refresh the browser 5. you get access back to the app in the same page ### Webserver logs ```bash no specific log appear in logs.. ``` ### Browser logs ```bash ERROR se: NG02100 at zr (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:190493) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:200329) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:688621) at valueFn (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1822035) at n5t (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1817480) at cS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:65312) at Hj (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:75337) at Um (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76711) at OS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76533) at AS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76465) handleError @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 runOutsideAngular @ main.js:3 push.516.ge.factory @ main.js:3 _tick @ main.js:3 tick @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 onInvoke @ 
main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 run @ main.js:3 next @ main.js:3 next @ main.js:3 _next @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 next @ main.js:3 emit @ main.js:3 W4 @ main.js:3 Bb @ main.js:3 onInvokeTask @ main.js:3 invokeTask @ polyfills.js:1 runTask @ polyfills.js:1 invokeTask @ polyfills.js:1 Z @ polyfills.js:1 x @ polyfills.js:1 U @ polyfills.js:1 load (anonymous) @ polyfills.js:1 scheduleTask @ polyfills.js:1 onScheduleTask @ polyfills.js:1 scheduleTask @ polyfills.js:1 scheduleTask @ polyfills.js:1 scheduleEventTask @ polyfills.js:1 (anonymous) @ polyfills.js:1 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 C @ main.js:3 m @ main.js:3 (anonymous) @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 Xne @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 reloadData @ main.js:3 ngOnInit @ main.js:3 wb @ main.js:3 KB @ main.js:3 Cb @ main.js:3 ep @ main.js:3 Hj @ main.js:3 Um @ 
main.js:3 OS @ main.js:3 AS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 Bj @ main.js:3 yS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 AS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 Bj @ main.js:3 yS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 $j @ main.js:3 Dp @ main.js:3 TQ @ main.js:3 synchronizeOnce @ main.js:3 synchronize @ main.js:3 _tick @ main.js:3 tick @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 onInvoke @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 run @ main.js:3 next @ main.js:3 next @ main.js:3 _next @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 next @ main.js:3 emit @ main.js:3 W4 @ main.js:3 onHasTask @ main.js:3 hasTask @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 runTask @ polyfills.js:1 K @ polyfills.js:1 invokeTask @ polyfills.js:1 Z @ polyfills.js:1 x @ polyfills.js:1 U @ polyfills.js:1Understand this errorAI main.js:3 ERROR se: NG02100 at zr (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:190493) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:200329) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:688621) at valueFn (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1822035) at n5t (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1817480) at cS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:65312) at Hj (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:75337) at Um (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76711) at OS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76533) at AS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76465) handleError @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 runOutsideAngular @ main.js:3 push.516.ge.factory @ main.js:3 _tick @ main.js:3 tick @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 
onInvoke @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 run @ main.js:3 next @ main.js:3 next @ main.js:3 _next @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 next @ main.js:3 emit @ main.js:3 W4 @ main.js:3 onHasTask @ main.js:3 hasTask @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 runTask @ polyfills.js:1 K @ polyfills.js:1 invokeTask @ polyfills.js:1 Z @ polyfills.js:1 x @ polyfills.js:1 U @ polyfills.js:1 load (anonymous) @ polyfills.js:1 scheduleTask @ polyfills.js:1 onScheduleTask @ polyfills.js:1 scheduleTask @ polyfills.js:1 scheduleTask @ polyfills.js:1 scheduleEventTask @ polyfills.js:1 (anonymous) @ polyfills.js:1 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 C @ main.js:3 m @ main.js:3 (anonymous) @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 Xne @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 reloadData @ main.js:3 ngOnInit @ main.js:3 wb @ 
main.js:3 KB @ main.js:3 Cb @ main.js:3 ep @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 AS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 Bj @ main.js:3 yS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 AS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 Bj @ main.js:3 yS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 $j @ main.js:3 Dp @ main.js:3 TQ @ main.js:3 synchronizeOnce @ main.js:3 synchronize @ main.js:3 _tick @ main.js:3 tick @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 onInvoke @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 run @ main.js:3 next @ main.js:3 next @ main.js:3 _next @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 next @ main.js:3 emit @ main.js:3 W4 @ main.js:3 onHasTask @ main.js:3 hasTask @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 runTask @ polyfills.js:1 K @ polyfills.js:1 invokeTask @ polyfills.js:1 Z @ polyfills.js:1 x @ polyfills.js:1 U @ polyfills.js:1Understand this errorAI main.js:3 ERROR se: NG02100 at zr (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:190493) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:200329) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:688621) at valueFn (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1822035) at n5t (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1817480) at cS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:65312) at Hj (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:75337) at Um (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76711) at OS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76533) at AS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76465) handleError @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 runOutsideAngular @ main.js:3 push.516.ge.factory @ main.js:3 _tick @ 
main.js:3 tick @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 onInvoke @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 run @ main.js:3 next @ main.js:3 next @ main.js:3 _next @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 next @ main.js:3 emit @ main.js:3 W4 @ main.js:3 Bb @ main.js:3 onInvokeTask @ main.js:3 invokeTask @ polyfills.js:1 runTask @ polyfills.js:1 invokeTask @ polyfills.js:1 E.invoke @ polyfills.js:1 (anonymous) @ polyfills.js:1 Z @ polyfills.js:1 x @ polyfills.js:1 U @ polyfills.js:1 load (anonymous) @ polyfills.js:1 scheduleTask @ polyfills.js:1 onScheduleTask @ polyfills.js:1 scheduleTask @ polyfills.js:1 scheduleTask @ polyfills.js:1 scheduleEventTask @ polyfills.js:1 (anonymous) @ polyfills.js:1 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 C @ main.js:3 m @ main.js:3 (anonymous) @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 _trySubscribe @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 Xne @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 subscribe @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 
subscribe @ main.js:3 reloadData @ main.js:3 ngOnInit @ main.js:3 wb @ main.js:3 KB @ main.js:3 Cb @ main.js:3 ep @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 AS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 Bj @ main.js:3 yS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 AS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 OS @ main.js:3 Bj @ main.js:3 yS @ main.js:3 Hj @ main.js:3 Um @ main.js:3 $j @ main.js:3 Dp @ main.js:3 TQ @ main.js:3 synchronizeOnce @ main.js:3 synchronize @ main.js:3 _tick @ main.js:3 tick @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 onInvoke @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 run @ main.js:3 next @ main.js:3 next @ main.js:3 _next @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 next @ main.js:3 emit @ main.js:3 W4 @ main.js:3 onHasTask @ main.js:3 hasTask @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 _updateTaskCount @ polyfills.js:1 runTask @ polyfills.js:1 K @ polyfills.js:1 invokeTask @ polyfills.js:1 Z @ polyfills.js:1 x @ polyfills.js:1 U @ polyfills.js:1Understand this errorAI 2main.js:3 ERROR se: NG02100 at zr (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:190493) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:200329) at t.transform (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:688621) at valueFn (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1822035) at n5t (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:1817480) at cS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:65312) at Hj (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:75337) at Um (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76711) at OS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76533) at AS (http://192.168.1.100:30070/static/frontend/fr-FR/main.js:3:76465) handleError @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 
runOutsideAngular @ main.js:3 push.516.ge.factory @ main.js:3 _tick @ main.js:3 tick @ main.js:3 (anonymous) @ main.js:3 invoke @ polyfills.js:1 onInvoke @ main.js:3 invoke @ polyfills.js:1 run @ polyfills.js:1 run @ main.js:3 next @ main.js:3 next @ main.js:3 _next @ main.js:3 next @ main.js:3 (anonymous) @ main.js:3 b_ @ main.js:3 next @ main.js:3 emit @ main.js:3 W4 @ main.js:3 Bb @ main.js:3 onInvokeTask @ main.js:3 invokeTask @ polyfills.js:1 runTask @ polyfills.js:1 invokeTask @ polyfills.js:1 Z @ polyfills.js:1 x @ polyfills.js:1 U @ polyfills.js:1Understand this errorAI ``` ### Paperless-ngx version 2.13.5 ### Host OS Truenas application (docker) ### Installation method Docker - official image ### System status _No response_ ### Browser Chrome ### Configuration changes _No response_ ### Please confirm the following - [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [X] I have already searched for relevant existing issues and discussions before opening this report. - [X] I have updated the title field above with a concise description.
closed
2024-11-19T22:00:59Z
2024-11-19T22:09:31Z
https://github.com/paperless-ngx/paperless-ngx/issues/8317
[ "not a bug" ]
Underscan007
0
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,253
Disabling of user not shown in audit log
Hi! Would it be possible to add the date of disabeling of a user to the audit log? Currently it does not show up there. The disabeling is only shown in the Users log - but still with no date. The User log GUI could also benefit of a checkmark to verify a n account is active or disabled (Like the MFA column)
open
2022-08-02T08:13:16Z
2023-02-10T14:36:46Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3253
[ "F: Audit Log" ]
schris-dk
0
zappa/Zappa
django
542
[Migrated] Pipenv (if desired)
Originally from: https://github.com/Miserlou/Zappa/issues/1435 by [kennethreitz42](https://github.com/kennethreitz42) Ran the tests, using only the provided `Pipfile.lock` — they mostly passed, so I assume everything is working as intended. ``` Ran 94 tests in 171.589s FAILED (errors=2) ``` Please dismiss if this isn't desired.
closed
2021-02-20T12:22:30Z
2022-07-16T07:13:47Z
https://github.com/zappa/Zappa/issues/542
[]
jneves
1
biolab/orange3
data-visualization
6,999
Crash on "Show help" and "Create report"
<!-- Thanks for taking the time to report a bug! If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3 To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability. --> **What's wrong?** <!-- Be specific, clear, and concise. Include screenshots if relevant. --> <!-- If you're getting an error message, copy it, and enclose it with three backticks (```). --> GUI crashes when clicking on "Show help button" or "Create and display a report" buttons inside a widget properties window. **How can we reproduce the problem?** <!-- Upload a zip with the .ows file and data. --> <!-- Describe the steps (open this widget, click there, then add this...) --> 1. Open "crash_report.ows" with Orange3 2. Double click on "Distributions" node 3. Click on "Show help" or "Create and display a report" button in the bottom left corner of the "Distributions - Orange" window. [crash_report.zip](https://github.com/user-attachments/files/18482866/crash_report.zip) **What's your environment?** <!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code --> - Operating system: **Linux Mint 22.1** - Orange version: **3.38.1** - How you installed Orange: 1. `conda create python=3.12 --yes --name orange3` 2. `conda activate orange3` 3. `conda config --add channels conda-forge` 4. `conda config --set channel_priority strict` 5. `conda install orange3` 6. `python -m Orange.canvas`
closed
2025-01-20T22:25:37Z
2025-01-24T08:20:06Z
https://github.com/biolab/orange3/issues/6999
[ "bug report" ]
GuidoBartoli
5
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,382
Warning: wandb package cannot be found. The option "--use_wandb" will result in error.
Warning: wandb package cannot be found. The option "--use_wandb" will result in error. How can I solve the problem? Thanks a lot.
closed
2022-02-22T06:30:48Z
2022-09-06T20:22:48Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1382
[]
raylu1314coding
6
vitalik/django-ninja
django
1,207
Returning alternate response objects
I have an endpoint that returns a profile details ``` @router.get("/profile/{id}", response=ProfileDetailsSchema) ``` to avoid leaking PPI, the ProfileDetailsSchema does not contain any fields containing personal details. PPI details are accessed via ``` @router.get("/profile/{id}/private", response=PrivateProfileSchema) ``` But to simplify the API I'd like to use a single endpoint that looks at the request.user to see if they should have access to the PPI or not. How would I specify the `response=` field to allow for either `ProfileDetailsSchema` or `PrivateProfileSchema` and then return the appropriate response...
open
2024-06-26T11:00:04Z
2024-06-27T16:00:56Z
https://github.com/vitalik/django-ninja/issues/1207
[]
nhi-vanye
1
python-gino/gino
asyncio
668
[question] Tornado fork mode with Gino
Hi, I am trying to create a REST API template based on Tornado and Gino and I would like your opinion/expertise on fork mode. Is it likely to pose any problems? When should I fork? After importing the models? Before the set_bind? Does it matter? What side effects could it have on the event loop?
closed
2020-05-10T17:09:48Z
2020-05-17T05:22:52Z
https://github.com/python-gino/gino/issues/668
[ "question" ]
flapili
2
pennersr/django-allauth
django
3,634
'SyncToAsync.__call__' was never awaited session_check(request)
After upgrading to v0.61.0, I can see spurious warnings in the logs: ``` .../lib/python3.12/site-packages/allauth/account/middleware.py:53: RuntimeWarning: coroutine 'SyncToAsync.__call__' was never awaited session_check(request) RuntimeWarning: Enable tracemalloc to get the object allocation traceback ``` Looking at https://github.com/pennersr/django-allauth/blob/main/allauth/account/middleware.py, I think there is indeed something fishy. In the async case: ``` _remove_dangling_login( request, response, sync_to_async(_session_check) ) ``` a coroutine function is passed as the third argument of `_remove_dangling_login`. That coroutine function is then called, but not awaited, on line 53: ``` session_check(request) ``` The awaitable result of the session_check call is lost and left to be collected by the GC. Hence the warning.
closed
2024-02-08T17:20:52Z
2024-02-09T09:57:41Z
https://github.com/pennersr/django-allauth/issues/3634
[]
stephane-martin
2
lukas-blecher/LaTeX-OCR
pytorch
8
Error: Index out of range in self during the model training
I tried to train the model on the CPU, but received the error below; I'm not sure what the cause could be. Loss: 1.0180: 2%|█▉ | 421/18013 [09:41<6:45:16, 1.38s/it] Traceback (most recent call last): File "train.py", line 94, in <module> train(args) File "train.py", line 52, in train loss = decoder(tgt_seq, mask=tgt_mask, context=encoded) File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/x_transformers/autoregressive_wrapper.py", line 102, in forward out = self.net(xi, **kwargs) File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/x_transformers/x_transformers.py", line 738, in forward x += self.pos_emb(x) File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/x_transformers/x_transformers.py", line 107, in forward return self.emb(n)[None, :, :] File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 156, in forward return F.embedding( File "/home/devops/Envs/latex_ocr/lib/python3.8/site-packages/torch/nn/functional.py", line 1916, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self
closed
2021-05-02T19:32:04Z
2021-05-03T15:33:02Z
https://github.com/lukas-blecher/LaTeX-OCR/issues/8
[]
GopinathCool
3
huggingface/datasets
computer-vision
6,853
Support soft links for load_datasets imagefolder
### Feature request Load_dataset from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated. ### Motivation Images are coming from a complex variety of sources and we'd like to be able to soft link directly from the originating folders as opposed to copying. Having a copy of the file ensures that there may be issues with image versioning as well as having double the amount of required disk space. ### Your contribution N/A
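One workaround sketch while symlinks aren't followed: resolve the links to their real targets first and hand the loader real paths. This snippet uses only the standard library; the `resolve_symlinked_images` helper is hypothetical, not part of `datasets`.

```python
import os
import tempfile
from pathlib import Path

def resolve_symlinked_images(folder):
    """Return real file paths for every entry in `folder`, following
    soft links -- a workaround for loaders that skip symlinks."""
    return sorted(str(Path(p).resolve()) for p in Path(folder).iterdir())

# demo: an image folder containing only a symlink to the real file
# (os.symlink needs extra privileges on Windows)
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
real = Path(src, "cat.png")
real.write_bytes(b"\x89PNG")
os.symlink(real, Path(dst, "cat.png"))

paths = resolve_symlinked_images(dst)
print(paths[0].endswith("cat.png"))  # True -- and it points into src, not dst
```

The resolved paths could then be fed to the loader directly (e.g. building a dataset from a path column and casting it to an image type), or copied/hard-linked into a staging folder for the imagefolder loader, at the cost of the duplication the request is trying to avoid.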
open
2024-04-30T22:14:29Z
2024-04-30T22:14:29Z
https://github.com/huggingface/datasets/issues/6853
[ "enhancement" ]
billytcl
0
jonra1993/fastapi-alembic-sqlmodel-async
sqlalchemy
32
Add a table self association example.
As the title says. In the near future, I will add the following data tables: ``` python class HeroComment: id: hero_id: user_id: content: parent_id: -> point to other HeroComment ``` But before that, I want to discuss the problem of API paths. Which of the following two forms is more appropriate? - /comment/hero - /hero/{hero_id}/comment
closed
2022-11-09T01:36:36Z
2023-02-12T01:30:13Z
https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/32
[]
dongfengweixiao
3
modin-project/modin
pandas
7,007
How to Use Modin with ExtensionArrays and Accessors?
Pandas has an extension framework, how do you register the accessors and extension arrays with modin? My API has lots of custom dtypes and accessors to do specialized analysis and it would be great if modin honored those. see: https://pandas.pydata.org/docs/development/extending.html
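For reference, this is how accessor registration looks in plain pandas (runnable as-is). Whether the same decorator takes effect after swapping `pandas` for `modin.pandas` is exactly what's being asked; recent modin versions expose a similar `register_dataframe_accessor` under `modin.pandas.api.extensions`, but verify that against your installed version.

```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, df):
        self._df = df

    @property
    def center(self):
        # mean of the lat/lon columns -- a toy "specialized analysis"
        return (self._df["lat"].mean(), self._df["lon"].mean())

df = pd.DataFrame({"lat": [0.0, 10.0], "lon": [0.0, 20.0]})
print(df.geo.center)  # (5.0, 10.0)
```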
closed
2024-03-05T11:48:48Z
2024-03-07T14:07:35Z
https://github.com/modin-project/modin/issues/7007
[ "question ❓" ]
achapkowski
3
aio-libs-abandoned/aioredis-py
asyncio
1,174
[2.0] xread() halts when the connection is lost.
### Describe the bug If the connection with Redis is lost while the program is awaiting `xread()`, it gets stuck until the connection is reestablished. It doesn't raise any exception, nor even time out after the `block` time. If I wrap `xread()` with `asyncio.wait_for()` and force a disconnection, the `wait_for()` times out correctly, and the subsequent `xread()` calls raise ConnectionError. This confirms that the bug only happens if the code is awaiting `xread()` when the disconnection occurs. ### To Reproduce 1. You need a running Redis with a stream with entries 2. Set correct values for `redis_dsn` and `stream`. Then run: ```python import asyncio import logging from datetime import datetime import aioredis logging.basicConfig(level=logging.INFO) logger = logging.getLogger() redis_dsn = "redis://localhost:6379" stream = "stream_name" async def main() -> None: redis = aioredis.from_url(redis_dsn, decode_responses=True) index = "0" while True: try: response = await redis.xread( {stream: index}, block=1000, count=10, ) logger.info( f"{datetime.now()} : Read {len(response)} messages from stream." ) for _stream_name, messages in response: for message_id, values in messages: index = message_id except Exception as err: logger.warn(err) asyncio.run(main()) ``` 3. The logs should be displayed as the entries are read. 4. Force a disconnection, such as stopping the Redis. 5. You should see that the logs have stopped and no exception is logged. 6. [Optional] If you force a reconnection, the program should resume, so it starts to display the logs again. ### Expected behavior After running the snippet, you should see the logs being displayed rapidly. After the disconnection the program will halt, so the logs will also stop, and no exception is raised. ### Logs/tracebacks ```python-traceback It's not applicable. 
``` ### Python Version ```console Python 3.8.10 ``` ### aioredis Version ```console aioredis 2.0.0 ``` ### Additional context _No response_ ### Code of Conduct - [X] I agree to follow the aio-libs Code of Conduct
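The `asyncio.wait_for()` workaround mentioned in the report can be sketched without a Redis server by substituting a hanging awaitable for the dead-connection `xread()` (the `xread_like` stand-in is hypothetical, not aioredis code):

```python
import asyncio

async def xread_like(hang: bool):
    # stand-in for redis.xread(): hangs forever when the connection is
    # lost, instead of honoring its block= timeout
    if hang:
        await asyncio.Event().wait()  # the event is never set -> hangs
    return [("stream", [("1-0", {"k": "v"})])]

async def read_with_timeout(hang: bool, timeout: float = 0.1):
    # workaround: bound the call externally so a dead connection
    # surfaces as TimeoutError instead of a silent hang
    try:
        return await asyncio.wait_for(xread_like(hang), timeout=timeout)
    except asyncio.TimeoutError:
        return None  # caller can reconnect / retry here

ok = asyncio.run(read_with_timeout(hang=False))
dead = asyncio.run(read_with_timeout(hang=True))
print(ok is not None, dead is None)  # True True
```

Picking a `wait_for` timeout slightly larger than the `block=` value keeps the external bound from firing on ordinary empty reads.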
open
2021-10-19T18:10:24Z
2022-01-12T03:42:47Z
https://github.com/aio-libs-abandoned/aioredis-py/issues/1174
[ "bug" ]
marciorasf
1
fastapi/sqlmodel
pydantic
57
[BUG] Variables with annotation of 'typing.Literal' causes a panic
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python from sqlmodel import SQLModel from typing import Literal class Cool(SQLModel, table=True): name: Literal["fastapi", "sqlmodel"] # This code outputs the following to stderr # File "main.py", line 5, in <module> # class Cool(SQLModel, table=True): # File "sqlmodel/main.py", line 292, in __new__ # col = get_column_from_field(v) # File "sqlmodel/main.py", line 415, in get_column_from_field # sa_type = get_sqlachemy_type(field) # File "sqlmodel/main.py", line 373, in get_sqlachemy_type # if issubclass(field.type_, str): # TypeError: issubclass() arg 1 must be a class ``` ### Description * Create a sub-class of `sqlmodel.main.SQLModel` with `table=True` * Add a variable and annotate it with `typing.Literal[...]` * Python raises a `TypeError` exception at `sqlmodel/main:get_sqlachemy_type` This happens because `typing.Literal[...]` is a function and since SQLModel uses `issubclass` to get the variable's SQLAlchemy type; `issubclass` just throws a `TypeError` since it expects a class. 
### Operating System Linux ### Operating System Details _No response_ ### SQLModel Version 0.0.4 ### Python Version Python 3.9.6 ### Additional Context Funny enough, [Pydantic](https://github.com/samuelcolvin/pydantic) had much the same issue samuelcolvin/pydantic#1026, so the fix can be as easy as importing `pydantic.typing.is_literal_type` and running an if statement at the top of `get_sqlachemy_type`. I think the real issue is that `typing.Literal` is a 'Union'-like type: it receives one or more values, but since it's often used with a single type (e.g. Literal["r", "w", ...]) we can return the common type of all the passed values if they are all the same, else None. The values can be retrieved with `typing.get_args` as a tuple.
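A sketch of the guard described above, using only `typing` introspection (the `sqlalchemy_type_for` name is illustrative, not SQLModel's actual function): map `Literal[...]` to the common type of its values before any `issubclass()` check is attempted.

```python
from typing import Literal, get_args, get_origin

def sqlalchemy_type_for(annotation):
    """Map a Literal[...] annotation to the common type of its values,
    so the caller never hands a non-class to issubclass()."""
    if get_origin(annotation) is Literal:
        value_types = {type(v) for v in get_args(annotation)}
        if len(value_types) == 1:
            return value_types.pop()   # e.g. Literal["a", "b"] -> str
        return None                    # mixed-type literals: undecidable
    return annotation                  # plain classes pass through

print(sqlalchemy_type_for(Literal["fastapi", "sqlmodel"]))  # <class 'str'>
print(sqlalchemy_type_for(int))                             # <class 'int'>
```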
open
2021-08-29T07:58:37Z
2024-10-16T08:59:12Z
https://github.com/fastapi/sqlmodel/issues/57
[ "question" ]
faresbakhit
9
mljar/mljar-supervised
scikit-learn
247
Negative AUC and R2
When the AUC eval_metric is optimized, it has negative values.
closed
2020-11-26T12:53:20Z
2021-02-19T07:49:21Z
https://github.com/mljar/mljar-supervised/issues/247
[ "bug" ]
pplonski
1
ranaroussi/yfinance
pandas
2,270
throw exception when calling history() on a vietnamese stock which contains dividend record
### Describe bug When calling history() for Vietnamese stocks, if it contains dividend records, the dividend records will have a currency field, causing an exception. For example: ``` tk = yf.Ticker('PNJ.VN') hist = tk.history(start='2024-11-01') ``` will cause: ```Traceback (most recent call last): File "/Users/alai04/projects/python/yfinance/history.py", line 8, in <module> hist = tk.history(start='2024-11-01') File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper result = func(*args, **kwargs) File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/base.py", line 80, in history return self._lazy_load_price_history().history(*args, **kwargs) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper result = func(*args, **kwargs) File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/scrapers/history.py", line 318, in history dividends, splits, capital_gains = utils.parse_actions(data["chart"]["result"][0]) ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 536, in parse_actions dividends.columns = ["Dividends"] ^^^^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 6313, in __setattr__ return object.__setattr__(self, name, value) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^ File "properties.pyx", line 69, in pandas._libs.properties.AxisProperty.__set__ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 814, in _set_axis self._mgr.set_axis(axis, labels) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/managers.py", line 238, in set_axis self._validate_set_axis(axis, new_labels) 
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/base.py", line 98, in _validate_set_axis raise ValueError( ...<2 lines>... ) ValueError: Length mismatch: Expected axis has 2 elements, new values have 1 elements ``` ### Simple code that reproduces your problem ```python tk = yf.Ticker('PNJ.VN') hist = tk.history(start='2024-11-01') ``` ### Debug log DEBUG Entering history() DEBUG Entering history() DEBUG PNJ.VN: Yahoo GET parameters: {'period1': '2024-11-01 00:00:00+07:00', 'period2': '2025-02-17 14:10:03+07:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'} DEBUG Entering get() DEBUG Entering _make_request() DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/PNJ.VN DEBUG params={'period1': 1730394000, 'period2': 1739776203, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'} DEBUG Entering _get_cookie_and_crumb() DEBUG cookie_mode = 'basic' DEBUG Entering _get_cookie_and_crumb_basic() DEBUG loaded persistent cookie DEBUG reusing cookie DEBUG crumb = 'vsEB7v4e5NI' DEBUG Exiting _get_cookie_and_crumb_basic() DEBUG Exiting _get_cookie_and_crumb() DEBUG response code=200 DEBUG Exiting _make_request() DEBUG Exiting get() DEBUG PNJ.VN: yfinance received OHLC data: 2024-11-01 02:00:00 -> 2025-02-17 06:54:51 DEBUG PNJ.VN: OHLC after cleaning: 2024-11-01 09:00:00+07:00 -> 2025-02-17 13:54:51+07:00 Traceback (most recent call last): File "/Users/alai04/projects/python/yfinance/history.py", line 6, in <module> hist = tk.history(start='2024-11-01') File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper result = func(*args, **kwargs) File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/base.py", line 80, in history return self._lazy_load_price_history().history(*args, **kwargs) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^ File 
"/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper result = func(*args, **kwargs) File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/scrapers/history.py", line 318, in history dividends, splits, capital_gains = utils.parse_actions(data["chart"]["result"][0]) ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/yfinance/utils.py", line 535, in parse_actions dividends.columns = ["Dividends"] ^^^^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 6313, in __setattr__ return object.__setattr__(self, name, value) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^ File "properties.pyx", line 69, in pandas._libs.properties.AxisProperty.__set__ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/generic.py", line 814, in _set_axis self._mgr.set_axis(axis, labels) ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/managers.py", line 238, in set_axis self._validate_set_axis(axis, new_labels) ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^ File "/Users/alai04/.virtualenvs/stock/lib/python3.13/site-packages/pandas/core/internals/base.py", line 98, in _validate_set_axis raise ValueError( ...<2 lines>... ) ValueError: Length mismatch: Expected axis has 2 elements, new values have 1 elements ### Bad data proof _No response_ ### `yfinance` version 0.2.53 ### Python version 3.13.2 ### Operating system macOS 15.3
open
2025-02-17T07:13:40Z
2025-03-08T11:45:44Z
https://github.com/ranaroussi/yfinance/issues/2270
[]
alai04
2
xzkostyan/clickhouse-sqlalchemy
sqlalchemy
247
Release request
Hi! I'm starting to use your project. It looks awesome. I noticed that the main branch has long had support for the asynch driver, but there was still no release. When to expect it? It would help me a lot. Thanks!
closed
2023-04-25T22:51:32Z
2023-05-02T17:48:21Z
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/247
[]
Noidor1
3
CPJKU/madmom
numpy
70
remove obsolete class constants
Formerly, most of these class constants were needed to set the default values for both the `__init__()` and the `add_arguments()` method. Since the latter moved to use `None` as default for most arguments, the class constants are more or less obsolete. We should remove them before someone starts using them.
closed
2016-01-25T13:53:30Z
2016-03-07T14:38:50Z
https://github.com/CPJKU/madmom/issues/70
[]
superbock
1
microsoft/MMdnn
tensorflow
112
CaffeEmitter has not supported operator [Sequential]
Platform (like ubuntu 16.04/win10): ubuntu 14.04 Python version: 3.6 Source framework with version (like Tensorflow 1.4.1 with GPU): Keras 2.1.3 Destination framework with version (like CNTK 2.3 with GPU): Caffe Pre-trained model path (webpath or webdisk path): Running scripts: I am trying to convert a VGG Keras model to a Caffe model. I had no issue converting the Keras model to an IR model, but when trying to create the target network code (.py) and target weights (.npy) for Caffe, I get the following: CaffeEmitter has not supported operator [sequential]. I did still get a .py file and .npy weights file after that.
closed
2018-03-16T15:07:53Z
2018-07-05T05:07:16Z
https://github.com/microsoft/MMdnn/issues/112
[]
lamdawr
1
miguelgrinberg/Flask-Migrate
flask
494
[4.0] app factory is called before click groups
**Describe the bug** In 3.1.0, click groups declared with [FlaskGroup](https://flask.palletsprojects.com/en/2.2.x/api/#flask.cli.FlaskGroup) before the `create_app` factory of the group is called. In 4.0.0, this behavior changed and the app factory gets called before the click groups. This is a breaking change. **To Reproduce** ```python #!/bin/env python import os import click from flask import Flask from flask.cli import FlaskGroup from flask_sqlalchemy import SQLAlchemy from flask_migrate import Migrate basedir = os.path.abspath(os.path.dirname(__file__)) db = SQLAlchemy() def app_factory(): ctx = click.get_current_context() assert 'my_global_option' in ctx.obj.data app = Flask(__name__) app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join( basedir, 'app.db') app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False db.init_app(app) Migrate(app, db, compare_type=False) return app class User(db.Model): id = db.Column(db.Integer, primary_key=True) name = db.Column(db.String()) @click.group(cls=FlaskGroup, create_app=app_factory) @click.option("--my-global-option", "my_global_option", type=str) def cli(my_global_option): ctx = click.get_current_context() ctx.obj.data['my_global_option'] = my_global_option if __name__ == '__main__': cli() ``` And run, for example, `./script.py db upgrade`; it fails on the assertion. **Expected behavior** In v3.1.0, the `cli()` group is called before the `app_factory()`. As a result, it is able to set the click context to be used within the `app_factory()`. In v4.0.0, the `cli()` group is called after the `app_factory()`. As a result, the context is missing and the assertion in `app_factory()` fails. **Logs** Not relevant. **Additional context** This issue was introduced [here](https://github.com/miguelgrinberg/Flask-Migrate/commit/b9c9d35744a08f4f62084ce6e3ddf30d21431dc7#diff-fa602a8a75dc9dcc92261bac5f533c2a85e34fcceaff63b3a3a81d9acde2fc52). 
It seems to be related to the fact that the `db` function was removed from flask_command entrypoints. If I add this entrypoint back, everything works just as before. And indeed, Flask applies [some special behaviors](https://github.com/pallets/flask/blob/cc66213e579d6b35d9951c21b685d0078f373c44/src/flask/cli.py#L547-L565) to entrypoints, which might explain why the order of stuff is slightly different. I've not dug deeper at the moment. I use this context-trick to define global configuration options that I am able to reuse within the factory, and have them affect the `db` command. That being said, I have workarounds (either declare the entrypoint myself, or leave the group empty and parse click arguments directly within the app factory), but I just wanted to disclose this issue as it might affect a few other people. I think removing the entrypoint as it was done in 4.0 might lead to other unexpected quirks like this. Although I'm actually very unsure what should be the proper fix because it might just be undefined behavior from Flask or Click. Maybe it's just enough to just document this. Thanks :) .
closed
2022-11-15T15:39:24Z
2022-11-16T09:19:48Z
https://github.com/miguelgrinberg/Flask-Migrate/issues/494
[ "question" ]
gilbsgilbs
6
NullArray/AutoSploit
automation
398
Unhandled Exception (3842f7fee)
Autosploit version: `3.0` OS information: `Linux-4.18.0-kali2-amd64-x86_64-with-Kali-kali-rolling-kali-rolling` Running context: `autosploit.py` Error meesage: `global name 'Except' is not defined` Error traceback: ``` Traceback (most recent call): File "/root/Desktop/exploit 2018/Autosploit/autosploit/main.py", line 113, in main loaded_exploits = load_exploits(EXPLOIT_FILES_PATH) File "/root/Desktop/exploit 2018/Autosploit/lib/jsonize.py", line 61, in load_exploits except Except: NameError: global name 'Except' is not defined ``` Metasploit launched: `False`
closed
2019-01-21T21:40:12Z
2019-04-02T20:27:08Z
https://github.com/NullArray/AutoSploit/issues/398
[]
AutosploitReporter
0
biolab/orange3
data-visualization
6,976
Group By: add straightforward possibility to do aggregations over all records
<!-- Thanks for taking the time to submit a feature request! For the best chance at our team considering your request, please answer the following questions to the best of your ability. --> **What's your use case?** **What's your proposed solution?** Sometimes it is useful to have several aggregations of selected variables over all the data records. To that end, it would be nice to have "all rows" as an option in the column on the left in the Group-by interface **Are there any alternative solutions?** The obvious trick to achieve this with the current functionality is to introduce a dummy variable that has the same value for all rows, and to group by the dummy variable. However, as a workaround it is not _that_ intuitive.
open
2025-01-02T15:54:33Z
2025-01-13T09:47:36Z
https://github.com/biolab/orange3/issues/6976
[ "meal" ]
wvdvegte
5
suitenumerique/docs
django
592
Many file types not accepted as upload
## Bug Report **Problematic behavior** .docx .mp4 .mov are not accepted **Expected behavior/code** I'd like the following to be accepted: MS Office (.docx, .pptx, .xlsx), Video (.mp4, .mov), OpenDocument (.odt, .odp, .ods)
closed
2025-01-29T10:18:17Z
2025-03-03T19:58:41Z
https://github.com/suitenumerique/docs/issues/592
[ "bug" ]
virgile-dev
4
kubeflow/katib
scikit-learn
1,802
docker run is work. but katib trial don't work
/kind bug `Error: failed to start container "tensorflow": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "--df_me=df_me.pkl": executable file not found in $PATH: unknown` **What steps did you take and what happened:** **1. docker build** ``` FROM tensorflow/tensorflow ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code COPY . /code/ RUN /usr/bin/python3 -m pip install --upgrade pip RUN pip install -r requirements.txt ENTRYPOINT ["python", "main.py"] ```` **2. `docker run -it moey920/train_predict_model:latest --df_me df_me.pkl --input_width 8 --label_width 7 --shift 3 --model_name baseline`** is working right. 3. However, in Katib, if the argument of --df_me is given as `df_me.pkl` as in docker run, an error occurs saying that trial cannot find the path `--df_me=df_me.pkl`. **What did you expect to happen:** would like to know how can Katib properly find the path to a data file within a docker image. **Environment:** I am using kubeflow through minikube, and katib is installed with it. - Katib version (check the Katib controller image version): I don't know how to check the version. - Kubernetes version: (`kubectl version`): ``` sysadmin@tyan:~$ kubectl version Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:16:25Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"} WARNING: version difference between client (1.22) and server (1.20) exceeds the supported minor version skew of +/-1 ``` - OS (`uname -a`): ubuntu 18.04 --- <!-- Don't delete this message to encourage users to support your issue! 
--> Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
closed
2022-02-08T08:00:26Z
2022-02-08T08:35:16Z
https://github.com/kubeflow/katib/issues/1802
[ "kind/bug" ]
moey920
4
lexiforest/curl_cffi
web-scraping
341
[BUG] Bad type passing in Curl.setopt()
I tried to run this project on the aarch64 Android platform, testing it in the simplest way possible: ```python from curl_cffi import requests res = requests.get("https://www.baidu.com") print(res.text) ``` I got the following error: ``` Traceback (most recent call last): File "/data/user/0/coding.yu.pythoncompiler.new/files/default.py", line 3, in <module> res = requests.get("https://www.baidu.com") File "/data/user/0/coding.yu.pythoncompiler.new/files/PYROOT3/lib/python3.8/site-packages/curl_cffi/requests/__init__.py", line 125, in request return s.request( File "/data/user/0/coding.yu.pythoncompiler.new/files/PYROOT3/lib/python3.8/site-packages/curl_cffi/requests/session.py", line 899, in request req, buffer, header_buffer, q, header_recved, quit_now = self._set_curl_options( File "/data/user/0/coding.yu.pythoncompiler.new/files/PYROOT3/lib/python3.8/site-packages/curl_cffi/requests/session.py", line 652, in _set_curl_options c.setopt(CurlOpt.MAX_RECV_SPEED_LARGE, max_recv_speed) File "/data/user/0/coding.yu.pythoncompiler.new/files/PYROOT3/lib/python3.8/site-packages/curl_cffi/curl.py", line 217, in setopt self._check_error(ret, "setopt", option, value) File "/data/user/0/coding.yu.pythoncompiler.new/files/PYROOT3/lib/python3.8/site-packages/curl_cffi/curl.py", line 137, in _check_error raise error curl_cffi.curl.CurlError: Failed to setopt CurlOpt.MAX_RECV_SPEED_LARGE 0, curl: (43) . See https://curl.se/libcurl/c/libcurl-errors.html first for more details. 
``` **Versions** - OS: ARM64 Android13 (MIUI14) - curl_cffi version 0.7.0 **Additional context** Error code: CURLE_BAD_FUNCTION_ARGUMENT (43). In Curl.setopt(), curl_off_t values are passed the same way as int values, but _curl_easy_setopt() does not dereference the pointer in the curl_off_t case, so the value actually handed to curl_easy_setopt() is the pointer's own address. The current version has only one internal call site involving curl_off_t, ```python # request/session.py c.setopt(CurlOpt.MAX_RECV_SPEED_LARGE, max_recv_speed) ``` and the numeric range a pointer normally falls into happens to look valid, so this latent bug has stayed hidden until now. Android's MTE mechanism tags the high bits of pointers, turning the value negative, which makes curl_easy_setopt() return CURLE_BAD_FUNCTION_ARGUMENT. **Expected behavior** For the curl_off_t case, Curl.setopt() should construct an int64_t and pass a pointer to it (curl/system.h notes that curl_off_t is a 64-bit signed integer on every platform), and _curl_easy_setopt() should dereference the pointer when a curl_off_t is passed.
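The proposed fix can be sketched in stdlib ctypes terms (the function names here are illustrative, not curl_cffi's actual internals): box the value into an explicit 64-bit signed integer and make sure the wrapper passes curl the dereferenced value, never the pointer's own address.

```python
import ctypes

def pack_off_t(value: int) -> ctypes.c_int64:
    # curl/system.h: curl_off_t is a 64-bit signed integer on every
    # platform, so build an explicit int64 instead of a bare int
    return ctypes.c_int64(value)

def setopt_demo(value: int) -> int:
    # stand-in for the fixed wrapper: dereference the pointer and hand
    # curl the 64-bit value, not the pointer's address
    boxed = pack_off_t(value)
    ptr = ctypes.pointer(boxed)
    return ptr.contents.value  # what curl_easy_setopt should receive

print(setopt_demo(0))      # 0, not a huge pointer-looking number
print(setopt_demo(2**40))  # 1099511627776
```

With MTE tagging, the unfixed path hands curl a tagged (negative) pointer value, which is exactly why curl rejects it with error 43.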
closed
2024-07-05T21:48:53Z
2024-07-05T21:51:06Z
https://github.com/lexiforest/curl_cffi/issues/341
[ "bug" ]
qishipai
0
Nike-Inc/koheesio
pydantic
114
[BUG] Duplicate implementation of SynchronizeDeltaToSnowflakeTask
<!-- Please provide as much detail as possible to help us understand and reproduce the issue. This will enable us to address it more effectively. --> ## Describe the bug There are two implementations of `SynchronizeDeltaToSnowflakeTask` ## Steps to Reproduce Spark(Old): https://github.com/Nike-Inc/koheesio/blob/602866b7decc79487c39df6cfbf985f243328812/src/koheesio/spark/snowflake.py#L989 Spark + Python (New): https://github.com/Nike-Inc/koheesio/blob/602866b7decc79487c39df6cfbf985f243328812/src/koheesio/integrations/spark/snowflake.py#L648 ## Expected behavior Keep one in `koheesio.integrations.spark.snowflake` and make imports in old location from new one
closed
2024-11-24T09:25:42Z
2024-11-24T23:14:05Z
https://github.com/Nike-Inc/koheesio/issues/114
[ "bug" ]
mikita-sakalouski
0
plotly/dash-table
plotly
491
Changing one header cell changes all the header cells on the left
![bug2](https://user-images.githubusercontent.com/23650639/60672261-e778e200-9e42-11e9-8cc2-e8b046cefe3c.gif) This only happened once, when the cell was first changed.
closed
2019-07-04T14:04:36Z
2019-07-19T13:41:36Z
https://github.com/plotly/dash-table/issues/491
[ "dash-type-bug", "size: 1" ]
alinastarkov
1
scrapy/scrapy
web-scraping
6,354
AttributeError: 'SelectReactor' object has no attribute '_handleSignals'
### Description When I was using Scrapy, I encountered a minor issue. However, I'm uncertain if it's significant enough to be considered a bug that needs fixing. I'm a programmer who usually uses Python 3.7.16. I'm new to Scrapy, so I naturally downloaded the latest versions of Twisted (23.8.0) and Scrapy (2.9.0) for Python 3.7. However, I was surprised to find that Scrapy didn't work. ![image](https://github.com/scrapy/scrapy/assets/152846349/e23b1250-df11-4dc1-8684-0bd2f3c6a767) To solve it, I can either upgrade Python to a version > 3.8 or downgrade Twisted to 22.10.0. Therefore, I'm unsure if this issue needs fixing. However, I really find it unreasonable that the latest versions of Twisted and Scrapy are incompatible with Python 3.7, which can be quite confusing to users, particularly newcomers like me. It appears that my issue is akin to what was reported in Issue #6024. I've learned that in the newest version of Scrapy 2.11.1 coupled with Twisted 24.3.0, the problem has been addressed. However, this solution requires an upgrade to Python version >3.8. For those of us using older versions of Python, specifically 3.7.16, it's not immediately clear whether upgrading Python is expected or whether compatibility for Python 3.7 will be addressed in future releases. 
### Steps to Reproduce Use Twisted 23.8.0 and Scrapy 2.9.0 to crawl anything, or just use Python 3.7 and pip install the newest versions of Twisted and Scrapy **Expected behavior:** Successful crawl **Actual behavior:** AttributeError **Reproduces how often:** always
closed
2024-05-12T07:42:04Z
2024-05-13T09:57:16Z
https://github.com/scrapy/scrapy/issues/6354
[]
windY1Y
2
modin-project/modin
data-science
7,032
`test_mixed_dtypes_groupby` failed due to different exception messages
Found in https://github.com/modin-project/modin/pull/6954
open
2024-03-07T15:10:40Z
2024-03-07T15:13:23Z
https://github.com/modin-project/modin/issues/7032
[ "bug 🦗", "pandas concordance 🐼", "P2" ]
anmyachev
0
alteryx/featuretools
scikit-learn
2,122
release Featuretools 1.10.0
- Release of Featuretools on June 21 - We want to get these PRs and issues in before the release: - https://github.com/alteryx/featuretools/pull/2120 - https://github.com/alteryx/featuretools/pull/2099 - Instructions for release: - https://github.com/alteryx/featuretools/blob/main/contributing.md
closed
2022-06-17T21:27:04Z
2022-06-24T15:50:42Z
https://github.com/alteryx/featuretools/issues/2122
[]
gsheni
1
pytest-dev/pytest-cov
pytest
337
Proposal: pytest-cov should do less
I've been working in the pytest-cov code recently to get it to support coverage.py 5.0 (pull request: #319), and also reviewing #330. The problems that have come up have brought me to a conclusion: pytest-cov does too many things. The current code attempts to be a complete UI for coverage. I think this is a mistake. Pytest-cov should limit itself to those functions that can only be done within a pytest plugin. In particular, the reporting options of pytest-cov add no value over just using coverage directly to generate reports. Pytest-cov is no one's passion project. There are a few of us that would like to move it forward, but want to do it efficiently. Removing functionality from pytest-cov would make it easier to maintain. If we keep to the current model of pytest-cov being a complete UI for coverage.py, then every time something is added to coverage.py, the pytest-cov plugin needs to be updated. As an example, we just added a new report type to coverage.py (JSON). Why do we need to add options to pytest-cov to support it? Pytest is a test runner. Its job is to run tests. The pytest-cov plugin should limit itself to integrating the run phase of coverage into the test running process. Reporting is a separate step that is better done separately. The current design leads to tortured option syntax in an attempt to squeeze everything into the pytest command line. There's no need. Remove options that aren't related to running, and let some options be specified in configuration files. Sophisticated test suites already have configuration files. Let's simplify pytest-cov. Thoughts?
open
2019-09-10T14:39:22Z
2022-12-03T19:49:58Z
https://github.com/pytest-dev/pytest-cov/issues/337
[]
nedbat
47
jina-ai/serve
deep-learning
5,963
Docarray ValidationError when monitoring is enabled
**Describe the bug** <!-- A clear and concise description of what the bug is. --> When I run a schema'ed executor with custom docarray types as inputs and outputs with monitoring enabled, I get an error. I believe it is trying to cast the return value as the output type. **Environment** <!-- Run `jina --version-full` and copy paste the output here --> `jina== 3.19.0` **Screenshots** <!-- If applicable, add screenshots to help explain your problem. --> ``` ERROR @32875 ValidationError(model='Input', errors=[{'loc': ('messages',), 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ('__root__',), 'msg': 'Only one model config type can be set, found []', 'type': 'value_error'}]) add "--quiet-error" to suppress the exception details Traceback (most recent call last): File "/Users/user/env/lib/python3.9/site-packages/jina/serve/runtimes/worker/… line 956, in process_data result = await self.handle( File "/Users/user/env/lib/python3.9/site-packages/jina/serve/runtimes/worker/… line 670, in handle self._record_docs_processed_monitoring(requests) File "/Users/user/env/lib/python3.9/site-packages/jina/serve/runtimes/worker/… line 481, in _record_docs_processed_monitoring len(requests[0].docs) File "/Users/user/env/lib/python3.9/site-packages/jina/types/request/data.py", line 282, in docs return self.data.docs File "/Users/user/env/lib/python3.9/site-packages/jina/types/request/data.py", line 50, in docs self._loaded_doc_array = self.document_array_cls.from_protobuf( File "/Users/user/env/lib/python3.9/site-packages/docarray/array/doc_list/doc… line 296, in from_protobuf return super().from_protobuf(pb_msg) File "/Users/user/env/lib/python3.9/site-packages/docarray/array/doc_list/io.… line 119, in from_protobuf return cls(cls.doc_type.from_protobuf(doc_proto) for doc_proto in pb_msg.docs) File "/Users/user/env/lib/python3.9/site-packages/docarray/array/doc_list/doc… line 128, in __init__ super().__init__(docs) File 
"/Users/user/env/lib/python3.9/site-packages/docarray/array/doc_list/doc… line 155, in _validate_docs for doc in docs: File "/Users/user/env/lib/python3.9/site-packages/docarray/array/doc_list/io.… line 119, in <genexpr> return cls(cls.doc_type.from_protobuf(doc_proto) for doc_proto in pb_msg.docs) File "/Users/user/env/lib/python3.9/site-packages/docarray/base_doc/mixins/io… line 247, in from_protobuf return cls(**fields) File "/Users/user/env/hub/schema.py", line 28, in __init__ super().__init__(**data) File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 2 validation errors for Input messages field required (type=value_error.missing) ```
closed
2023-07-13T14:46:45Z
2023-07-13T18:56:08Z
https://github.com/jina-ai/serve/issues/5963
[]
NarekA
2
vitalik/django-ninja
django
1,390
[BUG] 404 handler override not working as expected
**Describe the bug** I have a simple demo Django-Ninja app working, thanks to the helpful onboarding documentation in the related docs. All of the endpoints in my simple demo work as expected. However, the following code in my urls.py file does not produce the expected result based on the docs: ``` from django.http import Http404 ``` ``` @api.exception_handler(Http404) def page_not_found(request, exc): return api.create_response( request, {"message": "Please retry later"}, status=404, ) ``` I expect my handler to pick up 404s when the server is running, but I get the default "Not Found: The requested resource was not found on this server." page. I did a lot of research beforehand and tried a variety of overrides, but none worked, so I am documenting the method from the django-ninja docs: https://django-ninja.dev/guides/errors/ My code is here: https://github.com/ErikPohl444/django_ninja_playground Working control case: http://127.0.0.1:8000/api/hello_html?name=you Not working experimental case: http://127.0.0.1:8000/api/this_endpoint_is_undefined I've done similar captures of the 404 in Flask and in Django, but I'm hitting a wall here. It is either: a) something is buggy with django-ninja [unlikely] b) something can be added to the documentation for newbies like me [potentially] c) my code is buggy [in which case I might want to make a PR with doc changes (see b) for folks like me :) ] **Versions (please complete the following information):** - Python version: Python 3.12.0 - Django version: Django==5.1.4 - Django-Ninja version: django-ninja==1.3.0 - Pydantic version: pydantic==2.10.4
open
2025-01-12T00:50:36Z
2025-02-24T20:05:57Z
https://github.com/vitalik/django-ninja/issues/1390
[]
ErikPohl444
5
napari/napari
numpy
7,552
Add ability to draw circles Shapes from center
## 🚀 Feature Add ability to draw ellipses/circles from their center. (Can also do rectangles) ## Motivation This is a common feature in drawing programs. It can be convenient to be able to expand a shape from the center rather than corner to corner, particularly for circles. In Adobe products and Pixelmator, `alt` is used as the modifier.
open
2025-01-22T22:06:30Z
2025-01-22T22:06:30Z
https://github.com/napari/napari/issues/7552
[ "feature" ]
psobolewskiPhD
0
koxudaxi/datamodel-code-generator
fastapi
1,796
Property name in schema is the same as imported module name, causing RecursionError in pydantic
**Describe the bug** This is not a bug so much as a question on how to work around the issue without changing the schema. **To Reproduce** Example schema: ```json { "components": { "schemas": { "Request": { "properties": { "date": { "type": "string", "format": "date", "title": "Date" } }, "type": "object", "required": [ "date" ], "title": "Request" } } } } ``` Used commandline: ```bash $ datamodel-codegen --input request.json --input-file-type openapi ``` Generated model output: ```python # generated by datamodel-codegen: # filename: request.json # timestamp: 2024-01-04T10:37:43+00:00 from __future__ import annotations from datetime import date from pydantic import BaseModel, Field class Request(BaseModel): date: date = Field(..., title='Date') ``` When I try to import this, I get a recursion depth error from pydantic: ```bash .... File "...\.venv\Lib\site-packages\pydantic\_internal\_repr.py", line 55, in <genexpr> return join_str.join(repr(v) if a is None else f'{a}={v!r}' for a, v in self.__repr_args__()) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "...\.venv\Lib\site-packages\pydantic\fields.py", line 556, in __repr_args__ yield 'annotation', _repr.PlainRepr(_repr.display_as_type(self.annotation)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "...\.venv\Lib\site-packages\pydantic\_internal\_repr.py", line 95, in display_as_type return repr(obj) ^^^^^^^^^ RecursionError: maximum recursion depth exceeded while getting the repr of an object ``` **Expected behavior** When using pydantic, I'd create the model like this: ```py from __future__ import annotations # from datetime import date import datetime from pydantic import BaseModel, Field class Request(BaseModel): date: datetime.date = Field(..., title="Date") # <--- datetime.date here, not "date" ``` Is there a way to make datamodel-codegen just `import datetime` and then use `datetime.date` as the typing? 
**Version:** - OS: Windows - Python version: 3.11 - datamodel-code-generator version: 0.25.2
closed
2024-01-04T10:51:28Z
2024-01-04T13:12:08Z
https://github.com/koxudaxi/datamodel-code-generator/issues/1796
[]
speksen-kb01
1
K3D-tools/K3D-jupyter
jupyter
270
Support rendering cell attributes
As far as I understand, k3d only supports vertex attributes, but in many cases we have simulation results assigned to cells rather than points/vertices. One solution would be to convert cell data to point data, but it's not always straightforward. It would be nice to add support for rendering cell attributes as well. Thanks
closed
2021-04-11T06:56:43Z
2025-03-11T12:41:09Z
https://github.com/K3D-tools/K3D-jupyter/issues/270
[ "Next release" ]
esalehim
5
autogluon/autogluon
data-science
3,924
Request: Implement Feature Importance Explainability for Time-Series Module
### Summary: The AutoGluon time-series module has proven to be a powerful tool for forecasting tasks. However, one area that could significantly enhance its utility is the inclusion of feature importance explainability in terms of both global training as well as inclusion as covariates, akin to what is currently available in the AutoGluon tabular module. This feature would greatly aid in understanding model decisions, facilitating a more intuitive analysis and improvement of models by highlighting which features contribute most to predictions. ### Detail: The tabular module in AutoGluon offers an insightful feature importance mechanism that helps users understand the impact of each feature on the model's predictions. This is not only crucial for model interpretation but also for improving model performance by focusing on the most influential features. Implementing a similar feature for the time-series module would provide users with a comprehensive tool for time-series forecasting that is not only powerful but also interpretable. - Model Transparency: Provides clear insights into how and why predictions are made, increasing trust in the model. - Feature Engineering: Identifies which features are most valuable, guiding users on where to focus their feature engineering efforts. - Model Improvement: Helps in diagnosing model performance issues by highlighting features that are less important or potentially noisy. ## Suggested Implementation: It would be extremely helpful for the time-series module to incorporate a feature importance mechanism. This could potentially leverage some modified version of existing frameworks like SHAP (SHapley Additive exPlanations) or permutation importance, similar to the approach used in the tabular module. The addition of feature importance explainability to the AutoGluon time-series module would be a valuable enhancement, making the module not only a powerful forecasting tool but also an interpretable and transparent one. 
It would align with the growing need for explainable AI in critical applications and facilitate a deeper understanding and trust in AI-driven forecasting models. Thank you for considering this feature request. I believe it would make a significant contribution to the AutoGluon toolkit and its user community.
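The permutation-importance approach suggested above can be illustrated generically (a toy sketch with a stand-in model and metric, not an AutoGluon API): shuffle one feature column at a time and measure how much the metric degrades.

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Average metric drop when each feature column is shuffled independently."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(baseline - metric(y, [predict(row) for row in X_perm]))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy data: the target depends only on feature 0, so feature 1 should score ~0.
X = [[float(i), float(i % 3)] for i in range(20)]
y = [row[0] * 2.0 for row in X]
predict = lambda row: row[0] * 2.0  # a "perfect" stand-in model
neg_mae = lambda t, p: -sum(abs(a - b) for a, b in zip(t, p)) / len(t)
imp = permutation_importance(predict, X, y, neg_mae)
print(imp[0] > imp[1])  # True: the used feature scores higher than the unused one
```

For time series, the shuffling would additionally need to respect temporal structure (e.g. permuting whole windows), which is part of why this is non-trivial to add.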
closed
2024-02-15T16:00:13Z
2024-04-09T16:41:52Z
https://github.com/autogluon/autogluon/issues/3924
[ "enhancement", "module: timeseries" ]
kristinakupf
3
docarray/docarray
fastapi
1,310
change return type of `find_batched()` utility function
Currently, `find_batched()` has the following signature: ```python def find_batched( ... ) -> List[FindResult]: ``` In order to align with `DocumentIndex`, it should be switched to: ```python def find_batched( ... ) -> FindResultBatched: ``` **TODOs:** - move the definition of `FindResultBatched` from `index/abstract.py` to `utils/find.py` - change the signature as described above - fix the imports on all Document Index classes - Probably `FindResultBatched` needs a tweak as well: `scores: np.ndarray` probably needs to be changed to allow a union of ndarray, torch.Tensor and tf.Tensor
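A minimal sketch of what the batched return type could look like (a toy illustration with plain lists; the real docarray definition uses DocList and tensor types, per the TODOs above):

```python
from typing import Any, List, NamedTuple

class FindResultBatched(NamedTuple):
    documents: List[List[Any]]  # one result list per query
    scores: List[List[float]]   # in docarray: ndarray / torch.Tensor / tf.Tensor

def find_batched(queries, docs, limit=2) -> FindResultBatched:
    """Toy batched search over 1-D 'embeddings'; score = negative distance."""
    batch_docs, batch_scores = [], []
    for q in queries:
        ranked = sorted(docs, key=lambda d: abs(d - q))[:limit]
        batch_docs.append(ranked)
        batch_scores.append([-abs(d - q) for d in ranked])
    return FindResultBatched(documents=batch_docs, scores=batch_scores)

result = find_batched([0.0, 10.0], [1.0, 9.0, 5.0])
print(result.documents)  # [[1.0, 5.0], [9.0, 5.0]]
print(result.scores)     # [[-1.0, -5.0], [-1.0, -5.0]]
```

Returning a single named tuple for the whole batch (instead of `List[FindResult]`) is what aligns the utility with the `DocumentIndex` signature.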
closed
2023-03-29T12:41:30Z
2023-04-14T15:51:05Z
https://github.com/docarray/docarray/issues/1310
[ "DocArray v2", "good-first-issue" ]
JohannesMessner
4
scikit-image/scikit-image
computer-vision
6,851
skimage > 0.17.2 required
### Description: With skimage version '0.17.2' you get the error `binary_dilation() got an unexpected keyword argument 'footprint'` when using the `gf.fill_rectangle(...)` function. Upgrading to the latest version of scikit-image fixes the error. ### Way to reproduce: _No response_ ### Version information: ```Shell gf version '6.70.0' ```
closed
2023-03-25T18:36:27Z
2023-09-17T11:41:54Z
https://github.com/scikit-image/scikit-image/issues/6851
[ ":bug: Bug" ]
albe-jj
1
TencentARC/GFPGAN
pytorch
307
NCNN
Is there a model download available that can be used with NCNN?
open
2022-11-17T04:32:34Z
2022-11-17T04:32:34Z
https://github.com/TencentARC/GFPGAN/issues/307
[]
skymailwu
0
huggingface/datasets
pytorch
6,744
Option to disable file locking
### Feature request Commands such as `load_dataset` create file locks with `filelock.FileLock`. It would be good if there was a way to disable this. ### Motivation File locking doesn't work on all file-systems (in my case NFS mounted Weka). If the `cache_dir` only had small files then it would be possible to point to local disk and the problem would be solved. However, as cache_dir is both where the small info files are written and the processed datasets are put, this isn't a feasible solution. Considering https://github.com/huggingface/datasets/issues/6395 I still do think this is something that belongs in HuggingFace. The possibility to control packages separately is valuable. It might be that a user has their dataset on a file-system that doesn't support file-locking while they are using file locking on local disk to control some other type of access. ### Your contribution My suggested solution: ``` diff --git a/src/datasets/utils/_filelock.py b/src/datasets/utils/_filelock.py index 19620e6e..58f41a02 100644 --- a/src/datasets/utils/_filelock.py +++ b/src/datasets/utils/_filelock.py @@ -18,11 +18,15 @@ import os from filelock import FileLock as FileLock_ -from filelock import UnixFileLock +from filelock import SoftFileLock, UnixFileLock from filelock import __version__ as _filelock_version from packaging import version +if os.getenv('HF_USE_SOFTFILELOCK', 'false').lower() in ('true', '1'): + FileLock_ = SoftFileLock + + class FileLock(FileLock_): """ A `filelock.FileLock` initializer that handles long paths. ```
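The environment-variable gate from the suggested diff can be sketched in isolation (the two classes below are stand-ins for filelock's UnixFileLock/SoftFileLock, not the real implementations):

```python
import os

class HardLock:
    """Stand-in for filelock.UnixFileLock (OS-level fcntl locking)."""
    kind = "hard"

class SoftLock:
    """Stand-in for filelock.SoftFileLock (lock-file existence only; NFS-friendly)."""
    kind = "soft"

def pick_lock_class(env=None):
    """Choose the lock implementation from HF_USE_SOFTFILELOCK, defaulting to hard."""
    env = os.environ if env is None else env
    if env.get("HF_USE_SOFTFILELOCK", "false").lower() in ("true", "1"):
        return SoftLock
    return HardLock

print(pick_lock_class({}).kind)                            # hard
print(pick_lock_class({"HF_USE_SOFTFILELOCK": "1"}).kind)  # soft
```

Gating at import time, as the diff does, means the choice is made once per process; a runtime setting would need the check moved inside the lock constructor.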
open
2024-03-20T15:59:45Z
2024-03-20T15:59:45Z
https://github.com/huggingface/datasets/issues/6744
[ "enhancement" ]
VRehnberg
0
ultralytics/yolov5
machine-learning
12,569
Why does the learning rate suddenly increase
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hello, Why does my learning rate suddenly increase when I reach approximately 1739 epochs during training? As a result, my model loss has also increased. My hyperparameters: hyp: lr0: 0.01 lrf: 0.01 momentum: 0.937 weight_decay: 0.0005 warmup_epochs: 3.0 warmup_momentum: 0.8 warmup_bias_lr: 0.1 box: 0.05 cls: 0.5 cls_pw: 1.0 obj: 1.0 obj_pw: 1.0 iou_t: 0.2 anchor_t: 4.0 fl_gamma: 0.0 hsv_h: 0.015 hsv_s: 0.7 hsv_v: 0.4 degrees: 0.0 translate: 0.1 scale: 0.5 shear: 0.0 perspective: 0.0 flipud: 0.0 fliplr: 0.5 mosaic: 1.0 mixup: 0.0 copy_paste: 0.0 epochs: 5000 batch_size: 1024 imgsz: 640 rect: false resume: true nosave: false noval: false noautoanchor: false noplots: false evolve: null bucket: '' cache: null image_weights: false device: 0,1,2,3,4,5,6,7 multi_scale: false single_cls: false optimizer: SGD sync_bn: false workers: 64 project: runs/train name: exp exist_ok: false quad: false cos_lr: false label_smoothing: 0.0 patience: 100 freeze: - 0 save_period: -1 seed: 0 local_rank: -1 entity: null upload_dataset: false bbox_interval: -1 artifact_alias: latest save_dir: runs/train/exp9 ### Additional _No response_
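For context on what the schedule should look like: with cos_lr disabled, YOLOv5 (to my understanding) uses a linear ramp from lr0 down to lr0*lrf after warmup, which is monotonically decreasing, so a sudden LR jump mid-run usually points at training state (e.g. a resume restoring a different scheduler step) rather than the schedule itself. A sketch of that linear lambda:

```python
def linear_lr_factor(epoch: int, epochs: int = 5000, lrf: float = 0.01) -> float:
    """Multiplier applied to lr0 at a given epoch (linear decay down to lrf)."""
    return (1 - epoch / epochs) * (1.0 - lrf) + lrf

factors = [linear_lr_factor(e) for e in (0, 1739, 1740, 5000)]
print(all(a >= b for a, b in zip(factors, factors[1:])))  # True: never increases
```

Plotting this curve against the logged `lr` values makes it easy to see exactly where the actual schedule diverges from the intended one.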
closed
2024-01-02T08:32:39Z
2024-02-16T00:19:59Z
https://github.com/ultralytics/yolov5/issues/12569
[ "question", "Stale" ]
Gary55555
3
plotly/dash
plotly
3,023
add tooling to show Dash memory usage
It would be useful to have a way for Dash to report how much memory it is using where. The report could be textual (CSV / JSON) or graphical (an introspective chart?).
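One possible shape for such a textual report (a sketch using the standard library's tracemalloc; the function name and output format are hypothetical, not a proposed Dash API):

```python
import json
import tracemalloc

def memory_report(top_n: int = 3) -> str:
    """JSON report of the top allocation sites by size, in KB."""
    snapshot = tracemalloc.take_snapshot()
    rows = [
        {
            "where": str(stat.traceback[0]),
            "size_kb": round(stat.size / 1024, 1),
            "allocations": stat.count,
        }
        for stat in snapshot.statistics("lineno")[:top_n]
    ]
    return json.dumps(rows, indent=2)

tracemalloc.start()
payload = [list(range(1000)) for _ in range(100)]  # allocate something visible
print(memory_report())
tracemalloc.stop()
```

A graphical version could feed the same rows into a bar chart keyed by allocation site.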
open
2024-10-02T16:52:20Z
2024-10-02T16:52:20Z
https://github.com/plotly/dash/issues/3023
[ "feature", "P3" ]
gvwilson
0
FujiwaraChoki/MoneyPrinter
automation
225
Design Modern UI with React
Would you please develop a modern and configurable UI (for example: multilanguage support, custom title, custom design, etc.)? Best regards,
closed
2024-02-13T17:58:31Z
2024-02-13T19:05:35Z
https://github.com/FujiwaraChoki/MoneyPrinter/issues/225
[]
Natgho
2
deepfakes/faceswap
deep-learning
1,145
bug in CPU requirements install (Mac Big Sur)
At the end of the pip requirements install: Collecting pynvx==1.0.0 Using cached pynvx-1.0.0-cp39-cp39-macosx_10_9_x86_64.whl (119 kB) ERROR: Could not find a version that satisfies the requirement tensorflow<2.5.0,>=2.2.0 ERROR: No matching distribution found for tensorflow<2.5.0,>=2.2.0 How can this be resolved?
closed
2021-04-06T19:17:20Z
2022-06-30T10:02:04Z
https://github.com/deepfakes/faceswap/issues/1145
[ "bug" ]
SiNaPsEr0x
8
mljar/mercury
data-visualization
196
add option to hide sidebar in notebook
closed
2023-01-05T12:11:03Z
2023-02-17T13:04:58Z
https://github.com/mljar/mercury/issues/196
[]
pplonski
1
roboflow/supervision
tensorflow
1,554
Allow TIFF (and more) image formats in `load_yolo_annotations`
### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Description * Currently, `load_yolo_annotations` only allows `png,` `jpg`, and `jpeg` file formats. `load_yolo_annotations` is internally called by `sv.DetectionDataset.from_yolo` functionality. https://github.com/roboflow/supervision/blob/1860fdb0a4e21edc5fa03d973e9f31c055bdcf4f/supervision/dataset/formats/yolo.py#L156 * Ultralytics supports a wide variety of image formats. Copied the following table from [their website](https://docs.ultralytics.com/modes/predict/#image-and-video-formats:~:text=The%20below%20table%20contains%20valid%20Ultralytics%20image%20formats.). | Image Suffix | Example Predict Command | Reference | |---------------|-----------------------------------|---------------------------------| | .bmp | yolo predict source=image.bmp | Microsoft BMP File Format | | .dng | yolo predict source=image.dng | Adobe DNG | | .jpeg | yolo predict source=image.jpeg | JPEG | | .jpg | yolo predict source=image.jpg | JPEG | | .mpo | yolo predict source=image.mpo | Multi Picture Object | | .png | yolo predict source=image.png | Portable Network Graphics | | .tif | yolo predict source=image.tif | Tag Image File Format | | .tiff | yolo predict source=image.tiff | Tag Image File Format | | .webp | yolo predict source=image.webp | WebP | | .pfm | yolo predict source=image.pfm | Portable FloatMap | * Use of TIFF files is common in satellite imagery such as Sentinel-2. One may prefer to preserve the TIFF format over convert it to PNG/JPG because TIFF allows the storage of georeferencing information. * I see that the `load_yolo_annotations` uses `cv2.imread` to read the image files. [OpenCV seems to support](https://docs.opencv.org/4.x/d4/da8/group__imgcodecs.html#gacbaa02cffc4ec2422dfa2e24412a99e2) many of the Ultralytics-supported formats. 
https://github.com/roboflow/supervision/blob/1860fdb0a4e21edc5fa03d973e9f31c055bdcf4f/supervision/dataset/formats/yolo.py#L170 ### Proposals * P1: We can expand the hardcoded list of allowed formats. * P2: We may choose not to restrict the image format and let it fail later. ### Use case * Everyone who'd like to use formats other than `png,` `jpg`, and `jpeg` will be able to use this extension. ### Additional _No response_ ### Are you willing to submit a PR? - [X] Yes I'd like to help by submitting a PR!
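Proposal P1 can be sketched as widening the allow-list to the Ultralytics set (the helper below is hypothetical, not supervision's actual API):

```python
from pathlib import Path

# Mirrors the Ultralytics-supported image formats listed above.
SUPPORTED_IMAGE_EXTENSIONS = {
    "bmp", "dng", "jpeg", "jpg", "mpo", "png", "tif", "tiff", "webp", "pfm",
}

def is_supported_image(path: str) -> bool:
    """True if the file extension is one the loader should accept."""
    return Path(path).suffix.lower().lstrip(".") in SUPPORTED_IMAGE_EXTENSIONS

print(is_supported_image("scene.tiff"))  # True
print(is_supported_image("scene.TIF"))   # True (case-insensitive)
print(is_supported_image("notes.txt"))   # False
```

Since `load_yolo_annotations` already reads via `cv2.imread`, widening the filter to formats OpenCV can decode is the main change; P2 (no filter at all) trades the up-front check for a later read error.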
closed
2024-09-28T15:04:57Z
2025-01-19T13:02:38Z
https://github.com/roboflow/supervision/issues/1554
[ "enhancement", "hacktoberfest" ]
patel-zeel
11
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,423
Training related issues
Hello, I'm currently conducting training experiments. I have **17000** infrared images (trainA) and corresponding daytime results (trainB). The data resolution is **256 * 256**, with 8 Tesla P4 graphics cards (**GPU Memory: 8*8GB = 64GB**) and **batch_size = 16**. I have the following questions: 1. How many epochs does it take? 2. In the test results, fake_B is noticeably blurrier than real_B; is it because real_A is an infrared image? 3. I am going to run the experiment at a resolution of **2560 * 1440**. Are the following training commands correct? **The input and output resolutions shall be consistent during the test:** (1): **python train.py --dataroot ./traffic_dataset --name train_traffic --model cycle_gan --batch_size 16 --load_size 2560 --crop_size 360 --gpu_ids 0,1,2,3** (2): **python test.py --dataroot ./test_jpg --name train_traffic --model cycle_gan --preprocess none --gpu_ids 0,1,2,3** 4. If I use the above command (train.py) to train images with a resolution of 2560 * 1440, how many epochs do I need? At present, the default is 200. Looking forward to your reply!
open
2022-05-23T08:35:42Z
2022-06-14T19:57:17Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1423
[]
songyn95
3
K3D-tools/K3D-jupyter
jupyter
172
k3d.plot().display() - can't change background color or axes labels
![Screen Shot 2019-07-06 at 5 25 33 PM](https://user-images.githubusercontent.com/50302382/60761312-a3890700-a013-11e9-9141-c2d8fc0e233f.png) ![Screen Shot 2019-07-06 at 6 53 06 PM](https://user-images.githubusercontent.com/50302382/60761854-63c81c80-a01f-11e9-8944-5e7df2cabaf1.png) Hi, I attached a picture of what I see when I run k3d.plot().display() in a couple of test cases. It's black, no labels along the axes. I'd like to change the background color and axes labels. Thanks for the help
closed
2019-07-06T22:57:52Z
2021-08-11T08:53:44Z
https://github.com/K3D-tools/K3D-jupyter/issues/172
[]
tsj83
5
PrefectHQ/prefect
data-science
17,230
Self-hosted server not communicating with dashboard
### Bug summary I am running a Prefect 3.2.6 server and set a custom API host and URL via: ``` prefect config set PREFECT_SERVER_API_HOST="xx.xx.xx.xx" prefect config set PREFECT_API_URL="http://xx.xx.xx.xx:4200/api" ``` As expected, running `prefect server start` yields ``` ___ ___ ___ ___ ___ ___ _____ | _ \ _ \ __| __| __/ __|_ _| | _/ / _|| _|| _| (__ | | |_| |_|_\___|_| |___\___| |_| Configure Prefect to communicate with the server with: prefect config set PREFECT_API_URL=http://xx.xx.xx.xx:4200/api View the API reference documentation at http://xx.xx.xx.xx:4200/docs Check out the dashboard at http://xx.xx.xx.xx:4200 ``` However, while I can load the dashboard at the mentioned address, the dashboard returns the following error message and does not show any flow run data: ``` Can't connect to Server API at http://127.0.0.1:4200/api. Check that it's accessible from your machine. ``` Since I had already updated the `PREFECT_API_URL`, it is not clear to me why the dashboard is trying to connect to the default sever API URL. Going into settings in the dashboard did not resolve this. Any suggestions? ### Version info ```Text Version: 3.2.6 API version: 0.8.4 Python version: 3.11.11 Git commit: 5ceb3ada Built: Wed, Feb 19, 2025 9:24 PM OS/Arch: linux/x86_64 Profile: local Server type: server Pydantic version: 2.10.6 Integrations: prefect-dask: 0.3.3 ``` ### Additional context _No response_
closed
2025-02-21T15:37:25Z
2025-02-21T15:39:41Z
https://github.com/PrefectHQ/prefect/issues/17230
[ "bug" ]
Andrew-S-Rosen
1
plotly/dash-table
plotly
147
Add derived properties `derived_virtual_selected_rows` and `derived_viewport_selected_rows`
`selected_rows` doesn't update on sort or filter. See this example: https://github.com/plotly/dash-docs/pull/232/files#diff-61a5d030fd2c372f3abe94110f039e85
closed
2018-10-22T16:42:20Z
2018-11-08T17:56:07Z
https://github.com/plotly/dash-table/issues/147
[ "dash-type-enhancement" ]
chriddyp
6
babysor/MockingBird
deep-learning
433
Asking about the meaning of the attention plots auto-generated in the plot folder
I'd like to know what parameters the horizontal and vertical axes of the attention plots auto-generated in the plot folder represent. Why does a clean diagonal line mean that "attention has emerged"? If two plots both show an attention line, can their quality be compared, and if so, how? I've looked through a lot of material on attention mechanisms so far, but none of it covers this plot. Looking for an answer!
closed
2022-03-07T11:47:21Z
2022-03-15T10:45:39Z
https://github.com/babysor/MockingBird/issues/433
[]
flysmart
0
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
904
Learning?
Hello, Thank you for providing the code. I am using a custom dataset and generator, however my approach is based on pix2pix. I have noticed that the loss_D_fake and loss_D_real never oscillate after the first epoch! The average loss for these is around 0.25 for 21,000 images. According to [soumith point 10](https://github.com/soumith/ganhacks), things are working when D has low variance and goes down over time. My discriminator loss plots show no variance at all after 2 epochs. Similarly, the generator loss "MSE loss" is also stuck near 0.25 averaged over 21,000 images. However, the L1 loss is steadily decreasing, = 1.6 after 1 epoch and 1.27 after the 18th epoch. 1) Visually, the result seems fine, but I am concerned that the generator converges too quickly to a non-optimum result and is able to easily fool the discriminator, any thoughts on this? 2) According to [goodfellow](https://arxiv.org/pdf/1701.00160.pdf), the Nash equilibrium is around D(x) = 0.5 for all x; seeing how mine is stuck at 0.25 when you combine the discriminator losses, what could possibly be done to alleviate the issue? I also tried to make the discriminator stronger by increasing the LR and increasing the number of layers, but the loss was still near 0.25. 3) Also, since you have an "n_layer" implementation of your discriminator, how do you usually choose the layers? For example, I have 6 layers in my generator, what would be the ideal choice for N_layer_discriminator? 
Training Details: LR = 0.0001 or 0.00005 Learning Rate Decay = None Dropout: None No LeakyReLU's in Generator BatchNorms = None in Generator Loss: lsgan BatchSize = 1 ### **Loss_D_Fake** ![image](https://user-images.githubusercontent.com/44021337/72986719-22476500-3de9-11ea-885b-f738cf2b334a.png) **Loss_D_Real** ![image](https://user-images.githubusercontent.com/44021337/72986748-2ffcea80-3de9-11ea-8dd8-70ede9592414.png) **Loss_G_L1** ![image](https://user-images.githubusercontent.com/44021337/72986813-502ca980-3de9-11ea-8f45-a60aebe516ec.png) **Loss_G_GAN** ![image](https://user-images.githubusercontent.com/44021337/72986831-5ae73e80-3de9-11ea-8e87-febc61c77802.png)
open
2020-01-23T13:12:41Z
2020-01-27T18:45:37Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/904
[]
DeepZono
2
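A small arithmetic check, separate from the issue itself, shows why a flat 0.25 is exactly the LSGAN equilibrium rather than a sign of failure:

```python
# For the "lsgan" mode used here, pix2pix computes
#   loss_D_real = mean((D(x) - 1)^2)  and  loss_D_fake = mean(D(G(z))^2).
# If the discriminator outputs 0.5 for every patch -- i.e. it cannot tell
# real from fake -- both terms settle at exactly 0.25, matching the flat
# plots above. A 0.25 plateau is therefore the LSGAN analogue of
# Goodfellow's D(x) = 0.5 equilibrium.
d_real = [0.5] * 8   # hypothetical D outputs on real patches
d_fake = [0.5] * 8   # hypothetical D outputs on generated patches

loss_d_real = sum((d - 1.0) ** 2 for d in d_real) / len(d_real)
loss_d_fake = sum(d ** 2 for d in d_fake) / len(d_fake)

print(loss_d_real, loss_d_fake)  # 0.25 0.25
```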
gee-community/geemap
streamlit
1,100
Please use Python's style guide (PEP 8)
I would like to be able to use geemap in something close to production. To make our production apps robust we don't deploy if they fail basic tests. For example, we use Pylint to test for Python style issues (https://peps.python.org/pep-0008/). This makes sure our team's code base looks and works in an aligned way. As the users I support would gladly copy example code from geemap, this leads to problems, because the code is not PEP 8 compliant. A specific example is the use of the name `Map` for an object; it should be lowercase. ![image](https://user-images.githubusercontent.com/42288570/173224424-f26e06db-7531-42fd-8ed0-685f354ae1c3.png) So my request is to update the examples to be PEP 8 compliant. It will lower the time to production, which is the big issue for data science and analytics in Python.
closed
2022-06-12T08:31:59Z
2022-06-12T14:19:01Z
https://github.com/gee-community/geemap/issues/1100
[ "Feature Request" ]
MarcSkovMadsen
3
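To make the naming concern concrete, here is an illustrative check (not from the issue). It is a toy, pure-stdlib stand-in for what Pylint's invalid-name check (C0103) does properly, applied to the `Map = geemap.Map(...)` pattern from the screenshot:

```python
import ast

# Flag module-level assignment targets that are not lowercase, which is
# the PEP 8 rule the "Map" variable name violates. Illustrative only.
snippet = "Map = geemap.Map(center=(40, -100), zoom=4)"

bad = [
    target.id
    for node in ast.parse(snippet).body if isinstance(node, ast.Assign)
    for target in node.targets if isinstance(target, ast.Name)
    if not target.id.islower()
]
print(bad)  # ['Map'] -- renaming the variable to e.g. "m" clears the check
```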
open-mmlab/mmdetection
pytorch
11,147
How to generate VOC format for a custom dataset with original images and ground-truth images, for detection
I am trying to implement my model for the **detection of disease** (mmdetection) in medical imaging. How can I convert my dataset into **VOC format**? Is there a supporting .py file available in the mmdetection repository? If not, how can I do this? Can you please help me sort out this issue? My dataset consists of two main folders: 1) Original images in JPEG format 2) Ground-truth images in JPEG format. Now I want to convert the whole dataset into VOC format, with XML annotations derived from the ground-truth images.
open
2023-11-09T07:51:51Z
2023-11-09T07:51:51Z
https://github.com/open-mmlab/mmdetection/issues/11147
[]
ztahirzju
0
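mmdetection does not ship a mask-to-VOC converter for this layout as far as the issue shows; the snippet below is a hedged sketch of the XML-writing half only. It assumes the bounding boxes have already been extracted from a ground-truth mask (for example with `cv2.boundingRect` over `cv2.findContours`, or `skimage.measure.regionprops`); the file name and the "lesion" class label are made-up examples:

```python
import xml.etree.ElementTree as ET

# Emit a Pascal VOC annotation for one image, given its size and a list
# of (class_name, (xmin, ymin, xmax, ymax)) boxes.
def voc_annotation(filename, width, height, boxes):
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    for tag, value in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(value)
    for name, (xmin, ymin, xmax, ymax) in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = name
        bndbox = ET.SubElement(obj, "bndbox")
        for tag, value in (("xmin", xmin), ("ymin", ymin),
                           ("xmax", xmax), ("ymax", ymax)):
            ET.SubElement(bndbox, tag).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_text = voc_annotation("case001.jpg", 512, 512,
                          [("lesion", (30, 40, 120, 160))])
```

Writing one such file per image next to the JPEGs, plus the usual `ImageSets/Main` split lists, gives the directory layout mmdetection's VOC dataset class expects.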
Nemo2011/bilibili-api
api
289
[Question] `user.get_user_info()`, `user.get_videos()` and `user.get_dynamics()` fail to return valid data, but no error is raised
**Python version:** 3.11.2 **Module version:** 15.4.2 **Environment:** Windows --- When fetching data inside a bot via `user.get_user_info()`, `user.get_videos()` and `user.get_dynamics()`, valid content is frequently missing, yet no error is raised either. Running similar code standalone fetches the data normally.

**Partial code**
This code runs inside an **async function**.
`CREDENTIAL` is a static `Credential` object defined beforehand; as far as I can tell it is not the crux of the problem.
`sub_list` is a list of dicts recording the `uid`, `name`, `bvid` and `dynamic_id` of specific users, used to record the fetched data and merge it into a JSON file.

```python
new_sub_list = []
for sub_info in sub_list:  # iterate over the group's follow list
    uid = sub_info['uid']
    user = User(uid, CREDENTIAL)  # create the user object
    # fetch the user's nickname
    try:
        user_info = await user.get_user_info()
        name = user_info['name']
    except Exception as exception:
        logger.info(f'Failed to fetch user info this time; error: {exception} target uid: {uid}')
        logger.error(exception)
        name = sub_info['name']
    # fetch the latest video's bvid
    try:
        videos = await user.get_videos()
        vlist = videos['list']['vlist']
        if len(vlist) != 0:
            bvid = vlist[0]['bvid']
        else:  # if there are no videos (why?)
            bvid = None
    except Exception as exception:
        logger.info(f'Failed to fetch videos this time; error: {exception} target uid: {uid}')
        logger.error(exception)
        bvid = sub_info['bvid']
    # fetch the latest dynamic_id
    try:
        dynamics = await user.get_dynamics()
        if dynamics.get('cards'):
            dynamic_id = dynamics['cards'][0]['desc']['dynamic_id']
        else:  # if there are no dynamics (why?)
            dynamic_id = None
    except Exception as exception:
        logger.info(f'Failed to fetch dynamics this time; error: {exception} target uid: {uid}')
        logger.error(exception)
        dynamic_id = sub_info['dynamicId']
    # record the follow info
    new_sub_list.append({'uid': uid, 'name': name, 'bvid': bvid, 'dynamicId': dynamic_id})
    # check for updates and post them
    if bvid != None and (sub_info.get('bvid') == None or sub_info['bvid'] != bvid):  # the old list has no bvid, or it differs from the new one
        logger.info(f'[{group_id}] Uploader {name} posted a new video: {bvid}')
        if sub_info.get('bvid') == None:  # the old list has no bvid
            await videoPost(bvid, group_id, 'new')
        else:
            bvid_list = []
            end_reach = False
            for video in vlist:
                if video['bvid'] == sub_info['bvid']:  # reached the previously recorded BV id
                    end_reach = True
                if not end_reach:
                    bvid_list.append(video['bvid'])  # record this BV id
            if end_reach:  # the loop stopped because it found the previous record
                for bvid_now in reversed(bvid_list):  # iterate the recorded BV ids in reverse
                    await videoPost(bvid_now, group_id, 'new')
            else:  # the loop stopped at the first-page limit (so the recorded BV id is stale)
                await videoPost(bvid, group_id, 'new')
    if dynamic_id != None and (sub_info.get('dynamicId') == None or sub_info['dynamicId'] != dynamic_id):  # the old list has no dynamicId, or it differs from the new one
        logger.info(f'[{group_id}] Uploader {name} posted a new dynamic: {dynamic_id}')
        # TODO: render the dynamic as an image and send it via the API
        pass
return group_id, new_sub_list
```

**Run results**
(recorded by the logger)

```
[2023-05-17 12:32:24,653] INFO in bilibili: [572486880] Uploader 酸酸子Histidine posted a new video: BV1qo4y1t743
[2023-05-17 12:32:29,825] INFO in bot_main: Message sent successfully, return code: 200
[2023-05-17 12:32:29,827] INFO in bilibili: Failed to fetch dynamics this time; error:  target uid: 11350969
[2023-05-17 12:32:29,827] ERROR in bilibili:
[2023-05-17 12:32:29,827] INFO in bilibili: [161155357] Uploader WindowsSov8 posted a new video: BV1uo4y1t7DQ
[2023-05-17 12:32:29,827] INFO in bilibili: Failed to fetch user info this time; error:  target uid: 25199467
[2023-05-17 12:32:29,827] ERROR in bilibili:
[2023-05-17 12:32:29,827] INFO in bilibili: Failed to fetch dynamics this time; error:  target uid: 522443873
[2023-05-17 12:32:29,827] ERROR in bilibili:
[2023-05-17 12:32:29,827] INFO in bilibili: Failed to fetch dynamics this time; error:  target uid: 11350969
[2023-05-17 12:32:29,827] ERROR in bilibili:
[2023-05-17 12:32:38,522] INFO in bot_main: Message sent successfully, return code: 200
[2023-05-17 12:32:38,523] INFO in bilibili: Failed to fetch videos this time; error:  target uid: 25199467
[2023-05-17 12:32:38,523] ERROR in bilibili:
[2023-05-17 12:32:38,526] INFO in bilibili: Failed to fetch user info this time; error:  target uid: 494479664
[2023-05-17 12:32:38,526] ERROR in bilibili:
[2023-05-17 12:32:38,526] INFO in bilibili: Failed to fetch user info this time; error:  target uid: 11350969
[2023-05-17 12:32:38,526] ERROR in bilibili:
```

Judging from the logger output, the failure rate is not low.

**Code when run standalone**

```python
# -*- coding: utf-8 -*-
#!/usr/bin/python3
import os
import json
import asyncio
from bilibili_api import Credential, HEADERS
from bilibili_api.user import name2uid, User
from bilibili_api.video import Video, VideoDownloadURLDataDetecter

# get the directory containing the current file
current_dir = os.path.dirname(__file__)
CURRENT_DIR = current_dir[:-1]
# change to the directory of the current file
os.chdir(current_dir)
# build the required Credential
CREDENTIAL = Credential(sessdata='', bili_jct='', buvid3='', dedeuserid='')

uid = input()
user = User(uid, CREDENTIAL)
user_info = asyncio.run(user.get_user_info())
name = user_info['name']
print(name)
videos = asyncio.run(user.get_videos())
dynamics = asyncio.run(user.get_dynamics())
print(videos['list']['vlist'][0]['bvid'])
print(dynamics['cards'][0]['desc']['dynamic_id'])
```

When run standalone, this code fetches and prints the data normally and has almost never failed.
closed
2023-05-17T06:17:36Z
2023-05-27T10:25:43Z
https://github.com/Nemo2011/bilibili-api/issues/289
[ "bug" ]
WindowsSov8forUs
16
dask/dask
numpy
11,154
dask.dataframe import error for Python 3.12.3
<!-- Please include a self-contained copy-pastable example that generates the issue if possible. Please be concise with code posted. See guidelines below on how to provide a good bug report: - Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports - Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly. --> **Describe the issue**: Saw the similar issue for 3.11.9 https://github.com/dask/dask/pull/11035 ``` ipython Python 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ] Type 'copyright', 'credits' or 'license' for more information IPython 8.22.2 -- An enhanced Interactive Python. Type '?' for help. In [1]: import dask In [2]: import dask.dataframe as dd --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[2], line 1 ----> 1 import dask.dataframe as dd File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/__init__.py:100 98 import dask.dataframe._pyarrow_compat 99 from dask.base import compute --> 100 from dask.dataframe import backends, dispatch, rolling 101 from dask.dataframe.core import ( 102 DataFrame, 103 Index, (...) 109 to_timedelta, 110 ) 111 from dask.dataframe.groupby import Aggregation File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/backends.py:15 13 from dask.backends import CreationDispatch, DaskBackendEntrypoint 14 from dask.dataframe._compat import PANDAS_GE_220, is_any_real_numeric_dtype ---> 15 from dask.dataframe.core import DataFrame, Index, Scalar, Series, _Frame 16 from dask.dataframe.dispatch import ( 17 categorical_dtype_dispatch, 18 concat, (...) 
35 union_categoricals_dispatch, 36 ) 37 from dask.dataframe.extensions import make_array_nonempty, make_scalar File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/core.py:36 34 from dask.blockwise import Blockwise, BlockwiseDep, BlockwiseDepDict, blockwise 35 from dask.context import globalmethod ---> 36 from dask.dataframe import methods 37 from dask.dataframe._compat import ( 38 PANDAS_GE_140, 39 PANDAS_GE_150, (...) 47 is_string_dtype, 48 ) 49 from dask.dataframe.accessor import CachedAccessor, DatetimeAccessor, StringAccessor File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/methods.py:34 22 # preserve compatibility while moving dispatch objects 23 from dask.dataframe.dispatch import ( # noqa: F401 24 concat, 25 concat_dispatch, (...) 32 union_categoricals, 33 ) ---> 34 from dask.dataframe.utils import is_dataframe_like, is_index_like, is_series_like 35 from dask.utils import _deprecated_kwarg 37 # cuDF may try to import old dispatch functions File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/utils.py:20 18 from dask.base import get_scheduler, is_dask_collection 19 from dask.core import get_deps ---> 20 from dask.dataframe import ( # noqa: F401 register pandas extension types 21 _dtypes, 22 methods, 23 ) 24 from dask.dataframe._compat import PANDAS_GE_150, tm # noqa: F401 25 from dask.dataframe.dispatch import ( # noqa : F401 26 make_meta, 27 make_meta_obj, 28 meta_nonempty, 29 ) File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/_dtypes.py:9 6 import pandas as pd 8 from dask.dataframe._compat import PANDAS_GE_150 ----> 9 from dask.dataframe.extensions import make_array_nonempty, make_scalar 12 @make_array_nonempty.register(pd.DatetimeTZDtype) 13 def _(dtype): 14 return pd.array([pd.Timestamp(1), pd.NaT], dtype=dtype) File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/extensions.py:8 1 """ 2 Support for pandas ExtensionArray in 
dask.dataframe. 3 4 See :ref:`extensionarrays` for more. 5 """ 6 from __future__ import annotations ----> 8 from dask.dataframe.accessor import ( 9 register_dataframe_accessor, 10 register_index_accessor, 11 register_series_accessor, 12 ) 13 from dask.utils import Dispatch 15 make_array_nonempty = Dispatch("make_array_nonempty") File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/accessor.py:126 113 token = f"{self._accessor_name}-{attr}" 114 return self._series.map_partitions( 115 self._delegate_method, 116 self._accessor_name, (...) 122 token=token, 123 ) --> 126 class DatetimeAccessor(Accessor): 127 """Accessor object for datetimelike properties of the Series values. 128 129 Examples (...) 132 >>> s.dt.microsecond # doctest: +SKIP 133 """ 135 _accessor_name = "dt" File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/accessor.py:81, in Accessor.__init_subclass__(cls, **kwargs) 79 attr, min_version = item if isinstance(item, tuple) else (item, None) 80 if not hasattr(cls, attr): ---> 81 _bind_property(cls, pd_cls, attr, min_version) File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/dataframe/accessor.py:35, in _bind_property(cls, pd_cls, attr, min_version) 33 except Exception: 34 pass ---> 35 setattr(cls, attr, property(derived_from(pd_cls, version=min_version)(func))) File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/utils.py:987, in derived_from.<locals>.wrapper(method) 985 try: 986 extra = getattr(method, "__doc__", None) or "" --> 987 method.__doc__ = _derived_from( 988 original_klass, 989 method, 990 ua_args=ua_args, 991 extra=extra, 992 skipblocks=skipblocks, 993 inconsistencies=inconsistencies, 994 ) 995 return method 997 except AttributeError: File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/utils.py:940, in _derived_from(cls, method, ua_args, extra, skipblocks, inconsistencies) 938 # Mark unsupported arguments 939 try: --> 940 method_args = 
get_named_args(method) 941 original_args = get_named_args(original_method) 942 not_supported = [m for m in original_args if m not in method_args] File ~/miniconda3/envs/ibisml-dev/lib/python3.12/site-packages/dask/utils.py:701, in get_named_args(func) 699 def get_named_args(func) -> list[str]: 700 """Get all non ``*args/**kwargs`` arguments for a function""" --> 701 s = inspect.signature(func) 702 return [ 703 n 704 for n, p in s.parameters.items() 705 if p.kind in [p.POSITIONAL_OR_KEYWORD, p.POSITIONAL_ONLY, p.KEYWORD_ONLY] 706 ] File ~/miniconda3/envs/ibisml-dev/lib/python3.12/inspect.py:3310, in signature(obj, follow_wrapped, globals, locals, eval_str) 3308 def signature(obj, *, follow_wrapped=True, globals=None, locals=None, eval_str=False): 3309 """Get a signature object for the passed callable.""" -> 3310 return Signature.from_callable(obj, follow_wrapped=follow_wrapped, 3311 globals=globals, locals=locals, eval_str=eval_str) File ~/miniconda3/envs/ibisml-dev/lib/python3.12/inspect.py:3054, in Signature.from_callable(cls, obj, follow_wrapped, globals, locals, eval_str) 3050 @classmethod 3051 def from_callable(cls, obj, *, 3052 follow_wrapped=True, globals=None, locals=None, eval_str=False): 3053 """Constructs Signature for the given callable object.""" -> 3054 return _signature_from_callable(obj, sigcls=cls, 3055 follow_wrapper_chains=follow_wrapped, 3056 globals=globals, locals=locals, eval_str=eval_str) File ~/miniconda3/envs/ibisml-dev/lib/python3.12/inspect.py:2642, in _signature_from_callable(obj, follow_wrapper_chains, skip_bound_arg, globals, locals, eval_str, sigcls) 2640 call = getattr_static(type(obj), '__call__', None) 2641 if call is not None: -> 2642 call = _descriptor_get(call, obj) 2643 return _get_signature_of(call) 2645 raise ValueError('callable {!r} is not supported by signature'.format(obj)) File ~/miniconda3/envs/ibisml-dev/lib/python3.12/inspect.py:2467, in _descriptor_get(descriptor, obj) 2465 if get is _sentinel: 2466 return descriptor 
-> 2467 return get(descriptor, obj, type(obj)) TypeError: descriptor '__call__' for 'type' objects doesn't apply to a 'property' object ``` **Minimal Complete Verifiable Example**: ```python import dask.dataframe as dd ``` **Anything else we need to know?**: **Environment**: - Dask version: 2024.2.0 - Python version: 3.12.3 - Operating System: MacOS - Install method (conda, pip, source): pip
closed
2024-05-29T21:06:58Z
2024-05-29T21:47:46Z
https://github.com/dask/dask/issues/11154
[ "needs info" ]
jitingxu1
3
matplotlib/matplotlib
matplotlib
28,898
[Bug]: Matplotlib font error for negative data
### Bug summary

When using the combination text.usetex: True, font.family: 'serif', font.serif: 'Times', attempting to plot negative data results in a LookupError for the required font.

### Code for reproduction

```Python
import matplotlib.pyplot as plt

plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.serif'] = 'Times'

fig, ax = plt.subplots()
ax.plot([-1, 2, 3])
fig.savefig('test.pdf')
```

### Actual outcome

LookupError: An associated PostScript font (required by Matplotlib) could not be found for TeX font 'zptmcm7y' in 'C:/Users/XXX/AppData/Local/MiKTeX/fonts/map/pdftex/pdftex.map'; this problem can often be solved by installing a suitable PostScript font package in your TeX package manager

### Expected outcome

A plot with the data in the TeX font.

### Additional information

Installing or re-installing a PostScript font package using MiKTeX did not solve the issue.

### Operating system

Windows 10

### Matplotlib Version

3.9.2

### Matplotlib Backend

module://backend_interagg

### Python version

3.12.1

### Jupyter version

_No response_

### Installation

pip
closed
2024-09-27T07:54:07Z
2024-09-27T14:28:45Z
https://github.com/matplotlib/matplotlib/issues/28898
[ "topic: text/usetex", "topic: text/fonts" ]
JeltevL
1
FactoryBoy/factory_boy
django
811
get_or_create fails with Trait
#### Description

If you use get_or_create together with a Trait, the values set by the Trait are ignored and an existing object without the Trait is returned.

#### To Reproduce

Failing test case here: https://github.com/FactoryBoy/factory_boy/pull/810
open
2020-11-02T18:03:04Z
2020-11-02T18:03:04Z
https://github.com/FactoryBoy/factory_boy/issues/811
[]
MRigal
0
mwaskom/seaborn
data-science
3,320
LOWESS Smoother for The .objects Interface
Here's a LOWESS smoother for the .objects interface. Like @tomicapretto, I slightly modified the PolyFit implementation. Until there is a release with a LOWESS smoother, this may do.

```python
"""A smoother that has the same interface as the Seaborn PolyFit class."""
from __future__ import annotations

from dataclasses import dataclass

import pandas as pd
import statsmodels.api as sm

from seaborn._stats.base import Stat


@dataclass
class Lowess(Stat):
    """
    Fit a LOWESS smooth of data. Modeled on PolyFit.
    """

    frac: float = 0.2  # Mysterious incantation to make the argument work.

    def _fit(self, data):
        self.frac = min(self.frac, 1.0)
        x = data["x"]
        y = data["y"]
        yy = sm.nonparametric.lowess(exog=x, endog=y, frac=self.frac)
        df = pd.DataFrame(data=yy, columns=("x", "y"))
        return df

    # TODO we should have a way of identifying the method that will be applied
    # and then only define __call__ on a base-class of stats with this pattern
    def __call__(self, data, groupby, orient, scales):
        return (
            groupby
            .apply(data.dropna(subset=["x", "y"]), self._fit)
        )
```
open
2023-04-13T13:11:30Z
2023-08-27T12:13:38Z
https://github.com/mwaskom/seaborn/issues/3320
[ "wishlist" ]
tbpassin
2
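For readers without statsmodels at hand, a dependency-free toy version of the idea shows what such a stat computes. It uses a plain k-nearest-neighbour mean rather than statsmodels' tricube-weighted local linear fit, so it is only an illustration of the interface and the smoothing effect of `frac`:

```python
# For each x, average the y values of the nearest frac * n points.
def lowess_mean(xs, ys, frac=0.5):
    n = len(xs)
    k = max(2, int(frac * n))
    out = []
    for x0 in xs:
        # take the k nearest neighbours of x0 and average their y values
        idx = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
        out.append(sum(ys[i] for i in idx) / k)
    return out

xs = [0, 1, 2, 3, 4, 5]
ys = [0.0, 1.1, 1.9, 3.2, 3.9, 5.1]
smooth = lowess_mean(xs, ys, frac=0.5)
print(len(smooth))  # 6
```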
jmcnamara/XlsxWriter
pandas
257
Feature request: Add docs on timezone aware/naive datetime
Excel, and thus XlsxWriter, doesn't support timezones in dates/times. The current recommendation is to remove or adjust the timezone from the datetime before passing it to XlsxWriter. Something like this from the pytz docs:

```python
dt = datetime(2005, 3, 1, 14, 13, 21, tzinfo=utc)
naive = dt.replace(tzinfo=None)
```

This should be documented in the Working with Dates and Times section of the docs. Possibly, a constructor option could be added to remove any timezones in the data.
closed
2015-05-20T07:22:48Z
2016-12-02T21:56:14Z
https://github.com/jmcnamara/XlsxWriter/issues/257
[ "feature request", "documentation", "ready to close", "short term" ]
jmcnamara
10
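The same pattern works with the standard library's `datetime.timezone`, without pytz. A sketch: convert the aware datetime to the zone you want the spreadsheet to show first, then drop tzinfo (the -5 h offset below is an illustrative fixed offset, not a DST-aware zone, and the actual `worksheet.write_datetime` call is omitted so the snippet stays self-contained):

```python
from datetime import datetime, timezone, timedelta

aware = datetime(2005, 3, 1, 14, 13, 21, tzinfo=timezone.utc)
local = aware.astimezone(timezone(timedelta(hours=-5)))  # shift to the zone to display
naive = local.replace(tzinfo=None)                       # strip tzinfo for XlsxWriter

print(naive.tzinfo is None, naive.hour)  # True 9
```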
littlecodersh/ItChat
api
785
Resolved
I am completely unable to send messages to friends, and I have no idea what is going on... I also cannot delete friends, but I can still receive messages sent by friends.
closed
2019-01-24T02:53:54Z
2019-06-27T02:26:47Z
https://github.com/littlecodersh/ItChat/issues/785
[]
Liumxv
2
MilesCranmer/PySR
scikit-learn
126
[Feature] Automatic Plotting + LaTeX rendering
It would be great if PySR would automatically generate plots for discovered expressions, as well as LaTeX of the expression form, during training. Then, one does not need to stop training to test the expressions - there would already be plots + generated latex for each expression. This could be done occasionally, in the backend of SymbolicRegression.jl. It could be generated to a temporary folder.
open
2022-04-25T17:15:59Z
2023-04-20T06:07:09Z
https://github.com/MilesCranmer/PySR/issues/126
[ "enhancement" ]
MilesCranmer
0
nolar/kopf
asyncio
1,030
Embedding in shared event loop
### Keywords

embed

### Problem

First of all - thank you for this project. Having a framework for building operators in native Python is dope!

I have a question about embedding kopf. The documentation states that kopf needs its own event loop because it considers all spawned tasks as its own. That would interfere with a different application. This requirement seems like something coming from kopf itself rather than from asyncio or Python. The question: why?

For me, it seemed obvious to run some web app with an embedded operator on the same event loop. The requirement to have a separate event loop seems strange. Maybe I'm naive, but maybe it could be solved by:
a) keeping references to all kopf-spawned tasks and cancelling only them when embedding
b) setting some custom attribute (i.e. `__kopf_managed__`) on created tasks and, when shutting down the embedded operator, filtering for tasks with that attribute

I do not mean to be mean or critical - I am genuinely curious what the reason for this requirement is.
open
2023-05-24T16:27:22Z
2023-11-05T17:44:02Z
https://github.com/nolar/kopf/issues/1030
[ "question" ]
scabala
4
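Option (a) above can be sketched in a few lines. This is illustrative only, with made-up names, and is not kopf's actual implementation: the embedded operator keeps a registry of the tasks it spawned and cancels only those on shutdown, leaving the host application's tasks untouched:

```python
import asyncio

managed_tasks = set()  # registry of tasks the framework spawned

def spawn_managed(coro):
    task = asyncio.ensure_future(coro)
    managed_tasks.add(task)
    task.add_done_callback(managed_tasks.discard)  # drop finished tasks
    return task

async def demo():
    framework_task = spawn_managed(asyncio.sleep(3600))
    app_task = asyncio.ensure_future(asyncio.sleep(3600))  # the host app's task
    # Embedded-operator shutdown: cancel only the registered tasks.
    for task in list(managed_tasks):
        task.cancel()
    await asyncio.sleep(0.01)  # let the cancellation propagate
    cancelled = (framework_task.cancelled(), app_task.cancelled())
    app_task.cancel()  # tidy up the demo's app task before exiting
    return cancelled

result = asyncio.run(demo())
print(result)  # (True, False)
```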
Significant-Gravitas/AutoGPT
python
8,796
integration test: Run a saved agent.
closed
2024-11-27T02:03:26Z
2024-12-06T04:44:35Z
https://github.com/Significant-Gravitas/AutoGPT/issues/8796
[]
ntindle
0
flasgger/flasgger
flask
124
Release new version with decorating flasgger views support
Hey! Could you release a new version of flasgger? I want to use the https://github.com/rochacbruno/flasgger/pull/109 feature. Thanks in advance.
closed
2017-06-26T08:47:01Z
2017-06-27T00:43:59Z
https://github.com/flasgger/flasgger/issues/124
[]
rafal-jaworski
2
cvat-ai/cvat
pytorch
8,873
Revisit undo/redo functionality in annotation UI
### Actions before raising this issue

- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)

### Is your feature request related to a problem? Please describe.

Currently, when creating composite shapes, such as polylines, points, skeletons, polygons, and masks, the user makes several clicks to obtain the final annotation. This is also true for shape editing, where available. Here are several problems:

- there are extra keybinds, in addition to the standard ctrl+z/ctrl+shift+z, to remove the last point added. I'm not sure there is a keybind to redo the removed point. It's an extra keybind to remember.
- when a shape is being edited or created (e.g. a mask), pressing ctrl+z/ctrl+shift+z typically leads to some actions outside of the shape being edited. It's possible to make changes on other frames this way. The focus will be moved to that other frame. I may be wrong, but it looks to me more annoying than really useful. A possible use case is that some (temporary) shape was added (removed) on this frame and you're trying to remove/restore it to change some visual conditions while drawing a new one, but it looks like that should not be solved this way.
- if a mistake is made while creating (editing) a composite shape (e.g. a mask), you can only correct the last brush action manually or restart from scratch. Simply undoing the last action would save time.

### Describe the solution you'd like

When creating (editing) a shape, relate undo/redo (ctrl+z/ctrl+shift+z) to this shape-editing "session" only.

Behavior:
- remove/restore the last point added (polygons, polylines, points)
- remove/restore the last brush move (masks)
- undo/redo the last point move (polygons, polylines, points, skeleton points, rectangles, cuboids, ellipses)

### Describe alternatives you've considered

It's possible to remember actions on specific shape elements after shape editing, but I'm not sure it's really needed now, because that could be a lot of useless changes to remember from a single shape edit. Also, in some cases it's already available.

### Additional context

_No response_
open
2024-12-25T11:04:39Z
2025-01-14T10:00:45Z
https://github.com/cvat-ai/cvat/issues/8873
[ "enhancement", "ui/ux" ]
zhiltsov-max
1
WZMIAOMIAO/deep-learning-for-image-processing
pytorch
841
How to fix the C drive filling up after running the author's models
How can I fix the C drive filling up after running the author's models?
open
2024-10-16T07:32:13Z
2024-11-09T07:03:59Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/841
[]
zhy12123
2
qubvel-org/segmentation_models.pytorch
computer-vision
318
Reduce the model filesize if encoder depth is less than 5
I noticed that if the depth of an encoder is set to a value less than 5, the unused nodes still remain in the graph and will be saved in checkpoints and torchscript files. This happens, for example, when the PSPNet is used with standard settings (extracting features after stage 3). I think it could be beneficial to remove these nodes from the graph for two main reasons:
- the file size of checkpoints and torchscript exports will be much smaller
- when we debug the model and use print(model) or the like, the output will be easier to understand (not including unused layers)

For the MobileNetV2Encoder I guess the change would look like this:

```python
def __init__(self, out_channels, depth=5, **kwargs):
    # ...
    if self._depth == 4:
        del self.features[14:]
    elif self._depth == 3:
        del self.features[7:]
    elif self._depth == 2:
        del self.features[4:]
    elif self._depth == 1:
        del self.features[2:]
```
closed
2020-12-18T07:51:31Z
2022-04-30T02:16:06Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/318
[ "Stale" ]
winfried-ripken
7
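The proposed trimming can also be expressed as a depth-to-boundary table. The block below is a dependency-free illustration, using a plain list in place of `self.features`; the stage boundaries 2/4/7/14 come from the snippet in the issue, and 19 is MobileNetV2's feature-block count (`torch.nn.Sequential` supports the same `del`-slicing):

```python
# How many leading feature blocks each encoder depth actually uses.
STAGE_END = {1: 2, 2: 4, 3: 7, 4: 14, 5: 19}

def trim(features, depth):
    del features[STAGE_END[depth]:]  # drop everything past the last used stage
    return features

feats = list(range(19))  # stands in for the 19 modules in self.features
print(len(trim(feats, 3)))  # 7
```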
jupyterlab/jupyter-ai
jupyter
530
Improve API key error handling
### Problem

API key error handling can be further improved:

> My high-level thought is that while this is a good starting point, this strategy is a little unintuitive. It requires manually detecting and handling API key exceptions. In the future, we should probably wrap the LangChain methods in an exception handler that raises some custom ApiKeyInvalid exception in this case. That way, we show the same error message in magics and in chat.

_Originally posted by @dlqqq in https://github.com/jupyterlab/jupyter-ai/issues/513#issuecomment-1863223676_

## Proposed solution

Implement API key error handling as described above.
open
2023-12-19T23:14:49Z
2023-12-20T00:20:28Z
https://github.com/jupyterlab/jupyter-ai/issues/530
[ "enhancement" ]
andrii-i
1
OpenInterpreter/open-interpreter
python
1,447
VOICE MODEEEEEEEEEEEEEEEEEEEEEEEE
### Is your feature request related to a problem? Please describe.

no

### Describe the solution you'd like

I've waited long enough for GPT-4o voice mode, so I'm curious if what you mentioned in your blog post about speech-to-speech models could be implemented sooner using Gemini 1.5 Pro. Could we use its audio input and have TTS output? What do you think?

### Describe alternatives you've considered

_No response_

### Additional context

_No response_
open
2024-09-10T02:36:37Z
2024-11-04T16:27:00Z
https://github.com/OpenInterpreter/open-interpreter/issues/1447
[ "Enhancement" ]
OpenMachinesAI
1
deeppavlov/DeepPavlov
tensorflow
881
NER problems with download=False
This code ```build_model(configs.ner.ner_rus, download=False)``` leads to following exception: ``` 2019-06-14 14:51:45.158 INFO in 'deeppavlov.models.embedders.fasttext_embedder'['fasttext_embedder'] at line 67: [loading fastText embeddings from `/root/.deeppavlov/downloads/embeddings/lenta_lower_100.bin`] INFO:deeppavlov.models.embedders.fasttext_embedder:[loading fastText embeddings from `/root/.deeppavlov/downloads/embeddings/lenta_lower_100.bin`] 2019-06-14 14:51:45.177 ERROR in 'deeppavlov.core.common.params'['params'] at line 110: Exception in <class 'deeppavlov.models.embedders.fasttext_embedder.FasttextEmbedder'> Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py", line 104, in from_params component = cls(**dict(config_params, **kwargs)) File "/usr/local/lib/python3.6/dist-packages/deeppavlov/models/embedders/abstract_embedder.py", line 56, in __init__ self.load() File "/usr/local/lib/python3.6/dist-packages/deeppavlov/models/embedders/fasttext_embedder.py", line 68, in load self.model = fastText.load_model(str(self.load_path)) File "/usr/local/lib/python3.6/dist-packages/fastText/FastText.py", line 303, in load_model return _FastText(path) File "/usr/local/lib/python3.6/dist-packages/fastText/FastText.py", line 37, in __init__ self.f.loadModel(model) ValueError: /root/.deeppavlov/downloads/embeddings/lenta_lower_100.bin cannot be opened for loading! 
ERROR:deeppavlov.core.common.params:Exception in <class 'deeppavlov.models.embedders.fasttext_embedder.FasttextEmbedder'> Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py", line 104, in from_params component = cls(**dict(config_params, **kwargs)) File "/usr/local/lib/python3.6/dist-packages/deeppavlov/models/embedders/abstract_embedder.py", line 56, in __init__ self.load() File "/usr/local/lib/python3.6/dist-packages/deeppavlov/models/embedders/fasttext_embedder.py", line 68, in load self.model = fastText.load_model(str(self.load_path)) File "/usr/local/lib/python3.6/dist-packages/fastText/FastText.py", line 303, in load_model return _FastText(path) File "/usr/local/lib/python3.6/dist-packages/fastText/FastText.py", line 37, in __init__ self.f.loadModel(model) ValueError: /root/.deeppavlov/downloads/embeddings/lenta_lower_100.bin cannot be opened for loading! Traceback (most recent call last): File "/opt/aidoc/src/server.py", line 141, in <module> load_nodes() File "/opt/aidoc/src/server.py", line 137, in load_nodes nodes[node_name] = NodeModels(os.path.join(root_data_path, node_name), ft_model, morph) File "/opt/aidoc/src/server.py", line 103, in __init__ self.NER = build_model(configs.ner.ner_rus, download=False) File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/commands/infer.py", line 61, in build_model component = from_params(component_config, mode=mode, serialized=component_serialized) File "/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py", line 104, in from_params component = cls(**dict(config_params, **kwargs)) File "/usr/local/lib/python3.6/dist-packages/deeppavlov/models/embedders/abstract_embedder.py", line 56, in __init__ self.load() File "/usr/local/lib/python3.6/dist-packages/deeppavlov/models/embedders/fasttext_embedder.py", line 68, in load self.model = fastText.load_model(str(self.load_path)) File 
"/usr/local/lib/python3.6/dist-packages/fastText/FastText.py", line 303, in load_model return _FastText(path) File "/usr/local/lib/python3.6/dist-packages/fastText/FastText.py", line 37, in __init__ self.f.loadModel(model) ValueError: /root/.deeppavlov/downloads/embeddings/lenta_lower_100.bin cannot be opened for loading! ```
closed
2019-06-14T14:55:03Z
2019-06-27T15:02:24Z
https://github.com/deeppavlov/DeepPavlov/issues/881
[]
bavadim
4
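A hedged workaround sketch: the traceback above bottoms out in a missing embeddings file, so the practical fix is `download=True` on the first run. A small guard keeps later runs offline; the path is copied from the traceback, and the real `build_model` call is commented out so the snippet stays self-contained:

```python
from pathlib import Path

# The file fastText failed to open in the traceback.
emb = Path.home() / ".deeppavlov/downloads/embeddings/lenta_lower_100.bin"
need_download = not emb.is_file()

# ner = build_model(configs.ner.ner_rus, download=need_download)
print(type(need_download).__name__)  # bool
```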
mars-project/mars
pandas
3,192
[BUG] supervisor memory leak
**Describe the bug**

![image](https://user-images.githubusercontent.com/6308809/179145659-00ac7756-e65f-4938-83a3-82e62dc21b14.png)

The Mars job executed hundreds of tasks, and some tasks have a large graph:

![image](https://user-images.githubusercontent.com/6308809/179145897-23bb9e41-1a13-4eb5-ae25-e14ed5da3836.png)

Set the environment variable `PYTHONTRACEMALLOC=1`, then run the following code:

```python
import gc
import tracemalloc

import mars
import mars.tensor as mt
import mars.dataframe as md


def run_round(extra_config=None):
    df = md.DataFrame(
        mt.random.rand(1200, 70, chunk_size=70),
        columns=[f"col{i}" for i in range(70)])
    for i in range(70):
        df[f"col{i + 70}"] = df[f"col{i}"].fillna(0)
        df[f"col{i + 140}"] = df[f"col{i}"].fillna(0)
    for i in range(70):
        df[f"col{i}"] = df[f"col{i}"] / 100
    df = df.fillna(0)
    df.map_chunk(lambda x: x).execute(extra_config=extra_config)


def main():
    mars.new_session()
    count = 0
    run_round(extra_config={"enable_profiling": True})
    gc.collect()
    s1 = tracemalloc.take_snapshot()
    while True:
        print(f"==============>run {count}")
        count += 1
        run_round()
        if count == 3:
            break
    gc.collect()
    s2 = tracemalloc.take_snapshot()
    diff = s2.compare_to(s1, "lineno")
    print("[ Top 10 differences ]")
    for stat in diff[:10]:
        print(stat)


if __name__ == "__main__":
    main()
```

Output:

```
[ Top 10 differences ]
/home/admin/mars/mars/core/base.py:96: size=10.5 MiB (+8098 KiB), count=59034 (+44276), average=187 B
/home/admin/mars/mars/core/operand/core.py:100: size=10200 KiB (+7647 KiB), count=61221 (+45900), average=171 B
/home/admin/mars/mars/core/graph/builder/base.py:72: size=9861 KiB (+7400 KiB), count=85245 (+63956), average=118 B
/home/admin/mars/mars/services/task/analyzer/analyzer.py:250: size=9629 KiB (+7222 KiB), count=91908 (+68931), average=107 B
/home/admin/mars/mars/core/graph/builder/base.py:62: size=9194 KiB (+6895 KiB), count=91812 (+68859), average=103 B
/home/admin/mars/mars/core/base.py:37: size=7590 KiB (+5691 KiB), count=131726 (+98781), average=59 B
/home/admin/mars/mars/core/operand/base.py:266: size=6733 KiB (+5048 KiB), count=132583 (+99408), average=52 B
/home/admin/mars/mars/core/operand/base.py:225: size=5691 KiB (+4268 KiB), count=132393 (+99294), average=44 B
/home/admin/mars/mars/core/entity/core.py:35: size=5180 KiB (+3883 KiB), count=66294 (+49706), average=80 B
/home/admin/mars/mars/core/operand/core.py:121: size=5025 KiB (+3769 KiB), count=30629 (+22971), average=168 B
```

After digging into the code, the memory leak is because the `TaskProcessor` is not removed: the tileable graph, chunk graph and subtask graph are never garbage collected.

**To Reproduce**

- Mars version: latest master
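The snapshot-diff pattern used in the report can be packaged as a small reusable helper for hunting retention bugs like this one. Below is a sketch with a deliberately leaky toy function standing in for `run_round` (the helper and `leaky` are illustrative, not Mars code):

```python
import gc
import tracemalloc


def top_growth(fn, rounds=3, top=5):
    """Run fn() once as warm-up, then `rounds` more times, and return the
    allocation sites with the largest growth between the two snapshots."""
    tracemalloc.start()
    fn()                      # warm-up: exclude one-time caches from the diff
    gc.collect()
    before = tracemalloc.take_snapshot()
    for _ in range(rounds):
        fn()
    gc.collect()
    after = tracemalloc.take_snapshot()
    tracemalloc.stop()
    return after.compare_to(before, "lineno")[:top]


# A deliberately leaky function: it extends a module-level list on every
# call, so the allocated bytearrays are never released.
_kept = []


def leaky():
    _kept.extend(bytearray(1024) for _ in range(100))


for stat in top_growth(leaky):
    print(stat)
```

Any line whose `size_diff` keeps growing proportionally with the number of rounds, as the graph-builder lines do in the report, is a retention candidate.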
closed
2022-07-15T04:03:15Z
2022-09-14T09:32:44Z
https://github.com/mars-project/mars/issues/3192
[]
fyrestone
0
mithi/hexapod-robot-simulator
dash
85
Refactoring Suggestions
Replace:

https://github.com/mithi/hexapod-robot-simulator/blob/808534d769476342ae56e88ea865c33b36c89490/index.py#L14

With:

```python
div_header = html.Div(
    [
        html.A(html.H6("👾"), href=URL_REPO, target="_blank", style=icon_link_style),
        html.A(html.H6("☕"), href=URL_KOFI, target="_blank", style=icon_link_style),
        dcc.Link(html.H6("●"), href="/", style=icon_link_style),
        dcc.Link(html.H6("●"), href="/inverse-kinematics", style=icon_link_style),
        dcc.Link(html.H6("●"), href="/kinematics", style=icon_link_style),
    ],
    style={"display": "flex", "flex-direction": "row"},
)
```

so that the page does not refresh when navigating: `dcc.Link` changes the URL client-side, while `html.A` triggers a full page reload.
closed
2020-04-24T17:00:13Z
2020-04-27T13:29:04Z
https://github.com/mithi/hexapod-robot-simulator/issues/85
[ "feature request", "good first issue", "low hanging fruit", "code quality" ]
mithi
4
taverntesting/tavern
pytest
241
Global variables are not correctly updated from ext function
I am trying to run tavern tests from a Python script. I have a YAML file that tests an API call; the idea is to store the response in a global variable and use it later.

```python
# My Python script:
import json

from tavern.core import run


def login(host, login, password):
    config = {
        "variables": {
            "host": host,
            "login": login,
            "password": password,
        }
    }
    success = run("test_login.tavern.yaml", config,
                  pytest_args=["-v", "-x", "--disable-warnings"])
    return testtoken


# External function called from the YAML file
def test_functions(response):
    filename = "data.json"
    with open(filename, "w") as f:
        json.dump(response.json(), f)
    global testtoken
    testtoken = response.json()["result"]["token"]
```

After script execution, my `testtoken` variable always has the "empty" value and not the expected token from the response, but the `data.json` file has the correct information.
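The `data.json` file is written while the global stays empty, most likely because pytest imports the module containing the ext function as its own module object, so `global testtoken` is set on that copy rather than on the running script's namespace. Passing the value through a file, as the author already does for `data.json`, sidesteps this entirely. A minimal sketch (helper names are hypothetical):

```python
import json
import os
import tempfile

# Hand the token from the tavern ext function back to the caller through a
# temp file instead of a module-level global.
TOKEN_FILE = os.path.join(tempfile.gettempdir(), "tavern_token.json")


def save_token(response_json):
    """Call from the tavern ext function with response.json()."""
    with open(TOKEN_FILE, "w") as f:
        json.dump({"token": response_json["result"]["token"]}, f)


def load_token():
    """Call from the driving script after run() returns."""
    with open(TOKEN_FILE) as f:
        return json.load(f)["token"]
```

Here `save_token` would be invoked from the `$ext` hook with the parsed response body, and `load_token()` called in `login` after `run()` returns, in place of the global.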
closed
2019-02-01T10:33:30Z
2019-08-10T20:57:13Z
https://github.com/taverntesting/tavern/issues/241
[]
ymouncef
1
modelscope/data-juicer
streamlit
388
[Bug]: Process killed during "Loading checkpoint shards", is it running out of memory?
### Before Reporting

- [X] I have pulled the latest code of the main branch to run again and the bug still existed.
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you ask a question using the Question template.)

### Search before reporting

- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs.

### OS

Ubuntu

### Installation Method

from source

### Data-Juicer Version

latest

### Python Version

3.10

### Describe the bug

2024-08-18 12:12:34 | INFO | data_juicer.core.executor:47 - Using cache compression method: [None]
2024-08-18 12:12:34 | INFO | data_juicer.core.executor:52 - Setting up data formatter...
2024-08-18 12:12:34 | INFO | data_juicer.core.executor:74 - Preparing exporter...
2024-08-18 12:12:34 | INFO | data_juicer.core.executor:86 - Preparing tracer...
2024-08-18 12:12:34 | INFO | data_juicer.core.executor:90 - Trace for all ops.
2024-08-18 12:12:34 | INFO | data_juicer.core.executor:151 - Loading dataset from data formatter...
2024-08-18 12:12:35 | INFO | data_juicer.format.formatter:185 - Unifying the input dataset formats...
2024-08-18 12:12:35 | INFO | data_juicer.format.formatter:200 - There are 400000 sample(s) in the original dataset.
Filter (num_proc=4): 100%|##########| 400000/400000 [00:02<00:00, 183155.28 examples/s]
2024-08-18 12:12:37 | INFO | data_juicer.format.formatter:214 - 400000 samples left after filtering empty text.
2024-08-18 12:12:37 | INFO | data_juicer.format.formatter:237 - Converting relative paths in the dataset to their absolute version. (Based on the directory of input dataset file)
Map (num_proc=4): 100%|##########| 400000/400000 [00:11<00:00, 35108.88 examples/s]
2024-08-18 12:12:48 | INFO | data_juicer.format.mixture_formatter:137 - sampled 400000 from 400000
2024-08-18 12:12:48 | INFO | data_juicer.format.mixture_formatter:143 - There are 400000 in final dataset
2024-08-18 12:12:48 | INFO | data_juicer.core.executor:157 - Preparing process operators...
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]/root/miniconda3/envs/dj/lib/python3.10/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage() return self.fget.__get__(instance, owner)()
Loading checkpoint shards: 100%|##########| 2/2 [00:19<00:00, 9.70s/it]
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]Killed

Loading the dataset works fine, but the process is killed while loading the checkpoint shards. Why does this happen? Is even 40 GB of RAM not enough? Is there any way to reduce the memory pressure?

### To Reproduce

I ran the command `dj-process --config /root/autodl-tmp/dj_synth_challenge/toolkit/data-juicer/configs/dj_game.yaml`, and it was killed while loading the checkpoint.

### Configs

_No response_

### Logs

_No response_

### Screenshots

_No response_

### Additional

_No response_
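A bare `Killed` message with no Python traceback is the typical signature of the Linux OOM killer (`dmesg | grep -i kill` can confirm it). Note that the first model in the log loads fine and a *second* sharded checkpoint is being loaded when the process dies, so two models' worth of host RAM are in play at once; if the operator exposes them, load-time options such as half-precision weights or streaming shard loading reduce the footprint. To see how close a run gets to the limit, a small stdlib helper can report the process's own peak memory (a sketch, not data-juicer API):

```python
import resource
import sys


def peak_rss_mb():
    """Peak resident set size of this process, in MiB.

    ru_maxrss is reported in KiB on Linux but in bytes on macOS.
    """
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return peak / (1024 * 1024) if sys.platform == "darwin" else peak / 1024


print(f"peak RSS so far: {peak_rss_mb():.1f} MiB")
```

Logging this before and after each operator is prepared would show which model load pushes the run toward the 40 GB ceiling.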
closed
2024-08-18T04:18:12Z
2024-08-18T04:52:12Z
https://github.com/modelscope/data-juicer/issues/388
[ "bug" ]
ZHJ19970917
1
CorentinJ/Real-Time-Voice-Cloning
pytorch
1,152
train my dataset
Can I train on a custom dataset?
closed
2023-01-02T10:07:23Z
2023-01-08T08:55:17Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1152
[]
alidabaghi123
0