repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
coqui-ai/TTS
|
deep-learning
| 2,686
|
[Bug] Unable to download models
|
### Describe the bug
When I run the example, the model fails to download.
When I try to manually download the model from
```
https://coqui.gateway.scarf.sh/v0.10.1_models/tts_models--multilingual--multi-dataset--your_tts.zip
```
I get redirected to
```
https://huggingface.co/erogol/v0.10.1_models/resolve/main/tts_models--multilingual--multi-dataset--your_tts.zip
```
which says "Repository not found"
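Since the redirect serves an HTML error page instead of an archive, the downloaded file fails the zip check. As a quick local sanity check (a sketch, not part of the TTS API), the file's magic bytes can be verified before unzipping:

```python
import zipfile

def is_valid_zip(path):
    # Real zip archives start with the "PK" magic bytes; an HTML error
    # page ("Repository not found") fails this check immediately.
    with open(path, "rb") as f:
        magic = f.read(2)
    return magic == b"PK" and zipfile.is_zipfile(path)
```

This is only a diagnostic: it distinguishes "the server sent something that is not a zip" from a genuinely corrupted archive.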
### To Reproduce
Run the basic example for python
code:
``` python
from TTS.api import TTS
# Running a multi-speaker and multi-lingual model
# List available 🐸TTS models and choose the first one
model_name = TTS.list_models()[0]
# Init TTS
tts = TTS(model_name)
# Run TTS
# ❗ Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
# Text to speech with a numpy output
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
```
output:
```
zipfile.BadZipFile: File is not a zip file
```
### Expected behavior
Being able to download the pretrained models
### Logs
```shell
> Downloading model to /home/mb/.local/share/tts/tts_models--multilingual--multi-dataset--your_tts
0%| | 0.00/29.0 [00:00<?, ?iB/s] > Error: Bad zip file - https://coqui.gateway.scarf.sh/v0.10.1_models/tts_models--multilingual--multi-dataset--your_tts.zip
Traceback (most recent call last):
File "/home/mb/Documents/ai/TTS/TTS/utils/manage.py", line 434, in _download_zip_file
with zipfile.ZipFile(temp_zip_name) as z:
File "/usr/lib/python3.10/zipfile.py", line 1267, in __init__
self._RealGetContents()
File "/usr/lib/python3.10/zipfile.py", line 1334, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/mb/Documents/ai/tts/main.py", line 8, in <module>
tts = TTS(model_name)
File "/home/mb/Documents/ai/TTS/TTS/api.py", line 289, in __init__
self.load_tts_model_by_name(model_name, gpu)
File "/home/mb/Documents/ai/TTS/TTS/api.py", line 385, in load_tts_model_by_name
model_path, config_path, vocoder_path, vocoder_config_path, model_dir = self.download_model_by_name(
File "/home/mb/Documents/ai/TTS/TTS/api.py", line 348, in download_model_by_name
model_path, config_path, model_item = self.manager.download_model(model_name)
File "/home/mb/Documents/ai/TTS/TTS/utils/manage.py", line 303, in download_model
self._download_zip_file(model_item["github_rls_url"], output_path, self.progress_bar)
File "/home/mb/Documents/ai/TTS/TTS/utils/manage.py", line 439, in _download_zip_file
raise zipfile.BadZipFile # pylint: disable=raise-missing-from
zipfile.BadZipFile
100%|██████████| 29.0/29.0 [00:00<00:00, 185iB/s]
```
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cu117",
"TTS": "0.14.3",
"numpy": "1.23.5"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.6",
"version": "#202303130630~1685473338~22.04~995127e SMP PREEMPT_DYNAMIC Tue M"
}
}
```
### Additional context
_No response_
|
closed
|
2023-06-19T13:49:57Z
|
2025-02-28T16:41:31Z
|
https://github.com/coqui-ai/TTS/issues/2686
|
[
"bug"
] |
Maarten-buelens
| 23
|
xuebinqin/U-2-Net
|
computer-vision
| 357
|
Hi, when I run this code, I get strange errors in other detection tasks. The following is the warning where the error occurs:
|
/pytorch/aten/src/ATen/native/cuda/Loss.cu:115: operator(): block: [290,0,0], thread: [62,0,0] Assertion `input_val >= zero && input_val <= one` failed.
/pytorch/aten/src/ATen/native/cuda/Loss.cu:115: operator(): block: [290,0,0], thread: [63,0,0] Assertion `input_val >= zero && input_val <= one` failed.
[epoch: 3/100, batch: 11/ 0, ite: 517] train loss: 66.000237, tar: 11.000006/lr:0.000100
l0: 1.000000, l1: 1.000016, l2: 1.000000, l3: 1.000000, l4: 1.000000, l5: nan
[epoch: 3/100, batch: 12/ 0, ite: 518] train loss: nan, tar: 12.000006/lr:0.000100
Traceback (most recent call last):
File "train_multiple_loss.py", line 149, in <module>
loss2, loss = muti_bce_loss_fusion(d6, d1, d2, d3, d4, d5, labels_v)
File "train_multiple_loss.py", line 49, in muti_bce_loss_fusion
loss4 = bce_ssim_loss(d4,labels_v)
File "train_multiple_loss.py", line 37, in bce_ssim_loss
iou_out = iou_loss(pred,target)
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/root/autodl-tmp/pytorch_iou/__init__.py", line 28, in forward
return _iou(pred, target, self.size_average)
File "/root/autodl-tmp/pytorch_iou/__init__.py", line 13, in _iou
Ior1 = torch.sum(target[i,:,:,:]) + torch.sum(pred[i,:,:,:])-Iand1
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
I think there might be a vanishing gradient happening, but I don't have a solution
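The `Loss.cu` assertion fires because `BCELoss` requires predictions strictly within [0, 1]; once a prediction drifts outside that range (as the `l1: 1.000016` in the log suggests), the CUDA kernel asserts and the loss turns NaN. A hedged sketch of the usual mitigation, shown in NumPy for clarity (clipping before the log terms, or equivalently switching to `BCEWithLogitsLoss` on raw logits):

```python
import numpy as np

def safe_bce(pred, target, eps=1e-7):
    # Clip predictions into (0, 1) before the log terms, mirroring the
    # range that torch's assertion
    # `input_val >= zero && input_val <= one` enforces.
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))))
```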
|
open
|
2023-04-24T15:29:11Z
|
2024-04-24T06:03:58Z
|
https://github.com/xuebinqin/U-2-Net/issues/357
|
[] |
1dhuh
| 1
|
slackapi/python-slack-sdk
|
asyncio
| 1,153
|
How to get messages from a personal chat?
|
I want to be able to grab a chat log using the api
```python
conversation_id = "D0....."
result = client.conversations_history(channel=conversation_id)
```
When I send this request I always get `channel_not_found`, even though the bot has been added to the DM.
It also has the correct scopes.
<img width="391" alt="Screen Shot 2021-12-14 at 11 51 26 AM" src="https://user-images.githubusercontent.com/17206638/146042742-9c846457-479b-4d46-8422-8e290be8f474.png">
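For reference, once `channel_not_found` is resolved (bot actually in the DM, with the `im:history` scope), the history call is typically paginated via `cursor`. A minimal pagination sketch, written against a generic page-fetching callable so it is easy to test (the callable stands in for `client.conversations_history`):

```python
def fetch_all_messages(fetch_page, channel):
    # fetch_page(channel=..., cursor=..., limit=...) stands in for
    # WebClient.conversations_history; Slack links pages through
    # response_metadata.next_cursor until the cursor comes back empty.
    messages, cursor = [], None
    while True:
        resp = fetch_page(channel=channel, cursor=cursor, limit=200)
        messages.extend(resp["messages"])
        cursor = resp.get("response_metadata", {}).get("next_cursor") or None
        if cursor is None:
            break
    return messages
```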
|
closed
|
2021-12-14T16:52:08Z
|
2022-01-31T00:02:31Z
|
https://github.com/slackapi/python-slack-sdk/issues/1153
|
[
"question",
"needs info",
"auto-triage-stale"
] |
jeremiahlukus
| 3
|
huggingface/datasets
|
computer-vision
| 6,460
|
jsonlines files don't load with `load_dataset`
|
### Describe the bug
While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`.
### Steps to reproduce the bug
Code:
```
from datasets import load_dataset
dset = load_dataset('slotreck/pickle')
```
Traceback:
```
Downloading readme: 100%|██████████| 925/925 [00:00<00:00, 3.11MB/s]
Downloading and preparing dataset json/slotreck--pickle to /mnt/home/lotrecks/.cache/huggingface/datasets/slotreck___json/slotreck--pickle-0c311f36ed032b04/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96...
Downloading data: 100%|██████████| 589k/589k [00:00<00:00, 18.9MB/s]
Downloading data: 100%|██████████| 104k/104k [00:00<00:00, 4.61MB/s]
Downloading data: 100%|██████████| 170k/170k [00:00<00:00, 7.71MB/s]
Downloading data files: 100%|██████████| 3/3 [00:00<00:00, 3.77it/s]
Extracting data files: 100%|██████████| 3/3 [00:00<00:00, 523.92it/s]
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/mnt/home/lotrecks/.cache/huggingface/datasets/downloads/6ec07bb2f279c9377036af6948532513fa8f48244c672d2644a2d7018ee5c9cb' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables
dataset = json.load(f)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 296, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 3086)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 147, in _generate_tables
raise e
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
File "pyarrow/_json.pyx", line 259, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset
storage_options=storage_options,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare
**download_and_prepare_kwargs,
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
For the dataset to be loaded without error.
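The underlying pyarrow error ("Column(/ner/[]/[]/[]) changed from number to string in row 0") suggests the file is valid JSON Lines but with inconsistent value types across rows, which the Arrow JSON reader rejects. A small stdlib sketch (the field name is only an example) for locating which lines disagree with the first row:

```python
import json

def find_type_mismatches(path, field):
    # Report (line_number, type_name) pairs where `field`'s JSON type
    # differs from the type seen on the first line -- the pattern that
    # pyarrow rejects with "changed from number to string".
    expected, mismatches = None, []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            value = json.loads(line)[field]
            tname = type(value).__name__
            if expected is None:
                expected = tname
            elif tname != expected:
                mismatches.append((lineno, tname))
    return expected, mismatches
```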
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 8.0.0
- Pandas version: 1.3.5
|
closed
|
2023-11-29T21:20:11Z
|
2023-12-29T02:58:29Z
|
https://github.com/huggingface/datasets/issues/6460
|
[] |
serenalotreck
| 4
|
graphql-python/graphene
|
graphql
| 1,041
|
Problem with Implementing Custom Directives
|
Hi,
I've have read all the issues in your repo regarding this argument.
I've also checked your source code and the graphql-core source code.
And still I can't find a satisfying answer to my problem.
I have to implement custom directives like the ones implemented in this package (https://github.com/ekampf/graphene-custom-directives).
The question is: since the graphql-core apparently only supports the logic for the skip and include ones, and it seems quite hardcoded ( since it strictly supports only those two ), which is the correct way to implement custom directives ?
Is the way in the package I mentioned the only one possible to achieve my goal ?
Thanks
|
closed
|
2019-07-22T13:40:35Z
|
2019-09-20T14:46:54Z
|
https://github.com/graphql-python/graphene/issues/1041
|
[] |
frank2411
| 2
|
AirtestProject/Airtest
|
automation
| 342
|
Calling the local environment's third-party module bs4 keeps failing, while calling it from the Python IDE succeeds
|
**Describe the bug**
[DEBUG][Start debugging...]
D:\AirtestIDE_2019-01-15_py3_win64\AirtestIDE_2019-01-15_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 54dc383c0804 wait-for-device
D:\AirtestIDE_2019-01-15_py3_win64\AirtestIDE_2019-01-15_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 54dc383c0804 shell getprop ro.build.version.sdk
Traceback (most recent call last):
File "app\widgets\code_runner\run_manager.py", line 206, in my_exec
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'bs4'
Local Python environment is 3.7, and the python.exe path has already been filled in under Settings.
**To Reproduce**
Steps to reproduce the behavior:
1. Installed bs4 under Python 3.7 (importing bs4 from the Python IDE succeeds)
2. Set and saved the python.exe path in AirtestIDE settings, then restarted AirtestIDE.
3. Selected `from bs4 import BeautifulSoup` and ran only the selected code.
4. Got `ModuleNotFoundError: No module named 'bs4'`
**Expected behavior**
**Screenshots**


**python version:** `python3.7`
**airtest version:** `1.2.0`
> You can get airtest version via `pip freeze` command.
**Smartphone (please complete the following information):**
- Device: Xiaomi 5 Plus
- OS: [e.g. Android 7.0]
- more information if have
**Additional context**
Tried two different computers with the same environment; got the same error.
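A `ModuleNotFoundError` that only appears inside AirtestIDE usually means the script is being executed by a different interpreter than the one bs4 was installed into. A quick diagnostic sketch to run from inside the IDE (a hypothetical helper, not part of Airtest):

```python
import sys

def interpreter_info():
    # Shows which Python actually executes the script; if this is the
    # IDE's bundled interpreter rather than the configured python.exe,
    # packages like bs4 installed for the system Python will be missing.
    return {"executable": sys.executable, "search_paths": list(sys.path)}
```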
|
closed
|
2019-04-01T12:57:51Z
|
2019-04-01T14:05:41Z
|
https://github.com/AirtestProject/Airtest/issues/342
|
[] |
hoangwork
| 9
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 946
|
Getting stuck on "Loading the encoder" bit (not responding)
|
Hey, I just installed everything. I'm having trouble where whenever I try to input anything it loads up successfully and then gets caught on "Loading the encoder" and then says it's not responding. I have an RTX 3080. I've tried redownloading and using different pretrained.pt files for the encoder thing it's trying to load.
|
open
|
2021-12-10T10:32:13Z
|
2022-01-10T12:11:13Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/946
|
[] |
HirabayashiCallie
| 2
|
Asabeneh/30-Days-Of-Python
|
numpy
| 383
|
Python
|
closed
|
2023-04-18T01:52:32Z
|
2023-07-08T21:56:11Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/383
|
[] |
YeudielVai
| 1
|
|
KaiyangZhou/deep-person-reid
|
computer-vision
| 550
|
cmc curve graph
|
How do I get the cmc graph after training? Thank you for your answer.
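torchreid prints rank-k accuracies after evaluation; to draw the curve itself, the CMC values can be computed from a query-gallery distance matrix and then plotted with matplotlib. A hedged sketch (the function name and shapes are assumptions, not torchreid API):

```python
import numpy as np

def cmc_curve(distmat, query_ids, gallery_ids, topk=5):
    # distmat[i, j] = distance from query i to gallery j; the CMC value
    # at rank k is the fraction of queries whose first correct match
    # appears within the k nearest gallery entries.
    ranks = np.zeros(topk)
    gallery_ids = np.asarray(gallery_ids)
    for i, qid in enumerate(query_ids):
        order = np.argsort(distmat[i])
        matches = gallery_ids[order] == qid
        if matches.any():
            first_hit = int(np.argmax(matches))
            if first_hit < topk:
                ranks[first_hit:] += 1
    return ranks / len(query_ids)
```

The resulting array can be plotted directly, e.g. `plt.plot(range(1, topk + 1), cmc)`.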
|
open
|
2023-07-14T09:10:43Z
|
2023-07-14T09:10:43Z
|
https://github.com/KaiyangZhou/deep-person-reid/issues/550
|
[] |
xiaboAAA
| 0
|
ets-labs/python-dependency-injector
|
flask
| 28
|
Make Objects compatible with Python 2.6
|
Acceptance criterias:
- Tests on Python 2.6 passed.
- Badge with supported version added to README.md
|
closed
|
2015-03-17T12:59:09Z
|
2015-03-23T14:49:02Z
|
https://github.com/ets-labs/python-dependency-injector/issues/28
|
[
"enhancement"
] |
rmk135
| 0
|
sktime/pytorch-forecasting
|
pandas
| 1,354
|
Issue with TFT generating empty features in torch/nn/modules/linear.py
|
- PyTorch-Forecasting version: 1.0.0
- PyTorch version: 2.0.1+cu117
- Python version: 3.8.10
- Operating System: Ubuntu 20.04.6 LTS
### Expected behavior
Code would build a working TFT model from TimeSeriesDataSet data.
### Actual behavior
The code reported an error and crashed:
### Code to reproduce the problem
This stripped down code produces the error reliably either for a single target (no parameter) or for 2 targets (any parameter):
```python
import os
import sys
import logging
import pandas as pd
import numpy as np
import torch
import pytorch_lightning as pl
import pytorch_forecasting as pf
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping,LearningRateMonitor
from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_forecasting import Baseline,TimeSeriesDataSet
from pytorch_forecasting.models import TemporalFusionTransformer as TFT
from pytorch_forecasting.data import MultiNormalizer,TorchNormalizer
print("python version",sys.version)
print("pandas version",pd.__version__)
print("numpy version",np.__version__)
print("torch version",torch.__version__)
print("pytorch_forecasting version",pf.__version__)
print("pytorch_lightning version",pl.__version__)
os.environ["CUDA_VISIBLE_DEVICES"] = ""
logging.getLogger("pytorch_lightning").setLevel(logging.WARNING)
Train=pd.read_csv("ShortTrain2.csv")
Train[Train.select_dtypes(include=['float64']).columns] = Train[Train.select_dtypes(include=['float64']).columns].astype('float32')
print(Train.dtypes)
predictors=1
if len(sys.argv) > 1 :
predictors = 2
xtarget,xtarget_normalizer,xtime_varying_known_reals,xtime_varying_unknown_reals=None,None,None,None
if predictors == 1:
print("Using a single predictor v1_R")
# single predictor
xtarget=['v1_R']
xtarget_normalizer=MultiNormalizer([TorchNormalizer()])
xtime_varying_known_reals=['v1_Act']
xtime_varying_unknown_reals=['v1_C','v1_D','v1_V','v1_S','v1_H','v1_L','v1_R_lag10','v1_R_lag20']
else:
print("Using 2 predictors v1_R and v2_R")
# 2 predictors
xtarget=['v1_R','v2_R']
xtarget_normalizer=MultiNormalizer([TorchNormalizer(),TorchNormalizer()])
xtime_varying_known_reals=['v1_Act','v2_Act']
xtime_varying_unknown_reals=['v1_C','v1_D','v1_V','v1_S','v1_H','v1_L','v1_R_lag10','v1_R_lag20','v2_C','v2_D','v2_V','v2_S','v2_H','v2_L','v2_R_lag10','v2_R_lag20']
print(xtarget)
print(xtarget_normalizer)
print(xtime_varying_known_reals)
print(xtime_varying_unknown_reals)
training = TimeSeriesDataSet(
data=Train,
add_encoder_length=True,
add_relative_time_idx=True,
add_target_scales=True,
allow_missing_timesteps=True,
group_ids=['Group'],
max_encoder_length=10,
max_prediction_length=20,
min_encoder_length=5,
min_prediction_length=1,
# time_idx='unique',
time_idx='time_idx',
static_categoricals=[], # Categorical features that do not change over time - list
static_reals=[], # Continuous features that do not change over time - list
time_varying_known_categoricals=[], # Known in the future - list
time_varying_unknown_categoricals=[],
target=xtarget,
target_normalizer=xtarget_normalizer,
time_varying_known_reals=xtime_varying_known_reals,
time_varying_unknown_reals=xtime_varying_unknown_reals
)
validation = TimeSeriesDataSet.from_dataset(training, Train, predict=False, stop_randomization=True)
print("Passed validation")
train_dataloader = training.to_dataloader(train=True, batch_size=16, num_workers=32,shuffle=False)
print("Created dataloaders - show first batch")
#warnings.filterwarnings("ignore")
print("Start train_dataloader comparison with baseline")
device = torch.device("cpu")
print("actuals calculated")
Baseline().to(device)
baseline_predictions = Baseline().predict(train_dataloader)
baseline_predictions = torch.cat([b.clone().detach() for b in baseline_predictions])
print("baseline_predictions calculated")
val_loss=1e6
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min")
lr_logger = LearningRateMonitor() # log the learning rate
logger = TensorBoardLogger("lightning_logs") # logging results to a tensorboard
trainer = Trainer(
callbacks=[lr_logger, early_stop_callback],
enable_model_summary=True,
gradient_clip_val=0.1,
limit_train_batches=10, # comment in for training, running valiation every 30 batches
logger=logger
)
model = TFT(
training,
attention_head_size=1,
dropout=0.1,
hidden_continuous_size=8,
learning_rate=0.03, # 1
log_interval=10,
lstm_layers=1, # could be interesting
output_size=1, # number of predictors
reduce_on_plateau_patience=4,
)
print(f"\nNumber of parameters in network: {model.size()/1e3:.1f}k\n")
```
Paste the command(s) you ran and the output. Including a link to a colab notebook will speed up issue resolution.
python3 filename.py (no parameter prepares and tries to build a univariate model; any parameter switches to the 2-variate mode):
[ShortTrain2.txt](https://github.com/jdb78/pytorch-forecasting/files/12270065/ShortTrain2.txt)
If there was a crash, please include the traceback here.
Traceback (most recent call last):
File "ShortTrain2.py", line 94, in <module>
model = TFT(
File "/home/jl/.local/lib/python3.8/site-packages/pytorch_forecasting/models/temporal_fusion_transformer/__init__.py", line 248, in __init__
self.static_context_variable_selection = GatedResidualNetwork(
File "/home/jl/.local/lib/python3.8/site-packages/pytorch_forecasting/models/temporal_fusion_transformer/sub_modules.py", line 204, in __init__
self.fc1 = nn.Linear(self.input_size, self.hidden_size)
File "/home/jl/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 97, in __init__
self.weight = Parameter(torch.empty((out_features, in_features), **factory_kwargs))
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:
* (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
The code used to initialize the TimeSeriesDataSet and model should be also included.
TSDS preparation is included in the code, which just reads a short example from a file called ShortTrain2.csv (640 lines long):
[ShortTrain2.csv](https://github.com/jdb78/pytorch-forecasting/files/12230525/ShortTrain2.csv)
|
closed
|
2023-08-01T14:38:59Z
|
2023-08-31T19:16:43Z
|
https://github.com/sktime/pytorch-forecasting/issues/1354
|
[] |
Loggy48
| 2
|
miguelgrinberg/Flask-Migrate
|
flask
| 208
|
changing log level in configuration callback
|
Following the docs, I am trying to alter the log level in the configuration to turn off logging:
```python
@migrate.configure
def configure_alembic(config):
config.set_section('logger_alembic', 'level', 'WARN')
return config
```
I've confirmed that making this same change in `alembic.ini` turns off logging successfully - but I don't want to change the "default" level as pulled from that file.
Unfortunately, this does not seem to work. I have tried outputting the level from the logger (in various locations) and every time it reports it as `WARN` but still outputs logs at `INFO` level.
Any help or suggestions would be gratefully received.
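One possible explanation (an assumption, not confirmed from the Flask-Migrate source): the logger levels are applied from `alembic.ini` through `logging.config.fileConfig`, which reads the file itself, so in-memory changes made in the configure callback never reach the logging setup. If that is the case, adjusting the runtime logger directly should work:

```python
import logging

# Hypothetical workaround: raise the alembic logger's threshold at runtime
# instead of editing the [logger_alembic] section of alembic.ini.
logging.getLogger("alembic").setLevel(logging.WARNING)
```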
|
closed
|
2018-06-08T12:33:11Z
|
2018-06-13T10:05:16Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/208
|
[
"question"
] |
bugsduggan
| 2
|
scikit-tda/kepler-mapper
|
data-visualization
| 251
|
Overlapping bins in the HTML visualization.
|
**Description**
There is overlap in the member distribution histogram when the number of visualized bins is greater than 10.
**To Reproduce**
```python
import kmapper as km
from sklearn import datasets
data, labels = datasets.make_circles(n_samples=5000, noise=0.03, factor=0.3)
mapper = km.KeplerMapper(verbose=1)
projected_data = mapper.fit_transform(data, projection=[0,1]) # X-Y axis
graph = mapper.map(projected_data, data, cover=km.Cover(n_cubes=10))
NBINS = 20
mapper.visualize(graph, path_html="make_circles_keplermapper_output_5.html",
nbins = NBINS,
title="make_circles(n_samples=5000, noise=0.03, factor=0.3)")
```
With *kepler-mapper* version `2.0.1`.
**Expected behavior**
It would be great if all the histogram bars would adjust their size to accommodate all the desired bins.
**Screenshots**
The above code generates the following:

But I have some internal code that generates this, where the overlap is more evident:

**Additional context**
I was digging around and found that the code that generates the histogram is in the function `set_histogram`
https://github.com/scikit-tda/kepler-mapper/blob/ece5d47f7d65654c588431dd2275197502740066/kmapper/static/kmapper.js#L187-L202
I was thinking it would be possible to modify the width of the bars (just like the height is), however I can't find a clean way to get the parent's width.
I am happy to open a PR with my current, not-so-great solution, but I was wondering if you had any guidance on how to do that.
|
open
|
2023-11-27T10:22:44Z
|
2023-11-27T10:22:44Z
|
https://github.com/scikit-tda/kepler-mapper/issues/251
|
[] |
fferegrino
| 0
|
keras-team/keras
|
python
| 20,402
|
`EpochIterator` initialized with a generator cannot be reset
|
`EpochIterator` initialized with a generator cannot be reset, consequently the following bug might occur: https://github.com/keras-team/keras/issues/20394
Another consequence is running prediction on an unbuilt model discards samples:
```python
import os
import numpy as np
os.environ["KERAS_BACKEND"] = "jax"
import keras
set_size = 4
def generator():
for _ in range(set_size):
yield np.ones((1, 4)), np.ones((1, 1))
model = keras.models.Sequential([keras.layers.Dense(1, activation="relu")])
model.compile(loss="mse", optimizer="adam")
y = model.predict(generator())
print(f"There should be {set_size} predictions, there are {len(y)}.")
```
```
There should be 4 predictions, there are 2.
```
|
closed
|
2024-10-24T11:31:37Z
|
2024-10-26T17:33:12Z
|
https://github.com/keras-team/keras/issues/20402
|
[
"stat:awaiting response from contributor",
"type:Bug"
] |
nicolaspi
| 2
|
browser-use/browser-use
|
python
| 890
|
Add Mouse Interaction Functions for Full Web Automation
|
### Problem Description
I would like to request support for mouse actions (such as click, scroll up/down, drag, and hover) in browser-use, in addition to the existing keyboard automation.
**Use Case**
I need to access a remote server through a web portal, and browser-use can successfully recognize and interact with standard webpage elements. However, the portal streams a remote desktop interface inside a <canvas> element, which browser-use does not seem to recognize. This limitation prevents me from interacting with the remote interface as I would with a normal web page.
**Problem**
Currently, browser-use appears to only support keyboard interactions and lacks mouse control. Without mouse support, it's impossible to interact with elements inside streamed remote sessions or complex web applications that rely on mouse-based interactions.
**Impact**
Adding mouse control would enable seamless automation for remote desktops, cloud-based IDEs, and other streamed environments where UI elements are rendered inside a <canvas>. This would make browser-use much more versatile and applicable to a wider range of use cases.
Thank you for considering this feature request! Looking forward to any updates on this.
### Proposed Solution
- Add support for mouse functions such as:
  - Left-click, right-click, double-click
  - Scrolling up/down
  - Drag-and-drop
  - Mouse movement and hovering
This enhancement would significantly improve browser-use for automating interactions with web applications that require both keyboard and mouse input.
### Alternative Solutions
_No response_
### Additional Context
_No response_
|
open
|
2025-02-27T09:10:28Z
|
2025-03-19T12:00:30Z
|
https://github.com/browser-use/browser-use/issues/890
|
[
"enhancement"
] |
JamesChen9415
| 3
|
ultralytics/ultralytics
|
computer-vision
| 18,965
|
benchmark error
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
The script I wrote is
> from ultralytics.utils.benchmarks import benchmark
> benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, format="onnx")
(the same as https://docs.ultralytics.com/zh/modes/benchmark/#usage-examples), but I got the error:
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 0.7s, saved as 'yolo11n.onnx' (10.2 MB)
Export complete (0.9s)
Results saved to /data/code/yolo11/ultralytics
Predict: yolo predict task=detect model=yolo11n.onnx imgsz=640
Validate: yolo val task=detect model=yolo11n.onnx imgsz=640 data=/usr/src/ultralytics/ultralytics/cfg/datasets/coco.yaml
Visualize: https://netron.app
ERROR ❌ Benchmark failure for ONNX: model='/data/code/yolo11/ultralytics/ultralytics/cfg/datasets/coco.yaml' is not a supported model format. Ultralytics supports: ('PyTorch', 'TorchScript', 'ONNX', 'OpenVINO', 'TensorRT', 'CoreML', 'TensorFlow SavedModel', 'TensorFlow GraphDef', 'TensorFlow Lite', 'TensorFlow Edge TPU', 'TensorFlow.js', 'PaddlePaddle', 'MNN', 'NCNN', 'IMX', 'RKNN')
See https://docs.ultralytics.com/modes/predict for help.
Setup complete ✅ (8 CPUs, 61.4 GB RAM, 461.1/499.8 GB disk)
Traceback (most recent call last):
File "/data/code/yolo11/ultralytics/onnx_run.py", line 7, in <module>
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, format="onnx")
File "/data/code/yolo11/ultralytics/ultralytics/utils/benchmarks.py", line 183, in benchmark
df = pd.DataFrame(y, columns=["Format", "Statusโ", "Size (MB)", key, "Inference time (ms/im)", "FPS"])
UnboundLocalError: local variable 'key' referenced before assignment
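The final `UnboundLocalError` looks like a secondary failure: `key` is presumably only assigned inside benchmark's per-format loop, so when every format errors out early (here because of the dataset path mix-up), the summary line reads a name that was never bound. A minimal sketch of that pattern (names are illustrative, not the actual benchmark code):

```python
def summarize(results):
    # `key` is bound only when the loop body runs at least once; with an
    # empty results list the return line raises UnboundLocalError, just
    # as benchmark() does when every export/validation fails.
    rows = []
    for fmt, metric in results:
        key = "metric"
        rows.append((fmt, metric))
    return key, rows
```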
### Environment
Ultralytics 8.3.70 🚀 Python-3.10.0 torch-2.6.0+cu124 CUDA:0 (NVIDIA L20, 45589MiB)
Setup complete ✅ (8 CPUs, 61.4 GB RAM, 461.2/499.8 GB disk)
OS Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.0
Install pip
RAM 61.43 GB
Disk 461.2/499.8 GB
CPU Intel Xeon Gold 6462C
CPU count 8
GPU NVIDIA L20, 45589MiB
GPU count 1
CUDA 12.4
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
```python
from ultralytics.utils.benchmarks import benchmark
benchmark(model="yolo11n.pt", data="coco8.yaml", imgsz=640, format="onnx")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2025-02-01T10:20:21Z
|
2025-02-10T07:03:48Z
|
https://github.com/ultralytics/ultralytics/issues/18965
|
[
"non-reproducible",
"exports"
] |
AlexiAlp
| 14
|
lukas-blecher/LaTeX-OCR
|
pytorch
| 281
|
Train the model with my own data
|
Hi, First of all thanks for the great work!
I want to train your model with my own data, but it doesn't work and I don't know why.
Training with your data works perfectly, and I have formatted my data like yours (CROHME and PDF).
Generating the dataset also works without problems, but when I start training I get an error telling me the tensors have different sizes. Do you have any tips for solving this?
<img width="1200" alt="Bildschirmfoto 2023-06-12 um 13 08 07" src="https://github.com/lukas-blecher/LaTeX-OCR/assets/93684165/95b3fb62-9706-4349-93a5-d1cd5018a643">
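"Tensors have different sizes" at training time is commonly a batching problem: variable-length label sequences must be padded to a common length before they can be stacked. A hedged NumPy sketch of that idea (your tokenizer and collate details will differ):

```python
import numpy as np

def pad_batch(seqs, pad_value=0):
    # Pad every sequence in the batch to the longest one so they can be
    # stacked into a single rectangular array/tensor.
    max_len = max(len(s) for s in seqs)
    return np.stack(
        [np.pad(s, (0, max_len - len(s)), constant_values=pad_value) for s in seqs]
    )
```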
|
closed
|
2023-06-12T11:08:32Z
|
2023-06-15T08:59:35Z
|
https://github.com/lukas-blecher/LaTeX-OCR/issues/281
|
[] |
ficht74
| 10
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 516
|
ERROR: No matching distribution found for tensorflow==1.15
|
I'm trying to install the prerequisites but am unable to install tensorflow version 1.15. Does anyone have a workaround? Using python3-pip.
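tensorflow 1.15 only published wheels up to Python 3.7, so on a newer interpreter pip reports "No matching distribution found"; the usual workaround is a separate Python 3.7 environment (e.g. via conda). A small sketch for checking whether the current interpreter can receive a 1.15 wheel at all:

```python
import sys

def tf115_wheel_available():
    # TF 1.x wheels stop at Python 3.7; on 3.8+ pip cannot find a
    # matching distribution for tensorflow==1.15.
    return (3, 5) <= sys.version_info[:2] <= (3, 7)
```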
|
closed
|
2020-09-01T04:13:46Z
|
2020-09-04T05:12:03Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/516
|
[] |
therealjr
| 3
|
aeon-toolkit/aeon
|
scikit-learn
| 1,970
|
[BUG] visualize_best_shapelets_one_class not handling RTSD correctly
|
### Describe the bug
For RDST classifiers this method plots the same shapelet three times, at different levels of importance.
RDST transforms the time series space into an array of shape (n_cases, 3*n_shapelets), where three features relate to one shapelet. The visualize_best_shapelets_one_class method ranks the importance of each feature according to the weights from the classifier; in the case of ST this works because each feature is one shapelet. The function works by calling _get_shp_importance, which specially handles RDST by grouping the three features under one index per shapelet; however, this index still appears three times in the returned list. Instead, the position of this index should be determined by either:
- averaging the positions of the shapelet's three features, or
- returning only the best position out of the three features.
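The two aggregation options above can be sketched in plain Python: weights of length 3*n_shapelets are grouped per shapelet and reduced by mean or max (a sketch under assumed feature layout, not aeon's actual implementation):

```python
# Sketch: collapse 3 per-shapelet features into one importance score each.
# Not aeon's actual code; here we assume the 3 features of a shapelet are
# stored as consecutive triples in the weight vector.
def shapelet_importance(weights, reduce="mean"):
    """weights has length 3 * n_shapelets; returns one score per shapelet."""
    n_shapelets = len(weights) // 3
    scores = []
    for i in range(n_shapelets):
        triple = [abs(w) for w in weights[3 * i:3 * i + 3]]
        scores.append(sum(triple) / 3 if reduce == "mean" else max(triple))
    return scores  # one score per shapelet, so each is plotted only once

print(shapelet_importance([1, 2, 3, -4, 0, 2], reduce="max"))  # [3, 4]
```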
### Steps/Code to reproduce the bug
_No response_
### Expected results
Each shapelet can only be plotted once.
### Actual results
Each shapelet is plotted three times according to the importance of each of its three features.
### Versions
_No response_
|
closed
|
2024-08-14T06:11:14Z
|
2024-08-17T18:44:00Z
|
https://github.com/aeon-toolkit/aeon/issues/1970
|
[
"bug"
] |
IRKnyazev
| 2
|
dmlc/gluon-cv
|
computer-vision
| 1,651
|
Fine-tuning SOTA video models on your own dataset - Sign Language
|
Sir, I am trying to implement a sign classifier using this API as part of my final year college project.
Data set: http://facundoq.github.io/datasets/lsa64/
I followed the Fine-tuning SOTA video models on your own dataset tutorial and fine-tuned
1. i3d_resnet50_v1_custom
2. slowfast_4x16_resnet50_custom
The plotted graph shows almost 90% accuracy, but when I run inference I get misclassifications even on the videos I used for training.
So I am stuck; any guidance you can give would be very helpful.
Thank you
**My data loader for I3D:**
```
num_gpus = 1
ctx = [mx.gpu(i) for i in range(num_gpus)]
transform_train = video.VideoGroupTrainTransform(size=(224, 224), scale_ratios=[1.0, 0.8], mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
per_device_batch_size = 5
num_workers = 0
batch_size = per_device_batch_size * num_gpus
train_dataset = VideoClsCustom(root=os.path.expanduser('DataSet/train/'),
setting=os.path.expanduser('DataSet/train/train.txt'),
train=True,
new_length=64,
new_step=2,
video_loader=True,
use_decord=True,
transform=transform_train)
print('Load %d training samples.' % len(train_dataset))
train_data = gluon.data.DataLoader(train_dataset, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
```
**Inference running:**
```
from gluoncv.utils.filesystem import try_import_decord
decord = try_import_decord()
video_fname = 'DataSet/test/006_001_001.mp4'
vr = decord.VideoReader(video_fname)
frame_id_list = range(0, 64, 2)
video_data = vr.get_batch(frame_id_list).asnumpy()
clip_input = [video_data[vid, :, :, :] for vid, _ in enumerate(frame_id_list)]
transform_fn = video.VideoGroupValTransform(size=(224, 224), mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
clip_input = transform_fn(clip_input)
clip_input = np.stack(clip_input, axis=0)
clip_input = clip_input.reshape((-1,) + (32, 3, 224, 224))
clip_input = np.transpose(clip_input, (0, 2, 1, 3, 4))
print('Video data is read and preprocessed.')
# Running the prediction
pred = net(nd.array(clip_input, ctx = mx.gpu(0)))
topK = 5
ind = nd.topk(pred, k=topK)[0].astype('int')
print('The input video clip is classified to be')
for i in range(topK):
print('\t[%s], with probability %.3f.'%
(CLASS_MAP[ind[i].asscalar()], nd.softmax(pred)[0][ind[i]].asscalar()))
```
|
closed
|
2021-04-20T18:39:30Z
|
2021-05-05T19:18:51Z
|
https://github.com/dmlc/gluon-cv/issues/1651
|
[] |
aslam-ep
| 1
|
abhiTronix/vidgear
|
dash
| 426
|
[Question]: How do i directly get the encoded frames???
|
### Issue guidelines
- [X] I've read the [Issue Guidelines](https://abhitronix.github.io/vidgear/latest/contribution/issue/#submitting-an-issue-guidelines) and wholeheartedly agree.
### Issue Checklist
- [X] I have searched open or closed [issues](https://github.com/abhiTronix/vidgear/issues) for my problem and found nothing related or helpful.
- [X] I have read the [Documentation](https://abhitronix.github.io/vidgear/latest) and found nothing related to my problem.
- [X] I have gone through the [Bonus Examples](https://abhitronix.github.io/vidgear/latest/help/get_help/#bonus-examples) and [FAQs](https://abhitronix.github.io/vidgear/latest/help/get_help/#frequently-asked-questions) and found nothing related or helpful.
### Describe your Question
I am able to use vidgear to read frames from a USB camera and use FFmpeg to encode them to the H.264 format. I don't want to write them to an mp4 file or similar; I just want the encoded frames, as they are produced, in a local variable, which I will use to stream directly.
### Terminal log output(Optional)
_No response_
### Python Code(Optional)
_No response_
### VidGear Version
0.3.3
### Python version
3.8
### Operating System version
windows
### Any other Relevant Information?
_No response_
|
closed
|
2024-11-12T06:09:49Z
|
2024-11-13T07:39:30Z
|
https://github.com/abhiTronix/vidgear/issues/426
|
[
"QUESTION :question:",
"SOLVED :checkered_flag:"
] |
Praveenstein
| 1
|
gevent/gevent
|
asyncio
| 1,936
|
gevent 1.5.0 specifies a cython file as a prerequisite, causing cython to run only when it was installed after 2020
|
When installing [`spack install py-gevent`](https://github.com/spack/spack) it resolves to
```
py-gevent@1.5.0
py-cffi@1.15.1
libffi@3.4.4
py-pycparser@2.21
py-cython@0.29.33
py-greenlet@1.1.3
py-pip@23.0
py-setuptools@63.0.0
py-wheel@0.37.1
python@3.10.8
...
```
and attempts a build using `pypi` tarballs.
### Description:
`py-gevent` re-cythonizes files because it has a prereq specified on cython files:
```
Compiling src/gevent/resolver/cares.pyx because it depends on /opt/linux-ubuntu20.04-x86_64/gcc-11.1.0/py-cython-0.29.32-wistmswtv7bm55wpadctrr3eqb7jnona/lib/python3.10/site-packages/Cython/Includes/libc/string.pxd.
```
So, depending on what date you've installed cython on your system, it runs cython or not.
It's like writing a makefile that specifies a system library as a prereq by absolute path...
Also, is there a way to tell gevent setup that it should always run cython no matter what?
|
closed
|
2023-03-10T13:38:51Z
|
2023-03-10T16:31:47Z
|
https://github.com/gevent/gevent/issues/1936
|
[] |
haampie
| 5
|
graphql-python/gql
|
graphql
| 179
|
Using a single session with FastAPI
|
I have a Hasura GraphQL engine as a server with a few small services acting as webhooks for business logic and handling database events. One of these services has a REST API and needs to retrieve data from the GraphQL engine or run mutations.
Due to performance concerns I have decided to rewrite one of the services with FastAPI in order to leverage async.
I am quite new to async in Python in general which is why I took my time to go through gql documentation.
It is my understanding that it is ideal to keep a single async client session throughout the life span of the service. It is also my understanding that the only way of getting an async session is using the `async with client as session:` syntax.
That poses the question of how can I wrap the whole app inside of the `async with`. Or perhaps I am missing something and there is a better way of doing this.
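One common pattern is to enter the `async with` once at application startup and exit it at shutdown, which `contextlib.AsyncExitStack` makes explicit. A stdlib-only sketch with a stand-in client (the class and names are hypothetical, not gql's API; a web framework would call the startup/shutdown parts from its lifespan hooks):

```python
import asyncio
from contextlib import AsyncExitStack

class FakeClient:
    """Stand-in for an async client that yields a session via `async with`."""
    async def __aenter__(self):
        self.session = "session"  # imagine an open websocket/HTTP session here
        return self.session
    async def __aexit__(self, *exc):
        self.session = None

async def main():
    stack = AsyncExitStack()
    # startup: enter the client's context once and keep the session around
    session = await stack.enter_async_context(FakeClient())
    try:
        # ... the whole app lifespan would use `session` here ...
        return session
    finally:
        # shutdown: unwinds __aexit__ for everything entered on the stack
        await stack.aclose()

print(asyncio.run(main()))
```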
|
closed
|
2020-12-07T20:32:48Z
|
2022-07-03T13:54:29Z
|
https://github.com/graphql-python/gql/issues/179
|
[
"type: feature"
] |
antonkravc
| 11
|
K3D-tools/K3D-jupyter
|
jupyter
| 426
|
model_matrix not updated when using manipulators
|
* K3D version: 2.15.2
* Python version: 3.8.13
* Operating System: Ubuntu 20.04 (backend), Windows 10 Pro (frontend)
### Description
When manipulating an object in plot "manipulate" mode the object's model matrix doesn't get updated and also "observe" callback is not fired.
I found that it's probably because of this commented out code: https://github.com/K3D-tools/K3D-jupyter/blob/main/js/src/providers/threejs/initializers/Manipulate.js#L18
and could probably simply create a PR reversing this, but I guess there is some broader context of this change I don't know :) (https://github.com/K3D-tools/K3D-jupyter/commit/46d508b39255fc16e2ed7704fcadb5dae6d61952#diff-6863658dc28ee81ff40444d18191cad7547592119a5f4a302ecee062b6bb428f).
### Simple example
```python
import k3d
import k3d.platonic
plot = k3d.plot()
cube = k3d.platonic.Cube().mesh
plot += cube
plot
```
```python
plot.mode = "manipulate"
plot.manipulate_mode = "rotate"
cube.observe(print)
```
When manipulating the object the callback is not fired and when checking the model_matrix directly:
```python
cube.model_matrix
```
it's still an identity matrix:
```
array([[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]], dtype=float32)
```
|
open
|
2023-06-07T07:47:37Z
|
2023-08-16T22:36:44Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/426
|
[] |
wmalara
| 0
|
dynaconf/dynaconf
|
flask
| 331
|
[RFC] Add support for Pydantic BaseSettings
|
Pydantic has a schema class BaseSettings that can be integrated with Dynaconf validators.
|
closed
|
2020-04-29T14:24:51Z
|
2020-09-12T04:20:13Z
|
https://github.com/dynaconf/dynaconf/issues/331
|
[
"Not a Bug",
"RFC"
] |
rochacbruno
| 2
|
Textualize/rich
|
python
| 3,647
|
[BUG] legacy_windows is True when is_terminal is False
|
- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
The console object's `legacy_windows` is True when `is_terminal` is False. I think this is a bug, because the Windows console API cannot be called successfully in a non-terminal environment.
```python
# foo.py
from rich.console import Console
console = Console()
print(console.legacy_windows)
```
```shell
> python foo.py
False
> python foo.py | echo
True
```
**Platform**
I use terminal in VS Code on Windows 10.
|
open
|
2025-03-02T03:31:34Z
|
2025-03-02T03:31:50Z
|
https://github.com/Textualize/rich/issues/3647
|
[
"Needs triage"
] |
xymy
| 1
|
pytorch/pytorch
|
python
| 149,621
|
`torch.cuda.manual_seed` ignored
|
### ๐ Describe the bug
When using torch.compile, torch.cuda.manual_seed/torch.cuda.manual_seed_all/torch.cuda.random.manual_seed do not seem to properly enforce reproducibility across multiple calls to a compiled function.
# torch.cuda.manual_seed
Code:
```python
import torch
import torch._inductor.config
torch._inductor.config.fallback_random = True
@torch.compile
def foo():
# Set the GPU seed
torch.cuda.manual_seed(3)
# Create a random tensor on the GPU.
# If a CUDA device is available, the tensor will be created on CUDA.
return torch.rand(4, device='cuda' if torch.cuda.is_available() else 'cpu')
# Call the compiled function twice
print("cuda.is_available:", torch.cuda.is_available())
result1 = foo()
result2 = foo()
print(result1)
print(result2)
```
Output:
```
cuda.is_available: True
tensor([0.2501, 0.4582, 0.8599, 0.0313], device='cuda:0')
tensor([0.3795, 0.0543, 0.4973, 0.4942], device='cuda:0')
```
# `torch.cuda.manual_seed_all`
Code:
```
import torch
import torch._inductor.config
torch._inductor.config.fallback_random = True
@torch.compile
def foo():
# Reset CUDA seeds
torch.cuda.manual_seed_all(3)
# Generate a random tensor on the GPU
return torch.rand(4, device='cuda')
# Call the compiled function twice
result1 = foo()
result2 = foo()
print(result1)
print(result2)
```
Output:
```
tensor([0.0901, 0.8324, 0.4412, 0.2539], device='cuda:0')
tensor([0.5561, 0.6098, 0.8558, 0.1980], device='cuda:0')
```
# torch.cuda.random.manual_seed
Code
```
import torch
import torch._inductor.config
torch._inductor.config.fallback_random = True
# Ensure a CUDA device is available.
if not torch.cuda.is_available():
print("CUDA is not available on this system.")
@torch.compile
def foo():
# Reset GPU random seed
torch.cuda.random.manual_seed(3)
# Generate a random tensor on GPU
return torch.rand(4, device='cuda')
# Call the compiled function twice
result1 = foo()
result2 = foo()
print(result1)
print(result2)
```
Output:
```
tensor([8.1055e-01, 4.8494e-01, 8.3937e-01, 6.7405e-04], device='cuda:0')
tensor([0.4365, 0.5669, 0.7746, 0.8702], device='cuda:0')
```
### Versions
torch 2.6.0
cc @pbelevich @chauhang @penguinwu
|
open
|
2025-03-20T12:42:27Z
|
2025-03-24T15:46:40Z
|
https://github.com/pytorch/pytorch/issues/149621
|
[
"triaged",
"module: random",
"oncall: pt2"
] |
vwrewsge
| 3
|
horovod/horovod
|
tensorflow
| 3,088
|
Add terminate_on_nan flag to spark lightning estimator
|
**Is your feature request related to a problem? Please describe.**
PyTorch Lightning's Trainer API has a parameter to control the behavior when NaN outputs are seen; this needs to be added to the estimator's API so Horovod users can control that behavior.
**Describe the solution you'd like**
Add terminate_on_nan to estimator and pass it down to trainer constructor.
|
closed
|
2021-08-06T17:47:05Z
|
2021-08-09T18:21:13Z
|
https://github.com/horovod/horovod/issues/3088
|
[
"enhancement"
] |
Tixxx
| 1
|
litestar-org/litestar
|
pydantic
| 3,677
|
Enhancement: Allow streaming Templates to serve very large content
|
### Summary
The render() method of the Template class generates the entire content of the template at once, which can be inadequate for large content.
For instance, with jinja2 one can use generators and call the generate() method to yield the contents in a streaming fashion.
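Assuming jinja2 is available, the streaming behaviour mentioned above looks like this: `generate()` yields the rendered output in chunks instead of building one string, which is exactly what a streaming response would iterate over.

```python
from jinja2 import Template

# render() builds the whole string at once; generate() yields it in chunks.
tmpl = Template("{% for row in rows %}{{ row }}\n{% endfor %}")

chunks = list(tmpl.generate(rows=["a", "b", "c"]))
assert "".join(chunks) == tmpl.render(rows=["a", "b", "c"])

for chunk in tmpl.generate(rows=range(3)):
    pass  # a streaming Template response could yield each chunk here
```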
### Basic Example
_No response_
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_
|
open
|
2024-08-20T13:07:38Z
|
2025-03-20T15:54:52Z
|
https://github.com/litestar-org/litestar/issues/3677
|
[
"Enhancement",
"Templating",
"Responses"
] |
ptallada
| 0
|
slackapi/bolt-python
|
fastapi
| 600
|
trigger_exchanged error using AWS Lambda + Workflow Steps
|
Hi,
I created a workflow step at the company where I work to automate the approval of requests without having to open a ticket. Basically I have a workflow where the user fills in some information and after submitting the form, a message arrives for the manager with a button to approve or deny the request.
The issue is that I need to validate the information entered in the workflow, and for that I need to connect to other APIs, etc.; this can take up to 10 seconds before the data is validated and the request is sent.
Since I can't respond within Slack's 3-second window, I believe Slack retries the request, and as a result duplicate messages are sent.
These are the AWS logs where requests are repeated and the error occurs:

If this is really the problem and I can solve it using Lazy listeners, I would like to understand how to use them in workflow steps.
This is the part of my project where I do the validations in the execute method of the workflow:
```
St = SlackTools()
def execute(step, complete, fail):
inputs = step["inputs"]
sendername = inputs["senderName"]["value"].strip()
sendermail = inputs["senderEmail"]["value"].strip()
sendermention = inputs["senderMention"]["value"].strip()
senderid = sendermention[2:len(sendermention)-1]
awsname = inputs["awsName"]["value"].strip()
awsrole = inputs["awsRole"]["value"].strip()
squad = inputs["squad"]["value"].strip()
reason = inputs["reason"]["value"].strip()
managermail = inputs["managerMail"]["value"].strip()
outputs = {
"senderName": sendername,
"senderEmail": sendermail,
"senderMention": sendermention,
"awsName": awsname,
"awsRole": awsrole,
"squad": squad,
"reason": reason,
"managerMail": managermail
}
checkinputname = Ot.check_inputs(inputs, INPUT_AWS_NAME)
checkinputmanager = Ot.check_inputs(inputs, INPUT_MANAGER_MAIL)
# Check inputs
if not checkinputname or not checkinputmanager:
if not checkinputname:
errmsg = {"message": f"Acesso {awsname} nรฃo encontrado! Olhar no Pin fixado do canal o nome de cada AWS."}
blocklayout = FAIL_MSG_AWS_NAME
else:
errmsg = {"message": f"Gestor {managermail} nรฃo encontrado no ambiente!"}
blocklayout = FAIL_MSG_MANAGER_MAIL
fail(error=errmsg)
blockfail = get_workflow_block_fail(sendermention, managermail, awsname, awsrole, blocklayout)
send_block_fail_message(blockfail, senderid) # Send fail block message to user
else:
try:
manager = St.app().client.users_lookupByEmail(email=managermail)
managerid = manager["user"]["id"]
blocksuccess = get_workflow_block_success(sendermention, awsname)
send_request_aws_message(sendername, sendermail, sendermention,
awsname, awsrole, squad, reason, managermail, managerid) # Send block message to manager
except Exception as e:
if "users_not_found" in str(e):
error = {"message": f"Gestor {managermail} nรฃo encontrado no ambiente!"}
fail(error=error)
blocklayout = FAIL_MSG_MANAGER_MAIL
blockfail = get_workflow_block_fail(sendermention, managermail, awsname, awsrole, blocklayout)
send_block_fail_message(blockfail, senderid) # Send fail block message to user
else:
error = {"message": "Just testing step failure! {}".format(e)}
fail(error=error)
St.webhook_directly(WEBHOOK_LOGS_URI_FAIL, e, FAIL)
else:
St.app().client.chat_postMessage(text="Envio com sucesso de solicitaรงรฃo de acesso AWS", blocks=blocksuccess,
channel=senderid) # Send success block message to user
complete(outputs=outputs)
```
|
closed
|
2022-02-24T19:36:40Z
|
2022-02-28T19:35:21Z
|
https://github.com/slackapi/bolt-python/issues/600
|
[
"question"
] |
bsouzagit
| 4
|
mouredev/Hello-Python
|
fastapi
| 80
|
What should I do when a platform has blocked me, I can't withdraw funds, and the payment channel keeps showing "under maintenance"?
|
If you need help, please add the consulting WeChat [zhkk8683] or consulting QQ: 1923630145.
Being blocked by a platform is a real headache, because it means you cannot withdraw the funds you hold on the platform. First, you need to understand the reason the platform blocked the withdrawal.
If you violated the platform's policies, your account may be suspended or restricted. In that case you need to read the platform's user agreement carefully and contact the platform's customer support team to learn how to resolve the issue.
If you believe your account was blocked by mistake, you should immediately contact the platform's customer support team and provide all the necessary information. You may need to provide proof of identity, transaction history records, and other documents proving your identity. After providing this information, you will need to wait patiently for the platform's reply, which usually takes several business days. If the reason you were blocked is that your transactions were suspected of involving fraudulent behavior, you will need to provide evidence to prove your innocence.
|
closed
|
2024-10-07T06:35:45Z
|
2024-10-16T05:27:09Z
|
https://github.com/mouredev/Hello-Python/issues/80
|
[] |
xiao691
| 0
|
JaidedAI/EasyOCR
|
pytorch
| 914
|
Greek Language
|
First of all, thank you so much for this wonderful work. What about the Greek language, isn't it supported yet?
Or please tell me how I can train the EasyOCR model on the Greek language.
|
open
|
2022-12-23T10:09:31Z
|
2023-01-14T04:57:15Z
|
https://github.com/JaidedAI/EasyOCR/issues/914
|
[] |
Ham714
| 1
|
trevorstephens/gplearn
|
scikit-learn
| 123
|
Crossover only results in one offspring
|
Usually crossover produces two individuals by symmetrically replacing the subtrees in each other.
The crossover operator in gplearn however only produces one offspring, is there a reason it was implemented that way?
The ramification of this implementation is that genetic material from the parent is completely eradicated from the gene pool, because it is simply overwritten from genetic material from the donor.
When implemented in a symmetric fashion the genetic material is swapped and both are kept in the gene pool which may result in better results.
It would be relatively simple to do it.
The first one to return is as it is in the code:
https://github.com/trevorstephens/gplearn/blob/07b41a150dbf7e16645268b88405514b7f23590a/gplearn/_program.py#L549-L551
The second individual would be:
```
(donor[:donor_start] +
self.program[start:end] +
donor[donor_end:])
```
Then a few adjustments are needed in the _parallel_evolve function in genetic.py (because instead of adding one offspring we add two).
Evolutionarily speaking, it may even be better to add only one offspring from the crossover; or perhaps in the end it makes no great difference whether we do the operation twice to add two or once to add two.
Opinions?
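The symmetric variant proposed above can be sketched on flat programs (gplearn stores a program as a flat list of functions and terminals); the two subtree spans are swapped to produce two children, a sketch rather than gplearn's actual implementation:

```python
# Sketch of symmetric subtree crossover on flat program lists.
# (start, end) / (donor_start, donor_end) are the subtree spans; in gplearn
# these would come from get_subtree(), here they are given directly.
def symmetric_crossover(parent, start, end, donor, donor_start, donor_end):
    child1 = parent[:start] + donor[donor_start:donor_end] + parent[end:]
    child2 = donor[:donor_start] + parent[start:end] + donor[donor_end:]
    return child1, child2  # both parents' genetic material survives

p = ["add", "x", "y"]
d = ["mul", "a", "b"]
print(symmetric_crossover(p, 1, 2, d, 2, 3))
```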
|
closed
|
2019-01-11T17:48:48Z
|
2019-02-17T10:21:30Z
|
https://github.com/trevorstephens/gplearn/issues/123
|
[] |
hwulfmeyer
| 3
|
scanapi/scanapi
|
rest-api
| 4
|
Implement Query Params
|
Similar to `headers` implementation
```yaml
api:
base_url: ${BASE_URL}
headers:
Authorization: ${BEARER_TOKEN}
params:
per_page: 10
```
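The `params` mapping corresponds to a URL query string appended to the request, much as `requests` builds it. With the stdlib the equivalent encoding is (the URL is an illustrative placeholder, not scanapi's):

```python
from urllib.parse import urlencode

# The params mapping from the spec above, encoded as a query string.
params = {"per_page": 10}
query = urlencode(params)
url = f"https://api.example.com/repos?{query}"  # placeholder base URL
print(url)
```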
|
closed
|
2019-07-21T19:20:54Z
|
2019-08-01T13:08:31Z
|
https://github.com/scanapi/scanapi/issues/4
|
[
"Feature",
"Good First Issue"
] |
camilamaia
| 0
|
miguelgrinberg/python-socketio
|
asyncio
| 195
|
Default ping_timeout is too high for React Native
|
Related issues:
https://github.com/facebook/react-native/issues/12981
https://github.com/socketio/socket.io/issues/3054
Fix:
```py
Socket = socketio.AsyncServer(
ping_timeout=30,
ping_interval=30
)
```
|
closed
|
2018-07-23T15:41:15Z
|
2019-01-27T08:59:21Z
|
https://github.com/miguelgrinberg/python-socketio/issues/195
|
[
"question"
] |
Rybak5611
| 1
|
autogluon/autogluon
|
data-science
| 4,486
|
use in pyspark
|
## Description
from autogluon.tabular import TabularPredictor
Can TabularPredictor use the Spark engine to deal with big data?
## References
|
closed
|
2024-09-23T10:45:40Z
|
2024-09-26T18:12:28Z
|
https://github.com/autogluon/autogluon/issues/4486
|
[
"enhancement",
"module: tabular"
] |
hhk123
| 1
|
modin-project/modin
|
pandas
| 7,139
|
Simplify Modin on Ray installation
|
Details in https://github.com/modin-project/modin/pull/6955#issue-2147677569
|
closed
|
2024-04-02T11:36:26Z
|
2024-05-02T13:32:36Z
|
https://github.com/modin-project/modin/issues/7139
|
[
"dependencies ๐"
] |
anmyachev
| 0
|
jupyterlab/jupyter-ai
|
jupyter
| 972
|
Create a kernel so you can have an AI notebook with context
|
<!-- Welcome! Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
<!--
Thanks for thinking of a way to improve JupyterLab. If this solves a problem for you, then it probably solves that problem for lots of people! So the whole community will benefit from this request.
Before creating a new feature request please search the issues for relevant feature requests.
-->
### Problem
I would like to have AI conversations in a notebook file with the file as the context. New Notebook would be a new conversation
### Proposed Solution
Create a kernel rather than magics in another kernel. Use for context
### Additional context
The notebook format is great for "call and response" like a chat. The current abilities do not let you create a notebook where all "code" is evaluated by the AI chat. You could preface every cell with `%%ai`, but that is cumbersome, and it also doesn't seem to save context (though that may be a user setting).
|
open
|
2024-08-29T19:49:28Z
|
2024-08-29T19:49:28Z
|
https://github.com/jupyterlab/jupyter-ai/issues/972
|
[
"enhancement"
] |
Jwink3101
| 0
|
Hironsan/BossSensor
|
computer-vision
| 3
|
memory problem?
|
It seems you read all the images into memory. Will that be a problem if the number of images is huge, e.g. 100k training images?
I also use Keras, but I use the flow_from_directory method instead.
Thanks for your code.
I got some inspiration from it.
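The idea behind flow_from_directory, loading only one batch at a time instead of caching everything up front, can be sketched with a plain generator (stdlib only; the file loading is replaced by a stand-in function):

```python
# Sketch: yield fixed-size batches lazily instead of caching every image.
# load() is a stand-in for reading and decoding one image from disk.
def batch_generator(paths, batch_size, load=lambda p: f"img:{p}"):
    batch = []
    for path in paths:
        batch.append(load(path))       # only ~batch_size items live in memory
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:                          # final partial batch
        yield batch

batches = list(batch_generator(["a.jpg", "b.jpg", "c.jpg"], 2))
print(batches)  # [['img:a.jpg', 'img:b.jpg'], ['img:c.jpg']]
```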
|
open
|
2016-12-24T07:00:16Z
|
2016-12-24T22:19:55Z
|
https://github.com/Hironsan/BossSensor/issues/3
|
[] |
staywithme23
| 0
|
axnsan12/drf-yasg
|
django
| 700
|
How to group swagger API endpoints with drf_yasg
|
My related question in stackoverflow https://stackoverflow.com/questions/66001064/how-to-group-swagger-api-endpoints-with-drf-yasg-django
I am not able to group "v2" in swagger. Is that possible?
path('v2/token_api1', api.token_api1, name='token_api1'),
path('v2/token_api2', api.token_api2, name='token_api2'),
previous rest_framework_swagger was able to group like this https://i.imgur.com/EJN5o8c.png
Here is the full code (please note I am using @api_view, as I was testing for migration work)
https://gist.github.com/axilaris/099393c171f940bd7c127a7d9942a056
|
closed
|
2021-02-05T23:08:40Z
|
2021-02-07T23:49:44Z
|
https://github.com/axnsan12/drf-yasg/issues/700
|
[] |
axilaris
| 1
|
Kludex/mangum
|
fastapi
| 106
|
Refactor tests parameters and fixtures
|
The tests are a bit difficult to understand at the moment. I think the parameterization and fixtures could be modified to make the behaviour clearer; docstrings are probably a good idea too.
|
closed
|
2020-05-09T13:49:54Z
|
2021-03-22T15:08:37Z
|
https://github.com/Kludex/mangum/issues/106
|
[
"improvement",
"chore"
] |
jordaneremieff
| 7
|
pyeve/eve
|
flask
| 669
|
Keep getting exception "DOMAIN dictionary missing or wrong."
|
I have no idea what the cause of this is. I'm running Ubuntu 14.04; my virtual host points to a wsgi file in the same folder as the app file, and I also have a settings.py file in that folder. This runs just fine on Windows locally, but on Ubuntu with a virtualhost and virtualenv it throws this error.
|
closed
|
2015-07-14T01:51:50Z
|
2015-07-14T06:40:16Z
|
https://github.com/pyeve/eve/issues/669
|
[] |
chawk
| 1
|
RomelTorres/alpha_vantage
|
pandas
| 375
|
[3.0.0][regression] can't copy 'alpha_vantage/async_support/sectorperformance.py'
|
Build fails:
```
copying alpha_vantage/async_support/foreignexchange.py -> build/lib/alpha_vantage/async_support
error: can't copy 'alpha_vantage/async_support/sectorperformance.py': doesn't exist or not a regular file
*** Error code 1
```
This file is a broken symbolic link:
```
$ file ./work-py311/alpha_vantage-3.0.0/alpha_vantage/async_support/sectorperformance.py
./work-py311/alpha_vantage-3.0.0/alpha_vantage/async_support/sectorperformance.py: broken symbolic link to ../sectorperformance.py
```
FreeBSD 14.1
|
closed
|
2024-07-18T13:02:16Z
|
2024-07-30T04:56:18Z
|
https://github.com/RomelTorres/alpha_vantage/issues/375
|
[] |
yurivict
| 1
|
lmcgartland/graphene-file-upload
|
graphql
| 21
|
from graphene_file_upload.flask import FileUploadGraphQLView fails with this error:
|
```
Traceback (most recent call last):
File "server.py", line 4, in <module>
import graphene_file_upload.flask
File "~/.local/share/virtualenvs/backend-JODmqDQ7/lib/python2.7/site-packages/graphene_file_upload/flask.py", line 1, in <module>
from flask import request
ImportError: cannot import name request
```
Potentially I am just using this incorrectly?
|
closed
|
2018-11-07T23:12:01Z
|
2018-11-09T21:46:27Z
|
https://github.com/lmcgartland/graphene-file-upload/issues/21
|
[] |
mfix22
| 9
|
dynaconf/dynaconf
|
fastapi
| 371
|
Link from readthedocs to the new websites
|
Colleagues of mine (not really knowing about dynaconf) were kind of confused when I told them about dynaconf 3.0 and they could just find the docs for 2.2.3 on https://dynaconf.readthedocs.io/
Would it be feasible to add a prominent link pointing to https://www.dynaconf.com/?
|
closed
|
2020-07-14T06:00:23Z
|
2020-07-14T21:31:54Z
|
https://github.com/dynaconf/dynaconf/issues/371
|
[
"question"
] |
aberres
| 0
|
PedroBern/django-graphql-auth
|
graphql
| 57
|
Feature / Idea: Use UserModel mixins to add UserStatus directly to a custom User model
|
Thank you for producing a great library with outstanding documentation.
The intention of this "issue" is to propose / discuss a different "design approach" of using mixins on a custom user model rather than the `UserStatus` `OneToOneField`.
This would work by putting all the functionality provided by `UserStatus` into mixins of the form:
```
class VerifiedUserMixin(models.Model):
verified = models.BooleanField(default=False)
class Meta:
abstract = True
...
class SecondaryEmailUserMixin(models.Model):
secondary_email = models.EmailField(blank=True, null=True)
class Meta:
abstract = True
...
```
The mixin is then added to the custom user model in the project:
```
from django.contrib.auth.models import AbstractUser
from graphql_auth.mixins import VerifiedUserMixin
class CustomUser(VerifiedUserMixin, AbstractUser):
email = models.EmailField(blank=False, max_length=254, verbose_name="email address")
USERNAME_FIELD = "username" # e.g: "username", "email"
EMAIL_FIELD = "email" # e.g: "email", "primary_email"
```
The main advantages are:
- There is no need for an additional db table
- All functionality is directly available from the `CustomUser` model (rather than via `CustomUser.status`). This reduces any opportunity of additional db queries when status is not `select_related`
- It becomes easy to separate the different functionality and only include what is needed (in the example above I omitted the `SecondaryEmailUserMixin` from my `CustomUser`)
- No need for signals to create the `UserStatus` instance
There is an added setup step of adding the mixins to the custom user model, but I am assuming the library is intended to be used with a custom user model, so this is not a significant barrier.
I realise this is quite a significant change, but in my experience adding functionality directly onto the user model is simpler to work with than related objects.
In principle the library could probably support both the 1-to-1 and mixin approach, but that would fairly significantly increase the maintenance overhead.
|
open
|
2020-07-02T07:58:46Z
|
2020-07-02T17:30:37Z
|
https://github.com/PedroBern/django-graphql-auth/issues/57
|
[
"enhancement"
] |
maxpeterson
| 2
|
piskvorky/gensim
|
machine-learning
| 3,183
|
Doc2Vec loss always showing 0
|
```
class MonitorCallback(CallbackAny2Vec):
def __init__(self, test_cui, test_sec):
self.test_cui = test_cui
self.test_sec = test_sec
def on_epoch_end(self, model):
print('Model loss:', model.get_latest_training_loss())
for word in self.test_cui: # show wv logic changes
print(word, model.wv.most_similar(word))
for word in self.test_sec: # show dv logic changes
print(word, model.dv.most_similar(word))
```
```
model = Doc2Vec(vector_size=300, min_count=1, epochs=1, window=5, workers=32)
print('Building vocab...')
model.build_vocab(train_corpus)
print(model.corpus_count, model.epochs)
model.train(
train_corpus, total_examples=model.corpus_count, compute_loss=True, epochs=model.epochs, callbacks=[monitor])
print('Done training...')
model.save('sec2vec.model')
```
Each time the callback prints, it prints 0. The second issue is that after the first epoch, the model seems pretty good according to calls to most_similar. Yet, after the second it appears random. I have a fairly large dataset so I don't think dramatic overfitting is happening. Is there a bug after the first epoch or is the learning rate getting messed up? It's tough to know what's going on because there's no within-epoch logging and the training loss is always evaluating to 0.
|
closed
|
2021-06-28T01:33:32Z
|
2021-06-28T21:33:04Z
|
https://github.com/piskvorky/gensim/issues/3183
|
[] |
griff4692
| 2
|
TracecatHQ/tracecat
|
automation
| 116
|
Update docs for Linux installs
|
Docker engine community edition for linux doesn't ship with `host.docker.internal`. We found a fix by adding:
```yaml
extra_hosts:
- "host.docker.internal:host-gateway"
```
to each service in the docker compose yaml file.
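Applied to a compose file, the fix might look like this (the service names are placeholders for illustration, not Tracecat's actual compose file):

```yaml
services:
  api:
    image: example/api:latest        # placeholder service
    extra_hosts:
      - "host.docker.internal:host-gateway"
  worker:
    image: example/worker:latest     # placeholder service
    extra_hosts:
      - "host.docker.internal:host-gateway"
```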
|
closed
|
2024-05-01T02:59:30Z
|
2024-05-01T03:14:33Z
|
https://github.com/TracecatHQ/tracecat/issues/116
|
[] |
daryllimyt
| 0
|
alyssaq/face_morpher
|
numpy
| 26
|
FaceMorph error
|
Hi, I am new to Python. Please can you tell me how to morph images using commands on Linux?
When I run `python facemorpher/morpher.py --src=index.jpg --dest=index3.jpg` I get:
```
Traceback (most recent call last):
  File "facemorpher/morpher.py", line 139, in <module>
    verify_args(args)
  File "facemorpher/morpher.py", line 42, in verify_args
    if args['/home/newter/face_morpher/images'] is None:
KeyError: '/home/newter/face_morpher/images'
```
Thanks in advance.
|
open
|
2017-09-09T08:15:19Z
|
2019-01-02T09:50:16Z
|
https://github.com/alyssaq/face_morpher/issues/26
|
[] |
sadhanareddy007
| 5
|
Farama-Foundation/Gymnasium
|
api
| 831
|
[Question] Manually reset vector envronment
|
### Question
As far as I know, the Gym vector environment auto-resets a sub-env when that env is done. I wonder if there is a way to reset it manually, because I want to exploit the vec-env feature when implementing the vanilla policy gradient algorithm, where each update's data should consist of one or several complete episodes.
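One common workaround, sketched here in plain Python over fake step data (not the Gymnasium API itself): keep a per-subenv buffer and only hand complete episodes to the update, letting autoreset proceed as usual.

```python
def split_complete_episodes(rewards_per_step, dones_per_step, num_envs):
    """rewards_per_step / dones_per_step: one length-num_envs tuple per
    vectorized step. Returns the list of complete episodes (each a list of
    rewards); unfinished tails stay buffered and are simply dropped here."""
    buffers = [[] for _ in range(num_envs)]
    episodes = []
    for rewards, dones in zip(rewards_per_step, dones_per_step):
        for i in range(num_envs):
            buffers[i].append(rewards[i])
            if dones[i]:              # autoreset happens here in a vec env
                episodes.append(buffers[i])
                buffers[i] = []       # next transition starts a new episode
    return episodes


eps = split_complete_episodes(
    rewards_per_step=[(1, 1), (1, 1), (1, 1)],
    dones_per_step=[(False, True), (False, False), (True, True)],
    num_envs=2,
)
# env 0 finishes once (3 steps); env 1 finishes twice (1 step, then 2 steps)
```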
|
open
|
2023-12-09T12:00:29Z
|
2024-02-28T14:45:33Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/831
|
[
"question"
] |
zzhixin
| 9
|
donnemartin/system-design-primer
|
python
| 596
|
Notifying the user that the transactions have completed
|
> Notifies the user the transactions have completed through the Notification Service:
Uses a Queue (not pictured) to asynchronously send out notifications
Do we need to notify the user that the transaction has completed? Isn't notifying only for if budget was exceeded?
|
open
|
2021-10-12T13:48:40Z
|
2024-10-15T16:06:20Z
|
https://github.com/donnemartin/system-design-primer/issues/596
|
[
"needs-review"
] |
lor-engel
| 1
|
jumpserver/jumpserver
|
django
| 14,541
|
[Feature] Support managing SSH assets that require port knocking
|
### Product version
v3.10.15
### Edition
- [ ] Community edition
- [ ] Enterprise edition
- [X] Enterprise trial edition
### Installation method
- [ ] Online install (one-click command)
- [ ] Offline package install
- [ ] All-in-One
- [ ] 1Panel
- [X] Kubernetes
- [ ] Source install
### ⭐️ Feature description
The SSH port is closed by default, and all of our services are deployed on k8s, so the specific k8s node that will make the connection cannot be determined in advance.
Port knocking is a method of opening firewall ports from the outside by generating connection attempts against a preset sequence of closed ports. Once the correct sequence of attempts is received, the firewall rules are dynamically modified to allow the host that sent them to connect on a specific port. The technique is commonly used to hide critical services (such as SSH) from brute-force or other unauthorized attacks.
### Proposed solution
None yet
### Additional information
_No response_
|
closed
|
2024-11-27T05:02:41Z
|
2024-12-31T07:27:00Z
|
https://github.com/jumpserver/jumpserver/issues/14541
|
[
"โญ๏ธ Feature Request"
] |
LiaoSirui
| 3
|
allenai/allennlp
|
data-science
| 4,739
|
Potential bug: The maxpool in cnn_encoder can be triggered by pad tokens.
|
## Description
When using a text_field_embedder -> cnn_encoder (without seq2seq_encoder), the output of the embedder (and mask) gets fed directly into the cnn_encoder. The pad tokens will get masked (set to 0), but it's still possible that after applying the mask followed by the CNN, the PAD tokens are those with the highest activations. This could lead to the exact same datapoint getting different predictions if it's part of a batch vs a single prediction.
## Related issues or possible duplicates
- None
## Environment
OS: NA
Python version: NA
## Steps to reproduce
This can be reproduced by replacing
https://github.com/allenai/allennlp/blob/00bb6c59b3ac8fdc78dfe8d5b9b645ce8ed085c0/allennlp/modules/seq2vec_encoders/cnn_encoder.py#L113
```
filter_outputs.append(self._activation(convolution_layer(tokens)).max(dim=2)[0])
```
with
```
activated_outputs, max_indices = self._activation(convolution_layer(tokens)).max(dim=2)
```
and checking the indices for the same example inside of a batch vs unpadded.
## Possible solution:
We could resolve this by adding a large negative value to all CNN outputs for masked tokens, similarly to what they do in the transformers library (https://github.com/huggingface/transformers/issues/542, https://github.com/huggingface/transformers/blob/c912ba5f69a47396244c64deada5c2b8a258e2b8/src/transformers/modeling_bert.py#L262), but I have not been able to figure out how to do this efficiently.
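The failure mode is easy to reproduce without torch. A plain-Python illustration (the `values`/`mask` numbers are made up) of why multiplying by the mask is not enough when activations can be negative (e.g. tanh), and why replacing masked positions with a large negative value fixes it:

```python
NEG_INF = float("-inf")

def masked_max(values, mask):
    # buggy variant: multiply by the mask, so PAD positions become 0.0 —
    # which can exceed every real (negative) activation and win the max
    zero_masked = max(v * m for v, m in zip(values, mask))
    # fixed variant: set masked positions to -inf before taking the max,
    # analogous to adding a large negative value to masked CNN outputs
    inf_masked = max(v if m else NEG_INF for v, m in zip(values, mask))
    return zero_masked, inf_masked


values = [-0.5, -1.2, 0.0]   # last position is a PAD token's activation
mask = [1, 1, 0]
buggy, fixed = masked_max(values, mask)
# buggy == 0.0 (the PAD wins); fixed == -0.5 (the real maximum)
```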
|
closed
|
2020-10-19T23:30:31Z
|
2020-11-05T23:50:04Z
|
https://github.com/allenai/allennlp/issues/4739
|
[
"bug"
] |
MichalMalyska
| 6
|
jina-ai/serve
|
fastapi
| 5,744
|
Torch error in Jcloud http flow
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Posting to the below flow in jcloud http protocol will throw a torch module not found error. Grpc is fine.
```
from jina import DocumentArray, Executor, requests
import torch
class dummy_torch(Executor):
@requests
def foo(self, docs: DocumentArray, **kwargs):
for d in docs:
d.embedding = torch.rand(1000)
```
YAML
```
jtype: Flow
with:
prefetch: 5
gateway:
port:
- 51000
- 52000
protocol:
- grpc
- http
executors:
- name: dummyExecutor
uses: jinaai+docker://auth0-unified-40be9bf07eece29a/dummy_torch:latest
env:
JINA_LOG_LEVEL: DEBUG
```
LOCAL:
```
from jina import DocumentArray, Client
client = Client(host='jcloud-endpoint')
res = client.post(on='/', inputs=DocumentArray.empty(5), show_progress=True)
res.summary()
```
LOCAL error trace:
```
Traceback (most recent call last):
File "toy.py", line 5, in <module>
res = client.post(on='/', inputs=DocumentArray.empty(5), show_progress=True)
File "/Users/ziniuyu/Documents/github/jina/jina/clients/mixin.py", line 281, in post
return run_async(
File "/Users/ziniuyu/Documents/github/jina/jina/helper.py", line 1334, in run_async
return asyncio.run(func(*args, **kwargs))
File "/opt/anaconda3/envs/py3813/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/anaconda3/envs/py3813/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "/Users/ziniuyu/Documents/github/jina/jina/clients/mixin.py", line 271, in _get_results
async for resp in c._get_results(*args, **kwargs):
File "/Users/ziniuyu/Documents/github/jina/jina/clients/base/http.py", line 165, in _get_results
r_str = await response.json()
File "/opt/anaconda3/envs/py3813/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 1104, in json
raise ContentTypeError(
aiohttp.client_exceptions.ContentTypeError: 0, message='Attempt to decode JSON with unexpected mimetype: text/plain; charset=utf-8', url=URL('jcloud-endpoint:443/post')
```
Gateway error trace:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 271, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 118, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/usr/local/lib/python3.8/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/usr/local/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 237, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
return await dependant.call(**values)
File "/usr/local/lib/python3.8/site-packages/jina/serve/runtimes/gateway/http/app.py", line 191, in post
result = await _get_singleton_result(
File "/usr/local/lib/python3.8/site-packages/jina/serve/runtimes/gateway/http/app.py", line 382, in _get_singleton_result
result_dict = result.to_dict()
File "/usr/local/lib/python3.8/site-packages/jina/types/request/data.py", line 260, in to_dict
da = self.docs
File "/usr/local/lib/python3.8/site-packages/jina/types/request/data.py", line 276, in docs
return self.data.docs
File "/usr/local/lib/python3.8/site-packages/jina/types/request/data.py", line 47, in docs
self._loaded_doc_array = self.document_array_cls.from_protobuf(
File "/usr/local/lib/python3.8/site-packages/docarray/array/mixins/io/binary.py", line 361, in from_protobuf
return cls(Document.from_protobuf(od) for od in pb_msg.docs)
File "/usr/local/lib/python3.8/site-packages/docarray/array/mixins/io/from_gen.py", line 23, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/docarray/array/base.py", line 12, in __init__
self._init_storage(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/docarray/array/storage/memory/backend.py", line 25, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/docarray/array/storage/memory/backend.py", line 83, in _init_storage
self.extend(_docs)
File "/usr/local/lib/python3.8/site-packages/docarray/array/storage/base/seqlike.py", line 81, in extend
self._extend(values, **kwargs)
File "/usr/local/lib/python3.8/site-packages/docarray/array/storage/memory/seqlike.py", line 60, in _extend
values = list(values) # consume the iterator only once
File "/usr/local/lib/python3.8/site-packages/docarray/array/mixins/io/binary.py", line 361, in <genexpr>
return cls(Document.from_protobuf(od) for od in pb_msg.docs)
File "/usr/local/lib/python3.8/site-packages/docarray/document/mixins/protobuf.py", line 13, in from_protobuf
return parse_proto(pb_msg)
File "/usr/local/lib/python3.8/site-packages/docarray/proto/io/__init__.py", line 24, in parse_proto
fields[f_name] = read_ndarray(value)
File "/usr/local/lib/python3.8/site-packages/docarray/proto/io/ndarray.py", line 44, in read_ndarray
return _to_framework_array(x, framework)
File "/usr/local/lib/python3.8/site-packages/docarray/proto/io/ndarray.py", line 147, in _to_framework_array
from torch import from_numpy
ModuleNotFoundError: No module named 'torch'
```
**Describe how you solve it**
<!-- copy past your code/pull request link -->
No error with numpy
Possible reason: the default http gateway image does not have torch installed
---
<!-- Optional, but really help us locate the problem faster -->
**Environment**
<!-- Run `jina --version-full` and copy paste the output here -->
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
|
closed
|
2023-03-07T12:01:10Z
|
2023-03-13T21:55:46Z
|
https://github.com/jina-ai/serve/issues/5744
|
[] |
ZiniuYu
| 0
|
ets-labs/python-dependency-injector
|
asyncio
| 549
|
Override works only with yield, not with return
|
I expect both tests here to pass. However, the test which depends on the fixture that returns the use case fails.
Only the test which depends on the fixture that yields the use case passes.
Could you help me understand if that's by design or if it's a bug?
Result:
```
def test_fixture_override(self, return_override):
> assert return_override.use_case().execute() == 2
E assert 1 == 2
E +1
E -2
test_di.py:31: AssertionError
```
<details>
<summary>Click to toggle example</summary>
```py
import pytest
from dependency_injector import containers, providers
class UseCase:
def __init__(self, result=None):
self._result = result or 1
def execute(self):
return self._result
class Container(containers.DeclarativeContainer):
use_case: providers.Provider[UseCase] = providers.ContextLocalSingleton(UseCase)
class TestFixtureOverride:
@pytest.fixture
def return_override(self):
container = Container()
with container.use_case.override(UseCase(2)):
return container
@pytest.fixture
def yield_override(self):
container = Container()
with container.use_case.override(UseCase(2)):
yield container
def test_fixture_override(self, return_override):
assert return_override.use_case().execute() == 2
def test_without_override(self, yield_override):
assert yield_override.use_case().execute() == 2
```
</details>
|
closed
|
2022-01-18T17:51:29Z
|
2022-03-08T18:56:36Z
|
https://github.com/ets-labs/python-dependency-injector/issues/549
|
[] |
chbndrhnns
| 2
|
taverntesting/tavern
|
pytest
| 151
|
How to test a non-json body response
|
Hi all
I would like to check for a non-json body response. In my case the response is a simple:
`True` or `False`.
I can't find any documentation stating how to check a text body instead of a json body.
|
closed
|
2018-07-11T10:01:48Z
|
2024-04-03T15:55:18Z
|
https://github.com/taverntesting/tavern/issues/151
|
[] |
alphadijkstra
| 10
|
huggingface/transformers
|
python
| 36,532
|
After tokenizers upgrade, the length of the token does not correspond to the length of the model
|
### System Info
transformers: 4.48.1
tokenizers: 0.2.1
python: 3.9
### Who can help?
@ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Code snippet:
```
tokenizer = PegasusTokenizer.from_pretrained('IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese')
model = AutoModelForSeq2SeqLM.from_pretrained(
'IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese',
config=config
)
training_args = Seq2SeqTrainingArguments(
output_dir=config['model_name'],
evaluation_strategy="epoch",
# report_to="none",
save_strategy="epoch",
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
num_train_epochs=4,
predict_with_generate=True,
logging_steps=0.1
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
tokenizer=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics
)
```
Error message:
(screenshot omitted)
Trial process:
My original versions: transformers 4.29.1, tokenizers 0.13.3. The model could train and run inference normally.
After upgrading, the above error occurred and normal training was no longer possible, so I resized the model with `model.resize_token_embeddings(len(tokenizer))`. Original model vocabulary size: 50000; length of the loaded tokenizer: 50103. The model trained this way produced abnormal inference results.
(screenshot omitted)
Trying again: keeping tokenizers at 0.13.3 and upgrading transformers to 4.33.3 (1. I need to upgrade because the NPU only supports version 4.3.20; 2. this is the highest version compatible with these tokenizers). With this combination, training and inference work normally. As long as tokenizers is greater than 0.13.3, the length changes.
### Expected behavior
I expect tokenizer to be compatible with the original code
|
closed
|
2025-03-04T09:58:59Z
|
2025-03-05T09:49:44Z
|
https://github.com/huggingface/transformers/issues/36532
|
[
"bug"
] |
CurtainRight
| 3
|
noirbizarre/flask-restplus
|
flask
| 753
|
update json schema from draft v4 to draft v7
|
### **Code**
```python
schema_data = schema_json.get_json_schema_deref_object()
event_scheme = api.schema_model('event_schema', schema_data)
@build_ns.route("/build_start")
class BuildStartEvent(Resource):
@build_ns.doc('create_todo')
@build_ns.expect(event_scheme, validate=True)
@build_ns.response(400, 'Input Values Malformed')
def post(self):
```
### **Repro Steps** (if applicable)
make the event schema `schema_data` a json schema v7
### **Expected Behavior**
Using the latest release, you only support draft v4. There have been some major changes and upgrades to JSON Schema that make it MUCH more useful. Your `validate=True` option doesn't work with newer schemas because v4 doesn't support the `allOf`, `oneOf`, etc. keywords within the schema. It lets things through that it shouldn't.
Currently im having to do
```python
from jsonschema import validate
validate(api.payload, schema=schema_data)
```
at the beginning of each method to make sure the JSON passes, which kind of makes that `validate` option irrelevant.
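The per-method workaround can at least be factored into a decorator. A stdlib-only sketch (the names `validated` and `get_payload` are made up; in practice `validator` would be `jsonschema.validate` bound to a draft-v7 schema, and the payload would come from `api.payload`):

```python
import functools

def validated(validator, get_payload):
    """Hypothetical helper: run `validator` on the request payload once, in a
    decorator, instead of repeating validate() at the top of every handler."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            validator(get_payload())  # raises on invalid input
            return func(*args, **kwargs)
        return wrapper
    return decorator


# stand-in usage with plain callables instead of the Flask/RESTPlus machinery
seen = []
handler = validated(seen.append, lambda: {"x": 1})(lambda: "handled")
result = handler()
```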
|
open
|
2019-11-26T19:47:01Z
|
2020-07-10T16:43:57Z
|
https://github.com/noirbizarre/flask-restplus/issues/753
|
[
"bug"
] |
scphantm
| 2
|
tfranzel/drf-spectacular
|
rest-api
| 1,007
|
django-filter custom field extensions do not allow for serializer fields
|
**Describe the bug**
When specifying custom annotations for `filters.Filter` classes, only the basic types and raw openapi definitions are allowed:
https://github.com/tfranzel/drf-spectacular/blob/d57ff264ddd92940fc8f3ec52350204fb6a36f2f/drf_spectacular/contrib/django_filters.py#L96-L100
**To Reproduce**
```python
from django_filters import rest_framework as filters
@extend_schema_field(serializers.CharField())
class CustomFilter(filters.Filter):
field_class = django.forms.CharField
```
and use following filter in a view:
```python
from django_filters import rest_framework as filters
class SomeFilter(filters.FilterSet):
something = CustomFilter(required=True)
```
```
File "/usr/local/lib/python3.11/site-packages/drf_spectacular/contrib/django_filters.py", line 63, in get_schema_operation_parameters
result += self.resolve_filter_field(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/drf_spectacular/contrib/django_filters.py", line 100, in resolve_filter_field
schema = annotation.copy()
^^^^^^^^^^^^^^^
AttributeError: 'CharField' object has no attribute 'copy'
```
**Expected behavior**
I should be able to specify serializer fields in filter annotations.
|
closed
|
2023-06-18T14:32:35Z
|
2023-06-22T16:10:30Z
|
https://github.com/tfranzel/drf-spectacular/issues/1007
|
[
"enhancement",
"fix confirmation pending"
] |
realsuayip
| 0
|
fastapi/sqlmodel
|
sqlalchemy
| 503
|
Add interface to override `get_column_from_field` | `get_sqlalchemy_type` function behavior
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options ๐
### Example Code
```python
# current workaround: force extending sqlmodel.main.get_sqlalchemy_type
from typing import Any, Callable
import sqlmodel.main
from pydantic import BaseModel, ConstrainedStr
from pydantic.fields import ModelField
from sqlalchemy import String
from typing_extensions import TypeAlias
_GetSqlalchemyTypeProtocol: TypeAlias = Callable[[ModelField], Any]
def _override_get_sqlalchemy_type(
original_get_sqlalchemy_type: _GetSqlalchemyTypeProtocol,
) -> _GetSqlalchemyTypeProtocol:
def _extended_get_sqlalchemy_type(field: ModelField) -> Any:
if issubclass(field.type_, BaseModel):
# TODO use sqlalchemy.JSON or CHAR(N) for "known to be short" models
raise NotImplementedError(field.type_)
if issubclass(field.type_, ConstrainedStr):
# MAYBE add CHECK constraint for field.type_.regex
length = field.type_.max_length
if length is not None:
return String(length=length)
return String()
return original_get_sqlalchemy_type(field)
return _extended_get_sqlalchemy_type
sqlmodel.main.get_sqlachemy_type = _override_get_sqlalchemy_type(
sqlmodel.main.get_sqlachemy_type
)
# MAYBE get_sqlachemy_type -> get_sqlalchemy_type (sqlmodel > 0.0.8)
# cf. <https://github.com/tiangolo/sqlmodel/commit/267cd42fb6c17b43a8edb738da1b689af6909300>
```
### Description
Problem:
- We want to decide database column types deterministically by model field.
- Sometimes `SQLModel` does not provide expected column type, and it is (in some cases) impossible because requirements of `SQLModel` users are not always the same (e.g. DBMS dialects, strictness of constraints, choice of CHAR vs VARCHAR vs TEXT vs JSON, TIMESTAMP vs DATETIME)
### Wanted Solution
Allow user to use customized `get_column_from_field` | `get_sqlalchemy_type` function to fit with their own requirements.
Add parameter to model config like `sa_column_builder: Callable[[ModelField], Column] = get_column_from_field`.
Function `get_column_from_field` would be better split by the following concerns, to be used as a part of customized `sa_column_builder` implementation:
1. Deciding the column type (currently done in `get_sqlalchemy_type`)
2. Applying pydantic field options to column type (e.g. nullable, min, max, min_length, max_length, regex, ...)
3. Applying column options (e.g. primary_key, index, foreign_key, unique, ...)
#### Possible effects on other issues/PRs:
- May become explicitly extendable by `sqlmodel` users:
- #63
- #97
- #137
- #178
- #292
- #298
- #356
- #447
- May expecting a similar thing:
- #319
---
p.s-1
Conversion rule between Field/column value may become necessary, mainly to serialize field value to column value.
(e.g. Classes inheriting BaseModel cannot be stored directly into `sqlalchemy.JSON` because it is not JSON or dict. We avoid this by adding `json_serializer` to `create_engine`. Deserialize part has no problem because JSON `str` -> `BaseModel` will be done by pydantic validation for now (pydantic v1))
```python
def _json_serializer(value: Any) -> str:
if isinstance(value, BaseModel):
return value.json()
return json.dumps(value)
```
---
p.s-2
IMO using `sqlmodel.sql.sqltypes.AutoString()` in alembic revision file is not good from the sight of future migration constancy, and this is one of the reason I overridden `get_sqlalchemy_type` function.
### Wanted Code
```python
################################################################
# expected: any of the following `Foo` / `Bar`
def _custom_sa_column_builder(field: ModelField) -> Column:
...
class Foo(SQLModel, table=True):
class SQLModelConfig:
sa_column_builder: Callable[[ModelField], Column] = _custom_sa_column_builder
...
class Bar(SQLModel, table=True, sa_column_builder=_custom_sa_column_builder):
...
```
### Alternatives
- Write a function that returns `sa_column` and call it in `sqlmodel.Field` declaration
- -> Not easy to apply pydantic-side constraints (e.g. nullable, `ConstrainedStr`, ...)
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.10.7
### Additional Context
_No response_
|
open
|
2022-11-20T04:26:17Z
|
2023-02-19T17:18:00Z
|
https://github.com/fastapi/sqlmodel/issues/503
|
[
"feature"
] |
243f6a8885a308d313198a2e037
| 1
|
aeon-toolkit/aeon
|
scikit-learn
| 2,641
|
[ENH] Implement `load_model` function for deep learning based regression ensemble models
|
### Describe the feature or idea you want to propose
Like #2436 (PR #2631), the regression module's deep learning submodule also contains the ensemble models `InceptionTimeRegressor` and `LITETimeRegressor`. Upon closer inspection, I discovered that they don't have the `load_model` function implemented either.
### Describe your proposed solution
Implement `load_model` function for `InceptionTimeRegressor` and `LITETimeRegressor` models.
Checkout https://github.com/aeon-toolkit/aeon/issues/2436#issuecomment-2532441685 for more information regarding the suggested implementation.
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
Would love to work on this issue, similar to #2436
|
open
|
2025-03-17T16:53:40Z
|
2025-03-18T17:45:37Z
|
https://github.com/aeon-toolkit/aeon/issues/2641
|
[
"enhancement",
"regression",
"deep learning"
] |
inclinedadarsh
| 0
|
home-assistant/core
|
python
| 140,405
|
Switchbot Meter not discovered in version after 2025.2.5
|
### The problem
Upgraded to Core 2025.3.0 and then 2025.3.1. Switchbot meter Bluetooth devices were not able to be viewed/discovered when previously worked flawlessly.
Downgraded using ha core update --version=2025.2.5 and devices immediately showed up/discoverable.
Have tried upgrading again and devices go "missing"/unable to be discovered.
### What version of Home Assistant Core has the issue?
2025.3.1
### What was the last working version of Home Assistant Core?
2025.2.5
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Switchbot Bluetooth
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/switchbot
### Diagnostics information
Can provide more if I am directed on what exactly is needed (new to HA)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_
|
closed
|
2025-03-11T19:32:26Z
|
2025-03-15T20:06:02Z
|
https://github.com/home-assistant/core/issues/140405
|
[
"integration: switchbot"
] |
vetterlab
| 12
|
waditu/tushare
|
pandas
| 867
|
Some AMAC industry index data is missing on the same dates
|
After pulling the index data for all CSI AMAC level-1 industries with index_daily, I found that the 'open', 'high', and 'low' values are missing on the same dates: 20180622, 20161128, 20140728, 20130809, 20130426, and all dates before 20130121.
tushare pro ID: 13620203739
|
open
|
2018-12-13T21:40:00Z
|
2018-12-14T14:51:01Z
|
https://github.com/waditu/tushare/issues/867
|
[] |
yamazyw
| 1
|
lk-geimfari/mimesis
|
pandas
| 817
|
CI fails on testing matrix method sometimes
|
Travis CI report: [here](https://travis-ci.org/lk-geimfari/mimesis/jobs/654638429?utm_medium=notification&utm_source=github_status)
|
closed
|
2020-02-25T18:58:47Z
|
2020-04-25T20:38:48Z
|
https://github.com/lk-geimfari/mimesis/issues/817
|
[
"bug",
"stale"
] |
lk-geimfari
| 2
|
mirumee/ariadne-codegen
|
graphql
| 286
|
How to generate the `queries` file?
|
Hey team,
Thanks a lot for developing the library.
There is one thing which is quite unclear to me - what if I would like to have a client that can run any possible query?
Currently, if I understand correctly, the codegen will only cover the queries I've provided in the `queries.graphql` file. Is there any way to have all possible queries generated?
|
closed
|
2024-03-13T16:28:29Z
|
2024-03-14T17:03:48Z
|
https://github.com/mirumee/ariadne-codegen/issues/286
|
[
"question",
"waiting"
] |
renardeinside
| 3
|
tensorpack/tensorpack
|
tensorflow
| 976
|
No memoized_method when trying to run Super Resolution example
|
If you're asking about an unexpected problem you met, use this template.
__PLEASE DO NOT DELETE THIS TEMPLATE, FILL IT__:
### 1. What you did:
(1) **If you're using examples, what's the command you run:**
python3 enet-pat.py --vgg19 vgg19.npz --data train2017.zip
(2) **If you're using examples, have you made any changes to the examples? Paste them here:**
(3) **If not using examples, tell us what you did here:**
Note that we may not be able to investigate it if there is no reproducible code.
It's always better to paste what you did instead of describing them.
### 2. What you observed:
Traceback (most recent call last):
File "enet-pat.py", line 20, in <module>
from GAN import SeparateGANTrainer, GANModelDesc
File "/home/master/a/tensorpack/tensorpack/examples/SuperResolution/GAN.py", line 12, in <module>
from tensorpack.utils.argtools import memoized_method
ImportError: cannot import name 'memoized_method'
It's always better to paste what you observed instead of describing them.
A part of logs is sometimes enough, but it's always better to paste as much as possible.
You can run a command with `CMD 2>&1 | tee logs.txt` to save all stdout & stderr logs to one file.
(2) **Other observations, if any:**
For example, CPU/GPU utilization, output images, tensorboard curves, if relevant to your issue.
### 3. What you expected, if not obvious.
### 4. Your environment:
+ Python version:
+ TF version: `python -c 'import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)'`.
+ Tensorpack version: `python -c 'import tensorpack; print(tensorpack.__version__)'`.
You can install Tensorpack master by `pip install -U git+https://github.com/ppwwyyxx/tensorpack.git`
and see if your issue is already solved.
+ Hardware information, e.g. number of GPUs used.
About efficiency issues, PLEASE first read http://tensorpack.readthedocs.io/en/latest/tutorial/performance-tuning.html
|
closed
|
2018-11-07T09:26:58Z
|
2018-11-27T04:14:53Z
|
https://github.com/tensorpack/tensorpack/issues/976
|
[
"installation/environment"
] |
notecola
| 4
|
docarray/docarray
|
fastapi
| 1,591
|
How to leverage Weaviate's build-in modules (vectoriser, summariser, etc)?
|
The current documentation for the Docarray library only covers examples where the embedding vectors are already supplied. However, it's unclear whether Docarray allows one to input text or other data and leverage Weaviate's built-in modules for vectorisation, or summarisation.
Is this possible using DocArray?
for example, one could have a setup a weaviate instance with the following `docker-compose`
```
---
version: '3.4'
services:
weaviate:
command:
- --host
- 0.0.0.0
- --port
- '8080'
- --scheme
- http
image: semitechnologies/weaviate:1.19.6
ports:
- 8080:8080
restart: on-failure:0
environment:
TRANSFORMERS_PASSAGE_INFERENCE_API: 'http://t2v-transformers-passage:8080'
TRANSFORMERS_QUERY_INFERENCE_API: 'http://t2v-transformers-query:8080'
SUM_INFERENCE_API: 'http://sum-transformers:8080'
QUERY_DEFAULTS_LIMIT: 25
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
DEFAULT_VECTORIZER_MODULE: 'text2vec-transformers'
ENABLE_MODULES: 'text2vec-transformers,sum-transformers'
CLUSTER_HOSTNAME: 'node1'
t2v-transformers-passage:
image: semitechnologies/transformers-inference:facebook-dpr-ctx_encoder-single-nq-base
environment:
ENABLE_CUDA: '1'
NVIDIA_VISIBLE_DEVICES: 'all'
deploy:
resources:
reservations:
devices:
- capabilities:
- 'gpu'
t2v-transformers-query:
image: semitechnologies/transformers-inference:facebook-dpr-question_encoder-single-nq-base
environment:
ENABLE_CUDA: '1'
NVIDIA_VISIBLE_DEVICES: 'all'
deploy:
resources:
reservations:
devices:
- capabilities:
- 'gpu'
sum-transformers:
image: semitechnologies/sum-transformers:facebook-bart-large-cnn-1.0.0
environment:
ENABLE_CUDA: '1'
NVIDIA_VISIBLE_DEVICES: 'all'
deploy:
resources:
reservations:
devices:
- capabilities:
- 'gpu'
```
And use a data model like:
```
class Article:
id:int
text:str
summary:str
```
And now use docarray to store the articles in the weaviate database, and later retrieve the summaries, which are supplied by weaviate itself. Idem for embeddings.
This would obviously make one wonder whether we could do the same with search, e.g. do similarity search by only providing the word to be embedded, and allow weaviate to do the embedding and search itself.
It would be helpful to have more information on how to use Weaviate's native vectorisation modules with Docarray. Specifically, it would be useful to have examples of how to input text data and let the Weaviate database service do the embedding, and later retrieve the results.
Is this currently possible with DocArray?
If it is, i'd be willing to contribute to the docs.
If it is not possible, would you consider it to be in the scope of DocArray?
|
open
|
2023-05-30T17:20:22Z
|
2023-06-04T16:57:31Z
|
https://github.com/docarray/docarray/issues/1591
|
[
"type/feature-request",
"index/weaviate"
] |
hugocool
| 4
|
rougier/from-python-to-numpy
|
numpy
| 60
|
Syntax error on 3.3
|
In section 3.1, in the example on using .base for checking whether an object is a view or a copy of the original, there is a syntax error:
`Z = np.random.uniform(0,1,(5,,5))`
This isn't present in the repository, but for some reason is still present in the text on the book website.
|
closed
|
2017-05-06T12:15:16Z
|
2017-05-11T17:44:09Z
|
https://github.com/rougier/from-python-to-numpy/issues/60
|
[] |
ssantic
| 2
|
yt-dlp/yt-dlp
|
python
| 11,876
|
ERROR: unable to download video data: HTTP Error 403: Forbidden
|
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
USA/Virginia
### Provide a description that is worded well enough to be understood
Yesterday morning, while trying to download a YouTube video, I encountered the following error:
ERROR: unable to download video data: HTTP Error 403: Forbidden
This error has occurred with all youtube videos that I have tried since then.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ yt-dlp -vU https://www.youtube.com/watch?v=Nu24kbhPRPo
[debug] Command-line config: ['-vU', 'https://www.youtube.com/watch?v=Nu24kbhPRPo']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.12.13 from yt-dlp/yt-dlp [542166962] (debian*)
[debug] Python 3.12.8 (CPython x86_64 64bit) - Linux-6.12.5-amd64-x86_64-with-glibc2.40 (OpenSSL 3.3.2 3 Sep 2024, glibc 2.40)
[debug] exe versions: ffmpeg 6.1.1-5 (setts), ffprobe 6.1.1-5, phantomjs 2.1.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, pyxattr-0.8.1, requests-2.32.3, sqlite3-3.46.1, urllib3-2.2.3, websockets-14.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.13 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.13 from yt-dlp/yt-dlp)
[youtube] Extracting URL: https://www.youtube.com/watch?v=Nu24kbhPRPo
[youtube] Nu24kbhPRPo: Downloading webpage
[youtube] Nu24kbhPRPo: Downloading ios player API JSON
[youtube] Nu24kbhPRPo: Downloading mweb player API JSON
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig GAFaRrvQFKNwThYM7 => WUyMymkH2BPH-A
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig i9zzJuICuCs0hrUvw => 1-qed7Dg6-mh2A
[youtube] Nu24kbhPRPo: Downloading m3u8 information
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] Testing format 616
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 227
[download] Destination: /tmp/tmp_rl8yq92.tmp
[download] Got error: HTTP Error 403: Forbidden
File "/bin/yt-dlp", line 8, in <module>
sys.exit(main())
File "/usr/lib/python3/dist-packages/yt_dlp/__init__.py", line 1093, in main
_exit(*variadic(_real_main(argv)))
File "/usr/lib/python3/dist-packages/yt_dlp/__init__.py", line 1083, in _real_main
return ydl.download(all_urls)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3605, in download
self.__download_wrapper(self.extract_info)(
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3578, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1613, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2965, in process_video_result
formats_to_download = self._select_formats(formats, format_selector)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2189, in _select_formats
return list(selector({
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2428, in selector_function
yield from f(ctx)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2527, in final_selector
return selector_function(ctx_copy)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2439, in selector_function
picked_formats = list(f(ctx))
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2448, in selector_function
for pair in itertools.product(selector_1(ctx), selector_2(ctx)):
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2517, in selector_function
yield matches[format_idx - 1]
File "/usr/lib/python3/dist-packages/yt_dlp/utils/_utils.py", line 2252, in __getitem__
self._cache.extend(itertools.islice(self._iterable, n))
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2418, in _check_formats
yield from self._check_formats([f])
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2173, in _check_formats
success, _ = self.dl(temp_file.name, f, test=True)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3199, in dl
return fd.download(name, new_info, subtitle)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/hls.py", line 381, in real_download
return self.download_and_append_fragments(ctx, fragments, info_dict)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/fragment.py", line 513, in download_and_append_fragments
download_fragment(fragment, ctx)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/fragment.py", line 459, in download_fragment
for retry in RetryManager(self.params.get('fragment_retries'), error_callback):
File "/usr/lib/python3/dist-packages/yt_dlp/utils/_utils.py", line 5251, in __iter__
self.error_callback(self.error, self.attempt, self.retries)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/fragment.py", line 456, in error_callback
self.report_retry(err, count, retries, frag_index, fatal)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/common.py", line 410, in report_retry
RetryManager.report_retry(
File "/usr/lib/python3/dist-packages/yt_dlp/utils/_utils.py", line 5258, in report_retry
return error(f'{e}. Giving up after {count - 1} retries') if count > 1 else error(str(e))
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/common.py", line 413, in <lambda>
error=IDENTITY if not fatal else lambda e: self.report_error(f'\r[download] Got error: {e}'),
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1090, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1018, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
ERROR: fragment 1 not found, unable to continue
File "/bin/yt-dlp", line 8, in <module>
sys.exit(main())
File "/usr/lib/python3/dist-packages/yt_dlp/__init__.py", line 1093, in main
_exit(*variadic(_real_main(argv)))
File "/usr/lib/python3/dist-packages/yt_dlp/__init__.py", line 1083, in _real_main
return ydl.download(all_urls)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3605, in download
self.__download_wrapper(self.extract_info)(
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3578, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1613, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2965, in process_video_result
formats_to_download = self._select_formats(formats, format_selector)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2189, in _select_formats
return list(selector({
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2428, in selector_function
yield from f(ctx)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2527, in final_selector
return selector_function(ctx_copy)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2439, in selector_function
picked_formats = list(f(ctx))
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2448, in selector_function
for pair in itertools.product(selector_1(ctx), selector_2(ctx)):
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2517, in selector_function
yield matches[format_idx - 1]
File "/usr/lib/python3/dist-packages/yt_dlp/utils/_utils.py", line 2252, in __getitem__
self._cache.extend(itertools.islice(self._iterable, n))
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2418, in _check_formats
yield from self._check_formats([f])
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 2173, in _check_formats
success, _ = self.dl(temp_file.name, f, test=True)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3199, in dl
return fd.download(name, new_info, subtitle)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/hls.py", line 381, in real_download
return self.download_and_append_fragments(ctx, fragments, info_dict)
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/fragment.py", line 514, in download_and_append_fragments
result = append_fragment(
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/fragment.py", line 479, in append_fragment
self.report_error(f'fragment {frag_index} not found, unable to continue')
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1090, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 1018, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
[info] Unable to download format 616. Skipping...
[info] Nu24kbhPRPo: Downloading 1 format(s): 248+251
[debug] Invoking http downloader on "https://rr5---sn-5ualdnss.googlevideo.com/videoplayback?expire=1734910224&ei=sExoZ9rON9Oty_sP-tCp4Qs&ip=173.162.21.75&id=o-AFSsk061Ia8wWkIIPAE4P339yEpjph0-C1gRzEmHwkFA&itag=248&source=youtube&requiressl=yes&xpc=EgVo2aDSNQ%3D%3D&met=1734888624%2C&mh=Dp&mm=31%2C29&mn=sn-5ualdnss%2Csn-5uaezned&ms=au%2Crdu&mv=m&mvi=5&pl=20&rms=au%2Cau&initcwndbps=4213750&bui=AfMhrI-ClE9FN4m6rN9IuQq894v_R4qNQ0GbVnqSxvFE4nOf2STNrfYhwX5L10JowMbHcc9AcayRkGbF&spc=x-caUHiPZ2v7mOig-zs8fD6Dilqu1T9rs9RrDh2QKsDMnDuZeAxB&vprv=1&svpuc=1&mime=video%2Fwebm&rqh=1&gir=yes&clen=128830112&dur=994.999&lmt=1734759288811738&mt=1734888314&fvip=5&keepalive=yes&fexp=51326932%2C51335594%2C51371294&c=IOS&txp=330F224&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cxpc%2Cbui%2Cspc%2Cvprv%2Csvpuc%2Cmime%2Crqh%2Cgir%2Cclen%2Cdur%2Clmt&sig=AJfQdSswRQIhAIuy6hb9t8RxJz_Xefh3PEoFeg1RvdOF9dZ7b5v5XM_ZAiBvMMGvl3rh3IfPKKnrLA3hLycKtU4zM1u2fH3tb0hJ4w%3D%3D&lsparams=met%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Crms%2Cinitcwndbps&lsig=AGluJ3MwRQIhALC7Hm1ByzHseuGkHr5gB-LsMOF7yZVaYIJdjTLk8HPSAiAxw7gi40CYFZilzkkhh0VyhEy83xk9vAVhvY2SzY4jWg%3D%3D"
ERROR: unable to download video data: HTTP Error 403: Forbidden
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3461, in process_info
partial_success, real_download = self.dl(fname, new_info)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 3199, in dl
return fd.download(name, new_info, subtitle)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/http.py", line 367, in real_download
establish_connection()
File "/usr/lib/python3/dist-packages/yt_dlp/downloader/http.py", line 118, in establish_connection
ctx.data = self.ydl.urlopen(request)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/YoutubeDL.py", line 4162, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yt_dlp/networking/_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
```
|
closed
|
2024-12-22T17:35:43Z
|
2024-12-22T18:14:10Z
|
https://github.com/yt-dlp/yt-dlp/issues/11876
|
[
"duplicate",
"site-bug",
"site:youtube"
] |
linxman
| 1
|
opengeos/leafmap
|
plotly
| 448
|
Key errors when adding layer from STAC or COG using multiple bands
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.20.3
- Python version: 3.10.10
- Operating System: Ubuntu 20.04
### Description
Cannot add layer from STAC or COG using multiple bands with `leafmap` utilities.
Sample STAC Item to display (amongst multiple others):
https://planetarycomputer.microsoft.com/api/stac/v1/collections/landsat-c2-l2/items/LE07_L2SP_013028_20150131_02_T1
### What I Did
I can properly retrieve STAC Items using the pySTAC client:
```python
from datetime import datetime, timezone
import planetary_computer
import pystac_client
stac_url = "https://planetarycomputer.microsoft.com/api/stac/v1"
col = ["landsat-c2-l2"]
toi = [
datetime(2015, 1, 1, tzinfo=timezone.utc),
datetime(2020, 2, 1, tzinfo=timezone.utc),
]
crs_epsg3978 = CRS.from_string("EPSG:3978") # noqa
crs_epsg4326 = CRS.from_string("EPSG:4326") # noqa
# Quรฉbec, Canada - Surrounding Montrรฉal and part of Vermont/New-Hampshire, USA
aoi_mtl_epsg3978 = [1600390, -285480, 1900420, 14550]
aoi_mtl_epsg4326 = transform_bounds(crs_epsg3978, crs_epsg4326, *aoi_mtl_epsg3978)
# Quรฉbec, Quรฉbec City
aoi_qcc_epsg3978 = [1600390, 14550, 1900420, 314580]
aoi_qcc_epsg4326 = transform_bounds(crs_epsg3978, crs_epsg4326, *aoi_qcc_epsg3978)
# BoundingBox has 6 values to allow min/max timestamps
# Pad them temporarily as those are not necessary since STAC API use a distinct parameter for the date-time range.
aoi = BoundingBox(*aoi_mtl_epsg4326, 0, 0) | BoundingBox(*aoi_qcc_epsg4326, 0, 0)
aoi = aoi[0:4]
image_bands = ["red", "green", "blue"]
catalog = pystac_client.Client.open(stac_url)
image_items = catalog.search(
bbox=aoi, # reuse the same AOI, since we are interested in the same area as previously obtained labels
datetime=[toi[0], datetime(2015, 2, 1)], # limit TOI since things could have changed since, we cannot rely on too recent images
collections=col,
max_items=5, # there are much more imagery available than labels, we don't need them all for the demo
)
found_image_items = list(image_items.items())
signed_image_items = [
planetary_computer.sign(item)
for item in found_image_items
]
signed_image_items
```
```python
[<Item id=LE07_L2SP_013028_20150131_02_T1>,
<Item id=LE07_L2SP_013027_20150131_02_T1>,
<Item id=LE07_L2SP_013026_20150131_02_T1>,
<Item id=LC08_L2SP_014028_20150130_02_T2>,
<Item id=LC08_L2SP_014027_20150130_02_T2>]
```
The underlying assets can be properly displayed as well, using something like
```python
rioxarray.open_rasterio(asset_href).squeeze().plot.imshow()
```
I would like to improve display with the dynamic zoom and panning `leafmap` provides, but I cannot figure out how to make them work.
```python
import leafmap
lm = leafmap.Map(center=(-70, 45))
for stac_item in signed_image_items:
lm.add_stac_layer(url=stac_url, item=stac_item.id, name=stac_item.id, opacity=0.5, bands=image_bands)
lm
```
This ends up producing the following error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[105], line 12
10 for stac_item in signed_image_items:
11 stac_item_link = signed_image_items[0].get_single_link("self")
---> 12 lm.add_stac_layer(url=stac_url, item=stac_item.id, name=stac_item.id, opacity=0.5, bands=image_bands)
13 lm
File ~/dev/conda/envs/terradue/lib/python3.10/site-packages/leafmap/leafmap.py:922, in Map.add_stac_layer(self, url, collection, item, assets, bands, titiler_endpoint, name, attribution, opacity, shown, fit_bounds, **kwargs)
892 def add_stac_layer(
893 self,
894 url=None,
(...)
905 **kwargs,
906 ):
907 """Adds a STAC TileLayer to the map.
908
909 Args:
(...)
920 fit_bounds (bool, optional): A flag indicating whether the map should be zoomed to the layer extent. Defaults to True.
921 """
--> 922 tile_url = stac_tile(
923 url, collection, item, assets, bands, titiler_endpoint, **kwargs
924 )
925 bounds = stac_bounds(url, collection, item, titiler_endpoint)
926 self.add_tile_layer(tile_url, name, attribution, opacity, shown)
File ~/dev/conda/envs/terradue/lib/python3.10/site-packages/leafmap/stac.py:674, in stac_tile(url, collection, item, assets, bands, titiler_endpoint, **kwargs)
671 else:
672 r = requests.get(titiler_endpoint.url_for_stac_item(), params=kwargs).json()
--> 674 return r["tiles"][0]
KeyError: 'tiles'
```
Alternative using the COG GeoTiff directly:
```python
for stac_item in signed_image_items:
stac_item_link = signed_image_items[0].get_single_link("self")
lm.add_cog_layer(stac_item_link.href, name=stac_item.id, opacity=0.5, bands=image_bands)
lm
```
Causes the error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[108], line 12
10 for stac_item in signed_image_items:
11 stac_item_link = signed_image_items[0].get_single_link("self")
---> 12 lm.add_cog_layer(stac_item_link.href, name=stac_item.id, opacity=0.5)
13 #lm.add_stac_layer(url=stac_url, item=stac_item.id, name=stac_item.id, opacity=0.5, bands=image_bands)
14 lm
File ~/dev/conda/envs/terradue/lib/python3.10/site-packages/leafmap/leafmap.py:864, in Map.add_cog_layer(self, url, name, attribution, opacity, shown, bands, titiler_endpoint, zoom_to_layer, **kwargs)
836 def add_cog_layer(
837 self,
838 url,
(...)
846 **kwargs,
847 ):
848 """Adds a COG TileLayer to the map.
849
850 Args:
(...)
862 apply a rescaling to multiple bands, use something like `rescale=["164,223","130,211","99,212"]`.
863 """
--> 864 tile_url = cog_tile(url, bands, titiler_endpoint, **kwargs)
865 bounds = cog_bounds(url, titiler_endpoint)
866 self.add_tile_layer(tile_url, name, attribution, opacity, shown)
File ~/dev/conda/envs/terradue/lib/python3.10/site-packages/leafmap/stac.py:147, in cog_tile(url, bands, titiler_endpoint, **kwargs)
143 titiler_endpoint = check_titiler_endpoint(titiler_endpoint)
145 kwargs["url"] = url
--> 147 band_names = cog_bands(url, titiler_endpoint)
149 if isinstance(bands, str):
150 bands = [bands]
File ~/dev/conda/envs/terradue/lib/python3.10/site-packages/leafmap/stac.py:406, in cog_bands(url, titiler_endpoint)
398 titiler_endpoint = check_titiler_endpoint(titiler_endpoint)
399 r = requests.get(
400 f"{titiler_endpoint}/cog/info",
401 params={
402 "url": url,
403 },
404 ).json()
--> 406 bands = [b[0] for b in r["band_descriptions"]]
407 return bands
KeyError: 'band_descriptions'
```
I also tried each band individually as follows, but it causes the same errors respectively:
```python
for stac_item in signed_image_items:
#stac_item_link = signed_image_items[0].get_single_link("self")
#lm.add_cog_layer(stac_item_link.href, name=stac_item.id, opacity=0.5, bands=image_bands)
#lm.add_stac_layer(url=stac_url, item=stac_item.id, name=stac_item.id, opacity=0.5, bands=image_bands)
for band in image_bands:
stac_asset = stac_item.assets[band]
#lm.add_stac_layer(stac_asset.href, name=f"{stac_item.id}_{band}", opacity=0.5)
lm.add_cog_layer(stac_asset.href, name=f"{stac_item.id}_{band}", opacity=0.5)
lm
```
I have managed to open other STAC COG images using other references with a similar procedure:
```python
col = ["nrcan-landcover"] # single-band land-cover pixel-wise annotations
# [...] rest of procedure as previously to search and sign STAC Items
lm = leafmap.Map(center=(-70, 45))
for stac_item in signed_label_items:
stac_asset = stac_item.assets["landcover"]
lm.add_cog_layer(stac_asset.href, name=stac_item.id, opacity=0.5, colormap_name="terrain")
lm
# * displays leafmap properly *
```
I'm not sure I can do anything about the failing KeyError cases, where the multiple-band handling expects metadata that is not available.
|
closed
|
2023-05-23T20:36:12Z
|
2024-10-14T08:11:48Z
|
https://github.com/opengeos/leafmap/issues/448
|
[
"bug"
] |
fmigneault
| 7
|
thtrieu/darkflow
|
tensorflow
| 419
|
How to get official object detection evaluation result?
|
Hello!
If I write a paper, I need the official evaluation results (per category) for YOLOv2, etc.
pascal_voc: 20 categories (AP for each category)
COCO: 80 categories (AP for each category)
How can I get the official evaluation results with darkflow?
|
closed
|
2017-11-02T19:12:15Z
|
2017-11-03T18:12:55Z
|
https://github.com/thtrieu/darkflow/issues/419
|
[] |
tlsgb456
| 0
|
pallets/flask
|
flask
| 5,218
|
Resource leaking in http-stream-request
|
If I stop a stream request before it's finished, the stream function will never finish, without any exception or warning.
So, if I want to release something (such as a threading.Lock) at the end of the function, I will never be able to release it.
Here's an example.
```
# flask server
import time
from flask import *
def stream():
print("acquire something")
for i in range(5):
yield str(i)
time.sleep(1)
print("release something")
app = Flask(__name__)
@app.route("/")
def _():
return Response(stream())
if __name__ == '__main__':
app.run("0.0.0.0", 45678)
```
When I request it and stop it early (for example, due to a timeout):
```
>>> curl http://localhost:45678 --max-time 2
01curl: (28) Operation timed out after 2002 milliseconds with 2 bytes received
```
then the function stream() never stops; here's the log.
```
acquire something
127.0.0.1 - - [13/Aug/2023 18:17:01] "GET / HTTP/1.1" 200 -
```
I wish Flask would do one of these:
* throw a ResourceWarning when the request has stopped
* run stream() to the end, whether the request stops or not
* if stream() is an AsyncIterator (Coroutine), send an Exception into it when the request has stopped
Environment:
- Python version: 3.9
- Flask version: 2.2.3
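A minimal sketch of a possible workaround (my assumption, not from the original report): WSGI servers call close() on the response iterator when the client disconnects, which raises GeneratorExit inside the generator, so a try/finally can release the resource:

```python
import threading

lock = threading.Lock()

def stream():
    lock.acquire()  # "acquire something"
    try:
        for i in range(5):
            yield str(i)
    finally:
        lock.release()  # runs even when the client disconnects early

gen = stream()
next(gen)    # the client received the first chunk
gen.close()  # the server closing the iterator -> GeneratorExit -> finally runs
assert not lock.locked()
```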
|
closed
|
2023-08-13T10:18:08Z
|
2023-08-29T00:05:30Z
|
https://github.com/pallets/flask/issues/5218
|
[] |
GoddessLuBoYan
| 3
|
deepset-ai/haystack
|
machine-learning
| 8,995
|
OpenAIChatGenerator uses wrong type hints for streaming_callback
|
**Describe the bug**
A clear and concise description of what the bug is.
**Error message**
Error that was thrown (if available)
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here, like document types / preprocessing steps / settings of reader etc.
**To Reproduce**
Steps to reproduce the behavior
**FAQ Check**
- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number):
- DocumentStore:
- Reader:
- Retriever:
|
closed
|
2025-03-06T14:27:36Z
|
2025-03-06T14:33:24Z
|
https://github.com/deepset-ai/haystack/issues/8995
|
[] |
mathislucka
| 1
|
aws/aws-sdk-pandas
|
pandas
| 2,860
|
Convert CloudWatch LogInsights query results to Dataframe
|
**Scenario:** I was trying to export CloudWatch Insights' query results to csv file using Pandas
**Ask:**
[cloudwatch.run_query()](https://aws-sdk-pandas.readthedocs.io/en/3.8.0/stubs/awswrangler.cloudwatch.run_query.html#awswrangler.cloudwatch.run_query) returns ```List[List[Dict[str, str]]]```. Instead, can this return a pandas DataFrame so that it is easier to export to csv / excel format?
**What I tried:**
```
import awswrangler as wr
from datetime import datetime, timedelta
import pandas as pd
result = wr.cloudwatch.run_query(
log_group_names=["log1"],
query=query1,
start_time = datetime.today() - timedelta(hours=0, minutes=5),
end_time = datetime.today(),
limit = 1
)
df = pd.DataFrame.from_records(result,index=['1', '2'])
df.to_csv('out.csv')
```
This gets me output as
```
,0,1
1,"{'field': 'field1', 'value': '5'}","{'field': 'field2', 'value': '10'}"
```
I was looking for something like
```
field1, field2
5,10
```
Are there any other / easier ways to accomplish this?
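In the meantime, a small workaround sketch (assuming the `List[List[Dict[str, str]]]` shape documented for `run_query`) that flattens the field/value dicts before building the DataFrame:

```python
import pandas as pd

# example of the shape returned by wr.cloudwatch.run_query()
result = [
    [{"field": "field1", "value": "5"}, {"field": "field2", "value": "10"}],
]

# one dict per query record: {field name: value}
rows = [{cell["field"]: cell["value"] for cell in record} for record in result]
df = pd.DataFrame(rows)
csv_text = df.to_csv(index=False)  # header row "field1,field2", then the values
```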
|
closed
|
2024-06-18T02:08:06Z
|
2024-06-24T09:57:36Z
|
https://github.com/aws/aws-sdk-pandas/issues/2860
|
[
"enhancement"
] |
sethusrinivasan
| 1
|
onnx/onnx
|
deep-learning
| 6,385
|
Add coverage badge to readme (https://app.codecov.io/gh/onnx/onnx/tree/main)
|
Add coverage badge of project https://app.codecov.io/gh/onnx/onnx/tree/main to readme:
<img width="636" alt="image" src="https://github.com/user-attachments/assets/d2593d04-13f9-4410-8d35-e65087ee9d89">
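For reference, a badge snippet following the conventional Codecov badge URL format (the exact path is my assumption; Codecov's settings page shows the canonical one):

```markdown
[![codecov](https://codecov.io/gh/onnx/onnx/branch/main/graph/badge.svg)](https://app.codecov.io/gh/onnx/onnx/tree/main)
```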
|
open
|
2024-09-22T07:29:05Z
|
2024-09-22T07:36:45Z
|
https://github.com/onnx/onnx/issues/6385
|
[
"topic: documentation"
] |
andife
| 1
|
apache/airflow
|
data-science
| 47,792
|
[Regression]Asset schedule info not showing correctly on UI for Asset Alias
|
### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
When the downstream DAG depends on an Asset Alias, the schedule info is not shown correctly.


### What you think should happen instead?
_No response_
### How to reproduce
1. Use the below DAG.
```
from __future__ import annotations
import pendulum
from airflow import DAG
from airflow.datasets import Dataset, DatasetAlias
from airflow.decorators import task
with DAG(
dag_id="dataset_alias_example_alias_producer",
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
schedule=None,
catchup=False,
tags=["dataset"],
):
@task(outlets=[DatasetAlias("example-alias")])
def produce_dataset_events_through_dataset_alias(*, outlet_events=None):
bucket_name = "bucket"
object_path = "my-task"
outlet_events[DatasetAlias("example-alias")].add(Dataset(f"s3://{bucket_name}/{object_path}"))
produce_dataset_events_through_dataset_alias()
with DAG(
dag_id="dataset_alias_example_alias_consumer",
start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
schedule=[DatasetAlias("example-alias")],
catchup=False,
tags=["dataset"],
):
@task(inlets=[DatasetAlias("example-alias")])
def consume_dataset_event_from_dataset_alias(*, inlet_events=None):
for event in inlet_events[DatasetAlias("example-alias")]:
print(event)
consume_dataset_event_from_dataset_alias()
```
2. On the DAG list page, find the DAG `dataset_alias_example_alias_consumer` and click on the Asset under Schedule.
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
closed
|
2025-03-14T15:36:23Z
|
2025-03-19T13:37:11Z
|
https://github.com/apache/airflow/issues/47792
|
[
"kind:bug",
"priority:medium",
"area:core",
"area:UI",
"area:datasets",
"affected_version:3.0.0beta"
] |
vatsrahul1001
| 2
|
aiortc/aiortc
|
asyncio
| 899
|
h264_omx not working with Raspberry Pi
|
Hi!
I am trying to use the [`h264_omx`](https://github.com/aiortc/aiortc/blob/9f14474c0953b90139c8697a216e4c2cd8ee5504/src/aiortc/codecs/h264.py#L292) encoding on the RPI, but I get this error:
```
[h264_omx @ 0x68500890] libOMX_Core.so not found
[h264_omx @ 0x68500890] libOmxCore.so not found
[libx264 @ 0x68502040] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x68502040] profile Constrained Baseline, level 3.1, 4:2:0, 8-bit
```
However:
1) I built ffmpeg from source enabling OMX; here are the steps I followed:
```
sudo apt-get remove ffmpeg
sudo apt-get install cmake libomxil-bellagio0 libomxil-bellagio-dev
git clone https://github.com/raspberrypi/userland.git
cd userland
./buildme
wget https://ffmpeg.org/releases/ffmpeg-6.0.tar.xz
tar -xf ffmpeg-6.0.tar.xz && cd ffmpeg-6.0
./configure --enable-omx --enable-omx-rpi
make && sudo make install
```
2) If I test the `omx` encoder from the terminal on the same RPI:
```
ffmpeg -i giphy.gif -c:v h264_omx giphy.mp4 -y
```
it works. I get this:
```
ffmpeg version 6.0 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 10 (Raspbian 10.2.1-6+rpi1)
configuration: --enable-omx --enable-omx-rpi
libavutil 58. 2.100 / 58. 2.100
libavcodec 60. 3.100 / 60. 3.100
libavformat 60. 3.100 / 60. 3.100
libavdevice 60. 1.100 / 60. 1.100
libavfilter 9. 3.100 / 9. 3.100
libswscale 7. 1.100 / 7. 1.100
libswresample 4. 10.100 / 4. 10.100
Input #0, gif, from 'giphy.gif':
Duration: 00:00:02.19, start: 0.000000, bitrate: 6756 kb/s
Stream #0:0: Video: gif, bgra, 800x600, 33.33 fps, 33.33 tbr, 100 tbn
Stream mapping:
Stream #0:0 -> #0:0 (gif (native) -> h264 (h264_omx))
Press [q] to stop, [?] for help
[h264_omx @ 0x2012100] Using OMX.broadcom.video_encode
Output #0, mp4, to 'giphy.mp4':
Metadata:
encoder : Lavf60.3.100
Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p(tv, progressive), 800x600, q=2-31, 200 kb/s, 33.33 fps, 12800 tbn
Metadata:
encoder : Lavc60.3.100 h264_omx
frame= 31 fps=0.0 q=-0.0 Lsize= 24kB time=00:00:00.90 bitrate= 221.5kbits/s speed=1.05x
```
3) If I look for the missing files (`libOMX_Core.so` and `libOmxCore.so`), I cannot find them anywhere; however, I don't know whether they should be somewhere in the first place, or whether it is aiortc that is looking for the wrong files. Here is what I have:
```
ls /opt/vc/lib/
libbcm_host.so libbrcmOpenVG.so libdebug_sym.so libEGL.so libGLESv2_static.a libmmal_components.so libmmal_util.so libOpenVG.so libvcilcs.a libWFC.so
libbrcmEGL.so libbrcmWFC.so libdebug_sym_static.a libEGL_static.a libkhrn_client.a libmmal_core.so libmmal_vc_client.so libvchiq_arm.so libvcos.so pkgconfig
libbrcmGLESv2.so libcontainers.so libdtovl.so libGLESv2.so libkhrn_static.a libmmal.so libopenmaxil.so libvchostif.a libvcsm.so plugins
```
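One quick way to check whether an OMX core library is visible to the dynamic linker (a hypothetical diagnostic, not part of aiortc) is `ctypes.util.find_library`, which consults the same lookup paths the loader uses:

```python
from ctypes.util import find_library

# Ask the dynamic linker where (if anywhere) the OMX libraries live.
# find_library returns the soname/path string, or None when nothing is found.
candidates = ["OMX_Core", "OmxCore", "openmaxil"]
found = {name: find_library(name) for name in candidates}
for name, path in found.items():
    print(f"lib{name}: {path or 'not found'}")
```

Note that on a stock Raspberry Pi OS the Broadcom `libopenmaxil.so` lives under `/opt/vc/lib`, which is typically not on the default linker path, so registering that directory via `ldconfig` or `LD_LIBRARY_PATH` may be relevant here.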
Any help would be greatly appreciated!
|
closed
|
2023-06-20T14:34:16Z
|
2023-10-27T09:34:13Z
|
https://github.com/aiortc/aiortc/issues/899
|
[] |
eliabruni
| 8
|
reloadware/reloadium
|
pandas
| 31
|
Question about logging and telemetry
|
Hi - why does this software connect to "depot.reloadware.com" when it runs? Worryingly, I don't see any source code in this repository that would make such a connection. How did this behavior make it into the PyPI wheel?
Also, I see it's trying to upload logs to sentry.io from my machine. Why? What information is being sent from my machine? How is it stored? Is it anonymized?
This is with Python 3.10.6, Reloadium 0.9.1, on Linux, installed via the PyPI manylinux wheel.
|
closed
|
2022-08-24T02:29:07Z
|
2022-09-21T00:33:14Z
|
https://github.com/reloadware/reloadium/issues/31
|
[] |
avirshup
| 2
|
aiogram/aiogram
|
asyncio
| 1,446
|
Failed deserialization in get_sticker_set method
|
### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Docker ubuntu 22.04
### Python version
3.11
### aiogram version
3.4.1
### Expected behavior
Get StickerSet object on get_sticker_set method
### Current behavior
The method raises a pydantic validation error due to a mismatch between the models and the Telegram Bot API update of 31.03.2024
Failed to deserialize object
Caused from error: pydantic_core._pydantic_core.ValidationError: 2 validation errors for Response[StickerSet]
result.is_animated
Field required [type=missing, input_value={'name': 's2_z8ff4r_by_me...', 'file_size': 23036}]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
result.is_video
Field required [type=missing, input_value={'name': 's2_z8ff4r_by_me...', 'file_size': 23036}]}, input_type=dict]
### Steps to reproduce
1. Call get_sticker_set method
### Code example
```python3
sticker_set = await bot.get_sticker_set(stickerset_name)
```
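Until the schema is updated upstream, one possible client-side shim (hypothetical, not an aiogram API) is to backfill the removed fields in the raw payload before model validation:

```python
def backfill_sticker_set_fields(raw: dict) -> dict:
    """Add the fields the Bot API no longer returns, so older StickerSet models validate.

    `raw` is the dict Telegram returns for getStickerSet; the defaults below
    are assumptions for illustration only.
    """
    patched = dict(raw)
    patched.setdefault("is_animated", False)
    patched.setdefault("is_video", False)
    return patched

payload = {"name": "demo_by_bot", "title": "Demo", "sticker_type": "regular", "stickers": []}
patched = backfill_sticker_set_fields(payload)
```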
### Logs
```sh
Failed to deserialize object
Caused from error: pydantic_core._pydantic_core.ValidationError: 2 validation errors for Response[StickerSet]
result.is_animated
Field required [type=missing, input_value={'name': 's2_z8ff4r_by_me...', 'file_size': 23036}]}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/missing
result.is_video
Field required [type=missing, input_value={'name': 's2_z8ff4r_by_me...', 'file_size': 23036}]}, input_type=dict]
```
### Additional information
_No response_
|
open
|
2024-04-02T11:42:20Z
|
2024-04-02T11:42:20Z
|
https://github.com/aiogram/aiogram/issues/1446
|
[
"bug"
] |
KolosDan
| 0
|
chaos-genius/chaos_genius
|
data-visualization
| 1,221
|
Vertica support
|
## Tell us about the problem you're trying to solve
Add support for the Vertica (vertica.com) database as a data source.
## Describe the solution you'd like
Create `VerticaDb(BaseDb)` connector class implementation using https://pypi.org/project/sqlalchemy-vertica-python/.
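For reference, a minimal sketch (hypothetical function name; the real connector would subclass `BaseDb`) of building the SQLAlchemy URL that `sqlalchemy-vertica-python` registers:

```python
def build_vertica_url(user, password, host, database, port=5433):
    # sqlalchemy-vertica-python provides the "vertica+vertica_python" dialect;
    # 5433 is Vertica's default port.
    return f"vertica+vertica_python://{user}:{password}@{host}:{port}/{database}"

url = build_vertica_url("dbadmin", "secret", "vertica.example.com", "sales")
```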
## Describe alternatives you've considered
Generic ODBC connector.
## Additional context
Tested locally, PR will be submitted soon.
|
open
|
2023-10-16T13:01:49Z
|
2024-04-25T09:09:46Z
|
https://github.com/chaos-genius/chaos_genius/issues/1221
|
[] |
illes
| 4
|
strawberry-graphql/strawberry
|
django
| 2,941
|
tests on windows broken?
|
The hook tests on Windows appear to be broken.
First, there are many deprecation warnings; second, my code change works for all other tests.
PR on which "tests on windows" fail (there are also some other):
https://github.com/strawberry-graphql/strawberry/pull/2938
|
closed
|
2023-07-12T13:22:52Z
|
2025-03-20T15:56:17Z
|
https://github.com/strawberry-graphql/strawberry/issues/2941
|
[
"bug"
] |
devkral
| 2
|
cleanlab/cleanlab
|
data-science
| 653
|
Overlapping Classes Method Issue with Multilabel Data
|
I have a multilabel text classification problem with 33 labels. I am using `find_overlapping_classes` according to its [documentation](https://docs.cleanlab.ai/stable/cleanlab/dataset.html#cleanlab.dataset.find_overlapping_classes)
```python
cleanlab.dataset.find_overlapping_classes(labels=true_labels, pred_probs=pred, multi_label=True)
```
`true_labels` is an iterable of N lists, each containing the labels for a sample.
`pred_probs` is a 2D array of shape N samples ร C classes of out-of-fold predicted probabilities.
```python
len(true_labels), preds.shape
```
```python
(71658, (71658, 33))
```
```python
true_labels[:10]
```
`[[0], [3], [7], [11], [0, 3], [0], [3], [9], [2], [3]]`
# Stack trace
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/Users/dorukhanafacan/Documents/multilang_etl/cleanlab_label_validation.ipynb Cell 40' in <cell line: 1>()
----> [1](vscode-notebook-cell:/Users/dorukhanafacan/Documents/multilang_etl/cleanlab_label_validation.ipynb#ch0000059?line=0) cleanlab.dataset.find_overlapping_classes(labels=true_labels, pred_probs=pred, multi_label=True)
File /opt/anaconda3/envs/dsenv/lib/python3.9/site-packages/cleanlab/dataset.py:258, in find_overlapping_classes(labels, pred_probs, asymmetric, class_names, num_examples, joint, confident_joint, multi_label)
[255](file:///opt/anaconda3/envs/dsenv/lib/python3.9/site-packages/cleanlab/dataset.py?line=254) rcv_list = [tup for tup in rcv_list if tup[0] != tup[1]]
[256](file:///opt/anaconda3/envs/dsenv/lib/python3.9/site-packages/cleanlab/dataset.py?line=255) else: # symmetric
[257](file:///opt/anaconda3/envs/dsenv/lib/python3.9/site-packages/cleanlab/dataset.py?line=256) # Sum the upper and lower triangles and remove the lower triangle and the diagonal
--> [258](file:///opt/anaconda3/envs/dsenv/lib/python3.9/site-packages/cleanlab/dataset.py?line=257) sym_joint = np.triu(joint) + np.tril(joint).T
[259](file:///opt/anaconda3/envs/dsenv/lib/python3.9/site-packages/cleanlab/dataset.py?line=258) rcv_list = _2d_matrix_to_row_column_value_list(sym_joint)
[260](file:///opt/anaconda3/envs/dsenv/lib/python3.9/site-packages/cleanlab/dataset.py?line=259) # Provide values only in (the upper triangle) of the matrix.
ValueError: operands could not be broadcast together with shapes (33,2,2) (2,2,33)
```
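The shape mismatch can be reproduced without cleanlab: with `multi_label=True` the joint is a stack of K per-class 2ร2 matrices, so the symmetric-sum step written for a single CรC matrix cannot broadcast (a minimal reproduction, not cleanlab code):

```python
import numpy as np

# Multi-label confident joint: one 2x2 matrix per class (K = 33).
joint = np.zeros((33, 2, 2))

# The symmetric-sum step assumes a 2-D (C, C) joint; transposing the
# (33, 2, 2) stack yields (2, 2, 33), which cannot broadcast against it.
try:
    sym_joint = np.triu(joint) + np.tril(joint).T
    raised = False
except ValueError:
    raised = True
```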
# Additional information
- **Cleanlab version**: 2.2.1
- **Operating system**: MacOS 12.5
- **Python version**: 3.9.12
|
closed
|
2023-03-16T16:34:54Z
|
2023-07-17T17:54:16Z
|
https://github.com/cleanlab/cleanlab/issues/653
|
[
"bug"
] |
dafajon
| 5
|
hbldh/bleak
|
asyncio
| 809
|
Device Failing to Connect
|
* bleak version: 0.14.2
* Python version: 3.9
* Operating System: Windows 11 Build: 22000.613
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
I am a first-time bleak user and want to talk to a GATT server with a known address. A connection can be established, as shown within Windows Bluetooth settings; however, it seems like bleak never recognizes that the device connects and is stuck waiting.
### What I Did
```python
async with BleakClient(address) as client:
    print('connected')
```
Force quitting this yields the following error:
```
----> 1 async with BleakClient(address) as client:
      2     print('connected')
~\anaconda3\lib\site-packages\bleak\backends\client.py in __aenter__(self)
     59
     60     async def __aenter__(self):
---> 61         await self.connect()
     62         return self
     63
~\anaconda3\lib\site-packages\bleak\backends\winrt\client.py in connect(self, **kwargs)
    273
    274         # Obtain services, which also leads to connection being established.
--> 275         await self.get_services()
    276
    277         return True
~\anaconda3\lib\site-packages\bleak\backends\winrt\client.py in get_services(self, **kwargs)
    444         logger.debug("Get Services...")
    445         services: Sequence[GattDeviceService] = _ensure_success(
--> 446             await self._requester.get_gatt_services_async(
    447                 BluetoothCacheMode.UNCACHED
    448             ),
CancelledError:
```
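One way to avoid waiting forever (a generic asyncio pattern, not a bleak-specific fix) is to bound the connect attempt with `asyncio.wait_for`, so a stalled service discovery surfaces as a `TimeoutError`:

```python
import asyncio

async def connect_with_timeout(make_connect, timeout=10.0):
    # make_connect is a zero-arg callable returning the connect coroutine
    # (e.g. lambda: client.connect() in real bleak code).
    return await asyncio.wait_for(make_connect(), timeout)

async def demo():
    async def fake_connect():
        await asyncio.sleep(0.01)  # stand-in for a real BLE connect
        return True
    return await connect_with_timeout(fake_connect, timeout=1.0)

result = asyncio.run(demo())
```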
|
closed
|
2022-04-18T18:16:57Z
|
2024-05-04T18:35:25Z
|
https://github.com/hbldh/bleak/issues/809
|
[
"Backend: WinRT",
"more info required"
] |
ccorreia-rhaeos
| 13
|
aiortc/aioquic
|
asyncio
| 2
|
Add support for NAT rebinding
|
Currently `aioquic` has no explicit support for rebinding, i.e. dealing with changes in the peer's (IP address, port) tuple.
The specs mention that we should perform path validation if we detect a network path change:
https://tools.ietf.org/html/draft-ietf-quic-transport-20#section-9
Open questions:
- *How should this interact with asyncio's transport / protocol scheme?* Currently the client example creates a "connected" UDP socket which means we never pass the "addr" argument to `sendto()`. Likewise in the server example we store the peer's initial address
- *What does this mean if QUIC is layered on top of ICE?* Who makes the decisions as to which network path to use: ICE or QUIC?
- *What is the behaviour while the new network path is being validated?* Do we continue using the previous network path until path validation completes?
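For the first open question, a minimal sketch (hypothetical, assuming the connection can see the sender address of each datagram) of detecting a network path change that should trigger path validation:

```python
class PathMonitor:
    """Tracks the peer's (host, port) tuple and flags rebinding."""

    def __init__(self):
        self._current_addr = None

    def datagram_received(self, addr):
        # Returns True when the peer's address changed, i.e. the new
        # path should be validated before migrating traffic onto it.
        if self._current_addr is None:
            self._current_addr = addr
            return False
        if addr != self._current_addr:
            self._current_addr = addr
            return True
        return False

monitor = PathMonitor()
first = monitor.datagram_received(("198.51.100.1", 5000))
same = monitor.datagram_received(("198.51.100.1", 5000))
rebound = monitor.datagram_received(("198.51.100.1", 6000))
```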
|
closed
|
2019-05-18T11:57:58Z
|
2019-05-22T20:25:38Z
|
https://github.com/aiortc/aioquic/issues/2
|
[
"enhancement"
] |
jlaine
| 0
|
Farama-Foundation/Gymnasium
|
api
| 1,131
|
[Bug Report] truncated is commented out
|
### Describe the bug
NOTE: This is my first issue report, so the format might be wrong.
There is a discrepancy in the `Mountain Car` environment between the documentation and the code implementation.
According to the Gymnasium documentation, the `Mountain Car` env is supposed to be truncated when the elapsed steps exceed 200.
However, the code comments out the truncation part and guides users to use the `TimeLimit` wrapper.
The documentation needs to be updated.
### Code example
```shell
# truncation=False as the time limit is handled by the `TimeLimit` wrapper added during `make`
return np.array(self.state, dtype=np.float32), reward, terminated, False, {}
```
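For context, the `TimeLimit` wrapper mentioned in the comment behaves roughly like this sketch (a simplified stand-in, assuming a Gymnasium-style five-tuple `step` API):

```python
class TimeLimit:
    """Minimal stand-in for gymnasium.wrappers.TimeLimit."""

    def __init__(self, env, max_episode_steps=200):
        self.env = env
        self.max_episode_steps = max_episode_steps
        self._elapsed_steps = 0

    def reset(self):
        self._elapsed_steps = 0
        return self.env.reset()

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self._elapsed_steps += 1
        if self._elapsed_steps >= self.max_episode_steps:
            truncated = True
        return obs, reward, terminated, truncated, info

class DummyEnv:
    def reset(self):
        return 0.0
    def step(self, action):
        return 0.0, 0.0, False, False, {}

env = TimeLimit(DummyEnv(), max_episode_steps=3)
env.reset()
results = [env.step(0)[3] for _ in range(3)]  # truncated flags per step
```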
### System info
gymnasium is installed via pip.
gymnasium==0.29.1
MacOS 14.5
Python==3.8.19
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
|
closed
|
2024-08-03T02:27:23Z
|
2024-09-25T10:07:15Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/1131
|
[
"bug"
] |
qelloman
| 1
|
falconry/falcon
|
api
| 1,788
|
Cache the mimeparse.best_match result in Handler.find_by_media_type
|
In the current implementation, if a media type is not present in the handlers, we fall back to `mimeparse.best_match` to find the best match for the provided media type.
https://github.com/falconry/falcon/blob/820c51b5f0af8d1c6e9fb8cccb560b6366f97332/falcon/media/handlers.py#L45-L58
We could cache the resolved media type so that best-match calls happen only once for each distinct media type.
Some care would be needed to invalidate the cache if the user modifies the handler dict, and with regard to thread safety (even if in this case the only drawback should be duplicated calls to the `best_match` function).
Before any decision in this regard, we should also measure the performance of `best_match` to see if the speed-up would be worth the extra complexity.
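A rough sketch of the proposed caching (hypothetical names; the real code would wrap `mimeparse.best_match` inside the handlers object), using `functools.lru_cache` so invalidation on handler changes is a `cache_clear()` call:

```python
from functools import lru_cache

handlers = {"application/json": "json_handler", "text/msgpack": "msgpack_handler"}

@lru_cache(maxsize=64)
def resolve_media_type(media_type):
    # Fast path: exact key. Slow path: stand-in for mimeparse.best_match,
    # which now runs only once per distinct media type thanks to the cache.
    if media_type in handlers:
        return handlers[media_type]
    base = media_type.split(";")[0].strip()
    return handlers.get(base)

first = resolve_media_type("application/json; charset=utf-8")
second = resolve_media_type("application/json; charset=utf-8")  # served from cache
# If the user mutates `handlers`, call resolve_media_type.cache_clear().
```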
This discussion started in issue #1717 at https://github.com/falconry/falcon/issues/1717#issuecomment-730631675
|
closed
|
2020-11-23T19:28:42Z
|
2021-05-15T14:48:59Z
|
https://github.com/falconry/falcon/issues/1788
|
[
"enhancement",
"perf"
] |
CaselIT
| 2
|
modelscope/data-juicer
|
data-visualization
| 155
|
[Bug]: There is no attribute `cfg.dataset_dir` in `format/formatter.py:218`
|
### Before Reporting
- [X] I have pulled the latest code of the main branch to run again and the bug still existed.
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you ask a question using the Question template.)
### Search before reporting
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs.
### OS
Ubuntu
### Installation Method
Docker
### Data-Juicer Version
latest
### Python Version
3.10
### Describe the bug
There is no attribute `cfg.dataset_dir` in `format/formatter.py:218`.
https://github.com/alibaba/data-juicer/blob/main/data_juicer/format/formatter.py#L218
### To Reproduce
Use `data_juicer.format.mixture_formatter.py` with
`cfg.data_path=1 xxxxxxx-1.parquet 1 xxxxxxx-2.parquet`
Error message (`data_juicer.format.formatter.py:218`):
AttributeError: 'Namespace' object has no attribute 'dataset_dir'
### Configs
`cfg.data_path=1 xxxxxxx-1.parquet 1 xxxxxxx-2.parquet`
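For reference, the weighted `data_path` string can be parsed with a small helper like this (an illustrative sketch, not data-juicer's actual implementation):

```python
def parse_weighted_data_path(data_path):
    # "1 a.parquet 2 b.parquet" -> [(1.0, "a.parquet"), (2.0, "b.parquet")]
    tokens = data_path.split()
    pairs = []
    for i in range(0, len(tokens), 2):
        pairs.append((float(tokens[i]), tokens[i + 1]))
    return pairs

pairs = parse_weighted_data_path("1 xxxxxxx-1.parquet 1 xxxxxxx-2.parquet")
```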
### Logs
_No response_
### Screenshots
_No response_
### Additional
_No response_
|
closed
|
2023-12-26T04:06:07Z
|
2023-12-26T14:54:13Z
|
https://github.com/modelscope/data-juicer/issues/155
|
[
"bug"
] |
sylcjl
| 1
|
ymcui/Chinese-LLaMA-Alpaca-2
|
nlp
| 353
|
Does it support full-parameter fine-tuning?
|
### Pre-submission checklist
The following must be checked before submitting:
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the issues without finding a similar problem or solution.
- [X] For third-party plugin issues, e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), we recommend looking for solutions in the corresponding projects first.
### Issue type
Model training and fine-tuning
### Base model
Others
### Operating system
Linux
### Describe the issue in detail
```
# Please paste the run command here (inside this code block)
```
### Dependencies (required for code-related issues)
```
# Please paste the dependency information here (inside this code block)
```
### Logs or screenshots
```
# Please paste the run logs here (inside this code block)
```
|
closed
|
2023-10-18T12:27:15Z
|
2023-10-26T03:39:37Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/353
|
[] |
CHAOJICHENG5
| 2
|
nltk/nltk
|
nlp
| 2,477
|
Installation of nltk 3.4.5 problem with pip on Ubuntu and python 3.6.8
|
I am using latest pip (19.3.1) and when I try to install nltk (following the installation documentation) I get:
```
Collecting nltk
Downloading https://files.pythonhosted.org/packages/f6/1d/d925cfb4f324ede997f6d47bea4d9babba51b49e87a767c170b77005889d/nltk-3.4.5.zip (1.5MB)
|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1.5MB 2.0MB/s
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from nltk) (1.13.0)
Building wheels for collected packages: nltk
Building wheel for nltk (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-nnqqbh5b/nltk/setup.py'"'"'; __file__='"'"'/tmp/pip-install-nnqqbh5b/nltk/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-wd9ue65u --python-tag cp36
cwd: /tmp/pip-install-nnqqbh5b/nltk/
Complete output (405 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/nltk
copying nltk/grammar.py -> build/lib/nltk
copying nltk/book.py -> build/lib/nltk
copying nltk/jsontags.py -> build/lib/nltk
copying nltk/toolbox.py -> build/lib/nltk
copying nltk/treeprettyprinter.py -> build/lib/nltk
copying nltk/tree.py -> build/lib/nltk
copying nltk/lazyimport.py -> build/lib/nltk
copying nltk/featstruct.py -> build/lib/nltk
copying nltk/collections.py -> build/lib/nltk
copying nltk/treetransforms.py -> build/lib/nltk
copying nltk/__init__.py -> build/lib/nltk
copying nltk/data.py -> build/lib/nltk
copying nltk/compat.py -> build/lib/nltk
copying nltk/tgrep.py -> build/lib/nltk
copying nltk/wsd.py -> build/lib/nltk
copying nltk/util.py -> build/lib/nltk
copying nltk/text.py -> build/lib/nltk
copying nltk/decorators.py -> build/lib/nltk
copying nltk/collocations.py -> build/lib/nltk
copying nltk/downloader.py -> build/lib/nltk
copying nltk/help.py -> build/lib/nltk
copying nltk/internals.py -> build/lib/nltk
copying nltk/probability.py -> build/lib/nltk
creating build/lib/nltk/sentiment
copying nltk/sentiment/__init__.py -> build/lib/nltk/sentiment
copying nltk/sentiment/util.py -> build/lib/nltk/sentiment
copying nltk/sentiment/vader.py -> build/lib/nltk/sentiment
copying nltk/sentiment/sentiment_analyzer.py -> build/lib/nltk/sentiment
creating build/lib/nltk/cluster
copying nltk/cluster/kmeans.py -> build/lib/nltk/cluster
copying nltk/cluster/gaac.py -> build/lib/nltk/cluster
copying nltk/cluster/__init__.py -> build/lib/nltk/cluster
copying nltk/cluster/api.py -> build/lib/nltk/cluster
copying nltk/cluster/util.py -> build/lib/nltk/cluster
copying nltk/cluster/em.py -> build/lib/nltk/cluster
creating build/lib/nltk/chat
copying nltk/chat/eliza.py -> build/lib/nltk/chat
copying nltk/chat/rude.py -> build/lib/nltk/chat
copying nltk/chat/__init__.py -> build/lib/nltk/chat
copying nltk/chat/zen.py -> build/lib/nltk/chat
copying nltk/chat/util.py -> build/lib/nltk/chat
copying nltk/chat/iesha.py -> build/lib/nltk/chat
copying nltk/chat/suntsu.py -> build/lib/nltk/chat
creating build/lib/nltk/tag
copying nltk/tag/brill.py -> build/lib/nltk/tag
copying nltk/tag/stanford.py -> build/lib/nltk/tag
copying nltk/tag/mapping.py -> build/lib/nltk/tag
copying nltk/tag/__init__.py -> build/lib/nltk/tag
copying nltk/tag/senna.py -> build/lib/nltk/tag
copying nltk/tag/hunpos.py -> build/lib/nltk/tag
copying nltk/tag/api.py -> build/lib/nltk/tag
copying nltk/tag/hmm.py -> build/lib/nltk/tag
copying nltk/tag/util.py -> build/lib/nltk/tag
copying nltk/tag/perceptron.py -> build/lib/nltk/tag
copying nltk/tag/sequential.py -> build/lib/nltk/tag
copying nltk/tag/brill_trainer.py -> build/lib/nltk/tag
copying nltk/tag/crf.py -> build/lib/nltk/tag
copying nltk/tag/tnt.py -> build/lib/nltk/tag
creating build/lib/nltk/ccg
copying nltk/ccg/combinator.py -> build/lib/nltk/ccg
copying nltk/ccg/__init__.py -> build/lib/nltk/ccg
copying nltk/ccg/api.py -> build/lib/nltk/ccg
copying nltk/ccg/logic.py -> build/lib/nltk/ccg
copying nltk/ccg/lexicon.py -> build/lib/nltk/ccg
copying nltk/ccg/chart.py -> build/lib/nltk/ccg
creating build/lib/nltk/twitter
copying nltk/twitter/common.py -> build/lib/nltk/twitter
copying nltk/twitter/twitter_demo.py -> build/lib/nltk/twitter
copying nltk/twitter/__init__.py -> build/lib/nltk/twitter
copying nltk/twitter/api.py -> build/lib/nltk/twitter
copying nltk/twitter/util.py -> build/lib/nltk/twitter
copying nltk/twitter/twitterclient.py -> build/lib/nltk/twitter
creating build/lib/nltk/metrics
copying nltk/metrics/association.py -> build/lib/nltk/metrics
copying nltk/metrics/paice.py -> build/lib/nltk/metrics
copying nltk/metrics/__init__.py -> build/lib/nltk/metrics
copying nltk/metrics/agreement.py -> build/lib/nltk/metrics
copying nltk/metrics/distance.py -> build/lib/nltk/metrics
copying nltk/metrics/scores.py -> build/lib/nltk/metrics
copying nltk/metrics/spearman.py -> build/lib/nltk/metrics
copying nltk/metrics/confusionmatrix.py -> build/lib/nltk/metrics
copying nltk/metrics/aline.py -> build/lib/nltk/metrics
copying nltk/metrics/segmentation.py -> build/lib/nltk/metrics
creating build/lib/nltk/lm
copying nltk/lm/smoothing.py -> build/lib/nltk/lm
copying nltk/lm/preprocessing.py -> build/lib/nltk/lm
copying nltk/lm/models.py -> build/lib/nltk/lm
copying nltk/lm/counter.py -> build/lib/nltk/lm
copying nltk/lm/__init__.py -> build/lib/nltk/lm
copying nltk/lm/api.py -> build/lib/nltk/lm
copying nltk/lm/util.py -> build/lib/nltk/lm
copying nltk/lm/vocabulary.py -> build/lib/nltk/lm
creating build/lib/nltk/inference
copying nltk/inference/resolution.py -> build/lib/nltk/inference
copying nltk/inference/prover9.py -> build/lib/nltk/inference
copying nltk/inference/nonmonotonic.py -> build/lib/nltk/inference
copying nltk/inference/__init__.py -> build/lib/nltk/inference
copying nltk/inference/tableau.py -> build/lib/nltk/inference
copying nltk/inference/api.py -> build/lib/nltk/inference
copying nltk/inference/discourse.py -> build/lib/nltk/inference
copying nltk/inference/mace.py -> build/lib/nltk/inference
creating build/lib/nltk/translate
copying nltk/translate/ibm3.py -> build/lib/nltk/translate
copying nltk/translate/ribes_score.py -> build/lib/nltk/translate
copying nltk/translate/chrf_score.py -> build/lib/nltk/translate
copying nltk/translate/ibm4.py -> build/lib/nltk/translate
copying nltk/translate/metrics.py -> build/lib/nltk/translate
copying nltk/translate/phrase_based.py -> build/lib/nltk/translate
copying nltk/translate/gale_church.py -> build/lib/nltk/translate
copying nltk/translate/nist_score.py -> build/lib/nltk/translate
copying nltk/translate/__init__.py -> build/lib/nltk/translate
copying nltk/translate/ibm5.py -> build/lib/nltk/translate
copying nltk/translate/api.py -> build/lib/nltk/translate
copying nltk/translate/gleu_score.py -> build/lib/nltk/translate
copying nltk/translate/meteor_score.py -> build/lib/nltk/translate
copying nltk/translate/stack_decoder.py -> build/lib/nltk/translate
copying nltk/translate/ibm2.py -> build/lib/nltk/translate
copying nltk/translate/bleu_score.py -> build/lib/nltk/translate
copying nltk/translate/ibm1.py -> build/lib/nltk/translate
copying nltk/translate/ibm_model.py -> build/lib/nltk/translate
copying nltk/translate/gdfa.py -> build/lib/nltk/translate
creating build/lib/nltk/corpus
copying nltk/corpus/europarl_raw.py -> build/lib/nltk/corpus
copying nltk/corpus/__init__.py -> build/lib/nltk/corpus
copying nltk/corpus/util.py -> build/lib/nltk/corpus
creating build/lib/nltk/tokenize
copying nltk/tokenize/repp.py -> build/lib/nltk/tokenize
copying nltk/tokenize/sexpr.py -> build/lib/nltk/tokenize
copying nltk/tokenize/stanford.py -> build/lib/nltk/tokenize
copying nltk/tokenize/toktok.py -> build/lib/nltk/tokenize
copying nltk/tokenize/regexp.py -> build/lib/nltk/tokenize
copying nltk/tokenize/casual.py -> build/lib/nltk/tokenize
copying nltk/tokenize/__init__.py -> build/lib/nltk/tokenize
copying nltk/tokenize/treebank.py -> build/lib/nltk/tokenize
copying nltk/tokenize/api.py -> build/lib/nltk/tokenize
copying nltk/tokenize/simple.py -> build/lib/nltk/tokenize
copying nltk/tokenize/stanford_segmenter.py -> build/lib/nltk/tokenize
copying nltk/tokenize/util.py -> build/lib/nltk/tokenize
copying nltk/tokenize/punkt.py -> build/lib/nltk/tokenize
copying nltk/tokenize/mwe.py -> build/lib/nltk/tokenize
copying nltk/tokenize/texttiling.py -> build/lib/nltk/tokenize
copying nltk/tokenize/sonority_sequencing.py -> build/lib/nltk/tokenize
copying nltk/tokenize/nist.py -> build/lib/nltk/tokenize
creating build/lib/nltk/app
copying nltk/app/chartparser_app.py -> build/lib/nltk/app
copying nltk/app/chunkparser_app.py -> build/lib/nltk/app
copying nltk/app/nemo_app.py -> build/lib/nltk/app
copying nltk/app/__init__.py -> build/lib/nltk/app
copying nltk/app/rdparser_app.py -> build/lib/nltk/app
copying nltk/app/wordnet_app.py -> build/lib/nltk/app
copying nltk/app/concordance_app.py -> build/lib/nltk/app
copying nltk/app/collocations_app.py -> build/lib/nltk/app
copying nltk/app/srparser_app.py -> build/lib/nltk/app
copying nltk/app/wordfreq_app.py -> build/lib/nltk/app
creating build/lib/nltk/sem
copying nltk/sem/relextract.py -> build/lib/nltk/sem
copying nltk/sem/lfg.py -> build/lib/nltk/sem
copying nltk/sem/chat80.py -> build/lib/nltk/sem
copying nltk/sem/__init__.py -> build/lib/nltk/sem
copying nltk/sem/linearlogic.py -> build/lib/nltk/sem
copying nltk/sem/hole.py -> build/lib/nltk/sem
copying nltk/sem/drt_glue_demo.py -> build/lib/nltk/sem
copying nltk/sem/boxer.py -> build/lib/nltk/sem
copying nltk/sem/cooper_storage.py -> build/lib/nltk/sem
copying nltk/sem/util.py -> build/lib/nltk/sem
copying nltk/sem/glue.py -> build/lib/nltk/sem
copying nltk/sem/logic.py -> build/lib/nltk/sem
copying nltk/sem/drt.py -> build/lib/nltk/sem
copying nltk/sem/skolemize.py -> build/lib/nltk/sem
copying nltk/sem/evaluate.py -> build/lib/nltk/sem
creating build/lib/nltk/stem
copying nltk/stem/isri.py -> build/lib/nltk/stem
copying nltk/stem/wordnet.py -> build/lib/nltk/stem
copying nltk/stem/regexp.py -> build/lib/nltk/stem
copying nltk/stem/__init__.py -> build/lib/nltk/stem
copying nltk/stem/api.py -> build/lib/nltk/stem
copying nltk/stem/porter.py -> build/lib/nltk/stem
copying nltk/stem/lancaster.py -> build/lib/nltk/stem
copying nltk/stem/util.py -> build/lib/nltk/stem
copying nltk/stem/snowball.py -> build/lib/nltk/stem
copying nltk/stem/cistem.py -> build/lib/nltk/stem
copying nltk/stem/rslp.py -> build/lib/nltk/stem
copying nltk/stem/arlstem.py -> build/lib/nltk/stem
creating build/lib/nltk/parse
copying nltk/parse/bllip.py -> build/lib/nltk/parse
copying nltk/parse/corenlp.py -> build/lib/nltk/parse
copying nltk/parse/stanford.py -> build/lib/nltk/parse
copying nltk/parse/recursivedescent.py -> build/lib/nltk/parse
copying nltk/parse/generate.py -> build/lib/nltk/parse
copying nltk/parse/__init__.py -> build/lib/nltk/parse
copying nltk/parse/pchart.py -> build/lib/nltk/parse
copying nltk/parse/api.py -> build/lib/nltk/parse
copying nltk/parse/shiftreduce.py -> build/lib/nltk/parse
copying nltk/parse/util.py -> build/lib/nltk/parse
copying nltk/parse/viterbi.py -> build/lib/nltk/parse
copying nltk/parse/malt.py -> build/lib/nltk/parse
copying nltk/parse/transitionparser.py -> build/lib/nltk/parse
copying nltk/parse/featurechart.py -> build/lib/nltk/parse
copying nltk/parse/dependencygraph.py -> build/lib/nltk/parse
copying nltk/parse/projectivedependencyparser.py -> build/lib/nltk/parse
copying nltk/parse/earleychart.py -> build/lib/nltk/parse
copying nltk/parse/evaluate.py -> build/lib/nltk/parse
copying nltk/parse/nonprojectivedependencyparser.py -> build/lib/nltk/parse
copying nltk/parse/chart.py -> build/lib/nltk/parse
creating build/lib/nltk/chunk
copying nltk/chunk/named_entity.py -> build/lib/nltk/chunk
copying nltk/chunk/regexp.py -> build/lib/nltk/chunk
copying nltk/chunk/__init__.py -> build/lib/nltk/chunk
copying nltk/chunk/api.py -> build/lib/nltk/chunk
copying nltk/chunk/util.py -> build/lib/nltk/chunk
creating build/lib/nltk/classify
copying nltk/classify/decisiontree.py -> build/lib/nltk/classify
copying nltk/classify/positivenaivebayes.py -> build/lib/nltk/classify
copying nltk/classify/__init__.py -> build/lib/nltk/classify
copying nltk/classify/senna.py -> build/lib/nltk/classify
copying nltk/classify/api.py -> build/lib/nltk/classify
copying nltk/classify/svm.py -> build/lib/nltk/classify
copying nltk/classify/naivebayes.py -> build/lib/nltk/classify
copying nltk/classify/textcat.py -> build/lib/nltk/classify
copying nltk/classify/util.py -> build/lib/nltk/classify
copying nltk/classify/tadm.py -> build/lib/nltk/classify
copying nltk/classify/weka.py -> build/lib/nltk/classify
copying nltk/classify/rte_classify.py -> build/lib/nltk/classify
copying nltk/classify/maxent.py -> build/lib/nltk/classify
copying nltk/classify/scikitlearn.py -> build/lib/nltk/classify
copying nltk/classify/megam.py -> build/lib/nltk/classify
creating build/lib/nltk/test
copying nltk/test/segmentation_fixt.py -> build/lib/nltk/test
copying nltk/test/semantics_fixt.py -> build/lib/nltk/test
copying nltk/test/translate_fixt.py -> build/lib/nltk/test
copying nltk/test/inference_fixt.py -> build/lib/nltk/test
copying nltk/test/discourse_fixt.py -> build/lib/nltk/test
copying nltk/test/doctest_nose_plugin.py -> build/lib/nltk/test
copying nltk/test/__init__.py -> build/lib/nltk/test
copying nltk/test/all.py -> build/lib/nltk/test
copying nltk/test/classify_fixt.py -> build/lib/nltk/test
copying nltk/test/gluesemantics_malt_fixt.py -> build/lib/nltk/test
copying nltk/test/wordnet_fixt.py -> build/lib/nltk/test
copying nltk/test/compat_fixt.py -> build/lib/nltk/test
copying nltk/test/probability_fixt.py -> build/lib/nltk/test
copying nltk/test/gensim_fixt.py -> build/lib/nltk/test
copying nltk/test/nonmonotonic_fixt.py -> build/lib/nltk/test
copying nltk/test/corpus_fixt.py -> build/lib/nltk/test
copying nltk/test/childes_fixt.py -> build/lib/nltk/test
copying nltk/test/runtests.py -> build/lib/nltk/test
copying nltk/test/portuguese_en_fixt.py -> build/lib/nltk/test
creating build/lib/nltk/tbl
copying nltk/tbl/rule.py -> build/lib/nltk/tbl
copying nltk/tbl/demo.py -> build/lib/nltk/tbl
copying nltk/tbl/__init__.py -> build/lib/nltk/tbl
copying nltk/tbl/api.py -> build/lib/nltk/tbl
copying nltk/tbl/template.py -> build/lib/nltk/tbl
copying nltk/tbl/erroranalysis.py -> build/lib/nltk/tbl
copying nltk/tbl/feature.py -> build/lib/nltk/tbl
creating build/lib/nltk/draw
copying nltk/draw/tree.py -> build/lib/nltk/draw
copying nltk/draw/__init__.py -> build/lib/nltk/draw
copying nltk/draw/util.py -> build/lib/nltk/draw
copying nltk/draw/table.py -> build/lib/nltk/draw
copying nltk/draw/dispersion.py -> build/lib/nltk/draw
copying nltk/draw/cfg.py -> build/lib/nltk/draw
creating build/lib/nltk/misc
copying nltk/misc/chomsky.py -> build/lib/nltk/misc
copying nltk/misc/sort.py -> build/lib/nltk/misc
copying nltk/misc/__init__.py -> build/lib/nltk/misc
copying nltk/misc/minimalset.py -> build/lib/nltk/misc
copying nltk/misc/wordfinder.py -> build/lib/nltk/misc
copying nltk/misc/babelfish.py -> build/lib/nltk/misc
creating build/lib/nltk/corpus/reader
copying nltk/corpus/reader/ycoe.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/sentiwordnet.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/panlex_lite.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/pros_cons.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/switchboard.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/chunked.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/toolbox.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/nkjp.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/categorized_sents.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/reviews.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/childes.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/knbc.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/aligned.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/wordnet.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/pl196x.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/comparative_sents.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/__init__.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/senseval.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/rte.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/plaintext.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/sinica_treebank.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/api.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/dependency.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/crubadan.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/verbnet.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/panlex_swadesh.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/nps_chat.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/util.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/opinion_lexicon.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/mte.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/lin.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/cmudict.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/udhr.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/indian.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/twitter.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/bnc.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/conll.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/string_category.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/tagged.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/framenet.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/chasen.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/ipipan.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/wordlist.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/timit.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/nombank.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/xmldocs.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/propbank.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/ieer.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/semcor.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/ppattach.py -> build/lib/nltk/corpus/reader
copying nltk/corpus/reader/bracket_parse.py -> build/lib/nltk/corpus/reader
creating build/lib/nltk/test/unit
copying nltk/test/unit/test_collocations.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_naivebayes.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_brill.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_2x_compat.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_corpus_views.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_twitter_auth.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_tgrep.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_nombank.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_tokenize.py -> build/lib/nltk/test/unit
copying nltk/test/unit/__init__.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_json2csv_corpus.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_wordnet.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_corpora.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_rte_classify.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_tag.py -> build/lib/nltk/test/unit
copying nltk/test/unit/utils.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_classify.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_cfg2chomsky.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_concordance.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_seekable_unicode_stream_reader.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_senna.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_chunk.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_stem.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_pos_tag.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_hmm.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_aline.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_disagreement.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_corenlp.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_data.py -> build/lib/nltk/test/unit
copying nltk/test/unit/test_cfd_mutation.py -> build/lib/nltk/test/unit
creating build/lib/nltk/test/unit/lm
copying nltk/test/unit/lm/__init__.py -> build/lib/nltk/test/unit/lm
copying nltk/test/unit/lm/test_preprocessing.py -> build/lib/nltk/test/unit/lm
copying nltk/test/unit/lm/test_counter.py -> build/lib/nltk/test/unit/lm
copying nltk/test/unit/lm/test_models.py -> build/lib/nltk/test/unit/lm
copying nltk/test/unit/lm/test_vocabulary.py -> build/lib/nltk/test/unit/lm
creating build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_gdfa.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_stack_decoder.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_nist.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/__init__.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_ibm3.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_ibm1.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_ibm2.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_ibm4.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_bleu.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_ibm_model.py -> build/lib/nltk/test/unit/translate
copying nltk/test/unit/translate/test_ibm5.py -> build/lib/nltk/test/unit/translate
copying nltk/test/stem.doctest -> build/lib/nltk/test
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-nnqqbh5b/nltk/setup.py", line 103, in <module>
zip_safe=False, # since normal files will be present too?
File "/usr/local/lib/python3.6/dist-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/01677387637/.local/lib/python3.6/site-packages/wheel/bdist_wheel.py", line 192, in run
self.run_command('build')
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_py.py", line 53, in run
self.build_package_data()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_py.py", line 126, in build_package_data
srcfile in self.distribution.convert_2to3_doctests):
TypeError: argument of type 'NoneType' is not iterable
----------------------------------------
ERROR: Failed building wheel for nltk
Running setup.py clean for nltk
Failed to build nltk
Installing collected packages: nltk
Running setup.py install for nltk ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-nnqqbh5b/nltk/setup.py'"'"'; __file__='"'"'/tmp/pip-install-nnqqbh5b/nltk/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-hlea11gz/install-record.txt --single-version-externally-managed --compile
cwd: /tmp/pip-install-nnqqbh5b/nltk/
Complete output (407 lines):
running install
running build
running build_py
creating build
creating build/lib
creating build/lib/nltk
    [... identical "creating"/"copying" output as in the wheel build above ...]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-nnqqbh5b/nltk/setup.py", line 103, in <module>
zip_safe=False, # since normal files will be present too?
File "/usr/local/lib/python3.6/dist-packages/setuptools/__init__.py", line 145, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/lib/python3.6/distutils/command/install.py", line 589, in run
self.run_command('build')
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_py.py", line 53, in run
self.build_package_data()
File "/usr/local/lib/python3.6/dist-packages/setuptools/command/build_py.py", line 126, in build_package_data
srcfile in self.distribution.convert_2to3_doctests):
TypeError: argument of type 'NoneType' is not iterable
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-nnqqbh5b/nltk/setup.py'"'"'; __file__='"'"'/tmp/pip-install-nnqqbh5b/nltk/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-hlea11gz/install-record.txt --single-version-externally-managed --compile Check the logs for full command output.
```
Any idea of what to do?
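The `TypeError` at the bottom of both tracebacks comes from setuptools' `build_py` running a membership test (`srcfile in self.distribution.convert_2to3_doctests`) against an attribute that is `None` with this setup.py/setuptools combination. A stdlib-only sketch of that failure mode (the names here are illustrative, not the real setuptools internals):

```python
# Minimal reproduction of the failure mode seen in the build log:
# a membership test against None raises TypeError, because None is
# not iterable and defines no __contains__.
convert_2to3_doctests = None  # attribute left unset in this scenario

def contains(srcfile, doctests):
    # Mirrors `srcfile in self.distribution.convert_2to3_doctests`
    return srcfile in doctests

try:
    contains("nltk/test/stem.doctest", convert_2to3_doctests)
except TypeError as exc:
    message = str(exc)

print(message)  # argument of type 'NoneType' is not iterable
```

Guarding such a test with `doctests or ()` would avoid the crash; in practice the common workaround for this class of build failure has been to upgrade to a newer nltk release or to a setuptools version compatible with the old 2to3 options.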
state: closed
created_at: 2019-12-13T12:03:15Z
updated_at: 2024-01-17T09:35:46Z
url: https://github.com/nltk/nltk/issues/2477
labels: []
user_login: staticdev
comments_count: 1
repo_name: miguelgrinberg/Flask-Migrate
topic: flask
issue_number: 173
title: "$ python manage.py db init" and "migrate" commands doesn't run as expected
Hello. I'm learning to use Flask-Migrate, but I've run into a problem trying it on my new project.
I'm trying to use Flask-Migrate together with Flask-Script; here is the manage.py file:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys

from flask_sqlalchemy import SQLAlchemy
from flask_script import Manager
from flask_script import Command
from flask_script import prompt_bool
from flask_migrate import Migrate, MigrateCommand

from web.app import create_app
from web.models import drop_all

app = create_app()
db = SQLAlchemy(app)
migrate = Migrate(app, db)

manager = Manager(app)
manager.add_command('db', MigrateCommand)


class DropDataBaseTables(Command):

    def __init__(self):
        super(DropDataBaseTables, self).__init__()

    def run(self):
        """Drop all tables after the user has confirmed."""
        verified = prompt_bool(
            'Do you really want to drop all the database tables?')
        if verified:
            drop_all()
            sys.stdout.write('All tables dropped. Database is now empty.\n')


manager.add_command('drop_db', DropDataBaseTables())


if __name__ == '__main__':
    manager.run()
```
I then followed the tutorial to run the 'init' and 'migrate' commands.
After the 'init' command, the terminal shows:
```
> (env_ebsproxy) arch_learning3@arch-VirtualBox2:ebsproxy (develop*)$ python manage.py db init
> /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/env_ebsproxy/local/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py:839: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
> 'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
> Creating directory /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations ... done
> Creating directory /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/versions ... done
> Generating /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/env.py ... done
> Generating /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/README ... done
> Generating /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/script.py.mako ... done
> Generating /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/alembic.ini ... done
> Generating /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/env.pyc ... done
> Please edit configuration/connection/logging settings in
> '/home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/alembic.ini' before proceeding.
```
and I checked the mysql tables:
```
mysql> show tables;
+---------------------+
| Tables_in_ebs_proxy |
+---------------------+
| alembic_version |
| application |
| course_proxy |
| user_proxy |
+---------------------+
4 rows in set (0.00 sec)
```
then I ran the 'migrate' command, and the terminal shows:
```
(env_ebsproxy) arch_learning3@arch-VirtualBox2:ebsproxy (develop*)$ python manage.py db migrate
/home/arch_learning3/workplace/ebs_gitlab/ebsproxy/env_ebsproxy/local/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py:839: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
'SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and '
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate.compare] Detected removed table u'user_proxy'
INFO [alembic.autogenerate.compare] Detected removed table u'application'
INFO [alembic.autogenerate.compare] Detected removed table u'course_proxy'
Generating /home/arch_learning3/workplace/ebs_gitlab/ebsproxy/ebsproxy/migrations/versions/b6052e78976d_.py ... done
```
Why does it create the tables during the 'init' command, and why does the first 'migrate' run detect them as removed? Am I missing anything?
|
closed
|
2017-11-01T05:05:07Z
|
2019-01-13T22:20:34Z
|
https://github.com/miguelgrinberg/Flask-Migrate/issues/173
|
[
"question",
"auto-closed"
] |
Littelarch
| 5
|
twopirllc/pandas-ta
|
pandas
| 660
|
Requesting some indicators found in Tushare
|
I am trying to apply some technical indicators I saw used in a research paper, which used Tushare to acquire the data; however, I find pandas-ta much more convenient because of the strategies feature. Unfortunately, a few indicators exist in Tushare but not in pandas-ta. Also, the stochastic indicator seems to be calculated differently; is this actually a different indicator? If so, can Tushare's version be implemented as well?
Tushare's indicators:
https://github.com/waditu/tushare/blob/master/tushare/stock/indictor.py
note their KDJ does not use ema or sma, which lines up with this: https://www.moomoo.com/us/learn/detail-what-is-the-kdj-67019-220809006
Is this the same as the stochastic indicator in pandas-ta?
Also requesting:
ADXR
BBI Bull And Bear Index
ASI Accumulation Swing Index
WR Williams Overbought/Oversold Index
VR Volatility Volume Ratio
All of which can be found in that tushare file.
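For reference, the recursive KDJ described in the linked article can be sketched in pandas as follows. This is a hypothetical illustration, not pandas-ta code: the function name, the seed value of 50, and the 2/3–1/3 weights are assumptions taken from the article's description, and the result will differ from pandas-ta's `stoch`, which uses rolling SMA/EMA smoothing instead.

```python
import pandas as pd

def kdj_recursive(high, low, close, n=9):
    """Hypothetical sketch of the recursive KDJ variant: K and D are weighted
    averages of the previous value and the new RSV, rather than rolling
    SMA/EMA smoothings as in pandas-ta's stoch."""
    low_n = low.rolling(n, min_periods=1).min()
    high_n = high.rolling(n, min_periods=1).max()
    rng = (high_n - low_n).where(high_n != low_n)  # NaN when the range is zero
    rsv = ((close - low_n) * 100 / rng).fillna(50.0)
    k_vals, d_vals = [], []
    prev_k = prev_d = 50.0  # conventional seed values (assumed)
    for r in rsv:
        prev_k = (2 / 3) * prev_k + (1 / 3) * r
        prev_d = (2 / 3) * prev_d + (1 / 3) * prev_k
        k_vals.append(prev_k)
        d_vals.append(prev_d)
    k = pd.Series(k_vals, index=close.index)
    d = pd.Series(d_vals, index=close.index)
    return pd.DataFrame({"K": k, "D": d, "J": 3 * k - 2 * d})
```

Comparing this against pandas-ta's `stoch` on the same series would make it clear whether they are in fact different indicators.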
|
open
|
2023-02-11T03:13:40Z
|
2023-02-13T17:12:07Z
|
https://github.com/twopirllc/pandas-ta/issues/660
|
[
"enhancement"
] |
andymc31
| 1
|
custom-components/pyscript
|
jupyter
| 118
|
No error message in log when using illegal time_active statement
|
Hi,
having an error in a time_active statement does not generate an error message. For example, this statement does not produce any entry in the log file and prevents the rule from being fired:
`@time_active("range(sunrise, sunset")`
Hint: there is one closing bracket missing after the sunset.
Is it possible to log such errors? It is really painful to find them when they are silently ignored.
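A tiny sketch of the kind of up-front validation being requested: raise (and therefore log) on a malformed time specification instead of dropping it silently. `check_balanced` is a hypothetical helper name, not part of pyscript:

```python
def check_balanced(expr):
    """Raise ValueError if parentheses in a time specification are unbalanced,
    e.g. 'range(sunrise, sunset' with a missing closing bracket."""
    depth = 0
    for ch in expr:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                raise ValueError(f"unbalanced ')' in {expr!r}")
    if depth != 0:
        raise ValueError(f"{depth} closing parenthesis missing in {expr!r}")
```

Running this over decorator arguments at trigger-registration time would surface the typo in the log instead of silently disabling the rule.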
Besides this, kudos for developing this great tool. Thanks and keep going! :-)
|
closed
|
2020-12-14T09:51:15Z
|
2020-12-14T12:37:36Z
|
https://github.com/custom-components/pyscript/issues/118
|
[] |
Michael-CGN
| 0
|
piskvorky/gensim
|
machine-learning
| 3,049
|
pip install -i still pull dependencies from pypi
|
#### Problem description
`pip install -i ${some_source} gensim` still installs dependencies such as numpy from PyPI instead of from `${some_source}`.
|
closed
|
2021-02-25T00:46:05Z
|
2021-03-09T00:14:14Z
|
https://github.com/piskvorky/gensim/issues/3049
|
[
"need info"
] |
zyp-rgb
| 1
|
sammchardy/python-binance
|
api
| 981
|
get_my_trades
|
**Describe the bug**
Calling get_my_trades with default limits does not always produce the same results.
In my example, it sometimes returns 114 items and sometimes 116.
Additionally, putting get_my_trades in a loop starting with "fromId=0" and a limit of 50 (or any other number that requires at least two iterations), I run into a further issue.
Indeed, sometimes I get 50, 50, 14 and 2 items, other times 50, 50, 14 and 0 items.
This means I can't rely on exiting the loop when the number of items retrieved is less than the limit; moreover, even in this situation I sometimes get 114 items, other times 116.
This is a severe issue for me, as I need to rebuild the orders/trades history for each ticker I am managing.
**To Reproduce**
```
def get_my_trades(self, symbol):
    my_trades = []
    items_to_collect = 500
    try:
        my_trades = self.connect.get_my_trades(symbol=symbol, limit=items_to_collect, fromId=0)
        trades = my_trades
    except (BinanceAPIException, BinanceRequestException) as e:
        trades = ""
        self._logger.warning(f'Warning raised by Binance Exceptions: {e}')
    # Sometime it happens that the items retrieved are less than "items_to_collect" although there are still others to pop
    # Sometime it happens that not all the items are retrieved ... bug?
    while len(my_trades) > 0:
        try:
            my_trades = self.connect.get_my_trades(symbol=symbol, limit=items_to_collect, fromId=my_trades[-1]['id'] + 1)
            trades += my_trades
        except (BinanceAPIException, BinanceRequestException) as e:
            trades = ""
            self._logger.warning(f'Warning raised by Binance Exceptions: {e}')
    return trades
```
**Expected behavior**
In the first scenario I should always expect 116 items
In the second scenario I should always expect 50, 50 and 16 items
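For what it's worth, a pagination loop that treats only an empty page as the end condition can be sketched as follows. `fetch_all_trades` and `FakeClient` are hypothetical names used for illustration; `FakeClient` stands in for the real python-binance client purely to exercise the loop deterministically:

```python
def fetch_all_trades(client, symbol, page_size=500):
    """Collect the full history by id-based pagination; stop only on an
    empty page, since a short page does not reliably mean the end."""
    trades, from_id = [], 0
    while True:
        page = client.get_my_trades(symbol=symbol, limit=page_size, fromId=from_id)
        if not page:
            break
        trades.extend(page)
        from_id = page[-1]['id'] + 1
    return trades


class FakeClient:
    """Stand-in for the real client, used only to demonstrate the loop."""
    def __init__(self, history):
        self.history = history

    def get_my_trades(self, symbol, limit, fromId):
        rows = [t for t in self.history if t['id'] >= fromId]
        return rows[:limit]


history = [{'id': i, 'symbol': 'BTCUSDT'} for i in range(116)]
all_trades = fetch_all_trades(FakeClient(history), 'BTCUSDT', page_size=50)
```

This does not fix the server-side inconsistency reported above, but it avoids relying on page length as the termination signal.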
**Environment (please complete the following information):**
- Python version: [e.g. 3.5] 3.8
- Virtual Env: [e.g. virtualenv, conda] conda
- OS: [e.g. Mac, Ubuntu] Windows 10 Enterprise
- python-binance version 0.7.9
**Logs or Additional context**
Add any other context about the problem here.
|
open
|
2021-07-31T15:48:45Z
|
2022-02-08T17:32:29Z
|
https://github.com/sammchardy/python-binance/issues/981
|
[] |
immadevel
| 3
|
albumentations-team/albumentations
|
deep-learning
| 2,335
|
[New transform] GridShuffle3D
|
We can do something similar to GridShuffle, but in 3D.
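A minimal numpy sketch of the idea, assuming volume dimensions divisible by the grid; `grid_shuffle_3d` is a hypothetical name, not the albumentations API:

```python
import numpy as np

def grid_shuffle_3d(volume, grid=(2, 2, 2), rng=None):
    """Hypothetical sketch: split a volume into a grid of equal cells along
    (z, y, x) and shuffle the cells. Assumes dimensions divisible by grid."""
    rng = np.random.default_rng(rng)
    gz, gy, gx = grid
    z, y, x = volume.shape[:3]
    assert z % gz == 0 and y % gy == 0 and x % gx == 0
    cz, cy, cx = z // gz, y // gy, x // gx
    # Extract cells in (i, j, k) order
    cells = [
        volume[i*cz:(i+1)*cz, j*cy:(j+1)*cy, k*cx:(k+1)*cx].copy()
        for i in range(gz) for j in range(gy) for k in range(gx)
    ]
    order = rng.permutation(len(cells))
    out = np.empty_like(volume)
    for idx, cell_idx in enumerate(order):
        i, rem = divmod(idx, gy * gx)
        j, k = divmod(rem, gx)
        out[i*cz:(i+1)*cz, j*cy:(j+1)*cy, k*cx:(k+1)*cx] = cells[cell_idx]
    return out
```

A real transform would also need to apply the same permutation to masks and handle non-divisible shapes, as the 2D GridShuffle does.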
|
open
|
2025-02-07T20:44:59Z
|
2025-02-07T20:45:05Z
|
https://github.com/albumentations-team/albumentations/issues/2335
|
[
"enhancement",
"good first issue"
] |
ternaus
| 0
|
Kav-K/GPTDiscord
|
asyncio
| 131
|
Show "... is thinking" during a conversation
|
Make the bot indicate that it is generating a response inside conversations, for example an embed, posted after a user sends a message, that says something like "Thinking...".
That message should be deleted when the bot actually responds to the conversation.
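A rough sketch of the flow, assuming discord.py-style `send`/`delete` coroutines; `respond_with_thinking` and the fake channel are hypothetical stand-ins, not bot code:

```python
import asyncio

async def respond_with_thinking(channel, generate):
    """Post a placeholder, generate the reply, then delete the placeholder
    before sending the real response."""
    thinking = await channel.send("Thinking...")
    try:
        reply = await generate()
    finally:
        await thinking.delete()  # remove the placeholder either way
    return await channel.send(reply)


class FakeMessage:
    def __init__(self, content):
        self.content, self.deleted = content, False

    async def delete(self):
        self.deleted = True


class FakeChannel:
    """Minimal stand-in recording what was sent, for demonstration only."""
    def __init__(self):
        self.sent = []

    async def send(self, content):
        msg = FakeMessage(content)
        self.sent.append(msg)
        return msg


async def demo():
    channel = FakeChannel()

    async def generate():
        return "Here is the answer."

    await respond_with_thinking(channel, generate)
    return channel

channel = asyncio.run(demo())
```

The `try/finally` ensures the placeholder is cleaned up even if generation fails.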
|
closed
|
2023-02-02T17:50:40Z
|
2023-02-03T05:11:32Z
|
https://github.com/Kav-K/GPTDiscord/issues/131
|
[
"enhancement",
"help wanted",
"good first issue"
] |
Kav-K
| 1
|
nltk/nltk
|
nlp
| 2,931
|
nltk-3.5 build fails in docker and python 2.7.18 due to regex-2022.1.18
|
Sorry, I'm not super familiar with this ecosystem, but I just noticed my docker build on GCP fails with this message:
```
regex_3/_regex.c:755:15: error: 'Py_UCS1' undeclared (first use in this function); did you mean 'Py_UCS4'?
```
Unsure if this is exactly the right source of the error, so the full log is attached:
[scratch_9.txt](https://github.com/nltk/nltk/files/7900938/scratch_9.txt)
Looking at regex releases there was new one yesterday - https://pypi.org/project/regex/#history
My previous build just yesterday was using regex-2021.11.10
Perhaps there was a DOS fix released as part of https://github.com/nltk/nltk/issues/2929 ?
Dockerfile:
```
FROM --platform=linux/amd64 python:2.7.18 as deps
# Fix build errors on MacOS Apple M1 chip
RUN export CFLAGS="-I /opt/homebrew/opt/openssl/include"
RUN export LDFLAGS="-L /opt/homebrew/opt/openssl/lib"
RUN export GRPC_PYTHON_BUILD_SYSTEM_ZLIB=1
RUN export GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1
RUN pip install --upgrade pip
RUN pip install grpcio
RUN pip install nltk
```
|
closed
|
2022-01-19T22:31:06Z
|
2022-01-20T10:09:28Z
|
https://github.com/nltk/nltk/issues/2931
|
[] |
sjurgis
| 4
|
thtrieu/darkflow
|
tensorflow
| 627
|
How to return the contour of an object instead of bounding box?
|
Hi everyone. I want to get the contours of the detected objects. How can I get the coordinates of the contours instead of bounding boxes?
|
closed
|
2018-03-12T03:38:21Z
|
2018-03-21T00:07:05Z
|
https://github.com/thtrieu/darkflow/issues/627
|
[] |
Baozhang-Ren
| 2
|
mlflow/mlflow
|
machine-learning
| 14,401
|
[FR] enable markdown image embedding from mlflow server artifact store for model registry description field
|
### Willingness to contribute
No. I cannot contribute this feature at this time.
### Proposal Summary
MLflow Model Registry descriptions currently support Markdown but do not allow direct embedding of images stored in the MLflow artifact store. Since MLflow already logs artifacts as part of experiment runs, users should be able to reference these artifacts in model descriptions without relying on external hosting or manual navigation.
### Motivation
> #### What is the use case for this feature?
A typical use case for this feature includes:
- Embedding model evaluation plots (e.g., confusion matrices, ROC curves)
- Including feature importance visualizations
- Documenting data preprocessing steps with visual references
> #### Why is this use case valuable to support for MLflow users in general?
1. Improved model documentation - MLflow Model Registry is often used for governance and auditability. Allowing users to embed images directly from experiment artifacts ensures that key evaluation metrics are visually available without requiring navigation away from the page.
2. Better interpretability - Many ML models rely on visual explanations (e.g., feature importance, evaluation metrics). Inline image embedding improves transparency for both data scientists and business stakeholders.
3. Seamless integration - MLflow already logs images as artifacts in experiment runs. Allowing users to reference them in Markdown eliminates the need for external hosting or manual workarounds.
4. Consistency with MLflow artifacts - Since images are already stored within MLflow, users should be able to use them natively within the registry UI without needing to re-upload or store them elsewhere.
> #### Why is this use case valuable to support for your project(s) or organization?
This feature would save time and reduce errors for ml developers, as they could see evaluation plots inline without needing to open experiment run pages separately.
> #### Why is it currently difficult to achieve this use case?
The current workaround I use is to embed a link to the artifact store image from the model registry description, e.g.
```
<private_mlflow_host>/#/experiments/<experiment_id>/runs/<run_id>/artifacts/evaluation/my_image.png
```
but this could be largely improved if the image itself could be rendered into the model description instead.
- Breaks documentation flow.
- Adds friction to model reviews.
- Requires extra steps, which is inefficient when reviewing multiple models.
- Users expect artifact integration within MLflow, and manually hosting images externally just to embed them in the Model Registry contradicts the purpose of MLflow artifact storage.
### Details
I'm not a js developer so can't give much input here but I believe the updates need to be made in:
mlflow/server/js/src/common/utils/MarkdownUtils.ts
mlflow/server/js/src/model-registry/components
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [x] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [x] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [x] `language/r`: R APIs and clients
- [x] `language/java`: Java APIs and clients
- [x] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [x] `integrations/azure`: Azure and Azure ML integrations
- [x] `integrations/sagemaker`: SageMaker integrations
- [x] `integrations/databricks`: Databricks integrations
|
open
|
2025-01-31T14:44:26Z
|
2025-02-21T10:36:29Z
|
https://github.com/mlflow/mlflow/issues/14401
|
[
"enhancement",
"language/r",
"area/uiux",
"language/java",
"integrations/azure",
"integrations/sagemaker",
"integrations/databricks",
"area/server-infra",
"language/new"
] |
rjames-0
| 2
|
chainer/chainer
|
numpy
| 7,703
|
Flaky test: `TestAveragePooling2D`
|
https://jenkins.preferred.jp/job/chainer/job/chainer_pr/1513/TEST=chainer-py2,label=mn1-p100/console
|
closed
|
2019-07-04T12:14:45Z
|
2019-08-05T08:37:51Z
|
https://github.com/chainer/chainer/issues/7703
|
[
"cat:test",
"prio:high",
"pr-ongoing"
] |
niboshi
| 0
|