| repo_name (string, len 9–75) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, len 1–976) | body (string, len 0–254k) | state (string, 2 classes) | created_at (string, len 20) | updated_at (string, len 20) | url (string, len 38–105) | labels (list, len 0–9) | user_login (string, len 1–39) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
ipython/ipython
|
jupyter
| 14,184
|
Tab complete for class __call__ not working
|
If I make the following two files:
```python
# t.py
class MyClass:
    def my_method(self) -> None:
        print('this is my method')

def foo() -> MyClass: return MyClass()
```
and
```python
# f.py
class MyClass:
    def my_method(self) -> None:
        print('this is my method')

class Foo:
    def __call__(self) -> MyClass:
        return MyClass()

foo = Foo()
```
then `foo().my_method()` works for both.
But I only get the auto-complete suggestion for the first one, `ipython -i t.py` (screenshot omitted); with `ipython -i f.py`, no suggestion appears (screenshot omitted).
Shouldn't both autocomplete to the method available in `MyClass`?
For reference, this came up here https://github.com/pola-rs/polars/issues/11433
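As a side note, the information the completer would need is recoverable at runtime. A minimal sketch (illustrative only, not IPython/Jedi internals, and assuming the `f.py` above is importable):
```python
# Hypothetical illustration: the return annotation on Foo.__call__ is
# available via the typing module, so foo().my_<TAB> is resolvable in principle.
import typing
from f import Foo  # the f.py defined above

hints = typing.get_type_hints(Foo.__call__)
print(hints["return"])  # <class 'f.MyClass'>
```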
|
closed
|
2023-09-30T09:31:52Z
|
2023-10-01T17:20:31Z
|
https://github.com/ipython/ipython/issues/14184
|
[
"tab-completion"
] |
MarcoGorelli
| 3
|
graphistry/pygraphistry
|
jupyter
| 549
|
[BUG] categorical color encoding does not allow two values to have the same color
|
**Describe the bug**
If I use a categorical color encoding and try to set two values to have the same color, only one of them uses the color; the other gets set to the default color.
**To Reproduce**
```
import datetime
import pandas as pd
import numpy as np
import graphistry
import math
graphistry.register(api=3, personal_key_id='', personal_key_secret='', server='...')
num_recs=353
num_edges=787
ndf = pd.DataFrame({'ID' : range(num_recs),
'tier' : np.random.randint(1,9,num_recs),
'FL_risk_exposure' : np.random.randint(0,100,num_recs),
'risk_level' : [np.random.choice(['no_risk', 'high_risk', 'med_risk']) for i in range(num_recs)],
'flagged_company' : [np.random.choice(['Yes', 'No']) for i in range(num_recs)],
'rating' : np.random.randint(0,10,num_recs)})
edf = pd.DataFrame({'source' : np.random.choice(10,num_edges),
'target' : np.random.choice(10,num_edges)})
tier2len = ndf.tier.value_counts().to_dict()
ndf['x'] = ndf.apply(lambda row: (row['tier']) * math.cos(2*math.pi * row['ID']/tier2len[row['tier']] ), axis=1)
ndf['y'] = ndf.apply(lambda row: (row['tier']) * math.sin(2*math.pi * row['ID']/tier2len[row['tier']]), axis=1)
g3 = (graphistry.addStyle(bg={'color': '#F2F7F8'})
.nodes(ndf, 'ID')
.edges(edf, 'source', 'target')
.settings(url_params={'play': 0,'pointSize': .3,'edgeOpacity': .1}, height=800)
.encode_axis([
{'r': 1, 'space': True, "label": "Tier 1"},
{'r': 2, 'space': True, "label": "Tier 2"},
{'r': 3, 'space': True, "label": "Tier 3"},
{'r': 4, 'space': True, "label": "Tier 4"},
{'r': 5, 'space': True, "label": "Tier 5"},
{'r': 6, 'space': True, "label": "Tier 6"},
{'r': 7, 'space': True, "label": "Tier 7"},
{'r': 8, 'space': True, "label": "Tier 8"},
{'r': 9, 'space': True, "label": "Tier 9"}])
.encode_point_color('risk_level', as_categorical=True,
categorical_mapping={
# 'no_risk' : 'black',
'high_risk' : 'red',
'med_risk' : 'red'}, default_mapping='black')
)
g3.plot()
```
**Expected behavior**
I expect `med_risk` to be red, but it shows black.
**Screenshots**
(screenshot omitted)
pygraphistry `v0.33.2`
|
open
|
2024-02-28T17:35:01Z
|
2024-03-01T22:14:16Z
|
https://github.com/graphistry/pygraphistry/issues/549
|
[
"bug",
"customer"
] |
DataBoyTX
| 2
|
dmlc/gluon-nlp
|
numpy
| 690
|
[MXNet] - [BERT]
|
There is a problem with custom BERT model training on later builds of MXNet 1.5.0 (observed with cu90).
mlm_loss stalls around 7.2X and nsp_acc stalls around 54.
The last mxnet-cu90 version that is still viable is 1.5.0b20190425.
1.5.0b20190426 onward has this issue, so you cannot currently train a custom BERT model with the latest version of MXNet.
I assume there was a change in optimization between April 25th and 26th.
I used the latest version of gluonnlp for the following test, so I do not think the problem is in gluonnlp (0.6.0).
(i.e. pip install https://github.com/dmlc/gluon-nlp/tarball/master )
With mxnet-cu90==1.5.0b20190425 (This is working)
```bash
(mxnet_p36_updated_4) sh-4.2$ python run_pretraining.py --gpus 0,1,2,3,4,5,6,7 --batch_size 8 --lr 1e-3 --data "out_mcg_test-big/part-001.npz" @--warmup_ratio 0.01 --num_steps 1000000 --log_interval=250 --data_eval "out_mcg_test-big/part-001.npz" --batch_size_eval 8 --ckpt_dir ckpt --ckpt_interval 25000 --accumulate 4 --num_buckets 10 --dtype float16
INFO:root:Namespace(accumulate=4, batch_size=8, batch_size_eval=8, by_token=False, ckpt_dir='ckpt', ckpt_interval=25000, data='out_mcg_test-big/part-001.npz', data_eval='out_mcg_test-big/part-001.npz', dataset_name='book_corpus_wiki_en_uncased',dtype='float16', eval_only=False, gpus='0,1,2,3,4,5,6,7', kvstore='device', log_interval=250, lr=0.001, model='bert_12_768_12', num_buckets=10, num_steps=1000000, pretrained=False, profile=False, seed=0, start_step=0, verbose=False, warmup_ratio=0.01)
INFO:root:Using training data at out_mcg_test-big/part-001.npz
[20:42:24] src/kvstore/././comm_tree.h:356: only 32 out of 56 GPU pairs are enabled direct access. It may affect the performance. You can set MXNET_ENABLE_GPU_P2P=0 to turn it off
[20:42:24] src/kvstore/././comm_tree.h:365: .vvvv...
[20:42:24] src/kvstore/././comm_tree.h:365: v.vv.v..
[20:42:24] src/kvstore/././comm_tree.h:365: vv.v..v.
[20:42:24] src/kvstore/././comm_tree.h:365: vvv....v
...
[20:42:24] src/kvstore/./././gpu_topology.h:216: cudaDeviceGetP2PAttribute incorrect. Falling back to cudaDeviceEnablePeerAccess for topology detection
[20:42:24] src/kvstore/././comm_tree.h:380: Using Kernighan-Lin to generate trees
[20:42:24] src/kvstore/././comm_tree.h:391: Using Tree
[20:42:25] src/kvstore/././comm_tree.h:488: Size 2 occurs 1 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 768 occurs 114 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 1536 occurs 2 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 3072 occurs 12 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 30522 occurs 1 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 393216 occurs 1 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 589824 occurs 50 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 2359296 occurs 24 times
[20:42:25] src/kvstore/././comm_tree.h:488: Size 23440896 occurs 1 times
INFO:root:[step 249] mlm_loss=8.02087 mlm_acc=6.88167 nsp_loss=0.69021 nsp_acc=53.343 throughput=24.2K tks/s lr=0.0000249 time=315.28
INFO:root:[step 499] mlm_loss=6.85134 mlm_acc=11.51758 nsp_loss=0.65648 nsp_acc=60.298 throughput=57.7K tks/s lr=0.0000499 time=133.49
INFO:root:[step 749] mlm_loss=6.60548 mlm_acc=13.98383 nsp_loss=0.58539 nsp_acc=67.169 throughput=57.7K tks/s lr=0.0000749 time=133.54
```
With mxnet-cu90==1.5.0b20190426 (This is not working)
```bash
#(Same)#
INFO:root:[step 249] mlm_loss=nan mlm_acc=4.56305 nsp_loss=nan nsp_acc=54.454 throughput=23.7K tks/s lr=0.0000249 time=321.78
INFO:root:[step 499] mlm_loss=7.27492 mlm_acc=5.76089 nsp_loss=0.68847 nsp_acc=54.719 throughput=57.4K tks/s lr=0.0000499 time=134.22
INFO:root:[step 749] mlm_loss=7.26470 mlm_acc=5.82224 nsp_loss=0.68894 nsp_acc=54.428 throughput=57.3K tks/s lr=0.0000749 time=134.40
```
|
closed
|
2019-05-02T21:48:45Z
|
2019-05-11T07:36:14Z
|
https://github.com/dmlc/gluon-nlp/issues/690
|
[
"bug"
] |
araitats
| 9
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 736
|
Issue running: python demo_toolbox.py
|
Hello,
I have satisfied all of my requirements and when I try to run the command
`python demo_toolbox.py`
I get an output that looks something like this:
`ModuleNotFoundError: No module named 'torch'`
I believe this is saying I don't have PyTorch. However, I installed PyTorch a few months ago and have been using programs requiring it since.
When I first installed PyTorch I used this command:
`conda install --yes -c PyTorch pytorch=1.7.1 torchvision cudatoolkit=11.0`
I am doing this all in PowerShell on Windows 10. Additionally, I am running Python 3.6.8.
Any thoughts would be great.
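A minimal sanity check, under the assumption (not confirmed by the report) that PowerShell is resolving a different interpreter than the conda environment where PyTorch was installed:
```python
# Run this with the same `python` command that fails; hypothetical diagnostic.
import sys
print(sys.executable)  # is this the conda environment's interpreter?

try:
    import torch
    print(torch.__version__)
except ModuleNotFoundError:
    print("torch is not visible from this interpreter")
```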
|
closed
|
2021-04-14T06:41:44Z
|
2021-04-20T02:56:32Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/736
|
[] |
alexp-12
| 2
|
proplot-dev/proplot
|
data-visualization
| 422
|
Hide datetime minorticks?
|
This is a minor issue, but I don't think the default datetime minor ticks are pretty and publication-quality, as shown in the example from the proplot homepage.
Since we always need to modify or hide these minor ticks, can we hide them by default or make them prettier?

|
open
|
2023-05-01T16:11:07Z
|
2023-05-01T16:47:30Z
|
https://github.com/proplot-dev/proplot/issues/422
|
[
"enhancement"
] |
kinyatoride
| 0
|
NullArray/AutoSploit
|
automation
| 1,161
|
Unhandled Exception (f938320af)
|
Autosploit version: `2.2.3`
OS information: `Linux-4.4.189~EviraPure-Miui-V1.7-armv8l-with-libc`
Running context: `/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit.py`
Error message: `[Errno 2] No such file or directory: '?'`
Error traceback:
```
Traceback (most recent call last):
  File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit/main.py", line 123, in main
    terminal.terminal_main_display(loaded_exploits)
  File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 331, in terminal_main_display
    self.custom_host_list(loaded_mods)
  File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 277, in custom_host_list
    self.exploit_gathered_hosts(mods, hosts=provided_host_file)
  File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 217, in exploit_gathered_hosts
    host_file = open(hosts).readlines()
IOError: [Errno 2] No such file or directory: '?'
```
Metasploit launched: `True`
|
closed
|
2019-09-03T19:46:32Z
|
2019-09-03T21:40:15Z
|
https://github.com/NullArray/AutoSploit/issues/1161
|
[] |
AutosploitReporter
| 0
|
anselal/antminer-monitor
|
dash
| 152
|
Get Hardware Error Rate from summary instead of stats
|
Some Antminer models like the E3 and S17 do not return the `Hardware Error Rate` in the stats but only in the summary. To fix this, swap lines
https://github.com/anselal/antminer-monitor/blob/dee617db435d30c8cf75d2331baaee6873a8609c/antminermonitor/blueprints/asicminer/asic_antminer.py#L124-L125
with
https://github.com/anselal/antminer-monitor/blob/dee617db435d30c8cf75d2331baaee6873a8609c/antminermonitor/blueprints/asicminer/asic_antminer.py#L127-L130
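A hedged sketch of the fallback this swap implies; the function and field names below are assumptions for illustration, not the actual `asic_antminer.py` code:
```python
def get_hw_error_rate(summary, stats):
    # Models like the E3 and S17 report the rate only in the summary payload,
    # so read it from there first and fall back to stats (assumed field names).
    rate = summary.get("SUMMARY", [{}])[0].get("Device Hardware%")
    if rate is None:
        rate = stats.get("STATS", [{}])[0].get("Device Hardware%")
    return rate
```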
|
closed
|
2019-12-13T13:35:25Z
|
2019-12-18T21:28:15Z
|
https://github.com/anselal/antminer-monitor/issues/152
|
[
":bug: bug"
] |
anselal
| 0
|
microsoft/unilm
|
nlp
| 1,256
|
[VLMo] ERROR - IndexError: list index out of range
|
➜ TRANSFORMERS_OFFLINE=1 python run.py with data_root=/home/ubuntu/Zuolab/unilm/vlmo/data/data_arrows_root/ num_gpus=1 num_nodes=1 task_mlm_itm_itc_base whole_word_masking=True step200k per_gpu_batchsize=8 load_path=$INIT_CKPT log_dir=./out/
WARNING - VLMo - No observers have been added to this run
INFO - VLMo - Running command 'main'
INFO - VLMo - Started
Global seed set to 1
drop path rate: 0.1
window_size: (14, 14)
Load ckpt from: /home/ubuntu/Zuolab/unilm/vlmo/1_My_Vlmo/pth/vlmo_base_patch16_224_stage2.pt
Read state dict from ckpt.
relative_position_bias_table = torch.Size([1126, 144])
relative_position_index = torch.Size([197, 197])
text_relative_position_index = torch.Size([196, 196])
text_imag_relative_position_index = torch.Size([393, 393])
text_embeddings.position_ids = torch.Size([1, 196])
text_embeddings.word_embeddings.weight = torch.Size([30522, 768])
text_embeddings.position_embeddings.weight = torch.Size([196, 768])
text_embeddings.token_type_embeddings.weight = torch.Size([2, 768])
text_embeddings.LayerNorm.weight = torch.Size([768])
text_embeddings.LayerNorm.bias = torch.Size([768])
token_type_embeddings.weight = torch.Size([2, 768])
transformer.cls_token = torch.Size([1, 1, 768])
transformer.patch_embed.proj.weight = torch.Size([768, 3, 16, 16])
transformer.patch_embed.proj.bias = torch.Size([768])
transformer.blocks.0.gamma_1 = torch.Size([768])
transformer.blocks.0.gamma_2 = torch.Size([768])
transformer.blocks.0.norm1.weight = torch.Size([768])
transformer.blocks.0.norm1.bias = torch.Size([768])
transformer.blocks.0.attn.q_bias = torch.Size([768])
transformer.blocks.0.attn.v_bias = torch.Size([768])
transformer.blocks.0.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.0.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.0.attn.proj.bias = torch.Size([768])
transformer.blocks.0.norm2_text.weight = torch.Size([768])
transformer.blocks.0.norm2_text.bias = torch.Size([768])
transformer.blocks.0.norm2_imag.weight = torch.Size([768])
transformer.blocks.0.norm2_imag.bias = torch.Size([768])
transformer.blocks.0.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.0.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.0.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.0.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.0.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.0.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.0.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.0.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.1.gamma_1 = torch.Size([768])
transformer.blocks.1.gamma_2 = torch.Size([768])
transformer.blocks.1.norm1.weight = torch.Size([768])
transformer.blocks.1.norm1.bias = torch.Size([768])
transformer.blocks.1.attn.q_bias = torch.Size([768])
transformer.blocks.1.attn.v_bias = torch.Size([768])
transformer.blocks.1.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.1.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.1.attn.proj.bias = torch.Size([768])
transformer.blocks.1.norm2_text.weight = torch.Size([768])
transformer.blocks.1.norm2_text.bias = torch.Size([768])
transformer.blocks.1.norm2_imag.weight = torch.Size([768])
transformer.blocks.1.norm2_imag.bias = torch.Size([768])
transformer.blocks.1.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.1.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.1.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.1.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.1.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.1.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.1.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.1.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.2.gamma_1 = torch.Size([768])
transformer.blocks.2.gamma_2 = torch.Size([768])
transformer.blocks.2.norm1.weight = torch.Size([768])
transformer.blocks.2.norm1.bias = torch.Size([768])
transformer.blocks.2.attn.q_bias = torch.Size([768])
transformer.blocks.2.attn.v_bias = torch.Size([768])
transformer.blocks.2.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.2.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.2.attn.proj.bias = torch.Size([768])
transformer.blocks.2.norm2_text.weight = torch.Size([768])
transformer.blocks.2.norm2_text.bias = torch.Size([768])
transformer.blocks.2.norm2_imag.weight = torch.Size([768])
transformer.blocks.2.norm2_imag.bias = torch.Size([768])
transformer.blocks.2.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.2.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.2.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.2.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.2.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.2.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.2.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.2.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.3.gamma_1 = torch.Size([768])
transformer.blocks.3.gamma_2 = torch.Size([768])
transformer.blocks.3.norm1.weight = torch.Size([768])
transformer.blocks.3.norm1.bias = torch.Size([768])
transformer.blocks.3.attn.q_bias = torch.Size([768])
transformer.blocks.3.attn.v_bias = torch.Size([768])
transformer.blocks.3.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.3.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.3.attn.proj.bias = torch.Size([768])
transformer.blocks.3.norm2_text.weight = torch.Size([768])
transformer.blocks.3.norm2_text.bias = torch.Size([768])
transformer.blocks.3.norm2_imag.weight = torch.Size([768])
transformer.blocks.3.norm2_imag.bias = torch.Size([768])
transformer.blocks.3.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.3.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.3.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.3.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.3.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.3.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.3.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.3.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.4.gamma_1 = torch.Size([768])
transformer.blocks.4.gamma_2 = torch.Size([768])
transformer.blocks.4.norm1.weight = torch.Size([768])
transformer.blocks.4.norm1.bias = torch.Size([768])
transformer.blocks.4.attn.q_bias = torch.Size([768])
transformer.blocks.4.attn.v_bias = torch.Size([768])
transformer.blocks.4.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.4.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.4.attn.proj.bias = torch.Size([768])
transformer.blocks.4.norm2_text.weight = torch.Size([768])
transformer.blocks.4.norm2_text.bias = torch.Size([768])
transformer.blocks.4.norm2_imag.weight = torch.Size([768])
transformer.blocks.4.norm2_imag.bias = torch.Size([768])
transformer.blocks.4.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.4.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.4.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.4.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.4.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.4.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.4.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.4.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.5.gamma_1 = torch.Size([768])
transformer.blocks.5.gamma_2 = torch.Size([768])
transformer.blocks.5.norm1.weight = torch.Size([768])
transformer.blocks.5.norm1.bias = torch.Size([768])
transformer.blocks.5.attn.q_bias = torch.Size([768])
transformer.blocks.5.attn.v_bias = torch.Size([768])
transformer.blocks.5.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.5.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.5.attn.proj.bias = torch.Size([768])
transformer.blocks.5.norm2_text.weight = torch.Size([768])
transformer.blocks.5.norm2_text.bias = torch.Size([768])
transformer.blocks.5.norm2_imag.weight = torch.Size([768])
transformer.blocks.5.norm2_imag.bias = torch.Size([768])
transformer.blocks.5.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.5.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.5.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.5.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.5.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.5.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.5.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.5.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.6.gamma_1 = torch.Size([768])
transformer.blocks.6.gamma_2 = torch.Size([768])
transformer.blocks.6.norm1.weight = torch.Size([768])
transformer.blocks.6.norm1.bias = torch.Size([768])
transformer.blocks.6.attn.q_bias = torch.Size([768])
transformer.blocks.6.attn.v_bias = torch.Size([768])
transformer.blocks.6.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.6.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.6.attn.proj.bias = torch.Size([768])
transformer.blocks.6.norm2_text.weight = torch.Size([768])
transformer.blocks.6.norm2_text.bias = torch.Size([768])
transformer.blocks.6.norm2_imag.weight = torch.Size([768])
transformer.blocks.6.norm2_imag.bias = torch.Size([768])
transformer.blocks.6.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.6.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.6.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.6.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.6.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.6.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.6.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.6.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.7.gamma_1 = torch.Size([768])
transformer.blocks.7.gamma_2 = torch.Size([768])
transformer.blocks.7.norm1.weight = torch.Size([768])
transformer.blocks.7.norm1.bias = torch.Size([768])
transformer.blocks.7.attn.q_bias = torch.Size([768])
transformer.blocks.7.attn.v_bias = torch.Size([768])
transformer.blocks.7.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.7.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.7.attn.proj.bias = torch.Size([768])
transformer.blocks.7.norm2_text.weight = torch.Size([768])
transformer.blocks.7.norm2_text.bias = torch.Size([768])
transformer.blocks.7.norm2_imag.weight = torch.Size([768])
transformer.blocks.7.norm2_imag.bias = torch.Size([768])
transformer.blocks.7.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.7.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.7.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.7.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.7.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.7.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.7.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.7.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.8.gamma_1 = torch.Size([768])
transformer.blocks.8.gamma_2 = torch.Size([768])
transformer.blocks.8.norm1.weight = torch.Size([768])
transformer.blocks.8.norm1.bias = torch.Size([768])
transformer.blocks.8.attn.q_bias = torch.Size([768])
transformer.blocks.8.attn.v_bias = torch.Size([768])
transformer.blocks.8.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.8.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.8.attn.proj.bias = torch.Size([768])
transformer.blocks.8.norm2_text.weight = torch.Size([768])
transformer.blocks.8.norm2_text.bias = torch.Size([768])
transformer.blocks.8.norm2_imag.weight = torch.Size([768])
transformer.blocks.8.norm2_imag.bias = torch.Size([768])
transformer.blocks.8.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.8.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.8.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.8.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.8.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.8.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.8.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.8.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.9.gamma_1 = torch.Size([768])
transformer.blocks.9.gamma_2 = torch.Size([768])
transformer.blocks.9.norm1.weight = torch.Size([768])
transformer.blocks.9.norm1.bias = torch.Size([768])
transformer.blocks.9.attn.q_bias = torch.Size([768])
transformer.blocks.9.attn.v_bias = torch.Size([768])
transformer.blocks.9.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.9.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.9.attn.proj.bias = torch.Size([768])
transformer.blocks.9.norm2_text.weight = torch.Size([768])
transformer.blocks.9.norm2_text.bias = torch.Size([768])
transformer.blocks.9.norm2_imag.weight = torch.Size([768])
transformer.blocks.9.norm2_imag.bias = torch.Size([768])
transformer.blocks.9.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.9.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.9.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.9.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.9.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.9.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.9.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.9.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.10.gamma_1 = torch.Size([768])
transformer.blocks.10.gamma_2 = torch.Size([768])
transformer.blocks.10.norm1.weight = torch.Size([768])
transformer.blocks.10.norm1.bias = torch.Size([768])
transformer.blocks.10.attn.q_bias = torch.Size([768])
transformer.blocks.10.attn.v_bias = torch.Size([768])
transformer.blocks.10.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.10.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.10.attn.proj.bias = torch.Size([768])
transformer.blocks.10.norm2_text.weight = torch.Size([768])
transformer.blocks.10.norm2_text.bias = torch.Size([768])
transformer.blocks.10.norm2_imag.weight = torch.Size([768])
transformer.blocks.10.norm2_imag.bias = torch.Size([768])
transformer.blocks.10.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.10.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.10.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.10.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.10.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.10.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.10.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.10.mlp_imag.fc2.bias = torch.Size([768])
transformer.blocks.11.gamma_1 = torch.Size([768])
transformer.blocks.11.gamma_2 = torch.Size([768])
transformer.blocks.11.norm1.weight = torch.Size([768])
transformer.blocks.11.norm1.bias = torch.Size([768])
transformer.blocks.11.attn.q_bias = torch.Size([768])
transformer.blocks.11.attn.v_bias = torch.Size([768])
transformer.blocks.11.attn.qkv.weight = torch.Size([2304, 768])
transformer.blocks.11.attn.proj.weight = torch.Size([768, 768])
transformer.blocks.11.attn.proj.bias = torch.Size([768])
transformer.blocks.11.norm2_text.weight = torch.Size([768])
transformer.blocks.11.norm2_text.bias = torch.Size([768])
transformer.blocks.11.norm2_imag.weight = torch.Size([768])
transformer.blocks.11.norm2_imag.bias = torch.Size([768])
transformer.blocks.11.mlp_text.fc1.weight = torch.Size([3072, 768])
transformer.blocks.11.mlp_text.fc1.bias = torch.Size([3072])
transformer.blocks.11.mlp_text.fc2.weight = torch.Size([768, 3072])
transformer.blocks.11.mlp_text.fc2.bias = torch.Size([768])
transformer.blocks.11.mlp_imag.fc1.weight = torch.Size([3072, 768])
transformer.blocks.11.mlp_imag.fc1.bias = torch.Size([3072])
transformer.blocks.11.mlp_imag.fc2.weight = torch.Size([768, 3072])
transformer.blocks.11.mlp_imag.fc2.bias = torch.Size([768])
transformer.norm.weight = torch.Size([768])
transformer.norm.bias = torch.Size([768])
pooler.dense.weight = torch.Size([768, 768])
pooler.dense.bias = torch.Size([768])
mlm_score.bias = torch.Size([30522])
mlm_score.transform.dense.weight = torch.Size([768, 768])
mlm_score.transform.dense.bias = torch.Size([768])
mlm_score.transform.LayerNorm.weight = torch.Size([768])
mlm_score.transform.LayerNorm.bias = torch.Size([768])
mlm_score.decoder.weight = torch.Size([30522, 768])
{'itm': 1, 'itc': 1, 'mlm': 1, 'textmlm': 0, 'vqa': 0, 'nlvr2': 0, 'irtr': 0}
text position_embeddings size: torch.Size([40, 768])
missing_keys: ['logit_scale', 'logit_vl_scale', 'transformer.blocks.10.mlp_vl.fc1.weight', 'transformer.blocks.10.mlp_vl.fc1.bias', 'transformer.blocks.10.mlp_vl.fc2.weight', 'transformer.blocks.10.mlp_vl.fc2.bias', 'transformer.blocks.10.norm2_vl.weight', 'transformer.blocks.10.norm2_vl.bias', 'transformer.blocks.11.mlp_vl.fc1.weight', 'transformer.blocks.11.mlp_vl.fc1.bias', 'transformer.blocks.11.mlp_vl.fc2.weight', 'transformer.blocks.11.mlp_vl.fc2.bias', 'transformer.blocks.11.norm2_vl.weight', 'transformer.blocks.11.norm2_vl.bias', 'itm_score.fc.weight', 'itm_score.fc.bias', 'itc_text_proj.fc.weight', 'itc_image_proj.fc.weight', 'itc_vl_text_proj.fc.weight', 'itc_vl_image_proj.fc.weight']
unexpected_keys: []
grad_steps: 128
resume_ckpt: None
ClusterPlugin: using Lightning Cluster Environment
plugin_list: [<pytorch_lightning.plugins.environments.lightning_environment.LightningEnvironment object at 0x7ff4a2652cd0>]
[W Context.cpp:70] Warning: torch.use_deterministic_algorithms is in beta, and its design and functionality may change in the future. (function operator())
Using 16bit native Automatic Mixed Precision (AMP)
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/logger_connector/logger_connector.py:59: LightningDeprecationWarning: Setting `Trainer(flush_logs_every_n_steps=10)` is deprecated in v1.5 and will be removed in v1.7. Please configure flushing in the logger instead.
rank_zero_deprecation(
Global seed set to 1
initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
INFO - root - Added key: store_based_barrier_key:1 to store for rank: 0
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/torchvision/transforms/transforms.py:803: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
warnings.warn(
/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/torchvision/transforms/transforms.py:257: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
warnings.warn(
ERROR - VLMo - Failed after 0:00:05!
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/experiment.py", line 312, in run_commandline
return self.run(
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/experiment.py", line 276, in run
run()
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/run.py", line 238, in __call__
self.result = self.main_function(*args)
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/config/captured_function.py", line 42, in captured_function
result = wrapped(*args, **kwargs)
File "run.py", line 166, in main
trainer.fit(model, datamodule=dm)
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 737, in fit
self._call_and_handle_interrupt(
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 772, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1134, in _run
self._call_setup_hook() # allow user to setup lightning_module in accelerator environment
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1434, in _call_setup_hook
self.datamodule.setup(stage=fn)
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn
fn(*args, **kwargs)
File "/home/ubuntu/Zuolab/unilm/vlmo/vlmo/datamodules/multitask_datamodule.py", line 34, in setup
dm.setup(stage)
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn
fn(*args, **kwargs)
File "/home/ubuntu/Zuolab/unilm/vlmo/vlmo/datamodules/datamodule_base.py", line 150, in setup
self.set_train_dataset()
File "/home/ubuntu/Zuolab/unilm/vlmo/vlmo/datamodules/datamodule_base.py", line 77, in set_train_dataset
self.train_dataset = self.dataset_cls(
File "/home/ubuntu/Zuolab/unilm/vlmo/vlmo/datasets/vg_caption_dataset.py", line 15, in __init__
super().__init__(*args, **kwargs, names=names, text_column_name="caption")
File "/home/ubuntu/Zuolab/unilm/vlmo/vlmo/datasets/base_dataset.py", line 53, in __init__
self.table_names += [name] * len(tables[i])
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run.py", line 69, in <module>
def main(_config):
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/experiment.py", line 190, in automain
self.run_commandline()
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/experiment.py", line 347, in run_commandline
print_filtered_stacktrace()
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/utils.py", line 493, in print_filtered_stacktrace
print(format_filtered_stacktrace(filter_traceback), file=sys.stderr)
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/utils.py", line 528, in format_filtered_stacktrace
return "".join(filtered_traceback_format(tb_exception))
File "/home/ubuntu/miniconda3/envs/vlmo/lib/python3.8/site-packages/sacred/utils.py", line 568, in filtered_traceback_format
current_tb = tb_exception.exc_traceback
AttributeError: 'TracebackException' object has no attribute 'exc_traceback'
my data:
data_arrows_root
├── coco_caption_karpathy_restval.arrow
├── coco_caption_karpathy_test.arrow
├── coco_caption_karpathy_train.arrow
└── coco_caption_karpathy_val.arrow
checkpoint:
pth
└── vlmo_base_patch16_224_stage2.pt
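For what it's worth, the traceback is consistent with the task config expecting more dataset names than there are arrow tables on disk. A hedged sketch of the failing pattern in `base_dataset.py` (simplified; the names below are assumptions):
```python
# If the config asks for a dataset (e.g. VG) whose arrows are missing,
# `tables` ends up shorter than `names` and tables[i] raises.
names = ["vg", "coco_caption_karpathy_train"]  # hypothetical config
tables = ["<coco arrow table>"]                # only the COCO arrows exist on disk
table_names = []
for i, name in enumerate(names):
    table_names += [name] * len(tables[i])     # IndexError: list index out of range
```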
|
closed
|
2023-08-17T08:37:38Z
|
2023-08-30T10:11:07Z
|
https://github.com/microsoft/unilm/issues/1256
|
[] |
CHB-learner
| 1
|
QingdaoU/OnlineJudge
|
django
| 237
|
Chinese translation error && admin users cannot log in with TFA
|
1. In the admin user management page, when clicking on a user, the label "visible or not" should be changed to "blacklisted or not".
2. Users with TFA enabled cannot log in on the admin login page, because the page has no TFA input field.
|
open
|
2019-04-03T10:50:19Z
|
2019-04-03T11:05:20Z
|
https://github.com/QingdaoU/OnlineJudge/issues/237
|
[] |
AndyShaw2048
| 1
|
deedy5/primp
|
web-scraping
| 20
|
cookies argument for requests
|
For example, in the requests library:
```py
import requests
cookies = {
    "key": "value"
}
response = requests.get("https://example.com", cookies=cookies)
```
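A sketch of how the same call might look in primp; `Client` is primp's request interface, while the `cookies` keyword is the feature being requested here, not a confirmed API:
```python
import primp

client = primp.Client()
# Hypothetical: pass cookies per request, mirroring the requests API
response = client.get("https://example.com", cookies={"key": "value"})
```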
|
closed
|
2024-07-12T19:38:32Z
|
2024-07-15T19:47:20Z
|
https://github.com/deedy5/primp/issues/20
|
[] |
Mouad-scriptz
| 1
|
nschloe/tikzplotlib
|
matplotlib
| 208
|
No support for the contourf plots?
|
I tried very hard to make contourf plots work with this package, but it seems they are not supported at the moment? It is a pity, because this is such an excellent tool.
Can you please give me a hint on how I can implement it? What documentation should I refer to? I have found the pgfplots syntax for contourf plots, but I don't know where or how I should add it.
I have already made tons of (line & image) plots. If I can't make contourf plots with this tool, I would need to re-do all my plots to keep the work consistent.
Thanks!
|
closed
|
2017-10-24T19:50:47Z
|
2019-03-17T14:34:36Z
|
https://github.com/nschloe/tikzplotlib/issues/208
|
[] |
vineetsoni
| 2
|
autokey/autokey
|
automation
| 294
|
daemon is killed when GUI closed
|
## Classification:
Usability
## Reproducibility:
Always
## Version
AutoKey version: autokey-gtk 0.95.6
Used GUI (Gtk, Qt, or both): Gtk
Installed via: sporkwitch PPA
Linux Distribution: Ubuntu 18.04
## Summary
AK is launched at system boot. That starts the daemon but leaves the GUI closed. When I manually open the GUI, do some editing, and close the GUI, the daemon is killed - no more expanding happens.
This is unexpected - I'd expect the daemon to continue working. If it needs a relaunch, I'd like AK to do that automatically.
Not a problem - just a bit of convenience.
## Steps to Reproduce (if applicable)
see above.
|
closed
|
2019-07-05T06:41:46Z
|
2019-07-10T13:07:02Z
|
https://github.com/autokey/autokey/issues/294
|
[] |
herrdeh
| 7
|
robotframework/robotframework
|
automation
| 5,322
|
Expand Button Displays + Sign for Failed or Skipped Tests in Report - log.html
|
Description:
When a test is marked as FAILED or SKIPPED, the expand button in the generated log.html report always displays the + sign, regardless of whether the user has clicked it to expand or not.
This behavior is inconsistent and can lead to confusion, as the button should toggle correctly between + (collapsed) and - (expanded).
Steps to Reproduce:
1. Run a test suite with tests that result in FAILED or SKIPPED status.
2. Open the generated log.html report.
3. Expand a failed or skipped test.
4. Notice that the expand button remains + even when expanded.
Expected Behavior:
The expand button should display - when the test is expanded and toggle back to + when collapsed.
|
open
|
2025-01-22T17:11:01Z
|
2025-02-20T05:27:10Z
|
https://github.com/robotframework/robotframework/issues/5322
|
[] |
jyoti-arora1991
| 2
|
apache/airflow
|
data-science
| 47,450
|
All dagruns are listed to be cleared while clearing a specific dagrun (Intermittent issue)
|
### Apache Airflow version
3.0.0b1
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Let's say many dagruns are available for a DAG and the user wants to clear a specific one. When the user selects a dagrun and clicks the 'Clear Run' button, the tasks of all dagruns are listed to be cleared, and if the user clicks the confirm button then all the dagruns are cleared.
<img width="1598" alt="Image" src="https://github.com/user-attachments/assets/90ead14a-4cd2-414a-8d86-f817d30f6ea3" />
https://github.com/user-attachments/assets/f46ce29b-c7a0-49a4-ba48-9c490705d146
### What you think should happen instead?
Only tasks specific to a dagrun should be listed to be cleared when clearing a dagrun.
### How to reproduce
On AF3 UI:
1. Create multiple dagruns for a DAG.
2. Select a dagrun and try to clear the dagrun.
3. Notice that the tasks listed in the clear-dagrun modal include all dagruns' tasks.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
closed
|
2025-03-06T13:48:45Z
|
2025-03-18T16:12:26Z
|
https://github.com/apache/airflow/issues/47450
|
[
"kind:bug",
"priority:high",
"area:core",
"area:UI",
"affected_version:3.0.0beta"
] |
atul-astronomer
| 11
|
giotto-ai/giotto-tda
|
scikit-learn
| 100
|
Diffusion module
|
Create a new module implementing diffusion on simplicial complexes via the Hodge Laplacian operator.
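A minimal sketch of the idea, assuming dense boundary matrices `B1` (vertices × edges) and `B2` (edges × triangles) are given as numpy arrays; this is an illustration, not the module's design:
```python
import numpy as np
from scipy.linalg import expm

def hodge_laplacian_1(B1, B2):
    # L1 = B1^T B1 + B2 B2^T: the down- plus up-Laplacian acting on edge signals
    return B1.T @ B1 + B2 @ B2.T

def diffuse(signal, L, t=1.0):
    # Heat diffusion of an edge signal for time t: x(t) = exp(-t L) x(0)
    return expm(-t * L) @ signal
```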
|
closed
|
2019-11-27T11:00:54Z
|
2019-12-19T14:41:49Z
|
https://github.com/giotto-ai/giotto-tda/issues/100
|
[] |
giotto-learn
| 2
|
globaleaks/globaleaks-whistleblowing-software
|
sqlalchemy
| 3,251
|
Can't find views folder
|
Hello, unfortunately I cannot find footer.html and header.html in FileZilla after the installation. How can I access them? I don't have much knowledge about this topic. Thanks in advance.
|
closed
|
2022-07-20T13:35:47Z
|
2022-08-02T11:50:09Z
|
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3251
|
[] |
JimpoTEDY
| 9
|
huggingface/transformers
|
pytorch
| 36,576
|
Some methods in TrainerControl seem not to be utilized.
|
Looking at the callback code has caused me a great deal of confusion. It seems that this function has never been used. I'm not sure if I've missed something.
https://github.com/huggingface/transformers/blob/6966fa190172b48b2fb46fe4552a13b943e692cf/src/transformers/trainer_callback.py#L275
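For context, a sketch of how `TrainerControl` flags are normally driven from a callback (this is standard documented usage; whether the linked method is ever invoked is exactly what the issue questions):
```python
from transformers import TrainerCallback

class StopAfterTenSteps(TrainerCallback):
    def on_step_end(self, args, state, control, **kwargs):
        # The Trainer reads these flags after each callback event
        if state.global_step >= 10:
            control.should_training_stop = True
        return control
```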
|
closed
|
2025-03-06T07:19:53Z
|
2025-03-13T16:21:17Z
|
https://github.com/huggingface/transformers/issues/36576
|
[] |
mst272
| 2
|
scikit-learn/scikit-learn
|
machine-learning
| 30,151
|
Segmentation fault in sklearn.metrics.pairwise_distances with OpenBLAS 0.3.28 (only pthreads variant)
|
```
mamba create -n testenv scikit-learn python=3.12 libopenblas=0.3.28 -y
conda activate testenv
PYTHONFAULTHANDLER=1 python /tmp/test_openblas.py
```
```py
# /tmp/test_openblas.py
import numpy as np
from joblib import Parallel, delayed
from threadpoolctl import threadpool_limits
from sklearn.metrics.pairwise import pairwise_distances
X = np.ones((1000, 10))
def blas_threaded_func(i):
    X.T @ X

# Needs to be there and before Parallel
threadpool_limits(10)
Parallel(n_jobs=2)(delayed(blas_threaded_func)(i) for i in range(10))

for _ in range(10):
    distances = pairwise_distances(X, metric="l2", n_jobs=2)
```
This happens with OpenBLAS 0.3.28 but not 0.3.27. Setting the `OPENBLAS_NUM_THREADS` or `OMP_NUM_THREADS` environment variable also makes the issue disappear.
This is somewhat reminiscent of https://github.com/scipy/scipy/issues/21479, so there may be something in OpenBLAS 0.3.28 [^1] that doesn't like `threadpool_limits` followed by `Parallel`? No idea how to test this hypothesis ... this could well be OS-dependent, since https://github.com/scipy/scipy/issues/21479 only happens on Linux.
[^1]: OpenBLAS 0.3.28 is used in numpy development wheel and OpenBLAS 0.3.27 is used in numpy latest release 2.1.2 at the time of writing
<details>
<summary>Python traceback</summary>
```
Fatal Python error: Segmentation fault
Thread 0x00007c7907e006c0 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/pool.py", line 579 in _handle_results
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1012 in run
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1032 in _bootstrap
Thread 0x00007c790d2006c0 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/pool.py", line 531 in _handle_tasks
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1012 in run
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1032 in _bootstrap
Thread 0x00007c790dc006c0 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/selectors.py", line 415 in select
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/connection.py", line 1136 in wait
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/pool.py", line 502 in _wait_for_updates
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/pool.py", line 522 in _handle_workers
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1012 in run
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1032 in _bootstrap
Thread 0x00007c79146006c0 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/extmath.py", line 205 in safe_sparse_dot
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 407 in _euclidean_distances
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 372 in euclidean_distances
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/_param_validation.py", line 186 in wrapper
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 1881 in _dist_wrapper
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/parallel.py", line 136 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/parallel.py", line 598 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/_utils.py", line 72 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/pool.py", line 125 in worker
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1012 in run
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1032 in _bootstrap
Thread 0x00007c7913c006c0 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/extmath.py", line 205 in safe_sparse_dot
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 407 in _euclidean_distances
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 372 in euclidean_distances
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/_param_validation.py", line 186 in wrapper
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 1881 in _dist_wrapper
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/parallel.py", line 136 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/parallel.py", line 598 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/_utils.py", line 72 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/pool.py", line 125 in worker
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1012 in run
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1032 in _bootstrap
Thread 0x00007c79132006c0 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 355 in wait
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/externals/loky/backend/queues.py", line 147 in _feed
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1012 in run
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1032 in _bootstrap
Thread 0x00007c79128006c0 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/selectors.py", line 415 in select
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/multiprocessing/connection.py", line 1136 in wait
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/externals/loky/process_executor.py", line 654 in wait_result_broken_or_wakeup
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/externals/loky/process_executor.py", line 596 in run
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1075 in _bootstrap_inner
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/threading.py", line 1032 in _bootstrap
Thread 0x00007c797c527480 (most recent call first):
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/parallel.py", line 1762 in _retrieve
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/parallel.py", line 1650 in _get_outputs
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/joblib/parallel.py", line 2007 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/parallel.py", line 74 in __call__
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 1898 in _parallel_pairwise
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/metrics/pairwise.py", line 2375 in pairwise_distances
File "/home/lesteve/micromamba/envs/testenv/lib/python3.12/site-packages/sklearn/utils/_param_validation.py", line 213 in wrapper
File "/tmp/test_openblas.py", line 21 in <module>
```
</details>
<details>
<summary>Version info</summary>
```
System:
python: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0]
executable: /home/lesteve/micromamba/envs/testenv/bin/python
machine: Linux-6.10.10-arch1-1-x86_64-with-glibc2.40
Python dependencies:
sklearn: 1.5.2
pip: 24.2
setuptools: 75.1.0
numpy: 2.1.2
scipy: 1.14.1
Cython: None
pandas: None
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 12
prefix: libopenblas
filepath: /home/lesteve/micromamba/envs/testenv/lib/libopenblasp-r0.3.28.so
version: 0.3.28
threading_layer: pthreads
architecture: Haswell
user_api: openmp
internal_api: openmp
num_threads: 12
prefix: libgomp
filepath: /home/lesteve/micromamba/envs/testenv/lib/libgomp.so.1.0.0
version: None
```
</details>
<details>
<summary>mamba list output</summary>
```
❯ mamba list
List of packages in environment: "/home/lesteve/micromamba/envs/testenv"
Name Version Build Channel
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
_libgcc_mutex 0.1 conda_forge conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2
_openmp_mutex 4.5 2_gnu conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2
bzip2 1.0.8 h4bc722e_7 conda-forge/linux-64/bzip2-1.0.8-h4bc722e_7.conda
ca-certificates 2024.8.30 hbcca054_0 conda-forge/linux-64/ca-certificates-2024.8.30-hbcca054_0.conda
joblib 1.4.2 pyhd8ed1ab_0 conda-forge/noarch/joblib-1.4.2-pyhd8ed1ab_0.conda
ld_impl_linux-64 2.43 h712a8e2_2 conda-forge/linux-64/ld_impl_linux-64-2.43-h712a8e2_2.conda
libblas 3.9.0 25_linux64_openblas conda-forge/linux-64/libblas-3.9.0-25_linux64_openblas.conda
libcblas 3.9.0 25_linux64_openblas conda-forge/linux-64/libcblas-3.9.0-25_linux64_openblas.conda
libexpat 2.6.3 h5888daf_0 conda-forge/linux-64/libexpat-2.6.3-h5888daf_0.conda
libffi 3.4.2 h7f98852_5 conda-forge/linux-64/libffi-3.4.2-h7f98852_5.tar.bz2
libgcc 14.2.0 h77fa898_1 conda-forge/linux-64/libgcc-14.2.0-h77fa898_1.conda
libgcc-ng 14.2.0 h69a702a_1 conda-forge/linux-64/libgcc-ng-14.2.0-h69a702a_1.conda
libgfortran 14.2.0 h69a702a_1 conda-forge/linux-64/libgfortran-14.2.0-h69a702a_1.conda
libgfortran-ng 14.2.0 h69a702a_1 conda-forge/linux-64/libgfortran-ng-14.2.0-h69a702a_1.conda
libgfortran5 14.2.0 hd5240d6_1 conda-forge/linux-64/libgfortran5-14.2.0-hd5240d6_1.conda
libgomp 14.2.0 h77fa898_1 conda-forge/linux-64/libgomp-14.2.0-h77fa898_1.conda
liblapack 3.9.0 25_linux64_openblas conda-forge/linux-64/liblapack-3.9.0-25_linux64_openblas.conda
libnsl 2.0.1 hd590300_0 conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda
libopenblas 0.3.28 pthreads_h94d23a6_0 conda-forge/linux-64/libopenblas-0.3.28-pthreads_h94d23a6_0.conda
libsqlite 3.47.0 hadc24fc_0 conda-forge/linux-64/libsqlite-3.47.0-hadc24fc_0.conda
libstdcxx 14.2.0 hc0a3c3a_1 conda-forge/linux-64/libstdcxx-14.2.0-hc0a3c3a_1.conda
libuuid 2.38.1 h0b41bf4_0 conda-forge/linux-64/libuuid-2.38.1-h0b41bf4_0.conda
libxcrypt 4.4.36 hd590300_1 conda-forge/linux-64/libxcrypt-4.4.36-hd590300_1.conda
libzlib 1.3.1 hb9d3cd8_2 conda-forge/linux-64/libzlib-1.3.1-hb9d3cd8_2.conda
ncurses 6.5 he02047a_1 conda-forge/linux-64/ncurses-6.5-he02047a_1.conda
numpy 2.1.2 py312h58c1407_0 conda-forge/linux-64/numpy-2.1.2-py312h58c1407_0.conda
openssl 3.3.2 hb9d3cd8_0 conda-forge/linux-64/openssl-3.3.2-hb9d3cd8_0.conda
pip 24.2 pyh8b19718_1 conda-forge/noarch/pip-24.2-pyh8b19718_1.conda
python 3.12.7 hc5c86c4_0_cpython conda-forge/linux-64/python-3.12.7-hc5c86c4_0_cpython.conda
python_abi 3.12 5_cp312 conda-forge/linux-64/python_abi-3.12-5_cp312.conda
readline 8.2 h8228510_1 conda-forge/linux-64/readline-8.2-h8228510_1.conda
scikit-learn 1.5.2 py312h7a48858_1 conda-forge/linux-64/scikit-learn-1.5.2-py312h7a48858_1.conda
scipy 1.14.1 py312h62794b6_1 conda-forge/linux-64/scipy-1.14.1-py312h62794b6_1.conda
setuptools 75.1.0 pyhd8ed1ab_0 conda-forge/noarch/setuptools-75.1.0-pyhd8ed1ab_0.conda
threadpoolctl 3.5.0 pyhc1e730c_0 conda-forge/noarch/threadpoolctl-3.5.0-pyhc1e730c_0.conda
tk 8.6.13 noxft_h4845f30_101 conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda
tzdata 2024b hc8b5060_0 conda-forge/noarch/tzdata-2024b-hc8b5060_0.conda
wheel 0.44.0 pyhd8ed1ab_0 conda-forge/noarch/wheel-0.44.0-pyhd8ed1ab_0.conda
xz 5.2.6 h166bdaf_0 conda-forge/linux-64/xz-5.2.6-h166bdaf_0.tar.bz2
```
</details>
I saw this segmentation fault locally when running the scikit-learn tests:
```
pytest --pyargs sklearn.metrics.tests
```
which can be reduced a bit more:
```
pytest --pyargs sklearn.metrics.tests.test_classification sklearn.metrics.tests.test_pairwise -k 'classification and nan_valid and scoring0 or test_parallel_pairwise_distances_diagonal and float64' -v
```
If I had to guess, the reason this has not been seen in CI is that CI doesn't test advanced parallelism very thoroughly: most CI builds have 2 cores and use pytest-xdist, so neither BLAS nor OpenMP parallelism is exercised. We have a CI build without pytest-xdist, but since we have two cores, we probably only use BLAS parallelism or OpenMP parallelism, not both at the same time.
|
closed
|
2024-10-25T08:39:46Z
|
2024-11-25T16:32:15Z
|
https://github.com/scikit-learn/scikit-learn/issues/30151
|
[
"Bug"
] |
lesteve
| 13
|
ckan/ckan
|
api
| 7,592
|
Error while deleting a package with an extra field
|
## CKAN version
2.10
## Describe the bug
When deleting a package with an extra field, the following exception is traced:
```
ckan | 2023-05-16 13:05:12,630 ERROR [ckan.model.modification]
ckan | Traceback (most recent call last):
ckan | File "/srv/app/src/ckan/ckan/model/modification.py", line 71, in notify
ckan | observer.notify(entity, operation)
ckan | File "/srv/app/src/ckan/ckan/lib/search/__init__.py", line 167, in notify
ckan | logic.get_action('package_show')(cast(
ckan | File "/srv/app/src/ckan/ckan/logic/__init__.py", line 551, in wrapped
ckan | result = _action(context, data_dict, **kw)
ckan | File "/srv/app/src/ckan/ckan/logic/action/get.py", line 1018, in package_show
ckan | raise NotFound
ckan | ckan.logic.NotFound
```
### Steps to reproduce
Create a schema with an extra field on a package.
Create an instance of this package, then delete it.
After the deletion call, we have no changed or new objects.
<img width="461" alt="image" src="https://github.com/ckan/ckan/assets/338699/b01c3cd5-b6a0-40b7-8b16-f074695719f5">
But right after, in `modification.py` L58, ckan goes through all deleted objects to notify a 'change' on their related packages
<img width="640" alt="image" src="https://github.com/ckan/ckan/assets/338699/9abdd70a-0906-4c9a-bcae-cc6c99a15ee7">
But in this case the related package is the one that has been deleted
<img width="1191" alt="image" src="https://github.com/ckan/ckan/assets/338699/ac87ca50-60c6-44d6-87ab-9d68ae7cf39e">
### Expected behavior
Do not trace an exception, since the deletion succeeds.
Do not notify a package as changed when it is already deleted.
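A hedged sketch of the guard this suggests (hypothetical names, not the actual ckan code):
```python
def notify_observers(deleted_objects, notify):
    for obj in deleted_objects:
        pkg = obj.package  # hypothetical accessor to the related package
        if pkg.state == "deleted":
            continue  # re-indexing a deleted package is what raises NotFound
        notify(pkg, "changed")
```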
### Additional details
|
open
|
2023-05-16T14:56:10Z
|
2025-03-11T13:25:45Z
|
https://github.com/ckan/ckan/issues/7592
|
[] |
pkernevez
| 1
|
chaoss/augur
|
data-visualization
| 2,942
|
Gitlab - repo_info data population
|
Right now repo_info is not populating for GitLab. Further investigation is needed.
|
open
|
2024-10-21T15:52:57Z
|
2024-11-04T15:57:19Z
|
https://github.com/chaoss/augur/issues/2942
|
[] |
cdolfi
| 0
|
christabor/flask_jsondash
|
plotly
| 63
|
Use c3.load to reload data instead of re-generating in c3js
|
For performance and transition animations.
|
closed
|
2016-10-24T18:50:48Z
|
2017-07-11T17:34:10Z
|
https://github.com/christabor/flask_jsondash/issues/63
|
[
"enhancement",
"performance"
] |
christabor
| 2
|
graphql-python/graphene-django
|
graphql
| 488
|
Mutations + Django Forms + Additional arguments passed to form
|
Hello,
I was quite happy to learn that this lib has added the ability to integrate with existing Django forms. That being said, our use case requires us to alter the options in the forms dynamically.
In general we accomplish this in one of two ways:
```python
def make_UserForm(tenant):
class UserForm(forms.Form):
user = forms.ModelChoiceField(
User.objects.filter(profile__tenant=tenant),
)
return UserForm
```
Or
```python
class UserForm(forms.Form):
user = forms.ModelChoiceField(User.objects.none())
def __init__(self, tenant, *args, **kwargs):
super(UserForm, self).__init__(*args, **kwargs)
self.fields['user'].queryset = User.objects.filter(profile__tenant=tenant)
```
Both cases are simplified examples, but they get the point across. In order to use these types of forms, we need to be able to either return a Form class or update what arguments get passed to the form. I see that we have access to `get_form` and `get_form_kwargs`. So we could very easily override one or both of those methods to accomplish this goal. However, I am not certain if that is the ideal way to handle this, or if that is the stance this lib would want to take.
Neither `get_form` nor `get_form_kwargs` is underscored, so theoretically, I could see overriding those as the correct answer. That being said, overriding `get_form` requires remembering to call `get_form_kwargs` manually. To me, it would make more sense to only use the second example above and override `get_form_kwargs`. However, I believe there is a third option.
We could make a new method called `get_additional_form_kwargs` that always returns an empty dict and that `get_form_kwargs` calls: `kwargs.update(cls.get_additional_form_kwargs(root, info, **input)); return kwargs` (note that `dict.update` returns `None`, so the update and the return must be separate statements). If we did something like this, I would pitch renaming `get_form` and `get_form_kwargs` to lead with an underscore so it's known these are "private" methods. At which point we could rename the new method `get_additional_form_kwargs` to `get_form_kwargs`. Maybe.
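For illustration, a minimal sketch of the override approach using the second `UserForm` above (resolving the tenant via `info.context` is an assumption about how tenancy is looked up in a given app):
```python
from graphene_django.forms.mutation import DjangoFormMutation

class AssignUserMutation(DjangoFormMutation):
    class Meta:
        form_class = UserForm  # the tenant-aware form defined above

    @classmethod
    def get_form_kwargs(cls, root, info, **input):
        # Keep the default kwargs (data, etc.) and add the extra argument.
        kwargs = super().get_form_kwargs(root, info, **input)
        kwargs["tenant"] = info.context.user.profile.tenant
        return kwargs
```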
I am happy to make a PR to implement any part of this. I'm also open to being told that I'm missing something obvious and all of this is moot.
|
closed
|
2018-08-03T16:49:11Z
|
2019-06-18T12:09:19Z
|
https://github.com/graphql-python/graphene-django/issues/488
|
[
"wontfix"
] |
jlward
| 1
|
ageitgey/face_recognition
|
python
| 635
|
To make importing images in 'facerec_from_webcam_faster.py' easier by only providing the known_pictures folder path
|
* face_recognition version:1.2.3
* Python version:2.7
* Operating System:Fedora 27
### Description
In the face_recognition examples, the code in 'facerec_from_webcam_faster.py' works fine, but I wanted to make importing the images easier. In the above code, instead of manually writing the name and path of each image, I intend to create a function that takes only the folder path of the known images (with each image named after the person it shows). The function should generate the face encoding of each image and build two lists, one of face encodings and one of face names.
Any help would be appreciated.
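For what it's worth, a minimal sketch of such a loader (the folder layout and the file-name-as-label convention are assumptions):
```python
import os
import face_recognition

def load_known_faces(folder_path):
    """Return (encodings, names) built from every image in folder_path."""
    known_encodings, known_names = [], []
    for filename in sorted(os.listdir(folder_path)):
        if not filename.lower().endswith((".jpg", ".jpeg", ".png")):
            continue  # skip non-image files
        image = face_recognition.load_image_file(os.path.join(folder_path, filename))
        encodings = face_recognition.face_encodings(image)
        if encodings:  # skip images where no face was detected
            known_encodings.append(encodings[0])
            known_names.append(os.path.splitext(filename)[0])  # name = file name
    return known_encodings, known_names
```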
|
open
|
2018-09-29T19:04:54Z
|
2018-10-12T09:37:35Z
|
https://github.com/ageitgey/face_recognition/issues/635
|
[] |
sid-star
| 3
|
great-expectations/great_expectations
|
data-science
| 10,410
|
[BUG] Exception during validation of ExpectColumnValuesToNotBeNull
|
**Describe the bug**
I am using a Spark/pandas dataframe. The dataframe has multiple columns, and I am using one of them as the parameter for this expectation. If I use a column that has no null values, there is no exception and I get the expected result. But when I pass some other column (which also has no null values), or columns that do have nulls, I see exceptions.
**To Reproduce**
**Traceback:**
"exception_info": {
"exception_traceback": "Traceback (most recent call last):\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-6e39b63e-ade0-4e51-94c2-99c6cf2319a5/lib/python3.9/site-packages/great_expectations/validator/validator.py\", line 648, in graph_validate\n result = expectation.metrics_validate(\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-6e39b63e-ade0-4e51-94c2-99c6cf2319a5/lib/python3.9/site-packages/great_expectations/expectations/expectation.py\", line 1081, in metrics_validate\n _validate_dependencies_against_available_metrics(\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-6e39b63e-ade0-4e51-94c2-99c6cf2319a5/lib/python3.9/site-packages/great_expectations/expectations/expectation.py\", line 2773, in _validate_dependencies_against_available_metrics\n raise InvalidExpectationConfigurationError( # noqa: TRY003\ngreat_expectations.exceptions.exceptions.InvalidExpectationConfigurationError: Metric ('column_values.nonnull.unexpected_count', '657e384d8614677fff7d7be97ee019fe', ()) is not available for validation of configuration. Please check your configuration.\n",
"exception_message": "Metric ('column_values.nonnull.unexpected_count', '657e384d8614677fff7d7be97ee019fe', ()) is not available for validation of configuration. Please check your configuration.",
"raised_exception": true
**Environment (please complete the following information):**
- Databricks runtime 12.2 LTS
- GX version 1.0.4
|
open
|
2024-09-17T15:31:41Z
|
2024-11-19T06:52:19Z
|
https://github.com/great-expectations/great_expectations/issues/10410
|
[
"bug"
] |
Utkarsh-Krishna
| 13
|
skypilot-org/skypilot
|
data-science
| 4,702
|
Add us-east-3 region support to Lambda Labs
|
At the moment, a GH200 can't be launched in Lambda Labs in the us-east-3 region via SkyPilot:
```
$ sky launch -c test_gh200 --region=us-east-3 test_docker.yaml
Task from YAML spec: test_docker.yaml
ValueError: Invalid (region 'us-east-3', zone None) for cloud Lambda. Details:
Invalid region 'us-east-3'
List of supported lambda regions: 'asia-northeast-1, asia-northeast-2, asia-south-1, europe-central-1, europe-south-1, me-west-1, us-east-1, us-east-2, us-midwest-1, us-south-1, us-south-2, us-south-3, us-west-1, us-west-2, us-west-3'
```
It may need to be added to the region list here: https://github.com/skypilot-org/skypilot/blob/1fe3fab0e7a3242f32039d55b456603350dc4196/sky/clouds/service_catalog/data_fetchers/fetch_lambda_cloud.py#L21-L38
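A sketch of the kind of one-line addition (the variable name and exact layout in `fetch_lambda_cloud.py` are assumptions; the existing entries mirror the error message above):
```python
REGIONS = [
    'asia-northeast-1', 'asia-northeast-2', 'asia-south-1',
    'europe-central-1', 'europe-south-1', 'me-west-1',
    'us-east-1', 'us-east-2',
    'us-east-3',  # new region with GH200 capacity
    'us-midwest-1', 'us-south-1', 'us-south-2', 'us-south-3',
    'us-west-1', 'us-west-2', 'us-west-3',
]
```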
<img width="596" alt="Image" src="https://github.com/user-attachments/assets/8147793c-3ed0-4c48-9369-eb9cfd5009a5" />
|
closed
|
2025-02-12T20:12:25Z
|
2025-02-12T21:31:33Z
|
https://github.com/skypilot-org/skypilot/issues/4702
|
[] |
ajayjain
| 3
|
alteryx/featuretools
|
data-science
| 1,944
|
Remove excessive lint checking in all Python versions (just do 3.10)
|
- We should not do excessive lint checking in all python versions.
- We are moving to using black soon, and the main developer [suggested](https://github.com/psf/black/issues/2383#issuecomment-882729863) only using 3.10
|
closed
|
2022-03-10T18:04:19Z
|
2022-03-10T18:58:39Z
|
https://github.com/alteryx/featuretools/issues/1944
|
[] |
gsheni
| 0
|
vitalik/django-ninja
|
pydantic
| 497
|
Two endpoints with the same url, but different method throws METHOD NOT ALLOWED
|
Hi!
I was implementing two endpoints that have the same URL but differ in their parameters and HTTP methods, and the second one responds with a 405 METHOD NOT ALLOWED status.
Although it is true that typing the variables in the URL solves the problem, the question arises: can't there really be two endpoints with the same URL but different methods?
For example:
```python
@api.get('/user/{userid}', tags=['users'])
def get_user_userid(request, userid: int):
    return "OK GET"

@api.put('/user/{userid}', tags=['users'])
def put_user_userid(request, userid: int):
    return "OK PUT"
```
The result of querying PUT /user/1 is a 405 METHOD NOT ALLOWED.
Changing the url of the PUT method to /user/{str:userid} does get the response I expect.
|
closed
|
2022-07-05T08:28:43Z
|
2024-01-14T19:28:50Z
|
https://github.com/vitalik/django-ninja/issues/497
|
[] |
JFeldaca
| 6
|
ultralytics/yolov5
|
deep-learning
| 12,859
|
Why is background FP so high?
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I am using my customized dataset, and the background FP in the training results is very high. The following are the relevant training results. What problem am I encountering?


### Additional
_No response_
|
closed
|
2024-03-28T05:04:19Z
|
2024-11-07T14:57:21Z
|
https://github.com/ultralytics/yolov5/issues/12859
|
[
"question",
"Stale"
] |
a41497254
| 6
|
elliotgao2/toapi
|
api
| 124
|
Flask logging error
|
python 3.7
toapi 2.1.1
```
Traceback (most recent call last):
File "main.py", line 5, in <module>
api = Api()
File "/usr/local/lib/python3.7/site-packages/toapi/api.py", line 24, in __init__
self.__init_server()
File "/usr/local/lib/python3.7/site-packages/toapi/api.py", line 27, in __init_server
self.app.logger.setLevel(logging.ERROR)
AttributeError: module 'flask.logging' has no attribute 'ERROR'
```
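The traceback suggests that `logging` inside `toapi/api.py` resolves to Flask's `flask.logging` module (which has no `ERROR` constant) instead of the standard library. A minimal sketch of the intended call, assuming the stdlib module is imported:
```python
import logging  # stdlib module: defines logging.ERROR

from flask import Flask

app = Flask(__name__)
app.logger.setLevel(logging.ERROR)  # works once `logging` is the stdlib module
```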
|
closed
|
2018-07-14T14:18:22Z
|
2018-08-06T14:26:55Z
|
https://github.com/elliotgao2/toapi/issues/124
|
[] |
tmshv
| 2
|
pydantic/pydantic-ai
|
pydantic
| 603
|
PydanticAI doesn't return None but raw VertexAI code does
|
Using the VertexAI libraries directly, I can get back null values. However, when writing the equivalent code using PydanticAI, I never get back null values:
**Using vertex libraries:**
```py
from vertexai.generative_models import GenerativeModel, GenerationConfig
response_schema = {
"type": "object",
"properties": {
"age": {
"type": "INTEGER", "nullable": True
}
}
}
generation_config = GenerationConfig(
response_mime_type="application/json",
response_schema=response_schema
)
model = GenerativeModel("gemini-1.5-flash")
result = model.generate_content(
"The man was very old. what is the age of the old man?",
generation_config=generation_config
)
print(result.candidates[0].content.parts[0].text)
# "{\"age\": null}"
```
**Using pydanticAI:**
```py
from pydantic_ai import Agent
from pydantic import BaseModel
from pydantic_ai.models.vertexai import VertexAIModel
class AgeModel(BaseModel):
age: int | None = None
gemini_model = VertexAIModel('gemini-1.5-flash')
prompt = "The man was very old. what is the age of the old man?"
agent = Agent(gemini_model, result_type=AgeModel)
age = agent.run_sync(prompt)
print(age.data)
# age=100
```
|
closed
|
2025-01-03T09:41:48Z
|
2025-01-06T16:48:16Z
|
https://github.com/pydantic/pydantic-ai/issues/603
|
[
"bug"
] |
DataMonsterBoy
| 1
|
encode/uvicorn
|
asyncio
| 2,166
|
`--reload-include` doesn't work with hidden files e.g. `--reload-include .env`
|
### Initial Checks
- [X] I confirm this was discussed, and the maintainers suggest I open an issue.
- [X] I'm aware that if I created this issue without a discussion, it may be closed without a response.
### Discussion Link
```Text
https://github.com/encode/uvicorn/discussions/1705
```
### Description
The `--reload-include` CLI flag doesn't work when specifying a hidden file directly e.g.
```sh
uvicorn src.main:app --reload --reload-include .env
```
### Example Code
_No response_
### Python, Uvicorn & OS Version
```Text
Running uvicorn 0.23.2 with CPython 3.11.5 on Darwin
```
|
closed
|
2023-11-29T15:49:24Z
|
2024-02-10T14:33:45Z
|
https://github.com/encode/uvicorn/issues/2166
|
[] |
michaeloliverx
| 1
|
newpanjing/simpleui
|
django
| 94
|
Cannot scroll to the bottom after refreshing the browser
|
**Bug description**
After refreshing the browser, the page cannot be scrolled to the bottom.
**Steps to reproduce**
1. Refresh the browser
2. Scrolling cannot reach the very bottom of the page
|
closed
|
2019-06-19T03:43:01Z
|
2019-07-09T05:48:30Z
|
https://github.com/newpanjing/simpleui/issues/94
|
[
"bug"
] |
JohnYan2017
| 1
|
aiortc/aiortc
|
asyncio
| 500
|
Allow MediaPlayer to be closed on application shutdown
|
Add a `stop()` function to MediaPlayer so that an application can shut down cleanly. This also applies to the `webcam` example, which currently only closes properly if a peer connection was opened at least once.
I already wrote a stop function so I could do a PR
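A minimal sketch of such a helper using only the public track API (whether this covers all of MediaPlayer's internal cleanup is an assumption):
```python
def stop_player(player):
    # MediaPlayer exposes its tracks as .audio and .video (either may be None).
    for track in (player.audio, player.video):
        if track is not None:
            track.stop()  # MediaStreamTrack.stop() ends the track cleanly
```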
|
closed
|
2021-03-09T09:00:45Z
|
2022-03-22T02:39:54Z
|
https://github.com/aiortc/aiortc/issues/500
|
[
"stale"
] |
paulhobbel
| 5
|
marcomusy/vedo
|
numpy
| 1,115
|
Intersection between watertight mesh and plane mesh
|
I am using `cut_with_mesh` to obtain the part of a plane mesh that is inside another mesh. What I am using now:
```python
mesh = trimesh2vedo(plane_mesh).cut_with_mesh(trimesh2vedo(mesh))
```
It looks like this:

Right now it is crashing silently and I do not know why. Am I using the right function to do this?
|
closed
|
2024-05-13T10:43:29Z
|
2024-05-14T18:15:28Z
|
https://github.com/marcomusy/vedo/issues/1115
|
[] |
omaralvarez
| 8
|
huggingface/transformers
|
pytorch
| 36,941
|
Add param_to_hook_all_reduce parameter in HF Trainer
|
### Feature request
PyTorch DistributedDataParallel has a `param_to_hook_all_reduce` option which is missing in trainer.py. This is needed to overlap gradient synchronization with the backward pass.
https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html
### Motivation
This will make Multi node DDP training more efficient
### Your contribution
Maybe
|
open
|
2025-03-24T18:00:06Z
|
2025-03-24T18:00:06Z
|
https://github.com/huggingface/transformers/issues/36941
|
[
"Feature request"
] |
awsankur
| 0
|
mwaskom/seaborn
|
pandas
| 3,575
|
scatterplot bug in 0.13.0
|
Hello and first off, major thanks to @mwaskom for this incredible package. I report this bug as a developer of [Pyleoclim](https://github.com/LinkedEarth/Pyleoclim_util), which has seaborn as a dependency.
In upgrading to Python 3.11, we also upgraded to seaborn 0.13.0, and [this docstring example](https://pyleoclim-util.readthedocs.io/en/latest/core/api.html#pyleoclim.core.geoseries.GeoSeries.map_neighbors) started giving us grief. Specifically:
```
Traceback (most recent call last):
Cell In[3], line 17
gs.map_neighbors(mgs, radius=4000)
File ~/Documents/GitHub/Pyleoclim_util/pyleoclim/core/geoseries.py:446 in map_neighbors
fig, ax_d = mapping.scatter_map(neighborhood, fig=fig, gs_slot=gridspec_slot, hue=hue, size=size, marker=marker, projection=projection,
File ~/Documents/GitHub/Pyleoclim_util/pyleoclim/utils/mapping.py:1205 in scatter_map
_, ax_d = plot_scatter(df=df, x=x, y=y, hue_var=hue, size_var=size, marker_var=marker, ax_d=ax_d, proj=None, edgecolor=edgecolor,
File ~/Documents/GitHub/Pyleoclim_util/pyleoclim/utils/mapping.py:946 in plot_scatter
sns.scatterplot(data=hue_data, x=x, y=y, hue=hue_var, size=size_var,transform=transform, #change to transform=scatter_kwargs['transform']
File ~/opt/miniconda3/envs/pyleo/lib/python3.11/site-packages/seaborn/relational.py:624 in scatterplot
p.plot(ax, kwargs)
File ~/opt/miniconda3/envs/pyleo/lib/python3.11/site-packages/seaborn/relational.py:458 in plot
self.add_legend_data(ax, _scatter_legend_artist, kws, attrs)
File ~/opt/miniconda3/envs/pyleo/lib/python3.11/site-packages/seaborn/_base.py:1270 in add_legend_data
artist = func(label=label, **{"color": ".2", **common_kws, **level_kws})
File ~/opt/miniconda3/envs/pyleo/lib/python3.11/site-packages/seaborn/utils.py:922 in _scatter_legend_artist
if edgecolor == "face":
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
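For reference, a hypothetical minimal reproduction outside Pyleoclim, based on the traceback (the trigger appears to be an array-valued `edgecolor` reaching the scalar `edgecolor == "face"` comparison):
```python
import numpy as np
import pandas as pd
import seaborn as sns

df = pd.DataFrame({"x": [1, 2, 3], "y": [1, 2, 3], "h": ["a", "b", "a"]})
# With seaborn 0.13.0, an array-valued edgecolor reaches the scalar
# comparison in _scatter_legend_artist and raises the ambiguity error.
sns.scatterplot(data=df, x="x", y="y", hue="h",
                edgecolor=np.array(["k", "k", "k"]))
```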
The problem goes away if I revert to seaborn 0.12.2, keeping all other packages the same. My environment.yml is copied below to help troubleshoot. It seems that the fix could be a minor change in `_scatter_legend_artist`, but I don't have a good sense of the ramifications.
Best,
J.E.G.
--
name: pyleo
channels:
- defaults
dependencies:
- alabaster=0.7.12=pyhd3eb1b0_0
- appnope=0.1.2=py311hecd8cb5_1001
- asttokens=2.0.5=pyhd3eb1b0_0
- babel=2.11.0=py311hecd8cb5_0
- backcall=0.2.0=pyhd3eb1b0_0
- brotli-python=1.0.9=py311hcec6c5f_7
- bzip2=1.0.8=h1de35cc_0
- ca-certificates=2023.08.22=hecd8cb5_0
- certifi=2023.11.17=py311hecd8cb5_0
- cffi=1.16.0=py311h6c40b1e_0
- cloudpickle=2.2.1=py311hecd8cb5_0
- colorama=0.4.6=py311hecd8cb5_0
- cryptography=41.0.3=py311h30e54ef_0
- debugpy=1.6.7=py311hcec6c5f_0
- decorator=5.1.1=pyhd3eb1b0_0
- docutils=0.18.1=py311hecd8cb5_3
- executing=0.8.3=pyhd3eb1b0_0
- idna=3.4=py311hecd8cb5_0
- imagesize=1.4.1=py311hecd8cb5_0
- ipykernel=6.25.0=py311h85bffb1_0
- ipython=8.15.0=py311hecd8cb5_0
- jedi=0.18.1=py311hecd8cb5_1
- jinja2=3.1.2=py311hecd8cb5_0
- jupyter_client=8.6.0=py311hecd8cb5_0
- jupyter_core=5.5.0=py311hecd8cb5_0
- libcxx=14.0.6=h9765a3e_0
- libffi=3.4.4=hecd8cb5_0
- libsodium=1.0.18=h1de35cc_0
- markupsafe=2.1.1=py311h6c40b1e_0
- matplotlib-inline=0.1.6=py311hecd8cb5_0
- ncurses=6.4=hcec6c5f_0
- nest-asyncio=1.5.6=py311hecd8cb5_0
- openssl=3.0.12=hca72f7f_0
- parso=0.8.3=pyhd3eb1b0_0
- pexpect=4.8.0=pyhd3eb1b0_3
- pickleshare=0.7.5=pyhd3eb1b0_1003
- pip=23.3=py311hecd8cb5_0
- platformdirs=3.10.0=py311hecd8cb5_0
- prompt-toolkit=3.0.36=py311hecd8cb5_0
- psutil=5.9.0=py311h6c40b1e_0
- ptyprocess=0.7.0=pyhd3eb1b0_2
- pure_eval=0.2.2=pyhd3eb1b0_0
- pycparser=2.21=pyhd3eb1b0_0
- pygments=2.15.1=py311hecd8cb5_1
- pyopenssl=23.2.0=py311hecd8cb5_0
- pysocks=1.7.1=py311hecd8cb5_0
- python=3.11.5=hf27a42d_0
- python-dateutil=2.8.2=pyhd3eb1b0_0
- pytz=2023.3.post1=py311hecd8cb5_0
- pyzmq=25.1.0=py311hcec6c5f_0
- readline=8.2=hca72f7f_0
- requests=2.31.0=py311hecd8cb5_0
- setuptools=68.0.0=py311hecd8cb5_0
- six=1.16.0=pyhd3eb1b0_1
- snowballstemmer=2.2.0=pyhd3eb1b0_0
- sphinx=5.0.2=py311hecd8cb5_0
- sphinxcontrib-applehelp=1.0.2=pyhd3eb1b0_0
- sphinxcontrib-devhelp=1.0.2=pyhd3eb1b0_0
- sphinxcontrib-htmlhelp=2.0.0=pyhd3eb1b0_0
- sphinxcontrib-jsmath=1.0.1=pyhd3eb1b0_0
- sphinxcontrib-qthelp=1.0.3=pyhd3eb1b0_0
- sphinxcontrib-serializinghtml=1.1.5=pyhd3eb1b0_0
- spyder-kernels=2.4.4=py311hecd8cb5_0
- sqlite=3.41.2=h6c40b1e_0
- stack_data=0.2.0=pyhd3eb1b0_0
- tk=8.6.12=h5d9f67b_0
- tornado=6.3.3=py311h6c40b1e_0
- traitlets=5.7.1=py311hecd8cb5_0
- wcwidth=0.2.5=pyhd3eb1b0_0
- wheel=0.41.2=py311hecd8cb5_0
- wurlitzer=3.0.2=py311hecd8cb5_0
- xz=5.4.2=h6c40b1e_0
- zeromq=4.3.4=h23ab428_0
- zlib=1.2.13=h4dc903c_0
- pip:
- attrs==23.1.0
- bagit==1.8.1
- beautifulsoup4==4.12.2
- bibtexparser==1.4.1
- bleach==6.1.0
- cartopy==0.22.0
- chardet==5.2.0
- charset-normalizer==3.3.2
- comm==0.2.0
- contourpy==1.2.0
- cycler==0.12.1
- defusedxml==0.7.1
- demjson3==3.0.6
- dill==0.3.7
- doi2bib==0.4.0
- fastjsonschema==2.19.0
- fonttools==4.44.3
- future==0.18.3
- ipywidgets==8.1.1
- isodate==0.6.1
- joblib==1.3.2
- jsonschema==4.20.0
- jsonschema-specifications==2023.11.1
- jupyter-sphinx==0.4.0
- jupyterlab-pygments==0.2.2
- jupyterlab-widgets==3.0.9
- kiwisolver==1.4.5
- kneed==0.8.5
- latexcodec==2.0.1
- lipd==0.2.8.8
- llvmlite==0.41.1
- matplotlib==3.8.2
- mistune==3.0.2
- multiprocess==0.70.15
- nbclient==0.9.0
- nbconvert==7.11.0
- nbformat==5.9.2
- nbsphinx==0.9.3
- nitime==0.10.2
- numba==0.58.1
- numpy==1.23.5
- numpydoc==1.6.0
- packaging==23.2
- pandas==2.1.3
- pandocfilters==1.5.0
- pathos==0.3.1
- patsy==0.5.3
- pillow==10.1.0
- ply==3.11
- pox==0.3.3
- ppft==1.7.6.7
- pybtex==0.24.0
- pyhht==0.1.0
- pylipd==1.3.6
- pyparsing==3.1.1
- pyproj==3.6.1
- pyshp==2.3.1
- pyyaml==6.0.1
- rdflib==7.0.0
- readthedocs-sphinx-search==0.3.1
- referencing==0.31.0
- rpds-py==0.13.0
- scikit-learn==1.3.2
- scipy==1.11.3
- seaborn==0.12.2
- shapely==2.0.2
- sip==6.7.12
- soupsieve==2.5
- sphinx-copybutton==0.5.2
- sphinx-rtd-theme==1.3.0
- sphinxcontrib-jquery==4.1
- statsmodels==0.14.0
- tabulate==0.9.0
- tftb==0.1.4
- threadpoolctl==3.2.0
- tinycss2==1.2.1
- tqdm==4.66.1
- tzdata==2023.3
- unidecode==1.3.7
- urllib3==2.1.0
- webencodings==0.5.1
- wget==3.2
- widgetsnbextension==4.0.9
- xlrd==2.0.1
|
open
|
2023-11-29T02:45:56Z
|
2023-11-30T15:23:29Z
|
https://github.com/mwaskom/seaborn/issues/3575
|
[
"mod:relational",
"needs-reprex"
] |
CommonClimate
| 2
|
iMerica/dj-rest-auth
|
rest-api
| 562
|
Why does the registration functionality depend on allauth?
|
It's kind of strange to me that to enable registration I have to install allauth. It's especially strange that I need to include the `allauth.socialaccount` app, even when I am not planning to ever support social logins. Is this something that could be made optional?
|
open
|
2023-10-25T14:17:12Z
|
2024-01-06T15:21:52Z
|
https://github.com/iMerica/dj-rest-auth/issues/562
|
[] |
kevinrenskers
| 2
|
plotly/dash
|
flask
| 2,850
|
Dropdown changes in Dash 2.17 causing page loading issues in Dash docs
|
Our docs tests test different paths:
https://github.com/plotly/ddk-dash-docs/blob/main/tests/integration/test_bad_paths.py
With Dash 2.17, some of these tests fail because the page never loads
The page ends up stuck in a state like this

Seems to be happening on pages where the dropdown is updated based on the URL and vice versa. It seems to work without this change: https://github.com/plotly/dash/pull/2816
To recreate:
Run the /ddk-dash-docs tests using Dash 2.17
|
closed
|
2024-05-06T18:54:50Z
|
2024-05-14T15:01:53Z
|
https://github.com/plotly/dash/issues/2850
|
[
"bug"
] |
LiamConnors
| 0
|
plotly/dash
|
plotly
| 2,470
|
Provide Input Patch-Like Behavior
|
**Is your feature request related to a problem? Please describe.**
With the introduction of the `Patch` component for outputs, I was thinking that it would be pretty useful as well to be able to configure Patch-like features for inputs as well. Right now, if any key of the `dcc.Graph` figure attribute changes, then a callback would be triggered, and all of the associated data would be sent across the network.
**Describe the solution you'd like**
Instead, it might be nice if we could get a bit more granular in specifying the keys within a figure or store's data configuration, i.e., fire this callback if the actual data coordinates of the figure change, but don't fire the callback if the user changes the color of the title.
**Describe alternatives you've considered**
I'm not actually sure how you would go about checking which underlying key of an attribute fired in a callback with the current implementation of Dash. Perhaps some sort of `Store` component that holds the previous figure state?
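A runnable sketch of that `Store`-based workaround (component ids are placeholders):
```python
from dash import Dash, dcc, html, Input, Output, State
from dash.exceptions import PreventUpdate

app = Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id="graph"),
    dcc.Store(id="prev-figure"),
    html.Div(id="summary"),
])

@app.callback(
    Output("summary", "children"),
    Output("prev-figure", "data"),
    Input("graph", "figure"),
    State("prev-figure", "data"),
)
def summarize(figure, prev):
    # Skip the expensive work when the data coordinates did not change
    # (e.g. the user only restyled the title).
    if figure is not None and prev is not None and figure.get("data") == prev.get("data"):
        raise PreventUpdate
    return f"{len((figure or {}).get('data', []))} trace(s)", figure
```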
(Not an urgent request - just a nice-to-have down the line as partial updates start to receive first-class treatment and Dash applications start to process larger data sets).
|
open
|
2023-03-20T16:25:52Z
|
2024-08-13T19:29:21Z
|
https://github.com/plotly/dash/issues/2470
|
[
"feature",
"P3"
] |
milind
| 0
|
roboflow/supervision
|
machine-learning
| 1,243
|
Request: PolygonZone determination using object recognition
|
### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
I am currently using supervision in my thesis to analyse driving behavior in different videos, and it's super useful. But the PolygonZone array must be determined manually for each video.
Would it be possible to (semi-) automate this process with object recognition? By specifying an object that can be found in several places in a frame, the feature would then return the coordinates of the object from the frame and append them to an array.
### Use case
It would be very useful, for example, when determining the polygon zone, which is created on the basis of delineators. In this way, a road section can be recognized directly without having to enter an array manually.
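A sketch of the semi-automatic idea (the model choice, the detectability of the markers, and the vertex ordering are all assumptions):
```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # assumption: the marker objects are detectable
frame = cv2.imread("frame.jpg")   # hypothetical input frame

detections = sv.Detections.from_ultralytics(model(frame)[0])
# Use each detected marker's center as a candidate polygon vertex.
points = detections.get_anchors_coordinates(anchor=sv.Position.CENTER)
# Note: vertices may still need ordering (e.g. by angle around the centroid)
# before they form a sensible polygon.
zone = sv.PolygonZone(polygon=points.astype(int))
```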
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2024-05-29T12:20:02Z
|
2024-05-29T12:44:24Z
|
https://github.com/roboflow/supervision/issues/1243
|
[
"enhancement"
] |
pasionline
| 1
|
pandas-dev/pandas
|
python
| 60,364
|
DOC: Add missing links to optional dependencies in getting_started/install.html
|
### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/getting_started/install.html
### Documentation problem
On the “Installation” page, links are provided to the GitHub pages for the required dependencies and some of the optional dependencies, but the optional dependencies in the tables from “Visualization” onward do not link to the projects' GitHub pages. Links to the optional HTML-related dependencies are present elsewhere on the page, but not in the dependency table.
### Suggested fix for documentation
Add links from the library names to their respective repositories to make the page more consistent.
|
closed
|
2024-11-19T20:05:36Z
|
2024-12-02T19:09:40Z
|
https://github.com/pandas-dev/pandas/issues/60364
|
[
"Build",
"Docs"
] |
bluestarunderscore
| 8
|
yzhao062/pyod
|
data-science
| 416
|
Meaning of Contamination in SUOD
|
Hello,
Thanks for the excellent work with PyOD! I am using SUOD to create an ensemble of 6 models (3 ABOD, 3 INNE). Each of these instances has its own (different) contamination value. However, the SUOD object itself has a separate contamination parameter. What is the effect of/interaction between the contamination parameter on the SUOD object and the constituent outlier detectors (which could have their own individual contamination settings)?
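For concreteness, a sketch of the described setup (the per-detector contamination values are placeholders):
```python
from pyod.models.abod import ABOD
from pyod.models.inne import INNE
from pyod.models.suod import SUOD

base_estimators = [
    ABOD(contamination=0.05), ABOD(contamination=0.10), ABOD(contamination=0.15),
    INNE(contamination=0.05), INNE(contamination=0.10), INNE(contamination=0.15),
]
# As I understand it, SUOD's own `contamination` sets the decision threshold
# on the *combined* scores, while each base model's value only affects that
# model's internal labels.
detector = SUOD(base_estimators=base_estimators, contamination=0.1)
```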
On a side note, it is my understanding that each of these 6 models learns (in) a different subspace, since (different) random projections are used per instance. Is this correct?
Thanks!
|
closed
|
2022-06-23T17:53:05Z
|
2022-06-27T20:01:28Z
|
https://github.com/yzhao062/pyod/issues/416
|
[] |
madarax64
| 6
|
miguelgrinberg/Flask-SocketIO
|
flask
| 1,544
|
How to handle Bad request type?
|
How do I handle `code 400, message Bad HTTP/0.9 request type ('<?xml')`?
**Logs**
I am trying to connect to a particular socket-based device. All I am getting is this 400 error.
Is there anything like .recv() or something?
```
from flask import Flask, render_template
from flask_socketio import SocketIO, emit
app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'
socketio = SocketIO(app)
@socketio.on('message')
def handle_message(data):
print('received message: ' + data)
@socketio.event
def my_event(message):
emit('my response', {'data': 'got it!'})
if __name__ == '__main__':
socketio.run(app, host='0.0.0.0', port=3030)
```
|
closed
|
2021-05-08T16:20:06Z
|
2021-05-10T13:04:19Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1544
|
[
"question"
] |
bansalnaman15
| 11
|
BeanieODM/beanie
|
pydantic
| 487
|
[BUG] How to return `id` instead of `_id` in a FastAPI response
|

|
closed
|
2023-02-13T07:43:52Z
|
2023-03-31T02:25:45Z
|
https://github.com/BeanieODM/beanie/issues/487
|
[
"Stale"
] |
linpan
| 6
|
python-gino/gino
|
sqlalchemy
| 48
|
The URL of the documentation is wrong.
|
Hey, please update the documentation URL~
Thanks a lot ^_^
|
closed
|
2017-09-01T04:31:21Z
|
2017-09-01T08:54:33Z
|
https://github.com/python-gino/gino/issues/48
|
[
"duplicate"
] |
zjxubinbin
| 6
|
SciTools/cartopy
|
matplotlib
| 2,093
|
Problem with annotation in image
|
### Description
Hello,
I encountered a problem where text will not be cut off properly in cartopy at the edge of the figure.
This is a simplified version of the problem, but it should be obvious what is going wrong.
You can clearly see that the higher numbers (90+) are still displayed, even though they are outside of the image. This is a problem coming from the projection. I encounter the same problem in my "real" code where I have river names on a map.
The problem also happens when you zoom in: it displays text that is outside of the figure.

#### Code to reproduce
```
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
if __name__ == '__main__':
x = np.arange(100)
fig = plt.figure()
ax = plt.axes(projection=ccrs.PlateCarree())
scatter_plot=plt.scatter(x, x)
for item in x:
plt.annotate(str(item), (item, item))
plt.show()
```
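A possible (unverified) workaround sketch, reusing the names from the snippet above: clip each annotation to the axes patch so text outside the frame is dropped.
```python
# Inside the loop above, keep a handle on each annotation and clip it:
for item in x:
    text = plt.annotate(str(item), (item, item))
    text.set_clip_on(True)        # enable clipping for the text artist
    text.set_clip_path(ax.patch)  # clip against the axes boundary
```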
#### Traceback
There is no error message. Problem is purely optical.
### Operating system
Windows 10
### Cartopy version
0.21.0
### conda list
```
# Name Version Build Channel
adjusttext 0.7.3.1 py_1 conda-forge
arrow-cpp 9.0.0 py39h07ee6b1_6_cpu conda-forge
asciitree 0.3.3 py_2 conda-forge
aws-c-cal 0.5.11 he19cf47_0 conda-forge
aws-c-common 0.6.2 h8ffe710_0 conda-forge
aws-c-event-stream 0.2.7 h70e1b0c_13 conda-forge
aws-c-io 0.10.5 h2fe331c_0 conda-forge
aws-checksums 0.1.11 h1e232aa_7 conda-forge
aws-sdk-cpp 1.8.186 hb0612c5_3 conda-forge
brotli 1.0.9 h8ffe710_7 conda-forge
brotli-bin 1.0.9 h8ffe710_7 conda-forge
brotlipy 0.7.0 py39hb82d6ee_1004 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
c-ares 1.18.1 h8ffe710_0 conda-forge
ca-certificates 2022.9.24 h5b45459_0 conda-forge
cartopy 0.21.0 py39h4915f10_0 conda-forge
certifi 2022.9.24 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py39h0878f49_0 conda-forge
cftime 1.6.2 py39hc266a54_0 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
colorama 0.4.5 pyhd8ed1ab_0 conda-forge
conda 22.9.0 py39hcbf5309_1 conda-forge
conda-package-handling 1.9.0 py39h09fa780_0 conda-forge
console_shortcut 0.1.1 4
contourpy 1.0.5 py39h1f6ef14_0 conda-forge
cryptography 38.0.1 py39h58e9bdb_0 conda-forge
curl 7.85.0 heaf79c2_0 conda-forge
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
entrypoints 0.4 pyhd8ed1ab_0 conda-forge
eofs 1.4.0 py_0 conda-forge
fasteners 0.17.3 pyhd8ed1ab_0 conda-forge
fonttools 4.37.4 py39ha55989b_0 conda-forge
freetype 2.12.1 h546665d_0 conda-forge
geos 3.11.0 h39d44d4_0 conda-forge
gettext 0.19.8.1 h5728263_1009 conda-forge
gflags 2.2.2 ha925a31_1004 conda-forge
glib 2.74.0 h12be248_0 conda-forge
glib-tools 2.74.0 h12be248_0 conda-forge
glog 0.6.0 h4797de2_0 conda-forge
grpc-cpp 1.47.1 h535cfc9_6 conda-forge
gst-plugins-base 1.20.3 h001b923_2 conda-forge
gstreamer 1.20.3 h6b5321d_2 conda-forge
hdf4 4.2.15 h0e5069d_4 conda-forge
hdf5 1.12.2 nompi_h2a0e4a3_100 conda-forge
icu 70.1 h0e60522_0 conda-forge
idna 3.4 pyhd8ed1ab_0 conda-forge
imageio 2.22.0 pyhfa7a67d_0 conda-forge
intel-openmp 2022.1.0 h57928b3_3787 conda-forge
joblib 1.2.0 pyhd8ed1ab_0 conda-forge
jpeg 9e h8ffe710_2 conda-forge
kiwisolver 1.4.4 py39h2e07f2f_0 conda-forge
krb5 1.19.3 h1176d77_0 conda-forge
lcms2 2.12 h2a16943_0 conda-forge
lerc 4.0.0 h63175ca_0 conda-forge
libabseil 20220623.0 cxx17_h1a56200_4 conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libbrotlicommon 1.0.9 h8ffe710_7 conda-forge
libbrotlidec 1.0.9 h8ffe710_7 conda-forge
libbrotlienc 1.0.9 h8ffe710_7 conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libclang 14.0.6 default_h77d9078_0 conda-forge
libclang13 14.0.6 default_h77d9078_0 conda-forge
libcrc32c 1.1.2 h0e60522_0 conda-forge
libcurl 7.85.0 heaf79c2_0 conda-forge
libdeflate 1.14 hcfcfb64_0 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libglib 2.74.0 h79619a9_0 conda-forge
libgoogle-cloud 2.2.0 hc8dde07_1 conda-forge
libiconv 1.17 h8ffe710_0 conda-forge
liblapack 3.9.0 16_win64_mkl conda-forge
libnetcdf 4.8.1 nompi_h85765be_104 conda-forge
libogg 1.3.4 h8ffe710_1 conda-forge
libpng 1.6.38 h19919ed_0 conda-forge
libprotobuf 3.21.7 h12be248_0 conda-forge
libsqlite 3.39.4 hcfcfb64_0 conda-forge
libssh2 1.10.0 h680486a_3 conda-forge
libthrift 0.16.0 h9f558f2_2 conda-forge
libtiff 4.4.0 h8e97e67_4 conda-forge
libutf8proc 2.7.0 hcb41399_0 conda-forge
libvorbis 1.3.7 h0e60522_0 conda-forge
libwebp-base 1.2.4 h8ffe710_0 conda-forge
libxcb 1.13 hcd874cb_1004 conda-forge
libzip 1.9.2 hfed4ece_1 conda-forge
libzlib 1.2.12 hcfcfb64_4 conda-forge
lz4-c 1.9.3 h8ffe710_1 conda-forge
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
matplotlib 3.6.0 py39hcbf5309_0 conda-forge
matplotlib-base 3.6.0 py39haf65ace_0 conda-forge
menuinst 1.4.19 py39hcbf5309_0 conda-forge
mkl 2022.1.0 h6a75c08_874 conda-forge
mplcursors 0.5.1 pyhd8ed1ab_0 conda-forge
mpldatacursor 0.7.1 pyhd8ed1ab_0 conda-forge
msgpack-python 1.0.4 py39h2e07f2f_0 conda-forge
msys2-conda-epoch 20160418 1 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
netcdf4 1.6.1 nompi_py39h34fa13a_100 conda-forge
numcodecs 0.10.2 py39h415ef7b_0 conda-forge
numpy 1.23.3 py39h9061af7_0 conda-forge
openjpeg 2.5.0 hc9384bd_1 conda-forge
openssl 1.1.1q h8ffe710_0 conda-forge
packaging 21.3 pyhd8ed1ab_0 conda-forge
pandas 1.5.0 py39h2ba5b7c_0 conda-forge
parquet-cpp 1.5.1 2 conda-forge
pcre2 10.37 hdfff0fc_1 conda-forge
pillow 9.2.0 py39hcef8f5f_2 conda-forge
pip 22.2.2 pyhd8ed1ab_0 conda-forge
plotly 5.10.0 pyhd8ed1ab_0 conda-forge
ply 3.11 py_1 conda-forge
powershell_shortcut 0.0.1 3
proj 9.1.0 h3863b3b_0 conda-forge
pthread-stubs 0.4 hcd874cb_1001 conda-forge
pyarrow 9.0.0 py39h2c50fde_6_cpu conda-forge
pycosat 0.6.3 py39hb82d6ee_1010 conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pyopenssl 22.0.0 pyhd8ed1ab_1 conda-forge
pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge
pyproj 3.4.0 py39haa55e60_1 conda-forge
pyqt 5.15.7 py39hb08f45d_0 conda-forge
pyqt5-sip 12.11.0 py39h415ef7b_0 conda-forge
pyshp 2.3.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyh0701188_6 conda-forge
python 3.9.13 h9a09f29_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.9 2_cp39 conda-forge
pytz 2022.4 pyhd8ed1ab_0 conda-forge
qt-main 5.15.6 hf0cf448_0 conda-forge
re2 2022.06.01 h0e60522_0 conda-forge
requests 2.28.1 pyhd8ed1ab_1 conda-forge
ruamel_yaml 0.15.80 py39hb82d6ee_1007 conda-forge
scipy 1.9.1 py39h316f440_0 conda-forge
setuptools 65.4.1 pyhd8ed1ab_0 conda-forge
shapely 1.8.4 py39he0923fe_0 conda-forge
sip 6.6.2 py39h415ef7b_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
snappy 1.1.9 h82413e6_1 conda-forge
sqlite 3.39.4 hcfcfb64_0 conda-forge
tbb 2021.6.0 h91493d7_0 conda-forge
tenacity 8.1.0 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
toml 0.10.2 pyhd8ed1ab_0 conda-forge
toolz 0.12.0 pyhd8ed1ab_0 conda-forge
tornado 6.2 py39hb82d6ee_0 conda-forge
tqdm 4.64.1 pyhd8ed1ab_0 conda-forge
typing-extensions 4.3.0 hd8ed1ab_0 conda-forge
typing_extensions 4.3.0 pyha770c72_0 conda-forge
tzdata 2022d h191b570_0 conda-forge
ucrt 10.0.20348.0 h57928b3_0 conda-forge
unicodedata2 14.0.0 py39hb82d6ee_1 conda-forge
urllib3 1.26.11 pyhd8ed1ab_0 conda-forge
vc 14.2 hac3ee0b_8 conda-forge
vs2015_runtime 14.29.30139 h890b9b1_8 conda-forge
wheel 0.37.1 pyhd8ed1ab_0 conda-forge
win_inet_pton 1.1.0 py39hcbf5309_4 conda-forge
xarray 2022.9.0 pyhd8ed1ab_0 conda-forge
xorg-libxau 1.0.9 hcd874cb_0 conda-forge
xorg-libxdmcp 1.1.3 hcd874cb_0 conda-forge
xz 5.2.6 h8d14728_0 conda-forge
yaml 0.2.5 h8ffe710_2 conda-forge
zarr 2.13.2 pyhd8ed1ab_1 conda-forge
zlib 1.2.12 hcfcfb64_4 conda-forge
zstd 1.5.2 h7755175_4 conda-forge
```
|
open
|
2022-10-06T15:29:13Z
|
2022-10-15T21:11:20Z
|
https://github.com/SciTools/cartopy/issues/2093
|
[] |
HelixPiano
| 5
|
sloria/TextBlob
|
nlp
| 396
|
Error in translation and detect_language
|
Hi,
When called, the functions detect_language & translate are not working. After a couple of attempts, I receive an HTTP 404 error, unfortunately. I'm quite unsure what the issue might be. I don't know if Google updated their criteria regarding APIs, but @sloria could you have a look at this? I'm happy to help.
|
closed
|
2021-09-14T09:56:18Z
|
2021-09-14T12:59:55Z
|
https://github.com/sloria/TextBlob/issues/396
|
[] |
DennisvDijk
| 2
|
autogluon/autogluon
|
data-science
| 4,578
|
[BUG] TabularPredictor fit method with an (hyper)parameter `learning_curves` crashes
|
**Bug Report Checklist**
- [ ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
Hello,
I am trying to get the learning curves out of the tabular predictor after fitting,
but I could not manage to get the TabularPredictor `fit` method to run with `learning_curves` as a hyperparameter,
either as `True` or as a dictionary.
It crashes.
**Expected behavior**
The fit method should run properly, and afterwards I should be able to call learning_curves() from the predictor.
And if I actually made a mistake with the parameters, a better error message would help.
**To Reproduce**
```python
import autogluon.tabular
train_data = autogluon.tabular.TabularDataset(data=train_data_file_path)
predictor = autogluon.tabular.TabularPredictor(
label='TARGET',
eval_metric='roc_auc',
path = c.model_folder_path,
)
predictor.fit(
train_data=train_data,
presets=[
'optimize_for_deployment', # will prune not so important sub models
'medium_quality' # will speed up training
# 'interpretable', # will crash
],
time_limit=60*45, # seconds
hyperparameters={
'learning_curves':{
'metrics': 'roc_auc',
'use_error':False,
},
},
)
```
**Screenshots / Logs**
error message:
<details>
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[12], line 35
33 utils.delete_empty_folder(c.model_folder_path)
34 assert not os.path.exists(c.model_folder_path)
---> 35 raise(error)
Cell In[12], line 10
5 predictor = autogluon.tabular.TabularPredictor(
6 label='TARGET',
7 eval_metric='roc_auc',
8 path = c.model_folder_path,
9 )
---> 10 predictor.fit(
11 train_data=train_data,
12 presets=[
13 'optimize_for_deployment', # will prune not so important sub models
14 'medium_quality' # will speed up training
15 # 'interpretable', # will crash
16 ],
17 time_limit=60*45, # seconds
18 hyperparameters={
19 'learning_curves':{
20 'metrics': 'roc_auc',
21 'use_error':False,
22 },
23 },
24 # learning_curves=True,
25 )
26 assert predictor.path == c.model_folder_path
27 print(f"{predictor.predictor_file_name = }")
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/core/utils/decorators.py:31](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/core/utils/decorators.py#line=30), in unpack.<locals>._unpack_inner.<locals>._call(*args, **kwargs)
28 @functools.wraps(f)
29 def _call(*args, **kwargs):
30 gargs, gkwargs = g(*other_args, *args, **kwargs)
---> 31 return f(*gargs, **gkwargs)
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py:1167](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py#line=1166), in TabularPredictor.fit(self, train_data, tuning_data, time_limit, presets, hyperparameters, feature_metadata, infer_limit, infer_limit_batch_size, fit_weighted_ensemble, fit_full_last_level_weighted_ensemble, full_weighted_ensemble_additionally, dynamic_stacking, calibrate_decision_threshold, num_cpus, num_gpus, **kwargs)
1164 ag_fit_kwargs["num_stack_levels"] = num_stack_levels
1165 ag_fit_kwargs["time_limit"] = time_limit
-> 1167 self._fit(ag_fit_kwargs=ag_fit_kwargs, ag_post_fit_kwargs=ag_post_fit_kwargs)
1169 return self
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py:1173](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py#line=1172), in TabularPredictor._fit(self, ag_fit_kwargs, ag_post_fit_kwargs)
1171 def _fit(self, ag_fit_kwargs: dict, ag_post_fit_kwargs: dict):
1172 self.save(silent=True) # Save predictor to disk to enable prediction and training after interrupt
-> 1173 self._learner.fit(**ag_fit_kwargs)
1174 self._set_post_fit_vars()
1175 self._post_fit(**ag_post_fit_kwargs)
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py:159](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py#line=158), in AbstractTabularLearner.fit(self, X, X_val, **kwargs)
157 raise AssertionError("Learner is already fit.")
158 self._validate_fit_input(X=X, X_val=X_val, **kwargs)
--> 159 return self._fit(X=X, X_val=X_val, **kwargs)
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/default_learner.py:122](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/default_learner.py#line=121), in DefaultLearner._fit(self, X, X_val, X_unlabeled, holdout_frac, num_bag_folds, num_bag_sets, time_limit, infer_limit, infer_limit_batch_size, verbosity, **trainer_fit_kwargs)
119 self.eval_metric = trainer.eval_metric
121 self.save()
--> 122 trainer.fit(
123 X=X,
124 y=y,
125 X_val=X_val,
126 y_val=y_val,
127 X_unlabeled=X_unlabeled,
128 holdout_frac=holdout_frac,
129 time_limit=time_limit_trainer,
130 infer_limit=infer_limit,
131 infer_limit_batch_size=infer_limit_batch_size,
132 groups=groups,
133 **trainer_fit_kwargs,
134 )
135 self.save_trainer(trainer=trainer)
136 time_end = time.time()
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py:125](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py#line=124), in AutoTrainer.fit(self, X, y, hyperparameters, X_val, y_val, X_unlabeled, holdout_frac, num_stack_levels, core_kwargs, aux_kwargs, time_limit, infer_limit, infer_limit_batch_size, use_bag_holdout, groups, **kwargs)
122 log_str += "}"
123 logger.log(20, log_str)
--> 125 self._train_multi_and_ensemble(
126 X=X,
127 y=y,
128 X_val=X_val,
129 y_val=y_val,
130 X_unlabeled=X_unlabeled,
131 hyperparameters=hyperparameters,
132 num_stack_levels=num_stack_levels,
133 time_limit=time_limit,
134 core_kwargs=core_kwargs,
135 aux_kwargs=aux_kwargs,
136 infer_limit=infer_limit,
137 infer_limit_batch_size=infer_limit_batch_size,
138 groups=groups,
139 )
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:2589](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py#line=2588), in AbstractTrainer._train_multi_and_ensemble(self, X, y, X_val, y_val, hyperparameters, X_unlabeled, num_stack_levels, time_limit, groups, **kwargs)
2587 self._num_rows_val = len(X_val)
2588 self._num_cols_train = len(list(X.columns))
-> 2589 model_names_fit = self.train_multi_levels(
2590 X,
2591 y,
2592 hyperparameters=hyperparameters,
2593 X_val=X_val,
2594 y_val=y_val,
2595 X_unlabeled=X_unlabeled,
2596 level_start=1,
2597 level_end=num_stack_levels + 1,
2598 time_limit=time_limit,
2599 **kwargs,
2600 )
2601 if len(self.get_model_names()) == 0:
2602 # TODO v1.0: Add toggle to raise exception if no models trained
2603 logger.log(30, "Warning: AutoGluon did not successfully train any models")
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:452](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py#line=451), in AbstractTrainer.train_multi_levels(self, X, y, hyperparameters, X_val, y_val, X_unlabeled, base_model_names, core_kwargs, aux_kwargs, level_start, level_end, time_limit, name_suffix, relative_stack, level_time_modifier, infer_limit, infer_limit_batch_size)
450 core_kwargs_level["time_limit"] = core_kwargs_level.get("time_limit", time_limit_core)
451 aux_kwargs_level["time_limit"] = aux_kwargs_level.get("time_limit", time_limit_aux)
--> 452 base_model_names, aux_models = self.stack_new_level(
453 X=X,
454 y=y,
455 X_val=X_val,
456 y_val=y_val,
457 X_unlabeled=X_unlabeled,
458 models=hyperparameters,
459 level=level,
460 base_model_names=base_model_names,
461 core_kwargs=core_kwargs_level,
462 aux_kwargs=aux_kwargs_level,
463 name_suffix=name_suffix,
464 infer_limit=infer_limit,
465 infer_limit_batch_size=infer_limit_batch_size,
466 full_weighted_ensemble=full_weighted_ensemble,
467 additional_full_weighted_ensemble=additional_full_weighted_ensemble,
468 )
469 model_names_fit += base_model_names + aux_models
470 if self.model_best is None and len(model_names_fit) != 0:
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:600](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py#line=599), in AbstractTrainer.stack_new_level(self, X, y, models, X_val, y_val, X_unlabeled, level, base_model_names, core_kwargs, aux_kwargs, name_suffix, infer_limit, infer_limit_batch_size, full_weighted_ensemble, additional_full_weighted_ensemble)
598 core_kwargs["name_suffix"] = core_kwargs.get("name_suffix", "") + name_suffix
599 aux_kwargs["name_suffix"] = aux_kwargs.get("name_suffix", "") + name_suffix
--> 600 core_models = self.stack_new_level_core(
601 X=X,
602 y=y,
603 X_val=X_val,
604 y_val=y_val,
605 X_unlabeled=X_unlabeled,
606 models=models,
607 level=level,
608 infer_limit=infer_limit,
609 infer_limit_batch_size=infer_limit_batch_size,
610 base_model_names=base_model_names,
611 **core_kwargs,
612 )
614 aux_models = []
615 if full_weighted_ensemble:
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py:706](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/core/trainer/abstract_trainer.py#line=705), in AbstractTrainer.stack_new_level_core(self, X, y, models, X_val, y_val, X_unlabeled, level, base_model_names, stack_name, ag_args, ag_args_fit, ag_args_ensemble, included_model_types, excluded_model_types, ensemble_type, name_suffix, get_models_func, refit_full, infer_limit, infer_limit_batch_size, **kwargs)
693 ensemble_kwargs = {
694 "base_model_names": base_model_names,
695 "base_model_paths_dict": base_model_paths,
696 "base_model_types_dict": base_model_types,
697 "random_state": level + self.random_state,
698 }
699 get_models_kwargs.update(
700 dict(
701 ag_args_ensemble=ag_args_ensemble,
(...)
704 )
705 )
--> 706 models, model_args_fit = get_models_func(hyperparameters=models, **get_models_kwargs)
707 if model_args_fit:
708 hyperparameter_tune_kwargs = {
709 model_name: model_args_fit[model_name]["hyperparameter_tune_kwargs"]
710 for model_name in model_args_fit
711 if "hyperparameter_tune_kwargs" in model_args_fit[model_name]
712 }
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py:31](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py#line=30), in AutoTrainer.construct_model_templates(self, hyperparameters, **kwargs)
28 ag_args_fit = ag_args_fit.copy()
29 ag_args_fit["quantile_levels"] = quantile_levels
---> 31 return get_preset_models(
32 path=path,
33 problem_type=problem_type,
34 eval_metric=eval_metric,
35 hyperparameters=hyperparameters,
36 ag_args_fit=ag_args_fit,
37 invalid_model_names=invalid_model_names,
38 silent=silent,
39 **kwargs,
40 )
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/model_presets/presets.py:246](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/model_presets/presets.py#line=245), in get_preset_models(path, problem_type, eval_metric, hyperparameters, level, ensemble_type, ensemble_kwargs, ag_args_fit, ag_args, ag_args_ensemble, name_suffix, default_priorities, invalid_model_names, included_model_types, excluded_model_types, hyperparameter_preprocess_func, hyperparameter_preprocess_kwargs, silent)
244 model_cfgs_to_process.append(model_cfg)
245 for model_cfg in model_cfgs_to_process:
--> 246 model_cfg = clean_model_cfg(
247 model_cfg=model_cfg,
248 model_type=model_type,
249 ag_args=ag_args,
250 ag_args_ensemble=ag_args_ensemble,
251 ag_args_fit=ag_args_fit,
252 problem_type=problem_type,
253 )
254 model_cfg[AG_ARGS]["priority"] = model_cfg[AG_ARGS].get("priority", default_priorities.get(model_type, DEFAULT_CUSTOM_MODEL_PRIORITY))
255 model_priority = model_cfg[AG_ARGS]["priority"]
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/model_presets/presets.py:302](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/model_presets/presets.py#line=301), in clean_model_cfg(model_cfg, model_type, ag_args, ag_args_ensemble, ag_args_fit, problem_type)
300 model_type = model_cfg[AG_ARGS]["model_type"]
301 if not inspect.isclass(model_type):
--> 302 model_type = MODEL_TYPES[model_type]
303 elif not issubclass(model_type, AbstractModel):
304 logger.warning(
305 f"Warning: Custom model type {model_type} does not inherit from {AbstractModel}. This may lead to instability. Consider wrapping {model_type} with an implementation of {AbstractModel}!"
306 )
KeyError: 'learning_curves'
```
</details>
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2024-10-24
time : 18:17:04.029570
python : 3.11.10.final.0
OS : Linux
OS-release : 6.11.1-arch1-1
Version : #1 SMP PREEMPT_DYNAMIC Mon, 30 Sep 2024 23:49:50 +0000
machine : x86_64
processor :
num_cores : 8
cpu_ram_mb : 7781.19921875
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 12036
autogluon : None
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.tabular : 1.1.1
boto3 : 1.35.47
catboost : 1.2.7
fastai : 2.7.17
hyperopt : 0.2.7
imodels : None
lightgbm : 4.3.0
matplotlib : 3.9.2
networkx : 3.4.2
numpy : 1.26.4
onnx : None
onnxruntime : None
onnxruntime-gpu : None
pandas : 2.1.4
psutil : 5.9.8
pyarrow : 15.0.0
ray : 2.24.0
requests : 2.32.3
scikit-learn : 1.4.0
scikit-learn-intelex: None
scipy : 1.12.0
setuptools : 75.1.0
skl2onnx : None
tabpfn : None
torch : 2.1.2
tqdm : 4.66.5
vowpalwabbit : None
xgboost : 2.1.1
```
</details>
**I also tried with**
```python
predictor.fit(
train_data=train_data,
presets=[
'optimize_for_deployment', # will prune not so important sub models
'medium_quality' # will speed up training
# 'interpretable', # will crash
],
time_limit=60*45, # seconds
hyperparameters={
'learning_curves':True,
},
)
```
but it crashed; it seems that the "hyperparameter flag" `'learning_curves': True` is not handled, despite being mentioned in the documentation
<details>
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[17], line 6
1 predictor = autogluon.tabular.TabularPredictor(
2 label='TARGET',
3 eval_metric='roc_auc',
4 path = c.model_folder_path,
5 )
----> 6 predictor.fit(
7 train_data=train_data,
8 presets=[
9 'optimize_for_deployment', # will prune not so important sub models
10 'medium_quality' # will speed up training
11 # 'interpretable', # will crash
12 ],
13 time_limit=60*45, # seconds
14 hyperparameters={
15 'learning_curves':True,
16 },
17 )
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/core/utils/decorators.py:31](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/core/utils/decorators.py#line=30), in unpack.<locals>._unpack_inner.<locals>._call(*args, **kwargs)
28 @functools.wraps(f)
29 def _call(*args, **kwargs):
30 gargs, gkwargs = g(*other_args, *args, **kwargs)
---> 31 return f(*gargs, **gkwargs)
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py:1167](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py#line=1166), in TabularPredictor.fit(self, train_data, tuning_data, time_limit, presets, hyperparameters, feature_metadata, infer_limit, infer_limit_batch_size, fit_weighted_ensemble, fit_full_last_level_weighted_ensemble, full_weighted_ensemble_additionally, dynamic_stacking, calibrate_decision_threshold, num_cpus, num_gpus, **kwargs)
1164 ag_fit_kwargs["num_stack_levels"] = num_stack_levels
1165 ag_fit_kwargs["time_limit"] = time_limit
-> 1167 self._fit(ag_fit_kwargs=ag_fit_kwargs, ag_post_fit_kwargs=ag_post_fit_kwargs)
1169 return self
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py:1173](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/predictor/predictor.py#line=1172), in TabularPredictor._fit(self, ag_fit_kwargs, ag_post_fit_kwargs)
1171 def _fit(self, ag_fit_kwargs: dict, ag_post_fit_kwargs: dict):
1172 self.save(silent=True) # Save predictor to disk to enable prediction and training after interrupt
-> 1173 self._learner.fit(**ag_fit_kwargs)
1174 self._set_post_fit_vars()
1175 self._post_fit(**ag_post_fit_kwargs)
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py:159](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/abstract_learner.py#line=158), in AbstractTabularLearner.fit(self, X, X_val, **kwargs)
157 raise AssertionError("Learner is already fit.")
158 self._validate_fit_input(X=X, X_val=X_val, **kwargs)
--> 159 return self._fit(X=X, X_val=X_val, **kwargs)
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/default_learner.py:122](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/learner/default_learner.py#line=121), in DefaultLearner._fit(self, X, X_val, X_unlabeled, holdout_frac, num_bag_folds, num_bag_sets, time_limit, infer_limit, infer_limit_batch_size, verbosity, **trainer_fit_kwargs)
119 self.eval_metric = trainer.eval_metric
121 self.save()
--> 122 trainer.fit(
123 X=X,
124 y=y,
125 X_val=X_val,
126 y_val=y_val,
127 X_unlabeled=X_unlabeled,
128 holdout_frac=holdout_frac,
129 time_limit=time_limit_trainer,
130 infer_limit=infer_limit,
131 infer_limit_batch_size=infer_limit_batch_size,
132 groups=groups,
133 **trainer_fit_kwargs,
134 )
135 self.save_trainer(trainer=trainer)
136 time_end = time.time()
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py:108](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py#line=107), in AutoTrainer.fit(self, X, y, hyperparameters, X_val, y_val, X_unlabeled, holdout_frac, num_stack_levels, core_kwargs, aux_kwargs, time_limit, infer_limit, infer_limit_batch_size, use_bag_holdout, groups, **kwargs)
97 raise AssertionError(
98 "X_val, y_val is not None, but bagged mode was specified. "
99 "If calling from `TabularPredictor.fit()`, `tuning_data` should be None.\n"
(...)
104 "\tpredictor.fit(..., tuning_data=tuning_data, use_bag_holdout=True)"
105 )
107 # Log the hyperparameters dictionary so it easy to edit if the user wants.
--> 108 n_configs = sum([len(hyperparameters[k]) for k in hyperparameters.keys()])
109 extra_log_str = ""
110 display_all = (n_configs < 20) or (self.verbosity >= 3)
File [~/kood/credit-scoring/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py:108](http://localhost:8888/lab/tree/envs/credit/lib/python3.11/site-packages/autogluon/tabular/trainer/auto_trainer.py#line=107), in <listcomp>(.0)
97 raise AssertionError(
98 "X_val, y_val is not None, but bagged mode was specified. "
99 "If calling from `TabularPredictor.fit()`, `tuning_data` should be None.\n"
(...)
104 "\tpredictor.fit(..., tuning_data=tuning_data, use_bag_holdout=True)"
105 )
107 # Log the hyperparameters dictionary so it easy to edit if the user wants.
--> 108 n_configs = sum([len(hyperparameters[k]) for k in hyperparameters.keys()])
109 extra_log_str = ""
110 display_all = (n_configs < 20) or (self.verbosity >= 3)
TypeError: object of type 'bool' has no len()
```
</details>
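If `learning_curves` is meant to be a top-level `fit()` argument rather than a model hyperparameter (an assumption based on the commented-out `# learning_curves=True,` visible in the first traceback and the docs), the call would look like:
```python
predictor.fit(
    train_data=train_data,
    time_limit=60 * 45,
    learning_curves=True,  # top-level flag, not inside `hyperparameters`
)
```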
|
closed
|
2024-10-24T15:30:43Z
|
2024-10-25T19:43:44Z
|
https://github.com/autogluon/autogluon/issues/4578
|
[
"bug: unconfirmed",
"Needs Triage"
] |
g-ameline
| 3
|
tqdm/tqdm
|
pandas
| 1,483
|
img.tqdm.ml links broken
|
- [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
The README contains several image links that point to https://img.tqdm.ml, e.g. the beautiful "walkthrough" animation: https://img.tqdm.ml/tqdm.gif
At the time of writing, this server is not accessible - I can't even find any current DNS records for it.
Has the server been moved to a different address, meaning the README needs an update?
|
closed
|
2023-07-20T11:17:06Z
|
2023-08-09T11:21:29Z
|
https://github.com/tqdm/tqdm/issues/1483
|
[
"p0-bug-critical ☢",
"question/docs ‽"
] |
DFEvans
| 1
|
mwaskom/seaborn
|
pandas
| 3,131
|
Add flag to jointplot: perform "copula like" plot by using empirical probability integral transform on X and Y variables
|
As far as I understand, performing a jointplot for looking at the relation between 2 random variables can be a bit deceptive depending on the context, as the 2D density plot obtained is then a mixture of the effect of the marginal distributions and the effect of the relation between the variables. A way to get a clearer / unbiased view of the relation between 2 variables can be to apply the probability integral transform (PIT) on the 2 variables to compare, so that they both have a uniform distribution on [0, 1]. This way, performing a jointplot of the PIT-transformed variables really only shows the effect of the relation between these, and takes away the effect of the marginal distributions on how the plot looks.
As far as I understand, this is the core idea behind the copula approach.
I do not think that there is a way to get this out of the box with the ```jointplot``` command at the moment? Would it be of interest to provide a parameter flag to the ```jointplot``` command, for example ```probability_integral_transform=False``` (by default ```False``` to not change the current behavior), in order to provide a turnkey approach for this?
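In the meantime, a minimal sketch of the workaround, assuming a rank-based empirical PIT applied column-wise before a standard jointplot (the data here is synthetic):
```python
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(scale=0.8, size=500)
df = pd.DataFrame({"x": x, "y": y})

# Empirical PIT: column-wise ranks rescaled into (0, 1).
pit = df.rank(method="average") / (len(df) + 1)
sns.jointplot(data=pit, x="x", y="y", kind="hist")
```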
|
closed
|
2022-11-08T21:00:38Z
|
2022-11-18T23:39:31Z
|
https://github.com/mwaskom/seaborn/issues/3131
|
[] |
jerabaul29
| 4
|
Kludex/mangum
|
fastapi
| 319
|
VPC Lattice event support
|
VPC Lattice is a fairly new AWS Service (see, e.g.: https://aws.amazon.com/blogs/aws/introducing-vpc-lattice-simplify-networking-for-service-to-service-communication-preview/) for easier communication between VPC resources.
As a VPC Lattice service creates a new type of event, things do not work for Lattice Target Group targets (e.g. a Lambda function) that are running Mangum. For event structures see: https://docs.aws.amazon.com/lambda/latest/dg/services-vpc-lattice.html#vpc-lattice-receiving-events
Would it be possible to add support for these events in mangum as well?
|
open
|
2024-03-01T11:38:16Z
|
2024-03-01T11:39:49Z
|
https://github.com/Kludex/mangum/issues/319
|
[] |
michal-sa
| 0
|
Netflix/metaflow
|
data-science
| 1,443
|
@retry that retries only system errors
|
To handle interrupted spot instances and other system-level exceptions, we need a version of `@retry` that lets non-retriable user errors go through.
The example below does the trick for locally scheduled runs but not on production runs on Argo/SFN/Airflow:
```
import sys
import time
import traceback
from functools import wraps
from metaflow import FlowSpec, step, retry
from metaflow.exception import METAFLOW_EXIT_DISALLOW_RETRY
def platform_retry(f):
    @wraps(f)
    def wrapper(self):
        try:
            f(self)
        except Exception:
            # A user-level failure: print the traceback, then exit with the
            # special code that tells Metaflow not to retry this attempt.
            traceback.print_exc()
            sys.exit(METAFLOW_EXIT_DISALLOW_RETRY)
    return retry(wrapper)

class PlatformRetryFlow(FlowSpec):

    @platform_retry
    @step
    def start(self):
        time.sleep(10)
        print('fail', 1 / 0)
        self.next(self.end)

    @platform_retry
    @step
    def end(self):
        print("done!")

if __name__ == '__main__':
    PlatformRetryFlow()
```
We could implement the pattern as an option in `@retry`, e.g. `@retry(only_system=True)`.
|
open
|
2023-06-08T15:35:14Z
|
2023-06-08T15:35:14Z
|
https://github.com/Netflix/metaflow/issues/1443
|
[
"enhancement"
] |
tuulos
| 0
|
python-gino/gino
|
asyncio
| 349
|
Trying to close already closed BindContext
|
* GINO version: 0.7.5
* Python version: 3.6.6
* asyncpg version: 0.17.0
* aiocontextvars version: 0.1.2
* PostgreSQL version: postgres:10.3-alpine
### Description
When I run tests, gino tries to close an already closed bind.
```
test/integration/test_add_extra_number_segment.py:26 (test_unable_find_route)
def finalizer():
"""Yield again, to finalize."""
async def async_finalizer():
try:
await gen_obj.__anext__()
except StopAsyncIteration:
pass
else:
msg = "Async generator fixture didn't stop."
msg += "Yield only once."
raise ValueError(msg)
> loop.run_until_complete(async_finalizer())
../../../venv/wisdom6/lib/python3.6/site-packages/pytest_asyncio/plugin.py:94:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/base_events.py:468: in run_until_complete
return future.result()
../../../venv/wisdom6/lib/python3.6/site-packages/pytest_asyncio/plugin.py:86: in async_finalizer
await gen_obj.__anext__()
conftest.py:23: in app
yield client
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <gino.api._BindContext object at 0x10ae6b6d8>, exc_type = None
exc_val = None, exc_tb = None
async def __aexit__(self, exc_type, exc_val, exc_tb):
> await self._args[0].pop_bind().close()
E AttributeError: 'NoneType' object has no attribute 'close'
../../../venv/wisdom6/lib/python3.6/site-packages/gino/api.py:183: AttributeError
```
But I'm not able to make a script which reproduces this error.
Below is code which fixes this error. If you would like, I can make a PR with this change.
### What I Did
Change from:
```
class _BindContext:
    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self._args[0].pop_bind().close()
```
to:
```
class _BindContext:
    async def __aexit__(self, exc_type, exc_val, exc_tb):
        # pop_bind() can return None when the bind was already closed elsewhere
        bind_to_close = self._args[0].pop_bind()
        if bind_to_close:
            await bind_to_close.close()
```
|
closed
|
2018-09-26T13:09:38Z
|
2018-09-27T02:55:05Z
|
https://github.com/python-gino/gino/issues/349
|
[] |
patriczek
| 5
|
Kanaries/pygwalker
|
matplotlib
| 531
|
Use Streamlit's officially recommended method to render Pygwalker UI
|
**Use Streamlit's officially recommended method to render Pygwalker UI, so the following changes have been made:**
#### 1. Temporarily remove the Explorer button under pure chart
Under the new rendering method, Pygwalker needs to complete the request-response communication through rerun, and a rerun loses the state of the modal.
old:

new:

#### 2. Modify the parameters of the rendering method
The new API removes the width, height, and scrolling parameters. When the user needs to render the same component multiple times, different keys must be used, which is consistent with the parameters of official Streamlit components.
old:
```python
def viewer(
    self,
    width: Optional[int] = None,
    height: int = 1010,
    scrolling: bool = False,
) -> "DeltaGenerator":
    """Render filter renderer UI"""
    pass

def explorer(
    self,
    width: Optional[int] = None,
    height: int = 1010,
    scrolling: bool = False,
    default_tab: Literal["data", "vis"] = "vis"
) -> "DeltaGenerator":
    """Render explore UI (it can drag and drop fields)"""
    pass

def chart(
    self,
    index: int,
    width: Optional[int] = None,
    height: Optional[int] = None,
    scrolling: bool = False,
    pre_filters: Optional[List[PreFilter]] = None,
) -> "DeltaGenerator":
    pass
```
new:
```python
def viewer(
    self,
    *,
    key: str = "viewer",
    size: Optional[Tuple[int, int]] = None
):
    """Render filter renderer UI"""
    pass

def explorer(
    self,
    *,
    key: str = "explorer",
    default_tab: Literal["data", "vis"] = "vis"
):
    """Render explore UI (it can drag and drop fields)"""
    pass

def chart(
    self,
    index: int,
    *,
    key: str = "chart",
    size: Optional[Tuple[int, int]] = None,
    pre_filters: Optional[List[PreFilter]] = None,
):
    pass
```
beta version: `pip install pygwalker==0.4.9a3`
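A hedged usage sketch of the new API (the `StreamlitRenderer` import path is assumed from pygwalker's Streamlit integration; the method signatures are taken from above):
```python
import pandas as pd
from pygwalker.api.streamlit import StreamlitRenderer  # assumed import path

df = pd.read_csv("data.csv")  # placeholder data source
renderer = StreamlitRenderer(df)

# Distinct keys allow rendering the same component more than once.
renderer.explorer(key="explorer_1", default_tab="vis")
renderer.viewer(key="viewer_1", size=(800, 600))
```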
|
closed
|
2024-04-18T06:23:41Z
|
2025-02-26T02:15:53Z
|
https://github.com/Kanaries/pygwalker/issues/531
|
[
"proposal"
] |
longxiaofei
| 0
|
pydantic/FastUI
|
pydantic
| 36
|
Tailwind, Bootstrap and other CSS framework support.
|
We don't need more built-in framework support. We can make a base component that others can override to use any CSS framework. We can't support every framework's behaviour, but we can support the most popular ones.
|
closed
|
2023-12-01T19:22:54Z
|
2024-03-12T08:46:11Z
|
https://github.com/pydantic/FastUI/issues/36
|
[] |
Almas-Ali
| 5
|
man-group/arctic
|
pandas
| 473
|
Usage discussion: VersionStore vs TickStore, allowed options for VersionStore.write..
|
First of all - my thanks to the maintainers. This library is exactly what I was looking for and looks very promising.
I've been having a bit of trouble figuring how to optimally use `arctic` though. I've been following the examples in /howto which are... sparse. Is there somewhere else I might find examples or docs?
Now, some dumb questions about `VersionStore` and `TickStore`:
- I've noticed that every time I write to a `VersionStore`, an entirely new version is created. Are finer-grained options for versioning available? For instance, I would like to write streaming updates to a single version, only incrementing the version when manually specified. I tried just passing `version=1` to `lib.write`, but this doesn't seem to be supported (basic usage is sketched after this list).
- In what scenarios might one want to use `VersionStore` vs `TickStore`? It's not clear to me what the differences are from the README or the code.
- My current use case is primarily as a database for streams - for this use case `TickStore` is recommended? Is there a reason one might want to use `VersionStore` for this?
- ~~Is `TickStore` appropriate for data which may have more than one row for each timestamp (event data)?~~ Nope, not allowed by `TickStore`
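For concreteness, a hedged sketch of the basic `VersionStore` write/read pattern I'm using (library and symbol names are placeholders):
```python
import pandas as pd
from arctic import Arctic

store = Arctic("localhost")
store.initialize_library("mylib")  # defaults to a VersionStore
lib = store["mylib"]

lib.write("SYM", pd.DataFrame({"x": [1, 2, 3]}))  # each write bumps the version
item = lib.read("SYM")
print(item.version, item.data)
```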
Thanks in advance for your help and patience!
|
closed
|
2017-12-20T02:06:55Z
|
2019-04-03T21:33:22Z
|
https://github.com/man-group/arctic/issues/473
|
[] |
rueberger
| 19
|
MaartenGr/BERTopic
|
nlp
| 1,383
|
Expired link contained in the document page
|
Hello. I found an expired link in the documentation.
The link included in the tips in section 4 (Bag-of-words) of the algorithm description on the documentation page has expired.
page containing expired link:
https://maartengr.github.io/BERTopic/algorithm/algorithm.html#3-cluster-documents
expired link:
https://maartengr.github.io/BERTopic/getting_started/countvectorizer/countvectorizer.html
|
closed
|
2023-07-02T07:24:33Z
|
2023-09-27T09:09:40Z
|
https://github.com/MaartenGr/BERTopic/issues/1383
|
[] |
burugaria7
| 3
|
yinkaisheng/Python-UIAutomation-for-Windows
|
automation
| 258
|
Please add support for UIAutomation events
|
1. Microsoft's UIAutomation supports listening for certain events, such as a control gaining focus, but this is not implemented in the Python wrapper yet; I hope it can be implemented in the future.
2. I also hope async programming can be supported. Since Python 3.8, the asyncio library has been very mature and convenient to use, and I hope all the functions can be made async, because I found many places in the code using time.sleep(). That function is a bad choice: it puts the whole thread to sleep, so none of the other functions can run. With async methods this doesn't happen, because await asyncio.sleep() only suspends the current coroutine while other functions keep running, which is much more performant (see the sketch after this list).
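A minimal sketch of the difference: a blocking sleep would stall the whole event loop, while the awaitable one suspends only its own coroutine, so both workers below make progress concurrently.
```python
import asyncio

async def worker(name: str) -> None:
    for i in range(3):
        print(name, i)
        await asyncio.sleep(0.1)  # suspends only this coroutine

async def main() -> None:
    # time.sleep(0.1) here would block both workers; asyncio.sleep does not.
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())
```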
|
open
|
2023-09-20T19:56:13Z
|
2023-09-28T04:13:17Z
|
https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/258
|
[] |
mzdk100
| 1
|
AutoGPTQ/AutoGPTQ
|
nlp
| 365
|
[BUG]
|
**Describe the bug**
error: subprocess-exited-with-error
```
!git clone https://github.com/PanQiWei/AutoGPTQ
# !cd AutoGPTQ
!pip3 install .
```
Please teach me.
|
closed
|
2023-10-07T17:08:02Z
|
2023-10-25T16:16:19Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/365
|
[
"bug"
] |
MotoyaTakashi
| 2
|
home-assistant/core
|
python
| 141,272
|
Unable to add location to a local calendar event
|
### The problem
Even though the documentation specifies that you can automate on the location attribute, there is no way to add a location to an event when creating a new one.
It would be nice if there was a field to enter this information when creating/editing a local calendar event.
### What version of Home Assistant Core has the issue?
2025.3.0
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Local Calendar
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/local_calendar/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_
|
closed
|
2025-03-24T10:52:47Z
|
2025-03-24T14:23:04Z
|
https://github.com/home-assistant/core/issues/141272
|
[
"integration: local_calendar"
] |
martinsheldon
| 2
|
huggingface/datasets
|
numpy
| 7,247
|
Adding a column with a dict structure when mapping leads to wrong order
|
### Describe the bug
in `map()` function, I want to add a new column with a dict structure.
```python
def map_fn(example):
    example['text'] = {'user': ..., 'assistant': ...}
    return example
```
However, this leads to a wrong order `{'assistant': ..., 'user': ...}` in the dataset.
Thus I can't concatenate two datasets due to the different feature structures.
[Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example.
This seems to be an issue in the low-level pyarrow library rather than in datasets; however, I think datasets should allow concatenating two datasets that are effectively in the same structure.
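A hedged sketch of the mismatch (key names are illustrative): `map()` ends up with the struct keys reordered, so the two datasets carry different `Features` and concatenation fails.
```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"id": [1]}).map(
    lambda ex: {"text": {"user": "hi", "assistant": "hello"}}
)
ds_b = Dataset.from_dict({"id": [2], "text": [{"user": "hi", "assistant": "hello"}]})

print(ds_a.features)  # struct key order may differ between the two
print(ds_b.features)
concatenate_datasets([ds_a, ds_b])  # raises if the feature structures differ
```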
### Steps to reproduce the bug
[Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example
### Expected behavior
The two datasets should be concatenable.
### Environment info
N/A
|
open
|
2024-10-22T18:55:11Z
|
2024-10-22T18:55:23Z
|
https://github.com/huggingface/datasets/issues/7247
|
[] |
chchch0109
| 0
|
suitenumerique/docs
|
django
| 323
|
Receive email notification in my language
|
## Bug Report
**Problematic behavior**
Even though my interface is in French, I'm receiving notification emails in English.
Right now the emails are sent in the language setting that the sender is using in the app. It should be the opposite: they should use the recipient's language.
|
closed
|
2024-10-11T10:08:12Z
|
2025-03-05T13:29:25Z
|
https://github.com/suitenumerique/docs/issues/323
|
[
"bug",
"backend",
"i18n"
] |
virgile-dev
| 8
|
kiwicom/pytest-recording
|
pytest
| 83
|
[BUG] having this plugin enabled breaks Intellij Idea failed test reports
|
Ok, so this was a weird one to debug...
Simply having the `pytest-recording` plugin enabled breaks [Intellij Idea](https://www.jetbrains.com/idea/) pytest [failed test reports](https://www.jetbrains.com/help/idea/product-tests.html) in some specific test cases ([here is an example](https://github.com/CarloDePieri/pytest-recording-idea-issue/blob/main/tests/test_issue.py)):
- a test cassette is being recorded via plain `vcrpy` syntax or via `pytest-recording` decorator;
- two (or more) network calls are being executed and recorded: the first one succeeds, the second fails and then an
error is raised by `requests`' `raise_for_status()` method.
Instead of reporting the correct stack trace and main error, Idea reports there has been a failed string comparison
involving url paths.
My guess is that `pytest-recording` breaks something Idea's test runner relies on to generate errors messages, because:
- pytest output in the terminal is consistent and correct with or without the plugin installed;
- disabling the `pytest-recording` plugin in the Idea UI by adding `-p no:recording` as an additional argument restores the correct
error message;
- removing the plugin also restores the correct error message.
### How to reproduce the issue
Checkout the minimal test repo with `git clone https://github.com/CarloDePieri/pytest-recording-idea-issue`.
Create a virtualenv and install all dependencies there:
```
cd pytest-recording-idea-issue
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
IMPORTANT:
- this DOES NOT install `pytest-recording`;
- the test we are going to launch DOES NOT use this plugin decorator, but plain `vcrpy`.
Then:
- import the folder into Idea as a new project;
- add the created virtualenv as a python sdk for the project;
- run the test from the Idea ui: observe that the test fails with the **correct** error message reporting a 404;
- manually install `pytest-recording` with `pip install pytest-recording` in the venv;
- relaunch the test from the Idea ui: the error message is now **completely off track**: it reports a difference between
expected and actual values `'/api/users/23' != '/api/users/2'`.
#### Under the hood
Idea uses [this test runner](https://github.com/JetBrains/intellij-community/blob/09da58dedb5b39278df01c5dee01af19752d063d/python/helpers/pycharm/_jb_pytest_runner.py)
to launch pytest tests and generate the report message. Launching the script directly in the terminal shows indeed the
wrong error message when `pytest-recording` is installed.
#### Installed software versions
```
python: 3.10.4
pytest: 7.1.2
pytest-recording: 0.12.0
vcrpy: 4.1.1
requests: 2.27.1
Idea Ultimate: Build #IU-221.5591.52, built on May 10, 2022
os: arch linux
```
|
open
|
2022-05-17T10:21:19Z
|
2022-05-17T10:21:52Z
|
https://github.com/kiwicom/pytest-recording/issues/83
|
[
"Status: Review Needed",
"Type: Bug"
] |
CarloDePieri
| 0
|
hbldh/bleak
|
asyncio
| 462
|
No devices discovered on Windows
|
* bleak version: 0.9.0, 0.9.1, 0.10.0
* Python version: 3.9
* Operating System: Win10
### Description
I'm running the [scanning example](https://bleak.readthedocs.io/en/latest/scanning.html) on a Windows machine.
However, it does not discover any devices.
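For reference, a minimal version of the linked scanning example I'm running (standard bleak API, inlined here):
```python
import asyncio
from bleak import BleakScanner

async def main():
    devices = await BleakScanner.discover()
    for d in devices:
        print(d)

asyncio.run(main())
```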
### What I Did
Installed bleak with `pip install bleak`.
Running the example from both VSCode and command prompt. (Different command to enable logging)
I've tried with `$env:BLEAK_LOGGING=1` and also `logging.basicConfig(level=logging.DEBUG)` but I don't see any logging.
The only output is `DEBUG:asyncio:Using proactor: IocpProactor`
It runs on another machine, so I'm thinking there is something related to the installation?
Unfortunately, I did not do the installation on the other machine.
Any help would be greatly appreciated.
|
closed
|
2021-02-24T13:47:14Z
|
2021-02-24T15:46:37Z
|
https://github.com/hbldh/bleak/issues/462
|
[] |
0ge
| 0
|
deezer/spleeter
|
tensorflow
| 925
|
[Discussion] Whether to support cuda12?
|
I want to be able to use the same environment with whisperx and gradio, but when installing in the cuda12 environment, the following error message is displayed:
```
tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
```
And some versions of the packages are not compatible. Does anyone know how to run spleeter in cuda12?
I tried to install three software packages at the same time in python3.8, but there is still an error.
My environment:
OS: ubuntu 22.04
python: 3.10.7
cuda: 12.1
cudnn: 8
|
open
|
2025-01-23T01:44:34Z
|
2025-01-23T01:50:10Z
|
https://github.com/deezer/spleeter/issues/925
|
[
"question"
] |
aef5748
| 0
|
torchbox/wagtail-grapple
|
graphql
| 204
|
`RichTextField` returns internal markup while `RichTextBlock` returns browser-ready HTML
|
First, a hat tip to all the developers working on this project; thank you for all you have done.
I encountered this one today. The issue can be described thus:
```python
class ExamplePage(HeadlessPreviewMixin, Page):
body = RichTextField()
stream_body = StreamField([
('paragraph', RichTextBlock()),
...
])
graphql_fields = [
GraphQLString("body"),
GraphQLStreamfield("stream_body"),
]
```
In the result of my GraphQL query for the `ExamplePage` object described above, the `body` field will be in Wagtail's [Rich Text Internals](https://docs.wagtail.io/en/stable/extending/rich_text_internals.html#rich-text-internals) format, which allows the Gatsby client to interpolate objects like images correctly. This is good!
Unfortunately, the `stream_body` field is returned in browser-ready HTML, which is not so good, at least for my purposes — ideally I'd like the resolver to return the actual `raw_value` for `RichText` objects as the property name suggests. The current implementation returns what I would describe as a rendered or interpolated value.
I'm not sure what the expected behaviour _should_ be, but I was able to patch the issue trivially by adding an additional condition to `types.streamfield.StreamFieldInterface.resolve_raw_value()` as follows:
```python
def resolve_raw_value(self, info, **kwargs):
    if isinstance(self, blocks.StructValue):
        # This is the value for a nested StructBlock defined via GraphQLStreamfield
        return serialize_struct_obj(self)
    elif isinstance(self.value, dict):
        return serialize_struct_obj(self.value)
    # per https://docs.wagtail.io/en/stable/extending/rich_text_internals.html#data-format
    # RichTextBlock.value is converted to browser HTML; this change returns the "internal source"
    # so we can parse and mark-up client-side using the same logic as for `RichTextField`
    elif isinstance(self.value, RichText):
        return self.value.source
    return self.value
```
I'm not sure if this is the desired behaviour or not, but this would amount to a breaking change if existing code expects to carry forward the current, heterogeneous rendering approach between these two similar objects.
Can anyone from the project chime in? I'd be happy to submit a PR with the code above if the maintainers consider this a defect.
|
closed
|
2021-11-13T00:06:08Z
|
2022-08-19T16:37:54Z
|
https://github.com/torchbox/wagtail-grapple/issues/204
|
[] |
isolationism
| 2
|
localstack/localstack
|
python
| 11,840
|
bug: DistributedMap Step Misinterprets Input Structure in LocalStack
|
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I’ve encountered a discrepancy between AWS Step Functions behavior and LocalStack behavior when using a DistributedMap step in a State Machine. In AWS, the DistributedMap correctly accesses fields from the initial context input. However, in LocalStack, the DistributedMap appears to restrict its accessible fields solely to the items specified in itemsPath, leading to a failure when referencing fields outside of values.
For instance, in the following implementation:
```ts
const initialStep = new Pass(this, 'Initial step', {
  parameters: {
    bucket: 'test-bucket',
    values: ['1', '2', '3']
  },
  resultPath: '$.content'
});

const mapStep = new DistributedMap(this, 'Map step', {
  itemsPath: '$.content.values',
  itemSelector: {
    bucketName: JsonPath.stringAt('$.content.bucket'),
    value: JsonPath.numberAt('$$.Map.Item.Value')
  },
  resultPath: JsonPath.DISCARD
}).itemProcessor(endStep);
```
The DistributedMap step should have access to $.content.bucket as well as each item in values.
In LocalStack, when running the above configuration, the DistributedMap only processes values directly without access to $.content.bucket. As a result, trying to access bucket in itemSelector fails with an error:
```plaintext
2024-11-13T13:28:56.932 ERROR --- [-1398 (eval)] l.s.s.a.c.eval_component : Exception=FailureEventException,
Error=States.Runtime, Details={"taskFailedEventDetails": {"error": "States.Runtime", "cause": "The JSONPath
$.content.bucket specified for the field bucketName.$ could not be found in the input [\"1\", \"2\", \"3\"]"}} at '(ItemSelector|
{'payload_tmpl': (PayloadTmpl| {'payload_bindings': [(PayloadBindingPath| {'field': 'bucketName', 'path': '$.content.bucket'},
(PayloadBindingPathContextObj| {'field': 'value', 'path_context_obj': '$.Map.Item.Value'}]}}'
```
### Expected Behavior
The DistributedMap step must have access to the input fields.
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
1. Launch LocalStack with the client.
2. Deploy and execute the Step Function.
### Environment
```markdown
- OS: macOS sonoma 14.7
- LocalStack version: 3.8.1
```
### Anything else?
Even though an error occurs, the execution of the Step Function still returns `"status": "SUCCEEDED"`.
|
closed
|
2024-11-13T14:50:49Z
|
2024-11-19T15:41:11Z
|
https://github.com/localstack/localstack/issues/11840
|
[
"type: bug",
"status: resolved/fixed",
"aws:stepfunctions"
] |
gerson24
| 3
|
cvat-ai/cvat
|
pytorch
| 8,842
|
Flickering in 3D play functionality
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
The issue arises while in the annotation view for 3D tasks.
The issue happens on a self-hosted CVAT instance.
ISSUE:
When clicking on the play button, a gray screen is shown to indicate that a frame is getting loaded.
The time spent on a frame is smaller than the time on the gray screen.
This makes the video functionality unusable.
See video.
https://github.com/user-attachments/assets/7b52cfad-d33b-403a-94cd-770df413c813
### Expected Behavior
_No response_
### Possible Solution
A good fix could be to just avoid showing the gray screen in between frames (and hold the previous frame instead).
This will make the video functionality usable, even in case of a very long loading time (it will just result in a lower frame rate).
When navigating between frames with the next/previous buttons though I think it would be better to maintain the loading gray screen to distinguish between a frame and the next (in a video sequence they will be very similar).
### Context
This issue affects the ability to review the annotations, as reviewers need to navigate through the video.
In addition, without the play functionality, it would not be possible to verify that the labels are smooth in time (across different frames).
### Environment
- ubuntu 20.04
- CVAT v2.23.1
|
open
|
2024-12-18T14:26:03Z
|
2025-01-15T12:59:52Z
|
https://github.com/cvat-ai/cvat/issues/8842
|
[
"ui/ux"
] |
alessandrocennamo
| 0
|
fugue-project/fugue
|
pandas
| 357
|
[FEATURE] Map type support
|
Fugue needs to support Map type. Map type is in the form of <key_type, value_type> and the data is in the form of a list of key-value tuples or just a dict.
The construction of map type data is very different between different backends, for example duckdb will use (list_of_keys, list_of_values), spark only accepts dict, and pyarrow only accepts a list of tuples.
So the purpose of Map type support is when the data type appears in an input dataframe or intermediate results, Fugue will be able to carry it without throwing NotImplementedError.
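A hedged illustration of how different these constructions are, using pyarrow's list-of-(key, value)-tuples convention:
```python
import pyarrow as pa

# pyarrow builds map-typed data from a list of (key, value) tuples per row,
# while duckdb uses (list_of_keys, list_of_values) and spark takes a dict.
t = pa.map_(pa.string(), pa.int64())
arr = pa.array([[("a", 1), ("b", 2)], [("c", 3)]], type=t)
print(arr.type)  # map<string, int64>
```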
|
closed
|
2022-09-11T05:41:25Z
|
2022-10-03T03:35:17Z
|
https://github.com/fugue-project/fugue/issues/357
|
[
"enhancement",
"spark",
"core feature",
"pandas",
"dask",
"duckdb",
"ray"
] |
goodwanghan
| 1
|
pennersr/django-allauth
|
django
| 3,413
|
Documentation has a different version to the deployed
|
When I add _allauth.account.middleware.AccountMiddleware_ to the middleware section, it returns this response in the terminal.

|
closed
|
2023-09-04T02:40:41Z
|
2023-09-04T05:47:44Z
|
https://github.com/pennersr/django-allauth/issues/3413
|
[] |
RadySonabu
| 1
|
saleor/saleor
|
graphql
| 17,468
|
Bug: user query is case sensitive for the email field
|
### What are you trying to achieve?
My backend needs to query users by email (with manager permissions). I started running into issues and then realised that the [`user`](https://docs.saleor.io/api-reference/users/queries/user) query is case-sensitive.
### Steps to reproduce the problem
To reproduce this, sign up a user with an email containing uppercase letters. Then run a query like:
```
query { user(email: "...") { email id } }
```
where `...` should be replaced by an all-lowercase email. This returns with `user` set to `null`.
### What did you expect to happen?
I expected the lowercase email to match and return the user.
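For context, a hedged sketch of the kind of case-insensitive lookup I expected the resolver to perform (Django ORM; the email is illustrative):
```python
from django.contrib.auth import get_user_model

User = get_user_model()
# __iexact matches regardless of case, so a mixed-case signup is still found.
user = User.objects.filter(email__iexact="user@example.com").first()
```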
### Logs
_No response_
### Environment
Saleor version: dashboard v3.20.30, core v3.20.75
OS and version: NA
|
open
|
2025-03-10T14:41:24Z
|
2025-03-10T14:42:00Z
|
https://github.com/saleor/saleor/issues/17468
|
[
"bug",
"triage"
] |
rizo
| 1
|
mlflow/mlflow
|
machine-learning
| 14,843
|
[FR] Support runs:/ and models:/ for scoring
|
### Willingness to contribute
Yes. I would be willing to contribute this feature with guidance from the MLflow community.
### Proposal Summary
On this [PR discussion](https://github.com/mlflow/mlflow/pull/9538#discussion_r1318088799) the possibility to use runs:/ and models:/ for scoring has been proposed but it was decided to proceed with the PR without this feature.
Is there any plan to add this feature in the near future? IMHO this would greatly enhance the testing possibilities within MLflow.
### Motivation
> #### What is the use case for this feature?
> #### Why is this use case valuable to support for MLflow users in general?
> #### Why is this use case valuable to support for your project(s) or organization?
> #### Why is it currently difficult to achieve this use case?
### Details
_No response_
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
|
open
|
2025-03-04T20:25:30Z
|
2025-03-05T11:58:33Z
|
https://github.com/mlflow/mlflow/issues/14843
|
[
"enhancement"
] |
f2cf2e10
| 2
|
wkentaro/labelme
|
computer-vision
| 615
|
--flags or --labels with a text file displays the name of the text file instead of the list in the text file
|
I am trying to open labelme with labels or flags. By using `labelme --flags flags.txt`, labelme should open and show the list from flags.txt under the flags, but instead it displays the name of the text file, which is 'flags.txt'. Below is an example:
I have a text file named flags.txt and it contains a list of fruits line after line.
When I run `labelme --flags flags.txt` the list of fruits should appear under the flags. But it shows 'flags.txt' instead.
I am not sure how to solve this.
|
open
|
2020-03-04T08:24:40Z
|
2024-12-30T12:47:58Z
|
https://github.com/wkentaro/labelme/issues/615
|
[
"issue::bug"
] |
Khuzai
| 4
|
errbotio/errbot
|
automation
| 873
|
Implement connection to Lets Chat server
|
closed
|
2016-10-15T08:41:00Z
|
2019-01-05T17:15:11Z
|
https://github.com/errbotio/errbot/issues/873
|
[
"backend: Common"
] |
ibiBgOR
| 3
|
|
widgetti/solara
|
fastapi
| 114
|
Solara as desktop app
|
Started https://github.com/widgetti/solara/discussions/100 and also asked about on Discord.
I'm opening this to collect interest.
What I can see happening is a pyinstaller + https://pypi.org/project/pywebview/ in CI to test if it is possible to make a desktop-like application and because in CI it will always be stable.
But users will still have to build the custom apps themselves if they need particular python packages.
|
open
|
2023-05-24T20:02:59Z
|
2023-05-25T12:07:34Z
|
https://github.com/widgetti/solara/issues/114
|
[] |
maartenbreddels
| 2
|
ionelmc/pytest-benchmark
|
pytest
| 66
|
asyncio support
|
are there any plans on adding support for benchmarking coroutines?
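As a point of reference, a hedged sketch of the workaround available today: drive the coroutine from a synchronous callable so the standard `benchmark` fixture can time it (the coroutine here is a stand-in).
```python
import asyncio

async def coro_under_test() -> None:
    await asyncio.sleep(0)  # placeholder for real async work

def test_coro_speed(benchmark):
    # Each timed call spins up a fresh event loop, which is included in the
    # measurement - native coroutine support could avoid that overhead.
    benchmark(lambda: asyncio.run(coro_under_test()))
```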
|
open
|
2017-01-23T22:27:31Z
|
2024-04-16T07:08:45Z
|
https://github.com/ionelmc/pytest-benchmark/issues/66
|
[] |
argaen
| 16
|
pytorch/pytorch
|
machine-learning
| 149,527
|
GHA request labels should represent independent fleet of runners
|
Currently we identified that a few runners are provided by multiple vendors/organizations and use the same label.
* linux.s390x
* linux.idc.xpu
* linux.rocm.gpu.2
* macos-m2-15 (and mac label standards)
We need to identify the labels that are reused across fleets and define a new standard that better reflect where the runners are hosted.
The reasoning for this is related to the SLO agreement and the monitoring tooling that is available to us is based on the label requested by jobs. AFAIK this limitation comes from GH side that only reports the requested label for a job in its job API.
We can automate the distribution of load across multiple organizations/providers/fleets by using experiment and runner determinator.
|
open
|
2025-03-19T16:34:42Z
|
2025-03-20T16:52:22Z
|
https://github.com/pytorch/pytorch/issues/149527
|
[
"module: ci",
"triaged",
"enhancement",
"needs design"
] |
jeanschmidt
| 3
|
apache/airflow
|
automation
| 47,582
|
Not able to authenticate FastAPI while requesting endpoints from the docs page
|
### Apache Airflow version
3.0.0b2
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Not able to authenticate FastAPI docs page (http://localhost:28080/docs)

### What you think should happen instead?
Users should be able to authenticate and execute APIs.
### How to reproduce
Open RestAPI docs page from AF3 UI and try to authorise by giving creds admin/admin
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
|
open
|
2025-03-10T17:48:20Z
|
2025-03-18T07:46:22Z
|
https://github.com/apache/airflow/issues/47582
|
[
"kind:bug",
"area:API",
"priority:low",
"area:auth",
"area:core",
"affected_version:3.0.0beta"
] |
atul-astronomer
| 8
|
supabase/supabase-py
|
flask
| 1,071
|
Supabase not working in gradio python space
|
Supabase is not working in a Gradio Python Space on Hugging Face.
Error log:
```
Exit code: 1. Reason: Traceback (most recent call last):
File "/home/user/app/app.py", line 9, in <module>
create_client(supabase_key=key, supabase_url=url)
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 335, in create_client
return SyncClient.create(
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 102, in create
client = cls(supabase_url, supabase_key, options)
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 58, in __init__
raise SupabaseException("Invalid URL")
supabase._sync.client.SupabaseException: Invalid URL
```
```
===== Application Startup at 2024-12-16 17:28:42 =====
Traceback (most recent call last):
File "/home/user/app/app.py", line 9, in <module>
create_client(supabase_key=key, supabase_url=url)
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 335, in create_client
return SyncClient.create(
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 102, in create
client = cls(supabase_url, supabase_key, options)
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 58, in __init__
raise SupabaseException("Invalid URL")
supabase._sync.client.SupabaseException: Invalid URL
Traceback (most recent call last):
File "/home/user/app/app.py", line 9, in <module>
create_client(supabase_key=key, supabase_url=url)
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 335, in create_client
return SyncClient.create(
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 102, in create
client = cls(supabase_url, supabase_key, options)
File "/usr/local/lib/python3.10/site-packages/supabase/_sync/client.py", line 58, in __init__
raise SupabaseException("Invalid URL")
supabase._sync.client.SupabaseException: Invalid URL
```
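A hedged first check, since "Invalid URL" typically means the URL variable came back empty or malformed (env var names assumed):
```python
import os
from supabase import create_client

url = os.environ.get("SUPABASE_URL")
key = os.environ.get("SUPABASE_KEY")
if not url or not url.startswith("https://"):
    raise RuntimeError(f"SUPABASE_URL is missing or malformed: {url!r}")
client = create_client(supabase_url=url, supabase_key=key)
```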
|
closed
|
2024-12-16T17:36:55Z
|
2025-03-18T18:57:49Z
|
https://github.com/supabase/supabase-py/issues/1071
|
[
"bug"
] |
UNNAMMEDUSER
| 6
|
FujiwaraChoki/MoneyPrinter
|
automation
| 62
|
[BUG] ClientConnectorCertificateError
|
**Describe the bug**
127.0.0.1 - - [07/Feb/2024 21:15:26] "POST /api/generate HTTP/1.1" 200 -
127.0.0.1 - - [07/Feb/2024 21:16:48] "OPTIONS /api/generate HTTP/1.1" 200 -
[+] Cleaned ../temp/ directory
[+] Cleaned ../subtitles/ directory
[Video to be generated]
Subject: test
```
FreeGpt: ClientConnectorCertificateError: Cannot connect to host s.aifree.site:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
You: ClientConnectorCertificateError: Cannot connect to host you.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
ChatgptDemoAi: ClientConnectorCertificateError: Cannot connect to host chat.chatgptdemo.ai:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
FakeGpt: ClientConnectorCertificateError: Cannot connect to host chat-shared2.zhile.io:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
ChatgptNext: ClientConnectorCertificateError: Cannot connect to host chat.fstha.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
Chatgpt4Online: URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)>
ChatgptDemo: ClientConnectorCertificateError: Cannot connect to host chat.chatgptdemo.net:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
Gpt6: ClientConnectorCertificateError: Cannot connect to host seahorse-app-d29hu.ondigitalocean.app:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1131)')]
GeekGpt: HTTPError: 521 Server Error: for url: https://ai.fakeopen.com/v1/chat/completions
```
**Expected behavior**
Expected an option to disable ssl or perhaps guidance on how to handle.
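A hedged first diagnostic I can run: check which CA bundle this interpreter actually resolves certificates against; on macOS python.org installs, the usual fix is running the bundled "Install Certificates.command" once.
```python
import ssl

# Shows the cafile/capath this Python uses for certificate verification.
print(ssl.get_default_verify_paths())
```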
**Desktop (please complete the following information):**
- OSX
- Python 3.8.10
- PIp 24
|
closed
|
2024-02-08T03:22:18Z
|
2024-02-09T17:36:01Z
|
https://github.com/FujiwaraChoki/MoneyPrinter/issues/62
|
[] |
thebetterjort
| 3
|
vaexio/vaex
|
data-science
| 2,126
|
[BUG-REPORT] setup uses hardcoded name for python
|
Setup.py currently uses
` os.system('python -m pip install --upgrade .')`
but this should be
` os.system(f'{sys.executable} -m pip install --upgrade .')`
Using just 'python' can create problems when several pythons are installed, e.g. python3.10 and python3.9 and the default system python is linked to python3.10. In this case one can start the install using `python3.9 -m pip install .`, but python3.10 will be called from within setup.py. Happy to supply a pull request for this (I only found 3 locations where this should be changed in the top setup.py file)
|
closed
|
2022-07-24T17:53:58Z
|
2022-08-08T22:44:58Z
|
https://github.com/vaexio/vaex/issues/2126
|
[] |
arunpersaud
| 1
|
quantumlib/Cirq
|
api
| 6,800
|
Replace check/tool scripts with a direct tool invocation as feasible
|
**Description of the issue**
Some of the `check/SomeTool` scripts such as [check/mypy](https://github.com/quantumlib/Cirq/blob/9e2162215fea2acadca7125114958370ef12892a/check/mypy) or [check/pylint](https://github.com/quantumlib/Cirq/blob/9e2162215fea2acadca7125114958370ef12892a/check/pylint) effectively run the tool with a custom configuration file. They can be eliminated if we move tool settings to pyproject.toml and just use standard tool execution instead. This will also make it easier to run SomeTool on a specific file or with extra options as there would be no need to figure if the check/ scripts passes extra options or not.
**Cirq version**
1.5.0.dev at 9e2162215fea2acadca7125114958370ef12892a
|
closed
|
2024-11-18T23:04:24Z
|
2025-02-06T15:36:14Z
|
https://github.com/quantumlib/Cirq/issues/6800
|
[
"no QC knowledge needed",
"kind/health",
"triage/accepted",
"area/dev",
"area/mypy",
"area/checks",
"area/pylint"
] |
pavoljuhas
| 0
|
praw-dev/praw
|
api
| 1,439
|
Implement endpoints for rule creation, deletion, and modification
|
**Describe the solution you'd like**
To programmatically define rules for a subreddit, IMO the clearest option is to implement `SubredditWidgetsModeration.add_rules_widget`. [Several related methods](https://praw.readthedocs.io/en/v7.0.0/code_overview/other/subredditwidgetsmoderation.html) are supported, including:
- add_button_widget
- add_calendar
- add_community_list
- add_custom_widget
- add_image_widget
- add_menu
- add_post_flair_widget
- add_text_area
**Describe alternatives you've considered**
I can use another widget to broadcast rules for a subreddit, but AFAIK only rules defined in the rules widget can be used to report specific rule violations.
|
closed
|
2020-04-26T15:38:27Z
|
2020-04-29T19:45:42Z
|
https://github.com/praw-dev/praw/issues/1439
|
[
"Feature"
] |
vogt4nick
| 3
|
lukas-blecher/LaTeX-OCR
|
pytorch
| 241
|
Latexocr generate completely off result
|
The pix2tex command line tool works really well for me, generating high-quality results most of the time. But the GUI launched via latexocr gives completely random results.
E.g. when pix2tex generates $E=m c^{2}$, the GUI generates $\scriptstyle{\hat{s e}}_{k\in G a l}^{a=1,10}$
Environment: macOS 12.6.3, Python 3.7.7, PyQt5
PS: can you add an option to the pix2tex command line to automatically wrap output in $$? Thanks!
|
open
|
2023-02-17T15:19:34Z
|
2024-03-21T20:14:40Z
|
https://github.com/lukas-blecher/LaTeX-OCR/issues/241
|
[
"gui",
"macOS"
] |
shrninepoints
| 4
|
abhiTronix/vidgear
|
dash
| 151
|
Why use vidgear? Can you give me a simple example compared to OpenCV?
|
I already read the introduction of vidgear, and I noticed the multi-threading and other features in it, but I can't understand the differences from the OpenCV lib.
If I have two cameras, how do I use vidgear? Is it anything like OpenCV?
Could you give me a simple multi-threaded example?
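A hedged sketch of what I have in mind for two cameras, based on vidgear's threaded CamGear API (device indices assumed):
```python
import cv2
from vidgear.gears import CamGear

# Each CamGear stream decodes frames on its own thread.
stream1 = CamGear(source=0).start()
stream2 = CamGear(source=1).start()
while True:
    frame1, frame2 = stream1.read(), stream2.read()
    if frame1 is None or frame2 is None:
        break
    cv2.imshow("cam1", frame1)
    cv2.imshow("cam2", frame2)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
stream1.stop()
stream2.stop()
cv2.destroyAllWindows()
```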
|
closed
|
2020-07-27T07:43:35Z
|
2020-07-27T08:44:05Z
|
https://github.com/abhiTronix/vidgear/issues/151
|
[
"INVALID :stop_sign:",
"QUESTION :question:",
"ANSWERED IN DOCS :book:"
] |
tms2003
| 1
|
laughingman7743/PyAthena
|
sqlalchemy
| 512
|
Okta authentication support
|
I'm curious if `pyathena` supports Okta authentication as described here:
https://docs.aws.amazon.com/athena/latest/ug/jdbc-v3-driver-okta-credentials.html
If not, do you have any plan to support this?
|
closed
|
2024-02-09T08:37:25Z
|
2024-02-09T13:09:37Z
|
https://github.com/laughingman7743/PyAthena/issues/512
|
[] |
jinserk
| 1
|
allenai/allennlp
|
nlp
| 5,171
|
No module named 'allennlp.data.tokenizers.word_splitter'
|
I'm using Python 3.7 in Google Colab. I installed allennlp==2.4.0 and allennlp-models.
When I run my code:
from allennlp.data.tokenizers.word_splitter import SpacyWordSplitter
I get this error:
ModuleNotFoundError: No module named 'allennlp.data.tokenizers.word_splitter'
help me please.
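A hedged note from my own digging: in allennlp 2.x the word_splitter module appears to have been removed, and the rough equivalent of SpacyWordSplitter seems to be the spaCy-based tokenizer:
```python
from allennlp.data.tokenizers import SpacyTokenizer  # assumed 2.x replacement

tokenizer = SpacyTokenizer()
print(tokenizer.tokenize("hello world"))
```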
|
closed
|
2021-04-30T17:11:44Z
|
2021-05-17T16:10:36Z
|
https://github.com/allenai/allennlp/issues/5171
|
[
"question",
"stale"
] |
mitra8814
| 2
|
codertimo/BERT-pytorch
|
nlp
| 86
|
In Next Sentence Prediction task,the original code may choose the same line when you try to use the negative sample
|
```python
def get_random_line(self):
    ...
    return self.lines[random.randrange(len(self.lines))][1]
    ...
```
it should be changed to the following:
```python
def get_random_line(self, index):
    ...
    tmp = random.randrange(len(self.lines))
    while tmp == index:
        tmp = random.randrange(len(self.lines))
    # return the re-drawn line, not a fresh random draw
    return self.lines[tmp][1]
    ...
```
|
open
|
2020-12-07T05:46:51Z
|
2020-12-07T05:55:49Z
|
https://github.com/codertimo/BERT-pytorch/issues/86
|
[] |
Emir-Liu
| 0
|
dask/dask
|
scikit-learn
| 11,566
|
Should `dask.persist` raise on non-persistable objects?
|
# Problem
Until [recently](https://github.com/dask/distributed/issues/8948), `dask.persist()` supported both persistable Dask collections and ordinary Python objects as inputs. The Dask collections would be persisted (as expected) while the Python objects would be handled transparently and returned as-is in the output.
To the best of my knowledge, this behavior is not documented anywhere, and there is only a single test for this (`test_distributed.py::test_persist_nested`).
To me, this behavior seems odd: I would argue that it's reasonable for a user to expect that `dask.persist(some_large_pandas_dataframe)` actually persists that large object on a `distributed` cluster to make it available. It would also hide user errors where the user intends to persist a collection but instead persists `Future`s, e.g., by calling `persist(df.compute())` instead of `persist(df)`.
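A hedged sketch of the (pre-change) behavior in question: the plain Python object passes through `dask.persist()` unchanged instead of raising.
```python
import dask
import dask.array as da

arr = da.ones((10,))
persisted_arr, passthrough = dask.persist(arr, {"not": "a collection"})
print(passthrough)  # returned as-is, silently
```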
# Possible solution
Instead of fixing this undocumented behavior, I suggest that `persist` should raise on inputs that are not persistable Dask collections. This clarifies the intended and supported behavior, limits the amount of hidden magic, and allows us to raise meaningful errors on anti-patterns like persisting `Future`s.
# Caveat
This would break current undocumented Dask behavior, and it's unclear how much users or downstream libraries rely on this.
|
open
|
2024-11-25T18:20:34Z
|
2025-02-24T02:01:27Z
|
https://github.com/dask/dask/issues/11566
|
[
"needs attention",
"needs triage"
] |
hendrikmakait
| 3
|
piskvorky/gensim
|
data-science
| 2,693
|
Number of Sentences in corpusfile don't match trained sentences.
|
#### Problem description
I'm training a fasttext model (CBOW) over a corpus, for instance `enwik8`.
The number of sentences trained on (or `example_count`, as referred to in the log methods) doesn't equal the number of sentences in the file (`wc -l` or `len(f.readlines())`, referred to as `expected_count` or `total_examples`).
Why is this happening? Also, in the method [here](https://github.com/RaRe-Technologies/gensim/blob/e391f0c25599c751e127dde925e062c7132e4737/gensim/models/base_any2vec.py#L1301), this warning has been suppressed for corpus mode.
### Versions
```python
Linux-4.4.0-1096-aws-x86_64-with-debian-stretch-sid
Python 3.7.5 (default, Oct 25 2019, 15:51:11)
[GCC 7.3.0]
NumPy 1.17.2
SciPy 1.3.1
gensim 3.8.1
FAST_VERSION 1
```
|
open
|
2019-12-02T10:11:29Z
|
2019-12-02T13:48:13Z
|
https://github.com/piskvorky/gensim/issues/2693
|
[] |
tshrjn
| 1
|
yeongpin/cursor-free-vip
|
automation
| 155
|
After resetting the machine code and switching to a new email, it still cannot be used
|
Using Cursor 0.46.8
|
closed
|
2025-03-07T07:42:10Z
|
2025-03-10T06:03:16Z
|
https://github.com/yeongpin/cursor-free-vip/issues/155
|
[] |
ycsxd
| 2
|
3b1b/manim
|
python
| 1,137
|
Add another good tutorial resource to README
|
I found this really extensive tutorial/example resource for manim and I think it should be listed in the Walkthough section of the README.md
[Elteoremadebeethoven/AnimationsWithManim](https://github.com/Elteoremadebeethoven/AnimationsWithManim)
|
closed
|
2020-06-16T04:14:34Z
|
2021-02-10T06:36:07Z
|
https://github.com/3b1b/manim/issues/1137
|
[] |
zyansheep
| 0
|
aiortc/aiortc
|
asyncio
| 725
|
janus example bogusly calls webcam.py
|
Recent git commit 713fb64 introduced option `--play-without-decoding` to example script `janus.py`, and added an example commandline to `examples/janus/README.rst`.
...but that commandline calls `webcam.py` (not `janus.py`).
|
closed
|
2022-05-26T12:58:14Z
|
2022-06-16T14:40:35Z
|
https://github.com/aiortc/aiortc/issues/725
|
[] |
jonassmedegaard
| 2
|
MaxHalford/prince
|
scikit-learn
| 56
|
Is there a way to transform new data after fitting with FAMD?
|
Hello,
I just discovered this package and it seems very interesting. I was wondering is there a way to apply the transform function to new unseen data after calling FAMD fit? Analogous to how PCA works in sklearn.
When I try to do this I get an error:
```
...(self, X)
    102         X = self.scaler_.transform(X)
    103
--> 104         return pd.DataFrame(data=X.dot(self.V_.T), index=index)
    105
    106     def row_standard_coordinates(self, X):

ValueError: shapes (2,20) and (49,2) not aligned: 20 (dim 1) != 49 (dim 0)
```
Basically, it looks like it doesn't account for the new data having a different number of "training examples" than when the fit occurred.
Cheers,
Kuhan
|
closed
|
2019-03-14T16:02:29Z
|
2021-07-27T15:08:45Z
|
https://github.com/MaxHalford/prince/issues/56
|
[] |
kuhanw
| 12
|
ShishirPatil/gorilla
|
api
| 77
|
[feature] Run gorilla locally without GPUs 🦍
|
Today, Gorilla end-points run on UC Berkeley hosted servers 🐻 When you try our colab, or our chat completion API, or the CLI tool, it hits our GPUs for inference. A popular ask among our users is to run Gorilla locally on Macbooks/Linux/WSL.
**Describe the solution you'd like:**
Have the model(s) running locally on MPS/CPU/GPU and listening to a port. All the current gorilla end-points can then just hit `localhost` to get the response to any given prompt.
**Additional context:**
Here is an application that would immediately use it: https://github.com/gorilla-llm/gorilla-cli
Given, we have LLaMA models, these should be plug-and-play: [ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) and [karpathy/llama2.c](https://github.com/karpathy/llama2.c)
Also relevant: https://huggingface.co/TheBloke/gorilla-7B-GPTQ
Update 1: If you happen to have an RTX, or V100 or A100 or H100, you can use Gorilla today without any latency hit. The goal of this enhancement is to help those who may not have access to the latest and greatest GPUs.
|
closed
|
2023-08-01T09:12:05Z
|
2024-02-04T08:34:53Z
|
https://github.com/ShishirPatil/gorilla/issues/77
|
[
"enhancement"
] |
ShishirPatil
| 11
|
skforecast/skforecast
|
scikit-learn
| 165
|
Closed Issue
|
closed
|
2022-06-13T03:15:41Z
|
2022-06-20T09:17:00Z
|
https://github.com/skforecast/skforecast/issues/165
|
[
"invalid"
] |
hdattada
| 1
|
|
axnsan12/drf-yasg
|
rest-api
| 813
|
`ImportError: Module "drf_yasg.generators" does not define a "OpenAPISchemaGenerator" attribute/class` after upgrading DRF==3.14.0
|
# Bug Report
## Description
After update Django Rest Framework to 3.14.0, Django did not start, because drf-yasg raise exception `ImportError: Could not import 'drf_yasg.generators.OpenAPISchemaGenerator' for API setting 'DEFAULT_GENERATOR_CLASS'. ` Reverting to drf==3.13.1 resolves the issue.
## Is this a regression?
Not sure, there was a similar issue here: https://github.com/axnsan12/drf-yasg/issues/641
## Minimal Reproduction
```python
# settings.py
INSTALLED_APPS = [
    ...
    "drf_yasg",
]

SWAGGER_SETTINGS = {
    "SECURITY_DEFINITIONS": {
        "Bearer": {
            "type": "apiKey",
            "in": "header",
            "name": "Authorization",
            "template": "Bearer {apiKey}",
        },
    },
    "DEFAULT_FIELD_INSPECTORS": [
        "drf_yasg.inspectors.CamelCaseJSONFilter",
        "drf_yasg.inspectors.InlineSerializerInspector",
        "drf_yasg.inspectors.RelatedFieldInspector",
        "drf_yasg.inspectors.ChoiceFieldInspector",
        "drf_yasg.inspectors.FileFieldInspector",
        "drf_yasg.inspectors.DictFieldInspector",
        "drf_yasg.inspectors.SimpleFieldInspector",
        "drf_yasg.inspectors.StringDefaultFieldInspector",
    ],
}
```
```python
# urls.py
from drf_yasg import openapi
from drf_yasg.views import get_schema_view
from rest_framework import permissions
schema_view = get_schema_view(
    openapi.Info(
        title="My API",
        default_version="v1",
        description="My API",
        terms_of_service="https://www.google.com/policies/terms/",
        contact=openapi.Contact(email="leohakim@gmail.com"),
        license=openapi.License(name="BSD License"),
    ),
    public=True,
    permission_classes=(permissions.AllowAny,),
)
```
## Stack trace / Error message
```code
File "/usr/local/lib/python3.9/site-packages/drf_yasg/views.py", line 67, in get_schema_view
_generator_class = generator_class or swagger_settings.DEFAULT_GENERATOR_CLASS
File "/usr/local/lib/python3.9/site-packages/drf_yasg/app_settings.py", line 122, in __getattr__
val = perform_import(val, attr)
File "/usr/local/lib/python3.9/site-packages/rest_framework/settings.py", line 166, in perform_import
return import_from_string(val, setting_name)
File "/usr/local/lib/python3.9/site-packages/rest_framework/settings.py", line 180, in import_from_string
raise ImportError(msg)
ImportError: Could not import 'drf_yasg.generators.OpenAPISchemaGenerator' for API setting 'DEFAULT_GENERATOR_CLASS'. ImportError: Module "drf_yasg.generators" does not define a "OpenAPISchemaGenerator" attribute/class.
```
## Your Environment
```code
drf-yasg=1.21.3
djangorestframework=3.14.0
django=4.1.1
```
|
closed
|
2022-09-23T11:50:21Z
|
2023-10-06T09:52:16Z
|
https://github.com/axnsan12/drf-yasg/issues/813
|
[] |
chrismaille
| 0
|
noirbizarre/flask-restplus
|
flask
| 541
|
Can it declare more than one model?
|
I declare 2 models like this
```
fields = api.model('MyModel', {
    'id_siswa': fields.String(),
    'nama_siswa': fields.String(),
    'kelas': fields.String(),
    'hasil': fields.List(fields.Integer),
    'id_penilai': fields.String(),
    'nama_penilai': fields.String(),
})

indicators = api.model('ModelIndikator', {
    'sikap_spritual': fields.String(),
    'sikap_sosial': fields.String(),
})
```
with `@api.doc()`,
but I get this error:
`Traceback (most recent call last): File "app.py", line 40, in <module> 'sikap_spritual': fields.String(), AttributeError: 'Model' object has no attribute 'String'`
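While writing this up, I noticed a likely cause: the first `api.model()` result is assigned to a variable named `fields`, which shadows the imported flask_restplus `fields` module, so `fields.String` then resolves against a `Model` object. A hedged sketch of the rename that avoids it:
```python
# Rename the first model's variable so `fields` keeps pointing at the module.
my_model = api.model('MyModel', {
    'id_siswa': fields.String(),
})
indicators = api.model('ModelIndikator', {
    'sikap_spritual': fields.String(),  # works: `fields` is still the module
})
```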
|
closed
|
2018-10-18T09:53:19Z
|
2018-10-19T03:24:49Z
|
https://github.com/noirbizarre/flask-restplus/issues/541
|
[] |
kafey
| 2
|
plotly/plotly.py
|
plotly
| 4,118
|
Gibberish / malformed / strange negative y-axis values
|
```python
#!/usr/bin/env python3
"""Test negative bug."""
from plotly.subplots import make_subplots
fig = make_subplots(rows=1, cols=1)
fig.add_bar(x=[1, 2, 3], y=[-4, 5, -6], row=1, col=1)
fig.update_layout(height=400, width=500, showlegend=True)
with open('web/negative.html', 'w', encoding='utf-8') as index_file:
index_file.write(fig.to_html(full_html=False))
```
This results in:

When I change to `full_html=True`, it works properly:

This seems to be a recent bug because it worked fine before I upgraded Plotly. I’m using this version from conda-forge:
`plotly 5.13.1 pyhd8ed1ab_0 conda-forge`
More details on this issue:
https://community.plotly.com/t/gibberish-malformed-negative-y-axis-values-in-plotly-charts-in-python/71924/1
|
closed
|
2023-03-21T07:13:05Z
|
2024-03-27T14:40:24Z
|
https://github.com/plotly/plotly.py/issues/4118
|
[] |
valankar
| 6
|
fbdesignpro/sweetviz
|
data-visualization
| 17
|
Larger feature visualization on right is hidden
|
I appreciate the effort put into this library and I see the potential! I tried it out at work, and when the HTML displayed, the charts for the features were partially hidden. I thought I could scroll over to see them, but no horizontal scroll bar was available.
|
closed
|
2020-07-07T20:04:07Z
|
2020-07-24T02:35:11Z
|
https://github.com/fbdesignpro/sweetviz/issues/17
|
[
"bug"
] |
JohnDeJesus22
| 6
|