Dataset schema (GitHub issues):
- repo_name: string, 9–75 chars
- topic: string, 30 classes
- issue_number: int64, 1–203k
- title: string, 1–976 chars
- body: string, 0–254k chars
- state: string, 2 classes
- created_at: string, 20 chars
- updated_at: string, 20 chars
- url: string, 38–105 chars
- labels: list, 0–9 items
- user_login: string, 1–39 chars
- comments_count: int64, 0–452
---
repo_name: pyppeteer/pyppeteer · topic: automation · issue_number: 230
title: Because of the time limit of IP proxies, how can we add a different IP proxy to each page or request? Thank you.
body: (empty)
state: open · created_at: 2021-02-26T07:16:59Z · updated_at: 2021-07-20T06:01:08Z
url: https://github.com/pyppeteer/pyppeteer/issues/230
labels: [] · user_login: fendouwhy · comments_count: 3
---
repo_name: comfyanonymous/ComfyUI · topic: pytorch · issue_number: 7,205
title: Where is the problem?
body:
### Feature Idea
Please advise.
### Existing Solutions
_No response_
### Other
_No response_
state: closed · created_at: 2025-03-12T11:25:19Z · updated_at: 2025-03-12T18:10:31Z
url: https://github.com/comfyanonymous/ComfyUI/issues/7205
labels: ["Feature"] · user_login: yema99 · comments_count: 0
---
repo_name: tensorpack/tensorpack · topic: tensorflow · issue_number: 1,367
title: Duplicate
body:
**Greetings,**
First I would like to thank all creators of this framework, it is an awesome tool and was a huge factor in the success of my deep learning project.
I've trained a custom Model, it is based on the ResNet50 backbone and does not use a pretrained model. I've evaluated the trained model on my test dataset and it has an overall accuracy mAP (0.7) of 90%. I have used the Faster-RCNN Example with MASK_CNN and FPN set to TRUE.
I did not change any other parameters (except the learning rate and LR_schedule).
Now I would like to export the model with Tensorflow-Serving in order to get a .pb File that I can use in further development.
I've read the documentation and applied the instructions given in:
https://tensorpack.readthedocs.io/tutorial/inference.html.
However, I'm having trouble finding the input_names and output_names arguments for the export_serving(model_path) function in **export-model.py.**
I converted my model to .npz format with the provided script **dump-model-params.py**.
Then I iterated through the dict and printed all of its variables, aiming to find the input and output layers. I tried to address certain elements of the dict, for example output_names = ['fastrcnn/outputs/box/W:0'], but I always get the **error:** 'fastrcnn/outputs/box/W:0' **refers to a Tensor which does not exist**.
I know that this issue is related to inference and that tensorpack only provides support for training a model, but I would be really glad if someone could tell me what I'm doing wrong.
**When I use the default values** ('input_img' and 'prediction_img') **for my checkpoint**, the Export starts but then throws a "ValueError: no variables to save" exception.
Before that, I get two Warnings:
**[WRN]:** The following variables are in the graph, but not found in the dict: filter
**[WRN]:** The following variables are in the dict, but not found in the graph: ---- This contains all variables that I posted in the list below -----
**When I use the default values for my Model.npz file**, the Export starts and I get the same two Warnings:
[sessinit.py:223] Variables to restore from dict:
**[sessinit.py:87] WRN:** The following variables are in the graph, but not found in the checkpoint: filter
**[sessinit.py:87] WRN:** The following variables are in the checkpoint, but not found in the graph:
----> This contains the whole content of the npz dict at the end of this comment <-----
[sessinit.py:236] Restoring 0 variables from dict
The script then throws an exception in session.py[1335]:
attempted to use an uninitialized value filter
The tensor name of the uninitialized value filter which the script is trying to access is: "edge_10_filter"
**My code to read the dict:**
```python
import numpy as np
import os.path

model_dir = 'my_model_path'
fname = os.path.join(model_dir, 'MyModel.npz')
with np.load(fname) as npz:
    data = dict(npz)

for k in data:
    print("{}, shape: {}".format(k, data[k].shape))
```
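As an aside, most entries in a dump like the one below are training-only state rather than model weights: names ending in /Momentum:0 or /AccumGrad:0 are optimizer slots, and names under EMA/ are moving-average summaries, so none of them are usable as export outputs. A minimal sketch of filtering them out, using a small hypothetical dict in place of a real checkpoint:

```python
# Sketch: separate actual model weights from training-only entries
# (optimizer slots and EMA statistics) in a params dict such as the
# one produced by dump-model-params.py. The dict below is a small
# hypothetical stand-in for a real checkpoint.

def split_weights(params):
    """Return (weights, training_only) from a name -> value mapping."""
    training_suffixes = ("/Momentum:0", "/AccumGrad:0")
    weights, training_only = {}, {}
    for name, value in params.items():
        if name.startswith("EMA/") or name.endswith(training_suffixes):
            training_only[name] = value
        else:
            weights[name] = value
    return weights, training_only

mock_params = {
    "fastrcnn/outputs/box/W:0": "weight",
    "fastrcnn/outputs/box/W/AccumGrad:0": "slot",
    "conv0/W/Momentum:0": "slot",
    "EMA/total_cost:0": "summary",
}
weights, training_only = split_weights(mock_params)
print(sorted(weights))     # ['fastrcnn/outputs/box/W:0']
print(len(training_only))  # 3
```

Note that even the remaining weight names (e.g. fastrcnn/outputs/box/W:0) refer to variables, not graph outputs; output_names for export must name tensors produced by the model's forward graph.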
**The content of the dict is:**
> EMA/maskrcnn_loss/accuracy:0, shape: ()
> group1/block3/conv1/W:0, shape: (1, 1, 512, 128)
> conv0/gn/beta:0, shape: (64,)
> group0/block0/conv1/W/AccumGrad:0, shape: (1, 1, 64, 64)
> EMA/fastrcnn_losses/label_metrics/fg_accuracy/biased:0, shape: ()
> fpn/gn_c4/gamma/Momentum:0, shape: (256,)
> EMA/fastrcnn_losses/label_loss/biased:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/precision_th0.5/biased:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/precision_th0.1/local_step:0, shape: ()
> EMA/maskrcnn_loss/maskrcnn_loss:0, shape: ()
> group2/block1/conv1/W/Momentum:0, shape: (1, 1, 1024, 256)
> EMA/rpn_losses/level6/label_metrics/recall_th0.5/biased:0, shape: ()
> group3/block2/conv1/W/Momentum:0, shape: (1, 1, 2048, 512)
> EMA/QueueInput/queue_size:0, shape: ()
> EMA/maskrcnn_loss/maskrcnn_loss/biased:0, shape: ()
> group0/block2/conv2/gn/gamma/AccumGrad:0, shape: (64,)
> EMA/QueueInput/queue_size/biased:0, shape: ()
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level3/biased:0, shape: ()
> EMA/maskrcnn_loss/pos_accuracy:0, shape: ()
> group0/block0/convshortcut/W/AccumGrad:0, shape: (1, 1, 64, 256)
> fastrcnn/outputs/box/W:0, shape: (1024, 16)
> EMA/fastrcnn_losses/box_loss/biased:0, shape: ()
> EMA/maskrcnn_loss/maskrcnn_loss/local_step:0, shape: ()
> fpn/gn_c5/beta/AccumGrad:0, shape: (256,)
> group0/block1/conv3/gn/gamma:0, shape: (256,)
> EMA/QueueInput/queue_size/local_step:0, shape: ()
> fpn/gn_p4/beta:0, shape: (256,)
> group1/block2/conv3/gn/gamma/AccumGrad:0, shape: (512,)
> EMA/rpn_losses/level4/box_loss/biased:0, shape: ()
> EMA/fastrcnn_losses/label_metrics/fg_accuracy/local_step:0, shape: ()
> group0/block1/conv3/gn/gamma/Momentum:0, shape: (256,)
> EMA/fastrcnn_losses/label_loss/local_step:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/precision_th0.5/local_step:0, shape: ()
> group0/block2/conv2/gn/beta/AccumGrad:0, shape: (64,)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level3:0, shape: ()
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level2/biased:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/precision_th0.5:0, shape: ()
> group2/block0/conv1/gn/gamma/AccumGrad:0, shape: (256,)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level5/local_step:0, shape: ()
> EMA/fastrcnn_losses/box_loss:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/recall_th0.1/local_step:0, shape: ()
> group3/block1/conv2/W:0, shape: (3, 3, 512, 512)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level3/local_step:0, shape: ()
> rpn/conv0/b/AccumGrad:0, shape: (256,)
> group0/block0/convshortcut/gn/gamma:0, shape: (256,)
> EMA/maskrcnn_loss/fg_pixel_ratio:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/precision_th0.5/local_step:0, shape: ()
> EMA/fastrcnn_losses/box_loss/local_step:0, shape: ()
> group2/block0/conv3/gn/gamma/Momentum:0, shape: (1024,)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level2/biased:0, shape: ()
> group1/block3/conv2/gn/gamma/AccumGrad:0, shape: (128,)
> group1/block0/conv2/W/Momentum:0, shape: (3, 3, 128, 128)
> EMA/mean_gt_box_area/local_step:0, shape: ()
> EMA/rpn_losses/level4/label_metrics/recall_th0.2/local_step:0, shape: ()
> group2/block0/convshortcut/gn/gamma/AccumGrad:0, shape: (1024,)
> EMA/fastrcnn_losses/label_metrics/fg_accuracy:0, shape: ()
> group2/block2/conv3/gn/beta:0, shape: (1024,)
> EMA/fastrcnn_losses/label_loss:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/precision_th0.5:0, shape: ()
> group1/block1/conv2/W:0, shape: (3, 3, 128, 128)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level5/local_step:0, shape: ()
> EMA/fastrcnn_losses/label_metrics/accuracy:0, shape: ()
> EMA/fastrcnn_losses/label_metrics/false_negative/local_step:0, shape: ()
> fastrcnn/conv2/b/AccumGrad:0, shape: (256,)
> EMA/fastrcnn_losses/label_metrics/accuracy/biased:0, shape: ()
> group1/block0/conv2/W/AccumGrad:0, shape: (3, 3, 128, 128)
> EMA/fastrcnn_losses/label_metrics/accuracy/local_step:0, shape: ()
> EMA/sample_fast_rcnn_targets/proposal_metrics/best_iou_per_gt/biased:0, shape: ()
> group2/block0/convshortcut/W:0, shape: (1, 1, 512, 1024)
> EMA/fastrcnn_losses/label_metrics/false_negative:0, shape: ()
> group2/block0/conv2/W/Momentum:0, shape: (3, 3, 256, 256)
> EMA/rpn_losses/level5/label_metrics/precision_th0.1/local_step:0, shape: ()
> maskrcnn/deconv/W:0, shape: (2, 2, 256, 256)
> EMA/fastrcnn_losses/label_metrics/false_negative/biased:0, shape: ()
> EMA/fastrcnn_losses/num_fg_label:0, shape: ()
> group2/block0/conv2/gn/gamma:0, shape: (256,)
> EMA/rpn_losses/level2/label_metrics/recall_th0.2/local_step:0, shape: ()
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level3:0, shape: ()
> EMA/fastrcnn_losses/num_fg_label/biased:0, shape: ()
> EMA/fastrcnn_losses/num_fg_label/local_step:0, shape: ()
> group3/block2/conv2/gn/beta:0, shape: (512,)
> EMA/maskrcnn_loss/accuracy/biased:0, shape: ()
> EMA/maskrcnn_loss/accuracy/local_step:0, shape: ()
> EMA/maskrcnn_loss/fg_pixel_ratio/biased:0, shape: ()
> maskrcnn/fcn1/b/AccumGrad:0, shape: (256,)
> EMA/maskrcnn_loss/pos_accuracy/local_step:0, shape: ()
> group0/block0/conv2/W/Momentum:0, shape: (3, 3, 64, 64)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level4:0, shape: ()
> group2/block3/conv3/gn/beta/AccumGrad:0, shape: (1024,)
> EMA/maskrcnn_loss/fg_pixel_ratio/local_step:0, shape: ()
> group2/block3/conv3/gn/beta:0, shape: (1024,)
> EMA/maskrcnn_loss/pos_accuracy/biased:0, shape: ()
> EMA/rpn_losses/box_loss/local_step:0, shape: ()
> EMA/rpn_losses/level3/box_loss/local_step:0, shape: ()
> group2/block1/conv2/W:0, shape: (3, 3, 256, 256)
> group1/block1/conv3/W/AccumGrad:0, shape: (1, 1, 128, 512)
> EMA/mean_gt_box_area:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/recall_th0.5/biased:0, shape: ()
> EMA/mean_gt_box_area/biased:0, shape: ()
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level4/local_step:0, shape: ()
> fpn/gn_p5/beta/AccumGrad:0, shape: (256,)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level3/biased:0, shape: ()
> group1/block3/conv3/W:0, shape: (1, 1, 128, 512)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level2:0, shape: ()
> group0/block2/conv1/gn/gamma:0, shape: (64,)
> EMA/rpn_losses/level2/label_metrics/precision_th0.2/local_step:0, shape: ()
> group2/block1/conv3/gn/gamma/AccumGrad:0, shape: (1024,)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level2/local_step:0, shape: ()
> EMA/rpn_losses/level5/num_pos_anchor/local_step:0, shape: ()
> fastrcnn/gn1/beta/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/label_loss:0, shape: ()
> EMA/rpn_losses/level3/label_loss:0, shape: ()
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level5/biased:0, shape: ()
> group1/block0/conv1/gn/beta/AccumGrad:0, shape: (128,)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level3/local_step:0, shape: ()
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level4/biased:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/precision_th0.5/local_step:0, shape: ()
> group2/block2/conv2/gn/gamma:0, shape: (256,)
> group1/block0/convshortcut/W/Momentum:0, shape: (1, 1, 256, 512)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level4/local_step:0, shape: ()
> group3/block1/conv2/gn/gamma/AccumGrad:0, shape: (512,)
> group2/block1/conv2/W/Momentum:0, shape: (3, 3, 256, 256)
> EMA/multilevel_roi_align/fpn_map_rois_to_levels/num_roi_level5:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/precision_th0.2/biased:0, shape: ()
> fastrcnn/gn0/beta:0, shape: (256,)
> group2/block0/conv2/gn/beta/Momentum:0, shape: (256,)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level2:0, shape: ()
> group2/block1/conv1/gn/beta:0, shape: (256,)
> fpn/lateral_1x1_c3/b/AccumGrad:0, shape: (256,)
> group3/block2/conv1/gn/beta:0, shape: (512,)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level2/local_step:0, shape: ()
> group2/block1/conv2/gn/gamma/Momentum:0, shape: (256,)
> EMA/rpn_losses/level2/label_metrics/recall_th0.2/biased:0, shape: ()
> group3/block2/conv2/gn/gamma/Momentum:0, shape: (512,)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level4:0, shape: ()
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level5:0, shape: ()
> rpn/class/W/AccumGrad:0, shape: (1, 1, 256, 3)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level4/biased:0, shape: ()
> group3/block1/conv3/gn/gamma/Momentum:0, shape: (2048,)
> EMA/multilevel_roi_align_mask/fpn_map_rois_to_levels/num_roi_level5/biased:0, shape: ()
> fpn/gn_c4/beta/Momentum:0, shape: (256,)
> fpn/gn_c4/gamma:0, shape: (256,)
> EMA/rpn_losses/box_loss:0, shape: ()
> EMA/rpn_losses/level3/box_loss:0, shape: ()
> EMA/rpn_losses/box_loss/biased:0, shape: ()
> EMA/rpn_losses/level3/box_loss/biased:0, shape: ()
> fpn/lateral_1x1_c5/b/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/level4/box_loss:0, shape: ()
> group2/block2/conv3/W/Momentum:0, shape: (1, 1, 256, 1024)
> group0/block2/conv2/gn/gamma:0, shape: (64,)
> group3/block1/conv3/W/Momentum:0, shape: (1, 1, 512, 2048)
> EMA/rpn_losses/label_loss/biased:0, shape: ()
> EMA/rpn_losses/level3/label_loss/biased:0, shape: ()
> group3/block0/conv1/W:0, shape: (1, 1, 1024, 512)
> group1/block3/conv1/W/Momentum:0, shape: (1, 1, 512, 128)
> EMA/rpn_losses/label_loss/local_step:0, shape: ()
> EMA/rpn_losses/level3/label_loss/local_step:0, shape: ()
> EMA/rpn_losses/level3/num_pos_anchor:0, shape: ()
> conv0/gn/beta/Momentum:0, shape: (64,)
> EMA/rpn_losses/level2/box_loss:0, shape: ()
> group2/block2/conv2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> EMA/rpn_losses/level2/box_loss/biased:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/precision_th0.2:0, shape: ()
> EMA/rpn_losses/level2/box_loss/local_step:0, shape: ()
> group2/block0/convshortcut/W/AccumGrad:0, shape: (1, 1, 512, 1024)
> EMA/rpn_losses/level2/label_loss:0, shape: ()
> EMA/rpn_losses/level2/label_loss/biased:0, shape: ()
> group0/block1/conv1/W/Momentum:0, shape: (1, 1, 256, 64)
> group1/block2/conv3/W/Momentum:0, shape: (1, 1, 128, 512)
> EMA/rpn_losses/level2/label_loss/local_step:0, shape: ()
> fpn/posthoc_3x3_p3/b:0, shape: (256,)
> EMA/rpn_losses/level2/label_metrics/precision_th0.1:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/precision_th0.1/biased:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/precision_th0.1/local_step:0, shape: ()
> fastrcnn/gn1/gamma/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/level2/label_metrics/precision_th0.2:0, shape: ()
> EMA/rpn_losses/level6/box_loss/local_step:0, shape: ()
> fastrcnn/gn0/gamma/AccumGrad:0, shape: (256,)
> group1/block1/conv3/gn/beta/Momentum:0, shape: (512,)
> EMA/rpn_losses/level2/label_metrics/precision_th0.2/biased:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/precision_th0.5/biased:0, shape: ()
> fpn/gn_c2/gamma/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/level2/label_metrics/recall_th0.1:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/precision_th0.2/local_step:0, shape: ()
> maskrcnn/fcn3/b:0, shape: (256,)
> EMA/rpn_losses/level2/label_metrics/recall_th0.1/biased:0, shape: ()
> EMA/rpn_losses/level2/label_metrics/recall_th0.2:0, shape: ()
> EMA/rpn_losses/level4/label_metrics/recall_th0.5/biased:0, shape: ()
> fpn/posthoc_3x3_p2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block0/conv1/gn/beta/AccumGrad:0, shape: (256,)
> group1/block0/conv3/W:0, shape: (1, 1, 128, 512)
> EMA/rpn_losses/level2/label_metrics/recall_th0.5:0, shape: ()
> group1/block1/conv2/gn/beta:0, shape: (128,)
> EMA/rpn_losses/level2/label_metrics/recall_th0.5/local_step:0, shape: ()
> EMA/rpn_losses/level2/num_pos_anchor:0, shape: ()
> group0/block0/conv3/gn/beta/Momentum:0, shape: (256,)
> group2/block3/conv3/gn/gamma/AccumGrad:0, shape: (1024,)
> EMA/rpn_losses/level2/num_pos_anchor/biased:0, shape: ()
> EMA/rpn_losses/level2/num_pos_anchor/local_step:0, shape: ()
> conv0/W:0, shape: (7, 7, 3, 64)
> EMA/rpn_losses/level2/num_valid_anchor:0, shape: ()
> EMA/rpn_losses/level6/num_pos_anchor:0, shape: ()
> group0/block1/conv2/gn/beta/Momentum:0, shape: (64,)
> EMA/rpn_losses/level2/num_valid_anchor/biased:0, shape: ()
> EMA/rpn_losses/level6/num_pos_anchor/biased:0, shape: ()
> group0/block0/conv3/gn/beta/AccumGrad:0, shape: (256,)
> group0/block0/conv2/gn/gamma:0, shape: (64,)
> maskrcnn/conv/W/AccumGrad:0, shape: (1, 1, 256, 3)
> group3/block2/conv3/gn/gamma:0, shape: (2048,)
> EMA/rpn_losses/level2/num_valid_anchor/local_step:0, shape: ()
> EMA/rpn_losses/level6/num_pos_anchor/local_step:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/precision_th0.1:0, shape: ()
> group2/block3/conv2/gn/gamma:0, shape: (256,)
> fpn/posthoc_3x3_p5/W/Momentum:0, shape: (3, 3, 256, 256)
> group3/block0/convshortcut/gn/beta/Momentum:0, shape: (2048,)
> group3/block0/conv2/gn/gamma:0, shape: (512,)
> EMA/rpn_losses/level3/label_metrics/precision_th0.1/biased:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/precision_th0.2:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/precision_th0.2/biased:0, shape: ()
> group1/block0/conv3/gn/gamma:0, shape: (512,)
> EMA/rpn_losses/level3/label_metrics/precision_th0.5:0, shape: ()
> fpn/gn_c5/gamma/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/level3/label_metrics/precision_th0.5/biased:0, shape: ()
> group2/block4/conv3/W:0, shape: (1, 1, 256, 1024)
> EMA/rpn_losses/level3/label_metrics/precision_th0.5/local_step:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/recall_th0.1:0, shape: ()
> fastrcnn/conv1/W:0, shape: (3, 3, 256, 256)
> EMA/rpn_losses/level3/label_metrics/recall_th0.1/biased:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/recall_th0.1/local_step:0, shape: ()
> group0/block1/conv1/W:0, shape: (1, 1, 256, 64)
> group2/block4/conv3/gn/beta/AccumGrad:0, shape: (1024,)
> EMA/rpn_losses/level3/label_metrics/recall_th0.1/local_step:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/recall_th0.2:0, shape: ()
> fastrcnn/gn2/beta:0, shape: (256,)
> fpn/posthoc_3x3_p4/b:0, shape: (256,)
> fastrcnn/conv3/W/Momentum:0, shape: (3, 3, 256, 256)
> EMA/rpn_losses/level3/label_metrics/recall_th0.2/biased:0, shape: ()
> group0/block0/convshortcut/W/Momentum:0, shape: (1, 1, 64, 256)
> EMA/rpn_losses/level3/label_metrics/recall_th0.2/local_step:0, shape: ()
> EMA/total_cost/local_step:0, shape: ()
> EMA/rpn_losses/level3/label_metrics/recall_th0.5:0, shape: ()
> group0/block0/conv1/gn/gamma/Momentum:0, shape: (64,)
> EMA/rpn_losses/level3/label_metrics/recall_th0.5/biased:0, shape: ()
> group0/block1/conv3/W/AccumGrad:0, shape: (1, 1, 64, 256)
> group1/block0/convshortcut/gn/beta/AccumGrad:0, shape: (512,)
> EMA/rpn_losses/level3/label_metrics/recall_th0.5/local_step:0, shape: ()
> EMA/rpn_losses/level4/num_valid_anchor/biased:0, shape: ()
> group0/block0/conv1/gn/gamma/AccumGrad:0, shape: (64,)
> rpn/conv0/b/Momentum:0, shape: (256,)
> EMA/rpn_losses/level3/num_pos_anchor/biased:0, shape: ()
> group2/block5/conv3/gn/beta:0, shape: (1024,)
> EMA/rpn_losses/level3/num_pos_anchor/local_step:0, shape: ()
> fpn/gn_c5/beta/Momentum:0, shape: (256,)
> EMA/rpn_losses/level5/num_pos_anchor/biased:0, shape: ()
> group3/block1/conv1/gn/gamma/Momentum:0, shape: (512,)
> group1/block3/conv3/gn/beta:0, shape: (512,)
> EMA/rpn_losses/level3/num_valid_anchor:0, shape: ()
> fpn/posthoc_3x3_p5/b/Momentum:0, shape: (256,)
> group2/block2/conv1/gn/gamma/Momentum:0, shape: (256,)
> fastrcnn/outputs/box/W/AccumGrad:0, shape: (1024, 16)
> group1/block0/conv1/W/Momentum:0, shape: (1, 1, 256, 128)
> maskrcnn/gn2/gamma:0, shape: (256,)
> EMA/rpn_losses/level3/num_valid_anchor/biased:0, shape: ()
> maskrcnn/fcn0/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group3/block0/conv1/W/Momentum:0, shape: (1, 1, 1024, 512)
> group1/block2/conv2/W:0, shape: (3, 3, 128, 128)
> EMA/rpn_losses/level3/num_valid_anchor/local_step:0, shape: ()
> group2/block0/conv3/gn/beta:0, shape: (1024,)
> EMA/rpn_losses/level4/box_loss/local_step:0, shape: ()
> EMA/rpn_losses/level4/label_loss:0, shape: ()
> fpn/gn_c2/beta/Momentum:0, shape: (256,)
> group2/block4/conv2/gn/gamma/Momentum:0, shape: (256,)
> group0/block0/conv1/gn/beta/Momentum:0, shape: (64,)
> EMA/rpn_losses/level4/label_loss/biased:0, shape: ()
> EMA/rpn_losses/level4/label_loss/local_step:0, shape: ()
> group1/block3/conv1/gn/gamma:0, shape: (128,)
> EMA/rpn_losses/level4/label_metrics/precision_th0.1:0, shape: ()
> fpn/gn_p5/gamma/Momentum:0, shape: (256,)
> EMA/rpn_losses/level4/label_metrics/precision_th0.1/biased:0, shape: ()
> fpn/posthoc_3x3_p5/W:0, shape: (3, 3, 256, 256)
> group3/block0/convshortcut/gn/beta:0, shape: (2048,)
> EMA/rpn_losses/level4/label_metrics/precision_th0.1/local_step:0, shape: ()
> EMA/rpn_losses/level4/label_metrics/precision_th0.2:0, shape: ()
> group0/block1/conv3/W:0, shape: (1, 1, 64, 256)
> EMA/rpn_losses/level4/label_metrics/precision_th0.2/biased:0, shape: ()
> group1/block0/conv1/gn/gamma/Momentum:0, shape: (128,)
> EMA/rpn_losses/level4/label_metrics/precision_th0.2/local_step:0, shape: ()
> group1/block3/conv1/gn/gamma/AccumGrad:0, shape: (128,)
> group1/block0/conv3/gn/gamma/Momentum:0, shape: (512,)
> EMA/total_cost/biased:0, shape: ()
> group2/block4/conv3/gn/gamma:0, shape: (1024,)
> EMA/rpn_losses/level4/label_metrics/precision_th0.5:0, shape: ()
> EMA/rpn_losses/level4/label_metrics/precision_th0.5/biased:0, shape: ()
> fastrcnn/gn2/gamma/Momentum:0, shape: (256,)
> EMA/rpn_losses/level4/label_metrics/precision_th0.5/local_step:0, shape: ()
> group1/block1/conv1/gn/gamma/Momentum:0, shape: (128,)
> EMA/rpn_losses/level4/label_metrics/recall_th0.1:0, shape: ()
> fastrcnn/conv0/b/Momentum:0, shape: (256,)
> EMA/rpn_losses/level4/label_metrics/recall_th0.1/biased:0, shape: ()
> fpn/gn_p4/gamma/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/level4/label_metrics/recall_th0.1/local_step:0, shape: ()
> group0/block2/conv3/W/AccumGrad:0, shape: (1, 1, 64, 256)
> group2/block2/conv2/W/Momentum:0, shape: (3, 3, 256, 256)
> EMA/rpn_losses/level4/label_metrics/recall_th0.2:0, shape: ()
> EMA/rpn_losses/level4/label_metrics/recall_th0.2/biased:0, shape: ()
> fpn/lateral_1x1_c5/b/Momentum:0, shape: (256,)
> EMA/rpn_losses/level4/label_metrics/recall_th0.5:0, shape: ()
> fpn/posthoc_3x3_p3/W:0, shape: (3, 3, 256, 256)
> group1/block1/conv2/gn/gamma/Momentum:0, shape: (128,)
> EMA/rpn_losses/level4/label_metrics/recall_th0.5/local_step:0, shape: ()
> EMA/rpn_losses/level4/num_pos_anchor:0, shape: ()
> EMA/rpn_losses/level4/num_pos_anchor/biased:0, shape: ()
> group0/block1/conv1/gn/gamma/AccumGrad:0, shape: (64,)
> group2/block1/conv1/gn/gamma:0, shape: (256,)
> EMA/rpn_losses/level4/num_pos_anchor/local_step:0, shape: ()
> fastrcnn/gn3/gamma/Momentum:0, shape: (256,)
> EMA/rpn_losses/level4/num_valid_anchor:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/recall_th0.2/biased:0, shape: ()
> fastrcnn/gn1/gamma:0, shape: (256,)
> maskrcnn/fcn3/W/AccumGrad:0, shape: (3, 3, 256, 256)
> EMA/rpn_losses/level4/num_valid_anchor/local_step:0, shape: ()
> group2/block1/conv3/W/AccumGrad:0, shape: (1, 1, 256, 1024)
> fpn/gn_p3/gamma/AccumGrad:0, shape: (256,)
> group3/block2/conv3/W/AccumGrad:0, shape: (1, 1, 512, 2048)
> EMA/rpn_losses/level5/box_loss:0, shape: ()
> EMA/rpn_losses/level5/box_loss/biased:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/recall_th0.5:0, shape: ()
> group2/block4/conv1/gn/beta/Momentum:0, shape: (256,)
> EMA/rpn_losses/level6/num_valid_anchor:0, shape: ()
> EMA/rpn_losses/level5/box_loss/local_step:0, shape: ()
> EMA/rpn_losses/level5/label_loss:0, shape: ()
> fpn/lateral_1x1_c3/b/Momentum:0, shape: (256,)
> group2/block5/conv1/gn/gamma:0, shape: (256,)
> EMA/rpn_losses/level5/label_loss/biased:0, shape: ()
> EMA/rpn_losses/level5/label_loss/local_step:0, shape: ()
> EMA/rpn_losses/level5/num_pos_anchor:0, shape: ()
> maskrcnn/fcn3/b/Momentum:0, shape: (256,)
> EMA/rpn_losses/level5/label_metrics/precision_th0.1:0, shape: ()
> group2/block1/conv2/gn/gamma/AccumGrad:0, shape: (256,)
> EMA/sample_fast_rcnn_targets/num_fg:0, shape: ()
> group3/block2/conv2/gn/gamma/AccumGrad:0, shape: (512,)
> EMA/rpn_losses/level5/label_metrics/precision_th0.1/biased:0, shape: ()
> group0/block0/conv2/gn/gamma/Momentum:0, shape: (64,)
> group1/block1/conv1/W:0, shape: (1, 1, 512, 128)
> EMA/rpn_losses/level5/label_metrics/precision_th0.2:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/precision_th0.2/local_step:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/recall_th0.1:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/recall_th0.1/biased:0, shape: ()
> fpn/gn_p5/gamma:0, shape: (256,)
> EMA/rpn_losses/level5/num_valid_anchor:0, shape: ()
> EMA/rpn_losses/level5/label_metrics/recall_th0.2:0, shape: ()
> group0/block1/conv1/gn/beta/AccumGrad:0, shape: (64,)
> EMA/rpn_losses/level5/label_metrics/recall_th0.2/biased:0, shape: ()
> group1/block2/conv3/gn/beta/AccumGrad:0, shape: (512,)
> EMA/rpn_losses/level5/label_metrics/recall_th0.2/local_step:0, shape: ()
> maskrcnn/conv/W:0, shape: (1, 1, 256, 3)
> EMA/rpn_losses/level5/label_metrics/recall_th0.5/biased:0, shape: ()
> group0/block1/conv2/W/Momentum:0, shape: (3, 3, 64, 64)
> fpn/posthoc_3x3_p3/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block0/convshortcut/gn/gamma:0, shape: (1024,)
> group1/block0/conv2/gn/gamma/Momentum:0, shape: (128,)
> EMA/rpn_losses/level5/label_metrics/recall_th0.5/local_step:0, shape: ()
> maskrcnn/conv/b:0, shape: (3,)
> EMA/rpn_losses/level5/num_valid_anchor/biased:0, shape: ()
> fastrcnn/gn1/gamma/Momentum:0, shape: (256,)
> EMA/rpn_losses/level6/label_metrics/precision_th0.5:0, shape: ()
> maskrcnn/fcn2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> maskrcnn/deconv/b/Momentum:0, shape: (256,)
> EMA/rpn_losses/level5/num_valid_anchor/local_step:0, shape: ()
> group2/block2/conv1/W:0, shape: (1, 1, 1024, 256)
> EMA/rpn_losses/level6/box_loss:0, shape: ()
> EMA/rpn_losses/level6/box_loss/biased:0, shape: ()
> group0/block1/conv1/gn/beta/Momentum:0, shape: (64,)
> group0/block2/conv3/gn/beta/AccumGrad:0, shape: (256,)
> fpn/lateral_1x1_c4/b/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/level6/label_loss:0, shape: ()
> group0/block2/conv3/gn/beta:0, shape: (256,)
> group1/block2/conv1/gn/gamma/AccumGrad:0, shape: (128,)
> fpn/lateral_1x1_c4/b:0, shape: (256,)
> maskrcnn/fcn0/W:0, shape: (3, 3, 256, 256)
> EMA/rpn_losses/level6/label_loss/biased:0, shape: ()
> group3/block1/conv1/W/AccumGrad:0, shape: (1, 1, 2048, 512)
> EMA/rpn_losses/level6/label_loss/local_step:0, shape: ()
> group3/block1/conv3/gn/gamma/AccumGrad:0, shape: (2048,)
> fpn/gn_c4/beta/AccumGrad:0, shape: (256,)
> group2/block2/conv3/gn/gamma/AccumGrad:0, shape: (1024,)
> EMA/rpn_losses/level6/label_metrics/precision_th0.1:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/precision_th0.1/biased:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/precision_th0.1/local_step:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/precision_th0.2/biased:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/precision_th0.2/local_step:0, shape: ()
> group0/block2/conv2/gn/beta:0, shape: (64,)
> maskrcnn/gn3/gamma:0, shape: (256,)
> EMA/rpn_losses/level6/label_metrics/precision_th0.5/biased:0, shape: ()
> EMA/rpn_losses/level6/label_metrics/recall_th0.1:0, shape: ()
> maskrcnn/fcn0/b/Momentum:0, shape: (256,)
> group2/block5/conv3/gn/gamma/Momentum:0, shape: (1024,)
> EMA/rpn_losses/level6/label_metrics/recall_th0.1/biased:0, shape: ()
> fpn/gn_p2/beta/Momentum:0, shape: (256,)
> fpn/lateral_1x1_c2/b/Momentum:0, shape: (256,)
> group0/block0/convshortcut/gn/beta/AccumGrad:0, shape: (256,)
> EMA/rpn_losses/level6/label_metrics/recall_th0.1/local_step:0, shape: ()
> group0/block1/conv1/gn/beta:0, shape: (64,)
> EMA/rpn_losses/level6/label_metrics/recall_th0.2:0, shape: ()
> group3/block2/conv1/W/AccumGrad:0, shape: (1, 1, 2048, 512)
> EMA/rpn_losses/level6/label_metrics/recall_th0.2/local_step:0, shape: ()
> group0/block0/conv2/W:0, shape: (3, 3, 64, 64)
> group3/block2/conv1/gn/beta/Momentum:0, shape: (512,)
> EMA/rpn_losses/level6/label_metrics/recall_th0.5:0, shape: ()
> group1/block2/conv1/gn/beta:0, shape: (128,)
> fastrcnn/outputs/class/W:0, shape: (1024, 4)
> group1/block3/conv2/gn/gamma/Momentum:0, shape: (128,)
> EMA/rpn_losses/level6/label_metrics/recall_th0.5/local_step:0, shape: ()
> fastrcnn/conv2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block0/convshortcut/gn/beta/Momentum:0, shape: (1024,)
> EMA/rpn_losses/level6/num_valid_anchor/biased:0, shape: ()
> group3/block2/conv3/gn/beta/Momentum:0, shape: (2048,)
> EMA/rpn_losses/level6/num_valid_anchor/local_step:0, shape: ()
> group0/block1/conv2/gn/gamma/Momentum:0, shape: (64,)
> group2/block1/conv3/gn/beta/Momentum:0, shape: (1024,)
> EMA/sample_fast_rcnn_targets/num_fg/biased:0, shape: ()
> EMA/sample_fast_rcnn_targets/num_bg:0, shape: ()
> group0/block0/conv2/gn/beta/AccumGrad:0, shape: (64,)
> EMA/sample_fast_rcnn_targets/num_bg/biased:0, shape: ()
> EMA/sample_fast_rcnn_targets/num_bg/local_step:0, shape: ()
> group0/block0/conv2/gn/beta/Momentum:0, shape: (64,)
> EMA/sample_fast_rcnn_targets/num_fg/local_step:0, shape: ()
> group2/block1/conv1/W:0, shape: (1, 1, 1024, 256)
> EMA/sample_fast_rcnn_targets/proposal_metrics/best_iou_per_gt:0, shape: ()
> fastrcnn/gn0/beta/AccumGrad:0, shape: (256,)
> group3/block1/conv1/gn/gamma:0, shape: (512,)
> EMA/sample_fast_rcnn_targets/proposal_metrics/best_iou_per_gt/local_step:0, shape: ()
> fpn/posthoc_3x3_p5/b:0, shape: (256,)
> maskrcnn/gn1/beta/Momentum:0, shape: (256,)
> EMA/sample_fast_rcnn_targets/proposal_metrics/recall_iou0.3:0, shape: ()
> EMA/sample_fast_rcnn_targets/proposal_metrics/recall_iou0.3/biased:0, shape: ()
> EMA/sample_fast_rcnn_targets/proposal_metrics/recall_iou0.3/local_step:0, shape: ()
> fpn/gn_c3/gamma/AccumGrad:0, shape: (256,)
> EMA/sample_fast_rcnn_targets/proposal_metrics/recall_iou0.5:0, shape: ()
> group1/block0/convshortcut/W/AccumGrad:0, shape: (1, 1, 256, 512)
> EMA/sample_fast_rcnn_targets/proposal_metrics/recall_iou0.5/biased:0, shape: ()
> fpn/gn_p2/gamma:0, shape: (256,)
> group0/block0/convshortcut/gn/beta:0, shape: (256,)
> EMA/sample_fast_rcnn_targets/proposal_metrics/recall_iou0.5/local_step:0, shape: ()
> fastrcnn/conv2/b/Momentum:0, shape: (256,)
> EMA/total_cost:0, shape: ()
> EMA/wd_cost:0, shape: ()
> EMA/wd_cost/biased:0, shape: ()
> fastrcnn/outputs/class/b:0, shape: (4,)
> group0/block2/conv2/gn/beta/Momentum:0, shape: (64,)
> EMA/wd_cost/local_step:0, shape: ()
> apply_gradients/AccumGrad/counter:0, shape: ()
> fpn/lateral_1x1_c3/b:0, shape: (256,)
> group3/block1/conv1/gn/beta:0, shape: (512,)
> conv0/W/AccumGrad:0, shape: (7, 7, 3, 64)
> conv0/W/Momentum:0, shape: (7, 7, 3, 64)
> group1/block3/conv1/W/AccumGrad:0, shape: (1, 1, 512, 128)
> conv0/gn/beta/AccumGrad:0, shape: (64,)
> conv0/gn/gamma:0, shape: (64,)
> group0/block2/conv3/gn/gamma/Momentum:0, shape: (256,)
> conv0/gn/gamma/AccumGrad:0, shape: (64,)
> group3/block2/conv3/gn/beta/AccumGrad:0, shape: (2048,)
> conv0/gn/gamma/Momentum:0, shape: (64,)
> group0/block1/conv2/gn/gamma/AccumGrad:0, shape: (64,)
> fastrcnn/conv0/W:0, shape: (3, 3, 256, 256)
> fastrcnn/conv0/W/AccumGrad:0, shape: (3, 3, 256, 256)
> fastrcnn/conv0/W/Momentum:0, shape: (3, 3, 256, 256)
> group3/block0/conv2/gn/beta/Momentum:0, shape: (512,)
> group1/block1/conv1/gn/gamma:0, shape: (128,)
> fastrcnn/conv0/b:0, shape: (256,)
> group0/block0/conv3/W:0, shape: (1, 1, 64, 256)
> group1/block1/conv1/gn/gamma/AccumGrad:0, shape: (128,)
> fastrcnn/conv0/b/AccumGrad:0, shape: (256,)
> group3/block1/conv3/W:0, shape: (1, 1, 512, 2048)
> fastrcnn/conv1/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group3/block0/conv1/gn/gamma/AccumGrad:0, shape: (512,)
> group1/block0/conv1/W:0, shape: (1, 1, 256, 128)
> group2/block3/conv1/gn/gamma/AccumGrad:0, shape: (256,)
> fastrcnn/conv1/W/Momentum:0, shape: (3, 3, 256, 256)
> group1/block3/conv3/W/Momentum:0, shape: (1, 1, 128, 512)
> group0/block2/conv1/gn/gamma/Momentum:0, shape: (64,)
> fastrcnn/conv1/b:0, shape: (256,)
> fpn/gn_c3/beta/AccumGrad:0, shape: (256,)
> fastrcnn/conv1/b/AccumGrad:0, shape: (256,)
> group3/block2/conv1/gn/beta/AccumGrad:0, shape: (512,)
> fastrcnn/conv1/b/Momentum:0, shape: (256,)
> maskrcnn/fcn2/W/Momentum:0, shape: (3, 3, 256, 256)
> fastrcnn/conv2/W:0, shape: (3, 3, 256, 256)
> group2/block4/conv1/gn/gamma:0, shape: (256,)
> group0/block1/conv2/W/AccumGrad:0, shape: (3, 3, 64, 64)
> fastrcnn/conv2/W/Momentum:0, shape: (3, 3, 256, 256)
> fastrcnn/conv2/b:0, shape: (256,)
> group2/block3/conv3/W/Momentum:0, shape: (1, 1, 256, 1024)
> fastrcnn/conv3/W:0, shape: (3, 3, 256, 256)
> group2/block5/conv1/W/Momentum:0, shape: (1, 1, 1024, 256)
> fastrcnn/conv3/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block4/conv3/gn/gamma/Momentum:0, shape: (1024,)
> fastrcnn/conv3/b:0, shape: (256,)
> fastrcnn/conv3/b/AccumGrad:0, shape: (256,)
> fastrcnn/conv3/b/Momentum:0, shape: (256,)
> fpn/gn_p4/beta/Momentum:0, shape: (256,)
> fastrcnn/fc/W:0, shape: (12544, 1024)
> maskrcnn/fcn3/W/Momentum:0, shape: (3, 3, 256, 256)
> fastrcnn/fc/W/AccumGrad:0, shape: (12544, 1024)
> fastrcnn/gn3/gamma:0, shape: (256,)
> fastrcnn/fc/W/Momentum:0, shape: (12544, 1024)
> fastrcnn/fc/b/AccumGrad:0, shape: (1024,)
> fastrcnn/fc/b:0, shape: (1024,)
> fastrcnn/fc/b/Momentum:0, shape: (1024,)
> fastrcnn/gn0/beta/Momentum:0, shape: (256,)
> fastrcnn/gn0/gamma:0, shape: (256,)
> fastrcnn/gn0/gamma/Momentum:0, shape: (256,)
> fastrcnn/gn1/beta:0, shape: (256,)
> group3/block0/conv3/gn/gamma:0, shape: (2048,)
> fastrcnn/gn1/beta/Momentum:0, shape: (256,)
> fastrcnn/gn2/beta/AccumGrad:0, shape: (256,)
> fpn/gn_p2/gamma/Momentum:0, shape: (256,)
> fpn/posthoc_3x3_p4/b/AccumGrad:0, shape: (256,)
> fastrcnn/gn2/beta/Momentum:0, shape: (256,)
> fpn/posthoc_3x3_p4/b/Momentum:0, shape: (256,)
> fastrcnn/gn2/gamma:0, shape: (256,)
> fastrcnn/gn2/gamma/AccumGrad:0, shape: (256,)
> fastrcnn/gn3/beta:0, shape: (256,)
> group2/block0/conv1/W:0, shape: (1, 1, 512, 256)
> group1/block1/conv1/gn/beta/Momentum:0, shape: (128,)
> fastrcnn/gn3/beta/AccumGrad:0, shape: (256,)
> fastrcnn/gn3/beta/Momentum:0, shape: (256,)
> fastrcnn/gn3/gamma/AccumGrad:0, shape: (256,)
> fastrcnn/outputs/box/W/Momentum:0, shape: (1024, 16)
> fastrcnn/outputs/box/b:0, shape: (16,)
> fpn/lateral_1x1_c5/W/Momentum:0, shape: (1, 1, 2048, 256)
> fastrcnn/outputs/box/b/AccumGrad:0, shape: (16,)
> group3/block2/conv1/gn/gamma/AccumGrad:0, shape: (512,)
> fastrcnn/outputs/box/b/Momentum:0, shape: (16,)
> group1/block2/conv1/gn/beta/AccumGrad:0, shape: (128,)
> fastrcnn/outputs/class/W/AccumGrad:0, shape: (1024, 4)
> fpn/lateral_1x1_c5/b:0, shape: (256,)
> fpn/posthoc_3x3_p2/b/AccumGrad:0, shape: (256,)
> group1/block2/conv1/gn/beta/Momentum:0, shape: (128,)
> fastrcnn/outputs/class/W/Momentum:0, shape: (1024, 4)
> fastrcnn/outputs/class/b/AccumGrad:0, shape: (4,)
> fpn/posthoc_3x3_p2/W/Momentum:0, shape: (3, 3, 256, 256)
> fastrcnn/outputs/class/b/Momentum:0, shape: (4,)
> fpn/gn_c2/beta:0, shape: (256,)
> fpn/gn_c3/beta/Momentum:0, shape: (256,)
> fpn/gn_c2/beta/AccumGrad:0, shape: (256,)
> fpn/gn_c2/gamma:0, shape: (256,)
> group1/block0/conv1/gn/beta/Momentum:0, shape: (128,)
> fpn/gn_c2/gamma/Momentum:0, shape: (256,)
> fpn/gn_c3/beta:0, shape: (256,)
> fpn/gn_c3/gamma:0, shape: (256,)
> fpn/gn_c3/gamma/Momentum:0, shape: (256,)
> group3/block1/conv3/gn/gamma:0, shape: (2048,)
> fpn/gn_c4/beta:0, shape: (256,)
> fpn/gn_c4/gamma/AccumGrad:0, shape: (256,)
> group0/block0/conv3/gn/gamma/Momentum:0, shape: (256,)
> fpn/gn_c5/beta:0, shape: (256,)
> group1/block0/conv3/W/Momentum:0, shape: (1, 1, 128, 512)
> fpn/gn_c5/gamma:0, shape: (256,)
> fpn/lateral_1x1_c5/W:0, shape: (1, 1, 2048, 256)
> fpn/gn_c5/gamma/Momentum:0, shape: (256,)
> maskrcnn/fcn0/b:0, shape: (256,)
> group2/block5/conv3/gn/gamma:0, shape: (1024,)
> fpn/gn_p2/beta:0, shape: (256,)
> fpn/gn_p5/gamma/AccumGrad:0, shape: (256,)
> maskrcnn/fcn0/b/AccumGrad:0, shape: (256,)
> group2/block5/conv3/gn/gamma/AccumGrad:0, shape: (1024,)
> group1/block1/conv2/gn/gamma:0, shape: (128,)
> fpn/gn_p2/beta/AccumGrad:0, shape: (256,)
> group0/block2/conv3/gn/beta/Momentum:0, shape: (256,)
> fpn/gn_p2/gamma/AccumGrad:0, shape: (256,)
> fpn/lateral_1x1_c4/b/Momentum:0, shape: (256,)
> fpn/gn_p3/beta:0, shape: (256,)
> fpn/gn_p3/beta/AccumGrad:0, shape: (256,)
> group0/block2/conv3/W:0, shape: (1, 1, 64, 256)
> group1/block1/conv1/gn/beta/AccumGrad:0, shape: (128,)
> fpn/gn_p3/beta/Momentum:0, shape: (256,)
> group0/block2/conv1/gn/beta:0, shape: (64,)
> group2/block1/conv3/W:0, shape: (1, 1, 256, 1024)
> fpn/gn_p3/gamma:0, shape: (256,)
> group3/block2/conv3/gn/beta:0, shape: (2048,)
> group3/block2/conv3/W/Momentum:0, shape: (1, 1, 512, 2048)
> group0/block1/conv2/gn/gamma:0, shape: (64,)
> group2/block1/conv3/gn/beta:0, shape: (1024,)
> group2/block1/conv3/W/Momentum:0, shape: (1, 1, 256, 1024)
> fpn/gn_p3/gamma/Momentum:0, shape: (256,)
> fpn/gn_p4/beta/AccumGrad:0, shape: (256,)
> fpn/gn_p4/gamma:0, shape: (256,)
> fpn/gn_p4/gamma/Momentum:0, shape: (256,)
> fpn/gn_p5/beta:0, shape: (256,)
> fpn/gn_p5/beta/Momentum:0, shape: (256,)
> fpn/lateral_1x1_c2/W:0, shape: (1, 1, 256, 256)
> group1/block0/conv2/gn/gamma:0, shape: (128,)
> fpn/lateral_1x1_c2/W/AccumGrad:0, shape: (1, 1, 256, 256)
> fpn/lateral_1x1_c2/W/Momentum:0, shape: (1, 1, 256, 256)
> group3/block0/conv3/gn/beta/Momentum:0, shape: (2048,)
> fpn/lateral_1x1_c2/b:0, shape: (256,)
> group3/block1/conv2/gn/beta/Momentum:0, shape: (512,)
> fpn/lateral_1x1_c2/b/AccumGrad:0, shape: (256,)
> fpn/lateral_1x1_c3/W:0, shape: (1, 1, 512, 256)
> fpn/lateral_1x1_c3/W/AccumGrad:0, shape: (1, 1, 512, 256)
> group0/block0/conv1/W/Momentum:0, shape: (1, 1, 64, 64)
> fpn/lateral_1x1_c3/W/Momentum:0, shape: (1, 1, 512, 256)
> fpn/lateral_1x1_c4/W:0, shape: (1, 1, 1024, 256)
> fpn/lateral_1x1_c4/W/AccumGrad:0, shape: (1, 1, 1024, 256)
> fpn/lateral_1x1_c4/W/Momentum:0, shape: (1, 1, 1024, 256)
> fpn/lateral_1x1_c5/W/AccumGrad:0, shape: (1, 1, 2048, 256)
> fpn/posthoc_3x3_p2/W:0, shape: (3, 3, 256, 256)
> fpn/posthoc_3x3_p2/b:0, shape: (256,)
> fpn/posthoc_3x3_p2/b/Momentum:0, shape: (256,)
> group3/block2/conv2/W/AccumGrad:0, shape: (3, 3, 512, 512)
> group1/block1/conv3/gn/gamma/AccumGrad:0, shape: (512,)
> fpn/posthoc_3x3_p3/W/Momentum:0, shape: (3, 3, 256, 256)
> group0/block0/conv1/gn/gamma:0, shape: (64,)
> group1/block1/conv2/W/Momentum:0, shape: (3, 3, 128, 128)
> fpn/posthoc_3x3_p3/b/AccumGrad:0, shape: (256,)
> fpn/posthoc_3x3_p3/b/Momentum:0, shape: (256,)
> fpn/posthoc_3x3_p4/W:0, shape: (3, 3, 256, 256)
> fpn/posthoc_3x3_p4/W/AccumGrad:0, shape: (3, 3, 256, 256)
> fpn/posthoc_3x3_p4/W/Momentum:0, shape: (3, 3, 256, 256)
> fpn/posthoc_3x3_p5/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group3/block1/conv1/gn/gamma/AccumGrad:0, shape: (512,)
> fpn/posthoc_3x3_p5/b/AccumGrad:0, shape: (256,)
> group1/block3/conv3/gn/beta/AccumGrad:0, shape: (512,)
> global_step:0, shape: ()
> group0/block1/conv3/W/Momentum:0, shape: (1, 1, 64, 256)
> group1/block0/convshortcut/gn/beta/Momentum:0, shape: (512,)
> group0/block0/conv1/W:0, shape: (1, 1, 64, 64)
> group2/block4/conv2/gn/gamma:0, shape: (256,)
> group0/block0/conv1/gn/beta:0, shape: (64,)
> group2/block4/conv2/gn/gamma/AccumGrad:0, shape: (256,)
> group1/block1/conv2/gn/beta/Momentum:0, shape: (128,)
> group0/block0/conv1/gn/beta/AccumGrad:0, shape: (64,)
> group0/block0/conv2/W/AccumGrad:0, shape: (3, 3, 64, 64)
> group0/block0/conv2/gn/beta:0, shape: (64,)
> group0/block0/conv2/gn/gamma/AccumGrad:0, shape: (64,)
> group0/block0/conv3/W/AccumGrad:0, shape: (1, 1, 64, 256)
> group0/block0/conv3/W/Momentum:0, shape: (1, 1, 64, 256)
> group0/block0/conv3/gn/beta:0, shape: (256,)
> group0/block0/conv3/gn/gamma:0, shape: (256,)
> group0/block0/conv3/gn/gamma/AccumGrad:0, shape: (256,)
> group0/block2/conv3/W/Momentum:0, shape: (1, 1, 64, 256)
> group0/block0/convshortcut/W:0, shape: (1, 1, 64, 256)
> group0/block0/convshortcut/gn/beta/Momentum:0, shape: (256,)
> group0/block0/convshortcut/gn/gamma/AccumGrad:0, shape: (256,)
> group0/block0/convshortcut/gn/gamma/Momentum:0, shape: (256,)
> group0/block1/conv1/W/AccumGrad:0, shape: (1, 1, 256, 64)
> group0/block1/conv1/gn/gamma:0, shape: (64,)
> group0/block1/conv1/gn/gamma/Momentum:0, shape: (64,)
> maskrcnn/gn1/beta/AccumGrad:0, shape: (256,)
> group0/block1/conv2/W:0, shape: (3, 3, 64, 64)
> group1/block0/conv3/W/AccumGrad:0, shape: (1, 1, 128, 512)
> group0/block1/conv2/gn/beta:0, shape: (64,)
> group0/block1/conv2/gn/beta/AccumGrad:0, shape: (64,)
> group0/block2/conv2/gn/gamma/Momentum:0, shape: (64,)
> group0/block1/conv3/gn/beta:0, shape: (256,)
> group0/block1/conv3/gn/beta/AccumGrad:0, shape: (256,)
> group0/block2/conv2/W:0, shape: (3, 3, 64, 64)
> group0/block1/conv3/gn/beta/Momentum:0, shape: (256,)
> group0/block1/conv3/gn/gamma/AccumGrad:0, shape: (256,)
> group2/block4/conv2/gn/beta/AccumGrad:0, shape: (256,)
> group0/block2/conv1/W:0, shape: (1, 1, 256, 64)
> group0/block2/conv1/W/AccumGrad:0, shape: (1, 1, 256, 64)
> group0/block2/conv1/W/Momentum:0, shape: (1, 1, 256, 64)
> group0/block2/conv1/gn/beta/AccumGrad:0, shape: (64,)
> group3/block0/conv1/gn/beta/AccumGrad:0, shape: (512,)
> group0/block2/conv1/gn/beta/Momentum:0, shape: (64,)
> group1/block3/conv3/W/AccumGrad:0, shape: (1, 1, 128, 512)
> group0/block2/conv1/gn/gamma/AccumGrad:0, shape: (64,)
> group1/block2/conv3/gn/gamma:0, shape: (512,)
> group0/block2/conv2/W/AccumGrad:0, shape: (3, 3, 64, 64)
> group0/block2/conv2/W/Momentum:0, shape: (3, 3, 64, 64)
> group0/block2/conv3/gn/gamma:0, shape: (256,)
> group2/block0/conv3/gn/beta/Momentum:0, shape: (1024,)
> group0/block2/conv3/gn/gamma/AccumGrad:0, shape: (256,)
> group1/block0/conv1/W/AccumGrad:0, shape: (1, 1, 256, 128)
> group2/block0/convshortcut/gn/beta/AccumGrad:0, shape: (1024,)
> group1/block0/conv1/gn/beta:0, shape: (128,)
> group1/block0/conv1/gn/gamma:0, shape: (128,)
> group1/block0/conv1/gn/gamma/AccumGrad:0, shape: (128,)
> group1/block0/conv2/W:0, shape: (3, 3, 128, 128)
> group1/block1/conv1/W/Momentum:0, shape: (1, 1, 512, 128)
> group1/block0/conv2/gn/beta:0, shape: (128,)
> group1/block0/conv2/gn/beta/AccumGrad:0, shape: (128,)
> group1/block0/conv2/gn/beta/Momentum:0, shape: (128,)
> group3/block2/conv2/W:0, shape: (3, 3, 512, 512)
> group1/block1/conv3/gn/gamma:0, shape: (512,)
> group1/block0/conv2/gn/gamma/AccumGrad:0, shape: (128,)
> rpn/conv0/W:0, shape: (3, 3, 256, 256)
> group1/block0/conv3/gn/beta:0, shape: (512,)
> rpn/conv0/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group1/block0/conv3/gn/beta/AccumGrad:0, shape: (512,)
> rpn/conv0/W/Momentum:0, shape: (3, 3, 256, 256)
> group1/block0/conv3/gn/beta/Momentum:0, shape: (512,)
> group1/block0/conv3/gn/gamma/AccumGrad:0, shape: (512,)
> group1/block0/convshortcut/W:0, shape: (1, 1, 256, 512)
> group1/block3/conv1/gn/beta/AccumGrad:0, shape: (128,)
> group1/block0/convshortcut/gn/beta:0, shape: (512,)
> group1/block0/convshortcut/gn/gamma:0, shape: (512,)
> group1/block0/convshortcut/gn/gamma/AccumGrad:0, shape: (512,)
> group1/block0/convshortcut/gn/gamma/Momentum:0, shape: (512,)
> group1/block1/conv1/W/AccumGrad:0, shape: (1, 1, 512, 128)
> group1/block1/conv1/gn/beta:0, shape: (128,)
> group1/block1/conv3/W/Momentum:0, shape: (1, 1, 128, 512)
> group1/block1/conv2/W/AccumGrad:0, shape: (3, 3, 128, 128)
> group1/block1/conv2/gn/beta/AccumGrad:0, shape: (128,)
> group1/block1/conv2/gn/gamma/AccumGrad:0, shape: (128,)
> group1/block1/conv3/W:0, shape: (1, 1, 128, 512)
> group1/block1/conv3/gn/beta:0, shape: (512,)
> group1/block1/conv3/gn/beta/AccumGrad:0, shape: (512,)
> group3/block2/conv2/W/Momentum:0, shape: (3, 3, 512, 512)
> group2/block2/conv2/gn/gamma/AccumGrad:0, shape: (256,)
> group1/block1/conv3/gn/gamma/Momentum:0, shape: (512,)
> group1/block2/conv1/W:0, shape: (1, 1, 512, 128)
> maskrcnn/fcn2/b:0, shape: (256,)
> group3/block1/conv3/gn/beta:0, shape: (2048,)
> group1/block2/conv1/W/AccumGrad:0, shape: (1, 1, 512, 128)
> group2/block4/conv1/W/AccumGrad:0, shape: (1, 1, 1024, 256)
> group1/block2/conv1/W/Momentum:0, shape: (1, 1, 512, 128)
> group1/block2/conv1/gn/gamma:0, shape: (128,)
> group1/block2/conv1/gn/gamma/Momentum:0, shape: (128,)
> maskrcnn/gn0/beta/Momentum:0, shape: (256,)
> group2/block4/conv2/gn/beta:0, shape: (256,)
> group1/block2/conv2/W/AccumGrad:0, shape: (3, 3, 128, 128)
> group1/block2/conv2/W/Momentum:0, shape: (3, 3, 128, 128)
> group1/block2/conv2/gn/beta:0, shape: (128,)
> group1/block2/conv2/gn/beta/AccumGrad:0, shape: (128,)
> group2/block3/conv2/W:0, shape: (3, 3, 256, 256)
> group1/block2/conv2/gn/beta/Momentum:0, shape: (128,)
> group1/block2/conv2/gn/gamma:0, shape: (128,)
> group1/block2/conv3/gn/gamma/Momentum:0, shape: (512,)
> group1/block2/conv2/gn/gamma/AccumGrad:0, shape: (128,)
> group1/block2/conv2/gn/gamma/Momentum:0, shape: (128,)
> group1/block2/conv3/W:0, shape: (1, 1, 128, 512)
> group1/block2/conv3/W/AccumGrad:0, shape: (1, 1, 128, 512)
> group2/block2/conv2/gn/beta/AccumGrad:0, shape: (256,)
> group1/block2/conv3/gn/beta:0, shape: (512,)
> group1/block2/conv3/gn/beta/Momentum:0, shape: (512,)
> group3/block0/conv2/W/Momentum:0, shape: (3, 3, 512, 512)
> group1/block3/conv1/gn/beta:0, shape: (128,)
> group2/block4/conv3/gn/gamma/AccumGrad:0, shape: (1024,)
> group1/block3/conv1/gn/beta/Momentum:0, shape: (128,)
> group1/block3/conv1/gn/gamma/Momentum:0, shape: (128,)
> group1/block3/conv2/W:0, shape: (3, 3, 128, 128)
> group1/block3/conv2/W/AccumGrad:0, shape: (3, 3, 128, 128)
> group1/block3/conv2/W/Momentum:0, shape: (3, 3, 128, 128)
> group1/block3/conv2/gn/beta:0, shape: (128,)
> group1/block3/conv2/gn/beta/AccumGrad:0, shape: (128,)
> group1/block3/conv2/gn/beta/Momentum:0, shape: (128,)
> group1/block3/conv2/gn/gamma:0, shape: (128,)
> group3/block1/conv3/W/AccumGrad:0, shape: (1, 1, 512, 2048)
> group1/block3/conv3/gn/beta/Momentum:0, shape: (512,)
> group1/block3/conv3/gn/gamma:0, shape: (512,)
> group1/block3/conv3/gn/gamma/AccumGrad:0, shape: (512,)
> maskrcnn/gn1/gamma/AccumGrad:0, shape: (256,)
> group1/block3/conv3/gn/gamma/Momentum:0, shape: (512,)
> group2/block0/conv1/W/AccumGrad:0, shape: (1, 1, 512, 256)
> group2/block0/conv1/W/Momentum:0, shape: (1, 1, 512, 256)
> group2/block0/conv1/gn/beta:0, shape: (256,)
> group2/block2/conv3/W:0, shape: (1, 1, 256, 1024)
> group2/block0/conv1/gn/beta/Momentum:0, shape: (256,)
> group2/block0/conv1/gn/gamma:0, shape: (256,)
> group2/block0/conv1/gn/gamma/Momentum:0, shape: (256,)
> group2/block0/conv2/W:0, shape: (3, 3, 256, 256)
> group2/block0/conv2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block0/conv2/gn/beta:0, shape: (256,)
> group2/block0/conv2/gn/beta/AccumGrad:0, shape: (256,)
> group3/block0/conv1/gn/beta/Momentum:0, shape: (512,)
> group2/block0/conv2/gn/gamma/AccumGrad:0, shape: (256,)
> group2/block5/conv2/gn/gamma:0, shape: (256,)
> group2/block0/conv2/gn/gamma/Momentum:0, shape: (256,)
> group2/block0/conv3/W:0, shape: (1, 1, 256, 1024)
> group2/block0/conv3/W/AccumGrad:0, shape: (1, 1, 256, 1024)
> group2/block0/conv3/W/Momentum:0, shape: (1, 1, 256, 1024)
> group2/block0/conv3/gn/beta/AccumGrad:0, shape: (1024,)
> group2/block0/conv3/gn/gamma:0, shape: (1024,)
> group2/block0/conv3/gn/gamma/AccumGrad:0, shape: (1024,)
> group2/block0/convshortcut/W/Momentum:0, shape: (1, 1, 512, 1024)
> group2/block0/convshortcut/gn/beta:0, shape: (1024,)
> group2/block0/convshortcut/gn/gamma/Momentum:0, shape: (1024,)
> group2/block1/conv1/W/AccumGrad:0, shape: (1, 1, 1024, 256)
> group2/block1/conv1/gn/beta/AccumGrad:0, shape: (256,)
> group2/block1/conv1/gn/beta/Momentum:0, shape: (256,)
> group2/block1/conv1/gn/gamma/AccumGrad:0, shape: (256,)
> group2/block1/conv1/gn/gamma/Momentum:0, shape: (256,)
> group2/block1/conv2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block1/conv2/gn/beta:0, shape: (256,)
> group2/block1/conv2/gn/beta/AccumGrad:0, shape: (256,)
> group2/block1/conv2/gn/beta/Momentum:0, shape: (256,)
> group2/block1/conv2/gn/gamma:0, shape: (256,)
> group2/block1/conv3/gn/beta/AccumGrad:0, shape: (1024,)
> maskrcnn/deconv/W/Momentum:0, shape: (2, 2, 256, 256)
> group2/block1/conv3/gn/gamma:0, shape: (1024,)
> group2/block1/conv3/gn/gamma/Momentum:0, shape: (1024,)
> group2/block2/conv1/W/AccumGrad:0, shape: (1, 1, 1024, 256)
> group2/block2/conv1/W/Momentum:0, shape: (1, 1, 1024, 256)
> group2/block2/conv1/gn/beta:0, shape: (256,)
> group2/block2/conv1/gn/beta/AccumGrad:0, shape: (256,)
> group2/block2/conv1/gn/beta/Momentum:0, shape: (256,)
> group2/block2/conv1/gn/gamma:0, shape: (256,)
> group2/block5/conv2/W:0, shape: (3, 3, 256, 256)
> group2/block2/conv1/gn/gamma/AccumGrad:0, shape: (256,)
> group2/block2/conv2/W:0, shape: (3, 3, 256, 256)
> group2/block2/conv2/gn/beta:0, shape: (256,)
> group2/block2/conv2/gn/beta/Momentum:0, shape: (256,)
> group2/block3/conv1/gn/beta:0, shape: (256,)
> group2/block2/conv2/gn/gamma/Momentum:0, shape: (256,)
> group2/block2/conv3/W/AccumGrad:0, shape: (1, 1, 256, 1024)
> group2/block2/conv3/gn/beta/AccumGrad:0, shape: (1024,)
> maskrcnn/gn0/gamma:0, shape: (256,)
> group2/block5/conv3/gn/beta/AccumGrad:0, shape: (1024,)
> group2/block2/conv3/gn/beta/Momentum:0, shape: (1024,)
> group2/block2/conv3/gn/gamma:0, shape: (1024,)
> rpn/conv0/b:0, shape: (256,)
> group2/block2/conv3/gn/gamma/Momentum:0, shape: (1024,)
> group2/block3/conv1/W:0, shape: (1, 1, 1024, 256)
> group2/block3/conv1/W/AccumGrad:0, shape: (1, 1, 1024, 256)
> group2/block3/conv1/W/Momentum:0, shape: (1, 1, 1024, 256)
> group2/block3/conv1/gn/beta/AccumGrad:0, shape: (256,)
> group2/block3/conv1/gn/beta/Momentum:0, shape: (256,)
> group2/block3/conv1/gn/gamma:0, shape: (256,)
> group2/block3/conv1/gn/gamma/Momentum:0, shape: (256,)
> group2/block3/conv2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block3/conv2/W/Momentum:0, shape: (3, 3, 256, 256)
> rpn/class/b:0, shape: (3,)
> group2/block3/conv2/gn/beta:0, shape: (256,)
> rpn/class/b/AccumGrad:0, shape: (3,)
> group2/block3/conv2/gn/beta/AccumGrad:0, shape: (256,)
> rpn/class/b/Momentum:0, shape: (3,)
> group2/block3/conv2/gn/beta/Momentum:0, shape: (256,)
> group2/block3/conv2/gn/gamma/AccumGrad:0, shape: (256,)
> maskrcnn/gn2/beta:0, shape: (256,)
> group2/block3/conv2/gn/gamma/Momentum:0, shape: (256,)
> group2/block3/conv3/W:0, shape: (1, 1, 256, 1024)
> group2/block3/conv3/W/AccumGrad:0, shape: (1, 1, 256, 1024)
> group2/block3/conv3/gn/beta/Momentum:0, shape: (1024,)
> group2/block3/conv3/gn/gamma:0, shape: (1024,)
> maskrcnn/conv/b/AccumGrad:0, shape: (3,)
> group2/block3/conv3/gn/gamma/Momentum:0, shape: (1024,)
> group2/block4/conv1/W:0, shape: (1, 1, 1024, 256)
> group2/block4/conv1/W/Momentum:0, shape: (1, 1, 1024, 256)
> group2/block4/conv1/gn/beta:0, shape: (256,)
> group2/block4/conv1/gn/beta/AccumGrad:0, shape: (256,)
> group2/block4/conv1/gn/gamma/AccumGrad:0, shape: (256,)
> group2/block4/conv1/gn/gamma/Momentum:0, shape: (256,)
> group2/block4/conv2/W:0, shape: (3, 3, 256, 256)
> group2/block4/conv2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block4/conv2/W/Momentum:0, shape: (3, 3, 256, 256)
> rpn/box/W:0, shape: (1, 1, 256, 12)
> group2/block4/conv2/gn/beta/Momentum:0, shape: (256,)
> group2/block4/conv3/W/AccumGrad:0, shape: (1, 1, 256, 1024)
> group2/block4/conv3/W/Momentum:0, shape: (1, 1, 256, 1024)
> group2/block4/conv3/gn/beta:0, shape: (1024,)
> group2/block4/conv3/gn/beta/Momentum:0, shape: (1024,)
> group3/block1/conv2/W/AccumGrad:0, shape: (3, 3, 512, 512)
> group2/block5/conv1/W:0, shape: (1, 1, 1024, 256)
> group2/block5/conv1/W/AccumGrad:0, shape: (1, 1, 1024, 256)
> group2/block5/conv1/gn/beta:0, shape: (256,)
> group2/block5/conv1/gn/beta/AccumGrad:0, shape: (256,)
> group2/block5/conv1/gn/beta/Momentum:0, shape: (256,)
> group2/block5/conv1/gn/gamma/AccumGrad:0, shape: (256,)
> group3/block0/convshortcut/gn/gamma/AccumGrad:0, shape: (2048,)
> group2/block5/conv1/gn/gamma/Momentum:0, shape: (256,)
> group2/block5/conv2/W/AccumGrad:0, shape: (3, 3, 256, 256)
> group2/block5/conv2/W/Momentum:0, shape: (3, 3, 256, 256)
> group2/block5/conv2/gn/beta:0, shape: (256,)
> group2/block5/conv2/gn/beta/AccumGrad:0, shape: (256,)
> group2/block5/conv2/gn/beta/Momentum:0, shape: (256,)
> group2/block5/conv2/gn/gamma/AccumGrad:0, shape: (256,)
> group2/block5/conv2/gn/gamma/Momentum:0, shape: (256,)
> group2/block5/conv3/W:0, shape: (1, 1, 256, 1024)
> group2/block5/conv3/W/AccumGrad:0, shape: (1, 1, 256, 1024)
> group2/block5/conv3/W/Momentum:0, shape: (1, 1, 256, 1024)
> group2/block5/conv3/gn/beta/Momentum:0, shape: (1024,)
> group3/block0/conv1/W/AccumGrad:0, shape: (1, 1, 1024, 512)
> group3/block1/conv2/gn/gamma/Momentum:0, shape: (512,)
> group3/block0/conv1/gn/beta:0, shape: (512,)
> group3/block0/conv1/gn/gamma:0, shape: (512,)
> group3/block0/conv1/gn/gamma/Momentum:0, shape: (512,)
> group3/block0/conv2/W:0, shape: (3, 3, 512, 512)
> group3/block0/conv2/W/AccumGrad:0, shape: (3, 3, 512, 512)
> group3/block0/conv2/gn/beta:0, shape: (512,)
> group3/block0/conv2/gn/beta/AccumGrad:0, shape: (512,)
> group3/block0/conv2/gn/gamma/AccumGrad:0, shape: (512,)
> group3/block0/conv2/gn/gamma/Momentum:0, shape: (512,)
> group3/block0/conv3/W:0, shape: (1, 1, 512, 2048)
> group3/block0/conv3/W/AccumGrad:0, shape: (1, 1, 512, 2048)
> group3/block0/conv3/W/Momentum:0, shape: (1, 1, 512, 2048)
> group3/block0/conv3/gn/beta:0, shape: (2048,)
> group3/block0/conv3/gn/beta/AccumGrad:0, shape: (2048,)
> group3/block0/conv3/gn/gamma/AccumGrad:0, shape: (2048,)
> group3/block0/conv3/gn/gamma/Momentum:0, shape: (2048,)
> group3/block0/convshortcut/W:0, shape: (1, 1, 1024, 2048)
> group3/block0/convshortcut/W/AccumGrad:0, shape: (1, 1, 1024, 2048)
> group3/block0/convshortcut/W/Momentum:0, shape: (1, 1, 1024, 2048)
> group3/block0/convshortcut/gn/beta/AccumGrad:0, shape: (2048,)
> group3/block0/convshortcut/gn/gamma:0, shape: (2048,)
> group3/block0/convshortcut/gn/gamma/Momentum:0, shape: (2048,)
> group3/block1/conv1/W:0, shape: (1, 1, 2048, 512)
> group3/block1/conv1/W/Momentum:0, shape: (1, 1, 2048, 512)
> group3/block1/conv1/gn/beta/AccumGrad:0, shape: (512,)
> group3/block1/conv1/gn/beta/Momentum:0, shape: (512,)
> group3/block1/conv2/W/Momentum:0, shape: (3, 3, 512, 512)
> group3/block1/conv2/gn/beta:0, shape: (512,)
> group3/block1/conv2/gn/beta/AccumGrad:0, shape: (512,)
> group3/block1/conv2/gn/gamma:0, shape: (512,)
> maskrcnn/fcn2/b/AccumGrad:0, shape: (256,)
> group3/block1/conv3/gn/beta/AccumGrad:0, shape: (2048,)
> maskrcnn/gn1/beta:0, shape: (256,)
> maskrcnn/fcn2/b/Momentum:0, shape: (256,)
> group3/block1/conv3/gn/beta/Momentum:0, shape: (2048,)
> group3/block2/conv1/W:0, shape: (1, 1, 2048, 512)
> group3/block2/conv1/gn/gamma:0, shape: (512,)
> group3/block2/conv1/gn/gamma/Momentum:0, shape: (512,)
> group3/block2/conv2/gn/beta/AccumGrad:0, shape: (512,)
> group3/block2/conv2/gn/beta/Momentum:0, shape: (512,)
> group3/block2/conv2/gn/gamma:0, shape: (512,)
> maskrcnn/gn2/gamma/Momentum:0, shape: (256,)
> group3/block2/conv3/W:0, shape: (1, 1, 512, 2048)
> group3/block2/conv3/gn/gamma/AccumGrad:0, shape: (2048,)
> group3/block2/conv3/gn/gamma/Momentum:0, shape: (2048,)
> learning_rate:0, shape: ()
> maskrcnn/conv/W/Momentum:0, shape: (1, 1, 256, 3)
> maskrcnn/fcn1/W:0, shape: (3, 3, 256, 256)
> maskrcnn/conv/b/Momentum:0, shape: (3,)
> maskrcnn/gn3/gamma/Momentum:0, shape: (256,)
> maskrcnn/deconv/W/AccumGrad:0, shape: (2, 2, 256, 256)
> maskrcnn/deconv/b:0, shape: (256,)
> maskrcnn/deconv/b/AccumGrad:0, shape: (256,)
> maskrcnn/fcn0/W/Momentum:0, shape: (3, 3, 256, 256)
> maskrcnn/fcn1/W/AccumGrad:0, shape: (3, 3, 256, 256)
> maskrcnn/fcn1/W/Momentum:0, shape: (3, 3, 256, 256)
> maskrcnn/fcn1/b:0, shape: (256,)
> maskrcnn/fcn1/b/Momentum:0, shape: (256,)
> maskrcnn/fcn2/W:0, shape: (3, 3, 256, 256)
> maskrcnn/fcn3/W:0, shape: (3, 3, 256, 256)
> maskrcnn/fcn3/b/AccumGrad:0, shape: (256,)
> maskrcnn/gn0/beta:0, shape: (256,)
> maskrcnn/gn0/beta/AccumGrad:0, shape: (256,)
> maskrcnn/gn0/gamma/AccumGrad:0, shape: (256,)
> maskrcnn/gn0/gamma/Momentum:0, shape: (256,)
> maskrcnn/gn1/gamma:0, shape: (256,)
> maskrcnn/gn1/gamma/Momentum:0, shape: (256,)
> maskrcnn/gn2/beta/AccumGrad:0, shape: (256,)
> maskrcnn/gn2/beta/Momentum:0, shape: (256,)
> maskrcnn/gn2/gamma/AccumGrad:0, shape: (256,)
> maskrcnn/gn3/beta:0, shape: (256,)
> maskrcnn/gn3/beta/AccumGrad:0, shape: (256,)
> maskrcnn/gn3/beta/Momentum:0, shape: (256,)
> maskrcnn/gn3/gamma/AccumGrad:0, shape: (256,)
> rpn/box/b/Momentum:0, shape: (12,)
> rpn/box/W/AccumGrad:0, shape: (1, 1, 256, 12)
> rpn/box/W/Momentum:0, shape: (1, 1, 256, 12)
> rpn/box/b:0, shape: (12,)
> rpn/box/b/AccumGrad:0, shape: (12,)
> rpn/class/W:0, shape: (1, 1, 256, 3)
> rpn/class/W/Momentum:0, shape: (1, 1, 256, 3)
|
closed
|
2019-12-02T11:09:43Z
|
2019-12-02T17:54:54Z
|
https://github.com/tensorpack/tensorpack/issues/1367
|
[
"duplicate"
] |
Razor1O9
| 0
|
ultralytics/yolov5
|
pytorch
| 12,833
|
Suppose we have trained a custom YOLOv8 model on low-resolution images. If I give that model a high-resolution image at inference time, how does it handle this in the background, and is there any information loss in the scaling process?
|
Suppose we have trained a custom YOLOv8 model on low-resolution images. If I give that model a high-resolution image at inference time, how does it handle this in the background, and is there any information loss in the scaling process?
_Originally posted by @dharakpatel in https://github.com/ultralytics/yolov5/issues/2660#issuecomment-2011680043_
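For context, Ultralytics models letterbox-resize each input down (or up) to the training image size while preserving aspect ratio, so a high-resolution frame is scaled before the network sees it and detail finer than the effective stride is lost. A minimal sketch of the aspect-preserving resize arithmetic (the function name and the 640-pixel default are illustrative assumptions, not the library's exact code):

```python
def letterbox_shape(height, width, target=640):
    """Return the (h, w) an image is scaled to before padding to target x target."""
    ratio = min(target / height, target / width)  # scale so the longer side fits
    return round(height * ratio), round(width * ratio)

# A 1920x1080 frame shrinks to 640x360, then gets padded to 640x640.
print(letterbox_shape(1080, 1920))  # → (360, 640)
```

The padding (gray borders) carries no image information, so only the scaling step can discard detail.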
|
closed
|
2024-03-21T09:02:41Z
|
2024-10-20T19:41:52Z
|
https://github.com/ultralytics/yolov5/issues/12833
|
[
"Stale"
] |
dharakpatel
| 3
|
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 304
|
Set/get the InferenceModel index embeddings, and save/load the index
|
Is there a way to get the embeddings and save them after calling inference.train_indexer? I really don't want to retrain the indexer each time the app starts. I looked at the code and couldn't see a way to set/get the list of embeddings or add new ones to the already-trained indexer.
From what I see, the FaissIndexer class needs to be extended with get_index, set_index and add_to_index methods.
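Until such accessors land upstream, one workaround is to cache the embedding matrix yourself and rebuild the index from it on startup; `faiss.write_index`/`faiss.read_index` can likewise persist a trained faiss index directly. A minimal numpy-based sketch of the caching half (function names are illustrative):

```python
import os
import tempfile

import numpy as np


def save_embeddings(embeddings, path):
    """Persist the embedding matrix so the indexer need not be retrained."""
    np.save(path, embeddings)


def load_embeddings(path):
    return np.load(path)


# Roundtrip: cache the embeddings once, then on the next app start reload them
# and feed them back into the index (e.g. index.add(restored)) instead of
# re-embedding the whole dataset.
emb = np.random.rand(100, 128).astype("float32")
tmp = os.path.join(tempfile.mkdtemp(), "emb.npy")
save_embeddings(emb, tmp)
restored = load_embeddings(tmp)
```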
|
closed
|
2021-04-17T06:19:01Z
|
2021-05-10T02:54:25Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/304
|
[
"enhancement",
"fixed in dev branch"
] |
mosheliv
| 3
|
vimalloc/flask-jwt-extended
|
flask
| 496
|
Old Access token still working
|
Hello.
I created an access token with **_create_access_token(identity=handle, fresh=True)_**.
Then I copied the access token without using it.
When I created another access token for the same user and tried to use it with **_@jwt_required(fresh=True)_**,
I found that both access tokens (the one I first created without using, and the second one) work with @jwt_required(fresh=True).
Question: how do I prevent the first (older) access token from working when I create a new one for a user?
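This is the expected JWT behavior: tokens are stateless and stay valid until they expire, so invalidating the older one requires server-side state. One common pattern is to remember the `jti` (unique id) of the most recently issued token per user and treat any other token as revoked; flask-jwt-extended can consult such a store via its `token_in_blocklist_loader` callback. A sketch of the store itself (a process-local dict stands in for Redis or a DB table):

```python
# Keep the jti of the most recently issued token per user; any token whose
# jti differs is treated as revoked.
latest_jti = {}  # in production this should be Redis or a database, not a dict


def record_new_token(identity, jti):
    """Call this right after create_access_token(); it supersedes older tokens."""
    latest_jti[identity] = jti


def is_token_revoked(identity, jti):
    """Return True when this token is not the latest one issued for the user."""
    return latest_jti.get(identity) != jti
```

Wiring `is_token_revoked` into the blocklist callback makes every older token fail with a revoked-token error as soon as a new one is created.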
|
closed
|
2022-08-27T22:23:28Z
|
2022-08-28T16:06:22Z
|
https://github.com/vimalloc/flask-jwt-extended/issues/496
|
[] |
BloverseUkeme
| 2
|
miguelgrinberg/python-socketio
|
asyncio
| 755
|
Support custom serialization methods for message queue packets
|
Currently, `pickle` is used, without an easy option to change it.
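A pluggable serializer would only need to expose pickle's `dumps`/`loads` surface (bytes in, bytes out). As an illustration of the shape such a hook could take — the class and how it would be injected are assumptions, since the library currently hardcodes pickle — a JSON-based one might look like:

```python
import json


class JsonSerializer:
    """Drop-in stand-in for pickle: same dumps/loads surface, bytes in/out."""

    @staticmethod
    def dumps(obj):
        return json.dumps(obj).encode("utf-8")

    @staticmethod
    def loads(data):
        return json.loads(data.decode("utf-8"))
```

JSON trades pickle's arbitrary-object support for safety and cross-language interop, which matters when non-Python services publish to the same message queue.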
|
open
|
2021-07-20T23:20:18Z
|
2021-10-28T23:27:32Z
|
https://github.com/miguelgrinberg/python-socketio/issues/755
|
[
"enhancement"
] |
miguelgrinberg
| 6
|
tensorpack/tensorpack
|
tensorflow
| 1,101
|
When to stop?
|
Hi Yuxin,
Recently I've applied examples/FasterRCNN (Mask R-CNN) to medical image processing (breast tumor segmentation), and I have a few questions:
1. I noticed that in config.py we use TRAIN.LR_SCHEDULE and TRAIN.STEPS_PER_EPOCH to control the training steps and the number of epochs. How do I set appropriate values for my own dataset (300 samples for training, 50 for evaluation)?
2. How do I judge the model's convergence? Do we judge it by the loss? I didn't see any code in the project that stops training based on the loss.
Thanks a lot!
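For reference, the relationship between dataset size, STEPS_PER_EPOCH and the LR_SCHEDULE boundaries is plain arithmetic: with 300 training samples at one image per GPU step, one pass over the data is 300 steps, so schedule boundaries expressed in steps can be converted to passes over the data. A rough sketch (the per-step image count and the example boundaries are assumptions; the actual defaults live in config.py):

```python
def passes_over_data(schedule_steps, num_samples=300, imgs_per_step=1):
    """Convert LR-schedule boundaries (in training steps) to dataset passes."""
    steps_per_pass = num_samples // imgs_per_step
    return [s / steps_per_pass for s in schedule_steps]


# e.g. dropping the LR at steps 6000 and 8000 means dropping it after
# 20 and roughly 26.7 full passes over a 300-sample dataset.
print(passes_over_data([6000, 8000]))
```

This makes it easy to sanity-check that the schedule gives the model enough passes over a small dataset before each learning-rate drop.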
|
closed
|
2019-03-04T15:05:24Z
|
2019-03-05T00:46:34Z
|
https://github.com/tensorpack/tensorpack/issues/1101
|
[
"unrelated"
] |
zxpeter
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 1,140
|
Caught exception: ModuleNotFoundError("No module named 'sounddevice'") Restarting
|
Reference voice: enter an audio filepath of a voice to be cloned (mp3, wav, m4a, flac, ...):
test.wav
Loaded file succesfully
Created the embedding
Write a sentence (+-20 words) to be synthesized:
我的声音是这样的吗?你来听一下,现在是我克隆的声音。
| Generating 1/1
Done.
Created the mel spectrogram
Synthesizing the waveform:
{| ████████████████ 152000/153600 | Batch Size: 16 | Gen Rate: 2.7kHz | }Caught exception: ModuleNotFoundError("No module named 'sounddevice'")
Restarting
Reference voice: enter an audio filepath of a voice to be cloned (mp3, wav, m4a, flac, ...):
What is this error?
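The traceback means the `sounddevice` package is simply missing from the environment, so installing it (`pip install sounddevice`) fixes the crash. The demo could also degrade gracefully by guarding the optional import; a hedged sketch (the fallback path and function name are illustrative):

```python
try:
    import sounddevice as sd  # optional: only needed for live audio playback
except ModuleNotFoundError:
    sd = None  # fall back to saving the generated wav instead of playing it


def play_or_save(wav, sample_rate, player=sd, out_path="demo_output.wav"):
    """Play through sounddevice when available, otherwise report a save path."""
    if player is not None:
        player.play(wav, sample_rate)
        return None
    # The caller can write the wav to out_path (e.g. with soundfile) instead.
    return out_path
```

With this pattern the synthesis still completes on machines without an audio stack, which is common on headless servers.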
|
closed
|
2022-11-23T12:17:02Z
|
2022-12-02T08:51:52Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1140
|
[] |
jasonyun
| 1
|
modelscope/modelscope
|
nlp
| 1,269
|
How do I set modelscope as the HF_ENDPOINT environment variable?
|
**Describe the feature**
I want to set HF_ENDPOINT to modelscope — how should I do that? I tried the following, but it doesn't seem to work:
```python
os.environ["HF_ENDPOINT"] = "modelscope-registry.cn-beijing.cr.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.3.1-tf2.16.1-1.22.2"
```
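For context, HF_ENDPOINT expects the base URL of a Hugging-Face-compatible hub, and it must be set before `huggingface_hub`/`transformers` are imported — the value above is a Docker image reference, which the hub client cannot use. A sketch of the expected shape (the mirror URL is a placeholder assumption; ModelScope models are typically fetched via the `modelscope` SDK instead):

```python
import os

# HF_ENDPOINT must be a hub base URL, not a Docker image reference, and it
# has to be set before huggingface_hub/transformers are first imported.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"  # placeholder mirror URL
```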
|
closed
|
2025-03-17T14:54:59Z
|
2025-03-18T01:23:53Z
|
https://github.com/modelscope/modelscope/issues/1269
|
[] |
yuruotong1
| 1
|
BayesWitnesses/m2cgen
|
scikit-learn
| 229
|
Refactor count_exprs function
|
- [ ] Exclude `IdExpr`: https://github.com/BayesWitnesses/m2cgen/pull/208#discussion_r422246777.
- [ ] Fix compatibility with fallback expressions: https://github.com/BayesWitnesses/m2cgen/pull/208#issuecomment-628326602.
|
open
|
2020-06-01T20:15:09Z
|
2020-06-01T20:15:09Z
|
https://github.com/BayesWitnesses/m2cgen/issues/229
|
[] |
StrikerRUS
| 0
|
zappa/Zappa
|
django
| 1,018
|
CloudWatch events error out when APP_MODULE is not available in zappa_settings.yml
|
Zappa errors out when APP_MODULE is not in zappa_settings.yml and a CloudWatch event triggers the Lambda.
## Context
When CloudWatch is configured to trigger a Lambda function deployed by Zappa (without APP_MODULE in zappa_settings.yml) and the event data is {'awslogs': {'data': 'some_random_data'}}, the below error occurs,
```
{
"errorMessage": "module 'zappa_settings' has no attribute 'APP_MODULE'",
"errorType": "AttributeError",
"stackTrace": [
[
"/var/task/handler.py",
657,
"lambda_handler",
"return LambdaHandler.lambda_handler(event, context)"
],
[
"/var/task/handler.py",
254,
"lambda_handler",
"return handler.handler(event, context)"
],
[
"/var/task/handler.py",
495,
"handler",
"whole_function = \"{}.{}\".format(settings.APP_MODULE, settings.APP_FUNCTION)"
]
]
}
```
## Expected Behavior
Lambda should not return an error
## Actual Behavior
Lambda trigger is failing
## Possible Fix
Check for APP_MODULE and APP_FUNCTION attributes in settings and log an error if it doesn't exist
https://github.com/rmithun/Zappa/pull/1/files
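The proposed check amounts to looking the attributes up defensively instead of assuming they exist. Roughly (the function name mirrors the `whole_function` line in the stack trace above; this is a sketch, not the actual patch):

```python
import logging
from types import SimpleNamespace

logger = logging.getLogger(__name__)


def resolve_whole_function(settings):
    """Return 'module.function', or None when the app entry point is unset."""
    app_module = getattr(settings, "APP_MODULE", None)
    app_function = getattr(settings, "APP_FUNCTION", None)
    if app_module is None or app_function is None:
        logger.error("APP_MODULE/APP_FUNCTION not set; skipping CloudWatch event")
        return None
    return "{}.{}".format(app_module, app_function)


# A settings object without APP_MODULE no longer raises AttributeError:
print(resolve_whole_function(SimpleNamespace()))  # → None
```

Returning None (and logging) lets the handler acknowledge the event instead of failing the whole invocation.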
## Steps to Reproduce
Create and deploy a lambda function with Zappa without APP_MODULE in zappa_settings.yml.
Go to the "Test" tab in the deployed lambda
Enter {"awslogs": {"data": "some_data"}} and click test
You should see execution result as failed
## Environment
* Zappa version: 0.52.0
* Operating System and Python version: Python 3.6
* The output of `pip freeze`:
```
appdirs==1.4.4
argcomplete==1.12.3
asgiref==3.4.1
asn1crypto==1.2.0
atomicwrites==1.4.0
attrs==21.2.0
aws-xray-sdk==2.4.2
awscli==1.19.111
backcall==0.2.0
boto3==1.17.111
botocore==1.20.111
cached-property==1.5.2
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
cfn-flip==1.2.3
chardet==4.0.0
click==8.0.1
colorama==0.4.3
contextlib2==0.6.0.post1
coreapi==2.3.3
coreschema==0.0.4
coverage==4.5.3
coveralls==1.7.0
cryptography==3.3.1
datadog==0.28.0
decorator==5.0.9
defusedxml==0.7.1
Django==3.0.4
django-admin-actions==0.1.1
django-annoying==0.10.4
django-colorfield==0.1.15
django-cors-headers==3.2.1
django-enumfields==2.0.0
django-excel==0.0.10
django-extensions==3.1.3
django-filter==2.2.0
django-ical==1.4
django-money==1.0
django-multiselectfield==0.1.12
django-ordered-model==3.3.0
django-recurrence==1.10.2
django-requestlogging-redux==1.2.1
django-rest-multiple-models==1.8.2
django-ses==2.1.1
django-simple-history==2.8.0
django-storages==1.9.1
django-timezone-field==4.0
django-yearlessdate==1.3.1
djangorestframework==3.11.0
djangorestframework-csv==2.1.0
djangorestframework-jwt==1.11.0
docopt==0.6.2
docutils==0.14
drf-yasg==1.17.0
durationpy==0.5
pytest-flake8==1.0.4
python-dateutil==2.7.0
websocket-client==0.54.0
Werkzeug==0.16.1
whitenoise==4.1.3
wrapt==1.12.1
wsgi-request-logger==0.4.6
xlrd==2.0.1
xlwt==1.3.0
zappa==0.52.0
zeep==3.4.0
zipp==3.5.0
```
* Your `zappa_settings.yml`:
```
common:
django_settings: *
profile_name: *
project_name: *
runtime: python3.6
log_level: INFO
num_retained_versions: 10
timeout_seconds: 300
memory_size: 1512
development:
extends: common
aws_region: *
s3_bucket: *
keep_warm: false
environment_variables:
DEV_ENV: "*"
DB_PORT: "*"
DB_USER: *
DB_NAME: *
DB_HOST: *
```
|
closed
|
2021-08-04T16:48:17Z
|
2024-04-13T19:37:21Z
|
https://github.com/zappa/Zappa/issues/1018
|
[
"no-activity",
"auto-closed"
] |
rmithun
| 3
|
chatanywhere/GPT_API_free
|
api
| 42
|
How do I use the free API key?
|
closed
|
2023-06-20T05:53:29Z
|
2023-06-20T06:03:30Z
|
https://github.com/chatanywhere/GPT_API_free/issues/42
|
[] |
Allen-Scai
| 0
|
|
huggingface/diffusers
|
deep-learning
| 10,992
|
Loading WanTransformer3DModel using torch_dtype=torch.bfloat16 keeps some parameters as float32
|
### Describe the bug
Just checking if this is the expected behavior. Calling WanTransformer3DModel.from_pretrained with argument torch_dtype=torch.bfloat16 keeps some parameters as float32.
### Reproduction
```python
import torch
from diffusers import WanTransformer3DModel

repo_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
transformer = WanTransformer3DModel.from_pretrained(repo_id, subfolder="transformer", torch_dtype=torch.bfloat16)
print(transformer.blocks[0].norm2.bias.dtype)           # torch.float32
print(transformer.blocks[0].scale_shift_table.dtype)    # torch.float32
print(transformer.blocks[0].attn1.norm_k.weight.dtype)  # torch.bfloat16
print(transformer.blocks[0].attn1.to_k.weight.dtype)    # torch.bfloat16
```
### Logs
```shell
```
### System Info
Diffusers #812b4e1eaa20fa8d88aa48b645b9d34ca45ecfde (2025-03-06), Linux, Python 3.10
### Who can help?
Calling @DN6 @a-r-r-o-w
|
closed
|
2025-03-06T18:48:23Z
|
2025-03-07T10:25:09Z
|
https://github.com/huggingface/diffusers/issues/10992
|
[
"bug"
] |
spezialspezial
| 5
|
oegedijk/explainerdashboard
|
dash
| 74
|
Reducing update time
|
Hello,
In my application I have around 300k samples and many dimensions. In model_summary, when I change the cutoff, some components (e.g., Model performance metrics, ROC AUC plot, PR AUC plot) take a long time to update (around 20 seconds) because the values have to be recalculated. Any suggestions on how to reduce this time?
|
closed
|
2021-01-28T16:02:48Z
|
2021-02-02T20:42:01Z
|
https://github.com/oegedijk/explainerdashboard/issues/74
|
[] |
mvpalheta
| 8
|
aimhubio/aim
|
data-visualization
| 2,583
|
GCP VertexAI to AIM converter request
|
## 🚀 Feature
A converter or possibly some other kind of solution to export or access Google Cloud Platform (GCP) VertexAI (VAI) experiments in AIM.
### Motivation
Ability to seamlessly integrate VAI executions (and associated metadata and artifacts) within AIM.
### Pitch
AIM will allow for 1-to-1 access and review of VAI recorded experiments including associated metadata.
### Alternatives
VAI experiment tracker.
### Additional context
Would be great to leverage AIM around VAI executions.
|
open
|
2023-03-10T01:35:52Z
|
2023-03-30T04:02:31Z
|
https://github.com/aimhubio/aim/issues/2583
|
[
"type / enhancement",
"area / integrations"
] |
krstp
| 3
|
tensorpack/tensorpack
|
tensorflow
| 698
|
Add Style guidelines?
|
I suggest picking a style for the formatting, i.e.
```
[yapf]
based_on_style = chromium
```
```
yapf -i --recursive ./tensorpack
```
|
closed
|
2018-03-17T22:50:26Z
|
2018-05-30T20:59:39Z
|
https://github.com/tensorpack/tensorpack/issues/698
|
[] |
Mistobaan
| 1
|
Evil0ctal/Douyin_TikTok_Download_API
|
fastapi
| 185
|
Douyin API error
|
Hello! Please take a look at the Douyin API for me; I can't access the Douyin API.
|
closed
|
2023-03-27T04:28:43Z
|
2023-03-27T04:33:14Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/185
|
[
"BUG"
] |
cauindo113
| 2
|
huggingface/datasets
|
pytorch
| 6,663
|
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter`
|
### Describe the bug
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order between the columns and the schema is not preserved anymore. So these functions don't work anymore unless the order happens to align well.
### Steps to reproduce the bug
Try to do `write_batch` with anything that has many columns, and it's likely to break.
### Expected behavior
I expect these functions to work, instead of it trying to cast a column to its incorrect type.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
|
closed
|
2024-02-15T01:43:27Z
|
2024-02-16T09:25:00Z
|
https://github.com/huggingface/datasets/issues/6663
|
[] |
bryant1410
| 3
|
charlesq34/pointnet
|
tensorflow
| 282
|
[Semantic Segmentation] Problem when re-training semantic segmentation: accuracy decreases with default parameters?
|
Hi @charlesq34,
I'm really grateful that you made this source public so I have a chance to learn from it.
I'm trying to re-train PointNet semantic segmentation, but the training process seems to hit overfitting/underfitting around the 10th epoch. I'm still researching the cause, but you may be able to see the result more clearly.
This is a summary result:
(Nearly the full file: [log_train.txt](https://github.com/charlesq34/pointnet/files/6607150/log_train.txt))
```
EPOCH 009
mean loss: 271829.089959
accuracy: 0.549186
----
eval mean loss: 167083.881273
eval accuracy: 0.421601
eval avg class acc: 0.260372
**** EPOCH 010 ****
----
mean loss: 129570.972106
accuracy: 0.561843
----
eval mean loss: 103253.728045
eval accuracy: 0.484906
eval avg class acc: 0.297415
Model saved in file: log6/model.ckpt
**** EPOCH 011 ****
----
mean loss: 37190426.054835
accuracy: 0.354552
----
eval mean loss: 6327.641183
eval accuracy: 0.462198
eval avg class acc: 0.178751
**** EPOCH 012 ****
----
mean loss: 5618.020521
accuracy: 0.337940
----
eval mean loss: 4386.426989
eval accuracy: 0.203347
eval avg class acc: 0.133981
```
|
open
|
2021-06-07T08:19:02Z
|
2022-02-10T00:27:18Z
|
https://github.com/charlesq34/pointnet/issues/282
|
[] |
vanloctc
| 2
|
huggingface/peft
|
pytorch
| 1,391
|
Softprompt broken with use_cache=True
|
### System Info
I am using transformers 4.37 and peft 0.7.1
Quite simply, generate is broken with use_cache = True.
Probably it has something to do with this error:
https://github.com/kipgparker/soft-prompt-tuning/issues/7
This was working in previous versions, but broke recently. I am not sure what version it was working on.
To replicate, train a softprompt, then test it with use_cache = False and use_cache = True. Only the first token generated is correct, after that the outputs are garbled and messed up.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import MistralForCausalLM, LlamaTokenizer
from peft import PeftModel, PeftConfig
import torch
# Hugging Face model id
peft_model_id = "mis_7b_sp_f"
model_id = "mistralai/Mistral-7B-v0.1"
# Load models and tokenizer
config = PeftConfig.from_pretrained(peft_model_id)
model = MistralForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map={"":0}, use_cache=False)
model = PeftModel.from_pretrained(model, peft_model_id)
tokenizer = LlamaTokenizer.from_pretrained(model_id)
def generate_from_model(text):
encoded_input = tokenizer(text, return_tensors='pt')
output_sequences = model.generate(
input_ids=encoded_input['input_ids'].cuda(0),
do_sample=False,
max_new_tokens=35,
num_return_sequences=1,
output_scores=True,
return_dict_in_generate=True,
pad_token_id=0,
eos_token_id=2,
use_cache= True #change this to False to get correct output
)
gen_sequences = output_sequences.sequences[:, :]
for sequence in gen_sequences:
print(tokenizer.decode(sequence, skip_special_tokens=False))
```
### Expected behavior
By switching the use_cache off the problem is fixed. The generation should be the same whether use_cache is turned on or off, but it is different and when cache is turned on the output is garbled and broken.
|
closed
|
2024-01-23T15:57:30Z
|
2024-02-06T01:38:47Z
|
https://github.com/huggingface/peft/issues/1391
|
[] |
Rallio67
| 3
|
lanpa/tensorboardX
|
numpy
| 500
|
Is there any way to add new content to an existing event log?
|
I want to add new log information to an existing event log after a crash or an interruption. How should I do it?
|
closed
|
2019-08-27T09:00:08Z
|
2019-10-03T13:55:41Z
|
https://github.com/lanpa/tensorboardX/issues/500
|
[] |
ArchNew
| 1
|
marshmallow-code/flask-smorest
|
rest-api
| 7
|
flask_classful support
|
I use [flask_classful](https://github.com/teracyhq/flask-classful) in some of my projects, and it would be great if it was supported.
I'll need to dig into the code a bit more, but I'm fairly certain flask_classful's main export, `FlaskView`, is fairly similar in implementation to the now standard `Flask.MethodView` (which is already supported).
Would this be something that could be integrated into the project?
|
closed
|
2018-09-06T05:17:36Z
|
2019-09-21T00:53:30Z
|
https://github.com/marshmallow-code/flask-smorest/issues/7
|
[
"question"
] |
connebs
| 9
|
scikit-hep/awkward
|
numpy
| 2,971
|
Intermittent segfault in pypy
|
### Version of Awkward Array
HEAD
### Description and code to reproduce
We now have a test that's tracking it, but that test fails (with the segfault) occasionally, and we eventually need to do something about it.
If we knew that the segfault is _only_ in pypy, then I'm partially inclined to just declare that we don't support pypy. (How important is it? Who uses it?) _But_, it's not uncommon for a segfault that only occurs on one platform to be an indication of a bug across all platforms, but it "accidentally works" on the others because it lands within C++'s undefined behavior. So as long as we don't know why the segfault occurs, it's more important than just supporting pypy.
|
closed
|
2024-01-19T23:01:52Z
|
2024-05-02T17:07:47Z
|
https://github.com/scikit-hep/awkward/issues/2971
|
[
"bug"
] |
jpivarski
| 0
|
adithya-s-k/marker-api
|
rest-api
| 10
|
Tried to convert multiple pdf files, reported error: malloc(): unsorted double linked list corrupted
|
### Tried to convert multiple pdf files:

### reported error:

|
open
|
2024-06-21T02:53:56Z
|
2024-06-21T08:34:53Z
|
https://github.com/adithya-s-k/marker-api/issues/10
|
[] |
Sakura4036
| 1
|
cobrateam/splinter
|
automation
| 432
|
Firefox 41 workaround: Unable to connect to host 127.0.0.1 on port 7055 after 45000 ms
|
Firefox 41 only allows installation of signed extensions, and Selenium does not yet have a signed extension available. Running Splinter on Firefox 41 fails with `Unable to connect to host 127.0.0.1 on port 7055 after 45000 ms`.
You can make Firefox work again by dropping:
```
firefox_profile.set_preference('xpinstall.signatures.required', False)
```
to `firefox.py`.
This workaround won't work for Firefox 42.
More information
- https://github.com/SeleniumHQ/selenium/issues/927
- https://github.com/SeleniumHQ/selenium/issues/901
Just for your information. I guess this issue can be closed when Selenium gets a proper release.
|
closed
|
2015-09-10T12:40:28Z
|
2016-01-22T18:51:25Z
|
https://github.com/cobrateam/splinter/issues/432
|
[] |
miohtama
| 3
|
deepspeedai/DeepSpeed
|
pytorch
| 6,629
|
When --bind_cores_to_rank is on, only half of the CPUs are used
|
```bash
#!/bin/bash
NUM_NODES=1
NUM_GPUS=3
EXPERTS=3
deepspeed --num_nodes=${NUM_NODES} \
    --num_gpus=${NUM_GPUS} \
    --bind_cores_to_rank \
    train_autodirect_moe.py \
    --log-interval 100 \
    --noisy-gate-policy 'RSample' \
    --moe-param-group \
    --config ../configs/autoDirectFinal/XXX.json \
    --gpus ${NUM_GPUS} \
    --lr 0.0003 \
    --clip 1.0
```
This is my DeepSpeed launch script, and when I turn --bind_cores_to_rank on, only half of the CPUs are used:

When I turn it off, all of the CPUs are used, and it results in lower performance:

Is there any way to make DeepSpeed use more CPUs? Thanks.
|
closed
|
2024-10-16T08:54:33Z
|
2024-10-25T17:24:02Z
|
https://github.com/deepspeedai/DeepSpeed/issues/6629
|
[] |
GensokyoLover
| 2
|
anselal/antminer-monitor
|
dash
| 66
|
Antminer R4 support
|
A user on reddit asked for support for Antminer R4.
>https://www.reddit.com/r/BitcoinMining/comments/75twce/antminer_monitor_opensource_monitoring_software/ds1cl3b/
Can you add the R4? It only has 2, instead of 3, boards (126 chips).
|
closed
|
2018-02-01T17:18:24Z
|
2018-02-21T23:58:49Z
|
https://github.com/anselal/antminer-monitor/issues/66
|
[
":pick: miner_support"
] |
anselal
| 2
|
JoeanAmier/XHS-Downloader
|
api
| 42
|
quicktime format can be saved as mov format
|
Hello, when I was working on this link, I found a point that could improve the user experience:[http://xhslink.com/6eOiVy](http://xhslink.com/6eOiVy)
The video in this URL is downloaded through the program and saved in quicktime format. Its download address is [https://sns-video-hw.xhscdn.com/01024301kqe0qqgvdwc050mx97r08bo2kt](https://sns-video-hw.xhscdn.com/01024301kqe0qqgvdwc050mx97r08bo2kt).
After checking the information, I learned that **quicktime's default save type is mov.** And when our browser (chrome and edge) opens this address, the browser will automatically save the format as mov by default.
In addition, for Windows' native video player, it usually does not support quicktime. It would be very convenient if quicktime could be saved as mov.
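A minimal stdlib sketch of the suggested mapping (the helper name is hypothetical, not part of XHS-Downloader): force `video/quicktime` to the `.mov` extension instead of whatever the platform MIME table suggests.

```python
import mimetypes

# Extensions the downloader should prefer over the platform MIME table.
_OVERRIDES = {"video/quicktime": ".mov"}

def extension_for(content_type: str) -> str:
    """Pick a file extension for a server-reported Content-Type."""
    ext = _OVERRIDES.get(content_type)
    if ext is None:
        ext = mimetypes.guess_extension(content_type) or ".bin"
    return ext

print(extension_for("video/quicktime"))  # .mov
```

This keeps the behavior consistent with what Chrome and Edge do when saving such URLs.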
|
open
|
2024-01-08T05:18:06Z
|
2024-01-08T14:07:02Z
|
https://github.com/JoeanAmier/XHS-Downloader/issues/42
|
[] |
SherlockNovitch
| 1
|
eriklindernoren/ML-From-Scratch
|
deep-learning
| 10
|
It has a problem
|
closed
|
2017-03-02T18:35:39Z
|
2017-03-02T18:37:22Z
|
https://github.com/eriklindernoren/ML-From-Scratch/issues/10
|
[] |
Anon23456
| 0
|
|
cleanlab/cleanlab
|
data-science
| 406
|
CleanLearning default classifier
|
Currently CleanLearning can be run like this which is nice for users who don't know what classifier to use for their data:
```
cl = CleanLearning()
cl.find_label_issues(data, labels)
```
but it always defaults to sklearn's LogisticRegression, which may not work for many types of `data`. Consider deferring the choice of default classifier until `data` is provided and then selecting from a broader suite of default options beyond LogisticRegression as well to make this work for more data types.
Challenge is how to do this without introducing additional dependencies for the cleanlab package, or making this code too complex. This challenge makes this quite complex without developing a full autoML strategy, which we don't want to do here.
A useful contribution in the meantime could be just to provide better error handling when the default LogisticRegression classifier won't work (eg. the dataset is a pytorch or tensorflow dataset).
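A sketch of what that better error handling could look like (function name and wording are hypothetical, not cleanlab's actual code; the sklearn import is deferred so the type check works without sklearn installed):

```python
import numpy as np

def default_classifier_for(data):
    """Fall back to LogisticRegression only for numpy feature arrays."""
    if not isinstance(data, np.ndarray):
        raise TypeError(
            "The default LogisticRegression classifier only supports numpy "
            "feature arrays; pass clf=... explicitly for other data types "
            "(e.g. pytorch or tensorflow datasets)."
        )
    from sklearn.linear_model import LogisticRegression  # deferred import
    return LogisticRegression()
```

The point is to fail fast with an actionable message instead of letting LogisticRegression raise something opaque deep inside `fit`.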
|
open
|
2022-09-06T20:44:01Z
|
2023-11-23T18:42:03Z
|
https://github.com/cleanlab/cleanlab/issues/406
|
[
"enhancement",
"good first issue"
] |
jwmueller
| 5
|
cleanlab/cleanlab
|
data-science
| 1,127
|
update knn shapely score transformation
|
to be `max(x, 0)`, instead of the current transformation `0.5 * (x + 1)`
Needs property based testing. Test should fail if x is ever negative pre-transformation.
Also update the is_issue threshold to be <= 1e-6
|
closed
|
2024-05-23T17:51:33Z
|
2024-06-19T19:09:36Z
|
https://github.com/cleanlab/cleanlab/issues/1127
|
[
"next release"
] |
jwmueller
| 0
|
NVlabs/neuralangelo
|
computer-vision
| 217
|
problem about ground truth
|
I saw that the GT was obtained from a structured-light scanner in the paper.

Can anyone give me some instructions on how to get a GT 3D model from a structured-light scanner? (Assume I have one.)
|
open
|
2024-11-28T02:17:46Z
|
2024-11-28T02:17:46Z
|
https://github.com/NVlabs/neuralangelo/issues/217
|
[] |
NNsauce
| 0
|
flasgger/flasgger
|
flask
| 496
|
Unable to set different endpoint in config for static files
|
If we set a different 'endpoint' value in the config, we receive an error.
Static files have a hardcoded endpoint: https://github.com/flasgger/flasgger/blob/0e51362abe0c890688b39eacd807228b8207b161/flasgger/base.py#L99
Example:
```python
config = Swagger.DEFAULT_CONFIG
config['endpoint'] = 'flaskgger'
Swagger(application, config=config)
```
```
werkzeug.routing.BuildError: Could not build url for endpoint 'flasgger.static' with values ['filename']. Did you mean 'flaskgger.static' instead?
```
|
open
|
2021-09-22T04:33:47Z
|
2021-09-22T04:33:47Z
|
https://github.com/flasgger/flasgger/issues/496
|
[] |
karinazuzex
| 0
|
christabor/flask_jsondash
|
flask
| 147
|
add requests_mock to requirements.txt
|
This should help Travis tests pass. Alternatively can change travis.yml,
|
closed
|
2017-12-01T15:52:23Z
|
2018-05-09T23:27:23Z
|
https://github.com/christabor/flask_jsondash/issues/147
|
[
"testing"
] |
epogrebnyak
| 1
|
zappa/Zappa
|
django
| 437
|
[Migrated] Route 53 Hosted Zones not updated after `zappa certify` and Let's Encrypt
|
Originally from: https://github.com/Miserlou/Zappa/issues/1141 by [ScottSturdivant](https://github.com/ScottSturdivant)
After following the [Let's Encrypt Guide](https://github.com/Miserlou/Zappa/blob/master/docs/domain_with_free_ssl_dns.md) and running the `zappa certify` command, I waited 1.5 hours (more than the 40 minute recommendation) and my domain was not available. Trying to figure out what Zappa was doing in AWS land, I determined that the domain name had been created in the API Gateway and that it was pointing to a cloudfront distribution.
For those who are not familiar with API Gateway's usage of CloudFront, apparently these shouldn't show up if you're looking at the CloudFront interface. (I kept waiting for mine to appear.)
In Route 53, I had a hunch that I needed an A or CNAME record with my Hosted Zone. None were present.
The slack channel suggested adding an A Alias record, pointing to the cloudfront distribution that the API Gateway - Custom Domain was pointing to.
As soon as I did this, my custom domain was functional.
## Expected Behavior
After waiting the 40+ minutes, the domain would be operational.
## Actual Behavior
No A record was created, meaning that the domain was not resolvable and not accessible. (I just tried using ACM instead of Let's Encrypt, and the Route 53 Hosted Zone was updated to have a CNAME. Presumably Let's Encrypt should behave the same.)
## Possible Fix
The `route_53` argument is now ignored in `core.update_domain_name()`. Looks like it was removed in this PR: https://github.com/Miserlou/Zappa/pull/883/files#diff-69e19f3663afb330594add0cc19b0a1cL1769
Maybe it shouldn't have been removed?
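For reference, the manual workaround from the Slack channel can be scripted. The sketch below only builds the change-batch payload as plain data (the boto3 call is shown commented, so nothing here talks to AWS); `Z2FDTNDATAQYW2` is the fixed hosted-zone id AWS uses for CloudFront alias targets.

```python
def alias_change_batch(domain: str, cloudfront_domain: str) -> dict:
    """Build a Route 53 UPSERT that points `domain` at a CloudFront distribution."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # CloudFront's fixed zone id
                    "DNSName": cloudfront_domain,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId=my_zone_id,
#     ChangeBatch=alias_change_batch("api.example.com.", cf_domain))
```

This is the A-alias record that `zappa certify` apparently skipped creating for the Let's Encrypt path.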
|
closed
|
2021-02-20T08:32:55Z
|
2024-04-13T16:17:51Z
|
https://github.com/zappa/Zappa/issues/437
|
[
"no-activity",
"auto-closed"
] |
jneves
| 2
|
microsoft/qlib
|
machine-learning
| 1,757
|
HS_SYMBOLS_URL 404
|
## 🐛 Bug Description
The URL used in scripts/data_collector/utils.py is dead.
## To Reproduce
Steps to reproduce the behavior:
1. run collector.py under scripts/data_collector/pit
2. it gets stuck after logging "get cn stock symbols....."
3. debug finds that `requests.get(HS_SYMBOLS_URL.format(s_type=_k), timeout=None)` gets 404 response.
## Expected Behavior
HS_SYMBOLS_URL return right symbols
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
- Qlib version: "0.9.3.99"
- Python version: 3.8
- OS (`Windows`, `Linux`, `MacOS`):MacOS
## Additional Notes
|
open
|
2024-03-11T03:50:27Z
|
2024-04-14T05:56:28Z
|
https://github.com/microsoft/qlib/issues/1757
|
[
"bug"
] |
zhstark
| 1
|
donnemartin/system-design-primer
|
python
| 657
|
system_design
|
open
|
2022-04-13T08:39:24Z
|
2022-04-23T13:17:58Z
|
https://github.com/donnemartin/system-design-primer/issues/657
|
[
"needs-review"
] |
abnnjo
| 0
|
|
onnx/onnx
|
scikit-learn
| 5,848
|
ONNX model > 2G
|
# Ask a Question
### Question
Hello, I am converting an ONNX model that is larger than 2 GB. I know that a model cannot exceed 2 GB because of Google's protobuf limit. What I want to ask is: is there any way to overcome the 2 GB limit? Can you give me some ideas?
|
closed
|
2024-01-09T09:34:41Z
|
2025-01-31T06:44:03Z
|
https://github.com/onnx/onnx/issues/5848
|
[
"question",
"stale"
] |
hch-baobei
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 1,146
|
OSError: PortAudio library not found
|
Does it have to do with FFMPEG?
|
closed
|
2022-12-11T14:18:25Z
|
2023-01-08T08:55:15Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1146
|
[] |
pikleman101
| 1
|
waditu/tushare
|
pandas
| 969
|
stk_account endpoint times out
|
RESTful HTTP requests frequently return a timeout.
The request:
```json
{"api_name":"stk_account","token":"","params":{"start_date":"20010101","end_date":"20010201"},"fields":null}
```
The response:
```html
<html>
<head>
<title>504 Gateway Time-out</title>
</head>
<body bgcolor="white">
<center>
<h1>504 Gateway Time-out</h1>
</center>
<hr>
<center>nginx/1.15.0</center>
</body>
</html>
<!-- a padding to disable MSIE and Chrome friendly error page -->
```
Registered ID: reg=243037
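Until the gateway issue is fixed server-side, a client-side retry with exponential backoff can paper over intermittent 504s. A stdlib-only sketch (the `_open` parameter is a testability hook, not part of any tushare API):

```python
import json
import time
import urllib.request

def post_with_retry(url, payload, attempts=3, timeout=60, _open=urllib.request.urlopen):
    """POST JSON and retry with exponential backoff on failure."""
    body = json.dumps(payload).encode("utf-8")
    last_exc = None
    for attempt in range(attempts):
        try:
            req = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"})
            with _open(req, timeout=timeout) as resp:
                return json.load(resp)
        except Exception as exc:
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(2 ** attempt)  # back off: 1s, 2s, ...
    raise last_exc
```

This retries the whole request rather than waiting on a single long-lived connection, which tends to help against load-balancer timeouts.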
|
closed
|
2019-03-17T08:48:27Z
|
2019-03-25T13:18:03Z
|
https://github.com/waditu/tushare/issues/969
|
[] |
xiaokesong
| 1
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 674
|
Error
|
Hi, I'm getting an error and I'm not sure why. Can you help?
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
ZeroDivisionError: "float division by zero"
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 672, in seperate
File "separate.py", line 803, in inference_vr
File "separate.py", line 767, in _execute
"
Error Time Stamp [2023-07-19 16:18:59]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: 4
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
|
open
|
2023-07-19T06:26:27Z
|
2023-07-19T06:26:27Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/674
|
[] |
JayLogan93
| 0
|
tflearn/tflearn
|
data-science
| 487
|
Are there any complete examples of training using multiple GPUs?
|
I'm new to tflearn. Recently I've been trying to accelerate training using multiple GPUs, but I haven't found any tutorials/examples on that.
|
open
|
2016-11-29T06:41:56Z
|
2017-06-30T11:32:54Z
|
https://github.com/tflearn/tflearn/issues/487
|
[] |
bohanzhuang
| 1
|
fastapi-users/fastapi-users
|
asyncio
| 343
|
VARCHAR requires a length on dialect mysql
|
Hello
# Problem
There is an error when creating tables on MySQL:
```
sqlalchemy.exc.CompileError: (in table 'user', column 'email'): VARCHAR requires a length on dialect mysql
```
Affected columns:
- SQLAlchemyBaseUserTable: email, hashed_password
- SQLAlchemyBaseOAuthAccountTable: oauth_name, access_token, refresh_token, account_id, account_email
https://github.com/frankie567/fastapi-users/blob/0134a43fade100c32588a0bf09d21b1939d24c8e/fastapi_users/db/sqlalchemy.py#L56
According to SQLAlchemy Docs,
`CREATE TABLE DDL is issued if a VARCHAR with no length is included.`
# Solution
Pull request #344
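A minimal standalone reproduction of the underlying SQLAlchemy behavior (the table names and the 320-character email length are illustrative, not fastapi-users' actual schema):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects import mysql
from sqlalchemy.schema import CreateTable

metadata = MetaData()

# String() with no length compiles on SQLite/PostgreSQL but not on MySQL:
bad = Table("user_bad", metadata,
            Column("id", Integer, primary_key=True),
            Column("email", String))
good = Table("user_good", metadata,
             Column("id", Integer, primary_key=True),
             Column("email", String(length=320)))

try:
    str(CreateTable(bad).compile(dialect=mysql.dialect()))
    bad_compiles = True
except Exception:  # sqlalchemy.exc.CompileError on the MySQL dialect
    bad_compiles = False

good_ddl = str(CreateTable(good).compile(dialect=mysql.dialect()))
print(bad_compiles, "VARCHAR(320)" in good_ddl)
```

Adding an explicit `length=` to the affected columns, as #344 does, is what makes the DDL compile on MySQL.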
|
closed
|
2020-09-27T09:46:41Z
|
2020-09-30T12:48:51Z
|
https://github.com/fastapi-users/fastapi-users/issues/343
|
[
"bug"
] |
soulee-dev
| 2
|
widgetti/solara
|
jupyter
| 506
|
Help trying to port my widget to work in Solara
|
I am the creator of [Buckaroo](https://github.com/paddymul/buckaroo) a full featured dataframe viewer that wraps ag-grid. I am trying to figure out how to expose buckaroo as a component for Solara apps.
A couple of points:
* Buckaroo extends ipywidgets.DOMWidget
* Buckaroo generally functions as its own miniapp configured with optionally user provided mutation functions, users don't wire in any events themselves.
* Inside the frontend of Buckaroo there is the core `DFViewer` component that takes a serialized dataframe and Buckaroo styling config. There is also extra UI that swaps clientside styling configs, and sets properties on the parent widget that communicates back with the python widget (which transform function to apply, some other things)
It might be easier and more straightforward to integrate just the `DFViewer` frontend component. I don't have this as a separate DOMWidget, but I could work on it. This would probably be easier for solara users since it is less opinionated than the full BuckarooWidget.
# Where I'm getting stuck
I am having trouble understanding the wrapping stage. I'm a bit lost as to how to make Buckaroo work there. I looked at the other examples (IPYWidgets, BQPlot, IPYVeutify).
The codegen in particular is confusing. what is the generated code accomplishing?
Are you inserting actual python code into an existing file in site-packages?
are you only using it to run mypy on the generated code?
are you using it to get completions in an IDE?
---
Ahhh, after looking at the code in my site-packages, I see that you are indeed writing to the file.
Documenting this would help.
|
open
|
2024-02-17T16:26:35Z
|
2024-02-17T16:36:35Z
|
https://github.com/widgetti/solara/issues/506
|
[] |
paddymul
| 2
|
ansible/awx
|
automation
| 15,712
|
Host and Group vars reordered (sorted alphabetically)
|
### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Check mode doesn't work correctly because all printed Jinja2 variables (host and group variables) are sorted alphabetically.
Variables in the inventory are also sorted alphabetically, so the original order can't be checked. This matters because if the order of access and deny directives changes, the config will be broken.
```
access_log /var/log/nginx/xxx.log;
+ client_max_body_size 50m;
error_log /var/log/nginx/xxx.log notice;
+ proxy_connect_timeout 10;
proxy_pass http://192.168.10.115:9080;
+ proxy_send_timeout 360;
proxy_set_header X-Forwarded-Proto $scheme;
- proxy_send_timeout 360;
- proxy_connect_timeout 10;
- client_max_body_size 50m;
```
### AWX version
24.6.2.dev0
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
docker development environment
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
Used project with git source and inventory from project.
### Expected results
With the ansible --check --diff options, no changes should be found.
### Actual results
Changes are found because lines are reordered alphabetically.
### Additional information
An option to view sorted variables is helpful, but only if the sorting is not applied to the real configs.
|
open
|
2024-12-18T17:00:33Z
|
2025-01-22T18:57:04Z
|
https://github.com/ansible/awx/issues/15712
|
[
"type:bug",
"component:ui",
"help wanted",
"community"
] |
leweafan
| 0
|
alteryx/featuretools
|
data-science
| 2,680
|
Update deps so min dependency generator works
|
closed
|
2024-02-23T17:39:44Z
|
2024-02-26T14:59:07Z
|
https://github.com/alteryx/featuretools/issues/2680
|
[] |
thehomebrewnerd
| 0
|
|
mwaskom/seaborn
|
matplotlib
| 3,176
|
pandas plotting backend?
|
Plotly has toplevel `.plot` function which allows for a [pandas plotting backend](https://github.com/pandas-dev/pandas/blob/d95bf9a04f10590fff41e75de94c321a8743af72/pandas/plotting/_core.py#L1848-L1861) to exist:
https://github.com/plotly/plotly.py/blob/4363c51448cda178463277ff3c12becf35dbd3b8/packages/python/plotly/plotly/__init__.py
Like this, if people have `plotly` installed, they can do:
```
pd.set_option('plotting.backend', 'plotly')
```
and then `df.plot.line(x=x, y=y)` will defer to `plotly.express.line(data_frame=df, x=x, y=y)`:

It'd be nice to be able to do
```
pd.set_option('plotting.backend', 'seaborn')
```
and then have `df.plot.line(x=x, y=y)` defer to `seaborn.line(data=df, x=x, y=y)`
Would you be open to these ~150 lines of code or so to allow `seaborn` to be set as a plotting backend in pandas? Check the link above to see what it looks like in `plotly`. I'd be happy to implement this, just checking if it'd be welcome
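A rough sketch of what such an adapter module could look like (everything here is hypothetical, not part of seaborn; pandas resolves the registered backend and calls `backend.plot(df, kind=..., **kwargs)`, which is the hook this dispatches on):

```python
# seaborn_backend.py — hypothetical pandas plotting backend adapter.
_DISPATCH = {"line": "lineplot", "scatter": "scatterplot", "bar": "barplot"}

def plot(data, kind, **kwargs):
    """Entry point pandas calls for df.plot.<kind>(...)."""
    if kind not in _DISPATCH:
        raise NotImplementedError(f"seaborn backend: unsupported kind {kind!r}")
    import seaborn as sns  # deferred so the dispatch table is testable alone
    return getattr(sns, _DISPATCH[kind])(data=data, **kwargs)
```

The ~150 lines in plotly mostly flesh out this same dispatch plus keyword translation between the pandas and backend signatures.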
|
closed
|
2022-12-05T08:43:50Z
|
2022-12-06T21:32:08Z
|
https://github.com/mwaskom/seaborn/issues/3176
|
[] |
MarcoGorelli
| 8
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 379
|
How do I train Faster R-CNN on a single-channel dataset?
|
I made the following changes:
1. In mobilenetv2_model.py I changed the input channel count of the first conv layer to 1.
2. I added a preprocessing step to transforms that changes the image's channel count, and added it to transforms.Compose at the start:
```python
class Grayscale(nn.Module):
    def __init__(self, num_output_channels=1):
        super().__init__()
        self.num_output_channels = num_output_channels  # note: not used in __call__

    def __call__(self, image, target):
        if image.mode != 'L':
            image = image.convert('L')
        return image, target
```
It still fails with: RuntimeError: Given groups=1, weight of size [32, 1, 3, 3], expected input[8, 3, 800, 1216] to have 1 channels, but got 3 channels instead
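One way to read the error (a plain-Python sketch with no torch dependency; the shapes come straight from the message, and the diagnosis is an assumption, not a confirmed cause):

```python
# The RuntimeError shows the batch reaching the model is still (8, 3, 800, 1216),
# i.e. 3 channels — so the Grayscale transform is most likely missing from the
# Compose that the failing DataLoader actually uses (e.g. only added to one of
# the train/val pipelines), or it runs after the image has already been turned
# into a 3-channel tensor.
def batch_channels(batch_shape):
    # batch_shape is (N, C, H, W); the modified first conv expects C == 1
    return batch_shape[1]

reported_shape = (8, 3, 800, 1216)  # taken from the error message
expected_channels = 1               # first conv changed to in_channels=1
mismatch = batch_channels(reported_shape) != expected_channels
```

Printing the shape of one batch right before the model call would confirm which pipeline is producing the 3-channel input.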
|
closed
|
2021-10-27T07:36:33Z
|
2021-10-28T08:44:36Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/379
|
[] |
ghost
| 2
|
aleju/imgaug
|
deep-learning
| 178
|
Does this code use the gpu ?
|
Hi. I just found out that opencv-python doesn't actually use the GPU, even when properly compiled with CUDA support enabled.
I'm currently actively looking for a package that does. Does this repo expose code that runs augmentations on the GPU?
|
open
|
2018-09-10T15:10:03Z
|
2018-10-30T14:50:22Z
|
https://github.com/aleju/imgaug/issues/178
|
[] |
dmenig
| 2
|
mouredev/Hello-Python
|
fastapi
| 405
|
Royal International Chuanglian Entertainment trusted online casino - best online gambling site of 2025
|
For those seeking a top online gambling platform, the Royal International Chuanglian Entertainment online casino is the best choice. The Chuanglian Entertainment game app team has researched the platform thoroughly to provide comprehensive insights. Royal International Chuanglian Entertainment is a trusted and reputable online casino with a wide variety of online casino games and a focus on player safety, privacy, and user-friendliness. Start your online gambling entertainment with Chuanglian Entertainment.
Download the Chuanglian Entertainment game app at 376838.com, or add our staff on WeChat to open an account; all players are welcome to inquire. WeChat: xiaolu460570, Telegram: @lc15688
With more than a decade of experience, the Chuanglian Entertainment online casino has built a large and loyal customer base, including gaming enthusiasts and celebrities from various industries, cementing its position as one of Royal International's top online casinos. Founded in 2011 by industry veterans, this trusted online casino has consistently delivered a high-quality gaming experience. The Shiba Inu is our company's symbol, representing our commitment to trust, loyalty, and companionship within the gaming community. With its numerous certifications and licenses, the Chuanglian Entertainment online casino is a reputable online gambling destination.
This Royal International Chuanglian Entertainment online casino holds the coveted master license issued by the Curaçao government (Curaçao Gaming), authorizing it to offer betting services. Chuanglian Entertainment is one of the online casinos approved by government agencies, international gaming bodies, game testing laboratories, and regulators.
|
closed
|
2025-02-23T04:51:25Z
|
2025-03-02T11:12:44Z
|
https://github.com/mouredev/Hello-Python/issues/405
|
[] |
yzclylgj
| 0
|
dask/dask
|
pandas
| 11,842
|
⚠️ Upstream CI failed ⚠️
|
[Workflow Run URL](https://github.com/dask/dask/actions/runs/14025315686)
<details><summary>Python 3.12 Test Summary</summary>
```
dask/dataframe/tests/test_dataframe.py::test_describe_numeric[tdigest-test_values0]: AssertionError: DataFrame are different
DataFrame shape mismatch
[left]: (7, 2)
[right]: (8, 2)
dask/dataframe/tests/test_dataframe.py::test_describe_numeric[dask-test_values1]: AssertionError: DataFrame are different
DataFrame shape mismatch
[left]: (7, 2)
[right]: (8, 2)
dask/dataframe/tests/test_dataframe.py::test_describe[include8-None-percentiles8-None]: AssertionError: DataFrame are different
DataFrame shape mismatch
[left]: (11, 4)
[right]: (10, 4)
```
</details>
|
open
|
2025-03-23T02:01:10Z
|
2025-03-24T02:02:50Z
|
https://github.com/dask/dask/issues/11842
|
[
"upstream"
] |
github-actions[bot]
| 0
|
NullArray/AutoSploit
|
automation
| 1,318
|
Needs updating to work with newer pip (i.e. Kali 2022.4 compatible)
|
<!--
Package python-pip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
python3-pip
E: Package 'python-pip' has no installation candidate
./install.sh: line 15: pip2: command not found
Installation Complete
-->
# Running information
<!-- Running detail, OS, arch, did you clone, etc -->
- What branch did you download?
- Clone, or docker run?
- What OS are you running?
# Exploit module information
<!-- We will need this information to determine if it is a metasploit issue or not -->
- What exploit was deployed?
- Was a session generated for the target?
- What version of metasploit are you running?
# Program information
<!-- Basic python information we will need -->
- Python version number?
- AutoSploit version number?
- Any console output that is relevant to the issue:
- Traceback (error) if any:
|
open
|
2023-03-13T14:51:03Z
|
2023-03-13T14:51:03Z
|
https://github.com/NullArray/AutoSploit/issues/1318
|
[] |
jamieduk
| 0
|
huggingface/datasets
|
numpy
| 6,810
|
Allow deleting a subset/config from a no-script dataset
|
As proposed by @BramVanroy, it would be neat to have this functionality through the API.
|
closed
|
2024-04-15T07:53:26Z
|
2025-01-11T18:40:40Z
|
https://github.com/huggingface/datasets/issues/6810
|
[
"enhancement"
] |
albertvillanova
| 3
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 15,483
|
[Bug]: Loading an fp16 SDXL model fails on MPS when model loading RAM optimization is enabled
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Loading an fp16 model on MPS fails when `--disable-model-loading-ram-optimization` is not enabled. I suspect this is because fp32 is natively supported by MPS, but something is causing an attempted cast to fp64 from fp16, which is not supported.
### Steps to reproduce the problem
1. Try to load an fp16 model on MPS (Apple Silicon).
### What should have happened?
The model loading RAM optimization hack shouldn't interfere with this.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
```
{
"Platform": "macOS-14.2.1-arm64-arm-64bit",
"Python": "3.10.13",
"Version": "v1.9.0-RC-12-gac8ffb34",
"Commit": "ac8ffb34e3d1196319c8dcfcf0f5d3eded713176",
"Commandline": [
"webui.py",
"--skip-torch-cuda-test",
"--upcast-sampling",
"--no-half-vae",
"--use-cpu",
"interrogate",
"--skip-load-model-at-start"
],
}
```
### Console logs
```Shell
Traceback (most recent call last):
File "stable-diffusion-webui/modules/options.py", line 165, in set
option.onchange()
File "stable-diffusion-webui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
File "stable-diffusion-webui/modules/initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "stable-diffusion-webui/modules/sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
File "stable-diffusion-webui/modules/sd_models.py", line 826, in reuse_model_from_already_loaded
load_model(checkpoint_info)
File "stable-diffusion-webui/modules/sd_models.py", line 748, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "stable-diffusion-webui/modules/sd_models.py", line 393, in load_model_weights
model.load_state_dict(state_dict, strict=False)
File "stable-diffusion-webui/modules/sd_disable_initialization.py", line 223, in <lambda>
module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
File "stable-diffusion-webui/modules/sd_disable_initialization.py", line 221, in load_state_dict
original(module, state_dict, strict=strict)
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2139, in load_state_dict
load(self, state_dict)
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2127, in load
load(child, child_state_dict, child_prefix)
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2127, in load
load(child, child_state_dict, child_prefix)
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2127, in load
load(child, child_state_dict, child_prefix)
[Previous line repeated 3 more times]
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2121, in load
module._load_from_state_dict(
File "stable-diffusion-webui/modules/sd_disable_initialization.py", line 226, in <lambda>
conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
File "stable-diffusion-webui/modules/sd_disable_initialization.py", line 191, in load_from_state_dict
module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_meta_registrations.py", line 4806, in zeros_like
res = aten.empty_like.default(
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_ops.py", line 513, in __call__
return self._op(*args, **(kwargs or {}))
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 250, in _fn
result = fn(*args, **kwargs)
File "stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/_refs/__init__.py", line 4755, in empty_like
return torch.empty_permuted(
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead
```
### Additional information
_No response_
|
open
|
2024-04-11T05:57:42Z
|
2024-04-11T05:58:36Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15483
|
[
"bug-report"
] |
akx
| 0
|
iterative/dvc
|
data-science
| 9,789
|
dvc remove --force
|
At the moment (v2.58.2), `dvc remove {target.dvc}` first unprotects `{target.dvc}`, at least when applied to `.dvc` files.
I'm sure there is a good reason for this, but it can be painfully slow for large files.
I'd like to request a `--force` option that does all the other `dvc remove` work, but without unprotecting the targets first. All the normal caveats around using `--force` options would apply, and it might be worth double-checking with the user before carrying it out, as is done with `dvc commit`, for example.
This would be particularly useful for large imports, where we are not afraid to delete the data from the workspace because we are confident that we can always import it again from the source data registry.
An alternative name for this kind of option might be `--no-unprotect` or similar.
|
open
|
2023-08-02T09:56:54Z
|
2024-01-11T20:47:10Z
|
https://github.com/iterative/dvc/issues/9789
|
[
"feature request",
"p3-nice-to-have",
"A: data-management"
] |
johnyaku
| 4
|
yunjey/pytorch-tutorial
|
deep-learning
| 186
|
I can't download the pretrained model. Could you send it to me? My email is 2413226942@qq.com
|
closed
|
2019-07-30T05:47:38Z
|
2019-07-30T05:47:45Z
|
https://github.com/yunjey/pytorch-tutorial/issues/186
|
[] |
zhengmingzhang
| 0
|
|
man-group/arctic
|
pandas
| 638
|
date util test fails when run from a Europe/London timezone
|
#### Arctic Version
```
1.69.0
```
#### Arctic Store
```
Any
```
#### Platform and version
Ubuntu 14.04.5 LTS, python 2.7.6
#### Description of problem and/or code sample that reproduces the issue
Test `tests/unit/date/test_util.py#test_datetime_to_ms_and_back()` fails when run from a machine located in the timezone `mktz('Europe/London')`
I've made a suggested fix here https://github.com/manahl/arctic/pull/637
|
closed
|
2018-10-15T15:06:54Z
|
2018-10-26T01:02:18Z
|
https://github.com/man-group/arctic/issues/638
|
[] |
krywen
| 1
|
deepset-ai/haystack
|
nlp
| 8,681
|
Haystack should not configure root logger handlers
|
**Describe the bug**
Any application that imports this library cannot expect their own configuration of the Python root logger to be respected, because this library adds to the root logger's list of handlers.
This issue occurred previously in https://github.com/deepset-ai/haystack/issues/2485 and https://github.com/deepset-ai/haystack/issues/4202
**Expected behavior**
An application using this library should be able to `import haystack` and then use `logging.basicConfig()` as normal.
**Additional context**
[This issue was introduced here](https://github.com/deepset-ai/haystack/commit/2a591280ab43aba52bfd5cf61c2b0056c5655b98#diff-6de31bc13ff57e52637aeb2c3c8946b8244ae6426f5a0940a2dbf4ff331b3214R89-R97)
This is an issue because [`logging.basicConfig()` is ignored once any handlers are configured](https://docs.python.org/3/library/logging.html#logging.basicConfig). At a bare minimum, it is reasonable to expect all libraries make no modifications to the root handler. The quickest fix is to edit line 89 so as to only add the handler onto the subloggers that will be used throughout the library:
```python
haystack_logger = logging.getLogger("haystack") # only add our handler within our library's hierarchy
# avoid adding our handler twice
old_handlers = [
h for h in haystack_logger.handlers
if (isinstance(h, logging.StreamHandler) and h.name == "HaystackLoggingHandler")
]
for old_handler in old_handlers:
haystack_logger.removeHandler(old_handler)
haystack_logger.addHandler(handler)
# or more succinctly, only add if not already present
# if not old_handlers:
# haystack_logger.addHandler(handler)
```
However, it is also generally expected that the application and not the library is the arbiter of all log handlers, [as recommended in the python docs' Logging Cookbook](https://docs.python.org/3.12/howto/logging-cookbook.html#adding-handlers-other-than-nullhandler-to-a-logger-in-a-library). This would mean it is unusual for any library to implicitly add a log handler -- it is the application developer who knows best what log formats they need.
I agree that providing recommended overrides can be very convenient; one route would be to export a factory for the provided handler so that the consuming application can easily opt-in to this feature:
```python
from haystack.logging import configure_logging_handler # function to create the HaystackLoggingHandler
logging.getLogger().addHandler(configure_logging_handler()) # app dev can choose to add at the root, at the haystack level, or not at all
````
Quick blog post summary of developer expectations on this topic: http://rednafi.com/python/no_hijack_root_logger/
**To Reproduce**
Minimal repro:
```python
from haystack import Document
from haystack.document_stores.in_memory import InMemoryDocumentStore
import logging
import pandas as pd
logging.basicConfig(level=logging.CRITICAL)
document_store = InMemoryDocumentStore()
document_store.write_documents([
# still prints a warning, because of the logging.getLogger().root changes within haystack
Document(
content="My name is Jean and I live in Paris.",
dataframe=pd.DataFrame({"name": ["Jean"], "city": ["Paris"]}),
),
])
```
**FAQ Check**
- [X] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Ubuntu 22.04
- GPU/CPU: N/A
- Haystack version (commit or version number): 2.7.0 in my testing, up to present
- DocumentStore: N/A
- Reader: N/A
- Retriever: N/A
|
open
|
2025-01-03T21:17:31Z
|
2025-01-06T08:47:40Z
|
https://github.com/deepset-ai/haystack/issues/8681
|
[
"P2"
] |
CSRessel
| 0
|
keras-team/keras
|
python
| 20,112
|
Custom loss usage by name: not working in V3 but worked in V2
|
Default use case for custom loss in V2 (for me) was:
1. Register custom loss with register_keras_serializable
2. Use it in model compile by name compile(loss="MyPackage>MyLoss")
This worked in Keras v2 but does not work in Keras v3. Here is a simple test: https://colab.research.google.com/drive/1R3z3fFhV7NN1Oa_HJYp60LsneoxjOlvj?usp=sharing
|
open
|
2024-08-12T11:49:14Z
|
2024-08-22T10:39:13Z
|
https://github.com/keras-team/keras/issues/20112
|
[
"type:support"
] |
shkarupa-alex
| 4
|
mitmproxy/pdoc
|
api
| 248
|
SyntaxError in templates: unexpected EOF while parsing
|
#### Problem Description
When parsing our Python 3.8-compatible code base, the following error occurs after generating parts of the documentation:
```
File "/usr/local/lib/python3.8/dist-packages/jinja2/runtime.py", line 262, in call
return __obj(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/jinja2/runtime.py", line 570, in __call__
return self._invoke(arguments, autoescape)
File "/usr/local/lib/python3.8/dist-packages/jinja2/asyncsupport.py", line 110, in _invoke
return original_invoke(self, arguments, autoescape)
File "/usr/local/lib/python3.8/dist-packages/jinja2/runtime.py", line 574, in _invoke
rv = self._func(*arguments)
File "/home/anyone/.local/lib/python3.8/site-packages/pdoc/templates/default/module.html.jinja2", line 196, in macro
{% block style_content %}
File "/usr/local/lib/python3.8/dist-packages/jinja2/environment.py", line 430, in getattr
return getattr(obj, attribute)
File "/usr/lib/python3.8/functools.py", line 966, in __get__
val = self.func(instance)
File "/home/anyone/.local/lib/python3.8/site-packages/pdoc/doc.py", line 767, in decorators
for t in doc_ast.parse(obj).decorator_list:
File "/home/anyone/.local/lib/python3.8/site-packages/pdoc/doc_ast.py", line 60, in parse
return _parse_function(src)
File "/home/anyone/.local/lib/python3.8/site-packages/pdoc/doc_ast.py", line 196, in _parse_function
tree = ast.parse(_dedent(source))
File "/usr/lib/python3.8/ast.py", line 47, in parse
return compile(source, filename, mode, flags,
File "<unknown>", line 9
)
SyntaxError: unexpected EOF while parsing
```
Before, a number of user-warnings were logged, e.g. when using stringified annotations which were only imported with `typing.TYPE_CHECKING`, e.g.
```
if TYPE_CHECKING:
from pkg import MyType
class SomeClass:
attribute: "MyType"
```
would result in
```
UserWarning: Error parsing type annotation for pkg.some_class. Import of MyType failed: name 'MyType' is not defined
```
However, those warnings don't seem to cause the error, since some of their documentations were created successfully, before the error occurred.
Since the error is quite inexpressive, I can't give a small reproducible example. How could I generate more useful insights?
#### System Information
pdoc version 6.4.3
|
closed
|
2021-04-20T16:13:16Z
|
2021-04-21T11:06:16Z
|
https://github.com/mitmproxy/pdoc/issues/248
|
[
"bug"
] |
jstriebel
| 7
|
vitalik/django-ninja
|
pydantic
| 970
|
[BUG] Unable to generate pydantic-core schema for <class 'function'>.
|
**Describe the bug**
When I try to pass a function to auth I receive an error from Pydantic about it being `Unable to generate pydantic-core schema for <class 'function'>.`
**Versions (please complete the following information):**
- Python version: 3.10
- Django version: 4.2.5
- Django-Ninja version: 1.0.1
- Pydantic version: 2.4.1
**Detailed description**
When I try to run this snippet that's specified in the [official documentation](https://django-ninja.dev/guides/authentication/):
```python
def ip_whitelist(request):
if request.META["REMOTE_ADDR"] == "8.8.8.8":
return "8.8.8.8"
@api.get("/ipwhiltelist", auth=ip_whitelist)
def ipwhiltelist(request):
return f"Authenticated client, IP = {request.auth}"
```
I run into the following problem:
```
pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'function'>. Set `arbitrary_types_allowed=True` in the model_config to ignore this error or implement `__get_pydantic_core_schema__` on your type to fully support it.
If you got this error by calling handler(<some type>) within `__get_pydantic_core_schema__` then you likely need to call `handler.generate_schema(<some type>)` since we do not call `__get_pydantic_core_schema__` on `<some type>` otherwise to avoid infinite recursion.
```
|
open
|
2023-12-01T07:07:03Z
|
2023-12-05T18:18:38Z
|
https://github.com/vitalik/django-ninja/issues/970
|
[] |
alessandrosp
| 1
|
Sanster/IOPaint
|
pytorch
| 284
|
Many dependencies are pinned to specific versions
|
I am trying to add this to an environment with other dependencies, and it's extremely easy to break everything because so many dependencies are pinned to specific versions (`==` in the requirements file). Are those pins strictly needed?
|
closed
|
2023-04-20T08:36:22Z
|
2023-04-30T14:33:23Z
|
https://github.com/Sanster/IOPaint/issues/284
|
[] |
luke14free
| 1
|
ray-project/ray
|
deep-learning
| 50,762
|
Minio as S3 storage
|
### Description
I want to use MinIO as the S3 storage backend. When I deployed the Serve app, I pointed it at MinIO's S3 address, but got this error:
`Runtime env setup for app 'api' failed:
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/serve/_private/application_state.py", line 660, in _reconcile_build_app_task
args, err = ray.get(self._build_app_task_info.obj_ref)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
return fn(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
return func(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/worker.py", line 2772, in get
values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/worker.py", line 921, in get_objects
raise value
ray.exceptions.RuntimeEnvSetupError: Failed to set up runtime environment.
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 330, in _get
return client.get_object(Bucket=bucket, Key=key, Range=range_string)
File "/home/ray/anaconda3/lib/python3.9/site-packages/botocore/client.py", line 535, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/ray/anaconda3/lib/python3.9/site-packages/botocore/client.py", line 980, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the GetObject operation: The AWS Access Key Id you provided does not exist in our records.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/runtime_env/agent/runtime_env_agent.py", line 384, in _create_runtime_env_with_retry
runtime_env_context = await asyncio.wait_for(
File "/home/ray/anaconda3/lib/python3.9/asyncio/tasks.py", line 479, in wait_for
return fut.result()
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/runtime_env/agent/runtime_env_agent.py", line 332, in _setup_runtime_env
await create_for_plugin_if_needed(
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/runtime_env/plugin.py", line 254, in create_for_plugin_if_needed
size_bytes = await plugin.create(uri, runtime_env, context, logger=logger)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/runtime_env/working_dir.py", line 163, in create
local_dir = await download_and_unpack_package(
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/runtime_env/packaging.py", line 756, in download_and_unpack_package
protocol.download_remote_uri(source_uri=pkg_uri, dest_file=pkg_file)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/runtime_env/protocol.py", line 106, in _download_remote_uri
return _protocols_provider.download_remote_uri(self.value, source_uri, dest_file)
File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/_private/runtime_env/protocol.py", line 80, in download_remote_uri
with open_file(source_uri, "rb", transport_params=tp) as fin:
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/smart_open_lib.py", line 224, in open
binary = _open_binary_stream(uri, binary_mode, transport_params)
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/smart_open_lib.py", line 400, in _open_binary_stream
fobj = submodule.open_uri(uri, mode, transport_params)
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 224, in open_uri
return open(parsed_uri['bucket_id'], parsed_uri['key_id'], mode, **kwargs)
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 291, in open
fileobj = Reader(
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 574, in __init__
self.seek(0)
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 666, in seek
self._current_pos = self._raw_reader.seek(offset, whence)
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 417, in seek
self._open_body(start, stop)
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 438, in _open_body
response = _get(
File "/home/ray/anaconda3/lib/python3.9/site-packages/smart_open/s3.py", line 338, in _get
raise wrapped_error from error
OSError: unable to access bucket: 'ray-fs' key: 'local_copy_of_text.zip' version: None error: An error occurred (InvalidAccessKeyId) when calling the GetObject operation: The AWS Access Key Id you provided does not exist in our records.`
1. The MinIO files themselves are actually reachable:


2. The following parameters have been set:

### Link
_No response_
|
closed
|
2025-02-20T08:15:50Z
|
2025-02-21T07:52:51Z
|
https://github.com/ray-project/ray/issues/50762
|
[
"triage",
"docs"
] |
CurtainRight
| 1
|
nteract/papermill
|
jupyter
| 400
|
How to retain widget state in output notebooks?
|
I'm trying to use papermill to generate report notebooks that include ipywidgets in their outputs. When I use papermill to build these, the widget state doesn't seem to be included in the output notebook. I've tried playing around with this a bit and have found the following things:
* If I run the notebook and create the widgets interactively, then use nbconvert to convert to a different notebook (e.g. `jupyter-nbconvert --to notebook mynotebook.ipynb`) then the widgets display correctly
* If I run papermill on the notebook, then the widget is *not* displayed correctly (it just displays the text representation of the widgets)
* If I use `--execute` to the nbconvert command, it also does not display correctly (`jupyter-nbconvert --to notebook mynotebook.ipynb --execute`).
So I guess this is a problem with *executing* the notebook rather than simply converting the output of a pre-existing notebook. The final point above makes me suspect that this is a problem at the `nbconvert` level rather than the papermill level, but I'd be curious if folks have run into this here and if there are any workarounds.
It sounds like @maartenbreddels and @jasongrout worked on retaining widget state in https://github.com/jupyter/nbconvert/pull/900 - perhaps they've got thoughts on achieving the same thing after executing the cells in a notebook
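One way to check whether an output notebook actually kept its widget state is to look for the widget-state mimetype in the notebook metadata — a stdlib sketch, where the stub dict stands in for a real `.ipynb` loaded with `json.load`:

```python
# Widget state, when preserved, lives in the notebook under
# metadata["widgets"]["application/vnd.jupyter.widget-state+json"].
# The stub below stands in for json.load(open("mynotebook.ipynb")).
nb = {
    "metadata": {
        "widgets": {"application/vnd.jupyter.widget-state+json": {"state": {}}}
    }
}

def has_widget_state(notebook: dict) -> bool:
    widgets = notebook.get("metadata", {}).get("widgets", {})
    return "application/vnd.jupyter.widget-state+json" in widgets
```

Running this check on the nbconvert-converted notebook versus the papermill-executed one should localize whether the state is dropped at execution time.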
|
open
|
2019-07-15T21:39:11Z
|
2022-06-14T01:39:32Z
|
https://github.com/nteract/papermill/issues/400
|
[] |
choldgraf
| 5
|
tfranzel/drf-spectacular
|
rest-api
| 1,390
|
extend_schema_field does not work inside an @extend_schema_serializer class
|
**Describe the bug**
I'm using drf_spectacular 0.28.0 and `drf_spectacular.openapi.AutoSchema`
I want to use two layers of envelopes, as follows:
```json
{ // RootEnvelope
"success": true,
"list": { // PageEnvelope
"count": 42,
"items": [ // ItemSerializer(many=True)
{
"id": 0,
"name": "string"
}
]
}
}
```
EnvelopedSerializer is generated by
```py
# from [test_serializer_method_pagination_through_extension](https://github.com/tfranzel/drf-spectacular/blob/master/tests/test_extensions.py#L280)
class PaginationWrapper(BaseSerializer):
def __init__(self, serializer_class: type[BaseSerializer], pagination_class: type[BasePagination], **kwargs):
self.serializer_class = serializer_class
self.pagination_class = pagination_class
super().__init__(**kwargs)
class PaginationWrapperExtension(OpenApiSerializerExtension):
target_class = PaginationWrapper
def get_name(self, auto_schema: AutoSchema, direction: Direction):
return auto_schema.get_paginated_name(
auto_schema._get_serializer_name(serializer=force_instance(self.target.serializer_class), direction=direction)
)
def map_serializer(self, auto_schema: AutoSchema, direction: Direction):
component = auto_schema.resolve_serializer(self.target.serializer_class, direction)
paginated_schema = self.target.pagination_class().get_paginated_response_schema(component.ref)
return paginated_schema
def envelope_paged(serializer_class: type[BaseSerializer], pagination_class: type[BasePagination] = None):
component_name = 'Enveloped{}Page'.format(
serializer_class.__name__.replace("Serializer", ""),
)
if not pagination_class:
pagination_class = api_settings.DEFAULT_PAGINATION_CLASS
def extended_field(v):
return extend_schema_field(PaginationWrapper(serializer_class, pagination_class))(v)
@extend_schema_serializer(many=False, component_name=component_name)
class EnvelopePaginatedSerializer(serializers.Serializer):
success = serializers.BooleanField(default=True)
list = extended_field(serializer_class(many=True))
return EnvelopePaginatedSerializer
```
I found the actual schema example is
```json
{
"success": true,
"list": [
{
"id": 0,
"name": "string"
}
]
}
```
Then I started debugging and found the key issue here
```
_unwrap_list_serializer, openapi.py:1426 <-- serializer=ItemSerializer
_unwrap_list_serializer, openapi.py:1429 <-- serializer=ListSerializer, not intercepted by PaginationWrapperExtension
_map_serializer_field, openapi.py:679
_map_basic_serializer, openapi.py:1048 <-- field=ListSerializer, lost _spectacular_annotation
_map_serializer, openapi.py:949
resolve_serializer, openapi.py:1648 <-- serializer=EnvelopePaginatedSerializer, has _spectacular_annotation
_get_response_for_code, openapi.py:1453
_get_response_bodies, openapi.py:1406
get_operation, openapi.py:112
get_operation, utils.py:451
parse, generators.py:256
get_schema, generators.py:285
generate_schema, spectacular_test.py:24
...
```
I found a workaround using SerializerMethodField instead, but could you please look into this and see whether it can be fixed properly?
**To Reproduce**
Here's my test code
```py
class Item(Model):
name = CharField(max_length=100)
class ItemSerializer(ModelSerializer):
class Meta:
model = Item
fields = '__all__'
class MyViewSet(GenericViewSet):
queryset = Item.objects.all()
serializer_class = ItemSerializer
envelope_paged_class = envelope_paged(serializer_class)
@extend_schema(envelope_paged_class)
def list(self, request: Request, *args, **kwargs):
queryset = self.filter_queryset(self.get_queryset())
page = self.paginate_queryset(queryset)
serializer = self.get_serializer(page, many=True)
return self.get_paginated_response(serializer.data)
if __name__ == '__main__':
print(generate_schema('x', MyViewSet))
```
**Expected behavior**
Fields should not lose `_spectacular_annotation`, so that the JSON at the top of this issue is generated.
|
open
|
2025-03-06T17:08:41Z
|
2025-03-06T17:35:02Z
|
https://github.com/tfranzel/drf-spectacular/issues/1390
|
[] |
hsupu
| 0
|
browser-use/browser-use
|
python
| 470
|
Add support for deepseek janus
|
### Problem Description
Add support for deepseek janus 7b multimodal model
### Proposed Solution
add new deep seek janus model
### Alternative Solutions
_No response_
### Additional Context
_No response_
|
open
|
2025-01-29T18:19:51Z
|
2025-01-29T18:19:51Z
|
https://github.com/browser-use/browser-use/issues/470
|
[
"enhancement"
] |
a-ma-n
| 0
|
Lightning-AI/pytorch-lightning
|
data-science
| 20,425
|
When interrupting a run with Ctrl+C, sometimes the WandbLogger does not upload a checkpoint artifact
|
### Bug description
When interrupting a run with Ctrl+C, the WandbLogger does not upload a checkpoint artifact
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
_No response_
### Error messages and logs
```
Epoch 20: 28%|██▏ | 6502/23178 [29:11<1:14:53, 3.71it/s, v_num=gwj7, train_loss=nan.0]^C
Detected KeyboardInterrupt, attempting graceful shutdown ...
wandb: 🚀 View run train-release-0.1 at: https://wandb.ai/eschwartz/dire/runs/uvexgwj7
Epoch 20: 28%|██▏ | 6502/23178 [29:16<1:15:04, 3.70it/s, v_num=gwj7, train_loss=nan.0]
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA GeForce RTX 4070 Laptop GPU
- available: True
- version: 12.1
* Lightning:
- lightning-utilities: 0.11.7
- pytorch-lightning: 2.4.0
- torch: 2.3.0
- torchmetrics: 1.6.0
* Packages:
- absl-py: 2.1.0
- aiohappyeyeballs: 2.4.3
- aiohttp: 3.10.10
- aiosignal: 1.3.1
- appdirs: 1.4.4
- asttokens: 2.4.1
- async-timeout: 4.0.3
- attrs: 23.2.0
- braceexpand: 0.1.7
- certifi: 2024.2.2
- charset-normalizer: 3.3.2
- click: 8.1.7
- decorator: 5.1.1
- docker-pycreds: 0.4.0
- docopt: 0.6.2
- editdistance: 0.5.3
- et-xmlfile: 1.1.0
- exceptiongroup: 1.2.2
- executing: 2.1.0
- filelock: 3.13.4
- frozenlist: 1.5.0
- fsspec: 2024.3.1
- future: 1.0.0
- gitdb: 4.0.11
- gitpython: 3.1.43
- grpcio: 1.62.2
- hjson: 3.1.0
- idna: 3.7
- ipdb: 0.13.13
- ipython: 8.27.0
- jedi: 0.19.1
- jep: 4.2.0
- jinja2: 3.1.3
- jsonlines: 4.0.0
- jsonnet: 0.16.0
- lightning-utilities: 0.11.7
- markdown: 3.6
- markdown-it-py: 2.2.0
- markupsafe: 2.1.5
- matplotlib-inline: 0.1.7
- mdurl: 0.1.2
- mpmath: 1.3.0
- msgpack: 1.0.8
- multidict: 6.1.0
- networkx: 3.3
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.4.127
- nvidia-nvtx-cu12: 12.1.105
- objectio: 0.2.29
- openpyxl: 3.1.2
- packaging: 24.1
- pandas: 2.2.2
- parso: 0.8.4
- pexpect: 4.9.0
- pillow: 10.3.0
- pip: 22.0.2
- platformdirs: 4.3.6
- prompt-toolkit: 3.0.47
- propcache: 0.2.0
- protobuf: 4.25.3
- psutil: 5.9.8
- ptyprocess: 0.7.0
- pure-eval: 0.2.3
- pyelftools: 0.31
- pygments: 2.6.1
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.4.0
- pytz: 2024.1
- pyyaml: 6.0.1
- requests: 2.31.0
- rich: 13.2.0
- sentencepiece: 0.1.99
- sentry-sdk: 2.0.1
- setproctitle: 1.3.3
- setuptools: 59.6.0
- shellingham: 1.5.4
- simplejson: 3.19.2
- six: 1.16.0
- smmap: 5.0.1
- stack-data: 0.6.3
- sympy: 1.12
- tensorboard: 2.16.2
- tensorboard-data-server: 0.7.2
- tomli: 2.0.1
- torch: 2.3.0
- torchmetrics: 1.6.0
- tqdm: 4.66.2
- traitlets: 5.14.3
- triton: 2.3.0
- typer: 0.12.3
- typing-extensions: 4.11.0
- tzdata: 2024.1
- ujson: 3.2.0
- urllib3: 2.2.1
- wandb: 0.18.6
- wcwidth: 0.2.13
- webdataset: 0.2.100
- werkzeug: 3.0.2
- yarl: 1.16.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.12
- release: 6.8.0-48-generic
- version: #48~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Oct 7 11:24:13 UTC 2
</details>
### More info
_No response_
|
open
|
2024-11-16T15:30:30Z
|
2024-11-16T21:28:59Z
|
https://github.com/Lightning-AI/pytorch-lightning/issues/20425
|
[
"bug",
"needs triage",
"ver: 2.4.x"
] |
edmcman
| 2
|
MagicStack/asyncpg
|
asyncio
| 457
|
Custom Codec doesn't work with DOMAIN types
|
* **asyncpg version**: 0.18.3
* **PostgreSQL version**: 11.3
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: Local
* **Python version**: 3.7.3
* **Platform**: MacOS
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: PyPI
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Don't Know
When creating a DOMAIN and using a custom codec, asyncpg calls the native type's codec instead of the custom one. Here is the test code that is supposed to cover this:
https://github.com/MagicStack/asyncpg/blob/master/tests/test_codecs.py#L1224-L1239
However, the lambda functions aren't called. It appears the data type is recognized as an `int`.
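For reference, the registration pattern the linked test exercises looks roughly like this (a sketch; `my_dec_t` is a hypothetical domain, e.g. `CREATE DOMAIN my_dec_t AS int`, and the encoder/decoder are deliberately trivial). With the bug described here, asyncpg falls back to the base type's codec and these callables are never invoked:

```python
def encode_value(value: int) -> str:
    # client -> server: text-format encoder for the custom codec
    return str(value)

def decode_value(raw: str) -> int:
    # server -> client: text-format decoder for the custom codec
    return int(raw)

async def register_codec(conn):
    # Register the codec on the DOMAIN name; per this issue, the custom
    # encoder/decoder are silently bypassed in favor of the base type's codec.
    await conn.set_type_codec(
        "my_dec_t",
        schema="public",
        encoder=encode_value,
        decoder=decode_value,
        format="text",
    )
```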
|
closed
|
2019-06-13T23:16:58Z
|
2020-12-02T01:37:45Z
|
https://github.com/MagicStack/asyncpg/issues/457
|
[] |
kjmph
| 7
|
STVIR/pysot
|
computer-vision
| 40
|
Is target tracking across scene cuts currently supported?
|
A layman's question 😁 I gave it a try: I clipped a short video and selected a face as the initial ROI. Since the footage contains scene cuts rather than one continuous shot, the tracker loses the target after a cut. I used the SiamMask algorithm; SiamRPN is slightly better. Sometimes the original target is re-acquired, and sometimes the tracker drifts away.

|
closed
|
2019-06-11T10:44:10Z
|
2019-11-26T09:02:13Z
|
https://github.com/STVIR/pysot/issues/40
|
[] |
Anonym91
| 7
|
piskvorky/gensim
|
machine-learning
| 3,341
|
Clean up aarch64 wheel builds
|
in .travis.yml:
- [ ] Build Python 3.10 wheels
- [ ] Perform the wheel builds regularly, so we know when something breaks
- [ ] Document the separate travis.yml file in the wiki (e.g. "we use Travis for aarch64 because github actions don't support aarch64 builds yet")
|
open
|
2022-05-02T12:35:12Z
|
2022-05-02T12:35:12Z
|
https://github.com/piskvorky/gensim/issues/3341
|
[
"help wanted",
"housekeeping"
] |
mpenkov
| 0
|
mwaskom/seaborn
|
matplotlib
| 3,237
|
`so.Hist`: Document and propagate `weight` parameter
|
`so.Hist()` has a currently-undocumented `weight` parameter that can be used to make a weighted histogram like so:
```python
df = sns.load_dataset('titanic')
df = df[['age', 'fare']].dropna() # can't deal well with NaNs. This is not the bug reported!
# Total fare collected for each age group:
(
    so.Plot(df, x='age')
    .add(so.Bars(), so.Hist(bins=10), weight='fare')
)
```
This is useful *but*:
- It's currently undocumented. It should be mentioned in the documentation of `so.Hist`.
- `weight` currently *has* to be provided like above, i.e., as a parameter to `.add()`. It can't be provided as...
- a parameter to `so.Hist()` --- This may or may not be right, I don't know the design ideology so well.
- a parameter to `so.Plot()` --- This looks like it should be possible based on the design ideology?
|
open
|
2023-01-26T00:07:36Z
|
2024-12-05T06:54:54Z
|
https://github.com/mwaskom/seaborn/issues/3237
|
[
"docs",
"objects-stat"
] |
sschuldenzucker
| 2
|
MolSSI/cookiecutter-cms
|
pytest
| 10
|
Version management
|
There are a few tools that allow for fairly nice automatic versioning. Something to think about but certainly not necessary.
https://docs.openstack.org/pbr/latest/user/features.html#version
And I think ParmEd uses https://github.com/warner/python-versioneer
|
closed
|
2018-03-16T20:46:02Z
|
2018-10-05T14:33:07Z
|
https://github.com/MolSSI/cookiecutter-cms/issues/10
|
[] |
ctk3b
| 7
|
mwaskom/seaborn
|
matplotlib
| 3,755
|
NameError: name 'FloatSlider' is not defined when using widgets.py without ipywidgets installed
|
<small>I'm running the latest seaborn (0.13.2) on Mac OS 14.2.1 (x86_64), Python 3.12.4.</small>
## The issue
When a user attempts to use widgets from widgets.py module without having ipywidgets installed, the intended ImportError is not raised because of a NameError raised moments before that.
Steps to reproduce:
<details>
<summary>1. Set up a fresh virtualenv.</summary>
```shell
$ pip freeze
setuptools==72.1.0
wheel==0.43.0
```
</details>
<details><summary>2. Install seaborn (skip ipywidgets).</summary>
```shell
$ pip install seaborn
...
$ pip freeze
contourpy==1.2.1
cycler==0.12.1
fonttools==4.53.1
kiwisolver==1.4.5
matplotlib==3.9.2
numpy==2.0.1
packaging==24.1
pandas==2.2.2
pillow==10.4.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
pytz==2024.1
seaborn==0.13.2
setuptools==72.1.0
six==1.16.0
tzdata==2024.1
wheel==0.43.0
```
</details>
3. Attempt to use one of the colormap widgets.
```shell
$ python -c "import seaborn; seaborn.choose_colorbrewer_palette('quatlitative')" (test-seaborn-fresh)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<redacted>/anaconda3/envs/test-seaborn-setup/lib/python3.12/site-packages/seaborn/widgets.py", line 134, in choose_colorbrewer_palette
desat=FloatSlider(min=0, max=1, value=1)):
^^^^^^^^^^^
NameError: name 'FloatSlider' is not defined
```
## Root cause
The widgets.py module wraps imports from `ipywidgets` with a try/except clause ([link](https://github.com/mwaskom/seaborn/blob/master/seaborn/widgets.py#L5-L10)). When the user doesn't have ipywidgets installed, the `interact` function is patched to raise an `ImportError` and notify the user on the missing module upon invocation.
The local functions defined later in the module are guarded using the wrapper:
```
@interact
def choose_sequential(name=opts, n=(2, 18),
desat=FloatSlider(min=0, max=1, value=1),
variant=variants):
```
Unfortunately, such function definitions already attempt to use the members of the ipywidgets module to define the default values for parameters (in this case the `FloatSlider` is used to define a default for `desat`). This prevents the user from seeing the intended `ImportError` and presents them with a `NameError` instead like the one reproduced above.
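To illustrate the failure mode and one possible fix, here is a self-contained sketch (the names mirror `seaborn/widgets.py`, but this is illustrative code, not seaborn's actual implementation): defining a placeholder `FloatSlider` in the `except ImportError` branch lets later function definitions evaluate their default values, so the user reaches the intended `ImportError` instead of a `NameError`:

```python
def load_widgets(ipywidgets_available: bool):
    """Simulate seaborn's optional-import block for ipywidgets."""
    if ipywidgets_available:
        # real branch would be: from ipywidgets import interact, FloatSlider
        raise NotImplementedError
    # Fallbacks: `interact` raises the intended error only when actually used,
    # and a placeholder FloatSlider lets `def f(x=FloatSlider(...))` be defined.
    def interact(func):
        raise ImportError("Interactive palettes require `ipywidgets` to be installed.")

    class FloatSlider:
        def __init__(self, min=0, max=1, value=1):
            self.min, self.max, self.value = min, max, value

    return interact, FloatSlider

interact, FloatSlider = load_widgets(ipywidgets_available=False)

# This definition no longer fails at def-time, because FloatSlider is defined:
def choose_sequential(desat=FloatSlider(min=0, max=1, value=1)):
    return desat.value

result = choose_sequential()  # the default value is usable

# Only using the decorator raises the intended, informative ImportError:
try:
    interact(choose_sequential)
    raised_import_error = False
except ImportError:
    raised_import_error = True
```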
|
open
|
2024-08-29T07:15:11Z
|
2024-08-29T07:17:26Z
|
https://github.com/mwaskom/seaborn/issues/3755
|
[] |
mrapacz
| 1
|
allenai/allennlp
|
nlp
| 5,363
|
Cannot use ConllCorefScores for general coref tasks
|
**Is your feature request related to a problem? Please describe.**
The coreference model implemented in allennlp [coref.py](https://github.com/allenai/allennlp-models/blob/4eb7c27d7fad00ac886ffdefc6c152909fa28f23/allennlp_models/coref/models/coref.py#L24) is based on [Higher-order Coreference Resolution with Coarse-to-fine Inference](https://arxiv.org/pdf/1804.05392.pdf) by Lee et al., 2018.
For a span i, this model considers all antecedents j with j < i, and introduces a dummy cluster to represent spans with no antecedent. In [DWIE: an entity-centric dataset for multi-task document-level information extraction] by Zaporojects et al. 2021 this was seen as a limitation: they allowed singleton clusters (j = i) and eliminated the dummy cluster. In my current work I've combined both: a span can either be invalid (belong to the dummy cluster), be a singleton, or have an antecedent.
The problem I've run into is due to a single assertion in `ConllCorefScores` the scores cannot be computed when the model is capable of predicting singletons
https://github.com/allenai/allennlp-models/blob/5012f2387decc806152fcba6ad81345b7627fc2a/allennlp_models/coref/metrics/conll_coref_scores.py#L107
I realise the code is `correct` in that it's based on models which cannot predict singleton clusters, but it would be nice if it were more general-purpose - specifically the only issue being a single assertion that is only valid for specific models.
|
closed
|
2021-08-17T22:50:22Z
|
2021-08-31T19:49:29Z
|
https://github.com/allenai/allennlp/issues/5363
|
[
"Contributions welcome"
] |
david-waterworth
| 2
|
aimhubio/aim
|
tensorflow
| 2,491
|
Ability to show the run messages in UI
|
## 🚀 Feature
As a user, I want to see run messages displayed in the UI as a separate tab, so that I can view them as a timeline and stay updated on live messages.
### Pitch
1. Add a "Run Messages" tab to the run page
2. Display messages as a timeline
3. Show live updated messages.
|
closed
|
2023-01-20T13:53:42Z
|
2023-02-10T12:56:13Z
|
https://github.com/aimhubio/aim/issues/2491
|
[
"type / enhancement",
"area / Web-UI",
"phase / shipped"
] |
VkoHov
| 0
|
pytorch/vision
|
machine-learning
| 8,438
|
Allow passing a file-like object to torchvision.io.video.read_video
|
### 🚀 The feature
As the title states, I'd like to be able to pass a file-like object to `read_video`.
### Motivation, pitch
As far as I can see, there should be no issue allowing this, as `pyav` supports it. The current `if not os.path.exists(filename):` check is the only thing preventing this, as I see it. It could be especially useful e.g. for webdataset, which currently extracts the videos to `/tmp`, but could pass them directly from memory instead, which would be much more efficient.
I wonder if there is any reason to not expose this interface?
### Alternatives
_No response_
### Additional context
_No response_
|
closed
|
2024-05-23T08:54:47Z
|
2024-06-03T07:59:55Z
|
https://github.com/pytorch/vision/issues/8438
|
[] |
voegtlel
| 6
|
FactoryBoy/factory_boy
|
sqlalchemy
| 191
|
Make previous versions of factory_boy available on PyPI
|
2.5.0 was released yesterday and it breaks our builds. We would like to pin the factory_boy requirement to an earlier version, until we can sort out why the builds are failing and update our factory_boy usage. This is currently not possible as only the latest version of factory_boy is available on PyPI and we create a clean virtualenv for each build.
|
closed
|
2015-03-27T10:52:08Z
|
2015-03-27T12:46:56Z
|
https://github.com/FactoryBoy/factory_boy/issues/191
|
[] |
kristaps
| 2
|
sktime/sktime
|
scikit-learn
| 7,200
|
[DOC] Feature importances of direct-strategy reduced forecasters with exogenous variables are not documented
|
**Problem Statement**
It is not clear on the documentation of [sktime.forecasting.compose.make_reduction](https://www.sktime.net/en/latest/api_reference/auto_generated/sktime.forecasting.compose.make_reduction.html) how one can interpret the `feature_importances_` of the `estimator`, in the case of direct-strategy forecasting through reduction.
**Minimum Reproducible Example**
```python
import numpy as np
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.compose import make_reduction
from sktime.forecasting.model_selection import temporal_train_test_split
if __name__ == "__main__":
    y = load_airline()
    X = pd.DataFrame(
        {
            "x1": np.random.rand(y.shape[0]),
            "x2": np.random.rand(y.shape[0]),
        }
    ).set_index(y.index)
    y_train, y_test, X_train, X_test = temporal_train_test_split(y, X, test_size=5)
    regressor = ExtraTreesRegressor()
    forecaster = make_reduction(regressor, window_length=5, strategy="direct")
    fh = ForecastingHorizon(y_test.index, is_relative=False)
    forecaster.fit(y_train, X=X_train, fh=fh)
    feature_importances = forecaster.get_fitted_params()["estimators"][0].feature_importances_
    print(feature_importances)
```
This yields the following array:
```
[0.08666385 0.0838308 0.15618081 0.14396575 0.45666079 0.00584428
0.00518489 0.00855417 0.00749582 0.00788163 0.0085631 0.00395616
0.00599802 0.00733155 0.01188838]
```
While all `15` values in this array sum to 1.0 (typical for feature importances), one cannot point out which value corresponds to which original feature.
**Solution**
First of all, given that we have selected:
- `window_length = 5`
- `2` exogenous variables (`x1`, `x2`)
it makes sense that the array has length `15` since we have essentially `3` features in total (1 target variable + 2 exogenous), each with 5 lags (due to window length). In short: `15 = 3 * 5`.
While this by itself is not a trivial result (_I think sktime users will find it very useful to have something like this documented_), even less trivial is figuring out how these `15` values (feature importances) are associated with the original 3 features (target + exogenous).
By introducing the random-variabled `x1` and `x2`, as well as adding another "pseudo-feature" such as `"x3": 2 * y` in the design matrix `X` of the example above, one can make the following observations:
- the first 5 values in the feature importances array correspond to the 5 lags of the target variable `y` (first value is`lag_5` and 5th value is `lag_1`, based on feature importance)
- each subsequent group of values of size 5 corresponds to each exogenous variable in the original design matrix `X` (left to right).
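The mapping described above can be sketched in code (a hedged illustration based on the empirical layout observed in this issue; the lag ordering is inferred, not guaranteed by sktime's API):

```python
# Label the flat feature_importances_ array of a direct-reduction forecaster.
window_length = 5
feature_names = ["y", "x1", "x2"]  # target first, then exogenous left-to-right

# The 15 values printed by the example above:
importances = [
    0.08666385, 0.0838308, 0.15618081, 0.14396575, 0.45666079,
    0.00584428, 0.00518489, 0.00855417, 0.00749582, 0.00788163,
    0.0085631, 0.00395616, 0.00599802, 0.00733155, 0.01188838,
]

# Each group of `window_length` values belongs to one feature; within a
# group, the first value is lag_5 and the last is lag_1.
labels = [
    f"{name}_lag_{window_length - i}"
    for name in feature_names
    for i in range(window_length)
]
labeled = dict(zip(labels, importances))
```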
My suggestion is that this particular behaviour should be:
1. validated by at least one `sktime` contributor (*I may have missed something myself*)
2. documented somewhere in the `sktime` documentation (*I think it's something valuable and I am willing to help contribute to it - if you find it valuable as well*)
|
open
|
2024-09-30T10:37:18Z
|
2024-09-30T11:57:10Z
|
https://github.com/sktime/sktime/issues/7200
|
[
"implementing algorithms",
"documentation",
"module:forecasting"
] |
ilias-ant
| 4
|
AUTOMATIC1111/stable-diffusion-webui
|
deep-learning
| 15,601
|
[Bug]: No module named 'pillow_avif'
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 13, in <module>
initialize.imports()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/initialize.py", line 39, in imports
from modules import processing, gradio_extensons, ui # noqa: F401
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 18, in <module>
import modules.sd_hijack
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack.py", line 5, in <module>
from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 13, in <module>
from modules.hypernetworks import hypernetwork
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/hypernetworks/hypernetwork.py", line 8, in <module>
import modules.textual_inversion.dataset
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/dataset.py", line 12, in <module>
from modules import devices, shared, images
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 17, in <module>
import pillow_avif # noqa: F401
ModuleNotFoundError: No module named 'pillow_avif'
### Steps to reproduce the problem
Just start webui
### What should have happened?
Start of SD
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
Cant open
### Console logs
```Shell
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/webui.py", line 13, in <module>
initialize.imports()
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/initialize.py", line 39, in imports
from modules import processing, gradio_extensons, ui # noqa: F401
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/processing.py", line 18, in <module>
import modules.sd_hijack
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack.py", line 5, in <module>
from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 13, in <module>
from modules.hypernetworks import hypernetwork
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/hypernetworks/hypernetwork.py", line 8, in <module>
import modules.textual_inversion.dataset
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/textual_inversion/dataset.py", line 12, in <module>
from modules import devices, shared, images
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/images.py", line 17, in <module>
import pillow_avif # noqa: F401
ModuleNotFoundError: No module named 'pillow_avif'
```
### Additional information
_No response_
|
closed
|
2024-04-22T18:11:18Z
|
2024-04-23T21:00:08Z
|
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15601
|
[
"not-an-issue"
] |
gokuryo
| 2
|
ipython/ipython
|
jupyter
| 13,915
|
Autoreload fails silently when a comment is on the same line.
|
`%autoreload 2 # this line will automatically reload every imported library`
This line will fail silently. It does nothing at all and does not produce any error. Libraries will not be auto-reloaded and no error ever appears.
See line 555 of https://github.com/ipython/ipython/edit/main/IPython/extensions/autoreload.py
see the pull request https://github.com/ipython/ipython/pull/13916
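A minimal sketch of the fix direction (illustrative only; the actual change is in the linked PR): strip any inline comment from the magic's argument before parsing the mode.

```python
def parse_autoreload_arg(parameter_s: str) -> str:
    """Drop an inline '#' comment so '%autoreload 2  # reload all' parses as '2'."""
    return parameter_s.split("#", 1)[0].strip()

mode = parse_autoreload_arg("2 # this line will automatically reload every imported library")
```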
|
open
|
2023-01-30T16:30:13Z
|
2023-01-30T16:31:00Z
|
https://github.com/ipython/ipython/issues/13915
|
[] |
vmerckle
| 0
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 1,121
|
Screenreader Blind user for this program
|
Hello, I am a blind user of this program and I really love it so far. However, it is not supported by screenreaders such as NVDA for Windows. I hope that someone, or anjok, can make the next version accessible to screenreaders so I can play around with the settings more, and I wish I could contact him one day if possible. This software is the best I have found so far for separating vocals and instrumentals. Thanks
|
open
|
2024-01-20T19:49:29Z
|
2024-01-20T19:49:29Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/1121
|
[] |
Miladdb
| 0
|
plotly/dash
|
jupyter
| 2,940
|
dcc.Graph with scattermapbox stops updating after double clicking on a trace in the legend
|
**Describe your context**
```
dash 2.17.1
```
**Describe the bug**
1. create a map with 7 traces and double click on the seventh trace in the legend to hide all other ones
2. change the map to one with fewer traces
3. get error in console `Uncaught (in promise) Error: Mapbox error.`
4. trying to change the graph again fails
```python
import dash
from dash.dependencies import Input, Output
from dash import dcc, html
import plotly.graph_objects as go
import pandas as pd
# Replace 'your_access_token' with your actual Mapbox access token
# Sample data (replace with your actual data)
df1 = pd.DataFrame(
{"lat": [35.6895, 34.0522, 32.7767], "lon": [-82.6446, -118.2437, -96.7970], "trace_name": ["A", "B", "C"]}
)
df2 = pd.DataFrame(
{
"lat": [40.7128, 39.9526, 38.8951, 37.7749, 36.1699, 34.0522, 32.7767],
"lon": [-74.0060, -75.1652, -77.0367, -122.4194, -119.4179, -118.2437, -96.7970],
"trace_name": ["D", "E", "F", "G", "H", "I", "J"],
}
)
df3 = pd.DataFrame(
{
"lat": [40.7128, 39.9526, 38.8951, 37.7749, 36.1699],
"lon": [-74.0060, -75.1652, -77.0367, -122.4194, -119.4179],
"trace_name": ["K", "L", "M", "N", "O"],
}
)
data_dict = {"Option 1": df1, "Option 2": df2, "Option 3": df3}
app = dash.Dash(__name__)
app.layout = html.Div(
[
dcc.Dropdown(id="dropdown", options=[{"label": k, "value": k} for k in data_dict.keys()], value="Option 1"),
dcc.Graph(id="map"),
]
)
@app.callback(Output("map", "figure"), Input("dropdown", "value"))
def update_graph(selected_option):
    df = data_dict[selected_option]
    fig = go.Figure()
    for trace_name, group in df.groupby("trace_name"):
        fig.add_trace(go.Scattermapbox(lat=group["lat"], lon=group["lon"], name=trace_name, mode="markers"))
    fig.update_layout(mapbox_style="open-street-map")
    return fig

if __name__ == "__main__":
    app.run_server(debug=True, port=8051)
```
**Expected behavior**
the current state of the restyleData shouldn't prevent the graph from updating
**Screenshots**

I wonder if this is related to https://github.com/plotly/dash/issues/2521 at all
|
open
|
2024-08-05T21:50:35Z
|
2024-08-13T19:57:56Z
|
https://github.com/plotly/dash/issues/2940
|
[
"bug",
"P3"
] |
michaelbabyn
| 0
|
microsoft/nlp-recipes
|
nlp
| 546
|
[ASK] Test classes on all supported models
|
### Description
<!--- Describe your general ask in detail -->
### Other Comments
|
open
|
2020-01-24T18:25:15Z
|
2020-01-24T18:25:15Z
|
https://github.com/microsoft/nlp-recipes/issues/546
|
[] |
hlums
| 0
|
swisskyrepo/GraphQLmap
|
graphql
| 51
|
Issue launching GraphQLmap.py - import module not found
|
Previously it was working fine; however, I had to reinstall WSL and install GraphQLmap again. When I attempted to, it threw an error once I had extracted the repo and tried to run the binary from the provided bin folder. The first time round, installing it was no issue at all.
For example:
Called from: ~/GraphQLmap/bin
Command: ./graphqlmap
Error:
Traceback (most recent call last):
File "/home/{$user}/GraphQLmap/bin/./graphqlmap", line 8, in <module>
from graphqlmap.attacks import *
ModuleNotFoundError: No module named 'graphqlmap'
Apparently there is an outdated version of something; I have included the error message from attempting to launch setup.py with python3.
warnings.warn(
Traceback (most recent call last):
File "/home/og/GraphQLmap/setup.py", line 6, in <module>
setuptools.setup(
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 108, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 1213, in run_command
super().run_command(command)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 74, in run
self.do_egg_install()
File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 117, in do_egg_install
cmd.ensure_finalized() # finalize before bdist_egg munges install cmd
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 111, in ensure_finalized
self.finalize_options()
File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 335, in finalize_options
self.local_index = Environment(self.shadow_path + sys.path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1044, in __init__
self.scan(search_path)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1077, in scan
self.add(dist)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1096, in add
dists.sort(key=operator.attrgetter('hashcmp'), reverse=True)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2631, in hashcmp
self.parsed_version,
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2679, in parsed_version
self._parsed_version = parse_version(self.version)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/pkg_resources/_vendor/packaging/version.py", line 266, in __init__
raise InvalidVersion(f"Invalid version: '{version}'")
pkg_resources.extern.packaging.version.InvalidVersion: Invalid version: '1.4-py1'
(package: adns)
I would like to use the tool, so any advice on getting it to work would be useful. I have tried uninstalling, moving the file to another directory, and moving it to bin. As mentioned, it worked fine before, but an issue forced me to restart my WSL, so I am not sure whether that is connected.
|
open
|
2023-03-30T10:25:00Z
|
2023-03-30T10:26:57Z
|
https://github.com/swisskyrepo/GraphQLmap/issues/51
|
[] |
24allday
| 0
|
Farama-Foundation/Gymnasium
|
api
| 1,124
|
[Bug Report] The description of the observation space of "half cheetah" is not correct on the documentation website.
|
### Describe the bug
[https://gymnasium.farama.org/environments/mujoco/half_cheetah/](https://gymnasium.farama.org/environments/mujoco/half_cheetah/)
The observation-space description table is not correct. For example, "angle of the second rotor" and "x-coordinate of the front tip" appear repeatedly, and some of the names are obviously mismatched with their units.
|
closed
|
2024-07-19T19:43:02Z
|
2024-07-22T10:24:19Z
|
https://github.com/Farama-Foundation/Gymnasium/issues/1124
|
[
"bug"
] |
chawuchen
| 1
|
huggingface/datasets
|
machine-learning
| 6,793
|
Loading just one particular split is not possible for imagenet-1k
|
### Describe the bug
I'd expect the following code to download just the validation split but instead I get all data on my disk (train, test and validation splits)
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work like that?
### Steps to reproduce the bug
1. Install the required libraries (python, datasets, huggingface_hub)
2. Login using huggingface cli
3. Run the code in the description
### Expected behavior
Just a single (validation) split should be downloaded.
### Environment info
python: 3.12.2
datasets: 2.18.0
huggingface_hub: 0.22.2
|
open
|
2024-04-08T14:39:14Z
|
2024-09-12T16:24:48Z
|
https://github.com/huggingface/datasets/issues/6793
|
[] |
PaulPSta
| 1
|
profusion/sgqlc
|
graphql
| 223
|
Add support for `UUID` Type
|
## 🚀 Feature Request
Add support for `UUID` in `sgqlc.types`
## Description
<!-- Add a clear and concise description of what this new feature aims to do and achieve -->
<!-- Is this new feature related to a specific problem? If so, please describe it -->
There are the following types supported by `sgqlc`:
`int, str, boolean, float, id, enum, union, datetime, non_null, list_of`
(Maybe more)
But there is no support for `UUID`.
Most people who work with databases on the backend need a unique identifier for entities that they can use as a `primary key`. This can be of type `str`, but not always.
`UUID` is used by many (including my organization), so it would be great to have a `UUID` type supported.
## Implementation details
<!-- How will this feature work? Do you already have something in mind?
A potential implementation may be (concisely) explained here -->
For example:
I want to create a `user` and update that with its `uuid`.
The `class` for this `User` will look like this:
```
class User(Type):
    uuid = UUID
    code = str
    name = str
    createdAt = datetime
```
1. Create with `uuid` type in response:
```
mutation MutationCreateUser($code: String!, $name: String!) {
  createUser(code: $code, name: $name) {
    uuid
    code
    name
    createdAt
  }
}
```
Variables:
` {"code":"user_code", "name":"user_name"}`
2. Update with created `uuid` in request:
```
mutation MutationUpdateUser($uuid: UUID!, $code: String!, $name: String!) {
  updateUser(uuid: $uuid, code: $code, name: $name) {
    uuid
    code
    name
    createdAt
  }
}
```
Variables:
`{"uuid": "94fda4fb-d574-470b-82e2-0f4ec2a2db90", "code":"user_code_updated", "name":"user_name_updated"}`
*Note*:
The standard library provides a `uuid` module (https://docs.python.org/3/library/uuid.html) which can be used as a reference.
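As a rough sketch of the conversion behaviour such a type would need (illustrative only — the class and method names below are hypothetical, and sgqlc's actual scalar-extension API should be consulted before implementing):

```python
import json
import uuid

class UUIDScalar:
    """Illustrative conversion behaviour for a GraphQL UUID scalar."""

    @staticmethod
    def parse(raw: str) -> uuid.UUID:
        # server -> client: validate the string and wrap it in uuid.UUID
        return uuid.UUID(raw)

    @staticmethod
    def serialize(value: uuid.UUID) -> str:
        # client -> server: emit the value as a JSON string literal for the query
        return json.dumps(str(value))

u = UUIDScalar.parse("94fda4fb-d574-470b-82e2-0f4ec2a2db90")
```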
## Acceptance criteria
- Support for `UUID` in `sgqlc.types`
- The query mentioned in the example above should be possible to execute.
|
closed
|
2023-02-23T15:50:58Z
|
2023-02-27T18:55:43Z
|
https://github.com/profusion/sgqlc/issues/223
|
[
"enhancement",
"help wanted",
"good first issue"
] |
neetaBirajdar
| 3
|
tensorpack/tensorpack
|
tensorflow
| 591
|
How to use tf.add_check_numerics_ops()
|
I got nans and inf during training. I would like to use tf.add_check_numerics_ops() to detect the location caused it in the first place. But, I don't know where I should put the code.
What is the best way to add tf.add_check_numerics_ops() in tensorpack?
|
closed
|
2018-01-15T00:56:06Z
|
2018-05-30T20:59:32Z
|
https://github.com/tensorpack/tensorpack/issues/591
|
[
"usage"
] |
junhaenglee
| 5
|
sktime/pytorch-forecasting
|
pandas
| 1,071
|
Bug Report: Error encountered when the covariates' size is different in N-Hits
|
- PyTorch-Forecasting version:0.10.1
- PyTorch version:1.12.0
- Python version:3.8
- Operating System:Linux version 4.19.91-007.ali4000.alios7.x86_64
### Abstract
As mentioned in issue #1065, N-HiTS encounters a RuntimeError when using 'time_varying_unknown_reals' and 'time_varying_known_reals' covariates simultaneously. The error occurs because the dimensions of the fully-connected network are mismatched between the network definition and the tensor forward pass.
### Error Location In Source Code
In `pytorch_forecasting.models.nhits.sub_modules.py`, from line 124 to line 128, the Linear layers' dimensions are defined by
```
self.hidden_size = [
    self.context_length_pooled * self.output_size
    + (self.context_length + self.prediction_length) * self.covariate_size
    + self.static_hidden_size
] + hidden_size
```
The `covariate_size` is an input, defined in `pytorch_forecasting.models.nhits.__init__.py` from line 200 to line 208:
```
def covariate_size(self) -> int:
    """Covariate size.

    Returns:
        int: size of time-dependent covariates
    """
    return len(set(self.hparams.time_varying_reals_decoder) - set(self.target_names)) + sum(
        self.embeddings.output_size[name] for name in self.hparams.time_varying_categoricals_encoder
    )
```
From this definition of `self.covariate_size` we can see that it only considers the size of the categorical variables in the encoder and the size of the real variables in the decoder.
However, during the forward pass, the input tensor's dimension is built in `pytorch_forecasting.models.nhits.sub_modules.py`, lines 163 to 181:
```
encoder_y = encoder_y.transpose(1, 2)
# Pooling layer to downsample input
encoder_y = self.pooling_layer(encoder_y)
encoder_y = encoder_y.transpose(1, 2).reshape(batch_size, -1)

if self.covariate_size > 0:
    encoder_y = torch.cat(
        (
            encoder_y,
            encoder_x_t.reshape(batch_size, -1),
            decoder_x_t.reshape(batch_size, -1),
        ),
        1,
    )

# Static exogenous
if (self.static_size > 0) and (self.static_hidden_size > 0):
    x_s = self.static_encoder(x_s)
    encoder_y = torch.cat((encoder_y, x_s), 1)
When the number of covariates differs between encoder and decoder, whether categorical or real variables, a dimension-mismatch error occurs, as shown in #1065.
### Fixed Code
The fixed code is at https://github.com/zhangguangxun/nhits; it defines `encoder_covariate_size` and `decoder_covariate_size` in `__init__.py` and passes them to `sub_modules.py` to make sure the dimensions match.
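The shape bookkeeping behind such a fix can be sketched in plain Python (the helper names are illustrative, not the actual pytorch-forecasting API): the encoder sees both known and unknown reals, the decoder only the known ones, and the first fully connected layer's input width has to count them separately:

```python
def covariate_sizes(known_reals, unknown_reals, target_names):
    # Encoder covariates: everything observed over the history window.
    encoder = len(set(known_reals + unknown_reals) - set(target_names))
    # Decoder covariates: only variables known into the future.
    decoder = len(set(known_reals) - set(target_names))
    return encoder, decoder

def first_layer_dim(context_pooled, output_size, context_len,
                    prediction_len, enc_cov, dec_cov, static_hidden):
    # Must equal the width of the concatenated tensor built in forward().
    return (context_pooled * output_size
            + context_len * enc_cov
            + prediction_len * dec_cov
            + static_hidden)

enc, dec = covariate_sizes(["price"], ["volume"], ["target"])
print(first_layer_dim(4, 1, 24, 6, enc, dec, 0))  # 4 + 24*2 + 6*1 + 0 = 58
```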
|
closed
|
2022-07-25T11:44:05Z
|
2023-10-03T19:06:05Z
|
https://github.com/sktime/pytorch-forecasting/issues/1071
|
[] |
zhangguangxun
| 5
|
sourcery-ai/python-best-practices-cookiecutter
|
pytest
| 23
|
Add `py.typed`
|
[PEP 561](https://peps.python.org/pep-0561/) lays out best practices for distributing and packaging type information in a Python application.
A very simple good practice is to add a `py.typed` file in the project directory.
This makes it possible to embed the typing directly in the `.py` files instead of shipping separate stub files.
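A minimal sketch of adding the marker (the package name is hypothetical; remember to also ship the file, e.g. via `package_data` or your build backend's include rules):

```python
import tempfile
from pathlib import Path

# Create the empty PEP 561 marker inside the package directory
# (hypothetical package "my_package" in a scratch location).
pkg = Path(tempfile.mkdtemp()) / "my_package"
pkg.mkdir()
(pkg / "py.typed").touch()   # the marker file stays empty on purpose
print((pkg / "py.typed").exists())  # True
```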
|
open
|
2022-09-05T15:58:36Z
|
2022-09-05T15:58:36Z
|
https://github.com/sourcery-ai/python-best-practices-cookiecutter/issues/23
|
[] |
av1m
| 0
|
sigmavirus24/github3.py
|
rest-api
| 995
|
github3.apps.App(json,session) method
|
Hi,
I'm still trying to learn this library (internship), so my question may sound stupid :laughing:.
I'm having an issue with the parameters of the following method:
`github3.apps.App(json,session)`
I noticed that using github3.login() will return an authenticated GitHub session,
BUT I can't figure out how to fill in that json parameter.
|
closed
|
2020-06-15T10:24:55Z
|
2020-06-15T11:15:32Z
|
https://github.com/sigmavirus24/github3.py/issues/995
|
[] |
bilelelleuch
| 1
|
capitalone/DataProfiler
|
pandas
| 674
|
Azure Synapse table with uniqueidentifier columns fails to profiler
|
I have lots of tables with multiple columns of type `uniqueidentifier`.
When I run them through Data Profiler I get the error below:
Errored for schema name : P360 table name : Staging_InsertQ4Alignmnets
An error occurred while calling o1225.getResult.
: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:429)
at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:107)
at org.apache.spark.security.SocketAuthServer.getResult(SocketAuthServer.scala:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:306)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)
at py4j.ClientServerConnection.run(ClientServerConnection.java:115)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.databricks.spark.sqldw.SqlDWSideException: Azure Synapse Analytics failed to execute the JDBC query produced by the connector.
Underlying SQLException(s):
- com.microsoft.sqlserver.jdbc.SQLServerException: Columns with UniqueIdentifier types are not supported in external tables. [ErrorCode = 102050] [SQLState = S0001]
at com.databricks.spark.sqldw.Utils$.wrapExceptions(Utils.scala:723)
at com.databricks.spark.sqldw.SqlDWRelation.buildScan(SqlDWRelation.scala:116)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.$anonfun$apply$3(DataSourceStrategy.scala:455)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.$anonfun$pruneFilterProject$1(DataSourceStrategy.scala:490)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProjectRaw(DataSourceStrategy.scala:570)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProject(DataSourceStrategy.scala:489)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:455)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:69)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:69)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:100)
at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$4(QueryPlanner.scala:85)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:82)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:100)
at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:78)
at org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:625)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$sparkPlan$1(QueryExecution.scala:215)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:80)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:151)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:265)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:973)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:265)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:215)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:208)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:227)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:973)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:227)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:222)
at org.apache.spark.sql.execution.QueryExecution.simpleString(QueryExecution.scala:298)
at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:365)
at org.apache.spark.sql.execution.QueryExecution.explainStringLocal(QueryExecution.scala:329)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:203)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:388)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:187)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:973)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:142)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:338)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4044)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2(Dataset.scala:3950)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2$adapted(Dataset.scala:3949)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$2(SocketAuthServer.scala:153)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1690)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1(SocketAuthServer.scala:155)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1$adapted(SocketAuthServer.scala:150)
at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:124)
at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:117)
at org.apache.spark.security.SocketAuthServer$$anon$1.$anonfun$run$4(SocketAuthServer.scala:70)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.security.SocketAuthServer$$anon$1.run(SocketAuthServer.scala:70)
Caused by: java.sql.SQLException: Exception thrown in awaitResult:
at com.databricks.spark.sqldw.JDBCWrapper.executeInterruptibly(SqlDWJDBCWrapper.scala:137)
at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$1(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$1$adapted(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.withPreparedStatement(SqlDWJDBCWrapper.scala:362)
at com.databricks.spark.sqldw.JDBCWrapper.executeInterruptibly(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.createExternalTable(SqlDWJDBCWrapper.scala:536)
at com.databricks.spark.sqldw.JDBCWrapper.withExternalTable(SqlDWJDBCWrapper.scala:548)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$5(SqlDWRelation.scala:241)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.sqldw.JDBCWrapper.withTempFileFormat(SqlDWJDBCWrapper.scala:523)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$4(SqlDWRelation.scala:241)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.sqldw.JDBCWrapper.withDataSource(SqlDWJDBCWrapper.scala:489)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$3(SqlDWRelation.scala:240)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$3$adapted(SqlDWRelation.scala:237)
at com.databricks.spark.sqldw.JDBCWrapper.withAzureStorageCredential(SqlDWJDBCWrapper.scala:445)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$2(SqlDWRelation.scala:237)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$2$adapted(SqlDWRelation.scala:235)
at com.databricks.spark.sqldw.JDBCWrapper.withConnection(SqlDWJDBCWrapper.scala:340)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$1(SqlDWRelation.scala:235)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:364)
at com.databricks.spark.util.SparkDatabricksProgressReporter$.withStatusCode(ProgressReporter.scala:34)
at com.databricks.spark.sqldw.SqlDWRelation.getRDDFromBlobStore(SqlDWRelation.scala:235)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$buildScan$1(SqlDWRelation.scala:149)
at com.databricks.spark.sqldw.Utils$.wrapExceptions(Utils.scala:692)
... 63 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Columns with UniqueIdentifier types are not supported in external tables.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1632)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:602)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:524)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7418)
at com.microsoft.sqls
*** WARNING: max output size exceeded, skipping output. ***
cala:298)
at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:365)
at org.apache.spark.sql.execution.QueryExecution.explainStringLocal(QueryExecution.scala:329)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$8(SQLExecution.scala:203)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:388)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:187)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:973)
at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:142)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:338)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4044)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2(Dataset.scala:3950)
at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$2$adapted(Dataset.scala:3949)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$2(SocketAuthServer.scala:153)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1690)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1(SocketAuthServer.scala:155)
at org.apache.spark.security.SocketAuthServer$.$anonfun$serveToStream$1$adapted(SocketAuthServer.scala:150)
at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:124)
at org.apache.spark.security.SocketFuncServer.handleConnection(SocketAuthServer.scala:117)
at org.apache.spark.security.SocketAuthServer$$anon$1.$anonfun$run$4(SocketAuthServer.scala:70)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.security.SocketAuthServer$$anon$1.run(SocketAuthServer.scala:70)
Caused by: java.sql.SQLException: Exception thrown in awaitResult:
at com.databricks.spark.sqldw.JDBCWrapper.executeInterruptibly(SqlDWJDBCWrapper.scala:137)
at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$1(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$1$adapted(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.withPreparedStatement(SqlDWJDBCWrapper.scala:362)
at com.databricks.spark.sqldw.JDBCWrapper.executeInterruptibly(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.createExternalTable(SqlDWJDBCWrapper.scala:536)
at com.databricks.spark.sqldw.JDBCWrapper.withExternalTable(SqlDWJDBCWrapper.scala:548)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$5(SqlDWRelation.scala:241)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.sqldw.JDBCWrapper.withTempFileFormat(SqlDWJDBCWrapper.scala:523)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$4(SqlDWRelation.scala:241)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.spark.sqldw.JDBCWrapper.withDataSource(SqlDWJDBCWrapper.scala:489)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$3(SqlDWRelation.scala:240)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$3$adapted(SqlDWRelation.scala:237)
at com.databricks.spark.sqldw.JDBCWrapper.withAzureStorageCredential(SqlDWJDBCWrapper.scala:445)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$2(SqlDWRelation.scala:237)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$2$adapted(SqlDWRelation.scala:235)
at com.databricks.spark.sqldw.JDBCWrapper.withConnection(SqlDWJDBCWrapper.scala:340)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$getRDDFromBlobStore$1(SqlDWRelation.scala:235)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.backend.daemon.driver.ProgressReporter$.withStatusCode(ProgressReporter.scala:364)
at com.databricks.spark.util.SparkDatabricksProgressReporter$.withStatusCode(ProgressReporter.scala:34)
at com.databricks.spark.sqldw.SqlDWRelation.getRDDFromBlobStore(SqlDWRelation.scala:235)
at com.databricks.spark.sqldw.SqlDWRelation.$anonfun$buildScan$1(SqlDWRelation.scala:149)
at com.databricks.spark.sqldw.Utils$.wrapExceptions(Utils.scala:692)
... 63 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: Columns with UniqueIdentifier types are not supported in external tables.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1632)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:602)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:524)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7418)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:3272)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:247)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:222)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.execute(SQLServerPreparedStatement.java:505)
at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$2(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$2$adapted(SqlDWJDBCWrapper.scala:115)
at com.databricks.spark.sqldw.JDBCWrapper.$anonfun$executeInterruptibly$3(SqlDWJDBCWrapper.scala:129)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Any thoughts on why this would fail?
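One common workaround (an assumption, not something DataProfiler does itself) is to push down a query that casts `uniqueidentifier` columns to `varchar(36)` before the Synapse connector stages the data as an external table, since the error says that type is rejected there. A sketch of building such a query:

```python
def cast_uniqueidentifiers(columns, table):
    # columns: list of (name, sql_type) pairs from the table schema.
    # uniqueidentifier columns are cast to varchar(36); others pass through.
    parts = [
        f"CAST({name} AS varchar(36)) AS {name}"
        if sql_type.lower() == "uniqueidentifier" else name
        for name, sql_type in columns
    ]
    return f"SELECT {', '.join(parts)} FROM {table}"

print(cast_uniqueidentifiers(
    [("Id", "uniqueidentifier"), ("Name", "nvarchar")],
    "P360.Staging_InsertQ4Alignmnets"))
```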
|
closed
|
2022-10-04T00:37:44Z
|
2022-10-05T22:44:13Z
|
https://github.com/capitalone/DataProfiler/issues/674
|
[] |
dilkushpatel
| 4
|
hankcs/HanLP
|
nlp
| 689
|
How to reload a custom dictionary programmatically
|
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
  - [Main documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either.
* I understand that the open-source community is a free community that gathers out of shared interest and assumes no responsibility or obligation. I will be polite and thank everyone who helps me.
* [x] I check this box to confirm the items above.
## Version
The current latest version is:
The version I am using is: hanlp-1.3.2-portable.jar, hanlp-lucene-plugin-1.1.2.jar
## Actual scenario
One web container runs two webapps: webappA and Solr. Solr uses HanLP as its Chinese tokenizer, with a user dictionary configured (the customDictionaryPath property). webappA provides a feature for editing the dictionary online; after an edit, Solr should pick up the change without restarting the Tomcat container.
## Approach
Start a daemon thread in HanLPTokenizerFactory (in the HanLP Solr plugin) that periodically checks the dictionary's checksum; when it changes, delete the .bin cache file and reload the dictionary.
## My questions
1. I have added the following static method to CustomDictionary, but it reloads every custom dictionary. Is there a way to reload only a single file?
```
public static void reloadDic(){
    trie = null;
    dat = new DoubleArrayTrie<CoreDictionary.Attribute>();
    loadMainDictionary(path[0]);
}
```
2. After calling CustomDictionary.insert(), the new entry takes effect, but why does dat.size() not change?
|
closed
|
2017-11-23T03:21:31Z
|
2020-01-01T10:51:43Z
|
https://github.com/hankcs/HanLP/issues/689
|
[
"ignored"
] |
kjdongzh
| 2
|
public-apis/public-apis
|
api
| 3,735
|
A collective list of free APIs
|
closed
|
2023-12-31T17:36:17Z
|
2024-01-01T06:16:27Z
|
https://github.com/public-apis/public-apis/issues/3735
|
[] |
Elfera
| 0
|
|
tflearn/tflearn
|
data-science
| 1,082
|
Error during model.load if wrapped inside custom graph
|
This code works:
```
next_network = network.classify_next_item(network.game_config, network.next_network_config)
self.next_model = tflearn.DNN(next_network, tensorboard_verbose=0)
self.next_model.load(next_item_model_path)
```
this code doesn't:
```
self.next_graph = tf.Graph()
with self.next_graph.as_default():
next_network = network.classify_next_item(network.game_config, network.next_network_config)
self.next_model = tflearn.DNN(next_network, tensorboard_verbose=0)
self.next_model.load(next_item_model_path)
```
I need to enclose this network in a separate graph because I have to load a total of 5 separate networks, which would clash if they all defaulted to the same graph as in the first code sample.
Here is my network:
```
final_input_layer = input_data(shape=[None, in_dim], name='input')
net = relu(batch_normalization(fully_connected(final_input_layer, 256, bias=False, activation=None, regularizer="L2")))
for i in range(5):
    net = highway(net, 256, activation='elu', regularizer="L2", transform_dropout=0.7)
net = fully_connected(net, out_dim, activation=None)
return regression(net, optimizer='adam', to_one_hot=True, n_classes=total_num_items, shuffle_batches=True, learning_rate=learning_rate,
                  loss='binary_crossentropy', name='target', metric=multi_class_top_k_acc)
```
The error message is:
> WARNING:tensorflow:From E:\user\myproject\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
> Instructions for updating:
> Use the retry module or similar alternatives.
> hdf5 is not supported on this machine (please install/reinstall h5py for optimal experience)
> curses is not supported on this machine (please install/reinstall curses for an optimal experience)
> Using resolution 1920,1080
> Initializing neural networks...
> WARNING:tensorflow:From E:\user\myproject\lib\site-packages\tflearn\initializations.py:119: UniformUnitScaling.__init__ (from tensorflow.python.ops.init_ops) is deprecated and will be removed in a future version.
> Instructions for updating:
> Use tf.initializers.variance_scaling instead with distribution=uniform to get equivalent behavior.
> WARNING:tensorflow:From E:\user\myproject\lib\site-packages\tflearn\objectives.py:66: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
> Instructions for updating:
> keep_dims is deprecated, use keepdims instead
> 2018-07-30 15:03:52.443207: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1273] OP_REQUIRES failed at save_restore_v2_ops.cc:184 : Not found: Key BatchNormalization/is_training not found in checkpoint
> Traceback (most recent call last):
> File "E:\user\myproject\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
> return fn(*args)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn
> options, feed_dict, fetch_list, target_list, run_metadata)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun
> status, run_metadata)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
> c_api.TF_GetCode(self.status.status))
> tensorflow.python.framework.errors_impl.NotFoundError: Key BatchNormalization/is_training not found in checkpoint
> [[Node: save_1/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_1/Const_0_0, save_1/RestoreV2/tensor_names, save_1/RestoreV2/shape_and_slices)]]
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "E:/user/myproject/screenshot.py", line 26, in <module>
> predict = Predictor()
> File "E:\user\myproject\predict.py", line 95, in __init__
> self.next_model.load(next_item_model_path)
> File "E:\user\myproject\lib\site-packages\tflearn\models\dnn.py", line 308, in load
> self.trainer.restore(model_file, weights_only, **optargs)
> File "E:\user\myproject\lib\site-packages\tflearn\helpers\trainer.py", line 490, in restore
> self.restorer.restore(self.session, model_file)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\training\saver.py", line 1775, in restore
> {self.saver_def.filename_tensor_name: save_path})
> File "E:\user\myproject\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
> run_metadata_ptr)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\client\session.py", line 1140, in _run
> feed_dict_tensor, options, run_metadata)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
> run_metadata)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
> raise type(e)(node_def, op, message)
> tensorflow.python.framework.errors_impl.NotFoundError: Key BatchNormalization/is_training not found in checkpoint
> [[Node: save_1/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_1/Const_0_0, save_1/RestoreV2/tensor_names, save_1/RestoreV2/shape_and_slices)]]
>
> Caused by op 'save_1/RestoreV2', defined at:
> File "E:/user/myproject/screenshot.py", line 26, in <module>
> predict = Predictor()
> File "E:\user\myproject\predict.py", line 94, in __init__
> self.next_model = tflearn.DNN(next_network, tensorboard_verbose=0)
> File "E:\user\myproject\lib\site-packages\tflearn\models\dnn.py", line 65, in __init__
> best_val_accuracy=best_val_accuracy)
> File "E:\user\myproject\lib\site-packages\tflearn\helpers\trainer.py", line 147, in __init__
> allow_empty=True)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\training\saver.py", line 1311, in __init__
> self.build()
> File "E:\user\myproject\lib\site-packages\tensorflow\python\training\saver.py", line 1320, in build
> self._build(self._filename, build_save=True, build_restore=True)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\training\saver.py", line 1357, in _build
> build_save=build_save, build_restore=build_restore)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\training\saver.py", line 809, in _build_internal
> restore_sequentially, reshape)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\training\saver.py", line 448, in _AddRestoreOps
> restore_sequentially)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\training\saver.py", line 860, in bulk_restore
> return io_ops.restore_v2(filename_tensor, names, slices, dtypes)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1541, in restore_v2
> shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
> op_def=op_def)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\framework\ops.py", line 3290, in create_op
> op_def=op_def)
> File "E:\user\myproject\lib\site-packages\tensorflow\python\framework\ops.py", line 1654, in __init__
> self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
>
> NotFoundError (see above for traceback): Key BatchNormalization/is_training not found in checkpoint
> [[Node: save_1/RestoreV2 = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_1/Const_0_0, save_1/RestoreV2/tensor_names, save_1/RestoreV2/shape_and_slices)]]
|
open
|
2018-07-30T13:11:08Z
|
2018-07-30T13:13:32Z
|
https://github.com/tflearn/tflearn/issues/1082
|
[] |
deekay42
| 0
|
python-restx/flask-restx
|
api
| 420
|
flask restx user case to adopt flask-oidc authentication
|
Hello Team,
Recently I have been working on OIDC auth for my flask-restx app.
Most examples I see online about flask-oidc are based on a bare-bones Flask app.
That usually works.
But through googling, I cannot find any example where flask-restx adopts flask-oidc for authentication, so we can enjoy the benefit of testing the API through the Swagger UI.
Any thoughts or a quick example in mind?
|
open
|
2022-03-17T19:13:57Z
|
2022-03-17T19:13:57Z
|
https://github.com/python-restx/flask-restx/issues/420
|
[
"question"
] |
zhoupoko2000
| 0
|
gunthercox/ChatterBot
|
machine-learning
| 1,887
|
Can we Teach The Bot our name? (Question)
|
I am able to make a bot and have good basic conversations, but where I get stuck is making it learn something, like our name or maybe our age. I would love it if anyone could show me how to do something like this:
**bot:** Hello, What is your name?
**User:** I am DMion
**bot:** Nice to meet you DMion!
**I am new to programming so any help is appreciated!**
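One simple approach (a sketch, not part of the ChatterBot API) is to pattern-match the user's message for a name before handing it to the bot, then remember it for later replies:

```python
import re

def extract_name(text):
    # Look for common self-introduction phrasings.
    m = re.search(r"\b(?:i am|i'm|my name is)\s+([A-Za-z]+)", text, re.IGNORECASE)
    return m.group(1) if m else None

memory = {}
name = extract_name("I am DMion")
if name:
    memory["name"] = name
    print(f"Nice to meet you {memory['name']}!")  # Nice to meet you DMion!
```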
|
open
|
2019-12-20T07:04:21Z
|
2019-12-20T07:04:21Z
|
https://github.com/gunthercox/ChatterBot/issues/1887
|
[] |
DMion-Div
| 0
|
plotly/plotly.py
|
plotly
| 4,719
|
Feature request: Support for plotting `pint` scalars
|
I love the figures generated with plotly. I want to use it as the backend for a Pandas dataframe that has units of measurement (via Pint). Unfortunately, these plots fail with the error:
```python
AttributeError: 'float' object has no attribute 'tolist'
```
It seems that the issue is that Plotly is not recognizing Pint scalars as scalars. A minimal working example is given in the following [pint-pandas issue](https://github.com/hgrecco/pint-pandas/issues/249#issuecomment-2293758537).
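Until plotly handles them natively, one workaround is to strip the unit wrapper before plotting; pint Quantities expose their bare value as `.magnitude`. The sketch below uses a stand-in object so it runs without pint installed:

```python
def to_plain(values):
    # Replace unit-wrapped scalars with their bare magnitude so plotly
    # sees ordinary floats; non-wrapped values pass through unchanged.
    return [getattr(v, "magnitude", v) for v in values]

class FakeQuantity:          # stand-in for pint.Quantity in this sketch
    def __init__(self, magnitude):
        self.magnitude = magnitude

print(to_plain([FakeQuantity(1.5), 2.0]))  # [1.5, 2.0]
```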
Thanks.
|
open
|
2024-08-16T18:54:54Z
|
2024-08-21T14:17:34Z
|
https://github.com/plotly/plotly.py/issues/4719
|
[
"feature",
"P3"
] |
hdavid16
| 0
|
huggingface/datasets
|
deep-learning
| 7,425
|
load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable
|
### Describe the bug
```python
from datasets import load_dataset
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
```
or
```python
configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True)
```
Both fail with:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 2131, in load_dataset
    builder_instance = load_dataset_builder(
  File "/workspace/miniconda/envs/grpo/lib/python3.10/site-packages/datasets/load.py", line 1888, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
TypeError: 'NoneType' object is not callable
```
### Steps to reproduce the bug
```python
from datasets import get_dataset_config_names
configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True)
```
or
```python
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
```
### Expected behavior
`load_dataset` should load `livecodebench/code_generation_lite` successfully.
### Environment info
`datasets` version: '3.3.2'
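The `TypeError` happens because `builder_cls` resolves to `None`: `livecodebench/code_generation_lite` ships a loading script, and script-based loaders were removed in `datasets` 3.0 (an assumption based on the 3.x release notes), so on 3.3.2 there is no builder class left to call. A tiny guard sketch; the practical workaround is pinning `pip install "datasets<3.0"`:

```python
def scripts_supported(datasets_version):
    # Script-based dataset loaders were dropped in the 3.x line
    # (assumption), so anything >= 3.0 cannot load this repo directly.
    major = int(datasets_version.split(".")[0])
    return major < 3

print(scripts_supported("3.3.2"))  # False -> pin datasets<3.0
```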
|
open
|
2025-02-27T07:36:02Z
|
2025-03-24T05:57:06Z
|
https://github.com/huggingface/datasets/issues/7425
|
[] |
dshwei
| 9
|