| repo string (2–152 chars, nullable) | file string (15–239 chars) | code string (0–58.4M chars) | file_length int64 (0–58.4M) | avg_line_length float64 (0–1.81M) | max_line_length int64 (0–12.7M) | extension_type string (364 classes) |
|---|---|---|---|---|---|---|
mmdetection | mmdetection-master/configs/grid_rcnn/README.md | # Grid R-CNN
> [Grid R-CNN](https://arxiv.org/abs/1811.12030)
<!-- [ALGORITHM] -->
## Abstract
This paper proposes a novel object detection framework named Grid R-CNN, which adopts a grid guided localization mechanism for accurate object detection. Different from the traditional regression based methods, the Grid R... | 5,884 | 121.604167 | 1,057 | md |
mmdetection | mmdetection-master/configs/grid_rcnn/grid_rcnn_r101_fpn_gn-head_2x_coco.py | _base_ = './grid_rcnn_r50_fpn_gn-head_2x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 206 | 24.875 | 61 | py |
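The config above starts with `_base_ = './grid_rcnn_r50_fpn_gn-head_2x_coco.py'` and overrides only a few backbone keys; MMDetection (via mmcv's `Config`) merges the child dict recursively into the base. A minimal sketch of that merge, assuming plain nested dicts (the real mmcv implementation also handles lists and the `_delete_` key):

```python
def merge_config(base, child):
    """Recursively merge a child config dict into a copy of the base.

    Simplified model of how mmcv's Config resolves `_base_`: dict
    values are merged key by key; any other value overrides the base.
    """
    merged = dict(base)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# The r101 config only overrides backbone depth and init_cfg;
# everything else is inherited from the r50 base.
base = {'backbone': {'type': 'ResNet', 'depth': 50, 'num_stages': 4}}
child = {'backbone': {'depth': 101,
                      'init_cfg': {'type': 'Pretrained',
                                   'checkpoint': 'torchvision://resnet101'}}}
print(merge_config(base, child))
```

Applied to the configs above, this is why a two-line override file is a complete model definition.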
mmdetection | mmdetection-master/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_1x_coco.py | _base_ = ['grid_rcnn_r50_fpn_gn-head_2x_coco.py']
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=0.001,
step=[8, 11])
checkpoint_config = dict(interval=1)
# runtime settings
runner = dict(type='EpochBasedRunner', max_epochs=12)
| 300 | 24.083333 | 53 | py |
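The `lr_config` above requests linear warmup for 500 iterations starting at 0.001× the base LR, then step decay at epochs 8 and 11. A hedged sketch of the multiplier such a schedule applies (the actual scheduler lives in mmcv's LR hooks; the decay factor `gamma=0.1` per step is mmcv's default and an assumption here):

```python
def lr_multiplier(epoch, it, warmup_iters=500, warmup_ratio=0.001,
                  steps=(8, 11), gamma=0.1):
    """Factor applied to the base LR at a given epoch/iteration.

    Linear warmup ramps the factor from `warmup_ratio` up to 1.0 over
    the first `warmup_iters` iterations; afterwards the factor decays
    by `gamma` at each epoch listed in `steps`.
    """
    if it < warmup_iters:
        # Linear warmup: interpolate between warmup_ratio and 1.0.
        k = (1 - it / warmup_iters) * (1 - warmup_ratio)
        return 1 - k
    decays = sum(1 for s in steps if epoch >= s)
    return gamma ** decays

print(lr_multiplier(0, 0))       # warmup start, approximately warmup_ratio
print(lr_multiplier(5, 10_000))  # between warmup and the first step: 1.0
print(lr_multiplier(9, 50_000))  # after epoch 8, before epoch 11: 0.1
```

With `max_epochs=12` and `step=[8, 11]`, the schedule ends with two decays applied over the final epoch.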
mmdetection | mmdetection-master/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_2x_coco.py | _base_ = [
'../_base_/datasets/coco_detection.py', '../_base_/default_runtime.py'
]
# model settings
model = dict(
type='GridRCNN',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requir... | 4,315 | 31.69697 | 79 | py |
mmdetection | mmdetection-master/configs/grid_rcnn/grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py | _base_ = './grid_rcnn_r50_fpn_gn-head_2x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch',
init_cfg=dict(
type='Pretra... | 697 | 26.92 | 76 | py |
mmdetection | mmdetection-master/configs/grid_rcnn/grid_rcnn_x101_64x4d_fpn_gn-head_2x_coco.py | _base_ = './grid_rcnn_x101_32x4d_fpn_gn-head_2x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch',
init_cfg=dict(
type=... | 380 | 26.214286 | 76 | py |
mmdetection | mmdetection-master/configs/grid_rcnn/metafile.yml | Collections:
- Name: Grid R-CNN
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- RPN
- Dilated Convolution
- ResNet
- RoIAlign
Paper:
URL: https:/... | 3,242 | 30.794118 | 174 | yml |
mmdetection | mmdetection-master/configs/groie/README.md | # GRoIE
> [A novel Region of Interest Extraction Layer for Instance Segmentation](https://arxiv.org/abs/2004.13665)
<!-- [ALGORITHM] -->
## Abstract
Given the wide diffusion of deep neural network architectures for computer vision tasks, several new applications are nowadays more and more feasible. Among them, a pa... | 9,994 | 135.917808 | 592 | md |
mmdetection | mmdetection-master/configs/groie/faster_rcnn_r50_fpn_groie_1x_coco.py | _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
# model settings
model = dict(
roi_head=dict(
bbox_roi_extractor=dict(
type='GenericRoIExtractor',
aggregation='sum',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
out_channels=256,
... | 834 | 31.115385 | 77 | py |
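Unlike the standard single-level RoI extractor, `GenericRoIExtractor` with `aggregation='sum'` pools each RoI from every FPN level and sums the results element-wise. A toy sketch of that aggregation step, with plain Python lists standing in for the pooled feature tensors (the shapes are illustrative, not the real 256×7×7 maps):

```python
def aggregate_sum(per_level_feats):
    """Element-wise sum of RoI features pooled from each FPN level.

    `per_level_feats` is a list with one equally sized flat feature
    vector per level, as RoIAlign with a shared output_size would give.
    """
    out = [0.0] * len(per_level_feats[0])
    for feats in per_level_feats:
        for i, v in enumerate(feats):
            out[i] += v
    return out

# Four FPN levels, each contributing a flattened pooled feature.
levels = [[0.1] * 4, [0.2] * 4, [0.3] * 4, [0.4] * 4]
print(aggregate_sum(levels))  # each element approximately 1.0
```

The design point is that every level contributes to every RoI, rather than assigning each RoI to a single level by scale.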
mmdetection | mmdetection-master/configs/groie/grid_rcnn_r50_fpn_gn-head_groie_1x_coco.py | _base_ = '../grid_rcnn/grid_rcnn_r50_fpn_gn-head_1x_coco.py'
# model settings
model = dict(
roi_head=dict(
bbox_roi_extractor=dict(
type='GenericRoIExtractor',
aggregation='sum',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
out_channels=25... | 1,534 | 32.369565 | 78 | py |
mmdetection | mmdetection-master/configs/groie/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py | _base_ = '../gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py'
# model settings
model = dict(
roi_head=dict(
bbox_roi_extractor=dict(
type='GenericRoIExtractor',
aggregation='sum',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
... | 1,552 | 32.76087 | 78 | py |
mmdetection | mmdetection-master/configs/groie/mask_rcnn_r50_fpn_groie_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
# model settings
model = dict(
roi_head=dict(
bbox_roi_extractor=dict(
type='GenericRoIExtractor',
aggregation='sum',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
out_channels=256,
... | 1,526 | 32.195652 | 78 | py |
mmdetection | mmdetection-master/configs/groie/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py | _base_ = '../gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py'
# model settings
model = dict(
roi_head=dict(
bbox_roi_extractor=dict(
type='GenericRoIExtractor',
aggregation='sum',
roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
... | 1,551 | 32.73913 | 78 | py |
mmdetection | mmdetection-master/configs/groie/metafile.yml | Collections:
- Name: GRoIE
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Generic RoI Extractor
- FPN
- RPN
- ResNet
- RoIAlign
Paper:
U... | 3,339 | 34.157895 | 219 | yml |
mmdetection | mmdetection-master/configs/guided_anchoring/README.md | # Guided Anchoring
> [Region Proposal by Guided Anchoring](https://arxiv.org/abs/1901.03278)
<!-- [ALGORITHM] -->
## Abstract
Region anchors are the cornerstone of modern object detection techniques. State-of-the-art detectors mostly rely on a dense anchoring scheme, where anchors are sampled uniformly over the spa... | 12,044 | 199.75 | 1,206 | md |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_fast_r50_caffe_fpn_1x_coco.py | _base_ = '../fast_rcnn/fast_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
style='caffe',
in... | 2,407 | 35.484848 | 78 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_faster_r101_caffe_fpn_1x_coco.py | _base_ = './ga_faster_r50_caffe_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet101_caffe')))
| 222 | 26.875 | 67 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_faster_r50_caffe_fpn_1x_coco.py | _base_ = '../faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco.py'
model = dict(
rpn_head=dict(
_delete_=True,
type='GARPNHead',
in_channels=256,
feat_channels=256,
approx_anchor_generator=dict(
type='AnchorGenerator',
octave_base_scale=8,
scal... | 2,408 | 35.5 | 77 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_faster_r50_fpn_1x_coco.py | _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
model = dict(
rpn_head=dict(
_delete_=True,
type='GARPNHead',
in_channels=256,
feat_channels=256,
approx_anchor_generator=dict(
type='AnchorGenerator',
octave_base_scale=8,
scales_per... | 2,402 | 35.409091 | 77 | py |
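The `rpn_head` override above sets `_delete_=True`, which tells the config merger to discard the base's `rpn_head` wholesale instead of merging into it — needed here because `GARPNHead`'s keys are not compatible with the base `RPNHead`'s. A simplified sketch of that rule (the real logic is inside mmcv's `Config`):

```python
def merge_with_delete(base, child):
    """Merge child into base; a child dict containing `_delete_: True`
    replaces the base value wholesale instead of merging into it."""
    merged = dict(base)
    for key, value in child.items():
        if isinstance(value, dict):
            value = dict(value)  # don't mutate the caller's config
            if value.pop('_delete_', False):
                merged[key] = value          # replace, don't merge
            elif isinstance(merged.get(key), dict):
                merged[key] = merge_with_delete(merged[key], value)
            else:
                merged[key] = value
        else:
            merged[key] = value
    return merged

base = {'rpn_head': {'type': 'RPNHead',
                     'anchor_generator': {'scales': [8]}}}
child = {'rpn_head': {'_delete_': True, 'type': 'GARPNHead',
                      'in_channels': 256}}
print(merge_with_delete(base, child))
```

Without `_delete_`, the stale `anchor_generator` from the base would survive the merge and conflict with the guided-anchoring head.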
mmdetection | mmdetection-master/configs/guided_anchoring/ga_faster_x101_32x4d_fpn_1x_coco.py | _base_ = './ga_faster_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 419 | 27 | 76 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_faster_x101_64x4d_fpn_1x_coco.py | _base_ = './ga_faster_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 419 | 27 | 76 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_1x_coco.py | _base_ = './ga_retinanet_r50_caffe_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet101_caffe')))
| 225 | 27.25 | 67 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_retinanet_r101_caffe_fpn_mstrain_2x.py | _base_ = '../_base_/default_runtime.py'
# model settings
model = dict(
type='RetinaNet',
backbone=dict(
type='ResNet',
depth=101,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
norm_eval=True,
... | 5,095 | 28.976471 | 74 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_retinanet_r50_caffe_fpn_1x_coco.py | _base_ = '../retinanet/retinanet_r50_caffe_fpn_1x_coco.py'
model = dict(
bbox_head=dict(
_delete_=True,
type='GARetinaHead',
num_classes=80,
in_channels=256,
stacked_convs=4,
feat_channels=256,
approx_anchor_generator=dict(
type='AnchorGenerator',
... | 2,055 | 31.634921 | 74 | py |
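The approx anchor generator above uses `octave_base_scale=8` with `scales_per_octave=3`, i.e. anchor scales spaced evenly in log2 within one octave, the RetinaNet convention. A short sketch of how those scales are derived (assumed to mirror MMDetection's `AnchorGenerator`; verify against the actual implementation):

```python
def octave_scales(octave_base_scale=8, scales_per_octave=3):
    """Anchor scales for one octave: base * 2**(i / scales_per_octave)."""
    return [octave_base_scale * 2 ** (i / scales_per_octave)
            for i in range(scales_per_octave)]

print(octave_scales())  # approximately [8.0, 10.08, 12.70]
```

Multiplied by each FPN level's stride, these give anchors covering a continuous range of object sizes across levels.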
mmdetection | mmdetection-master/configs/guided_anchoring/ga_retinanet_r50_fpn_1x_coco.py | _base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
model = dict(
bbox_head=dict(
_delete_=True,
type='GARetinaHead',
num_classes=80,
in_channels=256,
stacked_convs=4,
feat_channels=256,
approx_anchor_generator=dict(
type='AnchorGenerator',
... | 2,049 | 31.539683 | 74 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_retinanet_x101_32x4d_fpn_1x_coco.py | _base_ = './ga_retinanet_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch'... | 422 | 27.2 | 76 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_retinanet_x101_64x4d_fpn_1x_coco.py | _base_ = './ga_retinanet_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch'... | 422 | 27.2 | 76 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_rpn_r101_caffe_fpn_1x_coco.py | _base_ = './ga_rpn_r50_caffe_fpn_1x_coco.py'
# model settings
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet101_caffe')))
| 236 | 25.333333 | 67 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_rpn_r50_caffe_fpn_1x_coco.py | _base_ = '../rpn/rpn_r50_caffe_fpn_1x_coco.py'
model = dict(
rpn_head=dict(
_delete_=True,
type='GARPNHead',
in_channels=256,
feat_channels=256,
approx_anchor_generator=dict(
type='AnchorGenerator',
octave_base_scale=8,
scales_per_octave=3,... | 2,028 | 33.389831 | 74 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_rpn_r50_fpn_1x_coco.py | _base_ = '../rpn/rpn_r50_fpn_1x_coco.py'
model = dict(
rpn_head=dict(
_delete_=True,
type='GARPNHead',
in_channels=256,
feat_channels=256,
approx_anchor_generator=dict(
type='AnchorGenerator',
octave_base_scale=8,
scales_per_octave=3,
... | 2,022 | 33.288136 | 74 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_rpn_x101_32x4d_fpn_1x_coco.py | _base_ = './ga_rpn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 416 | 26.8 | 76 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/ga_rpn_x101_64x4d_fpn_1x_coco.py | _base_ = './ga_rpn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 416 | 26.8 | 76 | py |
mmdetection | mmdetection-master/configs/guided_anchoring/metafile.yml | Collections:
- Name: Guided Anchoring
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- FPN
- Guided Anchoring
- ResNet
Paper:
URL: https://arxiv.org/abs/1... | 8,296 | 32.591093 | 187 | yml |
mmdetection | mmdetection-master/configs/hrnet/README.md | # HRNet
> [Deep High-Resolution Representation Learning for Human Pose Estimation](https://arxiv.org/abs/1902.09212)
<!-- [BACKBONE] -->
## Abstract
This is an official pytorch implementation of Deep High-Resolution Representation Learning for Human Pose Estimation. In this work, we are interested in the human pose... | 25,312 | 247.166667 | 1,224 | md |
mmdetection | mmdetection-master/configs/hrnet/cascade_mask_rcnn_hrnetv2p_w18_20e_coco.py | _base_ = './cascade_mask_rcnn_hrnetv2p_w32_20e_coco.py'
# model settings
model = dict(
backbone=dict(
extra=dict(
stage2=dict(num_channels=(18, 36)),
stage3=dict(num_channels=(18, 36, 72)),
stage4=dict(num_channels=(18, 36, 72, 144))),
init_cfg=dict(
t... | 462 | 37.583333 | 77 | py |
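The only difference between the w18, w32, and w40 HRNet variants in these configs is the per-branch `num_channels`, which double at each lower-resolution branch: w·1, w·2, w·4, w·8. A sketch generating those tuples for a given base width (the doubling pattern is read directly off the configs above):

```python
def hrnet_stage_channels(width):
    """Per-stage branch channels for HRNetV2p: stage n has n branches,
    with branch i carrying width * 2**i channels."""
    return {f'stage{n}': tuple(width * 2 ** i for i in range(n))
            for n in (2, 3, 4)}

print(hrnet_stage_channels(18))
# {'stage2': (18, 36), 'stage3': (18, 36, 72), 'stage4': (18, 36, 72, 144)}
```

So switching variants only requires overriding these tuples plus the matching pretrained checkpoint, which is exactly what the w18 and w40 configs do.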
mmdetection | mmdetection-master/configs/hrnet/cascade_mask_rcnn_hrnetv2p_w32_20e_coco.py | _base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
_delete_=True,
type='HRNet',
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
... | 1,296 | 30.634146 | 76 | py |
mmdetection | mmdetection-master/configs/hrnet/cascade_mask_rcnn_hrnetv2p_w40_20e_coco.py | _base_ = './cascade_mask_rcnn_hrnetv2p_w32_20e_coco.py'
# model settings
model = dict(
backbone=dict(
type='HRNet',
extra=dict(
stage2=dict(num_channels=(40, 80)),
stage3=dict(num_channels=(40, 80, 160)),
stage4=dict(num_channels=(40, 80, 160, 320))),
init... | 487 | 36.538462 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/cascade_rcnn_hrnetv2p_w18_20e_coco.py | _base_ = './cascade_rcnn_hrnetv2p_w32_20e_coco.py'
# model settings
model = dict(
backbone=dict(
extra=dict(
stage2=dict(num_channels=(18, 36)),
stage3=dict(num_channels=(18, 36, 72)),
stage4=dict(num_channels=(18, 36, 72, 144))),
init_cfg=dict(
type='... | 457 | 37.166667 | 77 | py |
mmdetection | mmdetection-master/configs/hrnet/cascade_rcnn_hrnetv2p_w32_20e_coco.py | _base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
_delete_=True,
type='HRNet',
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
... | 1,291 | 30.512195 | 76 | py |
mmdetection | mmdetection-master/configs/hrnet/cascade_rcnn_hrnetv2p_w40_20e_coco.py | _base_ = './cascade_rcnn_hrnetv2p_w32_20e_coco.py'
# model settings
model = dict(
backbone=dict(
type='HRNet',
extra=dict(
stage2=dict(num_channels=(40, 80)),
stage3=dict(num_channels=(40, 80, 160)),
stage4=dict(num_channels=(40, 80, 160, 320))),
init_cfg=... | 482 | 36.153846 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/faster_rcnn_hrnetv2p_w18_1x_coco.py | _base_ = './faster_rcnn_hrnetv2p_w32_1x_coco.py'
# model settings
model = dict(
backbone=dict(
extra=dict(
stage2=dict(num_channels=(18, 36)),
stage3=dict(num_channels=(18, 36, 72)),
stage4=dict(num_channels=(18, 36, 72, 144))),
init_cfg=dict(
type='Pr... | 455 | 37 | 77 | py |
mmdetection | mmdetection-master/configs/hrnet/faster_rcnn_hrnetv2p_w18_2x_coco.py | _base_ = './faster_rcnn_hrnetv2p_w18_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 154 | 24.833333 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/faster_rcnn_hrnetv2p_w32_1x_coco.py | _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
_delete_=True,
type='HRNet',
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
... | 1,185 | 30.210526 | 76 | py |
mmdetection | mmdetection-master/configs/hrnet/faster_rcnn_hrnetv2p_w32_2x_coco.py | _base_ = './faster_rcnn_hrnetv2p_w32_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 153 | 29.8 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/faster_rcnn_hrnetv2p_w40_1x_coco.py | _base_ = './faster_rcnn_hrnetv2p_w32_1x_coco.py'
model = dict(
backbone=dict(
type='HRNet',
extra=dict(
stage2=dict(num_channels=(40, 80)),
stage3=dict(num_channels=(40, 80, 160)),
stage4=dict(num_channels=(40, 80, 160, 320))),
init_cfg=dict(
t... | 463 | 37.666667 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/faster_rcnn_hrnetv2p_w40_2x_coco.py | _base_ = './faster_rcnn_hrnetv2p_w40_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 153 | 29.8 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_1x_coco.py | _base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'
model = dict(
backbone=dict(
extra=dict(
stage2=dict(num_channels=(18, 36)),
stage3=dict(num_channels=(18, 36, 72)),
stage4=dict(num_channels=(18, 36, 72, 144))),
init_cfg=dict(
type='Pretrained', c... | 443 | 39.363636 | 77 | py |
mmdetection | mmdetection-master/configs/hrnet/fcos_hrnetv2p_w18_gn-head_4x4_2x_coco.py | _base_ = './fcos_hrnetv2p_w18_gn-head_4x4_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 158 | 30.8 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/fcos_hrnetv2p_w18_gn-head_mstrain_640-800_4x4_2x_coco.py | _base_ = './fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py'
model = dict(
backbone=dict(
extra=dict(
stage2=dict(num_channels=(18, 36)),
stage3=dict(num_channels=(18, 36, 72)),
stage4=dict(num_channels=(18, 36, 72, 144))),
init_cfg=dict(
type... | 459 | 40.818182 | 77 | py |
mmdetection | mmdetection-master/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py | _base_ = '../fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py'
model = dict(
backbone=dict(
_delete_=True,
type='HRNet',
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
... | 2,333 | 31.873239 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/fcos_hrnetv2p_w32_gn-head_4x4_2x_coco.py | _base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 158 | 30.8 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py | _base_ = './fcos_hrnetv2p_w32_gn-head_4x4_1x_coco.py'
img_norm_cfg = dict(
mean=[103.53, 116.28, 123.675], std=[57.375, 57.12, 58.395], to_rgb=False)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Resize',
img_scale=[(1333, 64... | 1,337 | 32.45 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/fcos_hrnetv2p_w40_gn-head_mstrain_640-800_4x4_2x_coco.py | _base_ = './fcos_hrnetv2p_w32_gn-head_mstrain_640-800_4x4_2x_coco.py'
model = dict(
backbone=dict(
type='HRNet',
extra=dict(
stage2=dict(num_channels=(40, 80)),
stage3=dict(num_channels=(40, 80, 160)),
stage4=dict(num_channels=(40, 80, 160, 320))),
init_cf... | 484 | 39.416667 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/htc_hrnetv2p_w18_20e_coco.py | _base_ = './htc_hrnetv2p_w32_20e_coco.py'
model = dict(
backbone=dict(
extra=dict(
stage2=dict(num_channels=(18, 36)),
stage3=dict(num_channels=(18, 36, 72)),
stage4=dict(num_channels=(18, 36, 72, 144))),
init_cfg=dict(
type='Pretrained', checkpoint='o... | 431 | 38.272727 | 77 | py |
mmdetection | mmdetection-master/configs/hrnet/htc_hrnetv2p_w32_20e_coco.py | _base_ = '../htc/htc_r50_fpn_20e_coco.py'
model = dict(
backbone=dict(
_delete_=True,
type='HRNet',
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_ch... | 1,170 | 29.815789 | 76 | py |
mmdetection | mmdetection-master/configs/hrnet/htc_hrnetv2p_w40_20e_coco.py | _base_ = './htc_hrnetv2p_w32_20e_coco.py'
model = dict(
backbone=dict(
type='HRNet',
extra=dict(
stage2=dict(num_channels=(40, 80)),
stage3=dict(num_channels=(40, 80, 160)),
stage4=dict(num_channels=(40, 80, 160, 320))),
init_cfg=dict(
type='Pr... | 456 | 37.083333 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/htc_hrnetv2p_w40_28e_coco.py | _base_ = './htc_hrnetv2p_w40_20e_coco.py'
# learning policy
lr_config = dict(step=[24, 27])
runner = dict(type='EpochBasedRunner', max_epochs=28)
| 146 | 28.4 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/htc_x101_64x4d_fpn_16x1_28e_coco.py | _base_ = '../htc/htc_x101_64x4d_fpn_16x1_20e_coco.py'
# learning policy
lr_config = dict(step=[24, 27])
runner = dict(type='EpochBasedRunner', max_epochs=28)
| 158 | 30.8 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/mask_rcnn_hrnetv2p_w18_1x_coco.py | _base_ = './mask_rcnn_hrnetv2p_w32_1x_coco.py'
model = dict(
backbone=dict(
extra=dict(
stage2=dict(num_channels=(18, 36)),
stage3=dict(num_channels=(18, 36, 72)),
stage4=dict(num_channels=(18, 36, 72, 144))),
init_cfg=dict(
type='Pretrained', checkpoi... | 436 | 38.727273 | 77 | py |
mmdetection | mmdetection-master/configs/hrnet/mask_rcnn_hrnetv2p_w18_2x_coco.py | _base_ = './mask_rcnn_hrnetv2p_w18_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 151 | 29.4 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/mask_rcnn_hrnetv2p_w32_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
_delete_=True,
type='HRNet',
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
... | 1,181 | 30.105263 | 76 | py |
mmdetection | mmdetection-master/configs/hrnet/mask_rcnn_hrnetv2p_w32_2x_coco.py | _base_ = './mask_rcnn_hrnetv2p_w32_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 151 | 29.4 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/mask_rcnn_hrnetv2p_w40_1x_coco.py | _base_ = './mask_rcnn_hrnetv2p_w18_1x_coco.py'
model = dict(
backbone=dict(
type='HRNet',
extra=dict(
stage2=dict(num_channels=(40, 80)),
stage3=dict(num_channels=(40, 80, 160)),
stage4=dict(num_channels=(40, 80, 160, 320))),
init_cfg=dict(
typ... | 461 | 37.5 | 78 | py |
mmdetection | mmdetection-master/configs/hrnet/mask_rcnn_hrnetv2p_w40_2x_coco.py | _base_ = './mask_rcnn_hrnetv2p_w40_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 151 | 29.4 | 53 | py |
mmdetection | mmdetection-master/configs/hrnet/metafile.yml | Models:
- Name: faster_rcnn_hrnetv2p_w18_1x_coco
In Collection: Faster R-CNN
Config: configs/hrnet/faster_rcnn_hrnetv2p_w18_1x_coco.py
Metadata:
Training Memory (GB): 6.6
inference time (ms/im):
- value: 74.63
hardware: V100
backend: PyTorch
batch size: 1
... | 32,463 | 32.399177 | 203 | yml |
mmdetection | mmdetection-master/configs/htc/README.md | # HTC
> [Hybrid Task Cascade for Instance Segmentation](https://arxiv.org/abs/1901.07518)
<!-- [ALGORITHM] -->
## Abstract
Cascade is a classic yet powerful architecture that has boosted performance on various tasks. However, how to introduce cascade to instance segmentation remains an open question. A simple combi... | 8,571 | 125.058824 | 1,254 | md |
mmdetection | mmdetection-master/configs/htc/htc_r101_fpn_20e_coco.py | _base_ = './htc_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
# learning policy
lr_config = dict(step=[16, 19])
runner = dict(type='EpochBasedRunner', max_epochs=20)
| 295 | 28.6 | 61 | py |
mmdetection | mmdetection-master/configs/htc/htc_r50_fpn_1x_coco.py | _base_ = './htc_without_semantic_r50_fpn_1x_coco.py'
model = dict(
roi_head=dict(
semantic_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
out_channels=256,
featmap_strides=[8]),
semanti... | 1,998 | 34.070175 | 79 | py |
mmdetection | mmdetection-master/configs/htc/htc_r50_fpn_20e_coco.py | _base_ = './htc_r50_fpn_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 19])
runner = dict(type='EpochBasedRunner', max_epochs=20)
| 140 | 27.2 | 53 | py |
mmdetection | mmdetection-master/configs/htc/htc_without_semantic_r50_fpn_1x_coco.py | _base_ = [
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# model settings
model = dict(
type='HybridTaskCascade',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen... | 8,333 | 34.164557 | 79 | py |
mmdetection | mmdetection-master/configs/htc/htc_x101_32x4d_fpn_16x1_20e_coco.py | _base_ = './htc_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
... | 591 | 28.6 | 76 | py |
mmdetection | mmdetection-master/configs/htc/htc_x101_64x4d_fpn_16x1_20e_coco.py | _base_ = './htc_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
... | 591 | 28.6 | 76 | py |
mmdetection | mmdetection-master/configs/htc/htc_x101_64x4d_fpn_dconv_c3-c5_mstrain_400_1400_16x1_20e_coco.py | _base_ = './htc_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
... | 1,489 | 32.863636 | 79 | py |
mmdetection | mmdetection-master/configs/htc/metafile.yml | Collections:
- Name: HTC
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- FPN
- HTC
- RPN
- ResNet
- ResNeXt
- RoIAlign
Paper:
URL... | 4,924 | 28.668675 | 210 | yml |
mmdetection | mmdetection-master/configs/instaboost/README.md | # Instaboost
> [Instaboost: Boosting instance segmentation via probability map guided copy-pasting](https://arxiv.org/abs/1908.07801)
<!-- [ALGORITHM] -->
## Abstract
Instance segmentation requires a large number of training samples to achieve satisfactory performance and benefits from proper data augmentation. To ... | 7,033 | 118.220339 | 1,257 | md |
mmdetection | mmdetection-master/configs/instaboost/cascade_mask_rcnn_r101_fpn_instaboost_4x_coco.py | _base_ = './cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 217 | 26.25 | 61 | py |
mmdetection | mmdetection-master/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py | _base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='InstaBoost',
action_candidate=('normal', 'horizontal', 'skip'),
... | 1,023 | 34.310345 | 77 | py |
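The `img_norm_cfg` above carries per-channel mean/std in RGB order plus a `to_rgb` flag for images loaded as BGR. A sketch of the normalization applied to a single pixel (plain Python for clarity; the real pipeline operates on whole image arrays):

```python
def normalize_pixel(bgr, mean, std, to_rgb=True):
    """Normalize one BGR pixel: optionally flip to RGB, then subtract
    the per-channel mean and divide by the per-channel std."""
    channels = bgr[::-1] if to_rgb else bgr
    return [(c - m) / s for c, m, s in zip(channels, mean, std)]

mean = [123.675, 116.28, 103.53]   # RGB order, as in img_norm_cfg
std = [58.395, 57.12, 57.375]
print(normalize_pixel([103.53, 116.28, 123.675], mean, std))
# → [0.0, 0.0, 0.0]  (a pixel exactly at the mean)
```

Note the caffe-style configs elsewhere in this dump use different means and `to_rgb=False`, because their pretrained backbones expect BGR input.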
mmdetection | mmdetection-master/configs/instaboost/cascade_mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco.py | _base_ = './cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
... | 438 | 28.266667 | 76 | py |
mmdetection | mmdetection-master/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py | _base_ = './mask_rcnn_r50_fpn_instaboost_4x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 208 | 28.857143 | 61 | py |
mmdetection | mmdetection-master/configs/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='InstaBoost',
action_candidate=('normal', 'horizontal', 'skip'),
action_pr... | 1,012 | 33.931034 | 77 | py |
mmdetection | mmdetection-master/configs/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco.py | _base_ = './mask_rcnn_r50_fpn_instaboost_4x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='... | 430 | 27.733333 | 76 | py |
mmdetection | mmdetection-master/configs/instaboost/metafile.yml | Collections:
- Name: InstaBoost
Metadata:
Training Data: COCO
Training Techniques:
- InstaBoost
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Paper:
URL: https://arxiv.org/abs/1908.07801
Title: 'Instaboost: Boosting instance segmentat... | 3,340 | 32.41 | 188 | yml |
mmdetection | mmdetection-master/configs/lad/README.md | # LAD
> [Improving Object Detection by Label Assignment Distillation](https://arxiv.org/abs/2108.10520)
<!-- [ALGORITHM] -->
## Abstract
Label assignment in object detection aims to assign targets, foreground or background, to sampled regions in an image. Unlike labeling for image classification, this problem is no... | 5,027 | 110.733333 | 1,358 | md |
mmdetection | mmdetection-master/configs/lad/lad_r101_paa_r50_fpn_coco_1x.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
teacher_ckpt = 'https://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1x_coco/paa_r50_fpn_1x_coco_20200821-936edec3.pth' # noqa
model = dict(
type='LAD',
# student
bac... | 3,928 | 29.937008 | 138 | py |
mmdetection | mmdetection-master/configs/lad/lad_r50_paa_r101_fpn_coco_1x.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
teacher_ckpt = 'http://download.openmmlab.com/mmdetection/v2.0/paa/paa_r101_fpn_1x_coco/paa_r101_fpn_1x_coco_20200821-0a1825a4.pth' # noqa
model = dict(
type='LAD',
# student
ba... | 3,906 | 30.007937 | 139 | py |
mmdetection | mmdetection-master/configs/lad/metafile.yml | Collections:
- Name: Label Assignment Distillation
Metadata:
Training Data: COCO
Training Techniques:
- Label Assignment Distillation
- SGD with Momentum
- Weight Decay
Training Resources: 2x V100 GPUs
Architecture:
- FPN
- ResNet
Paper:
UR... | 1,522 | 32.108696 | 151 | yml |
mmdetection | mmdetection-master/configs/ld/README.md | # LD
> [Localization Distillation for Dense Object Detection](https://arxiv.org/abs/2102.12252)
<!-- [ALGORITHM] -->
## Abstract
Knowledge distillation (KD) has witnessed its powerful capability in learning compact models in object detection. Previous KD methods for object detection mostly focus on imitating deep f... | 7,063 | 159.545455 | 1,310 | md |
mmdetection | mmdetection-master/configs/ld/ld_r101_gflv1_r101dcn_fpn_coco_2x.py | _base_ = ['./ld_r18_gflv1_r101_fpn_coco_1x.py']
teacher_ckpt = 'https://download.openmmlab.com/mmdetection/v2.0/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco_20200630_102002-134b07df.pth' # noqa
model = dict(
teacher_config='configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_co... | 1,628 | 35.2 | 187 | py |
mmdetection | mmdetection-master/configs/ld/ld_r18_gflv1_r101_fpn_coco_1x.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
teacher_ckpt = 'https://download.openmmlab.com/mmdetection/v2.0/gfl/gfl_r101_fpn_mstrain_2x_coco/gfl_r101_fpn_mstrain_2x_coco_20200629_200126-dd12f847.pth' # noqa
model = dict(
type='Kn... | 2,120 | 32.666667 | 163 | py |
mmdetection | mmdetection-master/configs/ld/ld_r34_gflv1_r101_fpn_coco_1x.py | _base_ = ['./ld_r18_gflv1_r101_fpn_coco_1x.py']
model = dict(
backbone=dict(
type='ResNet',
depth=34,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch',
init_c... | 569 | 27.5 | 79 | py |
mmdetection | mmdetection-master/configs/ld/ld_r50_gflv1_r101_fpn_coco_1x.py | _base_ = ['./ld_r18_gflv1_r101_fpn_coco_1x.py']
model = dict(
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch',
init_c... | 572 | 27.65 | 79 | py |
mmdetection | mmdetection-master/configs/ld/metafile.yml | Collections:
- Name: Localization Distillation
Metadata:
Training Data: COCO
Training Techniques:
- Localization Distillation
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- FPN
- ResNet
Paper:
URL: https... | 2,433 | 33.771429 | 160 | yml |
mmdetection | mmdetection-master/configs/legacy_1.x/README.md | # Legacy Configs in MMDetection V1.x
<!-- [OTHERS] -->
Configs in this directory implement the legacy configs used by MMDetection V1.x and its model zoos.
To help users convert their models from V1.x to MMDetection V2.0, we provide v1.x configs for running inference with the converted v1.x models.
Due to the BC-breaking changes i... | 5,329 | 95.909091 | 366 | md |
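One of the BC-breaking changes these legacy configs compensate for is the anchor grid placement: the configs below pass `center_offset=0.5` to `LegacyAnchorGenerator`, whereas the V2.0 default generator uses an offset of 0. A minimal sketch of just the center computation (the legacy generator differs in other details too, such as corner rounding, which this omits):

```python
def anchor_centers(featmap_size, stride, center_offset=0.0):
    """Grid of anchor center coordinates for one feature-map level.

    Sketch of the shift the legacy configs reproduce: with
    center_offset=0.0 (V2.0 default) centers sit on the grid points
    x * stride; with center_offset=0.5 (LegacyAnchorGenerator) the whole
    grid is shifted by half a stride, matching V1.x behavior.
    """
    h, w = featmap_size
    return [((x + center_offset) * stride, (y + center_offset) * stride)
            for y in range(h) for x in range(w)]


v2_centers = anchor_centers((2, 2), stride=8)                      # V2.0 default
v1_centers = anchor_centers((2, 2), stride=8, center_offset=0.5)   # legacy
# The legacy grid is shifted by half a stride (4 px here) in both axes.
```

Loading a converted V1.x checkpoint with a V2.0-default generator would silently misplace every anchor by this half-stride, which is why the legacy configs exist at all.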
mmdetection | mmdetection-master/configs/legacy_1.x/cascade_mask_rcnn_r50_fpn_1x_coco_v1.py | _base_ = [
'../_base_/models/cascade_mask_rcnn_r50_fpn.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
type='CascadeRCNN',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indice... | 2,791 | 33.9 | 79 | py |
mmdetection | mmdetection-master/configs/legacy_1.x/faster_rcnn_r50_fpn_1x_coco_v1.py | _base_ = [
'../_base_/models/faster_rcnn_r50_fpn.py',
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
type='FasterRCNN',
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
rp... | 1,385 | 34.538462 | 79 | py |
mmdetection | mmdetection-master/configs/legacy_1.x/mask_rcnn_r50_fpn_1x_coco_v1.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
rpn_head=dict(
anchor_generator=dict(type='LegacyAnchorGenerator', center_offset=0.5),
bbox_coder=dict(type='Le... | 1,238 | 34.4 | 79 | py |
mmdetection | mmdetection-master/configs/legacy_1.x/retinanet_r50_caffe_fpn_1x_coco_v1.py | _base_ = './retinanet_r50_fpn_1x_coco_v1.py'
model = dict(
backbone=dict(
norm_cfg=dict(requires_grad=False),
norm_eval=True,
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron/resnet50_caffe')))
# use caffe img_norm
img_norm_c... | 1,413 | 32.666667 | 75 | py |
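The `# use caffe img_norm` comment above marks the other pixel-preprocessing convention: caffe-style backbones expect BGR input with per-channel mean subtraction only (std of 1), while torchvision-style backbones expect RGB input with full standardization. A sketch of the per-pixel arithmetic — the channel statistics below are the ones commonly used in MMDetection configs, quoted here from memory rather than from this truncated file:

```python
def normalize_pixel(pixel, mean, std):
    """Per-channel (value - mean) / std, as a Normalize transform would apply."""
    return [(v - m) / s for v, m, s in zip(pixel, mean, std)]


# Caffe-style: BGR channel order, mean subtraction only (std = 1):
caffe_mean, caffe_std = [103.530, 116.280, 123.675], [1.0, 1.0, 1.0]
# Torchvision-style: RGB channel order, full standardization:
tv_mean, tv_std = [123.675, 116.28, 103.53], [58.395, 57.12, 57.375]

bgr_pixel = [110.0, 120.0, 130.0]  # illustrative BGR pixel values
caffe_out = normalize_pixel(bgr_pixel, caffe_mean, caffe_std)
```

Mixing up the two conventions does not crash anything — it just quietly degrades accuracy — so each pretrained backbone must be paired with its matching `img_norm_cfg`.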
mmdetection | mmdetection-master/configs/legacy_1.x/retinanet_r50_fpn_1x_coco_v1.py | _base_ = [
'../_base_/models/retinanet_r50_fpn.py',
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
bbox_head=dict(
type='RetinaHead',
anchor_generator=dict(
type='LegacyAnchorGenerator',
... | 617 | 33.333333 | 73 | py |
mmdetection | mmdetection-master/configs/legacy_1.x/ssd300_coco_v1.py | _base_ = [
'../_base_/models/ssd300.py', '../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py'
]
# model settings
input_size = 300
model = dict(
bbox_head=dict(
type='SSDHead',
anchor_generator=dict(
type='LegacySSDAnchorGene... | 2,846 | 32.494118 | 79 | py |
mmdetection | mmdetection-master/configs/libra_rcnn/README.md | # Libra R-CNN
> [Libra R-CNN: Towards Balanced Learning for Object Detection](https://arxiv.org/abs/1904.02701)
<!-- [ALGORITHM] -->
## Abstract
Compared with model architectures, the training process, which is also crucial to the success of detectors, has received relatively little attention in object detection. In ... | 7,409 | 136.222222 | 1,049 | md |