| repo (string) | file (string) | code (string) | file_length (int64) | avg_line_length (float64) | max_line_length (int64) | extension_type (string) |
|---|---|---|---|---|---|---|
mmdetection | mmdetection-master/configs/faster_rcnn/metafile.yml | Collections:
- Name: Faster R-CNN
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- FPN
- RPN
- ResNet
- RoIPool
Paper:
URL: https://arxiv.org/abs/... | 16,194 | 34.750552 | 200 | yml |
mmdetection | mmdetection-master/configs/fcos/README.md | # FCOS
> [FCOS: Fully Convolutional One-Stage Object Detection](https://arxiv.org/abs/1904.01355)
<!-- [ALGORITHM] -->
## Abstract
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogue to semantic segmentation. Almost all state-of-the... | 8,433 | 182.347826 | 1,176 | md |
mmdetection | mmdetection-master/configs/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco.py | _base_ = 'fcos_r50_caffe_fpn_gn-head_1x_coco.py'
model = dict(
backbone=dict(
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe')),
bbox_head=dict(
norm_on_bbox=True,
centerness_on_reg=True,
dcn_on_last_conv=False,
... | 1,780 | 31.381818 | 72 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_dcn_1x_coco.py | _base_ = 'fcos_r50_caffe_fpn_gn-head_1x_coco.py'
model = dict(
backbone=dict(
dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
stage_with_dcn=(False, True, True, True),
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_c... | 1,904 | 32.421053 | 74 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_center_r50_caffe_fpn_gn-head_1x_coco.py | _base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py'
model = dict(bbox_head=dict(center_sampling=True, center_sample_radius=1.5))
| 128 | 42 | 76 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_r101_caffe_fpn_gn-head_1x_coco.py | _base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron/resnet101_caffe')))
| 224 | 27.125 | 66 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py | _base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron/resnet101_caffe')))
img_norm_cfg = dict(
mean=[102.9801, 115.9465, 122.7717], std=[1.0, 1.0, 1.0], to_rgb=False)... | 1,550 | 31.3125 | 75 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_r50_caffe_fpn_gn-head_1x_coco.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# model settings
model = dict(
type='FCOS',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
... | 3,281 | 29.672897 | 75 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_r50_caffe_fpn_gn-head_4x4_1x_coco.py | # TODO: Remove this config after benchmarking all related configs
_base_ = 'fcos_r50_caffe_fpn_gn-head_1x_coco.py'
data = dict(samples_per_gpu=4, workers_per_gpu=4)
| 166 | 32.4 | 65 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_r50_caffe_fpn_gn-head_fp16_1x_bs8x8_coco.py | _base_ = ['./fcos_r50_caffe_fpn_gn-head_1x_coco.py']
data = dict(samples_per_gpu=8, workers_per_gpu=8)
# optimizer
optimizer = dict(lr=0.04)
fp16 = dict(loss_scale='dynamic')
# learning policy
# In order to avoid non-convergence in the early stage of
# mixed-precision training, the warmup in the lr_config is set to ... | 457 | 31.714286 | 75 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_r50_caffe_fpn_gn-head_mstrain_640-800_2x_coco.py | _base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py'
img_norm_cfg = dict(
mean=[102.9801, 115.9465, 122.7717], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Resize',
img_scale=[(1333, 640), (1... | 1,331 | 32.3 | 75 | py |
mmdetection | mmdetection-master/configs/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_2x_coco.py | _base_ = './fcos_r50_caffe_fpn_gn-head_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval... | 1,966 | 31.245902 | 77 | py |
mmdetection | mmdetection-master/configs/fcos/metafile.yml | Collections:
- Name: FCOS
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- FPN
- Group Normalization
- ResNet
Paper:
URL: https://arxiv.org/abs/1904.01355... | 5,124 | 33.863946 | 222 | yml |
mmdetection | mmdetection-master/configs/foveabox/README.md | # FoveaBox
> [FoveaBox: Beyond Anchor-based Object Detector](https://arxiv.org/abs/1904.03797)
<!-- [ALGORITHM] -->
## Abstract
We present FoveaBox, an accurate, flexible, and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize predefined anchors to enum... | 9,490 | 174.759259 | 1,416 | md |
mmdetection | mmdetection-master/configs/foveabox/fovea_align_r101_fpn_gn-head_4x4_2x_coco.py | _base_ = './fovea_r50_fpn_4x4_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')),
bbox_head=dict(
with_deform=True,
norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)))
# learn... | 417 | 31.153846 | 69 | py |
mmdetection | mmdetection-master/configs/foveabox/fovea_align_r101_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py | _base_ = './fovea_r50_fpn_4x4_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')),
bbox_head=dict(
with_deform=True,
norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)))
img_nor... | 1,042 | 33.766667 | 77 | py |
mmdetection | mmdetection-master/configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py | _base_ = './fovea_r50_fpn_4x4_1x_coco.py'
model = dict(
bbox_head=dict(
with_deform=True,
norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)))
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
optimizer_config = dict(
_delete_=True, ... | 362 | 32 | 69 | py |
mmdetection | mmdetection-master/configs/foveabox/fovea_align_r50_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py | _base_ = './fovea_r50_fpn_4x4_1x_coco.py'
model = dict(
bbox_head=dict(
with_deform=True,
norm_cfg=dict(type='GN', num_groups=32, requires_grad=True)))
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFi... | 901 | 33.692308 | 77 | py |
mmdetection | mmdetection-master/configs/foveabox/fovea_r101_fpn_4x4_1x_coco.py | _base_ = './fovea_r50_fpn_4x4_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 197 | 27.285714 | 61 | py |
mmdetection | mmdetection-master/configs/foveabox/fovea_r101_fpn_4x4_2x_coco.py | _base_ = './fovea_r50_fpn_4x4_2x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 197 | 27.285714 | 61 | py |
mmdetection | mmdetection-master/configs/foveabox/fovea_r50_fpn_4x4_1x_coco.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# model settings
model = dict(
type='FOVEA',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
... | 1,612 | 29.433962 | 79 | py |
mmdetection | mmdetection-master/configs/foveabox/fovea_r50_fpn_4x4_2x_coco.py | _base_ = './fovea_r50_fpn_4x4_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 146 | 28.4 | 53 | py |
mmdetection | mmdetection-master/configs/foveabox/metafile.yml | Collections:
- Name: FoveaBox
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 4x V100 GPUs
Architecture:
- FPN
- ResNet
Paper:
URL: https://arxiv.org/abs/1904.03797
Title: 'FoveaBox: B... | 5,682 | 31.849711 | 205 | yml |
mmdetection | mmdetection-master/configs/fpg/README.md | # FPG
> [Feature Pyramid Grids](https://arxiv.org/abs/2004.03580)
<!-- [ALGORITHM] -->
## Abstract
Feature pyramid networks have been widely adopted in the object detection literature to improve feature representations for better handling of variations in scale. In this paper, we present Feature Pyramid Grids (FPG)... | 7,489 | 169.227273 | 925 | md |
mmdetection | mmdetection-master/configs/fpg/faster_rcnn_r50_fpg-chn128_crop640_50e_coco.py | _base_ = 'faster_rcnn_r50_fpg_crop640_50e_coco.py'
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
neck=dict(out_channels=128, inter_channels=128),
rpn_head=dict(in_channels=128),
roi_head=dict(
bbox_roi_extractor=dict(out_channels=128),
bbox_head=dict(in_channels=128)))
| 314 | 30.5 | 52 | py |
mmdetection | mmdetection-master/configs/fpg/faster_rcnn_r50_fpg_crop640_50e_coco.py | _base_ = 'faster_rcnn_r50_fpn_crop640_50e_coco.py'
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
neck=dict(
type='FPG',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
inter_channels=256,
num_outs=5,
stack_times=9,
paths=['bu'] * 9,
... | 1,452 | 28.653061 | 64 | py |
mmdetection | mmdetection-master/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py | _base_ = [
'../_base_/models/faster_rcnn_r50_fpn.py',
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
backbone=dict(norm_cfg=norm_cfg, norm_eval=False),
neck=dict(norm_cfg=norm... | 2,327 | 30.459459 | 77 | py |
mmdetection | mmdetection-master/configs/fpg/mask_rcnn_r50_fpg-chn128_crop640_50e_coco.py | _base_ = 'mask_rcnn_r50_fpg_crop640_50e_coco.py'
model = dict(
neck=dict(out_channels=128, inter_channels=128),
rpn_head=dict(in_channels=128),
roi_head=dict(
bbox_roi_extractor=dict(out_channels=128),
bbox_head=dict(in_channels=128),
mask_roi_extractor=dict(out_channels=128),
... | 357 | 31.545455 | 52 | py |
mmdetection | mmdetection-master/configs/fpg/mask_rcnn_r50_fpg_crop640_50e_coco.py | _base_ = 'mask_rcnn_r50_fpn_crop640_50e_coco.py'
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
neck=dict(
type='FPG',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
inter_channels=256,
num_outs=5,
stack_times=9,
paths=['bu'] * 9,
... | 1,450 | 28.612245 | 64 | py |
mmdetection | mmdetection-master/configs/fpg/mask_rcnn_r50_fpn_crop640_50e_coco.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
backbone=dict(norm_cfg=norm_cfg, norm_eval=False),
neck=dict(
type='F... | 2,499 | 30.25 | 78 | py |
mmdetection | mmdetection-master/configs/fpg/metafile.yml | Collections:
- Name: Feature Pyramid Grids
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Feature Pyramid Grids
Paper:
URL: https://arxiv.org/abs/2004.03580
Title... | 3,717 | 34.409524 | 181 | yml |
mmdetection | mmdetection-master/configs/fpg/retinanet_r50_fpg-chn128_crop640_50e_coco.py | _base_ = 'retinanet_r50_fpg_crop640_50e_coco.py'
model = dict(
neck=dict(out_channels=128, inter_channels=128),
bbox_head=dict(in_channels=128))
| 154 | 24.833333 | 52 | py |
mmdetection | mmdetection-master/configs/fpg/retinanet_r50_fpg_crop640_50e_coco.py | _base_ = '../nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py'
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
neck=dict(
_delete_=True,
type='FPG',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
inter_channels=256,
num_outs=5,
add_extra_c... | 1,571 | 28.111111 | 64 | py |
mmdetection | mmdetection-master/configs/free_anchor/README.md | # FreeAnchor
> [FreeAnchor: Learning to Match Anchors for Visual Object Detection](https://arxiv.org/abs/1909.02466)
<!-- [ALGORITHM] -->
## Abstract
Modern CNN-based object detectors assign anchors for ground-truth objects under the restriction of object-anchor Intersection-over-Unit (IoU). In this study, we propo... | 4,536 | 118.394737 | 857 | md |
mmdetection | mmdetection-master/configs/free_anchor/metafile.yml | Collections:
- Name: FreeAnchor
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- FreeAnchor
- ResNet
Paper:
URL: https://arxiv.org/abs/1909.02466
Title: 'Fr... | 2,648 | 32.1125 | 184 | yml |
mmdetection | mmdetection-master/configs/free_anchor/retinanet_free_anchor_r101_fpn_1x_coco.py | _base_ = './retinanet_free_anchor_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 209 | 29 | 61 | py |
mmdetection | mmdetection-master/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py | _base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
model = dict(
bbox_head=dict(
_delete_=True,
type='FreeAnchorRetinaHead',
num_classes=80,
in_channels=256,
stacked_convs=4,
feat_channels=256,
anchor_generator=dict(
type='AnchorGenerator',
... | 775 | 32.73913 | 74 | py |
mmdetection | mmdetection-master/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py | _base_ = './retinanet_free_anchor_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch',
init_cfg=dict(
type='Pr... | 377 | 26 | 76 | py |
mmdetection | mmdetection-master/configs/fsaf/README.md | # FSAF
> [Feature Selective Anchor-Free Module for Single-Shot Object Detection](https://arxiv.org/abs/1903.00621)
<!-- [ALGORITHM] -->
## Abstract
We motivate and present feature selective anchor-free (FSAF) module, a simple and effective building block for single-shot object detectors. It can be plugged into sing... | 6,653 | 113.724138 | 1,487 | md |
mmdetection | mmdetection-master/configs/fsaf/fsaf_r101_fpn_1x_coco.py | _base_ = './fsaf_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 192 | 26.571429 | 61 | py |
mmdetection | mmdetection-master/configs/fsaf/fsaf_r50_fpn_1x_coco.py | _base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
# model settings
model = dict(
type='FSAF',
bbox_head=dict(
type='FSAFHead',
num_classes=80,
in_channels=256,
stacked_convs=4,
feat_channels=256,
reg_decoded_bbox=True,
# Only anchor-free branch is imple... | 1,554 | 30.734694 | 77 | py |
mmdetection | mmdetection-master/configs/fsaf/fsaf_x101_64x4d_fpn_1x_coco.py | _base_ = './fsaf_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 414 | 26.666667 | 76 | py |
mmdetection | mmdetection-master/configs/fsaf/metafile.yml | Collections:
- Name: FSAF
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x Titan-XP GPUs
Architecture:
- FPN
- FSAF
- ResNet
Paper:
URL: https://arxiv.org/abs/1903.00621
Titl... | 2,356 | 28.098765 | 134 | yml |
mmdetection | mmdetection-master/configs/gcnet/README.md | # GCNet
> [GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond](https://arxiv.org/abs/1904.11492)
<!-- [ALGORITHM] -->
## Abstract
The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies, via aggregating query-specific global context to each query positio... | 19,731 | 280.885714 | 1,167 | md |
mmdetection | mmdetection-master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py | _base_ = '../cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
| 180 | 35.2 | 75 | py |
mmdetection | mmdetection-master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_1x_coco.py | _base_ = '../dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
| 183 | 35.8 | 75 | py |
mmdetection | mmdetection-master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r16_gcb_c3-c5_1x_coco.py | _base_ = '../dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 16),
stages=(False, T... | 390 | 31.583333 | 73 | py |
mmdetection | mmdetection-master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_dconv_c3-c5_r4_gcb_c3-c5_1x_coco.py | _base_ = '../dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 4),
stages=(False, Tr... | 389 | 31.5 | 73 | py |
mmdetection | mmdetection-master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py | _base_ = '../cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 16),
stages=(False, True... | 387 | 31.333333 | 70 | py |
mmdetection | mmdetection-master/configs/gcnet/cascade_mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py | _base_ = '../cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 4),
stages=(False, True,... | 386 | 31.25 | 70 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r101_fpn_r16_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 16),
stages=(False, True, True, True),
position='after_conv3')
]))
| 258 | 27.777778 | 57 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r101_fpn_r4_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 4),
stages=(False, True, True, True),
position='after_conv3')
]))
| 257 | 27.666667 | 56 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
| 163 | 31.8 | 75 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 16),
stages=(False, True, True, True),
... | 370 | 29.916667 | 61 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 4),
stages=(False, True, True, True),
... | 369 | 29.833333 | 60 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r50_fpn_r16_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 16),
stages=(False, True, True, True),
position='after_conv3')
]))
| 257 | 27.666667 | 57 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 4),
stages=(False, True, True, True),
position='after_conv3')
]))
| 256 | 27.555556 | 56 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
| 162 | 31.6 | 75 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 16),
stages=(False, True, True, True),
... | 369 | 29.833333 | 61 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 4),
stages=(False, True, True, True),
... | 368 | 29.75 | 60 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True), norm_eval=False))
| 169 | 33 | 75 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 16),
stages=(False, True, True, Tru... | 376 | 30.416667 | 61 | py |
mmdetection | mmdetection-master/configs/gcnet/mask_rcnn_x101_32x4d_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(type='SyncBN', requires_grad=True),
norm_eval=False,
plugins=[
dict(
cfg=dict(type='ContextBlock', ratio=1. / 4),
stages=(False, True, True, True... | 375 | 30.333333 | 60 | py |
mmdetection | mmdetection-master/configs/gcnet/metafile.yml | Collections:
- Name: GCNet
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Global Context Block
- FPN
- RPN
- ResNet
- ResNeXt
Paper:
URL... | 15,471 | 34.0839 | 261 | yml |
mmdetection | mmdetection-master/configs/gfl/README.md | # GFL
> [Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection](https://arxiv.org/abs/2006.04388)
<!-- [ALGORITHM] -->
## Abstract
One-stage detector basically formulates object detection as dense classification and localization. The classification is usually optimized... | 7,832 | 181.162791 | 1,844 | md |
mmdetection | mmdetection-master/configs/gfl/gfl_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py | _base_ = './gfl_r50_fpn_mstrain_2x_coco.py'
model = dict(
backbone=dict(
type='ResNet',
depth=101,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=F... | 529 | 32.125 | 72 | py |
mmdetection | mmdetection-master/configs/gfl/gfl_r101_fpn_mstrain_2x_coco.py | _base_ = './gfl_r50_fpn_mstrain_2x_coco.py'
model = dict(
backbone=dict(
type='ResNet',
depth=101,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
norm_eval=True,
style='pytorch',
init_cfg=... | 406 | 28.071429 | 61 | py |
mmdetection | mmdetection-master/configs/gfl/gfl_r50_fpn_1x_coco.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
type='GFL',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=di... | 1,739 | 29 | 79 | py |
mmdetection | mmdetection-master/configs/gfl/gfl_r50_fpn_mstrain_2x_coco.py | _base_ = './gfl_r50_fpn_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
# multi-scale training
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
... | 788 | 33.304348 | 77 | py |
mmdetection | mmdetection-master/configs/gfl/gfl_x101_32x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py | _base_ = './gfl_r50_fpn_mstrain_2x_coco.py'
model = dict(
type='GFL',
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
... | 585 | 29.842105 | 76 | py |
mmdetection | mmdetection-master/configs/gfl/gfl_x101_32x4d_fpn_mstrain_2x_coco.py | _base_ = './gfl_r50_fpn_mstrain_2x_coco.py'
model = dict(
type='GFL',
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
... | 461 | 26.176471 | 76 | py |
mmdetection | mmdetection-master/configs/gfl/metafile.yml | Collections:
- Name: Generalized Focal Loss
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Generalized Focal Loss
- FPN
- ResNet
Paper:
URL: https://arx... | 4,400 | 31.6 | 187 | yml |
mmdetection | mmdetection-master/configs/ghm/README.md | # GHM
> [Gradient Harmonized Single-stage Detector](https://arxiv.org/abs/1811.05181)
<!-- [ALGORITHM] -->
## Abstract
Despite the great success of two-stage detectors, single-stage detector is still a more elegant and efficient way, yet suffers from the two well-known disharmonies during training, i.e. the huge di... | 4,812 | 140.558824 | 1,195 | md |
mmdetection | mmdetection-master/configs/ghm/metafile.yml | Collections:
- Name: GHM
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- GHM-C
- GHM-R
- FPN
- ResNet
Paper:
URL: https://arxiv.org/abs/1811.0518... | 3,103 | 29.431373 | 160 | yml |
mmdetection | mmdetection-master/configs/ghm/retinanet_ghm_r101_fpn_1x_coco.py | _base_ = './retinanet_ghm_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 201 | 27.857143 | 61 | py |
mmdetection | mmdetection-master/configs/ghm/retinanet_ghm_r50_fpn_1x_coco.py | _base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
model = dict(
bbox_head=dict(
loss_cls=dict(
_delete_=True,
type='GHMC',
bins=30,
momentum=0.75,
use_sigmoid=True,
loss_weight=1.0),
loss_bbox=dict(
_delete_=True,... | 532 | 25.65 | 60 | py |
mmdetection | mmdetection-master/configs/ghm/retinanet_ghm_x101_32x4d_fpn_1x_coco.py | _base_ = './retinanet_ghm_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch... | 423 | 27.266667 | 76 | py |
mmdetection | mmdetection-master/configs/ghm/retinanet_ghm_x101_64x4d_fpn_1x_coco.py | _base_ = './retinanet_ghm_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch... | 423 | 27.266667 | 76 | py |
mmdetection | mmdetection-master/configs/gn+ws/README.md | # GN + WS
> [Weight Standardization](https://arxiv.org/abs/1903.10520)
<!-- [ALGORITHM] -->
## Abstract
Batch Normalization (BN) has become an out-of-box technique to improve deep network training. However, its effectiveness is limited for micro-batch training, i.e., each GPU typically has only 1-2 images for train... | 11,966 | 216.581818 | 1,404 | md |
mmdetection | mmdetection-master/configs/gn+ws/faster_rcnn_r101_fpn_gn_ws-all_1x_coco.py | _base_ = './faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://jhu/resnet101_gn_ws')))
| 209 | 29 | 79 | py |
mmdetection | mmdetection-master/configs/gn+ws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py | _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
conv_cfg = dict(type='ConvWS')
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
conv_cfg=conv_cfg,
norm_cfg=norm_cfg,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://jhu/... | 577 | 33 | 78 | py |
mmdetection | mmdetection-master/configs/gn+ws/faster_rcnn_x101_32x4d_fpn_gn_ws-all_1x_coco.py | _base_ = './faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py'
conv_cfg = dict(type='ConvWS')
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
... | 546 | 27.789474 | 67 | py |
mmdetection | mmdetection-master/configs/gn+ws/faster_rcnn_x50_32x4d_fpn_gn_ws-all_1x_coco.py | _base_ = './faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py'
conv_cfg = dict(type='ConvWS')
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
type='ResNeXt',
depth=50,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
... | 544 | 27.684211 | 66 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_r101_fpn_gn_ws-all_20_23_24e_coco.py | _base_ = './mask_rcnn_r101_fpn_gn_ws-all_2x_coco.py'
# learning policy
lr_config = dict(step=[20, 23])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 157 | 30.6 | 53 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_r101_fpn_gn_ws-all_2x_coco.py | _base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://jhu/resnet101_gn_ws')))
| 207 | 28.714286 | 79 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_r50_fpn_gn_ws-all_20_23_24e_coco.py | _base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py'
# learning policy
lr_config = dict(step=[20, 23])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 156 | 30.4 | 53 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
conv_cfg = dict(type='ConvWS')
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
conv_cfg=conv_cfg,
norm_cfg=norm_cfg,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://jhu/resn... | 739 | 34.238095 | 78 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_x101_32x4d_fpn_gn_ws-all_20_23_24e_coco.py | _base_ = './mask_rcnn_x101_32x4d_fpn_gn_ws-all_2x_coco.py'
# learning policy
lr_config = dict(step=[20, 23])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 163 | 31.8 | 58 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_x101_32x4d_fpn_gn_ws-all_2x_coco.py | _base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py'
# model settings
conv_cfg = dict(type='ConvWS')
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices... | 561 | 27.1 | 67 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_20_23_24e_coco.py | _base_ = './mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py'
# learning policy
lr_config = dict(step=[20, 23])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 162 | 31.6 | 57 | py |
mmdetection | mmdetection-master/configs/gn+ws/mask_rcnn_x50_32x4d_fpn_gn_ws-all_2x_coco.py | _base_ = './mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py'
# model settings
conv_cfg = dict(type='ConvWS')
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
type='ResNeXt',
depth=50,
groups=32,
base_width=4,
num_stages=4,
out_indices=... | 559 | 27 | 66 | py |
mmdetection | mmdetection-master/configs/gn+ws/metafile.yml | Collections:
- Name: Weight Standardization
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Group Normalization
- Weight Standardization
Paper:
URL: https://arxi... | 8,999 | 33.090909 | 190 | yml |
mmdetection | mmdetection-master/configs/gn/README.md | # GN
> [Group Normalization](https://arxiv.org/abs/1803.08494)
<!-- [ALGORITHM] -->
## Abstract
Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapid... | 6,491 | 153.571429 | 1,398 | md |
mmdetection | mmdetection-master/configs/gn/mask_rcnn_r101_fpn_gn-all_2x_coco.py | _base_ = './mask_rcnn_r50_fpn_gn-all_2x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron/resnet101_gn')))
| 219 | 26.5 | 63 | py |
mmdetection | mmdetection-master/configs/gn/mask_rcnn_r101_fpn_gn-all_3x_coco.py | _base_ = './mask_rcnn_r101_fpn_gn-all_2x_coco.py'
# learning policy
lr_config = dict(step=[28, 34])
runner = dict(type='EpochBasedRunner', max_epochs=36)
| 155 | 25 | 53 | py |
mmdetection | mmdetection-master/configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
norm_cfg=norm_cfg,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron/resnet50_gn')),
neck=dict(norm_cfg=norm_... | 1,755 | 34.12 | 77 | py |
mmdetection | mmdetection-master/configs/gn/mask_rcnn_r50_fpn_gn-all_3x_coco.py | _base_ = './mask_rcnn_r50_fpn_gn-all_2x_coco.py'
# learning policy
lr_config = dict(step=[28, 34])
runner = dict(type='EpochBasedRunner', max_epochs=36)
| 154 | 24.833333 | 53 | py |
mmdetection | mmdetection-master/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
backbone=dict(
norm_cfg=norm_cfg,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://contrib/resnet50_gn')),
neck=dict(norm_cfg=norm_cfg),
roi_... | 613 | 33.111111 | 79 | py |
mmdetection | mmdetection-master/configs/gn/mask_rcnn_r50_fpn_gn-all_contrib_3x_coco.py | _base_ = './mask_rcnn_r50_fpn_gn-all_contrib_2x_coco.py'
# learning policy
lr_config = dict(step=[28, 34])
runner = dict(type='EpochBasedRunner', max_epochs=36)
| 162 | 26.166667 | 56 | py |
mmdetection | mmdetection-master/configs/gn/metafile.yml | Collections:
- Name: Group Normalization
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Group Normalization
Paper:
URL: https://arxiv.org/abs/1803.08494
Title: 'G... | 5,088 | 30.220859 | 167 | yml |