| repo | file | code | file_length | avg_line_length | max_line_length | extension_type |
|---|---|---|---|---|---|---|
mmdetection | mmdetection-master/configs/libra_rcnn/libra_fast_rcnn_r50_fpn_1x_coco.py | _base_ = '../fast_rcnn/fast_rcnn_r50_fpn_1x_coco.py'
# model settings
model = dict(
neck=[
dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
dict(
type='BFP',
in_channels=256,
num_l... | 1,590 | 30.196078 | 68 | py |
mmdetection | mmdetection-master/configs/libra_rcnn/libra_faster_rcnn_r101_fpn_1x_coco.py | _base_ = './libra_faster_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 205 | 28.428571 | 61 | py |
mmdetection | mmdetection-master/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py | _base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
# model settings
model = dict(
neck=[
dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
dict(
type='BFP',
in_channels=256,
n... | 1,268 | 29.214286 | 68 | py |
mmdetection | mmdetection-master/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py | _base_ = './libra_faster_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pyt... | 427 | 27.533333 | 76 | py |
mmdetection | mmdetection-master/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py | _base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
# model settings
model = dict(
neck=[
dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
start_level=1,
add_extra_convs='on_input',
num_outs=5),
dict(
... | 674 | 24 | 52 | py |
mmdetection | mmdetection-master/configs/libra_rcnn/metafile.yml | Collections:
- Name: Libra R-CNN
Metadata:
Training Data: COCO
Training Techniques:
- IoU-Balanced Sampling
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Balanced Feature Pyramid
Paper:
URL: https://arxiv.org/a... | 3,246 | 31.47 | 175 | yml |
mmdetection | mmdetection-master/configs/lvis/README.md | # LVIS
> [LVIS: A Dataset for Large Vocabulary Instance Segmentation](https://arxiv.org/abs/1908.03195)
<!-- [DATASET] -->
## Abstract
Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and fr... | 9,264 | 161.54386 | 801 | md |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_1x_lvis_v1.py | _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 219 | 30.428571 | 63 | py |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_r101_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py | _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 221 | 30.714286 | 65 | py |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/lvis_v1_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
roi_head=dict(
bbox_head=dict(num_classes=1203), mask_head=dict(num_classes=1203)),
test_cfg=dict(
rcnn=d... | 1,160 | 35.28125 | 77 | py |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/lvis_v0.5_instance.py',
'../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py'
]
model = dict(
roi_head=dict(
bbox_head=dict(num_classes=1230), mask_head=dict(num_classes=1230)),
test_cfg=dict(
rcnn... | 1,162 | 35.34375 | 77 | py |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py | _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
... | 441 | 28.466667 | 76 | py |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_x101_32x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py | _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
... | 443 | 28.6 | 76 | py |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_1x_lvis_v1.py | _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
... | 441 | 28.466667 | 76 | py |
mmdetection | mmdetection-master/configs/lvis/mask_rcnn_x101_64x4d_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py | _base_ = './mask_rcnn_r50_fpn_sample1e-3_mstrain_2x_lvis_v0.5.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
... | 443 | 28.6 | 76 | py |
mmdetection | mmdetection-master/configs/mask2former/README.md | # Mask2Former
> [Masked-attention Mask Transformer for Universal Image Segmentation](http://arxiv.org/abs/2112.01527)
<!-- [ALGORITHM] -->
## Abstract
Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While onl... | 12,992 | 174.581081 | 942 | md |
mmdetection | mmdetection-master/configs/mask2former/mask2former_r101_lsj_8x2_50e_coco-panoptic.py | _base_ = './mask2former_r50_lsj_8x2_50e_coco-panoptic.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 214 | 25.875 | 61 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_r101_lsj_8x2_50e_coco.py | _base_ = ['./mask2former_r50_lsj_8x2_50e_coco.py']
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 207 | 25 | 61 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_r50_lsj_8x2_50e_coco-panoptic.py | _base_ = [
'../_base_/datasets/coco_panoptic.py', '../_base_/default_runtime.py'
]
num_things_classes = 80
num_stuff_classes = 53
num_classes = num_things_classes + num_stuff_classes
model = dict(
type='Mask2Former',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_i... | 8,600 | 32.862205 | 79 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_r50_lsj_8x2_50e_coco.py | _base_ = ['./mask2former_r50_lsj_8x2_50e_coco-panoptic.py']
num_things_classes = 80
num_stuff_classes = 0
num_classes = num_things_classes + num_stuff_classes
model = dict(
panoptic_head=dict(
num_things_classes=num_things_classes,
num_stuff_classes=num_stuff_classes,
loss_cls=dict(class_wei... | 2,781 | 33.775 | 78 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_swin-b-p4-w12-384-in21k_lsj_8x2_50e_coco-panoptic.py | _base_ = ['./mask2former_swin-b-p4-w12-384_lsj_8x2_50e_coco-panoptic.py']
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth' # noqa
model = dict(
backbone=dict(init_cfg=dict(type='Pretrained', checkpoint=pretrained)))
| 294 | 48.166667 | 128 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_swin-b-p4-w12-384_lsj_8x2_50e_coco-panoptic.py | _base_ = ['./mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco-panoptic.py']
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384.pth' # noqa
depths = [2, 2, 18, 2]
model = dict(
backbone=dict(
pretrain_img_size=384,
embed_dims=128,
de... | 1,609 | 36.44186 | 124 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_swin-l-p4-w12-384-in21k_lsj_16x1_100e_coco-panoptic.py | _base_ = ['./mask2former_swin-b-p4-w12-384_lsj_8x2_50e_coco-panoptic.py']
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth' # noqa
model = dict(
backbone=dict(
embed_dims=192,
num_heads=[6, 12, 24, 48],
init_cfg=dict(t... | 1,017 | 36.703704 | 129 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_swin-s-p4-w7-224_lsj_8x2_50e_coco-panoptic.py | _base_ = ['./mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco-panoptic.py']
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth' # noqa
depths = [2, 2, 18, 2]
model = dict(
backbone=dict(
depths=depths, init_cfg=dict(type='Pretrained',
... | 1,466 | 37.605263 | 124 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_swin-s-p4-w7-224_lsj_8x2_50e_coco.py | _base_ = ['./mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco.py']
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth' # noqa
depths = [2, 2, 18, 2]
model = dict(
backbone=dict(
depths=depths, init_cfg=dict(type='Pretrained',
... | 1,457 | 37.368421 | 124 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco-panoptic.py | _base_ = ['./mask2former_r50_lsj_8x2_50e_coco-panoptic.py']
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth' # noqa
depths = [2, 2, 6, 2]
model = dict(
type='Mask2Former',
backbone=dict(
_delete_=True,
type='SwinTransformer',
... | 2,066 | 31.809524 | 123 | py |
mmdetection | mmdetection-master/configs/mask2former/mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco.py | _base_ = ['./mask2former_r50_lsj_8x2_50e_coco.py']
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth' # noqa
depths = [2, 2, 6, 2]
model = dict(
type='Mask2Former',
backbone=dict(
_delete_=True,
type='SwinTransformer',
emb... | 2,056 | 32.177419 | 123 | py |
mmdetection | mmdetection-master/configs/mask2former/metafile.yml | Collections:
- Name: Mask2Former
Metadata:
Training Data: COCO
Training Techniques:
- AdamW
- Weight Decay
Training Resources: 8x A100 GPUs
Architecture:
- Mask2Former
Paper:
URL: https://arxiv.org/pdf/2112.01527
Title: 'Masked-attention Mask Transfo... | 7,717 | 33.455357 | 227 | yml |
mmdetection | mmdetection-master/configs/mask_rcnn/README.md | # Mask R-CNN
> [Mask R-CNN](https://arxiv.org/abs/1703.06870)
<!-- [ALGORITHM] -->
## Abstract
We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for... | 17,058 | 283.316667 | 1,070 | md |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r101_caffe_fpn_1x_coco.py | _base_ = './mask_rcnn_r50_caffe_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet101_caffe')))
| 222 | 26.875 | 67 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r101_caffe_fpn_mstrain-poly_3x_coco.py | _base_ = [
'../common/mstrain-poly_3x_coco_instance.py',
'../_base_/models/mask_rcnn_r50_fpn.py'
]
model = dict(
backbone=dict(
depth=101,
norm_cfg=dict(requires_grad=False),
norm_eval=True,
style='caffe',
init_cfg=dict(
type='Pretrained',
che... | 1,660 | 28.660714 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py | _base_ = './mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 197 | 27.285714 | 61 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r101_fpn_2x_coco.py | _base_ = './mask_rcnn_r50_fpn_2x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 197 | 27.285714 | 61 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r101_fpn_mstrain-poly_3x_coco.py | _base_ = [
'../common/mstrain-poly_3x_coco_instance.py',
'../_base_/models/mask_rcnn_r50_fpn.py'
]
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 263 | 23 | 61 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py | _base_ = [
'../_base_/models/mask_rcnn_r50_caffe_c4.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# use caffe img_norm
img_norm_cfg = dict(
mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
dic... | 1,413 | 34.35 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_1x_coco.py | _base_ = './mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(requires_grad=False),
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe')))
# use caffe img_norm
img_norm_cfg = dict(
mean=[103.5... | 1,412 | 33.463415 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py | _base_ = './mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(requires_grad=False),
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe')))
# use caffe img_norm
img_norm_cfg = dict(
mean=[103.5... | 1,606 | 31.14 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_2x_coco.py | _base_ = './mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 23])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 165 | 32.2 | 60 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_3x_coco.py | _base_ = './mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py'
# learning policy
lr_config = dict(step=[28, 34])
runner = dict(type='EpochBasedRunner', max_epochs=36)
| 165 | 32.2 | 60 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py | _base_ = './mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(requires_grad=False),
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe')))
# use caffe img_norm
img_norm_cfg = dict(
mean=[103.5... | 1,556 | 32.847826 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_poly_1x_coco_v1.py | _base_ = './mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
norm_cfg=dict(requires_grad=False),
style='caffe',
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet50_caffe')),
rpn_head=dict(
loss_bbox=dict(type='SmoothL1L... | 2,066 | 32.33871 | 78 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
| 174 | 28.166667 | 72 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_wandb_coco.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# Set evaluation interval
evaluation = dict(interval=2)
# Set checkpoint interval
checkpoint_config = dict(interval=4)
# yapf:disable
log_config... | 716 | 25.555556 | 72 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_fpn_2x_coco.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_2x.py', '../_base_/default_runtime.py'
]
| 174 | 28.166667 | 72 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_fpn_fp16_1x_coco.py | _base_ = './mask_rcnn_r50_fpn_1x_coco.py'
# fp16 settings
fp16 = dict(loss_scale=512.)
| 87 | 21 | 41 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_fpn_mstrain-poly_3x_coco.py | _base_ = [
'../common/mstrain-poly_3x_coco_instance.py',
'../_base_/models/mask_rcnn_r50_fpn.py'
]
| 107 | 20.6 | 49 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_r50_fpn_poly_1x_coco.py | _base_ = [
'../_base_/models/mask_rcnn_r50_fpn.py',
'../_base_/datasets/coco_instance.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFro... | 805 | 32.583333 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py | _base_ = './mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 420 | 27.066667 | 76 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_2x_coco.py | _base_ = './mask_rcnn_r101_fpn_2x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 420 | 27.066667 | 76 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_mstrain-poly_3x_coco.py | _base_ = [
'../common/mstrain-poly_3x_coco_instance.py',
'../_base_/models/mask_rcnn_r50_fpn.py'
]
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_c... | 485 | 24.578947 | 76 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_1x_coco.py | _base_ = './mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=8,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
style='pytorch',... | 2,132 | 31.318182 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_1x_coco.py | _base_ = './mask_rcnn_r101_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=8,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=False),
style='pytorch',... | 1,838 | 29.147541 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_32x8d_fpn_mstrain-poly_3x_coco.py | _base_ = [
'../common/mstrain-poly_3x_coco_instance.py',
'../_base_/models/mask_rcnn_r50_fpn.py'
]
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=8,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_c... | 2,537 | 28.511628 | 77 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_64x4d_fpn_1x_coco.py | _base_ = './mask_rcnn_x101_32x4d_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pyto... | 426 | 27.466667 | 76 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_64x4d_fpn_2x_coco.py | _base_ = './mask_rcnn_x101_32x4d_fpn_2x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pyto... | 426 | 27.466667 | 76 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/mask_rcnn_x101_64x4d_fpn_mstrain-poly_3x_coco.py | _base_ = [
'../common/mstrain-poly_3x_coco_instance.py',
'../_base_/models/mask_rcnn_r50_fpn.py'
]
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_c... | 485 | 24.578947 | 76 | py |
mmdetection | mmdetection-master/configs/mask_rcnn/metafile.yml | Collections:
- Name: Mask R-CNN
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Softmax
- RPN
- Convolution
- Dense Connections
- FPN
- Res... | 14,706 | 32.123874 | 220 | yml |
mmdetection | mmdetection-master/configs/maskformer/README.md | # MaskFormer
> [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278)
<!-- [ALGORITHM] -->
## Abstract
Modern approaches typically formulate semantic segmentation as a per-pixel classification task, while instance-level segmentation is handled with an alternative ... | 5,593 | 102.592593 | 1,043 | md |
mmdetection | mmdetection-master/configs/maskformer/maskformer_r50_mstrain_16x1_75e_coco.py | _base_ = [
'../_base_/datasets/coco_panoptic.py', '../_base_/default_runtime.py'
]
num_things_classes = 80
num_stuff_classes = 53
num_classes = num_things_classes + num_stuff_classes
model = dict(
type='MaskFormer',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_in... | 8,584 | 34.920502 | 79 | py |
mmdetection | mmdetection-master/configs/maskformer/maskformer_swin-l-p4-w12_mstrain_64x1_300e_coco.py | _base_ = './maskformer_r50_mstrain_16x1_75e_coco.py'
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth' # noqa
depths = [2, 2, 18, 2]
model = dict(
backbone=dict(
_delete_=True,
type='SwinTransformer',
pretrain_img_size... | 1,949 | 27.676471 | 129 | py |
mmdetection | mmdetection-master/configs/maskformer/metafile.yml | Collections:
- Name: MaskFormer
Metadata:
Training Data: COCO
Training Techniques:
- AdamW
- Weight Decay
Training Resources: 16x V100 GPUs
Architecture:
- MaskFormer
Paper:
URL: https://arxiv.org/pdf/2107.06278
Title: 'Per-Pixel Classification is No... | 1,568 | 34.659091 | 196 | yml |
mmdetection | mmdetection-master/configs/ms_rcnn/README.md | # MS R-CNN
> [Mask Scoring R-CNN](https://arxiv.org/abs/1903.00241)
<!-- [ALGORITHM] -->
## Abstract
Letting a deep network be aware of the quality of its own predictions is an interesting yet important problem. In the task of instance segmentation, the confidence of instance classification is used as mask quality ... | 6,623 | 178.027027 | 1,190 | md |
mmdetection | mmdetection-master/configs/ms_rcnn/metafile.yml | Collections:
- Name: Mask Scoring R-CNN
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- RPN
- FPN
- ResNet
- RoIAlign
Paper:
URL: https://arxiv.o... | 5,102 | 30.89375 | 190 | yml |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_r101_caffe_fpn_1x_coco.py | _base_ = './ms_rcnn_r50_caffe_fpn_1x_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(
type='Pretrained',
checkpoint='open-mmlab://detectron2/resnet101_caffe')))
| 220 | 26.625 | 67 | py |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_r101_caffe_fpn_2x_coco.py | _base_ = './ms_rcnn_r101_caffe_fpn_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 151 | 29.4 | 53 | py |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_caffe_fpn_1x_coco.py'
model = dict(
type='MaskScoringRCNN',
roi_head=dict(
type='MaskScoringRoIHead',
mask_iou_head=dict(
type='MaskIoUHead',
num_convs=4,
num_fcs=2,
roi_feat_size=14,
in_channels=256... | 515 | 29.352941 | 58 | py |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_r50_caffe_fpn_2x_coco.py | _base_ = './ms_rcnn_r50_caffe_fpn_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 150 | 29.2 | 53 | py |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_r50_fpn_1x_coco.py | _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
model = dict(
type='MaskScoringRCNN',
roi_head=dict(
type='MaskScoringRoIHead',
mask_iou_head=dict(
type='MaskIoUHead',
num_convs=4,
num_fcs=2,
roi_feat_size=14,
in_channels=256,
... | 509 | 29 | 52 | py |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_x101_32x4d_fpn_1x_coco.py | _base_ = './ms_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=32,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 417 | 26.866667 | 76 | py |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_1x_coco.py | _base_ = './ms_rcnn_r50_fpn_1x_coco.py'
model = dict(
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
... | 417 | 26.866667 | 76 | py |
mmdetection | mmdetection-master/configs/ms_rcnn/ms_rcnn_x101_64x4d_fpn_2x_coco.py | _base_ = './ms_rcnn_x101_64x4d_fpn_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
| 151 | 29.4 | 53 | py |
mmdetection | mmdetection-master/configs/nas_fcos/README.md | # NAS-FCOS
> [NAS-FCOS: Fast Neural Architecture Search for Object Detection](https://arxiv.org/abs/1906.04423)
<!-- [ALGORITHM] -->
## Abstract
The success of deep neural networks relies on significant architecture engineering. Recently neural architecture search (NAS) has emerged as a promise to greatly reduce ma... | 4,505 | 124.166667 | 1,351 | md |
mmdetection | mmdetection-master/configs/nas_fcos/metafile.yml | Collections:
- Name: NAS-FCOS
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 4x V100 GPUs
Architecture:
- FPN
- NAS-FCOS
- ResNet
Paper:
URL: https://arxiv.org/abs/1906.04423
... | 1,585 | 34.244444 | 195 | yml |
mmdetection | mmdetection-master/configs/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
type='NASFCOS',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_c... | 3,012 | 28.831683 | 73 | py |
mmdetection | mmdetection-master/configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
type='NASFCOS',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_c... | 2,990 | 28.91 | 73 | py |
mmdetection | mmdetection-master/configs/nas_fpn/README.md | # NAS-FPN
> [NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection](https://arxiv.org/abs/1904.07392)
<!-- [ALGORITHM] -->
## Abstract
Current state-of-the-art convolutional architectures for object detection are manually designed. Here we aim to learn a better architecture of feature pyramid... | 3,848 | 103.027027 | 881 | md |
mmdetection | mmdetection-master/configs/nas_fpn/metafile.yml | Collections:
- Name: NAS-FPN
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- NAS-FPN
- ResNet
Paper:
URL: https://arxiv.org/abs/1904.07392
Title: 'NAS-FPN:... | 1,869 | 30.166667 | 157 | yml |
mmdetection | mmdetection-master/configs/nas_fpn/retinanet_r50_fpn_crop640_50e_coco.py | _base_ = [
'../_base_/models/retinanet_r50_fpn.py',
'../_base_/datasets/coco_detection.py', '../_base_/default_runtime.py'
]
cudnn_benchmark = True
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(... | 2,675 | 30.116279 | 79 | py |
mmdetection | mmdetection-master/configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py | _base_ = [
'../_base_/models/retinanet_r50_fpn.py',
'../_base_/datasets/coco_detection.py', '../_base_/default_runtime.py'
]
cudnn_benchmark = True
# model settings
norm_cfg = dict(type='BN', requires_grad=True)
model = dict(
type='RetinaNet',
backbone=dict(
type='ResNet',
depth=50,
... | 2,665 | 30.364706 | 79 | py |
mmdetection | mmdetection-master/configs/objects365/README.md | # Objects365 Dataset
> [Objects365 Dataset](https://openaccess.thecvf.com/content_ICCV_2019/papers/Shao_Objects365_A_Large-Scale_High-Quality_Dataset_for_Object_Detection_ICCV_2019_paper.pdf)
<!-- [DATASET] -->
## Abstract
<!-- [ABSTRACT] -->
#### Objects365 Dataset V1
[Objects365 Dataset V1](http://www.objects36... | 8,976 | 86.15534 | 558 | md |
mmdetection | mmdetection-master/configs/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v1.py | _base_ = [
'../_base_/models/faster_rcnn_r50_fpn.py',
'../_base_/datasets/objects365v1_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(roi_head=dict(bbox_head=dict(num_classes=365)))
data = dict(samples_per_gpu=4)
# Using 32 GPUS while training
optimizer... | 786 | 29.269231 | 72 | py |
mmdetection | mmdetection-master/configs/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v2.py | _base_ = [
'../_base_/models/faster_rcnn_r50_fpn.py',
'../_base_/datasets/objects365v2_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(roi_head=dict(bbox_head=dict(num_classes=365)))
data = dict(samples_per_gpu=4)
# Using 32 GPUS while training
optimizer... | 786 | 29.269231 | 72 | py |
mmdetection | mmdetection-master/configs/objects365/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1.py | _base_ = [
'../_base_/models/faster_rcnn_r50_fpn.py',
'../_base_/datasets/objects365v1_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
backbone=dict(norm_cfg=norm_cfg),
roi_head=dict(bbox_head=dict... | 1,039 | 31.5 | 74 | py |
mmdetection | mmdetection-master/configs/objects365/metafile.yml | - Name: retinanet_r50_fpn_1x_obj365v1
In Collection: RetinaNet
Config: configs/objects365/retinanet_r50_fpn_1x_obj365v1.py
Metadata:
Training Memory (GB): 7.4
Epochs: 12
Training Data: Objects365 v1
Training Techniques:
- SGD with Momentum
- Weight Decay
Results:
- Task: Object Det... | 3,448 | 32.813725 | 182 | yml |
mmdetection | mmdetection-master/configs/objects365/retinanet_r50_fpn_1x_obj365v1.py | _base_ = [
'../_base_/models/retinanet_r50_fpn.py',
'../_base_/datasets/objects365v1_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(bbox_head=dict(num_classes=365))
# Using 8 GPUS while training
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_... | 736 | 29.708333 | 72 | py |
mmdetection | mmdetection-master/configs/objects365/retinanet_r50_fpn_1x_obj365v2.py | _base_ = [
'../_base_/models/retinanet_r50_fpn.py',
'../_base_/datasets/objects365v2_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(bbox_head=dict(num_classes=365))
# Using 8 GPUS while training
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_... | 736 | 29.708333 | 72 | py |
mmdetection | mmdetection-master/configs/objects365/retinanet_r50_fpn_syncbn_1350k_obj365v1.py
_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',
    '../_base_/datasets/objects365v1_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(backbone=dict(norm_cfg=norm_cfg), bbox_head=dict(num_classes=365))
# U...
mmdetection | mmdetection-master/configs/openimages/README.md
# Open Images Dataset

> [Open Images Dataset](https://arxiv.org/abs/1811.00982)

<!-- [DATASET] -->

## Abstract

<!-- [ABSTRACT] -->

#### Open Images v6

[Open Images](https://storage.googleapis.com/openimages/web/index.html) is a dataset of ~9M images annotated with image-level labels,
object bounding boxes, object...
mmdetection | mmdetection-master/configs/openimages/faster_rcnn_r50_fpn_32x2_1x_openimages.py
_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',
    '../_base_/datasets/openimages_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(roi_head=dict(bbox_head=dict(num_classes=601)))
# Using 32 GPUs while training
optimizer = dict(type='SGD', lr=0.08, momen...
mmdetection | mmdetection-master/configs/openimages/faster_rcnn_r50_fpn_32x2_1x_openimages_challenge.py
_base_ = ['faster_rcnn_r50_fpn_32x2_1x_openimages.py']
model = dict(
    roi_head=dict(bbox_head=dict(num_classes=500)),
    test_cfg=dict(rcnn=dict(score_thr=0.01)))
# dataset settings
dataset_type = 'OpenImagesChallengeDataset'
data_root = 'data/OpenImages/'
data = dict(
    train=dict(
        type=dataset_type,
        ...
mmdetection | mmdetection-master/configs/openimages/faster_rcnn_r50_fpn_32x2_cas_1x_openimages.py
_base_ = ['faster_rcnn_r50_fpn_32x2_1x_openimages.py']
# Use ClassAwareSampler
data = dict(
    train_dataloader=dict(class_aware_sampler=dict(num_sample_class=1)))
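`class_aware_sampler=dict(num_sample_class=1)` enables class-aware sampling, which helps on long-tailed datasets like Open Images: classes are visited round-robin and, for each visit, `num_sample_class` images containing that class are drawn, so rare classes are seen about as often as common ones. A toy sketch of the idea (not mmdetection's actual `ClassAwareSampler`, which also handles distributed sharding and epoch seeding):

```python
import random

def class_aware_indices(img_labels, num_samples, num_sample_class=1, seed=0):
    """Draw image indices by cycling over classes instead of images.

    img_labels: list of label sets, one per image.  Each class visit picks
    num_sample_class images that contain the class, so rare classes are
    oversampled relative to plain uniform sampling.
    """
    rng = random.Random(seed)
    # invert the annotation: class -> list of images containing it
    cls2imgs = {}
    for idx, labels in enumerate(img_labels):
        for c in labels:
            cls2imgs.setdefault(c, []).append(idx)
    classes = sorted(cls2imgs)
    out = []
    while len(out) < num_samples:
        for c in classes:
            for _ in range(num_sample_class):
                out.append(rng.choice(cls2imgs[c]))
                if len(out) == num_samples:
                    return out
    return out

# three images; class 2 is rare (only image 2 contains it)
print(class_aware_indices([{0}, {0, 1}, {2}], num_samples=6))
```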
mmdetection | mmdetection-master/configs/openimages/faster_rcnn_r50_fpn_32x2_cas_1x_openimages_challenge.py
_base_ = ['faster_rcnn_r50_fpn_32x2_1x_openimages_challenge.py']
# Use ClassAwareSampler
data = dict(
    train_dataloader=dict(class_aware_sampler=dict(num_sample_class=1)))
mmdetection | mmdetection-master/configs/openimages/metafile.yml
Models:
  - Name: faster_rcnn_r50_fpn_32x2_1x_openimages
    In Collection: Faster R-CNN
    Config: configs/openimages/faster_rcnn_r50_fpn_32x2_1x_openimages.py
    Metadata:
      Training Memory (GB): 7.7
      Epochs: 12
      Training Data: Open Images v6
      Training Techniques:
        - SGD with Momentum
        ...
mmdetection | mmdetection-master/configs/openimages/retinanet_r50_fpn_32x2_1x_openimages.py
_base_ = [
    '../_base_/models/retinanet_r50_fpn.py',
    '../_base_/datasets/openimages_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(bbox_head=dict(num_classes=601))
optimizer = dict(type='SGD', lr=0.08, momentum=0.9, weight_decay=0.0001)
optimizer_config =...
mmdetection | mmdetection-master/configs/openimages/ssd300_32x8_36e_openimages.py
_base_ = [
    '../_base_/models/ssd300.py', '../_base_/datasets/openimages_detection.py',
    '../_base_/default_runtime.py', '../_base_/schedules/schedule_1x.py'
]
model = dict(
    bbox_head=dict(
        num_classes=601,
        anchor_generator=dict(basesize_ratio_range=(0.2, 0.9))))
# dataset settings
dataset_typ...
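`basesize_ratio_range=(0.2, 0.9)` bounds the smallest and largest anchor base sizes relative to the input size. The SSD paper spaces the per-level scales linearly between the two ratios; the sketch below uses that paper formula for illustration (mmdetection's `SSDAnchorGenerator` follows the same idea but works in integer percent steps and special-cases the first level, so its exact sizes differ slightly):

```python
def ssd_scales(s_min, s_max, num_levels):
    """Per-level anchor scales via the SSD paper's linear rule:
    s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1), k = 1..m.
    Here k is 0-indexed, so the shift is folded into range(m).
    """
    m = num_levels
    return [s_min + (s_max - s_min) * k / (m - 1) for k in range(m)]

# basesize_ratio_range=(0.2, 0.9) over 6 feature levels, input size 300
scales = ssd_scales(0.2, 0.9, 6)
print([round(300 * s) for s in scales])  # [60, 102, 144, 186, 228, 270]
```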
mmdetection | mmdetection-master/configs/paa/README.md
# PAA

> [Probabilistic Anchor Assignment with IoU Prediction for Object Detection](https://arxiv.org/abs/2007.08103)

<!-- [ALGORITHM] -->

## Abstract

In object detection, determining which anchors to assign as positive or negative samples, known as anchor assignment, has been revealed as a core procedure that can s...
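The abstract's key step is separating each ground truth's candidate anchors into positives and negatives by fitting a two-component Gaussian mixture to their anchor scores. Below is a self-contained toy version, with a tiny 1-D EM loop on made-up scores; it is an illustration only, since the real method scores anchors with the model's own classification and localization outputs and fits the mixture per ground-truth box:

```python
import math

def fit_gmm_1d(xs, iters=50):
    """EM for a 2-component 1-D Gaussian mixture (toy implementation)."""
    xs = sorted(xs)
    half = len(xs) // 2
    # initialise the two means from the lower/upper halves of the data
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def density(x, k):
        return (pi[k] / math.sqrt(2 * math.pi * var[k])
                * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])))

    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [density(x, 0), density(x, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi, density

# made-up anchor scores: a low-quality cluster and a high-quality cluster
scores = [0.05, 0.1, 0.12, 0.15, 0.75, 0.8, 0.85]
mu, var, pi, density = fit_gmm_1d(scores)
hi = 0 if mu[0] > mu[1] else 1
# an anchor is positive if the high-score component explains it better
positives = [x for x in scores if density(x, hi) > density(x, 1 - hi)]
print(positives)  # the high-score cluster becomes the positive set
```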
mmdetection | mmdetection-master/configs/paa/metafile.yml
Collections:
  - Name: PAA
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Architecture:
        - FPN
        - Probabilistic Anchor Assignment
        - ResNet
    Paper:
      URL: https://arxiv.org/abs...
mmdetection | mmdetection-master/configs/paa/paa_r101_fpn_1x_coco.py
_base_ = './paa_r50_fpn_1x_coco.py'
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained',
                      checkpoint='torchvision://resnet101')))
mmdetection | mmdetection-master/configs/paa/paa_r101_fpn_2x_coco.py
_base_ = './paa_r101_fpn_1x_coco.py'
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
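`lr_config = dict(step=[16, 22])` with `max_epochs=24` is mmdetection's 2x schedule: the learning rate is multiplied by a decay factor (0.1 by default) at epochs 16 and 22. A small sketch of the resulting piecewise-constant schedule (the base lr value here is illustrative; the actual one comes from the optimizer config):

```python
def step_lr(base_lr, epoch, steps, gamma=0.1):
    """Step decay: multiply the lr by gamma once for each decay epoch
    already reached; step=[16, 22] decays at epochs 16 and 22."""
    return base_lr * gamma ** sum(epoch >= s for s in steps)

base_lr = 0.01  # illustrative value, not read from this config
print([step_lr(base_lr, e, [16, 22]) for e in (0, 16, 22)])
```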
mmdetection | mmdetection-master/configs/paa/paa_r101_fpn_mstrain_3x_coco.py
_base_ = './paa_r50_fpn_mstrain_3x_coco.py'
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained',
                      checkpoint='torchvision://resnet101')))
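The `mstrain` suffix denotes multiscale training: each iteration resizes the image to a target scale sampled at train time. A minimal sketch of that sampling step; the 640-800 short-side range with a 1333 long-side cap is the range commonly used by mmdetection's multiscale COCO configs and is assumed here, not read from this file:

```python
import random

def sample_train_scale(rng=random, short_range=(640, 800), long_side=1333):
    """Pick one (long, short) target scale for this iteration, as in
    'mstrain' configs that sample the short side uniformly from a range."""
    short = rng.randint(*short_range)
    return (long_side, short)

print(sample_train_scale())  # a randomly drawn (long, short) pair
```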