Dataset schema (column: type, value range):

repo: string, length 2 to 152
file: string, length 15 to 239
code: string, length 0 to 58.4M
file_length: int64, 0 to 58.4M
avg_line_length: float64, 0 to 1.81M
max_line_length: int64, 0 to 12.7M
extension_type: string, 364 distinct values
repo: mmdetection
file: mmdetection-master/configs/seesaw_loss/mask_rcnn_r101_fpn_sample1e-3_seesaw_loss_mstrain_2x_lvis_v1.py
code:
_base_ = './mask_rcnn_r50_fpn_sample1e-3_seesaw_loss_mstrain_2x_lvis_v1.py'
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
file_length: 231 | avg_line_length: 32.142857 | max_line_length: 75 | extension: py
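The `_base_` line in configs like the one above points at a parent config whose dicts the child merges into recursively: dict values merge key by key, scalars are replaced, and a `_delete_=True` key replaces a base dict wholesale instead of merging. A minimal sketch of that merge rule; `merge_cfg` is a hypothetical helper for illustration, not mmcv's actual API:

```python
def merge_cfg(base, override):
    """Recursively merge an override dict into a base dict,
    sketching `_base_`-style config inheritance."""
    merged = dict(base)
    for key, val in override.items():
        if isinstance(val, dict) and val.get('_delete_', False):
            # `_delete_=True` replaces the base entry wholesale
            merged[key] = {k: v for k, v in val.items() if k != '_delete_'}
        elif isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], val)
        else:
            merged[key] = val
    return merged

# Child overrides depth and checkpoint; the base's init_cfg type survives.
base = dict(backbone=dict(
    depth=50,
    init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')))
child = dict(backbone=dict(
    depth=101,
    init_cfg=dict(checkpoint='torchvision://resnet101')))
cfg = merge_cfg(base, child)
```

This is why the r101 config above only needs to state the two keys that change.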
repo: mmdetection
file: mmdetection-master/configs/seesaw_loss/mask_rcnn_r101_fpn_sample1e-3_seesaw_loss_normed_mask_mstrain_2x_lvis_v1.py
code:
_base_ = './mask_rcnn_r50_fpn_sample1e-3_seesaw_loss_normed_mask_mstrain_2x_lvis_v1.py'  # noqa: E501
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
file_length: 257 | avg_line_length: 35.857143 | max_line_length: 101 | extension: py

repo: mmdetection
file: mmdetection-master/configs/seesaw_loss/mask_rcnn_r50_fpn_random_seesaw_loss_mstrain_2x_lvis_v1.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    roi_head=dict(
        bbox_head=dict(
            num_classes=1203,
            cls_predictor_cfg=dict(type='NormedLinear', tem...
file_length: 2,510 | avg_line_length: 32.039474 | max_line_length: 77 | extension: py

repo: mmdetection
file: mmdetection-master/configs/seesaw_loss/mask_rcnn_r50_fpn_random_seesaw_loss_normed_mask_mstrain_2x_lvis_v1.py
code:
_base_ = './mask_rcnn_r50_fpn_random_seesaw_loss_mstrain_2x_lvis_v1.py'
model = dict(
    roi_head=dict(
        mask_head=dict(
            predictor_cfg=dict(type='NormedConv2d', tempearture=20))))
file_length: 200 | avg_line_length: 32.5 | max_line_length: 71 | extension: py

repo: mmdetection
file: mmdetection-master/configs/seesaw_loss/mask_rcnn_r50_fpn_sample1e-3_seesaw_loss_mstrain_2x_lvis_v1.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/lvis_v1_instance.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    roi_head=dict(
        bbox_head=dict(
            num_classes=1203,
            cls_predictor_cfg=dict(type='NormedLinear', ...
file_length: 1,486 | avg_line_length: 34.404762 | max_line_length: 77 | extension: py

repo: mmdetection
file: mmdetection-master/configs/seesaw_loss/mask_rcnn_r50_fpn_sample1e-3_seesaw_loss_normed_mask_mstrain_2x_lvis_v1.py
code:
_base_ = './mask_rcnn_r50_fpn_sample1e-3_seesaw_loss_mstrain_2x_lvis_v1.py'
model = dict(
    roi_head=dict(
        mask_head=dict(
            predictor_cfg=dict(type='NormedConv2d', tempearture=20))))
file_length: 204 | avg_line_length: 33.166667 | max_line_length: 75 | extension: py

repo: mmdetection
file: mmdetection-master/configs/seesaw_loss/metafile.yml
code:
Collections:
  - Name: Seesaw Loss
    Metadata:
      Training Data: LVIS
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Architecture:
        - Softmax
        - RPN
        - Convolution
        - Dense Connections
        - FPN
        - Re...
file_length: 7,809 | avg_line_length: 37.284314 | max_line_length: 166 | extension: yml
repo: mmdetection
file: mmdetection-master/configs/selfsup_pretrain/README.md
code:
# Backbones Trained by Self-Supervise Algorithms

<!-- [OTHERS] -->

## Abstract

Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These contrastive methods typically work online and rely on a large n...
file_length: 10,082 | avg_line_length: 90.663636 | max_line_length: 1,510 | extension: md

repo: mmdetection
file: mmdetection-master/configs/selfsup_pretrain/mask_rcnn_r50_fpn_mocov2-pretrain_1x_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    backbone=dict(
        frozen_stages=0,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        norm_eval=False,
        ...
file_length: 418 | avg_line_length: 28.928571 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/selfsup_pretrain/mask_rcnn_r50_fpn_mocov2-pretrain_ms-2x_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    backbone=dict(
        frozen_stages=0,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        norm_eval=False,
        ...
file_length: 1,072 | avg_line_length: 31.515152 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/selfsup_pretrain/mask_rcnn_r50_fpn_swav-pretrain_1x_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    backbone=dict(
        frozen_stages=0,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        norm_eval=False,
        ...
file_length: 416 | avg_line_length: 28.785714 | max_line_length: 76 | extension: py

repo: mmdetection
file: mmdetection-master/configs/selfsup_pretrain/mask_rcnn_r50_fpn_swav-pretrain_ms-2x_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    backbone=dict(
        frozen_stages=0,
        norm_cfg=dict(type='SyncBN', requires_grad=True),
        norm_eval=False,
        ...
file_length: 1,070 | avg_line_length: 31.454545 | max_line_length: 77 | extension: py
repo: mmdetection
file: mmdetection-master/configs/simple_copy_paste/README.md
code:
# SimpleCopyPaste

> [Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation](https://arxiv.org/abs/2012.07177)

<!-- [ALGORITHM] -->

## Abstract

Building instance segmentation models that are data-efficient and can handle rare object categories is an important challenge in computer vision. ...
file_length: 6,383 | avg_line_length: 162.692308 | max_line_length: 1,136 | extension: md

repo: mmdetection
file: mmdetection-master/configs/simple_copy_paste/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_ssj_32x2_270k_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    # 270k iterations with batch_size 64 is roughly equivalent to 144 epochs
    '../common/ssj_270k_coco_instance.py',
]
norm_cfg = dict(type='SyncBN', requires_grad=True)
# Use MMSyncBN that handles empty tensor in head. It can be changed to
# SyncBN after http...
file_length: 813 | avg_line_length: 37.761905 | max_line_length: 76 | extension: py

repo: mmdetection
file: mmdetection-master/configs/simple_copy_paste/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_ssj_32x2_90k_coco.py
code:
_base_ = 'mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_ssj_32x2_270k_coco.py'
# lr steps at [0.9, 0.95, 0.975] of the maximum iterations
lr_config = dict(
    warmup_iters=500,
    warmup_ratio=0.067,
    step=[81000, 85500, 87750])
# 90k iterations with batch_size 64 is roughly equivalent to 48 epochs
runner = dict(type='IterBased...
file_length: 346 | avg_line_length: 42.375 | max_line_length: 71 | extension: py
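The comments in the 90k config above can be checked with a few lines of arithmetic: the step milestones are fixed fractions of the maximum iteration count, and the epoch equivalence follows from iterations times batch size divided by the dataset size (the ~120k COCO training images used below is an assumption consistent with the stated "48 epochs", not a value taken from the config):

```python
max_iters = 90000
# lr decay milestones at 0.9 / 0.95 / 0.975 of the schedule
steps = [int(max_iters * f) for f in (0.9, 0.95, 0.975)]

# iterations -> effective epochs, assuming ~120k COCO train images
batch_size = 64       # 32 GPUs x 2 images per GPU ("32x2" in the filename)
num_images = 120000
epochs = max_iters * batch_size / num_images
```

The same arithmetic gives 144 epochs for the 270k-iteration base schedule.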
repo: mmdetection
file: mmdetection-master/configs/simple_copy_paste/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_ssj_scp_32x2_270k_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    # 270k iterations with batch_size 64 is roughly equivalent to 144 epochs
    '../common/ssj_scp_270k_coco_instance.py'
]
norm_cfg = dict(type='SyncBN', requires_grad=True)
# Use MMSyncBN that handles empty tensor in head. It can be changed to
# SyncBN after h...
file_length: 816 | avg_line_length: 37.904762 | max_line_length: 76 | extension: py

repo: mmdetection
file: mmdetection-master/configs/simple_copy_paste/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_ssj_scp_32x2_90k_coco.py
code:
_base_ = 'mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_ssj_scp_32x2_270k_coco.py'
# lr steps at [0.9, 0.95, 0.975] of the maximum iterations
lr_config = dict(
    warmup_iters=500,
    warmup_ratio=0.067,
    step=[81000, 85500, 87750])
# 90k iterations with batch_size 64 is roughly equivalent to 48 epochs
runner = dict(type='IterB...
file_length: 350 | avg_line_length: 42.875 | max_line_length: 75 | extension: py

repo: mmdetection
file: mmdetection-master/configs/simple_copy_paste/metafile.yml
code:
Collections:
  - Name: SimpleCopyPaste
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 32x A100 GPUs
      Architecture:
        - Softmax
        - RPN
        - Convolution
        - Dense Connections
        - FPN
        ...
file_length: 3,526 | avg_line_length: 36.924731 | max_line_length: 231 | extension: yml
repo: mmdetection
file: mmdetection-master/configs/solo/README.md
code:
# SOLO

> [SOLO: Segmenting Objects by Locations](https://arxiv.org/abs/1912.04488)

<!-- [ALGORITHM] -->

## Abstract

We present a new, embarrassingly simple approach to instance segmentation in images. Compared to many other dense prediction tasks, e.g., semantic segmentation, it is the arbitrary number of instances...
file_length: 6,475 | avg_line_length: 116.745455 | max_line_length: 1,258 | extension: md

repo: mmdetection
file: mmdetection-master/configs/solo/decoupled_solo_light_r50_fpn_3x_coco.py
code:
_base_ = './decoupled_solo_r50_fpn_3x_coco.py'
# model settings
model = dict(
    mask_head=dict(
        type='DecoupledSOLOLightHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        strides=[8, 8, 16, 32, 32],
        scale_ranges=((1, 64), (32, 128), (64...
file_length: 2,062 | avg_line_length: 31.234375 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solo/decoupled_solo_r50_fpn_1x_coco.py
code:
_base_ = [
    './solo_r50_fpn_1x_coco.py',
]
# model settings
model = dict(
    mask_head=dict(
        type='DecoupledSOLOHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=7,
        feat_channels=256,
        strides=[8, 8, 16, 32, 32],
        scale_ranges=((1, 96), (48, 192), (96, 384),...
file_length: 822 | avg_line_length: 27.37931 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solo/decoupled_solo_r50_fpn_3x_coco.py
code:
_base_ = './solo_r50_fpn_3x_coco.py'
# model settings
model = dict(
    mask_head=dict(
        type='DecoupledSOLOHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=7,
        feat_channels=256,
        strides=[8, 8, 16, 32, 32],
        scale_ranges=((1, 96), (48, 192), (96, 384), (192, 7...
file_length: 775 | avg_line_length: 28.846154 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solo/metafile.yml
code:
Collections:
  - Name: SOLO
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Architecture:
        - FPN
        - Convolution
        - ResNet
    Paper: https://arxiv.org/abs/1912.04488
    README: config...
file_length: 3,420 | avg_line_length: 28.491379 | max_line_length: 168 | extension: yml

repo: mmdetection
file: mmdetection-master/configs/solo/solo_r50_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
# model settings
model = dict(
    type='SOLO',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1, ...
file_length: 1,523 | avg_line_length: 27.222222 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solo/solo_r50_fpn_3x_coco.py
code:
_base_ = './solo_r50_fpn_1x_coco.py'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(
        type='Resize',
        img_scale=[(1333, 800...
file_length: 942 | avg_line_length: 31.517241 | max_line_length: 77 | extension: py
repo: mmdetection
file: mmdetection-master/configs/solov2/README.md
code:
# SOLOv2

> [SOLOv2: Dynamic and Fast Instance Segmentation](https://arxiv.org/abs/2003.10152)

<!-- [ALGORITHM] -->

## Abstract

In this work, we aim at building a simple, direct, and fast instance segmentation framework with strong performance. We follow the principle of the SOLO method of Wang et al. "SOLO: segment...
file_length: 7,591 | avg_line_length: 125.533333 | max_line_length: 478 | extension: md

repo: mmdetection
file: mmdetection-master/configs/solov2/metafile.yml
code:
Collections:
  - Name: SOLOv2
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x A100 GPUs
      Architecture:
        - FPN
        - Convolution
        - ResNet
    Paper: https://arxiv.org/abs/2003.10152
    README: conf...
file_length: 3,914 | avg_line_length: 31.625 | max_line_length: 154 | extension: yml

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_light_r18_fpn_3x_coco.py
code:
_base_ = 'solov2_light_r50_fpn_3x_coco.py'
# model settings
model = dict(
    backbone=dict(
        depth=18,
        init_cfg=dict(checkpoint='torchvision://resnet18')),
    neck=dict(in_channels=[64, 128, 256, 512]))
file_length: 213 | avg_line_length: 25.75 | max_line_length: 70 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_light_r34_fpn_3x_coco.py
code:
_base_ = 'solov2_light_r50_fpn_3x_coco.py'
# model settings
model = dict(
    backbone=dict(
        depth=34,
        init_cfg=dict(checkpoint='torchvision://resnet34')),
    neck=dict(in_channels=[64, 128, 256, 512]))
file_length: 213 | avg_line_length: 25.75 | max_line_length: 70 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_light_r50_dcn_fpn_3x_coco.py
code:
_base_ = 'solov2_r50_fpn_3x_coco.py'
# model settings
model = dict(
    backbone=dict(
        dcn=dict(type='DCNv2', deformable_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    mask_head=dict(
        feat_channels=256,
        stacked_convs=3,
        scale_ranges=((1, 64),...
file_length: 1,991 | avg_line_length: 30.619048 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_light_r50_fpn_3x_coco.py
code:
_base_ = 'solov2_r50_fpn_1x_coco.py'
# model settings
model = dict(
    mask_head=dict(
        stacked_convs=2,
        feat_channels=256,
        scale_ranges=((1, 56), (28, 112), (56, 224), (112, 448), (224, 896)),
        mask_feature_head=dict(out_channels=128)))
# learning policy
lr_config = dict(
    policy='s...
file_length: 1,747 | avg_line_length: 29.137931 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_r101_dcn_fpn_3x_coco.py
code:
_base_ = 'solov2_r50_fpn_3x_coco.py'
# model settings
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(checkpoint='torchvision://resnet101'),
        dcn=dict(type='DCNv2', deformable_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    mask_head=dict(
        ...
file_length: 452 | avg_line_length: 31.357143 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_r101_fpn_3x_coco.py
code:
_base_ = 'solov2_r50_fpn_3x_coco.py'
# model settings
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(checkpoint='torchvision://resnet101')))
file_length: 161 | avg_line_length: 22.142857 | max_line_length: 72 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_r50_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
# model settings
model = dict(
    type='SOLOv2',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,...
file_length: 1,825 | avg_line_length: 28.451613 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_r50_fpn_3x_coco.py
code:
_base_ = 'solov2_r50_fpn_1x_coco.py'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(
        type='Resize',
        img_scale=[(1333, 800...
file_length: 942 | avg_line_length: 31.517241 | max_line_length: 77 | extension: py

repo: mmdetection
file: mmdetection-master/configs/solov2/solov2_x101_dcn_fpn_3x_coco.py
code:
_base_ = 'solov2_r50_fpn_3x_coco.py'
# model settings
model = dict(
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=64,
        base_width=4,
        dcn=dict(type='DCNv2', deformable_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True),
        init_cfg=di...
file_length: 555 | avg_line_length: 29.888889 | max_line_length: 78 | extension: py
repo: mmdetection
file: mmdetection-master/configs/sparse_rcnn/README.md
code:
# Sparse R-CNN

> [Sparse R-CNN: End-to-End Object Detection with Learnable Proposals](https://arxiv.org/abs/2011.12450)

<!-- [ALGORITHM] -->

## Abstract

We present Sparse R-CNN, a purely sparse method for object detection in images. Existing works on object detection heavily rely on dense object candidates, such as...
file_length: 6,964 | avg_line_length: 177.589744 | max_line_length: 1,110 | extension: md

repo: mmdetection
file: mmdetection-master/configs/sparse_rcnn/metafile.yml
code:
Collections:
  - Name: Sparse R-CNN
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Architecture:
        - FPN
        - ResNet
        - Sparse R-CNN
    Paper:
      URL: https://arxiv.org/abs/2011.1245...
file_length: 3,153 | avg_line_length: 37.938272 | max_line_length: 229 | extension: yml

repo: mmdetection
file: mmdetection-master/configs/sparse_rcnn/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py
code:
_base_ = './sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py'
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
file_length: 235 | avg_line_length: 28.5 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py
code:
_base_ = './sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py'
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
file_length: 216 | avg_line_length: 26.125 | max_line_length: 61 | extension: py

repo: mmdetection
file: mmdetection-master/configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
num_stages = 6
num_proposals = 100
model = dict(
    type='SparseRCNN',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3), ...
file_length: 3,469 | avg_line_length: 35.145833 | max_line_length: 79 | extension: py

repo: mmdetection
file: mmdetection-master/configs/sparse_rcnn/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco.py
code:
_base_ = './sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py'
num_proposals = 300
model = dict(
    rpn_head=dict(num_proposals=num_proposals),
    test_cfg=dict(
        _delete_=True,
        rpn=None,
        rcnn=dict(max_per_img=num_proposals)))
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], t...
file_length: 2,191 | avg_line_length: 40.358491 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/sparse_rcnn/sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py
code:
_base_ = './sparse_rcnn_r50_fpn_1x_coco.py'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
min_values = (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True), ...
file_length: 853 | avg_line_length: 34.583333 | max_line_length: 77 | extension: py
repo: mmdetection
file: mmdetection-master/configs/ssd/README.md
code:
# SSD

> [SSD: Single Shot MultiBox Detector](https://arxiv.org/abs/1512.02325)

<!-- [ALGORITHM] -->

## Abstract

We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different a...
file_length: 6,646 | avg_line_length: 104.507937 | max_line_length: 1,486 | extension: md

repo: mmdetection
file: mmdetection-master/configs/ssd/ascend_ssd300_coco.py
code:
_base_ = [
    '../_base_/models/ascend_ssd300.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py'
]
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=Tr...
file_length: 2,371 | avg_line_length: 31.493151 | max_line_length: 79 | extension: py

repo: mmdetection
file: mmdetection-master/configs/ssd/metafile.yml
code:
Collections:
  - Name: SSD
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Architecture:
        - VGG
    Paper:
      URL: https://arxiv.org/abs/1512.02325
      Title: 'SSD: Single Shot MultiBox Detecto...
file_length: 2,277 | avg_line_length: 27.835443 | max_line_length: 169 | extension: yml

repo: mmdetection
file: mmdetection-master/configs/ssd/ssd300_coco.py
code:
_base_ = [
    '../_base_/models/ssd300.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_2x.py',
    '../_base_/default_runtime.py'
]
# dataset settings
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True)
train_p...
file_length: 2,360 | avg_line_length: 31.791667 | max_line_length: 79 | extension: py

repo: mmdetection
file: mmdetection-master/configs/ssd/ssd300_fp16_coco.py
code:
_base_ = ['./ssd300_coco.py']
fp16 = dict(loss_scale='dynamic')
# learning policy
# In order to avoid non-convergence in the early stage of
# mixed-precision training, the warmup in the lr_config is set to linear,
# warmup_iters increases and warmup_ratio decreases.
lr_config = dict(warmup='linear', warmup_iters=1000...
file_length: 345 | avg_line_length: 33.6 | max_line_length: 75 | extension: py
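The warmup comments above describe a linear ramp from a small fraction of the base learning rate up to the full value over `warmup_iters` iterations. A sketch of that rule; the exact interpolation formula below is assumed from how mmcv's `LrUpdaterHook` computes linear warmup, so treat it as illustrative rather than authoritative:

```python
def warmup_lr(base_lr, cur_iter, warmup_iters=1000, warmup_ratio=0.001):
    # Linear warmup: lr starts at base_lr * warmup_ratio at iteration 0
    # and ramps linearly to base_lr by iteration warmup_iters.
    if cur_iter >= warmup_iters:
        return base_lr
    k = (1 - cur_iter / warmup_iters) * (1 - warmup_ratio)
    return base_lr * (1 - k)
```

A larger `warmup_iters` and smaller `warmup_ratio`, as the comment recommends for fp16, make the early-training ramp both longer and gentler.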
repo: mmdetection
file: mmdetection-master/configs/ssd/ssd512_coco.py
code:
_base_ = 'ssd300_coco.py'
input_size = 512
model = dict(
    neck=dict(
        out_channels=(512, 1024, 512, 256, 256, 256, 256),
        level_strides=(2, 2, 2, 2, 1),
        level_paddings=(1, 1, 1, 1, 1),
        last_kernel_size=4),
    bbox_head=dict(
        in_channels=(512, 1024, 512, 256, 256, 256, 256), ...
file_length: 2,820 | avg_line_length: 32.188235 | max_line_length: 79 | extension: py

repo: mmdetection
file: mmdetection-master/configs/ssd/ssd512_fp16_coco.py
code:
_base_ = ['./ssd512_coco.py']
# fp16 settings
fp16 = dict(loss_scale='dynamic')
# learning policy
# In order to avoid non-convergence in the early stage of
# mixed-precision training, the warmup in the lr_config is set to linear,
# warmup_iters increases and warmup_ratio decreases.
lr_config = dict(warmup='linear', wa...
file_length: 360 | avg_line_length: 35.1 | max_line_length: 75 | extension: py

repo: mmdetection
file: mmdetection-master/configs/ssd/ssdlite_mobilenetv2_scratch_600e_coco.py
code:
_base_ = [
    '../_base_/datasets/coco_detection.py',
    '../_base_/default_runtime.py'
]
model = dict(
    type='SingleStageDetector',
    backbone=dict(
        type='MobileNetV2',
        out_indices=(4, 7),
        norm_cfg=dict(type='BN', eps=0.001, momentum=0.03),
        init_cfg=dict(type='TruncNormal', layer='C...
file_length: 4,928 | avg_line_length: 31.642384 | max_line_length: 77 | extension: py
repo: mmdetection
file: mmdetection-master/configs/strong_baselines/README.md
code:
# Strong Baselines

<!-- [OTHERS] -->

We train Mask R-CNN with large-scale jitter and longer schedule as strong baselines. The modifications follow those in [Detectron2](https://github.com/facebookresearch/detectron2/tree/master/configs/new_baselines).

## Results and Models

| Backbone | Style | Lr schd | Mem (GB) ...
file_length: 1,636 | avg_line_length: 76.952381 | max_line_length: 182 | extension: md

repo: mmdetection
file: mmdetection-master/configs/strong_baselines/mask_rcnn_r50_caffe_fpn_syncbn-all_rpn-2conv_lsj_100e_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../common/lsj_100e_coco_instance.py'
]
norm_cfg = dict(type='SyncBN', requires_grad=True)
# Use MMSyncBN that handles empty tensor in head. It can be changed to
# SyncBN after https://github.com/pytorch/pytorch/issues/36530 is fixed
# Requires MMCV-full afte...
file_length: 2,703 | avg_line_length: 32.382716 | max_line_length: 77 | extension: py

repo: mmdetection
file: mmdetection-master/configs/strong_baselines/mask_rcnn_r50_caffe_fpn_syncbn-all_rpn-2conv_lsj_100e_fp16_coco.py
code:
_base_ = 'mask_rcnn_r50_caffe_fpn_syncbn-all_rpn-2conv_lsj_100e_coco.py'
fp16 = dict(loss_scale=512.)
file_length: 102 | avg_line_length: 33.333333 | max_line_length: 72 | extension: py
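Why `loss_scale=512.`? Half-precision floats cannot represent magnitudes much below about 6e-8, so small gradients silently underflow to zero; multiplying the loss (and hence the gradients) by a static scale keeps them representable, and the optimizer divides them back before the update. A stdlib demonstration of the underflow using `struct`'s IEEE-754 half-precision format (this is an illustration of the numerical effect, not how PyTorch implements it):

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through IEEE-754 half precision ('e' format).
    return struct.unpack('e', struct.pack('e', x))[0]

tiny_grad = 1e-8     # below fp16's smallest subnormal (~5.96e-8): underflows
scale = 512.0        # the static loss scale from the config above
scaled = to_fp16(tiny_grad * scale)   # survives in fp16
recovered = scaled / scale            # unscaled before the optimizer step
```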
repo: mmdetection
file: mmdetection-master/configs/strong_baselines/mask_rcnn_r50_caffe_fpn_syncbn-all_rpn-2conv_lsj_400e_coco.py
code:
_base_ = './mask_rcnn_r50_caffe_fpn_syncbn-all_rpn-2conv_lsj_100e_coco.py'
# Use RepeatDataset to speed up training
# change repeat time from 4 (for 100 epochs) to 16 (for 400 epochs)
data = dict(train=dict(times=4 * 4))
lr_config = dict(warmup_iters=500 * 4)
file_length: 261 | avg_line_length: 36.428571 | max_line_length: 74 | extension: py
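The comment above states that the 100-epoch base schedule uses a repeat factor of 4, so effective training length scales linearly with `times`. The arithmetic, written out (values taken directly from the comment):

```python
base_effective_epochs = 100   # what the base config trains for at times=4
base_times = 4
new_times = 4 * 4             # the value set in this config
# Effective epochs scale linearly with the RepeatDataset repeat factor;
# warmup_iters is stretched by the same factor (500 * 4 above).
new_effective_epochs = base_effective_epochs * new_times // base_times
```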
repo: mmdetection
file: mmdetection-master/configs/strong_baselines/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_lsj_100e_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../common/lsj_100e_coco_instance.py'
]
norm_cfg = dict(type='SyncBN', requires_grad=True)
# Use MMSyncBN that handles empty tensor in head. It can be changed to
# SyncBN after https://github.com/pytorch/pytorch/issues/36530 is fixed
# Requires MMCV-full afte...
file_length: 893 | avg_line_length: 37.869565 | max_line_length: 77 | extension: py

repo: mmdetection
file: mmdetection-master/configs/strong_baselines/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_lsj_100e_fp16_coco.py
code:
_base_ = 'mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_lsj_100e_coco.py'
# use FP16
fp16 = dict(loss_scale=512.)
file_length: 107 | avg_line_length: 26 | max_line_length: 66 | extension: py

repo: mmdetection
file: mmdetection-master/configs/strong_baselines/mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_lsj_50e_coco.py
code:
_base_ = 'mask_rcnn_r50_fpn_syncbn-all_rpn-2conv_lsj_100e_coco.py'
# Use RepeatDataset to speed up training
# change repeat time from 4 (for 100 epochs) to 2 (for 50 epochs)
data = dict(train=dict(times=2))
file_length: 208 | avg_line_length: 33.833333 | max_line_length: 66 | extension: py
repo: mmdetection
file: mmdetection-master/configs/swin/README.md
code:
# Swin

> [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)

<!-- [BACKBONE] -->

## Abstract

This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision. Challenges in adapting Tr...
file_length: 5,732 | avg_line_length: 135.5 | max_line_length: 1,455 | extension: md

repo: mmdetection
file: mmdetection-master/configs/swin/mask_rcnn_swin-s-p4-w7_fpn_fp16_ms-crop-3x_coco.py
code:
_base_ = './mask_rcnn_swin-t-p4-w7_fpn_fp16_ms-crop-3x_coco.py'
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_small_patch4_window7_224.pth'  # noqa
model = dict(
    backbone=dict(
        depths=[2, 2, 18, 2],
        init_cfg=dict(type='Pretrained', checkpoint=pretrained)))
file_length: 318 | avg_line_length: 44.571429 | max_line_length: 124 | extension: py

repo: mmdetection
file: mmdetection-master/configs/swin/mask_rcnn_swin-t-p4-w7_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth'  # noqa
model = dict(
    type...
file_length: 1,301 | avg_line_length: 29.27907 | max_line_length: 123 | extension: py

repo: mmdetection
file: mmdetection-master/configs/swin/mask_rcnn_swin-t-p4-w7_fpn_fp16_ms-crop-3x_coco.py
code:
_base_ = './mask_rcnn_swin-t-p4-w7_fpn_ms-crop-3x_coco.py'
# you need to set mode='dynamic' if you are using pytorch<=1.5.0
fp16 = dict(loss_scale=dict(init_scale=512))
file_length: 169 | avg_line_length: 41.5 | max_line_length: 64 | extension: py
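The `init_scale=512` above configures dynamic loss scaling: unlike the static `loss_scale=512.` used elsewhere in this dump, a dynamic scaler halves its scale whenever gradients overflow and grows it back after a run of clean steps. A minimal sketch of that policy (this is not mmcv's or PyTorch's actual implementation; `growth_interval=2000` mirrors PyTorch's `GradScaler` default as an assumption):

```python
class DynamicLossScaler:
    """Sketch of dynamic loss scaling: shrink on overflow, grow when stable."""

    def __init__(self, init_scale=512, growth_interval=2000):
        self.scale = init_scale
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_overflow):
        if found_overflow:
            # Overflowed gradients: skip the step and back off the scale.
            self.scale /= 2
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps == self.growth_interval:
                # Stable for a while: try a larger scale again.
                self.scale *= 2
                self._good_steps = 0
```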
repo: mmdetection
file: mmdetection-master/configs/swin/mask_rcnn_swin-t-p4-w7_fpn_ms-crop-3x_coco.py
code:
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth'  # noqa
model = dict(
    ty...
file_length: 3,305 | avg_line_length: 34.934783 | max_line_length: 123 | extension: py

repo: mmdetection
file: mmdetection-master/configs/swin/metafile.yml
code:
Models:
  - Name: mask_rcnn_swin-s-p4-w7_fpn_fp16_ms-crop-3x_coco
    In Collection: Mask R-CNN
    Config: configs/swin/mask_rcnn_swin-s-p4-w7_fpn_fp16_ms-crop-3x_coco.py
    Metadata:
      Training Memory (GB): 11.9
      Epochs: 36
      Training Data: COCO
      Training Techniques:
        - AdamW
      Training ...
file_length: 4,301 | avg_line_length: 34.553719 | max_line_length: 190 | extension: yml

repo: mmdetection
file: mmdetection-master/configs/swin/retinanet_swin-t-p4-w7_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/models/retinanet_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
pretrained = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth'  # noqa
model = dict(
    bac...
file_length: 1,073 | avg_line_length: 33.645161 | max_line_length: 123 | extension: py
repo: mmdetection
file: mmdetection-master/configs/timm_example/README.md
code:
# Timm Example

> [PyTorch Image Models](https://github.com/rwightman/pytorch-image-models)

<!-- [OTHERS] -->

## Abstract

Py**T**orch **Im**age **M**odels (`timm`) is a collection of image models, layers, utilities, optimizers, schedulers, data-loaders / augmentations, and reference training / validation scripts tha...
file_length: 2,623 | avg_line_length: 40.650794 | max_line_length: 300 | extension: md

repo: mmdetection
file: mmdetection-master/configs/timm_example/retinanet_timm_efficientnet_b1_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/models/retinanet_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
# please install mmcls>=0.20.0
# import mmcls.models to trigger register_module in mmcls
custom_imports = dict(imports=['mmcls.models'], allow_f...
file_length: 679 | avg_line_length: 31.380952 | max_line_length: 75 | extension: py

repo: mmdetection
file: mmdetection-master/configs/timm_example/retinanet_timm_tv_resnet50_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/models/retinanet_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
# please install mmcls>=0.20.0
# import mmcls.models to trigger register_module in mmcls
custom_imports = dict(imports=['mmcls.models'], allow_f...
file_length: 666 | avg_line_length: 32.35 | max_line_length: 75 | extension: py
repo: mmdetection
file: mmdetection-master/configs/tood/README.md
code:
# TOOD

> [TOOD: Task-aligned One-stage Object Detection](https://arxiv.org/abs/2108.07755)

<!-- [ALGORITHM] -->

## Abstract

One-stage object detection is commonly implemented by optimizing two sub-tasks: object classification and localization, using heads with two parallel branches, which might lead to a certain le...
file_length: 6,820 | avg_line_length: 165.365854 | max_line_length: 1,244 | extension: md

repo: mmdetection
file: mmdetection-master/configs/tood/metafile.yml
code:
Collections:
  - Name: TOOD
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD
      Training Resources: 8x V100 GPUs
      Architecture:
        - TOOD
    Paper:
      URL: https://arxiv.org/abs/2108.07755
      Title: 'TOOD: Task-aligned One-stage Object Detection'
    README: configs/t...
file_length: 3,206 | avg_line_length: 32.40625 | max_line_length: 178 | extension: yml

repo: mmdetection
file: mmdetection-master/configs/tood/tood_r101_fpn_dconv_c3-c5_mstrain_2x_coco.py
code:
_base_ = './tood_r101_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        dcn=dict(type='DCNv2', deformable_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    bbox_head=dict(num_dcn=2))
file_length: 241 | avg_line_length: 29.25 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tood/tood_r101_fpn_mstrain_2x_coco.py
code:
_base_ = './tood_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
file_length: 201 | avg_line_length: 24.25 | max_line_length: 61 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tood/tood_r50_fpn_1x_coco.py
code:
_base_ = [
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    type='TOOD',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=d...
file_length: 2,306 | avg_line_length: 29.76 | max_line_length: 79 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tood/tood_r50_fpn_anchor_based_1x_coco.py
code:
_base_ = './tood_r50_fpn_1x_coco.py'
model = dict(bbox_head=dict(anchor_type='anchor_based'))
file_length: 94 | avg_line_length: 30.666667 | max_line_length: 56 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tood/tood_r50_fpn_mstrain_2x_coco.py
code:
_base_ = './tood_r50_fpn_1x_coco.py'
# learning policy
lr_config = dict(step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)
# multi-scale training
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'), ...
file_length: 789 | avg_line_length: 33.347826 | max_line_length: 77 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tood/tood_x101_64x4d_fpn_dconv_c4-c5_mstrain_2x_coco.py
code:
_base_ = './tood_x101_64x4d_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        dcn=dict(type='DCNv2', deformable_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, False, True, True),
    ),
    bbox_head=dict(num_dcn=2))
file_length: 253 | avg_line_length: 30.75 | max_line_length: 78 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tood/tood_x101_64x4d_fpn_mstrain_2x_coco.py
code:
_base_ = './tood_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=64,
        base_width=4,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True...
file_length: 447 | avg_line_length: 25.352941 | max_line_length: 76 | extension: py
repo: mmdetection
file: mmdetection-master/configs/tridentnet/README.md
code:
# TridentNet

> [Scale-Aware Trident Networks for Object Detection](https://arxiv.org/abs/1901.01892)

<!-- [ALGORITHM] -->

## Abstract

Scale variation is one of the key challenges in object detection. In this work, we first present a controlled experiment to investigate the effect of receptive fields for scale varia...
file_length: 3,931 | avg_line_length: 99.820513 | max_line_length: 986 | extension: md

repo: mmdetection
file: mmdetection-master/configs/tridentnet/metafile.yml
code:
Collections:
  - Name: TridentNet
    Metadata:
      Training Data: COCO
      Training Techniques:
        - SGD with Momentum
        - Weight Decay
      Training Resources: 8x V100 GPUs
      Architecture:
        - ResNet
        - TridentNet Block
    Paper:
      URL: https://arxiv.org/abs/1901.01892
      Titl...
file_length: 1,921 | avg_line_length: 33.321429 | max_line_length: 174 | extension: yml

repo: mmdetection
file: mmdetection-master/configs/tridentnet/tridentnet_r50_caffe_1x_coco.py
code:
_base_ = [
    '../_base_/models/faster_rcnn_r50_caffe_c4.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]
model = dict(
    type='TridentFasterRCNN',
    backbone=dict(
        type='TridentResNet',
        trident_dilations=(1, 2, 3), ...
file_length: 1,868 | avg_line_length: 32.375 | max_line_length: 74 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tridentnet/tridentnet_r50_caffe_mstrain_1x_coco.py
code:
_base_ = 'tridentnet_r50_caffe_1x_coco.py'
# use caffe img_norm
img_norm_cfg = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Resize',
        img_scale=[(133...
file_length: 756 | avg_line_length: 31.913043 | max_line_length: 72 | extension: py

repo: mmdetection
file: mmdetection-master/configs/tridentnet/tridentnet_r50_caffe_mstrain_3x_coco.py
code:
_base_ = 'tridentnet_r50_caffe_mstrain_1x_coco.py'
lr_config = dict(step=[28, 34])
runner = dict(type='EpochBasedRunner', max_epochs=36)
file_length: 138 | avg_line_length: 26.8 | max_line_length: 53 | extension: py
mmdetection
mmdetection-master/configs/vfnet/README.md
# VarifocalNet > [VarifocalNet: An IoU-aware Dense Object Detector](https://arxiv.org/abs/2008.13367) <!-- [ALGORITHM] --> ## Abstract Accurately ranking the vast number of candidate detections is crucial for dense object detectors to achieve high performance. Prior work uses the classification score or a combinati...
8,844
179.510204
1,350
md
mmdetection
mmdetection-master/configs/vfnet/metafile.yml
Collections: - Name: VFNet Metadata: Training Data: COCO Training Techniques: - SGD with Momentum - Weight Decay Training Resources: 8x V100 GPUs Architecture: - FPN - ResNet - Varifocal Loss Paper: URL: https://arxiv.org/abs/2008.13367 ...
4,075
33.837607
191
yml
mmdetection
mmdetection-master/configs/vfnet/vfnet_r101_fpn_1x_coco.py
_base_ = './vfnet_r50_fpn_1x_coco.py' model = dict( backbone=dict( depth=101, init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
193
26.714286
61
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r101_fpn_2x_coco.py
_base_ = './vfnet_r50_fpn_1x_coco.py' model = dict( backbone=dict( depth=101, init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101'))) lr_config = dict(step=[16, 22]) runner = dict(type='EpochBasedRunner', max_epochs=24)
279
30.111111
61
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r101_fpn_mdconv_c3-c5_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py' model = dict( backbone=dict( type='ResNet', depth=101, num_stages=4, out_indices=(0, 1, 2, 3), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=True, style='pytorch', ...
546
33.1875
74
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r101_fpn_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        depth=101,
        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
201
27.857143
61
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r2_101_fpn_mdconv_c3-c5_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        type='Res2Net',
        depth=101,
        scales=4,
        base_width=26,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        n...
602
30.736842
74
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r2_101_fpn_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        type='Res2Net',
        depth=101,
        scales=4,
        base_width=26,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True...
464
26.352941
62
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r50_fpn_1x_coco.py
_base_ = [
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# model settings
model = dict(
    type='VFNet',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        ...
3,240
29.009259
79
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    bbox_head=dict(dcn_on_last_conv=True))
248
34.571429
74
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_r50_fpn_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_1x_coco.py'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(
        type='Resize',
        img_scale=[(1333, 480), (1333, 960)],
        ...
1,312
31.825
77
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_x101_32x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=32,
        base_width=4,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        n...
585
31.555556
76
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_x101_32x4d_fpn_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=32,
        base_width=4,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True...
447
27
76
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=64,
        base_width=4,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        n...
585
31.555556
76
py
mmdetection
mmdetection-master/configs/vfnet/vfnet_x101_64x4d_fpn_mstrain_2x_coco.py
_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=64,
        base_width=4,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True...
447
27
76
py
mmdetection
mmdetection-master/configs/wider_face/README.md
# WIDER FACE

> [WIDER FACE: A Face Detection Benchmark](https://arxiv.org/abs/1511.06523)

<!-- [DATASET] -->

## Abstract

Face detection is one of the most studied topics in the computer vision community. Much of the progress has been made by the availability of face detection benchmark datasets. We show that the...
2,669
45.034483
1,000
md
mmdetection
mmdetection-master/configs/wider_face/ssd300_wider_face.py
_base_ = [
    '../_base_/models/ssd300.py', '../_base_/datasets/wider_face.py',
    '../_base_/default_runtime.py'
]
model = dict(bbox_head=dict(num_classes=1))
# optimizer
optimizer = dict(type='SGD', lr=0.012, momentum=0.9, weight_decay=5e-4)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learni...
557
28.368421
71
py
mmdetection
mmdetection-master/configs/yolact/README.md
# YOLACT

> [YOLACT: Real-time Instance Segmentation](https://arxiv.org/abs/1904.02689)

<!-- [ALGORITHM] -->

## Abstract

We present a simple, fully-convolutional model for real-time instance segmentation that achieves 29.8 mAP on MS COCO at 33.5 fps evaluated on a single Titan Xp, which is significantly faster than ...
4,963
64.315789
1,034
md