| repo (string, 2–152 chars, nullable) | file (string, 15–239 chars) | code (string, 0–58.4M chars) | file_length (int64, 0–58.4M) | avg_line_length (float64, 0–1.81M) | max_line_length (int64, 0–12.7M) | extension_type (string, 364 classes) |
|---|---|---|---|---|---|---|
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_480x480_80k_pascal_context_59.py | _base_ = './fcn_hr18_480x480_80k_pascal_context_59.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
... | 414 | 36.727273 | 74 | py |
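The row above shows the pattern repeated throughout this dataset: a child config declares `_base_` and lists only the nested keys it overrides (here, the HRNet stage channel widths). Conceptually this is a recursive dict merge. The sketch below is a minimal illustration of that merge semantics, not mmcv's actual implementation; the `base`/`child` dicts are abbreviated stand-ins for the real config files.

```python
# Illustrative sketch of `_base_`-style config inheritance: child keys
# recursively override the base, and base keys the child omits are kept.
# (Not the real mmcv merge code -- a simplified model of its behavior.)
def merge_cfg(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)  # deep-merge nested dicts
        else:
            merged[key] = value  # leaf values are replaced outright
    return merged

# Abbreviated stand-ins for fcn_hr18_*.py (base) and fcn_hr48_*.py (child).
base = {'model': {'pretrained': 'open-mmlab://msra/hrnetv2_w18',
                  'backbone': {'extra': {'stage2': {'num_channels': (18, 36)}}}}}
child = {'model': {'pretrained': 'open-mmlab://msra/hrnetv2_w48',
                   'backbone': {'extra': {'stage2': {'num_channels': (48, 96)}}}}}
cfg = merge_cfg(base, child)
```

After the merge, `cfg` carries the W48 overrides while any base-only keys would survive untouched.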
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_4x4_512x512_80k_vaihingen.py | _base_ = './fcn_hr18_4x4_512x512_80k_vaihingen.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
dec... | 410 | 36.363636 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_4x4_896x896_80k_isaid.py | _base_ = './fcn_hr18_4x4_896x896_80k_isaid.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decode_... | 406 | 36 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x1024_160k_cityscapes.py | _base_ = './fcn_hr18_512x1024_160k_cityscapes.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
deco... | 409 | 36.272727 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x1024_40k_cityscapes.py | _base_ = './fcn_hr18_512x1024_40k_cityscapes.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decod... | 408 | 36.181818 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x1024_80k_cityscapes.py | _base_ = './fcn_hr18_512x1024_80k_cityscapes.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decod... | 408 | 36.181818 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x512_160k_ade20k.py | _base_ = './fcn_hr18_512x512_160k_ade20k.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decode_he... | 404 | 35.818182 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x512_20k_voc12aug.py | _base_ = './fcn_hr18_512x512_20k_voc12aug.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decode_h... | 405 | 35.909091 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x512_40k_voc12aug.py | _base_ = './fcn_hr18_512x512_40k_voc12aug.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decode_h... | 405 | 35.909091 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x512_80k_ade20k.py | _base_ = './fcn_hr18_512x512_80k_ade20k.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decode_hea... | 403 | 35.727273 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x512_80k_loveda.py | _base_ = './fcn_hr18_512x512_80k_loveda.py'
model = dict(
backbone=dict(
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://msra/hrnetv2_w48'),
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict... | 454 | 36.916667 | 75 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/fcn_hr48_512x512_80k_potsdam.py | _base_ = './fcn_hr18_512x512_80k_potsdam.py'
model = dict(
pretrained='open-mmlab://msra/hrnetv2_w48',
backbone=dict(
extra=dict(
stage2=dict(num_channels=(48, 96)),
stage3=dict(num_channels=(48, 96, 192)),
stage4=dict(num_channels=(48, 96, 192, 384)))),
decode_he... | 404 | 35.818182 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/hrnet/hrnet.yml | Models:
- Name: fcn_hr18s_512x1024_40k_cityscapes
In Collection: FCN
Metadata:
backbone: HRNetV2p-W18-Small
crop size: (512,1024)
lr schd: 40000
inference time (ms/im):
- value: 42.12
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,1024)
... | 22,287 | 31.022989 | 174 | yml |
mmsegmentation | mmsegmentation-master/configs/icnet/README.md | # ICNet
[ICNet for Real-time Semantic Segmentation on High-resolution Images](https://arxiv.org/abs/1704.08545)
## Introduction
<!-- [ALGORITHM] -->
<a href="https://github.com/hszhao/ICNet">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.18.0/mmseg/models/necks/ic_neck.py#L77">Code... | 10,282 | 179.403509 | 675 | md |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet.yml | Collections:
- Name: ICNet
Metadata:
Training Data:
- Cityscapes
Paper:
URL: https://arxiv.org/abs/1704.08545
Title: ICNet for Real-time Semantic Segmentation on High-resolution Images
README: configs/icnet/README.md
Code:
URL: https://github.com/open-mmlab/mmsegmentation/blob/v0.18.0/mmseg/... | 7,283 | 34.019231 | 190 | yml |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r101-d8_832x832_160k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_160k_cityscapes.py'
model = dict(backbone=dict(backbone_cfg=dict(depth=101)))
| 111 | 36.333333 | 57 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r101-d8_832x832_80k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_80k_cityscapes.py'
model = dict(backbone=dict(backbone_cfg=dict(depth=101)))
| 110 | 36 | 57 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r101-d8_in1k-pre_832x832_160k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_160k_cityscapes.py'
model = dict(
backbone=dict(
backbone_cfg=dict(
depth=101,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://resnet101_v1c'))))
| 242 | 29.375 | 78 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r101-d8_in1k-pre_832x832_80k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_80k_cityscapes.py'
model = dict(
backbone=dict(
backbone_cfg=dict(
depth=101,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://resnet101_v1c'))))
| 241 | 29.25 | 78 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r18-d8_832x832_160k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_160k_cityscapes.py'
model = dict(
backbone=dict(layer_channels=(128, 512), backbone_cfg=dict(depth=18)))
| 142 | 34.75 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r18-d8_832x832_80k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_80k_cityscapes.py'
model = dict(
backbone=dict(layer_channels=(128, 512), backbone_cfg=dict(depth=18)))
| 141 | 34.5 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r18-d8_in1k-pre_832x832_160k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_160k_cityscapes.py'
model = dict(
backbone=dict(
layer_channels=(128, 512),
backbone_cfg=dict(
depth=18,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://resnet18_v1c'))))
| 275 | 29.666667 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r18-d8_in1k-pre_832x832_80k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_80k_cityscapes.py'
model = dict(
backbone=dict(
layer_channels=(128, 512),
backbone_cfg=dict(
depth=18,
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://resnet18_v1c'))))
| 274 | 29.555556 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r50-d8_832x832_160k_cityscapes.py | _base_ = [
'../_base_/models/icnet_r50-d8.py',
'../_base_/datasets/cityscapes_832x832.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_160k.py'
]
| 176 | 28.5 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r50-d8_832x832_80k_cityscapes.py | _base_ = [
'../_base_/models/icnet_r50-d8.py',
'../_base_/datasets/cityscapes_832x832.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
| 175 | 28.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r50-d8_in1k-pre_832x832_160k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_160k_cityscapes.py'
model = dict(
backbone=dict(
backbone_cfg=dict(
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://resnet50_v1c'))))
| 218 | 30.285714 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/icnet/icnet_r50-d8_in1k-pre_832x832_80k_cityscapes.py | _base_ = './icnet_r50-d8_832x832_80k_cityscapes.py'
model = dict(
backbone=dict(
backbone_cfg=dict(
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://resnet50_v1c'))))
| 217 | 30.142857 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/imagenets/README.md | # ImageNet-S
> [Large-scale Unsupervised Semantic Segmentation](https://arxiv.org/abs/2106.03149)
<!-- [DATASET] -->
## Abstract
<!-- [ABSTRACT] -->
Powered by the ImageNet dataset, unsupervised learning on large-scale data has made significant advances for classification tasks. There are two major challenges to a... | 9,373 | 86.607477 | 845 | md |
mmsegmentation | mmsegmentation-master/configs/imagenets/fcn_mae-base_finetuned_fp16_8x32_224x224_3600_imagenets919.py | _base_ = [
'../_base_/models/fcn_r50-d8.py', '../_base_/datasets/imagenets.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py'
]
model = dict(
pretrained='./pretrain/mae_finetuned_vit_base_mmcls.pth',
backbone=dict(
_delete_=True,
type='VisionTransformer',
... | 1,915 | 26.371429 | 75 | py |
mmsegmentation | mmsegmentation-master/configs/imagenets/fcn_mae-base_pretrained_fp16_8x32_224x224_3600_imagenets919.py | _base_ = [
'../_base_/models/fcn_r50-d8.py', '../_base_/datasets/imagenets.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py'
]
model = dict(
pretrained='./pretrain/mae_pretrain_vit_base_mmcls.pth',
backbone=dict(
_delete_=True,
type='VisionTransformer',
... | 1,914 | 26.357143 | 75 | py |
mmsegmentation | mmsegmentation-master/configs/imagenets/fcn_sere-small_finetuned_fp16_8x32_224x224_3600_imagenets919.py | _base_ = [
'../_base_/models/fcn_r50-d8.py', '../_base_/datasets/imagenets.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py'
]
model = dict(
pretrained='./pretrain/sere_finetuned_vit_small_ep100_mmcls.pth',
backbone=dict(
_delete_=True,
type='VisionTransformer',... | 1,922 | 26.471429 | 75 | py |
mmsegmentation | mmsegmentation-master/configs/imagenets/fcn_sere-small_pretrained_fp16_8x32_224x224_3600_imagenets919.py | _base_ = [
'../_base_/models/fcn_r50-d8.py', '../_base_/datasets/imagenets.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_20k.py'
]
model = dict(
pretrained='./pretrain/sere_pretrained_vit_small_ep100_mmcls.pth',
backbone=dict(
_delete_=True,
type='VisionTransformer'... | 1,923 | 26.485714 | 75 | py |
mmsegmentation | mmsegmentation-master/configs/imagenets/imagenets.yml | Models:
- Name: fcn_mae-base_pretrained_fp16_8x32_224x224_3600_imagenets919
In Collection: FCN
Metadata:
backbone: ViT-B/16
crop size: (224,224)
lr schd: 3600
inference time (ms/im):
- value: 17.18
hardware: A100
backend: PyTorch
batch size: 32
mode: FP16
resolution... | 2,714 | 37.785714 | 222 | yml |
mmsegmentation | mmsegmentation-master/configs/isanet/README.md | # ISANet
[Interlaced Sparse Self-Attention for Semantic Segmentation](https://arxiv.org/abs/1907.12273)
## Introduction
<!-- [ALGORITHM] -->
<a href="https://github.com/openseg-group/openseg.pytorch">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.18.0/mmseg/models/decode_heads/isa_... | 14,667 | 180.08642 | 1,059 | md |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet.yml | Collections:
- Name: ISANet
Metadata:
Training Data:
- Cityscapes
- ADE20K
- Pascal VOC 2012 + Aug
Paper:
URL: https://arxiv.org/abs/1907.12273
Title: Interlaced Sparse Self-Attention for Semantic Segmentation
README: configs/isanet/README.md
Code:
URL: https://github.com/open-mmlab/... | 11,656 | 30.505405 | 175 | yml |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_512x1024_40k_cityscapes.py | _base_ = './isanet_r50-d8_512x1024_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 134 | 44 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_512x1024_80k_cityscapes.py | _base_ = './isanet_r50-d8_512x1024_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 134 | 44 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_512x512_160k_ade20k.py | _base_ = './isanet_r50-d8_512x512_160k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 130 | 42.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_512x512_20k_voc12aug.py | _base_ = './isanet_r50-d8_512x512_20k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 131 | 43 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_512x512_40k_voc12aug.py | _base_ = './isanet_r50-d8_512x512_40k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 131 | 43 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_512x512_80k_ade20k.py | _base_ = './isanet_r50-d8_512x512_80k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 129 | 42.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_769x769_40k_cityscapes.py | _base_ = './isanet_r50-d8_769x769_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 133 | 43.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r101-d8_769x769_80k_cityscapes.py | _base_ = './isanet_r50-d8_769x769_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 133 | 43.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_512x1024_40k_cityscapes.py | _base_ = [
'../_base_/models/isanet_r50-d8.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
| 164 | 32 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_512x1024_80k_cityscapes.py | _base_ = [
'../_base_/models/isanet_r50-d8.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
| 164 | 32 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/isanet_r50-d8.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 252 | 35.142857 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_512x512_20k_voc12aug.py | _base_ = [
'../_base_/models/isanet_r50-d8.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_20k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 263 | 32 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_512x512_40k_voc12aug.py | _base_ = [
'../_base_/models/isanet_r50-d8.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 263 | 32 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/isanet_r50-d8.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 251 | 35 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_769x769_40k_cityscapes.py | _base_ = [
'../_base_/models/isanet_r50-d8.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size=(... | 351 | 34.2 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/isanet/isanet_r50-d8_769x769_80k_cityscapes.py | _base_ = [
'../_base_/models/isanet_r50-d8.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size=(... | 351 | 34.2 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/knet/README.md | # K-Net
[K-Net: Towards Unified Image Segmentation](https://arxiv.org/abs/2106.14855)
## Introduction
<!-- [ALGORITHM] -->
<a href="https://github.com/ZwwWayne/K-Net/">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.23.0/mmseg/models/decode_heads/knet_head.py#L392">Code Snippet</a>
... | 7,987 | 155.627451 | 1,267 | md |
mmsegmentation | mmsegmentation-master/configs/knet/knet.yml | Collections:
- Name: KNet
Metadata:
Training Data:
- ADE20K
Paper:
URL: https://arxiv.org/abs/2106.14855
Title: 'K-Net: Towards Unified Image Segmentation'
README: configs/knet/README.md
Code:
URL: https://github.com/open-mmlab/mmsegmentation/blob/v0.23.0/mmseg/models/decode_heads/knet_head.... | 5,628 | 32.111765 | 203 | yml |
mmsegmentation | mmsegmentation-master/configs/knet/knet_s3_deeplabv3_r50-d8_8x2_512x512_adamw_80k_ade20k.py | _base_ = [
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
num_stages = 3
conv_kernel_size = 1
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
bac... | 2,984 | 30.755319 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/knet/knet_s3_fcn_r50-d8_8x2_512x512_adamw_80k_ade20k.py | _base_ = [
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
num_stages = 3
conv_kernel_size = 1
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
bac... | 2,999 | 30.914894 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/knet/knet_s3_pspnet_r50-d8_8x2_512x512_adamw_80k_ade20k.py | _base_ = [
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
num_stages = 3
conv_kernel_size = 1
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
bac... | 2,981 | 31.064516 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/knet/knet_s3_upernet_r50-d8_8x2_512x512_adamw_80k_ade20k.py | _base_ = [
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
num_stages = 3
conv_kernel_size = 1
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
ba... | 3,012 | 31.053191 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/knet/knet_s3_upernet_swin-l_8x2_512x512_adamw_80k_ade20k.py | _base_ = 'knet_s3_upernet_swin-t_8x2_512x512_adamw_80k_ade20k.py'
checkpoint_file = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/swin/swin_large_patch4_window7_224_22k_20220308-d5bdebaf.pth' # noqa
# model settings
model = dict(
pretrained=checkpoint_file,
backbone=dict(
embed_dims=192... | 747 | 36.4 | 148 | py |
mmsegmentation | mmsegmentation-master/configs/knet/knet_s3_upernet_swin-l_8x2_640x640_adamw_80k_ade20k.py | _base_ = 'knet_s3_upernet_swin-t_8x2_512x512_adamw_80k_ade20k.py'
checkpoint_file = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/swin/swin_large_patch4_window7_224_22k_20220308-d5bdebaf.pth' # noqa
# model settings
model = dict(
pretrained=checkpoint_file,
backbone=dict(
embed_dims=192... | 2,028 | 35.232143 | 148 | py |
mmsegmentation | mmsegmentation-master/configs/knet/knet_s3_upernet_swin-t_8x2_512x512_adamw_80k_ade20k.py | _base_ = 'knet_s3_upernet_r50-d8_8x2_512x512_adamw_80k_ade20k.py'
checkpoint_file = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/swin/swin_tiny_patch4_window7_224_20220308-f41b89d3.pth' # noqa
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
num_stages = 3
conv_kernel_size = 1
mod... | 1,737 | 28.965517 | 143 | py |
mmsegmentation | mmsegmentation-master/configs/mae/README.md | # MAE
[Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)
## Introduction
<!-- [BACKBONE] -->
<a href="https://github.com/facebookresearch/mae">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.24.0/mmseg/models/backbones/mae.py#L46">Code Snippet</a>
... | 5,795 | 68.831325 | 1,114 | md |
mmsegmentation | mmsegmentation-master/configs/mae/mae.yml | Models:
- Name: upernet_mae-base_fp16_8x2_512x512_160k_ade20k
In Collection: UPerNet
Metadata:
backbone: ViT-B
crop size: (512,512)
lr schd: 160000
inference time (ms/im):
- value: 140.06
hardware: V100
backend: PyTorch
batch size: 1
mode: FP16
resolution: (512,512)... | 730 | 29.458333 | 186 | yml |
mmsegmentation | mmsegmentation-master/configs/mae/upernet_mae-base_fp16_512x512_160k_ade20k_ms.py | _base_ = './upernet_mae-base_fp16_8x2_512x512_160k_ade20k.py'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(2048, 512),
img_ratios=[0.5, 0.7... | 767 | 29.72 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/mae/upernet_mae-base_fp16_8x2_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_mae.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
pretrained='./pretrain/mae_pretrain_vit_base_mmcls.pth',
backbone=dict(
type='MAE',
img_size=(512, 512),
patch_siz... | 1,342 | 26.408163 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/README.md | # MobileNetV2
[MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)
## Introduction
<!-- [BACKBONE] -->
<a href="https://github.com/tensorflow/models/tree/master/research/deeplab">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/... | 9,873 | 172.22807 | 995 | md |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x1024_80k_cityscapes.py | _base_ = '../deeplabv3/deeplabv3_r101-d8_512x1024_80k_cityscapes.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_indices=(1,... | 470 | 32.642857 | 68 | py |
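Rows like the one above use `_delete_=True` to swap the backbone type entirely (ResNet to MobileNetV2): instead of merging into the base backbone dict, the child dict replaces it wholesale. The sketch below models that semantics; it is an illustration under the same simplified-merge assumption as before, not mmcv's real code, and the `base` dict is an abbreviated stand-in.

```python
# Illustrative sketch of `_delete_=True`: when present, the override dict
# replaces the base dict instead of merging into it, so stale base keys
# (e.g. ResNet's `depth`) do not leak into the new backbone.
def merge_cfg(base, override):
    if override.get('_delete_'):
        # Discard the base entirely; keep only the override's own keys.
        return {k: v for k, v in override.items() if k != '_delete_'}
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)
        else:
            merged[key] = value
    return merged

# Abbreviated stand-in for a ResNet-based base config.
base = {'backbone': {'type': 'ResNetV1c', 'depth': 101}}
override = {'backbone': {'_delete_': True,
                         'type': 'MobileNetV2',
                         'widen_factor': 1.0}}
cfg = merge_cfg(base, override)
```

Without `_delete_`, the leftover `depth=101` from the base would be merged into a backbone class that does not accept it.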
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/deeplabv3_m-v2-d8_512x512_160k_ade20k.py | _base_ = '../deeplabv3/deeplabv3_r101-d8_512x512_160k_ade20k.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_indices=(1, 2, ... | 466 | 32.357143 | 64 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x1024_80k_cityscapes.py | _base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x1024_80k_cityscapes.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_ind... | 497 | 34.571429 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/deeplabv3plus_m-v2-d8_512x512_160k_ade20k.py | _base_ = '../deeplabv3plus/deeplabv3plus_r101-d8_512x512_160k_ade20k.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_indices... | 493 | 34.285714 | 72 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/fcn_m-v2-d8_512x1024_80k_cityscapes.py | _base_ = '../fcn/fcn_r101-d8_512x1024_80k_cityscapes.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_indices=(1, 2, 4, 6),
... | 458 | 31.785714 | 58 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/fcn_m-v2-d8_512x512_160k_ade20k.py | _base_ = '../fcn/fcn_r101-d8_512x512_160k_ade20k.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_indices=(1, 2, 4, 6),
... | 454 | 31.5 | 58 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/mobilenet_v2.yml | Models:
- Name: fcn_m-v2-d8_512x1024_80k_cityscapes
In Collection: FCN
Metadata:
backbone: M-V2-D8
crop size: (512,1024)
lr schd: 80000
inference time (ms/im):
- value: 70.42
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,1024)
Trainin... | 5,534 | 31.368421 | 195 | yml |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/pspnet_m-v2-d8_512x1024_80k_cityscapes.py | _base_ = '../pspnet/pspnet_r101-d8_512x1024_80k_cityscapes.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_indices=(1, 2, 4,... | 464 | 32.214286 | 62 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v2/pspnet_m-v2-d8_512x512_160k_ade20k.py | _base_ = '../pspnet/pspnet_r101-d8_512x512_160k_ade20k.py'
model = dict(
pretrained='mmcls://mobilenet_v2',
backbone=dict(
_delete_=True,
type='MobileNetV2',
widen_factor=1.,
strides=(1, 2, 2, 1, 1, 1, 1),
dilations=(1, 1, 1, 2, 2, 4, 4),
out_indices=(1, 2, 4, 6),... | 460 | 31.928571 | 58 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v3/README.md | # MobileNetV3
[Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
## Introduction
<!-- [BACKBONE] -->
<!-- [ALGORITHM] -->
<a href="https://github.com/tensorflow/models/tree/master/research/deeplab">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/backbon... | 6,489 | 126.254902 | 1,517 | md |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v3/lraspp_m-v3-d8_512x1024_320k_cityscapes.py | _base_ = [
'../_base_/models/lraspp_m-v3-d8.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(pretrained='open-mmlab://contrib/mobilenet_v3_large')
# Re-config the data sampler.
data = dict(samples_per_gpu=4, workers_per_gpu=4)
runn... | 372 | 30.083333 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v3/lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes.py | _base_ = [
'../_base_/models/lraspp_m-v3-d8.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
# Re-config the data sampler.
data = dict(samples_per_gpu=4, workers_per_gpu=4)
runner = dict(type='IterBasedRunner', max_iters=320000)
| 304 | 29.5 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v3/lraspp_m-v3s-d8_512x1024_320k_cityscapes.py | _base_ = './lraspp_m-v3-d8_512x1024_320k_cityscapes.py'
norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://contrib/mobilenet_v3_small',
backbone=dict(
type='MobileNetV3',
arch='small',
out_indices=(0, 1, 12),
... | 766 | 30.958333 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v3/lraspp_m-v3s-d8_scratch_512x1024_320k_cityscapes.py | _base_ = './lraspp_m-v3-d8_scratch_512x1024_320k_cityscapes.py'
norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True)
model = dict(
type='EncoderDecoder',
backbone=dict(
type='MobileNetV3',
arch='small',
out_indices=(0, 1, 12),
norm_cfg=norm_cfg),
decode_head=dict(
... | 716 | 30.173913 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/mobilenet_v3/mobilenet_v3.yml | Collections:
- Name: LRASPP
Metadata:
Training Data:
- Cityscapes
Paper:
URL: https://arxiv.org/abs/1905.02244
Title: Searching for MobileNetV3
README: configs/mobilenet_v3/README.md
Code:
URL: https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/backbones/mobilenet_v3.py#L... | 3,425 | 31.942308 | 201 | yml |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/README.md | # NonLocal Net
[Non-local Neural Networks](https://arxiv.org/abs/1711.07971)
## Introduction
<!-- [ALGORITHM] -->
<a href="https://github.com/facebookresearch/video-nonlocal-net">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/decode_heads/nl_head.py#L10">Code Snip... | 14,868 | 214.492754 | 912 | md |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_net.yml | Collections:
- Name: NonLocalNet
Metadata:
Training Data:
- Cityscapes
- ADE20K
- Pascal VOC 2012 + Aug
Paper:
URL: https://arxiv.org/abs/1711.07971
Title: Non-local Neural Networks
README: configs/nonlocal_net/README.md
Code:
URL: https://github.com/open-mmlab/mmsegmentation/blob/v0... | 10,411 | 33.476821 | 185 | yml |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_512x1024_40k_cityscapes.py | _base_ = './nonlocal_r50-d8_512x1024_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 136 | 44.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_512x1024_80k_cityscapes.py | _base_ = './nonlocal_r50-d8_512x1024_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 136 | 44.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_512x512_160k_ade20k.py | _base_ = './nonlocal_r50-d8_512x512_160k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 132 | 43.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_512x512_20k_voc12aug.py | _base_ = './nonlocal_r50-d8_512x512_20k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 133 | 43.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_512x512_40k_voc12aug.py | _base_ = './nonlocal_r50-d8_512x512_40k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 133 | 43.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_512x512_80k_ade20k.py | _base_ = './nonlocal_r50-d8_512x512_80k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 131 | 43 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_769x769_40k_cityscapes.py | _base_ = './nonlocal_r50-d8_769x769_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 135 | 44.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r101-d8_769x769_80k_cityscapes.py | _base_ = './nonlocal_r50-d8_769x769_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 135 | 44.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_512x1024_40k_cityscapes.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
| 166 | 32.4 | 78 | py |
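When `_base_` is a list, as in the r50 config above, the four fragments (model, dataset, runtime, schedule) are merged left to right, with later fragments overriding earlier ones on conflicting keys. A sketch of that list-order composition, using placeholder fragment contents that only illustrate the mechanism (the real fragments define many more keys):

```python
import functools


def merge(base, override):
    # Recursive dict merge; later fragment wins on conflicts.
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out


# Stand-ins for the four _base_ fragments, merged in list order. Since they
# configure disjoint parts of the config, order rarely matters in practice.
fragments = [
    dict(model=dict(type='EncoderDecoder')),        # models/nonlocal_r50-d8.py
    dict(data=dict(samples_per_gpu=2)),             # datasets/cityscapes.py
    dict(log_config=dict(interval=50)),             # default_runtime.py
    dict(runner=dict(type='IterBasedRunner',        # schedules/schedule_40k.py
                     max_iters=40000)),
]
cfg = functools.reduce(merge, fragments, {})
```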
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_512x1024_80k_cityscapes.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
| 166 | 32.4 | 78 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 254 | 35.428571 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_512x512_20k_voc12aug.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_20k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 265 | 32.25 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_512x512_40k_voc12aug.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 265 | 32.25 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 253 | 35.285714 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_769x769_40k_cityscapes.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size... | 353 | 34.4 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/nonlocal_net/nonlocal_r50-d8_769x769_80k_cityscapes.py | _base_ = [
'../_base_/models/nonlocal_r50-d8.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size... | 353 | 34.4 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/ocrnet/README.md | # OCRNet
[Object-Contextual Representations for Semantic Segmentation](https://arxiv.org/abs/1909.11065)
## Introduction
<!-- [ALGORITHM] -->
<a href="https://github.com/openseg-group/OCNet.pytorch">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/decode_heads/ocr_h... | 20,386 | 225.522222 | 1,164 | md |
mmsegmentation | mmsegmentation-master/configs/ocrnet/ocrnet.yml | Collections:
- Name: OCRNet
Metadata:
Training Data:
- Cityscapes
- ADE20K
- Pascal VOC 2012 + Aug
Paper:
URL: https://arxiv.org/abs/1909.11065
Title: Object-Contextual Representations for Semantic Segmentation
README: configs/ocrnet/README.md
Code:
URL: https://github.com/open-mmlab... | 14,727 | 32.548975 | 183 | yml |