| repo | file | code | file_length | avg_line_length | max_line_length | extension_type |
|---|---|---|---|---|---|---|
mmsegmentation | mmsegmentation-master/configs/twins/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k.py | _base_ = ['./twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_base_20220308-1b7eb711.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[96, 192, 38... | 444 | 33.230769 | 123 | py |
mmsegmentation | mmsegmentation-master/configs/twins/twins_svt-b_uperhead_8x2_512x512_160k_ade20k.py | _base_ = ['./twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_base_20220308-1b7eb711.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[96, 192, 384,... | 489 | 36.692308 | 123 | py |
mmsegmentation | mmsegmentation-master/configs/twins/twins_svt-l_fpn_fpnhead_8x4_512x512_80k_ade20k.py | _base_ = ['./twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_large_20220308-fb5936f3.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[128, 256, ... | 477 | 33.142857 | 124 | py |
mmsegmentation | mmsegmentation-master/configs/twins/twins_svt-l_uperhead_8x2_512x512_160k_ade20k.py | _base_ = ['./twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py']
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_large_20220308-fb5936f3.pth' # noqa
model = dict(
backbone=dict(
init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
embed_dims=[128, 256, 51... | 522 | 36.357143 | 124 | py |
mmsegmentation | mmsegmentation-master/configs/twins/twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/twins_pcpvt-s_fpn.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_small_20220308-7e1c3695.pth' # noqa
model = dict(
backbone=di... | 811 | 34.304348 | 124 | py |
mmsegmentation | mmsegmentation-master/configs/twins/twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/twins_pcpvt-s_upernet.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_160k.py'
]
checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/alt_gvt_small_20220308-7e1c3695.pth' # noqa
model = dict(
ba... | 1,187 | 26 | 124 | py |
mmsegmentation | mmsegmentation-master/configs/unet/README.md | # UNet
[U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
## Introduction
<!-- [ALGORITHM] -->
<a href="http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/bac... | 26,493 | 283.88172 | 1,109 | md |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py | _base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py',
'../_base_/datasets/chase_db1.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')
| 276 | 33.625 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_128x128_40k_stare.py | _base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py', '../_base_/datasets/stare.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')
| 268 | 37.428571 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_256x256_40k_hrf.py | _base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py', '../_base_/datasets/hrf.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170)))
evaluation = dict(metric='mDice')
| 268 | 37.428571 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_64x64_40k_drive.py | _base_ = [
'../_base_/models/deeplabv3_unet_s5-d16.py', '../_base_/datasets/drive.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(64, 64), stride=(42, 42)))
evaluation = dict(metric='mDice')
| 266 | 37.142857 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py | _base_ = './deeplabv3_unet_s5-d16_128x128_40k_chase_db1.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 264 | 36.857143 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py | _base_ = './deeplabv3_unet_s5-d16_128x128_40k_stare.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 260 | 36.285714 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py | _base_ = './deeplabv3_unet_s5-d16_256x256_40k_hrf.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 258 | 36 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/deeplabv3_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py | _base_ = './deeplabv3_unet_s5-d16_64x64_40k_drive.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 258 | 36 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_128x128_40k_chase_db1.py | _base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/chase_db1.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')
| 266 | 37.142857 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_128x128_40k_stare.py | _base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/stare.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')
| 262 | 36.571429 | 73 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_256x256_40k_hrf.py | _base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/hrf.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170)))
evaluation = dict(metric='mDice')
| 262 | 36.571429 | 73 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_4x4_512x1024_160k_cityscapes.py | _base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
decode_head=dict(num_classes=19),
auxiliary_head=dict(num_classes=19),
# model training and testing settings
train_cfg=dic... | 420 | 23.764706 | 78 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_64x64_40k_drive.py | _base_ = [
'../_base_/models/fcn_unet_s5-d16.py', '../_base_/datasets/drive.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(64, 64), stride=(42, 42)))
evaluation = dict(metric='mDice')
| 260 | 36.285714 | 73 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py | _base_ = './fcn_unet_s5-d16_128x128_40k_chase_db1.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 258 | 36 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py | _base_ = './fcn_unet_s5-d16_128x128_40k_stare.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 254 | 35.428571 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py | _base_ = './fcn_unet_s5-d16_256x256_40k_hrf.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 252 | 35.142857 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/fcn_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py | _base_ = './fcn_unet_s5-d16_64x64_40k_drive.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 252 | 35.142857 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_128x128_40k_chase_db1.py | _base_ = [
'../_base_/models/pspnet_unet_s5-d16.py',
'../_base_/datasets/chase_db1.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')
| 273 | 33.25 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_128x128_40k_stare.py | _base_ = [
'../_base_/models/pspnet_unet_s5-d16.py', '../_base_/datasets/stare.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(128, 128), stride=(85, 85)))
evaluation = dict(metric='mDice')
| 265 | 37 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_256x256_40k_hrf.py | _base_ = [
'../_base_/models/pspnet_unet_s5-d16.py', '../_base_/datasets/hrf.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(256, 256), stride=(170, 170)))
evaluation = dict(metric='mDice')
| 265 | 37 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_64x64_40k_drive.py | _base_ = [
'../_base_/models/pspnet_unet_s5-d16.py', '../_base_/datasets/drive.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
model = dict(test_cfg=dict(crop_size=(64, 64), stride=(42, 42)))
evaluation = dict(metric='mDice')
| 263 | 36.714286 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_chase-db1.py | _base_ = './pspnet_unet_s5-d16_128x128_40k_chase_db1.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 261 | 36.428571 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_128x128_40k_stare.py | _base_ = './pspnet_unet_s5-d16_128x128_40k_stare.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 257 | 35.857143 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_256x256_40k_hrf.py | _base_ = './pspnet_unet_s5-d16_256x256_40k_hrf.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 255 | 35.571429 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/pspnet_unet_s5-d16_ce-1.0-dice-3.0_64x64_40k_drive.py | _base_ = './pspnet_unet_s5-d16_64x64_40k_drive.py'
model = dict(
decode_head=dict(loss_decode=[
dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=1.0),
dict(type='DiceLoss', loss_name='loss_dice', loss_weight=3.0)
]))
| 255 | 35.571429 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/unet/unet.yml | Collections:
- Name: UNet
Metadata:
Training Data:
- Cityscapes
- DRIVE
- STARE
- CHASE_DB1
- HRF
Paper:
URL: https://arxiv.org/abs/1505.04597
Title: 'U-Net: Convolutional Networks for Biomedical Image Segmentation'
README: configs/unet/README.md
Code:
URL: https://github.com... | 14,174 | 36.5 | 215 | yml |
mmsegmentation | mmsegmentation-master/configs/upernet/README.md | # UPerNet
[Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/pdf/1807.10221.pdf)
## Introduction
<!-- [ALGORITHM] -->
<a href="https://github.com/CSAILVision/unifiedparsing">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.17.0/mmseg/models/decode_heads/uper_head.... | 17,297 | 229.64 | 851 | md |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet.yml | Collections:
- Name: UPerNet
Metadata:
Training Data:
- Cityscapes
- ADE20K
- Pascal VOC 2012 + Aug
Paper:
URL: https://arxiv.org/pdf/1807.10221.pdf
Title: Unified Perceptual Parsing for Scene Understanding
README: configs/upernet/README.md
Code:
URL: https://github.com/open-mmlab/mm... | 13,543 | 31.714976 | 172 | yml |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x1024_40k_cityscapes.py | _base_ = './upernet_r50_512x1024_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 132 | 43.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x1024_80k_cityscapes.py | _base_ = './upernet_r50_512x1024_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 132 | 43.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_160k_ade20k.py | _base_ = './upernet_r50_512x512_160k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 128 | 42 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_20k_voc12aug.py | _base_ = './upernet_r50_512x512_20k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 129 | 42.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_40k_voc12aug.py | _base_ = './upernet_r50_512x512_40k_voc12aug.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 129 | 42.333333 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_512x512_80k_ade20k.py | _base_ = './upernet_r50_512x512_80k_ade20k.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 127 | 41.666667 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_769x769_40k_cityscapes.py | _base_ = './upernet_r50_769x769_40k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 131 | 43 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r101_769x769_80k_cityscapes.py | _base_ = './upernet_r50_769x769_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| 131 | 43 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x1024_40k_cityscapes.py | _base_ = './upernet_r50_512x1024_40k_cityscapes.py'
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512]),
auxiliary_head=dict(in_channels=256))
| 236 | 32.857143 | 54 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x1024_80k_cityscapes.py | _base_ = './upernet_r50_512x1024_80k_cityscapes.py'
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512]),
auxiliary_head=dict(in_channels=256))
| 236 | 32.857143 | 54 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_classes=150),
... | 377 | 36.8 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_20k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_20k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_cla... | 388 | 34.363636 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_40k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_cla... | 388 | 34.363636 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r18_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
pretrained='open-mmlab://resnet18_v1c',
backbone=dict(depth=18),
decode_head=dict(in_channels=[64, 128, 256, 512], num_classes=150),
... | 376 | 36.7 | 73 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x1024_40k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
]
| 162 | 31.6 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x1024_80k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/cityscapes.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
| 162 | 31.6 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 250 | 34.857143 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_20k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_20k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 261 | 31.75 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_40k_voc12aug.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/pascal_voc12_aug.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(num_classes=21), auxiliary_head=dict(num_classes=21))
| 261 | 31.75 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/upernet_r50.py', '../_base_/datasets/ade20k.py',
'../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
| 249 | 34.714286 | 76 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_769x769_40k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_40k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size=(76... | 349 | 34 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/upernet/upernet_r50_769x769_80k_cityscapes.py | _base_ = [
'../_base_/models/upernet_r50.py',
'../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
model = dict(
decode_head=dict(align_corners=True),
auxiliary_head=dict(align_corners=True),
test_cfg=dict(mode='slide', crop_size=(76... | 349 | 34 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/vit/README.md | # Vision Transformer
[An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/pdf/2010.11929.pdf)
## Introduction
<!-- [BACKBONE] -->
<a href="https://github.com/google-research/vision_transformer">Official Repo</a>
<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0... | 9,860 | 137.887324 | 854 | md |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1),
neck=None)
| 187 | 25.857143 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_512x512_80k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_80k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1),
neck=None)
| 186 | 25.714286 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_ln_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1, final_norm=True))
| 189 | 30.666667 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-b16_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_base_patch16_224-b5f2ef4d.pth',
backbone=dict(drop_path_rate=0.1),
)
| 174 | 24 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(num_heads=6, embed_dims=384, drop_path_rate=0.1),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=None,
auxiliary_head=dict(num_cl... | 349 | 37.888889 | 72 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_512x512_80k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_80k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(num_heads=6, embed_dims=384, drop_path_rate=0.1),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=None,
auxiliary_head=dict(num_cla... | 348 | 37.777778 | 72 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_ln_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(
num_heads=6, embed_dims=384, drop_path_rate=0.1, final_norm=True),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=dict(in_ch... | 427 | 41.8 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_deit-s16_mln_512x512_160k_ade20k.py | _base_ = './upernet_vit-b16_mln_512x512_160k_ade20k.py'
model = dict(
pretrained='pretrain/deit_small_patch16_224-cd65a155.pth',
backbone=dict(num_heads=6, embed_dims=384, drop_path_rate=0.1),
decode_head=dict(num_classes=150, in_channels=[384, 384, 384, 384]),
neck=dict(in_channels=[384, 384, 384, 384... | 401 | 43.666667 | 72 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_vit-b16_ln_mln.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_160k.py'
]
model = dict(
pretrained='pretrain/vit_base_patch16_224.pth',
backbone=dict(drop_path_rate=0.1, final_norm=True),
decode_head=dict(nu... | 1,044 | 25.125 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_vit-b16_mln_512x512_160k_ade20k.py | _base_ = [
'../_base_/models/upernet_vit-b16_ln_mln.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_160k.py'
]
model = dict(
pretrained='pretrain/vit_base_patch16_224.pth',
decode_head=dict(num_classes=150),
auxiliary_head=dict(num_classes=150)... | 988 | 24.358974 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/vit/upernet_vit-b16_mln_512x512_80k_ade20k.py | _base_ = [
'../_base_/models/upernet_vit-b16_ln_mln.py',
'../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
'../_base_/schedules/schedule_80k.py'
]
model = dict(
pretrained='pretrain/vit_base_patch16_224.pth',
decode_head=dict(num_classes=150),
auxiliary_head=dict(num_classes=150))... | 987 | 24.333333 | 70 | py |
mmsegmentation | mmsegmentation-master/configs/vit/vit.yml | Models:
- Name: upernet_vit-b16_mln_512x512_80k_ade20k
In Collection: UPerNet
Metadata:
backbone: ViT-B + MLN
crop size: (512,512)
lr schd: 80000
inference time (ms/im):
- value: 144.09
hardware: V100
backend: PyTorch
batch size: 1
mode: FP32
resolution: (512,512)
... | 7,742 | 30.733607 | 182 | yml |
mmsegmentation | mmsegmentation-master/demo/image_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
from argparse import ArgumentParser
from mmcv.cnn.utils.sync_bn import revert_sync_batchnorm
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot
from mmseg.core.evaluation import get_palette
def main():
parser = ArgumentParser()
pars... | 1,499 | 30.914894 | 79 | py |
mmsegmentation | mmsegmentation-master/demo/video_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
from argparse import ArgumentParser
import cv2
from mmseg.apis import inference_segmentor, init_segmentor
from mmseg.core.evaluation import get_palette
def main():
parser = ArgumentParser()
parser.add_argument('video', help='Video file or webcam id')
parse... | 3,787 | 32.522124 | 79 | py |
mmsegmentation | mmsegmentation-master/docker/serve/entrypoint.sh | #!/bin/bash
set -e
if [[ "$1" = "serve" ]]; then
shift 1
torchserve --start --ts-config /home/model-server/config.properties
else
eval "$@"
fi
# prevent docker exit
tail -f /dev/null
| 197 | 14.230769 | 71 | sh |
mmsegmentation | mmsegmentation-master/docs/en/changelog.md | ## Changelog
### V0.30.0 (01/09/2023)
**New Features**
- Support Delving into High-Quality Synthetic Face Occlusion Segmentation Datasets ([#2194](https://github.com/open-mmlab/mmsegmentation/pull/2194))
**Bug Fixes**
- Fix incorrect `test_cfg` setting in UNet base configs ([#2347](https://github.com/open-mmlab/mm... | 51,944 | 55.832604 | 597 | md |
mmsegmentation | mmsegmentation-master/docs/en/conf.py | # Copyright (c) OpenMMLab. All rights reserved.
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup -----------------------... | 4,020 | 28.785185 | 79 | py |
mmsegmentation | mmsegmentation-master/docs/en/dataset_prepare.md | <!-- #region -->
## Prepare datasets
It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
```none
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│ ├── cityscapes
│ │ ├── leftImg8b... | 21,617 | 33.314286 | 698 | md |
mmsegmentation | mmsegmentation-master/docs/en/faq.md | # Frequently Asked Questions (FAQ)
We list some common troubles faced by many users and their corresponding solutions here. Feel free to enrich the list if you find any frequent issues and have ways to help others to solve them. If the contents here do not cover your issue, please create an issue using the [provided t... | 8,826 | 62.05 | 459 | md |
mmsegmentation | mmsegmentation-master/docs/en/get_started.md | # Prerequisites
In this section we demonstrate how to prepare an environment with PyTorch.
MMSegmentation works on Linux, Windows and macOS. It requires Python 3.6+, CUDA 9.2+ and PyTorch 1.3+.
```{note}
If you are experienced with PyTorch and have already installed it, just skip this part and jump to the [next sect... | 8,433 | 40.343137 | 663 | md |
mmsegmentation | mmsegmentation-master/docs/en/inference.md | ## Inference with pretrained models
We provide testing scripts to evaluate a whole dataset (Cityscapes, PASCAL VOC, ADE20k, etc.),
and also some high-level apis for easier integration to other projects.
### Test a dataset
- single GPU
- CPU
- single node multiple GPU
- multiple node
You can use the following comman... | 6,384 | 47.371212 | 459 | md |
mmsegmentation | mmsegmentation-master/docs/en/model_zoo.md | # Benchmark and Model Zoo
## Common settings
- We use distributed training with 4 GPUs by default.
- All pytorch-style pretrained backbones on ImageNet are train by ourselves, with the same procedure in the [paper](https://arxiv.org/pdf/1812.01187.pdf).
Our ResNet style backbone are based on ResNetV1c variant, whe... | 6,935 | 36.090909 | 200 | md |
mmsegmentation | mmsegmentation-master/docs/en/stat.py | #!/usr/bin/env python
# Copyright (c) OpenMMLab. All rights reserved.
import functools as func
import glob
import os.path as osp
import re
import numpy as np
url_prefix = 'https://github.com/open-mmlab/mmsegmentation/blob/master/'
files = sorted(glob.glob('../../configs/*/README.md'))
stats = []
titles = []
num_ckp... | 1,642 | 23.893939 | 74 | py |
mmsegmentation | mmsegmentation-master/docs/en/switch_language.md | ## <a href='https://mmsegmentation.readthedocs.io/en/latest/'>English</a>
## <a href='https://mmsegmentation.readthedocs.io/zh_CN/latest/'>简体中文</a>
| 149 | 36.5 | 73 | md |
mmsegmentation | mmsegmentation-master/docs/en/train.md | ## Train a model
MMSegmentation implements distributed training and non-distributed training,
which uses `MMDistributedDataParallel` and `MMDataParallel` respectively.
All outputs (log files and checkpoints) will be saved to the working directory,
which is specified by `work_dir` in the config file.
By default we ev... | 6,671 | 38.247059 | 323 | md |
mmsegmentation | mmsegmentation-master/docs/en/useful_tools.md | ## Useful tools
Apart from training/testing scripts, We provide lots of useful tools under the
`tools/` directory.
### Get the FLOPs and params (experimental)
We provide a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) to compute the FLOPs and params of a given model.
... | 16,685 | 35.592105 | 423 | md |
mmsegmentation | mmsegmentation-master/docs/en/_static/css/readthedocs.css | .header-logo {
background-image: url("../images/mmsegmentation.png");
background-size: 201px 40px;
height: 40px;
width: 201px;
}
| 145 | 19.857143 | 58 | css |
mmsegmentation | mmsegmentation-master/docs/en/tutorials/config.md | # Tutorial 1: Learn about Configs
We incorporate modular and inheritance design into our config system, which is convenient to conduct various experiments.
If you wish to inspect the config file, you may run `python tools/print_config.py /PATH/TO/CONFIG` to see the complete config.
You may also pass `--cfg-options xxx...
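The inheritance and override behavior boils down to a recursive dict merge in which the child config's values win. Here is a standalone sketch of that idea — not MMCV's actual `Config` implementation, and the field names below are illustrative:

```python
def merge_cfg(base, child):
    """Recursively merge `child` into `base`; child values win."""
    out = dict(base)
    for k, v in child.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge_cfg(out[k], v)
        else:
            out[k] = v
    return out

base = dict(optimizer=dict(type='SGD', lr=0.01), total_iters=40000)
child = dict(optimizer=dict(lr=0.001))  # like overriding a single leaf value
cfg = merge_cfg(base, child)
print(cfg['optimizer'])  # {'type': 'SGD', 'lr': 0.001}
```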

File: mmsegmentation-master/docs/en/tutorials/customize_datasets.md

# Tutorial 2: Customize Datasets
## Data configuration
`data` in the config file is the variable for data configuration, defining the arguments used in datasets and dataloaders.
Here is an example of data configuration:
```python
data = dict(
samples_per_gpu=4,
workers_per_gpu=4,
train=dict(
        ...
```

File: mmsegmentation-master/docs/en/tutorials/customize_models.md

# Tutorial 4: Customize Models
## Customize optimizer
Assume you want to add an optimizer named `MyOptimizer`, which has arguments `a`, `b`, and `c`.
You first need to implement the new optimizer in a file, e.g., in `mmseg/core/optimizer/my_optimizer.py`:
```python
from mmcv.runner import OPTIMIZERS
from torch.opt...
```

File: mmsegmentation-master/docs/en/tutorials/customize_runtime.md

# Tutorial 6: Customize Runtime Settings
## Customize optimization settings
### Customize optimizer supported by PyTorch
We already support using all the optimizers implemented by PyTorch, and the only modification needed is to change the `optimizer` field of config files.
For example, if you want to use `ADAM` (note that...
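For instance, switching to Adam is just a config change along these lines (the `lr`/`weight_decay` values below are illustrative, not recommended settings):

```python
# Replace the default optimizer with Adam via the config's `optimizer` field.
optimizer = dict(type='Adam', lr=0.0003, weight_decay=0.0001)
```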

File: mmsegmentation-master/docs/en/tutorials/data_pipeline.md

# Tutorial 3: Customize Data Pipelines
## Design of Data Pipelines
Following typical conventions, we use `Dataset` and `DataLoader` for data loading
with multiple workers. `Dataset` returns a dict of data items corresponding to
the arguments of the models' forward method.
Since the data in semantic segmentation may not be t...
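The pipeline idea — a list of transforms, each mapping a results dict to a results dict — can be sketched without any mmseg code. The transform names and dict keys below are illustrative, not mmseg's actual ones:

```python
class Compose:
    """Apply a list of dict-to-dict transforms in order (a sketch of the pipeline idea)."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, results):
        for t in self.transforms:
            results = t(results)
            if results is None:  # a transform may abort the sample
                return None
        return results

def load(results):
    results['img'] = [[0, 1], [2, 3]]  # stand-in for real image loading
    return results

def flip(results):
    results['img'] = [row[::-1] for row in results['img']]
    results['flip'] = True
    return results

pipeline = Compose([load, flip])
out = pipeline({'filename': 'demo.png'})
print(out['flip'], out['img'])  # True [[1, 0], [3, 2]]
```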

File: mmsegmentation-master/docs/en/tutorials/training_tricks.md

# Tutorial 5: Training Tricks
MMSegmentation supports the following training tricks out of the box.
## Different Learning Rate (LR) for Backbone and Heads
In semantic segmentation, some methods make the LR of the heads larger than that of the backbone to achieve better performance or faster convergence.
In MMSegmentation, you may add follow...
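A config fragment of the kind the truncated sentence refers to might look like this. The values are illustrative; `paramwise_cfg`/`custom_keys` follow MMCV's default optimizer-constructor conventions:

```python
# Give all parameters whose name contains 'head' a 10x learning rate.
optimizer = dict(
    type='SGD',
    lr=0.01,
    momentum=0.9,
    weight_decay=0.0005,
    paramwise_cfg=dict(custom_keys={'head': dict(lr_mult=10.0)}))
```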

File: mmsegmentation-master/docs/zh_cn/conf.py

# Copyright (c) OpenMMLab. All rights reserved.
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup -----------------------

File: mmsegmentation-master/docs/zh_cn/dataset_prepare.md

## Prepare datasets
It is recommended to symlink the dataset root to `$MMSEGMENTATION/data`. If your folder structure is different, you may need to change the corresponding paths in the config files.
```none
mmsegmentation
├── mmseg
├── tools
├── configs
├── data
│ ├── cityscapes
│ │ ├── leftImg8bit
│ │ │ ├── train
│ │ │ ├── val
│ │ ├── gtFine
│ │ │ ├── train
│ │ │ ├── val
│ ├── VOCdevkit
│   ...
```

File: mmsegmentation-master/docs/zh_cn/faq.md

# Frequently Asked Questions (FAQ)
We list some common issues encountered in use and their corresponding solutions here. Feel free to open a PR to enrich this list if you find any issues have been missed. If you cannot get help here, please create an issue using the [issue template](https://github.com/open-mmlab/mmsegmentation/blob/master/.github/ISSUE_TEMPLATE/error-report.md/), filling in all required information in the template, which helps us locate the problem more quickly.
## Installation
The compatible MMSegmentation and MMCV versions are listed below. Please install the correct version of MMCV to avoid installation problems.
| MMSegmentation version | ...

File: mmsegmentation-master/docs/zh_cn/get_started.md

# Prerequisites
In this section, we demonstrate how to prepare an environment with PyTorch.
MMSegmentation works on Linux, Windows and macOS. It requires Python 3.6+, CUDA 9.2+ and PyTorch 1.3+.
```{note}
If you are experienced with PyTorch and have already installed it, please skip to the next section. Otherwise, you can follow these steps for the preparation.
```
**Step 1.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html).
**Step 2.** Create and activate a conda environment.
```shell
conda create...
```

File: mmsegmentation-master/docs/zh_cn/inference.md

## Inference with pretrained models
We provide testing scripts to evaluate results on whole datasets (Cityscapes, PASCAL VOC, ADE20K, etc.), and also provide some high-level APIs to make integration with other projects easier.
### Test a dataset
- single GPU
- CPU
- single node, multiple GPUs
- multiple nodes
You can use the following commands to test a dataset.
```shell
# single-GPU testing
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]
# CPU: if the machine has no GPU, this is the same as the single-GPU testing above
# CPU: if the machine has GPUs,...
```

File: mmsegmentation-master/docs/zh_cn/model_zoo.md

# Benchmark and Model Zoo
## Common settings
- We use distributed training with 4 GPUs by default.
- All PyTorch-style pretrained backbones on ImageNet are trained by ourselves, in line with the [paper](https://arxiv.org/pdf/1812.01187.pdf).
  Our ResNet-style backbones are based on the ResNetV1c variant, where the 7x7 conv in the input stem is replaced with three 3x3 convs.
- For consistency across different hardware, we report the maximum value of `torch.cuda.max_memory_allocated()` as GPU memory usage, and set `torch.backends.cudnn.benchmark=False`.
  Note that this is usually less than `nvidia-...

File: mmsegmentation-master/docs/zh_cn/stat.py

#!/usr/bin/env python
# Copyright (c) OpenMMLab. All rights reserved.
import functools as func
import glob
import os.path as osp
import re
import numpy as np
url_prefix = 'https://github.com/open-mmlab/mmsegmentation/blob/master/'
files = sorted(glob.glob('../../configs/*/README.md'))
stats = []
titles = []
num_ckp...

File: mmsegmentation-master/docs/zh_cn/switch_language.md

## <a href='https://mmsegmentation.readthedocs.io/en/latest/'>English</a>
## <a href='https://mmsegmentation.readthedocs.io/zh_CN/latest/'>简体中文</a>

File: mmsegmentation-master/docs/zh_cn/train.md

## Train a model
MMSegmentation implements distributed and non-distributed training, using `MMDistributedDataParallel` and `MMDataParallel` respectively.
All outputs (log files and checkpoints) will be saved to the working directory, which is specified by `work_dir` in the config file.
By default, we evaluate the model on the validation set after some iterations. You can change the evaluation interval by adding an interval argument in the training config.
```python
evaluation = dict(interval=4000)  # evaluate the model performance every 4000 iterations
```
**\*Important note\***: In the config...