| repo (string) | file (string) | code (string) | file_length (int64) | avg_line_length (float64) | max_line_length (int64) | extension_type (string) |
|---|---|---|---|---|---|---|
octomap | octomap-master/octovis/src/extern/QGLViewer/VRender/Vector2.h | /*
This file is part of the VRender library.
Copyright (C) 2005 Cyril Soler (Cyril.Soler@imag.fr)
Version 1.0.0, released on June 27, 2005.
http://artis.imag.fr/Members/Cyril.Soler/VRender
VRender is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as pub... | 5,582 | 29.675824 | 108 | h |
octomap | octomap-master/octovis/src/extern/QGLViewer/VRender/Vector3.cpp | /*
This file is part of the VRender library.
Copyright (C) 2005 Cyril Soler (Cyril.Soler@imag.fr)
Version 1.0.0, released on June 27, 2005.
http://artis.imag.fr/Members/Cyril.Soler/VRender
VRender is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as pub... | 5,034 | 28.444444 | 85 | cpp |
octomap | octomap-master/octovis/src/extern/QGLViewer/VRender/Vector3.h | /*
This file is part of the VRender library.
Copyright (C) 2005 Cyril Soler (Cyril.Soler@imag.fr)
Version 1.0.0, released on June 27, 2005.
http://artis.imag.fr/Members/Cyril.Soler/VRender
VRender is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as pub... | 6,196 | 30.617347 | 129 | h |
octomap | octomap-master/octovis/src/extern/QGLViewer/VRender/VisibilityOptimizer.cpp | /*
This file is part of the VRender library.
Copyright (C) 2005 Cyril Soler (Cyril.Soler@imag.fr)
Version 1.0.0, released on June 27, 2005.
http://artis.imag.fr/Members/Cyril.Soler/VRender
VRender is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as pub... | 14,215 | 50.507246 | 147 | cpp |
octomap | octomap-master/octovis/src/extern/QGLViewer/VRender/gpc.cpp | /*
This file is part of the VRender library.
Copyright (C) 2005 Cyril Soler (Cyril.Soler@imag.fr)
Version 1.0.0, released on June 27, 2005.
http://artis.imag.fr/Members/Cyril.Soler/VRender
VRender is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as pub... | 67,922 | 25.688802 | 91 | cpp |
octomap | octomap-master/octovis/src/extern/QGLViewer/VRender/gpc.h | /*
This file is part of the VRender library.
Copyright (C) 2005 Cyril Soler (Cyril.Soler@imag.fr)
Version 1.0.0, released on June 27, 2005.
http://artis.imag.fr/Members/Cyril.Soler/VRender
VRender is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as pub... | 6,364 | 34.361111 | 78 | h |
octomap | octomap-master/scripts/increase_version.py | #!/usr/bin/env python
# Increases the version number of package.xml and CMakeLists.txt files in
# subfolders. The first argument specifies the version increase:
# major, minor, or patch (default, e.g. 1.6.2 --> 1.6.3)
#
# Borrows heavily from ROS / catkin release tools
import re
import sys
import copy
manifest_mat... | 3,451 | 29.548673 | 107 | py |
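The `increase_version.py` row above describes a semantic-version bump (major, minor, or patch, e.g. `1.6.2 --> 1.6.3`). A minimal sketch of that bump logic, assuming a plain `X.Y.Z` version string; the helper name `bump_version` is illustrative, not taken from the script:

```python
import re

def bump_version(version: str, part: str = "patch") -> str:
    """Bump a semantic version string, e.g. 1.6.2 -> 1.6.3 for 'patch'."""
    match = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", version)
    if match is None:
        raise ValueError(f"not an X.Y.Z version: {version!r}")
    major, minor, patch = (int(g) for g in match.groups())
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")
```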
octomap | octomap-master/scripts/travis_build_jobs.sh | #!/bin/bash
# travis build script for test compilations
set -e
function build {
cd $1
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/tmp/octomap/$1
make -j4
cd ..
}
case "$1" in
"dist")
build .
cd build && make test
make install
;;
"components")
build octomap
cd build && make test
m... | 453 | 10.35 | 49 | sh |
mmsegmentation | mmsegmentation-master/.owners.yml | assign:
strategy:
# random
# round-robin
daily-shift-based
assignees:
- csatsurnh
- xiexinch
- MeowZheng
- csatsurnh
- xiexinch
| 164 | 12.75 | 21 | yml |
mmsegmentation | mmsegmentation-master/.pre-commit-config.yaml | repos:
- repo: https://github.com/PYCQA/flake8.git
rev: 5.0.4
hooks:
- id: flake8
- repo: https://github.com/zhouzaida/isort
rev: 5.12.1
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
rev: v0.32.0
hooks:
- id: yapf
- repo: https://github.com/pre-c... | 1,830 | 29.016393 | 102 | yaml |
mmsegmentation | mmsegmentation-master/.readthedocs.yml | version: 2
formats: all
python:
version: 3.7
install:
- requirements: requirements/docs.txt
- requirements: requirements/readthedocs.txt
| 151 | 14.2 | 48 | yml |
mmsegmentation | mmsegmentation-master/LICENSES.md | # Licenses for special features
In this file, we list the features that are released under licenses other than Apache 2.0. Users should be careful about adopting these features in any commercial matters.
| Feature | ... | 1,581 | 174.777778 | 346 | md |
mmsegmentation | mmsegmentation-master/README.md | <div align="center">
<img src="resources/mmseg-logo.png" width="600"/>
<div> </div>
<div align="center">
<b><font size="5">OpenMMLab website</font></b>
<sup>
<a href="https://openmmlab.com">
<i><font size="4">HOT</font></i>
</a>
</sup>
<b><font... | 13,936 | 51.992395 | 249 | md |
mmsegmentation | mmsegmentation-master/README_zh-CN.md | <div align="center">
<img src="resources/mmseg-logo.png" width="600"/>
<div> </div>
<div align="center">
<b><font size="5">OpenMMLab 官网</font></b>
<sup>
<a href="https://openmmlab.com">
<i><font size="4">HOT</font></i>
</a>
</sup>
<b><font size... | 10,804 | 40.557692 | 200 | md |
mmsegmentation | mmsegmentation-master/model-index.yml | Import:
- configs/ann/ann.yml
- configs/apcnet/apcnet.yml
- configs/beit/beit.yml
- configs/bisenetv1/bisenetv1.yml
- configs/bisenetv2/bisenetv2.yml
- configs/ccnet/ccnet.yml
- configs/cgnet/cgnet.yml
- configs/convnext/convnext.yml
- configs/danet/danet.yml
- configs/deeplabv3/deeplabv3.yml
- configs/deeplabv3plus/de... | 1,374 | 27.061224 | 41 | yml |
mmsegmentation | mmsegmentation-master/setup.py | # Copyright (c) OpenMMLab. All rights reserved.
import os
import os.path as osp
import platform
import shutil
import sys
import warnings
from setuptools import find_packages, setup
def readme():
with open('README.md', encoding='utf-8') as f:
content = f.read()
return content
version_file = 'mmseg/ve... | 7,221 | 36.811518 | 125 | py |
mmsegmentation | mmsegmentation-master/.circleci/config.yml | version: 2.1
# this allows you to use CircleCI's dynamic configuration feature
setup: true
# the path-filtering orb is required to continue a pipeline based on
# the path of an updated fileset
orbs:
path-filtering: circleci/path-filtering@0.1.2
workflows:
# the always-run workflow is always triggered, regardless... | 1,275 | 35.457143 | 87 | yml |
mmsegmentation | mmsegmentation-master/.circleci/test.yml |
version: 2.1
# the default pipeline parameters, which will be updated according to
# the results of the path-filtering orb
parameters:
lint_only:
type: boolean
default: true
jobs:
lint:
docker:
- image: cimg/python:3.7.4
steps:
- checkout
- run:
name: Install depende... | 6,184 | 30.556122 | 177 | yml |
mmsegmentation | mmsegmentation-master/.circleci/scripts/get_mmcv_var.sh | #!/bin/bash
TORCH=$1
CUDA=$2
# 10.2 -> cu102
MMCV_CUDA="cu`echo ${CUDA} | tr -d '.'`"
# MMCV only provides pre-compiled packages for torch 1.x.0
# which works for any subversions of torch 1.x.
# We force the torch version to be 1.x.0 to ease package searching
# and avoid unnecessary rebuild during MMCV's installatio... | 574 | 27.75 | 66 | sh |
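The `get_mmcv_var.sh` snippet derives an MMCV wheel tag from the CUDA and torch versions: `10.2` becomes `cu102`, and any torch `1.x.y` is normalized to `1.x.0` because MMCV only ships pre-compiled packages per `1.x.0`. A Python sketch of the same two transformations (function names are illustrative):

```python
def mmcv_cuda_tag(cuda_version: str) -> str:
    """Turn a CUDA version like '10.2' into a wheel tag like 'cu102'."""
    return "cu" + cuda_version.replace(".", "")

def normalize_torch_version(torch_version: str) -> str:
    """Force the patch version to 0, since MMCV ships wheels per torch 1.x.0."""
    major, minor, _patch = torch_version.split(".")
    return f"{major}.{minor}.0"
```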
mmsegmentation | mmsegmentation-master/.dev/batch_test_list.py | # yapf: disable
# Inference Speed is tested on NVIDIA V100
hrnet = [
dict(
config='configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py',
checkpoint='fcn_hr18s_512x512_160k_ade20k_20200614_214413-870f65ac.pth', # noqa
eval='mIoU',
metric=dict(mIoU=33.0),
),
dict(
config='co... | 4,856 | 35.246269 | 130 | py |
mmsegmentation | mmsegmentation-master/.dev/benchmark_evaluation.sh | PARTITION=$1
CHECKPOINT_DIR=$2
echo 'configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py' &
GPUS=4 GPUS_PER_NODE=4 CPUS_PER_TASK=2 tools/slurm_test.sh $PARTITION fcn_hr18s_512x512_160k_ade20k configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py $CHECKPOINT_DIR/fcn_hr18s_512x512_160k_ade20k_20200614_214413-870f65ac.pth --eval m... | 8,747 | 207.285714 | 517 | sh |
mmsegmentation | mmsegmentation-master/.dev/benchmark_inference.py | # Copyright (c) OpenMMLab. All rights reserved.
import hashlib
import logging
import os
import os.path as osp
import warnings
from argparse import ArgumentParser
import requests
from mmcv import Config
from mmseg.apis import inference_segmentor, init_segmentor, show_result_pyplot
from mmseg.utils import get_root_logg... | 5,429 | 35.689189 | 116 | py |
mmsegmentation | mmsegmentation-master/.dev/benchmark_train.sh | PARTITION=$1
echo 'configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py' &
GPUS=4 GPUS_PER_NODE=4 CPUS_PER_TASK=2 ./tools/slurm_train.sh $PARTITION fcn_hr18s_512x512_160k_ade20k configs/hrnet/fcn_hr18s_512x512_160k_ade20k.py --cfg-options checkpoint_config.max_keep_ckpts=1 dist_params.port=24727 --work-dir work_dirs/hrnet... | 7,626 | 185.02439 | 420 | sh |
mmsegmentation | mmsegmentation-master/.dev/check_urls.py | # Copyright (c) OpenMMLab. All rights reserved.
import logging
import os
from argparse import ArgumentParser
import requests
import yaml as yml
from mmseg.utils import get_root_logger
def check_url(url):
"""Check url response status.
Args:
url (str): url needed to check.
Returns:
int, ... | 3,392 | 33.622449 | 153 | py |
mmsegmentation | mmsegmentation-master/.dev/gather_benchmark_evaluation_results.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import glob
import os.path as osp
import mmcv
from mmcv import Config
def parse_args():
parser = argparse.ArgumentParser(
description='Gather benchmarked model evaluation results')
parser.add_argument('config', help='test config file pat... | 3,183 | 33.608696 | 77 | py |
mmsegmentation | mmsegmentation-master/.dev/gather_benchmark_train_results.py | import argparse
import glob
import os.path as osp
import mmcv
from gather_models import get_final_results
from mmcv import Config
def parse_args():
parser = argparse.ArgumentParser(
description='Gather benchmarked models train results')
parser.add_argument('config', help='test config file path')
... | 3,481 | 33.475248 | 77 | py |
mmsegmentation | mmsegmentation-master/.dev/gather_models.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import glob
import hashlib
import json
import os
import os.path as osp
import shutil
import mmcv
import torch
# build schedule look-up table to automatically find the final model
RESULTS_LUT = ['mIoU', 'mAcc', 'aAcc']
def calculate_file_sha256(file_pat... | 7,368 | 33.596244 | 78 | py |
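The `gather_models.py` snippet is truncated right at `calculate_file_sha256`. A plausible chunked implementation of such a helper follows; this is a sketch under the assumption that it returns a hex digest, not the repository's actual body:

```python
import hashlib

def calculate_file_sha256(file_path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    sha256 = hashlib.sha256()
    with open(file_path, "rb") as f:
        # Read fixed-size blocks so large checkpoints need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    return sha256.hexdigest()
```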
mmsegmentation | mmsegmentation-master/.dev/generate_benchmark_evaluation_script.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os.path as osp
from mmcv import Config
def parse_args():
parser = argparse.ArgumentParser(
description='Convert benchmark test model list to script')
parser.add_argument('config', help='test config file path')
parser.add_argum... | 3,381 | 28.929204 | 79 | py |
mmsegmentation | mmsegmentation-master/.dev/generate_benchmark_train_script.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os.path as osp
# By default, training uses 4 GPUs; the configs listed below require 8 GPUs
config_8gpu_list = [
'configs/swin/upernet_swin_tiny_patch4_window7_512x512_160k_ade20k_pretrain_224x224_1K.py', # noqa
'configs/vit/upernet_vit-b16_ln_mln_512x512_160k_ade20k.py',
'co... | 2,770 | 30.134831 | 103 | py |
mmsegmentation | mmsegmentation-master/.dev/md2yml.py | #!/usr/bin/env python
# Copyright (c) OpenMMLab. All rights reserved.
# This tool is used to update model-index.yml which is required by MIM, and
# will be automatically called as a pre-commit hook. The updating will be
# triggered if any change of model information (.md files in configs/) has been
# detected before a... | 12,306 | 37.701258 | 79 | py |
mmsegmentation | mmsegmentation-master/.dev/upload_modelzoo.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os
import os.path as osp
import oss2
ACCESS_KEY_ID = os.getenv('OSS_ACCESS_KEY_ID', None)
ACCESS_KEY_SECRET = os.getenv('OSS_ACCESS_KEY_SECRET', None)
BUCKET_NAME = 'openmmlab'
ENDPOINT = 'https://oss-accelerate.aliyuncs.com'
def parse_args():
... | 1,324 | 28.444444 | 77 | py |
mmsegmentation | mmsegmentation-master/.dev/log_collector/example_config.py | work_dir = '../../work_dirs'
metric = 'mIoU'
# specify the log files we would like to collect in `log_items`
log_items = [
'segformer_mit-b5_512x512_160k_ade20k_cnn_lr_with_warmup',
'segformer_mit-b5_512x512_160k_ade20k_cnn_no_warmup_lr',
'segformer_mit-b5_512x512_160k_ade20k_mit_trans_lr',
'segformer_... | 641 | 32.789474 | 65 | py |
mmsegmentation | mmsegmentation-master/.dev/log_collector/log_collector.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import datetime
import json
import os
import os.path as osp
from collections import OrderedDict
from utils import load_config
# automatically collect all the results
# The structure of the directory:
# ├── work-dir
# │ ├── config_1
# │ │... | 4,962 | 34.45 | 78 | py |
mmsegmentation | mmsegmentation-master/.dev/log_collector/readme.md | # Log Collector
## Function
Automatically collect logs and write the result in a json file or markdown file.
If there are several `.log.json` files in one folder, Log Collector assumes that the `.log.json` files other than the first one are resume from the preceding `.log.json` file. Log Collector returns the result... | 4,517 | 30.158621 | 243 | md |
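The log-collector readme above says that when a folder holds several `.log.json` files, each later file is treated as a resume of the preceding one. A sketch of that merge rule, assuming line-delimited JSON records keyed by an `iter` field (the field names are assumptions, not confirmed by the snippet):

```python
import json

def merge_resumed_logs(log_texts):
    """Concatenate records from several .log.json dumps, keeping only the
    last record seen for each iteration (later files resume earlier ones)."""
    by_iter = {}
    for text in log_texts:
        for line in text.splitlines():
            if not line.strip():
                continue
            record = json.loads(line)
            # A resumed run overwrites any earlier record for the same iteration.
            by_iter[record["iter"]] = record
    return [by_iter[i] for i in sorted(by_iter)]
```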
mmsegmentation | mmsegmentation-master/.dev/log_collector/utils.py | # Copyright (c) OpenMMLab. All rights reserved.
# modified from https://github.dev/open-mmlab/mmcv
import os.path as osp
import sys
from importlib import import_module
def load_config(cfg_dir: str) -> dict:
assert cfg_dir.endswith('.py')
root_path, file_name = osp.split(cfg_dir)
temp_module = osp.splitext... | 582 | 26.761905 | 66 | py |
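The `utils.py` snippet above cuts off while importing a `.py` config file as a module. A self-contained sketch of that pattern using `importlib`, returning the module-level names as a dict; this is an approximation of the truncated helper, not its exact body:

```python
import importlib.util
import os.path as osp

def load_config(cfg_path: str) -> dict:
    """Import a .py config file and return its non-dunder module attributes."""
    assert cfg_path.endswith(".py")
    module_name = osp.splitext(osp.basename(cfg_path))[0]
    spec = importlib.util.spec_from_file_location(module_name, cfg_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return {k: v for k, v in vars(module).items() if not k.startswith("__")}
```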
mmsegmentation | mmsegmentation-master/.github/CODE_OF_CONDUCT.md | # Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex ch... | 3,355 | 42.584416 | 87 | md |
mmsegmentation | mmsegmentation-master/.github/CONTRIBUTING.md | # Contributing to mmsegmentation
All kinds of contributions are welcome, including but not limited to the following.
- Fixes (typo, bugs)
- New features and components
## Workflow
1. fork and pull the latest mmsegmentation
2. checkout a new branch (do not use master branch for PRs)
3. commit your changes
4. create ... | 1,889 | 31.033898 | 128 | md |
mmsegmentation | mmsegmentation-master/.github/pull_request_template.md | Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
## Motivation
Please describe the motivation of this ... | 1,312 | 49.5 | 264 | md |
mmsegmentation | mmsegmentation-master/.github/ISSUE_TEMPLATE/config.yml | blank_issues_enabled: false
contact_links:
- name: MMSegmentation Documentation
url: https://mmsegmentation.readthedocs.io
    about: Check the docs and FAQ to see if your question is already answered.
| 208 | 28.857143 | 77 | yml |
mmsegmentation | mmsegmentation-master/.github/ISSUE_TEMPLATE/error-report.md | ---
name: Error report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**... | 1,317 | 25.897959 | 194 | md |
mmsegmentation | mmsegmentation-master/.github/ISSUE_TEMPLATE/feature_request.md | ---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
# Describe the feature
**Motivation**
A clear and concise description of the motivation of the feature.
Ex1. It is inconvenient when \[....\].
Ex2. There is a recent paper \[....\], which is very helpful for \[...... | 705 | 31.090909 | 139 | md |
mmsegmentation | mmsegmentation-master/.github/ISSUE_TEMPLATE/general_questions.md | ---
name: General questions
about: Ask general questions to get help
title: ''
labels: ''
assignees: ''
---
| 108 | 12.625 | 40 | md |
mmsegmentation | mmsegmentation-master/.github/ISSUE_TEMPLATE/reimplementation_questions.md | ---
name: Reimplementation Questions
about: Ask about questions during model reimplementation
title: ''
labels: reimplementation
assignees: ''
---
If you feel we have helped you, give us a STAR! :satisfied:
**Notice**
There are several common situations in the reimplementation issues as below
1. Reimplement a model... | 2,728 | 37.985714 | 435 | md |
mmsegmentation | mmsegmentation-master/.github/workflows/build.yml | name: build
on:
push:
paths-ignore:
- 'demo/**'
- '.dev/**'
- 'docker/**'
- 'tools/**'
- '**.md'
- 'projects/**'
pull_request:
paths-ignore:
- 'demo/**'
- '.dev/**'
- 'docker/**'
- 'tools/**'
- 'docs/**'
- '**.md'
- 'projects/**... | 11,980 | 36.323988 | 182 | yml |
mmsegmentation | mmsegmentation-master/.github/workflows/deploy.yml | name: deploy
on: push
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build-n-publish:
runs-on: ubuntu-latest
if: startsWith(github.event.ref, 'refs/tags')
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.7
uses: actions/setu... | 659 | 23.444444 | 74 | yml |
mmsegmentation | mmsegmentation-master/.github/workflows/lint.yml | name: lint
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
lint:
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.7
uses: actions/setup-python@v2
with:
python-versi... | 761 | 25.275862 | 155 | yml |
mmsegmentation | mmsegmentation-master/.github/workflows/test_mim.yml | name: test-mim
on:
push:
paths:
- 'model-index.yml'
- 'configs/**'
pull_request:
paths:
- 'model-index.yml'
- 'configs/**'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build_cpu:
runs-on: ubuntu-18.04
strategy:
ma... | 1,156 | 24.711111 | 148 | yml |
mmsegmentation | mmsegmentation-master/configs/_base_/default_runtime.py | # yapf:disable
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook', by_epoch=False),
# dict(type='TensorboardLoggerHook')
# dict(type='PaviLoggerHook') # for internal services
])
# yapf:enable
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resum... | 383 | 23 | 61 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/ade20k.py | # dataset settings
dataset_type = 'ADE20KDataset'
data_root = 'data/ade/ADEChallengeData2016'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_labe... | 1,844 | 32.545455 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/ade20k_640x640.py | # dataset settings
dataset_type = 'ADE20KDataset'
data_root = 'data/ade/ADEChallengeData2016'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (640, 640)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_labe... | 1,844 | 32.545455 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/chase_db1.py | # dataset settings
dataset_type = 'ChaseDB1Dataset'
data_root = 'data/CHASE_DB1'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (960, 999)
crop_size = (128, 128)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
d... | 1,924 | 31.083333 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/cityscapes.py | # dataset settings
dataset_type = 'CityscapesDataset'
data_root = 'data/cityscapes/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 1024)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize',... | 1,780 | 31.381818 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/cityscapes_1024x1024.py | _base_ = './cityscapes.py'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (1024, 1024)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
dic... | 1,283 | 34.666667 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/cityscapes_768x768.py | _base_ = './cityscapes.py'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (768, 768)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(2049, 1025), ratio_range=(0.5, 2.0)),
dict(... | 1,281 | 34.611111 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/cityscapes_769x769.py | _base_ = './cityscapes.py'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (769, 769)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(2049, 1025), ratio_range=(0.5, 2.0)),
dict(... | 1,281 | 34.611111 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/cityscapes_832x832.py | _base_ = './cityscapes.py'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (832, 832)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', img_scale=(2048, 1024), ratio_range=(0.5, 2.0)),
dict(... | 1,281 | 34.611111 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/coco-stuff10k.py | # dataset settings
dataset_type = 'COCOStuffDataset'
data_root = 'data/coco_stuff10k'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_label=True),... | 1,926 | 32.224138 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/coco-stuff164k.py | # dataset settings
dataset_type = 'COCOStuffDataset'
data_root = 'data/coco_stuff164k'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize'... | 1,803 | 31.8 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/drive.py | # dataset settings
dataset_type = 'DRIVEDataset'
data_root = 'data/DRIVE'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (584, 565)
crop_size = (64, 64)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type=... | 1,915 | 30.933333 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/hrf.py | # dataset settings
dataset_type = 'HRFDataset'
data_root = 'data/HRF'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (2336, 3504)
crop_size = (256, 256)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type=... | 1,915 | 30.933333 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/imagenets.py | # dataset settings
dataset_type = 'ImageNetSDataset'
subset = 919
data_root = 'data/ImageNetS/ImageNetS919'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (224, 224)
train_pipeline = [
dict(type='LoadImageNetSImageFromFile', downsample_large_image=True... | 1,996 | 31.209677 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/isaid.py | # dataset settings
dataset_type = 'iSAIDDataset'
data_root = 'data/iSAID'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
"""
This crop_size setting follows the implementation of
`PointFlow: Flowing Semantics Through Points for Aerial Image
Segmentation <https:... | 1,943 | 29.857143 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/loveda.py | # dataset settings
dataset_type = 'LoveDADataset'
data_root = 'data/loveDA'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_label=True),
dict(... | 1,784 | 31.454545 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/occlude_face.py | dataset_type = 'FaceOccludedDataset'
data_root = 'data/occlusion-aware-face-dataset'
crop_size = (512, 512)
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resize', ... | 2,277 | 27.835443 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/pascal_context.py | # dataset settings
dataset_type = 'PascalContextDataset'
data_root = 'data/VOCdevkit/VOC2010/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (520, 520)
crop_size = (480, 480)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnno... | 1,998 | 31.770492 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/pascal_context_59.py | # dataset settings
dataset_type = 'PascalContextDataset59'
data_root = 'data/VOCdevkit/VOC2010/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (520, 520)
crop_size = (480, 480)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAn... | 2,024 | 32.196721 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/pascal_voc12.py | # dataset settings
dataset_type = 'PascalVOCDataset'
data_root = 'data/VOCdevkit/VOC2012'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(type='Resi... | 1,930 | 32.293103 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/pascal_voc12_aug.py | _base_ = './pascal_voc12.py'
# dataset settings
data = dict(
train=dict(
ann_dir=['SegmentationClass', 'SegmentationClassAug'],
split=[
'ImageSets/Segmentation/train.txt',
'ImageSets/Segmentation/aug.txt'
]))
| 261 | 25.2 | 62 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/potsdam.py | # dataset settings
dataset_type = 'PotsdamDataset'
data_root = 'data/potsdam'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_label=True),
dic... | 1,783 | 31.436364 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/stare.py | # dataset settings
dataset_type = 'STAREDataset'
data_root = 'data/STARE'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
img_scale = (605, 700)
crop_size = (128, 128)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(typ... | 1,917 | 30.966667 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/datasets/vaihingen.py | # dataset settings
dataset_type = 'ISPRSDataset'
data_root = 'data/vaihingen'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', reduce_zero_label=True),
dic... | 1,783 | 31.436364 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/ann_r50-d8.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=... | 1,346 | 27.659574 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/apcnet_r50-d8.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=... | 1,302 | 27.955556 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/bisenetv1_r18-d32.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
backbone=dict(
type='BiSeNetV1',
in_channels=3,
context_channels=(128, 256, 512),
spatial_channels=(64, 64, 64, 128),
out_indices=(0, 1, 2),
out_channels=256,
... | 2,014 | 28.202899 | 78 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/bisenetv2.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained=None,
backbone=dict(
type='BiSeNetV2',
detail_channels=(64, 64, 128),
semantic_channels=(16, 32, 64, 128),
semantic_expansion_ratio=6,
bga_channels=128,... | 2,419 | 28.876543 | 78 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/ccnet_r50-d8.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=... | 1,258 | 26.977778 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/cgnet.py | # model settings
norm_cfg = dict(type='SyncBN', eps=1e-03, requires_grad=True)
model = dict(
type='EncoderDecoder',
backbone=dict(
type='CGNet',
norm_cfg=norm_cfg,
in_channels=3,
num_channels=(32, 64, 128),
num_blocks=(3, 21),
dilations=(2, 4),
reductions=... | 1,110 | 29.861111 | 79 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/danet_r50-d8.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=... | 1,261 | 27.044444 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/deeplabv3_r50-d8.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=... | 1,273 | 27.311111 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/deeplabv3_unet_s5-d16.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained=None,
backbone=dict(
type='UNet',
in_channels=3,
base_channels=64,
num_stages=5,
strides=(1, 1, 1, 1, 1),
enc_num_convs=(2, 2, 2, 2, 2),
... | 1,513 | 28.686275 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/deeplabv3plus_r50-d8.py | # model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1343 | 27.595745 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/dmnet_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1302 | 27.955556 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/dnl_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1316 | 27.021277 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/dpt_vit-b16.py
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='pretrain/vit-b16_p16_224-80ecf9dd.pth', # noqa
backbone=dict(
type='VisionTransformer',
img_size=224,
embed_dims=768,
num_layers=12,
num_heads=12,
out_indices=(...
| 1004 | 30.40625 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/emanet_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1329 | 26.708333 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/encnet_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1435 | 28.306122 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/erfnet_fcn.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained=None,
backbone=dict(
type='ERFNet',
in_channels=3,
enc_downsample_channels=(16, 64, 128),
enc_stage_non_bottlenecks=(5, 8),
enc_non_bottleneck_dilations...
| 1008 | 29.575758 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/fast_scnn.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True, momentum=0.01)
model = dict(
type='EncoderDecoder',
backbone=dict(
type='FastSCNN',
downsample_dw_channels=(32, 48),
global_in_channels=64,
global_block_channels=(64, 96, 128),
global_block_strides=(2, 2,...
| 1759 | 29.344828 | 77 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/fastfcn_r50-d32_jpu_psp.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
dilations=(1, 1, 2, 4),
strides=(1, 2, 2, 2),
out_indices=...
| 1502 | 26.833333 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/fcn_hr18.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://msra/hrnetv2_w18',
backbone=dict(
type='HRNet',
norm_cfg=norm_cfg,
norm_eval=False,
extra=dict(
stage1=dict(
num_modul...
| 1646 | 30.075472 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/fcn_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1285 | 26.956522 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/fcn_unet_s5-d16.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained=None,
backbone=dict(
type='UNet',
in_channels=3,
base_channels=64,
num_stages=5,
strides=(1, 1, 1, 1, 1),
enc_num_convs=(2, 2, 2, 2, 2),
...
| 1526 | 28.365385 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/fpn_poolformer_s12.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
checkpoint_file = 'https://download.openmmlab.com/mmclassification/v0/poolformer/poolformer-s12_3rdparty_32xb128_in1k_20220414-f8d83051.pth' # noqa
custom_imports = dict(imports='mmcls.models', allow_failed_imports=False)
model = dict(
type='Encod...
| 1368 | 30.837209 | 148 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/fpn_r50.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 1, 1),
strides=...
| 1056 | 27.567568 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/gcnet_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1326 | 27.234043 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/icnet_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
backbone=dict(
type='ICNet',
backbone_cfg=dict(
type='ResNetV1c',
in_channels=3,
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
...
| 2154 | 27.733333 | 78 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/isanet_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1291 | 27.086957 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/lraspp_m-v3-d8.py
# model settings
norm_cfg = dict(type='SyncBN', eps=0.001, requires_grad=True)
model = dict(
type='EncoderDecoder',
backbone=dict(
type='MobileNetV3',
arch='large',
out_indices=(1, 3, 16),
norm_cfg=norm_cfg),
decode_head=dict(
type='LRASPPHead',
in_channels=(1...
| 766 | 28.5 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/nonlocal_r50-d8.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='EncoderDecoder',
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=...
| 1315 | 27 | 74 | py |
mmsegmentation | mmsegmentation-master/configs/_base_/models/ocrnet_hr18.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
type='CascadeEncoderDecoder',
num_stages=2,
pretrained='open-mmlab://msra/hrnetv2_w18',
backbone=dict(
type='HRNet',
norm_cfg=norm_cfg,
norm_eval=False,
extra=dict(
stage1=dict(
...
| 2196 | 30.84058 | 78 | py |
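The reason these fragments can stay so small is that mmcv-style configs merge dictionaries recursively: a downstream config restates only the keys it changes, and everything else is inherited from the `_base_` fragment. A minimal sketch of that merge behaviour (simplified for illustration; this is not the real mmcv implementation):

```python
# Simplified sketch of `_base_`-style config merging: child keys override
# base keys, and nested dicts are merged recursively instead of replaced.
def merge_cfg(base, override):
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)
        else:
            merged[key] = value
    return merged
# A base model fragment, shaped like the files listed above.
base_model = dict(
    type='EncoderDecoder',
    backbone=dict(type='ResNetV1c', depth=50),
    decode_head=dict(num_classes=19))
# A downstream config only states what differs from the base.
override = dict(
    backbone=dict(depth=101),
    decode_head=dict(num_classes=150))
cfg = merge_cfg(base_model, override)
# cfg['backbone'] keeps type='ResNetV1c' but now has depth=101,
# and cfg['decode_head']['num_classes'] becomes 150.
```

The real loader also supports extras such as a `_delete_` key to drop inherited fields, but the recursive-merge core is what the swap-a-backbone and change-num_classes patterns in these configs rely on.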