| repo (string, len 2-152, nullable) | file (string, len 15-239) | code (string, len 0-58.4M) | file_length (int64, 0-58.4M) | avg_line_length (float64, 0-1.81M) | max_line_length (int64, 0-12.7M) | extension_type (string, 364 classes) |
|---|---|---|---|---|---|---|
mmdetection | mmdetection-master/configs/yolact/metafile.yml | Collections:
- Name: YOLACT
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- FPN
- ResNet
Paper:
URL: https://arxiv.org/abs/1904.02689
Title: 'YOLACT: Real-... | 2,305 | 28.189873 | 131 | yml |
mmdetection | mmdetection-master/configs/yolact/yolact_r101_1x8_coco.py | _base_ = './yolact_r50_1x8_coco.py'
model = dict(
backbone=dict(
depth=101,
init_cfg=dict(type='Pretrained',
checkpoint='torchvision://resnet101')))
| 192 | 23.125 | 61 | py |
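The `yolact_r101` row above shows the `_base_`-inheritance pattern used by every config in this dump: a child file names a base config and overrides only the fields that differ. A minimal sketch of the recursive dict merge this relies on (a simplification for illustration, not mmcv's actual `Config` implementation):

```python
def merge_cfg(base, override):
    """Recursively merge an override dict into a base dict,
    mimicking (in simplified form) how `_base_` configs combine."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_cfg(merged[key], value)
        else:
            merged[key] = value
    return merged

# Base config (excerpt) and the child's override, as in the rows above.
base = dict(backbone=dict(type='ResNet', depth=50, num_stages=4))
child = dict(backbone=dict(depth=101,
                           init_cfg=dict(type='Pretrained',
                                         checkpoint='torchvision://resnet101')))
cfg = merge_cfg(base, child)
# Only `depth` is replaced and `init_cfg` added; `type` and `num_stages` survive.
```

In the real system, `Config.fromfile` performs this merge across the whole `_base_` chain, so `yolact_r101_1x8_coco.py` inherits everything from `yolact_r50_1x8_coco.py` except the overridden backbone fields.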
mmdetection | mmdetection-master/configs/yolact/yolact_r50_1x8_coco.py | _base_ = '../_base_/default_runtime.py'
# model settings
img_size = 550
model = dict(
type='YOLACT',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=-1, # do not freeze stem
norm_cfg=dict(type='BN', requires_grad=Tru... | 5,272 | 30.76506 | 79 | py |
mmdetection | mmdetection-master/configs/yolact/yolact_r50_8x8_coco.py | _base_ = 'yolact_r50_1x8_coco.py'
optimizer = dict(type='SGD', lr=8e-3, momentum=0.9, weight_decay=5e-4)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=1000,
warmup_ratio=0.1,
step=[20, 42, 49, 52])
... | 507 | 28.882353 | 70 | py |
mmdetection | mmdetection-master/configs/yolo/README.md | # YOLOv3
> [YOLOv3: An Incremental Improvement](https://arxiv.org/abs/1804.02767)
<!-- [ALGORITHM] -->
## Abstract
We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate.... | 7,661 | 135.821429 | 538 | md |
mmdetection | mmdetection-master/configs/yolo/metafile.yml | Collections:
- Name: YOLOv3
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- DarkNet
Paper:
URL: https://arxiv.org/abs/1804.02767
Title: 'YOLOv3: An Incremental Imp... | 3,996 | 30.976 | 176 | yml |
mmdetection | mmdetection-master/configs/yolo/yolov3_d53_320_273e_coco.py | _base_ = './yolov3_d53_mstrain-608_273e_coco.py'
# dataset settings
img_norm_cfg = dict(mean=[0, 0, 0], std=[255., 255., 255.], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Expand',
mean=img_norm_cfg['mean'],
... | 1,439 | 32.488372 | 72 | py |
mmdetection | mmdetection-master/configs/yolo/yolov3_d53_fp16_mstrain-608_273e_coco.py | _base_ = './yolov3_d53_mstrain-608_273e_coco.py'
# fp16 settings
fp16 = dict(loss_scale='dynamic')
| 99 | 24 | 48 | py |
mmdetection | mmdetection-master/configs/yolo/yolov3_d53_mstrain-416_273e_coco.py | _base_ = './yolov3_d53_mstrain-608_273e_coco.py'
# dataset settings
img_norm_cfg = dict(mean=[0, 0, 0], std=[255., 255., 255.], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
type='Expand',
mean=img_norm_cfg['mean'],
... | 1,453 | 32.813953 | 77 | py |
mmdetection | mmdetection-master/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py | _base_ = '../_base_/default_runtime.py'
# model settings
model = dict(
type='YOLOV3',
backbone=dict(
type='Darknet',
depth=53,
out_indices=(3, 4, 5),
init_cfg=dict(type='Pretrained', checkpoint='open-mmlab://darknet53')),
neck=dict(
type='YOLOV3Neck',
num_scal... | 4,418 | 32.225564 | 79 | py |
mmdetection | mmdetection-master/configs/yolo/yolov3_mobilenetv2_320_300e_coco.py | _base_ = ['./yolov3_mobilenetv2_mstrain-416_300e_coco.py']
# yapf:disable
model = dict(
bbox_head=dict(
anchor_generator=dict(
base_sizes=[[(220, 125), (128, 222), (264, 266)],
[(35, 87), (102, 96), (60, 170)],
[(10, 15), (24, 36), (72, 42)]])))
#... | 1,756 | 31.537037 | 77 | py |
mmdetection | mmdetection-master/configs/yolo/yolov3_mobilenetv2_mstrain-416_300e_coco.py | _base_ = '../_base_/default_runtime.py'
# model settings
model = dict(
type='YOLOV3',
backbone=dict(
type='MobileNetV2',
out_indices=(2, 4, 6),
act_cfg=dict(type='LeakyReLU', negative_slope=0.1),
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://mmdet/mobilen... | 4,664 | 31.622378 | 78 | py |
mmdetection | mmdetection-master/configs/yolof/README.md | # YOLOF
> [You Only Look One-level Feature](https://arxiv.org/abs/2103.09460)
<!-- [ALGORITHM] -->
## Abstract
This paper revisits feature pyramids networks (FPN) for one-stage detectors and points out that the success of FPN is due to its divide-and-conquer solution to the optimization problem in object detection ... | 3,467 | 95.333333 | 1,112 | md |
mmdetection | mmdetection-master/configs/yolof/metafile.yml | Collections:
- Name: YOLOF
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Momentum
- Weight Decay
Training Resources: 8x V100 GPUs
Architecture:
- Dilated Encoder
- ResNet
Paper:
URL: https://arxiv.org/abs/2103.09460
Title: 'Yo... | 959 | 28.090909 | 145 | yml |
mmdetection | mmdetection-master/configs/yolof/yolof_r50_c5_8x8_1x_coco.py | _base_ = [
'../_base_/datasets/coco_detection.py',
'../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
model = dict(
type='YOLOF',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(3, ),
frozen_stages=1,
norm_cfg=dict(ty... | 3,504 | 30.294643 | 77 | py |
mmdetection | mmdetection-master/configs/yolof/yolof_r50_c5_8x8_iter-1x_coco.py | _base_ = './yolof_r50_c5_8x8_1x_coco.py'
# We implemented the iter-based config according to the source code.
# COCO dataset has 117266 images after filtering. We train with 8 GPUs and
# a batch size of 8 per GPU, so 22500 iterations are equivalent to
# 22500/(117266/(8x8))=12.3 epochs, 15000 is equivalent to 8.2 epochs,
# 20000 is equivalent ... | 671 | 43.8 | 69 | py |
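The epoch-equivalence arithmetic in the comment above can be reproduced directly; a small sketch using only the numbers stated in the comment:

```python
num_images = 117266      # COCO train images after filtering
batch_size = 8 * 8       # 8 GPUs x 8 images per GPU
iters_per_epoch = num_images / batch_size

def iters_to_epochs(iters):
    """Convert an iteration count to its equivalent number of epochs."""
    return iters / iters_per_epoch

print(round(iters_to_epochs(22500), 1))  # 12.3
print(round(iters_to_epochs(15000), 1))  # 8.2
```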
mmdetection | mmdetection-master/configs/yolox/README.md | # YOLOX
> [YOLOX: Exceeding YOLO Series in 2021](https://arxiv.org/abs/2107.08430)
<!-- [ALGORITHM] -->
## Abstract
In this report, we present some experienced improvements to YOLO series, forming a new high-performance detector -- YOLOX. We switch the YOLO detector to an anchor-free manner and conduct other advanc... | 5,251 | 130.3 | 1,138 | md |
mmdetection | mmdetection-master/configs/yolox/metafile.yml | Collections:
- Name: YOLOX
Metadata:
Training Data: COCO
Training Techniques:
- SGD with Nesterov
- Weight Decay
- Cosine Annealing Lr Updater
Training Resources: 8x TITANXp GPUs
Architecture:
- CSPDarkNet
- PAFPN
Paper:
URL: https://arxiv.... | 2,257 | 30.802817 | 145 | yml |
mmdetection | mmdetection-master/configs/yolox/yolox_l_8x8_300e_coco.py | _base_ = './yolox_s_8x8_300e_coco.py'
# model settings
model = dict(
backbone=dict(deepen_factor=1.0, widen_factor=1.0),
neck=dict(
in_channels=[256, 512, 1024], out_channels=256, num_csp_blocks=3),
bbox_head=dict(in_channels=256, feat_channels=256))
| 272 | 29.333333 | 74 | py |
mmdetection | mmdetection-master/configs/yolox/yolox_m_8x8_300e_coco.py | _base_ = './yolox_s_8x8_300e_coco.py'
# model settings
model = dict(
backbone=dict(deepen_factor=0.67, widen_factor=0.75),
neck=dict(in_channels=[192, 384, 768], out_channels=192, num_csp_blocks=2),
bbox_head=dict(in_channels=192, feat_channels=192),
)
| 266 | 28.666667 | 79 | py |
mmdetection | mmdetection-master/configs/yolox/yolox_nano_8x8_300e_coco.py | _base_ = './yolox_tiny_8x8_300e_coco.py'
# model settings
model = dict(
backbone=dict(deepen_factor=0.33, widen_factor=0.25, use_depthwise=True),
neck=dict(
in_channels=[64, 128, 256],
out_channels=64,
num_csp_blocks=1,
use_depthwise=True),
bbox_head=dict(in_channels=64, fea... | 356 | 28.75 | 77 | py |
mmdetection | mmdetection-master/configs/yolox/yolox_s_8x8_300e_coco.py | _base_ = ['../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py']
img_scale = (640, 640) # height, width
# model settings
model = dict(
type='YOLOX',
input_size=img_scale,
random_size_range=(15, 25),
random_size_interval=10,
backbone=dict(type='CSPDarknet', deepen_factor=0.33, widen... | 4,997 | 29.108434 | 79 | py |
mmdetection | mmdetection-master/configs/yolox/yolox_tiny_8x8_300e_coco.py | _base_ = './yolox_s_8x8_300e_coco.py'
# model settings
model = dict(
random_size_range=(10, 20),
backbone=dict(deepen_factor=0.33, widen_factor=0.375),
neck=dict(in_channels=[96, 192, 384], out_channels=96),
bbox_head=dict(in_channels=96, feat_channels=96))
img_scale = (640, 640) # height, width
tra... | 1,821 | 29.881356 | 76 | py |
mmdetection | mmdetection-master/configs/yolox/yolox_x_8x8_300e_coco.py | _base_ = './yolox_s_8x8_300e_coco.py'
# model settings
model = dict(
backbone=dict(deepen_factor=1.33, widen_factor=1.25),
neck=dict(
in_channels=[320, 640, 1280], out_channels=320, num_csp_blocks=4),
bbox_head=dict(in_channels=320, feat_channels=320))
| 274 | 29.555556 | 74 | py |
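Across the YOLOX variants above (nano/tiny/m/l/x), the neck `in_channels` are just the base widths `[256, 512, 1024]` (the `widen_factor=1.0` "l" variant) scaled by each variant's `widen_factor`. A quick sketch confirming the numbers hard-coded in these configs (the helper is purely illustrative, not mmdet code):

```python
BASE_WIDTHS = (256, 512, 1024)  # channel widths at widen_factor=1.0 ("l")

def neck_in_channels(widen_factor):
    """Scale the base widths by widen_factor, as the YOLOX configs do."""
    return [int(w * widen_factor) for w in BASE_WIDTHS]

widen_factors = {'nano': 0.25, 'tiny': 0.375, 'm': 0.75, 'l': 1.0, 'x': 1.25}
for name, wf in widen_factors.items():
    print(name, neck_in_channels(wf))
# nano -> [64, 128, 256], tiny -> [96, 192, 384], m -> [192, 384, 768],
# l -> [256, 512, 1024], x -> [320, 640, 1280]
```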
mmdetection | mmdetection-master/demo/create_result_gif.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import os
import os.path as osp
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
import mmcv
import numpy as np
try:
import imageio
except ImportError:
imageio = None
def parse_args():
parser = argparse.ArgumentParser(d... | 4,930 | 29.067073 | 79 | py |
mmdetection | mmdetection-master/demo/image_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
import asyncio
from argparse import ArgumentParser
from mmdet.apis import (async_inference_detector, inference_detector,
init_detector, show_result_pyplot)
def parse_args():
parser = ArgumentParser()
parser.add_argument('img', help='Imag... | 2,164 | 30.376812 | 79 | py |
mmdetection | mmdetection-master/demo/video_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import cv2
import mmcv
from mmdet.apis import inference_detector, init_detector
def parse_args():
parser = argparse.ArgumentParser(description='MMDetection video demo')
parser.add_argument('video', help='Video file')
parser.add_argument('co... | 1,974 | 30.854839 | 76 | py |
mmdetection | mmdetection-master/demo/video_gpuaccel_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import cv2
import mmcv
import numpy as np
import torch
from torchvision.transforms import functional as F
from mmdet.apis import init_detector
from mmdet.datasets.pipelines import Compose
try:
import ffmpegcv
except ImportError:
raise ImportErro... | 3,892 | 33.149123 | 77 | py |
mmdetection | mmdetection-master/demo/webcam_demo.py | # Copyright (c) OpenMMLab. All rights reserved.
import argparse
import cv2
import torch
from mmdet.apis import inference_detector, init_detector
def parse_args():
parser = argparse.ArgumentParser(description='MMDetection webcam demo')
parser.add_argument('config', help='test config file path')
parser.ad... | 1,308 | 26.270833 | 78 | py |
mmdetection | mmdetection-master/docker/serve/entrypoint.sh | #!/bin/bash
set -e
if [[ "$1" = "serve" ]]; then
shift 1
torchserve --start --ts-config /home/model-server/config.properties
else
eval "$@"
fi
# prevent docker exit
tail -f /dev/null
| 197 | 14.230769 | 71 | sh |
mmdetection | mmdetection-master/docs/en/1_exist_data_model.md | # 1: Inference and train with existing models and standard datasets
MMDetection provides hundreds of existing detection models in [Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html), and supports multiple standard datasets, including Pascal VOC, COCO, CityScapes, LVIS, etc. This note... | 27,696 | 38.680516 | 736 | md |
mmdetection | mmdetection-master/docs/en/2_new_data_model.md | # 2: Train with customized datasets
In this note, you will learn how to run inference with, test, and train predefined models on customized datasets. We use the [balloon dataset](https://github.com/matterport/Mask_RCNN/tree/master/samples/balloon) as an example to describe the whole process.
The basic steps are as below:
1.... | 7,204 | 25.985019 | 345 | md |
mmdetection | mmdetection-master/docs/en/3_exist_data_new_model.md | # 3: Train with customized models and standard datasets
In this note, you will learn how to train, test, and run inference with your own customized models on standard datasets. We use the cityscapes dataset to train a customized Cascade Mask R-CNN R50 model as an example to demonstrate the whole process, which uses [`AugFPN`... | 10,219 | 34.985915 | 405 | md |
mmdetection | mmdetection-master/docs/en/changelog.md | ## Changelog
### v2.28.2 (24/2/2023)
#### New Features and Improvements
- Add Twitter, Discord, Medium and YouTube link (#9774)
- Update `customize_runtime.md` (#9797)
#### Bug Fixes
- Fix `WIDERFace SSD` loss for Nan problem (#9734)
- Fix missing API documentation in Readthedoc (#9729)
- Fix the configuration fil... | 84,827 | 43.693361 | 754 | md |
mmdetection | mmdetection-master/docs/en/compatibility.md | # Compatibility of MMDetection 2.x
## MMDetection 2.25.0
In order to support Mask2Former for instance segmentation, the original config files of Mask2Former for panoptic segmentation need to be renamed [PR #7571](https://github.com/open-mmlab/mmdetection/pull/7571).
<table align="center">
<thead>
<tr ali... | 13,244 | 72.994413 | 578 | md |
mmdetection | mmdetection-master/docs/en/conf.py | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If ex... | 3,439 | 28.401709 | 79 | py |
mmdetection | mmdetection-master/docs/en/conventions.md | # Conventions
Please check the following conventions if you would like to modify MMDetection for your own project.
## Loss
In MMDetection, a `dict` containing losses and metrics will be returned by `model(**data)`.
For example, in bbox head,
```python
class BBoxHead(nn.Module):
...
def loss(self, ...):
... | 3,252 | 40.177215 | 226 | md |
mmdetection | mmdetection-master/docs/en/faq.md | # Frequently Asked Questions
We list some common troubles faced by many users and their corresponding solutions here. Feel free to enrich the list if you find any frequent issues and have ways to help others to solve them. If the contents here do not cover your issue, please create an issue using the [provided templat... | 16,218 | 66.298755 | 723 | md |
mmdetection | mmdetection-master/docs/en/get_started.md | # Prerequisites
In this section we demonstrate how to prepare an environment with PyTorch.
MMDetection works on Linux, Windows and macOS. It requires Python 3.7+, CUDA 9.2+ and PyTorch 1.5+.
```{note}
If you are experienced with PyTorch and have already installed it, just skip this part and jump to the [next section... | 8,263 | 38.54067 | 450 | md |
mmdetection | mmdetection-master/docs/en/model_zoo.md | # Benchmark and Model Zoo
## Mirror sites
We only use aliyun to maintain the model zoo since MMDetection V2.0. The model zoo of V1.x has been deprecated.
## Common settings
- All models were trained on `coco_2017_train` and tested on `coco_2017_val`.
- We use distributed training.
- All pytorch-style pretraine... | 22,922 | 62.14876 | 654 | md |
mmdetection | mmdetection-master/docs/en/projects.md | # Projects based on MMDetection
There are many projects built upon MMDetection.
We list some of them as examples of how to extend MMDetection for your own projects.
As this page might not be complete, please feel free to create a PR to update it.
## Projects as an extension
Some projects extend the boundary o... | 8,495 | 143 | 338 | md |
mmdetection | mmdetection-master/docs/en/robustness_benchmarking.md | # Corruption Benchmarking
## Introduction
We provide tools to test object detection and instance segmentation models on the image corruption benchmark defined in [Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming](https://arxiv.org/abs/1907.07484).
This page provides basic tutorial... | 5,984 | 52.918919 | 242 | md |
mmdetection | mmdetection-master/docs/en/stat.py | #!/usr/bin/env python
import functools as func
import glob
import os.path as osp
import re
import numpy as np
url_prefix = 'https://github.com/open-mmlab/mmdetection/blob/master/configs'
files = sorted(glob.glob('../../configs/*/README.md'))
stats = []
titles = []
num_ckpts = 0
for f in files:
url = osp.dirnam... | 1,539 | 22.692308 | 76 | py |
mmdetection | mmdetection-master/docs/en/switch_language.md | ## <a href='https://mmdetection.readthedocs.io/en/latest/'>English</a>
## <a href='https://mmdetection.readthedocs.io/zh_CN/latest/'>简体中文</a>
| 143 | 35 | 70 | md |
mmdetection | mmdetection-master/docs/en/useful_tools.md | Apart from training/testing scripts, We provide lots of useful tools under the
`tools/` directory.
## Log Analysis
`tools/analysis_tools/analyze_logs.py` plots loss/mAP curves given a training
log file. Run `pip install seaborn` first to install the dependency.
```shell
python tools/analysis_tools/analyze_logs.py pl... | 20,350 | 33.49322 | 456 | md |
mmdetection | mmdetection-master/docs/en/_static/css/readthedocs.css | .header-logo {
background-image: url("../image/mmdet-logo.png");
background-size: 156px 40px;
height: 40px;
width: 156px;
}
| 140 | 19.142857 | 53 | css |
mmdetection | mmdetection-master/docs/en/device/npu.md | # NPU (HUAWEI Ascend)
## Usage
Please refer to the [building documentation of MMCV](https://mmcv.readthedocs.io/en/latest/get_started/build.html#build-mmcv-full-on-ascend-npu-machine) to install MMCV on NPU devices.
The following command uses 8 NPUs to train the model:
```shell
bash tool... | 5,844 | 103.375 | 282 | md |
mmdetection | mmdetection-master/docs/en/tutorials/config.md | # Tutorial 1: Learn about Configs
We incorporate modular and inheritance design into our config system, which is convenient to conduct various experiments.
If you wish to inspect the config file, you may run `python tools/misc/print_config.py /PATH/TO/CONFIG` to see the complete config.
## Modify config through scrip... | 33,858 | 60.338768 | 251 | md |
mmdetection | mmdetection-master/docs/en/tutorials/customize_dataset.md | # Tutorial 2: Customize Datasets
## Support new data format
To support a new data format, you can either convert them to existing formats (COCO format or PASCAL format) or directly convert them to the middle format. You could also choose to convert them offline (before training by a script) or online (implement a new... | 18,750 | 33.532228 | 749 | md |
mmdetection | mmdetection-master/docs/en/tutorials/customize_losses.md | # Tutorial 6: Customize Losses
MMDetection provides users with different loss functions, but the default configuration may not be applicable to every dataset or model, so users may want to modify a specific loss to adapt to a new situation.
This tutorial first elaborates the computation pipeline of losses, then ... | 4,777 | 36.622047 | 637 | md |
mmdetection | mmdetection-master/docs/en/tutorials/customize_models.md | # Tutorial 4: Customize Models
We basically categorize model components into 5 types.
- backbone: usually an FCN network to extract feature maps, e.g., ResNet, MobileNet.
- neck: the component between backbones and heads, e.g., FPN, PAFPN.
- head: the component for specific tasks, e.g., bbox prediction and mask predi... | 10,216 | 27.068681 | 204 | md |
mmdetection | mmdetection-master/docs/en/tutorials/customize_runtime.md | # Tutorial 5: Customize Runtime Settings
## Customize optimization settings
### Customize optimizer supported by Pytorch
We already support using all the optimizers implemented by PyTorch, and the only modification is to change the `optimizer` field of config files.
For example, if you want to use `ADAM` (note that... | 12,070 | 36.256173 | 417 | md |
mmdetection | mmdetection-master/docs/en/tutorials/data_pipeline.md | # Tutorial 3: Customize Data Pipelines
## Design of Data pipelines
Following typical conventions, we use `Dataset` and `DataLoader` for data loading
with multiple workers. `Dataset` returns a dict of data items corresponding to
the arguments of the model's forward method.
Since the data in object detection may not be the sa... | 5,606 | 27.035 | 241 | md |
mmdetection | mmdetection-master/docs/en/tutorials/finetune.md | # Tutorial 7: Finetuning Models
Detectors pre-trained on the COCO dataset can serve as a good pre-trained model for other datasets, e.g., CityScapes and KITTI Dataset.
This tutorial provides instructions for using the models provided in the [Model Zoo](../model_zoo.md) on other datasets to obtain better perform... | 3,936 | 42.744444 | 432 | md |
mmdetection | mmdetection-master/docs/en/tutorials/how_to.md | # Tutorial 11: How to xxx
This tutorial collects answers to any `How to xxx with MMDetection`. Feel free to update this doc if you meet new questions about `How to` and find the answers!
## Use backbone network through MMClassification
The model registries in MMDet, MMCls, and MMSeg all inherit from the root registry in M... | 8,374 | 39.853659 | 374 | md |
mmdetection | mmdetection-master/docs/en/tutorials/init_cfg.md | # Tutorial 10: Weight initialization
During training, a proper initialization strategy is beneficial for speeding up training or obtaining higher performance. [MMCV](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/weight_init.py) provides some commonly used methods for initializing modules like `nn.C... | 6,251 | 37.592593 | 444 | md |
mmdetection | mmdetection-master/docs/en/tutorials/onnx2tensorrt.md | # Tutorial 9: ONNX to TensorRT (Experimental)
> ## [Try the new MMDeploy to deploy your model](https://mmdeploy.readthedocs.io/)
<!-- TOC -->
- [Tutorial 9: ONNX to TensorRT (Experimental)](#tutorial-9-onnx-to-tensorrt-experimental)
- [How to convert models from ONNX to TensorRT](#how-to-convert-models-from-onnx-t... | 5,762 | 52.859813 | 300 | md |
mmdetection | mmdetection-master/docs/en/tutorials/pytorch2onnx.md | # Tutorial 8: Pytorch to ONNX (Experimental)
> ## [Try the new MMDeploy to deploy your model](https://mmdeploy.readthedocs.io/)
<!-- TOC -->
- [Tutorial 8: Pytorch to ONNX (Experimental)](#tutorial-8-pytorch-to-onnx-experimental)
- [How to convert models from Pytorch to ONNX](#how-to-convert-models-from-pytorch-to... | 17,669 | 51.746269 | 596 | md |
mmdetection | mmdetection-master/docs/en/tutorials/test_results_submission.md | # Tutorial 12: Test Results Submission
## Panoptic segmentation test results submission
The following sections introduce how to produce the prediction results of panoptic segmentation models on the COCO test-dev set and submit the predictions to [COCO evaluation server](https://competitions.codalab.org/competitions/1... | 4,953 | 42.840708 | 484 | md |
mmdetection | mmdetection-master/docs/en/tutorials/useful_hooks.md | # Tutorial 13: Useful Hooks
MMDetection and MMCV provide users with various useful hooks including log hooks, evaluation hooks, NumClassCheckHook, etc. This tutorial introduces the functionalities and usages of hooks implemented in MMDetection. For using hooks in MMCV, please read the [API documentation in MMCV](https... | 3,774 | 43.940476 | 368 | md |
mmdetection | mmdetection-master/docs/zh_cn/1_exist_data_model.md | # 1: 使用已有模型在标准数据集上进行推理
MMDetection 在 [Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html) 中提供了数以百计的检测模型,并支持多种标准数据集,包括 Pascal VOC,COCO,Cityscapes,LVIS 等。这份文档将会讲述如何使用这些模型和标准数据集来运行一些常见的任务,包括:
- 使用现有模型在给定图片上进行推理
- 在标准数据集上测试现有模型
- 在标准数据集上训练预定义的模型
## 使用现有模型进行推理
推理是指使用训练好的模型来检测图像上的目标。在 MMDetection 中,一个... | 19,333 | 27.474227 | 394 | md |
mmdetection | mmdetection-master/docs/zh_cn/2_new_data_model.md | # 2: 在自定义数据集上进行训练
通过本文档,你将会知道如何使用自定义数据集对预先定义好的模型进行推理,测试以及训练。我们使用 [balloon dataset](https://github.com/matterport/Mask_RCNN/tree/master/samples/balloon) 作为例子来描述整个过程。
基本步骤如下:
1. 准备自定义数据集
2. 准备配置文件
3. 在自定义数据集上进行训练,测试和推理。
## 准备自定义数据集
MMDetection 一共支持三种形式应用新数据集:
1. 将数据集重新组织为 COCO 格式。
2. 将数据集重新组织为一个中间格式。
3. 实现一个新的数据集。
... | 5,528 | 19.630597 | 183 | md |
mmdetection | mmdetection-master/docs/zh_cn/3_exist_data_new_model.md | # 3: 在标准数据集上训练自定义模型
在本文中,你将知道如何在标准数据集上训练、测试和推理自定义模型。我们将在 cityscapes 数据集上以自定义 Cascade Mask R-CNN R50 模型为例演示整个过程,为了方便说明,我们将 neck 模块中的 `FPN` 替换为 `AugFPN`,并且在训练中的自动增强类中增加 `Rotate` 或 `Translate`。
基本步骤如下所示:
1. 准备标准数据集
2. 准备你的自定义模型
3. 准备配置文件
4. 在标准数据集上对模型进行训练、测试和推理
## 准备标准数据集
在本文中,我们使用 cityscapes 标准数据集为例进行说明。
推荐将数据集根路径采... | 8,105 | 27.542254 | 227 | md |
mmdetection | mmdetection-master/docs/zh_cn/article.md | ## 中文解读文案汇总
### 1 官方解读文案
#### 1.1 框架解读
- **[轻松掌握 MMDetection 整体构建流程(一)](https://zhuanlan.zhihu.com/p/337375549)**
- **[轻松掌握 MMDetection 整体构建流程(二)](https://zhuanlan.zhihu.com/p/341954021)**
- **[轻松掌握 MMDetection 中 Head 流程](https://zhuanlan.zhihu.com/p/343433169)**
#### 1.2 算法解读
- **[轻松掌握 MMDetection 中常用算法(一):Retina... | 2,777 | 50.444444 | 109 | md |
mmdetection | mmdetection-master/docs/zh_cn/compatibility.md | # MMDetection v2.x 兼容性说明
## MMDetection 2.25.0
为了加入 Mask2Former 实例分割模型,对 Mask2Former 的配置文件进行了重命名 [PR #7571](https://github.com/open-mmlab/mmdetection/pull/7571):
<table align="center">
<thead>
<tr align='center'>
<td>在 v2.25.0 之前</td>
<td>v2.25.0 及之后</td>
</tr>
</thead... | 7,409 | 40.629213 | 240 | md |
mmdetection | mmdetection-master/docs/zh_cn/conf.py | # Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If ex... | 3,461 | 28.092437 | 79 | py |
mmdetection | mmdetection-master/docs/zh_cn/conventions.md | # 默认约定
如果你想把 MMDetection 修改为自己的项目,请遵循下面的约定。
## 损失
在 MMDetection 中,`model(**data)` 的返回值是一个字典,包含着所有的损失和评价指标,他们将会由 `model(**data)` 返回。
例如,在 bbox head 中,
```python
class BBoxHead(nn.Module):
...
def loss(self, ...):
losses = dict()
# 分类损失
losses['loss_cls'] = self.loss_cls(...)
... | 2,283 | 29.052632 | 150 | md |
mmdetection | mmdetection-master/docs/zh_cn/faq.md | # 常见问题解答
我们在这里列出了使用时的一些常见问题及其相应的解决方案。 如果您发现有一些问题被遗漏,请随时提 PR 丰富这个列表。 如果您无法在此获得帮助,请使用 [issue模板](https://github.com/open-mmlab/mmdetection/blob/master/.github/ISSUE_TEMPLATE/error-report.md/)创建问题,但是请在模板中填写所有必填信息,这有助于我们更快定位问题。
## MMCV 安装相关
- MMCV 与 MMDetection 的兼容问题: "ConvWS is already registered in conv layer"; "Assert... | 7,554 | 45.349693 | 393 | md |
mmdetection | mmdetection-master/docs/zh_cn/get_started.md | ## 依赖
- Linux 和 macOS (Windows 理论上支持)
- Python 3.7 +
- PyTorch 1.3+
- CUDA 9.2+ (如果基于 PyTorch 源码安装,也能够支持 CUDA 9.0)
- GCC 5+
- [MMCV](https://mmcv.readthedocs.io/en/latest/#installation)
MMDetection 和 MMCV 版本兼容性如下所示,需要安装正确的 MMCV 版本以避免安装出现问题。
| MMDetection 版本 | MMCV 版本 |
| :--------------: | :--------... | 9,100 | 33.343396 | 466 | md |
mmdetection | mmdetection-master/docs/zh_cn/model_zoo.md | # 模型库
## 镜像地址
从 MMDetection V2.0 起,我们只通过阿里云维护模型库。V1.x 版本的模型已经弃用。
## 共同设置
- 所有模型都是在 `coco_2017_train` 上训练,在 `coco_2017_val` 上测试。
- 我们使用分布式训练。
- 所有 pytorch-style 的 ImageNet 预训练主干网络来自 PyTorch 的模型库,caffe-style 的预训练主干网络来自 detectron2 最新开源的模型。
- 为了与其他代码库公平比较,文档中所写的 GPU 内存是8个 GPU 的 `torch.cuda.max_memory_allocated()` 的最大值,... | 18,583 | 53.982249 | 932 | md |
mmdetection | mmdetection-master/docs/zh_cn/projects.md | # 基于 MMDetection 的项目
有许多开源项目都是基于 MMDetection 搭建的,我们在这里列举一部分作为样例,展示如何基于 MMDetection 搭建您自己的项目。
由于这个页面列举的项目并不完全,我们欢迎社区提交 Pull Request 来更新这个文档。
## MMDetection 的拓展项目
一些项目拓展了 MMDetection 的边界,如将 MMDetection 拓展支持 3D 检测或者将 MMDetection 用于部署。
它们展示了 MMDetection 的许多可能性,所以我们在这里也列举一些。
- [OTEDetection](https://github.com/opencv/mm... | 6,521 | 132.102041 | 338 | md |
mmdetection | mmdetection-master/docs/zh_cn/robustness_benchmarking.md | # 检测器鲁棒性检查
## 介绍
我们提供了在 [Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming](https://arxiv.org/abs/1907.07484) 中定义的「图像损坏基准测试」上测试目标检测和实例分割模型的工具。
此页面提供了如何使用该基准测试的基本教程。
```latex
@article{michaelis2019winter,
title={Benchmarking Robustness in Object Detection:
Autonomous Driving ... | 4,966 | 44.154545 | 233 | md |
mmdetection | mmdetection-master/docs/zh_cn/stat.py | #!/usr/bin/env python
import functools as func
import glob
import os.path as osp
import re
import numpy as np
url_prefix = 'https://github.com/open-mmlab/mmdetection/blob/master/'
files = sorted(glob.glob('../configs/*/README.md'))
stats = []
titles = []
num_ckpts = 0
for f in files:
url = osp.dirname(f.replac... | 1,519 | 22.384615 | 74 | py |
mmdetection | mmdetection-master/docs/zh_cn/switch_language.md | ## <a href='https://mmdetection.readthedocs.io/en/latest/'>English</a>
## <a href='https://mmdetection.readthedocs.io/zh_CN/latest/'>简体中文</a>
| 143 | 35 | 70 | md |
mmdetection | mmdetection-master/docs/zh_cn/useful_tools.md | ## 日志分析
| 8 | 3.5 | 7 | md |
mmdetection | mmdetection-master/docs/zh_cn/_static/css/readthedocs.css | .header-logo {
background-image: url("../image/mmdet-logo.png");
background-size: 156px 40px;
height: 40px;
width: 156px;
}
| 140 | 19.142857 | 53 | css |
mmdetection | mmdetection-master/docs/zh_cn/device/npu.md | # NPU (华为 昇腾)
## 使用方法
请参考 [MMCV 的安装文档](https://mmcv.readthedocs.io/en/latest/get_started/build.html#build-mmcv-full-on-ascend-npu-machine) 来安装 NPU 版本的 MMCV。
以下展示单机八卡场景的运行指令:
```shell
bash tools/dist_train.sh configs/ssd/ssd300_coco.py 8
```
以下展示单机单卡下的运行指令:
```shell
python tools/train.py configs/ssd/ssd300_coco.py... | 4,971 | 89.4 | 282 | md |
mmdetection | mmdetection-master/docs/zh_cn/tutorials/config.md | # 教程 1: 学习配置文件
我们在配置文件中支持了继承和模块化,这便于进行各种实验。如果需要检查配置文件,可以通过运行 `python tools/misc/print_config.py /PATH/TO/CONFIG` 来查看完整的配置。
## 通过脚本参数修改配置
当运行 `tools/train.py` 和 `tools/test.py` 时,可以通过 `--cfg-options` 来修改配置文件。
- 更新字典链中的配置
可以按照原始配置文件中的 dict 键顺序地指定配置预选项。例如,使用 `--cfg-options model.backbone.norm_eval=False` 将模型主干网络中的所... | 24,350 | 45.032136 | 236 | md |
mmdetection | mmdetection-master/docs/zh_cn/tutorials/customize_dataset.md | # 教程 2: 自定义数据集
## 支持新的数据格式
为了支持新的数据格式,可以选择将数据转换成现成的格式(COCO 或者 PASCAL)或将其转换成中间格式。当然也可以选择以离线的形式(在训练之前使用脚本转换)或者在线的形式(实现一个新的 dataset 在训练中进行转换)来转换数据。
在 MMDetection 中,建议将数据转换成 COCO 格式并以离线的方式进行,因此在完成数据转换后只需修改配置文件中的标注数据的路径和类别即可。
### 将新的数据格式转换为现有的数据格式
最简单的方法就是将你的数据集转换成现有的数据格式(COCO 或者 PASCAL VOC)
COCO 格式的 json 标注文件有如下必要的字段... | 11,384 | 23.912473 | 335 | md |
mmdetection | mmdetection-master/docs/zh_cn/tutorials/customize_losses.md | # 教程 6: 自定义损失函数
MMDetection 为用户提供了不同的损失函数。但是默认的配置可能无法适应不同的数据和模型,所以用户可能会希望修改某一个损失函数来适应新的情况。
本教程首先详细的解释计算损失的过程然后给出一些关于如何修改每一个步骤的指导。对损失的修改可以被分为微调和加权。
## 一个损失的计算过程
给定输入(包括预测和目标,以及权重),损失函数会把输入的张量映射到最后的损失标量。映射过程可以分为下面五个步骤:
1. 设置采样方法为对正负样本进行采样。
2. 通过损失核函数获取**元素**或者**样本**损失。
3. 通过权重张量来给损失**逐元素**权重。
4. 把损失张量归纳为一个**标量**。... | 2,932 | 22.277778 | 440 | md |
mmdetection | mmdetection-master/docs/zh_cn/tutorials/customize_models.md | # 教程 4: 自定义模型
我们简单地把模型的各个组件分为五类:
- 主干网络 (backbone):通常是一个用来提取特征图 (feature map) 的全卷积网络 (FCN network),例如:ResNet, MobileNet。
- Neck:主干网络和 Head 之间的连接部分,例如:FPN, PAFPN。
- Head:用于具体任务的组件,例如:边界框预测和掩码预测。
- 区域提取器 (roi extractor):从特征图中提取 RoI 特征,例如:RoI Align。
- 损失 (loss):在 Head 组件中用于计算损失的部分,例如:FocalLoss, L1Loss, GHMLoss.
## 开发新的... | 8,781 | 23.394444 | 138 | md |
mmdetection | mmdetection-master/docs/zh_cn/tutorials/customize_runtime.md | # 教程 5: 自定义训练配置
| 16 | 7.5 | 15 | md |
mmdetection | mmdetection-master/docs/zh_cn/tutorials/data_pipeline.md | # 教程 3: 自定义数据预处理流程
## 数据流程的设计
按照惯例,我们使用 `Dataset` 和 `DataLoader` 进行多进程的数据加载。`Dataset` 返回字典类型的数据,数据内容为模型 `forward` 方法的各个参数。由于在目标检测中,输入的图像数据具有不同的大小,我们在 `MMCV` 里引入一个新的 `DataContainer` 类去收集和分发不同大小的输入数据。更多细节请参考[这里](https://github.com/open-mmlab/mmcv/blob/master/mmcv/parallel/data_container.py)。
数据的准备流程和数据集是解耦的。通常一个数据集定义了... | 4,400 | 22.041885 | 260 | md |
mmdetection | mmdetection-master/docs/zh_cn/tutorials/finetune.md | # Tutorial 7: Finetuning Models

Detectors pre-trained on the COCO dataset can serve as good pre-trained models for other datasets, e.g. CityScapes and KITTI.

This tutorial guides users on how to use the models provided in the [ModelZoo](../model_zoo.md) on other datasets to obtain better performance.

There are two steps to finetune a model on a new dataset:

- Add support for the new dataset following [Tutorial 2: Customize Datasets](customize_dataset.md).
- Modify the config as discussed in this tutorial.

The following takes the finetuning process on the Cityscapes dataset as an example to describe the five ... that users need to modify in the config | 2,742 | 30.170455 | 315 | md
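A sketch of the usual shape of such a finetuning config: inherit base configs, shrink the heads to the new number of classes, and start from a released checkpoint. The `_base_` paths and the checkpoint path below are placeholders, not verified file names:

```python
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/cityscapes_instance.py',
    '../_base_/default_runtime.py',
]

# Cityscapes has 8 instance classes instead of COCO's 80.
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=8),
        mask_head=dict(num_classes=8)))

# Initialize from a COCO-pretrained checkpoint (placeholder path).
load_from = 'checkpoints/mask_rcnn_r50_fpn_1x_coco.pth'
```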
mmdetection | mmdetection-master/docs/zh_cn/tutorials/how_to.md | # Tutorial 11: How to xxx

This tutorial collects answers on how to do xxx with MMDetection. Feel free to update this document if you come across such how-to questions and answers!

## Use backbone networks from MMClassification

The model registries in MMDet, MMCls, and MMSeg all inherit from the root registry in MMCV, which allows these repositories to directly use modules already implemented by each other. Users can therefore use backbone networks from MMClassification in MMDetection without having to re-implement networks that already exist in MMClassification.

### Use a backbone network implemented in MMClassification

Suppose you want to use `MobileNetV3-sma... | 6,933 | 32.990196 | 325 | md
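The usual pattern in MMDet 2.x is to register MMClassification's models via `custom_imports` and refer to the backbone with the `mmcls.` scope prefix. A sketch (the backbone type and arguments here are illustrative; check the MMCls model zoo for the exact class names):

```python
custom_imports = dict(imports=['mmcls.models'], allow_failed_imports=False)

model = dict(
    backbone=dict(
        _delete_=True,               # drop the backbone settings from the base config
        type='mmcls.MobileNetV3',    # 'mmcls.' prefix selects the MMCls registry
        arch='small'),
)
```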
mmdetection | mmdetection-master/docs/zh_cn/tutorials/init_cfg.md | # Tutorial 10: Weight initialization

During training, a proper initialization strategy is beneficial for speeding up training or obtaining higher performance. [MMCV](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/weight_init.py) provides some commonly used methods for initializing modules such as `nn.Conv2d`. Model initialization in MMDetection mainly uses `init_cfg`. Users can initialize models with the following two steps:

1. Define `init_cfg` for a model or its components in `model_cfg`. Note that the `init_cfg` of child components has higher priority and overrides the `init_cfg` of the parent modules.
2. Build the model as usual, then ... | 4,579 | 27.271605 | 207 | md
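A minimal sketch of step 1. The `Pretrained` initializer with a `torchvision://` checkpoint is the same pattern used by the YOLACT configs in this repository; the depth and checkpoint below are just one common choice:

```python
model = dict(
    backbone=dict(
        type='ResNet',
        depth=50,
        # Step 1: attach an init_cfg; this one loads torchvision's
        # pretrained ResNet-50 weights into the backbone.
        init_cfg=dict(type='Pretrained',
                      checkpoint='torchvision://resnet50')))
```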
mmdetection | mmdetection-master/docs/zh_cn/tutorials/onnx2tensorrt.md | # Tutorial 9: ONNX to TensorRT model conversion (experimental support)

> ## [Try the new MMDeploy to deploy your model](https://mmdeploy.readthedocs.io/)

<!-- TOC -->

- [Tutorial 9: ONNX to TensorRT model conversion (experimental support)](#%E6%95%99%E7%A8%8B-9-onnx-%E5%88%B0-tensorrt-%E7%9A%84%E6%A8%A1%E5%9E%8B%E8%BD%AC%E6%8D%A2%E5%AE%9E%E9%AA%8C%E6%80%A7%E6%94%AF%E6%8C%81)
  - [How to convert models from ONNX to Tens... | 4,722 | 43.140187 | 263 | md
mmdetection | mmdetection-master/docs/zh_cn/tutorials/pytorch2onnx.md | # Tutorial 8: Pytorch to ONNX model conversion (experimental support)

> ## [Try the new MMDeploy to deploy your model](https://mmdeploy.readthedocs.io/)
| 102 | 24.75 | 64 | md |
mmdetection | mmdetection-master/mmdet/__init__.py | # Copyright (c) OpenMMLab. All rights reserved.
import mmcv
from .version import __version__, short_version
def digit_version(version_str):
    digit_version = []
    for x in version_str.split('.'):
        if x.isdigit():
            digit_version.append(int(x))
        elif x.find('rc') != -1:
            patch_v... | 909 | 29.333333 | 77 | py
mmdetection | mmdetection-master/mmdet/version.py | # Copyright (c) OpenMMLab. All rights reserved.
__version__ = '2.28.2'
short_version = __version__
def parse_version_info(version_str):
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif x.find('rc') != -1:
            patch_version... | 529 | 25.5 | 56 | py
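Both files above follow the same parsing idea: integer segments become ints, and a release-candidate segment like `0rc1` is split into a patch number plus an `rc` tag. A standalone sketch of that idea, completed by analogy (the truncated originals may differ in detail):

```python
def parse_version(version_str):
    # '2.28.2' -> (2, 28, 2); '2.28.0rc1' -> (2, 28, 0, 'rc1')
    info = []
    for x in version_str.split('.'):
        if x.isdigit():
            info.append(int(x))
        elif 'rc' in x:
            patch, _, rc = x.partition('rc')
            info.append(int(patch))
            info.append(f'rc{rc}')
    return tuple(info)

print(parse_version('2.28.2'))     # (2, 28, 2)
print(parse_version('2.28.0rc1'))  # (2, 28, 0, 'rc1')
```

Tuples of this shape compare correctly with `<`, which is the usual reason for parsing versions this way.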
mmdetection | mmdetection-master/mmdet/apis/__init__.py | # Copyright (c) OpenMMLab. All rights reserved.
from .inference import (async_inference_detector, inference_detector,
init_detector, show_result_pyplot)
from .test import multi_gpu_test, single_gpu_test
from .train import (get_root_logger, init_random_seed, set_random_seed,
t... | 563 | 42.384615 | 76 | py |
mmdetection | mmdetection-master/mmdet/apis/inference.py | # Copyright (c) OpenMMLab. All rights reserved.
import warnings
from pathlib import Path
import mmcv
import numpy as np
import torch
from mmcv.ops import RoIPool
from mmcv.parallel import collate, scatter
from mmcv.runner import load_checkpoint
from mmdet.core import get_classes
from mmdet.datasets import replace_Ima... | 8,629 | 32.449612 | 79 | py |
mmdetection | mmdetection-master/mmdet/apis/test.py | # Copyright (c) OpenMMLab. All rights reserved.
import os.path as osp
import pickle
import shutil
import tempfile
import time
import mmcv
import torch
import torch.distributed as dist
from mmcv.image import tensor2imgs
from mmcv.runner import get_dist_info
from mmdet.core import encode_mask_results
def single_gpu_t... | 7,817 | 36.228571 | 79 | py |
mmdetection | mmdetection-master/mmdet/apis/train.py | # Copyright (c) OpenMMLab. All rights reserved.
import os
import random
import numpy as np
import torch
import torch.distributed as dist
from mmcv.runner import (DistSamplerSeedHook, EpochBasedRunner,
Fp16OptimizerHook, OptimizerHook, build_runner,
get_dist_info)
from... | 8,379 | 32.927126 | 79 | py |
mmdetection | mmdetection-master/mmdet/core/__init__.py | # Copyright (c) OpenMMLab. All rights reserved.
from .anchor import * # noqa: F401, F403
from .bbox import * # noqa: F401, F403
from .data_structures import * # noqa: F401, F403
from .evaluation import * # noqa: F401, F403
from .hook import * # noqa: F401, F403
from .mask import * # noqa: F401, F403
from .optimiz... | 445 | 39.545455 | 50 | py |
mmdetection | mmdetection-master/mmdet/core/anchor/__init__.py | # Copyright (c) OpenMMLab. All rights reserved.
from .anchor_generator import (AnchorGenerator, LegacyAnchorGenerator,
YOLOAnchorGenerator)
from .builder import (ANCHOR_GENERATORS, PRIOR_GENERATORS,
build_anchor_generator, build_prior_generator)
from .point_generator... | 720 | 47.066667 | 73 | py |
mmdetection | mmdetection-master/mmdet/core/anchor/anchor_generator.py | # Copyright (c) OpenMMLab. All rights reserved.
import warnings
import mmcv
import numpy as np
import torch
from torch.nn.modules.utils import _pair
from .builder import PRIOR_GENERATORS
@PRIOR_GENERATORS.register_module()
class AnchorGenerator:
    """Standard anchor generator for 2D anchor-based detectors.

    A... | 37,205 | 41.913495 | 79 | py
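To give a feel for what the (truncated) generator does, here is a sketch of how base anchors are typically built from a `base_size`, `scales`, and `ratios` (ratio = h / w). This is our simplification; the real method also handles anchor centers and offsets:

```python
import math

def gen_base_anchors_sketch(base_size, scales, ratios):
    # For each (ratio, scale) pair, produce one anchor [x1, y1, x2, y2]
    # centered at the origin.
    anchors = []
    for r in ratios:
        h_ratio = math.sqrt(r)
        w_ratio = 1.0 / h_ratio
        for s in scales:
            w = base_size * s * w_ratio
            h = base_size * s * h_ratio
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return anchors

print(gen_base_anchors_sketch(8, [1], [1.0]))  # [[-4.0, -4.0, 4.0, 4.0]]
```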
mmdetection | mmdetection-master/mmdet/core/anchor/builder.py | # Copyright (c) OpenMMLab. All rights reserved.
import warnings
from mmcv.utils import Registry, build_from_cfg
PRIOR_GENERATORS = Registry('Generator for anchors and points')
ANCHOR_GENERATORS = PRIOR_GENERATORS
def build_prior_generator(cfg, default_args=None):
return build_from_cfg(cfg, PRIOR_GENERATORS, de... | 583 | 28.2 | 74 | py |
mmdetection | mmdetection-master/mmdet/core/anchor/point_generator.py | # Copyright (c) OpenMMLab. All rights reserved.
import numpy as np
import torch
from torch.nn.modules.utils import _pair
from .builder import PRIOR_GENERATORS
@PRIOR_GENERATORS.register_module()
class PointGenerator:
    def _meshgrid(self, x, y, row_major=True):
        xx = x.repeat(len(y))
        yy = y.view(-1... | 10,739 | 39.681818 | 79 | py
mmdetection | mmdetection-master/mmdet/core/anchor/utils.py | # Copyright (c) OpenMMLab. All rights reserved.
import torch
def images_to_levels(target, num_levels):
    """Convert targets by image to targets by feature level.

    [target_img0, target_img1] -> [target_level0, target_level1, ...]
    """
    target = torch.stack(target, 0)
    level_targets = []
    start = 0
    ... | 2,545 | 33.876712 | 79 | py
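The truncated function above slices stacked per-image targets into per-level chunks along the anchor dimension. The same idea with plain Python lists (a sketch, not the torch implementation):

```python
def images_to_levels_sketch(targets, num_levels):
    # targets: one flat anchor-target list per image.
    # num_levels: number of anchors belonging to each feature level.
    level_targets = []
    start = 0
    for n in num_levels:
        end = start + n
        # Collect this level's slice from every image.
        level_targets.append([t[start:end] for t in targets])
        start = end
    return level_targets

out = images_to_levels_sketch([[1, 2, 3], [4, 5, 6]], [2, 1])
print(out)  # [[[1, 2], [4, 5]], [[3], [6]]]
```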