- <div align="center"><img src="assets/logo.png" width="600"></div>
-
+ <div align="center"><img src="assets/logo.png" width="350"></div>

<img src="assets/demo.png">
- ## <div align="center">Introduction</div>
+ ## Introduction

YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between research and industrial communities.

+ <img src="assets/git_fig.png" width="1000">
- ## <div align="center">Why YOLOX?</div>
-
- <div align="center"><img src="assets/fig1.png" width="400"><img src="assets/fig2.png" width="400"></div>
-
- ## <div align="center">News!!</div>
- * 【2021/07/19】 We have released our technical report on [Arxiv](xxx)!!
+ ## Updates!!
+ * 【2021/07/19】 We have released our technical report on Arxiv.
- ## <div align="center">Benchmark</div>
+ ## Benchmark

- ### Standard Models.
+ #### Standard Models.

| Model | size | mAP<sup>test<br>0.5:0.95</sup> | Speed V100<br>(ms) | Params<br>(M) | FLOPs<br>(B) | weights |
| ------ | :---: | :---: | :---: | :---: | :---: | :----: |
- | [YOLOX-s]() | 640 | 39.6 | 9.8 | 9.0 | 26.8 | - |
- | [YOLOX-m]() | 640 | 46.4 | 12.3 | 25.3 | 73.8 | - |
- | [YOLOX-l]() | 640 | 50.0 | 14.5 | 54.2 | 155.6 | - |
- | [YOLOX-x]() | 640 | **51.2** | 17.3 | 99.1 | 281.9 | - |
+ | [YOLOX-s](./exps/yolox_s.py) | 640 | 39.6 | 9.8 | 9.0 | 26.8 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EW62gmO2vnNNs5npxjzunVwB9p307qqygaCkXdTO88BLUg?e=NMTQYw) |
+ | [YOLOX-m](./exps/yolox_m.py) | 640 | 46.4 | 12.3 | 25.3 | 73.8 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/ERMTP7VFqrVBrXKMU7Vl4TcBQs0SUeCT7kvc-JdIbej4tQ?e=1MDo9y) |
+ | [YOLOX-l](./exps/yolox_l.py) | 640 | 50.0 | 14.5 | 54.2 | 155.6 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EWA8w_IEOzBKvuueBqfaZh0BeoG5sVzR-XYbOJO4YlOkRw?e=wHWOBE) |
+ | [YOLOX-x](./exps/yolox_x.py) | 640 | **51.2** | 17.3 | 99.1 | 281.9 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdgVPHBziOVBtGAXHfeHI5kBza0q9yyueMGdT0wXZfI1rQ?e=tABO5u) |
+ | [YOLOX-Darknet53](./exps/yolov3.py) | 640 | 47.4 | 11.1 | 63.7 | 185.3 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EZ-MV1r_fMFPkPrNjvbJEMoBLOLAnXH-XKEB77w8LhXL6Q?e=mf6wOc) |

- ### Light Models.
- | Model | size | mAP<sup>val<br>0.5:0.95</sup> | Speed V100<br>(ms) | Params<br>(M) | FLOPs<br>(B) | weights |
- | ------ | :---: | :---: | :---: | :---: | :---: | :----: |
- | [YOLOX-Nano]() | 416 | 25.3 | - | 0.91 | 1.08 | - |
- | [YOLOX-Tiny]() | 416 | 31.7 | - | 5.06 | 6.45 | - |
+ #### Light Models.
+ | Model | size | mAP<sup>val<br>0.5:0.95</sup> | Params<br>(M) | FLOPs<br>(B) | weights |
+ | ------ | :---: | :---: | :---: | :---: | :----: |
+ | [YOLOX-Nano](./exps/nano.py) | 416 | 25.3 | 0.91 | 1.08 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EdcREey-krhLtdtSnxolxiUBjWMy6EFdiaO9bdOwZ5ygCQ?e=yQpdds) |
+ | [YOLOX-Tiny](./exps/yolox_tiny.py) | 416 | 31.7 | 5.06 | 6.45 | [Download](https://megvii-my.sharepoint.cn/:u:/g/personal/gezheng_megvii_com/EYtjNFPqvZBBrQ-VowLcSr4B6Z5TdTflUsr_gO2CwhC3bQ?e=SBTwXj) |
- ## <div align="center">Quick Start</div>
+ ## Quick Start

- ### Installation
+ <details>
+ <summary>Installation</summary>

Step1. Install [apex](https://github.com/NVIDIA/apex).

@@ -47,38 +45,53 @@ $ cd yolox
$ pip3 install -v -e .  # or "python3 setup.py develop"
```
- ### Demo
+ </details>
+
+ <details>
+ <summary>Demo</summary>
+
+ Step1. Download a pretrained model from the benchmark table.

- You can use either -n or -f to specify your detector's config:
+ Step2. Use either -n or -f to specify your detector's config. For example:

```shell
- python tools/demo.py -n yolox-s -c <MODEL_PATH> --conf 0.3 --nms 0.65 --tsize 640
+ python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result
```
or
```shell
- python tools/demo.py -f exps/base/yolox_s.py -c <MODEL_PATH> --conf 0.3 --nms 0.65 --tsize 640
+ python tools/demo.py image -f exps/yolox_s.py -c /path/to/your/yolox_s.pth.tar --path assets/dog.jpg --conf 0.3 --nms 0.65 --tsize 640 --save_result
+ ```
+ Demo for video:
+ ```shell
+ python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pth.tar --path /path/to/your/video --conf 0.3 --nms 0.65 --tsize 640 --save_result
```
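To run the image demo over every file in a folder, a plain shell loop around the command above works. The sketch below only *prints* one command per image (drop the `echo` to actually execute them once YOLOX and a checkpoint are installed); the folder and image names are made-up examples, not paths from this repo:

```shell
# Build one demo command per .jpg in a folder; `echo` prints instead of running.
WORK=$(mktemp -d)
touch "$WORK/dog.jpg" "$WORK/street.jpg"   # stand-in images for illustration
for img in "$WORK"/*.jpg; do
  echo python tools/demo.py image -n yolox-s -c /path/to/your/yolox_s.pth.tar \
    --path "$img" --conf 0.3 --nms 0.65 --tsize 640 --save_result
done
```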
- <details open>
+ </details>
+
+ <details>

<summary>Reproduce our results on COCO</summary>

- Step1.
+ Step1. Prepare the COCO dataset:
+ ```shell
+ cd <YOLOX_HOME>
+ mkdir datasets
+ ln -s /path/to/your/COCO ./datasets/COCO
+ ```
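For reference, the symlinked folder is expected to follow the usual COCO 2017 layout. The sketch below builds a dummy tree in a scratch directory purely to illustrate that layout and the symlink step — the directory and annotation-file names are the standard COCO ones (an assumption, not taken from this repo), so verify them against your copy of the data:

```shell
# Illustrate the standard COCO 2017 layout and the Step1 symlink,
# using a throwaway scratch directory instead of a real dataset.
WORK=$(mktemp -d)
COCO_SRC="$WORK/COCO"
mkdir -p "$COCO_SRC/annotations" "$COCO_SRC/train2017" "$COCO_SRC/val2017"
touch "$COCO_SRC/annotations/instances_train2017.json"
touch "$COCO_SRC/annotations/instances_val2017.json"

# Same linking step as above, pointed at the scratch tree:
mkdir -p "$WORK/yolox/datasets"
ln -s "$COCO_SRC" "$WORK/yolox/datasets/COCO"
ls "$WORK/yolox/datasets/COCO/annotations"
# lists: instances_train2017.json  instances_val2017.json
```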
- * Reproduce our results on COCO by specifying -n:
+ Step2. Reproduce our results on COCO by specifying -n:

```shell
python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o
                         yolox-m
                         yolox-l
                         yolox-x
```
- Notes:
* -d: number of gpu devices
- * -b: total batch size, the recommended number for -b equals to num_gpu * 8
+ * -b: total batch size, the recommended number for -b is num_gpu * 8
* --fp16: mixed precision training

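The -b recommendation is just arithmetic on the GPU count; as a one-line sketch (NUM_GPU is an example value, set it to your actual device count):

```shell
# Derive the recommended total batch size from the GPU count (num_gpu * 8).
NUM_GPU=8                      # example value; use your actual GPU count
BATCH=$((NUM_GPU * 8))
echo "python tools/train.py -n yolox-s -d $NUM_GPU -b $BATCH --fp16 -o"
# -> python tools/train.py -n yolox-s -d 8 -b 64 --fp16 -o
```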
- The above commands are equivalent to:
+ When using -f, the above commands are equivalent to:

```shell
python tools/train.py -f exps/base/yolox-s.py -d 8 -b 64 --fp16 -o
@@ -87,42 +100,49 @@ python tools/train.py -f exps/base/yolox-s.py -d 8 -b 64 --fp16 -o
                         exps/base/yolox-x.py
```

- * Customize your training.
-
- * Finetune your dataset on COCO pretrained models.

</details>
- <details open>
+
+ <details>

<summary>Evaluation</summary>
+

We support batch testing for fast evaluation:

```shell
- python tools/eval.py -n yolox-s -b 64 --conf 0.001 --fp16 (optional) --fuse (optional) --test (for test-dev set)
+ python tools/eval.py -n yolox-s -c yolox_s.pth.tar -b 64 -d 8 --conf 0.001 [--fp16] [--fuse]
                       yolox-m
                       yolox-l
                       yolox-x
```
+ * --fuse: fuse conv and bn
+ * -d: number of GPUs used for evaluation. Default: all available GPUs are used.
+ * -b: total batch size across all GPUs

To reproduce speed test, we use the following command:
```shell
- python tools/eval.py -n yolox-s -b 1 -d 0 --conf 0.001 --fp16 --fuse --test (for test-dev set)
+ python tools/eval.py -n yolox-s -c yolox_s.pth.tar -b 1 -d 1 --conf 0.001 --fp16 --fuse
                       yolox-m
                       yolox-l
                       yolox-x
```
- ## <div align="center">Deployment</div>
-

</details>

- 1. [ONNX: Including ONNX export and an ONNXRuntime demo.]()
- 2. [TensorRT in both C++ and Python]()
- 3. [NCNN in C++]()
- 4. [OpenVINO in both C++ and Python]()

- ## <div align="center">Cite Our Work</div>
+ <details open>
+ <summary>Tutorials</summary>
+
+ * [Training on custom data](docs/train_custom_data.md)
+
+ </details>
+
+ ## Deployment

- If you find this project useful for you, please use the following BibTeX entry.
+ 1. [ONNX: Including ONNX export and an ONNXRuntime demo.](./demo/ONNXRuntime)
+ 2. [TensorRT in both C++ and Python](./demo/TensorRT)
+ 3. [NCNN in C++](./demo/ncnn/android)
+ 4. [OpenVINO in both C++ and Python](./demo/OpenVINO)

- TODO
+ ## Citing YOLOX
+ If you use YOLOX in your research, please cite our work using the following BibTeX entry: