Install OpenCV (a newer version such as v3.3+ is not required):

```shell
sudo apt-get install libopencv-dev
```
Install eigen-3.3.9 [google], [baidu(code:ueq4)], then build and install it:

```shell
unzip eigen-3.3.9.zip
cd eigen-3.3.9
mkdir build
cd build
cmake ..
sudo make install
```
Follow the TensorRT Python demo to convert the model and save the serialized engine file.
Check that the `model_trt.engine` file has been saved automatically under the YOLOX_outputs directory.
You should set the TensorRT path and CUDA path in CMakeLists.txt.
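The exact lines depend on where CUDA and TensorRT are installed on your machine; a hedged sketch of what the path settings in CMakeLists.txt typically look like (all paths below are placeholders, not from the repo):

```cmake
# Placeholder paths -- replace with your actual CUDA and TensorRT locations.
include_directories(/usr/local/cuda/include)
link_directories(/usr/local/cuda/lib64)
include_directories(/opt/TensorRT/include)
link_directories(/opt/TensorRT/lib)
```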
For the bytetrack_s model, the input frame size is 1088 x 608. For the bytetrack_m, bytetrack_l, and bytetrack_x models, it is 1440 x 800. You can modify INPUT_W and INPUT_H in src/bytetrack.cpp:

```cpp
static const int INPUT_W = 1088;
static const int INPUT_H = 608;
```
You can first build the demo:

```shell
cd <ByteTrack_HOME>/demo/TensorRT/cpp
mkdir build
cd build
cmake ..
make
```
Then you can run the demo at around 200 FPS:

```shell
./bytetrack ../../../../YOLOX_outputs/yolox_s_mix_det/model_trt.engine -i ../../../../videos/palace.mp4
```
If you find that the output video drops some frames, you can convert the input video by running:

```shell
cd <ByteTrack_HOME>
python3 tools/convert_video.py
```

to generate an input video suitable for the TensorRT C++ demo.