Deploy Deep Learning Models
Profile first to find the bottlenecks
Small changes to how you write NumPy, PyTorch, etc. can bring large speedups
copies, casts, matrix multiplication, initialization
Use an inference engine
Use the inference engine's built-in quantization
Simply shrink the model and measure how much accuracy drops
Rewrite the bottleneck sections in C++
Knowledge distillation and other advanced network-compression methods
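As a concrete instance of the "small changes in how you write NumPy" point above, here is a minimal sketch comparing a matrix multiply with an unnecessary dtype round-trip (extra copies and casts) against the same multiply done directly in float32. The matrix size and timing method are illustrative assumptions, not from the original notes.

```python
import time
import numpy as np

n = 2000
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

# Wasteful pattern: casting to float64 allocates two full copies
# and makes BLAS do the multiply in double precision.
t0 = time.perf_counter()
c1 = (a.astype(np.float64) @ b.astype(np.float64)).astype(np.float32)
t_slow = time.perf_counter() - t0

# Direct pattern: stay in float32, no intermediate copies or casts.
t0 = time.perf_counter()
c2 = a @ b
t_fast = time.perf_counter() - t0

print(f"float64 round-trip: {t_slow:.3f}s  float32 direct: {t_fast:.3f}s")
```

The two results agree to float32 precision, so the faster version costs nothing in accuracy here; profiling is what tells you whether a line like the first one is actually on the hot path.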
Quickstart Guide: https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html
Install
Install onnx: conda install -c conda-forge onnx
Install pycuda: pip install pycuda
Install TensorRT 7.0.0 (https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-700/tensorrt-install-guide/index.html#installing-tar)
Install CUDA 10.0 and cuDNN 7.6.5 (this is the only step that needs sudo; CUDA can also be installed under your own directory, so in principle the whole installation can be done without sudo) (https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-700/tensorrt-support-matrix/index.html)
Download the TensorRT tarball: https://developer.nvidia.com/nvidia-tensorrt-7x-download
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<TensorRT-${version}/lib>
pip install tensorrt-*-cp3x-none-linux_x86_64.whl
pip install uff-0.6.9-py2.py3-none-any.whl
pip install graphsurgeon-0.4.5-py2.py3-none-any.whl
Pytorch->ONNX->TensorRT
Build trtexec:
cd /data1/lzhgck/tensorrt/TensorRT-7.0.0.11/samples/trtexec
CUDA_INSTALL_DIR=/usr/local/cuda-10.0 make
The executable appears at: /data1/lzhgck/tensorrt/TensorRT-7.0.0.11/bin/trtexec
Optionally add it to ~/.bashrc: export PATH=$PATH:/data1/lzhgck/tensorrt/TensorRT-7.0.0.11/bin