ultralytics 8.0.195 NVIDIA Triton Inference Server support (#5257)
Co-authored-by: TheConstant3 <46416203+TheConstant3@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
parent 40e3923cfc
commit c7aa83da31
21 changed files with 349 additions and 98 deletions
@@ -3,7 +3,7 @@
 # Image is CUDA-optimized for YOLOv8 single/multi-GPU training and inference

 # Start FROM PyTorch image https://hub.docker.com/r/pytorch/pytorch or nvcr.io/nvidia/pytorch:23.03-py3
-FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime
+FROM pytorch/pytorch:2.1.0-cuda12.1-cudnn8-runtime
 RUN pip install --no-cache nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com

 # Downloads to user config dir
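For context, the Triton Inference Server support added in this release lets a YOLO model be loaded directly from a Triton endpoint rather than a local weights file. The sketch below is illustrative and not taken from this diff: the server URL `http://localhost:8000/yolo`, the Triton model name `yolo`, and the image path are assumed placeholders for a Triton instance that is already serving an exported YOLOv8 model.

```python
from ultralytics import YOLO

# Point YOLO at a model served by NVIDIA Triton Inference Server instead of a
# local .pt file. The URL and model name are placeholders; the task is given
# explicitly because it cannot be inferred from a remote endpoint.
model = YOLO("http://localhost:8000/yolo", task="detect")

# Inference then works the same way as with a local model.
results = model("path/to/image.jpg")
```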