ultralytics 8.0.196 instance-mean Segment loss (#5285)
Co-authored-by: Andy <39454881+yermandy@users.noreply.github.com>
This commit is contained in:
parent
7517667a33
commit
e7f0658744
72 changed files with 369 additions and 493 deletions
@ -10,9 +10,9 @@ The [Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inferen
<p align="center">
  <br>
  <iframe width="720" height="405" src="https://www.youtube.com/embed/NQDtfSi5QF4"
    title="Getting Started with NVIDIA Triton Inference Server" frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
    allowfullscreen>
  </iframe>
  <br>
@ -60,11 +60,11 @@ The Triton Model Repository is a storage location where Triton can access and lo
```python
from pathlib import Path

# Define paths
triton_repo_path = Path('tmp') / 'triton_repo'
triton_model_path = triton_repo_path / 'yolo'

# Create directories
(triton_model_path / '1').mkdir(parents=True, exist_ok=True)
```
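As a sanity check, the layout above can be reproduced end-to-end in a throwaway directory; here `tempfile.mkdtemp()` stands in for the real `tmp` path used in the snippet:

```python
import tempfile
from pathlib import Path

# Build the same repository layout in a throwaway directory (path is illustrative)
triton_repo_path = Path(tempfile.mkdtemp()) / 'triton_repo'
triton_model_path = triton_repo_path / 'yolo'

# Triton expects <repo>/<model_name>/<version>/ -- version '1' will hold the model file
(triton_model_path / '1').mkdir(parents=True, exist_ok=True)
```

The nested `<model_name>/<version>` convention is what lets Triton discover and version models placed in the repository.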
@ -73,10 +73,10 @@ The Triton Model Repository is a storage location where Triton can access and lo
```python
from pathlib import Path

# Move ONNX model to Triton Model path
Path(onnx_file).rename(triton_model_path / '1' / 'model.onnx')

# Create config file
(triton_model_path / 'config.pbtxt').touch()
```
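An empty `config.pbtxt` suffices here because Triton can auto-complete the configuration for ONNX models. If you need to pin settings explicitly, a minimal hand-written config might look like the following (field values are illustrative, not taken from this commit):

```
name: "yolo"
platform: "onnxruntime_onnx"
max_batch_size: 0
```

`name` must match the model's directory name in the repository, and `platform` selects the backend Triton uses to execute the model.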
@ -134,4 +134,4 @@ subprocess.call(f'docker kill {container_id}', shell=True)
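The `docker kill` call above tears down a Triton container started earlier in the guide; a hedged sketch of assembling such a launch command (the image tag, port mapping, and repository path are illustrative assumptions, and actually running it requires Docker and the Triton image):

```python
# Assemble a `docker run` command for Triton Inference Server.
# Tag, ports, and paths below are illustrative, not from this commit.
tag = 'nvcr.io/nvidia/tritonserver:23.09-py3'
triton_repo_path = '/tmp/triton_repo'

run_cmd = (
    f'docker run -d --rm -v {triton_repo_path}:/models -p 8000:8000 {tag} '
    'tritonserver --model-repository=/models'
)
# `docker run -d` prints the detached container's id, which is what the guide
# later passes to `docker kill` to shut the server down.
```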
---
By following the above steps, you can deploy and run Ultralytics YOLOv8 models efficiently on Triton Inference Server, providing a scalable and high-performance solution for deep learning inference tasks. If you face any issues or have further queries, refer to the [official Triton documentation](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html) or reach out to the Ultralytics community for support.