Fix Adobe DNG and ONNX Rust links (#11169)
parent b01fcd943f
commit 9e0c13610f
2 changed files with 18 additions and 25 deletions
@@ -406,7 +406,7 @@ The below table contains valid Ultralytics image formats.

| Image Suffixes | Example Predict Command | Reference |
|----------------|----------------------------------|-------------------------------------------------------------------------------|
| `.bmp` | `yolo predict source=image.bmp` | [Microsoft BMP File Format](https://en.wikipedia.org/wiki/BMP_file_format) |
-| `.dng` | `yolo predict source=image.dng` | [Adobe DNG](https://www.adobe.com/products/photoshop/extend.displayTab2.html) |
+| `.dng` | `yolo predict source=image.dng` | [Adobe DNG](https://helpx.adobe.com/camera-raw/digital-negative.html) |
| `.jpeg` | `yolo predict source=image.jpeg` | [JPEG](https://en.wikipedia.org/wiki/JPEG) |
| `.jpg` | `yolo predict source=image.jpg` | [JPEG](https://en.wikipedia.org/wiki/JPEG) |
| `.mpo` | `yolo predict source=image.mpo` | [Multi Picture Object](https://fileinfo.com/extension/mpo) |

@@ -26,7 +26,7 @@ You can follow the instruction with `ort` doc or simply do this:

On Ubuntu, you can do it like this:

-```
+```bash
vim ~/.bashrc

# Add the path of the ONNXRuntime lib

@@ -65,25 +65,25 @@ yolo export model=yolov8m-seg.pt format=onnx simplify

It will perform inference with the ONNX model on the source image.

-```
+```bash
cargo run --release -- --model <MODEL> --source <SOURCE>
```

Set `--cuda` to use the CUDA execution provider to speed up inference.

-```
+```bash
cargo run --release -- --cuda --model <MODEL> --source <SOURCE>
```

Set `--trt` to use the TensorRT execution provider; you can also set `--fp16` at the same time to use a TensorRT FP16 engine.

-```
+```bash
cargo run --release -- --trt --fp16 --model <MODEL> --source <SOURCE>
```

Set `--device_id` to select which device to run on. If you have only one GPU and set `device_id` to 1, the program will not panic; `ort` automatically falls back to the `CPU` EP.

-```
+```bash
cargo run --release -- --cuda --device_id 0 --model <MODEL> --source <SOURCE>
```
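The fallback rule described above can be sketched in a few lines of plain Rust (a toy illustration only; `pick_ep` and its signature are invented here and are not part of the example or the `ort` API):

```rust
// Toy sketch of the fallback rule (not the `ort` API): an out-of-range
// device id selects the CPU execution provider instead of panicking.
fn pick_ep(device_id: usize, gpu_count: usize) -> &'static str {
    if device_id < gpu_count {
        "CUDA"
    } else {
        "CPU"
    }
}

fn main() {
    assert_eq!(pick_ep(0, 1), "CUDA"); // valid id on a single-GPU machine
    assert_eq!(pick_ep(1, 1), "CPU");  // out of range: silent CPU fallback
    println!("fallback ok");
}
```

The real selection happens inside `ort` when the session is built; this only mirrors the observable behaviour.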

@@ -91,25 +91,25 @@ Set `--batch` to do multi-batch-size inference.

If you're using `--trt`, you can also set `--batch-min` and `--batch-max` to explicitly specify the min/opt/max batch sizes for dynamic batch input ([explicit shape ranges for dynamic-shape input](https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#explicit-shape-range-for-dynamic-shape-input)). (Note that the ONNX model should be exported with dynamic shapes.)

-```
+```bash
cargo run --release -- --cuda --batch 2 --model <MODEL> --source <SOURCE>
```
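If the explicit-range idea is unclear: a TensorRT engine built with a dynamic batch profile only accepts batch sizes inside the declared range. A toy Rust sketch (the helper is invented for illustration and is not part of the example's code):

```rust
// Toy sketch: a TensorRT dynamic-shape profile built with
// --batch-min / --batch-max only accepts batches inside [min, max].
fn batch_in_profile(batch: usize, batch_min: usize, batch_max: usize) -> bool {
    batch_min <= batch && batch <= batch_max
}

fn main() {
    // Profile built with --batch-min 1 --batch-max 4:
    assert!(batch_in_profile(2, 1, 4));  // --batch 2 fits the profile
    assert!(!batch_in_profile(8, 1, 4)); // --batch 8 falls outside it
    println!("profile check ok");
}
```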

Set `--height` and `--width` to do dynamic image size inference. (Note that the ONNX model should be exported with dynamic shapes.)

-```
+```bash
cargo run --release -- --cuda --width 480 --height 640 --model <MODEL> --source <SOURCE>
```
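Dynamic image size interacts with the usual YOLO letterbox preprocessing: the source image is scaled to fit the requested `--width` x `--height` while keeping its aspect ratio. A toy sketch of the scale computation (the helper name is invented; this is not necessarily how the example implements it):

```rust
// Toy sketch: YOLO-style letterbox scale that fits a source image into
// the requested --width x --height while preserving aspect ratio.
fn letterbox_scale(src_w: f64, src_h: f64, dst_w: f64, dst_h: f64) -> f64 {
    (dst_w / src_w).min(dst_h / src_h)
}

fn main() {
    // A 1280x720 frame fit into --width 480 --height 640:
    let s = letterbox_scale(1280.0, 720.0, 480.0, 640.0);
    assert_eq!(s, 0.375); // width is the limiting side: 480 / 1280
    println!("scale = {s}");
}
```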

Set `--profile` to check the time consumed in each stage. (Note that the model usually needs 1 to 3 dry runs to warm up. Make sure to run enough times to evaluate the result.)

-```
+```bash
cargo run --release -- --trt --fp16 --profile --model <MODEL> --source <SOURCE>
```
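Per-stage timings like the ones `--profile` prints can be produced with nothing more than `std::time::Instant`; a hedged sketch (not the example's actual profiling code):

```rust
use std::time::Instant;

// Toy sketch of per-stage timing in the style of the --profile output
// (not the example's actual profiler).
fn timed<T>(label: &str, stage: impl FnOnce() -> T) -> T {
    let t0 = Instant::now();
    let out = stage();
    println!("[{label}]: {:?}", t0.elapsed());
    out
}

fn main() {
    let sum = timed("Model Preprocess", || (0..1_000u64).sum::<u64>());
    assert_eq!(sum, 499_500);
}
```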

Results: (yolov8m.onnx, batch=1, 3 runs, trt, fp16, RTX 3060Ti)

-```
+```bash
==> 0
[Model Preprocess]: 12.75788ms
[ORT H2D]: 237.118µs

@@ -145,7 +145,7 @@ And also:

you can check out all CLI arguments by:

-```
+```bash
git clone https://github.com/ultralytics/ultralytics
cd ultralytics/examples/YOLOv8-ONNXRuntime-Rust
cargo run --release -- --help

@@ -153,17 +153,19 @@ cargo run --release -- --help

## Examples



### Classification

Running a dynamic-shape ONNX model on `CPU` with image size `--height 224 --width 224`, saving the plotted image in the `runs` directory.

-```
+```bash
cargo run --release -- --model ../assets/weights/yolov8m-cls-dyn.onnx --source ../assets/images/dog.jpg --height 224 --width 224 --plot --profile
```

You will see output like:

-```
+```bash
Summary:
> Task: Classify (Ultralytics 8.0.217)
> EP: Cpu

@@ -185,37 +187,28 @@ Summary:
Masks: None,
},
]

```



### Object Detection

Using the `CUDA` EP and dynamic image size `--height 640 --width 480`:

-```
+```bash
cargo run --release -- --cuda --model ../assets/weights/yolov8m-dynamic.onnx --source ../assets/images/bus.jpg --plot --height 640 --width 480
```



### Pose Detection

Using the `TensorRT` EP:

-```
+```bash
cargo run --release -- --trt --model ../assets/weights/yolov8m-pose.onnx --source ../assets/images/bus.jpg --plot
```



### Instance Segmentation

Using the `TensorRT` EP and an FP16 model (`--fp16`):

-```
+```bash
cargo run --release -- --trt --fp16 --model ../assets/weights/yolov8m-seg.onnx --source ../assets/images/0172.jpg --plot
```

