Docs Colab, OBB and typos fixes (#10366)
Co-authored-by: Olivier Louvignes <olivier@mg-crea.com>
Co-authored-by: RainRat <rainrat78@yahoo.ca>
parent f646972b95
commit d6bb3046a8
13 changed files with 18 additions and 16 deletions
@@ -82,7 +82,7 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
 from ultralytics import YOLO
 
 # Load a model
-model = YOLO('path/to/model.pt') # Load a official model or custom model
+model = YOLO('path/to/model.pt') # Load an official model or custom model
 
 # Export the model
 model.export(format='edgetpu')

@@ -91,7 +91,7 @@ To use the Edge TPU, you need to convert your model into a compatible format. It
 === "CLI"
 
 ```bash
-yolo export model=path/to/model.pt format=edgetpu # Export a official model or custom model
+yolo export model=path/to/model.pt format=edgetpu # Export an official model or custom model
 ```
 
 The exported model will be saved in the `<model_name>_saved_model/` folder with the name `<model_name>_full_integer_quant_edgetpu.tflite`.

@@ -108,7 +108,7 @@ After exporting your model, you can run inference with it using the following co
 from ultralytics import YOLO
 
 # Load a model
-model = YOLO('path/to/edgetpu_model.tflite') # Load a official model or custom model
+model = YOLO('path/to/edgetpu_model.tflite') # Load an official model or custom model
 
 # Run Prediction
 model.predict("path/to/source.png")

@@ -117,7 +117,7 @@ After exporting your model, you can run inference with it using the following co
 === "CLI"
 
 ```bash
-yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png # Load a official model or custom model
+yolo predict model=path/to/edgetpu_model.tflite source=path/to/source.png # Load an official model or custom model
 ```
 
 Find comprehensive information on the [Predict](../modes/predict.md) page for full prediction mode details.

@@ -108,7 +108,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 
 1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md#masks).
 
-2. Here, the values are cast into `np.int32` for compatibility with `drawContours()` function from OpenCV.
+2. Here the values are cast into `np.int32` for compatibility with `drawContours()` function from OpenCV.
 
 3. The OpenCV `drawContours()` function expects contours to have a shape of `[N, 1, 2]`; expand the section below for more details.
 

@@ -145,7 +145,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 
 ***
 
-5. Next the there are 2 options for how to move forward with the image from this point and a subsequent option for each.
+5. Next there are 2 options for how to move forward with the image from this point and a subsequent option for each.
 
 ### Object Isolation Options
 

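Notes 2 and 3 in the hunk above describe two array manipulations that are easy to get backwards. A minimal sketch of how they fit together, assuming a segmentation checkpoint (`yolov8n-seg.pt`) and a hypothetical image path, following the guide's `c.masks.xy` access pattern:

```python
import cv2
import numpy as np

from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # assumed segmentation checkpoint
results = model("path/to/image.png")  # hypothetical source image

for result in results:
    img = np.copy(result.orig_img)
    for c in result:  # one Results slice per detected instance
        # Note 2: cast the polygon points to np.int32 for OpenCV compatibility
        contour = c.masks.xy.pop().astype(np.int32)
        # Note 3: drawContours() expects contours shaped [N, 1, 2]
        contour = contour.reshape(-1, 1, 2)
        b_mask = np.zeros(img.shape[:2], np.uint8)
        cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
```
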
@@ -54,7 +54,7 @@ The first step after getting your hands on an NVIDIA Jetson device is to flash N
 
 The fastest way to get started with Ultralytics YOLOv8 on NVIDIA Jetson is to run with pre-built docker image for Jetson.
 
-Execute the below command to pull the Docker containter and run on Jetson. This is based on [l4t-pytorch](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch) docker image which contains PyTorch and Torchvision in a Python3 environment.
+Execute the below command to pull the Docker container and run on Jetson. This is based on [l4t-pytorch](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch) docker image which contains PyTorch and Torchvision in a Python3 environment.
 
 ```sh
 t=ultralytics/ultralytics:latest-jetson && sudo docker pull $t && sudo docker run -it --ipc=host --runtime=nvidia $t

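Once the container is running, a one-off prediction makes a quick smoke test. A minimal sketch, assuming the standard `yolov8n.pt` weights and the Ultralytics sample image URL:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # downloads the nano detection weights on first use
results = model.predict("https://ultralytics.com/images/bus.jpg")
print(results[0].speed)  # preprocess/inference/postprocess times in ms
```
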
@@ -10,7 +10,7 @@ keywords: YOLOv8, VSCode, Terminal, Remote Development, Ultralytics, SSH, Object
 <img width="800" src="https://raw.githubusercontent.com/saitoha/libsixel/data/data/sixel.gif" alt="Sixel example of image in Terminal">
 </p>
 
-Image from the the [libsixel](https://saitoha.github.io/libsixel/) website.
+Image from the [libsixel](https://saitoha.github.io/libsixel/) website.
 
 ## Motivation
 

@@ -95,7 +95,7 @@ There are many options for training and evaluating YOLOv8 models, so what makes
 
 If you’d like to dive deeper into Google Colab, here are a few resources to guide you.
 
-- **[Training Custom Datasets with Ultralytics YOLOv8 in Google Colab](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-Colab)**: Learn how to train custom datasets with Ultralytics YOLOv8 on Google Colab. This comprehensive blog post will take you through the entire process, from initial setup to the training and evaluation stages.
+- **[Training Custom Datasets with Ultralytics YOLOv8 in Google Colab](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-colab)**: Learn how to train custom datasets with Ultralytics YOLOv8 on Google Colab. This comprehensive blog post will take you through the entire process, from initial setup to the training and evaluation stages.
 
 - **[Curated Notebooks](https://colab.google/notebooks/)**: Here you can explore a series of organized and educational notebooks, each grouped by specific topic areas.
 

@@ -69,6 +69,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
 masks = result.masks # Masks object for segmentation masks outputs
 keypoints = result.keypoints # Keypoints object for pose outputs
 probs = result.probs # Probs object for classification outputs
+obb = result.obb # Oriented boxes object for OBB outputs
 result.show() # display to screen
 result.save(filename='result.jpg') # save to disk
 ```

@@ -90,6 +91,7 @@ Ultralytics YOLO models return either a Python list of `Results` objects, or a m
 masks = result.masks # Masks object for segmentation masks outputs
 keypoints = result.keypoints # Keypoints object for pose outputs
 probs = result.probs # Probs object for classification outputs
+obb = result.obb # Oriented boxes object for OBB outputs
 result.show() # display to screen
 result.save(filename='result.jpg') # save to disk
 ```

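The `obb` attribute added in both hunks mirrors the existing `boxes`/`masks`/`probs` accessors. A minimal sketch of reading it, assuming an OBB checkpoint such as `yolov8n-obb.pt`; the `cls`/`conf`/`xywhr` field names are assumptions modeled on the other results objects:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")  # assumed OBB-trained checkpoint
results = model("path/to/image.png")  # hypothetical source image

for result in results:
    obb = result.obb  # None when the model is not an OBB model
    if obb is not None:
        print(obb.cls)    # class index per oriented box
        print(obb.conf)   # confidence score per box
        print(obb.xywhr)  # (x_center, y_center, width, height, rotation) per box
```
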
@@ -83,7 +83,7 @@ YOLO classification dataset format can be found in detail in the [Dataset Guide]
 
 ## Val
 
-Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No argument need to passed as the `model` retains it's training `data` and arguments as model attributes.
+Validate trained YOLOv8n-cls model accuracy on the MNIST160 dataset. No arguments need to be passed as the `model` retains its training `data` and arguments as model attributes.
 
 !!! Example
 

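For reference, the argument-free validation this hunk describes looks roughly like the sketch below, assuming the official `yolov8n-cls.pt` checkpoint, with `top1`/`top5` as the classification accuracy metrics:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")  # official or custom-trained classification model
metrics = model.val()  # no arguments: dataset and settings come from model attributes
print(metrics.top1)  # top-1 accuracy on MNIST160
print(metrics.top5)  # top-5 accuracy
```
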
@@ -103,7 +103,7 @@ OBB dataset format can be found in detail in the [Dataset Guide](../datasets/obb
 ## Val
 
 Validate trained YOLOv8n-obb model accuracy on the DOTA8 dataset. No arguments need to be passed as the `model`
-retains it's training `data` and arguments as model attributes.
+retains its training `data` and arguments as model attributes.
 
 !!! Example
 

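The same parameterless `val()` call applies here; a sketch assuming `yolov8n-obb.pt`, with the `metrics.box` namespace assumed to carry the oriented-box mAP values:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")
metrics = model.val()  # DOTA8 settings are retained from training
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
```
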
@@ -97,7 +97,7 @@ YOLO pose dataset format can be found in detail in the [Dataset Guide](../datase
 ## Val
 
 Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed as the `model`
-retains it's training `data` and arguments as model attributes.
+retains its training `data` and arguments as model attributes.
 
 !!! Example
 

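Pose validation follows the same shape; a sketch assuming `yolov8n-pose.pt`, where detection and keypoint metrics are assumed to live under `box` and `pose` respectively:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")
metrics = model.val()  # COCO128-pose settings come from model attributes
print(metrics.box.map)   # detection mAP50-95
print(metrics.pose.map)  # keypoint mAP50-95
```
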
@@ -83,7 +83,7 @@ YOLO segmentation dataset format can be found in detail in the [Dataset Guide](.
 ## Val
 
 Validate trained YOLOv8n-seg model accuracy on the COCO128-seg dataset. No arguments need to be passed as the `model`
-retains it's training `data` and arguments as model attributes.
+retains its training `data` and arguments as model attributes.
 
 !!! Example
 

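And likewise for segmentation; a sketch assuming `yolov8n-seg.pt`, with box and mask metrics assumed under `box` and `seg`:

```python
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
metrics = model.val()  # COCO128-seg settings come from model attributes
print(metrics.box.map)  # box mAP50-95
print(metrics.seg.map)  # mask mAP50-95
```
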
@@ -13,7 +13,7 @@ cd examples/YOLOv8-CPP-Inference
 # Add a **yolov8\_.onnx** and/or **yolov5\_.onnx** model(s) to the ultralytics folder.
 # Edit the **main.cpp** to change the **projectBasePath** to match your user.
 
-# Note that by default the CMake file will try and import the CUDA library to be used with the OpenCVs dnn (cuDNN) GPU Inference.
+# Note that by default the CMake file will try to import the CUDA library to be used with the OpenCVs dnn (cuDNN) GPU Inference.
 # If your OpenCV build does not use CUDA/cuDNN you can remove that import call and run the example on CPU.
 
 mkdir build

@@ -161,7 +161,7 @@ impl OrtBackend {
 Ok(metadata) => match metadata.custom("task") {
     Err(_) => panic!("Can not get custom value. Try making it explicit by `--task`"),
     Ok(value) => match value {
-        None => panic!("No correspoing value of `task` found in metadata. Make it explicit by `--task`"),
+        None => panic!("No corresponding value of `task` found in metadata. Make it explicit by `--task`"),
         Some(task) => match task.as_str() {
             "classify" => YOLOTask::Classify,
             "detect" => YOLOTask::Detect,

@@ -50,7 +50,7 @@ python yolov8_region_counter.py --source "path/to/video.mp4" --save-img --weight
 # If you want to detect specific class (first class and third class)
 python yolov8_region_counter.py --source "path/to/video.mp4" --classes 0 2 --weights "path/to/model.pt"
 
-# If you dont want to save results
+# If you don't want to save results
 python yolov8_region_counter.py --source "path/to/video.mp4" --view-img
 ```
 