Threaded inference docs improvements (#16313)
Signed-off-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>

parent 617d58d430
commit 7b19e0daa0

5 changed files with 70 additions and 95 deletions
README.md (42 changes)
````diff
@@ -87,14 +87,25 @@ YOLOv8 may also be used directly in a Python environment, and accepts the same [
 from ultralytics import YOLO
 
 # Load a model
-model = YOLO("yolov8n.yaml")  # build a new model from scratch
-model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)
+model = YOLO("yolov8n.pt")
 
-# Use the model
-model.train(data="coco8.yaml", epochs=3)  # train the model
-metrics = model.val()  # evaluate model performance on the validation set
-results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
-path = model.export(format="onnx")  # export the model to ONNX format
+# Train the model
+train_results = model.train(
+    data="coco8.yaml",  # path to dataset YAML
+    epochs=100,  # number of training epochs
+    imgsz=640,  # training image size
+    device="cpu",  # device to run on, i.e. device=0 or device=0,1,2,3 or device=cpu
+)
+
+# Evaluate model performance on the validation set
+metrics = model.val()
+
+# Perform object detection on an image
+results = model("path/to/image.jpg")
+results[0].show()
+
+# Export the model to ONNX format
+path = model.export(format="onnx")  # return path to exported model
 ```
 
 See YOLOv8 [Python Docs](https://docs.ultralytics.com/usage/python/) for more examples.
````
```diff
@@ -139,23 +150,6 @@ See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examp
 
 </details>
 
-<details><summary>Detection (Open Image V7)</summary>
-
-See [Detection Docs](https://docs.ultralytics.com/tasks/detect/) for usage examples with these models trained on [Open Image V7](https://docs.ultralytics.com/datasets/detect/open-images-v7/), which include 600 pre-trained classes.
-
-| Model                                                                                     | size<br><sup>(pixels) | mAP<sup>val<br>50-95 | Speed<br><sup>CPU ONNX<br>(ms) | Speed<br><sup>A100 TensorRT<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>(B) |
-| ----------------------------------------------------------------------------------------- | --------------------- | -------------------- | ------------------------------ | ----------------------------------- | ------------------ | ----------------- |
-| [YOLOv8n](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n-oiv7.pt) | 640                   | 18.4                 | 142.4                          | 1.21                                | 3.5                | 10.5              |
-| [YOLOv8s](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s-oiv7.pt) | 640                   | 27.7                 | 183.1                          | 1.40                                | 11.4               | 29.7              |
-| [YOLOv8m](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8m-oiv7.pt) | 640                   | 33.6                 | 408.5                          | 2.26                                | 26.2               | 80.6              |
-| [YOLOv8l](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8l-oiv7.pt) | 640                   | 34.9                 | 596.9                          | 2.43                                | 44.1               | 167.4             |
-| [YOLOv8x](https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8x-oiv7.pt) | 640                   | 36.3                 | 860.6                          | 3.56                                | 68.7               | 260.6             |
-
-- **mAP<sup>val</sup>** values are for single-model single-scale on [Open Image V7](https://docs.ultralytics.com/datasets/detect/open-images-v7/) dataset. <br>Reproduce by `yolo val detect data=open-images-v7.yaml device=0`
-- **Speed** averaged over Open Image V7 val images using an [Amazon EC2 P4d](https://aws.amazon.com/ec2/instance-types/p4/) instance. <br>Reproduce by `yolo val detect data=open-images-v7.yaml batch=1 device=0|cpu`
-
-</details>
-
 <details><summary>Segmentation (COCO)</summary>
 
 See [Segmentation Docs](https://docs.ultralytics.com/tasks/segment/) for usage examples with these models trained on [COCO-Seg](https://docs.ultralytics.com/datasets/segment/coco/), which include 80 pre-trained classes.
```
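Since this PR concerns the threaded inference docs: the thread-safety guidance those docs give reduces to one pattern, namely instantiating a separate model inside each thread rather than sharing one model object across threads. A minimal sketch of that pattern, using a hypothetical `Model` stand-in (not part of `ultralytics`) so it runs without the package installed:

```python
import threading

class Model:
    """Stand-in for a non-thread-safe model such as YOLO (hypothetical, for illustration)."""

    def __init__(self, weights):
        self.weights = weights

    def predict(self, image):
        return f"{self.weights}:{image}"

results = {}

def worker(name, image):
    # Thread-safe pattern: each thread builds its OWN model instance,
    # so no mutable model state is shared between threads.
    model = Model("yolov8n.pt")
    results[name] = model.predict(image)

threads = [
    threading.Thread(target=worker, args=(f"t{i}", f"img{i}.jpg")) for i in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.values()))
```

Swapping the stand-in for `YOLO("yolov8n.pt")` gives the per-thread instantiation the docs recommend; the key point is that the constructor call happens inside `worker`, not in the main thread.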