Docs Ruff codeblocks reformat and fix (#12847)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Glenn Jocher 2024-05-19 19:13:04 +02:00 committed by GitHub
parent be5cf7a033
commit 68031133fd
9 changed files with 167 additions and 178 deletions

View file

@@ -80,17 +80,17 @@ The example showcases the variety and complexity of the images in the Tiger-Pose
 from ultralytics import YOLO

 # Load a model
-model = YOLO('path/to/best.pt')  # load a tiger-pose trained model
+model = YOLO("path/to/best.pt")  # load a tiger-pose trained model

 # Run inference
-results = model.predict(source="https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUYdGlnZXIgd2Fsa2luZyByZWZlcmVuY2Ug" show=True)
+results = model.predict(source="https://youtu.be/MIBAT6BGE6U", show=True)
 ```

 === "CLI"

 ```bash
 # Run inference using a tiger-pose trained model
-yolo task=pose mode=predict source="https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUYdGlnZXIgd2Fsa2luZyByZWZlcmVuY2Ug" show=True model="path/to/best.pt"
+yolo task=pose mode=predict source="https://youtu.be/MIBAT6BGE6U" show=True model="path/to/best.pt"
 ```

 ## Citations and Acknowledgments

View file

@@ -35,7 +35,7 @@ Here's a compilation of in-depth guides to help you master different aspects of
 - [Conda Quickstart](conda-quickstart.md) 🚀 NEW: Step-by-step guide to setting up a [Conda](https://anaconda.org/conda-forge/ultralytics) environment for Ultralytics. Learn how to install and start using the Ultralytics package efficiently with Conda.
 - [Docker Quickstart](docker-quickstart.md) 🚀 NEW: Complete guide to setting up and using Ultralytics YOLO models with [Docker](https://hub.docker.com/r/ultralytics/ultralytics). Learn how to install Docker, manage GPU support, and run YOLO models in isolated containers for consistent development and deployment.
 - [Raspberry Pi](raspberry-pi.md) 🚀 NEW: Quickstart tutorial to run YOLO models to the latest Raspberry Pi hardware.
-- [Nvidia-Jetson](nvidia-jetson.md)🚀 NEW: Quickstart guide for deploying YOLO models on Nvidia Jetson devices.
+- [NVIDIA-Jetson](nvidia-jetson.md)🚀 NEW: Quickstart guide for deploying YOLO models on NVIDIA Jetson devices.
 - [Triton Inference Server Integration](triton-inference-server.md) 🚀 NEW: Dive into the integration of Ultralytics YOLOv8 with NVIDIA's Triton Inference Server for scalable and efficient deep learning inference deployments.
 - [YOLO Thread-Safe Inference](yolo-thread-safe-inference.md) 🚀 NEW: Guidelines for performing inference with YOLO models in a thread-safe manner. Learn the importance of thread safety and best practices to prevent race conditions and ensure consistent predictions.
 - [Isolating Segmentation Objects](isolating-segmentation-objects.md) 🚀 NEW: Step-by-step recipe and explanation on how to extract and/or isolate objects from images using Ultralytics Segmentation.

View file

@@ -63,13 +63,12 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 # (2) Iterate detection results (helpful for multiple images)
 for r in res:
     img = np.copy(r.orig_img)
     img_name = Path(r.path).stem  # source image base-name

     # Iterate each object contour (multiple detections)
-    for ci,c in enumerate(r):
+    for ci, c in enumerate(r):
         # (1) Get detection class name
         label = c.names[c.boxes.cls.tolist().pop()]
 ```

 1. To learn more about working with detection results, see [Boxes Section for Predict Mode](../modes/predict.md#boxes).
@@ -98,12 +97,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
         # Draw contour onto mask
-        _ = cv2.drawContours(b_mask,
-                             [contour],
-                             -1,
-                             (255, 255, 255),
-                             cv2.FILLED)
+        _ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
 ```

 1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md#masks).
@@ -280,16 +274,16 @@ import cv2
 import numpy as np

 from ultralytics import YOLO

-m = YOLO('yolov8n-seg.pt')#(4)!
-res = m.predict()#(3)!
+m = YOLO("yolov8n-seg.pt")  # (4)!
+res = m.predict()  # (3)!

-# iterate detection results (5)
+# Iterate detection results (5)
 for r in res:
     img = np.copy(r.orig_img)
     img_name = Path(r.path).stem

-    # iterate each object contour (6)
-    for ci,c in enumerate(r):
+    # Iterate each object contour (6)
+    for ci, c in enumerate(r):
         label = c.names[c.boxes.cls.tolist().pop()]
         b_mask = np.zeros(img.shape[:2], np.uint8)
@@ -312,7 +306,6 @@ for r in res:
         iso_crop = isolated[y1:y2, x1:x2]

         # TODO your actions go here (2)
 ```

 1. The line populating `contour` is combined into a single line here, where it was split to multiple above.

View file

@@ -61,9 +61,9 @@ The VSCode compatible protocols for viewing images using the integrated terminal
 # Run inference on an image
 results = model.predict(source="ultralytics/assets/bus.jpg")

 # Plot inference results
-plot = results[0].plot() #(1)!
+plot = results[0].plot()  # (1)!
 ```

 1. See [plot method parameters](../modes/predict.md#plot-method-parameters) to see possible arguments to use.
@@ -73,9 +73,9 @@ The VSCode compatible protocols for viewing images using the integrated terminal
 ```{ .py .annotate }
 # Results image as bytes
 im_bytes = cv.imencode(
-    ".png", #(1)!
+    ".png",  # (1)!
     plot,
-)[1].tobytes() #(2)!
+)[1].tobytes()  # (2)!

 # Image bytes as a file-like object
 mem_file = io.BytesIO(im_bytes)
@@ -110,9 +110,8 @@ The VSCode compatible protocols for viewing images using the integrated terminal
 import io

 import cv2 as cv
-from ultralytics import YOLO
 from sixel import SixelWriter
+from ultralytics import YOLO

 # Load a model
 model = YOLO("yolov8n.pt")
@@ -121,13 +120,13 @@ model = YOLO("yolov8n.pt")
 results = model.predict(source="ultralytics/assets/bus.jpg")

 # Plot inference results
-plot = results[0].plot() #(3)!
+plot = results[0].plot()  # (3)!

 # Results image as bytes
 im_bytes = cv.imencode(
-    ".png", #(1)!
+    ".png",  # (1)!
     plot,
-)[1].tobytes() #(2)!
+)[1].tobytes()  # (2)!

 mem_file = io.BytesIO(im_bytes)
 w = SixelWriter()

View file

@@ -39,15 +39,15 @@ TensorBoard is conveniently pre-installed with YOLOv8, eliminating the need for
 For detailed instructions and best practices related to the installation process, be sure to check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

-## Configuring TensorBoard for Google Collab
+## Configuring TensorBoard for Google Colab

 When using Google Colab, it's important to set up TensorBoard before starting your training code:

-!!! Example "Configure TensorBoard for Google Collab"
+!!! Example "Configure TensorBoard for Google Colab"

     === "Python"

-        ```python
+        ```ipython
         %load_ext tensorboard
         %tensorboard --logdir path/to/runs
         ```
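The `--logdir` above points at wherever Ultralytics writes its training logs. As a rough sketch of producing something for TensorBoard to display, assuming the library's default `runs/` output location (not shown in this hunk):

```python
from ultralytics import YOLO

# A short training run; Ultralytics writes TensorBoard event files into the
# run directory (a subfolder of "runs/" by default) as training progresses.
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3)
```

Point `%tensorboard --logdir` at that run directory to watch losses and metrics update live.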

View file

@@ -153,15 +153,16 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i
 model = YOLO("yolov8n.pt")
 model.export(
     format="engine",
-    dynamic=True, #(1)!
-    batch=8, #(2)!
-    workspace=4, #(3)!
+    dynamic=True,  # (1)!
+    batch=8,  # (2)!
+    workspace=4,  # (3)!
     int8=True,
-    data="coco.yaml", #(4)!
+    data="coco.yaml",  # (4)!
 )

 # Load the exported TensorRT INT8 model
 model = YOLO("yolov8n.engine", task="detect")

 # Run inference
 result = model.predict("https://ultralytics.com/images/bus.jpg")
 ```
@@ -385,36 +386,14 @@ Expand sections below for information on how these models were exported and test
 model = YOLO("yolov8n.pt")

 # TensorRT FP32
-out = model.export(
-    format="engine",
-    imgsz:640,
-    dynamic:True,
-    verbose:False,
-    batch:8,
-    workspace:2
-)
+out = model.export(format="engine", imgsz=640, dynamic=True, verbose=False, batch=8, workspace=2)

 # TensorRT FP16
-out = model.export(
-    format="engine",
-    imgsz:640,
-    dynamic:True,
-    verbose:False,
-    batch:8,
-    workspace:2,
-    half=True
-)
+out = model.export(format="engine", imgsz=640, dynamic=True, verbose=False, batch=8, workspace=2, half=True)

-# TensorRT INT8
-out = model.export(
-    format="engine",
-    imgsz:640,
-    dynamic:True,
-    verbose:False,
-    batch:8,
-    workspace:2,
-    int8=True,
-    data:"data.yaml" # COCO, ImageNet, or DOTAv1 for appropriate model task
-)
+# TensorRT INT8 with calibration `data` (i.e. COCO, ImageNet, or DOTAv1 for appropriate model task)
+out = model.export(
+    format="engine", imgsz=640, dynamic=True, verbose=False, batch=8, workspace=2, int8=True, data="coco8.yaml"
+)
 ```

View file

@@ -87,23 +87,31 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
 === "Val after training"

 ```python
 from ultralytics import YOLO

-model = YOLO('yolov8n.yaml')
-model.train(data='coco8.yaml', epochs=5)
-model.val() # It'll automatically evaluate the data you trained.
+# Load a YOLOv8 model
+model = YOLO("yolov8n.yaml")
+
+# Train the model
+model.train(data="coco8.yaml", epochs=5)
+
+# Validate on training data
+model.val()
 ```

-=== "Val independently"
+=== "Val on another dataset"

 ```python
 from ultralytics import YOLO

-model = YOLO("model.pt")
-# It'll use the data YAML file in model.pt if you don't set data.
-model.val()
-# or you can set the data you want to val
-model.val(data='coco8.yaml')
+# Load a YOLOv8 model
+model = YOLO("yolov8n.yaml")
+
+# Train the model
+model.train(data="coco8.yaml", epochs=5)
+
+# Validate on separate data
+model.val(data="path/to/separate/data.yaml")
 ```

 [Val Examples](../modes/val.md){ .md-button }
@@ -188,20 +196,20 @@ Export mode is used for exporting a YOLOv8 model to a format that can be used fo
 Export an official YOLOv8n model to ONNX with dynamic batch-size and image-size.

 ```python
 from ultralytics import YOLO

-model = YOLO('yolov8n.pt')
-model.export(format='onnx', dynamic=True)
+model = YOLO("yolov8n.pt")
+model.export(format="onnx", dynamic=True)
 ```

 === "Export to TensorRT"

 Export an official YOLOv8n model to TensorRT on `device=0` for acceleration on CUDA devices.

 ```python
 from ultralytics import YOLO

-model = YOLO('yolov8n.pt')
-model.export(format='onnx', device=0)
+model = YOLO("yolov8n.pt")
+model.export(format="onnx", device=0)
 ```

 [Export Examples](../modes/export.md){ .md-button }

View file

@@ -36,10 +36,10 @@ Dataset annotation is a very resource intensive and time-consuming process. If y
 ```{ .py .annotate }
 from ultralytics.data.annotator import auto_annotate

-auto_annotate(#(1)!
-    data='path/to/new/data',
-    det_model='yolov8n.pt',
-    sam_model='mobile_sam.pt',
+auto_annotate(  # (1)!
+    data="path/to/new/data",
+    det_model="yolov8n.pt",
+    sam_model="mobile_sam.pt",
     device="cuda",
     output_dir="path/to/save_labels",
 )
@@ -58,9 +58,9 @@ Use to convert COCO JSON annotations into proper YOLO format. For object detecti
 ```{ .py .annotate }
 from ultralytics.data.converter import convert_coco

-convert_coco(#(1)!
-    '../datasets/coco/annotations/',
+convert_coco(  # (1)!
+    "../datasets/coco/annotations/",
     use_segments=False,
     use_keypoints=False,
     cls91to80=True,
 )
@@ -113,10 +113,10 @@ data
 ```{ .py .annotate }
 from ultralytics.data.converter import yolo_bbox2segment

-yolo_bbox2segment(#(1)!
+yolo_bbox2segment(  # (1)!
     im_dir="path/to/images",
     save_dir=None,  # saved to "labels-segment" in images directory
-    sam_model="sam_b.pt"
+    sam_model="sam_b.pt",
 )
 ```
@@ -129,20 +129,22 @@ yolo_bbox2segment(#(1)!
 If you have a dataset that uses the [segmentation dataset format](../datasets/segment/index.md) you can easily convert these into up-right (or horizontal) bounding boxes (`x y w h` format) with this function.

 ```python
+import numpy as np
+
 from ultralytics.utils.ops import segments2boxes

 segments = np.array(
-    [[805, 392, 797, 400, ..., 808, 714, 808, 392],
-    [115, 398, 113, 400, ..., 150, 400, 149, 298],
-    [267, 412, 265, 413, ..., 300, 413, 299, 412],
+    [
+        [805, 392, 797, 400, ..., 808, 714, 808, 392],
+        [115, 398, 113, 400, ..., 150, 400, 149, 298],
+        [267, 412, 265, 413, ..., 300, 413, 299, 412],
     ]
 )

-segments2boxes([s.reshape(-1,2) for s in segments])
->>> array([[ 741.66, 631.12, 133.31, 479.25],
-    [ 146.81, 649.69, 185.62, 502.88],
-    [ 281.81, 636.19, 118.12, 448.88]],
-    dtype=float32) # xywh bounding boxes
+segments2boxes([s.reshape(-1, 2) for s in segments])
+# >>> array([[ 741.66, 631.12, 133.31, 479.25],
+#            [ 146.81, 649.69, 185.62, 502.88],
+#            [ 281.81, 636.19, 118.12, 448.88]],
+#           dtype=float32) # xywh bounding boxes
 ```

 To understand how this function works, visit the [reference page](../reference/utils/ops.md#ultralytics.utils.ops.segments2boxes)
@@ -155,10 +157,11 @@ Compresses a single image file to reduced size while preserving its aspect ratio
 ```{ .py .annotate }
 from pathlib import Path

 from ultralytics.data.utils import compress_one_image

-for f in Path('path/to/dataset').rglob('*.jpg'):
-    compress_one_image(f)#(1)!
+for f in Path("path/to/dataset").rglob("*.jpg"):
+    compress_one_image(f)  # (1)!
 ```

 1. Nothing returns from this function
@@ -170,10 +173,10 @@ Automatically split a dataset into `train`/`val`/`test` splits and save the resu
 ```{ .py .annotate }
 from ultralytics.data.utils import autosplit

-autosplit( #(1)!
+autosplit(  # (1)!
     path="path/to/images",
     weights=(0.9, 0.1, 0.0),  # (train, validation, test) fractional splits
-    annotated_only=False # split only images with annotation file when True
+    annotated_only=False,  # split only images with annotation file when True
 )
 ```
@@ -194,9 +197,7 @@ import numpy as np
 from ultralytics.data.utils import polygon2mask

 imgsz = (1080, 810)
-polygon = np.array(
-    [805, 392, 797, 400, ..., 808, 714, 808, 392],  # (238, 2)
-)
+polygon = np.array([805, 392, 797, 400, ..., 808, 714, 808, 392])  # (238, 2)

 mask = polygon2mask(
     imgsz,  # tuple
@@ -213,32 +214,36 @@ mask = polygon2mask(
 To manage bounding box data, the `Bboxes` class will help to convert between box coordinate formatting, scale box dimensions, calculate areas, include offsets, and more!

 ```python
+import numpy as np
+
 from ultralytics.utils.instance import Bboxes

 boxes = Bboxes(
     bboxes=np.array(
-        [[ 22.878, 231.27, 804.98, 756.83,],
-        [ 48.552, 398.56, 245.35, 902.71,],
-        [ 669.47, 392.19, 809.72, 877.04,],
-        [ 221.52, 405.8, 344.98, 857.54,],
-        [ 0, 550.53, 63.01, 873.44,],
-        [ 0.0584, 254.46, 32.561, 324.87,]]
+        [
+            [22.878, 231.27, 804.98, 756.83],
+            [48.552, 398.56, 245.35, 902.71],
+            [669.47, 392.19, 809.72, 877.04],
+            [221.52, 405.8, 344.98, 857.54],
+            [0, 550.53, 63.01, 873.44],
+            [0.0584, 254.46, 32.561, 324.87],
+        ]
     ),
     format="xyxy",
 )

 boxes.areas()
->>> array([ 4.1104e+05, 99216, 68000, 55772, 20347, 2288.5])
+# >>> array([ 4.1104e+05, 99216, 68000, 55772, 20347, 2288.5])

 boxes.convert("xywh")
-boxes.bboxes
->>> array(
-    [[ 413.93, 494.05, 782.1, 525.56],
-    [ 146.95, 650.63, 196.8, 504.15],
-    [ 739.6, 634.62, 140.25, 484.85],
-    [ 283.25, 631.67, 123.46, 451.74],
-    [ 31.505, 711.99, 63.01, 322.91],
-    [ 16.31, 289.67, 32.503, 70.41]]
-)
+print(boxes.bboxes)
+# >>> array(
+#     [[ 413.93, 494.05, 782.1, 525.56],
+#     [ 146.95, 650.63, 196.8, 504.15],
+#     [ 739.6, 634.62, 140.25, 484.85],
+#     [ 283.25, 631.67, 123.46, 451.74],
+#     [ 31.505, 711.99, 63.01, 322.91],
+#     [ 16.31, 289.67, 32.503, 70.41]]
+# )
 ```

 See the [`Bboxes` reference section](../reference/utils/instance.md#ultralytics.utils.instance.Bboxes) for more attributes and methods available.
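The prose above also mentions scaling box dimensions and including offsets, which this hunk never exercises. A minimal sketch, assuming the `mul` and `add` methods listed on the `Bboxes` reference page accept a scalar or a 4-tuple (verify the signatures there before relying on them):

```python
import numpy as np

from ultralytics.utils.instance import Bboxes

boxes = Bboxes(bboxes=np.array([[22.878, 231.27, 804.98, 756.83]]), format="xyxy")

boxes.mul(1.2)  # scale every coordinate by 1.2 (a 4-tuple would scale per-coordinate)
boxes.add(10)  # shift every coordinate by 10 pixels (a 4-tuple would offset per-coordinate)
print(boxes.bboxes)
```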
@@ -257,37 +262,39 @@ import numpy as np
 from ultralytics.utils.ops import scale_boxes

 image = cv.imread("ultralytics/assets/bus.jpg")
-*(h, w), c = image.shape
+h, w, c = image.shape
 resized = cv.resize(image, None, (), fx=1.2, fy=1.2)
-*(new_h, new_w), _ = resized.shape
+new_h, new_w, _ = resized.shape

 xyxy_boxes = np.array(
-    [[ 22.878, 231.27, 804.98, 756.83,],
-    [ 48.552, 398.56, 245.35, 902.71,],
-    [ 669.47, 392.19, 809.72, 877.04,],
-    [ 221.52, 405.8, 344.98, 857.54,],
-    [ 0, 550.53, 63.01, 873.44,],
-    [ 0.0584, 254.46, 32.561, 324.87,]]
+    [
+        [22.878, 231.27, 804.98, 756.83],
+        [48.552, 398.56, 245.35, 902.71],
+        [669.47, 392.19, 809.72, 877.04],
+        [221.52, 405.8, 344.98, 857.54],
+        [0, 550.53, 63.01, 873.44],
+        [0.0584, 254.46, 32.561, 324.87],
+    ]
 )

 new_boxes = scale_boxes(
     img1_shape=(h, w),  # original image dimensions
     boxes=xyxy_boxes,  # boxes from original image
     img0_shape=(new_h, new_w),  # resized image dimensions (scale to)
     ratio_pad=None,
     padding=False,
     xywh=False,
 )

-new_boxes#(1)!
->>> array(
-    [[ 27.454, 277.52, 965.98, 908.2],
-    [ 58.262, 478.27, 294.42, 1083.3],
-    [ 803.36, 470.63, 971.66, 1052.4],
-    [ 265.82, 486.96, 413.98, 1029],
-    [ 0, 660.64, 75.612, 1048.1],
-    [ 0.0701, 305.35, 39.073, 389.84]]
-)
+print(new_boxes)  # (1)!
+# >>> array(
+#     [[ 27.454, 277.52, 965.98, 908.2],
+#     [ 58.262, 478.27, 294.42, 1083.3],
+#     [ 803.36, 470.63, 971.66, 1052.4],
+#     [ 265.82, 486.96, 413.98, 1029],
+#     [ 0, 660.64, 75.612, 1048.1],
+#     [ 0.0701, 305.35, 39.073, 389.84]]
+# )
 ```

 1. Bounding boxes scaled for the new image size
@@ -303,24 +310,26 @@ import numpy as np
 from ultralytics.utils.ops import xyxy2xywh

 xyxy_boxes = np.array(
-    [[ 22.878, 231.27, 804.98, 756.83,],
-    [ 48.552, 398.56, 245.35, 902.71,],
-    [ 669.47, 392.19, 809.72, 877.04,],
-    [ 221.52, 405.8, 344.98, 857.54,],
-    [ 0, 550.53, 63.01, 873.44,],
-    [ 0.0584, 254.46, 32.561, 324.87,]]
+    [
+        [22.878, 231.27, 804.98, 756.83],
+        [48.552, 398.56, 245.35, 902.71],
+        [669.47, 392.19, 809.72, 877.04],
+        [221.52, 405.8, 344.98, 857.54],
+        [0, 550.53, 63.01, 873.44],
+        [0.0584, 254.46, 32.561, 324.87],
+    ]
 )
 xywh = xyxy2xywh(xyxy_boxes)

-xywh
->>> array(
-    [[ 413.93, 494.05, 782.1, 525.56],
-    [ 146.95, 650.63, 196.8, 504.15],
-    [ 739.6, 634.62, 140.25, 484.85],
-    [ 283.25, 631.67, 123.46, 451.74],
-    [ 31.505, 711.99, 63.01, 322.91],
-    [ 16.31, 289.67, 32.503, 70.41]]
-)
+print(xywh)
+# >>> array(
+#     [[ 413.93, 494.05, 782.1, 525.56],
+#     [ 146.95, 650.63, 196.8, 504.15],
+#     [ 739.6, 634.62, 140.25, 484.85],
+#     [ 283.25, 631.67, 123.46, 451.74],
+#     [ 31.505, 711.99, 63.01, 322.91],
+#     [ 16.31, 289.67, 32.503, 70.41]]
+# )
 ```

 ### All Bounding Box Conversions
@@ -352,9 +361,9 @@ import cv2 as cv
 import numpy as np

 from ultralytics.utils.plotting import Annotator, colors

-names { #(1)!
+names = {  # (1)!
     0: "person",
     5: "bus",
     11: "stop sign",
 }
@@ -362,18 +371,20 @@ image = cv.imread("ultralytics/assets/bus.jpg")
 ann = Annotator(
     image,
     line_width=None,  # default auto-size
     font_size=None,  # default auto-size
     font="Arial.ttf",  # must be ImageFont compatible
     pil=False,  # use PIL, otherwise uses OpenCV
 )

 xyxy_boxes = np.array(
-    [[ 5, 22.878, 231.27, 804.98, 756.83,], # class-idx x1 y1 x2 y2
-    [ 0, 48.552, 398.56, 245.35, 902.71,],
-    [ 0, 669.47, 392.19, 809.72, 877.04,],
-    [ 0, 221.52, 405.8, 344.98, 857.54,],
-    [ 0, 0, 550.53, 63.01, 873.44,],
-    [11, 0.0584, 254.46, 32.561, 324.87,]]
+    [
+        [5, 22.878, 231.27, 804.98, 756.83],  # class-idx x1 y1 x2 y2
+        [0, 48.552, 398.56, 245.35, 902.71],
+        [0, 669.47, 392.19, 809.72, 877.04],
+        [0, 221.52, 405.8, 344.98, 857.54],
+        [0, 0, 550.53, 63.01, 873.44],
+        [11, 0.0584, 254.46, 32.561, 324.87],
+    ]
 )

 for nb, box in enumerate(xyxy_boxes):
@@ -412,7 +423,7 @@ ann = Annotator(
 for obb in obb_boxes:
     c_idx, *obb = obb
     obb = np.array(obb).reshape(-1, 4, 2).squeeze()
-    label = f"{names.get(int(c_idx))}"
+    label = f"{obb_names.get(int(c_idx))}"
     ann.box_label(
         obb,
         label,
@@ -434,11 +445,11 @@ Check duration for code to run/process either using `with` or as a decorator.
 ```python
 from ultralytics.utils.ops import Profile

-with Profile(device=device) as dt:
+with Profile(device="cuda:0") as dt:
     pass  # operation to measure

 print(dt)
->>> "Elapsed time is 9.5367431640625e-07 s"
+# >>> "Elapsed time is 9.5367431640625e-07 s"
 ```
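The hunk context mentions that `Profile` also works as a decorator, which neither side of the diff shows. A minimal sketch, assuming `Profile` behaves like a `contextlib.ContextDecorator` so a single instance accumulates elapsed time across calls:

```python
from ultralytics.utils.ops import Profile

profiler = Profile()


@profiler  # reuse one Profile instance so timings accumulate in profiler.t
def work():
    sum(range(1_000_000))  # operation to measure


work()
print(profiler)
```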
 ### Ultralytics Supported Formats
@@ -446,11 +457,10 @@ print(dt)
 Want or need to use the formats of [images or videos types supported](../modes/predict.md#image-and-video-formats) by Ultralytics programmatically? Use these constants if you need.

 ```python
-from ultralytics.data.utils import IMG_FORMATS
-from ultralytics.data.utils import VID_FORMATS
+from ultralytics.data.utils import IMG_FORMATS, VID_FORMATS

 print(IMG_FORMATS)
->>> ('bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp', 'pfm')
+# >>> ('bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp', 'pfm')
 ```
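For a concrete use of these constants, here is a small sketch that filters a folder down to files Ultralytics can read (the `path/to/media` directory is illustrative):

```python
from pathlib import Path

from ultralytics.data.utils import IMG_FORMATS, VID_FORMATS

# Keep only files whose suffix is a supported image or video format
supported = (*IMG_FORMATS, *VID_FORMATS)
media = [f for f in Path("path/to/media").iterdir() if f.suffix[1:].lower() in supported]
print(media)
```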
 ### Make Divisible
@@ -461,7 +471,7 @@ Calculates the nearest whole number to `x` to make evenly divisible when divided
 from ultralytics.utils.ops import make_divisible

 make_divisible(7, 3)
->>> 9
+# >>> 9
 make_divisible(7, 2)
->>> 8
+# >>> 8
 ```

View file

@@ -12,8 +12,8 @@ You can also explore other quickstart options for YOLOv5, such as our [Colab Not
 ## Prerequisites

-1. **Nvidia Driver**: Version 455.23 or higher. Download from [Nvidia's website](https://www.nvidia.com/Download/index.aspx).
-2. **Nvidia-Docker**: Allows Docker to interact with your local GPU. Installation instructions are available on the [Nvidia-Docker GitHub repository](https://github.com/NVIDIA/nvidia-docker).
+1. **NVIDIA Driver**: Version 455.23 or higher. Download from [Nvidia's website](https://www.nvidia.com/Download/index.aspx).
+2. **NVIDIA-Docker**: Allows Docker to interact with your local GPU. Installation instructions are available on the [NVIDIA-Docker GitHub repository](https://github.com/NVIDIA/nvidia-docker).
 3. **Docker Engine - CE**: Version 19.03 or higher. Download and installation instructions can be found on the [Docker website](https://docs.docker.com/install/).

 ## Step 1: Pull the YOLOv5 Docker Image