Docs Ruff codeblocks reformat and fix (#12847)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
This commit is contained in: parent be5cf7a033 · commit 68031133fd
9 changed files with 167 additions and 178 deletions
@@ -80,17 +80,17 @@ The example showcases the variety and complexity of the images in the Tiger-Pose
         from ultralytics import YOLO

         # Load a model
-        model = YOLO('path/to/best.pt')  # load a tiger-pose trained model
+        model = YOLO("path/to/best.pt")  # load a tiger-pose trained model

         # Run inference
-        results = model.predict(source="https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUYdGlnZXIgd2Fsa2luZyByZWZlcmVuY2Ug" show=True)
+        results = model.predict(source="https://youtu.be/MIBAT6BGE6U", show=True)
         ```

     === "CLI"

         ```bash
         # Run inference using a tiger-pose trained model
-        yolo task=pose mode=predict source="https://www.youtube.com/watch?v=MIBAT6BGE6U&pp=ygUYdGlnZXIgd2Fsa2luZyByZWZlcmVuY2Ug" show=True model="path/to/best.pt"
+        yolo task=pose mode=predict source="https://youtu.be/MIBAT6BGE6U" show=True model="path/to/best.pt"
         ```

 ## Citations and Acknowledgments
@@ -35,7 +35,7 @@ Here's a compilation of in-depth guides to help you master different aspects of
 - [Conda Quickstart](conda-quickstart.md) 🚀 NEW: Step-by-step guide to setting up a [Conda](https://anaconda.org/conda-forge/ultralytics) environment for Ultralytics. Learn how to install and start using the Ultralytics package efficiently with Conda.
 - [Docker Quickstart](docker-quickstart.md) 🚀 NEW: Complete guide to setting up and using Ultralytics YOLO models with [Docker](https://hub.docker.com/r/ultralytics/ultralytics). Learn how to install Docker, manage GPU support, and run YOLO models in isolated containers for consistent development and deployment.
 - [Raspberry Pi](raspberry-pi.md) 🚀 NEW: Quickstart tutorial to run YOLO models to the latest Raspberry Pi hardware.
-- [Nvidia-Jetson](nvidia-jetson.md)🚀 NEW: Quickstart guide for deploying YOLO models on Nvidia Jetson devices.
+- [NVIDIA-Jetson](nvidia-jetson.md)🚀 NEW: Quickstart guide for deploying YOLO models on NVIDIA Jetson devices.
 - [Triton Inference Server Integration](triton-inference-server.md) 🚀 NEW: Dive into the integration of Ultralytics YOLOv8 with NVIDIA's Triton Inference Server for scalable and efficient deep learning inference deployments.
 - [YOLO Thread-Safe Inference](yolo-thread-safe-inference.md) 🚀 NEW: Guidelines for performing inference with YOLO models in a thread-safe manner. Learn the importance of thread safety and best practices to prevent race conditions and ensure consistent predictions.
 - [Isolating Segmentation Objects](isolating-segmentation-objects.md) 🚀 NEW: Step-by-step recipe and explanation on how to extract and/or isolate objects from images using Ultralytics Segmentation.
@@ -63,13 +63,12 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
     # (2) Iterate detection results (helpful for multiple images)
     for r in res:
         img = np.copy(r.orig_img)
-        img_name = Path(r.path).stem # source image base-name
+        img_name = Path(r.path).stem  # source image base-name

         # Iterate each object contour (multiple detections)
-        for ci,c in enumerate(r):
+        for ci, c in enumerate(r):
             # (1) Get detection class name
             label = c.names[c.boxes.cls.tolist().pop()]
     ```

 1. To learn more about working with detection results, see [Boxes Section for Predict Mode](../modes/predict.md#boxes).
@@ -98,12 +97,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

     # Draw contour onto mask
-    _ = cv2.drawContours(b_mask,
-                         [contour],
-                         -1,
-                         (255, 255, 255),
-                         cv2.FILLED)
+    _ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
     ```

 1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md#masks).
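As context for this hunk (not part of the commit): the surrounding guide goes on to apply the binary mask built by `cv2.drawContours` to the source image. A minimal sketch of that idea, using stand-in arrays for the guide's `img` and `b_mask` variables:

```python
import cv2
import numpy as np

# Stand-ins (assumptions) for the `img` and `b_mask` built in the snippet above
img = np.zeros((480, 640, 3), np.uint8)
b_mask = np.zeros(img.shape[:2], np.uint8)

# Expand the single-channel mask to 3 channels, then keep only masked pixels
mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
isolated = cv2.bitwise_and(mask3ch, img)  # background is blacked out
```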
@@ -280,16 +274,16 @@ import cv2
 import numpy as np
 from ultralytics import YOLO

-m = YOLO('yolov8n-seg.pt')#(4)!
-res = m.predict()#(3)!
+m = YOLO("yolov8n-seg.pt")  # (4)!
+res = m.predict()  # (3)!

-# iterate detection results (5)
+# Iterate detection results (5)
 for r in res:
     img = np.copy(r.orig_img)
     img_name = Path(r.path).stem

-    # iterate each object contour (6)
-    for ci,c in enumerate(r):
+    # Iterate each object contour (6)
+    for ci, c in enumerate(r):
         label = c.names[c.boxes.cls.tolist().pop()]

         b_mask = np.zeros(img.shape[:2], np.uint8)
@@ -312,7 +306,6 @@ for r in res:
         iso_crop = isolated[y1:y2, x1:x2]

         # TODO your actions go here (2)
-
 ```

 1. The line populating `contour` is combined into a single line here, where it was split to multiple above.
@@ -61,9 +61,9 @@ The VSCode compatible protocols for viewing images using the integrated terminal

 # Run inference on an image
 results = model.predict(source="ultralytics/assets/bus.jpg")

 # Plot inference results
-plot = results[0].plot() #(1)!
+plot = results[0].plot()  # (1)!
 ```

 1. See [plot method parameters](../modes/predict.md#plot-method-parameters) to see possible arguments to use.
@@ -73,9 +73,9 @@ The VSCode compatible protocols for viewing images using the integrated terminal
 ```{ .py .annotate }
 # Results image as bytes
 im_bytes = cv.imencode(
-    ".png", #(1)!
+    ".png",  # (1)!
     plot,
-)[1].tobytes() #(2)!
+)[1].tobytes()  # (2)!

 # Image bytes as a file-like object
 mem_file = io.BytesIO(im_bytes)
@@ -110,9 +110,8 @@ The VSCode compatible protocols for viewing images using the integrated terminal
 import io

 import cv2 as cv
-
-from ultralytics import YOLO
 from sixel import SixelWriter
+from ultralytics import YOLO

 # Load a model
 model = YOLO("yolov8n.pt")
@@ -121,13 +120,13 @@ model = YOLO("yolov8n.pt")
 results = model.predict(source="ultralytics/assets/bus.jpg")

 # Plot inference results
-plot = results[0].plot() #(3)!
+plot = results[0].plot()  # (3)!

 # Results image as bytes
 im_bytes = cv.imencode(
-    ".png", #(1)!
+    ".png",  # (1)!
     plot,
-)[1].tobytes() #(2)!
+)[1].tobytes()  # (2)!

 mem_file = io.BytesIO(im_bytes)
 w = SixelWriter()
@@ -39,15 +39,15 @@ TensorBoard is conveniently pre-installed with YOLOv8, eliminating the need for

 For detailed instructions and best practices related to the installation process, be sure to check our [YOLOv8 Installation guide](../quickstart.md). While installing the required packages for YOLOv8, if you encounter any difficulties, consult our [Common Issues guide](../guides/yolo-common-issues.md) for solutions and tips.

-## Configuring TensorBoard for Google Collab
+## Configuring TensorBoard for Google Colab

 When using Google Colab, it's important to set up TensorBoard before starting your training code:

-!!! Example "Configure TensorBoard for Google Collab"
+!!! Example "Configure TensorBoard for Google Colab"

     === "Python"

-        ```python
+        ```ipython
         %load_ext tensorboard
         %tensorboard --logdir path/to/runs
         ```
@@ -153,15 +153,16 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i
 model = YOLO("yolov8n.pt")
 model.export(
     format="engine",
-    dynamic=True, #(1)!
-    batch=8, #(2)!
-    workspace=4, #(3)!
+    dynamic=True,  # (1)!
+    batch=8,  # (2)!
+    workspace=4,  # (3)!
     int8=True,
-    data="coco.yaml", #(4)!
+    data="coco.yaml",  # (4)!
 )

 # Load the exported TensorRT INT8 model
 model = YOLO("yolov8n.engine", task="detect")

 # Run inference
 result = model.predict("https://ultralytics.com/images/bus.jpg")
 ```
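Since this hunk centers on INT8 calibration, a quick sanity check after export is to validate the quantized engine like any other model. A hedged sketch (not part of this commit), assuming a CUDA device and reusing the calibration dataset:

```python
from ultralytics import YOLO

# Sketch (assumption): measure accuracy retained after INT8 quantization
model = YOLO("yolov8n.engine", task="detect")
metrics = model.val(data="coco.yaml")  # same data used for calibration
print(metrics.box.map)  # mAP50-95 of the quantized model
```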
@@ -385,36 +386,14 @@ Expand sections below for information on how these models were exported and test
 model = YOLO("yolov8n.pt")

 # TensorRT FP32
-out = model.export(
-    format="engine",
-    imgsz:640,
-    dynamic:True,
-    verbose:False,
-    batch:8,
-    workspace:2
-)
+out = model.export(format="engine", imgsz=640, dynamic=True, verbose=False, batch=8, workspace=2)

 # TensorRT FP16
-out = model.export(
-    format="engine",
-    imgsz:640,
-    dynamic:True,
-    verbose:False,
-    batch:8,
-    workspace:2,
-    half=True
-)
+out = model.export(format="engine", imgsz=640, dynamic=True, verbose=False, batch=8, workspace=2, half=True)

-# TensorRT INT8
+# TensorRT INT8 with calibration `data` (i.e. COCO, ImageNet, or DOTAv1 for appropriate model task)
 out = model.export(
-    format="engine",
-    imgsz:640,
-    dynamic:True,
-    verbose:False,
-    batch:8,
-    workspace:2,
-    int8=True,
-    data:"data.yaml" # COCO, ImageNet, or DOTAv1 for appropriate model task
+    format="engine", imgsz=640, dynamic=True, verbose=False, batch=8, workspace=2, int8=True, data="coco8.yaml"
 )
 ```
@@ -87,23 +87,31 @@ Val mode is used for validating a YOLOv8 model after it has been trained. In thi
 === "Val after training"

     ```python
     from ultralytics import YOLO

-    model = YOLO('yolov8n.yaml')
-    model.train(data='coco8.yaml', epochs=5)
-    model.val()  # It'll automatically evaluate the data you trained.
+    # Load a YOLOv8 model
+    model = YOLO("yolov8n.yaml")
+
+    # Train the model
+    model.train(data="coco8.yaml", epochs=5)
+
+    # Validate on training data
+    model.val()
     ```

-=== "Val independently"
+=== "Val on another dataset"

     ```python
     from ultralytics import YOLO

-    model = YOLO("model.pt")
-    # It'll use the data YAML file in model.pt if you don't set data.
-    model.val()
-    # or you can set the data you want to val
-    model.val(data='coco8.yaml')
+    # Load a YOLOv8 model
+    model = YOLO("yolov8n.yaml")
+
+    # Train the model
+    model.train(data="coco8.yaml", epochs=5)
+
+    # Validate on separate data
+    model.val(data="path/to/separate/data.yaml")
     ```

 [Val Examples](../modes/val.md){ .md-button }
@@ -188,20 +196,20 @@ Export mode is used for exporting a YOLOv8 model to a format that can be used fo

     Export an official YOLOv8n model to ONNX with dynamic batch-size and image-size.

     ```python
     from ultralytics import YOLO

-    model = YOLO('yolov8n.pt')
-    model.export(format='onnx', dynamic=True)
+    model = YOLO("yolov8n.pt")
+    model.export(format="onnx", dynamic=True)
     ```

 === "Export to TensorRT"

     Export an official YOLOv8n model to TensorRT on `device=0` for acceleration on CUDA devices.

     ```python
     from ultralytics import YOLO

-    model = YOLO('yolov8n.pt')
-    model.export(format='onnx', device=0)
+    model = YOLO("yolov8n.pt")
+    model.export(format="onnx", device=0)
     ```

 [Export Examples](../modes/export.md){ .md-button }
@@ -36,10 +36,10 @@ Dataset annotation is a very resource intensive and time-consuming process. If y
 ```{ .py .annotate }
 from ultralytics.data.annotator import auto_annotate

-auto_annotate(#(1)!
-    data='path/to/new/data',
-    det_model='yolov8n.pt',
-    sam_model='mobile_sam.pt',
+auto_annotate(  # (1)!
+    data="path/to/new/data",
+    det_model="yolov8n.pt",
+    sam_model="mobile_sam.pt",
     device="cuda",
     output_dir="path/to/save_labels",
 )
@@ -58,9 +58,9 @@ Use to convert COCO JSON annotations into proper YOLO format. For object detecti
 ```{ .py .annotate }
 from ultralytics.data.converter import convert_coco

-convert_coco(#(1)!
-    '../datasets/coco/annotations/',
-    use_segments=False,
+convert_coco(  # (1)!
+    "../datasets/coco/annotations/",
+    use_segments=False,
     use_keypoints=False,
     cls91to80=True,
 )
@@ -113,10 +113,10 @@ data
 ```{ .py .annotate }
 from ultralytics.data.converter import yolo_bbox2segment

-yolo_bbox2segment(#(1)!
+yolo_bbox2segment(  # (1)!
     im_dir="path/to/images",
-    save_dir=None, # saved to "labels-segment" in images directory
-    sam_model="sam_b.pt"
+    save_dir=None,  # saved to "labels-segment" in images directory
+    sam_model="sam_b.pt",
 )
 ```
@@ -129,20 +129,22 @@ yolo_bbox2segment(#(1)!
 If you have a dataset that uses the [segmentation dataset format](../datasets/segment/index.md) you can easily convert these into up-right (or horizontal) bounding boxes (`x y w h` format) with this function.

 ```python
 import numpy as np
 from ultralytics.utils.ops import segments2boxes

 segments = np.array(
-    [[805, 392, 797, 400, ..., 808, 714, 808, 392],
-     [115, 398, 113, 400, ..., 150, 400, 149, 298],
-     [267, 412, 265, 413, ..., 300, 413, 299, 412],
+    [
+        [805, 392, 797, 400, ..., 808, 714, 808, 392],
+        [115, 398, 113, 400, ..., 150, 400, 149, 298],
+        [267, 412, 265, 413, ..., 300, 413, 299, 412],
+    ]
 )

-segments2boxes([s.reshape(-1,2) for s in segments])
->>> array([[ 741.66, 631.12, 133.31, 479.25],
-           [ 146.81, 649.69, 185.62, 502.88],
-           [ 281.81, 636.19, 118.12, 448.88]],
-           dtype=float32) # xywh bounding boxes
+segments2boxes([s.reshape(-1, 2) for s in segments])
+# >>> array([[ 741.66, 631.12, 133.31, 479.25],
+#            [ 146.81, 649.69, 185.62, 502.88],
+#            [ 281.81, 636.19, 118.12, 448.88]],
+#           dtype=float32)  # xywh bounding boxes
 ```

 To understand how this function works, visit the [reference page](../reference/utils/ops.md#ultralytics.utils.ops.segments2boxes)
@@ -155,10 +157,11 @@ Compresses a single image file to reduced size while preserving its aspect ratio

 ```{ .py .annotate }
 from pathlib import Path

 from ultralytics.data.utils import compress_one_image

-for f in Path('path/to/dataset').rglob('*.jpg'):
-    compress_one_image(f)#(1)!
+for f in Path("path/to/dataset").rglob("*.jpg"):
+    compress_one_image(f)  # (1)!
 ```

 1. Nothing returns from this function
@@ -170,10 +173,10 @@ Automatically split a dataset into `train`/`val`/`test` splits and save the resu
 ```{ .py .annotate }
 from ultralytics.data.utils import autosplit

-autosplit( #(1)!
+autosplit(  # (1)!
     path="path/to/images",
-    weights=(0.9, 0.1, 0.0), # (train, validation, test) fractional splits
-    annotated_only=False # split only images with annotation file when True
+    weights=(0.9, 0.1, 0.0),  # (train, validation, test) fractional splits
+    annotated_only=False,  # split only images with annotation file when True
 )
 ```
@@ -194,9 +197,7 @@ import numpy as np
 from ultralytics.data.utils import polygon2mask

 imgsz = (1080, 810)
-polygon = np.array(
-    [805, 392, 797, 400, ..., 808, 714, 808, 392], # (238, 2)
-)
+polygon = np.array([805, 392, 797, 400, ..., 808, 714, 808, 392])  # (238, 2)

 mask = polygon2mask(
     imgsz,  # tuple
@@ -213,32 +214,36 @@ mask = polygon2mask(
 To manage bounding box data, the `Bboxes` class will help to convert between box coordinate formatting, scale box dimensions, calculate areas, include offsets, and more!

 ```python
 import numpy as np
 from ultralytics.utils.instance import Bboxes

 boxes = Bboxes(
     bboxes=np.array(
-        [[ 22.878, 231.27, 804.98, 756.83,],
-         [ 48.552, 398.56, 245.35, 902.71,],
-         [ 669.47, 392.19, 809.72, 877.04,],
-         [ 221.52, 405.8, 344.98, 857.54,],
-         [ 0, 550.53, 63.01, 873.44,],
-         [ 0.0584, 254.46, 32.561, 324.87,]]
+        [
+            [22.878, 231.27, 804.98, 756.83],
+            [48.552, 398.56, 245.35, 902.71],
+            [669.47, 392.19, 809.72, 877.04],
+            [221.52, 405.8, 344.98, 857.54],
+            [0, 550.53, 63.01, 873.44],
+            [0.0584, 254.46, 32.561, 324.87],
+        ]
     ),
     format="xyxy",
 )

 boxes.areas()
->>> array([ 4.1104e+05, 99216, 68000, 55772, 20347, 2288.5])
+# >>> array([ 4.1104e+05, 99216, 68000, 55772, 20347, 2288.5])

 boxes.convert("xywh")
-boxes.bboxes
->>> array(
-    [[ 413.93, 494.05, 782.1, 525.56],
-     [ 146.95, 650.63, 196.8, 504.15],
-     [ 739.6, 634.62, 140.25, 484.85],
-     [ 283.25, 631.67, 123.46, 451.74],
-     [ 31.505, 711.99, 63.01, 322.91],
-     [ 16.31, 289.67, 32.503, 70.41]]
-)
+print(boxes.bboxes)
+# >>> array(
+#     [[ 413.93, 494.05, 782.1, 525.56],
+#      [ 146.95, 650.63, 196.8, 504.15],
+#      [ 739.6, 634.62, 140.25, 484.85],
+#      [ 283.25, 631.67, 123.46, 451.74],
+#      [ 31.505, 711.99, 63.01, 322.91],
+#      [ 16.31, 289.67, 32.503, 70.41]]
+# )
 ```

 See the [`Bboxes` reference section](../reference/utils/instance.md#ultralytics.utils.instance.Bboxes) for more attributes and methods available.
@@ -257,37 +262,39 @@ import numpy as np
 from ultralytics.utils.ops import scale_boxes

 image = cv.imread("ultralytics/assets/bus.jpg")
-*(h, w), c = image.shape
+h, w, c = image.shape
 resized = cv.resize(image, None, (), fx=1.2, fy=1.2)
-*(new_h, new_w), _ = resized.shape
+new_h, new_w, _ = resized.shape

 xyxy_boxes = np.array(
-    [[ 22.878, 231.27, 804.98, 756.83,],
-     [ 48.552, 398.56, 245.35, 902.71,],
-     [ 669.47, 392.19, 809.72, 877.04,],
-     [ 221.52, 405.8, 344.98, 857.54,],
-     [ 0, 550.53, 63.01, 873.44,],
-     [ 0.0584, 254.46, 32.561, 324.87,]]
+    [
+        [22.878, 231.27, 804.98, 756.83],
+        [48.552, 398.56, 245.35, 902.71],
+        [669.47, 392.19, 809.72, 877.04],
+        [221.52, 405.8, 344.98, 857.54],
+        [0, 550.53, 63.01, 873.44],
+        [0.0584, 254.46, 32.561, 324.87],
+    ]
 )

 new_boxes = scale_boxes(
-    img1_shape=(h, w), # original image dimensions
-    boxes=xyxy_boxes, # boxes from original image
+    img1_shape=(h, w),  # original image dimensions
+    boxes=xyxy_boxes,  # boxes from original image
     img0_shape=(new_h, new_w),  # resized image dimensions (scale to)
     ratio_pad=None,
     padding=False,
     xywh=False,
 )

-new_boxes#(1)!
->>> array(
-    [[ 27.454, 277.52, 965.98, 908.2],
-     [ 58.262, 478.27, 294.42, 1083.3],
-     [ 803.36, 470.63, 971.66, 1052.4],
-     [ 265.82, 486.96, 413.98, 1029],
-     [ 0, 660.64, 75.612, 1048.1],
-     [ 0.0701, 305.35, 39.073, 389.84]]
-)
+print(new_boxes)  # (1)!
+# >>> array(
+#     [[ 27.454, 277.52, 965.98, 908.2],
+#      [ 58.262, 478.27, 294.42, 1083.3],
+#      [ 803.36, 470.63, 971.66, 1052.4],
+#      [ 265.82, 486.96, 413.98, 1029],
+#      [ 0, 660.64, 75.612, 1048.1],
+#      [ 0.0701, 305.35, 39.073, 389.84]]
+# )
 ```

 1. Bounding boxes scaled for the new image size
@@ -303,24 +310,26 @@ import numpy as np
 from ultralytics.utils.ops import xyxy2xywh

 xyxy_boxes = np.array(
-    [[ 22.878, 231.27, 804.98, 756.83,],
-     [ 48.552, 398.56, 245.35, 902.71,],
-     [ 669.47, 392.19, 809.72, 877.04,],
-     [ 221.52, 405.8, 344.98, 857.54,],
-     [ 0, 550.53, 63.01, 873.44,],
-     [ 0.0584, 254.46, 32.561, 324.87,]]
+    [
+        [22.878, 231.27, 804.98, 756.83],
+        [48.552, 398.56, 245.35, 902.71],
+        [669.47, 392.19, 809.72, 877.04],
+        [221.52, 405.8, 344.98, 857.54],
+        [0, 550.53, 63.01, 873.44],
+        [0.0584, 254.46, 32.561, 324.87],
+    ]
 )
 xywh = xyxy2xywh(xyxy_boxes)

-xywh
->>> array(
-    [[ 413.93, 494.05, 782.1, 525.56],
-     [ 146.95, 650.63, 196.8, 504.15],
-     [ 739.6, 634.62, 140.25, 484.85],
-     [ 283.25, 631.67, 123.46, 451.74],
-     [ 31.505, 711.99, 63.01, 322.91],
-     [ 16.31, 289.67, 32.503, 70.41]]
-)
+print(xywh)
+# >>> array(
+#     [[ 413.93, 494.05, 782.1, 525.56],
+#      [ 146.95, 650.63, 196.8, 504.15],
+#      [ 739.6, 634.62, 140.25, 484.85],
+#      [ 283.25, 631.67, 123.46, 451.74],
+#      [ 31.505, 711.99, 63.01, 322.91],
+#      [ 16.31, 289.67, 32.503, 70.41]]
+# )
 ```

 ### All Bounding Box Conversions
@@ -352,9 +361,9 @@ import cv2 as cv
 import numpy as np
 from ultralytics.utils.plotting import Annotator, colors

-names { #(1)!
-    0: "person",
-    5: "bus",
+names = {  # (1)!
+    0: "person",
+    5: "bus",
     11: "stop sign",
 }
@@ -362,18 +371,20 @@ image = cv.imread("ultralytics/assets/bus.jpg")
 ann = Annotator(
     image,
     line_width=None, # default auto-size
-    font_size=None, # default auto-size
-    font="Arial.ttf", # must be ImageFont compatible
-    pil=False, # use PIL, otherwise uses OpenCV
+    font_size=None,  # default auto-size
+    font="Arial.ttf",  # must be ImageFont compatible
+    pil=False,  # use PIL, otherwise uses OpenCV
 )

 xyxy_boxes = np.array(
-    [[ 5, 22.878, 231.27, 804.98, 756.83,], # class-idx x1 y1 x2 y2
-     [ 0, 48.552, 398.56, 245.35, 902.71,],
-     [ 0, 669.47, 392.19, 809.72, 877.04,],
-     [ 0, 221.52, 405.8, 344.98, 857.54,],
-     [ 0, 0, 550.53, 63.01, 873.44,],
-     [11, 0.0584, 254.46, 32.561, 324.87,]]
+    [
+        [5, 22.878, 231.27, 804.98, 756.83],  # class-idx x1 y1 x2 y2
+        [0, 48.552, 398.56, 245.35, 902.71],
+        [0, 669.47, 392.19, 809.72, 877.04],
+        [0, 221.52, 405.8, 344.98, 857.54],
+        [0, 0, 550.53, 63.01, 873.44],
+        [11, 0.0584, 254.46, 32.561, 324.87],
+    ]
 )

 for nb, box in enumerate(xyxy_boxes):
@@ -412,7 +423,7 @@ ann = Annotator(
 for obb in obb_boxes:
     c_idx, *obb = obb
     obb = np.array(obb).reshape(-1, 4, 2).squeeze()
-    label = f"{names.get(int(c_idx))}"
+    label = f"{obb_names.get(int(c_idx))}"
     ann.box_label(
         obb,
         label,
@@ -434,11 +445,11 @@ Check duration for code to run/process either using `with` or as a decorator.
 ```python
 from ultralytics.utils.ops import Profile

-with Profile(device=device) as dt:
+with Profile(device="cuda:0") as dt:
     pass  # operation to measure

 print(dt)
->>> "Elapsed time is 9.5367431640625e-07 s"
+# >>> "Elapsed time is 9.5367431640625e-07 s"
 ```
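The context line above also mentions decorator usage, which the snippet doesn't show. A minimal sketch (not part of this commit), assuming `Profile` subclasses `contextlib.ContextDecorator` and accumulates elapsed seconds in its `t` attribute; `heavy_op` is a hypothetical function:

```python
from ultralytics.utils.ops import Profile

profile = Profile()  # accumulates elapsed seconds across decorated calls


@profile  # assumption: Profile is a contextlib.ContextDecorator
def heavy_op():
    sum(range(1_000_000))  # stand-in operation to measure


heavy_op()
print(profile.t)  # total elapsed seconds (assumption: stored on .t)
```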
### Ultralytics Supported Formats
@@ -446,11 +457,10 @@ print(dt)
 Want or need to use the formats of [images or videos types supported](../modes/predict.md#image-and-video-formats) by Ultralytics programmatically? Use these constants if you need.

 ```python
-from ultralytics.data.utils import IMG_FORMATS
-from ultralytics.data.utils import VID_FORMATS
+from ultralytics.data.utils import IMG_FORMATS, VID_FORMATS

 print(IMG_FORMATS)
->>> ('bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp', 'pfm')
+# >>> ('bmp', 'dng', 'jpeg', 'jpg', 'mpo', 'png', 'tif', 'tiff', 'webp', 'pfm')
 ```
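A short usage sketch for these constants (not part of this commit), filtering a directory down to supported images; `path/to/dataset` is a placeholder:

```python
from pathlib import Path

from ultralytics.data.utils import IMG_FORMATS

# Keep only files whose extension is a supported image format
images = [f for f in Path("path/to/dataset").iterdir() if f.suffix.lstrip(".").lower() in IMG_FORMATS]
```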
### Make Divisible
@@ -461,7 +471,7 @@ Calculates the nearest whole number to `x` to make evenly divisible when divided
 from ultralytics.utils.ops import make_divisible

 make_divisible(7, 3)
->>> 9
+# >>> 9
 make_divisible(7, 2)
->>> 8
+# >>> 8
 ```
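A typical use for this helper, sketched under the assumption that an input size should round up to a multiple of YOLOv8's max stride of 32:

```python
from ultralytics.utils.ops import make_divisible

imgsz = make_divisible(638, 32)  # round 638 up to the nearest multiple of 32
print(imgsz)
# >>> 640
```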
@@ -12,8 +12,8 @@ You can also explore other quickstart options for YOLOv5, such as our [Colab Not

 ## Prerequisites

-1. **Nvidia Driver**: Version 455.23 or higher. Download from [Nvidia's website](https://www.nvidia.com/Download/index.aspx).
-2. **Nvidia-Docker**: Allows Docker to interact with your local GPU. Installation instructions are available on the [Nvidia-Docker GitHub repository](https://github.com/NVIDIA/nvidia-docker).
+1. **NVIDIA Driver**: Version 455.23 or higher. Download from [Nvidia's website](https://www.nvidia.com/Download/index.aspx).
+2. **NVIDIA-Docker**: Allows Docker to interact with your local GPU. Installation instructions are available on the [NVIDIA-Docker GitHub repository](https://github.com/NVIDIA/nvidia-docker).
 3. **Docker Engine - CE**: Version 19.03 or higher. Download and installation instructions can be found on the [Docker website](https://docs.docker.com/install/).

 ## Step 1: Pull the YOLOv5 Docker Image