Docs cleanup and Google-style tracker docstrings (#6751)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Glenn Jocher 2023-12-03 04:12:33 +01:00 committed by GitHub
parent 60041014a8
commit 80802be1e5
44 changed files with 740 additions and 529 deletions


@@ -30,11 +30,16 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
---
-1. Load a model and run `predict()` method on a source.
+2. Load a model and run the `predict()` method on a source.
```py
-m = YOLO('yolov8n-seg.pt')
-res = m.predict()
+from ultralytics import YOLO
+
+# Load a model
+model = YOLO('yolov8n-seg.pt')
+
+# Run inference
+result = model.predict()
```
??? question "No Prediction Arguments?"
@@ -48,11 +53,11 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
This is helpful for rapid testing with the `predict()` method.
-For additional information about Segmentation Models, visit the [Segment Task](../tasks/segment.md/#models) page. To learn more about `predict()` method, see [Predict Mode](../modes/predict.md) section of the Documentation.
+For additional information about Segmentation Models, visit the [Segment Task](../tasks/segment.md#models) page. To learn more about the `predict()` method, see the [Predict Mode](../modes/predict.md) section of the documentation.
---
-1. Now iterate over the results and the contours. For workflows that want to save an image to file, the source image `base-name` and the detection `class-label` are retrieved for later use (optional).
+3. Now iterate over the results and the contours. For workflows that save an image to file, the source image `base-name` and the detection `class-label` are retrieved for later use (optional).
``` { .py .annotate }
# (2) Iterate detection results (helpful for multiple images)
@@ -67,8 +72,8 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
```
-1. To learn more about working with detection results, see [Boxes Section for Predict Mode](../modes/predict.md/#boxes).
-2. To learn more about `predict()` results see [Working with Results for Predict Mode](../modes/predict.md/#working-with-results)
+1. To learn more about working with detection results, see [Boxes Section for Predict Mode](../modes/predict.md#boxes).
+2. To learn more about `predict()` results see [Working with Results for Predict Mode](../modes/predict.md#working-with-results)
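For orientation, here is a minimal sketch of the loop this step describes, assuming `res` holds the `predict()` output from the first snippet (attribute names follow the Ultralytics `Results` API; this is illustrative, not the guide's verbatim code):

```python
from pathlib import Path

import numpy as np

for r in res:
    img = np.copy(r.orig_img)     # source image as a NumPy array
    img_name = Path(r.path).stem  # source image base-name

    # Iterate each detected object in this result
    for ci, c in enumerate(r):
        label = c.names[c.boxes.cls.tolist().pop()]  # detection class-label
```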
??? info "For-Loop"
@@ -76,7 +81,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
---
-1. Start with generating a binary mask from the source image and then draw a filled contour onto the mask. This will allow the object to be isolated from the other parts of the image. An example from `bus.jpg` for one of the detected `person` class objects is shown on the right.
+4. Start by generating a binary mask from the source image and then draw a filled contour onto the mask. This will allow the object to be isolated from the other parts of the image. An example from `bus.jpg` for one of the detected `person` class objects is shown on the right.
![Binary Mask Image](https://github.com/ultralytics/ultralytics/assets/62214284/59bce684-fdda-4b17-8104-0b4b51149aca){ width="240", align="right" }
``` { .py .annotate }
@@ -100,7 +105,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
```
-1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md/#masks).
+1. For more info on `c.masks.xy` see [Masks Section from Predict Mode](../modes/predict.md#masks).
2. Here, the values are cast into `np.int32` for compatibility with `drawContours()` function from OpenCV.
@@ -110,13 +115,13 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
<summary> Expand to understand what is happening when defining the <code>contour</code> variable.</summary>
<p>
-- `c.masks.xy` :: Provides the coordinates of the mask contour points in the format `(x, y)`. For more details, refer to the [Masks Section from Predict Mode](../modes/predict.md/#masks).
+- `c.masks.xy` :: Provides the coordinates of the mask contour points in the format `(x, y)`. For more details, refer to the [Masks Section from Predict Mode](../modes/predict.md#masks).
- `.pop()` :: As `masks.xy` is a list containing a single element, this element is extracted using the `pop()` method.
- `.astype(np.int32)` :: Using `masks.xy` will return with a data type of `float32`, but this won't be compatible with the OpenCV `drawContours()` function, so this will change the data type to `int32` for compatibility.
-- `.reshape(-1, 1, 2)` :: Reformats the data into the required shape of of `[N, 1, 2]` where `N` is the number of contour points, with each point represented by a single entry `1`, and the entry is composed of `2` values. The `-1` denotes that the number of values along this dimension is flexible.
+- `.reshape(-1, 1, 2)` :: Reformats the data into the required shape of `[N, 1, 2]` where `N` is the number of contour points, with each point represented by a single entry `1`, and the entry is composed of `2` values. The `-1` denotes that the number of values along this dimension is flexible.
</details>
<p></p>
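Putting the bullets above together, the mask step can be sketched as follows, reusing `img` and `c` from the iteration loop (a sketch, not the guide's verbatim code):

```python
import cv2
import numpy as np

# Binary mask with the same height/width as the source image, initialized to black
b_mask = np.zeros(img.shape[:2], np.uint8)

# Contour points: extract, cast for OpenCV compatibility, and reshape to [N, 1, 2]
contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)

# Draw the filled contour in white onto the mask
_ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
```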
@@ -139,7 +144,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
---
-1. Next the there are 2 options for how to move forward with the image from this point and a subsequent option for each.
+5. Next, there are 2 options for how to move forward with the image from this point, and a subsequent option for each.
### Object Isolation Options
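In outline, the two options are a black-background composite and a transparent-background composite. A hedged sketch of each, continuing with `img` and `b_mask` from above (the guide's exact code may differ):

```python
import cv2
import numpy as np

# Option 1: black background — keep only the pixels inside the mask
mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)  # 3-channel mask to match the BGR image
isolated = cv2.bitwise_and(mask3ch, img)

# Option 2: transparent background — stack the mask as a 4th (alpha) channel
isolated = np.dstack([img, b_mask])
```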
@@ -252,7 +257,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
---
-1. <u>What to do next is entirely left to the you as the developer.</u> A basic example of one possible next step (saving the image to file for future use) is shown.
+6. <u>What to do next is entirely left to you as the developer.</u> A basic example of one possible next step (saving the image to file for future use) is shown.
- **NOTE:** this step is optional and can be skipped if not required for your specific use case.
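One plausible version of that optional save step, reusing `img_name`, `label`, and `ci` from the loop and the `isolated` array from either option above (the filename pattern is illustrative):

```python
import cv2

# Save the isolated object to file; PNG preserves the alpha channel from Option 2
_ = cv2.imwrite(f'{img_name}_{label}-{ci}.png', isolated)
```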
@@ -314,6 +319,6 @@ for r in res:
1. The line populating `contour` is combined into a single line here, where it was split to multiple above.
2. {==What goes here is up to you!==}
3. See [Predict Mode](../modes/predict.md) for additional information.
-4. See [Segment Task](../tasks/segment.md/#models) for more information.
-5. Learn more about [Working with Results](../modes/predict.md/#working-with-results)
-6. Learn more about [Segmentation Mask Results](../modes/predict.md/#masks)
+4. See [Segment Task](../tasks/segment.md#models) for more information.
+5. Learn more about [Working with Results](../modes/predict.md#working-with-results)
+6. Learn more about [Segmentation Mask Results](../modes/predict.md#masks)


@@ -172,7 +172,7 @@ The rows index the label files, each corresponding to an image in your dataset,
fold_lbl_distrb.loc[f'split_{n}'] = ratio
```
-The ideal scenario is for all class ratios to be reasonably similar for each split and across classes. This, however, will be subject to the specifics of your dataset.
+The ideal scenario is for all class ratios to be reasonably similar for each split and across classes. This, however, will be subject to the specifics of your dataset.
4. Next, we create the directories and dataset YAML files for each split.
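A sketch of what that step can look like, assuming `kfolds` holds the (train, val) index splits from earlier and `classes` maps class indices to names; `save_path`, `kfolds`, and `classes` are placeholder names, not the guide's verbatim code:

```python
from pathlib import Path

import yaml

save_path = Path('kfold_dataset')  # assumed output root

for n, (train_idx, val_idx) in enumerate(kfolds, start=1):
    split_dir = save_path / f'split_{n}'

    # Create the image/label directory tree for this split
    for subset in ('train', 'val'):
        (split_dir / subset / 'images').mkdir(parents=True, exist_ok=True)
        (split_dir / subset / 'labels').mkdir(parents=True, exist_ok=True)

    # Write a dataset YAML that points YOLO at this split
    with open(split_dir / f'split_{n}_dataset.yaml', 'w') as f:
        yaml.safe_dump({'path': split_dir.as_posix(), 'train': 'train', 'val': 'val', 'names': classes}, f)
```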


@@ -33,10 +33,10 @@ import cv2
model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")
-counter = object_counter.ObjectCounter() # Init Object Counter
+counter = object_counter.ObjectCounter()  # Init Object Counter
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
counter.set_args(view_img=True, reg_pts=region_points,
-                 classes_names=model.model.names, draw_tracks=True)
+                 classes_names=model.names, draw_tracks=True)
while cap.isOpened():
success, frame = cap.read()
@@ -46,7 +46,7 @@ while cap.isOpened():
counter.start_counting(frame, tracks)
```
-???+ tip "Region is Moveable"
+???+ tip "Region is Movable"
You can move the region anywhere in the frame by clicking on its edges
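For context, a consolidated sketch of the loop these hunks modify, including the `model.track()` call that sits between them (imports follow the surrounding guide; treat this as illustrative rather than the file's exact contents):

```python
import cv2

from ultralytics import YOLO
from ultralytics.solutions import object_counter

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video/file.mp4")

counter = object_counter.ObjectCounter()  # Init Object Counter
region_points = [(20, 400), (1080, 404), (1080, 360), (20, 360)]
counter.set_args(view_img=True, reg_pts=region_points,
                 classes_names=model.names, draw_tracks=True)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    tracks = model.track(frame, persist=True, show=False)  # track objects across frames
    counter.start_counting(frame, tracks)

cap.release()
cv2.destroyAllWindows()
```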


@@ -18,7 +18,7 @@ Object counting in regions with [Ultralytics YOLOv8](https://github.com/ultralyt
allowfullscreen>
</iframe>
<br>
-<strong>Watch:</strong> Ultralytics YOLOv8 Object Counting in Multiple & Moveable Regions
+<strong>Watch:</strong> Ultralytics YOLOv8 Object Counting in Multiple & Movable Regions
</p>
## Advantages of Object Counting in Regions?
@@ -52,7 +52,7 @@ cd ultralytics/examples/YOLOv8-Region-Counter
Execute the following basic commands for inference.
-???+ tip "Region is Moveable"
+???+ tip "Region is Movable"
During video playback, you can interactively move the region within the video by clicking and dragging using the left mouse button.
@@ -75,12 +75,12 @@ python yolov8_region_counter.py --source "path/to/video.mp4" --view-img
### Optional Arguments
-| Name | Type | Default | Description |
-|----------------------|--------|--------------|-------------------------------------------|
-| `--source` | `str` | `None` | Path to video file, for webcam 0 |
-| `--line_thickness` | `int` | `2` | Bounding Box thickness |
-| `--save-img` | `bool` | `False` | Save the predicted video/image |
-| `--weights` | `str` | `yolov8n.pt` | Weights file path |
-| `--classes` | `list` | `None` | Detect specific classes i.e --classes 0 2 |
-| `--region-thickness` | `int` | `2` | Region Box thickness |
-| `--track-thickness` | `int` | `2` | Tracking line thickness |
+| Name | Type | Default | Description |
+|----------------------|--------|--------------|--------------------------------------------|
+| `--source` | `str` | `None` | Path to video file, for webcam 0 |
+| `--line_thickness` | `int` | `2` | Bounding Box thickness |
+| `--save-img` | `bool` | `False` | Save the predicted video/image |
+| `--weights` | `str` | `yolov8n.pt` | Weights file path |
+| `--classes` | `list` | `None` | Detect specific classes i.e. --classes 0 2 |
+| `--region-thickness` | `int` | `2` | Region Box thickness |
+| `--track-thickness` | `int` | `2` | Tracking line thickness |


@@ -51,7 +51,7 @@ from email.mime.text import MIMEText
```python
password = ""
-from_email = "" # must match the email used to generate the password
+from_email = ""  # must match the email used to generate the password
to_email = "" # receiver email
```
@@ -108,7 +108,7 @@ class ObjectDetection:
text = f'FPS: {int(fps)}'
text_size = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 1.0, 2)[0]
gap = 10
-cv2.rectangle(im0, (20 - gap, 70 - text_size[1] - gap), (20 + text_size[0] + gap, 70 + gap), (255, 255, 255),-1)
+cv2.rectangle(im0, (20 - gap, 70 - text_size[1] - gap), (20 + text_size[0] + gap, 70 + gap), (255, 255, 255), -1)
cv2.putText(im0, text, (20, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)
def plot_bboxes(self, results, im0):
@@ -135,7 +135,7 @@ class ObjectDetection:
results = self.predict(im0)
im0, class_ids = self.plot_bboxes(results, im0)
-if len(class_ids) > 0: # Only send email If not sent before
+if len(class_ids) > 0:  # Only send email if not sent before
if not self.email_sent:
send_email(to_email, from_email, len(class_ids))
self.email_sent = True
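For completeness, here is a hypothetical `send_email` helper consistent with the call above, using Gmail SMTP with the app password from the configuration snippet (the guide's actual helper may differ):

```python
import smtplib
from email.mime.text import MIMEText

def send_email(to_email, from_email, object_detected=1):
    """Send a plain-text alert email (hypothetical helper; names match the call above)."""
    message = MIMEText(f'ALERT - {object_detected} object(s) detected!')
    message['Subject'] = 'Security Alert'
    message['From'] = from_email
    message['To'] = to_email

    with smtplib.SMTP('smtp.gmail.com', 587) as server:
        server.starttls()                   # upgrade the connection to TLS
        server.login(from_email, password)  # app password from the config above
        server.sendmail(from_email, to_email, message.as_string())
```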


@@ -1,12 +1,12 @@
---
comments: true
description: A step-by-step guide on integrating Ultralytics YOLOv8 with Triton Inference Server for scalable and high-performance deep learning inference deployments.
-keywords: YOLOv8, Triton Inference Server, ONNX, Deep Learning Deployment, Scalable Inference, Ultralytics, NVIDIA, Object Detection, Cloud Inferencing
+keywords: YOLOv8, Triton Inference Server, ONNX, Deep Learning Deployment, Scalable Inference, Ultralytics, NVIDIA, Object Detection, Cloud Inference
---
# Triton Inference Server with Ultralytics YOLOv8
-The [Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) (formerly known as TensorRT Inference Server) is an open-source software solution developed by NVIDIA. It provides a cloud inferencing solution optimized for NVIDIA GPUs. Triton simplifies the deployment of AI models at scale in production. Integrating Ultralytics YOLOv8 with Triton Inference Server allows you to deploy scalable, high-performance deep learning inference workloads. This guide provides steps to set up and test the integration.
+The [Triton Inference Server](https://developer.nvidia.com/nvidia-triton-inference-server) (formerly known as TensorRT Inference Server) is an open-source software solution developed by NVIDIA. It provides a cloud inference solution optimized for NVIDIA GPUs. Triton simplifies the deployment of AI models at scale in production. Integrating Ultralytics YOLOv8 with Triton Inference Server allows you to deploy scalable, high-performance deep learning inference workloads. This guide provides steps to set up and test the integration.
<p align="center">
<br>
@@ -25,7 +25,7 @@ Triton Inference Server is designed to deploy a variety of AI models in producti
- Serving multiple models from a single server instance.
- Dynamic model loading and unloading without server restart.
-- Ensemble inferencing, allowing multiple models to be used together to achieve results.
+- Ensemble inference, allowing multiple models to be used together to achieve results.
- Model versioning for A/B testing and rolling updates.
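As a preview of the workflow this guide walks through, exporting a YOLOv8 model to ONNX (the format commonly served by Triton's ONNX Runtime backend) is a one-liner; the model choice here is illustrative:

```python
from ultralytics import YOLO

# Export YOLOv8n to ONNX; export() returns the path of the exported model file
model = YOLO('yolov8n.pt')
onnx_file = model.export(format='onnx', dynamic=True)
```

`dynamic=True` requests dynamic input axes, which is convenient when Triton clients send batches or image sizes that vary between requests.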
## Prerequisites