Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Glenn Jocher 2023-11-22 20:45:46 +01:00 committed by GitHub
parent 0c4e97443b
commit 16a13a1ce0
178 changed files with 14224 additions and 561 deletions

@@ -41,7 +41,7 @@ Once your model is trained and validated, the next logical step is to evaluate i
Run YOLOv8n benchmarks on all supported export formats, including ONNX, TensorRT, etc. See the Arguments section below for a full list of export arguments.
-!!! Example ""
+!!! Example
=== "Python"

@@ -48,7 +48,7 @@ Here are some of the standout functionalities:
Export a YOLOv8n model to a different format, such as ONNX or TensorRT. See the Arguments section below for a full list of export arguments.
-!!! Example ""
+!!! Example
=== "Python"

@@ -28,7 +28,7 @@ In the world of machine learning and computer vision, the process of making sens
| Manufacturing | Sports | Safety |
|:-------------------------------------------------:|:----------------------------------------------------:|:-------------------------------------------:|
| ![Vehicle Spare Parts Detection][car spare parts] | ![Football Player Detection][football player detect] | ![People Fall Detection][human fall detect] |
-| Vehicle Spare Parts Detection | Football Player Detection | People Fall Detection |
+| Vehicle Spare Parts Detection | Football Player Detection | People Fall Detection |
## Why Use Ultralytics YOLO for Inference?
@@ -715,5 +715,7 @@ Here's a Python script using OpenCV (`cv2`) and YOLOv8 to run inference on video
This script will run predictions on each frame of the video, visualize the results, and display them in a window. The loop can be exited by pressing 'q'.
[car spare parts]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/a0f802a8-0776-44cf-8f17-93974a4a28a1
[football player detect]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/7d320e1f-fc57-4d7f-a691-78ee579c3442
[human fall detect]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/86437c4a-3227-4eee-90ef-9efb697bdb43
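The full script referenced above falls outside this hunk; a condensed sketch of the same OpenCV loop, with a placeholder video path:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("path/to/video.mp4")  # placeholder path

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model(frame)         # run YOLOv8 inference on the frame
    annotated = results[0].plot()  # draw boxes and labels on a copy
    cv2.imshow("YOLOv8 Inference", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # exit the loop with 'q'
        break

cap.release()
cv2.destroyAllWindows()
```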

@@ -32,10 +32,10 @@ The output from Ultralytics trackers is consistent with standard object detectio
## Real-world Applications
-| Transportation | Retail | Aquaculture |
+| Transportation | Retail | Aquaculture |
|:----------------------------------:|:--------------------------------:|:----------------------------:|
| ![Vehicle Tracking][vehicle track] | ![People Tracking][people track] | ![Fish Tracking][fish track] |
-| Vehicle Tracking | People Tracking | Fish Tracking |
+| Vehicle Tracking | People Tracking | Fish Tracking |
## Features at a Glance
@@ -58,7 +58,7 @@ The default tracker is BoT-SORT.
To run the tracker on video streams, use a trained Detect, Segment, or Pose model such as YOLOv8n, YOLOv8n-seg, or YOLOv8n-pose.
-!!! Example ""
+!!! Example
=== "Python"
@@ -97,7 +97,7 @@ As can be seen in the above usage, tracking is available for all Detect, Segment
Tracking configuration shares properties with Predict mode, such as `conf`, `iou`, and `show`. For further configuration, refer to the [Predict](../modes/predict.md#inference-arguments) mode page.
-!!! Example ""
+!!! Example
=== "Python"
@@ -120,7 +120,7 @@ Tracking configuration shares properties with Predict mode, such as `conf`, `iou
Ultralytics also allows you to use a modified tracker configuration file. To do this, make a copy of a tracker config file (for example, `custom_tracker.yaml`) from [ultralytics/cfg/trackers](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/cfg/trackers) and modify any configuration (except the `tracker_type`) to suit your needs.
-!!! Example ""
+!!! Example
=== "Python"
@@ -354,5 +354,7 @@ To initiate your contribution, please refer to our [Contributing Guide](https://
Together, let's enhance the tracking capabilities of the Ultralytics YOLO ecosystem 🙏!
[vehicle track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/ee6e6038-383b-4f21-ac29-b2a1c7d386ab
[people track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/93bb4ee2-77a0-4e4e-8eb6-eb8f527f0527
[fish track]: https://github.com/RizwanMunawar/ultralytics/assets/62513924/a5146d0f-bfa8-4e0a-b7df-3c1446cd8142

@@ -236,7 +236,7 @@ To use a logger, select it from the dropdown menu in the code snippet above and
To use Comet:
-!!! Example ""
+!!! Example
=== "Python"
```python
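# The snippet is cut off by the diff context; a plausible completion based
# on the Comet quickstart (assumes `pip install comet_ml` and a Comet API key)
import comet_ml

comet_ml.init()  # prompts for or reads the API key, then auto-logs training
```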
@@ -254,7 +254,7 @@ Remember to sign in to your Comet account on their website and get your API key.
To use ClearML:
-!!! Example ""
+!!! Example
=== "Python"
```python
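# Cut off by the diff context; a plausible completion based on the ClearML
# docs (assumes `pip install clearml`)
import clearml

clearml.browser_login()  # opens a browser window to authenticate the session
```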
@@ -272,7 +272,7 @@ After running this script, you will need to sign in to your ClearML account on t
To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ultralytics/ultralytics/blob/main/examples/tutorial.ipynb):
-!!! Example ""
+!!! Example
=== "CLI"
```bash
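# Cut off by the diff context; in a Colab cell these run as notebook magics
# with a leading %, and 'ultralytics/runs' is an assumed logs directory
load_ext tensorboard
tensorboard --logdir ultralytics/runs  # replace with your 'runs' directory
```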
@@ -282,7 +282,7 @@ To use TensorBoard in [Google Colab](https://colab.research.google.com/github/ul
To use TensorBoard locally, run the command below and view the results at http://localhost:6006/.
-!!! Example ""
+!!! Example
=== "CLI"
```bash
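# Cut off by the diff context; a plausible completion, assuming results are
# saved under 'ultralytics/runs'
tensorboard --logdir ultralytics/runs  # then browse to http://localhost:6006/
```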

@@ -38,7 +38,7 @@ These are the notable functionalities offered by YOLOv8's Val mode:
Validate trained YOLOv8n model accuracy on the COCO128 dataset. No arguments need to be passed, as the `model` retains its training `data` and arguments as model attributes. See the Arguments section below for a full list of validation arguments.
-!!! Example ""
+!!! Example
=== "Python"