Update to lowercase MkDocs admonitions (#15990)
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent ce24c7273e
commit c2b647a768
133 changed files with 529 additions and 521 deletions
@@ -22,7 +22,7 @@ This guide provides a comprehensive overview of three fundamental types of data
 - Bar plots, on the other hand, are suitable for comparing quantities across different categories and showing relationships between a category and its numerical value.
 - Lastly, pie charts are effective for illustrating proportions among categories and showing parts of a whole.

-!!! Analytics "Analytics Examples"
+!!! analytics "Analytics Examples"

     === "Line Graph"

@@ -85,7 +85,7 @@ After installing the runtime, you need to plug in your Coral Edge TPU into a USB

 To use the Edge TPU, you need to convert your model into a compatible format. It is recommended that you run export on Google Colab, x86_64 Linux machine, using the official [Ultralytics Docker container](docker-quickstart.md), or using [Ultralytics HUB](../hub/quickstart.md), since the Edge TPU compiler is not available on ARM. See the [Export Mode](../modes/export.md) for the available arguments.

-!!! Exporting the model
+!!! exporting the model

     === "Python"

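For context, the viewer elides the tab bodies. The export step this hunk's admonition introduces follows the standard Ultralytics API; a minimal sketch, not the file's verbatim snippet (the model path is illustrative):

```python
from ultralytics import YOLO

# Load a PyTorch model (path is illustrative)
model = YOLO("yolov8n.pt")

# Convert to an Edge TPU-compatible TFLite model
model.export(format="edgetpu")
```
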
@@ -111,7 +111,7 @@ The exported model will be saved in the `<model_name>_saved_model/` folder with

 After exporting your model, you can run inference with it using the following code:

-!!! Running the model
+!!! running the model

     === "Python"

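The inference snippet this admonition introduces is also elided; it presumably loads the compiled `.tflite` file from the `<model_name>_saved_model/` folder named in the hunk header. A sketch, with an assumed file name:

```python
from ultralytics import YOLO

# Load the Edge TPU-compiled model; the exact file name inside
# <model_name>_saved_model/ is an assumption here
model = YOLO("yolov8n_saved_model/yolov8n_full_integer_quant_edgetpu.tflite")

# Run inference on an image (path is illustrative)
results = model("path/to/image.jpg")
```
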
@@ -170,7 +170,7 @@ Make sure to uninstall any previous Coral Edge TPU runtime versions by following

 Yes, you can export your Ultralytics YOLOv8 model to be compatible with the Coral Edge TPU. It is recommended to perform the export on Google Colab, an x86_64 Linux machine, or using the [Ultralytics Docker container](docker-quickstart.md). You can also use Ultralytics HUB for exporting. Here is how you can export your model using Python and CLI:

-!!! Exporting the model
+!!! exporting the model

     === "Python"

@@ -212,7 +212,7 @@ For a specific wheel, such as TensorFlow 2.15.0 `tflite-runtime`, you can downlo

 After exporting your YOLOv8 model to an Edge TPU-compatible format, you can run inference using the following code snippets:

-!!! Running the model
+!!! running the model

     === "Python"

@@ -21,7 +21,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti

 <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/deepstream-nvidia-jetson.avif" alt="DeepStream on NVIDIA Jetson">

-!!! Note
+!!! note

     This guide has been tested with both [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) which is based on NVIDIA Jetson Orin NX 16GB running JetPack release of [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) and [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html) which is based on NVIDIA Jetson Nano 4GB running JetPack release of [JP4.6.4](https://developer.nvidia.com/jetpack-sdk-464). It is expected to work across all the NVIDIA Jetson hardware lineup including latest and legacy.

@@ -39,7 +39,7 @@ Before you start to follow this guide:
 - For JetPack 4.6.4, install [DeepStream 6.0.1](https://docs.nvidia.com/metropolis/deepstream/6.0.1/dev-guide/text/DS_Quickstart.html)
 - For JetPack 5.1.3, install [DeepStream 6.3](https://docs.nvidia.com/metropolis/deepstream/6.3/dev-guide/text/DS_Quickstart.html)

-!!! Tip
+!!! tip

     In this guide we have used the Debian package method of installing DeepStream SDK to the Jetson device. You can also visit the [DeepStream SDK on Jetson (Archived)](https://developer.nvidia.com/embedded/deepstream-on-jetson-downloads-archived) to access legacy versions of DeepStream.

@@ -67,7 +67,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
 wget https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt
 ```

-!!! Note
+!!! note

     You can also use a [custom trained YOLOv8 model](https://docs.ultralytics.com/modes/train/).

@@ -77,7 +77,7 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
 python3 utils/export_yoloV8.py -w yolov8s.pt
 ```

-!!! Note "Pass the below arguments to the above command"
+!!! note "Pass the below arguments to the above command"

     For DeepStream 6.0.1, use opset 12 or lower. The default opset is 16.

@@ -175,13 +175,13 @@ Here we are using [marcoslucianops/DeepStream-Yolo](https://github.com/marcosluc
 deepstream-app -c deepstream_app_config.txt
 ```

-!!! Note
+!!! note

     It will take a long time to generate the TensorRT engine file before starting the inference. So please be patient.

 <div align=center><img width=1000 src="https://github.com/ultralytics/docs/releases/download/0/yolov8-with-deepstream.avif" alt="YOLOv8 with deepstream"></div>

-!!! Tip
+!!! tip

     If you want to convert the model to FP16 precision, simply set `model-engine-file=model_b1_gpu0_fp16.engine` and `network-mode=2` inside `config_infer_primary_yoloV8.txt`

@@ -217,7 +217,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
 done
 ```

-!!! Note
+!!! note

     NVIDIA recommends at least 500 images to get a good accuracy. On this example, 1000 images are chosen to get better accuracy (more images = more accuracy). You can set it from **head -1000**. For example, for 2000 images, **head -2000**. This process can take a long time.

@@ -234,7 +234,7 @@ If you want to use INT8 precision for inference, you need to follow the steps be
 export INT8_CALIB_BATCH_SIZE=1
 ```

-!!! Note
+!!! note

     Higher INT8_CALIB_BATCH_SIZE values will result in more accuracy and faster calibration speed. Set it according to you GPU memory.

@@ -36,7 +36,7 @@ Measuring the gap between two objects is known as distance calculation within a

 - Click on any two bounding boxes with Left Mouse click for distance calculation

-!!! Example "Distance Calculation using YOLOv8 Example"
+!!! example "Distance Calculation using YOLOv8 Example"

     === "Video Stream"

@@ -39,7 +39,7 @@ A heatmap generated with [Ultralytics YOLOv8](https://github.com/ultralytics/ult
 - `heatmap_alpha`: Ensure this value is within the range (0.0 - 1.0).
 - `decay_factor`: Used for removing heatmap after an object is no longer in the frame, its value should also be in the range (0.0 - 1.0).

-!!! Example "Heatmaps using Ultralytics YOLOv8 Example"
+!!! example "Heatmaps using Ultralytics YOLOv8 Example"

     === "Heatmap"

@@ -69,7 +69,7 @@ The process is repeated until either the set number of iterations is reached or

 Here's how to use the `model.tune()` method to utilize the `Tuner` class for hyperparameter tuning of YOLOv8n on COCO8 for 30 epochs with an AdamW optimizer and skipping plotting, checkpointing and validation other than on final epoch for faster Tuning.

-!!! Example
+!!! example

     === "Python"

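The context paragraph fully specifies the elided `model.tune()` call except for the iteration count; a sketch under that reading (`iterations=300` is an assumed value):

```python
from ultralytics import YOLO

# Initialize the YOLOv8n model
model = YOLO("yolov8n.pt")

# Tune on COCO8 for 30 epochs with AdamW, skipping plots, checkpoints,
# and per-epoch validation for faster tuning; iterations=300 is assumed
model.tune(
    data="coco8.yaml",
    epochs=30,
    iterations=300,
    optimizer="AdamW",
    plots=False,
    save=False,
    val=False,
)
```
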
@@ -212,7 +212,7 @@ For deeper insights, you can explore the `Tuner` class source code and accompany

 To optimize the learning rate for Ultralytics YOLO, start by setting an initial learning rate using the `lr0` parameter. Common values range from `0.001` to `0.01`. During the hyperparameter tuning process, this value will be mutated to find the optimal setting. You can utilize the `model.tune()` method to automate this process. For example:

-!!! Example
+!!! example

     === "Python"

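Since `model.tune()` forwards training arguments, seeding the initial learning rate described here would look roughly like the following sketch (`lr0=0.01` picks the top of the quoted range; the iteration count is again an assumption):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Seed the search with lr0=0.01; the tuner mutates it across iterations
model.tune(data="coco8.yaml", lr0=0.01, epochs=30, iterations=300, optimizer="AdamW")
```
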
@@ -68,7 +68,7 @@ Let's work together to make the Ultralytics YOLO ecosystem more robust and versa

 Training a custom object detection model with Ultralytics YOLO is straightforward. Start by preparing your dataset in the correct format and installing the Ultralytics package. Use the following code to initiate training:

-!!! Example
+!!! example

     === "Python"

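The elided training snippet is almost certainly the canonical `model.train()` call; a sketch with a placeholder dataset path:

```python
from ultralytics import YOLO

# Start from pretrained detection weights
model = YOLO("yolov8n.pt")

# Train on a custom dataset; the YAML path is a placeholder
results = model.train(data="path/to/your_dataset.yaml", epochs=100, imgsz=640)
```
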
@@ -34,7 +34,7 @@ There are two types of instance segmentation tracking available in the Ultralyti
 |  |  |
 | Ultralytics Instance Segmentation 😍 | Ultralytics Instance Segmentation with Object Tracking 🔥 |

-!!! Example "Instance Segmentation and Tracking"
+!!! example "Instance Segmentation and Tracking"

     === "Instance Segmentation"

@@ -146,7 +146,7 @@ For any inquiries, feel free to post your questions in the [Ultralytics Issue Se

 To perform instance segmentation using Ultralytics YOLOv8, initialize the YOLO model with a segmentation version of YOLOv8 and process video frames through it. Here's a simplified code example:

-!!! Example
+!!! example

     === "Python"

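A plausible shape for the simplified segmentation example the hunk refers to; a sketch using OpenCV for frame I/O (the video path is illustrative):

```python
import cv2
from ultralytics import YOLO

# Segmentation variant of YOLOv8
model = YOLO("yolov8n-seg.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    results = model(frame)         # run segmentation on the frame
    annotated = results[0].plot()  # draw masks and boxes
    cv2.imshow("Instance Segmentation", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
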
@@ -200,7 +200,7 @@ Ultralytics YOLOv8 offers real-time performance, superior accuracy, and ease of

 To implement object tracking, use the `model.track` method and ensure that each object's ID is consistently assigned across frames. Below is a simple example:

-!!! Example
+!!! example

     === "Python"

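For the tracking variant, the key difference from plain inference is calling `model.track()` with `persist=True` so IDs carry across frames; a sketch:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
cap = cv2.VideoCapture("path/to/video.mp4")

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    # persist=True keeps tracker state so each object's ID stays stable
    results = model.track(frame, persist=True)
    annotated = results[0].plot()
    cv2.imshow("Segmentation with Tracking", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```
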
@@ -331,7 +331,7 @@ For more insights, check out our [blog post](https://www.ultralytics.com/blog/ac

 Yes, YOLOv8 models can be deployed on mobile devices using TensorFlow Lite (TF Lite) for both Android and iOS platforms. TF Lite is designed for mobile and embedded devices, providing efficient on-device inference.

-!!! Example
+!!! example

     === "Python"

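The elided TF Lite example presumably pairs an export with loading the result; a sketch (the exported file name depends on export settings and is an assumption):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to TensorFlow Lite for on-device mobile inference
model.export(format="tflite")

# Load the exported model; the exact file name is an assumption
tflite_model = YOLO("yolov8n_saved_model/yolov8n_float32.tflite")
results = tflite_model("path/to/image.jpg")
```
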
@@ -63,7 +63,7 @@ The `imgsz` validation parameter sets the maximum dimension for image resizing,

 If you want to get a deeper understanding of your YOLOv8 model's performance, you can easily access specific evaluation metrics with a few lines of Python code. The code snippet below will let you load your model, run an evaluation, and print out various metrics that show how well your model is doing.

-!!! Example "Usage"
+!!! example "Usage"

     === "Python"

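The metrics snippet this paragraph describes likely follows the standard validation pattern; a sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Run validation to compute evaluation metrics
metrics = model.val()

print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75
print(metrics.box.maps)   # per-class mAP50-95 list
```
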
@@ -165,7 +165,7 @@ Improving mean average precision (mAP) for a YOLOv8 model involves several steps

 You can access YOLOv8 model evaluation metrics using Python with the following steps:

-!!! Example "Usage"
+!!! example "Usage"

     === "Python"

@@ -21,7 +21,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti

 <img width="1024" src="https://github.com/ultralytics/docs/releases/download/0/nvidia-jetson-ecosystem.avif" alt="NVIDIA Jetson Ecosystem">

-!!! Note
+!!! note

     This guide has been tested with both [Seeed Studio reComputer J4012](https://www.seeedstudio.com/reComputer-J4012-p-5586.html) which is based on NVIDIA Jetson Orin NX 16GB running the latest stable JetPack release of [JP6.0](https://developer.nvidia.com/embedded/jetpack-sdk-60), JetPack release of [JP5.1.3](https://developer.nvidia.com/embedded/jetpack-sdk-513) and [Seeed Studio reComputer J1020 v2](https://www.seeedstudio.com/reComputer-J1020-v2-p-5498.html) which is based on NVIDIA Jetson Nano 4GB running JetPack release of [JP4.6.1](https://developer.nvidia.com/embedded/jetpack-sdk-461). It is expected to work across all the NVIDIA Jetson hardware lineup including latest and legacy.

@@ -57,7 +57,7 @@ The first step after getting your hands on an NVIDIA Jetson device is to flash N
 3. If you own a Seeed Studio reComputer J4012 device, you can [flash JetPack to the included SSD](https://wiki.seeedstudio.com/reComputer_J4012_Flash_Jetpack/) and if you own a Seeed Studio reComputer J1020 v2 device, you can [flash JetPack to the eMMC/ SSD](https://wiki.seeedstudio.com/reComputer_J2021_J202_Flash_Jetpack/).
 4. If you own any other third party device powered by the NVIDIA Jetson module, it is recommended to follow [command-line flashing](https://docs.nvidia.com/jetson/archives/r35.5.0/DeveloperGuide/IN/QuickStart.html).

-!!! Note
+!!! note

     For methods 3 and 4 above, after flashing the system and booting the device, please enter "sudo apt update && sudo apt install nvidia-jetpack -y" on the device terminal to install all the remaining JetPack components needed.

@@ -157,7 +157,7 @@ wget https://nvidia.box.com/shared/static/48dtuob7meiw6ebgfsfqakc9vse62sg4.whl -
 pip install onnxruntime_gpu-1.18.0-cp310-cp310-linux_aarch64.whl
 ```

-!!! Note
+!!! note

     `onnxruntime-gpu` will automatically revert back the numpy version to latest. So we need to reinstall numpy to `1.23.5` to fix an issue by executing:

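(The command elided after this note is presumably `pip install numpy==1.23.5`.)
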
@@ -230,7 +230,7 @@ wget https://nvidia.box.com/shared/static/zostg6agm00fb6t5uisw51qi6kpcuwzd.whl -
 pip install onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl
 ```

-!!! Note
+!!! note

     `onnxruntime-gpu` will automatically revert back the numpy version to latest. So we need to reinstall numpy to `1.23.5` to fix an issue by executing:

@@ -244,7 +244,7 @@ Out of all the model export formats supported by Ultralytics, TensorRT delivers

 The YOLOv8n model in PyTorch format is converted to TensorRT to run inference with the exported model.

-!!! Example
+!!! example

     === "Python"

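The elided TensorRT example presumably exports an engine and reloads it for prediction, matching the CLI line visible in the next hunk; a sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Build a TensorRT engine for the Jetson's GPU
model.export(format="engine")

# Load the engine and run inference
trt_model = YOLO("yolov8n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```
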
@@ -274,7 +274,7 @@ The YOLOv8n model in PyTorch format is converted to TensorRT to run inference wi
 yolo predict model=yolov8n.engine source='https://ultralytics.com/images/bus.jpg'
 ```

-!!! Note
+!!! note

     Visit the [Export page](../modes/export.md#arguments) to access additional arguments when exporting models to different model formats

@@ -294,7 +294,7 @@ Even though all model exports are working with NVIDIA Jetson, we have only inclu

 The below table represents the benchmark results for five different models (YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) across ten different formats (PyTorch, TorchScript, ONNX, OpenVINO, TensorRT, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN), giving us the status, size, mAP50-95(B) metric, and inference time for each combination.

-!!! Performance
+!!! performance

     === "YOLOv8n"

@@ -377,7 +377,7 @@ The below table represents the benchmark results for five different models (YOLO

 To reproduce the above Ultralytics benchmarks on all export [formats](../modes/export.md) run this code:

-!!! Example
+!!! example

     === "Python"

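Reproducing the benchmarks likely uses the `benchmark` utility; a sketch (the dataset and device arguments are assumptions):

```python
from ultralytics.utils.benchmarks import benchmark

# Benchmark YOLOv8n across all export formats on GPU device 0
benchmark(model="yolov8n.pt", data="coco8.yaml", imgsz=640, half=False, device=0)
```
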
@@ -27,7 +27,7 @@ Object blurring with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 - **Selective Focus**: YOLOv8 allows for selective blurring, enabling users to target specific objects, ensuring a balance between privacy and retaining relevant visual information.
 - **Real-time Processing**: YOLOv8's efficiency enables object blurring in real-time, making it suitable for applications requiring on-the-fly privacy enhancements in dynamic environments.

-!!! Example "Object Blurring using YOLOv8 Example"
+!!! example "Object Blurring using YOLOv8 Example"

     === "Object Blurring"

@@ -46,7 +46,7 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 |  |  |
 | Conveyor Belt Packets Counting Using Ultralytics YOLOv8 | Fish Counting in Sea using Ultralytics YOLOv8 |

-!!! Example "Object Counting using YOLOv8 Example"
+!!! example "Object Counting using YOLOv8 Example"

     === "Count in Region"

@@ -34,7 +34,7 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 |  |
 | Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |

-!!! Example "Object Cropping using YOLOv8 Example"
+!!! example "Object Cropping using YOLOv8 Example"

     === "Object Cropping"

@@ -38,18 +38,18 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr

 ### Selection of Points

-!!! Tip "Point Selection is now Easy"
+!!! tip "Point Selection is now Easy"

     Choosing parking points is a critical and complex task in parking management systems. Ultralytics streamlines this process by providing a tool that lets you define parking lot areas, which can be utilized later for additional processing.

 - Capture a frame from the video or camera stream where you want to manage the parking lot.
 - Use the provided code to launch a graphical interface, where you can select an image and start outlining parking regions by mouse click to create polygons.

-!!! Warning "Image Size"
+!!! warning "Image Size"

     Max Image Size of 1920 * 1080 supported

-!!! Example "Parking slots Annotator Ultralytics YOLOv8"
+!!! example "Parking slots Annotator Ultralytics YOLOv8"

     === "Parking Annotator"

@@ -65,7 +65,7 @@ Parking management with [Ultralytics YOLOv8](https://github.com/ultralytics/ultr

 ### Python Code for Parking Management

-!!! Example "Parking management using YOLOv8 Example"
+!!! example "Parking management using YOLOv8 Example"

     === "Parking Management"

@@ -33,7 +33,7 @@ Queue management using [Ultralytics YOLOv8](https://github.com/ultralytics/ultra
 |  |  |
 | Queue management at airport ticket counter Using Ultralytics YOLOv8 | Queue monitoring in crowd Ultralytics YOLOv8 |

-!!! Example "Queue Management using YOLOv8 Example"
+!!! example "Queue Management using YOLOv8 Example"

     === "Queue Manager"

@@ -19,7 +19,7 @@ This comprehensive guide provides a detailed walkthrough for deploying Ultralyti
 <strong>Watch:</strong> Raspberry Pi 5 updates and improvements.
 </p>

-!!! Note
+!!! note

     This guide has been tested with Raspberry Pi 4 and Raspberry Pi 5 running the latest [Raspberry Pi OS Bookworm (Debian 12)](https://www.raspberrypi.com/software/operating-systems/). Using this guide for older Raspberry Pi devices such as the Raspberry Pi 3 is expected to work as long as the same Raspberry Pi OS Bookworm is installed.

@@ -100,7 +100,7 @@ Out of all the model export formats supported by Ultralytics, [NCNN](https://doc

 The YOLOv8n model in PyTorch format is converted to NCNN to run inference with the exported model.

-!!! Example
+!!! example

     === "Python"

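The NCNN conversion presumably mirrors the other export examples, matching the CLI line visible in the next hunk; a sketch:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Export to NCNN, which suits ARM CPUs like the Raspberry Pi's
model.export(format="ncnn")

# Load the exported NCNN model directory and run inference
ncnn_model = YOLO("yolov8n_ncnn_model")
results = ncnn_model("https://ultralytics.com/images/bus.jpg")
```
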
@@ -130,7 +130,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t
 yolo predict model='yolov8n_ncnn_model' source='https://ultralytics.com/images/bus.jpg'
 ```

-!!! Tip
+!!! tip

     For more details about supported export options, visit the [Ultralytics documentation page on deployment options](https://docs.ultralytics.com/guides/model-deployment-options).

@@ -138,7 +138,7 @@ The YOLOv8n model in PyTorch format is converted to NCNN to run inference with t

 YOLOv8 benchmarks were run by the Ultralytics team on nine different model formats measuring speed and accuracy: PyTorch, TorchScript, ONNX, OpenVINO, TF SavedModel, TF GraphDef, TF Lite, PaddlePaddle, NCNN. Benchmarks were run on both Raspberry Pi 5 and Raspberry Pi 4 at FP32 precision with default input image size of 640.

-!!! Note
+!!! note

     We have only included benchmarks for YOLOv8n and YOLOv8s models because other models sizes are too big to run on the Raspberry Pis and does not offer decent performance.

@@ -224,7 +224,7 @@ The below table represents the benchmark results for two different models (YOLOv

 To reproduce the above Ultralytics benchmarks on all [export formats](../modes/export.md), run this code:

-!!! Example
+!!! example

     === "Python"

@@ -251,11 +251,11 @@ To reproduce the above Ultralytics benchmarks on all [export formats](../modes/e

 When using Raspberry Pi for Computer Vision projects, it can be essentially to grab real-time video feeds to perform inference. The onboard MIPI CSI connector on the Raspberry Pi allows you to connect official Raspberry PI camera modules. In this guide, we have used a [Raspberry Pi Camera Module 3](https://www.raspberrypi.com/products/camera-module-3) to grab the video feeds and perform inference using YOLOv8 models.

-!!! Tip
+!!! tip

     Learn more about the [different camera modules offered by Raspberry Pi](https://www.raspberrypi.com/documentation/accessories/camera.html) and also [how to get started with the Raspberry Pi camera modules](https://www.raspberrypi.com/documentation/computers/camera_software.html#introducing-the-raspberry-pi-cameras).

-!!! Note
+!!! note

     Raspberry Pi 5 uses smaller CSI connectors than the Raspberry Pi 4 (15-pin vs 22-pin), so you will need a [15-pin to 22pin adapter cable](https://www.raspberrypi.com/products/camera-cable) to connect to a Raspberry Pi Camera.

@@ -267,7 +267,7 @@ Execute the following command after connecting the camera to the Raspberry Pi. Y
 rpicam-hello
 ```

-!!! Tip
+!!! tip

     Learn more about [`rpicam-hello` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-hello)

@@ -275,13 +275,13 @@ rpicam-hello

 There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.

-!!! Usage
+!!! usage

     === "Method 1"

         We can use `picamera2`which comes pre-installed with Raspberry Pi OS to access the camera and inference YOLOv8 models.

-!!! Example
+!!! example

     === "Python"

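Method 1's elided example presumably captures frames with `picamera2` and feeds them to the model; a sketch (resolution and pixel format are illustrative):

```python
import cv2
from picamera2 import Picamera2
from ultralytics import YOLO

# Configure and start the camera; settings are illustrative
picam2 = Picamera2()
picam2.preview_configuration.main.size = (1280, 720)
picam2.preview_configuration.main.format = "RGB888"
picam2.configure("preview")
picam2.start()

model = YOLO("yolov8n.pt")

while True:
    frame = picam2.capture_array()  # grab a frame from the camera
    results = model(frame)          # run YOLOv8 inference
    cv2.imshow("Camera", results[0].plot())
    if cv2.waitKey(1) == ord("q"):
        break

cv2.destroyAllWindows()
```
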
@@ -333,7 +333,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.

 Learn more about [`rpicam-vid` usage on official Raspberry Pi documentation](https://www.raspberrypi.com/documentation/computers/camera_software.html#rpicam-vid)

-!!! Example
+!!! example

     === "Python"

@@ -353,7 +353,7 @@ There are 2 methods of using the Raspberry Pi Camera to inference YOLOv8 models.
 yolo predict model=yolov8n.pt source="tcp://127.0.0.1:8888"
 ```

-!!! Tip
+!!! tip

     Check our document on [Inference Sources](https://docs.ultralytics.com/modes/predict/#inference-sources) if you want to change the image/ video input type

@@ -410,7 +410,7 @@ Ultralytics YOLOv8's NCNN format is highly optimized for mobile and embedded pla

 You can convert a PyTorch YOLOv8 model to NCNN format using either Python or CLI commands:

-!!! Example
+!!! example

     === "Python"

@@ -187,7 +187,7 @@ That's it! Now you're equipped to use YOLOv8 with SAHI for both standard and sli

 If you use SAHI in your research or development work, please cite the original SAHI paper and acknowledge the authors:

-!!! Quote ""
+!!! quote ""

     === "BibTeX"

@@ -38,7 +38,7 @@ keywords: Ultralytics YOLOv8, speed estimation, object tracking, computer vision
 |  |  |
 | Speed Estimation on Road using Ultralytics YOLOv8 | Speed Estimation on Bridge using Ultralytics YOLOv8 |

-!!! Example "Speed Estimation using YOLOv8 Example"
+!!! example "Speed Estimation using YOLOv8 Example"

     === "Speed Estimation"

@@ -38,7 +38,7 @@ Streamlit makes it simple to build and deploy interactive web applications. Comb

 Before you start building the application, ensure you have the Ultralytics Python Package installed. You can install it using the command **pip install ultralytics**

-!!! Example "Streamlit Application"
+!!! example "Streamlit Application"

     === "Python"

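The docs' Streamlit example is elided here; an illustrative stand-in, not the packaged solution, that wires a webcam feed through YOLOv8 into a Streamlit page (all names and the camera index are illustrative):

```python
import cv2
import streamlit as st
from ultralytics import YOLO

st.title("YOLOv8 Live Inference")

model = YOLO("yolov8n.pt")
frame_placeholder = st.empty()

cap = cv2.VideoCapture(0)  # webcam index 0
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    annotated = model(frame)[0].plot()
    # Streamlit expects RGB; OpenCV delivers BGR
    frame_placeholder.image(cv2.cvtColor(annotated, cv2.COLOR_BGR2RGB))

cap.release()
```
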
@@ -60,7 +60,7 @@ This will launch the Streamlit application in your default web browser. You will

 You can optionally supply a specific model in Python:

-!!! Example "Streamlit Application with a custom model"
+!!! example "Streamlit Application with a custom model"

     === "Python"

@@ -104,7 +104,7 @@ pip install ultralytics

 Then, you can create a basic Streamlit application to run live inference:

-!!! Example "Streamlit Application"
+!!! example "Streamlit Application"

     === "Python"

@@ -17,7 +17,7 @@ keywords: VisionEye, YOLOv8, Ultralytics, object mapping, object tracking, dista
 |  |  |  |
 | VisionEye View Object Mapping using Ultralytics YOLOv8 | VisionEye View Object Mapping with Object Tracking using Ultralytics YOLOv8 | VisionEye View with Distance Calculation using Ultralytics YOLOv8 |

-!!! Example "VisionEye Object Mapping using YOLOv8"
+!!! example "VisionEye Object Mapping using YOLOv8"

     === "VisionEye Object Mapping"

@@ -34,7 +34,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 |  |  |
 | PushUps Counting | PullUps Counting |

-!!! Example "Workouts Monitoring Example"
+!!! example "Workouts Monitoring Example"

     === "Workouts Monitoring"