Update TFLite Docs images (#8605)
Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
parent 1146bb0582
commit 36408c974c
33 changed files with 112 additions and 107 deletions
@@ -72,7 +72,7 @@ from ultralytics import YOLO
 model = YOLO('yolov8n.pt')  # initialize model
 results = model('path/to/image.jpg')  # perform inference
-results.show()  # display results
+results[0].show()  # display results for the first image
 ```

 ---

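The change above reflects that `model()` returns a list of `Results`, one per input image, so `show()` is called on an element rather than on the list itself. A minimal sketch of the pattern, assuming only the standard `ultralytics` predict API (the image path is illustrative):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')
results = model('path/to/image.jpg')  # returns a list of Results, one per image

for r in results:  # iterate in case multiple images were passed
    r.show()
```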
@@ -12,7 +12,7 @@ keywords: Ultralytics, YOLOv8, Object Detection, Coral, Edge TPU, Raspberry Pi,

 ## What is a Coral Edge TPU?

-The Coral Edge TPU is a compact device that adds an Edge TPU coprocessor to your system. It enables low-power, high-performance ML inferencing for TensorFlow Lite models. Read more at the [Coral Edge TPU home page](https://coral.ai/products/accelerator).
+The Coral Edge TPU is a compact device that adds an Edge TPU coprocessor to your system. It enables low-power, high-performance ML inference for TensorFlow Lite models. Read more at the [Coral Edge TPU home page](https://coral.ai/products/accelerator).

 ## Boost Raspberry Pi Model Performance with Coral Edge TPU

@@ -37,16 +37,16 @@ This guide assumes that you already have a working Raspberry Pi OS install and h

 First, we need to install the Edge TPU runtime. There are many different versions available, so you need to choose the right version for your operating system.

-| Raspberry Pi OS | High frequency mode | Version to download                      |
-|-----------------|:-------------------:|------------------------------------------|
-| Bullseye 32bit  |         No          | libedgetpu1-std_ ... .bullseye_armhf.deb |
-| Bullseye 64bit  |         No          | libedgetpu1-std_ ... .bullseye_arm64.deb |
-| Bullseye 32bit  |         Yes         | libedgetpu1-max_ ... .bullseye_armhf.deb |
-| Bullseye 64bit  |         Yes         | libedgetpu1-max_ ... .bullseye_arm64.deb |
-| Bookworm 32bit  |         No          | libedgetpu1-std_ ... .bookworm_armhf.deb |
-| Bookworm 64bit  |         No          | libedgetpu1-std_ ... .bookworm_arm64.deb |
-| Bookworm 32bit  |         Yes         | libedgetpu1-max_ ... .bookworm_armhf.deb |
-| Bookworm 64bit  |         Yes         | libedgetpu1-max_ ... .bookworm_arm64.deb |
+| Raspberry Pi OS | High frequency mode | Version to download                        |
+|-----------------|:-------------------:|--------------------------------------------|
+| Bullseye 32bit  |         No          | `libedgetpu1-std_ ... .bullseye_armhf.deb` |
+| Bullseye 64bit  |         No          | `libedgetpu1-std_ ... .bullseye_arm64.deb` |
+| Bullseye 32bit  |         Yes         | `libedgetpu1-max_ ... .bullseye_armhf.deb` |
+| Bullseye 64bit  |         Yes         | `libedgetpu1-max_ ... .bullseye_arm64.deb` |
+| Bookworm 32bit  |         No          | `libedgetpu1-std_ ... .bookworm_armhf.deb` |
+| Bookworm 64bit  |         No          | `libedgetpu1-std_ ... .bookworm_arm64.deb` |
+| Bookworm 32bit  |         Yes         | `libedgetpu1-max_ ... .bookworm_armhf.deb` |
+| Bookworm 64bit  |         Yes         | `libedgetpu1-max_ ... .bookworm_arm64.deb` |

 [Download the latest version from here](https://github.com/feranick/libedgetpu/releases).

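Once the runtime is installed, the surrounding guide goes on to export a model for the Edge TPU before running it. A hedged sketch of that step, assuming the `ultralytics` Edge TPU export target (the output filename in the comment is the library's usual convention, not verified here):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Export a compiled TFLite model for the Edge TPU; this typically produces
# a file like 'yolov8n_full_integer_quant_edgetpu.tflite' next to the weights
model.export(format='edgetpu')
```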
@@ -23,7 +23,7 @@ Hyperparameters are high-level, structural settings for the algorithm. They are
 <img width="640" src="https://user-images.githubusercontent.com/26833433/263858934-4f109a2f-82d9-4d08-8bd6-6fd1ff520bcd.png" alt="Hyperparameter Tuning Visual">
 </p>

-For a full list of augmentation hyperparameters used in YOLOv8 please refer to the [configurations page](../usage/cfg.md#augmentation).
+For a full list of augmentation hyperparameters used in YOLOv8 please refer to the [configurations page](../usage/cfg.md#augmentation-settings).

 ### Genetic Evolution and Mutation

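The section this hunk touches describes evolving hyperparameters by genetic mutation; a minimal sketch of launching such a search, assuming the `model.tune()` interface documented elsewhere in these docs (dataset name and iteration counts are illustrative):

```python
from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Evolve hyperparameters over 30 short training runs of 10 epochs each
model.tune(data='coco8.yaml', epochs=10, iterations=30, optimizer='AdamW', plots=False)
```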
@@ -1,7 +1,7 @@
 ---
 comments: true
 description: Instance Segmentation with Object Tracking using Ultralytics YOLOv8
-keywords: Ultralytics, YOLOv8, Instance Segmentation, Object Detection, Object Tracking, Segbbox, Computer Vision, Notebook, IPython Kernel, CLI, Python SDK
+keywords: Ultralytics, YOLOv8, Instance Segmentation, Object Detection, Object Tracking, Bounding Box, Computer Vision, Notebook, IPython Kernel, CLI, Python SDK
 ---

 # Instance Segmentation and Tracking using Ultralytics YOLOv8 🚀

@@ -14,12 +14,12 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 ## Recipe Walk Through

-1. Begin with the necessary imports
+1. Begin with the necessary imports

-    ```py
+    ```python
     from pathlib import Path

-    import cv2 as cv
+    import cv2
     import numpy as np
     from ultralytics import YOLO
     ```

@@ -30,19 +30,19 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 ***

-2. Load a model and run `predict()` method on a source.
+2. Load a model and run `predict()` method on a source.

-    ```py
+    ```python
     from ultralytics import YOLO

     # Load a model
     model = YOLO('yolov8n-seg.pt')

     # Run inference
-    result = model.predict()
+    results = model.predict()
     ```

-    ??? question "No Prediction Arguments?"
+    !!! question "No Prediction Arguments?"

         Without specifying a source, the example images from the library will be used:

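The same call with an explicit source, as a short sketch assuming the standard predict API (the image path is illustrative); passing a source skips the bundled example images:

```python
from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')
results = model.predict('path/to/image.jpg')  # explicit source

for r in results:
    if r.masks is not None:  # masks are present only when objects were segmented
        print(f'{len(r.masks.xy)} segmentation contour(s) found')
```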
@@ -57,7 +57,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

 ***

-3. Now iterate over the results and the contours. For workflows that want to save an image to file, the source image `base-name` and the detection `class-label` are retrieved for later use (optional).
+3. Now iterate over the results and the contours. For workflows that want to save an image to file, the source image `base-name` and the detection `class-label` are retrieved for later use (optional).

     ```{ .py .annotate }
     # (2) Iterate detection results (helpful for multiple images)
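The iteration pattern this step describes, as a hedged, self-contained sketch using the Results API (`path`, `names`, `boxes.cls`); the source path is illustrative:

```python
from pathlib import Path

from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')
results = model.predict('path/to/image.jpg')

for r in results:
    img_name = Path(r.path).stem  # source image base-name
    for ci, c in enumerate(r):  # one per-instance Results object per detection
        label = c.names[int(c.boxes.cls.item())]  # detection class-label
        print(img_name, ci, label)
```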
@@ -81,7 +81,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

 ***

-4. Start with generating a binary mask from the source image and then draw a filled contour onto the mask. This will allow the object to be isolated from the other parts of the image. An example from `bus.jpg` for one of the detected `person` class objects is shown on the right.
+4. Start with generating a binary mask from the source image and then draw a filled contour onto the mask. This will allow the object to be isolated from the other parts of the image. An example from `bus.jpg` for one of the detected `person` class objects is shown on the right.

     { width="240", align="right" }

@@ -98,11 +98,11 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

     # Draw contour onto mask
-    _ = cv.drawContours(b_mask,
+    _ = cv2.drawContours(b_mask,
                         [contour],
                         -1,
                         (255, 255, 255),
-                        cv.FILLED)
+                        cv2.FILLED)

     ```

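Putting the mask step together, a minimal self-contained sketch; the image and contour are illustrative stand-ins (in the guide the contour comes from `c.masks.xy`):

```python
import cv2
import numpy as np

img = np.zeros((200, 200, 3), dtype=np.uint8)  # stand-in source image
contour = np.array([[10, 10], [190, 20], [100, 180]], dtype=np.int32).reshape(-1, 1, 2)

b_mask = np.zeros(img.shape[:2], dtype=np.uint8)  # single-channel binary mask
cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)  # fill the region white
```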
@@ -136,7 +136,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

     - The `tuple` `(255, 255, 255)` represents the color white, which is the desired color for drawing the contour in this binary mask.

-    - The addition of `cv.FILLED` will color all pixels enclosed by the contour boundary the same, in this case, all enclosed pixels will be white.
+    - The addition of `cv2.FILLED` will color all pixels enclosed by the contour boundary the same, in this case, all enclosed pixels will be white.

     - See [OpenCV Documentation on `drawContours()`](https://docs.opencv.org/4.8.0/d6/d6e/group__imgproc__draw.html#ga746c0625f1781f1ffc9056259103edbc) for more information.

@@ -145,7 +145,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

 ***

-5. Next the there are 2 options for how to move forward with the image from this point and a subsequent option for each.
+5. Next the there are 2 options for how to move forward with the image from this point and a subsequent option for each.

 ### Object Isolation Options

@@ -155,10 +155,10 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
     ```py
     # Create 3-channel mask
-    mask3ch = cv.cvtColor(b_mask, cv.COLOR_GRAY2BGR)
+    mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)

     # Isolate object with binary mask
-    isolated = cv.bitwise_and(mask3ch, img)
+    isolated = cv2.bitwise_and(mask3ch, img)

     ```

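The black-background option in one runnable piece, with illustrative arrays standing in for the guide's `img` and `b_mask`:

```python
import cv2
import numpy as np

img = np.full((200, 200, 3), 127, dtype=np.uint8)  # stand-in source image
b_mask = np.zeros(img.shape[:2], dtype=np.uint8)  # stand-in binary mask
cv2.circle(b_mask, (100, 100), 60, 255, cv2.FILLED)  # pretend this is the object region

mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)  # replicate the mask across B, G, R
isolated = cv2.bitwise_and(mask3ch, img)  # everything outside the mask becomes black
```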
@@ -258,7 +258,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab

 ***

-6. <u>What to do next is entirely left to you as the developer.</u> A basic example of one possible next step (saving the image to file for future use) is shown.
+6. <u>What to do next is entirely left to you as the developer.</u> A basic example of one possible next step (saving the image to file for future use) is shown.

     - **NOTE:** this step is optional and can be skipped if not required for your specific use case.

@@ -266,7 +266,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
     ```py
     # Save isolated object to file
-    _ = cv.imwrite(f'{img_name}_{label}-{ci}.png', iso_crop)
+    _ = cv2.imwrite(f'{img_name}_{label}-{ci}.png', iso_crop)
     ```

     - In this example, the `img_name` is the base-name of the source image file, `label` is the detected class-name, and `ci` is the index of the object detection (in case of multiple instances with the same class name).

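A self-contained version of the save step, with illustrative values for the names the guide builds up earlier (`img_name`, `label`, `ci`, `iso_crop`):

```python
import cv2
import numpy as np

iso_crop = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in isolated crop
img_name, label, ci = 'bus', 'person', 0  # stand-in base-name, class-label, index

ok = cv2.imwrite(f'{img_name}_{label}-{ci}.png', iso_crop)  # writes bus_person-0.png
assert ok, 'image failed to save'
```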
@@ -278,7 +278,7 @@ Here, all steps from the previous section are combined into a single block of co
 ```{ .py .annotate }
 from pathlib import Path

-import cv2 as cv
+import cv2
 import numpy as np
 from ultralytics import YOLO

@@ -298,13 +298,13 @@ for r in res:

         # Create contour mask (1)
         contour = c.masks.xy.pop().astype(np.int32).reshape(-1, 1, 2)
-        _ = cv.drawContours(b_mask, [contour], -1, (255, 255, 255), cv.FILLED)
+        _ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)

         # Choose one:

         # OPTION-1: Isolate object with black background
-        mask3ch = cv.cvtColor(b_mask, cv.COLOR_GRAY2BGR)
-        isolated = cv.bitwise_and(mask3ch, img)
+        mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
+        isolated = cv2.bitwise_and(mask3ch, img)

         # OPTION-2: Isolate object with transparent background (when saved as PNG)
         isolated = np.dstack([img, b_mask])

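The transparent-background option stacks the mask as a fourth (alpha) channel, so transparency survives only in formats that store alpha, such as PNG. A small sketch with stand-in arrays:

```python
import cv2
import numpy as np

img = np.full((200, 200, 3), 127, dtype=np.uint8)  # stand-in BGR image
b_mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.circle(b_mask, (100, 100), 60, 255, cv2.FILLED)  # stand-in object mask

bgra = np.dstack([img, b_mask])  # (H, W, 4): the mask becomes the alpha channel
cv2.imwrite('isolated.png', bgra)  # PNG keeps the transparency; JPEG would drop it
```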
@@ -175,8 +175,8 @@ Object counting with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 | Name                  | Type        | Default                    | Description                                    |
 |-----------------------|-------------|----------------------------|------------------------------------------------|
 | `view_img`            | `bool`      | `False`                    | Display frames with counts                     |
-| `view_in_counts`      | `bool`      | `True`                     | Display incounts only on video frame           |
-| `view_out_counts`     | `bool`      | `True`                     | Display outcounts only on video frame          |
+| `view_in_counts`      | `bool`      | `True`                     | Display in-counts only on video frame          |
+| `view_out_counts`     | `bool`      | `True`                     | Display out-counts only on video frame         |
 | `line_thickness`      | `int`       | `2`                        | Increase bounding boxes thickness              |
 | `reg_pts`             | `list`      | `[(20, 400), (1260, 400)]` | Points defining the Region Area                |
 | `classes_names`       | `dict`      | `model.model.names`        | Dictionary of Class Names                      |

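For context, these arguments configure the counting solution. A hedged sketch of wiring them up, assuming the `ultralytics.solutions` object-counting API of this docs era (`ObjectCounter`, `set_args`, `start_counting`) and an illustrative video path:

```python
import cv2
from ultralytics import YOLO
from ultralytics.solutions import object_counter

model = YOLO('yolov8n.pt')
counter = object_counter.ObjectCounter()
counter.set_args(view_img=True,
                 view_in_counts=True,   # overlay in-counts on the frame
                 view_out_counts=True,  # overlay out-counts on the frame
                 reg_pts=[(20, 400), (1260, 400)],  # counting line endpoints
                 classes_names=model.names)

cap = cv2.VideoCapture('path/to/video.mp4')
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    tracks = model.track(frame, persist=True, show=False)  # track before counting
    frame = counter.start_counting(frame, tracks)
cap.release()
```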
@@ -16,7 +16,6 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 - **Reduced Data Volume**: By extracting only relevant objects, object cropping helps in minimizing data size, making it efficient for storage, transmission, or subsequent computational tasks.
 - **Enhanced Precision**: YOLOv8's object detection accuracy ensures that the cropped objects maintain their spatial relationships, preserving the integrity of the visual information for detailed analysis.

-
 ## Visuals

 | Airport Luggage |
@@ -24,7 +23,6 @@ Object cropping with [Ultralytics YOLOv8](https://github.com/ultralytics/ultraly
 |  |
 | Suitcases Cropping at airport conveyor belt using Ultralytics YOLOv8 |

-
 !!! Example "Object Cropping using YOLOv8 Example"

     === "Object Cropping"
@@ -13,7 +13,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 - **Optimized Performance:** Tailoring workouts based on monitoring data for better results.
 - **Goal Achievement:** Track and adjust fitness goals for measurable progress.
 - **Personalization:** Customized workout plans based on individual data for effectiveness.
-- **Health Awareness:** Early detection of patterns indicating health issues or overtraining.
+- **Health Awareness:** Early detection of patterns indicating health issues or over-training.
 - **Informed Decisions:** Data-driven decisions for adjusting routines and setting realistic goals.

 ## Real World Applications
@@ -109,7 +109,7 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
 | `kpts_to_check`   | `list` | `None`   | List of three keypoints index, for counting specific workout, followed by keypoint Map |
 | `view_img`        | `bool` | `False`  | Display the frame with counts                                            |
 | `line_thickness`  | `int`  | `2`      | Increase the thickness of count value                                    |
-| `pose_type`       | `str`  | `pushup` | Pose that need to be monitored, "pullup" and "abworkout" also supported  |
+| `pose_type`       | `str`  | `pushup` | Pose that need to be monitored, `pullup` and `abworkout` also supported  |
 | `pose_up_angle`   | `int`  | `145`    | Pose Up Angle value                                                      |
 | `pose_down_angle` | `int`  | `90`     | Pose Down Angle value                                                    |

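These arguments configure the workout-monitoring solution. A hedged sketch of typical usage, assuming the `ultralytics.solutions` AIGym interface of this docs era (`AIGym`, `set_args`, `start_counting`), with illustrative keypoint indices and video path:

```python
import cv2
from ultralytics import YOLO
from ultralytics.solutions import ai_gym

model = YOLO('yolov8n-pose.pt')
gym = ai_gym.AIGym()
gym.set_args(line_thickness=2,
             view_img=True,
             pose_type='pushup',        # or 'pullup' / 'abworkout'
             kpts_to_check=[6, 8, 10])  # shoulder, elbow, wrist indices (illustrative)

cap = cv2.VideoCapture('path/to/workout.mp4')
frame_count = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_count += 1
    results = model.predict(frame, verbose=False)  # pose estimation per frame
    frame = gym.start_counting(frame, results, frame_count)
cap.release()
```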