Add HeatMap guide in real-world-projects + Code in Solutions Directory (#6796)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Muhammad Rizwan Munawar 2023-12-07 01:39:29 +05:00 committed by GitHub
parent 1e1247ddee
commit 742cbc1b4e
10 changed files with 448 additions and 52 deletions

@@ -23,27 +23,72 @@ Monitoring workouts through pose estimation with [Ultralytics YOLOv8](https://gi
| ![PushUps Counting](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cf016a41-589f-420f-8a8c-2cc8174a16de) | ![PullUps Counting](https://github.com/RizwanMunawar/ultralytics/assets/62513924/cb20f316-fac2-4330-8445-dcf5ffebe329) |
|:----------------------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------:|
|                                                     PushUps Counting                                                    |                                                     PullUps Counting                                                    |
## Example

!!! Example "Workouts Monitoring Example"

    === "Workouts Monitoring"

        ```python
        from ultralytics import YOLO
        from ultralytics.solutions import ai_gym
        import cv2

        model = YOLO("yolov8n-pose.pt")
        cap = cv2.VideoCapture("path/to/video/file.mp4")
        if not cap.isOpened():
            print("Error reading video file")
            exit(0)

        gym_object = ai_gym.AIGym()  # init AI GYM module
        gym_object.set_args(line_thickness=2,
                            view_img=True,
                            pose_type="pushup",
                            kpts_to_check=[6, 8, 10])  # keypoint indices of the joints to track

        frame_count = 0
        while cap.isOpened():
            success, im0 = cap.read()
            if not success:
                break  # end of video reached
            frame_count += 1
            results = model.predict(im0, verbose=False)
            im0 = gym_object.start_counting(im0, results, frame_count)
        ```

    === "Workouts Monitoring with Save Output"

        ```python
        from ultralytics import YOLO
        from ultralytics.solutions import ai_gym
        import cv2

        model = YOLO("yolov8n-pose.pt")
        cap = cv2.VideoCapture("path/to/video/file.mp4")
        if not cap.isOpened():
            print("Error reading video file")
            exit(0)

        # cap.get(3), cap.get(4) and cap.get(5) are the frame width, height and FPS
        video_writer = cv2.VideoWriter("workouts.avi",
                                       cv2.VideoWriter_fourcc(*'mp4v'),
                                       int(cap.get(5)),
                                       (int(cap.get(3)), int(cap.get(4))))

        gym_object = ai_gym.AIGym()  # init AI GYM module
        gym_object.set_args(line_thickness=2,
                            view_img=True,
                            pose_type="pushup",
                            kpts_to_check=[6, 8, 10])  # keypoint indices of the joints to track

        frame_count = 0
        while cap.isOpened():
            success, im0 = cap.read()
            if not success:
                break  # end of video reached
            frame_count += 1
            results = model.predict(im0, verbose=False)
            im0 = gym_object.start_counting(im0, results, frame_count)
            video_writer.write(im0)

        video_writer.release()
        ```
???+ tip "Support"
@@ -51,7 +96,7 @@ while cap.isOpened():
### KeyPoints Map
![keyPoints Order Ultralytics YOLOv8 Pose](https://github.com/ultralytics/ultralytics/assets/62513924/f45d8315-b59f-47b7-b9c8-c61af1ce865b)
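The indices passed to `kpts_to_check` in the examples above refer to positions in this keypoint map. As a rough sketch, assuming the standard 17-point COCO keypoint ordering used by YOLOv8 pose models (verify the indices against the map above), monitoring the left arm instead of the right would look like this:

```python
# Assumed COCO-style indices (check against the keypoint map above):
# 5 = left shoulder, 7 = left elbow, 9 = left wrist
# 6 = right shoulder, 8 = right elbow, 10 = right wrist

# The examples above track the right-arm chain with kpts_to_check=[6, 8, 10];
# a hypothetical left-arm variant would be:
gym_object.set_args(line_thickness=2, view_img=True, pose_type="pushup", kpts_to_check=[5, 7, 9])
```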
### Arguments `set_args`
@@ -63,3 +108,22 @@ while cap.isOpened():
| pose_type | `str` | `pushup` | Pose to monitor; `"pullup"` and `"abworkout"` are also supported (see the sketch below) |
| pose_up_angle | `int` | `145` | Angle threshold (degrees) for the "up" pose position |
| pose_down_angle | `int` | `90` | Angle threshold (degrees) for the "down" pose position |
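
As a minimal sketch of how these arguments combine, the snippet below configures the module for pullups with explicit angle thresholds; the keypoint indices and angle values are illustrative assumptions, not tuned recommendations:

```python
from ultralytics.solutions import ai_gym

gym_object = ai_gym.AIGym()
gym_object.set_args(line_thickness=2,
                    view_img=True,
                    pose_type="pullup",        # "pushup" (default) and "abworkout" also supported
                    kpts_to_check=[6, 8, 10],  # shoulder, elbow and wrist of one arm (assumed indices)
                    pose_up_angle=145,         # angle treated as the "up" position
                    pose_down_angle=90)        # angle treated as the "down" position
```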
### Arguments `model.predict`
| Name | Type | Default | Description |
|-----------------|----------------|------------------------|----------------------------------------------------------------------------|
| `source` | `str` | `'ultralytics/assets'` | source directory for images or videos |
| `conf` | `float` | `0.25` | object confidence threshold for detection |
| `iou` | `float` | `0.7` | intersection over union (IoU) threshold for NMS |
| `imgsz` | `int or tuple` | `640` | image size as scalar or (h, w) list, i.e. (640, 480) |
| `half` | `bool` | `False` | use half precision (FP16) |
| `device` | `None or str` | `None` | device to run on, i.e. cuda device=0/1/2/3 or device=cpu |
| `max_det` | `int` | `300` | maximum number of detections per image |
| `vid_stride` | `int` | `1` | video frame-rate stride (process every Nth frame) |
| `stream_buffer` | `bool` | `False` | buffer all streaming frames (True) or return the most recent frame (False) |
| `visualize` | `bool` | `False` | visualize model features |
| `augment` | `bool` | `False` | apply image augmentation to prediction sources |
| `agnostic_nms` | `bool` | `False` | class-agnostic NMS |
| `retina_masks` | `bool` | `False` | use high-resolution segmentation masks |
| `classes` | `None or list` | `None` | filter results by class, i.e. classes=0, or classes=[0,2,3] |
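
These are the standard Ultralytics `predict` arguments, so they can be passed straight to the `model.predict` call inside the monitoring loop. A minimal sketch (the specific values below are illustrative, not tuned recommendations):

```python
# Inside the while-loop of the examples above
results = model.predict(im0,
                        conf=0.5,       # raise the confidence threshold above the 0.25 default
                        iou=0.7,        # IoU threshold for NMS
                        imgsz=640,      # inference image size
                        device="cpu",   # or e.g. device=0 for the first CUDA GPU
                        verbose=False)
im0 = gym_object.start_counting(im0, results, frame_count)
```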