diff --git a/README.md b/README.md
index a2e81867..5a7c305c 100644
--- a/README.md
+++ b/README.md
@@ -11,6 +11,7 @@
+
diff --git a/docs/en/guides/defining-project-goals.md b/docs/en/guides/defining-project-goals.md
index cb6dbc2d..b241c7ea 100644
--- a/docs/en/guides/defining-project-goals.md
+++ b/docs/en/guides/defining-project-goals.md
@@ -175,4 +175,4 @@ Common challenges include:
- Insufficient understanding of technical constraints.
- Underestimating data requirements.
-Address these challenges through thorough initial research, clear communication with stakeholders, and iterative refinement of the problem statement and objectives. Learn more about these challenges [here](#common-challenges).
+Address these challenges through thorough initial research, clear communication with stakeholders, and iterative refinement of the problem statement and objectives. Learn more about these challenges in our [Computer Vision Project guide](steps-of-a-cv-project.md).
diff --git a/docs/en/guides/model-training-tips.md b/docs/en/guides/model-training-tips.md
index d7329f36..ea369d2d 100644
--- a/docs/en/guides/model-training-tips.md
+++ b/docs/en/guides/model-training-tips.md
@@ -179,4 +179,4 @@ Using pre-trained weights can significantly reduce training times and improve mo
### What is the recommended number of epochs for training a model, and how do I set this in YOLOv8?
-The number of epochs refers to the complete passes through the training dataset during model training. A typical starting point is 300 epochs. If your model overfits early, you can reduce the number. Alternatively, if overfitting isn’t observed, you might extend training to 600, 1200, or more epochs. To set this in YOLOv8, use the `epochs` parameter in your training script. For additional advice on determining the ideal number of epochs, refer to this section on [number of epochs](#the-number-of-epochs-to-train-for).
+The number of epochs refers to the complete passes through the training dataset during model training. A typical starting point is 300 epochs. If your model overfits early, you can reduce the number. Alternatively, if overfitting isn't observed, you might extend training to 600, 1200, or more epochs. To set this in YOLOv8, use the `epochs` parameter in your training script. For additional advice on determining the ideal number of epochs, refer to this section on [number of epochs](#the-number-of-epochs-to-train-for).
diff --git a/docs/en/guides/steps-of-a-cv-project.md b/docs/en/guides/steps-of-a-cv-project.md
index c6b45db4..a734ecf2 100644
--- a/docs/en/guides/steps-of-a-cv-project.md
+++ b/docs/en/guides/steps-of-a-cv-project.md
@@ -10,7 +10,7 @@ keywords: Computer Vision, AI, Object Detection, Image Classification, Instance
Computer vision is a subfield of artificial intelligence (AI) that helps computers see and understand the world like humans do. It processes and analyzes images or videos to extract information, recognize patterns, and make decisions based on that data.
-Computer vision techniques like [object detection](../tasks/detect.md), [image classification](../tasks/classify.md), and [instance segmentation](../tasks/segment.md) can be applied across various industries, from [autonomous driving](https://www.ultralytics.com/solutions/ai-in-self-driving) to [medical imaging](https://www.ultralytics.com/solutions/ai-in-healthcare), to gain valuable insights.
+Computer vision techniques like [object detection](../tasks/detect.md), [image classification](../tasks/classify.md), and [instance segmentation](../tasks/segment.md) can be applied across various industries, from [autonomous driving](https://www.ultralytics.com/solutions/ai-in-self-driving) to [medical imaging](https://www.ultralytics.com/solutions/ai-in-healthcare) to gain valuable insights.
@@ -227,4 +227,4 @@ For more information, check out the [model export guide](../modes/export.md).
### What are the best practices for monitoring and maintaining a deployed computer vision model?
-Continuous monitoring and maintenance are essential for a model's long-term success. Implement tools for tracking Key Performance Indicators (KPIs) and detecting anomalies. Regularly retrain the model with updated data to counteract model drift. Document the entire process, including model architecture, hyperparameters, and changes, to ensure reproducibility and ease of future updates. Learn more in our [monitoring and maintenance guide](#monitoring-maintenance-and-documentation).
+Continuous monitoring and maintenance are essential for a model's long-term success. Implement tools for tracking Key Performance Indicators (KPIs) and detecting anomalies. Regularly retrain the model with updated data to counteract model drift. Document the entire process, including model architecture, hyperparameters, and changes, to ensure reproducibility and ease of future updates. Learn more in our [monitoring and maintenance guide](#step-8-monitoring-maintenance-and-documentation).
diff --git a/docs/en/guides/streamlit-live-inference.md b/docs/en/guides/streamlit-live-inference.md
new file mode 100644
index 00000000..b88ae9f5
--- /dev/null
+++ b/docs/en/guides/streamlit-live-inference.md
@@ -0,0 +1,138 @@
+---
+comments: true
+description: Learn how to set up a real-time object detection application using Streamlit and Ultralytics YOLOv8. Follow this step-by-step guide to implement webcam-based object detection.
+keywords: Streamlit, YOLOv8, Real-time Object Detection, Streamlit Application, YOLOv8 Streamlit Tutorial, Webcam Object Detection
+---
+
+# Live Inference with Streamlit Application using Ultralytics YOLOv8
+
+## Introduction
+
+Streamlit makes it simple to build and deploy interactive web applications. Combining this with Ultralytics YOLOv8 allows for real-time object detection and analysis directly in your browser. YOLOv8's high accuracy and speed ensure seamless performance for live video streams, making it ideal for applications in security, retail, and beyond.
+
+| Aquaculture | Animal husbandry |
+| :---------------------------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: |
+|  |  |
+| Fish Detection using Ultralytics YOLOv8 | Animal Detection using Ultralytics YOLOv8 |
+
+## Advantages of Live Inference
+
+- **Seamless Real-Time Object Detection**: Streamlit combined with YOLOv8 enables real-time object detection directly from your webcam feed. This allows for immediate analysis and insights, making it ideal for applications requiring instant feedback.
+- **User-Friendly Deployment**: Streamlit's interactive interface makes it easy to deploy and use the application without extensive technical knowledge. Users can start live inference with a simple click, enhancing accessibility and usability.
+- **Efficient Resource Utilization**: YOLOv8's optimized algorithms ensure high-speed processing with minimal computational resources. This efficiency allows for smooth and reliable webcam inference even on standard hardware, making advanced computer vision accessible to a wider audience.
+
+## Streamlit Application Code
+
+!!! tip "Ultralytics Installation"
+
+ Before you start building the application, ensure you have the Ultralytics Python package installed. You can install it using the command `pip install ultralytics`.
+
+!!! Example "Streamlit Application"
+
+ === "Python"
+
+ ```python
+ from ultralytics import solutions
+
+ solutions.inference()
+
+ ### Make sure to run the file using command `streamlit run
+## ::: ultralytics.cfg.handle_streamlit_inference
+
+
+
## ::: ultralytics.cfg.parse_key_value_pair
diff --git a/docs/en/reference/solutions/streamlit_inference.md b/docs/en/reference/solutions/streamlit_inference.md
new file mode 100644
index 00000000..f31e4771
--- /dev/null
+++ b/docs/en/reference/solutions/streamlit_inference.md
@@ -0,0 +1,16 @@
+---
+description: Explore the live inference capabilities of Streamlit combined with Ultralytics YOLOv8. Learn to implement real-time object detection in your web applications with our comprehensive guide.
+keywords: Ultralytics, YOLOv8, live inference, real-time object detection, Streamlit, computer vision, webcam inference, object detection, Python, ML, cv2
+---
+
+# Reference for `ultralytics/solutions/streamlit_inference.py`
+
+!!! Note
+
+ This file is available at [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/solutions/streamlit_inference.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/solutions/streamlit_inference.py). If you spot a problem please help fix it by [contributing](https://docs.ultralytics.com/help/contributing/) a [Pull Request](https://github.com/ultralytics/ultralytics/edit/main/ultralytics/solutions/streamlit_inference.py) 🛠️. Thank you 🙏!
+
+
+
+## ::: ultralytics.solutions.streamlit_inference.inference
+
+
diff --git a/docs/en/solutions/index.md b/docs/en/solutions/index.md
index af46a20d..496578cf 100644
--- a/docs/en/solutions/index.md
+++ b/docs/en/solutions/index.md
@@ -28,6 +28,7 @@ Here's our curated list of Ultralytics solutions that can be used to create awes
- [Queue Management](../guides/queue-management.md) 🚀 NEW: Implement efficient queue management systems to minimize wait times and improve productivity using YOLOv8.
- [Parking Management](../guides/parking-management.md) 🚀 NEW: Organize and direct vehicle flow in parking areas with YOLOv8, optimizing space utilization and user experience.
- [Analytics](../guides/analytics.md) 🚀 NEW: Conduct comprehensive data analysis to discover patterns and make informed decisions, leveraging YOLOv8 for descriptive, predictive, and prescriptive analytics.
+- [Live Inference with Streamlit](../guides/streamlit-live-inference.md) 🚀 NEW: Leverage the power of YOLOv8 for real-time object detection directly through your web browser with a user-friendly Streamlit interface.
## Contribute to Our Solutions
diff --git a/mkdocs.yml b/mkdocs.yml
index 1f2f3918..bb33beb6 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -162,7 +162,7 @@ nav:
- guides/index.md
- Explorer:
- datasets/explorer/index.md
- NEW 🚀 Analytics: guides/analytics.md # for promotion of new pages
+ - NEW 🚀 Live Inference: guides/streamlit-live-inference.md # for promotion of new pages
- Languages:
- 🇬🇧 English: https://ultralytics.com/docs/
- 🇨🇳 简体中文: https://docs.ultralytics.com/zh/
@@ -300,7 +300,7 @@ nav:
- datasets/track/index.md
- NEW 🚀 Solutions:
- solutions/index.md
- - NEW 🚀 Analytics: guides/analytics.md
+ - Analytics: guides/analytics.md
- Object Counting: guides/object-counting.md
- Object Cropping: guides/object-cropping.md
- Object Blurring: guides/object-blurring.md
@@ -314,6 +314,7 @@ nav:
- Distance Calculation: guides/distance-calculation.md
- Queue Management: guides/queue-management.md
- Parking Management: guides/parking-management.md
+ - NEW 🚀 Live Inference: guides/streamlit-live-inference.md
- Guides:
- guides/index.md
- YOLO Common Issues: guides/yolo-common-issues.md
@@ -548,6 +549,7 @@ nav:
- parking_management: reference/solutions/parking_management.md
- queue_management: reference/solutions/queue_management.md
- speed_estimation: reference/solutions/speed_estimation.md
+ - streamlit_inference: reference/solutions/streamlit_inference.md
- trackers:
- basetrack: reference/trackers/basetrack.md
- bot_sort: reference/trackers/bot_sort.md
diff --git a/ultralytics/__init__.py b/ultralytics/__init__.py
index d2834c94..7a963123 100644
--- a/ultralytics/__init__.py
+++ b/ultralytics/__init__.py
@@ -1,6 +1,6 @@
# Ultralytics YOLO 🚀, AGPL-3.0 license
-__version__ = "8.2.49"
+__version__ = "8.2.50"
import os
diff --git a/ultralytics/cfg/__init__.py b/ultralytics/cfg/__init__.py
index 293abcb5..34886eb1 100644
--- a/ultralytics/cfg/__init__.py
+++ b/ultralytics/cfg/__init__.py
@@ -78,10 +78,13 @@ CLI_HELP_MSG = f"""
4. Export a YOLOv8n classification model to ONNX format at image size 224 by 128 (no TASK required)
yolo export model=yolov8n-cls.pt format=onnx imgsz=224,128
- 6. Explore your datasets using semantic search and SQL with a simple GUI powered by Ultralytics Explorer API
+ 5. Explore your datasets using semantic search and SQL with a simple GUI powered by Ultralytics Explorer API
yolo explorer
-
- 5. Run special commands:
+
+ 6. Streamlit real-time object detection on your webcam with Ultralytics YOLOv8
+ yolo streamlit-predict
+
+ 7. Run special commands:
yolo help
yolo checks
yolo version
@@ -514,6 +517,13 @@ def handle_explorer():
subprocess.run(["streamlit", "run", ROOT / "data/explorer/gui/dash.py", "--server.maxMessageSize", "2048"])
+def handle_streamlit_inference():
+    """Open the Ultralytics Live Inference Streamlit app for real-time object detection."""
+    checks.check_requirements(["streamlit", "opencv-python", "torch"])
+    LOGGER.info("💡 Loading Ultralytics Live Inference app...")
+    subprocess.run(["streamlit", "run", ROOT / "solutions/streamlit_inference.py", "--server.headless", "true"])
+
+
def parse_key_value_pair(pair):
"""Parse one 'key=value' pair and return key and value."""
k, v = pair.split("=", 1) # split on first '=' sign
@@ -582,6 +592,7 @@ def entrypoint(debug=""):
"login": lambda: handle_yolo_hub(args),
"copy-cfg": copy_default_cfg,
"explorer": lambda: handle_explorer(),
+ "streamlit-predict": lambda: handle_streamlit_inference(),
}
full_args_dict = {**DEFAULT_CFG_DICT, **{k: None for k in TASKS}, **{k: None for k in MODES}, **special}
diff --git a/ultralytics/data/augment.py b/ultralytics/data/augment.py
index 2400de11..d0623b74 100644
--- a/ultralytics/data/augment.py
+++ b/ultralytics/data/augment.py
@@ -686,7 +686,7 @@ class RandomFlip:
flip_idx (array-like, optional): Index mapping for flipping keypoints, if any.
"""
assert direction in {"horizontal", "vertical"}, f"Support direction `horizontal` or `vertical`, got {direction}"
- assert 0 <= p <= 1.0
+ assert 0 <= p <= 1.0, f"The probability should be in range [0, 1], but got {p}."
self.p = p
self.direction = direction
@@ -1210,7 +1210,7 @@ def classify_transforms(
import torchvision.transforms as T # scope for faster 'import ultralytics'
if isinstance(size, (tuple, list)):
- assert len(size) == 2
+ assert len(size) == 2, f"'size' tuples must be length 2, not length {len(size)}"
scale_size = tuple(math.floor(x / crop_fraction) for x in size)
else:
scale_size = math.floor(size / crop_fraction)
@@ -1288,7 +1288,7 @@ def classify_augmentations(
secondary_tfl = []
disable_color_jitter = False
if auto_augment:
- assert isinstance(auto_augment, str)
+ assert isinstance(auto_augment, str), f"Provided argument should be string, but got type {type(auto_augment)}"
# color jitter is typically disabled if AA/RA on,
# this allows override without breaking old hparm cfgs
disable_color_jitter = not force_color_jitter
diff --git a/ultralytics/engine/results.py b/ultralytics/engine/results.py
index 346ed650..2afcc6f2 100644
--- a/ultralytics/engine/results.py
+++ b/ultralytics/engine/results.py
@@ -42,7 +42,7 @@ class BaseTensor(SimpleClass):
base_tensor = BaseTensor(data, orig_shape)
```
"""
- assert isinstance(data, (torch.Tensor, np.ndarray))
+ assert isinstance(data, (torch.Tensor, np.ndarray)), "data must be torch.Tensor or np.ndarray"
self.data = data
self.orig_shape = orig_shape
diff --git a/ultralytics/models/fastsam/prompt.py b/ultralytics/models/fastsam/prompt.py
index bab6173e..4add9fbb 100644
--- a/ultralytics/models/fastsam/prompt.py
+++ b/ultralytics/models/fastsam/prompt.py
@@ -286,7 +286,7 @@ class FastSAMPrompt:
def box_prompt(self, bbox):
"""Modifies the bounding box properties and calculates IoU between masks and bounding box."""
if self.results[0].masks is not None:
- assert bbox[2] != 0 and bbox[3] != 0
+ assert bbox[2] != 0 and bbox[3] != 0, "Bounding box width and height should not be zero"
masks = self.results[0].masks.data
target_height, target_width = self.results[0].orig_shape
h = masks.shape[1]
diff --git a/ultralytics/models/sam/amg.py b/ultralytics/models/sam/amg.py
index 128108fe..b61c6a71 100644
--- a/ultralytics/models/sam/amg.py
+++ b/ultralytics/models/sam/amg.py
@@ -133,7 +133,7 @@ def remove_small_regions(mask: np.ndarray, area_thresh: float, mode: str) -> Tup
"""Remove small disconnected regions or holes in a mask, returning the mask and a modification indicator."""
import cv2 # type: ignore
- assert mode in {"holes", "islands"}
+ assert mode in {"holes", "islands"}, f"Provided mode {mode} is invalid"
correct_holes = mode == "holes"
working_mask = (correct_holes ^ mask).astype(np.uint8)
n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8)
diff --git a/ultralytics/models/sam/modules/tiny_encoder.py b/ultralytics/models/sam/modules/tiny_encoder.py
index 5ecf426c..bc026dd6 100644
--- a/ultralytics/models/sam/modules/tiny_encoder.py
+++ b/ultralytics/models/sam/modules/tiny_encoder.py
@@ -261,7 +261,7 @@ class Attention(torch.nn.Module):
"""
super().__init__()
- assert isinstance(resolution, tuple) and len(resolution) == 2
+ assert isinstance(resolution, tuple) and len(resolution) == 2, "'resolution' argument must be a tuple of length 2"
self.num_heads = num_heads
self.scale = key_dim**-0.5
self.key_dim = key_dim
diff --git a/ultralytics/models/yolo/world/train_world.py b/ultralytics/models/yolo/world/train_world.py
index a65c8332..df26986d 100644
--- a/ultralytics/models/yolo/world/train_world.py
+++ b/ultralytics/models/yolo/world/train_world.py
@@ -72,8 +72,8 @@ class WorldTrainerFromScratch(WorldTrainer):
"""
final_data = {}
data_yaml = self.args.data
- assert data_yaml.get("train", False) # object365.yaml
- assert data_yaml.get("val", False) # lvis.yaml
+ assert data_yaml.get("train", False), "train dataset not found" # object365.yaml
+ assert data_yaml.get("val", False), "validation dataset not found" # lvis.yaml
data = {k: [check_det_dataset(d) for d in v.get("yolo_data", [])] for k, v in data_yaml.items()}
assert len(data["val"]) == 1, f"Only support validating on 1 dataset for now, but got {len(data['val'])}."
val_split = "minival" if "lvis" in data["val"][0]["val"] else "val"
diff --git a/ultralytics/solutions/__init__.py b/ultralytics/solutions/__init__.py
index d4e58afd..9b8c25f9 100644
--- a/ultralytics/solutions/__init__.py
+++ b/ultralytics/solutions/__init__.py
@@ -8,6 +8,7 @@ from .object_counter import ObjectCounter
from .parking_management import ParkingManagement, ParkingPtsSelection
from .queue_management import QueueManager
from .speed_estimation import SpeedEstimator
+from .streamlit_inference import inference
__all__ = (
"AIGym",
diff --git a/ultralytics/solutions/streamlit_inference.py b/ultralytics/solutions/streamlit_inference.py
new file mode 100644
index 00000000..b8fdc74f
--- /dev/null
+++ b/ultralytics/solutions/streamlit_inference.py
@@ -0,0 +1,154 @@
+# Ultralytics YOLO 🚀, AGPL-3.0 license
+
+import io
+import time
+
+import cv2
+import torch
+
+
+def inference():
+ """Runs real-time object detection on video input using Ultralytics YOLOv8 in a Streamlit application."""
+
+ # Scope imports for faster ultralytics package load speeds
+ import streamlit as st
+
+ from ultralytics import YOLO
+
+ # Hide main menu style
+ menu_style_cfg = """"""
+
+ # Main title of streamlit application
+ main_title_cfg = """
+ Ultralytics YOLOv8 Streamlit Application
+
+ Experience real-time object detection on your webcam with the power of Ultralytics YOLOv8! 🚀
+