Add camera device to inference sources (#16866)
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

This commit is contained in:
parent a9d0cf66cb
commit 19cbaa501c

1 changed file with 15 additions and 0 deletions
@@ -120,6 +120,7 @@ YOLO11 can process different types of input sources for inference, as shown in t
 | YouTube ✅ | `'https://youtu.be/LNwODJXcvt4'` | `str` | URL to a YouTube video. |
 | stream ✅ | `'rtsp://example.com/media.mp4'` | `str` | URL for streaming protocols such as RTSP, RTMP, TCP, or an IP address. |
 | multi-stream ✅ | `'list.streams'` | `str` or `Path` | `*.streams` text file with one stream URL per row, i.e. 8 streams will run at batch-size 8. |
+| webcam ✅ | `0` | `int` | Index of the connected camera device to run inference on. |

 Below are code examples for using each source type:
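For context on the multi-stream row in the table above (illustrative only, not part of this commit): a `*.streams` file is a plain-text list of stream URLs, one per line, and the file itself is passed as `source`. A minimal sketch, assuming placeholder RTSP URLs:

```python
from pathlib import Path

from ultralytics import YOLO

# Write a .streams file: one stream URL per line (URLs below are placeholders)
Path("list.streams").write_text("rtsp://example.com/media1.mp4\nrtsp://example.com/media2.mp4\n")

# Load a pretrained YOLO11n model and run inference on all listed streams
model = YOLO("yolo11n.pt")
results = model(source="list.streams", stream=True)  # generator of Results objects
```

With 8 URLs in the file, frames are processed at batch-size 8, matching the note in the table.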
@@ -376,6 +377,20 @@ Below are code examples for using each source type:
 Each row in the file represents a streaming source, allowing you to monitor and perform inference on several video streams at once.

+    === "Webcam"
+
+        You can run inference on a connected camera device by passing the index of that particular camera to `source`.
+
+        ```python
+        from ultralytics import YOLO
+
+        # Load a pretrained YOLO11n model
+        model = YOLO("yolo11n.pt")
+
+        # Run inference on the source
+        results = model(source=0, stream=True)  # generator of Results objects
+        ```
+
 ## Inference Arguments

 `model.predict()` accepts multiple arguments that can be passed at inference time to override defaults:
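The new webcam tab passes `stream=True`, so `model(...)` returns a generator of `Results` objects rather than a list. A minimal sketch of consuming that generator (illustrative, not part of the diff); `result.boxes` is the detections attribute of the Ultralytics `Results` object:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# stream=True yields Results lazily, frame by frame, instead of accumulating them all in memory
for result in model(source=0, stream=True):
    print(result.boxes)  # bounding boxes detected in this frame
```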
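The `model.predict()` line at the end of the second hunk refers to per-call overrides of default settings. A short sketch using commonly documented arguments (`conf`, `imgsz`, `save`); the image URL and values are illustrative:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Override defaults at inference time: confidence threshold, input size, and saving annotated output
results = model.predict(source="https://ultralytics.com/images/bus.jpg", conf=0.5, imgsz=640, save=True)
```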