diff --git a/docs/en/guides/isolating-segmentation-objects.md b/docs/en/guides/isolating-segmentation-objects.md
index e0a0aac5..3caad3d7 100644
--- a/docs/en/guides/isolating-segmentation-objects.md
+++ b/docs/en/guides/isolating-segmentation-objects.md
@@ -141,7 +141,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 
     === "Black Background Pixels"
 
-        ```py
+        ```python
         # Create 3-channel mask
         mask3ch = cv2.cvtColor(b_mask, cv2.COLOR_GRAY2BGR)
 
@@ -192,7 +192,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 
     === "Transparent Background Pixels"
 
-        ```py
+        ```python
         # Isolate object with transparent background (when saved as PNG)
         isolated = np.dstack([img, b_mask])
         ```
@@ -248,7 +248,7 @@ After performing the [Segment Task](../tasks/segment.md), it's sometimes desirab
 
     ??? example "Example Final Step"
 
-        ```py
+        ```python
         # Save isolated object to file
         _ = cv2.imwrite(f"{img_name}_{label}-{ci}.png", iso_crop)
         ```
diff --git a/docs/en/hub/datasets.md b/docs/en/hub/datasets.md
index 5e6f3c4c..24689cc5 100644
--- a/docs/en/hub/datasets.md
+++ b/docs/en/hub/datasets.md
@@ -48,7 +48,7 @@ The dataset YAML is the same standard YOLOv5 and YOLOv8 YAML format.
 
 After zipping your dataset, you should [validate it](https://docs.ultralytics.com/reference/hub/__init__/#ultralytics.hub.check_dataset) before uploading it to [Ultralytics HUB](https://www.ultralytics.com/hub). [Ultralytics HUB](https://www.ultralytics.com/hub) conducts the dataset validation check post-upload, so by ensuring your dataset is correctly formatted and error-free ahead of time, you can forestall any setbacks due to dataset rejection.
 
-```py
+```python
 from ultralytics.hub import check_dataset
 
 check_dataset("path/to/dataset.zip", task="detect")
diff --git a/docs/en/integrations/tensorrt.md b/docs/en/integrations/tensorrt.md
index 0e401981..1a8e5a91 100644
--- a/docs/en/integrations/tensorrt.md
+++ b/docs/en/integrations/tensorrt.md
@@ -380,7 +380,7 @@ Expand sections below for information on how these models were exported and test
 
     See [export mode](../modes/export.md) for details regarding export configuration arguments.
 
-    ```py
+    ```python
     from ultralytics import YOLO
 
     model = YOLO("yolov8n.pt")
@@ -401,7 +401,7 @@ Expand sections below for information on how these models were exported and test
 
     See [predict mode](../modes/predict.md) for additional information.
 
-    ```py
+    ```python
     import cv2
 
     from ultralytics import YOLO
@@ -421,7 +421,7 @@ Expand sections below for information on how these models were exported and test
 
     See [`val` mode](../modes/val.md) to learn more about validation configuration arguments.
 
-    ```py
+    ```python
     from ultralytics import YOLO
 
     model = YOLO("yolov8n.engine")
diff --git a/docs/en/usage/python.md b/docs/en/usage/python.md
index c3afa2a1..af0546f4 100644
--- a/docs/en/usage/python.md
+++ b/docs/en/usage/python.md
@@ -306,26 +306,26 @@ Explorer API can be used to explore datasets with advanced semantic, vector-simi
 
 !!! tip "Detection Trainer Example"
 
-        ```python
-        from ultralytics.models.yolo import DetectionPredictor, DetectionTrainer, DetectionValidator
+    ```python
+    from ultralytics.models.yolo import DetectionPredictor, DetectionTrainer, DetectionValidator
 
-        # trainer
-        trainer = DetectionTrainer(overrides={})
-        trainer.train()
-        trained_model = trainer.best
+    # trainer
+    trainer = DetectionTrainer(overrides={})
+    trainer.train()
+    trained_model = trainer.best
 
-        # Validator
-        val = DetectionValidator(args=...)
-        val(model=trained_model)
+    # Validator
+    val = DetectionValidator(args=...)
+    val(model=trained_model)
 
-        # predictor
-        pred = DetectionPredictor(overrides={})
-        pred(source=SOURCE, model=trained_model)
+    # predictor
+    pred = DetectionPredictor(overrides={})
+    pred(source=SOURCE, model=trained_model)
 
-        # resume from last weight
-        overrides["resume"] = trainer.last
-        trainer = detect.DetectionTrainer(overrides=overrides)
-        ```
+    # resume from last weight
+    overrides["resume"] = trainer.last
+    trainer = detect.DetectionTrainer(overrides=overrides)
+    ```
 
 You can easily customize Trainers to support custom tasks or explore R&D ideas. Learn more about Customizing `Trainers`, `Validators` and `Predictors` to suit your project needs in the Customization Section.