Ultralytics Code Refactor https://ultralytics.com/actions (#16493)
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>
This commit is contained in:
parent 4bcc80c646
commit 38e95f33a5
6 changed files with 30 additions and 31 deletions
@@ -88,7 +88,7 @@ Let's say you are ready to annotate now. There are several open-source tools ava

 - **[Label Studio](https://github.com/HumanSignal/label-studio)**: A flexible tool that supports a wide range of annotation tasks and includes features for managing projects and quality control.
 - **[CVAT](https://github.com/cvat-ai/cvat)**: A powerful tool that supports various annotation formats and customizable workflows, making it suitable for complex projects.
-- **[Labelme](https://github.com/labelmeai/labelme)**: A simple and easy-to-use tool that allows for quick annotation of images with polygons, making it ideal for straightforward tasks.
+- **[Labelme](https://github.com/wkentaro/labelme)**: A simple and easy-to-use tool that allows for quick annotation of images with polygons, making it ideal for straightforward tasks.

 <p align="center">
   <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/labelme-instance-segmentation-annotation.avif" alt="LabelMe Overview">
@@ -167,7 +167,7 @@ Several popular open-source tools can streamline the data annotation process:

 - **[Label Studio](https://github.com/HumanSignal/label-studio)**: A flexible tool supporting various annotation tasks, project management, and quality control features.
 - **[CVAT](https://www.cvat.ai/)**: Offers multiple annotation formats and customizable workflows, making it suitable for complex projects.
-- **[Labelme](https://github.com/labelmeai/labelme)**: Ideal for quick and straightforward image annotation with polygons.
+- **[Labelme](https://github.com/wkentaro/labelme)**: Ideal for quick and straightforward image annotation with polygons.

 These tools can help enhance the efficiency and accuracy of your annotation workflows. For extensive feature lists and guides, refer to our [data annotation tools documentation](../datasets/index.md).
@@ -100,7 +100,7 @@ However, if you choose to collect images or take your own pictures, you'll need
   <img width="100%" src="https://github.com/ultralytics/docs/releases/download/0/different-types-of-image-annotation.avif" alt="Different Types of Image Annotation">
 </p>

-[Data collection and annotation](./data-collection-and-annotation.md) can be a time-consuming manual effort. Annotation tools can help make this process easier. Here are some useful open annotation tools: [Label Studio](https://github.com/HumanSignal/label-studio), [CVAT](https://github.com/cvat-ai/cvat), and [Labelme](https://github.com/labelmeai/labelme).
+[Data collection and annotation](./data-collection-and-annotation.md) can be a time-consuming manual effort. Annotation tools can help make this process easier. Here are some useful open annotation tools: [Label Studio](https://github.com/HumanSignal/label-studio), [CVAT](https://github.com/cvat-ai/cvat), and [Labelme](https://github.com/wkentaro/labelme).

 ## Step 3: [Data Augmentation](https://www.ultralytics.com/glossary/data-augmentation) and Splitting Your Dataset
@@ -215,7 +215,7 @@ Data annotation is vital for teaching your model to recognize patterns. The type

 - **Object Detection**: Bounding boxes drawn around objects.
 - **Image Segmentation**: Each pixel labeled according to the object it belongs to.

-Tools like [Label Studio](https://github.com/HumanSignal/label-studio), [CVAT](https://github.com/cvat-ai/cvat), and [Labelme](https://github.com/labelmeai/labelme) can assist in this process. For more details, refer to our [data collection and annotation guide](./data-collection-and-annotation.md).
+Tools like [Label Studio](https://github.com/HumanSignal/label-studio), [CVAT](https://github.com/cvat-ai/cvat), and [Labelme](https://github.com/wkentaro/labelme) can assist in this process. For more details, refer to our [data collection and annotation guide](./data-collection-and-annotation.md).

 ### What steps should I follow to augment and split my dataset effectively?
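Editor's note: for readers new to annotation formats, here is a brief illustrative sketch (not part of this commit) of how a bounding-box annotation is typically expressed as a YOLO-format label line, i.e. `class_id x_center y_center width height` with coordinates normalized by image size. All values below are made up.

```python
# Convert a pixel-space bounding box to a YOLO-format detection label line.
# Illustrative values; class 0 might be "person" in a custom dataset.
img_w, img_h = 1920, 1080
x_min, y_min, x_max, y_max = 960, 270, 1440, 810  # annotated box in pixels
class_id = 0

x_center = (x_min + x_max) / 2 / img_w
y_center = (y_min + y_max) / 2 / img_h
width = (x_max - x_min) / img_w
height = (y_max - y_min) / img_h

print(f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}")
# -> 0 0.625000 0.500000 0.250000 0.500000
```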
@@ -148,7 +148,7 @@ After running the usage code snippet, you can access the Weights & Biases (W&B)

 - **Model Artifacts Management**: Access and share model checkpoints, facilitating easy deployment and collaboration.

-- **Viewing Inference Results with Image Overlay**: Visualize the prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases' image overlay capabilities, check out this [link](https://docs.wandb.ai/guides/track/log/media#image-overlays). [See how Weights & Biases' image overlays helps visualize model inferences](https://imgur.com/a/UTSiufs).
+- **Viewing Inference Results with Image Overlay**: Visualize the prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases' image overlay capabilities, check out this [link](https://docs.wandb.ai/guides/track/log/media/#image-overlays). [See how Weights & Biases' image overlays helps visualize model inferences](https://imgur.com/a/UTSiufs).

 By using these features, you can effectively track, analyze, and optimize your YOLOv8 model's training, ensuring the best possible performance and efficiency.
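Editor's note: as a companion to the overlay feature mentioned in this hunk, a minimal sketch of logging a single prediction as a W&B image overlay via the `boxes` argument of `wandb.Image`. The project name, image, and box values are illustrative, not from this commit; coordinates use the default relative (0-1) domain.

```python
import numpy as np
import wandb

wandb.init(project="yolo-overlay-demo")  # illustrative project name

image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in for a real frame

wandb.log(
    {
        "predictions": wandb.Image(
            image,
            boxes={
                "predictions": {
                    "box_data": [
                        {
                            "position": {"minX": 0.21, "maxX": 0.58, "minY": 0.10, "maxY": 0.87},
                            "class_id": 0,
                            "box_caption": "person 0.91",
                            "scores": {"conf": 0.91},
                        }
                    ],
                    "class_labels": {0: "person"},
                }
            },
        )
    }
)
wandb.finish()
```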
@@ -156,7 +156,7 @@ By using these features, you can effectively track, analyze, and optimize your Y

 This guide helped you explore Ultralytics' YOLOv8 integration with Weights & Biases. It illustrates the ability of this integration to efficiently track and visualize model training and prediction results.

-For further details on usage, visit [Weights & Biases' official documentation](https://docs.wandb.ai/guides/integrations/ultralytics).
+For further details on usage, visit [Weights & Biases' official documentation](https://docs.wandb.ai/guides/integrations/ultralytics/).

 Also, be sure to check out the [Ultralytics integration guide page](../integrations/index.md), to learn more about different exciting integrations.
@@ -645,9 +645,7 @@ class SAM2Model(torch.nn.Module):
         # The case of `self.num_maskmem == 0` below is primarily used for reproducing SAM on images.
         # In this case, we skip the fusion with any memory.
         if self.num_maskmem == 0:  # Disable memory and skip fusion
-            pix_feat = current_vision_feats[-1].permute(1, 2, 0).view(B, C, H, W)
-            return pix_feat
-
+            return current_vision_feats[-1].permute(1, 2, 0).view(B, C, H, W)
         num_obj_ptr_tokens = 0
         # Step 1: condition the visual features of the current frame on previous memories
         if not is_init_cond_frame:
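Editor's note: a self-contained sketch of what the returned expression does. Shapes are illustrative, and the sketch assumes the flatten/permute layout SAM 2 uses upstream for its backbone features: the last feature level arrives flattened as (HW, B, C) and is rearranged back to the usual (B, C, H, W) image layout.

```python
import torch

# Illustrative shapes only; in SAM 2 these come from the image encoder.
B, C, H, W = 2, 256, 64, 64
backbone_feat = torch.randn(B, C, H, W)

# Assumed upstream layout: spatial dims flattened and tokens moved first, (B, C, H, W) -> (HW, B, C).
flat_feats = backbone_feat.flatten(2).permute(2, 0, 1)

# The refactored return undoes that in one expression: (HW, B, C) -> (B, C, HW) -> (B, C, H, W).
pix_feat = flat_feats.permute(1, 2, 0).view(B, C, H, W)
print(pix_feat.shape)  # torch.Size([2, 256, 64, 64])
print(torch.equal(pix_feat, backbone_feat))  # True: same data, reshaped back
```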
@@ -176,22 +176,24 @@ class ObjectCounter:

                 # Count objects using line
                 elif len(self.reg_pts) == 2:
-                    if prev_position is not None and track_id not in self.count_ids:
-                        # Check if the object's movement segment intersects the counting line
-                        if LineString([(prev_position[0], prev_position[1]), (box[0], box[1])]).intersects(
-                            self.counting_line_segment
-                        ):
-                            self.count_ids.append(track_id)
+                    if (
+                        prev_position is not None
+                        and track_id not in self.count_ids
+                        and LineString([(prev_position[0], prev_position[1]), (box[0], box[1])]).intersects(
+                            self.counting_line_segment
+                        )
+                    ):
+                        self.count_ids.append(track_id)

                         # Determine the direction of movement (IN or OUT)
                         dx = (box[0] - prev_position[0]) * (self.counting_region.centroid.x - prev_position[0])
                         dy = (box[1] - prev_position[1]) * (self.counting_region.centroid.y - prev_position[1])
                         if dx > 0 and dy > 0:
                             self.in_counts += 1
                             self.class_wise_count[self.names[cls]]["IN"] += 1
                         else:
                             self.out_counts += 1
                             self.class_wise_count[self.names[cls]]["OUT"] += 1

         labels_dict = {}
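Editor's note: to make the combined condition easier to follow, a small standalone sketch (coordinates are made up, not from the commit) of the same Shapely test: the segment between an object's previous and current positions is intersected with the counting line.

```python
from shapely.geometry import LineString

# Hypothetical counting line and track positions (pixel coordinates).
counting_line_segment = LineString([(0, 200), (640, 200)])  # horizontal line across the frame
prev_position = (300, 180)  # object position in the previous frame (x, y)
box = (310, 215)            # object position in the current frame (x, y)

# The object is counted only if its movement segment crosses the counting line.
movement = LineString([(prev_position[0], prev_position[1]), (box[0], box[1])])
print(movement.intersects(counting_line_segment))  # True: the track crossed the line
```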
@@ -128,14 +128,13 @@ class ParkingPtsSelection:

         rg_data = []  # regions data
         for box in self.rg_data:
-            rs_box = []  # rescaled box list
-            for x, y in box:
-                rs_box.append(
-                    (
-                        int(x * self.imgw / self.canvas.winfo_width()),  # width scaling
-                        int(y * self.imgh / self.canvas.winfo_height()),
-                    )
-                )  # height scaling
+            rs_box = [
+                (
+                    int(x * self.imgw / self.canvas.winfo_width()),  # width scaling
+                    int(y * self.imgh / self.canvas.winfo_height()),  # height scaling
+                )
+                for x, y in box
+            ]
             rg_data.append({"points": rs_box})
         with open("bounding_boxes.json", "w") as f:
             json.dump(rg_data, f, indent=4)
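Editor's note: the same rescaling pattern, reduced to a standalone sketch outside the Tkinter class (sizes and points are illustrative): each canvas-space point is mapped back to original image pixels by scaling each axis by image size over canvas size, then the regions are dumped to JSON.

```python
import json

# Illustrative sizes: an image shown on an 800x600 canvas is really 1920x1080 pixels.
imgw, imgh = 1920, 1080
canvas_w, canvas_h = 800, 600

# One region drawn on the canvas, as (x, y) canvas coordinates.
boxes = [[(100, 150), (300, 150), (300, 400), (100, 400)]]

rg_data = [
    {
        "points": [
            (int(x * imgw / canvas_w), int(y * imgh / canvas_h))  # canvas -> image pixels
            for x, y in box
        ]
    }
    for box in boxes
]

with open("bounding_boxes.json", "w") as f:
    json.dump(rg_data, f, indent=4)
```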