Fix spelling (#18827)

This commit is contained in:
parent 5bdcf9116d
commit e170d50665

6 changed files with 13 additions and 13 deletions
@@ -82,8 +82,8 @@ Without further ado, let's dive in!
 ```python
 import pandas as pd
 
-indx = [label.stem for label in labels]  # uses base filename as ID (no extension)
-labels_df = pd.DataFrame([], columns=cls_idx, index=indx)
+index = [label.stem for label in labels]  # uses base filename as ID (no extension)
+labels_df = pd.DataFrame([], columns=cls_idx, index=index)
 ```
 
 5. Count the instances of each class-label present in the annotation files.
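The `-`/`+` lines in this hunk rename `indx` to `index`; the comprehension itself relies on `pathlib.Path.stem`. A minimal sketch of what it produces, with hypothetical label filenames:

```python
# Sketch: `label.stem` (as in the hunk above) strips both the directory
# and the extension, yielding the bare filename used as a DataFrame
# index ID. The filenames here are hypothetical stand-ins.
from pathlib import Path

labels = [Path("labels/img_001.txt"), Path("labels/img_002.txt")]
index = [label.stem for label in labels]  # ["img_001", "img_002"]
```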
@@ -146,11 +146,11 @@ The rows index the label files, each corresponding to an image in your dataset,
 
 ```python
 folds = [f"split_{n}" for n in range(1, ksplit + 1)]
-folds_df = pd.DataFrame(index=indx, columns=folds)
+folds_df = pd.DataFrame(index=index, columns=folds)
 
-for idx, (train, val) in enumerate(kfolds, start=1):
-    folds_df[f"split_{idx}"].loc[labels_df.iloc[train].index] = "train"
-    folds_df[f"split_{idx}"].loc[labels_df.iloc[val].index] = "val"
+for i, (train, val) in enumerate(kfolds, start=1):
+    folds_df[f"split_{i}"].loc[labels_df.iloc[train].index] = "train"
+    folds_df[f"split_{i}"].loc[labels_df.iloc[val].index] = "val"
 ```
 
 3. Now we will calculate the distribution of class labels for each fold as a ratio of the classes present in `val` to those present in `train`.
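The loop in this hunk marks every image as "train" or "val" in each split. A stdlib-only sketch of the same assignment logic, with hypothetical IDs, hand-rolled folds standing in for `sklearn.model_selection.KFold`, and plain dicts in place of `folds_df`:

```python
# Sketch of the k-fold train/val assignment from the hunk above,
# without pandas or scikit-learn. IDs and ksplit=3 are illustrative.
ids = [f"img_{n:03d}" for n in range(9)]  # stand-in for label file stems
ksplit = 3

# Build (train, val) index pairs the way KFold would: each contiguous
# block of indices becomes the val set once; the rest form train.
fold_size = len(ids) // ksplit
kfolds = []
for k in range(ksplit):
    val = list(range(k * fold_size, (k + 1) * fold_size))
    train = [i for i in range(len(ids)) if i not in val]
    kfolds.append((train, val))

# Mirror the folds_df assignment: one column per split, "train"/"val" per ID.
folds = {f"split_{i}": {} for i in range(1, ksplit + 1)}
for i, (train, val) in enumerate(kfolds, start=1):
    for t in train:
        folds[f"split_{i}"][ids[t]] = "train"
    for v in val:
        folds[f"split_{i}"][ids[v]] = "val"
```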
@@ -95,7 +95,7 @@ Here we will install Ultralytics package on the Raspberry Pi with optional depen
 
 ## Use NCNN on Raspberry Pi
 
-Out of all the model export formats supported by Ultralytics, [NCNN](https://docs.ultralytics.com/integrations/ncnn/) delivers the best inference performance when working with Raspberry Pi devices because NCNN is highly optimized for mobile/ embedded platforms (such as ARM architecture). Therefor our recommendation is to use NCNN with Raspberry Pi.
+Out of all the model export formats supported by Ultralytics, [NCNN](https://docs.ultralytics.com/integrations/ncnn/) delivers the best inference performance when working with Raspberry Pi devices because NCNN is highly optimized for mobile/ embedded platforms (such as ARM architecture).
 
 ## Convert Model to NCNN and Run Inference
@@ -48,7 +48,7 @@ from ultralytics import YOLO
 # Load a model
 model = YOLO("yolo11n.pt")  # load an official model
 
-# Retreive metadata during export
+# Retrieve metadata during export
 metadata = []
@@ -133,7 +133,7 @@ After loading the dataset, we printed and saved our working directory. We have a
 
 If you see "trash_ICRA19" among the directory's contents, then it has loaded successfully. You should see three files/folders: a `config.yaml` file, a `videos_for_testing` directory, and a `dataset` directory. We will ignore the `videos_for_testing` directory, so feel free to delete it.
 
-We will use the config.yaml file and the contents of the dataset directory to train our [object detection](https://www.ultralytics.com/glossary/object-detection) model. Here is a sample image from our marine litter data set.
+We will use the `config.yaml` file and the contents of the dataset directory to train our [object detection](https://www.ultralytics.com/glossary/object-detection) model. Here is a sample image from our marine litter data set.
 
 <p align="center">
   <img width="400" src="https://github.com/ultralytics/docs/releases/download/0/marine-litter-bounding-box.avif" alt="Marine Litter with Bounding Box">
@@ -205,14 +205,14 @@ names:
   2: rov
 ```
 
-Run the following script to delete the current contents of config.yaml and replace it with the above contents that reflect our new data set directory structure. Be certain to replace the work_dir portion of the root directory path in line 4 with your own working directory path we retrieved earlier. Leave the train, val, and test subdirectory definitions. Also, do not change {work_dir} in line 23 of the code.
+Run the following script to delete the current contents of `config.yaml` and replace it with the above contents that reflect our new data set directory structure. Be certain to replace the work_dir portion of the root directory path in line 4 with your own working directory path we retrieved earlier. Leave the train, val, and test subdirectory definitions. Also, do not change {work_dir} in line 23 of the code.
 
 !!! example "Edit the .yaml File"
 
     === "Python"
 
         ```python
-        # Contents of new confg.yaml file
+        # Contents of new config.yaml file
         def update_yaml_file(file_path):
             data = {
                 "path": "work_dir/trash_ICRA19/dataset",
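The hunk above shows only the start of the docs' `update_yaml_file` helper. A minimal sketch of a helper in the same spirit, writing the YAML by hand to avoid a PyYAML dependency; the `work_dir` placeholder and the class names for IDs 0 and 1 are assumptions, since only `2: rov` appears in the diff context:

```python
# Sketch of an update_yaml_file-style helper like the one in the hunk
# above. Emits the config YAML as plain text (no PyYAML needed).
def update_yaml_file(file_path, work_dir="work_dir"):
    lines = [
        f"path: {work_dir}/trash_ICRA19/dataset",
        "train: train",
        "val: val",
        "test: test",
        "names:",
        "  0: plastic",  # assumed class name
        "  1: bio",  # assumed class name
        "  2: rov",  # from the diff context
    ]
    with open(file_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```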
@@ -185,7 +185,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i
 
 ???+ warning "Calibration Cache"
 
-    TensorRT will generate a calibration `.cache` which can be re-used to speed up export of future model weights using the same data, but this may result in poor calibration when the data is vastly different or if the `batch` value is changed drastically. In these circumstances, the existing `.cache` should be renamed and moved to a different directory or deleted entirely.
+    TensorRT will generate a calibration `.cache` which can be reused to speed up export of future model weights using the same data, but this may result in poor calibration when the data is vastly different or if the `batch` value is changed drastically. In these circumstances, the existing `.cache` should be renamed and moved to a different directory or deleted entirely.
 
 #### Advantages of using YOLO with TensorRT INT8
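The warning in this hunk says to rename and move (or delete) a stale calibration `.cache` before recalibrating. A minimal stdlib sketch of that housekeeping step, with hypothetical paths:

```python
# Sketch: set aside an existing TensorRT calibration cache before
# exporting with different calibration data, per the warning above.
# Cache path and backup directory name are hypothetical.
import shutil
from pathlib import Path


def archive_calibration_cache(cache_path, backup_dir="old_calibrations"):
    cache = Path(cache_path)
    if not cache.exists():
        return None  # nothing to move; a fresh cache will be generated
    dest = Path(backup_dir)
    dest.mkdir(exist_ok=True)
    target = dest / cache.name
    shutil.move(str(cache), str(target))  # rename + move out of the way
    return target
```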
@@ -118,7 +118,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
 # Predict a segment based on a single point prompt
 model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])
 
-# Predict mutiple segments based on multiple points prompt
+# Predict multiple segments based on multiple points prompt
 model.predict("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])
 
 # Predict a segment based on multiple points prompt per object
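The two `model.predict` calls in this hunk accept either a single `[x, y]` point or a list of points, each paired with a label. A hypothetical helper (not part of the Ultralytics API) showing how the two prompt shapes relate:

```python
# Hypothetical helper, for illustration only: normalize the two
# point-prompt shapes shown above into uniform ((x, y), label) pairs.
def normalize_prompts(points, labels):
    if points and isinstance(points[0], (int, float)):
        points = [points]  # single [x, y] point -> list of one point
    if len(points) != len(labels):
        raise ValueError("each point needs exactly one label")
    return [(tuple(p), lab) for p, lab in zip(points, labels)]
```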