From e170d5066568289c1291000a464b56fbfdb3122e Mon Sep 17 00:00:00 2001
From: Glenn Jocher
Date: Wed, 22 Jan 2025 19:13:06 +0100
Subject: [PATCH] Fix spelling (#18827)

---
 docs/en/guides/kfold-cross-validation.md  | 12 ++++++------
 docs/en/guides/raspberry-pi.md            |  2 +-
 docs/en/guides/triton-inference-server.md |  2 +-
 docs/en/integrations/ibm-watsonx.md       |  6 +++---
 docs/en/integrations/tensorrt.md          |  2 +-
 docs/en/models/mobile-sam.md              |  2 +-
 6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/docs/en/guides/kfold-cross-validation.md b/docs/en/guides/kfold-cross-validation.md
index 44ba8d82..bb8efb7d 100644
--- a/docs/en/guides/kfold-cross-validation.md
+++ b/docs/en/guides/kfold-cross-validation.md
@@ -82,8 +82,8 @@ Without further ado, let's dive in!
     ```python
     import pandas as pd

-    indx = [label.stem for label in labels]  # uses base filename as ID (no extension)
-    labels_df = pd.DataFrame([], columns=cls_idx, index=indx)
+    index = [label.stem for label in labels]  # uses base filename as ID (no extension)
+    labels_df = pd.DataFrame([], columns=cls_idx, index=index)
     ```

 5. Count the instances of each class-label present in the annotation files.
@@ -146,11 +146,11 @@ The rows index the label files, each corresponding to an image in your dataset,
     ```python
     folds = [f"split_{n}" for n in range(1, ksplit + 1)]
-    folds_df = pd.DataFrame(index=indx, columns=folds)
+    folds_df = pd.DataFrame(index=index, columns=folds)

-    for idx, (train, val) in enumerate(kfolds, start=1):
-        folds_df[f"split_{idx}"].loc[labels_df.iloc[train].index] = "train"
-        folds_df[f"split_{idx}"].loc[labels_df.iloc[val].index] = "val"
+    for i, (train, val) in enumerate(kfolds, start=1):
+        folds_df[f"split_{i}"].loc[labels_df.iloc[train].index] = "train"
+        folds_df[f"split_{i}"].loc[labels_df.iloc[val].index] = "val"
     ```

 3. Now we will calculate the distribution of class labels for each fold as a ratio of the classes present in `val` to those present in `train`.
diff --git a/docs/en/guides/raspberry-pi.md b/docs/en/guides/raspberry-pi.md
index 4268287f..00b8d315 100644
--- a/docs/en/guides/raspberry-pi.md
+++ b/docs/en/guides/raspberry-pi.md
@@ -95,7 +95,7 @@ Here we will install Ultralytics package on the Raspberry Pi with optional depen

 ## Use NCNN on Raspberry Pi

-Out of all the model export formats supported by Ultralytics, [NCNN](https://docs.ultralytics.com/integrations/ncnn/) delivers the best inference performance when working with Raspberry Pi devices because NCNN is highly optimized for mobile/ embedded platforms (such as ARM architecture). Therefor our recommendation is to use NCNN with Raspberry Pi.
+Out of all the model export formats supported by Ultralytics, [NCNN](https://docs.ultralytics.com/integrations/ncnn/) delivers the best inference performance when working with Raspberry Pi devices because NCNN is highly optimized for mobile/ embedded platforms (such as ARM architecture). Therefore our recommendation is to use NCNN with Raspberry Pi.

 ## Convert Model to NCNN and Run Inference

diff --git a/docs/en/guides/triton-inference-server.md b/docs/en/guides/triton-inference-server.md
index 67d419bf..68aa3cd8 100644
--- a/docs/en/guides/triton-inference-server.md
+++ b/docs/en/guides/triton-inference-server.md
@@ -48,7 +48,7 @@ from ultralytics import YOLO

 # Load a model
 model = YOLO("yolo11n.pt")  # load an official model

-# Retreive metadata during export
+# Retrieve metadata during export
 metadata = []

diff --git a/docs/en/integrations/ibm-watsonx.md b/docs/en/integrations/ibm-watsonx.md
index 16ebaa2a..0e77bc5e 100644
--- a/docs/en/integrations/ibm-watsonx.md
+++ b/docs/en/integrations/ibm-watsonx.md
@@ -133,7 +133,7 @@ After loading the dataset, we printed and saved our working directory. We have a

 If you see "trash_ICRA19" among the directory's contents, then it has loaded successfully. You should see three files/folders: a `config.yaml` file, a `videos_for_testing` directory, and a `dataset` directory. We will ignore the `videos_for_testing` directory, so feel free to delete it.

-We will use the config.yaml file and the contents of the dataset directory to train our [object detection](https://www.ultralytics.com/glossary/object-detection) model. Here is a sample image from our marine litter data set.
+We will use the `config.yaml` file and the contents of the dataset directory to train our [object detection](https://www.ultralytics.com/glossary/object-detection) model. Here is a sample image from our marine litter data set.

 [image: Marine Litter with Bounding Box]
@@ -205,14 +205,14 @@ names:
   2: rov
 ```

-Run the following script to delete the current contents of config.yaml and replace it with the above contents that reflect our new data set directory structure. Be certain to replace the work_dir portion of the root directory path in line 4 with your own working directory path we retrieved earlier. Leave the train, val, and test subdirectory definitions. Also, do not change {work_dir} in line 23 of the code.
+Run the following script to delete the current contents of `config.yaml` and replace it with the above contents that reflect our new data set directory structure. Be certain to replace the work_dir portion of the root directory path in line 4 with your own working directory path we retrieved earlier. Leave the train, val, and test subdirectory definitions. Also, do not change {work_dir} in line 23 of the code.

 !!! example "Edit the .yaml File"

     === "Python"

         ```python
-        # Contents of new confg.yaml file
+        # Contents of new config.yaml file
         def update_yaml_file(file_path):
             data = {
                 "path": "work_dir/trash_ICRA19/dataset",
diff --git a/docs/en/integrations/tensorrt.md b/docs/en/integrations/tensorrt.md
index cac4ac32..59dbb280 100644
--- a/docs/en/integrations/tensorrt.md
+++ b/docs/en/integrations/tensorrt.md
@@ -185,7 +185,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i
 ???+ warning "Calibration Cache"

-    TensorRT will generate a calibration `.cache` which can be re-used to speed up export of future model weights using the same data, but this may result in poor calibration when the data is vastly different or if the `batch` value is changed drastically. In these circumstances, the existing `.cache` should be renamed and moved to a different directory or deleted entirely.
+    TensorRT will generate a calibration `.cache` which can be reused to speed up export of future model weights using the same data, but this may result in poor calibration when the data is vastly different or if the `batch` value is changed drastically. In these circumstances, the existing `.cache` should be renamed and moved to a different directory or deleted entirely.

 #### Advantages of using YOLO with TensorRT INT8
diff --git a/docs/en/models/mobile-sam.md b/docs/en/models/mobile-sam.md
index a65587de..34740c6c 100644
--- a/docs/en/models/mobile-sam.md
+++ b/docs/en/models/mobile-sam.md
@@ -118,7 +118,7 @@ You can download the model [here](https://github.com/ChaoningZhang/MobileSAM/blo
         # Predict a segment based on a single point prompt
         model.predict("ultralytics/assets/zidane.jpg", points=[900, 370], labels=[1])

-        # Predict mutiple segments based on multiple points prompt
+        # Predict multiple segments based on multiple points prompt
        model.predict("ultralytics/assets/zidane.jpg", points=[[400, 370], [900, 370]], labels=[1, 1])

         # Predict a segment based on multiple points prompt per object
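
Note on the k-fold hunks above: the renamed variables (`index`, `i`) only read clearly alongside objects the hunks do not show (`labels`, `cls_idx`, `ksplit`, `kfolds`). Below is a minimal, self-contained sketch of how those pieces typically fit together. The dataset path, class indices, and `KFold` parameters are illustrative assumptions, not part of the patch; the DataFrame construction and split-assignment loop mirror the patched lines.

```python
# Minimal sketch of the context around the renamed k-fold variables.
# Assumptions (not from the patch): YOLO-format .txt labels under `dataset_path/labels`,
# and `cls_idx` normally read from the dataset YAML.
from pathlib import Path

import pandas as pd
from sklearn.model_selection import KFold

dataset_path = Path("datasets/custom")  # hypothetical dataset root
labels = sorted((dataset_path / "labels").rglob("*.txt"))  # annotation files
cls_idx = [0, 1, 2]  # placeholder class indices

# Same pattern as the patched docs: index rows by label-file stem.
index = [label.stem for label in labels]
labels_df = pd.DataFrame([], columns=cls_idx, index=index)

ksplit = 5
kf = KFold(n_splits=ksplit, shuffle=True, random_state=20)  # deterministic example splits
kfolds = list(kf.split(labels_df))

folds = [f"split_{n}" for n in range(1, ksplit + 1)]
folds_df = pd.DataFrame(index=index, columns=folds)

# Mark each label file as train or val for every split, as in the patched loop.
for i, (train, val) in enumerate(kfolds, start=1):
    folds_df[f"split_{i}"].loc[labels_df.iloc[train].index] = "train"
    folds_df[f"split_{i}"].loc[labels_df.iloc[val].index] = "val"

print(folds_df.head())
```

Each column of `folds_df` then records, per label file, whether it falls in `train` or `val` for that split, which is what the label-distribution ratio mentioned at the end of the k-fold hunk is computed from.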