Add FAQ sections to Modes and Tasks (#14181)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Abirami Vina <abirami.vina@gmail.com>
Co-authored-by: RizwanMunawar <chr043416@gmail.com>
Co-authored-by: Muhammad Rizwan Munawar <muhammadrizwanmunawar123@gmail.com>
Glenn Jocher 2024-07-04 17:16:16 +02:00 committed by GitHub
parent e285d3d1b2
commit 6c13bea7b8
39 changed files with 2247 additions and 481 deletions

@@ -10,11 +10,11 @@ keywords: Overfitting and Underfitting in Machine Learning, Model Testing, Data
After [training](./model-training-tips.md) and [evaluating](./model-evaluation-insights.md) your model, it's time to test it. Model testing involves assessing how well it performs in real-world scenarios. Testing considers factors like accuracy, reliability, fairness, and how easy it is to understand the model's decisions. The goal is to make sure the model performs as intended, delivers the expected results, and fits into the [overall objective of your application](./defining-project-goals.md) or project.
Model testing's definition is quite similar to model evaluation, but they are two distinct [steps in a computer vision project](./steps-of-a-cv-project.md). Model evaluation involves metrics and plots to assess the model's accuracy. On the other hand, model testing checks if the model's learned behavior is the same as expectations. In this guide, we'll explore strategies for testing your computer vision models.
## Model Testing vs. Model Evaluation
First, let's understand the difference between model evaluation and testing with an example.
Suppose you have trained a computer vision model to recognize cats and dogs, and you want to deploy this model at a pet store to monitor the animals. During the model evaluation phase, you use a labeled dataset to calculate metrics like accuracy, precision, recall, and F1 score. For instance, the model might have an accuracy of 98% in distinguishing between cats and dogs in a given dataset.
@@ -26,7 +26,7 @@ Computer vision models learn from datasets by detecting patterns, making predict
Here are two points to keep in mind before testing your model:
- **Realistic Representation:** The previously unseen testing data should be similar to the data that the model will have to handle when deployed. This helps get a realistic understanding of the model's capabilities.
- **Sufficient Size:** The size of the testing dataset needs to be large enough to provide reliable insights into how well the model performs.
## Testing Your Computer Vision Model
@@ -35,18 +35,18 @@ Here are the key steps to take to test your computer vision model and understand
- **Run Predictions:** Use the model to make predictions on the test dataset.
- **Compare Predictions:** Check how well the model's predictions match the actual labels (ground truth).
- **Calculate Performance Metrics:** [Compute metrics](./yolo-performance-metrics.md) like accuracy, precision, recall, and F1 score to understand the model's strengths and weaknesses. Testing focuses on how these metrics reflect real-world performance.
- **Visualize Results:** Create visual aids like confusion matrices and ROC curves. These help you spot specific areas where the model might not be performing well in practical applications.
Next, the testing results can be analyzed:
- **Misclassified Images:** Identify and review images that the model misclassified to understand where it is going wrong.
- **Error Analysis:** Perform a thorough error analysis to understand the types of errors (e.g., false positives vs. false negatives) and their potential causes.
- **Bias and Fairness:** Check for any biases in the model's predictions. Ensure that the model performs equally well across different subsets of the data, especially if it includes sensitive attributes like race, gender, or age.
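As a rough sketch of the first two checks, you could run the trained model over the test images and collect the ones whose predicted classes disagree with the ground truth. The weight path is a placeholder, and `load_ground_truth` is a hypothetical helper standing in for your own label-loading code:

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")  # your trained weights

misclassified = []
for result in model.predict(source="path/to/test_images", stream=True):
    predicted = set(result.boxes.cls.int().tolist())  # predicted class IDs
    expected = load_ground_truth(result.path)  # hypothetical helper returning labeled class IDs
    if predicted != expected:
        misclassified.append(result.path)

print(f"{len(misclassified)} images to review:", misclassified)
```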
## Testing Your YOLOv8 Model
To test your YOLOv8 model, you can use the validation mode. It's a straightforward way to understand the model's strengths and areas that need improvement. Also, you'll need to format your test dataset correctly for YOLOv8. For more details on how to use the validation mode, check out the [Model Validation](../modes/val.md) docs page.
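For example, a minimal validation run with the Python API might look like this (the weight and dataset paths are placeholders for your own files):

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")

# Evaluate on the test split defined in your dataset YAML; plots=True also
# saves visual aids such as the confusion matrix and PR curves
metrics = model.val(data="path/to/data.yaml", split="test", plots=True)

print(metrics.box.map)  # mAP50-95
print(metrics.box.map50)  # mAP50
```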
## Using YOLOv8 to Predict on Multiple Test Images
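Assuming your test images sit in a single directory, a short sketch of batch prediction could look like this (paths are placeholders):

```python
from ultralytics import YOLO

model = YOLO("path/to/best.pt")

# Run inference on every image in the directory and save annotated copies
results = model.predict(source="path/to/test_images", save=True)

for result in results:
    print(result.path, len(result.boxes))  # image path and detection count
```

Reviewing the saved, annotated images alongside the printed detection counts is a quick first pass at spotting obvious failure cases.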
@@ -76,7 +76,7 @@ Overfitting happens when your model learns the training data too well, including
### Underfitting
Underfitting occurs when your model can't capture the underlying patterns in the data. In computer vision, an underfitted model might not even recognize objects correctly in the training images.
#### Signs of Underfitting
@@ -85,10 +85,11 @@ Underfitting occurs when your model can't capture the underlying patterns in t
### Balancing Overfitting and Underfitting
The key is to find a balance between overfitting and underfitting. Ideally, a model should perform well on both training and validation datasets. Regularly monitoring your model's performance through metrics and visual inspections, along with applying the right strategies, can help you achieve the best results.
<p align="center">
<img width="100%" src="https://viso.ai/wp-content/uploads/2022/07/overfitting-underfitting-appropriate-fitting.jpg" alt="Overfitting and Underfitting Overview">
</p>
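One practical way to maintain this balance during training is early stopping. As a minimal sketch, YOLOv8 training accepts a `patience` argument that halts training once validation metrics stop improving, which helps prevent the model from overfitting to the training set:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Stop training if validation metrics show no improvement for 10 straight epochs
model.train(data="path/to/data.yaml", epochs=100, patience=10)
```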
## Data Leakage in Computer Vision and How to Avoid It
@@ -100,7 +101,7 @@ Data leakage can be tricky to spot and often comes from hidden biases in the tra
- **Camera Bias:** Different angles, lighting, shadows, and camera movements can introduce unwanted patterns.
- **Overlay Bias:** Logos, timestamps, or other overlays in images can mislead the model.
- **Font and Object Bias:** Specific fonts or objects that frequently appear in certain classes can skew the model's learning.
- **Spatial Bias:** Imbalances in foreground-background, bounding box distributions, and object locations can affect training.
- **Label and Domain Bias:** Incorrect labels or shifts in data types can lead to leakage.
@@ -110,7 +111,7 @@ To find data leakage, you can:
- **Check Performance:** If the model's results are surprisingly good, it might be leaking.
- **Look at Feature Importance:** If one feature is much more important than others, it could indicate leakage.
- **Visual Inspection:** Double-check that the model's decisions make sense intuitively.
- **Verify Data Separation:** Make sure data was divided correctly before any processing.
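As a simple sketch of that last check, you can hash every image file in each split and look for overlaps; matching hashes across train and test mean the same file leaked into both splits (the directory paths are placeholders):

```python
import hashlib
from pathlib import Path


def image_hashes(folder):
    """Return the set of MD5 hashes for every file in a folder."""
    return {hashlib.md5(p.read_bytes()).hexdigest() for p in Path(folder).iterdir() if p.is_file()}


leaked = image_hashes("path/to/train_images") & image_hashes("path/to/test_images")
print(f"{len(leaked)} duplicate images found across train and test splits")
```

Note that file hashing only catches exact duplicates; near-identical frames from the same video clip require perceptual hashing or manual review.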
### Avoiding Data Leakage
@@ -138,4 +139,4 @@ These resources will help you navigate challenges and remain updated on the late
## In Summary
Building trustworthy computer vision models relies on rigorous model testing. By testing the model with previously unseen data, we can analyze it and spot weaknesses like overfitting and data leakage. Addressing these issues before deployment helps the model perform well in real-world applications. It's important to remember that model testing is just as crucial as model evaluation in guaranteeing the model's long-term success and effectiveness.