PyCharm Docs Inspect fixes (#18432)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: UltralyticsAssistant <web@ultralytics.com>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

Parent: ef16c56c99
Commit: f7b9009c91
15 changed files with 52 additions and 50 deletions
```diff
@@ -209,7 +209,7 @@ Yes, you can export your Ultralytics YOLO11 model to be compatible with the Cora
 For more information, refer to the [Export Mode](../modes/export.md) documentation.

-### What should I do if TensorFlow is already installed on my Raspberry Pi but I want to use tflite-runtime instead?
+### What should I do if TensorFlow is already installed on my Raspberry Pi, but I want to use tflite-runtime instead?

 If you have TensorFlow installed on your Raspberry Pi and need to switch to `tflite-runtime`, you'll need to uninstall TensorFlow first using:
```
```diff
@@ -31,7 +31,7 @@ Evaluating how well a model performs helps us understand how effectively it work
 The confidence score represents the model's certainty that a detected object belongs to a particular class. It ranges from 0 to 1, with higher scores indicating greater confidence. The confidence score helps filter predictions; only detections with confidence scores above a specified threshold are considered valid.

-_Quick Tip:_ When running inferences, if you aren't seeing any predictions and you've checked everything else, try lowering the confidence score. Sometimes, the threshold is too high, causing the model to ignore valid predictions. Lowering the score allows the model to consider more possibilities. This might not meet your project goals, but it's a good way to see what the model can do and decide how to fine-tune it.
+_Quick Tip:_ When running inferences, if you aren't seeing any predictions, and you've checked everything else, try lowering the confidence score. Sometimes, the threshold is too high, causing the model to ignore valid predictions. Lowering the score allows the model to consider more possibilities. This might not meet your project goals, but it's a good way to see what the model can do and decide how to fine-tune it.

 ### Intersection over Union
```
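The confidence-threshold behavior that this hunk's _Quick Tip_ describes can be sketched in plain Python. This is a minimal illustration with hypothetical detection tuples and a hypothetical `filter_by_confidence` helper, not the Ultralytics API:

```python
# Minimal sketch of confidence-score filtering (hypothetical data and helper,
# not the Ultralytics API). Each detection is a (class_name, confidence) pair;
# only detections at or above the threshold are kept.

def filter_by_confidence(detections, threshold):
    """Keep detections whose confidence meets or exceeds the threshold."""
    return [d for d in detections if d[1] >= threshold]

detections = [("person", 0.92), ("dog", 0.41), ("bicycle", 0.18)]

# A high threshold can hide valid predictions ...
print(filter_by_confidence(detections, 0.5))   # [('person', 0.92)]

# ... while lowering it reveals more of what the model actually detected.
print(filter_by_confidence(detections, 0.25))  # [('person', 0.92), ('dog', 0.41)]
```

Lowering the threshold, as the tip suggests, only changes which predictions survive the filter; it does not change what the model computed.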
```diff
@@ -23,7 +23,7 @@ Regular model monitoring helps developers track the [model's performance](./mode
 Here are some best practices to keep in mind while monitoring your computer vision model in production:

 - **Track Performance Regularly**: Continuously monitor the model's performance to detect changes over time.
-- **Double Check the Data Quality**: Check for missing values or anomalies in the data.
+- **Double-Check the Data Quality**: Check for missing values or anomalies in the data.
 - **Use Diverse Data Sources**: Monitor data from various sources to get a comprehensive view of the model's performance.
 - **Combine Monitoring Techniques**: Use a mix of drift detection algorithms and rule-based approaches to identify a wide range of issues.
 - **Monitor Inputs and Outputs**: Keep an eye on both the data the model processes and the results it produces to make sure everything is functioning correctly.
```
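The "Combine Monitoring Techniques" and "Double-Check the Data Quality" practices listed in this hunk can be illustrated with a minimal sketch. The helper names below are hypothetical, not from Ultralytics or any monitoring library: one rule-based data-quality check plus one naive mean-shift drift signal:

```python
# Minimal monitoring sketch (hypothetical helpers, not part of any library):
# a rule-based data-quality check combined with a simple drift signal.

def has_missing_values(batch):
    """Rule-based check: flag any None entries in a batch of inputs."""
    return any(x is None for x in batch)

def mean_shift_drift(baseline, current, tolerance=0.1):
    """Naive drift check: compare the current batch mean against a baseline mean."""
    base_mean = sum(baseline) / len(baseline)
    curr_mean = sum(current) / len(current)
    return abs(curr_mean - base_mean) > tolerance

baseline_scores = [0.81, 0.79, 0.83, 0.80]  # confidence scores at deployment time
today_scores = [0.62, 0.60, 0.65, 0.61]     # scores observed in production

print(has_missing_values([0.8, None, 0.7]))             # True -> data-quality alert
print(mean_shift_drift(baseline_scores, today_scores))  # True -> possible drift
```

Real deployments would use proper statistical drift tests over input features as well as outputs, but the combination shown here mirrors the list's advice: rule-based checks catch broken data, while the drift signal catches gradual change.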