Docs spelling and grammar fixes (#13307)

Signed-off-by: Glenn Jocher <glenn.jocher@ultralytics.com>
Co-authored-by: RainRat <rainrat78@yahoo.ca>
Glenn Jocher 2024-06-02 14:07:14 +02:00 committed by GitHub
parent bddea17bf3
commit 064e2fd282
48 changed files with 179 additions and 172 deletions

View file

@ -6,7 +6,7 @@ keywords: YOLOv8, Amazon SageMaker, deploy YOLOv8, AWS deployment, machine learn
# A Guide to Deploying YOLOv8 on Amazon SageMaker Endpoints
Deploying advanced computer vision models like [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) on Amazon SageMaker Endpoints opens up a wide range of possibilities for various machine learning applications. The key to effectively using these models lies in understanding their setup, configuration, and deployment processes. YOLOv8 becomes even more powerful when integrated seamlessly with Amazon SageMaker, a robust and scalable machine learning service by AWS.
Deploying advanced computer vision models like [Ultralytics' YOLOv8](https://github.com/ultralytics/ultralytics) on Amazon SageMaker Endpoints opens up a wide range of possibilities for various machine learning applications. The key to effectively using these models lies in understanding their setup, configuration, and deployment processes. YOLOv8 becomes even more powerful when integrated seamlessly with Amazon SageMaker, a robust and scalable machine learning service by AWS.
This guide will take you through the process of deploying YOLOv8 PyTorch models on Amazon SageMaker Endpoints step by step. You'll learn the essentials of preparing your AWS environment, configuring the model appropriately, and using tools like AWS CloudFormation and the AWS Cloud Development Kit (CDK) for deployment.
@ -32,7 +32,7 @@ First, ensure you have the following prerequisites in place:
- An AWS Account: If you don't already have one, sign up for an AWS account.
- Configured IAM Roles: You’ll need an IAM role with the necessary permissions for Amazon SageMaker, AWS CloudFormation, and Amazon S3. This role should have policies that allow it to access these services.
- Configured IAM Roles: You'll need an IAM role with the necessary permissions for Amazon SageMaker, AWS CloudFormation, and Amazon S3. This role should have policies that allow it to access these services.
- AWS CLI: If not already installed, download and install the AWS Command Line Interface (CLI) and configure it with your account details. Follow [the AWS CLI instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) for installation.
@ -144,7 +144,7 @@ Now that your YOLOv8 model is deployed, it's important to test its performance a
- Open the Test Notebook: In the same Jupyter environment, locate and open the 2_TestEndpoint.ipynb notebook, also in the sm-notebook directory.
- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint and running inferences. Then, you’ll plot the output to visualize the model’s performance and accuracy, as shown below.
- Run the Test Notebook: Follow the instructions within the notebook to test the deployed SageMaker endpoint. This includes sending an image to the endpoint and running inferences. Then, you'll plot the output to visualize the model's performance and accuracy, as shown below.
<p align="center">
<img width="640" src="https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2023/02/28/ML13353_InferenceOutput.png" alt="Testing Results YOLOv8">
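Once the endpoint from this guide is live, invoking it from Python is a short call. The snippet below is a minimal sketch rather than part of the official notebooks: the endpoint name is a placeholder, and the content type and response format depend on how the CDK stack and its inference handler were configured.

```python
import boto3

ENDPOINT_NAME = "yolov8-pytorch-endpoint"  # placeholder; use the name created by your deployment

runtime = boto3.client("sagemaker-runtime")

with open("test_image.jpg", "rb") as f:
    payload = f.read()

# Send one image to the endpoint and read the raw inference response.
response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="image/jpeg",
    Body=payload,
)
print(response["Body"].read())
```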

View file

@ -41,7 +41,7 @@ For detailed instructions and best practices related to the installation process
Once you have installed the necessary packages, the next step is to initialize and configure your ClearML SDK. This involves setting up your ClearML account and obtaining the necessary credentials for a seamless connection between your development environment and the ClearML server.
Begin by initializing the ClearML SDK in your environment. The clearml-init command starts the setup process and prompts you for the necessary credentials.
Begin by initializing the ClearML SDK in your environment. The 'clearml-init' command starts the setup process and prompts you for the necessary credentials.
!!! Tip "Initial SDK Setup"
@ -86,7 +86,7 @@ Before diving into the usage instructions, be sure to check out the range of [YO
### Understanding the Code
Let’s understand the steps showcased in the usage code snippet above.
Let's understand the steps showcased in the usage code snippet above.
**Step 1: Creating a ClearML Task**: A new task is initialized in ClearML, specifying your project and task names. This task will track and manage your model's training.
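For orientation, the task-creation step described above combined with a YOLOv8 training call looks roughly like the sketch below; the project and task names are placeholders, and `coco8.yaml` is the small sample dataset.

```python
from clearml import Task
from ultralytics import YOLO

# Step 1: create a ClearML task that will capture the training run.
task = Task.init(project_name="my_yolov8_project", task_name="yolov8n_coco8")

# Step 2: train as usual; the ClearML integration logs metrics and artifacts to the task.
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)
```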

View file

@ -37,7 +37,7 @@ To install the required packages, run:
## Configuring Comet ML
After installing the required packages, you’ll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
After installing the required packages, you'll need to sign up, get a [Comet API Key](https://www.comet.com/signup), and configure it.
!!! Tip "Configuring Comet ML"
@ -89,7 +89,7 @@ Comet automatically logs the following data with no additional configuration: me
## Understanding Your Model's Performance with Comet ML Visualizations
Let's dive into what you'll see on the Comet ML dashboard once your YOLOv8 model begins training. The dashboard is where all the action happens, presenting a range of automatically logged information through visuals and statistics. Here’s a quick tour:
Let's dive into what you'll see on the Comet ML dashboard once your YOLOv8 model begins training. The dashboard is where all the action happens, presenting a range of automatically logged information through visuals and statistics. Here's a quick tour:
**Experiment Panels**
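The dashboard content described in this section comes from an instrumented training run. A minimal sketch of such a run is shown below; it assumes your Comet API key is already configured, and the project name is a placeholder.

```python
import comet_ml
from ultralytics import YOLO

# Point Comet at a project (assumes COMET_API_KEY is already set up).
comet_ml.init(project_name="comet-yolov8-demo")

# Training metrics, hyperparameters, and checkpoints are then logged automatically.
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)
```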

View file

@ -40,7 +40,7 @@ Apple's CoreML framework offers robust features for on-device machine learning.
## CoreML Deployment Options
Before we look at the code for exporting YOLOv8 models to the CoreML format, let’s understand where CoreML models are usually used.
Before we look at the code for exporting YOLOv8 models to the CoreML format, let's understand where CoreML models are usually used.
CoreML offers various deployment options for machine learning models, including:
@ -50,7 +50,7 @@ CoreML offers various deployment options for machine learning models, including:
- **Downloaded Models**: These models are fetched from a server as needed. This approach is suitable for larger models or those needing regular updates. It helps keep the app bundle size smaller.
- **Cloud-Based Deployment**: CoreML models are hosted on servers and accessed by the iOS app through API requests. This scalable and flexible option enables easy model updates without app revisions. It’s ideal for complex models or large-scale apps requiring regular updates. However, it does require an internet connection and may pose latency and security issues.
- **Cloud-Based Deployment**: CoreML models are hosted on servers and accessed by the iOS app through API requests. This scalable and flexible option enables easy model updates without app revisions. It's ideal for complex models or large-scale apps requiring regular updates. However, it does require an internet connection and may pose latency and security issues.
## Exporting YOLOv8 Models to CoreML
@ -123,4 +123,4 @@ In this guide, we went over how to export Ultralytics YOLOv8 models to CoreML fo
For further details on usage, visit the [CoreML official documentation](https://developer.apple.com/documentation/coreml).
Also, if you’d like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of valuable resources and insights there.
Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of valuable resources and insights there.
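As a quick reference, the export step covered in this page reduces to a couple of lines. This is a sketch rather than the guide's full usage example, and running the exported `.mlpackage` requires macOS.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.export(format="coreml")  # creates 'yolov8n.mlpackage'

# On macOS, the exported package can be loaded back for a quick sanity check.
coreml_model = YOLO("yolov8n.mlpackage")
results = coreml_model("https://ultralytics.com/images/bus.jpg")
```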

View file

@ -166,6 +166,6 @@ Based on your analysis, iterate on your experiments. Adjust model configurations
This guide has led you through the process of integrating DVCLive with Ultralytics' YOLOv8. You have learned how to harness the power of DVCLive for detailed experiment monitoring, effective visualization, and insightful analysis in your machine learning endeavors.
For further details on usage, visit [DVCLive’s official documentation](https://dvc.org/doc/dvclive/ml-frameworks/yolo).
For further details on usage, visit [DVCLive's official documentation](https://dvc.org/doc/dvclive/ml-frameworks/yolo).
Additionally, explore more integrations and capabilities of Ultralytics by visiting the [Ultralytics integration guide page](../integrations/index.md), which is a collection of great resources and insights.
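For reference, the integration needs no extra code once `dvclive` is installed; a minimal sketch of a tracked run follows. The DVC commands in the comments are the usual experiment-review commands rather than anything YOLOv8-specific.

```python
from ultralytics import YOLO

# With the `dvclive` package installed, the Ultralytics DVCLive callback
# records metrics and plots automatically during training.
model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)

# Review the results afterwards from a terminal, e.g.:
#   dvc exp show
#   dvc plots diff
```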

View file

@ -32,7 +32,7 @@ Here are the key features that make TFLite Edge TPU a great model format choice
## Deployment Options with TFLite Edge TPU
Before we jump into how to export YOLOv8 models to the TFLite Edge TPU format, let’s understand where TFLite Edge TPU models are usually used.
Before we jump into how to export YOLOv8 models to the TFLite Edge TPU format, let's understand where TFLite Edge TPU models are usually used.
TFLite Edge TPU offers various deployment options for machine learning models, including:
@ -76,7 +76,7 @@ Before diving into the usage instructions, it's important to note that while all
model = YOLO("yolov8n.pt")
# Export the model to TFLite Edge TPU format
model.export(format="edgetpu") # creates 'yolov8n_full_integer_quant_edgetpu.tflite
model.export(format="edgetpu") # creates 'yolov8n_full_integer_quant_edgetpu.tflite'
# Load the exported TFLite Edge TPU model
edgetpu_model = YOLO("yolov8n_full_integer_quant_edgetpu.tflite")
@ -111,7 +111,7 @@ However, for in-depth instructions on deploying your TFLite Edge TPU models, tak
## Summary
In this guide, we’ve learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power of your computer vision applications.
In this guide, we've learned how to export Ultralytics YOLOv8 models to TFLite Edge TPU format. By following the steps mentioned above, you can increase the speed and power of your computer vision applications.
For further details on usage, visit the [Edge TPU official website](https://cloud.google.com/edge-tpu).

View file

@ -6,15 +6,15 @@ keywords: Ultralytics YOLOv8, Google Colab, CPU, GPU, TPU, Browser-based, Hardwa
# Accelerating YOLOv8 Projects with Google Colab
Many developers lack the powerful computing resources needed to build deep learning models. Acquiring high-end hardware or renting a decent GPU can be expensive. Google Colab is a great solution to this. It’s a browser-based platform that allows you to work with large datasets, develop complex models, and share your work with others without a huge cost.
Many developers lack the powerful computing resources needed to build deep learning models. Acquiring high-end hardware or renting a decent GPU can be expensive. Google Colab is a great solution to this. It's a browser-based platform that allows you to work with large datasets, develop complex models, and share your work with others without a huge cost.
You can use Google Colab to work on projects related to [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models. Google Colab’s user-friendly environment is well suited for efficient model development and experimentation. Let’s learn more about Google Colab, its key features, and how you can use it to train YOLOv8 models.
You can use Google Colab to work on projects related to [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models. Google Colab's user-friendly environment is well suited for efficient model development and experimentation. Let's learn more about Google Colab, its key features, and how you can use it to train YOLOv8 models.
## Google Colaboratory
Google Colaboratory, commonly known as Google Colab, was developed by Google Research in 2017. It is a free online cloud-based Jupyter Notebook environment that allows you to train your machine learning and deep learning models on CPUs, GPUs, and TPUs. The motivation behind developing Google Colab was Google's broader goals to advance AI technology and educational tools, and encourage the use of cloud services.
You can use Google Colab regardless of the specifications and configurations of your local computer. All you need is a Google account and a web browser, and you’re good to go.
You can use Google Colab regardless of the specifications and configurations of your local computer. All you need is a Google account and a web browser, and you're good to go.
## Training YOLOv8 Using Google Colaboratory
@ -39,10 +39,10 @@ Learn how to train a YOLOv8 model with custom data on YouTube with Nicolai. Chec
### Common Questions While Working with Google Colab
When working with Google Colab, you might have a few common questions. Let’s answer them.
When working with Google Colab, you might have a few common questions. Let's answer them.
**Q: Why does my Google Colab session timeout?**
A: Google Colab sessions can timeout due to inactivity, especially for free users who have a limited session duration.
A: Google Colab sessions can time out due to inactivity, especially for free users who have a limited session duration.
**Q: Can I increase the session duration in Google Colab?**
A: Free users face limits, but Google Colab Pro offers extended session durations.
@ -85,7 +85,7 @@ There are many options for training and evaluating YOLOv8 models, so what makes
- **Integration with Google Drive:** Colab seamlessly integrates with Google Drive to make data storage, access, and management simple. Datasets and models can be stored and retrieved directly from Google Drive.
- **Markdown Support:** You can use markdown format for enhanced documentation within notebooks.
- **Markdown Support:** You can use Markdown format for enhanced documentation within notebooks.
- **Scheduled Execution:** Developers can set notebooks to run automatically at specified times.
@ -93,18 +93,18 @@ There are many options for training and evaluating YOLOv8 models, so what makes
## Keep Learning about Google Colab
If you’d like to dive deeper into Google Colab, here are a few resources to guide you.
If you'd like to dive deeper into Google Colab, here are a few resources to guide you.
- **[Training Custom Datasets with Ultralytics YOLOv8 in Google Colab](https://www.ultralytics.com/blog/training-custom-datasets-with-ultralytics-yolov8-in-google-colab)**: Learn how to train custom datasets with Ultralytics YOLOv8 on Google Colab. This comprehensive blog post will take you through the entire process, from initial setup to the training and evaluation stages.
- **[Curated Notebooks](https://colab.google/notebooks/)**: Here you can explore a series of organized and educational notebooks, each grouped by specific topic areas.
- **[Google Colab’s Medium Page](https://medium.com/google-colab)**: You can find tutorials, updates, and community contributions here that can help you better understand and utilize this tool.
- **[Google Colab's Medium Page](https://medium.com/google-colab)**: You can find tutorials, updates, and community contributions here that can help you better understand and utilize this tool.
## Summary
We’ve discussed how you can easily experiment with Ultralytics YOLOv8 models on Google Colab. You can use Google Colab to train and evaluate your models on GPUs and TPUs with a few clicks.
We've discussed how you can easily experiment with Ultralytics YOLOv8 models on Google Colab. You can use Google Colab to train and evaluate your models on GPUs and TPUs with a few clicks.
For more details, visit [Google Colab’s FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
For more details, visit [Google Colab's FAQ page](https://research.google.com/colaboratory/intl/en-GB/faq.html).
Interested in more YOLOv8 integrations? Visit the [Ultralytics integration guide page](index.md) to explore additional tools and capabilities that can improve your machine-learning projects.
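In practice, a Colab training session amounts to a few notebook cells. The sketch below assumes a GPU runtime is selected; mounting Google Drive is optional and only needed if your dataset or weights live there.

```python
# Run in Colab notebook cells:
# !pip install ultralytics

from google.colab import drive

drive.mount("/content/drive")  # optional: persist datasets and weights in Drive

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640, device=0)  # device=0 uses the Colab GPU
```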

View file

@ -34,7 +34,7 @@ NCNN models offer a wide range of key features that enable on-device machine lea
## Deployment Options with NCNN
Before we look at the code for exporting YOLOv8 models to the NCNN format, let’s understand how NCNN models are normally used.
Before we look at the code for exporting YOLOv8 models to the NCNN format, let's understand how NCNN models are normally used.
NCNN models, designed for efficiency and performance, are compatible with a variety of deployment platforms:

View file

@ -1,26 +1,26 @@
---
comments: true
description: Learn how to deploy your YOLOv8 models rapidly using Neural Magic’s DeepSparse. This guide focuses on integrating Ultralytics YOLOv8 with the DeepSparse Engine for high-speed, CPU-based inference, leveraging advanced neural network sparsity techniques.
description: Learn how to deploy your YOLOv8 models rapidly using Neural Magic's DeepSparse. This guide focuses on integrating Ultralytics YOLOv8 with the DeepSparse Engine for high-speed, CPU-based inference, leveraging advanced neural network sparsity techniques.
keywords: YOLOv8, DeepSparse Engine, Ultralytics, CPU Inference, Neural Network Sparsity, Object Detection, Model Optimization
---
# Optimizing YOLOv8 Inferences with Neural Magic’s DeepSparse Engine
# Optimizing YOLOv8 Inferences with Neural Magic's DeepSparse Engine
When deploying object detection models like [Ultralytics YOLOv8](https://ultralytics.com) on various hardware, you can bump into unique issues like optimization. This is where YOLOv8’s integration with Neural Magic’s DeepSparse Engine steps in. It transforms the way YOLOv8 models are executed and enables GPU-level performance directly on CPUs.
When deploying object detection models like [Ultralytics YOLOv8](https://ultralytics.com) on various hardware, you can bump into unique issues like optimization. This is where YOLOv8's integration with Neural Magic's DeepSparse Engine steps in. It transforms the way YOLOv8 models are executed and enables GPU-level performance directly on CPUs.
This guide shows you how to deploy YOLOv8 using Neural Magic's DeepSparse, how to run inferences, and also how to benchmark performance to ensure it is optimized.
## Neural Magic’s DeepSparse
## Neural Magic's DeepSparse
<p align="center">
<img width="640" src="https://docs.neuralmagic.com/assets/images/nm-flows-55d56c0695a30bf9ecb716ea98977a95.png" alt="Neural Magics DeepSparse Overview">
<img width="640" src="https://docs.neuralmagic.com/assets/images/nm-flows-55d56c0695a30bf9ecb716ea98977a95.png" alt="Neural Magic's DeepSparse Overview">
</p>
[Neural Magic’s DeepSparse](https://neuralmagic.com/deepsparse/) is an inference run-time designed to optimize the execution of neural networks on CPUs. It applies advanced techniques like sparsity, pruning, and quantization to dramatically reduce computational demands while maintaining accuracy. DeepSparse offers an agile solution for efficient and scalable neural network execution across various devices.
[Neural Magic's DeepSparse](https://neuralmagic.com/deepsparse/) is an inference run-time designed to optimize the execution of neural networks on CPUs. It applies advanced techniques like sparsity, pruning, and quantization to dramatically reduce computational demands while maintaining accuracy. DeepSparse offers an agile solution for efficient and scalable neural network execution across various devices.
## Benefits of Integrating Neural Magic’s DeepSparse with YOLOv8
## Benefits of Integrating Neural Magic's DeepSparse with YOLOv8
Before diving into how to deploy YOLOV8 using DeepSparse, let’s understand the benefits of using DeepSparse. Some key advantages include:
Before diving into how to deploy YOLOV8 using DeepSparse, let's understand the benefits of using DeepSparse. Some key advantages include:
- **Enhanced Inference Speed**: Achieves up to 525 FPS (on YOLOv8n), significantly speeding up YOLOv8's inference capabilities compared to traditional methods.
@ -44,7 +44,7 @@ Before diving into how to deploy YOLOV8 using DeepSparse, let’s understand the
## How Does Neural Magic's DeepSparse Technology Work?
Neural Magic’s Deep Sparse technology is inspired by the human brain’s efficiency in neural network computation. It adopts two key principles from the brain as follows:
Neural Magic's Deep Sparse technology is inspired by the human brain's efficiency in neural network computation. It adopts two key principles from the brain as follows:
- **Sparsity**: The process of sparsification involves pruning redundant information from deep learning networks, leading to smaller and faster models without compromising accuracy. This technique reduces the network's size and computational needs significantly.
@ -155,8 +155,8 @@ After running the eval command, you will receive detailed output metrics such as
## Summary
This guide explored integrating Ultralytics YOLOv8 with Neural Magic's DeepSparse Engine. It highlighted how this integration enhances YOLOv8's performance on CPU platforms, offering GPU-level efficiency and advanced neural network sparsity techniques.
This guide explored integrating Ultralytics' YOLOv8 with Neural Magic's DeepSparse Engine. It highlighted how this integration enhances YOLOv8's performance on CPU platforms, offering GPU-level efficiency and advanced neural network sparsity techniques.
For more detailed information and advanced usage, visit [Neural Magic’s DeepSparse documentation](https://docs.neuralmagic.com/products/deepsparse/). Also, check out Neural Magic’s documentation on the integration with YOLOv8 [here](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/yolov8#yolov8-inference-pipelines) and watch a great session on it [here](https://www.youtube.com/watch?v=qtJ7bdt52x8).
For more detailed information and advanced usage, visit [Neural Magic's DeepSparse documentation](https://docs.neuralmagic.com/products/deepsparse/). Also, check out Neural Magic's documentation on the integration with YOLOv8 [here](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/yolov8#yolov8-inference-pipelines) and watch a great session on it [here](https://www.youtube.com/watch?v=qtJ7bdt52x8).
Additionally, for a broader understanding of various YOLOv8 integrations, visit the [Ultralytics integration guide page](../integrations/index.md), where you can discover a range of other exciting integration possibilities.
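To make the workflow concrete, running a YOLOv8 model through DeepSparse looks roughly like the sketch below. The SparseZoo stub is an example identifier; a local ONNX export of your own model can also be passed as `model_path`, and the exact output schema may differ between DeepSparse versions.

```python
from deepsparse import Pipeline

# Create a CPU inference pipeline for YOLOv8 (example SparseZoo stub shown).
yolo_pipeline = Pipeline.create(
    task="yolov8",
    model_path="zoo:cv/detection/yolov8-n/pytorch/ultralytics/coco/base-none",
)

# Run inference on a local image and inspect the detections.
results = yolo_pipeline(images=["basilica.jpg"])
print(results)
```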

View file

@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv8, ONNX Format, Export YOLOv8, CUDA Support, Model D
# ONNX Export for YOLOv8 Models
Often, when deploying computer vision models, you’ll need a model format that's both flexible and compatible with multiple platforms.
Often, when deploying computer vision models, you'll need a model format that's both flexible and compatible with multiple platforms.
Exporting [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models to ONNX format streamlines deployment and ensures optimal performance across various environments. This guide will show you how to easily convert your YOLOv8 models to ONNX and enhance their scalability and effectiveness in real-world applications.
@ -44,7 +44,7 @@ The ability of ONNX to handle various formats can be attributed to the following
## Common Usage of ONNX
Before we jump into how to export YOLOv8 models to the ONNX format, let’s take a look at where ONNX models are usually used.
Before we jump into how to export YOLOv8 models to the ONNX format, let's take a look at where ONNX models are usually used.
### CPU Deployment
@ -131,4 +131,4 @@ In this guide, you've learned how to export Ultralytics YOLOv8 models to ONNX fo
For further details on usage, visit the [ONNX official documentation](https://onnx.ai/onnx/intro/).
Also, if you’d like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
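As a small end-to-end illustration of the CPU deployment path mentioned above, the sketch below exports a model and opens it with ONNX Runtime; preprocessing and postprocessing of the raw tensors are omitted here.

```python
import onnxruntime as ort
from ultralytics import YOLO

# Export the PyTorch weights to ONNX (creates 'yolov8n.onnx').
YOLO("yolov8n.pt").export(format="onnx")

# Open the exported graph with ONNX Runtime on CPU and inspect its input/output names.
session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
print([i.name for i in session.get_inputs()], [o.name for o in session.get_outputs()])
```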

View file

@ -16,11 +16,11 @@ The ability to export to PaddlePaddle model format allows you to optimize your [
<img width="75%" src="https://github.com/PaddlePaddle/Paddle/blob/develop/doc/imgs/logo.png?raw=true" alt="PaddlePaddle Logo">
</p>
Developed by Baidu, [PaddlePaddle](https://www.paddlepaddle.org.cn/en) (**PA**rallel **D**istributed **D**eep **LE**arning) is China's first open-source deep learning platform. Unlike some frameworks built mainly for research, PaddlePaddle prioritizes ease of use and smooth integration across industries.
Developed by Baidu, [PaddlePaddle](https://www.paddlepaddle.org.cn/en) (**PArallel **D**istributed **D**eep **LE**arning) is China's first open-source deep learning platform. Unlike some frameworks built mainly for research, PaddlePaddle prioritizes ease of use and smooth integration across industries.
It offers tools and resources similar to popular frameworks like TensorFlow and PyTorch, making it accessible for developers of all experience levels. From farming and factories to service businesses, PaddlePaddle's large developer community of over 4.77 million is helping create and deploy AI applications.
By exporting your Ultralytics YOLOv8 models to PaddlePaddle format, you can tap into PaddlePaddle’s strengths in performance optimization. PaddlePaddle prioritizes efficient model execution and reduced memory usage. As a result, your YOLOv8 models can potentially achieve even better performance, delivering top-notch results in practical scenarios.
By exporting your Ultralytics YOLOv8 models to PaddlePaddle format, you can tap into PaddlePaddle's strengths in performance optimization. PaddlePaddle prioritizes efficient model execution and reduced memory usage. As a result, your YOLOv8 models can potentially achieve even better performance, delivering top-notch results in practical scenarios.
## Key Features of PaddlePaddle Models

View file

@ -46,12 +46,12 @@ Explore more capabilities of YOLOv8 and Paperspace Gradient in a discussion with
allowfullscreen>
</iframe>
<br>
<strong>Watch:</strong> Ultralytics Live Session 7: It’s All About the Environment: Optimizing YOLOv8 Training With Gradient
<strong>Watch:</strong> Ultralytics Live Session 7: It's All About the Environment: Optimizing YOLOv8 Training With Gradient
</p>
## Key Features of Paperspace Gradient
As you explore the Paperspace console, you’ll see how each step of the machine-learning workflow is supported and enhanced. Here are some things to look out for:
As you explore the Paperspace console, you'll see how each step of the machine-learning workflow is supported and enhanced. Here are some things to look out for:
- **One-Click Notebooks:** Gradient provides pre-configured Jupyter Notebooks specifically tailored for YOLOv8, eliminating the need for environment setup and dependency management. Simply choose the desired notebook and start experimenting immediately.
@ -81,6 +81,6 @@ While many options are available for training, deploying, and evaluating YOLOv8
This guide explored the Paperspace Gradient integration for training YOLOv8 models. Gradient provides the tools and infrastructure to accelerate your AI development journey from effortless model training and evaluation to streamlined deployment options.
For further exploration, visit [PaperSpace’s official documentation](https://docs.digitalocean.com/products/paperspace/).
For further exploration, visit [PaperSpace's official documentation](https://docs.digitalocean.com/products/paperspace/).
Also, visit the [Ultralytics integration guide page](index.md) to learn more about different YOLOv8 integrations. It's full of insights and tips to take your computer vision projects to the next level.

View file

@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv8, Ray Tune, hyperparameter tuning, machine learning
# Efficient Hyperparameter Tuning with Ray Tune and YOLOv8
Hyperparameter tuning is vital in achieving peak model performance by discovering the optimal set of hyperparameters. This involves running trials with different hyperparameters and evaluating each trial’s performance.
Hyperparameter tuning is vital in achieving peak model performance by discovering the optimal set of hyperparameters. This involves running trials with different hyperparameters and evaluating each trial's performance.
## Accelerate Tuning with Ultralytics YOLOv8 and Ray Tune
@ -182,4 +182,4 @@ plt.show()
In this documentation, we covered common workflows to analyze the results of experiments run with Ray Tune using Ultralytics. The key steps include loading the experiment results from a directory, performing basic experiment-level and trial-level analysis and plotting metrics.
Explore further by looking into Ray Tune’s [Analyze Results](https://docs.ray.io/en/latest/tune/examples/tune_analyze_results.html) docs page to get the most out of your hyperparameter tuning experiments.
Explore further by looking into Ray Tune's [Analyze Results](https://docs.ray.io/en/latest/tune/examples/tune_analyze_results.html) docs page to get the most out of your hyperparameter tuning experiments.
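For orientation, a Ray Tune search and the trial-level result inspection described above can be sketched as follows; `use_ray=True` requires `ray[tune]` to be installed, and the argument values shown are illustrative rather than tuned choices.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Run a Ray Tune hyperparameter search on a small sample dataset.
result_grid = model.tune(data="coco8.yaml", epochs=10, use_ray=True)

# Basic trial-level analysis: print the reported metrics for each trial.
for result in result_grid:
    print(result.metrics)
```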

View file

@ -6,7 +6,7 @@ keywords: Ultralytics, YOLOv8, Roboflow, vector analysis, confusion matrix, data
# Roboflow
[Roboflow](https://roboflow.com/?ref=ultralytics) has everything you need to build and deploy computer vision models. Connect Roboflow at any step in your pipeline with APIs and SDKs, or use the end-to-end interface to automate the entire process from image to inference. Whether you’re in need of [data labeling](https://roboflow.com/annotate?ref=ultralytics), [model training](https://roboflow.com/train?ref=ultralytics), or [model deployment](https://roboflow.com/deploy?ref=ultralytics), Roboflow gives you building blocks to bring custom computer vision solutions to your project.
[Roboflow](https://roboflow.com/?ref=ultralytics) has everything you need to build and deploy computer vision models. Connect Roboflow at any step in your pipeline with APIs and SDKs, or use the end-to-end interface to automate the entire process from image to inference. Whether you're in need of [data labeling](https://roboflow.com/annotate?ref=ultralytics), [model training](https://roboflow.com/train?ref=ultralytics), or [model deployment](https://roboflow.com/deploy?ref=ultralytics), Roboflow gives you building blocks to bring custom computer vision solutions to your project.
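As a rough sketch of how a Roboflow dataset feeds into YOLOv8 training (the API key, workspace, project, and version below are placeholders for your own values):

```python
from roboflow import Roboflow
from ultralytics import YOLO

# Download a dataset in YOLOv8 format (placeholder identifiers).
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov8")

# Train on the downloaded data.yaml.
model = YOLO("yolov8n.pt")
model.train(data=f"{dataset.location}/data.yaml", epochs=3, imgsz=640)
```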
!!! Question "Licensing"

View file

@ -4,9 +4,9 @@ description: Walk through the integration of YOLOv8 with TensorBoard to be able
keywords: TensorBoard, YOLOv8, Visualization, TensorFlow, Training Analysis, Metric Tracking, Model Graphs, Experimentation, Ultralytics
---
# Gain Visual Insights with YOLOv8’s Integration with TensorBoard
# Gain Visual Insights with YOLOv8's Integration with TensorBoard
Understanding and fine-tuning computer vision models like [Ultralytics YOLOv8](https://ultralytics.com) becomes more straightforward when you take a closer look at their training processes. Model training visualization helps with getting insights into the model's learning patterns, performance metrics, and overall behavior. YOLOv8's integration with TensorBoard makes this process of visualization and analysis easier and enables more efficient and informed adjustments to the model.
Understanding and fine-tuning computer vision models like [Ultralytics' YOLOv8](https://ultralytics.com) becomes more straightforward when you take a closer look at their training processes. Model training visualization helps with getting insights into the model's learning patterns, performance metrics, and overall behavior. YOLOv8's integration with TensorBoard makes this process of visualization and analysis easier and enables more efficient and informed adjustments to the model.
This guide covers how to use TensorBoard with YOLOv8. You'll learn about various visualizations, from tracking metrics to analyzing model graphs. These tools will help you understand your YOLOv8 model's performance better.
@ -82,7 +82,7 @@ For more information related to the model training process, be sure to check our
## Understanding Your TensorBoard for YOLOv8 Training
Now, let’s focus on understanding the various features and components of TensorBoard in the context of YOLOv8 training. The three key sections of the TensorBoard are Time Series, Scalars, and Graphs.
Now, let's focus on understanding the various features and components of TensorBoard in the context of YOLOv8 training. The three key sections of the TensorBoard are Time Series, Scalars, and Graphs.
### Time Series
@ -102,7 +102,7 @@ The Time Series feature in the TensorBoard offers a dynamic and detailed perspec
#### Importance of Time Series in YOLOv8 Training
The Time Series section is essential for a thorough analysis of the YOLOv8 model's training progress. It lets you track the metrics in real time to promptly identify and solve issues. It also offers a detailed view of each metric's progression, which is crucial for fine-tuning the model and enhancing its performance.
The Time Series section is essential for a thorough analysis of the YOLOv8 model's training progress. It lets you track the metrics in real time to promptly identify and solve issues. It also offers a detailed view of each metric’s progression, which is crucial for fine-tuning the model and enhancing its performance.
### Scalars
@ -148,6 +148,6 @@ Graphs are particularly useful for debugging the model, especially in complex ar
This guide aims to help you use TensorBoard with YOLOv8 for visualization and analysis of machine learning model training. It focuses on explaining how key TensorBoard features can provide insights into training metrics and model performance during YOLOv8 training sessions.
For a more detailed exploration of these features and effective utilization strategies, you can refer to TensorFlow’s official [TensorBoard documentation](https://www.tensorflow.org/tensorboard/get_started) and their [GitHub repository](https://github.com/tensorflow/tensorboard).
For a more detailed exploration of these features and effective utilization strategies, you can refer to TensorFlow's official [TensorBoard documentation](https://www.tensorflow.org/tensorboard/get_started) and their [GitHub repository](https://github.com/tensorflow/tensorboard).
Want to learn more about the various integrations of Ultralytics? Check out the [Ultralytics integrations guide page](../integrations/index.md) to see what other exciting capabilities are waiting to be discovered!
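For quick reference, enabling the logging discussed in this guide and launching the dashboard can be sketched as follows; the `runs/detect/train` path assumes the default output directory of a first detection training run.

```python
from ultralytics import YOLO, settings

# Make sure TensorBoard logging is enabled in the Ultralytics settings.
settings.update({"tensorboard": True})

model = YOLO("yolov8n.pt")
model.train(data="coco8.yaml", epochs=3, imgsz=640)

# Then, from a terminal:
#   tensorboard --logdir runs/detect/train
```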

View file

@ -16,7 +16,7 @@ By using the TensorRT export format, you can enhance your [Ultralytics YOLOv8](h
<img width="100%" src="https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-601/tensorrt-developer-guide/graphics/whatistrt2.png" alt="TensorRT Overview">
</p>
[TensorRT](https://developer.nvidia.com/tensorrt), developed by NVIDIA, is an advanced software development kit (SDK) designed for high-speed deep learning inference. It’s well-suited for real-time applications like object detection.
[TensorRT](https://developer.nvidia.com/tensorrt), developed by NVIDIA, is an advanced software development kit (SDK) designed for high-speed deep learning inference. It's well-suited for real-time applications like object detection.
This toolkit optimizes deep learning models for NVIDIA GPUs and results in faster and more efficient operations. TensorRT models undergo TensorRT optimization, which includes techniques like layer fusion, precision calibration (INT8 and FP16), dynamic tensor memory management, and kernel auto-tuning. Converting deep learning models into the TensorRT format allows developers to realize the potential of NVIDIA GPUs fully.
@ -40,7 +40,7 @@ TensorRT models offer a range of key features that contribute to their efficienc
## Deployment Options in TensorRT
Before we look at the code for exporting YOLOv8 models to the TensorRT format, let’s understand where TensorRT models are normally used.
Before we look at the code for exporting YOLOv8 models to the TensorRT format, let's understand where TensorRT models are normally used.
TensorRT offers several deployment options, and each option balances ease of integration, performance optimization, and flexibility differently:
@ -205,7 +205,7 @@ Experimentation by NVIDIA led them to recommend using at least 500 calibration i
- **Increased development times:** Finding the "optimal" settings for INT8 calibration for dataset and device can take a significant amount of testing.
- **Hardware dependency:** Calibration and performance gains could be highly hardware dependent and model weights are less transferrable.
- **Hardware dependency:** Calibration and performance gains could be highly hardware dependent and model weights are less transferable.
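To tie the INT8 discussion above to the export API, a calibrated export can be sketched as below; the `int8` and `data` arguments follow the Ultralytics export options, and `coco8.yaml` merely stands in for a dataset large enough for proper calibration.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# INT8 export calibrates on images drawn from `data`; use a representative set (~500+ images).
model.export(format="engine", int8=True, data="coco8.yaml")  # creates 'yolov8n.engine'

# Load the TensorRT engine back for inference on an NVIDIA GPU.
trt_model = YOLO("yolov8n.engine")
results = trt_model("https://ultralytics.com/images/bus.jpg")
```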
## Ultralytics YOLO TensorRT Export Performance

View file

@ -107,7 +107,7 @@ For more details about supported export options, visit the [Ultralytics document
## Deploying Exported YOLOv8 TF GraphDef Models
Once you’ve exported your YOLOv8 model to the TF GraphDef format, the next step is deployment. The primary and recommended first step for running a TF GraphDef model is to use the YOLO("model.pb") method, as previously shown in the usage code snippet.
Once you've exported your YOLOv8 model to the TF GraphDef format, the next step is deployment. The primary and recommended first step for running a TF GraphDef model is to use the YOLO("model.pb") method, as previously shown in the usage code snippet.
However, for more information on deploying your TF GraphDef models, take a look at the following resources:

View file

@ -42,7 +42,7 @@ TF SavedModel provides a range of options to deploy your machine learning models
- **Mobile and Embedded Devices:** TensorFlow Lite, a lightweight solution for running machine learning models on mobile, embedded, and IoT devices, supports converting TF SavedModels to the TensorFlow Lite format. This allows you to deploy your models on a wide range of devices, from smartphones and tablets to microcontrollers and edge devices.
- **TensorFlow Runtime:** TensorFlow Runtime (tfrt) is a high-performance runtime for executing TensorFlow graphs. It provides lower-level APIs for loading and running TF SavedModels in C++ environments. TensorFlow Runtime offers better performance compared to the standard TensorFlow runtime. It is suitable for deployment scenarios that require low-latency inference and tight integration with existing C++ codebases.
- **TensorFlow Runtime:** TensorFlow Runtime (`tfrt`) is a high-performance runtime for executing TensorFlow graphs. It provides lower-level APIs for loading and running TF SavedModels in C++ environments. TensorFlow Runtime offers better performance compared to the standard TensorFlow runtime. It is suitable for deployment scenarios that require low-latency inference and tight integration with existing C++ codebases.
## Exporting YOLOv8 Models to TF SavedModel
@ -105,7 +105,7 @@ Now that you have exported your YOLOv8 model to the TF SavedModel format, the ne
However, for in-depth instructions on deploying your TF SavedModel models, take a look at the following resources:
- **[TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving)**: Here’s the developer documentation for how to deploy your TF SavedModel models using TensorFlow Serving.
- **[TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving)**: Here's the developer documentation for how to deploy your TF SavedModel models using TensorFlow Serving.
- **[Run a TensorFlow SavedModel in Node.js](https://blog.tensorflow.org/2020/01/run-tensorflow-savedmodel-in-nodejs-directly-without-conversion.html)**: A TensorFlow blog post on running a TensorFlow SavedModel in Node.js directly without conversion.
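As a minimal sketch of the export-and-load round trip (the exported directory name follows the default Ultralytics naming):

```python
import tensorflow as tf
from ultralytics import YOLO

# Export the model (creates the 'yolov8n_saved_model/' directory).
YOLO("yolov8n.pt").export(format="saved_model")

# The directory can be loaded directly with TensorFlow, e.g. before wiring it into TensorFlow Serving.
tf_model = tf.saved_model.load("yolov8n_saved_model")
print(list(tf_model.signatures.keys()))
```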

View file

@ -6,9 +6,9 @@ keywords: Ultralytics YOLOv8, TensorFlow.js, TF.js, Model Deployment, Node.js, M
# Export to TF.js Model Format From a YOLOv8 Model Format
Deploying machine learning models directly in the browser or on Node.js can be tricky. You’ll need to make sure your model format is optimized for faster performance so that the model can be used to run interactive applications locally on the user’s device. The TensorFlow.js, or TF.js, model format is designed to use minimal power while delivering fast performance.
Deploying machine learning models directly in the browser or on Node.js can be tricky. You'll need to make sure your model format is optimized for faster performance so that the model can be used to run interactive applications locally on the user's device. The TensorFlow.js, or TF.js, model format is designed to use minimal power while delivering fast performance.
The export to TF.js model format feature allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for high-speed and locally-run object detection inference. In this guide, we'll walk you through converting your models to the TF.js format, making it easier for your models to perform well on various local browsers and Node.js applications.
The 'export to TF.js model format' feature allows you to optimize your [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) models for high-speed and locally-run object detection inference. In this guide, we'll walk you through converting your models to the TF.js format, making it easier for your models to perform well on various local browsers and Node.js applications.
## Why Should You Export to TF.js?
@ -103,7 +103,7 @@ Now that you have exported your YOLOv8 model to the TF.js format, the next step
However, for in-depth instructions on deploying your TF.js models, take a look at the following resources:
- **[Chrome Extension](https://www.tensorflow.org/js/tutorials/deployment/web_ml_in_chrome)**: Here’s the developer documentation for how to deploy your TF.js models to a Chrome extension.
- **[Chrome Extension](https://www.tensorflow.org/js/tutorials/deployment/web_ml_in_chrome)**: Here's the developer documentation for how to deploy your TF.js models to a Chrome extension.
- **[Run TensorFlow.js in Node.js](https://www.tensorflow.org/js/guide/nodejs)**: A TensorFlow blog post on running TensorFlow.js in Node.js directly.

View file

@ -34,7 +34,7 @@ TFLite models offer a wide range of key features that enable on-device machine l
## Deployment Options in TFLite
Before we look at the code for exporting YOLOv8 models to the TFLite format, let’s understand how TFLite models are normally used.
Before we look at the code for exporting YOLOv8 models to the TFLite format, let's understand how TFLite models are normally used.
TFLite offers various on-device deployment options for machine learning models, including:
@ -117,6 +117,6 @@ After successfully exporting your Ultralytics YOLOv8 models to TFLite format, yo
In this guide, we focused on how to export to TFLite format. By converting your Ultralytics YOLOv8 models to TFLite model format, you can improve the efficiency and speed of YOLOv8 models, making them more effective and suitable for edge computing environments.
For further details on usage, visit [TFLite’s official documentation](https://www.tensorflow.org/lite/guide).
For further details on usage, visit the [TFLite official documentation](https://www.tensorflow.org/lite/guide).
Also, if you're curious about other Ultralytics YOLOv8 integrations, make sure to check out our [integration guide page](../integrations/index.md). You'll find tons of helpful info and insights waiting for you there.

View file

@ -30,11 +30,11 @@ TorchScript, a key part of the PyTorch ecosystem, provides powerful features for
Here are the key features that make TorchScript a valuable tool for developers:
- **Static Graph Execution**: TorchScript uses a static graph representation of the model’s computation, which is different from PyTorch’s dynamic graph execution. In static graph execution, the computational graph is defined and compiled once before the actual execution, resulting in improved performance during inference.
- **Static Graph Execution**: TorchScript uses a static graph representation of the model's computation, which is different from PyTorch's dynamic graph execution. In static graph execution, the computational graph is defined and compiled once before the actual execution, resulting in improved performance during inference.
- **Model Serialization**: TorchScript allows you to serialize PyTorch models into a platform-independent format. Serialized models can be loaded without requiring the original Python code, enabling deployment in different runtime environments.
- **JIT Compilation**: TorchScript uses Just-In-Time (JIT) compilation to convert PyTorch models into an optimized intermediate representation. JIT compiles the model’s computational graph, enabling efficient execution on target devices.
- **JIT Compilation**: TorchScript uses Just-In-Time (JIT) compilation to convert PyTorch models into an optimized intermediate representation. JIT compiles the model's computational graph, enabling efficient execution on target devices.
- **Cross-Language Integration**: With TorchScript, you can export PyTorch models to other languages such as C++, Java, and JavaScript. This makes it easier to integrate PyTorch models into existing software systems written in different languages.
@ -42,7 +42,7 @@ Here are the key features that make TorchScript a valuable tool for developers:
## Deployment Options in TorchScript
Before we look at the code for exporting YOLOv8 models to the TorchScript format, let’s understand where TorchScript models are normally used.
Before we look at the code for exporting YOLOv8 models to the TorchScript format, let's understand where TorchScript models are normally used.
TorchScript offers various deployment options for machine learning models, such as:
@ -121,6 +121,6 @@ After successfully exporting your Ultralytics YOLOv8 models to TorchScript forma
In this guide, we explored the process of exporting Ultralytics YOLOv8 models to the TorchScript format. By following the provided instructions, you can optimize YOLOv8 models for performance and gain the flexibility to deploy them across various platforms and environments.
For further details on usage, visit [TorchScript’s official documentation](https://pytorch.org/docs/stable/jit.html).
For further details on usage, visit [TorchScript's official documentation](https://pytorch.org/docs/stable/jit.html).
Also, if you’d like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
Also, if you'd like to know more about other Ultralytics YOLOv8 integrations, visit our [integration guide page](../integrations/index.md). You'll find plenty of useful resources and insights there.
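A compact sketch of the serialization property described above: the exported file can be loaded with `torch.jit.load` and run without the original Python model definition (the 640×640 input matches the default export image size).

```python
import torch
from ultralytics import YOLO

# Export to TorchScript (creates 'yolov8n.torchscript').
YOLO("yolov8n.pt").export(format="torchscript")

# Load the serialized module and run a dummy forward pass.
ts_model = torch.jit.load("yolov8n.torchscript")
ts_model.eval()
with torch.no_grad():
    out = ts_model(torch.zeros(1, 3, 640, 640))
```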

View file

@ -8,7 +8,7 @@ keywords: Ultralytics, YOLOv8, Object Detection, Weights & Biases, Model Trainin
Object detection models like [Ultralytics YOLOv8](https://github.com/ultralytics/ultralytics) have become integral to many computer vision applications. However, training, evaluating, and deploying these complex models introduces several challenges. Tracking key training metrics, comparing model variants, analyzing model behavior, and detecting issues require substantial instrumentation and experiment management.
This guide showcases Ultralytics YOLOv8 integration with Weights & Biases for enhanced experiment tracking, model-checkpointing, and visualization of model performance. It also includes instructions for setting up the integration, training, fine-tuning, and visualizing results using Weights & Biases interactive features.
This guide showcases Ultralytics YOLOv8 integration with Weights & Biases' for enhanced experiment tracking, model-checkpointing, and visualization of model performance. It also includes instructions for setting up the integration, training, fine-tuning, and visualizing results using Weights & Biases' interactive features.
## Weights & Biases
@ -93,7 +93,7 @@ Before diving into the usage instructions for YOLOv8 model training with Weights
### Understanding the Code
Let’s understand the steps showcased in the usage code snippet above.
Let's understand the steps showcased in the usage code snippet above.
- **Step 1: Initialize a Weights & Biases Run**: Start by initializing a Weights & Biases run, specifying the project name and the job type. This run will track and manage the training and validation processes of your model.
@ -114,7 +114,7 @@ Let’s understand the steps showcased in the usage code snippet above.
Upon running the usage code snippet above, you can expect the following key outputs:
- The setup of a new run with its unique ID, indicating the start of the training process.
- A concise summary of the model’s structure, including the number of layers and parameters.
- A concise summary of the model's structure, including the number of layers and parameters.
- Regular updates on important metrics such as box loss, cls loss, dfl loss, precision, recall, and mAP scores during each training epoch.
- At the end of training, detailed metrics including the model's inference speed, and overall accuracy metrics are displayed.
- Links to the Weights & Biases dashboard for in-depth analysis and visualization of the training process, along with information on local log file locations.
@ -141,15 +141,15 @@ After running the usage code snippet, you can access the Weights & Biases (W&B)
- **Model Artifacts Management**: Access and share model checkpoints, facilitating easy deployment and collaboration.
- **Viewing Inference Results with Image Overlay**: Visualize the prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases image overlay capabilities, check out this [link](https://docs.wandb.ai/guides/track/log/media#image-overlays).
- **Viewing Inference Results with Image Overlay**: Visualize the prediction results on images using interactive overlays in Weights & Biases, providing a clear and detailed view of model performance on real-world data. For more detailed information on Weights & Biases' image overlay capabilities, check out this [link](https://docs.wandb.ai/guides/track/log/media#image-overlays).
<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/UTSiufs" data-context="false" ><a href="//imgur.com/a/UTSiufs">Take a look at how Weights & Biases image overlays helps visualize model inferences.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>
<div style="text-align:center;"><blockquote class="imgur-embed-pub" lang="en" data-id="a/UTSiufs" data-context="false" ><a href="//imgur.com/a/UTSiufs">Take a look at how Weights & Biases' image overlays helps visualize model inferences.</a></blockquote></div><script async src="//s.imgur.com/min/embed.js" charset="utf-8"></script>
By using these features, you can effectively track, analyze, and optimize your YOLOv8 model's training, ensuring the best possible performance and efficiency.
## Summary
This guide helped you explore Ultralytics YOLOv8 integration with Weights & Biases. It illustrates the ability of this integration to efficiently track and visualize model training and prediction results.
This guide helped you explore Ultralytics' YOLOv8 integration with Weights & Biases. It illustrates the ability of this integration to efficiently track and visualize model training and prediction results.
For further details on usage, visit [Weights & Biases' official documentation](https://docs.wandb.ai/guides/integrations/ultralytics).
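For reference, the instrumented run described in this guide can be sketched as follows; the project name and job type are placeholders, and the callback import reflects the W&B Ultralytics integration.

```python
import wandb
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# Step 1: start a W&B run (placeholder project/job names).
wandb.init(project="ultralytics", job_type="training")

# Step 2: attach the W&B callback so training and validation are logged.
model = YOLO("yolov8n.pt")
add_wandb_callback(model, enable_model_checkpointing=True)

# Step 3: train, then close the run.
model.train(data="coco8.yaml", epochs=3, imgsz=640)
wandb.finish()
```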