ultralytics 8.0.196 instance-mean Segment loss (#5285)
Co-authored-by: Andy <39454881+yermandy@users.noreply.github.com>
parent 7517667a33
commit e7f0658744
72 changed files with 369 additions and 493 deletions
@@ -30,8 +30,7 @@ DeepSparse is an inference runtime with exceptional performance on CPUs. For ins
 <img width="60%" src="https://github.com/neuralmagic/deepsparse/raw/main/examples/ultralytics-yolo/ultralytics-readmes/performance-chart-5.8x.png">
 </p>
 
-For the first time, your deep learning workloads can meet the performance demands of production without the complexity and costs of hardware accelerators.
-Put simply, DeepSparse gives you the performance of GPUs and the simplicity of software:
+For the first time, your deep learning workloads can meet the performance demands of production without the complexity and costs of hardware accelerators. Put simply, DeepSparse gives you the performance of GPUs and the simplicity of software:
 
 - **Flexible Deployments**: Run consistently across cloud, data center, and edge with any hardware provider from Intel to AMD to ARM
 - **Infinite Scalability**: Scale vertically to 100s of cores, out with standard Kubernetes, or fully-abstracted with Serverless
@@ -41,10 +40,7 @@ Put simply, DeepSparse gives you the performance of GPUs and the simplicity of s
 
 DeepSparse takes advantage of model sparsity to gain its performance speedup.
 
-Sparsification through pruning and quantization is a broadly studied technique, allowing order-of-magnitude reductions in the size and compute needed to
-execute a network, while maintaining high accuracy. DeepSparse is sparsity-aware, meaning it skips the zeroed-out parameters, shrinking the amount of compute
-in a forward pass. Since the sparse computation is now memory bound, DeepSparse executes the network depth-wise, breaking the problem into Tensor Columns,
-vertical stripes of computation that fit in cache.
+Sparsification through pruning and quantization is a broadly studied technique, allowing order-of-magnitude reductions in the size and compute needed to execute a network, while maintaining high accuracy. DeepSparse is sparsity-aware, meaning it skips the zeroed-out parameters, shrinking the amount of compute in a forward pass. Since the sparse computation is now memory bound, DeepSparse executes the network depth-wise, breaking the problem into Tensor Columns, vertical stripes of computation that fit in cache.
 
 <p align="center">
 <img width="60%" src="https://github.com/neuralmagic/deepsparse/raw/main/examples/ultralytics-yolo/ultralytics-readmes/tensor-columns.png">
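As a toy numpy sketch (not DeepSparse's kernel implementation; the 90% sparsity level below is an arbitrary illustration), this is why skipping zeroed-out parameters shrinks the multiply-adds in a forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)

# "Prune" 90% of the weights to zero, as a sparsification pass might
W[rng.random(W.shape) < 0.90] = 0.0

dense_macs = W.size                # multiply-adds a dense kernel performs per input column
sparse_macs = np.count_nonzero(W)  # multiply-adds a sparsity-aware kernel actually needs

print(f"dense MACs:  {dense_macs}")
print(f"sparse MACs: {sparse_macs}  (~{dense_macs / sparse_macs:.1f}x fewer)")
```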
@@ -96,8 +92,7 @@ wget -O basilica.jpg https://raw.githubusercontent.com/neuralmagic/deepsparse/ma
 
 #### Python API
 
-`Pipelines` wrap pre-processing and output post-processing around the runtime, providing a clean interface for adding DeepSparse to an application.
-The DeepSparse-Ultralytics integration includes an out-of-the-box `Pipeline` that accepts raw images and outputs the bounding boxes.
+`Pipelines` wrap pre-processing and output post-processing around the runtime, providing a clean interface for adding DeepSparse to an application. The DeepSparse-Ultralytics integration includes an out-of-the-box `Pipeline` that accepts raw images and outputs the bounding boxes.
 
 Create a `Pipeline` and run inference:
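The code block itself falls outside this hunk; the following is a minimal sketch of such a `Pipeline` call, assuming a pruned-quantized YOLOv5s SparseZoo stub (the exact stub and thresholds are illustrative):

```python
from deepsparse import Pipeline

# SparseZoo stub for a pruned-quantized YOLOv5s (illustrative choice)
model_stub = "zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned65_quant-none"

# Pipeline.create wires pre- and post-processing around the DeepSparse runtime
yolo_pipeline = Pipeline.create(task="yolo", model_path=model_stub)

# Run inference on a local image; the output carries boxes, scores, and labels
images = ["basilica.jpg"]
pipeline_outputs = yolo_pipeline(images=images, iou_thres=0.6, conf_thres=0.001)
print(pipeline_outputs)
```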
@@ -127,9 +122,7 @@ apt-get install libgl1-mesa-glx
 
 #### HTTP Server
 
-DeepSparse Server runs on top of the popular FastAPI web framework and Uvicorn web server. With just a single CLI command, you can easily set up a model
-service endpoint with DeepSparse. The Server supports any Pipeline from DeepSparse, including object detection with YOLOv5, enabling you to send raw
-images to the endpoint and receive the bounding boxes.
+DeepSparse Server runs on top of the popular FastAPI web framework and Uvicorn web server. With just a single CLI command, you can easily set up a model service endpoint with DeepSparse. The Server supports any Pipeline from DeepSparse, including object detection with YOLOv5, enabling you to send raw images to the endpoint and receive the bounding boxes.
 
 Spin up the Server with the pruned-quantized YOLOv5s:
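The server command and client request fall outside this hunk; below is a hedged sketch of the client side, assuming the Server was launched with `deepsparse.server --task yolo --model_path <SparseZoo stub>` and exposes the default `/predict/from_files` endpoint on port 5543:

```python
import requests

# Endpoint and port are assumptions based on DeepSparse Server defaults
url = "http://0.0.0.0:5543/predict/from_files"

# Post a raw image file; the Server responds with bounding boxes as JSON
path = ["basilica.jpg"]
files = [("request", open(img, "rb")) for img in path]
resp = requests.post(url=url, files=files)
print(resp.json())
```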