From 5376b1a42e82664cfe46e1f0eca88c31a4712b96 Mon Sep 17 00:00:00 2001
From: Kayzwer <68285002+Kayzwer@users.noreply.github.com>
Date: Wed, 5 Jun 2024 02:42:03 +0800
Subject: [PATCH] Fix Docs export table from `onnxsim` to `onnxslim` (#13324)

Co-authored-by: Glenn Jocher
---
 docs/en/modes/export.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/en/modes/export.md b/docs/en/modes/export.md
index 7f629a77..2191e9d5 100644
--- a/docs/en/modes/export.md
+++ b/docs/en/modes/export.md
@@ -83,7 +83,7 @@ This table details the configurations and options available for exporting YOLO m
 | `half` | `bool` | `False` | Enables FP16 (half-precision) quantization, reducing model size and potentially speeding up inference on supported hardware. |
 | `int8` | `bool` | `False` | Activates INT8 quantization, further compressing the model and speeding up inference with minimal accuracy loss, primarily for edge devices. |
 | `dynamic` | `bool` | `False` | Allows dynamic input sizes for ONNX and TensorRT exports, enhancing flexibility in handling varying image dimensions. |
-| `simplify` | `bool` | `False` | Simplifies the model graph for ONNX exports with `onnxsim`, potentially improving performance and compatibility. |
+| `simplify` | `bool` | `False` | Simplifies the model graph for ONNX exports with `onnxslim`, potentially improving performance and compatibility. |
 | `opset` | `int` | `None` | Specifies the ONNX opset version for compatibility with different ONNX parsers and runtimes. If not set, uses the latest supported version. |
 | `workspace` | `float` | `4.0` | Sets the maximum workspace size in GiB for TensorRT optimizations, balancing memory usage and performance. |
 | `nms` | `bool` | `False` | Adds Non-Maximum Suppression (NMS) to the CoreML export, essential for accurate and efficient detection post-processing. |
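
The table rows touched by this patch document keyword arguments of the Ultralytics `model.export()` call. For context, a minimal usage sketch of the `simplify` option (assuming the standard Ultralytics Python API and example `yolov8n.pt` weights) might look like:

```python
from ultralytics import YOLO

# Load a pretrained model (example weights, adjust to your own checkpoint)
model = YOLO("yolov8n.pt")

# Export to ONNX with graph simplification enabled; as of this patch the
# docs state the simplification step is performed by onnxslim rather than onnxsim
model.export(format="onnx", simplify=True, dynamic=True, opset=12)
```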