Add a new operator to ONNX Runtime

A new op can be written and registered with ONNX Runtime in any of the following three ways:

Custom Operator API

Use the custom operator C/C++ API (onnxruntime_c_api.h); a minimal sketch follows the steps below:

  • Create an OrtCustomOpDomain with the domain name used by the custom ops
  • Create an OrtCustomOp structure for each op and add them to the OrtCustomOpDomain with OrtCustomOpDomain_Add
  • Call OrtAddCustomOpDomain to add the custom domain of ops to the session options
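
As an illustration, here is a minimal sketch of this flow using the C++ helper API (onnxruntime_cxx_api.h), which wraps the C calls above with Ort::CustomOpBase, Ort::CustomOpDomain, and Ort::SessionOptions. The op name MyCustomOp, the domain my.domain, the model path, and the doubling kernel are all invented for this example, and the helper signatures vary between ONNX Runtime releases:

```cpp
#include <vector>

#include "onnxruntime_cxx_api.h"

// Kernel: reads one float tensor and writes x * 2 to an output of the same shape.
struct MyCustomKernel {
  MyCustomKernel(const OrtApi& api, const OrtKernelInfo* /*info*/) : ort_(api) {}

  void Compute(OrtKernelContext* context) {
    const OrtValue* input = ort_.KernelContext_GetInput(context, 0);
    const float* x = ort_.GetTensorData<float>(input);

    OrtTensorTypeAndShapeInfo* shape_info = ort_.GetTensorTypeAndShape(input);
    std::vector<int64_t> shape = ort_.GetTensorShape(shape_info);
    size_t count = ort_.GetTensorShapeElementCount(shape_info);
    ort_.ReleaseTensorTypeAndShapeInfo(shape_info);

    OrtValue* output = ort_.KernelContext_GetOutput(context, 0, shape.data(), shape.size());
    float* y = ort_.GetTensorMutableData<float>(output);
    for (size_t i = 0; i < count; ++i) y[i] = x[i] * 2.0f;
  }

 private:
  Ort::CustomOpApi ort_;
};

// Op: declares the schema (name, input/output counts and types) and creates kernels.
struct MyCustomOp : Ort::CustomOpBase<MyCustomOp, MyCustomKernel> {
  void* CreateKernel(const OrtApi& api, const OrtKernelInfo* info) const {
    return new MyCustomKernel(api, info);
  }
  const char* GetName() const { return "MyCustomOp"; }
  size_t GetInputTypeCount() const { return 1; }
  ONNXTensorElementDataType GetInputType(size_t /*index*/) const {
    return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT;
  }
  size_t GetOutputTypeCount() const { return 1; }
  ONNXTensorElementDataType GetOutputType(size_t /*index*/) const {
    return ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT;
  }
};

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "custom_op_demo");
  static MyCustomOp my_custom_op;  // must outlive every session that uses it

  Ort::SessionOptions session_options;
  Ort::CustomOpDomain custom_op_domain("my.domain");  // step 1: create the domain
  custom_op_domain.Add(&my_custom_op);                // step 2: OrtCustomOpDomain_Add
  session_options.Add(custom_op_domain);              // step 3: OrtAddCustomOpDomain

  // Model path is a placeholder; the model must reference MyCustomOp in my.domain.
  Ort::Session session(env, "model_with_custom_op.onnx", session_options);
}
```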

See this for examples called MyCustomOp and SliceCustomOp that use the C++ helper API (onnxruntime_cxx_api.h).

You can also compile the custom ops into a shared library and use that to run a model via the C++ API. The same test file contains an example.
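For instance, the shared library can be attached to a session through the C API's RegisterCustomOpsLibrary call. This is a hedged sketch only: the library and model paths are placeholders, and the helper names can differ between releases:

```cpp
#include "onnxruntime_cxx_api.h"

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "custom_op_library_demo");
  Ort::SessionOptions session_options;

  // Load the shared library and register every custom op domain it exposes.
  // ONNX Runtime returns the library handle; keep it valid until all sessions
  // using these ops are destroyed, then free it (dlclose / FreeLibrary).
  void* library_handle = nullptr;
  Ort::ThrowOnError(Ort::GetApi().RegisterCustomOpsLibrary(
      session_options, "./libcustom_op_library.so", &library_handle));

  Ort::Session session(env, "model_with_custom_op.onnx", session_options);
}
```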

The source code for a sample custom op shared library containing two custom kernels is here.

See this for an example called testRegisterCustomOpsLibrary that uses the Python API to register a shared library that contains custom op kernels. Currently, the only supported Execution Providers (EPs) for custom ops registered via this approach are the CUDA and the CPU EPs.

Note that when a model is run on GPU, ONNX Runtime will insert a MemcpyToHost op before a CPU-based custom op and append a MemcpyFromHost op after it, so that the tensors remain accessible throughout the call. This means no extra effort is required from the custom op developer to handle this case.

Use the RegisterCustomRegistry API

Implement your kernel and schema (if required) using the OpKernel and OpSchema APIs (headers are in the include folder). Create a CustomRegistry object and register your kernel and schema with this registry. Then register the custom registry with ONNX Runtime using the RegisterCustomRegistry API. See this for an example.
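
A rough outline of this flow is sketched below. Note that OpKernel, KernelDefBuilder, CustomRegistry, and InferenceSession live in internal onnxruntime headers, so the exact names and signatures below may differ between ONNX Runtime versions; the op name MyRelu and the domain my.domain are invented for the sketch:

```cpp
#include <memory>

#include "core/framework/customregistry.h"
#include "core/framework/op_kernel.h"
#include "core/session/inference_session.h"

using namespace onnxruntime;

// Kernel implementing an elementwise ReLU over a float tensor.
class MyReluKernel final : public OpKernel {
 public:
  explicit MyReluKernel(const OpKernelInfo& info) : OpKernel(info) {}

  Status Compute(OpKernelContext* ctx) const override {
    const Tensor* X = ctx->Input<Tensor>(0);
    Tensor* Y = ctx->Output(0, X->Shape());
    const float* x = X->Data<float>();
    float* y = Y->MutableData<float>();
    for (int64_t i = 0, n = X->Shape().Size(); i < n; ++i) {
      y[i] = x[i] > 0.0f ? x[i] : 0.0f;
    }
    return Status::OK();
  }
};

// Build a kernel definition, register it with a CustomRegistry, and hand the
// registry to an existing InferenceSession.
Status RegisterMyRelu(InferenceSession& session) {
  auto registry = std::make_shared<CustomRegistry>();

  KernelDefBuilder builder;
  builder.SetName("MyRelu")
      .SetDomain("my.domain")
      .SinceVersion(1)
      .Provider(kCpuExecutionProvider)
      .TypeConstraint("T", DataTypeImpl::GetTensorType<float>());

  ORT_RETURN_IF_ERROR(registry->RegisterCustomKernel(
      builder, [](const OpKernelInfo& info) { return new MyReluKernel(info); }));
  return session.RegisterCustomRegistry(registry);
}
```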

Contribute the operator to ONNX Runtime

This is for ops that are in the process of being proposed to ONNX. This way you don’t have to wait for approval from the ONNX team if the op is required in production today. See this for an example.