What is Open Neural Network Exchange (ONNX)?

The Open Neural Network Exchange (ONNX) is a joint initiative announced by Facebook and Microsoft, aimed at creating an open ecosystem in which developers and data analysts can exchange machine learning and deep learning models.

The aim of ONNX is to make deep learning models portable and to create an environment where vendors cannot lock users into their machine learning frameworks, opening up a richer space for AI innovation. ONNX does this by providing interoperability between frameworks. Today there are numerous frameworks, each with its own model format that is incompatible with the others, so data analysts and artificial intelligence developers have had to rely on conversion tools to move work between them. With ONNX, models can be exchanged freely between frameworks through an open-source ecosystem without per-framework conversion: data analysts and developers can train a model in one framework and freely run inference or evaluation in another, as the short sketch below illustrates.
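
As a small illustration of this framework neutrality, the sketch below uses the open-source onnx Python package to load and inspect a serialized model. It assumes a model has already been exported to a file named model.onnx (a placeholder name), from any supported framework.

```python
import onnx

# Load a serialized ONNX model; the path is a placeholder for a model
# exported from any ONNX-capable framework.
model = onnx.load("model.onnx")

# Verify that the file is a well-formed ONNX model.
onnx.checker.check_model(model)

# Print a human-readable summary of the framework-neutral compute graph.
print(onnx.helper.printable_graph(model.graph))
```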

Implications for Developers in the AI Community

The introduction of interoperability between machine learning frameworks is going to streamline innovation in artificial intelligence, and ONNX will also shorten the path from research to product development. Several leading machine learning libraries, such as PyTorch, Apache MXNet, Caffe2 and Microsoft Cognitive Toolkit, have already integrated ONNX support. While these are among the most popular, there are many other machine learning libraries currently in use, each with its own format, and porting AI work across these disparate platforms has long been a serious headache for developers.

With Open Neural Network Exchange, you can train an AI model in whichever machine learning library you are most familiar with and then seamlessly deploy it in another library for inference and prediction. There is no question that this will bring greater simplicity to the training and deployment of AI models.

For interoperability, developers or data analysts simply export their artificial intelligence models to the ONNX format, typically a file such as model.onnx, which is a serialized representation of the model stored as a protobuf file; a minimal export example follows below. Native ONNX support is already available in the machine learning libraries mentioned above. However, if you wish to deploy a model to libraries such as TensorFlow or CoreML that have not integrated ONNX support, you will still need to use converters. ONNX adoption is growing rapidly, though, so expect interoperability to become an almost universal feature of artificial intelligence frameworks in the coming years.
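
As a minimal sketch of that export step, the example below serializes a PyTorch model to a model.onnx file; the pretrained ResNet-18 and the file name are illustrative assumptions, not details from the original article.

```python
import torch
import torchvision

# Any trained PyTorch model will do; a pretrained ResNet-18 is used here
# purely for illustration.
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# A dummy input with the expected shape; the exporter traces the model
# with this tensor to record its compute graph.
dummy_input = torch.randn(1, 3, 224, 224)

# Serialize the traced graph to a protobuf file named model.onnx.
torch.onnx.export(model, dummy_input, "model.onnx")
```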

Using ONNX in Practice

In practice, you can train an AI model in one framework, such as a convolutional neural network in PyTorch, and then deploy it in another environment, such as an iOS app that uses Apple's CoreML machine learning library. ONNX serves as the intermediary representation, letting you move from one environment to the next without rebuilding the model by hand; for targets such as CoreML that do not read ONNX directly, only a lightweight converter step is needed.
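
A sketch of the deployment side is shown below: ONNX Runtime loads the exported file and runs inference with no knowledge of the framework that produced it. The model.onnx path and the input shape are carried over from the export sketch above and are assumptions for illustration.

```python
import numpy as np
import onnxruntime as ort

# Load the exported model into ONNX Runtime, which is independent of the
# framework the model was trained in.
session = ort.InferenceSession("model.onnx")

# Build an input matching the shape used at export time and run inference.
input_name = session.get_inputs()[0].name
dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```

For the CoreML target specifically, the same .onnx file would typically be passed through a converter (for example, the onnx-coreml package) to produce a .mlmodel file that can be bundled into an iOS app.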

As the AI field advances, more deep learning frameworks will emerge and workflows will become increasingly complex, so the need to exchange models across frameworks will only grow.

In the future, portability will be an essential requirement in AI development, giving teams the versatility and flexibility to work across multiple environments. ONNX is a pacesetter in this regard, providing an open standard that keeps AI models reusable across platforms over the long haul.
