TensorFlow is an open source software library for high-performance numerical computation. Originally developed by researchers and engineers on the Google Brain team within Google's AI organization, TensorFlow comes with strong support for machine learning and deep learning, and its flexible numerical computation core can be applied across many other scientific domains. Its flexible architecture allows easy deployment of computation across a variety of platforms, from desktops to clusters of servers to mobile and edge devices. To take full advantage of Intel architecture and extract maximum performance, Intel has produced an optimized version of TensorFlow for Windows, in which the framework is optimized using oneAPI Deep Neural Network Library (oneDNN) primitives, a popular performance library for deep learning applications. For more information on the optimizations as well as performance data, see the blog post Faster AI Inference with Intel Optimization for TensorFlow.
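As a minimal sketch of how the oneDNN optimizations are typically turned on: in Intel's optimized builds the oneDNN primitives are enabled by default, while recent stock TensorFlow builds honor the `TF_ENABLE_ONEDNN_OPTS` environment variable (an assumption here is TensorFlow 2.5 or newer, where this flag is recognized). The flag must be set before TensorFlow is imported:

```python
import os

# Request oneDNN-optimized kernels. This must happen BEFORE TensorFlow is
# imported, because the flag is read once at import time.
# (Assumption: TF >= 2.5, where TF_ENABLE_ONEDNN_OPTS is honored; Intel
# Optimization for TensorFlow enables oneDNN by default.)
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# Import guarded so the snippet also runs where TensorFlow is absent.
try:
    import tensorflow as tf
    print("TensorFlow", tf.__version__, "loaded with oneDNN opts requested")
except ImportError:
    print("TensorFlow is not installed in this environment")
```

On builds where oneDNN is active, TensorFlow prints an informational message at import time noting that oneDNN custom operations are on.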