OmniML and Intel to Unlock Hardware-Efficient AI
Although AI is taking off, getting responsible, accurate, and efficient applications into production remains a significant challenge, given the large gap between machine learning (ML) model training and ML model inferencing. To that end, OmniML announced a strategic partnership with Intel to accelerate the development and deployment of AI applications for enterprises of all sizes. The two companies will collaborate on growth opportunities under the Intel Disruptor Initiative, including access to OmniML’s Omnimizer software platform.
OmniML and Intel are bridging the gap between model training and inferencing by incorporating hardware-efficient AI development from the outset. To demonstrate these capabilities, OmniML used its Omnimizer platform on 4th Gen Intel Xeon Scalable processors with integrated Intel AMX acceleration to achieve a more than 10x speedup, measured in words processed per second, on a multilingual DistilBERT model.
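As an illustration of the kind of measurement behind such a claim (a minimal sketch, not OmniML’s actual benchmark harness), the snippet below times multilingual DistilBERT inference on a Xeon CPU with bfloat16 enabled through Intel Extension for PyTorch, which routes matrix multiplies to the AMX tile instructions on 4th Gen Xeon:

```python
# Hypothetical benchmark sketch: measure tokens/sec for multilingual
# DistilBERT on a CPU, using bfloat16 via Intel Extension for PyTorch
# so 4th Gen Xeon executes matmuls on AMX. Illustration only.
import time

import torch
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased").eval()

# ipex.optimize fuses ops and casts weights; bfloat16 is the dtype AMX accelerates.
model = ipex.optimize(model, dtype=torch.bfloat16)

batch = tokenizer(["An example sentence to score."] * 32,
                  padding=True, return_tensors="pt")
tokens_per_batch = batch["input_ids"].numel()

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    for _ in range(5):            # warm-up iterations
        model(**batch)
    iters = 50
    start = time.perf_counter()
    for _ in range(iters):
        model(**batch)
    elapsed = time.perf_counter() - start

print(f"{iters * tokens_per_batch / elapsed:,.0f} tokens/sec")
```

Running the same loop with and without the bfloat16/IPEX path gives the before-and-after comparison a throughput claim like 10x rests on.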
The Omnimizer ML platform automates machine learning model design, training, and deployment, helping users identify design flaws and performance bottlenecks for faster time to production and better runtime performance. It features a cloud-native interface to rapidly profile and visualize ML model performance on Intel and other hardware devices. Omnimizer provides a significant performance boost for many applications in computer vision and natural language processing (NLP).
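The article does not describe Omnimizer’s API, but the per-operator bottleneck report this kind of platform automates can be approximated with PyTorch’s built-in profiler, as in this generic sketch:

```python
# A minimal sketch of the per-operator latency profiling a platform like
# Omnimizer automates; this uses PyTorch's built-in profiler, not OmniML's API.
import torch
from torch.profiler import profile, ProfilerActivity
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased").eval()
batch = tokenizer("Profile this sentence.", return_tensors="pt")

with torch.no_grad(), profile(activities=[ProfilerActivity.CPU]) as prof:
    model(**batch)

# Rank operators by CPU time to surface the bottlenecks worth optimizing.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```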
Natural language processing, and improving language models’ performance in particular, is high on the list of AI priorities, as everyday devices increasingly incorporate AI-based language models as a core design feature. Many of these language models are built on the transformer architecture. Using Omnimizer to increase the efficiency of transformer-based language models opens up use cases that weren’t possible before, lowering the total cost of ownership for both on-device AI and cloud inferencing; one widely used efficiency technique of this kind is sketched below.
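As a generic illustration of such a technique (not a description of Omnimizer’s method), post-training dynamic quantization replaces a transformer’s fp32 linear layers with int8 equivalents, shrinking the model and speeding up CPU inference:

```python
# Generic transformer-efficiency illustration: post-training dynamic
# quantization of DistilBERT's linear layers to int8. Not Omnimizer's method.
import os

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("distilbert-base-multilingual-cased").eval()
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m):
    # Serialize the state dict to disk to compare on-device footprints.
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.0f} MB -> int8: {size_mb(quantized):.0f} MB")
```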