MLPerf, a collaboration of tech giants and researchers from numerous universities including Harvard, Stanford and the University of California, Berkeley, aims to drive progress in ML by developing a suite of fair and reliable benchmarks for emerging artificial intelligence (AI) hardware and software platforms.
myrtle.ai has been selected to provide the reference code that will serve as the benchmark standard for the Speech Recognition category. The code is a new implementation of two AI models, DeepSpeech 1 and DeepSpeech 2, building on models originally developed by Baidu.
“We are honoured to be providing the reference implementations for the Speech to Text category of MLPerf,” says Peter Baldwin, CEO of myrtle.ai. “Myrtle has a world-class machine learning group, and we are pleased to be able to provide the code as open source so that everyone can benefit from it.”
This is the first time the AI community has come together to develop a series of reliable, transparent and vendor-neutral ML benchmarks that highlight performance differences between ML algorithms and cloud configurations. The new benchmarking suite will be used to measure training speeds and inference times across a range of ML tasks.
myrtle.ai’s Speech Recognition benchmark is grounded in proven experience in this field. The company’s core R&D team has sped up Mozilla’s DeepSpeech implementations 100-fold when training on LibriSpeech, demonstrating its practical experience of training and deploying AI and ML algorithms.