Many portable devices today offer advanced voice and image features, yet apps like Siri or Google Photos can't perform speech or image recognition on your smartphone's hardware alone.
But what if speech recognition, image recognition and other complex cognitive tasks could all be performed on a single portable device, without an internet connection or high-power servers behind the scenes?
Jae-sun Seo is attempting to shatter the computing, energy and size limitations that keep state-of-the-art learning algorithms from running on small-footprint devices, with the help of custom-designed hardware.
This research caught the attention of the National Science Foundation and earned Seo, an assistant professor of electrical engineering in Arizona State University’s Ira A. Fulton Schools of Engineering, a five-year, nearly $473,000 CAREER Award.
“The overarching goal of this project is to build brain-inspired intelligent computing systems using custom hardware designs that are energy-efficient and programmable for various cognitive tasks, including autonomous driving, speech and biomedical applications,” Seo says.