Supersmart Phones
Using classical math theory, researchers aim to design powerful AI tools for small electronic devices.
By Rosemarie D. Wesson and Bo Yuan
Artificial intelligence (AI) has emerged over the past decade as the most exciting trend in information technology, able to outperform humans on many tasks requiring “intelligence.” Perhaps AI’s most celebrated achievement was the defeat of the reigning world Go champion Ke Jie by Google’s AlphaGo program in May 2017. The event proved AI’s mastery of a 2,500-year-old strategy board game vastly more complex than chess, with 10 to the power of 170 possible configurations of stones on a board. This and other record-breaking triumphs, such as image and speech recognition and automatic language translation, lead some experts to equate AI with the dawn of electric power in its potential impact on society.
But unlocking AI’s full capability is currently limited to companies and institutions with access to powerful hardware platforms that can sift through vast amounts of data. For instance, deep neural networks (DNNs), the most prominent AI machine learning technique, demand extensive computation to achieve anything close to human-level performance on vision, speech, or natural language tasks. The computations required for training and inference typically occupy hundreds or thousands of high-end servers in the data centers of tech giants such as IBM, Google, Facebook, and Amazon. As the size of DNN models continues to grow, so will the demand for ever more powerful computing tools.
A team of researchers wants to change that and find a way to build AI power into small devices, such as smartphones. Collaborators from the City College of New York, the University of Southern California, and Northeastern and Syracuse universities are striving for an efficient mathematical way to reduce the required computation in neural networks. Using structured matrix theory, they aim to design a DNN training and inference process with orders of magnitude improvement in speed and energy efficiency, thereby allowing neural networks to be embedded in various applications.
The key idea of their approach is to build neural network models on the format of structured matrices. Mathematically, the layers of a neural network can be represented as matrices, and today nearly all neural networks deployed in practice use unstructured matrices. So what are the benefits of structured ones? First, storage costs and power consumption are largely determined by the number of parameters the hardware must handle when it runs a neural network. If the underlying matrices carry enough structure, the number of parameters needed to represent the network model drops dramatically.
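The article does not specify which structured family the team uses, but circulant matrices are a classic example of the parameter savings described above: an n-by-n circulant matrix is fully determined by its first column, so it needs only n numbers instead of n². A minimal sketch of the count:

```python
# Parameter count for one n-by-n weight matrix in a neural network layer.
n = 1024

dense_params = n * n      # unstructured matrix: every entry is a free parameter
circulant_params = n      # circulant matrix: the first column determines all entries

print(dense_params // circulant_params)  # -> 1024, a thousandfold reduction
```

For a single 1,024-wide layer this is already a thousandfold reduction, which is what makes deployment on memory-limited devices plausible.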
This reduction in parameters makes it possible to deploy a neural network on a wide range of hardware, from graphics processing units, application-specific integrated circuits, field-programmable gate arrays, and cloud servers down to resource-constrained hand-held devices. Second, compared with unstructured matrices, structured matrices have special mathematical properties that permit fast arithmetic, which translates into much faster processing for structured neural networks. This advantage will help battery-limited mobile devices take on more AI tasks in the future.
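The fast arithmetic mentioned above can also be illustrated with the circulant family (again, an assumed example rather than the team's specific construction): multiplying a circulant matrix by a vector is a circular convolution, so the fast Fourier transform computes it in O(n log n) time instead of the O(n²) of a dense matrix-vector product. A sketch using NumPy:

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by the vector x.

    Uses the FFT, so the cost is O(n log n) rather than the O(n^2)
    of an explicit dense matrix-vector product.
    """
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

# Check against the explicit dense circulant matrix C[i, j] = c[(i - j) mod n].
n = 8
rng = np.random.default_rng(0)
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

assert np.allclose(C @ x, circulant_matvec(c, x))
```

The same trick underlies the speedups claimed for structured networks: the structure replaces a generic multiply with a transform that hardware can execute far more cheaply.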
“Old-fashioned” classical matrix theory may seem far removed from today’s “hot” AI techniques. But in fact, the combination of these two fields demonstrates the unique power of mathematical tools for conducting cutting-edge science and engineering research.
Rosemarie D. Wesson, Ph.D., is associate dean for research at the Grove School of Engineering, City College of New York. Bo Yuan, Ph.D., is an assistant professor of computer science at the City College of New York. Research in this project at CCNY and Northeastern University is supported by the National Science Foundation.