Researchers show how network pruning can skew deep learning models

Computer science researchers have shown that a widely used technique called neural network pruning can adversely affect the performance of deep learning models, detailed the causes of those performance problems, and demonstrated a technique for addressing the challenge.

Deep learning is a type of artificial intelligence that can be used to classify things, like images, text, or sound. For example, it can be used to identify individuals based on facial images. However, deep learning models often require a lot of computational resources to run. This causes problems when a deep learning model is put into practice for some applications.

To address these challenges, some systems use a technique called “neural network pruning.” This makes the deep learning model more compact and therefore able to operate while using fewer computing resources.
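To make the idea concrete, here is a minimal sketch of one common form of pruning, magnitude-based weight pruning, in PyTorch. The toy model and the 30% sparsity level are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of magnitude-based weight pruning in PyTorch.
# The toy model and the 30% sparsity level are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Zero out the 30% of weights with the smallest magnitude in each
# linear layer, shrinking the effective size of the network.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Report the fraction of parameters that are now exactly zero.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.2%}")
```

Removing the smallest-magnitude weights is only one possible pruning criterion; the researchers' findings concern the downstream effects of such compression rather than any single pruning recipe.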

“However, our research shows that this network pruning can impair the ability of deep learning models to identify some groups,” says Jung-Eun Kim, co-author of a paper on the work and an assistant professor of computer science at North Carolina State University.

“For example, if a security system uses deep learning to scan people’s faces to determine whether they have access to a building, the deep learning model needs to be compact so that it can operate efficiently. This may work fine most of the time, but the network pruning could also affect the deep learning model’s ability to identify some faces.”

In their new paper, the researchers explain why network pruning can negatively affect model performance when identifying certain groups – what the literature calls “minority groups” – and demonstrate a new technique to address these challenges.

The researchers identified two factors that explain how network pruning can affect the performance of deep learning models.

In technical terms, those two factors are: disparity of gradient norms across groups; and disparity of Hessian norms associated with a group’s data. In practical terms, this means that deep learning models can become less accurate at recognizing specific categories of images, sounds or text. Specifically, network pruning can amplify accuracy shortcomings that already existed in the model.
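As an illustration of the first factor, the per-group gradient norm can be measured directly. The sketch below, assuming a toy PyTorch classifier and two synthetic “groups” of different sizes, computes the L2 norm of the loss gradient for each group; a large gap between the two norms is the kind of disparity the researchers link to unequal effects of pruning.

```python
# A sketch of measuring the first factor: per-group gradient norms.
# The classifier and the two synthetic "groups" are assumptions made
# purely for illustration.
import torch
import torch.nn as nn

def gradient_norm(model, loss_fn, inputs, labels):
    """L2 norm of the loss gradient w.r.t. all model parameters for one group."""
    loss = loss_fn(model(inputs), labels)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()

# A "majority" group with 100 samples and a "minority" group with 20.
x_a, y_a = torch.randn(100, 16), torch.randint(0, 2, (100,))
x_b, y_b = torch.randn(20, 16), torch.randint(0, 2, (20,))

print("group A gradient norm:", gradient_norm(model, loss_fn, x_a, y_a))
print("group B gradient norm:", gradient_norm(model, loss_fn, x_b, y_b))
```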

For example, if a deep learning model is trained to recognize faces using a dataset that includes the faces of 100 white people and 60 Asian people, it might be more accurate at recognizing white faces, but could still achieve adequate performance at recognizing Asian faces. After network pruning, the model is more likely to become unable to recognize some Asian faces.

“The shortcoming may not have been noticeable in the original model, but because it is amplified by the network pruning, it can become noticeable,” Kim says.

“To mitigate this issue, we demonstrated an approach that uses mathematical techniques to equalize the groups that the deep learning model uses to categorize data samples,” Kim says. “In other words, we use algorithms to close the accuracy gap across groups.”
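The article does not spell out the paper’s exact mitigation algorithm, but the general idea of closing the accuracy gap between groups can be sketched as a disparity-aware training objective. Everything below (the penalty form, the `lam` weight, the toy data) is an illustrative assumption rather than the authors’ method.

```python
# An illustrative disparity-aware objective: the average loss plus a
# penalty on the gap between the best- and worst-served groups. The
# penalty form and the `lam` weight are assumptions, not the paper's
# exact algorithm.
import torch
import torch.nn as nn

def disparity_aware_loss(model, loss_fn, group_batches, lam=1.0):
    """Mean per-group loss plus a penalty on the worst-case group gap."""
    losses = torch.stack([loss_fn(model(x), y) for x, y in group_batches])
    return losses.mean() + lam * (losses.max() - losses.min())

# Toy usage: one gradient step that also shrinks the group gap.
model = nn.Linear(16, 2)
loss_fn = nn.CrossEntropyLoss()
batches = [
    (torch.randn(100, 16), torch.randint(0, 2, (100,))),  # majority group
    (torch.randn(20, 16), torch.randint(0, 2, (20,))),    # minority group
]
loss = disparity_aware_loss(model, loss_fn, batches, lam=0.5)
loss.backward()
```

Raising `lam` trades a little average accuracy for a smaller gap between groups, which matches the article’s description of closing the accuracy gap in broad strokes only.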

In testing, the researchers demonstrated that using their mitigation technique improved the fairness of a deep learning model that had undergone network pruning, essentially restoring it to pre-pruning accuracy levels.

“I think the most important aspect of this work is that we now have a more detailed understanding of exactly how network pruning can influence the performance of deep learning models at identifying minority groups, both theoretically and empirically,” Kim says. “We are also open to working with partners to identify unknown or overlooked impacts of model compression techniques, particularly in real-world applications of deep learning models.”

The paper, “Pruning has a disparate impact on model accuracy,” will be presented at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), taking place Nov. 28 to Dec. 9 in New Orleans. The first author of the paper is Cuong Tran of Syracuse University. The paper was co-authored by Ferdinando Fioretto of Syracuse and Rakshit Naidu of Carnegie Mellon University.

The work was done with support from the National Science Foundation, under grants SaTC-1945541, SaTC-2133169, and CAREER-2143706; as well as a Google Research Scholar Award and an Amazon Research Award.

-shipman-

Note to Editors: The summary of the study follows.

“Pruning has a disparate impact on model accuracy”

Authors: Cuong Tran and Ferdinando Fioretto, Syracuse University; Jung-Eun Kim, North Carolina State University; and Rakshit Naidu, Carnegie Mellon University

Presented: Nov. 28 to Dec. 9 at the 36th Conference on Neural Information Processing Systems (NeurIPS 2022)

Summary: Network pruning is a widely used compression technique that can significantly scale down over-parameterized models with minimal loss of accuracy. This paper shows that pruning may create or exacerbate disparate impacts. The paper sheds light on the factors driving these disparities, suggesting that differences in gradient norms and distance to the decision boundary across groups are responsible for this critical issue. It analyzes these factors in detail, providing both theoretical and empirical support, and proposes a simple, yet effective, solution that mitigates the disparate impacts caused by pruning.
