EQ-Net: Elastic Quantization Neural Networks
Ke Xu | Lei Han | Ye Tian | ShangShang Yang | Xingyi Zhang
Current model quantization methods have shown promising capability in reducing storage space and computation complexity.
However, because different hardware platforms support different quantization forms,
one limitation of existing solutions is that they usually require repeated optimization for each deployment scenario.
How to construct a model with flexible quantization forms has been less studied.
In this paper, we explore a one-shot network quantization regime, named Elastic Quantization Neural Networks (EQ-Net), which
aims to train a robust weight-sharing quantization supernet. First, we propose an elastic quantization space (covering elastic bit-width, granularity, and symmetry)
to accommodate various mainstream quantization forms. Second, we propose the Weight Distribution Regularization Loss (WDR-Loss) and Group Progressive Guidance Loss (GPG-Loss)
to bridge the distribution inconsistency of weights and output logits across the elastic quantization space. Finally, we combine a genetic algorithm with the proposed
Conditional Quantization-Aware Accuracy Predictor (CQAP) as an estimator to quickly search for mixed-precision quantized neural networks within the supernet.
Extensive experiments demonstrate that our EQ-Net is close to or even better than its static counterparts as well as state-of-the-art robust bit-width methods.
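To make the elastic quantization space concrete, the sketch below shows a simulated (quantize-dequantize) uniform quantizer whose three axes — bit-width, symmetry, and per-tensor vs. per-channel granularity — can be switched independently. This is an illustrative assumption based on standard uniform quantization, not the paper's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def elastic_quantize(x, bits=8, symmetric=True, per_channel=False, axis=0):
    """Simulated quantize-dequantize with an elastic scheme.

    Illustrative sketch only: the elastic axes (bits, symmetry,
    granularity) mirror the paper's search space, but the uniform
    quantizer here is a generic formulation, not EQ-Net's code.
    """
    if per_channel:
        # Per-channel granularity: one (scale, zero-point) per slice
        # along `axis`; reduce over all other axes.
        reduce_axes = tuple(i for i in range(x.ndim) if i != axis)
        x_min = x.min(axis=reduce_axes, keepdims=True)
        x_max = x.max(axis=reduce_axes, keepdims=True)
    else:
        # Per-tensor granularity: a single scalar range.
        x_min, x_max = x.min(), x.max()

    if symmetric:
        # Symmetric: zero-point fixed at 0, levels in [-2^(b-1)+1, 2^(b-1)-1].
        qmax = 2 ** (bits - 1) - 1
        qmin = -qmax
        scale = np.maximum(np.abs(x_min), np.abs(x_max)) / qmax
        scale = np.where(scale == 0, 1.0, scale)  # guard constant tensors
        zero_point = 0.0
    else:
        # Asymmetric: affine mapping of [x_min, x_max] onto [0, 2^b - 1].
        qmin, qmax = 0, 2 ** bits - 1
        scale = (x_max - x_min) / (qmax - qmin)
        scale = np.where(scale == 0, 1.0, scale)  # guard constant tensors
        zero_point = np.round(-x_min / scale)

    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale
```

In a weight-sharing supernet, each training step would sample one configuration of these switches per layer and run the forward pass through the corresponding fake-quantized weights, so a single set of shared weights is exposed to the whole elastic space.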
Results
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (No. 62206003, No. 62276001, No. 62136008, No. U20A20306, No. U21A20512)
and in part by the Excellent Youth Foundation of Anhui Provincial Colleges (No. 2022AH030013).