Influence-Based Bit Quantization for Machine Learning: Cost-Quality Tradeoffs
Palem, Krishna V.
Master of Science
Due to the significant computational cost of machine learning architectures such as neural networks (networks, for short), there has been considerable interest in quantization, that is, reducing the number of bits used to represent them. Current quantization approaches treat all network parameters equally, allocating the same bit-width budget to each of them. In this work we propose a quantization approach that allocates bit budgets to parameters preferentially, based on their influence. Our notion of influence is inspired by the traditional definition of this concept from the Fourier analysis of Boolean functions. We show that guiding the investment of bit budgets using influence achieves acceptable accuracy with a lower overall bit budget than approaches that do not use quantization: by trading 4.5% in accuracy, we reduce the overall bit budget by a factor of 28. To better understand our approach, we also considered allocating bit budgets at random and found that our influence-based approach outperforms random allocation most of the time, by noticeable margins. All of these results are based on the MNIST data set, and our algorithm for computing influence is based on a simple and easy-to-implement greedy approach.
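For context, the classical Fourier-analytic notion of influence alluded to above, for a Boolean function f : {-1,1}^n -> {-1,1}, measures how often flipping the i-th input coordinate changes the output, and equals the Fourier weight on the sets containing i:

\mathrm{Inf}_i(f) = \Pr_{x}\left[ f(x) \neq f(x^{\oplus i}) \right] = \sum_{S \ni i} \hat{f}(S)^2,

where x^{\oplus i} denotes x with its i-th coordinate flipped. How this notion is adapted to network parameters, and the exact greedy procedure used, are specific to the thesis; the snippet below is only a hypothetical sketch of the general idea of greedy, influence-ranked bit allocation under a total budget, with illustrative function names and parameters, not the thesis's actual algorithm.

    # Hypothetical sketch (illustrative only): greedily give extra bits to the
    # parameters with the highest influence scores, subject to a total budget.
    def allocate_bits(influences, total_budget, base_bits=2, max_bits=8):
        """influences: one non-negative score per parameter."""
        n = len(influences)
        bits = [base_bits] * n                      # every parameter gets a minimal width
        remaining = total_budget - base_bits * n    # bits left to distribute preferentially
        # Visit parameters from most to least influential, topping each up to max_bits.
        for i in sorted(range(n), key=lambda i: influences[i], reverse=True):
            if remaining <= 0:
                break
            extra = min(max_bits - base_bits, remaining)
            bits[i] += extra
            remaining -= extra
        return bits

    # Example: the two most influential parameters receive wider bit widths.
    print(allocate_bits([0.9, 0.1, 0.4, 0.7], total_budget=16))  # -> [8, 2, 2, 4]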
Neural Networks; Quantization; Bit Influence