RTN: Reparameterized Ternary Network

Abstract

To deploy deep neural networks on resource-limited devices, quantization has been widely explored. In this work, we study extremely low-bit networks, which offer tremendous speed-up and memory savings by quantizing both activations and weights. We first bring up three overlooked issues in extremely low-bit networks: the squashing range of quantized values, gradient vanishing during backpropagation, and the unexploited hardware acceleration of ternary networks. By reparameterizing the quantized activation and weight vectors with a full-precision scale and offset applied to a fixed ternary vector, we decouple the range and magnitude from the direction to alleviate these three issues. The learnable scale and offset automatically adjust the range of quantized values and the sparsity without gradient vanishing. A novel encoding and computation pattern is designed to support efficient computing for our reparameterized ternary network (RTN). Experiments on ResNet-18 for ImageNet demonstrate that the proposed RTN finds a much better trade-off between bitwidth and accuracy, and achieves up to 26.76% relative accuracy improvement compared with state-of-the-art methods. Moreover, we validate the proposed computation pattern on a Field Programmable Gate Array (FPGA), where it brings 46.46x and 89.17x savings in power and area, respectively, compared with full-precision convolution.
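The reparameterization can be illustrated with a minimal PyTorch sketch: the quantizer produces a ternary direction A^t in {-1, 0, +1} and rescales it with a learnable full-precision scale (gamma) and offset (beta), so the range adapts during training while gamma and beta receive ordinary full-precision gradients. The threshold value, per-tensor parameters, plain straight-through estimator, and all names below are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch (assumptions noted above), not the paper's exact method.
import torch
import torch.nn as nn


class TernaryQuant(torch.autograd.Function):
    """Hard ternarization with a straight-through estimator (STE)."""

    @staticmethod
    def forward(ctx, x, threshold):
        # Map each element to {-1, 0, +1} by comparing against the threshold.
        return torch.sign(x) * (x.abs() > threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass the gradient through unchanged to the latent tensor.
        return grad_output, None


class RTNActivation(nn.Module):
    """Reparameterized ternary activation: gamma * A^t + beta (hypothetical module name)."""

    def __init__(self, threshold=0.05):
        super().__init__()
        self.threshold = threshold
        # Learnable full-precision scale and offset; they adjust the range of
        # the quantized values and the sparsity without gradient vanishing.
        self.gamma = nn.Parameter(torch.tensor(1.0))
        self.beta = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        a_t = TernaryQuant.apply(x, self.threshold)  # ternary direction in {-1, 0, +1}
        return self.gamma * a_t + self.beta          # full-precision range and offset


if __name__ == "__main__":
    layer = RTNActivation()
    x = torch.randn(4, 8, requires_grad=True)
    y = layer(x)
    y.sum().backward()
    # gamma and beta get regular full-precision gradients, so adjusting the
    # quantization range does not rely on gradients through the ternarization.
    print(layer.gamma.grad, layer.beta.grad)
```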

Publication
AAAI Conference on Artificial Intelligence (AAAI 2020)

Results on ImageNet