Jian-Hui Duan

DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients

Paper: DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. Paper link: https://arxiv.org/abs/1606.06160. Code: DoReFa-Net: Github. This paper can be read as a follow-up to BNN, and in particular to XNOR-Net. In...

QAT: Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference

Paper: Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. Paper link: https://arxiv.org/abs/1712.05877. Shortcomings of earlier compression and quantization methods: their baselines are AlexNet, VGG, and GoogLeNet, and these networks...

EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

Paper: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Paper link: https://arxiv.org/abs/1905.11946. Code link: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ...

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

Paper: XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks. Paper link: https://arxiv.org/abs/1603.05279. Code link: http://allenai.org/plato/xnornet. Binarizing a neural network involves three aspects in total: the layers of the whole network architecture...

MobileNets Analysis: From V1 to V3

1. Depthwise Separable Convolution The original image has spatial size $D_F \times D_F$ with $M$ channels in total, and each convolution kernel is $D_k \times D_k$. Depthwise Convolution: the depthwise step uses only $M$ kernels in the DW layer, one separate kernel per input channel, and produces the intermediate output (see the sketch below). As shown in the figure above, the original...
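As a rough illustration of the idea in the excerpt above (not code from the post itself), here is a minimal PyTorch sketch of a depthwise separable block: a depthwise convolution with one $D_k \times D_k$ kernel per input channel (via `groups=in_channels`), followed by a $1 \times 1$ pointwise convolution that mixes the $M$ intermediate channels. The class and parameter names are my own.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution as described in MobileNet V1:
    a per-channel (depthwise) conv followed by a 1x1 (pointwise) conv."""

    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
        super().__init__()
        # Depthwise: groups=in_channels gives one D_k x D_k kernel per input channel.
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            stride=stride, padding=kernel_size // 2,
            groups=in_channels, bias=False)
        # Pointwise: a 1x1 conv combines the M intermediate channels into N output channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Example (hypothetical sizes): a 32-channel 56x56 feature map mapped to 64 channels.
x = torch.randn(1, 32, 56, 56)
block = DepthwiseSeparableConv(32, 64)
print(block(x).shape)  # torch.Size([1, 64, 56, 56])
```

Splitting the spatial filtering (depthwise) from the channel mixing (pointwise) is what reduces the cost from $D_k \cdot D_k \cdot M \cdot N$ multiplications per output position to $D_k \cdot D_k \cdot M + M \cdot N$.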