SeesawFaceNets: sparse and robust face verification model for mobile platform

08/24/2019
by   Jintao Zhang, et al.

Deep Convolutional Neural Networks (DCNNs) have become the most widely used solution for most computer vision tasks, and one of the most important application scenarios is face verification. Due to their high accuracy, deep face verification models whose inference stage runs on cloud platforms over the internet play the key role in most practical scenarios. However, two critical issues exist. First, individual privacy may not be well protected, since users have to upload their personal photos and other private information to the online cloud backend. Second, both training and inference are time-consuming, and the latency may hurt the customer experience, especially when the internet connection is unstable, in remote areas where mobile reception is poor, or in cities where buildings and other construction may block mobile signals. Therefore, designing lightweight networks with low memory requirements and computational cost is one of the most practical solutions for face verification on mobile platforms. In this paper, a novel mobile network named SeesawFaceNets, a simple but effective model, is proposed for efficiently deploying face recognition on mobile devices. Extensive experimental results show that the proposed SeesawFaceNets outperforms the baseline MobileFaceNets with only 66% (146M vs. 221M MAdds) of the computational cost, a smaller batch size, and fewer training steps, and that SeesawFaceNets achieves performance comparable to other state-of-the-art models such as MobiFace with only 54.2% (1.3M vs. 2.4M) of the parameters and 31.6% (146M vs. 462M MAdds) of the computational cost. It is also competitive against large-scale deep face recognition networks on all five listed public validation datasets, with 6.5% (4.2M vs. 65M) of the parameters and 4.35% (526M vs. 12G MAdds) of the computational cost.
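
The abstract does not describe the seesaw block itself or how the MAdds figures above are derived; as a rough illustration of where such savings typically come from in lightweight mobile architectures, the Python sketch below counts parameters and multiply-adds for a standard convolution versus a depthwise-separable factorization. The layer shape and function names are hypothetical and chosen only for illustration; they are not taken from the paper.

    # Illustrative parameter/MAdds accounting for a single layer.
    # This sketch is NOT from the SeesawFaceNets paper; it only shows why
    # depthwise-separable factorizations (the basis of most lightweight
    # mobile blocks) cut multiply-adds so sharply versus a standard conv.

    def standard_conv_cost(h, w, c_in, c_out, k):
        """Standard k x k convolution, stride 1, 'same' padding, no bias."""
        params = k * k * c_in * c_out
        madds = h * w * params  # one multiply-add per weight per output position
        return params, madds

    def depthwise_separable_cost(h, w, c_in, c_out, k):
        """Depthwise k x k conv followed by a 1 x 1 pointwise conv."""
        dw_params = k * k * c_in   # one k x k filter per input channel
        pw_params = c_in * c_out   # 1 x 1 conv mixing channels
        params = dw_params + pw_params
        madds = h * w * dw_params + h * w * pw_params
        return params, madds

    if __name__ == "__main__":
        # Hypothetical layer shape, chosen only for illustration.
        h, w, c_in, c_out, k = 14, 14, 128, 128, 3
        std = standard_conv_cost(h, w, c_in, c_out, k)
        sep = depthwise_separable_cost(h, w, c_in, c_out, k)
        print("standard conv:       %d params, %d MAdds" % std)
        print("depthwise separable: %d params, %d MAdds" % sep)
        print("MAdds ratio: %.1f%%" % (100.0 * sep[1] / std[1]))

For this example shape the separable layer needs roughly 12% of the multiply-adds of the standard convolution; the paper's reported MAdds totals come from summing such per-layer costs over its own (seesaw-block-based) architecture, which is detailed in the full text.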
