From Zero-Shot to Few-Shot Learning: A Step of Embedding-Aware Generative Models
Embedding-aware generative models (EAGMs) address the data insufficiency problem in zero-shot learning (ZSL) by constructing a generator between the semantic and visual embedding spaces. Thanks to predefined benchmarks and protocols, the number of EAGMs proposed for ZSL is increasing rapidly. We argue that it is time to take a step back and reconsider the embedding-aware generative paradigm. The purpose of this paper is three-fold. First, given that the embedding features in current benchmark datasets are somewhat outdated, we remarkably improve the performance of EAGMs for ZSL with embarrassingly simple modifications to the embedding features. This is an important contribution, since the results reveal that the embeddings used by EAGMs deserve more attention. Second, we compare and analyze a significant number of EAGMs in depth. Based on five benchmark datasets, we update the state-of-the-art results for ZSL and provide a strong baseline for few-shot learning (FSL), covering both the classic unseen-class few-shot learning (UFSL) and the more challenging seen-class few-shot learning (SFSL). Finally, we provide a comprehensive generative model repository, namely the generative any-shot learning (GASL) repository, which contains the models, features, parameters, and settings of EAGMs for ZSL and FSL. Every result in this paper can be reproduced with a single command line based on GASL.
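To make the paradigm concrete, the sketch below shows a minimal conditional generator that maps a class-level semantic embedding (e.g., an attribute vector) plus noise to a synthetic visual feature, which could then be used to train a classifier for unseen or data-scarce classes. This is an illustrative assumption of how an EAGM generator can be structured, not the architecture, dimensions, or training procedure used in the paper or in the GASL repository.

```python
# Hypothetical sketch of an embedding-aware generator (semantic -> visual space).
# All dimensions and layer choices are illustrative assumptions.
import torch
import torch.nn as nn


class SemanticToVisualGenerator(nn.Module):
    def __init__(self, semantic_dim=85, noise_dim=85, visual_dim=2048, hidden_dim=4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(semantic_dim + noise_dim, hidden_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden_dim, visual_dim),
            nn.ReLU(),  # visual features (e.g., CNN activations) are typically non-negative
        )

    def forward(self, semantic, noise):
        # Condition on the class embedding by concatenating it with the noise vector.
        return self.net(torch.cat([semantic, noise], dim=1))


# Usage: synthesize pseudo visual features for a class from its attribute vector.
gen = SemanticToVisualGenerator()
attributes = torch.rand(16, 85)        # 16 copies of a (hypothetical) class attribute vector
noise = torch.randn(16, 85)
fake_features = gen(attributes, noise)  # shape: (16, 2048)
```

In such a setup, the generator is typically trained on seen classes (e.g., with an adversarial or variational objective) and then used to synthesize features for unseen or few-shot classes, turning ZSL/FSL into a conventional supervised classification problem.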