Investigating the Impact of Data Volume and Domain Similarity on Transfer Learning Applications
Transfer learning builds systems that recognize and apply knowledge learned in previous tasks (the source task) to new tasks or domains (the target task) that share some commonality. Two important factors that affect the performance of transfer learning models are: (a) the size of the target dataset and (b) the similarity in distribution between the source and target domains. Thus far there has been little investigation into just how important these factors are. In this paper, we investigated the impact of target dataset size and source/target domain similarity on model performance through a series of experiments. We found that more data is always beneficial, and that model performance improved linearly with the log of data size until the data was exhausted. As the source and target domains diverge, more data is required and fine-tuning yields better performance than feature extraction. When the source and target domains are similar and the target dataset is small, fine-tuning and feature extraction yield equivalent performance. We hope that our study inspires further work in transfer learning, which remains a very important technique for developing practical machine learning applications in business domains.
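The two transfer strategies compared in the abstract differ only in which pretrained weights are updated on the target task. The following minimal sketch (not from the paper) illustrates this with PyTorch/torchvision, assuming an ImageNet-pretrained ResNet-50 as the source model and a hypothetical NUM_TARGET_CLASSES for the target task:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 10  # placeholder; depends on the target task

# Backbone pretrained on the source domain (ImageNet here).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# --- Feature extraction: freeze all pretrained weights ---
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head; its new parameters stay trainable.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# --- Fine-tuning (the alternative): keep all weights trainable ---
# for param in model.parameters():
#     param.requires_grad = True

# Optimize only the parameters that require gradients; under feature
# extraction this is just the new head, under fine-tuning it is the
# whole network.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

Under feature extraction the pretrained representation is reused as-is, which is why the paper finds it competitive only when the source and target domains are similar; fine-tuning can adapt the representation itself, at the cost of needing more target data.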