Multi-Modal Representation Learning with Self-Adaptive Thresholds for Commodity Verification

08/23/2022
by   Chenchen Han, et al.

In this paper, we propose a method to identify identical commodities. In e-commerce scenarios, commodities are usually described by both images and text. By definition, identical commodities are those that share identical key attributes and are cognitively identical to consumers. There are two main challenges: 1) extracting and fusing multi-modal representations; 2) verifying whether two commodities are identical by comparing the distance between their representations with a threshold. To address these problems, we propose an end-to-end identical-commodity verification method based on self-adaptive thresholds. We use a dual-stream network to extract commodity embeddings and threshold embeddings separately and then concatenate them to obtain the commodity representation. Our method obtains a different threshold for each commodity while keeping the entire commodity representation indexable. We experimentally validate the effectiveness of our multi-modal feature fusion and the advantages of self-adaptive thresholds. In addition, our method achieves an F1 score of 0.8936 and takes 3rd place on the leaderboard for the second task of the CCKS-2022 Knowledge Graph Evaluation for Digital Commerce Competition. Code and pretrained models are available at https://github.com/hanchenchen/CCKS2022-track2-solution.
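The abstract does not spell out implementation details, but a minimal PyTorch-style sketch of the dual-stream idea might look as follows. The module names, dimensions, and the way the two threshold embeddings are pooled into a scalar threshold are assumptions for illustration only, not the authors' actual design (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualStreamVerifier(nn.Module):
    """Sketch: one stream yields the commodity embedding, the other a
    per-commodity threshold embedding; both are concatenated into a single
    vector so each commodity can still be stored in an index."""

    def __init__(self, feat_dim: int = 768, embed_dim: int = 256, thresh_dim: int = 32):
        super().__init__()
        # Commodity stream: maps fused multi-modal features to an embedding.
        self.commodity_head = nn.Linear(feat_dim, embed_dim)
        # Threshold stream: maps the same features to a threshold embedding.
        self.threshold_head = nn.Linear(feat_dim, thresh_dim)

    def forward(self, fused_features: torch.Tensor) -> torch.Tensor:
        emb = F.normalize(self.commodity_head(fused_features), dim=-1)
        thr = self.threshold_head(fused_features)
        # Concatenate into one representation per commodity.
        return torch.cat([emb, thr], dim=-1)


def verify(rep_a: torch.Tensor, rep_b: torch.Tensor, embed_dim: int = 256) -> torch.Tensor:
    """Decide whether two commodities are identical by comparing the
    embedding distance against a threshold derived from the two
    threshold embeddings (the pooling here is hypothetical)."""
    emb_a, thr_a = rep_a[..., :embed_dim], rep_a[..., embed_dim:]
    emb_b, thr_b = rep_b[..., :embed_dim], rep_b[..., embed_dim:]
    distance = 1.0 - F.cosine_similarity(emb_a, emb_b, dim=-1)
    threshold = torch.sigmoid((thr_a * thr_b).sum(dim=-1))
    return distance < threshold
```

Because the threshold embedding is concatenated into the stored vector, a single representation per commodity suffices for indexing, while a pair-specific threshold is recovered only at comparison time.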
