Learning from the Best: Contrastive Representations Learning Across Sensor Locations for Wearable Activity Recognition

10/04/2022
by Vitor Fortes Rey, et al.

We address the well-known wearable activity recognition problem of having to work with sensors that are non-optimal in terms of the information they provide but that must be used due to wearability/usability concerns (e.g. the need to work with wrist-worn IMUs because they are embedded in most smartwatches). To mitigate this problem, we propose a method that facilitates the use of information from sensors that are only present during the training process and are unavailable during the later use of the system. The method transfers information from the source sensors to the latent representation of the target sensor data through a contrastive loss that is combined with the classification loss during joint training. We evaluate the method on the well-known PAMAP2 and Opportunity benchmarks for different combinations of source and target sensors, showing average (over all activities) F1 score improvements of between 5% and 13%, going up to 20% and above for activities that particularly benefit from the additional information.
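The joint objective described above, a contrastive alignment term between the source and target latent representations combined with the usual classification loss, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the InfoNCE-style contrastive term, the temperature, and the weighting factor `alpha` are assumptions, with time-aligned windows from the two sensor locations treated as positive pairs.

```python
import numpy as np

def info_nce(z_tgt, z_src, temperature=0.1):
    """Contrastive alignment: pull each target embedding toward the
    time-aligned source embedding (diagonal) and away from the rest."""
    z_tgt = z_tgt / np.linalg.norm(z_tgt, axis=1, keepdims=True)
    z_src = z_src / np.linalg.norm(z_src, axis=1, keepdims=True)
    logits = z_tgt @ z_src.T / temperature          # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # positives on the diagonal

def cross_entropy(cls_logits, labels):
    """Standard classification loss on the target branch."""
    cls_logits = cls_logits - cls_logits.max(axis=1, keepdims=True)
    log_probs = cls_logits - np.log(np.exp(cls_logits).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def joint_loss(z_tgt, z_src, cls_logits, labels, alpha=0.5):
    """Classification loss plus weighted contrastive transfer term;
    alpha is a hypothetical trade-off weight."""
    return cross_entropy(cls_logits, labels) + alpha * info_nce(z_tgt, z_src)
```

At test time only the target sensor (e.g. the wrist IMU) is needed: the source branch and the contrastive term exist solely to shape the target encoder's latent space during training.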
