Are Neighbors Enough? Multi-Head Neural n-gram can be Alternative to Self-attention

07/27/2022
by   Mengsay Loem, et al.

The impressive performance of the Transformer has been attributed to self-attention, in which dependencies on the entire input sequence are considered at every position. In this work, we reformulate the neural n-gram model, which focuses on only several surrounding representations of each position, with the multi-head mechanism as in Vaswani et al. (2017). Through experiments on sequence-to-sequence tasks, we show that replacing self-attention in the Transformer with the multi-head neural n-gram achieves comparable or better performance than the Transformer. Through various analyses of our proposed method, we find that the multi-head neural n-gram is complementary to self-attention, and that combining the two further improves the performance of the vanilla Transformer.
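To make the idea concrete, below is a minimal, hypothetical sketch of a multi-head neural n-gram layer used as a drop-in replacement for the self-attention sub-layer. It is not the paper's exact formulation: the class name, the learned per-offset mixing weights, and the causal left-padded window are illustrative assumptions. It only captures the two properties the abstract describes, namely that each position attends to a few surrounding representations and that the mixing is split across heads as in Vaswani et al. (2017).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadNeuralNgram(nn.Module):
    """Hypothetical multi-head neural n-gram layer (a sketch, not the paper's code)."""

    def __init__(self, d_model: int, n_heads: int, ngram: int = 4):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.ngram = ngram  # window size: current position plus (ngram - 1) neighbors
        self.in_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Assumption: one learned weight per head and per relative offset in the window.
        self.offset_weight = nn.Parameter(torch.zeros(n_heads, ngram))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        h = self.in_proj(x).view(b, t, self.n_heads, self.d_head)
        # Pad on the left so position i sees positions i-(ngram-1) .. i
        # (a causal window; an encoder variant could pad on both sides instead).
        h = F.pad(h, (0, 0, 0, 0, self.ngram - 1, 0))
        # Stack the ngram shifted views: (batch, seq_len, ngram, heads, d_head)
        windows = torch.stack([h[:, k:k + t] for k in range(self.ngram)], dim=2)
        # Normalize the per-offset weights within each head and mix the window.
        w = torch.softmax(self.offset_weight, dim=-1)  # (heads, ngram)
        mixed = torch.einsum("btkhd,hk->bthd", windows, w)
        return self.out_proj(mixed.reshape(b, t, -1))


# Usage: swap this layer in for the self-attention sub-layer of a Transformer block.
layer = MultiHeadNeuralNgram(d_model=512, n_heads=8, ngram=4)
out = layer(torch.randn(2, 10, 512))  # -> shape (2, 10, 512)
```

Unlike self-attention, whose cost grows quadratically with sequence length, such a windowed layer touches only a fixed number of neighbors per position, which is the trade-off the paper's title alludes to.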
