Applying SoftTriple Loss for Supervised Language Model Fine Tuning
We introduce a new loss function, TripleEntropy, to improve classification performance when fine-tuning general-knowledge pre-trained language models; it combines cross-entropy and SoftTriple loss. This loss function improves the robust RoBERTa baseline model fine-tuned with cross-entropy loss by about 0.02–2.29%. The fewer samples in the training dataset, the higher the gain: for small-sized datasets the improvement is 0.78%, and for extra-large datasets it is 0.04%.
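To make the idea concrete, the following is a minimal PyTorch-style sketch of how a cross-entropy term and a SoftTriple term over the encoder's pooled embedding might be combined. The class names `SoftTripleLoss` and `TripleEntropyLoss`, the `beta` weighting, and all hyperparameter defaults are illustrative assumptions rather than the paper's exact formulation, and the SoftTriple center-merging regularizer is omitted for brevity.

```python
# Hypothetical sketch of a combined cross-entropy + SoftTriple objective;
# names, defaults, and the beta weighting are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftTripleLoss(nn.Module):
    """SoftTriple loss (Qian et al., 2019): each class owns K learnable
    centers; class similarity is a softmax-weighted mix over its centers."""

    def __init__(self, dim, num_classes, centers_per_class=10,
                 la=20.0, gamma=0.1, margin=0.01):
        super().__init__()
        self.num_classes = num_classes
        self.K = centers_per_class
        self.la = la            # scaling applied to the class logits
        self.gamma = gamma      # temperature over a class's centers
        self.margin = margin    # margin subtracted from the true class
        # One weight vector per (class, center) pair, stored column-wise.
        self.centers = nn.Parameter(
            torch.randn(dim, num_classes * centers_per_class))

    def forward(self, embeddings, labels):
        # Cosine similarity between normalized embeddings and centers.
        x = F.normalize(embeddings, dim=1)              # (B, dim)
        w = F.normalize(self.centers, dim=0)            # (dim, C*K)
        sim = x @ w                                     # (B, C*K)
        sim = sim.view(-1, self.num_classes, self.K)    # (B, C, K)
        # Soft assignment over the K centers of each class.
        probs = F.softmax(sim / self.gamma, dim=2)
        class_sim = (probs * sim).sum(dim=2)            # (B, C)
        # Subtract the margin from the ground-truth class only.
        margin = torch.zeros_like(class_sim)
        margin.scatter_(1, labels.unsqueeze(1), self.margin)
        logits = self.la * (class_sim - margin)
        return F.cross_entropy(logits, labels)


class TripleEntropyLoss(nn.Module):
    """Weighted sum of standard cross-entropy over the classifier logits
    and SoftTriple loss over the pooled embedding (beta is an assumed knob)."""

    def __init__(self, dim, num_classes, beta=0.5):
        super().__init__()
        self.soft_triple = SoftTripleLoss(dim, num_classes)
        self.beta = beta

    def forward(self, logits, embeddings, labels):
        ce = F.cross_entropy(logits, labels)
        st = self.soft_triple(embeddings, labels)
        return (1.0 - self.beta) * ce + self.beta * st
```

In use, the classifier head's logits and the encoder's pooled (e.g. [CLS]) representation would both be passed to the combined loss at each training step, so the embedding space is shaped by the SoftTriple term while the classifier is still trained with cross-entropy.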