TabNet: Attentive Interpretable Tabular Learning
We propose TabNet, a novel high-performance, interpretable deep learning architecture for tabular data. TabNet uses a sequential attention mechanism to choose which features to reason from at each decision step, and then aggregates the processed information into the final decision. Explicitly selecting a sparse subset of features enables more efficient learning, since the model capacity at each decision step is devoted entirely to the most relevant features, and it also makes decisions more interpretable via visualization of the selection masks. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of tabular learning datasets while yielding interpretable feature attributions and insights into global model behavior.
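The mechanism described above can be sketched in a toy form: at each decision step a mask selects features, a "prior" term discourages reusing features already attended to in earlier steps, and the per-step outputs are summed into the final decision. This is an illustrative NumPy sketch under simplifying assumptions, not the paper's implementation: the weights are random stand-ins for trained parameters, softmax replaces the sparsemax used in TabNet, and the prior relaxation is fixed rather than learned.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sequential_attention_sketch(features, n_steps=3, seed=0):
    """Toy sequential attention over tabular features.

    At each decision step, a feature-selection mask is computed,
    down-weighted by a prior that tracks how much each feature has
    already been used, and applied to the input; the masked features'
    per-step contributions are summed into the final decision.
    All weight matrices are random stand-ins for trained parameters.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = features.shape
    prior = np.ones_like(features)          # 1 = feature not yet used
    decision = np.zeros(n_samples)
    masks = []                              # kept for interpretability
    for _ in range(n_steps):
        attn_logits = features @ rng.normal(size=(n_features, n_features))
        mask = softmax(attn_logits) * prior # soft feature-selection mask
        prior = prior * (1.0 - mask)        # discourage reusing features
        masked = features * mask            # sparse-ish selected input
        decision += masked @ rng.normal(size=n_features)  # step output
        masks.append(mask)
    return decision, masks
```

Inspecting the returned `masks` (one per decision step) is the sketch's analogue of the selection-mask visualizations used for interpretability: each row shows which features that step attended to for a given sample.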