Embedding Complexity In the Data Representation Instead of In the Model: A Case Study Using Heterogeneous Medical Data

02/12/2018
by Jacek M. Bajor, et al.

Electronic Health Records have become popular sources of data for secondary research, but their use is hampered by the amount of effort it takes to overcome the sparsity, irregularity, and noise that they contain. Modern learning architectures can remove the need for expert-driven feature engineering, but not the need for expert-driven preprocessing to abstract away the inherent messiness of clinical data. This preprocessing effort is often the dominant component of a typical clinical prediction project. In this work we propose using semantic embedding methods to directly couple the raw, messy clinical data to downstream learning architectures with truly minimal preprocessing. We examine this step from the perspective of capturing and encoding complex data dependencies in the data representation instead of in the model, which has the nice benefit of allowing downstream processing to be done with fast, lightweight, and simple models accessible to researchers without machine learning expertise. We demonstrate with three typical clinical prediction tasks that the highly compressed, embedded data representations capture a large amount of useful complexity, although in some cases the compression is not completely lossless.
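The core idea, mapping raw clinical codes to dense embeddings so that variable-length, messy records become fixed-size vectors for lightweight downstream models, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the codes, vocabulary, and random placeholder vectors (standing in for trained embeddings such as word2vec over patient records) are all assumptions.

```python
import numpy as np

# Hypothetical vocabulary of raw clinical codes (diagnoses, labs, meds).
# In the paper's setting these embeddings would be learned semantically;
# here they are random placeholders just to show the data flow.
rng = np.random.default_rng(0)
vocab = ["ICD9_250.00", "ICD9_401.9", "LAB_GLUCOSE_HIGH", "MED_METFORMIN"]
dim = 8
embedding = {code: rng.normal(size=dim) for code in vocab}

def patient_vector(codes):
    """Embed a raw, variable-length list of clinical codes as the
    mean of the per-code embedding vectors (one simple aggregation)."""
    return np.mean([embedding[c] for c in codes], axis=0)

# Two messy, differently sized records map to same-shape dense vectors,
# ready for a fast, simple downstream model (e.g. logistic regression).
p1 = patient_vector(["ICD9_250.00", "MED_METFORMIN", "LAB_GLUCOSE_HIGH"])
p2 = patient_vector(["ICD9_401.9"])
print(p1.shape, p2.shape)  # both (8,)
```

The point of the sketch is the division of labor the abstract describes: complexity lives in the embedding step, so the downstream model can stay simple.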
