Dual Memory Neural Computer for Asynchronous Two-view Sequential Learning

02/02/2018
by   Hung Le, et al.

One of the core tasks in multi-view learning is to capture all relations among views. For sequential data, these relations not only span across views but also extend throughout the view length, forming long-term intra-view and cross-view interactions. In this paper, we present a new memory-augmented neural network model that aims to capture these complex interactions between two asynchronous sequential views. Our model uses two neural encoders that read from and write to two external memories while encoding the input views. The intra-view interactions and long-term dependencies are captured by the use of memories during this encoding process. Our system supports two modes of memory access: late-fusion and early-fusion, corresponding to late and early cross-view interactions. In late-fusion mode, the two memories are separate, each containing only view-specific content. In early-fusion mode, the two memories share the same addressing space, allowing cross-memory access. In both cases, the knowledge from the memories is finally synthesized by a decoder to make predictions over the output space. The resulting dual memory neural computer is demonstrated on a variety of experiments, from a synthetic sum-of-two-sequences task to drug prescription and disease progression tasks in healthcare. The results show improved performance over both traditional algorithms and deep learning methods designed for multi-view problems.
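To make the distinction between the two memory-access modes concrete, here is a minimal toy sketch. All names (`read`, `encode`, `dual_memory_predict`), the dot-product content addressing, and the sum-style decoder are illustrative assumptions for exposition, not the architecture described in the paper — the point is only that late fusion reads each view-specific memory separately, while early fusion addresses one shared slot space spanning both memories.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def read(memory, key):
    # Content-based addressing: attention weights over memory slots,
    # then a weighted sum of slot contents.
    w = softmax([dot(slot, key) for slot in memory])
    dim = len(memory[0])
    return [sum(w[i] * memory[i][d] for i in range(len(memory)))
            for d in range(dim)]

def encode(view, memory):
    # Toy "encoder": write each timestep's vector into its own slot.
    for x in view:
        memory.append(list(x))

def dual_memory_predict(view_a, view_b, key, mode="late"):
    mem_a, mem_b = [], []
    encode(view_a, mem_a)
    encode(view_b, mem_b)
    if mode == "late":
        # Late fusion: separate, view-specific memories;
        # cross-view interaction happens only after reading.
        r_a, r_b = read(mem_a, key), read(mem_b, key)
    else:
        # Early fusion: one shared addressing space over both
        # memories, so a single read can attend across views.
        shared = mem_a + mem_b
        r_a = r_b = read(shared, key)
    # Toy "decoder": combine the two read vectors.
    return [a + b for a, b in zip(r_a, r_b)]
```

In late-fusion mode each read is confined to one view's slots, whereas in early-fusion mode the softmax competes across all slots of both views — the structural difference the abstract attributes to late versus early cross-view interaction.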
