AAII Day Presentation 11: Yaqiong Li
Recurrent Dirichlet Belief Networks for Interpretable Dynamic Relational Data Modelling
TIME: 3:40pm - 4:00pm
SPEAKER: Yaqiong Li, Australian Artificial Intelligence Institute
ABSTRACT:
The Dirichlet Belief Network (DirBN) has recently been proposed as a promising approach to learning interpretable deep latent representations for objects. In this work, we leverage its interpretable modelling architecture and propose a deep dynamic probabilistic framework, the Recurrent Dirichlet Belief Network (Recurrent-DBN), to study interpretable hidden structures in dynamic relational data.
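To give a flavour of the layer-wise construction that makes DirBN-style models interpretable, here is a minimal sketch, not the published model: each layer's Dirichlet parameters are formed as a weighted mixture of the distributions in the layer above. All sizes, the gamma-distributed connection weights, and the fixed number of nodes per layer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_layers, n_nodes, V = 3, 6, 20          # toy sizes: layers, nodes per layer, dimension

# top layer: one Dirichlet-distributed representation per node
layers = [rng.dirichlet(np.ones(V), size=n_nodes)]

for _ in range(1, n_layers):
    upper = layers[-1]                                     # (n_nodes, V)
    beta = rng.gamma(1.0, 1.0, size=(n_nodes, n_nodes))    # toy connection weights
    # each lower-layer node's Dirichlet parameters are a weighted mixture
    # of the upper layer's distributions, keeping every layer on the simplex
    alpha = beta @ upper                                   # (n_nodes, V), all positive
    layers.append(np.vstack([rng.dirichlet(alpha[i]) for i in range(n_nodes)]))

print(layers[-1].shape)   # (6, 20): bottom-layer latent representations
```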
The proposed Recurrent-DBN has the following merits: (1) it infers interpretable and organised hierarchical latent structures for objects within and across time steps; (2) it enables recurrent long-term temporal dependence modelling, which outperforms the first-order Markov descriptions used in most dynamic probabilistic frameworks. In addition, we develop a new inference strategy, which first propagates latent counts upward and backward and then samples variables downward and forward, to enable efficient Gibbs sampling for the Recurrent-DBN. We apply the Recurrent-DBN to dynamic relational data problems.
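The following is a schematic sketch of the two-phase sweep structure described above, under assumptions of my own rather than the authors' derivations: latent counts are propagated upward through the hierarchy and backward in time, then variables are resampled downward and forward. The toy multinomial count allocation, the simple recurrent prior, and names such as propagate_counts and sample_layer are all illustrative placeholders for the actual augmentation-based updates.

```python
import numpy as np

rng = np.random.default_rng(1)

n_steps, n_layers, n_nodes, K = 5, 3, 10, 4     # toy sizes

# toy latent counts and Dirichlet weights per (time step, layer)
counts = [[rng.integers(0, 5, size=(n_nodes, K)) for _ in range(n_layers)]
          for _ in range(n_steps)]
theta = [[rng.dirichlet(np.ones(K), size=n_nodes) for _ in range(n_layers)]
         for _ in range(n_steps)]

def propagate_counts(child_counts, parent_theta, rng):
    """Allocate each node's counts to the parent layer with a multinomial draw
    (a stand-in for the actual count-propagation step of the sampler)."""
    out = np.zeros_like(child_counts)
    for i in range(child_counts.shape[0]):
        p = parent_theta[i] / parent_theta[i].sum()
        out[i] = rng.multinomial(int(child_counts[i].sum()), p)
    return out

def sample_layer(layer_counts, prior, rng):
    """Resample Dirichlet weights conditioned on the propagated counts."""
    prior = np.broadcast_to(prior, layer_counts.shape)
    return np.vstack([rng.dirichlet(prior[i] + layer_counts[i])
                      for i in range(layer_counts.shape[0])])

# Phase 1: propagate latent counts upward through layers and backward in time
for t in reversed(range(n_steps)):
    for l in range(1, n_layers):                       # upward through the hierarchy
        counts[t][l] = propagate_counts(counts[t][l - 1], theta[t][l], rng)
    if t > 0:                                          # backward temporal propagation
        counts[t - 1][-1] += propagate_counts(counts[t][-1], theta[t - 1][-1], rng)

# Phase 2: sample variables downward through layers and forward in time
for t in range(n_steps):
    for l in reversed(range(n_layers)):                # downward toward observations
        # toy recurrent prior: condition on the previous time step's weights
        prior = (1.0 + theta[t - 1][l]) if t > 0 else 1.0
        theta[t][l] = sample_layer(counts[t][l], prior, rng)
```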
Extensive experimental results on real-world data validate the advantages of the Recurrent-DBN over state-of-the-art models in both interpretable latent structure discovery and link prediction performance.