A principled approach to characterizing the hidden structure of networks is to formulate generative models and then infer their parameters from data. When the desired structure is composed of modules or “communities”, a popular choice for this task is the stochastic block model, in which nodes are divided into groups and the placement of edges is conditioned on the group memberships. In this talk, we will present a nonparametric Bayesian inference framework based on a microcanonical formulation of the stochastic block model. We show how this simple model variation simultaneously allows for two important improvements over more traditional inference approaches:

1. Deeper Bayesian hierarchies, with noninformative priors replaced by sequences of priors and hyperpriors, which not only remove limitations that seriously degrade the inference of large networks, but also reveal structures at multiple scales.

2. A very efficient inference algorithm that scales well not only to networks with a large number of nodes and edges, but also to an unlimited number of groups.

We also show how this approach can be used to sample group hierarchies from the posterior distribution and to perform model selection, and how it can easily be generalized to networks with edge covariates and node annotations.
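As a practical illustration (not part of the talk itself), the graph-tool Python library implements nonparametric inference of nested stochastic block models of this kind. The sketch below is a minimal example under stated assumptions: the dataset ("football", a small network bundled with the library) and the number of MCMC sweeps are illustrative choices, not taken from the talk.

    import graph_tool.all as gt

    # Load a small example network bundled with graph-tool
    # (American college football games).
    g = gt.collection.data["football"]

    # Fit the nested (hierarchical) SBM by minimizing the description
    # length, which corresponds to maximizing the posterior probability
    # in the microcanonical formulation.
    state = gt.minimize_nested_blockmodel_dl(g)

    # Refine the fit with merge-split MCMC at zero temperature
    # (beta = infinity); 100 rounds is an arbitrary illustrative choice.
    for _ in range(100):
        state.multiflip_mcmc_sweep(beta=float("inf"), niter=10)

    # The description length can be compared across fits for model
    # selection: lower values indicate a more parsimonious model.
    print("description length:", state.entropy())

    # Print the inferred hierarchy of group memberships at each level.
    state.print_summary()

Sampling group hierarchies from the posterior, rather than maximizing it, amounts to running the same sweeps at beta = 1 and collecting the visited partitions.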
Contact: Keith Briggs or Richard G. Clegg (richard@richardclegg.org)