This is a copy of a post I had way back:
My training at Columbia was a condensed crash course in Bayesian statistics. It included, but was not limited to, the typical probabilistic graphical model thinking, generative model constructions, hierarchical extensions to all sorts of tools, stochastic processes (and often the measure theory entangled within them) and their modern nonparametric (as well as semi-parametric) applications. And inference and sampling, of course. Only a small crowd in academia at Columbia (back then) seemed to really care about neural nets, despite their widespread industrial success, so I had relatively little exposure to them until I started reading a lot about word embeddings lately.
Having followed the work of many famous Bayesian ML folks, like Yee Whye Teh, Mike Jordan, Max Welling and D.P. Kingma, I find the hybridization and integration of probabilistic generative models with neural nets more appealing than ever. Many of the fancy tools and smart tricks (e.g. variational inference, Gaussian processes, etc.) have found adopted versions in the neural net community, which sparked quite some waves in scaling up inference and building more robust models to enhance NNs.
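One concrete example of such an adoption is the reparameterization trick at the heart of Kingma and Welling's variational autoencoder, which turns variational inference into something a neural net can be trained through. Below is a minimal numpy sketch (all function names are illustrative, not from any particular library): a latent sample is written as a deterministic function of the variational parameters plus external noise, and the KL term of the ELBO is computed in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian(mu, log_var, rng):
    """Reparameterization: draw z ~ N(mu, sigma^2) as a deterministic
    function of (mu, log_var) and external noise eps, so gradients can
    flow through the variational parameters."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Analytic KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions.
    This is the regularizer in the VAE's evidence lower bound."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Toy variational parameters for a 3-dimensional latent variable.
mu = np.array([0.5, -0.2, 0.1])
log_var = np.array([-1.0, 0.0, 0.3])

z = sample_gaussian(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)
```

In a full VAE, `mu` and `log_var` would be outputs of an encoder network, and a decoder's reconstruction likelihood of `z` would be added to the negative KL to form the objective; the sketch above isolates only the inference trick itself.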
OpenAI’s blog post on generative modeling certainly got me hooked. Unlike much of the work on supervised NNs, it focused heavily on unsupervised models, sometimes reframing the problem from an entirely different perspective (e.g. Generative Adversarial Networks), which is quite appealing. I may consider moving into this field in the future.
Now, in hindsight, there is a lot more to sit down and write about. I will be offering some short discussions on hybrid models very soon.