Dmitry Vetrov
Dmitry Vetrov (graduated from Moscow State University in 2003; PhD in 2006) is a research professor at the Higher School of Economics, Moscow, and a leading researcher at AIRI, Moscow. He is a founder and the head of the Bayesian Methods Research Group, which has become one of the strongest research groups in Russia. Three of his recent PhD students became researchers at DeepMind. His research focuses on combining the Bayesian framework with Deep Learning models. His group is also actively involved in developing more efficient algorithms for diffusion models, studying the properties of the loss landscape in deep neural networks, building scalable tools for stochastic optimization, applying tensor decomposition methods to large-scale Machine Learning, and improving conditional text generation models, among other topics.
Recent Publications
- HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Networks (2022) NeurIPS 2022
- Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes (2022) NeurIPS 2022
- FFC-SE: Fast Fourier Convolution for Speech Enhancement (2022) INTERSPEECH 2022
- Variational Autoencoders for Precoding Matrices with High Spectral Efficiency (2022) MOTOR 2022
- On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay (2021) NeurIPS 2021
- Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces (2021) NeurIPS 2021
- On Power Laws in Deep Ensembles (2020) NeurIPS 2020
- Involutive MCMC: One Way to Derive Them All (2020) ICML 2020
- Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics (2020) ICML 2020
- Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation (2020) UAI 2020
- Deterministic Decoding for Discrete Data in Variational Autoencoders (2020) AISTATS 2020
- Stochasticity in Neural ODEs: An Empirical Study (2020)
- Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning (2020) ICLR 2020
- Structured Sparsification of Gated Recurrent Neural Networks (2020) AAAI 2020