Dmitry Vetrov
Dmitry Vetrov (graduated from Moscow State University in 2003, PhD in 2006) is a professor of Computer Science at Constructor University, Bremen, and a research professor at the Higher School of Economics, Moscow. He is the founder and head of the Bayesian Methods Research Group, which became one of the strongest research groups in Russia. Three of his recent PhD students became researchers at DeepMind. His research focuses on combining the Bayesian framework with deep learning models. His group is also actively involved in developing more efficient algorithms for diffusion models, studying the properties of the loss landscape in deep neural networks, building scalable tools for stochastic optimization, applying tensor decomposition methods to large-scale machine learning, and improving conditional text generation models, among other topics.
Recent Publications
- Gradual Optimization Learning for Conformational Energy Minimization (ICLR 2024)
- The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing (CVPR 2024)
- Differentiable Rendering with Reparameterized Volume Sampling (AISTATS 2024)
- Generative Flow Networks as Entropy-Regularized RL (AISTATS 2024)
- To Stay or Not to Stay in the Pre-train Basin: Insights on Ensembling in Transfer Learning (NeurIPS 2023)
- Star-Shaped Denoising Diffusion Probabilistic Models (NeurIPS 2023)
- Entropic Neural Optimal Transport via Diffusion Processes (NeurIPS 2023)
- StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation (ICCV 2023)
- UnDiff: Unsupervised Voice Restoration with Unconditional Diffusion Model (Interspeech 2023)
- HIFI++: A Unified Framework for Bandwidth Extension and Speech Enhancement (ICASSP 2023)
- MARS: Masked Automatic Ranks Selection in Tensor Decompositions (AISTATS 2023)
- HyperDomainNet: Universal Domain Adaptation for Generative Adversarial Network (NeurIPS 2022)
- Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes (NeurIPS 2022)
- FFC-SE: Fast Fourier Convolution for Speech Enhancement (Interspeech 2022)
- Variational Autoencoders for Precoding Matrices with High Spectral Efficiency (MOTOR 2022)