Three young researchers from our team have won the Ilya Segalovich Scholarship. Established by the Faculty of Computer Science (HSE University) in 2015, the scholarship rewards academic and scientific achievement. Congratulations to our winners, Artyom Gadetsky, Denis Rakitin and Sergey Troshin!
Dmitry Vetrov was awarded the prestigious Research Excellence Award Russia 2021 in the category ‘Contribution to the development of a globally relevant research topic’ for his work on convolutional neural networks.
The Research Excellence Award Russia is presented by Elsevier, the world's largest publisher of scientific and technical literature and provider of information solutions in the field of science and education.
Dmitry Vetrov dedicated the award to his students, past and current.
Two papers were accepted to the 2021 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2021):
- “On the Embeddings of Variables in Recurrent Neural Networks for Source Code” by Nadezhda Chirkova;
- “A Simple Approach for Handling Out-of-Vocabulary Identifiers in Deep Learning for Source Code” by Nadezhda Chirkova and Sergey Troshin.
The paper "Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics" by Arseny Kuznetsov, Pavel Shvechikov, Alexander Grishin, and Dmitry Vetrov won a bronze award at the Samsung Best Paper Award 2020, the central academic competition at Samsung, which encourages its engineers and researchers to publish world-class papers. Our members' paper was the only one from Russia to reach the final stage, alongside 107 other papers selected from 2,180 initial submissions from all over the world.
The paper "On Power Laws in Deep Ensembles" by Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan and Dmitry Vetrov has been accepted to NeurIPS 2020 for a spotlight presentation.
Dmitry Vetrov is the first Russian scientist elected to the ELLIS Society (the European Laboratory for Learning and Intelligent Systems), a leading European organization in the field of artificial intelligence.
Membership in the ELLIS Society, Dmitry Vetrov says, will open up more opportunities for the faculty's undergraduate and doctoral students.
Two papers have been accepted to the conference track of ICML 2020:
- Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics by Arseny Kuznetsov, Pavel Shvechikov, Alexander Grishin, Dmitry Vetrov
- Involutive MCMC: One Way to Derive Them All by Kirill Neklyudov, Dmitry Vetrov in collaboration with Max Welling and Evgenii Egorov
And the paper "On Power Laws in Deep Ensembles" by Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, and Dmitry Vetrov has been accepted to the Workshop on Uncertainty and Robustness in Deep Learning, ICML 2020
The paper "Greedy Policy Search: A Simple Baseline for Learnable Test-Time Augmentation" by Dmitry Molchanov, Alexander Lyzhov, Yuliya Molchanova, Arsenii Ashukha and Dmitry Vetrov has been accepted to the conference track of UAI 2020.
The paper "Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning" by Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov and Dmitry Vetrov has been accepted to the conference track of ICLR 2020, and "Deterministic Decoding for Discrete Data in Variational Autoencoders" by Daniil Polykovskiy and Dmitry Vetrov has been accepted to AISTATS 2020.
The prize was awarded for a series of papers on machine learning methods for predicting structured objects. The series comprises 13 papers from 2011–2019 published at leading international conferences (NeurIPS, ICML, CVPR, ICCV, ICLR) and in the leading journals IEEE TPAMI and IJCV.
Two papers have been accepted to the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020):
- Structured Sparsification of Gated Recurrent Neural Networks by Ekaterina Lobacheva, Nadezhda Chirkova, Aleksandr Markovich, and Dmitry Vetrov
- Low-variance Black-box Gradient Estimates for the Plackett-Luce Distribution by Artyom Gadetsky, Kirill Struminsky, Dmitry Vetrov in collaboration with Christopher Robinson and Novi Quadrianto.
We've got several papers accepted to NeurIPS workshops:
- "Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning" by Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov and Dmitry Vetrov has been accepted to the Bayesian Deep Learning Workshop.
- "Low-variance Gradient Estimates for the Plackett-Luce Distribution" by Artyom Gadetsky, Kirill Struminsky, Novi Quadrianto and Dmitry Vetrov in collaboration with Christopher Robinson has been accepted to the Bayesian Deep Learning Workshop.
- "Unsupervised Domain Adaptation with Shared Latent Dynamics for Reinforcement Learning" by Evgenii Nikishin, Arsenii Ashukha and Dmitry Vetrov has also been accepted to the Bayesian Deep Learning Workshop.
- "Structured Sparsification of Gated Recurrent Neural Networks" by Ekaterina Lobacheva, Nadezhda Chirkova, Alexander Markovich and Dmitry Vetrov has been accepted to the workshop on Context and Compositionality in Biological and Artificial Neural Systems.
- Finally, Max Kochurov contributed to the "PyMC4: Exploiting Coroutines for Implementing a Probabilistic Programming Framework" paper accepted to the workshop on Program Transformations.
This year we've doubled our presence at NeurIPS with four papers accepted:
- Importance Weighted Hierarchical Variational Inference by Artem Sobolev and Dmitry Vetrov.
- The Implicit Metropolis-Hastings Algorithm by Kirill Neklyudov and Dmitry Vetrov in collaboration with Evgenii Egorov.
- A Simple Baseline for Bayesian Uncertainty in Deep Learning by Timur Garipov and Dmitry Vetrov in collaboration with Wesley Maddox, Pavel Izmailov and Andrew Gordon Wilson.
- A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models by Maxim Kuznetsov, Daniil Polykovskiy and Dmitry Vetrov in collaboration with Alexander Zhebrak.
Good academic service is not only about producing novel research, but also about providing critical assessment of others' work. We're proud that Kirill Struminsky, Ekaterina Lobacheva, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov and Dmitry Kropotov were recognized as top-50% reviewers.
Insilico Medicine published an article in Nature Biotechnology co-authored by our members Maxim Kuznetsov and Daniil Polykovskiy. The paper describes a timed challenge in which a new machine learning system, Generative Tensorial Reinforcement Learning (GENTRL), designed six novel inhibitors of DDR1, a kinase target implicated in fibrosis and other diseases, in 21 days. Four compounds were active in biochemical assays, two were validated in cell-based assays, and one lead candidate demonstrated favorable pharmacokinetics in mice.
It's that time of year again: once more, students from all over the world gathered in Moscow to participate in Deep|Bayes 2019, a summer school on Bayesian Deep Learning. Just like last year, the school featured both lectures and practical assignments. We were also fortunate to host several invited speakers: Novi Quadrianto from the University of Sussex and the Higher School of Economics, Maurizio Filippone from EURECOM, Francisco Jesus Rodriguez Ruiz from Columbia University and the University of Cambridge, Andrey Malinin from the University of Cambridge, and Sergey Bartunov from DeepMind.
We are looking for a postdoc to join our group! Please see details here.
Once again, we're organizing an international summer school on Bayesian Deep Learning, to be held in Moscow on August 20–25. Head over to deepbayes.ru to view last year's videos and practical assignments, and to apply to this year's run.
We got 3 papers accepted to ICLR 2019:
- Variational Autoencoder with Arbitrary Conditioning by Oleg Ivanov, Michael Figurnov and Dmitry Vetrov;
- Variance Networks: When Expectation Does Not Meet Your Expectations by Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov;
- The Deep Weight Prior by Andrei Atanov, Arsenii Ashukha, Kirill Struminsky and Dmitry Vetrov in collaboration with Max Welling.
This year's NeurIPS conference turned out to be a very fruitful one! We've had:
- Two papers accepted, one of them being a spotlight
- Quantifying Learning Guarantees for Convex but Inconsistent Surrogates by Kirill Struminsky and Anton Osokin in collaboration with Simon Lacoste-Julien
- Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs by Timur Garipov, Dmitrii Podoprikhin and Dmitry Vetrov in collaboration with Pavel Izmailov and Andrew Gordon Wilson
- An invited talk by Dmitry Vetrov at the Bayesian Deep Learning workshop
- Three papers accepted to the aforementioned workshop
- Importance Weighted Hierarchical Variational Inference by Artem Sobolev and Dmitry Vetrov
- Variational Dropout via Empirical Bayes by Valery Kharitonov, Dmitry Molchanov and Dmitry Vetrov
- Subset-Conditioned Generation Using Variational Autoencoder With A Learnable Tensor-Train Induced Prior by Maksim Kuznetsov, Daniil Polykovskiy and Dmitry Vetrov in collaboration with Alexander Zhebrak
- A paper accepted to the Reinforcement Learning under Partial Observability workshop (contributed talk)
- Joint Belief Tracking and Reward Optimization through Approximate Inference by Pavel Shvechikov, Alexander Grishin, Arseny Kuznetsov, Alexander Fritzler and Dmitry Vetrov
- A paper accepted to the Compact Deep Neural Network Representation with Industrial Applications workshop
- Bayesian Sparsification of Gated Recurrent Neural Networks by Ekaterina Lobacheva, Nadezhda Chirkova and Dmitry Vetrov
We have 3 papers accepted to ACML 2018:
- Concorde: Morphological Agreement in Conversational Models by Daniil Polykovskiy in collaboration with Dmitry Soloviev and Sergey Nikolenko;
- ReSet: Learning Recurrent Dynamic Routing in ResNet-like Neural Networks by Iurii Kemaev, Daniil Polykovskiy and Dmitry Vetrov;
- Extracting Invariant Features From Images Using An Equivariant Autoencoder by Daniil Polykovskiy in collaboration with Denis Kuzminykh and Alexander Zhebrak.
Daniil Polykovskiy and Dmitry Vetrov, in collaboration with Insilico Medicine (Alexander Zhebrak, Yan Ivanenkov, Vladimir Aladinskiy, Polina Mamoshina, Marine Bozdaganyan, Alexander Aliper, Alex Zhavoronkov, and Artur Kadurin), have published the paper "Entangled Conditional Adversarial Autoencoder for de Novo Drug Discovery" in the journal Molecular Pharmaceutics, applying modern deep learning techniques to the problem of molecule generation.
We're happy to announce that our group got two papers accepted to NIPS 2018: "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs" (spotlight talk) by Timur Garipov*, Dmitrii Podoprikhin* and Dmitry Vetrov in collaboration with Pavel Izmailov* (our alumnus) and Andrew Gordon Wilson from Cornell University (* denotes equal contribution), and "Quantifying Learning Guarantees for Convex but Inconsistent Surrogates" by Kirill Struminsky and Anton Osokin (equal contribution) in collaboration with Simon Lacoste-Julien from the University of Montreal.
Our reviewers' contributions have also been recognized: Anton Osokin made it into the top 200 reviewers, and Dmitry Vetrov into the top 30%.
Over the course of the last week, more than a hundred students from all over the world gathered in Moscow to familiarize themselves with modern research on Bayesian Deep Learning. The school offered an intensive, in-depth immersion through both lectures and practical assignments. The lecturers included not only members of our group but also some prominent invited speakers: Max Welling from the University of Amsterdam, Maurizio Filippone from EURECOM, Alessandro Achille from the University of California, Los Angeles, and Sergey Bartunov and Michael Figurnov from DeepMind.
The slides, videos and practical assignments are available at deepbayes.ru.
A short paper "Bayesian Compression for Natural Language Processing" by Nadezhda Chirkova, Ekaterina Lobacheva and Dmitry Vetrov was accepted to EMNLP 2018, and will be presented in Brussels during the main conference.
Dmitry Vetrov was invited to give a talk at Google DeepMind, where he presented our group's recent results on Bayesian Deep Learning.
Samsung has officially opened its new Samsung AI Center in Moscow, where among others Dmitry Vetrov and our group will be involved in research on Bayesian Methods and Deep Learning.
A paper "Conditional Generators of Words Definitions" by Artyom Gadetsky, Ilya Yakubovskiy and Dmitry Vetrov was accepted to ACL 2018, and will be presented in Melbourne later this year.