News

  1. NeurIPS 2019 Workshops

    We've got several papers accepted to NeurIPS workshops:

    1. "Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning" by Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov and Dmitry Vetrov has been accepted to the Bayesian Deep Learning Workshop.
    2. "Low-variance Gradient Estimates for the Plackett-Luce Distribution" by Artyom Gadetsky, Kirill Struminsky, Novi Quadrianto and Dmitry Vetrov in collaboration with Christopher Robinson has been accepted to the Bayesian Deep Learning Workshop.
    3. "Unsupervised Domain Adaptation with Shared Latent Dynamics for Reinforcement Learning" by Evgenii Nikishin, Arsenii Ashukha and Dmitry Vetrov has also been accepted to the Bayesian Deep Learning Workshop.
    4. "Structured Sparsification of Gated Recurrent Neural Networks" by Ekaterina Lobacheva, Nadezhda Chirkova, Alexander Markovich and Dmitry Vetrov has been accepted to the workshop on Context and Compositionality in Biological and Artificial Neural Systems.
    5. Finally, Max Kochurov contributed to the "PyMC4: Exploiting Coroutines for Implementing a Probabilistic Programming Framework" paper accepted to the workshop on Program Transformations.
  2. NeurIPS 2019

    This year we've doubled our presence at NeurIPS with four papers accepted:

    1. Importance Weighted Hierarchical Variational Inference by Artem Sobolev and Dmitry Vetrov.
    2. The Implicit Metropolis-Hastings Algorithm by Kirill Neklyudov and Dmitry Vetrov in collaboration with Evgenii Egorov.
    3. A Simple Baseline for Bayesian Uncertainty in Deep Learning by Timur Garipov and Dmitry Vetrov in collaboration with Wesley Maddox, Pavel Izmailov and Andrew Gordon Wilson.
    4. A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models by Maxim Kuznetsov, Daniil Polykovskiy and Dmitry Vetrov in collaboration with Alexander Zhebrak.

    Good academic service is not only about producing novel research, but also about providing a critical assessment of others' work. We're proud that Kirill Struminsky, Ekaterina Lobacheva, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov and Dmitry Kropotov were recognized as top-50% reviewers.

  3. A paper published in Nature Biotechnology

    Insilico Medicine published an article in Nature Biotechnology coauthored by our members Maxim Kuznetsov and Daniil Polykovskiy. The paper describes a timed challenge in which a new machine learning system, Generative Tensorial Reinforcement Learning (GENTRL), designed six novel inhibitors of DDR1, a kinase target implicated in fibrosis and other diseases, in 21 days. Four compounds were active in biochemical assays, and two were validated in cell-based assays. One lead candidate was tested and demonstrated favorable pharmacokinetics in mice.

  4. Deep|Bayes 2019 is over

    It's that time of year again: students from all over the world gathered in Moscow to participate in Deep|Bayes 2019, a summer school on Bayesian Deep Learning. Just like last year, the school featured both lectures and practical assignments. We were also fortunate to have several invited speakers: Novi Quadrianto from the University of Sussex and the Higher School of Economics, Maurizio Filippone from EURECOM, Francisco Jesus Rodriguez Ruiz from Columbia University and the University of Cambridge, Andrey Malinin from the University of Cambridge, and Sergey Bartunov from DeepMind.

    For the official press release, see the HSE website. The slides, videos and practicals are available at deepbayes.ru.

  5. Call for Postdoc on Deep RL

    We are looking for a postdoc to join our group! Please see details here.

  6. Applications to Deep|Bayes 2019 are now open

    Once again, we're organizing an international summer school on Bayesian Deep Learning, to be held in Moscow on August 20–25. Head over to deepbayes.ru to view last year's videos and practical assignments, and to apply to this year's run.

  7. NeurIPS 2018 Results

    This year's NeurIPS conference turned out to be a very fruitful one! We've had

  8. Three papers accepted to ACML 2018

    We have three papers accepted to ACML 2018:

    1. Concorde: Morphological Agreement in Conversational Models by Daniil Polykovskiy in collaboration with Dmitry Soloviev and Sergey Nikolenko.
    2. ReSet: Learning Recurrent Dynamic Routing in ResNet-like Neural Networks by Iurii Kemaev, Daniil Polykovskiy and Dmitry Vetrov.
    3. Extracting Invariant Features From Images Using An Equivariant Autoencoder by Daniil Polykovskiy in collaboration with Denis Kuzminykh and Alexander Zhebrak.

  9. A paper is accepted to Molecular Pharmaceutics journal

    Daniil Polykovskiy and Dmitry Vetrov, in collaboration with Insilico Medicine (Alexander Zhebrak, Yan Ivanenkov, Vladimir Aladinskiy, Polina Mamoshina, Marine Bozdaganyan, Alexander Aliper, Alex Zhavoronkov, and Artur Kadurin), have published the paper "Entangled Conditional Adversarial Autoencoder for de Novo Drug Discovery" in the Molecular Pharmaceutics journal, where they applied modern Deep Learning techniques to the problem of molecule generation.

  10. NIPS 2018: Papers accepted, reviewers appreciated

    We're happy to announce that our group got two papers accepted to NIPS 2018: "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs" (spotlight talk) by Timur Garipov*, Dmitrii Podoprikhin* and Dmitry Vetrov in collaboration with Pavel Izmailov* (our alumnus) and Andrew Gordon Wilson from Cornell University (* denotes equal contribution), and "Quantifying Learning Guarantees for Convex but Inconsistent Surrogates" by Kirill Struminsky and Anton Osokin (equal contribution) in collaboration with Simon Lacoste-Julien from the University of Montreal.

    Also, our reviewers' contributions have been recognized: Anton Osokin made it into the top 200 reviewers, and Dmitry Vetrov got into the top 30%.

  11. First international run of Deep|Bayes, a summer school on Bayesian Deep Learning

    Over the course of the last week, more than a hundred students from all over the world gathered in Moscow to familiarize themselves with modern research on Bayesian Deep Learning. The school offered an intense, in-depth immersion through both lectures and practical assignments. The lecturers included not only members of our group but also some prominent invited speakers: Max Welling from the University of Amsterdam, Maurizio Filippone from EURECOM, Alessandro Achille from the University of California, Los Angeles, and Sergey Bartunov and Michael Figurnov from DeepMind.

    The slides, videos and practicals are available at deepbayes.ru.

  12. A short paper is accepted to EMNLP 2018

    A short paper "Bayesian Compression for Natural Language Processing" by Nadezhda Chirkova, Ekaterina Lobacheva and Dmitry Vetrov was accepted to EMNLP 2018, and will be presented in Brussels during the main conference.

  13. Professor Vetrov gave a talk at Google DeepMind

    Dmitry Vetrov was invited to give a talk at Google DeepMind, where he presented our group's recent results on Bayesian Deep Learning.

  14. BayesGroup has joined forces with Samsung AI Center

    Samsung has officially opened its new Samsung AI Center in Moscow, where, among others, Dmitry Vetrov and our group will be involved in research on Bayesian Methods and Deep Learning.

  15. A paper is accepted to ACL 2018

    A paper "Conditional Generators of Words Definitions" by Artyom Gadetsky, Ilya Yakubovskiy and Dmitry Vetrov was accepted to ACL 2018, and will be presented in Melbourne later this year.