News

  1. NeurIPS 2019

    This year we've doubled our presence at NeurIPS with four papers accepted:

    1. Importance Weighted Hierarchical Variational Inference by Artem Sobolev and Dmitry Vetrov.
    2. The Implicit Metropolis-Hastings Algorithm by Kirill Neklyudov and Dmitry Vetrov in collaboration with Evgenii Egorov.
    3. A Simple Baseline for Bayesian Uncertainty in Deep Learning by Timur Garipov and Dmitry Vetrov in collaboration with Wesley Maddox, Pavel Izmailov and Andrew Gordon Wilson.
    4. A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models by Maxim Kuznetsov, Daniil Polykovskiy and Dmitry Vetrov in collaboration with Alexander Zhebrak.

    Good academic service is not only about producing novel research, but also about providing critical assessment of others' work. We're proud that Kirill Struminsky, Ekaterina Lobacheva, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov and Dmitry Kropotov were recognized as top 50% reviewers.

  2. A paper published in Nature Biotechnology

    Insilico Medicine published an article in Nature Biotechnology co-authored by our members Maxim Kuznetsov and Daniil Polykovskiy. The paper describes a timed challenge in which a new machine learning system called Generative Tensorial Reinforcement Learning (GENTRL) designed six novel inhibitors of DDR1, a kinase target implicated in fibrosis and other diseases, in 21 days. Four compounds were active in biochemical assays, and two were validated in cell-based assays. One lead candidate was tested and demonstrated favorable pharmacokinetics in mice.

  3. Deep|Bayes 2019 is over

    It's that time of year again: students from all over the world gathered in Moscow to participate in Deep|Bayes 2019, a summer school on Bayesian Deep Learning. Just like last year, the school featured both lectures and practical assignments. We were also fortunate to have several invited speakers: Novi Quadrianto from the University of Sussex and the Higher School of Economics, Maurizio Filippone from EURECOM, Francisco Jesus Rodriguez Ruiz from Columbia University and the University of Cambridge, Andrey Malinin from the University of Cambridge, and Sergey Bartunov from DeepMind.

    For the official press release, see the HSE website. The slides, videos and practical assignments are available at deepbayes.ru.

  4. Call for Postdoc on Deep RL

    We are looking for a postdoc to join our group! Please see details here.

  5. Applications to Deep|Bayes 2019 are now open

    Once again, we're organizing an international summer school on Bayesian Deep Learning, to be held in Moscow on August 20–25. Head over to deepbayes.ru to view last year's videos and practical assignments, and to apply for this year's run.

  6. NeurIPS 2018 Results

    This year's NeurIPS conference turned out to be a very fruitful one! We've had …

  7. Three papers accepted to ACML 2018

    We have three papers accepted to ACML 2018: Concorde: Morphological Agreement in Conversational Models by Daniil Polykovskiy in collaboration with Dmitry Soloviev and Sergey Nikolenko; ReSet: Learning Recurrent Dynamic Routing in ResNet-like Neural Networks by Iurii Kemaev, Daniil Polykovskiy and Dmitry Vetrov; and Extracting Invariant Features From Images Using An Equivariant Autoencoder by Daniil Polykovskiy in collaboration with Denis Kuzminykh and Alexander Zhebrak.

  8. A paper accepted to the Molecular Pharmaceutics journal

    Daniil Polykovskiy and Dmitry Vetrov, in collaboration with Insilico Medicine (Alexander Zhebrak, Yan Ivanenkov, Vladimir Aladinskiy, Polina Mamoshina, Marine Bozdaganyan, Alexander Aliper, Alex Zhavoronkov, and Artur Kadurin), have published the paper "Entangled Conditional Adversarial Autoencoder for de Novo Drug Discovery" in the Molecular Pharmaceutics journal, applying modern deep learning techniques to the problem of molecule generation.

  9. NIPS 2018: Papers accepted, reviewers appreciated

    We're happy to announce that our group got two papers accepted to NIPS 2018: "Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs" (spotlight talk) by Timur Garipov*, Dmitrii Podoprikhin* and Dmitry Vetrov in collaboration with Pavel Izmailov* (our alumnus) and Andrew Gordon Wilson from Cornell University (* denotes equal contribution), and "Quantifying Learning Guarantees for Convex but Inconsistent Surrogates" by Kirill Struminsky and Anton Osokin (equal contribution) in collaboration with Simon Lacoste-Julien from the University of Montreal.

    Our reviewers' contributions have also been recognized: Anton Osokin made it into the top 200 reviewers, and Dmitry Vetrov was in the top 30%.
