Aliaksandr Siarohin

Welcome! I am a Research Scientist at Snap Research on the Creative Vision team. Previously, I was a Ph.D. student at the University of Trento, where I worked under the supervision of Nicu Sebe in the Multimedia and Human Understanding Group (MHUG). My research interests include machine learning for image animation, video generation, generative adversarial networks, and domain adaptation.

My work has been published at top computer vision and machine learning conferences. I have also completed internships at Snap Inc. and Google.

I was a 2020 Snap Research Fellow.

Contact: aliaksandr [dot] siarohin [at] gmail [dot] com

[Google Scholar] [GitHub] [CV]

Publications:


Playable Environments: Video Manipulation in Space and Time

Willi Menapace, Stéphane Lathuilière, Aliaksandr Siarohin, Christian Theobalt, Sergey Tulyakov, Vladislav Golyanik, Elisa Ricci
CVPR 2022

[Paper] [Website] [Code]

Motion Representations for Articulated Animation

Aliaksandr Siarohin, Oliver Woodford, Jian Ren, Menglei Chai, Sergey Tulyakov
CVPR 2021

[Paper] [Website] [Code]

Playable Video Generation

Willi Menapace, Stéphane Lathuilière, Sergey Tulyakov, Aliaksandr Siarohin, Elisa Ricci
CVPR 2021

[Paper] [Website] [Code]

Motion-supervised Co-Part Segmentation

Aliaksandr Siarohin, Subhankar Roy, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe
ICPR 2021

[Paper] [Code]

TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation

Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Nicu Sebe, Elisa Ricci
Machine Vision and Applications 2021

[Paper]

First Order Motion Model for Image Animation

Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe
NeurIPS 2019

[Paper] [Code]

Attention-based Fusion for Multi-source Human Image Generation

Stéphane Lathuilière, Enver Sangineto, Aliaksandr Siarohin, Nicu Sebe
WACV 2020

[Paper]

DwNet: Dense warp-based network for pose-guided human video generation

Polina Zablotskaia, Aliaksandr Siarohin, Bo Zhao, Leonid Sigal
BMVC 2019

[Paper] [Code]

Increasing Image Memorability with Neural Style Transfer

Aliaksandr Siarohin, Gloria Zen, Cveta Majtanovic, Xavier Alameda-Pineda, Elisa Ricci, Nicu Sebe
TOMM 2019 (Best Paper Award)

[Paper] [Code]

Appearance and Pose-Conditioned Human Image Generation using Deformable GANs

Aliaksandr Siarohin, Stéphane Lathuilière, Enver Sangineto, Nicu Sebe
PAMI 2019

[Paper] [Code]

Unsupervised Domain Adaptation using Feature-Whitening and Consensus Loss

Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Samuel Rota Bulò, Nicu Sebe, Elisa Ricci
CVPR 2019

[Paper] [Code]

Animating Arbitrary Objects via Deep Motion Transfer

Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, Nicu Sebe
CVPR 2019

[Paper] [Code]

Whitening and Coloring Batch Transform for GANs

Aliaksandr Siarohin, Enver Sangineto, Nicu Sebe
ICLR 2019

[Paper] [Code]

Enhancing Perceptual Attributes with Bayesian Style Generation

Aliaksandr Siarohin, Gloria Zen, Nicu Sebe, Elisa Ricci
ACCV 2018

[Paper] [Code]

Deformable GANs for Pose-based Human Image Generation

Aliaksandr Siarohin, Enver Sangineto, Stéphane Lathuilière, Nicu Sebe
CVPR 2018

[Paper] [Code]

How to Make an Image More Memorable? A Deep Style Transfer Approach

Aliaksandr Siarohin, Gloria Zen, Cveta Majtanovic, Xavier Alameda-Pineda, Elisa Ricci, Nicu Sebe
ICMR 2017

[Paper] [Code]