Stathis Galanakis

PhD Student at Imperial College London

Imperial College London

Biography

I am excited to use my industry and academic experience to bring cutting-edge Computer Vision research to life. My interests lie in Computer Vision, particularly in tackling intricate challenges related to human faces and bodies. These include 3D facial reconstruction from monocular images, facial avatar generation, and dataset creation.

Interests
  • Artificial Intelligence
  • 3D Face Reconstruction
  • Computer Graphics
Education
  • PhD in Artificial Intelligence, 2021

    Imperial College London

  • M.Eng. in Electrical & Computer Engineering, 2019

    National Technical University of Athens

Experience

Computer Vision Internship
Huawei UK
January 2022 – Present, London
I am currently working as a Computer Vision Engineer at Huawei UK, where my primary focus is 3D facial reconstruction from a monocular image. In this role, I am responsible for integrating cutting-edge techniques from this field, with the goal of improving the effectiveness and accuracy of facial reconstruction methods and pushing beyond current state-of-the-art approaches such as NeRF and diffusion-based techniques.
Research Assistant
Imperial College London
February 2021 – January 2022, London
I worked as a Research Assistant (RA) on the ARISE project, run by the Business School at Imperial College London. ARISE was a European Union-funded initiative designed to forecast agricultural crop yields within a specific region over targeted time periods. My responsibilities included working with data from satellites and weather stations and applying state-of-the-art machine learning algorithms to extract meaningful insights. I was also tasked with generating synthetic data for regions with limited data availability, ensuring a comprehensive and robust approach to yield prediction.
Computer Vision Scientist
ArielAI
January 2020 – October 2020, London
My main responsibilities included designing and implementing automated pipelines for collecting images from across the web. These pipelines were instrumental in creating novel datasets that accurately represented real-world scenarios. In addition, I designed and coordinated human annotation tasks for ArielAI's annotators, ensuring precise and consistent annotations.
R&D, ML Engineer
Pobuca Ltd
May 2018 – January 2019, Athens
As the sole Machine Learning Engineer, I developed a robust network for automated product recognition in images captured from supermarket shelves. This required designing Computer Vision algorithms and annotation tools, and building both the training and detection pipelines along with back-end support.

Publications

3DMM-RF: Convolutional Radiance Fields for 3D Face Modeling

Facial 3D Morphable Models are a main computer vision subject with countless applications and have been highly optimized in the last two decades. The tremendous improvements of deep generative networks have created various possibilities for improving such models and have attracted wide interest. Moreover, the recent advances in neural radiance fields are revolutionising novel-view synthesis of known scenes. In this work, we present a facial 3D Morphable Model, which exploits both of the above, and can accurately model a subject’s identity, pose and expression and render it in arbitrary illumination. This is achieved by utilizing a powerful deep style-based generator to overcome two main weaknesses of neural radiance fields: their rigidity and rendering speed. We introduce a style-based generative network that synthesizes in one pass all and only the required rendering samples of a neural radiance field. We create a vast labelled synthetic dataset of facial renders, and train the network on these data, so that it can accurately model and generalize on facial identity, pose and appearance. Finally, we show that this model can accurately be fit to “in-the-wild” facial images of arbitrary pose and illumination, extract the facial characteristics, and be used to re-render the face in controllable conditions.
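For intuition, the sketch below shows the standard NeRF-style volume-rendering step that such a model builds on: per-ray density and colour samples are alpha-composited into a single pixel colour. This is a minimal illustration only, not the paper's implementation; the function and variable names are mine, and the random samples stand in for the values a style-based generator would emit in one pass.

```python
# Minimal sketch of NeRF-style volume rendering along a single ray.
# The density/colour samples below are random stand-ins for the outputs
# a generator would produce; names are illustrative, not from the paper.
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample densities and colours into one pixel colour.

    densities: (N,) non-negative volume densities (sigma) at each sample
    colors:    (N, 3) RGB colour at each sample
    deltas:    (N,) distances between consecutive samples along the ray
    """
    alphas = 1.0 - np.exp(-densities * deltas)                        # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))    # accumulated transmittance
    weights = trans * alphas                                          # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                    # final RGB for the ray

# Example usage with random stand-in samples for one ray.
rng = np.random.default_rng(0)
n_samples = 64
sigma = rng.uniform(0.0, 2.0, n_samples)       # would come from the generator
rgb = rng.uniform(0.0, 1.0, (n_samples, 3))    # would come from the generator
delta = np.full(n_samples, 1.0 / n_samples)    # uniform sample spacing
print(composite_ray(sigma, rgb, delta))
```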