Prof. Michel Valstar
University of Nottingham
Title: Computational Face Analysis
Abstract: In this tutorial I will go through the pipeline of computational face analysis, which includes face detection, face recognition, facial expression recognition, and higher-level behaviour analysis such as the prediction of behaviomedical conditions, for example depression or pain. The tutorial will focus on practical issues that one needs to consider, and will use the popular open-source toolbox ‘OpenFace’. I will also go through a number of valuable publicly available face-related databases. While the focus will be on practical implementations, I will refer to and briefly address the literature so attendees can follow up on it at their leisure after the tutorial.
Bio: Michel Valstar (http://www.cs.nott.ac.uk/~pszmv) is an associate professor in Computer Science at the University of Nottingham, and a member of both the Computer Vision and Mixed Reality Labs. He is an expert in the fields of computer vision and pattern recognition, where his main interest and world-leading work is in automatic recognition of human behaviour, specialising in the analysis of facial expressions. Valstar pioneered the concept of Behaviomedics, which aims to diagnose, monitor, and treat medical conditions that alter expressive behaviour by employing objective assessment of that behaviour. Previously he was a Visiting Researcher at MIT’s Media Lab, and a Research Associate in the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London. He received his master’s degree in Electrical Engineering at Delft University of Technology in 2005 and his PhD at Imperial College London in 2008. He is the founder of the facial expression recognition challenges, FERA 2011/2015/2017, and the Audio-Visual Emotion Recognition Challenge series, AVEC 2011-2018. He leads the Objective Assessment research area as the only non-professorial Research Area lead of a £23.6M Biomedical Research Centre, and was the coordinator of the EU Horizon 2020 project ARIA-VALUSPA. Valstar is a recipient of Bill & Melinda Gates Foundation funding to help premature babies survive in the developing world. His work has received popular press coverage in The Guardian, Science Magazine, New Scientist, CBC, and on BBC Radio, among others. Valstar is a senior member of the IEEE. He has published over 90 peer-reviewed articles, attracting more than 7,500 citations and attaining an H-index of 36.
Prof. John Collomosse
University of Surrey
Prof. Ondrej Chum
Czech Technical University in Prague
Title: Robust visual search and matching
Abstract: Visual search and matching are long-standing challenges in computer vision, transformed by deep learning. This tutorial will focus on the latest CNN architectures for robust visual search and retrieval. We will analyse design choices and compare various components of the latest descriptor designs, including the aggregation, dimensionality reduction, binarization and end-to-end training of large-scale visual search systems. As examples, we will analyse the REMAP global descriptor, which won the Google Landmark Retrieval Challenge on Kaggle in 2018, and the topic of cross-domain matching through deep representations that disentangle structure and style, enabling, for example, sketch-based search. The tutorial will cover contemporary methods that improve visual search by exploiting the structures, often called manifolds, created by the descriptors of relevant images in the descriptor space. We will consider both query-time methods, such as query expansion, and offline methods, in particular diffusion, which shifts some of the computation into the preprocessing stage.
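To make the query-expansion idea mentioned in the abstract concrete, the following is a minimal sketch of average query expansion (AQE), a classic query-time technique: the query descriptor is averaged with the descriptors of its top-k retrieved neighbours and the search is re-run. The function and variable names are illustrative, not taken from the tutorial materials, and real systems would use learned descriptors and approximate nearest-neighbour indexing rather than a brute-force scan.

```python
import numpy as np

def average_query_expansion(query, database, k=5):
    """Re-rank a database against an expanded query descriptor.

    Illustrative sketch of average query expansion (AQE):
    1) rank the database by cosine similarity to the query,
    2) average the query with its top-k neighbours,
    3) re-rank against the expanded query.
    """
    # L2-normalise so that a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)

    # Initial ranking by cosine similarity.
    scores = db @ q
    top_k = np.argsort(-scores)[:k]

    # Expanded query: the original query plus its top-k neighbours.
    expanded = q + db[top_k].sum(axis=0)
    expanded /= np.linalg.norm(expanded)

    # Re-query with the expanded descriptor; return the new ranking.
    return np.argsort(-(db @ expanded))
```

Diffusion, by contrast, propagates similarity along the neighbourhood graph of the whole database, which is why much of its cost can be moved offline, as the abstract notes.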
Bio: John Collomosse is a Professor of Computer Vision at the Centre for Vision Speech and Signal Processing (CVSSP), and a visiting professor at Adobe Research, Creative Intelligence Lab. John joined CVSSP in 2009. Previously he was an Assistant Professor at the Department of Computer Science, University of Bath, where he completed his PhD in 2004 on the topic of AI for Image Stylization. John has also spent periods of time in commercial R&D, working for IBM UK Labs (Hursley), Vodafone R&D (Munich), and Hewlett Packard Labs (Bristol); the latter under a Royal Academy of Engineering fellowship. His research focuses on the intersection of Computer Vision, Graphics and AI for the creative industries, specifically for human performance capture, video post-production and VFX, and intuitive visual search (particularly sketch search). He also heads up the Surrey Blockchain activity exploring the fusion of AI and Distributed Ledger Technologies. John is a Chartered Engineer (C.Eng, 2013) and since 2018 a member of the EPSRC ICT Strategic Advisory Team (SAT) and UKRI Digital Economy Programme Advisory Board (PAB).
Bio: Ondrej Chum is an associate professor at the Czech Technical University in Prague, where he leads a team within the Visual Recognition Group at the Department of Cybernetics, Faculty of Electrical Engineering. He received the MSc degree in computer science from Charles University, Prague, in 2001 and the PhD degree from the Czech Technical University in Prague, in 2005. From 2006 to 2007, he was a postdoctoral researcher at the Visual Geometry Group, University of Oxford, United Kingdom. His research interests include large-scale image and particular object retrieval, object recognition, and robust estimation of geometric models. He is a member of the Image and Vision Computing editorial board, and has served in various roles at major international conferences (e.g., ICCV, ECCV, CVPR, and BMVC). Ondrej co-organizes the Computer Vision and Sports Summer School in Prague. He was the recipient of the Best Paper Prize at BMVC 2002, the Best Science Paper Honorable Mention at BMVC 2017, the Longuet-Higgins Prize at CVPR 2017, and the Saburo Tsuji Best Paper Award at ACCV 2018. Ondrej was runner-up for the 2012 Outstanding Young Researcher in Image & Vision Computing award, given to researchers within seven years of their PhD.