Face recognition in unconstrained videos
Recognizing faces in unconstrained videos is a task of growing importance. While it is closely related to face recognition in still images, it has its own characteristics and algorithmic requirements.
Over the years several methods have been suggested for this problem, and a few benchmark data sets have been assembled to facilitate its study.
However, there is a sizable gap between actual application needs and the current state of the art. We are developing novel set-to-set similarity measures that employ multiple classifiers and multiple descriptors to capture the appearance of an individual's face as depicted in a video clip.
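To make the set-to-set framing concrete, the sketch below shows one simple baseline measure of this family: each video clip is represented as a set of per-frame descriptor vectors, the similarity between two clips is the (negated) minimum pairwise distance between their descriptor sets, and scores from multiple descriptor types are fused by averaging. This is an illustrative assumption, not the specific measures developed in this work; the function names and the min-distance/mean-fusion choices are hypothetical.

```python
import numpy as np

def set_to_set_similarity(set_a, set_b):
    """Negated minimum pairwise Euclidean distance between two sets of
    frame descriptors (one row per frame). Higher means more similar.
    A hypothetical baseline, not the measures proposed in this work."""
    # Pairwise differences via broadcasting: (n_a, n_b, dim).
    diffs = set_a[:, None, :] - set_b[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return -float(dists.min())

def fused_similarity(desc_sets_a, desc_sets_b):
    """Fuse scores across descriptor types by simple averaging.
    Each argument is a list with one descriptor set per descriptor type."""
    scores = [set_to_set_similarity(a, b)
              for a, b in zip(desc_sets_a, desc_sets_b)]
    return float(np.mean(scores))
```

In practice, a verification decision is made by thresholding the fused score; learned classifiers can replace the fixed distance to better discriminate same-identity from different-identity clip pairs.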
When tested on benchmarks containing videos captured under challenging, uncontrolled conditions (i.e., ‘in the wild’ rather than ‘in the lab’), these new methods significantly outperform all existing methods.