Face recognition and comparison for onboarding

Face recognition has made big waves in recent years, driven by breakthrough improvements in deep learning. These advances are making identity verification at a distance a reality: face comparison against identity documents, which used to be done manually by humans, is moving into the online realm.

Humans are good at recognizing and comparing faces, thanks to brain areas dedicated to this task. Beyond face recognition, the brain performs many other actions related to identifying people, ranging from the analysis of clothes, gender, location, and context to the analysis of gait and movement style.

Making artificial intelligence achieve comparable results on whole-person recognition would require combining multiple approaches, so for now let us focus only on the limited part that is face recognition. Doing face recognition in the online realm requires an approach built on validating input, running models, and making decisions based on the data collected.

Photos used for decision making in face recognition come in these types:

  1. A simple user-uploaded photo,
  2. A selfie taken at the time of the recognition process,
  3. A photo taken during the liveness detection process.

The photo of the person is subsequently matched against another source. That can be either a government service (such as NCIIC in China or Dukcapil in Indonesia) or a photo of an identity document (ID card, driving license, passport).

Data input validation

The major ways of validating the veracity of digital photos are:

  • Error level analysis,
  • EXIF metadata analysis,
  • Last saved quality.

Only two of those can be done well automatically without running into too many false positives. Error level analysis has the drawback of needing someone to check the results, because it is more of a visual tool. It also fails to detect some photos manipulated in smart yet simple ways: for example, a screenshot of a manipulated photo will usually not be flagged as manipulated by ELA.
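
A minimal sketch of what error level analysis looks like in practice, assuming Pillow is available; the resave quality and the amplification step are arbitrary choices, and the output image still needs a human eye to interpret:

```python
import io

from PIL import Image, ImageChops

def error_level_image(path: str, resave_quality: int = 90) -> Image.Image:
    """Highlight regions whose compression error differs from the rest."""
    original = Image.open(path).convert("RGB")

    # Re-save the image at a known JPEG quality and load the result back.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=resave_quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Edited regions often recompress differently, so they tend to stand out
    # in the per-pixel difference between the two versions.
    diff = ImageChops.difference(original, resaved)

    # Amplify the difference so it is visible to a reviewer.
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda value: value * (255.0 / max_diff))

# Save the result for a human reviewer - ELA remains a visual tool.
error_level_image("upload.jpg").save("upload_ela.png")
```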

Metadata analysis yields several useful pieces of information, ranging from the camera used and timestamps to the location of certain objects in the photo and sometimes even geolocation. This is helpful when you want to make sure the photo was taken at the correct location (a point of sale?), not too far back (within minutes before the event of analysis), and was not manipulated (Photoshop or other software does not appear in the metadata). If the metadata are stripped, that should be a big warning. However, if you are missing metadata on all photos, visit your software developers to find out how photos are taken, changed, and stored.
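
As an illustration, a rough sketch of pulling those metadata fields with Pillow; which tags are actually present varies heavily by device and upload path, so every field has to be treated as optional:

```python
from PIL import ExifTags, Image

def photo_metadata(path: str) -> dict:
    """Return a tag-name -> value dict of whatever EXIF data the photo carries."""
    with Image.open(path) as img:
        exif = img.getexif()
        tags = dict(exif.items())
        # Timestamps such as DateTimeOriginal live in the Exif sub-IFD (0x8769).
        tags.update(exif.get_ifd(0x8769))
        # Keep the GPS sub-IFD (0x8825) only when it actually holds data.
        gps = exif.get_ifd(0x8825)
        if gps:
            tags[0x8825] = gps
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in tags.items()}

meta = photo_metadata("selfie.jpg")
if not meta:
    print("metadata stripped - flag for review")
else:
    print("camera:", meta.get("Make"), meta.get("Model"))
    print("editing software:", meta.get("Software"))   # e.g. Photoshop shows up here
    print("taken at:", meta.get("DateTimeOriginal"))
    print("has geolocation:", "GPSInfo" in meta)
```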

Last saved quality is related to compression: photos that have been stored and manipulated are usually recompressed and are no longer in the full quality produced by the device that took them.
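
One rough way to approximate the last saved quality is to inspect the JPEG quantization tables that Pillow exposes; larger coefficients mean heavier compression. The threshold below is a placeholder to tune on your own data, not an established value:

```python
from PIL import Image

def looks_heavily_recompressed(path: str, mean_threshold: float = 40.0) -> bool:
    """Rough heuristic: large JPEG quantization values mean heavy compression."""
    with Image.open(path) as img:
        tables = getattr(img, "quantization", None)  # only present for JPEG files
    if not tables:
        return False  # not a JPEG (or tables unavailable) - handle separately
    # Average every quantization coefficient; camera originals are usually
    # saved at high quality and sit well below this placeholder threshold.
    values = [value for table in tables.values() for value in table]
    return sum(values) / len(values) > mean_threshold
```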

Most of the problems with receiving fake photos can be overcome by using liveness detection in the process. That usually requires having a mobile app in the flow that runs liveness detection algorithms and, in the middle of the process, takes a photo that is sent for recognition. The remaining attack vector at that point is the API that uploads the photo to the server. To decrease the risk, it is recommended to use not only standard hashing and encryption but also other security-by-obscurity methods to harden the API endpoints against receiving fake data.
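
As one illustration of hardening the upload endpoint, a hedged sketch of signing the photo payload with a shared secret and rejecting stale or unsigned uploads; the secret handling and the time window are placeholders, not a complete scheme:

```python
import hashlib
import hmac
import time

SHARED_SECRET = b"rotate-me-and-keep-out-of-the-client-repo"  # placeholder
MAX_AGE_SECONDS = 120                                          # placeholder window

def sign_upload(photo_bytes: bytes, timestamp: int) -> str:
    """Client side: sign the photo bytes together with a timestamp."""
    message = str(timestamp).encode() + photo_bytes
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify_upload(photo_bytes: bytes, timestamp: int, signature: str) -> bool:
    """Server side: reject stale, unsigned, or tampered uploads."""
    # Reject old requests so captured uploads cannot simply be replayed.
    if abs(time.time() - timestamp) > MAX_AGE_SECONDS:
        return False
    expected = sign_upload(photo_bytes, timestamp)
    # Constant-time comparison avoids leaking how much of the signature matched.
    return hmac.compare_digest(expected, signature)
```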

Running comparison

Once we are sure about the data we are receiving, the next step is to run the comparison. Apart from government services, which usually perform well (rule of thumb: the more authoritarian the country, the better its face recognition algorithms), the practical option is a commercial service. Training your own model is these days almost irrational, both because of the amount of data necessary to train deep learning models and because of the advancement of commercial services, which are quite cheap.

The selection of a service can be done along several dimensions: price, speed, and quality of comparison. I would also recommend being selective based on the racial profile of the people being recognized and compared. For example, whereas Microsoft's services seem to perform quite well on Caucasians, they sometimes get absolutely lost when comparing Asians. For Asians, I have seen the best results from Face++; on Caucasians, Face++ is sometimes off when it comes to detailed analysis of facial features.

Usually there are two services that I recommend for doing face recognition properly: one for analysis and one for comparison. Sometimes people just run comparison without running analysis, but analysis can be used as a check on what is being compared. Algorithms can sometimes be off the mark – saying that someone is male even though the photo obviously shows a female.
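
A minimal sketch of the analyse-then-compare flow; the endpoint URLs, request fields, and response shapes below stand in for whichever vendor is used and are not a real API:

```python
import requests

VENDOR_BASE = "https://vendor.example.com/v1"  # placeholder base URL
API_KEY = "replace-me"                         # placeholder credential

def analyse_face(photo_bytes: bytes) -> dict:
    """Analysis step: attributes such as gender and the number of faces."""
    response = requests.post(
        f"{VENDOR_BASE}/analyze",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"photo": photo_bytes},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"faces": [{"gender": "female", ...}]}

def compare_faces(selfie_bytes: bytes, id_photo_bytes: bytes) -> float:
    """Comparison step: confidence that both photos show the same person."""
    response = requests.post(
        f"{VENDOR_BASE}/compare",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"photo_a": selfie_bytes, "photo_b": id_photo_bytes},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["confidence"]  # 0-100 in this sketch
```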

Final decisioning

A good process for face recognition decision making is:

  1. Data validation - Incoming data can be trusted
  2. Outlier/strange result check - Using results from analysis for “trouble detection”
  3. Final decision - Comparison of confidence results

Recommended rules for validating the incoming data are as follows (a sketch implementing them comes after the list):

  • No Photoshop or other editing software recorded in the metadata
  • Camera maker matches the phone make (taken from other metadata, e.g. from the browser)
  • Geolocation is not far off from the expected location
  • Photo is not too old
  • Image metadata are present
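
A sketch of those rules expressed as code, operating on the metadata dictionary from the earlier EXIF example; the expected device make, the freshness window, and the omitted geolocation check are all inputs to adapt to your own flow:

```python
from datetime import datetime, timedelta

def incoming_data_issues(meta: dict, expected_make: str,
                         max_age: timedelta = timedelta(minutes=10)) -> list:
    """Return the list of failed rules; an empty list means the photo passes."""
    issues = []
    if not meta:
        return ["image metadata missing"]
    if "photoshop" in str(meta.get("Software", "")).lower():
        issues.append("editing software recorded in metadata")
    if expected_make.lower() not in str(meta.get("Make", "")).lower():
        issues.append("camera make does not match the reported device")
    taken_raw = meta.get("DateTimeOriginal")
    if not taken_raw:
        issues.append("capture timestamp missing")
    else:
        taken = datetime.strptime(str(taken_raw), "%Y:%m:%d %H:%M:%S")
        if datetime.now() - taken > max_age:
            issues.append("photo is too old")
    # A geolocation plausibility check against the expected point of sale
    # would go here; it is omitted from this sketch.
    return issues
```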

Photo analysis

  • Gender match
  • Just one person detected in the photo

Comparison

  • The comparison result is not suspiciously high (e.g. 99%+, which can mean the same image was submitted twice)
  • Confidence bands based on the service vendor's recommendation (combined in the decision sketch after this list). Usually:
    • 80%+: high confidence it is the same person
    • 60–80%: some certainty
    • <60%: not the same person
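
A sketch of the final decision step that combines the analysis checks with the confidence bands above; the exact band edges and the "suspiciously high" cutoff are examples to tune per vendor, not universal values:

```python
def final_decision(confidence: float, analysis_issues: list) -> str:
    """Map the vendor confidence score and the analysis checks to an outcome."""
    if analysis_issues:
        return "reject: " + ", ".join(analysis_issues)
    if confidence >= 99.0:
        # A near-perfect score is itself suspicious (e.g. the same image
        # submitted twice), so route it to a human instead of auto-accepting.
        return "manual review: suspiciously high score"
    if confidence >= 80.0:
        return "accept: high confidence of the same person"
    if confidence >= 60.0:
        return "manual review: some certainty only"
    return "reject: not the same person"
```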

Final words…

The goal of this post is to explain in broad strokes the process of face recognition. I do not claim it is comprehensive; if anything, it should be at least the beginning, but not the end, of a policy or strategy for decision making in identity verification. The problem of human identification is a multifaceted one. Doing face recognition and comparison while focusing only on running some cognitive services and setting a cutoff can lead to poor decisions made in the mistaken belief that you have great data.