
Unmask Face Liveness Detection by Integrating Google ML Kit into your Flutter App

Mar 10, 2023

Video KYC is a game changer in digital onboarding. Since 2020, video-based identity verification has been effective in preventing fraud, reducing onboarding costs, and speeding up the authentication process.  

Robust face liveness detection is a key part of the VKYC process. It is essentially a selfie-based check that analyses the user's facial expressions and blinking gestures to distinguish a real user from an impersonator or a bot. 

How Can You Create a Robust Face Liveness Detection Application? 

There are several tools and technologies available in the market, but the simplest and most cost-effective approach uses Flutter and Google ML Kit. With these, you can easily create a face liveness detection application. 

Wonder how?  

Our Flutter experts Chinnadurai Viswanathan and Nandhakumar Krishnan explain how a face liveness detection application can be created using Google ML Kit.  

So, here you go.  

To start with, liveness is verified by asking the user to perform the following gestures, which are then detected in the captured facial images.

  1. Turn left 
  2. Turn right 
  3. Look up 
  4. Look down 
  5. Wink 😉 
  6. Smile 😃 

We will use Google ML Kit (formerly known as Firebase ML Kit) to detect the faces within an image. CameraPreview is used to capture the user's image, which is then passed to the ML Kit face detection library, which returns the face values.   

Now, let's explore Google ML Kit further. 

Google ML Kit (formerly Firebase ML Kit): 

ML Kit's Face Detection API is a powerful tool for detecting faces within images; it lets you identify important facial features and obtain the contours of detected faces.  

This technology performs exceptionally well in the image pre-processing phase and detects different zones of the face accurately. The API can also identify many facial expressions and emotions, making it a valuable tool for developers looking to build facial recognition features with greater accuracy and precision.  

After the images are captured from CameraPreview, they are processed using the following dependencies. 

  1. Google ML Kit face detection (google_mlkit_face_detection) 
  2. Camera (camera) 

Flutter Dependencies 
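
The pubspec.yaml entries below sketch how these packages can be declared. The version numbers are placeholders; check pub.dev for the current releases before using them.

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Camera preview and image stream
  camera: ^0.10.0          # placeholder version
  # ML Kit face detection for Flutter
  google_mlkit_face_detection: ^0.5.0  # placeholder version
```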

Android Dependencies 
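
On the Android side, the camera and ML Kit plugins require a minimum SDK version. The snippet below is a sketch assuming the commonly documented minimum of API 21; verify the exact requirement against each plugin's documentation.

```gradle
// android/app/build.gradle
android {
    defaultConfig {
        // The camera and ML Kit face detection plugins
        // require at least API level 21.
        minSdkVersion 21
    }
}
```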

How to Get Face Values using Flutter & Android dependencies?

The following steps will help you get face values using Flutter and Android dependencies. 

Step 1

The first step is to set the preview of the camera in Flutter.  

In Flutter, the CameraController needs three parameters. 

  1. List of cameras (List<CameraDescription> cameras = [];) 
  2. Resolution of the photo 
  3. Audio needs (based on your requirement) 
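
A minimal sketch of this setup might look like the following. The function name initCamera is our own, and the choice of front camera, resolution, and audio setting are illustrative; adjust them to your requirement.

```dart
import 'package:camera/camera.dart';

List<CameraDescription> cameras = [];
CameraController? cameraController;

Future<void> initCamera() async {
  // 1. Get the list of cameras available on the device.
  cameras = await availableCameras();

  // 2. Create the controller with the three parameters:
  //    which camera, the photo resolution, and audio needs.
  cameraController = CameraController(
    cameras.firstWhere(
      (c) => c.lensDirection == CameraLensDirection.front,
    ),
    ResolutionPreset.medium, // resolution of the photo
    enableAudio: false,      // no audio needed for liveness checks
  );

  await cameraController!.initialize();
}
```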

Using StreamBuilder, we can take a frame from the camera every second and process the images. 

The cameraController?.startImageStream() method returns a CameraImage, which can be used for ML image processing. If you use ML Kit in Flutter, you can pass the converted image to it for processing. Once the processing is complete, you can return the result and display it in the user interface (UI). 

Note: The cameraController?.startImageStream() method provides a CameraImage instance, but Google ML Kit requires an InputImage object to perform face detection. Therefore, we need to convert the CameraImage to an InputImage object to build the face detection logic. In case you're not familiar with InputImage, it is an ML Kit class that offers a common representation of images used as input data for machine learning models. It can be created from various sources such as a bitmap, a media image, or a byte buffer.
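
The conversion could be sketched as below. Note this is illustrative only: the metadata API of the google_mlkit_* packages has changed across versions, so check the documentation of the version you are on. The helper name toInputImage is our own.

```dart
import 'dart:typed_data';
import 'dart:ui' show Size;

import 'package:camera/camera.dart';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

InputImage? toInputImage(CameraImage image, CameraDescription camera) {
  // Concatenate the bytes of all image planes into one buffer.
  final bytes = Uint8List.fromList(
    image.planes.expand((plane) => plane.bytes).toList(),
  );

  // Map the raw rotation and format values to ML Kit enums;
  // bail out if the device reports something unsupported.
  final rotation =
      InputImageRotationValue.fromRawValue(camera.sensorOrientation);
  final format = InputImageFormatValue.fromRawValue(image.format.raw);
  if (rotation == null || format == null) return null;

  return InputImage.fromBytes(
    bytes: bytes,
    metadata: InputImageMetadata(
      size: Size(image.width.toDouble(), image.height.toDouble()),
      rotation: rotation,
      format: format,
      bytesPerRow: image.planes.first.bytesPerRow,
    ),
  );
}
```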

Step 2

Once you have implemented the above code, you can design your camera view in your application.  

Note: The onImage method in our implementation carries a few project-specific requirements, so we've implemented some extra things there. 

Now that we have obtained the InputImage from the camera stream, the next step is to process the image and detect faces using the FaceDetector from the google_mlkit_face_detection library. This returns a list of all the faces present in the image, which can be used for further processing.  
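
A sketch of the detector setup is shown below. Enabling classification is what populates the smiling and eye-open probabilities needed for the wink and smile checks; the detectFaces helper name is our own.

```dart
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

// Classification must be enabled so that smilingProbability and the
// eye-open probabilities are populated on each detected Face.
final faceDetector = FaceDetector(
  options: FaceDetectorOptions(
    enableClassification: true,
    performanceMode: FaceDetectorMode.accurate,
  ),
);

Future<List<Face>> detectFaces(InputImage inputImage) async {
  // Returns one Face object per face found in the frame.
  return faceDetector.processImage(inputImage);
}
```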

Step 3

Using this faceDetector instance, we get the list of faces from the InputImage objects.

The output will look as shown below. 
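
To connect the detector output back to the gesture list from the start of the article, each Face exposes head rotation angles and classification probabilities that can be checked against thresholds. The sketch below is ours, not the authors' implementation, and the threshold values are illustrative; tune them for your use case.

```dart
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

/// Maps a detected Face to one of the liveness gestures,
/// or null if no gesture is recognised. Thresholds are illustrative.
String? classifyGesture(Face face) {
  final yaw = face.headEulerAngleY ?? 0;   // left/right head turn
  final pitch = face.headEulerAngleX ?? 0; // up/down head tilt
  final smile = face.smilingProbability ?? 0;
  final leftEye = face.leftEyeOpenProbability ?? 1;
  final rightEye = face.rightEyeOpenProbability ?? 1;

  if (yaw > 30) return 'left-turn';
  if (yaw < -30) return 'right-turn';
  if (pitch > 20) return 'look-up';
  if (pitch < -20) return 'look-down';
  if (smile > 0.8) return 'smile';
  // Wink: one eye closed while the other stays open.
  if (leftEye < 0.3 && rightEye > 0.7) return 'wink';
  if (rightEye < 0.3 && leftEye > 0.7) return 'wink';
  return null;
}
```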

If you are interested in reusing the above code in your existing native applications, you can refer to this article on The Implications of Flutter in Fintech. With the power of ML and Flutter, the possibilities are endless for creating intelligent apps that offer a better user experience.  

If you want to know more about how we implement face liveness detection with Flutter and Google ML Kit, write to us! 

This article is authored by Chinnadurai Viswanathan.


