Recognizing Face in Android using Deep Neural Network + TensorFlow Lite (2023)

Muhammad Wyndham Haryata Permana

Published in Gravel Engineering · 8 min read · Jul 20, 2022

In the previous article, we explored how to implement face detection in Android apps as the first step of a face recognition pipeline on mobile devices.

Before we start, there are a few terms readers should know: Deep Neural Network, Convolutional Neural Network, Triplet Loss, and Inference Time, which I’ll happily explain below:

  • Deep Neural Network

When we talk about neural networks, we talk about how machine learning works. A neural network consists of several connected units called nodes. These nodes mimic how a neuron in a human brain works, and each node will process the input, then give the result and pass this result to the next node. These chains of nodes are basically what a neural network is.

These nodes are then grouped into a layer, and this layer has a specific kind of input and a specific kind of output.

When we want to solve a problem, we often need multiple kinds of outputs, and a single processing step is not enough. To handle this, we add more layers, each doing something different to solve a different part of the problem.

And when the layer count is more than one, the neural network has “depth” in the form of multiple layers, hence the name Deep Neural Network.

  • Convolutional Neural Network

In mathematical terms, a convolution is an operation on two functions, say f and g, that produces a third function h expressing how the shape of one function is modified by the other.
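
For reference, the standard definition (a general fact, not spelled out in the original article) is:

```
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau
```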

In the context of neural networks, it means we replace the general matrix multiplication usually performed inside a neural network with a convolution operation.

This approach gives better results when processing visual imagery, including facial recognition, object recognition, and various other visual tasks.

  • Triplet Loss

What is triplet loss? Triplet loss is a loss function (a function that maps values of variables, in this case an array, to a real number representing the associated cost or loss) used in machine learning algorithms.

This loss function works by comparing an input (which we call the “anchor”) to a pre-existing matching input (a known input of the same person as the anchor, called the “positive”) and a pre-existing non-matching input (a totally different person, called the “negative”).

The goal of this loss function is to minimize the distance between the anchor and the positive while maximizing the distance between the anchor and the negative.
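
Written out (the standard formulation, not quoted from the article), with f denoting the embedding function and \alpha a margin:

```
\mathcal{L}(A, P, N) = \max\big( \lVert f(A) - f(P) \rVert^2 - \lVert f(A) - f(N) \rVert^2 + \alpha,\ 0 \big)
```

Minimizing this pulls the anchor toward the positive and pushes it away from the negative by at least the margin.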

  • Inference Time

Inference time is a metric of how long a machine learning model takes to produce a result; in computing terms, how long it takes to produce an output for each input.

Kinda understand it now? Great! Now let’s move on to the implementation of Face Recognition.

Our implementation of face recognition uses TensorFlow Lite to run various pre-trained models of Deep Neural Network (DNN) based face recognition algorithms.

What is TensorFlow Lite? TensorFlow Lite is a library developed by the TensorFlow team to run machine learning models built with TensorFlow on edge/mobile devices. It enables us to convert existing machine learning models into a format that even mobile phones can run.

I won’t explain how to install/apply this library to an Android project in depth. You can learn how to apply it properly here, but I will explain the steps I personally took to complete this project.

There are various models that can be used, but for brevity’s sake, I’ll use two particular models, which are:

  • MobileFaceNet
    A Convolutional Neural Network based implementation of MobileNet V2 for face recognition with reduced parameters, which allows it to work on a mobile device with reasonable accuracy. The output of this model is an embedding in a 192-dimensional Euclidean space. As the authors explain:

We present a class of extremely efficient CNN models, MobileFaceNets, which use less than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices.

  • FaceNet
    Another Convolutional Neural Network implementation, which embeds a face into a 128-dimensional Euclidean space as a likeness array. This method leverages the Triplet Loss during training and achieves almost impeccable performance results.

Each of these two models has its advantages and disadvantages when run inside Android apps:

MobileFaceNet

  • Based on MobileNet V2 (retrofit to V3)
  • Very fast inference time (168–320 ms on the dataset source, ±160 ms in typical use)
  • Reduced parameters, so it can run on a less powerful SoC (System on Chip, what we usually call the “processor” or “brain” of a mobile phone)
  • Less accurate at recognizing faces with an expression
  • Less accurate at recognizing rotated faces
  • More accurate with a straight-on face photo
  • Input format: ARGB8888 112x112 px Bitmap
  • Output format: 192-D float array
  • Small model size (5.1 MB un-quantized)

FaceNet

  • Relatively slow inference time (560–1496 ms on the dataset source, ±200 ms in typical use)
  • Does not run well on slower SoCs (System on Chip, what we usually call the “processor” or “brain” of a mobile phone)
  • Can handle more variation in environment and facial expression, including dark environments, smiles, and visible teeth
  • Can detect a person even when wearing a mask
  • Can handle face rotation to some degree
  • Input format: ARGB8888 160x160 px Bitmap
  • Output format: 128-D float array
  • Large model size (45 MB un-quantized)

Currently, our implementation of face recognition in Android uses pre-computed (“pre-DNN-ed”) images, storing a list of likeness arrays for each known person, like this:
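
The original article shows this as an image; as a rough, illustrative sketch (the class and field names are mine, not the article’s actual code), the stored data could look like this in Kotlin:

```kotlin
// Illustrative in-memory structure: each known person keeps one or more
// likeness arrays (embeddings) produced by the DNN model.
data class KnownPerson(
    val name: String,
    val likenesses: List<FloatArray> // 192-D for MobileFaceNet, 128-D for FaceNet
)

val knownPeople = listOf(
    KnownPerson("Wyndham", listOf(/* pre-computed likeness arrays */)),
    KnownPerson("Zidni", listOf(/* pre-computed likeness arrays */))
)
```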

Here, a likeness is the embedding in Euclidean space produced by the DNN model via TensorFlow Lite. It represents a face as an N-dimensional array, on which we can do some simple arithmetic to calculate the average distance to each recognized face.

Then following the flowchart below:

(Flowchart: face recognition pipeline, from image input to DNN inference, distance calculation, and thresholding.)

We can determine which person is which from our pre-computed list of people. The steps of the flowchart above are explained below:

  • Following the face detection step from the previous article, we already have an ARGB8888 image at the n x n resolution the model requires (112 x 112 px for MobileFaceNet and 160 x 160 px for FaceNet).
  • This image is fed as input to the DNN model, and we wait for the output.
  • If the inference is successful, we get an N-dimensional array representing the likeness of the face in Euclidean space.
  • This likeness is then compared to our existing list of people above.
  • For each person in the list, we calculate the L2 Euclidean distance from our result to each likeness that person has, then average those distances per person.
  • Once every person’s distances are averaged, we take the smallest average; the person associated with this smallest average is our candidate match.
  • This smallest average is compared to a predetermined maximum distance, for which we took the DeepFace library as a reference (1.0 for FaceNet and 1.0 for MobileFaceNet).
  • If the smallest average is larger than the maximum distance, we assume this is not the same person as our candidate; otherwise, we assume it is indeed the same person.
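
A minimal Kotlin sketch of this matching step (the names and structure are mine, not the article’s actual code) could look like this, assuming `people` maps each name to that person’s stored likeness arrays:

```kotlin
import kotlin.math.sqrt

// Sketch of the matching flow described above: average L2 distance per person,
// take the smallest average, and accept it only if it is under the threshold.
fun recognize(
    result: FloatArray,                       // embedding of the new face
    people: Map<String, List<FloatArray>>,    // name -> stored likeness arrays
    maxDistance: Float = 1.0f                 // threshold referenced from DeepFace
): String? {
    fun l2(a: FloatArray, b: FloatArray): Float {
        var sum = 0f
        for (i in a.indices) { val d = a[i] - b[i]; sum += d * d }
        return sqrt(sum)
    }
    // Average distance from the new embedding to each person's stored embeddings
    val averages = people.mapValues { (_, likenesses) ->
        likenesses.map { l2(result, it) }.average().toFloat()
    }
    // Candidate = person with the smallest average distance
    val best = averages.minByOrNull { it.value } ?: return null
    // Accept only if the smallest average is within the maximum distance
    return if (best.value <= maxDistance) best.key else null
}
```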

Then, for the implementation in the Android app, we need the dependency below:
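
The original snippet isn’t reproduced in this copy of the article, but a typical TensorFlow Lite dependency declaration looks roughly like this (artifact versions are assumptions, check the current releases):

```kotlin
// app/build.gradle.kts — versions are examples, not the article's exact ones
dependencies {
    implementation("org.tensorflow:tensorflow-lite:2.9.0")
    // Optional helpers for image pre-processing and tensor handling
    implementation("org.tensorflow:tensorflow-lite-support:0.4.2")
}
```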

Then make sure our model (which should be a .tflite model) is added under the /app/src/main/assets path.
For the implementation, after we get the cropped image from the face detection process, we need to convert it to a ByteBuffer:
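
The article’s original snippet is missing from this copy; a minimal sketch of that conversion, assuming the cropped face has already been scaled to the model’s input resolution and that the model expects inputs normalized to roughly [-1, 1], could be:

```kotlin
import android.graphics.Bitmap
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Convert an ARGB_8888 face Bitmap (already scaled to inputSize x inputSize,
// i.e. 112 for MobileFaceNet or 160 for FaceNet) into a float ByteBuffer.
fun bitmapToByteBuffer(bitmap: Bitmap, inputSize: Int): ByteBuffer {
    val buffer = ByteBuffer.allocateDirect(4 * inputSize * inputSize * 3)
        .order(ByteOrder.nativeOrder())
    val pixels = IntArray(inputSize * inputSize)
    bitmap.getPixels(pixels, 0, inputSize, 0, 0, inputSize, inputSize)
    for (pixel in pixels) {
        // Extract R, G, B channels and normalize each to roughly [-1, 1]
        buffer.putFloat(((pixel shr 16 and 0xFF) - 128f) / 128f)
        buffer.putFloat(((pixel shr 8 and 0xFF) - 128f) / 128f)
        buffer.putFloat(((pixel and 0xFF) - 128f) / 128f)
    }
    buffer.rewind()
    return buffer
}
```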

Then run the inference process against the DNN model:
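
Again, the article’s exact code isn’t shown here; a hedged sketch using the TensorFlow Lite Interpreter API (the model file name is an assumption) might look like:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Load a .tflite model from /app/src/main/assets (file name is illustrative).
fun loadInterpreter(context: Context, fileName: String = "mobile_face_net.tflite"): Interpreter {
    val fd = context.assets.openFd(fileName)
    val channel = FileInputStream(fd.fileDescriptor).channel
    val model: MappedByteBuffer =
        channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
    return Interpreter(model)
}

// Run inference: `input` is the ByteBuffer from the previous step, and
// embeddingSize is 192 for MobileFaceNet or 128 for FaceNet.
fun runInference(interpreter: Interpreter, input: ByteBuffer, embeddingSize: Int): FloatArray {
    val output = Array(1) { FloatArray(embeddingSize) }
    interpreter.run(input, output)
    return output[0]
}
```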

The resulting FloatArray is the likeness array we get from the DNN model, which we then use in the distance calculation described step by step above.

As for how we calculate the distance, we have two possible methods. One is the L2 (Euclidean) distance, optionally after normalizing the two arrays (one array being the result from above, the other coming from our pre-calculated database), for which we can use the code below:
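
The original code block is missing from this copy of the article, so here is a minimal sketch of an L2 (Euclidean) distance, plus an optional L2 normalization helper:

```kotlin
import kotlin.math.sqrt

// L2 (Euclidean) distance between two embeddings of equal length.
fun l2Distance(x1: FloatArray, x2: FloatArray): Float {
    var sum = 0f
    for (i in x1.indices) {
        val d = x1[i] - x2[i]
        sum += d * d
    }
    return sqrt(sum)
}

// Optional preliminary step: scale an embedding to unit length.
fun l2Normalize(x: FloatArray): FloatArray {
    val norm = sqrt(x.fold(0f) { acc, v -> acc + v * v })
    return FloatArray(x.size) { i -> x[i] / norm }
}
```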

The other method is the cosine similarity between two arrays:
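
Likewise, a minimal sketch of the cosine similarity between two arrays:

```kotlin
import kotlin.math.sqrt

// Cosine similarity between two embeddings: closer to 1.0 means more similar.
fun cosineSimilarity(x1: FloatArray, x2: FloatArray): Float {
    var dot = 0f
    var mag1 = 0f
    var mag2 = 0f
    for (i in x1.indices) {
        dot += x1[i] * x2[i]
        mag1 += x1[i] * x1[i]
        mag2 += x2[i] * x2[i]
    }
    return dot / (sqrt(mag1) * sqrt(mag2))
}
```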

What is this distance calculation for? Basically this: you have the result from the face recognition model, which comes as an array. The model itself doesn’t know which face belongs to whom; the array is essentially the coordinates of a point in an n-dimensional space, with n being the size of the array (192 for MobileFaceNet or 128 for FaceNet).

Previously, we had pre-calculated face data passed through the same model; those arrays represent the same kind of data mentioned above. From this, we can calculate the distance between the array we got from the new face and our database of arrays that represent specific faces.

This n-dimensional distance calculation can be done with the cosine similarity mentioned above, with x1 representing the new array and x2 representing an array from our existing database. Applying L2 normalization to the two arrays first is also a valid preliminary step.

As for the result of the Android implementation using the FaceNet model, we can see it in the image below:

(Screenshot: recognition result showing the average distance for each person in the database.)

As you can see, the average distance for each person in our database comes out as follows:

  • Wyndham: 0.70820
  • Zidni: 1.190301
  • Alfin: 1.075332
  • Reza: 1.012211

The person with the lowest average distance is Wyndham, and because it is lower than our maximum limit of 1.0, we can determine that this person is indeed Wyndham.

Great! We have implemented a proper face recognition system in an app and seen the results. Unfortunately, implementation aside, there are some requirements and limitations which may or may not be dealbreakers for your particular application, a few of which I list below:

  • Minimum SDK is 21 (Lollipop 5.0)
  • Minimum tested system RAM is 2 GB
  • Requires camera access
  • Can be fooled by a photograph
  • The model size is large and cannot be compressed

As we see throughout this article, it is possible to do Face Recognition on mobile/edge devices. This implementation in particular uses pre-existing models to recognize the faces. On the implementation side of things, we are using TensorFlow Lite, available on various platforms, including Android.

Before we can use proper face recognition, though, we need to prepare our face data first, which we touched upon in the first article. Once we have a proper face image for recognition, we convert the image to a byte buffer, run it through the pre-trained model, and use the result to calculate the distance between the scanned image and our existing database of images.

As for the models, this approach uses several models that share the same basic method: a Convolutional Neural Network trained with Triplet Loss. There are various models available, but in this example I’m using two in particular: MobileFaceNet and FaceNet.

In the next article, we will explore other fields of machine learning on mobile devices, including text recognition and filling in missing data in such a way that we can extract proper information from it.

Thank you very much and see you in the next article!
