What Is Google ML Kit? Build a Text Recognition and Face Detection App with Flutter

By Faheem Ahmed - January 08, 2024

Introduction

To build your own ML Kit-powered mobile app with Flutter, you don't need to be an expert programmer. Flutter is a user-friendly framework that lets you create mobile applications for multiple platforms with ease.

Google ML Kit allows developers to add on-device machine learning to their projects, and its integration with Flutter opens up exciting possibilities for creating intelligent, feature-rich applications that use machine learning to perform various tasks.

You can start developing your app quickly with the easy-to-use interface and helpful documentation. Whether you are new or experienced, Flutter makes it easy to create amazing mobile apps.

Today you will learn how to use the Google ML Kit package to create an application.

This application features a home screen with two options:

  • Text Recognition from an image
  • Face Count Detection from an image

Users can choose any image, and the results will be displayed on the screen.

Things You Are Going to Learn Today

  • Build a machine learning application using Flutter.
  • Set up a Flutter project and create a basic UI.
  • Use the Google ML Kit package to run text recognition and face detection on an image.
  • Get motivated to use ML in your next project!

Things You Need

  • Basic Understanding of Flutter Framework
  • An IDE such as VS Code or Android Studio

Getting Started

Set up your Flutter project:

First, make sure you have Flutter SDK installed and configured on your machine. If not, you can follow the official Flutter installation guide here: https://docs.flutter.dev/get-started/install
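You can check that your setup is complete by running:

flutter doctor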

Create a new Flutter project by running the following command in your terminal:

flutter create flutter_ml_kit_example
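If you want to confirm that the freshly generated project builds before changing anything, run it once:

cd flutter_ml_kit_example
flutter run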

In the main.dart file, remove the boilerplate code and replace it with the code below:

import 'package:flutter/material.dart';
// Import the home page we will create next (adjust the path to wherever
// you place home_page.dart in your project).
import 'package:flutter_ml_kit_example/presentation/home_page.dart';

void main() {
  runApp(const MainApp());
}

class MainApp extends StatelessWidget {
  const MainApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      title: 'ML kit application',
      debugShowCheckedModeBanner: false,
      home: HomePage(),
    );
  }
}

In the code above, main() starts the app with MainApp, whose MaterialApp sets HomePage() as the home screen; that class will contain the UI.

Adding required Dependencies

Open pubspec.yaml and add the following under the dependencies section:

dependencies:
  flutter:
    sdk: flutter
  google_ml_kit: ^0.16.3
  image_picker: ^1.0.4

You can find both packages, google_ml_kit and image_picker, on pub.dev.
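After saving the file, fetch the packages by running:

flutter pub get

Note that ML Kit's models run on-device: on Android the google_ml_kit plugins require minSdkVersion 21 or higher, and on iOS image_picker needs an NSPhotoLibraryUsageDescription entry in Info.plist before it can open the gallery. Check each package's README on pub.dev for the exact requirements of the version you install.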

Creating the Homepage

Create a new file named home_page.dart

Now build the UI by pasting this code into your editor:

import 'package:flutter/material.dart';
import 'package:flutter_ml_kit_example/presentation/face_detect.dart';
import 'package:flutter_ml_kit_example/presentation/text_recognize.dart';

class HomePage extends StatelessWidget {
  const HomePage({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Flutter ML Kit App'),
        centerTitle: true,
      ),
      body: SingleChildScrollView(
        child: Column(
          children: [
            const Center(
              child: Text('Select an option', style: TextStyle(fontSize: 30)),
            ),
            const SizedBox(height: 50),
            ElevatedButton(
              onPressed: () {
                Navigator.push(
                  context,
                  MaterialPageRoute(
                    builder: (context) {
                      return const TextRecognize();
                    },
                  ),
                );
              },
              child: const Text('Image Text Recognition'),
            ),
            const SizedBox(height: 10),
            ElevatedButton(
              onPressed: () {
                Navigator.push(
                  context,
                  MaterialPageRoute(
                    builder: (context) {
                      return const FaceDetect();
                    },
                  ),
                );
              },
              child: const Text('Number of Faces'),
            ),
          ],
        ),
      ),
    );
  }
}

In home_page.dart we created a stateless HomePage class. The UI is simple: two buttons that navigate to the Text Recognition and Face Count Detection screens respectively.

Text Recognition

Create a new file named text_recognize.dart (in lib/presentation/, to match the import in home_page.dart) and write the following code:

import 'dart:io';
import 'package:flutter/material.dart';
import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:image_picker/image_picker.dart';

class TextRecognize extends StatefulWidget {
  const TextRecognize({super.key});

  @override
  _TextRecognizeState createState() => _TextRecognizeState();
}

class _TextRecognizeState extends State<TextRecognize> {
  // String to hold the recognized text
  String recognizedText = "The text will be displayed here.";
  // Instance of Google's text recognizer
  final textDetector = GoogleMlKit.vision.textRecognizer();
  // File to hold the selected image
  File? _image;

  // Function to pick an image from the gallery and recognize text in it
  Future<void> pickImage() async {
    // Pick an image
    final pickedFile =
        await ImagePicker().pickImage(source: ImageSource.gallery);
    if (pickedFile != null) {
      _image = File(pickedFile.path);
      // Prepare the image for text recognition
      final inputImage = InputImage.fromFilePath(pickedFile.path);
      // Recognize text in the image
      final RecognizedText recognisedText =
          await textDetector.processImage(inputImage);
      String text = recognisedText.text;
      // Update the state with the recognized text
      setState(() {
        recognizedText = text;
      });
    }
  }

  // Dispose of the text recognizer when it's no longer needed
  @override
  void dispose() {
    textDetector.close();
    super.dispose();
  }

  // Build the UI
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Text Recognition'),
      ),
      body: SingleChildScrollView(
        child: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: [
              // Display the selected image or a placeholder text
              _image == null
                  ? const Text('No image selected.')
                  : Image.file(_image!),
              const SizedBox(height: 10),
              // Display the recognized text
              Text(recognizedText == '' ? 'No text recognized' : recognizedText),
              // Button to pick an image and recognize text in it
              ElevatedButton(
                onPressed: pickImage,
                child: const Text('Pick an image to recognize text'),
              ),
            ],
          ),
        ),
      ),
    );
  }
}

In this code, we created a UI and defined a function for text recognition that recognizes text in an image selected from the gallery of the user's device.

The pickImage function uses ImagePicker to pick an image from the device's gallery, processes it, and displays the text it recognizes.

When the user taps the button and selects an image (so pickedFile is not null), the function creates a File instance from the selected image's path. It then prepares the image for text recognition by creating an InputImage instance from the file path and passes it to the textDetector to recognize the text in the image.

It extracts the recognized text from the returned RecognizedText instance.

Finally, it updates the state of the widget so the recognized text is shown on the screen below the selected image.
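If you need more than the plain string, the RecognizedText object also exposes the result's structure as blocks and lines. Here is a minimal sketch (reusing the textDetector and InputImage from the code above) that walks that structure:

// Minimal sketch: RecognizedText is organized into blocks, and each
// block into lines, which is useful when layout matters.
Future<void> printStructuredText(
    TextRecognizer detector, InputImage inputImage) async {
  final RecognizedText result = await detector.processImage(inputImage);
  for (final block in result.blocks) {
    print('Block: ${block.text}');
    for (final line in block.lines) {
      print('  Line: ${line.text}');
    }
  }
}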

Face Detect

Create a new file named face_detect.dart (also in lib/presentation/) and write the following code:

import 'dart:io';
import 'package:flutter/material.dart';
import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:image_picker/image_picker.dart';

class FaceDetect extends StatefulWidget {
  const FaceDetect({super.key});

  @override
  _FaceDetectState createState() => _FaceDetectState();
}

class _FaceDetectState extends State<FaceDetect> {
  // String to hold the number of detected faces
  String numberOfFaces = "Faces will be counted here.";
  // Instance of Google's face detector
  final faceDetector = GoogleMlKit.vision.faceDetector();
  // File to hold the selected image
  File? _image;

  // Function to pick an image from the gallery and detect faces in it
  Future<void> pickImage() async {
    // Pick an image
    final pickedFile =
        await ImagePicker().pickImage(source: ImageSource.gallery);
    if (pickedFile != null) {
      _image = File(pickedFile.path);
      // Prepare the image for face detection
      final inputImage = InputImage.fromFilePath(pickedFile.path);
      // Detect faces in the image
      final List<Face> faces = await faceDetector.processImage(inputImage);
      // Update the state with the number of detected faces
      setState(() {
        numberOfFaces = 'Number of faces detected: ${faces.length}';
      });
    }
  }

  // Dispose of the face detector when it's no longer needed
  @override
  void dispose() {
    faceDetector.close();
    super.dispose();
  }

  // Build the UI
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Face Count Detection'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            // Display the number of detected faces or a placeholder text
            Text(numberOfFaces == ''
                ? 'Faces will be counted here.'
                : numberOfFaces),
            // Display the selected image or a placeholder text
            _image == null
                ? const Text('No image selected.')
                : Padding(
                    padding: const EdgeInsets.all(12.0),
                    // Constrain the image so it fits the available space
                    child:
                        Image.file(_image!, fit: BoxFit.contain, width: 300),
                  ),
            // Button to pick an image and detect faces in it
            ElevatedButton(
              onPressed: pickImage,
              child: const Text('Pick Face Image'),
            ),
          ],
        ),
      ),
    );
  }
}

In this code, we created a UI and defined a function for face count detection: it detects the faces of people in an image selected from the gallery of the user's device and displays how many people are present in that image.

When the user taps the button and selects an image (so pickedFile is not null), the function creates a File instance from the selected image's path. It then prepares the image for face detection by creating an InputImage instance from the file path.

faceDetector.processImage(inputImage) detects the faces in the image and returns a list of Face objects. Then, in the line numberOfFaces = 'Number of faces detected: ${faces.length}';, the number of detected faces is written into the string numberOfFaces, which is then displayed on the screen.
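The detector in this example uses the default options, which only return bounding boxes. If you want extra signals such as a smile probability, you can pass FaceDetectorOptions when creating the detector. This is a minimal sketch, assuming the umbrella google_ml_kit package forwards the options to the underlying face detection plugin as its README describes:

// Minimal sketch: enabling classification so each Face also carries
// smiling/eye-open probabilities (these are null when classification is off).
final detector = GoogleMlKit.vision.faceDetector(
  FaceDetectorOptions(
    enableClassification: true,
    performanceMode: FaceDetectorMode.accurate,
  ),
);

Future<void> describeFaces(InputImage inputImage) async {
  final List<Face> faces = await detector.processImage(inputImage);
  for (final face in faces) {
    print('Bounding box: ${face.boundingBox}');
    print('Smiling probability: ${face.smilingProbability}');
  }
}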

Conclusion

Today you learned how easy it is to build your very own Flutter application that taps into the power of Google's machine learning package, which makes complex tasks as simple as possible so you can focus on your creativity and ideas.

You started with a default Flutter project and learned how to build a simple, clear user interface.

Then you learned how to use the Google ML Kit package to perform text recognition and face detection on a selected image and display the results.

If this article helped you learn something new today, kindly consider sharing it so others can also learn how to build simple AI applications. Subscribe to our website for more articles on AI and other interesting fields of IT.

You can find the code for this here:

Repository Link:
Written by
Faheem Ahmed
