Face Detection & Blurring Using Computer Vision

Executive Summary:

Computer vision is being adopted in an increasing number of industries thanks to ongoing advances in science and technology, and the field itself has progressed significantly over time. As a result, computer vision is now used in many diverse sectors. This article describes how computer vision is used for face detection and blurring.

Introduction:

In this article, we’ll demonstrate how to use computer vision, specifically OpenCV and the Haar cascade method, for face detection. After that, we’ll apply image blurring to obscure the face region of the image.

Face detection services use computer vision technologies to find human faces in real-time video streams, recorded videos, and digital photos. A face detection service aims to identify every face in a given image and report each face’s location and size.

Various algorithms are utilized for object and face detection, including the Haar cascade, YOLO, and R-CNN.

In this tutorial, the Haar cascade will be used. Paul Viola and Michael Jones first introduced the Haar cascade classifier in their paper “Rapid Object Detection using a Boosted Cascade of Simple Features.”

We won’t delve into the theoretical specifics of this method; instead, we’ll look at how to apply the algorithm and carry out face detection.

We will use a pre-trained Haar cascade rather than creating our own.

Face detection and image blurring

Face Recognition and Detection:

Face detection refers to identifying human faces in images using computer vision. First, a face detection service recognizes a face’s presence and its distinctive facial features. After that, the face can either be blurred or matched to its owner, depending on the application. Thanks to developments in computing technology, automatic face detection and face blurring are now possible.

Some of these technologies are free and simple to use, while others are offered as commercial solutions that help businesses anonymize their data.

Several pre-trained Haar cascades are available in the OpenCV package for detecting faces, eyes, number plates, etc.

In this instance, we’ll take advantage of a Haar cascade that has already been trained to recognize faces. So, first, download the haarcascade_frontalface_default.xml file from the official OpenCV repository and place it in the project directory.

Now, create a new Python file, and let’s write some code:
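A minimal sketch of the setup, assuming only that the opencv-python package is installed:

```python
import cv2
```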

To use OpenCV, we currently only need the cv2 package.

The input image is then loaded and converted to grayscale, and the Haar cascade face detector is initialized.
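A sketch of that step; the image file name person.jpg is only an illustration, and the cascade path assumes the XML file sits in the project directory:

```python
# Load the pre-trained Haar cascade for frontal faces
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Load the input image (file name is illustrative) and convert it to grayscale
image = cv2.imread("person.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```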

We can then detect the faces and draw bounding boxes around them:
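Something along the following lines; the scale factor of 1.04 comes from the explanation below, while the minNeighbors and minSize values are illustrative assumptions:

```python
# Detect faces in the grayscale image
faces = face_cascade.detectMultiScale(
    gray, scaleFactor=1.04, minNeighbors=5, minSize=(30, 30)
)

# Draw a green bounding box around every detected face
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow("Detected faces", image)
cv2.waitKey(0)
```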

To identify faces in the grayscale image, we used the detectMultiScale method.

The first argument of this method is the image in which we want to find faces. The second argument is the scale factor, which defines how much the image size is reduced at each image scale; with a value of 1.04, the image shrinks by 4% at each iteration. The third argument is the minimum number of neighbors each candidate rectangle needs in order to be kept. If you want greater accuracy, increase this value; however, the algorithm may then overlook some faces.

The final argument is the minimum object size; objects smaller than this are ignored. The method then returns a list of bounding boxes, one for each detected face, each given as (x, y) coordinates together with a width and height. We then go over the list of bounding boxes and draw a rectangle around each detected face.

See the outcome of the Haar cascade face detector in the image below:

As you can see, the algorithm successfully identified the face. Let’s play with another picture.


While the face was accurately identified, a false positive detection can be seen at the lower left.

Unfortunately, false positive detections are one of the shortcomings of Haar cascades, and you must manually tune the detectMultiScale function’s parameters to get rid of them.

Face Blurring

In a nutshell, “face blurring” means applying an algorithm that recognizes human faces in images, videos, or real-time streams and then blurs them. It is used to safeguard people’s privacy by obscuring faces wherever they appear in a photo or video. Typically, the process begins by locating the face in the picture and then applying pixelation or another blurring effect, while the rest of the image keeps its visual quality.

To see how this works, we’re going to blur the face portion right now. To do that, we must first identify the face area in the image, blur that area, and then replace the original face with the blurred portion.

Let’s examine the procedure:
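A sketch of that step, reusing the faces list and image from the detection code above; a Gaussian blur stands in here for the exact blur the walkthrough applies, using the 99 × 99 kernel mentioned below:

```python
# Blur every detected face region
for (x, y, w, h) in faces:
    # Slice out the face region with NumPy indexing
    face_roi = image[y:y + h, x:x + w]

    # Apply a heavy blur with a very large kernel so the face becomes unrecognizable
    blurred_face = cv2.GaussianBlur(face_roi, (99, 99), 0)

    # Put the blurred region back into the original image
    image[y:y + h, x:x + w] = blurred_face

cv2.imshow("Blurred faces", image)
cv2.waitKey(0)
```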

Once more, we loop over the face bounding boxes and slice the image with NumPy to extract only the face region.

After that, we blur the extracted face area using a very large kernel size (99) to get an extremely blurred result. Finally, we use NumPy slicing again to put the blurred portion back in place of the original face.

The face has been blurred, as shown in the image below:

The face was blurred appropriately; however, we only obscured a rectangular portion of the image. That works, but blurring a circular region would look better.
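One way to achieve that, sketched here as an assumption rather than the article’s own code, is to blur the rectangle as before and then copy back only the pixels inside an elliptical mask:

```python
import numpy as np

# Blur each face inside an elliptical region instead of the full rectangle
for (x, y, w, h) in faces:
    # face_roi is a view into image, so changes below update the image in place
    face_roi = image[y:y + h, x:x + w]
    blurred_face = cv2.GaussianBlur(face_roi, (99, 99), 0)

    # Build an elliptical mask the size of the face region
    mask = np.zeros(face_roi.shape[:2], dtype=np.uint8)
    cv2.ellipse(mask, (w // 2, h // 2), (w // 2, h // 2), 0, 0, 360, 255, -1)

    # Keep blurred pixels inside the ellipse and original pixels outside it
    face_roi[mask == 255] = blurred_face[mask == 255]
```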

Quick recap

You can use different pre-trained Haar cascades to find other objects, such as vehicles, cats, eyes, etc.

The Haar cascade is one of the earliest object detection methods. It is not the most precise method and is notorious for false positive detections (seeing a face where there isn’t one in the image), but it is very fast and can be used in real time.

Visit our website to find out more about computer vision and image processing.
