Save real-time detected face (Track Faces) image using the 'android-vision' library


For my university thesis, I need an Android program that can detect and recognize faces in real time. I have read about the 'android-vision' library and tested its example code:

https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker/app/src/main/java/com/google/android/gms/samples/vision/face/facetracker

Modified code:

package com.google.android.gms.samples.vision.face.facetracker;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.os.AsyncTask;
import android.os.Environment;
import android.util.Log;
import android.widget.Toast;

import com.google.android.gms.samples.vision.face.facetracker.ui.camera.GraphicOverlay;
import com.google.android.gms.vision.face.Face;

import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.net.Socket;
import java.text.SimpleDateFormat;
import java.util.Date;

/**
 * Graphic instance for rendering face position, orientation, and landmarks within an associated
 * graphic overlay view.
 */
class FaceGraphic extends GraphicOverlay.Graphic
{
    private static final float FACE_POSITION_RADIUS = 10.0f;
    private static final float ID_TEXT_SIZE = 40.0f;
    private static final float ID_Y_OFFSET = 50.0f;
    private static final float ID_X_OFFSET = -50.0f;
    private static final float BOX_STROKE_WIDTH = 5.0f;

    public Canvas canvas1;
    public Face face;
    int i = 0;
    int flag = 0;

    private static final int COLOR_CHOICES[] = {
        Color.BLUE,
        Color.CYAN,
        Color.GREEN,
        Color.MAGENTA,
        Color.RED,
        Color.WHITE,
        Color.YELLOW
    };
    private static int mCurrentColorIndex = 0;

    private Paint mFacePositionPaint;
    private Paint mIdPaint;
    private Paint mBoxPaint;

    private volatile Face mFace;
    private int mFaceId;
    private float mFaceHappiness;
    public Bitmap myBitmap;
    FaceGraphic(GraphicOverlay overlay)
    {
        super(overlay);

        mCurrentColorIndex = (mCurrentColorIndex + 1) % COLOR_CHOICES.length;
        final int selectedColor = COLOR_CHOICES[mCurrentColorIndex];

        mFacePositionPaint = new Paint();
        mFacePositionPaint.setColor(selectedColor);

        mIdPaint = new Paint();
        mIdPaint.setColor(selectedColor);
        mIdPaint.setTextSize(ID_TEXT_SIZE);

        mBoxPaint = new Paint();
        mBoxPaint.setColor(selectedColor);
        mBoxPaint.setStyle(Paint.Style.STROKE);
        mBoxPaint.setStrokeWidth(BOX_STROKE_WIDTH);
    }

    void setId(int id)
    {
        mFaceId = id;
        flag = 1;
    }


    /**
     * Updates the face instance from the detection of the most recent frame.  Invalidates the
     * relevant portions of the overlay to trigger a redraw.
     */
    void updateFace(Face face)
    {
        mFace = face;
        postInvalidate();
    }

    /**
     * Draws the face annotations for position on the supplied canvas.
     */
    @Override
    public void draw(Canvas canvas)
    {
        face = mFace;
        if (face == null)
        {
            return;
        }

        // Draws a circle at the position of the detected face, with the face's track id below.
        float x = translateX(face.getPosition().x + face.getWidth() / 2);
        float y = translateY(face.getPosition().y + face.getHeight() / 2);
 //       canvas.drawCircle(x, y, FACE_POSITION_RADIUS, mFacePositionPaint);
        canvas.drawText("id: " + mFaceId, x + ID_X_OFFSET, y + ID_Y_OFFSET, mIdPaint);
  //      canvas.drawText("happiness: " + String.format("%.2f", face.getIsSmilingProbability()), x - ID_X_OFFSET, y - ID_Y_OFFSET, mIdPaint);
  //      canvas.drawText("right eye: " + String.format("%.2f", face.getIsRightEyeOpenProbability()), x + ID_X_OFFSET * 2, y + ID_Y_OFFSET * 2, mIdPaint);
  //      canvas.drawText("left eye: " + String.format("%.2f", face.getIsLeftEyeOpenProbability()), x - ID_X_OFFSET*2, y - ID_Y_OFFSET*2, mIdPaint);

        // Draws a bounding box around the face.
        float xOffset = scaleX(face.getWidth() / 2.0f);
        float yOffset = scaleY(face.getHeight() / 2.0f);
        float left = x - xOffset;
        float top = y - yOffset;
        float right = x + xOffset;
        float bottom = y + yOffset;
        canvas.drawRect(left, top, right, bottom, mBoxPaint);

        Log.d("MyTag", "hello "+i);
        i++;

        if (flag == 1)
        {
            flag = 0;
            canvas1 = canvas;
            // send face image to server for recognition
            new MyAsyncTask().execute("ppppp");

        }
    }


    class MyAsyncTask extends AsyncTask<String, Void, String>
    {
        private Context context;

        public MyAsyncTask()
        {
            // TODO Auto-generated constructor stub
            //context = applicationContext;
        }

        protected String doInBackground(String... params)
        {
            try
            {

                Log.d("MyTag", "face.getWidth() "+face.getWidth());
                Bitmap temp_bitmap = Bitmap.createBitmap((int)face.getWidth(), (int)face.getHeight(), Bitmap.Config.RGB_565);
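                // canvas1 is the hardware-accelerated Canvas passed to draw();
                // the next line is what throws the exception shown below.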
                canvas1.setBitmap(temp_bitmap);


            }
            catch (Exception e)
            {
                Log.e("MyTag", "I got an error", e);
                e.printStackTrace();
            }
            Log.d("MyTag", "doInBackground");
            return null;
        }

        protected void onPostExecute(String result) {
            Log.d("MyTag", "onPostExecute " + result);
            // tv2.setText(s);

        }

    }

}

It gives me this error:

12-16 03:08:00.310 22926-23044/com.google.android.gms.samples.vision.face.facetracker E/MyTag: I got an error
    java.lang.UnsupportedOperationException
        at android.view.HardwareCanvas.setBitmap(HardwareCanvas.java:39)
        at com.google.android.gms.samples.vision.face.facetracker.FaceGraphic$MyAsyncTask.doInBackground(FaceGraphic.java:175)
        at com.google.android.gms.samples.vision.face.facetracker.FaceGraphic$MyAsyncTask.doInBackground(FaceGraphic.java:158)

This code can detect faces in real time. For the recognition part, I am planning to use 'JavaCV': https://github.com/bytedeco/javacv. If I can capture the detected face in a Bitmap, I can save it as a .jpg image and then recognize it. Could you please give me some advice on how to save the detected face? Thank you.

1 Answer


TL;DR: Capture a Frame, process it, then save/export.

From the HardwareCanvas source:

@Override
public void setBitmap(Bitmap bitmap) {
    throw new UnsupportedOperationException();
}

This means that the Canvas handed to draw() is a HardwareCanvas, which does not support the setBitmap(Bitmap bitmap) method at all.
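
By contrast, a Canvas that you construct yourself around a Bitmap is software-rendered and can be drawn into freely, for example:

// A bitmap-backed software Canvas: everything drawn on it lands in bmp.
Bitmap bmp = Bitmap.createBitmap(320, 320, Bitmap.Config.ARGB_8888);
Canvas softwareCanvas = new Canvas(bmp);
softwareCanvas.drawColor(Color.GREEN); // drawn into bmp, not the screen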

There are several issues with your approach.

First: loads of AsyncTasks, many of them useless/redundant

If you are using the com.google.android.gms.vision.* classes, you are likely receiving around 30 events per second. By the time an AsyncTask runs, the Frame being displayed is almost certainly not the one that was evaluated. You are creating race conditions.

Second: Using Canvas to set Bitmap

When using a class, always check its documentation, its ancestors, and finally its implementation.

An ImageView would do what you want: it receives a Bitmap and displays it. The race conditions would be handled by the OS, and redundant requests would be dropped by the main looper.
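
For example (a sketch only; facePreview is a hypothetical ImageView from your layout, and faceBitmap is whatever Bitmap you produced):

// post() schedules the update on the main thread; the main looper runs
// these in order, so a stale bitmap is simply replaced by the next one.
facePreview.post(new Runnable() {
    @Override
    public void run() {
        facePreview.setImageBitmap(faceBitmap);
    }
});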

Finally

If what you need is, say, "take a picture when someone is smiling with their eyes closed", then you need to invert your logic. Use a source to generate Frames. Then process each Frame, and if it meets your criteria, save it, as in the sketch below.
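
A minimal sketch of that inverted flow, under some assumptions: it wraps the library's FaceDetector in a custom Detector, a known pattern with com.google.android.gms.vision; the class name FrameSavingDetector, the saveFirstFace() helper, the output file name, and the JPEG quality are my own choices, not part of the library, and error handling is kept minimal.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.os.Environment;
import android.util.Log;
import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

class FrameSavingDetector extends Detector<Face> {
    private final Detector<Face> mDelegate;

    FrameSavingDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        SparseArray<Face> faces = mDelegate.detect(frame);
        if (faces.size() > 0) {
            saveFirstFace(frame, faces.valueAt(0)); // hypothetical helper below
        }
        return faces;
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }

    private void saveFirstFace(Frame frame, Face face) {
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        // Camera frames arrive as NV21; YuvImage can compress that to JPEG.
        // array() assumes an array-backed buffer, which camera frames provide.
        YuvImage yuv = new YuvImage(frame.getGrayscaleImageData().array(),
                ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, jpeg);
        byte[] bytes = jpeg.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);

        // Crop to the face's bounding box, clamped to the frame edges.
        int left = Math.max((int) face.getPosition().x, 0);
        int top = Math.max((int) face.getPosition().y, 0);
        int w = Math.min((int) face.getWidth(), bitmap.getWidth() - left);
        int h = Math.min((int) face.getHeight(), bitmap.getHeight() - top);
        Bitmap crop = Bitmap.createBitmap(bitmap, left, top, w, h);

        // Saved where JavaCV (or an upload task) can pick it up later.
        File out = new File(Environment.getExternalStorageDirectory(), "face.jpg");
        try (FileOutputStream fos = new FileOutputStream(out)) {
            crop.compress(Bitmap.CompressFormat.JPEG, 90, fos);
        } catch (IOException e) {
            Log.e("MyTag", "Failed to save face crop", e);
        }
    }
}

You would build your CameraSource with new FrameSavingDetector(faceDetector) in place of the bare FaceDetector; the base Detector class routes receiveFrame() through detect(), so the existing FaceTracker pipeline keeps working. In practice you would also throttle the saving (e.g. only when your "smiling with eyes closed" criteria are met), since detect() runs on the camera's processing thread.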

This codelabs project does almost what you want, and it explains the details very well.