My app continuously detects text from the camera preview (a SurfaceView) and sets it on a TextView, like this:
cameraView = findViewById(R.id.surface_view);
textView = findViewById(R.id.text_view);

// 1
final TextRecognizer textRecognizer = new TextRecognizer.Builder(getApplicationContext()).build();
if (!textRecognizer.isOperational()) {
    Log.w("MainActivity", "Detector dependencies are not yet available");
} else {
    cameraSource = new CameraSource.Builder(getApplicationContext(), textRecognizer)
            .setFacing(CameraSource.CAMERA_FACING_BACK)
            .setRequestedPreviewSize(1280, 1024)
            .setRequestedFps(2.0f)
            .setAutoFocusEnabled(true)
            .build();

    // 2
    cameraView.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            try {
                if (ActivityCompat.checkSelfPermission(getApplicationContext(), Manifest.permission.CAMERA)
                        != PackageManager.PERMISSION_GRANTED) {
                    ActivityCompat.requestPermissions(MainActivity.this,
                            new String[]{Manifest.permission.CAMERA}, RequestCameraPermission);
                    return; // wait for the permission result before starting the camera
                }
                cameraSource.start(cameraView.getHolder());
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            cameraSource.stop();
        }
    });

    // 4
    textRecognizer.setProcessor(new Detector.Processor<TextBlock>() {
        @Override
        public void release() {
        }

        @Override
        public void receiveDetections(Detector.Detections<TextBlock> detections) {
            final SparseArray<TextBlock> items = detections.getDetectedItems();
            if (items.size() != 0) {
                textView.post(new Runnable() {
                    @Override
                    public void run() {
                        StringBuilder stringBuilder = new StringBuilder();
                        for (int i = 0; i < items.size(); i++) {
                            TextBlock item = items.valueAt(i);
                            stringBuilder.append(item.getValue());
                            stringBuilder.append("\n");
                        }
                        textView.setText(stringBuilder.toString());
                        Log.d("Text", stringBuilder.toString());
                    }
                });
            }
        }
    });
}
I want to see whether I get the same text detection results if I use ML Kit's TextRecognizer with the same logic as above, and I'm trying to figure out the simplest way to swap it in. Because setProcessor accepts a Detector.Processor instance as an argument, while ML Kit's TextRecognizer.process accepts an image, I'm not sure how to achieve my goal. TextRecognizer.process, from ML Kit, accepts an image as an argument like below:
// creating TextRecognizer instance
TextRecognizer recognizer = TextRecognition.getClient();
// process the image
recognizer.process(image)
        .addOnSuccessListener(new OnSuccessListener<Text>() {
            @Override
            public void onSuccess(Text texts) {
                processTextRecognitionResult(texts);
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                e.printStackTrace();
            }
        });
How do I reconcile the two? My goal is to replace only the text recognizer with ML Kit's text recognizer and keep detecting text continuously, as in my original code.
I have spent a lot of time researching a solution but came up empty. Any help would be greatly appreciated.
I have tried this on my side; let me share an approach that may help you.
If that does not work, try replacing the Mobile Vision TextRecognizer instantiation with ML Kit's TextRecognizer.
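One way to keep your existing CameraSource pipeline is to wrap ML Kit's recognizer in a Mobile Vision Detector. This is a minimal sketch, not code from either library: the class name MlKitTextDetector is my own, I assume the frames arrive as NV21 byte buffers (which is what CameraSource produces), and I block on the Task with Tasks.await, which is acceptable because detect() is called on a background processing thread.

```java
import android.util.SparseArray;

import com.google.android.gms.tasks.Tasks;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.text.Text;
import com.google.mlkit.vision.text.TextRecognition;
import com.google.mlkit.vision.text.TextRecognizer;

import java.util.List;

// Hypothetical adapter: exposes ML Kit's TextRecognizer through the
// Mobile Vision Detector interface so CameraSource can keep driving it.
public class MlKitTextDetector extends Detector<Text.TextBlock> {

    private final TextRecognizer recognizer = TextRecognition.getClient();

    @Override
    public SparseArray<Text.TextBlock> detect(Frame frame) {
        SparseArray<Text.TextBlock> result = new SparseArray<>();
        try {
            // CameraSource delivers frames as NV21 data in a ByteBuffer.
            InputImage image = InputImage.fromByteBuffer(
                    frame.getGrayscaleImageData(),
                    frame.getMetadata().getWidth(),
                    frame.getMetadata().getHeight(),
                    frame.getMetadata().getRotation() * 90, // Frame rotation is 0..3, InputImage wants degrees
                    InputImage.IMAGE_FORMAT_NV21);

            // Blocking here is fine: detect() runs off the main thread.
            Text text = Tasks.await(recognizer.process(image));

            List<Text.TextBlock> blocks = text.getTextBlocks();
            for (int i = 0; i < blocks.size(); i++) {
                result.append(i, blocks.get(i));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return result;
    }
}
```

With this adapter you would build the CameraSource with `new MlKitTextDetector()` instead of the Mobile Vision recognizer, change the processor's type parameter to `Detector.Processor<Text.TextBlock>`, and call `item.getText()` where you currently call `item.getValue()`. Note that Mobile Vision is deprecated, so this bridge is only a stopgap; the longer-term path is a CameraX ImageAnalysis pipeline feeding ML Kit directly.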
For further information, you can check the Recognize Text in Images with ML Kit on Android codelab.