Custom IBM Watson Visual Recognition service integration with Flutter/Dart


I created and trained a custom visual recognition model on IBM Cloud, and I want to connect it to the application I am building with Flutter. I worked through the IBM API reference below and everything worked well, but it doesn't explain how to connect the model to an application.

https://cloud.ibm.com/apidocs/visual-recognition/visual-recognition-v3#classify-images

I tried the flutter_ibm_watson package from pub.dev (the package is severely outdated and has many issues, but I tried it anyway). I plugged in my API key and URL, but it didn't return results from my custom classifier; it only returned the general classification of the image (e.g. a skyscraper image returned 'skyscraper').

IamOptions options = await IamOptions(
  iamApiKey: "NRDjngCby2d-pSHOPyWQJxhuB6vOY2uOTCX6KV2BCfwB",
  url: "https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/ef286f4e-84c7-44e0-b63d-a6a49a142a30",
).build();
// Language.ENGLISH sets the language of the response
VisualRecognition visualRecognition =
    new VisualRecognition(iamOptions: options, language: Language.ENGLISH);
ClassifiedImages classifiedImages = await visualRecognition
    .classifyImageUrl("https://starindojaya.com/images/products/PAPER_CUP_PAPERCUP_2_OZ.jpg");
print(classifiedImages.getImages()[0].getClassifiers()[0].getClasses()[0].className);

I also downloaded the Core ML file, as described in the API docs, but I am unsure what to do with it. On a side note, I did get my application to connect to my custom visual recognition model through the StreamMyClassifier class in Flutter, and it worked very well. However, I also want the confidence score so that I can display it to the user. I would appreciate any help. Thanks.


1 Answer


Your print statement prints the class name of the top class of the top classifier for the first image you submitted, so printing 'skyscraper' means it is working as intended. If you want all the classifiers, change your print statement to:

print(classifiedImages.getImages()[0].getClassifiers());
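
Since you also want the confidence score, note that the service returns a score for every class. A minimal sketch of iterating over the full response is below; it assumes the package exposes a score field on each class result in the same style as the className getter in your code (that field name is an assumption based on the REST response shape, not confirmed from the package docs):

// Walk every classifier and class in the response and print the
// class name together with its confidence score.
// NOTE: `score` is assumed to mirror the REST response field; check
// the flutter_ibm_watson source for the exact name.
for (var image in classifiedImages.getImages()) {
  for (var classifier in image.getClassifiers()) {
    for (var result in classifier.getClasses()) {
      print('${result.className}: ${result.score}');
    }
  }
}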

If you have a custom classifier, you need to pass a parameter to tell the service to use it. You can do this either by setting the owners parameter to me or by setting classifier_ids to include your classifier ID. Both are arrays, so remember to wrap the values in []. If you specify both, classifier_ids takes precedence. A sketch of doing this against the REST API directly is shown below.
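
Since the flutter_ibm_watson package may not expose these parameters, a minimal sketch of calling the v3 /classify endpoint directly with the http package is shown below. The API key, instance URL, and classifier ID are placeholders you must replace with your own values; in this GET form, classifier_ids is passed as a comma-separated query string (see the API reference linked in the question for the POST form). Each class in the JSON response carries the confidence score you want to display.

import 'dart:convert';
import 'package:http/http.dart' as http;

// Sketch: classify an image URL against a custom classifier by calling
// the Visual Recognition v3 REST endpoint directly.
Future<void> classifyWithCustomModel() async {
  const apiKey = 'YOUR_API_KEY';
  const instanceUrl =
      'https://api.us-south.visual-recognition.watson.cloud.ibm.com/instances/YOUR_INSTANCE_ID';
  const classifierId = 'YOUR_CLASSIFIER_ID';

  final uri = Uri.parse('$instanceUrl/v3/classify').replace(queryParameters: {
    'version': '2018-03-19',
    'url': 'https://starindojaya.com/images/products/PAPER_CUP_PAPERCUP_2_OZ.jpg',
    'classifier_ids': classifierId, // comma-separated list of classifier IDs
  });

  // The service uses basic auth with the literal username "apikey".
  final response = await http.get(uri, headers: {
    'Authorization': 'Basic ${base64Encode(utf8.encode('apikey:$apiKey'))}',
  });

  final body = jsonDecode(response.body);
  // Print each class name with its confidence score.
  for (final image in body['images']) {
    for (final classifier in image['classifiers']) {
      for (final c in classifier['classes']) {
        print('${c['class']}: ${c['score']}');
      }
    }
  }
}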

You use the Core ML model to run Visual Recognition on-device on iOS / macOS with Apple Core ML; such apps are typically developed in Swift with Xcode.