SWIFT - CoreML - Image Classification Setup


I am running into a difficult issue while following this tutorial from the Apple docs:

https://developer.apple.com/documentation/createml/creating_an_image_classifier_model/#overview

I have already successfully created several mlmodel files as described in the tutorial. However, when I get to the step of integrating the model into Xcode, I run into the problem that I don't get any predictions from my mlmodel.

I followed all the steps in the tutorial and downloaded the example code. The tutorial says to just swap in your own model on the model line and it will work, but it doesn't.

By "doesn't work" I mean that I don't get any predictions back when I use this example. I can build the application and test it in the iPhone simulator, but the only output I get is "No predictions. (Check console log.)".

I dug through the code and found that this error message comes from MainViewController.swift (99:103):

    private func imagePredictionHandler(_ predictions: [ImagePredictor.Prediction]?) {
        // The sample shows the error label whenever the predictor returns nil.
        guard let predictions = predictions else {
            updatePredictionLabel("No predictions. (Check console log.)")
            return
        }
        // … rest of the method omitted

As I understand the code, it shows this message when no predictions come back from the mlmodel.
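One way to find the real cause is to run the model through Vision directly, outside the sample's ImagePredictor, so that any underlying error gets printed instead of being swallowed. Below is a minimal debugging sketch; `MyFruitClassifier` is a placeholder for the class name Xcode generates from your mlmodel file:

```swift
import CoreML
import Vision
import UIKit

// Sketch: classify a UIImage directly with Vision and print what comes back.
// `MyFruitClassifier` is a hypothetical name — substitute your generated model class.
func debugClassify(_ image: UIImage) {
    guard let cgImage = image.cgImage else {
        print("Could not get a CGImage from the UIImage")
        return
    }
    do {
        let coreMLModel = try MyFruitClassifier(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)

        let request = VNCoreMLRequest(model: visionModel) { request, error in
            if let error = error {
                // This is the error the sample app hides behind "No predictions".
                print("Vision error: \(error)")
                return
            }
            let observations = request.results as? [VNClassificationObservation] ?? []
            for observation in observations.prefix(5) {
                print("\(observation.identifier): \(observation.confidence)")
            }
        }
        // Vision scales/crops the input to the model's expected image size.
        request.imageCropAndScaleOption = .centerCrop

        let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
        try handler.perform([request])
    } catch {
        print("Failed to run model: \(error)")
    }
}
```

If this prints a Vision or Core ML error, that message usually points at the actual problem (wrong input type, model failed to load, etc.) rather than the generic label in the sample UI.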

If I use an mlmodel provided by Apple (such as MobileNetV2), the example code works every time and returns predictions. That's why I'm fairly sure the issue is somewhere on my side, but I can't figure it out.

The mlmodel is trained on images from the Fruits-360 dataset, plus some images of charts that I added myself. To balance the classes, I used 70 pictures per class. When I try this model in the Create ML preview, it predicts my validation pictures correctly. But when I integrate the model into Xcode, it can't produce predictions for the exact same images.
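It can also help to confirm that Xcode sees the model as an image classifier at all, by printing its input/output description. A short sketch, again with `MyFruitClassifier` as a placeholder for the generated class name:

```swift
import CoreML

// Sketch: inspect the compiled model's interface. A Create ML image classifier
// should report one image input plus a class label and a probability dictionary.
do {
    let model = try MyFruitClassifier(configuration: MLModelConfiguration()).model
    print(model.modelDescription.inputDescriptionsByName)
    print(model.modelDescription.outputDescriptionsByName)
    print(model.modelDescription.classLabels ?? "no class labels")
} catch {
    print("Could not load model: \(error)")
}
```

If the input is not an image type, or the class labels are missing, the problem is in the exported model rather than in the sample app.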

Does anyone know how to resolve this issue?

I'm using the latest Xcode version.

Thanks in advance
