I have this model. On macOS, when I feed it an image, it detects objects and labels what they are.
https://i.stack.imgur.com/qdFBr.png
However, the model has four outputs, and I believe the first one is the actual result. So in Python I converted that output to an image type as follows and saved the model as an .mlmodel:
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft

spec = coremltools.utils.load_spec("model.mlmodel")  # placeholder for my model file
output = spec.description.output[0]
output.type.imageType.colorSpace = ft.ImageFeatureType.GRAYSCALE  # grayscale image output
output.type.imageType.height = 416
output.type.imageType.width = 416
coremltools.utils.save_spec(spec, "model.mlmodel")
In Swift, the converted output is exposed in this form:
lazy var var_944: CVPixelBuffer = { [unowned self] in
    return self.provider.featureValue(for: "var_944")!.imageBufferValue
}()!
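For context, this is roughly how I run the prediction and read that output (a minimal sketch; `MyModel` and the input feature name `image` are placeholders for whatever Xcode generated, and `inputBuffer` is a 416x416 CVPixelBuffer prepared from the source image):

import CoreML
import CoreVideo

// Sketch only: "MyModel" and the "image" input name are placeholders.
func runModel(inputBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    do {
        let model = try MyModel(configuration: MLModelConfiguration())
        let output = try model.prediction(image: inputBuffer)
        return output.var_944   // the first output, now exposed as a CVPixelBuffer
    } catch {
        print("Prediction failed: \(error)")
        return nil
    }
}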
After feeding an image into the input, I convert the output CVPixelBuffer to a UIImage and assign that UIImage to an image view, but no image appears.
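This is the kind of conversion I am attempting (a minimal sketch; going through Core Image is my assumption, and `imageView` is just an outlet in my view controller):

import UIKit
import CoreImage

// Sketch: wrap the grayscale CVPixelBuffer in a CIImage, render it to a CGImage,
// and turn that into a UIImage for display.
func makeImage(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}

// Usage (assumed):
// imageView.image = makeImage(from: var_944)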
Does anyone know the solution?
(Please understand that I used Papago because I am not good at English.)