I am looking for an efficient way to convert multiple numpy arrays (images) into bytes so I can display them in a GUI, in my case imgui from https://github.com/pyimgui/pyimgui.
The way I'm doing this seems a bit counterintuitive: I get the images from neural networks and have to convert them frame by frame before the rendering engine can display them. The pipeline is (roughly sketched in code below the list):
get z vector ->
generate image data from the z vector ->
convert the image data to a PIL image ->
.convert("RGB") the PIL image ->
get the bytes of the PIL image using: data = im.tobytes("raw", "RGBA", 0, -1)
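For reference, the per-frame conversion looks roughly like this minimal sketch (generate_image, the latent size, and the array shape/range are placeholder assumptions, not my actual generator):

```python
import numpy as np
from PIL import Image

# Stand-in for the network call -- assumed to return a float32 array of
# shape (H, W, 3) with values in [0, 1]; the real generator is different.
def generate_image(z):
    return np.random.rand(256, 256, 3).astype(np.float32)

z = np.random.randn(512)                             # latent z vector
arr = generate_image(z)                              # image data from the z vector
im = Image.fromarray((arr * 255).astype(np.uint8))   # numpy -> PIL
im = im.convert("RGB")                               # the .convert("RGB") step
data = im.tobytes("raw", "RGBA", 0, -1)              # bytes handed to the texture upload
```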
This seems extremely inefficient to me, and I am doing it for 5 textures at the same time (from two different neural networks). When I instead try to hand the PIL image, or even the numpy array directly, to the OpenGL context, all I see is a glitch.
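By "displaying the numpy array directly in the OpenGL context" I mean a plain texture upload along these lines (a sketch assuming PyOpenGL, a uint8 C-contiguous RGB array, and an active GL context; frame and texture_id are placeholders):

```python
import numpy as np
from OpenGL.GL import (
    GL_LINEAR, GL_RGB, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
    GL_TEXTURE_MIN_FILTER, GL_UNSIGNED_BYTE, glBindTexture,
    glGenTextures, glTexImage2D, glTexParameteri,
)

frame = np.random.rand(256, 256, 3)                      # stand-in for one network output
arr = np.ascontiguousarray(frame * 255, dtype=np.uint8)  # OpenGL wants tightly packed uint8
height, width = arr.shape[:2]

texture_id = glGenTextures(1)                            # needs a bound GL context
glBindTexture(GL_TEXTURE_2D, texture_id)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
# PyOpenGL accepts a numpy array (or raw bytes) as the pixel-data argument.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, arr)
# texture_id is then what gets passed to imgui.image(...)
```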
Any help is appreciated.