When developing apps for iOS, it is simple to extract the pixels from a UIImage, manipulate them, and then reconstruct a UIImage from them. For example:
// pax is the raw RGBA pixel buffer; devlar and devalt are its width and height in pixels
CGColorSpaceRef csr = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pax, devlar, devalt, 8, devlar * 4,
                                         csr, kCGImageAlphaNoneSkipLast);
CGImageRef rim = CGBitmapContextCreateImage(ctx);
UIImage *gen = [UIImage imageWithCGImage:rim scale:1.0 orientation:UIImageOrientationUp];
But when developing a macOS app, things are different: the data structures and methods differ, and I don't understand how to do the same thing there.
The process is exactly the same, except that rather than the UIImage convenience initializer imageWithCGImage:, you call the NSImage initializer initWithCGImage:size:.
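A minimal sketch of the macOS side, reusing the placeholder names from the question (pax, devlar, devalt are assumptions here, not real API). Passing NSZeroSize tells NSImage to derive its point size from the CGImage's pixel dimensions:

// Same Core Graphics steps as on iOS; only the final wrapper object changes.
CGColorSpaceRef csr = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pax, devlar, devalt, 8, devlar * 4,
                                         csr, kCGImageAlphaNoneSkipLast);
CGImageRef rim = CGBitmapContextCreateImage(ctx);
// NSZeroSize means "use the pixel dimensions of the CGImage as the size"
NSImage *gen = [[NSImage alloc] initWithCGImage:rim size:NSZeroSize];
CGColorSpaceRelease(csr);
CGContextRelease(ctx);
CGImageRelease(rim);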