Working in Swift, I am trying to implement some cropping and other simple edits to full resolution photos for my iOS app. However, when I try to draw a full size image into the context, it crashes due to memory problems.
I wanted to use UIGraphicsImageRenderer.image() and keep everything as UIImages because the user might add an image not taken by the iPhone camera, and I was having trouble with orientation. I'm assuming I'll need to use a CGImage or something instead to fix this problem, but I would love some ideas about what I'm doing incorrectly.
Here is the relevant code. The canvas size is set by the user choosing an export quality, the largest being 4000 x 4000. The crash happens when the UIImage (which is stored in the class this function belongs to) is the size of a full-resolution iPhone photo, 4032 x 3024.
if let loadedImage = uiImage {
    let renderer = UIGraphicsImageRenderer(size: canvasSize)
    let img = renderer.image { ctx in
        // create background
        let rectangle = CGRect(x: 0, y: 0, width: canvasWidth, height: canvasHeight)
        ctx.cgContext.setFillColor(UIColor.white.cgColor)
        ctx.cgContext.addRect(rectangle)
        ctx.cgContext.drawPath(using: .fill)

        // draw in image
        loadedImage.draw(in: photoRect)
    } // end renderer.image

    uiImage = img
}
By default, UIGraphicsImageRenderer uses point size. Assuming you're running this on a device with a @3x screen scale, you are actually trying to generate an image that is 12,000 x 12,000 pixels. Try it by using a UIGraphicsImageRendererFormat with .scale = 1:
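A minimal sketch of that change, reusing the property names from the question's code (canvasSize, canvasWidth, canvasHeight, photoRect, and uiImage are assumed to be the same stored properties):

if let loadedImage = uiImage {
    // Force a 1:1 point-to-pixel mapping so the output bitmap is exactly canvasSize pixels,
    // instead of canvasSize multiplied by the device's screen scale.
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1

    let renderer = UIGraphicsImageRenderer(size: canvasSize, format: format)
    let img = renderer.image { ctx in
        // create background
        let rectangle = CGRect(x: 0, y: 0, width: canvasWidth, height: canvasHeight)
        ctx.cgContext.setFillColor(UIColor.white.cgColor)
        ctx.cgContext.addRect(rectangle)
        ctx.cgContext.drawPath(using: .fill)

        // draw in image
        loadedImage.draw(in: photoRect)
    }

    uiImage = img
}

With scale = 1, a 4000 x 4000 canvas produces a 4000 x 4000 pixel image rather than 12,000 x 12,000, which cuts the bitmap's memory footprint by roughly a factor of nine.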