How do I erase part of a CALayer?


I have a UIImageView which I cover with a transparent mask using CALayer.

I want to be able to erase parts of the CALayer with a brush made from a UIImage.

Here is my code so far for the first 2 steps.

topImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];
topImageView.image = [UIImage imageNamed:@"testimage2.PNG"];
topImageView.contentMode = UIViewContentModeScaleAspectFill;
topImageView.userInteractionEnabled = NO;
[self addSubview:topImageView];

CALayer *mask = [CALayer layer];
mask.bounds = CGRectMake(0, 0, topImageView.frame.size.width, topImageView.frame.size.height);
topImageView.layer.mask = mask;


1 Answer
You need to use the contents property of CALayer to edit part of the layer.

There are several ways to prepare the contents. For example, you can build an RGBA bitmap in a UInt8 array and then create a CGImage from it.

Swift:

func createCGImageFromBitmap(bitmap: UnsafeMutablePointer<UInt8>, width: Int, height: Int) -> CGImage? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Note: only premultiplied alpha is supported here (see the note below).
    let context = CGContext(data: bitmap, width: width, height: height,
                            bitsPerComponent: 8, bytesPerRow: width * 4,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    return context?.makeImage()
}

Objective-C:

CGImageRef createCGImageFromBitmap(unsigned char *bitmap, int width, int height) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Note: only premultiplied alpha is supported here (see the note below).
    CGContextRef context = CGBitmapContextCreate(bitmap, width, height, 8,
                                                 width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    // Release what we created; the caller owns imageRef.
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return imageRef;
}

Here, bitmap is just a memory array in RGBARGBA... order, whose size is width*height*4 bytes. Note that CGContext(data:...) (Swift) / CGBitmapContextCreate (Objective-C) does not accept CGImageAlphaInfo.last / kCGImageAlphaLast: it compiles, but fails at run time with an "unsupported" error. So the alpha must be pre-multiplied into the RGB channels.
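Premultiplying simply means scaling each color channel by the alpha before handing the bitmap to Core Graphics. A minimal sketch in plain C (usable as-is from Objective-C; the helper name and the rounding convention `(c * a + 127) / 255` are my own, not from any API):

```c
/* Premultiply one RGBA pixel in place: channel' = channel * alpha / 255,
 * rounded to nearest. The alpha byte itself is left untouched. */
static void premultiply_pixel(unsigned char px[4]) {
    unsigned a = px[3];
    for (int i = 0; i < 3; i++)
        px[i] = (unsigned char)((px[i] * a + 127) / 255);
}
```

For example, `{255, 128, 0, 128}` (orange at ~50% alpha) becomes `{128, 64, 0, 128}`; a fully opaque pixel (alpha 255) is unchanged.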

Then,

Swift:

    let screenScale = Int(UIScreen.main.scale)
    let widthScaled = width * screenScale
    let heightScaled = height * screenScale
    let memSize = widthScaled * heightScaled * 4
    let myBitmap = UnsafeMutablePointer<UInt8>.allocate(capacity: memSize)
    // set RGBA of myBitmap. for your case, alpha of erased area gets zero
    .....
    let imageRef = createCGImageFromBitmap(bitmap: myBitmap, width: widthScaled, height: heightScaled)
    myBitmap.deallocate()
    myCALayer.contents = imageRef

Objective-C:

    int screenScale = (int)[[UIScreen mainScreen] scale];
    int widthScaled = width * screenScale;
    int heightScaled = height * screenScale;
    int memSize = widthScaled * heightScaled * 4;
    unsigned char *myBitmap = (unsigned char *)malloc(memSize);
    // set RGBA of myBitmap. for your case, alpha of erased area gets zero
    .....
    CGImageRef imageRef = createCGImageFromBitmap(myBitmap, widthScaled, heightScaled);
    free(myBitmap);
    myCALayer.contents = CFBridgingRelease(imageRef);
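The commented-out step ("set RGBA of myBitmap; alpha of the erased area gets zero") is where the brush comes in. A minimal sketch in plain C (usable as-is from Objective-C; the helper is hypothetical) that erases a hard circular stamp by zeroing the premultiplied RGBA bytes, with cx, cy, and radius in scaled bitmap pixels:

```c
#include <string.h>

/* Erase a circular brush stamp: set all four RGBA bytes to 0
 * (premultiplied transparent) for every pixel within `radius` of (cx, cy). */
static void erase_circle(unsigned char *bitmap, int width, int height,
                         int cx, int cy, int radius) {
    for (int y = cy - radius; y <= cy + radius; y++) {
        if (y < 0 || y >= height) continue;
        for (int x = cx - radius; x <= cx + radius; x++) {
            if (x < 0 || x >= width) continue;
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= radius * radius)
                memset(bitmap + ((size_t)y * width + x) * 4, 0, 4);
        }
    }
}
```

For a soft, UIImage-shaped brush instead of a hard circle, the same loop would read the brush image's alpha at each pixel and subtract it from the mask's alpha rather than zeroing outright.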

Since Core Graphics does not take the Retina display into account, you need to scale the bitmap size manually. You can get the scale factor from UIScreen.main.scale (or [UIScreen mainScreen].scale in Objective-C).

One more note: the y axis in Core Graphics runs from bottom to top, which is the opposite of UIKit. So you need to flip the image vertically, though that is a simple task.
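The flip can be done either with a CGContext transform or directly on the bitmap by swapping rows. A minimal in-place row-swap sketch in plain C (a hypothetical helper, not part of the original answer):

```c
#include <stdlib.h>
#include <string.h>

/* Flip an RGBA bitmap vertically in place by swapping row y
 * with row (height - 1 - y). */
static void flip_vertically(unsigned char *bitmap, int width, int height) {
    size_t rowBytes = (size_t)width * 4;
    unsigned char *tmp = malloc(rowBytes);
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top = bitmap + (size_t)y * rowBytes;
        unsigned char *bottom = bitmap + (size_t)(height - 1 - y) * rowBytes;
        memcpy(tmp, top, rowBytes);
        memcpy(top, bottom, rowBytes);
        memcpy(bottom, tmp, rowBytes);
    }
    free(tmp);
}
```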

Alternatively, if you already have a UIImage of the (already edited) mask, you can create the CGImage from it directly with

myCGImage = myUIImage.cgImage