CALayer shadowPath ignored when rendering as an image

I am making an app that uses a CALayer to show content such as images in an NSWindow. The layer has a shadow styled with shadowPath to give it a nicer appearance. Finally, to save/export the whole CALayer and its contents, the parent NSView is converted to an NSImage. However, the shadow in the NSImage is entirely different from the actual shadow on the CALayer. I can't see why this is happening. Is this normal AppKit behavior, or am I doing something wrong?

This is the difference between the shadows:

image(1) - the CALayer with shadowPath (shadow only at the bottom).

image(2) - the NSImage created from the superview (shadow on all four sides).

This is how the shadow is added in image(1):

    layer.masksToBounds = false

    // Wide, flat oval placed beyond the layer's bounds, used as the shadow shape
    let size: CGFloat = 100
    let distance: CGFloat = 200
    let rect = CGRect(
        x: -size,
        y: layer.frame.height - (size * 0.4) + distance,
        width: layer.frame.width + size * 2,
        height: size
    )

    layer.shadowColor = NSColor.black.cgColor
    layer.shadowRadius = 100
    layer.shadowOpacity = 1
    layer.shadowPath = NSBezierPath(ovalIn: rect).cgPath
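
Side note: NSBezierPath.cgPath, as used on the last line above, is only available on macOS 14 and later. On earlier deployment targets a bridging helper is needed. Here is a minimal sketch; the asCGPath name is illustrative and not part of the original code:

    import Cocoa

    extension NSBezierPath {
        /// Converts the receiver to a CGPath by walking its path elements.
        var asCGPath: CGPath {
            let path = CGMutablePath()
            var points = [CGPoint](repeating: .zero, count: 3)
            for i in 0..<elementCount {
                switch element(at: i, associatedPoints: &points) {
                case .moveTo:
                    path.move(to: points[0])
                case .lineTo:
                    path.addLine(to: points[0])
                case .curveTo:
                    path.addCurve(to: points[2], control1: points[0], control2: points[1])
                case .closePath:
                    path.closeSubpath()
                default:
                    break
                }
            }
            return path
        }
    }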

This is how the superview is converted to an NSImage, shown in image(2):

    // Cache the view's current display into a bitmap rep
    let imageRep = view.bitmapImageRepForCachingDisplay(in: view.bounds)
    view.cacheDisplay(in: view.bounds, to: imageRep!)

    // Wrap the rep in an NSImage and round-trip through TIFF data
    let image = NSImage(size: view.bounds.size)
    image.addRepresentation(imageRep!)
    let imageData = image.tiffRepresentation

    return NSImage(data: imageData!)!
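
For comparison, here is a sketch of a different capture path that renders the layer tree directly with CALayer.render(in:) instead of going through cacheDisplay(in:to:). This is not a guaranteed fix (render(in:) has its own limitations, e.g. it skips filters, masks and 3D transforms), but it can help narrow down where the shadow changes. The layerSnapshot helper name is mine:

    import Cocoa
    import QuartzCore

    func layerSnapshot(of view: NSView) -> NSImage? {
        guard let layer = view.layer,
              let rep = view.bitmapImageRepForCachingDisplay(in: view.bounds),
              let context = NSGraphicsContext(bitmapImageRep: rep)
        else { return nil }

        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.current = context
        // Renders the layer hierarchy into the bitmap context.
        // Depending on the view's geometry, a vertical flip may be needed.
        layer.render(in: context.cgContext)
        NSGraphicsContext.restoreGraphicsState()

        let image = NSImage(size: view.bounds.size)
        image.addRepresentation(rep)
        return image
    }
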
1 Answer

Answered by DonMag:

One option - may or may not be suitable:

  • generate an oval image
  • blur that image
  • use the resulting image in an image view or maybe as the content of a layer

Here's an attempt - note: I work with iOS, so lots of hard-coded values and possibly (likely) incorrect ways to do this:

import Cocoa

class ViewController: NSViewController {
    
    let cyanView = NSView()
    let shadowView = NSImageView()
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        view.wantsLayer = true
        if let myLayer = view.layer {
            myLayer.backgroundColor = NSColor.gray.cgColor
        }
        
        cyanView.wantsLayer = true
        if let myLayer = cyanView.layer {
            myLayer.backgroundColor = NSColor.cyan.cgColor
        }
        
        // let's use constraints
        [shadowView, cyanView].forEach { v in
            v.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(v)
        }
        
        NSLayoutConstraint.activate([
            
            cyanView.widthAnchor.constraint(equalToConstant: 400.0),
            cyanView.heightAnchor.constraint(equalToConstant: 200.0),
            cyanView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            cyanView.topAnchor.constraint(equalTo: view.topAnchor, constant: 80.0),
            
            shadowView.widthAnchor.constraint(equalTo: cyanView.widthAnchor, multiplier: 1.5),
            shadowView.heightAnchor.constraint(equalToConstant: 80.0),
            shadowView.topAnchor.constraint(equalTo: cyanView.bottomAnchor, constant: 0.0),
            shadowView.centerXAnchor.constraint(equalTo: cyanView.centerXAnchor),
            
        ])
        
        let recognizer = NSClickGestureRecognizer(target: self, action: #selector(clickView(_:)))
        view.addGestureRecognizer(recognizer)
        
    }
    
    override func viewDidLayout() {
        super.viewDidLayout()
        
        // create a blurred oval image for the shadowView
        let img = NSImage(color: .black, size: shadowView.frame.size).oval()
        
        guard let tiffRep = img.tiffRepresentation,
              let blurFilter = CIFilter(name: "CIGaussianBlur")
        else { return }
        let inputImage = CIImage(data: tiffRep)
        blurFilter.setDefaults()
        blurFilter.setValue(inputImage, forKey: kCIInputImageKey)
        blurFilter.setValue(NSNumber(value: 40.0), forKey: "inputRadius")
        guard let outputImage = blurFilter.value(forKey: kCIOutputImageKey) as? CIImage else { return }
        let outputImageRect = NSRectFromCGRect(outputImage.extent)
        let blurredImage = NSImage(size: outputImageRect.size)
        blurredImage.lockFocus()
        outputImage.draw(at: .zero, from: outputImageRect, operation: .copy, fraction: 1.0)
        blurredImage.unlockFocus()
        
        shadowView.image = blurredImage.resize(to: shadowView.bounds.size)
    }
    
    @objc func clickView(_ sender: NSClickGestureRecognizer) {
        
        let img = view.imageRepresentation()
        
        // do something with the image
        
        print("clicked")
        
    }
    
    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }
    
}
extension NSView {
    
    func imageRepresentation() -> NSImage? {
        if let bitRep = self.bitmapImageRepForCachingDisplay(in: self.bounds) {
            bitRep.size = self.bounds.size
            self.cacheDisplay(in: self.bounds, to: bitRep)
            let image = NSImage(size: self.bounds.size)
            image.addRepresentation(bitRep)
            return image
        }
        return nil
    }

}

extension NSImage {
    
    func resize(to size: NSSize) -> NSImage {
        return NSImage(size: size, flipped: false, drawingHandler: {
            self.draw(in: $0)
            return true
        })
    }

    convenience init(color: NSColor, size: NSSize) {
        self.init(size: size)
        lockFocus()
        color.drawSwatch(in: NSRect(origin: .zero, size: size))
        unlockFocus()
    }
    
    func oval(in rect: CGRect) -> NSImage {
        let image = NSImage(size: size)
        image.lockFocus()
        
        NSGraphicsContext.current?.imageInterpolation = .high
        NSBezierPath(ovalIn: rect).addClip()
        draw(at: rect.origin, from: rect, operation: .sourceOver, fraction: 1)
        
        image.unlockFocus()
        return image
    }
    
    func oval() -> NSImage {
        return oval(in: NSRect(origin: .zero, size: size))
    }
    
}
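
Since the original goal was to save/export the captured image, a small PNG export helper could be dropped into clickView(_:). This is my addition, not part of the answer above; the writePNG(to:) name and the temporary-directory URL are just for illustration:

import Cocoa

extension NSImage {
    // Writes the image to disk as PNG data. Illustrative helper, not from the answer above.
    func writePNG(to url: URL) throws {
        guard let tiff = tiffRepresentation,
              let rep = NSBitmapImageRep(data: tiff),
              let data = rep.representation(using: .png, properties: [:])
        else {
            throw CocoaError(.fileWriteUnknown)
        }
        try data.write(to: url)
    }
}

// Example usage inside clickView(_:):
// if let img = view.imageRepresentation() {
//     let url = FileManager.default.temporaryDirectory.appendingPathComponent("snapshot.png")
//     try? img.writePNG(to: url)
// }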

Output when running:

Result of let img = view.imageRepresentation():
