Why is Core Image inserting two transforms in this render graph?


TL;DR: Why is Core Image applying an affine transform and clamp at the start and end of the attached render info diagram?

In WWDC 2017 Session 510, Apple introduced Quick Look support in Xcode for CIRenderTask and CIRenderInfo, along with improvements to CIImage. I'm trying to better understand the output of these objects and how to interpret it to improve Core Image performance.

For this test, the inputImage is a CIImage created from an IOSurface. The pipeline contains a single CIFilter that applies a Gaussian blur to the image. The filter's outputImage is then rendered to another IOSurface and displayed on screen. The mechanics are working fine, but I'd like to understand why the render takes 126ms.
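For reference, here's roughly how the pipeline is set up (names and the blur radius are placeholders, not my exact code):

```swift
import CoreImage

// The input comes from an IOSurface elsewhere in the app:
// let inputImage = CIImage(ioSurface: surface)

// Apply a single CIGaussianBlur filter to the input.
func blurred(_ inputImage: CIImage, radius: Double = 10.0) -> CIImage? {
    let blur = CIFilter(name: "CIGaussianBlur")!
    blur.setValue(inputImage, forKey: kCIInputImageKey)
    blur.setValue(radius, forKey: kCIInputRadiusKey)
    return blur.outputImage
}

// Render the result into the destination IOSurface:
// let context = CIContext()
// context.render(outputImage,
//                to: destinationSurface,
//                bounds: inputImage.extent,
//                colorSpace: CGColorSpaceCreateDeviceRGB())
```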

In the CIRenderInfo diagram, the first step applies an affine transform and a clamp, which takes 56ms. The blur filter is then applied, which takes only 42ms, and finally a second affine transform is applied that takes 27ms. (I assume the two transforms are there to flip the contents and then flip them back?)

So of the total 126ms required to render the image, only 42ms is spent in the Gaussian blur itself.

Why is Core Image adding these two extra steps, and is there a way to provide Core Image with an inputImage formatted in such a way that the transforms aren't required?
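To illustrate what I mean by "formatted in such a way": one thing I considered was clamping the input myself before the blur and cropping afterwards, on the assumption (which may be wrong) that Core Image could then fold or skip its own clamp step:

```swift
// Hypothetical: clamp to infinite extent myself so the blur has defined
// pixels at the edges, then crop back to the original extent afterwards.
let clamped = inputImage.clampedToExtent()
let blurredImage = clamped.applyingFilter("CIGaussianBlur",
                                          parameters: [kCIInputRadiusKey: 10.0])
let result = blurredImage.cropped(to: inputImage.extent)
```

I don't know whether this actually changes the render graph Core Image builds, which is part of what I'm asking.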

(Furthermore, why is Core Image not able to at least reuse a cached version of the first transform's output in subsequent renders?)


CIRenderTask output:

[Screenshot: Quick Look output of a CIRenderTask object]

CIRenderInfo output:

[Screenshot: Quick Look output of a CIRenderInfo object]


(This is on macOS 10.13)
