zfit ConvPDF for small kernel uses a lot of memory

I'm using the zfit FFTConvPDFV1 object and find that when the kernel's (space) limits are much smaller than the func's, the memory consumption rises dramatically.

In my case, I'm convolving a DoubleCB PDF with floating parameters with the convolution of two other DoubleCB PDFs with fixed parameters. The func has the same (space) limits as the floating PDF, but the kernel is much smaller (e.g. func in [-0.3, 0.2] and kernel in [-5e-7, 5e-7]). Can the memory consumption (on the order of 100 GB) be reduced somehow?
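Roughly, the setup looks like this (a minimal sketch with placeholder parameter values; the exact Space construction and keyword names, e.g. `n` and `interpolation`, may differ slightly between zfit versions):

```python
import zfit

# observable with the (space) limits of the floating PDF / the data
obs = zfit.Space("x", limits=(-0.3, 0.2))
# much narrower limits for the kernel
obs_kernel = zfit.Space("x", limits=(-5e-7, 5e-7))

# floating DoubleCB on the wide space (placeholder values)
mu = zfit.Parameter("mu", 0.0, -0.1, 0.1)
sigma = zfit.Parameter("sigma", 0.01, 1e-4, 0.1)
alphal = zfit.Parameter("alphal", 1.0)
nl = zfit.Parameter("nl", 2.0)
alphar = zfit.Parameter("alphar", 1.0)
nr = zfit.Parameter("nr", 2.0)
func = zfit.pdf.DoubleCB(mu=mu, sigma=sigma, alphal=alphal, nl=nl,
                         alphar=alphar, nr=nr, obs=obs)


def fixed_dcb(prefix, obs):
    """One fixed DoubleCB on the narrow kernel space (floating=False fixes the parameters)."""
    values = {"mu": 0.0, "sigma": 1e-7, "alphal": 1.5, "nl": 3.0,
              "alphar": 1.5, "nr": 3.0}
    params = {name: zfit.Parameter(f"{prefix}_{name}", val, floating=False)
              for name, val in values.items()}
    return zfit.pdf.DoubleCB(obs=obs, **params)


# inner convolution of the two fixed DoubleCBs -> used as the kernel
kernel = zfit.pdf.FFTConvPDFV1(func=fixed_dcb("k1", obs_kernel),
                               kernel=fixed_dcb("k2", obs_kernel))

# the problematic outer convolution: wide func, very narrow kernel
conv = zfit.pdf.FFTConvPDFV1(func=func, kernel=kernel, interpolation="linear")
```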

1 Answer

I highly suspect that this is caused by the number of values that have to be evaluated: the PDF is discretized into n x l steps, where n is the number of points used to discretize the kernel (i.e. how many are needed to describe the kernel appropriately; this can be changed and, I think, defaults to about 50) and l is how often the kernel fits into the PDF -> that's the problem.

For the convolution, the evaluated kernel slides over the points at which the PDF is evaluated. But the latter therefore has to be evaluated on a very fine grid: with your limits, the kernel fits into the func range about 0.5 / 1e-6 = 5e5 times, which with ~50 points per kernel width already gives a few times 10^7 evaluation points (see the estimate below). That's a lot! By itself that should, I assume, still barely work, somewhere in the GB region at most. (The interpolation and your data points add a bit, but let's neglect that.) But combined with the other convolution and the need for a numerical integral, this most likely just kills everything in the end (not exactly sure, but that many points for a convolution are a lot).
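To make that concrete, here is the back-of-envelope arithmetic with the limits from the question (pure Python, no zfit involved; the 50 points per kernel and the 8 bytes per float64 value are assumptions):

```python
# back-of-envelope estimate of the convolution grid for the limits in the question
func_width = 0.2 - (-0.3)         # 0.5
kernel_width = 5e-7 - (-5e-7)     # 1e-6
n_kernel = 50                     # points used to discretize the kernel (assumed default)

l = func_width / kernel_width     # how often the kernel fits into the func range: 5e5
n_points = n_kernel * l           # ~2.5e7 evaluation points for the func PDF

print(f"grid points: {n_points:.1e}")
print(f"one float64 copy of that grid: {n_points * 8 / 1e9:.2f} GB")
# every intermediate copy, the second convolution and the numerical
# normalization integral multiply this further
```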

But then, it could also be a problem within zfit that amplifies it.

So to move forward, I would suggest: can you reduce the resolution of your kernel, i.e. to only 5 points, for example? Alternatively, as a cross-check, you could try other convolution methods in Python manually, just to test (see the sketch below); I do suspect that memory will remain the bottleneck there too, as the convolution in zfit already uses the highly optimized TF convolution.
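A possible manual cross-check along those lines, using SciPy instead of zfit (a sketch; the Gaussian stand-in shapes and `n_per_kernel`, which plays the role of the ConvPDF's `n`, are placeholders):

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import norm

# discretization: the grid spacing is set by the kernel resolution
n_per_kernel = 5                          # try a coarse kernel first (vs. ~50)
kernel_width = 1e-6
dx = kernel_width / n_per_kernel
x_func = np.arange(-0.3, 0.2, dx)         # ~2.5e6 points for n_per_kernel=5
x_kernel = np.arange(-5e-7, 5e-7, dx)     # 5 points

# stand-ins for the actual shapes, just to probe memory/runtime behaviour
func_vals = norm.pdf(x_func, loc=0.0, scale=0.01)
kernel_vals = norm.pdf(x_kernel, loc=0.0, scale=1e-7)
kernel_vals /= kernel_vals.sum()          # normalize the discrete kernel

conv_vals = fftconvolve(func_vals, kernel_vals, mode="same")
print(conv_vals.shape, conv_vals.nbytes / 1e6, "MB")
```

If even this plain version blows up at the full kernel resolution, the problem is intrinsic to the grid size rather than to zfit.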

(Also, make sure to use linear interpolation, in case that isn't already the default in the ConvPDF.)

Just to mention the most elegant option of all: can you do the convolution analytically, using Mathematica/SymPy? If so, you've won! Then you could implement your own custom PDF with nearly zero memory footprint.
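To illustrate that route with the simplest stand-in case (a sketch: SymPy convolving two Gaussians analytically, then wrapping the closed form into a custom PDF following the documented ZPDF pattern; whether the DoubleCB ⊗ DoubleCB convolution has a closed form is a separate question, and the class/parameter names below are placeholders):

```python
import sympy as sp
import zfit
from zfit import z

# 1) analytic convolution of two Gaussians (toy stand-in for the actual shapes)
x, t = sp.symbols("x t", real=True)
mu1, mu2 = sp.symbols("mu1 mu2", real=True)
s1, s2 = sp.symbols("sigma1 sigma2", positive=True)

def gauss(u, mu, s):
    return sp.exp(-(u - mu) ** 2 / (2 * s ** 2)) / (s * sp.sqrt(2 * sp.pi))

# may take a moment; the result is a Gaussian with mean mu1 + mu2
# and width sqrt(s1**2 + s2**2)
conv = sp.simplify(sp.integrate(gauss(t, mu1, s1) * gauss(x - t, mu2, s2),
                                (t, -sp.oo, sp.oo)))
print(conv)

# 2) wrap the closed form into a custom zfit PDF: only the shape is needed,
#    zfit normalizes numerically over the observable space
class GaussConvGauss(zfit.pdf.ZPDF):
    _N_OBS = 1
    _PARAMS = ["mu", "sigma"]  # combined mean and width from the analytic result

    def _unnormalized_pdf(self, data):
        xval = z.unstack_x(data)
        mu = self.params["mu"]
        sigma = self.params["sigma"]
        return z.exp(-0.5 * ((xval - mu) / sigma) ** 2)

# usage (hypothetical parameter objects):
# pdf = GaussConvGauss(obs=zfit.Space("x", limits=(-0.3, 0.2)), mu=mu_comb, sigma=sigma_comb)
```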