So a cufftComplex array holds n structs, each with an x field and a y field representing the real and the imaginary part of a complex number.
On the other hand, if I want to create a vertex buffer object in OpenGL with an x and a y field, i.e. a buffer of 2D vertices that also represents n complex numbers, I would have to create an array of 2n floats with a layout like this:
x0 y0 | x1 y1 | ... | x(n-1) y(n-1)
I then write it to the VBO by calling:
glBufferData(GL_ARRAY_BUFFER, 2 * n * sizeof(GLfloat), complex_values_array, GL_DYNAMIC_DRAW);
I would like to Fourier-transform an image with cuFFT and display, e.g., the magnitude of the complex values. How do I resolve this incompatibility between the two data types? Is there a way for cuFFT to act on VBOs?
Edit:
Perhaps I should write a CUDA kernel that takes the cufftComplex data and maps the magnitude of each complex number to a 1D VBO, or a CUDA kernel that maps the cufftComplex data to a 2D VBO. I do not know what the overhead would be, but since it is a device-to-device operation I expect it to be manageable.
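For context, CUDA's OpenGL interop can expose a VBO's storage directly to a kernel, so the cuFFT output can be consumed without a host round trip. A sketch of the mapping boilerplate, assuming the GL context is current (`vbo` is a placeholder for the GLuint buffer name):

```cuda
#include <cuda_gl_interop.h>

// Register the VBO once after creating it.
cudaGraphicsResource *vboRes = NULL;
cudaGraphicsGLRegisterBuffer(&vboRes, vbo, cudaGraphicsMapFlagsWriteDiscard);

// Each frame: map the buffer and obtain a device pointer kernels can write to.
float *d_vbo = NULL;
size_t numBytes = 0;
cudaGraphicsMapResources(1, &vboRes, 0);
cudaGraphicsResourceGetMappedPointer((void **)&d_vbo, &numBytes, vboRes);

// ... launch a kernel that writes into d_vbo ...

// Unmap before OpenGL draws from the buffer again.
cudaGraphicsUnmapResources(1, &vboRes, 0);
```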
I managed to resolve this issue by writing a kernel as follows:
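The original kernel is not reproduced here; a sketch of one possible implementation, mapping each cufftComplex value to its magnitude in a mapped 1D VBO (the names `magnitudeKernel`, `d_in`, `d_vbo`, and `n` are assumptions, not taken from the original):

```cuda
#include <cufft.h>

// One thread per complex sample: write |z| = sqrt(x^2 + y^2) into the
// mapped VBO. hypotf avoids overflow in the intermediate squares.
__global__ void magnitudeKernel(const cufftComplex *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = hypotf(in[i].x, in[i].y);
}

// Example launch with 256 threads per block, d_in and d_vbo being
// device pointers (the latter from cudaGraphicsResourceGetMappedPointer):
// magnitudeKernel<<<(n + 255) / 256, 256>>>(d_in, d_vbo, n);
```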
It involves no host-device transfers, so it's pretty quick.