I wrote a full working example for both nfft and scipy.fft. In both cases I start with a simple 1D sinusoidal signal with a little noise, take the Fourier transform, and then go backwards and reconstruct the original signal.
Here is my code, as clean and readable as I could manage:
import numpy
import nfft
import scipy.fft
import matplotlib.pyplot as plt

if True: #<--- Block 1: nfft on a non-uniform time domain
    #Define signal:
    x = -0.5 + numpy.random.rand(1000)
    #x = numpy.linspace(-.5, .5, 1000) #--> in case we want to run a uniform time domain
    f = numpy.sin(10 * 2 * numpy.pi * x) + .1 * numpy.random.randn(1000) #Add some 'y' randomness to the sample

    #Prepare wavenumbers for the transform:
    N = len(x)
    k = -N // 2 + numpy.arange(N)
    #print('k', k) #---> Uniform steps [-500, -499, ..., 498, 499]

    f_k = nfft.nfft_adjoint(x, f, len(k), truncated=False)

    #Plot the transform
    plt.figure()
    plt.plot(k, f_k.real, label='real')
    plt.plot(k, f_k.imag, label='imag')
    plt.legend()

    #Reconstruct the original signal with nfft
    f_recon = nfft.nfft(x, f_k) / 2000 #<--- arbitrary normalization, see question below

    #Plot original vs reconstructed
    plt.figure()
    plt.title('nfft')
    plt.scatter(x, f, label='f(x)')
    plt.scatter(x, f_recon, label='f_recon(x)', marker='+')
    plt.legend()

if True: #<--- Block 2: scipy.fft on a uniform time domain
    #Define signal:
    x = numpy.linspace(-.5, .5, 1000)
    f = numpy.sin(10 * 2 * numpy.pi * x) + .1 * numpy.random.randn(1000) #Add some 'y' randomness to the sample

    #Prepare frequencies for the transform:
    N = len(x)
    TimeSpacing = x[1] - x[0]
    k = scipy.fft.fftfreq(N, TimeSpacing)
    #print('k', k) #---> FFT ordering: [0, 1, ..., 499, -500, ..., -1] / (N * TimeSpacing)

    f_k = scipy.fft.fft(f)

    #Plot the transform
    plt.figure()
    plt.plot(k, f_k.real, label='real')
    plt.plot(k, f_k.imag, label='imag')
    plt.legend()

    #Reconstruct the original signal with scipy.fft
    f_recon = scipy.fft.ifft(f_k)

    #Plot original vs reconstructed
    plt.figure()
    plt.title('scipy.fft')
    plt.scatter(x, f, label='f(x)')
    plt.scatter(x, f_recon, label='f_recon(x)', marker='+')
    plt.legend()

plt.show()
Here are the relevant generated plots:

The nfft reconstruction seems to fail at normalizing. I arbitrarily divided the magnitudes by 2000 just to get them to plot well. What is the correct normalization constant?
The nfft also seems to not reproduce the original points. Even if I got the normalization constant correct, there is no way I would get the original points back here.
What did I do wrong, and how do I fix it?

The above mentioned package does not implement an inverse nfft.

The ndft is f_hat @ np.exp(-2j * np.pi * x * k[:, None]). The ndft_adjoint is f @ np.exp(2j * np.pi * k * x[:, None]).
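For concreteness, here is a minimal dense (matrix) version of those two formulas; this is my own sketch mirroring the definitions above, not the package's optimized code:

import numpy as np

def ndft(x, f_hat, k):
    # forward NDFT: evaluate the trigonometric sum with coefficients f_hat
    # at the (possibly non-equispaced) points x
    return f_hat @ np.exp(-2j * np.pi * x * k[:, None])

def ndft_adjoint(x, f, k):
    # adjoint NDFT: the conjugate-transposed linear map, which is
    # NOT the inverse of ndft in general
    return f @ np.exp(2j * np.pi * k * x[:, None])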
Let k = -N//2 + np.arange(N) and A = np.exp(-2j * np.pi * x * k[:, None]); then A @ np.conj(A).T ≈ N * np.eye(N) (checked numerically). Thus, for random x the adjoint transformation is, up to a factor of N, approximately equal to the inverse transform. The given reference paper provides a few options; I implemented Algorithm 1, CGNE, from page 9. The algorithm converges slowly and poorly.
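That near-identity is easy to check numerically; a quick sketch (note the approximation: the diagonal is exactly N, while the off-diagonal entries are O(sqrt(N)) noise rather than zero):

import numpy as np

N = 1000
x = -0.5 + np.random.rand(N)
k = -N // 2 + np.arange(N)

A = np.exp(-2j * np.pi * x * k[:, None])  # dense forward NDFT matrix
G = A @ np.conj(A).T                      # compare against N * np.eye(N)

print(np.abs(np.diag(G) - N).max())       # zero up to rounding
print(np.abs(G - N * np.eye(N)).max())    # O(sqrt(N)) fluctuations, not zero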
I also tried to use scipy minimization, to minimize the residual ||nfft(x, f) - y||**2 explicitly. The results look similar; you can try different methods or parameters yourself if you want. But I believe the transformation is ill conditioned: the residual in the transformed domain is considerably reduced, while the residual on the reconstructed values remains large.
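For reference, a least-squares inversion along these lines can be sketched with a plain conjugate-gradient iteration on the normal equations. This is my own CGNR-style sketch using the dense matrix for clarity, not the paper's exact Algorithm 1; the condition number computed at the end makes the ill-conditioning claim concrete:

import numpy as np

def cg_normal_solve(A, b, n_iter=100):
    # conjugate gradient on the normal equations A^H A z = A^H b,
    # i.e. least-squares minimization of ||A z - b||**2
    z = np.zeros(A.shape[1], dtype=complex)
    r = b - A @ z
    s = A.conj().T @ r
    p = s.copy()
    gamma = np.vdot(s, s).real
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / np.vdot(q, q).real
        z += alpha * p
        r -= alpha * q
        s = A.conj().T @ r
        gamma_new = np.vdot(s, s).real
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return z

N = 1000
x = -0.5 + np.random.rand(N)
k = -N // 2 + np.arange(N)
A = np.exp(-2j * np.pi * k * x[:, None])  # maps coefficients f_hat to samples f
f = np.sin(10 * 2 * np.pi * x) + 0.1 * np.random.randn(N)

f_hat = cg_normal_solve(A, f)
print('residual:', np.linalg.norm(A @ f_hat - f))

sv = np.linalg.svd(A, compute_uv=False)
print('condition number:', sv[0] / sv[-1])  # large => ill conditioned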
Edit 1
Please check section 2.3 of the reference paper.
Numerical comparison
Cris Luengo's answer mentioned another possibility: instead of reconstructing f at the points x, you may reconstruct a resampled version at equidistant points using the ifft. So you already have three options, and I will do a quick comparison. Bear in mind that the plot shown there is based on an NFFT computed on 16k samples, while here I am using 1k samples.

Since the FFT method uses different points, we cannot compare to the original signal; what I will do is compare with the harmonic function without the noise. The variance of the noise is 0.01, so an exact reconstruction would lead to this mean squared error.

Results:
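The results plots are not reproduced here, but this is roughly how such a comparison can be set up; it is my reconstruction of the methodology described above, not the answer's original benchmark code:

import numpy as np
import nfft
import scipy.fft

N = 1000
noise = 0.1  # noise std, so variance 0.01 is the MSE floor

# non-uniform samples for the nfft-based reconstructions
x = -0.5 + np.random.rand(N)
clean = np.sin(10 * 2 * np.pi * x)
f = clean + noise * np.random.randn(N)

# option 1: adjoint used as an approximate inverse, normalized by N
f_k = nfft.nfft_adjoint(x, f, N)
recon = nfft.nfft(x, f_k) / N
print('adjoint MSE:', np.mean(np.abs(recon - clean) ** 2))
# (option 2 would feed f into the iterative least-squares solver above)

# option 3: uniform grid, FFT round trip, compared on its own grid
xg = np.linspace(-0.5, 0.5, N, endpoint=False)
fg = np.sin(10 * 2 * np.pi * xg) + noise * np.random.randn(N)
recon_fft = scipy.fft.ifft(scipy.fft.fft(fg))
print('fft MSE:', np.mean(np.abs(recon_fft - np.sin(10 * 2 * np.pi * xg)) ** 2))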
Another way to view the results:
Even though the IFFT matrix is better conditioned, it gives a worse reconstruction of the signal. Also, from the last plot it becomes more noticeable that there is a slight attenuation, probably due to a systematic energy leak to the imaginary part (an error in my code??). As a quick test, multiplying it by 1.3 gives better results.
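If the attenuation really is a simple scalar factor, it can be estimated by least squares instead of guessed. A small sketch (my addition; the 1.3 above was found empirically):

import numpy as np

def best_scale(recon, clean):
    # scalar a minimizing ||a * recon - clean||**2 for real signals
    recon = np.real(recon)  # drop the (small) imaginary leak first
    return np.dot(recon, clean) / np.dot(recon, recon)

# usage: a = best_scale(recon, clean); recon_scaled = a * np.real(recon)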