
Effects of Noise, Time-Domain Damping, Zero-Filling and the FFT Algorithm on the "Exact" Interpolation of Fast Fourier Transform Spectra

Volume 42, Number 5 (July 1988), Pages 715-721

Verdun, Francis R.; Giancaspro, Carlo; Marshall, Alan G.

A frequency-domain Lorentzian spectrum can be derived from the Fourier transform of a time-domain exponentially damped sinusoid of infinite duration. Remarkably, it has been shown that even when such a noiseless time-domain signal is truncated to zero amplitude after a finite observation period, one can determine the correct frequency of its corresponding magnitude-mode spectral peak maximum by fitting as few as three spectral data points to a magnitude-mode Lorentzian spectrum. In this paper, we show how the accuracy of such a procedure depends upon the ratio of time-domain acquisition period to exponential damping time constant, number of time-domain data points, computer word length, and number of time-domain zero-fillings. In particular, we show that extended zero-filling (e.g., a "zoom" transform) actually reduces the accuracy with which the spectral peak position can be determined. We also examine the effects of frequency-domain random noise and round-off errors in the fast Fourier transformation (FFT) of time-domain data of limited discrete data word length (e.g., 20 bit/word at single and double precision). Our main conclusions are: (1) even in the presence of noise, a three-point fit of a magnitude-mode spectrum to a magnitude-mode Lorentzian line shape can offer an accurate estimate of peak position in Fourier transform spectroscopy; (2) the results can be more accurate (by a factor of up to 10) when the FFT processor operates with floating-point (preferably double-precision) rather than fixed-point arithmetic; and (3) FFT roundoff errors can be made negligible by use of sufficiently large (> 16 K) data sets.
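The three-point fit described above can be sketched in code. The snippet below is a minimal illustration, not the authors' exact fitting procedure: it exploits the fact that for an ideal magnitude-mode Lorentzian, the reciprocal of the squared magnitude is a quadratic function of frequency, so a parabola through the reciprocal squared magnitudes at the peak bin and its two neighbors locates the peak exactly. Real FFT peaks from truncated, noisy time-domain signals deviate from this ideal, which is precisely the accuracy question the paper investigates. The function name and bin-unit convention are illustrative assumptions.

```python
import numpy as np

def lorentzian_peak_interp(mag, k):
    """Estimate the true peak position (in bin units) from three
    magnitude-mode spectral points centered on the local maximum
    at index k.

    For an ideal magnitude-mode Lorentzian,
        M(f) = A / sqrt((f - f0)**2 + w**2),
    the quantity 1/M**2 = ((f - f0)**2 + w**2) / A**2 is a quadratic
    in f, so the vertex of a parabola through 1/M**2 at bins
    k-1, k, k+1 recovers f0 exactly (for noiseless Lorentzian data).
    """
    y = 1.0 / mag[k - 1:k + 2] ** 2          # reciprocal squared magnitudes
    # Vertex offset of the parabola through (−1, y0), (0, y1), (+1, y2)
    delta = (y[0] - y[2]) / (2.0 * (y[0] - 2.0 * y[1] + y[2]))
    return k + delta

# Synthetic check on an exact magnitude-mode Lorentzian sampled at
# integer bins: true peak at f0 = 10.3 bins, half-width w = 2.0 bins.
bins = np.arange(32, dtype=float)
f0, w = 10.3, 2.0
mag = 1.0 / np.sqrt((bins - f0) ** 2 + w ** 2)
k = int(np.argmax(mag))
estimate = lorentzian_peak_interp(mag, k)
print(estimate)  # recovers 10.3 to machine precision for ideal data
```

With noise or signal truncation the three sampled points no longer lie exactly on a Lorentzian, and the estimate degrades in the ways the paper quantifies (as a function of damping ratio, zero-filling, and FFT word length).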