ESS 265

Chapter 8. Transforming to the Frequency Domain

 

The key to extracting information from a set of measurements is to display those measurements in such a manner that their information content becomes obvious. Often the key is to move from a spatial or temporal domain into a frequency domain in which the exact temporal or spatial sequence is not as important as the fact that the signal repeats on a particular temporal or spatial scale. Modern techniques let us readily transform a series of 2N points into estimates of the amplitudes of a set of oscillators at N frequencies, with two phases in quadrature at each frequency. The first such transformation in wide use was the Fourier transform. These frequency-domain amplitudes were used to create estimates of power spectral density and other parameters that characterize the nature of the signal [Rankin and Kurtz, 1970; Means, 1972]. It is perhaps of only historical interest that, at the dawn of the computer age before the spread of fast algorithms to compute Fourier transforms, power spectra were not calculated via Fourier transforms of the original data but by first calculating an auto-correlation function and then transforming it to obtain the power spectrum. In this chapter we will examine the Fourier transform for continuous and then discrete signals, the creation of spectra, dynamic spectra or windowed Fourier spectra, and then wavelet transformations, a more recent development.

Fourier Transformation

A signal is a function that conveys information, generally about the state or behavior of a physical system. Continuous-time signals are defined at a continuum of times and thus are represented by functions of a continuous time variable. We usually refer to signals that are continuous in time and amplitude as analog signals. Discrete-time signals are defined at specific moments in time and thus the independent variable takes on only discrete values. Digital signals are discrete in both time and amplitude. A system is defined by a unique transformation or operator T that maps an input sequence x(t) into an output sequence y(t). This is denoted as: y(t) = T[x(t)]. A linear system is defined by the principle of superposition: if y1(t) and y2(t) are the responses when x1(t) and x2(t) respectively are the inputs, then the system is linear if T[ax1(t) + bx2(t)] = aT[x1(t)] + bT[x2(t)] = ay1(t) + by2(t) for arbitrary constants a and b. A time-invariant system has the property that if y(t) is the response to x(t), then y(t − τ) is the response to x(t − τ), where τ is an arbitrary delay. The eigenfunctions of linear time-invariant systems are complex sinusoidal waves, e^{iωt}. Thus it is "tempting" to decompose any function as a sum of these eigenfunctions. This process is known as Fourier analysis.

The Fourier transform provides a spectral density distribution which identifies the amplitudes and phases at the various frequencies that contribute to a time series. However, it provides no information on their temporal or spatial localization within the series. Therefore, Fourier transformation may not be the technique of choice for analysis of nonstationary and/or weak signals in geophysical data. Mathematically, the Fourier transform is expressed as:
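In one common convention (the sign of the exponent and the placement of the 2π vary between texts), the transform pair is

    H(f) = \int_{-\infty}^{\infty} h(t)\, e^{-i 2\pi f t}\, dt, \qquad h(t) = \int_{-\infty}^{\infty} H(f)\, e^{+i 2\pi f t}\, df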

Figure 8.1 shows a number of familiar transformations. The top panel shows that the transform of a box car has the form of a sin(x)/x function. The next panel shows that the Fourier transform of a sin(x)/x function is a box car. The bottom two panels show that the transform of a constant signal is a delta function and vice versa.

The Fourier transform has many useful properties. First, it is linear: if the Fourier transforms of x(t) and y(t) are X(f) and Y(f), then the Fourier transform of x(t) + y(t) is X(f) + Y(f). The process is also symmetric, so that if the Fourier transform of h(t) is H(f), then the Fourier transform of H(t) is h(−f). Time scaling is such that the Fourier transform of h(kt) is |k|^{-1} H(f/k), where k is a real number. Similarly, frequency scaling is such that the inverse Fourier transform of H(kf) is |k|^{-1} h(t/k). Finally, the convolution theorem states that the Fourier transform of the convolution of two functions is equal to the product of their Fourier transforms, i.e.
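In the notation above, with the asterisk denoting convolution,

    (h * g)(t) = \int_{-\infty}^{\infty} h(\tau)\, g(t - \tau)\, d\tau \quad \longleftrightarrow \quad H(f)\, G(f)

and, symmetrically, multiplication in the time domain corresponds to convolution in the frequency domain.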

Figure 8.2 illustrates a number of useful properties of Fourier transforms. In the top panel we see that the Fourier transform of a series of delta functions is a series of delta functions in the frequency domain. In the middle panel we see that the Fourier transform of a cosine wave, which is symmetric in time, is two delta functions equally spaced on either side of zero frequency. The Fourier transform of a sine wave, which is antisymmetric in time, is again a pair of delta functions equally spaced about zero frequency but of opposite sign. Figure 8.3 illustrates frequency scaling. Here the analysis interval goes from −T0 to +T0 in each case, but the duration of the box car signal within this interval shortens from top to bottom.

The convolution theorem is a very powerful one in Fourier transform analysis and it is worth spending a few moments to understand it. Figure 8.4 illustrates the steps needed to perform a convolution of two series in the time domain. Figure 8.5 shows two time series at the top and their Fourier transforms on the bottom. Their convolution and the multiplication of the two Fourier transforms yield the two functions in the center of the diagram that are related via a Fourier transform. Figure 8.6 illustrates the same procedure for two different time series and their transforms. The top now shows a continuous time series on the left and a series of delta functions on the right. Their multiplication gives the "sampled" waveform in the middle of the diagram. If we convolve their two Fourier transforms we get the lower middle spectrum that is the Fourier transform of the "sampled" waveform above.

Unless we are exploring a theoretical proof, we are generally working with a discrete time series, and we analyze this temporal or spatial series with a discrete Fourier transform. If a series consists of N measurements over an interval NT, then the discrete Fourier transform of the signal has N frequency estimates, calculated by summing the product of the original series g(kT), k = 0 to N − 1, times the frequency factor exp(−i2πnk/N):

    G(n/NT) = \sum_{k=0}^{N-1} g(kT)\, \exp(-i 2\pi n k / N), \qquad n = 0, 1, \ldots, N-1

We recall that exp(iθ) is a complex number equal to cos θ + i sin θ, and the result of this operation is a set of (N/2 + 1) real coefficients from zero frequency to half the sampling frequency and a set of (N/2 − 1) imaginary coefficients running from the frequency corresponding to the inverse of the fundamental period of the time series up to a frequency that is less than half the sampling frequency by the fundamental frequency. We can write the inverse discrete transform as:
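With the 1/N normalization placed on the inverse transform (one common convention),

    g(kT) = \frac{1}{N} \sum_{n=0}^{N-1} G(n/NT)\, \exp(+i 2\pi n k / N), \qquad k = 0, 1, \ldots, N-1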

In order to calculate a discrete Fourier transform, the time function h(t) must be periodic and it must be band-limited. The sampling rate must be at least twice the largest frequency component of h(t), and the truncation function x(t) must be non-zero over exactly one period (or an integer multiple of the period) of h(t). In modern analysis we calculate the Fourier transform with a fast Fourier transform (FFT) algorithm that reduces the computation time from order N^2 to order N log_2 N. There are numerous such algorithms today, some of which use mixed-radix arithmetic and can handle series whose lengths are not a power of 2.
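As a minimal sketch of this computation (the signal, its frequency, and the sampling rate below are invented for illustration), NumPy's real-input FFT returns exactly the N/2 + 1 coefficients described above:

    import numpy as np

    # Invented example: 512 samples of a 5 Hz sine wave sampled at 64 Hz
    N, fs = 512, 64.0
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 5.0 * t)

    # Real-input FFT: N/2 + 1 complex coefficients from zero frequency
    # up to the Nyquist frequency fs/2, computed in O(N log N) operations
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)

    # One-sided power spectral density estimate (periodogram normalization)
    psd = np.abs(X) ** 2 / (fs * N)
    psd[1:-1] *= 2.0          # fold negative-frequency power into positive bins

    print(freqs[np.argmax(psd)])   # prints 5.0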

To understand why we must sample at a rate that is at least twice the highest frequency present in the signal, we examine Figure 8.7, which illustrates what happens when we do not obey this rule. The top left panel shows the waveform of a band-limited signal whose spectrum cuts off at fc, as shown on the bottom right. The top right panel shows the sampling function, an infinite series of delta functions whose separations are greater than the distance between the peaks and valleys of the waveform on the left. When these two functions are multiplied they produce the discrete time series in the middle, whose Fourier transform is shown in the lower middle panel. This Fourier transform bears only a passing resemblance to the Fourier transform on the lower right, so the digitization process has strongly affected the spectrum. How this has come about can easily be seen by examining the Fourier transform of the sampling waveform. When this is convolved with the spectrum on the lower left, the close spacing in frequency of the sampling transform's delta functions causes the infinite series of replicated spectra to overlap. Thus power at one frequency has been ascribed to an incorrect part of the spectrum. This process of mistaken frequency assignment due to undersampling is called aliasing.
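A short sketch of aliasing with invented numbers: a 9 Hz cosine sampled at 10 samples per second (Nyquist frequency 5 Hz) shows up at 1 Hz:

    import numpy as np

    fs = 10.0                      # sample rate; Nyquist frequency is 5 Hz
    f_true = 9.0                   # signal frequency above Nyquist
    N = 1000
    t = np.arange(N) / fs
    x = np.cos(2 * np.pi * f_true * t)

    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)

    # The peak appears at |fs - f_true| = 1 Hz, not at 9 Hz
    print(freqs[np.argmax(np.abs(X))])   # prints 1.0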

If a band-limited periodic waveform is present with a period near the truncation interval or some small submultiple of it, then it is important to choose the truncation period to be as nearly equal as possible to the period of the signal (or a multiple of that period). This is illustrated in Figure 8.9, where the proper truncation interval has been chosen for the analysis of the waveform shown in the top left panel, and in Figure 8.10, where the truncation period and the analysis interval are not equal. The panels here each replicate the steps shown in Figure 8.8 in generating the spectrum on the lower right. We see that when the truncation interval is properly matched to the waveform, the spectrum accurately reproduces the spectrum of the undigitized signal on the top right, but when the truncation interval is not matched to the wave, there are additional frequencies in the sampled spectrum.
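A sketch of the effect, with invented parameters: the same 5 Hz sine analyzed over a window holding exactly ten cycles and over one holding nine and a half cycles:

    import numpy as np

    fs, f0 = 100.0, 5.0            # sample rate and signal frequency (invented)

    def amplitude_spectrum(n):
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f0 * t)
        return np.abs(np.fft.rfft(x)) / n

    a_matched = amplitude_spectrum(200)     # 2.0 s window: exactly 10 cycles
    a_mismatched = amplitude_spectrum(190)  # 1.9 s window: 9.5 cycles

    # Fraction of the summed amplitude that leaks outside the strongest bin
    print(1 - a_matched.max() / a_matched.sum())       # essentially 0
    print(1 - a_mismatched.max() / a_mismatched.sum()) # substantial leakage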

We can illustrate the effects of the process of digitizing signals with Figure 8.8. On the upper left is a cusp-shaped waveform whose Fourier transform is on the right. Our sampling waveform is immediately below with its Fourier transform, and the result of sampling the waveform is immediately below that. The convolution of the top two transforms on the right has produced some overlap at the Nyquist frequency, or half the sample rate, because of the high-frequency content of the peak in the center of our cusp-shaped waveform. The next two panels down show the truncation function and its transform. This function appears because the analyzed time series does not go on forever, even though the original time series may be infinitely long. The transform of this box car function is the sin(x)/x function on the right. When these functions are multiplied (on the left) and convolved (on the right) with our original sampled waveform and its transform, we obtain the signal and transform in the third row from the bottom. The transformed signal now has irregularities introduced by the sin(x)/x convolution that were not there before. Finally, the Fourier transform process assumes that the unsampled time series to the left and right of our interval of interest repeats ad infinitum. To obtain such a repeating time series, as shown on the bottom left, we must convolve with the infinite series of delta functions (on the left) in the second row from the bottom. The Fourier transform of this is the infinite series of delta functions on the right, which, when multiplied by the spectrum above it, produces the discretely sampled spectrum on the lower right.

Dynamic Spectra

Although the Fourier transform is not ideally suited for a signal whose properties change in the analysis interval, it can be used dynamically if the signals change slowly enough. This permits the creation of three-dimensional displays of, for example, wave power as contours on a grid of frequency versus time. These displays are often called spectrograms or dynamic spectra. To create these we use a windowed Fourier transform.

The windowed Fourier transform localizes signals in both the frequency and time domains. However, the width of the time-frequency window is fixed. Since high-frequency oscillations require a narrow time window, while low-frequency oscillations require a wide time window, the windowed Fourier transform has limited application for simultaneously detecting high frequency signals embedded within low frequency phenomena. The windowed Fourier transform can be expressed as:
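One common way to write it, with w a window function centered on time τ (normalization conventions vary), is

    X(\tau, f) = \int_{-\infty}^{\infty} x(t)\, w(t - \tau)\, e^{-i 2\pi f t}\, dt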

To understand why one might wish to use windowed Fourier analysis, consider the signals shown in Figure 8.11. On the left the signal consists of two simultaneously present components at different frequencies. On the right the signal changes frequency in the middle of the interval. If we Fourier analyze over the entire interval shown, we get the spectra shown below. There is no obvious way to determine from these spectra whether the original waveform was like the one on the top left or the top right.

If we subdivide the analysis interval into short segments we can create spectrograms that distinguish the two cases, as shown in Figure 8.12. To avoid power introduced by the truncation process, it is wise to apply a tapered window in the time domain, such as the Hanning (Hann) window shown in Figure 8.13. The Fourier transform of this window is shown in the lower portion of the figure. It has low-amplitude high-frequency side lobes that pick up minimal power from the truncation process.
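A minimal sketch of such a Hann-tapered, overlapping-segment spectrogram using SciPy; the test signal, segment length, and overlap are arbitrary choices for illustration:

    import numpy as np
    from scipy.signal import spectrogram

    # Invented test signal: frequency jumps from 5 Hz to 15 Hz halfway through
    fs = 100.0
    t = np.arange(0, 60, 1.0 / fs)
    x = np.where(t < 30, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 15 * t))

    # Hann-tapered, 50%-overlapping segments
    f, seg_times, Sxx = spectrogram(x, fs=fs, window='hann',
                                    nperseg=256, noverlap=128)

    # Sxx has shape (len(f), len(seg_times)): power spectral density versus time
    print(Sxx.shape)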

A time series of magnetometer data obtained at Jupiter by the Galileo spacecraft as it approached the moon Io is shown in Figure 8.14. We immediately see two strong features: a periodic signal, strongest in the r direction and the other direction nearly perpendicular to the magnetic field, and a linear trend in the data due to the increase of the magnetic field strength as Jupiter is approached. A linear trend produces signal over a broad band of frequencies, because the truncated, infinitely replicated waveform becomes a sawtooth wave that the Fourier transform attempts to reproduce. Thus it is prudent to remove the trend before applying the windowed Fourier transform analysis. The results of this analysis are illustrated in Figure 8.15, which shows power as a function of frequency over a 15-minute interval plus two quantities derived from the full complex spectral matrix: the ellipticity of the signal and its propagation angle. The two power estimates shown are calculated as follows. The compressional power is the power spectral density, as a function of frequency, of the field strength. The transverse power is the sum of the power spectral densities of the three orthogonal sensors minus the compressional power.
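The bookkeeping described above can be sketched as follows, assuming a three-component field array B of shape (N, 3) and a sample rate fs (hypothetical names), and using Welch averaging as a stand-in for the single-interval estimates described in the text:

    import numpy as np
    from scipy.signal import detrend, welch

    def compressional_and_transverse_power(B, fs, nperseg=256):
        """B: array of shape (N, 3) holding three orthogonal field components."""
        # Compressional power: PSD of the field strength |B|, detrended so the
        # slowly rising background field does not spread power across the spectrum
        Bmag = np.linalg.norm(B, axis=1)
        f, P_comp = welch(detrend(Bmag), fs=fs, nperseg=nperseg)

        # Total power: sum of the PSDs of the three detrended components
        P_total = sum(welch(detrend(B[:, i]), fs=fs, nperseg=nperseg)[1]
                      for i in range(3))

        # Transverse power = total minus compressional, as described in the text
        return f, P_comp, P_total - P_comp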

Analysis with Spatial Localization

The windowed Fourier transform allows us to examine slowly time-varying signals but its underlying basis functions are infinitely enduring periodic waves. The waves that one encounters in practice rarely are this ideal. Recently new techniques have been advanced to deal with time-varying signals. The most popular of these is the wavelet transformation.

Figure 8.16 shows Morlet wavelets scaled and translated with varying s and u. We note that the Morlet wavelet comes in two phases corresponding to the sine and cosine waves of Fourier analysis, as illustrated in Figure 8.17. The real and imaginary parts of the Morlet wavelet are the products of a Gaussian envelope with these two monochromatic waves. The width of the Gaussian envelope relative to the wave period is adjustable, so that a Morlet wavelet transform can be constructed with few or many wave cycles. Once the number of cycles is specified, scaling and translation do not affect the number of cycles in the wavelet. The powers in the pure sinusoid and the Morlet wavelet are also shown.
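A sketch of a scaled and translated Morlet wavelet, using the common pi^{-1/4} normalization and a center-frequency parameter w0 of 6 (conventional choices, not taken from the figure):

    import numpy as np

    def morlet(t, s=1.0, u=0.0, w0=6.0):
        """Morlet wavelet translated to time u and dilated by scale s.

        The real and imaginary parts are a Gaussian envelope multiplied by
        cosine and sine waves; w0 sets how many cycles fit under the envelope.
        """
        x = (t - u) / s
        envelope = np.pi ** (-0.25) * np.exp(-0.5 * x ** 2)
        return envelope * np.exp(1j * w0 * x) / np.sqrt(s)

    t = np.linspace(-5, 5, 1001)
    psi = morlet(t, s=1.0, u=0.0)   # psi.real is cosine-phase, psi.imag is sine-phase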


The uncertainty principle as applied to windowed Fourier transforms and to wavelets is shown in Figure 8.18.

We can create scalograms from wavelet transformations that are analogous to the dynamic spectrograms of windowed Fourier analysis. Our example of simultaneous signals and a frequency shift in one signal is shown in Figure 8.19. We can also use wavelet transformations to create power spectral densities of naturally occurring signals such as those seen at Jupiter. Such a spectrum is shown in Figure 8.20. We can use the two Morlet wavelets corresponding to sine and cosine waves to get real and imaginary transforms and calculate phase differences between pairs of signals. This is shown in Figure 8.21. Wavelets in fact can be used to produce almost everything that has been developed for the Fourier transform.
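A minimal scalogram sketch built directly from the Morlet wavelet by correlation at each scale; the test signal and scale range are invented, and a library routine could be substituted for the explicit loop:

    import numpy as np

    def morlet(t, s, w0=6.0):
        """Morlet wavelet at scale s (real part cosine-like, imaginary part sine-like)."""
        x = t / s
        return np.pi ** (-0.25) * np.exp(1j * w0 * x - 0.5 * x ** 2) / np.sqrt(s)

    def scalogram(x, dt, scales, w0=6.0):
        """Wavelet power |W(s, t)|^2; rows correspond to scales, columns to times."""
        n = len(x)
        t = (np.arange(n) - n // 2) * dt       # wavelet support centered on zero
        power = np.empty((len(scales), n))
        for i, s in enumerate(scales):
            psi = morlet(t, s, w0)
            # correlation of the signal with the conjugated wavelet at this scale
            W = np.convolve(x, np.conj(psi)[::-1], mode='same') * dt
            power[i] = np.abs(W) ** 2
        return power

    # Invented test signal: 5 Hz in the first half of the interval, 15 Hz afterwards
    fs = 100.0
    t = np.arange(0, 20, 1.0 / fs)
    x = np.where(t < 10, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 15 * t))

    # Scales chosen so the approximate equivalent frequency w0 / (2 pi s) spans 2-20 Hz
    freqs = np.linspace(2.0, 20.0, 40)
    power = scalogram(x, 1.0 / fs, 6.0 / (2 * np.pi * freqs))
    print(power.shape)                         # (number of scales, number of samples)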

References

Means, J. D., Use of the three-dimensional covariance matrix in analyzing the polarization properties of plane waves, J. Geophys. Res., 77, 5551-5559, 1972.

Rankin, D., and R. Kurtz, Statistical study of micropulsation amplitudes, J. Geophys. Res., 75, 5444-5458, 1970.



