Discrete Hartley transform
A discrete Hartley transform is a Fourier-related transform of discrete, periodic data similar to the discrete Fourier transform, with analogous applications in signal processing and related fields. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform, the DHT is the discrete analogue of the continuous Hartley transform, introduced by Ralph V. L. Hartley in 1942.
Because there are fast algorithms for the DHT analogous to the fast Fourier transform, the DHT was originally proposed by Ronald N. Bracewell in 1983 as a more efficient computational tool in the common case where the data are purely real. It was subsequently argued, however, that specialized FFT algorithms for real inputs or outputs can ordinarily be found with slightly fewer operations than any corresponding algorithm for the DHT.
Definition
Formally, the discrete Hartley transform is a linear, invertible function H: R^N → R^N (where R denotes the set of real numbers). The N real numbers x_0, ..., x_{N−1} are transformed into the N real numbers H_0, ..., H_{N−1} according to the formula

H_k = \sum_{n=0}^{N-1} x_n \left[ \cos\left( \frac{2\pi nk}{N} \right) + \sin\left( \frac{2\pi nk}{N} \right) \right], \qquad k = 0, \dots, N-1.

The combination \cos(z) + \sin(z) is sometimes denoted \operatorname{cas}(z), and should not be confused with \operatorname{cis}(z) = e^{iz} = \cos(z) + i\sin(z), or e^{-iz} = \cos(z) - i\sin(z), which appears in the DFT definition.
As with the DFT, the overall scale factor in front of the transform and the sign of the sine term are a matter of convention. Although these conventions occasionally vary between authors, they do not affect the essential properties of the transform.
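The definition above can be transcribed directly in Python. This is a sketch of the O(N²) evaluation, not a fast algorithm; the function name `dht` is illustrative, not from any particular library.

```python
import math

def dht(x):
    """Direct O(N^2) DHT: H_k = sum_n x_n * [cos(2*pi*n*k/N) + sin(2*pi*n*k/N)]."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N) +
                        math.sin(2 * math.pi * n * k / N))
                for n in range(N))
            for k in range(N)]

# The DHT of a unit impulse is flat: every output bin equals cas(0) = 1.
H = dht([1.0, 0.0, 0.0, 0.0])
print(H)  # [1.0, 1.0, 1.0, 1.0]
```

Note that, following the convention used here, no scale factor is applied to the forward transform.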
Properties
The transform can be interpreted as the multiplication of the vector (x_0, x_1, ..., x_{N−1}) by an N-by-N matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible; the inverse transformation, which allows one to recover the x_n from the H_k, is simply the DHT of H_k multiplied by 1/N. That is, the DHT is its own inverse, up to an overall scale factor.

The DHT can be used to compute the DFT, and vice versa. For real inputs x_n, the DFT output X_k has a real part (H_k + H_{N−k})/2 and an imaginary part (H_{N−k} − H_k)/2. Conversely, the DHT is equivalent to computing the DFT of x_n multiplied by 1 + i, then taking the real part of the result.
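These three properties can be checked numerically with direct transforms. The following is a minimal sketch (function names `dht` and `dft` are illustrative); indices of the form N−k are taken modulo N, consistent with the periodicity of the transform.

```python
import cmath
import math

def dht(x):
    """Direct DHT: H_k = sum_n x_n * [cos(2*pi*n*k/N) + sin(2*pi*n*k/N)]."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N) +
                        math.sin(2 * math.pi * n * k / N)) for n in range(N))
            for k in range(N)]

def dft(x):
    """Direct DFT: X_k = sum_n x_n * exp(-2j*pi*n*k/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * n * k / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 3.0, 5.0, 8.0]
N = len(x)
H, X = dht(x), dft(x)

# DHT applied twice recovers N times the input (self-inverse up to 1/N):
self_inverse_ok = all(abs(h2 / N - xn) < 1e-9 for h2, xn in zip(dht(H), x))
# Re X_k = (H_k + H_{N-k})/2 and Im X_k = (H_{N-k} - H_k)/2 (H[-k % N] is H_{N-k}):
dft_ok = all(abs(X[k].real - (H[k] + H[-k % N]) / 2) < 1e-9 and
             abs(X[k].imag - (H[-k % N] - H[k]) / 2) < 1e-9 for k in range(N))
# DHT = real part of the DFT of (1 + i) * x:
dht_ok = all(abs(H[k] - Xk.real) < 1e-9
             for k, Xk in enumerate(dft([(1 + 1j) * v for v in x])))
print(self_inverse_ok, dft_ok, dht_ok)
```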
As with the DFT, a cyclic convolution z = x∗y of two vectors x = (x_0, ..., x_{N−1}) and y = (y_0, ..., y_{N−1}) to produce a vector z = (z_0, ..., z_{N−1}), all of length N, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. Then the elements of Z are given by:

Z_k = \frac{1}{2} \left[ X_k \left( Y_k + Y_{N-k} \right) + X_{N-k} \left( Y_k - Y_{N-k} \right) \right]
where we take all of the vectors to be periodic in N. Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers, the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT yields a fast algorithm for convolution.
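The convolution procedure can be sketched as follows, using a direct O(N²) DHT for clarity (a fast algorithm would be substituted in practice). The helper name `cyclic_conv_dht` is illustrative; the result is compared against the convolution computed directly from its definition.

```python
import math

def dht(x):
    """Direct DHT with the cas kernel cos + sin."""
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N) +
                        math.sin(2 * math.pi * n * k / N)) for n in range(N))
            for k in range(N)]

def cyclic_conv_dht(x, y):
    """Cyclic convolution via the DHT frequency-domain combination."""
    N = len(x)
    X, Y = dht(x), dht(y)
    # Z_k = (X_k (Y_k + Y_{N-k}) + X_{N-k} (Y_k - Y_{N-k})) / 2, indices mod N:
    Z = [0.5 * (X[k] * (Y[k] + Y[-k % N]) + X[-k % N] * (Y[k] - Y[-k % N]))
         for k in range(N)]
    return [v / N for v in dht(Z)]  # inverse DHT = forward DHT scaled by 1/N

x = [1.0, 2.0, 3.0, 4.0]
y = [5.0, 6.0, 7.0, 8.0]
direct = [sum(x[m] * y[(n - m) % 4] for m in range(4)) for n in range(4)]
via_dht = cyclic_conv_dht(x, y)
print(all(abs(a - b) < 1e-9 for a, b in zip(direct, via_dht)))
```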
Fast algorithms
Just as for the DFT, evaluating the DHT definition directly would require O(N^2) arithmetical operations. There are fast algorithms similar to the FFT, however, that compute the same result in only O(N log N) operations. Nearly every FFT algorithm, from Cooley–Tukey to prime-factor to Winograd to Bruun's, has a direct analogue for the discrete Hartley transform.

In particular, the DHT analogue of the Cooley–Tukey algorithm is commonly known as the fast Hartley transform (FHT) algorithm, and was first described by Bracewell in 1984. This FHT algorithm, at least when applied to power-of-two sizes N, is the subject of United States patent number 4,646,256, issued in 1987 to Stanford University. Stanford placed this patent in the public domain in 1994.
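One O(N log N) route, sketched below, computes the DHT through a complex radix-2 Cooley–Tukey FFT and the identity H_k = Re(X_k) − Im(X_k) for real inputs. Note this is not Bracewell's native FHT recursion, just an assumed FFT-based shortcut for illustration; the function names are made up, and N must be a power of two.

```python
import cmath
import math

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * math.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + tw[k] for k in range(N // 2)] +
            [even[k] - tw[k] for k in range(N // 2)])

def fht(x):
    """DHT of real x in O(N log N): H_k = Re(X_k) - Im(X_k)."""
    return [X.real - X.imag for X in fft(x)]

# Check against the O(N^2) definition for N = 8:
x = [1.0, 3.0, 2.0, 5.0, 4.0, 7.0, 6.0, 8.0]
ref = [sum(x[n] * (math.cos(2 * math.pi * n * k / 8) +
                   math.sin(2 * math.pi * n * k / 8)) for n in range(8))
       for k in range(8)]
print(all(abs(a - b) < 1e-9 for a, b in zip(fht(x), ref)))
```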
As mentioned above, DHT algorithms are typically slightly less efficient than the corresponding DFT algorithm specialized for real inputs. This was first argued by Sorensen et al. and Duhamel & Vetterli. The latter authors obtained what appears to be the lowest published operation count for the DHT of power-of-two sizes, employing a split-radix algorithm that breaks a DHT of length N into a DHT of length N/2 and two real-input DFTs of length N/4. In this way, they argued that a DHT of power-of-two length can be computed with, at best, 2 more additions than the corresponding number of arithmetic operations for the real-input DFT.
On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts, and a slight difference in arithmetic cost is unlikely to be significant. Since FHT and real-input FFT algorithms have similar computational structures, neither appears to have a substantial a priori speed advantage. As a practical matter, highly optimized real-input FFT libraries are available from many sources, whereas highly optimized DHT libraries are less common.
On the other hand, the redundant computations in FFTs due to real inputs are more difficult to eliminate for large prime N, despite the existence of O(N log N) complex-data algorithms for such cases, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a standard prime-size FFT algorithm, Rader's algorithm, can be directly applied to the DHT of real data for roughly a factor of two less computation than that of the equivalent complex FFT. That said, a non-DHT-based adaptation of Rader's algorithm for real-input DFTs is also possible.
Multi-dimensional discrete Hartley transform (MD-DHT)
The rD-DHT (MD-DHT with r dimensions) is given by

X(k_1, k_2, \dots, k_r) = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} \cdots \sum_{n_r=0}^{N_r-1} x(n_1, n_2, \dots, n_r) \operatorname{cas}\!\left( 2\pi \left( \frac{n_1 k_1}{N_1} + \frac{n_2 k_2}{N_2} + \cdots + \frac{n_r k_r}{N_r} \right) \right)

with k_i = 0, 1, \dots, N_i − 1 and where \operatorname{cas}(z) = \cos(z) + \sin(z).
Similar to the 1-D case, as a real and symmetric transform, the MD-DHT is simpler than the MD-DFT. For one, the inverse DHT is identical to the forward transform, with the addition of a scaling factor of 1/(N_1 N_2 \cdots N_r); and second, since the kernel is real, it avoids the computational cost of complex arithmetic. Additionally, the DFT is directly obtainable from the DHT by a simple additive operation.
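A direct 2-D instance of this definition, and a check of the self-inverse property with the 1/(N_1 N_2) scale factor, can be sketched as follows (names `cas` and `dht2` are illustrative):

```python
import math

def cas(z):
    """cas(z) = cos(z) + sin(z), the Hartley kernel."""
    return math.cos(z) + math.sin(z)

def dht2(x):
    """Direct 2-D DHT with kernel cas(2*pi*(n1*k1/N1 + n2*k2/N2))."""
    N1, N2 = len(x), len(x[0])
    return [[sum(x[n1][n2] * cas(2 * math.pi * (n1 * k1 / N1 + n2 * k2 / N2))
                 for n1 in range(N1) for n2 in range(N2))
             for k2 in range(N2)] for k1 in range(N1)]

x = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
H = dht2(x)
# The forward transform applied twice returns N1*N2 = 6 times the input:
back = dht2(H)
ok = all(abs(back[i][j] / 6 - x[i][j]) < 1e-9
         for i in range(2) for j in range(3))
print(ok)
```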
The MD-DHT is widely used in areas like image and optical signal processing. Specific applications include computer vision, high-definition television, and teleconferencing, areas that process or analyze motion images.
Fast algorithms for the MD-DHT
As computing speed keeps increasing, bigger multidimensional problems become computationally feasible, creating the need for fast multidimensional algorithms. Three such algorithms follow.

In pursuit of separability for efficiency, we consider the following separable transform,

\hat{X}(k_1, k_2, \dots, k_r) = \sum_{n_1=0}^{N_1-1} \cdots \sum_{n_r=0}^{N_r-1} x(n_1, \dots, n_r) \operatorname{cas}\!\left( \frac{2\pi n_1 k_1}{N_1} \right) \operatorname{cas}\!\left( \frac{2\pi n_2 k_2}{N_2} \right) \cdots \operatorname{cas}\!\left( \frac{2\pi n_r k_r}{N_r} \right)
It was shown in Bortfeld that the two can be related by a few additions. For example, in 3-D,

X(k_1, k_2, k_3) = \frac{1}{2} \left[ \hat{X}(-k_1, k_2, k_3) + \hat{X}(k_1, -k_2, k_3) + \hat{X}(k_1, k_2, -k_3) - \hat{X}(-k_1, -k_2, -k_3) \right],

where negative indices are interpreted modulo the corresponding N_i.
For \hat{X}, row-column algorithms can then be implemented. This technique is commonly used due to the simplicity of such R-C algorithms, but they are not optimized for general M-D spaces.
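The row-column idea and its additive correction can be sketched in 2-D, where the relation follows from the identity 2 cas(a+b) = cas(−a)cas(b) + cas(a)cas(−b) + cas(a)cas(b) − cas(−a)cas(−b). The 2-D correction formula below is the two-dimensional analogue of the 3-D relation above, stated here as an assumption and verified numerically against the direct 2-D DHT; all function names are illustrative.

```python
import math

def cas(z):
    return math.cos(z) + math.sin(z)

def dht1(x):
    """Direct 1-D DHT."""
    N = len(x)
    return [sum(x[n] * cas(2 * math.pi * n * k / N) for n in range(N))
            for k in range(N)]

def separable_dht2(x):
    """Row-column pass: 1-D DHTs along rows, then along columns.
    This yields the separable (product-of-cas) transform, not the true 2-D DHT."""
    rows = [dht1(r) for r in x]
    cols = [dht1(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def dht2(x):
    """Direct 2-D DHT with the cas-of-sum kernel, for reference."""
    N1, N2 = len(x), len(x[0])
    return [[sum(x[n1][n2] * cas(2 * math.pi * (n1 * k1 / N1 + n2 * k2 / N2))
                 for n1 in range(N1) for n2 in range(N2))
             for k2 in range(N2)] for k1 in range(N1)]

x = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], [9.0, 1.0, 2.0, 3.0]]
N1, N2 = 3, 4
T = separable_dht2(x)
# 2-D additive correction (negative indices taken modulo N1, N2):
X = [[(T[-k1 % N1][k2] + T[k1][-k2 % N2] + T[k1][k2]
       - T[-k1 % N1][-k2 % N2]) / 2
      for k2 in range(N2)] for k1 in range(N1)]
ref = dht2(x)
match = all(abs(X[i][j] - ref[i][j]) < 1e-9
            for i in range(N1) for j in range(N2))
print(match)
```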
Other fast algorithms have been developed, such as radix-2, radix-4, and split-radix. For example, Boussakta developed a 3-D vector-radix algorithm, which decimates all three dimensions simultaneously at each stage.
It was also presented in Boussakta that this 3-D vector-radix algorithm requires fewer multiplications and additions than the row-column approach. The drawback is that the implementation of these radix-type algorithms is hard to generalize to signals of arbitrary dimensions.
Number-theoretic transforms have also been used for solving the MD-DHT, since they perform extremely fast convolutions. In Boussakta, it was shown how to decompose the MD-DHT into a form consisting of convolutions:
For the 2-D case,

X(k_1, k_2) = \sum_{n_1=0}^{N_1-1} \sum_{n_2=0}^{N_2-1} x(n_1, n_2) \operatorname{cas}\!\left( 2\pi \left( \frac{n_1 k_1}{N_1} + \frac{n_2 k_2}{N_2} \right) \right),

can be decomposed into 1-D and 2-D circular convolutions.
At this point we present the Fermat number transform (FNT). The t-th Fermat number is given by F_t = 2^{2^t} + 1. The well-known prime Fermat numbers are F_0 = 3, F_1 = 5, F_2 = 17, F_3 = 257, and F_4 = 65537, for t = 0, \dots, 4. The Fermat number transform is given by

X(k_1, k_2) = \sum_{n_1=0}^{M_1-1} \sum_{n_2=0}^{M_2-1} x(n_1, n_2)\, \alpha_1^{n_1 k_1} \alpha_2^{n_2 k_2} \pmod{F_t}

with k_i = 0, 1, \dots, M_i − 1. Here \alpha_1 and \alpha_2 are roots of unity of order M_1 and M_2 modulo F_t, respectively.
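A 1-D Fermat number transform and its convolution property can be sketched modulo F_3 = 257. The parameter choices here are assumptions for illustration: since 2^8 = 256 ≡ −1 (mod 257), the value α = 2 is a root of unity of order M = 16, and the integer sequences are chosen so the true convolution values stay below the modulus (making the results exact).

```python
# Fermat number transform modulo F_3 = 257 (a sketch; parameters are
# illustrative). alpha = 2 has order M = 16 because 2^8 = 256 = -1 mod 257.
F = 257
M = 16
ALPHA = 2

def fnt(x):
    """Forward FNT: X_k = sum_n x_n * alpha^(n*k) mod F."""
    return [sum(x[n] * pow(ALPHA, n * k, F) for n in range(M)) % F
            for k in range(M)]

def ifnt(X):
    """Inverse FNT: x_n = M^-1 * sum_k X_k * alpha^(-n*k) mod F."""
    inv_alpha = pow(ALPHA, -1, F)  # modular inverse (Python 3.8+)
    inv_M = pow(M, -1, F)
    return [(inv_M * sum(X[k] * pow(inv_alpha, n * k, F) for k in range(M))) % F
            for n in range(M)]

x = [3, 1, 4, 1, 5, 9, 2, 6, 0, 0, 0, 0, 0, 0, 0, 0]
y = [2, 7, 1, 8, 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
# Term-by-term product in the FNT domain gives cyclic convolution mod F:
Z = [(a * b) % F for a, b in zip(fnt(x), fnt(y))]
z = ifnt(Z)
direct = [sum(x[m] * y[(n - m) % M] for m in range(M)) % F for n in range(M)]
print(z == direct)
```

Because the arithmetic is exact modular integer arithmetic (multiplications by powers of 2 reduce to shifts in hardware), no rounding error is introduced, which is what makes the FNT attractive for fast convolution.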
Going back to the decomposition, the last term, a circular-convolution term, will be denoted y. If g_1 and g_2 are primitive roots of N_1 and N_2 (which exist when N_1 and N_2 are prime), then the maps m \mapsto g_1^{m} \pmod{N_1} and s \mapsto g_2^{s} \pmod{N_2} are bijections onto the nonzero residues. So, mapping the indices n_1, n_2 and k_1, k_2 to powers of g_1 and g_2, one obtains a sum which is now a circular convolution. Evaluating it with the Fermat number transform, one has

y = \operatorname{FNT}^{-1}\left[ \operatorname{FNT}(a) \odot \operatorname{FNT}(b) \right]
where \odot denotes term-by-term multiplication. It was also stated that this algorithm reduces the number of multiplications by a factor of 8–20 over other DHT algorithms, at the cost of a slight increase in the number of shift and add operations, which are assumed to be simpler than multiplications. The drawback of this algorithm is the constraint that each dimension of the transform must have a primitive root.