Discrete two dimensional Fourier transform in polar coordinates part II: numerical computation and approximation of the continuous transform
- Academic Editor
- Yilun Shang
- Subject Areas
- Algorithms and Analysis of Algorithms, Scientific Computing and Simulation, Theory and Formal Methods
- Keywords
- Fourier theory, DFT in polar coordinates, Polar coordinates, Multidimensional DFT, Discrete Hankel Transform, Discrete Fourier Transform, Orthogonality
- Copyright
- © 2020 Yao and Baddour
- Licence
- This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
- Cite this article
- 2020. Discrete two dimensional Fourier transform in polar coordinates part II: numerical computation and approximation of the continuous transform. PeerJ Computer Science 6:e257 https://doi.org/10.7717/peerj-cs.257
Abstract
The theory of the continuous two-dimensional (2D) Fourier Transform in polar coordinates has been recently developed but no discrete counterpart exists to date. In the first part of this two-paper series, we proposed and evaluated the theory of the 2D Discrete Fourier Transform (DFT) in polar coordinates. The theory of the actual manipulated quantities was shown, including the standard set of shift, modulation, multiplication, and convolution rules. In this second part of the series, we address the computational aspects of the 2D DFT in polar coordinates. Specifically, we demonstrate how the decomposition of the 2D DFT as a DFT, Discrete Hankel Transform and inverse DFT sequence can be exploited for coding. We also demonstrate how the proposed 2D DFT can be used to approximate the continuous forward and inverse Fourier transform in polar coordinates in the same manner that the 1D DFT can be used to approximate its continuous counterpart.
Introduction
The Fourier Transform (FT) is a powerful analytical tool and has proved to be invaluable in many disciplines such as physics, mathematics and engineering. The development of the Fast Fourier Transform (FFT) algorithm (Cooley & Tukey, 1965), which computes the Discrete Fourier Transform (DFT) with a fast algorithm, firmly established the FT as a practical tool in diverse areas, most notably signal and image processing.
In two dimensions, the FFT can still be used to compute the DFT in Cartesian coordinates. However, in many applications such as photoacoustics (Xu, Feng & Wang, 2002) and tomography (Scott et al., 2012; Fahimian et al., 2013; Lee et al., 2008; Lewitt & Matej, 2003), it is often necessary to compute the FT in polar coordinates. Moreover, for functions that are naturally described in polar coordinates, a discrete version of the 2D FT in polar coordinates is needed. There have been some attempts to calculate the FT in polar coordinates, most notably through the Hankel transform, since the zeroth order Hankel transform is known to be a 2D FT in polar coordinates for rotationally symmetric functions. However, prior work has focused on numerically approximating the continuous transform. This stands in contrast to the FT, where the DFT can stand alone as an orthogonal transform, independent of the existence of its continuous counterpart.
The idea of a Polar FT has been previously investigated, where the spatial function is in Cartesian coordinates but its FT is computed in polar coordinates (Averbuch et al., 2006; Abbas, Sun & Foroosh, 2017; Fenn, Kunis & Potts, 2007). FTs have been proposed for non-equispaced data, referred to as Unequally Spaced FFT (USFFT) or Non-Uniform FFT (NUFFT) (Dutt & Rokhlin, 1993; Fourmont, 2003; Dutt & Rokhlin, 1995; Potts, Steidl & Tasche, 2001; Fessler & Sutton, 2003). A recent book gives a unified treatment of these topics (Plonka et al., 2018). Previous work has also considered the implications of using a polar grid (Stark, 1979; Stark & Wengrovitz, 1983). Although the above references demonstrate that the computation of a discrete 2D FT on a polar grid has previously been considered in the literature, there is to date no discrete 2D FT in polar coordinates that exists as a transform in its own right, with its own set of rules of the actual manipulated quantities.
In part I of this two-part series, we proposed an independent discrete 2D FT in polar coordinates, defined to be discrete from first principles (Baddour, 2019). For a discrete transform, the values of the transform are given only as entries in a vector or matrix and the transform manipulates a set of discrete values. To quote Bracewell (Bracewell, 1999), “we often think of this as though an underlying function of a continuous variable really exists and we are approximating it. From an operational viewpoint, however, it is irrelevant to talk about the existence of values other than those given and those computed (the input and output). Therefore, it is desirable to have a mathematical theory of the actual quantities manipulated”. Hence, in our previous paper (Baddour, 2019), standard operational rules of shift, modulation and convolution for this 2D DFT in polar coordinates were demonstrated via the key properties of the proposed discrete kernel of the transform. However, using the discrete kernel directly may not be the most efficient way to compute the transform. Furthermore, while the 2D DFT in polar coordinates was shown to have properties and rules as a standalone transform, independent of its relationship to any continuous transform, an obvious application of the proposed discrete transform is to approximate its continuous counterpart.
Hence, the goal of this second part of the series is to propose approaches to computing the previously proposed 2D DFT in polar coordinates and to validate its effectiveness in approximating the continuous 2D FT in polar coordinates.
The outline of the paper is as follows. “Definition of the Discrete 2D FT in Polar Coordinates” states the proposed definition of the discrete 2D FT in polar coordinates; the motivation for this definition and the transform rules (multiplication, convolution, shift, etc.) are given in the first part of this two-part paper. The transform exists in its own right and manipulates discrete quantities that do not necessarily stem from sampling an underlying continuous quantity. Nevertheless, the definition of the transform is motivated by an implied underlying discretization scheme. “Discrete Transform to Approximate the Continuous Transform” introduces this discretization scheme, where we show the connection between discrete samples of the continuous functions and the discrete transform, should it be desirable to interpret the transform in this manner. Here, the connection between the proposed 2D DFT and sampled values of the continuous functions is explained. The proposed 2D DFT was motivated by a specific sampling scheme, which can be plotted and analyzed for “grid coverage”—how much of the 2D plane is covered and at what density. Thus, “Discretization Points and Sampling Grid” analyzes the proposed discretization points and their implications for the density and coverage of the sampling grid. The insights gained from this section will be useful in interpreting the results of approximating the continuous transform with the discrete transform. “Numerical Computation of the Transform” introduces numerical computation schemes in which the interpretation of the proposed 2D transform as a sequence of 1D DFT, 1D Discrete Hankel Transform (DHT) and 1D inverse DFT (IDFT) is exploited.
“Numerical Evaluation of the 2D DFT in Polar Coordinates to Approximate the Continuous FT” then investigates the ability of the proposed 2D DFT to approximate the continuous transform in terms of precision and accuracy. Three test functions for which closed-form continuous transforms are known are analyzed. Finally, “Summary and Conclusion” summarizes and concludes the paper.
Definition of the Discrete 2D FT in Polar Coordinates
The 2D DFT in polar coordinates has been defined in the first part of this two-part series as the discrete transform that takes the matrix (or double-subscripted series) ${f}_{pk}$ to the matrix (double-subscripted series) ${F}_{qm}$, such that ${f}_{pk}\to {F}_{qm}$ is given by (1) $${F}_{qm}=\mathbb{F}\left({f}_{pk}\right)=\sum _{k=1}^{{N}_{1}-1}\sum _{p=-M}^{M}{f}_{pk}{E}_{qm;pk}^{-}$$
where $p,k,q,m,n$, ${N}_{1}$, and ${N}_{2}$ are integers such that $-M\le n\le M$ where $2M+1={N}_{2}$, $1\le m,k\le {N}_{1}-1$ and $-M\le p,q\le M$. Unless otherwise stated, in the remainder of the paper it shall be assumed that $p,k,q,m,n$, ${N}_{1}$, and ${N}_{2}$ are within these stated ranges. Similarly, for the inverse transform we propose (2) $${f}_{pk}={\mathbb{F}}^{-1}\left({F}_{qm}\right)=\sum _{m=1}^{{N}_{1}-1}\sum _{q=-M}^{M}{F}_{qm}{E}_{qm;pk}^{+}$$
In Eqs. (1) and (2), ${E}_{qm;pk}^{\pm}$ are the kernels of the transformation. These can be chosen as the “non-symmetric” form given by (3) $$\begin{array}{rl}& {E}_{qm;pk}^{-}=\frac{1}{{N}_{2}}\sum _{n=-M}^{M}\frac{{J}_{n}\left(\frac{{j}_{nk}{j}_{nm}}{{j}_{n{N}_{1}}}\right)}{{j}_{n{N}_{1}}^{2}{J}_{n+1}^{2}\left({j}_{nk}\right)}2{i}^{-n}{e}^{-i\frac{2\mathrm{\pi}np}{{N}_{2}}}{e}^{+i\frac{2\mathrm{\pi}nq}{{N}_{2}}}\\ & {E}_{qm;pk}^{+}=\frac{1}{{N}_{2}}\sum _{n=-M}^{M}\frac{{J}_{n}\left(\frac{{j}_{nm}{j}_{nk}}{{j}_{n{N}_{1}}}\right)}{{J}_{n+1}^{2}\left({j}_{nm}\right)}2{i}^{+n}{e}^{+i\frac{2\mathrm{\pi}np}{{N}_{2}}}{e}^{-i\frac{2\mathrm{\pi}nq}{{N}_{2}}}\end{array}$$
Here, ${J}_{n}\left(z\right)$ is the $n$th order Bessel function of the first kind and ${j}_{nk}$ denotes the $k$th zero of the $n$th Bessel function. The superscript (+ or −) indicates the sign on ${i}^{\pm n}$ and on the exponent containing the $p$ variable; the exponent containing the $q$ variable then takes the opposite sign. From a matrix point of view, both ${f}_{pk}$ and ${F}_{qm}$ are ${N}_{2}\times \left({N}_{1}-1\right)$ sized matrices. The form of the kernel in Eq. (3) arises naturally from discretization of the continuous transform, but does not lead to the expected Parseval relationship. A possible symmetric kernel and the associated Parseval relationships are discussed in the first part of this two-part paper (Baddour, 2019).
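As a concrete illustration, the kernel of Eq. (3) can be evaluated numerically with SciPy's Bessel routines. The following sketch (the function name `E_minus` is ours) computes a single entry of ${E}_{qm;pk}^{-}$:

```python
import numpy as np
from scipy.special import jv, jn_zeros

def E_minus(q, m, p, k, N1, N2):
    """One entry of the 'non-symmetric' kernel E^-_{qm;pk} of Eq. (3).
    Valid index ranges: 1 <= m, k <= N1 - 1 and -M <= p, q <= M with N2 = 2M + 1."""
    M = (N2 - 1) // 2
    total = 0.0 + 0.0j
    for n in range(-M, M + 1):
        z = jn_zeros(abs(n), N1)          # zeros j_{n,1..N1}; note j_{-n,k} = j_{n,k}
        jnk, jnm, jnN1 = z[k - 1], z[m - 1], z[-1]
        total += (2 * 1j**(-n)
                  * jv(n, jnk * jnm / jnN1)
                  / (jnN1**2 * jv(n + 1, jnk)**2)
                  * np.exp(-2j * np.pi * n * p / N2)
                  * np.exp(+2j * np.pi * n * q / N2))
    return total / N2
```

Note that the angular indices enter the kernel only through the difference $q-p$, which is what underlies the shift and modulation rules derived in part I.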
Discrete Transform to Approximate the Continuous Transform
In this section, relationships between discretely sampled values of the function and its continuous 2D FT are presented in the case of a space-limited or band-limited function. These relationships were derived in the first part of the paper and are repeated here to demonstrate how they form the basis for using the discrete transform to approximate the continuous transform at specified sampling points.
Space-limited functions
Consider a function in the space domain $f(r,\mathrm{\theta})$ which is space limited to $r\in \left[0,R\right]$. This implies that the function is zero outside of the circle bounded by $r\in \left[0,R\right]$. An approximate relationship between sampled values of the continuous function and sampled values of its continuous forward 2D transform $F\left(\mathrm{\rho},\mathrm{\psi}\right)$ has been derived in the first part of the two-part paper as (4) $$F\left(\frac{{j}_{qm}}{R},\frac{2\mathrm{\pi}q}{{N}_{2}}\right)\approx 2\mathrm{\pi}{R}^{2}\sum _{k=1}^{{N}_{1}-1}\sum _{p=-M}^{M}f\left(\frac{{j}_{pk}R}{{j}_{p{N}_{1}}},\frac{2\mathrm{\pi}p}{{N}_{2}}\right)\phantom{\rule{thickmathspace}{0ex}}\frac{1}{{N}_{2}}\sum _{n=-M}^{M}\frac{2{i}^{-n}{J}_{n}\left(\frac{{j}_{nk}{j}_{nm}}{{j}_{n{N}_{1}}}\right)}{{j}_{n{N}_{1}}^{2}{J}_{n+1}^{2}\left({j}_{nk}\right)}{e}^{-i\frac{2\mathrm{\pi}np}{{N}_{2}}}{e}^{+i\frac{2\mathrm{\pi}nq}{{N}_{2}}}$$
Similarly, an approximate relationship between sampled values of the continuous forward transform $F\left(\mathrm{\rho},\mathrm{\psi}\right)$ and sampled values of the continuous original function $f(r,\mathrm{\theta})$ was shown to be given by (5) $$f\left(\frac{{j}_{pk}R}{{j}_{p{N}_{1}}},\frac{2\mathrm{\pi}p}{{N}_{2}}\right)\approx \frac{1}{2\mathrm{\pi}{R}^{2}}\sum _{m=1}^{{N}_{1}-1}\sum _{q=-M}^{M}F\left(\frac{{j}_{qm}}{R},\frac{2\mathrm{\pi}q}{{N}_{2}}\right)\phantom{\rule{thickmathspace}{0ex}}\frac{1}{{N}_{2}}\sum _{n=-M}^{M}\frac{2{i}^{n}{J}_{n}\left(\frac{{j}_{nm}{j}_{nk}}{{j}_{n{N}_{1}}}\right)}{{J}_{n+1}^{2}\left({j}_{nm}\right)}{e}^{+i\frac{2\mathrm{\pi}np}{{N}_{2}}}{e}^{-i\frac{2\mathrm{\pi}nq}{{N}_{2}}}$$
In Eqs. (4) and (5), f(r, θ) is the original function in 2D space and $F(\mathrm{\rho},\mathrm{\psi})$ is the 2D FT of the function in polar coordinates.
To evaluate if the 2D DFT as proposed in Eqs. (1) and (2) can be used to approximate sampled values of f(r, θ) and $F(\mathrm{\rho},\mathrm{\psi})$, the process is as follows. For the forward transform, we start with the continuous f(r, θ), evaluate it at the sampling points and then assign this value to f_{pk} via (6) $${f}_{pk}=f\left(\frac{{j}_{pk}R}{{j}_{pN1}},\frac{2\mathrm{\pi}p}{{N}_{2}}\right)$$
Then, ${F}_{qm}$ is calculated from the 2D DFT scaled by $2\mathrm{\pi}{R}^{2}$, Eq. (1), that is (7) ${F}_{qm}=2\mathrm{\pi}{R}^{2}\mathbb{F}\left({f}_{pk}\right)=2\mathrm{\pi}{R}^{2}\sum _{k=1}^{{N}_{1}-1}\sum _{p=-M}^{M}{f}_{pk}{E}_{qm;pk}^{-}$
The factor of $2\mathrm{\pi}{R}^{2}$ is necessary so that the evaluation in Eq. (7) matches the expression in Eq. (4). To evaluate if the proposed 2D DFT can be used to approximate the continuous transform, the question becomes how well ${F}_{qm}$ calculated from the 2D DFT in Eq. (7) approximates $F\left(\frac{{j}_{qm}}{R},\frac{2\mathrm{\pi}q}{{N}_{2}}\right)$—the values of the continuous 2D FT evaluated on the sampling grid.
To evaluate the inverse 2D DFT, the process is similar. We start with the continuous $F(\mathrm{\rho},\mathrm{\psi})$, evaluate it at the sampling points and assign this value to ${F}_{qm}$ via (8) $${F}_{qm}=F\left(\frac{{j}_{qm}}{R},\frac{2\mathrm{\pi}q}{{N}_{2}}\right)$$
Now, ${f}_{pk}$ is calculated from a scaled version of the inverse 2D DFT, Eq. (2), that is (9) $${f}_{pk}=\frac{1}{2\mathrm{\pi}{R}^{2}}{\mathbb{F}}^{-1}\left({F}_{qm}\right)=\frac{1}{2\mathrm{\pi}{R}^{2}}\sum _{m=1}^{{N}_{1}-1}\sum _{q=-M}^{M}{F}_{qm}{E}_{qm;pk}^{+}$$
To evaluate if the proposed transform can approximate the continuous transform, the question becomes how well ${f}_{pk}$ calculated from Eq. (9) approximates $f\left(\frac{{j}_{pk}R}{{j}_{pN1}},\frac{2\mathrm{\pi}p}{{N}_{2}}\right)$—the values of the continuous function evaluated on the sampling grid.
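To make the procedure of Eqs. (6) and (7) concrete, the following sketch applies it to the circularly symmetric Gaussian $f(r)={e}^{-{r}^{2}}$, whose continuous 2D FT is known in closed form. For a circularly symmetric function the angular sums in Eq. (4) collapse ($\sum_{p}{e}^{-i2\mathrm{\pi}np/{N}_{2}}={N}_{2}{\mathrm{\delta}}_{n0}$), leaving only the order-zero radial term. The parameter choices ($R=5$, ${N}_{1}=16$) are ours:

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Test function: f(r) = exp(-r^2), effectively space-limited to R = 5.
# Its continuous 2D FT (with F(rho) = 2*pi * integral_0^inf f(r) J_0(rho r) r dr
# for a circularly symmetric function) is F(rho) = pi * exp(-rho^2 / 4).
N1, R = 16, 5.0
z = jn_zeros(0, N1)
j0k, j0N1 = z[:-1], z[-1]

r_k = j0k * R / j0N1                 # radial sampling points, Eq. (14)
rho_m = j0k / R                      # frequency sampling points, Eq. (15)
f_k = np.exp(-r_k**2)                # sampled function, Eq. (6)

# With the angular sums collapsed, Eq. (4) reduces to its order-zero
# (DHT-like) radial term; Eq. (7) then scales the result by 2*pi*R^2.
T0 = 2.0 * jv(0, np.outer(j0k, j0k) / j0N1) / (j0N1**2 * jv(1, j0k)**2)
F_m = 2.0 * np.pi * R**2 * (T0 @ f_k)

F_exact = np.pi * np.exp(-rho_m**2 / 4.0)
err = np.max(np.abs(F_m - F_exact))  # small when R captures both the space and band tails
```

With these choices both the Gaussian and its transform decay to negligible values at the space and band limits, so the discrete values closely track the sampled continuous transform.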
Band-limited functions
The process for band-limited functions is the same as outlined in the previous section, with the exception that the sampling points and scaling factors are now given in terms of the band limit rather than the space limit. Consider functions in the frequency domain $F\left(\mathrm{\rho},\mathrm{\psi}\right)$ with an effective band limit $\mathrm{\rho}\in \left[0,{W}_{\mathrm{\rho}}\right]$. That is, we suppose that the 2D FT $F\left(\mathrm{\rho},\mathrm{\psi}\right)$ of $f(r,\mathrm{\theta})$ is band-limited, meaning that $F\left(\mathrm{\rho},\mathrm{\psi}\right)$ is zero for $\mathrm{\rho}\ge {W}_{\mathrm{\rho}}=2\mathrm{\pi}W$. The variable ${W}_{\mathrm{\rho}}$ is written in this form since $W$ would typically be quoted in units of Hz (cycles per second) if using temporal units, or cycles per meter if using spatial units; the multiplication by $2\mathrm{\pi}$ then ensures that the final units are ${s}^{-1}$ or ${m}^{-1}$. The approximate relationship between sampled values of the continuous 2D FT $F\left(\mathrm{\rho},\mathrm{\psi}\right)$ and sampled values of the original continuous function $f\left(r,\mathrm{\theta}\right)$ was derived in the first part of the paper and is given by (10) $$F\left(\frac{{j}_{qm}{W}_{\mathrm{\rho}}}{{j}_{q{N}_{1}}},\frac{2\mathrm{\pi}q}{{N}_{2}}\right)\approx \frac{2\mathrm{\pi}}{{W}_{\mathrm{\rho}}^{2}}\sum _{k=1}^{{N}_{1}-1}\sum _{p=-M}^{M}f\left(\frac{{j}_{pk}}{{W}_{\mathrm{\rho}}},\frac{2\mathrm{\pi}p}{{N}_{2}}\right)\phantom{\rule{thickmathspace}{0ex}}\frac{1}{{N}_{2}}\sum _{n=-M}^{M}\frac{2{i}^{-n}{J}_{n}\left(\frac{{j}_{nm}{j}_{nk}}{{j}_{n{N}_{1}}}\right)}{{J}_{n+1}^{2}\left({j}_{nk}\right)}{e}^{-i\frac{2\mathrm{\pi}np}{{N}_{2}}}{e}^{+i\frac{2\mathrm{\pi}nq}{{N}_{2}}}$$
Similarly, the inverse relationship between sampled values of $F\left(\mathrm{\rho},\mathrm{\psi}\right)$ and sampled values of $f(r,\mathrm{\theta})$ was shown to be given by (11) $\phantom{\rule{thickmathspace}{0ex}}f\left(\frac{{j}_{pk}}{{W}_{\mathrm{\rho}}},\frac{2\mathrm{\pi}p}{{N}_{2}}\right)\phantom{\rule{thickmathspace}{0ex}}\phantom{\rule{thickmathspace}{0ex}}\approx \frac{{W}_{\mathrm{\rho}}^{2}}{2\mathrm{\pi}}\sum _{m=1}^{{N}_{1}-1}\sum _{q=-M}^{M}F\left(\frac{{j}_{qm}{W}_{\mathrm{\rho}}}{{j}_{q{N}_{1}}},\frac{2\mathrm{\pi}q}{{N}_{2}}\right)\phantom{\rule{thickmathspace}{0ex}}\frac{1}{{N}_{2}}\sum _{n=-M}^{M}\frac{2{i}^{n}{J}_{n}\left(\frac{{j}_{nk}{j}_{nm}}{{j}_{n{N}_{1}}}\right)}{{j}_{n{N}_{1}}^{2}{J}_{n+1}^{2}\left({j}_{nm}\right)}{e}^{-i\frac{2\mathrm{\pi}nq}{{N}_{2}}}{e}^{+i\frac{2\mathrm{\pi}np}{{N}_{2}}}$
The relationships in Eqs. (10) and (11) give relationships between the sampled values of the original function and sampled values of its 2D FT.
To evaluate the forward 2D DFT, we start with $f\left(r,\mathrm{\theta}\right)$, evaluate it at the sampling points (specific to the band-limited case) and assign this value to ${f}_{pk}$ via (12) $${f}_{pk}=f\left(\frac{{j}_{pk}}{{W}_{\mathrm{\rho}}},\frac{2\mathrm{\pi}p}{{N}_{2}}\right)$$
Then, ${F}_{qm}$ is calculated from the discrete transform scaled by $\frac{2\mathrm{\pi}}{{W}_{\mathrm{\rho}}^{2}}$, Eq. (1), that is (13) $${F}_{qm}=\frac{2\mathrm{\pi}}{{W}_{\mathrm{\rho}}^{2}}\mathbb{F}\left({f}_{pk}\right)=\frac{2\mathrm{\pi}}{{W}_{\mathrm{\rho}}^{2}}\sum _{k=1}^{{N}_{1}-1}\sum _{p=-M}^{M}{f}_{pk}{E}_{qm;pk}^{-}$$
To evaluate if the proposed 2D DFT can be used to approximate the continuous transform, the question is how well ${F}_{qm}$ calculated from Eq. (13) approximates $F\left(\frac{{j}_{qm}{W}_{\mathrm{\rho}}}{{j}_{q{N}_{1}}},\frac{2\mathrm{\pi}q}{{N}_{2}}\right)$, which are the values of the continuous 2D FT, evaluated on the sampling grid. The evaluation of the inverse transform for the band-limited function proceeds similarly by comparing values obtained from the inverse 2D DFT to the values obtained by sampling the continuous function directly.
The relationships given by Eqs. (4), (5), (10) and (11) motivated the definition of the 2D DFT in polar coordinates in the first part of this two-part paper. In the context of this second part, they are the relationships that permit the use of the discrete transform to approximate the continuous transform at the specified sampling points. They also permit the examination of whether the discrete quantities ${f}_{pk}$ and ${F}_{qm}$ calculated via the proposed 2D DFT are in fact reasonable approximations to the sampled values of the continuous functions, as stated in the objectives of the paper.
Discretization Points and Sampling Grid
The transforms defined in Eqs. (1) and (2) can be applied to any matrix ${f}_{pk}$ to yield its forward transform ${F}_{qm}$, which can then be transformed backwards by using the inverse transform. However, if these same discrete transforms are to be used for the purpose of approximating a continuous 2D FT, then these transforms need to be applied to the specific sampled values of the continuous functions in both space and frequency domains, as shown in Eqs. (6), (8) and (12). The relationships in Eqs. (4) and (10) define the sampling points that need to be used and it is noted that the points are defined differently based on whether we start with the assumption of a space or band limited function. These specific sampling points imply a specific sampling grid for the function. In this section, the sampling grid (its coverage and density in 2D) is analyzed.
Sampling points
For a space-limited function, we assume that the original function of interest is defined over continuous $\left(r,\mathrm{\theta}\right)$ space where $0\le r\le R$ and $0\le \mathrm{\theta}\le 2\mathrm{\pi}$. The discrete radial and angular sampling points in regular $\overrightarrow{r}$ space $\left(r,\mathrm{\theta}\right)$ and $\overrightarrow{\mathrm{\omega}}$ frequency space $\left(\mathrm{\rho},\mathrm{\psi}\right)$ are defined as (14) $${r}_{pk}=\frac{{j}_{pk}R}{{j}_{p{N}_{1}}},\qquad {\mathrm{\theta}}_{p}=\frac{2\mathrm{\pi}p}{{N}_{2}}$$
and (15) $${\mathrm{\rho}}_{qm}=\frac{{j}_{qm}}{R},\qquad {\mathrm{\psi}}_{q}=\frac{2\mathrm{\pi}q}{{N}_{2}}$$
For a band-limited function, the function is assumed band-limited to $0\le \mathrm{\rho}\le {W}_{\mathrm{\rho}}$, $0\le \mathrm{\psi}\le 2\mathrm{\pi}$. The radial and angular sampling points in regular $\overrightarrow{\mathrm{\omega}}$ frequency space $\left(\mathrm{\rho},\mathrm{\psi}\right)$ and $\overrightarrow{r}$ space $\left(r,\mathrm{\theta}\right)$ are then defined as (16) $${r}_{pk}=\frac{{j}_{pk}}{{W}_{\mathrm{\rho}}},\qquad {\mathrm{\theta}}_{p}=\frac{2\mathrm{\pi}p}{{N}_{2}}$$
and (17) $${\mathrm{\rho}}_{qm}=\frac{{j}_{qm}{W}_{\mathrm{\rho}}}{{j}_{q{N}_{1}}},\qquad {\mathrm{\psi}}_{q}=\frac{2\mathrm{\pi}q}{{N}_{2}}$$
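The sampling points above are straightforward to generate with SciPy's `jn_zeros`; the following sketch (the function name is ours) builds the space-limited grids of Eqs. (14) and (15):

```python
import numpy as np
from scipy.special import jn_zeros

def polar_grid_space_limited(R, N1, N2):
    """Sampling points of Eqs. (14)-(15) for a space-limited function.
    Returns r, theta, rho, psi; r and rho have shape (N2, N1 - 1) with
    rows indexed p (resp. q) = -M..M, and theta, psi are the N2 angles."""
    M = (N2 - 1) // 2
    p = np.arange(-M, M + 1)
    theta = 2.0 * np.pi * p / N2            # theta_p = 2 pi p / N2
    r = np.empty((N2, N1 - 1))
    rho = np.empty((N2, N1 - 1))
    for row, order in enumerate(p):
        z = jn_zeros(abs(order), N1)        # j_{p,1..N1}; j_{-p,k} = j_{p,k}
        r[row] = z[:-1] * R / z[-1]         # r_pk = j_pk R / j_pN1
        rho[row] = z[:-1] / R               # rho_qm = j_qm / R
    return r, theta, rho, theta.copy()      # psi_q takes the same values as theta_p
```

Note that the radial samples are not equispaced, and the innermost radius grows with the magnitude of the angular index $|p|$, which produces the central hole discussed below.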
Clearly, the density of the sampling points depends on the number of points chosen, that is, on ${N}_{1}$ and ${N}_{2}$. Also clear is the fact that the grid is not equispaced in the radial variable. The sampling grids for a space-limited function are plotted below to enable visualization. In the first instance, the polar grids are plotted for the case $R=1$, ${N}_{1}=16$ and ${N}_{2}=15$. These are shown in space ($r$ space) and frequency ($\mathrm{\rho}$ space) in Figs. 1 and 2 respectively. It should be noted that although we refer to the grids in this article as polar grids, they are not true polar grids in the sense of equispaced sampling in the radial and angular coordinates.
Clearly, the grids in Figs. 1 and 2 are fairly sparse, but the low values of ${N}_{1}$ and ${N}_{2}$ have been chosen so that the structure of the sampling points can be easily seen. It can be observed that there is a hole at the center in both domains, a consequence of the particular sampling points. For higher values of ${N}_{1}$ and ${N}_{2}$, the grid becomes fairly dense, obtaining good coverage of both spaces, but details are harder to observe. To demonstrate, the polar grids are plotted for the case $R=1$, ${N}_{1}=96$ and ${N}_{2}=95$. These are shown in Figs. 3 and 4 respectively.
From Figs. 3 and 4, choosing higher values of ${N}_{1}$ and ${N}_{2}$ makes the sampling grid denser; however, there is still a gap in the center. The sampling grids for band-limited functions are not plotted here since they have the same shape as for a space-limited function, with the domains reversed.
Sample grid analysis
From part I of the paper, it was shown that the 2D FT can be interpreted as a DFT in the angular direction, a DHT in the radial direction and then an IDFT in the angular direction. Hence, the sample size in the angular direction can be decided by the Nyquist sampling theorem (Shannon, 1984), which states that (18) $${f}_{s}>2{f}_{max}$$where ${f}_{s}$ is the sampling frequency and ${f}_{max}$ is the highest frequency or band limit.
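The angular–radial factorization described above can be sketched as follows (a minimal illustration; function names and the vectorized structure are ours), and checked against a direct evaluation of Eq. (1) with the kernel of Eq. (3):

```python
import numpy as np
from scipy.special import jv, jn_zeros

def dft2d_polar(f, N1, N2):
    """2D DFT in polar coordinates (Eq. (1)) computed as the sequence
    1D DFT (angle) -> order-n radial (DHT-style) transform -> 1D IDFT (angle).
    f has shape (N2, N1 - 1), rows p = -M..M, columns k = 1..N1-1."""
    M = (N2 - 1) // 2
    n_vals = np.arange(-M, M + 1)
    # Step 1: DFT over the angular index p, one Fourier order n per row.
    phase = np.exp(-2j * np.pi * np.outer(n_vals, n_vals) / N2)   # [n, p]
    fbar = phase @ f / N2
    # Step 2: order-n radial transform taken from the kernel of Eq. (3).
    Fbar = np.zeros(fbar.shape, dtype=complex)
    for row, n in enumerate(n_vals):
        z = jn_zeros(abs(n), N1)              # j_{n,1..N1}; j_{-n,k} = j_{n,k}
        jnk, jnN1 = z[:-1], z[-1]
        T = 2.0 * jv(n, np.outer(jnk, jnk) / jnN1) / (jnN1**2 * jv(n + 1, jnk)**2)
        Fbar[row] = (1j ** int(-n)) * (T @ fbar[row])
    # Step 3: inverse DFT over n back to the angular frequency index q.
    return phase.conj().T @ Fbar              # [q, m]

def dft2d_polar_direct(f, N1, N2):
    """Reference implementation: Eq. (1) with the kernel of Eq. (3) expanded."""
    M = (N2 - 1) // 2
    F = np.zeros((N2, N1 - 1), dtype=complex)
    for qi, q in enumerate(range(-M, M + 1)):
        for m in range(1, N1):
            for n in range(-M, M + 1):
                z = jn_zeros(abs(n), N1)
                rad = 2.0 * jv(n, z[:-1] * z[m - 1] / z[-1]) / (z[-1]**2 * jv(n + 1, z[:-1])**2)
                for pi, p in enumerate(range(-M, M + 1)):
                    F[qi, m - 1] += (1j ** int(-n) / N2
                                     * np.exp(-2j * np.pi * n * p / N2)
                                     * np.exp(+2j * np.pi * n * q / N2)
                                     * np.dot(rad, f[pi]))
    return F
```

The factored version replaces the quadruple sum over $(p,k,n)$ per output entry with three smaller matrix products, which is the basis of the computational schemes discussed in “Numerical Computation of the Transform”.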
In the radial direction, the necessary relationship for the DHT is given by Baddour & Chouinard (2015) as (19) $${W}_{\mathrm{\rho}}R={j}_{n{N}_{1}}$$where ${W}_{\mathrm{\rho}}$ is the effective band limit, $R$ is the effective space limit and ${j}_{n{N}_{1}}$ is the ${N}_{1}$th zero of ${J}_{n}\left(r\right)$. For the 2D FT, since $-M\le p\le M$, the order of the Bessel zero ranges from $-M$ to $M$ and the required relationship becomes (20) $$\mathrm{m}\mathrm{i}\mathrm{n}({j}_{p{N}_{1}})\ge {W}_{\mathrm{\rho}}R$$
The relationships ${j}_{nN}={j}_{-nN}$ and ${j}_{0{N}_{1}}<{j}_{\pm 1{N}_{1}}<{j}_{\pm 2{N}_{1}}<...<{j}_{\pm M{N}_{1}}$ are valid (Lozier, 2003), hence Eq. (20) can be written as (21) $${j}_{0{N}_{1}}\ge {W}_{\mathrm{\rho}}R$$
It is pointed out in Baddour (2019) and Guizar-Sicairos & Gutiérrez-Vega (2004) that the zeros of ${J}_{n}\left(z\right)$ are almost evenly spaced at intervals of $\mathrm{\pi}$ and that the spacing becomes exactly $\mathrm{\pi}$ in the limit as $z\to \mathrm{\infty}$. The reader unfamiliar with Bessel functions is directed to references (Bracewell, 1999; Lozier, 2003). In fact, it is shown in Dutt & Rokhlin (1993) that a simple asymptotic form for the Bessel function is given by (22) $${J}_{n}\left(z\right)\approx \sqrt{\frac{2}{\mathrm{\pi}z}}\mathrm{cos}\left[z-\left(n+\frac{1}{2}\right)\frac{\mathrm{\pi}}{2}\right]$$
Therefore, an approximation to the Bessel zero, ${j}_{nk}$ is given by (23) $${j}_{nk}\approx \left(2k+n-\frac{1}{2}\right)\frac{\mathrm{\pi}}{2}$$
Hence, Eq. (21) can be used to choose ${N}_{1}$ approximately via (24) $$\begin{array}{rl}{N}_{1}\mathrm{\pi}\ge & {W}_{\mathrm{\rho}}R=2\mathrm{\pi}WR\\ & \Rightarrow {N}_{1}\ge 2WR\end{array}$$where the reader is reminded that the units of $W$ are m^{−1} (the spatial equivalent of Hz). ${N}_{1}/R$ is the spatial sampling frequency and Eq. (24) effectively makes the same statement as Eq. (18), as it should.
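A quick numerical check of Eq. (23) and of the resulting sampling rule ${N}_{1}\ge 2WR$ of Eq. (24) can be sketched as follows (helper names are ours):

```python
import numpy as np
from scipy.special import jn_zeros

def approx_zero(n, k):
    """Asymptotic Bessel zero of Eq. (23): j_nk ~ (2k + n - 1/2) * pi / 2."""
    return (2 * k + n - 0.5) * np.pi / 2.0

def min_N1(W, R):
    """Smallest N1 satisfying N1 >= 2WR (Eq. (24)), using j_{0,N1} ~ N1 * pi."""
    return int(np.ceil(2.0 * W * R))
```

The asymptotic form is accurate for $k\gg n$; for example, for ${j}_{0,20}$ the relative error against the exact zero is below 0.1%.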
Intuitively, more sample points lead to more information captured, which gives an expectation that increasing ${N}_{1}$ or ${N}_{2}$ individually will give a better sampling grid coverage. However, it can be seen from Figs. 1–4 that there is a gap in the center of the sample grid. From Eqs. (14) and (15), the area of the gap in the center is related to the ranges of $p$ and $k$, that is ${N}_{2}$ and ${N}_{1}$. In the sections below, it is assumed that the sampling theorems are already satisfied (that is, an appropriate space and band limit is selected) and the relationship between ${N}_{2}$, ${N}_{1}$ and the size of the gap will be discussed.
Space-limited function
In this section, it is assumed that the function is a space limited function, defined in $r\in [0,R]$. The sampling points are defined as Eq. (14) in the space domain and Eq. (15) in the frequency domain. In the following, a relationship between ${N}_{2}$, ${N}_{1}$ and the area of the gap in both domains is discussed.
Sample grid in the space domain
In the space domain, the effective space limit $R$ is fixed. To analyze how the values of ${N}_{2}$ and ${N}_{1}$ affect the coverage of the grid in the space domain, consider the following definition of ‘grid coverage’ (25) $${A}_{r}=\frac{\mathrm{\pi}{R}^{2}-\mathrm{\pi}{\overline{r}}^{2}}{\mathrm{\pi}{R}^{2}}\cdot 100$$where $\overline{r}$ denotes the average radius of the gap (the hole in the middle of the grid). ${A}_{r}$ as defined in Eq. (25) is a measure of the “grid coverage” since it gives the percentage of the original space-limited domain area that is captured by the discrete grid. For example, if the average radius of the center gap is zero, then ${A}_{r}$ is 100%, that is, complete coverage. Based on the observation of Figs. 1 and 3, the relationship ${r}_{01}<{r}_{\pm 11}<{r}_{\pm 21}<\dots <{r}_{\pm M1}$ holds. Therefore, from Eq. (14), the average radius of the gap is given by (26) $$\overline{r}=\frac{{r}_{01}+{r}_{M1}}{2}=\frac{1}{2}\left(\frac{{j}_{01}}{{j}_{0{N}_{1}}}R+\frac{{j}_{M1}}{{j}_{M{N}_{1}}}R\right)$$
Hence, Eq. (25) for grid coverage can be written as (27) $${A}_{r}=\left[1-\frac{1}{4}{\left(\frac{{j}_{01}}{{j}_{0{N}_{1}}}+\frac{{j}_{M1}}{{j}_{M{N}_{1}}}\right)}^{2}\right]\cdot 100$$
Table 1 shows the different values of grid coverage ${A}_{r}$ as the values of ${N}_{1}$ and ${N}_{2}$ are changed.
| $N_2\backslash N_1$ | 15 | 75 | 150 | 300 |
|---|---|---|---|---|
| 15 | $A_r$ = 98.48% | $A_r$ = 99.92% | $A_r$ = 99.98% | $A_r$ = 99.99% |
| 75 | $A_r$ = 93.78% | $A_r$ = 99.36% | $A_r$ = 99.81% | $A_r$ = 99.95% |
| 151 | $A_r$ = 90.14% | $A_r$ = 98.42% | $A_r$ = 99.46% | $A_r$ = 99.84% |
| 301 | $A_r$ = 86.17% | $A_r$ = 96.58% | $A_r$ = 98.59% | $A_r$ = 99.51% |
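Eq. (27) is easy to evaluate directly; the following sketch (the function name is ours) reproduces the entries of Table 1:

```python
import numpy as np
from scipy.special import jn_zeros

def coverage_space(N1, N2):
    """Space-domain grid coverage A_r of Eq. (27), in percent."""
    M = (N2 - 1) // 2
    j0 = jn_zeros(0, N1)    # zeros of J_0: j_{0,1..N1}
    jM = jn_zeros(M, N1)    # zeros of J_M: j_{M,1..N1}
    rbar_over_R = 0.5 * (j0[0] / j0[-1] + jM[0] / jM[-1])  # Eq. (26), normalized by R
    return (1.0 - rbar_over_R**2) * 100.0
```

For example, `coverage_space(15, 15)` reproduces the 98.48% entry of Table 1.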
From Table 1, it can be seen that increasing ${N}_{1}$ (sample size in the radial direction) tends to increase the grid coverage. Since the effective space limit $R$ is fixed, from Eq. (21) it follows that increasing ${N}_{1}$ actually increases the effective band limit. However, increasing ${N}_{2}$ (sample size in angular direction) will result in a bigger gap in the center of the grid, which then decreases the coverage.
Sample grid in the frequency domain
Similarly, coverage of the grid in the frequency domain is defined as (28) $${A}_{\mathrm{\rho}}=\frac{\mathrm{\pi}{W}_{\mathrm{\rho}}^{2}-\mathrm{\pi}{\overline{\mathrm{\rho}}}^{2}}{\mathrm{\pi}{W}_{\mathrm{\rho}}^{2}}\cdot 100$$where $\overline{\mathrm{\rho}}$ denotes the average radius of the gap, given by (29) $$\overline{\mathrm{\rho}}=\frac{{\mathrm{\rho}}_{01}+{\mathrm{\rho}}_{M1}}{2}=\frac{{j}_{01}+{j}_{M1}}{2R}$$
Then, it follows that Eq. (28) for frequency domain grid coverage can be written as (30) $${A}_{\mathrm{\rho}}\phantom{\rule{thinmathspace}{0ex}}=\phantom{\rule{thinmathspace}{0ex}}\left[1-\frac{{({j}_{01}+{j}_{M1})}^{2}}{4{R}^{2}{W}_{\mathrm{\rho}}^{2}}\right]\cdot 100\mathrm{\%}$$
From Eq. (30), it can be observed that the sample grid coverage in the frequency domain is affected by $R$, ${W}_{\mathrm{\rho}}$ and $M$. Since ${N}_{2}=2M+1$, in order to get better grid coverage with a fixed ${W}_{\mathrm{\rho}}$, the values of $R$ and ${N}_{2}$ can be adjusted. Table 2 shows the grid coverage ${A}_{\mathrm{\rho}}$ for different values of $R$ and ${N}_{2}$.
From Table 2, the conclusion for the frequency domain is that when the effective band limit is fixed, increasing $R$ (effective space limit) tends to increase the coverage in the frequency domain, while increasing ${N}_{2}$ (sample size in the angular direction) decreases the coverage. However, from Eq. (21) it should be noted that to satisfy the sampling theorem, increasing $R$ with fixed ${W}_{\mathrm{\rho}}$ requires an increase in ${N}_{1}$ at the same time.
| N_{2} | R = 15 | R = 75 | R = 150 | R = 300 |
|---|---|---|---|---|
| 15 | A_{ρ} = 99.80% | A_{ρ} = 99.99% | A_{ρ} = 100.00% | A_{ρ} = 100.00% |
| 75 | A_{ρ} = 97.66% | A_{ρ} = 99.91% | A_{ρ} = 99.98% | A_{ρ} = 99.99% |
| 151 | A_{ρ} = 91.88% | A_{ρ} = 99.68% | A_{ρ} = 99.92% | A_{ρ} = 99.98% |
| 301 | A_{ρ} = 70.67% | A_{ρ} = 98.83% | A_{ρ} = 99.71% | A_{ρ} = 99.93% |
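The coverage formula of Eq. (30) is straightforward to evaluate numerically. The sketch below does so in Python with SciPy's Bessel-zero routine; the band limit ${W}_{\mathrm{\rho}}=10$ used in the accompanying check is an assumed value for illustration (Table 2 holds ${W}_{\mathrm{\rho}}$ fixed):

```python
import numpy as np
from scipy.special import jn_zeros

def freq_coverage(R, N2, W_rho):
    """Frequency-domain grid coverage A_rho of Eq. (30), in percent.

    The central gap is set by the first zeros j_{0,1} of J_0 and j_{M,1}
    of J_M, where M = (N2 - 1) / 2."""
    M = (N2 - 1) // 2
    j01 = jn_zeros(0, 1)[0]   # first positive zero of J_0 (about 2.405)
    jM1 = jn_zeros(M, 1)[0]   # first positive zero of J_M
    return (1.0 - (j01 + jM1) ** 2 / (4.0 * R ** 2 * W_rho ** 2)) * 100.0
```

Consistent with Table 2, the computed coverage improves as $R$ grows and degrades as ${N}_{2}$ grows.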
Band-limited function
In this section, we suppose that the function is an effectively band-limited function, defined on $\mathrm{\rho}\in [0,{W}_{\mathrm{\rho}}]$. The sampling points are defined as in Eq. (16) in the space domain and as in Eq. (17) in the frequency domain. In this subsection, the relationship between ${N}_{2}$, ${N}_{1}$ and the area of the gap in both domains is discussed.
Sampling grid in the space domain
The same definition of grid coverage in the space domain will be used as in Eq. (25). Since the sampling points of a band-limited function are given by Eqs. (16) and (17), the average radius of the gap can be defined as (31) $$\overline{r}=\frac{({r}_{01}+{r}_{M1})}{2}=\frac{1}{2}\left(\frac{{j}_{01}}{{W}_{\mathrm{\rho}}}+\frac{{j}_{M1}}{{W}_{\mathrm{\rho}}}\phantom{\rule{thickmathspace}{0ex}}\right)$$
Therefore, the coverage of the grid in the space domain can be written as (32) $${A}_{r}=\left[1-\frac{{({j}_{01}+{j}_{M1})}^{2}}{4{W}_{\mathrm{\rho}}^{2}{R}^{2}}\right]\cdot 100\mathrm{\%}$$
It can be observed that the grid coverage in the space domain of a band-limited function is the same as the grid coverage in the frequency domain of space limited function.
Sample grid in the frequency domain
The coverage of the grid in the frequency domain of a band limited function is defined by Eq. (28). With sampling points defined in Eq. (17), the average radius of the gap can be defined as (33) $$\overline{\mathrm{\rho}}=\frac{({\mathrm{\rho}}_{01}+{\mathrm{\rho}}_{M1})}{2}\phantom{\rule{thinmathspace}{0ex}}=\frac{1}{2}\left(\frac{{j}_{01}}{{j}_{0{N}_{1}}}{W}_{\mathrm{\rho}}+\frac{{j}_{M1}}{{j}_{M{N}_{1}}}\phantom{\rule{thickmathspace}{0ex}}{W}_{\mathrm{\rho}}\right)$$
The coverage of the grid in the frequency domain can be written as (34) $${A}_{\mathrm{\rho}}=\left[1-\frac{1}{4}{\left(\frac{{j}_{01}}{{j}_{0{N}_{1}}}+\frac{{j}_{M1}}{{j}_{M{N}_{1}}}\right)}^{2}\right]\cdot 100\mathrm{\%}$$
It can be observed that the grid coverage in the frequency domain of a band-limited function is the same as the grid coverage in the space domain of a space limited function.
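Since Eq. (34) depends only on ratios of Bessel-function zeros, the coverage trends for the band-limited case are easy to verify numerically. A minimal Python sketch:

```python
import numpy as np
from scipy.special import jn_zeros

def band_limited_freq_coverage(N1, N2):
    """Frequency-domain grid coverage of Eq. (34) for a band-limited
    function, in percent; it depends only on the zero ratios
    j_{0,1}/j_{0,N1} and j_{M,1}/j_{M,N1}, with M = (N2 - 1) / 2."""
    M = (N2 - 1) // 2
    j0 = jn_zeros(0, N1)   # zeros of J_0 up to j_{0,N1}
    jM = jn_zeros(M, N1)   # zeros of J_M up to j_{M,N1}
    return (1.0 - 0.25 * (j0[0] / j0[-1] + jM[0] / jM[-1]) ** 2) * 100.0
```

Evaluating this for a few parameter pairs reproduces the behaviour summarized in the conclusions below: coverage improves with ${N}_{1}$ and degrades with ${N}_{2}$.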
Conclusion
Based on the discussion above, the following conclusions can be made:
Increasing ${N}_{2}$ (angular direction) tends to decrease the sampling grid coverage in both domains. Increasing ${N}_{1}$ (radial direction) tends to increase the sampling coverage in the space domain for a space-limited function and in the frequency domain for a frequency-limited function. So, if a signal changes sharply in the angular direction such that large values of ${N}_{2}$ are needed, a large value of ${N}_{1}$ is also needed to compensate for the effect of increasing ${N}_{2}$ on the grid coverage.
For a space-limited function, if there is a lot of energy at the origin in the space domain, a larger value of ${N}_{1}$ will be required to ensure that the sampling grid gets as close to the origin as possible in the space domain. If the function has a lot of energy at the origin in the frequency domain, a large value for both ${N}_{1}$ and $R$ will be required to ensure adequate grid coverage.
For a band-limited function, if there is a lot of energy at the origin in the frequency domain, a large value of ${N}_{1}$ will be needed to ensure that the sample grid gets as close to the origin as possible in the frequency domain. If the function has a lot of energy at the origin in the space domain, large values for both ${N}_{1}$ and ${W}_{\mathrm{\rho}}$ are required.
Numerical Computation of the Transform
We have already demonstrated in Part I of this paper that the discrete 2D FT in polar coordinates can be interpreted as a DFT, DHT and then IDFT. This interpretation is quite helpful in coding the transform and in exploiting the speed of the FFT (Fast Fourier Transform) in implementing the computations. In this section, we explain how the speed of Matlab's built-in functions (Mathworks, 2018), or those of similar software, can be exploited to implement the 2D DFT in polar coordinates.
Forward transform
The values ${f}_{pk}$ can be considered as the entries in a matrix. To transform ${f}_{pk}\to {F}_{qm}$, the operation is performed as a sequence of steps: a 1D DFT (column-wise), followed by a scaled 1D DHT (row-wise), finally followed by a 1D IDFT (column-wise). The reader is reminded that the range of indices is given by $m,k=1\dots {N}_{1}-1$ and $n,p,q=-M\dots M$, where $2M+1={N}_{2}$. These steps can be summarized succinctly by rewriting Eq. (1) as (35) $$F_{qm}=\underbrace{\frac{1}{N_2}\sum_{n=-M}^{M}\underbrace{\left[\frac{2\pi R^2 i^{-n}}{j_{nN_1}}\sum_{k=1}^{N_1-1}Y_{m,k}^{nN_1}\underbrace{\sum_{p=-M}^{M}f_{pk}\,e^{-in\frac{2\pi p}{N_2}}}_{\text{1D DFT, column-wise}}\right]}_{\text{scaled 1D DHT, row-wise}}e^{+in\frac{2\pi q}{N_2}}}_{\text{inverse 1D DFT, column-wise}}$$where the DHT is defined in Baddour & Chouinard (2015) via the transformation matrix (36)
$$Y_{m,k}^{nN_1}=\frac{2}{j_{nN_1}J_{n+1}^{2}\left(j_{nk}\right)}J_{n}\left(\frac{j_{nm}j_{nk}}{j_{nN_1}}\right)\qquad 1\le m,k\le N_1-1$$
Matlab code for the DHT is described in Baddour & Chouinard (2017). The inverse 2D DFT can be similarly interpreted, as shown in “Inverse Transform”.
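As a sketch of how such a routine can look (this is not the code of Baddour & Chouinard (2017), just a direct NumPy/SciPy implementation of Eq. (36)), the matrix ${Y}_{m,k}^{n{N}_{1}}$ can be built as follows. Because the DHT matrix is approximately its own inverse, applying it twice to samples of a smooth, effectively space- and band-limited radial profile should return them nearly unchanged; the Gaussian profile and grid values in the accompanying check are illustrative choices:

```python
import numpy as np
from scipy.special import jn_zeros, jv

def dht_matrix(n, N1):
    """DHT transformation matrix Y^{nN1}_{m,k} of Eq. (36), size (N1-1) x (N1-1).

    The zeros of J_n coincide with those of J_{|n|}, so abs(n) is used
    when requesting them."""
    z = jn_zeros(abs(n), N1)        # j_{n,1}, ..., j_{n,N1}
    jN, jk = z[-1], z[:-1]
    # column k is scaled by 2 / (j_{n,N1} * J_{n+1}(j_{n,k})^2)
    return 2.0 / (jN * jv(n + 1, jk) ** 2) * jv(n, np.outer(jk, jk) / jN)
```

For example, with `Y = dht_matrix(0, 40)` and `v` holding samples of $e^{-r^{2}}$ on the corresponding radial grid, `Y @ (Y @ v)` agrees with `v` to high accuracy.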
Inverse transform
The steps of the inverse 2D DFT are the reverse of the steps outlined above for the forward 2D DFT. For $p=-M\dots M$ and $k=1\dots {N}_{1}-1$, Eq. (2) can be expressed as (37) $$f_{pk}=\underbrace{\frac{1}{N_2}\sum_{n=-M}^{M}\underbrace{\left[\frac{j_{nN_1}i^{+n}}{2\pi R^2}\sum_{m=1}^{N_1-1}Y_{k,m}^{nN_1}\underbrace{\sum_{q=-M}^{M}F_{qm}\,e^{-i\frac{2\pi nq}{N_2}}}_{\text{1D DFT, column-wise}}\right]}_{\text{scaled 1D DHT, row-wise}}e^{+i\frac{2\pi np}{N_2}}}_{\text{inverse 1D DFT, column-wise}}$$
This parallels the steps taken for the continuous case, with each continuous operation (Fourier series, Hankel transform) replaced by its discrete counterpart (DFT, DHT).
Therefore, for both forward and inverse 2D-DFT, the sequence of operations is a DFT of each column of the starting matrix, followed by a DHT of each row, a term-by-term scaling, followed by an IDFT of each column. This is a significant computational improvement because by interpreting the transform this way, the Fast Fourier Transform (FFT) can be used, which reduces the computational time quite significantly in comparison with a direct implementation of the summation definitions in Eqs. (1) and (2).
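As an illustration of this sequence, the following Python sketch implements Eqs. (35) and (37) directly with explicit centered DFT matrices rather than fft/circshift calls, trading the FFT speed-up for readability. The function and variable names are ours; the radial grids used in the accompanying check ($r_{pk}=j_{pk}R/j_{pN_1}$ and $\rho_{qm}=j_{qm}/R$ for a space-limited function) follow the sampling points of Part I:

```python
import numpy as np
from scipy.special import jn_zeros, jv

def dht_matrix(n, N1):
    """DHT transformation matrix Y^{nN1}_{m,k} of Eq. (36)."""
    z = jn_zeros(abs(n), N1)
    jN, jk = z[-1], z[:-1]
    return 2.0 / (jN * jv(n + 1, jk) ** 2) * jv(n, np.outer(jk, jk) / jN)

def polar_dft2(f, R, inverse=False):
    """2D DFT in polar coordinates, Eq. (35), or its inverse, Eq. (37).

    f has shape (N2, N1-1): rows are the angular index p (or q) = -M..M,
    columns the radial index k (or m) = 1..N1-1."""
    N2, K = f.shape
    N1, M = K + 1, (N2 - 1) // 2
    idx = np.arange(N2) - M                            # p, n, q = -M..M
    W = np.exp(-2j * np.pi * np.outer(idx, idx) / N2)  # centered DFT kernel
    g = W @ f                                          # 1D DFT, column-wise
    h = np.empty(g.shape, dtype=complex)
    for i, n in enumerate(idx):                        # scaled 1D DHT, row-wise
        jN = jn_zeros(abs(n), N1)[-1]
        if inverse:
            s = jN * 1j ** n / (2.0 * np.pi * R ** 2)
        else:
            s = 2.0 * np.pi * R ** 2 * 1j ** (-n) / jN
        h[i] = s * (dht_matrix(n, N1) @ g[i])
    return (W.conj() @ h) / N2                         # inverse 1D DFT, column-wise
```

The fft/circshift mapping of the next subsections replaces the two DFT matrix products by FFTs; the direct form above is convenient for checking correctness on small grids.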
Interpretation of the sampled forward transform in Matlab terms
To use the built-in Matlab function fft, a few operations are required. First, we define Matlab-friendly indices ${p}^{\prime}=p+(M+1)$ and ${n}^{\prime}=n+(M+1)$, so that $p,n=-M\dots M$ become ${p}^{\prime},{n}^{\prime}=1\dots 2M+1=1\dots {N}_{2}$ (since $2M+1={N}_{2}$). Hence, if the matrix $\mathbf{f}$ with entries ${f}_{{p}^{\prime}k}$ is defined, where ${p}^{\prime}=1\dots {N}_{2}$, $k=1\dots {N}_{1}-1$, then the first step, which is a column-wise DFT, can be written in terms of the Matlab-defined DFT as (38) $${\overline{f}}_{{n}^{\prime}k}=\sum _{{p}^{\prime}=1}^{{N}_{2}}{f}_{{p}^{\prime}k}{e}^{\frac{-2\mathrm{\pi}i({p}^{\prime}-1-M)({n}^{\prime}-1-M)}{{N}_{2}}}$$
The overbar denotes a DFT. The definition of the DFT in Matlab is actually given by the relationship (39) $${\overline{f}}_{{n}^{\prime}k}=\sum _{{p}^{\prime}=1}^{{N}_{2}}{f}_{{p}^{\prime}k}{e}^{\frac{-2\mathrm{\pi}i({p}^{\prime}-1)({n}^{\prime}-1)}{{N}_{2}}}$$Since the relationship $\sum _{{p}^{\prime}=1}^{{N}_{2}}{\tilde{f}}_{{p}^{\prime}k}{e}^{\frac{-2\mathrm{\pi}i({p}^{\prime}-1)({n}^{\prime}-1-M)}{{N}_{2}}}=\sum _{{p}^{\prime}=1}^{{N}_{2}}{f}_{{p}^{\prime}k}{e}^{\frac{-2\mathrm{\pi}i({p}^{\prime}-1-M)({n}^{\prime}-1-M)}{{N}_{2}}}$ holds, where ${\tilde{f}}_{{p}^{\prime}k}$ denotes the matrix ${f}_{{p}^{\prime}k}$ circularly shifted by $M+1$ along the column direction, we can sample the original function to obtain the discrete ${f}_{pk}$ values, put them in the matrix ${f}_{{p}^{\prime}k}$, and then shift the matrix ${f}_{{p}^{\prime}k}$ by $M+1$ along the column direction. In Matlab, the function $circshift\left(A,K,\mathrm{dim}\right)$ can be used, which circularly shifts the values in array $A$ by $K$ positions along dimension dim. Inputs $K$ and dim must be scalars; $\mathrm{dim}=1$ indicates the columns of matrix $A$ and $\mathrm{dim}=2$ indicates the rows. Hence, Eq. (38) can be computed as (40) $${\overline{f}}_{{n}^{\prime}k}=fft\left(circshift\left({f}_{{p}^{\prime}k},M+1,1\right),{N}_{2},1\right)$$
In matrix operations, this is equivalent to stating that each column of ${f}_{{p}^{\prime}k}$ is DFT’ed to yield ${\overline{f}}_{{n}^{\prime}k}$.
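This index bookkeeping can be checked numerically. In NumPy terms, `np.roll` plays the role of `circshift` and `np.fft.fft` of `fft`; note that the fft of the shifted input leaves the output rows in fft order, and the compensating shift by $-(M+1)$, which Eq. (41) applies before the DHT, restores the centered ordering of Eq. (38). A small self-contained check of both directions:

```python
import numpy as np

M = 3
N2 = 2 * M + 1
rng = np.random.default_rng(0)
f = rng.standard_normal(N2)                     # one column f_{p'}, p' = 1..N2

# centered DFT of Eq. (38): exponent -2*pi*i*(p'-1-M)*(n'-1-M)/N2
c = np.arange(N2) - M
centered = np.exp(-2j * np.pi * np.outer(c, c) / N2) @ f

# Eq. (40): fft of the circularly shifted input; the extra roll by -(M+1)
# is the de-shift that Eq. (41) applies before the DHT
via_fft = np.roll(np.fft.fft(np.roll(f, M + 1)), -(M + 1))

# the inverse direction of Eq. (49): shift, ifft, de-shift recovers the samples
recovered = np.roll(np.fft.ifft(np.roll(centered, M + 1)), -(M + 1))
```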
The second step in Eq. (35) is a DHT of order $n$, transforming ${\overline{f}}_{{n}^{\prime}k}\to {\hat{\overline{f}}}_{{n}^{\prime}l}$ so that the $k$ subscript is Hankel transformed to the $l$ subscript. The overhat denotes a DHT. In order to relate the order $n$ to the index ${n}^{\prime}$, we need to shift ${\overline{f}}_{{n}^{\prime}k}$ by $-(M+1)$ along the column direction so that the order ranges from $-M$ to $M$.
(41) $${\hat{\overline{f}}}_{{n}^{\prime}l}=\sum _{k=1}^{{N}_{1}-1}\frac{2{J}_{n}\left(\frac{{j}_{nk}{j}_{nl}}{{j}_{n{N}_{1}}}\right)}{{j}_{n{N}_{1}}{J}_{n+1}^{2}\left({j}_{nk}\right)}\,circshift\left({\overline{f}}_{{n}^{\prime}k},-(M+1),1\right)\qquad \begin{cases}\text{for }{n}^{\prime}=1\dots {N}_{2},\;l=1\dots {N}_{1}-1\\ \text{where }n={n}^{\prime}-(M+1)\end{cases}$$By using the Hankel transform matrix defined in Baddour & Chouinard (2015), Eq. (41) can be rewritten as (42) $${\hat{\overline{f}}}_{{n}^{\prime}l}=circshift\left({\overline{f}}_{{n}^{\prime}k},-(M+1),1\right){\left({Y}_{l,k}^{n{N}_{1}}\right)}^{T}\qquad \begin{cases}\text{for }{n}^{\prime}=1\dots {N}_{2},\;l=1\dots {N}_{1}-1\\ \text{where }n={n}^{\prime}-(M+1)\end{cases}$$
In matrix operations, this states that each row of ${\overline{f}}_{{n}^{\prime}k}$ is DHT'ed to yield ${\hat{\overline{f}}}_{{n}^{\prime}l}$. These are now scaled to give the Fourier coefficients of the 2D DFT, ${\hat{\overline{f}}}_{{n}^{\prime}l}\to {\overline{F}}_{{n}^{\prime}l}$. In order to proceed to an IDFT in the next step, it is necessary to shift the matrix by $M+1$ along the column direction after scaling (43) $${\overline{F}}_{{n}^{\prime}l}=circshift\left(\frac{2\mathrm{\pi}{R}^{2}}{{j}_{n{N}_{1}}}{i}^{-n}{\hat{\overline{f}}}_{{n}^{\prime}l},M+1,1\right)\qquad \begin{cases}\text{for }{n}^{\prime}=1\dots {N}_{2},\;l=1\dots {N}_{1}-1\\ \text{where }n={n}^{\prime}-(M+1)\end{cases}$$
This last step is a 1D IDFT of each column of ${\overline{F}}_{{n}^{\prime}l}$ to obtain ${F}_{ql}$. Using $2M+1={N}_{2}$ and ${q}^{\prime}=q+(M+1)$, this can be written as (44) $$\begin{array}{rl}{F}_{{q}^{\prime}l}&=\frac{1}{{N}_{2}}\sum _{{n}^{\prime}=1}^{{N}_{2}}{\overline{F}}_{{n}^{\prime}l}{e}^{+i\left({n}^{\prime}-M-1\right)\frac{2\mathrm{\pi}\left({q}^{\prime}-1-M\right)}{{N}_{2}}}\qquad \text{for }{q}^{\prime}=1\dots {N}_{2},\;l=1\dots {N}_{1}-1\\ &=\frac{1}{{N}_{2}}\sum _{{n}^{\prime}=1}^{{N}_{2}}{\overline{F}}_{{n}^{\prime}l}{e}^{+i({n}^{\prime}-1)\frac{2\mathrm{\pi}\left({q}^{\prime}-1-M\right)}{{N}_{2}}}\\ &=circshift\left(ifft\left({\overline{F}}_{{n}^{\prime}l},{N}_{2},1\right),-\left(M+1\right),1\right)\end{array}$$
Interpretation of the sampled inverse transform in Matlab terms
Similar to the forward transform, Matlab-friendly indices ${q}^{\prime}=q+(M+1)$ and ${n}^{\prime}=n+(M+1)$ are also defined. Hence, if the matrix $\mathbf{F}$ with entries ${F}_{{q}^{\prime}l}$ is defined, where ${q}^{\prime}=1\dots {N}_{2}$, $l=1\dots {N}_{1}-1$, it then follows that the first 1D DFT step in Eq. (37) can be written in terms of the Matlab-defined DFT as (45) $$\begin{array}{rl}{\overline{F}}_{{n}^{\prime}l}&=\sum _{{q}^{\prime}=1}^{{N}_{2}}{F}_{{q}^{\prime}l}{e}^{-i({n}^{\prime}-M-1)\frac{2\mathrm{\pi}({q}^{\prime}-1-M)}{{N}_{2}}}\qquad \text{for }{n}^{\prime}=1\dots {N}_{2},\;l=1\dots {N}_{1}-1\\ &=\sum _{{q}^{\prime}=1}^{{N}_{2}}{F}_{{q}^{\prime}l}{e}^{-i({n}^{\prime}-M-1)\frac{2\mathrm{\pi}({q}^{\prime}-1)}{{N}_{2}}}\end{array}$$
If the original function is sampled as ${F}_{ql}$ and then put into the matrix ${F}_{{q}^{\prime}l}$, then a $circshift$ operation is needed, and Eq. (45) can be computed as (46) $${\overline{F}}_{{n}^{\prime}l}=fft\left(circshift({F}_{{q}^{\prime}l},M+1,1),{N}_{2},1\right)$$
Subsequently, a DHT of order $n$ is required, transforming ${\overline{F}}_{{n}^{\prime}l}\to {\hat{\overline{F}}}_{{n}^{\prime}l}$ so that the $l$ subscript is Hankel transformed to the $k$ subscript. To achieve this, $circshift$ is also needed here.
(47) $${\hat{\overline{F}}}_{{n}^{\prime}k}=circshift\left({\overline{F}}_{{n}^{\prime}l},-(M+1),1\right){\left({Y}_{k,l}^{n{N}_{1}}\right)}^{T}\qquad \begin{cases}\text{for }{n}^{\prime}=1\dots {N}_{2},\;k=1\dots {N}_{1}-1\\ \text{where }n={n}^{\prime}-(M+1)\end{cases}$$This is followed by a scaling operation to obtain ${\hat{\overline{F}}}_{{n}^{\prime}k}\to {\overline{f}}_{{n}^{\prime}k}$ and then a $circshift$ by $M+1$ so that (48) $${\overline{f}}_{{n}^{\prime}k}=circshift\left(\frac{{j}_{n{N}_{1}}}{2\mathrm{\pi}{R}^{2}}{i}^{+n}{\hat{\overline{F}}}_{{n}^{\prime}k},M+1,1\right)\qquad \begin{cases}\text{for }{n}^{\prime}=1\dots {N}_{2},\;k=1\dots {N}_{1}-1\\ \text{where }n={n}^{\prime}-(M+1)\end{cases}$$
This last step is a 1D IDFT of each column of ${\overline{f}}_{{n}^{\prime}k}$ to get ${f}_{{p}^{\prime}k}$. Using $2M+1={N}_{2}$ and ${p}^{\prime}=p+(M+1)$, Eq. (37) can be written as (49) $$\begin{array}{rl}{f}_{{p}^{\prime}k}&=\frac{1}{{N}_{2}}\sum _{{n}^{\prime}=1}^{{N}_{2}}{\overline{f}}_{{n}^{\prime}k}{e}^{+i\left({n}^{\prime}-M-1\right)\frac{2\mathrm{\pi}\left({p}^{\prime}-1-M\right)}{{N}_{2}}}\qquad \text{for }{p}^{\prime}=1\dots {N}_{2},\;k=1\dots {N}_{1}-1\\ &=\frac{1}{{N}_{2}}\sum _{{n}^{\prime}=1}^{{N}_{2}}{\overline{f}}_{{n}^{\prime}k}{e}^{+i\frac{2\mathrm{\pi}\left({n}^{\prime}-1\right)\left({p}^{\prime}-1-M\right)}{{N}_{2}}}\\ &=circshift\left(ifft\left({\overline{f}}_{{n}^{\prime}k},{N}_{2},1\right),-(M+1),1\right)\end{array}$$
In conclusion, in this section, by using the interpretation of the kernel as sequential DFT, DHT and IDFT operations, Matlab (or similar software) built-in code can be used to efficiently implement the 2D DFT algorithm in polar coordinates.
Numerical Evaluation of the 2D DFT in Polar Coordinates to Approximate the Continuous FT
In this section, the 2D DFT is evaluated for its ability to estimate the continuous FT at the selected special sampling points in the spatial and frequency domains.
Method for testing the algorithm
Accuracy
In order to test the accuracy of the 2D-DFT and 2D-IDFT in approximating their continuous counterparts, the dynamic error is proposed as a metric. The dynamic error is defined as in Guizar-Sicairos & Gutiérrez-Vega (2004) (50) $$E(v)=20\,\mathrm{log}_{10}\left[\frac{\left|C(v)-D(v)\right|}{\mathrm{max}\left|D(v)\right|}\right]$$where $C(v)$ is the continuous forward or inverse 2D-FT and $D(v)$ is the value obtained from the discrete counterpart. The dynamic error is thus the ratio of the absolute error to the maximum amplitude of the discrete function, calculated on a log scale. Therefore, a large negative value represents an accurate discrete transform. The dynamic error is used instead of the percentage error in order to avoid division by zero.
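Equation (50) translates directly into code; for instance, in Python:

```python
import numpy as np

def dynamic_error(C, D):
    """Dynamic error E(v) of Eq. (50), in dB: C holds samples of the
    continuous transform, D the discretely computed values."""
    C = np.asarray(C, dtype=complex)
    D = np.asarray(D, dtype=complex)
    return 20.0 * np.log10(np.abs(C - D) / np.max(np.abs(D)))
```

A smaller discrepancy gives a more negative value; for example, a 1% pointwise error relative to the peak corresponds to $-40$ dB.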
Precision
The precision of the algorithm is an important evaluation criterion, which is tested by sequentially performing a pair of forward and inverse transforms and comparing the result to the original function. High precision indicates that numerical evaluation of the transform does not add much error. An average of the absolute error between the original function and the calculated counterpart at each sample point is used to measure the precision. It is given by (51) $$\mathrm{\epsilon}=\frac{1}{\left({N}_{1}-1\right)\cdot {N}_{2}}\sum _{n=1}^{\left({N}_{1}-1\right)\cdot {N}_{2}}\left|f-{f}^{\ast}\right|$$where $f$ is the original function and ${f}^{\ast}$ is the value obtained after sequentially performing a forward and then inverse transform. An ideal precision would result in the absolute error being zero.
Test functions
In this section, three test functions are chosen to evaluate the ability of the discrete transform to approximate its continuous counterpart. The first test case is a circularly symmetric Gaussian; given that it is circularly symmetric, continuous and smooth, the proposed DFT is expected to perform well. The second test case is the “Four-term sinusoid and Sinc” function, which is not symmetric in the angular direction and has a discontinuity in the radial direction. The third, more challenging, test case is the “Four-term sinusoid and Modified exponential” function, which is not circularly symmetric and grows without bound at the origin. Given that, as shown above, the sampling grid cannot cover the area around the origin very well, a function that blows up at the origin should give more error and provides a reasonable test case for evaluating the performance of the discrete transform. The test functions are chosen to test specific aspects of the performance of the discrete transform, but also because closed-form expressions for both the function and its transform are available. This allows a numerical evaluation of the error between the quantities computed with the 2D DFT and the quantities obtained by evaluating (sampling) the continuous (forward or inverse) transform at the grid points.
Gaussian
The first function chosen for evaluation is a circularly symmetric function which is Gaussian in the radial direction. Specifically, the function in the space domain is given by (52) $$f(r,\mathrm{\theta})={e}^{-{a}^{2}{r}^{2}}$$where $a$ is a real constant. Since the function is circularly symmetric, its 2D-FT is a zeroth-order Hankel Transform (Poularikas, 2010) and is given by (53) $$F(\mathrm{\rho},\mathrm{\psi})=\frac{\mathrm{\pi}}{{a}^{2}}{e}^{\frac{-{\mathrm{\rho}}^{2}}{4{a}^{2}}}$$
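Equations (52) and (53) form a convenient test pair because the transform pair can be verified independently by quadrature: for a circularly symmetric function, the 2D FT reduces to $2\mathrm{\pi}$ times the zeroth-order Hankel transform. A quick SciPy check at an illustrative point ($a=1$, $\mathrm{\rho}=2$, our choice of values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

a, rho = 1.0, 2.0   # illustrative values
# 2D FT of a circularly symmetric function = 2*pi * zeroth-order Hankel
# transform; the integrand decays like exp(-r^2), so truncating the
# semi-infinite integral at r = 10 is harmless here
numeric, _ = quad(lambda r: 2.0 * np.pi * np.exp(-(a * r) ** 2) * jv(0, rho * r) * r,
                  0.0, 10.0)
exact = np.pi / a ** 2 * np.exp(-rho ** 2 / (4.0 * a ** 2))
```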
The graphs of the original function and its continuous 2D-FT (which is also a Gaussian) are plotted with $a=1$ and shown in Fig. 5. From Fig. 5, the function is circularly symmetric and fairly smooth in the radial direction. Moreover, the function can be considered as either an effectively space-limited function or an effectively band-limited function. For the purposes of testing, it shall be considered as a space-limited function and Eqs. (14) and (15) will be used to proceed with the forward and inverse transform in sequence.
To perform the transform, the following variables need to be chosen: ${N}_{2}$, $R$ and ${N}_{1}$. In the angular direction, since the function in the spatial domain is circularly symmetric, ${N}_{2}$ can be chosen to be small. Thus, ${N}_{2}=15$ is chosen.
In the radial direction, from plotting the function, it can be seen that the effective space limit can be taken to be $R=5$ and the effective band limit can be taken to be ${W}_{\mathrm{\rho}}=10$. From Eq. (21), ${j}_{0{N}_{1}}\ge R\cdot {W}_{\mathrm{\rho}}=50$. Therefore, ${N}_{1}=17$ is chosen (a rough estimate of ${N}_{1}$ could also have been obtained from Eq. (24)). However, most of the energy of the function in both the space and frequency domains is located near the origin. Based on the discussion in “Conclusion”, relatively large values of $R$ and ${W}_{\mathrm{\rho}}$ are then needed. The effective space limit $R=40$ and effective band limit ${W}_{\mathrm{\rho}}=30$ are thus chosen, which gives ${j}_{0{N}_{1}}\ge R\cdot {W}_{\mathrm{\rho}}=1200$. Therefore, ${N}_{1}=383$ is chosen in order to satisfy this constraint. Both cases discussed here (${N}_{1}=17$ and ${N}_{1}=383$) are tested in the following.
Forward transform
Test results with $R=5$, ${N}_{1}=17$ are shown in Figs. 6 and 7. Figure 6 shows the sampled continuous forward transform and the discrete forward transform. Figure 7 shows the error between the sampled values of the continuous transform and the discretely calculated values.
From Fig. 7, it can be observed that the error is largest at the center, which is as expected because the sampling points can never attain the origin. The maximum value of the error is ${E}_{max}=-0.9115\ \mathrm{dB}$ and this occurs at the center. The average error is ${E}_{\mathrm{avg}.}=-30.4446\ \mathrm{dB}$.
Error test results with $R=40$, ${N}_{1}=383$ are shown in Fig. 8. Similar to the previous case, the error gets larger at the center, as expected. However, the maximum value of the error is ${E}_{max}=-8.3842\phantom{\rule{thickmathspace}{0ex}}\mathrm{d}\mathrm{B}$ and this occurs at the center. The average value of the error is ${E}_{\mathrm{a}\mathrm{v}\mathrm{g}.}=-63.8031\phantom{\rule{thickmathspace}{0ex}}\mathrm{d}\mathrm{B}$. Clearly, the test with $R=40$, ${N}_{1}=383$ gives a better approximation, which verifies the discussion in “Conclusion”.
With $R=40$, Table 3 shows the maximum and average errors (in dB) for different values of ${N}_{1}$ and ${N}_{2}$. The trends as functions of ${N}_{1}$ and ${N}_{2}$ are shown as plots in Figs. 9 and 10.
| N_{2} | N_{1} = 283 | N_{1} = 333 | N_{1} = 383 | N_{1} = 433 | N_{1} = 483 |
|---|---|---|---|---|---|
| 3 | E_{max.} = −21.6 | E_{max.} = −23.0 | E_{max.} = −24.3 | E_{max.} = −25.4 | E_{max.} = −26.3 |
| | E_{avg.} = −71.3 | E_{avg.} = −76.9 | E_{avg.} = −81.8 | E_{avg.} = −86.0 | E_{avg.} = −89.8 |
| 7 | E_{max.} = −12.9 | E_{max.} = −14.4 | E_{max.} = −15.7 | E_{max.} = −16.9 | E_{max.} = −17.8 |
| | E_{avg.} = −62.6 | E_{avg.} = −68.3 | E_{avg.} = −73.2 | E_{avg.} = −77.5 | E_{avg.} = −81.4 |
| 15 | E_{max.} = −5.4 | E_{max.} = −7.0 | E_{max.} = −8.4 | E_{max.} = −9.6 | E_{max.} = −10.6 |
| | E_{avg.} = −53.1 | E_{avg.} = −58.9 | E_{avg.} = −63.8 | E_{avg.} = −68.1 | E_{avg.} = −72.0 |
| 31 | E_{max.} = 2.3 | E_{max.} = 0.5 | E_{max.} = −1.0 | E_{max.} = −2.3 | E_{max.} = −3.4 |
| | E_{avg.} = −42.0 | E_{avg.} = −47.6 | E_{avg.} = −52.5 | E_{avg.} = −56.9 | E_{avg.} = −60.7 |
| 61 | E_{max.} = 9.7 | E_{max.} = 7.9 | E_{max.} = 6.4 | E_{max.} = 5.0 | E_{max.} = 3.8 |
| | E_{avg.} = −32.5 | E_{avg.} = −37.5 | E_{avg.} = −42.0 | E_{avg.} = −46.1 | E_{avg.} = −49.8 |
From Fig. 9, it can be seen that when ${N}_{1}$ alone is varied (with ${N}_{2}$ fixed at ${N}_{2}=15$) and is less than the minimum of 383 obtained from the sampling theorem, increasing ${N}_{1}$ leads to smaller errors, as expected. When ${N}_{1}$ is bigger than the sampling-theorem threshold of 383, increasing ${N}_{1}$ still decreases the error, which verifies the discussion about sample grid coverage in “Conclusion”: increasing ${N}_{1}$ tends to increase the sample grid coverage and capture more information near the center, and thus leads to smaller errors.
From Fig. 10, increasing ${N}_{2}$ alone (i.e., without a corresponding increase in ${N}_{1}$) leads to larger errors, both ${\mathrm{E}\mathrm{r}\mathrm{r}\mathrm{o}\mathrm{r}}_{max}$ and ${\mathrm{E}\mathrm{r}\mathrm{r}\mathrm{o}\mathrm{r}}_{\mathrm{a}\mathrm{v}\mathrm{e}\mathrm{r}\mathrm{a}\mathrm{g}\mathrm{e}}$. Although at first counterintuitive, this result is actually reasonable because the function is radially symmetric which implies that ${N}_{2}=1$ should be sufficient based on the sampling theorem for the angular direction. Therefore, increasing ${N}_{2}$ will not lead to a better approximation. Moreover, from the discussion of the sample grid coverage in “Conclusion”, the sampling grid coverage in both domains gets worse when ${N}_{2}$ gets bigger because more information from the center is lost. This problem can be solved by increasing ${N}_{1}$ at the same time, but it could be computationally time consuming. Therefore, choosing ${N}_{2}$ properly is very important from the standpoint of accuracy and computational efficiency.
Inverse transform
Test results for the inverse transform with $R=5$, ${N}_{1}=17$ are shown in Figs. 11 and 12. Figure 11 shows the sampled continuous inverse transform and discrete inverse transform and Fig. 12 shows the error between the sampled continuous and discretely calculated values.
Similar to the case for the forward transform, the error gets larger at the center, which is as expected because the sampling grid shows that the sampling points never attain the center. The maximum value of the error is ${E}_{max}=3.1954\phantom{\rule{thickmathspace}{0ex}}\mathrm{d}\mathrm{B}$ and this occurs at the center. The average of the error is ${E}_{\mathrm{a}\mathrm{v}\mathrm{g}.}=-25.7799\phantom{\rule{thickmathspace}{0ex}}\mathrm{d}\mathrm{B}$.
Error test results for the inverse transform with $R=40$, ${N}_{1}=383$ are shown in Fig. 13. In this case, the maximum value of the error is ${E}_{max}=-12.2602\ \mathrm{dB}$ and this occurs at the center. The average of the error is ${E}_{\mathrm{avg}.}=-98.0316\ \mathrm{dB}$. Table 4 shows the errors (in dB) with respect to different values of ${N}_{1}$ and ${N}_{2}$; Figs. 14 and 15 show the corresponding trends.
| N_{2} | N_{1} = 283 | N_{1} = 333 | N_{1} = 383 | N_{1} = 433 | N_{1} = 483 |
|---|---|---|---|---|---|
| 3 | E_{max.} = −25.9 | E_{max.} = −27.5 | E_{max.} = −28.9 | E_{max.} = −30.2 | E_{max.} = −31.3 |
| | E_{avg.} = −115.3 | E_{avg.} = −115.4 | E_{avg.} = −115.4 | E_{avg.} = −115.5 | E_{avg.} = −115.5 |
| 7 | E_{max.} = −16.5 | E_{max.} = −18.1 | E_{max.} = −19.4 | E_{max.} = −20.5 | E_{max.} = −21.6 |
| | E_{avg.} = −107.0 | E_{avg.} = −107.1 | E_{avg.} = −107.2 | E_{avg.} = −107.2 | E_{avg.} = −107.2 |
| 15 | E_{max.} = −9.7 | E_{max.} = −11.0 | E_{max.} = −12.3 | E_{max.} = −13.4 | E_{max.} = −14.4 |
| | E_{avg.} = −97.9 | E_{avg.} = −98.0 | E_{avg.} = −98.0 | E_{avg.} = −98.1 | E_{avg.} = −98.1 |
| 34 | E_{max.} = −4.4 | E_{max.} = −5.5 | E_{max.} = −6.5 | E_{max.} = −7.5 | E_{max.} = −8.3 |
| | E_{avg.} = −86.9 | E_{avg.} = −86.9 | E_{avg.} = −87.0 | E_{avg.} = −87.0 | E_{avg.} = −87.0 |
| 61 | E_{max.} = −1.1 | E_{max.} = −1.7 | E_{max.} = −2.4 | E_{max.} = −3.0 | E_{max.} = −3.7 |
| | E_{avg.} = −75.6 | E_{avg.} = −75.6 | E_{avg.} = −75.6 | E_{avg.} = −75.6 | E_{avg.} = −75.7 |
From Fig. 14, it can be observed that increasing ${N}_{1}$ tends to improve the result, but not significantly. This is consistent with the discussion of the $R=40$, ${N}_{1}=383$ case: with fixed $R$ and ${W}_{\mathrm{\rho}}$, increasing ${N}_{1}$ does not allow the sampling grid in the frequency domain to get any closer to the origin to capture more information. From Fig. 15, increasing ${N}_{2}$ (with fixed ${N}_{1}=383$) leads to a worse approximation, which also verifies that discussion.
Performing a sequential 2D-DFT and 2D-IDFT results in $\mathrm{\epsilon}=4.1656\times {10}^{-17}$, where $\mathrm{\epsilon}$ is calculated with Eq. (51). Therefore, performing sequential forward and inverse transforms does not add much error.
Four-term sinusoid & Sinc function
The second function chosen for evaluation is given by (54) $$f(r,\mathrm{\theta})=\frac{\mathrm{sin}(ar)}{ar}\left[3\mathrm{sin}(\mathrm{\theta})+\mathrm{sin}(3\mathrm{\theta})+4\mathrm{cos}(10\mathrm{\theta})+12\mathrm{sin}(15\mathrm{\theta})\right]$$which is a sinc function in the radial direction and a four-term sinusoid in the angular direction. The graphs of the original function and the magnitude of its continuous 2D-FT with $a=5$ are shown in Fig. 16. From Fig. 16, the function can be considered as a band-limited function. Therefore, Eqs. (16) and (17) were used to implement the forward and inverse transforms.
The continuous 2D-FT can be calculated from Baddour (2011) as (55) $$F(\rho,\psi)=\sum_{n=-\infty}^{\infty}2\pi i^{-n}e^{in\psi}\int_{0}^{\infty}f_{n}(r)J_{n}(\rho r)\,r\,dr$$where ${f}_{n}(r)$ is the $n$th Fourier series coefficient of $f(r,\mathrm{\theta})$, given by (56) $$f_{n}(r)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(r,\theta)e^{-in\theta}\,d\theta$$
From the sampling theorem applied in the angular direction, the highest angular frequency in Eq. (54) requires ${N}_{2}$ to be at least 31 to reconstruct the signal. Therefore, at least 31 terms are required to calculate the continuous 2D-FT, which can be written as (57) $$F(\rho,\psi)=\begin{cases}\dfrac{8\pi\cos(10\psi)\,\rho^{10}}{a\sqrt{a^{2}-\rho^{2}}\left(a+\sqrt{a^{2}-\rho^{2}}\right)^{10}}, & \rho<a\\[2ex] -\dfrac{6\pi i\sin(\psi)}{a\rho\sqrt{\rho^{2}-a^{2}}}+\dfrac{2\pi i\sin\left(3\arcsin\left(\frac{a}{\rho}\right)\right)\sin(3\psi)}{\sqrt{\rho^{2}-a^{2}}}-\dfrac{8\pi\sin\left(10\arcsin\left(\frac{a}{\rho}\right)\right)\cos(10\psi)}{\sqrt{\rho^{2}-a^{2}}}+\dfrac{24\pi i\sin\left(15\arcsin\left(\frac{a}{\rho}\right)\right)\sin(15\psi)}{\sqrt{\rho^{2}-a^{2}}}, & \rho>a\end{cases}$$
In the angular direction, the highest frequency term of the function in the space domain is $12\sin(15\mathrm{\theta})$. From the sampling theorem, the sampling frequency should be at least twice the highest frequency present in the signal. Thus, ${N}_{2}=41$ is chosen in order to go a little past the minimum requirement of 31. In the radial direction, from the graphs of the original function and its 2D-FT, it can be assumed that $f(r,\mathrm{\theta})$ is space-limited at $R=15$ and band-limited at ${W}_{\mathrm{\rho}}=30$. However, since most of the energy in the space domain is located at the origin, a relatively large band limit should be chosen based on the discussion in “Conclusion”. Therefore, ${W}_{\mathrm{\rho}}=90$ and ${N}_{1}=430$ are chosen.
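The angular sampling requirement quoted above can be made explicit: a highest angular harmonic of 15 needs $2\times 15+1=31$ samples over the full circle. A one-line helper (the name is illustrative) captures this:

```python
def min_angular_samples(max_harmonic):
    """Minimum N2 implied by the angular sampling theorem:
    2 * (highest angular harmonic) + 1 samples over the full circle."""
    return 2 * max_harmonic + 1
```

For Eq. (54), `min_angular_samples(15)` reproduces the minimum requirement of 31, and the choice ${N}_{2}=41$ deliberately exceeds it.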
Forward transform
The error results for the forward 2D-DFT of the four-term sinusoid and sinc function with ${W}_{\mathrm{\rho}}=90$, ${N}_{1}=430$ are shown in Fig. 17. The discrete transform does not approximate the continuous transform very well. This is expected because the function in the frequency domain is discontinuous, and sampling points close to the discontinuity produce a very large error. The maximum value of the error is ${\mathrm{Error}}_{\mathrm{max}}=10.6535\ \mathrm{dB}$, and this occurs where the discontinuities are located. The average of the error is ${\mathrm{Error}}_{\mathrm{average}}=-38.7831\ \mathrm{dB}$.
With ${W}_{\mathrm{\rho}}=90$ fixed, Table 5 shows the errors for different values of ${N}_{1}$ and ${N}_{2}$, and Figs. 18 and 19 show the corresponding trends. From Fig. 18, increasing ${N}_{1}$ alone tends to improve the average error. The maximum error shows no clear trend with ${N}_{1}$, which is reasonable because of the discontinuity of the function in the frequency domain.
| N_{2} | N_{1} = 330 | N_{1} = 380 | N_{1} = 430 | N_{1} = 480 | N_{1} = 530 |
|---|---|---|---|---|---|
| 11 | E_{max.} = 4.6, E_{avg.} = −33.6 | E_{max.} = 7.1, E_{avg.} = −33.4 | E_{max.} = 3.4, E_{avg.} = −33.5 | E_{max.} = 9.0, E_{avg.} = −35.1 | E_{max.} = 2.8, E_{avg.} = −35.5 |
| 21 | E_{max.} = 6.7, E_{avg.} = −33.9 | E_{max.} = 10.5, E_{avg.} = −34.6 | E_{max.} = 3.2, E_{avg.} = −37.2 | E_{max.} = 6.9, E_{avg.} = −38.0 | E_{max.} = 3.6, E_{avg.} = −38.1 |
| 41 | E_{max.} = 8.5, E_{avg.} = −38.7 | E_{max.} = 35.1, E_{avg.} = −38.9 | E_{max.} = 10.7, E_{avg.} = −38.8 | E_{max.} = 14.6, E_{avg.} = −39.8 | E_{max.} = 11.1, E_{avg.} = −41.3 |
| 81 | E_{max.} = 9.7, E_{avg.} = −34.3 | E_{max.} = 32.7, E_{avg.} = −35.5 | E_{max.} = 14.8, E_{avg.} = −36.2 | E_{max.} = 22.6, E_{avg.} = −37.3 | E_{max.} = 14.5, E_{avg.} = −37.5 |
| 161 | E_{max.} = 19.9, E_{avg.} = −29.4 | E_{max.} = 30.2, E_{avg.} = −30.7 | E_{max.} = 22.5, E_{avg.} = −31.1 | E_{max.} = 22.5, E_{avg.} = −32.2 | E_{max.} = 16.1, E_{avg.} = −32.8 |
From Fig. 19, increasing ${N}_{2}$ leads to ${\mathrm{Error}}_{\mathrm{max}}$ and ${\mathrm{Error}}_{\mathrm{average}}$ first improving and then worsening. This is reasonable because when ${N}_{2}$ is less than the minimum requirement of 31 from the sampling theorem, the result is affected by both the sampling point density (from the sampling theorem) and the grid coverage (discussed in “Conclusion”). Increasing ${N}_{2}$ gives better results from the point of view of the sampling theorem but worse grid coverage, and the combined effect depends on the properties of the function. For this particular function, once ${N}_{2}$ exceeds 31, so that the angular sampling theorem is satisfied, the results worsen with increasing ${N}_{2}$.
Inverse transform
The error results for the 2D-IDFT of the four-term sinusoid and sinc function with ${W}_{\mathrm{\rho}}=90$, ${N}_{1}=430$ are shown in Fig. 20. The maximum value of the error is ${\mathrm{Error}}_{\mathrm{max}}=-8.6734\ \mathrm{dB}$, and the average of the error is ${\mathrm{Error}}_{\mathrm{average}}=-37.8119\ \mathrm{dB}$. With ${W}_{\mathrm{\rho}}=90$ fixed, Table 6 shows the errors for different values of ${N}_{1}$ and ${N}_{2}$, and Figs. 21 and 22 show the corresponding trends.
| N_{2} | N_{1} = 330 | N_{1} = 380 | N_{1} = 430 | N_{1} = 480 | N_{1} = 530 |
|---|---|---|---|---|---|
| 11 | E_{max.} = 0.1, E_{avg.} = −43.7 | E_{max.} = 0.1, E_{avg.} = −43.7 | E_{max.} = 0.1, E_{avg.} = −46.6 | E_{max.} = 0.1, E_{avg.} = −45.6 | E_{max.} = 0.1, E_{avg.} = −48.1 |
| 21 | E_{max.} = 0.7, E_{avg.} = −38.3 | E_{max.} = 0.7, E_{avg.} = −38.0 | E_{max.} = 0.6, E_{avg.} = −40.4 | E_{max.} = 0.6, E_{avg.} = −40.6 | E_{max.} = 0.7, E_{avg.} = −42.2 |
| 41 | E_{max.} = −9.0, E_{avg.} = −35.9 | E_{max.} = −8.5, E_{avg.} = −24.7 | E_{max.} = −8.7, E_{avg.} = −37.8 | E_{max.} = −8.8, E_{avg.} = −38.2 | E_{max.} = −8.6, E_{avg.} = −39.0 |
| 81 | E_{max.} = −4.5, E_{avg.} = −35.7 | E_{max.} = −4.7, E_{avg.} = −26.5 | E_{max.} = −4.5, E_{avg.} = −37.5 | E_{max.} = −4.6, E_{avg.} = −36.2 | E_{max.} = −4.5, E_{avg.} = −39.0 |
| 161 | E_{max.} = 0.8, E_{avg.} = −35.6 | E_{max.} = 0.7, E_{avg.} = −32.5 | E_{max.} = 0.7, E_{avg.} = −36.6 | E_{max.} = 0.7, E_{avg.} = −37.2 | E_{max.} = 0.7, E_{avg.} = −39.2 |
From Fig. 21, it can be observed that increasing ${N}_{1}$ alone improves the average error, as expected. However, ${N}_{1}=380$ gives a noticeably worse average error than the other points. This could be caused by the discontinuity of the function in the frequency domain: changing to ${N}_{1}=381$, the average error becomes −37.0 dB, which supports the conclusion that the large error is caused by the discontinuity.
From Fig. 22, increasing ${N}_{2}$ does not lead to worse results, unlike the previous cases. However, from Fig. 16 it can be seen that the function in the frequency domain does not carry much information in the central area. So, even though increasing ${N}_{2}$ causes a larger hole in the center, as discussed in “Conclusion”, little energy is lost, which explains why Fig. 22 shows a different trend from the previous cases.
Performing the 2D-DFT and 2D-IDFT sequentially results in $\mathrm{\epsilon}=1.3117\times {10}^{-12}$, where $\mathrm{\epsilon}$ is calculated by Eq. (51).
Four-term sinusoid and modified exponential
The next test function is given by (58) $$f(r,\theta)=\frac{e^{-ar}}{r}\left[3\sin(\theta)+\sin(3\theta)+4\cos(10\theta)+12\sin(15\theta)\right]$$
Its continuous 2D-FT can be calculated as (59) $$\begin{aligned}F(\rho,\psi)=&-6\pi i\sin(\psi)\frac{\sqrt{\rho^{2}+a^{2}}-a}{\rho\sqrt{\rho^{2}+a^{2}}}+2\pi i\sin(3\psi)\frac{\left(\sqrt{\rho^{2}+a^{2}}-a\right)^{3}}{\rho^{3}\sqrt{\rho^{2}+a^{2}}}\\&-8\pi\cos(10\psi)\frac{\left(\sqrt{\rho^{2}+a^{2}}-a\right)^{10}}{\rho^{10}\sqrt{\rho^{2}+a^{2}}}+24\pi i\sin(15\psi)\frac{\left(\sqrt{\rho^{2}+a^{2}}-a\right)^{15}}{\rho^{15}\sqrt{\rho^{2}+a^{2}}}\end{aligned}$$
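For numerical comparison against the discrete transform, Eq. (59) can be transcribed directly; the sketch below is a plain transcription in Python/NumPy (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def F_exp_sinusoid(rho, psi, a=0.1):
    """Direct transcription of Eq. (59), the continuous 2D-FT of the
    four-term sinusoid and modified exponential test function."""
    s = np.sqrt(rho**2 + a**2)
    return (-6j * np.pi * np.sin(psi) * (s - a) / (rho * s)
            + 2j * np.pi * np.sin(3 * psi) * (s - a) ** 3 / (rho**3 * s)
            - 8 * np.pi * np.cos(10 * psi) * (s - a) ** 10 / (rho**10 * s)
            + 24j * np.pi * np.sin(15 * psi) * (s - a) ** 15 / (rho**15 * s))
```

Note that, as in Eq. (59), the three sine terms are purely imaginary, so at $\psi=0$ only the real $\cos(10\psi)$ term survives.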
The graphs of the original function and of the magnitude of its continuous 2D-FT with $a=0.1$ are shown in Fig. 23. From Fig. 23, it can be observed that the function has a spike in both domains, which is a more difficult scenario based on the discussion in “Conclusion”. In this case, the function could be treated as either space-limited or band-limited; here it is tested as space-limited.
From the graphs of the original function and its 2D-FT, it can be assumed that $f(r,\mathrm{\theta})$ is effectively space-limited with $R=20$ and that $F(\mathrm{\rho},\mathrm{\psi})$ is effectively band-limited with ${W}_{\mathrm{\rho}}=15$, which gives ${j}_{0{N}_{1}}\approx 300$ and results in ${N}_{1}=96$. However, since the function grows sharply near the origin in both domains, relatively large values of $R$ and ${W}_{\mathrm{\rho}}$ should give a better approximation. Therefore, another case with $R=40$, ${W}_{\mathrm{\rho}}=30$ is also tested; in this case, ${N}_{1}=383$ is chosen.
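The radial sample sizes quoted above follow from the space-bandwidth product: assuming the criterion is the smallest ${N}_{1}$ whose ${N}_{1}$-th zero of ${J}_{0}$ reaches $R\,{W}_{\mathrm{\rho}}$ (an assumed reading of ${j}_{0{N}_{1}}\approx R{W}_{\mathrm{\rho}}$; the function name is illustrative), this can be computed as:

```python
import numpy as np
from scipy.special import jn_zeros

def radial_sample_size(R, W_rho, max_N=1000):
    """Smallest N1 such that j_{0,N1} >= R * W_rho.

    jn_zeros(0, max_N) returns the first max_N positive zeros of J_0 in
    increasing order; searchsorted finds the first zero reaching the
    space-bandwidth product (zeros are 1-indexed, hence the + 1).
    """
    zeros = jn_zeros(0, max_N)
    return int(np.searchsorted(zeros, R * W_rho)) + 1
```

This reproduces both choices in the text: $R=20$, ${W}_{\mathrm{\rho}}=15$ gives ${N}_{1}=96$, and $R=40$, ${W}_{\mathrm{\rho}}=30$ gives ${N}_{1}=383$.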
In the angular direction, the highest frequency term is $12\sin(15\mathrm{\theta})$. From the sampling theorem, the sampling frequency should be at least twice the highest frequency present in the signal. Thus, ${N}_{2}=41$ is chosen.
Forward transform
Here, the function is tested as a space-limited function, and Eqs. (14) and (15) are used to implement the forward and inverse transforms in sequence. The error results with $R=40,\ {W}_{\mathrm{\rho}}=30,\ {N}_{1}=383$ are shown in Fig. 24. The maximum value of the error is ${\mathrm{Error}}_{\mathrm{max}}=-10.1535\ \mathrm{dB}$, and this occurs in the central area. The average of the error is ${\mathrm{Error}}_{\mathrm{average}}=-32.7619\ \mathrm{dB}$. This demonstrates that the discrete transform approximates the sampled values of the continuous function quite well.
Inverse transform
The error results with $R=40,\phantom{\rule{thinmathspace}{0ex}}{W}_{\mathrm{\rho}}=30,\phantom{\rule{thinmathspace}{0ex}}{N}_{1}=383$ are shown in Fig. 25.
The maximum value of the error is ${\mathrm{Error}}_{\mathrm{max}}=0.5579\ \mathrm{dB}$, and this occurs at the center. The average of the error is ${\mathrm{Error}}_{\mathrm{average}}=-68.7317\ \mathrm{dB}$.
Performing the 2D-DFT and 2D-IDFT sequentially results in $\mathrm{\epsilon}=1.421\times {10}^{-12}$, where $\mathrm{\epsilon}$ is calculated by Eq. (51).
It can be observed that even for functions with the most challenging properties considered here, the proposed transform can still be used to approximate the continuous FT with fairly small errors, provided the function is sampled properly.
Summary and Conclusion
Accuracy and precision of the transform
The proposed discrete 2D-FT in polar coordinates demonstrates acceptable accuracy in providing discrete estimates of the continuous FT in polar coordinates. In Baddour & Chouinard (2015), Guizar-Sicairos & Gutiérrez-Vega (2004) and Higgins & Munson (1987), the one-dimensional Hankel transform of a sinc function showed a similar dynamic error, which can be used as a comparative measure. Since the DHT is one step of the proposed discrete 2D-FT, and the definition of the Hankel transform is based on Abbas, Sun & Foroosh (2017), a similar dynamic error is to be expected.
The test cases showed that performing a forward transform and an inverse transform sequentially introduced very small errors ($\mathrm{\epsilon}=1.4004\times {10}^{-12}$ in the worst case), which demonstrates that the discrete transform has good precision.
Guidelines for choosing sample sizes
As discussed in “Conclusion” and confirmed by the test cases, the sample sizes ${N}_{1}$ (radial direction) and ${N}_{2}$ (angular direction) do not have to be of the same order. For functions with different properties, the sample sizes in the two directions can be very different. To approximate the continuous FT properly, the sample sizes should be chosen based on the discussion in “Conclusion”.
Interpretation of the transform
By interpreting the transform as a 1D DFT, followed by a 1D DHT and a 1D IDFT, the computing time of the transform is reduced to a useful level, since the FFT can be used to compute the DFT.
Appendix: Improving the Computing Time of the Transform
One of the advantages of the traditional FT is that it can be computed quickly using the now well-established FFT algorithm. To reduce the computing time of the 2D DFT in polar coordinates, the following steps are recommended:
Interpreting the transform as three sequential operations (DFT, DHT, IDFT) instead of a single four-dimensional matrix.
Pre-calculating and saving the Bessel zeros.
Reducing computing time by interpreting the transform as three operations in sequence
As explained above, the essence of the transform is that the matrix f_{pk} is transformed into the matrix F_{ql}. The intuitive way to interpret the transform kernel is as a four-dimensional matrix, or matrices within a matrix. However, interpreting the transform as a 1D-DFT of each column, a 1D-DHT of each row, and then a 1D-IDFT of each column makes it possible to use the MATLAB built-in functions fft and ifft, which significantly reduces the computation time.
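The three-step structure can be sketched as follows. This is a minimal numerical illustration in Python/NumPy (the paper's code is in MATLAB), assuming the Baddour–Chouinard form of the DHT matrix built from Bessel zeros, and ignoring the order-dependent radial sample points of the full 2D transform; it shows only the algebraic DFT → DHT → IDFT pipeline, and all names are illustrative:

```python
import numpy as np
from scipy.special import jv, jn_zeros

def dht_matrix(n, N):
    """(N-1) x (N-1) discrete Hankel transform matrix of integer order n,
    built from the Bessel zeros j_{n,1},...,j_{n,N}. It is approximately
    its own inverse, so the same matrix serves forward and inverse DHTs."""
    j = jn_zeros(n, N)                      # j_{n,1} ... j_{n,N}
    jN, jk = j[-1], j[:-1]
    return (2.0 / (jN * jv(n + 1, jk[None, :]) ** 2)
            * jv(n, jk[:, None] * jk[None, :] / jN))

def polar_dft2(f):
    """One pass of the DFT -> DHT -> IDFT sequence.

    f : array of shape (N2, N1 - 1); rows index the angle, columns the radius.
    In practice the DHT matrices should be precomputed and cached.
    """
    N2, N1 = f.shape[0], f.shape[1] + 1
    Fbar = np.fft.fft(f, axis=0)            # 1D DFT of each column (angle)
    # FFT row m corresponds to angular order n = m (m <= N2//2) or m - N2
    orders = [m if m <= N2 // 2 else m - N2 for m in range(N2)]
    H = np.empty_like(Fbar)
    for m, n in enumerate(orders):
        H[m, :] = dht_matrix(abs(n), N1) @ Fbar[m, :]   # 1D DHT of each row
    return np.fft.ifft(H, axis=0)           # 1D IDFT of each column
```

Because the DHT matrix is approximately involutory, applying `polar_dft2` twice returns (approximately) the original samples, mirroring the small sequential forward-inverse errors $\mathrm{\epsilon}$ reported above.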
Reducing computing time by pre-calculating the Bessel zeros
After defining the transform as three operations in sequence and using the matrix for the DHT defined in Lozier (2003), it was found that a large share of the computational time was spent calculating the Bessel zeros for every test case, even though the Bessel zeros are the same in every case. Pre-calculating the Bessel zeros and storing the results for large values of N_{1} and N_{2} saves a substantial amount of time.
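One simple way to realize this caching, whether the zeros are held in memory as below or saved to a file for reuse across runs, is to memoize the zero computation; this sketch uses an in-memory cache, and the function name is illustrative:

```python
from functools import lru_cache
from scipy.special import jn_zeros

@lru_cache(maxsize=None)
def cached_bessel_zeros(n, N):
    """First N positive zeros of J_n, computed once per (n, N) pair and
    reused by every subsequent transform. The returned array is shared:
    callers must treat it as read-only."""
    return jn_zeros(n, N)
```

The first call for a given `(n, N)` pays the full cost of the zero computation; every later call is a dictionary lookup.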
Table 7 shows the computing time of a forward transform on the same computer (Processor: Intel(R) Core(TM) i7-4710HQ CPU; RAM: 12 GB) for three cases:
Evaluate the transform as matrices in a matrix without pre-calculating the Bessel zeros.
Evaluate the transform as a DFT, DHT and IDFT in sequence without pre-calculating the Bessel zeros.
Evaluate the transform as a DFT, DHT and IDFT in sequence with pre-calculating the Bessel zeros.
| Test cases | Total computing time (s) |
|---|---|
| Case 1 | 3,346.5 |
| Case 2 | 321.1 |
| Case 3 | 14.3 |
The Gaussian function was used as the test function; therefore, ${N}_{1}=383$ and ${N}_{2}=15$.
From Table 7, it can be clearly observed that running the transform as matrices in a matrix costs 3,346.5 s, which is too slow for the transform to be useful. Implementing the transform as three sequential operations reduces 3,346.5 s to 321.1 s, which is much better. Finally, pre-calculating the Bessel zeros makes the transform faster still and practical to apply.