1  Fourier Analysis

$$ \newcommand{\LetThereBe}[2]{\newcommand{#1}{#2}} \newcommand{\letThereBe}[3]{\newcommand{#1}[#2]{#3}} \newcommand{\ForceToBe}[2]{\renewcommand{#1}{#2}} \newcommand{\forceToBe}[3]{\renewcommand{#1}[#2]{#3}} \newcommand{\MayThereBe}[2]{\newcommand{#1}{#2}} \newcommand{\mayThereBe}[3]{\newcommand{#1}[#2]{#3}} % Declare mathematics (so they can be overwritten for PDF) \newcommand{\declareMathematics}[2]{\DeclareMathOperator{#1}{#2}} \newcommand{\declareMathematicsStar}[2]{\DeclareMathOperator*{#1}{#2}} % striked integral \newcommand{\avint}{\mathop{\mathchoice{\,\rlap{-}\!\!\int} {\rlap{\raise.15em{\scriptstyle -}}\kern-.2em\int} {\rlap{\raise.09em{\scriptscriptstyle -}}\!\int} {\rlap{-}\!\int}}\nolimits} % \d does not work well for PDFs \LetThereBe{\d}{\differential} \LetThereBe{\Im}{\IM} \LetThereBe{\Re}{\RE} \letThereBe{\linefrac}{2}{#1/#2} \LetThereBe{\ExtProd}{\mathsf{\Lambda}} \letThereBe{\unicodeInt}{1}{\mathop{\vcenter{\mathchoice{\huge\unicode{#1}}{\unicode{#1}}{\unicode{#1}}{\unicode{#1}}}}\nolimits} \letThereBe{\Oiint}{1}{\underset{ #1 \;}{ {\rlap{\mspace{1mu} \boldsymbol{\bigcirc}}{\rlap{\int}{\;\int}}} }} \letThereBe{\sOiint}{1}{\unicodeInt{x222F}_{#1}} $$ $$ % Simply for testing \LetThereBe{\foo}{\textrm{FIXME: this is a test!}} % Font styles \letThereBe{\mcal}{1}{\mathcal{#1}} \letThereBe{\chem}{1}{\mathrm{#1}} % Sets \LetThereBe{\C}{\mathbb{C}} \LetThereBe{\R}{\mathbb{R}} \LetThereBe{\Z}{\mathbb{Z}} \LetThereBe{\N}{\mathbb{N}} \LetThereBe{\K}{\mathbb{K}} \LetThereBe{\im}{\mathrm{i}} % Sets from PDEs \LetThereBe{\boundaryOf}{\partial} \letThereBe{\closureOf}{1}{\overline{#1}} \letThereBe{\Contf}{1}{\mcal C^{#1}} \letThereBe{\contf}{2}{\Contf{#2}(#1)} \letThereBe{\compactContf}{2}{\mcal C_c^{#2}(#1)} \letThereBe{\ball}{2}{B\brackets{#1, #2}} \letThereBe{\closedBall}{2}{B\parentheses{#1, #2}} \LetThereBe{\compactEmbed}{\subset\subset} \letThereBe{\inside}{1}{#1^o} \LetThereBe{\neighborhood}{\mcal O} \letThereBe{\neigh}{1}{\neighborhood \brackets{#1}} % 
Basic notation - vectors and random variables \letThereBe{\vi}{1}{\boldsymbol{#1}} %vector or matrix \letThereBe{\dvi}{1}{\vi{\dot{#1}}} %differentiated vector or matrix \letThereBe{\vii}{1}{\mathbf{#1}} %if \vi doesn't work \letThereBe{\dvii}{1}{\vii{\dot{#1}}} %if \dvi doesn't work \letThereBe{\rnd}{1}{\mathup{#1}} %random variable \letThereBe{\vr}{1}{\mathbf{#1}} %random vector or matrix \letThereBe{\vrr}{1}{\boldsymbol{#1}} %random vector if \vr doesn't work \letThereBe{\dvr}{1}{\vr{\dot{#1}}} %differentiated vector or matrix \letThereBe{\vb}{1}{\pmb{#1}} %#TODO \letThereBe{\dvb}{1}{\vb{\dot{#1}}} %#TODO \letThereBe{\oper}{1}{\mathsf{#1}} \letThereBe{\quotient}{2}{{^{\displaystyle #1}}/{_{\displaystyle #2}}} % Basic notation - general \letThereBe{\set}{1}{\left\{#1\right\}} \letThereBe{\seqnc}{4}{\set{#1_{#2}}_{#2 = #3}^{#4}} \letThereBe{\Seqnc}{3}{\set{#1}_{#2}^{#3}} \letThereBe{\brackets}{1}{\left( #1 \right)} \letThereBe{\parentheses}{1}{\left[ #1 \right]} \letThereBe{\dom}{1}{\mcal{D}\, \brackets{#1}} \letThereBe{\complexConj}{1}{\overline{#1}} \LetThereBe{\divider}{\; \vert \;} \LetThereBe{\gets}{\leftarrow} \letThereBe{\rcases}{1}{\left.\begin{aligned}#1\end{aligned}\right\}} \letThereBe{\rcasesAt}{2}{\left.\begin{alignedat}{#1}#2\end{alignedat}\right\}} \letThereBe{\lcases}{1}{\begin{cases}#1\end{cases}} \letThereBe{\lcasesAt}{2}{\left\{\begin{alignedat}{#1}#2\end{alignedat}\right.} \letThereBe{\evaluateAt}{2}{\left.#1\right|_{#2}} \LetThereBe{\Mod}{\;\mathrm{mod}\;} \LetThereBe{\bigO}{O} \letThereBe{\BigO}{1}{\bigO\brackets{#1}} % Special symbols \LetThereBe{\const}{\mathrm{const}} \LetThereBe{\konst}{\mathrm{konst.}} \LetThereBe{\vf}{\varphi} \LetThereBe{\ve}{\varepsilon} \LetThereBe{\tht}{\theta} \LetThereBe{\Tht}{\Theta} \LetThereBe{\after}{\circ} \LetThereBe{\lmbd}{\lambda} \LetThereBe{\Lmbd}{\Lambda} % Shorthands \LetThereBe{\xx}{\vi x} \LetThereBe{\yy}{\vi y} \LetThereBe{\XX}{\vi X} \LetThereBe{\AA}{\vi A} \LetThereBe{\bb}{\vi b} 
\LetThereBe{\vvf}{\vi \vf} \LetThereBe{\ff}{\vi f} \LetThereBe{\gg}{\vi g} % Basic functions \letThereBe{\absval}{1}{\left| #1 \right|} \LetThereBe{\id}{\mathrm{id}} \letThereBe{\floor}{1}{\left\lfloor #1 \right\rfloor} \letThereBe{\ceil}{1}{\left\lceil #1 \right\rceil} \declareMathematics{\image}{im} %image \declareMathematics{\domain}{dom} %image \declareMathematics{\tg}{tg} \declareMathematics{\sign}{sign} \declareMathematics{\card}{card} %cardinality \letThereBe{\setSize}{1}{\left| #1 \right|} \LetThereBe{\countElements}{\#} \declareMathematics{\exp}{exp} \letThereBe{\Exp}{1}{\exp\brackets{#1}} \LetThereBe{\ee}{\mathrm{e}} \letThereBe{\indicator}{1}{\mathbb{I}_{#1}} \declareMathematics{\arccot}{arccot} \declareMathematics{\gcd}{gcd} % Greatest Common Divisor \declareMathematics{\lcm}{lcm} % Least Common Multiple \letThereBe{\limInfty}{1}{\lim_{#1 \to \infty}} \letThereBe{\limInftyM}{1}{\lim_{#1 \to -\infty}} % Useful commands \letThereBe{\onTop}{2}{\mathrel{\overset{#2}{#1}}} \letThereBe{\onBottom}{2}{\mathrel{\underset{#2}{#1}}} \letThereBe{\tOnTop}{2}{\mathrel{\overset{\text{#2}}{#1}}} \letThereBe{\tOnBottom}{2}{\mathrel{\underset{\text{#2}}{#1}}} \LetThereBe{\EQ}{\onTop{=}{!}} \LetThereBe{\letDef}{:=} %#TODO: change the symbol \LetThereBe{\isPDef}{\onTop{\succ}{?}} \LetThereBe{\inductionStep}{\tOnTop{=}{induct. 
step}} \LetThereBe{\fromDef}{\triangleq} % Optimization \declareMathematicsStar{\argmin}{argmin} \declareMathematicsStar{\argmax}{argmax} \letThereBe{\maxOf}{1}{\max\set{#1}} \letThereBe{\minOf}{1}{\min\set{#1}} \declareMathematics{\prox}{prox} \declareMathematics{\loss}{loss} \declareMathematics{\supp}{supp} \letThereBe{\Supp}{1}{\supp\brackets{#1}} \LetThereBe{\constraint}{\text{s.t.}\;} $$ $$ % Operators - Analysis \LetThereBe{\hess}{\nabla^2} \LetThereBe{\lagr}{\mcal L} \LetThereBe{\lapl}{\Delta} \declareMathematics{\grad}{grad} \declareMathematics{\divergence}{div} \declareMathematics{\Dgrad}{D} \LetThereBe{\gradient}{\nabla} \LetThereBe{\jacobi}{\nabla} \LetThereBe{\Jacobi}{\vi{\mathrm J}} \letThereBe{\jacobian}{2}{\Dgrad_{#1}\brackets{#2}} \LetThereBe{\d}{\mathrm{d}} \LetThereBe{\dd}{\,\mathrm{d}} \letThereBe{\partialDeriv}{2}{\frac {\partial #1} {\partial #2}} \letThereBe{\npartialDeriv}{3}{\partialDeriv{^{#1} #2} {#3^{#1}}} \letThereBe{\partialOp}{1}{\frac {\partial} {\partial #1}} \letThereBe{\npartialOp}{2}{\frac {\partial^{#1}} {\partial #2^{#1}}} \letThereBe{\pDeriv}{2}{\partialDeriv{#1}{#2}} \letThereBe{\npDeriv}{3}{\npartialDeriv{#1}{#2}{#3}} \letThereBe{\deriv}{2}{\frac {\d #1} {\d #2}} \letThereBe{\nderiv}{3}{\frac {\d^{#1} #2} {\d #3^{#1}}} \letThereBe{\derivOp}{1}{\frac {\d} {\d #1}\,} \letThereBe{\nderivOp}{2}{\frac {\d^{#1}} {\d #2^{#1}}\,} % Convergence \LetThereBe{\pointwiseTo}{\to} \LetThereBe{\uniformlyTo}{\rightrightarrows} \LetThereBe{\normallyTo}{\tOnTop{\longrightarrow}{norm}} \LetThereBe{\compactlyTo}{\tOnTop{\longrightarrow}{comp.}} \LetThereBe{\locallyUnifTo}{\tOnTop{\longrightarrow}{l.u.}} % Curves \letThereBe{\graphOf}{1}{\parentheses{#1}} \declareMathematics{\interior}{Int} % complex \LetThereBe{\Cinfty}{\tilde{\C}} \declareMathematics{\residual}{res} \letThereBe{\resAt}{1}{\residual_{#1}} \declareMathematics{\complexarg}{arg} \declareMathematics{\complexArg}{Arg} \LetThereBe{\carg}{\complexarg} \LetThereBe{\cArg}{\complexArg} 
\LetThereBe{\IM}{\mathfrak{Im}} \LetThereBe{\RE}{\mathfrak{Re}} \letThereBe{\imOf}{1}{\IM\,#1} \letThereBe{\reOf}{1}{\RE\,#1} \letThereBe{\ImOf}{1}{\IM \brackets{#1}} \letThereBe{\ReOf}{1}{\RE \brackets{#1}} $$ $$ % Linear algebra \letThereBe{\norm}{1}{\left\lVert #1 \right\rVert} \letThereBe{\seminorm}{1}{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \letThereBe{\scal}{2}{\left\langle #1, #2 \right\rangle} \letThereBe{\avg}{1}{\overline{#1}} \letThereBe{\Avg}{1}{\bar{#1}} \letThereBe{\linspace}{1}{\mathrm{lin}\set{#1}} \letThereBe{\algMult}{1}{\mu_{\mathrm A} \brackets{#1}} \letThereBe{\geomMult}{1}{\mu_{\mathrm G} \brackets{#1}} \LetThereBe{\Nullity}{\mathrm{nullity}} \letThereBe{\nullity}{1}{\Nullity \brackets{#1}} \LetThereBe{\nulty}{\nu} \declareMathematics{\SpanOf}{span} \letThereBe{\Span}{1}{\SpanOf\set{#1}} \LetThereBe{\projection}{\Pi} % Linear algebra - Matrices \LetThereBe{\tr}{\top} \LetThereBe{\Tr}{^\tr} \LetThereBe{\pinv}{\dagger} \LetThereBe{\Pinv}{^\dagger} \LetThereBe{\Inv}{^{-1}} \LetThereBe{\ident}{\vi{I}} \letThereBe{\mtr}{1}{\begin{pmatrix}#1\end{pmatrix}} \letThereBe{\bmtr}{1}{\begin{bmatrix}#1\end{bmatrix}} \declareMathematics{\trace}{tr} \declareMathematics{\diagonal}{diag} \declareMathematics{\rank}{rank} % Multilinear algebra \LetThereBe{\tensorProd}{\otimes} \LetThereBe{\tprod}{\tensorProd} \LetThereBe{\extProd}{\wedge} \LetThereBe{\wdg}{\extProd} \LetThereBe{\wedges}{\wedge \dots \wedge} \declareMathematics{\altMap}{Alt} $$ $$ % Statistics \LetThereBe{\iid}{\overset{\text{i.i.d.}}{\sim}} \LetThereBe{\ind}{\overset{\text{ind}}{\sim}} \LetThereBe{\condp}{\,\vert\,} \letThereBe{\complementOf}{1}{{{#1}^c}} \LetThereBe{\acov}{\gamma} \LetThereBe{\acf}{\rho} \LetThereBe{\stdev}{\sigma} \LetThereBe{\procMean}{\mu} \LetThereBe{\procVar}{\stdev^2} \declareMathematics{\variance}{var} \letThereBe{\Variance}{1}{\variance \brackets{#1}} \declareMathematics{\cov}{cov} 
\declareMathematics{\corr}{cor} \letThereBe{\sampleVar}{1}{\rnd S^2_{#1}} \letThereBe{\populationVar}{1}{V_{#1}} \declareMathematics{\expectedValue}{\mathbb{E}} \declareMathematics{\rndMode}{Mode} \letThereBe{\RndMode}{1}{\rndMode\brackets{#1}} \letThereBe{\expect}{1}{\expectedValue #1} \letThereBe{\Expect}{1}{\expectedValue \brackets{#1}} \letThereBe{\expectIn}{2}{\expectedValue_{#1} #2} \letThereBe{\ExpectIn}{2}{\expectedValue_{#1} \brackets{#2}} \LetThereBe{\betaF}{\mathrm B} \LetThereBe{\fisherMat}{J} \LetThereBe{\mutInfo}{I} \LetThereBe{\expectedGain}{I_e} \letThereBe{\KLDiv}{2}{D\brackets{#1 \parallel #2}} \LetThereBe{\entropy}{H} \LetThereBe{\diffEntropy}{h} \LetThereBe{\probF}{\pi} \LetThereBe{\densF}{\vf} \LetThereBe{\att}{_t} %at time \letThereBe{\estim}{1}{\hat{#1}} \letThereBe{\estimML}{1}{\hat{#1}_{\mathrm{ML}}} \letThereBe{\estimOLS}{1}{\hat{#1}_{\mathrm{OLS}}} \letThereBe{\estimMAP}{1}{\hat{#1}_{\mathrm{MAP}}} \letThereBe{\predict}{3}{\estim {\rnd #1}_{#2 | #3}} \letThereBe{\periodPart}{3}{#1+#2-\ceil{#2/#3}#3} \letThereBe{\infEstim}{1}{\tilde{#1}} \letThereBe{\predictDist}{1}{{#1}^*} \LetThereBe{\backs}{\oper B} \LetThereBe{\diff}{\oper \Delta} \LetThereBe{\BLP}{\oper P} \LetThereBe{\arPoly}{\Phi} \letThereBe{\ArPoly}{1}{\arPoly\brackets{#1}} \LetThereBe{\maPoly}{\Theta} \letThereBe{\MaPoly}{1}{\maPoly\brackets{#1}} \letThereBe{\ARmod}{1}{\mathrm{AR}\brackets{#1}} \letThereBe{\MAmod}{1}{\mathrm{MA}\brackets{#1}} \letThereBe{\ARMA}{2}{\mathrm{ARMA}\brackets{#1, #2}} \letThereBe{\sARMA}{3}{\mathrm{ARMA}\brackets{#1}\brackets{#2}_{#3}} \letThereBe{\SARIMA}{3}{\mathrm{ARIMA}\brackets{#1}\brackets{#2}_{#3}} \letThereBe{\ARIMA}{3}{\mathrm{ARIMA}\brackets{#1, #2, #3}} \LetThereBe{\pacf}{\alpha} \letThereBe{\parcorr}{3}{\rho_{#1 #2 | #3}} \LetThereBe{\noise}{\mathscr{N}} \LetThereBe{\jeffreys}{\mathcal J} \LetThereBe{\likely}{\mcal L} \letThereBe{\Likely}{1}{\likely\brackets{#1}} \LetThereBe{\loglikely}{\mcal l} \letThereBe{\Loglikely}{1}{\loglikely 
\brackets{#1}} \LetThereBe{\CovMat}{\Gamma} \LetThereBe{\covMat}{\vi \CovMat} \LetThereBe{\rcovMat}{\vrr \CovMat} \LetThereBe{\AIC}{\mathrm{AIC}} \LetThereBe{\BIC}{\mathrm{BIC}} \LetThereBe{\AICc}{\mathrm{AIC}_c} \LetThereBe{\nullHypo}{H_0} \LetThereBe{\altHypo}{H_1} \LetThereBe{\rve}{\rnd \ve} \LetThereBe{\rtht}{\rnd \theta} \LetThereBe{\rX}{\rnd X} \LetThereBe{\rY}{\rnd Y} \LetThereBe{\rZ}{\rnd Z} \LetThereBe{\rA}{\rnd A} \LetThereBe{\rB}{\rnd B} \LetThereBe{\vrZ}{\vr Z} \LetThereBe{\vrY}{\vr Y} \LetThereBe{\vrX}{\vr X} \LetThereBe{\rW}{\rnd W} \LetThereBe{\rS}{\rnd S} \LetThereBe{\rM}{\rnd M} \LetThereBe{\rtau}{\rnd \tau} % Bayesian inference \LetThereBe{\paramSet}{\mcal T} \LetThereBe{\sampleSet}{\mcal Y} \LetThereBe{\bayesSigmaAlg}{\mcal B} % Different types of convergence \LetThereBe{\inDist}{\onTop{\to}{d}} \letThereBe{\inDistWhen}{1}{\onBottom{\onTop{\longrightarrow}{d}}{#1}} \LetThereBe{\inProb}{\onTop{\to}{P}} \letThereBe{\inProbWhen}{1}{\onBottom{\onTop{\longrightarrow}{P}}{#1}} \LetThereBe{\inMeanSq}{\onTop{\to}{\ltwo}} \LetThereBe{\inltwo}{\onTop{\to}{\ltwo}} \letThereBe{\inMeanSqWhen}{1}{\onBottom{\onTop{\longrightarrow}{\ltwo}}{#1}} \LetThereBe{\convergeAS}{\tOnTop{\to}{a.s.}} \letThereBe{\convergeASWhen}{1}{\onBottom{\tOnTop{\longrightarrow}{a.s.}}{#1}} % Asymptotic qualities \LetThereBe{\simAsymp}{\tOnTop{\sim}{as.}} % Stochastic analysis \letThereBe{\diffOn}{2}{\diff #1_{[#2]}} % \LetThereBe{\timeSet}{\Theta} \LetThereBe{\eventSet}{\Omega} \LetThereBe{\filtration}{\mcal F} % TODO: Rename allFiltrations and the like \letThereBe{\allFiltrations}{1}{\set{\filtration_t}_{#1}} \letThereBe{\natFilter}{1}{\filtration_t^{#1}} \letThereBe{\NatFilter}{2}{\filtration_{#2}^{#1}} \letThereBe{\filterAll}{1}{\set{#1}_{t \geq 0}} \letThereBe{\FilterAll}{2}{\set{#1}_{#2}} \LetThereBe{\borelAlgebra}{\mcal B} \LetThereBe{\sAlgebra}{\mcal A} \LetThereBe{\quadVar}{Q} \LetThereBe{\totalVar}{V} \LetThereBe{\adaptIntProcs}{\mcal M} \letThereBe{\reflectProc}{2}{#1^{#2}} 
$$ $$ % Distributions \letThereBe{\WN}{2}{\mathrm{WN}\brackets{#1,#2}} \declareMathematics{\uniform}{Unif} \declareMathematics{\binomDist}{Bi} \declareMathematics{\negbinomDist}{NBi} \declareMathematics{\betaDist}{Beta} \declareMathematics{\betabinomDist}{BetaBin} \declareMathematics{\gammaDist}{Gamma} \declareMathematics{\igammaDist}{IGamma} \declareMathematics{\invgammaDist}{IGamma} \declareMathematics{\expDist}{Ex} \declareMathematics{\poisDist}{Po} \declareMathematics{\erlangDist}{Er} \declareMathematics{\altDist}{A} \declareMathematics{\geomDist}{Ge} \LetThereBe{\normalDist}{\mathcal N} %\declareMathematics{\normalDist}{N} \letThereBe{\normalD}{1}{\normalDist \brackets{#1}} \letThereBe{\mvnormalD}{2}{\normalDist_{#1} \brackets{#2}} \letThereBe{\NormalD}{2}{\normalDist \brackets{#1, #2}} \LetThereBe{\lognormalDist}{\log\normalDist} $$ $$ % Game Theory \LetThereBe{\doms}{\succ} \LetThereBe{\isdom}{\prec} \letThereBe{\OfOthers}{1}{_{-#1}} \LetThereBe{\ofOthers}{\OfOthers{i}} \LetThereBe{\pdist}{\sigma} \letThereBe{\domGame}{1}{G_{DS}^{#1}} \letThereBe{\ratGame}{1}{G_{Rat}^{#1}} \letThereBe{\bestRep}{2}{\mathrm{BR}_{#1}\brackets{#2}} \letThereBe{\perf}{1}{{#1}_{\mathrm{perf}}} \LetThereBe{\perfG}{\perf{G}} \letThereBe{\imperf}{1}{{#1}_{\mathrm{imp}}} \LetThereBe{\imperfG}{\imperf{G}} \letThereBe{\proper}{1}{{#1}_{\mathrm{proper}}} \letThereBe{\finrep}{2}{{#2}_{#1{\text -}\mathrm{rep}}} %T-stage game \letThereBe{\infrep}{1}{#1_{\mathrm{irep}}} \LetThereBe{\repstr}{\tau} %strategy in a repeated game \LetThereBe{\emptyhist}{\epsilon} \letThereBe{\extrep}{1}{{#1^{\mathrm{rep}}}} \letThereBe{\avgpay}{1}{#1^{\mathrm{avg}}} \LetThereBe{\succf}{\pi} %successor function \LetThereBe{\playf}{\rho} %player function \LetThereBe{\actf}{\chi} %action function $$ $$ \LetThereBe{\fourierOp}{\mcal{F}} \letThereBe{\fourier}{1}{\widehat{#1}} \letThereBe{\ifourier}{1}{\check{#1}} % Shortcuts \letThereBe{\FT}{1}{\fourier{#1}} \letThereBe{\iFT}{1}{\ifourier{#1}} 
\LetThereBe{\FTOp}{\fourierOp} \LetThereBe{\lspace}{\mcal L} \LetThereBe{\lone}{\lspace^{1}} \letThereBe{\Lone}{1}{\lone\brackets{#1}} \LetThereBe{\ltwo}{\lspace^2} \letThereBe{\Ltwo}{1}{\ltwo\brackets{#1}} \letThereBe{\lp}{1}{\lspace^{#1}} \letThereBe{\Lp}{2}{\lp{#1}\brackets{#2}} \LetThereBe{\linfty}{\lspace^{\infty}} \letThereBe{\Linfty}{1}{\linfty\brackets{#1}} \LetThereBe{\ltwoEq}{\onTop{=}{\ltwo}} \letThereBe{\decayContf}{1}{\mcal C_0\brackets{#1}} \letThereBe{\cinftyContf}{1}{\mcal C^{\infty}_c\brackets{#1}} \LetThereBe{\unitBall}{\mathbb{S}} \LetThereBe{\unitBallSurface}{S} \LetThereBe{\gammaF}{\Gamma} \letThereBe{\GammaF}{1}{\gammaF\brackets{#1}} \LetThereBe{\betaF}{B} \letThereBe{\BetaF}{1}{\betaF\brackets{#1}} \LetThereBe{\ofOrder}{\asymp} \declareMathematics{\vol}{vol} \LetThereBe{\holomorphic}{\mcal H} \LetThereBe{\hlmr}{\holomorphic} \LetThereBe{\schwartz}{\mcal S} \LetThereBe{\testSpace}{\mcal D} \letThereBe{\TestSpace}{1}{\testSpace\brackets{#1}} \LetThereBe{\bumpf}{\psi} \letThereBe{\Bumpf}{1}{\bumpf\brackets{#1}} \letThereBe{\reverse}{1}{\check{#1}} \LetThereBe{\translationOp}{\mcal{T}} \letThereBe{\translateBy}{1}{\translationOp_{#1}} \LetThereBe{\flcOp}{\mathscr{F}} \letThereBe{\flc}{1}{\flcOp\brackets{#1}} \LetThereBe{\hodge}{*} \LetThereBe{\tangent}{\mathrm{T}} \letThereBe{\tangentAt}{1}{\tangent_{#1}} \letThereBe{\lieBracket}{2}{\left[#1, #2\right]} \declareMathematics{\degree}{deg} \LetThereBe{\forms}{\mathscr{F}} \letThereBe{\kforms}{1}{\forms^{#1}} \letThereBe{\formsOver}{1}{\forms\brackets{#1}} \letThereBe{\kformsOver}{2}{\kforms{#1}\brackets{#2}} \letThereBe{\pullback}{1}{{#1}^*} \LetThereBe{\atlas}{\mcal A} \LetThereBe{\adjd}{\oper \delta} \LetThereBe{\connect}{\nabla} \LetThereBe{\christoffel}{\Gamma} \letThereBe{\lengthOf}{1}{\mathcal{l}\brackets{#1}} \LetThereBe{\HOT}{\mathrm{h.o.t.}} $$ $$ % ODEs \LetThereBe{\timeInt}{\mcal I} \LetThereBe{\stimeInt}{\mcal J} \LetThereBe{\Wronsk}{\mcal W} \letThereBe{\wronsk}{1}{\Wronsk 
\parentheses{#1}} \LetThereBe{\prufRadius}{\rho} \LetThereBe{\prufAngle}{\vf} \LetThereBe{\weyr}{\sigma} \LetThereBe{\linDifOp}{\mathsf{L}} \LetThereBe{\Hurwitz}{\vi H} \letThereBe{\hurwitz}{1}{\Hurwitz \brackets{#1}} % Cont. Models \LetThereBe{\dirac}{\delta} \LetThereBe{\torus}{\mathbb{T}} % PDEs % \avint -- defined in format-respective tex files \LetThereBe{\fundamental}{\Phi} \LetThereBe{\fund}{\fundamental} \letThereBe{\normaDeriv}{1}{\partialDeriv{#1}{\vec{n}}} \letThereBe{\volAvg}{2}{\avint_{\ball{#1}{#2}}} \LetThereBe{\VolAvg}{\volAvg{x}{\ve}} \letThereBe{\surfAvg}{2}{\avint_{\boundaryOf \ball{#1}{#2}}} \LetThereBe{\SurfAvg}{\surfAvg{x}{\ve}} \LetThereBe{\corrF}{\varphi^{\times}} \LetThereBe{\greenF}{G} \letThereBe{\reflect}{1}{\tilde{#1}} \LetThereBe{\conv}{*} \letThereBe{\dotP}{2}{#1 \cdot #2} \letThereBe{\translation}{1}{\tau_{#1}} \declareMathematics{\dist}{dist} \letThereBe{\regularizef}{1}{\eta_{#1}} \letThereBe{\fourier}{1}{\widehat{#1}} \letThereBe{\ifourier}{1}{\check{#1}} \LetThereBe{\fourierOp}{\mcal F} \LetThereBe{\ifourierOp}{\mcal F^{-1}} \letThereBe{\FourierOp}{1}{\fourierOp\set{#1}} \letThereBe{\iFourierOp}{1}{\ifourierOp\set{#1}} \LetThereBe{\laplaceOp}{\mcal L} \letThereBe{\LaplaceOp}{1}{\laplaceOp\set{#1}} \letThereBe{\Norm}{1}{\absval{#1}} % SINDy \LetThereBe{\Koop}{\mcal K} \letThereBe{\oneToN}{1}{\left[#1\right]} \LetThereBe{\meas}{\mathrm{m}} \LetThereBe{\stateLoss}{\mcal J} \LetThereBe{\lagrm}{p} % Stochastic analysis \LetThereBe{\RiemannInt}{(\mcal R)} \LetThereBe{\RiemannStieltjesInt}{(\mcal {R_S})} \LetThereBe{\LebesgueInt}{(\mcal L)} \LetThereBe{\ItoInt}{(\mcal I)} \LetThereBe{\Stratonovich}{\circ} \LetThereBe{\infMean}{\alpha} \LetThereBe{\infVar}{\beta} % Dynamical systems \LetThereBe{\nUnit}{\mathrm N} \LetThereBe{\timeUnit}{\mathrm T} % Masters thesis \LetThereBe{\evolOp}{\oper{\vf}} \letThereBe{\obj}{1}{\mathbb{#1}} \LetThereBe{\timeSet}{\obj T} \LetThereBe{\stateSpace}{\obj X} \LetThereBe{\contStateSpace}{\stateSpace_{C}} 
\LetThereBe{\orbit}{Or} \letThereBe{\Orbit}{1}{\orbit\brackets{#1}} \LetThereBe{\limitSet}{\obj \Lambda} \LetThereBe{\crossSection}{\obj \Sigma} \declareMathematics{\codim}{codim} % Left and right closed-or-open intervals \LetThereBe{\lco}{\langle} \LetThereBe{\rco}{\rangle} \letThereBe{\testInt}{1}{\mathrm{Int}_{#1}} \letThereBe{\evalOp}{1}{\oper{\eta}_{#1}} \LetThereBe{\nonzeroEl}{\bullet} \LetThereBe{\zeroEl}{\circ} \LetThereBe{\solOp}{\oper{S}} \LetThereBe{\infGen}{\oper{A}} \LetThereBe{\indexSet}{\mcal I} \letThereBe{\indicesOf}{1}{\indexSet\parentheses{#1}} \letThereBe{\IndicesOf}{2}{\indexSet_{#2}\parentheses{#1}} \LetThereBe{\meshGrid}{\obj M} \declareMathematics{\starter}{starter} \declareMathematics{\indexer}{indx} \declareMathematics{\enumerator}{enum} \LetThereBe{\inSS}{_{\infty}} \LetThereBe{\manifold}{\mcal M} \LetThereBe{\curve}{\mcal C} % Numerical methods \declareMathematics{\globErr}{err} \declareMathematics{\locErr}{le} \declareMathematics{\locTrErr}{lte} \declareMathematics{\estimErr}{est} \declareMathematics{\incrementFunc}{Inc} \letThereBe{\incrementF}{1}{\incrementFunc \brackets{#1}} \LetThereBe{\discreteNodes}{\mcal T} \LetThereBe{\stableFunc}{R} \letThereBe{\stableF}{1}{\stableFunc\brackets{#1}} \LetThereBe{\stableRegion}{\Omega} %Stochastic analysis \LetThereBe{\RiemannInt}{(\mcal R)} \LetThereBe{\RiemannStieltjesInt}{(\mcal {R_S})} \LetThereBe{\LebesgueInt}{(\mcal L)} \LetThereBe{\ItoInt}{(\mcal I)} \LetThereBe{\Stratonovich}{\circ} \LetThereBe{\infMean}{\alpha} \LetThereBe{\infVar}{\beta} %Optimization \LetThereBe{\goldRatio}{\tau} %Interpolation \LetThereBe{\lagrPoly}{l} $$

Definition 1.1 (Fourier transform) Let \(f \in \lone(\R)\). Then \[ \fourier{f}(t) \letDef \int_{-\infty}^\infty f(x) \ee^{-2 \pi \im t x} \dd x \] for \(t \in \R\) is called the Fourier transform of \(f\).

Remark 1.1. It follows from \(f \in \lone\) that \(\fourier{f}(t)\) is defined for every \(t \in \R\).

Let us note there are different conventions one can use for the Fourier transform, namely \[ \int_{-\infty}^\infty f(x) \ee^{- \im t x} \dd x, \quad \text{or} \quad \frac 1 {\sqrt{2 \pi}} \int_{-\infty}^\infty f(x) \ee^{- \im t x} \dd x, \] though these differ from one another only by a normalization and a change of variables.
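As a purely illustrative numerical sketch (the test function and quadrature grid are arbitrary choices, not part of the text), the defining integral can be approximated by a midpoint Riemann sum; for the box function \(\chi_{[-1/2, 1/2]}\) a direct computation gives \(\fourier{\chi}(t) = \frac{\sin(\pi t)}{\pi t}\), which the approximation reproduces:

```python
import math, cmath

def fourier_transform(f, t, a=-1.0, b=1.0, n=20_000):
    # midpoint Riemann sum for \int_a^b f(x) e^{-2 pi i t x} dx
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * cmath.exp(-2j * math.pi * t * (a + (k + 0.5) * h))
               for k in range(n)) * h

# box function 1_{[-1/2, 1/2]}; its transform is sin(pi t) / (pi t)
box = lambda x: 1.0 if abs(x) <= 0.5 else 0.0
print(abs(fourier_transform(box, 0.0) - 1.0))                                        # ~0
print(abs(fourier_transform(box, 0.7) - math.sin(0.7 * math.pi) / (0.7 * math.pi)))  # ~0
```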

Before studying this concept further, some motivation is due. As will become clear later, the Fourier transform turns complicated operations on functions into simpler operations on their transforms. Indeed, consider \(f, f' \in \lone(\R)\); then integration by parts gives \[ \begin{aligned} \int_{-\infty}^\infty f'(x) \ee^{-2 \pi \im t x} \dd x &= \overbrace{f(x) \ee^{-2 \pi \im t x} \vert_{-\infty}^\infty}^{0} + 2 \pi \im t \int_{-\infty}^\infty f(x) \ee^{-2 \pi \im t x} \dd x\\ &= 2\pi \im t \fourier{f}(t). \end{aligned} \tag{1.1}\] This, however, introduces the need for an “inverse” transform, where it turns out one can use \[ f(x) = \int_{-\infty}^\infty \fourier{f}(t) \ee^{2 \pi \im t x} \dd t. \] Intuitively, we decompose an \(\lone\)-function into a collection of corresponding oscillations.
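Identity (1.1) lends itself to a quick numerical check. The sketch below (with \(f\) a Gaussian, so that \(f, f' \in \lone(\R)\); the quadrature window and evaluation point are arbitrary choices) compares both sides:

```python
import math, cmath

def ft(f, t, a=-8.0, b=8.0, n=40_000):
    # midpoint-rule approximation of the Fourier transform at t
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * cmath.exp(-2j * math.pi * t * (a + (k + 0.5) * h))
               for k in range(n)) * h

f  = lambda x: math.exp(-math.pi * x * x)                     # Gaussian, f in L^1
df = lambda x: -2 * math.pi * x * math.exp(-math.pi * x * x)  # its derivative, also in L^1

t = 0.7
lhs = ft(df, t)                    # transform of f'
rhs = 2j * math.pi * t * ft(f, t)  # 2 pi i t times the transform of f
print(abs(lhs - rhs))  # ~0
```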

Note: Decay of \(\lone\) functions

Recall that in the computation (1.1) we used the fact that \(f, f' \in \lone(\R)\) to evaluate the boundary term \(f(x) \ee^{-2 \pi \im t x} \vert_{-\infty}^\infty\) to \(0\). As \(\absval{\ee^{-2 \pi \im t x}} = 1\), this raises the question of whether for such \(f\) it indeed holds that \(\lim_{\absval{x} \to \infty} f(x) = 0\).

In the spirit of this question, let us take \(f, f' \in \lone(\R)\) and assume there exists a sequence \((x_n)_{n = 1}^{\infty}\) such that \(x_n \to \infty\) as \(n \to \infty\) with \(\absval{f(x_n)} \to B > 0\). In other words, we assume \(\absval{f}\) either has a positive limit \(B\) at infinity or at least that \(B\) is an accumulation point of \(\absval{f}\) if it does not admit a limit. Firstly, notice that the existence of \(f' \in \lone(\R)\) implies \(f\) is continuous and differentiable (but not necessarily continuously differentiable). Furthermore, let \(B > \ve > 0\) be chosen arbitrarily; then by continuity of \(f\) there exists a sequence of disjoint intervals \(\brackets{(a_n, b_n)}_{n = 1}^\infty\) such that for all \(n \in \N\) it holds that

  • \(b_n < a_{n+1} < b_{n+1}\) (equivalent to the intervals being disjoint);
  • \(x_n \in (a_n, b_n)\);
  • \(\forall x \in (a_n, b_n)\) we have that \(\absval{f(x)} \geq \ve > 0\).

Taking \(d_n \letDef b_n - a_n\), we get from \(f \in \lone(\R)\) that \(d_n \to 0\) as \(n \to \infty\), since \(\sum_{n} \ve \, d_n \leq \norm{f}_1 < \infty\) (and everywhere else the function must tend to \(0\)).

We have assumed \(\absval{f(x_n)} \to B\), hence for any \(\sigma > 0\) there exists an index \(N_0 \in \N\) such that for \(k > N_0\) we have \(\absval{\absval{f(x_k)} - B} < \sigma\). However, using \(f \in \Lone{\R}\), for any \(\ve > \delta > 0\) there must also exist some \(y \in \R\) such that \(\absval{f(x)} < \delta\) for \(x \in (y, \infty) \setminus \bigcup_{n = 1}^\infty (a_n, b_n)\).

Denote by \(N > N_0\) an index for which \(x_N > y\). Now, for each \(k > N\) there also exists an \(m_k \in (a_k, b_k)\) such that \(\absval{f}\) is increasing on \((a_k, m_k)\) (and this \(m_k\) is maximal). This implies that (since \(f\) is differentiable) \[ \underbrace{B - \sigma - \ve}_C \leq \absval{f(m_k)} - \absval{f(a_k)} \leq \absval{f(m_k) - f(a_k)} = \absval{\int_{a_k}^{m_k} f'(s) \dd s} \leq \int_{a_k}^{m_k} \absval{f'(s)} \dd s. \] Finally, by \(f' \in \lone(\R)\), \[ \infty > \int_{\R} \absval{f'(s)} \dd s \geq \sum_{k = N + 1}^\infty \int_{a_k}^{m_k} \absval{f'(s)} \dd s \geq \sum_{k = N+1}^\infty C = \infty, \] which is a contradiction. Therefore, in this setting of \(f, f' \in \lone(\R)\) the function \(f\) must decay point-wise to \(0\) at infinity.

For completeness’ sake, let us mention that one can consider compactly supported truncations of \(f\) and \(f'\), i.e., for \(\chi_{A} \letDef \chi_{[-A, A]}\) we have \(f\chi_A \onBottom{\longrightarrow}{A \to \infty} f\) (and similarly for \(f'\)), which makes the boundary term vanish without requiring the point-wise limits of \(f\) at infinity to be zero.

Lastly, recall that for Fourier series of 1-periodic functions \(f \in \lone(\R / \Z)\) we have \[ f(x) \sim \sum_{k \in \Z} a_k \ee^{2\pi \im k x}, \] where the coefficients \(a_k\) solve the following \(\ltwo\)-problem: \[ \int_0^1 \absval{f(x) - \sum_{k = -n}^n a_k \ee^{2 \pi \im k x} }^2 \dd x \] is minimal for all \(n\). In other words, the coefficients \(a_k\) must be chosen such that \(\sum_{k = -n}^n a_k \ee^{2 \pi \im k x}\) is the projection of \(f\) onto this finite-dimensional subspace. One can prove convergence in the \(\ltwo\)-sense, although not necessarily point-wise.
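The optimal coefficients are \(a_k = \int_0^1 f(x) \ee^{-2\pi \im k x} \dd x\), and they can be approximated numerically. As a hedged sketch, for the arbitrarily chosen \(f(x) = x\) integration by parts yields \(a_0 = \frac 1 2\) and \(a_k = \frac{\im}{2 \pi k}\) for \(k \neq 0\):

```python
import math, cmath

n = 20_000
h = 1.0 / n

def coeff(f, k):
    # midpoint rule for a_k = \int_0^1 f(x) e^{-2 pi i k x} dx
    return h * sum(f((j + 0.5) * h) * cmath.exp(-2j * math.pi * k * (j + 0.5) * h)
                   for j in range(n))

f = lambda x: x  # sawtooth on [0, 1)
print(abs(coeff(f, 0) - 0.5))                     # ~0
print(abs(coeff(f, 3) - 1j / (2 * math.pi * 3)))  # ~0
```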

Definition 1.2 (Convolution) Let \(f,g \in \lone(\R)\), then \[ f \conv g (x) \letDef \int_{-\infty}^\infty f(x - y) g(y) \dd y \] is called the convolution of \(f\) and \(g\).

Here we note that the point-wise product of \(f,g \in \lone(\R)\) need not be an \(\lone\) function, but the convolution is, as the following theorem shows.

Theorem 1.1 Let \(f,g\in \lone(\R)\). Then \[ \int_{-\infty}^\infty \absval{f(x - y)} \absval{g(y)} \dd y < \infty \] for almost every \(x \in \R\). Furthermore, \(f \conv g \in \lone(\R)\) and \(\norm{f \conv g}_1 \leq \norm{f}_1 \cdot \norm{g}_1\).

Proof. Notice that \(F(x,y) = f(x - y)g(y)\) is measurable if \(f,g\) are measurable (which, in turn, follows from \(f,g \in \lone\)). Thus, \[\begin{align*} \int_{\R} \brackets{\int_{\R} \absval{F(x,y)} \dd x} \dd y &= \int_{\R} \bigg(\overbrace{\int_{\R} \absval{f(x-y)} \dd x}^{\norm{f}_1} \absval{g(y)} \bigg) \dd y \\ &= \norm{f}_1 \cdot \norm{g}_1 < \infty \end{align*}\] by Fubini’s theorem. Therefore \(F \in \lone(\R^2)\), and \[ \int_{\R} \absval{F(x,y)} \dd y = \int_{\R} \absval{f(x-y)} \absval{g(y)} \dd y \tOnTop{<}{a.e.} \infty. \] Finally, \[ \norm{f \conv g}_1 = \int_{\R} \absval{\int_{\R} f(x-y) g(y) \dd y} \dd x \leq \int_{\R}\int_{\R} \absval{F(x,y)} \dd y \dd x = \norm{f}_1 \cdot \norm{g}_1. \]
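Theorem 1.1 can also be illustrated numerically. The sketch below discretises the convolution of a Gaussian with itself on a grid (window and step size are arbitrary choices; for a non-negative function the inequality holds with equality up to truncation error):

```python
import math

# grid discretisation of f * f for the Gaussian (tails are negligible on [-4, 4])
L, n = 4.0, 400
h = 2 * L / n
xs = [-L + (k + 0.5) * h for k in range(n)]

f = lambda x: math.exp(-math.pi * x * x)  # ||f||_1 = 1

conv = [h * sum(f(x - y) * f(y) for y in xs) for x in xs]  # (f * f)(x) on the grid
norm_conv = h * sum(abs(c) for c in conv)
norm_f = h * sum(abs(f(x)) for x in xs)
print(norm_conv, norm_f * norm_f)  # both ~1, consistent with ||f*g||_1 <= ||f||_1 ||g||_1
```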

As a reminder, we state that \((\lone(\R), +, \conv)\) is a Banach algebra, i.e., the “multiplication” \(\conv\) (commutative, distributive, associative) is compatible with the norm in the sense \(\norm{f \conv g}_1 \leq \norm{f}_1 \cdot \norm{g}_1\), which follows from Theorem 1.1.

Remark 1.2 (Properties of the Fourier transform).

  1. \(g(x) = f(x) \ee^{2 \pi \im \alpha x}\), then \(\fourier{g}(t) = \fourier{f}(t - \alpha)\);
  2. \(g(x) = f(x - \alpha)\), then \(\fourier{g}(t) = \fourier{f}(t) \ee^{-2\pi \im \alpha t}\);
  3. \(f,g \in \lone(\R)\), then \(\fourier{f \conv g} (t) = \fourier{f} (t) \cdot \fourier{g}(t)\);
  4. \(g(x) = \complexConj{f(-x)}\), then \(\fourier{g}(t) = \complexConj{\fourier{f}(t)}\);
  5. \(g(x) = f\brackets{x \over \lmbd}\) for \(\lmbd > 0\), then \(\fourier{g}(t) = \lmbd \fourier{f}(\lmbd t)\);
  6. \(g(x) = - 2 \pi \im x f(x)\) such that \(g \in \lone(\R)\), then \(\fourier{g}(t) = (\fourier{f})'(t)\).

Proof. Properties 1., 2., 4. and 5. follow directly from a change of variables. Let us first show the third property. Take \(f,g \in \lone(\R)\), then \[\begin{align*} \fourier{f \conv g}(t) &= \int_{\R} \int_{\R} f(x - y) g(y) \dd y \; \ee^{-2 \pi \im t x} \dd x \\ &\tOnTop{=}{Fubini} \int_{\R} \underbrace{\int_{\R} f(x - y) \ee^{-2 \pi \im t (x - y)} \dd x}_{\fourier{f}(t)} \; g(y) \ee^{-2 \pi \im t y} \dd y \\ &= \fourier{f}(t) \cdot \fourier{g}(t). \end{align*}\] Now let us turn our attention to 6., where we take \(g(x) = - 2 \pi \im x f(x) \in \lone(\R)\). To prove the desired equality, consider the following \[ \frac{\fourier{f}(s) - \fourier{f}(t)}{s - t} = \int_{\R} f(x) \frac {\ee^{-2 \pi \im s x} - \ee^{-2 \pi \im t x}}{s - t} \dd x. \] Re-arranging \(\ee^{-2 \pi \im s x} - \ee^{-2 \pi \im t x}\) yields \[\begin{align*} \ee^{-2 \pi \im s x} - \ee^{-2 \pi \im t x} &= \ee^{-\pi \im (s+t) x} \brackets{\ee^{- \pi \im (s - t)x} - \ee^{- \pi \im (t - s) x}}\\ &= 2 \im \sin(-\pi (s - t) x) \ee^{- \pi \im (s+t) x}, \end{align*}\] thus \[ \absval{\frac {\ee^{-2 \pi \im s x} - \ee^{-2 \pi \im t x}}{s - t}} = \frac{\overbrace{\absval{2 \im \sin(-\pi (s - t) x)}}^{\leq 2 \pi \absval{s - t} \absval{x}} \overbrace{\absval{\ee^{- \pi \im (s+t) x}}}^{{}=1}}{\absval{s - t}} \leq 2 \pi |x|. \] Hence \[ \int_{\R} \absval{f(x)} \absval{\frac {\ee^{-2 \pi \im s x} - \ee^{-2 \pi \im t x}}{s - t}} \dd x \leq \int_{\R} \underbrace{\absval{f(x)} \cdot 2 \pi \absval{x}}_{\absval{g(x)}} \dd x < \infty, \] where \(g \in \lone\) by our assumption. 
In other words, \(\frac{\fourier{f}(s) - \fourier{f}(t)}{s - t}\) is “dominated” by \(g\), see Theorem 4.2, and we may write \[\begin{align*} (\fourier{f})'(t) &= \lim_{s \to t} \frac{\fourier{f}(s) - \fourier{f}(t)}{s - t} \\ &= \int_{\R} f(x) \lim_{s \to t} \frac{2 \im \sin(-\pi (s - t) x) \ee^{- \pi \im (s + t) x}}{s - t} \dd x \\ &= \int_{\R} -2 \pi \im x f(x) \ee^{- 2 \pi \im t x} \dd x \\ &= \fourierOp(-2 \pi \im x f(x)), \end{align*}\] where we recalled \(\lim_{x \to 0} \frac {\sin(x)} x = 1\).
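Property 6. admits a simple numerical check: compare the transform of \(g(x) = -2\pi \im x f(x)\) with a central finite difference of \(\fourier{f}\). The Gaussian test function, quadrature grid, and difference step below are arbitrary choices:

```python
import math, cmath

def ft(f, t, a=-8.0, b=8.0, n=20_000):
    # midpoint-rule approximation of the Fourier transform at t
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * cmath.exp(-2j * math.pi * t * (a + (k + 0.5) * h))
               for k in range(n)) * h

f = lambda x: math.exp(-math.pi * x * x)
g = lambda x: -2j * math.pi * x * math.exp(-math.pi * x * x)  # g(x) = -2 pi i x f(x)

t, eps = 0.4, 1e-5
derivative = (ft(f, t + eps) - ft(f, t - eps)) / (2 * eps)  # central difference for (f^)'(t)
print(abs(ft(g, t) - derivative))  # ~0, property 6
```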

Now we shall discuss how to recover \(f\) from \(\fourier{f}\).

Lemma 1.1 Let \(1 \leq p < \infty\) and \(f \in \lp{p}(\R)\). The map \[ y \mapsto f(\cdot - y) =: f_y, \] for \(y \in \R\), is uniformly continuous from \(\R\) to \(\lp{p}(\R)\).

Proof. Let \(f \in \lp{p}(\R)\) and \(\ve > 0\). Then (by density of \(\compactContf{\R}{}\) in \(\lp{p}\)) there exists \(g \in \compactContf{\R}{}\) with \(\norm{f - g}_p < \ve\). Hence there also exists \(A > 0\) such that \(\supp g \subseteq [-A, A]\). Compact support together with continuity implies uniform continuity, i.e., there exists \(0 < \delta < A\) such that \[ \absval{s - t} < \delta \implies \absval{g(s) - g(t)} < (3A)^{-\frac 1 p} \ve. \tag{1.2}\]

Let us use \(g\) as a proxy for dealing with \(f_y\). Using (1.2), \[ \overbrace{\int_{\R} \absval{g(x - s) - g(x - t)}^p \dd x}^{\text{integral over a set of finite measure as } g \in \compactContf{\R}{}} = \norm{g_s - g_t}_p^p < \frac 1 {3A} \ve^p (2A + \delta) < \ve^p. \] Finally, we can use the triangle inequality to obtain \[ \norm{f_s - f_t}_p \leq \norm{f_s - g_s}_p + \norm{g_s - g_t}_p + \norm{g_t - f_t}_p < 3\ve. \]

Theorem 1.2 (Riemann-Lebesgue Lemma) Let \(f \in \lone(\R)\). Then \(\fourier{f} \in \decayContf{\R}\) and \(\norm{\fourier{f}}_{\infty} \leq \norm{f}_1\).

Proof. Surely, \[ \absval{\fourier{f}(t)} \leq \int_{\R} \big|f(x) \overbrace{\ee^{-2 \pi \im t x}}^{\text{modulus } 1}\big| \dd x = \norm{f}_1. \] Taking the supremum over \(t \in \R\) proves the second statement of the theorem.

Now, let us focus on proving \(\fourier{f}\) is continuous. Let \(t_n \to t\) be a sequence, then \[ \FT{f}(t_n) - \FT{f}(t) = \int_{\R} f(x) \brackets{\ee^{-2 \pi \im t_n x} - \ee^{-2 \pi \im t x}} \dd x. \] Since \(\absval{\ee^{-2 \pi \im t_n x} - \ee^{-2 \pi \im t x}} \leq 2\) (both terms have modulus 1), the integrand is dominated by \(2 \absval{f(x)} \in \lone(\R)\) and converges to \(0\) point-wise, so the dominated convergence Theorem 4.2 yields \[ \lim_{n \to \infty} \brackets{\FT{f}(t_n) - \FT{f}(t)} = 0. \]

Lastly, we shall prove that \(\FT{f}\) decays to zero at infinity. Note that this result is called the Riemann-Lebesgue lemma. By definition and Remark 1.2 (2.), \[\begin{align*} \FT{f}(t) &= \int_{\R} f(x) \ee^{- 2 \pi \im t x} \dd x\\ &= \overbrace{\ee^{\pi \im - \pi \im}}^{{} = 1} \int_{\R} f(x) \ee^{- 2 \pi \im t x} \dd x \\ &= - \int_{\R} f(x) \ee^{-2 \pi \im t \brackets{x + \frac 1 {2t}}} \dd x \\ &= - \int_{\R} f\brackets{x - \frac 1 {2t}} \ee^{-2 \pi \im t x} \dd x. \end{align*}\] Thus, \[ 2\FT{f}(t) = \int_{\R} \brackets{f(x) - f\brackets{x - \frac 1 {2t}}} \ee^{-2 \pi \im t x} \dd x, \] which, in turn, implies \[ 2 \absval{\FT{f}(t)} \leq \norm{f - f_{\frac 1 {2t}}}_1. \] By Lemma 1.1 we know \(y \mapsto f_y\) is (uniformly) continuous, hence \(\norm{f - f_{\frac 1 {2t}}}_1 = \norm{f_0 - f_{\frac 1 {2t}}}_1 \to 0\) as \(\absval{t} \to \infty\), and we obtain \[ \lim_{\absval{t} \to \infty} \FT{f}(t) = 0. \]
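The statement can be sanity-checked numerically; the sketch below (Python, not part of the proof, with ad-hoc quadrature parameters) uses the indicator of \([-1, 1]\), whose transform is \(\sin(2 \pi t)/(\pi t)\), and illustrates both the bound \(\norm{\FT{f}}_\infty \leq \norm{f}_1 = 2\) and the decay at infinity.

```python
import math, cmath

def ft_indicator(t, N=8000):
    # midpoint-rule approximation of \hat{f}(t) = ∫_{-1}^{1} e^{-2πi t x} dx
    h = 2.0 / N
    s = 0j
    for k in range(N):
        x = -1.0 + (k + 0.5) * h
        s += cmath.exp(-2j * math.pi * t * x) * h
    return s

def closed_form(t):
    # \hat{f}(t) = sin(2πt)/(πt) for the indicator of [-1, 1]
    return math.sin(2 * math.pi * t) / (math.pi * t)

# |\hat{f}| at increasingly large arguments (chosen away from the zeros of sin)
vals = [abs(ft_indicator(t)) for t in (0.3, 3.3, 33.3)]
```

Note that the decay here is only of order \(1/\absval{t}\): the Riemann-Lebesgue lemma guarantees decay to zero, but no rate in general.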

Example 1.1 (Gaussian) Let us consider \(h(x) = \ee^{- \pi x^2}\), for which we want to find \(\FT{h}\) by means of complex analysis. Firstly, recall that \(f(z) = \ee^{- \pi z^2}\) is an entire function. Hence, we may employ Cauchy’s integral theorem 4.9, i.e., \[ \oint_C f(z) \dd z = 0 = \int_{-R}^R \ee^{- \pi x^2} \dd x + \int_{R}^{R + \im t} f(z) \dd z - \int_{-R}^R \ee^{- \pi (x + \im t)^2} \dd x - \int_{-R}^{-R + \im t} f(z) \dd z, \] where \(C\) is shown in Figure 1.1.

Figure 1.1: Closed curve \(C\) for the use of Cauchy’s theorem with \(h\).

Examining integrals along the “vertical” parts of \(C\) produces \[ \absval{\int_{R}^{R + \im t} f(z) \dd z} = \int_0^t \big|\ee^{- \pi \overbrace{(R + \im y)^2}^{\text{real part } R^2 - y^2}}\big| \dd y \leq t \ee^{-\pi (R^2 - t^2)} \onBottom{\longrightarrow}{R \to \infty} 0. \] Moreover, for \(R \to \infty\), we also obtain \[\begin{gather*} 0 = \overbrace{\int_{-\infty}^\infty \ee^{- \pi x^2} \dd x}^{{} = 1 \text{ (akin to normal dist.)}} - \int_{-\infty}^\infty \ee^{- \pi x^2} \ee^{-2 \pi \im x t} \ee^{\pi t^2} \dd x \\ \Downarrow \\ \FT{h}(t) = \int_{\R} \ee^{- \pi x^2} \ee^{-2 \pi \im x t} \dd x = \ee^{-\pi t^2}. \end{gather*}\]

Finally, by the scaling properties of the Fourier transform, see Remark 1.2, we can morph \(h\) into \(h_{\lmbd}\), \[ h_{\lmbd}(x) = \frac 1 {\sqrt{\lmbd}} \ee^{- \pi \frac {x^2}{\lmbd}} \quad \& \quad \FT{h_{\lmbd}} (t) = \ee^{- \pi \lmbd t^2}, \tag{1.3}\] which will come in handy for us later.
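Both Example 1.1 and the scaled identity (1.3) can be verified by direct numerical integration. A Python sketch (the truncation to \([-10, 10]\) and the step count are ad-hoc choices, sufficient here because the integrand decays like a Gaussian):

```python
import math, cmath

def ft_h_lambda(lam, t, L=10.0, N=4000):
    # midpoint quadrature of \hat{h_λ}(t) = ∫ λ^{-1/2} e^{-π x²/λ} e^{-2πi t x} dx over [-L, L]
    h = 2 * L / N
    s = 0j
    for k in range(N):
        x = -L + (k + 0.5) * h
        s += lam ** -0.5 * math.exp(-math.pi * x * x / lam) * cmath.exp(-2j * math.pi * t * x) * h
    return s

approx = ft_h_lambda(0.5, 0.7)          # numerical \hat{h_λ}(t) for λ = 0.5, t = 0.7
exact = math.exp(-math.pi * 0.5 * 0.7 ** 2)   # e^{-πλt²} from (1.3)
```

For smooth, rapidly decaying integrands the midpoint rule is extremely accurate, so the two values agree essentially to machine precision.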

Remark 1.3. An attentive reader might realize that \(h_{\lmbd}\) with \(\lmbd\) representing time is the heat kernel. In other words, \(f \conv h_{\lmbd}\) is the solution to the heat equation, i.e., \[ c u_t = u_{xx} \quad \& \quad u(0,x) = f(x). \]

Lemma 1.2 Let \(f \in \lone(\R)\). Then \[ f \conv h_{\lmbd}(x) = \int_{\R} \FT{f}(t) \ee^{- \pi \lmbd t^2} \ee^{2 \pi \im t x} \dd t. \]

Note that for \(\lmbd \to 0\), the left side formally recovers \(f\), while the right side becomes the inverse Fourier transform of \(\FT{f}\).

Proof. By the assumption and Example 1.1, one obtains right away \[\begin{align*} f \conv h_{\lmbd}(x) &= \int_{\R} f(x - y) h_{\lmbd}(y) \dd y = \int_{\R} f(x - y) \int_{\R} \ee^{- \pi \lmbd t^2} \ee^{2 \pi \im t y} \dd t \dd y \\ &\tOnTop{=}{Fubini} \int \ee^{- \pi \lmbd t^2} \underbrace{\int_{\R} f(x - y) \ee^{2 \pi \im t y} \dd y}_{\ee^{2 \pi \im t x} \FT{f}(t)} \dd t. \end{align*}\]
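The identity of Lemma 1.2 can also be checked numerically. Below is a Python sketch for \(f\) the indicator of \([-1, 1]\): the left side is evaluated via the error function, the right side by truncated quadrature (the Gaussian factor makes the truncation harmless); the truncation bound \(T\) and the grid are ad-hoc.

```python
import math, cmath

lam, x = 0.3, 0.4

def ft_f(t):
    # Fourier transform of the indicator of [-1, 1] (value 2 at t = 0 by continuity)
    return math.sin(2 * math.pi * t) / (math.pi * t) if t != 0 else 2.0

# right side of Lemma 1.2, truncated to [-T, T]
T, N = 8.0, 8000
h = 2 * T / N
rhs = 0j
for k in range(N):
    t = -T + (k + 0.5) * h
    rhs += ft_f(t) * math.exp(-math.pi * lam * t * t) * cmath.exp(2j * math.pi * t * x) * h

# left side: (f ∗ h_λ)(x) = ∫_{x-1}^{x+1} h_λ(u) du, expressed via erf
c = math.sqrt(math.pi / lam)
lhs = 0.5 * (math.erf(c * (x + 1)) - math.erf(c * (x - 1)))
```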

Theorem 1.3 Let \(g \in \linfty(\R)\) be continuous in (fixed) \(x \in \R\). Then \[ \lim_{\lmbd \to 0} g \conv h_{\lmbd}(x) = g(x). \]

Proof. Using the fact that \(\int_{\R} h_{\lmbd} (y) \dd y = 1\), let us examine \[\begin{align*} g \conv h_{\lmbd}(x) - g(x) &= \int_{\R} \brackets{g(x - y) - g(x)} h_{\lmbd}(y) \dd y \\ &= \int_{\R} (g(x - y) - g(x)) \frac 1 {\sqrt{\lmbd}} \ee^{- \pi \frac {y^2} {\lmbd}} \dd y \\ &\onTop{=}{s = \frac y {\sqrt{\lmbd}}} \int_{\R} (g(x - s\sqrt{\lmbd}) - g(x)) \underbrace{\ee^{- \pi s^2}}_{\text{bounded}} \dd s. \end{align*}\] As \(g \in \linfty(\R)\), we have \(\absval{g\brackets{x - s \sqrt{\lmbd}} - g(x)} \leq 2 \norm{g}_{\infty}\). Thus, there exists a majorant of the integrand for the dominated convergence theorem, see Theorem 4.2, and we may pass \(\lim_{\lmbd \to 0}\) inside of the integral. Finally, by continuity of \(g\) we obtain the desired result.

Theorem 1.4 Let \(1 \leq p < \infty\) and \(f \in \lp{p}(\R)\). Then \[ \lim_{\lmbd \to 0} \norm{ f \conv h_{\lmbd} - f }_p = 0. \]

Proof. One might observe that \(h_{\lmbd} \in \lp{q}(\R)\), where \(\frac 1 p + \frac 1 q = 1\), and hence \(f \conv h_{\lmbd}(x)\) is defined for every \(x\), i.e., the defining integral converges for every \(x\).

Similarly to the proof of Theorem 1.3, we might use the fact that \(h_{\lmbd}\) is a probability measure and compute \[ f \conv h_{\lmbd}(x) - f(x) = \int_{\R} \brackets{f(x-y) - f(x)} h_{\lmbd}(y) \dd y. \] Now by Jensen’s inequality, see Proposition 4.1, we get \[ \absval{f \conv h_{\lmbd}(x) - f(x)}^p \leq \int_{\R} \absval{f(x - y) - f(x)}^p h_{\lmbd}(y) \dd y. \] Integrating both sides w.r.t. \(x\) yields \[ \norm{f \conv h_{\lmbd} - f}_p^p \leq \int_{\R} \norm{f_y - f}_p^p h_{\lmbd}(y) \dd y. \] Since \(y \mapsto f_y\) and the norm are continuous, \(\norm{f_y - f}_p^p\) is continuous in \(y\) and bounded, so we can apply Theorem 1.3 to obtain \[ \lim_{\lmbd \to 0} \norm{f \conv h_{\lmbd} - f}_p = 0. \]
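The \(\lp{p}\)-convergence can be observed numerically, e.g., for \(p = 2\) and \(f\) the indicator of \([-1, 1]\). The Python sketch below (truncation and grid ad-hoc) computes \(\norm{f \conv h_{\lmbd} - f}_2\) for decreasing \(\lmbd\); the convolution is expressed through the error function as in the previous sketch.

```python
import math

def conv_ind_h(lam, x):
    # (f ∗ h_λ)(x) for f the indicator of [-1, 1], via erf
    c = math.sqrt(math.pi / lam)
    return 0.5 * (math.erf(c * (x + 1)) - math.erf(c * (x - 1)))

def l2_err(lam, L=6.0, N=6000):
    # midpoint approximation of || f ∗ h_λ - f ||_2 over [-L, L]
    h = 2 * L / N
    s = 0.0
    for k in range(N):
        x = -L + (k + 0.5) * h
        f = 1.0 if abs(x) <= 1 else 0.0
        s += (conv_ind_h(lam, x) - f) ** 2 * h
    return math.sqrt(s)

errs = [l2_err(lam) for lam in (1.0, 0.1, 0.01)]
```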

Remark 1.4. The main difference between Theorem 1.3 and Theorem 1.4 can be seen in the kind of convergence both theorems describe. While Theorem 1.3 gives us point-wise convergence for bounded and continuous functions, Theorem 1.4 provides an alternative view with \(\lp{p}\)-convergence for appropriate functions.

Lastly, Lemma 1.2 simply characterizes the same situation for the special case of \(\lone(\R)\), without going into the details on convergence when \(\lmbd \to 0\).

Let us now turn our attention to the relationship of the Fourier transform and its inverse. Namely, we shall characterize the case when we recover the original function \(f\) almost everywhere, with the caveat of needing \(\FT{f} \in \lone(\R)\). However, there are prominent examples of \(f\) such that \(\FT{f} \notin \lone(\R)\) — we shall address this issue later.

Theorem 1.5 If \(f, \FT{f} \in \lone(\R)\), then \[ g(x) = \int_{\R} \FT{f}(t) \ee^{2 \pi \im t x} \dd t, \tag{1.4}\] with \(g \in \decayContf{\R}\), such that \(g(x) = f(x)\) almost everywhere for \(x \in \R\).

Proof. By Lemma 1.2, we directly have \[ f \conv h_{\lmbd}(x) = \int_{\R} \FT{f}(t) \ee^{- \pi \lmbd t^2} \overbrace{\ee^{2 \pi \im t x}}^{\text{modulus 1}} \dd t, \tag{1.5}\] i.e., the integrand is bounded by \(\absval{\FT{f}(t)}\) and \(\FT{f} \in \lone(\R)\). Hence, we can use the dominated convergence theorem 4.2, yielding \[ g(x) = \lim_{\lmbd \to 0} f \conv h_{\lmbd}(x). \] Recalling (1.4), \(g\) is precisely the Fourier transform (Definition 1.1) of \(\FT{f}\) evaluated at \(-x\), hence \(g \in \decayContf{\R}\) by Theorem 1.2.

Finally, from Theorem 1.4 we know that \(\lim_{\lmbd \to 0} \norm{f \conv h_{\lmbd} - f}_p = 0\), and by Theorem 4.3 (which, in turn, relies on Lemma 4.1), we have \[ \underbrace{\lim_{n \to \infty} f \conv h_{\lmbd_n} (x)}_{g(x)} \tOnTop{=}{a.e.} f(x). \]

Corollary 1.1 Let \(f \in \lone(\R)\) and \(\FT{f} \equiv 0\). Then \(f(x) \tOnTop{=}{a.e.} 0\).

Proof. Follows immediately from Theorem 1.5.

So far, we know that \[ \FTOp: \lone(\R) \to \decayContf{\R}, \] yet \(\FTOp\) is not surjective onto \(\decayContf{\R}\). For example, one can define the Fourier transform for measures and find measures which do not have a density. Moreover, the inversion formula (Theorem 1.5) requires \(\FT{f} \in \lone(\R)\). Therefore, let us now focus on finding suitable spaces \(X, Y\) such that \(\FTOp : X \to Y\) is bijective.

1.1 Fourier Transform on \(\mathcal{L}^2\)

Firstly, let us note \(\lone \cap \ltwo\) is dense in \(\ltwo\) (and also in \(\lone\)). Moreover, we already have the Fourier transform defined on \(\lone \cap \ltwo\) and our goal will be to extend it to \(\ltwo\). Without getting ahead of ourselves too much, let us reveal in advance that \(\FT{f}\) can be understood as an element of \(\ltwo\), which is captured by the following theorem.

Theorem 1.6 (Plancherel’s) There exists a map \(\FTOp: \ltwo(\R) \to \ltwo(\R)\) with the following properties:

  1. for \(f \in \lone(\R) \cap \ltwo(\R)\) it holds \(\FTOp f = \FT{f}\);
  2. \(\norm{f}_2 = \norm{\FTOp f}_2\), i.e., \(\FTOp\) is an isometry;
  3. if \(\vf_A(t) = \int_{-A}^A f(x) \ee^{-2 \pi \im t x} \dd x\) and \(\psi_A(x) = \int_{-A}^A \FTOp f(t) \ee^{2 \pi \im t x} \dd t\), then \[ \lim_{A \to \infty} \norm{\FTOp f - \vf_A}_2 = 0 \quad \& \quad \lim_{A \to \infty} \norm{f - \psi_A}_2 = 0, \] i.e., both integral transforms “work almost everywhere”.

Proof. We shall show the existence and properties of \(\FTOp\) as results of its behavior on \(\lone \cap \ltwo\), e.g., by employing density arguments. In the end, we will give a concrete \(\lone\)-proxy for \(\FTOp: \ltwo \to \ltwo\) in the limit sense.

Denote \(f^*(x) \letDef \complexConj{f(-x)}\) for \(f \in \lone \cap \ltwo\) and \(g \letDef f \conv f^*\), then by Theorem 1.1 \(g \in \lone\). As \[ g(x) = \int_{\R} f(x - y) \complexConj{f(-y)} \dd y = \int_{\R} f(x + y) \complexConj{f(y)}\dd y, \] we can interpret \(g\) as a scalar product of \(f\) and shifted \(f_{-x}\). From this it immediately follows that \(g(0) = \norm{f}_2^2\) and that \(g(x) = \scal{f_{-x}} {f}_{\ltwo}\) is uniformly continuous, since \(x \mapsto f_{-x}\) is continuous by Lemma 1.1. By the Cauchy-Schwarz inequality 4.4, we get \[ \absval{g(x)} \leq \norm{f}_2^2, \] thus \(g\) is bounded.

Now, recalling the scaled Gaussian “mollifier” \(h_{\lmbd}\) from (1.3), we may invoke Lemma 1.2 in order to obtain \[ g \conv h_{\lmbd} (0) = \int_{\R} \FT{g}(t) \ee^{- \pi \lmbd t^2} \dd t. \] By Remark 1.2 (3. & 4.), \(\FT{g}(t) = \FT{f}(t) \cdot \complexConj{\FT{f}(t)} = \absval{\FT{f}(t)}^2\), therefore \[ g \conv h_{\lmbd} (0) = \int_{\R} \absval{\FT{f}(t)}^2 \ee^{- \pi \lmbd t^2} \dd t, \] with \(\ee^{- \pi \lmbd t^2}\) increasing monotonically to \(1\) as \(\lmbd \to 0\). On one hand, we use Theorem 1.3 (justified by boundedness and continuity of \(g\)), while on the other, employing Theorem 4.1 produces \[ \underbrace{\lim_{\lmbd \to 0} g \conv h_{\lmbd}(0)}_{{} = g(0) = \norm{f}_2^2} = \int_{\R} \absval{\FT{f}(t)}^2 \dd t = \norm{\FT{f}}_2^2. \]

Furthermore, let \(Y = \FTOp[\lone(\R) \cap \ltwo(\R)]\). Our goal will be to show that \(Y\) is dense in \(\ltwo(\R)\). In particular, since \(\ltwo(\R)\) is a Hilbert space2, it suffices to show that the orthogonal complement is trivial, i.e., \(Y^{\perp} = \set{0}\).

Consider a function \(x \mapsto \ee^{2 \pi \im \alpha x} \ee^{- \pi \lmbd x^2} \in \lone \cap \ltwo\), then applying \(\FTOp\) gives (by (1.3) and Example 1.1) \[ \underbrace{t \mapsto \int_{\R} \ee^{2 \pi \im \alpha x} \ee^{- \pi \lmbd x^2} \ee^{-2 \pi \im t x} \dd x}_{h_{\lmbd}(t - \alpha)} \in Y. \] Now take \(w \in Y^{\perp}\) and note \[ (h_{\lmbd} \conv \complexConj{w})(\alpha) = \int_{\R} h_{\lmbd}(\alpha - t) \complexConj{w}(t) \dd t = \scal{h_{\lmbd}(\cdot - \alpha)}{w}_{\ltwo} = 0. \] However, by Theorem 1.4 we also know \[ \lim_{\lmbd \to 0} \norm{h_{\lmbd} \conv \complexConj{w} - \complexConj{w}}_2 = 0, \] which, taking into account the previous computation, necessitates \(w \tOnTop{=}{a.e.} 0\), hence \(Y^{\perp} = \set{0}\). In other words, \(Y\) is dense in \(\ltwo\), so the isometry \(\FTOp\) extends from \(\lone \cap \ltwo\) to all of \(\ltwo\) and maps onto \(\ltwo\).

Lastly, let \(\chi_A \letDef \chi_{[-A, A]}\) be the indicator function of \([-A, A]\), i.e., we may write \[ \vf_A(t) = \int_{-A}^A f(x) \ee^{-2 \pi \im t x} \dd x = \int_{\R} (f \cdot \chi_A) \ee^{-2 \pi \im t x} \dd x = \FT{f \cdot \chi_A}(t). \] Thus by \(\FTOp\) being an isometry (see above), \(\norm{\FT{f} - \vf_A}_2^2 = \norm{\FT{f} - \FT{f \cdot \chi_A}}_2^2 = \norm{f - f \cdot \chi_A}_2^2\), which goes to \(0\) for \(A \to \infty\). Analogously, we can also calculate (recall the definition of \(\psi_A\)) \[ \norm{f - \psi_A}_2^2 = \norm{f(- x) - \FT{\FT{f} \cdot \chi_A}(- x)}_2^2 = \norm{\FT{f} - \FT{f} \cdot \chi_A}_2^2 \onBottom{\longrightarrow}{A \to \infty} 0. \]
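The isometry in Theorem 1.6 can be illustrated numerically. For \(f(x) = \ee^{-x}\) on \((0, \infty)\) (and \(0\) otherwise) we have \(\norm{f}_2^2 = \frac 1 2\) and \(\FT{f}(t) = \frac 1 {1 + 2 \pi \im t}\), so the isometry predicts \(\int_{\R} \frac {\dd t} {1 + 4 \pi^2 t^2} = \frac 1 2\). A Python sketch with ad-hoc truncation (note the slow \(1/T\) tail, which dominates the error):

```python
import math

# f(x) = e^{-x} for x > 0:  ||f||_2^2 = ∫_0^∞ e^{-2x} dx = 1/2
norm_f_sq = 0.5

# ||\hat{f}||_2^2 = ∫ dt / (1 + 4π² t²), truncated to [-T, T] via the midpoint rule
T, N = 300.0, 120000
h = 2 * T / N
norm_ft_sq = sum(h / (1 + (2 * math.pi * (-T + (k + 0.5) * h)) ** 2) for k in range(N))
```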

Corollary 1.2 Let \(f \in \ltwo, \FT{f} \in \lone\), then \(f(x) \tOnTop{=}{a.e.} \int_{\R} \FT{f}(t) \ee^{2 \pi \im t x} \dd t\).

Remark 1.5 (Heisenberg uncertainty principle). Consider \(\absval{f(x)}^2\) the density of a probability distribution, i.e., \[ \norm{f}_2^2 = \int_{\R} \absval{f(x)}^2 \dd x = 1, \] such that, for simplicity, its mean value is zero. Now \(\int_{\R} x^2 \absval{f(x)}^2 \dd x\) is the variance of said probability distribution. By isometry (see Theorem 1.6) we also know that \(\int_{\R} \absval{\FT{f}(t)}^2 \dd t = 1\), i.e., \(\absval{\FT{f}(t)}^2\) is also a probability density with its associated variance.

By computing \[ \int_{\R} x^2 \absval{f(x)}^2 \dd x \cdot \int_{\R} t^2 \absval{\FT{f}(t)}^2 \dd t \geq C \norm{f}_2^2 \cdot \norm{\FT{f}}_2^2 = C \norm{f}_2^4 = C \] it is possible to see that if one of the variances is small, the other has to be large in order to compensate. In other words, the variances of \(\absval{f(x)}^2\) and \(\absval{\FT{f}(t)}^2\) cannot both be small at the same time.

Let us now determine the value of \(C\). Recall from Remark 1.2 (6.) and the introductory motivation that \(\FT{f'}(t) = 2 \pi \im t \FT{f}(t)\) if \(f' \in \lone\). Then by Theorem 1.6 \[ \int_{\R} t^2 \absval{\FT{f}(t)}^2 \dd t = \frac 1 {4 \pi^2} \int_{\R} \absval{\FT{f'}(t)}^2 \dd t = \frac 1 {4 \pi^2} \int_{\R} \absval{f'(x)}^2 \dd x, \] and by the Cauchy-Schwarz inequality (Theorem 4.4), it also follows that \[\begin{align*} \frac 1 {4 \pi^2} \norm{x f(x)}_2^2 \cdot \norm{f'}_2^2 &\geq \frac 1 {4 \pi^2} \absval{\scal{x f(x)} {f'}}^2 = \frac 1 {4 \pi^2} \absval{\int_{\R} x f(x) \complexConj{f'(x)} \dd x}^2 \\ &\geq \frac 1 {4 \pi^2} \absval{\reOf{\int_{\R} x f(x) \complexConj{f'(x)} \dd x}}^2. \end{align*}\] Since \[ \ReOf{f(x) \complexConj{f'(x)}} = \frac 1 2 \brackets{f(x) \complexConj{f'(x)} + \complexConj{f(x)} f'(x)} = \frac 1 2 \brackets{f(x) \complexConj{f(x)}}' = \frac 1 2 \brackets{\absval{f(x)}^2}', \] it yields (using integration by parts) \[ \norm{x f(x)}_2^2 \cdot \norm{t \FT{f}(t)}_2^2 \geq \frac 1 {16 \pi^2} \absval{\int_{\R} x \brackets{\absval{f(x)}^2}' \dd x}^2 = \frac 1 {16 \pi^2} \brackets{\int_{\R} \absval{f(x)}^2 \dd x}^2 = \frac 1 {16 \pi^2} \norm{f}_2^4. \]

Lastly, the above attains equality (in Cauchy-Schwarz) precisely when \(f'(x) = -c x f(x)\) for some \(c > 0\), i.e., \(f(x) = C \ee^{-\frac c 2 x^2}\); in particular, \(f\) is then a restriction of an entire function to the real line (and we shall discuss this case in more depth later).
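Numerically, the bound \(C = \frac 1 {16 \pi^2}\) and its equality case look as follows. The Python sketch uses \(\int t^2 \absval{\FT{f}}^2 \dd t = \frac 1 {4 \pi^2} \norm{f'}_2^2\) so that no Fourier transform has to be computed; the triangle function \(\max(0, 1 - \absval{x})\) serves as an arbitrary non-Gaussian example (truncation and grid parameters are ad-hoc).

```python
import math

def moments(f, df, L=3.0, N=30000):
    # returns (||f||², ∫ x²|f|² dx, ∫ t²|\hat f|² dt), the last via ||f'||²/(4π²)
    h = 2 * L / N
    n2 = vx = vt = 0.0
    for k in range(N):
        x = -L + (k + 0.5) * h
        n2 += f(x) ** 2 * h
        vx += x * x * f(x) ** 2 * h
        vt += df(x) ** 2 * h / (4 * math.pi ** 2)
    return n2, vx, vt

# Gaussian: the equality case of the uncertainty principle
n2, vx, vt = moments(lambda x: math.exp(-math.pi * x * x),
                     lambda x: -2 * math.pi * x * math.exp(-math.pi * x * x))
gauss_product = (vx / n2) * (vt / n2)

# triangle function: strict inequality
n2t, vxt, vtt = moments(lambda x: max(0.0, 1 - abs(x)),
                        lambda x: 0.0 if abs(x) > 1 else (-1.0 if x > 0 else 1.0))
tri_product = (vxt / n2t) * (vtt / n2t)

bound = 1 / (16 * math.pi ** 2)
```

For the triangle function the variances can even be computed by hand, giving the product \(\frac 3 {40 \pi^2} > \frac 1 {16 \pi^2}\).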

1.2 Fourier Transform as Complex-Valued Continuous Linear Functional

We have already seen that \((\lone, + , \conv)\) is a Banach algebra. Now we would like to understand continuous linear functionals \(\vf: \lone \to \C\) that are multiplicative with respect to convolution, \[ \vf (f \conv g) = \vf(f) \cdot \vf(g), \] e.g., \(\vf\) could be the Fourier transform evaluated at a given point.

Proposition 1.1 For \(\vf\) as defined above, it holds that \[ \norm{\vf} = \sup_{\norm{x} \leq 1} \absval{\vf(x)} \leq 1. \]

Proof. We shall prove this result by contradiction. Assume \(\norm{\vf} > 1\), then there exists \(x \in \lone\) such that \(\norm{x} < 1\) and \(\vf(x) = 1\). Let \(s_n = - \sum_{i = 1}^n x^i\), where \(x^i\) is the \(i\)-fold convolution of \(x\) with itself. Since \(\norm{x^i} \leq \norm{x}^i\) and \(\sum_{i} \norm{x}^i < \infty\), \((s_n)\) is a Cauchy sequence with \(s \letDef \lim_{n \to \infty} s_n\). Then \[\begin{align*} x \conv s_n &= s_n + x - x^{n+1} \\ &\Downarrow \text{ when } n \to \infty \\ x \conv s &= s + x \\ &\Downarrow \\ \vf(x) \cdot \vf(s) &= \vf(s) + \vf(x) \\ \vf(s) &= \vf(s) + 1, \end{align*}\] which is a contradiction.

We have recalled before that \(\ltwo\) is a Hilbert space3; \(\lone\) is not, but its dual space is \(\linfty\). Therefore, by the Riesz representation theorem 4.5, there exists \(\beta \in \linfty\) such that \(\vf(f) = \int_{\R} f(x) \beta(x) \dd x\). Then \[ \begin{aligned} \vf(f \conv g) &= \int_{\R} \brackets{\int_{\R} f(x - y) g(y) \dd y} \beta(x) \dd x \\ &\tOnTop{=}{Fubini} \int_{\R} g(y) \underbrace{\int_{\R} f(x - y) \beta(x) \dd x}_{\vf(f_y)} \dd y \\ &= \int_{\R} g(y) \vf(f_y) \dd y, \end{aligned} \] but also \[ \vf(f \conv g) = \vf(f) \cdot \vf(g) = \vf(f) \cdot \int_{\R} g(y) \beta(y) \dd y. \] As these equalities hold for any \(g \in \lone\), it implies \[ \vf(f) \beta(y) \tOnTop{=}{a.e.} \vf(f_y). \tag{1.6}\] Even more, \[ \vf(f_{x+y}) = \vf(f) \beta(x+y) = \vf((f_y)_x) = \vf(f_y) \beta(x) = \vf(f) \beta(y)\beta(x), \] thus \(\beta(x+y) = \beta(x)\beta(y)\), which is Cauchy’s exponential functional equation. In addition, since \(y \mapsto f_y\) is (uniformly) continuous, by Lemma 1.1, and \(\vf\) is continuous by definition, it necessitates that \(\beta\) is also continuous. Further, assuming \(\vf \not\equiv 0\), \(\beta\) can vanish nowhere (otherwise the functional equation forces \(\beta \equiv 0\)), and setting \(x = y = 0\) yields \(\beta(0) = 1\).

Hence, there exists some \(\delta > 0\) such that \(\int_0^{\delta} \beta(x) \dd x = c \neq 0\) with \(c \in \C\). Moreover, \[ \beta(y)\overbrace{\int_0^{\delta} \beta(x) \dd x}^{{} = c} = \int_0^\delta \beta(x+y) \dd x = \int_y^{y + \delta} \beta(x) \dd x, \] so \(\beta(y) = \frac 1 c \int_y^{y + \delta} \beta(x) \dd x\). Thus, \(\beta\) is differentiable, as it is an integral of a continuous function. Lastly, applying \(\derivOp{y}\) to both sides of \(\beta(x + y) = \beta(x) \beta(y)\) produces \[ \beta'(x+y) = \beta(x) \beta'(y), \] and evaluating at \(y = 0\) gives us \[ \beta'(x) = \beta'(0) \beta(x) \implies \beta(x) = \ee^{\beta'(0)x}. \] Using the fact that \(\beta \in \linfty\), necessarily \(\beta'(0) \in \im \R\), thus \(\beta(x) = \ee^{-2 \pi \im t x}\) for some \(t \in \R\). To conclude, there exists \(t \in \R\) (dependent on \(\vf\)) such that \(\vf(f) = \FT{f}(t)\). In other words, the only continuous linear functionals respecting convolution are precisely the point evaluations of the Fourier transform.

1.3 Fourier Transform on \(\mathbb{R}^n\)

Remark 1.6. All the ideas used for defining and studying the Fourier transform on \(\R\) can be transferred to \(\R^n\). Indeed, for \(f \in \lone(\R^n)\) we set \[ \FT{f}(\vi t) = \int_{\R^n} f(\vi x) \ee^{-2 \pi \im \scal{\vi t}{\vi x}} \dd \lmbd(\vi x) \] where \(\lmbd\) is the \(n\)-dimensional Lebesgue measure (also sometimes denoted \(\lmbd^n\)). The inversion formula follows by similar arguments to the one-dimensional case, and reads \[ f(\vi x) = \int_{\R^n} \FT{f}(\vi t) \ee^{2 \pi \im \scal{\vi t}{\vi x}} \dd \lmbd(\vi t). \]

Moreover, we can extend Plancherel’s theorem to \(\Ltwo{\R^n}\), obtaining that \(\FTOp\) is an isometry on \(\Ltwo{\R^n}\) and that \(\FTOp^4 = \id\). Hence, the eigenvalues of \(\FTOp\) are among the fourth roots of unity, i.e., \(\pm 1\) and \(\pm \im\). In the exercises, we have shown that the Hermite functions \[ H_n(x) = \ee^{\pi x^2} \nderivOp{n}{x} \ee^{-2 \pi x^2}, \] which are, in fact, of the form \(H_n(x) = \ee^{- \pi x^2} \cdot P_n(x)\) (with \(P_n\) a polynomial of degree \(n\)), are eigenfunctions of \(\FTOp\), i.e., \[ \FTOp(H_n)(t) = (-\im)^n H_n(t). \] As such, these functions form a complete orthogonal system for \(\Ltwo{\R}\).
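The eigenfunction claim can be spot-checked numerically for \(n = 1\): in the normalization above, \(H_1(x) = -4 \pi x \ee^{-\pi x^2}\), and its transform should equal \((-\im) H_1\). A Python sketch (quadrature parameters ad-hoc):

```python
import math, cmath

def H1(x):
    # first Hermite function in the document's normalization: e^{πx²} (d/dx) e^{-2πx²}
    return -4 * math.pi * x * math.exp(-math.pi * x * x)

def ft(g, t, L=8.0, N=8000):
    # midpoint-rule Fourier transform ∫ g(x) e^{-2πi t x} dx, truncated to [-L, L]
    h = 2 * L / N
    s = 0j
    for k in range(N):
        x = -L + (k + 0.5) * h
        s += g(x) * cmath.exp(-2j * math.pi * t * x) * h
    return s

t0 = 0.9
lhs = ft(H1, t0)        # numerical (F H_1)(t0)
rhs = -1j * H1(t0)      # eigenvalue relation with (-i)^1
```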

1.3.1 Fourier transform of Radial Functions

Consider a radial function \(F\) on \(\R^n\), i.e., \(F(\vi x) = f(\norm{\vi x}) = f(r)\) for \(r \geq 0\). Then by transformation to polar coordinates \[ \int_{\R^n} \absval{F(\vi x)} \dd \lmbd(\vi x) \onTop{=}{\vi x = r \vi y, \; \norm{\vi y} = 1} \int_0^{\infty} \absval{f(r)} r^{n-1} \dd r \underbrace{\int_{\unitBall^{n-1}} \dd \sigma}_{{} =: \unitBallSurface(n - 1)}, \tag{1.7}\] where \(\sigma\) is the surface measure and \(\unitBall^{n-1}\) the unit sphere in \(\R^n\).

In particular, taking \(F(\vi x) = \ee^{- \pi \norm{\vi x}^2}\) yields \[ \int_{\R^n} F(\vi x) \dd \lmbd(\vi x) = \brackets{\int_{\R} \ee^{-\pi x^2} \dd x}^n = 1, \] as \(\ee^{- \pi \norm{\vi x}^2} = \ee^{- \pi x_1^2} \cdot \ldots \cdot \ee^{- \pi x_n^2}\). However, from (1.7) it follows that (see Definition 4.1) \[\begin{align*} 1 &= \int_0^{\infty} \ee^{- \pi r^2} r^{n-1} \dd r \, \unitBallSurface(n - 1) \\ &\onTop{=}{t = \pi r^2} \int_0^{\infty} \ee^{-t} \brackets{\frac{t} {\pi}}^{\frac {n - 2} 2} \frac{\dd t}{2 \pi} \unitBallSurface(n - 1) \\ &= \frac {\unitBallSurface(n-1)} {2 \pi \cdot \pi^{\frac {n-2} 2}} \GammaF{\frac n 2}. \end{align*}\] Thus \[ \unitBallSurface(n-1) = \frac {2 \pi^{\frac n 2}} {\GammaF{\frac n 2}}, \tag{1.8}\] which we shall use throughout this section as a normalizing constant.
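Formula (1.8) reproduces the familiar low-dimensional values: the circumference \(2 \pi\) of the unit circle for \(n = 2\), the surface area \(4 \pi\) of the unit sphere for \(n = 3\), and \(2 \pi^2\) for \(n = 4\). A quick Python check:

```python
import math

def omega(n):
    # surface measure of the unit sphere S^{n-1} in R^n, formula (1.8)
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)
```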

Let us now return to the general case of \(F(\vi x)\); then by again applying the polar coordinates \(\vi x = r \vi y\) with \(\norm{\vi y} = 1\), \[ \FT{F}(\vi t) = \int_{\R^n} F(\vi x) \ee^{- 2 \pi \im \scal{\vi t}{\vi x}} \dd \lmbd(\vi x) = \int_0^{\infty} f(r) r^{n-1} \int_{\unitBall^{n-1}} \ee^{-2 \pi \im r \scal {\vi t} {\vi y}} \dd \sigma(\vi y) \dd r. \] Firstly, we expect \(\FT{F}(\vi t)\) to be radial again, i.e., the inner integral should depend only on \(\norm{\vi t}\). Specifically, we may write \(\vi t = \norm{\vi t} \vi e\), where \(\vi e\) is any unit vector, and by rotational symmetry the inner integral does not depend on the choice of \(\vi e\); hence we fix \(\vi e = \vi e_n\), the \(n\)-th standard basis vector of \(\R^n\).

Hence \[ \int_{\unitBall^{n-1}} \ee^{-2 \pi \im r \scal{\vi t}{\vi y}} \dd \sigma(\vi y) = \int_0^{\pi} \ee^{- 2 \pi \im r \norm{\vi t} \cos(\theta)} \sin(\theta)^{n-2} \dd \theta \cdot C_n, \tag{1.9}\] such that \(C_n \int_0^{\pi} \sin(\theta)^{n-2} \dd \theta = \unitBallSurface(n-1)\), which follows from integrating out all the variables orthogonal to \(\vi e_n\). Denote by \(\BetaF{\alpha, \beta}\) the Beta function, which satisfies \[ \BetaF{\alpha, \beta} \letDef 2 \int_0^{\frac {\pi} 2} \sin(\theta)^{2\alpha - 1} \cos(\theta)^{2\beta - 1} \dd \theta = \frac {\GammaF{\alpha}\GammaF{\beta}} {\GammaF{\alpha + \beta}}. \] Since \(\int_0^{\pi} \sin(\theta)^{n-2} \dd \theta = \BetaF{\frac {n-1} 2, \frac 1 2}\), we get by (1.8) and \(\GammaF{\frac 1 2} = \sqrt{\pi}\), \[ C_n = \frac{\unitBallSurface(n - 1)}{\BetaF{\frac {n-1}2, \frac 1 2}} = \frac {2 \pi^{\frac n 2}}{\GammaF{\frac n 2}} \frac {\GammaF{\frac n 2}} {\GammaF{\frac{n - 1} 2} \GammaF{\frac 1 2}} = \frac {2 \pi^{\frac {n - 1} 2}} {\GammaF{\frac {n-1} 2}}. \tag{1.10}\]

Continuing with (1.9), notice that (by Taylor expansion) \[ \ee^{-2 \pi \im r \norm{\vi t} \cos(\theta)} = \sum_{k = 0}^{\infty} \frac 1 {k!} \brackets{-2 \pi \im r \norm{\vi t}}^k \cos(\theta)^k, \] which converges uniformly (allowing us to commute \(\sum_{k = 0}^{\infty}\) and \(\int_0^{\pi} \cdot \dd \theta\)). Thus, \[\begin{align*} \int_{\unitBall^{n-1}} \ee^{-2 \pi \im r \scal{\vi t}{\vi y}} \dd \sigma(\vi y) &= C_n \sum_{k = 0}^{\infty} \frac 1 {k!} \brackets{-2 \pi \im r \norm{\vi t}}^k \underbrace{\int_0^{\pi} \cos(\theta)^k \sin(\theta)^{n-2} \dd \theta}_{{} = 0 \text{ for } k \text{ odd}} \\ &= C_n \sum_{k = 0}^{\infty} \frac 1 {(2k)!} (-1)^k \brackets{2 \pi r \norm{\vi t}}^{2k} \underbrace{2 \int_0^{\frac {\pi} 2} \cos(\theta)^{2k} \sin(\theta)^{n-2} \dd \theta}_{\BetaF{k+\frac 1 2, \frac {n-1} 2} = \frac{\GammaF{k + \frac 1 2} \GammaF{\frac{n - 1} 2}} {\GammaF{\frac {2k + n} 2}}} \\ &= C_n \GammaF{\frac {n-1} 2} \sum_{k = 0}^{\infty} \frac {(-1)^k}{(2k)!} \brackets{2 \pi r \norm{\vi t}}^{2k} \frac {\GammaF{k + \frac {1} 2}} {\GammaF{k + \frac {n} 2}}. \end{align*}\] Now because \[ \GammaF{k + \frac 1 2} = \GammaF{\frac 1 2} \prod_{i = 0}^{k - 1} \brackets{\frac 1 2 + i} = \frac {\sqrt{\pi}} {2^k} \cdot 1 \cdot 3 \cdot 5 \cdots (2k-1) = \frac {\sqrt{\pi}} {2^k} \frac{(2k)!} {2^k k!} = \frac {\sqrt{\pi} (2k)!} {4^k k!}, \] refer to the definition of the gamma function 4.1 for more information, we may once again resume from the above with (1.10), \[\begin{align*} \int_{\unitBall^{n-1}} \ee^{-2 \pi \im r \scal{\vi t}{\vi y}} \dd \sigma(\vi y) &= C_n \GammaF{\frac {n-1} 2} \sum_{k = 0}^{\infty} \frac {(-1)^k}{(2k)!} \brackets{2 \pi r \norm{\vi t}}^{2k} \frac 1 {\GammaF{k + \frac n 2}} \frac {\sqrt{\pi} (2k)!} {4^k k!} \\ &= \sqrt{\pi} \frac{2 \pi^{\frac {n-1} 2}} {\GammaF{\frac {n-1} 2}} \GammaF{\frac {n-1} 2} \sum_{k = 0}^{\infty} \frac {(-1)^k} {k! \GammaF{k + \frac n 2}} (\pi r \norm{\vi t})^{2k} \\ &= 2 \pi^{\frac n 2} \sum_{k = 0}^{\infty} \frac {(-1)^k} {k! \GammaF{k + \frac n 2}} (\pi r \norm{\vi t})^{2k}, \end{align*}\] which corresponds, up to a power factor, to the Bessel function of index \(\alpha \geq 0\), defined as \[ J_{\alpha}(x) = \sum_{k = 0}^{\infty} \frac{(-1)^k}{k! \GammaF{k + \alpha + 1}} \brackets{\frac x 2}^{2k + \alpha}. \] Note that the Bessel function \(J_{\alpha}\) solves the following Bessel's differential equation in \(y(x)\), \[ x^2 y'' + x y' + (x^2 - \alpha^2) y = 0. \] In total, we get \[ \FT{f}(\norm{\vi t}) = \FT{F}(\vi t) = 2 \pi \frac 1 {\norm{\vi t}^{\frac n 2 - 1}} \int_0^{\infty} f(r) J_{\frac n 2 - 1}(2 \pi r \norm{\vi t}) r^{\frac n 2} \dd r, \] and similarly, \[ F(\vi x) = \frac {2 \pi} {\norm{\vi x}^{\frac n 2 -1}} \int_0^{\infty} \FT{f}(t) J_{\frac n 2 - 1}(2 \pi t \norm{\vi x}) t^{\frac n 2} \dd t. \] Note that this is also called the Bessel transform.
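For \(n = 2\), the series above reduces to \(2 \pi J_0(2 \pi r \norm{\vi t})\), i.e., \(\int_0^{2\pi} \ee^{-2 \pi \im r \cos\theta} \dd \theta = 2 \pi J_0(2 \pi r)\) for \(\norm{\vi t} = 1\). A Python sketch verifying this via the series definition of \(J_0\) (truncation parameters ad-hoc):

```python
import math, cmath

def J0(x, terms=40):
    # Bessel function J_0 via its power series (α = 0 in the definition above)
    return sum((-1) ** k / (math.factorial(k) ** 2) * (x / 2) ** (2 * k) for k in range(terms))

r = 0.8

# circle integral ∫_0^{2π} e^{-2πi r cos θ} dθ by the midpoint rule
N = 2000
h = 2 * math.pi / N
circle = sum(cmath.exp(-2j * math.pi * r * math.cos((k + 0.5) * h)) for k in range(N)) * h
```

The midpoint rule is spectrally accurate for smooth periodic integrands, so a moderate \(N\) already matches the series to high precision.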

1.3.2 Poisson Summation Formula

So far, we have studied various functions on \(\R^n\) and how they behave under the Fourier transform. However, we have yet to discuss a very important class of functions, namely the periodic functions. More precisely, we shall be interested in what the periodic analog of a given function on \(\R^n\) is, i.e., what object corresponds to it on the \(n\)-torus \(\torus_n\) [1, pp. 250–253; 2, chpt. 3.2.2].

To this end, let us have a function \(f : \R^n \to \R\) (or \(\C\)) and a lattice \(\Lmbd \subseteq \R^n\), i.e., \(\Lmbd\) is a discrete subgroup4 such that \(\SpanOf_{\R}(\Lmbd) = \R^n\). For example, \[ \Lmbd = \SpanOf_{\Z} (\vi v_1, \dots, \vi v_n) = \set{\sum_{j = 1}^n w_j \vi v_j \divider w_1, \dots, w_n \in \Z} = \vi A \Z^n, \] where \(\vi A = (\vi v_1, \dots, \vi v_n)\) is the matrix with columns \(\vi v_j\).

In particular, we shall consider the periodization of \(f\) \[ F(\vi x) = \sum_{\vi \lmbd \in \Lmbd} f(\vi x + \vi \lmbd), \tag{1.11}\] which, let us note, is a summation over a countable set (or countable family), thus it only makes sense if the series is absolutely convergent. Indeed, if \(F\) is in fact absolutely convergent, one can observe (as was desired) that \(F\) from (1.11) is \(\Lmbd\)-periodic, i.e., \(\forall \vi \lmbd \in \Lmbd\) it holds that \(F(\vi x + \vi \lmbd) = F(\vi x)\) — shifting by \(\vi \lmbd\) simply re-arranges the sum.

We may ensure the absolute convergence of (1.11) by the pragmatic assumption \[ \exists C, \ve > 0 \,:\; \forall \vi x \in \R^n \,:\; \absval{f(\vi x)} \leq C (1 + \norm{\vi x})^{-n - \ve}. \tag{1.12}\] Note that using only the exponent \(-n\) instead of \(-n - \ve\) would not guarantee convergence.

Proposition 1.2 The sum \(\sum_{\substack{\vi \lmbd \in \Z^n \\ \vi \lmbd \neq \vi 0}} \norm{\vi \lmbd}^{-\alpha}\) converges if, and only if \(\alpha > n\).

Proof. Surely, \[ \Z^n \setminus \set{\vi 0} = \bigcup_{k = 0}^\infty \underbrace{\set{\vi \lmbd \in \Z^n \divider 2^k \leq \norm{\vi \lmbd} \leq 2^{k+1}}}_{B_k}. \] Then \(\countElements B_k \ofOrder 2^{nk}\), where \(\ofOrder\) means that the two quantities are of the same order, i.e., they differ up to a constant multiple. Moreover, \(\vi \lmbd \in B_k\) implies \(\norm{\vi \lmbd} \geq 2^k\), and thus \[ \sum_{\substack{\vi \lmbd \in \Z^n \\ \vi \lmbd \neq \vi 0}} \norm{\vi \lmbd}^{-\alpha} \leq \sum_{k = 0}^\infty \frac 1 {2^{\alpha k}} \cdot \countElements B_k \approx \sum_{k = 0}^{\infty} \frac 1 {2^{k(\alpha - n)}} < \infty \] for \(\alpha > n\). Conversely, if \(\alpha \leq n\), then \(\sum_{\vi \lmbd \in B_k} \norm{\vi \lmbd}^{-\alpha} \geq \countElements B_k \cdot 2^{-\alpha(k+1)} \ofOrder 2^{k(n - \alpha) - \alpha} \geq 2^{-\alpha}\), i.e., each \(B_k\) contributes at least a fixed constant and the series diverges.
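The dichotomy of Proposition 1.2 is easy to observe empirically. The Python sketch below (for \(n = 2\), summing over boxes \(0 < \norm{\vi \lmbd}_\infty \leq R\)) shows the partial sums stabilizing for \(\alpha = 3 > n\) while growing steadily for \(\alpha = 2 = n\).

```python
import math

def partial_sum(alpha, R):
    # ∑ ||λ||^{-α} over λ ∈ Z² with 0 < max(|i|, |j|) ≤ R
    s = 0.0
    for i in range(-R, R + 1):
        for j in range(-R, R + 1):
            if i or j:
                s += (i * i + j * j) ** (-alpha / 2)
    return s

conv = [partial_sum(3.0, R) for R in (20, 40, 80)]   # α > n: increments shrink
div = [partial_sum(2.0, R) for R in (20, 40, 80)]    # α = n: log-like growth
```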

Notice that Proposition 1.2 also holds for general lattices and for sums \(\sum \norm{\vi x + \vi \lmbd}^{-\alpha}\), by offsetting the balls (for a general lattice \(\Lmbd\) these become scaled ellipsoids) used in the computation, and that \[ \sum_{\vi \lmbd \in \Lmbd} \frac C {(1 + \norm{\vi x + \vi \lmbd})^{n + \ve}} \leq \overbrace{\frac C {(1 + \norm{\vi x})^{n + \ve}}}^{\vi \lmbd = \vi 0} + \sum_{\substack{\vi \lmbd \in \Lmbd \\ \vi \lmbd \neq \vi 0}} \frac C {\norm{\vi x + \vi \lmbd}^{n + \ve}}, \] where the \(\vi \lmbd = \vi 0\) term does not influence absolute convergence. In other words, Proposition 1.2 along with its proof justify the assumption (1.12).

We shall look for a Fourier(-like) transform for \(\Lmbd\)-periodic functions – so far, the functions \(\ee^{-2 \pi \im \scal{\vi t}{\vi x}}\) formed a complete orthogonal system. Now, our goal will be to decompose \(F(\vi x)\) into a system of “simple” functions. Ideally, we could just use the “old” orthogonal basis \(\ee^{2 \pi \im \scal{\vi t}{\vi x}}\) if we are able to choose \(\vi t\) such that these functions are also \(\Lmbd\)-periodic, i.e., \[ \forall \vi \lmbd \in \Lmbd \,:\; \ee^{2 \pi \im \scal {\vi t}{\vi x + \vi \lmbd}} = \ee^{2 \pi \im \scal {\vi t}{\vi x}} \iff \ee^{2 \pi \im \scal {\vi t}{\vi \lmbd}} = 1 \iff \scal{\vi t}{\vi \lmbd} \in \Z. \] For this purpose we define the dual lattice \(\Lmbd^*\) of \(\Lmbd\) as \[ \Lmbd^* \letDef \set{\vi t \in \R^n \divider \forall \vi \lmbd \in \Lmbd \, : \; \scal{\vi t}{\vi \lmbd} \in \Z}, \] and it can be shown that \(\Lmbd^* = \brackets{\vi A\Tr}^{-1} \Z^n\).

Proposition 1.3 The family of functions \(\set{\ee^{2 \pi \im \scal{\vi t}{\vi x}} \divider \vi t \in \Lmbd^*}\) forms a complete orthogonal system of \(\Lmbd\)-periodic functions.

Proof. Firstly, orthogonality follows by a similar argument as in the one-dimensional case, see for example [3, chpt. 8.4.2] (for simplicity, one might consider 1-torus-periodic functions). Secondly, notice that the system is closed under multiplication and complex conjugation. Furthermore, since \(\Lmbd\)-periodic functions are functions on \(\R^n/\Lmbd\), which is compact, we may apply the Stone-Weierstrass theorem 4.6 to get that \[ \Span{\ee^{2 \pi \im \scal{\vi t}{\vi x}} \divider \vi t \in \Lmbd^*} \] is dense in \(\contf{\R^n/\Lmbd}{}\) which, in turn, is dense in \(\Ltwo{\R^n/\Lmbd}\).

We would like to obtain the “Fourier” series for \(F\), \[ F(\vi x) = \sum_{\vi \mu \in \Lmbd^*} a_{\vi \mu} \ee^{2 \pi \im \scal{\vi \mu} {\vi x}}, \tag{1.13}\] which already holds in the \(\ltwo\) sense by Proposition 1.3. However, we want to obtain a point-wise version of this equality. Recall \(\Lmbd = \vi A \Z^n\) and that \(\R^n/\Lmbd\) inherits the Lebesgue measure of \(\R^n\), therefore \[ \vol(\R^n/\Lmbd) = \vol(\underbrace{\vi A [0,1]^n}_{U_{\Lmbd}}) = \absval{\det \vi A} =: \setSize{\Lmbd}, \] where \(\setSize{\Lmbd}\) is the covolume of \(\Lmbd\) and \(\vol(\R^n/\Lmbd)\) gives the volume of a unit cell \(U_{\Lmbd}\). Remember that \(F(\vi x) = \sum_{\vi \lmbd \in \Lmbd} f(\vi x + \vi \lmbd)\) is absolutely convergent, which allows us to compute \(a_{\vi \mu}\) for a given \(\vi \mu \in \Lmbd^*\) as follows (note that \(\int_{\R^n/\Lmbd}\) again represents an integral over a unit cell) \[\begin{align*} a_{\vi \mu} &= \frac 1 {\setSize{\Lmbd}} \int_{\R^n/\Lmbd} F(\vi x) \ee^{-2 \pi \im \scal{\vi \mu} {\vi x}} \dd \lmbd(\vi x) \tOnTop{=}{abs. conv.} \frac 1 {\setSize{\Lmbd}} \sum_{\vi \lmbd \in \Lmbd} \int_{U_{\Lmbd}} f(\vi x + \vi \lmbd) \ee^{-2 \pi \im \overbrace{\scal{\vi \mu}{\vi x}}^{{} = \scal{\vi \mu}{\vi x + \vi \lmbd} \text{ mod } \Z}} \dd \lmbd(\vi x) \\ &= \frac 1 {\setSize{\Lmbd}} \sum_{\vi \lmbd \in \Lmbd} \int_{\vi \lmbd + U_{\Lmbd}} f(\vi x) \ee^{-2 \pi \im \scal{\vi \mu}{\vi x}} \dd \lmbd(\vi x) = \frac 1 {\setSize{\Lmbd}} \int_{\R^n} f(\vi x) \ee^{-2 \pi \im \scal{\vi \mu}{\vi x}} \dd \lmbd (\vi x)\\ &= \frac {1} {\setSize{\Lmbd}} \FT{f}(\vi \mu). \end{align*}\] As with \(F\) itself, the expansion (1.13) is only meaningful point-wise if it converges absolutely. In particular, placing the same assumption (1.12) on \(\FT{f}\) ensures the absolute convergence of (1.13)5.
Then \[ F(\vi x) = \frac 1 {\setSize{\Lmbd}} \sum_{\vi \mu \in \Lmbd^*} \FT{f}(\vi \mu) \ee^{2 \pi \im \scal{\vi \mu}{\vi x}} = \sum_{\vi \lmbd \in \Lmbd} f(\vi x + \vi \lmbd), \tag{1.14}\] which is called the Poisson summation formula. Most frequently it is used with \(\vi x = \vi 0\), as \[ \frac 1 {\setSize{\Lmbd}} \sum_{\vi \mu \in \Lmbd^*} \FT{f}(\vi \mu) = \sum_{\vi \lmbd \in \Lmbd} f(\vi \lmbd). \]
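A classical instance of this special case is the Jacobi theta identity: for \(\Lmbd = \Z\) (so \(\setSize{\Lmbd} = 1\), \(\Lmbd^* = \Z\)) and \(f(x) = \ee^{-\pi s x^2}\) with \(\FT{f}(t) = s^{-1/2} \ee^{-\pi t^2 / s}\), it reads \(\sum_{n} \ee^{-\pi s n^2} = s^{-1/2} \sum_{m} \ee^{-\pi m^2 / s}\). A quick Python check (truncating both sums, which is harmless thanks to the Gaussian decay):

```python
import math

s = 0.7

def theta(a, K=60):
    # partial sum ∑_{n=-K}^{K} e^{-π a n²}; the tail is utterly negligible
    return sum(math.exp(-math.pi * a * n * n) for n in range(-K, K + 1))

lhs = theta(s)                     # ∑_{λ ∈ Z} f(λ)
rhs = s ** -0.5 * theta(1 / s)     # ∑_{μ ∈ Z} \hat{f}(μ)
```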

Remark 1.7. In 2016, researchers employed the Poisson summation formula to solve the optimal sphere packing problem in dimension \(n = 8\). They worked with a function \(f\) such that \(\FT{f}(\vi t) \geq 0\) and \(f(\vi \lmbd) \leq 0\) for \(\norm{\vi \lmbd} \geq \minOf{\norm{\vi v} \divider \vi v \in \Lmbd \setminus \set{\vi 0}}\); then \(\frac 1 {\setSize{\Lmbd}} \FT{f}(\vi 0) \leq f(\vi 0)\), and if \(f\) is a radial function, then \(f\) can “detect a ball” by being positive inside and negative outside of it. However, such a function does not exist for every dimension. For \(n = 8\), numerical experiments suggested there was such a function, namely one of Gaussian-times-polynomial form.

1.4 Holomorphic Fourier Transform

In this section, we shall pay attention to the growth of entire functions. Recall that the restrictions of certain entire functions to \(\R\) turned out to lie in \(\Ltwo{\R}\), and thus their Fourier-Plancherel transform existed. In particular, we investigate the relationship between the growth of a function and the size of the support of its Fourier transform.

In the spirit of these findings, let \(f \in \hlmr(\C)\), i.e., \(f\) is an entire function. If \(\absval{f(z)} \leq C(1 + |z|)^m\), then by results from complex analysis we know that \(f\) must be a polynomial — however, a polynomial is unbounded in every direction, i.e., its restriction to \(\R\) would not be in \(\ltwo\). Additionally, if \(f\) is bounded, then by Liouville’s theorem 4.7 we get that \(f\) is constant — again not in \(\ltwo\) (unless \(f \equiv 0\)).

For this reason, consider \(f \in \hlmr(\C)\) such that \[ \absval{f(z)} \leq C \ee^{A \absval{z}^{\alpha}} \] for \(A, \alpha > 0\).

Definition 1.3 (Order & Type [4]) Let \(f \in \hlmr(\C)\) and denote \(M_f(r) = \max_{\absval{z} = r} \absval{f(z)}\). We say that \(f\) is a function of finite order if there exists \(k > 0\) such that \[ M_f(r) < \ee^{r^k} \] holds for all sufficiently large \(r\) (\(r>r_0(k)\)). The infimum \(\alpha\) of all such \(k\) is called the (exact) order of \(f\), and can be equivalently characterized as follows: for every \(\ve > 0\), \[ M_f(r) < \ee^{r^{\alpha + \ve}} \] holds for all sufficiently large \(r\), while \[ \ee^{r^{\alpha - \ve}} < M_f(r) \] holds for arbitrarily large values of \(r\).

Moreover, we define the type \(A\) of an entire function \(f\) of order \(\alpha\) as the infimum of all \(A' > 0\) which satisfy \[ M_f(r) < \ee^{A' r^{\alpha}} \] for all sufficiently large \(r\).

Proposition 1.4 Let \(f \in \hlmr(\C)\). Then \(f\) is an entire function of order \(\alpha\) and type \(A\) if and only if the following equalities hold \[ \alpha = \limsup_{\absval{z} \to \infty} \frac{\log \log \absval{f(z)}} {\log \absval{z}} \quad \& \quad A = \limsup_{\absval{z} \to \infty} \frac {\log \absval{f(z)}} {\absval{z}^{\alpha}}. \]
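The limsup formulas lend themselves to a quick numerical illustration (a sketch, not from the lecture): replacing the limsup by an evaluation at a single large radius already recovers the order and type of \(\sin\), which is of order \(1\) and type \(1\) in the normalization of Proposition 1.4.

```python
import numpy as np

# Estimate order alpha and type A of f = sin via Proposition 1.4,
# replacing the limsup over |z| -> infinity by evaluation at a large radius.
theta = np.linspace(0, 2 * np.pi, 4001)

def max_modulus(r):
    # M_f(r) = max_{|z| = r} |f(z)| for f = sin
    return np.abs(np.sin(r * np.exp(1j * theta))).max()

r = 200.0
M = max_modulus(r)                       # ~ sinh(r) ~ e^r / 2
alpha = np.log(np.log(M)) / np.log(r)    # -> 1 (sin has order 1)
A = np.log(M) / r**1.0                   # -> 1 (type 1 in this normalization)

print(alpha, A)
```

Note that the convention \(\absval{f(z)} \leq C \ee^{2 \pi A \absval{z}}\) used below rescales the type by \(2\pi\); the estimate here follows Proposition 1.4 literally.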

It can be shown that for \(\alpha < 1\), such entire functions cannot be bounded on a line (unless they are constant). Moreover, functions of order \(\alpha = 1\) and type \(2\pi A\), i.e., \(\absval{f(z)} \leq C \ee^{2 \pi A \absval{z}}\) for all \(z \in \C\), such as \(\sin\) and \(\cos\), can be bounded on \(\R\). However, they grow along the imaginary axis, and on \(\R\) they are still not in \(\ltwo\). This finally leads us to the following theorem.

Theorem 1.7 (Paley-Wiener) Let \(f \in \hlmr(\C)\) be an entire function such that \(f \vert_{\R} \in \Ltwo{\R}\). If \(f\) is of exponential type \(2 \pi A\) (and \(\alpha = 1\)), then there exists \(F \in \Ltwo{[-A, A]}\) such that \[ f(z) = \int_{-A}^A F(t) \ee^{2 \pi \im t z} \dd t. \]

Note that, in particular, we have now connected complex analysis, more precisely the growth of an entire function, with the (truncated) Fourier transform.
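For a concrete instance (an assumed example, not from the lecture): the function \(f(z) = \frac{\sin(2 \pi A z)}{\pi z}\) is entire, of exponential type \(2 \pi A\), and square-integrable on \(\R\), and Theorem 1.7 is realized with \(F \equiv 1\) on \([-A, A]\). The following quadrature sketch confirms the representation at a sample complex point.

```python
import numpy as np

# Paley-Wiener representation of f(z) = sin(2*pi*A*z)/(pi*z) with F = 1 on [-A, A]:
# f(z) = int_{-A}^{A} e^{2*pi*i*t*z} dt, valid for every complex z.
A = 1.5
z = 0.8 - 0.3j                                   # arbitrary complex test point

t = np.linspace(-A, A, 200001)
vals = np.exp(2j * np.pi * t * z)
h = t[1] - t[0]
integral = (vals[:-1] + vals[1:]).sum() * h / 2  # trapezoid rule

closed_form = np.sin(2 * np.pi * A * z) / (np.pi * z)
print(integral, closed_form)
assert abs(integral - closed_form) < 1e-6
```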

Proof. Let \(f \in \hlmr(\C)\) be an entire function and consider its damped version \(f_{\ve}(x) = f(x) \ee^{-\ve \absval{x}}\) for \(x \in \R\). Suppose for the moment that we can show \[ \lim_{\ve \to 0} \int_{\R} f_{\ve}(x) \ee^{-2 \pi \im t x} \dd x = 0 \tag{1.15}\] for \(\absval{t} > A\). By the definition of \(f_{\ve}\) we have \(\lim_{\ve \to 0} \norm{f - f_{\ve}}_2 = 0\), thus by Plancherel’s theorem 1.6 we also get \(\lim_{\ve \to 0} \norm{\FT{f} - \FT{f_{\ve}}}_2 = 0\). Then it follows from (1.15) that \(\FT{f}\) is supported only on \([-A,A]\), which proves the theorem. In other words, it suffices to show that (1.15) holds.

For \(\alpha \in [- \pi, \pi]\), define \(\Gamma_{\alpha}\) as the ray given by \(z = \ee^{\im \alpha} s\) with \(s \in [0, \infty)\), and let \(\Pi_{\alpha} = \set{z \in \C \divider \ReOf{z \ee^{\im \alpha}} > A}\) be the corresponding half-plane, see Figure 1.2.

Figure 1.2: Illustration of \(\Gamma_{\alpha}\) and \(\Pi_{\alpha}\)

Let \(w \in \Pi_{\alpha}\) and set \[ \phi_{\alpha}(w) = \int_{\Gamma_{\alpha}} f(z) \ee^{-2 \pi w z} \dd z \onTop{=}{z = \ee^{\im \alpha} s} \ee^{\im \alpha} \int_0^{\infty} f(\ee^{\im \alpha} s) \ee^{-2 \pi w \ee^{\im \alpha} s} \dd s, \] then surely (given \(f\) is of exponential type \(2 \pi A\)) \[ \absval{f(\ee^{\im \alpha} s)} \leq C \ee^{2 \pi A s} \implies \absval{f(\ee^{\im \alpha} s)} \cdot \absval{\ee^{-2 \pi w \ee^{\im \alpha} s}} \leq C \ee^{2 \pi A s - 2 \pi \ReOf{w \ee^{\im \alpha}} s}. \] In other words, the integral defining \(\phi_{\alpha}\) converges for \(w \in \Pi_{\alpha}\) (above we showed the integrand is dominated by an \(\lone\) function, so this follows from the dominated convergence theorem 4.2). Continuity of \(\phi_{\alpha}\) follows from the fact that \(f\) is entire. Applying the theorems of Fubini and Morera 4.8 to integrals over closed smooth curves in \(\Pi_{\alpha}\) yields holomorphy of \(\phi_{\alpha}\). In particular, for \(\alpha = 0\) and \(\alpha = \pi\) we obtain that \(\phi_0\) is holomorphic for \(\reOf{w} > 0\) and \(\phi_{\pi}\) for \(\reOf{w} < 0\), respectively (on these two rays the larger half-planes come from \(f \vert_{\R} \in \Ltwo{\R}\) together with \(\ee^{-2 \pi w x} \in \Ltwo{(0, \infty)}\) for \(\reOf{w} > 0\)), see Figure 1.3.

Figure 1.3: Illustration of various \(\Pi_{\alpha}\)

Finally, we can calculate \[\begin{align*} \int_{\R} f_{\ve}(x) \ee^{-2 \pi \im t x} \dd x &= \overbrace{\int_0^{\infty} f(x) \ee^{-(\ve + 2 \pi \im t) x} \dd x}^{\phi_0\brackets{\frac{\ve}{2 \pi} + \im t}} + \overbrace{\int_0^{\infty} f(-x) \ee^{(-\ve + 2 \pi \im t) x} \dd x}^{{} - \phi_{\pi}\brackets{\frac{-\ve}{2 \pi} + \im t}} \\ &= \phi_0\brackets{\frac{\ve}{2 \pi} + \im t} - \phi_{\pi}\brackets{\frac{-\ve}{2 \pi} + \im t}, \end{align*}\] i.e., the two terms need to have the same limit as \(\ve \to 0\). We shall show this by demonstrating that any two \(\phi_{\alpha}\) and \(\phi_{\beta}\) agree on the intersection of their domains. Put differently, we shall prove that \(\phi_{\alpha}\) and \(\phi_{\beta}\) are analytic continuations of each other. Once we have that, we can replace both \(\phi_0\) and \(\phi_{\pi}\) by \(\phi_{\frac {\pi} 2}\) if \(t < -A\) and by \(\phi_{- \frac {\pi} 2}\) if \(t > A\), from which (1.15) follows immediately.

Take \(0 < \beta - \alpha < \pi\) and \(w \in \Pi_{\alpha} \cap \Pi_{\beta} \neq \emptyset\) and consider the \(R\)-radius curve \(\curve_R\), see Figure 1.4. By Cauchy’s integral theorem 4.9 we know that \(\oint_{\curve_R} f(z) \ee^{-2 \pi w z} \dd z = 0\). Certainly, for \(z \in {\mcal A}_R\) (the connecting arc of \(\curve_R\)) it holds that \(\absval{f(z)} \leq C \ee^{2 \pi A R}\), and \[ \absval{\ee^{-2 \pi w z}} \leq \ee^{-2 \pi R \cdot \overbrace{\min(\reOf{\ee^{\im \alpha} w}, \reOf{\ee^{\im \beta} w})}^{> A}}. \]

Figure 1.4: \(R\)-radius curve \(\curve_R\) between \(\Gamma_{\alpha}\) and \(\Gamma_{\beta}\) with the connecting arc \({\mcal A}_R\)

Hence, \[ \absval{\int_{{\mcal A}_R} f(z) \ee^{-2 \pi w z} \dd z} \leq C \ee^{2 \pi R A} \cdot \ee^{- 2 \pi R \min(\reOf{\ee^{\im \alpha} w}, \reOf{\ee^{\im \beta} w})} \cdot R \cdot (\beta - \alpha) \onBottom{\longrightarrow}{R \to \infty} 0. \] Then \[\begin{align*} \phi_{\alpha}(w) - \phi_{\beta}(w) &= \int_{\Gamma_{\alpha}} f(z) \ee^{-2 \pi w z} \dd z - \int_{\Gamma_{\beta}} f(z) \ee^{-2 \pi w z} \dd z \\ &= \lim_{R \to \infty} \brackets{\oint_{\curve_R} f(z) \ee^{-2 \pi w z} \dd z - \int_{{\mcal A}_R} f(z) \ee^{-2 \pi w z} \dd z} = 0, \end{align*}\] i.e., \(\phi_{\alpha}\) and \(\phi_{\beta}\) agree on the intersection of their domains. Thus, as suggested before, for \(t > A\) we use \[ \phi_0\brackets{\frac{\ve}{2 \pi} + \im t} - \phi_{\pi}\brackets{\frac{-\ve}{2 \pi} + \im t} = \phi_{- \frac {\pi} 2}\brackets{\frac{\ve}{2 \pi} + \im t} - \phi_{- \frac {\pi} 2}\brackets{\frac{-\ve}{2 \pi} + \im t} \onBottom{\longrightarrow}{\ve \to 0} 0 \] and similarly for \(t < -A\) with \(\phi_{\frac {\pi} 2}\).
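The analytic-continuation step can also be probed numerically (a sketch under assumptions: the example function \(f(z) = \frac{\sin(2\pi z)}{\pi z}\) of exponential type \(2\pi\), i.e., \(A = 1\), the rays \(\alpha = 0\) and \(\beta = \frac{\pi}{2}\), and a test point chosen in \(\Pi_0 \cap \Pi_{\pi/2}\) are all for illustration only). The two ray integrals should agree.

```python
import numpy as np

# phi_alpha(w) = e^{i alpha} * int_0^inf f(e^{i alpha} s) e^{-2 pi w e^{i alpha} s} ds
# for f(z) = sin(2*pi*z)/(pi*z), an entire function of exponential type 2*pi (A = 1).
def f(z):
    z = np.asarray(z, dtype=complex)
    out = np.full(z.shape, 2.0, dtype=complex)     # limit value of f at z = 0
    nz = np.abs(z) > 1e-12
    out[nz] = np.sin(2 * np.pi * z[nz]) / (np.pi * z[nz])
    return out

def phi(alpha, w, s_max=8.0, n=400001):
    s = np.linspace(0.0, s_max, n)
    z = np.exp(1j * alpha) * s
    vals = f(z) * np.exp(-2 * np.pi * w * z)
    h = s[1] - s[0]
    return np.exp(1j * alpha) * (vals[:-1] + vals[1:]).sum() * h / 2   # trapezoid

# w lies in Pi_0 (Re w = 2 > 1) and in Pi_{pi/2} (Re(i*w) = -Im(w) = 2 > 1)
w = 2.0 - 2.0j
p0 = phi(0.0, w)
p1 = phi(np.pi / 2, w)
print(p0, p1)
assert abs(p0 - p1) < 1e-5
```

On the rotated ray the integrand grows like \(\ee^{2\pi s}\) but is damped by \(\ee^{-2\pi \ReOf{w \ee^{\im \beta}} s} = \ee^{-4\pi s}\), mirroring the estimate in the proof.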

1.4.1 Phragmén-Lindelöf principle

Let us attempt to generalize the maximum principle to unbounded domains \(U \subseteq \C\). In particular, taking \(f \in \hlmr(U) \cap \contf{\closureOf{U}}{}\) we would like to get that \(f\) bounded on \(\boundaryOf U\) implies \(f\) is bounded on \(U\). Unfortunately for us, there exist counterexamples for \(U = \C\) (or, in general, for unbounded \(U\) and any \(f\) as defined above). However, if we add a growth condition to \(f\) we can prove this statement (and for each type of unbounded \(U\) we get a different growth condition).

Theorem 1.8 (Phragmén-Lindelöf) Let \(0 < \beta < \pi\), \(\alpha < \frac {\pi} {2 \beta}\) and set \(S_{\beta} \letDef \set{z \in \C \divider \absval{\cArg{z}} < \beta}\). Consider \(f\) holomorphic on \(S_{\beta}\) and continuous on \(\closureOf{S_{\beta}}\), i.e., \(f \in \hlmr(S_{\beta}) \cap \contf{\closureOf{S_{\beta}}}{}\). If there exists \(A > 0\) such that \(\absval{f(z)} \leq \ee^{A \absval{z}^{\alpha}}\) and \(f\) is bounded on \(\boundaryOf S_{\beta}\), then \(f\) is bounded on \(S_{\beta}\).

Example 1.2 There are functions bounded on the boundary, yet unbounded on the interior. Take, for example, \(\alpha = \frac {\pi} {2 \beta}\) and \(f = \ee^{A z^{\alpha}}\). Setting \(\beta = \frac {\pi} 2\) clearly gives that \(\boundaryOf S_{\beta}\) is the imaginary line where \(f\) is bounded, whereas it is clearly unbounded on the real line.
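A tiny numerical illustration of this example (a sketch, with \(A = 1\) assumed): for \(\beta = \frac{\pi}{2}\) and \(f(z) = \ee^{z}\), the modulus is identically \(1\) on the boundary rays (the imaginary axis) yet blows up along the positive reals, so the restriction \(\alpha < \frac{\pi}{2\beta}\) in Theorem 1.8 cannot be dropped.

```python
import numpy as np

# f(z) = exp(z) on the right half-plane S_{pi/2} (beta = pi/2, alpha = 1 = pi/(2*beta)).
f = lambda z: np.exp(z)

# On the boundary (the imaginary axis), |f| is identically 1 ...
y = np.linspace(-100.0, 100.0, 1001)
assert np.allclose(np.abs(f(1j * y)), 1.0)

# ... while along the positive real axis |f(x)| = e^x is unbounded.
print(np.abs(f(np.array([1.0, 10.0, 100.0]))))
assert np.abs(f(100.0)) > 1e40
```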

Proof. Take \(\gamma\) with \(\alpha < \gamma < \frac {\pi} {2 \beta}\) and \(\ve > 0\). Define the function \[ g_{\ve}(z) = f(z)\ee^{-\ve z^{\gamma}} \] and notice \(g_{\ve}\) is holomorphic on \(S_{\beta}\) and continuous on \(\closureOf{S_{\beta}}\). Furthermore, \[ g_{\ve}(\ee^{\pm \im \beta}t) = f(\ee^{\pm \im \beta}t) \ee^{-\ve \ee^{\pm \im \beta \gamma}t^{\gamma}} \quad \& \quad \ReOf{\ee^{\pm \im \beta \gamma}} > 0, \] since \(\beta \gamma < \frac {\pi} 2\). It follows that \(\absval{\ee^{- \ve \ee^{\pm \im \beta \gamma} t^{\gamma}}} = \ee^{-\ve \ReOf{\ee^{\pm \im \beta \gamma}} t^{\gamma}} \leq 1\), and therefore \(\absval{g_{\ve}(\ee^{\pm \im \beta} t)} \leq \absval{f(\ee^{\pm \im \beta} t)}\) for \(t \geq 0\), i.e., on the boundary rays \(g_{\ve}\) obeys the same bound as \(f\).

Thus, for any \(R>0\) and \(\vf \in [-\beta, \beta]\), we have \[ \absval{g_{\ve}(R \ee^{\im \vf})} \leq \ee^{A R^{\alpha}} \ee^{-\ve R^{\gamma} \ReOf{\ee^{\im \vf \gamma}}} \leq \ee^{AR^{\alpha}-\ve\cos(\beta \gamma) R^{\gamma}}, \] which holds since \(\ReOf{\ee^{\im \vf \gamma}} \geq \cos(\beta\gamma) > 0\). As \(\gamma > \alpha\), the right-hand side tends to \(0\), so for all sufficiently large \(R\) we have \(\absval{g_{\ve}(R\ee^{\im \vf})} \leq 1\).

Consider the truncated sector \[ S_{\beta, R} \letDef \set{r \ee^{\im \vf} \divider 0 < r < R, \; \vf \in (-\beta, \beta)} \subset S_{\beta}. \] Then \(S_{\beta, R}\) is bounded and by the maximum modulus principle 4.10 the maximum of \(\absval{g_{\ve}}\) is attained on the boundary \(\boundaryOf S_{\beta, R}\), i.e., on the \(\pm \beta\)-rays or the \(R\)-diameter arc. We assume that \(f\) is bounded on \(\boundaryOf S_{\beta}\) (the two \(\pm \beta\)-rays), hence \(\absval{g_{\ve}(z)} \leq \absval{f(z)} \leq M\) for \(z = r \ee^{\pm \im \beta}\) and \(0 < r < R\). By the above, \(\absval{g_{\ve}(R \ee^{\im \vf})} \leq 1\) for \(\vf \in [-\beta, \beta]\) once \(R\) is large enough. In total, \(\absval{g_{\ve}(z)} \leq \maxOf{M, 1}\) for \(z \in S_{\beta, R}\).

Taking the limit \(R \to \infty\) (the bound on the arc continues to hold for all larger radii) gives \(\sup_{z \in S_{\beta}} \absval{g_{\ve}(z)} \leq \maxOf{M, 1}\). Letting \(\ve \to 0\), we have \(\ee^{-\ve z^{\gamma}} \to 1\) pointwise on \(S_{\beta}\). Because the supremum bound does not depend on \(\ve\), we finally obtain \(\sup_{z \in S_{\beta}} \absval{f(z)} \leq \maxOf{M, 1}\).

Recall the uncertainty principle example 1.5 discussed above. There, we related the variances of \(f\) and \(\FT{f}\) to each other, giving a lower bound for the product of the variances. Using growth conditions on \(f\) and \(\FT{f}\) together with the Phragmén-Lindelöf principle, we can strengthen this result so that only a single (small family of) function \(f\) remains.

Theorem 1.9 (Hardy’s uncertainty principle) Let \(f : \R \to \C\) be such that \(\absval{f(x)} \leq \ee^{- \pi x^2}\) and \(\absval{\FT{f}(t)} \leq \ee^{- \pi t^2}\). Then \(f(x) = C \ee^{- \pi x^2}\) for some constant \(C \in \C\) (necessarily with \(\absval{C} \leq 1\)).
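The Gaussian itself shows that the hypotheses can be attained with equality: \(\FT{f} = f\) for \(f(x) = \ee^{-\pi x^2}\) (Example 1.1). A quick quadrature sketch (not from the lecture, and using the transform convention \(\FT{f}(t) = \int_{\R} f(x) \ee^{-2 \pi \im t x} \dd x\) from this chapter):

```python
import numpy as np

# For f(x) = exp(-pi x^2) we have Ff = f; verify by direct numerical quadrature.
f = lambda x: np.exp(-np.pi * x**2)

x = np.linspace(-10.0, 10.0, 200001)
h = x[1] - x[0]

for t in (0.0, 0.5, 1.3):
    ft = (f(x) * np.exp(-2j * np.pi * t * x)).sum() * h   # Riemann sum; tails ~ e^{-100 pi}
    assert abs(ft - f(t)) < 1e-10
print("Ff(t) matches f(t) at the sampled points")
```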

Note

I am not sure whether the proof was given in the lecture. In any case, I refer to [5] for more details and the proof.

Corollary 1.3 Let \(a,b > 0\) with \(ab > 1\) and let \(f: \R \to \C\) be such that \(\absval{f(x)} \leq \ee^{-\pi a x^2}\) and \(\absval{\FT{f}(t)} \leq \ee^{-\pi b t^2}\). Then \(f \equiv 0\).

Proof. Without loss of generality (making a change of variables, if necessary), let \(a = 1\). Then \(\absval{\FT{f}(t)} \leq \ee^{- \pi b t^2} \leq \ee^{- \pi t^2}\) as \(b > 1\). By Theorem 1.9, \(f(x) = c \ee^{- \pi x^2}\) and thus (see Example 1.1) \(\FT{f}(t) = c \ee^{- \pi t^2}\). However, \(\absval{c} \ee^{- \pi t^2} \leq \ee^{- \pi b t^2}\) for all \(t\), which, since \(b > 1\), only holds if \(c = 0\), i.e., \(f \equiv 0\).

Remark 1.8. Paley and Wiener have a small book [6]6, where they discuss a large collection of similar results. For example, one of the theorems concerns a generalization of Corollary 1.3 in which more growth is allowed for \(f\). It can then be shown, after a lot more work, that \(f\) must be a polynomial multiple of the Gaussian.


  1. \(\decayContf{\R}\) denotes continuous functions tending to 0.↩︎

  2. Hilbert space is a real (or complex) inner product space which is also a complete metric space, i.e., a special case of Banach space.↩︎

  3. Hilbert space is a real (or complex) inner product space which is also a complete metric space, i.e., a special case of Banach space.↩︎

  4. A topological group \(G\) is called a discrete group if it has no limit point (accumulation point). In other words, for each element of \(G\) there exists a neighborhood which contains only that element.↩︎

  5. While [1] uses the pragmatic assumption (1.12) for both \(f\) and \(\FT{f}\), [2] only restricts \(f\) this way and requires the series of \(\FT{f}\) over the lattice to converge absolutely.↩︎

  6. To the best of my knowledge, Prof. Grabner meant this book.↩︎