2  Distributions

$$ \newcommand{\LetThereBe}[2]{\newcommand{#1}{#2}} \newcommand{\letThereBe}[3]{\newcommand{#1}[#2]{#3}} \newcommand{\ForceToBe}[2]{\renewcommand{#1}{#2}} \newcommand{\forceToBe}[3]{\renewcommand{#1}[#2]{#3}} \newcommand{\MayThereBe}[2]{\newcommand{#1}{#2}} \newcommand{\mayThereBe}[3]{\newcommand{#1}[#2]{#3}} % Declare mathematics (so they can be overwritten for PDF) \newcommand{\declareMathematics}[2]{\DeclareMathOperator{#1}{#2}} \newcommand{\declareMathematicsStar}[2]{\DeclareMathOperator*{#1}{#2}} % striked integral \newcommand{\avint}{\mathop{\mathchoice{\,\rlap{-}\!\!\int} {\rlap{\raise.15em{\scriptstyle -}}\kern-.2em\int} {\rlap{\raise.09em{\scriptscriptstyle -}}\!\int} {\rlap{-}\!\int}}\nolimits} % \d does not work well for PDFs \LetThereBe{\d}{\differential} \LetThereBe{\Im}{\IM} \LetThereBe{\Re}{\RE} \letThereBe{\linefrac}{2}{#1/#2} \LetThereBe{\ExtProd}{\mathsf{\Lambda}} \letThereBe{\unicodeInt}{1}{\mathop{\vcenter{\mathchoice{\huge\unicode{#1}}{\unicode{#1}}{\unicode{#1}}{\unicode{#1}}}}\nolimits} \letThereBe{\Oiint}{1}{\underset{ #1 \;}{ {\rlap{\mspace{1mu} \boldsymbol{\bigcirc}}{\rlap{\int}{\;\int}}} }} \letThereBe{\sOiint}{1}{\unicodeInt{x222F}_{#1}} $$ $$ % Simply for testing \LetThereBe{\foo}{\textrm{FIXME: this is a test!}} % Font styles \letThereBe{\mcal}{1}{\mathcal{#1}} \letThereBe{\chem}{1}{\mathrm{#1}} % Sets \LetThereBe{\C}{\mathbb{C}} \LetThereBe{\R}{\mathbb{R}} \LetThereBe{\Z}{\mathbb{Z}} \LetThereBe{\N}{\mathbb{N}} \LetThereBe{\K}{\mathbb{K}} \LetThereBe{\im}{\mathrm{i}} % Sets from PDEs \LetThereBe{\boundaryOf}{\partial} \letThereBe{\closureOf}{1}{\overline{#1}} \letThereBe{\Contf}{1}{\mcal C^{#1}} \letThereBe{\contf}{2}{\Contf{#2}(#1)} \letThereBe{\compactContf}{2}{\mcal C_c^{#2}(#1)} \letThereBe{\ball}{2}{B\brackets{#1, #2}} \letThereBe{\closedBall}{2}{B\parentheses{#1, #2}} \LetThereBe{\compactEmbed}{\subset\subset} \letThereBe{\inside}{1}{#1^o} \LetThereBe{\neighborhood}{\mcal O} \letThereBe{\neigh}{1}{\neighborhood \brackets{#1}} % 
Basic notation - vectors and random variables \letThereBe{\vi}{1}{\boldsymbol{#1}} %vector or matrix \letThereBe{\dvi}{1}{\vi{\dot{#1}}} %differentiated vector or matrix \letThereBe{\vii}{1}{\mathbf{#1}} %if \vi doesn't work \letThereBe{\dvii}{1}{\vii{\dot{#1}}} %if \dvi doesn't work \letThereBe{\rnd}{1}{\mathup{#1}} %random variable \letThereBe{\vr}{1}{\mathbf{#1}} %random vector or matrix \letThereBe{\vrr}{1}{\boldsymbol{#1}} %random vector if \vr doesn't work \letThereBe{\dvr}{1}{\vr{\dot{#1}}} %differentiated vector or matrix \letThereBe{\vb}{1}{\pmb{#1}} %#TODO \letThereBe{\dvb}{1}{\vb{\dot{#1}}} %#TODO \letThereBe{\oper}{1}{\mathsf{#1}} \letThereBe{\quotient}{2}{{^{\displaystyle #1}}/{_{\displaystyle #2}}} % Basic notation - general \letThereBe{\set}{1}{\left\{#1\right\}} \letThereBe{\seqnc}{4}{\set{#1_{#2}}_{#2 = #3}^{#4}} \letThereBe{\Seqnc}{3}{\set{#1}_{#2}^{#3}} \letThereBe{\brackets}{1}{\left( #1 \right)} \letThereBe{\parentheses}{1}{\left[ #1 \right]} \letThereBe{\dom}{1}{\mcal{D}\, \brackets{#1}} \letThereBe{\complexConj}{1}{\overline{#1}} \LetThereBe{\divider}{\; \vert \;} \LetThereBe{\gets}{\leftarrow} \letThereBe{\rcases}{1}{\left.\begin{aligned}#1\end{aligned}\right\}} \letThereBe{\rcasesAt}{2}{\left.\begin{alignedat}{#1}#2\end{alignedat}\right\}} \letThereBe{\lcases}{1}{\begin{cases}#1\end{cases}} \letThereBe{\lcasesAt}{2}{\left\{\begin{alignedat}{#1}#2\end{alignedat}\right.} \letThereBe{\evaluateAt}{2}{\left.#1\right|_{#2}} \LetThereBe{\Mod}{\;\mathrm{mod}\;} \LetThereBe{\bigO}{O} \letThereBe{\BigO}{1}{\bigO\brackets{#1}} % Special symbols \LetThereBe{\const}{\mathrm{const}} \LetThereBe{\konst}{\mathrm{konst.}} \LetThereBe{\vf}{\varphi} \LetThereBe{\ve}{\varepsilon} \LetThereBe{\tht}{\theta} \LetThereBe{\Tht}{\Theta} \LetThereBe{\after}{\circ} \LetThereBe{\lmbd}{\lambda} \LetThereBe{\Lmbd}{\Lambda} % Shorthands \LetThereBe{\xx}{\vi x} \LetThereBe{\yy}{\vi y} \LetThereBe{\XX}{\vi X} \LetThereBe{\AA}{\vi A} \LetThereBe{\bb}{\vi b} 
\LetThereBe{\vvf}{\vi \vf} \LetThereBe{\ff}{\vi f} \LetThereBe{\gg}{\vi g} % Basic functions \letThereBe{\absval}{1}{\left| #1 \right|} \LetThereBe{\id}{\mathrm{id}} \letThereBe{\floor}{1}{\left\lfloor #1 \right\rfloor} \letThereBe{\ceil}{1}{\left\lceil #1 \right\rceil} \declareMathematics{\image}{im} %image \declareMathematics{\domain}{dom} %image \declareMathematics{\tg}{tg} \declareMathematics{\sign}{sign} \declareMathematics{\card}{card} %cardinality \letThereBe{\setSize}{1}{\left| #1 \right|} \LetThereBe{\countElements}{\#} \declareMathematics{\exp}{exp} \letThereBe{\Exp}{1}{\exp\brackets{#1}} \LetThereBe{\ee}{\mathrm{e}} \letThereBe{\indicator}{1}{\mathbb{I}_{#1}} \declareMathematics{\arccot}{arccot} \declareMathematics{\gcd}{gcd} % Greatest Common Divisor \declareMathematics{\lcm}{lcm} % Least Common Multiple \letThereBe{\limInfty}{1}{\lim_{#1 \to \infty}} \letThereBe{\limInftyM}{1}{\lim_{#1 \to -\infty}} % Useful commands \letThereBe{\onTop}{2}{\mathrel{\overset{#2}{#1}}} \letThereBe{\onBottom}{2}{\mathrel{\underset{#2}{#1}}} \letThereBe{\tOnTop}{2}{\mathrel{\overset{\text{#2}}{#1}}} \letThereBe{\tOnBottom}{2}{\mathrel{\underset{\text{#2}}{#1}}} \LetThereBe{\EQ}{\onTop{=}{!}} \LetThereBe{\letDef}{:=} %#TODO: change the symbol \LetThereBe{\isPDef}{\onTop{\succ}{?}} \LetThereBe{\inductionStep}{\tOnTop{=}{induct. 
step}} \LetThereBe{\fromDef}{\triangleq} % Optimization \declareMathematicsStar{\argmin}{argmin} \declareMathematicsStar{\argmax}{argmax} \letThereBe{\maxOf}{1}{\max\set{#1}} \letThereBe{\minOf}{1}{\min\set{#1}} \declareMathematics{\prox}{prox} \declareMathematics{\loss}{loss} \declareMathematics{\supp}{supp} \letThereBe{\Supp}{1}{\supp\brackets{#1}} \LetThereBe{\constraint}{\text{s.t.}\;} $$ $$ % Operators - Analysis \LetThereBe{\hess}{\nabla^2} \LetThereBe{\lagr}{\mcal L} \LetThereBe{\lapl}{\Delta} \declareMathematics{\grad}{grad} \declareMathematics{\divergence}{div} \declareMathematics{\Dgrad}{D} \LetThereBe{\gradient}{\nabla} \LetThereBe{\jacobi}{\nabla} \LetThereBe{\Jacobi}{\vi{\mathrm J}} \letThereBe{\jacobian}{2}{\Dgrad_{#1}\brackets{#2}} \LetThereBe{\d}{\mathrm{d}} \LetThereBe{\dd}{\,\mathrm{d}} \letThereBe{\partialDeriv}{2}{\frac {\partial #1} {\partial #2}} \letThereBe{\npartialDeriv}{3}{\partialDeriv{^{#1} #2} {#3^{#1}}} \letThereBe{\partialOp}{1}{\frac {\partial} {\partial #1}} \letThereBe{\npartialOp}{2}{\frac {\partial^{#1}} {\partial #2^{#1}}} \letThereBe{\pDeriv}{2}{\partialDeriv{#1}{#2}} \letThereBe{\npDeriv}{3}{\npartialDeriv{#1}{#2}{#3}} \letThereBe{\deriv}{2}{\frac {\d #1} {\d #2}} \letThereBe{\nderiv}{3}{\frac {\d^{#1} #2} {\d #3^{#1}}} \letThereBe{\derivOp}{1}{\frac {\d} {\d #1}\,} \letThereBe{\nderivOp}{2}{\frac {\d^{#1}} {\d #2^{#1}}\,} % Convergence \LetThereBe{\pointwiseTo}{\to} \LetThereBe{\uniformlyTo}{\rightrightarrows} \LetThereBe{\normallyTo}{\tOnTop{\longrightarrow}{norm}} \LetThereBe{\compactlyTo}{\tOnTop{\longrightarrow}{comp.}} \LetThereBe{\locallyUnifTo}{\tOnTop{\longrightarrow}{l.u.}} % Curves \letThereBe{\graphOf}{1}{\parentheses{#1}} \declareMathematics{\interior}{Int} % complex \LetThereBe{\Cinfty}{\tilde{\C}} \declareMathematics{\residual}{res} \letThereBe{\resAt}{1}{\residual_{#1}} \declareMathematics{\complexarg}{arg} \declareMathematics{\complexArg}{Arg} \LetThereBe{\carg}{\complexarg} \LetThereBe{\cArg}{\complexArg} 
\LetThereBe{\IM}{\mathfrak{Im}} \LetThereBe{\RE}{\mathfrak{Re}} \letThereBe{\imOf}{1}{\IM\,#1} \letThereBe{\reOf}{1}{\RE\,#1} \letThereBe{\ImOf}{1}{\IM \brackets{#1}} \letThereBe{\ReOf}{1}{\RE \brackets{#1}} $$ $$ % Linear algebra \letThereBe{\norm}{1}{\left\lVert #1 \right\rVert} \letThereBe{\seminorm}{1}{{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert #1 \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}} \letThereBe{\scal}{2}{\left\langle #1, #2 \right\rangle} \letThereBe{\avg}{1}{\overline{#1}} \letThereBe{\Avg}{1}{\bar{#1}} \letThereBe{\linspace}{1}{\mathrm{lin}\set{#1}} \letThereBe{\algMult}{1}{\mu_{\mathrm A} \brackets{#1}} \letThereBe{\geomMult}{1}{\mu_{\mathrm G} \brackets{#1}} \LetThereBe{\Nullity}{\mathrm{nullity}} \letThereBe{\nullity}{1}{\Nullity \brackets{#1}} \LetThereBe{\nulty}{\nu} \declareMathematics{\SpanOf}{span} \letThereBe{\Span}{1}{\SpanOf\set{#1}} \LetThereBe{\projection}{\Pi} % Linear algebra - Matrices \LetThereBe{\tr}{\top} \LetThereBe{\Tr}{^\tr} \LetThereBe{\pinv}{\dagger} \LetThereBe{\Pinv}{^\dagger} \LetThereBe{\Inv}{^{-1}} \LetThereBe{\ident}{\vi{I}} \letThereBe{\mtr}{1}{\begin{pmatrix}#1\end{pmatrix}} \letThereBe{\bmtr}{1}{\begin{bmatrix}#1\end{bmatrix}} \declareMathematics{\trace}{tr} \declareMathematics{\diagonal}{diag} \declareMathematics{\rank}{rank} % Multilinear algebra \LetThereBe{\tensorProd}{\otimes} \LetThereBe{\tprod}{\tensorProd} \LetThereBe{\extProd}{\wedge} \LetThereBe{\wdg}{\extProd} \LetThereBe{\wedges}{\wedge \dots \wedge} \declareMathematics{\altMap}{Alt} $$ $$ % Statistics \LetThereBe{\iid}{\overset{\text{i.i.d.}}{\sim}} \LetThereBe{\ind}{\overset{\text{ind}}{\sim}} \LetThereBe{\condp}{\,\vert\,} \letThereBe{\complementOf}{1}{{{#1}^c}} \LetThereBe{\acov}{\gamma} \LetThereBe{\acf}{\rho} \LetThereBe{\stdev}{\sigma} \LetThereBe{\procMean}{\mu} \LetThereBe{\procVar}{\stdev^2} \declareMathematics{\variance}{var} \letThereBe{\Variance}{1}{\variance \brackets{#1}} \declareMathematics{\cov}{cov} 
\declareMathematics{\corr}{cor} \letThereBe{\sampleVar}{1}{\rnd S^2_{#1}} \letThereBe{\populationVar}{1}{V_{#1}} \declareMathematics{\expectedValue}{\mathbb{E}} \declareMathematics{\rndMode}{Mode} \letThereBe{\RndMode}{1}{\rndMode\brackets{#1}} \letThereBe{\expect}{1}{\expectedValue #1} \letThereBe{\Expect}{1}{\expectedValue \brackets{#1}} \letThereBe{\expectIn}{2}{\expectedValue_{#1} #2} \letThereBe{\ExpectIn}{2}{\expectedValue_{#1} \brackets{#2}} \LetThereBe{\betaF}{\mathrm B} \LetThereBe{\fisherMat}{J} \LetThereBe{\mutInfo}{I} \LetThereBe{\expectedGain}{I_e} \letThereBe{\KLDiv}{2}{D\brackets{#1 \parallel #2}} \LetThereBe{\entropy}{H} \LetThereBe{\diffEntropy}{h} \LetThereBe{\probF}{\pi} \LetThereBe{\densF}{\vf} \LetThereBe{\att}{_t} %at time \letThereBe{\estim}{1}{\hat{#1}} \letThereBe{\estimML}{1}{\hat{#1}_{\mathrm{ML}}} \letThereBe{\estimOLS}{1}{\hat{#1}_{\mathrm{OLS}}} \letThereBe{\estimMAP}{1}{\hat{#1}_{\mathrm{MAP}}} \letThereBe{\predict}{3}{\estim {\rnd #1}_{#2 | #3}} \letThereBe{\periodPart}{3}{#1+#2-\ceil{#2/#3}#3} \letThereBe{\infEstim}{1}{\tilde{#1}} \letThereBe{\predictDist}{1}{{#1}^*} \LetThereBe{\backs}{\oper B} \LetThereBe{\diff}{\oper \Delta} \LetThereBe{\BLP}{\oper P} \LetThereBe{\arPoly}{\Phi} \letThereBe{\ArPoly}{1}{\arPoly\brackets{#1}} \LetThereBe{\maPoly}{\Theta} \letThereBe{\MaPoly}{1}{\maPoly\brackets{#1}} \letThereBe{\ARmod}{1}{\mathrm{AR}\brackets{#1}} \letThereBe{\MAmod}{1}{\mathrm{MA}\brackets{#1}} \letThereBe{\ARMA}{2}{\mathrm{ARMA}\brackets{#1, #2}} \letThereBe{\sARMA}{3}{\mathrm{ARMA}\brackets{#1}\brackets{#2}_{#3}} \letThereBe{\SARIMA}{3}{\mathrm{ARIMA}\brackets{#1}\brackets{#2}_{#3}} \letThereBe{\ARIMA}{3}{\mathrm{ARIMA}\brackets{#1, #2, #3}} \LetThereBe{\pacf}{\alpha} \letThereBe{\parcorr}{3}{\rho_{#1 #2 | #3}} \LetThereBe{\noise}{\mathscr{N}} \LetThereBe{\jeffreys}{\mathcal J} \LetThereBe{\likely}{\mcal L} \letThereBe{\Likely}{1}{\likely\brackets{#1}} \LetThereBe{\loglikely}{\mcal l} \letThereBe{\Loglikely}{1}{\loglikely 
\brackets{#1}} \LetThereBe{\CovMat}{\Gamma} \LetThereBe{\covMat}{\vi \CovMat} \LetThereBe{\rcovMat}{\vrr \CovMat} \LetThereBe{\AIC}{\mathrm{AIC}} \LetThereBe{\BIC}{\mathrm{BIC}} \LetThereBe{\AICc}{\mathrm{AIC}_c} \LetThereBe{\nullHypo}{H_0} \LetThereBe{\altHypo}{H_1} \LetThereBe{\rve}{\rnd \ve} \LetThereBe{\rtht}{\rnd \theta} \LetThereBe{\rX}{\rnd X} \LetThereBe{\rY}{\rnd Y} \LetThereBe{\rZ}{\rnd Z} \LetThereBe{\rA}{\rnd A} \LetThereBe{\rB}{\rnd B} \LetThereBe{\vrZ}{\vr Z} \LetThereBe{\vrY}{\vr Y} \LetThereBe{\vrX}{\vr X} \LetThereBe{\rW}{\rnd W} \LetThereBe{\rS}{\rnd S} \LetThereBe{\rM}{\rnd M} \LetThereBe{\rtau}{\rnd \tau} % Bayesian inference \LetThereBe{\paramSet}{\mcal T} \LetThereBe{\sampleSet}{\mcal Y} \LetThereBe{\bayesSigmaAlg}{\mcal B} % Different types of convergence \LetThereBe{\inDist}{\onTop{\to}{d}} \letThereBe{\inDistWhen}{1}{\onBottom{\onTop{\longrightarrow}{d}}{#1}} \LetThereBe{\inProb}{\onTop{\to}{P}} \letThereBe{\inProbWhen}{1}{\onBottom{\onTop{\longrightarrow}{P}}{#1}} \LetThereBe{\inMeanSq}{\onTop{\to}{\ltwo}} \LetThereBe{\inltwo}{\onTop{\to}{\ltwo}} \letThereBe{\inMeanSqWhen}{1}{\onBottom{\onTop{\longrightarrow}{\ltwo}}{#1}} \LetThereBe{\convergeAS}{\tOnTop{\to}{a.s.}} \letThereBe{\convergeASWhen}{1}{\onBottom{\tOnTop{\longrightarrow}{a.s.}}{#1}} % Asymptotic qualities \LetThereBe{\simAsymp}{\tOnTop{\sim}{as.}} % Stochastic analysis \letThereBe{\diffOn}{2}{\diff #1_{[#2]}} % \LetThereBe{\timeSet}{\Theta} \LetThereBe{\eventSet}{\Omega} \LetThereBe{\filtration}{\mcal F} % TODO: Rename allFiltrations and the like \letThereBe{\allFiltrations}{1}{\set{\filtration_t}_{#1}} \letThereBe{\natFilter}{1}{\filtration_t^{#1}} \letThereBe{\NatFilter}{2}{\filtration_{#2}^{#1}} \letThereBe{\filterAll}{1}{\set{#1}_{t \geq 0}} \letThereBe{\FilterAll}{2}{\set{#1}_{#2}} \LetThereBe{\borelAlgebra}{\mcal B} \LetThereBe{\sAlgebra}{\mcal A} \LetThereBe{\quadVar}{Q} \LetThereBe{\totalVar}{V} \LetThereBe{\adaptIntProcs}{\mcal M} \letThereBe{\reflectProc}{2}{#1^{#2}} 
$$ $$ % Distributions \letThereBe{\WN}{2}{\mathrm{WN}\brackets{#1,#2}} \declareMathematics{\uniform}{Unif} \declareMathematics{\binomDist}{Bi} \declareMathematics{\negbinomDist}{NBi} \declareMathematics{\betaDist}{Beta} \declareMathematics{\betabinomDist}{BetaBin} \declareMathematics{\gammaDist}{Gamma} \declareMathematics{\igammaDist}{IGamma} \declareMathematics{\invgammaDist}{IGamma} \declareMathematics{\expDist}{Ex} \declareMathematics{\poisDist}{Po} \declareMathematics{\erlangDist}{Er} \declareMathematics{\altDist}{A} \declareMathematics{\geomDist}{Ge} \LetThereBe{\normalDist}{\mathcal N} %\declareMathematics{\normalDist}{N} \letThereBe{\normalD}{1}{\normalDist \brackets{#1}} \letThereBe{\mvnormalD}{2}{\normalDist_{#1} \brackets{#2}} \letThereBe{\NormalD}{2}{\normalDist \brackets{#1, #2}} \LetThereBe{\lognormalDist}{\log\normalDist} $$ $$ % Game Theory \LetThereBe{\doms}{\succ} \LetThereBe{\isdom}{\prec} \letThereBe{\OfOthers}{1}{_{-#1}} \LetThereBe{\ofOthers}{\OfOthers{i}} \LetThereBe{\pdist}{\sigma} \letThereBe{\domGame}{1}{G_{DS}^{#1}} \letThereBe{\ratGame}{1}{G_{Rat}^{#1}} \letThereBe{\bestRep}{2}{\mathrm{BR}_{#1}\brackets{#2}} \letThereBe{\perf}{1}{{#1}_{\mathrm{perf}}} \LetThereBe{\perfG}{\perf{G}} \letThereBe{\imperf}{1}{{#1}_{\mathrm{imp}}} \LetThereBe{\imperfG}{\imperf{G}} \letThereBe{\proper}{1}{{#1}_{\mathrm{proper}}} \letThereBe{\finrep}{2}{{#2}_{#1{\text -}\mathrm{rep}}} %T-stage game \letThereBe{\infrep}{1}{#1_{\mathrm{irep}}} \LetThereBe{\repstr}{\tau} %strategy in a repeated game \LetThereBe{\emptyhist}{\epsilon} \letThereBe{\extrep}{1}{{#1^{\mathrm{rep}}}} \letThereBe{\avgpay}{1}{#1^{\mathrm{avg}}} \LetThereBe{\succf}{\pi} %successor function \LetThereBe{\playf}{\rho} %player function \LetThereBe{\actf}{\chi} %action function $$ $$ \LetThereBe{\fourierOp}{\mcal{F}} \letThereBe{\fourier}{1}{\widehat{#1}} \letThereBe{\ifourier}{1}{\check{#1}} % Shortcuts \letThereBe{\FT}{1}{\fourier{#1}} \letThereBe{\iFT}{1}{\ifourier{#1}} 
\LetThereBe{\FTOp}{\fourierOp} \LetThereBe{\lspace}{\mcal L} \LetThereBe{\lone}{\lspace^{1}} \letThereBe{\Lone}{1}{\lone\brackets{#1}} \LetThereBe{\ltwo}{\lspace^2} \letThereBe{\Ltwo}{1}{\ltwo\brackets{#1}} \letThereBe{\lp}{1}{\lspace^{#1}} \letThereBe{\Lp}{2}{\lp{#1}\brackets{#2}} \LetThereBe{\linfty}{\lspace^{\infty}} \letThereBe{\Linfty}{1}{\linfty\brackets{#1}} \LetThereBe{\ltwoEq}{\onTop{=}{\ltwo}} \letThereBe{\decayContf}{1}{\mcal C_0\brackets{#1}} \letThereBe{\cinftyContf}{1}{\mcal C^{\infty}_c\brackets{#1}} \LetThereBe{\unitBall}{\mathbb{S}} \LetThereBe{\unitBallSurface}{S} \LetThereBe{\gammaF}{\Gamma} \letThereBe{\GammaF}{1}{\gammaF\brackets{#1}} \LetThereBe{\betaF}{B} \letThereBe{\BetaF}{1}{\betaF\brackets{#1}} \LetThereBe{\ofOrder}{\asymp} \declareMathematics{\vol}{vol} \LetThereBe{\holomorphic}{\mcal H} \LetThereBe{\hlmr}{\holomorphic} \LetThereBe{\schwartz}{\mcal S} \LetThereBe{\testSpace}{\mcal D} \letThereBe{\TestSpace}{1}{\testSpace\brackets{#1}} \LetThereBe{\bumpf}{\psi} \letThereBe{\Bumpf}{1}{\bumpf\brackets{#1}} \letThereBe{\reverse}{1}{\check{#1}} \LetThereBe{\translationOp}{\mcal{T}} \letThereBe{\translateBy}{1}{\translationOp_{#1}} \LetThereBe{\flcOp}{\mathscr{F}} \letThereBe{\flc}{1}{\flcOp\brackets{#1}} \LetThereBe{\hodge}{*} \LetThereBe{\tangent}{\mathrm{T}} \letThereBe{\tangentAt}{1}{\tangent_{#1}} \letThereBe{\lieBracket}{2}{\left[#1, #2\right]} \declareMathematics{\degree}{deg} \LetThereBe{\forms}{\mathscr{F}} \letThereBe{\kforms}{1}{\forms^{#1}} \letThereBe{\formsOver}{1}{\forms\brackets{#1}} \letThereBe{\kformsOver}{2}{\kforms{#1}\brackets{#2}} \letThereBe{\pullback}{1}{{#1}^*} \LetThereBe{\atlas}{\mcal A} \LetThereBe{\adjd}{\oper \delta} \LetThereBe{\connect}{\nabla} \LetThereBe{\christoffel}{\Gamma} \letThereBe{\lengthOf}{1}{\mathcal{l}\brackets{#1}} \LetThereBe{\HOT}{\mathrm{h.o.t.}} $$ $$ % ODEs \LetThereBe{\timeInt}{\mcal I} \LetThereBe{\stimeInt}{\mcal J} \LetThereBe{\Wronsk}{\mcal W} \letThereBe{\wronsk}{1}{\Wronsk 
\parentheses{#1}} \LetThereBe{\prufRadius}{\rho} \LetThereBe{\prufAngle}{\vf} \LetThereBe{\weyr}{\sigma} \LetThereBe{\linDifOp}{\mathsf{L}} \LetThereBe{\Hurwitz}{\vi H} \letThereBe{\hurwitz}{1}{\Hurwitz \brackets{#1}} % Cont. Models \LetThereBe{\dirac}{\delta} \LetThereBe{\torus}{\mathbb{T}} % PDEs % \avint -- defined in format-respective tex files \LetThereBe{\fundamental}{\Phi} \LetThereBe{\fund}{\fundamental} \letThereBe{\normaDeriv}{1}{\partialDeriv{#1}{\vec{n}}} \letThereBe{\volAvg}{2}{\avint_{\ball{#1}{#2}}} \LetThereBe{\VolAvg}{\volAvg{x}{\ve}} \letThereBe{\surfAvg}{2}{\avint_{\boundaryOf \ball{#1}{#2}}} \LetThereBe{\SurfAvg}{\surfAvg{x}{\ve}} \LetThereBe{\corrF}{\varphi^{\times}} \LetThereBe{\greenF}{G} \letThereBe{\reflect}{1}{\tilde{#1}} \LetThereBe{\conv}{*} \letThereBe{\dotP}{2}{#1 \cdot #2} \letThereBe{\translation}{1}{\tau_{#1}} \declareMathematics{\dist}{dist} \letThereBe{\regularizef}{1}{\eta_{#1}} \letThereBe{\fourier}{1}{\widehat{#1}} \letThereBe{\ifourier}{1}{\check{#1}} \LetThereBe{\fourierOp}{\mcal F} \LetThereBe{\ifourierOp}{\mcal F^{-1}} \letThereBe{\FourierOp}{1}{\fourierOp\set{#1}} \letThereBe{\iFourierOp}{1}{\ifourierOp\set{#1}} \LetThereBe{\laplaceOp}{\mcal L} \letThereBe{\LaplaceOp}{1}{\laplaceOp\set{#1}} \letThereBe{\Norm}{1}{\absval{#1}} % SINDy \LetThereBe{\Koop}{\mcal K} \letThereBe{\oneToN}{1}{\left[#1\right]} \LetThereBe{\meas}{\mathrm{m}} \LetThereBe{\stateLoss}{\mcal J} \LetThereBe{\lagrm}{p} % Stochastic analysis \LetThereBe{\RiemannInt}{(\mcal R)} \LetThereBe{\RiemannStieltjesInt}{(\mcal {R_S})} \LetThereBe{\LebesgueInt}{(\mcal L)} \LetThereBe{\ItoInt}{(\mcal I)} \LetThereBe{\Stratonovich}{\circ} \LetThereBe{\infMean}{\alpha} \LetThereBe{\infVar}{\beta} % Dynamical systems \LetThereBe{\nUnit}{\mathrm N} \LetThereBe{\timeUnit}{\mathrm T} % Masters thesis \LetThereBe{\evolOp}{\oper{\vf}} \letThereBe{\obj}{1}{\mathbb{#1}} \LetThereBe{\timeSet}{\obj T} \LetThereBe{\stateSpace}{\obj X} \LetThereBe{\contStateSpace}{\stateSpace_{C}} 
\LetThereBe{\orbit}{Or} \letThereBe{\Orbit}{1}{\orbit\brackets{#1}} \LetThereBe{\limitSet}{\obj \Lambda} \LetThereBe{\crossSection}{\obj \Sigma} \declareMathematics{\codim}{codim} % Left and right closed-or-open intervals \LetThereBe{\lco}{\langle} \LetThereBe{\rco}{\rangle} \letThereBe{\testInt}{1}{\mathrm{Int}_{#1}} \letThereBe{\evalOp}{1}{\oper{\eta}_{#1}} \LetThereBe{\nonzeroEl}{\bullet} \LetThereBe{\zeroEl}{\circ} \LetThereBe{\solOp}{\oper{S}} \LetThereBe{\infGen}{\oper{A}} \LetThereBe{\indexSet}{\mcal I} \letThereBe{\indicesOf}{1}{\indexSet\parentheses{#1}} \letThereBe{\IndicesOf}{2}{\indexSet_{#2}\parentheses{#1}} \LetThereBe{\meshGrid}{\obj M} \declareMathematics{\starter}{starter} \declareMathematics{\indexer}{indx} \declareMathematics{\enumerator}{enum} \LetThereBe{\inSS}{_{\infty}} \LetThereBe{\manifold}{\mcal M} \LetThereBe{\curve}{\mcal C} % Numerical methods \declareMathematics{\globErr}{err} \declareMathematics{\locErr}{le} \declareMathematics{\locTrErr}{lte} \declareMathematics{\estimErr}{est} \declareMathematics{\incrementFunc}{Inc} \letThereBe{\incrementF}{1}{\incrementFunc \brackets{#1}} \LetThereBe{\discreteNodes}{\mcal T} \LetThereBe{\stableFunc}{R} \letThereBe{\stableF}{1}{\stableFunc\brackets{#1}} \LetThereBe{\stableRegion}{\Omega} %Stochastic analysis \LetThereBe{\RiemannInt}{(\mcal R)} \LetThereBe{\RiemannStieltjesInt}{(\mcal {R_S})} \LetThereBe{\LebesgueInt}{(\mcal L)} \LetThereBe{\ItoInt}{(\mcal I)} \LetThereBe{\Stratonovich}{\circ} \LetThereBe{\infMean}{\alpha} \LetThereBe{\infVar}{\beta} %Optimization \LetThereBe{\goldRatio}{\tau} %Interpolation \LetThereBe{\lagrPoly}{l} $$

2.1 Introduction

As a preparation, we shall study functionals on \(\cinftyContf{\R^n}\), i.e., on infinitely differentiable, compactly supported functions. To construct an example of a function from this class, consider \(f: \R \to \R\) defined as \[ f(x) = \lcases{ 0, & x \leq 0, \\ \ee^{-\frac 1 x}, & x > 0, } \] which lies in \(\contf{\R}{\infty}\), as every derivative of \(\ee^{-1/x}\) tends to \(0\) as \(x \to 0^+\). Now, set \(\vf: \R^n \to \R\) as \(\vf(\vi x) = c f(1 - \norm{\vi x}^2)\) (the square keeps the composition smooth at the origin, where \(\norm{\vi x}\) itself is not differentiable), where \(c\) is chosen to satisfy \(\int_{\R^n} \vf(\vi x) \dd \vi x = 1\). Clearly, \(\supp \vf = \closureOf{\ball{\vi 0}{1}}\), hence \(\vf \in \cinftyContf{\R^n}\).
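As a sanity check, the one-dimensional case of this construction can be verified numerically. The sketch below is plain Python with illustrative helper names (`f`, `bump`, `simpson`) of our own choosing; it uses the smooth variant \(f(1 - x^2)\), computes the normalization constant \(c\) by quadrature, and confirms that \(\vf\) integrates to one and vanishes outside \([-1, 1]\).

```python
import math

def f(x):
    """e^{-1/x} for x > 0 and 0 for x <= 0; smooth, but not analytic at 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def bump(x):
    """Unnormalized bump f(1 - x^2), supported exactly on [-1, 1]."""
    return f(1.0 - x * x)

def simpson(g, a, b, n=2000):
    """Composite Simpson quadrature with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3.0

mass = simpson(bump, -1.0, 1.0)   # integral of f(1 - x^2) over [-1, 1]
c = 1.0 / mass                    # normalization: integral of phi becomes 1
phi = lambda x: c * bump(x)

print(round(simpson(phi, -1.0, 1.0), 6))  # -> 1.0
print(phi(1.2), phi(-3.0))                # -> 0.0 0.0
```

The same recipe works in \(\R^n\) with \(\norm{\vi x}^2\) in place of \(x^2\); only the value of \(c\) changes.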

Moreover, let \(f \in \Lone{\R^n}\) and notice that \(\vf_{\ve}(\vi x) = \frac 1 {\ve^n} \vf\brackets{\frac 1 {\ve} \vi x}\) is supported on \(\closureOf{\ball{\vi 0}{\ve}}\), whilst maintaining \(\int_{\R^n} \vf_{\ve}(\vi x) \dd \vi x = 1\). Then \(f \conv \vf_{\ve} \in \contf{\R^n}{\infty}\) by the definition of convolution 1.2, and \(f \conv \vf_{\ve}\) is moreover compactly supported whenever \(f\) is.
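To see the mollification at work, the following one-dimensional sketch (plain Python; the names `phi_eps`, `mollified` and the choice \(\ve = 0.1\) are ours, purely for illustration) convolves the indicator of \([0,1]\) with \(\vf_{\ve}\): the result equals \(1\) well inside the interval, \(0\) outside an \(\ve\)-neighborhood of it, and transitions smoothly in between.

```python
import math

def simpson(g, a, b, n=400):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3.0

# standard bump phi, normalized so that its integral is 1
_raw = lambda y: math.exp(-1.0 / (1.0 - y * y)) if abs(y) < 1.0 else 0.0
_C = 1.0 / simpson(_raw, -1.0, 1.0)

def phi_eps(x, eps):
    """phi_eps(x) = eps^{-1} phi(x / eps), supported on [-eps, eps]."""
    return _C * _raw(x / eps) / eps

f = lambda y: 1.0 if 0.0 <= y <= 1.0 else 0.0   # indicator of [0, 1], in L^1(R)

def mollified(x, eps=0.1):
    """(f * phi_eps)(x) = integral of f(y) phi_eps(x - y) dy -- smooth in x."""
    return simpson(lambda y: f(y) * phi_eps(x - y, eps), x - eps, x + eps)

print(round(mollified(0.5), 4))   # -> 1.0 (deep inside supp f)
print(mollified(-0.2))            # -> 0.0 (outside supp f + [-eps, eps])
print(round(mollified(0.0), 2))   # -> 0.5 (smooth transition at the edge)
```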

Definition 2.1 (Urysohn function) Let \((X, \tau)\) be a topological space and \(V,U \subseteq X\) be open such that \(V \subseteq \closureOf{V} \subseteq U\). We call a continuous map \(f : X \to \R\) a Urysohn function for \(U\) and \(V\), if \(\indicator{\closureOf{V}} \leq f \leq \indicator{U}\).

Remark 2.1. In other words, the Urysohn function acts as a continuous interpolant between \(\indicator{\closureOf{V}}\) and \(\indicator{U}\).

Can we find a “better” interpolation function \(f\) such that \(f \in \contf{\R^n}{\infty}\)? Consider \(V \subseteq \closureOf{V} \subseteq U \subseteq \R^n\) with \(V, U\) open, \(\closureOf{V}\) compact, and assume there is a positive distance between \(\closureOf{V}\) and \(\boundaryOf U\); then \(\indicator{V}, \indicator{\closureOf{V}} \in \Lone{\R^n}\). Setting \(f = \indicator{\closureOf{V}} \conv \vf_{\ve}\), notice that it is supported on a subset of \(U\) for \(\ve\) small enough, i.e., \(\supp f \subseteq \closureOf{V} + \closureOf{\ball{\vi 0}{\ve}}\), where \(+\) denotes the element-wise (Minkowski) sum. Then, by the above reasoning, \(f \in \cinftyContf{\R^n}\).

Definition 2.2 (Locally finite covering) Let \(\Omega \subseteq \R^n\) be an open set. A family of open sets \(\brackets{U_i}_{i \in \N}\) is called a locally finite covering of \(\Omega\), if \(\Omega \subseteq \bigcup_{i \in \N} U_i\) and every \(x \in \Omega\) has a neighborhood intersecting only finitely many \(U_i\); in particular, \(\set{i \in \N \divider x \in U_i}\) is finite for all \(x \in \Omega\).

Proposition 2.1 Let \(\brackets{U_i}_{i \in \N}\) be a locally finite covering of \(\Omega\). Then, every compact set \(K \subseteq \Omega\) intersects only finitely many \(U_i\).

Proof. Exercise.

Proposition 2.2 (Subordinate locally finite covering) For every locally finite cover \(\brackets{U_i}_{i \in \N}\) of \(\Omega\) there exists a subordinate locally finite cover \(\brackets{V_i}_{i \in \N}\), i.e., \(\brackets{V_i}_{i \in \N}\) is a cover of \(\Omega\) such that for all \(i \in \N\) the set \(\closureOf{V_i}\) is compact and \(\closureOf{V_i} \subseteq U_i\).

Remark 2.2 (Resolution of one). Given a subordinate locally finite cover \(\brackets{V_i}_{i \in \N}\), take \(f_i = \indicator{\closureOf{V_i}} \conv \vf_{\ve_i}\) with \(\ve_i > 0\) such that \(\supp f_i \subseteq U_i\). Then \(F(\vi x) \letDef \sum_{i = 1}^\infty f_i(\vi x)\) has only finitely many non-zero terms for every fixed \(\vi x\) (by local finiteness), and \(F(\vi x) > 0\) for all \(\vi x \in \Omega\). Taking \(\psi_i(\vi x) = \frac {f_i(\vi x)} {F(\vi x)}\), we get functions \(\psi_i\) such that \(\supp \psi_i \subseteq U_i\) and \(\sum_{i \in \N} \psi_i = 1\) on \(\Omega\). The family \(\brackets{\psi_i}_{i \in \N}\) is called a resolution of one. In other words, we have decomposed a constant function without compact support (either \(\indicator{\Omega}\) or \(\indicator{U_i}\), depending on the perspective) into compactly supported functions.
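A simplified one-dimensional sketch of a resolution of one: instead of mollifying indicators, we take each \(f_i\) directly as a smooth bump supported in \(U_i\) (a shortcut made purely for brevity, not the construction above) and normalize by \(F\). The cover of \(\Omega = (0,3)\) by \(U_1 = (0,2)\), \(U_2 = (1,3)\) is our own toy example.

```python
import math

def bump_on(a, b):
    """Smooth bump supported exactly on (a, b) -- a 1D stand-in for f_i."""
    def g(x):
        if not (a < x < b):
            return 0.0
        y = 2.0 * (x - a) / (b - a) - 1.0   # rescale (a, b) onto (-1, 1)
        return math.exp(-1.0 / (1.0 - y * y))
    return g

# open cover of Omega = (0, 3) by U_1 = (0, 2) and U_2 = (1, 3)
f1, f2 = bump_on(0.0, 2.0), bump_on(1.0, 3.0)
F = lambda x: f1(x) + f2(x)          # finite sum, strictly positive on Omega

psi1 = lambda x: f1(x) / F(x)
psi2 = lambda x: f2(x) / F(x)

for x in (0.5, 1.5, 2.5):
    print(round(psi1(x) + psi2(x), 12))   # -> 1.0 at every sample point
print(psi1(2.5), psi2(0.5))               # -> 0.0 0.0 (supports respected)
```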

2.2 Topology of \(\mathcal C_c^{\infty}(\mathbb{R}^n)\) or \(\mathcal C_c^{\infty}(\Omega)\)

Let us attempt to discern the topology of these function spaces. Firstly, we shall equip \(\cinftyContf{\R^n}\) with a family of semi-norms \[ \seminorm{f}_N = \sum_{\absval{\vi \alpha} \leq N} \norm{\brackets{1 + \norm{\vi x}^2}^N \Dgrad^{\vi \alpha} f}_{\infty}, \quad N \in \N_0, \] where \(\vi \alpha = (\alpha_1, \dots, \alpha_n)\) is a multi-index (\(\absval{\vi \alpha} = \alpha_1 + \dots + \alpha_n\)), i.e., \[ \Dgrad^{\vi \alpha} = \frac{\partial^{\absval{\vi \alpha}}} {\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}} \quad \& \quad \vi x^{\vi \alpha} = \prod_{i = 1}^n x_i^{\alpha_i}. \tag{2.1}\] As \(f \in \cinftyContf{\R^n}\), \(\seminorm{f}_N\) is always finite. On the larger space \(\contf{\R^n}{\infty}\), however, \(\seminorm{f}_N\) may be infinite (already for a non-zero polynomial), so there the \(\seminorm{\cdot}_N\) are to be read as extended semi-norms.

We now equip \(\cinftyContf{\R^n}\) with the topology induced by this family of semi-norms \(\brackets{\seminorm{\cdot}_N}_{N \in \N_0}\) (which is generated by all open balls w.r.t. all the semi-norms). Although this does not produce a normed space, the topology is metrizable with \[ d(f,g) = \sum_{N = 0}^{\infty} 2^{-N} \frac {\seminorm{f - g}_N}{1 + \seminorm{f - g}_N}. \] Nonetheless, for practical purposes we would like to work with a complete space, hence we introduce the completion of \(\cinftyContf{\R^n}\) with respect to \(d\), the Schwartz space \[ \schwartz \letDef \set{f \in \contf{\R^n}{\infty} \divider \forall N \in \N_0: \; \seminorm{f}_N < \infty}. \tag{2.2}\] Every \(f \in \schwartz\), together with all its derivatives, vanishes at infinity faster than any negative power of \(\norm{\vi x}\). For example, it can be shown that \(\ee^{- \norm{\vi x}^2} \in \schwartz\).
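The semi-norms and the metric \(d\) can be made concrete numerically. The sketch below approximates \(\seminorm{f}_N\) for \(f(x) = \ee^{-x^2}\) in one dimension on a finite grid and truncates the series for \(d(f, 0)\) at \(N = 2\); both the grid and the truncation are numerical shortcuts of ours, not part of the definitions. The finiteness of all the \(\seminorm{f}_N\) is exactly what puts the Gaussian into \(\schwartz\).

```python
import math

# f(x) = e^{-x^2} and its first two derivatives, hard-coded for the 1D check
fs = [lambda x: math.exp(-x * x),
      lambda x: -2.0 * x * math.exp(-x * x),
      lambda x: (4.0 * x * x - 2.0) * math.exp(-x * x)]

def seminorm(N, grid):
    """Grid approximation of |||f|||_N = sum_{a <= N} sup (1+x^2)^N |f^(a)(x)|."""
    total = 0.0
    for a in range(N + 1):
        total += max((1.0 + x * x) ** N * abs(fs[a](x)) for x in grid)
    return total

grid = [i / 1000.0 for i in range(-8000, 8001)]          # [-8, 8], step 0.001
s = [seminorm(N, grid) for N in range(3)]                # all finite for the Gaussian
d = sum(2.0 ** -N * s[N] / (1.0 + s[N]) for N in range(3))  # truncated d(f, 0)
print(all(v < float("inf") for v in s), 0.0 < d < 2.0)   # -> True True
```

Each summand of \(d\) is bounded by \(2^{-N}\), which is what makes the full series converge for arbitrary \(f, g\).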

If we take \(f \in \schwartz\) and calculate \[ \int_{\R^n} \absval{f(\vi x)}^p \dd \vi x = \int_{\R^n} \absval{\brackets{1+\norm{\vi x}^2} f(\vi x)}^p \frac 1 {\brackets{1 + \norm{\vi x}^2}^p} \dd \vi x \leq \int_{\R^n} \frac {\seminorm{f}_1^p} {\brackets{1 + \norm{\vi x}^2}^p} \dd \vi x, \tag{2.3}\] then, as \(\frac 1 {\brackets{1 + \norm{\vi x}^2}^p} \in \Lone{\R^n}\) whenever \(2p > n\) (for general \(p \geq 1\), repeat the estimate with \(\seminorm{f}_N\) and the weight \(\brackets{1+\norm{\vi x}^2}^N\) for \(N\) large enough), we obtain \(\schwartz \subseteq \Lp{p}{\R^n}\). In addition, \(\schwartz\) is even dense in \(\Lp{p}{\R^n}\). However, this density is with respect to the \(\lp{p}\)-norm, i.e., we “lose” the topology of \(\schwartz\) defined above. Still, it follows from Plancherel’s theorem 1.6 that the Fourier transform is defined on \(\schwartz\), \[ f \in \schwartz: \quad \FTOp f(\vi t) = \int_{\R^n} f(\vi x) \ee^{-2 \pi \im \scal{\vi x}{\vi t}} \dd \vi x. \tag{2.4}\] Recalling that for \(f \in \schwartz\) we have \(\Dgrad^{\vi \alpha} f \in \schwartz \subseteq \Ltwo{\R^n}\), this yields \[ \FTOp \brackets{\Dgrad^{\vi \alpha} f}(\vi t) = (2 \pi \im)^{\absval{\vi \alpha}} \vi t^{\vi \alpha} \cdot \FTOp f (\vi t) \tag{2.5}\] by (2.1) and a computation similar to (1.1). As \(\Dgrad^{\vi \alpha} f \in \Lone{\R^n}\), the left-hand side of (2.5) is bounded, so \(\vi t^{\vi \alpha} \FTOp f(\vi t)\) is bounded for every \(\vi \alpha\), i.e., \(\FTOp f\) decays to 0 at infinity faster than any \(\vi t^{\vi \alpha}\) grows; the same applies to every derivative of \(\FTOp f\) (differentiate (2.4) under the integral sign), which leads to \(\FTOp f \in \schwartz\). Applying similar reasoning to \(\FTOp\Inv\) gives that \(\FTOp: \schwartz \to \schwartz\) is an automorphism of the Schwartz space.
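The derivative identity (2.5), and the self-duality of the Gaussian \(\ee^{-\pi x^2}\) under (2.4), can be checked numerically in one dimension. In the sketch below the truncation radius `R` and the quadrature resolution are ad-hoc numerical choices of ours; note that with the convention (2.4) the factor per derivative comes out as \(+2\pi\im\, t\).

```python
import cmath
import math

def simpson(g, a, b, n=4000):
    """Composite Simpson rule; works for complex-valued integrands."""
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3.0

f  = lambda x: math.exp(-math.pi * x * x)     # Gaussian, lies in S(R)
df = lambda x: -2.0 * math.pi * x * f(x)      # its derivative f'

def ft(g, t, R=8.0):
    """Truncated version of (2.4): integral of g(x) e^{-2 pi i x t} dx."""
    return simpson(lambda x: g(x) * cmath.exp(-2j * math.pi * x * t), -R, R)

t = 0.7
print(abs(ft(f, t) - math.exp(-math.pi * t * t)) < 1e-8)    # -> True (self-dual)
print(abs(ft(df, t) - 2j * math.pi * t * ft(f, t)) < 1e-8)  # -> True (cf. (2.5))
```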

Remark 2.3. We could have used the fact that \(\schwartz\) is dense in \(\ltwo\) (in the \(\ltwo\)-norm), together with \(\FTOp\) being an automorphism of \(\schwartz\), to prove Plancherel’s theorem 1.6.

2.3 Distributions

We turn our attention back to the space of infinitely differentiable compactly supported functions \(\cinftyContf{\Omega}\), with \(\Omega \subseteq \R^n\) open, equipped with the family of norms \[ \norm{\vf}_N = \sum_{\absval{\vi \alpha} \leq N} \norm{\Dgrad^{\vi \alpha} \vf}_{\infty}, \tag{2.6}\] which induce a metric structure on \(\cinftyContf{\Omega}\) (and thus also a topology).

Definition 2.3 (Distribution) A linear functional \(\oper T: \cinftyContf{\Omega} \to \R\) is called a distribution, if for every compact \(K \subseteq \Omega\) there exist constants \(C > 0\) and \(N \in \N_0\) such that for all \(\vf \in \cinftyContf{\Omega}\) with support contained in \(K\), i.e., \(\supp \vf \subseteq K\), it holds that \[ \absval{\oper T(\vf)} \leq C \norm{\vf}_N. \tag{2.7}\]

Proposition 2.3 Equivalently, we can characterize distributions as being continuous linear functionals with respect to the topology induced by (2.6), see [1, def. 6.7 and thm. 6.8].

Let us note that \(C, N\) do depend on \(K\), i.e., on the support of \(\vf\)!

Definition 2.4 (Order of distribution) A distribution \(\oper T\) is of order \(N\), if \(N \in \N_0\) is the smallest integer such that (2.7) holds for every compact \(K \subseteq \Omega\) (with \(C\) still allowed to depend on \(K\)).

Example 2.1 Firstly, let \(\mu\) be a signed Borel measure on \(\Omega\) which is finite on compact sets. Then \(\vf \mapsto \int_{\Omega} \vf \dd \mu\) is a distribution of order 0. Indeed, for \(\supp \vf \subseteq K\), \[ \absval{\oper T(\vf)} = \absval{\int_{K} \vf \dd \mu} \leq \absval{\mu}(K) \sup_{\vi x \in K} \absval{\vf(\vi x)} = \overbrace{\absval{\mu}(K) \norm{\vf}_{\infty}}^{N = 0}. \] Similarly, \(\vf \mapsto \Dgrad^{\vi \alpha} \vf(\vi 0)\) is a distribution of order \(\absval{\vi \alpha}\).

Theorem 2.1 Let \(\oper T\) be a distribution of order 0. Then there exists a signed measure \(\mu\) such that \(\oper T(\vf) = \int_{\Omega} \vf \dd \mu\).

Before proving the theorem, let us introduce the following notation for the space of test functions \(\TestSpace{\Omega} \letDef \cinftyContf{\Omega}\) equipped with the topology considered above.

Proof. Let \(\brackets{U_i}_{i \in \N}\) be a locally finite covering of \(\Omega\) with each \(\closureOf{U_i}\) compact, see Definition 2.2. By the Hahn–Banach theorem, the distribution \(\oper T\) (being of order 0, hence sup-norm continuous) extends to a continuous linear functional on \(\contf{\closureOf{U_i}}{}\) for every \(i \in \N\). Additionally, by the Riesz representation theorem 4.5 there exists a signed measure \(\mu_i\) on \(\closureOf{U_i}\) such that \[ \forall \vf \in \contf{\closureOf{U_i}}{}: \quad \oper T(\vf) = \int_{\closureOf{U_i}} \vf \dd \mu_i. \]

Let \(\brackets{\psi_i}_{i \in \N}\) be a resolution of one, see Remark 2.2, and \(\vf \in \TestSpace{\Omega}\); then \(\vf(\vi x) = \sum_{i \in \N} \vf(\vi x) \psi_i(\vi x)\), as \(\sum_{i \in \N} \psi_i(\vi x) = 1\) for all \(\vi x \in \Omega\). Now, we can calculate \[ \oper T(\vf) = \oper T\brackets{\sum_{i \in \N} \vf \cdot \psi_i} \tOnTop{=}{linear} \sum_{i \in \N} \oper T\overbrace{(\vf \cdot \psi_i)}^{\supp \subseteq U_i} = \sum_{i \in \N} \int_{\closureOf{U_i}} \vf \cdot \psi_i \dd \mu_i. \] Taking \(\dd \mu = \sum_{i \in \N} \psi_i \dd \mu_i\) gives \(\oper T(\vf) = \int_{\Omega} \vf \dd \mu\). As a closing note, let us remark that the sum \(\sum_{i \in \N} \vf \cdot \psi_i\) has only finitely many non-zero terms, since \(\supp \vf\) is compact and thus intersects only finitely many \(U_i\) (see Proposition 2.1); hence linearity of \(\oper T\) is applicable.

Definition 2.5 A distribution \(\oper T\) is called positive, if for every \(\vf \in \TestSpace{\Omega}\) such that \(\vf(\vi x) \geq 0\) for all \(\vi x \in \Omega\) it holds that \(\oper T(\vf) \geq 0\).

Theorem 2.2 Every positive distribution is of order 0.

Proof. Let \(K \subseteq \Omega\) be compact and \(\chi \in \TestSpace{\Omega}\) such that \(0 \leq \chi \leq 1\) and \(\chi(\vi x) = 1\) for all \(\vi x \in K\) (such a cut-off exists by the construction in Section 2.1). For \(\vf \in \TestSpace{\Omega}\) with \(\supp \vf \subseteq K\) we have \[ -\norm{\vf}_{\infty} \chi(\vi x) \leq \vf(\vi x) \leq \norm{\vf}_{\infty} \chi(\vi x). \] Then linearity and positivity of \(\oper T\) yield (recall (2.6)) \[ -\norm{\vf}_{\infty} \oper T(\chi) \leq \oper T(\vf) \leq \norm{\vf}_{\infty} \oper T(\chi) \implies \absval{\oper T(\vf)} \leq \oper T(\chi) \norm{\vf}_{\infty}, \] thus \(N = 0\) in the sense of Definition 2.4.

Corollary 2.1 Every positive distribution “is” a positive measure.

2.3.1 Calculus of Distributions

Definition 2.6 (Differentiation of distribution) Let \(\oper T\) be a distribution. Then we define for \(\vf \in \TestSpace{\Omega}\) \[ \pDeriv{\oper T}{x_j}(\vf) \letDef - \oper T\brackets{\pDeriv{\vf}{x_j}}. \tag{2.8}\]

Intuitively, the minus sign in (2.8) comes from integration by parts.

Remark 2.4. Let \(\oper T\) be a distribution of order \(N\), i.e., \(\absval{\oper T(\vf)} \leq C \norm{\vf}_N\). Then \(\absval{\pDeriv{\oper T}{x_j}(\vf)} = \absval{\oper T\brackets{\pDeriv{\vf}{x_j}}} \leq C \norm{\vf}_{N+1}\), i.e., the distribution \(\pDeriv{\oper T}{x_j}\) is of order at most \(N+1\).

Also, it is important to note that Schwarz’s theorem holds for the differentiation of distributions, i.e., \(\frac{\partial^2 \oper T}{\partial x_j \partial x_k} = \frac{\partial^2 \oper T}{\partial x_k \partial x_j}\).

Example 2.2 Firstly, let \(\dirac(\vf) = \vf(\vi 0)\) be the Dirac distribution (centered at \(0\)). Then \[ \Dgrad^{\vi \alpha} \dirac(\vf) = (-1)^{\absval{\vi \alpha}} \dirac\brackets{\Dgrad^{\vi \alpha} \vf} = (-1)^{\absval{\vi \alpha}} \Dgrad^{\vi \alpha} \vf (\vi 0). \]

Secondly, let \(n = 1\) and \(f(x) = \absval{x}\). What is \(f''(x)\)? We can consider \(\oper T: \vf \mapsto \int_{\R} \vf(x) \cdot \absval{x} \dd x\), then \[\begin{align*} \oper T''(\vf) &= \int_{\R} \vf''(x) \absval{x} \dd x = \overbrace{\vf'(x) \absval{x} \bigg|_{-\infty}^{\infty}}^{\supp \vf \text{ compact}} - \int_{\R} \vf'(x) \sign(x) \dd x \\ &= + \int_{-\infty}^0 \vf'(x) \dd x - \int_0^{\infty} \vf'(x) \dd x = \brackets{\vf(x) \big|_{-\infty}^0} - \brackets{\vf(x) \big|_0^{\infty}} \\ &= \vf(0) + \vf(0) = 2 \dirac(\vf)\\ &\Downarrow \\ f'' &= 2 \dirac. \end{align*}\]
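The computation above can be sanity-checked numerically: for a concrete test function \(\vf\), the value \(\int_{\R} \absval{x}\, \vf''(x) \dd x\) should be close to \(2 \vf(0)\). A minimal pure-Python sketch (the choice of bump function, step sizes, and all helper names are ours, not part of the text):

```python
import math

def phi(x):
    # standard bump function, a test function supported in (-1, 1)
    if x * x >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

def phi_dd(x, h=1e-4):
    # second derivative via a central difference
    return (phi(x + h) - 2.0 * phi(x) + phi(x - h)) / (h * h)

# T''(phi) = integral of |x| * phi''(x) dx, midpoint rule on [-1, 1]
N = 20_000
dx = 2.0 / N
integral = sum(abs(-1.0 + (k + 0.5) * dx) * phi_dd(-1.0 + (k + 0.5) * dx)
               for k in range(N)) * dx

print(integral, 2 * phi(0.0))  # both close to 2/e ≈ 0.7358
```

The agreement with \(2\vf(0) = 2/\ee\) reflects exactly the integration by parts carried out above.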

Remark 2.5. Let \(f \in \contf{\Omega}{}\), then \(f\) corresponds to the distribution \(\oper T: \vf \mapsto \int_{\R^n} f(\vi x) \vf(\vi x) \dd \vi x\). Let us study the case \(\Omega = \R^3\).

If we consider \(E(\vi x) = \frac 1 {\norm{\vi x}}\), what is \(\lapl E\)? We know that \(\lapl E(\vi x) = 0\) for \(\vi x \neq 0\), however we would like to understand it in the distributional sense. Recall Gauss’ divergence theorem, i.e., \[ \Oiint{\boundaryOf B} \vi v \dd \vi A = \iiint_{B} \divergence \vi v \dd x \dd y \dd z \] for a differentiable vector field \(\vi v\), and take \(\vi v = f \cdot \grad g\) where \(f,g\) are differentiable functions. Then \[ \Oiint{\boundaryOf B} f \grad(g) \dd \vi A = \iiint_B \brackets{\scal{\grad f}{\grad g} + f \lapl g} \dd x \dd y \dd z, \] from which follows the Gauss-Green formula, \[\Oiint{\boundaryOf B} \brackets{f \grad(g) - g \grad(f)} \dd \vi A = \iiint_{B} \brackets{f \lapl g - g \lapl f} \dd x \dd y \dd z. \tag{2.9}\]

Now let \(g(\vi x) = \frac 1 {\norm{\vi x}}\) on \(B = \set{\vi x \in \R^3 \divider r \leq \norm{\vi x} \leq R}\) to exclude the singularity of \(g\) at \(0\). Lastly, take \(f \in \TestSpace{\R^3}\) such that \(\supp f \subseteq \set{\vi x \in \R^3 \divider \norm{\vi x} < R}\). Then \(\grad g(\vi x) = \frac {- \vi x}{\norm{\vi x}^3}\) and by (2.9) \[ \Oiint{\norm{\vi x} = R} \bigg(\overbrace{- f(\vi x) \frac {\vi x}{\norm{\vi x}^3} - \frac 1 {\norm{\vi x}} \grad(f)(\vi x)}^{\text{outside of }\supp f \text { with } f \text{ smooth } \implies 0}\bigg) \dd \vi A - \Oiint{\norm{\vi x} = r} \bigg(- f(\vi x) \frac {\vi x}{\norm{\vi x}^3} - \frac 1 {\norm{\vi x}} \grad(f)(\vi x)\bigg) \overbrace{\dd \vi A}^{\frac {\vi x}{r} \dd A} = \iiint_{B} \big(\overbrace{f \lapl g}^{\lapl g = 0} - g \lapl f\big) \dd x \dd y \dd z, \] thus \[ \begin{aligned} \iiint_{B} g \lapl f \dd x \dd y \dd z &= - \Oiint{\norm{\vi x} = r} f(\vi x) \overbrace{\frac {\vi x}{r^3} \cdot \frac {\vi x}{r}}^{\frac {\scal {\vi x}{\vi x}} {\norm{\vi x}^4} = \frac 1 {r^2}} \dd A - \frac 1 r \overbrace{\Oiint{\norm{\vi x} = r} \grad f(\vi x) \dd \vi A}^{\text{divergence theorem}} \\ &= - \frac 1 {r^2} \Oiint{\norm{\vi x} = r} f(\vi x) \dd A - \underbrace{\frac 1 r \iiint_{\norm{\vi x} \leq r} \lapl f \dd x \dd y \dd z}_{f \in \TestSpace{\R^3} \implies \frac 1 r \BigO{r^3} \text{ so } \onBottom{\longrightarrow}{r \to 0} \, 0}. \end{aligned} \] Recall \(f \in \TestSpace{\R^3}\) is (infinitely) differentiable, i.e., \(f(\vi x) = f(\vi 0) + \BigO{\norm{\vi x}}\), hence \[ \Oiint{\norm{\vi x} = r} f(\vi x) \dd A = \Oiint{\norm{\vi x} = r} (f(\vi 0) + \BigO{\norm{\vi x}}) \dd A = \Oiint{\norm{\vi x} = r} f(\vi 0) \dd A + \BigO{r} \underbrace{\Oiint{\norm{\vi x} = r} \dd A}_{\vi x \in \R^3 \implies 4 \pi r^2} = 4 \pi r^2 f(\vi 0) + 4 \pi \BigO{r^3}. 
\] By letting \(r \to 0\), we finally obtain \[ \iiint_{\norm{\vi x} \leq R} g \lapl f \dd x \dd y \dd z = \iiint_{\norm{\vi x} \leq R} \frac {\lapl f(\vi x)}{\norm {\vi x}} \dd x \dd y \dd z = - 4 \pi f(\vi 0) \implies \lapl g = - 4 \pi \dirac_{\vi 0} \tag{2.10}\] as \(\lapl g(f) = g(\lapl f)\) in the sense of the distributional Laplacian.
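The identity (2.10) can also be probed numerically. For a radial test function \(f\), the Laplacian reduces to \(\lapl f = f'' + \frac 2 r f'\), so \(\iiint \frac{\lapl f}{\norm{\vi x}} \dd V = 4 \pi \int_0^1 \brackets{r f''(r) + 2 f'(r)} \dd r\), which should come out close to \(-4 \pi f(\vi 0)\). A pure-Python sketch (bump profile, step sizes, and helper names are ours):

```python
import math

def f(r):
    # radial bump profile: f(x) = exp(-1/(1 - |x|^2)), supported in the unit ball
    if r * r >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - r * r))

def fd(r, h=1e-4):   # f' via central difference
    return (f(r + h) - f(r - h)) / (2.0 * h)

def fdd(r, h=1e-4):  # f'' via central difference
    return (f(r + h) - 2.0 * f(r) + f(r - h)) / (h * h)

def integrand(r):
    # spherical reduction of (laplacian f)/|x| times the volume element r^2
    return r * fdd(r) + 2.0 * fd(r)

# 4*pi * int_0^1 (r f'' + 2 f') dr, midpoint rule
N = 20_000
dr = 1.0 / N
integral = 4.0 * math.pi * sum(integrand((k + 0.5) * dr) for k in range(N)) * dr

print(integral, -4.0 * math.pi * f(0.0))  # both ≈ -4*pi/e ≈ -4.6224
```

The match with \(-4\pi f(\vi 0)\) is precisely the statement \(\lapl \frac 1 {\norm{\vi x}} = -4\pi\dirac_{\vi 0}\) tested against one concrete \(f\).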

2.3.2 Supported Distributions

Definition 2.7 (Support of a distribution) Let us define the support of a distribution \(\oper T\) as \[\begin{align*} \supp \oper T &= \set{\vi x \in \Omega \divider \forall \text{ open } U \ni \vi x \; \exists \vf \in \TestSpace{\Omega} \text{ s.t. } \supp \vf \subseteq U : \; \oper T(\vf) \neq 0} \\ &= \Omega \setminus \set{\vi x \in \Omega \divider \exists \text{ open } U \ni \vi x \; \forall \vf \in \TestSpace{\Omega} \text{ s.t. } \supp \vf \subseteq U : \; \oper T(\vf) = 0}. \end{align*}\]

Remark 2.6. Directly from the definition of differentiation of a distribution (Definition 2.6) it follows that \(\supp \Dgrad^{\vi \alpha} \oper T \subseteq \supp \oper T\), as \(\supp \Dgrad^{\vi \alpha} \vf \subseteq \supp \vf\) for all \(\vf \in \TestSpace{\Omega}\) (recalling that \(\supp \vf \letDef \closureOf{\set{\vi x \in \Omega \divider \vf(\vi x) \neq 0}}\)). Moreover, this set inclusion can even be strict, as illustrated by the previous Example 2.2.

Definition 2.8 Let \(\oper T\) be a distribution on \(\Omega\) and \(a \in \TestSpace{\Omega}\). Then \(a\oper T\) is defined by \((a\oper T)(\vf) = \oper T(a \vf)\).

Remark 2.7. The product rule, and thus also the Leibniz rule, remains valid for \(a\oper T\), i.e., \[ \pDeriv{(a \oper T)}{x_j} = \pDeriv{a}{x_j} \oper T + a \pDeriv{\oper T}{x_j}. \]

Proposition 2.4 Let \(\oper T\) be a distribution of order \(N\) with compact support. Then there exists a constant \(C > 0\), such that \(\absval{\oper T(\vf)} \leq C \norm{\vf}_N\) (in the sense of (2.6)) independent of the support of \(\vf\).

Proof. Let \(\chi \in \TestSpace{\Omega}\) be a test function with \(\chi(\vi x) = 1\) on a neighborhood of the support of \(\oper T\) and let \(K \subseteq \Omega\) be a compact set such that \(\supp \chi \subseteq K\). For a test function \(\vf \in \TestSpace{\Omega}\) it holds that \(\supp (\chi \vf) \subseteq K\), and \(\oper T(\vf \chi) = \oper T(\vf)\) since \(\vf \chi - \vf = 0\) on a neighborhood of \(\supp \oper T\). Now \(\absval{\oper T(\vf)} \leq C \norm{\vf \chi}_N\) where \(C\) depends only on \(K\) (which includes \(\supp (\vf \chi)\)), and because \[\begin{align*} \norm{\vf \chi}_N &= \sum_{\absval{\vi \alpha} \leq N} \norm{\Dgrad^{\vi \alpha} (\vf \chi)}_{\infty} = \sum_{\absval{\vi \alpha} \leq N} \norm{\sum_{\vi \beta \leq \vi \alpha} \binom{\vi \alpha}{\vi \beta} \Dgrad^{\vi \beta} \vf \Dgrad^{\vi \alpha - \vi \beta} \chi}_{\infty} \\ &\leq \sum_{\absval{\vi \alpha} \leq N} \sum_{\vi \beta \leq \vi \alpha} \binom{\vi \alpha}{\vi \beta} \norm{\Dgrad^{\vi \beta} \vf}_{\infty} \norm{\Dgrad^{\vi \alpha - \vi \beta} \chi}_{\infty} \\ &\leq \tilde{C} \cdot \brackets{\sum_{\absval{\vi \alpha} \leq N} \norm{\Dgrad^{\vi \alpha} \vf}_{\infty}} \brackets{\sum_{\absval{\vi \alpha} \leq N} \norm{\Dgrad^{\vi \alpha} \chi}_{\infty}} = \tilde{C} \norm{\vf}_N \norm{\chi}_N, \end{align*}\] we finally obtain \(\absval{\oper T(\vf)} \leq C \cdot \tilde{C} \cdot \norm{\chi}_N \norm{\vf}_N\), where \(C \cdot \tilde{C} \cdot \norm{\chi}_N\) is independent of \(\supp \vf\).

Remark 2.8. Let \(\vf, \chi \in \TestSpace{\Omega}\) be test functions such that \(\chi(\vi x) = 1\) for \(\vi x \in \supp \vf\). Then for \(P\) the Taylor polynomial of \(\vf\) of degree \(N\) at \(\vi 0\) we have \[ \vf(\vi x) = \vf(\vi x) \chi(\vi x) = P(\vi x) \chi(\vi x) + \Psi(\vi x) \quad \& \quad \absval{\Psi(\vi x)} \leq C \norm{\vi x}^{N+1}, \] i.e., \(\Psi(\vi x)\) is also a test function. If, in addition, all derivatives of order \(\leq N\) of \(\vf\) vanish at \(\vi 0\) (so \(P(\vi x) = 0\)), then by differentiation \[ \absval{\Dgrad^{\vi \alpha} \vf(\vi x)} \leq C_{\vi \alpha} \norm{\vi x}^{N+1 - \absval{\vi \alpha}} \] for \(\absval{\vi \alpha} \leq N\).

Theorem 2.3 Let \(\oper T\) be a distribution of order \(N\) with \(\supp \oper T \subseteq F\). If \(\vf\) is a test function such that \(\vf\) and all its derivatives up to order \(N\) vanish on \(F\), then \(\oper T(\vf) = 0\).

Proof. Let \(\ve > 0\) and denote \(\chi_{\ve}\) the regularization of the indicator function of \(F_{2\ve} \letDef \set{\vi x \in \R^n \divider \dist(\vi x, F) \leq 2\ve}\), i.e., \[ \chi_{\ve}(\vi x) = \frac 1 {\ve^n} \int_{F_{2\ve}} \Bumpf{\frac {\vi x - \vi y} {\ve}} \dd \vi y, \] where \(\bumpf\) is the bump function satisfying \(\supp \bumpf \subseteq \closureOf{\ball{\vi 0}{1}}\), \(\bumpf \geq 0\), and \(\int \bumpf(\vi x) \dd \vi x = 1\). Then \(\supp \chi_{\ve} \subseteq F_{4\ve}\) and, by our assumption, \(\oper T(\vf) = \oper T(\vf \chi_{\ve})\). As such, \(\supp \vf \chi_{\ve} \subseteq F_{4 \ve}\) and also \(\vf \chi_{\ve} = 0\) on \(F\) (as \(\vf\) vanishes there). Thus \[ \absval{\oper T(\vf)} = \absval{\oper T(\vf \chi_{\ve})} \leq C \norm{\vf \chi_{\ve}}_N. \] Through differentiation we accumulate powers of \(\ve\) obtaining \(\absval{\Dgrad^{\vi \alpha} \chi_{\ve}(\vi x)} \leq C_{\vi \alpha} \ve^{-\absval{\vi \alpha}}\). On the other hand, the last remark 2.8 dictates that \(\absval{\Dgrad^{\vi \alpha} \vf} \leq C \ve^{N+1 - \absval{\vi \alpha}}\) for \(\absval{\vi \alpha} \leq N\). Hence, on \(F_{4\ve}\) we have \[ \absval{\Dgrad^{\vi \alpha}(\vf \chi_{\ve})} \leq \sum_{\vi \beta + \vi \gamma = \vi \alpha} C_{\vi \beta} C_{\vi \gamma} \ve^{N + 1 - \absval{\vi \beta}} \ve^{-\absval{\vi \gamma}} \leq \tilde{C} \ve^{N+1 - \absval{\vi \alpha}}. \] Thus \(\norm{\vf \chi_{\ve}}_N \leq \tilde{C} \ve\) in the sense of (2.6), which further implies \[ \absval{\oper T(\vf)} = \absval{\oper T(\vf \chi_{\ve})} \leq C \norm{\vf \chi_{\ve}}_N \leq C \cdot \tilde{C} \cdot \ve \] and as \(\ve > 0\) was arbitrary it yields \(\oper T(\vf) = 0\).

Corollary 2.2 A distribution whose support is a single point is a linear combination of the Dirac distribution at that point and its derivatives.

Proof. Without loss of generality let \(\supp \oper T = \set{\vi 0}\). Write, per Remark 2.8, \(\vf(\vi x) = P(\vi x) \chi(\vi x) + \Psi(\vi x)\) where \(P\) is the Taylor polynomial of order \(N\) and \(\Psi\) vanishes up to order \(N\) at \(\vi 0\). From Theorem 2.3 it follows \(\oper T(\Psi) = 0\), therefore \[ \oper T(\vf) = \oper T(P \chi) + \overbrace{\oper T(\Psi)}^0 = \sum_{\absval{\vi \alpha} \leq N} \frac {\Dgrad^{\vi \alpha} \vf(\vi 0)} {\vi \alpha!} \underbrace{\oper T(\vi x^{\vi \alpha} \chi)}_{C_{\vi \alpha}} = \sum_{\absval{\vi \alpha} \leq N} \frac {(-1)^{\absval{\vi \alpha}} C_{\vi \alpha}} {\vi \alpha!} (\Dgrad^{\vi \alpha} \dirac) (\vf), \] as \(P(\vi x) = \sum_{\absval{\vi \alpha} \leq N} \frac {\Dgrad^{\vi \alpha} \vf(\vi 0)} {\vi \alpha!} \vi x^{\vi \alpha}\) (recall (2.1)).

2.3.3 Convolution of Distributions

Let \(\vf, \psi \in \TestSpace{\R^n}\) be test functions, then, as we already know, their convolution reads \(\vf \conv \psi (\vi x) = \int_{\R^n} \vf(\vi x - \vi y) \psi(\vi y) \dd \vi y\) (which is well-defined as \(\vf, \psi\) are compactly supported). We shall now introduce the following notation:

  • reverse of a test function \(\vf\), \[ \reverse{\vf}(\vi x) \letDef \vf(-\vi x) \quad \& \quad \reverse{\oper T}(\vf) \letDef \oper T(\reverse{\vf}); \tag{2.11}\]
  • translation of \(\vf\) by \(\vi h\), \[ \translateBy{\vi h} \vf(\vi x) \letDef \vf(\vi x - \vi h) \quad \& \quad \translateBy{\vi h} \oper T(\vf) \letDef \oper{T}\brackets{\translateBy{-\vi h}\vf}. \tag{2.12}\]
Caution

One should be careful not to forget the minus sign in the definition (2.12) of translation for distributions.

While this might seem arbitrary at first, an attentive reader might realize that \[ \vf \conv \psi (\vi x) = \int_{\R^n} \vf(\vi y) (\translateBy{\vi x} \reverse{\psi})(\vi y) \dd \vi y = \vf\brackets{\translateBy{\vi x} \reverse{\psi}}, \tag{2.13}\] where on the right-hand side \(\vf\) acts as the distribution it induces; this makes the following definition rather natural.

Remark 2.9. It can be shown that \(\translateBy{\vi h}\) commutes with differentiation \(\Dgrad^{\vi \alpha}\).

Definition 2.9 (Convolution of a distribution and test function) Let \(\oper T\) be a distribution and \(\vf \in \TestSpace{\R^n}\) be a test function. Then we define the convolution of a distribution and test function as \(\oper T \conv \vf(\vi x) \letDef \oper T\brackets{\translateBy{\vi x} \reverse{\vf}}\).
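To see Definition 2.9 in action, take \(n = 1\) and \(\oper T = \dirac'\): then \(\dirac' \conv \vf(x) = \dirac'\brackets{\translateBy{x} \reverse{\vf}} = \vf'(x)\), i.e., convolution with \(\dirac'\) differentiates. A small numerical illustration of this bookkeeping (a Gaussian stands in for a test function here, and the helper names are ours):

```python
import math

def phi(x):
    # smooth, rapidly decaying stand-in for a test function
    return math.exp(-x * x)

def dirac_prime(psi, h=1e-6):
    # the distribution delta': psi -> -psi'(0), via a central difference
    return -(psi(h) - psi(-h)) / (2.0 * h)

def conv_dirac_prime_phi(x):
    # (delta' * phi)(x) = delta'(tau_x reverse(phi)), and (tau_x reverse(phi))(y) = phi(x - y)
    return dirac_prime(lambda y: phi(x - y))

for x in (-1.0, 0.3, 2.0):
    print(x, conv_dirac_prime_phi(x), -2.0 * x * math.exp(-x * x))  # matches phi'(x)
```

Note how the reversal and the translation conspire so that the minus sign from \(\dirac'\) exactly produces \(+\vf'(x)\).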

Remark 2.10. Firstly, let us note that, in general, \(\oper T \conv \vf \in \contf{\R^n}{\infty}\), which need not be compactly supported, i.e., we do not necessarily get back a test function.

However, if \(\oper T\) is compactly supported, then so is \(\oper T \conv \vf\). Thus, in this case, \(\oper T \conv \vf \in \TestSpace{\R^n}\).

Theorem 2.4 Let \(\oper L : \TestSpace{\R^n} \to \contf{\R^n}{\infty}\) be a continuous linear map which commutes with translation; then there exists exactly one distribution \(\oper T\) such that \(\oper L \vf = \oper T \conv \vf\) for all \(\vf \in \TestSpace{\R^n}\).

Proof. Since \(\oper L\) commutes with translation, we can simply study \(\oper L(\vf)(\vi 0)\). Now this is a continuous linear functional on \(\TestSpace{\R^n}\), thus by Proposition 2.3 a distribution, say \(\reverse{\oper T}\). Hence, we may write \(\oper L\vf(\vi 0) = \reverse{\oper T}(\vf)\), and by commutation with \(\translationOp\), we have \[ \oper L \vf(\vi x) = \oper L\brackets{\translateBy{- \vi x} \vf}(\vi 0) = \reverse{\oper T}\brackets{\translateBy{-\vi x} \vf} = \oper T\brackets{\translateBy{\vi x} \reverse{\vf}} = \oper T \conv \vf(\vi x). \]

So far, we have discussed a convolution of a distribution with a test function. Now, we shall turn our attention to the convolution of two distributions!

Definition 2.10 (Convolution of distributions) Let \(\oper S\) be a compactly supported distribution and \(\oper T\) a distribution. We define the convolution \(\oper T \conv \oper S\) of a compactly supported distribution with a distribution as \[ (\oper T \conv \oper S) \conv \vf \letDef \oper T \conv \brackets{\oper S \conv \vf}. \tag{2.14}\]

Remark 2.11. Consider the setting of Definition 2.10, then \(\oper S\) gives by convolution in the sense of Definition 2.9 a map \(\TestSpace{\R^n} \to \TestSpace{\R^n}\), while \(\oper T\) gives a map \(\TestSpace{\R^n} \to \contf{\R^n}{\infty}\). Recall (2.14), and notice \[ (\oper T \conv \oper S) \conv \vf = \underbrace{\oper T \conv \overbrace{\brackets{\oper S \conv \vf}}^{\in \TestSpace{\R^n}}}_{\in \contf{\R^n}{\infty}} \] by Remark 2.10, i.e., \((\oper T \conv \oper S) \conv \cdot\) is a continuous linear map \(\TestSpace{\R^n} \to \contf{\R^n}{\infty}\). From Theorem 2.4 it follows that this linear map uniquely corresponds to a distribution, which, in this case, is precisely \(\oper T \conv \oper S\).

Proposition 2.5 In the setting of Definition 2.10, \(\oper T \conv \oper S = \oper S \conv \oper T\) holds.

Proof. Let us show that \(\oper T \conv \oper S \conv \vf \conv \psi = \oper S \conv \oper T \conv \vf \conv \psi\) for two test functions \(\vf, \psi \in \TestSpace{\R^n}\). If this holds, one may choose \(\psi = \bumpf_{\ve}\) (rescaled bump function) and let \(\ve \to 0\), which gives the result.

Hence, denote \(a = \oper T \conv \psi \in \contf{\R^n}{\infty}\) and calculate (using the fact that convolution between (test) functions is commutative) \[\begin{align*} \oper T \conv \oper S \conv \vf \conv \psi &\fromDef \oper T \conv \overbrace{\brackets{\oper S \conv \vf}}^{\in \TestSpace{\R^n}} \conv \psi \tOnTop{=}{com.} \underbrace{\oper T \conv \psi}_{a} \conv \brackets{\oper S \conv \vf} = \oper S \conv \vf \conv a \\ &= \oper S \conv \underbrace{\brackets{\oper T \conv \psi}}_{a} \conv \vf = \oper S \conv \oper T \conv \vf \conv \psi. \end{align*}\]

2.3.4 Harmonic Distributions

Definition 2.11 (Harmonic distribution) A solution to the equation \(\lapl f = 0\) is called a harmonic distribution (or function, if applicable).

Theorem 2.5 (Weyl [2 (pp. 127)]) Let \(\oper T\) be a distribution satisfying \(\lapl \oper T = 0\). Then \(\oper T\) is a function.

Proof. We must show that \(\oper T\) is (at least) a \(\Contf{2}\)-function, because then the notions of distributional and usual Laplacian coincide, and \(\oper T\) appears as an ordinary harmonic function. To this end let \(\vi z \in \R^n\) be fixed and \(\chi\) be a test function, which satisfies \(\chi(\vi x) = 1\) for \(\norm{\vi x - \vi z} \leq r\). We then consider \(\oper S = \chi \oper T\) and show that \(\oper S\) is a \(\Contf{\infty}\)-function on \(\norm{\vi x - \vi z} < r\) (which, in turn, implies \(\oper T\) is a smooth function on a neighborhood of arbitrarily chosen \(\vi z\)).

From the definition of \(\oper S\) and \(\oper T\) being harmonic, we see right away that \(\lapl \oper S\) has a compact support and vanishes on \(\norm{\vi x - \vi z} < r\) (as \(\lapl \oper S(\vf) = 0\) for \(\supp \vf \subseteq \set{\vi x \divider \norm{\vi x - \vi z} < r}\)).

Let \(E(\vi x) = - \frac 1 {(n-2) \omega_n} \frac 1 {\norm{\vi x}^{n-2}}\) with \(\omega_n\) denoting the area of a unit sphere in \(\R^n\) and recall Remark 2.5, where we have shown \(\lapl E = \dirac_{\vi 0}\) (for \(n = 3\), but this holds in general). Take a test function \(\vf\) such that \(\vf(\vi x) = 1\) for \(\norm{\vi x} < \ve\) and \(\vf(\vi x) = 0\) for \(\norm{\vi x} > 2\ve\). Moreover, \[ E(\vi x) = \underbrace{\vf(\vi x) E(\vi x)}_{E_1(\vi x)} + \underbrace{(1 - \vf(\vi x))E(\vi x)}_{E_2(\vi x)}, \] and notice that \(E_2 \in \Contf{\infty}\) (the singularity is hidden by \(1 - \vf\)) and that \(E_1\) is a distribution with compact support (by Definition 2.8, as we can view \(E\) as a distribution). By previous computation we get \(E(\lapl \vf) = \lapl E(\vf) = \dirac_{\vi 0}(\vf) = \vf(\vi 0)\), hence, \[ \oper S = \dirac \conv \oper S = \lapl E \conv \oper S = E \conv \lapl \oper S = E_1 \conv \lapl \oper S + E_2 \conv \lapl \oper S, \] where \(E_1 \conv \lapl \oper S\) is a convolution of two compactly supported distributions, and \(E_2 \conv \lapl \oper S\) is a \(\Contf{\infty}\)-function (by being a convolution of a \(\Contf{\infty}\)-function with a compactly supported distribution, see Remark 2.10). Using the fact that \(\supp \lapl \oper S \subseteq \set{\norm{\vi x - \vi z} \geq r}\) we get \(\supp (E_1 \conv \lapl \oper S) \subseteq \set{\norm{\vi x - \vi z} \geq r - 2\ve}\). Thus on \(\set{\norm{\vi x - \vi z} < r - 2\ve}\) the distribution \(\oper S\) is in fact a \(\Contf{\infty}\)-function, and therefore by the argumentation above \(\oper T\) is a \(\Contf{\infty}\)-function.

Remark 2.12. A solution to \(\lapl \oper T = 0\) is not only \(\Contf{2}\) but even \(\Contf{\infty}\), i.e., the Laplacian forces functions in its kernel to be \(\Contf{\infty}\).

Let us again restrict ourselves, for simplicity, to \(n = 3\) and consider a harmonic function \(f\). In Remark 2.5, we have shown \[ \Oiint{\boundaryOf B} \brackets{f \grad g - g \grad f} \dd \vi A = \iiint_{B} (f \lapl g - g \lapl f) \dd V. \]

Moreover, take \(g(\vi x) = - \frac 1 {4 \pi} \frac 1 {\norm{\vi x}}\) (as in the proof of Theorem 2.5), i.e., \(\lapl g = 0\) for \(\vi x \neq \vi 0\) and \(\grad g(\vi x) = \frac 1 {4 \pi} \frac {\vi x}{\norm{\vi x}^3}\), with \(B = \set{\vi x \in \R^3 \divider r \leq \norm{\vi x} \leq R}\). Then \[\begin{gather*} \Oiint{\norm{\vi x} = R} \brackets{f \grad g - g \grad f} \dd \vi A - \Oiint{\norm{\vi x} = r} \brackets{f \grad g - g \grad f} \dd \vi A = 0\\ \Downarrow\\ \Oiint{\norm{\vi x} = r} \brackets{f \grad g - g \grad f} \dd \vi A = V, \end{gather*}\] where \(V\) does not depend on \(r\). Hence, \[ V = \frac 1 {4 \pi} \Oiint{\norm{\vi x} = r} \brackets{f(\vi x) \frac{\vi x}{\norm{\vi x}^3} + \frac 1 r \grad f} \overbrace{\frac {\vi x}{r} \dd A}^{\dd \vi A} = \frac 1 {4 \pi r^2} \Oiint{\norm{\vi x} = r} f(\vi x) \dd A + \frac 1 {4 \pi r} \underbrace{\Oiint{\norm{\vi x} = r} \grad f \dd \vi A}_{\iiint_{\norm{\vi x} < r} \lapl f \dd V = 0}. \] In other words, we have obtained that \(\frac 1 {4 \pi r^2} \sOiint{\norm{\vi x} = r} f(\vi x) \dd A\) does not depend on \(r\)! Furthermore, if \(f\) is continuous, then \(\lim_{r \to 0} \frac 1 {4 \pi r^2} \sOiint{\norm{\vi x} = r} f(\vi x) \dd A = f(\vi 0)\). Combined with the result above, we have shown the mean value property for harmonic functions \[ f(\vi x_0) = \frac 1 {4 \pi r^2} \Oiint{\norm{\vi x - \vi x_0} = r} f(\vi x) \dd A = \frac 1 {\omega_n r^{n-1}} \oint_{\norm{\vi x - \vi x_0} = r} f(\vi x) \dd A(\vi x). \] In particular, for \(n = 2\) we get the Cauchy integral theorem 4.9, as in \(\R^2\) harmonic functions are the real parts of holomorphic functions (in simply connected domains, to be precise).
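The mean value property lends itself to a direct numerical check: below we average the harmonic function \(f(x,y,z) = x^2 - y^2\) (indeed \(\lapl f = 2 - 2 + 0 = 0\)) over a sphere via midpoint quadrature in spherical coordinates and compare with the value at the center. The quadrature resolution and helper names are ours:

```python
import math

def f(x, y, z):
    # harmonic: the laplacian of x^2 - y^2 is 2 - 2 + 0 = 0
    return x * x - y * y

def sphere_average(g, cx, cy, cz, r, n=200):
    # (1/(4 pi)) * int_0^pi int_0^{2 pi} g(c + r u(theta, phi)) sin(theta) dphi dtheta
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * math.pi / n
        w = math.sin(th) * (math.pi / n) * (2.0 * math.pi / n)
        for j in range(n):
            ph = (j + 0.5) * 2.0 * math.pi / n
            x = cx + r * math.sin(th) * math.cos(ph)
            y = cy + r * math.sin(th) * math.sin(ph)
            z = cz + r * math.cos(th)
            total += g(x, y, z) * w
    return total / (4.0 * math.pi)

print(sphere_average(f, 1.0, 2.0, -0.5, 3.0), f(1.0, 2.0, -0.5))  # both ≈ -3.0
```

Note that the radius \(r = 3\) is much larger than the distances involved: the average is independent of \(r\), exactly as derived above.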

Conversely, let \(f\) be a continuous function satisfying the mean-value property. Take \(\psi_{\ve}\) a regularization function, which, you might recall, is radial and supported on \([0, \ve]\), then \[\begin{align*} \overbrace{f \conv \psi_{\ve}}^{\in \Contf{\infty}}(\vi x) &= \int_{\R^3} f(\vi x - \vi y) \psi_{\ve}(\vi y) \dd V(\vi y) = \int_0^{\infty} \overbrace{\int_{\norm{\vi y} = r} f(\vi x - \vi y) \dd A(\vi y)}^{4 \pi r^2 f(\vi x)} \psi_{\ve}(r) \dd r \\ &= 4 \pi f(\vi x) \underbrace{\int_0^{\ve} r^2 \psi_{\ve}(r) \dd r}_{\int_{\R^3} \psi_{\ve}(\vi x) \dd \vi x = 1 \implies \frac 1 {4 \pi}} = f(\vi x), \end{align*}\] i.e., \(f\) is a \(\Contf{\infty}\)-function!

2.4 Tempered Distributions

Let us now take test functions rapidly decaying at infinity (whereas before we had compactly supported test functions). Thus, we also need to consider distributions which do not grow too fast at infinity — in fact, the name “tempered distributions” is short for “distributions of temperate growth”, where “temperate growth” means polynomial growth. Unfortunately, this introduces a bit of chaos to the naming, where both variants (“tempered distributions” [35] and “temperate distributions” [2]) are common.

Nevertheless, we shall recall the Schwartz space (2.2) of functions such that they and all their derivatives decay at infinity faster than any negative power of \(\norm{\vi x}\). To this end, we defined a family of semi-norms \[ \seminorm{f}_k \letDef \sum_{\absval{\vi \alpha} \leq k} \norm{\brackets{1 + \norm{\vi x}^2}^k \Dgrad^{\vi \alpha} f}_{\infty} \tag{2.15}\] such that the Schwartz space (2.2) is given by \[ \schwartz = \set{f \in \contf{\R^n}{\infty} \divider \forall k \in \N : \; \seminorm{f}_k < \infty}. \] Moreover, remember that we equip \(\schwartz\) with the topology induced by these semi-norms, and that \(\schwartz\) is a complete metric space (however not a normed one).

Theorem 2.6 The space of test functions \(\TestSpace{\R^n}\) is dense in \(\schwartz\).

This theorem should seem rather natural, because surely compactly supported functions decay faster than any negative power of \(\norm{\vi x}\) at infinity.

Proof. Let \(\vf\) be a test function such that \(\vf(\vi x) = 1\) for \(\norm{\vi x} \leq 1\) and \(\vf(\vi x) = 0\) for \(\norm{\vi x} \geq 2\). Take \(f \in \schwartz\) and \(f_n(\vi x) = \vf\brackets{\frac {\vi x} n} f(\vi x) \in \TestSpace{\R^n}\). To prove the density of \(\TestSpace{\R^n}\) in \(\schwartz\) it suffices to show that \(\seminorm{f_n - f}_k \to 0\) for all \(k\).

Surely, \[ \Dgrad^{\vi \alpha} (f_n - f) = \sum_{\vi \beta \leq \vi \alpha} \frac {\vi \alpha!}{\vi \beta! (\vi \alpha - \vi \beta)!} \Dgrad^{\vi \beta} \brackets{1 - \vf\brackets{\frac {\vi x} n}} \Dgrad^{\vi \alpha - \vi \beta} f(\vi x) \] by Leibniz’ product rule. Now as \(\vf \in \TestSpace{\R^n}\), we have \[ \absval{\Dgrad^{\vi \beta} \brackets{1 - \vf \brackets{\frac {\vi x} n}}} \leq \frac {C_{\vf, \vi \beta}} {n^{\absval{\vi \beta}}} \leq \frac {C_{\vf, \vi \beta}} n \] for \(\vi \beta \neq \vi 0\). Then \[ \norm{\brackets{1 + \norm{\vi x}^2}^k \Dgrad^{\vi \alpha}(f_n - f)}_{\infty} \leq \overbrace{\sum_{\vi 0 < \vi \beta \leq \vi \alpha} \frac {\vi \alpha!}{\vi \beta! (\vi \alpha - \vi \beta)!} \frac {C_{\vf, \vi \beta}} n \norm{\brackets{1 + \norm{\vi x}^2}^k \Dgrad^{\vi \alpha - \vi \beta} f}_{\infty}}^{\vi \beta \neq \vi 0} + \norm{1 - \vf}_{\infty} \cdot \overbrace{\sup_{\norm{\vi x} \geq n} \absval{\brackets{1 + \norm{\vi x}^2}^k \Dgrad^{\vi \alpha} f(\vi x)}}^{f \in \schwartz \implies \text{tends to } 0 \text{ for } n \to \infty}, \] which goes to \(0\) for \(n \to \infty\), thus \(f_n \to f\) in the semi-norms (note that \(\absval{\vi \alpha}\) depends on \(k\), not \(n\)).

Remark 2.13. Let us remark that \(\schwartz\) is closed under differentiation (as its elements are infinitely differentiable with all derivatives again rapidly decaying), and that every continuous linear functional on \(\schwartz\) restricts to a distribution (as \(\TestSpace{\R^n} \subseteq \schwartz\)) by Proposition 2.3.

Definition 2.12 (Tempered distribution) A continuous linear functional on \(\schwartz\) is called a tempered distribution.

Remark 2.14. Clearly, a distribution with compact support is tempered. However, there are distributions that are not tempered, e.g., for \(n = 1\) the distribution induced by \(f(x) = \ee^{x}\) diverges on \(\vf(x) = \ee^{-(1+x^2)^{\frac 1 4}} \in \schwartz\) (for a compactly supported test function in \(\TestSpace{\R}\), this would not be an issue).

Proposition 2.6 Let \(\mu\) be a signed measure such that \(\absval{\mu}\brackets{\ball{\vi 0}{R}} \leq C R^N\) for some \(C > 0\) and some \(N > 0\). Then \(\mu\) canonically induces a tempered distribution \(\oper T_{\mu}(\vf) = \int_{\R^n} \vf(\vi x) \dd \mu(\vi x)\).

Proof. It holds that \[ \absval{\oper T_{\mu} (\vf)} \leq \int_{\R^n} \absval{\vf(\vi x)} \dd \absval{\mu}(\vi x), \] and as \(\absval{\vf(\vi x)} \leq \frac{\seminorm{\vf}_k}{\brackets{1 + \norm{\vi x}^2}^k}\) (from (2.15)), we further obtain \[\begin{align*} \int_{\R^n} \frac 1 {\brackets{1 + \norm{\vi x}^2}^k} \dd \absval{\mu}(\vi x) &\leq \underbrace{\int_{\norm{\vi x} \leq 1} \frac 1 {\brackets{1 + \norm{\vi x}^2}^k} \dd \absval{\mu}(\vi x)}_{I} + \sum_{l = 0}^{\infty} \underbrace{\int_{2^l \leq \norm{\vi x} \leq 2^{l+1}} \frac 1 {\brackets{1 + \norm{\vi x}^2}^k} \dd \absval{\mu}(\vi x)}_{\leq \frac 1 {4^{kl}} C 2^{(l+1)N}} \\ &\leq I + C 2^N \sum_{l = 0}^{\infty} 2^{-(2k - N)l}, \end{align*}\] which converges for \(2k > N\). In other words, we have shown that \(\oper T_{\mu}\) is a bounded, thus also continuous, linear functional.

2.4.1 Fourier Transform of Tempered Distributions

Note: Fourier transform on the Schwartz space

Recall that we have already discussed the relationship between the Schwartz space \(\schwartz\) and the Fourier transform in Section 2.2. In particular, for \(f \in \schwartz\) we have \[ \FT{f}(\vi t) = \int_{\R^n} f(\vi x) \ee^{-2 \pi \im \scal{\vi x}{\vi t}} \dd \vi x, \] which converges and is well-defined (by (2.3)). Moreover, from (2.5) we deduced that \(\FT{f} \in \schwartz\) and that \(\FTOp\) is an automorphism on the Schwartz space — indeed, the identity \(\FT{\FT{f}} = \reverse{f}\), where \(\reverse{f}\) is the “reverse” of the function, see (2.11), further implies that the Fourier transform is a continuous bijection on \(\schwartz\).

Definition 2.13 (Fourier transform of tempered distributions) Let \(\oper T\) be a tempered distribution. We define its Fourier transform as \(\FT{\oper T}(\vf) \letDef \oper T(\FT{\vf})\).

One can notice right away that since \(\vf, \FT{\vf} \in \schwartz\), both \(\oper T\) and \(\FT{\oper T}\) are tempered distributions.

Proposition 2.7 The following properties follow directly from Definition 2.13:

  1. \(\FTOp \brackets{\Dgrad^{\vi \alpha} \oper T} = (2 \pi \im \vi t)^{\vi \alpha} \FT{\oper T}\);
  2. \(\Dgrad^{\vi \alpha} \FT{\oper T} = \FTOp \brackets{(-2 \pi \im \vi x)^{\vi \alpha} \oper T}\);
  3. \(\FTOp \brackets{\translateBy{\vi h} \oper T} = \ee^{-2 \pi \im \scal{\vi h}{\vi t}} \FT{\oper T}\);
  4. \(\translateBy{\vi h}\FT{\oper T} = \FTOp \brackets{\ee^{2 \pi \im \scal{\vi h}{\vi x}} \oper T}\).

Proof. Recall Remark 1.2, otherwise obvious.
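As a concrete check of property 1 (with \(n = 1\) and \(\absval{\vi \alpha} = 1\)): for \(f(x) = \ee^{-\pi x^2}\), whose Fourier transform under the present convention is \(\ee^{-\pi t^2}\), the transform of \(f'\) should equal \(2 \pi \im t \, \ee^{-\pi t^2}\). A numerical sketch (the quadrature window and helper names are ours):

```python
import cmath
import math

def fourier(g, t, L=8.0, n=4000):
    # midpoint approximation of F g(t) = int g(x) exp(-2 pi i x t) dx on [-L, L]
    dx = 2.0 * L / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = -L + (k + 0.5) * dx
        total += g(x) * cmath.exp(-2j * math.pi * x * t)
    return total * dx

f = lambda x: math.exp(-math.pi * x * x)                        # F f(t) = exp(-pi t^2)
fp = lambda x: -2.0 * math.pi * x * math.exp(-math.pi * x * x)  # f'

t = 0.6
lhs = fourier(fp, t)                                 # F(f')(t)
rhs = 2j * math.pi * t * math.exp(-math.pi * t * t)  # 2 pi i t * F f(t)
print(abs(lhs - rhs))  # ≈ 0
```

The rapid decay of the Gaussian makes the truncated midpoint rule essentially exact here, so the two sides agree to near machine precision.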

Theorem 2.7 Let \(\mu\) be a signed measure with \(\absval{\mu}(\R^n) < \infty\), i.e., with finite total mass. Then its Fourier transform is the continuous and bounded function \(\FT{\mu}(\vi t)\) defined by \[ \FT{\mu}(\vi t) = \int_{\R^n} \ee^{-2 \pi \im \scal{\vi x}{\vi t}} \dd \mu(\vi x). \]

Proof. Firstly, the defining integral surely exists as \(\mu\) has finite total mass \(\norm{\mu} = \int_{\R^n} \dd \absval{\mu}(\vi x)\). The exponential always has modulus 1, thus boundedness easily follows from \(\absval{\FT{\mu}(\vi t)} \leq \int_{\R^n} \absval{\ee^{-2 \pi \im \scal{\vi t}{\vi x}}} \dd \absval{\mu}(\vi x) = \norm{\mu}\). The continuity of the function \(\FT{\mu}\) stems from the fact that as \(\vi t_k\) converges to \(\vi t_0\), the integrands \(\ee^{- 2 \pi \im \scal{\vi t_k}{\vi x}}\) converge point-wise to \(\ee^{-2 \pi \im \scal{\vi t_0}{\vi x}}\) and are bounded by the constant function \(1\), which is integrable as \(\mu\) has finite total mass; this allows the use of the dominated convergence theorem 4.2.

Lastly, it suffices to show that \(\FT{\mu}\) is indeed the Fourier transform. To that end, let \(\vf \in \schwartz\), then \[\begin{align*} \oper T_{\mu}(\FT{\vf}) = \int_{\R^n} \FT{\vf}(\vi t) \dd \mu(\vi t) &= \int_{\R^n} \int_{\R^n} \vf(\vi x) \ee^{-2 \pi \im \scal{\vi x}{\vi t}} \dd \vi x \dd \mu(\vi t) \\ &= \int_{\R^n} \vf(\vi x) \underbrace{\int_{\R^n} \ee^{-2 \pi \im \scal{\vi t}{\vi x}} \dd \mu(\vi t)}_{\FT{\mu}(\vi x)} \dd \vi x \\ &= \int_{\R^n} \vf(\vi x) \FT{\mu}(\vi x) \dd \vi x = \FT{\oper T_{\mu}}(\vf). \end{align*}\]

Example 2.3 Recall the Poisson summation formula (1.14) from Section 1.3.2, \[ \sum_{\vi \lmbd \in \Lmbd} f(\vi \lmbd) = \sum_{\vi \mu \in \Lmbd^*} \FT{f}(\vi \mu) \] for a lattice \(\Lmbd\) in \(\R^n\). For the identity to hold, we required the sums of \(f\) and \(\FT{f}\) over the lattices to converge absolutely, which we ensured by the assumption (1.12) on both functions. Clearly, if \(f \in \schwartz\) it is trivially satisfied, and by the properties of the Fourier transform on the Schwartz space it is also satisfied for \(\FT{f}\). In other words, this formula holds for all \(f \in \schwartz\).
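For illustration, the formula can be checked numerically for \(\Lmbd = \Z\) (so \(\Lmbd^* = \Z\)) and the Gaussian \(f(x) = \ee^{-\pi a x^2}\), whose Fourier transform is \(\FT{f}(t) = a^{-1/2} \ee^{-\pi t^2 / a}\); the value of \(a\) and the truncation bound below are ours:

```python
import math

a = 0.7
f  = lambda x: math.exp(-math.pi * a * x * x)                    # f in the Schwartz space
Ff = lambda t: math.exp(-math.pi * t * t / a) / math.sqrt(a)     # its Fourier transform

K = 30  # truncation: the omitted tails are far below double precision
lhs = sum(f(k) for k in range(-K, K + 1))    # sum over the lattice Z
rhs = sum(Ff(m) for m in range(-K, K + 1))   # sum over the dual lattice Z* = Z
print(lhs, rhs)  # equal up to rounding
```

Both sides converge extremely fast, so the truncated sums already agree to machine precision.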

However, we can also translate this into the distributional setting by defining the Dirac comb distribution \[ \oper T = \sum_{\vi \lmbd \in \Lmbd} \dirac_{\vi \lmbd} \implies \FT{\oper T} = \sum_{\vi \mu \in \Lmbd^*} \dirac_{\vi \mu}. \] In particular, it holds that \(\FT{\dirac_{\vi 0}}(\vf) = \dirac_{\vi 0}(\FT{\vf}) = \int_{\R^n} \vf(\vi x) \dd \vi x\) and as such \(\FT{\dirac_{\vi 0}} = 1\) in the functional sense (by the canonical induction of distributions from functions).

Definition 2.14 (Convolution measure) Let \(\mu\) and \(\nu\) be finite measures on \(\R^n\). Then we define the convolution measure as \[ \mu \conv \nu(A) = \int_{\R^n} \mu(A - \vi x) \dd \nu(\vi x), \] where \(A - \vi x\) denotes the set \(A\) translated by \(-\vi x\).

Surely, \(\mu \conv \nu\) is also a finite measure such that \(\norm{\mu \conv \nu} \leq \norm{\mu} \cdot \norm{\nu}\). Also, the requirements of Proposition 2.6 are satisfied, thus the convolution measure “is” a tempered distribution. In other words, we can calculate \[ \FT{\mu \conv \nu}(\vi t) = \int_{\R^n} \ee^{-2 \pi \im \scal{\vi t}{\vi x}} \dd (\mu \conv \nu)(\vi x) = \int_{\R^n} \int_{\R^n} \ee^{-2 \pi \im \scal{\vi t}{\vi y + \vi x}} \dd \mu(\vi y) \dd \nu(\vi x) = \FT{\mu}(\vi t) \FT{\nu}(\vi t). \]
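The identity \(\FT{\mu \conv \nu} = \FT{\mu} \cdot \FT{\nu}\) is easy to verify for finite atomic measures \(\mu = \sum_i a_i \dirac_{x_i}\): then \(\FT{\mu}(t) = \sum_i a_i \ee^{-2 \pi \im x_i t}\) and \(\mu \conv \nu = \sum_{i,j} a_i b_j \dirac_{x_i + y_j}\). A sketch with arbitrary sample atoms (the data are ours):

```python
import cmath
import math

# finite (signed) atomic measures as lists of (weight, location) pairs
mu = [(0.5, -1.0), (1.5, 0.3)]
nu = [(2.0, 0.7), (-0.4, 2.0)]

def ft(measure, t):
    # F measure(t) = int exp(-2 pi i x t) d measure(x)
    return sum(w * cmath.exp(-2j * math.pi * x * t) for w, x in measure)

# convolution of atomic measures: locations add, weights multiply
conv = [(w1 * w2, x1 + x2) for w1, x1 in mu for w2, x2 in nu]

t = 1.3
print(abs(ft(conv, t) - ft(mu, t) * ft(nu, t)))  # ≈ 0
```

Signed atoms are included deliberately: the computation only uses finite total mass, not positivity.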

Recall now the Paley-Wiener theorem 1.7, which hinted at entire functions of restricted growth being the Fourier transforms of square-integrable functions supported in a bounded interval of the real line.

Theorem 2.8 Let \(\oper T\) be a distribution with \(\supp \oper T \subseteq \ball{\vi 0}{A}\). Then \(\FT{\oper T}\) is a function on \(\R^n\) that extends to an entire function on \(\C^n\). This function satisfies \[ \absval{\FT{\oper T}(\vi z)} \leq C \brackets{1 + \norm{\vi z}^2}^{\frac N 2} \ee^{2 \pi A \norm{\imOf{\vi z}}} \] for some \(C, N\), and is given by \(\FT{\oper T}(\vi z) = \oper T\brackets{\ee^{-2 \pi \im \scal{\cdot}{\vi z}}}\).

Proof. We shall start by writing down the Taylor expansion in \(\vi z\) of \(\ee^{-2 \pi \im \scal{\vi x}{\vi z}} = \sum_{\vi \alpha} \frac {(-2 \pi \im \vi x)^{\vi \alpha}} {\vi \alpha!} \vi z^{\vi \alpha}\) and defining \[ \oper T\big(\overbrace{(-2 \pi \im \vi x)^{\vi \alpha}}^{\notin \TestSpace{\R^n}}\big) = \oper T\big(\overbrace{(-2 \pi \im \vi x)^{\vi \alpha} \chi(\vi x)}^{\in \TestSpace{\R^n}}\big) \] for \(\chi \in \TestSpace{\R^n}\) such that \(\chi(\vi x) = 1\) when \(\norm{\vi x} \leq A\). Then \(\absval{\oper T\brackets{(-2 \pi \im \vi x)^{\vi \alpha}}} \leq C \seminorm{(- 2 \pi \im \vi x)^{\vi \alpha} \chi(\vi x)}_N\) for some \(C > 0\) and some \(N > 0\), where the semi-norm is polynomial in \(\vi \alpha\) (by the internal differentiation). Thus \(\oper T\brackets{\ee^{-2 \pi \im \scal {\vi x}{\vi z}}} = \sum_{\vi \alpha} \frac {\oper T\brackets{(-2 \pi \im \vi x)^{\vi \alpha}} \vi z^{\vi \alpha}} {\vi \alpha!}\) converges uniformly on compact subsets of \(\C^n\) due to the polynomial behavior of \(\oper T\brackets{(-2 \pi \im \vi x)^{\vi \alpha}}\). Now6, by Definition 2.3, \[\begin{align*} \absval{\oper T\brackets{\ee^{-2 \pi \im \scal {\vi x}{\vi z}}}} \leq C' \sum_{\absval{\vi \alpha} \leq N} \sup_{\norm{\vi x} \leq 2 \pi A} \underbrace{\norm{\Dgrad^{\vi \alpha} \ee^{-2 \pi \im \scal{\vi x}{\vi z}}}}_{\norm{(-2 \pi \im \vi z)^{\vi \alpha} \ee^{- 2 \pi \im \scal{\vi x}{\vi z}}}}, \end{align*}\] where \(\norm{(- 2 \pi \im \vi z)^{\vi \alpha}} \leq C \brackets{1 + \norm{\vi z}^2}^{\frac N 2}\) and \(\absval{\ee^{-2 \pi \im \scal{\vi x}{\vi z}}} \leq \ee^{2 \pi A \norm{\ImOf{\vi z}}}\).

Lastly, we shall verify that we have indeed found the Fourier transform. Let \(\vf \in \schwartz\) be a test function (such that \(\FT{\vf}\) is a function of the variable \(\vi t\)), then \[ \oper T(\FT{\vf}) = \oper T \brackets{\int_{\R^n} \vf(\vi x) \ee^{-2 \pi \im \scal{\vi t}{\vi x}} \dd \vi x} = \int_{\R^n} \vf (\vi x) \oper T\brackets{\ee^{-2 \pi \im \scal {\vi x}{\vi t}}} \dd \vi x = \FT{\oper T}(\vf), \] where the second equality is justified by the fact that the integral can be approximated by Riemann sums, together with \(\oper T\) being continuous and linear.

2.4.2 Convolutions of Tempered Distributions

Remark 2.15. Let \(f,g \in \schwartz\) be Schwartz functions, then \(f \conv g\) is defined (as \(f,g \in \lone\), see (2.3)) and \(\FT{f \conv g} = \FT{f} \cdot \FT{g} \in \schwartz\) (both \(\FT{f}, \FT{g} \in \schwartz\)), thus also \(f \conv g \in \schwartz\).

We shall now consider the convolution of a tempered distribution with a Schwartz test function, analogously to the “general distribution” case, see Definition 2.9. In other words, for \(\oper T\) a tempered distribution and \(f \in \schwartz\), we have \[ \oper T \conv f(\vi x) \letDef \oper T\big(\underbrace{\translateBy{\vi x} \reverse{f}}_{\in \schwartz}\big), \tag{2.16}\] which is surely well-defined.

Theorem 2.9 Let \(\oper T\) be a tempered distribution and \(f \in \schwartz\). Then \(\oper T \conv f\) is a \(\Contf{\infty}\)-function and satisfies \[ \absval{\oper T \conv f(\vi x)} \leq C \brackets{1 + \norm{\vi x}^2}^N \] for some \(C, N\).

In other words, Theorem 2.9 states that \(\oper T \conv f\) is a function growing at most polynomially (whilst being in \(\Contf{\infty}\)).

Proof. Right away, we have for some \(C, N\) (see Definition 2.3) \[\begin{align*} \absval{\oper T \conv f(\vi h)} \fromDef \absval{\oper T\brackets{\translateBy{\vi h} \reverse{f}}} &\leq C \seminorm{\translateBy{\vi h} \reverse{f}}_N\\ &= C \sum_{\absval{\vi \alpha} \leq N} \norm{\brackets{1 + \norm{\vi x}^2}^N \Dgrad^{\vi \alpha} f(\vi x - \vi h)}_{\infty} \\ &\tOnTop{=}{subs.} C \sum_{\absval{\vi \alpha} \leq N} \norm{\brackets{1 + \norm{\vi x + \vi h}^2}^N \Dgrad^{\vi \alpha} f(\vi x)}_{\infty} \\ &\leq 2^N C \seminorm{f}_N \brackets{1 + \norm{\vi h}^2}^N, \end{align*}\] where we used \((1 + \norm{\vi x + \vi h}^2) \leq 2 (1 + \norm{\vi x}^2)(1 + \norm{\vi h}^2)\). Let us remark that the \(\linfty\)-norm is understood w.r.t. \(\vi x\). The fact that \(\oper T \conv f \in \Contf{\infty}\) stems from the relationship of distributions and differential operators, see Definition 2.6, and \(f\) being Schwartz.

Let us now state an analog of Theorem 2.4 for tempered distributions.

Theorem 2.10 Let \(\oper L\) be a continuous linear map \(\schwartz \to \Contf{\infty}\), which commutes with all translations. Then \(\oper L\) is a convolution with a tempered distribution.

Proof. Analogous to the proof of Theorem 2.4.

Theorem 2.11 Let \(\oper T\) be a tempered distribution and \(f \in \schwartz\). Then \(\FT{\oper T \conv f} = \FT{f} \cdot \FT{\oper T}\).

Proof. Since \(\oper T\) is tempered, also \(\FT{\oper T}\) is tempered and thus \(\FT{f} \cdot \FT{\oper T}\) is tempered. Furthermore, even \(\oper T \conv f\) is a tempered distribution, since it is a \(\Contf{\infty}\)-function of at most polynomial growth (by Theorem 2.9) and as such we can integrate it against any \(g \in \schwartz\) and get convergence. To complete the proof we calculate for an arbitrary \(g \in \schwartz\) \[\begin{align*} \FT{\oper T \conv f}(g) &\fromDef (\oper T \conv f)(\FT{g}) = \oper T \conv f \conv \reverse{\FT{g} }(\vi 0) = \oper T \conv \big(\reverse{f} \conv \FT{g}\reverse{\big)} (\vi 0)\\ &= \oper T \brackets{\reverse{f} \conv \FT{g}} = \oper T\brackets{\big( f \conv \reverse{\FT{g}} \reverse{\big)}} = \FT{\oper T} \brackets{\FT{f \conv \reverse{\FT{g}}}} \\ &= \FT{\oper T}\brackets{\FT{f} \cdot \FT{\reverse{\FT{g}}}} = \FT{\oper T} \brackets{\FT{f} \cdot \reverse{\reverse{g}}} = \FT{\oper T} \brackets{\FT{f} \cdot g} \\ &= \FT{f} \cdot \FT{\oper T}(g). \end{align*}\] using Theorem 2.10 and (2.13) for the second step (with \(\FT{g} = \translateBy{\vi 0} \FT{g}\)), and \(\FT{\FT{f}} = \reverse{f}\) for the eighth step, as well as the fact that \[\begin{align*} \big( f \conv g \reverse{\big)}(\vi x) &= (f \conv g)(- \vi x) = \int_{\R^n} f(- \vi x - \vi y) g(\vi y) \dd \vi y = \int_{\R^n} \reverse{f}(\vi x + \vi y) \reverse{g}(- \vi y) \dd \vi y \\ &\onTop{=}{\vi s = - \vi y} \int_{\R^n} \reverse{f}(\vi x - \vi s) \reverse{g}(\vi s) \dd \vi s = (\reverse{f} \conv \reverse{g})(\vi x). \end{align*}\]
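Theorem 2.11 can be illustrated concretely with the (compactly supported, hence tempered) distribution \(\oper T = \dirac_a\) and a Gaussian \(f(x) = \ee^{-\pi x^2}\): then \(\oper T \conv f(x) = f(x - a)\), and the claim \(\FT{\oper T \conv f} = \FT{f} \cdot \FT{\oper T}\) reduces to the classical shift theorem \(\FT{f(\cdot - a)}(t) = \ee^{-2\pi\im a t} \FT{f}(t)\). A crude quadrature check (the helper `ft_num`, the window and step size are ad-hoc choices of this sketch):

```python
import cmath, math

def ft_num(g, t, lo=-12.0, hi=12.0, n=4000):
    """Trapezoidal approximation of the Fourier transform of g at frequency t."""
    h = (hi - lo) / n
    vals = [g(lo + k * h) * cmath.exp(-2j * math.pi * t * (lo + k * h))
            for k in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

a = 0.8
f = lambda x: math.exp(-math.pi * x * x)     # self-dual Gaussian, F{f}(t) = exp(-pi t^2)
for t in (0.0, 0.5, 1.1):
    lhs = ft_num(lambda x: f(x - a), t)                              # F{T conv f}(t)
    rhs = cmath.exp(-2j * math.pi * a * t) * math.exp(-math.pi * t * t)  # F{T}(t) F{f}(t)
    assert abs(lhs - rhs) < 1e-6
```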

Theorem 2.12 Let \(\oper R\) be a compactly supported distribution and \(\oper T\) a tempered one. Then \(\oper T \conv \oper R\) is tempered and \(\FT{\oper T \conv \oper R} = \FT{\oper R} \cdot \FT{\oper T}\).

Proof. Firstly, observe that convolution with \(\oper R\) continuously takes \(\schwartz\) into itself. And since \(\FT{\oper R}\) is a \(\Contf{\infty}\)-function of polynomial growth by Theorem 2.8, multiplication (as in Definition 2.8) with \(\FT{\oper R}\) also takes \(\schwartz\) into itself continuously, i.e., the problem is well-defined.

Now, \(\oper T \conv \oper R\) is clearly a continuous linear map in the sense of Remark 2.11, thus a tempered distribution. Then \[\begin{align*} \FT{\oper T \conv \oper R}(f) &\fromDef (\oper T \conv \oper R)(\FT{f}) = \oper T \conv \oper R \conv \reverse{\FT{f}}(\vi 0) \\ &= \oper T\brackets{\FT{\FT{\oper R} \cdot f}} = \FT{\oper T}\brackets{\FT{\oper R} f} = \FT{\oper R} \cdot \FT{\oper T}(f), \end{align*}\] where we used the same “tricks” as in the proof of Theorem 2.11. In particular, by the introductory remarks to this proof, \(\FT{\oper R \conv \reverse{\FT{f}}} = \FT{\oper R} \cdot f\) is a Schwartz function, and \(\bigg( \oper R \conv \reverse{\FT{f}} \reverse{\bigg)} = \FT{\FT{\oper R} \cdot f}\).

2.5 Analytic Continuation

Remark 2.16 (Homogeneous distribution [2, chpt. 23]). Let \(f\) be a locally integrable function on \(\R^n\) (which thus canonically prescribes a distribution) and \(\vi L\) an invertible linear transformation of \(\R^n\) into itself. Such a mapping \(\vi L\) induces a new function \((f \after \vi L)(\vi x) = f(\vi L \vi x)\), and the operation \(f \mapsto f \after \vi L\) is again invertible. Using the distribution perspective, we have \[ (f \after \vi L)(\vf) = \int_{\R^n} f(\vi L \vi x) \vf(\vi x) \dd \vi x \onTop{=}{\vi y = \vi L \vi x} \int_{\R^n} f(\vi y) \vf(\vi L\Inv \vi y) \absval{\det \vi L\Inv} \dd \vi y = f\brackets{\frac {\vf \after \vi L\Inv} {\absval{\det \vi L}}}, \] where the last equality holds in the distributional sense. Following this principle, we define \((\oper T \after \vi L)(\vf) \letDef \frac 1 {\absval{\det \vi L}} \oper T(\vf \after \vi L\Inv)\), and as such the composition with \(\vi L\) defines a linear mapping of the space of distributions onto itself (for distributions with \(\vi L\)-invariant domains).

An example of particular importance for us is \(\vi L_{\ve} = \ve \ident_n\) for some positive number \(\ve\). Here \(\absval{\det \vi L_{\ve}} = \ve^n\) and \[ (\oper T \after \vi L_{\ve})(\vf) = \frac 1 {\ve^n} \oper T\brackets{\vf\brackets{\frac{\cdot} {\ve}}}. \tag{2.17}\] Using this equality, a distribution \(\oper T\) is homogeneous of degree \(k\), if for all positive \(\ve\), it holds that \(\oper T \after \vi L_{\ve} = \ve^k \oper T\). In particular, for the Dirac distribution at origin we have \[ (\dirac_{\vi 0} \after \vi L_{\ve})(\vf) = \frac 1 {\ve^n} \dirac_{\vi 0}\brackets{\vf\brackets{\frac{\cdot}{\ve}}} = \frac 1 {\ve^n} \vf\brackets{\frac 0 {\ve}} = \ve^{-n} \dirac_{\vi 0}(\vf), \tag{2.18}\] i.e., the Dirac distribution at origin \(\dirac_{\vi 0}\) is homogeneous of degree \(-n\).

Caution

The following example has been adapted from Prof. Grabner’s lecture and [2] (page 156 and onward).

Example 2.4 Consider the function \(f(\vi x) = \frac 1 {\norm{\vi x}^{n- \lmbd}}\) for \(\reOf{\lmbd} > 0\). Then \(f\) is locally integrable, i.e., for every ball we can evaluate the integral7, and thus canonically corresponds to a tempered distribution. We shall show that the canonical distribution \(\oper T_{\lmbd}\) is meromorphic8 with respect to \(\lmbd\).

For \(0 < \reOf \lmbd < \frac n 2\) we can write (so that the “\(\ltwo\)-part” is still square-integrable) \[ f(\vi x) = \underbrace{\chi_{\norm{\cdot} \leq 1}(\vi x) \frac 1 {\norm{\vi x}^{n- \lmbd}}}_{\in \lone} + \underbrace{\chi_{\norm{\cdot} > 1}(\vi x) \frac 1 {\norm{\vi x}^{n- \lmbd}}}_{\in \ltwo}, \tag{2.19}\] and because we can compute the Fourier transform of both parts (using Definition 1.1 and Theorem 1.6), by linearity of \(\FTOp\) we have that the Fourier transform of \(f\) is again a function. Since \(f\) is radial, \(\FT{f}\) is also radial. Furthermore, because \(f\) is homogeneous of degree \(\lmbd-n\), i.e., \(f(t \vi x) = t^{\lmbd - n}f(\vi x)\) for \(t > 0\), \(\FT{f}\) has to also be homogeneous of degree \(-n - (\lmbd - n) = -\lmbd\). In total, we obtain \(\FT{f}(\vi t) = C(n, \lmbd) \frac 1 {\norm{\vi t}^{\lmbd}}\).

Now, we can use just one well-chosen test function to discern the full form of \(C(n, \lmbd)\). Indeed, let \(\psi(\vi x) = \ee^{- \pi \norm{\vi x}^2}\) (which equals its own Fourier transform), then using the equality \(\oper T_{\FT{f}} \psi = \FT{\oper T}_f \psi = \oper T_f \FT{\psi}\), we have \[\begin{align*} \oper T_f \FT{\psi} &= \int_{\R^n} f(\vi x) \ee^{- \pi \norm{\vi x}^2} \dd \vi x = \omega_n \int_0^{\infty} r^{\lmbd - n} \ee^{- \pi r^2} r^{n-1} \dd r = \omega_n \int_0^{\infty} r^{\lmbd-1} \ee^{- \pi r^2} \dd r \\ &\onTop{=}{v = \pi r^2} \frac {\omega_n} {2 \pi} \int_0^{\infty} \brackets{\frac v {\pi}}^{\frac{\lmbd - 2}{2}} \ee^{-v} \dd v = \frac {\omega_n} 2 \pi^{- \frac {\lmbd} 2} \int_0^{\infty} v^{\frac {\lmbd} 2 - 1} \ee^{-v} \dd v\\ &= \frac {\omega_n} 2 \pi^{- \frac {\lmbd} 2} \GammaF{\frac {\lmbd} 2}, \end{align*}\] and \[\begin{align*} \oper T_{\FT{f}} \psi &= \int_{\R^n} \FT{f}(\vi t) \ee^{- \pi \norm{\vi t}^2} \dd \vi t = \omega_n C(n, \lmbd) \int_0^{\infty} r^{-\lmbd} \ee^{- \pi r^2} r^{n-1} \dd r\\ &= \omega_n C(n, \lmbd) \int_0^{\infty} r^{n- \lmbd -1} \ee^{- \pi r^2} \dd r \\ &\onTop{=}{v = \pi r^2} \frac {\omega_n} {2 \pi} C(n, \lmbd) \int_0^{\infty} \brackets{\frac v {\pi}}^{\frac{n - \lmbd - 2} 2} \ee^{-v} \dd v = \frac {\omega_n} 2 \pi^{\frac{\lmbd - n} 2} C(n, \lmbd) \int_0^{\infty} v^{\frac{n - \lmbd} 2 - 1} \ee^{-v} \dd v \\ &= \frac {\omega_n} 2 \pi^{\frac{\lmbd - n} 2} C(n, \lmbd) \GammaF{\frac{n - \lmbd} 2}. \end{align*}\] Comparing both sides yields \[ C(n, \lmbd) = \pi^{\frac n 2 - \lmbd} \frac{\GammaF{\frac {\lmbd} 2}}{\GammaF{\frac {n - \lmbd} 2}} \implies \FT{f}(\vi t) = \pi^{\frac n 2 - \lmbd} \frac{\GammaF{\frac {\lmbd} 2}}{\GammaF{\frac {n - \lmbd} 2}} \norm{\vi t}^{-\lmbd}, \tag{2.20}\] which even holds for \(0 < \reOf{\lmbd} < n\) (by analytic continuation). Note that (a multiple of) \(\oper T_{f}\) is typically called the Riesz potential w.r.t \(\lmbd\).
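The constant (2.20) can be sanity-checked numerically. For \(n = 1\) and a real \(0 < \lmbd < 1\), pairing both \(f\) and \(C(1,\lmbd)\norm{\vi t}^{-\lmbd}\) with the Gaussian \(\psi\) must produce equal numbers. In the sketch below (helper names and the choice \(\lmbd = 0.6\) are arbitrary), the endpoint singularity of \(r^a\) is removed by the substitution \(s = r^{1+a}\) before applying a midpoint rule:

```python
import math

def gauss_moment(a: float, n: int = 50000) -> float:
    """integral_0^inf r^a exp(-pi r^2) dr for a > -1, via the substitution
    s = r^(1+a), which makes the integrand smooth at 0."""
    p = 1.0 + a
    hi = 10.0 ** p            # corresponds to truncating r at 10
    h = hi / n
    return sum(math.exp(-math.pi * ((k + 0.5) * h) ** (2.0 / p))
               for k in range(n)) * h / p

lam = 0.6                     # any value in (0, 1) works here
C = math.pi ** (0.5 - lam) * math.gamma(lam / 2) / math.gamma((1 - lam) / 2)
lhs = 2 * gauss_moment(lam - 1)        # T_f(F{psi}) = integral |x|^(lam-1) psi
rhs = C * 2 * gauss_moment(-lam)       # T_{F f}(psi) = C * integral |t|^(-lam) psi
assert abs(lhs - rhs) < 1e-6
```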

Indeed, recall that \(\lmbd\) is a complex parameter, and the dependence of \(\int_{\R^n} \frac 1 {\norm{\vi x}^{n - \lmbd}} \vf(\vi x) \dd \vi x\) on \(\lmbd\) is holomorphic (for a fixed \(\vf\) and \(\reOf{\lmbd} > 0\)). This mapping is even meromorphic for \(\lmbd \in \C\), with poles only at the non-positive even integers, as can be seen from (2.20). So far, we relied on the fact that \(f\) was integrable around \(0\) (see (2.19)), which simply does not hold for \(\reOf{\lmbd} > n\); we shall come back to this issue later.

Example 2.5 For simplicity, let us make the situation easier by taking \(n = 1\). For \(f(x) = \absval{x}\) we have already computed in Example 2.2 that \(\Dgrad^2 f = 2 \dirac\) (in the distribution sense). Also, by Example 2.3, \(\FT{\Dgrad^2 f} = \FT{2 \dirac_0} = 2\), so \[ - 4 \pi^2 t^2 \FT{f}(t) = 2 \implies \FT{f}(t) = - \frac 1 {2 \pi^2 t^2} \] for \(t \neq 0\). This is also suggested by the homogeneity argument used above. Consider now the following distribution \[ \oper S(\vf) \letDef \frac 1 {\pi^2} \int_0^{\infty} \brackets{\vf(0) - \frac {\vf(x) + \vf(-x)} 2} \frac {\dd x}{x^2}, \quad \vf \in \schwartz(\R), \] where we can interpret \(\frac {\vf(x) + \vf(-x)} 2\) as an average of \(\vf\) over the sphere of radius \(x\) (here just two points), which vanishes for \(x \to \infty\) by \(\vf \in \schwartz(\R)\). In other words, the integrand behaves like \(\frac 1 {x^2}\) for large \(x\), and is bounded near the origin (by Taylor expansion, for example). Therefore, \(\oper S(\vf)\) converges for every \(\vf \in \schwartz(\R)\). However, the distribution prescribed by \(\FT{f}\) is \(-\frac 1 {2 \pi^2} \int_{-\infty}^{\infty} \vf(t) \frac {\dd t}{t^2}\) (notice the different lower bound), which does not converge9 for every \(\vf \in \schwartz(\R)\). If \(\vf\) vanishes at the origin, we indeed get \(\oper S = \oper T_{\FT{f}} (= \FT{f})\), thus \(\oper S - \FT{f} = c\dirac_0\) must hold for an appropriate constant \(c\).
By noticing \[ \begin{aligned} (\oper S \after \vi L_{\ve})(\vf) &= \frac 1 {\ve} \oper S(\vf \after \vi L_{\ve}\Inv) = \frac 1 {\pi^2 \ve} \int_0^{\infty} \brackets{\vf(0)- \frac{\vf\brackets{\frac x {\ve}} + \vf \brackets{- \frac {x} {\ve}}} 2} \frac {\dd x}{x^2} \\ &\onTop{=}{y = \frac{x}{\ve}} \frac{1}{\pi^2 \ve} \int_0^{\infty} \brackets{\vf(0) - \frac{\vf(y) + \vf(-y)} 2} \frac{\ve \dd y}{\ve^2 y^2} = \ve^{-2} \oper S(\vf), \end{aligned} \tag{2.21}\] we get that both \(\oper S\) and \(\FT{f}\) are homogeneous of degree \(-2\), whereas \(\dirac_{0}\) is only homogeneous of degree \(-1\) (see (2.18)), thus \(c = 0\) and \(\oper S = \FT{f}\). To put it differently, \(\oper S\) is an analytic continuation (or canonical extension) of \(\FT{f}\).
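The identification \(\oper S = \FT{f}\) can be tested against the self-dual Gaussian \(\vf(x) = \ee^{-\pi x^2}\): on the one hand \(\oper T_f(\FT{\vf}) = \int_{\R} \absval{x} \ee^{-\pi x^2} \dd x = \frac 1 \pi\), and on the other hand \(\oper S(\vf) = \frac 1 {\pi^2}\int_0^\infty \frac{1 - \ee^{-\pi x^2}}{x^2} \dd x\), which should give the same value. A quadrature sketch (the window and the closed-form \(1/x^2\) tail correction beyond it are ad-hoc choices):

```python
import math

def S_gaussian(n: int = 200000, hi: float = 60.0) -> float:
    """S(phi) for phi(x) = exp(-pi x^2) via the midpoint rule on [0, hi];
    the remaining tail of 1/x^2 is added exactly as 1/hi."""
    h = hi / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += (1.0 - math.exp(-math.pi * x * x)) / (x * x)
    return (total * h + 1.0 / hi) / math.pi ** 2

# both pairings give 1/pi
assert abs(S_gaussian() - 1.0 / math.pi) < 1e-6
```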

Similarly for general \(n\) and \(f(\vi x) = \norm{\vi x}\) we have that \(\oper T_{f}(\vf) = \int_{\R^n} \norm{\vi x}\vf(\vi x) \dd \vi x\) is a tempered distribution (\(\vf \in \schwartz(\R^n)\) decays faster than any negative power of \(\norm{\vi x}\), hence \(\norm{\vi x} \vf(\vi x)\) is integrable). Moreover, we again expect the Fourier transform to be \(\FT{f}(\vi t) = - \frac {c} {\norm{\vi t}^{n+1}}\) by the homogeneity argument (for \(\vi t\neq \vi 0\)). Now take \[ \oper S(\vf) \letDef c \int_0^{\infty} \brackets{\vf(\vi 0) - \frac 1 {\omega_n} \int_{\norm{\vi x} = 1} \vf(r \vi x) \dd A(\vi x)} \frac {\dd r}{r^2}, \] and again \(\oper S(\vf) = \FT{f}(\vf)\) whenever \(\vf(\vi 0) = 0\), hence \(\oper S - \FT{f} = \mu \dirac_{\vi 0}\). The terms on the LHS are both homogeneous of degree \(- n - 1\), whereas \(\dirac_{\vi 0}\) is homogeneous of degree \(-n\) (see (2.18)), which once again leads to \(\mu = 0\).

Lastly, to determine \(c\) we calculate \(\grad f = \frac{\vi x}{\norm{\vi x}}\) and \[ \begin{gathered} \partialOp{x_1} \frac {x_1} {\norm{\vi x}} = \frac 1 {\norm{\vi x}} + x_1 \cdot \frac {2 x_1}{\norm{\vi x}^3} \cdot \brackets{-\frac 1 2} = \frac 1 {\norm{\vi x}} - \frac{x_1^2}{\norm{\vi x}^3}\\ \Downarrow\\ \lapl f = \frac{n}{\norm{\vi x}} - \frac{\norm{\vi x}^2}{\norm{\vi x}^3} = \frac{n - 1}{\norm{\vi x}}. \end{gathered} \tag{2.22}\] Then using (1.1) for each term of \(\lapl f\) and (2.20) (with \(\lmbd = n-1\)) yields \[ \rcases{ \FT{\lapl f} = - 4 \pi^2 \norm{\vi t}^2 \FT{f}(\vi t) = - 4 \pi^2 \norm{\vi t}^2 \oper S \\ \FT{\brackets{\frac{n-1}{\norm{\vi x}}}} = 2 \frac {n - 1} 2 \frac{\GammaF{\frac{n-1} 2}}{\GammaF{\frac 1 2}} \pi^{1 - \frac n 2} \frac 1 {\norm{\vi t}^{n - 1}} } \onTop{\implies}{\GammaF{\frac 1 2} = \sqrt{\pi}} c = 2 \GammaF{\frac{n+1} 2}\pi^{- \frac{n-1} 2}. \]

Remark 2.17. In practice, we can also compute the Fourier transform of higher powers of \(\norm{\vi x}\), i.e., \(\norm{\vi x}^{\lmbd}\); however, we would need more correction terms in \(\oper S\) (apart from the sphere-averaging) involving derivatives of \(\vf\).

Remark 2.18. Similarly to above, we can try to construct an analytic continuation of the Gamma function \(\gammaF\). Indeed, recalling Definition 4.1 we know that \(\gammaF\) is defined for \(\reOf{z} > 0\), then \[\begin{align*} \GammaF{z} &= \int_0^{\infty} t^{z-1} \ee^{-t} \dd t \\ &= \int_0^1 t^{z-1} \ee^{-t} \dd t + \overbrace{\int_1^{\infty} t^{z-1} \ee^{-t} \dd t}^{\text{convergent for any }z} \\ &= \int_0^1 t^{z - 1} (\ee^{-t} - 1) \dd t + \underbrace{\int_0^1 t^{z-1} \dd t}_{\frac 1 z} + \int_1^{\infty} t^{z-1} \ee^{-t} \dd t, \end{align*}\] and \(\int_0^1 t^{z-1}(\ee^{-t} - 1) \dd t\) converges for \(\reOf{z} > -1\) (by, for example, Taylor expansion of \(\ee^{-t}\)). Notice that this is the same idea as above (subtracting the problematic term), which gives us the analytic continuation. Specifically for \(-1 < \reOf{z} < 0\) we have \(\int_0^1 t^{z-1} \dd t = \frac 1 z = - \int_1^{\infty} t^{z-1} \dd t\), hence \[ \GammaF{z} = \int_0^1 t^{z-1} (\ee^{-t} - 1) \dd t - \int_1^{\infty} t^{z-1} \dd t + \int_1^{\infty} t^{z-1} \ee^{-t} \dd t = \int_0^{\infty} (\ee^{-t} - 1) t^{z-1} \dd t. \]
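The continued formula can be checked numerically at \(z = -\frac 1 2\), where the recurrence gives \(\GammaF{-\frac 1 2} = \GammaF{\frac 1 2}/(-\frac 1 2) = -2\sqrt{\pi}\). In the sketch below (helper name and cutoffs are ad-hoc), the substitution \(t = s^2\) removes the integrable singularity at \(0\), and the tail beyond \(t = \mathrm{hi}^2\), where \(\ee^{-t}\) is negligible, is added in closed form as \(-2/\mathrm{hi}\):

```python
import math

def gamma_at_minus_half(n: int = 200000, hi: float = 7.0) -> float:
    """integral_0^inf (exp(-t) - 1) t^(-3/2) dt, via t = s^2 on [0, hi]
    plus the elementary tail -2/hi (exp(-t) is ~e^{-49} there)."""
    h = hi / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += 2.0 * (math.exp(-s * s) - 1.0) / (s * s)
    return total * h - 2.0 / hi

# Gamma(-1/2) = -2 sqrt(pi)
assert abs(gamma_at_minus_half() + 2.0 * math.sqrt(math.pi)) < 1e-6
```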

We shall continue with Example 2.5 in the spirit of Remark 2.17. Firstly, notice that for \(f(\vi x) = \norm{\vi x}\) we determined \(\FT{f}\) as an explicit tempered distribution such that \(\supp \FT{f} = \R^n\). In more generality, consider \(f_{\lmbd} (\vi x) = \norm{\vi x}^{\lmbd}\), and, for simplicity, choose \(\lmbd = 2\), then by (1.1) \[\begin{align*} (-4 \pi^2) \oper T_{\FT{f}_2} \vf &= (-4 \pi^2) \oper T_{f_2} \FT{\vf} = \int_{\R^n} (-4 \pi^2) \norm{\vi t}^2 \FT{\vf}(\vi t) \dd \vi t \\ &= \int_{\R^n} \int_{\R^n} \lapl_{\vi x} \vf(\vi x) \ee^{-2 \pi \im \scal{\vi t}{\vi x}} \dd \vi x \dd \vi t \\ &\tOnTop{=}{Fubini} \int_{\R^n} \lapl_{\vi x} \vf(\vi x) \int_{\R^n} \ee^{-2 \pi \im \scal{\vi t}{\vi x}} \dd \vi t \dd \vi x \\ &= \int_{\R^n} \lapl \vf(\vi x) \dirac_{\vi 0}(\vi x) \dd \vi x = \lapl \vf(\vi 0)\\ &\implies \FT{f}_2 = -\frac 1 {4 \pi^2} \lapl \dirac_{\vi 0}, \end{align*}\] because the following distributional identity holds for the Dirac delta at origin, \[ \int_{\R^n} \vf(\vi x) \int_{\R^n} \ee^{- 2 \pi \im \scal {\vi t}{\vi x}} \dd \vi t \dd \vi x = \int_{\R^n} \FT{\vf}(\vi t) \dd \vi t = \int_{\R^n} \underbrace{\FT{\dirac}_{\vi 0}(\vi x)}_1 \FT{\vf}(\vi x) \dd \vi x = \dirac_{\vi 0} \FT{\FT{\vf}} = \dirac_{\vi 0} \reverse{\vf} = \dirac_{\vi 0} \vf. \] This is rather remarkable, as it is now a local operator with \(\supp \FT{f}_2 = \set{\vi 0}\) (whilst before it acted globally). Moreover, observe that for a monomial \(P(\vi x) = \vi x^{\vi \alpha}\) it further holds by the same argument that \[ \int_{\R^n} (2 \pi \im)^{\absval{\vi \alpha}} \vi t^{\vi \alpha} \FT{\vf}(\vi t) \dd \vi t = \Dgrad^{\vi \alpha} \vf(\vi 0) = \Dgrad^{\vi \alpha} \dirac_{\vi 0} \vf, \] i.e., \(\FT{P} = \frac 1 {(2 \pi \im)^{\absval{\vi \alpha}}} \Dgrad^{\vi \alpha} \dirac_{\vi 0}\), which is again local.
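In \(n = 1\), the identity \(\FT{f}_2 = -\frac 1 {4\pi^2} \lapl \dirac_{\vi 0}\) says \(\int t^2 \FT{\vf}(t) \dd t = -\vf''(0)/(4\pi^2)\), which can be checked against the self-dual Gaussian \(\vf(x) = \ee^{-\pi x^2}\) (so \(\FT{\vf} = \vf\)). A sketch with a midpoint quadrature and a central finite difference for \(\vf''(0)\) (the step sizes are ad-hoc):

```python
import math

phi = lambda x: math.exp(-math.pi * x * x)   # self-dual: F{phi} = phi

def moment2(n: int = 100000, hi: float = 10.0) -> float:
    """integral over R of t^2 phi(t) dt (even integrand, midpoint rule)."""
    h = hi / n
    return 2.0 * h * sum(((k + 0.5) * h) ** 2 * phi((k + 0.5) * h) for k in range(n))

eps = 1e-4
phi_dd0 = (phi(eps) - 2.0 * phi(0.0) + phi(-eps)) / eps ** 2   # ~ phi''(0) = -2 pi

# integral t^2 F{phi} dt should equal -phi''(0) / (4 pi^2)
assert abs(moment2() + phi_dd0 / (4.0 * math.pi ** 2)) < 1e-5
```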

Let us phrase this approach (of extending/continuing distributions) more generally. To this end, assume \(\oper T_{\lmbd}\) is a distribution depending holomorphically on \(\lmbd \in U \subseteq \C\), i.e., for every test function \(\vf\) the mapping \(\lmbd \mapsto \oper T_{\lmbd}(\vf)\) is holomorphic on \(U\). Now for any \(\lmbd_0 \in U\) we can express \(\oper T_{\lmbd}(\vf)\) as a power series around \(\lmbd_0\), \[ \oper T_{\lmbd}(\vf) = \sum_{k = 0}^{\infty} \oper A_k(\vf) (\lmbd - \lmbd_0)^k \] with \(\oper A_k(\vf) = \frac 1 {k!} \evaluateAt{\nderiv{k}{\oper T_{\lmbd}(\vf)}{\lmbd}}{\lmbd = \lmbd_0}\), which follows from holomorphy. Surely, we can exploit this!

Example 2.6 (Adapted from the lecture and [2, chpt. 24]) To begin in a simpler setting, let \(n = 1\) and choose \(f_{\lmbd}(x) = \lcases{0 & x < 0 \\ x^{\lmbd - 1} & x >0}\) for \(\reOf{\lmbd} > 0\), hence \(f_{\lmbd}\) is locally integrable (and thus a distribution). Now we shall closely mirror what we did for the Gamma function \(\gammaF\) in Remark 2.18, i.e., take the Taylor polynomial of \(\vf\) of order \(N-1\) (with \(N\) chosen arbitrarily), \(\vf(x) = P_{N-1}(x) + x^N g(x)\), then \[\begin{align*} \oper T_{f_\lmbd}(\vf) &= \int_{\R} f_{\lmbd}(x) \vf(x) \dd x = \int_0^{\infty} x^{\lmbd - 1} \vf(x) \dd x \\ &= \int_0^1 P_{N-1}(x) x^{\lmbd - 1} \dd x + \int_0^1 x^{N + \lmbd - 1}g(x) \dd x + \int_1^{\infty} x^{\lmbd - 1} \vf(x) \dd x\\ &= \int_0^1 \overbrace{\sum_{k = 0}^{N-1} \frac 1 {k!} \Dgrad^k \vf(0)x^k}^{P_{N - 1}(x)} x^{\lmbd - 1} \dd x + \text{remaining terms} \\ &= \sum_{k = 0}^{N - 1} \frac 1 {k!} \frac {\Dgrad^k \vf(0)} {k+\lmbd} \cdot (1 - 0) + \text{remaining terms} \\ &= \sum_{k = 0}^{N - 1} \frac 1 {k! (k+\lmbd)} \Dgrad^k \vf(0) + \underbrace{\int_0^1 x^{N + \lmbd - 1}g(x) \dd x}_{\text{holomorphic for } \reOf{\lmbd} > - N} + \underbrace{\int_1^{\infty} x^{\lmbd - 1} \vf(x) \dd x}_{\text{holomorphic by Morera}}, \end{align*}\] where the first term is just a rational function of \(\lmbd\), the second is clearly holomorphic for \(\reOf{\lmbd} > -N\), and the last term is even entire by Morera’s theorem 4.810. Since \(\vf\) and \(N\) were chosen arbitrarily, it is evident that we can continue/extend the distribution to \(\lmbd \in \C \setminus (-\N_0)\) and that the non-positive integers are poles of the mapping.
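The decomposition can be tested concretely with \(\vf(x) = \ee^{-\pi x^2}\) and \(N = 2\) (so \(P_1(x) = 1\), since \(\vf'(0) = 0\)): it then reads \(\oper T_{f_\lmbd}(\vf) = \vf(0)/\lmbd + \int_0^1 x^{\lmbd-1}(\vf(x)-1)\dd x + \int_1^\infty x^{\lmbd-1}\vf(x)\dd x\), valid for \(\reOf{\lmbd} > -2\). For this particular \(\vf\) the value must agree with the continuation of \(\frac 1 2 \pi^{-\lmbd/2}\GammaF{\frac \lmbd 2}\), both at an ordinary point and at a continued one. A quadrature sketch (helper names and cutoffs are ad-hoc):

```python
import math

def T_lambda(lam: float, n: int = 200000) -> float:
    """Continued T_{f_lambda}(phi) for phi(x) = exp(-pi x^2), N = 2 (midpoint rule)."""
    phi = lambda x: math.exp(-math.pi * x * x)
    h1 = 1.0 / n
    inner = sum(((k + 0.5) * h1) ** (lam - 1) * (phi((k + 0.5) * h1) - 1.0)
                for k in range(n)) * h1
    h2 = 5.0 / n   # outer integral over [1, 6]; the Gaussian tail beyond is negligible
    outer = sum((1.0 + (k + 0.5) * h2) ** (lam - 1) * phi(1.0 + (k + 0.5) * h2)
                for k in range(n)) * h2
    return phi(0.0) / lam + inner + outer

def reference(lam: float) -> float:
    """(1/2) pi^(-lam/2) Gamma(lam/2), reaching negative arguments through
    the recurrence Gamma(s) = Gamma(s + 1) / s."""
    half = lam / 2.0
    return 0.5 * math.pi ** (-half) * (math.gamma(half + 1.0) / half)

assert abs(T_lambda(0.7) - reference(0.7)) < 1e-5     # ordinary point
assert abs(T_lambda(-0.5) - reference(-0.5)) < 1e-5   # continued point
```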

However, one might recall that the Gamma function has the same poles, thus it might prove advantageous to consider the distribution \(\oper T_{\lmbd} = \frac{f_{\lmbd}}{\GammaF{\lmbd}}\). As \(\frac 1 {\GammaF{\lmbd}}\) only rescales the distribution, we get \(\Dgrad \oper T_{\lmbd} = \frac{\lmbd - 1}{\GammaF{\lmbd}} f_{\lmbd - 1} = \oper T_{\lmbd - 1}\). Noticing \(\Dgrad \oper T_1 = \oper T_0 = \dirac_0\) (as \(\oper T_1\) is the Heaviside function, which has the Dirac distribution as its derivative, see Example 2.2), we finally obtain \(\oper T_{-k} = \Dgrad^k \dirac_0\). Each \(\oper T_{-k}\) is surely a tempered distribution, with \(\FT{\oper T}_{-k} = (- 2 \pi \im t)^k\).

Example 2.7 (Continuation of Example 2.4, [2, chpt. 24]) Last, but not least, we shall consider the continuation of Example 2.4, where we shall address the issues discussed at the end of the example. To this end, let \(f_{\lmbd}(\vi x) = \norm{\vi x}^{\lmbd - n}\), which is surely locally integrable for \(\reOf \lmbd > 0\), and therefore a distribution on the same \(\lmbd\)-domain. Now by a construction analogous to that of Example 2.6, \[ \begin{aligned} \oper T_{f_{\lmbd}} \vf &= \int_{\R^n} f_{\lmbd}(\vi x) \vf(\vi x) \dd \vi x \\ &= \underbrace{\int_{\norm{\vi x} \leq 1} P_{N - 1}(\vi x) \norm{\vi x}^{\lmbd - n} \dd \vi x}_{R_{\lmbd}} + \int_{\norm{\vi x} \leq 1} \underbrace{\brackets{\vf - P_{N-1}}(\vi x)}_{\BigO{\norm{\vi x}^N}} \norm{\vi x}^{\lmbd - n} \dd \vi x + \int_{\norm{\vi x} > 1} \norm{\vi x}^{\lmbd - n} \vf(\vi x) \dd \vi x, \end{aligned} \tag{2.23}\] where again the second term is holomorphic for \(\reOf{\lmbd} > - N\) and the last term is an entire function of \(\lmbd\). The function \(R_{\lmbd}\) has terms \(\int_{\norm{\vi x} \leq 1} \vi x^{\vi \alpha} \norm{\vi x}^{\lmbd - n} \dd \vi x\) which vanish if at least one of the components of \(\vi \alpha\) is odd (by symmetry), thus only the terms with \(\vi \alpha = 2\vi \beta\) contribute. As such \(R_{\lmbd}\), and by extension \(\oper T_{f_{\lmbd}}\), has poles at the non-positive even integers. This hints at the possible usefulness of \(\oper T_{\lmbd} = \frac 1 {\GammaF{\frac{\lmbd} 2}} f_{\lmbd}\), where we take \(\GammaF{\frac {\lmbd} 2}\) because it too has poles at non-positive even integers.

As a side note, let \(f : \R^n \to \R\) (or \(\C\)) be a radial function, i.e., \(f(\vi x) = g(\norm{\vi x})\), then \[ \frac 1 {r^{n-1}} \derivOp{r} \brackets{r^{n-1} g'} = \frac 1 {r^{n-1}} \brackets{(n-1)r^{n-2}g' + r^{n-1} g''} = \frac {n-1}{r} g' + g'' \] and \(\grad f = \frac {\vi x}{\norm{\vi x}} g'\), thus the laplacian is (using (2.22)) \[ \lapl f = \frac{n-1}{\norm{\vi x}} g' + \scal{\frac {\vi x}{\norm{\vi x}}}{\frac {\vi x}{\norm{\vi x}} g''} \implies \lapl g \letDef \lapl f = \frac 1 {r^{n-1}} \derivOp{r} \brackets{r^{n-1} g'}. \tag{2.24}\]
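The radial formula (2.24) applied to \(g(r) = r^{\lmbd - n}\) predicts \(\lapl \norm{\vi x}^{\lmbd - n} = (\lmbd - n)(\lmbd - 2)\norm{\vi x}^{\lmbd - n - 2}\) away from the origin, which a finite-difference Laplacian can confirm numerically. A sketch (the base point, \(\lmbd\), and the step \(h\) are arbitrary choices):

```python
import math

# finite-difference check of laplacian ||x||^(lam - n) away from the origin
lam, dim, h = 1.3, 3, 1e-3
x0 = (0.4, 0.5, 0.6)                     # arbitrary base point with ||x0|| != 0
f = lambda p: sum(c * c for c in p) ** ((lam - dim) / 2.0)

def shift(p, i, d):
    q = list(p)
    q[i] += d
    return tuple(q)

# standard 7-point stencil: sum of second differences along each axis
fd_lapl = sum((f(shift(x0, i, h)) - 2.0 * f(x0) + f(shift(x0, i, -h))) / h ** 2
              for i in range(dim))
r = math.sqrt(sum(c * c for c in x0))
exact = (lam - dim) * (lam - 2) * r ** (lam - dim - 2)
assert abs(fd_lapl - exact) < 1e-3 * abs(exact)
```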

Using the above argumentation and the side note with (2.24) for \(g(r) = r^{\lmbd - n}\), we may observe \[ \lapl \oper T_{\lmbd} = \frac {(\lmbd - n)(\lmbd - 2)} {\GammaF{\frac {\lmbd} 2}} f_{\lmbd - 2} = 2 (\lmbd - n) \oper T_{\lmbd - 2} \] by \(\GammaF{\frac{\lmbd} 2} = \brackets{\frac{\lmbd} 2 - 1} \GammaF{\frac{\lmbd - 2} 2}\). Specifically, let \(\oper T_2 = \frac 1 {\norm{\vi x}^{n - 2}}\) and recall \(\lapl \oper T_2 = -(n - 2) \omega_n \dirac_{\vi 0}\) (from Remark 2.5, where we did the calculation for \(n = 3\)), which also equals \(2 (2 - n) \oper T_0\); it follows that \(\oper T_0 = \frac {\omega_n} 2 \dirac_{\vi 0}\) and iteration yields \(\oper T_{-2k} = c_k \lapl^k \dirac_{\vi 0}\). Still, for non-integer \(\lmbd\) with \(\reOf \lmbd < 0\), we can use the original expression (2.23), where the term \(R_{\lmbd}\) only has poles at the non-positive even integers, thus it turns into a finite sum of terms \(C_{2\vi \beta}(\lmbd)\).

To conclude, let us note that in Example 2.4 we computed (2.20), which one can re-write as \(\FT{\oper T}_{\lmbd} = \pi^{\frac {n} 2 - \lmbd} \oper T_{n - \lmbd}\) after some re-arranging. Originally, this held for \(\reOf \lmbd \in \brackets{0, \frac n 2}\), and we suggested it also holds for \(\reOf \lmbd \in (0, n)\) due to the form of (2.20). Now, however, we have extended it to all \(\lmbd \in \C\) by means of analytic continuation.


  1. For completeness’ sake, one can check \(\arctan'(x) = \frac 1 {\tan'(\arctan(x))} = \frac 1 {1 + x^2}\), where \(\tan'(x) = \frac{\cos^2(x) + \sin^2(x)} {\cos^2(x)} = 1 + \tan^2(x)\).↩︎

  2. Canonically, to a function \(f\) we assign the distribution \(\oper T_f \vf = \int_{\R^n} f(\vi x) \vf(\vi x) \dd \vi x\) for \(\vf \in \TestSpace{\R^n}\). Therefore, we do not need the function \(f\) to be defined (for evaluation) everywhere, but (local) integrability against test functions \(\TestSpace{\R^n}\) is sufficient. In this sense, while \(E(\vi x)\) is not defined at \(0\) it still defines a distribution.↩︎

  3. A test function \(\vf \in \TestSpace{\R^n}\) has all its derivatives continuous and compactly supported, thus bounded.↩︎

  4. As was mentioned in the beginning of this section, alternatively we may call it temperate distribution.↩︎

  5. In the sense that \(\oper T : X \to Y\) is a linear operator, which is said to be bounded if there exists a constant \(C > 0\) such that \(\norm{\oper T \vi x}_Y \leq C \norm{\vi x}_X\) for all \(\vi x \in \dom{\oper T}\). This, in turn, implies continuity of \(\oper T\) (defined as usual).↩︎

  6. For clarity, let us mention that the supremum coincides with the \(\linfty\)-norm.↩︎

  7. The reason for the local integrability may be illustrated with \(\int_{\norm{\vi x} \leq 1} \frac 1 {\norm{\vi x}^{n-2}} \dd \vi x = \int_0^1 \frac 1 {r^{n-2}} r^{n-1} \dd r \cdot \omega_n\), which is clearly a convergent integral.↩︎

  8. A meromorphic function is holomorphic on \(\C\) except for at most countably many isolated poles.↩︎

  9. Indeed, the averaging over a sphere (here in \(\R\)) and the deliberate form of the integrand of \(\oper S\) allow for integrability around origin.↩︎

  10. Indeed \(\oint_{\curve} \int_1^{\infty} x^{\lmbd - 1} \vf(x) \dd x \dd \lmbd = \int_1^{\infty} \vf(x) \oint_{\curve} \ee^{\log(x)(\lmbd - 1)} \dd \lmbd \dd x = \int_1^{\infty} \vf(x) \cdot 0 \dd x = 0\).↩︎