
Nonstandard Analysis


Preliminary remarks: The following section uses the definitions established in the chapters on Set Theory and Topology, and usually takes \(m, n \in {}^{\omega}\mathbb{N}^{*}\). Integration and differentiation are studied on an arbitrary non-empty subset \(A\) of \({}^{(\omega)}\mathbb{K}^{n}\). The mapping concept requires replacing every element not in the image set by the neighbouring element in the target set; if multiple choices are possible, a single one is selected. The following may easily be generalised to other sets and norms.

Definition: The function \(||\cdot||: \mathbb{V} \rightarrow {}^{(\omega)}\mathbb{R}_{\ge 0}\) where \(\mathbb{V}\) is a vector space over \({}^{(\omega)}\mathbb{K}\) is called a norm, if for all \(x, y \in \mathbb{V}\) and \(\lambda \in {}^{(\omega)}\mathbb{K}\), it holds that: \(||x|| = 0 \Rightarrow x = 0\) (definiteness), \(||\lambda x|| = |\lambda| \; ||x||\) (homogeneity), and \(||x + y|| \le ||x|| + ||y||\) (triangle inequality). The dimension of \(\mathbb{V}\) is defined as the maximal number of linearly independent vectors, and is denoted by dim \(\mathbb{V}\). The norms \({||\cdot||}_{a}\) and \({||\cdot||}_{b}\) are said to be equivalent if there exist non-infinitesimal \(s, t \in {}^{\nu}\mathbb{R}_{>0}\) such that, for all \(x \in \mathbb{V}\), it holds that \(s||x||{}_{b} \le ||x||{}_{a} \le t||x||{}_{b}.\triangle\)

Theorem: Let \(N\) be the set of all norms on \(\mathbb{V}\). All norms in \(N\) are equivalent if and only if \({||x||}_{a}/{||x||}_{b}\) is finite but not infinitesimal for all \({||\cdot||}_{a}, {||\cdot||}_{b} \in N\) and all \(x \in \mathbb{V}^{*}\).

Proof: Set \(s := \text{min }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}\) and \(t := \text{max }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}.\square\)

Definition: The set \(\overline{\mathbb{R}} := \mathbb{R} \cup \{\infty\}\) allows calculating with \(\infty \gg \varsigma^2\) as with a constant. If \(\pm0\) is replaced by \(\pm\tilde{\infty}\), the calculations become unique and consistent. The number \(\pi\) is defined as the area, or half the circumference, of the unit circle. Euler’s number \(e\) is defined as the solution of \({x}^{\pi i} = -1\). The logarithm function ln is then defined by \({e}^{\ln \, z} = z\) and the corresponding power function by \({z}^{s} = {e}^{s \, \ln \, z}\) for \(s, z \in \mathbb{C}\). This gives a formal definition of exponentiation.\(\triangle\)

Remark: The value of \(e\) given by this definition is \(\mathcal{O}(\tilde{\nu})\) larger than the one given by \({(1 + \tilde{\nu})}^{\nu}\). The former is justified by the exponential series being differentiated exactly with as many terms as possible. When calculating as precisely as possible, this deviation can have negative consequences: typically, resorting to approximations becomes necessary.
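
A quick numerical check illustrates this remark; the following minimal Python sketch (the finite `n` merely stands in for the infinite \(\nu\) of the text) shows that \({(1 + \tilde{n})}^{n}\) falls short of \(e\) by roughly \(\mathcal{O}(\tilde{n})\):

```python
import math

# Compare (1 + 1/n)^n with e = exp(1) for growing n; the gap shrinks like e/(2n).
for n in (10, 100, 1000, 10_000):
    approx = (1 + 1 / n) ** n
    gap = math.e - approx
    print(f"n={n:>6}  (1+1/n)^n={approx:.10f}  e-approx={gap:.2e}  n*gap={n*gap:.4f}")
```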

Lemma: Because of \(\tilde{\nu}m \le 1 \le a\) for all \(m \in {}^{\nu}\mathbb{N}\) and \(a \in {}^{\nu}{\mathbb{R}}_{\ge 1}\), the Archimedean axiom is invalid.\(\square\)

Archimedes’ theorem: There exists \(m \in {}^{\nu}\mathbb{N}\) such that \(a < bm\) if and only if \(a < b\nu\) whenever \(a > b\) for \(a, b \in {\mathbb{R}}_{>0}\), since \(\nu = \max {}^{\nu}\mathbb{N}\) holds.\(\square\)

Definition: Let \(A \subseteq {}^{(\omega)}\mathbb{C}^{n}\) be an \(m\)-dimensional set with \(m \in {}^{\omega}\mathbb{N}_{\le \hat{n}}\) and let \(h \in \mathbb{R}_{>0}\) be less than or equal to the minimal distance of the points in \(A\). The function \({\mu}_{h}: A \rightarrow \mathbb{R}_{\ge 0}\) with \({\mu}_{h}(A) := |A| {h}^{m}\) and \({\mu}_{h}(\emptyset) = |\emptyset| = 0\) is called the exact \(h\)-measure of \(A\), and \(A\) is called \(h\)-measurable. Let the exact standard measure be \({\mu}_{\iota}\) (\(\iota\) may be omitted).\(\triangle\)

Remark: Answering the measure problem positively, the union \(A\) of pairwise disjoint \(h\)-homogeneous sets \({A}_{j}\) for \(j \in J \subseteq \mathbb{N}\) clearly yields, additively and uniquely, \({{\mu }_{h}}(A)={+}_{j \in J}{{{\mu }_{h}}\left( {{A}_{j}} \right)}.\) For \(h\)-homogeneous sets \({A}_{1}, {A}_{2} \subseteq {}^{(\omega)}\mathbb{K}^{n}\) satisfying \({A}_{1} \subset {A}_{2}\), strict monotonicity \({\mu}_{h}({A}_{1}) < {\mu}_{h}({A}_{2})\) follows. If \(h\) is not equal on all considered sets \({A}_{j}\), the minimum of all \(h\) is chosen and the homogenisation proceeds as described in Set Theory. In the following, let \(||\cdot||\) be the Euclidean norm.

Examples: Consider the set \(A \subset {[0, 1[}^n\) of points whose least significant bit is 1 (0) in all \(n \in {}^{\omega}\mathbb{N}^{*}\) coordinates. Then \({\mu}_{\iota}(A) = \tilde{2}^n\). Since \(A\) is an infinite and conventionally uncountable union of individual points whose neighbouring points in \({[0, 1[}^n\) do not belong to \(A\), and these points are Lebesgue null sets, \(A\) is not Lebesgue measurable; it is, however, exactly measurable. Domains from \({}^{(\omega)} \mathbb{K}^{n}\) that are pushed together more densely have no smaller (larger) intersection (union) than before.
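
The counting definition \({\mu}_{h}(A) = |A| h^m\) can be emulated on a finite grid; the sketch below (grid spacing `h` and dimension `n` are illustrative finite choices, not the infinitesimal \(\iota\)) reproduces the value \(\tilde{2}^n\) for the example above:

```python
from itertools import product

def exact_h_measure(points, h, m):
    """Exact h-measure of an h-homogeneous point set: mu_h(A) = |A| * h**m."""
    return len(points) * h ** m

n, k = 2, 6                # dimension and refinement (h = 2**-k), illustrative values
h = 2.0 ** -k
grid_1d = [i * h for i in range(2 ** k)]            # one axis of the grid [0, 1[
cube = list(product(grid_1d, repeat=n))             # all grid points of [0, 1[^n

# A: points whose least significant bit (i.e. odd i in x = i*h) is 1 in every coordinate
A = [p for p in cube if all(round(x / h) % 2 == 1 for x in p)]

print(exact_h_measure(cube, h, n))   # -> 1.0, the measure of [0, 1[^n
print(exact_h_measure(A, h, n))      # -> 0.25 = 2**-n, matching the example
```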

Remark: The exact \(h\)-measure is optimal: it only considers the neighbourhood relations (NRs) of points, i.e. in the extreme case distances of points parallel to the coordinate axes. Concepts such as \(\sigma\)-algebras and null sets are dispensable, since the only null set is the empty set \(\emptyset\).

Definition: Neighbouring points in \(A\) are described by the irreflexive symmetric NR \(B \subseteq {A}^{2}\). The function \(\gamma: C \rightarrow A \subseteq \mathbb{C}{}^{n}\), where \(C \subseteq \mathbb{R}\) is \(h\)-homogeneous and \(h\) is infinitesimal, is called a path if \(||\gamma(x) - \gamma(y)||\) is infinitesimal for all neighbouring points \(x, y \in C\) with \((\gamma(x), \gamma(y)) \in B\). Let \({z}_{0} \in A \subseteq \mathbb{K}^{n}\) and \(f: A \rightarrow {}^{(\nu)}\mathbb{K}^{m}\). NRs are systematically written as (predecessor, successor) with the notation \(({z}_{0}, \curvearrowright {z}_{0})\) or \((\curvearrowleft {z}_{0}, {z}_{0})\), pronouncing \(\curvearrowright\) as “post” and \(\curvearrowleft\) as “pre”. The term compactness is renounced in any form.\(\triangle\)

Definition: If \(||f(\curvearrowright B {z}_{0}) - f({z}_{0})|| < \alpha\) for infinitesimal \(\alpha \in {}^{(\omega)}\mathbb{R}{}_{>0}\), \(f\) is called \(\alpha B\)-successor-continuous in \({z}_{0}\) in the direction \(\curvearrowright B {z}_{0}\). If the exact modulus of \(\alpha\) does not matter, \(\alpha\) may be omitted in the notation. If \(f\) is \(\alpha B\)-successor-continuous for all \({z}_{0}\) and \(\curvearrowright B {z}_{0}\), it is simply called \(\alpha B\)-continuous. Here \(\alpha\) is the degree of continuity. If the inequality only holds for \(\alpha = \tilde{\nu}\), \(f\) is simply called (\(B\)-successor-)continuous. The property of \(\alpha B\)-predecessor-continuity is defined analogously.\(\triangle\)

Remark: Proofs for predecessors will be omitted below, since they are analogous to the proofs for successors. In practice, choose \(\alpha\) by estimating \(f\) (for example after considering any jump discontinuities). If \(B\) is obvious or irrelevant, it may be omitted – as below, when \(B = {}^{(\omega)}\mathbb{K}{}^{\hat{n}}\).

Example: The function \(f: \mathbb{R} \rightarrow \{\pm 1\}\) with \(f(x) = i^{\hat{x}/\iota}\) is nowhere successor-continuous on \(\mathbb{R}\), but its modulus is (cf. Number Theory). Here, \(x/\iota\) is an integer since \(\mathbb{R}\) is \(\iota\)-homogeneous. If instead \(f(x) = 1\) is set for rational \(x\) and \(f(x) = -1\) otherwise, then \(f\) is partially \(\iota\)-successor-continuous at non-rational numbers, unlike with the conventional notion of continuity.

Example of a Peano curve (cf. Walter, Wolfgang: Analysis 2; 5., erw. Aufl.; 2002; Springer; Berlin, p. 188): “Consider the even, periodic function \(g: \mathbb{R} \rightarrow \mathbb{R}\) with period 2 and image [0, 1] defined by\[{g}(t)=\left\{ \def\arraystretch{1.5}\begin{array}{cl} 0 & \text{for }0\le t<\tfrac{1}{3}\\ 3t-1 & \text{for }\tfrac{1}{3}\le t<\tfrac{2}{3}\\ 1 & \text{for }\tfrac{2}{3}\le t\le 1.\\ \end{array} \right.\,\] Clearly, \(g\) is fully specified by this definition, and continuous. Now let the function \(\phi: I = [0, 1] \rightarrow \mathbb{R}^{2}\) be defined by\[\phi(t) = \left( {\sum\limits_{k = 0}^{\infty} {\frac{{g({4^{2k}}t)}}{{{2^{k + 1}}}},} \sum\limits_{k = 0}^{\infty} {\frac{{g({4^{2k + 1}}t)}}{{{2^{k + 1}}}}} } \right).”\]The function \(\phi\) is at least continuous since the sums are ultimately locally linear functions in \(t\), when \(\infty\) is replaced by \(\omega\). It would however be an error to believe that [0, 1] can be bijectively mapped onto \({[0, 1]}^{2}\) in this way: the powers of four in \(g\), and the values 0 and 1 taken by \(g\) in two sub-intervals thin out \({[0, 1]}^{2}\) so much that a bijection is clearly impossible. Restricting the proof to rational points only is simply insufficient.

Definition: For \(f: A \rightarrow {}^{(\omega)}\mathbb{K}{}^{m}, {{\downarrow}}_{\curvearrowright B z}f(z) := f(\curvearrowright B z) - f(z)\) is called the \(B\)-successor-differential of \(f\) in the direction \(\curvearrowright B z\) for \(z \in A\). If dim \(A = n\), then \({{\downarrow}}_{\curvearrowright B z}f(z)\) can be specified by \({\downarrow}((\curvearrowright B){z}_{1}, \text{…} , (\curvearrowright B){z}_{n})f(z)\). If \(f\) is the identity, i.e. \(f(z) = z\), then \({{\downarrow}}_{\curvearrowright B z}Bz\) can be written instead of \({\downarrow}_{\curvearrowright B z}f(z)\). If \(A\) or \(\curvearrowright B z\) is obvious or irrelevant, it may be omitted. Read \({\downarrow}\) as “down”.\(\triangle\)

Definition: If \(|f(\curvearrowright x) - f(x)| > \tilde{\omega}\) holds for some \(x\) of \(f: A \subseteq {}^{\omega}\mathbb{R} \rightarrow {}^{\omega}\mathbb{R}\), then \(x\) is called a jump discontinuity. If the modulus of the \(B\)-successor-differential of \(f\) in the direction \(\curvearrowright B z\) at \(z \in A\) is smaller than \(\alpha\) and infinitesimal, then \(f\) is also rated as \(\alpha B\)-successor-continuous there. An (infinitely) real-valued function with arguments \(\in {}^{(\omega)}\mathbb{K}{}^{n}\) is said to be convex (concave) if the line segment between any two points on the graph of the function lies above (below) or on the graph. It is called strictly convex (concave) if “or on” can be omitted.\(\triangle\)

Definition: The \(m\) arithmetic means of all \({f}_{k}(\curvearrowright B z)\) of \(f(z)\) give the \(m\) averaged normed tangential normal vectors of \(m\) (uniquely determined) hyperplanes, defining the \(mn\) continuous partial derivatives of the Jacobian matrix of \(f\), which is not necessarily continuous. The hyperplanes are taken to pass through \({f}_{k}(\curvearrowright B z)\) and \(f(z)\) translated towards 0. The moduli of their coefficients are minimised by a quite simple linear programme (cf. Linear Programming).\(\triangle\)

Theorem (improving Froda’s theorem): A monotone function \(f: [a, b] \rightarrow {}^{\omega}\mathbb{R}\) has at most \(2\omega^2 - 1\) jump discontinuities, since at most \(2\omega^2\) jump discontinuities with a jump of \(\tilde{\omega}\) are possible between \(-\omega\) and \(\omega\) if the function, like a step function, does not decrease at non-discontinuities.\(\square\)

Definition: The partial derivative in the direction \(\curvearrowright B {z}_{k}\) of \(F: A \rightarrow {}^{(\omega)}\mathbb{K}\) at \(z = ({z}_{1}, …, {z}_{n}) \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\) with \(k \in \mathbb{N}_{\le n}^*\) is defined as\[\frac{{\downarrow} B\,F(z)}{{\downarrow} B\,{{z}_{k}}}:=\frac{F({{z}_{1}},\,…,\,\curvearrowright B\,{{z}_{k}},\,…,\,{{z}_{n}})-F(z)}{\curvearrowright B\,{{z}_{k}}-{{z}_{k}}}.\]With this notation, if the function \(f\) satisfies \(f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\) with \(z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\)\[\begin{aligned}f(z) &=\left( \frac{F(\curvearrowright B{{z}_{1}},{{z}_{2}},…,{{z}_{n}})-F({{z}_{1}},…,{{z}_{n}})}{(\curvearrowright B{{z}_{1}}-{{z}_{1}})},…,\frac{F({{z}_{1}},…,{{z}_{n-1}},\curvearrowright B{{z}_{n}})-F({{z}_{1}},…,{{z}_{n}})}{(\curvearrowright B{{z}_{n}}-{{z}_{n}})} \right)\\ &=\left( \frac{{\downarrow} B{{F}_{1}}(z)}{{\downarrow} B{{z}_{1}}},\,\,…\,\,,\,\,\frac{{\downarrow} B{{F}_{n}}(z)}{{\downarrow} B{{z}_{n}}} \right)=\text{grad }{{B}_{\curvearrowright Bz}}\,F(z)\,=\,\nabla {{B}_{\curvearrowright Bz}}\,F(z),\end{aligned}\]then \(f(z)\) is said to be the exact \(B\)-successor-derivative \({F}_{\curvearrowright B z}^{\prime} B(z)\) or the exact \(B\)-successor-gradient \(\text{grad }_{\curvearrowright B z} F(z)\) of the function \(F\) at \(z\), which is said to be exactly \(B\)-differentiable at \(z\) in the direction \(\curvearrowright B z\), provided that each quotient exists in \({}^{(\omega)}\mathbb{K}\). \(\nabla\) is the Nabla operator. If this definition is satisfied for every \(z \in A\), then \(F\) is said to be an exactly \(B\)-differentiable \(B\)-antiderivative (\(B\)-AD) of \(f\). For \(x \in {}^{(\omega)}\mathbb{R}\), the left and right \(B\)-ADs \({F}_{l}(x)\) and \({F}_{r}(x)\) distinguish between the cases of the corresponding \(B\)-derivatives.

If \(A\) or \(\curvearrowright B z\) are obvious from context or irrelevant, they may be omitted. The conventional case is obtained analogously. For \(n = 1\), \({F}_{r}^{\prime}B(x)\) is the right exact \(B\)-derivative for \(\curvearrowright B x > x \in {}^{(\omega)}\mathbb{R}\), and \({F}_{l}^{\prime}B(x)\) is the left exact \(B\)-derivative for \(\curvearrowright B x < x\). If all directions give the same value, \(F^{\prime}B(z)\) is called the exact derivative (\(A ={}^{\nu}\mathbb{C}\) and \(n = 1\) make \(F\) holomorphic). On a domain \(D\), let \(\mathcal{O}(D) \subseteq \mathcal{C}(D) \subseteq \mathbb{C}\) be the ring of holomorphic resp. continuous functions.\(\triangle\)
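
On a finite grid, the exact \(B\)-successor-gradient is just a vector of forward difference quotients; the following is a minimal Python sketch under that simplification (the step `h` stands in for the successor step \(\curvearrowright B z_k - z_k\)):

```python
def successor_gradient(F, z, h=1e-6):
    """Forward-difference analogue of grad_{->Bz} F(z): one quotient per coordinate,
    each using the successor z_k -> z_k + h in that direction only."""
    grad = []
    for k in range(len(z)):
        z_succ = list(z)
        z_succ[k] += h                       # successor of the k-th coordinate
        grad.append((F(z_succ) - F(z)) / h)  # (F(..., ->z_k, ...) - F(z)) / (->z_k - z_k)
    return grad

F = lambda z: z[0] ** 2 + 3 * z[0] * z[1]    # example scalar field
print(successor_gradient(F, [1.0, 2.0]))     # approx. [8.0, 3.0] = (2x + 3y, 3x) at (1, 2)
```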

Theorem: Every function \(f: A \rightarrow {}^{(\omega)}\mathbb{R}\) that is convex resp. concave on \(A \subseteq {}^{(\omega)}\mathbb{K}{}^{n}\) is \(\alpha B\)-successor-continuous and \(B\)-successor-differentiable.\(\square\)

Chain rule: For \(x \in A \subseteq {}^{(\omega)}\mathbb{R}, B \subseteq {A}^{2}, f: A \rightarrow C \subseteq {}^{(\omega)}\mathbb{R}, D \subseteq {C}^{2}, g: C \rightarrow {}^{(\omega)}\mathbb{R}\), choosing \(f(\curvearrowright B x) = \curvearrowright D f(x)\), it holds that:\[{g}_{r}^{\prime}B(f(x)) = {g}_{r}^{\prime}D(f(x)) {f}_{r}^{\prime}B(x).\]Proof:\[{{g}_{r}^{\prime}}B(f(x))=\frac{g(f(\curvearrowright Bx))-g(f(x))}{f(\curvearrowright Bx)-f(x)}\frac{f(\curvearrowright Bx)-f(x)}{\curvearrowright Bx-x}=\frac{g(\curvearrowright Df(x))-g(f(x))}{\curvearrowright Df(x)-f(x)}{{f}_{r}^{\prime}}B(x)={{g}_{r}^{\prime}}D(f(x)){{f}_{r}^{\prime}}B(x).\square\]Product rule: It holds that \((fg)_{r}^{\prime}B(x) = {f}_{r}^{\prime}B(x) g(x) + f(\curvearrowright B\,x) {g}_{r}^{\prime}B(x)= {f}_{r}^{\prime}B(x) g(\curvearrowright B\,x) + f(x) {g}_{r}^{\prime}B(x).\)

Proof: Add and subtract \(f(\curvearrowright B\,x) g(x)\) and \(f(x) g(\curvearrowright B\,x)\) in the numerator.\(\square\)

Quotient rule: Let the denominators of the following quotients be different from 0. Then:\[\left( \frac{f}{g} \right)_{r}^{\prime }B(x)=\frac{{{{{f}}}_{r}^{\prime}}B(x)\,g(x)-f(x)\,{{{{g}}}_{r}^{\prime}}B(x)}{g(x)\,g(\curvearrowright B\,x)}=\frac{{{{{f}}}_{r}^{\prime}}B(x)\,g(\curvearrowright B\,x)-f(\curvearrowright B\,x)\,{{{{g}}}_{r}^{\prime}}B(x)}{g(x)\,g(\curvearrowright B\,x)}.\]Proof: Add and subtract \(f(x) g(x)\) and \(f(\curvearrowright B\,x) g(\curvearrowright B\,x)\) in the numerator.\(\square\)
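
Unlike its conventional counterpart, the product rule above holds exactly for finite difference quotients, without any limit; a minimal Python check with a deliberately coarse step `h`:

```python
h = 0.1                                      # an arbitrary, not even small, step
f = lambda x: x ** 3
g = lambda x: 2 * x + 1
d = lambda u, x: (u(x + h) - u(x)) / h       # right difference quotient, "successor" x -> x + h

x = 1.7
lhs = d(lambda t: f(t) * g(t), x)                    # (fg)'_r(x)
rhs1 = d(f, x) * g(x) + f(x + h) * d(g, x)           # f'_r(x) g(x) + f(->x) g'_r(x)
rhs2 = d(f, x) * g(x + h) + f(x) * d(g, x)           # f'_r(x) g(->x) + f(x) g'_r(x)
print(lhs, rhs1, rhs2)                               # all three agree up to rounding
```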

Remark: Arguments and function values must belong to a smaller level of infinity than \(\tilde{\iota}\), and \(f\) and \(g\) must be sufficiently (\(\alpha B\)-) continuous at \(x \in A\). That is, \(\alpha\) must be sufficiently small to allow \(\curvearrowright x\) to be replaced by \(x\). An analogous principle holds for infinitesimal arguments. The right exact derivative of the inverse function reads \({f}_{r}^{-1\prime}B(y) = 1/{f}_{r}^{\prime}B(x)\), which follows from \(y = f(x)\) and the identity \(x = {f}^{-1}(f(x))\) by the chain rule with the same precision. L’Hôpital’s rule makes sense for (\(\alpha B\)-) continuous functions \(f\) and \(g\), and follows for \(f(v) = g(v) = 0\) where \(v \in A\) and \(g(\curvearrowright B\,v) \ne 0\) from\[\frac{f(\curvearrowright B\,v)}{g(\curvearrowright B\,v)}=\frac{f(\curvearrowright B\,v)-f(v)}{g(\curvearrowright B\,v)-g(v)}=\frac{{{{{f}}}_{r}^{\prime}}B(v)}{{{{{g}}}_{r}^{\prime}}B(v)}.\]Remark: If 0 lies in the interval with boundaries given by the left and right exact derivative, let the function \(f: A \rightarrow {}^{(\omega)}\mathbb{R}\) have the derivative 0 where \(A \subseteq {}^{(\omega)}\mathbb{R}\). Differentiability is thus easy to establish. Wherever the quotient is defined in the (conventional) (infinite) real case, set\[{{F}_{b}^{\prime}}B(v)\,:=\,\frac{F(\curvearrowright B\,v)-F(\curvearrowleft B\,v)}{\curvearrowright B\,v-\curvearrowleft B\,v}.\]This is especially useful when \(\curvearrowright B v - v = v - \curvearrowleft B v\) and the combined derivatives both have the same sign. This definition has the advantage of viewing \({F}_{b}^{\prime} \; B(v)\) as the “tangent slope” at the point \(v\), especially when \(F\) is \(\alpha B\)-continuous at \(v\). This can be extended to the (conventional) complex numbers analogously.

Definition: Given \(z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\),\[{\uparrow}_{z\in A}{f(z){\downarrow}Bz:={+}_{z\in A}{f(z)(\curvearrowright B\,z-z)}}\]is called the exact \(B\)-integral of the vector field \(f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\) on \(A\) and \(f(z)\) is said to be \(B\)-integrable. If this requires removing at least one point from \(A\), then the exact \(B\)-integral is called improper. Read \({\uparrow}\) as “up”. For \(\gamma: [a, b[ \, \cap \, C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}^{n}, C \subseteq \mathbb{R}\), and \(f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\),\[{\uparrow}_{\gamma }{f(\zeta ){\downarrow}B\zeta =}{\uparrow}_{t\in [a,b[ \, \cap \, C}{f(\gamma (t)){{\gamma}_{\curvearrowright }^{\prime}}D(t){\downarrow}Dt}\]where \({\downarrow}Dt > 0, \curvearrowright D t \in ]a, b] \, \cap \, C\), choosing \(\curvearrowright B \gamma(t) = \gamma(\curvearrowright D t)\), since \(\zeta = \gamma(t)\) and \({\downarrow}B\zeta = \gamma(\curvearrowright D t) - \gamma(t) = {\gamma}_{\curvearrowright }^{\prime}D(t) {\downarrow}Dt\) (i.e. for \(C = \mathbb{R}, B\) maximal in \(\mathbb{C}^{2}\), and \(D\) maximal in \(\mathbb{R}^{2})\), is called the exact \(B\)-line integral (\(B\)-LI) of the vector field \(f\) along the path \(\gamma\). Improper exact \(B\)-LIs are defined analogously to exact \(B\)-integrals, except that only interval end points may be removed from \([a, b[ \, \cap \, C\).\(\triangle\)

Remark: The (linear) exact LI of \(f\) on \({}^{(\nu)}\mathbb{K}\) does not require \(f\) to be continuous, always exists, and is usually consistent with the conventional LI. It is linear and monotone in the (conventional) (infinite) real case.
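
Since the exact LI is literally a sum of \(f(\zeta)\,{\downarrow}B\zeta\) over the successor relation, it can be imitated directly; a minimal Python sketch with a finite parameter grid `ts` standing in for \([a, b[ \, \cap \, C\):

```python
import cmath

def exact_line_integral(f, gamma, ts):
    """Sum of f(gamma(t)) * (gamma(->t) - gamma(t)) over consecutive parameter points."""
    total = 0j
    for t, t_next in zip(ts[:-1], ts[1:]):
        total += f(gamma(t)) * (gamma(t_next) - gamma(t))
    return total

# Example: integrate 1/z once around the unit circle; the exact value is 2*pi*i.
N = 100_000
ts = [2 * cmath.pi * k / N for k in range(N + 1)]
gamma = lambda t: cmath.exp(1j * t)
print(exact_line_integral(lambda z: 1 / z, gamma, ts))   # approx. 2*pi*i
```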

Intermediate value theorem: Let \(f: [a, b] \rightarrow {}^{(\omega)}\mathbb{R}\) be \(\alpha\)-continuous in \([a, b]\). Then for \(x \in [a, b]\), \(f(x)\) takes every value between min \(f(x)\) and max \(f(x)\) with precision \(< \alpha\). If \(f\) is continuous in \({}^{\omega}\mathbb{R}\), it takes every value of \({}^{\nu}\mathbb{R}\) between min \(f(x)\) and max \(f(x)\).

Proof: A gapless chain of overlapping \(\alpha\)-environments with centres \(f(x)\) exists between min \(f(x)\) and max \(f(x)\), since otherwise the \(\alpha\)-continuity of \(f\) would be contradicted. The second part of the claim follows from the fact that a deviation \(|f(\curvearrowright B\,x) - f(x)| < \tilde{\nu}\) or \(|f(x) - f(\curvearrowleft B\,x)| < \tilde{\nu}\) in \({}^{\nu}\mathbb{R}\) falls below the maximal conventionally permitted resolution.\(\square\)

Definition: For all \(x \in V\) of an \(h\)-homogeneous \(n\)-volume \(V \subseteq [{a}_{1}, {b}_{1}] \times…\times [{a}_{n}, {b}_{n}] \subseteq {}^{(\omega)}\mathbb{R}^{n}\) with \(B = {B}_{1}\times…\times{B}_{n}, {B}_{k} \subseteq {[{a}_{k}, {b}_{k}]}^{2}\) and \(|{{\downarrow}B}_{k}{x}_{k}| = h\) for all \(k \in \mathbb{N}_{\le n}^*\)\[{\uparrow}_{x \in V}{f(x){{\downarrow}Bx}}:={\uparrow}_{x\in V}{f(x){\downarrow}B({{x}_{1}},\,…,{{x}_{n}})}:={\uparrow}_{{{a}_{n}}}^{{{b}_{n}}}{…{\uparrow}_{{{a}_{1}}}^{{{b}_{1}}}{f(x){\downarrow}{{B}_{1}}{{x}_{1}}\,…\,{\downarrow}{{B}_{n}}{{x}_{n}}}}\]is called the exact \(B\)-volume integral of the \(B\)-volume integrable function \(f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}\) with \(f(x) := 0\) for all \(x \in {}^{(\omega)}\mathbb{R}^{n} \setminus V\). Improper exact \(B\)-volume integrals are defined analogously to exact \(B\)-integrals.\(\triangle\)

Remark: Because \(\mathbb{C}\) and \(\mathbb{R}^{2}\) are isomorphic, something similar exists in the complex case and \({\uparrow}_{x \in V}{{\downarrow}Bx={{\mu }_{h}}(V)}.\)

Example: Using the exact \(B\)-volume integral instead of the Lebesgue integral,\[||f|{{|}_{p}}:={{\left( {\uparrow}_{x \in V}{||f(x)|{{|}^{p}}{\downarrow}Bx} \right)}^{\tilde{p}}}\]satisfies, for arbitrary \(f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}^{m}\) and \(p \in [1, \omega]\), all the properties of a norm, including definiteness.

Example: Let \([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z} \ne \emptyset\) be an \(h\)-homogeneous subset of \([a, b[ \, \subseteq {}^{\omega}\mathbb{R}\), and write \(B \subseteq [a, b[ \, \cap \, h{}^{\omega}\mathbb{Z} \times ]a, b] \, \cap \, h{}^{\omega}\mathbb{Z}\). Now let \({T}_{r}\) be a right \(B\)-AD of a not necessarily convergent Taylor series (TS) \(t\) on \([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}\) and define \(f(x) := t(x) + \varepsilon i^{\hat{x}/h}\) for conventionally real \(x\) and \(\varepsilon \ge \tilde{\nu}\). For \(h = \tilde{\nu}\), \(f\) is nowhere continuous, and thus is conventionally nowhere differentiable or integrable on \([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}\), but for all \(h\) it holds that\[f_{r}^{\prime }B(x)=t_{r}^{\prime }B(x)-\widetilde{{\downarrow}Bx}\hat{\varepsilon}{i^{\hat{x}/h}}\]and\[{\uparrow}_{x\in [a,b[ \, \cap \, h{}^{\omega }\mathbb{Z}}{f(x){\downarrow}Bx={{T}_{r}}(b)-{{T}_{r}}(a)+\,}\check{\varepsilon} \left( {i^{\hat{a}/h}}-{i^{\hat{b}/h}} \right).\]Example: The conventionally non-measurable middle-thirds Cantor set \({C}_{\tilde{3}}\) has measure \({\mu}_{\iota}({C}_{\tilde{3}}) = \check{3}^{-\omega}\). Consider the function \(c: [0, 1] \rightarrow \{0, {\check{3}}^{\omega}\}\) defined by \(c(x) = {\check{3}}^{\omega}\) for \(x \in {C}_{\tilde{3}}\) and \(c(x) = 0\) for \(x \in [0, 1] \setminus {C}_{\tilde{3}}\). Then\[{\uparrow}_{x \in {{C}_{\tilde{3}}}}{c(x){\downarrow}x={+}_{x=0}^{1}{c(x){\downarrow}x}}={{\check{3}}^{\omega}}{{\mu }_{\iota}}\left( {{C}_{\tilde{3}}} \right)=1.\]Fubini’s theorem: For \(X, Y \subseteq {}^{(\omega)}\mathbb{K}\) and \(f: X\times Y \rightarrow {}^{(\omega)}\mathbb{K}\), a reordering of integral sums shows\[{\uparrow}_{Y}{{\uparrow}_{X}{f(x,\,y){\downarrow}Bx\,}{\downarrow}By}={\uparrow}_{X\times Y}{f(x,\,y){\downarrow}B(x,\,y)}={\uparrow}_{X}{{\uparrow}_{Y}{f(x,\,y){\downarrow}By\,}{\downarrow}Bx}.\square\]Transformation theorem: If the Jacobian \(D\varphi(x)\) exists, linear algebra teaches for \(f: \varphi(A) \rightarrow {}^{(\omega)}\mathbb{R}^n\) and \(A \subseteq {}^{\omega}\mathbb{R}^n\) (cf. Köhler, Günter: Analysis; 1. Aufl.; 2006; Heldermann; Lemgo, p. 519):\[{\uparrow}_{\varphi(A)}^{\ }{f(y){\downarrow}y={\uparrow}_{A}^{\ }{f(\varphi(x))|\det(D\varphi(x))|{\downarrow}x}}.\square\]Example: Since\[{\uparrow}_{[a,\,b[\times [r,\,s[}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{{{\downarrow}}^{2}}(x,\,y)}={\uparrow}_{a}^{b}{\left. \frac{y{\downarrow}x}{{{x}^{2}}+{{y}^{2}}} \right|_{r}^{s}}=-{\uparrow}_{r}^{s}{\left. \frac{x{\downarrow}y}{{{x}^{2}}+{{y}^{2}}} \right|_{a}^{b}}=\arctan \frac{s}{b}-\arctan \frac{r}{b}+\arctan \frac{s}{a}-\arctan \frac{r}{a}\]by the principle of latest substitution (see below), the (improper) integral\[I(a,b):={\uparrow}_{[a,\,b{{[}^{2}}}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{{{\downarrow}}^{2}}(x,\,y)}=\arctan \frac{b}{b}-\arctan \frac{a}{b}+\arctan \frac{b}{a}-\arctan \frac{a}{a}= \check{\pi} - \check{\pi} =0\]is obtained (cf. the numerical sketch after this block) and not\[I(0,1)={\uparrow}_{0}^{1}{{\uparrow}_{0}^{1}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{\downarrow}y\,{\downarrow}x}}={\uparrow}_{0}^{1}{\frac{{\downarrow}x}{1+{{x}^{2}}}}=\frac{\pi}{4}\ne -\frac{\pi}{4}=-{\uparrow}_{0}^{1}{\frac{{\downarrow}y}{1+{{y}^{2}}}}={\uparrow}_{0}^{1}{{\uparrow}_{0}^{1}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{\downarrow}x\,{\downarrow}y}}=I(0,1).\]Definition: A sequence \(({a}_{k})\) with members \({a}_{k}\) is a mapping from \({}^{(\omega)}\mathbb{Z}\) to \({}^{(\omega)}\mathbb{C}^{m}: k \mapsto {a}_{k}\). A series is a sequence \(({s}_{k})\) with \(m \in {}^{(\omega)}\mathbb{Z}\) and partial sums \({{s}_{k}}={+}_{j=m}^{k}{{{a}_{j}}}.\triangle\)
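The order-of-integration example above can be probed numerically; the following minimal Python sketch sums the same integrand over a symmetric finite grid on \([a, b[^2\) with \(a > 0\) (the values of `a`, `b`, `N` are illustrative), where the antisymmetry \(f(x, y) = -f(y, x)\) forces the double sum to vanish, in contrast to the conventional iterated improper integrals over \([0, 1]^2\) that give \(\pm\pi/4\) depending on the order:

```python
def f(x, y):
    return (x * x - y * y) / (x * x + y * y) ** 2

a, b, N = 0.01, 1.0, 400               # illustrative bounds and grid size
h = (b - a) / N
grid = [a + k * h for k in range(N)]   # left endpoints covering [a, b[

# Double sum over the full grid of [a, b[^2: antisymmetry f(x, y) = -f(y, x)
# makes it vanish, whichever order the two finite sums are written in.
total = sum(f(x, y) * h * h for x in grid for y in grid)
print(total)                            # approx. 0 up to rounding
```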

Definition: A sequence \(({a}_{k})\) with \(k \in {}^{(\omega)}\mathbb{N}^{*}, {a}_{k} \in {}^{(\omega)}\mathbb{C}\) and \(\alpha \in ]0, \tilde{\nu}]\) is called \(\alpha\)-convergent to \(a \in {}^{(\omega)}\mathbb{C}\) if there exists \(m \in {}^{(\omega)}\mathbb{N}^{*}_{\le k}\) where \(|{a}_{k} - a| < \alpha\) for all \({a}_{k}\) such that \(k - m\) is not too small. The set \(\alpha\)-\(A\) of all such \(a\) is called the set of \(\alpha\)-limit values of \(({a}_{k})\). A uniquely determined representative of this set (e.g. the final value or mean value) is called the \(\alpha\)-limit value \(\alpha\)-\(a\). For the case \(a = 0\), the sequence is called a zero sequence. If the inequality only holds for \(\alpha = \tilde{\nu}\), the \(\alpha\)- is omitted. Usually, \(k\) will be chosen maximal and \(\alpha\) minimal.

Remark: Conventional limit values are hardly more precise than \(\mathcal{O}(\tilde{\omega})\). Their actual transcendence or algebraicity is seldom regarded! To avoid the exclusive relevance of the largest index of each sequence (cf. Heuser, Harro: Lehrbuch der Analysis Teil 1; 17., akt. Aufl.; 2009; Vieweg + Teubner; Wiesbaden, p. 144), the conventional definition requires the completion that infinitely many or almost all members of the sequence have an arbitrarily small distance from the limit value. Only finitely many may have a larger distance. Then only monotone convergence is valid (cf. loc. cit., p. 155).

Remark: The fundamental theorem of set theory makes the representation of each positive number by a determined, unique, infinite decimal fraction baseless (cf. loc. cit., p. 27 f.). Putting \(\varepsilon := \; \curvearrowright 0\), any proof claiming that, for \(\varepsilon \in {}^{(\omega)}\mathbb{R}_{>0}\) – especially for all \(\varepsilon \in {}^{(\nu)}\mathbb{R}_{>0}\) – there exists a real number \(\varepsilon\tilde{r}\) with real \(r \in {}^{(\omega)}\mathbb{R}_{>1}\), is false. Otherwise, an infinite regression may occur. The \(\varepsilon\delta\)-definition of the limit value (it is questionable that \(\delta\) exists; loc. cit., p. 235 f.) requires \(\varepsilon\) to be a specific multiple of \(\curvearrowright 0\), making the corresponding definition of continuity also true (see loc. cit., p. 215 f.).

Remark: Consider, for example, the real function that doubles every real value but is not even uniformly continuous. Uniform continuity need not be considered separately, since in general \(\delta := \; \curvearrowright 0\) is taken and \(\varepsilon\) is chosen accordingly larger. If two function values do not satisfy the conditions, then the function is not continuous at that point. Thus, continuity is equivalent to uniform continuity, by choosing the largest \(\varepsilon\) from all admissible infinitesimal values. Just as easily, continuity is equivalent to Hölder continuity.

Remark: Here infinite real constants may be allowed. The same holds for uniform convergence, since the maximum over the indices may be chosen so that this index works for every argument, and \(\acute{\omega}\) is sufficient in every case. Otherwise, pointwise convergence fails as well. Thus, uniform convergence is equivalent to pointwise convergence, by choosing the largest of all admissible infinitesimal values.

Example: The \(\hat{\iota}\)-continuous function \(f: {}^{(\omega)}\mathbb{R} \rightarrow \{0, \iota\}\) defined by \(f(x):=\check{\iota}(i^{\hat{x}/{\iota}}+1)\) consists of only the local minima 0 and the local maxima \(\iota\), and has the left and right exact derivatives \(\pm 1\).

Example: The function \(f: [0, 1] \rightarrow [\acute{\iota}, -\acute{\iota}]\) for \(f(x) := i^{\hat{q}} \acute{q}/q\), if \(x\) is rational and has the denominator \(q \in \mathbb{N}^{*}\), and \(f(x) := 0\) else, has the two relative extrema \(\pm \acute{\iota}\) (cf. Gelbaum, Bernard R.; Olmsted, John M. H.: Counterexamples in Analysis; Republ., unabr., slightly corr.; 2003; Dover Publications; Mineola, New York, p. 24).

First fundamental theorem of exact differential and integral calculus for LIs: The function \(F(z)={\uparrow}_{\gamma }{f(\zeta ){\downarrow}B\zeta }\) where \(\gamma: [d, x[ \, \cap \, C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}, C \subseteq \mathbb{R}, f: A \rightarrow {}^{(\omega)}\mathbb{K}, d \in [a, b[ \, \cap \, C\), and choosing \(\curvearrowright B \gamma(x) = \gamma(\curvearrowright D x)\) is exactly \(B\)-differentiable, and for all \(x \in [a, b[ \, \cap \, C\) and \(z = \gamma(x)\)\[F^{\prime} \curvearrowright B(z) = f(z).\]Proof:\[\begin{aligned}{\downarrow}B(F(z)) &={\uparrow}_{t\in [d,x] \cap C}{f(\gamma (t)){{\gamma }_{\curvearrowright }^{\prime}}D(t){\downarrow}Dt}-{\uparrow}_{t\in [d,x[ \, \cap \, C}{f(\gamma (t)){{\gamma }_{\curvearrowright }^{\prime}}D(t){\downarrow}Dt} ={\uparrow}_{x}{f(\gamma (t))\frac{\gamma (\curvearrowright Dt)-\gamma (t)}{\curvearrowright Dt-t}{\downarrow}Dt} \\ &=f(\gamma (x)){{\gamma}_{\curvearrowright }^{\prime}}D(x){\downarrow}Dx=\,f(\gamma (x))(\curvearrowright B\gamma (x)-\gamma (x))=f(z){\downarrow}Bz.\square\end{aligned}\]Second fundamental theorem of exact differential and integral calculus for LIs: According to the conditions from above, it holds with \(\gamma: [a, b[ \, \cap \, C \rightarrow {}^{(\omega)}\mathbb{K}\) that\[ F(\gamma (b))-F(\gamma (a))={\uparrow}_{\gamma }{{{F}_{\curvearrowright }^{\prime}}B(\zeta ){\downarrow}B\zeta }.\]Proof: \(F(\gamma (b))-F(\gamma (a))\) \(={+}_{t\in [a,b[ \, \cap \, C}{F(\curvearrowright B\,\gamma (t))}-F(\gamma (t))\) \(={+}_{t\in [a,b[ \, \cap \, C}{{{F}_{\curvearrowright }^{\prime}}B(\gamma (t))(\curvearrowright B\,\gamma (t)-\gamma (t))}\) \(={\uparrow}_{t\in [a,b[ \, \cap \, C}{{{F}_{\curvearrowright }^{\prime}}B(\gamma (t)){{\gamma }_{\curvearrowright }^{\prime}}D(t){\downarrow}Dt}\) \(={\uparrow}_{\gamma }{{{F}_{\curvearrowright }^{\prime}}B(\zeta ){\downarrow}B\zeta }.\square\)

Corollary: If \(f\) has an AD \(F\) on a closed path (CP) \(\gamma\), it holds with the conditions above that \({\uparrow}_{\gamma }{f(\zeta ){\downarrow}B\zeta }=0.\square\)

Remark: The conventionally real case of both fundamental theorems may be established analogously. Given \(u, v \in [a, b[ \, \cap \, C, u \ne v\) and \(\gamma(u) = \gamma(v)\), it may be the case that \(\curvearrowright B \gamma(u) \ne \; \curvearrowright B \gamma(v)\).
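
Both fundamental theorems amount to the telescoping of finite sums; a minimal Python sketch on a uniform real grid (the spacing `h` stands in for the infinitesimal successor step):

```python
h = 1e-3
xs = [k * h for k in range(1001)]        # grid for [0, 1] with successor x -> x + h
F = lambda x: x ** 3                     # an arbitrary "antiderivative"

# Exact successor-derivative of F on the grid:
dF = lambda x: (F(x + h) - F(x)) / h

# Second fundamental theorem: the exact integral of F' telescopes to F(b) - F(a).
integral = sum(dF(x) * h for x in xs[:-1])
print(integral, F(xs[-1]) - F(xs[0]))    # both equal 1.0 up to rounding
```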

Leibniz integral rule: For \(f: {}^{(\omega)}\mathbb{K}^{\grave{n}} \rightarrow {}^{(\omega)}\mathbb{K}, a, b: {}^{(\omega)}\mathbb{K}^{n} \rightarrow {}^{(\omega)}\mathbb{K}, \curvearrowright B x := {(s, {x}_{2}, …, {x}_{n})}^{T}\), and \(s \in {}^{(\omega)}\mathbb{K} \setminus \{{x}_{1}\}\), choosing \(\curvearrowright D a(x) = a(\curvearrowright B x)\) and \(\curvearrowright D b(x) = b(\curvearrowright B x)\), it holds that\[\frac{{\downarrow} }{{\downarrow} {{x}_{1}}}\left( {\uparrow}_{a(x)}^{b(x)}{f(x,t){\downarrow}Dt} \right)={\uparrow}_{a(x)}^{b(x)}{\frac{{\downarrow} f(x,t)}{{\downarrow} {{x}_{1}}}{\downarrow}Dt}+\frac{{\downarrow} b(x)}{{\downarrow} {{x}_{1}}}f(\curvearrowright Bx,b(x))-\frac{{\downarrow} a(x)}{{\downarrow} {{x}_{1}}}f(\curvearrowright Bx,a(x)).\]Proof:\[\begin{aligned}\frac{{\downarrow} }{{\downarrow} {{x}_{1}}}\left( {\uparrow}_{a(x)}^{b(x)}{f(x,t){\downarrow}Dt} \right) &={\left( {\uparrow}_{a(\curvearrowright Bx)}^{b(\curvearrowright Bx)}{f(\curvearrowright Bx,t){\downarrow}Dt}-{\uparrow}_{a(x)}^{b(x)}{f(x,t){\downarrow}Dt} \right)}/{{\downarrow} {{x}_{1}}}\; \\ &={\left( {\uparrow}_{a(x)}^{b(x)}{(f(\curvearrowright Bx,t)-f(x,t)){\downarrow}Dt}+{\uparrow}_{b(x)}^{b(\curvearrowright Bx)}{f(\curvearrowright Bx,t){\downarrow}Dt}-{\uparrow}_{a(x)}^{a(\curvearrowright Bx)}{f(\curvearrowright Bx,t){\downarrow}Dt} \right)}/{{\downarrow} {{x}_{1}}}\; \\ &={\uparrow}_{a(x)}^{b(x)}{\frac{{\downarrow} f(x,t)}{{\downarrow} {{x}_{1}}}{\downarrow}Dt}+\frac{{\downarrow} b(x)}{{\downarrow} {{x}_{1}}}f(\curvearrowright Bx,b(x))-\frac{{\downarrow} a(x)}{{\downarrow} {{x}_{1}}}f(\curvearrowright Bx,a(x)).\square\end{aligned}\]Remark: Complex integration allows a path whose start and end points are the limits of integration. If \(\curvearrowright D a(x) \ne a(\curvearrowright B x)\), then multiply the final summand by \((\curvearrowright D a(x) - a(x))/(a(\curvearrowright B x) - a(x))\). If \(\curvearrowright D b(x) \ne b(\curvearrowright B x)\), then multiply the penultimate summand by \((\curvearrowright D b(x) - b(x))/(b(\curvearrowright B x) - b(x))\). Let \(n \in {}^{\omega}\mathbb{N}^{*}\) and \(x \in [0, 1]\) in each case for the following examples (cf. Heuser, loc. cit., p. 540–543).

1. The sequence \({f}_{n}(x) = \sin(nx)/n^{\tilde{2}}\) does not tend to \(f(x) = 0\) as \(n \rightarrow \omega\), but instead to \(f(x) = \tilde{\omega}^{\tilde{2}} \sin(\omega x)\) with (continuous) derivative \(f^{\prime}(x) = {\omega}^{\tilde{2}} \cos(\omega x)\) instead of \(f^{\prime}(x) = 0\).

2. The sequence \({f}_{n}(x) = x - \tilde{n}x^{n}\) tends to \(f(x) = x - \tilde{\omega}{x}^{\omega}\) as \(n \rightarrow \omega\) instead of \(f(x) = x\), with derivative \(f^{\prime}(x) = 1 - {x}^{\acute{\omega}}\) instead of \(f^{\prime}(x) = 1\). Conventionally, the limit of \({f}_{n}^{\prime}(x) = 1 - {x}^{\acute{n}}\) is discontinuous at the point \(x = 1\).

3. The sequence \({f}_{n}(x) = nx(-\acute{x})^{n}\) does not tend to \(f(x) = 0\) as \(n \rightarrow \omega\), but to the continuous function \(f(x) = {\omega x(-\acute{x})}^{\omega}\), and takes the value \(\tilde{e}\) when \(x = \tilde{\omega}\).

Definition: According to the trapezoidal rule (cf. Grosche, Günter (Hrsg.): Teubner-Taschenbuch der Mathematik Teil 2; 7. Aufl.; 1995; Teubner; Leipzig, p. 1130 f.), let\[{}^{T}{\uparrow}_{z\in A}{f(z){\downarrow}Bz:={+}_{z\in A}{\tilde{2}(f(z)+f(\curvearrowright B\,z))(\curvearrowright B\,z-z)}}.\]According to the midpoint rule – assuming that \(\tilde{2}(z + \curvearrowright B z)\) exists – let\[{}^{M}{\uparrow}_{z\in A}{f(z){\downarrow}Bz:={+}_{z\in A}{f(\tilde{2}(z\,+\curvearrowright Bz}))(\curvearrowright B\,z-z)}.\triangle\]Remark: In the first fundamental theorem, the derivative \({\downarrow}B(F(z))/{\downarrow}Bz\) can be tightened to the arithmetic mean \(\tilde{2}(f(z) + f(\curvearrowright B z))\) resp. \(f(\tilde{2}(z + \curvearrowright B z))\), and similarly, in the second fundamental theorem, \(F(\gamma(b)) - F(\gamma(a))\) can be tightened to \(\tilde{2}(F(\gamma(b)) + F(\curvearrowleft B \gamma(b))) - \tilde{2}(F(\gamma(a)) + F(\curvearrowright B \gamma(a)))\) resp. \(F(\tilde{2}(\gamma(b) + \curvearrowleft B \gamma(b))) - F(\tilde{2}(\gamma(a) + \curvearrowright B \gamma(a)))\). This yields approximately the original results when \(f\) and \(F\) are sufficiently \(\alpha\)-continuous at the boundary.
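
A minimal Python comparison of the plain successor sum with the trapezoidal and midpoint variants defined above (uniform grid; the integrand and spacing are illustrative choices):

```python
import math

h = 1e-3
xs = [k * h for k in range(1000)]        # left endpoints of [0, 1[
f = math.exp                             # integrand; exact integral is e - 1

left = sum(f(x) * h for x in xs)                       # plain successor sum
trap = sum((f(x) + f(x + h)) * h / 2 for x in xs)      # trapezoidal rule
mid = sum(f((x + (x + h)) / 2) * h for x in xs)        # midpoint rule
exact = math.e - 1
print(left - exact, trap - exact, mid - exact)         # trapezoid/midpoint are far closer
```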

Definition: For a CP \(\gamma: [a, b[ \rightarrow {}^{(\omega)}\mathbb{C}\) and \(z \in {}^{(\omega)}\mathbb{C}, \widetilde{\hat{\pi}i}{\uparrow}_{\gamma}{\widetilde{\zeta-z}{\downarrow}\zeta}\) is called winding number or index ind\(_{\gamma}(z) \in \mathbb{Z}\). The coefficients \(a_{j,-1}\) of the function \(f: A \rightarrow {}^{(\omega)}\mathbb{C}\) for \(A \subseteq {}^{(\omega)}\mathbb{C}, n \in {}^{\omega}\mathbb{N}, a_{jk}, c_j \in {}^{(\omega)}\mathbb{C}\) and\[f(z)={+}_{j=0}^{n}{+}_{k=-\omega}^{\omega}{a_{jk}{(z-c_j)}^k}\]as well as pairwise different \(c_j\) are called residues res\(_{c_j}f.\triangle\)

Integral formula: The last corollary shows that for \(f: A \rightarrow {}^{(\omega)}\mathbb{C}\) and the CP \(\gamma([a, b[) \subseteq A \rightarrow {}^{(\omega)}\mathbb{C}\) the equation \(f(z)\) ind\(_\gamma(z) = \widetilde{\hat{\pi}i}{\uparrow}_{\gamma}{\widetilde{\zeta-z}f(\zeta){\downarrow}\zeta}\) holds, if and only if \(g(\zeta) = \widetilde{\zeta-z}(f(\zeta)-f(z))\) implies that \({\uparrow}_{\gamma}^{\ }{g(\zeta)}{\downarrow}\zeta=0\), meaning that especially \(g\) has on \(\gamma([a,b[)\) an AD.\(\square\)

Residue theorem: For \(\gamma\) and \(f\) as above, it holds that \(\widetilde{\hat{\pi}i}{\uparrow}_{\gamma}{f(\zeta){\downarrow}\zeta}={+}_{j=0}^{n}{{\rm ind}_\gamma(c_j)}{\rm res}_{c_j}f.\)

Proof: All \(j \in \mathbb{N}_{\le n}\) and all \(k \in {}^{\omega}\mathbb{Z} \setminus \{-1\}\) provide that \({\uparrow}_{\gamma}{{a_{jk}(\zeta-c_j)}^k{\downarrow}\zeta}=0\) and \(\widetilde{\hat{\pi}i}{\uparrow}_{\gamma}{{a_{j,-1}}\widetilde{\zeta-c_j}{\downarrow}\zeta}={\rm ind}_\gamma(c_j){\rm res}_{c_j}f.\square\)

Definition: Let \(f: A \rightarrow {}^{(\omega)}\mathbb{K}\) for \(A \subseteq {}^{(\omega)}\mathbb{K}\). The left-hand side of\[\frac{{\downarrow}_{\curvearrowright B\,z}^{2}Bf(z)}{{{({\downarrow}\curvearrowright B\,z)}^{2}}}:=\frac{f(\curvearrowright B(\curvearrowright B\,z))-2f(\curvearrowright B\,z)+f(z)}{{{({\downarrow}\curvearrowright B\,z)}^{2}}}\]is called the second derivative of \(f\) at \(z \in A\) in the direction \(\curvearrowright B z.\triangle\)

Remark: Higher derivatives are defined analogously. Each number \({m}_{n} \in {}^{\omega}\mathbb{N}\) of derivatives for \(n \in {}^{\omega}\mathbb{N}^{*}\) is written as an exponent after the \(n\)-th variable to be differentiated. If \(n \ge 2\), the derivatives are called partial and \({\downarrow}\) replaces \(d\). The exponent to be specified in the numerator is the sum of all \({m}_{n}\). Since \(1/(-1)! = 0\), for \(g\) like \(f\) and \(p \in {}^{\omega}\mathbb{N}^{*}\) the Leibniz product rule then follows (a symbolic check is sketched after this block):\[(fg)^{(p)} = {+}_{m+n=p}\tbinom{p}{m}f^{(m)} g^{(n)}.\]Proof: For \(p = 1\), the product rule mentioned above holds. Induction step from \(p\) to \(\grave{p}\):\[\begin{aligned}(fg)^{(\grave{p})} &\underset{p}{=} {+}_{m+1+n=\grave{p}} {\left (\tbinom{p}{m}+\tbinom{p}{\grave{m}} \right ) f^{(\grave{m})} g^{(n)}}+{+}_{m+1+n=\grave{p}} {\tbinom{p}{m} f^{(m)} g^{(\grave{n})}} -{+}_{m+1+n=\grave{p}} {\tbinom{p}{\grave{m}} f^{(\grave{m})} g^{(n)}} \\ &=\left(\left(fg\right)^\prime\right)^{(p)}\underset{1}{=}\left(f^\prime g+fg^\prime\right)^{\left(p\right)}=\left(f^\prime g\right)^{\left(p\right)}+\left(fg^\prime\right)^{\left(p\right)}={+}_{m+n=\grave{p}}{\tbinom{\grave{p}}{m} f^{\left(m\right)}g^{\left(n\right)}}.\square\end{aligned}\]Taylor’s theorem: \({+}_m |f^{(m)}(a)| > \tilde{\nu}, f^{(m)}(a) \in {}^{\omega}\mathbb{C}, g(z) = (z-a)^\omega, |z - a| < \tilde{e}\omega\) and \(z \rightarrow a \in {}^{\omega}\mathbb{C}\) imply\[f(z)=T_\omega(z):={+}_{m=0}^{\omega}{\widetilde{m!}f^{(m)}(a)(z-a)^m}.\]Proof: From L’Hôpital’s rule, it follows that\[f(z)=\frac{(fg)(z)}{g(z)}=\frac{(fg)^\prime(z)}{g^\prime(z)}=…=\frac{(fg)^{(\acute{\omega})}(z)}{g^{(\acute{\omega})}(z)}=\frac{(fg)^{(\omega)}(z)}{g^{(\omega)}(z)}=\widetilde{\omega!}(fg)^{(\omega)}(z)\]and the Leibniz product rule gives\[(fg)^{(\omega)}(z)={+}_{m+n=\omega}{\tbinom{\omega}{m}f^{(m)}(a)g^{(\omega-m)}(z)}=g^{(\omega)}(z){+}_{m=0}^{\omega}{\widetilde{m!}f^{(m)}(a)(z-a)^m}.\square\]Conclusion: The second fundamental theorem implies for the remainder \(R_n(z) := f(z) - T_n(z) = f(a) + {\uparrow}_{a}^{z}{f^\prime(t){\downarrow}t} - T_n(z)\) by the mean value theorem, where \(\xi \in \mathbb{B}_a(z)\) and \(p\in\mathbb{N}_{\le n}^*\),\[R_n(z)={\uparrow}_{a}^{z}{\widetilde{n!}(z-t)^nf^{(\grave{n})}(t){\downarrow}t}={\widetilde{pn!}(z-\xi)}^{\grave{n}-p}f^{(\grave{n})}(\xi)(z-a)^p.\]Proof by induction with integration by parts and induction step from \(\acute{n}\) to \(n\) (for \(n = 0\), see above):\[f(z)=T_{\acute{n}}(z)+\widetilde{n!}(z-a)^{n}f^{(n)}(a)+{\uparrow}_{a}^{z}{\widetilde{n!}(z-t)^{n}f^{(\grave{n})}(t){\downarrow}t}=T_n(z)+R_n(z).\square\]Definition: The derivative of a function \(f: A \rightarrow {}^{(\omega)}\mathbb{R}\), where \(A \subseteq {}^{(\omega)}\mathbb{R}\), is defined to be 0 if and only if 0 lies in the interval defined by the boundaries of the left and right exact derivatives.\(\triangle\)
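
The general Leibniz product rule above lends itself to a symbolic check; a minimal sketch using sympy (the particular functions `f` and `g` are arbitrary illustrations):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) * sp.exp(x)      # arbitrary smooth example functions
g = x ** 4 + 3 * x

p = 5
lhs = sp.diff(f * g, x, p)     # (fg)^(p)
rhs = sum(sp.binomial(p, m) * sp.diff(f, x, m) * sp.diff(g, x, p - m) for m in range(p + 1))
print(sp.simplify(lhs - rhs))  # -> 0, confirming the general Leibniz product rule
```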

Remark: It holds that \((e^{\iota}-1)/\iota = 1 = \exp(0)^\prime\) and thus \({\downarrow} _ey/{\downarrow}y = \tilde{y}\) from \({\downarrow}y/{\downarrow}x = y := e^x\), as well as \({\downarrow} x^n = {\downarrow}(e^{n _ex}) = nx^{\acute{n}}{\downarrow}x\) for \(n \in {}^{\omega}\mathbb{N}^{*}\) by the product and chain rules. Unit circle and triangles easily show the relations sin \(\iota/1 = (\cos \iota - 1)/\iota\) and \(\cos \iota/1 = -\sin \iota/\iota\). Hence, sin(0)\({}^\prime\) = cos(0) and cos(0)\({}^\prime = -\)sin(0) hold, as well as, for \(m \in {}^{\omega}\mathbb{N}\) and \(n = \hat{k}\), de Moivre’s formula:\[(\cos z + i \sin z)^m = e^{imz}=1+{+}_{k=1}^{\check{\omega}}\left({\widetilde{\acute{n}!}(imz)}^{\acute{n}}+{\widetilde{n!}(imz)}^{n}\right)=\cos{\left(mz\right)}+i \sin\left(mz\right).\square\]Euler’s sine formula: The zero and identity theorem for series (cf. Walter, Wolfgang: Analysis 1; 3., verb. Aufl.; 1992; Springer; Berlin, p. 41) plus the theorem above analogously yield \(\Gamma(\tilde{2}) = \sqrt{\pi}\) for the gamma function \(\Gamma(z) := \omega!\omega^z/{\times}_{k=0}^{\omega}{(z + k)}\) where \(z \in {}^{\nu}\mathbb{C} \setminus -{}^{\nu}\mathbb{N}\) from\[\frac{e^{\hat{\pi} iz} - 1}{e^{\pi iz}\hat{\pi} iz} = \frac{e^{\pi iz} - e^{-\pi iz}}{\hat{\pi} iz} = \frac{\sin(\pi z)}{\pi z} = {+}_{k=0}^{\omega}{\frac{(\pi iz)^{n}}{\grave{n}!}} = \frac{\tilde{z}}{\Gamma(z)\Gamma(-\acute{z})},\]since all \(\hat{\omega}\) zeros of the left- and right-hand side match due to \(e^{\pi in} = 1 + i \sin 0.\square\)

Conclusion: This shows \(W = \check{\pi}\) for the Wallis product \(W := {\times}_{k=1}^{\omega}{k^2/(k^2-\tilde{4})}\) from \[\frac{\Gamma(\tilde{2})^2}{2W} = \frac{\omega 4^{\grave{\omega}}{\omega!}^2}{{(\hat{\omega} + 1)!!}^2} \frac{(\hat{\omega} + 1){(\hat{\omega} - 1)!!}^2}{2\,4^{\omega}{\omega!}^2} = \frac{\hat{\omega}}{\hat{\omega} + 1} := 1 = \frac{\check{\pi}}{W}.\square\]Stirling formula: The asymptotic approximation \(\omega! = ({\hat{\pi}\omega})^{\tilde{2}}(\tilde{e}\omega)^{\omega}(1+\mathcal{O}(\tilde{\omega}))\) holds.

Proof: Logarithmise \(\tbinom{\hat{\omega}}{\omega} = \frac{(\tilde{\pi}{\omega})^{\tilde{2}}4^{\omega}}{\omega+\tilde{2}}\sim\frac{4^{\omega}}{{(\pi\omega)}^{\tilde{2}}}\) as before to obtain \(\omega!= d(\pi\omega)^{\tilde{2}}(c\omega)^{\omega}\) for \(c, d \in {}^{\nu}\mathbb{R}_{>0}\) from\[{+}_{n=1}^{\omega}{{\;}_en} + {}_e4\,\omega - \tilde{2}{}_e(\pi\omega) = {+}_{n=\grave{\omega}}^{\hat{\omega}}{{\;}_en} = \omega{}_e(b\omega),\]where \(b \in \,]1, 2[\). Indexing \(c\) and \(d\) by \(\omega\), the quotient \(c_{\grave{\omega}}^{\grave{\omega}} / c_{\omega}^{\omega}\) yields \(c = \tilde{e}\) and \(d_{\omega}^{2}/d_{\hat{\omega}}\) reveals \(d = 2^{\tilde{2}}.\square\)
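
Both the Wallis product value \(\check{\pi}\) and the Stirling approximation can be checked numerically; a minimal Python sketch in which the finite `n` replaces the infinite \(\omega\) of the text:

```python
import math

n = 10_000

# Wallis product: prod k^2 / (k^2 - 1/4) tends to pi/2.
W = 1.0
for k in range(1, n + 1):
    W *= k * k / (k * k - 0.25)
print(W, math.pi / 2)

# Stirling: n! ~ sqrt(2*pi*n) * (n/e)^n, compared on the logarithmic scale.
log_fact = math.lgamma(n + 1)
log_stirling = 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)
print(log_fact - log_stirling)     # approx. 1/(12 n), i.e. the O(1/n) correction
```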

Exchange theorem: The result of multiple partial derivatives of a function \(f: A \rightarrow {}^{(\omega)}\mathbb{K}\) is independent of the order of differentiation, provided that variables are only evaluated and limits are only computed at the end, if applicable (principle of latest substitution).

Proof: The derivative is uniquely determined: This is clear up to the second derivative, and the result follows by (transfinite) induction for higher-order derivatives.\(\square\)

Example: Let \(f: {}^{\omega}\mathbb{R}^{2} \rightarrow {}^{\omega}\mathbb{R}\) be defined as \(f(0, 0) = 0\) and \(f(x, y) = {xy}^{3}/({x}^{2} + {y}^{2})\) otherwise. Then:\[\frac{{{{\downarrow} ^2}f}}{{{\downarrow} x{\downarrow} y}} = \frac{{{y^6} + 6{x^2}{y^4} - 3{x^4}{y^2}}}{{{{({x^2} + {y^2})}^3}}} = \frac{{{{\downarrow} ^2}f}}{{{\downarrow} y{\downarrow} x}}\]with value \(\tilde{2}\) at the point (0, 0), even though the left-hand side equals \(y\) for \(x = 0\) and the right-hand side equals 0 for \(y = 0\) in\[\frac{{{\downarrow} f}}{{{\downarrow} x}} = \frac{{{y^5} - {x^2}{y^3}}}{{{{({x^2} + {y^2})}^2}}} \ne \frac{{x{y^4} + 3{x^3}{y^2}}}{{{{({x^2} + {y^2})}^2}}} = \frac{{{\downarrow} f}}{{{\downarrow} y}},\]where partially differentiating with respect to the other variable gives \(1\) on the left, but \(0\) on the right.
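
The partial derivatives of this example can be reproduced symbolically away from the origin; a minimal sympy sketch (the origin itself is not probed, matching the principle of latest substitution):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * y ** 3 / (x ** 2 + y ** 2)

fx = sp.simplify(sp.diff(f, x))        # equals (y^5 - x^2 y^3)/(x^2 + y^2)^2
fy = sp.simplify(sp.diff(f, y))        # equals (x y^4 + 3 x^3 y^2)/(x^2 + y^2)^2
fxy = sp.diff(f, x, y)                 # mixed partial, x first, then y
fyx = sp.diff(f, y, x)                 # mixed partial, y first, then x

print(sp.simplify(fxy - fyx))          # -> 0: the mixed partials agree away from (0, 0)
print(sp.simplify(fx.subs(x, 0)))      # -> y, the value quoted in the example
print(sp.simplify(fy.subs(y, 0)))      # -> 0
```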

Theorem: Split \(F: A \rightarrow {}^{(\omega)}\mathbb{C}\) into real and imaginary parts as \(F(z) := U(z) + i V(z) := f(x, y) := u(x, y) + i v(x, y)\) on \(h\)-homogeneous \(A \subseteq {}^{(\omega)}\mathbb{C}\) with \(h = |{\downarrow}Bx| = |{\downarrow}By|\) and the NR \(B \subseteq {A}^{2}\). Then \(F\) is holomorphic for every \(z = x + i y \in A\) if the Cauchy-Riemann differential equations are satisfied by \(B\):\[\frac{{{\downarrow} Bu}}{{{\downarrow} Bx}} = \frac{{{\downarrow} Bv}}{{{\downarrow} By}},\,\,\frac{{{\downarrow} Bv}}{{{\downarrow} Bx}} = - \frac{{{\downarrow} Bu}}{{{\downarrow} By}}.\]Proof: The claim follows directly from \(\frac{{{\downarrow} Bu}}{{{\downarrow} Bx}} +i \frac{{{\downarrow} Bv}}{{{\downarrow} Bx}} = \frac{{{\downarrow} Bv}}{{{\downarrow} By}} - i\frac{{{\downarrow} Bu}}{{{\downarrow} By}} = \frac{{{\downarrow} BF}}{{{\downarrow} Bz}} = {\downarrow}BU(z)+ i{\downarrow}BV(z)\).\(\square\)

Remark: The following necessary and sufficient condition is valid for \(F\) to be holomorphic:\[F^{\prime}B(\bar z) = \frac{{{\downarrow} Bf}}{{{\downarrow} Bx}} = i\frac{{{\downarrow} Bf}}{{{\downarrow} By}} = \tilde{2}\left( {\frac{{{\downarrow} Bf}}{{{\downarrow} Bx}} + i\frac{{{\downarrow} Bf}}{{{\downarrow} By}}} \right) = \frac{{{\downarrow} BF}}{{{\downarrow} B\bar z}} = 0.\]Green’s theorem: Given NRs \(B \subseteq {D}^{2}\) for some \(h\)-domain \(D \subseteq {}^{(\omega)}\mathbb{R}^{2}\), infinitesimal \(h = |{\downarrow}Bx|= |{\downarrow}By| = |\curvearrowright B \gamma(t) - \gamma(t)| = \mathcal{O}({\tilde{\omega}}^{m})\), sufficiently large \(m \in \mathbb{N}^{*}, (x, y) \in D, {D}^{-} := \{(x, y) \in D : (x + h, y + h) \in D\}\), and a simple CP \(\gamma: [a, b[\rightarrow {\downarrow} D\) followed anticlockwise, choosing \(\curvearrowright B \gamma(t) = \gamma(\curvearrowright A t)\) for \(t \in [a, b[, A \subseteq {[a, b]}^{2}\), the following equation holds for sufficiently \(\alpha\)-continuous functions \(u, v: D \rightarrow \mathbb{R}\) with not necessarily continuous \({\downarrow} Bu/{\downarrow} Bx, {\downarrow} Bu/{\downarrow} By, {\downarrow} Bv/{\downarrow} Bx\) and \({\downarrow} Bv/{\downarrow} By\):\[{\uparrow}_{\gamma }{(u\,{\downarrow}Bx+v\,{\downarrow}By)}={\uparrow}_{(x,y)\in {{D}^{-}}}{\left( \frac{{\downarrow} Bv}{{\downarrow} Bx}-\frac{{\downarrow} Bu}{{\downarrow} By} \right){\downarrow}B(x,y)}.\]Proof: Only \(D := \{(x, y) : r \le x \le s, f(x) \le y \le g(x)\}, r, s \in {}^{(\omega)}\mathbb{R}, f, g : {\downarrow} D \rightarrow {}^{(\omega)}\mathbb{R}\) is proved, since the proof is analogous for each case rotated by \(\iota\). Every \(h\)-domain is a union of such sets. It is sufficient to show\[{\uparrow}_{\gamma }{u\,{\downarrow}Bx}=-{\uparrow}_{(x,y)\in {{D}^{-}}}{\frac{{\downarrow} Bu}{{\downarrow} By}{\downarrow}B(x,y)},\]because the other relation is given analogously. Neglecting the regions of \(\gamma\) with \({\downarrow}Bx = 0\) and \(t := h(u(s, g(s)) - u(r, g(r)))\) shows\[-{\uparrow}_{\gamma }{u\,{\downarrow}Bx}-t={\uparrow}_{r}^{s}{u(x,g(x)){\downarrow}Bx}-{\uparrow}_{r}^{s}{u(x,f(x)){\downarrow}Bx}={\uparrow}_{r}^{s}{{\uparrow}_{f(x)}^{g(x)}{\frac{{\downarrow} Bu}{{\downarrow} By}}{\downarrow}By{\downarrow}Bx}={\uparrow}_{(x,y)\in {{D}^{-}}}{\frac{{\downarrow} Bu}{{\downarrow} By}{\downarrow}B(x,y)}.\square\]Fundamental theorem of algebra: Every non-constant polynomial \(p \in {}^{(\omega)}\mathbb{C}\) has at least one complex root.

Indirect proof: By performing an affine substitution of variables, reduce to the case \(\widetilde{p(0)} \ne \mathcal{O}(\iota)\). Suppose that \(p(z) \ne 0\) for all \(z \in {}^{(\omega)}\mathbb{C}\). Since \(f(z) := \widetilde{p(z)}\) is holomorphic, it holds that \(f(\tilde{\iota}) = \mathcal{O}(\iota)\). By the mean value inequality \(|f(0)| \le {|f|}_{\gamma}\) (Remmert, loc. cit., p. 160) for \(\gamma = \partial\mathbb{B}_{r}(0)\) and arbitrary \(r \in {}^{(\omega)}\mathbb{R}_{>0}\), and hence \(f(0) = \mathcal{O}(\iota)\), which is a contradiction.\(\square\)

Goursat’s integral lemma: If \(f \in \mathcal{O}(\Delta)\) on a triangle \(\Delta \subseteq {}^{(\omega)}\mathbb{C}\) but has no AD on \(\Delta\), then (cf. Freitag, Eberhard; Busam, Rolf: Funktionentheorie 1; 4., korr. u. erw. Aufl.; 2006; Springer; Berlin, p. 74 ff.)\[I:={\uparrow}_{\partial\Delta }{f(\zeta ){\downarrow}B\zeta }=0.\]Refutation of conventional proofs based on estimation by means of a complete triangulation: The direction in which \(\partial\Delta\) is traversed is irrelevant. If \(\Delta\) is fully triangulated, then wlog every minimal triangle \({\Delta}_{s} \subseteq \Delta\) with vertices \(\kappa, \lambda\), and \(\mu\) must either satisfy\[{I_s}: = {\uparrow}_{\partial{\Delta _s}} {f(\zeta ){\downarrow}B\zeta } = f(\kappa)(\lambda - \kappa) + f(\lambda)(\mu - \lambda) + f(\kappa)(\kappa - \mu) = (f(\kappa) - f(\lambda))(\lambda - \mu) = 0\]or\[\begin{aligned}{\uparrow}_{\partial{\Delta _s}} {f(\zeta ){\downarrow}B\zeta } &= f(\kappa)(\lambda - \kappa) + f(\lambda)(\mu - \lambda) + f(\mu)(\kappa - \mu) = (f(\kappa) - f(\lambda))\lambda + (f(\lambda) - f(\mu))\mu + (f(\mu) - f(\kappa))\kappa \\ &= f^{\prime}(\lambda)\left( {(\kappa - \lambda)\lambda - (\mu - \lambda)\mu + (\mu - \lambda)\kappa - (\kappa - \lambda)\kappa} \right) = f^{\prime}(\lambda)\left( {(\mu - \lambda)(\kappa - \mu) - {{(\kappa - \lambda)}^2}} \right) = 0.\end{aligned}\]By holomorphicity and cyclic permutations, this can only happen for \(f(\kappa) = f(\lambda) = f(\mu)\). If every triangle adjacent to \(\Delta\) is considered, deduce that \(f\) must be constant, which contradicts the assumptions. This is because the term in large brackets is translation-invariant: otherwise, setting \(\mu := 0\) wlog makes this term 0, in which case \(\kappa = \check{\lambda}(1 \pm i3^{\tilde{2}})\) and \(|\kappa| = |\lambda| = |\kappa - \lambda|\). However, since every horizontal and vertical line is homogeneous on \({}^{(\omega)}\mathbb{C}\), this cannot happen:

Otherwise, the corresponding sub-triangle would be equilateral and not isosceles and right-angled. Therefore, in both cases, \(|{I}_{s}|\) must be at least \(|f^{\prime}(\lambda) \mathcal{O}({\iota}^{2})|\), by selecting the vertices 0, |\(\iota\)| and \(i|\iota|\) wlog. If \(L\) is the perimeter of a triangle, then it holds that \(|I| \le {4}^{m} |{I}_{s}|\) for an infinite natural number \(m\), and also \({2}^{m} = L(\partial\Delta)/|\mathcal{O}({\iota}^{2})|\) since \(L(\partial\Delta) = {2}^{m} L(\partial{\Delta}_{s})\) and \(L(\partial{\Delta}_{s}) = |\mathcal{O}({\iota}^{2})|\). It holds that \(|I| \le |f^{\prime}(\lambda) {L(\partial\Delta)}^{2}/\mathcal{O}({\iota}^{2})|\), causing the desired estimate \(|I| \le |\mathcal{O}({\downarrow}B\zeta)|\) to fail, for example if \(|f^{\prime}(\lambda) {L(\partial\Delta)}^{2}|\) is larger than \(|\mathcal{O}({\iota}^{2})|.\square\)

Counter-directional theorem: If the path \(\gamma: [a, b[ \, \cap \, C \rightarrow V\) with \(C \subseteq \mathbb{R}\) passes the edges of every \(n\)-cube of side length \(\iota\) in the \(n\)-volume \(V \subseteq {}^{(\omega)}\mathbb{R}^{n}\) with \(n \in \mathbb{N}_{\ge 2}\) exactly once, where the opposite edges in all two-dimensional faces of every \(n\)-cube are traversed in reverse direction, but uniformly, then, for \(D \subseteq \mathbb{R}^{2}, B \subseteq {V}^{2}, f = ({f}_{1}, …, {f}_{n}): V \rightarrow {}^{(\omega)}\mathbb{R}^{n}, \gamma(t) = x, \gamma(\curvearrowright D t) = \curvearrowright B x\) and \({V}_{\curvearrowright } := \{\curvearrowright B x \in V: x \in V, \curvearrowright B x \ne \curvearrowleft B x\}\), it holds that\[{\uparrow}_{t \in [a,b[ \, \cap \, C}{f(\gamma (t)){{\gamma }_{\curvearrowright }^{\prime}}(t){\downarrow}Dt}={\uparrow}_{\begin{smallmatrix} (x,\curvearrowright B\,x) \\ \in V\times {{V}_{\curvearrowright}} \end{smallmatrix}}{f(x){\downarrow}Bx}={\uparrow}_{\begin{smallmatrix} t \in [a,b[ \, \cap \, C, \\ \gamma | {\partial}^{\acute{n}} V \end{smallmatrix}}{f(\gamma (t)){{\gamma }_{\curvearrowright }^{\prime}}(t){\downarrow}Dt}.\]Proof: If two arbitrary squares are considered with common edge of length \(\iota\) included in one plane, then only the edges of \(V\times{V}_{\curvearrowright}\) are not passed in both directions for the same function value. They all, and thus the path to be passed, are exactly contained in \({\partial}^{\acute{n}}V.\square\)

Remark: For \(\tilde{\omega} := 0\), the main theorem of Cauchy’s theory of functions can be proven according to Dixon (as in Remmert, loc. cit., p. 228 f.), since the limit there is required to be 0 resp. \(\tilde{r}\) tends to 0 as \(r \in {}^{\omega}\mathbb{R}_{>0}\) tends to \(\omega\).

Cauchy’s integral theorem: Given the NRs \(B \subseteq {D}^{2}\) and \(A \subseteq {[a, b]}^{2}\) for some \(h\)-domain \(D \subseteq {}^{\omega}\mathbb{C}\), infinitesimal \(h\), \(f \in \mathcal{O}(D)\) and a CP \(\gamma: [a, b[\rightarrow \partial D\), choosing \(\curvearrowright B \gamma(t) = \gamma(\curvearrowright A t)\) for \(t \in [a, b[\) gives\[{\uparrow}_{\gamma }{f(z){\downarrow}Bz}=0.\]Proof: By the Cauchy-Riemann differential equations and Green’s theorem, with \(x := \text{Re} \, z, y := \text{Im} \, z, u := \text{Re} \, f, v := \text{Im} \, f\) and \({D}^{-} := \{z \in D : z + h + ih \in D\}\), it holds that\[{\uparrow}_{\gamma }{f(z){\downarrow}Bz}={\uparrow}_{\gamma }{\left( u+iv \right)\left( {\downarrow}Bx+i{\downarrow}By \right)}={\uparrow}_{z\in {{D}^{-}}}{\left( i\left( \frac{{\downarrow} Bu}{{\downarrow} Bx}-\frac{{\downarrow} Bv}{{\downarrow} By} \right)-\left( \frac{{\downarrow} Bv}{{\downarrow} Bx}+\frac{{\downarrow} Bu}{{\downarrow} By} \right) \right){\downarrow}B(x,y)}=0.\square\]Remark: The (entire) functions \(f(z) = {+}_{n=1}^{\omega }{{{z}^{n}}{{{\tilde{\omega }}}^{\tilde{n}}}}\) and \(g(z) = \tilde{\omega }z\) in \({\mathbb{B}}_{\omega}(0) \subset {}^{\omega}\mathbb{C}\) give counterexamples to Liouville’s (generalised) theorem and Picard’s little theorem because of \(|f(z)| < 1\) and \(|g(z)| \le 1\). The function \(f(\tilde{z})\) for \(z \in {\mathbb{B}}_{\omega}(0)^{*}\) discounts Picard’s great theorem. The function \(b(z) := \tilde{\nu}z\) for \(z \in {\mathbb{B}}_{\nu}(0) \subset {}^{\nu}\mathbb{C}\) maps the simply connected \({\mathbb{B}}_{\nu}(0)\) holomorphically, but not necessarily injectively or surjectively, to \(\mathbb{D}\). The Riemann mapping theorem must be corrected accordingly.

Definition: A point \({z}_{0} \in M \subseteq {}^{(\omega)}\mathbb{C}^{n}\) or of a sequence \(({a}_{k})\) for \(k \in {}^{(\omega)}\mathbb{N}\) is called a (proper) \(\alpha\)-accumulation point of \(M\) or of the sequence, if the ball \(\mathbb{B}_{\alpha}({z}_{0}) \subseteq {}^{(\omega)}\mathbb{C}^{n}\) with centre \({z}_{0}\) and infinitesimal \(\alpha\) contains infinitely many points from \(M\) or pairwise distinct members of \({a}_{k} \in {}^{(\omega)}\mathbb{C}^{n}\). Let \(\alpha\)- be omitted for \(\alpha = \tilde{\omega}.\triangle\)

Remark: Choose the pairwise distinct zeros \(c_k \in \mathbb{B}_{\tilde{\omega}}(0) \subset \mathbb{D}\) for \(z \in {}^{\omega}\mathbb{C}\) in \(p(z) = {\times}_{k=0}^{\omega}{\left( z-c_k \right)}\) in such a way that \(|f(c_k)| < \tilde{\omega}\) for \(f \in \mathcal{O}(D)\) on a domain \(D \subseteq \mathbb{C}\) where \(f(0) = 0\). Let \(D\) contain \(\mathbb{B}_{\tilde{\omega}}(0)\) completely, which a coordinate transformation always achieves provided that \(D\) is sufficiently “large”. The coincidence set \(\{\zeta \in D : f(\zeta) = g(\zeta)\}\) of \(g(z) := f(z) + p(z) \in \mathcal{O}(D)\) contains an accumulation point at 0.

Since \(p(z)\) can take every conventional complex value, the deviation between \(f\) and \(g\) is non-negligible. Because \(f \ne g\), this contradicts the statement of the identity theorem, as does the (local) fact that all derivatives \({u}^{(n)}({z}_{0}) = {v}^{(n)}({z}_{0})\) of two functions \(u\) and \(v\) may coincide at \({z}_{0} \in D\) for all \(n\) while \(u\) and \(v\) differ significantly further away and remain holomorphic, since some holomorphic functions have to be developed into a TS with approximated powers.

Examples of such \(f \in \mathcal{O}(D)\) include functions with \(f(0) = 0\) that are restricted to \(\mathbb{B}_{\tilde{\omega}}(0)\). Extending the upper limit from \(\omega\) to \(|\mathbb{N}^{*}|\) gives entire functions with an infinite number of zeros. The set of zeros is not necessarily discrete. Thus, the set of all functions \(f \in \mathcal{O}(D)\) may contain zero divisors. These functions once again give counterexamples to Picard’s little theorem, since they omit at least \(\acute{n}\) values in \(\mathbb{C}\).

Finiteness criterion for series: Let \(m, n, q, r \in \mathbb{N}\). The sum \(S_r := \left| {+}_{q=0}^{r}{s_q} \right|\) for \(s_q \in {}^{(\omega)}\mathbb{C}\) is finite if and only if \(0 \le S_r = \left| {\pm}_{m=0}^{n}{{a}_{m}} \right| \le {a}_{0}\) for a sequence \(({a}_{m})\) with \(a_{\grave{m}} < a_m \in {}^{\nu}\mathbb{R}_{\ge 0}\), since summands may be arbitrarily sorted according to their signs and sizes, recombined, or split into separate sums.\(\square\)

Remark: Sums may be arbitrarily rearranged according to the associative, commutative, and distributive laws if care is taken to calculate them correctly (using Landau symbols).

Example: The alternating harmonic series implies \({\pm}_{n=1}^{\omega }{\left( \tilde{n} - \omega \right)}={_e}2.\)
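
Read conventionally, the example says that the alternating harmonic partial sums approach \(\ln 2\) up to a small correction; a minimal Python check with a finite \(n\) in place of \(\omega\):

```python
from math import log

# Partial alternating harmonic sums approach ln 2 up to an O(1/n) term,
# matching the conventional reading of the example above.
for n in (10, 1000, 100000):
    s = sum((-1)**(k + 1) / k for k in range(1, n + 1))
    print(n, s, s - log(2))   # the deviation shrinks roughly like 1/(2n)
```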

Theorem (binomial series): From \(\alpha \in {}^{(\nu)}\mathbb{C}, \tbinom{\alpha}{n}:=\widetilde{n!}\alpha\acute{\alpha}…(\grave{\alpha}-n)\) and \(\left|\tbinom{\alpha}{\grave{m}}/\tbinom{\alpha}{m}\right|<1\) for all \(m \ge \nu\), where \(\tbinom{\alpha}{0}:=1\), it follows for \(z \in \mathbb{D}^\ll\), or for \(z \in {}^{(\omega)}\mathbb{C}\) when \(\alpha \in {}^{(\omega)}\mathbb{N}\), that the TS centred on 0 is\[{\grave{z}}^\alpha={+}_{n=0}^{\omega}{\tbinom{\alpha}{n}z^n}.\square\]Multinomial theorem: For \(z, \zeta \in {}^{(\omega)}\mathbb{C}^{k}, n^T \in {}^{(\omega)}\mathbb{N}^{k}, k, m \in {}^{\omega}\mathbb{N}^{*}, z^n := z_1^{n_1} … z_k^{n_k}\) and \(\tbinom{m}{n} := \widetilde{n_1! … {n}_k!}m!\;(k \ge 2)\), it holds that\[\left({\underline{1}}_k^Tz\right)^m={+}_{n\underline{1}_k=m}{\tbinom{m}{n}z^n}.\]Proof: Cases \(k \in \{1, 2\}\) are clear. Induction step from \(k\;(m)\) to \(\grave{k}\;(\grave{m})\) for \(\grave{n} := n+(1,0, … ,0)\):\[({(1,z^T){\underline{1}}_{\grave{k}})}^m={+}_{{(n_0,n)\underline{1}}_{\grave{k}}=m}{\tbinom{m}{(n_0,n)}z^n}\;\left({\grave{m}{\uparrow}_{0}^{\zeta_1}{\left({\underline{1}}_k^Tz\right)^m{\downarrow}z_1}=\left.\left({\underline{1}}_k^Tz\right)^{\grave{m}}\right|_0^{{\zeta}_1}={+}_{{\grave{n}}{\underline{1}}_k=\grave{m}}\tbinom{\grave{m}}{\grave{n}}{z}^{\grave{n}}}\right).\square\]Remark: If the moduli of \(x \in \mathbb{C}\), \({\downarrow}x\) or \(\widetilde{{\downarrow}x}\) have different orders of magnitude, the identity\[{{s}^{(0)}}(x):={\pm}_{m=0}^{n}{x^m}=(1-{{(-x)}^{\grave{n}}})/\grave{x}\]yields by differentiation\[{{s}^{(1)}}(x)={\mp}_{m=1}^{n}{m{x^{\acute{m}}}}=(\grave{n}{{(-x)}^{n}}-n{{(-x)}^{\grave{n}}}-1)/{{{\grave{x}}^{2}}}.\]These formulas have sometimes been miscalculated. For sufficiently small \(x\) and sufficiently, but not excessively, large \(n\), the latter simplifies further to \(-{\grave{x}}^{-2}\), and this remains valid when \(x \ge 1\) is not excessively large. By successively multiplying \({s}^{(j)}(x)\) by \(x\) for \(j \in {}^{\omega}\mathbb{N}^{*}\) and then differentiating, further formulas can be derived for \({s}^{(j+1)}(x)\), providing examples of divergent series. However, if \({s}^{(0)}(-x)\) is integrated from 0 to 1 with \(n := \omega\), an integral expression for \({_e}\omega + \gamma\) is obtained, where \(\gamma\) is Euler’s constant.
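
As a finite, conventional plausibility check of the two closed forms (reading \(\grave{n}\) as \(n+1\), \(\acute{m}\) as \(m-1\) and \(\grave{x}\) as \(x+1\)), the following Python sketch compares the explicit sums with the quotients; the function names are illustrative only.

```python
def s0_sum(x, n):
    return sum((-1)**m * x**m for m in range(n + 1))

def s0_closed(x, n):
    return (1 - (-x)**(n + 1)) / (1 + x)

def s1_sum(x, n):
    return sum((-1)**m * m * x**(m - 1) for m in range(1, n + 1))

def s1_closed(x, n):
    return ((n + 1) * (-x)**n - n * (-x)**(n + 1) - 1) / (1 + x)**2

x, n = 0.3, 25
print(abs(s0_sum(x, n) - s0_closed(x, n)))     # ~1e-16: closed form of s^(0)
print(abs(s1_sum(x, n) - s1_closed(x, n)))     # ~1e-16: closed form of s^(1)
print(s1_closed(x, n), -1 / (1 + x)**2)        # nearly equal for small x and large n
```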

L’Hôpital’s rule solves the case \(x = -1\). Substituting \(y := -\acute{x}\), the binomial series yields a series with infinite coefficients (if \({_e}\omega\) is also expressed as a series, even an expression for \(\gamma\) is obtained). If the numerator of \({s}^{(0)}(x)\) is illegitimately simplified, incorrect results are risked, especially when \(|x| \ge 1\). For example, \({s}^{(0)}(-{e}^{\pi i})\) equals 0 for odd \(n\) and 1 for even \(n\), but not \(\tilde{2}\).

Definition: For \({a}_{m}, {b}_{n} \in {}^{(\omega)}\mathbb{K}\), the Cauchy product is to be corrected as a series product as follows:\[{+}_{m=1}^{\omega }{{{a}_{m}}}{+}_{n=1}^{\omega }{{{b}_{n}}}={+}_{m=1}^{\omega }{\left( {+}_{n=1}^{m}{\left( {{a}_{n}}{{b}_{m-\acute{n}}}+{{a}_{\omega -\acute{n}}}{{b}_{\omega -m+n}} \right)}-{{a}_{m}}{{b}_{\omega -\acute{m}}} \right)}.\triangle\]Example: The following series product has the value15:\[\left({+}_{m=1}^{\mathrm{\omega}}\frac{i^{\hat{m}}}{m^{\tilde{2}}}\right)^2={+}_{m=1}^{\mathrm{\omega}}{\left(\left(\frac{\tilde{m}}{\mathrm{\omega}-\acute{m}}\right)^{\tilde{2}}-{i^{\hat{m}}}{+}_{n=1}^{m}\left(\left(\frac{\tilde{n}}{m-\acute{n}}\right)^{\tilde{2}}+\left(\frac{\widetilde{\mathrm{\omega}-\acute{n}}}{\mathrm{\omega}-m\ \mathrm{+\ }n}\right)^{\tilde{2}}\right)\right)}=0.36590…\ \ \ \ll\frac{{\zeta\left(\tilde{2}\right)}^2}{3+2\;2^{\tilde{2}}}.\]Example: The signum function sgn yields the following series product16: \[{+}_{m=0}^{\omega }{{2}^{{{m}^{\text{sgn}(m)}}}}{+}_{n=0}^{\omega}{\text{sgn}(n-\gamma)} = \acute{\omega}{2}^{\grave{\omega}}\gg -2.\]Stokes’ theorem17: If an overline marks a term to be omitted, then for sufficiently \(\alpha\)-continuous functions \(f_m: C \rightarrow {}^{\omega}\mathbb{R}\) and the alternating differential form of degree \(\acute{n}\) \(\upsilon := {+}_{m=1}^{n}{f_m\;{\downarrow}x_1\wedge…\wedge\overline{{\downarrow}x_m}\wedge…\wedge {\downarrow}x_n}\) on a cuboid \(C =[{a}_{1}, {b}_{1}] \times…\times [{a}_{n}, {b}_{n}] \subseteq {}^{\omega}\mathbb{R}^n\), where \(\partial C:= {\mp}_{m=1}^{n}{(F_{a,m} - F_{b,m})}\) has the faces \(F_{a,m} = [{a}_{1}, {b}_{1}] \times…\times \{a_m\} \times…\times [{a}_{n}, {b}_{n}]\) and \(F_{b,m} = [{a}_{1}, {b}_{1}] \times…\times \{b_m\} \times…\times [{a}_{n}, {b}_{n}]\), it holds that\[{\uparrow}_C{{\downarrow}\upsilon} = {\uparrow}_{\partial C}{\upsilon}.\]Proof: The second fundamental theorem and Fubini’s theorem (see above) give\[{\uparrow}_C{{\downarrow}\upsilon} = {\mp}_{m=1}^{n}{{{\uparrow}_{a_n}^{b_n}{…{\uparrow}_{a_m}^{\overline{b_m}}{…{\uparrow}_{a_1}^{b_1}{(f_m(x_1, …, a_m, …, x_n) - f_m(x_1, …, b_m, …, x_n)){\downarrow}x_1}…}\overline{{\downarrow}x_m}}…}{\downarrow}x_n}\]and\[\frac{{{\downarrow} f_m}}{{{\downarrow}x_m}}{\downarrow}x_m\wedge {\downarrow}x_1\wedge…\wedge {\downarrow}x_{\acute{m}}\wedge {\downarrow}x_{\grave{m}}\wedge…\wedge {\downarrow}x_n = -i^{\hat{m}}\frac{{{\downarrow} f_m}}{{{\downarrow} x_m}}{\downarrow}x_1\wedge…\wedge {\downarrow}x_n.\square\]Remark: Stokes’ theorem also holds for \(n\)-dimensional manifolds composed of cuboids.
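 
A finite numerical check of the corrected series product above is possible by reading \(\omega\) as a finite \(N\) and \(\acute{n}\) as \(n-1\); the Python sketch below sums both sides for random summands (all names are illustrative).

```python
import random

# Finite check of the corrected series product: omega is read as a finite N,
# acute (´) as "minus one", and the indices are taken literally from the definition.
N = 40
a = [random.uniform(-1, 1) for _ in range(N + 1)]   # a[1..N] used, a[0] ignored
b = [random.uniform(-1, 1) for _ in range(N + 1)]

lhs = sum(a[1:]) * sum(b[1:])
rhs = sum(
    sum(a[n] * b[m - n + 1] + a[N - n + 1] * b[N - m + n] for n in range(1, m + 1))
    - a[m] * b[N - m + 1]
    for m in range(1, N + 1)
)
print(abs(lhs - rhs))   # ~1e-13: the two index triangles cover every pair exactly once
```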

General Leibniz formula: For \({\downarrow}^n := {\downarrow}_1^{n_1}…{\downarrow}_k^{n_k}\) and \({\downarrow}_j^{n_j} := {\downarrow}^{n_j}/{\downarrow}{z_j}^{n_j}\), it follows for \(m^T, n^T \in {}^{(\omega)}\mathbb{N}^{k}, j, k \in {}^{(\omega)}\mathbb{N}^*\) and differentiable \(f = f_1\cdot…\cdot f_k \in {}^{(\omega)}\mathbb{C}\) from the multinomial theorem that \({\downarrow}^mf = {+}_{n\underline{1}_k=||m||_1}{\tbinom{||m||_1}{n}{\downarrow}^nf}.\square\)
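
For \(k = 2\), the general Leibniz formula reduces to the classical product rule for higher derivatives; a small symbolic check (using SymPy on sample factors chosen only for illustration) might look as follows.

```python
import sympy as sp

# Classical (k = 2) instance of the general Leibniz formula:
# the m-th derivative of a product expands with binomial coefficients.
z = sp.symbols('z')
f, g = sp.exp(2 * z), sp.sin(z)      # illustrative sample factors
m = 4

direct = sp.diff(f * g, z, m)
leibniz = sum(sp.binomial(m, n) * sp.diff(f, z, n) * sp.diff(g, z, m - n)
              for n in range(m + 1))
print(sp.simplify(direct - leibniz))  # 0
```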

Taylor’s theorem for several variables: For \(n! := n_1!\cdot … \cdot n_k!, a, z \in {}^{(\omega)}\mathbb{C}^{k}\) and \((z - a)^n := (z_1 - a_1)^{n_1}\cdot … \cdot (z_k - a_k)^{n_k}\), it follows from the multinomial theorem, analogously to the proof of the simple TS, that\[f(z) = T_{\omega}(z) := {+}_{n\underline{1}_k=0}^{\omega }{\widetilde{n!}{\downarrow}^nf(a)(z - a)^n}.\square\]Conclusion: Analogously to the simple TS, the remainder for \(\xi \in \mathbb{B}_a(z), p^T \in {}^{(\omega)}\mathbb{N}^{k}\) and \(||p||_1\le||n||_1\) is\[R_n(z) = \tilde{p}\grave{n}(z - \xi)^{\grave{n} - p}{+}_{m\underline{1}_k=||\grave{n}||_1}{\widetilde{m!}{\downarrow}^mf(\xi)(z - a)^m}.\square\]Definition: Let \(f_n^*(z) = f(\eta_nz)\) be the sisters of the TS \(f(z) \in \mathcal{O}(D)\) centred on 0 on the domain \(D \subseteq {}^{\omega}\mathbb{C}\), where \(m, n \in {}^{\omega}\mathbb{N}^{*}\) and \(\eta_n^m := i^{2^{\lceil m/n \rceil}}\). Then let \(\delta_n^*f = \tilde{2}(f - f_n^*)\) be the halved sister distances of \(f\). For \(\mu_n^m := m!n!/(m + n)!\), \(\mu\) and \(\eta\) form a calculus that can be resolved on the level of TS18 and allows an easy and finite closed representation of integrals and derivatives. Let \(\widetilde{(-n)!} := 0.\triangle\)
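
Read conventionally with a finite degree, the multi-index TS above is the usual multivariate Taylor polynomial; the following SymPy sketch builds it for a two-variable sample function and checks the remainder numerically (the chosen function, point, and degree are illustrative assumptions).

```python
import sympy as sp
from math import factorial

# Degree-d multivariate Taylor polynomial via the multi-index formula above,
# checked numerically for a two-variable sample function.
x, y = sp.symbols('x y')
f = sp.exp(x) * sp.cos(y)
a = (0.0, 0.0)
d = 6

T = sum(
    sp.diff(f, x, i, y, j).subs({x: a[0], y: a[1]})
    * (x - a[0])**i * (y - a[1])**j / (factorial(i) * factorial(j))
    for i in range(d + 1) for j in range(d + 1 - i)
)
pt = {x: 0.3, y: -0.2}
print(float(sp.N(f.subs(pt) - T.subs(pt))))   # small remainder, shrinking as d grows
```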

Speedup theorem for integrals: The TS (see below) \(f(z) \in \mathcal{O}(D)\) centred on 0 on \(D \subseteq {}^{\omega}\mathbb{C}\) gives for \(\grave{m}, n \in {}^{\omega}\mathbb{N}^*\)\[{\uparrow}_0^z…{\uparrow}_0^{\zeta_2}{f(\zeta_1){\downarrow}\zeta_1\;…\;{\downarrow}\zeta_n} = \widetilde{n!} f(z\mu_n) z^n.\square\]Example: For the TS \(f(x), g(x) \in {}^{\omega}\mathbb{R}\), it holds that\[{\uparrow}_0^x{f(v){\downarrow}v}{\uparrow}_0^x{\uparrow}_0^{y}{g(v){\downarrow}v{\downarrow}y} = \tilde{2}f(x\mu_1)g(x\mu_2)x^3.\]Speedup theorem for derivatives: For \(\mathbb{B}_{\tilde{\nu}}(0) \subset D \subseteq {}^{\omega}\mathbb{C}, n\)-th unit roots, the TS\[f(z):=f(0) + {+}_{m=1}^{\omega }{\widetilde{m!}\,{{f}^{(m)}}(0){z^m}},\]\(b_n := \tilde{\varepsilon}^{n}\,\acute{n}! = 2^j, j, n \in {}^{\omega}\mathbb{N}^{*}, \varepsilon \in ]0, r[, u := e^{\tilde{n}\hat{\pi}i}\) and \(f\)’s radius of convergence \(r \in {}^{\nu}{\mathbb{R}}_{>0}\) imply\[{{f}^{(n)}}(0)=b_n{+}_{k=1}^{n}{\delta_n^* f(\varepsilon u^k)}.\]Universal multistep theorem: For \(n \in {}^{\nu}\mathbb{N}_{\le p}, k, m, p \in {}^{\nu}\mathbb{N}^{*}, {\downarrow}_{\curvearrowright B} x \in\, ]0, 1[, x \in [a, b] \subseteq {}^{\omega}\mathbb{R}, y : [a, b] \rightarrow {}^{\omega}\mathbb{R}^q, f : [a, b]\times{}^{\omega}\mathbb{R}^{q \times n} \rightarrow {}^{\omega}\mathbb{R}^q, g_k(\curvearrowright B x) := g_{\acute{k}}(x)\), and \(g_0(a) = f((\curvearrowleft B)a, y_0, … , y_{\acute{n}})\), the TS of the initial value problem \(y^\prime(x) = f(x, y((\curvearrowright B)^0 x), … , y((\curvearrowright B)^{\acute{n}} x))\) of order \(n\) implies\[y(\curvearrowright B x) = y(x) + {\downarrow}_{\curvearrowright B}x{\pm}_{k=1}^{p}{\left (g_{p-k}((\curvearrowright B) x){+}_{m=k}^{p}{\widetilde{m!}\tbinom{\acute{m}}{\acute{k}}}\right )} + \mathcal{O}(({\downarrow}_{\curvearrowright B} x)^{\grave{p}}).\square\]Theorem for (anti-) derivatives of TS: For \(j \in {}^{\omega}\mathbb{Z}\), \(q = \tilde{\varepsilon}(z-a)\), \(a \in D\) and \(k, m \in \mathbb{N}_{<n}\), modular arithmetic19 and \(n\)-th unit roots result in the corresponding DFT form:\[{\updownarrow}^jf_n(z) := \tilde{n}(q^k)^T(\delta_{km}\widetilde{\varepsilon q}^j\widetilde{(k-j)!}k!)({\tilde{u}}^{km})(f({\varepsilon u}^m+a))+\mathcal{O}(\varepsilon^n).\square\]Remark: Using the identity map instead of \(\delta_n^*\) provides arbitrarily precise approximations of the \(f^{(n)}\). The last theorems are equally valid for multidimensional TS (with several sums) and Laurent series. The error is \(\mathcal{O}(\varepsilon^n)\) instead of \(\mathcal{O}(\varepsilon)\) for analogously defined \(m\)-dimensional DFT forms of the TS with \(\tbinom{m+n}{n}\) derivatives, where the effort is comparable. A good choice is \(\varepsilon = \tilde{2}\) and \(n = 64\), resp. \(n = 16\) in the next conclusion.
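
A conventional counterpart of such DFT forms recovers Taylor coefficients, and hence derivatives at 0, from samples of \(f\) on a small circle of radius \(\varepsilon\) at the \(n\)-th unit roots; the Python sketch below uses a plain inverse DFT instead of the exact \(\delta_n^*\) calculus, so it only approximates \(f^{(m)}(0)\) up to aliasing of order \(\varepsilon^n\) (here with \(\varepsilon = \tilde{2}\) and \(n = 16\), and with illustrative names).

```python
import numpy as np
from math import factorial

# Standard DFT reconstruction of Taylor coefficients from samples on a circle:
# f(eps*u^k) = sum_m c_m eps^m u^{mk}, so an inverse-DFT-type sum isolates
# c_m * eps^m up to aliasing terms of order eps^n.
def derivatives_at_zero(f, n=16, eps=0.5):
    k = np.arange(n)
    samples = f(eps * np.exp(2j * np.pi * k / n))
    coeffs = np.fft.fft(samples) / n                  # ~ c_m * eps^m
    return [coeffs[m] * factorial(m) / eps**m for m in range(n)]

d = derivatives_at_zero(np.exp)                       # exp has f^(m)(0) = 1 for all m
print([round(abs(v), 6) for v in d[:6]])              # ~[1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```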

Conclusion: DFT-zero methods iterate zeros \(a \in {}^{\omega}\mathbb{C}\) of every function \(f(z) \in {}^{\omega}\mathbb{C}\) that can be developed into a TS at given initial values \(z_0 \in {}^{\omega}\mathbb{C}\), each time with the same convergence as in methods similar to Simpson’s, if \({\updownarrow}^0f_n(z)\) is differentiated (several times) for sufficiently precise \(|z_{\grave{m}} - z_m|\) and \(m \in {}^{\omega}\mathbb{N}.\square\)

Theorem for derivatives of Fourier series: For \(f \in \mathcal{C}_{\pi}^{j+2}\)20, \(j \in {}^{\omega}\mathbb{N}, k \in [-\acute{n}, n] \cap \mathbb{Z}, t \in [-\pi, \pi]\) and \(m \in \mathbb{N}_{<\hat{n}}\), \(\hat{n}\)-th unit roots result in the following DFT form:\[{\downarrow}^jf_n(t):=(u^{\tilde{\pi}knt})^T(\delta_{(k+\acute{n})m}(ik)^j)(\tilde{u}^{km})(f(\pi m/n - \pi))/{\hat{n}}+\mathcal{O}(\tilde{n}).\square\]Conclusion: Supporting points \(mr := \pi m/n\) of the smooth \(f(mr)\) yield, for \(k\) as well as \(m\), the following interpolation in \(\mathcal{O}({_e}n\,n)\):\[{\downarrow}^jf_n(t):=(u^{\tilde{r}kt})^T(\delta_{km}(ik)^j)(\tilde{u}^{km})(f(mr))/{\hat{n}}.\square\]Theorem: The DFT fixed-point method can (as an initiating step) determine every zero \(z \in {}^{\omega}\mathbb{C}\) of an arbitrary polynomial equation \(p(z) = 0\) of degree \(m \in [2, n] \cap \mathbb{N}\) for \(n := 2^r, r \in {}^{\nu}\mathbb{N}^*\) and coefficients from \({}^{\nu}\mathbb{C}\), likewise in \(\mathcal{O}({_e}n\,n)\).
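
The conventional analogue of this DFT form is spectral differentiation of a smooth \(2\pi\)-periodic function: sample it at \(\hat{n}\) equidistant points, multiply the DFT by \((ik)^j\), and transform back. A minimal Python sketch (with an illustrative test function) is:

```python
import numpy as np

# Trigonometric interpolation of a smooth 2*pi-periodic function at 2n points
# and termwise differentiation in frequency space.
def fourier_derivative(samples, order=1):
    N = len(samples)                       # N = 2n equidistant samples on [0, 2*pi)
    k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
    return np.fft.ifft((1j * k)**order * np.fft.fft(samples)).real

n = 32
t = np.linspace(0.0, 2 * np.pi, 2 * n, endpoint=False)
f = np.exp(np.sin(t))
df = fourier_derivative(f)                 # spectrally accurate derivative
print(np.max(np.abs(df - np.cos(t) * f)))  # ~1e-13
```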

Proof and algorithm: Let \(U = (\tilde{u}^{jk})\) for \(j, k \in \mathbb{N}_{<n}, u :=e^{\tilde{n} \hat{\pi} i}, q := 2z\) and \(s_k := p(\tilde{2}u^k)\). A simple transformation achieves \(|q| < \tilde{2}\) for all zeros \(\zeta\) of \(p(z)\) and \(p(0) = 1\). From \(p(z) = \tilde{n}(q^j)^TUs = \tilde{n}\mu^Ts = 0\), the simplified iteration \(\mu^* = U_{1}^{-T}\mu U((\delta_{jk}\tilde{u}^j)U^{-1}\mu-(U_{\acute{n}}^{-T}\mu+\beta s^T\mu, \beta s^T\mu, …, \beta s^T\mu)^T)\) follows, with Kronecker delta \(\delta_{jk}\), starting point \(q := \tilde{2}\) and \(\beta \in {}^{\nu}\mathbb{C}^*\), such that \(||\mu^*-\mu||\) is roughly halved each time and \(\mu^Ts = 0\) holds. Finish by polynomial division where \(m > 2.\square\)
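
The evaluation step \(s_k := p(\tilde{2}u^k)\) can be carried out with a single inverse DFT, since \(p(r u^k)\) is a DFT-type sum of the scaled coefficients; the sketch below shows only this single step, not the full iteration, and its names are illustrative.

```python
import numpy as np

# Evaluate p at the scaled unit roots r*u^k in one inverse FFT:
# p(r*u^k) = sum_j c_j r^j u^{jk}, i.e. n times the inverse DFT of (c_j r^j).
def eval_at_scaled_unit_roots(coeffs, r=0.5):
    c = np.asarray(coeffs, dtype=complex)        # c[j] multiplies z**j
    n = len(c)
    scaled = c * r ** np.arange(n)
    return n * np.fft.ifft(scaled)               # entry k equals p(r*u^k), u = exp(2*pi*i/n)

p = [1.0, -3.0, 0.0, 2.0]                        # p(z) = 1 - 3z + 2z^3
s = eval_at_scaled_unit_roots(p)
print(np.allclose(s, [np.polyval(p[::-1], 0.5 * np.exp(2j * np.pi * k / 4))
                      for k in range(4)]))       # True
```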

code of the FFT form

© 2010-2021 by Boris Haase


References

1 Walter, Wolfgang: Analysis 2; 5., erw. Aufl.; 2002; Springer; Berlin, p. 188
2 cf. Köhler, Günter: Analysis; 1. Aufl.; 2006; Heldermann; Lemgo, p. 519
3 cf. Heuser, Harro: Lehrbuch der Analysis Teil 1; 17., akt. Aufl.; 2009; Vieweg + Teubner; Wiesbaden, p. 144
4 cf. loc. cit., p. 155
5 cf. loc. cit., p. 27 f.
6 loc. cit., p. 235 f.
7 see loc. cit., p. 215 f.
8 cf. Gelbaum, Bernard R.; Olmsted, John M. H.: Counterexamples in Analysis; Republ., unabr., slightly corr.; 2003; Dover Publications; Mineola, New York, p. 24.
9 cf. Heuser, loc. cit., p. 540 – 543
10 cf. Grosche, Günter (Hrsg.): Teubner-Taschenbuch der Mathematik Teil 2; 7. Aufl.; 1995; Teubner; Leipzig, p. 1130 f.
11 cf. Walter, Wolfgang: Analysis 1; 3., verb. Aufl.; 1992; Springer; Berlin, p. 41
12 Remmert, loc. cit., p. 160
13 cf. Freitag, Eberhard; Busam, Rolf: Funktionentheorie 1; 4., korr. u. erw. Aufl.; 2006; Springer; Berlin, p. 74ff.
14 as in Remmert, loc. cit., p. 228 f.
15 cf. Gelbaum, loc. cit., p. 61 f.
16 cf. loc. cit., p. 62
17 cf. Köhler, loc. cit., p. 625 f.
18 cf. Remmert, loc. cit., p. 165 f.
19 cf. Knuth, Donald Ervin: The Art of Computer Programming Volume 2; 3rd Ed.; 1997; Addison Wesley; Reading, p. 302 – 311
20 cf. Walter, loc. cit., p. 358 ff.