
# Nonstandard Analysis

Preliminary remarks: The following section uses the definitions established in the chapters on Set Theory and Topology, and usually takes $$m, n \in {}^{\omega}\mathbb{N}^{*}$$. Integration and differentiation are studied on an arbitrary non-empty subset $$A$$ of $${}^{(\omega)}\mathbb{K}^{n}$$. The mapping concept requires replacing every element not lying in the image set by the neighbouring element in the target set; if multiple choices are possible, one single choice is selected. The following may easily be generalised to other sets and norms.

Definition: The function $$||\cdot||: \mathbb{V} \rightarrow {}^{(\omega)}\mathbb{R}_{\ge 0}$$, where $$\mathbb{V}$$ is a vector space over $${}^{(\omega)}\mathbb{K}$$, is called a norm if for all $$x, y \in \mathbb{V}$$ and $$\lambda \in {}^{(\omega)}\mathbb{K}$$ it holds that: $$||x|| = 0 \Rightarrow x = 0$$ (definiteness), $$||\lambda x|| = |\lambda| \; ||x||$$ (homogeneity), and $$||x + y|| \le ||x|| + ||y||$$ (triangle inequality). The dimension of $$\mathbb{V}$$ is defined as the maximal number of linearly independent vectors and is denoted by dim $$\mathbb{V}$$. The norms $${||\cdot||}_{a}$$ and $${||\cdot||}_{b}$$ are said to be equivalent if there exist non-infinitesimal $$s, t \in {}^{\nu}\mathbb{R}_{>0}$$ such that, for all $$x \in \mathbb{V}$$, it holds that$s{||x||}_{b} \le {||x||}_{a} \le t{||x||}_{b}.\triangle$

Theorem: Let $$N$$ be the set of all norms on $$\mathbb{V}$$. All norms on $$\mathbb{V}$$ are equivalent if and only if $${||x||}_{a}/{||x||}_{b}$$ is finite but not infinitesimal for all $${||\cdot||}_{a}, {||\cdot||}_{b} \in N$$ and all $$x \in \mathbb{V}^{*}$$.

Proof: Set $$s := \text{min }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}$$ and $$t := \text{max }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}.\square$$
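The proof's construction of $$s$$ and $$t$$ as the extreme ratios $${||x||}_{a}/{||x||}_{b}$$ can be illustrated numerically. The following sketch (a finite floating-point stand-in; the helper names are ours, not the text's) samples the ratio of the 1-norm to the Euclidean norm in $$\mathbb{R}^3$$, where the exact equivalence bounds are $$1 \le {||x||}_{1}/{||x||}_{2} \le \sqrt{3}$$.

```python
import math
import random

def norm1(x):
    # 1-norm: sum of absolute values of the coordinates
    return sum(abs(c) for c in x)

def norm2(x):
    # Euclidean norm
    return math.sqrt(sum(c * c for c in x))

def equivalence_constants(vectors):
    """Return (s, t) with s*||x||_2 <= ||x||_1 <= t*||x||_2 on the sample."""
    ratios = [norm1(v) / norm2(v) for v in vectors if any(v)]
    return min(ratios), max(ratios)

random.seed(0)
sample = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1000)]
s, t = equivalence_constants(sample)
# In R^3 the exact bounds are 1 <= ||x||_1 / ||x||_2 <= sqrt(3);
# the sampled minimum and maximum approach them from inside.
```

Over all of $$\mathbb{V}^{*}$$ the minimum and maximum in the proof attain the constants exactly; a finite sample only brackets them from inside.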

Definition: The set $$\overline{\mathbb{R}} := \mathbb{R} \cup \{\infty\}$$ allows calculating with $$\infty \gg \varsigma^2$$ as with a constant. If $$\pm0$$ is replaced by $$\pm\hat{\infty}$$, the calculations become unique and consistent. The area resp. half of the circumference of the unit circle defines pi $$\pi =: \tau/2$$. Let $$\iota := \pi/2$$. Euler’s number $$e$$ is defined as the solution of $${x}^{i\pi} = -1$$. Then the logarithm function ln is defined by $${e}^{\ln \, z} = z$$ and the corresponding power function by $${z}^{s} = {e}^{s \, \ln \, z}$$ for $$s, z \in \mathbb{C}$$. This allows giving a formal definition of exponentiation.$$\triangle$$

Remark: The period $$\tau$$ must be used to define sine and cosine, since their power series only converge for finite arguments. The definition of $$e$$ above exceeds the one via $${(1 + \hat{\nu})}^{\nu}$$ by $$\mathcal{O}(\hat{\nu})$$; the former is justified by differentiating the exponential series exactly with as many terms as possible. When calculating as precisely as possible, this deviation can have negative consequences: typically, resorting to approximations will be necessary.
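The order-$$\mathcal{O}(\hat{\nu})$$ gap between $$e$$ and $${(1 + \hat{\nu})}^{\nu}$$ can be made visible in floating point with a large finite $$n$$ standing in for $$\nu$$: the expansion $${(1 + 1/n)}^{n} = e\,(1 - 1/(2n) + \mathcal{O}(n^{-2}))$$ predicts a deviation of roughly $$e/(2n)$$. A minimal sketch (our own naming, not the text's):

```python
import math

def e_via_powers(n):
    """Finite-n stand-in for (1 + 1/n)**n."""
    return (1.0 + 1.0 / n) ** n

n = 10**6
deviation = math.e - e_via_powers(n)
# Expansion: (1 + 1/n)**n = e * (1 - 1/(2n) + O(1/n**2)),
# so the deviation should be close to e / (2n).
predicted = math.e / (2 * n)
```

For $$n = 10^6$$ the measured deviation agrees closely with $$e/(2n) \approx 1.36 \cdot 10^{-6}$$, illustrating why the two definitions differ at the first infinitesimal order.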

Lemma: Because of $$\hat{\nu} m \le 1 \le a$$ for all $$m \in {}^{\nu}\mathbb{N}$$ and $$a \in {}^{\nu}{\mathbb{R}}_{\ge 1}$$, the Archimedean axiom is invalid.$$\square$$

Archimedes’ theorem: There exists $$m \in {}^{\nu}\mathbb{N}$$ such that $$a < b m$$ if and only if $$a < b \nu$$ whenever $$a > b$$ for $$a, b \in {\mathbb{R}}_{>0}$$, since $$\nu = \max {}^{\nu}\mathbb{N}$$ holds.$$\square$$

Definition: The function $${\mu}_{h}: A \rightarrow \mathbb{R}_{\ge 0}$$, where $$A \subseteq {}^{(\omega)}\mathbb{C}^{n}$$ is an $$m$$-dimensional set with $$h \in \mathbb{R}_{>0}$$ less than or equal to the minimal distance of the points in $$A$$, $$m \in {}^{\omega}\mathbb{N}_{\le 2n}$$, $${\mu}_{h}(A) := |A| {h}^{m}$$ and $${\mu}_{h}(\emptyset) = |\emptyset| = 0$$, is called the exact $$h$$-measure of $$A$$, and $$A$$ is said to be $$h$$-measurable. Let the exact standard measure be $${\mu}_{\text{d0}}$$ (d0 may be omitted). The further refined conventionally real intervals represent the real numbers.$$\triangle$$

Remark: Answering the measure problem positively, the union $$A$$ of pairwise disjoint $$h$$-homogeneous sets $${A}_{j}$$ for $$j \in J \subseteq \mathbb{N}$$ clearly yields additively and uniquely${{\mu }_{h}}(A)=\sum\limits_{j \in J}{{{\mu }_{h}}\left( {{A}_{j}} \right)}.$Strict monotonicity, i.e. $${\mu}_{h}({A}_{1}) < {\mu}_{h}({A}_{2})$$, follows for $$h$$-homogeneous sets $${A}_{1}, {A}_{2} \subseteq {}^{(\omega)}\mathbb{K}^{n}$$ satisfying $${A}_{1} \subset {A}_{2}$$. If $$h$$ is not equal on all considered sets $${A}_{j}$$, the minimum of all $$h$$ is chosen and the homogenisation follows as described in Set Theory. In the following, let $$||\cdot||$$ be the Euclidean norm.

Examples: Consider the set $$A \subset {[0, 1[}^n$$ of points whose least significant bit is 1 (0) in all $$n \in {}^{\omega}\mathbb{N}^{*}$$ coordinates. Then $${\mu}_{\text{d0}}(A) = {2}^{-n}$$. Since $$A$$ is an infinite and conventionally uncountable union of individual points without their neighbouring points of $${[0, 1[}^n$$, and these points are Lebesgue null sets, $$A$$ is not Lebesgue measurable; it is, however, exactly measurable. Domains from $${}^{(\omega)} \mathbb{K}^{n}$$ that are pushed together more densely have no smaller (larger) intersection (union) than before.
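On a finite binary grid, this example can be replayed directly: with spacing $$h = 2^{-k}$$ standing in for d0, the points of $${[0, 1[}^n$$ whose least significant bit is 1 in every coordinate make up the fraction $$2^{-n}$$ of the grid, so $$\mu_h(A) = |A|\,h^{n} = 2^{-n}$$ independently of the grid resolution. A sketch under these stand-in assumptions (helper names are ours):

```python
from itertools import product

def exact_h_measure(points, h, m):
    """mu_h(A) = |A| * h**m for an h-homogeneous point set A."""
    return len(points) * h ** m

bits, n = 8, 2                 # 8-bit grid in [0,1[^2, spacing h = 2**-8
h = 2.0 ** -bits
grid = range(2 ** bits)        # grid point k corresponds to the coordinate k*h
# A: points whose least significant bit is 1 in all n coordinates
A = [p for p in product(grid, repeat=n) if all(k % 2 == 1 for k in p)]
measure = exact_h_measure(A, h, n)
# |A| = (2**bits / 2)**n, hence measure = 2**-n regardless of `bits`.
```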

Remark: The exact $$h$$-measure is optimal: it only considers the neighbourhood relations (NRs) of points, i.e. in the extreme case distances of points parallel to the coordinate axes. Concepts such as $$\sigma$$-algebras and null sets are dispensable, since the only null set is the empty set $$\emptyset$$.

Definition: Neighbouring points in $$A$$ are described by the irreflexive symmetric NR $$B \subseteq {A}^{2}$$. The function $$\gamma: C \rightarrow A \subseteq \mathbb{C}{}^{n}$$, where $$C \subseteq \mathbb{R}$$ is $$h$$-homogeneous and $$h$$ is infinitesimal, is called a path if $$||\gamma(x) - \gamma(y)||$$ is infinitesimal for all neighbouring points $$x, y \in C$$ with $$(\gamma(x), \gamma(y)) \in B$$. NRs are systematically written as (predecessor, successor) with the notation $$({z}_{0}, \curvearrowright {z}_{0})$$ or $$(\curvearrowleft {z}_{0}, {z}_{0})$$, pronouncing $$\curvearrowright$$ as “post” and $$\curvearrowleft$$ as “pre”. The concept of compactness is renounced in any form.$$\triangle$$

Definition: Let $${z}_{0} \in A \subseteq \mathbb{K}^{n}$$ and $$f: A \rightarrow {}^{(\nu)}\mathbb{K}^{m}$$. Proofs for predecessors will be omitted below, since they are analogous to the proofs for successors. If $$||f(\curvearrowright B {z}_{0}) - f({z}_{0})|| < \alpha$$ for infinitesimal $$\alpha \in {}^{(\omega)}\mathbb{R}{}_{>0}$$, then $$f$$ is called $$\alpha B$$-successor-continuous in $${z}_{0}$$ in the direction $$\curvearrowright B {z}_{0}$$. If the exact modulus of $$\alpha$$ does not matter, $$\alpha$$ may be omitted in the notation. If $$f$$ is $$\alpha B$$-successor-continuous for all $${z}_{0}$$ and $$\curvearrowright B {z}_{0}$$, it is simply called $$\alpha B$$-continuous. Here $$\alpha$$ is the degree of continuity. If the inequality only holds for $$\alpha = \hat{\nu}$$, $$f$$ is simply called ($$B$$-successor-)continuous. The property of $$\alpha B$$-predecessor-continuity is defined analogously.$$\triangle$$

Remark: In practice, choose $$\alpha$$ by estimating $$f$$ (for example after considering any jump discontinuities). If $$B$$ is obvious or irrelevant, it may be omitted – as below, when $$B = {}^{(\omega)}\mathbb{K}{}^{2n}$$.

Example: The function $$f: \mathbb{R} \rightarrow \{\pm 1\}$$ with $$f(x) = i^{2x/\text{d0}}$$ is nowhere successor-continuous on $$\mathbb{R}$$, but its modulus is (cf. Number Theory). Here, $$x/\text{d0}$$ is an integer, since $$\mathbb{R}$$ is d0-homogeneous. Setting instead $$f(x) = 1$$ for rational $$x$$ and $$f(x) = -1$$ otherwise, $$f$$ is partially d0-successor-continuous at non-rational numbers, unlike with the conventional notion of continuity.
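A finite grid model of $$f(x) = i^{2x/\text{d0}}$$ makes the claim concrete: at grid point $$x = kh$$ the value is $$(-1)^k$$, so every successor step jumps by 2, while the modulus never changes. A sketch with a finite grid standing in for the d0-homogeneous continuum (our naming):

```python
def f(k):
    # (-1)**k models i**(2x/h) at the grid point x = k*h
    return (-1) ** k

K = 1000  # number of successor steps inspected
jumps = [abs(f(k + 1) - f(k)) for k in range(K)]
abs_jumps = [abs(abs(f(k + 1)) - abs(f(k))) for k in range(K)]
# Every successor step of f jumps by 2 (nowhere successor-continuous),
# while |f| is constant, hence successor-continuous everywhere.
```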

Example of a Peano curve (Walter, Wolfgang: Analysis 2; 5., erw. Aufl.; 2002; Springer; Berlin, p. 188): “Consider the even, periodic function $$g: \mathbb{R} \rightarrow \mathbb{R}$$ with period 2 and image [0, 1] defined by${g}(t)=\left\{ \begin{array}{cl} 0 & \text{for }0\le t<\tfrac{1}{3}\\ 3t-1 & \text{for }\tfrac{1}{3}\le t<\tfrac{2}{3}\\ 1 & \text{for }\tfrac{2}{3}\le t\le 1.\\ \end{array} \right.\,$ Clearly, $$g$$ is fully specified by this definition, and continuous. Now let the function $$\phi: I = [0, 1] \rightarrow \mathbb{R}^{2}$$ be defined by$\phi(t) = \left( {\sum\limits_{k = 0}^{\infty} {\frac{{g({4^{2k}}t)}}{{{2^{k + 1}}}},} \sum\limits_{k = 0}^{\infty} {\frac{{g({4^{2k + 1}}t)}}{{{2^{k + 1}}}}} } \right).”$The function $$\phi$$ is at least continuous, since the sums are ultimately locally linear functions in $$t$$ when $$\infty$$ is replaced by $$\omega$$. It would however be an error to believe that [0, 1] can be bijectively mapped onto $${[0, 1]}^{2}$$ in this way: the powers of four in $$g$$, and the values 0 and 1 taken by $$g$$ on two sub-intervals, thin out $${[0, 1]}^{2}$$ so much that a bijection is clearly impossible. Restricting the proof to rational points only is simply insufficient.

Definition: For $$f: A \rightarrow {}^{(\omega)}\mathbb{K}{}^{m}$$,${d}_{\curvearrowright B z}f(z) := f(\curvearrowright B z) - f(z)$is called the $$B$$-successor-differential of $$f$$ in the direction $$\curvearrowright B z$$ for $$z \in A$$. If dim $$A = n$$, then $${d}_{\curvearrowright B z}f(z)$$ can be specified by $$d((\curvearrowright B){z}_{1}, \text{…} , (\curvearrowright B){z}_{n})f(z)$$. If $$f$$ is the identity, i.e. $$f(z) = z$$, then $${d}_{\curvearrowright B z}Bz$$ can be written instead of $${d}_{\curvearrowright B z}f(z)$$. If $$A$$ or $$\curvearrowright B z$$ is obvious or irrelevant, it may be omitted.$$\triangle$$

Definition: If $$|f(\curvearrowright x) - f(x)| > \hat{\omega}$$ holds at $$x$$ for $$f: A \subseteq {}^{\omega}\mathbb{R} \rightarrow {}^{\omega}\mathbb{R}$$, then $$x$$ is called a jump discontinuity. If the modulus of the $$B$$-successor-differential of $$f$$ in the direction $$\curvearrowright B z$$ at $$z \in A$$ is smaller than $$\alpha$$ and infinitesimal, then $$f$$ is also rated as $$\alpha B$$-successor-continuous there. An (infinitely) real-valued function with arguments $$\in {}^{(\omega)}\mathbb{K}{}^{n}$$ is said to be convex (concave) if the line segment between any two points on the graph of the function lies above (below) or on the graph. It is called strictly convex (concave) if “or on” can be omitted.$$\triangle$$

Definition: The $$m$$ arithmetic means of all $${f}_{k}(\curvearrowright B z)$$ of $$f(z)$$ give the $$m$$ averaged normed tangential normal vectors of $$m$$ (uniquely determined) hyperplanes, defining the $$mn$$ continuous partial derivatives of the Jacobian matrix of $$f$$, which is not necessarily continuous. The hyperplanes are taken to pass through $${f}_{k}(\curvearrowright B z)$$ and $$f(z)$$ translated towards 0. The moduli of their coefficients are minimised by a quite simple linear programme (cf. Linear Programming).$$\triangle$$

Theorem: Every function $$f: A \rightarrow {}^{(\omega)}\mathbb{R}$$ that is convex resp. concave on $$A \subseteq {}^{(\omega)}\mathbb{K}{}^{n}$$ is $$\alpha B$$-successor-continuous and $$B$$-successor-differentiable.$$\square$$

Definition: The partial derivative in the direction $$\curvearrowright B {z}_{k}$$ of $$F: A \rightarrow {}^{(\omega)}\mathbb{K}$$ at $$z = ({z}_{1}, …, {z}_{n}) \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}$$ with $$k \in \mathbb{N}_{\le n}^*$$ is defined as$\frac{\partial B\,F(z)}{\partial B\,{{z}_{k}}}:=\frac{F({{z}_{1}},\,…,\,\curvearrowright B\,{{z}_{k}},\,…,\,{{z}_{n}})-F(z)}{\curvearrowright B\,{{z}_{k}}-{{z}_{k}}}.$With this notation, if the function $$f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}$$ with $$z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}$$ satisfies\begin{aligned}f(z) &=\left( \frac{F(\curvearrowright B{{z}_{1}},{{z}_{2}},…,{{z}_{n}})-F({{z}_{1}},…,{{z}_{n}})}{(\curvearrowright B{{z}_{1}}-{{z}_{1}})},…,\frac{F({{z}_{1}},…,{{z}_{n-1}},\curvearrowright B{{z}_{n}})-F({{z}_{1}},…,{{z}_{n}})}{(\curvearrowright B{{z}_{n}}-{{z}_{n}})} \right)\\ &=\left( \frac{\partial B{{F}_{1}}(z)}{\partial B{{z}_{1}}},\,\,…\,\,,\,\,\frac{\partial B{{F}_{n}}(z)}{\partial B{{z}_{n}}} \right)=\text{grad }{{B}_{\curvearrowright Bz}}\,F(z)\,=\,\nabla {{B}_{\curvearrowright Bz}}\,F(z),\end{aligned}then $$f(z)$$ is called the exact $$B$$-successor-derivative $${F}_{\curvearrowright B z}^{\prime} B(z)$$ or the exact $$B$$-successor-gradient $$\text{grad }_{\curvearrowright B z} F(z)$$ of the function $$F$$ at $$z$$, which is said to be exactly $$B$$-differentiable at $$z$$ in the direction $$\curvearrowright B z$$, provided that each quotient exists in $${}^{(\omega)}\mathbb{K}$$. $$\nabla$$ is the Nabla operator. If this definition is satisfied for every $$z \in A$$, then $$F$$ is said to be an exactly $$B$$-differentiable $$B$$-AD of $$f$$. For $$x \in {}^{(\omega)}\mathbb{R}$$, the left and right $$B$$-ADs $${F}_{l}(x)$$ and $${F}_{r}(x)$$ distinguish between the cases of the corresponding $$B$$-derivatives.

If $$A$$ or $$\curvearrowright B z$$ are obvious from context or irrelevant, they may be omitted. The conventional case may be obtained analogously: for $$n = 1$$, $${F}_{r}^{\prime}B(x)$$ is the right exact $$B$$-derivative for $$\curvearrowright B x > x \in {}^{(\omega)}\mathbb{R}$$, and $${F}_{l}^{\prime}B(x)$$ is the left exact $$B$$-derivative for $$\curvearrowright B x < x$$. If all directions yield the same value, $$F^{\prime}B(z)$$ is called the exact derivative ($$A ={}^{\nu}\mathbb{C}$$ and $$n = 1$$ make $$F$$ holomorphic). On a domain $$D$$, let $$\mathcal{O}(D) \subseteq \mathcal{C}(D)$$ be the rings of holomorphic resp. continuous functions on $$D$$.$$\triangle$$
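With a finite step $$h$$ standing in for the infinitesimal $$\curvearrowright B z_k - z_k$$, the exact $$B$$-successor-gradient is simply a vector of forward difference quotients. A floating-point sketch (function names are ours, not the text's):

```python
def successor_gradient(F, z, h):
    """Forward-difference analogue of grad_{post Bz} F, with post B z_k = z_k + h."""
    grads = []
    for k in range(len(z)):
        zk = list(z)
        zk[k] += h                       # step only the k-th coordinate
        grads.append((F(zk) - F(z)) / h)  # partial difference quotient
    return grads

F = lambda z: z[0] ** 2 + 3.0 * z[1]     # conventional gradient: (2*z0, 3)
g = successor_gradient(F, [1.0, 2.0], 1e-6)
# The quotients recover (2, 3) up to O(h) plus rounding error.
```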

Chain rule: For $$x \in A \subseteq {}^{(\omega)}\mathbb{R}, B \subseteq {A}^{2}, f: A \rightarrow C \subseteq {}^{(\omega)}\mathbb{R}, D \subseteq {C}^{2}, g: C \rightarrow {}^{(\omega)}\mathbb{R}$$, choosing $$f(\curvearrowright B x) = \curvearrowright D f(x)$$, it holds that:${g}_{r}^{\prime}B(f(x)) = {g}_{r}^{\prime}D(f(x)) {f}_{r}^{\prime}B(x).$Proof:${{g}_{r}^{\prime}}B(f(x))=\frac{g(f(\curvearrowright Bx))-g(f(x))}{f(\curvearrowright Bx)-f(x)}\frac{f(\curvearrowright Bx)-f(x)}{\curvearrowright Bx-x}=\frac{g(\curvearrowright Df(x))-g(f(x))}{\curvearrowright Df(x)-f(x)}{{f}_{r}^{\prime}}B(x)={{g}_{r}^{\prime}}D(f(x)){{f}_{r}^{\prime}}B(x).\square$

Remark: As can easily be shown, the product and quotient rules require that the arguments and function values belong to a smaller level of infinity than $$1/$$d0, and that $$f$$ and $$g$$ are sufficiently ($$\alpha$$-)continuous at $$x \in A$$, i.e. $$\alpha$$ must be small enough to allow $$\curvearrowright x$$ to be replaced by $$x$$. An analogous principle holds for infinitesimal arguments. The intermediate value theorem (with overlapping $$\alpha$$-environments), L’Hôpital’s rule and the differentiation of inverse functions can also be easily established.

Remark: Differentiability is thus easy to establish. Wherever the quotient is defined in the (conventional) (infinite) real case, set${{F}_{b}^{\prime}}B(v)\,:=\,\frac{F(\curvearrowright B\,v)-F(\curvearrowleft B\,v)}{\curvearrowright B\,v-\curvearrowleft B\,v}.$This is especially useful when $$\curvearrowright B v - v = v - \curvearrowleft B v$$ and the combined derivatives both have the same sign. This definition has the advantage of viewing $${F}_{b}^{\prime} \; B(v)$$ as the “tangent slope” at the point $$v$$, especially when $$F$$ is $$\alpha B$$-continuous at $$v$$. Simpler rules of differentiation make a derivative value of 0 most suitable for cases with opposite signs (see below). In other cases, simply take the arithmetic mean of both exact derivatives. This can be extended analogously to the (conventional) complex numbers.
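The symmetric quotient $${F}_{b}^{\prime}B$$ behaves like a central difference: with equal spacing to predecessor and successor, it cancels the first-order error term that the one-sided quotient keeps. A finite-$$h$$ sketch (our naming, not the text's):

```python
def right_derivative(F, x, h):
    # one-sided successor quotient: error O(h)
    return (F(x + h) - F(x)) / h

def balanced_derivative(F, x, h):
    """F'_b: symmetric quotient over (pre v, post v) with equal spacing h."""
    # error O(h**2), since the O(h) terms of both sides cancel
    return (F(x + h) - F(x - h)) / (2.0 * h)

F = lambda x: x ** 3                    # conventional derivative at 1 is 3
x, h = 1.0, 1e-4
err_right = abs(right_derivative(F, x, h) - 3.0)
err_balanced = abs(balanced_derivative(F, x, h) - 3.0)
```

For $$F(x) = x^3$$ the one-sided error is $$3h + h^2$$, the symmetric error only $$h^2$$, matching the "tangent slope" interpretation.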

Definition: Given $$z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}$$,$\int\limits_{z\in A}{f(z)dBz:=\sum\limits_{z\in A}{f(z)(\curvearrowright B\,z-z)}}$is called the exact $$B$$-integral of the vector field $$f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}$$ on $$A$$ and $$f(z)$$ is said to be $$B$$-integrable. If this requires removing at least one point from $$A$$, then the exact $$B$$-integral is called improper.
For $$\gamma: [a, b[ \, \cap \, C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}^{n}, C \subseteq \mathbb{R}$$, and $$f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}$$,$\int\limits_{\gamma }{f(\zeta )dB\zeta =}\int\limits_{t\in [a,b[ \, \cap \, C}{f(\gamma (t)){{\gamma}_{\curvearrowright }^{\prime}}D(t)dDt}$where $$dDt > 0, \curvearrowright D t \in ]a, b] \, \cap \, C$$, choosing $$\curvearrowright B \gamma(t) = \gamma(\curvearrowright D t)$$, since $$\zeta = \gamma(t)$$ and $$dB\zeta = \gamma(\curvearrowright D t) - \gamma(t) = {\gamma}_{\curvearrowright }^{\prime}D(t) dDt$$ (i.e. for $$C = \mathbb{R}, B$$ maximal in $$\mathbb{C}^{2}$$, and $$D$$ maximal in $$\mathbb{R}^{2})$$, is called the exact $$B$$-LI of the vector field $$f$$ along the path $$\gamma$$. Improper exact $$B$$-LIs are defined analogously to exact $$B$$-integrals, except that only interval end points may be removed from $$[a, b[ \, \cap \, C$$.$$\triangle$$

Remark: The exact LI of $$f$$ on $${}^{(\nu)}\mathbb{K}$$ does not require $$f$$ to be continuous; it always exists and is usually consistent with the conventional LI. It is linear and monotone in the (conventional) (infinite) real case. The art of integration lies in correctly combining the summands of a sum.
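On an $$h$$-homogeneous grid the exact $$B$$-integral is literally the sum $$\sum f(z)(\curvearrowright B z - z)$$. With a small finite $$h$$ standing in for the infinitesimal spacing, this sum reproduces the conventional integral up to $$\mathcal{O}(h)$$, for instance $$\int_0^1 x^2\,dx = 1/3$$. A sketch (hypothetical helper name):

```python
def exact_B_integral(f, a, b, h):
    """Sum f(z) * (post z - z) over the h-grid on [a, b[."""
    steps = round((b - a) / h)
    # post z - z = h at every grid point of an h-homogeneous set
    return sum(f(a + k * h) * h for k in range(steps))

approx = exact_B_integral(lambda x: x * x, 0.0, 1.0, 1e-4)
# For infinitesimal h this is the exact value; for finite h it is a
# left sum that underestimates the increasing integrand by about h/2.
```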

Definition: For all $$x \in V$$ of an $$h$$-homogeneous $$n$$-volume $$V \subseteq [{a}_{1}, {b}_{1}] \times…\times [{a}_{n}, {b}_{n}] \subseteq {}^{(\omega)}\mathbb{R}^{n}$$ with $$B = {B}_{1}\times…\times{B}_{n}, {B}_{k} \subseteq {[{a}_{k}, {b}_{k}]}^{2}$$ and $$|{dB}_{k}{x}_{k}| = h$$ for all $$k \in \mathbb{N}_{\le n}^*$$,$\int\limits_{x\in V}{f(x){dBx}}:=\int\limits_{x\in V}{f(x)dB({{x}_{1}},\,…,{{x}_{n}})}:=\int\limits_{{{a}_{n}}}^{{{b}_{n}}}{…\int\limits_{{{a}_{1}}}^{{{b}_{1}}}{f(x)d{{B}_{1}}{{x}_{1}}\,…\,d{{B}_{n}}{{x}_{n}}}}$is called the exact $$B$$-volume integral of the $$B$$-volume integrable function $$f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}$$ with $$f(x) := 0$$ for all $$x \in {}^{(\omega)}\mathbb{R}^{n} \setminus V$$. Improper exact $$B$$-volume integrals are defined analogously to exact $$B$$-integrals.$$\triangle$$

Remark: Because $$\mathbb{C}$$ and $$\mathbb{R}^{2}$$ are isomorphic, an analogue exists in the complex case, and$\int\limits_{x\in V}{dBx={{\mu }_{h}}(V)}.$

Example: Using the exact $$B$$-volume integral, in contrast to the Lebesgue integral,$||f|{{|}_{p}}:={{\left( \int\limits_{x\in V}{||f(x)|{{|}^{p}}dBx} \right)}^{\hat{p}}}$satisfies, for arbitrary $$f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}^{m}$$ and $$p \in [1, \omega]$$, all the properties of a norm, including definiteness.

Example: Let $$[a, b[ \, \cap \, h{}^{\omega}\mathbb{Z} \ne \emptyset$$ be an $$h$$-homogeneous subset of $$[a, b[ \, \subseteq {}^{\omega}\mathbb{R}$$, and write $$B \subseteq ([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}) \times (]a, b] \, \cap \, h{}^{\omega}\mathbb{Z})$$. Now let $${T}_{r}$$ be a right $$B$$-AD of a not necessarily convergent TS $$t$$ on $$[a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}$$ and define $$f(x) := t(x) + \varepsilon i^{2x/h}$$ for conventionally real $$x$$ and $$\varepsilon \ge \hat{\nu}$$. For $$h = \hat{\nu}$$, $$f$$ is nowhere continuous, and thus conventionally nowhere differentiable or integrable on $$[a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}$$, but for all $$h$$ it holds that$f_{r}^{\prime }B(x)=t_{r}^{\prime }B(x)-2\widehat{dBx}\varepsilon {i^{2x/h}}$and$\int\limits_{x\in [a,b[ \, \cap \, h{}^{\omega }\mathbb{Z}}{f(x)dBx={{T}_{r}}(b)-{{T}_{r}}(a)+\,}\hat{2}\varepsilon \left( {i^{2a/h}}-{i^{2b/h}} \right).$

Example: The conventionally non-measurable middle-thirds Cantor set $${C}_{\hat{3}}$$ has measure $${\mu}_{\text{d0}}({C}_{\hat{3}}) = {\delta}^{-\omega}$$ for $$\delta := \frac{2}{3}$$. Consider the function $$c: [0, 1] \rightarrow \{0, {\delta}^{\omega}\}$$ defined by $$c(x) = {\delta}^{\omega}$$ for $$x \in {C}_{\hat{3}}$$ and $$c(x) = 0$$ for $$x \in [0, 1] \setminus {C}_{\hat{3}}$$. Then$\int\limits_{x \in {{C}_{\hat{3}}}}{c(x)dx=\sum\limits_{x=0}^{1}{c(x)dx}}={{\delta}^{\omega}}{{\mu }_{\text{d0}}}\left( {{C}_{\hat{3}}} \right)=1.$

Fubini’s theorem: For $$X, Y \subseteq {}^{(\omega)}\mathbb{K}$$ and $$f: X\times Y \rightarrow {}^{(\omega)}\mathbb{K}$$, a reordering of integral sums shows$\int\limits_{Y}{\int\limits_{X}{f(x,\,y)dBx\,}dBy}=\int\limits_{X\times Y}{f(x,\,y)dB(x,\,y)}=\int\limits_{X}{\int\limits_{Y}{f(x,\,y)dBy\,}dBx}.\square$

Transformation theorem: If the Jacobian $$D\varphi(x)$$ exists, linear algebra teaches, for $$f: \varphi(A) \rightarrow {}^{(\omega)}\mathbb{R}^n$$ and $$A \subseteq {}^{\omega}\mathbb{R}^n$$ (cf. , p. 519):$\int\limits_{\varphi(A)}^{\ }{f(y)dy=\int\limits_{A}^{\ }{f(\varphi(x))|\det(D\varphi(x))|dx}}.\square$

Example: Since$\int\limits_{[a,\,b[\times [r,\,s[}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{{d}^{2}}(x,\,y)}=\int\limits_{a}^{b}{\left. \frac{ydx}{{{x}^{2}}+{{y}^{2}}} \right|_{r}^{s}}=-\int\limits_{r}^{s}{\left. \frac{xdy}{{{x}^{2}}+{{y}^{2}}} \right|_{a}^{b}}=\arctan \frac{s}{b}-\arctan \frac{r}{b}+\arctan \frac{s}{a}-\arctan \frac{r}{a}$by the principle of latest substitution (see below), the (improper) integral$I(a,b):=\int\limits_{[a,\,b{{[}^{2}}}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{{d}^{2}}(x,\,y)}=\arctan \frac{b}{b}-\arctan \frac{a}{b}+\arctan \frac{b}{a}-\arctan \frac{a}{a}= \iota - \iota = 0$is obtained, and not$I(0,1)=\int\limits_{0}^{1}{\int\limits_{0}^{1}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}dy\,dx}}=\int\limits_{0}^{1}{\frac{dx}{1+{{x}^{2}}}}=\frac{\iota}{2}\ne -\frac{\iota}{2}=-\int\limits_{0}^{1}{\frac{dy}{1+{{y}^{2}}}}=\int\limits_{0}^{1}{\int\limits_{0}^{1}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}dx\,dy}}=I(0,1).$

Definition: A sequence $$({a}_{k})$$ with members $${a}_{k}$$ is a mapping from $${}^{(\omega)}\mathbb{Z}$$ to $${}^{(\omega)}\mathbb{C}^{m}: k \mapsto {a}_{k}$$. A series is a sequence $$({s}_{k})$$ with $$m \in {}^{(\omega)}\mathbb{Z}$$ and partial sums${{s}_{k}}=\sum\limits_{j=m}^{k}{{{a}_{j}}}.\triangle$

Definition: A sequence $$({a}_{k})$$ with $$k \in {}^{(\omega)}\mathbb{N}^{*}, {a}_{k} \in {}^{(\omega)}\mathbb{C}$$ and $$\alpha \in ]0, \hat{\nu}]$$ is called $$\alpha$$-convergent to $$a \in {}^{(\omega)}\mathbb{C}$$ if there exists $$m \in {}^{(\omega)}\mathbb{N}^{*}_{\le k}$$ such that $$|{a}_{k} - a| < \alpha$$ for all $${a}_{k}$$ for which $$k - m$$ is not too small.
The set $$\alpha$$-$$A$$ of all such $$a$$ is called set of $$\alpha$$-limit values of $$({a}_{k})$$. A uniquely determined representative of this set (e.g. the final value or mean value) is called the $$\alpha$$-limit value $$\alpha$$-$$a$$. For the case $$a = 0$$, the sequence is called a zero sequence. If the inequality only holds for $$\alpha = \hat{\nu}$$, the $$\alpha$$- is omitted. Usually, $$k$$ will be chosen maximal and $$\alpha$$ minimal.

Remark: Conventional limit values are hardly more precise than $$\mathcal{O}(\hat{\omega})$$. Their actual transcendence or algebraicity is seldom considered! To avoid the exclusive relevance of the largest index of each sequence (cf. Heuser, Harro: Lehrbuch der Analysis Teil 1; 17., akt. Aufl.; 2009; Vieweg + Teubner; Wiesbaden, p. 144), the conventional definition requires the completion that infinitely many or almost all members of the sequence have an arbitrarily small distance from the limit value. Only finitely many may have a larger distance. Then only monotone convergence remains valid (cf. loc. cit., p. 155).

Remark: The fundamental theorem of set theory renders the representation of each positive number by a determined, unique, infinite decimal fraction baseless (cf. loc. cit., p. 27 f.). Putting $$\varepsilon := \; \curvearrowright 0$$, any proof claiming that, for $$\varepsilon \in {}^{(\omega)}\mathbb{R}_{>0}$$, especially for all $$\varepsilon \in {}^{(\nu)}\mathbb{R}_{>0}$$, there exists a real number $$\varepsilon\hat{r}$$ with real $$r \in {}^{(\omega)}\mathbb{R}_{>1}$$ is false. Otherwise, an infinite regression may occur. The $$\varepsilon\delta$$-definition of the limit value (it is questionable that $$\delta$$ exists; loc. cit., p. 235 f.) requires $$\varepsilon$$ as a specific multiple of $$\curvearrowright 0$$.

Remark: This is also true for the $$\varepsilon\delta$$-definition of continuity (see loc. cit., p. 215 f.): consider for example the real function that doubles every real value but is not even uniformly continuous. Uniform continuity need not be considered separately, since in general $$\delta := \; \curvearrowright 0$$ and $$\varepsilon$$ is chosen accordingly larger. If two function values do not satisfy the conditions, then the function is not continuous at that point. Thus, continuity is equivalent to uniform continuity, by choosing the largest $$\varepsilon$$ from all admissible infinitesimal values.

Remark: Easily, continuity is seen to be equivalent to Hölder continuity; here infinite real constants may be allowed. The same is true for uniform convergence, since simply the maximum of the indices may be chosen such that each argument satisfies everything as the index, and $$\acute{\omega}$$ is sufficient in every case. Otherwise, pointwise convergence fails as well. Thus, uniform convergence is equivalent to pointwise convergence, by choosing the largest of all admissible infinitesimal values.

Example: The (2d0)-continuous function $$f: {}^{(\omega)}\mathbb{R} \rightarrow \{0, \text{d0}\}$$ defined by $$f(x):=\hat{2}\text{d0}(i^{2x/\text{d0}}+1)$$ consists of only the local minima 0 and the local maxima d0, and has the left and right exact derivatives $$\pm 1$$.

Example: The function $$f: [0, 1] \rightarrow [-\varsigma/\grave{\varsigma}, \varsigma/\grave{\varsigma}]$$ with $$f(x) := i^{2q} q/\grave{q}$$ if $$x$$ is rational with denominator $$q \in \mathbb{Q}_{> 0}$$, and $$f(x) := 0$$ otherwise, has the two relative extrema $$\pm \varsigma/\grave{\varsigma}$$ (cf. Gelbaum, Bernard R.; Olmsted, John M. H.: Counterexamples in Analysis; Republ., unabr., slightly corr.; 2003; Dover Publications; Mineola, New York, p. 24).

First fundamental theorem of exact differential and integral calculus for LIs: The function$F(z)=\int\limits_{\gamma }{f(\zeta )dB\zeta }$where $$\gamma: [d, x[ \, \cap \, C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}, C \subseteq \mathbb{R}, f: A \rightarrow {}^{(\omega)}\mathbb{K}, d \in [a, b[ \, \cap \, C$$, and choosing $$\curvearrowright B \gamma(x) = \gamma(\curvearrowright D x)$$ is exactly $$B$$-differentiable, and for all $$x \in [a, b[ \, \cap \, C$$ and $$z = \gamma(x)$$$F^{\prime} \curvearrowright B(z) = f(z).$Proof: \begin{aligned}dB(F(z)) &=\int\limits_{t\in [d,x] \, \cap \, C}{f(\gamma (t)){{\gamma}_{\curvearrowright }^{\prime}}D(t)dDt}-\int\limits_{t\in [d,x[ \, \cap \, C}{f(\gamma (t)){{\gamma}_{\curvearrowright }^{\prime}}D(t)dDt}=\int\limits_{x}{f(\gamma (t))\frac{\gamma (\curvearrowright Dt)-\gamma (t)}{\curvearrowright Dt-t}dDt}\\ &=f(\gamma (x)){{\gamma}_{\curvearrowright }^{\prime}}D(x)dDx=\,f(\gamma (x))(\curvearrowright B\gamma (x)-\gamma (x))=f(z)dBz.\square\end{aligned}

Second fundamental theorem of exact differential and integral calculus for LIs: According to the conditions from above, it holds with $$\gamma: [a, b[ \, \cap \, C \rightarrow {}^{(\omega)}\mathbb{K}$$ that$F(\gamma (b))-F(\gamma (a))=\int\limits_{\gamma }{{{F}_{\curvearrowright }^{\prime}}B(\zeta )dB\zeta }.$Proof: \begin{aligned}F(\gamma (b))-F(\gamma (a)) &=\sum\limits_{t\in [a,b[ \; \cap C}{F(\curvearrowright B\,\gamma (t))}-F(\gamma (t))=\sum\limits_{t\in [a,b[ \; \cap C}{{{{{F}_{\curvearrowright }^{\prime}}}}B(\gamma (t))(\curvearrowright B\,\gamma (t)-\gamma (t))} \\ &=\int\limits_{t\in [a,b[ \; \cap C}{{{{{F}_{\curvearrowright }^{\prime}}}}B(\gamma (t)){{{{\gamma }_{\curvearrowright }^{\prime}}}}D(t)dDt}=\int\limits_{\gamma }{{{{{F}_{\curvearrowright }^{\prime}}}}B(\zeta )dB\zeta }.\square\end{aligned}
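The second fundamental theorem's proof is a telescoping sum: the successor differentials $$F(\curvearrowright B\,\gamma(t)) - F(\gamma(t))$$ over the whole grid collapse to $$F(\gamma(b)) - F(\gamma(a))$$ with no limit process. A finite-grid sketch with $$\gamma$$ the identity (our naming):

```python
def telescoping_sum(F, a, b, h):
    """Sum of the successor differentials F(post z) - F(z) over the grid on [a, b[."""
    total = 0.0
    steps = round((b - a) / h)
    for k in range(steps):
        # each term is one successor differential; adjacent terms cancel
        total += F(a + (k + 1) * h) - F(a + k * h)
    return total

F = lambda x: x ** 3 - 2.0 * x
lhs = telescoping_sum(F, 0.0, 1.0, 2.0 ** -10)  # binary h keeps grid points exact
# The sum telescopes to F(1) - F(0) = -1 up to rounding in the summation.
```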

Corollary: If $$f$$ has an AD $$F$$ on a CP $$\gamma$$, it holds under the above conditions that$\oint\limits_{\gamma }{f(\zeta )dB\zeta :=}\int\limits_{\gamma }{f(\zeta )dB\zeta }=0.\square$

Remark: The conventionally real case of both fundamental theorems may be established analogously. Given $$u, v \in [a, b[ \, \cap \, C$$ with $$u \ne v$$ and $$\gamma(u) = \gamma(v)$$, it may be the case that $$\curvearrowright B \gamma(u) \ne \; \curvearrowright B \gamma(v)$$.

Remark: Sums may be arbitrarily rearranged according to the associative, commutative, and distributive laws if care is taken to calculate them correctly (using Landau symbols).

Leibniz integral rule: For $$f: {}^{(\omega)}\mathbb{K}^{n+1} \rightarrow {}^{(\omega)}\mathbb{K}, a, b: {}^{(\omega)}\mathbb{K}^{n} \rightarrow {}^{(\omega)}\mathbb{K}, \curvearrowright B x := {(s, {x}_{2}, …, {x}_{n})}^{T}$$, and $$s \in {}^{(\omega)}\mathbb{K} \setminus \{{x}_{1}\}$$, choosing $$\curvearrowright D a(x) = a(\curvearrowright B x)$$ and $$\curvearrowright D b(x) = b(\curvearrowright B x)$$, it holds that$\frac{\partial }{\partial {{x}_{1}}}\left( \int\limits_{a(x)}^{b(x)}{f(x,t)dDt} \right)=\int\limits_{a(x)}^{b(x)}{\frac{\partial f(x,t)}{\partial {{x}_{1}}}dDt}+\frac{\partial b(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,b(x))-\frac{\partial a(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,a(x)).$Proof: \begin{aligned}\frac{\partial }{\partial {{x}_{1}}}\left( \int\limits_{a(x)}^{b(x)}{f(x,t)dDt} \right) &={\left( \int\limits_{a(\curvearrowright Bx)}^{b(\curvearrowright Bx)}{f(\curvearrowright Bx,t)dDt}-\int\limits_{a(x)}^{b(x)}{f(x,t)dDt} \right)}/{\partial {{x}_{1}}}\;\\ &={\left( \int\limits_{a(x)}^{b(x)}{(f(\curvearrowright Bx,t)-f(x,t))dDt}+\int\limits_{b(x)}^{b(\curvearrowright Bx)}{f(\curvearrowright Bx,t)dDt}-\int\limits_{a(x)}^{a(\curvearrowright Bx)}{f(\curvearrowright Bx,t)dDt} \right)}/{\partial {{x}_{1}}}\;\\ &=\int\limits_{a(x)}^{b(x)}{\frac{\partial f(x,t)}{\partial {{x}_{1}}}dDt}+\frac{\partial b(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,b(x))-\frac{\partial a(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,a(x)).\square\end{aligned}

Remark: Complex integration allows a path whose start and end points are the limits of integration. If $$\curvearrowright D a(x) \ne a(\curvearrowright B x)$$, then multiply the final summand by $$(\curvearrowright D a(x) - a(x))/(a(\curvearrowright B x) - a(x))$$. If $$\curvearrowright D b(x) \ne b(\curvearrowright B x)$$, then multiply the penultimate summand by $$(\curvearrowright D b(x) - b(x))/(b(\curvearrowright B x) - b(x))$$. Let $$n \in {}^{\omega}\mathbb{N}^{*}$$ and $$x \in [0, 1]$$ in each case for the following examples (cf. Heuser, loc. cit., p. 540 – 543).

1. The sequence $${f}_{n}(x) = \sin(nx)/\sqrt{n}$$ does not tend to $$f(x) = 0$$ as $$n \rightarrow \omega$$, but instead to $$f(x) = \sin(\omega x)/\sqrt{\omega}$$ with (continuous) derivative $$f^{\prime}(x) = \cos(\omega x) \sqrt{\omega}$$ instead of $$f^{\prime}(x) = 0$$.

2. The sequence $${f}_{n}(x) = x - \hat{n}x^{n}$$ tends to $$f(x) = x - \hat{\omega}{x}^{\omega}$$ as $$n \rightarrow \omega$$ instead of $$f(x) = x$$, with derivative $$f^{\prime}(x) = 1 - {x}^{\acute{\omega}}$$ instead of $$f^{\prime}(x) = 1$$. Conventionally, the limit of $${f}_{n}^{\prime}(x) = 1 - {x}^{\acute{n}}$$ is discontinuous at the point $$x = 1$$.

3. The sequence $${f}_{n}(x) = nx(-\acute{x})^{n}$$ does not tend to $$f(x) = 0$$ as $$n \rightarrow \omega$$, but to the continuous function $$f(x) = \omega x{(-\acute{x})}^{\omega}$$, and takes the value $$\hat{e}$$ when $$x = \hat{\omega}$$.

Definition: Let, according to the trapezoidal rule,$\int\limits_{z\in A}^{T}{f(z)dBz:=\sum\limits_{z\in A}{\frac{(f(z)+f(\curvearrowright B\,z))}{2}(\curvearrowright B\,z-z)}}.$Let, according to the midpoint rule and assuming that $$(z + \curvearrowright B z)/2$$ exists,$\int\limits_{z\in A}^{M}{f(z)dBz:=\sum\limits_{z\in A}{f\left( \frac{z\,+\curvearrowright Bz}{2} \right)(\curvearrowright B\,z-z)}}.\triangle$

Remark: Since these tightened exact $$B$$-integrals are clearly independent of the direction, they justify (implicitly) theorems that cancel integral values in opposite directions, such as Green’s theorem (see below). In the first fundamental theorem, the derivative $$dB(F(z))/dBz$$ can be tightened to the arithmetic mean $$(f(z) + f(\curvearrowright B z))/2$$ resp. $$f((z + \curvearrowright B z)/2)$$, and similarly, in the second fundamental theorem, $$F(\gamma(b)) - F(\gamma(a))$$ can be tightened to $$(F(\gamma(b)) + F(\curvearrowleft B \gamma(b)))/2 - (F(\gamma(a)) + F(\curvearrowright B \gamma(a)))/2$$ resp. $$F((\gamma(b) + \curvearrowleft B \gamma(b))/2) - F((\gamma(a) + \curvearrowright B \gamma(a))/2)$$. This yields approximately the original results when $$f$$ and $$F$$ are sufficiently $$\alpha$$-continuous at the boundary.
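With a finite grid spacing standing in for the infinitesimal $$h$$, the two tightened integrals are the classical trapezoidal and midpoint rules, both direction-independent and second-order accurate. A sketch (our helper names) for $$\int_0^1 x^2\,dx = 1/3$$, where the respective errors are $$h^2/6$$ and $$h^2/12$$:

```python
def trapezoidal_B_integral(f, a, b, h):
    """Tightened integral: average of f at z and post z, times (post z - z)."""
    steps = round((b - a) / h)
    return sum((f(a + k * h) + f(a + (k + 1) * h)) / 2.0 * h for k in range(steps))

def midpoint_B_integral(f, a, b, h):
    """Tightened integral: f evaluated at the midpoint (z + post z)/2."""
    steps = round((b - a) / h)
    return sum(f(a + (k + 0.5) * h) * h for k in range(steps))

f, exact = (lambda x: x * x), 1.0 / 3.0
h = 2.0 ** -8
err_trap = abs(trapezoidal_B_integral(f, 0.0, 1.0, h) - exact)
err_mid = abs(midpoint_B_integral(f, 0.0, 1.0, h) - exact)
# For x**2 the midpoint error (h**2/12) is exactly half the trapezoidal one (h**2/6).
```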

Definition: For a CP $$\gamma: [a, b[ \rightarrow {}^{(\omega)}\mathbb{C}$$ and $$z \in {}^{(\omega)}\mathbb{C}, \widehat{2\pi i}\int_{\gamma}{\widehat{\zeta-z}d\zeta}$$ is called the winding number or index ind$$_{\gamma}(z) \in \mathbb{Z}$$. The coefficients $$a_{j,-1}$$ of the function $$f: A \rightarrow {}^{(\omega)}\mathbb{C}$$ for $$A \subseteq {}^{(\omega)}\mathbb{C}, n \in {}^{\omega}\mathbb{N}, a_{jk}, c_j \in {}^{(\omega)}\mathbb{C}$$ and$f(z)=\sum_{j=0}^{n}\sum_{k=-\omega}^{\omega}{a_{jk}{(z-c_j)}^k}$with pairwise different $$c_j$$ are called the residues res$$_{c_j}f.\triangle$$
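A discrete sketch of the winding number, with a finite $$N$$ replacing the infinitesimal subdivision of the path (an assumption purely for illustration):

```python
import cmath

# Discrete sketch of ind_gamma(z): approximate
# (1/(2*pi*i)) * sum_k (zeta_{k+1} - zeta_k)/(zeta_k - z)
# over a circle traversed once anticlockwise.
def winding_number(z, centre=0j, radius=1.0, N=4096):
    pts = [centre + radius * cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
    total = sum((pts[(k + 1) % N] - pts[k]) / (pts[k] - z) for k in range(N))
    return total / (2j * cmath.pi)

print(round(winding_number(0.3 + 0.2j).real))  # 1 (point inside the circle)
print(round(winding_number(2.0 + 0.0j).real))  # 0 (point outside)
```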

Integral formula: The last corollary shows that for $$f: A \rightarrow {}^{(\omega)}\mathbb{C}$$ and a CP $$\gamma: [a, b[ \rightarrow {}^{(\omega)}\mathbb{C}$$ with $$\gamma([a, b[) \subseteq A$$, the equation $$f(z)$$ ind$$_\gamma(z) = \widehat{2\pi i}\int_{\gamma}{\widehat{\zeta-z}f(\zeta)d\zeta}$$ holds if and only if $$g(\zeta) = \widehat{\zeta-z}(f(\zeta)-f(z))$$ satisfies $$\int_{\gamma}^{\ }{g(\zeta)}d\zeta=0$$; this holds in particular when $$g$$ has an AD on $$\gamma([a,b[)$$.$$\square$$

Residue theorem: For $$\gamma$$ and $$f$$ as above, it holds that$\widehat{2\pi i}\int\limits_{\gamma}{f(\zeta)d\zeta}=\sum_{j=0}^{n}{{\rm ind}_\gamma(c_j)}{\rm res}_{c_j}f.$Proof: All $$j \in \mathbb{N}_{\le n}$$ and all $$k \in {}^{\omega}\mathbb{Z} \setminus \{-1\}$$ provide that$\int\limits_{\gamma}{{a_{jk}\left(\zeta-c_j\right)}^kd\zeta}=0$and$\widehat{2\pi i}\int\limits_{\gamma}{{a_{j,-1}}\widehat{\zeta-c_j}d\zeta}={\rm ind}_\gamma(c_j){\rm res}_{c_j}f.\square$Definition: Let $$f: A \rightarrow {}^{(\omega)}\mathbb{K}$$ for $$A \subseteq {}^{(\omega)}\mathbb{K}$$. The left-hand side of$\frac{d_{\curvearrowright B\,z}^{2}Bf(z)}{{{(d\curvearrowright B\,z)}^{2}}}:=\frac{f(\curvearrowright B(\curvearrowright B\,z))-2f(\curvearrowright B\,z)+f(z)}{{{(d\curvearrowright B\,z)}^{2}}}$is called the second derivative of $$f$$ at $$z \in A$$ in the direction $$\curvearrowright B z.\triangle$$
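The second derivative just defined is a forward second difference; a finite sketch with $$\curvearrowright B z = z + h$$ for a small but finite $$h$$ (the text's $$h$$ is infinitesimal):

```python
# Finite sketch of the second derivative above as a forward second
# difference on a uniform grid with successor z + h.
def second_derivative(f, z, h=1e-4):
    return (f(z + 2*h) - 2*f(z + h) + f(z)) / (h * h)

print(second_derivative(lambda x: x**3, 2.0))  # ≈ 12 = (x^3)'' at x = 2
```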

Remark: Higher derivatives are defined analogously. Every number $${m}_{n} \in {}^{\omega}\mathbb{N}$$ for $$n \in {}^{\omega}\mathbb{N}^{*}$$ of derivatives is written as an exponent after the $$n$$-th variable to be differentiated. If $$n \ge 2$$, the derivatives are called partial and $$d$$ is replaced by $$\partial$$. The exponent to be specified in the numerator is the sum of all $${m}_{n}$$. Since $$1/(-1)! = 0$$, the Leibniz product rule then follows for $$g$$ like $$f$$ and $$p \in {}^{\omega}\mathbb{N}^{*}$$:$(fg)^{(p)} = \sum\limits_{m+n=p}\binom{p}{m}f^{(m)} g^{(n)}.$Proof: For $$p = 1$$, the product rule mentioned above holds. Induction step from $$p$$ to $$\grave{p}$$:\begin{aligned}(fg)^{(\grave{p})} &\underset{p}{=} \sum\limits_{m+1+n=\grave{p}} {\left (\binom{p}{m}+\binom{p}{\grave{m}} \right ) f^{(\grave{m})} g^{(n)}}+\sum\limits_{m+1+n=\grave{p}} {\binom{p}{m} f^{(m)} g^{(\grave{n})}} -\sum\limits_{m+1+n=\grave{p}} {\binom{p}{\grave{m}} f^{(\grave{m})} g^{(n)}} \\ &=\left(\left(fg\right)^\prime\right)^{(p)}\underset{1}{=}\left(f^\prime g+fg^\prime\right)^{\left(p\right)}=\left(f^\prime g\right)^{\left(p\right)}+\left(fg^\prime\right)^{\left(p\right)}=\sum\limits_{m+n=\grave{p}}{\binom{\grave{p}}{m} f^{\left(m\right)}g^{\left(n\right)}}.\square\end{aligned}Taylor’s theorem: $$\sum_m |f^{(m)}(a)| > \hat{\nu}, f^{(m)}(a) \in {}^{\omega}\mathbb{C}, g(z) = (z-a)^\omega, |z - a| < \omega/e$$ and $$z \rightarrow a$$ imply$f(z)=T_\omega(z):=\sum\limits_{m=0}^{\omega}{\widehat{m!}f^{(m)}(a)(z-a)^m}.$Proof: From L’Hôpital’s rule, it follows that$f(z)=\frac{(fg)(z)}{g(z)}=\frac{(fg)^\prime(z)}{g^\prime(z)}=…=\frac{(fg)^{(\acute{\omega})}(z)}{g^{(\acute{\omega})}(z)}=\frac{(fg)^{(\omega)}(z)}{g^{(\omega)}(z)}=\widehat{\omega!}(fg)^{(\omega)}(z)$and the Leibniz product rule gives$(fg)^{(\omega)}(z)=\sum_{m+n=\omega}{\binom{\omega}{m}f^{(m)}(a)g^{(\omega-m)}(z)}=g^{(\omega)}(z)\sum_{m=0}^{\omega}{\widehat{m!}f^{(m)}(a)(z-a)^m}.\square$Conclusion: The second 
fundamental theorem implies for the remainder $$R_n(z) := f(z) - T_n(z) = f(a) + \int_{a}^{z}{f^\prime(t)dt} - T_n(z)$$ by the mean value theorem where $$\xi \in \mathbb{B}_a(z)$$ and $$p\in\mathbb{N}_{\le n}^*$$$R_n(z)=\int_{a}^{z}{\widehat{n!}(z-t)^nf^{(\grave{n})}(t)dt}={\widehat{pn!}(z-\xi)}^{\grave{n}-p}f^{(\grave{n})}(\xi)(z-a)^p.$Proof by induction with integration by parts and induction step from $$n$$ to $$\grave{n}$$ (the case $$n = 0$$ is treated above):$f(z)=T_n(z)+\widehat{\grave{n}!}(z-a)^{\grave{n}}f^{(\grave{n})}(a)+\int_{a}^{z}{\widehat{\grave{n}!}(z-t)^{\grave{n}}f^{(n+2)}(t)dt}=T_{\grave{n}}(z)+R_{\grave{n}}(z).\square$Remark: It holds that $$(e^{d0}-1)/d0 = \varsigma(1+ \hat{\varsigma})^{\varsigma d0} - \varsigma = 1 = \exp(0)^\prime$$ and thus $$d \ln y/dy = \hat{y}$$ from $$dy/dx = y := e^x$$ as well as $$d x^n = d(e^{n \ln x}) = nx^{\acute{n}}dx$$ for $$n \in {}^{\omega}\mathbb{N}^{*}$$ by the product and chain rules. Unit circle and triangles easily show the relations sin d0/1 = (cos d0 $$-$$ 1)/d0 and cos d0/1 = $$-$$sin d0/d0. Hence, it holds that sin(0)$${}^\prime$$ = cos(0) and cos(0)$${}^\prime = -$$sin(0) as well as, for $$m \in {}^{\omega}\mathbb{N}$$ and $$n = 2k$$, de Moivre’s formula:$(\cos z + i \sin z)^m = e^{imz}=1+\sum_{k=1}^{\omega/2}\left({\widehat{\acute{n}!}(imz)}^{\acute{n}}+{\widehat{n!}(imz)}^{n}\right)=\cos{\left(mz\right)}+i \sin\left(mz\right).\square$Theorem, improving Froda’s: A monotone function $$f: [a, b] \rightarrow {}^{\omega}\mathbb{R}$$ has at most $$2\omega^2 - 1$$ jump discontinuities, since at most $$2\omega^2$$ jump discontinuities with a jump of $$\hat{\omega}$$ fit between $$-\omega$$ and $$\omega$$ if the function does not decrease at non-discontinuities, like a step function.$$\square$$
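De Moivre's formula above admits a quick numeric spot check (a finite check for one complex argument, not a proof):

```python
import cmath

# Numeric spot check of de Moivre's formula
# (cos z + i sin z)^m = cos(mz) + i sin(mz) for complex z, integer m.
z, m = 0.7 + 0.3j, 5
lhs = (cmath.cos(z) + 1j * cmath.sin(z)) ** m
rhs = cmath.cos(m * z) + 1j * cmath.sin(m * z)
print(abs(lhs - rhs))  # ≈ 0 up to rounding
```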

Definition: The derivative of a function $$f: A \rightarrow {}^{(\omega)}\mathbb{R}$$, where $$A \subseteq {}^{(\omega)}\mathbb{R}$$, is defined to be 0 if and only if 0 lies in the interval defined by the boundaries of the left and right exact derivatives.$$\triangle$$

Exchange theorem: The result of multiple partial derivatives of a function $$f: A \rightarrow {}^{(\omega)}\mathbb{K}$$ is independent of the order of differentiation, provided that variables are only evaluated and limits are only computed at the end, if applicable (principle of latest substitution).

Proof: The derivative is uniquely determined: This is clear up to the second derivative, and the result follows by (transfinite) induction for higher-order derivatives.$$\square$$

Example: Let $$f: {}^{\omega}\mathbb{R}^{2} \rightarrow {}^{\omega}\mathbb{R}$$ be defined by $$f(0, 0) = 0$$ and $$f(x, y) = {xy}^{3}/({x}^{2} + {y}^{2})$$ otherwise. Then:$\frac{{{\partial ^2}f}}{{\partial x\partial y}} = \frac{{{y^6} + 6{x^2}{y^4} - 3{x^4}{y^2}}}{{{{({x^2} + {y^2})}^3}}} = \frac{{{\partial ^2}f}}{{\partial y\partial x}}$with value $$\hat{2}$$ at the point (0, 0), even though in$\frac{{\partial f}}{{\partial x}} = \frac{{{y^5} - {x^2}{y^3}}}{{{{({x^2} + {y^2})}^2}}} \ne \frac{{x{y^4} + 3{x^3}{y^2}}}{{{{({x^2} + {y^2})}^2}}} = \frac{{\partial f}}{{\partial y}}$the left-hand side equals $$y$$ for $$x = 0$$ and the right-hand side equals 0 for $$y = 0$$. Partially differentiating the left-hand side with respect to $$y$$ gives $$1 \ne 0$$, the partial derivative of the right-hand side with respect to $$x$$.
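The order dependence can be reproduced numerically; in this sketch the inner step $$h$$ is taken much smaller than the outer step $$k$$ (an assumption needed to separate the two limits):

```python
# Numeric sketch of the example: for f(x, y) = x*y**3/(x**2 + y**2), the two
# orders of mixed differentiation at the origin give different values
# (1 vs. 0), illustrating the principle of latest substitution.
def f(x, y):
    return 0.0 if x == y == 0 else x * y**3 / (x*x + y*y)

h, k = 1e-9, 1e-3
dfdx = lambda y: (f(h, y) - f(-h, y)) / (2 * h)   # ≈ y near the origin
mixed_xy = (dfdx(k) - dfdx(-k)) / (2 * k)         # d/dy of df/dx at (0, 0)
dfdy = lambda x: (f(x, h) - f(x, -h)) / (2 * h)   # ≈ 0 near the origin
mixed_yx = (dfdy(k) - dfdy(-k)) / (2 * k)         # d/dx of df/dy at (0, 0)
print(mixed_xy, mixed_yx)  # ≈ 1.0 and ≈ 0.0
```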

Theorem: Split $$F: A \rightarrow {}^{(\omega)}\mathbb{C}$$ into real and imaginary parts as $$F(z) := U(z) + i V(z) := f(x, y) := u(x, y) + i v(x, y)$$. Given infinitesimal $$h = |dBx| = |dBy|$$, an $$h$$-homogeneous $$A \subseteq {}^{(\omega)}\mathbb{C}$$ and the NR $$B \subseteq {A}^{2}$$, $$F$$ is holomorphic for every $$z = x + i y \in A$$ and$r(h):=\frac{{\partial{}^{2}}Bf(x,y)}{\partial Bx\partial By\,}h$is infinitesimal if and only if the Cauchy-Riemann partial differential equations$\frac{{\partial Bu}}{{\partial Bx}} = \frac{{\partial Bv}}{{\partial By}},\,\,\frac{{\partial Bv}}{{\partial Bx}} = - \frac{{\partial Bu}}{{\partial By}},$are satisfied by $$B$$ in both the $$\curvearrowright$$ direction and the $$\curvearrowleft$$ direction.

Proof: Since\begin{aligned}F^{\prime}B(z) &= \frac{{F(z \pm \partial Bx) – F(z)}}{{\pm \partial Bx}} = \frac{{F(z \pm i\partial By) – F(z)}}{{\pm i\partial By}} = \frac{{F(z + dBz) – F(z)}}{{dBz}} = \frac{{\partial Bu}}{{\partial Bx}} + i\frac{{\partial Bv}}{{\partial Bx}} = \frac{{\partial Bv}}{{\partial By}} – i\frac{{\partial Bu}}{{\partial By}} \\ &= \frac{{u(x \pm \partial Bx,y) + i\,v(x \pm \partial Bx,y) – u(x,y) – i\,v(x,y)}}{{\pm \partial Bx}} = \frac{{\partial Bf}}{{\partial Bx}} = – i\frac{{\partial Bf}}{{\partial By}} \\ &= \frac{{u(x,y \pm \partial By) + i\,v(x,y \pm \partial By) – u(x,y) – i\,v(x,y)}}{{\pm i\partial By}} = \hat{2}\left( {\frac{{\partial Bf}}{{\partial Bx}} – i\frac{{\partial Bf}}{{\partial By}}} \right) = \frac{{\partial BF}}{{\partial Bz}}\end{aligned}and $$dBz = dBx + i dBy$$ for every derivative defined on $$A$$, it holds that\begin{aligned}&u(\curvearrowright Bx,y)-u(x,y)+u(x,\curvearrowright By)-u(x,y)+u(\curvearrowright Bx,\curvearrowright By)-u(\curvearrowright Bx,y)-u(x,\curvearrowright By)+u(x,y) \\ &=\frac{\partial Bu(x,y)}{\partial Bx}dBx+\frac{\partial Bu(x,y)}{\partial By}dBy+\frac{\partial Bu(\curvearrowright Bx,y)}{\partial By}dBy-\frac{\partial Bu(x,y)}{\partial By}dBy \\ &=\frac{\partial Bu(x,y)}{\partial Bx}dBx+\frac{\partial Bu(x,y)}{\partial By}dBy+\frac{{{\partial}^{2}}Bu(x,y)}{\partial Bx\partial By}dBxdBy = u(\curvearrowright Bx,\curvearrowright By)-u(x,y) =dBU(z)\end{aligned}giving the analogous formulas for $$v$$ and in the $$\curvearrowleft$$ direction, maybe dropping the final summand, and$F^{\prime}B(z)\,dBz = dBF(z) = dBU(z) + i\,dBV(z) = \,\left( {\begin{array}{*{20}{c}}{\frac{{\partial Bu}}{{\partial Bx}}} & {\frac{{\partial Bu}}{{\partial By}}}\\{i\frac{{\partial Bv}}{{\partial Bx}}} & {i\frac{{\partial Bv}}{{\partial By}}}\end{array}} \right)\left( {\begin{array}{*{20}{c}}{dBx}\\{dBy}\end{array}} \right) + \frac{{{\partial ^2}Bf(x,y)}}{{\partial Bx\partial By}}dBxdBy.\square$Remark: In particular, the 
final summand may be neglected whenever $$f$$ is continuous. The following necessary and sufficient condition is valid for $$F$$ to be holomorphic:$F^{\prime}B(\bar z) = \frac{{\partial Bf}}{{\partial Bx}} = i\frac{{\partial Bf}}{{\partial By}} = \hat{2}\left( {\frac{{\partial Bf}}{{\partial Bx}} + i\frac{{\partial Bf}}{{\partial By}}} \right) = \frac{{\partial BF}}{{\partial B\bar z}} = 0.$Green’s theorem: Given NRs $$B \subseteq {D}^{2}$$ for some $$h$$-domain $$D \subseteq {}^{(\omega)}\mathbb{R}^{2}$$, infinitesimal $$h = |dBx|= |dBy| = |\curvearrowright B \gamma(t) - \gamma(t)| = \mathcal{O}({\hat{\omega}}^{m})$$, sufficiently large $$m \in \mathbb{N}^{*}, (x, y) \in D, {D}^{-} := \{(x, y) \in D : (x + h, y + h) \in D\}$$, and a simple CP $$\gamma: [a, b[\rightarrow \partial D$$ followed anticlockwise, choosing $$\curvearrowright B \gamma(t) = \gamma(\curvearrowright A t)$$ for $$t \in [a, b[, A \subseteq {[a, b]}^{2}$$, the following equation holds for sufficiently $$\alpha$$-continuous functions $$u, v: D \rightarrow \mathbb{R}$$ with not necessarily continuous $$\partial Bu/\partial Bx, \partial Bu/\partial By, \partial Bv/\partial Bx$$ and $$\partial Bv/\partial By$$:$\int\limits_{\gamma }{(u\,dBx+v\,dBy)}=\int\limits_{(x,y)\in {{D}^{-}}}{\left( \frac{\partial Bv}{\partial Bx}-\frac{\partial Bu}{\partial By} \right)dB(x,y)}.$Proof: Only the case $$D := \{(x, y) : r \le x \le s, f(x) \le y \le g(x)\}, r, s \in {}^{(\omega)}\mathbb{R}, f, g : \partial D \rightarrow {}^{(\omega)}\mathbb{R}$$ is proved, since the proof is analogous for each case rotated by $$\iota$$. Every $$h$$-domain is a union of such sets. Simply showing$\int\limits_{\gamma }{u\,dBx}=-\int\limits_{(x,y)\in {{D}^{-}}}{\frac{\partial Bu}{\partial By}dB(x,y)}$is sufficient because the other relation is given analogously. 
Neglecting the regions of $$\gamma$$ with $$dBx = 0$$ and $$t := h(u(s, g(s)) - u(r, g(r)))$$ shows$-\int\limits_{\gamma }{u\,dBx}-t=\int\limits_{r}^{s}{u(x,g(x))dBx}-\int\limits_{r}^{s}{u(x,f(x))dBx}=\int\limits_{r}^{s}{\int\limits_{f(x)}^{g(x)}{\frac{\partial Bu}{\partial By}}dBydBx}=\int\limits_{(x,y)\in {{D}^{-}}}{\frac{\partial Bu}{\partial By}dB(x,y)}.\square$Fundamental theorem of algebra: Every non-constant polynomial $$p$$ over $${}^{(\omega)}\mathbb{C}$$ has at least one complex root.

Indirect proof: By performing an affine substitution of variables, reduce to the case $$1/p(0) \ne \mathcal{O}(\text{d0})$$. Suppose that $$p(z) \ne 0$$ for all $$z \in {}^{(\omega)}\mathbb{C}$$. Since $$f(z) := 1/p(z)$$ is holomorphic, it holds that $$f(1/\text{d0}) = \mathcal{O}(\text{d0})$$. By the mean value inequality $$|f(0)| \le {|f|}_{\gamma}$$ (cf. Remmert, Reinhold: Funktionentheorie 1; 3rd, improved ed.; 1992; Springer; Berlin, p. 160) for $$\gamma = \partial\mathbb{B}_{r}(0)$$ and arbitrary $$r \in {}^{(\omega)}\mathbb{R}_{>0}$$, and hence $$f(0) = \mathcal{O}(\text{d0})$$, which is a contradiction.$$\square$$
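The theorem can also be illustrated constructively: the Durand-Kerner iteration (a standard simultaneous root finder, not the proof technique above) locates all complex roots of a sample cubic at once:

```python
# Durand-Kerner sketch: find all roots of the monic polynomial
# p(z) = z^3 - 2z + 2 simultaneously; coefficients highest degree first.
def poly(coeffs, z):
    result = 0j
    for c in coeffs:          # Horner evaluation
        result = result * z + c
    return result

def durand_kerner(coeffs, iterations=200):
    n = len(coeffs) - 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]  # customary starting values
    for _ in range(iterations):
        new_roots = []
        for i, r in enumerate(roots):
            denom = 1 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    denom *= r - s
            new_roots.append(r - poly(coeffs, r) / denom)
        roots = new_roots
    return roots

roots = durand_kerner([1, 0, -2, 2])
print([abs(poly([1, 0, -2, 2], r)) for r in roots])  # all residuals ≈ 0
```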

Definition: When integrating identical paths in opposite (positive and negative) directions, the counter-directional rule for integrals is adopted: when the path is followed in the negative direction, the function value of the successor of the argument is chosen whenever the function is too discontinuous, so that the integral over both directions sums to 0 instead of taking a significantly different value. Like the following theorem, this may be applied to the complex numbers.$$\triangle$$

Counter-directional theorem: If the path $$\gamma: [a, b[ \, \cap \, C \rightarrow V$$ with $$C \subseteq \mathbb{R}$$ passes the edges of every $$n$$-cube of side length d0 in the $$n$$-volume $$V \subseteq {}^{(\omega)}\mathbb{R}^{n}$$ with $$n \in \mathbb{N}_{\ge 2}$$ exactly once, where the opposite edges in all two-dimensional faces of every $$n$$-cube are traversed in reverse direction, but uniformly, then, for $$D \subseteq \mathbb{R}^{2}, B \subseteq {V}^{2}, f = ({f}_{1}, …, {f}_{n}): V \rightarrow {}^{(\omega)}\mathbb{R}^{n}, \gamma(t) = x, \gamma(\curvearrowright D t) = \curvearrowright B x$$ and $${V}_{\curvearrowright } := \{\curvearrowright B x \in V: x \in V, \curvearrowright B x \ne \curvearrowleft B x\}$$, it holds that$\int\limits_{t \in [a,b[ \, \cap \, C}{f(\gamma (t)){{\gamma }_{\curvearrowright }^{\prime}}(t)dDt}=\int\limits_{\begin{smallmatrix} (x,\curvearrowright B\,x)\\ \in V\times {{V}_{\curvearrowright}} \end{smallmatrix}}{f(x)dBx}=\int\limits_{\begin{smallmatrix} t \in [a,b[ \, \cap \, C,\\ \gamma | {\partial{}^{\acute{n}}} V \end{smallmatrix}}{f(\gamma (t)){{\gamma}_{\curvearrowright }^{\prime}}(t)dDt}.$Proof: If two arbitrary squares with a common edge of length d0 lying in one plane are considered, then only the edges of $$V\times{V}_{\curvearrowright}$$ are not passed in both directions for the same function value. All of these edges, and thus the path to be passed, are contained exactly in $${\partial}^{\acute{n}}V.\square$$

Goursat’s integral lemma: If $$f \in \mathcal{O}(\Delta)$$ on a triangle $$\Delta \subseteq {}^{(\omega)}\mathbb{C}$$ but has no AD on $$\Delta$$, then$I:=\int\limits_{\partial \Delta }{f(\zeta )dB\zeta }=0.$Refutation of conventional proofs based on estimation by means of a complete triangulation: The direction in which $$\partial\Delta$$ is traversed is irrelevant. If $$\Delta$$ is fully triangulated, then wlog every minimal triangle $${\Delta}_{s} \subseteq \Delta$$ with vertices $$\kappa, \lambda$$ and $$\mu$$ must either satisfy${I_s}: = \int\limits_{\partial {\Delta _s}} {f(\zeta )dB\zeta } = f(\kappa)(\lambda - \kappa) + f(\lambda)(\mu - \lambda) + f(\kappa)(\kappa - \mu) = (f(\kappa) - f(\lambda))(\lambda - \mu) = 0$or\begin{aligned}\int\limits_{\partial {\Delta _s}} {f(\zeta )dB\zeta } &= f(\kappa)(\lambda - \kappa) + f(\lambda)(\mu - \lambda) + f(\mu)(\kappa - \mu) = (f(\kappa) - f(\lambda))\lambda + (f(\lambda) - f(\mu))\mu + (f(\mu) - f(\kappa))\kappa\\ &= f^{\prime}(\lambda)\left( {(\kappa - \lambda)\lambda - (\mu - \lambda)\mu + (\mu - \lambda)\kappa - (\kappa - \lambda)\kappa} \right) = f^{\prime}(\lambda)\left( {(\mu - \lambda)(\kappa - \mu) - {{(\kappa - \lambda)}^2}} \right) = 0.\end{aligned}By holomorphicity and cyclic permutations, this can only happen for $$f(\kappa) = f(\lambda) = f(\mu)$$. Considering all adjacent triangles in $$\Delta$$ shows that $$f$$ must be constant, which contradicts the assumptions. This is because the term in large brackets is translation-invariant: otherwise, setting $$\mu := 0$$ wlog makes this term 0, in which case $$\kappa = \lambda(1 \pm i\sqrt{3})/2$$ and $$|\kappa| = |\lambda| = |\kappa - \lambda|$$. However, since every horizontal and vertical line is homogeneous on $${}^{(\omega)}\mathbb{C}$$, this cannot happen:

Otherwise, the corresponding sub-triangle would be equilateral and not isosceles and right-angled. Therefore, in both cases, $$|{I}_{s}|$$ must be at least $$|f^{\prime}(\lambda) \mathcal{O}({\text{d0}}^{2})|$$, by selecting the vertices 0, |d0| and $$i|\text{d0}|$$ wlog. If $$L$$ is the perimeter of a triangle, then it holds that $$|I| \le {4}^{m} |{I}_{s}|$$ for an infinite natural number $$m$$, and also $${2}^{m} = L(\partial\Delta)/|\mathcal{O}({\text{d0}}^{2})|$$ since $$L(\partial\Delta) = {2}^{m} L(\partial{\Delta}_{s})$$ and $$L(\partial{\Delta}_{s}) = |\mathcal{O}({\text{d0}}^{2})|$$. It holds that $$|I| \le |f^{\prime}(\lambda) {L(\partial\Delta)}^{2}/\mathcal{O}({\text{d0}}^{2})|$$, causing the desired estimate $$|I| \le |\mathcal{O}(dB\zeta)|$$ to fail, for example if $$|f^{\prime}(\lambda) {L(\partial\Delta)}^{2}|$$ is larger than $$|\mathcal{O}({\text{d0}}^{2})|.\square$$

Remark: For $$\hat{\omega}$$ := 0, the main theorem of Cauchy’s theory of functions can be proven according to Dixon (as in loc. cit., p. 228 f.), since the limit there shall be 0 resp. $$\hat{r}$$ tends to 0 for $$r \in {}^{\omega}\mathbb{R}_{>0}$$ tending to $$\omega$$.

Cauchy’s integral theorem: Given the NRs $$B \subseteq {D}^{2}$$ and $$A \subseteq [a, b]$$ for some $$h$$-domain $$D \subseteq {}^{\omega}\mathbb{C}$$, infinitesimal $$h$$, $$f \in \mathcal{O}(D)$$ and a CP $$\gamma: [a, b[\rightarrow \partial D$$, choosing $$\curvearrowright B \gamma(t) = \gamma(\curvearrowright A t)$$ for $$t \in [a, b[$$ gives$\int\limits_{\gamma }{f(z)dBz}=0.$Proof: By the Cauchy-Riemann partial differential equations and Green’s theorem, with $$x := \text{Re} \, z, y := \text{Im} \, z, u := \text{Re} \, f, v := \text{Im} \, f$$ and $${D}^{-} := \{z \in D : z + h + ih \in D\}$$, it holds that$\int\limits_{\gamma }{f(z)dBz}=\int\limits_{\gamma }{\left( u+iv \right)\left( dBx+idBy \right)}=\int\limits_{z\in {{D}^{-}}}{\left( i\left( \frac{\partial Bu}{\partial Bx}-\frac{\partial Bv}{\partial By} \right)-\left( \frac{\partial Bv}{\partial Bx}+\frac{\partial Bu}{\partial By} \right) \right)dB(x,y)}=0.\square$Remark: The functions $$f(z) = \sum\limits_{k=1}^{\omega }{{{z}^{k}}{{{\hat{\omega }}}^{k+1}}}$$ and $$g(z) = \hat{\omega }z$$, which are (entire) in $${\mathbb{B}}_{\omega}(0) \subset {}^{\omega}\mathbb{C}$$, give counterexamples to Liouville’s (generalised) theorem and Picard’s little theorem because of $$|f(z)| < 1$$ and $$|g(z)| \le 1$$. The function $$f(\hat{z})$$ for $$z \in {\mathbb{B}}_{\omega}(0)^{*}$$ discounts Picard’s great theorem. The function $$b(z) := \hat{\nu}z$$ for $$z \in {\mathbb{B}}_{\nu}(0) \subset {}^{\nu}\mathbb{C}$$ maps the simply connected $${\mathbb{B}}_{\nu}(0)$$ holomorphically, but not necessarily injectively or surjectively, to $$\mathbb{D}$$. The Riemann mapping theorem must be corrected accordingly.

Definition: A point $${z}_{0} \in M \subseteq {}^{(\omega)}\mathbb{C}^{n}$$ or of a sequence $$({a}_{k})$$ for $${a}_{k} \in {}^{(\omega)}\mathbb{C}^{n}$$ and an (infinite) natural number $$k$$ is called a (proper) $$\alpha$$-accumulation point of $$M$$ or of the sequence, if the ball $$\mathbb{B}_{\alpha}({z}_{0}) \subseteq {}^{(\omega)}\mathbb{C}^{n}$$ with centre $${z}_{0}$$ and infinitesimal $$\alpha$$ contains infinitely many points from $$M$$ or infinitely many pairwise distinct members of the sequence. Let $$\alpha$$- be omitted for $$\alpha = \hat{\omega}.$$ Let $$\underline{u}_n := (u, …, u)^T \in{}^{\omega}\mathbb{C}^{n}.\triangle$$

Remark: Choose the pairwise distinct zeros $$c_k \in \mathbb{B}_{\hat{\omega}}(0) \subset \mathbb{D}$$ for $$z \in {}^{\omega}\mathbb{C}$$ in $$p(z) = \prod\limits_{k=0}^{\omega}{\left( z-c_k \right)}$$ in such a way that $$|f(c_k)| < \hat{\omega}$$ for $$f \in \mathcal{O}(D)$$ on a domain $$D \subseteq \mathbb{C}$$ where $$f(0) = 0$$. Let $$D$$ contain $$\mathbb{B}_{\hat{\omega}}(0)$$ completely, which a coordinate transformation always achieves provided that $$D$$ is sufficiently “large”. The coincidence set $$\{\zeta \in D : f(\zeta) = g(\zeta)\}$$ of $$g(z) := f(z) + p(z) \in \mathcal{O}(D)$$ contains an accumulation point at 0.

Since $$p(z)$$ can take every conventional complex number, the deviation between $$f$$ and $$g$$ is non-negligible. Since $$f \ne g$$, this contradicts the statement of the identity theorem, as does the (local) fact that all derivatives $${u}^{(n)}({z}_{0}) = {v}^{(n)}({z}_{0})$$ of two functions $$u$$ and $$v$$ can be equal at $${z}_{0} \in D$$ for all $$n$$ while $$u$$ and $$v$$ differ significantly further away yet remain holomorphic, since some holomorphic function has to be developed into a TS with approximated powers.

Examples of such $$f \in \mathcal{O}(D)$$ include functions with $$f(0) = 0$$ that are restricted to $$\mathbb{B}_{\hat{\omega}}(0)$$. Extending the upper limit from $$\omega$$ to $$|\mathbb{N}^{*}|$$ gives entire functions with an infinite number of zeros. The set of zeros is not necessarily discrete. Thus, the set of all functions $$f \in \mathcal{O}(D)$$ may contain zero divisors. Functions such as polynomials with $$n > 2$$ pairwise distinct zeros once again give counterexamples to Picard’s little theorem since they omit at least $$\acute{n}$$ values in $$\mathbb{C}$$.

Multinomial theorem: For $$z \in {}^{(\omega)}\mathbb{C}^{k}, n \in {}^{(\omega)}\mathbb{N}^{k}, \binom{m}{n} := \widehat{n_1! … {n}_k!}m!, z^n := z_1^{n_1} … z_k^{n_k}$$ and $$k, m \in {}^{\omega}\mathbb{N}^{*}$$, it holds that$\left({\underline{1}}_k^Tz\right)^m=\sum\limits_{\underline{1}_k^Tn=m}{\binom{m}{n}z^n}.$Proof: Case $$m = 1$$ is obvious. Induction step from $$m$$ to $$\grave{m}$$ for $$\grave{n} := n+(1,0, … ,0)^T$$ and $$\check{z}\ \in {}^{(\omega)}\mathbb{C}^{k}$$ gives:$\grave{m}\int_{0}^{\check{z}}{\left({\underline{1}}_k^Tz\right)^mdz_1}=\left({\underline{1}}_k^T\check{z}\right)^{\grave{m}}=\sum_{{\underline{1}}_k^T\grave{n}=\grave{m}}\binom{m}{n}{\check{z}}^{\grave{n}}.\square$Theorem (binomial series): From $$\alpha \in {}^{(\nu)}\mathbb{C}, \binom{\alpha}{n}:=\widehat{n!}\,\alpha(\alpha-1) \cdots (\alpha+1-n)$$ and $$\left|\binom{\alpha}{\grave{m}}/\binom{\alpha}{m}\right|<1$$ for all $$m \ge \nu$$ where $$\binom{\alpha}{0}:=1$$, it follows for $$z \in \mathbb{D}^\ll$$ that the TS centred on 0 satisfies${\grave{z}}^\alpha=\sum\limits_{n=0}^{\omega}{\binom{\alpha}{n}z^n}.\square$Remark: If the moduli of $$x \in \mathbb{C}$$, $$dx$$ or $$\widehat{dx}$$ have different orders of magnitude, the identity${{s}^{(0)}}(x):=\sum\limits_{m=0}^{n}{{{(-x)}^{m}}}=\frac{1-{{(-x)}^{\grave{n}}}}{\grave{x}}$yields by differentiation${{s}^{(1)}}(x)=-\sum\limits_{m=1}^{n}{m{{(-x)}^{\acute{m}}}}=\frac{\grave{n}{{(-x)}^{n}}-n{{(-x)}^{\grave{n}}}-1}{{{\grave{x}}^{2}}}.$The formulas above have sometimes been miscalculated. For sufficiently small $$x$$, and sufficiently, but not excessively, large $$n$$, the latter can be further simplified to $$-1/{\grave{x}}^{2}$$, and this remains valid when $$x \ge 1$$ is not excessively large. By successively multiplying $${s}^{(j)}(x)$$ by $$x$$ for $$j \in {}^{\omega}\mathbb{N}^{*}$$ and subsequently differentiating, further formulas can be derived for $${s}^{(j+1)}(x)$$, providing an example of divergent series. 
However, if $${s}^{(0)}(-x)$$ is integrated from 0 to 1 and $$n := \omega$$ is set, an integral expression for $${_e}\omega + \gamma$$ is obtained, where $$\gamma$$ denotes Euler’s constant.

L’Hôpital’s rule settles the case $$x = -1$$. Substituting $$y := -\acute{x}$$ yields, by the binomial series, a series with infinite coefficients (if $${_e}\omega$$ is also expressed as a series, even an expression for $$\gamma$$ is obtained). If the numerator of $${s}^{(0)}(x)$$ is illegitimately simplified, one risks incorrect results, especially when $$|x| \ge 1$$. For example, $${s}^{(0)}(-{e}^{i\pi})$$ is 0 for odd $$n$$ and 1 for even $$n$$, but not $$\hat{2}$$.
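The identity for $$s^{(0)}(x)$$ and its boundary values can be checked exactly with integer arithmetic:

```python
# Exact check of s^(0)(x) = sum_{m=0}^{n} (-x)**m = (1 - (-x)**(n+1))/(1 + x),
# including x = 1 (i.e. -x = e^{i*pi}), where the sum is 1 for even n and
# 0 for odd n, never 1/2.
def s0(x, n):
    return sum((-x) ** m for m in range(n + 1))

def s0_closed(x, n):
    return (1 - (-x) ** (n + 1)) / (1 + x)

print(s0(2, 10), s0_closed(2, 10))  # 683 and 683.0
print(s0(1, 4), s0(1, 5))           # 1 and 0
```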

Finiteness criterion for series: Let $$j, k, m, n \in \mathbb{N}$$. The modulus $$S_n := \left| \sum\limits_{k=0}^{n}{s_k} \right|$$ for $$s_k \in {}^{(\omega)}\mathbb{C}$$ is finite if and only if there is a monotonically decreasing sequence $$({d}_{j})$$ with $$d_j \in {}^{\nu}\mathbb{R}_{\ge 0}$$ such that $$S_n = \sum\limits_{j=0}^{m}{{i^{2j}}{{d}_{j}}}.$$

Proof: For $$0 \le S_n \le {d}_{0}$$, the claim follows directly from the ability to arbitrarily rearrange summands, sort them according to their signs and sizes, and recombine them or split them into separate sums.$$\square$$

Example: From the alternating harmonic series, it follows that$\sum\limits_{n=1}^{\omega }{{i^{2n}}}\left( \omega -\hat{n} \right)={_e}2.$Definition: For $${a}_{m}, {b}_{n} \in {}^{(\omega)}\mathbb{K}$$, the Cauchy product is corrected to the series product as follows:$\sum\limits_{m=1}^{\omega }{{{a}_{m}}}\sum\limits_{n=1}^{\omega }{{{b}_{n}}}=\sum\limits_{m=1}^{\omega }{\left( \sum\limits_{n=1}^{m}{\left( {{a}_{n}}{{b}_{m-\acute{n}}}+{{a}_{\omega -\acute{n}}}{{b}_{\omega -m+n}} \right)}-{{a}_{m}}{{b}_{\omega -\acute{m}}} \right)}.\triangle$Example: The following series product has the value (cf. Gelbaum, loc. cit., p. 61 f.):$\left(\sum_{m=1}^{\mathrm{\omega}}\frac{i^{2m}}{\sqrt m}\right)^2=\sum_{m=1}^{\mathrm{\omega}}{\left(\sqrt{\frac{\hat{m}}{\mathrm{\omega}-\acute{m}}}-\sum_{n=1}^{m}{i^{2m}\left(\sqrt{\frac{\hat{n}}{m-\acute{n}}}+\sqrt{\frac{\widehat{\mathrm{\omega}-\acute{n}}}{\mathrm{\omega}-m\ \mathrm{+\ }n}}\right)}\right)=0.36590…\ }\ \ \ll\frac{{\zeta\left(\hat{2}\right)}^2}{3+2\sqrt2}.$Example: The signum function sgn yields the following series product (cf. loc. cit., p. 62): $\sum\limits_{m=0}^{\omega }{{2}^{{{m}^{\text{sgn}(m)}}}}\sum\limits_{n=0}^{\omega}{\text{sgn}(n-\gamma)} = \acute{\omega}{2}^{\grave{\omega}}\gg -2.$Definition: Let $$f_n^*(z) = f(\eta_nz)$$ be the sisters of the TS $$f(z) \in \mathcal{O}(D)$$ centred on 0 on the domain $$D \subseteq {}^{\omega}\mathbb{C}$$ where $$m, n \in {}^{\omega}\mathbb{N}^{*}$$ and $$\eta_n^m := i^{2^{\lceil m/n \rceil}}$$. Then let $$\delta_n^*f = (f - f_n^*)/2$$ be the halved sister distances of $$f$$.
For $$\mu_n^m := m!n!/(m + n)!$$, $$\mu$$ and $$\eta$$ form a calculus, which can be resolved on the level of TS and allows an easy and finite closed representation of integrals and derivatives.$$\triangle$$
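The alternating-series example above, $$\sum i^{2n}(\omega - \hat{n}) = {_e}2$$, can be sketched with a finite even $$N$$ in place of $$\omega$$ (an assumption purely for illustration):

```python
import math

# Finite analogue: for even N, the N-parts of sum_{n=1}^{N} (-1)**n * (N - 1/n)
# cancel pairwise, leaving the alternating harmonic series, which
# approaches ln 2.
N = 10**4  # even
total = sum((-1) ** n * (N - 1 / n) for n in range(1, N + 1))
print(total, math.log(2))  # both ≈ 0.6931
```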

Speedup theorem for integrals: The TS (see below) $$f(z) \in \mathcal{O}(D)$$ centred on 0 on $$D \subseteq {}^{\omega}\mathbb{C}$$ gives for $$\grave{m}, n \in {}^{\omega}\mathbb{N}^*$$$\int\limits_0^z…\int\limits_0^{\zeta_2}{f(\zeta_1)\text{d}\zeta_1\;…\;\text{d}\zeta_n} = \widehat{n!} f(z\mu_n) z^n.\square$Example: For the TS $$f(x), g(x) \in {}^{\omega}\mathbb{R}$$, it holds that$\int\limits_0^x{f(v)\text{d}v}\int\limits_0^x\int\limits_0^{y}{g(v)\text{d}v\text{d}y} = \hat{2}f(x\mu_1)g(x\mu_2)x^3.$Speedup theorem for derivatives: For $$\mathbb{B}_{\hat{\nu}}(0) \subset D \subseteq {}^{\omega}\mathbb{C},$$ the TS$f(z):=f(0) + \sum\limits_{m=1}^{\omega }{\widehat{m!}\,{{f}^{(m)}}(0){z^m}},$$$b_n := \varepsilon^{-n}\,\acute{n}! = 2^j, j, n \in {}^{\omega}\mathbb{N}^{*}, \varepsilon \in ]0, r[, u :=e^{\hat{n} \tau i}$$ and $$f$$’s radius of convergence $$r \in {}^{\nu}{\mathbb{R}}_{>0}$$ imply${{f}^{(n)}}(0)=b_n\sum\limits_{k=1}^{n}{\delta_n^* f(\varepsilon u^k)}.$Proof: Taylor’s theorem (cf. loc. cit., p. 165 f.) and the properties of the roots of unity.$$\square$$
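A classical analogue of the speedup theorem for derivatives can be sketched with samples at $$N$$-th roots of unity instead of the sister calculus (an assumption of this sketch; the text's formula uses $$b_n$$ and $$\delta_n^*$$):

```python
import cmath
import math

# Roots-of-unity sketch: recover f^(n)(0) from samples on a small circle,
# f^(n)(0) ≈ n!/(N * eps**n) * sum_k f(eps*u**k) * u**(-k*n), u = e^{2*pi*i/N},
# with finite N and eps (classical trapezoidal rule on the Cauchy integral).
def deriv_at_0(f, n, N=64, eps=0.5):
    u = cmath.exp(2j * cmath.pi / N)
    total = sum(f(eps * u**k) * u**(-k * n) for k in range(N))
    return math.factorial(n) * total / (N * eps**n)

approx = deriv_at_0(cmath.exp, 3)
print(approx)  # ≈ 1, since exp'''(0) = 1
```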

Universal multistep theorem: For $$n \in {}^{\nu}\mathbb{N}_{\le p}, k, m, p \in {}^{\nu}\mathbb{N}^{*}, d_{\curvearrowright B} x \in\, ]0, 1[, x \in [a, b] \subseteq {}^{\omega}\mathbb{R}, y : [a, b] \rightarrow {}^{\omega}\mathbb{R}^q, f : [a, b]\times{}^{\omega}\mathbb{R}^{q \times n} \rightarrow {}^{\omega}\mathbb{R}^q, g_k(\curvearrowright B x) := g_{\acute{k}}(x)$$, and $$g_0(a) = f((\curvearrowleft B)a, y_0, … , y_{\acute{n}})$$, the TS of the initial value problem $$y^\prime(x) = f(x, y((\curvearrowright B)^0 x), … , y((\curvearrowright B)^{\acute{n}} x))$$ of order $$n$$ implies$y(\curvearrowright B x) = y(x) - d_{\curvearrowright B}x\sum\limits_{k=1}^{p}{i^{2k} g_{p-k}((\curvearrowright B) x)\sum\limits_{m=k}^{p}{\widehat{m!}\binom{\acute{m}}{\acute{k}}}} + \mathcal{O}((d_{\curvearrowright B} x)^{\grave{p}}).\square$Remark: Determine the $$f^{(n)}(a)$$ for $$a \in D$$ analogously from $$g(z) := f(z + a)$$. The identity instead of $$\delta_n^*$$ still provides arbitrarily precise approximations for the $$f^{(n)}$$. The last theorems are equally valid for multidimensional TS (with several sums) and Laurent series. By modular arithmetic (cf. Knuth, Donald Ervin: The Art of Computer Programming, Volume 2; 3rd ed.; 1997; Addison-Wesley; Reading, p. 302-311), the DFT form of the TS (e.g. from the Fourier series) of $$f$$ can be most precisely determined where $$q = (z - a)/\varepsilon$$ and $$k, m \in \mathbb{N}_{\le n}^{*}$$ holds:$f(z) = f(a) + \hat{n}(q^m )^T(u^{-km})(f(\varepsilon u^k + a)) + \mathcal{O}(\varepsilon^n).$Remark: Analogously defined $$m$$-dimensional DFT forms of the TS with $$\binom{m+n}{n}$$ derivatives have an error $$\mathcal{O}(\varepsilon^n)$$ instead of $$\mathcal{O}(\varepsilon)$$ in (numerically) solving (partial) differential equations where the effort is comparable. Since $$n$$-dimensional manifolds may be assembled from cuboids, Stokes’ theorem also holds for them as for more general differential forms (cf. Köhler, Günter: Analysis; 1st ed.; 2006; Heldermann; Lemgo, p. 625 f.).
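A classical finite-step relative of the universal multistep theorem is the two-step Adams-Bashforth scheme; the following sketch (a standard textbook method, not the text's formula) integrates $$y' = y$$:

```python
import math

# Two-step Adams-Bashforth sketch (classical p = 2 multistep scheme) for
# y' = f(x, y), tested on y' = y, y(0) = 1 with exact solution e^x;
# the first step is bootstrapped by one Euler step.
def adams_bashforth2(f, x0, y0, h, steps):
    xs, ys = [x0, x0 + h], [y0, y0 + h * f(x0, y0)]
    for k in range(1, steps):
        y_next = ys[k] + h * (1.5 * f(xs[k], ys[k]) - 0.5 * f(xs[k-1], ys[k-1]))
        xs.append(xs[k] + h)
        ys.append(y_next)
    return xs, ys

xs, ys = adams_bashforth2(lambda x, y: y, 0.0, 1.0, 1e-3, 1000)
print(ys[-1], math.exp(xs[-1]))  # ≈ e at x = 1
```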