Preliminary remarks: In the following section, the definitions established in the chapters on Set Theory and Topology are used; usually, take \(m, n \in {}^{\omega}\mathbb{N}^{*}\). Integration and differentiation are studied on an arbitrary non-empty subset \(A\) of \({}^{(\omega)}\mathbb{K}^{n}\). The mapping concept requires replacing every element not in the image set by the neighbouring element in the target set; if multiple choices are possible, one single choice is selected. The following may easily be generalised to other sets and norms.
Definition: The function \(||\cdot||: \mathbb{V} \rightarrow {}^{(\omega)}\mathbb{R}_{\ge 0}\) where \(\mathbb{V}\) is a vector space over \({}^{(\omega)}\mathbb{K}\) is called a norm, if for all \(x, y \in \mathbb{V}\) and \(\lambda \in {}^{(\omega)}\mathbb{K}\), it holds that: \(||x|| = 0 \Rightarrow x = 0\) (definiteness), \(||\lambda x|| = |\lambda| \; ||x||\) (homogeneity), and \(||x + y|| \le ||x|| + ||y||\) (triangle inequality). The dimension of \(\mathbb{V}\) is given by the maximal number of linearly independent vectors, and is denoted by dim \(\mathbb{V}\). The norms \({||\cdot||}_{a}\) and \({||\cdot||}_{b}\) are said to be equivalent if there exist \(s, t \in [\tilde{\nu}, \nu]\) such that \(s||x||{}_{b} \le ||x||{}_{a} \le t||x||{}_{b}\) for all \(x \in \mathbb{V}\). Let \(N\) be the set of all norms in \(\mathbb{V}.\triangle\)
Theorem: Norms are equivalent if and only if \({||x||}_{a}/{||x||}_{b} \in [\tilde{\nu}, \nu]\) for all \({||\cdot||}_{a}, {||\cdot||}_{b} \in N\) and all \(x \in \mathbb{V}^{*}\) by putting \(s := \text{min }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}\) and \(t := \text{max }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}.\square\)
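In finite dimension, the equivalence constants \(s\) and \(t\) can be exhibited concretely. The following minimal Python sketch (the choice of the 1-norm, the maximum norm, and the sample vectors are assumptions made purely for illustration) checks the bounds \(||x||_{\infty} \le ||x||_{1} \le n\,||x||_{\infty}\), i.e. \(s = 1\) and \(t = n\):

```python
# Sketch: equivalence of the 1-norm and the maximum norm on R^3.
# The sample vectors are arbitrary illustrative choices.

def norm1(x):
    return sum(abs(c) for c in x)

def norm_inf(x):
    return max(abs(c) for c in x)

n = 3
for x in [(1.0, -2.0, 3.0), (0.5, 0.5, 0.5), (-4.0, 0.0, 1.0)]:
    # s = 1 and t = n witness the equivalence of the two norms
    assert norm_inf(x) <= norm1(x) <= n * norm_inf(x)
```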
Minimality theorem: For every \(r \in \mathbb{R}\) with \(n \in \mathbb{N}\) positions after the floating point, the GS yields a unique \(b\)-adic expansion and min \(\{b \in \mathbb{R}_{>1}\} = 2\), where \((1 – b^{-n})/(1 – b^{-1}) < 2\) holds.\(\square\)
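The greedy digit extraction behind the GS argument can be sketched as follows; the helper name `b_adic_digits` and the sample value \(0.625\) are illustrative assumptions, not part of the text:

```python
from math import floor

def b_adic_digits(r, b, n):
    """Greedy b-adic expansion of r in [0, 1) to n places after the radix point."""
    digits, x = [], r
    for _ in range(n):
        x *= b
        d = floor(x)
        digits.append(d)
        x -= d
    return digits

# base 2: digits stay in {0, 1} and the expansion is unique
assert b_adic_digits(0.625, 2, 4) == [1, 0, 1, 0]

# the geometric sum (1 - b^(-n))/(1 - b^(-1)) stays below 2 for b = 2
assert (1 - 2.0 ** -10) / (1 - 2.0 ** -1) < 2
```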
Remark: For \(b \in \; ]1, 2[\), uniqueness is lost and the two digits (0 and 1) intended for the binary representation must both be used: a single digit only makes sense for base \(b = 1\).
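A hedged witness for the loss of uniqueness: with \(b\) the golden ratio (an illustrative choice, not taken from the text), \(b^{-1} + b^{-2} = 1\), so the digit strings “1” and “0.11” denote the same number although both use only the digits 0 and 1.

```python
# For b strictly between 1 and 2, b-adic digit strings are not unique:
# with b the golden ratio, b^(-1) + b^(-2) = 1 exactly, so "1" = "0.11".
b = (1 + 5 ** 0.5) / 2
assert 1 < b < 2
assert abs(b ** -1 + b ** -2 - 1) < 1e-12
```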
Definition: For the characteristic function \(\chi\), let \(\chi_A(a) := 1\) for \(a \in A\) and \(\chi_A(a) := 0\) for \(a \notin A\). Let \({}^{\pm}A := A \cup \{\pm\infty\}\) for \(A \subseteq \mathbb{K}\) and \(\infty \gg \tilde{\iota}^2\) as scalable constant. Furthermore, let sgn\((z) := \tilde{z}|z|\chi_{\mathbb{C}^*}(z)\) for complex \(z\) and sgn\((x) := \tilde{x}|x|\chi_{{}^{\pm}\mathbb{R}^*}(x)\) for real \(x\). The area or half of the circumference of \({}^1\dot{\mathbb{R}}^2\) gives pi \(\pi\). Euler’s number \(\epsilon\) (read briefly as “eps”) is defined as the solution of \({x}^{\underline{\pi}} = -1\). Then the logarithm function ln is given by \({\epsilon}^{\ln \, z} = z\) and the power function by \({z}^{s} = {\epsilon}^{s \, \ln \, z}\) for \(s, z \in \mathbb{C}\). This allows giving a formal definition of exponentiation.\(\triangle\)
Remark: If \(\pm0\) is replaced by \(\pm\widetilde{\infty}\), calculations become unique and consistent. The definition of \(\epsilon\) above yields a value \(\mathcal{O}(\tilde{\nu})\) larger than the definition by \({(1 + \tilde{\nu})}^{\nu}\) (calculate with approximations!): the exponential series justifies the former when it is differentiated exactly with as many terms as possible.
Definition: The function \({\mu}_{h}: A \rightarrow \mathbb{R}_{\ge 0}\), where \(A \subseteq {}^{(\omega)}\mathbb{C}^{n}\) is an \(m\)-dimensional set with \(h \in \mathbb{R}_{>0}\) less than or equal to the minimal distance of the points in \(A\), \(m \in {}^{\omega}\mathbb{N}_{\le \hat{n}}\), \({\mu}_{h}(A) := |A| {h}^{m}\) and \({\mu}_{h}(\emptyset) = |\emptyset| = 0\), is called the exact h-measure of \(A\), and \(A\) is said to be h-measurable. The exact standard measure is \({\mu}_{\iota}\) (the index \(\iota\) may be omitted).\(\triangle\)
Remark: Answering the measure problem positively, the union \(A\) of pairwise disjoint \(h\)-homogeneous sets \(A_i\) for \(i \in I \subseteq \mathbb{N}\) clearly results, additively and uniquely, in \({{\mu }_{h}}(A)={\LARGE{\textbf{+}}}_{i \in I}{{{\mu }_{h}}(A_i)}.\) Its strict monotonicity, \({\mu}_{h}({A}_{1}) < {\mu}_{h}({A}_{2})\), follows for \(h\)-homogeneous sets \({A}_{1}, {A}_{2} \subseteq {}^{(\omega)}\mathbb{K}^{n}\) satisfying \({A}_{1} \subset {A}_{2}\). If \(h\) is not equal on all considered sets \(A_i\), the minimum of all \(h\) is chosen and the homogenisation follows as described in Set Theory. In the following, let \(||\cdot||\) be the Euclidean norm.
Examples: Consider the set \(A \subset {[0, 1[}^n\) of points, whose least significant bit is 1 (0) in all \(n \in {}^{\omega}\mathbb{N}^{*}\) coordinates. Then \({\mu}_{\iota}(A) = \tilde{2}^n\). Since \(A\) is an infinite and conventionally uncountable union of individual points without the neighbouring points of \({[0, 1[}^n\) in \(A\), and these points are Lebesgue null sets, \(A\) is not Lebesgue measurable, however it is exactly measurable. Domains from \({}^{(\omega)} \mathbb{K}^{n}\) that are more densely pushed together have no smaller (larger) intersection (union) than previously.
Remark: The exact \(h\)-measure is optimal, since it only considers the NRs of points, i.e. in the extreme case distances of points parallel to the coordinate axes. Concepts such as \(\sigma\)-algebras and null sets are dispensable, since the empty set \(\emptyset\) is null set enough.
Definition: Neighbouring points in \(A\) are described by the irreflexive symmetric NR \(B \subseteq {A}^{2}\). The function \(\gamma: C \rightarrow A \subseteq \mathbb{C}{}^{n}\), where \(C \subseteq \mathbb{R}\) is \(h\)-homogeneous and \(h\) is infinitesimal, is called a path if \(||\gamma(x) - \gamma(y)||\) is infinitesimal and \((\gamma(x), \gamma(y)) \in B\) for all neighbouring points \(x, y \in C\). Let \({z}_{0} \in A \subseteq \mathbb{K}^{n}\) and \(f: A \rightarrow {}^{(\nu)}\mathbb{K}^{m}\). NRs are systematically written as (predecessor, successor) with the notation \(({z}_{0}, \overset{\rightharpoonup}{z}_{0})\) or \((\overset{\leftharpoonup}{z}_{0}, {z}_{0})\), pronouncing \(\rightharpoonup\) as “post” and \(\leftharpoonup\) as “pre”.\(\triangle\)
Definition: If \(||f(\overset{\rightharpoonup}{z}_{0}) - f({z}_{0})|| < \alpha\) for infinitesimal \(\alpha \in {}^{(\omega)}\mathbb{R}{}_{>0}\), then \(f\) is said to be \(\alpha\)-successor-continuous in \({z}_{0}\) in the direction \(\overset{\rightharpoonup}{z}_{0}\). If the exact modulus of \(\alpha\) does not matter, \(\alpha\) may be omitted from the notation. If \(f\) is \(\alpha\)-successor-continuous for all \({z}_{0}\) and \(\overset{\rightharpoonup}{z}_{0}\), it is simply called \(\alpha\)-continuous. Here \(\alpha\) is the degree of continuity. If the inequality only holds for \(\alpha = \tilde{\nu}\), \(f\) is simply called (successor-) continuous.\(\triangle\)
Example: The function \(f: \mathbb{R} \rightarrow \{\pm 1\}\) with \(f(x) = \underline{1}^{\hat{x}/\iota}\) is nowhere successor-continuous on \(\mathbb{R}\), but its modulus is (cf. Number Theory). Here, \(x/\iota\) is an integer since \(\mathbb{R}\) is \(\iota\)-homogeneous. Setting \(f(x) = 1\) for finite fractions \(x\) and \(f(x) = -1\) otherwise, \(f\) is partially \(\iota\)-successor-continuous on infinite fractions, unlike under the conventional notion of continuity.
Example of a Peano curve (cf. Walter, Wolfgang: Analysis 2; 5., erw. Aufl.; 2002; Springer; Berlin, p. 188): Consider the even, periodic function \(g: {}^{\omega}\mathbb{R} \rightarrow {}^{\omega}\mathbb{R}\) with period 2 in \(I := [0, 1]\) given by \(g(s) = \chi_{[1,\check{3}]}(\tilde{s}) + \chi_{]\check{3},3[}(\tilde{s})(3s - 1)\). Now let \(\phi: I \rightarrow {}^{\omega}\mathbb{R}^{2}\) be defined by\[\phi(s) = \left({\LARGE{\textbf{+}}}_{n = 0}^{\omega}{\tilde{2}^{\grave{n}}g(4^{\hat{n}}s)}, {\LARGE{\textbf{+}}}_{n = 0}^{\omega}{\tilde{2}^{\grave{n}}g(4^{\hat{n}+1}s)} \right).\]The function \(\phi\) is at least continuous, since the sums are ultimately locally linear functions in \(s\). It would however be an error to believe that \(I\) can be bijectively mapped onto \(I^2\) in this way: the powers of four in \(g\), and the values 0 and 1 taken by \(g\) on two sub-intervals, thin out \(I^2\) so much that a bijection is clearly impossible. Restricting the proof to finite fractions only is simply insufficient.
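A finite truncation of this construction can be computed directly. In the sketch below (the truncation depth is an assumption for illustration, since the text sums up to \(\omega\)), `g` implements the even, 2-periodic generator: 0 on \([0, \tfrac13]\), \(3s - 1\) on \(]\tfrac13, \tfrac23[\), and 1 on \([\tfrac23, 1]\):

```python
def g(s):
    """Even, 2-periodic generator: 0 on [0, 1/3], 3s - 1 on ]1/3, 2/3[, 1 on [2/3, 1]."""
    s = abs(s) % 2.0
    if s > 1.0:               # evenness plus period 2 folds [1, 2] back onto [0, 1]
        s = 2.0 - s
    if s <= 1.0 / 3.0:
        return 0.0
    if s < 2.0 / 3.0:
        return 3.0 * s - 1.0
    return 1.0

def phi(s, depth=12):
    """Truncation of phi(s) after `depth` terms; `depth` is an illustrative choice."""
    x = sum(2.0 ** -(n + 1) * g(4.0 ** (2 * n) * s) for n in range(depth))
    y = sum(2.0 ** -(n + 1) * g(4.0 ** (2 * n + 1) * s) for n in range(depth))
    return x, y

assert phi(0.0) == (0.0, 0.0)
assert all(0.0 <= c <= 1.0 for s in (0.1, 0.25, 0.5, 0.9) for c in phi(s))
```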
Definition: For \(f: A \rightarrow {}^{(\omega)}\mathbb{K}{}^{m}\), \({{\downarrow}}f(\overset{\rightharpoonup}{z}) := f(\overset{\rightharpoonup}{z}) - f(z)\) is called the successor-differential of \(f\) in the direction \(\overset{\rightharpoonup}{z}\) for \(z \in A\). If dim \(A = n\), then \({{\downarrow}}f(\overset{\rightharpoonup}{z})\) stands for a successor-derivative in every variable. Mixed differentials are specified by several arrows. If \(f(z) = z\), then \({{\downarrow}}\overset{\rightharpoonup}{z}\) can be written instead of \({\downarrow}f(\overset{\rightharpoonup}{z})\). When \(A\) or \(B\) is clear from the context, it remains unmentioned.\(\triangle\)
Definition: Read \({\downarrow}\) as “down”. If \(|f(\overset{\rightharpoonup}{x}) - f(x)| > \tilde{\omega}\) holds for \(x\) of \(f: A \subseteq {}^{\omega}\mathbb{R} \rightarrow {}^{\omega}\mathbb{R}\), then \(x\) is called a jump discontinuity. If the modulus of the successor-differential of \(f\) in the direction \(\overset{\rightharpoonup}{z}\) at \(z \in A\) is smaller than \(\alpha\) and infinitesimal, then \(f\) is also rated as \(\alpha\)-successor-continuous there. A function \(f: A \subseteq {}^{(\omega)}\mathbb{K}{}^{n} \rightarrow {}^{\omega}\mathbb{R}\) is said to be convex (concave) (written \(f \in Con(A)\)) if the line segment between any two points on the graph of the function lies above (below) or on the graph. It is called strictly convex (concave) if “or on” can be omitted.\(\triangle\)
Theorem: For \(A \subseteq {}^{(\omega)}\mathbb{K}{}^{n}\), all \(f \in Con(A)\) are \(\alpha\)-successor-continuous and successor-differentiable.\(\square\)
Definition: The \(m\) arithmetic means of all \({f}_{k}(\overset{\rightharpoonup}{z})\) of \(f(z)\) give the \(m\) averaged normed tangential normal vectors of \(m\) (uniquely determined) hyperplanes, defining the \(mn\) continuous derivatives of the Jacobian matrix of \(f\), which is not necessarily continuous. The hyperplanes are taken to pass through \({f}_{k}(\overset{\rightharpoonup}{z})\) and \(f(z)\) translated towards 0. The moduli of their coefficients are minimised by a quite simple linear programme (cf. Linear Programming).\(\triangle\)
Definition: For the (maybe dropped) control variable \(m \in \mathbb{N}_{\le n}^*\) and the concatenation operator \(\complement\), the derivative in the direction \(\overset{\rightharpoonup}{z}_{m}\) of \(F: A \rightarrow {}^{(\omega)}\mathbb{K}\) at \(z = \left (\complement_{(m=)1}^n\ z_m \right) := ({z}_{1}, …, {z}_{n}) \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\) is given by\[{\downarrow} F(z) / {\downarrow} {{z}_{m}}:=(F({{z}_{1}},\,…,\,{\overset{\rightharpoonup}{z}_{m}},\,…,\,{{z}_{n}})-F(z)) / ({\overset{\rightharpoonup}{z}_{m}}-{{z}_{m}}).\triangle\]Definition: The derivative of a function \(f: A \rightarrow {}^{(\omega)}\mathbb{R}\), where \(A \subseteq {}^{(\omega)}\mathbb{R}\), is said to be 0 if and only if 0 lies in the interval given by the boundaries of the left and right exact derivatives or where \(f\) is discontinuous. Let num\((x) = p \in \mathbb{Z}\) be the numerator function and den\((x) = |q| \in \mathbb{N}^*\) be the denominator function of \(x = p/q \in \mathbb{R}\) for coprime \(p\) and \(q\) (in short: \(p \perp q).\triangle\)
With this notation, if the function \(f\) satisfies \(f = \left(\complement_1^n\ f_m\right): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\) with \(z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\)
\(f(z)=\left( \tfrac{F\left({\overset{\rightharpoonup}{z}_{1}},\complement_2^n\ z_m\right)-F\left(\complement_1^n\ z_m\right)}{{\overset{\rightharpoonup}{z}_{1}}-{{z}_{1}}},…,\tfrac{F\left(\complement_1^{\acute{n}}\ z_m,{\overset{\rightharpoonup}{z}_{n}}\right)-F\left(\complement_1^n\ z_m\right)}{{\overset{\rightharpoonup}{z}_{n}}-{{z}_{n}}} \right)\)\(=\left( \tfrac{{\downarrow} F_1(z)}{{\downarrow} {{z}_{1}}},\,\,…\,\,,\,\,\tfrac{{\downarrow} {{F}_{n}}(z)}{{\downarrow}{{z}_{n}}} \right),\)
then \(f(z)=\nabla\,F(\overset{\rightharpoonup}{z})\) with the Nabla operator \(\nabla\) is said to be the exact successor-derivative \({}^1F(\overset{\rightharpoonup}{z})\) or the exact successor-gradient \(\text{grad }\,F(\overset{\rightharpoonup}{z})\) of the function \(F(z)\), which is called exactly differentiable at \(z\) in the direction \(\overset{\rightharpoonup}{z}\).
If this definition is satisfied for every \(z \in A\), then \(F\) is said to be an exactly differentiable AD of \(f\). If all directions have the same value, holomorphicity is obtained (\(A ={}^{\nu}\mathbb{C}\) and \(n = 1\) make \(F\) holomorphic). On a domain \(\mathbb{D}\), let \(\mathcal{O}(\mathbb{D}) \subseteq \mathcal{C}(\mathbb{D}) \subseteq \mathbb{C}\) be the ring of holomorphic resp. continuous functions.\(\triangle\)
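The successor-gradient is literally a forward difference quotient per variable, so it can be computed exactly in rational arithmetic. A minimal sketch (the function \(F(x, y) = x^2 y\), the grid step, and the helper name `succ_grad` are illustrative assumptions):

```python
from fractions import Fraction

def succ_grad(F, z, h):
    """k-th entry: (F(..., z_k + h, ...) - F(z)) / h, i.e. the forward
    difference along the k-th coordinate with uniform grid step h."""
    base = F(*z)
    grad = []
    for k in range(len(z)):
        zk = list(z)
        zk[k] += h
        grad.append((F(*zk) - base) / h)
    return grad

def F(x, y):
    return x * x * y

h = Fraction(1, 1000)
x, y = Fraction(3), Fraction(2)
# exactly: ((x+h)^2 y - x^2 y)/h = (2x + h) y and (x^2 (y+h) - x^2 y)/h = x^2
assert succ_grad(F, (x, y), h) == [(2 * x + h) * y, x * x]
```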
Chain rule: For \(x \in A \subseteq {}^{(\omega)}\mathbb{R}, B \subseteq {A}^{2}, g: A \rightarrow C \subseteq {}^{(\omega)}\mathbb{R}, D \subseteq {C}^{2}, f: C \rightarrow {}^{(\omega)}\mathbb{R}\), choosing \(g(\overset{\rightharpoonup}{x}) = \overset{\rightharpoonup}{g}(x)\), it holds that:\[{}^1(f \circ g)(x) = {}^1f(g(x))\ {}^1g(x).\]Proof:\[{}^1(f \circ g)(x)=\tfrac{f(g(\overset{\rightharpoonup}{x}))-f(g(x))}{g(\overset{\rightharpoonup}{x})-g(x)}\tfrac{g(\overset{\rightharpoonup}{x})-g(x)}{\overset{\rightharpoonup}{x}-x}=\tfrac{f(\overset{\rightharpoonup}{g}(x))-f(g(x))}{\overset{\rightharpoonup}{g}(x)-g(x)}\,{}^1g(x)={}^1f(g(x))\ {}^1g(x).\square\]Product rule: Adding and subtracting \(f(\overset{\rightharpoonup}{x}) g(x)\) or \(f(x) g(\overset{\rightharpoonup}{x})\) in the numerator yields\[{}^1(fg)(x) = {}^1f(x) g(x) + f(\overset{\rightharpoonup}{x})\ {}^1g(x)= {}^1f(x) g(\overset{\rightharpoonup}{x}) + f(x)\ {}^1g(x).\square\]Quotient rule: The same for \(f(x) g(x)\) and \(f(\overset{\rightharpoonup}{x}) g(\overset{\rightharpoonup}{x})\) yields, for denominators \(\ne 0\) of the following quotients,\[{}^1\left( \tfrac{f}{g} \right)(x)=\tfrac{{}^1f(x)\,g(x)-f(x)\ {}^1g(x)}{g(x)\,g(\overset{\rightharpoonup}{x})}=\tfrac{{}^1f(x)\,g(\overset{\rightharpoonup}{x})-f(\overset{\rightharpoonup}{x})\ {}^1g(x)}{g(x)\,g(\overset{\rightharpoonup}{x})}.\square\]Remark: Arguments and function values must belong to a smaller level of infinity than \(\tilde{\iota}\), and \(f\) and \(g\) must be sufficiently (\(\alpha\)-) continuous at \(x \in A\). That is, \(\alpha\) must be sufficiently small to allow \(\overset{\rightharpoonup}{x}\) to be replaced by \(x\). An analogous principle holds for infinitesimal arguments. The right exact derivative of the inverse function reads \({}^1(f^{-1})(y) = 1/{}^1f(x)\), from \(y = f(x)\) and the identity \(x = {f}^{-1}(f(x))\), by the chain rule and with the same precision.
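Both forms of the product rule hold exactly, with no remainder term, because the shifted factor \(f(\overset{\rightharpoonup}{x})\) resp. \(g(\overset{\rightharpoonup}{x})\) absorbs it. This can be verified in exact rational arithmetic; the functions, the grid step and the evaluation point below are illustrative assumptions:

```python
from fractions import Fraction

h = Fraction(1, 997)                 # grid step: post(x) = x + h

def f(x):
    return x * x + 1

def g(x):
    return 3 * x - 2

def d(fun, x):                       # exact successor-derivative
    return (fun(x + h) - fun(x)) / h

x = Fraction(5, 7)
lhs = d(lambda t: f(t) * g(t), x)
# both stated forms hold exactly, not merely up to O(h)
assert lhs == d(f, x) * g(x) + f(x + h) * d(g, x)
assert lhs == d(f, x) * g(x + h) + f(x) * d(g, x)
```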
L’Hôpital’s rule makes sense for (\(\alpha\)-) continuous functions \(f\) and \(g\), and follows for \(f(v) = g(v) = 0\) where \(v \in A\) and \(g(\overset{\rightharpoonup}{v}) \ne 0\) from\[f(\overset{\rightharpoonup}{v}) / g(\overset{\rightharpoonup}{v})=(f(\overset{\rightharpoonup}{v})-f(v)) / (g(\overset{\rightharpoonup}{v})-g(v))={}^1f(v) / {}^1g(v).\]Remark: If the exact derivative can be replaced by \({}^1F(\overleftrightarrow{v})\,:=\,(F(\overset{\rightharpoonup}{v})-F(\overset{\leftharpoonup}{v}))/(\overset{\rightharpoonup}{v} - \overset{\leftharpoonup}{v})\) (numerator \(\ne 0\)), this has the advantage of viewing \({}^1F(\overleftrightarrow{v})\) as the “tangent slope” at the point \(v\), especially when \(F\) is \(\alpha\)-continuous at \(v\). This applies all the more when \(\overset{\rightharpoonup}{v} - v = v - \overset{\leftharpoonup}{v}\) and the combined derivatives both have the same sign. An analogous extension to (conventional) complex numbers exists.
Definition: Given \(z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\), \({\uparrow}_{z\in A}{f(z){\downarrow}z:={\LARGE{\textbf{+}}}_{z\in A}{f(z)(\overset{\rightharpoonup}{z}-z)}}\)
is called the exact integral of the vector field \(f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\) on \(A\) and \(f(z)\) is said to be exactly integrable. If this requires removing at least one point from \(A\), then the exact integral is called improper. Read \({\uparrow}\) as “up”. For \(\gamma: G = [a, b[ \, \cap \, C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}^{n}, C \subseteq \mathbb{R}\) and \(f = ({f}_{1}, …, {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\),\[{\uparrow}_{\gamma }{f(\zeta){\downarrow}\zeta} = {\uparrow}_{s \in G}{f(\gamma (s)){{}^1\gamma}(s){\downarrow}s}\]where \({\downarrow}s > 0, \overset{\rightharpoonup}{s} \in \; ]a, b] \cap C\), choosing \(\overset{\rightharpoonup}{\gamma}(s) = \gamma(\overset{\rightharpoonup}{s})\), since \(\zeta = \gamma(s)\) and \({\downarrow}\zeta = \gamma(\overset{\rightharpoonup}{s}) – \gamma(s) = {{}^1\gamma}(s) {\downarrow}s\) (i.e. for \(C = \mathbb{R}, B\) maximal in \(\mathbb{C}^{2}\), and \(D\) maximal in \(\mathbb{R}^{2})\), is called the exact LI of the vector field \(f\) along the path \(\gamma\). Improper exact LIs are founded analogously to exact integrals, except that only interval end points may be removed from \(G\). Read \(u{\upharpoonright}_k := u_k\) in \(\left(\complement_1^n\ u_m\right)^T \in{}^{\omega}\mathbb{K}^{n}\) as “u proj k”.\(\triangle\)
Theorem (improving Froda’s theorem): A monotone function \(f: [a, b] \rightarrow {}^{\omega}\mathbb{R}\) has at most \(2\omega^2 - 1\) jump discontinuities, since at most \(2\omega^2\) jumps of height \(\tilde{\omega}\) are possible between \(-\omega\) and \(\omega\) if the function, like a step function, does not decrease at its non-discontinuities.\(\square\)
Laisant’s theorem: For \(c \in {}^{\omega}\mathbb{R}\), the product rule yields \(\left. {\uparrow}{f(x)}{\downarrow}x\right |_{f^{-1}(y)} = {\uparrow}y \tfrac{{\downarrow}x}{{\downarrow}y}{\downarrow}y = y f^{-1}(y) – {\uparrow}{f^{-1}(y)}{\downarrow}y + c.\square\)
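A numeric sketch of this formula with the illustrative choice \(f(x) = x^3\) on \([0, X]\) (all names and values below are assumptions): the areas under \(f\) and under \(f^{-1}\) together tile the rectangle \([0, X] \times [0, f(X)]\).

```python
from math import isclose

# Laisant sketch for f(x) = x^3, f^(-1)(y) = y^(1/3), c = 0.
X = 2.0
Y = X ** 3                                   # f(X) = 8
area_f = X ** 4 / 4                          # integral of x^3 from 0 to X
area_f_inv = 3.0 / 4.0 * Y ** (4.0 / 3.0)    # integral of y^(1/3) from 0 to Y
assert isclose(area_f, Y * X - area_f_inv)   # y f^(-1)(y) - integral of f^(-1)
```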
Remark: The (linear) exact LI of \(f\) on \({}^{(\nu)}\mathbb{K}\) does not require \(f\) to be continuous; it always exists and is usually consistent with the conventional LI. It is linear and monotone in the (conventional) (infinite) real case.
Intermediate value theorem: Let \(f: [a, b] \rightarrow {}^{(\omega)}\mathbb{R}\) be \(\alpha\)-continuous on \([a, b]\). Then \(f(x)\) takes, for \(x \in [a, b]\), every value between min \(f(x)\) and max \(f(x)\) with precision \(< \alpha\). If \(f\) is continuous in \({}^{\omega}\mathbb{R}\), it takes every value of \({}^{\nu}\mathbb{R}\) between min \(f(x)\) and max \(f(x)\).
Proof: A gapless chain of overlapping \(\alpha\)-environments with the values \(f(x)\) as centres exists between min \(f(x)\) and max \(f(x)\), since otherwise there would be a contradiction to the \(\alpha\)-continuity of \(f\). The second part of the claim follows from the fact that a deviation \(|f(\overset{\rightharpoonup}{x}) - f(x)| < \tilde{\nu}\) or \(|f(x) - f(\overset{\leftharpoonup}{x})| < \tilde{\nu}\) in \({}^{\nu}\mathbb{R}\) falls below the maximally permitted conventional resolution.\(\square\)
Remark to the extreme value theorem: The continuous function \(f(x) := \hat{\omega} \sin(\omega x)\) attains for \(x \in [-1, 1]\) the minima \(-\hat{\omega}\) and the maxima \(\hat{\omega}\) as infinite values.
Example: The \(\hat{\iota}\)-continuous function \(f: {}^{(\omega)}\mathbb{R} \rightarrow \{0, \iota\}\) defined by \(f(x):=\check{\iota}(\underline{1}^{\hat{x}/{\iota}}+1)\) consists of only the local minima 0 and the local maxima \(\iota\), and has the left and right exact derivatives \(\pm 1\).
Examples: In Gelbaum, Bernard R.; Olmsted, John M. H.: Counterexamples in Analysis; Republ., unabr., slightly corr.; 2003; Dover Publications; Mineola, New York, p. 160, it holds that \(r_1 = r_2 = 3.\) For \(q =\) den\((x)\) and \(f(x) := -\chi_{{}_{\omega}^{\omega}\mathbb{R}}(x)\underline{1}^{\hat{q}} \acute{q}/q\), the function \(f: [0, 1] \rightarrow [\acute{\iota}, -\acute{\iota}]\) has the two relative extrema \(\pm \acute{\iota}\) (cf. ib., p. 24).
Definition: For all \(x \in V\) of an \(h\)-homogeneous \(n\)-volume \(V \subseteq [{a}_{1}, {b}_{1}] \times…\times [{a}_{n}, {b}_{n}] \subseteq {}^{(\omega)}\mathbb{R}^{n}\) with \(B = {B}_{1}\times…\times{B}_{n}, {B}_{k} \subseteq {[{a}_{k}, {b}_{k}]}^{2}\) and \(|\downarrow x_k| = h\) for all \(k \in \mathbb{N}_{\le n}^*\) such that \(f(x) := 0\) for all \(x \in {}^{(\omega)}\mathbb{R}^{n} \setminus V,\)\[{\uparrow}_{x \in V}{f(x){{\downarrow}x}}:={\uparrow}_{x\in V}{f(x){\downarrow}({{x}_{1}},\,…,{{x}_{n}})}:={\uparrow}_{{{a}_{n}}}^{{{b}_{n}}}{…{\uparrow}_{{{a}_{1}}}^{{{b}_{1}}}{f(x){\downarrow}{{x}_{1}}\,…\,{\downarrow}{{x}_{n}}}}\]is called the exact volume integral of the volume integrable function \(f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}.\triangle\)
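On an \(h\)-homogeneous grid the volume integral is a finite iterated sum. A midpoint-grid sketch (the grid size and the integrand \(f(x, y) = xy\) are illustrative assumptions) reproducing the exact value \(\tfrac14\) of the integral of \(xy\) over \([0, 1[^2\):

```python
# Discrete volume integral as an iterated sum over a midpoint grid.
N = 200
h = 1.0 / N
total = sum(((i + 0.5) * h) * ((j + 0.5) * h) * h * h
            for i in range(N) for j in range(N))
assert abs(total - 0.25) < 1e-9   # exact value of the integral of x*y over [0,1]^2
```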
Remark: The isomorphism between \(\mathbb{C}\) and \(\mathbb{R}^{2}\) provides something similar for the complex case, and \({\uparrow}_{x \in V}1{{\downarrow}x={{\mu }_{h}}(V)}.\)
Example: Using the exact volume integral in contrast to the Lebesgue integral,\[||f|{{|}_{p}}:={{\left( {\uparrow}_{x \in V}{||f(x)|{{|}^{p}}{\downarrow}x} \right)}^{\tilde{p}}}\]satisfies, for arbitrary \(f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}^{m}\) and \(p \in [1, \omega]\), all the properties of a norm, including definiteness.
Example: Let \([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z} \ne \emptyset\) be an \(h\)-homogeneous subset of \([a, b[ \, \subseteq {}^{\omega}\mathbb{R}\), and write \(B \subseteq ([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}) \times (]a, b] \, \cap \, h{}^{\omega}\mathbb{Z})\). Now let \(T\) be an AD of a not necessarily convergent TS \(t\) on \([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}\) and define \(f(x) := t(x) + \varepsilon \underline{1}^{\hat{x}/h}\) for conventionally real \(x\) and \(\varepsilon \ge \tilde{\nu}\). For \(h = \tilde{\nu}\), \(f\) is nowhere continuous, and thus is conventionally nowhere differentiable or integrable on \([a, b[ \, \cap \, h{}^{\omega}\mathbb{Z}\), but for all \(h\) it holds that\[{}^1f(x)={}^1t(x)-\widetilde{{\downarrow}x}\hat{\varepsilon}{\underline{1}^{\hat{x}/h}}\]and\[{\uparrow}_{x\in [a,b[ \, \cap \, h{}^{\omega }\mathbb{Z}}{f(x){\downarrow}x=T(b) - T(a)+\,}\check{\varepsilon} \left( {\underline{1}^{\hat{a}/h}}-{\underline{1}^{\hat{b}/h}} \right).\]Example: The conventionally non-measurable middle-thirds Cantor set \({C}_{\tilde{3}}\) has measure \({\mu}_{\iota}({C}_{\tilde{3}}) = \check{3}^{-\omega}\). Consider the function \(c: [0, 1] \rightarrow \{0, {\check{3}}^{\omega}\}\) defined by \(c(x) = {\check{3}}^{\omega}\chi_{C_{\tilde{3}}}(x)\). Then\[{\uparrow}_{x \in [0, 1]}{c(x){\downarrow}x={\LARGE{\textbf{+}}}_{x=0}^{1}{c(x){\downarrow}x}}={{\check{3}}^{\omega}}{{\mu }_{\iota}}\left( {{C}_{\tilde{3}}} \right)=1.\]Definition: A sequence \(({a}_{k})\) with members \({a}_{k}\) is a mapping from \({}^{(\omega)}\mathbb{Z}\) to \({}^{(\omega)}\mathbb{C}^{m}: k \mapsto {a}_{k}\).
A series is a sequence \(({s}_{k})\) with \(m \in {}^{(\omega)}\mathbb{Z}\), radius of convergence \(r\) and partial sums \({{s}_{k}}={\LARGE{\textbf{+}}}_{j=m}^{k}{{{a}_{j}}}.\) A sequence \(({a}_{k})\) with \(k \in {}^{(\omega)}\mathbb{N}^{*}, {a}_{k} \in {}^{(\omega)}\mathbb{C}\) and \(\alpha \in ]0, \tilde{\nu}]\) is called \(\alpha\)-convergent to \(a \in {}^{(\omega)}\mathbb{C}\) if there exists \(m \in {}^{(\omega)}\mathbb{N}^{*}_{\le k}\) where \(|{a}_{k} – a| < \alpha\) for all \({a}_{k}\) such that \(k – m\) is not too small. The set \(\alpha\)-\(A\) of all such \(a\) is called set of \(\alpha\)-limit values of \(({a}_{k})\). A uniquely determined representative of this set (e.g. the final value or mean value) is called the \(\alpha\)-limit value \(\alpha\)-\(a\). For the case \(a = 0\), the sequence is called a zero sequence. If the inequality only holds for \(\alpha = \tilde{\nu}\), the \(\alpha\)- is omitted. Usually, \(k\) will be chosen maximal and \(\alpha\) minimal.
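Returning to the middle-thirds Cantor set above: its measure \({\mu}_{\iota}({C}_{\tilde{3}}) = \check{3}^{-\omega} = (2/3)^{\omega}\) agrees with the finite stages of the construction. A sketch (the stage count is an illustrative assumption):

```python
# After k construction steps, 2^k intervals of length 3^(-k) survive,
# so the remaining total length is (2/3)^k; with k = omega this is the
# exact measure of the middle-thirds Cantor set.
def cantor_stage_length(k):
    return 2 ** k * 3.0 ** -k

assert cantor_stage_length(0) == 1.0
assert abs(cantor_stage_length(20) - (2 / 3) ** 20) < 1e-12
```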
Example: The alternating harmonic series implies \({\LARGE{\textbf{$\pm$}}}_{n=1}^{\omega }{\left( \tilde{n} – \omega \right)}={_\epsilon}2.\)
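The conventional part of this value is the natural logarithm \(\ln 2\); partial sums confirm it to finite precision. The truncation index below is an illustrative assumption, and the error bound is the standard alternating-series estimate:

```python
from math import log

N = 100_000
s = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
# the truncation error is bounded by the first omitted term 1/(N + 1)
assert abs(s - log(2)) < 1.0 / N
```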
Remark: Conventional limit values are hardly more precise than \(\mathcal{O}(\tilde{\omega})\). Their actual transcendence or algebraicity is seldom considered! To avoid the exclusive relevance of the largest index of each sequence (cf. Heuser, Harro: Lehrbuch der Analysis Teil 1; 17., akt. Aufl.; 2009; Vieweg + Teubner; Wiesbaden, p. 144), the conventional definition requires the amendment that infinitely many or almost all members of the sequence have an arbitrarily small distance from the limit value; only finitely many may have a larger distance. Then only monotone convergence remains valid (cf. loc. cit., p. 155).
Remark: The fundamental theorem of set theory makes the representation of each positive number by a determined, unique, infinite decimal fraction baseless (cf. loc. cit., p. 27 f.). Putting \(\varepsilon := \iota\) shows that any proof claiming that for every \(\varepsilon \in {}^{(\omega)}\mathbb{R}_{>0}\), especially for all \(\varepsilon \in {}^{(\nu)}\mathbb{R}_{>0}\), there exists a real number \(\varepsilon\tilde{r}\) with real \(r \in {}^{(\omega)}\mathbb{R}_{>1}\) is false; otherwise, an infinite regression may occur. The \(\varepsilon\delta\)-definition of the limit value (it is questionable that \(\delta\) exists; cf. loc. cit., p. 235 f.) requires \(\varepsilon\) to be a specific multiple of \(\iota\), which makes the corresponding definition of continuity also true (see loc. cit., p. 215 f.).
Remark: Consider for example the real function that doubles every real value but is not even uniformly continuous. Uniform continuity need not be considered separately, since in general \(\delta := \iota\) may be chosen and \(\varepsilon\) taken accordingly larger. If two function values do not satisfy the conditions, then the function is not continuous at that point. Thus continuity is equivalent to uniform continuity, by choosing the largest \(\varepsilon\) among all admissible infinitesimal values. Just as easily, continuity is equivalent to Hölder continuity.
Remark: Here infinite real constants may be allowed. The same is true for uniform convergence, since the maximum of the indices may be chosen so that this index satisfies everything for each argument, and \(\acute{\omega}\) is sufficient in every case; otherwise, pointwise convergence also fails. Thus uniform convergence is equivalent to pointwise convergence, by choosing the largest of all admissible infinitesimal values.
Fubini’s theorem: For \(X, Y \subseteq {}^{(\omega)}\mathbb{K}\) and \(f: X\times Y \rightarrow {}^{(\omega)}\mathbb{K}\), a reordering of integral sums shows\[{\uparrow}_{Y}{{\uparrow}_{X}{f(x,\,y){\downarrow}x\,}{\downarrow}y}={\uparrow}_{X\times Y}{f(x,\,y){\downarrow}(x,\,y)}={\uparrow}_{X}{{\uparrow}_{Y}{f(x,\,y){\downarrow}y\,}{\downarrow}x}.\square\]Example: By the principle of latest substitution (see below), putting \(r_{\pm}^2 := x^2 \pm y^2\) yields\[{\uparrow}_{[a,\,b[\times [c,\,d[}\tilde{r}_+^4r_-^2{\downarrow}(x,\,y)={\uparrow}_{a}^{b}{\left. \tilde{r}_+^2y{\downarrow}x \right|_{c}^{d}}=-{\uparrow}_{c}^{d}{\left. \tilde{r}_+^2x{\downarrow}y \right|_{a}^{b}}=\arctan \tfrac{c}{b}-\arctan \tfrac{d}{b}+\arctan \tfrac{d}{a}-\arctan \tfrac{c}{a}\]and hence for the (improper) integral\[I(a,b):={\uparrow}_{[a,\,b{{[}^{2}}}{\tilde{r}_+^4r_-^2}{\downarrow}(x,\,y)=\arctan \tfrac{a}{b}-\arctan \tfrac{b}{b}+\arctan \tfrac{b}{a}-\arctan \tfrac{a}{a}= \check{\pi} - \check{\pi} =0\]and not\[I(0,1)={\uparrow}_{0}^{1}{\uparrow}_{0}^{1}{\tilde{r}_+^4r_-^2}{\downarrow}y\,{\downarrow}x={\uparrow}_{0}^{1}{\tfrac{{\downarrow}x}{1+{{x}^{2}}}}=\tfrac{\pi}{4}\ne -\tfrac{\pi}{4}=-{\uparrow}_{0}^{1}{\tfrac{{\downarrow}y}{1+{{y}^{2}}}}={\uparrow}_{0}^{1}{{\uparrow}_{0}^{1}{\tilde{r}_+^4r_-^2}{\downarrow}x\,{\downarrow}y}=I(0,1).\]Exchange theorem: (Transfinite) induction shows that the result of multiple derivatives of a function \(f: A \rightarrow {}^{(\omega)}\mathbb{K}\) is independent of the order of differentiation, provided that variables are only evaluated and limits are only computed at the end, if applicable (principle of latest substitution).\(\square\)
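The disagreement of the two improper iterated integrals, against the vanishing of the symmetric double sum, can be checked numerically. The grid size below is an illustrative assumption; the midpoint grid avoids the singularity at the origin:

```python
from math import pi

def f(x, y):
    return (x * x - y * y) / (x * x + y * y) ** 2

N = 400
h = 1.0 / N
pts = [(i + 0.5) * h for i in range(N)]

# on a symmetric grid the double sum vanishes, since f(y, x) = -f(x, y)
square = sum(f(x, y) * h * h for x in pts for y in pts)
assert abs(square) < 1e-9

# inner integral in y in closed form: [y/(x^2 + y^2)] from 0 to 1 = 1/(1 + x^2),
# whose integral over [0, 1] is pi/4 -- the value of one iterated integral
iterated = sum(h / (1 + x * x) for x in pts)
assert abs(iterated - pi / 4) < 1e-4
```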
Example: For \(f: {}^{\omega}\mathbb{R}^{2} \rightarrow {}^{\omega}\mathbb{R}, f(0, 0) = 0\) and \(f(x, y) = \tilde{r}_+^2{xy}^{3}\) where \(r_{\pm}^2 := x^2 \pm y^2\), it holds that\[\tfrac{{{{\downarrow} ^2}f}}{{{\downarrow} x{\downarrow} y}} = \tilde{r}_+^6({y^6} + 6{x^2}{y^4} - 3{x^4}{y^2}) = \tfrac{{{{\downarrow} ^2}f}}{{{\downarrow} y{\downarrow} x}}\]with value \(\tilde{2}\) at the point (0, 0), even though the left-hand side equals \(y\) for \(x = 0\) and the right-hand side equals 0 for \(y = 0\) in\[\tfrac{{{\downarrow} f}}{{{\downarrow} x}} = -\tilde{r}_+^4r_-^2y^3 \ne \tilde{r}_+^4(x{y^4} + 3{x^3}{y^2}) = \tfrac{{{\downarrow} f}}{{{\downarrow} y}},\]so that differentiating each with respect to the other variable gives \(1\) on the left and \(0\) on the right.
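On the grid the exchange theorem is visible directly: both orders of the mixed successor-differential produce the same expression \(f(x{+}h, y{+}h) - f(x{+}h, y) - f(x, y{+}h) + f(x, y)\), so equality holds exactly even at \((0, 0)\), where the value \(\tilde{2}\) appears. The grid step below is an illustrative assumption:

```python
from fractions import Fraction

def f(x, y):
    # the example's function, with f(0, 0) = 0
    return x * y ** 3 / (x * x + y * y) if (x, y) != (0, 0) else Fraction(0)

def mixed_xy(x, y, h):   # difference in x first, then in y
    return ((f(x + h, y + h) - f(x, y + h)) - (f(x + h, y) - f(x, y))) / (h * h)

def mixed_yx(x, y, h):   # difference in y first, then in x
    return ((f(x + h, y + h) - f(x + h, y)) - (f(x, y + h) - f(x, y))) / (h * h)

h = Fraction(1, 100)
zero = Fraction(0)
assert mixed_xy(zero, zero, h) == mixed_yx(zero, zero, h) == Fraction(1, 2)
```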
First fundamental theorem of exact differential and integral calculus for LIs: The function \(F(z)={\uparrow}_{\gamma }{f(\zeta ){\downarrow}\zeta }\) where \(\gamma: [d, x[ \, \cap \, C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}, C \subseteq \mathbb{R}, f: A \rightarrow {}^{(\omega)}\mathbb{K}, d \in G = [a, b[ \, \cap \, C\), and choosing \(\overset{\rightharpoonup}{\gamma}(x) = \gamma(\overset{\rightharpoonup}{x})\) is exactly differentiable, and \({}^1F(z) = f(z)\) holds for all \(x \in G\) and \(z = \gamma(x)\).
Proof:\({\downarrow}F(z)\) \(={\uparrow}_{s\in [d,x] \cap C}{f(\gamma (s)){{\ {}^1\gamma}}(s){\downarrow}s}-{\uparrow}_{s\in [d,x[ \, \cap \, C}{f(\gamma (s)){{\ {}^1\gamma}}(s){\downarrow}s}\) \(={\uparrow}_{x}{f(\gamma (s))\tfrac{\gamma (\overset{\rightharpoonup}{s})-\gamma (s)}{\overset{\rightharpoonup}{s}-s}{\downarrow}s}\) \(=f(\gamma (x)){{\ {}^1\gamma}}(x){\downarrow}x=\) \(\,f(\gamma (x))(\overset{\rightharpoonup}{\gamma}(x)-\gamma (x))\) \(=f(z){\downarrow}z.\square\)
Second fundamental theorem of exact differential and integral calculus for LIs: Conditions above imply with \(\gamma: G \rightarrow {}^{(\omega)}\mathbb{K}\) that\[F(\gamma (b))-F(\gamma (a))={\uparrow}_{\gamma }{{}^1F}(\zeta ){\downarrow}\zeta.\]Proof: \(F(\gamma (b))-F(\gamma (a))\) \(={\LARGE{\textbf{+}}}_{s\in G}{F(\overset{\rightharpoonup}{\gamma}(s))}-F(\gamma (s))\) \(={\LARGE{\textbf{+}}}_{s\in G}{{{}^1F}(\gamma (s))(\overset{\rightharpoonup}{\gamma}(s)-\gamma (s))}\) \(={\uparrow}_{s\in G}{{}^1F}(\gamma (s))\ {}^1{\gamma}(s){\downarrow}s\) \(={\uparrow}_{\gamma }{{}^1F}(\zeta ){\downarrow}\zeta.\square\)
Corollary: If \(f\) has an AD \(F\) on a CP \(\gamma\), it holds with the conditions above that \({\uparrow}_{\gamma }{f(\zeta ){\downarrow}\zeta }=0.\square\)
Integral formula: The last corollary shows that for \(f: A \rightarrow {}^{(\omega)}\mathbb{C}\) and the CP \(\gamma: [a, b[ \rightarrow A \subseteq {}^{(\omega)}\mathbb{C}\), the equation (see below) \(f(z)\) ind\(_\gamma(z) = \widetilde{\hat{\underline{\pi}}}{\uparrow}_{\gamma}{\widetilde{\zeta-z}f(\zeta){\downarrow}\zeta}\) holds if and only if \(g(\zeta) = \widetilde{\zeta-z}(f(\zeta)-f(z))\) implies that \({\uparrow}_{\gamma}^{\ }{g(\zeta)}{\downarrow}\zeta=0.\) This is especially true if \(g\) has an AD on \(\gamma([a,b[).\square\)
Remark: The conventionally real case of both fundamental theorems may be established analogously. Given \(u, v \in [a, b[ \, \cap \, C, u \ne v\) and \(\gamma(u) = \gamma(v)\), it may be the case that \(\overset{\rightharpoonup}{\gamma}(u) \ne \; \overset{\rightharpoonup}{\gamma}(v)\).
Remark: In the first fundamental theorem, the derivative \({\downarrow}(F(z))/{\downarrow}z\) can be tightened to the arithmetic mean \(\tilde{2}(f(z) + f(\overset{\rightharpoonup}{z}))\) resp. \(f(\tilde{2}(z + \overset{\rightharpoonup}{z}))\), and similarly, in the second fundamental theorem, \(F(\gamma(b)) – F(\gamma(a))\) can be tightened to \(\tilde{2}(F(\gamma(b)) + F(\overset{\leftharpoonup}{\gamma}(b))) – \tilde{2}(F(\gamma(a)) + F(\overset{\rightharpoonup}{\gamma}(a)))\) resp. \(F(\tilde{2}(\gamma(b) + \overset{\leftharpoonup}{\gamma}(b))) – F(\tilde{2}(\gamma(a) + \overset{\rightharpoonup}{\gamma}(a)))\). This yields approximately the original results when \(f\) and \(F\) are sufficiently \(\alpha\)-continuous at the boundary.
Leibniz integral rule: For \(f: {}^{(\omega)}\mathbb{K}^{\grave{n}} \rightarrow {}^{(\omega)}\mathbb{K}, a, e: {}^{(\omega)}\mathbb{K}^{n} \rightarrow {}^{(\omega)}\mathbb{K}, \overset{\rightharpoonup}{x} := {(s, {x}_{2}, …, {x}_{n})}^{T}\), and \(s \in {}^{(\omega)}\mathbb{K} \setminus \{{x}_{1}\}\), choosing \(\overset{\rightharpoonup}{a}(x) = a(\overset{\rightharpoonup}{x})\) and \(\overset{\rightharpoonup}{e}(x) = e(\overset{\rightharpoonup}{x})\), it holds that\[\tfrac{{\downarrow} }{{\downarrow} {{x}_{1}}}\left( {\uparrow}_{a(x)}^{e(x)}{f(x,t){\downarrow}t} \right)={\uparrow}_{a(x)}^{e(x)}{\tfrac{{\downarrow} f(x,t)}{{\downarrow} {{x}_{1}}}{\downarrow}t}+\tfrac{{\downarrow} e(x)}{{\downarrow} {{x}_{1}}}f(\overset{\rightharpoonup}{x},e(x))-\tfrac{{\downarrow} a(x)}{{\downarrow} {{x}_{1}}}f(\overset{\rightharpoonup}{x},a(x)).\]Proof:\[\begin{aligned}\tfrac{{\downarrow} }{{\downarrow} {{x}_{1}}}\left( {\uparrow}_{a(x)}^{e(x)}{f(x,t){\downarrow}t} \right) &={\left( {\uparrow}_{a(\overset{\rightharpoonup}{x})}^{e(\overset{\rightharpoonup}{x})}{f(\overset{\rightharpoonup}{x},t){\downarrow}t}-{\uparrow}_{a(x)}^{e(x)}{f(x,t){\downarrow}t} \right)}/{{\downarrow} {{x}_{1}}}\; \\ &={\left( {\uparrow}_{a(x)}^{e(x)}{(f(\overset{\rightharpoonup}{x},t)-f(x,t)){\downarrow}t}+{\uparrow}_{e(x)}^{e(\overset{\rightharpoonup}{x})}{f(\overset{\rightharpoonup}{x},t){\downarrow}t}-{\uparrow}_{a(x)}^{a(\overset{\rightharpoonup}{x})}{f(\overset{\rightharpoonup}{x},t){\downarrow}t} \right)}/{{\downarrow} {{x}_{1}}}\; \\ &={\uparrow}_{a(x)}^{e(x)}{\tfrac{{\downarrow} f(x,t)}{{\downarrow} {{x}_{1}}}{\downarrow}t}+\tfrac{{\downarrow} e(x)}{{\downarrow} {{x}_{1}}}f(\overset{\rightharpoonup}{x},e(x))-\tfrac{{\downarrow} a(x)}{{\downarrow} {{x}_{1}}}f(\overset{\rightharpoonup}{x},a(x)).\square\end{aligned}\]Remark: Complex integration allows a path whose start and end points are the limits of integration. 
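The rule admits a finite numerical sketch in conventional arithmetic; the integrand \(f(x,t) = \sin(xt)\) and the limits \(a(x) = x\), \(e(x) = x^2\) are illustrative assumptions, not data from the text.

```python
import math

def trapezoid(g, lo, hi, m=20_000):
    """Plain trapezoidal rule; its accuracy suffices for this sketch."""
    h = (hi - lo) / m
    return h * (0.5 * g(lo) + 0.5 * g(hi) + sum(g(lo + i * h) for i in range(1, m)))

# Illustrative data (assumptions):
f  = lambda x, t: math.sin(x * t)
fx = lambda x, t: t * math.cos(x * t)     # partial derivative w.r.t. x
a  = lambda x: x                          # lower limit, derivative 1
e  = lambda x: x * x                      # upper limit, derivative 2x

x = 1.2
I = lambda y: trapezoid(lambda t: f(y, t), a(y), e(y))
d = 1e-5
lhs = (I(x + d) - I(x - d)) / (2 * d)     # difference quotient of the integral
rhs = (trapezoid(lambda t: fx(x, t), a(x), e(x))
       + 2 * x * f(x, e(x)) - 1.0 * f(x, a(x)))
```

Both sides agree up to the quadrature and difference-quotient errors.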
If \(\overset{\rightharpoonup}{a}(x) \ne a(\overset{\rightharpoonup}{x})\), then multiply the final summand by \((\overset{\rightharpoonup}{a}(x) – a(x))/(a(\overset{\rightharpoonup}{x}) – a(x))\). If \(\overset{\rightharpoonup}{e}(x) \ne e(\overset{\rightharpoonup}{x})\), then multiply the penultimate summand by \((\overset{\rightharpoonup}{e}(x) – e(x))/(e(\overset{\rightharpoonup}{x}) – e(x))\). Let \(n \in {}^{\omega}\mathbb{N}^{*}\) and \(x \in [0, 1]\) in each case for the following examples (cf. loc. cit., p. 540 – 543).
1. The sequence \({f}_{n}(x) = \sin(nx)/n^{\tilde{2}}\) does not tend to \(f(x) = 0\) as \(n \rightarrow \omega\), but instead to \(f(x) = \tilde{\omega}^{\tilde{2}} \sin(\omega x)\) with (continuous) derivative \({}^1f(x) = {\omega}^{\tilde{2}} \cos(\omega x)\) instead of \({}^1f(x) = 0\).
2. The sequence \({f}_{n}(x) = x – \tilde{n}x^{n}\) tends to \(f(x) = x – \tilde{\omega}{x}^{\omega}\) as \(n \rightarrow \omega\) instead of \(f(x) = x\), with derivative \({}^1f(x) = 1 – {x}^{\acute{\omega}}\) instead of \({}^1f(x) = 1\). Conventionally, the limit of \({}^1{f}_{n}(x) = 1 – {x}^{\acute{n}}\) is discontinuous at the point \(x = 1\).
3. The sequence \({f}_{n}(x) = nx(-\acute{x})^{n}\) does not tend to \(f(x) = 0\) as \(n \rightarrow \omega\), but to the continuous function \(f(x) = {\omega x(-\acute{x})}^{\omega}\), and takes the value \(\tilde{\epsilon}\) when \(x = \tilde{\omega}\).
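Example 3 can be illustrated in conventional finite arithmetic: at \(x = \tilde{n}\) the value of \(f_n\) stays close to \(\tilde{\epsilon}\) however large \(n\) becomes, matching the value \(\tilde{\epsilon}\) at \(x = \tilde{\omega}\) claimed above. The sample size \(n = 10^6\) is an illustrative choice.

```python
import math

# f_n(x) = n*x*(1-x)^n does not become uniformly small: at x = 1/n it is
# close to 1/e for every large n.
n = 10**6
x = 1 / n
value = n * x * (1 - x)**n
limit = math.exp(-1)
```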
Definition: The direction \(w := \overset{\rightharpoonup}{z}\) gives the second derivative of \(f: A \rightarrow {}^{(\omega)}\mathbb{K}\) at \(z \in A \subseteq {}^{(\omega)}\mathbb{K}\) by\[{}^2f(\overset{\rightharpoonup}{z}):=\tfrac{{\downarrow}^{2}f(\overset{\rightharpoonup}{z})}{{{({\downarrow}\overset{\rightharpoonup}{z})}^{2}}}=\tfrac{f(\overset{\rightharpoonup}{w})-\hat{f}(\overset{\rightharpoonup}{z})+f(z)}{{{({\downarrow}\overset{\rightharpoonup}{z})}^{2}}}.\triangle\]Remark: Higher derivatives are defined analogously. Every number \({m}_{n} \in {}^{\omega}\mathbb{N}\) for \(n \in {}^{\omega}\mathbb{N}^{*}\) of derivatives is written as an exponent after the \(n\)-th variable to be differentiated. The exponent to be specified in the numerator is the sum of all \({m}_{n}\). Since 1/(–1)! = 0, the Leibniz product rule then follows for \(g\) like \(f\) and \(p \in {}^{\omega}\mathbb{N}^{*}\):\[{}^p(fg) = {\LARGE{\textbf{+}}}_{m+n=p}\tbinom{p}{m}\ {}^mf\ {}^ng.\]Proof: For \(p = 1\), the product rule mentioned above holds. Induction step from \(p\) to \(\grave{p}\):
\({}^{\grave{p}}(fg) ={}^p\left({}^1(fg)\right)\underset{1}{=}{}^p\left({}^1f g+f\ {}^1g\right)={}^p\left({}^1f g\right)+{}^p\left(f\ {}^1g\right)\underset{p}{=} {\LARGE{\textbf{+}}}_{m+1+n=\grave{p}} {\left (\tbinom{p}{m}+\tbinom{p}{\grave{m}} \right ){}^{\grave{m}}f\ {}^ng}+{\LARGE{\textbf{+}}}_{m+1+n=\grave{p}} {\tbinom{p}{m}\ {}^mf\ {}^{\grave{n}}g} -{\LARGE{\textbf{+}}}_{m+1+n=\grave{p}} {\tbinom{p}{\grave{m}}\ {}^{\grave{m}}f\ {}^ng} ={\LARGE{\textbf{+}}}_{m+n=\grave{p}}{\tbinom{\grave{p}}{m}\ {}^mf\ {}^ng}.\square\)
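The product rule can be verified exactly for polynomials with integer coefficients; the sample polynomials and the order \(p = 3\) below are illustrative assumptions.

```python
from math import comb

# Exact check of the Leibniz product rule D^p(fg) = sum C(p,m) D^m f D^n g
# on integer polynomials, coeff[i] being the coefficient of x^i.
def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def polyder(f, k=1):
    for _ in range(k):
        f = [i * c for i, c in enumerate(f)][1:] or [0]
    return f

def polyadd(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
            for i in range(n)]

def trim(f):
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

f, g, p = [1, 2, 3], [5, 0, 1, 4], 3   # 1+2x+3x^2 and 5+x^2+4x^3 (illustrative)
lhs = trim(polyder(polymul(f, g), p))  # p-th derivative of the product
rhs = [0]
for m in range(p + 1):
    term = [comb(p, m) * c for c in polymul(polyder(f, m), polyder(g, p - m))]
    rhs = polyadd(rhs, term)
rhs = trim(rhs)
```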
Taylor’s theorem: \({\LARGE{\textbf{+}}}_m |{}^mf(a)| > \tilde{\nu}, {}^mf(a) \in {}^{\omega}\mathbb{C}, g(z) = (z-a)^\omega, |z – a| < \tilde{\epsilon}\omega\) and \(z \rightarrow a \in {}^{\omega}\mathbb{C}\) imply\[f(z)=T_\omega(z):={\LARGE{\textbf{+}}}_{m=0}^{\omega}{\widetilde{m!}\ {}^mf(a)(z-a)^m}.\]Proof: By L’Hôpital’s rule, the Leibniz product rule gives\[f(z)=\tfrac{(fg)(z)}{g(z)}=\tfrac{{}^1(fg)(z)}{{}^1g(z)}=…=\tfrac{{}^{\acute{\omega}}(fg)(z)}{{}^{\acute{\omega}}g(z)}=\tfrac{{}^{\omega}(fg)(z)}{{}^{\omega}g(z)}=\widetilde{\omega!}\ {}^{\omega}(fg)(z)\]and\[{}^{\omega}(fg)(z)={\LARGE{\textbf{+}}}_{m+n=\omega}{\tbinom{\omega}{m}\ {}^mf(a)\ {}^{\omega-m}g(z)}={}^{\omega}g(z){\LARGE{\textbf{+}}}_{m=0}^{\omega}{\widetilde{m!}\ {}^mf(a)(z-a)^m}.\square\]Conclusion: The second fundamental theorem implies for the remainder \(R_n(z) := f(z) – T_n(z) = f(a) + {\uparrow}_{a}^{z}{\ {}^1f(t){\downarrow}t} – T_n(z)\) by the mean value theorem where \(\zeta \in {}^a\dot{\mathbb{C}}(z)\) and \(p\in\mathbb{N}_{\le n}^*\)\[R_n(z)={\uparrow}_{a}^{z}{\widetilde{n!}(z-t)^n\ {}^{\grave{n}}f(t){\downarrow}t}={\widetilde{pn!}(z-\zeta)^{\grave{n}-p}}\ {}^{\grave{n}}f(\zeta)(z-a)^p.\]Proof by induction with integration by parts and induction step from \(\acute{n}\) to \(n\) (\(\acute{n}\) = 0 see above):\[f(z)=T_{\acute{n}}(z)+\widetilde{n!}(z-a)^{n}\ {}^nf(a)+{\uparrow}_{a}^{z}{\widetilde{n!}(z-t)^{n}\ {}^{\grave{n}}f(t){\downarrow}t}=T_n(z)+R_n(z).\square\]Transformation theorem: If the Jacobian \(D\varphi(x)\) exists, linear algebra shows for \(f: \varphi(A) \rightarrow {}^{(\omega)}\mathbb{R}^n\) and \(A \subseteq {}^{\omega}\mathbb{R}^n\) (cf. Köhler, Günter: Analysis; 1. Aufl.; 2006; Heldermann; Lemgo, p. 519):\[{\uparrow}_{\varphi(A)}^{\ }{f(y){\downarrow}y={\uparrow}_{A}^{\ }{f(\varphi(x))|\text{eig}(D\varphi(x))|{\downarrow}x}}.\square\]Remark: It holds that \((\epsilon^{\iota}-1)/\iota = 1 = {}^1\exp(0)\) and thus \({\downarrow} _\epsilon y/{\downarrow}y = \tilde{y}\) from \({\downarrow}y/{\downarrow}x = y := \epsilon^x\) as well as \({\downarrow} x^n = {\downarrow}\epsilon^{n _\epsilon x} = nx^{\acute{n}}{\downarrow}x\) for \(n \in {}^{\omega}\mathbb{N}^{*}\) by product and chain rule. Unit circle and triangles easily show the relations sin \(\iota/1 = (\cos \iota – 1)/\iota\) and \(\cos \iota/1 = -\sin \iota/\iota\). Hence, it holds \({}^1\)sin(0) = cos(0) and \({}^1\)cos(0) \(= -\)sin(0) as well as for \(m \in {}^{\omega}\mathbb{N}\) and \(n = \hat{k}\) de Moivre’s formula:\[(\cos z + \underline{\sin}\,z)^m = \epsilon^{\underline{m}z}=1+{\LARGE{\textbf{+}}}_{k=1}^{\check{\omega}}\left({\widetilde{\acute{n}!}(\underline{m}z)^{\acute{n}}}+{\widetilde{n!}(\underline{m}z)^{n}}\right)=\cos{\left(mz\right)}+\underline{\sin}\left(mz\right).\square\]Euler’s sine formula: Zero and identity theorem (cf. Walter, Wolfgang: Analysis 1; 3., verb. Aufl.; 1992; Springer; Berlin, p. 41) for series plus the theorem above analogously yield \(\Gamma(\tilde{2}) = {\pi}^{\tilde{2}}\) for the gamma function \(\Gamma(z) := \omega!\omega^z/{\LARGE{\textbf{$\times$}}}_{k=0}^{\omega}{(z + k)}\) where \(z \in {}^{\nu}\mathbb{C} \setminus -{}^{\nu}\mathbb{N}\) from\[\frac{\epsilon^{\hat{\underline{\pi}}z} – 1}{\epsilon^{\underline{\pi}z}\hat{\underline{\pi}}z} = \frac{\epsilon^{\underline{\pi}z} – \epsilon^{-\underline{\pi}z}}{\hat{\underline{\pi}}z} = \frac{\sin(\pi z)}{\pi z} = {\LARGE{\textbf{+}}}_{k=0}^{\omega}{\frac{(\underline{\pi}z)^{n}}{\grave{n}!}} = {\LARGE{\textbf{$\times$}}}_{k=1}^{\omega}{(1 – z^2/k^2)} = \frac{\tilde{z}}{\Gamma(z)\Gamma(-\acute{z})},\]since all \(\hat{\omega}\) zeros of the left- and right-hand side match due to \(\epsilon^{\underline{\pi}n} = 1 + \sin 0.\square\)
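In conventional finite arithmetic, a truncated form of the sine product can be checked numerically; the point \(z = \tilde{2}\) and the cutoff \(K\) are illustrative choices, with an \(\mathcal{O}(\tilde{K})\) truncation error.

```python
import math

# Truncated Euler sine product at z = 1/2:
# sin(pi*z)/(pi*z) is approximated by prod_{k=1}^{K} (1 - z^2/k^2).
z = 0.5
K = 100_000
prod = 1.0
for k in range(1, K + 1):
    prod *= 1 - z * z / (k * k)
target = math.sin(math.pi * z) / (math.pi * z)   # equals 2/pi at z = 1/2
```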
Conclusion: This shows for the Wallis product \(W := {\LARGE{\textbf{$\times$}}}_{k=1}^{\omega}{k^2/(k^2-\tilde{4})} = \check{\pi}\) from \[\frac{\Gamma(\tilde{2})^2}{\widehat{W}} = \frac{\check{\omega} 4^{\grave{\omega}}{\omega!}^2}{{(\hat{\omega} + 1)!!}^2} \frac{(\hat{\omega} + 1){(\hat{\omega} – 1)!!}^2}{4^{\omega}{\omega!}^2} = \frac{\hat{\omega}}{\hat{\omega} + 1} := 1 = \frac{\check{\pi}}{W}.\square\]Functional equation of the Gamma function: From \(\Gamma(\grave{z})=z\Gamma(z)\omega/(\omega+\grave{z})\) and \(\Gamma(1):=1\), it follows for sufficiently small \(|z|\), integrating by parts, that \(\Gamma(\grave{z}) := {\uparrow}_0^{\omega} t^z\epsilon^{-t}{\downarrow} t=z\Gamma(z)\). For \(z = \tilde{2}\) and the substitution \(x := t^{\tilde{2}}\), this leads to the equation \({\uparrow}_0^{\omega} \epsilon^{-x^2}{\downarrow} x = \tilde{2}\pi^{\tilde{2}}\) relevant to statistics and the computation of ball volumes.\(\square\)
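Both conclusions admit finite numerical sketches; the truncation point \(K\) and the integration cutoff 10 (beyond which the Gaussian tail is below \(10^{-43}\)) are assumptions of the sketch.

```python
import math

# Truncated Wallis product vs pi/2.
K = 100_000
W = 1.0
for k in range(1, K + 1):
    W *= k * k / (k * k - 0.25)

# Midpoint quadrature of the Gaussian integral on [0, 10] vs sqrt(pi)/2.
dx = 1e-3
gauss = sum(math.exp(-((i + 0.5) * dx) ** 2) for i in range(10_000)) * dx
```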
Conclusion: A logarithmic derivative (see Remmert, Reinhold: Funktionentheorie 1; 3., verb. Aufl.; 1992; Springer; Berlin, p. 324) shows the equation \( {\uparrow}_{0}^{\omega}{\tilde{x}\sin x {\downarrow}x} = \check{\pi}.\square\)
Stirling formula: Putting \(\theta = 12\omega\) gives the asymptotic approximation \(\grave{\theta}^{-1} < {{}_\epsilon}(\widetilde{\omega}^{\omega+\tilde{2}}{\hat{\pi}}^{-\tilde{2}}\omega!) + \omega < \tilde{\theta}\).
Proof: Logarithmise \(\binom{\hat{\omega}}{\omega} = \frac{(\tilde{\pi}{\omega})^{\tilde{2}}4^{\omega}}{\omega+\tilde{2}}\sim\frac{4^{\omega}}{{(\pi\omega)}^{\tilde{2}}}\) as before to obtain \(\omega!= d(\pi\omega)^{\tilde{2}}(c\omega)^{\omega}\) for \(c, d \in {}^{\nu}\mathbb{R}_{>0}\) from\[{\LARGE{\textbf{+}}}_{n=1}^{\omega}{{\;}_\epsilon n} + {}_\epsilon 4\,\omega – \tilde{2}{}_\epsilon(\pi\omega) = {\LARGE{\textbf{+}}}_{n=\grave{\omega}}^{\hat{\omega}}{{\;}_\epsilon n} = \omega{}_\epsilon(b\omega),\]where \(b \in \,]1, 2[\). Subscripting \(c_{\grave{\omega}}^{\grave{\omega}} / c_{\omega}^{\omega}\) of \(c\) yields \(c = \tilde{\epsilon}\) and \(d_{\omega}^{2}/d_{\hat{\omega}}\) of \(d\) reveals \(d = 2^{\tilde{2}}\). TS (see above) and GS of the logarithm show \(\theta = 12\omega\) from \(\acute{\upsilon} = \hat{\omega}\) and \(\tilde{\theta} – \grave{\theta}^{-1} + 1= (\omega + \tilde{2}){{}_\epsilon}{(\tilde{\omega}\grave{\omega}}) = \check{\upsilon}{{}_\epsilon}((1 + \tilde{\upsilon})/(1 – \tilde{\upsilon})).\square\)
Conclusion: Mathematical induction obtains \(n! \in [{\epsilon}^{\tilde{n}[\tilde{t},\widetilde{12}]-n}{\hat{\pi}}^{\tilde{2}}n^{n+\tilde{2}}]\) for \(n \in {}^{\omega}\mathbb{N}^{*}\) and \(t = 12.004.\square\)
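In the conventional finite setting these are the classical Robbins-type bounds, which can be checked with the log-gamma function; the sample values of \(n\) are illustrative.

```python
import math

# r(n) = ln n! - (n + 1/2) ln n + n - ln sqrt(2 pi) lies strictly between
# 1/(12n + 1) and 1/(12n), matching the Stirling formula above with omega
# replaced by a finite n.
def stirling_rest(n):
    return (math.lgamma(n + 1) - (n + 0.5) * math.log(n)
            + n - 0.5 * math.log(2 * math.pi))

rs = {n: stirling_rest(n) for n in (10, 100, 1000)}
```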
Green’s theorem: Given NRs \(B \subseteq {D}^{2}\) for some \(h\)-domain \(\mathbb{D} \subseteq {}^{(\omega)}\mathbb{R}^{2}\), infinitesimal \(h = |{\downarrow}x|= |{\downarrow}y| =
|\overset{\rightharpoonup}{\gamma}(s) – \gamma(s)| = \mathcal{O}({\tilde{\omega}}^{m})\), sufficiently large \(m \in \mathbb{N}^{*}, (x, y) \in \mathbb{D}, {\mathbb{D}}^{-} := \{(x, y) \in \mathbb{D} : (x + h, y + h) \in \mathbb{D}\}\), and a simple CP \(\gamma: [a, b[\rightarrow {\partial} \mathbb{D}\) followed anticlockwise, choosing \(\overset{\rightharpoonup}{\gamma}(s) = \gamma(\overset{\rightharpoonup}{s})\) for \(s \in [a, b[, A \subseteq {[a, b]}^{2}\), the following equation holds for sufficiently \(\alpha\)-continuous functions \(u, v: \mathbb{D} \rightarrow \mathbb{R}\) with not necessarily continuous \({\downarrow} u/{\downarrow} x, {\downarrow} u/{\downarrow} y, {\downarrow} v/{\downarrow} x\) and \({\downarrow} v/{\downarrow} y\)\[{\uparrow}_{\gamma }{(u\,{\downarrow}x+v\,{\downarrow}y)}={\uparrow}_{(x,y)\in {{\mathbb{D}}^{-}}}{\left( \tfrac{{\downarrow} v}{{\downarrow} x}-\tfrac{{\downarrow} u}{{\downarrow} y} \right){\downarrow}(x,y)}.\]Proof: Only \(\mathbb{D} := \{(x, y) : r \le x \le s, f(x) \le y \le g(x)\}, r, s \in {}^{(\omega)}\mathbb{R}, f, g : {\partial} \mathbb{D} \rightarrow {}^{(\omega)}\mathbb{R}\) is proved, since the proof is analogous for each case rotated by \(\iota\). Every \(h\)-domain is a union of such sets. Simply showing\[{\uparrow}_{\gamma }{u\,{\downarrow}x}=-{\uparrow}_{(x,y)\in {{\mathbb{D}}^{-}}}{\tfrac{{\downarrow} u}{{\downarrow} y}{\downarrow}(x,y)}\]is sufficient because the other relation is given analogously. 
Neglecting the regions of \(\gamma\) with \({\downarrow}x = 0\) and \(s := h(u(r, g(r)) – u(t, g(t)))\) shows\[-{\uparrow}_{\gamma }{u\,{\downarrow}x}-s={\uparrow}_{t}^{r}{u(x,g(x)){\downarrow}x}-{\uparrow}_{t}^{r}{u(x,f(x)){\downarrow}x}={\uparrow}_{t}^{r}{{\uparrow}_{f(x)}^{g(x)}{\tfrac{{\downarrow} u}{{\downarrow} y}}{\downarrow}y{\downarrow}x}={\uparrow}_{(x,y)\in {{\mathbb{D}}^{-}}}{\tfrac{{\downarrow} u}{{\downarrow} y}{\downarrow}(x,y)}.\square\]Remarks: If the moduli of \(x \in \mathbb{C}\), \({\downarrow}x\) or \(\widetilde{{\downarrow}x}\) have different orders of magnitude, the identity\[{}^0s(x):={\LARGE{\textbf{$\pm$}}}_{m=0}^{n}{x^m}=(1-{(-x)}^{\grave{n}})/\grave{x}\]yields by differentiating\[{}^1s(x)={\LARGE{\textbf{$\mp$}}}_{m=1}^{n}{m{x^{\acute{m}}}}=(\grave{n}{(-x)}^{n}-n{(-x)}^{\grave{n}}-1)/{{{\grave{x}}^{2}}}.\]The formulas above were sometimes miscalculated. For sufficiently small \(x\), and sufficiently, but not excessively large \(n\), the latter can be further simplified to \(-{\grave{x}}^{-2}\), and remains valid when \(x \ge 1\) is not excessively large. By successively multiplying \({}^ms(x)\) by \(x\) for \(m \in {}^{\omega}\mathbb{N}^{*}\) and subsequently differentiating, other formulas can be derived for \({}^{\grave{m}}s(x)\), providing an example of divergent series. However, if \({}^0s(-x)\) is integrated from 0 to 1 with \(n := \omega\), an integral expression for \({_\epsilon}\omega + \gamma\) is obtained for Euler’s constant \(\gamma\).
L’Hôpital’s rule solves the case of \(x = -1\). Substituting \(y := -\acute{x}\), the binomial series yields a series with infinite coefficients (if \({_\epsilon}\omega\) is also expressed as a series, even an expression for \(\gamma\) is obtained). If the numerator of \({}^0s(x)\) is illegitimately simplified, one risks incorrect results, especially when \(|x| \ge 1\). For example, \({}^0s(-{\epsilon}^{\underline{\pi}})\) is 0 for odd \(n\) and 1 for even \(n\), but not \(\tilde{2}\).
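The closed forms for \({}^0s\) and \({}^1s\) given above can be confirmed exactly with rational arithmetic; the values \(x = \tilde{3}\) and \(n = 9\) are illustrative choices.

```python
from fractions import Fraction

# Exact check of the alternating GS and its derivative:
# s0 = sum_{m=0}^{n} (-x)^m  and  s1 = d(s0)/dx.
x, n = Fraction(1, 3), 9
s0 = sum((-x)**m for m in range(n + 1))
s0_closed = (1 - (-x)**(n + 1)) / (1 + x)
s1 = sum((-1)**m * m * x**(m - 1) for m in range(1, n + 1))
s1_closed = ((n + 1) * (-x)**n - n * (-x)**(n + 1) - 1) / (1 + x)**2
```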
Counter-directional theorem: If the path \(\gamma: [a, b[ \, \cap \, C \rightarrow V\) with \(C \subseteq \mathbb{R}\) passes the edges of
every \(n\)-cube of side length \(\iota\) in the \(n\)-volume \(V \subseteq {}^{(\omega)}\mathbb{R}^{n}\) with \(n \in \mathbb{N}_{\ge 2}\) exactly once, where the opposite edges in all two-dimensional faces of every \(n\)-cube are traversed in reverse direction, but uniformly, then, for \(D \subseteq \mathbb{R}^{2}, B \subseteq {V}^{2}, f = ({f}_{1}, …, {f}_{n}): V \rightarrow {}^{(\omega)}\mathbb{R}^{n}, \gamma(s) = x, \gamma(\overset{\rightharpoonup}{s}) = \overset{\rightharpoonup}{x}\) and \({V}_{r} := \{\overset{\rightharpoonup}{x} \in V: x \in V, \overset{\rightharpoonup}{x} \ne \overset{\leftharpoonup}{x}\}\), it holds that\[{\uparrow}_{s \in G}{f(\gamma (s)){\ {}^1\gamma}(s){\downarrow}s}={\uparrow}_{\begin{smallmatrix} (x,\overset{\rightharpoonup}{x}) \\ \in V \times {{V}_r} \end{smallmatrix}}{f(x){\downarrow}x}={\uparrow}_{\begin{smallmatrix} s \in G, \\ \gamma | {\partial}^{\acute{n}} V \end{smallmatrix}}{f(\gamma (s)){\ {}^1\gamma}(s){\downarrow}s}.\]Proof: If two arbitrary squares are considered with common edge of length \(\iota\) included in one plane, then only the edges of \(V\times{V}_r\) are not passed in both directions for the same function value. They all, and thus the path to be passed, are exactly contained in \({\partial}^{\acute{n}}V.\square\)
Theorem: Split \(F: A \rightarrow {}^{(\omega)}\mathbb{C}\) into real and imaginary parts \(F(z) := U(z) + \underline{V}(z) := f(x, y) := u(x, y) + \underline{v}(x, y)\). For \(h\)-homogeneous \(A \subseteq {}^{(\omega)}\mathbb{C}\), \(h = |{\downarrow}x| = |{\downarrow}y|\), and the NR \(B \subseteq {A}^{2}\), \(F\) is holomorphic for every \(z = x + \underline{y} \in A\) if the Cauchy-Riemann differential equations are satisfied on \(B\):\[\tfrac{{{\downarrow} u}}{{{\downarrow} x}} = \tfrac{{{\downarrow} v}}{{{\downarrow} y}},\,\,\tfrac{{{\downarrow} v}}{{{\downarrow} x}} = – \tfrac{{{\downarrow} u}}{{{\downarrow} y}}.\]Proof: The claim follows directly from \(\tfrac{{{\downarrow} u}}{{{\downarrow} x}} + \tfrac{{{\downarrow} \underline{v}}}{{{\downarrow} x}} = \tfrac{{{\downarrow} v}}{{{\downarrow} y}} – \tfrac{{{\downarrow} \underline{u}}}{{{\downarrow} y}} = \tfrac{{{\downarrow} F}}{{{\downarrow} z}} = {\downarrow}U(z)+ {\downarrow}\underline{V}(z).\square\)
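A finite-difference sketch of the Cauchy-Riemann equations for the illustrative choice \(F(z) = \epsilon^z\), i.e. \(u = \epsilon^x \cos y\) and \(v = \epsilon^x \sin y\) (the evaluation point and step size are assumptions):

```python
import math

# Central differences approximate the partial derivatives; for a holomorphic
# function they must satisfy u_x = v_y and v_x = -u_y.
u = lambda x, y: math.exp(x) * math.cos(y)
v = lambda x, y: math.exp(x) * math.sin(y)
x0, y0, h = 0.3, 0.7, 1e-6

ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
```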
Remark: The following necessary and sufficient condition is valid for \(F\) to be holomorphic:\[{}^1F(\bar z) = \tfrac{{{\downarrow} f}}{{{\downarrow} x}} = \tfrac{{{\downarrow} \underline{f}}}{{{\downarrow} y}} = \tilde{2}\left( {\tfrac{{{\downarrow} f}}{{{\downarrow} x}} + \tfrac{{{\downarrow} \underline{f}}}{{{\downarrow} y}}} \right) = \tfrac{{{\downarrow} F}}{{{\downarrow} \bar z}} = 0.\]Goursat’s integral lemma: If \(f \in \mathcal{O}(\Delta)\) on a triangle \(\Delta \subseteq {}^{(\omega)}\mathbb{C}\) but has no AD on \(\Delta\), then (cf. loc. cit., p. 149 ff.)\[I:={\uparrow}_{\partial\Delta }{f(\zeta ){\downarrow}\zeta }=0.\]Refutation of conventional proofs based on estimation by means of a complete triangulation: The direction in which \(\partial\Delta\) is traversed is irrelevant. If \(\Delta\) is fully triangulated, then wlog every minimal triangle \({\Delta}_{s} \subseteq \Delta\), where \(\kappa, \lambda\), and \(\mu\) represent its vertices, must either satisfy\[{I_s}: = {\uparrow}_{\partial{\Delta _s}} {f(\zeta ){\downarrow}\zeta } = f(\kappa)(\lambda – \kappa) + f(\lambda)(\mu – \lambda) + f(\kappa)(\kappa – \mu) = (f(\kappa) – f(\lambda))(\lambda – \mu) = 0\]or\[\begin{aligned}{\uparrow}_{\partial{\Delta _s}} {f(\zeta ){\downarrow}\zeta } &= f(\kappa)(\lambda – \kappa) + f(\lambda)(\mu – \lambda) + f(\mu)(\kappa – \mu) = (f(\kappa) – f(\lambda))\lambda + (f(\lambda) – f(\mu))\mu + (f(\mu) – f(\kappa))\kappa \\ &= {}^1f(\lambda)\left( {(\kappa – \lambda)\lambda – (\mu – \lambda)\mu + (\mu – \lambda)\kappa – (\kappa – \lambda)\kappa} \right) = {}^1f(\lambda)\left( {(\mu – \lambda)(\kappa – \mu) – {{(\kappa – \lambda)}^2}} \right) = 0.\end{aligned}\]By holomorphicity and cyclic permutations, this can only happen for \(f(\kappa) = f(\lambda) = f(\mu)\).
Considering every triangle adjacent to \(\Delta\), one deduces that \(f\) must be constant, which contradicts the assumptions. This is because the term in large brackets is translation-invariant: otherwise, setting \(\mu := 0\) wlog makes this term 0, in which case \(\kappa = \check{\lambda}(1 \pm {(-3)}^{\tilde{2}})\) and \(|\kappa| = |\lambda| = |\kappa – \lambda|\). However, since every horizontal and vertical line is homogeneous on \({}^{(\omega)}\mathbb{C}\), this cannot happen:
Otherwise, the corresponding sub-triangle would be equilateral and not isosceles and right-angled. Therefore, in both cases, \(|{I}_{s}|\) must be at least \(|{}^1f(\lambda) \mathcal{O}({\iota}^{2})|\), by selecting the vertices \(0, \iota\) and \(\underline{\iota}\) wlog. If \(L\) is the perimeter of a triangle, then it holds that \(|I| \le {4}^{m} |{I}_{s}|\) for an infinite natural number \(m\), and also \({2}^{m} = L(\partial\Delta)/|\mathcal{O}({\iota}^{2})|\) since \(L(\partial\Delta) = {2}^{m} L(\partial{\Delta}_{s})\) and \(L(\partial{\Delta}_{s}) = |\mathcal{O}({\iota}^{2})|\). It holds that \(|I| \le |{}^1f(\lambda) {L(\partial\Delta)}^{2}/\mathcal{O}({\iota}^{2})|\), causing the desired estimate \(|I| \le |\mathcal{O}({\downarrow}\zeta)|\) to fail, for example if \(|{}^1f(\lambda) {L(\partial\Delta)}^{2}|\) is larger than \(|\mathcal{O}({\iota}^{2})|.\square\)
Cauchy’s integral theorem: Given the NRs \(B \subseteq {D}^{2}\) and \(A \subseteq [a, b]\) for some \(h\)-domain \(\mathbb{D} \subseteq {}^{\omega}\mathbb{C}\), infinitesimal \(h\), \(f \in \mathcal{O}(\mathbb{D})\) and a CP \(\gamma: [a, b[\rightarrow \partial \mathbb{D}\), choosing \(\overset{\rightharpoonup}{\gamma}(s) = \gamma(\overset{\rightharpoonup}{s})\) for \(s \in [a, b[\) gives \({\uparrow}_{\gamma }{f(z){\downarrow}z}=0.\)
Proof: By the Cauchy-Riemann differential equations and Green’s theorem, with \(x := \text{Re} \, z, y := \text{Im} \, z, u := \text{Re} \, f, v := \text{Im} \, f\) and \({\mathbb{D}}^{-} := \{z \in \mathbb{D} : z + h + \underline{h} \in \mathbb{D}\}\), it holds that\[{\uparrow}_{\gamma }{f(z){\downarrow}z}={\uparrow}_{\gamma }{\left( u+\underline{v} \right)\left( {\downarrow}x+{\downarrow}\underline{y} \right)}={\uparrow}_{z\in {{\mathbb{D}}^{-}}}{\left( \left( \tfrac{{\downarrow} \underline{u}}{{\downarrow} x}-\tfrac{{\downarrow} \underline{v}}{{\downarrow} y} \right)-\left( \tfrac{{\downarrow} v}{{\downarrow} x}+\tfrac{{\downarrow} u}{{\downarrow} y} \right) \right){\downarrow}(x,y)}=0.\square\]Remark: For \(\tilde{\omega}\) := 0, the main theorem of Cauchy’s theory of functions can be proven according to Dixon (as in loc. cit., p. 228 f.), since the limit there is taken as 0, resp. \(\tilde{r}\) tends to 0 as \(r \in {}^{\omega}\mathbb{R}_{>0}\) tends to \(\omega\). The functions \(f(z) = {\LARGE{\textbf{+}}}_{n=1}^{\omega }{{{z}^{n}}{{{\tilde{\omega }}}^{\tilde{n}}}}\) and \(g(z) = \tilde{\omega }z\), entire in \({}^{\omega}\dot{\mathbb{C}} \subset {}^{\omega}\mathbb{C}\), give counterexamples to Liouville’s (generalised) theorem and Picard’s little theorem because of \(|f(z)| < 1\) and \(|g(z)| \le 1\). The function \(f(\tilde{z})\) for \(z \in {}^{\omega}\dot{\mathbb{C}}^{*}\) refutes Picard’s great theorem.
Definition: For a CP \(\gamma: [a, b[ \rightarrow {}^{(\omega)}\mathbb{C}\) and \(z \in {}^{(\omega)}\mathbb{C}, \widetilde{\hat{\underline{\pi}}}{\uparrow}_{\gamma}{\widetilde{\zeta-z}{\downarrow}\zeta}\) is called winding number or index ind\(_{\gamma}(z) \in \mathbb{Z}\). The coefficients \(a_{j,-1}\) of the function \(f: A \rightarrow {}^{(\omega)}\mathbb{C}\) for \(A \subseteq {}^{(\omega)}\mathbb{C}, n \in {}^{\omega}\mathbb{N}, a_{jk}, c_j \in {}^{(\omega)}\mathbb{C}\) and\[f(z)={\LARGE{\textbf{+}}}_{j=0}^{n}{\LARGE{\textbf{+}}}_{k=-\omega}^{\omega}{a_{jk}{(z-c_j)}^k}\]as well as pairwise different \(c_j\) are called residues res\(_{c_j}f.\triangle\)
Residue theorem: For \(\gamma\) and \(f\) as above, it holds that \(\widetilde{\hat{\underline{\pi}}}{\uparrow}_{\gamma}{f(\zeta){\downarrow}\zeta}={\LARGE{\textbf{+}}}_{j=0}^{n}{{\rm ind}_\gamma(c_j)}{\rm res}_{c_j}f.\)
Proof: For all \(k \in {}^{\omega}\mathbb{Z} \setminus \{-1\}\), it holds that \({\LARGE{\textbf{+}}}_{j=0}^{n}{|{\uparrow}_{\gamma}{{a_{jk}(\zeta-c_j)}^k{\downarrow}\zeta}|}=0\) and \(\widetilde{\hat{\underline{\pi}}}{\uparrow}_{\gamma}{{a_{j,-1}}\widetilde{\zeta-c_j}{\downarrow}\zeta}={\rm ind}_\gamma(c_j){\rm res}_{c_j}f.\square\)
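The proof rests on the kernel identities: around a circle about \(c\), the power \((\zeta-c)^k\) integrates to \(\hat{\underline{\pi}}\) for \(k = -1\) and to 0 otherwise. A finite sketch on a discretised circle (the centre \(c\) and the grid size are illustrative assumptions):

```python
import cmath

# Discretised loop integrals of (zeta - c)^k around a circle centred at c.
c, N = 0.2 + 0.1j, 20_000
pts = [c + cmath.exp(2j * cmath.pi * j / N) for j in range(N + 1)]

def loop_integral(k):
    return sum((pts[j] - c)**k * (pts[j + 1] - pts[j]) for j in range(N))

I_m1 = loop_integral(-1)                         # expected near 2*pi*i
I_others = [loop_integral(k) for k in (-2, 0, 1)]  # expected near 0
```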
Definition: A point \({z}_{0} \in M \subseteq {}^{(\omega)}\mathbb{C}^{n}\) or of a sequence \(({a}_{k})\) for \(k \in {}^{(\omega)}\mathbb{N}\) is called a (proper) \(\alpha\)-accumulation point of \(M\) or of the sequence, if the ball \({}^{\alpha}\dot{\mathbb{C}}({z}_{0}) \subseteq {}^{(\omega)}\mathbb{C}^{n}\) with centre \({z}_{0}\) and infinitesimal \(\alpha\) contains infinitely many points from \(M\) or pairwise distinct members of \({a}_{k} \in {}^{(\omega)}\mathbb{C}^{n}\). Let \(\alpha\)- be omitted for \(\alpha = \tilde{\omega}.\triangle\)
Remark: Choose the pairwise distinct zeros \(c_k \in {}^{\widetilde{\omega}}\dot{\mathbb{C}} \subset \mathbb{D}\) for \(z \in {}^{\omega}\mathbb{C}\) in \(p(z) = {\LARGE{\textbf{$\times$}}}_{k=0}^{\omega}{\left( z-c_k \right)}\) in such a way that \(|f(c_k)| < \tilde{\omega}\) for \(f \in \mathcal{O}(\mathbb{D})\) on a domain \(\mathbb{D} \subseteq \mathbb{C}\) where \(f(0) = 0\). Let \(\mathbb{D}\) contain \({}^{\widetilde{\omega}}\dot{\mathbb{C}}\) completely, which a coordinate transformation always achieves provided that \(\mathbb{D}\) is sufficiently "large". The coincidence set \(\{\zeta \in \mathbb{D} : f(\zeta) = g(\zeta)\}\) of \(g(z) := f(z) + p(z) \in \mathcal{O}(\mathbb{D})\) contains an accumulation point at 0. Since \(p(z)\) can take every conventional complex number, the deviation between \(f\) and \(g\) is non-negligible.
Since \(f \ne g\), this contradicts the statement of the identity theorem like the (local) fact that all derivatives \({}^nu({z}_{0}) ={}^nv({z}_{0})\) of two functions \(u\) and \(v\) can be equal at \({z}_{0} \in \mathbb{D}\) for all \(n\), but \(u\) and \(v\) may significantly differ further away maintaining to be holomorphic, since some holomorphic function has to be developed into a TS with approximated powers. The function \(b(z) := \tilde{\nu}z\) for \(z \in {}^{\nu}\dot{\mathbb{C}} \subset {}^{\nu}\mathbb{C}\) maps the simply connected \({}^{\nu}\dot{\mathbb{C}}\) holomorphicly to \({}^{1}\dot{\mathbb{C}}\).
A missing injectivity or surjectivity requires correcting the Riemann mapping theorem. Examples of such \(f \in \mathcal{O}(\mathbb{D})\) include functions with \(f(0) = 0\) that are restricted to \({}^{\widetilde{\omega}}\dot{\mathbb{C}}\). Extending the upper limit from \(\omega\) to \(|\mathbb{N}^{*}|\) gives entire functions with an infinite number of zeros. The set of zeros is not necessarily discrete. Thus, the set of all functions \(f \in \mathcal{O}(\mathbb{D})\) may contain zero divisors. The \(f\) once again give counterexamples to Picard’s little theorem since they omit at least \(\acute{n}\) values in \(\mathbb{C}\).
Theorem (binomial series): From \(\alpha \in {}^{(\nu)}\mathbb{C}, \binom{\alpha}{n}:=\widetilde{n!}\alpha\acute{\alpha}…(\grave{\alpha}-n)\) and \(\left|\binom{\alpha}{\grave{m}}/\binom{\alpha}{m}\right|<1\) for all \(m \ge \nu\) where \(\binom{\alpha}{0}:=1\), it follows for \(z \in \mathbb{D}^\ll\) or \(z \in {}^{(\omega)}\mathbb{C}\) for \(\alpha \in {}^{(\omega)}\mathbb{N}\) the TS centred on 0 that\[{\grave{z}}^\alpha={\LARGE{\textbf{+}}}_{n=0}^{\omega}{\tbinom{\alpha}{n}z^n}.\square\]Multinomial theorem: For \(\zeta \in {}^{(\omega)}\mathbb{C}, z \in {}^{(\omega)}\mathbb{C}^{k}, k \in {}^{(\omega)}\mathbb{N}_{\ge 2}, m, n_j \in {}^{\omega}\mathbb{N}^{*}, |n| := {\LARGE{\textbf{+}}}_{j=1}^{k}{n_j}, z^n := {\LARGE{\textbf{$\times$}}}_{j=1}^{k}{{z_j}^{n_j}}\) and \(\tbinom{m}{n} := \widetilde{n_1! … {n}_k!}m!\), it holds that\[(1{\upharpoonleft}_k^Tz)^m={\LARGE{\textbf{+}}}_{|n|=m}{\tbinom{m}{n}z^n}.\]Proof: Cases \(k \in \{1, 2\}\) are clear. Induction step from \(k\) to \(\grave{k}\) where \(\tbinom{m}{n} = \tbinom{m}{n_1, …,n_{\acute{k}},p}\tbinom{p}{n_k, n_{\grave{k}}}\) and \(p=n_k+n_{\grave{k}}\):\[\left.{({1{\upharpoonleft}_{\grave{k}}^Tz})^m}\right |_{\zeta_{k}=z_k+z_{\grave{k}}}=\left.{\LARGE{\textbf{+}}}_{|n|=m}{\tbinom{m}{n}z^n}\right |_{{\eta}_k!={n_k!}{n_{\grave{k}}!}} = {\LARGE{\textbf{+}}}_{|n|=m}{\tbinom{m}{n}z^n}\]resp. from \(m\) to \(\grave{m}\)
\((1{\upharpoonleft}_{k}^T z)^{\grave{m}} =\grave{m}{\uparrow}_{0}^{z_j}\left.{(1{\upharpoonleft}_{k}^T z)}^m\right |_{z_j=\zeta}{{\downarrow}\zeta}+\left.(1{\upharpoonleft}_{k}^T z)^{\grave{m}} \right |_{z_j=0}\)
\(=\left.\grave{m}{\uparrow}_{0}^{z_j}{\LARGE{\textbf{+}}}_{|n|=m}{\tbinom{m}{n}z^{n}}\right |_{z_j=\zeta}{{\downarrow}\zeta}+\left.(1{\upharpoonleft}_{k}^T z)^{\grave{m}} \right |_{z_j=0}={\LARGE{\textbf{+}}}_{|\grave{n}|=\grave{m}}{\tbinom{\grave{m}}{\grave{n}}z^{\grave{n}}}.\square\)
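The multinomial theorem can be verified exactly with integers; the values \(z = (2, 3, 5)\), \(k = 3\) and \(m = 4\) are illustrative assumptions.

```python
from math import factorial

# Exact check of the multinomial theorem: sum over n1+n2+n3 = m of
# m!/(n1! n2! n3!) * z1^n1 z2^n2 z3^n3 equals (z1+z2+z3)^m.
z = (2, 3, 5)
m = 4
total = 0
for n1 in range(m + 1):
    for n2 in range(m + 1 - n1):
        n3 = m - n1 - n2
        coeff = factorial(m) // (factorial(n1) * factorial(n2) * factorial(n3))
        total += coeff * z[0]**n1 * z[1]**n2 * z[2]**n3
power = sum(z) ** m   # (2 + 3 + 5)^4 = 10000
```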
General Leibniz formula: Putting \({\downarrow}^n := {\downarrow}_1^{n_1}…{\downarrow}_k^{n_k}\) and \({\downarrow}_j^{n_j} := {\downarrow}^{n_j}/{\downarrow}{z_j}^{n_j}\), it follows for \(j, k, m, n \in {}^{(\omega)}\mathbb{N}\) and differentiable \(f = f_1\cdot…\cdot f_k \in {}^{(\omega)}\mathbb{C}\) from the multinomial theorem \({\downarrow}^mf = {\LARGE{\textbf{+}}}_{|n|=m}{\binom{m}{n}{\downarrow}^nf}.\square\)
Taylor’s theorem for several variables: For \(n! := {\LARGE{\textbf{$\times$}}}_{j=1}^{k}{n_j!}, a, z \in {}^{(\omega)}\mathbb{C}^{k}\) and \((z – a)^n := {\LARGE{\textbf{$\times$}}}_{j=1}^{k}{(z – a)^{n_j}}\), it follows from the multinomial theorem also analogously to the proof of the simple TS for \(n \in {}^{(\omega)}\mathbb{N}\)\[f(z) = T_{\omega}(z) := {\LARGE{\textbf{+}}}_{|n|=0}^{\omega }{\widetilde{n!}{\downarrow}^nf(a)(z – a)^n}.\square\]Conclusion: Analogously to the simple TS, the remainder is for \(\zeta \in {}^a\dot{\mathbb{C}}(z)\) and \(k \in \mathbb{N}_{\le\grave{n}}^{*}\)\[R_n(z) = (z – \zeta)^{k}/(1-k/\grave{n}){\LARGE{\textbf{+}}}_{|m|=\grave{n}}{\widetilde{m!}{\downarrow}^mf(\zeta)(z – a)^{m-k}}.\square\]Chain rule for several variables: For \(z \in A \subseteq {}^{(\omega)}\mathbb{K}^k, g: A \rightarrow B \subseteq {}^{(\omega)}\mathbb{K}^m, f: B \rightarrow C \subseteq {}^{(\omega)}\mathbb{K}^n\), it holds for \(k, m, n \in {}^{(\omega)}\mathbb{N}^*\) that:\[{}^1(f \circ g)(z) = {}^1f(g(z))\ {}^1g(z).\]Proof: Taylor’s theorem for several variables implies with bounded \(||r(z)||\) and \(||s(g(z))||\)
\(g(\overset{\rightharpoonup}{z}) = g(z) + {}^1g(z) {{\downarrow}z} + r(z)||{\downarrow}z||^2 \) and \(f(\overset{\rightharpoonup}{g}(z)) = f(g(z)) + {}^1f(g(z)) {{\downarrow}g(z)} + s(g(z))||{\downarrow}g(z)||^2.\square\)
Newton’s method: Demanding above \(f(\overset{\rightharpoonup}{z})=f(z)+{}^1f(z){\downarrow}z=0\) implies \(z_{\grave{n}} := z_n-{{}^1f(z_n)}^{-1}f(z_n)\) if \({}^1f(z_n)\) is invertible, resulting in quadratic convergence close to a zero.\(\square\)
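A minimal sketch of the iteration for the illustrative choice \(f(z) = z^2 - 2\), tracking the errors to exhibit the quadratic convergence claimed above:

```python
# Newton iteration z_{n+1} = z_n - f(z_n)/f'(z_n) for f(z) = z^2 - 2,
# starting from z_0 = 1; the errors roughly square at each step.
f  = lambda z: z * z - 2
f1 = lambda z: 2 * z
z = 1.0
errors = []
for _ in range(6):
    z = z - f(z) / f1(z)
    errors.append(abs(z - 2 ** 0.5))
```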
Finiteness criterion for series: Let \(m, n, q, r \in \mathbb{N}\). Sum \(S_r := \left| {\LARGE{\textbf{+}}}_{q=0}^{r}{s_q} \right|\) for \(s_q \in {}^{(\omega)}\mathbb{C}\) is finite, if and only if \(0 \le S_r = \left|{\LARGE{\textbf{$\pm$}}}_{m=0}^{n}{{a}_{m}}\right| \le {a}_{0}\) for a sequence \(({a}_{m})\) such that \(a_{\grave{m}} < a_m \in {}^{\nu}\mathbb{R}_{\ge 0}\) holds, since summands in sums can be arbitrarily sorted according to their signs and sizes, recombined or split into separate sums.\(\square\)
Example: Putting \(f(x) := {\LARGE{\textbf{+}}}_{n=1}^{\omega}{\tilde{n}^2{}_\epsilon(1+n^2x^2)}\) implies \({}^1f(0) =\tfrac{f(\iota) – f(0)}{\iota – 0} = {\LARGE{\textbf{+}}}_{n=1}^{\omega}{\left. \tfrac{\hat{x}}{1+n^2x^2}\right |_0} = {\LARGE{\textbf{+}}}_{n=1}^{\omega}{\tilde{\iota}\tilde{n}^2{}_\epsilon(1+n^2\iota^2)} = \iota \omega = 0\), where the series expansion \({}_\epsilon\grave{x} = {\LARGE{\textbf{$\pm$}}}_{n=1}^{\omega}{\tilde{n}x^n}\) for \(x \in\;]-1,1[\) was used differentiating term by term.
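Conventionally, the outcome depends on the order of evaluation: for finite \(h\) with \(Nh\) large, the difference quotient \((f(h) - f(0))/h\) approaches \(\pi\), while term-by-term differentiation at 0 gives 0. The values \(h = 0.01\) and \(N = 10^6\) below are illustrative assumptions of this hedged sketch.

```python
import math

# Conventional counterpart of the example above: the difference quotient of
# f(x) = sum ln(1 + n^2 x^2)/n^2 at 0 tends to pi, yet every term of the
# differentiated series vanishes at x = 0.
h, N = 0.01, 10**6
quotient = sum(math.log(1 + (n * h)**2) / n**2 for n in range(1, N + 1)) / h
termwise = 0.0   # each term 2x/(1 + n^2 x^2) is 0 at x = 0
```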
Definition: For \(j, k \in {}^{\omega}\mathbb{N}, f \in \mathcal{C}_{\pi}^{j+2}\) and Fourier coefficients \(c_k := \tilde{\hat{\pi}}{\uparrow}_{-\pi}^{\pi}{f(t)\tilde{\epsilon}^{\underline{k}t}{{\downarrow}t}}\) (see Walter: Analysis 2, loc. cit., p. 358 ff.), the series \({\LARGE{\textbf{+}}}_{k=-\omega}^{\omega}{c_k\epsilon^{\underline{k}t}}\) is called Fourier series, with possibly other period lengths than \(\hat{\pi }\) (see loc. cit., p. 364).\(\triangle\)
Theorem (series product): For \({a}_{m}, {b}_{n} \in {}^{(\omega)}\mathbb{K}\), the next equation replaces the Cauchy product (see Walter: Analysis 1, loc. cit., p. 103):\[{\LARGE{\textbf{+}}}_{m=1}^{\omega }{{{a}_{m}}}{\LARGE{\textbf{+}}}_{n=1}^{\omega }{{{b}_{n}}}={\LARGE{\textbf{+}}}_{m=1}^{\omega }{\left( {\LARGE{\textbf{+}}}_{n=1}^{m}{\left( {{a}_{n}}{{b}_{m-\acute{n}}}+{{a}_{\omega -\acute{n}}}{{b}_{\omega -m+n}} \right)}-{{a}_{m}}{{b}_{\omega -\acute{m}}} \right)}.\square\]Example: The following series product has the finite value (cf. Gelbaum, loc. cit., p. 61 f.):
\(\left({\LARGE{\textbf{$\pm$}}}_{m=1}^{\mathrm{\omega}}{{\widetilde{m}}^{\tilde{2}}}\right)^2={\LARGE{\textbf{+}}}_{m=1}^{\mathrm{\omega}}{\left(\left(\tfrac{\widetilde{m}}{\mathrm{\omega}-\acute{m}}\right)^{\tilde{2}}-{\underline{1}^{\hat{m}}}{\LARGE{\textbf{+}}}_{n=1}^{m}\left(\left(\tfrac{\tilde{n}}{m-\acute{n}}\right)^{\tilde{2}}+\left(\tfrac{\widetilde{\mathrm{\omega}-\acute{n}}}{\mathrm{\omega}-m+n}\right)^{\tilde{2}}\right)\right)}\)
\(= 0.36590\ldots = \tfrac{{\zeta\left(\tilde{2}\right)}^2}{3+8^{\tilde{2}}}.\)
Example: The signum function sgn yields the following series product\(^{19}\): \[{\LARGE{\textbf{+}}}_{m=0}^{\omega }{{2}^{{{m}^{\text{sgn}(m)}}}}{\LARGE{\textbf{+}}}_{n=0}^{\omega}{\text{sgn}(n-\gamma)} = \acute{\omega}{2}^{\grave{\omega}}\gg -2.\]Stokes’ theorem\(^{20}\): If, for sufficiently \(\alpha\)-continuous functions \(f_m: C \rightarrow {}^{\omega}\mathbb{R}\), an overline marks the term to be omitted in the alternating differential form \(\upsilon := {\LARGE{\textbf{+}}}_{m=1}^{n}{f_m\;{\downarrow}x_1\wedge…\wedge\overline{{\downarrow}x_m}\wedge…\wedge {\downarrow}x_n}\) of degree \(\acute{n}\) on a cuboid \(C =[{a}_{1}, {b}_{1}] \times…\times [{a}_{n}, {b}_{n}] \subseteq {}^{\omega}\mathbb{R}^n\), where \(\partial C:= {\LARGE{\textbf{$\mp$}}}_{m=1}^{n}{(F_{a,m} - F_{b,m})}\) has the faces \(F_{a,m} = [{a}_{1}, {b}_{1}] \times…\times \{a_m\} \times…\times [{a}_{n}, {b}_{n}]\) and \(F_{b,m} = [{a}_{1}, {b}_{1}] \times…\times \{b_m\} \times…\times [{a}_{n}, {b}_{n}]\), then \({\uparrow}_C{{\downarrow}\upsilon} = {\uparrow}_{\partial C}{\upsilon}.\)
Proof: The second fundamental theorem and Fubini's theorem (see above) give\[{\uparrow}_C{{\downarrow}\upsilon} = {\LARGE{\textbf{$\mp$}}}_{m=1}^{n}{{{\uparrow}_{a_n}^{b_n}{…{\overline{{\uparrow}_{a_m}^{b_m}}{…{\uparrow}_{a_1}^{b_1}}{(f_m(x_1, …, a_m, …, x_n) - f_m(x_1, …, b_m, …, x_n)){\downarrow}x_1}\wedge…\wedge }\overline{{\downarrow}x_m}}\wedge…\wedge }{\downarrow}x_n}\]and\[\tfrac{{{\downarrow} f_m}}{{{\downarrow}x_m}}{\downarrow}x_m\wedge {\downarrow}x_1\wedge…\wedge {\downarrow}x_{\acute{m}}\wedge {\downarrow}x_{\grave{m}}\wedge…\wedge {\downarrow}x_n = -\underline{1}^{\hat{m}}\tfrac{{{\downarrow} f_m}}{{{\downarrow} x_m}}{\downarrow}x_1\wedge…\wedge {\downarrow}x_n.\square\]Remark: Stokes’ theorem also holds for \(n\)-dimensional manifolds composed of cuboids.
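The proof's two ingredients can be illustrated numerically in the assumed case \(n = 2\), \(\upsilon = f\,{\downarrow}x_2\) on a rectangle: Fubini's theorem and the second fundamental theorem turn the volume integral of \({\downarrow}\upsilon\) into a difference over the two \(x_1\)-faces. A hedged Python sketch with the illustrative choice \(f(x,y) = x^2 y\) on \([0,1]\times[0,2]\), both sides approximating the value \(2\):

```python
# Midpoint Riemann sums over the rectangle C = [a1,b1] x [a2,b2]
# and over a single edge, for the 1-form v = f(x,y) dy.
def riemann2(g, a1, b1, a2, b2, steps=400):
    hx, hy = (b1 - a1) / steps, (b2 - a2) / steps
    return sum(g(a1 + (i + 0.5) * hx, a2 + (j + 0.5) * hy)
               for i in range(steps) for j in range(steps)) * hx * hy

def riemann1(g, a, b, steps=4000):
    h = (b - a) / steps
    return sum(g(a + (j + 0.5) * h) for j in range(steps)) * h

f = lambda x, y: x * x * y       # assumed example integrand
dfdx = lambda x, y: 2 * x * y    # its exact partial derivative
lhs = riemann2(dfdx, 0.0, 1.0, 0.0, 2.0)                   # integral of dv over C
rhs = riemann1(lambda y: f(1.0, y) - f(0.0, y), 0.0, 2.0)  # integral of v over the x1-faces
assert abs(lhs - rhs) < 1e-6
```

This is exactly the reduction used in the displayed proof, restricted to one coordinate direction.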
© 2010-2024 by Boris Haase
References
↑1 cf. Walter, Wolfgang: Analysis 2; 5., erw. Aufl.; 2002; Springer; Berlin, p. 188
↑2 Gelbaum, Bernard R.; Olmsted, John M. H.: Counterexamples in Analysis; Republ., unabr., slightly corr.; 2003; Dover Publications; Mineola, New York, p. 160
↑3 cf. ibid., p. 24
↑4 cf. Heuser, Harro: Lehrbuch der Analysis Teil 1; 17., akt. Aufl.; 2009; Vieweg + Teubner; Wiesbaden, p. 144
↑5 cf. loc. cit., p. 155
↑6 cf. loc. cit., p. 27 f.
↑7 loc. cit., p. 235 f.
↑8 see loc. cit., p. 215 f.
↑9 cf. loc. cit., p. 540-543
↑10 cf. Köhler, Günter: Analysis; 1. Aufl.; 2006; Heldermann; Lemgo, p. 519
↑11 cf. Walter, Wolfgang: Analysis 1; 3., verb. Aufl.; 1992; Springer; Berlin, p. 41
↑12 see Remmert, Reinhold: Funktionentheorie 1; 3., verb. Aufl.; 1992; Springer; Berlin, p. 324
↑13 cf. loc. cit., p. 149 ff.
↑14 as in loc. cit., p. 228 f.
↑15 see Walter: Analysis 2, loc. cit., p. 358 ff.
↑16 see loc. cit., p. 364
↑17 see Walter: Analysis 1, loc. cit., p. 103
↑18 cf. Gelbaum, loc. cit., p. 61 f.
↑19 cf. loc. cit., p. 62
↑20 cf. Köhler, loc. cit., p. 625 f.