Preliminary remarks: The following section uses the definitions established in the chapters on Set Theory and Topology, and usually takes \(m, n \in {}^{\omega}\mathbb{N}^{*}\). Integration and differentiation are studied on an arbitrary non-empty subset \(A\) of \({}^{(\omega)}\mathbb{K}^{n}\). Every element not in the image set is replaced by the neighbouring element in the target set; if several choices are possible, a single one is selected. Otherwise, the mapping is not meaningfully defined. In the following, \(||\cdot||\) denotes the Euclidean norm. A generalisation to other sets and norms is easy if the latter are equivalent.

Definition: The function \(||\cdot||: \mathbb{V} \rightarrow {}^{(\omega)}\mathbb{R}_{\ge 0}\), where \(\mathbb{V}\) is a vector space over \({}^{(\omega)}\mathbb{K}\), is called a *norm* if for all \(x, y \in \mathbb{V}\) and \(\lambda \in {}^{(\omega)}\mathbb{K}\), it holds that: \(||x|| = 0 \Rightarrow x = 0\) (*definiteness*), \(||\lambda x|| = |\lambda| \; ||x||\) (*homogeneity*), and \(||x + y|| \le ||x|| + ||y||\) (*triangle inequality*). The *dimension* of \(\mathbb{V}\) is defined as the maximal number of linearly independent vectors and is denoted by dim \(\mathbb{V}\). The norms \({||\cdot||}_{a}\) and \({||\cdot||}_{b}\) are said to be *equivalent* if there exist non-infinitesimal \(\sigma, \tau \in {}^{c}\mathbb{R}_{>0}\) such that, for all \(x \in \mathbb{V}\), it holds that:\[\sigma||x||{}_{b} \le ||x||{}_{a} \le \tau||x||{}_{b}.\]Theorem: Let \(N\) be the set of all norms on \(\mathbb{V}\). All norms in \(N\) are equivalent if and only if \({||x||}_{a}/{||x||}_{b}\) is finite but not infinitesimal for all \({||\cdot||}_{a}, {||\cdot||}_{b} \in N\) and all \(x \in \mathbb{V}^{*}\).

Proof: Set \(\sigma := \text{min }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}\) and \(\tau := \text{max }\{{||x||}_{a}/{||x||}_{b}: x \in \mathbb{V}^{*}\}.\square\)
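In finite dimension, the constants of the proof can be read off numerically as the extreme ratios of the two norms over a sample of nonzero vectors. The following minimal sketch (all names illustrative) compares the sum norm and the Euclidean norm on a finite grid in \(\mathbb{R}^{3}\); the extreme ratios match the classical bounds \(1\) and \(\sqrt{3}\):

```python
import itertools, math

def norm1(x):   # sum norm
    return sum(abs(c) for c in x)

def norm2(x):   # Euclidean norm
    return math.sqrt(sum(c * c for c in x))

# ratios ||x||_1 / ||x||_2 over a finite sample of nonzero vectors in R^3
ratios = [norm1(x) / norm2(x)
          for x in itertools.product([-2, -1, 0, 1, 2], repeat=3)
          if any(x)]
sigma, tau = min(ratios), max(ratios)
# classical bounds for these two norms: 1 <= ||x||_1 / ||x||_2 <= sqrt(3)
```

The values sigma and tau found on the sample are precisely the constants appearing in the equivalence inequality for these two norms.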

Lemma: There are infinitely many numbers in \({}^{c}\mathbb{R}\) for which the Archimedean axiom does not hold.

Proof: For all \(m \in {}^{c}\mathbb{N}\) and \(a \in {}^{c}{\mathbb{R}}_{\ge 1}\), it holds that \(\hat{c} m \le 1 \le a.\square\)

Archimedes' theorem: For \(a, d \in {\mathbb{R}}_{>0}\) with \(a > d\), there exists \(m \in {}^{c}\mathbb{N}\) such that \(d m > a\) if and only if \(d c > a\), since \(c = \max {}^{c}\mathbb{N}\) holds.\(\square\)

Theorem: An arbitrary mapping \(f: X \rightarrow X\) on an arbitrary set \(X\) is bijective if it is either injective or surjective.

Proof: The claim follows directly from the fact that all (pre-)images are pairwise distinct.\(\square\)

Remark: Note that this theorem does not apply to the successor function \(s\) in \({}^{\omega}\mathbb{N}\), since \(s: {}^{\omega}\mathbb{N} \rightarrow {}^{\omega}\mathbb{N}^{*} \cup \{\grave{\omega}\}\).

Definition: The function \({\mu}_{h}: A \rightarrow \mathbb{R}_{\ge 0}\), where \(A \subseteq {}^{(\omega)}\mathbb{C}^{n}\) is an \(m\)-dimensional set with \(m \in {}^{\omega}\mathbb{N}_{\le 2n}\) and \(h \in \mathbb{R}_{>0}\) is less than or equal to the minimal distance of the points in \(A\), given by \({\mu}_{h}(A) := |A| {h}^{m}\) with \({\mu}_{h}(\emptyset) = |\emptyset| = 0\), is called the *exact h-measure* of \(A\), and \(A\) is said to be *h-measurable*. Let the *exact standard measure* be \({\mu}_{\text{d0}}\). If it is clear that the standard measure is meant, d0 may be omitted.\(\triangle\)

Remark: This answers the measure problem positively: \({\mu}_{h}(A)\) is clearly additive and uniquely determined, i.e. if \(A\) is the union of pairwise disjoint \(h\)-homogeneous sets \({A}_{j}\) for \(j \in J \subseteq \mathbb{N}\), then\[{{\mu }_{h}}(A)=\sum\limits_{j \in J}{{{\mu }_{h}}\left( {{A}_{j}} \right)}.\]It is also strictly monotone, i.e. if \(h\)-homogeneous sets \({A}_{1}, {A}_{2} \subseteq {}^{(\omega)}\mathbb{K}^{n}\) satisfy \({A}_{1} \subset {A}_{2}\), then \({\mu}_{h}({A}_{1}) < {\mu}_{h}({A}_{2})\). If \(h\) is not the same on all considered sets \({A}_{j}\), the minimum of all \(h\) is chosen and the homogenisation proceeds as described in Set Theory.

Remark: The exact \(h\)-measure is more precise than other measures and is optimal. It simply considers the neighbourhood relations of points and its value is neither smaller nor greater than the distances of points parallel to the coordinate axes. Concepts such as \(\sigma\)-algebras and null sets are dispensable, since the only null set in this context is the empty set \(\emptyset\). Nonstandard mathematics also does without the concept of compactness in any form.

Examples: Consider the set \(A \subset [0, 1[\) of points whose least significant bit is 1 (0) in their (conventionally) real binary representation. Then \({\mu}_{\text{d0}}(A) = \hat{2}\). Real numbers represent a further refinement of the conventionally real numbers, obtained by dividing the conventionally real intervals into considerably finer sub-intervals. Since \(A\) is an infinite, conventionally uncountable union of individual points (without the neighbouring points of [0, 1[ in \(A\)) and these points are Lebesgue null sets, \(A\) is not Lebesgue measurable; it is, however, exactly measurable. Similarly, consider the subset \(Q\) of [0, 1[ \(\times\) [0, 1[ of all points with least significant bit 1 (0) in both coordinates. This set has exact measure \({\mu}_{\text{d0}}(Q) = \hat{4}\).
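This count can be replayed at any finite resolution: with \(h = 2^{-k}\) as a finite stand-in for the infinitesimal grid width d0 (an assumption for illustration, not the actual d0), the exact \(h\)-measure \(|A|h^{m}\) of both sets is computed directly:

```python
k = 12
h = 2.0 ** -k                                  # finite stand-in for the infinitesimal d0
A = [i for i in range(2 ** k) if i % 2 == 1]   # grid points i*h in [0, 1[ with last bit 1
mu_A = len(A) * h                              # exact h-measure |A| * h^1 -> 1/2
mu_Q = len(A) ** 2 * h * h                     # both coordinates with last bit 1: |Q| * h^2 -> 1/4
```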

Definition: Neighbouring points in \(A\) are described by means of the irreflexive symmetric *neighbourhood relation* \(B \subseteq {A}^{2}\). The function \(\gamma: C \rightarrow A \subseteq \mathbb{C}{}^{n}\), where \(C \subseteq \mathbb{R}\) is \(h\)-homogeneous and \(h\) is infinitesimal, is called a *path* if \(||\gamma(x) - \gamma(y)||\) is infinitesimal for all neighbouring points \(x, y \in C\) and \((\gamma(x), \gamma(y)) \in B\). The neighbourhood relations of \(B\) in \(A\) are systematically written as (predecessor, successor) with the notation \(({z}_{0}, \curvearrowright {z}_{0})\) or \((\curvearrowleft {z}_{0}, {z}_{0})\), where \(\curvearrowright\) is pronounced "post" and \(\curvearrowleft\) is pronounced "pre". This applies analogously to the neighbourhood relation \(D \subseteq C{}^{2}\).\(\triangle\)

Definition: Let \({z}_{0} \in A \subseteq \mathbb{K}^{n}\) and \(f: A \rightarrow {}^{(c)}\mathbb{K}^{m}\). In the following, proofs for predecessors will be omitted, since they are analogous to the proofs for successors. If \(||f(\curvearrowright B {z}_{0}) - f({z}_{0})|| < \alpha\) for infinitesimal \(\alpha \in {}^{(\omega)}\mathbb{R}{}_{>0}\), then \(f\) is said to be *\(\alpha B\)-successor-continuous* at \({z}_{0}\) in the direction \(\curvearrowright B {z}_{0}\). If the exact modulus of \(\alpha\) does not matter, \(\alpha\) may be omitted from the notation. If \(f\) is \(\alpha B\)-successor-continuous for all \({z}_{0}\) and \(\curvearrowright B {z}_{0}\), it is simply said to be \(\alpha B\)-continuous. Here \(\alpha\) is the *degree* of continuity. If the inequality only holds for \(\alpha = \hat{c}\), \(f\) is simply said to be (\(B\)-successor-)continuous. The property of \(\alpha B\)-predecessor-continuity is defined analogously.\(\triangle\)

Remark: In practice, choose \(\alpha\) by estimating \(f\) (for example after considering any jump discontinuities). If \(B\) is obvious or irrelevant, it may be omitted, as below when \(B = {}^{(\omega)}\mathbb{K}{}^{2n}\).

Example: The function \(f: \mathbb{R} \rightarrow \{\pm 1\}\) with \(f(x) = {(-1)}^{x/\text{d0}}\) is nowhere successor-continuous on \(\mathbb{R}\), but its absolute value is (cf. Number Theory). Here, \(x/\)d0 is an integer since \(\mathbb{R}\) is d0-homogeneous. If instead \(f(x) = 1\) for rational \(x\) and \(f(x) = -1\) otherwise, then \(f\) is partially d0-successor-continuous on the non-rational numbers, unlike under the conventional notion of continuity.

Definition: For \(f: A \rightarrow {}^{(\omega)}\mathbb{K}{}^{m}\),\[{d}_{\curvearrowright B z}f(z) := f(\curvearrowright B z) - f(z)\]is called *\(B\)-successor-differential* of \(f\) in the direction \(\curvearrowright B z\) for \(z \in A\). If dim \(A = n\), then \({d}_{\curvearrowright B z}f(z)\) can be specified by \(d((\curvearrowright B){z}_{1}, ... , (\curvearrowright B){z}_{n})f(z\)). If \(f\) is the identity, i.e. \(f(z) = z\), then \({d}_{\curvearrowright B z}Bz\) can be written instead of \({d}_{\curvearrowright B z}f(z)\). If \(A\) or \(\curvearrowright B z\) is obvious or irrelevant, it can be omitted. The conventional real case can be defined analogously to the above.\(\triangle\)

Remark: If the modulus of the \(B\)-successor-differential of \(f\) in the direction \(\curvearrowright B z\) at \(z \in A\) is smaller than \(\alpha\) and infinitesimal, then \(f\) is also \(\alpha B\)-successor-continuous at that point.

Definition: An (infinitely) real-valued function with arguments \(\in {}^{(\omega)}\mathbb{K}{}^{n}\) is said to be *convex (concave)* if the line segment between any two points on the graph of the function lies above (below) or on the graph. It is said to be *strictly* convex (concave) if "or on" can be omitted.\(\triangle\)

Definition: The \(m\) arithmetic means of all \({f}_{k}(\curvearrowright B z)\) of \(f(z)\) give the \(m\) *averaged normed tangential normal vectors* of \(m\) (uniquely determined) hyperplanes, defining the \(mn\) continuous partial derivatives of the Jacobian matrix of \(f\), which is not necessarily continuous. The hyperplanes are taken to pass through \({f}_{k}(\curvearrowright B z)\) and \(f(z)\) translated towards 0. The moduli of their coefficients are minimised by a very simple linear programme (cf. Linear Programming).\(\triangle\)

Theorem: Every function \(f: A \rightarrow {}^{(\omega)}\mathbb{R}\) that is convex resp. concave on \(A \subseteq {}^{(\omega)}\mathbb{K}{}^{n}\) is \(\alpha B\)-successor-continuous and \(B\)-successor-differentiable.\(\square\)

Example of a Peano curve (from [739], p. 188): "Consider the even, periodic function \(g: \mathbb{R} \rightarrow \mathbb{R}\) with period 2 and image [0, 1] defined by\[{g}(t)=\left\{ \begin{array}{cl} 0 & \text{for }0\le t<\tfrac{1}{3} \\ 3t-1 & \text{for }\tfrac{1}{3}\le t<\tfrac{2}{3} \\ 1 & \text{for }\tfrac{2}{3}\le t\le 1. \\ \end{array} \right.\,\] Clearly, g is fully specified by this definition, and continuous. Now let the function \(\phi: I = [0, 1] \rightarrow \mathbb{R}^{2}\) be defined by\[\phi(t) = \left( {\sum\limits_{k = 0}^{\infty} {\frac{{g({4^{2k}}t)}}{{{2^{k + 1}}}},} \sum\limits_{k = 0}^{\infty} {\frac{{g({4^{2k + 1}}t)}}{{{2^{k + 1}}}}} } \right)."\]The function \(\phi\) is at least continuous since the sums are ultimately locally linear functions in \(t\), when \(\infty\) is replaced by \(\omega\). It would however be an error to believe that [0, 1] can be bijectively mapped onto \({[0, 1]}^{2}\) in this way: the powers of four in \(g\), and the values 0 and 1 taken by \(g\) in two sub-intervals thin out \({[0, 1]}^{2}\) so much that a bijection is clearly impossible. Restricting the proof to rational points only is simply insufficient.
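The quoted functions are easy to tabulate. The sketch below implements \(g\) and a truncated partial sum of \(\phi\); the truncation depth is an arbitrary finite choice, since floating point cannot resolve the fractional part of \(4^{2k}t\) for large \(k\):

```python
def g(t):
    """The quoted g: even, 2-periodic, piecewise linear on [0, 1]."""
    t = abs(t) % 2.0
    if t > 1.0:
        t = 2.0 - t
    if t < 1.0 / 3.0:
        return 0.0
    if t < 2.0 / 3.0:
        return 3.0 * t - 1.0
    return 1.0

def phi(t, terms=8):
    """Truncated partial sums of the two coordinate series of the quoted phi."""
    x = sum(g(4 ** (2 * k) * t) / 2 ** (k + 1) for k in range(terms))
    y = sum(g(4 ** (2 * k + 1) * t) / 2 ** (k + 1) for k in range(terms))
    return x, y
```

Both coordinates stay in [0, 1] because the geometric weights sum to less than 1, consistent with the image claimed in the quotation.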

Definition: A function \(f: A \subseteq {}^{\omega}\mathbb{R} \rightarrow {}^{\omega}\mathbb{R}\) is said to have a *right (left) jump discontinuity* of height \(s := |f(\curvearrowright x) - f(x)|\) *upwards (downwards)*, or vice versa, at the point \(x\) \((\curvearrowright x)\) if \(s > \hat{\omega}\).\(\triangle\)

Theorem: A monotone function \(f: [a, b] \rightarrow {}^{\omega}\mathbb{R}\) has at most \(2\omega^2 - 1\) jump discontinuities.

Proof: Between \(-\omega\) and \(\omega\), at most \(2\omega^2\) jump discontinuities with a jump of \(\hat{\omega}\) are possible. If the function does not decrease at non-discontinuities, like a step function, the claim follows.\(\square\)

Remark: This theorem corrects Froda's theorem and makes it more precise. If each set is preceded by the superscript \({}^{c}\), the statement for conventional sets is obtained.

Definition: The *partial derivative* in the direction \(\curvearrowright B {z}_{k}\) of \(F: A \rightarrow {}^{(\omega)}\mathbb{K}\) at \(z = ({z}_{1}, ..., {z}_{n}) \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\) with \(k \in [1, n]\mathbb{N}\) is defined as\[\frac{\partial B\,F(z)}{\partial B\,{{z}_{k}}}:=\frac{F({{z}_{1}},\,...,\,\curvearrowright B\,{{z}_{k}},\,...,\,{{z}_{n}})-F(z)}{\curvearrowright B\,{{z}_{k}}-{{z}_{k}}}.\]With this notation, if the function \(f\) satisfies \(f = ({f}_{1}, ..., {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\) with \(z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\)\[f(z)=\left( \frac{F(\curvearrowright B{{z}_{1}},{{z}_{2}},...,{{z}_{n}})-F({{z}_{1}},...,{{z}_{n}})}{(\curvearrowright B{{z}_{1}}-{{z}_{1}})},...,\frac{F({{z}_{1}},...,{{z}_{n-1}},\curvearrowright B{{z}_{n}})-F({{z}_{1}},...,{{z}_{n}})}{(\curvearrowright B{{z}_{n}}-{{z}_{n}})} \right)=\left( \frac{\partial B\,F(z)}{\partial B{{z}_{1}}},\,\,...\,\,,\,\,\frac{\partial B\,F(z)}{\partial B{{z}_{n}}} \right)=\text{grad }{{B}_{\curvearrowright Bz}}\,F(z)\,=\,\nabla {{B}_{\curvearrowright Bz}}\,F(z),\]then \(f(z)\) is said to be the *exact \(B\)-successor-derivative* \({F'}_{\curvearrowright B z} B(z)\) or the *exact \(B\)-successor-gradient* \(\text{grad }_{\curvearrowright B z} F(z)\) of the function \(F\) at \(z\), which is said to be *exactly \(B\)-differentiable* at \(z\) in the direction \(\curvearrowright B z\), provided that each quotient exists in \({}^{(\omega)}\mathbb{K}\). \(\nabla\) is the *Nabla operator*. If this definition is satisfied for every \(z \in A\), then \(F\) is said to be an *exactly \(B\)-differentiable \(B\)-antiderivative* of \(f\). On the (conventionally) (infinite) reals, the left and right \(B\)-antiderivatives \({F}_{l}(x)\) and \({F}_{r}(x)\) at \(x \in {}^{(\omega)}\mathbb{R}\) distinguish between the cases of the corresponding \(B\)-derivatives.

If \(A\) or \(\curvearrowright B z\) is obvious from context or irrelevant, it can be omitted. The conventional case is obtained analogously to the above. In the case \(n = 1\), \({F'}_{r}B(x)\) is called the *right exact \(B\)-derivative* for \(\curvearrowright B x > x \in {}^{(\omega)}\mathbb{R}\), and \({F'}_{l}B(x)\) the *left exact \(B\)-derivative* for \(\curvearrowright B x < x\). If both directions yield the same value, \(F'B(z)\) is called the exact derivative. When \(A ={}^{c}\mathbb{C}\) and \(n = 1\), this reduces to the conventional case where \(F\) is *holomorphic*.\(\triangle\)
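With a finite step \(h\) standing in for \(\curvearrowright B z_k - z_k\), the exact \(B\)-successor-gradient is literally the vector of forward difference quotients; a minimal sketch with an illustrative scalar field (all names are assumptions for illustration):

```python
def grad_forward(F, z, h):
    """Vector of forward difference quotients: the k-th entry is
    (F(..., z_k + h, ...) - F(z)) / h, with z_k + h playing the successor."""
    Fz = F(z)
    out = []
    for k in range(len(z)):
        zk = list(z)
        zk[k] += h
        out.append((F(zk) - Fz) / h)
    return out

F = lambda z: z[0] ** 2 + 3.0 * z[0] * z[1]   # illustrative scalar field
gx, gy = grad_forward(F, [1.0, 2.0], 1e-6)
# the analytic gradient at (1, 2) is (2x + 3y, 3x) = (8, 3)
```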

Remark: Clearly, the \(B\)-antiderivatives of a given function only differ by an additive constant. The \(B\)-antiderivatives of discontinuous functions can typically only be derived by adding and appropriately recombining easier \(\alpha B\)-continuous functions (e.g. by reversing the rules of differentiation).

Remark: The rules stated below can be extended to (infinite) complex sets and left exact derivatives. The sets and neighbourhood relations will be subsequently omitted. Let \(f\) and \(g\) be right exactly differentiable (infinite) real functions at \(x \in A \subseteq {}^{(\omega)}\mathbb{R}\).

Chain rule: For \(x \in A \subseteq {}^{(\omega)}\mathbb{R}, B \subseteq {A}^{2}, f: A \rightarrow C \subseteq {}^{(\omega)}\mathbb{R}, D \subseteq {C}^{2}, g: C \rightarrow {}^{(\omega)}\mathbb{R}\), choosing \(f(\curvearrowright B x) = \curvearrowright D f(x)\), it holds that:\[{g'}_{r}B(f(x)) = {g'}_{r}D(f(x)) {f'}_{r}B(x).\]Proof:\[{{{g}'}_{r}}B(f(x))=\frac{g(f(\curvearrowright Bx))-g(f(x))}{f(\curvearrowright Bx)-f(x)}\frac{f(\curvearrowright Bx)-f(x)}{\curvearrowright Bx-x}=\frac{g(\curvearrowright Df(x))-g(f(x))}{\curvearrowright Df(x)-f(x)}{{{f}'}_{r}}B(x)={{{g}'}_{r}}D(f(x)){{{f}'}_{r}}B(x).\square\]Product rule: \((fg){'}_{r}(x) = {f'}_{r}(x) g(x) + f(\curvearrowright x) {g'}_{r}(x)= {f'}_{r}(x) g(\curvearrowright x) + f(x) {g'}_{r}(x).\)

Proof: Add and subtract \(f(\curvearrowright x) g(x)\) resp. \(f(x) g(\curvearrowright x)\) in the numerator.\(\square\)

Quotient rule: Suppose that the denominators of the following quotients are non-zero. Then:\[\left( \frac{f}{g} \right)_{r}^{\prime }(x)=\frac{{{{{f}'}}_{r}}(x)\,g(x)-f(x)\,{{{{g}'}}_{r}}(x)}{g(x)\,g(\curvearrowright x)}=\frac{{{{{f}'}}_{r}}(x)\,g(\curvearrowright x)-f(\curvearrowright x)\,{{{{g}'}}_{r}}(x)}{g(x)\,g(\curvearrowright x)}.\]Proof: Add and subtract \(f(x) g(x)\) resp. \(f(\curvearrowright x) g(\curvearrowright x)\) in the numerator.\(\square\)
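Unlike their conventional counterparts, the product and quotient rules above are exact identities for any successor step, not limit statements. They can be checked verbatim with a finite step \(h\) in place of the infinitesimal one (the functions are illustrative choices):

```python
h = 0.1                                   # finite successor step, in place of an infinitesimal
x = 1.3
f = lambda t: t ** 3 - t                  # illustrative functions
g = lambda t: 2.0 * t + 5.0
d = lambda u, t: (u(t + h) - u(t)) / h    # right exact derivative

# product rule: both stated forms agree exactly with the direct difference quotient
lhs_p  = d(lambda t: f(t) * g(t), x)
rhs_p1 = d(f, x) * g(x) + f(x + h) * d(g, x)
rhs_p2 = d(f, x) * g(x + h) + f(x) * d(g, x)

# quotient rule: note the denominator g(x) * g(succ x), exactly as in the text
lhs_q = d(lambda t: f(t) / g(t), x)
rhs_q = (d(f, x) * g(x) - f(x) * d(g, x)) / (g(x) * g(x + h))
```

The agreement is exact up to floating-point rounding, for every step width, which is the point of the exact calculus.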

Remark: For the product and quotient rule to be as precise as the conventional one, the arguments and function values must belong to a smaller level of infinity than \(1/\)d0, and \(f\) and \(g\) must be sufficiently (\(\alpha\)-) continuous at \(x \in A\) (i.e. \(\alpha\) must be sufficiently small) to allow \(\curvearrowright x\) to be replaced by \(x\). An analogous principle holds for infinitesimal arguments.

Remark: The right exact derivative of the inverse function\[{f}^{-1}{'}_{r}(y) = 1/{f'}_{r}(x)\]can be derived from \(y = f(x)\) and the identity \(x = {f}^{-1}(f(x))\) using the chain rule with an equal level of precision. L'Hôpital's rule also makes sense for (\(\alpha\)-) continuous functions \(f\) and \(g\) such that \(f(v) = g(v) = 0\) with \(v \in A\) and \(g(\curvearrowright v) \ne 0\), and may be stated as:\[\frac{f(\curvearrowright v)}{g(\curvearrowright v)}=\frac{f(\curvearrowright v)-f(v)}{g(\curvearrowright v)-g(v)}=\frac{{{{{f}'}}_{r}}(v)}{{{{{g}'}}_{r}}(v)}.\]

Remark: Differentiability is thus easy to establish. In the (conventional) (infinite) real case, set\[{{{F}'}_{b}}B(v)\,:=\,\frac{F(\curvearrowright B\,v)-F(\curvearrowleft B\,v)}{\curvearrowright B\,v-\curvearrowleft B\,v}\] wherever this quotient is defined. This is especially useful when \(\curvearrowright B v - v = v - \curvearrowleft B v\), and the combined derivatives both have the same sign. This definition has the advantage of allowing us to view \({F'}_{b} \; B(v)\) as the "tangent slope" at the point \(v\), especially when \(F\) is \(\alpha B\)-continuous at \(v\). It also results in simpler rules of differentiation, in particular since a derivative value of 0 is most suitable for cases with opposite signs (see below). In other cases, simply calculate the arithmetic mean of both exact derivatives. This can be extended to the (conventional) complex numbers analogously.

Definition: Given \(z \in A \subseteq {}^{(\omega)}\mathbb{K}^{n}\), \[\int\limits_{z\in A}{f(z)dBz:=\sum\limits_{z\in A}{f(z)(\curvearrowright B\,z-z)}}\]is called the *exact \(B\)-integral* of the *vector field* \(f = ({f}_{1}, ..., {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\) on \(A\), and \(f(z)\) is said to be *\(B\)-integrable*. If this requires removing at least one point from \(A\), then the exact \(B\)-integral is called *improper*.

For \(\gamma: [a, b[C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}^{n}, C \subseteq \mathbb{R}\), and \(f = ({f}_{1}, ..., {f}_{n}): A \rightarrow {}^{(\omega)}\mathbb{K}^{n}\)\[\int\limits_{\gamma }{f(\zeta )dB\zeta =}\int\limits_{t\in [a,b[C}{f(\gamma (t)){{{{\gamma }'}}_{\curvearrowright }}D(t)dDt}\]where \(dDt > 0, \curvearrowright D t \in ]a, b]C\), choosing \(\curvearrowright B \gamma(t) = \gamma(\curvearrowright D t)\), since \(\zeta = \gamma(t)\) and \(dB\zeta = \gamma(\curvearrowright D t) - \gamma(t) = {\gamma'}_{\curvearrowright }D(t) dDt\) (i.e. in particular for \(C = \mathbb{R}, B\) maximal in \(\mathbb{C}^{2}\), and \(D\) maximal in \(\mathbb{R}^{2})\), is called the *exact \(B\)-line integral* of the vector field \(f\) along the path \(\gamma\). Improper exact \(B\)-line integrals are defined analogously to exact \(B\)-integrals, except that only interval end points may be removed from \([a, b[C\).\(\triangle\)
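Since the exact \(B\)-line integral is a finite sum over grid points, it can be evaluated literally. A sketch over a discretised closed unit circle (the grid size is an arbitrary finite choice standing in for an infinitesimal spacing) reproduces \(\oint dz/z \approx 2\pi i\):

```python
import cmath

def line_integral(f, gamma, ts):
    """Exact B-line integral as a literal sum: f(gamma(t)) * (gamma(succ t) - gamma(t))."""
    total = 0.0 + 0.0j
    for t0, t1 in zip(ts, ts[1:]):
        total += f(gamma(t0)) * (gamma(t1) - gamma(t0))
    return total

# closed path: the unit circle on a finite grid (N is an arbitrary finite choice)
N = 20000
ts = [2.0 * cmath.pi * k / N for k in range(N + 1)]
gamma = lambda t: cmath.exp(1j * t)
val = line_integral(lambda z: 1.0 / z, gamma, ts)   # close to 2*pi*i
```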

Remark: The value of the exact line integral on \({}^{(c)}\mathbb{K}\) is usually consistent with the conventional line integral; however, \(f\) does not need to be continuous, and the proper \(B\)-line integral always exists. It is easy to see that the exact \(B\)-line integral is linear and monotone in the (conventional) (infinite) real case. The art of integration lies in correctly combining the summands of a sum.

Definition: For all \(x \in V\) of an \(h\)-homogeneous \(n\)-volume \(V \subseteq [{a}_{1}, {b}_{1}] \times...\times [{a}_{n}, {b}_{n}] \subseteq {}^{(\omega)}\mathbb{R}^{n}\) with \(B = {B}_{1}\times...\times{B}_{n}, {B}_{k} \subseteq {[{a}_{k}, {b}_{k}]}^{2}\) and \(|{dB}_{k}{x}_{k}| = h\) for all \(k \in [1, n]\mathbb{N}\)\[\int\limits_{x\in V}{f(x){dBx}}:=\int\limits_{x\in V}{f(x)dB({{x}_{1}},\,...,{{x}_{n}})}:=\int\limits_{{{a}_{n}}}^{{{b}_{n}}}{...\int\limits_{{{a}_{1}}}^{{{b}_{1}}}{f(x)d{{B}_{1}}{{x}_{1}}\,...\,d{{B}_{n}}{{x}_{n}}}}\]is called the *exact \(B\)-volume integral* of the *\(B\)-volume integrable* function \(f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}\) with \(f(x) := 0\) for all \(x \in {}^{(\omega)}\mathbb{R}^{n} \setminus V\). Improper exact \(B\)-volume integrals are defined analogously to exact \(B\)-integrals.\(\triangle\)

Remark: Because \(\mathbb{C}\) and \(\mathbb{R}^{2}\) are isomorphic, an analogous statement holds in the complex case, and\[\int\limits_{x\in V}{dBx={{\mu }_{h}}(V)}.\]Example: Using the exact \(B\)-volume integral in contrast to the Lebesgue integral,\[||f|{{|}_{p}}:={{\left( \int\limits_{x\in V}{||f(x)|{{|}^{p}}dBx} \right)}^{\hat{p}}}\]satisfies, for arbitrary \(f: {}^{(\omega)}\mathbb{R}^{n} \rightarrow {}^{(\omega)}\mathbb{R}^{m}\) and \(p \in [1, \omega]\), all the properties of a norm, including definiteness.

Example: Let \([a, b[h{}^{\omega}\mathbb{Z}\) be a non-empty \(h\)-homogeneous subset of \([a, b[{}^{\omega}\mathbb{R}\), and write \(B \subseteq [a, b[h{}^{\omega}\mathbb{Z} \times ]a, b]h{}^{\omega}\mathbb{Z}\). Now let \({T}_{r}\) be a right \(B\)-antiderivative of a not necessarily convergent Taylor series \(t\) on \([a, b[h{}^{\omega}\mathbb{Z}\) and define \(f(x) := t(x) + {\varepsilon(-1)}^{x/h}\) for conventionally real \(x\) and \(\varepsilon \ge \hat{c}\). For \(h = \hat{c}\), \(f\) is nowhere continuous, and thus conventionally nowhere differentiable or integrable on \([a, b[h{}^{\omega}\mathbb{Z}\), but for all \(h\) the following exact results hold:\[ f_{r}^{\prime }B(x)=t_{r}^{\prime }B(x)-2\widehat{dBx}\varepsilon {{(-1)}^{x/h}}\]and\[\int\limits_{x\in [a,b[h{}^{\omega }\mathbb{Z}}{f(x)dBx={{T}_{r}}(b)-{{T}_{r}}(a)+\,}\hat{2}\varepsilon \left( {{(-1)}^{a/h}}-{{(-1)}^{b/h}} \right).\]Example: The conventionally non-measurable middle-thirds Cantor set \({C}_{\hat{3}}\) has measure \({\mu}_{\text{d0}}({C}_{\hat{3}}) = {\delta}^{\omega}\) for \(\delta := \frac{2}{3}\). Consider the function \(c: [0, 1] \rightarrow \{0, {\delta}^{-\omega}\}\) defined by \(c(x) = {\delta}^{-\omega}\) for \(x \in {C}_{\hat{3}}\) and \(c(x) = 0\) for \(x \in [0, 1] \setminus {C}_{\hat{3}}\). Then\[\int\limits_{x \in {{C}_{\hat{3}}}}{c(x)dx=\sum\limits_{x=0}^{1}{c(x)dx}}={{\delta}^{-\omega}}{{\mu }_{\text{d0}}}\left( {{C}_{\hat{3}}} \right)=1.\]Definition: A *sequence* \(({a}_{k})\) with *members* \({a}_{k}\) is a mapping from \({}^{(\omega)}\mathbb{Z}\) to \({}^{(\omega)}\mathbb{C}^{m}: k \mapsto {a}_{k}\). A *series* is a sequence \(({s}_{k})\) with \(m \in {}^{(\omega)}\mathbb{Z}\) and *partial sums*\[{{s}_{k}}=\sum\limits_{j=m}^{k}{{{a}_{j}}}.\]

Remark: Sums may be arbitrarily rearranged according to the associative, commutative, and distributive laws if care is taken to calculate them correctly (using Landau symbols).

Fubini's theorem: For \(X, Y \subseteq {}^{(\omega)}\mathbb{K}\), \(f: X\times Y \rightarrow {}^{(\omega)}\mathbb{K}\) satisfies\[\int\limits_{Y}{\int\limits_{X}{f(x,\,y)dBx\,}dBy}=\int\limits_{X\times Y}{f(x,\,y)dB(x,\,y)}=\int\limits_{X}{\int\limits_{Y}{f(x,\,y)dBy\,}dBx}.\]Proof: Reorder the sums corresponding to these integrals.\(\square\)
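Because the integrals here are finite sums, Fubini's theorem amounts to interchanging two summation orders. A sketch on a random grid (sizes, spacings, and the seed are arbitrary choices for illustration):

```python
import random

random.seed(0)
nx, ny = 40, 30
hx, hy = 0.1, 0.2                     # grid spacings dBx, dBy
vals = [[random.random() for _ in range(ny)] for _ in range(nx)]

# integrate over x first, then y - and in the opposite order
x_first = sum(sum(vals[i][j] * hx for i in range(nx)) * hy for j in range(ny))
y_first = sum(sum(vals[i][j] * hy for j in range(ny)) * hx for i in range(nx))
```

The two orders agree up to floating-point rounding, since both enumerate the same finitely many summands.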

Example: Since\[\int\limits_{[a,\,b[\times [r,\,s[}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{{d}^{2}}(x,\,y)}=\int\limits_{a}^{b}{\left. \frac{ydx}{{{x}^{2}}+{{y}^{2}}} \right|_{r}^{s}}=-\int\limits_{r}^{s}{\left. \frac{xdy}{{{x}^{2}}+{{y}^{2}}} \right|_{a}^{b}}=\arctan \frac{s}{b}-\arctan \frac{r}{b}+\arctan \frac{s}{a}-\arctan \frac{r}{a}\]by the principle of latest substitution (see below), the (improper) integral\[I(a,b):=\int\limits_{[a,\,b{{[}^{2}}}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}{{d}^{2}}(x,\,y)}=\arctan \frac{b}{b}-\arctan \frac{a}{b}+\arctan \frac{b}{a}-\arctan \frac{a}{a}= \iota - \iota = 0\]is obtained and not\[I(0,1)=\int\limits_{0}^{1}{\int\limits_{0}^{1}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}dy\,dx}}=\int\limits_{0}^{1}{\frac{dx}{1+{{x}^{2}}}}=\frac{\iota}{2}\ne -\frac{\iota}{2}=-\int\limits_{0}^{1}{\frac{dy}{1+{{y}^{2}}}}=\int\limits_{0}^{1}{\int\limits_{0}^{1}{\frac{\left( {{x}^{2}}-{{y}^{2}} \right)}{{{\left( {{x}^{2}}+{{y}^{2}} \right)}^{2}}}dx\,dy}}=I(0,1).\]Definition: A sequence \(({a}_{k})\) with \(k \in {}^{(\omega)}\mathbb{N}^{*}, {a}_{k} \in {}^{(\omega)}\mathbb{C}\) and \(\alpha \in ]0, \hat{c}]\) is said to be *\(\alpha\)-convergent* to \(a \in {}^{(\omega)}\mathbb{C}\) if there exists \(m \in {}^{(\omega)}\mathbb{N}^{*}\) satisfying \(|{a}_{k} - a| < \alpha\) for all \({a}_{k}\) with \(k \ge m\) such that the difference max \(k - m\) is not too large. The set \(\alpha\)-\(A\) of all such \(a\) is called *set of \(\alpha\)-limit values* of \(({a}_{k})\). An appropriately and uniquely determined representative of this set (e.g. the final value or mean value) is called the *\(\alpha\)-limit value* \(\alpha\)-\(a\). In the special case \(a = 0\), the sequence is called a *zero sequence*. If the inequality only holds for \(\alpha = \hat{c}\), the \(\alpha\)- is omitted from the notation.

Remark: The choice of \(k\) will usually be maximal and that of \(\alpha\) minimal. Conventional limit values are often only precise to less than \(\mathcal{O}(\hat{\omega})\) and in general are too imprecise, since they are often e.g. (arbitrarily) algebraic (of a certain degree) or transcendental. The conventional formulation of the definition of conventional convergence, which always requires infinitely many or almost all members of the sequence to have an arbitrarily small distance from the limit value and only allows finitely many to have a larger distance, needs to be extended, since otherwise only the largest index of each sequence is taken into account and considered to be relevant (cf. [813], p. 144). Only then is monotone convergence valid (cf. [813], p. 155).

Remark: The statement that each positive number may be represented by a determined, unique, infinite decimal fraction is baseless because of the fundamental theorem of set theory (cf. p. 27 f.). Furthermore, any proof claiming that, for \(\varepsilon \in {}^{(\omega)}\mathbb{R}_{>0}\) - in particular whenever the phrase "for all conventionally real \(\varepsilon > 0\)" is used - there exists a real number \(\varepsilon\hat{r}\) with real \(r \in {}^{(\omega)}\mathbb{R}_{>1}\) is false because of \(\varepsilon := \; \curvearrowright 0\), or else an infinite regression occurs. Therefore, in the \(\varepsilon\delta\)-definition of the limit value (it is questionable whether \(\delta\) exists, p. 235 f.) and in the \(\varepsilon\delta\)-definition of continuity (see p. 215), \(\varepsilon\) must be restricted to specific multiples of \(\curvearrowright 0\): consider for example the real function that doubles every real value but is not even uniformly continuous.

Remark: Uniform continuity need not be considered separately, since in general \(\delta := \; \curvearrowright 0\) and \(\varepsilon\) is chosen accordingly larger. If two function values do not satisfy the conditions, then the function is not continuous at that point. Thus, continuity is equivalent to uniform continuity, by choosing the largest \(\varepsilon\) among all admissible infinitesimal values. It is also easy to show that continuity is equivalent to Hölder continuity if infinite real constants are allowed. The same is true for uniform convergence, since the maximum over the indices belonging to each argument may simply be chosen as the index that satisfies everything, and \(\acute{\omega}\) is sufficient in every case. If this is not true for a given argument, then pointwise convergence also fails. Thus, uniform convergence is equivalent to pointwise convergence, by choosing the largest of all admissible infinitesimal values.

Intermediate value theorem: Let \(f: [a, b] \rightarrow {}^{(\omega)}\mathbb{R}\) be \(\alpha\)-continuous on \([a, b]\). Then \(f(x)\) takes every value between min \(f(x)\) and max \(f(x)\) to a precision of \(< \alpha\) as \(x\) ranges over \([a, b]\). If \(f\) is continuous on \({}^{\omega}\mathbb{R}\), then it takes every value of \({}^{c}\mathbb{R}\) between min \(f(x)\) and max \(f(x)\).

Proof: Between min \(f(x)\) and max \(f(x)\), there is a gapless chain of overlapping \(\alpha\)-neighbourhoods centred around each \(f(x)\), by \(\alpha\)-continuity of \(f\). The second part of the claim follows from the fact that a deviation \(|f(\curvearrowright x) - f(x)| < \hat{c}\) or \(|f(x) - f(\curvearrowleft x)| < \hat{c}\) in \({}^{c}\mathbb{R}\) is smaller than the conventionally maximal admissible resolution.\(\square\)

Definition: The derivative of a function \(f: A \rightarrow {}^{(\omega)}\mathbb{R}\) where \(A \subseteq {}^{(\omega)}\mathbb{R}\), is defined to be zero if and only if 0 lies in the interval defined by the boundaries of the left and right exact derivatives.\(\triangle\)

Example: The (2d0)-continuous function \(f: {}^{(\omega)}\mathbb{R} \rightarrow \{0, \text{d0}\}\) defined by \(f(x):=\hat{2}\text{d0}\left( {(-1)}^{x/\text{d0}}+1 \right)\) consists of only the local minima 0 and the local maxima d0, and only has the left and right exact derivatives \(\pm 1\).

Example: For \(s \in {}^{(\omega)}\mathbb{C}\) where Re\((s) \le 1\), \(\zeta(s) = \sum\limits_{n=1}^{\omega}{n^{-s}}\) definitely has no analytic continuation (cf. [949], p. 4) and no zeros. This disproves the Riemann hypothesis:\[\sum_{n=1}^{\mathrm{\omega}}\frac{{(-1)}^n}{n^s}=2^{-\acute{s}}\sum_{n=1}^{\mathrm{\omega/2}}n^{-s}-\sum_{n=1}^{\mathrm{\omega}}n^{-s}\neq\left(2^{-\acute{s}}-1\right)\sum_{n=1}^{\mathrm{\omega(/2)}}n^{-s}\]First fundamental theorem of exact differential and integral calculus for line integrals: The function\[F(z)=\int\limits_{\gamma }{f(\zeta )dB\zeta }\]where \(\gamma: [d, x[C \rightarrow A \subseteq {}^{(\omega)}\mathbb{K}, C \subseteq \mathbb{R}, f: A \rightarrow {}^{(\omega)}\mathbb{K}, d \in [a, b[C\), and choosing \(\curvearrowright B \gamma(x) = \gamma(\curvearrowright D x)\), is exactly \(B\)-differentiable, and for all \(x \in [a, b[C\) and \(z = \gamma(x)\)\[{F'}_{\curvearrowright} B(z) = f(z).\]Proof:\[dB(F(z))=\int\limits_{t\in [d,x]C}{f(\gamma (t)){{{{\gamma }'}}_{\curvearrowright }}D(t)dDt}-\int\limits_{t\in [d,x[C}{f(\gamma (t)){{{{\gamma }'}}_{\curvearrowright }}D(t)dDt}=\int\limits_{x}{f(\gamma (t))\frac{\gamma (\curvearrowright Dt)-\gamma (t)}{\curvearrowright Dt-t}dDt}=f(\gamma (x)){{{\gamma }'}_{\curvearrowright }}D(x)dDx=\,f(\gamma (x))(\curvearrowright B\gamma (x)-\gamma (x))=f(z)dBz.\square\]Second fundamental theorem of exact differential and integral calculus for line integrals: Under the conditions above, it holds for \(\gamma: [a, b[C \rightarrow {}^{(\omega)}\mathbb{K}\) that\[ F(\gamma (b))-F(\gamma (a))=\int\limits_{\gamma }{{{{{F}'}}_{\curvearrowright }}B(\zeta )dB\zeta }.\]Proof:\[F(\gamma (b))-F(\gamma (a))=\sum\limits_{t\in [a,b[C}{F(\curvearrowright B\,\gamma (t))}-F(\gamma (t))=\sum\limits_{t\in [a,b[C}{{{{{F}'}}_{\curvearrowright }}B(\gamma (t))(\curvearrowright B\,\gamma (t)-\gamma (t))}=\int\limits_{t\in [a,b[C}{{{{{F}'}}_{\curvearrowright }}B(\gamma (t)){{{{\gamma }'}}_{\curvearrowright }}D(t)dDt}=\int\limits_{\gamma }{{{{{F}'}}_{\curvearrowright }}B(\zeta )dB\zeta }.\square\]Corollary: If \(f\) has an antiderivative \(F\) on a closed path \(\gamma\), then under the conditions above\[\oint\limits_{\gamma }{f(\zeta )dB\zeta :=}\int\limits_{\gamma }{f(\zeta )dB\zeta }=0.\square\]Remark: The conventionally real case of both fundamental theorems is established analogously. Given \(u, v \in [a, b[C, u \ne v\) and \(\gamma(u) = \gamma(v)\), it may be the case that \(\curvearrowright B \gamma(u) \ne \; \curvearrowright B \gamma(v)\).
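The proof of the second fundamental theorem is a telescoping sum, which can be replayed with a finite grid width \(h\) in place of d0 (a stand-in, chosen here for illustration). Up to rounding, the sum of \({F'}_{r}(x)\,dBx\) over the grid returns \(F(b) - F(a)\) exactly, with no limit process involved:

```python
h = 0.01                                              # finite grid width in place of d0
a, b = 0.0, 1.0
xs = [a + k * h for k in range(round((b - a) / h))]   # grid for [a, b[
F = lambda x: x ** 3                                  # sample B-antiderivative
dF = lambda x: (F(x + h) - F(x)) / h                  # right exact derivative

# the sum of F'_r(x) * dBx telescopes to F(b) - F(a)
total = sum(dF(x) * h for x in xs)
```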

Definition: Define, *according to the trapezoidal rule*,\[\int\limits_{z\in A}^{=}{f(z)dBz:=\sum\limits_{z\in A}{\frac{(f(z)+f(\curvearrowright B\,z))}{2}(\curvearrowright B\,z-z)}}\]and, *according to the midpoint rule*, assuming that \((z + \curvearrowright B z)/2\) exists,\[\int\limits_{z\in A}^{\doteq }{f(z)dBz:=\sum\limits_{z\in A}{f\left( \frac{z\,+\curvearrowright Bz}{2} \right)(\curvearrowright B\,z-z)}}.\]Remark: Since this tightened exact \(B\)-integral is clearly independent of the direction, it implicitly justifies theorems that cancel integral values in opposite directions, such as Green's theorem (see below). In the first fundamental theorem, the derivative \(dB(F(z))/dBz\) can be tightened to the arithmetic mean \((f(z) + f(\curvearrowright B z))/2\) resp. \(f((z + \curvearrowright B z)/2)\), and similarly, in the second fundamental theorem, \(F(\gamma(b)) - F(\gamma(a))\) can be tightened to \((F(\gamma(b)) + F(\curvearrowleft B \gamma(b)))/2 - (F(\gamma(a)) + F(\curvearrowright B \gamma(a)))/2\) resp. \(F((\gamma(b) + \curvearrowleft B \gamma(b))/2) - F((\gamma(a) + \curvearrowright B \gamma(a))/2)\), which yields approximately the original results when \(f\) and \(F\) are sufficiently \(\alpha\)-continuous at the boundary.\(\triangle\)
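On a finite grid with successor \(\curvearrowright B z = z + h\), the two tightened integrals can be compared directly; the following is a minimal conventional sketch (finite arithmetic standing in for the exact calculus, function chosen for illustration):

```python
# Discrete trapezoidal and midpoint "B-integrals" over a finite grid,
# with successor z -> z + h as a conventional stand-in for ↷B.
def trapezoid_B(f, a, b, steps):
    h = (b - a) / steps
    zs = [a + k * h for k in range(steps)]          # grid points of A
    return sum((f(z) + f(z + h)) / 2 * h for z in zs)

def midpoint_B(f, a, b, steps):
    h = (b - a) / steps
    zs = [a + k * h for k in range(steps)]
    return sum(f(z + h / 2) * h for z in zs)

f = lambda z: z * z                                  # antiderivative z^3/3
exact = 1 / 3
print(abs(trapezoid_B(f, 0.0, 1.0, 1000) - exact))   # O(h^2) discretisation error
print(abs(midpoint_B(f, 0.0, 1.0, 1000) - exact))    # O(h^2), about half as large
```

Both sums are manifestly unchanged when the grid is traversed in the opposite direction, which is the point of the remark above.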

Leibniz' differentiation rule: For \(f: {}^{(\omega)}\mathbb{K}^{n+1} \rightarrow {}^{(\omega)}\mathbb{K}, a, b: {}^{(\omega)}\mathbb{K}^{n} \rightarrow {}^{(\omega)}\mathbb{K}, \curvearrowright B x := {(s, {x}_{2}, ..., {x}_{n})}^{T}\), and \(s \in {}^{(\omega)}\mathbb{K} \setminus \{{x}_{1}\}\), choosing \(\curvearrowright D a(x) = a(\curvearrowright B x)\) and \(\curvearrowright D b(x) = b(\curvearrowright B x)\), it holds that\[\frac{\partial }{\partial {{x}_{1}}}\left( \int\limits_{a(x)}^{b(x)}{f(x,t)dDt} \right)=\int\limits_{a(x)}^{b(x)}{\frac{\partial f(x,t)}{\partial {{x}_{1}}}dDt}+\frac{\partial b(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,b(x))-\frac{\partial a(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,a(x)).\]Proof:\[\begin{aligned}\frac{\partial }{\partial {{x}_{1}}}\left( \int\limits_{a(x)}^{b(x)}{f(x,t)dDt} \right) &={\left( \int\limits_{a(\curvearrowright Bx)}^{b(\curvearrowright Bx)}{f(\curvearrowright Bx,t)dDt}-\int\limits_{a(x)}^{b(x)}{f(x,t)dDt} \right)}/{\partial {{x}_{1}}}\;={\left( \int\limits_{a(x)}^{b(x)}{(f(\curvearrowright Bx,t)-f(x,t))dDt}+\int\limits_{b(x)}^{b(\curvearrowright Bx)}{f(\curvearrowright Bx,t)dDt}-\int\limits_{a(x)}^{a(\curvearrowright Bx)}{f(\curvearrowright Bx,t)dDt} \right)}/{\partial {{x}_{1}}}\; \\ &=\int\limits_{a(x)}^{b(x)}{\frac{\partial f(x,t)}{\partial {{x}_{1}}}dDt}+\frac{\partial b(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,b(x))-\frac{\partial a(x)}{\partial {{x}_{1}}}f(\curvearrowright Bx,a(x)).\square\end{aligned}\]Remark: Integration takes place in the complex plane over a path whose start and end points are the limits of integration. If \(\curvearrowright D a(x) \ne a(\curvearrowright B x)\), then the final summand must be multiplied by \((\curvearrowright D a(x) - a(x))/(a(\curvearrowright B x) - a(x))\), and if \(\curvearrowright D b(x) \ne b(\curvearrowright B x)\), then the penultimate summand must be multiplied by \((\curvearrowright D b(x) - b(x))/(b(\curvearrowright B x) - b(x))\).
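The conventional form of the rule can be sanity-checked with difference quotients; the functions below are illustrative choices, not part of the theorem:

```python
import math

def integral(g, a, b, steps=2000):
    # composite midpoint quadrature
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

f  = lambda x, t: math.sin(x * t)             # integrand (illustrative)
fx = lambda x, t: t * math.cos(x * t)         # partial derivative in x
a, da = (lambda x: x), (lambda x: 1.0)        # lower limit a(x) = x
b, db = (lambda x: x * x), (lambda x: 2 * x)  # upper limit b(x) = x^2

def I(x):  # the parameter integral
    return integral(lambda t: f(x, t), a(x), b(x))

x, h = 1.5, 1e-5
lhs = (I(x + h) - I(x - h)) / (2 * h)         # central difference quotient
rhs = (integral(lambda t: fx(x, t), a(x), b(x))
       + db(x) * f(x, b(x)) - da(x) * f(x, a(x)))
print(abs(lhs - rhs))                          # agreement up to discretisation error
```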

Examples (cf. [813], p. 540 - 543 with \(n \in {}^{\omega}\mathbb{N}^{*}\) and \(x \in [0, 1]\) in each case):

1. The sequence \({f}_{n}(x) = \sin(nx)/\sqrt{n}\) does not tend to \(f(x) = 0\) as \(n \rightarrow \omega\), but instead to \(f(x) = \sin(\omega x)/\sqrt{\omega}\) with (continuous) derivative \(f'(x) = \cos(\omega x) \sqrt{\omega}\) instead of \(f'(x) = 0\).

2. The sequence \({f}_{n}(x) = x - \hat{n}x^{n}\) tends to \(f(x) = x - \hat{\omega}{x}^{\omega}\) as \(n \rightarrow \omega\) instead of \(f(x) = x\), with (continuous) derivative \(f'(x) = 1 - {x}^{\acute{\omega}}\) instead of \(f'(x) = 1\). Conventionally, the limit of \({f'}_{n}(x) = 1 - {x}^{\acute{n}}\) is discontinuous at the point \(x = 1\).

3. The sequence \({f}_{n}(x) = (\hat{2}{n}^{2} - |{n}^{3}(x - \widehat{2n})|)(1 - \text{sgn}(x - \hat{n}))\) (or alternatively, expressed in terms of continuously differentiable functions)\[{{f}_{n}}(x)=\left\{ \begin{array}{cl} 2{{n}^{3}}x & \text{for }x\in \left[ 0,\widehat{2n} \right] \\ 2{{n}^{2}}-2{{n}^{3}}x & \text{for }x\in \left] \widehat{2n},\hat{n} \right] \\ 0 & \text{for }x\in \left] \hat{n},1 \right] \\ \end{array} \right.\,\]does not always tend to 0 as \(n \rightarrow \omega\), but instead tends to different values depending on the value of \(x\) (replace \(n\) by \(\omega\) in \({f}_{n}(x)\)). Furthermore, it holds that\[\int\limits_{x \in [0,1[} {{f_n}dx} = \hat{2}{n}\]and\[\int\limits_{x\in [0,1[}{f dx}=\hat{2}{\omega}\]instead of\[\int\limits_{x \in [0,1[} {fdx} = 0\]supposedly because \(f(x) = 0\).

4. The sequence \({f}_{n}(x) = (\hat{2}n - |{n}^{2}(x - \widehat{2n})|)(1 - \text{sgn}(x - \hat{n}))\) (or alternatively, expressed in terms of continuously differentiable functions)\[{{f}_{n}}(x)=\left\{ \begin{array}{cl} 2{{n}^{2}}x & \text{for }x\in \left[ 0,\widehat{2n} \right] \\ 2n-2{{n}^{2}}x & \text{for }x\in \left] \widehat{2n},\hat{n} \right] \\ 0 & \text{for }x\in \left] \hat{n},1 \right] \\ \end{array} \right.\,\]does not always tend to 0 as \(n \rightarrow \omega\), but instead tends to different values depending on the value of \(x\) (replace \(n\) by \(\omega\) in \({f}_{n}(x)\)). Furthermore, it holds that\[\int\limits_{x \in [0,1[} {{f_n}dx} = \int\limits_{x \in [0,1[} {fdx} = {\hat{2}}\]instead of\[\int\limits_{x \in [0,1[} {fdx} = 0\]supposedly because \(f(x) = 0\).

5. The sequence \({f}_{n}(x) = nx(-\acute{x})^{n}\) does not tend to \(f(x) = 0\) as \(n \rightarrow \omega\), but instead to the continuous function \(f(x) = \omega x{(-\acute{x})}^{\omega}\), and takes the value \(\hat{e}\) when \(x = \hat{\omega}\).
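Example 4 can be checked directly in conventional finite terms: the pointwise values of \(f_n\) on \(]0, 1]\) tend to 0 while the integral stays at \(\hat{2}\) for every \(n\). A small sketch:

```python
def f(n, x):
    # Example 4: triangle of height n on [0, 1/n], zero on ]1/n, 1]
    if x <= 1 / (2 * n):
        return 2 * n * n * x
    if x <= 1 / n:
        return 2 * n - 2 * n * n * x
    return 0.0

def integral(n, steps=100000):
    # midpoint sum over [0, 1[
    h = 1.0 / steps
    return sum(f(n, (k + 0.5) * h) for k in range(steps)) * h

for n in (10, 100, 1000):
    print(f(n, 0.3), integral(n))   # pointwise value 0, integral stays near 1/2
```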

Finiteness criterion for series: Let \(j, k, m, n \in \mathbb{N}\). The modulus \(S_n := \left| \sum\limits_{k=0}^{n}{s_k} \right|\) for \(s_k \in {}^{(\omega)}\mathbb{C}\) is finite if and only if there is a monotonically decreasing sequence \(({d}_{j})\) with \( d_j \in {}^{c}\mathbb{R}_{\ge 0}\) such that \(S_n = \sum\limits_{j=0}^{m}{{{\left( -1 \right)}^{j}}{{d}_{j}}}.\)

Proof: The sum is bounded below by 0 and bounded above by \({d}_{0}\). The claim follows directly from the ability to arbitrarily rearrange summands, sort them according to their signs and sizes, and recombine them or split them into separate sums.\(\square\)

Example: From the alternating harmonic series, it follows that\[\sum\limits_{n=1}^{\omega }{{{\left( -1 \right)}^{n}}}\left( \omega -\hat{n} \right)=\ln 2.\]Definition: The following rearrangement for \({a}_{m}, {b}_{n} \in {}^{(\omega)}\mathbb{K}\) gives the *series product* and corrects the Cauchy product:\[\sum\limits_{m=1}^{\omega }{{{a}_{m}}}\sum\limits_{n=1}^{\omega }{{{b}_{n}}}=\sum\limits_{m=1}^{\omega }{\left( \sum\limits_{n=1}^{m}{\left( {{a}_{n}}{{b}_{m-\acute{n}}}+{{a}_{\omega -\acute{n}}}{{b}_{\omega -m+n}} \right)}-{{a}_{m}}{{b}_{\omega -\acute{m}}} \right)}.\]Example: For the following series product (cf. [763], p. 61 f.), it holds that:\[\left(\sum_{m=1}^{\mathrm{\omega}}\frac{{(-1)}^m}{\sqrt m}\right)^2=\sum_{m=1}^{\mathrm{\omega}}{\left(\sqrt{\frac{\hat{m}}{\mathrm{\omega}-\acute{m}}}-\sum_{n=1}^{m}{{(-1)}^m\left(\sqrt{\frac{\hat{n}}{m-\acute{n}}}+\sqrt{\frac{\widehat{\mathrm{\omega}-\acute{n}}}{\mathrm{\omega}-m\ \mathrm{+\ }n}}\right)}\right)=0.36590...\ }\ \ \ll\frac{{\zeta\left(\hat{2}\right)}^2}{3+2\sqrt2}.\]Example: The signum function sgn yields the following series product (cf. [763], p. 62): \[\sum\limits_{m=0}^{\omega }{{2}^{{{m}^{\text{sgn}(m)}}}}\sum\limits_{n=0}^{\omega}{\text{sgn}(n-\gamma)} = \acute{\omega}{2}^{\grave{\omega}}\gg -2.\]Definition: Let \(f: A \rightarrow {}^{(\omega)}\mathbb{K}\) for \(A \subseteq {}^{(\omega)}\mathbb{K}\). The left-hand side of\[\frac{d_{\curvearrowright B\,z}^{2}Bf(z)}{{{(d\curvearrowright B\,z)}^{2}}}:=\frac{f(\curvearrowright B(\curvearrowright B\,z))-2f(\curvearrowright B\,z)+f(z)}{{{(d\curvearrowright B\,z)}^{2}}}\]is called the *second derivative* of \(f\) at \(z \in A\) in the direction \(\curvearrowright B z\).

Remark: Higher derivatives are defined analogously. Each number \({m}_{n} \in {}^{\omega}\mathbb{N}\) of derivatives for \(n \in {}^{\omega}\mathbb{N}^{*}\) is written as an exponent after the \(n\)-th variable to be differentiated. If \(n \ge 2\), the derivatives are called *partial* and \(d\) is replaced by \(\partial\). The exponent to be specified in the numerator is the sum of all \({m}_{n}\). Taylor series only make sense for \(\omega\)-times \(\alpha\)-continuously differentiable functions, due to approximation and convergence behaviour.

Exchange theorem: The result of multiple partial derivatives of a function \(f: A \rightarrow {}^{(\omega)}\mathbb{K}\) is independent of the order of differentiation, provided that variables are only evaluated and limits are only computed at the end, if applicable (*principle of latest substitution*).

Proof: The derivative is uniquely determined: This is clear up to the second derivative, and the result follows by (transfinite) induction for higher-order derivatives.\(\square\)

Example: Let \(f: {}^{\omega}\mathbb{R}^{2} \rightarrow {}^{\omega}\mathbb{R}\) be defined as \(f(0, 0) = 0\) and \(f(x, y) = {xy}^{3}/({x}^{2} + {y}^{2})\) otherwise. Then:\[\frac{{{\partial ^2}f}}{{\partial x\partial y}} = \frac{{{y^6} + 6{x^2}{y^4} - 3{x^4}{y^2}}}{{{{({x^2} + {y^2})}^3}}} = \frac{{{\partial ^2}f}}{{\partial y\partial x}}\]with value \(\hat{2}\) at the point (0, 0), even though the equation\[\frac{{\partial f}}{{\partial x}} = \frac{{{y^5} - {x^2}{y^3}}}{{{{({x^2} + {y^2})}^2}}} \ne \frac{{x{y^4} + 3{x^3}{y^2}}}{{{{({x^2} + {y^2})}^2}}} = \frac{{\partial f}}{{\partial y}}\]is equal to \(y\) on the left for \(x = 0\) and 0 on the right for \(y = 0\). Partially differentiating the left-hand side with respect to \(y\) gives \(1 \ne 0\), which is the partial derivative of the right-hand side with respect to \(x\).
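Conventionally, the order dependence in this example shows up in nested difference quotients; the sketch below uses finite steps as a stand-in for the infinitesimal ones (equal steps reproduce the value \(\hat{2}\), strongly unequal steps the values 1 and 0):

```python
def f(x, y):
    # the example function, with f(0, 0) = 0
    return 0.0 if x == y == 0.0 else x * y**3 / (x * x + y * y)

def dx(g, x, y, h):  # symmetric difference quotient in x
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def dy(g, x, y, h):  # symmetric difference quotient in y
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

h = 1e-5
# equal step widths in both variables: value 1/2 at the origin
a = dy(lambda x, y: dx(f, x, y, h), 0.0, 0.0, h)
# inner step much smaller (inner quotient "evaluated first"): value ~1
b = dy(lambda x, y: dx(f, x, y, 1e-10), 0.0, 0.0, h)
# other order with a much smaller inner step: value ~0
c = dx(lambda x, y: dy(f, x, y, 1e-10), 0.0, 0.0, h)
print(a, b, c)   # -> 0.5, ~1, ~0
```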

Theorem: Split \(F: A \rightarrow {}^{(\omega)}\mathbb{C}\) into real and imaginary parts \(F(z) := U(z) + i V(z) := f(x, y) := u(x, y) + i v(x, y)\), and let \(h = |dBx| = |dBy|\) be infinitesimal, \(A \subseteq {}^{(\omega)}\mathbb{C}\) \(h\)-homogeneous, with the neighbourhood relation \(B \subseteq {A}^{2}\). Then \(F\) is holomorphic at every \(z = x + i y \in A\) and\[r(h):=\frac{{\partial{}^{2}}Bf(x,y)}{\partial Bx\partial By\,}h\]is infinitesimal if and only if the *Cauchy-Riemann partial differential equations*\[\frac{{\partial Bu}}{{\partial Bx}} = \frac{{\partial Bv}}{{\partial By}},\,\,\frac{{\partial Bv}}{{\partial Bx}} = - \frac{{\partial Bu}}{{\partial By}}\]are satisfied by \(B\) in both the \(\curvearrowright\) direction and the \(\curvearrowleft\) direction.

Proof: Since\[\begin{aligned}F'B(z) &= \frac{{F(z \pm \partial Bx) - F(z)}}{{\pm \partial Bx}} = \frac{{F(z \pm i\partial By) - F(z)}}{{\pm i\partial By}} = \frac{{F(z + dBz) - F(z)}}{{dBz}} = \frac{{\partial Bu}}{{\partial Bx}} + i\frac{{\partial Bv}}{{\partial Bx}} = \frac{{\partial Bv}}{{\partial By}} - i\frac{{\partial Bu}}{{\partial By}} = \frac{{u(x \pm \partial Bx,y) + i\,v(x \pm \partial Bx,y) - u(x,y) - i\,v(x,y)}}{{\pm \partial Bx}} \\ &= \frac{{u(x,y \pm \partial By) + i\,v(x,y \pm \partial By) - u(x,y) - i\,v(x,y)}}{{\pm i\partial By}} = \frac{{\partial Bf}}{{\partial Bx}} = - i\frac{{\partial Bf}}{{\partial By}} = \hat{2}\left( {\frac{{\partial Bf}}{{\partial Bx}} - i\frac{{\partial Bf}}{{\partial By}}} \right) = \frac{{\partial BF}}{{\partial Bz}}\end{aligned}\]and \(dBz = dBx + i dBy\) for every derivative defined on \(A\), and since\[\begin{aligned}&u(\curvearrowright Bx,y)-u(x,y)+u(x,\curvearrowright By)-u(x,y)+u(\curvearrowright Bx,\curvearrowright By)-u(\curvearrowright Bx,y)-u(x,\curvearrowright By)+u(x,y) =u(\curvearrowright Bx,\curvearrowright By)-u(x,y) \\ &=\frac{\partial Bu(x,y)}{\partial Bx}dBx+\frac{\partial Bu(x,y)}{\partial By}dBy+\frac{\partial Bu(\curvearrowright Bx,y)}{\partial By}dBy-\frac{\partial Bu(x,y)}{\partial By}dBy =\frac{\partial Bu(x,y)}{\partial Bx}dBx+\frac{\partial Bu(x,y)}{\partial By}dBy+\frac{{{\partial}^{2}}Bu(x,y)}{\partial Bx\partial By}dBxdBy=dBU(z)\end{aligned}\]

as well as the analogous formulas for \(v\) and in the \(\curvearrowleft\) direction, it holds that\[F'B(z)\,dBz = dBF(z) = dBU(z) + i\,dBV(z) = \,\left( {\begin{array}{*{20}{c}}{\frac{{\partial Bu}}{{\partial Bx}}} & {\frac{{\partial Bu}}{{\partial By}}}\\{i\frac{{\partial Bv}}{{\partial Bx}}} & {i\frac{{\partial Bv}}{{\partial By}}}\end{array}} \right)\left( {\begin{array}{*{20}{c}}{dBx}\\{dBy}\end{array}} \right) + \frac{{{\partial ^2}Bf(x,y)}}{{\partial Bx\partial By}}dBxdBy.\]The assumptions allow us to neglect the final summand, and so the claim follows.\(\square\)
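For a conventionally holomorphic function such as \(F(z) = z^2\) (an illustrative choice), the Cauchy-Riemann equations can be checked with difference quotients in both directions:

```python
def F(z):
    return z * z            # illustrative holomorphic function

def parts(x, y):
    w = F(complex(x, y))
    return w.real, w.imag   # u(x, y), v(x, y)

def check_CR(x, y, h):
    # one-sided difference quotients; h > 0 gives the ↷ direction, h < 0 the ↶ direction
    u, v = parts(x, y)
    ux = (parts(x + h, y)[0] - u) / h
    uy = (parts(x, y + h)[0] - u) / h
    vx = (parts(x + h, y)[1] - v) / h
    vy = (parts(x, y + h)[1] - v) / h
    return abs(ux - vy), abs(vx + uy)   # both residuals should be ~0

r1 = check_CR(1.2, -0.7, 1e-6)    # forward (↷) direction
r2 = check_CR(1.2, -0.7, -1e-6)   # backward (↶) direction
print(r1, r2)
```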

Remark: The final summand may in particular be neglected whenever \(f\) is continuous. The following necessary and sufficient condition is valid for \(F\) to be holomorphic:\[F'B(\bar z) = \frac{{\partial Bf}}{{\partial Bx}} = i\frac{{\partial Bf}}{{\partial By}} = \hat{2}\left( {\frac{{\partial Bf}}{{\partial Bx}} + i\frac{{\partial Bf}}{{\partial By}}} \right) = \frac{{\partial BF}}{{\partial B\bar z}} = 0.\]Definition: When integrating identical paths in opposite positive and negative directions, the *counter-directional rule* for integrals is adopted, stating that when following the path in the negative direction, the function value of the successor of the argument must be chosen if the function is too discontinuous, implying that the integral sums to 0 over both directions.

Remark: This convention is required in order to ensure that integrals that are expected to sum to zero do in fact do so. Without it, they could potentially have a significantly different value.

Counter-directional theorem: If the path \(\gamma: [a, b[C \rightarrow V\) with \(C \subseteq \mathbb{R}\) passes the edges of every \(n\)-cube of side length d0 in the \(n\)-volume \(V \subseteq {}^{(\omega)}\mathbb{R}^{n}\) with \(n \in \mathbb{N}_{\ge 2}\) exactly once, where the opposite edges in all two-dimensional faces of every \(n\)-cube are traversed in reverse direction, but uniformly, then, for \(D \subseteq \mathbb{R}^{2}, B \subseteq {V}^{2}, f = ({f}_{1}, ..., {f}_{n}): V \rightarrow {}^{(\omega)}\mathbb{R}^{n}, \gamma(t) = x, \gamma(\curvearrowright D t) = \curvearrowright B x\) and \({V}_{\curvearrowright } := \{\curvearrowright B x \in V: x \in V, \curvearrowright B x \ne \curvearrowleft B x\}\), it holds that\[\int\limits_{t \in [a,b[C}{f(\gamma (t)){{{{\gamma }'}}_{\curvearrowright }}(t)dDt}=\int\limits_{\begin{smallmatrix} (x,\curvearrowright B\,x) \\ \in V\times {{V}_{\curvearrowright}} \end{smallmatrix}}{f(x)dBx}=\int\limits_{\begin{smallmatrix} t \in [a,b[C, \\ \gamma | {\partial{}^{\acute{n}}} V \end{smallmatrix}}{f(\gamma (t)){{{{\gamma }'}}_{\curvearrowright }}(t)dDt}.\]Proof: If two arbitrary squares are considered with common edge of length d0 included in one plane, then only the edges of \(V\times{V}_{\curvearrowright}\) are not passed in both directions for the same function value. They all, and thus the path to be passed, are exactly contained in \({\partial}^{\acute{n}}V.\square\)

Remark: It is not difficult to transfer both definition and theorem to the complex numbers.

Green's theorem: Given neighbourhood relations \(B \subseteq {A}^{2}\) for some simply connected \(h\)-set \(A \subseteq {}^{(\omega)}\mathbb{R}^{2}\), infinitesimal \(h = |dBx|= |dBy| = |\curvearrowright B \gamma(t) - \gamma(t)| = \mathcal{O}({\hat{\omega}}^{m})\), sufficiently large \(m \in \mathbb{N}^{*}, (x, y) \in A, {A}^{-} := \{(x, y) \in A : (x + h, y + h) \in A\}\), and a simply closed path \(\gamma: [a, b[\rightarrow \partial A\) followed anticlockwise, choosing \(\curvearrowright B \gamma(t) = \gamma(\curvearrowright D t)\) for \(t \in [a, b[, D \subseteq {[a, b]}^{2}\), the following equation holds for sufficiently \(\alpha\)-continuous functions \(u, v: A \rightarrow \mathbb{R}\) with not necessarily continuous partial derivatives \(\partial Bu/\partial Bx, \partial Bu/\partial By, \partial Bv/\partial Bx\) and \(\partial Bv/\partial By\):\[\int\limits_{\gamma }{(u\,dBx+v\,dBy)}=\int\limits_{(x,y)\in {{A}^{-}}}{\left( \frac{\partial Bv}{\partial Bx}-\frac{\partial Bu}{\partial By} \right)dB(x,y)}.\]Proof: Wlog the case \(A := \{(x, y) : r \le x \le s, f(x) \le y \le g(x)\}, r, s \in {}^{(\omega)}\mathbb{R}, f, g : \partial A \rightarrow {}^{(\omega)}\mathbb{R}\) is proved, since the proof is analogous for each case rotated by \(\iota\), and every simply connected \(h\)-set is a union of such sets. It is simply shown that\[\int\limits_{\gamma }{u\,dBx}=-\int\limits_{(x,y)\in {{A}^{-}}}{\frac{\partial Bu}{\partial By}dB(x,y)}\]since the other relation may be shown analogously. 
Since the regions of \(\gamma\) where \(dBx = 0\) do not contribute to the integral, for negligibly small \(t := h(u(s, g(s)) - u(r, g(r)))\), it holds that\[-\int\limits_{\gamma }{u\,dBx}-t=\int\limits_{r}^{s}{u(x,g(x))dBx}-\int\limits_{r}^{s}{u(x,f(x))dBx}=\int\limits_{r}^{s}{\int\limits_{f(x)}^{g(x)}{\frac{\partial Bu}{\partial By}}dBydBx}=\int\limits_{(x,y)\in {{A}^{-}}}{\frac{\partial Bu}{\partial By}dB(x,y)}.\square\]Remark: The choice of \(m\) depends on the required number of sets of the type specified in the above proof, the union of which yields the simply connected \(h\)-set.
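A conventional numerical check of the statement on the unit square, with illustrative polynomial \(u, v\) (midpoint sums approximate both the line integral and the area integral):

```python
def u(x, y): return -x * y * y       # illustrative choices
def v(x, y): return x * x * y
def vx_minus_uy(x, y): return 2 * x * y + 2 * x * y   # ∂v/∂x - ∂u/∂y = 4xy

N = 400
h = 1.0 / N
mids = [(k + 0.5) * h for k in range(N)]

# anticlockwise boundary of the unit square [0,1]^2
line = (sum(u(t, 0.0) for t in mids) * h          # bottom, dx > 0
        + sum(v(1.0, t) for t in mids) * h        # right,  dy > 0
        - sum(u(t, 1.0) for t in mids) * h        # top,    dx < 0
        - sum(v(0.0, t) for t in mids) * h)       # left,   dy < 0

area = sum(vx_minus_uy(x, y) for x in mids for y in mids) * h * h
print(line, area)   # both ~1
```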

Goursat's integral lemma: If \(f\) is holomorphic on a triangle \(\Delta \subseteq {}^{(\omega)}\mathbb{C}\) but does not have an antiderivative on \(\Delta\), then\[I:=\int\limits_{\partial \Delta }{f(\zeta )dB\zeta }=0.\]Refutation of conventional proofs based on estimation by means of a complete triangulation: The direction in which \(\partial\Delta\) is traversed is irrelevant. If \(\Delta\) is fully triangulated, then wlog every minimal triangle \({\Delta}_{s} \subseteq \Delta\) must either satisfy\[{I_s}: = \int\limits_{\partial {\Delta _s}} {f(\zeta )dB\zeta } = f({z_1})({z_2} - {z_1}) + f({z_2})({z_3} - {z_2}) + f({z_1})({z_1} - {z_3}) = (f({z_1}) - f({z_2}))({z_2} - {z_3}) = 0\]or\[\begin{aligned}\int\limits_{\partial {\Delta _s}} {f(\zeta )dB\zeta } &= f({z_1})({z_2} - {z_1}) + f({z_2})({z_3} - {z_2}) + f({z_3})({z_1} - {z_3}) = (f({z_1}) - f({z_2})){z_2} + (f({z_2}) - f({z_3})){z_3} + (f({z_3}) - f({z_1})){z_1}\\ &= f'({z_2})\left( {({z_1} - {z_2}){z_2} - ({z_3} - {z_2}){z_3} + ({z_3} - {z_2}){z_1} - ({z_1} - {z_2}){z_1}} \right) = f'({z_2})\left( {({z_3} - {z_2})({z_1} - {z_3}) - {{({z_1} - {z_2})}^2}} \right) = 0\end{aligned}\]where \({z}_{1}, {z}_{2}\), and \({z}_{3}\) represent the vertices of \({\Delta}_{s}\). By holomorphicity and cyclic permutations, this can only happen for \(f({z}_{1}) = f({z}_{2}) = f({z}_{3})\). If every adjacent triangle to \(\Delta\) is considered, deduce that \(f\) must be constant, which contradicts the assumptions. This is because the term in large brackets is translation-invariant, since otherwise set \({z}_{3} := 0\) wlog, making this term 0, in which case \({z}_{1} = {z}_{2}(1 \pm i\sqrt{3})/2\) and \(|{z}_{1}| = |{z}_{2}| = |{z}_{1} - {z}_{2}|\). However, since every horizontal and vertical line is homogeneous on \({}^{(\omega)}\mathbb{C}\), this cannot happen, otherwise the corresponding sub-triangle would be equilateral and not isosceles and right-angled. 
Therefore, in both cases, \(|{I}_{s}|\) must be at least \(|f'({z}_{2}) \mathcal{O}({\text{d0}}^{2})|\), by selecting the vertices 0, |d0| and \(i|\text{d0}|\) wlog. If the perimeter of a triangle is denoted by \(L\), then it holds that \(|I| \le {4}^{m} |{I}_{s}|\) for an infinite natural number \(m\), and also \({2}^{m} = L(\partial\Delta)/|\mathcal{O}({\text{d0}}^{2})|\), since \(L(\partial\Delta) = {2}^{m} L(\partial{\Delta}_{s})\) and \(L(\partial{\Delta}_{s}) = |\mathcal{O}({\text{d0}}^{2})|\). Therefore, it holds that \(|I| \le |f'({z}_{2}) {L(\partial\Delta)}^{2}/\mathcal{O}({\text{d0}}^{2})|\), causing the desired estimate \(|I| \le |\mathcal{O}(dB\zeta)|\) to fail, for example if \(|f'({z}_{2}) {L(\partial\Delta)}^{2}|\) is larger than \(|\mathcal{O}({\text{d0}}^{2})|.\square\)

Cauchy's integral theorem: Given the neighbourhood relations \(B \subseteq {A}^{2}\) and \(D \subseteq [a, b]\) for some simply connected \(h\)-set \(A \subseteq {}^{\omega}\mathbb{C}\), infinitesimal \(h\), a holomorphic function \(f: A \rightarrow {}^{\omega}\mathbb{C}\) and a closed path \(\gamma: [a, b[\rightarrow \partial A\), choosing \(\curvearrowright B \gamma(t) = \gamma(\curvearrowright D t)\) for \(t \in [a, b[\), it holds that\[\int\limits_{\gamma }{f(z)dBz}=0.\]Proof: By the Cauchy-Riemann partial differential equations and Green's theorem, with \(x := \text{Re} \, z, y := \text{Im} \, z, u := \text{Re} \, f, v := \text{Im} \, f\) and \({A}^{-} := \{z \in A : z + h + ih \in A\}\), it holds that\[\int\limits_{\gamma }{f(z)dBz}=\int\limits_{\gamma }{\left( u+iv \right)\left( dBx+idBy \right)}=\int\limits_{z\in {{A}^{-}}}{\left( i\left( \frac{\partial Bu}{\partial Bx}-\frac{\partial Bv}{\partial By} \right)-\left( \frac{\partial Bv}{\partial Bx}+\frac{\partial Bu}{\partial By} \right) \right)dB(x,y)}=0.\square\]Fundamental theorem of algebra: For every non-constant polynomial \(p \in {}^{(\omega)}\mathbb{C}\), there exists some \(z \in {}^{(\omega)}\mathbb{C}\) such that \(p(z) = 0\).

Indirect proof: By performing an affine substitution of variables, reduce to the case \(1/p(0) \ne \mathcal{O}(\text{d0})\). Suppose that \(p(z) \ne 0\) for all \(z \in {}^{(\omega)}\mathbb{C}\). Since \(f(z) := 1/p(z)\) is holomorphic, it holds that \(f(1/\text{d0}) = \mathcal{O}(\text{d0})\). The mean value inequality \(|f(0)| \le {|f|}_{\gamma}\) (see [473], p. 160) for \(\gamma = \partial\mathbb{B}_{r}(0)\) and arbitrary \(r \in {}^{(\omega)}\mathbb{R}_{>0}\) then yields \(f(0) = \mathcal{O}(\text{d0})\), which is a contradiction.\(\square\)

Remark: The functions \(f(z) = \sum\limits_{k=1}^{\omega }{{{z}^{k}}{{{\hat{\omega }}}^{k+1}}}\) and \(g(z) = \hat{\omega }z\), (entire) in \({\mathbb{B}}_{\omega}(0) \subset {}^{\omega}\mathbb{C}\), give counterexamples to Liouville's (generalised) theorem and Picard's little theorem, because \(|f(z)| < 1\) and \(|g(z)| \le 1\). The function \(f(\hat{z})\) for \(z \in {\mathbb{B}}_{\omega}(0)^{*}\) refutes Picard's great theorem. The function \(b(z) := \hat{c}z\) for \(z \in {\mathbb{B}}_{c}(0) \subset {}^{c}\mathbb{C}\) maps the simply connected \({\mathbb{B}}_{c}(0)\) holomorphically, but not necessarily injectively or surjectively, to \(\mathbb{D}\). It disproves the Riemann mapping theorem and the (generalised) Poincaré conjecture.

Remark: If \(\hat{\omega}\) is identified with 0, the main theorem of Cauchy's theory of functions is true and can be proven according to Dixon (as in [473], p. 228 f.), especially as the limit mentioned there is taken to be 0 resp. \(\hat{r}\) tends to 0 as \(r \in {}^{\omega}\mathbb{R}_{>0}\) tends to \(\omega\).

Definition: A point \({z}_{0} \in M \subseteq {}^{(\omega)}\mathbb{C}^{n}\) or belonging to a sequence \(({a}_{k})\) for \({a}_{k} \in {}^{(\omega)}\mathbb{C}^{n}\) and an (infinite) natural number \(k\) is called a *(proper) \(\alpha\)-accumulation point* of \(M\) or of the sequence, if the ball \(\mathbb{B}_{\alpha}({z}_{0}) \subseteq {}^{(\omega)}\mathbb{C}^{n}\) with centre \({z}_{0}\) and infinitesimal radius \(\alpha\) contains infinitely many points from \(M\) or infinitely many pairwise distinct members of the sequence. If \(\alpha = \hat{\omega}a\) holds here, the \(\alpha\)-accumulation point is simply called an accumulation point.\(\triangle\)

Remark: Let \(p(z) = \prod\limits_{k=0}^{\omega}{\left( z-{{d}_{k}} \right)}\) with \(z \in {}^{\omega}\mathbb{C}\) be an infinite product with pairwise distinct zeros \({d}_{k} \in \mathbb{B}_{\hat{\omega}}(0) \subset \mathbb{D}\), chosen in such a way that \(|f({d}_{k})| < \hat{\omega}\) for a function \(f\) holomorphic on a region \(G \subseteq \mathbb{C}\) and that \(f(0) = 0\). Suppose that \(G\) contains \(\mathbb{B}_{\hat{\omega}}(0)\) completely. This can always be achieved by means of coordinate transformations provided that \(G\) is sufficiently "large".

Then the coincidence set \(\{\zeta \in G : f(\zeta) = g(\zeta)\}\) of the function \(g(z) := f(z) + p(z)\), holomorphic on \(G\), contains an accumulation point at 0, and \(f \ne g\), contradicting the statement of the identity theorem. Examples of such \(f\) include functions with a zero at 0 that are restricted to \(\mathbb{B}_{\hat{\omega}}(0)\) and holomorphic on \(G\). Since \(p(z)\) can take every conventional complex number, the deviation between \(f\) and \(g\) is non-negligible.

The identity theorem is also contradicted by the (local) fact that all derivatives \({u}^{(n)}({z}_{0}) = {v}^{(n)}({z}_{0})\) of two functions \(u\) and \(v\) can be equal at \({z}_{0} \in G\) for all \(n\), while \(u\) and \(v\) may differ significantly further away and yet remain holomorphic, since not every holomorphic function can be uniquely developed into a Taylor series, due to the approximation of differentiation and computation with Landau symbols.

Extending to \(\prod\limits_{k=0}^{\left| \mathbb{N}^{*} \right|}{\left( z-{{d}_{k}} \right)}\) allows entire functions with an infinite number of zeros to be constructed. The set of zeros is not necessarily discrete. Thus, the set of all functions that are holomorphic on a region \(G\) may contain zero divisors. Functions such as polynomials with \(n > 2\) pairwise distinct zeros once again give counterexamples to Picard's little theorem, since they omit at least \(\acute{n}\) values in \(\mathbb{C}\).

Remark: From the identity\[{{s}^{(0)}}(x):=\sum\limits_{m=0}^{n}{{{(-x)}^{m}}}=\frac{1-{{(-x)}^{\grave{n}}}}{\grave{x}}\]for real or complex \(x\), it can be deduced by differentiating that\[{{s}^{(1)}}(x)=-\sum\limits_{m=1}^{n}{m{{(-x)}^{\acute{m}}}}=\frac{\grave{n}{{(-x)}^{n}}-n{{(-x)}^{\grave{n}}}-1}{{{\grave{x}}^{2}}},\]when the moduli of \(x\), \(dx\) and \(\widehat{dx}\) have different orders of magnitude.
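For finite \(n\), the closed form for \(s^{(1)}(x)\) is a conventional polynomial identity and can be verified directly:

```python
def s1_sum(x, n):
    # -sum_{m=1}^{n} m * (-x)^(m-1)
    return -sum(m * (-x) ** (m - 1) for m in range(1, n + 1))

def s1_closed(x, n):
    # ((n+1)(-x)^n - n(-x)^(n+1) - 1) / (x+1)^2
    return ((n + 1) * (-x) ** n - n * (-x) ** (n + 1) - 1) / (x + 1) ** 2

for x in (0.3, 2.0, -0.5, 1.5 + 0.5j):
    print(abs(s1_sum(x, 12) - s1_closed(x, 12)))   # ~0 in each case
```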

For sufficiently small \(x\), and sufficiently but not excessively large \(n\), this formula can be further simplified to \(-1/{\grave{x}}^{2}\), and it also remains valid when \(x \ge 1\) is not excessively large. By successively multiplying \({s}^{(j)}(x)\) by \(x\) for \(j \in {}^{\omega}\mathbb{N}^{*}\) and subsequently differentiating, further formulas can be derived for \({s}^{(j+1)}(x)\), providing examples of divergent series that have previously not always been calculated correctly.

For all complex \(z\), the formula for \({s}^{(0)}(z)\) holds. In the problematic special case \(z = -1\), this follows from L'Hôpital's rule. However, if \({s}^{(0)}(-x)\) is integrated from 0 to 1 and \(n := \omega\) is set, an integral expression for \(\ln \, \omega + \gamma\) is obtained in terms of Euler's constant \(\gamma\). Substituting \(y := -\acute{x}\), the binomial series yields a series with almost exclusively infinite coefficients; if \(\ln \, \omega\) is also expressed as a series, even an expression for \(\gamma\) is obtained.
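Conventionally, \(\int_0^1 s^{(0)}(-x)\,dx = \int_0^1 (1-x^{\grave{n}})/(1-x)\,dx\) is exactly the harmonic number \(H_{\grave{n}}\), which behaves like \(\ln n + \gamma\); a finite check (the value of \(\gamma\) is assumed known):

```python
import math

GAMMA = 0.5772156649015329      # Euler's constant (known numerical value)

def harmonic(n):
    # H_n = integral_0^1 (1 - x^n)/(1 - x) dx = sum_{m=1}^{n} 1/m
    return sum(1.0 / m for m in range(1, n + 1))

for n in (10, 1000, 100000):
    print(harmonic(n) - math.log(n))   # tends to GAMMA as n grows
```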

If the numerator of \({s}^{(0)}(z)\) is simplified illegitimately, incorrect results are risked, especially when \(|z| \ge 1\). For example, \({s}^{(0)}(-{e}^{i\pi})\) is 0 for odd \(n\) and 1 for even \(n\), but not \(\hat{2}\).

Theorem: Using the digamma function \(\psi\), it holds for \(n \in {}^{\omega}2\mathbb{N}^{*}\), small \(\varepsilon \in ]0, 1]\), and \({{d}_{\varepsilon k n}}:={{\varepsilon}^{{\hat{n}}}}{e}^{\hat{n}2k\pi i}\) that\[\zeta(\grave{n}) = \underset{\varepsilon \to 0}{\mathop{\lim }}\,\widehat{-\varepsilon n}\sum\limits_{k=1}^{n}{\left( \gamma +\psi ({{d}_{\varepsilon k n}}) \right)}+\mathcal{O}(\varepsilon)\]and\[\zeta(\grave{n}) = \underset{\varepsilon \to 0}{\mathop{\lim }}\,\widehat{2\varepsilon n}\sum\limits_{k=1}^{n}{\left( \psi ({{d}_{\varepsilon k n}}{{i}^{\hat{n}2}})-\psi ({{d}_{\varepsilon k n}}) \right)}+\mathcal{O}({{\varepsilon }^{2}}).\]Proof: The claim follows easily via the geometric series from ([474], p. 37 - 42): \[\psi (z)+\gamma +\hat{z}=\sum\limits_{m=1}^{\omega }{\left( \hat{m}-\widehat{m+z} \right)}=-\sum\limits_{m=1}^{\omega }{\zeta(\grave{m}){{(-z)}^{m}}}=z\sum\limits_{m=1}^{\omega}{\hat{m}\widehat{m+z}}.\square\]Remark: The slowly convergent series on the right-hand side may be accelerated well by Euler summation. Odd roots of unity lead to analogous representations.

Corollary: For \(z \in \mathbb{B}_{1-\hat{c}}(0) \subset \mathbb{D}, s \in {}^{\omega}\mathbb{C}\) and \(\hat{c} \le \text{Re} \; s < 1 + \hat{c}\), it holds with the convergent series\[v(s,z):=\sum\limits_{m=1}^{\omega }{\zeta (m+s){{z}^{m}}}=z\sum\limits_{m=1}^{\omega }{{{{\hat{m}}}^{s}}\widehat{m-z}}\]and \(u(s, z) := z \, v(s, z)\):\[\zeta (\acute{n}+s)=\underset{\varepsilon \to 0}{\mathop{\lim }}\,\widehat{\varepsilon n}\sum\limits_{k=1}^{n}{u(s,{{d}_{\varepsilon k n}})}+\mathcal{O}(\varepsilon)\]and\[\zeta (n+s)=\underset{\varepsilon \to 0}{\mathop{\lim }}\,\widehat{\varepsilon n}\sum\limits_{k=1}^{n}{v(s,{{d}_{\varepsilon k n}})}+\mathcal{O}(\varepsilon).\square\]
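Conventionally, \(v(s,z)=\sum_m \zeta(m+s)z^m\), and averaging \(v\) over the scaled \(n\)-th roots of unity \(d_{\varepsilon k n}\) filters out every \(n\)-th Taylor coefficient, which is what the second formula expresses. A finite numerical sketch (series truncated at \(M\) terms, parameters illustrative):

```python
import cmath

def v(s, z, M=20000):
    # v(s, z) = z * sum_{m=1}^{M} m^(-s) / (m - z), truncated at M
    return z * sum(m ** (-s) / (m - z) for m in range(1, M + 1))

def zeta(s, M=20000):
    # direct truncated Dirichlet series, for comparison
    return sum(m ** (-s) for m in range(1, M + 1))

n, s, eps = 3, 1.0, 1e-4
roots = [eps ** (1 / n) * cmath.exp(2j * cmath.pi * k / n) for k in range(1, n + 1)]
approx = sum(v(s, d) for d in roots) / (eps * n)   # roots-of-unity filter
print(abs(approx - zeta(n + s)))                   # small: O(eps) plus truncation
```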

Remark: Adding \(p\hat{z} + qz + r\) to \(u(s, z)\) resp. \(v(s, z)\) with \(p, q, r \in {}^{\omega}\mathbb{C}\) barely matters.

Corollary: With \(a \in D \subseteq \mathbb{C}\), it holds by Taylor's theorem (cf. [473], p. 165 f.) for a function\[f(z)=\sum\limits_{m=0}^{\omega }{\widehat{m!}\,{{f}^{(m)}}(a){{(z-a)}^{m}}}\]holomorphic in the domain \(D\) and \(g(a, z) := zf(z + a)\) resp. \(h(a, z) := f(z + a) - f(a)\) that\[{{f}^{(\acute{n})}}(a)=\underset{\varepsilon \to 0}{\mathop{\lim }}\,\widehat{\varepsilon n}\,\acute{n}!\sum\limits_{k=1}^{n}{g(a, {{d}_{\varepsilon k n}})}+\mathcal{O}(\varepsilon )\]and\[{{f}^{(n)}}(a)=\underset{\varepsilon \to 0}{\mathop{\lim }}\,\hat{\varepsilon}\,\acute{n}!\sum\limits_{k=1}^{n}{h(a, {{d}_{\varepsilon k n}})}+\mathcal{O}(\varepsilon).\square\]
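In conventional terms, the second formula is again a scaled roots-of-unity filter, applied to the Taylor coefficients of \(h(a, z) = f(z + a) - f(a)\); a sketch with the illustrative choice \(f = \exp\) and \(a = 0\), so that every \(f^{(n)}(0) = 1\):

```python
import cmath, math

def nth_derivative(f, a, n, eps=1e-6):
    # f^(n)(a) ~ (n-1)!/eps * sum_{k=1}^{n} h(a, d_k) with h(a, z) = f(z + a) - f(a),
    # where d_k are the n-th roots of eps; the filter kills all other coefficients
    roots = [eps ** (1 / n) * cmath.exp(2j * cmath.pi * k / n) for k in range(1, n + 1)]
    total = sum(f(a + d) - f(a) for d in roots)
    return math.factorial(n - 1) / eps * total

print(nth_derivative(cmath.exp, 0.0, 4))   # close to 1 = exp^(4)(0)
```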

Remark: The precision may be arbitrarily increased by successively inserting the terms.

© 07.03.2018 by Boris Haase
