
Preliminary remarks: In the following, we use the definitions established in the chapter on Set Theory, and we usually take m, n ∈ ^{ω}ℕ*. We seek to study integration and differentiation on an arbitrary non-empty subset A of the sets ^{(ω)}ℂ^{n} or ^{(ω)}ℝ^{n} for arbitrary n, which we combine under the notation ^{(ω)}ℍ^{n} = ^{(ω)}ℍ × ... × ^{(ω)}ℍ, where each ℍ represents either ℂ or ℝ in some arbitrary order, since we shall not consider quaternions here. In particular, we will consider inconcrete and infinite sets that are conventionally non-measurable, and functions that are conventionally discontinuous. Every element not in the image set is replaced by the neighbouring element in the target set; if multiple choices are possible, a single one is selected. Without this convention, the mapping is not (meaningfully) defined. This can easily be generalised to other sets.

Definition: The function ||·||: V → ^{(ω)}ℝ_{≥0}, where V is a vector space over ^{(ω)}ℍ, is called a *norm* if, for all x, y ∈ V and λ ∈ ^{(ω)}ℍ, we have: ||x|| = 0 ⇒ x = 0 (*definiteness*), ||λx|| = |λ| ||x|| (*homogeneity*), and ||x + y|| ≤ ||x|| + ||y|| (*triangle inequality*). The *dimension* of V is defined as the maximal number of linearly independent vectors, and is denoted by dim V. The norms ||·||_{a} and ||·||_{b} are said to be *equivalent* if there exist finite but not infinitesimal σ, τ ∈ ^{(ω)}ℝ_{>0} such that, for all x ∈ V, it holds that:

σ||x||_{b} ≤ ||x||_{a} ≤ τ||x||_{b}.

Theorem: Let N be the set of all norms on V. All norms in N are equivalent if and only if ||x||_{a}/||x||_{b} is finite but not infinitesimal for all ||·||_{a}, ||·||_{b} ∈ N and all x ∈ V*.

Proof: The claim follows immediately after setting σ := min {||x||_{a}/||x||_{b}: x ∈ V*} and τ := max {||x||_{a}/||x||_{b}: x ∈ V*}.⃞

Remark: On ^{κ}ℍ^{n} with n ∈ ^{ω}ℕ*, all norms are equivalent if we disallow the definition of infinite or infinitesimal norm values. In the following, ||·|| denotes the Euclidean norm. Results can easily be generalised to equivalent norms.
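The equivalence of norms in finite dimension can be illustrated numerically. The following sketch works in ordinary floating-point arithmetic rather than in ^{(ω)}ℝ, so the dimension, the sample size, and the tolerance are assumptions for illustration; it verifies the witnesses σ = 1 and τ = √n for the maximum norm against the Euclidean norm:

```python
import math
import random

def norm2(x):
    # Euclidean norm ||x||_2
    return math.sqrt(sum(t * t for t in x))

def norm_inf(x):
    # maximum norm ||x||_inf
    return max(abs(t) for t in x)

# For every x in R^n: ||x||_inf <= ||x||_2 <= sqrt(n) * ||x||_inf,
# i.e. sigma = 1 and tau = sqrt(n) witness the equivalence.
random.seed(0)
n = 5
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    assert norm_inf(x) <= norm2(x) + 1e-12
    assert norm2(x) <= math.sqrt(n) * norm_inf(x) + 1e-12
```

The constants σ and τ depend only on n, not on x, which is exactly the content of the equivalence definition above.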

Definition: We describe neighbouring points in A by means of the irreflexive symmetric *neighbourhood relation* B ⊆ A^{2}. The set ÑB(z_{0}) of all neighbours of z_{0} ∈ A (with respect to B in A) is called the *neighbourhood* of z_{0} (with respect to B in A). The function γ: C → A ⊆ ℂ^{n}, where C ⊆ ℝ is h-homogeneous and h is infinitesimal, is called a *path* if ||γ(x) – γ(y)|| is infinitesimal and (γ(x), γ(y)) ∈ B for all neighbouring points x, y ∈ C. The pairs of B are systematically written as (predecessor, successor) with the notation (z_{0}, ↷z_{0}) or (↶z_{0}, z_{0}), where ↷ is pronounced "succ" and ↶ is pronounced "pre". This applies analogously to the neighbourhood relation D ⊆ C^{2}.

Definition: Let z_{0} ∈ A ⊆ ℍ^{n} and f: A → ^{(κ)}ℍ^{m}. In the following, we will omit proofs for predecessors, since they are analogous to the proofs for successors. We say that f is *αB-successor-continuous* at z_{0} in the direction ↷B z_{0} if, for infinitesimal α ∈ ^{(ω)}ℝ_{>0}, we have that

||f(↷B z_{0}) - f(z_{0})|| < α.

In general, when ||x – y|| < α for x, y ∈ A, we write x ≈_{α} y, pronounced "α-infinitesimally equal". If the exact value of α does not matter, α can be omitted from the notation. If f is αB-successor-continuous for all z_{0} ∈ A and all ↷B z_{0} ∈ ÑB(z_{0}), we simply say that it is αB-continuous. We say that α is the *degree* of continuity. If the inequality only holds for α = 1/⌊ω⌋, we simply say that f is (B-successor-)continuous. The property of αB-predecessor-continuity is defined analogously.

Remark: In practice, α is chosen by estimating f (for example after considering any jump discontinuities). If B is obvious or irrelevant, it can be omitted. Below, we systematically omit it when B = ^{(ω)}ℍ^{2n}.

Example: The function f: ℝ → {±1} with f(x) = (-1)^{x/d0} is nowhere successor-continuous on ℝ, but its absolute value is (cf. Transcendental Numbers). Here, x/d0 is an integer since ℝ is d0-homogeneous. If we set f(x) = 1 for rational x and = -1 for irrational x, then f(x) is partially d0-successor-continuous on irrational numbers, unlike the conventional notion of continuity.
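This example can be modelled finitely. The sketch below represents a d0-homogeneous stretch of ℝ by integer grid indices (the finite grid and the choice α = ½ are assumptions; the true d0 is infinitesimal), and checks that f(x) = (-1)^{x/d0} fails successor-continuity at every point while |f| satisfies it:

```python
# alpha-successor-continuity check on a d0-homogeneous grid,
# with grid point x = k*d0 represented by the integer index k.
alpha = 0.5

def succ_continuous_everywhere(f, ks):
    # |f(succ x) - f(x)| < alpha at every grid point
    return all(abs(f(k + 1) - f(k)) < alpha for k in ks)

f = lambda k: (-1) ** k      # models f(x) = (-1)^(x/d0)
g = lambda k: abs(f(k))      # its absolute value, constantly 1

ks = range(1000)
assert not succ_continuous_everywhere(f, ks)  # jumps of size 2 everywhere
assert succ_continuous_everywhere(g, ks)      # |f| is successor-continuous
```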

Definition: For f: A → ^{(ω)}ℍ^{m}, we define

d_{↷B z}f(z) := f(↷B z) - f(z)

to be the *B-successor-differential* of f in the direction ↷B z for z ∈ A. If dim A = n, then we can specify d_{↷B z}f(z) as d_{((↷B)z_{1}, ..., (↷B)z_{n})}f(z). If f is the identity, i.e. f(z) = z, then we can write dBz instead of d_{↷B z}f(z). If A or ↷B z is obvious or irrelevant, it can be omitted. The conventionally real case can be defined analogously to the above.

Remark: If the modulus of the B-successor-differential of f in the direction ↷B z at z ∈ A is smaller than α and infinitesimal, then f is also αB-successor-continuous at that point.

Definition: The m arithmetic means of all f_{k}(↷B z) of f(z) give the m *averaged normed tangential normal vectors* of m (uniquely determined) hyperplanes, which define the mn continuous partial derivatives of the Jacobian matrix of f; the Jacobian itself is not necessarily continuous. The hyperplanes are taken to pass through f_{k}(↷B z) and f(z) translated towards 0. We can minimise the moduli of their coefficients by solving a very simple linear programme (cf. Linear Programming).

Theorem: An arbitrary mapping f: X → X on an arbitrary set X is bijective if it is either injective or surjective.

Proof: The claim follows directly from the fact that all (pre-)images are pairwise distinct.⃞

Remark: Note that this theorem does not apply to the successor function s in ^{ω}ℕ, since s: ^{ω}ℕ → ^{ω}ℕ* ∪ {|^{ω}ℕ|}.
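The theorem and the remark about the successor function can be checked directly on a small finite set; the set {0, ..., 9} and the sample maps below are illustrative assumptions:

```python
def is_injective(f, X):
    # distinct arguments give distinct values
    return len({f(x) for x in X}) == len(X)

def is_surjective(f, X):
    # the image fills X
    return {f(x) for x in X} == X

X = set(range(10))
f = lambda x: (x + 3) % 10   # an injective self-map of X
assert is_injective(f, X)
assert is_surjective(f, X)   # injectivity forces surjectivity on a self-map

# The successor s(n) = n + 1 is injective on X but is NOT a self-map:
# its image {1, ..., 10} leaves X, mirroring s on ωN in the remark.
s = lambda n: n + 1
assert is_injective(s, X)
assert {s(n) for n in X} != X
```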

Example of a Peano curve (from [739], p. 188, see Bibliography): "Consider the even, periodic function g: ℝ → ℝ with period 2 and image [0, 1] defined by

It is clear that g is fully specified by this definition, and continuous. Now let the function Φ: I = [0, 1] → ℝ^{2} be defined by

"

The function Φ is at least continuous, since the sums are ultimately locally linear functions in t when ∞ is replaced by ⌊ω⌋. It would however be an error to believe that [0, 1] can be bijectively mapped onto [0, 1]^{2} in this way, e.g. because the powers of four in g and the values 0 and 1 taken by g on two sub-intervals thin out [0, 1]^{2} so much that a bijection is clearly impossible. Restricting the proof to rational points alone is simply insufficient.

Definition: A point x (↷x) of a function f: A ⊆ ^{ω}ℝ → ^{ω}ℝ is said to be a *right (left) jump discontinuity* with jump s := |f(↷x) – f(x)| *upwards (downwards)*, or vice versa, if s > 1/|^{ω}ℕ*|.

Theorem: A monotone function f: [a, b] → ^{ω}ℝ has at most |^{ω}ℕ||^{ω}ℤ*| - 1 jump discontinuities.

Proof: Between -|^{ω}ℕ*| and |^{ω}ℕ*|, at most |^{ω}ℕ*||^{ω}ℤ*| jump discontinuities with a jump of 1/|^{ω}ℕ*| are possible, totalling a maximum of |^{ω}ℤ*| - 1 when viewed all together. If the function, like a step function, does not decrease at non-discontinuities, then the claim follows after correctly accounting for the ends of the number line ^{ω}ℝ.⃞

Remark: This theorem corrects Froda's theorem and makes it more precise. If we precede each set by the superscript ^{κ}, we obtain the statement for conventional sets.

Definition: Let A ⊆ ^{(ω)}ℂ^{n} be an h-homogeneous m-dimensional set with m ∈ ℕ*_{≤2n}, where dim ^{(ω)}ℂ = 2. The function µ_{h} with µ_{h}(A) := |A| h^{m} and µ_{h}(∅) := |∅| = 0 is called the *exact h-measure* of A, and A is said to be *h-measurable*.

Remark: µ_{h}(A) is clearly additive and uniquely determined, i.e. if A is the union of pairwise disjoint h-homogeneous sets A_{k} for k ∈ ℕ, then

µ_{h}(A) = ∑_{k} µ_{h}(A_{k}).

It is also strictly monotone, i.e. if h-homogeneous sets A_{1}, A_{2} ⊆ ^{(ω)}ℍ^{n} satisfy A_{1} ⊂ A_{2} then µ_{h}(A_{1}) < µ_{h}(A_{2}). If h is not equal on all considered sets A_{k}, we choose the minimum of all h and homogenise as described above. This measure is more precise than other measures and is optimal, since its value is neither smaller nor greater than the distances of the points parallel to the coordinate axes, as it simply considers the neighbourhoods of a point. Concepts such as σ-algebras and null sets are not required, since the only null set in this context is the empty set.

Examples: Consider the set A ⊂ [0, 1[ of points whose least significant bit is 1 (0) in their (conventionally) real binary representation. Then µ_{d0}(A) = ½. Real numbers represent a further refinement of the conventionally real numbers, obtained by dividing the conventionally real intervals into (significantly) finer sub-intervals. Since A is an infinite (conventionally uncountable) union of individual points (without the neighbouring points of [0, 1[ in A), and these points are Lebesgue null sets, A is not Lebesgue measurable, but it is exactly measurable. Similarly, consider the subset S of [0, 1[ × [0, 1[ of all points with least significant bit 1 (0) in both coordinates. This set has exact measure µ_{d0}(S) = ¼.
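Both measure values can be reproduced at any finite resolution. The sketch below uses h = 2^{-k} with a finite k as a stand-in for the infinitesimal d0 (an assumption), so the arithmetic is exact in binary floating point:

```python
# Exact h-measure mu_h(A) = |A| * h^m at finite resolution h = 2**-k.
k = 12
h = 2.0 ** -k
grid = range(2 ** k)                  # index i represents the point i*h
A = [i for i in grid if i & 1]        # least significant bit is 1
assert len(A) * h == 0.5              # mu_h(A) = |A| * h = 1/2

# Two-dimensional analogue S: both coordinates have last bit 1.
assert (len(A) ** 2) * h * h == 0.25  # mu_h(S) = |S| * h^2 = 1/4
```

Halving h doubles |A| while halving h itself, so the measure ½ is stable under refinement, which is the point of the example.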

Remark: Binary representation, or at least a power-of-two base (8 is ideal, cf. Calculation of Times), is also suitable for BBP series.

Definition: The *partial derivative* in the direction ↷B z_{k} of F: A → ^{(ω)}ℍ at z = (z_{1}, ..., z_{n}) ∈ A ⊆ ^{(ω)}ℍ^{n} with k ∈ [1, n]ℕ is defined as

∂F(z)/∂B z_{k} := (F(z_{1}, ..., ↷B z_{k}, ..., z_{n}) - F(z))/(↷B z_{k} - z_{k}).

With this notation, if the function f = (f_{1}, ..., f_{n}): A → ^{(ω)}ℍ^{n} with z ∈ A ⊆ ^{(ω)}ℍ^{n} satisfies

f(z) = (∂F(z)/∂B z_{1}, ..., ∂F(z)/∂B z_{n}) = ∇_{↷B z} F(z),

then f(z) is said to be the *exact B-successor-derivative* F′_{↷B z}B(z) or the *exact B-successor-gradient* grad_{↷B z} F(z) of the function F at z, which is said to be *exactly B-differentiable* at z in the direction ↷B, provided that each quotient exists in ^{(ω)}ℍ. Here ∇ is the *Nabla operator*. If this definition is satisfied for every z ∈ A, then F is said to be an *exactly B-differentiable B-antiderivative* of f. On the (conventionally) (infinite) real numbers, the left and right B-antiderivatives F_{l}(x) and F_{r}(x) at x ∈ ^{(ω)}ℝ distinguish between the cases of left and right B-derivatives.

If A or ↷B z are obvious from context or irrelevant, they can be omitted. The conventional case may be obtained analogously to the above. In the case n = 1, we say that F′_{r}B(w) is the *right exact B-derivative* for ↷B w > w ∈ ^{(ω)}ℝ and that F′_{l}B(w) is the *left exact B-derivative* for ↷B w < w. If both directions give the same value, we say that F′B(z) is the exact derivative (when A = ^{κ}ℂ and n = 1, this reduces to the conventional case where F is *holomorphic*).

Remark: Clearly, the B-antiderivatives of a given function only differ by a (conventional) (infinite) complex or (infinite) real additive constant. The B-antiderivatives of discontinuous functions can typically only be derived by adding and appropriately recombining easier αB-continuous functions (e.g. by reversing the rules of differentiation).

Chain rule: For x ∈ A ⊆ ^{(ω)}ℝ, B ⊆ A^{2}, f: A → C ⊆ ^{(ω)}ℝ, D ⊆ C^{2}, g: C → ^{(ω)}ℝ, choosing f(↷B x) = ↷D f(x), we have that:

g′_{r}B(f(x)) = g′_{r}D(f(x)) f′_{r}B(x).

Proof: Since f(↷B x) = ↷D f(x), we have g′_{r}B(f(x)) = (g(f(↷B x)) - g(f(x)))/(↷B x - x) = (g(↷D f(x)) - g(f(x)))/(↷D f(x) - f(x)) · (f(↷B x) - f(x))/(↷B x - x) = g′_{r}D(f(x)) f′_{r}B(x).⃞

Remark: The chain rule and the rules stated below can be extended to (infinite) complex sets and left exact derivatives. For simplicity and clarity, we will omit the sets and neighbourhood relations, which should be viewed as implicit. Let f and g be right exactly differentiable (infinite) real functions at x ∈ A ⊆ ^{(ω)}ℝ.

Product rule:

(fg)′_{r}(x) = f′_{r}(x) g(x) + f(↷x) g′_{r}(x) = f′_{r}(x) g(↷x) + f(x) g′_{r}(x).

Proof: Add and subtract f(↷ x) g(x) resp. f(x) g(↷ x) in the numerator.⃞

Quotient rule: Suppose that the denominators of the following quotients are non-zero. Then:

(f/g)′_{r}(x) = (f′_{r}(x) g(x) - f(x) g′_{r}(x))/(g(x) g(↷x)) = (f′_{r}(x) g(↷x) - f(↷x) g′_{r}(x))/(g(x) g(↷x)).

Proof: Add and subtract f(x) g(x) resp. f(↷ x) g(↷ x) in the numerator.⃞

Remark: For the product and quotient rule to be consistent with the conventional rules to a sufficient precision, the arguments and function values must belong to a smaller level of infinity than 1/d0, and f and g must be sufficiently (α-)continuous at x ∈ A (i.e. we must be able to substitute a sufficiently small α) to allow ↷x to be replaced by x. An analogous principle holds for infinitesimal arguments.
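Unlike the conventional product rule, the exact product rule is an identity with no limit taken, so it can be verified by exact rational arithmetic. The sketch below uses a finite rational step h as a stand-in for the infinitesimal grid width (an assumption) and two sample polynomials:

```python
from fractions import Fraction

# Right exact derivative on an h-homogeneous grid, exact in rationals.
h = Fraction(1, 1000)

def dr(f):
    # f'_r(x) = (f(succ x) - f(x)) / (succ x - x), succ x = x + h
    return lambda x: (f(x + h) - f(x)) / h

f = lambda x: x * x
g = lambda x: 3 * x + 2
fg = lambda x: f(x) * g(x)

for k in range(-50, 50):
    x = k * h
    lhs = dr(fg)(x)
    # exact product rule: (fg)'_r = f'_r * g + f(succ x) * g'_r
    rhs = dr(f)(x) * g(x) + f(x + h) * dr(g)(x)
    assert lhs == rhs   # exact equality, not an approximation
```

The same exactness holds for the second variant and for the quotient rule, since all of them come from adding and subtracting one cross term in a finite difference quotient.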

Remark: The right exact derivative of the inverse function

f^{-1}′_{r}(y) = 1/f′_{r}(x)

can be derived from y = f(x) and the identity x = f^{-1}(f(x)) using the chain rule with an equal level of precision. L'Hôpital's rule also makes sense for (α-)continuous functions f and g such that f(w) = g(w) = 0 with w ∈ A and f(↷ w) and g(↷ w) not both zero, and may be stated as:

f(↷ w)/g(↷ w) = f′_{r}(w)/g′_{r}(w).

Remark: Differentiability is thus easy to establish. In the (conventionally) (infinite) real case, we can give the alternative definition

F′_{b}B(w) := (F(↷B w) - F(↶B w))/(↷B w - ↶B w)

wherever this quotient is defined. This is especially useful when ↷B w - w = w - ↶B w, and the combined derivatives both have the same sign. This definition has the advantage of allowing us to view F′_{b}B(w) as the "tangent slope" at the point w, especially when F is αB-continuous in w. It also results in simpler rules of differentiation, in particular since a derivative value of 0 is most suitable for cases with opposite signs (see below). In other cases, we simply calculate the arithmetic mean of both exact derivatives. This can be extended to the conventional complex numbers analogously.
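The behaviour of the two-sided quotient at a point where the one-sided derivatives have opposite signs can be seen with f(x) = |x| at 0; the finite rational step h below is a stand-in for the infinitesimal spacing (an assumption):

```python
from fractions import Fraction

# Right, left, and two-sided exact derivatives on a grid of spacing h.
h = Fraction(1, 100)
f = lambda x: abs(x)   # kink at 0

d_r = lambda x: (f(x + h) - f(x)) / h
d_l = lambda x: (f(x) - f(x - h)) / h
d_b = lambda x: (f(x + h) - f(x - h)) / (2 * h)  # "tangent slope" variant

assert d_r(0) == 1 and d_l(0) == -1  # opposite signs at the kink
assert d_b(0) == 0                   # the two-sided value is 0, as stated
assert d_b(1) == 1                   # away from the kink all three agree
```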

Definition: Given z ∈ A ⊆ ^{(ω)}ℍ^{n} and when dBz resp. ↷B z and the right-hand side exist in ^{ω}ℍ^{n}, we define

∫_{A} f(z) dBz := ∑_{z ∈ A} f(z) dBz = ∑_{z ∈ A} f(z) (↷B z - z)

to be the *exact B-integral* of the *vector field* f = (f_{1}, ..., f_{n}): A → ^{(ω)}ℍ^{n} on A, and we say that f(z) is *B-integrable*. If this requires us to remove at least one point from A, then we say that the exact B-integral is *improper*. For γ: [a, b[C → A ⊆ ^{(ω)}ℍ^{n}, C ⊆ ℝ and f = (f_{1}, ..., f_{n}): A → ^{(ω)}ℍ^{n}, we say that

∫_{γ} f(ζ) dBζ = ∫_{[a,b[C} f(γ(t)) γ′_{↷}D(t) dDt,

where dDt > 0, ↷D t ∈ ]a, b]C, choosing ↷B γ(t) = γ(↷D t), since ζ = γ(t) and dBζ = γ(↷D t) - γ(t) = γ′_{↷}D(t) dDt (i.e. in particular for C = ℝ, B maximal in ^{ω}ℂ^{2}, and D maximal in ^{ω}ℝ^{2}), is the *exact B-line integral* of the vector field f along the path γ, provided that the right-hand side exists in ℍ^{n}. Improper exact B-line integrals are defined analogously to exact B-integrals, except that only interval end points may be removed from [a, b[C.

Standard estimate: For n = 1 and M := max |f(ζ)| on γ, by successively applying the triangle inequality, we have that

|∫_{γ} f(ζ) dBζ| ≤ ∑_{t ∈ [a,b[C} |f(γ(t))| |γ(↷D t) - γ(t)| ≤ M ∑_{t ∈ [a,b[C} |γ(↷D t) - γ(t)|,

where the right-hand sum is known as the *Euclidean path length* L(γ), including in the case n > 1.

Remark: The conventionally real case is defined analogously to the above. It is clear that exact integration is a special case of summation. The value of the exact line integral on ^{(κ)}ℍ is usually consistent with the conventional path integral; however, f does not need to be continuous and the conditions for the existence of the B-line integral are otherwise significantly less strict. It can easily be seen that the exact B-path integral is linear and monotone in the (conventional) (infinite) real case. The art of integration lies in correctly combining the summands.
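Since exact integration is a special case of summation, a line integral can be modelled directly as a finite sum. The sketch below (plain Python floats, with a finite N standing in for the infinite number of grid points, an assumption) integrates f(z) = 2z along a discretised segment from 0 to 1+i and compares the sum with the telescoping value F(end) - F(start) for F(z) = z²:

```python
# Exact line integral as a finite sum: sum of f(z_k) * (z_{k+1} - z_k).
N = 10 ** 5
end = 1 + 1j
pts = [end * k / N for k in range(N + 1)]   # discretised segment 0 -> 1+1j

integral = sum(2 * pts[k] * (pts[k + 1] - pts[k]) for k in range(N))
exact = end ** 2 - 0 ** 2                   # F(end) - F(0) for F(z) = z^2

# The discrepancy is the sum of the squared steps, of size |end|^2 / N.
assert abs(integral - exact) < 1e-4
```

Refining the grid shrinks the discrepancy linearly in 1/N, mirroring the O(dBz) correction terms that the exact calculus keeps track of explicitly.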

Definition: For z ∈ A = A_{1}× ... ×A_{n} ⊆ ^{(ω)}ℍ^{n} and each z_{k} ∈ A_{k} with a uniquely determined neighbour ↷B_{k} z_{k} and neighbourhood relations B_{k} ⊆ A_{k}×^{(ω)}ℍ for all k ∈ ^{ω}ℕ*_{≤n} and B = B_{1}× ... ×B_{n}, we say that

∫_{A} f(z) dBz := ∑_{z ∈ A} f(z) dB_{1}z_{1} ⋯ dB_{n}z_{n}

is the *exact B-volume integral* of the *B-volume integrable* function f: A → ^{(ω)}ℍ^{m}, provided that the right-hand side exists in ^{ω}ℍ^{n}. Improper exact B-volume integrals are defined analogously to exact B-integrals.

Remark: It is clear that

∫_{A} f(x) dBx = ∑_{x ∈ A} f(x) h^{n}

if dBx_{k} = h for every x in a h-homogeneous set A ⊆ ^{(ω)}ℝ^{n} and all k ∈ [1, n]ℕ, and that the conditions stated above for the exact h-measure are fulfilled.
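A volume integral over an h-homogeneous box is then just this sum of f times h^{n}; the finite h below is a stand-in for the infinitesimal grid width (an assumption):

```python
# Volume integral as a finite sum over an h-homogeneous grid of [0,1)^2.
h = 0.01
n = 2
grid = [(i * h, j * h) for i in range(100) for j in range(100)]

vol = sum(h ** n for _ in grid)             # integral of f = 1
assert abs(vol - 1.0) < 1e-9                # equals mu_h([0,1)^2)

avg_x = sum(x * h ** n for (x, _) in grid)  # integral of f(x, y) = x
assert abs(avg_x - 0.495) < 1e-9            # = (1 - h)/2 on this grid
```

The second value shows the h-dependent correction (1 - h)/2 instead of the conventional ½, exactly the kind of O(h) term the exact calculus retains.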

Remark: Analogously to the above, an alternative definition of the exact volume integral may be given. However, the original definitions are easier to manipulate. In some cases, suitable Landau notation may be useful. If the result of differentiation lies outside of the domain, it should be replaced by the closest number within the domain. If this is not uniquely determined, the result can either be given as the set of all such numbers, or we can select the preferred result (e.g. according to a uniform rule).

Example: Let [a, b[h^{ω}ℤ be a non-empty h-homogeneous subset of [a, b[^{ω}ℝ, and write B = ]a, b]h^{ω}ℤ. For h = 1/κ and κ = -a = b – h, [a, b[h^{ω}ℤ is comparable with ^{κ}ℝ. Now let T_{r} be a right B-antiderivative of a not necessarily convergent Taylor series t on [a, b[h^{ω}ℤ and define f(x) := t(x) + ε(-1)^{x/h} for conventionally real x and ε ≥ 1/κ. For h = 1/κ, f is nowhere continuous, and thus is conventionally nowhere differentiable or integrable on [a, b[h^{ω}ℤ, but for all h we have the following exact result

and

Example: The interval Q := [0, 1[ℚ has measure µ_{d0}(Q) = ½ (cf. Set Theory). Consider the function q: [0, 1[ → {0, 1} defined by q(x) = 1 for x ∈ Q and q(x) = 0 for x ∈ [0, 1[ \ Q. Then

∫_{[0,1[} q(x) dBx = µ_{d0}(Q) = ½.

Example: The middle-thirds Cantor set C_{⅓} has measure µ_{d0}(C_{⅓}) = (⅔)^{⌊ω⌋}. Consider the function c: [0, 1] → {0, (⅔)^{-⌊ω⌋}} defined by c(x) = (⅔)^{-⌊ω⌋} for x ∈ C_{⅓} and c(x) = 0 for x ∈ [0, 1] \ C_{⅓}. Then

∫_{[0,1]} c(x) dBx = (⅔)^{-⌊ω⌋} µ_{d0}(C_{⅓}) = 1.

Remark: The sets Q and C_{⅓} are conventionally non-measurable. Thus, exact integration is more general than Riemann or Lebesgue(-Stieltjes) integration and other types of integral, which only exist on conventionally measurable sets. We only consider simple examples here for purposes of illustration; more complex examples can of course be found.

Definition: A *sequence* (a_{k}) with *members* a_{k} is a mapping from ^{(ω)}ℤ to ^{(ω)}ℂ^{m}: k ↦ a_{k}. A *series* is a sequence (s_{k}) with m ∈ ^{(ω)}ℤ and *partial sums*

s_{k} := ∑_{j=m}^{k} a_{j}.

Remark: The associative, commutative, and distributive laws imply that sums can be arbitrarily rearranged. If we take care to calculate it correctly (using Landau symbols), the B-volume integral satisfies the following theorem:

Fubini's theorem: For A_{1}, A_{2} ⊆ ^{(ω)}ℍ, f: A_{1}×A_{2} → ^{(ω)}ℍ satisfies

∫_{A_{2}} ∫_{A_{1}} f(z_{1}, z_{2}) dBz_{1} dBz_{2} = ∫_{A_{1}} ∫_{A_{2}} f(z_{1}, z_{2}) dBz_{2} dBz_{1}.

Proof: The claim follows immediately by reordering the sums corresponding to these integrals.⃞
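Because the exact volume integral is a finite (if infinitely long) sum, Fubini here really is just a reordering of summands; a finite model (random sample grids and a sample integrand, both assumptions) makes this concrete:

```python
import random

# Fubini as reordering of finite sums over a product grid A1 x A2.
random.seed(0)
A1 = [random.uniform(-1, 1) for _ in range(50)]
A2 = [random.uniform(-1, 1) for _ in range(50)]
f = lambda x, y: x * y + y * y

s12 = sum(sum(f(x, y) for x in A1) for y in A2)  # inner over A1
s21 = sum(sum(f(x, y) for y in A2) for x in A1)  # inner over A2
assert abs(s12 - s21) < 1e-9   # same summands, same total
```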

Example: Since

by the principle of latest substitution (see below), we obtain the (improper) integral

and not

Remark: Thus, in particular, the Riemann series theorem is false: when summing positive summands up to a desired value, we are forced to add negative values until the original sum is attained, and vice versa. The same is true in the case that we obtain a smaller or larger value than the sum of the positive or negative terms, since the remainders almost fully cancel, and so on. We must avoid choosing arbitrary methods to deal with infinities if we wish to avoid making mistakes. We must not make the mistake of thinking that something no longer exists simply because it is stored at infinity.

Finiteness criterion for series: The Euclidean norm of the partial sum with the largest index of a series (s_{k}), where k and j are infinite natural numbers, is finite if and only if it may be represented in the form

where the ||a_{j} - b_{j}|| form a finite, monotonically decreasing sequence for a_{j}, b_{j} ∈ ^{(ω)}ℂ^{m}.

Proof: The claim follows directly from the finiteness of ||a_{1} - b_{1}|| and the ability to arbitrarily rearrange summands, sort them according to their signs and sizes, and recombine them or split them into separate sums.⃞

Example: From the alternating harmonic series, it follows that

Remark: In more interesting examples, it is harder to show that |a_{j} - b_{j}| is monotonically decreasing, e.g. in the case of the divergent series

where the c_{j} are monotonically increasing, but c_{j+1} - c_{j} is monotonically decreasing.

Theorem: The number s := 1 - 1/κ is an upper bound for arguments that yield conventionally real sums when substituted as an argument into the geometric series.

Proof: The claim follows directly from κ (1 - 1/κ)^{⌈ω⌉} ≈ 0.⃞

Remark: With the notation of the preceding theorem, the standard proofs given in the literature show that the sum of a power series with members a_{n}x^{n} is conventionally real whenever its radius of convergence is ≤ s/lim sup |a_{n}|^{1/n}.

Finiteness criterion for products (analogue of the fundamental theorem): The product

P := ∏_{k} (1 + a_{k}),

where k ∈ ℕ* and a_{k} ∈ ^{κ}ℝ_{>0}, is finite whenever the finiteness of

S := ∑_{k} a_{k}

implies that e^{S} is at most finite.

Proof: By considering the exponential series, the claim follows directly from S < P < e^{S}.⃞

Remark: Products with a_{k} ∈ ^{κ}ℂ are finite if and only if their moduli are finite. Factors with modulus < 1 must be taken into account by pairing them off with factors of modulus > 1, e.g. by considering the product of their reciprocals.
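The bounds S < P < e^{S} used in the proof can be checked numerically; the sample size and the range of the a_{k} below are assumptions for illustration:

```python
import math
import random

# For positive a_k: S < P < e^S, where S = sum(a_k), P = prod(1 + a_k).
random.seed(0)
a = [random.uniform(1e-6, 0.01) for _ in range(1000)]

S = sum(a)
P = math.prod(1 + t for t in a)

assert S < P < math.exp(S)   # the sandwich from the exponential series
```

The lower bound holds because P ≥ 1 + S when the factors are expanded and all cross terms are positive; the upper bound holds termwise since 1 + t < e^{t} for t > 0.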

Definition: A sequence (a_{k}) with k ∈ ^{(ω)}ℕ*, a_{k} ∈ ^{(ω)}ℂ and α ∈ ]0, 1/κ] is said to be *α-convergent* to a ∈ ^{(ω)}ℂ if there exists q ∈ ℕ* satisfying |a_{k} - a| < α for all k ≥ q such that the difference max k - q is not too large. The set α-A of all such a is called the *set of α-limit values* of (a_{k}). An appropriately and uniquely determined representative of this set (e.g. the final value or the mean value) is called the *α-limit value* α-a. In the special case a = 0, we say that the sequence is a *zero sequence*. If the inequality only holds for α = 1/κ, the α- is omitted from the notation.

Remark: We will usually choose k to be maximal and α to be minimal. Conventional limit values are often only precise to less than O(1/⌊ω⌋) and are in general too imprecise, since they are often e.g. (arbitrarily) algebraic (of a certain degree) or transcendental. The conventional formulation of the definition of conventional convergence, which always requires infinitely many or almost all members of the sequence to have an arbitrarily small distance from the limit value and only allows finitely many to have a larger distance, needs to be extended, since otherwise only the largest index of each sequence is taken into account and considered to be relevant (cf. [813], p. 144, see Bibliography). Only then is monotone convergence valid (cf. [813], p. 155).

Remark: The statement that each positive number may be represented by a fully determined, unique, infinite decimal fraction is baseless, since the proof of the irrationality of √2 can also be applied to infinite decimal fractions (see p. 27 f.). Furthermore, any proof claiming that, for ε ∈ ^{(ω)}ℝ_{>0} - in particular whenever the phrase "for all conventionally real ε > 0" is used - there exists a real number ε/r with real r ∈ ^{(ω)}ℝ_{>1}, is false, because we can simply set ε := ↷0, or else become stuck in an infinite regress. Therefore, in the εδ-definition of the limit value (it is questionable that δ exists, p. 235 f.) and in the εδ-definition of continuity (see p. 215) (consider for example the real function that doubles every real value but is not even uniformly continuous), ε must be restricted to specific multiples of ↷0.

Remark: The concept of uniform continuity is superfluous, since in general we can set δ := ↷0 and ε accordingly larger. If the conditions are not satisfied for two function values, then the function is not continuous at that point. Thus, continuity is equivalent to uniform continuity, by choosing the largest ε from all admissible infinitesimal values. It is also easy to show that continuity is equivalent to Hölder continuity, provided that we allow infinite real constants. The same is true for uniform convergence, since we can simply choose the maximum of the indices satisfying each argument as the index that satisfies everything, and ⌊ω⌋ is sufficient in every case. If this is not true for a given argument, then pointwise convergence also fails. Thus, uniform convergence is equivalent to pointwise convergence, by choosing the largest of all admissible infinitesimal values.

Remark: Since between any two rational numbers there are infinitely many algebraic numbers of higher degree (see Set Theory), the principle of nested intervals is invalid (cf. p. 158). The definition of the real numbers via Dedekind cuts is therefore just as unsuitable as its definition via equivalence classes of rational Cauchy sequences (see p. 29 ff.). The best definition is therefore based on homogeneity, defining the real numbers as infinite integer multiples (i.e. including integers that do not lie in ℤ) of ↷0. The above remarks illustrate why conventional analysis cannot be preserved in its existing form.

Examples (cf. p. 540 - 543 with n ∈ ^{ω}ℕ* and x ∈ [0, 1] in each case):

1. The sequence f_{n}(x) = sin(nx)/√n does not tend to f(x) = 0 as n → ⌊ω⌋, but instead to f(x) = sin(⌊ω⌋x)/√⌊ω⌋ with (continuous) derivative f′(x) = cos(⌊ω⌋x) √⌊ω⌋ instead of f′(x) = 0.

2. The sequence f_{n}(x) = x - x^{n}/n does not tend to f(x) = x as n → ⌊ω⌋, but instead to f(x) = x - x^{⌊ω⌋}/⌊ω⌋ with (continuous) derivative f′(x) = 1 - x^{⌊ω⌋-1} instead of f′(x) = 1. Conventionally, the limit of f′_{n}(x) = 1 - x^{n-1} is discontinuous at the point x = 1.

3. The sequence f_{n}(x) = (n^{2}/2 - |n^{3}(x - 1/(2n))|)(1 - sgn(x - 1/n)) (or alternatively, expressed in terms of continuously differentiable functions

does not always tend to 0 as n → ⌊ω⌋, but instead tends to different values depending on the value of x (replace n by ⌊ω⌋ in f_{n}(x)). Furthermore, we have that

and

instead of

supposedly because f(x) = 0.

4. The sequence f_{n}(x) = (n/2 - |n^{2}(x - 1/(2n))|)(1 - sgn(x - 1/n)) (or alternatively, expressed in terms of continuously differentiable functions

does not always tend to 0 as n → ⌊ω⌋, but instead tends to different values depending on the value of x (replace n by ⌊ω⌋ in f_{n}(x)). Furthermore, we have that

instead of

supposedly because f(x) = 0.

5. The sequence f_{n}(x) = nx(1 - x)^{n} does not tend to f(x) = 0 as n → ⌊ω⌋, but instead to the continuous function f(x) = ⌊ω⌋x(1 - x)^{⌊ω⌋}, and takes the value 1/e when x = 1/⌊ω⌋.

These five examples illustrate the superiority of nonstandard analysis, and show that considering infinitesimal and infinite values is meaningful.
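Example 5 can be probed with a large finite n as a stand-in for ⌊ω⌋ (the choice n = 10^6 is an assumption): at x = 1/n the function value stays near 1/e, which the conventional pointwise limit f(x) = 0 misses entirely.

```python
import math

# f_n(x) = n * x * (1 - x)^n evaluated at x = 1/n, large finite n.
n = 10 ** 6
x = 1.0 / n
value = n * x * (1 - x) ** n      # = (1 - 1/n)^n

assert abs(value - 1 / math.e) < 1e-5   # close to 1/e, not to 0
```

The same moving-bump effect drives examples 3 and 4: the mass never vanishes, it is merely pushed towards an infinitesimal neighbourhood of 0.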

Commutativity theorem for integrals of α-limit values: Let A ⊆ ^{(ω)}ℝ be h-homogeneous and let (f_{j}) be a sequence of integrable functions f_{j}: A → ^{(ω)}ℝ with (infinite) natural j that are α_{1}-convergent to the integrable function f: A → ^{(ω)}ℝ. Then, from

we have that

Proof:

⃞

Remark: As long as we are careful to calculate correctly using Landau notation, we can exchange the order of differentiation or integration and summation (even) in (divergent) series. However, the conventional approach can lead to non-negligible error propagation in subsequent calculations, e.g. if α_{1}µ_{d0}(A) ≥ 1/ω. Whenever exchanging the order of operations, the principle of latest variable substitution must be followed, otherwise there may be discrepancies in the results. This immediately implies:

Strong permutation theorem: Arbitrary permutations of the order of (admissible) substitution of the same variables in sequences, derivatives, and integrals, yield the same results.

First fundamental theorem of exact differential and integral calculus for line integrals: The function

F(z) := ∫_{γ} f(ζ) dBζ,

where γ: [c, x[C → A ⊆ ^{(ω)}ℍ^{n}, C ⊆ ℝ, f = (f_{1}, ..., f_{n}): A → ^{(ω)}ℍ^{n}, c ∈ [a, b[C, and choosing ↷B γ(x) = γ(↷D x), (and so in particular for C = ℝ, B maximal in ^{ω}ℂ^{2}, and D maximal in ^{ω}ℝ^{2}), is exactly B-differentiable, and for all x ∈ [a, b[C and z = γ(x)

dB(F(z)) = F(↷Bz) - F(z) = dD(F ∘ γ)(x) = f(γ(x))γ′_{↷}D(x)dDx = f(z)dBz.

Proof:

Second fundamental theorem of exact differential and integral calculus for line integrals: Writing F instead of f in the above, if F is exactly successor-differentiable for t ∈ [a, b[C and its exact B-successor-derivative F′_{r}B is exactly B-line integrable at this point, then, choosing ↷B γ(t) = γ(↷D t) (and so in particular for C = ^{ω}ℝ, B maximal in ^{ω}ℂ^{2} and D maximal in ^{ω}ℝ^{2}), where γ: [a, b[C → ^{(ω)}ℍ^{n}, we have that

∫_{γ} F′_{↷}B(ζ) dBζ = F(γ(b)) - F(γ(a)).

Proof: From

(F(↷B γ(t)) - F(γ(t))) (↷B γ(t) - γ(t))/(↷D t - t) = (F(γ(↷D t)) - F(γ(t))) γ′_{↷}D(t) = F′_{↷}B(γ(t)) γ′_{↷}D(t) (↷B γ(t) - γ(t)) = (F ∘ γ)′_{↷}D(t) γ′_{↷}D(t) (↷D t - t)

we have that

Corollary: For a closed path γ, under the conditions above, we have that

∫_{γ} f(ζ) dBζ = 0

whenever f has an antiderivative F on γ.

Remark: The conventionally real case of both fundamental theorems may be established analogously. Given v, w ∈ [a, b[C, v ≠ w and γ(v) = γ(w), it may be the case that ↷B γ(v) ≠ ↷B γ(w). It should be noted that neither the integral nor the derivative is assumed to be continuous. Actual integration (as the inverse operation to differentiation) only makes sense for continuous functions if we wish to go beyond simple summation. However, if we can express the function values in the form of finitely many continuous functions whose antiderivatives may be computed in finite time, integrals may be calculated even for discontinuous functions, if necessary by appropriately applying the Euler-Maclaurin summation formula and other simplification techniques.

Remark: The higher the number of elements included in the integration, the greater the deviation in the value of the integral can become, even with identical interval limits. When we use alternative exact differentiation, the formulas change accordingly: the more continuous the functions in question, the smaller the deviation in the value. Here, and in general, choosing a suitable rounding strategy can be helpful.

Definition: The tightened *(right) exact B-integral according to the trapezoid rule* is defined by

∫_{A} f(z) dBz := ∑_{z ∈ A} ½(f(z) + f(↷B z)) dBz.

The tightened *(right) exact B-integral according to the midpoint rule* - assuming that (z + ↷B z)/2 exists - is defined by

∫_{A} f(z) dBz := ∑_{z ∈ A} f((z + ↷B z)/2) dBz.

Remark: Since this tightened exact B-integral is clearly independent of the direction, it may be (implicitly) used to justify theorems that cancel integral values in opposite directions, such as Green's theorem (see below). In the first fundamental theorem, the derivative dB(F(z))/dBz can be tightened to the arithmetic mean (f(z) + f(↷B z))/2 resp. f((z + ↷B z)/2), and similarly, in the second fundamental theorem, F(γ(b)) - F(γ(a)) can be tightened to (F(γ(b)) + F(↶B γ(b)))/2 - (F(γ(a)) + F(↷B γ(a)))/2 resp. F((γ(b) + ↶B γ(b))/2) - F((γ(a) + ↷B γ(a))/2), which yields approximately the original results when f and F are sufficiently α-continuous at the boundary. Since the exact B-integral is defined according to a rectangular rule, we can estimate the corresponding overall error relative to exact integration (see the literature on Numerical Mathematics).
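The error behaviour of the three rules can be compared on a smooth test function; the step h and the integrand f(x) = x² below are assumptions for illustration, with the conventional value 1/3 as the reference:

```python
# Rectangle (plain exact B-integral), trapezoid, and midpoint sums
# for f(x) = x^2 on [0, 1) with grid width h.
h = 1e-3
N = 1000
f = lambda x: x * x

rect = sum(f(k * h) * h for k in range(N))
trap = sum((f(k * h) + f((k + 1) * h)) / 2 * h for k in range(N))
mid = sum(f((k + 0.5) * h) * h for k in range(N))
exact = 1 / 3

# The tightened rules are an order of magnitude closer: O(h^2) vs O(h).
assert abs(trap - exact) < abs(rect - exact)
assert abs(mid - exact) < abs(rect - exact)
```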

Leibniz' differentiation rule: For f: ^{(ω)}ℍ^{n+1} → ^{(ω)}ℍ, a, b: ^{(ω)}ℍ^{n} → ^{(ω)}ℍ, ↷B x := (s, x_{2}, …, x_{n})^{T}, and s ∈ ^{(ω)}ℍ \ {x_{1}}, choosing ↷D a(x) = a(↷B x) and ↷D b(x) = b(↷B x), we have that

Proof:

Remark: We are integrating in the complex plane over a path whose start and end points are the limits of integration. If ↷D a(x) ≠ a(↷B x), then the final summand must be multiplied by (↷D a(x) - a(x))/(a(↷B x) - a(x)), and if ↷D b(x) ≠ b(↷B x), then the penultimate summand must be multiplied by (↷D b(x) - b(x))/(b(↷B x) - b(x)).

Definition: For a closed path γ: [a, b[A → ^{(ω)}ℂ and z ∈ ^{(ω)}ℂ, we say that

1/(2πi) ∫_{γ} dBζ/(ζ - z)

is the *winding number* or *index* ind_{γ}(z).

Integral formula: For f: A → ^{(ω)}ℂ and γ([a, b[) ⊆ A ⊆ ^{(ω)}ℂ,

f(z) ind_{γ}(z) = 1/(2πi) ∫_{γ} f(ζ)/(ζ - z) dBζ

if and only if

∫_{γ} g(ζ) dBζ = 0

for g(ζ) = (f(ζ) - f(z))/(ζ - z), and so in particular when g has an antiderivative on γ([a, b[).

Proof: The claim follows directly from the corollary to the second fundamental theorem.⃞

Remark: The winding number is 0 if the path does not wind around z (antiderivative ln(ζ - z)). For n ∈ ℕ*, the winding number is n (-n) if the path winds around z n times in the positive (negative) direction. This can easily be seen in the following figure, parametrising the circle λ by ζ = z + r e^{it}, with length 2πr, for t ∈ [0, 2π[ and r ∈ ℝ_{>0}:

Fig. 1

Example: Integrating 1/z over the boundary ∂đ of the unit disc đ resp. its translation by 2 gives the value 2πi resp. 0 (antiderivative ln(z + 2) on ∂đ + 2), and integrating |z|^{2} gives 0 (antiderivative z on ∂đ) resp. 4πi.

Remark: This example shows that the existence of the antiderivative of the integrand of a path integral and the value of this integral in general only depends on the integrand and the (oriented) path, and not on the composition of the underlying set, the interior or exterior of a path, or holomorphicity or (null-)homology as in conventional complex analysis.

Mean value equation: Given γ([0, 2π[) = ∂B_{r}(c) with c ∈ ^{(ω)}ℂ and r ∈ ^{(ω)}ℝ_{>0}, then if f: B_{r}(c) → ^{(ω)}ℂ satisfies the conditions of the integral formula, we have that
f(c) = 1/(2π) ∫_{0}^{2π} f(c + r e^{iφ}) dφ.
Proof: Substituting z = c + r e^{iφ} into the integral formula immediately proves the claim.⃞

Remark: Using the standard estimate, we can derive the

Mean value inequality: |f(c)| ≤ |f|_{γ}.
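Both statements can be checked numerically for a concrete holomorphic function. The sketch below is ours (the helper `circle_mean` is an assumption); it averages exp over equidistant points of a circle and compares with the centre value:

```python
import cmath

def circle_mean(f, c, r, n=4096):
    # Arithmetic mean of f over n equidistant points of the circle of
    # radius r around c -- the rectangular rule for the mean value integral.
    return sum(f(c + r * cmath.exp(2j * cmath.pi * k / n)) for k in range(n)) / n

c, r = 0.3 + 0.4j, 0.5
mean = circle_mean(cmath.exp, c, r)
center_value = cmath.exp(c)
print(mean, center_value)   # the two values agree closely
# mean value inequality: |f(c)| is bounded by the maximum of |f| on the circle
edge_max = max(abs(cmath.exp(c + r * cmath.exp(2j * cmath.pi * k / 4096)))
               for k in range(4096))
print(abs(center_value) <= edge_max)
```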

Definition: The coefficient a_{-1} of the function f: A → ^{(ω)}ℂ with A ⊆ ^{(ω)}ℂ and

with n ∈ ℕ, a_{i}, c, a_{ij}, c_{i} ∈ ^{(ω)}ℂ and pairwise distinct c_{i} ≠ c is called the *residue* res_{c} f.

Residue theorem: Given γ([a, b[) ⊆ A ⊆ ^{(ω)}ℂ, if f: A → ^{(ω)}ℂ may be represented as

with n ∈ ℕ, a_{ij}, c_{i} ∈ ^{(ω)}ℂ and c_{i} pairwise distinct, then we have
∫_{γ} f(z) dBz = 2πi Σ_{i=1}^{n} ind_{γ}(c_{i}) res_{c_{i}} f
for the closed path γ: [a, b[ → ^{(ω)}ℂ.

Proof: For all i ∈ ^{ω}ℕ_{≤n} and all j ∈ ℤ \ {-1}, we have
∫_{γ} (z - c_{i})^{j} dBz = 0
and
∫_{γ} dBz/(z - c_{i}) = 2πi ind_{γ}(c_{i}).
⃞
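The two ingredients of the proof, that powers with j ≠ -1 integrate to zero and that each simple pole contributes 2πi times its residue and winding number, can be illustrated numerically. This sketch is ours; the pole locations and residues are arbitrary test data:

```python
import cmath, math

def circle_integral(f, center=0, radius=1, n=20000):
    # rectangular-rule approximation of the closed path integral
    total = 0j
    for k in range(n):
        z0 = center + radius * cmath.exp(2j * math.pi * k / n)
        z1 = center + radius * cmath.exp(2j * math.pi * (k + 1) / n)
        total += f(z0) * (z1 - z0)
    return total

# two poles inside the unit circle (residues 3 and -2, winding number 1)
# and one outside it (winding number 0, hence no contribution)
f = lambda z: 3 / (z - 0.2) - 2 / (z - (0.1 + 0.3j)) + 5 / (z - 4)
R1 = circle_integral(f)                          # ≈ 2πi * (3 - 2)
R2 = circle_integral(lambda z: (z - 0.2) ** 3)   # ≈ 0: j ≠ -1 terms vanish
print(R1, R2)
```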

Intermediate value theorem: Let f: [a, b] → ^{(ω)}ℝ be α-continuous on [a, b]. Then f(x) takes every value between min f(x) and max f(x) to a precision of < α as x ranges over [a, b]. If f is continuous on ^{ω}ℝ, then it takes every value of ^{κ}ℝ between min f(x) and max f(x).

Proof: Between min f(x) and max f(x), there is a gapless chain of overlapping α-neighbourhoods centred around each f(x), by α-continuity of f. The second part of the claim follows from the fact that a deviation |f(↷x) - f(x)| < 1/κ or |f(x) - f(↶x)| < 1/κ in ^{κ}ℝ is smaller than the conventionally maximal admissible resolution.⃞
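The chain-of-neighbourhoods argument corresponds, in floating-point arithmetic, to the familiar bisection search. The following sketch is ours and only an analogy (the tolerance `alpha` stands in for the α of the theorem):

```python
def intermediate_value_point(f, a, b, y, alpha=1e-9):
    # Bisection sketch of the intermediate value theorem: assuming f is
    # sufficiently continuous and f(a) <= y <= f(b), locate x in [a, b]
    # whose value f(x) lies within roughly alpha of y.
    lo, hi = a, b
    while hi - lo > alpha:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = lambda t: t ** 3 - t
x = intermediate_value_point(f, 1.0, 2.0, 3.0)   # f(1) = 0 <= 3 <= 6 = f(2)
print(x, f(x))   # f(x) ≈ 3
```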

Extremum criterion: The function f has a local maximum at x_{0} as above if and only if f has a left exact derivative > 0 and a right exact derivative < 0 at x_{0}. Similarly, f has a local minimum at x_{0} if and only if f has a left exact derivative < 0 and a right exact derivative > 0 at x_{0}.

Proof: Clear from the definitions.⃞

Definition: The derivative of a function f: A → ^{(ω)}ℝ, where A ⊆ ^{(ω)}ℝ, is defined to be zero at a given point if and only if 0 lies in the interval bounded by the left and right exact derivatives there.
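A minimal illustration of this definition with f(x) = |x| at 0, where a small fixed step h stands in for the infinitesimal d0 (the sketch and its names are ours):

```python
h = 1e-8   # stand-in for the infinitesimal step d0

def one_sided_derivatives(f, x):
    # left and right difference quotients with fixed step h, mimicking
    # the left and right "exact derivatives" of the text
    return (f(x) - f(x - h)) / h, (f(x + h) - f(x)) / h

left, right = one_sided_derivatives(abs, 0.0)
print(left, right)   # -1.0 and 1.0; since 0 lies in [-1, 1],
                     # the derivative of |x| at 0 is defined to be 0
```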

Example: The (2d0)-continuous function f: ^{(ω)}ℝ → {0, d0} defined by

consists of only the local minima 0 and the local maxima d0, and only has the (left and right) exact derivatives ±1.

Definition: Let f: A → ^{(ω)}ℍ for A ⊆ ^{(ω)}ℍ. We say that

is the *second derivative* of f at z ∈ A in the direction ↷B z.

Higher (partial) derivatives are defined analogously. The number j ∈ ℕ of partial derivatives is written as an exponent after ∂, and the variables with respect to which we are differentiating are listed in the denominator, each preceded by ∂. Multiples of the same variable are indicated with an exponent corresponding to the number of times that the variable occurs. Taylor series only make sense for ⌊ω⌋-times α-continuously differentiable functions, due to approximating and convergence-related behaviour.

Inflection point criterion: The function f has a local left-to-right inflection point at x_{0} if and only if f has a left exact second derivative > 0 and a right exact second derivative < 0 at x_{0}, as above. Similarly, f has a local right-to-left inflection point at x_{0} if and only if f has a left exact second derivative < 0 and a right exact second derivative > 0 at x_{0}.

Proof: Equally clear from the definitions.⃞

Exchange theorem: The result of multiple partial derivatives of a function f: A → ^{(ω)}ℍ is independent of the order of differentiation, provided that variables are only evaluated and limits are only computed at the end, if applicable (*principle of latest substitution*).

Proof: The derivative is uniquely determined: This is clear up to the second derivative, and the result follows by (transfinite) induction for higher-order derivatives.⃞

Example: Let f: ^{ω}ℝ^{2} → ^{ω}ℝ be defined by f(0, 0) = 0 and f(x, y) = xy^{3}/(x^{2} + y^{2}) otherwise. Then:

with value ½ at the point (0, 0), even though the equation

is equal to y on the left for x = 0 and 0 on the right for y = 0. Partially differentiating the left-hand side with respect to y gives 1 ≠ 0, which is the partial derivative of the right-hand side with respect to x.
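The effect of substituting early versus late can be imitated with difference quotients: an inner step h for the first derivative and a much larger outer step H for the second. The sketch below is ours; the step sizes are arbitrary choices:

```python
def f(x, y):
    # the example function, extended by f(0, 0) = 0
    return x * y ** 3 / (x ** 2 + y ** 2) if (x, y) != (0.0, 0.0) else 0.0

h, H = 1e-8, 1e-3   # inner and outer difference steps (H >> h)

def fx(x, y):  # difference quotient for the partial derivative in x
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def fy(x, y):  # difference quotient for the partial derivative in y
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

m1 = (fx(0.0, H) - fx(0.0, -H)) / (2 * H)   # like substituting x = 0 first: ≈ 1
m2 = (fy(H, 0.0) - fy(-H, 0.0)) / (2 * H)   # like substituting y = 0 first: ≈ 0
print(m1, m2)
# with equal inner and outer steps (H = h), both quotients instead come
# out as 1/2, the single value obtained under latest substitution
```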

Theorem: Splitting F: A → ^{(ω)}ℂ into real and imaginary parts F(z) := U(z) + i V(z) := f(x, y) := u(x, y) + i v(x, y), and given infinitesimal h = |dBx| = |dBy|, h-homogeneous A ⊆ ^{(ω)}ℂ, with the neighbourhood relation B ⊆ A^{2} for all z = x + i y ∈ A, F is holomorphic and

is infinitesimal if and only if the *Cauchy-Riemann partial differential equations*
∂Bu/∂Bx = ∂Bv/∂By and ∂Bu/∂By = -∂Bv/∂Bx
are satisfied by B in both the ↷ direction and the ↶ direction.

Proof: Since we have

and dBz = dBx + i dBy for every derivative defined on A, and since

as well as the analogous formulas for v and in the ↶ direction, we have that

The assumptions allow us to neglect the final summand, and so the claim follows.⃞

Remark: The final summand may in particular be neglected whenever f is continuous. The equations for F′B(z) imply the following necessary and sufficient condition for F to be holomorphic:
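The Cauchy-Riemann combinations u_{x} - v_{y} and u_{y} + v_{x} can be tested with central differences; both vanish for a holomorphic function and at least one fails otherwise. This floating-point sketch is ours (the helper `cr_defects` is an assumption):

```python
def cr_defects(F, z, h=1e-6):
    # central-difference versions of the Cauchy-Riemann combinations
    # u_x - v_y and u_y + v_x; both vanish when F is holomorphic
    x, y = z.real, z.imag
    dFx = (F(complex(x + h, y)) - F(complex(x - h, y))) / (2 * h)
    dFy = (F(complex(x, y + h)) - F(complex(x, y - h))) / (2 * h)
    ux, vx = dFx.real, dFx.imag
    uy, vy = dFy.real, dFy.imag
    return ux - vy, uy + vx

d1 = cr_defects(lambda z: z * z, 1 + 2j)          # ≈ (0, 0): holomorphic
d2 = cr_defects(lambda z: z.conjugate(), 1 + 2j)  # (2, 0): not holomorphic
print(d1, d2)
```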

Definition: A (possibly infinite) real-valued function with arguments ∈ ^{(ω)}ℍ^{n} is said to be *convex (concave)* if every value taken by the function on arguments lying between two different arguments is non-strictly below (non-strictly above) the line connecting the two values taken by the function at these arguments. The function is said to be *strictly convex (concave)* if we can replace "non-strictly" by "strictly" in the above.
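The chord condition in this definition translates directly into a sampling test. The sketch below is ours and only probabilistic (the seed and trial count are arbitrary):

```python
import random

def convex_on_samples(f, a, b, trials=2000, tol=1e-12):
    # Monte-Carlo check of the definition: at points between two
    # arguments, the value must lie (non-strictly) below the chord.
    rng = random.Random(0)
    for _ in range(trials):
        x, y = rng.uniform(a, b), rng.uniform(a, b)
        t = rng.random()
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + tol:
            return False
    return True

c1 = convex_on_samples(lambda x: x * x, -2.0, 2.0)   # True
c2 = convex_on_samples(lambda x: x ** 3, -2.0, 2.0)  # False: concave for x < 0
print(c1, c2)
```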

Definition: When integrating along the same path in both the positive and the negative direction, we adopt the *counter-directional rule* for integrals: when following the path in the negative direction, we must choose the function value of the successor of each argument if the function is too discontinuous, so that the integral over both directions sums to 0.

Remark: We require this convention in order to ensure that integrals that we expect to sum to zero do in fact do so. Without it, they could potentially have a (significantly) different value.

Reduction theorem for particular line integrals: Let A ⊆ ^{(ω)}ℝ^{n} be a simply connected, d0-homogeneous set of n-cubes, and let the path γ: [a, b[C → A with C ⊆ ℝ pass each edge of length d0 of every n-cube of A exactly once, such that the opposite edges in every two-dimensional face of each n-cube are traversed in reverse, but uniform, direction. Put f: A → ^{(ω)}ℝ, γ(t) = x := (x_{1}, ..., x_{k}, ..., x_{n}), k, n ∈ ℕ*, γ(↷D t) = ↷B x, D ⊆ ℝ^{2}, B ⊆ A^{2} and E := {dBx_{k} : x = γ(t) ∈ A, -dBx_{k} ∉ E}. Then the edges determined by E are exactly those contained in ∂^{n-1}A, and we have that

Proof: Only the edges belonging to E are not passed in both directions for the same function value. We obtain this result if we consider two arbitrary squares with common edge of length d0 included in one plane. The edges above can therefore only be contained in ∂^{n-1}A.⃞

Remark: It is not difficult to transfer both definition and theorem to the complex numbers.

Green's theorem: Given neighbourhood relations B ⊆ A^{2} for some simply connected h-set A ⊆ ^{(ω)}ℝ^{2}, infinitesimal h = |dBx| = |dBy| = |↷B γ(t) - γ(t)| = O(ω^{-m}) for sufficiently large m ∈ ℕ*, (x, y) ∈ A, A^{‒} := {(x, y) ∈ A : (x + h, y + h) ∈ A}, and a simply closed path γ: [a, b[ → ∂A followed anticlockwise, choosing ↷B γ(t) = γ(↷D t) for t ∈ [a, b[ and D ⊆ [a, b]^{2}, the following equation holds for sufficiently α-continuous functions u, v: A → ^{(ω)}ℝ with (not necessarily continuous) partial derivatives ∂Bu/∂Bx, ∂Bu/∂By, ∂Bv/∂Bx and ∂Bv/∂By:
∫_{γ} (u dBx + v dBy) = ∫_{(x,y) ∈ A^{‒}} (∂Bv/∂Bx - ∂Bu/∂By) dB(x, y).
Proof: We will prove the case A := {(x, y) : c ≤ x ≤ d, f(x) ≤ y ≤ g(x)}, c, d ∈ ^{(ω)}ℝ, f, g : ∂A → ^{(ω)}ℝ wlog, since the proof is analogous for each case rotated by 90°, and every simply connected h-set is a union of such sets. We will simply show that
∫_{γ} u dBx = -∫_{(x,y) ∈ A^{‒}} ∂Bu/∂By dB(x, y),
since the other relation may be shown analogously. Since the regions of γ where dBx = 0 do not contribute to the integral, for negligibly small t := h(u(d, g(d)) – u(c, g(c))), we have that

⃞
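For a concrete region, the agreement of line integral and double integral can be checked numerically. The following sketch is ours; it uses the unit square, midpoint rules, and an arbitrary smooth test pair u, v:

```python
import math

def green_both_sides(u, v, n=200):
    hs = 1.0 / n
    # line integral of u dx + v dy over the boundary of the unit square,
    # traversed counter-clockwise, midpoint rule on each edge
    line = 0.0
    for k in range(n):
        t = (k + 0.5) * hs
        line += u(t, 0.0) * hs          # bottom edge: dx = +hs
        line += v(1.0, t) * hs          # right edge:  dy = +hs
        line -= u(1.0 - t, 1.0) * hs    # top edge:    dx = -hs
        line -= v(0.0, 1.0 - t) * hs    # left edge:   dy = -hs
    # double integral of v_x - u_y over the square: midpoint rule in
    # space, central difference quotients for the partial derivatives
    d = 1e-6
    area = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * hs, (j + 0.5) * hs
            vx = (v(x + d, y) - v(x - d, y)) / (2 * d)
            uy = (u(x, y + d) - u(x, y - d)) / (2 * d)
            area += (vx - uy) * hs * hs
    return line, area

u = lambda x, y: math.sin(x) * y
v = lambda x, y: x * x * y
line_val, area_val = green_both_sides(u, v)
print(line_val, area_val)   # both sides agree closely (≈ cos 1 - 1/2)
```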

Remark: The choice of m depends on the required number of sets of the type specified in the above proof, the union of which yields the simply connected h-set.

Goursat's integral lemma: If f is holomorphic on a triangle Δ ⊆ ^{(ω)}ℂ but does not have an antiderivative on Δ, then

Refutation of conventional proofs based on estimation by means of a complete triangulation: The direction in which ∂Δ is traversed is irrelevant. If Δ is fully triangulated, then wlog every minimal triangle Δ_{s} ⊆ Δ must either satisfy

or

where z_{1}, z_{2} and z_{3} represent the vertices of Δ_{s}. By holomorphicity and cyclic permutations, this can only happen for f(z_{1}) = f(z_{2}) = f(z_{3}). If we consider every adjacent triangle to Δ, we deduce that f must be constant, which contradicts the assumptions. This is because the term in large brackets is translation-invariant, since otherwise we can set z_{3} := 0 wlog, making this term 0, in which case z_{1} = z_{2}(1 ± i√3)/2 and |z_{1}| = |z_{2}| = |z_{1} – z_{2}|. However, since every horizontal and vertical line is homogeneous on ^{(ω)}ℂ, this cannot happen, otherwise the corresponding sub-triangle would be equilateral and not isosceles and right-angled. Therefore, in both cases, |I_{s}| must be at least |f′(z_{2}) O(d0^{2})|, by selecting the vertices 0, |d0| and i|d0| wlog. Denoting the perimeter of a triangle by L, then we have that |I| ≤ 4^{m} |I_{s}| for an infinite natural number m, and also 2^{m} = L(∂Δ)/ |O(d0^{2})| since L(∂Δ) = 2^{m} L(∂Δ_{s}) and L(∂Δ_{s}) = |O(d0^{2})|. Therefore, we have that |I| ≤ |f′(z_{2}) L(∂Δ)^{2}/O(d0^{2})|, causing the desired estimate |I| ≤ |O(dBζ)| to fail, for example if |f′(z_{2}) L(∂Δ)^{2}| is larger than |O(d0^{2})|.⃞

Cauchy's integral theorem: Given the neighbourhood relations B ⊆ A^{2} and D ⊆ [a, b]^{2} for some simply connected h-set A ⊆ ^{ω}ℂ, infinitesimal h, a holomorphic function f: A → ^{ω}ℂ and a closed path γ: [a, b[ → ∂A, choosing ↷B γ(t) = γ(↷D t) for t ∈ [a, b[, we have that
∫_{γ} f(z) dBz = 0.
Proof: By the Cauchy-Riemann partial differential equations and Green's theorem, with x := Re z, y := Im z, u := Re f, v := Im f and A^{‒} := {z ∈ A : z + h + ih ∈ A}, we have that

Fundamental theorem of algebra: For every non-constant polynomial P over ^{(ω)}ℂ, there exists some z ∈ ^{(ω)}ℂ such that P(z) = 0.

Indirect proof: By performing an affine substitution of variables, we can reduce to the case 1/P(0) ≠ O(d0). Suppose that P(z) ≠ 0 for all z ∈ ^{(ω)}ℂ. Since f(z) := 1/P(z) is holomorphic, we have that f(1/d0) = O(d0). The mean value inequality yields |f(0)| ≤ |f|_{γ} for γ = ∂B_{r}(0) and arbitrary r ∈ ^{(ω)}ℝ_{>0}, and hence f(0) = O(d0), which is a contradiction.⃞
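Numerically, the existence of the roots asserted by the theorem can be exhibited with the Weierstrass (Durand-Kerner) iteration, which refines all root candidates of a monic polynomial simultaneously. This sketch is ours and not part of the text's argument; the starting values are a standard generic choice:

```python
def durand_kerner(coeffs, iterations=100):
    # Durand-Kerner iteration for a monic polynomial given by its
    # coefficients, highest degree first; returns all n root candidates.
    n = len(coeffs) - 1
    def p(z):
        r = 0j
        for c in coeffs:
            r = r * z + c   # Horner evaluation
        return r
    roots = [(0.4 + 0.9j) ** (k + 1) for k in range(n)]  # distinct generic starts
    for _ in range(iterations):
        for i in range(n):
            q = 1 + 0j
            for j in range(n):
                if j != i:
                    q *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / q
    return roots

roots = durand_kerner([1, -6, 11, -6])   # z³ - 6z² + 11z - 6 = (z-1)(z-2)(z-3)
print(sorted(r.real for r in roots))     # ≈ [1.0, 2.0, 3.0]
```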

All conventional complex functions f: A → ^{(ω)}ℂ, where A ⊆ ^{(ω)}ℂ, below are implicitly defined in such a way as to satisfy max |f(z)| ≤ r := max ^{(ω)}ℝ and f(z) = r f(z)/|f(z)| when |f(z)| > r and also when f = id (i.e. we write ^{(ω)}ℂ differently for B_{≤r}(0)). The entire functions f(z) = z/ω and g(z) = Σa_{k} z^{k} with k ∈ ℕ and a_{k} = 1/ω^{k+1} give counterexamples to Liouville's theorem and Picard's little theorem.

Proof: Since |f(z)| ≤ 1 and |g(z)| < 1, the claim follows directly.⃞

Remark: Choosing sufficiently small (transcendental) constants in the generalised Liouville theorem shows that it does not hold. Neither theorem can be saved by introducing restrictions, since if a function h is holomorphic on ^{ω}ℂ the Laurent polynomial (Laurent series) of h necessarily has coefficients a_{k} such that |a_{k}| < O(ω^{-|k|}) (O(ω^{-|k|-1})) and k is an (infinite) integer (in order to converge), unless the polynomial (series) is constant anyway. Therefore, restricting to coefficients ≥ 1/ω does not help.

The function f gives a biholomorphic (and hence bijective) mapping from the circle definition of ^{κ}ℂ to the highly dense complex unit disc đ_{d}, where |đ_{d}| = |^{κ}ℂ| ≫ |đ|. Thus, the Riemann mapping theorem also holds for ℂ. It is of course not possible to map all of ^{κ}ℂ in this way. The function 1/f gives a counterexample to Picard's great theorem when ^{ω}ℂ is taken to be arbitrarily dense.

Definition: A point z_{0} ∈ M ⊆ ^{(ω)}ℂ^{n} or belonging to a sequence (a_{k}) for a_{k} ∈ ^{(ω)}ℂ^{n} and an (infinite) natural number k is called a *(proper) α-accumulation point* of M or of the sequence, if the ball B_{α}(z_{0}) ⊆ ^{(ω)}ℂ^{n} with centre z_{0} and infinitesimal radius α contains infinitely many points from M or infinitely many pairwise distinct members of the sequence. If the claim holds for α = 1/ω, the α-accumulation point is simply called an accumulation point.

Let p(z) = ∏_{k}(z - c_{k}) with k ∈ ^{ω}ℕ and z ∈ ^{ω}ℂ be an infinite product with pairwise distinct zeros c_{k} ∈ B_{1/⌊ω⌋}(0) ⊂ ^{ω}ℂ (the disc around 0 with radius 1/⌊ω⌋), chosen in such a way that |f(c_{k})| < 1/⌊ω⌋ for a function f holomorphic on a region G ⊆ ^{ω}ℂ and that f(0) = 0. Suppose that G contains B_{1/⌊ω⌋}(0) completely. This can always be achieved by means of coordinate transformations provided that G is sufficiently "large".

Then the coincidence set {ζ ∈ G : f(ζ) = g(ζ)} of the function g(z) := f(z) + p(z), which is also holomorphic on G, contains an accumulation point at 0, and f ≠ g, contradicting the statement of the identity theorem. Examples of such f include functions with a zero at 0 that are restricted to B_{1/⌊ω⌋}(0) and simultaneously holomorphic on G. Since p(z) can take every conventional complex number, the deviation between f and g is non-negligible.

The identity theorem is also contradicted by the fact that all derivatives d^{(n)}(z_{0}) = h^{(n)}(z_{0}) of two functions d and h can be equal at a point z_{0} ∈ G for all n, but that they can be significantly different further away from this local behaviour without ceasing to be holomorphic, since not every holomorphic function can be (uniquely) developed into a Taylor series due to the approximation (of differentiation) and computation with Landau symbols (see Transcendental Numbers).

Extending to k ∈ ℕ allows entire functions with an infinite natural number of zeros to be constructed. The set of zeros is not necessarily discrete. Thus, the set of all functions that are holomorphic on a region G may contain zero divisors. Functions such as polynomials with n > 2 pairwise distinct zeros once again give counterexamples to Picard's little theorem, since they omit at least n - 1 values in ^{ω}ℂ.

Remark: For n ∈ ^{ω}ℕ*, we can approximate ζ(2n+1) by

We can then decompose the summands in the first sum into a sum of partial fractions with zeros at the n-th roots of unity, continuing this expansion until a sufficient approximation is achieved, since the final sum is precisely ζ(4n+2). When doing so,

for z ∈ ^{ω}ℂ \ ^{ω}ℕ* is characterised by z and the difference with ln ⌈ω⌉. This method of expansion can be extended as follows (generalisation to infinite numbers also possible):

Theorem: A series whose members are similarly constructed from fractions of polynomials in k ∈ ^{ω}ℕ* with complex-rational coefficients may be represented to arbitrary conventionally real precision as the sum of ζ(2n) values with n ∈ ^{ω}ℕ*, multiplied by conventionally complex constants, or of F′(z) values for z ∈ ^{ω}ℂ, together with a conventionally complex constant.

Proof: Successively expand into partial fraction decomposition and estimate the rapidly converging residual series.⃞

Remark: The values of F′(z) can be easily calculated with the Euler-Maclaurin summation formula, or in the real case using the Euler transformation and the harmonic series. This uniform method for these series accelerates convergence and avoids having to directly calculate difficult integrals or derivatives of polynomial quotients, and provides the foundation for a value-based classification of series that increases additively with the precision, exactly specified up to the given precision.
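The Euler transformation mentioned here can be sketched in exact rational arithmetic. The block below is our illustration, not the text's construction: it accelerates the alternating harmonic series for ln 2, where Δ denotes the forward difference a_{k} - a_{k+1}:

```python
from fractions import Fraction
import math

def euler_transform_sum(terms):
    # Euler transformation of the alternating series sum (-1)^k a_k:
    # it equals sum over n of (Δ^n a)_0 / 2^(n+1), with the forward
    # difference Δa_k = a_k - a_(k+1); convergence is much faster.
    table = [Fraction(t) for t in terms]
    total = Fraction(0)
    for n in range(len(terms)):
        total += table[0] / 2 ** (n + 1)
        table = [table[i] - table[i + 1] for i in range(len(table) - 1)]
    return float(total)

# ln 2 = 1 - 1/2 + 1/3 - ... : 20 transformed terms give about seven
# digits, where the raw series would need millions of terms
approx = euler_transform_sum([Fraction(1, k + 1) for k in range(20)])
print(approx, math.log(2))
```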

Examples: For Bernoulli numbers of the second kind, using Faulhaber's formula with k ∈ ^{ω}ℕ and m, n ∈ ^{ω}ℕ*, we obtain the following (which can also be generalised to higher numbers):

From the identity

for real or complex x with (-x)^{k} := (-1)^{k} x^{k}, by differentiating, we deduce:

when the moduli of x, dx and 1/dx have different orders of magnitude.

For sufficiently small but not excessively small x, and sufficiently large but not excessively large (infinite) n, this formula can be further simplified to -1/(1+x)^{2}, and it also remains valid when x ≥ 1 is not excessively large. By successively multiplying P_{j}(x) := P_{0}^{(j)}(x) by x for j ∈ ^{ω}ℕ* and subsequently differentiating, we can derive further formulas for P_{j+1}(x), providing examples of divergent series that have previously not always been correctly calculated.

P_{0}(z) holds for all complex z. In the problematic special case of z = -1, this follows from L'Hôpital's rule. However, if we integrate P_{0}(-x) from 0 to 1 and set n := ⌊ω⌋, we obtain an integral expression for ln ⌊ω⌋ + γ in terms of Euler's constant γ. Substituting y := 1 - x, by the binomial series we obtain a series with almost exclusively infinite coefficients; if we also express ln ⌊ω⌋ as a series, we even obtain an expression for γ.

Remark: If we illegitimately simplify the numerator of P_{0}(z), we risk finding incorrect results, especially when |z| ≥ 1.

Example: P_{0}(-e^{iπ}) is 0 for odd n, and 1 for even n, but not ½.

Theorem: The series

where x ∈ [1/κ - 1, 1 - 1/κ] does not admit any other power series expansion that allows ζ(2n+1) to be determined by comparing coefficients.

Proof: Starting from the functional equation f(x + 1) = f(x) - 1/(x + 1) + 1/(x + ⌈ω⌉),

is uniquely determined up to an additive constant whenever |x + k| ≥ 1/κ for all k ∈ ℕ*. If we add an arbitrary power series P(x) to f(x), the claim follows analogously.⃞

© 2010-2015 by Boris Haase
