# Homepage of Boris Haase

## #22: Alteration or rather Completion of Nonstandard Analysis on 29.10.2011

In the following, the notation from set theory is used. First, integration and differentiation on arbitrary subsets of ℝ are studied (especially on conventionally non-measurable and infinite sets, and for discontinuous functions); then we pass to subsets of ℂ^m resp. ℂ^n with arbitrary m, n ∈ ℕ. A generalisation to other sets is easily possible. The sign ∞ is not used, since nothing exceeds the maxima of all infinite sets, and all real values that exceed all finite ones can be specified more precisely.

Definition: Let A ⊆ ℝ and let f: A → ℝ be a (uniquely) defined function. Let pre x := max {y ∈ A : y < x} if {y ∈ A : y < x} ≠ ∅, and otherwise let pre x ≤ x be real defined. Let suc x := min {y ∈ A : y > x} if {y ∈ A : y > x} ≠ ∅, and otherwise let suc x ≥ x be real defined. Then, with d for Latin dextra = right,

df(x) := f(suc x) - f(x)

is called right differential of f in A. With s for Latin sinistra = left

sf(x) := d(f ∘ pre)(x) = f(x) - f(pre x)

is called left differential of f in A. If f is the identity, that is f(x) = x, the function f is omitted. If A is clear or unimportant, also A is omitted.
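
On a finite set A these differentials reduce to plain successor differences. A minimal sketch in Python (the set A, the sample function, and the boundary fallback suc x := x resp. pre x := x are illustrative assumptions, not taken from the text):

```python
# Right and left differentials on a finite A ⊆ ℝ, as defined above.
# At the boundary, suc x and pre x fall back to x itself -- one
# convenient choice of the "real defined" boundary values.

def suc(A, x):
    bigger = [y for y in A if y > x]
    return min(bigger) if bigger else x

def pre(A, x):
    smaller = [y for y in A if y < x]
    return max(smaller) if smaller else x

def d(f, A, x):          # right differential df(x) = f(suc x) - f(x)
    return f(suc(A, x)) - f(x)

def s(f, A, x):          # left differential sf(x) = f(x) - f(pre x)
    return f(x) - f(pre(A, x))

A = [0.0, 0.5, 1.0, 2.0]     # an arbitrary, even inhomogeneous, set
f = lambda x: x * x

print(d(f, A, 0.5))   # f(1.0) - f(0.5) = 0.75
print(s(f, A, 1.0))   # f(1.0) - f(0.5) = 0.75
```

Note that A need not be homogeneous: the differentials only use the order structure of A.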

Definition: With the notations above,

∫[a, b[A f(x)dx := ∑ f(x)dx, summed over all x ∈ [a, b[A,

is called the right-sided exact integral in A over f(x). Analogously,

∫]a, b]A f(x)sx := ∑ f(x)sx, summed over all x ∈ ]a, b]A,

is called the left-sided exact integral. Here suc max A ≥ max A and pre min A ≤ min A are real defined. If both integrals coincide, one speaks correspondingly of the exact integral. For real intervals with a = min A and b = max A we write [a, b[A := [a, b[ ∩ A resp. ]a, b]A := ]a, b] ∩ A.

Remark: Obviously, exact integration is a special case of summation. On the conventional ℝ, the exact integral largely coincides with the conventional integrals; however, f need not be continuous, and the other conditions for the integral to exist are also significantly weaker. For A = ℝ, A can be omitted.

Remark: Obviously, the exact integral is monotone and linear. The art of integrating consists in combining the summands correctly.
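
Read off from the definitions above, the right-sided exact integral over [a, b[A is the plain sum of f(x) dx with dx = suc x - x. A sketch under that reading (the grid and the integrands are illustrative choices):

```python
# Right-sided exact integral over [a, b[_A as the finite sum
# of f(x) * (suc x - x), taken over all x in A with a <= x < b.

def right_integral(f, A, a, b):
    A = sorted(A)
    total = 0.0
    for x, x_next in zip(A, A[1:]):   # x_next = suc x within A
        if a <= x < b:
            total += f(x) * (x_next - x)
    return total

# On a homogeneous grid this looks like a left Riemann sum -- but A may
# be arbitrary and f discontinuous; only the summation matters.
A = [k / 1000 for k in range(1001)]          # [0, 1] with dx = 1/1000
print(right_integral(lambda x: 1.0, A, 0, 1))    # ≈ 1.0, the measure of [0, 1[
```

Monotonicity and linearity are immediate from this form, since they hold summand by summand.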

Definition: Let x0 ∈ A ⊆ ℝ and f: A → ℝ. Then f is called right-sided α-continuous in x0 if for infinitesimal α ∈ ℝ+ it holds that:

|f(suc x0) - f(x0)| < α.

Left-sided, it must hold that:

|f(x0) - f(pre x0)| < α.

Double-sided α-continuity is called simply α-continuity. If the inequalities apply for all properly finite α ∈ ℝ+, one speaks simply of continuity.

Remark: Practically, one will determine α by an estimate (after considering possible jump discontinuities).

Definition: With the notations above, the right-sided exact derivative of f in A at the position x0 ∈ A is defined as

fr'(x0) := df(x0)/dx0 = (f(suc x0) - f(x0))/(suc x0 - x0),

provided suc x0 ≠ x0 exists and the difference quotient is defined. Analogously, left-sided,

fl'(x0) := sf(x0)/sx0 = (f(x0) - f(pre x0))/(x0 - pre x0).

If both derivatives match, one speaks correspondingly of the exact derivative f'(x0). If A is clear, A is omitted.

Remark: Differentiability is thus easily established. Alternatively, the exact derivative can also be defined everywhere as

f'(x0) := (f(suc x0) - f(pre x0))/(suc x0 - pre x0),

where suc x0 ≠ pre x0 applies and the quotient is defined. This has the advantage that f'(x0) can be regarded more as the "tangent slope" at the point x0, which can then indeed become zero at a local extremum, especially if f is α-continuous in x0. This is convenient, for example, if one wants to characterise the exact values of suc x0 and pre x0 only as arbitrarily close to x0, or wants to round the exact derivatives suitably in order to provide simple derivation rules where necessary.
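
Both variants can be sketched on a finite grid (the grid and the sample function are illustrative assumptions); the symmetric version vanishes exactly at a grid minimum, while the one-sided versions do not:

```python
# Right-sided exact derivative and the alternative "tangent slope"
# version over suc x0 and pre x0, on a finite grid.

def suc(A, x):
    bigger = [y for y in A if y > x]
    return min(bigger) if bigger else x

def pre(A, x):
    smaller = [y for y in A if y < x]
    return max(smaller) if smaller else x

def deriv_right(f, A, x):
    return (f(suc(A, x)) - f(x)) / (suc(A, x) - x)

def deriv_central(f, A, x):            # the alternative definition
    return (f(suc(A, x)) - f(pre(A, x))) / (suc(A, x) - pre(A, x))

A = [k / 10 for k in range(-10, 11)]   # grid on [-1, 1], dx = 0.1
f = lambda x: x * x

# At the minimum x0 = 0 the one-sided derivative is nonzero,
# while the central version is exactly 0:
print(deriv_right(f, A, 0.0))          # ≈ 0.1
print(deriv_central(f, A, 0.0))        # 0.0 (exact at the minimum)
```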

Remark: Analogously, the exact integral can be defined alternatively, but the original definitions are the easiest to handle. If applicable, there is an appropriate Landau notation. If the result of differentiation lies outside of the domain, it should be replaced by the number lying closest to it within the domain. If that number is not uniquely determined, the result shall consist of all these numbers, or one may choose one of them (e.g. according to a uniform rule).

Definition: The function Fr: [a, b[A → ℝ with [a, b[A ⊆ A ⊆ ℝ and Fr'(x) = f(x) for x ∈ [a, b[A and f: [a, b[A → ℝ is called right-sided antiderivative of f in [a, b[A. The function Fl: ]a, b]A → ℝ with ]a, b]A ⊆ A ⊆ ℝ and Fl'(x) = f(x) for x ∈ ]a, b]A and f: ]a, b]A → ℝ is called left-sided antiderivative of f in ]a, b]A. If F = Fr = Fl applies in [a, b]A, then F is simply called antiderivative of f in [a, b]A.

Remark: Obviously, the antiderivatives of a function differ from each other only by a real addend. Antiderivatives of discontinuous functions can usually be obtained only by summing up their values and combining them skilfully; those of piecewise α-continuous functions are easier to obtain (for example, by reversing the rules of derivation).

Example: Let [a, b[dx be the non-empty homogeneous subset of [a, b[ ⊆ ℝ with dx = suc x - x for all x ∈ [a, b[dx and integer a/dx. For infinitesimal dx and b = -a = |ℕ|, [a, b[dx is comparable with the conventional ℝ. Let furthermore Tr be a right-sided antiderivative of a Taylor series t, not necessarily convergent in [a, b[dx, and f(x) := t(x) + ε (-1)^(x/dx) with a properly finite ε ∈ ℝ+. For infinitesimal dx, f is nowhere continuous and can therefore nowhere be conventionally differentiated or integrated in [a, b[dx, but for all dx it holds exactly that:

and

Definition: The function µ: A → ℝ with a non-empty set A ⊆ ℂ^n, n ∈ ℕ, k ∈ {1, ..., n} and z = x + iy as well as

with µ(∅) = |∅| = 0 is called the measure of A, where suc A = {(z, (z1, ..., suc zk, ..., zn)) ∈ ℂ^n × ℂ^n : z ∈ A, k ∈ {1, ..., n}, suc zk = suc xk + i suc yk}. For A ⊆ ℝ^n the formula simplifies to

Remark: ℝ is, however, not homogeneous as long as x ∈ ℝ is always to imply 1/x ∈ ℝ: if ℝ is assumed homogeneous for x in the interval ]0, 1[, then 1/x - 1/(x + dx) = dx/(x (x + dx)) > dx applies. Something similar is true if ℝ is assumed homogeneous for values > 1 and their reciprocals are considered. It must thus be specified precisely which definition and construction of ℝ one is dealing with, e.g. a homogenised ℝh. Analogously, there is also the homogeneous set ℚh of rational numbers with max ℚh = -min ℚh = |ℕ| and dw = min |w| = 1/lcm(1, 2, ..., m) with |ℚh| = lcm(1, 2, ..., m) (|ℤ| - 1) + 1 ≤ |ℚ| for w ∈ ℚh \ {0} and m ∈ ℕ. Obviously, calculating in the conventional ℝ corresponds to calculating in ℚh. If the set A of the real conventionally algebraic numbers is homogenised, it is at least as dense as the conventional ℝ, as one can easily see from the homogenisation of the set ℕ ∪ {√2}.
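
The counting formula for ℚh has a finite analogue: replacing |ℕ| by a finite N (so that |ℤ| - 1 corresponds to 2N) and bounding denominators by m, the homogenised grid has spacing 1/lcm(1, ..., m) and exactly lcm(1, ..., m) · 2N + 1 points. A sketch (the values of m and N are illustrative assumptions):

```python
# Finite analogue of the homogenised rationals Q_h: all fractions in
# [-N, N] with denominator at most m lie on the grid with spacing
# 1/lcm(1, ..., m), and that grid has lcm(1, ..., m) * 2N + 1 points.
from math import lcm
from fractions import Fraction

m, N = 4, 3
L = lcm(*range(1, m + 1))              # lcm(1, ..., m) = 12 for m = 4
grid = [Fraction(k, L) for k in range(-N * L, N * L + 1)]

assert len(grid) == L * 2 * N + 1      # the counting formula above
# every fraction p/q with q <= m lies on the grid:
assert all(Fraction(p, q) in set(grid)
           for q in range(1, m + 1) for p in range(-N * q, N * q + 1))
print(len(grid))                       # 73
```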

Remark: Much more important, interesting and relevant for computers are the homogeneous sets Bj with natural j that emerge by continued halving of the unit distance. If one agrees on an accuracy of representation of 2^-j, one can calculate with this maximal accuracy. The homogeneous ℝh should also be isomorphic to such an infinite set, where dx = 2^-j is to be taken as small as possible for x ∈ Bj and now trans-natural j. For every number x additionally adjoined to a homogeneous underlying set, the newly emerged set can then be homogenised again, once one agrees on an accuracy of representation for x.
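
The sets Bj can be sketched directly: spacing dx = 2^-j, and continued halving only refines the grid, since every point of Bj is also a point of Bj+1 (the restriction to [0, 1] and the finite j are illustrative assumptions standing in for the trans-natural case):

```python
# The homogeneous binary grids B_j: spacing dx = 2**-j on, say, [0, 1].
# Each B_j is contained in B_(j+1) -- continued halving of the unit
# distance -- and binary floats represent its points exactly.

def B(j, lo=0, hi=1):
    dx = 2.0 ** -j
    n = int((hi - lo) / dx)
    return [lo + k * dx for k in range(n + 1)]

assert set(B(3)) <= set(B(4))          # halving only refines
assert len(B(10)) == 2 ** 10 + 1
print(B(2))                            # [0.0, 0.25, 0.5, 0.75, 1.0]
```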

Example: The middle-thirds Cantor set C has the relative measure µ(C) = (⅔)^|ℕ|. Let the function c: [0, 1] → {0, (⅔)^-|ℕ|} be defined by c(x) = (⅔)^-|ℕ| for x ∈ C and c(x) = 0 for x ∈ [0, 1] \ C. Then it holds that:
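
A finite-stage analogue, with a finite stage count n in place of |ℕ| (n and the use of exact fractions are assumptions of this sketch): after n removal steps the remaining intervals have total measure (2/3)^n, so integrating c(x) = (2/3)^-n over them yields exactly 1.

```python
# Finite-stage Cantor set: after n steps the remaining intervals have
# total measure (2/3)**n; integrating c(x) = (2/3)**-n on the set
# (and 0 off it) over [0, 1] then gives exactly 1 -- the finite
# analogue of the example, with n in place of |N|.
from fractions import Fraction

def cantor_intervals(n):
    intervals = [(Fraction(0), Fraction(1))]
    for _ in range(n):
        intervals = [piece
                     for (a, b) in intervals
                     for piece in ((a, a + (b - a) / 3),
                                   (b - (b - a) / 3, b))]
    return intervals

n = 5
measure = sum(b - a for a, b in cantor_intervals(n))
assert measure == Fraction(2, 3) ** n
print(measure * Fraction(3, 2) ** n)   # 1, the integral of c over [0, 1]
```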

Example: For the classes a + ℚ with a ∈ ℝ of the equivalence relation x ~ y ⇔ x - y ∈ ℚ with x, y ∈ ℝ, representatives can be specified through the set R = [0, 1/|ℕ|[ with the measure µ(R) = 1/|ℕ|. Let the function r: ℝ → {0, 1} be defined by r(x) = 1 for x ∈ ℤ + R and r(x) = 0 for x ∈ ℝ \ (ℤ + R). Then it holds that:

Example (cf. set theory): Let A1 = [0, 1[ ∩ A and let the function q: A1 → {0, 1} be defined by q(x) = 1 for x ∈ A1 \ ℚ and q(x) = 0 for x ∈ A1 ∩ ℚ. The exact integral over q(x)dx then has in A1 the transcendental value

Remark: The sets C, R and ℚ are conventionally not measurable. Thus, the exact integral is more generally valid than the Riemann and Lebesgue(-Stieltjes) integrals and other integrals, since the latter exist only on conventionally measurable sets. The functions were chosen this simple only for the sake of clarity and may, of course, be more complicated.

Definition: For n ∈ ℕ, the exact integral in A ⊆ ℝ^n over a function f: A → ℝ is defined by

∫A f(x)dx := ∑ f(x) dx1 ⋯ dxn, summed over all x = (x1, ..., xn) ∈ A with dxi := suc xi - xi,

where the suc xi for i ∈ {1, ..., n} are ≥ xi real defined in the set suc A (cf. above).

Definition: A sequence (ai) with members ai is a map of an (in)finite index set I, with gaplessly consecutive (trans-)natural elements i, to ℂ: i ↦ ai. If the member with the greatest index is infinitesimal, the sequence is called an infinitesimal sequence. A series is a sequence (sn) with the partial sums

sn := ∑ ai, summed over i from the smallest index up to n,

for n ∈ I. The smallest index in sn for i can be defined differently (e.g. 0 or -n).

Remark: Since sums can be regrouped arbitrarily by the associative, commutative and distributive laws, provided one calculates correctly resp. with the Landau symbols, Fubini's theorem results for exact integrals, which allows the order of integration to be changed arbitrarily. A generalisation to functions f: ℂ^n → ℂ^m with m, n ∈ ℕ is easily possible, since for z = x + iy ∈ A ⊆ ℂ with x, y ∈ ℝ and f: A → ℂ it holds that:

where suc max Re z ≥ max Re z and suc max Im z ≥ max Im z are real defined.

Remark: Thus, in particular, the Riemann series theorem is invalid: when summing the positive summands towards a target value, one is forced to add so many negative ones that one again obtains the original sum of the series, and vice versa. With a value smaller resp. greater than the sum of the positive resp. negative summands, the same applies, since the rest is almost annulled, and so on. Infinity, too, must not be dealt with arbitrarily if one wants to avoid going astray. Whoever moves something into infinity must not indulge in the illusion that it no longer exists.

Finiteness criterion for series: The partial sum with the greatest index of a real series (sk), for infinite (trans-)natural k and n, is finite iff it can be represented as

sk = a + ∑n (an - bn)

with finite real a and finite |an - bn|, forming a monotonically nonincreasing infinitesimal sequence for real an and bn. In the complex case, this must be satisfied for real and imaginary part.

Proof: The assertion follows directly from the Leibniz criterion, since the summands can otherwise be arbitrarily reordered, sorted by size and sign, summed up or split into sums.

Example: From the alternating harmonic series follows
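
The criterion can be checked numerically on the alternating harmonic series: pairing consecutive terms gives the differences an - bn = 1/(2k-1) - 1/(2k), which are positive and monotonically nonincreasing, so the sum stays finite (conventionally it is ln 2). A finite sketch (the cut-offs are illustrative choices):

```python
# Partial sums of the alternating harmonic series: the paired
# differences 1/(2k-1) - 1/(2k) form a positive, monotonically
# nonincreasing sequence, as the finiteness criterion requires.
from math import log

def partial_sum(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

pairs = [1 / (2 * k - 1) - 1 / (2 * k) for k in range(1, 10_000)]
assert all(p > 0 for p in pairs)
assert all(p >= q for p, q in zip(pairs, pairs[1:]))   # nonincreasing
print(partial_sum(1_000_000))          # ≈ ln 2 ≈ 0.693147
```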

Remark: More interesting are examples for which the nonincreasing monotony of |aj - bj| cannot be proven as easily as e.g. for the divergent series

where cj increases monotonically, but cj+1 - cj is monotonically nonincreasing.

Finiteness criterion for products: The product

for infinite (trans-)natural n and finite complex an that, w. l. o. g., are not ≤ -1 if real, is finite iff

is finite, too, where it must be possible to choose m so large that

is also finite and |ak| < 1; all factors with |1 + an| > 1 must admit being rearranged resp. pooled with other factors in such a way that, w. l. o. g., |an| for n ≥ n0 with a (trans-)natural n0 forms a monotonically nonincreasing infinitesimal sequence, after the number of factors with |1 + an| < 1 has likewise been minimised and their index n for an has been chosen ≤ m.

Proof: The assertion follows from the logarithm series.

Definition: Let i, j, k and l be natural. A sequence (ai) with ai ∈ ℂ and infinitesimal α ∈ ℝ+ is called α-convergent if there is a k such that for all i and j with max i ≥ i > j ≥ k it holds that:

|aj - ai| < α.

If the inequality applies for all properly finite α ∈ ℝ+, the sequence is simply called convergent. The uniquely determined last value of the sequence is amax i; it is also called the 0-limit value or simply the limit value, while the β-limit values al(β) are given by the elements of

{z ∈ ℂ : |z - amax i| ≤ β}

with infinitesimal β ∈ ℝ+.

Remark: The conventional limit values are often only β-limit values, selected according to (general) preferences (with β often no more precise than O(1/|ℕ|)) and generally too imprecise, since they are, for example, (arbitrarily) algebraic (of a certain degree) or transcendental.

Proposition about commuting β-limit values with integration: Let A ⊆ ℝ and let (fn) be a sequence of integrable functions fn: A → ℝ with (trans-)natural n, which α-converge to the integrable function f: A → ℝ. Then it holds that:

Proof:

Remark: As long as one calculates correctly with Landau notation, differentiation or integration and summation may also be interchanged in (divergent) series. The conventional procedure can, however, lead to considerable error propagation in subsequent calculations, for example if β1µ(A) is a properly finite value.

First fundamental theorem of exact calculus: Let f be, as above, right-sided resp. left-sided exactly integrable for x ∈ [a, b[A resp. x ∈ ]a, b]A. Then the function

Fr(x) := ∑ f(t)dt, summed over all t ∈ [c, x[A,

resp.

Fl(x) := ∑ f(t)st, summed over all t ∈ ]c, x]A,

with c ∈ [a, b]A is right-sided resp. left-sided exactly differentiable and it applies

Fr'(x) = f(x) resp. Fl'(x) = f(x)

for x ∈ [a, b]A.

Proof:

dFr(x) = Fr(suc x) - Fr(x) = f(x)dx, thus Fr'(x) = dFr(x)/dx = f(x).

This applies analogously to sF(x), pre x and sx.
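
On a finite grid the theorem is pure telescoping and holds exactly, with no limit process and no continuity assumption on f. A sketch (the grid, the random integrand and the use of exact fractions are illustrative assumptions):

```python
# First fundamental theorem on a finite grid: with
# F(x) = sum of f(t) * (suc t - t) over t in [c, x[_A,
# the right-sided exact derivative of F recovers f exactly --
# however discontinuous f may be.
from fractions import Fraction
import random

A = [Fraction(k, 8) for k in range(9)]                # grid on [0, 1]
random.seed(1)
f = {x: Fraction(random.randint(-5, 5)) for x in A}   # arbitrary values

def F(x):                                             # right-sided integral
    return sum(f[t] * (u - t) for t, u in zip(A, A[1:]) if t < x)

for t, u in zip(A, A[1:]):
    assert (F(u) - F(t)) / (u - t) == f[t]            # Fr'(t) = f(t) exactly
print("Fr' = f on the whole grid")
```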

Second fundamental theorem of exact calculus: If F, instead of f as above, is right-sided exactly differentiable for x ∈ [a, b[A and its right-sided exact derivative Fr' is there right-sided exactly integrable, then for c ∈ [a, b]A it holds that:

F(x) = F(c) + ∑ Fr'(t)dt, summed over all t ∈ [c, x[A.

Left-sided, it applies analogously for x ∈ ]a, b]A and Fl':

F(x) = F(c) + ∑ Fl'(t)st, summed over all t ∈ ]c, x]A.

Proof:

∑ Fr'(t)dt = ∑ (F(suc t) - F(t)) = F(x) - F(c), summed over all t ∈ [c, x[A (a telescoping sum).

This applies analogously to t ∈ ]c, x]A, pre t and Fl'(t).

Remark: Notice that continuity is not presupposed for integral and derivative. By considering the real and imaginary parts of complex functions F and f: ℂ → ℂ, both fundamental theorems transfer easily to the complex case. Actual integration (as inversion of the derivative) only makes sense for continuous functions if it is to go beyond mere summation. However, if the function values can be combined into a finite number of continuous functions, for each of which the antiderivative can be specified in finite time, the integral can also be calculated for discontinuous functions, possibly with the aid of the Euler-Maclaurin sum formula and further simplification techniques.

Remark: Depending on how many elements are integrated over, the value of the integral can differ, even within the same interval limits. If one uses the alternative exact derivative, the formulas change accordingly, and the less so, the more continuous the occurring functions are. Here, and in general, appropriate rounding rules can be helpful.

Intermediate value theorem: Let f: [a, b] → ℝ be α-continuous in [a, b]. Then f(x), for x ∈ [a, b], attains every value between min f(x) and max f(x) with an accuracy < α. If f is continuous in ℝ, it attains every value of the conventional ℝ between min f(x) and max f(x).

Proof: Between min f(x) and max f(x) there exists an unbroken chain of overlapping α-neighbourhoods, each with some f(x) as centre, since otherwise a contradiction to the α-continuity of f would emerge. The second part of the assertion follows from the fact that a deviation |f(suc x0) - f(x0)| < α resp. |f(x0) - f(pre x0)| < α for all properly finite α ∈ ℝ+ falls below the maximal resolution of the conventional ℝ.

Extremum criterion: Iff f, as above, has at the point x0 a left-sided exact derivative > 0 and a right-sided exact derivative < 0, f has there a local maximum. Iff f, as above, has at the point x0 a left-sided exact derivative < 0 and a right-sided exact derivative > 0, f has there a local minimum. A derivative can then be defined there as 0.

Proof: Clear from the definitions.

Product, quotient and chain rule: Let f and g be right-sided (left-sided) exactly differentiable functions and all quotients well defined. Then it applies:

(fg)r'(x0) = fr'(x0)g(x0) + f(suc x0)gr'(x0),

(f/g)r'(x0) = (fr'(x0)g(x0) - f(x0)gr'(x0))/(g(x0)g(suc x0))

and

f(g(x0))r' = γr(x0) fr'(g(x0)) gr'(x0)

with

γr(x0) = ((f(g(suc x0)) - f(g(x0))) (suc g(x0) - g(x0))) / ((f(suc g(x0)) - f(g(x0))) (g(suc x0) - g(x0))).

Left-sided applies analogously:

(fg)l'(x0) = fl'(x0)g(pre x0) + f(x0)gl'(x0),

(f/g)l'(x0) = (fl'(x0)g(pre x0) - f(pre x0)gl'(x0))/(g(x0)g(pre x0))

and

f(g(x0))l' = γl(x0) fl'(g(x0)) gl'(x0)

with

γl(x0) = ((f(g(x0)) - f(g(pre x0))) (g(x0) - pre g(x0))) / ((f(g(x0)) - f(pre g(x0))) (g(x0) - g(pre x0))).

γr(x0) = γl(x0) = 1 holds exactly iff f(g(x0)), f(g(suc x0)) and f(suc g(x0)) resp. f(g(x0)), f(g(pre x0)) and f(pre g(x0)) lie on a straight line.

Proof: The product and quotient rules are easy to verify by direct calculation. It holds that:

Thus

The last sentence is valid because the differences of the f-values lie on a straight line anyway and, divided by the differences of the corresponding g-values, form a quotient of the slopes of two straight lines that share a point. Iff the slopes are equal, the quotient becomes 1, the conventional chain rule therefore applies, and the assertion follows.

Remark: In order that the product and quotient rule coincide precisely enough with the conventional ones, either f or g must be (α-)continuous enough in x0 (i.e. α can be set small enough). That γ can attain almost any value in ℝ can be seen from the functions f(y) = y^±2 and y = g(x) = x^2 with x0 = d0 and y ∈ ℝ. Thus, the conventional chain rule is only approximately usable for non-infinitesimal arguments. If f is not linear, or g is not the identity or a translation, it is quite unlikely that the three f-values lie on a straight line. If f and g are continuous, the chain rule applies at least approximately.
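
Unlike the chain rule, the exact product rule stated above is an algebraic identity on any grid, since the difference (fg)(suc x0) - (fg)(x0) expands exactly. A sketch with exact fractions (the grid and the polynomials f, g are illustrative assumptions):

```python
# The exact product rule (fg)r'(x0) = fr'(x0) g(x0) + f(suc x0) gr'(x0)
# is an identity, not an approximation -- unlike the conventional rule,
# which ignores the difference between f(x0) and f(suc x0).
from fractions import Fraction

A = [Fraction(k, 4) for k in range(-4, 5)]

def suc(x):
    return min(y for y in A if y > x)

def dr(h, x):                       # right-sided exact derivative
    return (h(suc(x)) - h(x)) / (suc(x) - x)

f = lambda x: x ** 2 + 1
g = lambda x: 3 * x - x ** 3
fg = lambda x: f(x) * g(x)

for x in A[:-1]:                    # all points with a successor
    assert dr(fg, x) == dr(f, x) * g(x) + f(suc(x)) * dr(g, x)
print("exact product rule verified")
```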

Remark: The right-sided resp. left-sided exact derivative of the inverse function results as

(f^-1)r'(y0) = 1/fr'(x0) resp. (f^-1)l'(y0) = 1/fl'(x0)

from y0 = f(x0) and the identity x = f^-1(f(x)) with the aid of the chain rule, for the same precision. L'Hôpital's rule is useful for (α-)continuous functions f and g; for f(x0) = g(x0) = 0, with f(suc x0) and g(suc x0) not both simultaneously 0, it results (and analogously left-sided) from

f(suc x0)/g(suc x0) = (f(suc x0) - f(x0))/(g(suc x0) - g(x0)) = fr'(x0)/gr'(x0).

Outlook on complex analysis: The entire functions f(z) = z/ℜ and g(z) = ∑ ak z^k with k ∈ ℕ and ak = 1/ℜ^(k+1) disprove Liouville's theorem.

Proof: Since |f(z)| ≤ 1, and |g(z)| additionally converges, the assertion follows directly.

Choosing sufficiently small (transcendental) constants in the generalisation of Liouville's theorem disproves it, too. Both theorems cannot be remedied by restriction, since the holomorphy of a function h on (the conventional) ℂ compels the Laurent polynomial (the Laurent series) of h to have coefficients ak with |ak| < O(ℜ^-|k|) (resp. O(ℜ^(-|k|-1))) and (trans-)integer k (in order to converge), if it is not already constant. Thus, a limitation to coefficients finitely bounded from below is pointless.

The function f yields a biholomorphic, bijective mapping of the conventionally and circularly defined ℂ onto the complex unit circle ℰd, strongly condensed compared with the complex unit circle ℰ, with |ℰd| = |ℂ| ≫ |ℰ|. Therewith, the Riemann mapping theorem is also valid for ℂ. The complete ℂ cannot, of course, be mapped in this way.

Definition: A point p ∈ M ⊆ ℂ^n with n ∈ ℕ is called an (ω-)α-accumulation point of M resp. of a sequence if the open sphere Bα(p) ⊆ ℂ^n around p with infinitesimal radius α contains infinitely many points of M. If this applies for all properly finite α, the α-accumulation point is simply called an accumulation point.

Let p(z) = ∏(z - ck) with k ∈ ℕ for z ∈ ℂ be an infinite product with pairwise distinct zeros ck ∈ B1/|ℕ|(0) ⊂ ℂ (the open circular disk around 0 with radius 1/|ℕ|), which are chosen so that |f(ck)| < 1/|ℕ| applies for a function f holomorphic in a domain G ⊆ ℂ with f(0) = 0. G contains B1/|ℕ|(0) completely, which is always obtainable by coordinate transformation, provided G is "big" enough.

Then, for the function g(z) := f(z) + p(z), which is likewise holomorphic there, the coincidence set {w ∈ G : f(w) = g(w)} has an accumulation point at 0, and f ≠ g, in contradiction to the statement of the identity theorem. Examples of f are all functions that are bounded in B1/|ℕ|(0) and at the same time holomorphic in G with zero 0. Since p(z) can attain any complex value, the deviation between f and g is not negligible.

Also in contradiction to the identity theorem is the fact that at a point c ∈ G all derivatives d^(k)(c) = h^(k)(c) of two functions d and h can coincide for all k ∈ ℕ, while d and h nevertheless differ significantly further away, beyond this local fact, without losing their holomorphy, since, due to the approximate character of differentiation resp. of calculating with Landau symbols, not every holomorphic function can be (uniquely) expanded into a Taylor series (cf. Transcendental Numbers).

If we enlarge the index set of ∏(z - ck), we obtain entire functions with trans-naturally many zeros. The zero set can be open and need not be discrete. Thus the set of all functions holomorphic in a domain G need not be free of zero divisors. These functions, like polynomials with at least n > 2 pairwise distinct zeros, disprove Picard's theorems, since they omit at least n - 1 values in ℂ.