
#20: Insertion Nonstandard Analysis and Extension Introduction and Mathematics on 03.06.2010

Definition: Let X and Y ⊆ ℝ be non-empty sets, let f: X → Y, x ↦ y be a (uniquely) defined function, and let suc(x) (pre(x)) denote the smallest (greatest) element of ]x, max(X)] ([min(X), x[). Then, for a and b ∈ X, and with d for Latin dextra = right and s for Latin sinistra = left, df(x) := f(suc(x)) - f(x) (sf(x) := d(f ∘ pre)(x) = f(x) - f(pre(x))) is called the right (left) differential of f. If f is the identity with f(x) = x, the function f is omitted.
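
Example: The following is a minimal Python sketch of the definition, in which a small finite grid X stands in for the sets above; the function names suc, pre, d and s_, as well as the grid and the example f, are illustrative choices and not part of the text.

def suc(X, x):
    """Smallest element of X greater than x (the right neighbour)."""
    greater = [t for t in X if t > x]
    return min(greater) if greater else None

def pre(X, x):
    """Greatest element of X smaller than x (the left neighbour)."""
    smaller = [t for t in X if t < x]
    return max(smaller) if smaller else None

def d(X, f, x):
    """Right differential df(x) = f(suc(x)) - f(x)."""
    s = suc(X, x)
    return None if s is None else f(s) - f(x)

def s_(X, f, x):
    """Left differential sf(x) = f(x) - f(pre(x))."""
    p = pre(X, x)
    return None if p is None else f(x) - f(p)

X = [k / 10 for k in range(11)]        # stand-in for a discrete subset of [0, 1]
f = lambda x: x * x
print(d(X, f, 0.3), s_(X, f, 0.3))     # right and left differential of x^2 at 0.3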

Definition: ∫(a, b, f(x) dx) := Σ(x ∈ [a, b[, f(x)(suc(x) - x)) (∫(a, b, f(x) sx) := Σ(x ∈ ]a, b], f(x)(x - pre(x)))) is called the right-sided (left-sided) exact integral from a to b over f(x) in Y. If both integrals match, one speaks correspondingly of the exact integral.
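
Example: A sketch of the two one-sided exact integrals as finite sums over a grid X that models the set of summation points; grid, bounds and integrand are illustrative.

def right_integral(X, f, a, b):
    """Sum of f(x)*(suc(x) - x) over x in [a, b[ within X."""
    pts = sorted(t for t in X if a <= t < b)
    total = 0.0
    for x in pts:
        nxt = min(t for t in X if t > x)   # suc(x)
        total += f(x) * (nxt - x)
    return total

def left_integral(X, f, a, b):
    """Sum of f(x)*(x - pre(x)) over x in ]a, b] within X."""
    pts = sorted(t for t in X if a < t <= b)
    total = 0.0
    for x in pts:
        prv = max(t for t in X if t < x)   # pre(x)
        total += f(x) * (x - prv)
    return total

X = [k / 1000 for k in range(1001)]
f = lambda x: x * x
print(right_integral(X, f, 0, 1), left_integral(X, f, 0, 1))  # both close to 1/3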

Remark: Obviously, exact integration is a special case of summation. On the conventional ℝ, the exact integral largely coincides with conventional integrals; however, f need not be continuous, and the remaining conditions for the integral to exist are significantly weaker. Where appropriate, calculating with symbolic (infinity) values may be advisable.

Remark: Obviously, the exact integral is monotone and linear. Multidimensional exact integrals emerge by successively replacing the f(x) by exact integrals over a function, where the functions may depend on several variables. The art of integrating consists in correctly combining the summands.

Definition: A complex number is called finite if it emerges from finitely many operations on the finite, defined set of algebraic numbers. Let x0 ∈ ℝ, 0 ≤ α ∈ ℝ and f: ℝ → ℝ. Then f is called right-sided (left-sided) α-continuous in x0 if |f(suc(x0)) - f(x0)| < b|ℕ|^α (|f(x0) - f(pre(x0))| < b|ℕ|^α) holds for all finite b ∈ ℝ+. Double-sided α-continuity is called simply α-continuity, 0-continuity simply continuity.

Definition: With the notations above, the right-sided (left-sided) exact derivative of f at the position x0 ∈ X is defined as fr'(x0) := (f(suc(x0)) - f(x0))/(suc(x0) - x0) = df(x0)/dx0 (fl'(x0) := (f(x0) - f(pre(x0)))/(x0 - pre(x0)) = sf(x0)/sx0), if suc(x0) and pre(x0) exist and the difference quotient with both differences is defined. If both derivatives match, one speaks correspondingly of the exact derivative f'(x0).
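
Example: The one-sided exact derivatives can be written down directly as difference quotients of neighbouring grid points; the following sketch assumes an illustrative grid X and function f.

def exact_derivatives(X, f, x0):
    """Return the right- and left-sided exact derivative of f at x0 on the grid X."""
    Xs = sorted(X)
    i = Xs.index(x0)
    right = left = None
    if i + 1 < len(Xs):                          # suc(x0) exists
        right = (f(Xs[i + 1]) - f(x0)) / (Xs[i + 1] - x0)
    if i > 0:                                    # pre(x0) exists
        left = (f(x0) - f(Xs[i - 1])) / (x0 - Xs[i - 1])
    return right, left

X = [k / 100 for k in range(101)]
f = lambda x: x ** 3
print(exact_derivatives(X, f, 0.5))   # both near 3 * 0.5**2 = 0.75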

Remark: Differentiability thus can easily be established. For x0 ∈ ]pre(x0), suc(x0)[, one can alternatively define the exact derivative everywhere there as f'(x0) := (f(suc(x0)) - f(pre(x0)))/(suc(x0) - pre(x0)), where f(suc(x0)) and f(pre(x0)) are defined. This has the advantage that f'(x0) can be regarded more as the "tangent slope" in the point x0, which can become rather zero at a local extremum, especially if f is α-continuous in x0 or suc(x0) - x0 = x0 - pre(x0). This is convenient, for example, if one wants to characterise the exact values of suc(x0) and pre(x0) only as arbitrarily close to x0, or wants to round the exact derivatives suitably in order to provide simple differentiation rules, if necessary.
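
Example: A sketch of this alternative "tangent slope" on an equidistant grid, where it indeed vanishes at a local extremum even though the one-sided derivatives do not; grid and function are illustrative.

def symmetric_derivative(X, f, x0):
    """Alternative derivative (f(suc(x0)) - f(pre(x0))) / (suc(x0) - pre(x0)) at an interior point."""
    Xs = sorted(X)
    i = Xs.index(x0)
    return (f(Xs[i + 1]) - f(Xs[i - 1])) / (Xs[i + 1] - Xs[i - 1])

X = [k / 100 for k in range(101)]
f = lambda x: -abs(x - 0.5)             # local maximum at 0.5
print(symmetric_derivative(X, f, 0.5))  # 0.0: the tangent slope vanishes at the extremum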

Remark: Analogously, the exact integral can be defined alternatively. But the original definitions are the easiest to handle. If applicable, there is an appropriate Landau notation. Antiderivatives of discontinuous functions can usually only be obtained by summing them up and skilfully combining the results; those of piecewise α-continuous functions are easier to obtain (for example, by reversing the rules of differentiation). If the result of differentiation lies outside of the domain, it should be replaced by the number lying closest to it within the domain. If this number is not uniquely determined, the result shall consist of all of these numbers, or one may choose one (e.g. according to a uniform rule).

Definition: The exact integral in ℝ^n over a function f: A → ℝ with A ⊆ ℝ^n, n ∈ ℕ and f(x) = 0 for x ∈ ℝ^n \ A is defined by ∫(x ∈ A, f(x) µ({x})) := ∫(-ℜ, ℜ, ∫(-ℜ, ℜ, f(x) dx_1) ... dx_n).
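
Example: A sketch of the two-dimensional case as a nested finite sum; the bounded grid G replaces the unbounded range of the definition, and the integrand is an illustrative choice.

def exact_integral_2d(G, f):
    """Nested sum of f(x, y) * (suc(x) - x) * (suc(y) - y) over the grid G x G."""
    total = 0.0
    for i, x in enumerate(G[:-1]):
        dx = G[i + 1] - x
        for j, y in enumerate(G[:-1]):
            dy = G[j + 1] - y
            total += f(x, y) * dx * dy
    return total

G = [k / 100 for k in range(101)]
print(exact_integral_2d(G, lambda x, y: x + y))   # close to 1, the value over the unit square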

Remark: Its value does not change if any number of the dx_k for k ∈ {1, ..., n} are replaced by sx_k, since ℝ^n is homogeneous. Since the integrals correspond to sums, whose summands may be regrouped arbitrarily by the associative, commutative and distributive laws, Fubini's theorem results, provided one calculates correctly, either exactly or with the Landau symbols; it allows the order of integration to be changed arbitrarily. A generalisation to functions f: ℂ^n → ℂ^m with m and n ∈ ℕ is easily possible, since for f: A → ℂ it holds that ∫(z ∈ A, f(z) dz) = ∫(-ℜ, ℜ, (Re(f(z)) + iIm(f(z))) dx) + ∫(-ℜ, ℜ, (iRe(f(z)) - Im(f(z))) dy) with z = x + iy, x, y ∈ ℝ, A ⊆ ℂ and f(z) = 0 for z ∈ ℂ \ A.
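
Example: Because the exact integrals are finite sums, interchanging the order of summation cannot change the value; the following sketch checks this with exact rational arithmetic on an illustrative grid and integrand.

from fractions import Fraction

G = [Fraction(k, 50) for k in range(51)]
f = lambda x, y: x * y * y

def step(i):                              # suc(x) - x on the grid G
    return G[i + 1] - G[i]

xy_order = sum(f(G[i], G[j]) * step(i) * step(j)
               for i in range(len(G) - 1) for j in range(len(G) - 1))
yx_order = sum(f(G[i], G[j]) * step(i) * step(j)
               for j in range(len(G) - 1) for i in range(len(G) - 1))
print(xy_order == yx_order)               # True: the order of integration is irrelevant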

Remark: Thus, in particular, the Riemann series theorem is invalid, since one is forced, when summing up the positive summands towards a target value, to add so many negative ones that one again obtains the original sum of the series, and vice versa. For a target value smaller than the sum of the positive summands, or greater than the sum of the negative summands, respectively, the same applies, since the remainder is almost annulled, and so on. Infinity, too, must not be dealt with arbitrarily if one wants to avoid going astray.

Definition: Let i, j, k and l be (maybe trans-) natural. A complex number is called finite if it emerges from finitely many operations on the finite, defined set of algebraic numbers. A sequence (a_i) with a_i ∈ ℂ and 0 ≤ α ∈ ℝ is called α-convergent if there is a k such that |a_j - a_i| < b|ℕ|^α holds for all i and j with i > j ≥ k and all finite b ∈ ℝ+. If α = 0, it is called convergent. Let a_max(i) be the (uniquely determined) limit value, while the β-tending values a_l(β) are given by the elements of {z ∈ ℂ : |z - a_max(i)| ≤ β ∈ ℝ+} with β less than any finite number from ℝ+.
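
Example: The Cauchy-type tail condition behind α-convergence can be sketched as follows; a finite tolerance tol stands in for the bound of the definition, which cannot be represented in floating point, and the sequence is an illustrative choice.

def tail_converges(a, k, tol):
    """Check |a[j] - a[i]| < tol for all i > j >= k."""
    return all(abs(a[j] - a[i]) < tol
               for j in range(k, len(a)) for i in range(j + 1, len(a)))

a = [1 / (n + 1) for n in range(1000)]     # the sequence 1, 1/2, 1/3, ...
print(tail_converges(a, k=100, tol=1e-2))  # True: tail differences stay below 0.01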

Remark: The conventional limit values are often only β-tending values selected according to (general) preferences (with β often no more precise than O(1/|ℕ|)), and are generally too imprecise, since they are, for example, (arbitrarily) algebraic (of a certain degree) or transcendental.

First fundamental theorem of exact calculus: Let f be, as above, right-sided (left-sided) exactly integrable for x ∈ [a, b[ (x ∈ ]a, b]). Then the function F(x) = ∫(c, x, f(t) dt) (∫(c, x, f(t) st)) with c ∈ [a, b] is right-sided (left-sided) exactly differentiable, and Fr'(x) = f(x) (Fl'(x) = f(x)) holds for x ∈ [a, b[ (x ∈ ]a, b]).

Proof: dF(x) = F(suc(x)) - F(x) = ∫(c, suc(x), f(t) dt) - ∫(c, x, f(t) dt) = ∫(x, suc(x), f(t) dt) = f(x) dx, hence Fr'(x) = dF(x)/dx = f(x). This applies analogously to sF(x), pre(x) and sx.
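
Example: A numerical sketch of the theorem and its proof on a finite grid: with exact rational arithmetic, the right-sided exact derivative of F returns f exactly at every admissible grid point; grid and f are illustrative.

from fractions import Fraction

X = [Fraction(k, 100) for k in range(101)]
f = lambda t: t * t

def F(x):                                  # right-sided exact integral from c = 0 to x
    return sum(f(X[i]) * (X[i + 1] - X[i]) for i in range(len(X) - 1) if X[i] < x)

def Fr_prime(i):                           # right-sided exact derivative of F at X[i]
    return (F(X[i + 1]) - F(X[i])) / (X[i + 1] - X[i])

print(all(Fr_prime(i) == f(X[i]) for i in range(len(X) - 1)))   # True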

Second fundamental theorem of exact calculus: If F (instead of f above) is right-sided (left-sided) exactly differentiable for x ∈ [a, b[ (x ∈ ]a, b]) and its right-sided (left-sided) exact derivative Fr' (Fl') is there right-sided (left-sided) exactly integrable, then F(x) - F(c) = ∫(c, x, Fr'(t) dt) (∫(c, x, Fl'(t) st)) holds for c ∈ [a, b].

Proof: F(x) - F(c) = Σ(t ∈ [c, x[, F(suc(t)) - F(t)) = Σ(t ∈ [c, x[, Fr'(t) (suc(t) - t)) = ∫(c, x, Fr'(t) dt). This applies analogously to t ∈ ]c, x], pre(t) and Fl'(t).
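Example: The telescoping argument of the proof can be checked directly; grid, bounds and the function F below are illustrative, and exact rational arithmetic makes the equality exact.

from fractions import Fraction

X = [Fraction(k, 100) for k in range(101)]
F = lambda t: t ** 3 - t

def Fr_prime(i):                            # right-sided exact derivative of F at X[i]
    return (F(X[i + 1]) - F(X[i])) / (X[i + 1] - X[i])

c, x = X[10], X[90]
integral = sum(Fr_prime(i) * (X[i + 1] - X[i]) for i in range(10, 90))
print(integral == F(x) - F(c))              # True: the sum telescopes to F(x) - F(c)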

Remark: Notice that continuity is not presupposed for integral and derivative. By regarding the real and imaginary parts of complex functions F and f: ℂ → ℂ, both fundamental theorems can easily be transferred to the complex case. Actual integration (as inversion of the derivative) only makes sense for continuous functions if it goes beyond mere summation. However, if the function values can be combined into a finite number of continuous functions, for each of which the antiderivative can be specified in finite time, the integral can also be calculated for discontinuous functions, possibly with the aid of the Euler-Maclaurin summation formula and further simplification techniques.

Remark: Depending on how many elements are integrated over, the value of the integral can differ, even within the same interval limits. If one uses the alternative exact derivative, the formulas change accordingly, and the more continuous the occurring functions are, the less they change. Here, and in general, appropriate rounding rules can be helpful.

Intermediate value theorem: Let f be, as above, α-continuous on [a, c]. Then f attains any value between min(f(x)) and max(f(x)) with an accuracy < b|ℕ|^α. If f is continuous in ℝ, it attains any value of the conventional ℝ between min(f(x)) and max(f(x)).

Proof: Between min(f(x)) and max(f(x)) there exists an unbroken chain of overlapping ε-neighbourhoods, each with an f(x) as centre and ε < b|ℕ|^α, since otherwise a contradiction to the α-continuity of f would emerge. The second part of the assertion follows from the fact that a deviation < b for every finite b ∈ ℝ+ falls below the maximal resolution of the conventional ℝ.
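
Example: A sketch of the theorem on a finite grid: every target value between the minimum and the maximum of f is attained up to the largest jump of f, which here plays the role of the accuracy bound; grid, function and target are illustrative.

X = [k / 200 for k in range(201)]
f = lambda x: x ** 3 - x                     # continuous model function on [0, 1]
values = [f(x) for x in X]

max_jump = max(abs(values[i + 1] - values[i]) for i in range(len(values) - 1))
target = -0.1                                # a value between min(values) and max(values)
closest = min(values, key=lambda v: abs(v - target))
print(abs(closest - target) <= max_jump)     # True: f attains the target within the jump size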

Extremum criterion: f has, as above, a local maximum (minimum) at the point x0 if and only if it has there a left-sided (right-sided) exact derivative > 0 (< 0) and a right-sided (left-sided) exact derivative < 0 (> 0). A derivative can then be defined there as 0.

Proof: Clear from the definitions.
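
Example: A small sketch of the criterion at an interior grid point, with an illustrative parabola whose maximum lies exactly on the grid.

X = [k / 100 for k in range(101)]
f = lambda x: -(x - 0.5) ** 2

def one_sided(i):
    """Left- and right-sided exact derivative of f at the interior grid point X[i]."""
    left = (f(X[i]) - f(X[i - 1])) / (X[i] - X[i - 1])
    right = (f(X[i + 1]) - f(X[i])) / (X[i + 1] - X[i])
    return left, right

left, right = one_sided(X.index(0.5))
print(left > 0 > right)                      # True: local maximum at 0.5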

Product, quotient and chain rule: Let f and g be right-sided (left-sided) exactly differentiable functions with g(x0)g(suc(x0)) ≠ 0 ≠ g(pre(x0)). Then (fg)r'(x0) = fr'(x0)g(x0) + f(suc(x0))gr'(x0) ((fg)l'(x0) = fl'(x0)g(pre(x0)) + f(x0)gl'(x0)) and (f/g)r'(x0) = (fr'(x0)g(suc(x0)) - f(suc(x0))gr'(x0))/(g(x0)g(suc(x0))) ((f/g)l'(x0) = (fl'(x0)g(x0) - f(x0)gl'(x0))/(g(pre(x0))g(x0))) hold. The chain rule f(g(x0))r' = fr'(g(x0)) gr'(x0) (f(g(x0))l' = fl'(g(x0)) gl'(x0)) applies only if f is sufficiently continuous. Here the derivatives are only exact where g does not skip any values.

Proof: The product and quotient rules are easy to verify by calculation. Only if f is sufficiently continuous does fr'(g(x0)) = (f(g(suc(x0))) - f(g(x0)))/(g(suc(x0)) - g(x0)) hold, hence f(g(x0))r' = (f(g(suc(x0))) - f(g(x0)))/(suc(x0) - x0) = fr'(g(x0)) gr'(x0). The right-sided derivative is only exact if suc(g(x0)) - g(x0) = g(suc(x0)) - g(x0), that is, where g does not skip any values.
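
Example: The product rule can be checked term by term on a grid with exact rational arithmetic; the grid and the functions f and g below are illustrative, and dr denotes the right-sided exact derivative.

from fractions import Fraction

X = [Fraction(k, 100) for k in range(101)]
f = lambda x: x * x + 1
g = lambda x: 2 * x - 3

def dr(h, i):                                # right-sided exact derivative of h at X[i]
    return (h(X[i + 1]) - h(X[i])) / (X[i + 1] - X[i])

lhs = [dr(lambda x: f(x) * g(x), i) for i in range(len(X) - 1)]
rhs = [dr(f, i) * g(X[i]) + f(X[i + 1]) * dr(g, i) for i in range(len(X) - 1)]
print(lhs == rhs)                            # True: (fg)r' = fr'·g + (f∘suc)·gr'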

Remark: The right-sided (left-sided) exact derivative of the inverse function results as f⁻¹r'(y0) = 1/fr'(x0) (f⁻¹l'(y0) = 1/fl'(x0)) from the identity x = f⁻¹(f(x)) with the aid of the chain rule and y0 = f(x0). L'Hôpital's rule is useful for (α-)continuous functions f and g. If the elements of ℝ do not vary within certain (tolerance) limits, the more sharply characterising minimum and maximum can be determined instead of infimum and supremum.
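
Example: A sketch of the inverse rule on the image grid of a strictly increasing function, where the exact derivative of f⁻¹ at y0 = f(x0) is exactly the reciprocal of fr'(x0); grid, function and index are illustrative.

from fractions import Fraction

X = [Fraction(k, 100) for k in range(101)]
f = lambda x: x * x * x + x                      # strictly increasing on X
Y = [f(x) for x in X]                            # image grid, on which the inverse lives

i = 40                                           # x0 = X[i], y0 = Y[i]
fr = (f(X[i + 1]) - f(X[i])) / (X[i + 1] - X[i])
finv_r = (X[i + 1] - X[i]) / (Y[i + 1] - Y[i])   # (f_inv(suc(y0)) - f_inv(y0)) / (suc(y0) - y0)
print(finv_r == 1 / fr)                          # True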

Since an individual n-ness belongs to every natural number n that cannot be derived from its predecessors or successors, there is no complete system of axioms in mathematics, because with each new number something irreducibly new emerges. If one confines oneself, however, to selected aspects, a finite system of axioms for a finite number of entities can be specified. Each level of infinity resists completeness all the more.

Theories are based on presuppositions. In mathematics, they are often expressed by axioms that may be true or false, which can possibly be proven by other considerations. Thus, all theories are incomplete and, as the case may be, contradictory beyond that. Instead of explicit axioms, (implicit) definitions are more suitable, in that the existence of what is specified is tacitly presupposed until it is refuted.

© 03.06.2010 by Boris Haase

