Definition: Given non-empty sets X and Y ⊆ ℝ and a (uniquely defined) function f: X → Y, x ↦ y, let suc(x) (pre(x)) denote the smallest (greatest) element of ]x, max(X)] ([min(X), x[). Then, with a and b ∈ X, as well as with d for Latin dextra = right and s for Latin sinistra = left, df(x) := f(suc(x)) - f(x) (sf(x) := f(x) - f(pre(x))) is called the *right (left) differential of f*. If f is the identity with f(x) = x, the function symbol f is omitted, i.e. dx := suc(x) - x and sx := x - pre(x).

Definition: ∫(a, b, f(x) dx) := Σ(x ∈ [a, b[, f(x)(suc(x) - x)) (∫(a, b, f(x) sx) := Σ(x ∈ ]a, b], f(x)(x - pre(x)))) is called the *right-sided (left-sided) exact integral* from a to b over f(x) in Y. If both integrals coincide, one speaks correspondingly of the exact integral.

Remark: Obviously, exact integration is a special case of summation. On the conventional ℝ, the exact integral widely coincides with conventional integrals; however, f need not be continuous, and the remaining conditions for the integral to exist are significantly weaker. Where appropriate, calculating with symbolic (infinite) values may be advisable.
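
The definitions above can be sketched on a finite grid standing in for the infinite grid of the text (the grid X, the helper names suc and pre, and the example f(x) = x are illustrative assumptions, not part of the theory itself):

```python
# Minimal finite sketch of the right- and left-sided "exact integral":
# the grid X replaces the hypothetical infinitely fine domain of the text.

def suc(X, x):
    """Smallest grid element greater than x (successor on the grid X)."""
    return min(t for t in X if t > x)

def pre(X, x):
    """Greatest grid element smaller than x (predecessor on the grid X)."""
    return max(t for t in X if t < x)

def integral_right(f, X, a, b):
    # Sum over x in [a, b[ of f(x)(suc(x) - x)
    return sum(f(x) * (suc(X, x) - x) for x in X if a <= x < b)

def integral_left(f, X, a, b):
    # Sum over x in ]a, b] of f(x)(x - pre(x))
    return sum(f(x) * (x - pre(X, x)) for x in X if a < x <= b)

# Example: f(x) = x on an equidistant grid of width 1/N; the two sums
# bracket the conventional integral 1/2 and differ from it by the grid width.
N = 1000
X = [k / N for k in range(N + 1)]
f = lambda x: x
r = integral_right(f, X, 0.0, 1.0)   # (N-1)/(2N) = 0.4995
l = integral_left(f, X, 0.0, 1.0)    # (N+1)/(2N) = 0.5005
```

On the finite grid the two one-sided integrals differ by exactly one grid cell at each end, mirroring the remark that the conditions for existence are weaker than in the conventional theory.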

Remark: Obviously, the exact integral is monotone and linear. Multidimensional exact integrals emerge by successively replacing f(x) by exact integrals over a function, where the functions may depend on several variables. The art of integrating consists in correctly combining the summands.

Definition: A complex number is called *finite* if it emerges from finitely many operations on the finitely defined set of algebraic numbers. Let x_{0} ∈ ℝ, 0 ≤ α ∈ ℝ and f: ℝ → ℝ. Then f is called *right-sided (left-sided) α-continuous* in x_{0} if |f(suc(x_{0})) - f(x_{0})| < b|ℕ|^{-α} (|f(x_{0}) - f(pre(x_{0}))| < b|ℕ|^{-α}) holds for all finite b ∈ ℝ^{+}. Double-sided α-continuity is called simply α-continuity, and 0-continuity simply continuity.

Definition: With the notations above, the *right-sided (left-sided) exact derivative* of f at the position x_{0} ∈ X is defined as f_{r}'(x_{0}) := (f(suc(x_{0})) - f(x_{0}))/(suc(x_{0}) - x_{0}) = df(x_{0})/dx_{0} (f_{l}'(x_{0}) := (f(x_{0}) - f(pre(x_{0})))/(x_{0} - pre(x_{0})) = sf(x_{0})/sx_{0}), provided suc(x_{0}) and pre(x_{0}) exist and the corresponding difference quotient is defined. If both derivatives coincide, one speaks correspondingly of the exact derivative f'(x_{0}).

Remark: Differentiability can thus be easily established. For x_{0} ∈ ]pre(x_{0}), suc(x_{0})[, one can alternatively define the exact derivative everywhere as f'(x_{0}) := (f(suc(x_{0})) - f(pre(x_{0})))/(suc(x_{0}) - pre(x_{0})), wherever f(suc(x_{0})) and f(pre(x_{0})) are defined. This has the advantage that f'(x_{0}) can be regarded more as the "tangent slope" at the point x_{0}, which can indeed become zero at a local extremum, especially if f is α-continuous in x_{0} or suc(x_{0}) - x_{0} = x_{0} - pre(x_{0}). This is convenient, for example, if one wants to characterise the exact values of suc(x_{0}) and pre(x_{0}) only as arbitrarily close to x_{0}, or wants to round the exact derivatives suitably in order to obtain simple derivation rules, if necessary.
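
The contrast between the one-sided derivatives and the alternative "tangent-slope" derivative can be seen in a minimal sketch with an equidistant grid of width h (grid, step and example function are illustrative assumptions):

```python
# Finite sketch of the one-sided and alternative exact derivatives, with
# suc(x) = x + h and pre(x) = x - h on an equidistant grid of width h.

def d_right(f, x, h):   # f_r'(x) = (f(suc(x)) - f(x))/(suc(x) - x)
    return (f(x + h) - f(x)) / h

def d_left(f, x, h):    # f_l'(x) = (f(x) - f(pre(x)))/(x - pre(x))
    return (f(x) - f(x - h)) / h

def d_sym(f, x, h):     # alternative derivative over ]pre(x), suc(x)[
    return (f(x + h) - f(x - h)) / (2 * h)

# At the minimum x = 0 of f(x) = x², the one-sided derivatives are +h and
# -h, while the alternative derivative is exactly 0, as the remark states.
h = 1e-3
f = lambda x: x * x
r = d_right(f, 0.0, h)   # +h
l = d_left(f, 0.0, h)    # -h
s = d_sym(f, 0.0, h)     # exactly 0.0
```

This is precisely the situation described above: only the symmetric variant yields a vanishing derivative at the extremum when suc(x_{0}) - x_{0} = x_{0} - pre(x_{0}).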

Remark: The exact integral can be defined alternatively in the same way, but the original definitions are the easiest to handle. If applicable, there is an appropriate Landau notation. Antiderivatives of discontinuous functions can usually only be obtained by summing the function values up and combining them skilfully; those of piecewise α-continuous functions more easily (for example, by reversing the rules of derivation). If the result of differentiation lies outside of the domain, it should be replaced by the number lying closest to it within the domain. If that number is not uniquely determined, the result shall consist of all of these numbers, or one may choose one of them (e.g. according to a uniform rule).

Definition: The *exact integral* in ℝ^{n} over a function f: A → ℝ with A ⊆ ℝ^{n}, n ∈ ℕ and f(x) = 0 for x ∈ ℝ^{n} \ A is defined by ∫(x ∈ A, f(x) µ({x})) := ∫(-ℜ, ℜ, ∫(-ℜ, ℜ, f(x) dx_{1}) ... dx_{n}).

Remark: Its value does not change if any number of the dx_{k} for k ∈ {1, ..., n} are replaced by sx_{k}, since ℝ^{n} is homogeneous. Since the integrals correspond to sums, which may be regrouped arbitrarily by the associative, commutative and distributive laws if one calculates exactly or correctly with the Landau symbols, Fubini's theorem results, which allows the order of integration to be changed arbitrarily. A generalisation to functions f: ℂ^{n} → ℂ^{m} with m and n ∈ ℕ is easily possible, since for f: A → ℂ we have ∫(z ∈ A, f(z) dz) = ∫(-ℜ, ℜ, (Re(f(z)) + iIm(f(z))) dx) + ∫(-ℜ, ℜ, (iRe(f(z)) - Im(f(z))) dy) with z = x + iy, x, y ∈ ℝ, A ⊆ ℂ and f(z) = 0 for z ∈ ℂ \ A.
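
On a finite grid the two-dimensional exact integral is literally a double sum, so the Fubini argument reduces to commutativity and associativity of addition. A minimal sketch (grid size, step and test function are illustrative assumptions):

```python
# Fubini on a finite grid: the double sum gives the same value whichever
# variable is summed first, because the summands are simply regrouped.

N = 50
h = 1.0 / N
grid = [k * h for k in range(N)]
f = lambda x, y: x * y + 1.0

# integrate first over x_1, then over x_2 ...
I_xy = sum(sum(f(x, y) * h for x in grid) * h for y in grid)
# ... and in the opposite order
I_yx = sum(sum(f(x, y) * h for y in grid) * h for x in grid)
# I_xy and I_yx agree up to floating-point rounding
```

The equality holds exactly in exact arithmetic; in floating point the two orders may differ by rounding on the order of machine precision, which is the finite analogue of calculating "correctly with the Landau symbols".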

Remark: Thus, in particular, the Riemann series theorem is invalid: if one sums up the positive summands towards some aimed-at value, one is coerced to add so many negative ones that one again obtains the original sum of the series, and vice versa. The same applies to a value smaller resp. greater than the sum of the positive resp. negative summands, since the remainder almost annuls itself, and so on. Infinity, too, must not be dealt with arbitrarily if one wants to avoid going astray.
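
The point can be illustrated on a finite set of summands: any rearrangement that uses *all* summands necessarily returns the same sum. A sketch, using the alternating harmonic terms that are the standard example for the conventional Riemann series theorem (the number of terms is an illustrative assumption):

```python
import random

# With a fixed, complete set of summands, shuffling cannot change the sum;
# only discarding or postponing summands "to infinity" could.

terms = [(-1) ** (k + 1) / k for k in range(1, 10001)]
s_original = sum(terms)

shuffled = terms[:]
random.shuffle(shuffled)
s_shuffled = sum(shuffled)
# s_original and s_shuffled agree up to floating-point rounding
```

The conventional rearrangement paradox arises only because partial sums of an incomplete tail are treated as final values.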

Definition: Let i, j, k and l be (maybe trans-) natural. A sequence (a_{i}) with a_{i} ∈ ℂ and 0 ≤ α ∈ ℝ is called *α-convergent* if there is a k such that |a_{j} - a_{i}| < b|ℕ|^{-α} holds for all finite b ∈ ℝ^{+} and all i and j with i > j ≥ k. If α = 0, the sequence is simply called *convergent*. Let a_{max(i)} be the (uniquely determined) *limit value*, while the *β-tending values* a_{l}(β) are given by the elements of {z ∈ ℂ : |z - a_{max(i)}| ≤ β ∈ ℝ^{+}} with β less than any finite number from ℝ^{+}.

Remark: The conventional limit values are often merely β-tending values selected according to (general) preferences (with β often no more precise than O(1/|ℕ|)) and are generally too imprecise, since they are, for example, (arbitrarily) algebraic (of a certain degree) or transcendental.

First fundamental theorem of exact calculus: Let f be, as above, right-sided (left-sided) exactly integrable for x ∈ [a, b[ (x ∈ ]a, b]). Then the function F(x) = ∫(c, x, f(t) dt) (F(x) = ∫(c, x, f(t) st)) with c ∈ [a, b] is right-sided (left-sided) exactly differentiable, and F_{r}'(x) = f(x) (F_{l}'(x) = f(x)) holds for x ∈ [a, b[ (x ∈ ]a, b]).

Proof: dF(x) = F(suc(x)) - F(x) = ∫(c, suc(x), f(t) dt) - ∫(c, x, f(t) dt) = ∫(x, suc(x), f(t) dt) = f(x) dx, hence F_{r}'(x) = dF(x)/dx = f(x). This applies analogously to sF(x), pre(x) and sx.

Second fundamental theorem of exact calculus: If F, instead of f as above, is right-sided (left-sided) exactly differentiable for x ∈ [a, b[ (x ∈ ]a, b]) and its right-sided (left-sided) exact derivative F_{r}' (F_{l}') is there right-sided (left-sided) exactly integrable, then F(x) - F(c) = ∫(c, x, F_{r}'(t) dt) (F(x) - F(c) = ∫(c, x, F_{l}'(t) st)) holds for c ∈ [a, b].

Proof: F(x) - F(c) = Σ(t ∈ [c, x[, F(suc(t)) - F(t)) = Σ(t ∈ [c, x[, F_{r}'(t) (suc(t) - t)) = ∫(c, x, F_{r}'(t) dt). This applies analogously to t ∈ ]c, x], pre(t) and F_{l}'(t).
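
Both proofs rest on a telescoping sum, which can be checked directly on a finite grid standing in for the infinite grid of the text. In the sketch below (grid, helper name and the deliberately discontinuous choice F = floor are illustrative assumptions), no continuity of F is used, exactly as the theorems state:

```python
import math

# Second fundamental theorem on a finite grid: the sum of the one-sided
# exact derivatives times the step widths telescopes to F(x) - F(c).

N = 100
X = [k / 10 for k in range(N + 1)]           # grid 0.0, 0.1, ..., 10.0
F = lambda t: math.floor(t)                  # discontinuous "antiderivative"

def d_right(F, X, i):                        # F_r'(x_i) = dF(x_i)/dx_i
    return (F(X[i + 1]) - F(X[i])) / (X[i + 1] - X[i])

c_idx, x_idx = 0, N
integral = sum(d_right(F, X, i) * (X[i + 1] - X[i]) for i in range(c_idx, x_idx))
# integral equals F(X[x_idx]) - F(X[c_idx]) = 10 (telescoping), although
# floor jumps at every integer grid point
```

Each summand is (dF(x)/dx)·dx = dF(x), so the sum collapses term by term, which is the entire content of the proof above.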

Remark: Notice that continuity is not presupposed for either integral or derivative. By regarding the real and imaginary parts of complex functions F and f: ℂ → ℂ, both fundamental theorems can easily be transferred to the complex case. Actual integration (as inversion of the derivative) only makes sense for continuous functions, if it is to go beyond mere summation. However, if the function values can be combined into a finite number of continuous functions, for each of which the antiderivative can be specified in finite time, the integral can also be calculated for discontinuous functions, possibly with the appropriate aid of the Euler-Maclaurin sum formula and further simplification techniques.

Remark: The more the set of elements integrated over varies, the more the value of the integral can differ, even within the same interval limits. If one uses the alternative exact derivative, the formulas change accordingly, and the more continuous the occurring functions are, the less they change. Here, and in general, appropriate rounding rules can be helpful.

Intermediate value theorem: Let f, as above, be α-continuous on [a, c]. Then f(x) attains every value between min(f(x)) and max(f(x)) with an accuracy < b|ℕ|^{-α}. If f is continuous in ℝ, it attains every value of the conventional ℝ between min(f(x)) and max(f(x)).

Proof: Between min(f(x)) and max(f(x)) there exists an unbroken chain of overlapping ε-neighbourhoods, each with some f(x) as centre and ε < b|ℕ|^{-α}, since otherwise a contradiction to the α-continuity of f would emerge. The second part of the assertion follows from the fact that a deviation < b in the conventional ℝ falls below the maximal resolution.

Extremum criterion: Iff f, as above, has at the point x_{0} a left-sided (right-sided) exact derivative > 0 (< 0) and a right-sided (left-sided) exact derivative < 0 (> 0), then f has a local maximum (minimum) there. The derivative can then be defined there as 0.

Proof: Clear from the definitions.
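
The criterion translates directly into a finite scan over sampled values: flag each grid point where the left-sided discrete derivative is positive and the right-sided one negative (local maximum), or the reverse (local minimum). A sketch; the grid and the cubic example are illustrative assumptions:

```python
# Locate local extrema via the sign pattern of the one-sided derivatives,
# as in the extremum criterion above.

X = [k / 100 for k in range(-200, 201)]       # grid on [-2, 2], step 0.01
f = lambda x: x ** 3 - x                      # extrema at x = ∓1/√3 ≈ ∓0.577

maxima, minima = [], []
for i in range(1, len(X) - 1):
    dl = (f(X[i]) - f(X[i - 1])) / (X[i] - X[i - 1])   # left-sided derivative
    dr = (f(X[i + 1]) - f(X[i])) / (X[i + 1] - X[i])   # right-sided derivative
    if dl > 0 > dr:
        maxima.append(X[i])
    elif dl < 0 < dr:
        minima.append(X[i])
# maxima ≈ [-0.58], minima ≈ [0.58]: one grid point each, next to ∓1/√3
```

Exactly one grid point per extremum satisfies the sign condition, which matches the "iff" character of the criterion on a grid without plateaus.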

Product, quotient and chain rule: Let f and g be right-sided (left-sided) exactly differentiable functions with g(x_{0}) g(suc(x_{0})) ≠ 0 ≠ g(pre(x_{0})). Then (fg)_{r}'(x_{0}) = f_{r}'(x_{0})g(x_{0}) + f(suc(x_{0}))g_{r}'(x_{0}) ((fg)_{l}'(x_{0}) = f_{l}'(x_{0})g(pre(x_{0})) + f(x_{0})g_{l}'(x_{0})) and (f/g)_{r}'(x_{0}) = (f_{r}'(x_{0})g(suc(x_{0})) - f(suc(x_{0}))g_{r}'(x_{0}))/(g(x_{0})g(suc(x_{0}))) ((f/g)_{l}'(x_{0}) = (f_{l}'(x_{0})g(x_{0}) - f(x_{0})g_{l}'(x_{0}))/(g(pre(x_{0}))g(x_{0}))) hold. Only if f is sufficiently continuous does the chain rule f(g(x_{0}))_{r}' = f_{r}'(g(x_{0})) g_{r}'(x_{0}) (f(g(x_{0}))_{l}' = f_{l}'(g(x_{0})) g_{l}'(x_{0})) apply. Here the derivatives are only exact where g does not skip any values.

Proof: The product and quotient rules are easy to verify by calculation. Only if f is sufficiently continuous does f_{r}'(g(x_{0})) = (f(g(suc(x_{0}))) - f(g(x_{0})))/(g(suc(x_{0})) - g(x_{0})) hold, hence f(g(x_{0}))_{r}' = (f(g(suc(x_{0}))) - f(g(x_{0})))/(suc(x_{0}) - x_{0}) = f_{r}'(g(x_{0})) g_{r}'(x_{0}). The right-sided derivative is only exact if suc(g(x_{0})) - g(x_{0}) = g(suc(x_{0})) - g(x_{0}), that is, where g does not skip any values.
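
The discrete product rule is an algebraic identity, not a limit statement, so it holds exactly even for a coarse step. A minimal check at one grid point with suc(x_{0}) = x_{0} + h (step, point and functions are illustrative assumptions):

```python
# Verify (fg)_r'(x0) = f_r'(x0) g(x0) + f(suc(x0)) g_r'(x0) at a single
# grid point, with a deliberately coarse step h.

h = 0.25
x0 = 1.0
f = lambda x: x * x
g = lambda x: 3.0 * x + 1.0

dr = lambda u: (u(x0 + h) - u(x0)) / h        # right-sided exact derivative at x0

lhs = dr(lambda x: f(x) * g(x))               # (fg)_r'(x0)
rhs = dr(f) * g(x0) + f(x0 + h) * dr(g)       # f_r'(x0) g(x0) + f(suc(x0)) g_r'(x0)
# lhs == rhs up to floating-point rounding, even for coarse h
```

Note the asymmetric factor f(suc(x_{0})) in place of the conventional f(x_{0}); with f(x_{0}) instead, the identity would fail by a term f_{r}'(x_{0}) g_{r}'(x_{0}) h.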

Remark: The right-sided (left-sided) exact derivative of the inverse function results as f^{-1}_{r}'(y_{0}) = 1/f_{r}'(x_{0}) (f^{-1}_{l}'(y_{0}) = 1/f_{l}'(x_{0})) from the identity x = f^{-1}(f(x)) with the aid of the chain rule and y_{0} = f(x_{0}). L'Hôpital's rule is useful for (α-)continuous functions f and g. If the elements do not vary in ℝ within certain (tolerance) limits, the more sharply characterising minimum and maximum can be determined instead of infimum and supremum.
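
For a strictly increasing f the inverse rule can be checked without any limit: on the image grid, the successor of y_{0} = f(x_{0}) is f(suc(x_{0})), so the difference quotients are exact reciprocals. A sketch (step, point and function are illustrative assumptions):

```python
# Inverse-function rule on a grid: f^{-1}_r'(y0) = 1/f_r'(x0) holds exactly
# because the image grid successor of y0 = f(x0) is y1 = f(suc(x0)).

h = 0.1
x0 = 2.0
f = lambda x: x ** 3                  # strictly increasing near x0

y0, y1 = f(x0), f(x0 + h)             # y1 is the grid successor of y0
df = (y1 - y0) / h                    # f_r'(x0)
dinv = ((x0 + h) - x0) / (y1 - y0)    # (f^{-1})_r'(y0) on the image grid
# dinv * df == 1 up to floating-point rounding
```

This also shows why the restriction "where g does not skip any values" matters in the chain rule: on the image grid the successor structure is inherited from f, not assumed.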

Since to every natural number n belongs an individual n-ness that cannot be derived from its predecessors or successors, there is no complete system of axioms in mathematics, because with each new number something irreducibly new emerges. If one confines oneself, however, to selected aspects, a finite system of axioms for a finite number of entities can be specified. Each level of infinity refuses completeness all the more.

Theories are based on presuppositions. In mathematics, these are often expressed by axioms that may be true or false, which can possibly be proven by other considerations. Thus, all theories are incomplete and, as the case may be, contradictory beyond that. Instead of explicit axioms, (implicit) definitions are more suitable, in that the existence of what is specified is tacitly presupposed until refuted.

© 03.06.2010 by Boris Haase
