In the following, notations from set theory are applied. First, integration and differentiation are studied on arbitrary subsets of the set ℝ (especially on conventionally non-measurable and infinite sets, and for discontinuous functions); then we progress to subsets of ℂ^{m} or ℂ^{n} with arbitrary m, n ∈ ℕ. A generalisation to other sets is easily possible. The sign ∞ is not used, since there is nothing that exceeds the maxima of all infinite sets; moreover, all real values that exceed all finite ones can be specified more precisely.

Definition: Let A ⊆ ℝ and let f: A → ℝ be a (uniquely) defined function. Let pre x := max {y ∈ A : y < x} if {y ∈ A : y < x} ≠ ∅, and otherwise a real value ≤ x; let suc x := min {y ∈ A : y > x} if {y ∈ A : y > x} ≠ ∅, and otherwise a real value ≥ x. Then, with d for Latin dextra = right,

df(x) := f(suc x) - f(x)

is called the *right differential* of f in A. With s for Latin sinistra = left,

sf(x) := d(f pre)(x) = f(x) - f(pre x)

is called *left differential* of f in A. If f is the identity, that is f(x) = x, the function f is omitted. If A is clear or unimportant, also A is omitted.

Definition: With the notations above,

∫_{A} f(x) dx := Σ_{x ∈ A} f(x) dx = Σ_{x ∈ A} f(x) (suc x - x)

is called the *right-sided exact integral* in A over f(x). Analogously,

∫_{A} f(x) sx := Σ_{x ∈ A} f(x) sx = Σ_{x ∈ A} f(x) (x - pre x)

is called the *left-sided exact integral*. Here suc max A ≥ max A and pre min A ≤ min A are real defined. If both integrals coincide, one speaks correspondingly of the *exact integral*. For real intervals with a = min A and b = max A, we write [a, b[_{A} := A ∩ [a, b[ resp. ]a, b]_{A} := A ∩ ]a, b], and

∫_{a}^{b} f(x) dx := ∫_{[a, b[_{A}} f(x) dx

resp.

∫_{a}^{b} f(x) sx := ∫_{]a, b]_{A}} f(x) sx.

Remark: Obviously, exact integration is a special case of summation. The exact integral largely coincides with conventional integrals on the conventional ℝ; however, f need not be continuous, and the remaining conditions for the integral to exist are also significantly weaker. For A = ℝ, A can be omitted.

Remark: Obviously, the exact integral is monotone and linear. The art of integrating consists in correctly combining the summands.
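
For a finite set A, the exact integrals are literally these sums. A minimal sketch, assuming A is given as a sorted list and the values suc max A resp. pre min A are supplied externally (all helper names are our own, not notation from the text):

```python
# Exact right- and left-sided integrals over a finite set A ⊆ ℝ,
# modelled as a sorted list of floats. "suc" is the next-larger element;
# the last point needs an externally supplied "suc max A".

def exact_integral_right(A, f, suc_max):
    """Sum of f(x) * (suc x - x) over all x in A."""
    total = 0.0
    for i, x in enumerate(A):
        nxt = A[i + 1] if i + 1 < len(A) else suc_max
        total += f(x) * (nxt - x)
    return total

def exact_integral_left(A, f, pre_min):
    """Sum of f(x) * (x - pre x) over all x in A."""
    total = 0.0
    for i, x in enumerate(A):
        prv = A[i - 1] if i > 0 else pre_min
        total += f(x) * (x - prv)
    return total

# Homogeneous A = {0, dx, 2dx, ...} in [0, 1[ with f(x) = x:
dx = 1e-3
A = [k * dx for k in range(1000)]
right = exact_integral_right(A, lambda x: x, suc_max=1.0)
left = exact_integral_left(A, lambda x: x, pre_min=-dx)
print(right, left)  # both close to 1/2, differing from it by O(dx)
```

On a homogeneous grid both sums equal 0.4995 here; the deviation from the conventional value ½ is of order dx, as expected of a right- resp. left-sided sum.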

Definition: Let x_{0} ∈ A ⊆ ℝ and f: A → ℝ. Then f is called *right-sided α-continuous* in x_{0} if, for infinitesimal α ∈ ℝ^{+}, it applies:

|f(suc x_{0}) - f(x_{0})| < α.

For *left-sided* α-continuity, it must apply:

|f(x_{0}) - f(pre x_{0})| < α.

Double-sided α-continuity is called simply α-continuity. If the inequalities apply for all properly finite α ∈ ℝ^{+}, one speaks simply of continuity.

Remark: Practically, one will determine α by an estimate (after considering possible jump discontinuities).

Definition: With the notations above, the *right-sided exact derivative* of f in A at the position x_{0} ∈ A is defined as

f_{r}'(x_{0}) := df(x_{0})/dx_{0} = (f(suc x_{0}) - f(x_{0}))/(suc x_{0} - x_{0}),

provided suc x_{0} ≠ x_{0} exists and the difference quotient is defined. Analogously, *left-sided*,

f_{l}'(x_{0}) := sf(x_{0})/sx_{0} = (f(x_{0}) - f(pre x_{0}))/(x_{0} - pre x_{0}).

If both derivatives coincide, one speaks correspondingly of the exact derivative f'(x_{0}). If A is clear, A is omitted.

Remark: Differentiability can thus be easily established. Alternatively, one can also define the exact derivative everywhere as

f'(x_{0}) := (f(suc x_{0}) - f(pre x_{0}))/(suc x_{0} - pre x_{0}),

where suc x_{0} ≠ pre x_{0} applies and the quotient is defined. This has the advantage that f'(x_{0}) can be regarded more as the "tangent slope" in the point x_{0}, which can become zero for a local extremum, especially if f is α-continuous in x_{0}. This is convenient, for example, if one wants to characterise the exact values of suc x_{0} and pre x_{0} only as arbitrarily close to x_{0}, or wants to round the exact derivatives suitably in order to provide simple derivation rules, if necessary.

Remark: The exact integral can be defined alternatively in an analogous way, but the original definitions are the easiest to handle. Where applicable, there is an appropriate Landau notation. If the result of a differentiation lies outside of the domain, it should be replaced by the number lying closest to it within the domain. If this number is not uniquely determined, the result shall consist of all such numbers, or one may choose one of them (e.g. according to a uniform rule).
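
A minimal sketch of the two difference quotients on a finite set, assuming A is a sorted list (helper names are our own); on an inhomogeneous set the two one-sided derivatives need not coincide:

```python
# Right- and left-sided exact derivatives on a finite set A, following
# the difference quotients above.

def d_right(A, f, i):
    """f_r'(x_i) = (f(suc x_i) - f(x_i)) / (suc x_i - x_i)."""
    x, nxt = A[i], A[i + 1]
    return (f(nxt) - f(x)) / (nxt - x)

def d_left(A, f, i):
    """f_l'(x_i) = (f(x_i) - f(pre x_i)) / (x_i - pre x_i)."""
    x, prv = A[i], A[i - 1]
    return (f(x) - f(prv)) / (x - prv)

# On an inhomogeneous set the two need not coincide:
A = [0.0, 0.1, 0.3, 0.6, 1.0]
f = lambda x: x * x
i = 2                  # x_0 = 0.3
r = d_right(A, f, i)   # (0.36 - 0.09) / 0.3 = 0.9
l = d_left(A, f, i)    # (0.09 - 0.01) / 0.2 = 0.4
print(r, l)
```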

Definition: The function F_{r}: [a, b[_{A} → ℝ with [a, b[_{A} ⊆ A ⊆ ℝ and F_{r}'(x) = f(x) for x ∈ [a, b[_{A} and f: [a, b[_{A} → ℝ is called *right-sided antiderivative* of f in [a, b[_{A}. The function F_{l}: ]a, b]_{A} → ℝ with ]a, b]_{A} ⊆ A ⊆ ℝ and F_{l}'(x) = f(x) for x ∈ ]a, b]_{A} and f: ]a, b]_{A} → ℝ is called *left-sided antiderivative* of f in ]a, b]_{A}. If F = F_{r} = F_{l} applies in [a, b]_{A}, then F is simply called antiderivative of f in [a, b]_{A}.

Remark: Obviously, the antiderivatives of a function differ from each other only by a real addend. Antiderivatives of discontinuous functions can usually only be obtained by summing up and skilfully combining the function values; those of piecewise α-continuous functions more easily (for example, by reversing the rules of derivation).

Example: Let [a, b[_{dx} be the non-empty homogeneous subset of [a, b[ ⊆ ℝ with dx = suc x - x for all x ∈ [a, b[_{dx} and integer a/dx. For infinitesimal dx and b = -a = |ℕ|, [a, b[_{dx} is comparable with the conventional ℝ. Let T_{r} furthermore be a right-sided antiderivative of a Taylor series t, not necessarily convergent in [a, b[_{dx}, and f(x) := t(x) + ε (-1)^{x/dx} with a properly finite ε ∈ ℝ^{+}. For infinitesimal dx, f is nowhere continuous and can therefore nowhere be conventionally differentiated or integrated in [a, b[_{dx}, but it applies exactly for all dx:

and
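
A finite-scale sketch of this example: on a homogeneous grid with an even number of points, the oscillating part ε(-1)^{x/dx} cancels pairwise in the exact sum, while it dominates every difference quotient. All names are our own, and dx is chosen properly finite rather than infinitesimal:

```python
# Finite-scale illustration of f(x) = t(x) + eps * (-1)**(x/dx) on a
# homogeneous grid, with t(x) = x**2: the oscillating part cancels
# pairwise in the exact integral (over an even number of points),
# although f jumps at every grid step.

dx = 1e-3
a, b = -1.0, 1.0
n = round((b - a) / dx)         # even number of grid points
grid = [a + k * dx for k in range(n)]

t = lambda x: x * x
eps = 0.5                       # properly finite jump height

def f(x):
    k = round((x - a) / dx)     # integer index of x on the grid
    return t(x) + eps * (-1) ** k

integral_f = sum(f(x) * dx for x in grid)
integral_t = sum(t(x) * dx for x in grid)
print(integral_f - integral_t)  # the eps-part cancels pairwise

# The right-sided exact derivative of the eps-part is +-2*eps/dx, huge:
print((f(grid[1]) - f(grid[0])) / dx)
```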

Definition: The function µ: A → ℝ with a non-empty set A ⊆ ℂ^{n}, n ∈ ℕ, k ∈ {1, ..., n} and z = x + iy as well as

with µ(∅) = |∅| = 0 is called *measure* of A with suc A = {(z, (z_{1}, ..., suc z_{k}, ..., z_{n})^{⊤}) ∈ ℂ^{n} × ℂ^{n} : z ∈ A, k ∈ {1, ..., n}, suc z_{k} = suc x_{k} + i suc y_{k}}. For A ⊆ ℝ^{n} the formula can be simplified to

Remark: ℝ is, however, not homogeneous, as long as x ∈ ℝ is always to imply 1/x ∈ ℝ: If ℝ is assumed homogeneous for x in the interval ]0, 1[, then 1/x - 1/(x + dx) = dx/(x (x + dx)) > dx applies. Something similar is true if ℝ is assumed homogeneous for values > 1 and their reciprocals are considered. It must thus be specified precisely which definition and construction of ℝ one is dealing with, e.g. a homogenised ℝ_{h}. Analogously, there is also the homogeneous set ℚ_{h} of rational numbers with max ℚ_{h} = -min ℚ_{h} = |ℕ| and dw = min |w| = 1/max lcm(1, 2, ..., m) with |ℚ_{h}| = lcm(1, 2, ..., m) (|ℤ| - 1) + 1 ≤ |ℚ| for w ∈ ℚ and m ∈ ℕ. Obviously, calculating in the conventional ℝ corresponds to calculating in ℚ_{h}. If the set A_{ℝ} of the real conventionally algebraic numbers is homogenised, it is at least as dense as the conventional ℝ, as one can easily realise from the homogenisation of the set ℕ ∪ {√2}.

Remark: Much more important, interesting and relevant, in particular for computers, are the homogeneous sets B_{j} with natural j that emerge by continued halving of the unit distance. If one agrees on an accuracy of representation of 2^{-j}, one can calculate with this maximal accuracy. The homogeneous ℝ_{h} should also be isomorphic to such an infinite set, where dx = 2^{-j} is to be assumed minimally possible for x ∈ B_{j} and now trans-natural j. For every additional number x adjoined to a homogeneous underlying set, the newly emerged set can then be homogenised, if one agrees on an accuracy of representation for x.
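
A sketch of a finite B_{j} (the helper name is ours). The spacings 2^{-j} are exactly representable in binary floating point, which is part of what makes these sets natural for computers:

```python
# The sets B_j arise by continued halving of the unit distance:
# B_j has spacing dx = 2**-j.

def B(j, lo=0.0, hi=1.0):
    """Points of spacing 2**-j in [lo, hi[."""
    dx = 2.0 ** -j
    n = int(round((hi - lo) / dx))
    return [lo + k * dx for k in range(n)]

B3 = B(3)
print(B3)             # [0.0, 0.125, 0.25, ..., 0.875]
print(B3[1] - B3[0])  # dx = 0.125, exact in binary floating point
```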

Example: The middle-thirds Cantor set C has the relative measure µ(C) = (⅔)^{|ℕ|}. Let the function c: [0, 1] → {0, (⅔)^{-|ℕ|}} be defined by c(x) = (⅔)^{-|ℕ|} for x ∈ C and c(x) = 0 for x ∈ [0, 1] \ C. Then it applies:

∫_{[0, 1]} c(x) dx = (⅔)^{-|ℕ|} µ(C) = 1.
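
The computation can be checked at a finite construction stage n instead of the infinite stage |ℕ|: after n middle-third removals, the remaining set has measure (⅔)^{n}, and the correspondingly scaled indicator function integrates to exactly 1 (a sketch, names ours):

```python
# Finite-stage analogue of the Cantor-set example: after n removal
# steps, the remaining set C_n has measure (2/3)**n, and the function
# c_n = (2/3)**-n on C_n (0 elsewhere) integrates to 1.

def cantor_intervals(n):
    """Intervals remaining after n middle-third removals from [0, 1]."""
    ivs = [(0.0, 1.0)]
    for _ in range(n):
        ivs = [iv for (a, b) in ivs
               for iv in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

n = 10
ivs = cantor_intervals(n)
measure = sum(b - a for a, b in ivs)
integral = (2.0 / 3.0) ** -n * measure
print(measure, integral)   # measure ~ (2/3)**10, integral ~ 1
```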

Example: For the classes a + ℚ with a ∈ ℝ of the equivalence relation x ~ y ⇔ x - y ∈ ℚ with x, y ∈ ℝ, their representatives can be specified through the set R = [0, 1/|ℕ|[ with the measure µ(R) = 1/|ℕ|. Let the function r: ℝ → {0, 1} be defined by r(x) = 1 for x ∈ ℤ + R and r(x) = 0 for x ∈ ℝ \ (ℤ + R). Then it applies:

Example (cf. set theory): Let A_{1} = [0, 1[ ∩ A_{ℝ} and let the function q: A_{1} → {0, 1} be defined by q(x) = 1 for x ∈ A_{1} \ ℚ and q(x) = 0 for x ∈ A_{1} ∩ ℚ. The exact integral over q(x)dx then has in A_{1} the transcendental value

Remark: The sets C, R and ℚ are conventionally not measurable. Thus, the exact integral is more generally valid than the Riemann and Lebesgue(-Stieltjes) integrals and other integrals, since the latter exist only on conventionally measurable sets. The functions were chosen so simple only for the sake of clarity and may, of course, be more complicated.

Definition: For n ∈ ℕ, the *exact integral* in A ⊆ ℝ^{n} over a function f: A → ℝ is defined by

∫_{A} f(x) dx := Σ_{x ∈ A} f(x) (suc x_{1} - x_{1}) ··· (suc x_{n} - x_{n}),

where the suc x_{i} for i ∈ {1, ..., n} are ≥ x_{i} real defined in the set suc A (cf. above).

Definition: A *sequence* (a_{i}) with *members* a_{i} is a map of an (in-)finite index set I with gapless consecutive (trans-) natural elements i to ℂ: i ↦ a_{i}. If the member with the greatest index is infinitesimal, the sequence is called an *infinitesimal sequence*. A *series* is a sequence (s_{n}) with the *partial sums*

s_{n} := Σ_{i=1}^{n} a_{i}

for n ∈ I. The smallest index for i in s_{n} can be defined differently (e.g. 0 or -n).

Remark: Since sums can be arbitrarily regrouped by the associative, commutative and distributive laws, if one calculates correctly resp. with the Landau symbols, Fubini's theorem results for exact integrals, which allows the order of integration to be changed arbitrarily. A generalisation to functions f: ℂ^{n} → ℂ^{m} with m and n ∈ ℕ is easily possible, since for z = x + iy ∈ A ⊆ ℂ with x, y ∈ ℝ and f: A → ℂ it applies:

where suc max Re z ≥ max Re z and suc max Im z ≥ max Im z are real defined.

Remark: Thus, in particular, the Riemann series theorem is invalid: when summing up the positive summands towards a target value, one is compelled to add so many negative ones that one obtains again the original sum of the series, and vice versa. With a smaller resp. greater value than the sum of the positive resp. negative summands, the same applies, since the rest is almost annulled, and so on. Infinity, too, must not be dealt with arbitrarily, if one wants to avoid going astray. Whoever moves something into infinity must not succumb to the illusion that it no longer exists.

Finiteness criterion for series: The partial sum with the greatest index of a real series (s_{k}) for infinite (trans-) natural k and n is finite, iff it can be represented as

with finite real a and finite |a_{n} - b_{n}| forming a monotonically nonincreasing infinitesimal sequence for real a_{n} and b_{n}. In the complex case, this must be satisfied for real and imaginary part.

Proof: The assertion follows directly from the Leibniz criterion and from the fact that the summands can otherwise be arbitrarily regrouped: sorted by size and sign, summed up or split up into sums.

Example: From the alternating harmonic series follows

Σ_{k=1}^{|ℕ|} (-1)^{k+1}/k = ln 2 + O(1/|ℕ|)

with a_{n} = 1/(2n - 1) and b_{n} = 1/(2n) in the sense of the finiteness criterion.
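
A numerical sketch of the partial sums: the terms pair up as a_{n} = 1/(2n-1) and b_{n} = 1/(2n), and the partial sums approach ln 2 with error O(1/n):

```python
import math

# Partial sums of the alternating harmonic series approach ln 2,
# with the error shrinking like 1/(2n).

def s(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, s(n), abs(s(n) - math.log(2)))
```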

Remark: More interesting examples are those for which the nonincreasing monotony of |a_{j} - b_{j}| cannot be proven as easily as e.g. for the divergent series

where c_{j} increases monotonically, but c_{j+1} - c_{j} is monotonically nonincreasing.

Finiteness criterion for products: The product

Π_{k=1}^{n} (1 + a_{k})

for infinite (trans-) natural n and finite complex a_{k} that are, w. l. o. g., not ≤ -1 if they are real, is finite iff

Σ_{k=m+1}^{n} a_{k}

is so, where m must be able to be chosen so big that

Π_{k=1}^{m} (1 + a_{k})

is also finite and |a_{k}| < 1 for k > m, where all factors with |1 + a_{k}| > 1 must be able to be so rearranged resp. pooled with other factors that, w. l. o. g., |a_{k}| for k ≥ n_{0} with a (trans-) natural n_{0} forms a monotonically nonincreasing infinitesimal sequence, after the number of the factors with |1 + a_{k}| < 1 is likewise minimised and their index k for a_{k} is chosen ≤ m.

Proof: The assertion follows from the logarithm series.
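
A numerical sketch of the criterion via the logarithm series, with a_{k} = 1/k²: the sum stays finite, hence so does the product. The classical limit values π²/6 and sinh(π)/π serve only as checks and are not taken from the text:

```python
import math

# The product over (1 + a_k) stays finite exactly when the sum over
# a_k does: here a_k = 1/k**2, so the sum converges (to pi**2/6) and
# the product converges as well (to sinh(pi)/pi, a classical value).

n = 100000
log_prod = sum(math.log1p(1.0 / k ** 2) for k in range(1, n + 1))
prod = math.exp(log_prod)
total = sum(1.0 / k ** 2 for k in range(1, n + 1))
print(prod, total)
```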

Definition: Let i, j, k and l be natural. A sequence (a_{i}) with a_{i} ∈ ℂ and infinitesimal α ∈ ℝ^{+} is called *α-convergent*, if there is a k such that for all i and j with max i ≥ i > j ≥ k it applies:

|a_{j} - a_{i}| < α.

If the inequality applies for all properly finite α ∈ ℝ^{+}, the sequence is simply called *convergent*. The uniquely determined last value of the sequence is a_{max i}; it is also called the *0-limit value* or simply *limit value*, while the *β-limit values* a_{l}(β) are given by the elements of

{z ∈ ℂ : |z - a_{max i}| ≤ β}

with infinitesimal β ∈ ℝ^{+}.

Remark: The conventional limit values are often only β-limit values, selected according to (general) preferences (with β often no more precise than O(1/|ℕ|)) and generally too imprecise, since they are, for example, (arbitrarily) algebraic (of a certain degree) or transcendental.

Proposition about commuting β-limit values for the integration: Let A ⊆ ℝ and let (f_{n}) be a sequence of integrable functions with (trans-) natural n and f_{n}: A → ℝ, which α-converge to the integrable function f: A → ℝ. Then it applies:

Proof:

Remark: As long as one calculates correctly with Landau notation, differentiation resp. integration and summation may (also) be interchanged in (divergent) series. The conventional procedure can, however, lead to not inconsiderable error propagation in subsequent calculations, for example if β_{1}µ(A) is a properly finite value.

First fundamental theorem of exact calculus: Let f be, as above, right-sided resp. left-sided exactly integrable for x ∈ [a, b[_{A} resp. x ∈ ]a, b]_{A}. Then the function

F_{r}(x) := ∫_{[c, x[_{A}} f(t) dt

resp.

F_{l}(x) := ∫_{]c, x]_{A}} f(t) st

with c ∈ [a, b]_{A} is right-sided resp. left-sided exactly differentiable, and it applies

F_{r}'(x) = f(x) resp. F_{l}'(x) = f(x)

for x ∈ [a, b]_{A}.

Proof: It applies

dF_{r}(x) = ∫_{[c, suc x[_{A}} f(t) dt - ∫_{[c, x[_{A}} f(t) dt = f(x) (suc x - x) = f(x) dx,

hence F_{r}'(x) = dF_{r}(x)/dx = f(x).

This applies analogously to sF(x), pre x and sx.
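
The proof is pure telescoping, which a short check makes concrete: on an arbitrary, even inhomogeneous, finite set, the right differential of the accumulated sum F gives back exactly f(x) dx, with no limit process involved (a sketch, names ours):

```python
import random

# First fundamental theorem on a finite set: with F(x) the sum of
# f(t)*(suc t - t) over t in [c, x[, the right differential of F at x
# is exactly f(x)*(suc x - x).

random.seed(1)
A = sorted(random.uniform(0.0, 1.0) for _ in range(50))
f = lambda x: 3.0 * x * x - 1.0

def F(i):
    """Right-sided integral of f from A[0] up to (excluding) A[i]."""
    return sum(f(A[j]) * (A[j + 1] - A[j]) for j in range(i))

for i in range(5, 9):
    dF = F(i + 1) - F(i)              # right differential of F
    rhs = f(A[i]) * (A[i + 1] - A[i])  # f(x) dx
    assert abs(dF - rhs) < 1e-12
print("F_r'(x) = f(x) holds exactly on the grid")
```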

Second fundamental theorem of exact calculus: If F is, instead of f as above, right-sided exactly differentiable for x ∈ [a, b[_{A}, and its right-sided exact derivative F_{r}' is there right-sided exactly integrable, then it applies for c ∈ [a, b]_{A}:

∫_{[c, x[_{A}} F_{r}'(t) dt = F(x) - F(c).

Left-sided, it applies analogously for x ∈ ]a, b]_{A} and F_{l}':

∫_{]c, x]_{A}} F_{l}'(t) st = F(x) - F(c).

Proof: The sum telescopes:

∫_{[c, x[_{A}} F_{r}'(t) dt = Σ_{t ∈ [c, x[_{A}} (F(suc t) - F(t)) = F(x) - F(c).

This applies analogously to t ∈ ]c, x]_{A}, pre t and F_{l}'(t).

Remark: Notice that continuity is not presupposed for integral and derivative. By regarding the real and imaginary parts of complex functions F and f: ℂ → ℂ, both fundamental theorems can be transferred easily to the complex case. Actual integration (as inversion of the derivative) only makes sense for continuous functions, if it is to go beyond mere summation. However, if the function values can be combined into a finite number of continuous functions, for each of which the antiderivative can be specified in finite time, the integral can also be calculated for discontinuous functions, possibly with the appropriate aid of the Euler-Maclaurin sum formula and further simplification techniques.

Remark: The more the set of elements integrated over varies, the more the value of the integral can differ, even within the same interval limits. If one uses the alternative exact derivative, the formulas change accordingly; the more continuous the occurring functions are, the less they do so. Here and in general, appropriate rounding rules can be helpful.

Intermediate value theorem: Let f: [a, b] → ℝ be α-continuous in [a, b]. Then f(x) attains any value between min f(x) and max f(x), for x ∈ [a, b], with an accuracy < α. If f is continuous in ℝ, it attains any value of the conventional ℝ between min f(x) and max f(x).

Proof: Between min f(x) and max f(x) there exists an unbroken chain of overlapping α-neighbourhoods, each with an f(x) as centre, since otherwise a contradiction to the α-continuity of f would emerge. The second part of the assertion follows from the fact that a deviation |f(suc x_{0}) - f(x_{0})| < α resp. |f(x_{0}) - f(pre x_{0})| < α for all properly finite α ∈ ℝ^{+} falls below the maximal resolution of the conventional ℝ.
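
The chain-of-α-neighbourhoods argument can be tried on sampled values: taking α as the largest jump between neighbouring values, every target between min f and max f is met with accuracy < α (a sketch with our own helper):

```python
import math

# Discrete intermediate value theorem: if |f(suc x) - f(x)| < alpha on
# a finite grid, every target between min f and max f is attained with
# accuracy < alpha.

def hits_within_alpha(values, target, alpha):
    return any(abs(v - target) < alpha for v in values)

grid = [k / 1000.0 for k in range(1001)]
vals = [math.sin(7.0 * x) for x in grid]                 # small jumps
alpha = max(abs(b - a) for a, b in zip(vals, vals[1:]))  # about 0.007

lo, hi = min(vals), max(vals)
targets = [lo + (hi - lo) * j / 100.0 for j in range(101)]
print(all(hits_within_alpha(vals, t, alpha + 1e-12) for t in targets))
```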

Extremum criterion: Iff f has, as above, in the point x_{0} a left-sided exact derivative > 0 and a right-sided exact derivative < 0, then f has there a local maximum. Iff f has, as above, in the point x_{0} a left-sided exact derivative < 0 and a right-sided exact derivative > 0, then f has there a local minimum. The derivative can then be defined there as 0.

Proof: Clear from the definitions.

Product, quotient and chain rule: Let f and g be right-sided (left-sided) exactly differentiable functions and all quotients well defined. Then it applies:

(fg)_{r}'(x_{0}) = f_{r}'(x_{0})g(x_{0}) + f(suc x_{0})g_{r}'(x_{0}),

(f/g)_{r}'(x_{0}) = (f_{r}'(x_{0})g(x_{0}) - f(x_{0})g_{r}'(x_{0}))/(g(x_{0})g(suc x_{0}))

and

f(g(x_{0}))_{r}' = γ_{r}(x_{0}) f_{r}'(g(x_{0})) g_{r}'(x_{0})

with

γ_{r}(x_{0}) = (f(g(suc x_{0})) - f(g(x_{0})))/((g(suc x_{0}) - g(x_{0})) f_{r}'(g(x_{0}))).

Left-sided, it applies analogously:

(fg)_{l}'(x_{0}) = f_{l}'(x_{0})g(pre x_{0}) + f(x_{0})g_{l}'(x_{0}),

(f/g)_{l}'(x_{0}) = (f_{l}'(x_{0})g(x_{0}) - f(x_{0})g_{l}'(x_{0}))/(g(x_{0})g(pre x_{0}))

and

f(g(x_{0}))_{l}' = γ_{l}(x_{0}) f_{l}'(g(x_{0})) g_{l}'(x_{0})

with

γ_{l}(x_{0}) = (f(g(x_{0})) - f(g(pre x_{0})))/((g(x_{0}) - g(pre x_{0})) f_{l}'(g(x_{0}))).

Exactly then is γ_{r}(x_{0}) = γ_{l}(x_{0}) = 1, if f(g(x_{0})), f(g(suc x_{0})) and f(suc g(x_{0})) resp. f(g(x_{0})), f(g(pre x_{0})) and f(pre g(x_{0})) are lying on a straight line.

Proof: Product and quotient rule are easy to recalculate. For the chain rule, it applies:

f(g(x_{0}))_{r}' = (f(g(suc x_{0})) - f(g(x_{0})))/dx_{0} = ((f(g(suc x_{0})) - f(g(x_{0})))/(g(suc x_{0}) - g(x_{0}))) g_{r}'(x_{0}).

Thus

f(g(x_{0}))_{r}' = γ_{r}(x_{0}) f_{r}'(g(x_{0})) g_{r}'(x_{0}).

The last sentence is valid because the differences of the f-values in question always lie on a straight line and, divided by the differences of the corresponding g-values, build a quotient of slopes of two straight lines that share a point. Iff the slopes are equal, the quotient becomes 1, the conventional chain rule therefore applies, and the assertion follows.

Remark: In order that the product and quotient rule coincide precisely enough with the conventional ones, either f or g must be (α-)continuous enough in x_{0} (i.e. α can be set small enough). That γ can attain almost any value in ℝ can be seen from the functions f(y) = y^{±2} and y = g(x) = x^{2} with x_{0} = d0 and y ∈ ℝ. Thus, the conventional chain rule is only usable approximately for non-infinitesimal arguments. If f is not linear, or g is neither the identity nor a translation, it is quite unlikely that the three f-values lie on a straight line. If f and g are continuous, the chain rule applies at least approximately.
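
That the exact product rule is an identity rather than an approximation follows from expanding d(fg); a quick numerical confirmation with arbitrary values:

```python
import random

# The discrete product rule d(fg)(x) = df(x)*g(x) + f(suc x)*dg(x)
# holds exactly, because
# f(suc)g(suc) - f(x)g(x) = (f(suc)-f(x))g(x) + f(suc)(g(suc)-g(x)).

random.seed(2)
for _ in range(5):
    fx, fs = random.random(), random.random()  # f(x), f(suc x)
    gx, gs = random.random(), random.random()  # g(x), g(suc x)
    lhs = fs * gs - fx * gx                    # d(fg)(x)
    rhs = (fs - fx) * gx + fs * (gs - gx)      # df*g(x) + f(suc x)*dg
    assert abs(lhs - rhs) < 1e-12
print("product rule is exact")
```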

Remark: The right-sided resp. left-sided exact derivative of the inverse function results as

f^{-1}_{r}'(y_{0}) = 1/f_{r}'(x_{0}) resp. f^{-1}_{l}'(y_{0}) = 1/f_{l}'(x_{0})

from y_{0} = f(x_{0}) and the identity x = f^{-1}(f(x)) with the aid of the chain rule for the same precision. L'Hôpital's rule is useful for (α-)continuous functions f and g; for f(x_{0}) = g(x_{0}) = 0, as well as f(suc x_{0}) and g(suc x_{0}) not both simultaneously 0, it results (and analogously left-sided) from

f(suc x_{0})/g(suc x_{0}) = (f(suc x_{0}) - f(x_{0}))/(g(suc x_{0}) - g(x_{0})) = f_{r}'(x_{0})/g_{r}'(x_{0}).

Outlook on complex analysis: The entire functions f(z) = z/ℜ and g(z) = Σ a_{k} z^{k} with k ∈ ℕ and a_{k} = 1/ℜ^{k+1} disprove Liouville's theorem.

Proof: Since |f(z)| ≤ 1 and additionally |g(z)| converges, the assertion follows directly.

Choosing sufficiently small (transcendental) constants in the generalisation of Liouville's theorem disproves it, too. Both theorems cannot be remedied by restricting them, since the holomorphy of a function h on (the conventional) ℂ compels the Laurent polynomial (the Laurent series) of h to have coefficients a_{k} with |a_{k}| < O(ℜ^{-|k|}) (O(ℜ^{-|k|-1})) and (trans-) integer k (in order to converge), if it is not already constant. Thus, a limitation to coefficients bounded finitely from below is pointless.

The function f yields a biholomorphic, bijective mapping of the conventionally and circularly defined ℂ onto the complex unit circle ℰ_{d}, very condensed compared with the complex unit circle ℰ, with |ℰ_{d}| = |ℂ| ≫ |ℰ|. Therewith, the Riemann mapping theorem is also valid for ℂ. The complete ℂ cannot, of course, be mapped in this way.

Definition: A point p ∈ M ⊆ ℂ^{n} with n ∈ ℕ is called *(ω-) α-accumulation point* of M resp. of a sequence, if the open sphere B_{α}(p) ⊆ ℂ^{n} around p with infinitesimal radius α contains infinitely many points of M. If this applies for all properly finite α, the α-accumulation point is simply called an accumulation point.

Let p(z) = ∏(z - c_{k}) with k ∈ ℕ for z ∈ ℂ be an infinite product with pairwise distinct zeros c_{k} ∈ B_{1/|ℕ|}(0) ⊂ ℂ (the open circular disk around 0 with radius 1/|ℕ|), chosen so that |f(c_{k})| < 1/|ℕ| applies for a function f holomorphic in a domain G ⊆ ℂ with f(0) = 0. G contains B_{1/|ℕ|}(0) completely, which is always obtainable by a coordinate transformation while G is "big" enough.

Then, for the function g(z) := f(z) + p(z), likewise holomorphic there, the coincidence set {w ∈ G : f(w) = g(w)} has an accumulation point at 0, and f ≠ g holds, in contradiction to the statement of the identity theorem. Examples of f are all functions bounded in B_{1/|ℕ|}(0) and at the same time holomorphic in G with zero 0. Since p(z) can attain any complex value, the deviation between f and g is not negligible.

Also in contradiction to the identity theorem is the fact that, at a point c ∈ G, all the derivatives d^{(k)}(c) = h^{(k)}(c) of two functions d and h can coincide for all k ∈ ℕ, while d and h nevertheless differ significantly further away, beyond this local fact, without losing their holomorphy, since not every holomorphic function, due to the approximate character of differentiation resp. of the calculation with Landau symbols, can be (uniquely) expanded into a Taylor series (cf. Transcendental numbers).

If we enlarge the index set of ∏(z - c_{k}), we obtain entire functions with trans-naturally many zeros. The zero set can be open and need not be discrete. Thus the set of all functions holomorphic in a domain G need not be free of zero divisors. These functions, like polynomials with at least n > 2 pairwise distinct zeros, disprove the theorems of Picard, since they miss at least n - 1 values in ℂ.

© 29.10.2011 by Boris Haase
