We need a way to handle infinite but bounded length floating point computation. At present we have FLOAT and DFLOAT, but we really need general purpose floating point algorithms similar in spirit to the infinite length integers.

From Bill Page, Sun, 12 Jun 2005 14:26:58 -0500:

Tim, what is the domain FLOAT, then? From the library source:

)abbrev domain FLOAT Float

From Tim Daly, Sun, 12 Jun 2005 17:17:54 -0500, Subject: infinite floats domain:

you're right, of course. t

From Bill Page, Sun, 12 Jun 2005 21:24:48 -0400, Subject: infinite floats domain:

On June 12, 2005 6:18 PM Tim Daly wrote:

> you're right, of course. my mind is mush at the moment. i've been driving all week.

Yah, that'll do it to ya ... :) How's CMU? Seriously, I think that there are some issues that should be dealt with here. There may very well be some situations that could make more efficient and/or effective use of Axiom's "infinite precision" floats. I expect that, just as in the case of "infinite" integers, there may be some better ways and some worse ways of doing calculations with these objects. For that matter, mathematically speaking, just what is Axiom's Float domain?
Regards, Bill Page.

From wyscc, Mon, 13 Jun 2005 07:00:00 -4:00:

Axiom has an arbitrary precision floating point system, not an infinite precision one. In a fixed precision floating point system there are only finitely many numbers, so it cannot be a model of the real numbers (nor even of the rationals). The Axiom implementation uses the simplified representation m * b^e as it mathematically stands (m is still normalized to give a unique representation, but the base point is to the right of the least significant digit). The intrinsic properties do not depend on the base or the word length, so let's assume b = 10 for discussion purposes. Let's say the exponent e has precision one (that is, only one digit plus one sign bit), so there are 19 exponents (from -9 to 9). Say the mantissa m has precision 2 (that is, only two digits plus one sign bit), so there are 181 possible mantissas (-99 to -10, 0, and 10 to 99). For simplicity, we will only discuss positive numbers. So one sees that there can only be 90 * 19 = 1710 positive numbers. But these are not evenly distributed: there are 90 positive numbers for ANY two successive exponents, so 90 positive numbers between 10 and 99, 90 between 100 and 990, and 90 between 1000 and 9900. It is clear that the accuracy with which a real number is approximated by a floating point number in this system becomes less and less as the exponent goes up. This is the source of round-off errors: for instance, in this toy system 99 + 0.4 rounds back to 99, because 99.4 would need a three digit mantissa.

This is basically the situation in DFLOAT (on Intel processors), where b = 2, the mantissa has 53 bits stored in a 52 bit field (not including sign; note that in base 2 the most significant digit of a normalized number must be 1, so there is no need to store it!), and the exponent has 11 bits (including sign), its range offset because of the base point location. This is exactly equivalent to the IEEE-754 standard for 64 bit floating point. The actual arithmetic operations are done via Lisp, which I assume calls the hardware.

In FLOAT the mantissa may be of any finite precision the user selects. Shall we improve on that? Example: addition of two floats whose exponents are far apart. If we use the default precision:

fricas bits()$Float
Type: PositiveInteger
fricas x:Float:=2^(-34)
Type: Float
fricas y:Float:=2^35
Type: Float
fricas x+y
Type: Float
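Here the sum silently rounds to y: representing 2^35 + 2^(-34) exactly takes 35 + 34 + 1 = 70 significant bits, more than the 68 in effect, and the 2^(-34) term is below half an ulp of 2^35 at that precision. Since x and y are themselves stored exactly (each has a one-bit mantissa), redoing the addition just past 70 bits yields the exact sum (a check added in editing, not part of the original thread; outputs omitted):

fricas bits(70)$Float
fricas x + y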
By the way, in interpreting the correctness of displayed answers of a floating point computation, there is another source of error besides round-off. For example:

fricas x:Float:=sqrt(2)
Type: Float
fricas bits(100)$Float
Type: PositiveInteger
fricas z:Float:=sqrt(2)
Type: Float
fricas z - x
Type: Float
Most people would expect the answer to be zero, since the constant subtracted here is sqrt(2) correct to 20 significant digits:

fricas x - 1.4142135623730950488
Type: Float
fricas bits(120)$Float
Type: PositiveInteger
fricas z - 1.4142135623730950488016887242
Type: Float
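The nonzero residues come from digits beyond what the working precision supports: d decimal digits need about d*log2(10), roughly 3.32*d, bits, so 68 bits carry only about 20 significant decimal digits and 120 bits about 36. The current decimal precision can be queried directly (a check added in editing, not part of the original thread; output omitted):

fricas digits()$Float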
From Tim Daly:

Now that I'm awake the idea is coming back to me. What originally triggered the thought was that we need a way to compute an answer to a variable number of decimal places which could be expanded later. Thus we could compute the coefficients of a polynomial to 3 decimal places for display, but expand them to 200 decimal places when we are evaluating the polynomial at a point. The idea was to have a different representation that stores a closure of the computation, similar to the series mechanism, so we could "continue" to get more digits at a later time. This raises the same kind of implementation issue that indefinite computation raises, except that indefinites are symbolic and infinite precision floats are not. The issue is that we want to keep the state of a computation in a form that we can expand; this involves storing functions and compositions of functions, which would imply that floating point computations are no longer strictly numeric. So the product of two infinite float (INFL) objects would be stored in such a way as to keep the original values as well as the result. Ideally we could stop a computation, store it as a closure, and restart it when we need more digits. I have to give some thought to the more general case of a "closure" based mathematical object that captures computations. As we push for more "symbolic" answers this issue keeps appearing. Tim

From Kai Kaminski:

I just read your posts about infinite precision floats on the Axiom list and I recalled that I have seen something like this a while ago. I am not sure if this is what you are looking for, but it sounds like it: http://www.haible.de/bruno/MichaelStoll/reals.html It is implemented in Common Lisp, apparently.

From Tim Daly: yes, this is exactly the idea. Tim

From Bill Page:

How is this different from what Axiom already does? I can write:

fricas a:=2*asin(1)
Type: Expression(Integer)
fricas a::Expression Float
Type: Expression(Float)
fricas digits(100)
Type: PositiveInteger
fricas a::Expression Float
Type: Expression(Float)
So %pi already has this kind of "closure" built-in. Is it really possible to do this more generally for all possible computations with real numbers? How are "computable reals" different from actual real numbers?

wyscc wrote: Any floating point system is, mathematically speaking, only a small subset (and not an evenly distributed one at that) of the reals, and for that matter, of the rationals. It is definitely not FRAC INT, which is mathematically equivalent to the field of rational numbers.

But surely there is an isomorphism between the domain of infinite precision floating point numbers and the domain of rationals, no? Maybe these computable reals are something else? Isn't it related to the RealClosure as already implemented in Axiom?

From William Sit, Tuesday June 14 20:06:00 -4:00, Subject: Float, Real, RealClosure:

Tim wrote: This raises the same kind of implementation issue that indefinite computation raises except that indefinites are symbolic and infinite precision floats are not.

But that is a very big difference. For example, Axiom, using macro definitions (the == below), can recompute a value whenever the precision is raised. However, in my view, this still is not infinite precision in the sense discussed above: z == y + x will not be computed exactly, because the system does not automatically increase the precision in Float:

fricas )clear all
Type: PositiveInteger
fricas x == 2^(-34)::Float
Type: Void
fricas y == 2^(35)::Float
Type: Void
fricas z == y+x
Type: Void
fricas x
Compiling body of rule x to compute value of type Float
Type: Float
fricas y
Compiling body of rule y to compute value of type Float
Type: Float
fricas z
Compiling body of rule z to compute value of type Float
Type: Float
fricas bits(200)$Float
Type: PositiveInteger
fricas z
Type: Float
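At 200 bits the 70 significant bits of the exact sum fit easily, so z, being a macro, now re-evaluates exactly. One can confirm this against the exact rational value (a check added in editing, not part of the original thread; output omitted):

fricas z - (2^35 + 2^(-34))::Float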
fricas bits(68)$Float
Type: PositiveInteger
fricas 2^(-34)+2^35
Type: Fraction(Integer)

(Note the last result: without the ::Float coercions, 2^(-34)+2^35 stays in Fraction(Integer) and is exact.) In contrast, evaluation is not always possible for indefinites (a topic to be discussed separately). It is difficult to evaluate indefinites, since that results in the recursive composition of functions (the technique involved bears similarity to code optimization by a compiler, but unfortunately we do not just want the code, we want a mathematically readable expression). The indefinites are the functions themselves, not their functional values. The output form of an indefinite is either the defining expression (typically useless, since no computation is performed), or recursively composed (typically a mess), or recursively analyzed into cases (proviso-attached expressions, which are extremely computationally demanding to simplify). When an indefinite is involved in a loop control construct, it is difficult to "simplify". None of these difficulties are present in infinite precision floats.

Bill Page wrote: So %pi already has this kind of "closure" built-in. Is it really possible to do this more generally for all possible computations with real numbers?

Yes:

fricas bits(68)$Float
Type: PositiveInteger
fricas x == sin(1/sqrt(2)::Float)
Type: Void
fricas x
Compiling body of rule x to compute value of type Float
Type: Float
fricas bits(100)$Float
Type: PositiveInteger
fricas x
Type: Float
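This is essentially the computable reals idea from the link Kai posted, and Tim's closure representation: model a real number as a function from a requested precision to an approximation. A minimal sketch in the interpreter, under that reading (approx is a hypothetical name, not a library function, and each call naively re-runs the whole computation instead of refining cached digits as a real implementation would):

fricas approx(n) == (bits(n)$Float; sin(1/sqrt(2)::Float))
fricas approx 68
fricas approx 200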
But surely there is an isomorphism between the domain of infinite precision floating point numbers and the domain of rationals, no?

If by such a domain you meant a floating point representation where the mantissa has infinite precision (not arbitrary precision), then you might as well use infinite sequences of rational numbers in decimal (or binary) expansion; you could represent, in theory, any real number and perform exact arithmetic. Since the set of rational numbers is dense in the set of real numbers, and FRAC INT is the field of rational numbers, rational number sequences are better for approximating real numbers, and of course they would be exact on rationals (unlike any floating point system, FPS). In that case the isomorphism would be with the field of real numbers. But this is of course not what floating point systems are. In an FPS you may be capturing more and more rational numbers as the precision is increased, but there are always gaps between two consecutive representable numbers.

Another way of stating the difference is that an infinite precision FPS (IPFPS) requires the limiting value of the mantissa (and an associated exponent to locate the fractional point), which does not exist in an arbitrary precision FPS (APFPS, which I view as the infinite union of n-precision FPS over all positive n; your view may be different). There are problems with simulating IPFPS by automatically adjusting the precision to yield exact results on floating point computation. The resulting system still cannot perform exact arithmetic on all rational (therefore, real) numbers, due to truncation and round-off. Whether the error term tends to zero would depend on the particular computation involved (the problem may be ill-conditioned). We also need algorithms to automatically adjust the precision, have to worry about efficiency because of repeated recalculation at higher precision, and perhaps worry about termination.

There is also an intrinsic problem in the representation. Consider the problem of extending the precision of two FP numbers which have identical FP representations at 10-bit precision. To avoid truncation errors due to binary to decimal conversion, we will work with the internal representation of Float:

fricas z == sqrt(3)::Float
Type: Void
fricas bits(10)$Float
Type: PositiveInteger
fricas mz := mantissa z
Compiling body of rule z to compute value of type Float
Type: PositiveInteger
fricas ez := exponent z
Type: Integer
fricas z10 := mz*2^ez
Type: Fraction(Integer)
fricas z10float == z10::Float
Type: Void
fricas mantissa z10float
Compiling body of rule z10float to compute value of type Float
Type: PositiveInteger
fricas exponent z10float
Type: Integer
fricas mantissa(z-z10float)
Type: NonNegativeInteger
So here we have two identical 10-bit floating point numbers, both defined as macros. If we extend the precision, they would no longer be the same. fricas bits(20)$Float
Type: PositiveInteger
fricas mantissa z
Type: PositiveInteger
fricas mantissa z10float
Type: PositiveInteger
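The discrepancy is forced by the representation: z10float freezes the exact rational mz*2^ez captured at 10 bits, so at 20 bits its mantissa is just the old 10-bit mantissa padded with ten zero bits, while z re-evaluates sqrt(3) and obtains ten genuinely new bits. A check of this reading (added in editing, not part of the original thread; outputs omitted), still at 20 bits:

fricas mantissa(z10float) - 2^10 * mz    -- expected 0: old mantissa, zero-padded
fricas mantissa(z) - 2^10 * mz           -- expected nonzero: genuinely new bits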
This raises some questions. It shows that the current representation of a Float does not carry enough information to extend its precision consistently. There should be a lot of similarities between IPFP and power series, because we should treat IPFP numbers as infinite sequences, not floats. In power series, we start off with the infinite, compute with the infinite (lazy evaluation), but display finitely. In an FPS, we start off with the finite and compute with the finite.

Bill Page wrote: Maybe these computable reals are something else? Isn't it related to the RealClosure as already implemented in Axiom?

APFPS is not the same as RealClosure. For some background material, see Lang: Algebra. Here I recall a few basic definitions. A field K is a real field if -1 is not a sum of squares in K. A field K is real closed if K is real, and any algebraic extension of K that is real must be K itself. Every real closed field has a unique ordering. The real closure of a real field K is an algebraic extension of K that is real closed. Every real field K has a real closure, and if K is ordered, its real closure is unique (up to a unique isomorphism). In a real closed field, every positive element is a square, and adjoining sqrt(-1) gives its algebraic closure. RealClosure implements computation in a real closure of a base field. In particular, it does not compute any element that is transcendental over the base field. RealClosure provides two modes of computation: one in terms of minimal polynomials, and one in which an interval containing the particular root is used for approximation.

From Bill Page:

Thank you. This clarifies the relation with infinite sequences of rational numbers, and also shows that my presumption of an isomorphism between the domain of infinite precision floating point numbers and the domain of rationals was wrong. In fact I think this subject, on the border between symbolic and numeric computation, is very relevant to Axiom and to computer algebra systems in general. For that reason, and since this is not really an issue about Axiom itself, I propose that we continue this discussion elsewhere, on another page. I've created a new page named RealNumbers. Please join me there. I hope others will also contribute to this subject.

Category: New feature request => Axiom Library