Monday, October 27, 2014

A Synopsis, at Length, Onto the Decimal Rationalization and Floating-Point Type Coercion Features of the Igneous-Math System

Storage of a Rational Pi 

Last week, after having discovered a rather simple methodology for translating a floating-point computational representation of the irrational number, pi, into a rational number with a decimal exponent, within a fixed-precision floating-point implementation -- a value which I denoted a rational pi, in a previous article here at my DSP42 web log -- I was all the more cheered when I discovered that there is also a methodology available for translating pi, or any other floating-point number, into a rational value within a machine's respective data space, namely via the ANSI Common Lisp rational and rationalize functions. On having discovered that simple feature of the ANSI Common Lisp standard, I was quite cheered to observe:
(rationalize pi)
=> 245850922/78256779
...as evaluated within the Common Lisp implementation in which I had performed that calculation, namely Steel Bank Common Lisp (SBCL) version 1.2.1 (x86-64) on Microsoft Windows. An equivalent value is produced, with that form, in SBCL 1.2.4 (x86-64) on Linux.

The behaviors of the rational and rationalize functions are defined, broadly, in ANSI Common Lisp, but -- of course -- such translation of floating-point values into ratio values is not so directly and explicitly standardized as IEEE floating-point arithmetic itself. Indirectly, however, given a value of pi calculated to the greatest available precision of floating-point values within a single Common Lisp implementation, one would assume that between two separate Common Lisp implementations -- both implementing the IEEE standards for floating-point arithmetic -- the form (rationalize pi) would return an equivalent rational value in each respective Common Lisp implementation.

Incidentally, for those of us not having a membership in the IEEE, the article What Every Computer Scientist Should Know About Floating-Point Arithmetic, by David Goldberg (XEROX PARC, 1991) -- i.e. [Goldberg1991] -- may serve to explain many of the principles represented in IEEE floating-point arithmetic, whether or not one has an accessible draft of the standards defining IEEE floating-point arithmetic.

Ratio as Rational Number

This might sound like some fairly dry or perhaps droll material to be writing about. Certainly, with so many implementations of IEEE floating-point arithmetic in computing, one might wish to simply accept the ulps and relative errors of the floating-point implementation, and set a software program to continue, nonetheless, along its axiomatic course. Certainly, by any stretch of arithmetic, 1/3 can never be represented exactly, when reduced to a binary floating-point value.

So, there's "the rub," essentially. The number 1/3 itself is a rational number -- having the corresponding mathematical accuracy of a rational number, per se -- though its floating-point representation is only an approximation of that rational number.

Conveniently, perhaps, ANSI Common Lisp defines a numeric type, ratio, disjoint from the numeric type, integer -- both of which, together, comprise the numeric type, rational.

Without making a comprehensive review of ANSI Common Lisp, one might simply observe that the Common Lisp ratio type is utilized in many mathematical operations within a Common Lisp implementation. For example, in a simple evaluation in SBCL, specifically:
(/ 1 3)
=> 1/3
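
As a further, minimal illustration -- assuming a conventional IEEE single-float implementation, such as in SBCL -- ratio arithmetic remains exact, where a single-float approximation of 1/3 would not:
(* 3 (/ 1 3))
=> 1

;; the single-float approximation of 1/3 is not equal to 1/3
(= 1/3 (float 1/3))
=> NIL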

Rational Magnitude and Degree

This short web log article, presently, will endeavor to illustrate a relevance of rational values as intermediary values within multi-step mathematical formulas. Rather than preferring a ratio-type rationalization of floating-point numbers, however, this article will develop a decimal exponent encoding for the rationalization of floating-point numbers.

This article will develop an example onto a methodology for an estimation of net circuit impedance within a single-source AC circuit comprised of resistive, inductive, and capacitive (R,L,C) circuit elements. In that methodology, firstly, each of the net inductive reactance, the net capacitive reactance, and -- if applicable -- the net parallel reactance is calculated, those values deriving from the respective net inductance, net capacitance, and single-source frequency in the circuit being analyzed. Then, one would apply the reactance values together with the resistance in the circuit, for calculating the net circuit impedance and its respective phase angle, through any exacting approach to the computation of those values, dependent on the nature of the exact circuit network. In a conventional methodology, the net circuit impedance so estimated may then be applied in estimating a net circuit current, and subsequently the current and voltage at individual circuit elements within the circuit being analyzed. At all points in such a calculation, it would be possible for the floating-point implementation -- whether of a hand-held calculator or a program system at a desktop computer -- to introduce its own natural errors, due to floating-point ulps and relative error, such as addressed in [Goldberg1991].

Without venturing to estimate whether such a methodology may be the best or most accurate methodology available for an estimation of net circuit impedance, presently it may serve as a simple example of a multi-step equation utilizing floating-point values.

At the outset of such a calculation, the value pi is encountered, namely for calculating the net inductive reactance and the net capacitive reactance in the circuit. For a series (R,L,C) circuit with a single-source AC power supply operating at a frequency of f hertz, the estimation -- as a calculation onto an imperfect mechanical domain, certainly -- may begin as follows, axiomatically:
net inductance (series) LT = L1 + L2 + ... + Ln

net capacitance (series) CT = 1 / (1/C1 + 1/C2 + ... + 1/Cn)

inductive reactance XL = 2 * pi * f * LT

capacitive reactance XC = 1 / (2 * pi * f * CT)
The reader will be spared the details of any further calculation or estimation of net impedance, by any single, known methodology, as would occur in an analysis of such types of electrical circuit -- the subject matter of such as the ECT-125 course, at DeVry University Online.

Presently, this article shall focus on this point: that a concern with regards to floating-point contagion is introduced as soon as in the calculation of each of XL and XC.

Towards a Methodology for Decimal Scaling in Rationalization of Numeric Input Values

Within the last two of the preceding equations -- each representing an implicit dyadic function, accepting values of f and, respectively, LT or CT -- if pi is represented as a floating-point number, or if f, LT, or CT is represented as a floating-point number, then of course each of XL and XC likewise would be calculated as a floating-point number. In a Common Lisp application, the resulting value would be of the greatest precision of the input values -- insofar as measuring the precision of individual floating-point numeric values.

If, instead, each input value, n, to the respective functions would be represented as a numeric object with a rational magnitude, m and an integer decimal degree, d, such that:
n = m * 10^d
...then those calculations may be performed entirely within the rational number line, wherein even the intermediary values may be stored as rational, decimal-shifted numeric objects (m, d). The respective calculations for each of inductive reactance and capacitive reactance rely on the simple mathematical operations of multiplication and division. In a Common Lisp application, if those calculations are conducted with values all of type cl:rational, then the result, in each, would be a numeric object of type cl:rational.
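
For illustration, a minimal sketch -- with hypothetical input values, namely f = 60 Hz, LT = 1/10 H, and CT = 1/100000 F, and with a rationalized pi -- in which each reactance value remains of type cl:rational:
;; hypothetical input values, represented as rationals
(let* ((f 60) (lt 1/10) (ct 1/100000)
       (rpi (rationalize pi))
       (xl (* 2 rpi f lt))          ;; inductive reactance
       (xc (/ 1 (* 2 rpi f ct))))   ;; capacitive reactance
  (values (rationalp xl) (rationalp xc)))
=> T, T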

Essentially, this methodology applies a decimal shift for integral values -- preferring a decimal base for the representation of floating-point values, contrasted to a binary base -- in a manner that serves to ensure that m and d would both be rational values, representative of any floating-point value n. This methodology would serve to ensure that a rational decimal value may be computed for any intermediary real number value, within a mathematical operation.

Furthermore, this methodology allows for a manner of scaling of n onto (m,d), into any alternate scale of (m,d) -- as per the decimal prefix values standardized in the Systeme Internationale (SI) -- to any scale for m other than the initial scale. For m being of type cl:integer, the value (m,d) may then be scaled simply by incrementing or decrementing d.
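
A minimal sketch of such a decimal scaling -- the function names being hypothetical, not those of the Igneous-Math system -- may be denoted as follows:
;; hypothetical sketch: represent a real value n as two values (m, d)
;; such that n = m * 10^d, factoring any power of ten out of an
;; integral magnitude; m may remain a ratio, as for n = 1/3
(defun make-decimal-scalar (n)
  (let ((m (rational n)) (d 0))
    (loop while (and (integerp m) (not (zerop m)) (zerop (mod m 10)))
          do (setq m (/ m 10) d (1+ d)))
    (values m d)))

;; decoding (m, d) back onto the rational number line
(defun decimal-scalar-value (m d)
  (* m (expt 10 d)))

(make-decimal-scalar 300)
=> 3, 2

(make-decimal-scalar 1/3)
=> 1/3, 0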

Of course, this in itself does not serve to eliminate the possibility of encountering a floating-point value in an arithmetic computation. For instance, in something of a trivial example, given:
n1 = 100 => (m1, d1) = (1, 2)
n2 = 300 => (m2, d2) = (3, 2)
then:
n1 / n2 = 1/3 => (m3, d3) = (1/3, 0)
The quotient of n1 and n2, in a floating-point representation, could only be an approximation of the rational value 1/3. In an implementation of the type, ratio, however, the quotient may be stored exactly as a ratio, rather than as a floating-point number.

In applying such a methodology -- as here, denoted onto a decimal base, base 10 -- it may be possible to eliminate floating-point values from within the intermediary values of any series of mathematical calculations. Any initial input values to a calculation may be converted into such decimal scalar values -- for each n, initializing a decimal scalar object storing n as a rational magnitude, m, with an integer decimal degree, d. Subsequently, all mathematical operations may be performed onto (m, d), as sketched below.
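
Towards that, a corresponding, minimal sketch -- again with hypothetical function names -- of multiplication and division onto two such decimal scalar values, in which the results remain rational:
;; hypothetical sketch: magnitudes multiply or divide; decimal
;; degrees add or subtract, respectively
(defun decimal-scalar-mul (m1 d1 m2 d2)
  (values (* m1 m2) (+ d1 d2)))

(defun decimal-scalar-div (m1 d1 m2 d2)
  (values (/ m1 m2) (- d1 d2)))

;; n1 / n2, per the example in the previous
(decimal-scalar-div 1 2 3 2)
=> 1/3, 0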

On a sidebar with regards to base two, i.e. binary scale encoding for numeric objects: Considering that a numeric value -- even an integer value -- within a software system would be encoded as a binary value, within the software system's data space, it may seem more advantageous to perform such a shift onto base 2, in keeping with the natural base of the containing data space. With the numeric value initially provided as a decimal digit sequence, however, the initial precision of the input value may be lost if the shift is not performed onto the input value's base, namely the decimal base, base 10.

Whereas the conventional scalar measurement prefixes standardized in the Systeme Internationale are also defined onto the decimal base, then of course the decimal-base scalar value, (m, d), may be effectively re-scaled for any prefix, with simply an adjustment applied onto d, proceeding to any instance of decoding the rationally scaled decimal value.
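
For example -- assuming the decimal scalar sketch in the previous -- a resistance of 4700 ohms, encoded as (47, 2), may be expressed onto the SI kilo prefix with an adjustment onto d, the effective magnitude onto that prefix being m * 10^(d - 3):
;; hypothetical sketch: express (m, d) relative to an SI prefix,
;; the prefix denoted by its decimal exponent, e.g. 3 for kilo
(defun decimal-scalar-for-prefix (m d prefix-degree)
  (values m (- d prefix-degree)))

(decimal-scalar-for-prefix 47 2 3)
=> 47, -1
;; i.e. 4.7 kilo-ohms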

Inequalities Resulting of Floating-Point Type Inconsistency in Arithmetic Operations

A prototypical example is provided, onto the numeric value, pi, and a fraction of pi, in a comparison onto a value functionally equivalent or ideally equivalent to the fraction of pi, the latter produced of the trigonometric transcendental function, atan:

(typep pi 'double-float)
=> T

(= (/ pi 4) (atan 1d0))
=> T

(= (/ pi 4) (atan 2d0 2d0))
=> T

(= (/ pi 4) (atan 2 2))
=> NIL
In the last instance, the call to atan produces a value of type single-float, which does not have the same floating-point precision as the double-float value, pi.
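
One manner of addressing that inequality, in such an instance, may be to coerce the integer arguments onto the double-float type, before the call to atan:
(= (/ pi 4) (atan (coerce 2 'double-float) (coerce 2 'double-float)))
=> T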

Thus, it might seem well to extrapolate that all values input to a mathematical system -- as via read -- should be type coerced -- in the semiotics of programming languages other than Common Lisp, essentially, cast -- to the type double-float, that being a floating-point type of some known precision, insofar as interacting with functions that accept a value of type float and that consistently produce a floating-point value, i.e. insofar as with regards to floating-point functions.

In coercing every input floating-point numeric value into a value of a single floating-point numeric type, it may be possible to ensure a greater degree of floating-point accuracy throughout a program system -- thus obviating some concerns with regards to floating-point contagion.

Notably, there is a feature defined in ANSI Common Lisp, namely the special variable, cl:*read-default-float-format*. In the conclusion of this article, an application of that variable is denoted for an extension to ANSI Common Lisp, in the form of an application of ANSI Common Lisp.

In Search of the Elusive CL:Long-Float Numeric Type

Presently, this article will introduce a sidebar: if this article may be interpreted from a perspective towards any Lisp implementation providing a long-float value type -- assuming that the latter may be implemented to a greater floating-point precision than the double-float numeric type -- then the reader may wish to transpose the term, long-float, onto every instance of the term, double-float, denoted in this document, thus implicitly preferring a floating-point numeric type of a greater floating-point precision.

Functions Accepting of Integer Values Only

Of course, not every function in the Common Lisp numbers dictionary would accept a value of type double-float. For instance, the function isqrt accepts values only of type integer. Inasmuch, it may be noted that -- in a prototypical regards -- the double-float value 1.0d0 is equivalent to the integer value, 1. If it would be thought necessary, a type coercion could be performed onto the double-float 1.0d0, to its integral value 1 -- with a measure for accuracy -- as within an overloaded math functions system, such as provided via Igneous-Math [math-ov], in the specific method dispatching provided by the Igneous-Math system, in its present revision (1.3).
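
For a minimal illustration -- not the method dispatching of the Igneous-Math system itself, and with a hypothetical function name -- such a coercion onto cl:isqrt may be sketched as follows, with a measure for accuracy:
;; hypothetical sketch: accept a double-float for cl:isqrt, coercing
;; onto an integer only when no accuracy would be lost in the coercion
(defun integral-isqrt (n)
  (multiple-value-bind (quotient remainder) (round n)
    (unless (zerop remainder)
      (error "~S is not equivalent to an integer value" n))
    (isqrt quotient)))

(integral-isqrt 4.0d0)
=> 2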

Towards a Numeric Equivalence Between Numerically Rationalized Constants and Values Returned of Transcendental Functions

An  ideal fraction of pi, pi/4, represents an ideal angle of exactly 45 degrees -- essentially, one eighth of  the radial angle of 2pi, for 2pi representing the total, singular measure of the radial angles in a unit circle.  

Of course, when pi must be implemented as a measurement within a computational system -- as effectively limited, then, to the precision of the floating-point numeric features of the computational system -- then the definition of an ideal angle onto pi becomes effectively entangled with the definition of that same floating-point implementation. Considering that all systems implementing the IEEE floating-point standard would produce equivalent values for pi -- as when provided with a singular, conventionally accepted reference value for pi -- one may expect a sense of accuracy between the respective floating-point implementations, for fractions onto pi, as much as for any floating-point operations conforming to the IEEE floating-point standard. Presently, one might not proceed, at length, to denote any of the possibly nondeterministic qualities of such a relatively simple assertion with regards to floating-point accuracy.

The Common Lisp rational and rationalize functions may be applied, in effect, to limit the number of possible instances of errors resulting of floating-point ulps and relative error conditions -- literally, then, to ensure that any single floating-point value would be type coerced to an appropriate rational value, as soon as the floating-point value is introduced to a mathematical system. Such an application may also be met with questions, however, namely: whether to apply cl:rational or, alternately, to apply cl:rationalize, and whence?

An illustration follows, applying the functions cl:rationalize and cl:rational separately, onto each of: (1) pi, (2) a fraction of pi, and (3) a value returned by cl:atan, in an evaluation of cl:atan that produces -- essentially -- a fraction of pi, namely towards an ideal value of pi/4 radians, i.e. 45 angular degrees.
;; double-float => rationalized rational => equivalent
(= (rationalize (/ pi 4d0)) (rationalize (atan 2d0 2d0)))
=> T

;; double-float => rationalized rational => equivalent
(= (rationalize (/ pi 4)) (rationalize (atan 2d0 2d0)))
=> T ;; pi being of a double-float type

;; With a rationalized pi applied internal to the
;; fractional form, however, the fractional form   
;; is then unequal to its approximate functional equivalent.
;; Ed. Note: This example was developed with SBCL. ECL presents a different result 
;; than illustrated, certainly as resulting of subtle differences in implementation of
;; the function, cl:rationalize
(= (/ (rationalize pi) 4) (rationalize (atan 2d0 2d0)))
=> NIL

;; In applying cl:rational, as internal to the fractional form,
;; as well as applying cl:rational onto the value returned by
;; the atan function, the two values are then equivalent.
(= (/ (rational pi) 4) (rational (atan 2d0 2d0)))
=> T

With regards to distinctions between the functions, cl:rational and cl:rationalize, the wording presented in the Common Lisp HyperSpec (CLHS) may seem somewhat ambiguous, in one regard, as towards a question: What is a completely accurate floating-point value? The question might seem tedious, perhaps, as it may be a difficult question to develop an accurate answer unto. In such regards, of course, one might revert to observing the behaviors of individual Common Lisp implementations.

Original emphasis retained, with emphasis added, in quoting the CLHS dictionary for the functions cl:rational and cl:rationalize:
"If number is a float, rational returns a rational that is mathematically equal in value to the float. rationalize returns a rational that approximates the float to the accuracy of the underlying floating-point representation.
"rational assumes that the float is completely accurate.
"rationalize assumes that the float is accurate only to the precision of the floating-point representation."
One might extrapolate, from that summary, that it might be advisable to apply cl:rationalize, throughout. In implementations in which the precision of the Lisp implementation's floating-point number system is governed, effectively, by the IEEE floating-point standard, certainly one could assume that the values produced with cl:rationalize -- specifically, of cl:rationalize applied onto double-float values -- would be consistent across all such implementations, as when the behaviors of cl:rationalize are governed expressly by the underlying floating-point implementation, and the underlying floating-point implementation is consistent with the IEEE standard for floating-point arithmetic.

To where, then, may one relate the disposition of cl:rational? This article proposes a thesis: that if the IEEE standard for floating-point arithmetic is accepted as the industry standard for floating-point arithmetic, and if the IEEE standard may represent an implementation of floating-point arithmetic both to an acceptable precision and to an acceptable accuracy -- in computations within and among individual IEEE floating-point implementations -- then, in a sense, perhaps the IEEE floating-point standard may be denoted as "the nearest thing possible to a completely accurate floating-point implementation"? Of course, that might be a mathematically contentious assertion, though it might anyhow represent a pragmatically acceptable assertion, towards assumptions of precision and accuracy within computational systems implementing and extending Common Lisp.

In a practical regards: Considering the example presented in the previous -- with regards to an ideal fractional value of pi, namely an ideal pi/4, and its representation in a single Common Lisp system, both (1) via a ratio onto pi and (2) in a numeric rationalization of an ideally equivalent value returned via the transcendental trigonometric function, atan -- it may seem that it would instead be advisable to apply cl:rational, consistently and directly onto floating-point values, in all instances when a floating-point value is produced -- whether produced via input or via a floating-point calculation -- within a Common Lisp program system.
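
Towards such an application of cl:rational, a minimal sketch -- the function name being hypothetical, not an interface of the Igneous-Math system -- onto the transcendental trigonometric function, atan:
;; hypothetical sketch: an interface onto cl:atan, coercing the
;; arguments onto double-float and rationalizing the return value
;; with cl:rational
(defun rational-atan (y x)
  (rational (atan (float y 1d0) (float x 1d0))))

(= (/ (rational pi) 4) (rational-atan 2 2))
=> T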

Concerning the Elusive cl:long-float Numeric Type

ANSI Common Lisp defines a type, cl:long-float, with a minimum precision and a minimum exponent size equivalent to those qualities of the type, cl:double-float. On a sidebar, if one may wish to develop a further understanding of the relevance of those values within a floating-point numeric implementation, certainly one may consult any amount of "existing work," such as, conveniently, [Goldberg1991].

In a practical regards, it might be assumed that most Common Lisp implementations would implement or would emulate -- respectively -- the C data types, float and double [Wikipedia], for the Common Lisp cl:single-float and cl:double-float types.

Common Lisp implementations might also endeavor to implement or to emulate the C long double numeric type [Wikipedia, ibid.], as extending of the IEEE standards for floating-point arithmetic [Wikipedia]. Conceivably, the C long double numeric type could be implemented or emulated with the elusive cl:long-float numeric type, and then extended with so many compiler-specific optimizations -- such optimizations, perhaps, providing a veritable "icing on the rational pi" within Common Lisp applications.

Towards Subversion of Floating Point Contagion

Concerning the behaviors of floating-point numeric coercion within implementations of ANSI Common Lisp, it might be advisable for an implementation of a mathematical system to prefer the most precise floating-point implementation available, within a Common Lisp implementation. Conventionally, that would be the type, double-float.

In a simple analysis of possible sources of numeric values within a Common Lisp application, essentially a numeric value may be introduced into an application either via input or via calculation. Concerning calculation of numeric values in Common Lisp, it would be assumed that for any calculation accepting a floating-point value, the result of the operation would be of the same precision as the highest-precision input value -- with however much numeric accuracy, across possible floating-point contagion, in instances of floating-point coercion.

In regards to how a numeric value may be input into a Common Lisp program, one might denote at least three possible, essentially distinct input sources:
  • REPL - The read/eval/print loop (REPL)
  • Source file
  • Compiled, "Fast loading" i.e. FASL file
In regards to the first two of those input sources, one would assume that the type of any floating-point numeric values would be selected as per the value of cl:*read-default-float-format*. It might seem, then, as if that were sufficient to ensure that all floating-point values introduced via input to a mathematical system would be of an expected floating-point format, perhaps exactly the floating-point format specified in cl:*read-default-float-format*.
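
For illustration, the effect of cl:*read-default-float-format* onto a value read from a string -- as would likewise apply onto values read from a source file or at the REPL; the exact presentation of the type name may vary, by implementation:
(let ((*read-default-float-format* 'single-float))
  (type-of (read-from-string "3.14")))
=> SINGLE-FLOAT

(let ((*read-default-float-format* 'double-float))
  (type-of (read-from-string "3.14")))
=> DOUBLE-FLOAT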

Without venturing into any manner of a lengthy analysis of implementation-specific compiler behaviors and implementation-specific FASL encoding schemes, it might be denoted that a Lisp implementation may encode a floating-point value -- as when writing objects into an object form in a FASL file -- as being of a specific format not functionally equivalent to the value of cl:*read-default-float-format* at any instantaneous time. Similarly, it may be possible for a Lisp implementation to read a numeric value from a FASL file without the value being type coerced according to cl:*read-default-float-format*.

Conclusion: Specifications for Floating-Point Type Coercion, Decimal Rationalization, and Default Floating-Point Format for the Lisp Reader

So, if a mathematical system were to endeavor to ensure that all floating-point mathematical operations performed within the system would be performed onto the highest-precision floating-point data type available in the implementation, then -- although it may serve to introduce an additional set of instructions into functional forms within that mathematical system -- the mathematical system, in interfacing with any function consistently returning a floating-point numeric type, may endeavor to consistently type coerce numeric values to the highest-precision floating-point data type available, insofar as in any direct interfaces onto such floating-point functions, such as the transcendental trigonometric functions in Common Lisp. Of course, this might serve to introduce some errors with regards to floating-point type coercion, while it would at least serve to address any errors as may occur when comparing values derived of differing floating-point precision.

It may be a matter of a further design decision, then, as to whether the exacting floating-point format would be specified statically, at compile time, or would default to the value of cl:*read-default-float-format* at time of evaluation. Certainly, both behaviors may be supported -- the exact selection of which could be governed with specific definitions of cl:*features* elements, as may be applied within the source code of the system.

That, in itself, may not serve to obviate some concerns with regards to floating-point ulps and rounding errors, however. So, it might furthermore be specified -- in so much as of an axiomatic regards, certainly -- that in an implementation of a mathematical object system, vis-a-vis Igneous-Math, any procedure interfacing directly with floating-point mathematical operations in Common Lisp would return a numeric value directly rationalized with cl:rational -- referencing the example denoted in the previous, with regards to a rational fraction of pi and a numeric value functionally equivalent, or ideally equivalent, to a fraction of pi, as calculated via the transcendental trigonometric function, atan, onto any specific floating-point precision, the return value then being numerically rationalized with either cl:rational or cl:rationalize.

Thirdly, for ensuring a manner of type consistency of values input via source files and input via the Common Lisp REPL, the mathematics system may endeavor to apply a floating-point numeric type of the highest available precision in any single Lisp implementation -- as towards the value of cl:*read-default-float-format*. Certainly, an application may endeavor to avoid setting the value of cl:*read-default-float-format* itself, however. An application may instead signal a continuable error, during application initialization, when the user-specified value of cl:*read-default-float-format* would not be of the greatest available numeric precision. That continuable error, then, may be defined as to allow the user to select the highest-precision floating-point numeric type and to set the value of cl:*read-default-float-format* oneself, from within any of the continuation forms defined, then, in the single continuable error.
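
A minimal sketch of such a check, applying cl:cerror during application initialization -- assuming, for this sketch, that double-float denotes the greatest available precision, and with a hypothetical function name:
;; hypothetical sketch: signal a continuable error when
;; cl:*read-default-float-format* does not denote the preferred,
;; highest-precision floating-point type
(defun check-default-float-format (&optional (preferred 'double-float))
  (unless (eq *read-default-float-format* preferred)
    (cerror "Set cl:*read-default-float-format* to the preferred type and continue"
            "The value of cl:*read-default-float-format*, ~S, is not ~S"
            *read-default-float-format* preferred)
    (setq *read-default-float-format* preferred)))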

Sidebar in Tangent: Design of the MCi Dobelle-App System

Of course, the previous paragraph may also serve to denote that there would be a definition of an application object, as well as a definition of an initialization phase in the definition of an application object -- towards a sidebar onto the dobelle-app system, in all its colloquial name, within the MetaCommunity group at Github.

For "Later Study," Irregularity in Fractional Rationalized Pi onto a Rationalized, Functionally Equivalent Value Computed By atan 

The following software form presents another interesting example with regards to irregularities in floating point arithmetic and rationalization of floating-point values. A study of the cause of the difference between the values of rpi4 and rtpi4, of course, may be other than trivial. No doubt, such a study may entail a study directly of the implementation of the transcendental trigonometric functions, in GNU LibC.

This example was produced, originally, with SBCL 1.2.4 on Linux (x86-64), and has been tested also with CCL 1.9-r15757 on Linux (x86-64). Considering the consistency of IEEE floating point implementations, an equivalent set of return values is calculated in each implementation.
(let ((fpi4 (/ pi 4d0)) 
      (ftpi4 (atan 2d0 2d0))
      (rpi4 (/ (rationalize pi) 4))
      (rtpi4  (rationalize (atan 2d0 2d0))))
  (values fpi4 ftpi4 
   rpi4 rtpi4 
   (float rpi4 pi) (float rtpi4 pi)
   (= rpi4 rtpi4)))

=> 0.7853981633974483d0,
      0.7853981633974483d0,
      122925461/156513558,
      101534659/129277943,
      0.7853981633974483d0,
      0.7853981633974483d0,
      NIL

Notably, if the function cl:rational is applied instead of cl:rationalize, the values calculated for each of rpi4 and rtpi4 are, then, exactly equivalent, and the last return value, then, is T -- as illustrated below.
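
For reference, the corresponding form applying cl:rational -- with return values as would be expected, given the equality illustrated in the previous:
(let ((rpi4 (/ (rational pi) 4))
      (rtpi4 (rational (atan 2d0 2d0))))
  (values rpi4 rtpi4
          (float rpi4 pi) (float rtpi4 pi)
          (= rpi4 rtpi4)))

=> 884279719003555/1125899906842624,
      884279719003555/1125899906842624,
      0.7853981633974483d0,
      0.7853981633974483d0,
      T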

Appendix: Floating-Point and Non-Floating-Point Functions Defined in ANSI Common Lisp

The following functions, for any input values, will consistently return a floating-point value:
  • ffloor
  • fceiling
  • ftruncate
  • fround
The following functions may return a rational value -- as may be tested, in each implementation -- as per the Rule of Float Substitutability, in ANSI Common Lisp. When consistently returning a floating-point value, these functions may be regarded as in the same set as the previous -- this being an implementation-specific quality:
  • sin
  • cos
  • tan
  • asin
  • acos
  • atan
  • sinh
  • cosh
  • tanh
  • asinh
  • acosh
  • atanh
  • exp
  • expt
  • log
  • sqrt

Onto floating-point complex numbers:
  • cis
A set of functions that accept integer values exclusively, and that would not return floating-point values -- this mathematics system not addressing ash, ldb, or dpb:
  • gcd
  • lcm
  • isqrt
  • evenp
  • oddp
Towards consistency onto the decimal rationalization policy of the Igneous-Math system: within interfaces to those functions that return real numeric values, but that do not or may not return integer numeric values, those functions' direct return values should be effectively transformed with a call to cl:rational, before return -- as sketched below.
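
A minimal sketch towards such an interface -- the macro and function names being hypothetical, and real-valued arguments being assumed:
;; hypothetical sketch: define interfaces onto floating-point
;; functions, rationalizing each real-valued result with cl:rational
(defmacro define-rationalized (name op)
  `(defun ,name (&rest args)
     (rational (apply #',op args))))

(define-rationalized rational-sqrt sqrt)
(define-rationalized rational-expt expt)

(rationalp (rational-sqrt 2))
=> T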

Notably, this quality of the Igneous-Math system is designed with an emphasis towards numeric accuracy before minimization of object-system garbage-collection calls -- as with regards to a consistent discarding of floating-point values, in procedural transformation onto decimal-rationalized values.

Of course, it may be advantageous towards possible re-use of the software of the Igneous-Math system, if the decimal rationalization policy would be implemented as a compile-time option, in the Igneous-Math system. 

At this point, the author's attention is returned to the documentation toolchain.
