
Monday, September 28, 2015

Calculating with CORBA – A Showy Prognostication Of Rational Numbers, Computations, and Communications in Application Systems

Reading over an embarrassingly obtuse description that I had written, one day in another place, about a software system that I was referring to as igneous-math, I would like to note, henceforward, that the article was largely an effort to describe a simple sense of novelty about the 'ratio' type as implemented in Common Lisp. Without immediately producing a study of IEEE standards for representing floating-point decimal values in computing systems, and without any lengthy sidebar on the concept of fixed-point arithmetic as developed in Forth, I recall that my initial thesis had centered on the very limits of finite data registers in computational machines.

In the following article, the author will endeavor again to sidestep any specific consideration with regards to computing hardware, and instead to focus on some concepts of communications as developed with regards to CORBA.

Assuming a fixed-size data register in any single numeric computation – whether of a 24 bit, 32 bit, 64 bit, or other bit length of numeric encoding – any number may be represented only to a precision bounded by the size of the data register in which the number is encoded.

This article will not go to any length to juxtapose different methods of discrete digital encoding of numeric values – whether for encoding of natural numbers, signed integers, unsigned integers, fixed-point decimal values, floating-point decimal values, numeric ratios, or complex numbers. Excepting ratios and complex numbers, each of those topics may be referenced immediately onto the Common Data Representation (CDR) format, as standardized in specifications about the Common Object Request Broker Architecture (CORBA). Though that may not serve to describe such topics in any manner of comprehensive detail, referencing these topics onto the CDR encoding may at least serve to provide a common point of reference – in regards to numeric computing – principally orthogonal to the implementation of any single programming language or programming architecture.

The CORBA specifications are publicly described in core specification documents, extended in any number of supplemental specification documents published by the Object Management Group (OMG), pragmatically augmented with domain-specific case studies in any discrete number of documents in or outside of the OMG specifications set, implemented in any number of development tools, and applied in any singular set of software products. Broadly, CORBA provides a platform-agnostic framework for applications, such as may be developed to extend any number of fundamental CORBA object services – software components interacting via CORBA object services at an application layer and, as on a TCP/IP network, employing methodologies of transport semantics at the session (cf. SECIOP) and presentation (cf. ZIOP) layers, in a view of networked applications projected onto the conventional seven-layer OSI model, as applied onto TCP/IP networking.

In such a context, CORBA serves to provide a single, standardized, baseline object service architecture, such as may be augmented by supplemental CORBA object service specifications.

In regards to applications utilizing CORBA IIOP – IIOP being the Internet Inter-ORB Protocol, principally an extension of the General Inter-ORB Protocol (GIOP) – applications may apply the Common Data Representation (CDR) format for stream encoding of data values, such as may be reversibly encoded and subsequently decoded onto protocol request sockets and protocol response sockets in a context of CORBA IIOP.

Setting aside an orthogonal concern with regards to encoding of object references, the CDR encoding format provides for an encoding of atomic, primitive numeric values – an encoding standardized essentially onto stream objects, principally in a manner independent of any single microcontroller implementation, but dependent on a stream-oriented communications medium. The CORBA architecture may serve to encapsulate much of the nature of the stream-based encoding of data values in IIOP, but inasmuch as an IIOP application utilizes TCP sockets for its implementation, the IIOP implementation therefore utilizes a stream-based encoding. Whether GIOP may be extended, alternately, to develop any more computationally limited encoding – perhaps an encoding for protocol data values onto an I²C bus, as may be applied within a light-duty parallel computing framework, or alternately, an encoding method for GIOP onto shared memory segments within a process model of any single operating system – the CDR encoding onto IIOP serves to provide a standard encoding for atomic data values, with CDR moreover providing a basis for encoding of object references and any number of application-specific data values, in CORBA applications.

Thus, as towards developing a platform-agnostic view of applications of computing machines, it may be fortuitous to make reference to the CORBA architecture – even in so limited a regard as in reference to the CDR encoding for primitive numeric values.

Concerning the CDR encoding for any single data value onto a communication stream, the CDR encoding may be observed as being compatible with many machine-specific methods for encoding of data values onto finite memory registers – data values as may likewise be transmitted across finite processor bus channels within any single computing machine. Certainly, the communication of a data value – within a CORBA application framework – would not cease at the transmittal of the data value to or from an IIOP application socket.

From the perspective of an implementation of CORBA, the implementation's own concerns – with regards to communication services – might not extend much further than providing an interface for encoding and decoding of data values onto CORBA data channels, in a manner orchestrated with CORBA object adapters. The trope that CORBA is for application as middleware services – aside from any suggestion, to which the author may take some exception, that CORBA is not trendy any more – does not serve to say much, at all, as to how a computing machine may apply any single data value, once a data value is transmitted over a CORBA object services network. Naturally, CORBA is a feature of a communications domain. In a manner of a domain-oriented view, CORBA dovetails with the domain of computer operating systems and the domain of microcontroller design – inasmuch as a microcontroller provides computing services to an operating system, and an operating system may be applied to provide computing services to a CORBA application.

How, then, is UNIX not relevant to CORBA applications?

The author of this article is immediately preoccupied with developing a thesis with regards to the Common Lisp programming language, the FreeBSD operating system, and the potential for applications of CORBA in networked computing – though perhaps it may seem as though the author is simply distracted by some few archaic trends in the social universe. The author would not propose any too narrowly deterministic model of such a thesis, whether or not it may be easily proved to be a computationally sound and comprehensive architecture for computing in any arbitrary networked systems environment. Common Lisp has perhaps not been a great favorite for applications programming, however it may momentarily serve an academic end, in a manner of thesis development. It might be estimated, moreover, that most theses developed about Common Lisp would be developed, likewise, as theses about concepts of artificial intelligence – logically, with regards to theses describing applications of logic in mathematics and in computing systems. Perhaps it has not been since the 1970s and 1980s that Lisp development was shaped by microprocessor development. Across the popular AI Winter of the era, perhaps the old scrolls have not all been lost to the shifting ice of the populist glacier.

As though lost in the library of historic record – AI Memo 514 is certainly no manner of a Cardiff Giant. It may be a memo of an academia in which the plain logic of microprocessor design was paramount – but perhaps the contemporary computing market may seem all preoccupied with so many novel products of contemporary microprocessor manufacture, if not with applications of the newest high-speed microcontrollers and software designs in all the novel, new commercial systems. In regards to how so many novel, new toys are applied to present any manner of noise to contemporary engineering trades – if it may be simply a manifestation of a lot of marketing fluff atop a lot of repetitive iterations in social development, how novel is a simple endeavor to disembark from the salubrious consumerism and recover whatever remains of the sciences beside the grand industrial merry-go-round?

It might seem as if it was only a metaphor of a brass ring, a token, a trinket grabbed in a singular merry-go-round's long spin. Is it a metaphor of a trophy, then, ere it is returned to the vendor's brass ring dispenser? If there is not a metaphysics of such a commercial mechanics, perhaps Whitehead's thesis really ever was disproved? May the circus resume, forthwith, if there is no more of a meaning beyond so many singular objects of materialism.

Before returning to this thesis, the author denotes a concept of pi in which pi is calculated as a ratio.

Though perhaps not all definitions of number would include ratio as a type of number, insofar as a ratio may be defined as a type of rational number, it may be a type of number of some relevance for application in computations onto other types of rational number. Surely, then, between concerns of computation and of measurement, some variance in measurement may entail some variance in computation. In regards to variance of physical measurements in physical systems – considering that a discrete measurement, a measurement of any estimable precision, may be applied in a computational model, as towards calculating any intermediate numeric values as may be, lastly, applied in a physical analysis or physical design of a system – though it may seem practically naive to assume a perfect measurement may be provided to any computational system, insomuch as a computational system may produce a predictable set of consequent computations, given a known set of initial measurements and a discrete formula for calculations, a computational system should not itself be assumed to operate at variance from any set of initial measurement values.

If there may seem to be a commercial novelty of the Common Lisp functions 'rational' and 'rationalize', those two functions alone may be applied as a computational entry point – each, separately so – as to perform all intermediate calculations within a computing machine as calculations with rational numbers. There may be, in that, likewise a toss of the hat with regards to fixed-point numeric calculations in Forth. Going further, moreover, any immediately fixed-point numeric value may be converted to a ratio numeric value, as to conduct any subsequent calculations wholly onto rational numbers, whether or not producing a decimal number for numeric denotation as a consequent value.
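
By way of illustration – a minimal sketch, assuming SBCL or any conforming implementation, the values being merely illustrative – a floating-point measurement may be rationalized at the system boundary, all intermediate arithmetic then conducted onto rational numbers, with a decimal produced only at the point of denotation:

(let* ((measurement (rationalize 0.125d0)) ;; => 1/8
       (result (* measurement 3/4)))       ;; all-rational arithmetic
  (float result 1.0d0))                    ;; decimal denotation, lastly
=> 0.09375d0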

It is not unprecedented: That a computing machine may usefully produce calculations deriving of rational numbers. In an immediate manner, a computer of a finite memory may not actually contain an irrational number, except insofar as an irrational number is encoded of rational quantities. A concept of irrational number should not be removed from number theory, of course! It is simply not a concept occurring in computations within finite memory spaces.

It is not merely to make a drab show, therefore, if a rational number may be calculated as estimating a built-in decimal value of pi as a rational numeric constant. Such a constant, rational value may be applied in intermediate computations wholly onto rational numbers. The rational numbers computer may only need to convert any decimal input values into a ratio format, then to produce a decimal representation – to any single consequent precision – of any numbers calculated finally of the rational computation.
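
For instance – a sketch assuming SBCL, the constant name being hypothetical – a rational estimate of pi may be stored once, then applied throughout intermediate computations:

(defconstant +rational-pi+ (rationalize pi))
;; => 245850922/78256779, in SBCL
(float (* 2 +rational-pi+) 1.0d0) ;; a decimal produced only at output
=> 6.283185307179586d0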

Of course, this does not indicate any trivial methodology for rational numeric computations onto transcendental functions. Insofar as the trigonometric transcendental functions may be derived of a rational estimate of pi, as on a unit circle, there may be a channel for computations with rational numbers in those transcendental functions. The author, regrettably, is not as immediately familiar with calculations of Euler's constant.

Towards a representation of ratio numeric values and complex numeric values – no mere propaganda, the Common Lisp type system providing definitions of such numeric types, in computing – of course, it may be a trivial affair to define an IDL module, an IDL interface, and finally an IDL data type in which a platform-agnostic definition of each of the ratio or complex number numeric types may be provided, in a computing system. That being denoted as towards a consideration of a transport-level encoding of numeric values, it may not seem to say much as towards an algorithmic, if not functional, view of computing.
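
As a minimal sketch – the module and member names here being entirely hypothetical, no part of any OMG specification – such an IDL definition might resemble the following:

module IGNEOUS {
  // a platform-agnostic ratio type: numerator over denominator
  struct Ratio {
    long long numerator;
    unsigned long long denominator;
  };
  // a complex number, represented as a pair of ratios
  struct ComplexRational {
    Ratio real_part;
    Ratio imag_part;
  };
};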

This article, presently, returns to a thesis about CORBA.

Of course, CORBA IDL itself is not the computing element of a CORBA system. Inasmuch as an implementation of a CORBA IDL module may be developed in an application programming language, then applied within an object adapters system, CORBA IDL may serve something of a complete, modular role in regards to communications among computing systems. This article has denoted, albeit somewhat obliquely, the CDR encoding for data values in an IIOP framework. Likewise, the subclauses of the CORBA core specifications in which CDR and IIOP are described may be said to amount to much of a description of a platform-agnostic communications model, defined in a manner for application within a broader, platform-agnostic object services framework as may be denoted, broadly: CORBA.

May it be, then, as if to limit the potential applications of CORBA frameworks, if the computational features of CORBA systems may not often be found described in as much specific detail as the CORBA communications model itself? One may wish to speculate as to how the platform-agnostic nature of CORBA specifications may seem to effectively obfuscate any single CORBA implementation from immediate duplication, within limits of immediate licensing agreements. In some regards, such a coincidence could be quite fortuitous, overall. Inasmuch as CORBA may find an application in defense systems, such applications should probably not be danced out for any idle opportunism, however any CORBA applications may be momentarily denoted of a popular discourse. Analogously, CORBA may be estimated to find applications in some social infrastructure systems – as may seem estimably possible, in consideration of specifications defining platform-agnostic applications of CORBA in telecommunications systems – but this says almost nothing with regards to any manner of actual computing systems. Not to offend the reader with the author's own simple obtuseness: a communications specification is not a microprocessor, just as much as a pie plate is not a peach pie.

Sure, a discrete number of peaches may be applied in preparing any discrete number of peach pies, at any estimable formula of efficiency and other results. Analogously, the CORBA specifications are not like recipes for microcontroller applications, either. (Ed. note: There should be a cute metaphor to Hexbugs(R), here. Though it might seem to startle a reader, if juxtaposed to the fantastic, fictional Replicators of the Stargate SG-1 universe, in science fiction literature, the metaphor of a microchip-sized automaton of a bounding character – outside of any too Arachnid-like species of science fiction creature – the simple ant-like Hexbug Nano(R), as a metaphor, could be apropos.)

So, a student of computing might feel in some ways stymied by the communications-oriented CORBA specifications. The proverbial magic of CORBA may not be so much in the proverbial network chatter of a CORBA object system.

Monday, October 27, 2014

A Synopsis, at Length, Onto the Decimal Rationalization and Floating-Point Type Coercion Features of the Igneous-Math System

Storage of a Rational Pi 

Last week, after having discovered a rather simple methodology for translating a floating-point computational representation of the irrational number pi into a rational number with a decimal exponent, within a fixed-precision floating-point implementation in a software system -- which I denoted a rational pi, in a previous article here at my DSP42 web log -- I was all the more cheered when I discovered that there is also a methodology available for translating pi, or any other floating-point number, into a rational value within a machine's respective data space, namely as represented with the ANSI Common Lisp rational and rationalize functions. On having discovered that simple feature of the ANSI Common Lisp standard, I was quite cheered to observe:
(rationalize pi)
=> 245850922/78256779
...as within the Common Lisp implementation in which I had performed that calculation, namely Steel Bank Common Lisp (SBCL) version 1.2.1 (x86-64) on Microsoft Windows. An equivalent value is produced, with that form, in SBCL 1.2.4 (x86-64) on Linux.

Though the behaviors of the rational and rationalize functions are defined, broadly, in ANSI Common Lisp, such translation of floating-point values into ratio values is -- of course -- nothing so directly and explicitly standardized as IEEE floating-point arithmetic. Indirectly, however, given a value of pi calculated to the greatest possible precision of floating-point values within a single Common Lisp implementation, one would assume that between two separate Common Lisp implementations -- both implementing the IEEE standards for floating-point arithmetic -- the form, (rationalize pi), would return an equivalent rational value, in each respective Common Lisp implementation.

Incidentally, for those of us not having a membership in the IEEE, the article What Every Computer Scientist Should Know About Floating-Point Arithmetic, by David Goldberg (XEROX PARC) (1991) -- i.e. [Goldberg1991] -- may serve to explain many of the principles represented in IEEE floating-point arithmetic, candidly whether with or without an accessible draft of the standards defining IEEE floating-point arithmetic.

Ratio as Rational Number

This might sound like some fairly dry, or perhaps droll, material to be writing about. Certainly, with so many implementations of IEEE floating-point arithmetic, in computing, one might wish to simply accept the ulps and relative errors of the floating-point implementation, and set a software program to continue, nonetheless, along its axiomatic course. Certainly, by any stretch of arithmetic, 1/3 can never be represented exactly, when reduced to a floating-point value.

So, there's "the rub," essentially. The number 1/3 itself is a rational number -- having the corresponding mathematical accuracy of a rational number, per se -- though its floating-point representation is not exactly equal to 1/3.

Conveniently, perhaps, ANSI Common Lisp defines a numeric type, ratio, disjoint from the numeric type, integer -- both of which, together, serve to fulfill the set of all types of rational number.

Without making a comprehensive review of ANSI Common Lisp, one might simply observe that the Common Lisp ratio type is utilized in many mathematical operations within a Common Lisp implementation. For example, in a simple evaluation in SBCL, specifically:
(/ 1 3)
=> 1/3
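
Correspondingly, the numeric type relations may be observed directly -- a brief illustration, assuming any conforming implementation:

(typep 1/3 'ratio)    => T
(typep 1/3 'rational) => T
(typep 3 'rational)   => T   ;; integers are rational numbers, likewise
(typep 1/3 'integer)  => NIL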

Rational Magnitude and Degree

This short web log article, presently, will endeavor to illustrate a relevance of rational values as intermediary values within multi-step mathematical formulas. Rather than preferring a ratio-type rationalization of floating-point numbers, however, this article will develop a decimal-exponent encoding for rationalization of floating-point numbers.

This article will develop an example onto a methodology for an estimation of net circuit impedance within a single-source AC circuit comprised of resistive, inductive, and capacitive (R,L,C) circuit elements. Namely: firstly, each of net inductive reactance, net capacitive reactance, and -- if applicable -- net parallel reactance is calculated, those values deriving from the respective net inductance, net capacitance, and single-source frequency in the respective circuit being analyzed. Then, one would apply the reactance value together with the resistance in the circuit, for calculating the net circuit impedance and its respective phase angle, through any exacting approach to the computation of those values, dependent on the nature of the exact circuit network -- thus, in a conventional methodology, ultimately estimating a net circuit impedance, such that may then be applied in estimating a net circuit current, subsequently in estimating the current and voltage at individual circuit elements within the circuit being analyzed. At all points in such a calculation, it would be possible for the floating-point implementation to introduce its own natural errors -- whether of a hand-held calculator or a program system at a desktop computer -- as due to floating-point ulps and relative error, such as addressed in [Goldberg1991].

Without venturing to estimate whether such a methodology may be the best or most accurate methodology available for an estimation of net circuit impedance, presently it may serve as a simple example of a multi-step equation utilizing floating-point values.

At the outset, in such a calculation, the value pi is encountered, namely for calculating the net inductive reactance and net capacitive reactance in the circuit. For a series (R,L,C) circuit with a single-source AC power supply operating at a frequency of f hertz, the estimation -- as a calculation onto an imperfect mechanical domain, certainly -- may begin as follows, axiomatically:
net inductance (series): LT = L1 + L2 + ... + Ln

net capacitance (series): CT = 1 / (1/C1 + 1/C2 + ... + 1/Cn)

inductive reactance: XL = 2 * pi * f * LT

capacitive reactance: XC = 1 / (2 * pi * f * CT)
The reader will be spared the details of a further calculation or estimation of net impedance, by any single, known methodology, as would be in an analysis of such types of electrical circuit -- the subject matter of such as the ECT-125 course, at DeVry University Online.

Presently, this article shall focus on this: that a concern with regards to floating-point contagion is introduced as soon as in the calculation of each of XL and XC.
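
As an aside -- a minimal sketch, assuming SBCL, the function names being hypothetical and not any part of the Igneous-Math system -- those reactance formulas may be evaluated wholly on the rational number line, given rationalized input values:

(defun inductive-reactance (f lt)
  ;; XL = 2 * pi * f * LT, onto a rationalized pi
  (* 2 (rationalize pi) (rational f) (rational lt)))

(defun capacitive-reactance (f ct)
  ;; XC = 1 / (2 * pi * f * CT)
  (/ 1 (* 2 (rationalize pi) (rational f) (rational ct))))

;; e.g. a 60 Hz source, 10 mH net inductance, 100 uF net capacitance
(inductive-reactance 60 1/100)                  ;; => a ratio, in ohms
(float (capacitive-reactance 60 1/10000) 1.0d0) ;; => approx. 26.5258d0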

Towards a Methodology for Decimal Scaling in Rationalization of Numeric Input Values

Within the last two of the preceding equations -- each representing an implicit dyadic function, accepting values of f and, respectively, LT or CT -- if pi is represented as a floating-point number, or if f, LT, or CT is represented as a floating-point number, then of course each of XC and XL likewise would be calculated as a floating-point number. In a Common Lisp application, the resulting value would be of the greatest precision of the input values -- insofar as measuring the precision of individual floating-point numeric values.

If, instead, each input value, n, to the respective functions would be represented as a numeric object with a rational magnitude, m and an integer decimal degree, d, such that:
n = m * 10^d
...then those calculations may be performed all within the rational number line, wherein even the intermediary values may be stored as rational, decimal-shifted numeric objects (m, d). The respective calculations for each of inductive reactance and capacitive reactance all rely on the simple mathematical operations of multiplication and division. In a Common Lisp application, then, if those calculations could be conducted with values all of type cl:rational, the result would be, in each, a numeric object of type cl:rational.
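
A minimal sketch of such a numeric object -- the structure and accessor names here being hypothetical, not those of the Igneous-Math system:

(defstruct (decimal-scalar
            (:constructor make-decimal-scalar (magnitude degree)))
  ;; represents n = magnitude * 10^degree
  (magnitude 0 :type rational)
  (degree 0 :type integer))

(defun decimal-scalar-value (ds)
  ;; decode (m, d) onto the rational number line
  (* (decimal-scalar-magnitude ds)
     (expt 10 (decimal-scalar-degree ds))))

;; 100 => (1, 2)
(decimal-scalar-value (make-decimal-scalar 1 2)) ;; => 100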

Essentially, this methodology applies a decimal shift for integral values -- preferring a decimal base for representation of floating-point values, contrasted to a binary base -- in a manner that serves to ensure that m and d would both be rational values, representative of any floating-point value n. This methodology would serve to ensure that a rational decimal value may be computed for any intermediary real number value, within a mathematical operation.

Furthermore, this methodology allows for a manner of scaling of n onto (m,d) into any alternate scale of (m,d) -- as per the decimal prefix values standardized of the Systeme Internationale (SI) -- to any scale for m other than the initial scale. For m being of type cl:integer, then, the value (m,d) may be scaled simply by incrementing or decrementing d.
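
For instance -- a sketch onto a hypothetical helper function -- a value of 4700 ohms, stored as (47, 2), may be rescaled onto the kilo prefix (10^3) with an adjustment applied onto d alone:

(defun shift-degree (m d prefix-degree)
  ;; rescale (m, d) relative to a decimal prefix of 10^prefix-degree
  (declare (type rational m) (type integer d prefix-degree))
  (values m (- d prefix-degree)))

(shift-degree 47 2 3) ;; => 47, -1 -- i.e. 4.7 kilo-ohms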

Of course, this in itself does not serve to eliminate the possibility of encountering a floating-point value in an arithmetic computation. For instance, in something of a trivial example, given:
n1 = 100 => (m1, d1) = (1, 2)
n2 = 300 => (m2, d2) = (3, 2)
then:
n1 / n2 = 1/3 => (m3, d3) = (1/3, 0)
The quotient of n1 and n2, in a floating-point representation, could not be stored exactly. In an implementation of the type, ratio, however: the quotient may be stored exactly as a ratio, rather than as a floating-point number.

In applying such a methodology -- as here, denoted onto a decimal base, base 10 -- it may be possible to eliminate floating-point values from within the intermediary values of any series of mathematical calculations. Any initial input values to a calculation may be converted into such decimal scalar values -- for each n, initializing a decimal scalar object storing n as a rational magnitude, m, with an integer decimal degree, d. Subsequently, all mathematical operations may be performed onto (m, d).

On a sidebar with regards to base two, i.e. binary scale encoding for numeric objects: considering that a numeric value -- even an integer value -- within a software system would be encoded as a binary value, within the software system's data space, it may seem more advantageous to perform such a shift onto base 2, in keeping with the natural base of the containing data space. With the numeric value initially provided as a decimal digit, however, the initial precision of the input value may be lost if the shift is not performed onto the input value's base, namely the decimal base, base 10.

Whereas the conventional scalar measurement prefixes standardized in the Systeme Internationale are also defined onto the decimal base, then of course the decimal base scalar value, (m, d), may be effectively re-scaled for any prefix, with simply an adjustment applied onto d, proceeding to any instance in decoding the rationally scaled decimal value.

Inequalities Resulting of Floating-Point Type Inconsistency in Arithmetic Operations

A prototypical example is provided, onto the numeric value pi and a fraction of pi, in a comparison onto a value functionally equivalent, or ideally equivalent, to the fraction of pi -- the latter produced of the trigonometric transcendental function, atan:

(typep pi 'double-float)
=> T

(= (/ pi 4) (atan 1d0))
=> T

(= (/ pi 4) (atan 2d0 2d0))
=> T

(= (/ pi 4) (atan 2 2))
=> NIL
In the last instance, the call to atan produces a value of type single-float, which does not have the same floating-point precision as the double-float value, pi.

Thus, it might seem well to extrapolate that all values input to a mathematical system -- as via read -- should be type coerced -- in a semiotics of programming languages other than Common Lisp, essentially to cast a value -- to the type double-float, that being a floating-point type of some precision, insofar as interacting with functions that accept a value of type float and that consistently produce a floating-point value, i.e. insofar as with regards to floating-point functions.

In coercing every input floating-point numeric value into a value of a single floating-point numeric type, it may be possible to ensure a greater degree of floating-point accuracy, throughout a program system -- thus obviating some concerns with regards to floating-point contagion.
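
A brief illustration, assuming any conforming implementation -- cl:coerce applied at an interface onto a floating-point function, such that both arguments to atan are of the double-float type:

(= (/ pi 4) (atan (coerce 2 'double-float)
                  (coerce 2 'double-float)))
=> T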

Notably, there is a feature defined in ANSI Common Lisp, namely the special variable, cl:*read-default-float-format*. In the conclusion of this article, an application of that variable is denoted for an extension to ANSI Common Lisp, in the form of an application of ANSI Common Lisp.

In Search of the Elusive CL:Long-Float Numeric Type

Presently, this article will introduce a sidebar: if this article may be interpreted with a perspective towards any Lisp implementations implementing a long-float value type -- assuming that the latter may be implemented to a greater floating-point precision than the double-float numeric type -- then the reader may wish to transpose the term, long-float, into every instance of the term, double-float, denoted in this document, thus implicitly preferring a floating-point numeric type of a greater floating-point precision.

Functions Accepting of Integer Values Only

Of course, not every function in the Common Lisp numbers dictionary would accept a value of type double-float. For instance, the function isqrt accepts values only of type integer. Inasmuch, it may be noted that -- in a prototypical regard -- the double-float value 1.0d0 is equivalent to the integer value, 1. If it would be thought necessary, then a type coercion could be performed onto the double-float 1.0d0, to its integral value 1 -- with measure for accuracy -- as within an overloaded math functions system, such as provided via Igneous-Math [math-ov], in the specific method dispatching provided by the Igneous-Math system, in its present revision (1.3).
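
For instance -- a minimal sketch, independent of the method dispatching of the Igneous-Math system -- an integral-valued double-float may be coerced, with a measure for accuracy, before a call to isqrt:

(let ((x 16.0d0))
  (when (= x (fround x)) ;; verify that the value is integral
    (isqrt (floor x))))  ;; => 4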

Towards a Numeric Equivalence Between Numerically Rationalized Constants and Values Returned of Transcendental Functions

An ideal fraction of pi, pi/4, represents an ideal angle of exactly 45 degrees -- essentially, one eighth of the radial angle of 2pi, for 2pi representing the total, singular measure of the radial angles in a unit circle.

Of course, when pi must be implemented as a measurement within a computational system -- as effectively limited, then, to the precision of the floating-point numeric features of the computational system -- then the definition of an ideal angle onto pi becomes effectively entangled with the definition of the same floating-point implementation. Considering that all systems implementing the IEEE floating-point standard would produce equivalent values for pi -- as when provided with a singular, conventionally accepted reference value for pi -- then one may expect a sense of accuracy between the respective floating-point implementations, for fractions onto pi, as much as onto any floating-point operations conforming to the IEEE floating-point standard. Presently, one might not proceed, at length, to denote any of the possibly nondeterministic qualities of such a relatively simple assertion with regards to floating-point accuracy.

If the Common Lisp rational and rationalize functions may be applied, in effect, to limit the number of possible instances of errors resulting of floating-point ulps and relative error conditions -- literally, then, to ensure that any single floating-point value would be type coerced to an appropriate rational value, as soon as a floating-point value would be introduced to a mathematical system -- such an application may also be met with questions, namely: whether to apply cl:rational or, alternately, cl:rationalize, and whence?

Here is an illustration, applying the functions cl:rationalize and cl:rational separately, onto each of: (1) pi, (2) a fraction of pi, and (3) a value returned by cl:atan, in an evaluation of cl:atan that produces -- essentially -- a fraction of pi, namely towards an ideal value of pi/4 radians, i.e. 45 angular degrees.
;; double-float => rationalized rational => equivalent
(= (rationalize (/ pi 4d0)) (rationalize (atan 2d0 2d0)))
=> T

;; double-float => rationalized rational => equivalent
(= (rationalize (/ pi 4)) (rationalize (atan 2d0 2d0)))
=> T ;; pi being of a double-float type

;; With a rationalized pi applied internal to the
;; fractional form, however, the fractional form   
;; is then unequal to its approximate functional equivalent.
;; Ed. Note: This example was developed with SBCL. ECL presents a different result 
;; than illustrated, certainly as resulting of subtle differences in implementation of
;; the function, cl:rationalize
(= (/ (rationalize pi) 4) (rationalize (atan 2d0 2d0)))
=> NIL

;; In applying cl:rational, as internal to the fractional form,
;; as well as applying cl:rational onto the value returned by
;; the atan function, the two values are then equivalent.
(= (/ (rational pi) 4) (rational (atan 2d0 2d0)))
=> T

With regards to distinctions between the functions cl:rational and cl:rationalize, the wording presented in the Common Lisp HyperSpec (CLHS) may seem somewhat ambiguous, in one regard, as towards a question: what is a completely accurate floating-point value? The question might seem tedious, perhaps, as it may be a difficult question to develop an accurate answer unto. In such regards, of course, one might revert to observing the behaviors of individual Common Lisp implementations.

Original emphasis retained, with emphasis added, in quoting the CLHS dictionary for the functions cl:rational and cl:rationalize:
"If number is a floatrational returns a rational that is mathematically equal in value to the floatrationalize returns a rational that approximates the float to the accuracy of the underlying floating-point representation. 
"rational assumes that the float is completely accurate
"rationalize assumes that the float is accurate only to the precision of the floating-point representation."
One might extrapolate, from that summary, that it might be advisable to apply cl:rationalize, throughout. In implementations in which the precision of the Lisp implementation's floating-point number system is governed, effectively, by the IEEE floating-point standard, then certainly one could assume that values produced with cl:rationalize -- specifically, of cl:rationalize applied onto double-float values -- would be consistent, across all such implementations -- as when the behaviors of cl:rationalize are governed expressly by the underlying floating-point implementation, and the underlying floating-point implementation is consistent with the IEEE standard for floating-point arithmetic.

To where, then, may one relate the disposition of cl:rational? This article proposes a thesis: that if the IEEE standard for floating-point arithmetic is accepted as the industry standard for floating-point arithmetic, and if the IEEE standard for floating-point arithmetic may represent an implementation of floating-point arithmetic both to an acceptable precision and an acceptable accuracy, in computations within and among individual IEEE floating-point implementations, then -- in a sense -- perhaps the IEEE floating-point standard may be denoted as "the nearest thing possible to a completely accurate floating-point implementation"? Of course, that might be a mathematically contentious assertion, though it might anyhow represent a pragmatically acceptable assertion, towards assumptions of precision and accuracy within computational systems implementing and extending Common Lisp.

In a practical regard: considering the example presented in the previous -- as with regards to an ideal fractional value of pi, namely an ideal pi/4 and its representation in a single Common Lisp system, as both (1) via a ratio onto pi and (2) in a numeric rationalization of an ideally equivalent value returned via the transcendental trigonometric function, atan -- it may seem that it would instead be advisable to apply cl:rational, consistently and directly onto floating-point values, in all instances when a floating-point value is produced -- whether produced via input or via a floating-point calculation -- within a Common Lisp program system.

Concerning the Elusive cl:long-float Numeric Type

ANSI Common Lisp defines a type, cl:long-float, with a minimum precision and a minimum exponent size equivalent to those qualities of the type, cl:double-float. On a sidebar: if one may wish to develop a further understanding of the relevance of those values, within a floating-point numeric implementation, certainly one may consult any amount of "existing work", such as, conveniently: [Goldberg1991].

In a practical regard, it might be assumed that most Common Lisp implementations would implement or would emulate -- respectively -- the C data types, float and double [Wikipedia], for the Common Lisp cl:single-float and cl:double-float types.

Common Lisp implementations might also endeavor to implement or to emulate the C long double numeric type [Wikipedia, ibid.], as extending of the IEEE standards for floating-point arithmetic [Wikipedia]. Conceivably, the C long double numeric type could be implemented or emulated with the elusive cl:long-float numeric type, and then extended with so many compiler-specific optimizations -- such optimizations, perhaps, providing a veritable "icing on the rational pi" within Common Lisp applications.

Towards Subversion of Floating Point Contagion

Concerning the behaviors of floating-point numeric coercion within implementations of ANSI Common Lisp, it might be advisable for an implementation of a mathematical system to prefer the most precise floating-point implementation available, within a Common Lisp implementation. Conventionally, that would be the type, double-float.

In a simple analysis of possible sources of numeric values within a Common Lisp application, essentially a numeric value may be introduced into an application either via input or via calculation. Concerning calculation of numeric values in Common Lisp, it may be assumed that for any calculation accepting a floating-point value, the result of the operation would be of the same precision as the highest-precision input value -- with however much numeric accuracy, across possible floating-point contagion, in instances of floating-point coercion.
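
A brief illustration of that behavior, assuming any conforming implementation -- the result of each operation takes the precision of the most precise floating-point operand, i.e. floating-point contagion:

(+ 1/2 0.5f0) ;; => 1.0, a single-float
(+ 1/2 0.5d0) ;; => 1.0d0, a double-float
(* 2 3.0d0)   ;; => 6.0d0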

In regards to how a numeric value may be input into a Common Lisp program, one might denote at least three possible, essentially distinct input sources:
  • REPL - The read/eval/print loop (REPL)
  • Source file
  • Compiled, "Fast loading" i.e. FASL file
In regards to the first two of those input sources, one would assume that the type of any floating-point numeric values would be selected as per the value of cl:*read-default-float-format*. It might seem, then, as if that was sufficient to ensure that all floating-point values introduced via input to a mathematical system would be of an expected floating-point format, perhaps exactly the floating-point format specified in cl:*read-default-float-format*.
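
For instance -- a brief sketch, assuming any conforming implementation -- the Lisp reader selects the floating-point format of an undecorated decimal value per that special variable:

(let ((*read-default-float-format* 'double-float))
  (type-of (read-from-string "1.5")))
=> DOUBLE-FLOAT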

Without venturing into any manner of a lengthy analysis of implementation-specific compiler behaviors and implementation-specific FASL encoding schemes, it might be denoted that a Lisp implementation may encode a floating-point value -- essentially a numeric object, as when writing objects into an object form in a FASL file -- as being of a specific format not functionally equivalent to the value of cl:*read-default-float-format*, at any instantaneous time. Similarly, it may be possible for a Lisp implementation to read a numeric value from a FASL file, without the value being type coerced according to cl:*read-default-float-format*.

Conclusion: Specifications for Floating-Point Type Coercion, Decimal Rationalization, and Default Floating-Point Format for the Lisp Reader

So, if a mathematical system was to endeavor to ensure that all floating-point mathematical operations performed within the system would be performed onto the highest-precision floating-point data type available in the implementation, then although it may serve to introduce an additional set of instructions into any functional forms within the same mathematical system, the same mathematical system -- in interfacing with any function consistently returning a floating-point numeric type -- may endeavor to consistently type coerce numeric values to the highest-precision floating-point data type available, insofar as in any direct interfaces onto such floating-point functions, such as the transcendental trigonometric functions in Common Lisp. Of course, this might serve to introduce some errors, with regards to floating-point type coercion, while it would at least serve to address any errors as may occur when comparing values derived of differing floating-point precision.

It may be a matter of a further design decision, then, whether the exact floating-point format would be specified statically, at compile time, or would default to the value of cl:*read-default-float-format* at time of evaluation. Certainly, both behaviors may be supported -- the exact selection of which could be governed with specific definitions of cl:*features* elements, as may be applied within the source code of the system.

That, in itself, may not serve to obviate some concerns with regards to floating-point ulps and rounding errors, however. So, it might furthermore be specified -- in so much as of an axiomatic regard, certainly -- that in an implementation of a mathematical object system, vis-à-vis Igneous-Math: any procedure interfacing directly with floating-point mathematical operations, in Common Lisp, would return a numeric value directly rationalized with cl:rational -- referencing the example denoted in the previous, as with regards to a rational fraction of pi and a numeric value functionally equivalent, or ideally equivalent, to a fraction of pi, but as calculated via the transcendental trigonometric function, atan, onto any specific floating-point precision -- the return value being numerically rationalized with either cl:rational or cl:rationalize.

Thirdly, for ensuring a manner of type consistency of values input via source files and input via the Common Lisp REPL, the mathematics system may endeavor to apply a floating-point numeric type of the highest available precision in any single Lisp implementation -- as towards the value of cl:*read-default-float-format*. Certainly, an application may endeavor to avoid setting the value of cl:*read-default-float-format* itself, however. An application may signal a continuable error, during application initialization -- as when the user-specified value of cl:*read-default-float-format* would not be of the greatest available numeric precision. That continuable error, then, may be defined as to allow the user to select the highest-precision floating-point numeric type and to set the value of cl:*read-default-float-format* oneself, from within any of the continuation forms defined, then, in the single continuable error.
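
A minimal sketch of such an initialization check -- assuming double-float as the floating-point type of greatest available precision, the error message text being merely illustrative:

(unless (eq *read-default-float-format* 'double-float)
  (cerror "Set *READ-DEFAULT-FLOAT-FORMAT* to DOUBLE-FLOAT and continue."
          "*READ-DEFAULT-FLOAT-FORMAT* is ~s, not of the greatest available precision"
          *read-default-float-format*)
  (setq *read-default-float-format* 'double-float))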

Sidebar in Tangent: Design of the MCi Dobelle-App System

Of course, the preceding discussion may also serve to denote that there would be a definition of an application object, as well as a definition of an initialization phase in the definition of an application object -- towards a sidebar onto the dobelle-app system, in all its colloquial name, within the MetaCommunity group at GitHub.

For "Later Study," Irregularity in Fractional Rationalized Pi onto a Rationalized, Functionally Equivalent Value Computed By atan 

The following software form presents another interesting example with regards to irregularities in floating-point arithmetic and rationalization of floating-point values. A study of the cause of the difference between the values of rpi4 and rtpi4, of course, may be other than trivial. No doubt, such a study may entail a study directly of the implementation of the transcendental trigonometric functions, in GNU libc.

This example was produced, originally, with SBCL 1.2.4 on Linux (x86-64), and has been tested also with CCL 1.9-r15757 on Linux (x86-64). Considering the consistency of IEEE floating point implementations, an equivalent set of return values is calculated in each implementation.
(let ((fpi4 (/ pi 4d0)) 
      (ftpi4 (atan 2d0 2d0))
      (rpi4 (/ (rationalize pi) 4))
      (rtpi4  (rationalize (atan 2d0 2d0))))
  (values fpi4 ftpi4 
   rpi4 rtpi4 
   (float rpi4 pi) (float rtpi4 pi)
   (= rpi4 rtpi4)))

=> 0.7853981633974483d0,
      0.7853981633974483d0,
      122925461/156513558,
      101534659/129277943,
      0.7853981633974483d0,
      0.7853981633974483d0,
      NIL

Notably, if the function cl:rational is applied instead of cl:rationalize,  the values calculated for each of rpi4 and rtpi4 are, then, exactly equivalent, and the last return value, then, is T.

Appendix: Floating-Point and Non-Floating-Point Functions Defined in ANSI Common Lisp

The following functions, for any input values, will consistently return a floating-point value:
  • ffloor
  • fceiling
  • ftruncate
  • fround
The following functions may return a rational value -- as may be tested, in each implementation -- per the rule of float substitutability, in ANSI Common Lisp. When consistently returning a floating-point value, these functions may be regarded as in the same set as the previous -- this being an implementation-specific quality.
  • sin
  • cos
  • tan
  • asin
  • acos
  • atan
  • sinh
  • cosh
  • tanh
  • asinh
  • acosh
  • atanh
  • exp
  • expt
  • log
  • sqrt

Onto floating-point complex numbers:
  • cis
A set of functions that accept integer values, and would return integer values, exclusively -- this mathematics system not addressing ash, ldb, or dpb:
  • gcd
  • lcm
  • isqrt
  • evenp
  • oddp
Towards consistency onto the decimal rationalization policy of the Igneous-Math system: within interfaces to those functions that return real numeric values, but that do not or may not return integer numeric values, those functions' direct return values should be effectively transformed with a call to cl:rational, before return.
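
A minimal sketch of such an interface -- the wrapper name being hypothetical, not denoting any function of the Igneous-Math system:

(defun %atan (x &optional (y nil yp))
  ;; interface onto cl:atan, rationalizing the direct return value
  (rational (if yp (atan x y) (atan x))))

(%atan 2d0 2d0) ;; => a rational equal to (/ (rational pi) 4)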

Notably, this quality of the Igneous-Math system is designed with an emphasis towards numeric accuracy before minimization of object system garbage-collection calls -- as with regards to a consistent discarding of floating-point values, in procedural transformation to decimal rationalized integers.

Of course, it may be advantageous towards possible re-use of the software of the Igneous-Math system, if the decimal rationalization policy would be implemented as a compile-time option, in the Igneous-Math system. 

At this point, the author's attention is returned to the documentation toolchain.

Some Thoughts about Source Code, Design Decisions, and Documentation - Markdown, DocBook, and the Igneous-Math Source Tree

Having completed a final exam for an electronics course I'm enrolled in, I've published the Igneous-Math source code to GitHub, tonight [igneous-math]. The Igneous-Math source tree is currently in the beginnings of its 1.3 edition, focusing on definition of unit conversion formulas. Simply, I do look forward to the beginning of the prototyping for the 1.3 edition -- I have been bouncing around a few ideas, namely to make reference to the Garnet KR system [garnet], for its formula modeling as applied in KR schema objects and as may be applied in arbitrary Lisp code -- however, I would like to make some more refined documentation for the source tree, also -- now at the end of the end-of-week sprint, in developing that source tree.

In developing the Igneous-Math system, presently, I'm using GNU Emacs as an IDE, together with a system named SLIME, in its Emacs Lisp components -- and in its corresponding Common Lisp components, named SWANK -- focusing mostly on SBCL as the Common Lisp implementation, but I've made some testing of the source tree using CCL, also. In applying those software components, together with the Git changeset management system, in an installation of Kubuntu within VirtualBox, I think it makes for a fairly comfortable development environment -- it's certainly easy to test the Common Lisp source code, using SLIME+SWANK, and simultaneously so, when running the Linux platform and the Microsoft Windows platform, using VirtualBox. Of course, I would like to make a modicum of effort to ensure that the Igneous-Math system could be a system that other software developers might feel comfortable to use, too, at least once it would be at its first branch-level revision.

Throughout this week's short sprint in the development of the Igneous-Math source tree, I've been making some copious notes in the source tree -- something to remind oneself of a bit of a sense of context, in relation to any source code proximate to the comments, in the source tree. Of course, that can seem convenient enough, from one's own perspective in developing the source tree. As far as the availability of the source tree, I understand that it might seem to leave something lacking, though, under the heading, "Documentation."

So far as what may be accomplished in developing the documentation of a source tree, I think:

  1. An item of documentation may serve as to explain one's design decisions, to oneself, as well as towards the respective audience or user base
    • That rather than developing an axiomatic mode, with regards to documentation, alternately one may seek to explain one's rationale, to the reader, as with regards to any specific design decisions made in the development of a software system -- such that not only may the documentation serve to explain so many design decisions, but it might also serve to place those design decisions within a discrete sense of context, conceptually.
    • Inasmuch, candidly, an item of documentation may serve as to explain one's rationale to oneself, at a later time -- thus, perhaps serving to ensure something of a degree of design consistency, with regards to one's rationale, in the design of a software system.
  2. An item of documentation may serve as to describe any set of things presently developed within a source tree -- thus, making a sort of simple inventory of results.
  3. Furthermore, an item of documentation may serve as to describe any number of things planned for further development, within a source tree and within any related software components
  4. That an item of documentation may serve a necessary role in ensuring that the software code one has developed can be reused, as by others and by oneself, extensionally.
Although the Igneous-Math system is published under a license as free/open source software, candidly, I would not wish to have to ask the reader to peruse the source tree, immediately, if the reader may simply wish to understand the natures of any set of software components published in the source tree.

So, perhaps that's towards a philosophical regard about documentation. Corresponding to a philosophy about documentation, there would also be a set of tools one would apply, in developing a set of documentation items as -- altogether -- the documentation of a project, serving as a sort of content resource, within a project.

Though I have given some consideration towards applying a set of complex editing tools and structured formats for documentation -- I had made a note, towards such an effect, in the MetaCommunity.info Work Journal, at a time when I was considering whether I'd wish to use the DocBook format and the DocBook toolchain for documenting another software system I've developed, namely AFFTA -- tonight I'm more of a mind that it would be just as well to use the simple Markdown format, in canonical Markdown, or in the more structured MultiMarkdown format. Markdown may serve as a relatively accessible document format, for simple notes and documentation within a software source tree.

DocBook, in its own regards, would be a more syntactically heterogeneous format to integrate within a software source tree. In a sense, DocBook doesn't look a whole lot like text/plain, though XML is a plain text format.

A DocBook XML document, in particular, can be written with any existing text editor, such as Emacs, or with a specialized XML editor, such as XXE, the XMLmind XML Editor. In Emacs, specifically, the nXML Emacs Lisp extensions can be of particular utility, when writing XML documents [JClark][NMT]. Together with a complete DocBook toolchain [Sagehill], the DocBook schema [TDG5], and a DocBook editing platform, the documentation author would be able to create a well-structured and comprehensively descriptive reference document for a project.

As something of a Rosetta Stone for document formats, there's also Pandoc.

Presently, MultiMarkdown is the format that I prefer for developing so much as the README file for the Igneous-Math codebase. There's even a nice, succinct guide page available about MultiMarkdown [ByWord], explaining the additional features provided with MultiMarkdown, as it extends on the original Markdown format. For a purpose of delineating the simple "To Do" items of the Igneous-Math codebase, and for providing anything of an external, "high level" overview of the codebase -- such as one would encounter in the README file for the source tree, as when viewing the source tree in its web-based GitHub presentation -- MultiMarkdown may be sufficient, in such application. Candidly, it would be difficult to make such a simple presentation about the source tree using DocBook -- for one thing, the DocBook format does not so readily allow for the author to control the formatting of the documentation. With DocBook, perhaps there's something of a tradeoff for semantic expressiveness, in lieu of any easy visual design for documentation. By contrast, Markdown allows for an easy control of the visual layout of the documentation, though the Markdown syntax offers only a relatively small set of markup features.

Secondly, consider the toolchain automation that would be required to simply publish a README file, in Markdown format, from a source document in DocBook format -- such a transformation model would certainly be less than ideal. It would allow for a transformation of only a limited set of elements from the DocBook schema, and though it may serve to produce some relatively accessible documentation -- accessible to the sighted web visitor, at least -- one may find it difficult to explain to oneself why one would transform a source document from DocBook format into a source document in Markdown format. Candidly, the GitHub presentation for Markdown is a feature extending of GitHub's own support for the Markdown format. One might think it's unsurprising, if there's no such specific support for all of XML in the GitHub web interface -- XML is a very broad markup format, in all of its structural extensibility and its text/plain format.

So, if it's possible to effectively produce a snapshot of one's point of view, at a point in time, then here's my view about the Markdown and DocBook formats, considered in relation to development of the Igneous-Math source tree.

It might bear a footnote, also, that the MultiMarkdown format is supported on Ubuntu platforms, in the libtext-multimarkdown-perl package. For integrating MultiMarkdown with Emacs MarkdownMode, there's some advice by David from Austin, TX [ddloeffler].

Saturday, October 25, 2014

Thoughts about: Electronics, Geometry, Common Lisp, and a Rational Pi within Common Lisp Mathematics

"Once upon a time," when I was a young person, my family had "Gifted me" with an electronics kit, such that I've later learned was developed by Elenco. In fact, my family had "Gifted me" with a number of electronics kits -- an old 60-in-1 kit, later the Elenco 200-in-1 kit, and then a kit somewhat resembling the Elenco Snap Circuits Kit, though the last of those was of a different design, as I recall -- something more designed for carrying an assembled circuit, really. My family had also gifted me with the opportunity to be wary of computer salespersons, but that's another thing.

In studying the manual of the Elenco 200-in-1 kit, specifically, I was able to learn a few things about the simple novelty of electrical circuits. The manual for the Elenco 200-in-1 kit contains a broad range of circuits, in the domains of both solid-state and digital electronics. The circuit elements in the kit are all firmly fastened into a non-conductive base, each junction attached to a nice spring. The circuit elements may be "wired together" with the set of jumper wires contained in the kit, according to the numbered schematics within the manual. As I remember, today, I'd assembled the radio tuner circuit, the buzzer/oscillator, and something about the multi-segment LED, then separately, something about the step-up and step-down transformers. Of course, there's a novel analog meter on the front of the Elenco 200-in-1 kit's form-factor case.

Candidly, to my young point of view, the full manual of the Elenco 200-in-1 kit had seemed a little overwhelming. I remember, still, that I had a feeling that I was looking at a broad lot of material when browsing the pages of the manual, but I wasn't particularly certain that I was able to understand the material I was reading. Of course, I could understand something about the circuit assembly instructions in the manual -- it seemed, to me, much like the "snap together" airplane models of the hobbyist domain -- but I was not able to analyze the material of those circuits. Inasmuch, I wasn't able to interpret much about my experiences in studying the manual. Certainly, I was able to set a wire between spring 30 and spring 14, to push a button, and to watch an LED glow, but -- beyond the simple, mechanical qualities of such circuit assembly and testing -- it didn't mean so much to me, then. It was a little bewildering to me, too -- there was such a nice manual with the item, but I was unable to study the manual beyond the simple circuit assembly instructions. I could not understand much about it.

Candidly, it was the only electronics book that I knew of, in all of my young views. In that time of my life, I was not whatsoever cognizant of any sense of a relevance of digital circuits -- such as of the digital logic gates with transistorized circuits, printed in the last pages of the manual. Sure, though, that was only a few decades after Bell Labs had made their first discoveries about the semiconductive properties of germanium [Computer History Museum]. I was a kid when the decade was 1980, when the 32-bit processor was "new", and when the Nintendo Entertainment System was the hottest thing on the games market. Somehow, I still mark the chronology of my life by those relative metrics.

At that time in my life, I was not aware of there being a digital domain in electrical circuits, beside any concepts particularly of digital/analog interfaces. I had received the Elenco 200-in-1 kit as a gift, at a time before I had ever studied geometry in school. It was a very novel thing to me, but there wasn't so much I could do to understand the material of the manual provided with the thing. Candidly, it gathered some dust and, with some regrets, was delivered to a landfill when I first left California. I wish I had held onto that item -- a reproduction would not be quite the same.

Certainly, the high school geometry class -- in a sense, not focused towards any specific practical or vocational regards -- did not present any notes with regards to applications of geometry in electrical circuit analysis, so far as I can recall of the primary material of the high school geometry course. There was nothing about the rectangular coordinate plane, almost nothing about the unique polar coordinate plane, and there was not so much about complex numbers, though certainly the course stayed its track along the standard curriculum.

Sure, though, the geometry class had explained the natures of the unit circle, the Pythagorean theorem, and the transcendental functions defined in the domain of trigonometry -- such as the sine, cosine, and tangent functions, and their functional inverses, asin, acos, and atan. Certainly, it had seemed novel to me that there are relations of those functions onto objects within a unit circle, but I could not conceive of any applications for such knowledge, beyond any of the no-doubt demanding domains of structural design -- such as of physical architecture or of automotive design, in any manner of mathematical refinement in structural design.

So, it was not until the 38th year of my life that I would learn that those trigonometric transcendental functions may be applied within a methodology of electrical circuit analysis -- and that, conventionally, those are denoted as being among the transcendental functions in mathematics. Without begging a discussion with regards to imperfections in measurement systems, personally I think it's a significant thing that there are applications of geometry in electrical circuit analysis, as well as in the broader physics. Candidly, I may only narrowly avoid regressing to a mindset like that of a giddy child, to observe that there are more applications for trigonometry, under the nearby sun.
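For one textbook example of that application -- hedged, here, as a general formula from AC circuit analysis, not anything drawn from the material cited in this article: the phase angle of a series RLC branch is the two-argument arctangent of the net reactance and the resistance. A minimal sketch in Common Lisp, with rlc-phase being a hypothetical function name:

(defun rlc-phase (resistance inductance capacitance omega)
  ;; Phase angle, in radians, of the series impedance
  ;; Z = R + j(wL - 1/(wC)), at the angular frequency OMEGA (rad/s),
  ;; applying the two-argument, quadrant-aware ATAN
  (atan (- (* omega inductance) (/ 1 (* omega capacitance)))
        resistance))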


This week, I've begun to define something of an extensional model for applications of mathematics in Common Lisp -- extending of the set of mathematically relevant functions defined within the ANSI Common Lisp specification. Today, specifically, I've been developing a model for overloading of mathematical operations in the Common Lisp Object System -- in a manner somewhat extending of a Common Lisp implementation's existing optimizations with regards to numeric computation. While I was endeavoring to make the next day's set of notes for that "sprint", I've now learned of a couple of functions defined in ANSI Common Lisp that effectively serve to "obsolete" a page full of code that I had written and tested earlier this week -- namely, the ANSI Common Lisp rational and rationalize functions, no doubt accompanied with further implementation-specific optimizations, in implementations. The code that I had written earlier this week was something implementing a decimal-shifted encoding for floating-point numbers -- in a sense, developing a model for decimal shift in measurement values, onto the base-ten or decimal measurement prefixes defined of the standard Systeme Internationale measurement system. Of course, there are also the base-two or binary prefixes conventionally applied about quantities of digital information.

The code that I had written earlier this week does not completely attain to a model for measurement of "significant digits" within input values -- not in a sense as I had learned of, when I was a student of the science courses I'd attended in high school, in the mid-1990s. There was not so much about the floating-point numeric types as defined in modern standards for computation -- and neither so much of a sense of a mantissa, as defined in those specifications -- in a course that described the molar weight of helium.
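Parenthetically, ANSI Common Lisp does provide direct access to a float's significand -- the mantissa -- and its exponent, in the standard integer-decode-float function. A small illustration, with the printed values as would be observed for a 53-bit double-float significand:

(integer-decode-float 0.5d0)
=> 4503599627370496, -53, 1
;; i.e. significand, exponent, and sign: 2^52 * 2^-53 * 1 = 1/2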

On discovering the ANSI Common Lisp rational and rationalize functions, personally I may feel at least a small bit of good cheer that ANSI X3J13 [Franz Inc] had seen fit to develop such a set of functions as would serve to address some inevitable concerns in implementations of conventional models for floating-point arithmetic.
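By way of a small illustration -- with the printed value as would be observed in an implementation where pi is a double-float -- the rational function returns the exact mathematical value of the float's binary representation, a somewhat unwieldy ratio:

(rational pi)
=> 884279719003555/281474976710656

The rationalize function, by contrast, returns a simpler rational that would read back to the very same float, as follows.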

Personally, I think that this -- in a sense -- is the nicest line of code that I've ever written:
(rationalize pi)
=> 245850922/78256779
 It follows, then, that one is also cheered to write:
(defun rad-to-deg (r)
  (* r #.(/ 180 (rationalize pi))))
=> RAD-TO-DEG

(rad-to-deg pi)
=> 180.0D0
So, clearly, it is possible to define an accurate method for transformation of a measurement quantity from a unit of radians onto a unit of degrees, without one having to go to any gross extent to extend the ANSI Common Lisp specification.

That being simply the happiest thing that one has seen produced of a computer, of late, but of course one does not wish to make a laurel of the discovery.

Of course, the simple transformation of pi radians to 180 degrees is not so different if one does not firstly rationalize pi in the equation. In a converse instance, however: what of a transformation of pi/2 radians?
(/ (rationalize pi) 2)
=> 122925461/78256779
contrasted to:
(/ pi 2)
=> 1.5707963267948966D0
In the first instance of those two, when the implementation's value for pi is firstly converted to a rational number, with the computation then being made onto another rational number, the real precision of the initial value of pi is neither lost nor folded. In the second instance of those two, it might seem as though the application would simply be venturing along in the implementation's system for floating-point arithmetic -- the application developer, then, must simply accept any matters of floating-point rounding, and so on, as those being simply matters of the programmed environment. One might think that may not be sufficient in all regards, though perhaps it might be common, as a matter of practice.

Notably, in applying a rationalized pi value, the calculation of the number of degrees represented of pi/2 radians itself results in a rational number: exactly 90.
(defconstant *rpi* (rationalize pi))
=> *RPI*

(defun rad-to-deg (r)
  (* r #.(/ 180 *rpi*)))
=> RAD-TO-DEG

(rad-to-deg (/ *rpi* 2))
=> 90
In a simple regards: that's both of a mathematical precision and a mathematical accuracy, represented in a conversion of pi/2 radians to 90 degrees.

I'm certain that it may be possible to emulate the CLHS rationalize function within other object-oriented programming languages -- such as Java, C++, Objective-C, and so on. One would like to think that, in such an instance, one may also emulate the Common Lisp numeric type ratio -- that being a subtype of rational in Common Lisp, disjoint from the numeric type integer -- then to overload the respective numeric operations for the new ratio numeric type.
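As a minimal sketch of such an emulation -- assuming the classic continued-fraction search for the simplest rational within an interval, as also described for the Scheme rationalize procedure; the names simplest-rational, simplest-positive, and rationalize* are hypothetical, and an actual implementation's rationalize may select a tighter precision interval:

(defun simplest-rational (lo hi)
  ;; the rational with the smallest denominator in the interval [LO, HI]
  (cond ((> lo hi) (simplest-rational hi lo))
        ((= lo hi) lo)
        ((and (<= lo 0) (<= 0 hi)) 0)
        ((minusp hi) (- (simplest-positive (- hi) (- lo))))
        (t (simplest-positive lo hi))))

(defun simplest-positive (lo hi)
  ;; continued-fraction recursion, given 0 < LO < HI
  (let ((fl (floor lo)))
    (cond ((= fl lo) fl)                 ; LO is itself an integer
          ((> (floor hi) fl) (1+ fl))    ; an integer lies within (LO, HI]
          (t (+ fl (/ (simplest-positive (/ (- hi fl))
                                         (/ (- lo fl)))))))))

(defun rationalize* (x)
  ;; emulating RATIONALIZE for a double-float X: the simplest rational
  ;; within roughly one unit of the float's precision, on either side
  (let* ((r (rational x))
         (eps (* (abs r) (rational double-float-epsilon))))
    (simplest-rational (- r eps) (+ r eps))))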

Personally, I would prefer to think of Common Lisp subsuming other programming languages, however, rather than of other programming languages simply emulating Common Lisp. I know that there can be "more than one" -- as far as programming languages in the industry -- certainly. Personally, I think that it may be as well to address other programming languages from a Common Lisp baseline, if simply as a matter of at-worst hypothetical efficiency.

An ANSI Common Lisp implementation would already have defined the ratio numeric type, as well as the mathematical operations denoted in the CLHS Mathematics (Numbers) Dictionary, onto the whole set of numeric types defined in ANSI Common Lisp. Implementations might also extend on those functions and types, with any number of implementation-specific extensions such as may be defined for purposes of optimization within Common Lisp programs -- perhaps towards a metric for deterministic timing in mathematical procedures.

Sure, though, if one would endeavor to extend those numeric operations and types onto more types of mathematical object than those defined in ANSI Common Lisp -- as onto a domain of vector mathematics and scalar measurements, such as the Igneous-Math project seeks to address, in all of a colloquial character of a source tree (yet unpublished) -- then it would be necessary to extend the mathematical operations themselves, and to define an object system for the types extension. That's something I was working on developing, today, before I noticed the simply sublime matter of rational and rationalize, as functions standardized in and of ANSI Common Lisp.
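As a minimal sketch of the shape of such a types extension -- with %add and vec being hypothetical names, not drawn from the actual Igneous-Math source:

(defgeneric %add (a b)
  (:documentation "Binary addition, specializable onto non-standard numeric types"))

(defmethod %add ((a number) (b number))
  ;; defer to the implementation's own optimized CL:+ for standard numbers
  (+ a b))

(defclass vec ()
  ;; a simple mathematical vector type, for the types extension
  ((elements :initarg :elements :reader vec-elements)))

(defmethod %add ((a vec) (b vec))
  ;; element-wise vector addition, recursing through %ADD
  (make-instance 'vec
                 :elements (map 'vector #'%add
                                (vec-elements a) (vec-elements b))))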

Presently, I'm not certain whether I should wish to altogether discard the simple decimal-shift code that I'd written earlier this week -- essentially, that was for developing a rational numeric equivalent of a floating-point representation of a numeric input value, such as pi -- but not only so. In that much alone, it might be redundant onto the ANSI CL rationalize function.

Also, it was for "packing" numbers into their least significant digits -- in a sense, to derive a value of magnitude and a corresponding decimal scale value, the latter essentially representing a decimal exponent -- with a thought towards developing some optimized mathematical operations onto rational numeric values throughout a mathematical system, in seeking to retain something of a sense of mathematical accuracy and precision, if but because it would seem to be appropriate to one's understanding, as such.

The decimal-shift code, as implemented in Igneous-Math, is of course nothing too complex, in any sense -- in a simple regards, the "decimal reduction" algorithm uses the Common Lisp truncate function, and correspondingly, the "decimal expansion" algorithm uses the Common Lisp expt function. It's been implemented as part of a system for encoding the Systeme Internationale decimal prefixes as Common Lisp objects -- a system essentially designed to ensure that, excepting such rational numeric measurement values as would be input as of the type ratio, other real numeric values in the system would be stored in a manner derived to a base measurement unit, before being applied within such conventional formulae as would be defined around base measurement units -- while allowing for operations onto arbitrary exponents of the base measurement units, in any single measurement system.
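In outline -- a minimal sketch of those two algorithms, with decimal-reduce and decimal-expand as hypothetical names, not necessarily as denoted in the actual Igneous-Math source:

(defun decimal-reduce (n)
  ;; strip trailing decimal zeros from the integer N, returning a
  ;; magnitude and the count of zeros stripped -- a decimal exponent
  (loop with magnitude = n
        with scale = 0
        do (multiple-value-bind (quotient remainder)
               (truncate magnitude 10)
             (if (and (zerop remainder) (not (zerop quotient)))
                 (setf magnitude quotient
                       scale (1+ scale))
                 (return (values magnitude scale))))))

(defun decimal-expand (magnitude scale)
  ;; inverse of DECIMAL-REDUCE, restoring the full integer value
  (* magnitude (expt 10 scale)))

(decimal-reduce 299792458000)
=> 299792458, 3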

While designing so much of those features of the Igneous-Math system, candidly I was not then aware of the simple existence of the ANSI CL rationalize function, as a feature available of any ANSI Common Lisp implementation. With some chagrin, I wonder whether I would've designed the decimal-shift code any differently, had I been aware of that feature of ANSI Common Lisp at any time earlier this week.

I think that, henceforward, I'll have to make some consideration about whether the rationalize function should be applied throughout the Igneous-Math system. One matter that I would wish to emphasize about the design of the Igneous-Math system: a number's printed representation need not be EQ to the number's representation in object storage, within a numeric system. So long as the printed value and the stored value are at least mathematically equivalent -- onto the ANSI Common Lisp '=' function, broadly -- then certainly, the system would have achieved its necessary requirements for storage and printing of numeric values.

That a mathematical system, moreover, may endeavor to apply the rationalize function onto all numeric values input to the system, before applying those values within any mathematical operations -- such a feature would be developed as an extension of ANSI Common Lisp, also. It might seem redundant or simply frivolous, perhaps, but personally I think it's just as well.

Furthermore, I think it's swell to encode any floating-point number as a set of two fixnum-type numbers -- assuming that the instances would be few in which a floating-point number would be expanded to a bignum value on a decimal scale -- and it's only a matter of the nature of an application system, that one would scale a floating-point number onto base ten instead of onto base two -- perhaps a matter of developer convenience, in developing a software program for applications of mathematics within a 64-bit Common Lisp implementation.

While beginning to develop a set of overloaded math procedures in the Igneous-Math system, I've begun to wonder how much work would be required to optimize an individual mathematical operation onto a processor's own specific hardware -- such as with regards to the Intel SSE extensions [Wikipedia], or the ARM architecture's hard-float design, with its corresponding ABI in the Linux application space [Debian Wiki]. The Igneous-Math system has been designed to rely on a Common Lisp implementation's own optimizations, in a portable regards.
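For instance -- a minimal sketch, assuming the usual approach of such reliance: with suitable type declarations, a compiler such as SBCL may open-code double-float arithmetic directly onto the hardware's floating-point instructions (e.g. SSE2, on x86-64). The function dot3 is a hypothetical example, not from the Igneous-Math source:

(defun dot3 (ax ay az bx by bz)
  ;; a three-element dot product; with these declarations, the compiler
  ;; is free to emit unboxed, hardware floating-point arithmetic
  (declare (type double-float ax ay az bx by bz)
           (optimize (speed 3) (safety 1)))
  (+ (* ax bx) (* ay by) (* az bz)))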