Thursday, October 30, 2014

Figuratively Setting Point and Mark Onto: Further Research Regarding Household-Local Applications for DC Electrical Systems in a Contemporary Environment

The following is a response I'd written for a private discussion forum, today. This, effectively, is "Draft 1" of a thesis.

In applying a power supply to an existing electrical circuit component -- for instance, if no power supply was provided with the component, or if the provided power supply was unavailable -- I would first endeavor to read the specifications of the item, to determine what type of current -- perhaps trivially, AC or DC -- and what amperage and voltage would be required by the component, so far as published by the component's manufacturer.

Offhand, I'm at least outwardly familiar with the "wall wart" kind of power converter -- as would typically be provided with a laptop -- and with the AC-to-DC-over-USB converter, as for charging mobile electronics from an AC electrical source. Of course, those are all designed for single-phase household current, in a consumer electrical domain -- namely, for charging and for consistent power of digital electrical devices, each having an internal or detachable battery. Of course, the battery in such a device would itself be a type of power source, also.

So, as far as electrical sources and their corresponding power adapters, without seeking any too formal reference, offhand:
1) The AC-to-DC power converter in the form of a "Wall wart," so to speak (AC input, DC output)
2) The AC-to-DC power converter in the form of an AC-to-USB power adapter (AC input, DC output)
3) Direct household AC electrical (AC input, AC output)
4) Three-phase industrial electrical (AC input, three-phase; AC output, three-phase)
5) Rechargeable battery (DC input, DC output) 
6) Non-rechargeable battery (DC output)

Perhaps a bit of a sidebar, with regards to DC electrical systems: Candidly, I don't imagine there's been as much standardization for DC power inputs in digital devices as has developed around household AC systems, regionally, and industrial AC systems, likewise. Perhaps more standards may yet be developed for DC power systems, however -- such as an ideal voltage and ideal amperage for powering a heterogeneous range of DC powered devices, prospectively from a grounded household DC line, as might be run in parallel to a grounded household AC line, within the context of a single household?

If one may endeavor to develop any small effort towards such a manner of standardization, perhaps it could be of some relevance to the development of solar electrical systems?

Candidly, I'm not sure how well such a concept might be adopted in any single industry -- but perhaps one may endeavor to develop an initial "proof of concept," a working prototype, sometime, for implementing a household DC line -- perhaps towards a primarily DC electrical system extending from a solar electrical source, and safely so? Certainly, in the context of a household, a DC electrical current would not need to travel a very long distance -- with an apology, as I know this is a matter about which one would seek any number of formal references, if one were to develop a complete thesis about such a concept.

Of course, there are applications for AC inductance, as in any number of conventional AC devices -- typically, motors and heating elements -- such as might be powered indirectly from a household DC line, if not directly from a household AC line. Of course, any number of standards might be developed as such, addressed firstly around paramount concerns for household safety -- such as for ensuring a consistent ground within a prospective household DC electrical system.

Certainly, there are applications for AC inductance in household and industrial electrical systems. In regards to industrial domains, certainly, I've read of a transition from DC electrical systems to AC electrical systems, in industry.[1] Not being immediately too familiar with the theoretical qualities of such a transition: if there were, previously, any systems developed for DC power in regards to motors and heating elements, perhaps such old designs could be renovated somehow, towards a contemporary application onto solar electrical systems and broader DC electrical systems? Certainly, with digital systems being so broadly available in the consumer market -- assuming that any single digital system runs on a DC power supply, directly -- then if a DC electrical system could be developed to support an adequate range of amperage and voltage ratings among individual digital household devices, though it might seem to represent something of a shift away from conventional household and industrial electrical sources, perhaps there might yet be an emerging market discovered for such technology.

Insofar as one may find applications for DC electrical systems -- such as the simple feat of driving an electrical motor from a DC current, as in a household air conditioner, in its condenser and motor elements, or a household refrigerator, in a similar regards, or any motorized or heater-type tools of a proverbial electrically-powered shop -- if there are any similar elements in industrial applications of DC electrical systems, perhaps some similar works could be developed for household applications.

So, barring any lobbying to the contrary -- candidly -- perhaps there might be a broader application available for DC electrical systems, in household applications, focusing on essentially short-distance household wiring systems and their corresponding safety elements, as well as the solar domain of electrical sources? One might hope it would not seem like a devolution in electrical systems, however -- perhaps, rather towards something of a decentralization, in some regards?

Certainly, I've read some of the press around households endeavoring to go, effectively, "off grid" with solar electrical systems. So, I would certainly not want to be too hasty in any such design -- though it may seem viable, theoretically, and perhaps none too problematic, electrically, if one may develop a system for a household DC current, and safely so?

Works Referenced:

[1] Rockwell Automation. "DC vs. AC drives: Why should I upgrade if it still works?" February 2014.

Towards Draft 2 of this thesis, Ed. Notes:

  • This is not to revisit the War of Currents [Wikipedia]. Certainly, the essentially decentralized nature of a solar-electrical system -- considered onto the open market, broadly at a global scale -- may permit for a war-less development of a household DC electrical system. One might assume, furthermore, that there would not be any high-voltage lines implemented on Mars, at any time soon.
  • The decentralized character of a solar/battery system may bear some specific note, in regards to development of solar energy systems for rural applications
  • The International Space Station certainly would be using a great deal of DC power, also
    • Solar, perhaps together with fuel cells.
    • How does one ground an electrical system, in outer space?
  • Concept, version 2: Parallel-line AC/DC mains system for localized residential application
    • Safety concern: DC grounding
      • Common ground line for each of AC and DC household mains?
    • Technical question: Possible interference to DC mains lines, from AC mains lines?
    • Practical concerns: 
      • Differing DC current and voltage ratings in DC appliances
      • Block Diagrams
      • Proof of Concept - Case Studies
      • Specifications
      • Safety
  • See also: Next article in research arc

Monday, October 27, 2014

A Synopsis, at Length, Onto the Decimal Rationalization and Floating-Point Type Coercion Features of the Igneous-Math System

Storage of a Rational Pi 

Last week, I discovered a rather simple methodology for translating a floating-point computational representation of the irrational number, pi, into a rational number with a decimal exponent, within a fixed-precision floating point implementation in a software system -- a value which I denoted a rational pi, in a previous article here at my DSP42 web log. I was all the more cheered when I discovered that there is also a methodology available for translating pi, or any other floating point number, into a rational value within a machine's respective data space -- namely, the ANSI Common Lisp rational and rationalize functions. On discovering that simple feature of the ANSI Common Lisp standard, I was quite cheered to observe:
(rationalize pi)
=> 245850922/78256779
...as within the Common Lisp implementation in which I had performed that calculation, namely Steel Bank Common Lisp (SBCL)  version 1.2.1 (x86-64) on Microsoft Windows. An equivalent value is produced, with that form, in SBCL 1.2.4 (x86-64) on Linux.

Though the behaviors of the rational and rationalize functions are defined, broadly, in ANSI Common Lisp, such a translation of floating point values into ratio values is nothing so directly and explicitly standardized as IEEE floating-point arithmetic. Indirectly, however, given a value of pi calculated to the greatest possible precision of floating point values within a single Common Lisp implementation, one would assume that between two separate Common Lisp implementations -- both implementing the IEEE standards for floating-point arithmetic -- the form, (rationalize pi), would return an equivalent rational value in each respective Common Lisp implementation.

Incidentally, for those of us not having a membership in the IEEE, the article What Every Computer Scientist Should Know About Floating Point Arithmetic, by David Goldberg (Xerox PARC, 1991) -- i.e. [Goldberg1991] -- may serve to explain many of the principles represented in IEEE floating-point arithmetic, with or without an accessible draft of the standards defining IEEE floating-point arithmetic.

Ratio as Rational Number

This might sound like some fairly dry or perhaps droll material to be writing about. Certainly, with so many implementations of IEEE floating-point arithmetic in computing, one might wish to simply accept the ulps and relative errors of the floating point implementation, and set a software program to continue, nonetheless, along its axiomatic course. Certainly, by any stretch of arithmetic, 1/3 can never be represented exactly as a binary floating point value.

So, there's "the rub," essentially. The number 1/3 is itself a rational number -- having the corresponding mathematical accuracy of a rational number, per se -- though its floating point representation is only an approximation of that rational value.

Conveniently, perhaps, ANSI Common Lisp defines a numeric type, ratio, disjoint from the numeric type, integer -- the two of which, together, comprise the set of all rational numbers.

Without making a comprehensive review of ANSI Common Lisp, one might simply observe that the Common Lisp ratio type is utilized in many mathematical operations within a Common Lisp implementation. For example, in a simple evaluation in SBCL, specifically:
(/ 1 3)
=> 1/3

Rational Magnitude and Degree

This short web log article, presently, will endeavor to illustrate a relevance of rational values as  intermediary values within multi-step mathematical formulas. Rather than preferring a ratio type rationalization of floating-point numbers, however, this article will develop a decimal exponent encoding for rationalization of floating-point numbers.

This article will develop an example onto a methodology for an estimation of net circuit impedance within a single-source AC circuit comprised of resistive, inductive, and capacitive (R,L,C) circuit elements. In that methodology, firstly, each of net inductive reactance, net capacitive reactance, and -- if applicable -- net parallel reactance is calculated, those values deriving from the respective net inductance, net capacitance, and single-source frequency in the circuit being analyzed. Then, one would apply the reactance values together with the resistance in the circuit, for calculating the net circuit impedance and its respective phase angle, through any exacting approach to the computation of those values, dependent on the nature of the exact circuit network. Thus, in a conventional methodology, one ultimately estimates a net circuit impedance, which may then be applied in estimating a net circuit current, and subsequently the current and voltage at individual circuit elements within the circuit being analyzed. At all points in such a calculation, it would be possible for the floating-point implementation to introduce its own natural errors -- whether of a hand-held calculator or a program system at a desktop computer -- due to floating point ulps and relative error, as addressed in [Goldberg1991].

Without venturing to estimate whether such a methodology may be the best or most accurate  methodology available for an estimation of net circuit impedance, presently it may serve as a simple example of a multi-step equation utilizing of floating-point values.

At the outset of such a calculation, the value pi is encountered, namely for calculating the net inductive reactance and net capacitive reactance in the circuit. For a series (R,L,C) circuit with a single-source AC power supply operating at a frequency of f Hertz, the estimation -- as a calculation onto an imperfect mechanical domain, certainly -- may begin as follows, axiomatically:
net inductance (series): L_T = L_1 + L_2 + ... + L_n

net capacitance (series): C_T = 1 / (1/C_1 + 1/C_2 + ... + 1/C_n)

inductive reactance: X_L = 2 * pi * f * L_T

capacitive reactance: X_C = 1 / (2 * pi * f * C_T)
The reader will be spared the details of a further calculation or estimation of net impedance, by any single, known methodology, as would be applied in an analysis of such types of electrical circuit -- the subject matter of, e.g., the ECT-125 course at DeVry University Online.
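As a minimal sketch of that methodology -- under hypothetical function names, assuming a series-only (R,L,C) network and rational input values for resistance, frequency, inductance, and capacitance; this is not the Igneous-Math implementation itself -- the estimation might begin as:
(defconstant +rpi+ (rationalize pi))

;; X_L = 2 * pi * f * L_T -- rational, for rational F and L
(defun inductive-reactance (f l)
  (* 2 +rpi+ f l))

;; X_C = 1 / (2 * pi * f * C_T) -- rational, for rational F and C
(defun capacitive-reactance (f c)
  (/ 1 (* 2 +rpi+ f c)))

;; |Z| = sqrt(R^2 + (X_L - X_C)^2), for a series (R,L,C) network.
;; cl:sqrt may return a float here -- a concern revisited, below.
(defun series-impedance-magnitude (r f l c)
  (let ((x (- (inductive-reactance f l)
              (capacitive-reactance f c))))
    (sqrt (+ (* r r) (* x x)))))
With all inputs of type cl:rational, every intermediary value through the net reactance x remains rational; only the final cl:sqrt may introduce a floating-point value.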

Presently, this article shall focus upon the following: a concern with regards to floating point contagion is introduced as early as the calculation of each of X_L and X_C.

Towards a Methodology for Decimal Scaling in Rationalization of Numeric Input Values

Within the last two of the preceding equations -- each representing an implicit dyadic function, accepting values of f and, respectively, L_T or C_T -- if pi is represented as a floating-point number, or if f, L_T, or C_T is represented as a floating-point number, then of course each of X_C and X_L likewise would be calculated as a floating-point number. In a Common Lisp application, the resulting value would be of the greatest precision among the input values -- insofar as measuring the precision of individual floating-point numeric values.

If, instead, each input value, n, to the respective functions would be represented as a numeric object with a rational magnitude, m, and an integer decimal degree, d, such that:
n = m * 10^d
...then those calculations may be performed entirely within the rational number line, wherein even the intermediary values may be stored as rational, decimal-shifted numeric objects (m, d). The respective calculations for each of inductive reactance and capacitive reactance rely only on the simple mathematical operations of multiplication and division. In a Common Lisp application, then, if those calculations are conducted with values all of type cl:rational, the result in each would be a numeric object of type cl:rational.

Essentially, this methodology applies a decimal shift for integral values -- preferring a decimal base for the representation of floating-point values, contrasted to a binary base -- in a manner that serves to ensure that m and d would both be rational values, together representative of any floating point value n. This methodology would serve to ensure that a rational decimal value may be computed for any intermediary real number value, within a mathematical operation.

Furthermore, this methodology allows for a manner of scaling of n onto (m, d) into any alternate scale of (m, d) -- as per the decimal prefix values standardized of the Système International (SI) -- to any scale for m other than the initial scale. For m being of type cl:integer, the value (m, d) may be scaled simply by incrementing or decrementing d.
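A minimal sketch of such an (m, d) encoding follows, under hypothetical names -- not the encoding as implemented in the Igneous-Math system itself:
;; n = m * 10^d, for a rational magnitude M and an integer decimal
;; degree D
(defstruct (decimal-scaled
             (:constructor %make-decimal-scaled (m d)))
  (m 0 :type rational)
  (d 0 :type integer))

;; recover the rational value n = m * 10^d
(defun decode-decimal-scaled (n)
  (* (decimal-scaled-m n) (expt 10 (decimal-scaled-d n))))

;; shift the decimal degree by DELTA -- e.g. DELTA = 3, towards a
;; kilo- prefix -- preserving the encoded value
(defun rescale (n delta)
  (%make-decimal-scaled (* (decimal-scaled-m n) (expt 10 (- delta)))
                        (+ (decimal-scaled-d n) delta)))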

Of course, this in itself does not serve to eliminate the possibility of encountering a floating-point value in an arithmetic computation. For instance, in something of a trivial example, given:
n1 = 100 => (m1, d1) = (1, 2)
n2 = 300 => (m2, d2) = (3, 2)
then:
n1 / n2 = 1/3 => (m3, d3) = (1/3, 0)
The quotient of n1 and n2, in its floating point representation, would be merely an inexact approximation of 1/3. In an implementation of the type, ratio, however: the quotient may be stored exactly, as a ratio, rather than as a floating point number.

In applying such a methodology -- as here, denoted onto a decimal base, base 10 -- it may be possible to eliminate floating-point values from within the intermediary values of any series of mathematical calculations. Any initial input values to a calculation may be converted into such decimal scalar values -- for each n, initializing a decimal scalar object storing n as a rational magnitude, m, with an integer decimal degree, d. Subsequently, all mathematical operations may be performed onto (m, d).
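For instance, building on the sketch denoted above -- multiplication and division onto (m, d) pairs remain within the rational number line, the magnitudes combining as rationals and the decimal degrees simply adding or subtracting (the operator names, again, being hypothetical):
(defun ds* (a b)
  (%make-decimal-scaled (* (decimal-scaled-m a) (decimal-scaled-m b))
                        (+ (decimal-scaled-d a) (decimal-scaled-d b))))

(defun ds/ (a b)
  (%make-decimal-scaled (/ (decimal-scaled-m a) (decimal-scaled-m b))
                        (- (decimal-scaled-d a) (decimal-scaled-d b))))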

On a sidebar with regards to base two, i.e. binary scale encoding for numeric objects: Considering that a numeric value -- even an integer value -- within a software system would be encoded as a binary value, within the software system's data space, it may seem more advantageous to perform such a shift onto base 2, in keeping with the natural base of the containing data space. With the numeric value initially provided as a decimal digit, however, the initial precision of the input value may be lost if the shift is not performed onto the input value's own base, namely the decimal base, base 10.
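For example -- the exact ratio here being fixed by the IEEE binary64 format -- reading the decimal input "0.1" as a double-float and then converting within base 2 does not recover the decimal value exactly, whereas the decimal shift, (m, d) = (1, -1), decodes exactly:
(rational 0.1d0)
=> 3602879701896397/36028797018963968

(* 1 (expt 10 -1))
=> 1/10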

Whereas the conventional scalar measurement prefixes standardized in the Système International are also defined onto the decimal base, then of course the decimal base scalar value, (m, d), may be effectively re-scaled for any prefix with simply an adjustment applied onto d, proceeding to any instance of decoding the rationally scaled decimal value.

Inequalities Resulting of Floating-Point Type Inconsistency in Arithmetic Operations

A prototypical example is provided, onto the numeric value, pi, and a fraction of pi, in a comparison onto a value functionally or ideally equivalent to that fraction of pi, the latter produced of the trigonometric transcendental function, atan:

(typep pi 'double-float)
=> T

(= (/ pi 4) (atan 1d0))
=> T

(= (/ pi 4) (atan 2d0 2d0))
=> T

(= (/ pi 4) (atan 2 2))
=> NIL
In the last instance, the call to atan produces a value of type single-float, which does not have the same floating-point precision as the double-float value, pi.

Thus, it might seem well to extrapolate that all values input to a mathematical system -- as via read -- should be type coerced -- in a semiotics of programming languages other than Common Lisp, essentially, to cast a value -- such that all number type values would be coerced to double-float, that being a floating-point type of known precision, insofar as interacting with functions that accept a value of type float and that consistently produce a floating-point value, i.e. floating-point functions.

In coercing every input floating-point type numeric value into a value of a single floating-point numeric type, it may be possible to ensure a greater degree of floating-point accuracy, throughout a program system -- thus, obviating some concerns with regards to floating point contagion.
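A minimal sketch of that manner of input coercion -- coerce-input being a hypothetical name:
(defun coerce-input (x)
  ;; coerce any real number onto the double-float type, before any
  ;; interface onto floating-point functions
  (coerce x 'double-float))

(= (/ pi 4) (atan (coerce-input 2) (coerce-input 2)))
=> T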

Notably, there is a feature defined in ANSI Common Lisp that is relevant here, namely the special variable, cl:*read-default-float-format*. In the conclusion of this article, an application of that variable is denoted for an extension to ANSI Common Lisp, in the form of an application of ANSI Common Lisp.

In Search of the Elusive CL:Long-Float Numeric Type

Presently, this article will introduce a sidebar: That if this article may be interpreted as with a perspective towards any Lisp implementations implementing a long-float value type -- assuming that the latter may be implemented to a greater floating-point precision than the double-float numeric type -- then the reader may wish to transpose the term, long-float, into every instance of the term, double-float, denoted in this document, thus implicitly preferring a floating-point numeric type of a greater floating-point precision.

Functions Accepting of Integer Values Only

Of course, not every function in the Common Lisp numbers dictionary would accept a value of type double-float. For instance, the function isqrt accepts values only of type integer. Inasmuch, it may be noted that -- in a prototypical regards -- the double-float value 1.0d0 is equivalent to the integer value, 1. If it would be thought necessary, then a type coercion could be performed onto the double-float 1.0d0, to its integral value 1 -- with measure for accuracy -- as within an overloaded math functions system, such as provided via Igneous-Math [math-ov], in the specific method dispatching provided by the Igneous-Math system, in its present revision (1.3).
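For instance, a hedged sketch of such a checked coercion -- coerce-to-integer being a hypothetical name, not the Igneous-Math method dispatching itself:
(defun coerce-to-integer (x)
  ;; return an integer equal to X, signaling an error rather than
  ;; silently losing accuracy
  (multiple-value-bind (q r) (truncate x)
    (if (zerop r)
        q
        (error "~S cannot be coerced to an integer without loss" x))))

(isqrt (coerce-to-integer 1.0d0))
=> 1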

Towards a Numeric Equivalence Between Numerically Rationalized Constants and Values Returned of Transcendental Functions

An  ideal fraction of pi, pi/4, represents an ideal angle of exactly 45 degrees -- essentially, one eighth of  the radial angle of 2pi, for 2pi representing the total, singular measure of the radial angles in a unit circle.  

Of course, when pi must be implemented as a measurement within a computational system -- as effectively limited, then, to the precision of the floating point numeric features of the computational system -- then the definition of an ideal angle onto pi  becomes effectively entangled with the definition of the same floating point implementation. Considering that all systems implementing the IEEE floating point standard  would produce equivalent values for pi -- as when provided with a singular, conventionally accepted reference value for pi  --  then one may expect a sense of accuracy between the respective floating point implementations,  for fractions onto pi, as much as onto any  floating-point operations conforming to the IEEE floating point standard. Presently, one might not proceed, at length, to denote any of the possibly nondeterministic qualities of such a relatively simple assertion with regards to floating-point accuracy. 

If the Common Lisp rational and rationalize functions may be applied, in effect, to limit the number of possible instances of errors resulting of floating point ulps and relative error conditions -- literally, then, to ensure that any single floating-point value would be type coerced to an appropriate rational value, as soon as a floating-point value would be introduced to a mathematical system -- such an application may also be met with questions, namely: whether to apply cl:rational or, alternately, cl:rationalize, and whence?

An illustration follows, in applying the functions cl:rationalize and cl:rational separately, onto each of: (1) pi, (2) a fraction of pi, and (3) a value returned by cl:atan, in an evaluation of cl:atan that produces -- essentially -- a fraction of pi, namely towards an ideal value of pi/4 radians, i.e. 45 angular degrees:
;; double-float => rationalized rational => equivalent
(= (rationalize (/ pi 4d0)) (rationalize (atan 2d0 2d0)))
=> T

;; double-float=> rationalized rational => equivalent
(= (rationalize (/ pi 4)) (rationalize (atan 2d0 2d0)))
=> T ;; pi being of a double-float type

;; With a rationalized pi applied internal to the
;; fractional form, however, the fractional form   
;; is then unequal to its approximate functional equivalent.
;; Ed. Note: This example was developed with SBCL. ECL presents a different result 
;; than illustrated, certainly as resulting of subtle differences in implementation of
;; the function, cl:rationalize
(= (/ (rationalize pi) 4) (rationalize (atan 2d0 2d0)))
=> NIL

;; In applying cl:rational, as internal to the fractional form,
;; as well as applying cl:rational onto the value returned by
;; the atan function, the two values are then equivalent.
(= (/ (rational pi) 4) (rational (atan 2d0 2d0)))
=> T

With regards to distinctions between the functions cl:rational and cl:rationalize, the wording presented in the Common Lisp HyperSpec (CLHS) may seem somewhat ambiguous, in one regards, towards a question: what is a completely accurate floating point value? The question might seem tedious, perhaps, as it may be a difficult question to develop an accurate answer unto. In such regards, of course, one might revert to observing the behaviors of individual Common Lisp implementations.

Original emphasis retained, with emphasis added, in quoting the CLHS dictionary for the functions cl:rational and cl:rationalize:
"If number is a floatrational returns a rational that is mathematically equal in value to the floatrationalize returns a rational that approximates the float to the accuracy of the underlying floating-point representation. 
"rational assumes that the float is completely accurate
"rationalize assumes that the float is accurate only to the precision of the floating-point representation."
One might extrapolate, from that summary, that it might be advisable to apply cl:rationalize, throughout. In implementations in which the precision of the Lisp implementation's floating-point number system is governed, effectively, by the IEEE floating-point standard, then certainly one could assume that values produced with cl:rationalize  -- specifically, of cl:rationalize applied onto double-float values --  that the results would be consistent, across all such implementations -- as when the behaviors of cl:rationalize are governed expressly onto the underlying floating-point implementation, and the underlying floating-point implementation is consistent with the IEEE standard for floating-point arithmetic.

To where, then, may one relate the disposition of cl:rational? This article proposes a thesis: That if the IEEE standard for floating-point arithmetic is accepted as the industry standard for floating-point arithmetic, and if the IEEE standard for floating-point arithmetic may represent an implementation of floating-point arithmetic both to an acceptable precision in addition to an acceptable accuracy in computations within and among individual IEEE floating-point implementations,  then -- in a sense -- perhaps the IEEE floating-point standard may be denoted as "The nearest thing possible to a completely accurate floating-point implementation"?  Of course, that might be a mathematically contentious assertion, though it might anyhow represent a pragmatically acceptable assertion, towards assumptions of precision and accuracy within computational systems implementing and extending of Common Lisp.

In a practical regards: considering the example presented in the previous -- with regards to an ideal fractional value of pi, namely an ideal pi/4, and its representation in a single Common Lisp system, both (1) via a ratio onto pi and (2) in a numeric rationalization of an ideally equivalent value returned via the transcendental trigonometric function, atan -- it may seem that it would instead be advisable to apply cl:rational, consistently and directly onto floating-point values, in every instance when a floating-point value is produced -- whether via input or via a floating-point calculation -- within a Common Lisp program system.
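For instance, a minimal sketch of that policy -- %atan being a hypothetical interface name:
(defun %atan (y &optional (x nil xp))
  ;; apply cl:rational directly onto the value returned of the
  ;; floating-point function
  (rational (if xp (atan y x) (atan y))))

(= (/ (rational pi) 4) (%atan 2d0 2d0))
=> T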

Concerning the Elusive cl:long-float Numeric Type

ANSI Common Lisp defines a type, cl:long-float, with a minimum precision and a minimum exponent size equivalent to those qualities of the type, cl:double-float. On a sidebar: if one may wish to develop a further understanding of the relevance of those values within a floating-point numeric implementation, certainly one may consult any amount of "existing work" -- such as, conveniently, [Goldberg1991].

In a practical regards, it might be assumed that most Common Lisp implementations would implement or would emulate -- respectively -- the C data types, float and double [Wikipedia], for the Common Lisp cl:single-float and cl:double-float types.

Common Lisp implementations might also endeavor to implement or to emulate the C long double numeric type [Wikipedia, ibid.], as extending of the IEEE standards for floating-point arithmetic [Wikipedia]. Conceivably, the C long double numeric type could be implemented or emulated with the elusive cl:long-float numeric type, and then extended with so many compiler-specific optimizations -- such optimizations, perhaps, providing a veritable "icing on the rational pi" within Common Lisp applications.

Towards Subversion of Floating Point Contagion

Concerning the behaviors of floating-point numeric coercion within implementations of ANSI Common Lisp, it might be advisable for an implementation of a mathematical system to prefer the most precise floating-point implementation available, within a Common Lisp implementation. Conventionally, that would be the type, double-float.

In a simple analysis of possible sources of numeric values within a Common Lisp application, essentially a numeric value may be introduced into an application either via input or via calculation. Concerning calculation of numeric values in Common Lisp, it would be assumed that, for any calculation accepting a floating-point value, the result of the operation would be of the same precision as the highest-precision input value -- with whatever consequences for numeric accuracy, across possible floating point contagion, in instances of floating point coercion.

In regards to how a numeric value may be input into a Common Lisp program, one might denote at least three possible, essentially distinct input sources:
  • REPL - The read/eval/print loop (REPL)
  • Source file
  • Compiled, "Fast loading" i.e. FASL file
In regards to the first two of those input sources, one would assume that the type of any floating-point numeric values would be selected as per the value of cl:*read-default-float-format*. It might seem, then, as if that were sufficient to ensure that all floating-point values introduced via input to a mathematical system would be of an expected floating-point format -- perhaps exactly the floating-point format specified in cl:*read-default-float-format*.
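For example, as may be observed at a REPL:
(let ((*read-default-float-format* 'single-float))
  (read-from-string "1.5"))
=> 1.5, 3

(let ((*read-default-float-format* 'double-float))
  (read-from-string "1.5"))
=> 1.5d0, 3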

Without venturing into any manner of a lengthy analysis of implementation-specific compiler behaviors and implementation-specific FASL encoding schemes, it might be denoted that a Lisp implementation may encode a floating-point value -- as when writing objects into an object form in a FASL file -- as being of a specific format not functionally equivalent to the value of cl:*read-default-float-format*, at any instantaneous time. Similarly, it may be possible for a Lisp implementation to read a numeric value from a FASL file without the value being type coerced according to cl:*read-default-float-format*.

Conclusion: Specifications for Floating-Point Type Coercion, Decimal Rationalization, and Default Floating-Point Format for the Lisp Reader

So, if a mathematical system was to endeavor to ensure that all floating-point mathematical operations performed within the system would be performed onto the highest-precision floating point data type available in the implementation -- though it may serve to introduce an additional set of instructions into any functional forms within that system -- the system, in interfacing with any function consistently returning a floating point numeric type, may endeavor to consistently type coerce numeric values to the highest precision floating point data type available, insofar as in any direct interfaces onto such floating point functions, such as the transcendental trigonometric functions in Common Lisp. Of course, this might serve to introduce some errors with regards to floating-point type coercion, while it would at least serve to address any errors as may occur when comparing values derived of differing floating-point precision.

It may be a matter of a further design decision, then, as to whether the exacting floating-point format would be specified statically, at compile time, or would default to the value of cl:*read-default-float-format* at time of evaluation. Certainly, both behaviors may be supported -- the exact selection of which could be governed with specific definitions of cl:*features* elements, as may be applied within the source code of the system.

That, in itself, may not serve to obviate some concerns with regards to floating-point ulps and rounding errors, however. So, it might furthermore be specified -- in so much as of an axiomatic regards, certainly -- that in an implementation of a mathematical object system, vis-à-vis Igneous-Math, any procedure interfacing directly with floating-point mathematical operations, in Common Lisp, would return a numeric value directly rationalized with cl:rational -- referencing the example denoted in the previous, with regards to a rational fraction of pi and a numeric value functionally or ideally equivalent to a fraction of pi, as calculated via the transcendental trigonometric function, atan, onto any specific floating point precision, the return value then being numerically rationalized with either cl:rational or cl:rationalize.

Thirdly, for ensuring a manner of type consistency of values input via source files and input via the Common Lisp REPL, the mathematics system may endeavor to apply a floating-point numeric type of the highest available precision in any single Lisp implementation -- as towards the value of cl:*read-default-float-format*. Certainly, an application may endeavor to avoid setting the value of cl:*read-default-float-format* itself, however. An application may signal a continuable error during application initialization -- as when the user-specified value of cl:*read-default-float-format* would not be of the greatest available numeric precision. That continuable error, then, may be defined as to allow the user to select the highest-precision floating point numeric type and to set the value of cl:*read-default-float-format* oneself, from within any of the continuation forms defined in the single continuable error.
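A minimal sketch of such a continuable error -- assuming double-float as the greatest available precision, the function name and wording here being hypothetical:
(defun check-default-float-format ()
  (unless (eq *read-default-float-format* 'double-float)
    ;; cl:cerror establishes a continuation for the error it signals
    (cerror "Set CL:*READ-DEFAULT-FLOAT-FORMAT* to CL:DOUBLE-FLOAT"
            "~S is not of the greatest available floating-point precision"
            *read-default-float-format*)
    (setq *read-default-float-format* 'double-float)))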

Sidebar in Tangent: Design of the MCi Dobelle-App System

Of course, the previous paragraph may also serve to denote that there would be a definition of an application object, as well as a definition of an initialization phase in the definition of an application object -- towards a sidebar onto the dobelle-app system, in all its colloquial name, within the MetaCommunity group at Github.

For "Later Study," Irregularity in Fractional Rationalized Pi onto a Rationalized, Functionally Equivalent Value Computed By atan 

The following software form presents another interesting example with regards to irregularities in floating point arithmetic and rationalization of floating-point values. A study of the cause of the difference between the values of rpi4 and rtpi4, of course, may be other than trivial. No doubt, such a study may entail a study directly of the implementation of the transcendental trigonometric functions, in GNU LibC.

This example was produced, originally, with SBCL 1.2.4 on Linux (x86-64), and has been tested also with CCL 1.9-r15757 on Linux (x86-64). Considering the consistency of IEEE floating point implementations, an equivalent set of return values is calculated in each implementation.
(let ((fpi4 (/ pi 4d0)) 
      (ftpi4 (atan 2d0 2d0))
      (rpi4 (/ (rationalize pi) 4))
      (rtpi4  (rationalize (atan 2d0 2d0))))
  (values fpi4 ftpi4 
   rpi4 rtpi4 
   (float rpi4 pi) (float rtpi4 pi)
   (= rpi4 rtpi4)))

=> 0.7853981633974483d0,
      0.7853981633974483d0,
      122925461/156513558,
      101534659/129277943,
      0.7853981633974483d0,
      0.7853981633974483d0,
      NIL

Notably, if the function cl:rational is applied instead of cl:rationalize,  the values calculated for each of rpi4 and rtpi4 are, then, exactly equivalent, and the last return value, then, is T.
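For comparison, a minimal restatement of that variant, onto the same implementations:
(let ((rpi4 (/ (rational pi) 4))
      (rtpi4 (rational (atan 2d0 2d0))))
  (= rpi4 rtpi4))
=> T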

Appendix: Floating-Point and Non-Floating-Point Functions Defined in ANSI Common Lisp

The following functions, for any input values, will consistently return a floating point value:
  • ffloor
  • fceiling
  • ftruncate
  • fround
The following functions may return a rational value -- as may be tested, in each implementation -- as per the Rule of Float Substitutability, in ANSI Common Lisp. When consistently returning a floating point value, these functions may be regarded as in the same set as the previous -- this being an implementation-specific quality.
  • sin
  • cos
  • tan
  • asin
  • acos
  • atan
  • sinh
  • cosh
  • tanh
  • asinh
  • acosh
  • atanh
  • exp
  • expt
  • log
  • sqrt

Onto floating-point complex numbers:
  • cis
A set of functions that accept integer values, and would return integer values, exclusively -- this  mathematics system not addressing ash, ldb, or dpb:
  • gcd
  • lcm
  • isqrt
  • evenp
  • oddp
Towards consistency onto the decimal rationalization policy of the Igneous-Math system: within interfaces to those functions that return real numeric values, but that do not or may not return integer numeric values, those functions' direct return values should be effectively transformed with a call to cl:rational, before return.
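For instance, a hedged sketch of such an interface onto cl:sqrt -- %sqrt being a hypothetical name:
(defun %sqrt (x)
  (let ((r (sqrt x)))
    ;; rationalize the result onto the rational number line, only
    ;; when the result is in fact a float -- per the Rule of Float
    ;; Substitutability, cl:sqrt may return a rational directly
    (if (floatp r) (rational r) r)))

(%sqrt 4)
=> 2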

Notably, this quality of the Igneous-Math system is designed with an  emphasis towards numeric accuracy before minimization of object system garbage-collection calls -- as with regards to  a consistent discarding of floating point values, in procedural transformation to decimal rationalized integers.

Of course, it may be advantageous towards possible re-use of the software of the Igneous-Math system, if the decimal rationalization policy would be implemented as a compile-time option, in the Igneous-Math system. 

At this point, the author's attention is returned to the documentation toolchain.

Some Thoughts about Source Code, Design Decisions, and Documentation - Markdown, DocBook, and the Igneous-Math Source Tree

Having completed a final exam for an electronics course I'm enrolled in, I've published the Igneous-Math source code to GitHub, tonight [igneous-math]. The Igneous-Math source tree is currently in the beginnings of its 1.3 edition, focusing on definition of unit conversion formulas. Though, simply, I do look forward to the beginning of the prototyping for the 1.3 edition -- I have been bouncing around a few ideas, namely to make reference to the Garnet KR system [garnet], for its formula modeling as applied in KR schema objects and as may be applied in arbitrary Lisp code -- however, I would like to make some more refined documentation for the source tree, also -- now at the end of the end-of-week sprint in developing that source tree.

In developing the Igneous-Math system, presently, I'm using GNU Emacs as an IDE, together with a system named SLIME, in its Emacs Lisp components -- and in its corresponding Common Lisp components, named SWANK -- focusing mostly on SBCL as the Common Lisp implementation, though I've made some testing of the source tree using CCL, also. In applying those software components, together with the Git changeset management system, in an installation of Kubuntu within VirtualBox, I think it makes for a fairly comfortable development environment -- it's certainly easy to test the Common Lisp source code using SLIME+SWANK, and simultaneously so, when running the Linux platform and the Microsoft Windows platform, using VirtualBox. Of course, I would like to make a modicum of effort to ensure that the Igneous-Math system could be a system that other software developers might feel comfortable to use, too, at least once it would be at its first branch-level revision.

Throughout this week's short sprint in the development of the Igneous-Math source tree, I've been making some copious notes in the source tree -- something to remind oneself of a bit of a sense of context, in relation to any source code proximate to the comments in the source tree. Of course, that can seem convenient enough, from one's own perspective in developing the source tree. As far as the availability of the source tree, I understand that it might seem to leave something lacking, though, under the heading, "Documentation."

So far as what may be accomplished in developing the documentation of a source tree, I think:

  1. An item of documentation may serve as to explain one's design decisions, to oneself, as well as towards the respective audience or user base
    • That rather than developing an axiomatic mode, with regards to documentation, alternately one may seek to explain one's rationale to the reader, as with regards to any specific design decisions made in the development of a software system -- that not only may the documentation serve to explain so many design decisions, but it might also serve to place those design decisions within a discrete sense of context, conceptually.
    • Inasmuch, candidly, an item of documentation may serve as to explain one's rationale to oneself, at a later time -- thus, perhaps serving to ensure something of a degree of design consistency, with regards to one's rationale, in the design of a software system.
  2. An item of documentation may serve as to describe any set of things presently developed within a source tree -- thus, making a sort of simple inventory of results.
  3. Furthermore, an item of documentation may serve as to describe any number of things planned for further development, within a source tree and within any related software components
  4. That an item of documentation may serve a necessary role in ensuring that the software code one has developed can be reused, as by others and by oneself, extensionally.
Although the Igneous-Math system is published under a license as free/open source software, candidly, I would not wish to ask the reader to peruse the source tree, immediately, if the reader may simply wish to understand the natures of any set of software components published of the source tree.

So, perhaps that's towards a philosophical regard about documentation. Corresponding to a philosophy about documentation, there would also be a set of tools one would apply, in developing a set of documentation items as -- altogether -- the documentation of a project, serving as a sort of content resource within a project.

Though I have given some consideration towards applying a set of complex editing tools and structured formats for documentation -- I had made a note to such an effect in the MetaCommunity.info Work Journal, at a time when I was considering whether I'd wish to use the DocBook format and the DocBook toolchain for documenting another software system I've developed, namely AFFTA -- tonight I'm more of a mind that it would be just as well to use the simple Markdown format, in canonical Markdown or in the more structured MultiMarkdown format. Markdown may serve as a relatively accessible document format, for simple notes and documentation within a software source tree.

DocBook, in its own regards, would be a more syntactically heterogeneous format to integrate within a software source tree. In a sense, DocBook doesn't look a whole lot like text/plain, though XML is a plain text format.

A DocBook XML document, in particular, can be written with any existing text editor, such as Emacs, or with a specialized XML editor, such as XXE, the XMLmind XML Editor. In Emacs, specifically, the nXML Emacs Lisp extensions can be of particular utility, when writing XML documents [JClark][NMT].  Together with a complete DocBook toolchain [Sagehill], the DocBook schema [TDG5], and DocBook editing platform, the documentation author would be able to create a well structured and comprehensively descriptive reference document for a project.

As something of a rosetta stone for document formats, there's also Pandoc.

Presently, MultiMarkdown is the format that I prefer for developing so much as the README file for the Igneous-Math codebase. There's even a nice, succinct guide page available about MultiMarkdown [ByWord], explaining the additional features provided with MultiMarkdown, as it extends the original Markdown format. For a purpose of delineating the simple "To Do" items of the Igneous-Math codebase, and for providing anything of an external, "high level" overview of the codebase -- such as one would encounter of the README file for the source tree, as when viewing the source tree in its web-based GitHub presentation -- MultiMarkdown may be sufficient, in such application. Candidly, it would be difficult to make such a simple presentation about the source tree using DocBook -- for one thing, the DocBook format does not so readily allow for the author to control the formatting of the documentation. With DocBook, perhaps there's something of a tradeoff for semantic expressiveness, in lieu of any easy visual design for documentation. By contrast, Markdown allows for an easy control of the visual layout of the documentation, though the Markdown syntax offers only a relatively small set of markup features.

Secondly, the toolchain automation that would be required to simply publish a README file, in Markdown format, from a source document in DocBook format -- such a transformation model would certainly be less than ideal. It would allow for a transformation of only a limited set of elements from the DocBook schema, and though it may serve to produce some relatively accessible documentation -- accessible to the sighted web visitor, at least -- one may find it difficult to explain to oneself why one would transform a source document from DocBook format into a source document in Markdown format. Candidly, the GitHub presentation for Markdown is a feature extending of GitHub's own support for the Markdown format. One might think it unsurprising that there's no such specific support for all of XML in the GitHub web interface -- XML is a very broad markup format, in all of its structural extensibility and its text/plain format.

So, if it's possible to effectively produce a snapshot of one's point of view, at a point in time, then here's my view about the Markdown and DocBook formats, considered in relation to development of the Igneous-Math source tree.

It might bear a footnote, also, that the MultiMarkdown format is supported on Ubuntu platforms, in the libtext-multimarkdown-perl package. For integrating MultiMarkdown with Emacs markdown-mode, there's some advice by David from Austin, TX [ddloeffler].

Saturday, October 25, 2014

Thoughts about: Electronics, Geometry, Common Lisp, and a Rational Pi within Common Lisp Mathematics

"Once upon a time," when I was a young person, my family had "Gifted me" with an electronics kit, such that I've later learned was developed by Elenco. In fact, my family had "Gifted me" with a number of electronics kits -- an old 60-in-1 kit, later the Elenco 200-in-1 kit, and then a kit somewhat resembling the Elenco Snap Circuits Kit, though the last of those was of a different design, as I recall -- something more designed for carrying an assembled circuit, really. My family had also gifted me with the opportunity to be wary of computer salespersons, but that's another thing.

In studying the manual of the Elenco 200-in-1 kit, specifically, I was able to learn a few things about the simple novelty of electrical circuits. The manual for the Elenco 200-in-1 kit contains a broad range of circuits, onto the domains of each of solid-state and digital electronics. The circuit elements, in the kit, are all firmly fastened into a non-conductive base, each junction attached to a nice spring. The circuit elements, in the kit, may be "wired together" with the set of jumper wires contained in the kit, according to the numbered schematics within the manual. As I remember, today, I'd assembled the radio tuner circuit, the buzzer/oscillator, and something about the multi-segment LED, then  separately, something about the step-up and step-down transformers. Of course, there's a novel analog meter, on the front of the Elenco 200-in-1 kit's form factor case.

Candidly, to my young point of view, the full manual of the Elenco 200-in-1 kit had seemed a little overwhelming. I remember, still, that I'd had a feeling that I was looking at a broad lot of material when browsing the pages of the manual, but I wasn't particularly certain that I was able to understand the material I was reading. Of course, I could understand something about the circuit assembly instructions in the manual -- it seemed, to me, much like the "snap together" airplane models of the hobbyist domain -- but I was not able to analyze the material of those circuits. Inasmuch, I wasn't able to interpret much about my experiences in studying the manual. Certainly, I was able to set a wire between spring 30 and spring 14, to push a button, and to watch an LED glow, but -- beyond the simple, mechanical qualities of such circuit assembly and testing -- it didn't mean so much to me, then. It was a little bewildering to me, too -- there was such a nice manual with the item, but I was unable to study the manual beyond the simple circuit assembly instructions. I could not understand much about it.

Candidly, it was the only electronics book that I knew of, in all of my young views. In that time of my life, I was not whatsoever cognizant of any sense of a relevance of digital circuits -- such as of the digital logic gates with transistorized circuits, printed in the last pages of the manual. Sure, though, that was only a few decades after Bell Labs had made their first discoveries about the semiconductive properties of germanium [Computer History Museum]. I was a kid when the decade was 1980, when the 32-bit processor was "new," and when the Nintendo Entertainment System was the hottest thing on the games market. Somehow, I still mark the chronology of my life by those relative metrics.

At that time in my life, I was not aware of there being a digital domain in electrical circuits, beside any concepts particularly of digital/analog interfaces. I had received the Elenco 200-in-1  kit as a gift, at a time before I had ever studied geometry, in school. It was a very novel thing, to me, but there wasn't so much I could do to understand the material of the manual provided with the thing. Candidly, it gathered some dust, and with some regrets, was delivered to a landfill, when I first left California. I wish I had held onto that item -- a reproduction would not be quite the same.

Certainly, the high school geometry class -- in a sense not focused towards any specific practical or vocational regards -- did not present any notes with regards to applications of geometry in electrical circuit analysis, so far as I can recall of the primary material of the course. There was nothing about the rectangular coordinate plane, almost nothing about the unique, polar coordinate plane, and there was not so much about complex numbers, though certainly the course stayed its track along the standard curriculum.

Sure, though, the geometry class had explained the natures of the unit circle, the Pythagorean theorem, and the transcendental functions as defined of a domain of trigonometry -- such as the sine, cosine, and tangent functions, and their functional inverses, asin, acos, and atan. Certainly, it had seemed novel to me that there are the relations of those functions onto objects within a unit circle, but I could not conceive of any applications for such knowledge, beyond any of the no-doubt demanding domains of structural design -- such as of physical architecture or of automotive design, in any manner of mathematical refinement in structural design.

So, it was not until the 38th year of my life that I would learn that those trigonometric transcendental functions may be applied within a methodology of electrical circuit analysis -- and that conventionally, those are denoted as being among the transcendental functions in mathematics. Without begging a discussion with regards to imperfections in measurement systems, personally I think it's a significant thing that there are applications of geometry in electrical circuit analysis, as well as in the broader physics. Candidly, I may only narrowly avoid regressing to a mindset like of a giddy child, to observe that there are more applications for trigonometry, under the nearby sun.


This week, I've begun to define something of an extensional model for application of mathematics, in Common Lisp -- extending of the set of mathematically relevant functions defined within the ANSI Common Lisp specification. Today, specifically, I've been developing a model for overloading of mathematical operations in the Common Lisp Object System -- in a manner somewhat extending of a Common Lisp implementation's existing optimizations with regards to numeric computation. While I was endeavoring to make the next day's set of notes for that "sprint," I've now learned of a couple of functions defined in ANSI Common Lisp that effectively serve to "obsolete" a page full of code that I had written and tested earlier this week -- and that, no doubt, with further implementation-specific optimizations, in implementations of the ANSI Common Lisp rational and rationalize functions. The code that I had written, earlier this week, was something implementing of a decimal-shifted encoding for floating-point numbers -- in a sense, developing a model for decimal shift in measurement values, onto the base-ten or decimal base measurement prefixes defined of the standard Système International measurement system. Of course, there are also the base-two or binary base prefixes conventionally applied about quantities of digital information.

The code that I had written, earlier this week, does not completely attain to a model for measurement of "significant digits" within input values -- not in a sense as I had learned of, when I was a student of the science courses I'd attended in high school, in the mid 1990s. There was not so much about the floating point numeric types, as defined in modern standards for computation -- and neither so much of a sense of a mantissa, as defined in those specifications -- in a course that described the molar weight of helium.

On discovering the ANSI Common Lisp rational and rationalize functions, personally I may feel at least a small bit of good cheer that ANSI X3J13 [Franz Inc] had seen fit to develop such a set of functions as would serve to address some inevitable concerns in implementations of conventional models for floating-point arithmetic.

Personally, I think that this -- in a sense -- is the nicest line of code that I've ever written:
(rationalize pi)
=> 245850922/78256779
 It follows, then, that one is also cheered to write:
(defun rad-to-deg (r)
  (* r #.(/ 180 (rationalize pi))))
=> RAD-TO-DEG

(rad-to-deg pi)
=> 180.0D0
So, clearly, it is possible to define an accurate method for transformation of a measurement quantity from a unit of radians onto a unit of degrees, without one having to go to any gross extent to extend the ANSI Common Lisp specification.

That being simply the happiest thing that one has seen produced of a computer, of late, but of course one does not wish to make a laurel of the discovery.

Of course, the simple transformation of pi radians  to 180 degrees is not so different if one does not firstly rationalize pi in the equation. In a converse instance, however: What of a transformation of  pi/2 radians?
(/ (rationalize pi) 2)
=> 122925461/78256779
contrasted to:
(/ pi 2)
=> 1.5707963267948966D0
In the first instance of those two, when the implementation's value for pi is firstly converted to a rational number, the computation then being made onto another rational number, the real precision of the initial value of pi is neither lost nor folded. In the second instance, it might seem as though the application would simply be venturing along in the implementation's system for floating point arithmetic -- such that the application developer must simply accept any matters of floating-point rounding, and so on, as being simply matters of the programmed environment. One might think that may not be sufficient in all regards, though perhaps it might be common, as a matter of practice.

Notably, in applying a rationalized pi value, the calculation of the number of degrees represented by pi/2 radians itself results in a rational number: exactly 90.
(defconstant *rpi* (rationalize pi))
=> *RPI*

(defun rad-to-deg (r)
  (* r #.(/ 180 *rpi*)))
=> RAD-TO-DEG

(rad-to-deg (/ *rpi* 2))
=> 90
In a simple regards: that's both mathematical precision and mathematical accuracy, represented in a conversion of pi/2 radians to 90 degrees.

I'm certain that it may be possible to emulate the CLHS rationalize function within other object-oriented programming languages -- such as Java, C++, Objective-C, and so on. One would like to think that, in such an instance, one may also emulate the Common Lisp numeric type ratio -- that being a subtype of rational in Common Lisp, disjoint from the numeric type integer -- and then overload the respective numeric operations for the new ratio numeric type.
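As a sketch of what such an emulation might entail -- in Common Lisp itself, for brevity, and with simple-rationalize and simplest-rational as illustrative names of my own, not the standard functions -- the essential algorithm treats the float as naming a small interval around its exact rational value, then searches that interval, by continued fractions, for the rational of least denominator. An actual implementation would bound the interval more carefully, so this sketch may differ from a host's rationalize in the last digits:

(defun simplest-rational (lo hi)
  ;; the rational of least denominator within [lo, hi]
  (let ((c (ceiling lo)))
    (if (<= c hi)
        c ;; an integer lies within the interval
        ;; otherwise, recurse onto the reciprocal of the fractional
        ;; part, as in a continued-fraction expansion
        (let ((n (floor lo)))
          (+ n (/ (simplest-rational (/ (- hi n)) (/ (- lo n)))))))))

(defun simple-rationalize (x)
  (if (zerop x)
      0
      (let* ((r (rational x)) ;; the float's exact rational value
             (eps (* (abs r) (rational double-float-epsilon))))
        (simplest-rational (- r eps) (+ r eps)))))

(simple-rationalize pi)
=> 245850922/78256779 ;; or a nearby ratio, depending on the bounds chosen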

Personally, I would prefer to think of Common Lisp subsuming other programming languages, however, rather than of other programming languages simply emulating Common Lisp. I know that there can be "more than one," as far as programming languages in the industry, certainly. Still, I think it may be as well to address other programming languages from a Common Lisp baseline, if simply as a matter of at-worst hypothetical efficiency.

An ANSI Common Lisp implementation would already have defined the ratio numeric type, as well as the mathematical operations denoted in the CLHS Mathematics (Numbers) Dictionary, onto the whole set of numeric types defined in ANSI Common Lisp. Implementations might also extend those functions and types with any number of implementation-specific extensions, such as may be defined for purposes of optimization within Common Lisp programs -- perhaps towards a metric for deterministic timing in mathematical procedures.

Sure, though, if one would endeavor to extend those numeric operations and types onto more types of mathematical object than those defined in ANSI Common Lisp -- as onto a domain of vector mathematics and scalar measurements, such as the Igneous-Math project seeks to address, in all the colloquial character of a source tree yet unpublished -- then it would be necessary to extend the mathematical operations themselves, and to define an object system for the type extensions. That's something I was working on developing today, before I noticed the simply sublime matter of rational and rationalize, as functions standardized in and of ANSI Common Lisp.
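By way of a sketch of that much -- with add and vec as illustrative names of my own here, not of the unpublished Igneous-Math source tree -- one might define a generic function that falls through to the standard operation for the standard numeric types, while leaving room for methods onto new mathematical object types:

(defgeneric add (a b)
  (:method ((a number) (b number))
    ;; fall through to the standard operation, for standard types
    (+ a b)))

(defclass vec ()
  ((elements :initarg :elements :reader vec-elements)))

(defmethod add ((a vec) (b vec))
  ;; element-wise addition, reusing ADD onto the elements
  (make-instance 'vec :elements (map 'vector #'add
                                     (vec-elements a)
                                     (vec-elements b))))

A full system would also need to address n-ary application, and perhaps shadow the standard operator names within a package; this is only the shape of the dispatch.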

Presently, I'm not certain whether I should wish to altogether discard the simple decimal-shift code that I'd written earlier this week. Essentially, it was for developing a rational numeric equivalent of a floating-point representation of a numeric input value, such as pi -- but not only so. In that much alone, it would be redundant with the ANSI CL rationalize function.

Also, it was for "packing" numbers into their least significant digits -- in a sense, deriving a value of magnitude and a corresponding decimal scale value, the latter essentially representing a decimal exponent -- with a thought towards developing some optimized mathematical operations onto rational numeric values throughout a mathematical system, in seeking to retain something of a sense of mathematical accuracy and precision.

The decimal shift code, as implemented in Igneous-Math, is of course nothing too complex in any sense. In a simple regards, the "decimal reduction" algorithm uses the Common Lisp truncate function, and correspondingly, the "decimal expansion" algorithm uses the Common Lisp expt function. It's been implemented as part of a system for encoding the Systeme Internationale decimal prefixes as Common Lisp objects -- a system essentially designed to ensure that, excepting such rational numeric measurement values as would be input as of the type ratio, other real numeric values in the system would be stored in a manner derived to a base measurement unit, before being applied within such conventional formulae as would be defined around base measurement units -- while allowing for operations onto arbitrary exponents of the base measurement units, in any single measurement system.
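In a minimal sketch of that much -- with decimal-reduce and decimal-expand as names of my own choosing here, for illustration, not necessarily as in the Igneous-Math source code -- the reduction divides out trailing decimal zeroes with truncate, returning a magnitude and a decimal exponent, and the expansion restores the value with expt:

(defun decimal-reduce (n)
  ;; shift an integer down to its least significant digits, returning
  ;; the reduced magnitude and the base-ten exponent removed
  (let ((scale 0))
    (loop while (and (not (zerop n)) (zerop (mod n 10)))
          do (setf n (truncate n 10))
             (incf scale))
    (values n scale)))

(defun decimal-expand (magnitude scale)
  ;; the inverse of decimal-reduce
  (* magnitude (expt 10 scale)))

(decimal-reduce 41000)
=> 41, 3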

While designing so much of those features of the Igneous-Math system, candidly I was not then aware of the simple existence of the ANSI CL rationalize function, as a feature available in any ANSI Common Lisp implementation. With some chagrin, I wonder whether I would've designed the decimal-shift code any differently, had I been aware of that feature of ANSI Common Lisp at any time earlier this week.

I think that, henceforward, I'll have to give some consideration to whether the rationalize function should be applied throughout the Igneous-Math system. One matter that I would wish to emphasize about the design of the Igneous-Math system: a number's printed representation need not be EQ to the number's representation in object storage, within a numeric system. So long as the printed value and the stored value are at least mathematically equivalent -- onto the ANSI Common Lisp '=' function, broadly -- then certainly, the system would have achieved its necessary requirements for storage and printing of numeric values.

Moreover, a mathematical system may endeavor to apply the rationalize function onto all numeric values input to the system, before applying those values within any mathematical operations -- such a feature would be developed as an extension of ANSI Common Lisp, also. It might seem redundant or simply frivolous, perhaps, but personally I think it's just as well.
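That much may be sketched quite briefly -- here with coerce-input as a hypothetical name for such an entry point, not an Igneous-Math function per se:

(defun coerce-input (value)
  ;; coerce any real numeric input to an exact rational
  (etypecase value
    (rational value) ;; integer and ratio values pass through unaltered
    (float (rationalize value)))) ;; floats become nearby exact rationals

(coerce-input 0.5d0)
=> 1/2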

Furthermore, I think it's as well to encode any floating-point number as a set of two fixnum-type numbers -- assuming that the instances would be few in which a floating-point number would be expanded to a bignum value on a decimal scale -- and it is only a matter of the nature of an application system that one would scale a floating-point number onto base ten, instead of onto base two -- perhaps a matter of developer convenience, in developing a software program for applications of mathematics within a 64-bit Common Lisp implementation.

While beginning to develop a set of overloaded math procedures in the Igneous-Math system, I've begun to wonder how much work would be required to optimize an individual mathematical operation onto a microprocessor's own specific hardware -- such as with regards to the Intel SSE extensions [Wikipedia] and the ARM architecture's hard-float design, with its corresponding ABI in the Linux application space [Debian Wiki]. The Igneous-Math system has been designed to rely on a Common Lisp implementation's own optimizations, in a portable regards.


Friday, October 24, 2014

"FIXME" - A Screen with Tangents

Emacs, SLIME IDE, and Sourcegear Diffmerge (platform: MS Windows 8.1 w/ Classic Start Menu)
This week, having completed a second semester of a formal college course, online -- a course focused, in much, on the electromagnetic properties of coils and the electrochemical qualities of dielectric materials, as well as the plain (?) geometry of AC circuit analysis, as if it was all so plain, but at least it is "planar," in a sense -- I've begun developing a model for mathematics in Common Lisp.

As a sidebar: this is not to compete with the Maxima Computer Algebra System (CAS), nor to compete with ACL2, the latter being essentially a theorem-proving system applicable in a domain of formal mathematics -- both of which are implemented in the Common Lisp programming language, perhaps a "feat" in itself, that.

I'm calling it igneous-math. Perhaps the name is not so much of the "thing," but that's what I prefer to name that source tree, as pictured. Igneous-Math, so far, implements yet another object model for modeling of measurements in Common Lisp. Presently, there's no support for derived measurements, and I've a few "FIXME" items to address with regards to the SI base unit of mass, the kilogram. Things that Igneous-Math does provide, thus far:
  • An object model for measurement domains and base measurements, extending of the Common Lisp Object System
  • An object model for decimal, i.e. "base 10," prefixes for measurements -- as represented of the Systeme Internationale -- contrasted to the binary, "base 2," prefixes applied to quantities of digital information.
  • An, I think, "novel" implementation of a decimal-shift system, developed in effect to "work around" some errors that occur in certain floating-point numeric implementations -- for instance, an error that may occur in a certain evaluation, in that
    (* 10 4.159265358979312d0)
    => 41.592653589793116d0
    ...and although
    (/ 41.592653589793116d0 10)
    => 4.159265358979312d0
    However, one may find it difficult to accept that in the first evaluation, a digit is introduced that was not present within the input value. In some calculations, the introduction of that digit may lead to a matter of inaccuracy, as well as the failure of a functional test, within a software system. (See the sketch after this list.)
  • ...and a whole lot of "FIXME" notes in the source code -- many "FIXME" notes, each annotating some momentary concept at the time it was added to the source code -- far too many "FIXME" notes to "track" with any single, instantaneous approach, but such as may be categorized within specific task domains, within the Igneous-RVE project.
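As a sketch of the workaround named in the list above -- purely illustrative, and simpler than the Igneous-Math code itself -- one may carry a value as an integer magnitude with a base-ten exponent, such that a multiplication by ten adjusts only the exponent, exactly, introducing no new digits:

;; 4.159265358979312 as a magnitude and a decimal exponent
(defparameter *x* (cons 4159265358979312 -15))

(defun scale-by-ten (value)
  ;; multiply by ten, exactly, by incrementing the decimal exponent
  (cons (car value) (1+ (cdr value))))

(scale-by-ten *x*)
=> (4159265358979312 . -14) ;; i.e. 41.59265358979312, no digit introduced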
One of the effective "FIXME" notes of the Igneous-Math project, at this time -- a "FIXME" note, though not expressly in the source code -- is that it needs a functional unit testing framework. Perhaps conveniently, there is a codebase for that; I call it AFFTA. In an interest of developing a policy for continuous systems deployment, I don't have any plans to publish the source code of Igneous-Math until AFFTA would be applied in the same -- effectively, then, becoming AFFTA's first real "usage case," subsequent to so many inline test forms implemented in comments within the AFFTA source code and, similarly, in the Igneous-Math source code. One has begun to reconsider one's views of the adage, "Release early, release often." Candidly, one might rather prefer, "Release rarely, almost never," but that's perhaps more a matter of culture, in a contemporary mode.
Not to begin a sidebar as if to contrast a view of "monolithic design" and a view of "agile design," but in a simple sense: a mountain is a monolithic thing; a mountain climber may take an agile course in proceeding up the side of a mountain. Agile approaches and monolithic things may not be mutually exclusive -- or how was the first mountain ever climbed, in human history? Certainly, one need not be a member of the 1% to climb a mountain. One would mind one's way up a mountain, nonetheless -- there being not so much of a safety net at the bottom of most steep, rocky facades, though there would be trees and so on, and one's footwear gripping steadily to the rocky face. Certainly, one would not fall to eternity if one simply stepped a little askew, in such a place.

Perhaps one has seen the rocky faces of any tall, old mountains, from any angle, and it was such a sight as one might not wish to forget anytime soon. In that regard, one recalls that a walking stick -- if one may ever find such a thing, of a broad woods -- may be applied not only as a vertical resource. The long surface of a walking stick can be applied horizontally, in creating a greater amount of friction between oneself and a rocky face -- in times when friction would be a very good thing, ere one would learn instead of the velocity of an object accelerating due to gravity on an inclined surface -- that object being oneself, on a rocky surface polished under long winds and snow. One treads lightly in such locales, and perhaps the natural wood walking sticks are just as well.

So, I too would rather be out climbing a mountain, right now, but there's not much of an industry in mountain climbing -- just as well. Regardless, there are applications for mathematics and physics, too, in the mountains. One does not inquire of lone things and sounds, in light of such. There's the rocky face, the walking stick, and the singular view from the peak.

Mount Shasta, with Cloud
...and then there's the next mountain, and so on. It could sound so simple to comment on, if one is not too burdened with load.

Speaking of load, there's a concept to draw this linear narrative back to the electronics domain -- load analysis, in electrical circuits. Sure, I'd make a convenient illustration right now, if it was only so easy to. Somehow, one is at pains to make a schematic, make a screenshot, crop the screenshot, and then post it to this web log, for what would then be a far sidebar in regards to electrical circuit analysis.

Not to launch into a whole dissertation, but if one may explain something of a sense of context, presently: there's a geometry to electrical circuit analysis. Personally, I'd not learned of that until the second course in the solid-state electronics track, at a certain online university, in this the 38th year of my life. Geometry has a real-world application -- not only in structural architectural design, but also in structured electrical circuit analysis, "all the small things."



Presently, though, I'll try to avoid launching into a childish sidebar, either. Certainly, Legos are simple enough to put together -- and circuits on breadboards perhaps a little less simple to put together -- but electrical circuits also work according to principles known in the material sciences, including the simplest principle of all: safety.

I can see, from my own writing here, that I'd rather be out writing about mountains.

There's something about math and Common Lisp, also -- not too much a sidebar for SSE architectures and the VOPs in SBCL.

Sunday, October 5, 2014

Notes - Towards developing a Circuit Modeling and Circuit Analysis Platform in Common Lisp

Though further delaying the weekend time I'd set aside for some student work, I would like to make a couple of quick notes, presently, with regards to developing a circuit design and analysis platform in Common Lisp. This is partly in reference to XCircuit, and partly in reference to the approach I'm trying to develop for such de rigueur analysis of AC circuits as I observe is represented in one course of DeVry University Online.

Things a circuit design platform may need:
  • A library of circuit components
    • XCircuit has such a thing organized in a unique regards.
    • Fritzing uses SVG
    • KiCAD - I'm unable to find any "simple" analog circuit components in its component library; however, there are clearly a lot of digital components in the KiCAD circuit component library
  • A circuit analysis model
    • Common: Derivatives of the original Berkeley SPICE
      • PSPICE - Commercially licensed by Cadence, as a part of the Cadence OrCAD platform. It's an extensible platform, however within its commercial and academic license models.
      • NGSPICE - free/open source software.
    • ...
  • A desktop environment
    • With either GTK, Qt, or even Microsoft Windows "widgets," a desktop environment would be assumed to be available, complementary to the respective GUI toolkit -- namely GNOME in any edition, KDE, or the Microsoft Windows Explorer shell.
    • The author is not sure what there may be towards such a matter, in the "state of the art" of developments in Common Lisp
      • Lisp Machines and "The state of the art" for desktop computing
        • cf. MIT CADR, Symbolics, XEROX, Texas Instruments, ...
        • cf. NeXT ...
'Fin'

Friday, October 3, 2014

Towards Development of a singular EDA Platform - A Student's Commentary

Sidebar of a Student's Lament

Though I'm not much a fan of chatting too lightly about it, this year I've been a student of an electronics and computing program at an online university, namely DeVry University Online (DVUO), in DVUO's associates program in the DVUO ECT department. It's a challenging program, I consider, but the challenge is not always only in the course material itself. Firstly, there's the matter of trying to be a student of an electronics and computing program at DVUO, with lab, while not living anywhere near a formal DVUO campus. I've received quite some stunning items, at that -- in one item, a nice shiny black box with all sorts of "bizarre" electronics things inside. It's nothing I could wish to carry around, for any purpose, but it's what DVUO sent to me -- one Knight Electronics ML-2010. I also own an oscilloscope, now. I can count, on two hands, how many times I've needed either of those expensive items of equipment, candidly. That I'm able to afford those items, on a scholarship, I'm grateful for that much. In regards to the educational model in practice, however, a scholarship does not divert my criticisms about the very expensive model of it. That's one thing, then. Being a student, there, is yet another thing.

The course textbook we're using -- to my point of view -- presents an entirely axiomatic, if not simply rote, approach to AC circuit analysis. From one point of view, it might seem simply practical -- the analysis methodology presented in the course's textbook, and the parallel methodology presented in the course's lectures -- but it is not in all ways a thorough methodology. In my experience, it's a very tedious methodology, and the book leaves a number of gaps, with difficulties. The book itself leaves some "gaps" about the analysis of certain combinations of {R,L,C} circuit elements within a series/parallel {R,L,C} circuit onto an AC power source.

Secondly, the course's lectures use a different notation than the book. The course's textbook uses polar notation throughout; the lectures use rectangular notation, for those complex calculations. It might seem that the calculations would "make more sense" when conducted only with formulas in rectangular notation -- but considering that I've been studying, this semester, a textbook using polar notation throughout, and though a corresponding formula may be produced in rectangular notation for any formula in polar notation, it's a wide gap in syntax and semantics between the two respective types of notation. I prefer the polar notation, but it's "just not working out" for me, for analyzing the circuits that the book does not present a methodological model for. It's with a sense of chagrin, then, that I begin to wonder whether I've studied with the wrong sort of notation. I'm not understanding the relations of voltage, current, and reactance, in magnitude and phase angle, for a couple of types of circuit -- if from a perspective of polar notation and corresponding calculations of geometric sums, parallel equivalent values, and arctangents of a specific quotient. Sure, the arctangents are easy to calculate in Common Lisp, and a radians-to-degrees conversion is easy enough likewise, but the whole polar notation is something I now elect to "take a break from," to be sure I can analyze the same circuits in a model using rectangular notation. I'm now in week 5 of an eight-week course, and it seems rather pressing to me.
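For instance -- as a minimal sketch, with polar-degrees as an illustrative name and the component values invented for the example -- Common Lisp's complex numbers make the rectangular-to-polar step quite direct, with abs producing the magnitude and phase the angle:

(defun polar-degrees (z)
  ;; magnitude and phase angle (in degrees) of a complex value
  (values (abs z)
          (* (phase z) #.(/ 180 (rationalize pi)))))

;; e.g. 100 ohms of resistance in series with +j50 ohms of reactance:
(polar-degrees #c(100.0d0 50.0d0))
=> 111.80339887498949d0, 26.565051177077986d0 ;; approximately; last digits may vary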

More broadly: I'm concerned that, for one thing, a pedantic iteration of the same formulas for the same types of circuit, ad infinitum, may not really help one to learn how to analyze circuits. It would, at least, serve to assist towards applying formulas, and perhaps understanding "some of" how those formulas relate to any exacting electronic, electromagnetic, dielectric, or more broadly electrochemical qualities of individual circuit elements, and circuit elements in parallel and in series.

I'm concerned, furthermore, about whether the course program itself has really equipped "we the students" to understand enough of the essential mathematics of electrical science, even insofar as the calculus. I'd studied some of the differential calculus and the integral calculus before college -- in "high school," namely -- but that was some years ago. I am not presently equipped to analyze a differential curve for change or variation in voltage or current within a circuit of an alternating current. When I read [Nahvi2014], I can see that there's more of math to the matter, and I may be able to ascertain some of what that math would be, if I understood the same disciplines of mathematics, presently. Perhaps it would not be a panacea, but it might help towards understanding the basic principles of electrical science, and any number of corresponding geometric models, in mathematics.

Pragmatically, there are a couple of types of {R,L,C} circuit that -- in all candor -- I simply don't know how to analyze, at present. Here's the full set of {R,L,C} circuits, then, in diagrams that I've made myself, towards a sort of reference sheet for manual circuit analysis. I'll denote that these graphics are public domain, as to how I'll license my own work, at that.

The Simple Series and Parallel {R,L,C} Circuits

Fig. 1: RLC Circuit
Fig. 2: R//L//C Circuit

The R(X//X) Circuit 

Fig. 3: R(L//C) Circuit


The X(R//X) Circuits

Fig. 4: L(R//C)
Fig. 5: C(R//L)


The R//LC Circuit

R//(LC)

The X//(RX) Circuits

C//(RL)
  
L//(RC) 

Between calculations of phase angle, net reactances, and net impedances, the axiomatic approach that I thought I'd been learning from the course's textbook fails me, in that the book does not provide a complete model for computing the impedance, in its magnitude and phase angle, for the following (see the sketch after this list):

  • C(R//L), i.e. capacitor in series to a parallel RL mesh
    • correspondingly, L(R//C)
  • L//(RC), i.e. inductor in parallel to a series RC branch
    • correspondingly, C//(RL)
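Those are, at least, computable with complex arithmetic -- as in the following minimal sketch, with z-series and z-parallel as names of my own choosing, and the component reactances invented for the example. Series impedances add, and parallel impedances combine as the reciprocal of the sum of reciprocals; with that much, C(R//L) is simply the capacitor's impedance in series with the parallel combination of R and L:

(defun z-series (&rest zs)
  ;; impedances in series simply add
  (reduce #'+ zs))

(defun z-parallel (&rest zs)
  ;; the reciprocal of the sum of the reciprocals
  (/ (reduce #'+ zs :key #'/)))

;; e.g. C(R//L) with R = 100 ohms, XL = +j50, XC = -j80:
(z-series #c(0 -80) (z-parallel 100 #c(0 50)))
=> #C(20 -40) ;; i.e. 20 - j40 ohms of net impedance

From there, the magnitude and phase angle follow as with the polar-degrees sketch, further above.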

The text lecture materials of the course illustrate a methodology divergent from the course's textbook, in that the textbook utilizes polar notation throughout the computations, whereas the lectures use rectangular notation, "this week." Of course, I understand that there's a mathematical correspondence between polar and rectangular notations; however, the semantic and syntactic differences -- between a view of the math in the two notations, respectively -- may seem like a pretty broad difference.


In the course of this process, I've learned a few additional things that I would wish to denote here, in short:

  • That MathML is not immediately well supported on the web.
    • There's MathML support in Firefox
    • There are web plugins
    • ...whether or not for a short "notes sheet"
    • ...or a full, public math and science portal
    • [Bookmarks]
  • That Google's Picasa tools can be useful for sharing graphics representations of electrical schematics
    • ...without their corresponding models, albeit, in any single hardware description language (HDL) such as VHDL, Verilog, or the perhaps less Fortran-like EDIF format
  • That a "single EDA platform" cannot be comprised only of a set of software tools, but must be accompanied by a corresponding methodological model, at least insofar as circuit analysis
    • Evidently, there are multiple methodologies that may be applied in circuit analysis.
    • I may tend to prefer a methodology utilizing Thevenin's and Norton's theorems, addressed to AC circuits [OMalley2011]
      • That's not being taught in the course, though, and it may take some time in study to "re-learn" the same material, from a perspective of that more principally extensional approach
      • It's apparently a viable methodology for analysis of AC circuits
      • ...referencing an additional textbook [OMalley2011], in just how that textbook is, itself
  • There's an EDA platform alternate to NI Multisim, namely Cadence OrCAD Capture [Info] [Download page]
  • In free/open source software (FOSS) platforms compliant with the Debian Free Software Guidelines (DFSG) there's also XCircuit
    • XCircuit is available on Microsoft Windows platforms in a stand-alone edition (with prerequisites) or as a Cygwin package (with the package manager handling the installation of the prerequisites, as a good package manager system would)
    • Of course, on Linux platforms, one could use the respective package manager to install XCircuit from the respective upstream software package repository, such as a Linux distribution may provide.

Presently, I'm thinking to re-study all of {R,L,C} circuit analysis, with a focus toward polar notation and Common Lisp, to make the semantics of it more manageable. Then, I could possibly "check my results" with XCircuit and NGSPICE -- assuming a certain reliability in NGSPICE, as about circuit modeling for conventional, analog {R,L,C} circuits.

I don't expect I'll be writing much about the results, immediately. Candidly, it's "a homework thing", too.

A Reference Bibliography onto Electrical Science and Correspondence in Mathematical Systems

Nahvi2014
Nahvi, Mahmood and Joseph A. Edminister. Schaum's Outline of Electric Circuits. 6th edition. McGraw-Hill (New York). 2014

Spiegel1971
Spiegel, Murray R. Schaum's Outline of Advanced Mathematics for Engineers and Scientists. McGraw-Hill (New York). 1971

Lipschutz1959
Lipschutz, Seymour, Dennis Spellman, and Murray R. Spiegel. Schaum's Outline of Vector Analysis. 2nd edition. McGraw-Hill (New York). 2009

Ayres2013
Ayres, Frank and Elliott Mendelson. Schaum's Outline of Calculus. 6th edition. McGraw-Hill (New York). 2013

OMalley2011
O'Malley, John. Schaum's Outline of Basic Circuit Analysis. 2nd edition. McGraw-Hill (New York). 2011