Saturday, October 25, 2014

Thoughts about: Electronics, Geometry, Common Lisp, and a Rational Pi within Common Lisp Mathematics

"Once upon a time," when I was a young person, my family had "Gifted me" with an electronics kit, such that I've later learned was developed by Elenco. In fact, my family had "Gifted me" with a number of electronics kits -- an old 60-in-1 kit, later the Elenco 200-in-1 kit, and then a kit somewhat resembling the Elenco Snap Circuits Kit, though the last of those was of a different design, as I recall -- something more designed for carrying an assembled circuit, really. My family had also gifted me with the opportunity to be wary of computer salespersons, but that's another thing.

In studying the manual of the Elenco 200-in-1 kit, specifically, I was able to learn a few things about the simple novelty of electrical circuits. The manual contains a broad range of circuits, spanning both solid-state and digital electronics. The circuit elements in the kit are all firmly fastened to a non-conductive base, each junction attached to a small spring, and they may be "wired together" with the set of jumper wires included in the kit, according to the numbered schematics in the manual. As I remember it today, I assembled the radio tuner circuit, the buzzer/oscillator, something involving the multi-segment LED, and, separately, something involving the step-up and step-down transformers. Of course, there's also a novel analog meter on the front of the kit's case.

Candidly, to my young point of view, the full manual of the Elenco 200-in-1 kit seemed a little overwhelming. I still remember the feeling that I was looking at a broad lot of material when browsing its pages, without being certain that I could understand what I was reading. Of course, I could follow the circuit assembly instructions -- it seemed to me much like the "snap together" airplane models of the hobbyist domain -- but I was not able to analyze the circuits themselves, nor to interpret much of anything about my experiences in studying the manual. Certainly, I was able to set a wire between spring 30 and spring 14, to push a button, and to watch an LED glow, but beyond the simple, mechanical qualities of such circuit assembly and testing, it didn't mean much to me, then. It was a little bewildering, too -- here was such a nice manual, and I was unable to study it beyond the simple circuit assembly instructions.

Candidly, it was the only electronics book I knew of, in all of my young views. At that time in my life, I was not at all cognizant of the relevance of digital circuits -- such as the digital logic gates built from transistorized circuits, printed in the last pages of the manual. Sure, though, that was only a few decades after Bell Labs had made their first discoveries about the semiconductive properties of germanium [Computer History Museum]. I was a kid in the 1980s, when the 32-bit processor was "new" and the Nintendo Entertainment System was the hottest thing on the games market. Somehow, I still mark the chronology of my life by those relative metrics.

At that time in my life, I was not aware of there being a digital domain in electrical circuits, let alone any concept of digital/analog interfaces. I had received the Elenco 200-in-1 kit as a gift before I had ever studied geometry in school. It was a very novel thing to me, but there wasn't much I could do to understand the material of the manual provided with it. Candidly, it gathered some dust and, with some regrets, was delivered to a landfill when I first left California. I wish I had held onto that item -- a reproduction would not be quite the same.

Certainly, the high school geometry class -- not being focused toward any specific practical or vocational concerns -- did not present any notes on applications of geometry in electrical circuit analysis, so far as I can recall of the primary material of the course. There was nothing about the rectangular coordinate plane, almost nothing about the polar coordinate plane, and not much about complex numbers, though certainly the course stayed its track along the standard curriculum.

Sure, though, the geometry class had explained the nature of the unit circle, the Pythagorean theorem, and the transcendental functions defined in the domain of trigonometry -- the sine, cosine, and tangent functions, and their functional inverses, asin, acos, and atan. Certainly, it had seemed novel to me that those functions relate to objects within a unit circle, but I could not conceive of any applications for such knowledge, beyond the no-doubt demanding domains of structural design -- physical architecture or automotive design, for instance.

So, it was not until the 38th year of my life that I would learn that those trigonometric transcendental functions may be applied within a methodology of electrical circuit analysis -- and that, conventionally, they are counted among the transcendental functions in mathematics. Without opening a discussion of imperfections in measurement systems, personally I think it's a significant thing that there are applications of geometry in electrical circuit analysis, as well as in the broader physics. Candidly, I can only narrowly avoid regressing to the mindset of a giddy child, on observing that there are yet more applications for trigonometry under the nearby sun.


This week, I've begun to define something of an extensional model for applications of mathematics in Common Lisp -- extending the set of mathematically relevant functions defined within the ANSI Common Lisp specification. Today, specifically, I've been developing a model for overloading mathematical operations in the Common Lisp Object System -- in a manner somewhat extending a Common Lisp implementation's existing optimizations with regard to numeric computation. While I was preparing the next day's set of notes for that "sprint", I learned of a couple of functions defined in ANSI Common Lisp that effectively serve to obsolete a page full of code that I had written and tested earlier this week -- and no doubt with further implementation-specific optimizations behind them, in implementations of the ANSI Common Lisp rational and rationalize functions. The code that I had written earlier this week implemented a decimal-shifted encoding for floating-point numbers -- in a sense, developing a model for decimal shift in measurement values, onto the base-ten or decimal measurement prefixes defined in the standard Système Internationale measurement system. Of course, there are also the base-two or binary prefixes conventionally applied to quantities of digital information.
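For a concrete sense of the distinction between those two functions: rational returns a rational number equal to the float's exact internal, binary value, while rationalize returns a simpler rational that converts back to the same float. The printed values below are what a typical implementation with IEEE single- and double-floats would report, so they may vary:

(rational 0.1)
=> 13421773/134217728

(rationalize 0.1)
=> 1/10

(rational pi)
=> 884279719003555/281474976710656

(rationalize pi)
=> 245850922/78256779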

The code that I had written earlier this week does not completely attain to a model for tracking the "significant digits" of input values -- not in the sense I had learned of as a student of the high school science courses I attended in the mid 1990s. Those courses had nothing to say about the floating-point numeric types defined in modern standards for computation -- nor about the sense of a mantissa defined in those specifications -- in a course that described the molar weight of helium.
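For what it's worth, ANSI Common Lisp does expose a float's mantissa and exponent directly, by way of decode-float and integer-decode-float. A brief illustration -- the three return values are shown on one line here, and the exact scaling of the significand may vary by implementation:

(integer-decode-float pi)
=> 7074237752028440, -51, 1

That is, the double-float pi is exactly 7074237752028440 * 2^-51 -- which reduces to 884279719003555/281474976710656, in agreement with (rational pi) above.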

On discovering the ANSI Common Lisp rational and rationalize functions, personally I feel at least a small bit of good cheer that ANSI X3J13 [Franz Inc] had seen fit to specify a set of functions that address some inevitable concerns in implementations of conventional models for floating-point arithmetic.

Personally, I think that this -- in a sense -- is the nicest line of code that I've ever written:
(rationalize pi)
=> 245850922/78256779
It follows, then, that one is also cheered to write:
(defun rad-to-deg (r)
  (* r #.(/ 180 (rationalize pi))))
=> RAD-TO-DEG

(rad-to-deg pi)
=> 180.0D0
So, clearly, it is possible to define an accurate method for transforming a measurement from a unit of radians onto a unit of degrees, without having to go to any gross extent to extend the ANSI Common Lisp specification.
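The inverse transformation follows the same pattern. A brief sketch -- deg-to-rad being merely my own name for it here:

(defun deg-to-rad (d)
  (* d #.(/ (rationalize pi) 180)))
=> DEG-TO-RAD

(deg-to-rad 90)
=> 122925461/78256779

(rad-to-deg (deg-to-rad 90))
=> 90

The round trip through rational arithmetic returns exactly 90, with no floating-point rounding involved.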

That is simply the happiest thing one has seen produced of a computer, of late -- though of course one does not wish to make a laurel of the discovery.

Of course, the simple transformation of pi radians to 180 degrees is not so different if one does not first rationalize pi in the equation. Consider, however, a converse instance: what of a transformation of pi/2 radians?
(/ (rationalize pi) 2)
=> 122925461/78256779
contrasted to:
(/ pi 2)
=> 1.5707963267948966D0
In the first instance of those two, the implementation's value for pi is first converted to a rational number, and the computation yields another rational number; the precision of the initial value of pi is neither lost nor folded. In the second instance, the application is simply carried along by the implementation's system for floating-point arithmetic -- the application developer, then, must accept any matters of floating-point rounding, and so on, as facts of the programmed environment. That may not be sufficient in all regards, though perhaps it is common as a matter of practice.
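In fact, the rational result of the first instance gives up nothing with respect to the second: converting it back to a double-float should recover exactly the value that the floating-point division produces, assuming an implementation with IEEE double-floats:

(float (/ (rationalize pi) 2) 1.0d0)
=> 1.5707963267948966D0

(= (float (/ (rationalize pi) 2) 1.0d0) (/ pi 2))
=> T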

Notably, in applying a rationalized pi value, the calculation of the number of degrees represented by pi/2 radians itself results in a rational number -- exactly 90.
(defconstant *rpi* (rationalize pi))
=> *RPI*

(defun rad-to-deg (r)
  (* r #.(/ 180 *rpi*)))
=> RAD-TO-DEG

(rad-to-deg (/ *rpi* 2))
=> 90
In simple regards: that's both mathematical precision and mathematical accuracy, represented in a conversion of pi/2 radians to 90 degrees.

I'm certain that it may be possible to emulate the CLHS rationalize function within other object-oriented programming languages -- such as Java, C++, Objective-C, and so on. One would like to think that, in such an instance, one may also emulate the Common Lisp numeric type ratio -- a subtype of rational, in Common Lisp, disjoint from the numeric type integer -- and then overload the respective numeric operations for the new ratio numeric type.
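For reference, the behavior being emulated would include at least the following -- the exact symbol returned by type-of is implementation-dependent, though RATIO is typical:

(type-of 1/3)
=> RATIO

(typep 1/3 'rational)
=> T

(typep 1/3 'integer)
=> NIL

(+ 1/3 2/3)
=> 1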

Personally, I would prefer to think of Common Lisp subsuming other programming languages, rather than of other programming languages simply emulating Common Lisp. I know that there can be "more than one" -- as far as programming languages in the industry, certainly. Personally, I think it may be just as well to address other programming languages from a Common Lisp baseline, if simply as a matter of at-worst hypothetical efficiency.

An ANSI Common Lisp implementation will already have defined the ratio numeric type, as well as the mathematical operations denoted in the CLHS Numbers Dictionary, onto the whole set of numeric types defined in ANSI Common Lisp. Implementations might also extend those functions and types with any number of implementation-specific extensions, such as may be defined for purposes of optimization within Common Lisp programs -- perhaps toward a metric of deterministic timing in mathematical procedures.

Sure, though, if one would endeavor to extend those numeric operations and types onto more types of mathematical object than those defined in ANSI Common Lisp -- as onto a domain of vector mathematics and scalar measurements, such as the Igneous-Math project seeks to address, in all the colloquial character of a source tree (yet unpublished) -- then it would be necessary to extend the mathematical operations themselves, and to define an object system for the type extensions. That's what I was working on developing today, before I noticed the simply sublime matter of rational and rationalize, as functions standardized in and of ANSI Common Lisp.
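As a rough illustration of the shape of such an extension -- a minimal sketch only, with hypothetical names that are not the Igneous-Math API -- one might define a generic operation that falls through to the implementation's optimized arithmetic for ordinary numbers, while admitting methods for new classes of mathematical object:

(defgeneric add (a b)
  (:documentation "Generic addition, extensible onto non-number types"))

;; Plain numbers fall through to the implementation's own CL:+
(defmethod add ((a number) (b number))
  (+ a b))

;; A hypothetical scalar measurement: a magnitude and a decimal exponent
(defclass scalar-measurement ()
  ((magnitude :initarg :magnitude :reader magnitude)
   (degree :initarg :degree :reader degree)))

;; Assuming both measurements share a base unit: scale to degree zero, then add
(defmethod add ((a scalar-measurement) (b scalar-measurement))
  (make-instance 'scalar-measurement
                 :degree 0
                 :magnitude (+ (* (magnitude a) (expt 10 (degree a)))
                               (* (magnitude b) (expt 10 (degree b))))))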

Presently, I'm not certain whether I should wish to altogether discard the simple decimal-shift code that I'd written earlier this week -- essentially, it was for developing a rational numeric equivalent of a floating-point representation of a numeric input value, such as pi -- but not only so. In that much alone, it might be redundant onto the ANSI CL rationalize function.

Also, it was for "packing" numbers into their least significant digits -- in a sense, deriving a value of magnitude and a corresponding decimal scale value, the latter essentially representing a decimal exponent -- with a thought toward developing some optimized mathematical operations onto rational numeric values throughout a mathematical system, seeking to retain something of a sense of mathematical accuracy and precision, if only because that would seem appropriate to one's understanding, as such.

The decimal shift code, as implemented in Igneous-Math, is nothing too complex in any sense -- in simple regards, the "decimal reduction" algorithm uses the Common Lisp truncate function, and correspondingly, the "decimal expansion" algorithm uses the Common Lisp expt function. It's been implemented as part of a system for encoding the Système Internationale decimal prefixes as Common Lisp objects -- a system essentially designed to ensure that, excepting such rational numeric measurement values as would be input as of the type ratio, other real numeric values in the system would be stored in a manner derived to a base measurement unit, before being applied within such conventional formulae as are defined around base measurement units -- while allowing for operations onto arbitrary exponents of the base measurement units, in any single measurement system.
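In outline, the idea is roughly as follows -- a sketch with names of my own choosing here, restricted to integer inputs for simplicity, and not the Igneous-Math source itself:

(defun decimal-reduce (n)
  "Return (values magnitude scale) such that N = magnitude * 10^scale,
with MAGNITUDE carrying no trailing decimal zeros"
  (if (zerop n)
      (values 0 0)
      (loop with m = n
            with scale = 0
            do (multiple-value-bind (quotient remainder) (truncate m 10)
                 (if (zerop remainder)
                     (setf m quotient
                           scale (1+ scale))
                     (return (values m scale)))))))
=> DECIMAL-REDUCE

(defun decimal-expand (magnitude scale)
  "Inverse of DECIMAL-REDUCE"
  (* magnitude (expt 10 scale)))
=> DECIMAL-EXPAND

(decimal-reduce 1500)
=> 15, 2

(decimal-expand 15 2)
=> 1500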

While designing so much of those features of the Igneous-Math system, candidly I was not then aware of the simple existence of the ANSI CL rationalize function, as a feature available in any ANSI Common Lisp implementation. With some chagrin, I wonder whether I would have designed the decimal-shift code any differently, had I been aware of that feature of ANSI Common Lisp at any time earlier this week.

I think that, henceforward, I'll have to give some consideration to whether the rationalize function should be applied throughout the Igneous-Math system. One matter that I would wish to emphasize about the design of the Igneous-Math system: a number's printed representation need not be EQ to the number's representation in object storage, within a numeric system. So long as the printed value and the stored value are at least mathematically equivalent -- onto the ANSI Common Lisp '=' function, broadly -- then certainly the system would have achieved its necessary requirements for storage and printing of numeric values.
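That distinction is easy to illustrate at the REPL -- the integer 90 and the double-float 90.0D0 are not the same object, nor even of the same type, but they satisfy '=':

(eql 90 90.0D0)
=> NIL

(= 90 90.0D0)
=> T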

Moreover, a mathematical system may endeavor to apply the rationalize function onto all numeric values input to the system, before applying those values within any mathematical operations -- such a feature would be developed as an extension of ANSI Common Lisp, also. It might seem redundant or simply frivolous, perhaps, but personally I think it's just as well.
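Such a policy might amount to little more than a small canonicalization step at the system's boundary -- a sketch, under a hypothetical name:

(defun canonicalize-input (x)
  "If X is a float, return the simplest rational that converts back to
the same float; otherwise return X unchanged"
  (if (floatp x)
      (rationalize x)
      x))
=> CANONICALIZE-INPUT

(canonicalize-input pi)
=> 245850922/78256779

(canonicalize-input 1/3)
=> 1/3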

Furthermore, I think it's swell to encode any floating-point number as a set of two fixnum type numbers -- assuming that the instances would be few in which a floating-point number would be expanded to a bignum value on a decimal scale -- and it is only a matter of the nature of an application system that one would scale a floating-point number onto base ten, instead of onto base two -- perhaps a matter of developer convenience, in developing a software program for applications of mathematics within a 64-bit Common Lisp implementation.

While beginning to develop a set of overloaded math procedures in the Igneous-Math system, I've begun to wonder how much work would be required to optimize an individual mathematical operation onto a microprocessor's own specific hardware -- such as with regard to the Intel SSE extensions [Wikipedia] and the ARM architecture's hard-float design, with its corresponding ABI in the Linux application space [Debian Wiki]. For now, the Igneous-Math system has been designed to rely on a Common Lisp implementation's own optimizations, in a portable regard.

