
Wednesday, June 17, 2015

Chromebooks - Utility and Adaptations


  • Availability at Amazon.com
  • Ultrathin laptops
    • Small form factor
    • Limited power consumption
    • Lightweight computing
  • ARM platforms available 
  • Stocked with Chromium OS
    • Chromium OS development
    • Web-oriented architecture
      • Google Chrome browser
      • Google Chrome app store
      • Google docs, etc
      • Dissimilar to a conventional desktop environment
    • Chromium OS is locked to the platform
    • Alternatives to Chromium OS?
      • With some modifications to the platform, a complete OS installation can be made
        • ...in chroot (e.g. with Crouton)
        • ...in dual boot (e.g. with ChrUbuntu)
        • ...or "bare metal" 
          • Bare metal installation with original BIOS
            • 'Developer mode' still enabled 
            • OS can be wiped inadvertently
            • Legacy BIOS must be accessed for each boot
          • Bare metal installation with custom BIOS
            • Flashing the Chromebook BIOS - non-trivial
            • Hardware modifications then required
              • Vendor-specific
              • Proceed at own discretion
            • Allows for further flexibility in applying the Chromebook as a hardware platform
      • Documentation: Installing Bodhi Linux on a Chromebook [Bodhi Linux wiki]
        • Option 1: Install for dual boot [Bodhi Linux Wiki]
        • Option 2: Bare metal install with upstream BIOS [Bodhi Linux Wiki]
        • Option 3: Bare metal install with custom BIOS
          • Not documented at Bodhi Linux Wiki
          • Hardware modifications prior to flashing the BIOS
            • Proceed at own discretion
      • Documentation: Coreboot and Chromebook platforms
      • Documentation: flashrom utility
      • Documentation: Developer information, Chromebook hardware
      • The "Bricked Chromebook" - proceed at own discretion
  • Case Study: Samsung Chromebook (Exynos 5250)
  • Further resources: Custom firmware images for Chromebooks, pre-compiled
  • Further wrench-turning may be required.

Wednesday, January 28, 2015

Towards LabVIEW and GCC, or not? Introduction to the eSym Project

Considering the availability of LVH LINX -- originally, LIFA -- as an extension module for programming Arduino and other microcontroller boards from the National Instruments LabVIEW platform, my first thought is to wonder what defines the toolchain in LIFA.

Comparatively, the NI LabVIEW Embedded Module for ARM Microcontrollers extension (footnote: device support) uses the commercially licensed Keil uVision compiler. Personally, as a student of DeVry University Online, I've received a one-year license for uVision, complimentary with the LabVIEW 2010 and LabVIEW for ARM bundle that we're using for some very simple device programming projects, in a few of the courses. That license will expire in roughly one year's time. If I am to begin programming for embedded platforms with LabVIEW, I would like to be sure that there would be some longevity to the toolkit I would be using.

I wonder: does LINX use GCC, alternately?

In a rough summary of procedures with the LabVIEW programming kit for ARM: when a LabVIEW project is compiled with that kit, clearly LabVIEW performs a translation into C code, then compiles that to machine code, then programs the device with the resulting application. Certainly, something similar could be done with GCC and other free/open source software (FOSS) tools.

Concerning the hypothetical GCC-based procedure for application building and device programming, the "FOSS alternative" could simply apply GNU Make, or even Gradle, all within a conventional FOSS toolchain. The same toolchain could use uClibc as an alternative to GNU libc, perhaps minimizing the space requirements for applications compiled with the toolchain. Insofar as that much of the toolchain, itself, would be composed of free/open source software components -- those, all compatible with the Debian Free Software Guidelines (DFSG) -- it might serve to provide something of an available platform for programming with LabVIEW. Of course, LabVIEW itself is a commercially licensed product.

What would be the incentive for developing a free/open source data flow programming language, alternately? Without making a "too lengthy" dissertation in this article, the hypothetical FOSS data flow programming language could serve to provide a convenient, desktop interface for data-oriented programming -- in that much, perhaps very similar to LabVIEW -- without the commercial overhead of a commercially licensed product such as LabVIEW.

Would it be enough: Simply to climb the mountain because it is there?

What additional features could be provided in a data flow programming language and toolchain licensed as FOSS -- beside any features implicitly available of a FOSS licensing model, such as immediate availability and low commercial overhead? Some thoughts, on matters that may be relatively simple to set as goals, at least:
  • If the hypothetical data flow programming toolchain would be developed on a platform supporting ARM microcontrollers as well as Intel microcontrollers, and if that platform would be functionally applicable in a Linux operating system, then -- in short terms -- it could as easily be ported to the Android platform.
  • If a data flow programming language -- as a feature of the hypothetical data flow programming toolchain -- could be implemented with a corresponding metamodel onto the Meta-Object Facility (MOF), then a corresponding model transformation framework could be developed. The model transformation framework could implement QVT, as for a purpose of transforming a program model -- such as would be developed in the same data flow programming language -- into a program model in any single language supported with the corresponding QVT transform.
  • If the data flow programming model would be implemented with a binding onto Common Lisp (a minimal CLOS sketch follows this list):
    • A corresponding visual model could be implemented with CLIM, for visual representation of elements in the data flow programming model
    • Insofar as the data flow programming model may be implemented towards deterministic timing, for programs as would be produced in any single compiler toolchain, it would serve to provide an opportunity for developing a number of ideal "use case" scenarios, towards models for deterministic timing in developing Common Lisp software for realtime systems.
    • In utilizing the Common Lisp Object System (CLOS), the Common Lisp binding for the data flow programming model could be developed not only to extend CLIM, but also -- in another facet -- to extend a hypothetical implementation of MOF in Common Lisp, in implementing the corresponding program metamodel
      • If the data flow programming model would be implemented consistently as an expression of the hypothetical data flow programming metamodel, then any programs developed in that programming model could be easily serialized for storage, using the XMI binding for MOF.
      • If accompanied with a corresponding QVT implementation -- vis-à-vis QVTo -- any model developed under the programming metamodel, ideally, could be transformed into any single target platform programming language, so long as a QVT transform would be implemented for the latter.
  • With sufficient application of magic beans, it could even become a glorious beanstalk.
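
As a purely illustrative sketch of the Common Lisp binding denoted in the list above: the following CLOS classes outline one possible shape for nodes, ports, and connections in a data flow graph. Every name here -- node, port, connection, evaluate-node -- is a hypothetical assumption, and none of this is published eSym code.

(defclass port ()
  ;; one endpoint for a data value, owned by a node
  ((name :initarg :name :reader port-name)
   (node :initarg :node :accessor port-node)))

(defclass input-port (port) ())
(defclass output-port (port) ())

(defclass node ()
  ;; one operation in the data flow graph
  ((inputs :initarg :inputs :initform nil :accessor node-inputs)
   (outputs :initarg :outputs :initform nil :accessor node-outputs)))

(defclass connection ()
  ;; a directed edge, from an output-port to an input-port
  ((source :initarg :source :reader connection-source)
   (sink :initarg :sink :reader connection-sink)))

(defgeneric evaluate-node (node input-values)
  (:documentation "Compute and return NODE's output values from INPUT-VALUES."))

A CLIM view, as denoted above, could then present each node instance via a corresponding presentation type, without the graph model itself depending onto CLIM.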

Sometime around when the author of this article was first beginning an acquaintance with LabVIEW, the eSym project was created. The main driving intention in the eSym project was to develop a data flow programming model in Common Lisp. 

At this time, the codebase for the eSym project is in no normative state. The project has been stalled around some orthogonal design issues, such as developing a modal locking model for thread-safe slot value access onto the Common Lisp Object System -- most likely, applying cl-async via green-threads, in an extension onto the Common Lisp Metaobject Protocol (MOP). Furthermore, the author's own attention has been directed towards the possibility of developing a Common Lisp programming framework onto the Android platform -- such as may need a substantial integration between Android Studio and/or the Android Developer Tools (ADT) and any single Common Lisp implementation supported on the ARM platform, vis-à-vis SBCL and CCL, as Common Lisp implementations compiling to machine code, as well as ABCL, which compiles to Java bytecode and might therefore integrate more easily with the Android programming environment, overall.

The prospective "Common Lisp on Android" project would represent a substantial divergence, orthogonal to further development of the eSym source tree.

Sunday, January 11, 2015

In a View of a Software Project as a Tree-Like Structure: Towards Memory Locking and Memory Access Control in Common Lisp Programs

On beginning a study of National Instruments' LabVIEW programming platform, last year -- that, as was contingent on being a student of a sort of vocationally-focused introductory program in computing, networking, and the electrical sciences, a program hosted by a certain online university -- the author of this article began studying a broader concept of data flow programming. Contingent with that study -- not as if to trump up any competition towards National Instruments (NI), but rather towards a broader academic study of data flow programming -- the author has been developing an expression of a concept of data flow programming, in Common Lisp.

The author would gladly make reference to the source tree for that item, here, but it may not be in any useful state for applications, presently. The author has been referring to the project as eSym -- as to denote a concept combining symbolic logic with a metaphor onto electrical current flow.


While developing eSym, the author has discovered a number of complementary concerns that may be addressed in a Common Lisp program. Those concerns may be addressed before eSym would be any further developed.

The following article presents an outline of those initial concerns, then proceeds to develop some concepts with regards to a broader context in developing software programs in Common Lisp.

  • An extension may be defined onto the Common Lisp Object System (CLOS) -- at the least, in implementations applying the Metaobject Protocol (MOP) -- such that the extension would allow for a thread-local modal locking procedure -- and more specifically, a read/write locking procedure -- onto slot values of a Common Lisp standard object.
    • This, in turn, may be implemented onto a portable interface for modal locking, such that may be developed directly onto Bordeaux Threads (BT), in a platform-agnostic model
      • In parallel to developing a modal locking layer onto Bordeaux Threads, an additional portability interface may be developed onto the various implementation-specific timer interfaces
        • ...such that may then be applied in a revised `BT:WITH-LOCK` allowing a blocking keyword argument -- the behaviors of which would be fully described in the documentation, but in summary of which (see the sketch following this outline):
          • blocking t : block indefinitely to acquire the lock
          • blocking nil : do not block. Rather than returning nil, signal an error of type lock-held if the lock is held, such that the calling function could handle it appropriately without storing a return value from the failed lock acquisition request
          • blocking {integer} : block for the duration specified by {integer}, and on timeout, signal an error of type timeout-exceeded
        • ...and that may then also be applied in a new `WITH-MODAL-LOCK` macro, as well as an underlying `ACQUIRE-MODAL-LOCK` function
        • Alternate to Bordeaux Threads:
          • Threading in cl-async [github]
            • Extended with a BT-like interface in Green Threads
            • CPS: Continuation Passing Style [multscheme]
            • In operating systems applying a pthreads threading model, this may provide an interface onto an underlying pthreads implementation, as via cl-libevent2 and, transitively, libevent
            • libevent provides a platform-agnostic model for development of asynchronous programs onto C
            • As far as a portable timers implementation in recent revisions of cl-async: see also delay / with-delay (?)
  • The data flow programming language, in its abstract graphical representation, could be implemented as an extension of the abstract graphical syntax and the underlying semantics of the Systems Modeling Language (SysML) ... SysML being implemented with a metamodel extending the meta-metamodel defined of the Meta-Object Facility (MOF).
    • Alternately, a SysML view for the data flow programming language's data flow graphs could be added later.
    • This may build, indirectly, on de.setf.xml -- with de.setf.xml providing an interface for parsing the XMI schema, and the XMI schema then serving to present a structured interface for processing and I/O onto metamodels serialized onto the standard MOF/XMI binding, with any number of vendor-specific conventions in the same.
    • An implementation of the Object Constraint Language (OCL) should be developed as part of the same -- such as may apply the ATN parser applied in de.setf.xml -- the parser implementation in the OCL implementation therefore remaining aligned with the XML implementation
  • This data flow programming model may, of course, be implemented with extensions onto McCLIM, for managing interactions with a data flow programming graph, as via a CLIM application frame.
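
Returning to the locking interface denoted at the top of this outline: a minimal sketch of the blocking conventions for lock acquisition, assuming only that Bordeaux Threads is loaded and applying the classic BT:ACQUIRE-LOCK with its optional wait-p argument. The condition types, the name acquire-lock*, and the polling-based timeout are all illustrative assumptions, and the read/write "modal" aspect is elided here.

(define-condition lock-held (error)
  ((lock :initarg :lock :reader lock-held-lock)))

(define-condition timeout-exceeded (error)
  ((lock :initarg :lock :reader timeout-exceeded-lock)))

(defun acquire-lock* (lock &key (blocking t))
  "Acquire LOCK. BLOCKING t blocks indefinitely; BLOCKING nil signals
LOCK-HELD rather than returning nil; an integer blocks for that many
seconds before signaling TIMEOUT-EXCEEDED."
  (etypecase blocking
    ((eql t) (bt:acquire-lock lock t))
    (null (or (bt:acquire-lock lock nil)
              (error 'lock-held :lock lock)))
    (integer
     ;; Naive, portable timeout: poll with a short sleep. A production
     ;; version would apply an implementation-specific timed wait.
     (let ((deadline (+ (get-internal-real-time)
                        (* blocking internal-time-units-per-second))))
       (loop (when (bt:acquire-lock lock nil)
               (return t))
             (when (> (get-internal-real-time) deadline)
               (error 'timeout-exceeded :lock lock))
             (sleep 0.01))))))

(defmacro with-modal-lock ((lock &key (blocking t)) &body body)
  (let ((l (gensym "LOCK")))
    `(let ((,l ,lock))
       (acquire-lock* ,l :blocking ,blocking)
       (unwind-protect (progn ,@body)
         (bt:release-lock ,l)))))
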
In parallel to the matter of thread-local slot value locking: While I was developing some notes with regards to a lock ticket caching framework -- and a more generic object buffering framework -- as to address a question of how a program may provide an interface for controlling access to the slots of a buffered object, such that the buffered object would be simultaneously accessible in a buffer cache and, from there, accessible in any number of threads of a program, as well as being accessible broadly in the data space of the containing process -- I had developed a question as to how and whether it may be possible to control access to a region of memory. In regards to how that may be approached in software programs developed on the Linux kernel, my question -- as such -- was promptly answered, on discovering the shmget(2) and shmctl(2) manual pages, as well as the useful shm_overview(7), which provides an overview of an independent SHM interface providing file descriptors for shared memory segments. I would denote some additional caveats to this matter, as well:
  • The  shmctl(2) manual page, specifically, describes some of the structural qualities of the following structure types in the Linux programming environment:
    • The C structure type shmid_ds 
      • Application: This type appears to be used for providing information about the Kernel's implementation of the SHM interface. For instance, there are some structure slots for recording of times and process IDs (PIDs), within the shmid_ds structure type
      • The shm_perm slot provides further structure, as denoted in the following
    • The C structure type ipc_perm
      • This type uses the key value as applied also with shmget(2)
      • This type also provides a record for UID and GID values as well as flags for access permissions defined onto a shared memory segment
    • Consequent to a further study of the shmctl(2) manual page, it may seem that the shmctl() function is defined for purposes of reflection onto system-wide records with regards to shared memory allocation
    • An orthogonal question: Onto what platforms other than Linux is the Linux SHM interface considered portable?
      • BSD?
      • QNX?
  • It's not the same as mlock(2)
    • Application: Ideally, there may be a compatibility interface defined between values used in shmget(2) and similar SHM functions, and values used in mlock(2). Such a compatibility interface may or may not exist, presently, within the body of existing work in free/open source software and systems
    • mlock(2) may be applied in at least two manners of usage case (see the sketch following this list)
      • To prevent a memory segment from being paged to disk, as to ensure that data in the memory segment will not be recorded in any permanent medium, namely towards applications in data security
      • To ensure that a segment of memory will not be paged to disk, as towards ensuring a deterministic timing within the calling program, for access onto the same segment of memory -- as towards applications in a realtime operating system (RTOS) environment, such as for embedded computing. As a further reference, to that effect: sched_setscheduler(2)
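
As regards mlock(2) specifically, a minimal sketch of a Common Lisp binding via CFFI, on Linux, assuming CFFI is loaded: the Lisp-side names (%mlock, %munlock, with-mlocked-buffer) are hypothetical; only the C functions and their signatures are taken from the manual pages. Note that this locks foreign memory, not the Lisp heap -- locking Lisp objects themselves would require the implementation-specific work discussed below.

(cffi:defcfun ("mlock" %mlock) :int
  (addr :pointer)
  (len :unsigned-long)) ; size_t; :unsigned-long suffices on LP64 Linux

(cffi:defcfun ("munlock" %munlock) :int
  (addr :pointer)
  (len :unsigned-long))

(defmacro with-mlocked-buffer ((var size) &body body)
  "Allocate SIZE bytes of foreign memory, mlock(2) it for the extent of
BODY, then munlock(2) it. Linux rounds the address down to a page
boundary, so the buffer need not be page-aligned."
  (let ((n (gensym "SIZE")))
    `(let ((,n ,size))
       (cffi:with-foreign-object (,var :unsigned-char ,n)
         (unless (zerop (%mlock ,var ,n))
           (error "mlock(2) failed; check RLIMIT_MEMLOCK"))
         (unwind-protect (progn ,@body)
           (%munlock ,var ,n))))))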

This outline, of course, does not present a full interface onto SHM in Linux, for any single Common Lisp implementation. The previous outline may serve to develop some further knowledge, towards some questions with regards to memory management within any single Common Lisp implementation, vis-à-vis:
  • One "first question", as towards the origins of the previous outline -- more broadly, as towards preventing unwanted access to regions in a program's data space -- in that regard, towards a design for data security within Common Lisp programs.
    • The question, in short: How can an interface be developed for controlling access to regions of memory within a program's data space, in a Common Lisp program on any single operating system platform?
    • Nothing to compete with the Java programming model, certainly, though Java also defines some controls for data access, at least insofar as with regards to definitions of Java fields and Java methods in Java classes. The author is not presently aware of any similar SHM interfaces in Java.
  • A second question, as contingent around the matter of RTOS applications: 
    • The question, in short: How can memory be locked, within a Common Lisp program -- whether permanently or temporarily -- as to not be shifted in the data space, within garbage collection processes?
      • This is orthogonal to the question: How can memory be locked, within a Common Lisp program, as to not be paged out to the system's page area?
    • The "second question" is notably a naive question, as it demonstrates some assumptions as to what side effects a garbage collection (GC) process may produce within a Common Lisp program. Namely, there is an assumption that an object's address may change within the duration of a Common Lisp program -- assuming that an object in a Common Lisp program may not be left where originally created within a program's data space, after completion of any single GC iteration.
    • Of course there would be some orthogonal concerns, and perhaps some orthogonal design concepts, for instance:
      • Is it possible to define a specific region of memory as being locked with mlock(2), and such that that region of memory may be used for allocating multiple Lisp objects? 
        • May that serve to provide a methodology for implementing a non-paged memory I/O region within a Common Lisp program?
        • May it serve to allow for a tunable efficiency in applications, such that mlock(2) may not need to be applied at every time when a new Lisp object is allocated? 
      • If a shared memory control interface would be implemented, in parallel to this feature, of course the shared memory control interface may produce some side effects as with regards to this feature's implementation -- for instance, in that an access-controlled region of memory may be effectively contained within the mlock'd memory region.
      • There may be some additional concerns addressed, as with regards to manual memory management in Common Lisp programs, for instance:
        • That if there may be an application in which one cannot simply cons and forget and leave the garbage collector to clean up the memory space for unused objects
        • ...then as well as the implicit memory allocation of creating a new Lisp object
        • ...a Lisp program may provide an interface for manual memory deallocation, thus towards both
          • obviating the burdens for the garbage collector and also 
          • allowing for a deterministic procedure chain, insofar as for memory management within a Common Lisp program
        • This can be implemented with a requirement for multithreading.
          • An explicit memory deallocation thread may be implemented
            • ...as to be coordinated in scheduling alongside the garbage collector
            • ...such that it would be accompanied with rigorous definitions of memory deallocation forms, for ensuring that objects created for "one time use" within a single lexical environment will be explicitly marked as to deallocate when control exits from within the same lexical environment.
            • ...though that would require a storage of a separate "to-deallocate table" essentially separate to reference-counted memory -- lazy references? -- and a methodology for iterating across the same table, perhaps alternate to a methodology of scanning Lisp objects recursively for count of object references and recursively deallocating those denoted only as "not referenced".
            • ...and may be applied with a functional interface, parallel to an implementation's gc function, rather such as foo-mem:prune (see the hypothetical sketch following this outline)
            • ...such that an application applying the memory prune function may exist in parallel to applications not doing such explicit memory deallocation.
            • ...and should be accompanied with a conditions model, such as for when foo-mem:prune would be applied on any object that would contain, as an n-ary referenced object, any object known as one that cannot be pruned, for instance any definition of a class defined in the cl package
            • ...and must also be accompanied with a lengthy description of how foo-mem:prune would function on individual objects to be pruned
              • Note: Reference vs definition -- for instance, that a reference to a string is not a string object itself
            • Essentially, this may serve to develop a concept: Memory management for objects referenced within only a single lexical environment
            • Ideally, this should be implemented as to not entail any overhead for "Reference counting".
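
A purely hypothetical sketch of that foo-mem:prune interface, in outline of the conditions model denoted above: nothing here corresponds to an existing implementation, and a real version would require allocator support from the host Common Lisp implementation.

(define-condition cannot-prune (error)
  ((object :initarg :object :reader cannot-prune-object))
  (:report (lambda (c s)
             (format s "Cannot explicitly deallocate ~S"
                     (cannot-prune-object c)))))

(defgeneric prune (object)
  (:documentation
   "Explicitly deallocate OBJECT, or signal CANNOT-PRUNE if OBJECT must
remain live."))

(defmethod prune ((object t))
  ;; Absent implementation-specific allocator support, the default
  ;; method is a no-op placeholder.
  nil)

(defmethod prune ((object class))
  ;; Class metaobjects -- e.g. any class defined in the CL package --
  ;; must never be deallocated.
  (error 'cannot-prune :object object))

(defmacro with-pruned-object ((var init-form) &body body)
  "Bind VAR to INIT-FORM for the extent of BODY, marking the object for
explicit deallocation when control exits this lexical environment."
  `(let ((,var ,init-form))
     (unwind-protect (progn ,@body)
       (prune ,var))))
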
Certainly, a Common Lisp interface onto the SHM features of the Linux kernel may be implemented as a layer onto an interface onto mlock(2) -- and so, the development of the mlock(2) interface could effectively precede the development of the SHM memory control interface. Also clearly, such extensions would need to be implemented in an implementation-specific manner, as onto the memory management features of any single Common Lisp implementation. This, then, would certainly require much of a detailed study onto those same features, and perhaps an application of the C programming language.

Some further notes in documentation should be forthcoming, to that effect, as may be published in this web log, henceforward. The author proposes to limit the study onto two Common Lisp implementations specifically -- as denoted later in this article -- at such future time as when it may be immediately feasible to address the following procedural outline into an implementation:
  1. To begin such a study of memory management, on those platforms.
    • Ostensibly, to develop a series of SysML models for describing the conventions applied in memory management on those platforms
      • UML class diagrams for illustrating structures and relations of explicit and inferred data types (e.g. VOPs in CMUCL and SBCL)
      • SysML block diagrams for developing a block/port-oriented view of memory and memory management in the respective Common Lisp implementation
      • Onto CCL, block diagrams illustrating qualities of the FFI interface
      • State charts in SysML for describing memory management in terms of finite state automata
  2. To develop an interface for an mlock(2) locked region of memory in a program's data space
  3. Implicitly: To develop a Common Lisp interface onto an operating system's underlying authentication and authorization (A2) model
    • To furthermore extend that interface onto the A2 procedures of Kerberos 
  4. To develop an interface for SHM-controlled access to memory within a program's data space
    • To develop such condition types, debugger features, and other features onto the Common Lisp semantics, as may serve to simplify the programmer's view of the memory access control forms
If the reader may wish to inquire as to why those two platforms are specified, in this article:
  • Both of those platforms may be applied on an ARM microprocessor architecture
    • ARM, in turn, is applied within a number of embedded computing platforms, including mobile phones
  • Although there are additional Common Lisp implementations available for the ARM architecture -- including ECL -- it may be feasible to limit the study to, essentially: 
    • One implementation of or deriving from CMUCL, as with SBCL at the original "Point of fork"
      • This should be approached with an observation about signals, interrupts, and foreign threads in SBCL -- as denoted as being problematic for an interface onto a Java Virtual Machine, at the web site for CL+J 
      • This should be approached, also, with an observation as with regards to the newer thruption model in SBCL, and related features defined in recent editions of SBCL
    • One implementation deriving its implementation more broadly from LibC, as with CCL and its own respective foreign functions interface
      • This may be approached with consideration: 
        • That the Linux Kernel is implemented in C
        • That the fundamental interfaces to the Linux Kernel are provided as for C programs
        • That the FFI model developed in CCL may be applied in a prototype fork, for and towards applying such extended memory management procedures as denoted in the previous outlines in this article
Of course, it would be impractical to assume as if an extension could be addressed instantaneously onto two separate Common Lisp implementations, if only a single developer is developing the same extension. At this point in the article, the author arrives at a point of decision as to which Common Lisp implementation the author would wish to focus on, first, if to develop such an extension. That question, then, may be pushed further down the effective stack, with a couple of corresponding questions:
  • How may a mobile application development model be developed in free/open source software, for supporting development of mobile applications and desktop applications with Common Lisp?
    • How may CLIM be applied within such a model for mobile application development in a free/open source regards?
    • How may that be extended, furthermore, in applications of "new" embedded computing models, such as may apply the BeagleBone Black platform, Gumstix, or any other single-board computing model capable of running a Linux operating system?
  • How easily can CCL or SBCL and McCLIM be installed onto an Android device, for purpose of development testing?
    • How easy would it be to use the CLIM REPL, in such an installation?
    • How many separate concerns may be addressed onto CLIM, as for design of mobile human computer interfaces -- in whatever regards, if as an alternative to conventions developed of popular desktop computing interfaces? 
      • How and how well may CLIM be integrated with an on-screen keyboard, as of a tablet PC or a notebook with touchscreen?
      • Would it not be useful to include a modeline -- even if it would be a 'hidden modeline' of a kind -- onto every single window created, in such an application of CLIM? Then how would the 'meta' key be implemented, for key input to such a modeline?
  • How can a commercial interface be developed, onto such work, and it not be limiting towards the free open source nature of the software that would be applied in such work?
    • Sidebar: "Free/open source" does not imply "Insecure"
      • A deterministic model will not be illustrated for that thesis, in this article
    • Sidebar: Incentives?
      • Data-secure computing for small/medium enterprise - referencing strongSwan, Kerberos, and JacORB as providing support for Kerberos and SSL onto CORBA
      • Games programming for mobile platforms, in Common Lisp? 
        • Can we make it look more like Half Life? Should we?
        • What about music?
          • What about folk music?
        • What about educational applications developed albeit with a novel game-like interface but lending towards any actual, fundamental concepts in mathematics, if not also towards the material sciences?
          • What wonders may await, in the literary works of Charles Babbage?
          • What wonders, in a view providing of an implicit sense of gender neutrality, if not a broader sense of cultural neutrality, with regards to mathematics and science? barring any discussions lending immediately to a maturity of nature.
      • Furtherance of concepts in the state of the art?
        • IDE development in the heterogeneous systems programming environment
          • Towards implications for the software development environments in applications of virtualization and software-defined networking (SDN)
          • "Because CORBA"
      • Beginnings of discussion with regards to community support, if not for further commercial support for Common Lisp applications on mobile platforms?
        •  "There's the kicker"
        • Existing services for issue support may include:
        • May entail some platform-specific focus about Linux
          • The following mobile platforms implement Linux:
          • Focusing on the Android platform, specifically, there may be an interface available for bug tracking onto any single Android app store (TBD)
        • Questions with regards to tools
          • A hypothetical model for capturing results of an application error: fork, save diagnostic data -- optionally, save the entire Lisp image -- and debug on "own hardware"
            • Revisions for "other users": Do not save session-local data within diagnostics; do not save the Lisp image; ensure full authorization and authentication with the upstream issue tracking agency, in network transactions transmitting diagnostic information
          • Debugger hook : How to apply that usefully on a mobile platform?
            • Usage Case - Developer's mobile platform
              • Context: Prototyping and testing
              • Context: Unexpected situation in program
            • Usage Case - User's mobile platform
              • Context: Regular application procedures
              • Context: Unexpected situation in program
          • Debugger hook : How to apply that usefully, for server diagnostics?
          • Data models : How to "flag" session-local data values, so as to prevent their printing or storage onto media outside of environment of a running Common Lisp program?
          • Systems models : How to securely and conveniently (?) encrypt a saved Lisp image?
            • Further implications onto design and development of digital platform operating systems?
          • How to integrate developer tools, in a novel manner, with Xen? 
            • cf. Xen Dom0 
              • Emulator as Dom1 instance
              • Xen must be (?) installed directly on platform hardware
              • Orthogonal to: Microkernel design.
            • cf. Amazon Web Services
              • Application of Xen
To this point, the article has diverged from the original topic of developing a data flow programming model in Common Lisp. Figuratively, the eSym project might represent some figurative icing on the figurative concha otherwise described in this article. The majority of such tasks as would entail an extension of interfaces onto the Linux kernel -- as addressed in this article, albeit superficially -- might be approached albeit without a cup full of sugared bread, but with a substantial portion of documentation regardless.

This article has described a substantial number of individual concepts and related task items, such that may be addressed into development of an essentially arbitrary number of source trees.

This is developed under the MetaCommunity project, towards a further goal of developing a comprehensive architecture for application development in Common Lisp, and an orthogonal goal of reviving the design of MIT CADR for application onto contemporary microprocessor architectures, denoted as the CIIDR project.

Saturday, October 25, 2014

Thoughts about: Electronics, Geometry, Common Lisp, and a Rational Pi within Common Lisp Mathematics

"Once upon a time," when I was a young person, my family had "Gifted me" with an electronics kit, such that I've later learned was developed by Elenco. In fact, my family had "Gifted me" with a number of electronics kits -- an old 60-in-1 kit, later the Elenco 200-in-1 kit, and then a kit somewhat resembling the Elenco Snap Circuits Kit, though the last of those was of a different design, as I recall -- something more designed for carrying an assembled circuit, really. My family had also gifted me with the opportunity to be wary of computer salespersons, but that's another thing.

In studying the manual of the Elenco 200-in-1 kit, specifically, I was able to learn a few things about the simple novelty of electrical circuits. The manual for the Elenco 200-in-1 kit contains a broad range of circuits, onto the domains of each of solid-state and digital electronics. The circuit elements, in the kit, are all firmly fastened into a non-conductive base, each junction attached to a nice spring. The circuit elements, in the kit, may be "wired together" with the set of jumper wires contained in the kit, according to the numbered schematics within the manual. As I remember, today, I'd assembled the radio tuner circuit, the buzzer/oscillator, and something about the multi-segment LED, then separately, something about the step-up and step-down transformers. Of course, there's a novel analog meter, on the front of the Elenco 200-in-1 kit's form factor case.

Candidly, to my young point of view, the full manual of the Elenco 200-in-1 kit had seemed a little overwhelming. I remember, still, that I'd had a feeling that I was looking at a broad lot of material when browsing the pages of the manual, but I wasn't particularly certain if I was able to understand the material I was reading. Of course, I could understand something about the circuit assembly instructions, in the manual -- it seemed, to me, much like the "Snap together" airplane models of the hobbyist domain -- but I was not able to analyze the material of those circuits. Inasmuch, I wasn't able to interpret anything about my experiences, in studying of the manual. Certainly, I was able to set a wire between spring 30 and spring 14, to push a button, and to watch an LED glow, but -- beyond the simple, mechanical qualities of such circuit assembly and testing -- it didn't mean so much to me, then. It was a little bewildering to me, too -- there was such a nice manual, with the item, but I was unable to study the manual, beyond the simple circuit assembly instructions. I could not understand much about it.

Candidly, it was the only electronics book that I knew of, in all of my young views. In that time of my life, I was not whatsoever cognizant of any sense of a relevance of digital circuits -- such as, of the digital logic gates with transistorized circuits, printed in the last pages of the manual. Sure, though, that was only a few decades after Bell Labs had made their first discoveries about the semiconductive properties of germanium [Computer History Museum]. I was a kid, when the decade was 1980, when the 32-bit processor was "New", and when the Nintendo Entertainment System was the hottest thing on the games market. Somehow, I still mark the chronology of my life by those relative metrics.

At that time in my life, I was not aware of there being a digital domain in electrical circuits, beside any concepts particularly of digital/analog interfaces. I had received the Elenco 200-in-1 kit as a gift, at a time before I had ever studied geometry, in school. It was a very novel thing, to me, but there wasn't so much I could do to understand the material of the manual provided with the thing. Candidly, it gathered some dust, and with some regrets, was delivered to a landfill, when I first left California. I wish I had held onto that item -- a reproduction would not be quite the same.

Certainly, the high school geometry class -- in a sense not focused towards any specific, practical or vocational regards -- the class did not present any notes with regards to any applications of geometry, in electrical circuit analysis, so far as I can recall of the primary material of the high school geometry course. There was nothing about the rectangular coordinate plane, almost nothing about the unique, polar coordinate plane, and there was not so much about complex numbers, though certainly the course stayed its track along the standard curriculum.

Sure, though, the geometry class had explained the natures of the unit circle, the Pythagorean theorem, and the transcendental functions as defined of a domain of trigonometry -- such as the sine, cosine, and tangent functions, and their functional inverses, asin, acos, and atan. Certainly, it had seemed novel to me that there are the relations of those functions onto objects within a unit circle, but I could not conceive of any applications for such knowledge, beyond any of the no-doubt demanding domains of structural design -- such as of physical architecture or of automotive design, in any manner of mathematical refinement in structural design.

So, it was not until the 38th year of my life that I would learn that those trigonometric transcendental functions may be applied within a methodology of electrical circuit analysis -- and that conventionally, those are denoted as being among the transcendental functions in mathematics. Without begging a discussion with regards to imperfections in measurement systems, personally I think it's a significant thing that there are applications of geometry in electrical circuit analysis, as well as in the broader physics. Candidly, I may only narrowly avoid regressing to a mindset like of a giddy child, to observe that there are more applications for trigonometry, under the nearby sun.


This week, I've begun to define something of an extensional model for application of mathematics, in Common Lisp -- that extending of the set of mathematically relevant functions defined within the ANSI Common Lisp specification. Today, specifically, I've been developing a model for overloading of mathematical operations in the Common Lisp Object System -- in a manner somewhat extending of a Common Lisp implementation's existing optimizations with regards to numeric computation. While I was endeavoring to make the next day's set of notes, for that "Sprint", I've now learned of a couple of functions defined in ANSI Common Lisp, such that effectively serve to "Obsolete" a page full of code that I had written and tested, earlier this week -- and in that, no doubt with further implementation-specific optimizations, in implementations of the ANSI Common Lisp rational and rationalize functions. The code that I had written, earlier this week, was something implementing of a decimal-shifted encoding for floating-point numbers -- in a sense, developing a model for decimal shift in measurement values, onto the base-ten or decimal base measurement prefixes defined of the standard Systeme Internationale measurement system. Of course, there are also the base-two or binary prefixes conventionally applied about quantities of digital information.

The code that I had written, earlier this week, does not completely attain to a model for measurement of "Significant digits" within input values -- not in a sense as I had learned of, when I was a student of the science courses I'd attended in high school, in the mid 1990's. There was not so much about the floating-point numeric types, as defined in modern standards for computation -- and neither so much of a sense of a mantissa, as defined in those specifications -- in a course that described the molar weight of helium.

On discovering the ANSI Common Lisp rational and rationalize functions, personally I may feel at least a small bit of good cheer that ANSI X3J13 [Franz Inc] had seen fit to develop such a set of functions as would serve to address some inevitable concerns in implementations of conventional models for floating-point arithmetic.

Personally, I think that this -- in a sense -- is the nicest line of code that I've ever written:
(rationalize pi)
=> 245850922/78256779
It follows, then, that one is also cheered to write:
(defun rad-to-deg (r)
  (* r #.(/ 180 (rationalize pi))))
=> RAD-TO-DEG

(rad-to-deg pi)
=> 180.0D0
So, clearly, it is possible to define an accurate method for transformation of a measurement quantity from a unit of radians onto a unit of degrees, without one having to go to any gross extent to extend the ANSI Common Lisp specification.

That being simply the happiest thing that one has seen produced of a computer, of late, but of course one does not wish to make a laurel of the discovery.

Of course, the simple transformation of pi radians to 180 degrees is not so different if one does not firstly rationalize pi in the equation. In a converse instance, however: What of a transformation of pi/2 radians?
(/ (rationalize pi) 2)
=> 122925461/78256779
contrasted to:
(/ pi 2)
=> 1.5707963267948966D0
In the first instance of those two, when the implementation's value for pi is firstly converted to a rational number, then with the computation being made onto another rational number, the real precision of the initial value of pi is neither lost nor folded. In the second instance of those two, it might seem as though the application would simply be venturing along in the implementation's system for floating-point arithmetic -- that the application developer, then, must simply accept any matters of floating-point rounding, and so on, as being simply matters of the programmed environment. One might think that may not be sufficient, in all regards, though perhaps it might be common, as a matter of practice.

Notably, in applying a rationalized pi value, the calculation of the number of degrees represented by pi/2 radians itself results in a rational number, exactly 90.
(defconstant *rpi* (rationalize pi))
=> *RPI*

(defun rad-to-deg (r)
  (* r #.(/ 180 *rpi*)))
=> RAD-TO-DEG

(rad-to-deg (/ *rpi* 2))
=> 90
In a simple regards: That's both a mathematical precision and a mathematical accuracy, represented in a conversion of pi/2 radians to 90 degrees.

I'm certain that it may be possible to emulate the CLHS rationalize function within other object-oriented programming languages -- such as Java, C++, Objective-C, and so on. One would like to think that in such an instance, one may also emulate the Common Lisp numeric type, ratio -- that being a subtype of rational, in Common Lisp, disjoint from the numeric type, integer -- then to overload the respective numeric operations, for the new ratio numeric type.

Personally, I would prefer to think of Common Lisp subsuming other programming languages, however, rather than of other programming languages simply emulating Common Lisp. I know that there can be "More than one" -- as far as programming languages, in the industry -- certainly. Personally, I think that it may be as well to address other programming languages if from a Common Lisp baseline, if simply as a matter of at-worst hypothetical efficiency.

An ANSI Common Lisp implementation would already have defined the ratio numeric type, as well as the mathematical operations denoted in the CLHS Mathematics (Numbers) Dictionary, onto the whole set of numeric types defined in ANSI Common Lisp. Implementations might also extend on those functions and types, with any number of implementation-specific extensions such as may be defined for purpose of optimization within Common Lisp programs -- perhaps, towards a metric for deterministic timing in mathematical procedures.

Sure, though, if one would endeavor to extend those numeric operations and types onto more types of mathematical object than those defined in ANSI Common Lisp -- as onto a domain of vector mathematics and scalar measurements, such as the Igneous-Math project seeks to address, in all of a colloquial character of a source tree (yet unpublished) -- then it would be necessary to extend the mathematical operations themselves, and to define an object system for the types extension; a minimal sketch of one conventional approach follows. That's something I was working on developing, today, before I noticed the simply sublime matter of rational and rationalize, as functions standardized in and of ANSI Common Lisp.
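
As that minimal sketch of extensible operator overloading, under one conventional approach: shadow the standard operator in a new package, define a generic binary operation with a method on the standard number type, and reduce across the argument list. The package and function names here are illustrative assumptions, not the Igneous-Math interface.

(defpackage #:math-sketch
  (:use #:cl)
  (:shadow #:+))
(in-package #:math-sketch)

(defgeneric binary-+ (a b)
  ;; standard numbers fall through to the standard operation
  (:method ((a number) (b number))
    (cl:+ a b)))

(defun + (&rest args)
  ;; n-ary wrapper, with the standard identity for zero arguments
  (if args
      (reduce #'binary-+ args)
      0))

;; An extension type -- e.g. a vector or measurement class -- then
;; participates simply by adding a method onto BINARY-+.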

Presently, I'm not certain if I should wish to altogether discard the simple decimal-shift code that I'd written, earlier this week -- essentially, that was for developing a rational numeric equivalent of a floating-point representation of a numeric input value, such as pi -- but not only so. In that much, alone, it might be redundant onto the ANSI CL rationalize function.

Also, it was for "Packing" numbers into their least significant digits -- in a sense, to derive a value of magnitude and a corresponding decimal scale value, the latter essentially representing a decimal exponent -- with a thought towards developing some optimized mathematical operations onto rational numeric values, throughout a mathematical system, in seeking to retain something of a sense of mathematical accuracy and precision, if but because it would seem to be appropriate to one's understanding, as such.

The decimal shift code, as implemented in Igneous-Math, of course is nothing too complex, in any sense -- in a simple regards, the "decimal reduction" algorithm uses the Common Lisp truncate function, and correspondingly, the "decimal expansion" algorithm uses the Common Lisp expt function. It's been implemented as part of a system for encoding the Systeme Internationale decimal prefixes as Common Lisp objects, in a system essentially designed to ensure that -- excepting such rational numeric measurement values as would be input as of the type, ratio -- other real numeric values in the system would be stored in a manner derived to a base measurement unit, before being applied within such conventional formulae as would be defined around base measurement units -- but allowing for operations onto arbitrary exponents of the base measurement units, in any single measurement system.
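
A minimal sketch of that decimal-shift idea, with illustrative function names (the actual Igneous-Math interface being yet unpublished): reduce a float to an integer magnitude and a base-ten exponent, then expand it back with expt. Note that cl:rational of a float has a power-of-two denominator, so the reduction loop always terminates for float input; an arbitrary ratio such as 1/3 would not terminate, and is excluded here.

(defun decimal-reduce (x)
  "Return an integer magnitude M and a decimal exponent E such that
X = M * 10^E, for X a float or an integer."
  (let ((m (rational x))
        (e 0))
    ;; shift left until the value is integral
    (loop until (integerp m)
          do (setf m (* m 10)
                   e (1- e)))
    ;; strip trailing decimal zeros, shifting right
    (loop while (and (/= m 0) (zerop (mod m 10)))
          do (setf m (truncate m 10)
                   e (1+ e)))
    (values m e)))

(defun decimal-expand (magnitude exponent)
  "Inverse of DECIMAL-REDUCE, returning a rational value."
  (* magnitude (expt 10 exponent)))

For example, (decimal-reduce 1.25) returns 125 and -2, and (decimal-expand 125 -2) returns 5/4.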

While designing so much of those features of the Igneous-Math system, candidly I was not then aware of the simple existence of the ANSI CL rationalize function, as it being a feature available of an ANSI Common Lisp implementation. With some chagrin, I wonder if I would've designed the decimal-shift code any differently, had I been aware of that feature of ANSI Common Lisp, at any time earlier this week.

I think that, henceforward, I'll have to make some consideration about whether the rationalize function should be applied throughout the Igneous-Math system. One matter that I would wish to emphasize about the design of the Igneous-Math system: That a number's printed representation need not be EQ to a number's representation in object storage, within a numeric system. So long as the printed value and the stored value are at least mathematically equivalent -- onto the ANSI Common Lisp '=' function, broadly -- then certainly, the system would have achieved its necessary requirements for storage and printing of numeric values.

That a mathematical system, moreover, may endeavor to apply the rationalize function onto all numeric values input to the system, before applying those values within any mathematical operations -- such a feature would be developed as an extension of ANSI Common Lisp, also. It might seem redundant or simply frivolous, perhaps, but personally I think it's just as well.

Furthermore, I think it's as well to encode any floating point number as a set of two fixnum type numbers -- assuming that the instances would be few, in which a floating point number would be expanded to a bignum value on a decimal scale -- and only a matter of the nature of an application system, that one would scale a floating point number onto base ten, instead of onto base eight -- perhaps a matter of developer convenience, in developing a software program for applications of mathematics within a 64-bit Common Lisp implementation.

While beginning to develop a set of overloaded math procedures in the Igneous-Math system, I've begun to wonder of how much work would be required if to optimize an individual mathematical operation onto a microcontroller's own specific hardware -- such as with regards to the Intel SSE extensions [Wikipedia] and the ARM architecture's hard float design, with its corresponding ABI in the Linux application space [Debian Wiki]. The Igneous-Math system has been designed to rely on a Common Lisp implementation's own optimizations, in a portable regards.


Saturday, August 2, 2014

Supercluster, The // Design Update, A // Towards System on Module Concepts for BBB

Last night, around a discussion in a private, online forum in the DeVry University Online (DVUO) Electronics and Computing Technology (ECT) program -- a discussion, namely with regards to applications of parallel resistive/inductive/capacitive (RLC) circuit designs in ECT-125, this semester -- I began a "Research arc," which I would like to presently describe. That was in continuing with a certain thesis development project, as with regards to -- broadly -- applications in parallel computing. I've been proceeding along in this thesis development project, since sometime after I'd read of the Raspberry Pi (RPi) Compute module in its SO-DIMM form factor, then later purchased a BeagleBone Black (BBB) at Radio Shack, in fact.

Back Context: BBB alt. to RPi 

At the time when I'd first read of the RPi Compute module, I was at least vaguely familiar with a couple of concepts in designs and applications of single board computers. I'd heard of RPi; it seemed both novel and popular, to my point of view. The concept of a single board computer being available for a SO-DIMM bus,[1] it struck me initially as a concept that could lend the single board computer design towards applications in parallel computing.

It was altogether a new and novel concept, to me -- reusing a RAM bus for communication with a single-board computer?

Summary of Design: Supercluster, The

 Ideally, if not only hypothetically, a printed circuit board (PCB) could be defined with multiple SO-DIMM slots -- or other bus slots -- for communicating between multiple peer SBCs. A single master peer SBC could be applied to the same PCB, as primarily for the master peer to serve as a sort of device controller, in communicating with the submodule peers via I2C over the SO-DIMM bus, or other digital bus technology. The master peer SBC on the multi-SO-DIMM PCB could itself communicate with any components external to the PCB -- the master peer serving as a sort of controller -- with the PCB module itself being a submodule within a broader computing system.

This design, in effect, would define a sort of tree-like hierarchy for such SBC modules -- moreover, with an opportunity for "pass through" in the master peer, such that other modules, external to the single PCB, could communicate directly with any submodule peers on the PCB, in a parallel computing model. In its wire protocol, then, the design could incorporate a novel multi-master I2C protocol, such that could be designed specifically for this "Supercluster" concept -- I2C being the wire protocol I had thought might be appropriate for the design of such a thing.

In the application layer, the design would incorporate a set of CORBA interface definitions -- essentially, such that the communications to/from each SBC module would be managed via a CORBA protocol, extending CORBA GIOP specifically for the unique I2C protocol -- something like an I2C IOP, as alternate to the Internet IOP (IIOP) extension of GIOP, with the I2C IOP instead using I2C device addresses, as alternate to IPv4 host addresses, for device addressing in the protocol. Each supercluster board could itself be defined with a unique bus address, in the same protocol -- if not rather, to identify each supercluster board, in any context external to the supercluster board, by using the I2C device address of the master peer on the supercluster board -- thus defining a sort of simple ':' address model onto the underlying I2C protocol, such as would then be extended with kernel-space drivers and a CORBA implementation, perhaps The ACE ORB.

Why: Preferring BBB in lieu of RPi, in Design of Supercluster, The

Earlier on, in the design of that -- albeit nonetheless naive -- thesis concept, I think I had valiantly defended my thesis in social networking: that the SO-DIMM design of the RPi Compute Module (1) could be of relevance for designing a "Raspberry Pi Supercluster" for parallel computing, and (2) that simply, it is relevant. Candidly, I'm still a bit mystified if it could seem like a concept one should have to defend.

Later -- having since abandoned most "chat" venues, in social networking -- I began to read more about the limitations with regards to Broadcom's purported lack of public documentation about the BCM2835 -- the BCM2835 being used as a primary microcontroller unit (MCU) on RPi platforms. To my best understanding, it seems that Broadcom has since published some sort of a datasheet, however -- presumably at some time since I last read of any "updates," in that particular context, or at least at a time before the "updates" I had read were themselves "updated" -- specifically, noting [4]

Around the matter of the BCM2835's datasheet, and with my frankly being of a yen to avoid populist venues, I've begun to focus more on developing with the BeagleBone Black platform (BBB) -- my having a definite appreciation for the fact that the BBB platform is thoroughly documented, in legally open documentation, from the design documents for the BBB SBC itself,[5] to even insofar as Texas Instruments' (TI) AM335x Sitara, an ARM Cortex-A8 MCU[6] -- and its PRU-ICSS submodules.[7]

Onto the OS Layer - Towards application of a Linux RTOS for Supercluster, The


So, in "Fast forwarding" this contextual narrative to a present matter, "in short," I've decided to focus primarily on the BeagleBone Black platform and its TI AM3358 MCU -- presently, to no lengthy sidebar about RTOS designs, though I think there are some notes that I might like to share, at that, as with regards to RTOS kernel designs, moreover TimeSys LinuxLink and TimeSys' developer support for developing discrete RTOS applications on the BBB platform.[8] Of course, that all kind of goes out the window, if my simple notes here would be thought to be of any interest to Qatari Hackers, in any long sidebar about things that don't make world peace.

Sidebar: Not for Proponents of Endless War

Assuming, then, that this is not to be misinterpreted as if it was whatsoever applicable for militant opponents of Israel, and not either for militant separatists in the latitudes of Asia Major -- ostensibly, including opponents of Russia, there -- my being at least vaguely familiar with some of the volatility in the social network, online, and hostility in the real world "Not online", then before my life's time is wasted entirely if only in addressing people's own spiteful regards, if not also so many convenient "Straw man" arguments, so easily "Tossed around," in so much of "The real world" as ever ventures to develop any items of content, online -- with no lengthy dissertation, here, to my own appreciation of Pres. Eisenhower's comments in his Farewell Address, as then President of the US, and its relevance with regards to international military cooperatives developed during the Cold War -- of a time in world history, when the "Military Industrial Complex" was veritably enshrined, ostensibly then as a defensive measure preventative of global nuclear holocaust, more literally in matters of policies of national nuclear deterrence, in that horrendous brinksmanship ever one step short of global catastrophe and subsequent "Nuclear Winter" outcomes, but fortunately ever stayed of its wiser agents and agencies -- and of the world as before the catastrophic events of September 11, 2001, this world where nonetheless there is still an INTERPOL, and still are some bastions of peace -- inasmuch, if one may endeavor to extend beyond any outcomes of an Armageddon, real and/or invented -- perhaps one might not think as if it was the last significant event in one's life, to be treated to a cup of tea by a team of Kurdish guards.

Complexity -- better a topic for discrete theories in mathematics, nonlinear dynamics, and nondeterministic systems (NDS) theories -- certainly.

Backplanes, sans Hooligans

Considering that the ECT-125 course focuses primarily on qualities of inductance and capacitance in analog circuits, and that I'd once read a note with regards to capacitance in PCB layouts,[9] then when reading up for a forum response last night, I found a resource published by Quabbin(R) Wire and Cable Co. Inc., an article effectively enumerating some relevant electronic characteristics of cabling and wiring in electrical systems.[10] Certainly, those characteristics of electronic systems would be no less relevant in a microscopic context -- as, in a sense, with regards to small traces in PCBs and smaller traces in microchip designs -- than in the macroscopic context of cables and wires.
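For a rough sense of scale -- this being a textbook approximation, not a figure from the Quabbin brief itself -- the capacitance per unit length of a coaxial conductor pair may be estimated as:

    C' = 2πε / ln(D/d)

where ε is the permittivity of the dielectric, D is the inner diameter of the outer conductor, and d is the diameter of the inner conductor. The same electrostatics would apply, with different geometry factors, to a PCB trace over a ground plane -- hence the relevance of conductor geometry at the microscopic scale, too.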

Then, in addressing a matter of my own naivete, in some brief and candid reflection, I endeavored to read up more about the concept: Backplane.

After some brief research -- towards the end of the first session of courses this semester at DVUO -- I'm afraid I'd developed a concept of a backplane as if there could be any sort of a backplane simply comprised of rudimentary, bare copper straps: something that, strangely, might resemble an item from a bleak and torturous dungeon more than any realistic design in a digital computing system.

I'm certain, now, that I'd simply misinterpreted a certain diagram that I'd read, as if a "broad arrow" in any way indicated the "surface area of a copper conductor." To my best present recollection, I'd seen some such diagram sometime when I was reading up about I2C.[7]

So, last night, having observed some of the obvious naivete of that concept, I proceeded to do some more studying -- that being the "reading up" part of so much homework -- or rather, in a sense, research, namely about digital backplane design.

Towards a Concept of SHB as Backplane

My own study had not begun with this exact item, though I would denote it as the first item in this portion of my short summary here: there's an article about backplane design at Wikipedia, the Free Encyclopedia.[11]

One item that I think is of particular note in that article -- namely, as regards the concept by which I had found the Wikipedia article, while looking for more description of another item I'd found last night: there is a set of industry standards defined for backplane design in a context of single-board computing, namely PICMG standards 1.0, 1.1, 1.2, and 1.3,[11][12] as would be relevant to a concept of System Host Board (SHB) design, and of the SHB as backplane, in applications of single-board computing.[11] The last of those PICMG standards is also related to the concept of SHB Express.[13]

Presently, I've not been able to locate any exact designs for an SHB Express implementation using the TI AM335x or other editions of the TI Sitara MCUs. Nor have I yet begun to read the actual PICMG standards. I only wish to denote those resources as reference items, here.


Towards "That Popcorn Moment," in Regards to: System on Module (SOM) Concepts, TI Sitara MCUs, ARM Architectures, RISC Instruction Sets, and MIT CADR

Towards a further, perhaps related concept, as regards applications of single-board computer MCUs: as of this morning, I've found a few System on Module (SOM) designs incorporating various editions of the TI Sitara MCU design. Of course, I'm biased towards the Sitara, due to its being the same brand of MCU as used on the BeagleBone Black -- that ultra-affordable development kit, which I've denoted as BBB here. The Sitara, being an ARM MCU, uses an ARM instruction set -- therefore, a RISC instruction set. At that point, it might bear some relevance, furthermore, with regards to the old MIT CADR processor.

If my bias about the Sitara might prove not only sentimental, then, I certainly hope it may be fortuitous if -- as a young software and systems developer -- I would focus primarily on the TI Sitara MCU architecture, including the TI AM335x and the TI AM437x.

In some short searching, I've found a couple of System on Module (SOM) designs using Sitara MCUs -- likewise, a short list, moreover with a bit of informative tail recursion:

In considering the matter of capacitance and electrical reactance in circuit traces, it's my impression that the "SBC as SOM" design might be more ideal, for some applications, than the relatively common "SBC on SO-DIMM" design -- that, for instance, the iWave G12M offers a smaller, therefore certainly less reactive, edge interface than what I presume an SBC-on-SO-DIMM card might offer. In my opinion, the smaller interface of the G12M would be closer to ideal for the G12M's application as a modular computing element.

Candidly, then: I would like to define a BeagleBone SOM edition -- if only sans political concerns, such as with regards to contemporary events in world news.

The SOM BB would be designed so as to continue with design concepts represented in the BeagleBone Black, although -- in my opinion -- the SOM BB should be considered with its baseboard, as in regards to how the BBB design could be effectively revised for the different architecture.

The SOM BB would not, itself, require so many conventional I/O interfaces. The interfaces for SD storage, Ethernet/Fast Ethernet, HDMI, and USB could instead be placed on the SOM's baseboard.

The baseboard could also be defined with conventional GPIO headers, placed so as to accommodate the conventional BeagleBone Cape. Ideally, the GPIO headers would be geometrically elevated above the SOM socket and SOM board, sufficiently to accommodate the respective Capes.

Furthermore, a digital voltage controller could be added, with an additional header for interfacing with Arduino shields -- thus adding a bit more extensibility to the design.

Being a novice student of electrical engineering, I could only guess whether an Atmel chip might be sufficient as an independent microcontroller for the baseboard itself -- something effectively to "bounce" I/O from the respective hardware bus onto the SOM board.

In its edition one, the baseboard would contain one socket, for exactly one SOM board.

The design of the baseboard should be accompanied with SysML diagrams sufficient for illustrating the component architecture of the SOM baseboard and its interfaces to the SOM board itself. In a broader sense of design, the baseboard and the SOM board may be approached so as to present an opportunity for study, as with regards to how a system programmed via the SOM board could be applied, alternately, in special-purpose digital systems -- for instance, a usage case with such a SOM board being applied as a "Cubesat brain".

I would reference, primarily, the iWave G12M as an inspiration for the design of the SOM board itself. The G12M features a compact form factor, with a small edge interface, certainly sufficient to accommodate the small space of a Cubesat, in the Cubesat usage case.

A second usage case could be defined for the hypothetical BB SOM board: the BB SOM board applied in a network service context, namely as a compact network gateway device providing the functionality of conventional Linux features such as the Linux kernel's netfilter modules, routing subsystem, traffic shaping qualities, and TCP congestion control modules -- as in the sketch below. In that usage case, SOM boards could be "cold swapped" for updates to the network gateway's firmware and software components, with minimal network downtime.
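As a minimal sketch of what that gateway usage case might entail on a stock Linux kernel -- with the interface names, addresses, and rate limits here being hypothetical placeholders, and with no claim of this being a complete or secure configuration:

    # enable IPv4 packet forwarding in the running kernel
    sysctl -w net.ipv4.ip_forward=1
    # basic NAT from an internal LAN (eth1) out through the uplink (eth0)
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
    # simple traffic shaping: cap egress at 10 Mbit/s with the 'tbf' qdisc
    tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms
    # select the TCP congestion control module for locally terminated connections
    sysctl -w net.ipv4.tcp_congestion_control=cubic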

A third usage case could be defined for the BB SOM board in a context of desktop computing. If the novelty of the design might be sufficient to carry it, candidly, I would prefer that this usage case make a demonstration of Linux Mint for modular applications in single-CPU desktop computing.

Later usage cases could be defined for applications in parallel computing -- as to define a Beowulf cluster of SOM modules, with a multi-SOM baseboard -- ostensibly towards prototyping a system for locating stellar reference points within the Sloan Digital Sky Survey, using simple Fourier transforms or other mathematical signal-domain processing models.

In a context of application/network/data/user security, a branch of the SOM design could be developed, at some point in time, for integrating a Trusted Platform Module (TPM) into the SOM and the SOM baseboard. That could be accompanied with a development of a usage case for the SOM board as a VPN interface provider -- namely, using strongSwan as the VPN service implementation on the SOM board.

Ed. Note: Regarding TPM applications in VPN configurations, see also: BeagleBone CryptoCape; strongSwan.
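For the strongSwan item, a minimal ipsec.conf connection stanza might look something like the following -- the addresses and subnets are hypothetical placeholders, and TPM-backed key storage would be a further configuration concern beyond this sketch:

    # /etc/ipsec.conf -- one site-to-site tunnel, hypothetical addresses
    conn som-gateway
        # the SOM board's own address and the LAN behind it
        left=192.0.2.10
        leftsubnet=10.0.1.0/24
        # the remote VPN gateway and its LAN
        right=198.51.100.20
        rightsubnet=10.0.2.0/24
        # certificate/public-key authentication; bring up at service start
        authby=pubkey
        auto=start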

Temporary Conclusion

That, then, is my primary academic thesis concept, in its current state -- with such concerns about world peace and online politics notwithstanding.

In summary: Pending further progress in academic studies, I would wish to focus on developing a system on module edition of the BeagleBone Black platform -- focusing primarily on those usage cases that I've denoted in the previous.

Though my bias towards the TI Sitara MCU might seem arbitrary, in a sense of convenience, nonetheless -- with regards to the Sitara MCU's PRU-ICSS modules being applicable for independent I2C signal processing -- candidly, I wonder if it might prove to be not merely arbitrary, as a design choice.

Works Consulted

[1] Raspberry Pi Blog. Raspberry Pi Compute Module: new product!
[2] Embedded Linux Wiki. RPi Hardware
[3] Embedded Linux Wiki. Beagleboard: BeagleBoneBlack
[4] Embedded Linux Wiki. BCM2835 datasheet errata
[5] Circuit Co. Design and Document files for the BeagleBone Black from BeagleBoard.org
[6] Texas Instruments. AM3358 Sitara Processor
[7] Digital Spelunk 42. Resource Note - Digital Serial Bus Technologies - I2C
[8] Digital Spelunk 42. Notes onto RTOS Linux
[9] Microchip Forums. I2C On a Backplane?
[10] Quabbin(R) Wire and Cable Co. Inc. Tech Brief - Why is Cable Capacitance Important for Electronic Applications?
[11] Wikipedia, the Free Encyclopedia. Backplane
[12] Wikipedia, the Free Encyclopedia. PICMG
[13] Wikipedia, the Free Encyclopedia. PICMG 1.3

Sunday, June 29, 2014

Notes onto RTOS Linux

Personally, I'm not one to make any sort of lengthy association between culture and computing. It's my point of view that a computer is a practical thing -- nothing of "style," nor of too much conversation, to me. I also don't talk for long about my car, or my experiences in driving, or my experiences in operating heavy engineering equipment. Those are all practical things, to my point of view. If I might have any sort of a "style" about computing, regardless, I suppose "tight-lipped" would be the label that I would prefer -- "tight-lipped" as contrasted to my sometime bouts of rambling incoherently, and likewise of writing, though without a great sense of structure.

I understand that as I would continue to pursue a course of formal, academic study in computing and -- correspondingly -- a study in digital electronics, that I should wish to learn to develop a normative style of writing, then -- such that I would not be thought to be inept at communication, at least, though I am inept in much of "Style," I think. It's on the premise of computing being a practical thing, to me, that I endeavor to not be "Self conscious" about writing about computing -- even when I am writing without a great sense of structure or of focus.

If I would begin to write with more of a sense of structure and/or a sense of focus, then, I think it could go well if I were to begin by writing about a single, linear process in development. At that, the first thing that comes to mind that I would like to write about: a manner of study with regards to applications of the Linux kernel in realtime systems on thin client platforms. Of course, that in itself might not be so simple a topic to write about. If there is even a single vocabulary for the concept, I have not learned it yet; and if there is a single, discrete set of technologies for applications of such a concept -- realtime systems on thin client platforms -- I have not found that set, as yet. Candidly, being a student, I have found at least a small assembly of heterogeneous components, some of which share some common qualities -- for instance, that a BeagleBone Black and an Arduino board can both conduct communications using the I2C protocol, and that the Altera Cyclone III on the eSOC FPGA board manufactured for DeVry University students (a group of which I am currently one) has I/O headers, at least -- though the eSOC 3 is currently lacking any kind of user manual, schematic, or other design documents, and therefore there's not much I can do with it.
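On the I2C point, at least, there's a simple, concrete starting point in the i2c-tools package, as commonly packaged for Linux distributions -- a minimal sketch, assuming bus 1 and a device responding at address 0x48, both of which are hypothetical here:

    # scan I2C bus 1 for responding device addresses
    i2cdetect -y -r 1
    # read one byte from register 0x00 of the device at address 0x48
    i2cget -y 1 0x48 0x00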

In a practical regards, I think that a realtime Linux platform could be of some interest with regards to parallel computing models using single board computers, if not furthermore of Cubesat design -- neither of which would seem to be hot topics in "the consumer market," at this time, however. I'm sure that there could be an app made for news about it -- maybe it could be more popular, then. Certainly, it's a bit more complex than what can be developed on a mobile phone.

In all that complexity, perhaps there are some discrete qualities, however -- such that one may endeavor to "Snapshot," in a manner of writing -- resources, such as:
  • Article: Jones, M. Tim. Anatomy of real-time Linux architectures. IBM Developerworks. 2008
    •  In a structural view: The article denotes three primary types of Realtime Linux kernel, namely: Thin-kernel; Nano-kernel; Resource kernel. The article also describes the baseline architectures of the Linux 2.4 and Linux 2.6 kernels.
  • Project: ADEOS Project
  • Realtime Linux distribution:  Timesys LinuxLink
    • This item, in particular, has really "caught my attention," this week. I denote it, here, as a realtime Linux distribution, considering that it is an apparent descendant of a Linux distribution denoted in "that IBM article" (Jones, op. cit.) -- namely Linux/RK, denoted there as a realtime Linux distribution implementing a resource kernel model.
    • The Linux/RK project page, itself, makes reference to a number of additional resources, describing concepts of the architecture of a resource kernel model as implemented in Linux/RK. (Note that the resources are dated around 2001, and are available in Postscript format, but may be converted to PDF format with some GPL licensed tools such as 'ps2pdf', cf. Cygwin)
    • In a practical regard, I've noticed that the Timesys web-based Build Factory provides support for the BeagleBone Black single-board computer and the TI Sitara microcontroller unit used on the BeagleBone platform. I'm well impressed, moreover, to see how well architected the Timesys Build Factory is. I've compiled a single build with it, and have yet to install that to my BeagleBone Black, but I look forward to that -- though catching up on some other items, in the meanwhile.
    • This item, in particular, I think I may be able to incorporate into a senior project. The project would focus primarily on developing an I2C I/O model using the BeagleBone Black; but with a solid baseline OS available -- namely, LinuxLink -- already catered towards realtime and embedded systems, that substantial bit of existing work could simply help -- by a long shot -- in defining a modular design in the project.
    • For a bit of historical context, there's an article about Timesys' original Linux/RT, at the Linux Journal
  • Standard: POSIX 1003.1
 From there, the snapshot begins to focus more on the development environment, less on the OS itself.
  •  Virtualization platform (x86 architectures): Virtualbox
    • My having recently begun to develop a Windows 8.1 desktop on an x86 (amd64) PC that uses a UEFI architecture in its boot process -- that adding some distinct complexity for any sort of "dual boot" configuration -- and the same PC having an Intel graphics card that doesn't work ideally well on Mint LMDE "out of the box," it's therefore my impression that Virtualbox provides a "nearest to ideal" methodology for both developing software in the Linux environment and continuing with the typically Microsoft Windows-dependent lab materials in the courses at DeVry University Online (DVUO)
    • I've installed Linux Mint (LMDE edition 201403) into that Virtualbox image, and have begun to use that desktop as an overlay onto my Microsoft Windows 8.1 desktop.
    • Virtualbox provides a nice seamless view mode for graphical applications in compatible Virtualbox virtual guest instances. When Virtualbox is using the seamless mode -- as for displaying GUI applications running directly in the compatible virtual guest instance -- it makes for a relatively smooth integration with the virtual host PC desktop
    • There are some limitations -- such as namely:
      • That the desktop menu of the virtual guest instance is inaccessible in seamless mode. However, Virtualbox provides some quick hotkeys for changing the view mode, such as allows for access to the desktop menu, as in fullscreen view mode.
      • That it does somewhat "lag the PC", it being a virtual OS image for an OS that would typically be installed directly onto "Bare metal" hardware
      • Those are, effectively, the only limitations I can observe of that configuration
    •  Of course, it would also work "Swell" with the build factory image created by the Timesys web-based Build Factory
      • Virtualbox can be easily configured to provide "pass through" I/O for USB devices, such as would allow for the BeagleBone to be configured directly via USB from within the Virtualbox virtual guest instance -- a sketch follows, at the end of this snapshot
      • With or without an X.org window server running in the Virtualbox virtual guest image, Virtualbox nonetheless provides a framework for a full Linux development environment, without the characteristically "hairy" qualities of the process of installing a second bootable OS directly onto a PC that uses UEFI.
      • The virtual media device as used by Virtualbox for its machine and block storage device emulation procedures is essentially contained within a single virtual media image, such as can easily be backed up and/or relocated to another PC.
    • Advice: To use Samba and CIFS instead of the Virtualbox "Shared folder", for sharing files between the Virtualbox virtual guest image and the virtual host OS.
      • "Some limitations would apply"
  • Programming Language: Common Lisp
    • This topic necessarily would entail a number of distinct sub-topics, such as at least one topic for each Common Lisp implementation being used in a single Common Lisp development environment. 
    • Of course, one cannot simultaneously use every Common Lisp implementation that exists. Personally, I wish to focus on:
      • Steel Bank Common Lisp  (SBCL) 
        • towards: Defining a "New" Lisp OS
        • Focusing on platforms:
          • amd64
          • armhf (where applicable, else armel)
            • See also: Connors, Jim. Is it armhf or armel?. Jim Connors' Web Log. 2013
            • Shell command: readelf -A /proc/self/exe | grep Tag_ABI_VFP_args
            • NOTE: The distro might be compiled as armel, though the MCU might support armhf features, regardless
      • Clozure Common Lisp (CCL)
        • towards: A Common Lisp implementation providing close integration of existing software components on the Linux platform -- typically via FFI -- including the Linux Kernel itself
  • Concept: MPI
    • Resource: ParGCL, another Common Lisp implementation
  • Concept: I2C
    • Short summary: A standard serial I/O protocol for digital electronic devices on a common two-wire signal bus
    • Supported in the two PRU-ICSS modules in the TI Sitara MCU (cf. BeagleBone Black)
  • Concept: CORBA
    • Short summary: Platform-neutral architecture for remote data access and remote procedure calls within heterogeneous networks
    • Concepts: CORBA GIOP, CORBA IIOP
    • Concept: CORBA IDL
 That, then, would effectively serve to complete the "Snapshot," this morning -- the "Snapshot" amended, then, with subsequent updates.
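As a footnote to the Virtualbox item denoted in the snapshot above: the USB "pass through" configuration can also be scripted with the VBoxManage command-line tool -- a minimal sketch, where the VM name is a hypothetical placeholder and the vendor ID should be verified against the output of lsusb:

    # enable the USB controller for a VM named "mint-lmde"
    VBoxManage modifyvm "mint-lmde" --usb on
    # add a USB device filter, so the BeagleBone is captured by the guest
    # (0451 is TI's USB vendor ID; verify the actual IDs with lsusb)
    VBoxManage usbfilter add 0 --target "mint-lmde" --name "beaglebone" --vendorid 0451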

Tuesday, May 27, 2014

OS Platform Compatibility, in a Laptop Recovery View - Filesystem Compatibility, Binary Incompatibility

Recently, my primary laptop became effectively bricked, insofar as booting from the laptop's internal hard drive and booting to the MS Windows 7 installation, on that laptop. It's a Toshiba model, an A665, with a thin form factor not affording much air circulation internal to the laptop. The laptop has a wide screen and a contemporary Nvidia graphics card. It has been my "main laptop," for a couple of years. Tasks that it is not bricked for would include:
  •  Boot from external DVD drive or flash media
  • Chroot to the Linux partition on the internal hard drive -- the SystemRescueCD can be used to manage the chroot environment of the same, for simple recovery tasks, albeit with some concerns as regards kernel compatibility and device creation under devfs; a sketch of the procedure follows, after the next paragraph
  • Mount the Windows main partition or either of the recovery partitions on the internal hard drive
I've tried to fix the MBR on the internal hard drive using TestDisk, as distributed on the SystemRescueCD. However, the laptop is still not bootable from its internal hard drive. I plan on holding onto the laptop nonetheless, until I may be able to replace the internal hard drive with an internal SSD, and to re-image the "good partitions" from the existing internal hard drive onto the ostensible SSD -- certainly not carrying over any "bad bits" from the MBR sector or elsewhere.
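For reference, the chroot procedure itself -- a minimal sketch of the steps from the SystemRescueCD shell, with /dev/sda3 as a hypothetical stand-in for the actual Linux root partition:

    # mount the Linux root partition from the internal drive
    mkdir -p /mnt/linux
    mount /dev/sda3 /mnt/linux
    # bind the kernel-provided filesystems into the chroot target
    mount --bind /dev /mnt/linux/dev
    mount --bind /proc /mnt/linux/proc
    mount --bind /sys /mnt/linux/sys
    # enter the installed system
    chroot /mnt/linux /bin/bash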

In short, I can boot the laptop from an external DVD drive, and can mount and chroot to the Linux partition on the laptop, and from there, I can mount the Windows partitions. It's fine for filesystem recovery; but with the internal hard drive in an unbootable condition, and with the recovery media unable to boot the existing Windows partition -- though it can mount the same -- to my observation, it's effectively "nil" for running any of the Microsoft Windows software installed there.

I've reverted to an older laptop, an older Toshiba, such that has Windows Vista installed. That's been sufficient for running the software required for courses at DVUO, however I am certainly interested in rendering my "Main laptop" usable, again, at least on its Linux partition, and without the DVD dongle. It would serve at least as a cross-compiling platform for the ARM architecture.

As of when my "main laptop" became unbootable, my Chromebook became the effective "mainstay" for developing on Linux, on my "home network." It's a Samsung Chromebook. Samsung Chromebooks use an ARM architecture -- I'd chosen that particular model of Chromebook for exactly that reason, when I purchased it from BestBuy, on my student budget. I've since enabled Developer Mode on my Chromebook and installed a KDE desktop environment into a chroot, using Crouton, furthermore referencing a "How To" published by HowToGeek, How to Install Ubuntu Linux on Your Chromebook with Crouton.

The chroot environment on the laptop is using Ubuntu Linux, edition 12.04 "Precise". The Linux kernel is version 3.8.11, armv7l architecture.

To my understanding, QEmu cannot provide practical virtualization on this ARM platform -- full amd64 emulation via QEmu on armv7l being, at best, impractically slow. Virtualbox is not available for Linux ARM, either. Failing any other alternatives for OS virtualization on Linux ARM, my Chromebook therefore cannot practically serve as a virtualization host.

There are a number of software applications that I would prefer to use on my Chromebook -- including Modelio, and the latest edition of the Eclipse IDE -- but which are not immediately available on Linux ARM, though they are available on amd64, that being the architecture of the Linux installation on my main laptop. Given that my ARM Chromebook cannot emulate an amd64 environment -- not to my understanding, at least -- I shall have to either do without those applications, indefinitely -- an undesirable situation, certainly -- or figure out how to recover my A665 laptop, at least so far as that it can boot without the "DVD dongle".

My A665 laptop can boot from USB. Of course, all that I need to have it boot from USB would be: the boot loader, ostensibly. There's already a working configuration of a Linux root partition and swap partition, accessible after boot, on the internal hard drive of the laptop. I might as well wish to install a minimal OS configuration onto the flash media, however -- something that could at least run a shell, in an event of that flash media being the only usable media in the laptop.
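As a minimal sketch of that bootloader configuration -- assuming GRUB 2, with /dev/sdX as a placeholder for the actual USB device, its first partition mounted at /mnt/usb, and the kernel paths and partition names being hypothetical:

    # install GRUB onto the USB device, keeping its files on the stick itself
    grub-install --boot-directory=/mnt/usb/boot /dev/sdX

    # /mnt/usb/boot/grub/grub.cfg -- chain-load the Linux root on the internal drive
    menuentry "Linux on internal HDD" {
        set root=(hd1,5)
        linux /boot/vmlinuz root=/dev/sda5 ro
        initrd /boot/initrd.img
    }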

I've yet to find a "Pre-built" Linux USB thumb drive distro that would be clearly compatible with this particular use case -- basically, just an external, chained boot loader -- moreover in a minimal filesystem configuration.

This morning, I have begun to read up about Ubuntu Core, a minimal Linux distro.

If I was able to run a virtualization environment on my Chromebook, I could begin to configure a thumb drive for using Ubuntu Core, immediately, using chroot and/or QEmu. There would be a certain matter of "platform difference", as between the host platform and the target platform, however, when my Chromebook (ARM) is the host platform, and the A665 (amd64) is the target platform.


Here is a model for a Platform view of my own PC environment, expressed in a sort of ad hoc UML notation:

[Target Platform <<Platform>>]-*----+-[Filesystem]
  |
  |+
  |
  |
  |+
  |
[Host Platform <<Platform>>]-*----+-[Filesystem]

In the situation at hand, the Ubuntu chroot on my Chromebook (armv7l) would be the ostensible "host platform," and the "target platform" is amd64 -- but the former is a host platform that is currently unable to run virtualization software, to my best understanding. The ARM and amd64 architectures would be incompatible, moreover, due to "something in regards to ABI," broadly.

Effectively, an ARM Chromebook cannot serve as a complete host platform for building and testing of a recovery disk for an amd64 OS.

Of course, a host platform running an ARM Linux OS can be used to transfer a filesystem image to a storage device supported by the host platform, and can mount an EXT2 or later-edition EXT_ filesystem from a supported storage device -- or can mount a filesystem directly from a block device image -- as in a manner of filesystem compatibility. However, if the target platform does not implement a processor architecture similar to that of the host platform, then the host platform cannot effectively chroot to the target platform's root filesystem, as the machine-dependent files on the chroot 'target' would be binary-incompatible with the machine architecture of the host platform.
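Both halves of that observation can be made concrete at the shell -- a minimal sketch, with the image and file names as hypothetical placeholders:

    # filesystem compatibility: an EXT4 image can be loop-mounted on either architecture
    mount -o loop,ro disk-image.img /mnt/image
    # binary incompatibility: inspect the target architecture of an executable
    file /mnt/image/bin/ls
    # e.g. "ELF 64-bit LSB executable, x86-64 ..." -- not runnable on an armv7l host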

In summary:
  • ARM and AMD64 Architectures : Binary incompatible, in machine-dependent files
  • EXT4 Filesystem: Compatible with any OS such as can mount an EXT2 or later EXT_ edition filesystem. Mounting an EXT4 filesystem as EXT2, of course, would entail a certain loss of features available in EXT4 (cf. filesystem journaling) for the duration of time in which the filesystem would be mounted, in the host OS.

In extension: Use cases for a dynamic host platform modeling interface

  • Filesystem Imaging
    • Integration with functional filesystem management tooling (favoring command-line tools)
  • Cross-Compilation
    • Integration with functional distribution management tooling (favoring command-line tools)
  • Network Management
    • Integration with functional user authentication interfaces (Kerberos, etc)
    • Integration of network models (Ethernet/WLAN, IPv4 protocol stack, etc)
    • Integration with networked host control interfaces
      • SSH - Command line interface
      • CORBA - Distributed network host management interface TBD