Saturday, October 17, 2015

Late Announcement of a Fork of CLORB, and Documentation Design, a CTags-to-DITA Model, and a Concept of Security Policies for Common Lisp

On reviewing a set of notes I've begun developing toward producing an unambiguous outline of concepts applied in materials science and computing, then in considering a possibility of developing a modeling service extending the topical outline of the article with models of tangible computing machine designs -- in no radical estimation of concepts of intellectual property, simply focusing on a modeling view, this morning -- I've returned to a fork of CLORB that I had created at GitHub, presently named hpan-dnet-corba. The name of the fork is derived from the name of the Hardpan Tech projects set, as well as a concept of a distributed data network. Presently, I am fairly certain that the repository will be renamed. I believe I may be fairly certain that this will not interfere with anyone's present work in software development -- the repository at GitHub, in its present state, has not been forked, starred, or "Watched". Neither have I been able to proceed to any immediate development of the codebase in the repository, having so far directed my attention to other projects. Of course, GitHub will automatically forward any URLs in the event of a repository name change.

On reviewing the codebase of the CLORB fork, this afternoon, not firstly considering any of the immediate "TO DO" items -- for instance, to ensure that CLORB will apply a portable sockets interface such as usocket, a portable threading interface of some kind to be determined, and a portable operating systems interface such as osicat; then to proceed to update the CLORB baseline for the latest edition of CORBA, as well as to develop an implementation of the CORBA Component Model (CCM) in Common Lisp, to include services for component assembly and component activation, moreover in a manner compatible with component definitions not written in Common Lisp but compiled to any single object file format -- my most immediate concern, superficial though it may be, is that I do not want to "Get lost in the codebase."

Of course, that would not be "All of the project," either, as far as updating the fork I've begun of the CLORB codebase. Likewise, I would like to develop a set of Common Lisp metaclasses for reflective modeling of the IDL definitions that will be implemented with the codebase. This, I am certain, would be relatively easy to develop, with a small modification of the IDL compiler, onto a specific namespace syntax for IDL in Common Lisp, and a compatible definition of object services for Interface Repository reflection in CORBA. This extension would depart from the traditional IDL binding for Lisp onto CORBA -- incorporating some functionality available in a Common Lisp dialect, so far as may be available of Common Lisp implementations providing the Metaobject Protocol (MOP), the MOP representing, transitively, an implementation of the Common Lisp Object System (CLOS).
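As a rough sketch of what such a reflective metaclass might look like -- assuming the CLOSER-MOP portability layer, and with all class, slot, and repository-ID names here being purely illustrative, not CLORB's own:

```lisp
;; Hypothetical sketch: a metaclass for classes implementing IDL interfaces.
;; Assumes the CLOSER-MOP compatibility layer is loaded.
(defclass idl-interface-class (standard-class)
  ((idl-repository-id
    :initarg :repository-id
    :accessor class-idl-repository-id
    :documentation "The Interface Repository ID, e.g. \"IDL:Example/Foo:1.0\"")))

;; Permit standard classes as superclasses of IDL interface classes.
(defmethod closer-mop:validate-superclass
    ((class idl-interface-class) (super standard-class))
  t)

;; DEFCLASS passes nonstandard class options as lists; canonicalize the value.
(defmethod initialize-instance :around ((class idl-interface-class)
                                        &rest initargs &key repository-id
                                        &allow-other-keys)
  (apply #'call-next-method class
         :repository-id (if (consp repository-id)
                            (first repository-id)
                            repository-id)
         initargs))

;; A class defined with this metaclass may then be queried reflectively,
;; in a manner compatible with Interface Repository lookup:
(defclass example-foo ()
  ()
  (:metaclass idl-interface-class)
  (:repository-id "IDL:Example/Foo:1.0"))
```

A corresponding REINITIALIZE-INSTANCE method would be needed for class redefinition; this sketch covers only initial definition.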

Furthermore, I would like to develop a concept of a manner of "Specialized dispatching" of Common Lisp method definitions -- if definitively possible -- such as for implementing a definition of an object method A operating on a parameter B, within an arbitrary class C, i.e. C::A(B), such that the method definition is translated to a method A having a lambda list with specializers (C B) in Common Lisp. For instances in which A is not specialized onto any class D, its unique application in C may be collapsible to a specialization onto B alone. Of course, if A would later be defined with a specialization onto any class D, then its implementation would need to be "Un-collapsed" to allow for dispatching onto both C and D. This, of course, might entail an unconventional extension onto a MOP implementation itself, but it could be developed so as to be portable onto MOP. Considering that any such "Un-collapsing" would be performed at component load time, it may be minimally expensive in regards to computational resources, while allowing -- ideally -- for something of an optimization in regards to runtime method dispatching. As to whether any further "Dispatch collapsing" could be performed ... this should all be preceded by an in-depth study of the respective MOP implementation. Presently, though I may wish to assume that a MOP implementation is already implemented to its optimal semantic and procedural effectiveness for standard method dispatching in Common Lisp, the nature of the conventional IDL-to-Lisp binding -- I think -- may seem to suggest that an even more optimal model may be possible. Not as if to split bits over a matter of byte sequencing, I think it represents a useful goal for a CORBA implementation.
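The proposed translation may be illustrated in standard Common Lisp, with all class and method names here being hypothetical:

```lisp
;; An IDL operation A on interface C, taking a parameter of type B,
;; would translate to a method specialized on both C and B:
(defclass c () ())
(defclass b () ())
(defclass d () ())

(defgeneric a (self param))

(defmethod a ((self c) (param b))
  (list :dispatched-on 'c))

;; So long as A is defined only within C, dispatch might be "Collapsed" to a
;; specialization onto B alone, with SELF left unspecialized:
;;   (defmethod a (self (param b)) ...)
;; Should A later be specialized onto another class D, the collapsed method
;; would need to be replaced -- "Un-collapsed" -- with methods dispatching
;; on both C and D:
(defmethod a ((self d) (param b))
  (list :dispatched-on 'd))
```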

So far as with regards to a concern of object modeling, there could seem to be an irony -- that here I am beginning to consider to "Put the wheels to the road," in a manner of speaking, to proceed now about CORBA development in Common Lisp, as towards a purpose of developing a no doubt intellectual-property-agreeable model repository service ... and yet that I may be unable to develop a model for this project until having produced the project to such a point at which it would be applicable in a modeling service, or else until perusing the codebase manually.

So, there is an exit condition from the semantic loop of that concern -- namely, to read the source code. Again, though, I am at the concern to not "Get lost in the source code."

In extending of a concept of "reading the source code", of course I would also want to begin to develop a comprehensive reference about the source code, namely in a documentation format external to the source code. Personally, I would not prefer to develop such a reference with an HTTP virtual filesystem service intervening, when a local filesystem service may be sufficient.

There's a side note about the reStructuredText (RST) format that could seem apropos, inasmuch as RST offers a certain number of syntactic features effectively extending the set of markup types available in the Markdown format. GitHub provides instantaneous RST-to-HTML translation, and though it may not be the most computationally efficient process, compared to writing the documentation originally in HTML and publishing it likewise in HTML format, text-oriented markup formats may typically be more succinct than HTML, and would probably be more "Friendly" to editors not familiar with an XML format.

Alternately, it may be feasible to develop a DITA-formatted topic repository about the original CLORB codebase, then to update the same topic repository with any later notes as may be added onto any reference elements generated in the immediate Lisp-to-DITA translation. Thus, though this is not an enterprise project, and it does not have an enterprise management base to manage it by, with it representing nearly an enterprise scale of endeavor -- in a small manner, as it might seem -- it can be approached functionally and manageably, so much as to document the existing CLORB codebase, even if in a manner such that the documentation may be intermediate to any updates of the codebase.

Of course, to keep the documentation synchronized with any changes to the source code, it would need attention to both the documentation and the source code, as if simultaneously, and throughout the duration of any updates to the source code.

Much of the documentation might be generated, initially, with an application of CTags -- if not of an extended tool, such as Exuberant Ctags -- then with an application of a transformation model for generating documentation from a set of templates, such as may be applied to the tags lists generated by the respective CTags implementation. Such a procedure, of course, could be performed onto any single language supported by the respective CTags implementation, given any suitable set of document templates. It might not be in all ways analogous to LXR or Doxygen, though accomplishing a result in some ways similar to Doxygen -- namely, a structured reference about source code forms -- though ideally producing documentation files in a structural format resembling the Common Lisp HyperSpec, such as may include -- by default -- the contents of any available documentation strings, and such as may be extended, potentially, with source code references -- and a corresponding URI transformation -- in a manner analogous to LXR.
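A minimal sketch of such a tags-to-documentation transformation, in Common Lisp -- assuming an Exuberant Ctags "tags" file, in which each non-header line is TAB-delimited as NAME, FILE, EX-COMMAND, followed by any extension fields; the DITA output here is a trivially skeletal reference stub, and the function names are hypothetical:

```lisp
(defun split-tab (line)
  "Split LINE on #\Tab characters, returning a list of substrings."
  (loop with start = 0
        for pos = (position #\Tab line :start start)
        collect (subseq line start pos)
        while pos
        do (setf start (1+ pos))))

(defun tags-to-dita-stubs (tags-pathname out)
  "Write one skeletal DITA reference topic per tag entry to stream OUT."
  (with-open-file (in tags-pathname)
    (loop for line = (read-line in nil)
          while line
          ;; Skip the !_TAG_... header lines and any blank lines
          when (and (plusp (length line))
                    (char/= (char line 0) #\!))
            do (let* ((fields (split-tab line))
                      (name (first fields))
                      (file (second fields)))
                 (format out "<reference id=~S><title>~A</title>~@
                              <refbody><section>Defined in ~A.</section>~@
                              </refbody></reference>~%"
                         name name file)))))
```

A production version would, of course, need to escape tag names for XML and to map them onto valid DITA topic IDs; this sketch elides both concerns.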

Thus, it might produce not so much an IDE-like web-based presentation for linked source code review, rather producing a sort of "Skeleton" -- 'tis the season -- for support of documentation authoring onto an existing codebase. It would not presume to provide a complete set of documentation files, but merely a skeletal documentation structure -- such as could then be edited by software developers, to add any documentary information that would not otherwise be available, immediately, in the source code. In a sense, it would serve as a manner of a source code annotation service, but with the annotations contained in documentation files, not directly in the source code.

In regards to a design of a template model for application in such a manner of a documentation skeleton generator tool, it might be beneficial if the documentation and templates may be maintained -- in some ways -- separately, with a semantic linking model to ensure that the documentation may be automatically "Linted" for compatibility across any changes to the source code -- "Actively linted," moreover, such that if an object is renamed in the source code, its documentation will be renamed; if removed, its documentation removed; and if any new features are added, a new documentation stub will be added for each feature.
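At its simplest, such a lint pass might reduce to a set comparison between names found in the source and names having documentation topics -- a sketch, with the function name hypothetical:

```lisp
(defun lint-documentation (source-names documented-names)
  "Return two values: names needing new documentation stubs, and
documented names no longer present in the source."
  (values (set-difference source-names documented-names :test #'string=)
          (set-difference documented-names source-names :test #'string=)))

;; e.g. (lint-documentation '("FOO" "BAR" "BAZ") '("FOO" "QUUX"))
;; first value: BAR and BAZ need stubs; second value: QUUX is stale
```

Detecting a rename, as distinct from a removal-plus-addition, would require further heuristics -- e.g. comparing the source forms bound to the old and new names.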

Speaking of features, in a context of Common Lisp, some features may be difficult to "Parse for," however -- the Common Lisp feature syntax itself, for instance, such as "#+quux" or "#-quux", or any more complex expressions such as "#-(and quux (not quo))". Perhaps it may be in no small sense of coincidence if such expressions might -- in ways -- resemble something like a C preprocessor syntax, moreover being evaluated -- namely, at the nearest approximation of "Compile time" in any Lisp reader/evaluator procedure -- in a manner analogous to how a C toolchain evaluates a C preprocessor directive, but minus any analogy of macro syntax and evaluation. In a sense, it is as if the Common Lisp read/eval/print loop (REPL) applies a preprocessor in the reader component, intermediate to a computational evaluation of forms read by the reader, then any printing of return values or stream output values as may result of the evaluation. It might seem, in some ways, "More tidy," but a whole lot less common than the language's name might seem to imply.
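The reader conditionals discussed above, in situ, together with a portable sketch of evaluating a feature expression programmatically -- noting that FEATUREP is not standard Common Lisp, though utilities to the same effect exist, e.g. in the Alexandria library:

```lisp
;; The reader elides or retains the following form depending on the
;; contents of *FEATURES* at read time:
#+quux (defun only-with-quux () :quux)
#-(and quux (not quo)) (defun unless-quux-sans-quo () :fallback)

;; A portable sketch of evaluating a feature expression directly:
(defun featurep (expr)
  (etypecase expr
    (symbol (and (member expr *features* :test #'string=) t))
    (cons (ecase (first expr)
            (and (every #'featurep (rest expr)))
            (or  (some  #'featurep (rest expr)))
            (not (not (featurep (second expr))))))))
```

A documentation tool parsing source files textually would need something like FEATUREP to decide which conditionalized definitions apply for a given target implementation.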

So, together with such a short sidebar about tool stacks in C, continuing ... the documentation system, if it can update the documentation files in parallel to any updates observed of the source code itself -- maybe it could be presented to market as a manner of a "Smart" documentation system, but aside from so many concerns of marketing -- if not updating the documentation tree in response to any changes in actual definitions of compiled objects, then as long as any "Manually written" documentation is maintained in a manner separate from any "Generated structural" documentation, the "Manually written" documentation can be presented for update, corresponding to any change in the structural definition of an object.

It might seem computationally frivolous, perhaps, to propose to keep a documentation tree simultaneously linked with an object system, with the object system's source code and documentation tree both mapped onto a filesystem managed under a Software Change and Configuration Management (SCCM) service. It's certainly a small toss from the CTags-parser paradigm, but it may be only a small toss, inasmuch. The most computationally expensive aspect of such a feature may be in simply monitoring any source code file for changes, then detecting which definitions a change applies to, then processing the documentation about those definitions so as to reflect the change in the source code -- likewise, maintaining a manner of a table between object definitions and source forms, such that if a compiled definition is replaced with a new definition, the developer may be presented with a set of convenient, if not in ways pedantic, options for modifying the documentation about the original definition.
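A sketch of the definition-to-source table discussed above, assuming a polling model rather than filesystem event notification (all names hypothetical): record, per definition name, the source file and its write date at last processing, such that a later sweep may report definitions whose files have changed on disk.

```lisp
(defvar *definition-sources* (make-hash-table :test #'equal))

(defun note-definition (name pathname)
  "Record PATHNAME and its current write date as the source of NAME."
  (setf (gethash name *definition-sources*)
        (cons pathname (file-write-date pathname))))

(defun definitions-needing-review ()
  "Return the names whose recorded source file has changed on disk.
Assumes the recorded files still exist."
  (loop for name being the hash-keys of *definition-sources*
          using (hash-value entry)
        when (> (file-write-date (car entry)) (cdr entry))
          collect name))
```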

Of course, considering that an object's definition, in a compiled form, may not be so much "Changed" in its compiled data as "Replaced" with a newly defined object of compiled data, it would certainly need some implementation-specific modifications to implement this albeit ad hoc proposal in full -- such that the software system could be programmed to detect a change in the definition of a named object, and, if maintaining a definition-source state about the name of the object (as some Common Lisp implementations may, at developer option), that the detected change could be noted in the software's program system, then followed with a query to the developer by some manner of an interactive prompt.
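An ad hoc, portable sketch of the redefinition check proposed above -- snapshot the current function bindings of a set of names, and later prompt, via Y-OR-N-P, for each name whose binding object has been replaced; the documentation hand-off is left as a stub:

```lisp
(defun snapshot-functions (names)
  "Return an alist of (NAME . FUNCTION-OBJECT) for each fbound name."
  (loop for name in names
        when (fboundp name)
          collect (cons name (fdefinition name))))

(defun review-redefinitions (snapshot)
  "Prompt for each name in SNAPSHOT whose function binding was replaced."
  (dolist (entry snapshot)
    (destructuring-bind (name . old-fn) entry
      (when (and (fboundp name)
                 (not (eq old-fn (fdefinition name))))
        (when (y-or-n-p "~S was redefined; update its documentation now?"
                        name)
          ;; Hand off to the (hypothetical) documentation editor here.
          (format t "~&;; Documentation for ~S flagged for update.~%"
                  name))))))
```

Detecting the redefinition event at the moment it occurs, rather than by later comparison, would require the implementation-specific hooks noted above.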

Towards developing a programmed security model onto Common Lisp: considering the very fact that a Common Lisp implementation may allow any item of code to redefine any existing item of code -- sometimes, as optionally filtered with "Package locks" -- we must assume that all software code evaluated by a Common Lisp implementation is implicitly trusted, moreover that no software will be evaluated that is not trusted -- an oblique sense of "Trust", by no means programmatically defined. Perhaps the security policy model defined in Java could seem to be of some particular relevance, at that, short of any ad hoc and distinctly not code-related approaches to ensuring a manner of discrete security of software code and program data.
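Package locks, where available, are implementation-specific; under SBCL, for instance, a package may be declared locked at definition, such that subsequent redefinition of its symbols' bindings signals an error unless the lock is explicitly disabled:

```lisp
;; SBCL-specific: the (:lock t) option marks the package as locked, so
;; that e.g. redefining STABLE-ENTRY-POINT from outside signals a
;; package-lock-violation error.
#+sbcl
(defpackage #:trusted-api
  (:use #:cl)
  (:export #:stable-entry-point)
  (:lock t))
```

This is, of course, a far narrower mechanism than a Java-style security policy -- it guards name bindings, not capabilities or resources.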

By no means will this project represent any manner of trivial convenience. Even in the simple effort of developing so much as a design for a documentation system, it is somewhat apparent that there may be some "Lower level concerns" -- such as that Common Lisp language development ... may be behind a few updates, as with regards to the "State of the art" in commercial software development, quite candidly. Though Common Lisp is a computationally comprehensive programming language, if Common Lisp may be applied within a secure, trusted commercial communication system -- firstly, we may wish to consider, each, our own estimation as to how far it is a trusted programming language, juxtaposed to any programming language as may provide a distinct level of security policy definition and of security policy enforcement, ostensibly with such security policy features being applied throughout commercial software systems.

The author of this article is not one to place any chips on the table, before an analysis of such a concern.

It may be not as if Common Lisp was vastly behind other programming languages -- short of anything in regards to "Warm fuzzy" marketing -- but the security policy issue may be approached, perhaps, without any too broadly sweeping changes to any single Common Lisp implementation.

So, but there was a discussion about documentation in this article -- albeit an in many ways breezy, verbose discussion, an in-all-ways rhetorical discussion, likewise lacking any great presentation of detail. This article describes a manner of a semantic model for working with documentation and source code, in parallel. This article does not go to great lengths for a description of the DITA format, or XML stylesheets, or the Document Object Model.

Presently, this article returns to the original topic, of generating documentation from CTags files. The topic of IDE-to-source-code-to-object-definition linking should be approached with a manner of a later demonstration, but first there would need to be an IDE compatible with the demonstration. Secondly, the topic of how-to-prevent-unwanted-object-redefinition-scalably-and-well could be approached with a much more detailed analysis.

Towards a manner of an event-oriented model in regards to definitions in Common Lisp programs, appending a few ad hoc notes:
  • Types of Program Objects, "Program Top Level"
    • Variable Definitions
    • Type Definitions
    • Class Definitions
      • Structure Class Definitions
      • Condition Type Definitions
      • Standard Class Definitions
    • Functions
      • Standard Functions
      • Funcallable Instances
      • Generic Functions
    • Method Definitions
    • Macros
    • Special Operators
    • Packages 
    • System Definitions
    • Declarations
      • FTYPE Declarations
      • Type Declarations onto Variables
    • Closures and Closure Environments
      • Concept: Null lexical environment, i.e. global environment, as an effective "Top level closure"
      • Concept: Redefining a lexically scoped object defined in a non-global environment, A-OK ?
      • Concept: Redefining a 'special' scoped object defined in a non-global environment, A-OK ?
  • Events
    • Event: Program Object Definition
      • Instance: One of Defvar, Defparameter, Defconstant
      • Instance: LET
      • Instance: Defclass, or related CLOS, MOP protocol procedures
      • Instance: Defun
      • Instance: Defgeneric
      • Instance: Defmethod 
      • Instance: Defpackage
      • Instance: Defsystem or similar
    • Event: Program Object Redefinition
      • Instance: SETF  
      • Instance: SETQ
      • Instance: Object definition onto a previously defined object
        • Re-DEFCONSTANT: Implementation-specific handling [exists]
    • Event: Program Object Definition Shadowing
      • Not expressly 'redefinition', more entailed of both closure definition and component program object definition 
      • Synopsis: a lexical scope is defined in which a new definition is created for a program object, in a manner as to effectively shadow a definition previously created in a containing lexical scope -- a definition furthermore bound to a single name for the definition's program object type
      • May be a part of a shadow => redefine procedure
      • May or may not be approached "Maliciously"
      • May produce unintended side-effects in software programs, e.g. if *STANDARD-OUTPUT* is shadowed as to pipe all data through a digital wormhole to an alternate universe
    • Event: Program Object Deletion
      • Note: Though garbage collection is assumed of implementations, Common Lisp (CLtL2) does not define any single 'finalize', 'delete', or 'free' procedure, such as could be applied for dereferencing and deallocating objects manually
      • Instance: makunbound   (global symbol table)
        • Note: Whether or not this would actually result in the deletion of the program object, or merely in the "Un-binding" of the program object to any single symbolic name, may be implementation-dependent
      • Instance: fmakunbound (global function namespace)
        • Does not immediately affect any call sites at which the respective function has been compiled inline
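
As brief examples of the unbinding operators listed above:

```lisp
(defvar *counter* 0)
(makunbound '*counter*)   ; *COUNTER* no longer has a value binding
(boundp '*counter*)       ; => NIL; the symbol itself still exists

(defun helper () :ok)
(fmakunbound 'helper)     ; removes the global function binding
(fboundp 'helper)         ; => NIL
;; Note: whether the previously bound objects become garbage immediately
;; is left to the implementation; unbinding removes only the name binding.
```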