Monday, September 28, 2015

Calculating with CORBA – A Showy Prognostication Of Rational Numbers, Computations, and Communications in Application Systems

Reading over an embarrassingly obtuse description that I had written, one day in another place, about a software system that I was referring to as igneous-math, I would like to denote henceforward that the article was largely an effort to describe a simple sense of novelty about the 'ratio' type as implemented in Common Lisp. Without immediately producing a study of the IEEE standards for representing floating-point values in computing systems, and without any lengthy sidebar on the concept of fixed-point arithmetic as developed in Forth, I recall that my initial thesis had centered on the very limits of finite data registers in computational machines.

In the following article, the author will endeavor again to circumnavigate any specific consideration of computing hardware, and instead to focus on some concepts of communications as developed with regard to CORBA.

Assuming a fixed-size data register in any single numeric computation – whether of a 24-bit, 32-bit, 64-bit, or other bit length of numeric encoding – any number may be represented only to a precision bounded by the size of the data register in which the number is encoded.
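By way of a minimal Python sketch – an illustration of the author's point, not anything from the original discussion – the finite register is visible in ordinary floating-point arithmetic, where a 64-bit IEEE 754 double cannot represent the decimal 1/10 exactly, while an arbitrary-precision ratio type can:

```python
from fractions import Fraction

# In a 64-bit IEEE 754 register, 0.1 is rounded to the nearest
# representable binary fraction, so the usual identity fails:
print(0.1 + 0.2 == 0.3)   # False

# The exact rational value actually stored for the literal 0.1:
print(Fraction(0.1))      # 3602879701896397/36028797018963968

# A ratio type of unbounded precision carries no such rounding:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```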

This article will not go to length juxtaposing the different methods of discrete digital encoding of numeric values – whether for natural numbers, signed integers, unsigned integers, fixed-point values, floating-point values, numeric ratios, or complex numbers. Excepting the field of ratios and the field of complex numbers, each of those topics may be referenced immediately onto the Common Data Representation (CDR) format, as standardized in the specifications of the Common Object Request Broker Architecture (CORBA). Though that may not serve to describe such topics in any manner of comprehensive detail, referencing these topics onto the CDR encoding may at least serve to provide a common point of reference – in regard to numeric computing – principally orthogonal to the implementation of any single programming language or programming architecture.

The CORBA specifications are publicly described in core specification documents, extended in any number of supplemental specification documents published by the Object Management Group (OMG), pragmatically augmented with domain-specific case studies described in any discrete number of documents in or outside of the OMG specifications set, implemented in any number of development tools, and applied in any singular set of software products. Broadly, CORBA provides a platform-agnostic framework for applications, such as may be developed to extend any number of fundamental CORBA object services – software components interacting via CORBA object services at the application layer and, on a TCP/IP network, employing transport semantics at the session (cf. SECIOP) and presentation (cf. ZIOP) layers, in a view of networked applications projected onto the conventional seven-layer OSI model.

In such a context, CORBA serves to provide a single, standardized, baseline object service architecture, such as may be augmented in applications of supplemental CORBA object service specifications.

In regard to applications utilizing CORBA IIOP – IIOP being the Internet Inter-ORB Protocol, principally an extension of the General Inter-ORB Protocol (GIOP) – applications may apply the Common Data Representation (CDR) format for stream encoding of data values, such as may be reversibly encoded and subsequently decoded onto protocol request sockets and protocol response sockets in a context of CORBA IIOP.

Short of the orthogonal concern of encoding object references, the CDR format provides for an encoding of atomic, primitive numeric values – an encoding standardized essentially onto stream objects, principally in a manner independent of any single microcontroller implementation, but dependent on a stream-oriented communications medium. Though the CORBA architecture may serve to encapsulate much of the nature of the stream-based encoding of data values in IIOP, inasmuch as an IIOP application utilizes TCP sockets for its implementation, the IIOP implementation therefore utilizes a stream-based encoding. Whether or not GIOP may be extended, alternately, toward any more computationally limited encoding – perhaps an encoding of protocol data values onto an I²C bus, as may be for application within a light-duty parallel computing framework, or alternately an encoding of GIOP onto shared memory segments within the process model of any single operating system – the CDR encoding onto IIOP serves to provide a standard encoding for atomic data values, with CDR moreover providing a basis for encoding object references and any number of application-specific data values in CORBA applications.
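As a rough sketch of the byte-order concern that CDR addresses – in Python, purely for illustration, and no part of any CORBA implementation – the same 32-bit value admits two octet orderings on a stream, and a receiver can decode either one given a flag indicating the sender's order (CDR's "receiver makes it right" arrangement):

```python
import struct

value = 1000

# The same 32-bit signed long, encoded in each octet order:
big_endian = struct.pack(">i", value)
little_endian = struct.pack("<i", value)

print(big_endian.hex())      # 000003e8
print(little_endian.hex())   # e8030000

# Given a byte-order flag (as carried in the GIOP message header),
# the receiver selects the matching decoding and recovers the value:
def decode(octets, little):
    return struct.unpack("<i" if little else ">i", octets)[0]

print(decode(big_endian, False) == decode(little_endian, True) == 1000)   # True
```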

Thus, toward developing a platform-agnostic view of applications of computing machines, it may be fortuitous to make reference to the CORBA architecture – even in so limited a regard as the CDR encoding for primitive numeric values.

Concerning the CDR encoding of any single data value onto a communication stream, the CDR encoding may be observed as being compatible with the many machine-specific methods for encoding data values onto finite memory registers – data values as may likewise be transmitted across finite processor bus channels within any single computing machine. Certainly, the communication of a data value – within a CORBA application framework – does not cease at the transmittal of the data value to or from an IIOP application socket.

From the perspective of a CORBA implementation, the implementation's own concerns – in regard to communication services – might not extend much further than providing an interface for encoding and decoding of data values onto CORBA data channels, in a manner orchestrated with CORBA object adapters. The trope that CORBA is for application in regard to middleware services – aside from any suggestion, to which the author may take some exception, that CORBA is not trendy any more – does not say much, at all, as to how a computing machine may apply any single data value, once a data value is transmitted over a CORBA object services network. Naturally, CORBA is a feature of a communications domain. In a manner of a domain-oriented view, CORBA dovetails with a domain of computer operating systems and a domain of microcontroller design, furthermore – inasmuch as a microcontroller provides computing services to an operating system, and an operating system may be applied to provide computing services to a CORBA application.

How, then, is UNIX not relevant to CORBA applications?

The author of this article is immediately preoccupied with developing a thesis in regard to the Common Lisp programming language, the FreeBSD operating system, and the potential for applications of CORBA in networked computing – though perhaps it may seem as though the author is simply distracted by some few archaic trends in the social universe. The author would not propose any too narrowly deterministic model of such a thesis, whether or not it may be easily proved to be a computationally sound and comprehensive architecture for computing in any arbitrary networked systems environment. Common Lisp has perhaps not been a great favorite for applications programming; however, it may momentarily serve an academic end, in a manner of thesis development. It might be estimated, moreover, that most theses developed about Common Lisp would likewise be developed as theses about concepts of artificial intelligence – logically, theses describing applications of logic in mathematics and in computing systems. Perhaps it has not been since the 1970s and 1980s that Lisp development was shaped by microprocessor development. Across the popular AI Winter of the era, perhaps the old scrolls have not all been lost to the shifting ice of the populist glacier.

As though lost in the library of historic record – but AI Memo 514 is certainly no manner of a Cardiff Giant. It may be a memo of an academia in which the plain logic of microprocessor design was paramount – but perhaps the contemporary computing market may seem all preoccupied with so many novel products of contemporary microprocessor manufacture, if not with applications of the newest high-speed microcontrollers and software designs in all the novel, new commercial systems. In regard to how so many novel, new toys are applied to present any manner of noise to the contemporary engineering trades – if it may be simply a manifestation of a lot of marketing fluff atop a lot of repetitive iterations in social development – how novel is a simple endeavor to disembark from the salubrious consumerism and recover whatever remains of the sciences, aside the grand industrial merry-go-round?

It might seem as if it were only a metaphor of a brass ring – a token, a trinket grabbed in a singular merry-go-round's long spin. Is it a metaphor of a trophy, then, ere it is returned to the vendor's brass ring dispenser? If there is not a metaphysics of such a commercial mechanics, perhaps Whitehead's thesis really was disproved? May the circus resume, forthwith, if there is no more of a meaning beyond so many singular objects of materialism.

Before returning to this thesis, the author denotes a concept of pi in which pi is calculated as a ratio.

Though perhaps not all definitions of number would include ratio as a type of number, insomuch as a ratio may be defined as a type of rational number, it may be a type of number of some relevance for application in computations onto other types of rational number. Surely, then, between the concerns of computation and of measurement, some variance in measurement may entail some variance in computation. Consider that a discrete measurement – a measurement of any estimable precision – may be applied in a computational model, toward calculating any intermediate numeric values as may be, lastly, applied in a physical analysis or physical design of a system. Though it may seem practically naive to assume that a perfect measurement may be provided to any computational system, insomuch as a computational system may produce a predictable set of consequent computations – given a known set of initial measurements and a discrete formula for calculations – the computational system should not itself be assumed to operate at variance from any set of initial measurement values.

If there may seem to be a commercial novelty of the Common Lisp functions, 'rational' and 'rationalize', those two functions alone may be applied as a computational entry point – each, separately so – for performing all intermediate calculations within a computing machine as calculations with rational numbers. There may be, in that, likewise a toss of the hat toward fixed-point numeric calculations in Forth. Perhaps it may be said, moreover, as to "Go further": any immediately fixed-point numeric value may be converted to a ratio numeric value, as to conduct any subsequent calculations wholly onto rational numbers, whether or not producing a decimal number for numeric denotation as a consequent value.
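The distinction between those two functions can be illustrated, loosely, with Python's fractions module – where Fraction(0.1) plays the role of 'rational' (the exact ratio of the float as stored) and limit_denominator plays a role loosely analogous to 'rationalize' (the simplest ratio consistent with the float's limited precision). This is an analogy, not the Common Lisp semantics themselves:

```python
from fractions import Fraction

# Exact ratio of the stored 64-bit float -- loosely, CL's RATIONAL:
exact = Fraction(0.1)
print(exact)                  # 3602879701896397/36028797018963968

# Simplest ratio within the float's precision -- loosely, RATIONALIZE:
simple = exact.limit_denominator()
print(simple)                 # 1/10

# Once converted, subsequent arithmetic proceeds wholly in rationals:
print(simple * 3 == Fraction(3, 10))   # True
```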

It is not unprecedented that a computing machine may usefully produce calculations deriving of rational numbers. In an immediate manner, a computer of finite memory may not actually contain an irrational number, except insomuch as an irrational number is encoded of rational quantities. A concept of irrational number should not be removed from number theory, of course! It is simply not a concept occurring in computations within finite memory spaces.

It is not merely to make a drab show, therefore, if a rational number may be calculated as estimating a built-in decimal value of pi to a rational numeric constant. Such a constant, rational value may be applied in intermediate computations wholly onto rational numbers. The rational-numbers computer may only need to convert any decimal input values into a ratio format, then to produce a decimal representation – to any single consequent precision – of any numbers calculated finally of the rational computation.
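A short Python sketch of that workflow, strictly as illustration: the machine's built-in pi is itself a finite ratio, which may be recovered exactly, applied through intermediate computations wholly in rationals, and rendered as a decimal only at the end. The radius value here is arbitrary, chosen for the example:

```python
import math
from fractions import Fraction

# The built-in "decimal" pi is a 64-bit float, hence already a ratio:
pi_exact = Fraction(*math.pi.as_integer_ratio())
print(pi_exact == Fraction(math.pi))   # True

# A classical low-denominator rational estimate of pi (the Milü, 355/113):
pi_est = Fraction(355, 113)

# Intermediate computation wholly in rationals: area of a circle
# of rational radius 7/2, using the rational pi estimate.
radius = Fraction(7, 2)
area = pi_est * radius ** 2
print(area)                            # 17395/452

# A decimal representation is produced only as the final step:
print(f"{float(area):.6f}")            # 38.484513
```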

Of course, this does not indicate any trivial methodology for rational numeric computations onto transcendental functions. Insofar as the trigonometric transcendental functions may be derived of a rational estimate of pi, as on a unit circle, there may be a channel for computations with rational numbers in those transcendental functions. The author, regrettably, is not as immediately familiar with calculations of Euler's constant.
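One such channel may be sketched, hypothetically, as a truncated Taylor series evaluated wholly in rational arithmetic – in Python here, for illustration; the function name and the term count are this sketch's own assumptions:

```python
import math
from fractions import Fraction

def rational_sin(x, terms=10):
    # Truncated Taylor series for sin, with every intermediate value
    # an exact rational; x should be a Fraction.
    total = Fraction(0)
    for n in range(terms):
        k = 2 * n + 1
        total += Fraction((-1) ** n, math.factorial(k)) * x ** k
    return total

# sin(pi/6) = 1/2 exactly; with the rational estimate 355/113 for pi,
# the rational series lands within the estimate's own error:
approx = rational_sin(Fraction(355, 113) / 6)
print(isinstance(approx, Fraction))           # True
print(abs(float(approx) - 0.5) < 1e-6)        # True
```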

Toward a representation of ratio numeric values and complex numeric values – no mere propaganda, the Common Lisp type system providing definitions of such numeric types, in computing – of course, it may be a trivial affair to define an IDL module, an IDL interface, and finally an IDL data type in which a platform-agnostic definition of either the ratio or the complex number type may be provided, in a computing system. That being denoted as toward a consideration of a transport-level encoding of numeric values, it may not seem to say much toward an algorithmic, if not functional, view of computing.

This article, presently, returns to a thesis about CORBA.

Of course, CORBA IDL itself is not the computing element of a CORBA system. Inasmuch as an implementation of a CORBA IDL module may be developed in an application programming language, then applied within an object adapters system, CORBA IDL may serve something of a complete, modular role in regard to communications among computing systems. This article has denoted, albeit somewhat obliquely, the CDR encoding for data values in an IIOP framework. Likewise, as in regard to the subclauses of the CORBA core specifications in which CDR and IIOP are described, it may be said to amount to much of a description of a platform-agnostic communications model, such as is defined in a manner for application within a broader, platform-agnostic object services framework as may be denoted, broadly: CORBA.

May it be, then, as if to limit the potential applications of CORBA frameworks, if the computational features of CORBA systems may not often be found described in so much specific detail as the CORBA communications model itself? One may wish to speculate as to how the platform-agnostic nature of the CORBA specifications may seem to effectively obscure any single CORBA implementation from immediate duplication, without limits of immediate licensing agreements. In some regards, such a coincidence could be quite fortuitous, overall. Inasmuch as CORBA may find application in defense systems, such applications should probably not be danced out for any idle opportunism, however any CORBA applications may be momentarily denoted of a popular discourse. Analogously, if CORBA may be estimated to find applications in some social infrastructure systems – as seems estimably possible, in consideration of specifications defining platform-agnostic applications of CORBA in telecommunications systems – this says almost nothing in regard to any manner of actual computing systems. Not to offend the reader with the author's own simple obtuseness: a communications specification is not a microprocessor, just as much as a pie plate is not a peach pie.

Sure, a discrete number of peaches may be applied in preparing any discrete number of peach pies, at any estimable formula of efficiency and other results. Analogously, the CORBA specifications are not either like recipes for microcontroller applications. (Ed. note: There should be a cute metaphor to Hexbugs(R), here. Though it might seem to startle a reader if juxtaposed to the fantastic, fictional Replicators of the Stargate SG-1 universe, in science fiction literature, the metaphor of a microchip-sized automaton of a bounding character – outside of any too far Arachnid-like species of science fiction creature – the simple ant-like Hexbug Nano(R), as a metaphor, could be apropos.)

So, a student of computing might feel in some ways stymied by the communications-oriented CORBA specifications. The proverbial magic of CORBA may not be so much in the proverbial network chatter of a CORBA object system.

Sunday, September 27, 2015

Thoughts - Nomenclature and Taxonomy

As a topic that I've found some concern about: Even in so much as developing my own set of notes at Evernote -- and marking such notes in any ways meaningfully, with labeled tags -- I've begun to develop a sort of an ad hoc taxonomy about a number of concepts in computing.

Though I've not lately been refreshing my own study of the bibliography of taxonomy, it's a topic that I'm aware of as existing, with potential applications in a number of theoretical contexts, including: Topic Maps, as in reference to the XTM topic maps format and Ontopia; the Simple Knowledge Organization System (SKOS), as in reference to RDF, RDF Schema, and the Web Ontology Language (OWL); the Darwin Information Typing Architecture (DITA), as in reference to types of DITA topic elements. In an applications sense, I've observed that a concept of taxonomy may be relevant in regards to web services for content curation, web content annotation, and web content development, as in reference to both of Evernote and Diigo.

Both of Evernote and Diigo allow for annotation of web content. Evernote might seem to provide something more of a container view of web content, juxtaposed to Diigo. Diigo might seem to be more readily usable, at desktop PCs, for web content annotation. Evernote and Diigo both provide services towards content development – Diigo, with Diigo outliners, and Evernote with Evernote articles.

Focusing on the "Content tagging" features of each of Evernote and Diigo -- with a momentary reference to the original Annotea project of the World Wide Web Consortium (W3C) -- Evernote allows for hierarchical organization of content tags. This evening, I've noticed one simple example in which that occurs, as to a particular highlight: annotating a concept of instruction set architecture, as of any of a Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or other species of instruction set architecture developed into a tangible microcontroller platform. Orthogonally, there's a concept of instruction set architecture denoted in MARTE 1.1.

To denote each of a concept of CISC and a concept of RISC as being subsumed by a concept of instruction set architecture may not seem ideally like a perfect taxonomy, but I think it suffices for my own personal notes, in my own Evernote notebooks. Given that there are any number of instruction set architectures that may not be immediately denoted as either CISC or RISC architectures, there's a whole history of computing that was developed before those terms came en vogue. Some of the literature of the earlier computing might seem particularly clear in developing concepts with regards to logical microcontroller design, moreover.

I think the classification of the CISC and RISC concepts as being subsumed by a concept of instruction set architecture provides both a semantically meaningful construction and a flexible construct allowing for later description of instruction set architectures that may not be immediately identified as either CISC or RISC architectures -- however anyone may endeavor to define exact limits to a definition of either of those concepts, as implemented in any single, tangible microprocessor.

Of course, a concept of RISC and of CISC may likewise be subsumed of a concept of microprocessor architecture -- as by way of a concept of instruction set architecture.
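That subsumption chain is simple enough to model in a few lines -- here a hypothetical Python sketch of a tag taxonomy as a child-to-parent map, nothing drawn from Evernote's actual data model:

```python
# Child -> broader-concept map, mirroring the tag hierarchy described above.
BROADER = {
    "CISC": "Instruction Set Architecture",
    "RISC": "Instruction Set Architecture",
    "Instruction Set Architecture": "Microprocessor Architecture",
}

def subsumed_by(concept, ancestor):
    # Walk the broader-concept chain, testing for the ancestor.
    while concept in BROADER:
        concept = BROADER[concept]
        if concept == ancestor:
            return True
    return False

print(subsumed_by("RISC", "Microprocessor Architecture"))   # True
print(subsumed_by("Microprocessor Architecture", "RISC"))   # False
```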

This article -- which I had wanted to write, originally, about taxonomy -- is, by now, also an article about concepts of instruction set architecture. The reader might notice that this article, moreover, is absent of any cross references to a Wiki. That's not to snub the Wiki editor community; simply, the reader may Google any number of these concepts. Personally, I think that my own study is better served with my own application of Evernote, Diigo, and towards a DITA model for content development, not to lead too much into any singular Wiki narrative.

This web log I keep, I think, is just a forum to be a little more chatty about some concepts, essentially outside of any specific social channels online. I've been trolled here exactly once -- therefore, I would expect anything similar again. Though I've yet to be trolled here more than once, I now understand that I may be trolled here at any time. So, I'm naturally going to be a little more edgy-terse in writing at this, my tech web log -- I might learn to be more cheerfully natured about the overall "Troll potential" of this supposedly anonymous Internet, though in no immediate sense of warmth for trolling. It sure makes it difficult to write about any cheerful concepts, here. I've a sense that's by no means "Only Online," either. However momentarily flexible the Internet media might seem, of course, any single page with a comment section might become an instant forum for mud-slinging.

Why is anyone so ad hominem online? I honestly cannot imagine. I'm not one to escalate or deter anyone's own fantastic ideas, as such, either. Does that make me an easy target, online, or just a subtle observer of semantics, though? So sure, maybe sometimes I draw fire from an Internet troll. Big loss, huh?


Further than defining a correlation between a concept of CISC architectures and a concept of RISC architectures -- both concepts being subsumed by a concept of instruction set architecture, and this being represented simply in a set of Evernote 'tags' -- that concept then being subsumed by a concept of microprocessor architecture, I notice that all of these concepts may seem to fit well in an overall taxonomy of nomenclature about computing. Being a student not so much of the nomenclature as of the concepts denoted by the nomenclature, in computing, I wouldn't want to be tedious about nomenclature. I think it's an idea towards keeping so much as my own webliography well organized -- not to say of any broader sense of bibliography, and how to integrate a bibliography system, and Evernote, and Diigo.

Of course, in thinking furthermore of developing a topic repository model onto DITA, I'm not thinking of anything "Like FOLDOC", as FOLDOC is far too friendly a medium for such a serious demeanor as I think I must keep, in most communications in the modern world online. Sometimes, I can't help but have an impression that some lot of the readership might be just waiting to reach out and take a swipe at something -- judging only by previous experiences, no more wishful thinking from me for how people apparently are, online.

FOLDOC publishes a number of reference pages, itself, mostly of a colloquial register. It's a manner of a topic-oriented reference base, about computing, alternate to Wikipedia.

So, there's already FOLDOC, no rush to develop any excess of an additional topic repository about contemporary concepts in computing -- any repository about information science, the physical sciences, mathematics, logic, and marketing, and anything else that may be defined about computing, including: Concepts of human-computer or human-machine interface design (HCI or HMI, respectively), HCI/HMI accessibility, or plain novelty.

FOLDOC exists, great thing.

Concerning Android App Identities as Represented in the Android OS

One of the challenges facing the development of an integrated bibliography system on the Android platform arrives, simply, of the Android OS' app UID/GID model – moreover, of Android's application of the Linux filesystem permissions model, in any ways augmented, as some documentation may denote, with a mandatory access control (MAC) application extending SELinux, in light of the (albeit never formally adopted) POSIX 1003.1e, i.e. "POSIX 1e", draft, in Android KitKat and other Android release branches. POSIX 1e finds application in Linux, furthermore, with the Linux process capabilities implementation. Analogously, in the FreeBSD operating system, POSIX 1e finds application in the FreeBSD MAC implementation and in the Capsicum implementation.

Not as if to abjectly criticize the Google Android project – the app/filesystem permissions model in KitKat being of some notable inconvenience for file storage to SD card media, on Android platforms – there is probably a logic to the changes made in regard to these features in Android 4.4, i.e. "KitKat", and other Android release branches. (Ed. note: cf. Android Content Providers [AndrDeveloperST]) In whatever ways the MAC model may be involved in the issue, it may seem to center primarily on app UID specification – the issue of the inconvenience for platform users, in regard to files that must be accessed with multiple Android apps on a single Android appliance – whether or not specifically to access files stored on external SD card media, logically an orthogonal concern, orthogonal to the permissions model of a Google-certified Android appliance's own on-device storage media, there with further orthogonal reference to filesystem types (bibliography on file).

On the Android platform, an app's UID and GID are computed at time of app installation [AndrDeveloper]. App UID and GID information on the Android platform is not stored in the common passwd and shadow files under the /etc directory – common insomuch as in regard to UNIX platforms applying a POSIX and X/Open model. Rather, in one regard, UID and GID information is stored – on the Android platform – with reference to an 'acct/uid' directory [BD2015]. (Ed. note: See also the GET_ACCOUNTS manifest permission [AndrAPI_Mperm])


From a developer's perspective, there are application settings available for sharing an app's identity with an existing app, as in regard to the sharedUserId attribute [AndrDeveloperPerms] of a single app's APK manifest. The corresponding sharedUserLabel manifest field, moreover, allows for human-readable labeling of apps' shared user ID [AndrAPI_R]. Albeit, the sharedUserLabel may be applied in an ambiguous situation – in that a sharedUserLabel may be specified in one app's APK manifest, but would be applied for a UID shared among multiple apps. This article will not further investigate how the Android OS may establish a systematic precedence among shared user labels, as in a conflict of differing sharedUserLabel specifications for a single sharedUserId.

In addition to the sharedUserId manifest attribute, the Android API defines a permissionGroup object type [AndrAPI_R], corresponding to a 'permission-group' tag as may be specified in an Android APK manifest file [AndrAPI_RStyleable].

Referring to the reference documentation about these multiply-linked OS features in Android, it seems there is an Account Service in the Android OS, specifically referenced from the GET_ACCOUNTS manifest permission [AndrAPI_Mperm] – perhaps an elusive feature of the Android operating system, the Android OS Account Service.

There is inquiry on a topic in regard to storage of user account information on the Android platform. [OvOp2015] Probably, the Android OS Account Service may be a topic of a commonly linked reference, in regard to APK principal peer identity and the UID and GID values computed at time of APK install.
(Ed. note: Or not. On further study, it seems that the Android Account Service is rather a service for managing a user's web account information, centrally, on any single Android appliance.)

Perhaps this study may be extended, in regard to applying Kerberos as a network peer authentication service on Android appliances. There is some existing work in regard to implementing Kerberos services on the Android platform (bibliography on file), though perhaps nothing immediately about whether or not a network admin should allow authentication to a network service, e.g. SSH, with a Kerberos ticket from a thin-client appliance – an orthogonal topic, certainly. For web apps, of course, there's OAuth/OAuth2.

 [Article Draft Nr. 2]

Webliography

Thursday, September 24, 2015

Lua - A Comment

Though I've been considering beginning to take a closer look at EFL WebKit, potentially for application with Common Lisp, presently I've become somewhat distracted by the network configuration on my small LAN. Towards considering a scripting language -- alternate to sh -- for automating a time-based iterative function on the LAN gateway, I've not considered Common Lisp as an option for this application. Though I think Common Lisp has a lot to offer to systems programming, I think it may be a little much to apply a complete Common Lisp implementation simply for automating such a function on the LAN gateway.

Considering scripting languages, I'm young enough in years to remember when Perl was first becoming a popular scripting language. I remember, likewise, my own difficulty with making sense of the affability with which Perl's baroque syntax was presented in popular discourse. Presently, I don't believe a person should ever have to indulge in any manner of intellectual gymnastics to understand the syntax and semantics of an interpreted programming language. I don't have any great sense of appreciation for the eclectic nature of Perl. Candidly, I think it's more a novelty than a necessity. I don't find it to be a particularly helpful novelty, either.

So, I'm also going to take a break from so much as considering to develop any applications of Common Lisp. I'm sure Common Lisp is great for making adventures about computer science.

I've glanced at the Ruby language. I've heard that Ruby is applied in Puppet Labs' Puppet tools. Not to rain on anyone's fair, as far as scripting languages, I'm thinking to favor Lua instead.

Lua is a programming language with a few applications that I know of. The first one I've thought of, of late, has been the application of Lua in the block-oriented builder game universe of the Minetest platform. Then again, there are also the Texas Instruments TI-Nspire graphing calculators, which feature a Lua interpreter. Even in this era of the phablet computer, I think that the TI-Nspire calculators are some really profound calculators. I'm impressed with their possible applications in laboratory science and in mathematics.

This evening, I've also found a couple of implementations of Lua onto CORBA, and a note about a Lua scripting extension for Nginx. The latter could be of use towards developing a manner of a "Web Social" presentation model onto Minetest, but that's looking ahead.

Searching the FreeBSD ports repository, I see that there are a number of FreeBSD ports available for programming with Lua. Towards adopting Lua in integrated development environments, there's also a lua-mode for Emacs, and -- in the Eclipse IDE -- there's also the Lua Development Tools platform, previously Koneki.

There are a number of books about programming with Lua, at Safari Books Online. Reader's interests may vary.

There are also Lua interpreters available on the Android platform.

So, it should be a great lot of fun. Why, then, am I writing a web log article instead of writing my first lines of Lua code? Is it that I am bewildered that Common Lisp, for all its Turing completeness, still has not re-emerged from its AI Winter in computer science academia and into the commercial market? Am I perhaps a bit disconcerted by the character of that same academia, in what I have seen of it, and rather too personally so?

I think both of those are why I would like to write a while longer, before studying much more in detail about Lua -- but of such a rough time, veritably on the dark side of the moon, what is there to ever write usefully of it?

Not a lot.

Wednesday, September 23, 2015

Cloning WebKit in a Few Easy Steps

The Git changeset repository developed by the WebKit project is not a small repository. In order to clone the WebKit Git repository for software development, an incremental series of Git fetch calls may be useful.

An intermediary Git gc call may serve to optimize the repository clone, before the final 'unshallow' Git fetch procedure.

Example: Incremental Git clone/fetch - WebKit Git repository.
#!/bin/sh

WHENCE="git://git.webkit.org/WebKit.git"
WHERE="webkit"

git clone --depth=1 "$WHENCE" "${WHERE}"

cd "${WHERE}" || exit 1

# Deepen the shallow clone in increments, rather than in a single large fetch
for LIM in 100 1000 5000 10000 20000 40000 60000; do
    git fetch --depth="$LIM"
done

git gc --aggressive --prune=now

exec git fetch --unshallow

Configuring Git index-pack

The following configuration may not be optimal. These settings may, however, serve to alleviate some conditions that could otherwise cause Git index-pack to fail when repacking the data of the WebKit source code repository.
git config --global pack.deltaCacheSize 512m
git config --global pack.deltaCacheLimit 5000
git config --global pack.windowMemory 100m
git config --global pack.packSizeLimit 100m

Post-Checkout Configuration

The WebKit Git repository is constructed, effectively, as a Git proxy onto the WebKit SVN repository. In order to draw changes directly from the WebKit SVN repository, the WebKit project has provided a utility shell command [cross reference]
cd "${WHERE}"
Tools/Scripts/webkit-patch setup-git-clone

For developing with the EFL WebKit integration, a shell command is available as an initial dependency management utility [cross reference]
${WHERE}/Tools/efl/install-dependencies

The "See Also" Section

Developers may wish to refer to more detailed documentation about Git.

Further documentation is available about developing with the EFL integration for WebKit.

EFL is a core feature of the Tizen framework. Additional documentation may be available about applications of EFL, with reference to Tizen.

Towards Developing a DITA Editing Platform on FreeBSD

Having developed a certain sense of interest with regards to writing technical documentation, I've been studying, periodically, about the Darwin Information Typing Architecture (DITA). There are a number of things that I would like to highlight about DITA, in the sense of a thesis document – I've tried to collect my notes, as such, with Evernote. Thus far, I have been unable to write any single thesis document about the topic, developing rather only a series of itemized lists of notes and a small bibliography.

Presently, I think it may be as well to begin writing some narrative text about DITA. Although I may not feel as able to write fluidly in a digital medium – as juxtaposed to a medium of graphite and paper, with clipboard, in an older style of desktop writing appliance – it is surely easier to share a work of writing produced in a legible digital font, here juxtaposed to my own cursive handwriting and ad hoc visual layout approaches when writing with pencil on paper. Sure, I have begun to sketch out an idea for a more fluid style of sketching in a digital medium, but it would need some work in regards to ergonomic presentation and editing of vector graphics, and therefore may depend on software not yet developed for the Common Lisp Interface Manager (CLIM). Presently, I am more preoccupied with developing a workflow for DITA editing with free/open source software (FOSS) tools. Though I have looked at both the XXE XML editor and OxygenXML for a DITA workflow, I would like to start writing DITA with a platform more familiar to me. Personally, I'm comfortable with Emacs and PSGML.

In regards to any of the perhaps esoteric features of DITA – such as DITA linking, DITA content replacement, and any broadly semantic qualities of DITA's topic element classes – it might not seem that a baseline Emacs + PSGML environment could be completely sufficient, if applied primarily as a DITA editing environment. The Emacs + PSGML platform, of course, can be extended with Emacs Lisp. Although Emacs, in turn, may not seem like a very rigorous programming environment when juxtaposed to the Eclipse IDE or IntelliJ IDEA platforms – both of which are developed, primarily, in the Java(R) programming language – I think there is a manner of an incentive in the simple convenience of the interactive, functional command interface of Emacs + PSGML. It may be enough that I might not be excessively distraught about the absence of programmatic rigor in the Emacs Lisp programming language, juxtaposed to Oracle's Java(R).

So, in that I would not wish to develop a manner of an ivory tower of this concept – and I will be writing with Emacs on a FreeBSD desktop – I think it needs a sense of a project, in itself. I propose that a FreeBSD port can be developed of each of: the DITA 1.2 Document Type Definition (DTD) files, integrating with the XML entity resolver's configuration on FreeBSD; and the DITA Open Toolkit, likewise to be integrated with the host's XML entity resolver, moreover to be applied with the user's own Apache Ant configuration. If anything analogous to the Comprehensive Perl Archive Network (CPAN) may be developed, moreover, as to provide a software distribution service for extensions to the DITA DTDs and the DITA Open Toolkit (DITA OT) XML stylesheets, perhaps such a thing may be designed onto CORBA services, moreover the CORBA Component Model (CCM), if not Java(R) and OSGi services. Of course, from an end user's perspective, that might be secondary to the availability of a FreeBSD port for each of those two existing components, respectively the DITA DTDs and the DITA OT.
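As a sketch of what the entity resolver integration might amount to: assuming the DITA 1.2 DTDs installed under a hypothetical prefix such as /usr/local/share/xml/dita12 – the prefix and file names here are illustrative, though the public identifiers are the standard OASIS ones – a port might register an XML catalog of roughly the following form with the host's resolver configuration:

```xml
<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- Map the standard DITA public identifiers onto a hypothetical
       local installation prefix; an actual FreeBSD port would
       register this catalog with the system's XML catalog machinery -->
  <public publicId="-//OASIS//DTD DITA Topic//EN"
          uri="file:///usr/local/share/xml/dita12/topic.dtd"/>
  <public publicId="-//OASIS//DTD DITA Map//EN"
          uri="file:///usr/local/share/xml/dita12/map.dtd"/>
</catalog>
```

With such a catalog in place, a validating XML parser – or Emacs with PSGML, via its own catalog support – may resolve the DITA DTDs locally, without network access.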

Finally, as towards a concept of applying DITA in an open documentation system, there is a simple concept of a topic repository model that may be developed, in a manner of writing DITA as for a sort of – albeit not primarily web-based, and inasmuch not web-editable – Wiki-like format. Though it may seem to lose something of a popular incentive, for its not being a web-editable format, inasmuch as a DITA editing platform may serve to provide a manner of editing support not easily provided altogether within a web-based interface environment, perhaps there may be an adequate "up side" to the design decision, as such.

Immediately, I am somewhat distracted by the tedium of making a complete Git clone of the WebKit source code repository – perhaps this is referred to as multi-tasking. It seems that the initial Git fetch must be run incrementally, onto that same source code repository. It's presently at `--depth=20000`, in no small detail. I have referenced only a couple of forum topics [Matthews 2010][midtiby2011], to try to resolve that a complete `git clone` onto that source code repository has consistently failed. Proceeding to a `git fetch` at `--depth=60000`, the incremental Git clone procedure succeeds, so far.

Ed. Note: There's also something to write about `git gc`.
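By way of a preliminary sketch on that note: after a series of incremental fetches, a repository may retain redundant pack data and unreachable loose objects. A `git gc` call may serve to repack and prune – the following being a generic illustration, to be run inside any Git working tree, not anything specific to the WebKit repository:

```shell
#!/bin/sh
# Repack the repository's objects and prune unreachable loose objects.
# --aggressive spends more time recomputing deltas, towards a smaller pack;
# --prune=now removes unreachable objects regardless of their age.
git gc --aggressive --prune=now

# Summarize the object database after the repack
git count-objects -v
```

The `count-objects -v` summary should show few or no loose objects remaining after the repack.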

Monday, September 21, 2015