Thursday, July 31, 2014

Resources - "It Applies the Soldering Iron to the Lead-Free Solder"

Personally, as a fan of the "DIY" electronics kits published by organizations such as Adafruit and web resources such as Circuits@Home, I've looked forward to learning a better soldering iron technique than that of -- candidly -- my youthful "hacks" at simple electronics. I wonder whether there's a course at DeVry University Online that would include some coverage specifically of soldering iron technique -- for both soldering and desoldering?
Of course, I notice that the ECT CRC kit includes a PCB, so I presume that we'll be studying soldering at some point -- perhaps in a given week's lab work? Looking ahead, then, I wanted to ask about the matter -- if it may be possible to be sure of it -- whether, as students of these ECT courses, we'll at some point be studying how to apply a soldering iron to solder an electronic component to a PCB? Personally, I hope that studying that particular feature of electrical engineering, in addition to its coverage in the course, could be supportive of some practice in DIY/hobby work, if not, later, professional shop work.

Moreover, I hope that I might be able to share some resources here that could help to elucidate the topic of soldering iron application -- at least for soldering, if not, here, so much for desoldering.

Application: Proto boards?

In regards to the "DIY" electronics domain, personally I'm a fan of the BeagleBone Black (BBB) single-board computer platform and its ARM Cortex MCU -- a Texas Instruments (TI) "Sitara" AM335x. In my opinion, the BBB platform has an advantage over the Raspberry Pi (RPi) single-board computer platform, in that the BBB is thoroughly documented in public design documents [CircuitCo], as well as in the complete manual for the AM335x MCUs, published by Texas Instruments [TI]. In some contrast, while there are design documents available for the RPi, the MCU on the RPi -- a Broadcom BCM2835 -- is not presently described publicly in as much detail [RPi Forums] as the BBB and its AM335x MCU.

As far as applications for soldering, in a context of endeavors in hobby and study, I'm aware that there's a Proto Shield extension available for the BeagleBone Black -- for instance, via Adafruit. Presumably, the Proto Shield would require an application of a soldering iron, as well as some discrete extent of formal or informal circuit analysis -- towards the Proto Shield's safe and effective use in circuit prototyping, such as in extending on the TI AM335x MCU and its PRU-ICSS submodules in DIY electronics projects, if not in formal academic studies?

Personally, in some fewer words, I'm only assuming that a proto shield would need a soldering iron.

Further Resources

Having discovered some practical resources online with regards to soldering, I hope that these resources might be of some help to my fellow students in these ECT courses. To share this small number of resources here, these public reference items online:

[1] Daycounter, Tim. How to Solder and Desolder Surface Mount Parts. Daycounter Inc. 2004
[2] Adafruit Learning System. Adafruit Guide to Excellent Soldering. 2014
[3] Johnson Space Center (NASA). Through-Hole Soldering, Single In-Line Package (SIP). NASA Workmanship Standards, Pictorial Reference

"Lining up the Architecture" - Liferay 6.2 + Glassfish 4 Bundle, AWS, and Monitoring Services. Side-notes: Open Education Resources Online

Earlier this year, I began sketching up some concepts for what would have to be a business organization -- assuming it could not be registered as any sort of non-profit producer-owned cooperative in computing technology -- and as such would be a sole proprietorship, in its ownership model, lacking so much as a corporate board for the matter. I will not presently denote the name I would like to give that business, though I would offer a hint, at least to myself, for any later time: it would be a name derived from a feature of the geology of the valley around the home town where I grew up, such as my grandfather had once explained to me. Beneath the soil of that valley -- at least, where the house that I grew up in was -- the ground is not so permeable as the loose topsoil there. Moreover, I hope it would be a business name with a sense of character to it, denoting "life beyond the rose garden," but without any sort of a sense of pessimism. Once I'm able to register a domain name for that business, within some small portion of my student budget, then I would be willing to denote the name of that business, online. Presently, I'm more concerned about the architecture for the information system that would represent the core of that business' web presence, online.

In short summary: I plan on installing Glassfish 4 onto a Linux AMI hosted in an Amazon Web Services Elastic Compute Cloud (EC2) instance. Then, I would install Liferay 6.2 into the Glassfish 4 application server. I've chosen Glassfish 4 for this model primarily due to Glassfish's integration with CORBA -- a feature that, I know, would not be unavailable with alternate application servers, such as JBoss AS7 and Wildfly (i.e. JBoss AS post-AS7) -- however, I'm of an impression that CORBA might be better supported in Glassfish. Also, I'm simply happier with the thought of installing Glassfish, and Liferay into Glassfish.

As well as installing those resources onto my Linux AMI, I've downloaded the Liferay 6.2 + Glassfish 4 bundle, and will be installing that onto a Microsoft Windows 8.1 partition. I could then use the "Demo portal" installed with the bundle for generic sandbox work, or could uninstall the "Demo portal" and start from scratch.

Basically, I plan on creating one portal, to begin with, in the Liferay-on-AWS installation. In its public pages, it would represent a sort of hub. In its private pages, it would represent a sort of MIS, primarily for administration of the public portion of the portal/hub. The portal/hub would host some content for integration with the Amazon Associates program, as well as public project information for public projects that I would be managing with the same hub. I plan on integrating Github with the portal -- planning on developing portlets, over time, to accommodate that.

Between the portal as installed on my laptop and the portal as installed on AWS, I plan on enabling the Liferay Staging feature, for developing content locally and then publishing it to the hub, via Liferay itself. Certainly, that would not be the only way to publish content to the portal, but it is an option I would like to understand, as a novice portal developer.

Today, I've found a couple of resources via Twitter that I would like to make note of for future reference -- namely, the companies Datadog and CopperEgg. I'm afraid I don't have any sort of a populist view to present about the resources and services provided by those companies. Simply, they both may be of some relevance with regards to service monitoring. I'm considering CopperEgg primarily for their Amazon Cloud Instance Monitoring service. Datadog, I should like to make reference to at some later time, I think, if perhaps it could be useful for a "build server" for EDA projects and simple software projects.

CopperEgg has published, via their Twitter account, a nice article with regards to server monitoring.[1] The article presents a succinct overview of how monitoring and measurement processes may be implemented on a server. I consider it a meaningful article, and as such, it has left a positive impression on me about CopperEgg. In a broad sense, having reviewed some of CopperEgg's web-based materials, I would like to consider using CopperEgg in lieu of trying to piece together a Nagios + JMX hack of my own, for monitoring the portal as would be installed at AWS.

Datadog's monitoring service, though in some ways perhaps similar to the service provided by CopperEgg, is one that I would wish to consider primarily with regards to developing a build monitoring service -- for one-time software builds (Verilog or VHDL, Java or Common Lisp, or otherwise), monitoring such processes on a build server, within a software build/distribution workflow in a normal software development process. I consider that I can understand at least "that much" of what the architecture of such a thing could be. Of course, it would be more tedious to address that architecture in its relevant details. If Datadog may be able to alleviate some of that tedium, I would certainly consider outsourcing some of the hypothetical build process to Datadog.

As for the matter of my endeavoring to install an enterprise-grade web portal server, as an individual developer: it's not something I would wish to make any sort of an ostentatious presentation about -- candidly, it's not something I'm really looking forward to -- though I think it would be more appropriate than any alternate approach for me to develop web content, online. The Liferay portal framework provides its own user model, for instance, and already supports integration with the Glassfish application server.


Of course, I'm a student of DeVry University Online (DVUO), and though this matter of installing a portal server is rather far beside the progress of my current course of study there, I think I could customize the "portal on laptop" such that it could be applied, privately, in a manner supportive of my own continuing studies as a student of DVUO -- such as by providing a central resource for me to use in managing the weekly lab materials and homework assignments, and for digitizing my paper notes from the textbooks. In that regard, I think I should define an individual portal, in at least the "Liferay on laptop" installation, distinctly for the purpose of developing a bit of a knowledge management technique with regards to my own student materials. Though I am by no means a qualified expert in the topics we're studying in any single semester, if I can understand the material well enough to observe that each of the courses describes a sort of controlled vocabulary, for instance, then I may be able to apply that concept -- in regards to my being a student of the private institution, DVUO, moreover in my studying materials published publicly by other academic institutions -- such that it might at least provide a nicer way to manage the folders containing my responses to the weekly course assignments. Thus far, I've been using a bit of an ad hoc mix of Dropbox and the local hard drive on my laptop, for that purpose. Sometimes, also, I copy and revise my own discussion forum responses -- those that I would like to pay any particular attention to, later -- then publish the revised item here at my DSP42 'blog.


I don't know if I should like to push that particular portal to the broader web. It would be kind of a sandbox resource, anyway.

I hope that I may sometime be able to develop some further ideas with regards to the open education resources published by each of OpenStax CNX, Coursera, Udacity, and the OER Commons. Most notably, I think, the Coursera self-paced course Calculus 1 promises to be an edifying course, and especially of some help to me, as a student of the electronics and computing program at DVUO. Certainly, electrical circuit analysis -- at least insomuch as analog AC and DC circuit features, such as electrical capacitance, inductance in AC circuits, and electrical resistance -- would involve "a lot of math," in any practice. Though so much of that might be given something of a shortcut with some careful application of Electronic Design Automation (EDA) tools, I would like to be certain that I understand the concepts entailed in the principles of the matter, before I would endeavor to apply those concepts in any application of those principles. So, I may have to study outside of class for Calculus 1, inasmuch as working outside of class in developing the portal structures, as denoted -- and this, on my shoestring student budget -- however, perhaps it may be as well "worth it," in the proverbial "long run."

At least, there's some community at Twitter. Though there's never an opportunity to say a lot in 140 characters of microblog text, I think I'm learning the gist of how to communicate @Twitter, at least to the best of my ability as such.


Article's Bibliography:

[1] Mueller, Ernest. Know your options for infrastructure monitoring. InfoWorld. 2014

Tuesday, July 22, 2014

Towards Non-Repudiation in Network Routing - Specifically RIPv2

Presently, I'm a student at DeVry University Online (DVUO). Personally, I've found it exceedingly difficult to retain so much as a sense of goodwill across discussions in undirected forums. Within the discretely bounded discussion forums of the online classes at DVUO, however -- moreover, in that every discussion forum in a class is directed about one or more single, discrete topics -- it's rather easier to retain a simple sense of focus, there, and none to any ad hominem regard, candidly, whether positively or otherwise. It's none so difficult for me to "keep my heart in it," then, in those bounded, focused discussion forums. Of course, if there's any goodwill quotient, it stays bounded to those forums, inasmuch.

One of the topics we are discussing, in one of DVUO's networking courses, is the Routing Information Protocol (RIP) and its specific editions. That being clearly a part of "existing work" on the Internet, certainly there could not be any concern as if with regards to "intellectual property" in my presenting my present view about the topic, here.

The Routing Information Protocol (RIP) -- whether in the protocol versions RIPv1 or RIPv2 for IPv4 networks, or RIPng on an IPv6 network -- is, broadly, a family of network protocols, each similarly applied in dynamic routing configurations. An RIP protocol would be applied namely for broadcasting of routing information in router neighbor updates on a dynamically routed IPv4 or IPv6 network. Dynamic routing may be contrasted to static routing -- the latter being a configuration in which a static network address and netmask would be used for determining the "next hop" for routing of a network packet, in a static routing table, not requiring further updates from other routers on the network.

The Routing Information Protocol may be useful in a "network backbone," namely in a configuration in which a router would be able to select one of multiple alternate routes for a "next hop" in packet routing.

RIPv1, specifically, might be useful if a network's routers would not support RIPv2. In a brief feature comparison, RIPv2 would offer some advantages over RIPv1 -- including the addition of an authentication data field, as an extension available for the RIPv2 message format.[1]

Essentially, RIPv2 message authentication is available as an extension on the baseline RIPv2 message format. An RIPv2 protocol implementation should be able to function with or without the message authentication feature.[2]

Functionally, an RIPv2 authenticated message frame would consist of a two-byte authentication type code -- e.g. simple password as type 2 -- and a 16-byte authentication data field. Of course, in simple password RIPv2 message authentication, there would be the disadvantage of the password being encoded in plain text within the 16-byte field.[2] Preferably, RIPv2 MD5 authentication would be used instead, for authenticated RIPv2 messaging. Though that would not ensure complete non-repudiability -- such as with regards to non-repudiation and the Kerberos authentication framework[3] -- the MD5 approach would at least serve to ensure that no password is transmitted in the clear: a keyed MD5 digest, computed over the RIP message together with the shared secret, is transmitted in place of any plain-text password.[4]
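As something of a sketch of that frame layout -- in Java, with the constants and method names here being illustrative only, and with the caveat that RFC 2082 defines a more involved wire format for the MD5 case than the conceptual digest shown -- an RIPv2 authentication entry might be modeled as follows:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Sketch of the 20-byte RIPv2 authentication entry, per [2].
    public class Ripv2AuthEntry {

        static final short AFI_AUTH = (short) 0xFFFF; // marks an authentication entry
        static final short AUTH_TYPE_SIMPLE_PASSWORD = 2;

        // Builds the authentication entry occupying the first route-entry
        // slot of an authenticated RIPv2 message: a two-byte 0xFFFF marker,
        // the two-byte authentication type code, and the 16-byte data field.
        static byte[] simplePasswordEntry(String password) {
            ByteBuffer buf = ByteBuffer.allocate(20);
            buf.putShort(AFI_AUTH);
            buf.putShort(AUTH_TYPE_SIMPLE_PASSWORD);
            byte[] pw = password.getBytes(StandardCharsets.US_ASCII);
            byte[] field = new byte[16]; // password in plain text, zero-padded
            System.arraycopy(pw, 0, field, 0, Math.min(pw.length, 16));
            buf.put(field);
            return buf.array();
        }

        // The keyed-digest idea behind RIP-2 MD5 authentication [4]: a digest
        // over the message bytes plus the shared secret, so that no password
        // travels in the clear. (RFC 2082 additionally specifies a key ID,
        // sequence number, and a trailing digest block -- omitted here.)
        static byte[] keyedMd5(byte[] message, byte[] key) throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            md5.update(message);
            md5.update(key);
            return md5.digest(); // 16 bytes
        }
    }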

RIPv2 message authentication may serve a role in message authentication for routing updates. However, the simple password-in-a-packet model would not in itself support non-repudiation -- i.e. it would not be proofed against "man in the middle" exploits of the routing table updates. In a short summary: if an RIPv2 authenticated message is sent "in the clear" -- that is, without any manner of a secure handshake and tunneling, such as with SSL -- then if the RIPv2 authenticated message were intercepted, the message password in the authentication data field could, hypothetically, be duplicated by an exploiting network device, allowing that device then to spoof the actual router. Simple password authentication might therefore seem redundant as a feature, if not potentially counterproductive to network functionality, but it is available as a feature for RIPv2 and extending protocols.

I wonder, then: are there ways to limit the possible "man in the middle" exploit potential, in that?

The first idea that I would like to denote, at that, would be to inquire whether a Kerberos KDC may be applied in parallel with RIPv2 message authentication. Can RIPv2 message authentication be extended for integration with Kerberos? Could that be effective for securing a network's routing updates, in a manner of non-repudiation?

Focusing, then, on the 16-byte authentication data field in an authenticating RIPv2 message: could those 16 bytes be applied in some sort of a manner integrating with Kerberos tickets? Would that be possible in RIPv2 broadcast updates? Or is it simply an inapplicable idea?


Secondly, I wonder if RIPv2 could be applied over an SSL transport? Perhaps that would be altogether simpler than trying to "work it together with Kerberos," so to speak?



Either way, I'm sure it would seem nonconventional, to a popular point of view, that one would so much as question "existing knowledge." Personally, I'm none so much a populist about technology. I know only so much as I know. Personally, I believe that there must be a way to prevent RIPv2 packets from being spoofed on a network -- in a manner none so colloquial as to prevent "router poisoning," though perhaps that single phrase might serve to describe the possible seriousness of the concern.

If, more broadly then, the concern would be of "router poisoning," it would need a more complex problem analysis than my brief synopsis, here. In short, a "spoofed route" might be problematic only if there was an exploited router for the "spoofed route" to direct traffic to, in a network -- nothing so trivial, and nothing whatsoever good for a digital network. Perhaps it would be simply too complex of a "threat surface" to merit further analysis, in this short view.

Works consulted:

[1] Ahamed, Afazuddin. CCNA Prep: Why Choose RIPv2 over RIPv1? Intense School. 2013
[2] Malkin, G. RIP Version 2, Carrying Additional Information. IETF. 1994
[3] Neuman, B. Clifford and Theodore Ts'o. Kerberos: An Authentication Service for Computer Networks. IEEE Communications Magazine. 1994
[4] Baker, F. and R. Atkinson. RIP-2 MD5 Authentication. IETF. 1997

Monday, July 21, 2014

"Goodbye Goodwill"

After completing week two of ECT-125 at DeVry University Online -- in what has been, personally, quite the worst week of my life in the past year, my nonetheless continuing with the course material, as it presents at least an opportunity to focus on something beyond stark poverty, in the cruel world around -- I've thought it would be appropriate to denote my innate feelings and views, after so much awful news in the broader world -- this week, in the world so much broader than a hipster snowglobe -- and after the difficult situations of my own person's life, since day zero. Clearly, I've none too much of warm sentiment to share with the same world, and I am none too cheerful at the thought of any account of my own life's difficulty being exploited, whatsoever, for anyone's own cruel entertainment.

There has never been a day when I could sincerely believe as though human sense of reason was whatsoever sufficiently abundant, enough to really merit any manner of a cheerful view of the world. I have tried, always to a point of failure, to whatsoever pretend a cheerful view nonetheless. Perhaps I was only afraid of how I might view the world if I would ever give up that vain effort? It's a pretense, however, and I have no further wish to pretend any luxury as if sufficient to maintain such a pretense. I have seen, time and again, that it only meets with a resonance of nothing I could whatsoever adopt or befriend, if I am not indefinitely divorced of my own wit and reason.

Inasmuch, I observe that I must now reconsider how, and whether, I should whatsoever present any kind of a resource to any of the "web at large." Goodwill is evidently viewed only as a lever, and as I do not know what mechanism anyone is trying to pull me for, at that, I have no further goodwill to offer.

I cannot, then, in any way denote whether there are any resources online. To each their own. #YMMV.

Thursday, July 10, 2014

Design Document: Transforming YAML to OWL

In an ad hoc regards, my having begun a project[1] in which one of the goals will be, technically, to transform a set of YAML models into a set of OWL ontological models -- all of a set of simultaneously legal and public information items about the structure and the proceedings of the United States Congress -- there becomes a goal, sort of iteratively, of defining a methodology for translating "that YAML model" into a single ontology model -- one likely to be stored as separate files, one for each of:
  • An ontology about the United States Congress as a whole, including:
    • The United States House of Representatives
    • The Senate of the United States of America
    • Committees of the House and of the Senate
    • Sessions of the Congress
    • Leadership Roles in the Structure of the Congress
    • Congressional Membership
  • Political Parties in the US
  • US States
  • Congressional Districts, in Each Congress (in a model organized into one file to each US state)
  • Formal public profiles about members of the Congress (GovTrack, Washington Post, etc.), i.e. "social media"

Some Caveats

Caveat 1: Not About Lobbying

This project will not be endeavoring to venture onto the "bling-bling" or, alternately, "hot-button" topic of campaign financing -- not at the scale of the PAC, and not at the scale of any manner of crowdsourcing. Certainly, campaign financing is thoroughly addressed in a number of existing web information resources. This project shall focus, rather, on the formal structure and the proceedings of the US Congress, in an interest of developing a structural model of the Congress -- a model constrained within the effective model syntax of the Web Ontology Language (OWL) -- in developing an OWL model that may be made reference to, for effective application in development of web content resources. This would be, principally, towards fostering a sense of understanding for US citizens about the US Congress as an entity representative of the United States, as well as fostering US citizen involvement in the democratic processes of the US federal government.

 

Caveat 2: Not POTUS or SCOTUS

This project, specifically, shall not be focusing on the US Federal Executive Branch, nor on the US Judicial Branch.

 

Caveat 3: Congressional Record Annotation Engine?

This project may later endeavor to develop a model for processing of documents published in the formal Congressional record, in the interest of applying any number of topically focused models -- e.g. AGROVOC -- in developing a machine-generated, topical annotation layer onto the Congressional record, in application of existing, expert topical knowledge models, for fostering a broader public understanding about topics addressed in the Congressional record.

Ed. note: In regards to this item, specifically, it should be considered whether and how an ontology-focused model, as such, could in any way extend on the functionality available of a conventional, flat/full-text search engine. Concepts of the ontological knowledge model should be thoroughly addressed in that analysis, such as: knowledge and information structure; inference and entailment; subsumption and extensibility; and the open universe model.

Technical Outline: YAML to OWL

An information model in YAML structured markup format may be transformed into an instance of an OWL ontology model, using one or more programming languages. This project shall use Java.

Implementation note: The processing of the YAML input stream could be managed with SnakeYAML. Serialization to OWL could be performed via the OWL API or Apache Jena.
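To illustrate -- as a minimal sketch only, assuming SnakeYAML together with Jena 2.x package names, and with the namespace URI being a placeholder (the file name matches one published by the @unitedstates project) -- the two endpoints of the pipeline might be wired together as follows:

    import java.io.FileInputStream;
    import java.io.InputStream;

    import org.yaml.snakeyaml.Yaml;

    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class YamlToOwlPipeline {
        public static void main(String[] args) throws Exception {
            try (InputStream in = new FileInputStream("legislators-current.yaml")) {
                // SnakeYAML parses the input into plain Java Maps and Lists
                Yaml yaml = new Yaml();
                Object doc = yaml.load(in);

                // Jena provides the ontology model receiving the derived individuals
                OntModel ontology = ModelFactory.createOntologyModel();
                ontology.setNsPrefix("uscongress", "http://example.org/uscongress#");
                // ... walk `doc`, deriving OWL individuals (Steps 1 and 2, below) ...
                ontology.write(System.out, "RDF/XML");
            }
        }
    }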

Deriving Instances, Assigning OWL Classes

It should be noted that the input YAML data would not define the entire ontology, itself. Rather, the YAML data would be used in deriving OWL individual objects, each implementing one or more OWL classes from within an existing ontology.

The pairing between YAML and OWL may be implemented in conjunction with a structured table, such as may be initialized exactly once, at runtime -- essentially, assigning one or more OWL classes C1..Cn to a single, structured, ad hoc YAML input path, P. In the effective processing model applied onto the YAML input, each node in the YAML input may then be iteratively scanned for each P in the table, as to generate a set of OWL individual instances M1..Mn, such that each M would be of a class C1..Cn for each such P. A minimal sketch of such a table follows.
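In this sketch -- assuming Java, with the input paths and OWL class URIs below being illustrative placeholders rather than any published ontology -- the table pairs each path with its classes:

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // The class-assignment table: each structured input path P maps to the
    // OWL classes C1..Cn whose individuals it yields. Initialized exactly
    // once, at runtime.
    public class ClassAssignmentTable {

        static final String NS = "http://example.org/uscongress#";

        static final Map<String, List<String>> PATH_TO_CLASSES = new LinkedHashMap<>();
        static {
            PATH_TO_CLASSES.put("/legislators/legislator",
                    Arrays.asList(NS + "Congressperson"));
            PATH_TO_CLASSES.put("/legislators/legislator/terms",
                    Arrays.asList(NS + "TermOfService"));
        }
        // Scanning, abstractly: for each P in the table, select the matching
        // input nodes, and mint one OWL individual M per node, asserting
        // M rdf:type C for each class C paired with P.
    }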


Step 1: Initializing XML DOM Nodes from YAML

That initial scanning method may be accomplished by first initializing an XML document object model (DOM) document, at runtime, derived from the input YAML model -- regarding the YAML model, then, as a source of instance data for within the DOM document, and the DOM document as providing a programmatically useful structural model for the initial instance data. A sketch of such a conversion follows.
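A sketch of that conversion, assuming Java and the standard DOM API; the element-naming convention here (maps become child elements named by key, list entries repeat the parent-supplied name, and the root name is a placeholder) is a simplifying assumption:

    import java.util.List;
    import java.util.Map;

    import javax.xml.parsers.DocumentBuilderFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    // Step 1, sketched: rebuild the parsed YAML structure as an XML DOM
    // document, so that later steps can address input nodes with XPath.
    public class YamlToDom {

        public static Document toDocument(Object yamlRoot) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            Element root = doc.createElement("legislators"); // placeholder root name
            doc.appendChild(root);
            append(doc, root, "legislator", yamlRoot);
            return doc;
        }

        @SuppressWarnings("unchecked")
        static void append(Document doc, Element parent, String name, Object node) {
            if (node instanceof Map) {
                Element e = doc.createElement(name);
                parent.appendChild(e);
                for (Map.Entry<String, Object> entry : ((Map<String, Object>) node).entrySet()) {
                    append(doc, e, entry.getKey(), entry.getValue());
                }
            } else if (node instanceof List) {
                for (Object item : (List<Object>) node) {
                    append(doc, parent, name, item); // one element per list entry
                }
            } else {
                Element e = doc.createElement(name);
                e.setTextContent(String.valueOf(node));
                parent.appendChild(e);
            }
        }
    }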

The OWL-class-to-input-data table {{P1, C1...Cn}, ...} may then be implemented with each P being of type XPath expression. As for the classes: each Cn may be implemented as of type URI, each referencing a single OWL class -- "YAML to OWL" -- or, alternately, there may be exactly one Cn to each P, with Cn then denoting a fully qualified Java class name -- "YAML to Java." This processing model will prefer the latter implementation. In the "YAML to Java" implementation, each Cn may then make reference to a set of C'1...C'n, each denoting an OWL class. A sketch of the "YAML to Java" scan follows.
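A sketch of the preferred "YAML to Java" scan, in Java, with hypothetical wrapper class names, and assuming the convention that each wrapper class offers a constructor accepting the DOM node it derives from:

    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;

    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;

    // The "YAML to Java" table: each XPath expression P pairs with exactly
    // one fully qualified Java class name Cn; each such Java class, in turn,
    // carries the URIs of the OWL classes C'1..C'n it stands for.
    public class YamlToJavaScan {

        static final String[][] PATH_TO_JAVA_CLASS = {
            { "/legislators/legislator", "org.example.congress.Congressperson" },
            { "/legislators/legislator/terms", "org.example.congress.TermOfService" },
        };

        public static void scan(Document input) throws Exception {
            XPath xpath = XPathFactory.newInstance().newXPath();
            for (String[] pair : PATH_TO_JAVA_CLASS) {
                NodeList nodes = (NodeList) xpath.evaluate(pair[0], input,
                        XPathConstants.NODESET);
                Class<?> javaClass = Class.forName(pair[1]);
                for (int i = 0; i < nodes.getLength(); i++) {
                    Node n = nodes.item(i);
                    // Hypothetical convention: a constructor accepting the DOM node
                    Object instance = javaClass.getConstructor(Node.class).newInstance(n);
                    // ... collect `instance` for the property-assignment pass ...
                }
            }
        }
    }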

 

Step 2: Initializing Java Objects from XML

In the "YAML to OWL" approach, C1...Cn may be defined as being each of type URI -- each URI, then denoting a specific OWL class within an input types ontologly. The respective types ontology may then be initialized within the Java runtime, at any time before the assignment of data properties, within the respective OWL engine -- such as OWL API or Apache Jena, for instance. 

In either the "YAML to OWL" or the "YAML to Java" approach, a loose coupling should be implemented between the respective Java class and the set of OWL classes.

 

Step 2.1: Deriving Java Class Instances from Input DOM Nodes

Alternately to the "YAML to OWL" methodology denoted in this article, and in order to make effective use of Java method overriding within the processing model, each C may be defined as a single Java class, with C then serving effectively as a container of the OWL class URIs C'1...C'n.

 

Step 2.2: Assigning OWL Properties

... specifically, Object Properties and Datatype Properties
... to each OWL Individual Instance Derived from the DOM Model

After each Java object N'1..N'n is initialized, each of a single Java class C1..Cn, then the assignment of data properties -- as would be assigned, each, onto an OWL individual instance represented in N'n, as derived of DOM node Nn -- may proceed in one of at least two alternate approaches (see the sketch after this list):
  1. With a constructor for C processing the DOM node N from which N' would have been derived, in the initial assignment of C to P, then assigning OWL properties A1..An to N'
    • This would be in a model of assigning OWL object properties and data properties defined in the input ontology
    • Each property defined in the input ontology would then be mapped onto an input DOM node N and the derived instance N'
  2. Similarly, but rather than the OWL property assignment being encapsulated within the constructor for C, with the OWL property assignment being encapsulated into a single property-assignment engine method, itself encapsulating calls onto any exact property-assignment methods -- this description necessarily differentiating OWL object-type properties and OWL datatype properties.
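A sketch of the second approach, assuming Java with Jena 2.x package names; the class and method names are hypothetical:

    import org.w3c.dom.Node;

    import com.hp.hpl.jena.ontology.Individual;
    import com.hp.hpl.jena.ontology.OntModel;

    // The property-assignment engine method of approach 2: one entry point
    // per wrapper object, dispatching to exact per-kind assignment methods,
    // differentiating datatype properties from object properties.
    public abstract class OwlWrapper {

        protected final Node sourceNode; // the DOM node this individual derives from

        protected OwlWrapper(Node sourceNode) {
            this.sourceNode = sourceNode;
        }

        // Called once per instance -- and, for object properties, not until
        // after all instances exist (see the following section).
        public void assignProperties(OntModel ontology, Individual self) {
            assignDatatypeProperties(ontology, self); // literals: names, dates, codes
            assignObjectProperties(ontology, self);   // references to other individuals
        }

        protected abstract void assignDatatypeProperties(OntModel ontology, Individual self);

        protected abstract void assignObjectProperties(OntModel ontology, Individual self);
    }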

Focus: Object Property Assignment

The generic object-type property assignment method would bear some particular attention: any object-type property assignment method, in this model, may be effectively required to reference -- in (subject, predicate, object) form, similarly (N, A, M) -- given a single Java subject object N, as would be provided to the respective property-assignment engine method defined to the class C of N, and for each predicate object property A as selected of that engine -- an object M that may be available only as an object reference, towards an object not yet initialized within the result ontology.

The input model -- whether in the YAML text format or the derived DOM object format -- may denote any single reference object of (subject, predicate, object) with a string key code for the object. To effectively pad for that concern within the object-type property assignment step, the property assignment may be conducted not until after all of N'1..N'n have been initialized.

Considering that in each element of the (subject, predicate, object) triple of any single OWL object property expression, the reference to each of the subject, predicate, and object is denoted with a URI, whereas the reference in the input YAML model is rather encoded as (subject, predicateCode, objectCode), the procedure for reference translation, in this model, must effectively represent a translation from key code to object URI. A sketch of that translation follows.
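A sketch of that translation, in Java; the registry, its naming, and the two-pass convention are illustrative assumptions:

    import java.util.HashMap;
    import java.util.Map;

    // Reference translation: the input model names the object of a
    // (subject, predicate, object) statement with a string key code, so
    // object-property assignment is deferred until every individual exists,
    // and each key code is then resolved to the URI minted for it.
    public class ReferenceRegistry {

        static final Map<String, String> CODE_TO_URI = new HashMap<>();

        // First pass: as each individual is minted, record its key code.
        static void register(String keyCode, String individualUri) {
            CODE_TO_URI.put(keyCode, individualUri);
        }

        // Second pass: translate (subject, predicateCode, objectCode) references.
        static String resolve(String objectCode) {
            String uri = CODE_TO_URI.get(objectCode);
            if (uri == null) {
                throw new IllegalStateException("Unresolved object code: " + objectCode);
            }
            return uri;
        }
    }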

Step 2.3: Assigning OWL Class Identities to Java Objects

Effectively, the assignment of the C'1...C'n OWL class identities may be implemented as a sort of property assignment procedure in itself -- iteratively, onto each Java object derived from the input DOM model, as derived from an input YAML model, in this example.

This document will denote, as in a sidebar, that one or more OWL classes may be assigned directly to any single OWL individual node; moreover, zero or more OWL classes may be derived of a single OWL individual node, as by way of directed inference applied onto the input OWL model. The model for deriving an ontology of information about the Congress -- in the methodology as described in this article -- will be implemented only with direct OWL class assignment, as sketched below.
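A sketch of direct OWL class assignment, in Java with Jena 2.x package names; the URIs would be supplied from the class-assignment table denoted earlier:

    import com.hp.hpl.jena.ontology.Individual;
    import com.hp.hpl.jena.ontology.OntClass;
    import com.hp.hpl.jena.ontology.OntModel;

    // Direct class assignment: each Java object derived from the input
    // contributes one individual, asserted into each of its classes C'1..C'n.
    public class DirectClassAssignment {

        static Individual mint(OntModel ontology, String individualUri,
                               String... classUris) {
            OntClass first = ontology.createClass(classUris[0]);
            Individual individual = ontology.createIndividual(individualUri, first);
            for (int i = 1; i < classUris.length; i++) {
                individual.addOntClass(ontology.createClass(classUris[i]));
            }
            return individual;
        }
    }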

In sidebar, briefly: as an abstract example of the derived class model, a derived class may be defined, "Senators of North Dakota," such that the OWL class of that derived information object class would be defined with SWRL inference rules -- namely, such that the OWL class thus defined would denote Congresspersons who are Senators elected to represent the state of North Dakota, across the entire timeline provided of the model itself -- in this example, deriving from the input YAML model. It may seem that the greatest strengths of the OWL abstract data model would be found in such a capacity for defining such derived classes within an OWL abstract data model. Not insomuch as a tedious "data mining" -- rather, an OWL inference model, such as the SWRL inference model onto OWL, specifically, allows for extraction and derivation of discrete objects representative of structured knowledge, from within a broader object model representative of structured knowledge.

(Draft 2 and final, of this article)

 
[1] Onto Ontologies, the Constitution, and the Congress. DSP42. 2014

Onto Ontologies, the Constitution, and the Congress

Contemporary Sidebar, or "Big Brother's Big Ears"

But first, a question: is PRISM anything I would like to speculate about? Absolutely not! If the NSA would be voluntarily transparent about the science and technology being used by the NSA, it would surely be a different kind of National Security Agency, I'm sure. Until that time, I can only interpolate -- not so much as to speculate -- from anything I've ever read or read of[1], in so much very serious web content. It's not a happy chain of thought, that, but certainly it's a comment to the state of the art, in this epoch? Nothing about anyone's "wargaming," more so about information science.

KR, KM, XML, RDF, and OWL, as Topics

So, "that aside," There's a long history to the development of knowledge modeling and knowledge representation, in academia -- much summarized, and summarized again, throughout academia. This article will not endeavor as if to to duplicate any of those multitudinous items in academia. In a practical sense, recently the World Wide Web Consortium (W3C) developed, specifically, the Resource Description Format (RDF), later the Simple Knowledge Organization System (SKOS) as then extending on RDF, and later the Web Ontology Language (OWL), later SKOS then effectively reexpressed as an extension onto OWL.

Sometime in 1776, there was a certain issue of national independence that was begun, followed by the drafting of exactly one US Constitution, followed by the development of one United States of America. Around that time, the US Congress was defined, as delineated specifically in Article I of the Constitution of the United States of America.[2]

Sometime more recently, the Congress developed information systems such as would publish proceedings of the US Congress, in XML format.[3] More recently still, the United States project at Github[4] has published structured information about the Congress and the proceedings of the Congress, in structured YAML format.[5]

2 (base 10) + 2 (base 10) = 100 (base 2) therefore....

Onto an Ontological View of the Proceedings of the US Congress

There's a lot of information in the @unitedstates YAML files about the Congress. Though in browsing the project's README file it might seem like simply a flat table, there's a depth to that public information, such that it comes to a nice concept of linked open data[6] in a legally public regards. Such as: Congress membership, Congressional sessions, committees, political party affiliations, and much that could be rendered in OWL datatype properties for creating convenient URL links onto existing, simultaneously legal and public knowledge resources, online. I'm not sure if Julian Assange would be impressed by that or not, but to continue: those @unitedstates YAML files can be processed, easily enough, to a definition of an ontology about the Congress and the proceedings of the Congress. Why would that be useful, though?

Article I of the Constitution of the United States of America is the Article in which the Congress of the United States of America is defined.[2] Article I precedes Article II -- the latter, in which the Executive Branch of the Federal Government is defined. The authors of the Constitution saw fit to describe the Congress before describing the President. There is certainly a matter of precedence illustrated in that decision. It's quite significant to the concern of states' rights and of states' representation in the Federal Legislature. With the recent bunch of actions by the Department of Homeland Security (DHS)[7] in US states, the concern for states' rights is more clearly poignant than ever, as a feature of the US democracy -- truly more clear than ever, in this author's own lifetime. That concern is likewise represented in the very design of the United States Congress.

Therefore, the author proposes: it is a good time to start paying a lot more attention to the US Congress.

Consequently, the author proposes: that "bar napkin sketch" I've been carrying, so to speak, of an ontology about the federal legislature -- now is a good time to begin developing it. Considering the availability of structured legal and public information in that domain,[6] it could even seem trivial.

This would be, in effect, the announcement of that project.

  
[1] Having read of: Inyaem, U.; Meesad, P.; Haruechaiyasak, C.; Tran, D. "Ontology-Based Terrorism Event Extraction." Information Science and Engineering (ICISE), 2009 1st International Conference on, pp. 912-915, 26-28 Dec. 2009
[2] Constitution of the United States. NARA
[3] XML.gov at xml.fido.gov
[4] @unitedstates. A shared commons of data and tools for the United States. Made by the public, used by the public
[5] @unitedstates. Members of the United States Congress, 1789-Present, in YAML, as well as committees, presidents, and vice presidents.
[6] http://linkeddata.org/
[7] notes that I've been able to compile as to share, this week, at my Bring Back Lady Liberty blog: Deferred Action for Childhood Arrivals (DACA) - a DC Politics Story