Wu-tang Flashback via Dave Chappelle

I was watching what I believe (so far, having seen only a handful of episodes) to be the funniest Dave Chappelle skit ever, 'Racial Draft'. The Chinese representative gets up to make his pick and paraphrases the original Wu-Tang introduction:

From the slums of Shaolin, Wu-Tang Clan strikes again
The RZA, the GZA, Ol Dirty Bastard, Inspectah Deck, Raekwon the Chef
U-God, Ghost Face Killer and the Method Man

It gave me chills and flashbacks to a much better time in music. At the time, that track ("Method Man") was the only song I had heard from Enter the Wu-Tang (36 Chambers). I bought the tape (yes, back then I was rocking a tape Walkman) not knowing exactly what to expect. The yellow, odd-looking symbol and inset made me think twice at the store. It ended up being one of my favorite purchases. My favorite of the Clan has always been and always will be the GZA (aptly named the Genius). My favorite verse snippet (from "Amplified Sample"):

Guide this, strenuous as an arm wrestle
Move swift as light, a thousand years in one night
Inflight with insight
Everything I thought of, I saw it happen
Then I rose from the soil, the sun blackened
Then came rap czars, left tracks in scars
A pair of brightness of exploding stars
Give you goods to taste
No ingredients to trace
You'll remain stuck trying to figure the shape of space

(I had to make some corrections to the entry on The Original Hip-Hop Lyrics Archive)

Dave Chappelle's musical selections for guest appearances and Hip-Hop skits (anyone seen the "Turn my headphones up" skit? Funny!) are perhaps the most obvious indication of how grounded he is, but when the Chinese delegation picked the Wu-Tang in 'Racial Draft', I almost lost it. Yeah, perhaps borderline tasteless, but very well-written comedy.

[Chimezie Ogbuji]

via Copia

Service Modeling Language

I'd long ago put up a very thick lens for looking at any news from the SOA space. With analysts and hungry vendors flinging the buzzword around in a mindless frenzy, it got to the point where barely one in twenty bits of information using the term was anything other than pure drivel. I do believe there is some substance to SOA, but it's definitely veiled in a thick cloud of the vapors. This week Service Modeling Language caught my eye through said thick lens, and I think it may be one of the more interesting SOA initiatives to emerge.

One problem is that the SML blurbs and the SML spec seem to have little substantive connection. It's touted as follows:

The Service Modeling Language (SML) provides a rich set of constructs for creating models of complex IT services and systems. These models typically include information about configuration, deployment, monitoring, policy, health, capacity planning, target operating range, service level agreements, and so on.

The second sentence reads at first glance as if it's some form of ontology of systems management, basically an actual model rather than a modeling language. No big deal; I see modeling languages and actual models conflated all the time. Then I catch the "typically", and read the spec, and it becomes evident that SML has much more to it than "models of complex IT services and systems". It's really a general-purpose modeling language. It builds on a subset of WXS (W3C XML Schema) and a subset of ISO Schematron, adding a handful of useful data modeling constructs such as sml:unique and sml:acyclic (the latter is subtle, but experienced architects know how important identifying cyclic dependencies is to risk assessment).

I'm still not sure I see the entire "story" as it pertains to services/SOA and automation. I guess if I use my imagination I could divine that an architect publishes a model of the IT needs for a service, and some management tool such as Tivoli or Unicenter generates reports on a system to flag issues or to assess compatibility with the service infrastructure needs. (I'm not sure whether this would be a task undertaken during proposal assessment, systems development, maintenance, or all of the above.) I imagine all the talk of automation in SML involves how such reports would help reduce manual assessment in architecture and integration? But that can't be! Surely SML folks have learned the lessons of UDDI. Some assessment tasks simply cannot be automated.

SML shows the fingerprints of some very sharp folks, so I assume I'm missing something. I think that much more useful than the buzzword-laden blurbs for SML would be a document articulating some nice, simple use cases. Also, I think the SML spec should be split up. At present a lot of its bulk is taken up defining the WXS and ISO Schematron subsets. That seems a useful profile to have, but it should be separated from the actual specification of SML modeling primitives.

[Uche Ogbuji]

via Copia

How do you solve a problem like Rah Digga?

So I was putting on The Sound of Music for the kids. After all, they're getting to about the age when all good children are to be indoctrinated into the mysteries of Rodgers and Hammerstein. About halfway through the opening montage, Osita said:

"Hmm. The Sound of Music. So will they have any Hip-Hop in it?"

I guess I've taught him rather too well. When I informed him the movie predated Hip-Hop, he proceeded to run over to his computer to play Hot Wheels Velocity X. He was lured back, though, when he overheard so many of the songs he's heard at bedtime over the years, in their original form.

[Uche Ogbuji]

via Copia

Phoenician Crimson (or more prosaically: Utter madness in the Middle East)

But I say unto you, It shall be more tolerable for Tyre and Sidon at the day of judgment, than for you.
—Matthew 11:22 (KJV Bible)

[Disclaimer: No, I'm quite agnostic, but I went through several denominations of religious education, and some passages from the Bible still rise unbidden to my mind in times of stress, such as this is.]

The situation in Lebanon just boggles the mind. What on earth is Israel thinking? What are the U.S. and U.K. thinking? Are they even thinking at all? Or are they wrapped up in a frenzy of emotion? The latter possibility might explain what's going on. Make no mistake about it. Israel was sorely, sorely provoked. No sovereign nation can stand by while its cities are being shelled. Israel had to respond, and to respond forcefully. But what of that response? Israel seems to be killing everyone but their enemy. They are killing Lebanese civilians by the hundreds, blasting infrastructure back to the stone age, and even taking out U.N. observers. All the while they are making no dent in Hezbollah's operations, despite the chest-pounding of their generals. It's surely unacceptable that Northern Israelis have to cower in fear of constant rocket attacks; nevertheless, the devastation that Israel is handing out to Lebanon can hardly be considered anything short of indiscriminate and even criminal reprisal.

The U.S. is irrelevant in this whole affair. It's interesting to see how an ally's unstinting support, even in the face of an obvious breakdown in morals, has the perverse effect of making the supporter somewhat irrelevant. Britain under Blair has learned that its obsequiousness has gained it kind words yet real contempt from the Bush administration, and the Bush administration is subject to no less contempt from Israel, and for no less reason. Bush and co. wouldn't dare criticize Israel anyway, because the response would probably involve deep embarrassment. The only reason I mention the U.S.'s hands-off approach to the Lebanon crisis at all is to point out its utter hypocrisy.

Turkey is also at present suffering attack from militants across its borders. In this case it's Kurdish separatists (of the PKK) holed up in the hills of Northern Iraq. Now make no mistake: I am sympathetic to Kurdish separatist aspirations (Turkey has been quite oppressive of its Kurds), but in the simplest terms, a response from Turkey equivalent to that of Israel would involve Turkish bombing and shelling of Kirkuk, while also destroying most of Northern Iraq's oil infrastructure. Needless to say the U.S. would never allow that, and this is just one measure of the staggering hypocrisy that underlies the bombing of Beirut.

Ho hum, hypocrisy is the grease of foreign affairs, and has always been. What truly amazes me is the suicidal nature of Israel's devastation of Beirut. Yes. I said "suicidal". But what does Israel, one of the world's preeminent military powers, have to fear from tiny little Lebanon? Nothing directly, unless you take a step back to history's lesson book to see that no military might has ever been able to defeat the force of demographics. Israelis are badly outnumbered in their little corner of the world, and their survival depends on the fragmentation of their hostile neighbors. Israel has historically been very skilful at encouraging this fragmentation, and this has been more of an asset than its military might. Unfortunately, in recent red mist it has dumped all such subtlety and practicality, and is in the process of not only deepening the radicalism of the region, but of uniting it as well. It's a very ugly irony that when Lebanese families are rendered homeless by Israeli warplanes, and their children killed, it is usually Hezbollah's charitable wing that comes to their aid. This is no different from how Israel's devastation of the West Bank and Gaza strip a few years ago led inexorably to the rise of Hamas to power.

Lebanon's Hezbollah and Hamas are not historically likely strike partners: the Shi'a/Sunni divide in the region is almost as deep as the national divides. But Israel's recent fits are uniting the radicals, and their sponsors, and when innocent families helplessly watch the loss of loved ones and property, they often end up joining the ranks of the radicals. Israel cannot afford swelling numbers of militants, as the simple mathematics of the Lebanon war illustrate. For every ten Lebanese casualties there has been one Israeli. There's no reason to believe that sheer military might will improve that ratio. The problem is that Hồ Chí Minh's famous boast could just as easily come from Israel's enemies:

You can kill ten of our men for every one we kill of yours. But even at those odds, you will lose and we will win.

Over time Israel's population, which is essentially at a plateau, will lose the demographic war unless it can find peace among the growing populations immediately beyond its borders. Israelis like to say "yeah! We do want peace! It's everyone else who wants war." Their government's near-sighted decision-making process far too often gives the lie to those claims.

Two things I have learned from my many encounters and friendships with Lebanese people are that (1) they are perhaps the most resourceful people on Earth and (2) they are perhaps the most pragmatic people on Earth. I think they have the wherewithal to rebuild once Israel's fit has passed (to be blunt, I don't expect their institutions to crumble as hopelessly as those of the Palestinians), and I do think that their population will end up much less radicalized than one could expect under the circumstances. That is the only basis for a faint glimmer of hope, for Israel, the region, and the world. There may be no soothing the moral outrage of Israel's present, appalling brutality, but perhaps if they can be shamed into moderation, the slow agency of time will prevent a spiraling escalation through which there will be no winner (most certainly not Israel).

Oh, and at some point someone still has to uproot Hezbollah from the border regions, so there is some containment of the effects of their murderous recklessness. I suppose the fact that Israel would rather bomb civilians than meet Hezbollah head-on is no different from the U.S.'s preference for invading Iraq rather than focusing on the elimination of Bin Laden and his henchmen. I just wish I could apprehend their logic. On the other hand, perhaps it's a healthy thing I can't.

[Uche Ogbuji]

via Copia

Patterns and Optimizations for RDF Queries over Named Graph Aggregates

In a previous post I used the term 'Conjunctive Query' to refer to a kind of RDF query pattern over an aggregation of named graphs. However, the term (apparently) has already-established roots in database querying and has a different meaning than what I intended. It's a pattern I have come across often, and for me it is a major requirement for an RDF query language, so I'll try to explain by example.

Consider two characters of the Zhou Dynasty: King Wen and his son and heir, Wu. Let's say they each have a FOAF graph about themselves and the people they know, within a larger database which holds the FOAF graphs of every historical character in literature.

The FOAF graphs for both Wen and Wu follow (each preceded by the name of its graph):

<urn:literature:characters:KingWen>

@prefix : <http://xmlns.com/foaf/0.1/>.
@prefix rel: <http://purl.org/vocab/relationship/>.

<http://en.wikipedia.org/wiki/King_Wen_of_Zhou> a :Person;
    :name "King Wen";
    :mbox <mailto:kingWen@historicalcharacter.com>;
    rel:parentOf [ a :Person; :mbox <mailto:kingWu@historicalcharacter.com> ].

<urn:literature:characters:KingWu>

@prefix : <http://xmlns.com/foaf/0.1/>.
@prefix rel: <http://purl.org/vocab/relationship/>.

<http://en.wikipedia.org/wiki/King_Wu_of_Zhou> a :Person;
    :name "King Wu";
    :mbox <mailto:kingWu@historicalcharacter.com>;
    rel:childOf [ a :Person; :mbox <mailto:kingWen@historicalcharacter.com> ].

In each case, Wikipedia URLs are used as identifiers for each historical character. There are better ways for using Wikipedia URLs within RDF, but we'll table that for another conversation.

Now let's say a third party read a few stories about "King Wen" and finds out he has a son; however, he/she doesn't know the son's name or the URL of either King Wen or his son. If this person wants to use the database to find out about King Wen's son, with a reasonable response time, he/she has a few things going for him/her:

  1. foaf:mbox is an owl:InverseFunctionalProperty and so can be used for uniquely identifying people in the database.
  2. The database is organized such that all the out-going relationships (between foaf:Persons – foaf:knows, rel:childOf, rel:parentOf, etc..) of the same person are asserted in the FOAF graph associated with that person and nowhere else.
    So, the relationship between King Wen and his son, expressed with the term rel:parentOf, will only be asserted in
    urn:literature:characters:KingWen.

Yes, the idea of a character from an ancient civilization with an email address is a bit cheeky, but foaf:mbox is the only inverse functional property in FOAF to use with this example, so bear with me.

Now, both Versa and SPARQL support restricting queries with the explicit name of a graph, but there are no constructs for determining all the contexts of an RDF triple or:

The names of all the graphs in which a particular statement (or statements matching a specific pattern) are asserted.

This is necessary for a query plan that wishes to take advantage of [2]. Once we know the name of the graph in which all statements about King Wen are asserted, we can limit all subsequent queries about King Wen to that same graph without having to query across the entire database.

Similarly, once we know the email of King Wen's son we can locate the other graphs with assertions about this same email address (knowing they refer to the same person [1]) and query within them for the URL and name of King Wen's son. This is a significant optimization opportunity and key to this query pattern.

I can't speak for other RDF implementations, but RDFLib has a mechanism for this at the API level: a method called quads((subject, predicate, object)), which takes a triple pattern and returns 4-tuples corresponding to all the triples (across the database) that match the pattern, along with the graph each matching triple is asserted in:

for s, p, o, containingGraph in aConjunctiveGraph.quads((s, p, o)):
    ...  # do something with containingGraph
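Setting RDFLib's actual API aside, the two-phase query plan this enables is easy to sketch with a toy in-memory quad store. Everything below (class, method names, and data) is my own hypothetical illustration, not RDFLib code:

```python
# A toy in-memory quad store (a sketch for illustration only;
# RDFLib's real ConjunctiveGraph API differs in its details).
class ToyQuadStore:
    def __init__(self):
        self._quads = []  # list of (subject, predicate, object, graphName)

    def add(self, s, p, o, graph):
        self._quads.append((s, p, o, graph))

    def match(self, s=None, p=None, o=None, graph=None):
        # None acts as a wildcard, like an unbound query variable
        for qs, qp, qo, qg in self._quads:
            if ((s is None or qs == s) and (p is None or qp == p) and
                    (o is None or qo == o) and (graph is None or qg == graph)):
                yield qs, qp, qo, qg

store = ToyQuadStore()
store.add("ex:KingWen", "foaf:name", "King Wen", "graph:KingWen")
store.add("ex:KingWen", "rel:parentOf", "_:son", "graph:KingWen")
store.add("_:son", "foaf:mbox", "mailto:kingWu@historicalcharacter.com",
          "graph:KingWen")

# Phase 1: one scan of the whole store to learn which graph holds
# the statements about King Wen.
(_, _, _, wenGraph), = store.match(p="foaf:name", o="King Wen")

# Phase 2: every subsequent question about King Wen is scoped to that
# single graph instead of the entire database.
children = [o for _, _, o, _ in store.match(p="rel:parentOf", graph=wenGraph)]
```

The point of the sketch is the scoping in phase 2: once the containing graph is known, later matches never touch quads asserted in other graphs.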

It's likely that most other QuadStores have similar mechanisms, and given the great value in optimizing queries across large aggregations of named RDF graphs, this is a strong indication that RDF query languages should provide the means to express such a mechanism.

Most of what is needed is already there (in both Versa and SPARQL). Consider a SPARQL extension function which returns a boolean indicating whether the given triple pattern is asserted in a graph with the given name:

rdfg:AssertedIn(?subj,?pred,?obj,?graphIdentifier)

We can then get the email of King Wen's son efficiently with:

PREFIX : <http://xmlns.com/foaf/0.1/>
PREFIX rel: <http://purl.org/vocab/relationship/>
PREFIX rdfg: <http://www.w3.org/2004/03/trix/rdfg-1/>

SELECT ?mbox
WHERE {
    GRAPH ?foafGraph {
      ?kingWen :name "King Wen";
               rel:parentOf [ a :Person; :mbox ?mbox ].
    }
    FILTER (rdfg:AssertedIn(?kingWen, :name, "King Wen", ?foafGraph))
}

Now, it is worth noting that this mechanism can be supported explicitly by asserting provenance statements associating the people the graphs are about with the graph identifiers themselves, such as:

<urn:literature:characters:KingWen> 
  :primaryTopic <http://en.wikipedia.org/wiki/King_Wen_of_Zhou>.

However, I think that the relationship between an RDF triple and the graph in which it is asserted, although currently outside the scope of the RDF model, should have its semantics outlined in the RDF abstract syntax instead of being expressed with terms in an RDF vocabulary. The demonstrated value in RDF query optimization makes for a strong argument:

PREFIX : <http://xmlns.com/foaf/0.1/>
PREFIX rel: <http://purl.org/vocab/relationship/>
PREFIX rdfg: <http://www.w3.org/2004/03/trix/rdfg-1/>

SELECT ?kingWu ?sonName
WHERE {
    GRAPH ?wenGraph {
      ?kingWen :name "King Wen";
               :mbox ?wenMbox;
               rel:parentOf [ a :Person; :mbox ?wuMbox ].
    }
    FILTER (rdfg:AssertedIn(?kingWen, :name, "King Wen", ?wenGraph))
    GRAPH ?wuGraph {
      ?kingWu :name ?sonName;
              :mbox ?wuMbox;
              rel:childOf [ a :Person; :mbox ?wenMbox ].
    }
    FILTER (rdfg:AssertedIn(?kingWu, :name, ?sonName, ?wuGraph))
}

Generally, this pattern is any two-part RDF query across a database (a collection of multiple named graphs) where the first part, whose scope is the entire database, identifies terms local to a specific named graph, and the second part is scoped to that named graph alone.

Chimezie Ogbuji

via Copia

What Do Closed Systems Have to Gain From SW Technologies?

Aaron Swartz asked that I elaborate on a topic that is dear to me and I didn't think a blog comment would do it justice, so here we are :)

The question is what single-purpose (closed) databases have to gain from SW technologies. I think the most important misconception to clear up first is the idea that XML and Semantic Web technologies are mutually exclusive.
They are most certainly not.

It's not that I think Aaron shares this misconception, but I think the main reason the alternative approach to applying SW technologies that he suggests isn't very well spoken for is that quite a few people on the opposing sides of the issue assume that XML (and its whole stratum of protocols and standards) and RDF/OWL (the traditionally celebrated components of SW) are mutually exclusive. There are other misconceptions that hamper this discussion, such as the assumption that the SW is an all-or-nothing proposition, but that is a whole other thread :)

As we evolve towards a civilization where the value in information and its synthesis is of increasing importance, 'traditional' data mining, expressiveness of representation, and portability become more important for most databases (single-purpose or not).

These are areas that these technologies are meant to address, precisely because "standard database" software and technologies are simply not well suited for these specific requirements. Not all databases are alike, and so it follows that not all databases will have these requirements: consider databases whose primary purpose is the management of financial transactions.

Money is money, arithmetic is arithmetic, and the domain of money exchange and management for the most part is static and traditional / standard database technologies will suffice. Sure, it may be useful to be able to export a bank statement in a portable (perhaps XML-based) format, but inevitably the value in using SW-related technologies is very minimal.

Of course, you could argue that online banking systems have a lot to gain from these technologies, but the example was of pure transaction management; the portal that manages the social aspects of money management is a layer on top.

However, where there is a need to leverage:

  • More expressive mechanisms for data collection (think XForms)
  • (Somewhat) unambiguous interpretation of content (think FOL and DL)
  • Expressive data mining (think RDF querying languages)
  • Portable message / document formats (think XML)
  • Data manipulation (think XSLT)
  • Consistent addressing of distributed resources (think URLs)
  • General automation of data management (think Document Definitions and GRDDL)

These technologies will have an impact on how things are done. It's worth noting that these needs aren't restricted to distributed databases (the other common assumption about the Semantic Web being that it only applies within the context of the 'Web'). Consider the Wiki example and the advantages that Semantic Wikis have over plain ones:

  • Much improved possibility of data mining, from more formal representation of content
  • 'Out-of-the-box' interoperability with tools that speak in SW dialects
  • Possibility of a certain amount of automation, from the capabilities that interpretation brings

It's also worth noting that recently the Semantic Wiki project introduced mechanisms for using other vocabularies for 'marking-up' content (FOAF being the primary vocabulary highlighted).

It's doubly important in that (1) it demonstrates the value of incorporating well-established vocabularies with relative ease, and (2) the policed way in which these additional vocabularies can be used demonstrates precisely the middle ground between a very liberal, open-world-assumption approach to distributed data in the SW and a controlled, closed (single-purpose) systems approach.

Such constraints can allow for some level of uniformity that can have very important consequences in very different areas: XML as a messaging interlingua and extraction of RDF.

Consider the value in developing a closed vocabulary with its semantics spelled out very unambiguously in RDF/RDFS/OWL, and a uniform XML representation of its instances with an accompanying XSLT transform (something the AtomOWL project is attempting to achieve).

What do you gain? For one thing, XForms-based data entry for the uniform XML instances and a direct, (relatively) unambiguous mapping to a more formal representation model – each of which has its own very long list of advantages by itself, much less in tandem!

Stand-alone databases (where their needs intersect with the value in SW technologies) stand to gain: Portable, declarative data entry mechanisms, interoperability, much improved capabilities for interpretation and synthesis of existing information, increased automation of data management (by closing the system certain operations become much more predictable), and the additional possibilities for alternative reasoning heuristics that take advantage of closed world assumptions.

Chimezie Ogbuji

via Copia

“Get free stuff for Web design”

Subtitle: Spice up your Web site with a variety of free resources from fellow designers
Synopsis: Web developers can find many free resources, although some are freer than others. If you design a Web site or Web application, whether static or with all the dynamic Ajax goodness you can conjure up, you might find resources to lighten your load and spice up your content. From free icons to Web layouts and templates to on-line Web page tools, this article demonstrates that a Web architect can also get help these days at little or no cost.

This was a particularly fun article to write. I'm no Web designer (as you can tell by looking at any of my Web sites), but as an architect specializing in Web-fronted systems I often have to throw a Web template together, and I've slowly put together a set of handy resources for free Web raw materials. I discuss most of those in this article. On Copia Chimezie recently mentioned OpenClipart.org, which I've been using for a while, and is mentioned in the article. I was surprised it was news to as savvy a reader as Sam Ruby. That makes me think many other references in the article will be useful.

[Uche Ogbuji]

via Copia

Mapping Rete algorithm to FOL and then to RDF/N3

Well, I was hoping to hold off on the ongoing work I've been doing with FuXi until I could get a decent test suite working, but I've been engaged in several threads (older) that left me wanting to elaborate a bit.

There is already a well-established precedent with Python/N3/RDF reasoners (Euler, CWM, and Pychinko). FuXi used to rely on Pychinko, but I decided to write a Rete implementation for N3/RDF from scratch, trying to leverage the host language idioms (hashing, mappings, containers, etc.) as much as possible in areas where they could make a difference in rule evaluation and compilation.

What I have so far is more Rete-based than a pure Rete implementation, but the difference comes mostly from the impedance between the representation components in the original algorithm (which are very influenced by FOL and Knowledge Representation in general) and those in the semantic web technology stack.

Often with RDF/OWL there is more talk than necessary, so I'll get right to the meat of the semantic mapping I've been using. This assumes some familiarity with the original Rete algorithm.

Tokens

The working memory of the network is fed by an N3 graph. Tokens represent the propagation of RDF triples (no variables or formula identifiers) from the source graph through the rule network. They can represent token addition (where the triples are added to the graph, in which case the live network can be associated with a live RDF graph) or token removal (where triples are removed from the source graph). When tokens pass an alpha node's intra-element test (see below), they are passed on with a substitution/mapping of variables in the pattern to the corresponding terms in the triples. This variable substitution is used to check for consistent variable bindings across beta nodes.

ObjectType Nodes and Working Memory

ObjectType nodes can be considered equivalent to the test for concept subsumption (in Description Logics), and therefore equivalent to the alpha node RDF pattern:

?member rdf:type ?klass.

'Classic' Alpha node patterns (the one below is taken directly from the original paper) map to multiple RDF-triple alpha node patterns:

(Expression ^Op X ^Arg2 Y)

This would be equivalent to the following triple patterns (the multiplicative factor arises because RDF assertions are limited to binary predicates):

  • ?Obj rdf:type Expression
  • ?Obj Op ?X
  • ?Obj Arg2 ?Y

Alpha Nodes

Alpha nodes correspond to patterns in rules, which can be

  1. Triple patterns in N3 rules
  2. N-ary functions.

Alpha node intra-element tests have a 'default' mechanism for matching triple patterns, or they exhibit the behavior associated with a registered set of N-ary functions - the core set coincides with those used by CWM/Euler/Pychinko (often called N3 built-ins). FuXi will support an extension mechanism for registering additional N-ary N3 functions by name, associating each with a Python function that implements the constraint, in a similar fashion to SPARQL extension functions. N-ary functions are automatically propagated through the network so they can participate in beta node activation (inter-element testing, in Rete parlance) with regular triple patterns, using the bindings to determine the arguments for the functions.

The default mechanism is equality of non-variable terms (URIs).
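As a rough sketch of that default mechanism (the function and names below are my own toy illustration, not FuXi's actual code), an intra-element test checks constants for equality and binds variables:

```python
# Toy intra-element test: match one triple against one alpha-node
# pattern. Variables carry a leading '?'; constants must match
# exactly. Returns a variable-substitution dict, or None on failure.
def intra_element_match(pattern, triple):
    bindings = {}
    for pat_term, term in zip(pattern, triple):
        if pat_term.startswith('?'):
            # variable: bind it, but reject a second, different value
            if bindings.get(pat_term, term) != term:
                return None
            bindings[pat_term] = term
        elif pat_term != term:
            # non-variable term: plain equality test
            return None
    return bindings

# The ObjectType-style pattern from above:
pattern = ('?member', 'rdf:type', 'foaf:Person')
triple = ('ex:KingWen', 'rdf:type', 'foaf:Person')
```

A successful match yields the substitution that gets propagated onward with the token.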

Beta Nodes

Beta nodes are pretty much verbatim Rete, in that they check for consistent variable substitution between their left and right memories. This can be considered similar to the unification routine common to both forward- and backward-chaining algorithms that make use of the Generalized Modus Ponens rule. The difference is that the sentences aren't being made to look the same; rather, the existing variable substitutions are checked for consistency. Perhaps there is some merit in this similarity that would make using a Rete network to facilitate backward chaining and proof generation an interesting possibility, but that has yet to be seen.
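That consistency check is easy to sketch too (again my own toy code, not FuXi's): two partial substitutions join only if they agree on every shared variable:

```python
# Toy beta-node join: merge variable substitutions from the left and
# right memories, succeeding only when shared variables agree.
def join_bindings(left, right):
    for var in left.keys() & right.keys():
        if left[var] != right[var]:
            return None  # inconsistent substitution: no join
    merged = dict(left)
    merged.update(right)
    return merged

# Substitutions that share ?mbox with the same value join cleanly:
left = {'?x': 'ex:KingWen', '?mbox': 'mailto:kingWu@historicalcharacter.com'}
right = {'?mbox': 'mailto:kingWu@historicalcharacter.com', '?son': 'ex:KingWu'}
```

Note that nothing here rewrites the sentences themselves, which is the difference from full unification mentioned above.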

Terminal Nodes

These correspond to the end of the LHS of an N3 rule and are associated with the RHS; when 'activated', they 'fire' the rule, apply the propagated variable substitution, and add the newly inferred triples to the network and to the source graph.

Testing and Visualizing RDF/N3 Rete Networks

I've been able to adequately test the compilation process (the first of two parts in the original algorithm) using a visual aid. I've been developing a library for generating Boost Graph Library (BGL) DiGraphs from RDFLib graphs, called Kaleidos. The value is in generating GraphViz diagrams, as well as in access to a whole slew of graph heuristics and algorithms that could be infinitely useful for RDF graph analysis and N3 rule network analysis:

  • Breadth First Search
  • Depth First Search
  • Uniform Cost Search
  • Dijkstra's Shortest Paths
  • Bellman-Ford Shortest Paths
  • Johnson's All-Pairs Shortest Paths
  • Kruskal's Minimum Spanning Tree
  • Prim's Minimum Spanning Tree
  • Connected Components
  • Strongly Connected Components
  • Dynamic Connected Components (using Disjoint Sets)
  • Topological Sort
  • Transpose
  • Reverse Cuthill Mckee Ordering
  • Smallest Last Vertex Ordering
  • Sequential Vertex Coloring

Using Kaleidos, I'm able to generate visual diagrams of Rete networks compiled from RDF/OWL/N3 rule sets.

However, the heavy cost of using BGL is the build process for BGL and BGL-Python, which is involved when done from source.

Chimezie Ogbuji

via Copia

Coup de Boule (or summer 2006: pwned by Zizou)

Work's been a bit heavy this week, but I've had instant stress relief every evening thanks to the new summer craze. F'real, Janet was just dreaming of a response like this when she told Justin Timbo to pop her bra. The Zizou is everywhere. First of all (coz you know j'aime d'la musique tout de temps, ooh), there's the song "Coup de Boule" that's become an overnight phenom in France. Then there are the animations. I swear many of the gems I've seen make the ones I posted seem positively pedestrian. Thank goddess for under-employed people. My French/American friend Noah tipped me to this Register article with a few choice selections. That article led me to this "Zidane's headbutt photoshop-thread" forum, which has pages of good stuff.

There's even more madness on YTMND. My fave is The Death Star.

Question: Does the Zizou count as the best header in World Cup history that was not saved by Gordon Banks? Think about it.

[Uche Ogbuji]

via Copia

Tagging meets hierarchies: XBELicious

The indefatigable John L. Clark recently announced another very useful effort: the start of a system for managing your del.icio.us bookmarks as XBEL files. Of course not everyone might be as keen on XBEL as I am, but even if you aren't, there is reason for more general interest in the project. It uses a very sensible set of heuristics for mapping tagged metadata to hierarchical metadata. del.icio.us is all Web 2.0-ish and thus uses tagging for organization. XBEL is all XML-ish and thus uses hierarchy for the same. I've long wanted to document simple common-sense rules for mapping one scheme to the other, and John's approach is very similar to sketches I had in my mind. Read section 5 ("Templates") of the XBELicious Installation and User's Guide for an overview. Here is a key snippet:

For example, if your XBEL template has a hierarchy of folders like "Computers → linux → news" and you have a bookmark tagged with all three of these tags, then it will be placed under the "news" folder because it has tags corresponding to each level in this hierarchy. Note, however, that this bookmark will not be placed in either of the two higher directories, because it fits best in the news category. A bookmark tagged with "Computers" and "news" would only be placed under "Computers" because it doesn't have the "linux" tag, and a bookmark tagged with "linux" and "news" would not be stored in any of these three folders.
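The rule in that snippet is easy to state in code. Here's a small sketch of my own reading of the heuristic (not XBELicious's actual implementation): a bookmark lands in the deepest folder whose entire ancestor chain of tags is covered by the bookmark's tags, and a missing intermediate tag stops the descent:

```python
# Toy placement rule: a bookmark goes into the deepest folder whose
# whole ancestor chain of tags is covered by the bookmark's tags.
def place(hierarchy_path, tags):
    """hierarchy_path: folder tags from root, e.g. ['Computers', 'linux', 'news'].
    Returns the folder path the bookmark belongs in, or None."""
    placed = None
    for depth, folder_tag in enumerate(hierarchy_path, 1):
        if folder_tag in tags:
            placed = hierarchy_path[:depth]
        else:
            break  # a missing intermediate tag stops the descent
    return placed

# The three examples from the quoted guide:
deepest = place(['Computers', 'linux', 'news'], {'Computers', 'linux', 'news'})
partial = place(['Computers', 'linux', 'news'], {'Computers', 'news'})
nowhere = place(['Computers', 'linux', 'news'], {'linux', 'news'})
```

Running the three calls reproduces the quoted behavior: the fully-tagged bookmark files under "news", the one missing "linux" stops at "Computers", and the one missing "Computers" is placed nowhere in that branch.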

XBELicious is work in progress, but worthy work for a variety of reasons. I hope I have some time to lend a hand soon.

[Uche Ogbuji]

via Copia