del.icio.us daily links, using Amara

I added a new feature on Copia: Every day there will be an automated posting with mine and Chime's del.icio.us links from the previous day. You can see, in the previous Copia entry to this one, an example of the results.

What I think is most cool is how easy it was to write, and how easy the resulting code is to understand. It's just 35 lines (including 7 lines of imports), and in that it packs some useful features I haven't found in other such scripts, including:

  • Full Unicode safety (naturally, I wouldn't have it any other way)
  • support for multiple del.icio.us feeds, with tag by author
  • tagging the PyBlosxom entry with the aggregated/unique tags from the del.icio.us entries

Here's the code. The only external requirement is Amara:

import os
import sets
import time
import codecs
import itertools
from datetime import date, timedelta

from amara import binderytools

TAGBASE = 'http://del.icio.us/tag/'

#Change BASEDIR and FEEDS to customize
BASEDIR = '/srv/www/ogbuji.net/copia/pyblosxom/datadir'
FEEDS = ['http://del.icio.us/rss/uche', 'http://del.icio.us/rss/chimezie']

now = time.gmtime()
timestamp = unicode(time.strftime('%Y-%m-%dT%H:%M:%SZ', now))
targetdate = (date(*now[:3]) - timedelta(1)).isoformat()

#Using Amara.  Easy to just grab the RSS feed
docs = map(binderytools.bind_uri, FEEDS)
items = itertools.chain(*[ doc.RDF.item for doc in docs ])
current_items = [ item for item in items
                       if unicode(item.date).startswith(targetdate) ]
if current_items:
    # Create a Markdown page with the daily bookmarks.
    dir = '%s/%s' % (BASEDIR, targetdate)
    if not os.path.isdir(dir):
        os.makedirs(dir)
    f = codecs.open('%s/%s/del.icio.us.links.txt' % (BASEDIR, targetdate), 'w', 'utf-8')

    # Pyblosxom Title
    f.write(u'del.icio.us bookmarks for %s\n' % targetdate)

    tags = sets.Set()
    for item in current_items:
        tags.update([ li.resource[len(TAGBASE):] for li in item.topics.Bag.li ])
    f.write(u'#post_time %s\n'%(timestamp))
    f.write(u'<!--keywords: del.icio.us,%s -->\n'%(u','.join(tags)))

    for item in current_items:
        # List of links in Markdown.
        title = getattr(item, 'title', u'')
        href = getattr(item, 'link', u'')
        desc = getattr(item, 'description', u'')
        creator = getattr(item, 'creator', u'')
        f.write(u'* "[%s](%s)": %s *(from %s)*\n' % (title, href, desc, creator))

    f.close()

Or download amara_delicious.py.

You can see how easily you can process RSS 1.0 in Amara. I don't think actual RDF parsing/processing is one bit necessary. That extra layer is the first thing that decided me against Matt Biddulph's module, in addition to his use of libxml for XML processing, which Roberto De Almeida's module uses as well.
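
For instance, here's a minimal sketch, using the same Amara binderytools API as the script above, that just prints the title and link of each item in a feed (the feed URL is one of the FEEDS above; any RSS 1.0 feed should work the same way):

from amara import binderytools

FEED = 'http://del.icio.us/rss/uche'

doc = binderytools.bind_uri(FEED)
#RSS 1.0 items are just item elements under the rdf:RDF document element
for item in doc.RDF.item:
    print unicode(item.title), unicode(item.link)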

[Uche Ogbuji]

via Copia

"Tip: Computing word count in XML documents" pubbed

"Tip: Computing word count in XML documents"

XML is text and yet more than just text -- sometimes you want to work with just the content rather than the tags and other markup. In this tip, Uche Ogbuji demonstrates simple techniques for counting the words in XML content using XSLT with or without additional tools.
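
The tip itself works through the XSLT approaches, but the basic idea is easy to sketch in Python as well: gather the character data and split on whitespace. (A rough sketch only; the input file name is hypothetical, and it glosses over details, such as whitespace handling, that the tip covers properly.)

from xml.dom import minidom

def text_of(node):
    #Concatenate all character data beneath a node
    if node.nodeType in (node.TEXT_NODE, node.CDATA_SECTION_NODE):
        return node.data
    return u''.join([ text_of(child) for child in node.childNodes ])

doc = minidom.parse('article.xml')  #hypothetical input document
print len(text_of(doc.documentElement).split())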

It was just a few weeks after I sent the manuscript to the editor that this thread started up on XML-DEV. Spooky timing.

[Uche Ogbuji]

via Copia

Today's XML WTF: Internal entities in browsers

This unnecessary screw-up comes from the Mozilla project, of all places. Mozilla's XML support is improving all the time, as I discuss in my article on XML in Firefox, but the developer resources seem to lag the implementation, and this often leads to needless confusion. One that I ran into recently could perhaps be given the summary: "not everything in the Mozilla FAQ is accurate". From the Mozilla FAQ:

In older versions of Mozilla as well as in old Mozilla-based products, there is no pseudo-DTD catalog and the use of entities (other than the five pre-defined ones) leads to an XML parsing error. There are also other XHTML user agents that do not support entities (other than the five pre-defined ones). Since non-validating XML processors are not required to support entities (other than the five pre-defined ones), the use of entities (other than the five pre-defined ones) is inherently unsafe in XML documents intended for the Web. The best practice is to use straight UTF-8 instead of entities. (Numeric character references are safe, too.)

See in particular the claim that the use of entities other than the five pre-defined ones is "inherently unsafe" for the Web. Someone either didn't read the spec, or is intentionally throwing up a spec distortion field. The XML 1.0 spec provides a table in section 4.4, "XML Processor Treatment of Entities and References", which tells you how parsers are allowed to treat entities, and it flatly contradicts the bogus Mozilla FAQ statement above.

The main reason for the "WTF" is the fact that the Mozilla implementation actually gets it right, as it should: it's based on Expat. AFAIK Expat has always got this right (I've been using Expat about as long as the Mozilla project has been), so I'm not sure what inspired the above error. Mozilla should be touting its correct and useful behavior, rather than giving bogus excuses to its competitors.

This came up last week in the IBM developerWorks forum where a user was having problems with internal entities in XHTML. It turns out that he was missing an XHTML namespace (and based on my experimentation was probably serving up XHTML as text/html which is generally a no-no). It should have been a clear case of "Mozilla gets this right, and can we please get other browsers to fix their bugs?" but he found that FAQ entry and we both ended up victims of the red herring for a little while.

I didn't realize that the Mozilla implementation was right until I wrote a careful test case in preparation for my next Firefox/XML article. The following CherryPy code is a test server set-up for browser rendering of XHTML.

import cherrypy

INTENTITYXHTML = '''\
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html
  PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
         "http://www.w3.org/TR/xhtml/DTD/xhtml1-strict.dtd" [
<!ENTITY internal "This is text placed as internal entity">
]>
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-US">
  <head>
    <title>Using Entity in xhtml</title>
  </head>
  <body>
    <p>This is text placed inline</p>
    <p>&internal;</p>
    <abbr title="&internal;">Titpaie</abbr>
  </body>
</html>
'''

class root:
    @cherrypy.expose
    def text_html(self):
        cherrypy.response.headerMap['Content-Type'] = "text/html; charset=utf-8"
        return INTENTITYXHTML

    @cherrypy.expose
    def text_xml(self):
        cherrypy.response.headerMap['Content-Type'] = "text/xml; charset=utf-8"
        return INTENTITYXHTML

    @cherrypy.expose
    def app_xml(self):
        cherrypy.response.headerMap['Content-Type'] = "application/xml; charset=utf-8"
        return INTENTITYXHTML

    @cherrypy.expose
    def app_xhtml(self):
        cherrypy.response.headerMap['Content-Type'] = "application/xhtml+xml; charset=utf-8"
        return INTENTITYXHTML

cherrypy.root = root()
cherrypy.config.update({'server.socketPort': 9999})
cherrypy.config.update({'logDebugInfoFilter.on': False})
cherrypy.server.start()

As an example, this code serves up a content type text/html when accessed through a URL such as http://localhost:9999/text_html. You should be able to work out the other URL to content type mappings from the code, even if you're not familiar with CherryPy or Python.
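
If you want to confirm the mappings without reading the code, a few lines of Python will do it (a small sketch; it assumes the test server above is already running on port 9999):

import urllib

for path in ('text_html', 'text_xml', 'app_xml', 'app_xhtml'):
    #Each exposed method maps to a URL path of the same name
    response = urllib.urlopen('http://localhost:9999/%s' % path)
    print path, '->', response.info().gettype()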

Firefox 1.0.7 handles all this very nicely. For text_xml, app_xml and app_xhtml you get just the XHTML rendering you'd expect, including the correct text in the attribute value with the mouse hovered over "Titpaie".

IE6 (Windows) and Safari 1.3.1 (OS X Panther) both have a lot of trouble with this.

IE6 in the text_xml and app_xml cases complains that it can't find http://www.w3.org/TR/xhtml/DTD/xhtml1-strict.dtd. In the app_xhtml case it treats the page as a download, which is reasonable, if not convenient.

Safari in the text_xml, app_xml and app_xhtml cases complains that the entity internal is undefined (??!!).

IE6, Safari and Mozilla in the text_html case all show the same output (looking, as it should, like busted HTML). That's just what you'd expect for a tag soup mode, and emphasizes that you should leave text_html out of your XHTML vocabulary.

All this confusion and implementation difference illustrates the difficulty for folks trying to deploy XHTML, and why it's probably not yet realistic to deploy XHTML without some sort of browser sniffing (perhaps by checking the Accept header, though it's well known that browsers are sometimes dishonest with this header). I understand that the MSIE7 team hopes to address such problems. I don't know whether to expect the same from Safari. My focus in research and experimentation has been on Firefox.
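
If you do resort to sniffing, the check can live right in the CherryPy handler. Here's a rough sketch in the style of the test server above; it assumes the request headers are exposed as cherrypy.request.headerMap, mirroring the response headerMap used earlier, which may differ across CherryPy versions:

import cherrypy

XHTML_TYPE = "application/xhtml+xml"

class root:
    @cherrypy.expose
    def index(self):
        #Assumption: request headers are exposed as a headerMap dict,
        #as in the CherryPy version used for the test server above
        accept = cherrypy.request.headerMap.get('Accept', '')
        if XHTML_TYPE in accept:
            #The client explicitly advertises XHTML support
            cherrypy.response.headerMap['Content-Type'] = XHTML_TYPE + "; charset=utf-8"
        else:
            #Fall back to tag soup for everyone else
            cherrypy.response.headerMap['Content-Type'] = "text/html; charset=utf-8"
        return INTENTITYXHTML  #the XHTML document from the test server above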

One final note is that Mozilla does not support external parsed entities. This is legal (and some security experts claim even prudent). The relevant part of the XML 1.0 spec is section 4.4.3:

When an XML processor recognizes a reference to a parsed entity, in order to validate the document, the processor MUST include its replacement text. If the entity is external, and the processor is not attempting to validate the XML document, the processor MAY, but need not, include the entity's replacement text. If a non-validating processor does not include the replacement text, it MUST inform the application that it recognized, but did not read, the entity.

I would love Mozilla to adopt the idea in the next spec paragraph:

Browsers, for example, when encountering an external parsed entity reference, might choose to provide a visual indication of the entity's presence and retrieve it for display only on demand.

That would be very useful. I wonder whether it would be possible through a Firefox plug-in (probably not: I guess it would require very tight Expat integration for plug-ins).

[Uche Ogbuji]

via Copia

I'm going going back back to Naija Naija

(Apologies for the title to the late Biggie Smalls). Fifteen years. Iri na ise. That's how long it's been since I've even stepped foot on Nigerian soil. The latter half of this December, Lori, the kids and I shall be heading back for the holidays. Should be quite an experience for Lori and Osita. I expect Jide will just be generally aware that stuff's kinda different. Udoka will probably know no better than to squeeze the lungs when he needs Mom for that milk.

We'll mostly be staying in Calabar with my maternal relatives, since we're traveling with my mother, but we'll jaunt around some, including a visit to my father's home town. I hope to get to Nsukka, where I began university, although that might not be realistic in the available time. It's a good time of year to go. Not only will there be a lot of folks to see, because Nigerians traditionally go home for Christmas hols, but the weather will be as close to Colorado's as it gets there, what with the dry season in full effect, the chilly Harmattan mornings and the intense midday sun. The mosquitoes should be at manageable levels. Should also be a good time for shopping. Thanks to Naijajams, I have a good sized list of music to pick up, and then there are all those books I just can't seem to find here in the U.S. Now that I'm finally starting to get over the shock of how much it's all costing us, I'm getting pretty excited.

[Uche Ogbuji]

via Copia

Happy Birthday, Nigeria

Ndewo nu. Ekaro. Sanu. The land of my birth is 45 today. No be small ting, oh! Thrown together as we were by the Beasts of no Nation, it would have been prodigy enough for the nation to have lasted a decade. It very nearly didn't. As it is, it looks as if we're intent on fusing our identity into the global fabric as a set of intriguing personalities with exotic names rather than as a nation. Just yesterday I watched Chiwetel Ejiofor put in his usual smouldering performance in the geek event of the week. I'm wondering when Wole Soyinka or Chinua Achebe will next be in my neck of the woods so I can see the great men speak. I scour the local bookshops for glimpses of anything by Abiola Irele. I grin at the rave reviews for U.S./U.K. published novels by Chimamanda Ngozi Adichie, Helen Oyeyemi and Chimezie's great friend Nnedimma Okorafor-Mbachu. I wild out when Oguchi Onyewu and Amaechi Igwe play strongly for the US national soccer team, Ogonna Nnamani for the US Women's Volleyball team, Emeka Okafor for the US Basketball team. Heck, I wonder whether Emmanuel Olisadebe will have a good run now that he's been reactivated to help Poland through the 2006 World Cup qualifiers (yeah, that's right—Poland). My son Osita and I try to catch the New York Giants (a team I've traditionally ignored) so that we can check out his namesake Ositadinma Umenyiora's skills at work in the American brand of football. Hmm. There's a broken lens somewhere behind this picture. Ah well. In a less somber observance, I've added Nigerian Blogs Aggregator to my sidebar. And I hate to wish anyone ill, but I just have to spend a moment today invoking the banana peel for Angola in the final African group four World Cup qualifier round so Nigeria can join the U.S. in booking tickets to Germany.

Nna man, men. 45 years, abi? We see wahala no mean say we no fit celebrate. E je ka jo O!

[Uche Ogbuji]

via Copia

Happy Serenity Day

As a Nigerian, I'm used to being late to parties, but that doesn't mean I enjoy them any less when I finally get there. I've heard about Firefly for over a year now. I've heard all the raves and all the abuse of Fox for cocking up a very good thing (no news there).

I finally got around to watching it when the SciFi channel started re-airing it in the right order a couple of months ago. I was hooked about halfway through the first hour of the pilot. I do mean hooked. I decided that the cable format wasn't doing it any justice (not with those gorgeous space cut scenes) and I grabbed the DVD series on Netflix. Lori caught some of it as she'd walk by when I was watching and was at first intrigued and finally completely hooked, as well. We're hoping Netflix delivers the first series CD today so that Lori can watch the only episodes she hasn't yet seen in a marathon before we go to see Serenity today.

And oh yeah, it's all about Serenity today. In my opinion, Firefly is the best TV science fiction show ever. Ever. Better than Star Trek TNG and DS9. Better than X-Files when it was good (the first three seasons). Firefly is an amazingly tight package of plot, action and character development. You really do care what's about to happen at every moment. You find yourself immersed in the very credible universe that the series depicts. It's a virtuoso science fiction performance. The common description is "a Western in space, with horses", but I don't think that quite does the show justice. To get a better idea of the quality of the show, consider that it's really the best of all genres between Western and science fiction, and it glides effortlessly across this spectrum. Based on early screening reports, the Serenity movie is every bit as good as the series, if not better.

It's cool if you're even later than I am to the Firefly party, but don't you miss it completely. First rent the DVDs this week, then catch the movie before it goes off the biggest multiplex screens. I bet it doesn't even matter whether or not you're a sci fi fan (Lori certainly isn't). Firefly is not about caricature aliens, pseudo-scientific babble and the like. It's about strong characters, great humor, tight action, and one brilliant thread of a story.

[Uche Ogbuji]

via Copia

The Mexican puzzle of "kalucha"

I play a lot of amateur soccer ("football", henceforth), as my poor right knee can attest. In the U.S., this inevitably means playing a lot with Spanish-speaking immigrants. As a result, my football Spanish has always been a lot better than my general-purpose Spanish (I do have to work on the latter).

One puzzle I've had for a while (at least a year) is why Mexicans call African players "kalucha". I've become quite used to being called that recently. Every call to me or other Africans on the field would use the term—"otra vez, kaLUcha!" or "chuta-la kaLUcha". I tried to puzzle it out in linguistic terms. Maybe it had something to do with "lucha"—"fight", "wrestling bout". Maybe it was a dig at the rather combative style of soccer African immigrants are used to. That didn't really sound right. When I asked a few of my Mexican friends, they said they weren't sure: they'd picked it up from their friends.

Last night I finally figured it out. Lori and I were watching a documentary that touched on the terrible tragedy of the 1993 Zambian football team plane crash. They happened to talk a bit about Kalusha Bwalya, the Zambian star who (with Charles Musonda) happened to miss the fatal plane ride because he played his professional football abroad and was to fly to Senegal separately. I'd known Kalusha had gone to Mexico, but I didn't know he played for a time with the very popular Club América, nor did I know how hugely popular he'd become.

Mention Kalusha to any Mexican soccer fan and you could be certain they've met, heard of, or watched him on the screen. Having lived in Mexico for over five years, Kalusha has won the hearts of most Mexicans and earned himself much respect.

In retrospect, this should have been obvious to me. As an example, I mentioned above the bit of Spanglish "chuta-la", in which "chuta" is a corruption of the English "shoot", because the "sh" sound does not occur naturally in Spanish and is generally corrupted to "ch". The same effect was changing "Kalusha" to "Kalucha". Most big-time soccer nations have a custom of local football nicknames taken from prominent stars. In Nigeria, we called each other "Keshi" or "Sia-Sia" depending on playing style or looks. Senegalese immigrants here in Colorado call each other "Diouf" and "Titi Camara". Mexicans call each other "Rafa" or "Borghetti" (wicked exciting player, that one). Clues were everywhere.

People call Bwalya "Kalu" for short. This is one of those names, like "Obi", that are common throughout the African continent, with different meanings almost everywhere. In Igbo, "Kalu" (with high tone and emphasis on the first syllable) is generally short for "Kamalu", meaning "thunder". It's a name I considered for Jide. Soccer is full of prodigious Kalus, including Nigerian Igbo Kalu Uche and Ivorian Bonaventure Kalou.

[Uche Ogbuji]

via Copia

Hooking up an IRC Agent to a Query Interface

Uche gave an excellent suggestion for augmenting Emeka to work with Triclops. After finishing Triclops, I had realized that most of the functionality Emeka provided was now redundant, since it could be performed using Triclops (with the added advantage of being able to diagram/navigate RDF graphs). Triclops URIs are unfortunately very long for queries submitted through HTTP GET. This is mostly unavoidable, because the parameters to the query service are Versa queries (which have their own syntax, including characters that are not very friendly to URIs) and URIs, so the prospect of submitting URIs to a service itself identified by a URI will probably always be problematic, and might be as fundamental a problem as submitting XML content through an XML-based messaging format. I'm digressing, but the solution that worked for me (which Uche suggested) was to make use of a URI 'shortening' service (such as MakeAShorterLink, which I used) to generate a 'hashed' or 'compacted' form of the URI from the parts submitted from, say, IRC.

For example, let's say I was interested in Daniel Krech's redfoot framework and wanted to navigate the ontology (available from the URL: http://redfoot.net/2005/redfoot) as an RDF diagram, in order to get an idea of the components and how they relate. In particular, let's say I knew to start with the redfoot:Code class. I could interactively ask Emeka to execute the query:

redfoot:Code

against the URL which returns the RDF/XML serialization of the redfoot ontology. I could also ask Emeka to have the result rendered as a JPEG diagram with an associated image map that allows me to navigate to other aspects of the ontology to learn more about redfoot. Below is how I would do so with the new .triclops service:

<chimezie> .triclops "redfoot:Code" http://redfoot.net/2005/redfoot jpeg
<Emeka> Shortened link to Triclops request: http://makeashorterlink.com/?I1F7262EB

Below is the help associated with the triclops service.

<chimezie> .help triclops
<Emeka> .triclops "..Versa Query.." <source graph> <triples|jpeg|svg|versa>
<Emeka> The first argument is quoted Versa query to execute.  The second is a URL which points to the RDF graph (N3 or RDF/XML) to query against
<Emeka> The third argument is one of "triples", "jpeg", or "svg" and specifies how to return the query result
<Emeka> "triples" - raw triples in a tabled-grid, "jpeg" or "svg" - as navigable RDF graphs, and "versa" - raw Versa datatypes (rendered as html)
<Emeka> The result is a uri (courtesy of http://makeashorterlink.com) which redirects to the appropriate Triclops request
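
For the curious, the long URI that gets shortened is just an ordinary GET request with the Versa query and the source graph URL percent-encoded into it, roughly along the lines of the sketch below (the Triclops base URL and parameter names here are made up for illustration; only the encoding step is the point, and the call to the shortening service is left out):

import urllib

#Hypothetical Triclops endpoint and parameter names, for illustration only
TRICLOPS_BASE = 'http://example.org/triclops'

def triclops_uri(query, graph_url, output='jpeg'):
    #Pack a Versa query and source graph into a (long) GET URI
    params = urllib.urlencode({
        'query': query,       #the Versa expression, e.g. "redfoot:Code"
        'source': graph_url,  #URL of the RDF graph (N3 or RDF/XML) to query against
        'output': output,     #one of triples, jpeg, svg, versa
    })
    return '%s?%s' % (TRICLOPS_BASE, params)

print triclops_uri('redfoot:Code', 'http://redfoot.net/2005/redfoot')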

[Chimezie Ogbuji]

via Copia

Quotīdiē

To bring the dead to life
Is no great magic.
Few are wholly dead:
Blow on a dead man's embers
And a live flame will start.

Let his forgotten griefs be now,
And now his withered hopes;
Subdue your pen to his handwriting
Until it prove as natural
To sign his name as yours.

Limp as he limped,
Swear by the oaths he swore;
If he wore black, affect the same;
If he had gouty fingers,
Be yours gouty too.

Assemble tokens intimate of him —
A seal, a cloak, a pen:
Around these elements then build
A home familiar to
The greedy revenant.

So grant him life, but reckon
That the grave which housed him
May not be empty now:
You in his spotted garments
Shall yourself lie wrapped.

Robert Graves—"To bring the dead to life"

It has been a sad long while since I've posted a Quotīdiē, and an even sadder long while since I've had time for contemplation of the choicest art, but few spirits raise me from a poetic torpor as well as Robert Graves, one of my favorite poets and critics.

Robin Hamilton used the first stanza of the above poem in a message on the New-Poetry mailing list. I couldn't place it, but when I asked, Robin kindly provided the source.

I've invoked Graves myself on that mailing list. The man never seems far from modern meditation on the numinous qualities of poetry. He himself sometimes went overboard in his mysticism, and sometimes it even clogged up his verse (Robin put it very aptly: "God preserve us from Graves' Goddess Poems."), but overall, there are few writers that surpass Graves for impressing upon students the divine essence of poetry.

See for yourself. Visit the Robert Graves Archive. Some of the links therefrom are broken, but overall, it's a very useful compilation.

[Uche Ogbuji]

via Copia

BNode Drama for your Mama

You know you're a geek when it's 5 a.m. and you're wrestling with existential quantification and its value in querying. This was triggered originally by the ongoing effort to extend an already expressive pattern-based RDF querying language to cover more use cases. The motivation is that such patterns should be expressive beyond just the level of triple matching, since the core RDF model has a level of granularity below statements (you have literals, resources, and bnodes...). I asked myself if there was a justifiable reason why Versa at its core does not include BNodes:

Blank nodes are treated as simply indicating the existence of a thing, without using, or saying anything about, the name of that thing. (This is not the same as assuming that the blank node indicates an 'unknown' URI reference; for example, it does not assume that there is any URI reference which refers to the thing. The discussion of Skolemization in appendix A is relevant to this point.)

I don't remember the original motivation for leaving BNodes out of the core query data types, but in retrospect I think it was a good decision, and not only because the SPARQL specification does something similar (interpreting BNodes as open-ended variables). But it's worth noting that the section on blank nodes appearing in a query, as opposed to appearing in a query result (or existing in the underlying knowledge base), is quite short:

A blank node can appear in a query pattern. It behaves as a variable; a blank node in a query pattern may match any RDF term.

Anyways, at the time I noticed this lack of BNodes in query languages, I had a misconception about BNodes. I thought they represented individual things we want to make statements about but whose identification we don't know, or don't want to have to worry about assigning (this is probably how 90% of BNodes are used in reality). This confusion came from the practical way BNodes are almost always handled by RDF data stores (Skolemization):

Skolemization is a syntactic transformation routinely used in automatic inference systems in which existential variables are replaced by 'new' functions - function names not used elsewhere - applied to any enclosing universal variables. In RDF, Skolemization amounts to replacing every blank node in a graph by a 'new' name, i.e. a URI reference which is guaranteed to not occur anywhere else. In effect, it gives 'arbitrary' names to the anonymous entities whose existence was asserted by the use of blank nodes: the arbitrariness of the names ensures that nothing can be inferred that would not follow from the bare assertion of existence represented by the blank node.
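
As a rough illustration of what a store does when it Skolemizes, here's a small sketch using rdflib's Graph API (the URI scheme for the 'new' names is arbitrary here; a real store would use its own convention):

from rdflib import Graph, BNode, URIRef

def skolemize(graph):
    #Return a copy of graph with every BNode replaced by a fresh URI
    fresh = {}
    def skolem(node):
        if isinstance(node, BNode):
            #An arbitrary 'new' name, guaranteed not to occur anywhere else
            return fresh.setdefault(node,
                URIRef('tag:example.org,2005:skolem/%s' % node))
        return node
    result = Graph()
    for s, p, o in graph:
        result.add((skolem(s), p, skolem(o)))
    return result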

This misconception was cleared up when Bijan Parsia ({scope(PyChinko)} => {scope(FuXi)}) took issue with my assertion(s) that there are some compromising redundancies among BNodes, Literals, and simple entailment with regard to building programmatic APIs for them.

Then the light bulb went off: the semantics of BNodes are (as he put it) much stronger than the way they are most often used. Most people who use BNodes don't mean to state that there is a class of things which have the asserted set of statements made about them. Consider the difference between:

  1. Who are all the people Chime knows?
  2. There is someone Chime knows, but I just don't know his/her name right now
  3. Chime knows someone! (dudn't madder who)

The first scenario is the basic use case for variable resolution in an RDF query and is asking for the resolution of variable ?knownByChime in:

<http://metacognition.info/profile/webwho.xrdf#chime> foaf:knows ?knownByChime.

Which can be expressed in Versa (currently) as:

resource('http://metacognition.info/profile/webwho.xrdf#chime')-foaf:knows->*

Or eventually (hopefully) as:

foaf:knows(<http://metacognition.info/profile/webwho.xrdf#chime>)

And in SPARQL as:

select 
  ?knownByChime 
where  
{
  <http://metacognition.info/profile/webwho.xrdf#chime> foaf:knows ?knownByChime
}

The second case is the most common way people use BNodes. You want to say Chime knows someone, but you don't know a permanent identifier for this person, or don't care to assign one at the time you make the assertion:

<http://metacognition.info/profile/webwho.xrdf#chime> foaf:knows _:knownByChime .

But RDF-MT specifically states that BNodes are not meant to be interpreted only in this way. Their semantics are much stronger. In fact, as Bijan pointed out to me, the proper use for BNodes is as scoped existentials within ontological assertions. For example, owl:Restrictions allow you to say things like: the named class KnowsChime consists of everybody who knows Chime:

@prefix mc: <http://metacognition.info/profile/webwho.xrdf#>.
@prefix owl: <http://www.w3.org/2002/07/owl#>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.

mc:KnowsChime a owl:Class;
    rdfs:subClassOf
      [
        a owl:Restriction;
        owl:onProperty foaf:knows;
        owl:hasValue mc:chime
      ];
    rdfs:label "KnowsChime";
    rdfs:comment "Everybody who knows Chime".

The fact that BNodes aren't meant to be used in the way they often are has led to some suggested modifications to allow BNodes to be used as 'temporary identifiers' in order to simplify query resolution. But as was clarified in the same thread, BNodes in a query don't make much sense - which is the conclusion I'm coming around to: there is no use case for asserting an existential quantification while querying a knowledge base for information. Using a variable (in the way SPARQL does) should be sufficient. In fact, all RDF querying use cases (and languages) seem to be reducible to variable resolution.

This last part is worth noting because it suggests that if you have a library that handles variable resolution (such as rdflib's most recent addition), you can map any query language (Versa/SPARQL/RDFQueryLanguage_X) to it by reducing the query to a set of triple patterns with the variables you wish to resolve.
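
In rdflib terms, the reduction looks roughly like the following sketch against the Graph API, with None playing the role of the variable to resolve (the FOAF file URL is just the example from above):

from rdflib import Graph, Namespace, URIRef

FOAF = Namespace('http://xmlns.com/foaf/0.1/')
chime = URIRef('http://metacognition.info/profile/webwho.xrdf#chime')

graph = Graph()
graph.parse('http://metacognition.info/profile/webwho.xrdf')

#"Who are all the people Chime knows?" as a single triple pattern;
#None stands in for the unbound variable ?knownByChime
for s, p, knownByChime in graph.triples((chime, FOAF.knows, None)):
    print knownByChime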

So my conclusions?:

  • Blank Nodes are a necessary component in the model (and any persistence API) that unfortunately have much stronger semantics (existential quantification) than their most common use (as temporary identifiers)
  • The distinction between the way BNodes are most often used (as a syntactic shorthand for a single resource for which there is no known identity - at the time) and the formal definition of BNodes is very important to note - especially to those who are very much wed to their BNodes, as Shelley Powers has shown to be :).
  • Finally, BNodes emphatically do not make sense in the context of a query - since they become infinitely resolvable variables, which is not very useful. This confusion is further proof that (once again), for the sake of minimizing said confusion and misinterpretation of some very complicated axioms, there is plenty of value in parenthetically (if not logically) divorcing (pun intended) RDF model theoretics from the nuts and bolts of the underlying model

Chimezie Ogbuji

via Copia