Firing SAX events from a DOM tree in 4Suite

One nice thing about the briskly-moving 4Suite documentation project is that it is shining a clear light on places where we need to make the APIs more versatile. Adding convenience parse functions was one earlier result.

Saxlette has the ability to walk a Domlette tree, firing off events to a handler as if from a source document parse. This ability used to be too well hidden, though, so I made an API addition to make it more readily available: the new Ft.Xml.Domlette.SaxWalker. The following example should show how easy it is to use:

from Ft.Xml.Domlette import SaxWalker
from Ft.Xml import Parse

#Any well-formed XML document will do here; a trivial sample:
XML = "<doc><a/><b><c/></b></doc>"

class element_counter:
    def startDocument(self):
        self.ecount = 0

    def startElementNS(self, name, qname, attribs):
        self.ecount += 1

#First get a Domlette document node
doc = Parse(XML)
#Then SAX "parse" it
parser = SaxWalker(doc)
handler = element_counter()
parser.setContentHandler(handler)
#You can set any properties or features, or do whatever
#you would to a regular SAX2 parser instance here
parser.parse() #called without any argument
print "Elements counted:", handler.ecount

Again, Saxlette and Domlette are fully implemented in C, so you get great performance from the SaxWalker.

[Uche Ogbuji]

via Copia

Python/XML column #37 (and out): Processing Atom 1.0

"Processing Atom 1.0"

In his final Python-XML column, Uche Ogbuji shows us three ways to process Atom 1.0 feeds in Python. [Sep. 14, 2005]

I show how to parse Atom 1.0 using minidom (for those who want no additional dependencies), Amara Bindery (for those who want an easier API) and Universal Feed Parser (with a quick hack to bring the support in UFP 3.3 up to Atom 1.0). I also show how to use DateUtil and Python 2.3's datetime to process Atom dates.
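
As a small illustration of the date-handling part (this is a sketch of my own, not code from the column, and the sample timestamp is arbitrary), Python 2.3's standard library is enough to turn the common UTC form of an Atom 1.0 (RFC 3339) date into a datetime object:

import time
from datetime import datetime

def parse_atom_date(date_str):
    #Handles the plain UTC ("Z") form; fractional seconds and numeric
    #timezone offsets would need extra handling (or DateUtil)
    return datetime(*time.strptime(date_str, "%Y-%m-%dT%H:%M:%SZ")[:6])

print parse_atom_date("2005-09-14T12:00:00Z")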

As the teaser says, we've come to the end of the column in its present form, but it's more of a transition than a termination. From the article:

And with this month's exploration, the Python-XML column has come to an end. After discussions with my editor, I'll replace this column with one with a broader focus. It will cover the intersection of Agile Languages and Web 2.0 technologies. The primary language focus will still be Python, but there will sometimes be coverage of other languages such as Ruby and ECMAScript. I think many of the topics will continue to be of interest to readers of the present column. I look forward to continuing my relationship with the XML.com audience.

It is too bad that I won't get to some of the articles I had in the queue, including coverage of lxml, pygenx, XSLT processing from Python, the role of PEP 342 in XML processing, and more. I can still squeeze some of these topics into the new column, I think, as long as I keep the emphasis on the Web. I'll also try to keep up my coverage of news in the Python/XML community here on Copia.

Speaking of such news, I forgot to mention in the column that I'd found an interesting resource from John Shipman.

[F]or my relatively modest needs, I've written a more Pythonic module that uses minidom. Complete documentation, including the code of the module in 'literate programming' style, is at:

http://www.nmt.edu/tcc/help/pubs/pyxml/

The relevant sections start with section 7, "xmlcreate.py".

[Uche Ogbuji]

via Copia

Is RDF moving beyond the desperate hacker? And what of Microformats?

I've always taken a desperate hacker approach to RDF. I became a convert to the XML way of expressing documents right away, in 1997. As I started building systems that managed collections of XML documents, I was missing a good, declarative means for binding such documents together. I came across RDF, and I was sold. I was never really a Semantic Web head. I used RDF more as a desperate hacker with problems in a fairly well-contained domain. At that time the Sem Web aspirations behind RDF didn't get in the way too badly, so all was well for me. My desperate hacker mindset is probably best summarized in this XML-DEV message from May 2001.

I see RDF as an excellent modeling tool for closed systems. In my practice, most of the real "knowledge" is in the XML documents at the nodes, but RDF can provide important indexing and relationship expression between these nodes.

In that message I go on to expand on where RDF fits into the architecture of apps for my needs. I also mention a bit of wariness about how RDF's extravagant ambition (i.e. the Sem Web) could affect my simple, practical needs.

I quickly found out on www-rdf-logic that in the discussion there, the assumption appears to be that in the Semantic Web the RDF statements would carry a heavy burden of the "knowledge" in the system. I've started to think that this idea is a straw man set up by folks who would like RDF to be a full-blown knowledge-representation language, but if "strong RDF" is indeed a cog in the SW wheel, I fear I must excuse myself from contributing to that discussion because it places me immediately out of my depth.

I've spent a lot of time with RDF, and for a while it was a big part of our consulting practice, but recently applications architecture and schema design (RELAX NG mostly, thank goodness) have been the biggest part of the day job. Honestly, I started to lose touch with where RDF was going. I knew there were some common-sense fixes to bugs in the 1999 specs, but I also knew there were some worrying injections of Sem Web think into the model core. Recently I've had some opportunity to catch up. SPARQL just doesn't fit my head, so a few of us in the Versa 1.0 gang, including Mike Olson and Chimezie, have started work towards Versa 2.0. Mike and Chime have kept up with the state of RDF, and in several discussions I expressed what I felt were simple views of the RDF model and got in response what I thought were overblown claims about how the RDF model's semantics had been updated. In every case, when I checked the relevant parts of the latest RDF specs, I found that Mike and Chime were right, and that it was rather the RDF model itself that was overblown.

I've developed an overall impression of dismay at the latest RDF model semantics specs. I've always had a problem with Topic Maps because I think that they complicate things in search of an unnecessary level of ontological purity. Well, it seems to me that RDF has done the same thing. I get the feeling that in trying to achieve the ontological purity needed for the Semantic Web, it's starting to leave the desperate hacker behind. I used to be confident I could instruct people on almost all of RDF's core model in an hour. I'm no longer so confident, and the reality is that any technology that takes longer than that to encompass is doomed to failure on the Web. If they think that Web punters will be willing to make sense of the baroque thicket of lemmas (yes, "lemmas", mi amici docte) that now lie at the heart of RDF, or to get their heads around such bizarre concepts as assigning identity to literal values, they are sorely mistaken. Now I hear the argument that one does not need to know hedge automata to use RELAX NG, and all that, but I don't think it applies in the case of RDF. In RDF, the model semantics are the primary reason for coming to the party. I don't see it as an optional formalization. Maybe I'm wrong about that and it's the need to write a query language for RDF (hardly typical for the Web punter) that is causing me to gurgle in the muck.

Assuming it were time for a desperate hacker such as me to move on (and I'm not necessarily saying that I am moving on), where would he go from here? I hear the chorus: microformats. But I see nothing but nasty pricklies down that road. IMO microformats are now where RDF was back in 1999 (actually more like 1998) in terms of practical use to the Web, but in making their specification nothing but a few notes scribbled in a Wiki, they are purely syntactic, and offer no semantic anchor. As such, I'm not sure why it makes sense to think of microformats as different from XML ca. 1997. What's the news there? They certainly don't solve my desperate hacker need for indexing and expressing relationships across XML documents. I don't need the level of grounding that RDF seems to so slavishly be aiming for these days, but I need more than scattered Wiki notes.

GRDDL is the RDF community's bid to fix microformats up with some grounding. Funny thing is that in GRDDL they are re-discovering what the desperate hackers at Fourthought devised almost four years ago in "document definitions" to map XML syntax to RDF statements using XPath and XSLT. The desperate hacker in me feels at the same time vindicated, and left in the weeds. Sure GRDDL gets RDF some of what I've thought it's needed for ages, but it still does wed microformats to the present-day RDF model, which is just what I'm becoming uneasy about.

I'm more wandering around than getting anywhere in this entry, I freely admit. Working the grounding layer for XML is still what I consider to be my work of primary career interest. Lately, this work has led me more in the direction of schema annotations, as you can see in some of my recent articles on IBM developerWorks. Architectural forms are the closest thing the SGML old-heads gave us to syntax-semantic grounding (grounded to HyTime, of course), and AF were a creature of the schema. Perhaps it's high time we went back to learn that old-head lesson and quit fiddling around with brittle post-schema transformations.

As for the modeling system to use as the basis for grounding XML syntax, I don't know. I stick to RDF for now, but I'll have to see if it's possible to use it interoperably while still ignoring the more esoteric flourishes it's picked up lately. The Versa discussions at first gave me the impression that these flourishes are inevitable, but more recent threads have been a bit more encouraging.

I certainly hope that it doesn't take another rewind to RDF circa 2000 to satisfy the desperate hacker.

[Uche Ogbuji]

via Copia

Live Markdown Compilation via 4XSLT / 4Suite Repository

Related to Uche's recent entry about PyBlosxom + CherryPy, I recently wrote a 4XSLT extension that compiles a 4Suite Repository RawFile (which holds a Markdown document) into an HTML 4.01 document on the fly. I'm using it to host a collaborative markdown-based Wiki.

The general idea is to allow the Markdown document to reside in the repository and be editable by anyone (or specific users). The raw content of that document can be viewed at a different URL: http://metacognition.info/markdown-documents/RDFInterfaces.txt . That is the actual location of the file; the previous URL is actually a REST service, set up with the 4Suite Server instance running on metacognition, that listens for requests with a leading /markdown and redirects the request to a stylesheet that compiles the content of the file and returns an HTML document.

The relevant section of the server.xml document is below:

<Rule
  pattern='/markdown/(?P<doc>.*)'
  extra-args='path=/markdown-documents/\1'
  xslt-transform='/extensions/RenderMarkdown.xslt'/>

This makes use of a feature in the 4Suite Repository Server architecture that allows you to register URL patterns to XSLT transformations. In this case, all incoming requests for paths with a leading /markdown are interpreted as a request to execute the stylesheet /extensions/RenderMarkdown.xslt with a top-level path parameter which is the full path to the markdown document (/markdown-documents/RDFInterfaces.txt in this case). For more on these capabilities, see: The architecture of 4Suite Web applications.

The rendering stylesheet is below:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:md="http://metacognition.info/extensions/markdown.dd#"
    xmlns:exsl="http://exslt.org/common"
    version="1.0"
    xmlns:ftext="http://xmlns.4suite.org/ext"
    xmlns:fcore="http://xmlns.4suite.org/4ss/score"
    extension-element-prefixes="exsl md fcore"
    exclude-result-prefixes="fcore ftext exsl md xsl">
  <xsl:output
      method="html"
      doctype-public="-//W3C//DTD HTML 4.01 Transitional//EN"
      doctype-system="http://www.w3.org/TR/html4/loose.dtd"/>
  <xsl:param name="path"/>
  <xsl:param name="title"/>
  <xsl:param name="css"/>
  <xsl:template match="/">
    <html>
      <head>
        <title><xsl:value-of select="$title"/></title>
        <link href="{$css}" type="text/css" rel="stylesheet"/>
      </head>
      <xsl:copy-of select="md:renderMarkdown(fcore:get-content($path))"/>
    </html>
  </xsl:template>
</xsl:stylesheet>

This stylesheet makes use of a md:renderMarkdown extension function defined in the Python module below:

from pymarkdown import Markdown
from Ft.Xml.Xslt import XsltElement, ContentInfo, AttributeInfo
from Ft.Xml.XPath import Conversions
from Ft.Xml import Domlette

NS = u'http://metacognition.info/extensions/markdown.dd#'

def RenderMarkdown(context, markDownString):
    markDownString = Conversions.StringValue(markDownString)
    rt = "<body>%s</body>" % Markdown(markDownString)
    dom = Domlette.NonvalidatingReader.parseString(str(rt), "urn:uuid:Blah")
    return [dom.documentElement]

ExtFunctions = {
    (NS, 'renderMarkdown'): RenderMarkdown,
}

Notice that the stylesheet allows for the title and css to be specified as parameters to the original URL.
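
For example, a request along these lines (the query values here are purely hypothetical) would feed the title and css top-level xsl:param elements in the stylesheet above:

http://metacognition.info/markdown/RDFInterfaces.txt?title=RDF%20Interfaces&css=/markdown.css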

The markdown compilation mechanism is none other than the pymarkdown.py used by Copia.

For now, the Markdown documents can only be edited remotely by editors that know how to submit content over HTTP via PUT, and that can handle HTTP authentication challenges if met with a 401 for a resource in the repository that isn't publicly available (in this day and age it's a shame there are only a few such editors; the one I use primarily is the Oxygen XML Editor).
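
For the curious, here is a rough sketch of what such an editor does under the hood, in Python; the credentials, the content type, and the assumption that the repository accepts a plain PUT like this are mine, not part of the actual setup:

import base64, httplib

#Hypothetical credentials -- substitute real ones
auth = "Basic " + base64.encodestring("user:password").strip()

body = open("RDFInterfaces.txt").read()
conn = httplib.HTTPConnection("metacognition.info")
conn.request("PUT", "/markdown-documents/RDFInterfaces.txt", body,
             {"Authorization": auth, "Content-Type": "text/plain"})
response = conn.getresponse()
print response.status, response.reason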

I hope to later add a simple HTML-based form for live modification of the markdown documents which should complete the very simple framework for a markdown-based, 4Suite-enabled mini-Wiki.

Chimezie Ogbuji

via Copia

Itinerant Binds - Better Software Documentation

It was brought to my attention that my recent entry about Sparta/Versa/rdflib possibilities was a little vague/unclear. This tends to happen when I get caught up in an interest. Anyway, I renamed the module to Itinerant Binds (I liked the term) and created a page on Metacognition for the recent rdflib/4Suite RDF work I've been doing, with some more details on how the components work. I added an example that better demonstrates isolating RDF resources through targeted Versa queries and using the bound Python result objects to modify / extend the underlying graph.

Chimezie Ogbuji

via Copia

PyBlosxom...CherryPy...hmmm

I have so much hacking to do on Copia's engine that it puts me off doing anything at all. The atom rendering bug I stumbled upon yesterday especially needs a look-in, but I suspect it would take a move from flavor to plug-in to fix it. I'm just honestly not all that bullish about hacking PyBlosxom right now. Not while I've been having so much fun with CherryPy lately.

A while ago Bill Mill said in a comment here:

I'm about halfway done with a "pyblosxom in cherrypy" thing I've been working on. It works, reads all my pyblosxom blogs, and can leave comments. I just need to refactor it to be more sensible and plugin-oriented.

That's the sort of sign I need in order to hang on, but I do hope Bill works his way through the remaining half soon enough.

Speaking of CherryPy, a recently spotted Cookbook entry: "A simple integration of a CherryPy web server, using Quixote template publishing, managed in its own thread."

[Uche Ogbuji]

via Copia

Quotīdiē

I don't know her name, but she works for MSNBC. My apologies for my wordage, but this wench didn't know what the hell was going on. She made up 75% of what she was saying and exaggerated about 95% of everything that she did know. The message: do you want to be a reporter? All you need to do is have a pretty face and buy a Thesaurus!

From Alvaro R. Morales Villa's amazing photo diary (via Eve Maler).

The caption for the next picture is also telling:

Mr. Brian Williams... you know, I've always been a fan of news reporters. After this "event", however, I'm a lot more skeptical about what they say. In this photo he had just gotten into an argument with the lady in the light blue shirt. She couldn't find out if West Esplanade Avenue (which is in Metairie) and Esplanade Ave. (which is in the French Quarter) were the same.

The U.S. press has become an institution that is completely useless in its complacency and venality. Besides the widespread bungling that is wryly noted in these captions, what amazes me is that the U.S. loves to lecture other countries about freedom of the press, and yet it's widely admitted that the U.S. press never found the backbone to criticize the current government until Katrina.

Much thanks to Alvaro for the pluck and resourcefulness to document all that he did, and for sharing it so generously with us (minor note: grand theft auto is not usually just "a minor misdemeanor", although extenuating circumstances in this case might possibly make up that difference).

[Uche Ogbuji]

via Copia

Convenience APIs for 4Suite Domlette parsing

I added some functions that make Domlette parsing a baby step. I call these the brain-dead APIs (though we use more decorous terms officially). You can now get yourself a crisp DOM with as little effort as:

>>> from Ft.Xml import Parse
>>> doc = Parse("<hello>world</hello>")

And thence on to the usual fun stuff:

>>> print doc.xpath(u"string(hello)")
world

Parse also knows how to handle streams (file-like objects):

>>> doc = Parse(open("hello.xml"))

Do a help(Parse) to get a warning about using that function with XML that is not self-contained, and an example of how to parse such XML properly in 4Suite.
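
As a rough sketch of one way to handle that case (the details help(Parse) gives may differ; the file path and base URIs below are placeholders), you can drop down to the Domlette reader and supply the base URI yourself:

from Ft.Xml import Domlette

#Parse from a URI so that relative references resolve against it
doc = Domlette.NonvalidatingReader.parseUri("file:///home/me/hello.xml")

#Or parse a string while still supplying an explicit base URI
doc = Domlette.NonvalidatingReader.parseString(
    "<hello>world</hello>", "http://example.com/hello.xml")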

There is also ParsePath, which handles files paths and URLs:

>>> from Ft.Xml import ParsePath
>>> doc = ParsePath("hello.xml")
>>> doc = ParsePath("http://copia.ogbuji.net/blog/index.atom")

And what do you know? That last URL does not parse. My feed is ill-formed. ☠☠☠☠☠ (I love cursing in Unicode) Sigh. Gotta fix the PyBlosxom Atom generation. Maybe we need a touch of 4Suite (Amara, actually) there, as well.

[Uche Ogbuji]

via Copia

Kumite! Python vs. Javascript! script vs. XForms! declarativity vs. wizards!

Sylvain's question about Javascript-without-tears certainly kicked off a chain of interesting dialogue for me. It also happens to dovetail with some interesting dialogue elsewhere that started independently.

Kurt had an interesting take in his follow-up "Javascript and Python"

To be blunt, I would prefer to see a Python interpreter being standard issue in all browsers in addition to a Javascript one, as I believe the language to be much more expressive, more object oriented, more secure, easier to write, and in general better suited to contemporary needs. Unfortunately, browsers in particular are informed as much by business and political decisions, some, perhaps even most, based less upon the best technology for a problem and more based upon what will provide the best backward compatibility to insure that existing websites do not break or that download sizes remain under some critical value.

I like Python as well, but boy do I shudder to imagine the political conflagration that would ensue if Python were elevated in Web programming over peers such as Perl and Ruby. Javascript appears to have carved out a niche as the Switzerland of dynamic languages.

I would like to see diversity in Web scripting languages, so that others have choices besides Javascript, and as Kurt says, Mozilla does seem to be taking the lead in this. It's really cool to see the Mozilla Wiki entry "Breaking the grip JS has on the DOM". And you gotta love the opening lines:

We want to change the grip JS has on the DOM and on XUL. We will do this in 2 steps:

Ideally, the first step could be done without consideration for the second, in the assumption that the implementation should be truly language neutral.

But regardless of scripting language, there is the fact that XForms is out there looking to take on the very need for scripting in many of the browser use cases. Kurt says:

I am a major advocate of XML, precisely because it is much more difficult to isolate a language when a mapping is essentially an XSLT transformation away. For this reason, XForms is a very attractive model to be moving towards, certainly, and I look forward to the day that I can build XForms applications that work in all browsers equally. However, XForms is not even remotely widely implemented yet nor are there standard forms of declarative binding languages along the lines of XBL (sXBL is getting there, but so far there are perhaps two still very much test implementations in existence).

Chime predictably takes exception to this (in a response to Kurt). He's been a very involved early adopter of XForms, and I've been amazed to see how productive XForms has made him, so I take his point very seriously when he says:

I beg to differ. Though it may be true that it doesn't quite have the traction that Flash has at the moment, I wouldn't go as far as saying it isn't remotely, widely implemented. There are several very mature implementations...
[...] [re: eliminating the need for an imperative language] Once again, I have yet to find myself in a situation where I needed javascript for UI-related capabilities that weren't covered by XForms event processing, instance binding, and other such [programmatic] components. The only time I did was when I had to encode XML content as base64 encoded binary (see: http://copia.ogbuji.net/blog/2005-08-19/BinaryEncodingAndXMLRPCs), and that had little to do with XForms but more with the means of remote communication (SOAP). I'm not suggesting that frameworks such as XForms will eliminate the need for an imperative language, but rather that the need will be more like the reverse of your 80/20 proposition.

To be fair, I think Kurt was saying that script is only needed for the 20% case, and not the 80% case. He just felt that declarative solutions architecture "is hideously inappropriate for the remaining 20%." I do agree with that, but I think that people (not Kurt) tend to exaggerate this fact as an argument against declarative programming.

Chime wraps up:

I must give the disclaimer that I'm not suggesting XForms will be a user interface / browser-based application building [panacea], but rather that the potential it has to eliminate the unportable, architecturally unsound code that often drive DHTML web-sites with minimal complexity is very much overlooked.

Based on what I've seen of XForms, I tend to agree. I actually think Chime and Kurt are more in agreement than they sound, except maybe on the matter of the maturity of XForms engines.

At almost the same time another script versus XML exchange was going on in Mark Birbeck's blog, and in particular "On Adobe and XForms via Declarative Programming, Wizards and Aspects". That article is well worth reading in its entirety, but I'll highlight what he says about Wizards:

...in nearly all cases I find the 'wizard approach' is great to get you started, but then very quickly gets complex again. Anyone who uses Microsoft's Visual Studio, for example, will know that getting a C++ application up and running quickly, with support for multiple windows, toolbars, printing and file saving, is a snip. But then when you want to modify that code and move away from the wizard, you are very soon into normal C++ territory.

I tend to put this point even more strongly, as I do in my article "The worry about program wizards", but I'm always happy to have it reinforced.

In some ways I see wizards as the shoddy high street knock-off of declarative systems. Well designed declarative systems fully encapsulate modal aspects of the application in development, and they expose slots for ready extension in imperative implementation, if needed. Wizards, on the other hand, do focus on parameters in a way tantalizingly like declarative systems, but then ruin the entire plot by handing the programmer a hairball of imperative code that they have to hack at arbitrarily in order to complete the application. It's the difference between just plugging a device into a USB port to add capability to your PC, rather than having the motherboard thrust in your face so that you can find the right place to solder in the leads.

Chimezie Ogbuji

via Copia

RDF-API: Reconciling the redundancy in pythonic RDF store implementations

I just wrapped up the second of two rdflib-related libraries I wrote with the aim of bridging the gap between rdflib and 4Suite RDF. The latter (BoundVersaResult.py) is a little more interesting than the former in that it uses Sparta to allow the distinct components of a Versa query result to each be bound to appropriate python objects. 4Suite RDF's Versa implementation already provides such a binding:

  • String -> Python unicode
  • Number -> Python float
  • Boolean -> Python boolean
  • List -> Python list
  • Set -> Python Sets
  • Resource/BlankNodes -> Python unicode

The bindings for all the datatypes except Resource/BlankNodes are straightforward. This library extends the datatype binding to include the ability to bind Sparta Things to Versa Resources and BlankNodes. Since Sparta only works with rdflib Graphs, the FtRdfBackend.py module is used to wrap an rdflib.Graph around a 4Suite Model.

Sparta takes an RDF Graph and a defining Ontology, which dictates the cardinality of properties bound to resource objects (Things). It allows an RDF Graph to be traversed (and extended) via pythonic idiom. The combination of being able to isolate resources by Versa query (or SPARQL queries eventually, as soon as the ongoing rdflib effort in that regard is completed) and bind them to Python objects whose properties reflect the properties of the underlying RDF resources is very cool, IMHO. The ability to provide an implementation-agnostic way to modify an RDF graph, using a host language as expressive as Python, is the icing on the cake. For example, check out the following code snippet demonstrating the use of this library:

#Imports assumed by this snippet (the Ft.Rdf module paths are my best
#reconstruction; VersaThingGenerator comes from the BoundVersaResult.py
#module described above)
import urllib2
from pprint import pprint
from Ft.Rdf import Model
from Ft.Rdf.Drivers import Memory
from Ft.Rdf.Serializers import Dom
from Ft.Xml import Domlette
from BoundVersaResult import VersaThingGenerator

#Setup FtRDF Model
Memory.InitializeModule()
db = Memory.GetDb('', '')
db.begin()
model = Model.Model(db)

#Parse my del.icio.us rss feed
szr = Dom.Serializer()
delUri = "http://del.icio.us/rss/chimezie/academic+rdf"
domStr = urllib2.urlopen(delUri).read()
dom = Domlette.NonvalidatingReader.parseString(domStr, 'http://del.icio.us/rss/chimezie')
szr.deserialize(model, dom, scope=delUri)

#Setup rdflib.Graph with FtRDF Model as backend, using the FtRdf driver
generator = VersaThingGenerator(model)
for item in generator.query("type(rss:item)"):
    [pprint(link) for link in item.rss_link]
    print generator.query("distribute(@'%s','.-rss:title->*','.-dc:subject->*')" % item._id)[0]

Note that, within the loop over the rss:items in the graph, the rss_link property returns an iterator over the possible values, since there is no defining ontology that could have specified that the rss:link property has a cardinality of 1 or is an inverse functional property, either of which would have caused Sparta to bind the rss_link property to a single object instead of an iterator.

The result of running this code:

u'http://lists.w3.org/Archives/Public/public-rdf-dawg/2004JulSep/0069'
[[u'More on additional semantic information from Enrico Franconi on 2004-07-12 (public-rdf-dawg@w3.org from July to September 2004)'], [u'academic architecture archive community dawg email logic query rdf reference semantic']]
u'http://www.w3.org/TR/swbp-specified-values/'
[[u'Representing Specified Values in OWL: "value partitions" and "value sets"'], [u'academic datatypes ontology owl rdf semantic standard w3c']]
u'http://lists.w3.org/Archives/Public/public-rdf-dawg/2005JulSep/0386.html'
[[u'boolean operators and type errors from Jeen Broekstra on 2005-09-07 (public-rdf-dawg@w3.org from July to September 2005)'], [u'academic architecture archive community dawg email logic rdf reference semantic w3c']]
u'http://www.w3.org/DesignIssues/Diff'
[[u'RDF Diff, Patch, Update, and Sync -- Design Issues'], [u'academic paper rdf semantic standards tbl w3c']]
u'http://www.w3.org/TR/rdf-dawg-uc/'
[[u'RDF Data Access Use Cases and Requirements'], [u'academic architecture framework query rdf reference semantic specification standard w3c']]
u'http://www.w3.org/DesignIssues/RDB-RDF'
[[u'Relational Databases and the Semantic Web (in Design Issues)'], [u'academic architecture framework rdb rdf reference semantic tbl w3c']]
u'http://www.w3.org/TR/swbp-n-aryRelations/'
[[u'Defining N-ary Relations on the Semantic Web: Use With Individuals'], [u'academic logic ontology owl predicate rdf reference relationships semantic standard w3c']]

Chimezie Ogbuji

via Copia