2006-06-14

XTech 2006 - Wednesday

Wednesday morning I had a cold and was late getting up, so missed Paul Graham's keynote on startups.

SQL, XQuery, and SPARQL: What's Wrong With This Picture? covered the lack of any real difference in power between SPARQL and recursive SQL. This means you can either use existing optimised SQL databases to store and process RDF triples, converting SPARQL to SQL3 as required, or use RDF at the perimeter and SQL internally.
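
I didn't note the exact mapping presented, but the gist as I understood it is easy to sketch: keep the triples in a three-column table and compile each basic graph pattern into a self-join, leaving SQL3's recursive queries to pick up the transitive cases. A toy version - the triples(s, p, o) table and the example query are invented for illustration:

  // A toy compiler from a SPARQL basic graph pattern to SQL over a
  // hypothetical triples(s, p, o) table: terms starting with "?" are
  // variables, everything else is a constant.
  type Term = string;
  type TriplePattern = [Term, Term, Term];

  function sparqlBgpToSql(patterns: TriplePattern[]): string {
    const isVar = (t: Term) => t.startsWith("?");
    const bindings = new Map<string, string>();   // variable -> first column that binds it
    const where: string[] = [];
    const from = patterns.map((_, i) => `triples t${i}`).join(", ");

    patterns.forEach((pattern, i) => {
      ["s", "p", "o"].forEach((col, j) => {
        const term = pattern[j];
        const column = `t${i}.${col}`;
        if (isVar(term)) {
          const seen = bindings.get(term);
          if (seen) where.push(`${column} = ${seen}`);   // shared variable: join condition
          else bindings.set(term, column);
        } else {
          where.push(`${column} = '${term}'`);           // constant: filter
        }
      });
    });

    const select = [...bindings.entries()]
      .map(([variable, column]) => `${column} AS ${variable.slice(1)}`)
      .join(", ");
    return `SELECT ${select} FROM ${from}` +
      (where.length ? ` WHERE ${where.join(" AND ")}` : "");
  }

  // { ?person ex:worksFor ?org . ?org ex:basedIn "Amsterdam" }
  console.log(sparqlBgpToSql([
    ["?person", "ex:worksFor", "?org"],
    ["?org", "ex:basedIn", "Amsterdam"],
  ]));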

(* TME: Since SPARQL update seems to be a long time coming, and the restrictions imposed by triples make even simple context problems more complex, I'm not sure there is ever going to be any merit in RDF. I overheard someone from Sesame saying they are moving to quads, because of the many use cases that triples don't meet (I think the tipping point would be hepts, but that's more anthropomorphic than graph-theoretical). Similarly others have suggested named graphs, eg TriX, as a solution to the context problem in RDF. Currently I haven't seen anything that would indicate a REST-based, plain ol' XML interface onto an SQL database wouldn't be better for anything with context, uncertainty or transitive relations. The better RDF applications all seem only to deal with information that is equally trusted and true. *)
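
For what it's worth, what the quads people are after is just one extra slot to carry context; roughly (the URIs are invented):

  // A plain RDF triple has nowhere to hang context (source, time, trust):
  type Triple = [s: string, p: string, o: string];

  // A quad - or equivalently a triple in a named graph, as in TriX - adds one slot:
  type Quad = [s: string, p: string, o: string, graph: string];

  const claim: Quad = [
    "ex:alice", "ex:worksFor", "ex:acme",
    "ex:graphs/hr-feed-2006-06",   // which source asserted it, and when
  ];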

Michael Kay spoke on Using XSLT and XQuery for life-size applications. Speaking of using fixed schemas to validate human data - for example an address format that insists on a house number, forcing the operator to massage the data to fit - he observed that 'Integrity rules mean you only accept data if it's false.'
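
The address example translates into code almost directly - a hypothetical over-strict rule, and the lie it forces:

  // Hypothetical over-strict integrity rule: an address must start with a house number.
  const addressRule = /^\d+\s+\S.*/;

  addressRule.test("12 Acacia Avenue");          // true  - fits the schema
  addressRule.test("Rose Cottage, Mill Lane");   // false - real address, rejected
  addressRule.test("1 Rose Cottage, Mill Lane"); // true  - operator invents a "1" to get past the form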

He tends to observe the documents which are collected in the business process, rather than trying to get the experts to create an abstract model. For example, an employee's record is a collection of completed forms. So if you want the current employee state, query the history and make a summary taking the most recent values.
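
In other words the current state is just a fold over the completed forms; roughly (field names invented):

  // Each completed form carries a timestamp and whatever fields it happened to update.
  interface Form {
    completedAt: string;               // ISO date, e.g. "2006-03-01"
    fields: Record<string, string>;    // e.g. { grade: "Senior", office: "Leeds" }
  }

  // Summarise an employee's record by taking, for every field, the most recent value.
  function currentState(history: Form[]): Record<string, string> {
    return history
      .slice()
      .sort((a, b) => a.completedAt.localeCompare(b.completedAt))
      .reduce<Record<string, string>>((state, form) => ({ ...state, ...form.fields }), {});
  }

  const employee = currentState([
    { completedAt: "2004-09-01", fields: { grade: "Junior", office: "Leeds" } },
    { completedAt: "2006-03-01", fields: { grade: "Senior" } },
  ]);
  // => { grade: "Senior", office: "Leeds" }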

Applications are built by composing pipelines which transform these documents. Using automatically generated schemata to describe the intermediate stages in these pipelines gives regression tests.
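
The shape of such a pipeline is something like this - the stage names and checks here are placeholders for validation against the generated schemata; re-run the same inputs after a change and compare the intermediates, and you have your regression test:

  // A stage transforms a document; a check validates the intermediate result
  // (standing in for validation against an automatically generated schema).
  type Doc = unknown;
  interface Stage {
    name: string;
    transform: (doc: Doc) => Doc;
    check: (doc: Doc) => boolean;
  }

  function runPipeline(stages: Stage[], input: Doc): Doc {
    return stages.reduce((doc, stage) => {
      const out = stage.transform(doc);
      if (!stage.check(out)) {
        throw new Error(`Stage "${stage.name}" produced an invalid intermediate document`);
      }
      return out;
    }, input);
  }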

(* I like Michael's anthropological approach, and imagine that it would build applications that augment the current process, rather than trying to impose an idealised process - which is what some managers attempt to do. *)

Next up, Jonas Sicking of Mozilla talked about XBL2, a revision of Mozilla's XML binding language.

Some of the improvements are tidier means of binding XML (the original XBL is somewhat polluted from being an RDF binding language), support for better separation of implementation and presentation, richer binding mechanisms and plugin security improvements.

(* All of these seem good, but they don't address the three show-stopping problems I've had with XBL, which have meant I've had to stop using it everywhere I'd like to:

  • The binding of methods and properties is asynchronous, so you can't easily create an element with an XBL binding, then call Javascript methods on it.

  • The method and property bindings only exist when the element is in the document tree. This complicates Undo and Redo, as removing or re-adding an element changes its interface.

  • When using XBL with SVG, the same data is sometimes presented in multiple layers, so there isn't a single point at which to insert content.

Everywhere I've hit the third point, I've had to move away from XBL; most times I've hit the second, XBL got too painful and I've moved away. I'm still thinking about moving away from XBL for some of the first case, but probably will have done so by the time I'm selling software based on XUL, as intermittent glitches caused by races are not a good user experience. *)
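
To give a flavour of the first problem: because the binding attaches asynchronously, you can't just create the element and call the bound method, so you end up with a workaround of this sort - the binding URL and the setValue method are invented:

  // The element is created and inserted, but its XBL binding attaches asynchronously,
  // so the bound method may not exist yet; poll until it appears.
  function callWhenBound(el: Element, method: string, args: unknown[] = []): void {
    const attempt = () => {
      const fn = (el as any)[method];
      if (typeof fn === "function") {
        fn.apply(el, args);
      } else {
        setTimeout(attempt, 10);   // binding not attached yet - try again shortly
      }
    };
    attempt();
  }

  const widget = document.createElementNS("http://www.w3.org/1999/xhtml", "div");
  widget.setAttribute("style", "-moz-binding: url(widgets.xml#gauge);"); // hypothetical binding
  document.body.appendChild(widget);
  callWhenBound(widget, "setValue", [42]);   // hypothetical bound method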

Ralph Meijer (who was also at BarCamp) allegedly spoke on Publish-subscribe using Jabber. Or maybe he spoke on publish-subscribe using XMPP, since there are more applications than Jabber. This was interesting inasmuch as it showed how much take-up there has been so far - not much, though some other people have noticed that it has potential for distributed computing, for human interaction, for location-based systems, and for hybrid networks. It's been mooted that in the internet of things sensor nets will dominate over comms nets; but already we communicate by publishing our availability and playlists - your iPod is another sensor measuring your mood; a keyboard/mouse activity monitor on your PC is sensing your presence; pushing smarter sensors to the periphery means it's all the same pub-sub tuple space.
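
None of this needs real XMPP to show the shape of it; a toy in-process broker makes the point (node names and payloads invented):

  // A toy publish-subscribe space: sensors publish tuples to named nodes,
  // and anything interested subscribes.
  type Listener = (payload: unknown) => void;

  class PubSub {
    private nodes = new Map<string, Listener[]>();

    subscribe(node: string, listener: Listener): void {
      const listeners = this.nodes.get(node) ?? [];
      listeners.push(listener);
      this.nodes.set(node, listeners);
    }

    publish(node: string, payload: unknown): void {
      for (const listener of this.nodes.get(node) ?? []) listener(payload);
    }
  }

  const space = new PubSub();
  space.subscribe("presence/tim", (p) => console.log("presence:", p));

  // The iPod-as-mood-sensor and the keyboard-as-presence-sensor look the same here:
  space.publish("playlist/tim", { nowPlaying: "Boards of Canada" });
  space.publish("presence/tim", { active: true, since: "09:12" });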


Vladimir Vukićević spoke about canvas and SVG; this was more a case of putting a face to the implementer, since I've been playing with canvas for nearly a year, and SVG since last millennium. But all good, and there will also be OpenGL bindings for canvas in the future.


Henry S. Thompson, whose wisdom I've often enjoyed on xml-dev, gave a seminar on Efficient implementation of content models with numerical occurrence constraints. The approach transforms schemata into finite state machines, then adds counters which are incremented or reset on transitions, with maximum and minimum value guards. These let you handle large numerical occurrence constraints without generating a very large number of states. Of course, the problem simply doesn't arise if you don't use a state machine for schema validation, which got me thinking, and I ended up writing another little packrat parser. I'm sure you could create a packrat schema validator that wouldn't suffer the state explosion problem.
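
A rough sketch of the counting idea as I understood it - my own toy content model (a, b{2,4}, c), validated with a single counter and min/max guards rather than four expanded states for b:

  // Validate the content model (a, b{2,4}, c) with one counter instead of
  // expanding b{2,4} into separate states.
  function validate(children: string[]): boolean {
    let state: "expectA" | "inB" | "done" = "expectA";
    let count = 0;                            // occurrences of b seen so far

    for (const name of children) {
      if (state === "expectA" && name === "a") {
        state = "inB"; count = 0;             // entering b{2,4}: reset the counter
      } else if (state === "inB" && name === "b" && count < 4) {
        count++;                              // guard: stay under the maximum
      } else if (state === "inB" && name === "c" && count >= 2) {
        state = "done";                       // guard: minimum reached before leaving
      } else {
        return false;                         // no transition available
      }
    }
    return state === "done";
  }

  validate(["a", "b", "b", "c"]);                  // true
  validate(["a", "b", "c"]);                       // false - only one b
  validate(["a", "b", "b", "b", "b", "b", "c"]);   // false - five b's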

I hung around at the Mozilla reception for a bit, but was tired and cold-y, so went to bed early.

Thus ended the second day.


TME
