Dual-audience Data
© 4 November 2021 Luther Tychonievich
Licensed under Creative Commons: CC BY-NC-SA 3.0

Talking to computers and humans at the same time.


A book I enjoy reading is Sir Pelham Grenville Wodehouse’s Leave It To Psmith. Or should I say “Leave It To Psmith by P.G. Wodehouse”? I assume that you, my English-fluent human reader, are fully capable of understanding that both of those ways of wording it refer to the same book by the same author. I also assume you can see a later reference like

In that book, Wodehouse communicates through humor that crime and laziness are superior to order and productivity, provided they are executed cheerfully and in the pursuit of relations with a stranger of great beauty.

and understand “that book” to be another reference to P.G. Wodehouse (1923) Leave It To Psmith. There are many ways I can reference this same book, and while I’ve intentionally used too many for comfortable reading in this post so far, if I picked just one and used it in every instance my writing would come off as stilted and unpleasant.

By contrast, a computer benefits from having each reference to a thing explicitly marked, and marked the same way every time. But pure data languages are hard to use in human communication, so semantic markup was born. The basic idea is to interleave semantic markers with the text and rely on a display tool to hide those markers from the human reader. For example, I could write

<q vocab="http://schema.org" typeof="Book"><em property="name">Leave It To Psmith</em> by <span property="author">P.G. Wodehouse</span></q>

and most web browsers will display it as

“Leave It To Psmith by P.G. Wodehouse”

but any system looking for bibliographic data using RDFa and the schema.org vocabulary will understand its meaning (see the sketch of the extracted statements below). However, this approach limits both the text (for example, it makes it much harder to interleave references like “Leave It To Psmith and House At Pooh Corner by Wodehouse and Milne, respectively”) and the data (the structure of the data is constrained by the flow of the text).
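As a sketch, the statements an RDFa-aware tool would extract from that markup look roughly like the following, written here as Python tuples; the blank-node label _:book is illustrative, not anything the markup specifies:

# Roughly the subject-predicate-object triples an RDFa processor would
# derive from the <q> markup above; the property IRIs follow from
# vocab="http://schema.org".
triples = [
    ("_:book", "rdf:type",                 "http://schema.org/Book"),
    ("_:book", "http://schema.org/name",   "Leave It To Psmith"),
    ("_:book", "http://schema.org/author", "P.G. Wodehouse"),
]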

A related approach to mixing human- and computer-centric information provides the full bibliographic data in a computer-centric form and then uses lightweight references to that data in the text markup; this is more succinct and flexible, but also makes it easier for the text and the semantics to drift out of sync. We could also keep the data in a computer-centric form and write the document in a template language, with behind-the-scenes text like “I enjoy reading <insert data="book1" form="full"/>” and a pre-processing step that replaces that markup with the appropriate values from the data; and so on.
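To make the template idea concrete, here is a minimal sketch in Python, assuming the <insert/> markup above and a made-up data store; the identifier book1 and the set of forms are illustrative, not any real tool’s interface:

import re

# Hypothetical computer-centric data store; "book1" matches the made-up
# reference in the template text above.
books = {
    "book1": {"title": "Leave It To Psmith", "author": "P.G. Wodehouse"},
}

def render(book, form):
    """Render one bibliographic reference in the requested form."""
    if form == "full":
        return f"“{book['title']}” by {book['author']}"
    if form == "title":
        return f"“{book['title']}”"
    return book["author"]  # fall back to an author-only reference

def preprocess(text):
    """Replace each <insert .../> marker with text rendered from the data."""
    def expand(match):
        return render(books[match.group(1)], match.group(2))
    return re.sub(r'<insert data="([^"]+)" form="([^"]+)"/>', expand, text)

print(preprocess('I enjoy reading <insert data="book1" form="full"/>.'))
# prints: I enjoy reading “Leave It To Psmith” by P.G. Wodehouse.

Because every wording is derived from the same underlying record, the text and the data cannot drift out of sync, at the cost of an extra processing step and a less readable source document.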

As with so many things in life, there is no perfect solution to the challenge of communicating clearly enough for a computer and naturally enough for a human. Techniques that handle more cases also add more complexity, leading to more data entry errors and impeding implementation.

As a result, each community of users will tend to gravitate toward its own solution, one that minimizes effort and complexity subject to the specific functionality it needs. Nor does it stop there: maintaining the tooling for a custom solution is expensive, so eventually bespoke solutions are replaced by better-supported, more general solutions. But that’s also not the end: general solutions are clunky for daily use, so communities start adding customizations and wrappers and shortcuts until their version of the general tool is effectively its own bespoke solution. Eventually this bespoke variant of the general tool becomes more valuable to the community than the next update of the general tool itself, becoming a community-specific solution once again and restarting the cycle.



