UK Open Standards Consultation

Apr 14, 2012

Over the last few months, the UK Government has been running a consultation on its Open Standards policy. The outcome of this consultation is incredibly important not only for organisations and individuals who want to work with government but also because of its potential knock-on effects on the publication of Open Data and the use of Open Source software within public sector organisations.

Unsurprisingly, Microsoft, Qualcomm and other organisations with a vested interest in keeping the UK Government locked in to their products are responding vociferously to the consultation. They risk losing business to smaller enterprises not only within the UK but also, if the policy is successfully adopted here, in other countries in Europe and beyond that follow suit.

If we want our Government to be Open – to use Open Standards, to publish Open Data, to adopt Open Source – then we must respond to this consultation in numbers.

There are three things that you can do:

  1. Respond to the consultation – made even easier by this response form developed by Ric Harvey
  2. Attend the events – these seem pretty full now, but try to get in if you can
  3. Spread the message – blog and tweet and write to raise awareness of the importance and impact that this consultation could have

Content and Descriptions of Web Resources

Mar 31, 2012

Readers who have been following the TAG or public-lod mailing lists over the last couple of weeks cannot have failed to notice a large number of posts on a theme that recurs on roughly a nine-monthly cycle within these communities: httpRange-14.

The reason for this particular recurrence was a Call for Change Proposals on the resolution. The TAG meets on Monday, and discussion of this issue is one of the first items on our agenda. These are my thoughts going in to that discussion.

Precious Snowflakes

Mar 10, 2012

Disclaimer: As usual, this post contains my personal opinion and does not reflect that of any organisation with which you might associate me.

The other day, I had a lovely conversation with some folks from the BBC about some of their future plans. In the course of the conversation, Michael Smethurst spoke about his frustration when dealing with people involved with particular programmes at the BBC, where every single one of them thinks their programme is a “precious snowflake”: completely unique, and simply unable to be treated in the same way as all the other programmes described on the site.

Michael’s point, of course, is that TV programmes have a hell of a lot of similarities with each other. They all have episodes and cast members and may have trailers or be available on iPlayer. When the BBC models them in the same way, they gain enormous efficiencies in their ability to store and access information about programmes: they can reuse code, share content between programmes, and perform analyses over the aggregated data set. It’s great for users too: they get the same fantastic user experience no matter which programme they are viewing information about, and can apply the experience they gain when navigating pages about one programme when they need to find information about another.

The ability to classify and categorise, to bring order to what seems like chaos, to create a model of the world, is one of the things that marks humans from animals. We can look at a hundred people, with different colour hair and skin; different height and build; smiling, talking, crying, and still call them all Person because the essential characteristics that govern how we interact with them are the same.

But if there’s one thing that the last five long, hard years working with legislation has taught me, it’s that in any vaguely interesting domain, this search for order will always fall down in the face of reality.

Microdata and RDFa Living Together in Harmony

Aug 20, 2011

One of the options that the TAG put forward when it asked the W3C to put together a task force on embedded data in HTML was the co-existence of RDFa and microdata. If that’s what we’re headed for, what might make things easier for consumers and publishers who have to live in that world?

In a situation where there are two competing standards, I think that developers – both on the publication and consumption sides – are going to want to hedge their bets. They will want to avoid being tied to one syntax in case it turns out that that syntax isn’t supported by the majority of publishers/consumers in the long term and they have to switch.

Publishers like us at legislation.gov.uk, who aim to share their data with anyone interested in it (rather than having a particular consumer in mind), are also likely to want to publish in both microdata and RDFa rather than force potential consumers to adopt a particular processing model, and will therefore need to mix the syntaxes within their pages.
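As a sketch of what such mixing might look like (the vocabulary, type and property names below are purely illustrative, and not taken from legislation.gov.uk), the same elements can carry both the microdata attributes (itemscope, itemtype, itemprop) and the RDFa Lite attributes (vocab, typeof, property):

```html
<!-- Hypothetical snippet: one item marked up in both syntaxes at once.
     A microdata parser reads itemscope/itemtype/itemprop; an RDFa
     processor reads vocab/typeof/property; each ignores the other's
     attributes. -->
<div itemscope itemtype="http://schema.org/TVEpisode"
     vocab="http://schema.org/" typeof="TVEpisode">
  <h1 itemprop="name" property="name">An Example Episode</h1>
  <p>Episode <span itemprop="episodeNumber" property="episodeNumber">3</span></p>
</div>
```

The obvious cost of this approach is that every annotation is duplicated, and subtle differences between the two processing models mean the two interpretations of a page need checking against each other.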

(Of course developers might just avoid embedded data altogether while they wait to see what happens, but let’s assume that they want to press ahead regardless of the lack of consensus from the standardistas.)

I’ve therefore embarked on a task of trying:

  • to identify the differences in approach and functionality of the two languages, which should help developers choose between them
  • to identify any guidelines for developers of vocabularies for use with both languages
  • to identify a subset of functionality that is common to the two languages, which developers might want to stick to in order to make switching and mixing easier
  • to identify mapping rules that could be applied, automatically or manually, to map from one language to the other when that simple subset is used

I’ve done this by looking at converting microdata examples to RDFa and vice versa, and the lessons to be drawn from that exercise. I’ve broken down the result into three posts:

This is the last of these posts. It is probably the only one you will want to read :)

Mapping RDFa to Microdata

Aug 20, 2011

This post is part of a three-part series that analyses the differences in features and syntax between microdata and RDFa. The series attempts:

  • to identify the differences in approach and functionality of the two languages, which should help developers choose between them
  • to identify any guidelines for developers of vocabularies for use with both languages
  • to identify a subset of functionality that is common to the two languages, which developers might want to stick to in order to make switching and mixing easier
  • to identify mapping rules that could be applied, automatically or manually, to map from one language to the other when that simple subset is used

I’ve done this by looking at converting microdata examples to RDFa and vice versa, and the lessons to be drawn from that exercise. The three posts are on:

This post is the second of these, and looks at how RDFa might be mapped to microdata. In this case, I will aim to express the RDF created from the RDFa as the equivalent microdata JSON, and then to write microdata that produces that same JSON.
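To make the target concrete, here is a minimal, hypothetical sketch (the triples and URIs are invented for illustration) of the kind of mapping involved: taking subject–predicate–object triples such as an RDFa parser might produce, and grouping them into the JSON shape that the microdata specification defines, namely items with `type` and `properties` members:

```python
# Hypothetical sketch: converting a handful of RDF triples (as an RDFa
# parser might produce) into microdata-style JSON. The triples and
# vocabulary URIs below are invented for illustration.

RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def triples_to_microdata_json(triples, subject):
    """Build a single microdata-style item for one subject from (s, p, o) triples."""
    item = {"type": [], "properties": {}}
    for s, p, o in triples:
        if s != subject:
            continue
        if p == RDF_TYPE:
            # rdf:type triples become entries in the item's "type" array.
            item["type"].append(o)
        else:
            # Microdata JSON groups repeated properties into lists of values.
            item["properties"].setdefault(p, []).append(o)
    return {"items": [item]}

triples = [
    ("#ep", RDF_TYPE, "http://schema.org/TVEpisode"),
    ("#ep", "http://schema.org/name", "An Example Episode"),
]
print(triples_to_microdata_json(triples, "#ep"))
```

A real mapping would also have to handle nested items, blank nodes, and language and datatype annotations on literals, which this sketch ignores; those are exactly the places where the two models diverge.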