eneylon

Archive for the ‘general’ Category

Country Life

In general on November 19, 2013 at 10:59 pm

Living in the countryside is great!

Just this evening I’ve had to close the gates to our drive as a stray horse was eating the lawn of our neighbour’s garden – not a problem we were likely to have in our previous life. The beautiful sunsets that I have the privilege to witness on a regular basis help get things into perspective; but one of the best changes is the absence of distractions. Without a commute and lots of unnecessary rituals, you tend to focus on the task at hand.

So although there’s still lots to do (we have only been in Ireland for just over three weeks), some patterns are starting to emerge. The days have been busier than expected – mainly attempting to keep my first client happy. The nights have been good – sleeping well, aided by fresh sea air, no commute and being busy during the day. But best of all has been the time to think.

Thinking has mainly focussed on what has prevented the information industry from making more progress over the twenty-five years that I have been involved, and what opportunities exist to make a difference to how content is perceived by the people who create and use it. The main conclusion I’ve come to is that publishers are too fixated on technology, and not focused enough on understanding their content.

This view will inform the new services and tools that are needed for the venture that I am building up to. Software tools should help demystify content rather than increase the distance between the technologists and the content curators. My goal is to make it easier for publishers to make informed decisions about how to invest in their content, and to make it possible to understand how content can be enhanced and used in straightforward ways.

It’s taken a long time, but a name for the trading entity that will help publishers and users has emerged. Once I’ve registered it with the relevant organizations, there will be a commotion around its launch. That should be before Christmas and will hopefully be a portent for a busy 2014.


Changes

In general on September 9, 2013 at 12:20 pm

A few months ago my family made the decision to move from the UK to Ireland. The opportunity presented itself, and, since we (wife, daughter and myself) all love being in Ireland, we jointly decided it was the right thing to do.

Having made the decision there were a few details to work through: a school for our daughter was top priority, and it proved to be straightforward (there are some big changes for her, but she seems to be coping fine). My wife had a straightforward transition too, as her company is now setting up an office in Ireland.

Which just left me. Having committed to a house, and with my family already moved, change is required. So I am starting my own business, offering consulting on publishing technology strategy and implementation. I’m looking at what I can do to help businesses achieve their goals. And that’s where you can help.

For the last six years, my employer has been the UK’s national standards body. So my recent experience is related to standardization processes and the commercialization of standards content. In my current role of Technology Strategist and Data Architect, I’ve automated marketing campaigns, project managed website software development, augmented content in PDF files, designed schemas, written data extraction routines, created quality assurance processes and represented the company in a range of international forums.

Many of the activities I do in urban England can still be done from rural Ireland. There will be differences: much more time will be spent on telecommunications than currently, but there won’t be a regular commute, so that will compensate. I’m available for work from 23 October – so if you would like to discuss current or future issues you need help with, please get in touch (via eneylon on Twitter initially, but there will be other channels shortly).

For those of you interested in where we are moving to, here is the Google map view of our new town:

On making a clock

In general on October 13, 2012 at 9:41 pm

Early this year, a clock in the window at Heals furniture store caught my daughter’s eye. In the moment, I committed to making the clock – a promise that has resulted in a voyage of discovery. My initial focus was on a physical implementation, but in designing a fascia for laser cutting, the use of a proportional font caused me to consider writing the clock in software. No server-side code was written in the making of this clock – something which has caused me to consider more carefully the client-side capabilities that exist in modern web browsers. [After writing the web implementation I found that there are already instructions on Instructables on how to make this clock.]
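
To illustrate the point, the whole thing can live in a single page. The sketch below is not the clock itself – it just shows the shape of a self-contained, client-side implementation: a timer updating the DOM, with no server involved.

<html>
<head>
<script type="text/javascript">
// Redraw the clock face once a second; everything runs in the browser
function tick(){
function pad(n){ return (n < 10 ? '0' : '') + n; }
var now = new Date();
document.getElementById('face').innerHTML =
now.getHours() + ':' + pad(now.getMinutes()) + ':' + pad(now.getSeconds());
}
</script>
</head>
<body onload="tick(); setInterval(tick, 1000)">
<div id="face"></div>
</body>
</html>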

In the Summer, at Over the Air, I knocked out a location-based encryption tool during the judging of the hack competitions. Whether this is a good idea is debatable (the idea was to get kids to understand the importance of a shared secret in ciphers, expressed as a treasure hunt around the grounds of Bletchley Park), but again the solution was a self-contained webpage (handling both encryption and decryption as a symmetric operation). So with modern web clients being both powerful and capable, is the traditional focus on server-side capabilities misplaced?
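
A toy version of the idea fits in a few lines (this is not the actual hack – the rounding precision and the XOR cipher are illustrative choices): the shared secret is simply the place you are standing.

<script type="text/javascript">
// Derive a shared secret from rounded coordinates: anyone standing at
// (roughly) the same spot computes the same key
function keyFrom(lat, lon){
return Math.round(lat * 100) + ':' + Math.round(lon * 100);
}
// XOR each character with the key; XOR is symmetric, so running this
// twice with the same key decrypts
function crypt(text, key){
var out = '';
for (var i = 0; i < text.length; i++){
out += String.fromCharCode(text.charCodeAt(i) ^ key.charCodeAt(i % key.length));
}
return out;
}
var key = keyFrom(51.997, -0.741); // roughly Bletchley Park
var secret = crypt('meet at the hut', key);
alert(crypt(secret, key)); // round-trips back to the plaintext
</script>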

Back in the physical-object space, the clock’s electronics proved trivial (with an Arduino the programming task was simple), but the real problems became power consumption and reproducibility (since it became obvious that I might want to make more than one clock). What began as a craft problem has become one of manufacturing, and of finding the right tools and materials to meet the as-yet-unclear demand for possibly many copies of the clock.

So the virtual and real design spaces both pose questions about how best to approach the selection of appropriate tools at a time when costs and capabilities are shifting rapidly …

Binding back

In general on March 27, 2011 at 7:54 am

The last article discussed how to transform XML content so that rendered content stays bound to its source – by carrying references back to the originating document using an XSLT transformation. To make use of that data in the client (generally a web browser), we need an interaction mechanism. HTML is already self-aware of its structure through the DOM, and so a user interaction can easily be captured in relation to some rendered content.

The following example demonstrates how simple it is to get JavaScript to make HTML aware of its own existence (existential HTML):

<html>
<head>
<script type="text/javascript">
function locate(e){
// e.target is the standard property; e.srcElement covers older
// versions of Internet Explorer
var el = e.target || e.srcElement;
alert('You pressed button '+e.button+' in the element with id of value '+el.getAttribute('id'));
}
</script>
</head>
<body onMouseDown="locate(event)">
<div id="first">Click on this text to see that a <span id="second">different context</span> can be recognised <span id="third">even when nested <sup id="fourth">inside</sup> another context</span>.</div>
</body>
</html>

Clicking in the different areas of the text results in different messages being displayed, based on the position of the mouse in relation to the ids used in the document. By extension, if every HTML element contained an XPath back to the source (or a hash key for an XPath stored on the server for efficiency’s sake), then every presentational structure could be tied back to the semantic structure of the originating document. It’s up to the transformation writer to ensure that those structures that need relating back to the original are passed through.

<div source="/doc/annex/section[5]/paragraph[3]">Allows us to point to a particular structure in the source document</div>

Since XPaths can be arbitrarily long, it might be more efficient to use a hash of the XPath and do a lookup on the server side when content is sent back (for example, when adding annotations to a particular node in the XML document structure). This approach has been used to allow users to comment on data conversion quality by having jQuery pop up a menu when the user right-clicks some content.
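
As a sketch of that wiring (assuming jQuery is loaded; the /annotate endpoint is hypothetical and a prompt stands in for the menu):

<script type="text/javascript">
// When the user right-clicks annotated content, read the source path
// from the element carrying one and post a comment against it
$(function(){
$('[source]').bind('contextmenu', function(e){
e.preventDefault();
var path = $(this).attr('source');
var comment = window.prompt('Comment on ' + path);
if (comment){
$.post('/annotate', { path: path, comment: comment });
}
});
});
</script>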

Annotation Correlation

In general on June 18, 2010 at 9:06 pm

When structured content is converted for presentation, the relationship between source and rendition is often lost.

Increasingly documents are being made available not just for reading, but also for writing. Wikis allow editing of content from a raw state, but the bulk of annotation (for example in consultation exercises) still needs moderation or processing before affecting the source document. So the publication of transformed documents for annotation is a legitimate model for soliciting input to those documents.

The problem comes when the comments need to be tied back to the source. Lossy transformations are common when documents are converted into HTML. Reversing a transformation is not often a design consideration and the semantics of source tags are often lost when rendering content.

What is needed is a means of commenting on a presentation form that allows annotation at the precision of the source document. The approach described here assumes the source is an XML document that is transformed for display to the reader.

In order to be able to tie items in the HTML to their corresponding elements in the source document, each element in the source must be uniquely represented in the rendition. The approach advocated here is to insert the XPath for each node in the source into an attribute of the rendered content. It is proposed that this attribute be named noid.

Of course there are other ways of achieving the same result, such as creating a lookup table of XPaths and giving each node a corresponding GUID. However, the direct approach of inserting the XPath into the attribute has the advantages of simplicity (no additional data structure is needed) and transparency. One disadvantage is that the increase in file size becomes a function of content structure and element naming rather than just the number of nodes in the source.

The solution uses XSLT to transform the source. This allows for easy extensibility and placement in a transformation pipeline. The code below enhances the identity transform by adding a new attribute to every element output. That attribute contains the XPath of the element that it corresponds to:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<!-- Identity transform: attributes, text, comments and processing
     instructions pass through unchanged -->
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<!-- Elements additionally gain a noid attribute carrying the XPath
     that locates them in the source (only elements can take the
     attribute, so they need their own template) -->
<xsl:template match="*">
<xsl:copy>
<xsl:attribute name="noid">
<xsl:for-each select="ancestor-or-self::*">
<xsl:variable name="my-key-name" select="local-name(.)"/>
<xsl:text>/</xsl:text>
<xsl:value-of select="name()"/>
<xsl:text>[</xsl:text>
<!-- Position among same-named preceding siblings -->
<xsl:value-of select="1+count(preceding-sibling::*[local-name(.)=$my-key-name])"/>
<xsl:text>]</xsl:text>
</xsl:for-each>
</xsl:attribute>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>

The additional code performs a path trace: for each element it walks the ancestor-or-self axis and builds the route needed to reach that particular node in the source.
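
Run over a small sample document (the data is purely illustrative), the transform leaves the content intact and labels every element:

<doc>
<annex>
<paragraph>Some text</paragraph>
</annex>
</doc>

becomes:

<doc noid="/doc[1]">
<annex noid="/doc[1]/annex[1]">
<paragraph noid="/doc[1]/annex[1]/paragraph[1]">Some text</paragraph>
</annex>
</doc>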

The result of the transformation is an otherwise identical document which can be used to render HTML and provides a route back to the source for every element. Any subsequent processing can choose to make use of those links – typically by using the noid attribute to populate the id on a div or span element in HTML.

This post has shown that an XML document can be transformed to provide a route back to the source document in a subsequent rendition. In the next post in this series I will cover how to make use of that path from the rendered document using JavaScript events and the HTML document object model.

Hacking Sunrise

In general on March 15, 2010 at 9:51 pm

Sunrise is my hack from wherecamp.eu.

It’s a calculator of sunrise and sunset times that can be used to build location-based applications. Potential users of such applications are anyone needing to know when sunrise or sunset will occur at their location – or another specified location. This could be photographers wanting to capture landscapes bathed in colour, farmers who wake a specified time before sunrise (if farmers still do that), religious observers with dietary or worship practices based on sunrise or sunset, etc.

Timings are calculated using an algorithm from the Almanac for Computers, 1990, published by the Nautical Almanac Office, United States Naval Observatory. It makes sense to use an approach driven by a need for precise information, so this seemed a good starting point. Implementation in code was trickier than originally thought because the trigonometric functions need converting between degrees and radians – fortunately there is a worked example available, which also led to the discovery of an error in the algorithm. Additionally, there is the need to account for step changes in time zones when factoring in summer time.
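
If implementing in JavaScript, the conversion issue amounts to wrapping each trigonometric call (a sketch of the standard workaround, not code from the repository):

// The almanac algorithm works in degrees, but JavaScript's Math
// functions work in radians, so each call needs wrapping
function sinDeg(d){ return Math.sin(d * Math.PI / 180); }
function cosDeg(d){ return Math.cos(d * Math.PI / 180); }
function tanDeg(d){ return Math.tan(d * Math.PI / 180); }
function asinDeg(x){ return Math.asin(x) * 180 / Math.PI; }
function acosDeg(x){ return Math.acos(x) * 180 / Math.PI; }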

The hack took this data and cycled through the days of the year. For each returned value a line was drawn on an SVG canvas – showing the variation across the year. Near the equator this variation is minimal (the days tend to be around 12 hours), but once we deviate into seasonally-affected areas the utility of knowing each day’s twilight times becomes more apparent.
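
The drawing loop is simple (a sketch: sunriseHour and sunsetHour stand in for the almanac calculation, and 'chart' is assumed to be an inline SVG element):

// Draw one vertical line per day, spanning sunrise to sunset;
// sunriseHour(day) and sunsetHour(day) return decimal hours (0-24)
var svg = document.getElementById('chart');
var NS = 'http://www.w3.org/2000/svg';
for (var day = 1; day <= 365; day++){
var line = document.createElementNS(NS, 'line');
line.setAttribute('x1', day);
line.setAttribute('x2', day);
line.setAttribute('y1', 10 * sunriseHour(day));
line.setAttribute('y2', 10 * sunsetHour(day));
line.setAttribute('stroke', 'gold');
svg.appendChild(line);
}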

The calculator source code is posted on GitHub and I welcome implementations in other languages, extensions and improvements. If you would like to get involved, ping me on Twitter, where my username is eneylon. Next steps are to figure out how the latitude and longitude could be retrieved: they could be provided by the application, entered into a web form, supplied from GPS access, or accessed with a JavaScript library. There is also more work needed on calculating summer time change dates.
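
For example, the in-browser route is now straightforward (a sketch; calculate stands in for the almanac code):

// Ask the browser for the user's position via the W3C geolocation API
if (navigator.geolocation){
navigator.geolocation.getCurrentPosition(function(position){
calculate(position.coords.latitude, position.coords.longitude);
});
}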

Unfortunately I’m not patient enough to be a graphic designer – so to move this forward it would be good to collaborate with someone skilled in design on an attractive interface to the data.

Small and random

In general on August 20, 2009 at 9:26 pm

Such is the mindshare of twitter that the need to write structured prose recedes and becomes a distant echo. So why change that now (dear reader)? Well in part to let you know that the heart still beats – in fact better than for a long time. Hope you like the new steampunky theme.

Componentising the web

In general on June 8, 2008 at 8:29 pm

“The problem with standards is that there are so many to choose from.” In a knowledgeable market, needs are met by providers who then compete over implementation and reliability issues. Eventually some accepted norms emerge to allow interoperable satisfaction of those needs. So it’s pleasing to see new services emerge on the web like the ‘still testing’ UserVoice – a neat idea that lets you plug feedback capabilities into your site. This got me thinking about the privacy of the data collected. Investigating further, I found that the developers have anticipated (using their own software in a reflective postmodern way) the need to ensure that users giving feedback are known to the application, i.e. authenticated. They have the development of an authentication API on their to-do list – which presumably will play well with other authentication systems, such as those used by prospective customers! It will either be a useful technology lesson or a useful commercial lesson, whichever way this one goes.

Precipice Beckons

In general on October 15, 2007 at 8:19 pm

On the eve of our first holiday of the year and it’s sorely needed. My wife has just got hold of a Nokia N95 (not the just-released 8GB version, unfortunately), so we can have some fun figuring out where to go in Turkey – assuming there is coverage there. But what I really, really want is to see the Rugby World Cup Final. After blogging the last one (unfortunately that domain is no more), it would be a shame to miss out just because we are on holiday. But it may be that this new phone will help locate a means of watching the final come next Saturday.

Opportunity Costs

In general on September 26, 2007 at 7:56 pm

One side-effect of DIS 29500 (the fast-track submission to ISO of Microsoft’s office formats) is the saturation of effort among those involved in the ‘normal’ XML standards creation and promotion activities. This was evident in the lackadaisical response to the proposal to adopt STX as the basis for streaming transformations in the ISO DSDL activity, and in the general slow-down of development activities. At a more local level, there has been a notable absence of activity within XML:UK (the user group for markup users in England). At XML:UK’s Publishing 2.0 event back in April, it was mooted that there would be an XForms workshop, a members’ meet and some other activities this year. With just over three months to go in 2007, there is no sign of any of these events.

With the demise of the Interchange publication (which I edited for years as a user group syndicated publication from the International SGML/XML Users Group), the value gained from XML:UK membership needs to be demonstrable. The opportunity cost of not acting is far greater than the risk of giving others the imprimatur to create value for members. There is plenty of energy to tap into: XML adoption continues to grow, and the proliferation of ad hoc events shows the demand for grass-roots activities.