To help us design the future Thinglink developer API, I've been documenting patterns that I see in web apps that share and modify data over HTTP. The following is an extract from a draft of the Thinglink technical white-paper.
Syndication
One of the simplest cases is when the writer of a blog would like a subset of the latest Thinglink information on their blog. They can take the URL of the feed for the information they want - perhaps http://thinglink.org/feed/tag/hat - and transform the XML from that feed into HTML in a sidebar. This can be done by polling in a batch process on a regular schedule.
This doesn't require write access to thinglink data, nor does it need any special developer relationship between the two sites. If thinglink.org becomes unavailable, the information may start to become stale, but the blogger's site continues to work.
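As a sketch of the syndication pattern, a batch job run on a schedule (say, from cron) could fetch the feed and render it as an HTML sidebar fragment. The feed structure below is assumed to be RSS-like; the actual Thinglink feed format may differ.

```python
import xml.etree.ElementTree as ET

def feed_to_sidebar_html(feed_xml):
    """Turn a fetched feed (e.g. from http://thinglink.org/feed/tag/hat)
    into an HTML list of links for a blog sidebar.

    Assumes an RSS-like structure with <item>, <title> and <link>
    elements; adjust the element names to the real feed format."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="#")
        items.append('<li><a href="%s">%s</a></li>' % (link, title))
    return "<ul>\n%s\n</ul>" % "\n".join(items)
```

Because the job writes a static fragment, the blog keeps serving the last successful result even while thinglink.org is down, which is exactly the loose coupling described above.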
Thin client
A mobile application could be developed to help people discover and record information about things in their environment. Running on a phone or a PDA, it would not have a database of its own and would directly access the Thinglink API in order to look up a thinglink code, create a new thinglink, or make a comment.
This kind of application is much more dependent on network connectivity and the availability of thinglink.org. Because all the information in the application is transmitted over the internet, it cannot do anything for the user if the API becomes unavailable.
The principles in this pattern would also apply if a website used AJAX to directly talk to the Thinglink API.
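To illustrate the thin-client pattern: every operation is a live HTTP round trip, and there is no local store to fall back on. The endpoint layout below (an `/api/lookup/` path on thinglink.org) is a hypothetical sketch, not the published API.

```python
from urllib.parse import quote
from urllib.request import urlopen
from urllib.error import URLError

API_BASE = "http://thinglink.org/api"  # hypothetical base URL

def lookup_url(code):
    """Build the lookup URL for a thinglink code (illustrative scheme)."""
    return "%s/lookup/%s" % (API_BASE, quote(code))

def lookup_code(code):
    """Fetch a code's record directly from the API.

    A thin client has no cache or database of its own, so when the
    network or thinglink.org is unavailable it can do nothing but fail."""
    try:
        with urlopen(lookup_url(code), timeout=5) as resp:
            return resp.read()
    except URLError:
        raise RuntimeError("thinglink.org unavailable; no local fallback")
```

The `except` branch makes the trade-off concrete: unlike the syndication pattern, there is no stale-but-working state to degrade into.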
Website using data enriched with thinglink information
We intend that over time people will start using thinglink codes in their own data. The more this happens, the more the network effect will help enrich every participant's information.
If a museum tagged a collection with thinglinks and put the information on their own website, they would be combining a mixture of information from their own database and information from thinglink.org.
Because thinglink codes are issued at thinglink.org, they would use the API directly during their archiving process to allocate codes and record information about their pieces. This is practical when they are cataloguing only a few hundred or a few thousand items a year.
However, when users requested pages from their site they would use cached thinglink information from their database. This would improve performance, reduce load on thinglink.org and avoid relying on the availability of the API for every request. They would periodically expire this cache and refresh the information directly from the Thinglink API.
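A minimal sketch of this cache-and-refresh approach, with the API call abstracted as a caller-supplied fetch function (the class and its interface are illustrative, not part of any Thinglink library):

```python
import time

class ThinglinkCache:
    """Serve thinglink records from a local store, refreshing each
    entry from the API (via the supplied fetch function) once it is
    older than the time-to-live."""

    def __init__(self, fetch, ttl_seconds=3600):
        self.fetch = fetch          # e.g. a call to the Thinglink API
        self.ttl = ttl_seconds
        self._store = {}            # code -> (fetched_at, record)

    def get(self, code):
        entry = self._store.get(code)
        if entry is not None:
            fetched_at, record = entry
            if time.time() - fetched_at < self.ttl:
                return record       # fresh enough: serve locally
        record = self.fetch(code)   # expired or missing: hit the API
        self._store[code] = (time.time(), record)
        return record
```

In production the store would be the museum's own database rather than an in-memory dict, but the structure is the same: page requests never depend on thinglink.org being up unless the cached copy has expired.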
Decoupled, replicated thinglink data
A busy ecommerce site that lists many new items per day cannot rely on a direct coupling to the Thinglink API to generate new thinglink codes, because an outage at thinglink.org would put its own business at risk.
In this situation, Thinglink would define a scheme for the ecommerce site to allocate its own thinglink codes, along with the minimum information that should be stored about each item. Their website and database are then fully independent of thinglink.org for daily operation. They would agree to deliver regular updates of XML data to Thinglink, perhaps once a day, which would be incorporated into thinglink.org in a batch.
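The decoupled pattern might look like the sketch below: codes are allocated locally from a site prefix, and a daily job serializes the day's new items as an XML batch for Thinglink. The prefix scheme and the XML element names are assumptions for illustration; the real allocation scheme and schema would be whatever the two parties agree.

```python
import xml.etree.ElementTree as ET

SITE_PREFIX = "SHOP"  # hypothetical per-site prefix agreed with Thinglink

def allocate_code(serial):
    """Allocate a thinglink code locally, without calling thinglink.org."""
    return "%s-%06d" % (SITE_PREFIX, serial)

def batch_export(items):
    """Build the daily XML batch of newly listed items.

    `items` is a sequence of (serial, title) pairs; the element names
    here are illustrative, not a published Thinglink schema."""
    root = ET.Element("thinglinks")
    for serial, title in items:
        item = ET.SubElement(root, "item", code=allocate_code(serial))
        ET.SubElement(item, "title").text = title
    return ET.tostring(root, encoding="unicode")
```

Delivering this file once a day keeps the site's operation fully independent of thinglink.org, at the cost of the central index lagging by up to a day.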
Hi,
what do you think of a push-based way of syndicating data, instead of having the people interested poll every x minutes? The polling approach leads to slow sites all over the place, especially because you can't trust people to only poll every 15 minutes; they'll want to be "up-to-date" and poll every minute or so.
I posted some ideas for a more push-like approach at http://the-ad.blogspot.com/2005/11/feeds-via-post-instead-of-get.html
Another very interesting set of ideas can be found at http://about.psyc.eu/Newscasting
feedback welcome
tim
Posted by: betatim | July 24, 2006 at 11:20 AM