TwapperKeeper is shutting down, it seems. It’s a popular online tool for archiving hashtags and other Twitter searches, and certainly well used in UK Higher Education, where I work. I actually met John O’Brien, founder of TwapperKeeper, when he attended Dev8D (a developer event for those working in UK HE) a couple of years ago; nice chap.
Anyway, Martin Hawksey has created a wonderful tool for archiving, ummm, archives before they are gone. The tool is actually a Google Spreadsheet, and to me it’s a testament to the power of Google Docs that an application which fetches and stores data from another site can be built with it.
Here are my Twappers, saved thanks to Martin’s brilliant spreadsheet.
UKSG – an organisation whose members are mainly university libraries and publishers. While I only attended their conference for the first time this year, it seems I was the person who originally created its TwapperKeeper archive. You can access the document by clicking the link (and if you’re signed in to Google Docs you can save a copy, etc.); note that you need to click the archive tab at the bottom.
data.gov.uk – again nothing really to do with me, but an archive of tweets mentioning data.gov.uk (again, click the archive worksheet tab at the bottom)
bcb4 – Barcamp Brighton 4. I attended this event a couple of years ago.
dilsr – Developing Innovative Library Support for Researchers. Not only did the Twitter archive of this event, held at Sussex last year, almost disappear, but the website, originally on Ning, has already gone the way of the dodo.
nickcleggsfault – during the run-up to the election, the Lib Dems looked like they might take a respectable third place rather than a distant one. Our great newspapers put their usual impartial views to one side and, for the sake of Britain, set out to destroy the Lib Dem leader. Twitter decided to join in. (I blogged about some of the articles from one day at the time.) For some reason I can only get 4,500 tweets; I think a tweet around that point is causing an error, and I will try to get more.
I make no claim to owning any of the data. I’m guessing the original tweeters do. Or maybe Twitter Inc. Or Facebook. Actually, it’s definitely Facebook. And it’s already alerted your mum that you’re reading this. Sorry.
I’ve been using Google Reader for a while, having jumped ship from Bloglines. One of its features is the ability to share stuff. This is potentially a good thing, as it avoids me bombarding my Twitter followers with endless links to stuff I find interesting.
At the moment it is useless, as I don’t really follow anyone on Google Reader, and they (showing good sense, and placing a firm value on their own time) don’t follow me.
So feel free to add me as a contact in Google Reader, and I’ll do the same. And read interesting stuff. Because Twitter, Failblog, blogs and the web don’t already waste enough of my time.
It’s a way for RSS/Atom feed consumers (feed readers and the like) to be instantly updated and notified when an RSS feed changes.
In a nutshell, the RSS publisher notifies a specific hub when it has a new item. The hub then instantly notifies any subscribers who have asked it to contact them when there is an update.
This is all automatic and requires no special setup by users. Once the feed producer has set up PubSubHubbub and specified a particular hub, the RSS feed gains an extra entry telling subscribing clients that they can use that hub for this feed. Clients which do not understand this line will simply ignore it and carry on as normal. Those that are compatible with PubSubHubbub can then contact the hub and ask to be notified when there are updates.
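To make that concrete, here’s a minimal sketch of the subscribing side. The feed advertises its hub with a link element (rel="hub") in the feed itself; a subscriber then POSTs the spec’s hub.* parameters to that hub. The hub URL below is Google’s public demo hub; the topic (the feed we want) and callback (our endpoint) URLs are made-up stand-ins:

```php
<?php
// A minimal sketch of a PubSubHubbub subscription request.
// The hub is Google's public demo hub; the topic and callback are made up.
$hub      = 'http://pubsubhubbub.appspot.com/';
$topic    = 'http://example.com/blog/feed.atom';
$callback = 'http://example.com/push-endpoint.php';

$ch = curl_init($hub);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'hub.mode'     => 'subscribe',
    'hub.topic'    => $topic,
    'hub.callback' => $callback,
    'hub.verify'   => 'async',  // the hub confirms the callback before subscribing
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch); // a happy hub answers 202 Accepted, then pings the callback on updates
curl_close($ch);
```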
It has been developed by Google, and they’ve implemented it in various Google services such as Google Reader and Blogger. This should help give it momentum (which is crucial for this sort of thing). In a video on Joss’ post (linked to above) the developers demonstrate posting an article and Google Reader instantly updating the article count for that feed (in fact, before the blog software has even finished loading the page after the user hits ‘publish’). It reminds me of the speed of FriendFeed: I will often see my FriendFeed stream webpage update with my latest tweet before I see it sent from Twhirl.
Exactly a week ago I was coming home from Mashed Libraries in London (Birkbeck).
I won’t bore you with details of the day (or, more to the point, I’m lazy and others have already done it better than I could (of course, I should have made each one of those words a link to a different blog, but I’m laz… never mind)).
Thanks to Owen Stephens for organising, UKOLN for sponsoring and Dave Flanders (and Birkbeck) for the room.
During the afternoon we all got to hacking with various sites and services.
I had previously played around with the Talis Platform (see long-winded commentary here, though it seems weird that at the time I really didn’t have a clue what I was playing with, and it was only a year ago!).
I built a basic catalogue search based on the ukbib store. I called it Stalisfield (which is a small village in Kent).
But one area I had never got working was the holdings, so I decided to set to work on that. Progress was slow, but then Rob Styles sat down next to me and things started to move. Rob helped create Talis Cenote (from which I nicked most of the code) and generally falls into that (somewhat large) group of ‘people much smarter than me’.
We (well, I) wanted to show which libraries had the book in question and plot them on a Google Map. So once we had a list of libraries, we needed to connect to another service to get the location of each one. The service which fitted this need was the Talis Directory (Silkworm). This raised a point with me: it was a good job there was a Talis service which used the same underlying ID codes for the libraries, i.e. the holdings service and the directory both used the same ID number. It could have been a problem if we had needed to get the geo/location data from something like OCLC or librarytechnology.org; what would we have searched on? A library’s name? Hardly a reliable term to use (e.g. the University of Sussex Library is called ‘UNIV OF SUSSEX LIBR’ in OCLC!). Do libraries need a code which can be used to cross-reference them between different web services (a little like ISBNs for books)?
Using the Talis Silkworm Directory was a little more challenging than first thought, and the end result was a very long URL containing a SPARQL query (SPARQL looks like a steep learning curve to me!).
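For flavour, here is roughly the shape of that lookup, as a hedged sketch rather than the real thing: the endpoint URL, the library URI and the use of the wgs84 geo vocabulary are all illustrative assumptions on my part, and the actual Silkworm query was considerably longer.

```php
<?php
// Illustrative only: ask a SPARQL endpoint for a library's co-ordinates.
// The endpoint, library URI and predicates are assumptions, not the real
// Silkworm details.
$endpoint = 'http://api.talis.com/stores/silkworm/services/sparql';
$query = '
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?lat ?long WHERE {
  <http://example.org/libraries/sussex> geo:lat ?lat ;
                                        geo:long ?long .
}';

// The "very long URL" in question: the whole query, URL-encoded.
$url = $endpoint . '?query=' . urlencode($query);
$xml = simplexml_load_string(file_get_contents($url)); // SPARQL results XML
```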
In the meantime, I signed up for Google Maps and gave myself a crash course in setting it up (I’m quite slow to pick these things up). So we had the longitude and latitude co-ordinates for each library, and we had a Google Map on the page; we just needed to connect the two.
Time was running short, so I was glad to take a back seat and watch (and learn) while Rob went into speed-JavaScript mode. This last part proved elusive. The PHP code which was generating the JavaScript was just not quite working. In the end the (final) problem was related to the order in which I was outputting the code, but we were out of time, and this required more than five minutes.
Back home, I fixed this (though I never would have known I needed to do this without help).
You can see an example here, and here and here (click on the link at the top to go back to the bib record for the item, which, by the way, should show a Google Book cover at the bottom, though this only works for a few books).
You can click on a marker to see the name of the library, and the balloon also has a link which should take you straight to the item in question on that library’s catalogue.
It is a little slow, partly due to my bad code and partly due to what it is doing:
Connecting to the Talis Platform to get a list of libraries which have the book in question (quick)
For each library, connecting to the Talis Silkworm Directory and performing a SPARQL query to get back some XML which includes the geo co-ordinates (geo details are not available for all libraries)
Finally, generating some JavaScript code to plot each library on a Google map
As this last step needs to be done in the <head> of the page, it is only at this point that we can push the page out to the browser (a rough sketch of that step follows below).
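Here’s that final step as a sketch: PHP writing out the JavaScript that plots the markers. The $libraries array stands in for the name/lat/long rows gathered from the holdings and directory lookups, and the GMap2/GMarker calls are the v2 Maps API of the time; the bug on the day was simply ordering, in that the map must be created and centred before any markers are added.

```php
<?php
// Sketch only: emit the JavaScript that plots each library on the map.
// $libraries stands in for the rows fetched in the earlier steps.
$libraries = array(
    array('name' => 'University of Sussex Library', 'lat' => 50.8647, 'long' => -0.0878),
);

// Order matters: create and centre the map first...
echo "var map = new GMap2(document.getElementById('map'));\n";
echo "map.setCenter(new GLatLng(54.0, -2.0), 5);\n"; // rough UK-wide view

// ...then add a marker per library.
foreach ($libraries as $lib) {
    printf("map.addOverlay(new GMarker(new GLatLng(%F, %F), {title: '%s'}));\n",
        $lib['lat'], $lib['long'], addslashes($lib['name']));
}
```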
I added one last little feature.
It is all well and good to see which libraries have the item you are after, but you are probably interested in libraries near you. So I used the MaxMind GeoLite City library to get the user’s rough location, and then centred the map on this (which is clearly no good for those trying to use it outside the UK!). This seems to work most of the time, but it depends on your ISP; some seem more friendly in their design towards this sort of thing. Does the map centre on your location?
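In case it’s useful, here’s a sketch of that lookup using MaxMind’s legacy GeoLite City PHP library (the geoipcity.inc bundled with the free database download). The file paths are assumptions, and as noted, the accuracy depends heavily on the visitor’s ISP.

```php
<?php
// Sketch: rough visitor location from their IP, used to centre the map.
include('geoipcity.inc'); // ships with the GeoLite City download

$gi = geoip_open('GeoLiteCity.dat', GEOIP_STANDARD);
$record = geoip_record_by_addr($gi, $_SERVER['REMOTE_ADDR']);
geoip_close($gi);

if ($record) {
    // Centre the map on the visitor's approximate location...
    printf("map.setCenter(new GLatLng(%F, %F), 9);\n",
        $record->latitude, $record->longitude);
} else {
    // ...or fall back to the UK-wide default when the lookup fails.
    echo "map.setCenter(new GLatLng(54.0, -2.0), 5);\n";
}
```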