Monday, December 29, 2008

The Social Netbook

It is perhaps reasonably well known that I won a Netbook courtesy of Vodafone last month. As soon as I got it I started to think about how it could make its way into the everyday lives of the people I know. So, here are a couple of suggestions:

Social Software
The first thing is that these Netbooks need to come pre-bundled with social software. Most people won't buy a Netbook to write code or even compose documents. Most will buy them to "get on the web", so Netbooks are a great opportunity to point them at things to do.

I imagine a wizard at startup that suggests the services you may like to use, takes you through the sign-up process, and installs and configures the appropriate software for you.

An example is ...
  1. Configure an OpenID account
  2. Create Twitter account (they should be doing (1) by now!) and install software
  3. Create a Flickr and TwitPic account and configure appropriately.
  4. Set up FriendFeed.
  5. Create a MySpace/Facebook account and store shortcuts.
  6. Install UStream/Qik and configure an account.
  7. Email account.
  8. blah dee blah ...

(Other suggestions ??)

Cloud Services
We're talking about a lightweight laptop that has limited disk space and less memory and processor speed than a typical laptop. It is pretty much reliant on you being online. So why not use the cloud?

An example ...
  1. An Amazon/Mesh storage account direct from your desktop ... via your OpenID account ;)
  2. Allow direct storage and management of pictures, video and so on in the cloud (maybe on top of (1)).
  3. Perhaps EC2 could even be synced with Netbooks somehow to allow complex calculations to be carried out in the cloud. Any ideas here?

In short, there is potentially a huge opportunity for services to get on board this new wave of laptop. Whilst rolling out an empty laptop works fine for me, there are many others who could climb on board the social wagon if they were given some help along the way!

Who else uses a Netbook? What do you think?

Sunday, December 7, 2008

Xtranormal - create your own 3D animation

Keeping the kids amused on a Sunday morning can be somewhat slow, but when @joecar mentioned Xtranormal to us this morning we tried it out... and how addictive it has become. Addictive to me, sure - but my 5 yr old loves it.

So we created a story (I helped him out, but all the ideas, script, names and so on were his - you can perhaps guess some of his influences!).

Here it is below - you can also see it here and on YouTube. I suggest you try it out - some things still to be worked on, but in general very, very well done.

Wednesday, December 3, 2008

Secret Santa on Twitter

This Christmas I'm running a Secret Santa on Twitter.

The idea is that you contact me with your Twitter name and you will be added to the list - sign-up will stop on Friday December 12 (I'll extend to December 19 if need be).

You will then be contacted with the Twitter username of the person you are to get a "gift" for. The gift can be a cool URL, a connection with someone, a beta invite - anything useful (probably best to avoid sending anything involving advertising).

When done (i.e. after I have contacted you), you simply send the following:

@username #secretsanta [message]

The #secretsanta tag is just so we can follow this happening (and so the person knows why you are contacting them).
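The gift message format above is simple enough to pick out automatically. Here is a minimal sketch of parsing it - the pattern and function name are purely illustrative, not part of any real tool:

```python
import re

# Matches the "@username #secretsanta [message]" format described above.
PATTERN = re.compile(r"^@(\w+)\s+#secretsanta\s+(.*)$", re.IGNORECASE)

def parse_gift_tweet(tweet):
    """Return (recipient, message) if the tweet matches the format, else None."""
    match = PATTERN.match(tweet.strip())
    if not match:
        return None
    return match.group(1), match.group(2)
```

Anything that matches the pattern could then be collected into the published list.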

If people wish, I could publish the list after Christmas.

Update - Who's involved?



Tuesday, November 18, 2008

A winner with Vodafone Live Guy

Today I spent more than a few hours hunting for @vodafoneliveguy - you can see my Twitter feed from the day. This is a treasure-hunt-style contest where you need to find the Vodafone Live Guy, who is constantly moving throughout the city. It is a really nice idea, something I read about a while back in SmartMobs. You need to follow his updates via GPS, blog, Twitter and more, and try to guess where he is.

It was more than just winning the laptop I was interested in. I wanted to participate in something like this - as well as livestream it as much as possible - armed with a PDA, a laptop with some web access and my feet (I wasn't in a car, nor did I consider paying for taxis to travel around).

The Morning
At 9.10 AM, after dropping my son off at school, I got the train into Glasgow and the game started. I was pretty determined to find him - but I wanted to see how I would use technology to do so.

I went to Queen Street and immediately headed to Bothwell Street, tracing the path from the old Herald offices opposite Livingstone Tower at Strathclyde University. After looking around Bothwell Street, I figured he had gone into the centre of town. So I checked into Borders to fire up the laptop and check his map... it said he was in Bothwell Street, but much further up. So back I headed, and outside Central I approached a rather worried-looking chap dressed in red and put my hand out to shake it. Perhaps he was a tourist thinking we were always this friendly, but nevertheless he just kind of looked at me as I realized this was not my man. I felt pretty stupid, but onwards I went.

I was then back on Bothwell Street, and despite being roughly where he was for about 15 minutes before anyone else turned up (I could tell the instant a challenger arrived, just outside Wetherspoons on Bothwell Street), and despite the picture of the car park, I just couldn't locate him.

I tried to think of where he could head... the Clyde was my thought - a fairly major part of Glasgow. So I walked down that way, only to discover he had been found. Fek... and I think it was the guy who ran past me who found him (I'd love to hear that story too!).

I was beaten but not out, and the day was young. I recorded a video on my mobile where I talk about the fact he started at the Colville Building of Strathclyde University (which serves as the headquarters in the detective series Taggart) and how he was at the Livingstone Tower, named after David Livingstone.

The Afternoon
As I sat in St Enoch (suggested by my wife), I read that he was at the Radisson SAS and spent 15 minutes there before realizing I was never going to find him. I then read he had moved into MY backyard - the West End! Onto the underground to Hillhead - guessing he would go to Glasgow University and the Hunterian Museum, only to find he was going to Kelvingrove.

So I walked pretty quickly to the museum and figured I'd go in the back way and surprise him. But as I went round the corner, what did I see? A big red jacket and a bunch of people around him. I approached to congratulate the winner(s), only to find none of them knew the phrase. I immediately said "You're the LiveGuy and I'm a Vodafone Winner" and sure enough, the prize was mine.

It turns out Jon Sykes, a lecturer at Glasgow Caledonian University, had taxied two lots of students to Kelvingrove, but none of them knew the crucial phrase. So I was either lucky, or had studied harder! Put it this way, it wasn't exactly Maradona's handball in the box - more like that second goal ;)

I had a nice chat with the LiveGuy and his friend for a while, telling them of my day and the near misses, then they went their way and I went mine. I recorded a video just after I had spoken with them.

I then went into the BeanScene cafe opposite the Galleries and relaxed for an hour or so, checked my mail and read some papers.

There's More
I do some work for Conscia and sometimes use their desks. Now, yesterday I had told them of the game and figured they had just ignored me - especially as I had heard nothing from any of them all day. Well, it turns out they had been keeping tabs on the Live Guy.

I walked into the office and said "Yes, I won", only to be told, "Yes, so did Chris". Now, I assumed this to be a wind-up, but it turns out that @vodafoneliveguy had left his GPS on, made his way into town and stopped around the corner from the offices. About 10 of them apparently ran out in various directions, and the first guy (Mark) didn't know the phrase either - "Can I have the laptop please" is apparently what he said. In fact the one that did (Chris) had made a call back to someone else (Dominic) in the office, who told him what it was.

To end it, one of the guys (Jim) had gone to the bank near where this all happened and came out only to see much of this going on. He had been there the whole time, and despite HIM being the one I initially told AND HIM being the one that reminded everyone else this morning after I had told them yesterday, he had temporarily let his guard down. Otherwise he would be the happy owner of a new laptop.

Update: Apparently Mark had an iPhone with GPS tracking on it and could literally watch them drive into town, stop, and emerge from the offices they had been in - with lots of people following him as they knew this! The only problem was he didn't know the winning phrase, and Chris jumped in (ouch!).

What a mad day. My feet are sore, but I'm happy that I won a laptop and even happier that I was part of all this - especially as by the time it hits London he'll need to wear a disguise.

So I guess the moral to all of this is that social networking works - but only when you know a bit about the detail. Deep linking is cool even in a social sense, so long as you read what's at the end of that link.

Wednesday, November 5, 2008

OpenID "Friend" based Attributes

Something I have been thinking about, in line with some recent discussion on the OpenID mailing lists, is techniques to verify someone's identity and the attributes that are associated with it.

My initial thoughts come down to the following:

Implicit Reputation
In this case, the longer you use your OpenID, the more people come to know it is associated with you. I'd be interested in how we could explicitly extend this concept to support a distributed reputation system where you can attach OpenID reputation points assigned from other sites. These could even be broken down into types of reputation: the longer and more you use it, the more reputation points. Even sending an email that isn't spam via Gmail or Windows Live could "increment" your reputation score (like PageRank, but for OpenID).
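To make the idea concrete, here is a rough sketch of the data structure such a distributed reputation system implies: sites award typed points against an OpenID URL, and a total can be read off. Every name here is hypothetical - this is an illustration of the concept, not a proposal:

```python
from collections import defaultdict

class ReputationLedger:
    """Tracks typed reputation points awarded to OpenID URLs by other sites."""

    def __init__(self):
        # openid -> {reputation type -> accumulated points}
        self._scores = defaultdict(lambda: defaultdict(int))

    def award(self, openid, rep_type, points, source_site):
        # In a real distributed system the award would be signed by
        # source_site; here we simply record the points.
        self._scores[openid][rep_type] += points

    def total(self, openid):
        return sum(self._scores[openid].values())

ledger = ReputationLedger()
ledger.award("https://example.openid.invalid/alice", "email", 1, "gmail.example")
ledger.award("https://example.openid.invalid/alice", "longevity", 5, "idp.example")
```

Breaking the points down by `rep_type` is what would let relying parties weight, say, "sent non-spam email" differently from "long-lived account".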

Friend Verification
A slightly more explicit extension of the above, whereby friends can verify things you write down about yourself so that others can trust them more. John says he works at Google Inc and Brian has verified this, etc.

Central Validation
The Shibboleth model, probably needed for institutions who verify things such as "Yes, this is definitely Dr. Livingstone" etc.

More thinking
However, I can't think of a reason why OpenID AX couldn't support attributes from third parties that are signed and stored with the OpenID. This way an institution could be given a user's OpenID, make statements about them, and return the signed statements/attributes, which can then be stored anywhere with that user's OpenID profile.

Anyone who wants to use those attributes can easily check, via the institution's publicly available key, that the attributes have not been altered in any way.
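The sign-then-verify flow is simple to sketch. For illustration this uses an HMAC with a shared secret as a stand-in for the signature; a real institution would use a public/private key pair so that anyone holding only the public key could verify. All names and the secret are made up:

```python
import hashlib
import hmac
import json

SECRET = b"institution-signing-key"  # hypothetical; stands in for a private key

def sign_attributes(openid, attributes):
    """Institution signs a statement about a user's OpenID."""
    statement = json.dumps({"openid": openid, "attributes": attributes},
                           sort_keys=True).encode()
    signature = hmac.new(SECRET, statement, hashlib.sha256).hexdigest()
    return {"statement": statement.decode(), "signature": signature}

def verify_attributes(signed):
    """Check the statement has not been altered since it was signed."""
    expected = hmac.new(SECRET, signed["statement"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```

Because the signature covers the whole statement, the signed blob can be stored anywhere alongside the OpenID profile and still be checked later.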

Extending this, you could have a plethora of OpenID attribute information that ranges from unverified to strongly verified.

Saturday, November 1, 2008

OpenSearch and JSON

In the limited spare time that I seem to have these days, one of the things I've been looking at is OpenSearch, "... a collection of simple formats for the sharing of search results".

OpenSearch is slowly gaining traction, and each of the major browsers now allows you to add an OpenSearch provider to your toolbar automatically (although the user experience certainly needs work!).

In my current consultancy work for the NHS, I was asked to extend the search API I wrote some months back (which works against FAST) and enable any third party site to embed some script in their site and make Ajax-style queries. Last week I wrote this functionality (which returns custom XML and JSON) and sure enough it works great.

However, I like open stuff because it makes it easy for others to pick up and run with. So, after I added OpenSearch support via the usual description file and tested it with Internet Explorer et al, I decided to look at whether I could use OpenSearch as the actual syndication format.

Now, if this all ran on the same domain there would be no problem in pulling back Atom or RSS and formatting it for display. But in a cross-domain architecture you can't do this, due to the Same Origin Policy. So, using JSON-P, we can enable OpenSearch queries over the wire.
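The server side of the JSON-P trick is tiny: the search endpoint wraps its JSON payload in whatever callback name the caller supplied, so a plain `<script>` tag on any domain can consume it. A minimal sketch, with illustrative names throughout:

```python
import json

def jsonp_response(callback, payload):
    """Wrap a JSON payload in the caller-supplied callback name."""
    return "{}({});".format(callback, json.dumps(payload))

# The page embedding the script tag would define handleResults() itself.
body = jsonp_response("handleResults", {"totalResults": 2,
                                        "items": [{"title": "a"}, {"title": "b"}]})
```

The response is then served as JavaScript rather than JSON, which is exactly why it sidesteps the Same Origin Policy.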

To do that, however, we need an OpenSearch format for JSON - in my case I simply took the Atom format as a starting point, mapped much of it across, and added in the OpenSearch elements and attributes.
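Purely to illustrate the shape such a mapping might take - every field name below is my assumption, not a published format - a response could carry the Atom-derived fields alongside the OpenSearch extension elements:

```python
import json

# Hypothetical JSON rendering of an OpenSearch response: Atom elements
# (title, entries, links) plus the opensearch:* extension elements.
response = {
    "title": "Search results",
    "totalResults": 2,   # from opensearch:totalResults
    "startIndex": 1,     # from opensearch:startIndex
    "itemsPerPage": 10,  # from opensearch:itemsPerPage
    "entries": [
        {"title": "First hit", "link": "http://example.invalid/1"},
        {"title": "Second hit", "link": "http://example.invalid/2"},
    ],
}
encoded = json.dumps(response)
```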

So now you can take a few lines of code, add them to any page in the world, make an OpenSearch query using the template URI, and get an OpenSearch response which can be parsed and displayed.

Early days, but even if it's useful to no-one else on the planet, it is useful to me :)

If interested, please try it out here. It queries over 100 million records, so you will get results no matter what you type in, but the formats and data mappings are still being worked out.

PS. It would be *very* nice if we could query the OpenSearch discovery endpoint and ask for JSON to be returned. That way we could dynamically build all of the front-end stuff that I can't do with the existing response (it returns an OpenSearch-specific content type which is not trusted by browsers).

Saturday, October 11, 2008

Grassroots rail support

I've travelled on trains, planes and automobiles for many years now, and one thing that has yet to be resolved is telling you what is happening as it is happening. You are often left to guess what the problem is, or when the train will arrive - and this is worse at night, when there's often no-one around to ask. For years I have intended to build a grassroots support network for transport, as the proliferation of mobile devices enables more people to become involved using SMS and mobile browsing.

Well, below is a rough sketch (it's not too detailed, as my son is pushing me to watch an Indiana Jones puppet show he has created, but you'll get the idea).

Friday, October 10, 2008

Thoughts on the Cloud API

I have seen quite a few diagrams on cloud computing, but not something like the one below (please tell me if you know of more and I'll update!).

You see, what many leave out is that if you build your service on a third party service in the cloud and it closes down, you're screwed. Well, currently you are. You need to find a new provider, swap your data, change the API, modify your internal code. Which, in simple terms, means you're screwed.

Now, it's not such a big deal if Amazon is your service (although this should still be a consideration), but if the POINT of all this open web stuff and open source and APIs is to allow the little guy to provide services, then it IS a big deal. The little guy will just NOT get the trust of many businesses, and it is a shame. Once their ideas are bought by a bigger company, then they get exposure. But surely this is a shitty way to do things. Why not let the little guy in?

A good way for this to happen would be for the "cloud" (and in Scotland we have cloud computing every day of the year, with torrential rain computing thrown in) to support common (I hate saying standard!) APIs. OAuth is arguably the most successful example of that to date (with PoCo to come), but OpenID is also another good example (OAuth is not yet used seamlessly by many a big company).

But surely we need to look at common APIs for all services - or at least the service providers do. I want to use provider A today and provider B tomorrow. It should be a configuration change for the client (and a "swap provider" checkbox ;) ).

Internally, however, there should be a service provider data exchange API that allows swapping of data (and arguably hot swapping for backup - imagine service A and service B backing each other's data up whilst providing the same service!?). So when I ask to move, I can have all my data moved to the new provider (for a fee, of course). Here's that API:

OK, so it's a little more complex than that, but high level is a good place to start. I also like the idea of a higher-level cloud API that provides common services for all kinds of cloud services. I know there has been lots of work on various SOAP-style services at the W3C and so on (transactions, instrumentation etc) - I'm not sure how that thinking fits in with the cloud stuff, but I would love to hear/read about it.

One Web, Many Cultures - Reaching Out

I'm not entirely sure where this post is going, nor do I have any conclusions in mind. I have simply been thinking about this for some years and feel like writing it down. I'm not sure what tone it will have for the person reading, but it is intended to ask some questions rather than make any kind of statement! Let me first say that I am from Scotland and have lived in England, Italy, Canada and Chile in the last 15 years - I wrote books and used the web constantly in each.

I listened to Kevin Marks' interview with Jemima Kiss at the Guardian, and other than Kevin sounding incredibly similar to a friend of mine, the part that jumped out at me was his comment that many of the big social networks have done well out of users they just didn't expect. US technology companies growing "unexpectedly" from strong user bases in Russia, Brazil, China and so on - in other words, from users NOT in North America.

So, I wrote on Twitter :

"that was an interesting interview Kevin. Given the cultural luck many networks
have do you think we need to put more work in involving other cultures right
from the start rather than once the technology is ready?"

Kevin Marks of Google made the point to me on Twitter that :

"the technology is ready, and has been for a while. Finding people to
bridge app ideas to other cultures and language is still hard."

My wording wasn't great (140 characters isn't much!), so this is what I meant...

Much of the technology driving the web is coming from specific locations in the US - mostly California. There is some input from the rest of us through discussion groups and so on, but primarily it is driven offline. This is understandable on many points, due to the huge concentration of top people over there who are much like myself - driven and excited by the potential of the web as a platform that can fit into the daily life of everyone on the planet, in the many different ways people will use it. Also, at the moment, virtual working groups are still a work in progress.

Now, oftentimes I get frustrated that I'm not amongst it - I had hoped to be, some years back, starting with OpenID (every week on Twitter there is another cool event somewhere in the US where all the guys and gals get together to push these things forward). However, something that must be more frustrating is the lack of involvement other cultures in the rest of the world must feel - at least I speak (bad) English and can follow the forums and blog posts (and every so often post in the forums). What if you speak Chinese and want to use OAuth? Or Spanish? Say you want to add to it or suggest a change? There are a few involved, but this is very much the exception, and if you check the board membership for most of these initiatives you can see they are based in the US and often (but not always) part of U.S. web companies.

I think it is a problem waiting for a solution - the problem is how to integrate other languages and cultures so that the open web can be something that fits for everyone. If Orkut surprisingly managed to get a dominant user base in Brazil, was there something different in what it offered compared with other social networks which made this happen? Does anyone know? In Africa it is suggested the uptake of mobile phones will continue to be huge - how do these efforts fit into their plans for a social web?

There are many folks who do a lot of evangelizing for these efforts, so perhaps they can get to know active people who could get involved in the core specifications from Europe, Asia, Africa, South America and so on. It's worth remembering this IS one web, but it has many cultures, and what fits one may not fit all.

Thursday, October 9, 2008

Thoughts on Using Netflix’s New API

OK - I have read through Using Netflix’s New API: A step-by-step guide by Joseph Smarr and tried out a few of the things mentioned.

My only irritation with Netflix's otherwise excellent efforts is around the anomalies you can see in the API workflow.

I'm sure they have some good reasons for diverting from the "standard" process, and perhaps some visibility into these would be useful.

In addition, it may suggest some extension points are needed that could be added to OAuth, so that rather than hacking these things we could simply add some optional extensions (which are generic, such as "dynamic parameter" rather than "userid") that can be plugged into any OAuth library.

The fact this can even be done is excellent, but in using it, it would be nice to have a graphical pipeline for every site that shows where they may have used some of the optional filters during the OAuth process.

Wednesday, October 8, 2008

Copy and Paste between OpenIDs

In case you missed it, I've added a new feature that allows you to copy and paste data between OpenIDs.

More information can be found here.

Saturday, October 4, 2008

TESCO miss the point on the web and PR

Tonight we were expecting a home delivery from Tesco in Glasgow at 5 PM. At 7.55 PM, with the kids wondering where their milk was, we got a call saying it wouldn't be coming due to "staffing issues". I am furious for obvious reasons - none worse than the fact I now need to go shopping to get breakfast for the kids for tomorrow, and it is really late and pouring outside.

I am writing to them in response to an email I received, so I will keep you updated and see how things progress (or don't).


Mr. Livingstone here. I included Terry Leahy in this and will additionally blog what happens:

Are you taking the piss? We have a 2-year-old and a 5-year-old who were waiting on milk before bedtime because you were due to deliver before 7. You phone 1 HOUR late on a Saturday night to tell us this, asking if we can "come get it"!? You had from 5 PM at the very, very latest to know of any apparent staffing issues.

What is WORSE is this pathetic generic letter you have sent us - you never even took the time to send something personal. We never even had an opportunity to file any complaint, and "you have been dissatisfied" is an understatement.

I would have been irritated at a cancellation earlier, but calling 1 hour later than the latest it was supposed to be here, combined with a pathetic generic letter (I assume the typists also had the evening off due to the rain), means we will no longer bother doing business with you and will move to Asda or equivalent.

Additionally, I am a high-profile online UK blogger and Internet consultant, so you can bet a few people will hear about it, and I will be looking out for someone at Tesco who can look into this.
Now I have got to go out into the pissing rain at 8 PM on a Saturday night to get milk for tonight and the kids' breakfast for tomorrow morning.

For what it's worth - you don't need to concentrate on staff issues - you need to look at your PR... a phone call earlier in the day would have saved hassle for all of us.


> From:
> Subject: Tesco Online Vouchers
> Date: Sat, 4 Oct 2008 19:51:19 +0100
> Dear Mr Livingstone
> Thank you for notifying us of your complaint. We value our customers' feedback, which helps us to monitor and improve our services.
> We are sorry to hear that you have been dissatisfied with the service you recently received from Tesco. As an apology we would like to offer you a £10.00 eCoupon, with our compliments, to spend on your next grocery order. To use this eCoupon simply type in the code XXXXXXXXX when you reach Checkout. This eCoupon can be used from 4 Oct 2008 to 4 Jan 2009.
> If you have any questions, you can also e-mail us. Or you can contact our Customer Helpline on 0845 722 55 33 between 9am and 11pm Monday to Friday, 9am and 8pm Saturday and from 10am to 6pm on Sundays.
> Best wishes,
> -
> Tesco.Com Limited
> Company Number: 3942522
> Registered in England
> Registered Office: Tesco House, Delamare road, Cheshunt, Hertfordshire EN8 9SL

Thursday, October 2, 2008

Syndication in 2009

Had some thoughts on what could be done with feed aggregation sites and where it may all go. I decided to consider where we are, the problems we currently have, and the inevitable move towards syndication for all.

This is a picture of where we stand today - a very limited set of pre-defined feeds we can subscribe to (some, such as Twitter, are doing good work on improving this for their search).

So where are we going? Well, everyone needs to open up - and there are a whole load of advantages to doing so. Personally, I think the fact that we will be able to write complex queries is a major advantage - users are just never going to learn anything more than AND/OR, but we can improve their results by employing the advanced capabilities of search engine logic and APIs.

Additionally, we want to start using the Semantic Web, but how a user would work with Google Base or Freebase etc isn't entirely straightforward. It really comes down to either providing an interface they can build upon or writing the queries for them - perhaps in a templated form that substitutes in their specific query (a la OpenSearch).

Whatever method is used, people want RSS/Atom. Feeds just now are starting to get out of control, because even a reliable source has only a small percentage of data you really want. We now need to be able to write custom queries to request exactly what we want - to join, filter, deduplicate and so on.
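The kind of post-processing I mean - join several feeds, filter by a keyword, drop duplicates - is straightforward to sketch. The function and field names below are illustrative assumptions, not any real aggregator's API:

```python
def merge_feeds(*feeds, keyword=None):
    """Join feeds, optionally filter by keyword, de-duplicate on entry link."""
    seen = set()
    merged = []
    for feed in feeds:
        for entry in feed:
            if keyword and keyword.lower() not in entry["title"].lower():
                continue
            if entry["link"] in seen:  # same story syndicated twice
                continue
            seen.add(entry["link"])
            merged.append(entry)
    return merged

feed_a = [{"title": "OpenSearch news", "link": "http://example.invalid/1"},
          {"title": "Weather", "link": "http://example.invalid/2"}]
feed_b = [{"title": "OpenSearch news", "link": "http://example.invalid/1"},
          {"title": "More OpenSearch", "link": "http://example.invalid/3"}]

result = merge_feeds(feed_a, feed_b, keyword="opensearch")
```

De-duplicating on the entry link is a simplification; a real service would probably match on GUIDs or fuzzier signals.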

I fully expect that in the next year or so the advances that have already started in these areas will only gain momentum and make it a hell of a lot easier for us to get results.

Will we see an "RSS" button on each page of the Google Search results by the end of 2009?

Monday, September 29, 2008

Happy Belated Anniversary Google

I kinda wish I had spent more time on this the other day, but I was busy doing other stuff. Thanks to a link from DeWitt Clinton (who now works at Google) on Google jobs back in 1999, I played around with it for the first time in ages.

People may already have found this, but I had to blog anyway about some gems I found that blew me away.

1. The Google Beta Search Engine in 1998 - link

2. The "Google Friends" mailing list, which I suspect carried the first public message about Google. It was written by Larry Page himself on 28 April 1998 at 10.28 PM (so it must have been a late one!). I love message 5, "Google gets funding"!!

The "Group Info" suggests 1 post a month. I like that either Larry or Sergey categorized the group as "Culture & Lifestyle : Gender : Women, Girls, Mothers, Daughters".

I did actually send a "Happy Birthday" message to "" - it's nice to be nice. We made a big deal of my son's 5th birthday, and I reckon I see Google as much as him these days! It hasn't bounced, so maybe I'll get a reply ;)

3. This is the first public email (mentioned above) I am aware of by Larry Page himself about Google.

4. There is also the first email from Larry Page and another in July 1998 where he talks about the new features. For anyone stressing out over their servers, here is surely one of the historic paragraphs of the Internet:

Combine "our server" (some suggest there are now > 64,000 servers) and "try back in a minute or two". Come on guys - how did you get funding with that line ;) If this kind of thing doesn't inspire you as an entrepreneur, you're maybe in the wrong job!

Look around and let me know if you find other similar gems.

This reminds us these were two guys who were like every other entrepreneur at the start - they had no idea where they would be in 10 years.

Hey, they'll likely never read this, but Well Done - I consider myself inspired!!

I'm seriously starting to think about writing a book on this kind of thing. Amazing!

PS. I forgot to mention the Google Stickers... check this.

Thursday, September 25, 2008

The art of a search query

One thing I don't see much coverage of is how non-technical users get the power afforded by advanced querying. Something that drove me to create something like this was my experience with querying in a project which involved many hundreds of millions of distributed records.

In that project we had a team of over 20 experts writing specific queries that were semantically relevant to the area they were in, and the output of one search was actually a fairly complex backend query, most of the time consisting of the union of multiple backend queries, all reformatted for a specific output.

Today many sites - and emerging distributed query sites - are focused on simple queries, but this requires that you KNOW semantically what to look for and that you want to type it in every time. In addition, it assumes you know how to construct fairly complex (it's all relative) queries.

So, YES, we need all these cool sites that do the keyword searching. But we ALSO need something a bit higher level. We need to shield the user from the complexity of searching and also make it easier for them to remember the kinds of searches they have constructed or used before.

Wednesday, September 24, 2008

A syndication formatting cache

I'm really thinking about this stuff just now, so this note is as much use to me as anyone else.

We have a ton of sources all working with Atom/RSS formats but being semantically different, and in some cases extending the same concepts in different ways (e.g. Digg has its own namespace in its Atom feeds for authors).

Imagine a service that indexed and transformed these sources to normalized formats. You could then do XPath-style queries (the interface wouldn't be so complex, of course) on the RSS/Atom sources and not only get the data in a given element, but be semantically accurate about what you are getting.

In addition, extension namespaces could also be queried, so you could ask for media items from YouTube, Flickr, Mefeedia and so on and get an accurate result.
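The normalization idea can be sketched in a few lines: two feeds express the author differently (plain Atom versus a site-specific extension namespace, as Digg does), and the service maps both onto one field. The "digg" namespace URI and element names below are made up for illustration:

```python
import xml.etree.ElementTree as ET

NS = {"atom": "http://www.w3.org/2005/Atom",
      "digg": "http://example.invalid/digg-ns"}  # hypothetical namespace

def normalized_author(entry_xml):
    """Return the author name regardless of which vocabulary the feed used."""
    entry = ET.fromstring(entry_xml)
    # Try the site-specific extension first, then fall back to plain Atom.
    for path in ("digg:submitter/digg:username", "atom:author/atom:name"):
        node = entry.find(path, NS)
        if node is not None and node.text:
            return node.text
    return None

plain = """<entry xmlns="http://www.w3.org/2005/Atom">
  <author><name>alice</name></author>
</entry>"""
extended = """<entry xmlns="http://www.w3.org/2005/Atom"
                     xmlns:d="http://example.invalid/digg-ns">
  <d:submitter><d:username>bob</d:username></d:submitter>
</entry>"""
```

In practice each source would register its own mapping, which is exactly the per-feed XSLT headache described below.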

This service might even be useful if we were all using the same format, but at a time when joining feeds is near impossible when you are thinking about the user, it may be useful to have a service helping out.

I've already written a bunch of transforms to do this and, to be honest, had to write an XSLT for every single feed I got (I think Delicious was OK), so I know the headache as others look for more advanced syndication feeds!

Long tail of atom extended formats

I picked up on a post by Andrew Turner on the OGC Geospatial Search Summit:
“Of course, a format can expand upon this and offer more complex formats that
conform to more complex specs. But by at least providing a common baseline means
that almost any service can easily interconnect with another service.”

I can see why we need to stop at some point! The issue is that in the long tail these extended formats are quite prevalent, and I'd like to see extended communities supporting people who want to extend. The reason I say that is that even in the rich media space I have numerous XSLTs, function calls and so on to normalize what is essentially the same data. GeoRSS is an example of a specific community that does it well!

My thinking is that if (at the extreme) two companies in the world extended for a very specific topic, we could at least get some normalized view of the data for everyone as a response from an OpenSearch query.

Opening up advanced search syndication

[update: 24 Sept @ 12:07 PM : Google Chrome also supports OpenSearch]

[update: 24 Sept @ 11:45 AM : Through the OpenSearch group, I see MS IE 8 Beta supports OpenSearch so perhaps even more sites will realize more complex searches are needed]

As you may well now be aware, I just launched an Alpha release of and what I tried to do there is (in many cases) write intelligent query sets against sites that provide the results as RSS or Atom feeds. So rather than just pulling in every feed we can find, we actually create a query such as "Europe and technology" and so on. It really isn't easy and requires a lot more work than is visible up front. Here are the five issues:

1. Search, no feed
2. Query syntax
3. RESTful
4. The commercial clause
5. Semantics of response formats

I will provide one example in each section, but there are many others I have come across.

Search, no feed
In many cases you can get RSS or Atom feeds from static pages, but as soon as it comes to searching and gathering the results as a feed, you're in trouble. One example is

I can do a bunch of querying to get certain feeds but as soon as I want to search for something such as "languages and Glasgow" I'm out of luck. In short, you get exactly what you want with most search queries on some of these excellent sites, but they only work when you are ON the site.

This misses the opportunity of syndicating the results to third parties to allow them to point at YOUR site. You end up with having to use the limited feeds available and most of the time this isn’t much use to anyone – especially in the era of content overload and the increasing importance of providing the user with what they want.

Ensure all the results from your search query can be syndicated as RSS or Atom. The extra queries against your server will be balanced against higher profile or extra hits on your site.

Query Syntax
In short, query syntax is all over the place. In some cases you can only search for one term and in other cases you can’t use AND or OR. There is a real lack of support for doing interesting things – if you really want to customize the feeds, in most cases you are seriously limited by what you can achieve. One example is

If I want to search for Tech events in Glasgow or Edinburgh I really need to do 4 separate queries – "tech glasgow", "technology Glasgow", "technology Edinburgh" and "tech Edinburgh". This is a waste of resources all round for something that is relatively simple to achieve.

In some cases typing "or" doesn’t mean the same as "OR" and in others typing "and" gets interpreted as part of the querying rather than ANDing the terms.

If we are not going to use then the site needs to at least provide a powerful search interface – perhaps more powerful than any individual is going to use, but in cases such as we are actively creating powerful queries against your backend database – saving everyone resources!
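The frustrating part is how cheap this is to work around on the client - which is exactly why the server should do it. A quick sketch (all names mine) of expanding a grouped query into the individual single-term queries these sites currently force you to run:

```javascript
// Expand [["tech","technology"], ["glasgow","edinburgh"]] into every
// single-term combination - i.e. the 4 separate queries a site with no
// AND/OR support makes you issue one at a time.
function expandQuery(groups) {
  return groups.reduce(function (acc, group) {
    var next = [];
    acc.forEach(function (prefix) {
      group.forEach(function (term) {
        next.push(prefix ? prefix + " " + term : term);
      });
    });
    return next;
  }, [""]);
}
```

So `expandQuery([["tech", "technology"], ["glasgow", "edinburgh"]])` gives the four queries from the example above - a waste of four round trips for something one boolean query could answer.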

Give us a REST
For the majority of queries, a RESTful URL should be enough to get the results. Granted many sites already support this but there are others that provide access only through a POST-based XML API. This is good for more advanced queries but sometimes you want to write something simple that returns results in a given format, without passing an extended collection of parameters.


If I can’t type (something like):

… then you really need to think about adding this functionality.
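For illustration only - the host and parameter names below are invented - the whole ask is just a predictable GET URL built from a couple of parameters:

```javascript
// Build a simple RESTful search URL from a parameter object.
// The base URL and parameter names here are hypothetical.
function searchUrl(base, params) {
  var qs = Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + "=" + encodeURIComponent(params[k]);
    })
    .join("&");
  return base + "?" + qs;
}

// searchUrl("http://example.com/search", { q: "languages glasgow", format: "atom" })
// -> "http://example.com/search?q=languages%20glasgow&format=atom"
```

That's it - no envelope, no POST body, nothing you can't paste into a browser address bar.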

The commercial clause
Now this is one I simply DO NOT GET. Many sites say the data can’t be used in a commercial context, without really defining what that means. I have some sympathy for this when it is data you have collated and published – such as a postcode search facility or something... syndicating that would render visiting your site close to pointless.

However, when it comes to user generated content I just cannot understand it. Many sites allow searching and feeds, and then insert a clause saying you cannot syndicate the data without a license. The point of these feeds however is to provide a "teaser" to bring people TO your site… so even in a commercial context, surely allowing your feeds to be displayed can ONLY work for you. But it's got "commercial" all over it (it also doesn't have RESTful URL access to feeds). So do we write YET ANOTHER MyStrands, or do they just provide intelligent syndicated search feeds we can all use to drive business to our site and theirs?

Remove this kind of clause.

Semantics of response formats
This particular part almost drove me to distraction. We now have RSS and Atom as the key formats of feeds – sure there are some variations in versions but we are pretty close to two general formats.

So where is the problem with this? Well, the problem is twofold. The first is different interpretation of what goes into each field and the other is the extensions used within the feeds and the variation on how these are semantically interpreted.

The first of these is particularly an issue with "content" and "summary". Some people put in a short description, others put in formatted html. Some don't put a summary at all and only add content, so you need to parse this somehow if you want to display some kind of a summary.

In addition to this you may find some sites (such as FriendFeed) provide much of the information that should be in atom fields (such as the author) embedded within the content so you would need to parse that to give any kind of standard view.

Now, the extensions are altogether more of an issue. Just try combining some of the feeds using the xmlns:media ( ) namespace. Sometimes the link is in the atom elements, other times it's in the media player element; sometimes the author is in the media credit and other times it's in the atom author field.

You need to parse some of these to death just to get a standard output – in fields where the output should really be a specific extension of the core RSS or Atom specifications. This is a nightmare when applied to video, photos, music and so on and makes intelligent search and syndication very difficult.

Call to action
We really need to change some of this. It's not like we need any scientific breakthrough to make this work – we just need to come to some kind of agreement on the points I outlined above – all the difficult technical stuff has already been done. It doesn't require ripping out software – just extending it.

If you provide an Atom feed you may not want to change that but adding a version parameter in an API is easy. That way you can provide the "new" improved feed. Really the best option is to look at but there are any number of options I would happily accept.

We also need to generally improve search and syndication and realize it is not something that takes people away from your site, but rather drives them to your site. The better your search API, the easier it will be for sites like to integrate your feed with specialist and context-sensitive queries. Users will like that and they will come to you through all sorts of gateways!

Please take away the commercial stuff, or at least tone it down. In most cases people will come to your site if they see a teaser of what they want no matter what site they are on.

Thanks for reading. This was based on my experiences creating the weblivz website. I’d love to hear feedback – good or bad. If you have pointers or want to point out anything right/wrong or additions, please suggest and I will update the article.

Wednesday, September 10, 2008

Gazopa - it's fun too

While Gazopa looked useful and i tried a few searches which i had mixed results with, what REALLY caught my imagination was its ability to allow YOU to draw and look for similar results.

Now, you need to remember I have the artistic ability of a drunk, one-pawed fox wearing a blindfold, so it really was a challenge.

Well what do you know, it actually brought something back for my, erm, "spider" - and it wasn't too bad - mainly coz it DID actually show a spider (yes, i was as shocked as you are reading this).

Look at that last result on the right - a frickin spider. And the others were not too bad - but 20 points for anyone who can tell me why a LAMP is in there?!

So, being excited i decided to try searching for a more creative drawing. I thought, who will i search for that might come back in their database? Well, @scobleizer came to mind and i've watched enough of his videos to think i could have a good attempt at a portrait. As my guide i used a fairly unusual picture of him (at least i'd imagine it's unusual)...

So below you can see my, erm, portrait (1000 apologies) and the results (using a facial search coz the results without it were just ridiculous).

It's perhaps fair to say there is still some work to do when it gets a little more advanced - but that's ok, coz it was the first thing ever to recognize something i drew!

Friday, September 5, 2008

Javascript object instance methods

Some other styles i've seen in the creation of instance methods on Javascript objects are as follows:

// assumes a namespace object already exists:
var NS = {};

// style 1: a constructor with methods on the prototype
NS.test = function() {};
NS.test.prototype.doSomething = function() {};

// style 2: a singleton - "new" on an anonymous function gives one instance
NS.test2 = new function() {
    this.doSomething = function() {};
};

// style 3: build the type inside an anonymous function, then expose it
(function() {
    var test3 = function() {};

    test3.prototype = {
        doSomething : function() {}
    };

    // a "static" method - on the constructor itself, not the prototype
    test3.doSomethingStatic = function() {};

    NS.test3 = test3;
})();

var test1 = new NS.test();
test1.doSomething();

NS.test2.doSomething();

var test3 = new NS.test3();
test3.doSomething();
NS.test3.doSomethingStatic();


Variable scope in Javascript

I'm from an old skool of JavaScript where it was a nice add-on and everything was done on the server. Sure, i've used all the JQuery, Ajaxy, Prototypey frameworks out there but i never went back to look at the core of the language - well at the moment i'm working on something and writing a lot of script.

So i learned something new. A lot of frameworks were using the following syntax in their libraries..

(function() {
    // code in here runs immediately, but "var" declarations stay local
})();

It's fair to say i was going mad trying to figure out why. I mean you put an alert in there and it pops up when the script loads. So why bother with the function? Why not just write your code as we used to - directly in the page?

Well, thanks to this paper, I have discovered why and this note is as much for my future reading as yours :)

Turns out that you can locally scope variables within these anonymous javascript functions which more importantly means you don't screw up global vars that are being used elsewhere. So you can confidently write the following ...
var person = 'steven';

(function() {
    var person = 'xavier';
    alert("Person 1 " + person);
})();

alert("Person 2 " + person);

... and you will get "Person 1 xavier" and "Person 2 steven" - notably the global variable "person" was redeclared within the scope of the anonymous function and the name changed, but as it was declared with "var" it does not overwrite the global value. If you did NOT use "var" in the function, you would change the value globally. So it works like every other programming language - just a wee bit different in terms of syntax :)
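And to see the flip side, drop the "var" and you write straight through to the outer variable - a minimal sketch:

```javascript
var person = "steven";

(function() {
    // no "var" here, so this assignment walks up the scope chain
    // and overwrites the outer "person" rather than creating a local one
    person = "xavier";
})();

// person is now "xavier" everywhere, not just inside the function
```
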

Wednesday, September 3, 2008

V8, Chrome and the DOM

V8 is the Javascript engine for Chrome, the new Google web browser. It is very fast.

V8 does not provide DOM or XML DOM support. This was confirmed to me by a member of the team at the V8 google group.

This leaves me with a question i hope someone can answer.

Most of the scripts i write either use the DOM (getElementById) or the Xml Dom (childNodes[0].nodeValue) and so on. That is a LARGE part of my scripts.

So if the engine does not provide support for this, what happens to my scripts? Do they go really fast until they need to query the DOM and then slow down? The net effect in my case could be a lot of slowing down. In my experience, querying the DOM is the slowest part of your scripts (unless you write non-typical scripts).

Haven't had time (and won't today) to run tests so if you know the answer please let me know.

Tuesday, September 2, 2008

"Google Suggests ..." - don't use Chrome :)

I had to put this one up. I was in Chrome and tried to change my pic in the Groups and here's what i got (changing my pic works fine in friendfeed, for example).

my first chrome weird thing

This may just be chance, but the only site i can't get to is - which Chrome runs on top of :)

Not only that, it thinks it is "null" (other sites that can't actually be found give a DNS error).

Anyone else get to ?

Chrome Auto-complete Ideas

Whilst some of what i have seen from Chrome is what i would expect of a browser of this generation (much of it is in IE 8 too), there are some things such as the V8 Javascript engine which look to be very cool. I am only halfway through the comic book, but wanted to write a post on auto-complete.

Google say Chrome will mean you don't need to bookmark pages any more and will provide an improved auto-complete. Well, here's my thoughts...

"Auto-complete" should be much more expansive. I rarely remember URLs and even things i have found and bookmarked can be a pain to find again. In fact i tend to bookmark things and never look at them again. So i may be searching for something i have already found. Auto-complete needs to add techniques to discover information you want to keep track of. Tags in a delicious style are an obvious way. Add to that some kind of personal strength and it may be very useful. So i type in "Javascript Cross Domain problem" and get a view of a set of links i previously found useful - coz let's face it, you'll end up going to google, and in the process of a single search modify the query, view certain pages, like some, discard others and repeat. In that *session* I should be able to get back to all the pages i found useful. I don't mean caching the pages, i mean the semantic link to those pages.

Can you imagine a little package of links you found useful related to a search concept that you can reuse in the future? I would LOVE this. So where Chrome shows a new tab with the top links in it, the auto-complete concept would extend to the search package i discussed above.

ps. stuff such as gears, prototype and so on should be part of the environment itself... maybe that is later in the comic :)

pps. Is it true that the Joker appears in chapter 5??

Friday, August 29, 2008

Friendfeed custom display

A while back i changed my homepage to be a custom view of my friendfeed activity so you could see what i was doing all day long. The only thing i don't like is that it puts my most recent items up which is great, but it means i have lots of twitter posts as i'm very active in there. You don't see my most recent images, videos and so on. I should have posted this ages ago, but i didn't, so apologies.

Here you go if you find it useful...

1. Put in the head element ...


function ParseFriend(feedid)
{
    // hide the "loading ..." placeholder
    var prefeed = document.getElementById("prefriendfeed" + feedid); = 'none';

    // show the feed block now it has loaded (it starts as display:none)
    var feeditem = document.getElementById("feed" + feedid); = '';

    var divs = feeditem.getElementsByTagName("div");
    for (var i = 0; i < divs.length; i++)
    {
        if (divs[i].className == "header") divs[i].style.display = 'none';

        if (divs[i].className == "friendfeed")
        {
            var subdivs = divs[i].getElementsByTagName("div");
            for (var j = 0; j < subdivs.length; j++)
            {
                if (subdivs[j].className == "feed") subdivs[j].style.display = '';
            }
        }
    }
}


2. Place on your page and customize ...

<div id="friendfeed" style="float:left;width:600px;">
<a href=""><img src="images/friendfeed.gif" style="vertical-align:top;" border="0" /></a>
<a href="#feed101">location</a>
| <a href="#feed1">blog</a>
| <a href="#feed2">twitter</a>
| <a href="#feed6">favourites</a>
| <a href="#feed5">videos</a>
| <a href="#feed3">photos</a>
| <a href="#feed4">music</a>
| <a href="#feed7">books</a>
<br />
<div name="feed101" id="feed101" style="float:left;margin-left:20px;"><img src="images/plazes.jpg" border="0" alt="" /> </div><div style="margin-left:40px;"><object type="application/x-shockwave-flash" data="" width="316" height="146"><param name="movie" value="" /><param name="allowScriptAccess" value="sameDomain" /><param name="swLiveConnect" value="true" /><param name="wmode" value="transparent" /><param name="FlashVars" value="key=fcffc504b834fcd539f4310515a31d5e&amp;dark=7cd9f7&amp;light=ff9900&amp;text=000000&amp;link=ffffff" /><p><strong><a class="external" href="">Download Flash plugin</a></strong></p></object></div>
<div id="prefriendfeed2" style="float:left;width:600px;"><img src="images/loading.gif" style="vertical-align:middle;" /> loading livzstream tweets ...</div>
<div name="feed2" id="feed2" style="display:none;"><script type="text/javascript" src=""></script></div>
<div id="prefriendfeed1" style="float:left;width:600px;"><img src="images/loading.gif" style="vertical-align:middle;" /> loading livzstream blogs ...</div>
<div name="feed1" id="feed1" style="display:none;"><script type="text/javascript" src=""></script></div>
<div id="prefriendfeed6" style="float:left;width:600px;"><img src="images/loading.gif" style="vertical-align:middle;" /> loading livzstream favourites ...</div>
<div name="feed6" id="feed6" style="display:none;"><script type="text/javascript" src=""></script></div>
<div id="prefriendfeed3" style="float:left;width:600px;"><img src="images/loading.gif" style="vertical-align:middle;" /> loading livzstream photos ...</div>
<div name="feed3" id="feed3" style="display:none;"><script type="text/javascript" src=""></script></div>
<div id="prefriendfeed5" style="float:left;width:600px;"><img src="images/loading.gif" style="vertical-align:middle;" /> loading livzstream videos ...</div>
<div name="feed5" id="feed5" style="display:none;"><script type="text/javascript" src=""></script><script type="text/javascript" src=""></script></div>
<div id="prefriendfeed4" style="float:left;width:600px;"><img src="images/loading.gif" style="vertical-align:middle;" /> loading livzstream music ...</div>
<div name="feed4" id="feed4" style="display:none;"><script type="text/javascript" src=""></script></div>
<div id="prefriendfeed7" style="float:left;width:600px;"><img src="images/loading.gif" style="vertical-align:middle;" /> loading livzstream books ...</div>
<div name="feed7" id="feed7" style="display:none;"><script type="text/javascript" src=""></script></div>

3. Put at the end of your web page the following ...


Saturday, August 23, 2008

Make your API's easy to play with

I'm looking at advertising APIs and i've got a headache. Got a nice list from here, but ALL i want to do right now is test them out. I'm sitting here with my code calling some Amazon Web Services (excellent) and other simpler services but the Ad one is driving me nuts. Google's was way complex and involved signing up all over the place. Yahoo required me to be processed. Again, Microsoft's looks fairly involved. (Update) Didn't realize that i need to use AdSense and not AdWords to actually get suggestions! Never saw any pointers at the top of the AdWords site. Anyway AdSense is a little better but it seems to return only HTML which kind of defeats the purpose of an API. I want to embed it my own way!

I want to pass some data and get some ads - maybe do something more in the future. More importantly i just want to test this just now. I don't want to commit to anything but things need to be simpler. I don't wanna spend hours registering, being verified and then coding against something far more complex than i need. I don't mind the complexity, but when i can't immediately see (and play with) what i need to do then, to me, it's complex.

Please - all of you - provide a sandbox for hacking. Make it open, with an API key that's easy to get and allows X number of requests. Make the API simple - most could be RESTful GETs for that. Make the response simple, but customizable.
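To show what i mean by "easy to get, allows X requests" - a toy sketch (entirely hypothetical, no real API here) of a sandbox key that simply runs out:

```javascript
// Hypothetical sandbox: hand out keys that allow a fixed number of requests,
// then politely refuse - enough for hacking, no commitment required.
function makeSandboxKey(limit) {
  var used = 0;
  return function request(query) {
    if (used >= limit) {
      return { error: "quota exceeded - sign up for a full key" };
    }
    used++;
    return { query: query, remaining: limit - used };
  };
}
```

A developer can start playing within a minute, and the provider keeps a hard cap on load - everybody wins.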

I know from personal experience these things get complex quickly, but we need APIs we can quickly run with.

Thursday, August 21, 2008

Needz twitter

As an experiment i've created an account on Twitter where you can add and request things you need, such as positions to be filled, a job, directions and so on.

If you need something, send a message to @needz at

Tuesday, August 19, 2008

Reading from Freebase using C#

Update : Seems Freebase no longer requires the cookie authentication - comment out that line and it works fine. Cool.

Freebase is one of my favourite things EVER on the web.

I'm working on a project just now and need to read from FreeBase. Alas I couldn't find any C# examples (if you know of any please let me know). As FreeBase is in early stages it required Cookie authentication - they say they will open up in the future.

Anyway, here is the code that does the same thing as the Perl, PHP and JavaScript code on this Freebase page. Enjoy :)

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Security.Authentication;

namespace FreebaseSample
{
    public class FreeFind
    {
        private static string server = "";
        private static string queryurl = "/api/service/mqlread";
        private static string path = "/api/account/login";
        private static string username = "";
        private static string password = "";
        private static CookieContainer authCookies = null;

        public void Ask(string band)
        {
            // This is our MQL query:
            //   find a band with the specified name,
            //   ask about its albums (names and release dates),
            //   ordered by release date.
            string query = "{\"type\": \"/music/artist\",\"name\": \"" + band + "\",\"album\": [{\"name\":null,\"sort\": \"release_date\", \"release_date\":null}]}";

            string envelope = "{\"a0\":{\"query\":" + query + "}}";
            string url = server + queryurl + "?queries=" + System.Web.HttpUtility.UrlEncode(envelope);

            HttpWebRequest request = (HttpWebRequest)System.Net.WebRequest.Create(url);
            request.Method = "GET";
            request.ContentType = "application/x-www-form-urlencoded";
            request.CookieContainer = authCookies;

            HttpWebResponse response = (HttpWebResponse)request.GetResponse();
            StreamReader reader = new StreamReader(response.GetResponseStream());
            Console.WriteLine(reader.ReadToEnd());
        }

        /// <summary>
        /// Authentication - logs in and stores the returned cookies.
        /// </summary>
        public void Authenticate()
        {
            string authURI = server + path;

            // Create the web request body:
            string body = string.Format("username={0}&password={1}", username, password);
            byte[] bytes = Encoding.UTF8.GetBytes( body );

            // Create the web request:
            HttpWebRequest request = (HttpWebRequest)System.Net.WebRequest.Create( authURI );
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.CookieContainer = new CookieContainer();
            request.ContentLength = bytes.Length;

            // Write the web request content stream:
            using ( Stream stream = request.GetRequestStream() )
            {
                stream.Write( bytes, 0, bytes.Length );
            }

            // Get the response & store the authentication cookies:
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();

            if ( response.Cookies.Count < 2 )
                throw new AuthenticationException( "Login failed. Is the login / password correct?" );

            authCookies = new CookieContainer();
            foreach ( Cookie myCookie in response.Cookies )
                authCookies.Add( myCookie );
        }
    }
}

You can call it as so ...

using System;

namespace FreebaseSample
{
    class Program
    {
        static void Main(string[] args)
        {
            FreeFind f = new FreeFind();
            f.Authenticate();
            f.Ask("U2");
        }
    }
}

You get something like the following JSON response ...

{
  "status": "200 OK",
  "a0": {
    "code": "/api/status/ok",
    "result": {
      "album": [
        "Another Day",
        "11 O'Clock Tick Tock",
        "A Day Without Me",
        "U2 Popmart - Miss Sarajevo (disc 2)",
        "Zoo TV Tour (disc 2: Live Transmission)",
        "The Complete U2",
        "Wide Awake in America"
      ],
      "type": "/music/artist",
      "name": "U2"
    }
  },
  "code": "/api/status/ok"
}

Saturday, August 2, 2008

Large Hadron Collider Nightmare scenario

The pictures of the Large Hadron Collider look amazing. Other than pulling us all into a black hole, surely the scientists who wrote the software dread the following ....

c:\>run large hadron collider.exe

Error at line 4. The 'Higgs Boson' is not a recognized particle. Please try again.

Friday, July 18, 2008


Son went to see Wall-E today. Complete waste of time ...

All starts off very nice in a research lab, but soon Microsoft come in and buy him, henceforth known as Wall-XE, to support their "A robot in every home" initiative (IBM missed out, pronouncing that there will "only be need for one robot in the world"). Then a bunch of other folks create an Open Source version (Wall-FREE) but no-one is around to support it and there end up being 800 versions of it. Google release their own "Wall-G beta", funding it through their new advertising platform Robo-Words (which no-one ever reads of course and so it goes down the pan… but despite being perpetually in beta it DOES read your Gmail and integrate with every appliance in your kitchen). It finishes by being released on BitTorrent and now you just download and install the software to have your own.

Sorry for spoiling it for everyone.

* Please note i haven't seen it - my son is watching it right now... and it looks really good!

Friday, June 27, 2008

Powerset does, FAST is

I haven't used Powerset much. I have used FAST a lot. I used their API fairly extensively. I could write a book about that (something i have considered - see my "previous life"), but i won't go into all that just now.

What i DID like when i used Powerset was the "factz" (in itself a slightly ironic spelling). You see FAST is all centred around nouns - people, places, things, authors, books and so on. This works pretty well. (So for example) You search for Steven Livingstone and you get my books, my friends, my bookmarks, my relations and so on - nouns related to me - in facets.

However, what i liked about Powerset was the fact they use verbs in the factz. So again i search for Steven Livingstone (someone has to) and i maybe get "writes", "likes", "bookmarks" and so on - well, something to that effect.

Unfortunately i'm not entirely sure how to get these factz appearing (you seem to need to do the search and then select something and even then it isn't always verbs), but i DO like the idea of making a search and combining the doing with the object.

"Steven Livingstone writes MSXML" and so on. One part powerset, one part fast.

I'm still looking into Powerset so things may become more obvious to me.


Got my haircut - what's best?

Livz in the past

Livz in the present

10 startups to watch

I read the list over at Technology Review and here's my take:
Nice idea - great idea in fact. Got an account but the software isn't available outside the US and Canada, though the number is (i would have thought it would have been the reverse!). As I am hopeless at numbers and bearing in mind that when you are ON the mobile phone it's impossible to look up someone's number (you tried it?) - i am going to wait until i can use the software.

I tried their Outlook add-in, but it never seemed to do anything. Needs work.
I just keep wondering why it took so long for Comvu to really push their offering.
Another good idea which could be a platform for hyperlocal citizen journalism anywhere. Probably needs a bit of branding but should be interesting to follow.

Anything that helps me remember stuff is good - i like the idea but the interface was a little confusing: when i dialled the number it asked for a pin - i gave it my mobile number and added something, but it then said "thanks for the comment for our support team". It will no doubt improve.

Tells you who is influential in the world. No idea how good it is. No signup i could see or way of registering interest, which was slightly ironic given its remit.

I love anything semantic (but i bet it becomes the most overused word after Web 2.0 soon!). Advertising sure needs to get smarter and take in context. I'm not an advertiser so not sure how useful it is, but their list of publications looks a good read!

Could be huge.

Now this is interesting. It's not a million miles from something i am looking at just now (maybe a few thousand tho') and managed API's are important for enterprises. I'll be keeping my eye on these guys. Very interesting to see how mashups take a step inside the firewall!!

Great list. The other companies looked promising too, but these are the ones that i could play around with and are closest to what i do.

Wednesday, June 25, 2008

Live Poll - Will Germany or Turkey win?

Take a look at this:

RESTful WCF Web Services

I spent some time looking at how you could remove the ".svc" from WCF web services. Looking at it last year i got quite close, but just not close enough. Well, i'm back writing WCF services again to expose JSON for my REST API, and this time i was determined to spend time looking for a solution. I found some people who did a lot of great work but the solutions were all missing the extra piece i really needed: an extensible method of re-writing any kind of URL.

The solution was to combine the ideas here and here using a mixture of IHttpModule and regular expressions to provide a solution allowing any URL to be mapped in the configuration file. The code is below and is made up of RestModule.cs and some rules in the web.config.

This code will actually work for any URL you wish to make RESTful but my target was WCF.


using System;
using System.Configuration;
using System.Web;
using System.Xml;
using System.Text.RegularExpressions;

public class RestModule : IHttpModule
{
    public void Dispose()
    { }

    public void Init(HttpApplication app)
    {
        app.BeginRequest += delegate
        {
            HttpContext ctx = HttpContext.Current;
            string path = ctx.Request.AppRelativeCurrentExecutionFilePath;

            if (ctx.Request.Url.Query.Length > 0) path = path + ctx.Request.Url.Query;
            string zSubst = ReWriter.Process(path);

            // did we have a new path match?
            if (!zSubst.Equals(path))
            {
                int i = zSubst.IndexOf('/', 2);
                if (i > 0)
                {
                    string svc = zSubst.Substring(0, i);
                    string rest = zSubst.IndexOf('?') > -1 ? zSubst.Substring(i, zSubst.Length - i).Split('?')[0] : zSubst.Substring(i, zSubst.Length - i);
                    string qs = zSubst.IndexOf('?') > -1 ? zSubst.Split('?')[1] : null;

                    ctx.RewritePath(svc, rest, qs, false);
                }
            }
        };
    }
}

public class ReWriter : IConfigurationSectionHandler
{
    protected XmlNode _oRules = null;

    public string GetSubstitution(string zPath)
    {
        foreach (XmlNode oNode in _oRules.SelectNodes("rule"))
        {
            Regex oReg = new Regex(oNode.SelectSingleNode("url/text()").Value, RegexOptions.IgnoreCase);
            Match oMatch = oReg.Match(zPath);

            if (oMatch.Success)
                return oReg.Replace(zPath, oNode.SelectSingleNode("rewrite/text()").Value);
        }

        return zPath;
    }

    public static string Process(string requestpath)
    {
        ReWriter oRewriter = (ReWriter)System.Configuration.ConfigurationManager.GetSection("system.web/urlrewrites");
        return oRewriter.GetSubstitution(requestpath);
    }

    #region Implementation of IConfigurationSectionHandler
    public object Create(object parent, object configContext, XmlNode section)
    {
        _oRules = section;

        // TODO: compile all the regular expressions up front
        return this;
    }
    #endregion
}

Add a new section group to your web.config ...

<sectionGroup name="system.web">
  <section name="urlrewrites" type="ReWriter"/>
</sectionGroup>

Finally add some rules in the web.config in the system.web section ...


Now you can have ...{"Name":"Starbucks%20Manhattan"}

or ....

Neat heh?