Wednesday, December 12, 2007

Interop with Membership Provider

I have a situation where I have been using the Membership provider and wish to migrate to a custom format for the re-work I am doing on OpenID. There is no great reason for not using their structure other than that it has a bunch of stuff I really don't need and won't be implementing.

My issue was that the user passwords are SHA-1 hashed, and I wanted to simply migrate that data and provide the same implementation so that I don't have any hassle checking passwords or creating new ones.

The code below will do that (except for encrypted passwords).

using System;
using System.Text;
using System.Security.Cryptography;

public enum PasswordFormats
{
    Clear,
    Hashed
}

public static class MembershipManager
{
    // Compares a clear-text password against a stored Membership hash.
    public static bool IsMatch(string pass, string salt, string encodedPassword)
    {
        string newEncoded = EncodePassword(pass, salt);
        return newEncoded.Equals(encodedPassword);
    }

    public static string EncodePassword(string pass, string salt)
    {
        return EncodePassword(pass, PasswordFormats.Hashed, salt);
    }

    // Mirrors the default SqlMembershipProvider hashed format:
    // Base64(SHA1(salt bytes + UTF-16 password bytes)).
    public static string EncodePassword(string pass, PasswordFormats format, string salt)
    {
        if (format == PasswordFormats.Clear)
            return pass;

        byte[] bIn = Encoding.Unicode.GetBytes(pass);
        byte[] bSalt = Convert.FromBase64String(salt);
        byte[] bAll = new byte[bSalt.Length + bIn.Length];

        // Salt first, then the password bytes - the same order the provider uses.
        Buffer.BlockCopy(bSalt, 0, bAll, 0, bSalt.Length);
        Buffer.BlockCopy(bIn, 0, bAll, bSalt.Length, bIn.Length);

        HashAlgorithm s = HashAlgorithm.Create("SHA1");
        byte[] bRet = s.ComputeHash(bAll);

        return Convert.ToBase64String(bRet);
    }

    // The provider stores a 16-byte random salt, Base64 encoded.
    public static string GenerateSalt()
    {
        byte[] buf = new byte[16];
        (new RNGCryptoServiceProvider()).GetBytes(buf);
        return Convert.ToBase64String(buf);
    }
}
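
As a quick sanity check, usage looks something like this (the password here is purely illustrative - in a real migration the salt and encoded password for each user come from the existing Membership store):

using System;

class Program
{
    static void Main()
    {
        // Illustrative values only - when migrating, read the salt and encoded
        // password from the existing aspnet_Membership data instead.
        string salt = MembershipManager.GenerateSalt();
        string stored = MembershipManager.EncodePassword("p@ssw0rd!", salt);

        Console.WriteLine(MembershipManager.IsMatch("p@ssw0rd!", salt, stored)); // True
        Console.WriteLine(MembershipManager.IsMatch("wrong", salt, stored));     // False
    }
}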

Tuesday, December 11, 2007

The dead web - Google + Archive.org?

I originally posted this on my old blog a while back (actually that itself was a re-hash of the same idea I wrote up back in 2002, so it has been on my mind for a while!), and I'm moving it to this new blog because of Dave Winer's recent post. He is spot on!

====

This was something I posted on an old blog some years back, but recent discussions have made me re-post it just in case there is some new opinion!

Over the last 2 months I have been conducting research almost exclusively on the web.
What has really become obvious to me is the amount of dead material out there.

From web pages that contain out-of-date information, to whole sites that stopped running years ago with no indication, to projects that seem to have been in flux for years, businesses that stopped trading years back and left their site up, and even stuff written by people I was almost ready to email, only to find they passed away a couple of years back.

So is a new web needed to get us out of this? Can we see Google work with Archive.org and create a diary of the web? A time-aware, searchable web which allows some kind of time scale on the information out there, without requiring everyone to annotate their documents! Could I say "Only search content added/updated in the last year"?

I hope so, because frankly it's getting ridiculous. 10 years ago I did some research on solitons for a Physics paper I wrote. Today some of that material returns seemingly as relevant as ever despite things continuing to evolve over the last decade. My File Exists article on 15 Seconds at http://www.15seconds.com/issue/990401.htm is now over 5 years old, but still comes 8th in Google when I type "FileExists".

I don't know how many replies I have had indicating some academic moved on 3 years ago, or some project's research was finished, or even links to other sites that closed their doors, re-organized or just changed their content to make it completely useless.

Could a hyped-up Archive.org challenge something like Google? I think so. Could we "Diff The Web" to make the content more relevant - noting that getting Dublin Core on everything is highly unlikely?

Anyone got answers?

Saturday, December 8, 2007

Feelz - search with context

Today sees the launch of an experimental idea I've created called Feelz at http://feelz.org

The idea is to allow you to label things on the web with emotions, audience, characteristics and so on - things that aren't easily understood by computers. It will be particularly useful for audio, video and products, but through concepts such as "audience" it can easily extend to the web in general.

I'm really interested in feedback about this and what direction you think it should or should not take.

You can find the blog at http://feelz.wordpress.com

Monday, December 3, 2007

How far does collaboration stretch?

A few years back I worked at a large government enterprise that attempted to implement a knowledge system that expected users to enter titles, descriptions, hierarchical categories, privacy and so on ...

.. it didn't work very well. The argument I heard was that too much was expected of the users who had to enter all this data - they had too many other things to do.

And THEN (roughly speaking) we moved into tagging (delicious kicking this off) and so on - really the first phase of proper online collaboration. This became popular, with Tag Clouds (all Scottish ones had rain as a background, of course) and the like being implemented by every single web page on the planet.

Of course the last 2 years or so have seen a movement beyond this to real social collaboration - much of it to this point about ourselves (save specifics such as blogging and Wikipedia, which involve a very small proportion of the online community). The interesting thing (and I have noticed this mainly through non-tech family and friends) is that everyone is using LinkedIn, Geni, Facebook and so on. And they all require a LOT of work.

Furthermore, sites such as Digg make it very easy for users to make a statement - pretty much a click - and Google seems to be looking at adding similar features.

Here is my question though. How far will this go? We see Mahalo looking at something akin to where Yahoo started out many years ago - human-managed information... although now there is a much longer tail in content, so you need a LOT of editors - something that is definitely getting easier!

There are also Freebase and Google Base, to name but a few. These all expect a lot more from the users than anything before - well, at least in the last 7 years.

Will this fail again, or are we just at the right time for this kind of user involvement to work? Yes, I know the stats about only needing X users out of everyone to contribute, but the more content that is added, the more the authors and those with a vested social interest seem to become involved.

There were ALWAYS issues with creating structures to hold metadata (such as Dublin Core and RDF), but so long as this stuff is abstracted behind a nice UI, do we think it can now work in a distributed social environment? In other words, people seem to be working harder now (i.e. actually adding this metadata, which in the past was always empty... ask anyone who ever maintained a database of it!) - is this a long-term thing or are we just on a social high?

Will this extend to micro-content and the enterprise?

I do have a vested interest as I'm working on an experimental idea I wish to release in the next few days, so please post comments as they will be very useful!

Saturday, December 1, 2007

OAuth for API Authentication

I've defined a number of APIs in my time and always seem to resort to a different kind of authentication mechanism, and my latest API was likely to be no different. I looked at the Flickr, Google and Amazon APIs and how they do authentication, and they all use custom MD5 and/or HMAC-SHA1 hashing with a private key, with different parts of the query being hashed.
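
To give a flavour of that pattern, here is a minimal sketch of my own (the key handling, parameter ordering and separators are assumptions - each API has its own canonicalisation rules, and this is not any particular provider's exact scheme):

using System;
using System.Security.Cryptography;
using System.Text;

public static class RequestSigner
{
    // Sketch of the "shared secret + HMAC-SHA1 over the query" pattern.
    // Which parts of the request are included, and how they are sorted and
    // joined, varies per API; sorted key=value pairs are just an assumption here.
    public static string Sign(string secretKey, params string[] sortedQueryPairs)
    {
        string baseString = string.Join("&", sortedQueryPairs);
        using (HMACSHA1 hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretKey)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(baseString));
            return Convert.ToBase64String(hash);
        }
    }
}

// e.g. RequestSigner.Sign("my-secret", "api_key=abc", "method=photos.search", "tags=example");

The signature then travels with the request (usually as an extra parameter) and the server recomputes it with its copy of the secret to verify the caller.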

I thought there should be a standard around this... I had looked at OAuth, but from an API viewpoint I never really made the connection. Until today! I am looking at a C# implementation by Eran Hammer and I see there is some work on it, but I need to do some more research to find out what I need to provide on top of this to actually implement it (i.e. client and server requirements).

I hope to learn more about what I need to do to just use this in my API, and in particular how I might integrate the authentication in an AJAX client application without redirecting to the site.