So, to take a slightly different turn from my usual meta discussions of process, theory, and architecture: several people have offered up examples of extension methods they have found useful, now that .NET 3.5 is roaring along nicely.  There are collections of such utilities, like the Umbrella project, and folks like Bill Wagner have written books on the subject (okay, there are other things in there too), so I thought I might as well throw my hat into the ring as well.  Specifically, there was this tweet from @elijahmanor a few days ago.  It points to a post which includes some extensions on string to convert from strings to value types (int, long, short, etc.).  I pointed out that in our current project we have distilled this down to a single extension method: To<T>().

He suggested I blog about it, so here it is:

So we actually have two classes of conversions: one converts from a string to a value type, and the other converts from a string to a Nullable value type.  In our project the nullable version came first, which made it very easy to create the version that returns a non-nullable value.  Here are the methods of interest:

public static T To<T>(this string input) where T : struct
{
    return input.To(default(T));
}

public static T To<T>(this string input, T defaultValue) where T : struct
{
    return input.ToNullable<T>() ?? defaultValue;
}

public static T? ToNullable<T>(this string input) where T : struct
{
    if (string.IsNullOrEmpty(input))
        return null;
    var tryParse = GetTryParse<T>();
    return (T?)tryParse(input);
}
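Before getting into the magic, here is roughly how these read at the call site (the literal values here are mine, purely for illustration):

int count = "42".To<int>();               // 42
int fallback = "not a number".To(-1);     // -1: TryParse fails, so the supplied default wins
short? maybe = "".ToNullable<short>();    // null: empty input short-circuits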

Okay, so this is pretty straightforward.  The non-nullable version calls through to the nullable version, and if the result is null, it returns default(T) (or the supplied default value).  But, astute reader that you are, you will see that there is some magic going on; namely the GetTryParse<T>() method.  This little guy goes off and looks for a TryParse method on whatever T happens to be, and then returns a Func<> delegate that runs the string input through that TryParse and returns either null (in case the TryParse fails) or a boxed version of the result.  So, let's see what it looks like before we discuss pros and cons:

private static Func<string,object> GetTryParse()
{
    var tryParseEx = GetTryParseExpression();
    return (s) =&gt; tryParseEx.Compile()(s,default(T));
}

private static Expression&lt;Func&lt;string,object&gt;&gt; GetTryParseExpression()
{   
    if (_tryParseCache.ContainsKey(typeof(T)))
	return _tryParseCache[typeof(T)] as Expression&lt;Func&lt;string, T, object&gt;&gt;;

    MethodInfo tryParse = typeof(T).GetMethod("TryParse", new Type[] { typeof(string), typeof(T).MakeByRefType() });
    Ensure.IsNotNull(tryParse, string.Format("Cannot convert from type string to type {0} because {0} does not have a TryParse method", typeof(T).FullName));

    var stringArg = Expression.Parameter(typeof(string), "input");
    var tempArg = Expression.Parameter(typeof(T), "tmp");

    var tryParseEx = Expression.Lambda&lt;Func&lt;string,object&gt;&gt;(
        Expression.Condition(
            Expression.Call(tryParse, stringArg, tempArg)
                , Expression.Convert(tempArg, typeof(object))
                , Expression.Constant(null))
        , stringArg, tempArg);
    _tryParseCache.Add(typeof(T), tryParseEx);
    return tryParseEx;
}

So here we have some code looking for a TryParse method, and building an expression (using LINQ expression trees) to execute it.  Now, I'll be honest: this is not the code I'm using in the project where this originally came from…mostly because I didn't think of it then.  In that case I'm actually using a big switch statement, checking the type of T and running the appropriate method.  This is much shorter, but potentially much slower at runtime.  That is where _tryParseCache comes in.  It is a simple static dictionary containing the expression created for each type, which means you only take the reflection hit once, the first time you ask to parse a given type.  The declaration for this object looks like this:

private static readonly Dictionary<Type, Expression> _tryParseCache = new Dictionary<Type, Expression>();
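One thing worth noting: as written, the delegate returned from GetTryParse<T>() calls Compile() on the cached expression every time it runs.  If you want to pay the compilation cost only once per type as well, a small variation (my sketch here, not the project's code) is to cache the compiled delegate instead of the expression:

private static readonly Dictionary<Type, Delegate> _compiledCache = new Dictionary<Type, Delegate>();

private static Func<string, object> GetTryParseCompiled<T>() where T : struct
{
    Delegate cached;
    if (!_compiledCache.TryGetValue(typeof(T), out cached))
    {
        // Compile once per T; every later call reuses the same delegate
        cached = GetTryParseExpression<T>().Compile();
        _compiledCache.Add(typeof(T), cached);
    }
    var tryParse = (Func<string, T, object>)cached;
    return s => tryParse(s, default(T));
}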

There you have it, my first (and possibly last??) contribution to the world of extension methods.  Please commence criticisms.

So previously I posed a question, which in its simplest form is: should you write code for the rest of your group (or at their proficiency level), or should you write code as advanced as you need, and let it serve as an example for those on your team who are less advanced in their abilities?  The practical answer I have come up with is, like most answers of this type, "it depends".

I have decided to handle things this way:
First, don't compromise.  If I feel something is a bad practice, or is ultimately going to restrain future development, then I do what needs to be done.
But also: try to avoid introducing advanced concepts and idioms until there is a compelling example of what their benefit is.  This probably applies more to working with existing apps than to greenfield development, but it has bearing in both cases.  The big example of this that I ran into was IoC.  I am a big fan of IoC, but it is hard to come up with a good, concise explanation of why you need this additional tool.  I've been wanting to introduce IoC since I started working at Envisage, but the explanation "this will make things much easier later on" is not good enough…particularly when you are trying to embrace YAGNI.  So the key is to wait until you can actually demonstrate the advantage, and provide a before-and-after example of how things are done (something like the sketch below).
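To make that concrete, here is the kind of before-and-after I mean — a minimal sketch with hypothetical names, not code from our app:

public interface INotifier { void Notify(Order order); }
public class SmtpNotifier : INotifier { public void Notify(Order order) { /* send the mail */ } }
public class Order { }

// Before: the service news up its own dependency, so swapping or
// faking the notifier (say, in a test) means editing this class
public class HardWiredOrderService
{
    private readonly INotifier _notifier = new SmtpNotifier();
    public void Place(Order order) { _notifier.Notify(order); }
}

// After: the dependency is handed in, so an IoC container (or a test)
// decides which INotifier implementation gets used
public class InjectedOrderService
{
    private readonly INotifier _notifier;
    public InjectedOrderService(INotifier notifier) { _notifier = notifier; }
    public void Place(Order order) { _notifier.Notify(order); }
}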

Lead by example, but make sure you have the examples.  To bring the food analogy back into play: it isn't good enough to create a gourmet meal, and then tell the people who say they don't like it that, no, it really is better, and they're just not sophisticated enough to know it.  It is better to start smaller, and build their appreciation up in increments.

At Envisage, we have a one-week iteration.  This means that we typically don't want a single story to go longer than one week.  We also change up the pair rotation once per week.  The estimation process is not perfect, though, and we have a lot of support from management to choose doing things right rather than doing things now, which leads to stories going beyond a single iteration.  When this happens, there is usually a small rock/paper/scissors session among the developers involved to determine who will be taking the story.  More often than not, the story goes with the machine (and the developer) that was used to do the work, since it is usually not practical to check in changes at the end of the week (also, our QA builds on Friday for a demo, so a green build is needed then).

One approach to this, which occurred to me and to one of our other developers at about the same time, was to use a personal branch.  This is something that I believe is supported transparently in TFS, though I'm not 100% sure.  We are using Subversion, so the process is somewhat manual, but overall pretty easy to get started.  Once the branch was created, I found it extremely liberating.  I have apparently been making decisions about how much to change while refactoring code at least partly based on how much work it would be to integrate directly back into the trunk.  Having a private, protected area to work in made it much easier to change things in the way they needed to be changed.  It also meant that I could check in more often, including checking in changes that left the app in a non-functional state.  Having those commit points meant that I could more easily undo changes, and gave me a nice warm and fuzzy feeling about the changes I was making.  There was another interesting advantage as well: when another developer asked me how the API would look on some of the objects I was working on, I was able to point him directly to my branch to see the changes.

There were a few issues, however.  The biggest was that, though we are running a Subversion 1.5 server, our repository has not been upgraded, so the automatic merge tracking was not working.  This meant that I had to keep track of the revision numbers myself whenever I needed to merge updates from the trunk into my branch, and it also made the "Reintegrate Branch" merge function unavailable when I was ready to check my changes in (the sketch below shows the manual version).  Despite these issues, I think in this particular case (the story lasted about three weeks) it was worth the extra effort, and it made the overall process much easier.  We will be upgrading our repository this week, which may make a personal branch a viable solution for normal day-to-day work; as it is, the effort involved was a bit more than I would want to deal with on a regular basis.  I will certainly not hesitate to break out this tool whenever I've got either a long-running or a large-scope story (meaning either large numbers of files affected, or a large change to parts of the API).  I certainly recommend giving it a try.
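For anyone who wants to try it, the manual workflow looks roughly like this (the server URL, branch name, and revision numbers are made up for illustration):

# One-time: branch the trunk (a cheap copy on the server)
svn copy http://svnserver/repos/project/trunk \
    http://svnserver/repos/project/branches/personal-dev \
    -m "Create personal branch for the long-running story"

# Point an existing working copy at the branch
svn switch http://svnserver/repos/project/branches/personal-dev

# Periodically pull trunk changes into the branch; without an upgraded
# repository you have to record the revision range (here 1200:1250) yourself
svn merge -r 1200:1250 http://svnserver/repos/project/trunk

# Once the repository is in 1.5 format, folding the branch back into a
# trunk working copy becomes a single command
svn merge --reintegrate http://svnserver/repos/project/branches/personal-dev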