This is part 3 in a series.  If you’ve not followed along so far, you may want to check out Part 1 and Part 2 first.

It’s time to start digging into some of the crazy-goodness that makes Scala such a glorious and wonderful thing.  First things first, though, we have to talk a little bit about the history of Generics in Java and the JVM.

Type Erasure and you…

Back around the time that .Net was adding support for generics, the folks in Java land were doing the same…sort of.  The biggest difference between the way Generics were implemented in .Net and Java is that in .Net generics are supported directly by the CLR (sometimes called reified generics), whereas the JVM does not include direct support for generics.  Ok, so what does that actually mean?  Well, for starters it means that when you’re dealing with Generics in Java you run into Type Erasure.  Type Erasure means that when Java code with generics is compiled, the generic type information is removed, so while you’re looking at an ArrayList<String> in Java, the JVM sees just an ArrayList, and things get cast as needed.  There are a couple of types which keep their type information at runtime (Arrays are the best example), but for the most part this doesn’t happen.  In contrast, in the .Net world a List<string> gets compiled into a List`1<System.String>, which is a real type at the IL level.  Now, I’m not going to get into the argument about whether type erasure is good or bad, but it is a fundamental difference between the two platforms.
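You can actually watch erasure happen from the Scala REPL (a quick sketch):

// the element type is erased, so both lists share the same runtime class:
List("a").getClass == List(1).getClass  // true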

One very interesting effect of Type Erasure is that it allows you to specify a wildcard type argument for a generic type parameter.  In Java this looks something like ArrayList<?>; in Scala the same type would be ArrayList[_].  This is interesting because it allows you to side-step the whole issue of providing type arguments when you really don’t care what they are.  Now, let’s back up one step real quick and talk about syntax for a moment.
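Before we do, here’s a quick sketch of a wildcard in use (the count method is made up for illustration):

// we never touch the elements, so we don’t care what the type argument is:
def count(xs: java.util.ArrayList[_]): Int = xs.size

count(new java.util.ArrayList[String]())  // works for an ArrayList of anything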

Getting down to business…

Those of you with sharp eyes will have noticed that Scala uses square brackets for type arguments.  You can limit types (in a way similar to generic constraints in C#) based on other types by using the following syntax: MyType[T <: AnyRef].  In this case we’re saying the type argument T is a subtype of AnyRef (which you may recall is the supertype for all reference classes).  This is known as a Type Bound, and specifically in this case an Upper Type Bound.  You could speculate (and would be correct to do so) that a Lower Type Bound works in the opposite direction, so MyType[T >: A] means T is a supertype of A.  Now, this is interesting from a theoretical perspective, but there are practical uses for this when you bring in covariant and contravariant type parameters.  Scala allows you to denote a covariant type like this: MyType[+T], and a contravariant type like this: MyType[-T].  These are equivalent to the .Net 4 variance modifiers out and in.  So the .Net versions of our MyType declarations would be IMyType<out T> and IMyType<in T> respectively.  Note I added the I to the type names since in .Net variance declarations are limited to interfaces and delegates.  This limitation does not exist in Scala, so you’re free to add variance modifiers wherever you like.  Ok, so why do I say type variance becomes interesting when you combine it with type bounds?  Well, let’s look at some real live Scala code and see:

sealed abstract class Option[+A] extends Product {
...
  def getOrElse[B >: A](default: => B): B = 
    if (isEmpty) default else this.get
...
}

So this is a chunk of code from the Scala library.  This is a sample of the Option class, which is an implementation of the Null Object Pattern that is used everywhere within the Scala libraries.  If you look at the type declaration you can see that Option takes a single type parameter A that happens to be covariant.  This is because when you have an Option of a specific type you expect to be able to get that type back out of it.  However, there is also this getOrElse method, which will either return the item, or return the value of the function you pass in (which should return the type A).  Now if you were to try and make the argument to the function something like => A you would get a compiler error, because a covariant type would be appearing in a contravariant position (the same thing happens in C# if you try it).  So the solution is to give the getOrElse method its own type parameter B, and specify a lower bound of A, which means B has to be a supertype of (or the same type as) A.  This is particularly awesome since we would like that to be a contravariant operation anyway.  This is a very nice trick, and one I wish were possible in C#.
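To see that lower bound doing its job, here’s a quick REPL sketch:

val opt: Option[Int] = None
opt.getOrElse(0)          // B is inferred as Int, returns 0
opt.getOrElse("missing")  // B widens to Any, the common supertype of Int and String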

Types of Types of Types…..

But the goodness does not stop there.  We’re going to head off into the land where Scala really asserts itself in terms of its type system.  In Scala it is possible to specify that the type parameter of your class/method/whatever should itself also have a type parameter (which could also have a type parameter, which could have a type parameter….you get the idea).  These are known as Higher Kinded Types (from the idea of Higher Order Functions in Functional Programming, which are functions that take functions as arguments).  A Kind in type theory is basically a higher-level abstraction over types, which relates to types in roughly the same way types relate to variables in regular code (I know this didn’t actually help explain anything, but it sounds good right?).
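To at least show the syntax, here’s a minimal sketch of a higher kinded type parameter (the Functor name is just illustrative; newer compilers may also ask for import scala.language.higherKinds):

trait Functor[F[_]] {  // F is itself a type that takes a type parameter
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object ListFunctor extends Functor[List] {  // List here is a type constructor, not a complete type
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
}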

You may be wondering why this is a big deal, or even what use it may have….well, let’s turn to another example from the Scala library.  In this case we’re going to talk about the map function that exists on pretty much every Scala collection (or collection-like) object.  This is a method that allows you to take a collection of one type, and turn it into a collection of a different type…more or less equivalent to the Select() method from Linq on IEnumerable.  Now, what is really cool about the Scala map method is that when you call it on a List[Y] for example, and pass it a function to transform Y into X, you will get back a List[X].  The important part here is the fact that it is a List…not an Iterable, not any other supertype in the hierarchy, but a List (see the quick illustration below).  And to top it all off this method is defined at the top level of the collections type hierarchy.  So if you create a new collection that inherits from one of the existing collection types, you get a map method that behaves the right way free of charge.  This is impossible to do in C# without creating an implementation of the method for each type (and if you did that you would have no contract at the IList or IEnumerable level for the method).  I’ll let that sink in for a moment.  Now, there are a lot of very cool things going on in the type system which will get their own posts (or twenty) at some point; as a matter of fact Scala’s type system is actually Turing Complete in and of itself, which makes my head hurt to think about.  But apart from that there is one more thing I want to share about Scala’s type parameters before wrapping this up.
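A quick illustration of map preserving the collection type (nothing here beyond the standard library):

val doubled: List[Int] = List(1, 2, 3).map(_ * 2)                  // still a List
val shouted: Vector[String] = Vector("a", "b").map(_.toUpperCase)  // still a Vector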

A quacking all the way…..

In addition to constraining type parameters, you can actually get more granular and tell Scala you’ll take any object it’s got as a parameter, as long as it has some specific method declaration(s).  This is effectively duck typing (Scala calls these structural types), and makes testing really, really easy when you can use it.  The syntax for doing this looks like:

def MyMethod(thing: { def that(x:Int) }) = thing.that(2)
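Any object with a matching method will do; Duck and Robot here are made up just to show the calls:

class Duck  { def that(x: Int) { println("quack " + x) } }
class Robot { def that(x: Int) { println("beep " + x) } }

MyMethod(new Duck())   // works: Duck has a matching that method
MyMethod(new Robot())  // also works, with no shared interface in sight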

Take a long look at that for a sec, let it sink in.  Now, this is pretty good stuff by itself, but Scala’s type system gives us even more goodness.  One of the nifty things you can do is declare new types in terms of other types.  Something like aliasing….but with a lot more power, particularly when you take into account the fact that you can use the same “duck typing” syntax to define a type.  So here is what that would look like:

class Test {
    type Dog = { def bark: Unit }

    def itBarks(dog:Dog) = dog.bark
}

class Wolf {

    def bark = println("Actually, I tend to howl more")

}

class Poodle {

    def bark = println("Le bark, Le bark")
}

val test = new Test()
test.itBarks(new Wolf())
test.itBarks(new Poodle())

So, as you can see, you can use type definitions as a form of shorthand for the duck-typed references.  What’s even cooler is that you can actually define types at a package level as well, which means you can define them once in your project, and use them wherever you need them.  This is something like the using aliases you can set up at the file level in C#, only you can declare them at a much broader scope.  As a matter of fact, Scala has a special object called Predef whose contents are automatically imported into every Scala source file.  It defines a lot of nifty things, like the println method, as well as several types that can be used in any Scala program (check out the source if you’d like).  Pretty groovy, no?
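To make the package-level bit concrete, here’s a sketch of how a type can be defined in a package object (the zoo package name is invented):

// in a source file; everything in the zoo package can now use Barker directly
package object zoo {
  type Barker = { def bark: Unit }
}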


In Part 2 of this series we’re going to move into some of the object-oriented aspects of Scala.  For an overview of what Scala is, and a quick primer on syntax, check out part 1.

Let’s start with the built-in object structure

Like C# and Java, Scala has a built-in object hierarchy, and a standard library full of goodies.  Unlike Java and C#, Scala tackled the issue of reference types and value types head-on: the root type in the Scala type system is the Any type, and there are two descendants of Any that come into play before any other type: AnyRef and AnyVal.  As you might guess, AnyRef is the base type for all reference types (including types made available from Java), and AnyVal is the base type for all value types.  This is a little bit like the class and struct type constraints in C#, only you can actually use these as the types of variables.  They can also be used to limit type parameters, but Scala Generics are going to have to get their own post.
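Here’s what using them as variable types looks like:

val anything: Any = 42        // Any sits above everything
val ref: AnyRef = "a string"  // the base of all reference types (java.lang.Object in Java terms)
val value: AnyVal = 3.14      // the base of all value types: Int, Double, Boolean, etc.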

There is one other interesting class that sets Scala apart from C# and Java (and maybe even other OO languages…I don’t think Ruby does this….pretty sure Python doesn’t either….Smalltalk probably had it, but then Smalltalk had everything).  There is a class called Nothing, which is a subclass of every class.  I’ll give that a second to sink in.  Ok, got that?  There is also another class called Null that is similar, in that it is a subclass of every reference type.  Here is a diagram of the Scala class hierarchy to make things a little clearer.
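Nothing being a subtype of everything is what lets an expression that never returns normally fit anywhere; a small sketch:

def fail(msg: String): Nothing = throw new IllegalStateException(msg)

// both branches must share a type; Nothing unifies with Int, so this is an Int:
def half(n: Int): Int = if (n % 2 == 0) n / 2 else fail("odd number")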

With this information fresh in our minds, let’s take a look at actually putting together some Object-Oriented code.

The Trinity….Classes, Objects, and Traits

This was actually one of the first areas that made me go “Wow” in Scala land.  Being on the JVM, you would expect it to use Java’s notion of classes and interfaces.  But, interestingly, while it is built to support interoperability with Java directly, Scala starts asserting itself early on in the following way: In Scala you have three basic building blocks for OO, the Class, the Object, and the Trait.  Taking these in order from most like what C# devs are used to down to the least, we’ll start with the Class.  Classes in Scala are basically like C# classes…only with one significant difference: Scala classes do not have statics.  As a matter of fact there is no way to make anything static in Scala.

That’s where Object comes in.  An Object is basically a singleton, which you reference using static-like syntax (so Name.Method).  You can create something which looks like statics on classes by using a Companion Object, that is, an Object which has the same name as the Class (and is in the same file).  Objects have a couple of special things going on, like the ability to be used as Factories for other classes, and the special apply() and unapply() methods, which allow you to get instances of a class, and which come into play in nifty ways when we start talking about Pattern Matching (I’ll show a quick companion Object sketch after the examples below).

But before we get into that let’s talk about the last of the Scala OO Trinity, the Trait.  Traits are really cool….they are effectively Interfaces, only they can contain implementation.  I say can, but they don’t have to.  So you can use Traits in a way that is similar to the way interfaces are used, or you can add implementation, and they become Mixins.  What’s more, you can create a Trait that inherits from a Class, as well as a Class that extends a Trait.  You can also create Objects that inherit from classes and extend Traits.  You can even attach a Trait to an instance of a class at the time you instantiate it, which means you can add behavior to a class declaratively at the time you need it.  So let’s get some code samples going here, just to give an idea of what this all looks like.  Let’s put together a totally non-realistic example of an Animal class hierarchy:

class Animal

Ok, that was maybe not so helpful (but I will point out that this is valid code…it compiles, people, try it out).  Let’s move on to some categorization:

class Feline extends Animal {
  var _lives:Int = 9

  def creep {
  }
  def scratch {
  }
  def hunt {
  }
  def livesLeft:Int = _lives
}

class Canine extends Animal {
  def howl {
  }
  def hunt {
  }
  def chaseFeline(cat:Feline) {
  }
}

So now we have a little more to work with…not much, but it’s a start.  Before we get too far, we have some duplication, so let’s do a little refactoring…since both classes have a hunt method, we’re going to pull this out into a Trait:

trait Hunting {
  def hunt {
  }
}

And to actually make use of this, our classes are now:

class Feline extends Animal with Hunting {
  var _lives = 9

  def creep {
  }

  def scratch {
  }
  
  def livesLeft:Int = _lives
}

class Canine extends Animal with Hunting {
  def howl {
  }
  
  def chaseFeline(cat:Feline) {
  }
}

There, isn’t that nice?  The Trait is added to the class in this case with the with keyword (though if there were no base class, we would use extends for the Trait.  The rule is: the first item gets extends, and any additional traits use with).  We could also do this declaratively at instantiation time if we wanted.  Like this:

val canine = new Canine() with Hunting
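And since I promised a companion Object sketch, here’s roughly what the factory pattern mentioned earlier looks like (the Cage class is invented for illustration):

class Cage(val occupant: Animal)

object Cage {  // the companion: same name, same file
  def apply(occupant: Animal) = new Cage(occupant)
}

val cage = Cage(new Canine())  // no new needed; this calls Cage.apply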
 

Before we finish up (cause I think this is plenty to chew on for a while), I want to talk briefly about visibility and overrides.  By default in Scala, everything is public…as a matter of fact there isn’t a public keyword at all.  There are actually some really interesting options for scoping which give you more control than the public, private, protected, internal, and protected internal options available in C#, but those will have to wait for a bit.  As far as overrides, Scala does not have a virtual keyword, so everything is overridable (sort of).  Unlike Java, if you want to override something you have to use the override keyword.  Let’s put together a quick example:

class Animal {
  def sleep {
  }
}

class FruitBat extends Animal {
  override def sleep {
    if(isNighttime)
      super.sleep()
  }
}

Again, contrived, but so are most samples on blogs.  Naturally we’ll ignore the fact that isNighttime has no implementation, cause we can, and look at the fact that overriding existing functionality is pretty familiar to the C# dev, except for the call to super rather than base.  The interesting thing to note here is that, by default, all methods can be overridden in Scala (using the final keyword on a method def will keep it from being overridden), and you must be explicit when you do it.  This actually takes care of the complaints from folks, particularly when discussing testability, about C# requiring the virtual keyword before a method can be overridden (and since everything is public by default, there is that argument too).  It also takes care of complaints people have about the fact that you can “accidentally” override methods in Java, since there is no override keyword (Java’s @Override annotation helps, but it is optional, so nothing forces you to declare your intent).
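For completeness, here’s final at work (a sketch):

class Cat extends Animal {
  final override def sleep { }  // subclasses of Cat can no longer override sleep
}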

We’ve covered quite a bit, and there are still some things we could talk about, but this takes care of the basics.  You can now create Object-Oriented code in Scala.  In our next installment we’ll dig into the Scala concept of Generics, which actually makes C# generics look like a sad and feeble attempt at doing something interesting (while still dealing with type erasure in the JVM).

I’m going to take a brief intermission in my Scala series, and show a head-to-head comparison of some code in Scala and C#.  To do this, I’m going to go with the first and second problems from Project Euler.  If you’re not familiar with the site, it’s a playground full of problems that are absolutely perfect for functional languages (cause they tend to be mathematical functions).  So let’s get started with Question #1:

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.

We’ll start with the C# version:

public class Problem1
{
   public int GetResult()
   {
       return Enumerable.Range(1, 999).Where(i => i % 3 == 0 || i % 5 == 0).Sum();
   }
}

With the magic of Linq this is pretty easy (I’ll show later how Linq is basically a way to do list comprehensions in C#, but that’s for one of the longer posts).  Now, on to the Scala version (which you can paste into your REPL or SimplyScala if you want):

(1 until 1000).filter(r => r % 3 == 0 || r % 5 == 0).reduceLeft(_+_)

Now, comparing these two, they are fairly similar.  Creating the initial range of numbers is a little easier in Scala (and using the until “keyword” means we don’t have to use 999 like in C#).  Instead of Where, Scala uses the more traditional filter function, and we have to do a little more work and use the reduceLeft function with the special _+_ I talked about before, but overall they are quite similar.
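As an aside, Scala’s numeric collections also have a built-in sum method, so the reduceLeft could be written away entirely:

(1 until 1000).filter(r => r % 3 == 0 || r % 5 == 0).sum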

So let’s move on to Question #2. It is:

Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, …
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.

Seems pretty straightforward.  We want a Fibonacci sequence generator, then we simply need to filter out the odd values, and sum the even values that are less than 4 million to get our answer.  Let’s start with C#:

public class Problem2
{
    private IEnumerable<int> Fibonacci()
    {
        var tuple = Tuple.Create(0, 1);
        while (true)
        {
            yield return tuple.Item1;
            tuple = Tuple.Create(tuple.Item2, tuple.Item1 + tuple.Item2);
        }
    }

    public int GetResult()
    {
        return Fibonacci().Where(i => i % 2 == 0).TakeWhile(i => i < 4000000).Sum();
    }
}

This one is a little more involved because we have to generate a Fibonacci sequence. I decided to use an iterator and the magical yield keyword to make a never-ending sequence (well, never ending until we end up with an overflow exception that is), but beyond that the solution is very similar to problem #1.

Now for the Scala version:

lazy val fib:Stream[Int] = Stream.cons(0,Stream.cons(1,fib.zip(fib.tail).map(p => p._1 + p._2)))
fib.filter(_ % 2 == 0).takeWhile(_ <= 4000000).reduceLeft(_+_)

Well, isn’t this interesting….The first line is the equivalent of our C# iterator.  It’s creating a lazy stream, which is a stream whose contents are calculated as they are needed (just like our iterator).  The difference here is that Scala doesn’t try to evaluate it all if you just type fib into the REPL…it will give you the first few results and then say “Look, I could go on, but you didn’t tell me how far to go so I’m just going to stop here”, and it spits out a ? to let you know that there could be more.  This means that Scala has a deeper understanding that this thing may well never end.

Keeping this in mind, we’re actually calculating the stream’s contents recursively (after hard-coding the 0 and 1 values) by using the zip function, which will take a list and combine it with another list into a collection of Tuples.  For the second list, which gets passed in to the zip function on fib, we’re specifying fib.tail, which is our list minus the first element.  So if our list starts out looking like List(0,1,...) then fib.tail is List(1,...).  That means the initial call to zip creates the tuple (0,1).  From there we use the map function (translate this to Select in Linq-ese) to return the sum of the first and second items from our tuple.  So now we have just created the third element in our sequence: 1.  The next time round this all happens again, only on the next elements in the two sequences respectively.  So the zip function returns a tuple with the second and third elements in the sequence, (1,1), and lo and behold the 4th element in our sequence is born.  This will go on until you stop asking for values, or you get an overflow exception.  The entire time, the evaluation of what is in the sequence is exactly one element ahead of what is being returned.  Kinda mind bending, no?

Now for the second line: we once again have an almost one-to-one map to our C# code.  We filter out the odd values, take all the values less than 4 million, and then sum up the results.
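If the zip portion is hard to visualize, here are the first few steps written out by hand (just a sketch of the same idea on plain Lists):

// fib:      0, 1, 1, 2, ...
// fib.tail:    1, 1, 2, ...
// zipped:   (0,1), (1,1), (1,2), ...
List(0, 1, 1).zip(List(1, 1, 2)).map(p => p._1 + p._2)  // List(1, 2, 3): the next elements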

Hopefully this has been at least a little bit enlightening…I’ll continue on making some more detailed forays into the world of Scala, but I thought an occasional one-to-one comparison might help shed some light on some of the places where Scala offers some added elegance over what is possible in C#…as well as those spots where it doesn’t.

How’s that for a title, eh?  Yeah, I know, kinda crappy, but there is only so much creativity I can manage in a day, and as you will soon find out my mind is busy with all kinds of new and interesting things, so any spare neural pathways which may have once been useful for something like clever titles are now just too damn busy to be bothered.

So what is this all about?  Several months ago I had an interesting experience.  One that I could almost compare to a religious experience, only not quite so….religious.  It started with the most excellent book Seven Languages in Seven Weeks by Bruce Tate, from our good friends at the Pragmatic Programmers.  Now, I will admit that I’ve not actually read all of it…yet.  The reason is that while I found the first several languages (Ruby, Io, and Prolog) interesting, and challenging, it was Scala that had a certain something that kept me wanting to know more.  Mr. Tate actually didn’t care much for the syntax of Scala, and was rather pleased to move on to Erlang.  Meanwhile I was off and running: downloading Eclipse and the Scala IDE, looking at testing frameworks, online documentation, and what’s this?  Android development with Scala!?!?  I was hooked.

What is Scala all about?

2003 – A drunken Martin Odersky sees a Reese’s Peanut Butter Cup ad featuring somebody’s peanut butter getting on somebody else’s chocolate and has an idea. He creates Scala, a language that unifies constructs from both object oriented and functional languages. This pisses off both groups and each promptly declares jihad.
— James Iry, A Brief, Incomplete, and Mostly Wrong History of Programming Languages

So to move on to the elevator pitch on Scala.  Scala is a hybrid Object-Oriented/Functional language built on the JVM which makes extensive use of type inference.  Now, a C#/.Net dev looking at this statement will most likely have a couple of responses.  Depending on how snarky the specific developer is, the first will most likely be “So?”.  Followed by something like “So is F#, and C# has all that Functional stuff built-in…what’s the big deal?”.  Answering this question is one of the things I’m hoping to address…though maybe not directly.  There are a lot of reasons I think Scala is interesting to a .Net developer, not the least of which is the idea that learning a new language can help open your eyes to new ideas, patterns, and idioms.  But also, there was a recent announcement from the Scala CLR team that, thanks to lots of hard work, and some funding from Microsoft, the Scala CLR compiler tools are actually usable.  This means that we could have another language available on the .Net platform with which to play.

There are no doubt going to be a number of different posts where I address various topics in some order that is in no way thought out.  What I would like to try to do is present a lot of the things that make the language interesting and unique, and how it can be used to solve problems in ways that are not possible in C#.  In some cases I may draw comparisons to Java, as a means of explaining some specific nuance of the language, or just to put into perspective why Scala is an even more compelling option for Java developers.  For those wanting to play along at home, without going through the process of installing Scala, you can head over to SimplyScala and try out their web-based Scala REPL.  When you install Scala, you also get a REPL that you can fire off from a command line (or, as I like to do, add it as a tab to Console2).


A quick look at some key syntax differences…

Before I start digging in too deep, I want to get some of the syntactic differences out of the way; that way future posts can focus more on specific topics, without needing to take a side-track into syntax.  Scala has a very malleable syntax, which is a very interesting thing given it’s also a statically-typed, compiled language.  We’re used to this sort of behavior from those unruly dynamic languages like Ruby, but from a compiled language?  Really?  How gauche…But let’s start with the basics.  Scala is a language that uses curly braces, so it’s not terribly difficult for a C# dev to look at initially.  What is unusual is the way variables and methods (and the properties for those methods) are declared.  The general pattern is name:type.  I read somewhere that Martin Odersky once said that the name is the most important thing when looking at declarations, and this pattern supports that.  So if you want to declare an immutable variable (i.e. one that can not be reassigned) it would look like this:

Mutability

I’m going to gloss over the distinction between mutable and immutable for the moment, but suffice it to say, functional programming places a high value on immutability, and Scala extends this idea to include a general preference for immutability whenever possible.

val age:Int = 6

Likewise a mutable variable declaration would look like this:

var age:Int = 6

This is not actually what you are likely to see in a “real” Scala program, however, since the Scala compiler makes extensive use of Type Inference. So in both of these cases the type can be left off, because there is plenty of information available to know what the types are:

val age = 6

When we’re dealing with method or function definitions, there are some limits to what can be inferred.  Generally the types of parameters can not be inferred, but the return type can (though when declaring methods on a class, unless they are very simple, it’s considered a good idea to specify the return type anyway.  This is to avoid confusion for both your reader and the compiler).  Functions are defined using the def keyword like this:

Function vs Method

The distinction between functions and methods is actually pretty simple.  A method exists on a Class (or Object, or Trait), whereas a function does not.  Because Scala is functional, you can declare functions outside of the scope of a class without causing the anger of the gods to come raining down upon you.

def printNumbers(nums:List[Int]) {
  nums.foreach(n => println(n))
}

There are a few interesting things here, but I have a feeling C# devs will know what is going on without a whole lot of guidance.  Take it as given that println is a method that prints something to standard out (cause it is…we’ll talk about where it comes from later), and it’s easy to see that this method translates to this in C#:

public void printNumbers(List<int> nums)
{
    nums.ForEach(n => Console.WriteLine(n));
}

Let’s look at some of the differences….In the Scala version, we’re not specifying visibility.  By default everything is public in Scala, which is a different world than we’re used to in C#.  Also, there is no void return type defined.  In Scala there is actually a class that takes the place of void, called Unit.  The compiler knows we are dealing with a method returning Unit for a couple of reasons, the first and most significant being that in Scala, the return value of a function is the value of the last statement in the function.  Since the foreach() function on the List returns Unit, the printNumbers function also returns Unit.  There is also the way the function is written…let’s take a look at a function that returns a value:

def square(value:Int):Int = value * value

In this case there is an equals sign after the function definition, which indicates that the bits afterward are the body of the function, and that the result of the following block is the return value of the function.  Also notice I left off the curly braces this time round.  Since it was a simple single-statement method, there is no reason to use them.  As a matter of convention, if you’re creating a method that returns Unit, you leave off the equals sign and use the curlies (as a matter of fact, if you leave off the equals, you have to use the curlies).  If you need multiple statements, then you need to use the curlies.
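Here’s a multi-statement example, just to show the last expression becoming the return value:

def describe(n: Int): String = {
  val parity = if (n % 2 == 0) "even" else "odd"
  parity + " number"  // last expression, so this is what gets returned
}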

So back to the Unit return type on the first method.  We can also write our printNumbers function like this:

def printNumbers(nums:List[Int]) { nums.foreach(println(_)) }

Yes, I did throw another curve at you here…the foreach function on the List takes a function as an argument, and the format for lambda expressions in Scala is basically the same as C#; however, you can skip the input parameter on a lambda if there is only one, and use the underscore _ instead.  This looks similar to the way C# lets you pass in a reference to a method that matches a delegate signature, but is not actually the same thing.  In the Scala case, the println(_) is still a lambda expression.  So before I wrap things up for this first go round, I’m going to throw out another example of using the underscore.  This has actually been used as fodder to argue that Scala syntax is overly obtuse and cryptic.  These things are always subject to interpretation, and I tend to think that it really depends on the specific project and developers.  If your organization is used to a particular pattern, then even if it seems obtuse from the outside, it is a well defined and established pattern…use it.  Anyway, on to the example…here we’re going to calculate the sum of a list of numbers:

def sum(nums:List[Int]) = nums.reduceLeft(_+_)
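Before unpacking this, here’s the same function with the lambda written out explicitly (the two behave identically):

def sumExplicit(nums: List[Int]) = nums.reduceLeft((acc, n) => acc + n)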

Ok, so let’s start with a quick look at reduceLeft…this is a function that will apply another function to each element in a list, starting with the left-most, and accumulating the result along the way.  So basically it looks at the list, applies the function to the first two elements, then takes that result and applies the function to it and the third element, and so on.  In this case we’ve defined our function as _+_, and if you recall, the underscore is basically a stand-in for a lambda parameter.  In this case, though, we have two params, but the Scala compiler is clever enough to figure out that we want to take the arguments in order, and apply the + method to them (yes, + is a method, not an operator…stick with me, we’ll talk about this down the line).  This starts delving into a little bit of compiler magic, and so it makes sense that some folks would be uncomfortable with it.  It starts to unveil some of the secrets that are hiding underneath the surface of a seemingly stuffy, curly-braced JVM language.  I’m going to leave it here for now, and let you chew on this for a bit.  Next time, we’re going to start talking about the Object-Oriented side of Scala.  Stay tuned.

If it walks like a giraffe and talks like a duck, then what is it?  Maybe a duhk?  Who knows, but it certainly is not a duck.  So if that is the case, then you can probably guess what the Duhking library is all about…or maybe you can’t.  In terms of programming, Duck Typing refers to the ability of some languages to allow you to treat an object of one type as an object of a different type, provided the methods/properties needed exist on both objects.  Statically typed languages are usually not very good at this sort of loosey-goosey treatment of types, which is why this behavior is typically restricted to languages with less stringent rules on typing.

The Duhking library is an attempt to provide a very limited view of Duck Typing to .Net 3.5 applications.  It allows you to graft an interface type onto an object that has matching properties and/or methods but does not actually implement that type.  Why would you do that?  Well, there were a couple of use-cases that drove the development of this library.  One is a case where you want to wrap an API that you have no control over in an interface so that you can test your consuming code.  This is common for things like HttpContext or SmtpClient where you want to utilize the functionality of those libraries in your code which you work so hard to make testable.  A standard approach to doing this is to create an interface which defines the methods you need, and then create a “wrapper” class that implements the interface, but then calls through to the real, un-testable class to do the work.  So my thought was this:  “Since we’re just calling through to matching method signatures in a class that already exists, why not abstract the whole thing so I don’t need all of these crazy Wrapper classes everywhere?”.

The other use-case came up when dealing with anonymous types.  We all know that you can create basic data objects as an anonymous type, and then use them within the scope they are created.  But what happens if you want to pass an anonymous type to another method?  Well, you have two choices.  You can either move the data in your anonymous type to another class/struct and pass that, or you can resort to some reflection trickery to get the values out of a plain old object.  It seemed like you should be able to create a new anonymous object as a particular interface type, and then pass it around as that interface type.

So the Duhking library allows you to do both fairly easily by use of some simple extension methods on object, and the magic of the Castle project DynamicProxy2 library.  Now that I’ve told you the secret, surely you can see how things work.  The library simply creates a proxy of the specified interface, and then intercepts calls to the interface methods, and in turn calls the matching methods on the object we are “Duhking”.  There is some checking going on to ensure that your object is compatible with the interface you are wanting to Duhk, which involves checking method signatures (this is surprisingly complicated, considering it is the basis for compiler-based interface implementation verification….but then I may be doing it the hard way), but beyond that it’s just passing calls off to the proxy.  But enough idle chit-chat, let’s see a sample.

Okay, so let’s take the first use-case, where we are wanting a wrapper class for a sealed framework class so we can test our consumers.  Let’s go with the SmtpClient as an example, because it is fairly common to want to send emails for various reasons.

First off we need our wrapper interface:

public interface ISmtpClient
{
    string Host { get; set; }
    int Port { get; set; }

    void Send(string from, string recipients, string subject, string body);
}

You could add in additional properties or methods, but this is enough to get you going.  So now you can use this interface in place of the standard SmtpClient in the framework, and write your tests against it without major pain.  The next step is to Duhk the real SmtpClient so it implements your interface when you’re ready to do the “real” work.

// Some code here getting ready to call your class that needs the client
var myClass = new ClassNeedingSmtpClient(new SmtpClient().AsType<ISmtpClient>());
// and now you do something with it

Pretty cool, huh?  You can also check to see if the given concrete type can be Duhked by using the CanBe extension method:

// Some code here getting ready to call your class that needs the client
var realClient = new SmtpClient();
if (realClient.CanBe<ISmtpClient>())
{
    var myClass = new ClassNeedingSmtpClient(realClient.AsType<ISmtpClient>());
}
// and do something else if it doesn't work

So now let’s look at the other usage scenario: wrapping anonymous types in an interface so you can pass them around.  The first thing we need is an interface to hold the data:

public interface INamedSomething
{
    string Name { get; }
    int Id { get; }
    string SomethingElse { get; }
}

Note here that we are only specifying getters.  That is because the properties of anonymous types are read-only, and right now the duhking code doesn’t differentiate anonymous types from other types (more on that later).  Ok, so with this we can now create an anonymous type and return it as an INamedSomething:

public INamedSomething MethodThatReturnsSomething()
{
    // Some work goes here
    return new { Name = "Sam", Id = 1234, SomethingElse = "Hah!", SomethingElseNotInTheInterface = "Foo" }.AsType<INamedSomething>();
}

And this works fine. Notice I threw an extra property in there to show you that when we are checking for matching signatures we’re only checking the methods/properties in the interface we’re trying to Duhk. You can have as many additional properties or methods as you want in your concrete type, doesn’t matter.

Now, as for that whole read-only thing.  Right now the code that checks compatibility between your class and the target interface is ensuring that each method in the target interface has a matching method in the class.  This includes the compiler-generated methods for getting and setting properties.  That means that if you’re using an anonymous type as your class, you will never be able to Duhk it to an interface that has setters on its properties.  While technically correct, there is something about this behavior that bugs me…it just doesn’t seem flexible enough.  So most likely what I am going to do is add some special handling for anonymous types that will allow the target interface to have both getters and setters.  This will in effect provide a way to stub out an interface implementation, and use the anonymous type to set the initial values of the interface.  This changes the purpose of the library a bit, giving it the added ability to stub out interfaces, so I’ve held off on doing it.  I think, though, that adding this functionality will actually increase the utility of the library, so it’s probably worth doing.

Right, so now that you have all of the grueling details, go get it, and let me know what you think.

On November 27th, a beta release of the 9.3 version of the Developer Express components, including CodeRush and Refactor Pro! was made available to subscribers.  This release is pretty significant to me because it contains a major feature that I have been waiting for for a long time: A Unit Test Runner.  There were some teasers released by Mark Miller a while back, which only made me want to get my hands on the tool that much more.  My initial impressions are that it is very nice.  It is similar to TestDriven.Net in that it provides context menu options to run tests at various levels of granularity (single test, file, project, and solution level) and includes a debug option.  At this point it does not contain some of the additional coolness that TestDriven gives you like NCover/Team Coverage and TypeMock integration, but it does have the advantage of being extensible.  I know it was extensible because Mr. Miller told me it was extensible (the title “The Extensible Unit Test Runner You’ve Been Waiting For” was a clue).  I did not realize how extensible, however, until after I submitted a bug report to DevExpress.  The bug I was reporting (the NUnit TestCase attributes were not recognized), it turns out, was already brought to the attention of the DX team by way of a forum post, and they had already planned on correcting it with the next 9.3 release, but I could have saved myself (and Vito on DevExpress team) some time by taking a peek at the source samples bundled with the 9.3 release.  Yep, you guessed it, there with a shared source license were all of the test framework implementation projects.  So this meant I could whip together my own temporary fix while I was waiting for the next release.  It seemed like something that other folks might want to know about, so I thought I would share it here.

The biggest piece of the puzzle is a new TestExecuteTask class for handling the TestCaseAttribute.  Due to my complete lack of creativity, I called mine TestCaseExecuteTask, and it looks like this:

using System;
using System.Collections.Generic;
using System.Text;
using DevExpress.CodeRush.Core.Testing;
using System.Reflection;
using DevExpress.CodeRush.Core;

namespace CR_NUnitTesting
{
    public class TestCaseExecuteTask : TestExecuteTask
    {
        public override TaskExecuteResult CollectTestParameters()
        {
            TaskExecuteResult result = TaskExecuteResult.SkippedTaskResult;
            Attribute testCase = GetMethodAttribute("NUnit.Framework.TestCaseAttribute");
            if (testCase == null)
                return result;
            
            foreach(Attribute testCaseItem in TestMethod.GetCustomAttributes(true))
            {
                if(testCaseItem == null)
                    continue;
                var testCaseType = testCaseItem.GetType();
                if(testCaseType == null || testCaseType.FullName != "NUnit.Framework.TestCaseAttribute")
                    continue;
                PropertyInfo prop = testCaseType.GetProperty("Arguments");
                if(prop == null)
                    continue;
                foreach(MethodInfo getter in prop.GetAccessors())
                {
                    object[] parameters = getter.Invoke(testCaseItem, Type.EmptyTypes) as object[];
                    result.AddParameters(parameters);
                }
            }
            return result;
        }
    }
}

This could be cleaned up some, and some of the magic strings extracted to constants, but overall it is pretty simple. Basically what is going on here is that we are looking for the TestCase attribute, and extracting the arguments for any attributes we find.  It just so happens that the TestExecuteTask base class has a CollectTestParameters() method we can override which allows for this sort of Row testing.  The parameters we extract get stashed in the execution result, which causes the test runner to execute the test once for each group of parameters (the result has a list of parameters, which gets populated with an array of objects for each TestCase attribute), and will correctly display which cases failed if there is a failure.

There are a couple of other small changes that need to happen to get this to work.  There is an NUnitExtension.cs class, which is the plug-in class for the NUnit support, and it handles wiring everything up for us.  First off we need to initialize our new TestExecuteTask, and add it to the list of tasks that run for NUnit tests.  We do that in the InitializePlugin method of the NUnitExtension class:

public override void InitializePlugin()
{
    base.InitializePlugin();
    nUnitProvider.AvailableTasks.Add(new NUnitIgnoreTask());
    nUnitProvider.AvailableTasks.Add(new NUnitSetupTearDownTask());
    nUnitProvider.AvailableTasks.Add(new NUnitExpectedExceptionTask());
    nUnitProvider.AvailableTasks.Add(new NUnitValuesTask());
    nUnitProvider.AvailableTasks.Add(new NUnitRowTestTask());
    nUnitProvider.AvailableTasks.Add(new NUnitTimeoutTask());
    nUnitProvider.AvailableTasks.Add(new NUnitExplicitTask());
    nUnitProvider.AvailableTasks.Add(new TestCaseExecuteTask());
}

Ours gets added to the end of the list, so it will be executed. The next step is to get the plug-in to realize that a method with a TestCase attribute is an executable test method. That trick happens in the handler for the CheckTestMethod event on the UnitTestProvider. All we’re going to do is add another condition to an if statement like so:

void nUnitProvider_CheckTestMethod(object sender, CheckTestMethodEventArgs ea)
{
    IMethodElement method = ea.Method;
    if(//method.Name != null && method.Name.StartsWith("Test")
       ea.GetAttribute("NUnit.Framework", "Test", method) != null
    || ea.GetAttribute("NUnit.Framework.Extensions", "RowTest", method) != null
    || ea.GetAttribute("NUnit.Framework", "TestCase", method) != null)
    {
        ea.IsTestMethod = true;
        ea.Description = ea.GetAttributeText("NUnit.Framework", "Description", method);
        ea.Category = ea.GetAttributeText("NUnit.Framework", "Category", method);
    }
}

The only change to the original code was the additional GetAttribute call at the end of the if statement (the comments were there when I got there, I swear).  Now the only thing left to do is to compile it and drop it in the plug-ins directory.  Now when you are looking at a test class, you should be able to run TestCase decorated test methods without problem.  Well, almost.  There is one thing I was not able to find a clean way to implement, and that is the Result property of the TestCase attribute.  This allows you to streamline tests which are doing equals assertions by having the test method return the actual result, and you specify the expected result by using the result property.  Unfortunately I could not find a way to hook into the actual execution of the test in such a way that I could have access to the specific test properties being used, and the result of the test method execution.  But considering the DevExpress folks will be fixing this issue, I’m sure when they release it there will be support for this feature.  After all, this is simply a stop-gap solution until the next CodeRush release is available, so I’m willing to live with this slight inconvenience.

Happy Testing!  

As of right about now, you should be able to mosey on over to the DxCore Community Plug-ins page, and grab a copy of CR_MoveFile.  This is a plug-in I created primarily as a tool to aid in working in a TDD environment, but which certainly has uses for non-TDD applications.  It does basically what the name suggests: it allows you to move a file from one directory in your solution/project structure to another, even one in a different project.  I implemented this as a code provider (since it could change the functionality if you move the file from one project to another), so it will appear in the Code menu when you have the cursor somewhere within the beginning blocks of a file (“using” sections, namespace declaration, or class/interface/struct declarations).  Once selected, you are presented with a popup window which has a tree that represents your current solution structure, with your current directory highlighted.  You can use the arrow keys to navigate the directories and choose a new home for your file.

If you move files between projects, the plug-in will create project references for you, so you don’t need to worry about that.  When the file is moved, the file contents remain unchanged, so all namespaces will be the same as they were originally.  I did this mostly to keep the plug-in simple, but also because I could see situations where changing them would be good, and situations where it would be bad, and it seemed like a bad choice to make for people.  I’ve been using this plug-in on a day-to-day basis for a while now, and things seem pretty clean.  I did run into a small issue, however, using it within a solution that was under source control.  At this point you need to make sure the project files affected by the move are checked out, otherwise the plug-in goes through the motions, but doesn’t actually do anything, which is quite annoying.  There is also no checking going on to make sure the language is the same between the source and target project, so if you work on a solution that contains both C# and VB.Net projects, you have to be careful not to move files to projects that can’t understand what they are (oh, and the project icons used in the tree view are all the same, so there is no visual indication of which project contains what type of files).

That’s pretty much it.  Clean, simple, basic.  Used with other existing CodeRush/Refactor tools like “Move Type To File” and “Move to Namespace”, this provides for some pretty powerful code re-organization.  Just make sure you run all of your tests :).

Anyone who has been around me for more than a few hours while coding, or who pays any attention to me on Twitter, will know that I am a huge fan of CodeRush and Refactor Pro! from DevExpress.  I consider these sorts of tools essential to getting the most out of your development environment, and I think CodeRush is one of the best tools available for a number of reasons, not the least of which is its extensibility.  CodeRush is built on top of DxCore, which is a freely available library for building Visual Studio plug-ins (incidentally, DevExpress also has a free version of CodeRush called CodeRush XPress, which is built on the same platform).  DxCore provides any developer who wants it access to the same tools that the folks at DevExpress have for building plug-ins and extensions on top of Visual Studio, and several developers (including yours truly) have done just that.

One of the more recent additions to the CodeRush arsenal is CodeIssues.  As of the v9 release, CodeRush includes an extensive collection of these mini code analyzers, which will look at your code in real time and do everything from letting you know when you have undisposed resources, to suggesting alternate language features you may not even be aware of.  A lot of these are also tied in to the refactoring and code generation tools that already exist within CodeRush and Refactor Pro!, so that not only do you see that there is an issue or suggestion, but in a lot of cases you can tell the tool to correct it for you.  Pretty impressive stuff.

So what I would like to do is dig in to how the CodeIssue functionality works within CodeRush by creating a custom CodeIssue provider.  Because I’m a TDD guy, one of the things I’ve been trying to do is build in some tooling around the TDD process to make it that much easier to write code TDD.  So based on that, I’m going to show you how to implement a CodeRush CodeIssueProvider which will generate a warning whenever you have created a Unit Test method with no assertions (which would indicate that you are either dealing with an Integration Test, or your test is not correctly factored).  Note: Since the CodeIssue UI elements are part of the full CodeRush product, and not CodeRush XPress, this plug-in will not do anything unless you are running the full version of CodeRush.

Okay, so the first thing to do is to create a new Plug-In project.  This can either be done from the Visual Studio File –> New Project menu, or by selecting the New Plug-in option from the DevExpress menu in visual studio (if you are using CodeRush XPress and you don’t have the DevExpress menu, my man Rory Becker has a solution for you).  Regardless of which way you go, you will get a “New DxCore Plug-in Project” window, which will ask you what Language you want to write your plug-in in (C# or Visual Basic .Net), and what kind of plug-in you want, along with the standard stuff about what to name the solution and where to store the files.  For our purposes we’re going to go with C# as the Language, a Standard Plug-in, and we’ll call it CR_TestShouldAssert (the CR_ is a naming convention used by the CodeRush team to indicate it’s a CodeRush plug-in, as opposed to a Refactoring or DxCore plug-in).

[Screenshot: New DxCore Plug-in Project dialog]

Next up is the “DxCore Plug-in Project Settings” dialog.  This allows you to give your plug-in a title, and set some more advanced options which deal with how the plug-in gets loaded by the DxCore framework.  We’ll just leave everything as-is and move on to the good stuff.

[Screenshot: DxCore Plug-in Project Settings dialog]

Once your project loads you will be presented with a design surface; this is because a large number of the components that are available via DxCore can actually be found in the Visual Studio toolbox, and you can just drag them out onto your plug-in designer to get started.  The CodeIssueProvider is an exception, though, so we will have to crack open the designer file to add it to our plug-in.  So open up the PlugIn1.designer.cs file, and add the following line of code under the “Windows Form Designer Generated Code” section:

CodeIssueProvider cipTestsShouldAssert;

You’ll need to add a using statement for the DevExpress.CodeRush.Core namespace as well.  Next we need to instantiate it, so we do that in the InitializeComponents method.  When you are finished your InitializeComponents method should look like this:

this.components = new System.ComponentModel.Container();
cipTestsShouldAssert = new CodeIssueProvider(this.components);
((System.ComponentModel.ISupportInitialize)(this)).BeginInit();
((System.ComponentModel.ISupportInitialize)(this)).EndInit();

Now if we switch back over to the designer, we will see our new provider on the design surface.  At this point we can use the Properties window to configure the provider.  The things we need to worry about filling out are the Description, DisplayName, and ProviderName properties.  The Description is the text that will be displayed in the Code Issue catalog, so it needs to clearly explain what the CodeIssueProvider is intended to do.  Let’s go with something like: “A Unit Test should have at least one explicit or implicit assertion.”  As for DisplayName, lets say something like “Unit Test Method Should Assert”, and make the ProviderName the same.

Ok, so now it’s time to actually do the work of finding a test method that violates this condition.  So we need to switch over to the Events list for our provider, and double-click in the CheckCodeIssues drop-down so it generates an event handler for us.  You will now be taken to the code editor and presented with an empty handler that looks something like:

private void cipTestsShouldAssert_CheckCodeIssues(object sender, CheckCodeIssuesEventArgs ea)
{

}

This looks pretty much like your normal event handler: we’ve got the sender object (which would be our provider instance), and then we have a custom EventArgs object.  Looking at this event args object, you can see quite a few methods, and a couple of properties.  The first few methods you see deal with actually adding your code issue, if it exists, to the list of issues reported by the UI.  You’ve got one method for each type of CodeIssue (AddDeadCode, AddError, AddHint, AddSmell, AddWarning), and then one method (AddIssue) which allows you to specify the CodeIssue type.  Now this is where things start to get interesting, because basically we’re at the point where the good folks who wrote DxCore have said “All right, go off and find your problem and report your findings back to me when you’re done”.  So from here we have to figure out whether or not there are any test methods without asserts floating around anywhere.  The good news is that there are a few tools in the CodeRush bag of tricks that can help us.

Perhaps the best tool for figuring out this sort of thing is the “Expressions Lab” plug-in.  You can open this up by going to the DevExpress menu and opening Tool Windows->Diagnostics->Expressions Lab.  This shows you in real time what the AST that CodeRush produces for your code looks like as you move about in a file.  You can also see all of the properties associated with the various syntax elements, and view how things are related.  This is a very handy tool to have.  Before we dig too deep into the Expressions Lab, let’s get a start on finding our CodeIssue.  We know that we are going to be looking at methods here, since we are ultimately searching for test methods, so the first thing to do is to limit the scope of our search to just methods.  The CheckCodeIssues event is fired at a file level, so you are basically handed an entire file to search by the DxCore framework.  We need to filter that down a bit and only pay attention to the methods contained in the current file.  To do that we’re going to use the ResolveScope() method of the CheckCodeIssuesEventArgs object.  Calling ResolveScope() gives us a ScopeResolveResult object, which doesn’t sound very interesting, but this object has a wonderful little method on it called GetElementEnumerator().  This method will allow you to pass in a filter expression, and return all of the elements that match that filter as an enumerable collection.  So to get to this, let’s add the following to the body of our event handler:

var resolveScope = ea.ResolveScope();
foreach (IMethodElement method in resolveScope.GetElementEnumerator(ea.Scope, new ElementTypeFilter(LanguageElementType.Method)))
{
}

This looks pretty straightforward, but there are a couple of things I want to point out. First is the ea.Scope property that we are passing in to the GetElementEnumerator() method. This is the AST object that represents the top of the parse tree that we are going to be searching for code issues. Typically this is a file-level object, but I don’t know that you can count on that always being the case (changing the parse settings could potentially affect how much of the code is considered valid at a time, and so you could get larger or smaller segments of code).  The other interesting bit is the ElementTypeFilter.  This allows us to filter the list of AST elements given to us in our enumerable based on their LanguageElementType (LanguageElement is the base class for syntax elements within the DxCore AST structure; all nodes have an ElementType property which exposes a LanguageElementType enum value). In our case we’re only interested in methods, so we’re using LanguageElementType.Method.  The result is a collection of all of the methods within our Scope.

Now that we have all of our methods, we need to figure out if they are Test methods.  To do this we’ll have to look for the existence of an Attribute on the method.  Taking a look at Expressions Lab, we can see that a Method object has an Attributes collection associated with it. So we should be able to search the list of attributes for one with a Name property of “Test”.  Using Linq, we can do this pretty easily like this:

method.Attributes.OfType<IAttributeElement>().Count(a => a.Name == "Test")

This will give us a count of the “Test” attributes on our method. We can put this into an if statement like so:

if (method.Attributes.OfType<IAttributeElement>().Count(a => a.Name == "Test") > 0)
{
}

A quick note; I’m using the OfType<T>() method to convert the collection returned by the Attributes Property into an enumerable of IAttributeElements just as an easy way of enabling Linq expressions against the collection. Since DxCore is written to work with all versions of VisualStudio, there really isn’t any official Linq support. As a matter of fact, using the expression we did limits the plug-in to only those people with .Net Framework 3.5 installed on their development machines. I think that in this day and age, this is a fairly safe assumption, so I’m not that worried about it. I would like to point out also, that having this expression in place does not prevent the plug-in from working with Visual Studio 2005, as long as the 3.5 framework is installed.
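
If you did need to stay Linq-free (say, to keep supporting a plain 2.0 install), the same check is easy enough to write as a plain loop. Here is a rough equivalent, using only the Attributes collection and Name property we are already relying on above:

// Linq-free equivalent of the Count() check above
bool isTestMethod = false;
foreach (object item in method.Attributes)
{
    IAttributeElement attribute = item as IAttributeElement;
    if (attribute != null && attribute.Name == "Test")
    {
        isTestMethod = true;
        break;
    }
}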


Ok, so now we have a list of methods, and we’re filtering them based on whether or not they are Test methods (defined by the existence of a Test attribute).  The next thing to do is look for an Assert statement within the body of our method.  This is another place where the Expressions Lab proves invaluable.  Looking at Expressions Lab we discover that our Assert statement is in fact an ElementReferenceExpression, and is a child node of our Method object.  With this knowledge in hand we can use the FindChildByName method on our Method object to look for an Assert reference:

var assert = method.FindChildByName("Assert") as IElementReferenceExpression;

Now all we have to do is test whether or not our assert variable is null, and we know whether or not this method violates our rule. Once we do that test we can add the appropriate Code Issue Type to the CodeIssues list using our event args. The last piece of the puzzle then will look something like this:

if (assert == null)
{
    ea.AddIssue(CodeIssueType.CodeSmell, (SourceRange)method.NameRanges[0], "A Test Method should have at least one Assert");
}

With this in place we should now be able to run our project and try it out. Using F5 to debug a DxCore plug-in will launch a new instance of Visual Studio. From there, if you create a new project (or open an existing one) and write a test method which does not have an Assert, you should see a red squiggle underneath the name of the method. Hovering over that with your mouse, you’ll see our Code Issue text presented. Adding an Assert will make the Code Issue disappear.


Well, things are looking good here: we’ve got code that is searching for an issue and displaying the appropriate warning if our condition is met.  There is one other condition we should probably consider, however.  The one case I can think of where our rule does not apply is when we are expecting the code under test to throw an exception.  In that case there would be an ExpectedException attribute on the test method.  To make our users happy we should probably implement this functionality.

The good news is we already know how to accomplish this, since we are using the same technique to determine if the method we’re looking at is a test method.  All we need to do is add a check that looks for “ExpectedException” instead of “Test”.  While we’re at it, it seems like a reasonable thing to get an instance of the attribute and then check it for null, similar to how we’re handling the assert.  With all of this done the code should look like this:

var assert = method.FindChildByName("Assert") as IElementReferenceExpression;
var expectedException = method.Attributes.OfType<IAttributeElement>().FirstOrDefault(a => a.Name == "ExpectedException");
if (assert == null && expectedException == null)
{
    ea.AddIssue(CodeIssueType.CodeSmell, (SourceRange)method.NameRanges[0], "A Test Method should have at least one implicit or explicit Assertion");
}

So now we should be able to run this, and see that the code issue disappears if we have a test method with either an assert statement or an expected exception attribute. Pretty cool. You’ll notice that I also updated our issue message so it reflects the fact that we are able to handle implicit assertions (in the form of our ExpectedException attribute).  For the sake of completeness, here is what our finished CheckCodeIssues method looks like:

private void cipTestsShouldAssert_CheckCodeIssues(object sender, CheckCodeIssuesEventArgs ea)
{
    var resolveScope = ea.ResolveScope();
    // Walk every method in the scope we were handed
    foreach (IMethodElement method in resolveScope.GetElementEnumerator(ea.Scope, new ElementTypeFilter(LanguageElementType.Method)))
    {
        // Only test methods (marked with a "Test" attribute) are interesting
        if (method.Attributes.OfType<IAttributeElement>().Count(a => a.Name == "Test") > 0)
        {
            // Look for an explicit Assert call, or an implicit assertion
            // in the form of an ExpectedException attribute
            var assert = method.FindChildByName("Assert") as IElementReferenceExpression;
            var expectedException = method.Attributes.OfType<IAttributeElement>().FirstOrDefault(a => a.Name == "ExpectedException");
            if (assert == null && expectedException == null)
            {
                ea.AddIssue(CodeIssueType.CodeSmell, (SourceRange)method.NameRanges[0], "A Test Method should have at least one implicit or explicit Assertion");
            }
        }
    }
}

And that’s it. Granted, there are some things here I would like to change before releasing this into the wild: we are specifically looking for NUnit/MbUnit style test method declarations, for one, and we are also only looking for the short version of the attribute names (see the sketch below). But this should give you a good idea of how things work.
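
If you wanted to loosen those restrictions, one option is to pull the attribute check out into a small helper that knows about a few more attribute names.  The IsTestMethod helper below is something I am sketching here, not part of DxCore, and the set of names is my own guess at what you would want to cover (NUnit/MbUnit, xUnit, and MSTest, plus the long Attribute-suffixed spellings):

private static readonly string[] TestAttributeNames = new[]
{
    "Test", "TestAttribute",               // NUnit / MbUnit
    "Fact", "FactAttribute",               // xUnit
    "TestMethod", "TestMethodAttribute"    // MSTest
};

private static bool IsTestMethod(IMethodElement method)
{
    // Same Attributes/Name check as before, just against a wider list
    return method.Attributes.OfType<IAttributeElement>()
                 .Any(a => TestAttributeNames.Contains(a.Name));
}

The Assert search could get a similar treatment, since each framework has its own assertion entry points.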

If you are interested in seeing a more polished final version, you can either download the finished source for this post, or have a look at my CR_CreateTestMethod (admittedly poorly named) plug-in on the DxCore Community Plug-In’s site.

I ran into an odd problem recently working with some Linq2SQL-based persistence code.  There is some code someone put together to commit a list of changed entities to the database as part of a single transaction, which simply iterates through the list and performs the appropriate action.  The problem I was having was that I had an object referenced by another object that needed to be persisted first, otherwise there was a foreign key violation.  To add to the strangeness, there seemed to be some magic going on (most likely utilizing the INotifyPropertyChanged goodness), so that even if I tried to persist just my dependent object first, both were still showing up in the list, and always in exactly the wrong order.  Now, I’m okay with magic.  Magic makes a lot of things a lot easier.  The problem arises whenever the magic is incomplete, and doesn’t follow through to take care of the whole operation.  It’s like someone coming up to you and saying “Pick a card”, at which point you do, and put the card back, and they say “I know what your card was” and walk away.  Not real convincing.  That is what was going on here.  There was the smarts to know that changes were being made to more than one entity, and there were even attributes to define which properties contained dependent objects, but no smarts to actually deal with the case where you want to save more than one object in an object graph at a time.

So it occurred to me that I should be able to do some Linq-y magic and create some sort of iterator that would return dependent objects in the appropriate order, so the least dependent of the objects get moved to the beginning of the list.  My first step, since I wasn’t really sure how to do this, was to write a test.  I made it more or less mirror the issue I was facing: a list of two items, one of which is a dependency of the other.  I don’t know that there is a lot of value in posting all of the test cases here, but the end result was rather nice.  Sure, it took several iterations, and there was plenty of infinite looping and stack overflowing (which does some fun things to Studio when you’re running your tests with TestDriven.Net), but I think this is a reasonable solution to the problem:

public static IEnumerable<T> EnsureDependenciesFirst<T>(this IEnumerable<T> items, Func<T, IEnumerable<T>> selector)
{
    // Nothing to reorder (and no chance of runaway recursion) with fewer than two items
    if (items.Count() < 2)
        return items;
    // First pass: the items with no dependencies in the current list
    var firstPass = items.Where(t => !items.Intersect(selector(t)).Any());
    var remainingItems = items.Except(firstPass);
    // If nothing was filtered out, we have a circular dependency; bail out
    if (items.Count() == remainingItems.Count())
        return remainingItems;
    return firstPass.Concat(remainingItems.EnsureDependenciesFirst(selector));
}

Ok, so what do we have here?  Well, to start out I’m checking the item list to see if there are at least two items in it; if not, I just return the list.  This provides a means to avoid an infinite loop due to the recursive call, and provides a shortcut for the scenario with only one item.  Next I use the Where() method, combined with the user-supplied selector function, to look at each item, retrieve its list of dependencies (which is what the selector function does), and check whether the current list contains any of the dependencies for the object.  The results of this first pass are the objects which have no dependencies at all, so they need to be first in the list.  The next logical step is to run the operation again on a list that does not contain the items filtered out by the first pass.  This is done via a recursive call back to the EnsureDependenciesFirst extension.  You will notice we’re checking the count of the remaining items against the current list, and returning the list if they are the same.  This is another safety precaution for dealing with infinite loops: if we have a circular dependency, this bit will just return the items that are interdependent.
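
To make that concrete, here is roughly the shape of the simplest scenario from my tests; the Thing class is just a stand-in I’m using for illustration:

public class Thing
{
    public string Name;
    public Thing DependsOn;
}

// A depends on B, so B should end up first
var b = new Thing { Name = "B" };
var a = new Thing { Name = "A", DependsOn = b };

var ordered = new[] { a, b }.EnsureDependenciesFirst(
    t => t.DependsOn == null ? Enumerable.Empty<Thing>() : new[] { t.DependsOn });

// ordered now yields B, then A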

You will note that this is a generic function that really has nothing at all to do with the entities I am dealing with.  This is largely due to the fact that it was built TDD-style, so I just used a simple class which had a property that could take another instance of itself.  To use this to overcome my entity-committing problem, I would have to write a not-too-small function to retrieve the list of dependent objects from the entity (there would need to be some reflection magic to look at attributes on the properties to determine which ones contain dependencies; see the sketch below), but it should pretty much drop in to the foreach statement that is currently being used to persist the entities.
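
For what it’s worth, that function might look something like the following.  This is purely a sketch: DependencyAttribute is a hypothetical marker standing in for whatever attribute the real persistence layer uses to flag dependent properties:

// Hypothetical marker attribute standing in for the real one
[AttributeUsage(AttributeTargets.Property)]
public class DependencyAttribute : Attribute { }

public static IEnumerable<T> GetDependencies<T>(T entity)
{
    // Find every property flagged as a dependency that holds a T,
    // and return the non-null values
    return entity.GetType()
                 .GetProperties()
                 .Where(p => p.IsDefined(typeof(DependencyAttribute), true)
                          && typeof(T).IsAssignableFrom(p.PropertyType))
                 .Select(p => (T)p.GetValue(entity, null))
                 .Where(value => value != null);
}

With that in place, the persistence loop becomes something like foreach (var entity in changedEntities.EnsureDependenciesFirst(GetDependencies)) { ... }.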

Incidentally, I learned from my dev team that the “official” way of dealing with this is a “ReorderChanges” method, which takes two entities in the order in which they should be persisted.  I think I like my solution better, mostly because it should mean I don’t have to worry about it again.

As of today I have updated several things on the site.  First, I have a new hosting provider, WinHost.  This was primarily a financial decision (saving $3.03 a month; not a lot, but hey, every little bit helps, right?), but it looks like there are some nice technical advantages to switching as well, such as IIS 7 hosting and SQL Server access.  You also get unlimited domain name pointers, so the IThinkIn.Net domain that I registered a few years back and could never wire up is now live.  While I was in the process of moving to a new server I also updated to the latest version of dasBlog, seeing as I was still running the original .Net 2.0 version from sometime in early 2006 (I think).  I’m not sure whether there will be any immediate signs of new and exciting things, but you never know.  You should now be able to use Gravatar icons and OpenID, so it can’t be all bad, right?

Please contact me if for any reason there is an issue with any of the content.  As far as I know everything moved over without a hitch, but I’ve not verified everything at this point.