On November 27th, a beta release of version 9.3 of the Developer Express components, including CodeRush and Refactor! Pro, was made available to subscribers.  This release is pretty significant to me because it contains a major feature I have been waiting on for a long time: a Unit Test Runner.  There were some teasers released by Mark Miller a while back, which only made me want to get my hands on the tool that much more.  My initial impressions are that it is very nice.  It is similar to TestDriven.Net in that it provides context menu options to run tests at various levels of granularity (single test, file, project, and solution level) and includes a debug option.  At this point it does not contain some of the additional coolness that TestDriven gives you, like NCover/Team Coverage and TypeMock integration, but it does have the advantage of being extensible.

I knew it was extensible because Mr. Miller told me it was extensible (the title “The Extensible Unit Test Runner You’ve Been Waiting For” was a clue).  I did not realize how extensible, however, until after I submitted a bug report to DevExpress.  The bug I was reporting (the NUnit TestCase attributes were not recognized), it turns out, had already been brought to the attention of the DX team by way of a forum post, and they had already planned on correcting it with the next 9.3 release, but I could have saved myself (and Vito on the DevExpress team) some time by taking a peek at the source samples bundled with the 9.3 release.  Yep, you guessed it: there, under a shared source license, were all of the test framework implementation projects.  This meant I could whip together my own temporary fix while I was waiting for the next release.  It seemed like something that other folks might want to know about, so I thought I would share it here.

The biggest piece of the puzzle is a new TestExecuteTask class for handling the TestCaseAttribute.  Due to my complete lack of creativity, I called mine TestCaseExecuteTask, and it looks like this:

using System;
using System.Reflection;
using DevExpress.CodeRush.Core;
using DevExpress.CodeRush.Core.Testing;

namespace CR_NUnitTesting
{
    public class TestCaseExecuteTask : TestExecuteTask
    {
        public override TaskExecuteResult CollectTestParameters()
        {
            TaskExecuteResult result = TaskExecuteResult.SkippedTaskResult;

            // If the method isn't decorated with TestCase at all, there is nothing to do.
            Attribute testCase = GetMethodAttribute("NUnit.Framework.TestCaseAttribute");
            if (testCase == null)
                return result;

            // Pull the Arguments array out of every TestCase attribute on the method.
            foreach (Attribute testCaseItem in TestMethod.GetCustomAttributes(true))
            {
                if (testCaseItem == null)
                    continue;
                var testCaseType = testCaseItem.GetType();
                if (testCaseType == null || testCaseType.FullName != "NUnit.Framework.TestCaseAttribute")
                    continue;
                PropertyInfo prop = testCaseType.GetProperty("Arguments");
                if (prop == null)
                    continue;
                // Arguments only exposes a getter; invoking it returns the argument array.
                foreach (MethodInfo getter in prop.GetAccessors())
                {
                    object[] parameters = getter.Invoke(testCaseItem, null) as object[];
                    result.AddParameters(parameters);
                }
            }

            // Hand the collected parameter sets back to the runner.
            return result;
        }
    }
}

This could be cleaned up some, and some of the magic strings extracted to constants, but overall it is pretty simple.  Basically what is going on here is that we are looking for the TestCase attribute and extracting the arguments from any attributes we find.  It just so happens that the TestExecuteTask base class has a CollectTestParameters() method we can override, which allows for this sort of row testing.  The parameters we extract get stashed in the execution result, which causes the test runner to execute the test once for each group of parameters (the result has a list of parameters, which gets populated with an array of objects for each TestCase attribute), and it will correctly display which cases failed if there is a failure.
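For context, this is the style of NUnit test the task is meant to support.  A minimal example (the fixture and method names here are just illustrative):

using NUnit.Framework;

[TestFixture]
public class MathTests
{
    // Each TestCase attribute becomes one run of the test with those arguments.
    [TestCase(2, 3, 5)]
    [TestCase(-1, 1, 0)]
    [TestCase(0, 0, 0)]
    public void Add_ReturnsSum(int x, int y, int expected)
    {
        Assert.AreEqual(expected, x + y);
    }
}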

There are a couple of other small changes that need to happen to get this to work.  There is an NUnitExtension.cs file, which contains the plug-in class for the NUnit support, and it handles wiring everything up for us.  First off we need to initialize our new TestExecuteTask and add it to the list of tasks that run for NUnit tests.  We do that in the InitializePlugin method of the NUnitExtension class:

public override void InitializePlugin()
{
    base.InitializePlugin();
    nUnitProvider.AvailableTasks.Add(new NUnitIgnoreTask());
    nUnitProvider.AvailableTasks.Add(new NUnitSetupTearDownTask());
    nUnitProvider.AvailableTasks.Add(new NUnitExpectedExceptionTask());
    nUnitProvider.AvailableTasks.Add(new NUnitValuesTask());
    nUnitProvider.AvailableTasks.Add(new NUnitRowTestTask());
    nUnitProvider.AvailableTasks.Add(new NUnitTimeoutTask());
    nUnitProvider.AvailableTasks.Add(new NUnitExplicitTask());
    // Register our new task so the runner picks up TestCase parameters.
    nUnitProvider.AvailableTasks.Add(new TestCaseExecuteTask());
}

Ours gets added to the end of the list, so it will be executed. The next step is to get the plug-in to realize that a method with a TestCase attribute is an executable test method. That trick happens in the handler for the CheckTestMethod event on the UnitTestProvider. All we’re going to do is add another condition to an if statement like so:

void nUnitProvider_CheckTestMethod(object sender, CheckTestMethodEventArgs ea)
{
    IMethodElement method = ea.Method;
    if(//method.Name != null && method.Name.StartsWith("Test")
       ea.GetAttribute("NUnit.Framework", "Test", method) != null
    || ea.GetAttribute("NUnit.Framework.Extensions", "RowTest", method) != null
    || ea.GetAttribute("NUnit.Framework", "TestCase", method) != null)
    {
        ea.IsTestMethod = true;
        ea.Description = ea.GetAttributeText("NUnit.Framework", "Description", method);
        ea.Category = ea.GetAttributeText("NUnit.Framework", "Category", method);
    }
}

The only change to the original code was the additional GetAttribute call at the end of the if statement (the comments were there when I got there, I swear).  Now the only thing left to do is to compile it and drop it in the plug-ins directory.  Now when you are looking at a test class, you should be able to run TestCase-decorated test methods without a problem.  Well, almost.  There is one thing I was not able to find a clean way to implement, and that is the Result property of the TestCase attribute (an example of what I mean is below).  This allows you to streamline tests which are doing equals assertions by having the test method return the actual result, while you specify the expected result using the Result property.  Unfortunately I could not find a way to hook into the actual execution of the test in such a way that I could have access to both the specific test properties being used and the result of the test method execution.  But considering the DevExpress folks will be fixing this issue, I’m sure when they release the fix there will be support for this feature.  After all, this is simply a stop-gap solution until the next CodeRush release is available, so I’m willing to live with this slight inconvenience.
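For reference, the Result feature I mean looks something like this in NUnit (2.5-era syntax), shown purely as an illustration of what the stop-gap doesn’t handle:

using NUnit.Framework;

[TestFixture]
public class MathResultTests
{
    // The runner would need to compare the returned value against Result,
    // which is the hook I couldn't find a clean way into.
    [TestCase(2, 3, Result = 5)]
    [TestCase(10, -4, Result = 6)]
    public int Add_ReturnsSum(int x, int y)
    {
        return x + y;
    }
}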

Happy Testing!  

So I had this analogy pop into my head a while back and I’ve been sitting on it because, quite frankly, I was almost embarrassed to have thought of it.  I finally decided that I might as well post it since no one is going to read this anyway, so here goes:

Writing software without TDD is like having unprotected sex.  It’s extremely irresponsible in this day and age, when everyone is supposed to know better, but it still happens a lot.  Sure, it feels better, and let’s face it, in the heat of the moment you really don’t want to take the time to make sure you’ve taken all of the right precautions…you might lose your edge.  The problem is, if we don’t do it we have that lingering fear in the back of our minds, that “What if?” that keeps popping up at the worst moments.  Most of the time it’s okay, and everything is fine, but it only takes one time for things to go wrong to make you regret your decisions.

So come on folks, we all know what we should be doing, so no more making excuses.  After a while you get used to it, and you find that the peace of mind it gives you far outweighs the momentary delights of living on the wild side.

So if you recall from some of my earlier posts, I’ve talked about the concept of the “Friend” class in C++ and how it could apply to TDD within .Net.  Well, today, with the help of Roy Osherove, I stumbled upon the InternalsVisibleToAttribute in .Net 2.0.  This allows you to specify, within one assembly, another assembly that should have access to your assembly’s internal members.  This is genius, and goes a long way toward allowing you to keep your code encapsulated while still being testable.  If we could just get them to go one step farther and allow access to private and protected members as well, life would be good, and there would be no more of this OOD vs TOOD junk.

The other interesting thing about this is that it could allow you to give a separate utility…say the Castle MicroKernel…access to internal class constructors, and thus enforce creation of your objects through the Kernel from outside assemblies.  This is actually a feature I desperately want in my current project, but sadly, I am limited to .Net 1.1, so I can’t quite get there.

Here is a quick look at how this works.  Here is a very unrealistic class in an assembly that I want to test:

// No access modifier on the class means it is internal by default,
// so it is only visible to TestAssembly via InternalsVisibleTo.
class TestClass
{
    internal TestClass()
    {}

    // Still invisible to the test assembly; InternalsVisibleTo does not expose private members.
    private bool PrivateMethod()
    {
        return false;
    }

    internal bool SomeMethod()
    {
        return true;
    }

    public string PublicMethod()
    {
        return "You can see me";
    }
}

Now, I add the following to the AssemblyInfo.cs file:

using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("TestAssembly")]

And here is what Intellisense looks like in my test class
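In code terms, that means a test project named TestAssembly can now call the internal members directly.  A minimal sketch (assuming NUnit; the fixture name is just illustrative):

using NUnit.Framework;

[TestFixture]
public class TestClassTests
{
    [Test]
    public void SomeMethod_ReturnsTrue()
    {
        // Both the internal constructor and the internal method are
        // accessible here thanks to InternalsVisibleTo("TestAssembly").
        TestClass subject = new TestClass();
        Assert.IsTrue(subject.SomeMethod());
        Assert.AreEqual("You can see me", subject.PublicMethod());
        // PrivateMethod() is still off limits -- private members stay private.
    }
}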

Not bad.  Overall I would say this is definitely a good feature to have in your toolbox.  Internals are not perfect, but they are much more versatile than a lot of folks give them credit for.  Now if only I could get something like this in .Net 1.1…

The project I’m working on now has a huge need for auto-update.  Strangely enough, there aren’t a whole lot of documented solutions for an auto-update application for .Net 1.1.  In the 2.0 world you have ClickOnce, which handles those sorts of things for you (and in a way that isn’t terribly difficult to manage as a developer…as long as you pay attention to what you’re doing), but the only real option you get from MS on this is the AutoUpdater Application Block from the Patterns and Practices guys.  I took a look at this when it was in its 1.0 version a while back, and really didn’t care for it much.  The big reason was that it required you to set up an AppStart.exe file, which would take a look at your config file to determine which directory to start the app from.  When updates arrived, they were put in new directories based on versions.  This seemed like a lot of effort, and a lot of overhead.  The good news is that there is now a 2.0 version of the application block, and it looks like it is much more configurable and has the ability to do in-proc updates.

So here is my issue.  I want to change the default downloader (used to retrieve the updated files) from the BITSDownloader that ships with the block to one which will allow me to copy files from a UNC path.  Fortunately, a sample implementation of such a thing exists in the documentation, so there is a starting point.  It is pretty rough around the edges and isn’t testable, so I am working on creating a nicer, testable version of the sample downloader.  Here is the problem, though: the dependencies are insane!  And it doesn’t look like even mock objects will be able to help.  Initially I found this a little strange, since I know that the Patterns and Practices group was headed up by some folks who were heavy into the Agile methodologies.  I also know for a fact that the Enterprise Library components have tests included.  So I pulled up the AutoUpdater Application Block to see how they were testing things…and what do you think I found?  They were not testing anything!  I can only assume that the updater block came from another group, because there were no tests in sight.  So, since I had the source code, I’m reduced to making modifications to the block to support testing.  For the most part this involved marking public methods/properties as virtual so that Rhino.Mocks can mock them.  I also added some parameters to the constructor of the UpdateTask class so that I could supply mock versions of some of the dependencies.
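To make the shape of those changes concrete, here is a rough sketch of the pattern.  These are simplified stand-ins I made up for illustration, not the block’s real types or API:

// A dependency the task talks to, with its members virtual so Rhino.Mocks
// can generate a mock subclass of it.
public class FileCopier
{
    public virtual void Copy(string sourcePath, string destinationPath)
    {
        System.IO.File.Copy(sourcePath, destinationPath, true);
    }
}

public class UpdateTask
{
    private readonly FileCopier copier;

    // The extra constructor parameter lets a test hand in a mock copier
    // instead of the real one.
    public UpdateTask(FileCopier copier)
    {
        this.copier = copier;
    }

    public virtual void Apply(string source, string destination)
    {
        copier.Copy(source, destination);
    }
}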

I can’t imagine trying to test this without the source code to modify…as it stands, I don’t feel really good about shipping around a non-standard version of the library.  Overall I’m quite disappointed in the P&P folks on this one.  I had high hopes, after seeing the rest of the Enterprise Library, that I would be able to test my extensions fairly easily.  I guess that goes to show that even in organizations (or specifically groups) where TDD is a best practice, it still faces hurdles to acceptance.

Thinking about my earlier post discussing the OOP vs TOOP problem, I mentioned at the end that the best solution to this problem, in my mind, would be integrated language support for test classes.  Specifically, a way to let the compiler/runtime know that a specific class is a test class, and should therefore be able to access any and every member of the class it tests.


It occurred to me that such blatant intrusion into the privacy of a class is not unknown in the programming world.  C++ has the notion of a “friend” class: a class that can access all members of another class regardless of their protection level.  To keep things civil, so that just any class can’t declare itself to be a friend of any class it wants, the class being accessed declares specifically that classes X, Y, and Z are friends, and so can have free rein.  Granted, this is considered to be rather scary, and one of those features that makes C++ an ideal tool for shooting one’s own foot off.


But this concept has some merit in the context of test classes.  In the .Net world we could potentially use attributes on a class to identify what the test class (or classes, I suppose) for that class are.  The compiler and runtime could then use that information to provide unlimited access only to the classes listed in the test class list.  For more protection, perhaps it would also validate that the classes in the list have the appropriate attribute (TestFixture in NUnit, TestClass in Team System) before allowing access.  You would need some additional tooling in the development environment so that you would get Intellisense on all of the private/protected members, but that should be easy for the MS guys after building in the test support, right?
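To make the proposal a little more concrete, here is a purely hypothetical sketch of what the attribute side of it might look like.  None of this exists in the framework today; the names are invented:

using System;

// Hypothetical: declares which fixture is allowed full access to this class.
[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public class VisibleToTestClassAttribute : Attribute
{
    private readonly Type testClass;

    public VisibleToTestClassAttribute(Type testClass)
    {
        this.testClass = testClass;
    }

    public Type TestClass
    {
        get { return testClass; }
    }
}

// Usage: the production class names its test fixture explicitly, and the
// compiler/runtime would grant that fixture access to private and protected members.
[VisibleToTestClass(typeof(OrderProcessorTests))]
public class OrderProcessor
{
    private decimal ApplyDiscount(decimal total) { return total * 0.9m; }
}

public class OrderProcessorTests { /* the TestFixture, in a real test assembly */ }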


There is some additional danger in this approach, since there is some potential to modify IL at runtime, but couldn’t there be additional protections around such things?  As someone with no knowledge of the internal workings of the CLR, I can’t say for sure, but it’s worth trying.


So, I now officially propose to Microsoft that this feature be added to the 4.0 version of the framework.


Do you think they heard me?

Here is something I’ve been kicking around in my head for a while, and thought I would put it down in more or less a “permanent” format so maybe I’ll do something about it sometime…

Back when I was first trying to get my head around TDD, one of the things that I found most clarifying was an idea I first saw in Test-Driven Development in Microsoft .NET (Microsoft Press).  The idea is that you write tests based on your requirements.  So as a developer, you should (hopefully) have been given a list of requirements by someone for the project you are working on.  When you go to build the software, you look at the requirements list and pick one (usually a simple one) to start implementing.  Once you have your requirement, you start brainstorming tests that can be implemented to fulfill that requirement.  Once you can no longer think of tests for a requirement, you move on to the next one.  And once you have run out of requirements, you’re done writing the software.

This struck me not only as a good way to approach the problem of how to come up with tests, but also as a potentially powerful tool for tracking progress within a project.  I can imagine something not unlike the NUnit test runner that, instead of listing all of the tests, lists all of the requirements.  If there are tests written for a requirement, then you could potentially expand the requirement to see the list of tests.  When you execute the tests, you can then see which requirements are complete (more on this in a minute), and also which ones are functional.  The assumption being, if a test fails, then there is a functional problem with that requirement.

To support this, a few modifications to the standard TDD toolset may be needed.  First, a place to put the list of requirements in code would be useful.  This could be an XML document with the requirement descriptions and IDs, or maybe even a plain text document, or a piece of code.  I actually like the idea of the XML document, because potentially you could build an InfoPath form or something that a BA could use to put together the requirement list, and then hand the XML doc over to the development team.  From there, an additional attribute would be added to the NUnit tests, so that a specific test could be associated with a specific requirement (a sketch of what that might look like is below).  If you wanted to be able to give a “percentage implemented” for a requirement, then you would probably want to do something like create all of the test methods you can come up with during a brainstorming session, and put an Ignore attribute on them.  At that point it should be feasible to see how many of the methods are implemented and how many are still being ignored, and then grab a percentage.  This number would change as time progressed and the development team came up with additional tests for specific requirements, but it would probably stabilize by the time serious work started on a specific requirement.
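Here is a rough sketch of that attribute idea.  This is purely hypothetical code to illustrate the proposal; the Requirement attribute, the IDs, and the Order class are all invented:

using System;
using NUnit.Framework;

// Hypothetical attribute tying a test to a requirement ID from the XML doc.
[AttributeUsage(AttributeTargets.Method)]
public class RequirementAttribute : Attribute
{
    private readonly string id;
    public RequirementAttribute(string id) { this.id = id; }
    public string Id { get { return id; } }
}

public class Order
{
    private int itemCount;
    public int ItemCount { get { return itemCount; } }
    public void AddItem() { itemCount++; }
}

[TestFixture]
public class OrderTests
{
    [Test]
    [Requirement("REQ-001")]
    public void NewOrder_StartsWithZeroItems()
    {
        Assert.AreEqual(0, new Order().ItemCount);
    }

    // Brainstormed during the requirement review, not implemented yet --
    // an Ignored test counts against the "percentage implemented".
    [Test, Ignore("Not implemented yet")]
    [Requirement("REQ-001")]
    public void AddingItem_IncrementsItemCount()
    {
    }
}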

So that’s it…my brilliant idea.  I actually think there wouldn’t be a whole lot of work involved in getting it going, either: a new attribute, an updated unit test runner, and some XML work.  It would be nice to be able to define the requirements in a way that would let you use Intellisense to select them while in VS.Net…maybe code-generate a struct or enum based on the XML doc?

I have officially crossed over….I am now using Mock Objects in my tests and loving it.  After much humming and hawing, and trying to figure out how to write truly effective tests, I decided to give it a go, and so grabbed a copy of Rhino Mocks and started the grueling task of converting some data access code so that I no longer needed a database to run the tests.  It took a little bit to get my mind around the new way of thinking, but I have to say it worked great.  I’m now able to test my data access routines with complete success, and about a 95% code coverage rate.  This is all on top of the fact that I’m using Enterprise Library (for 1.1) for data access and exception handling.

Now, there is a very interesting article from Martin Fowler called simply “Mocks Aren’t Stubs”.  In this article he discusses two approaches to dealing with the sort of problem that I had with my data access code.  One is to create object stubs within your tests that you can use to feed known values back to your tests.  This is not at all a bad approach, and I am using it in some cases in conjunction with my mock objects.  The other approach is to use mock objects for all of the tricky stuff.  He observed an interesting dichotomy between the sorts of TDD folks who follow each approach.  The folks that use stubs tend to be very results-oriented in their testing: that is, they verify the values of various objects after performing an action, using the test framework’s Assert calls.  This, I think, is most in line with the way TDD is supposed to work.  For the mock object folks, though, another thing starts to happen.  These tests gain the ability to verify that specific methods are called in a specific order, and therefore the tests that come from this approach start becoming less results-oriented and rely more on the method verification from the mocking framework.  He points out these two styles of testing without really saying one is better than the other (he does indicate that he tends to be in the stub camp, but I think that is a personal choice and level-of-comfort thing), beyond pointing out that relying on the results of the test tends to be more in line with the pure TDD way of doing things.

Now, I had read this article before I started using mocks within my project.  I thought about using stubs throughout, but since I was testing data access code, I would have had a lot of methods to implement in my stub objects that wouldn’t really be part of what I was testing, so it seemed a large waste of my time.  As I started implementing mocks in the tests, I paid attention to how I was testing to see if I was becoming a “Mockist” in the Fowler sense.  Overall I would say that I am not, currently, a “Mockist”.  I’m still overwhelmingly validating the results of calls based on the methods I’m testing, and just using the mocks to feed appropriate values into the classes I’m trying to test.  As a matter of fact, I could probably remove the call to _mocks.VerifyAll() at the end of my tests, since I don’t really care.

I can say, though, that the ability to verify what is being called with what arguments has been extremely useful in my data access testing space.  I’ve basically got objects which are making calls to stored procs on Oracle to do my data work.  The DAL is a real simple deal that doesn’t have much in the way of smarts, but works well enough for a small project.  Because I’m making stored proc calls to the database, I can now use the mocks to make sure that when I call the Save() method on my User object (for example), it’s calling ExecuteNonQuery() with the proc name of SAVE_USER.  I can also verify that all of the input and output parameters are being set correctly, which is a big consideration when adding a property to the data objects.  In that way I can know that my Save() method is doing what it is supposed to even though it returns no value.  This checking could probably be done by stubbing the Enterprise Library Database and DBCommand objects, and then setting values within my stub object, which I could verify using an Assert call, but that seems like an awful lot of work when I have the option to simply set expectations and then do a VerifyAll().
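As a rough illustration of that last point, here is the shape of such a test using Rhino Mocks’ record/replay style.  The IDatabase interface and the User class are simplified stand-ins I made up for this sketch, not the actual Enterprise Library types:

using NUnit.Framework;
using Rhino.Mocks;

// Simplified stand-in for the data access dependency.
public interface IDatabase
{
    int ExecuteNonQuery(string procName, string userName);
}

public class User
{
    private readonly IDatabase _database;
    public string Name;

    public User(IDatabase database)
    {
        _database = database;
    }

    public void Save()
    {
        // The behavior under test: Save() should call the SAVE_USER proc.
        _database.ExecuteNonQuery("SAVE_USER", Name);
    }
}

[TestFixture]
public class UserTests
{
    [Test]
    public void Save_CallsSaveUserProc()
    {
        MockRepository mocks = new MockRepository();
        IDatabase database = (IDatabase)mocks.CreateMock(typeof(IDatabase));

        // Record the expectation: the proc gets called once with these arguments.
        Expect.Call(database.ExecuteNonQuery("SAVE_USER", "Bob")).Return(1);
        mocks.ReplayAll();

        User user = new User(database);
        user.Name = "Bob";
        user.Save();

        // Fails the test if the expected call never happened.
        mocks.VerifyAll();
    }
}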

I think the Fowler article may be making much of a practice which is not that common among people using mocks in TDD.  Or maybe there is a slow creep that happens to developers after months and years of using mocks in their tests that turns them into Mockists.  Personally, I find that there is a very broad and comfortable middle ground which allows me more flexibility to express my intent quickly and concisely.  And after all, expressing intent is one of the major boons of TDD.

I was listening to the ArCast recorded with Scott Hanselman earlier today, and he was talking about the idea that non-software artifacts should approach zero.  If you’ve seen some of his posts, or listened to some Hanselminutes podcasts, you have no doubt come across this idea before.  I like this particular phrasing mostly because it gets to the heart of what I think is one of the most often overlooked aspects of the programming process: namely, the intent of the programmer.

I think this is one of the most important things to take into consideration when looking at someone else’s code (or even your own code, more than about a week after you wrote it).  There are a lot of subtleties about a design which go away after the code is written and starts to gather dust.  If the developer had a particular design pattern in mind when they built a class structure, that information exists only in the mind of the developer, and maybe in some technical spec doc which is lost in source control, or SharePoint somewhere.  Someone else coming along may look at that class structure and not see the scaffolding the original developer put there to support that pattern, and will most likely simplify the design, removing the pattern in the process.

One proposed solution to this, which I saw in a posting from a Java developer, was to use marker interfaces to communicate this sort of intent.  That is, an interface which actually has no method declarations, but exists only to mark a specific class as being part of a pattern.  One additional advantage in the Javadoc system is that the documentation around that interface gets pulled into the docs generated for the implementing class.  This is not a bad idea, though it is terribly hard to enforce.
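Translated to C#, the idea would look something like this; the interface and class names are just illustrative:

// Marker interface: no members, it exists only to record that the
// implementing class plays a role in a Visitor pattern.
public interface IVisitorPatternParticipant
{
}

public class ReportVisitor : IVisitorPatternParticipant
{
    public void Visit(object node)
    {
        // ...actual visiting logic...
    }
}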

I think Windows Workflow will be a major contributor in this arena, allowing a very explicit declarative syntax for creating code.  There may even be some potential in building WF activities around design patterns (hmmm…maybe I have a project).  This idea ties into all sorts of other areas of development, though.  When designing web services, the contract is what communicates the developer’s intent, and therefore creating super-methods that take DataSets or XML blobs makes the contract basically useless.  .Net attribute-oriented programming also allows for this sort of thing, though I can’t see it being flexible enough to serve as a declarative language extension (yet).

Once again I think we have no choice but to look at unit tests as the single most effective way to communicate developer intent.  If the tests are named properly, and test coverage is high enough, we should be able to see all of the requirements, how the various components interact, and generally what the developer was thinking.  I’ve even written tests which assert that a particular class implements an interface, simply because I thought that was a critical part of the design.
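That kind of test is tiny.  A sketch of what I mean, reusing the marker interface from the snippet above (the names are illustrative):

using NUnit.Framework;

[TestFixture]
public class DesignIntentTests
{
    [Test]
    public void ReportVisitor_ParticipatesInVisitorPattern()
    {
        // Documents (and enforces) the design decision that ReportVisitor
        // implements the marker interface.
        Assert.IsTrue(typeof(IVisitorPatternParticipant).IsAssignableFrom(typeof(ReportVisitor)));
    }
}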

What is my point?  Well, I guess it’s really just the beginning of a thought process around how to better capture programmer intent.  What tools should there be?  We all know documentation doesn’t work.  Unless you’re doing Model Driven Development, UML is usually as out of whack with the software as the documentation (or worse).  And I think everyone agrees that putting this information in a Word document is the best way to ensure it does not end up in the final product.


Okay, I admit it, this is a rant….But I promised myself I would post more, so you have to take what you can…

So here’s the deal.  I’ve been peripherally involved in a project at my current client, to the point that I know generally where things are going, but I haven’t done any code review or anything like that.  The good news is that the developer working on it is fairly sharp, so I had no major worries…The fun comes when this developer (who is a contractor like myself) decides to take an offer for another gig (for way more $$, so who can really blame him).  Left in his wake is a new developer, who is trying to figure out the business as well as the code (which is not a fun thing to do), a stack of code, and deadlines which are not moving.  What is not left is any sort of design artifact on the code side.  There were some data changes, which have detailed ERDs, but nothing on the code side.  In looking at the code, it seems reasonably well put together, though there is way more tight coupling than I generally like to see, and a much deeper object hierarchy than I generally like to work with.

Now allow me to wander into dream-land for a bit…This is a situation where TDD would pay off big-time.  For starters, had this been a TDD project, it is likely that the code left behind would be leaner than it currently is.  Also there would be the wealth of code artifacts that the tests provide.  At a glance I could see which components are supposed to do what, and (assuming the test writer thought this way) why certain components were there.  Then there is the safety net for the massive refactoring that every developer does when they inherit new code (you know: the guy who wrote it originally was an idiot…I never would have done it this way…that doesn’t make any sense…).

This is making me start thinking about mandating a TDD approach, or at least, if not mandating, fostering one.  What is the best way to do that, though?  Everybody knows that developers don’t take kindly to being forced to do anything, particularly when they see it as making their jobs more difficult.  And let’s face it, the sorts of tests that you get from someone who doesn’t “get” TDD are things like “Test1”, “Test2”…“Test132”.  What use is that?  So my challenge is to figure out how to introduce a concept like TDD into an organization which really has no other standards (there hasn’t even been a decision regarding C# vs VB.Net), and where developers have not used TDD before.  The biggest problem with TDD initially is that developers feel stifled.  Everyone is taught to think ahead and try to create something that is resilient to change.  TDD throws that idea out the window, and it is damn hard to break that sort of thinking, particularly when you’re trying to baby-step your way through functionality like properties, which have to be put together in a particular way and should have tests associated with them.  Why would any rational developer spend the extra 10 minutes to write CanSetCustomerName and CanGetCustomerName tests when they could implement the properties and move on?  I think the only way to do it is to figure out some way of showing first-hand where the benefits lie.  Yes, I find writing tests for my accessors a bit redundant, but I take a great deal of satisfaction at the end in the information that I have managed to communicate about the code once it is done.  If I have a get test and a set test for a property, then it is pretty obvious I wanted both.  If I only have a get test, but I defined a setter, then I know I can get rid of it.  There is also the knowledge that those small things are what the larger body of the tests is built on, and I think they provide a good way to ease into creating a new class in a TDD style.  While you’re writing those get and set tests, you have time to think about the class’s public API, and you can start to alter it a lot more easily.

But back to the thoughts around introducing TDD…It is hard to show the added agility unless you are actively changing an existing project.  The code artifacts are apparent, but a bit muddled and confusing.  The confidence that a full test suite provides is also a bit hard to communicate, particularly to a skeptical audience.  This is not an easy problem.  I think on one level it is easy to talk to developers and tell them why this is a good thing (Scott Hanselman did an excellent job on his TDD podcast), but experience shows that when the pressure to deliver starts cranking up, most developers return to their old habits to “deliver the project on time”, regardless of the amount of time that may be lost down the road because it was not possible to do ruthless refactoring.  It is also dangerous to try to introduce TDD on a critical project…if for some reason it fails, or is delayed, there is likely to be some serious push-back from management on TDD adoption.

It’s not an easy problem….but I think I need to figure it out.  I think there would be serious benefits to adopting TDD, especially in my current situation with developers coming and going.  I’ll just need to figure out how to share the enthusiasm, and convince others that there really is no other way to develop software.
