Here is something I’ve been kicking around in my head for a while, and I thought I would put it down in a more or less “permanent” format so that maybe I’ll do something about it sometime…

Back when I was first trying to get my head around TDD, one of the things I found most clarifying was an idea I first saw in Test-Driven Development in Microsoft .NET (Microsoft Press).  The idea is that you write tests based on your requirements.  As a developer, you should (hopefully) have been given a list of requirements by someone for the project you are working on.  When you go to build the software, you start looking at the requirements list and pick one (usually a simple one) to start implementing.  Once you have your requirement, you start brainstorming tests that can be implemented to fulfill that requirement.  When you can no longer think of tests for a requirement, you move on to the next one.  Once you have run out of requirements, you’re done writing the software.

This struck me not only as a good way to approach the problem of how to come up with tests, but also as a potentially powerful tool for tracking progress within a project.  I can imagine something not unlike the NUnit Test Runner that, instead of listing all of the tests, lists all of the requirements.  If there are tests written for a requirement, you could expand the requirement to see the list of tests.  When you execute the tests, you can then see which requirements are complete (more on this in a minute), and also which ones are functional.  The assumption being that if a test fails, there is a functional problem with that requirement.

To support this, a few modifications to the standard TDD toolset may be needed.  First, a place to put the list of requirements in code would be useful.  This could be an XML document with the requirement descriptions and IDs, a plain text document, or even a piece of code.  I actually like the idea of the XML document, because you could potentially build an InfoPath form or something that a BA could use to put together the requirement list, and then hand the XML doc over to the development team.  From there, an additional attribute would be added to the NUnit tests so that a specific test could be associated with a specific requirement.  If you wanted to be able to give a “percentage implemented” for a requirement, you would probably create all of the test methods you could come up with during a brainstorming session and put an Ignore attribute on them.  At that point it would be feasible to see how many of the methods are implemented and how many are still being ignored, and then compute a percentage.  This number would change as the development team came up with additional tests for specific requirements, but it would probably stabilize by the time serious work started on a specific requirement.
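
To make that concrete, here is a rough sketch of what the test side might look like.  The RequirementAttribute is hypothetical (it is not part of NUnit), and the requirement IDs and test names are invented:

    using System;
    using NUnit.Framework;

    // Hypothetical attribute tying a test method to a requirement ID
    // from the requirements document.
    [AttributeUsage(AttributeTargets.Method)]
    public class RequirementAttribute : Attribute
    {
        private string _id;

        public RequirementAttribute(string id)
        {
            _id = id;
        }

        public string Id
        {
            get { return _id; }
        }
    }

    [TestFixture]
    public class UserTests
    {
        [Test]
        [Requirement("REQ-001")]
        public void CanSaveNewUser()
        {
            // ...implemented test body...
        }

        [Test]
        [Requirement("REQ-001")]
        [Ignore("Brainstormed during planning; not yet implemented")]
        public void SaveFailsWhenUserNameIsMissing()
        {
            // Ignored tests still count toward REQ-001's total, so the
            // imagined runner could report REQ-001 as 50% implemented.
        }
    }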

So that’s it…my brilliant idea.  I actually don’t think there would be a whole lot of work involved in getting it going, either: a new attribute, an updated unit test runner, and some XML work.  It would be nice to be able to define the requirements in a way that would let you use Intellisense to select them while in VS.Net…maybe code-generate a struct or enum based on the XML doc?
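
For the Intellisense angle, a code generator might turn the XML doc into a class of constants along these lines (the XML structure and all names here are invented for illustration):

    // An XML requirements doc like this:
    //
    //   <requirements>
    //     <requirement id="REQ-001" description="A user can be saved" />
    //     <requirement id="REQ-002" description="A user can be deleted" />
    //   </requirements>
    //
    // ...could be code-generated into a class of constants:
    public sealed class Requirements
    {
        public const string UserCanBeSaved = "REQ-001";
        public const string UserCanBeDeleted = "REQ-002";

        private Requirements() { } // no instances; this is just a constant bag
    }

    // Usage: [Test, Requirement(Requirements.UserCanBeSaved)]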

It’s been a while now, but Roy Osherove posted some articles about Testable Object-Oriented Programming (TOOD); that is, designing your code to be testable by default.  The most interesting part of this is his suggestion that sometimes you need to sacrifice encapsulation in favor of being able to test your code.  This is where one of the biggest flaws of TDD (at least in my opinion) begins to show.  The idea that making code testable can break encapsulation is one of the only arguments against TDD I have heard that I can’t give a good defense against, and it makes me crazy.

Overall I would consider myself a competent OO programmer…I might even venture to say that I’m really good, so this issue is one that bugs me.  I think encapsulation is one of the most powerful tools out there for writing good OO code.  And now I’m a big fan of TDD, which does not allow me to use this tool as much as I would like…so what gives?

The Problem
There are many cases (at least that I’ve seen) where it makes sense to restrict the visibility of pieces of your object model.  Generally the situation calls for classes/methods to be internal (package-private in Java) or protected internal.  I have a feeling, even though I have no evidence to back it up, that internal is probably the least-used of all the visibility levels.  What I’ve found, though, is that using it allows you to create well-factored classes that abide by the Single Responsibility Principle and still present a minimal interface to other layers/users of your code.  There is also the problem of exposing internal dependencies so that those dependencies can be injected during testing (this is where stubs or mock objects come into play).  In many cases there is no real reason to expose this information, and in fact making it available is probably a bad thing, because it makes code that consumes your object more aware of the implementation than it needs to be…and Steve McConnell did a good job of letting us know this is bad.
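
A contrived sketch of the tension, with all names invented: the helper stays internal, which keeps the public surface small, but it also keeps a separate test assembly out and leaves no seam for substituting a fake:

    // The public surface stays minimal...
    public class OrderProcessor
    {
        private TaxCalculator _calculator = new TaxCalculator();

        public decimal Process(decimal subtotal)
        {
            return subtotal + _calculator.CalculateTax(subtotal);
        }
    }

    // ...because the collaborator is internal.  This is well-factored and
    // SRP-friendly, but a separate test assembly can't see TaxCalculator,
    // and OrderProcessor news it up directly, so there is no seam for
    // injecting a fake during testing.
    internal class TaxCalculator
    {
        public decimal CalculateTax(decimal subtotal)
        {
            return subtotal * 0.06m; // hypothetical flat rate
        }
    }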

These are extremely brief introductions to the problems, and even though these are simple examples, such things can get ugly quickly in a large project.  The main reason is that, all of a sudden, there are many, many more things available to the developer who is just trying to use your class to get some work done.

Some Solutions
There are some solutions that can reduce the complexity somewhat.  In the case of internal objects, there is always the option of including your test classes in the same project as the code you are testing; Microsoft seems to have done this with Enterprise Library 1.1.  The biggest downside, however, is that in a lot of cases you don’t want to ship your tests with the rest of your production code, so you have to do some clever things like conditional compilation to keep those classes out of your production assemblies.  Granted, with a build tool like NAnt this becomes easier, but if you’re not familiar with it, there is a rather steep learning curve.
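
A minimal sketch of the conditional compilation approach, assuming an INCLUDE_TESTS symbol defined only in your test build (and reusing the hypothetical TaxCalculator from the earlier sketch):

    #if INCLUDE_TESTS
    using NUnit.Framework;

    // This fixture lives in the production project, so it can see internal
    // types, but it only compiles when the INCLUDE_TESTS symbol is defined
    // (e.g., by the debug/test target in your NAnt script).
    [TestFixture]
    public class TaxCalculatorTests
    {
        [Test]
        public void TaxOnOneHundredIsSix()
        {
            TaxCalculator calculator = new TaxCalculator();
            Assert.AreEqual(6m, calculator.CalculateTax(100m));
        }
    }
    #endif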

In the realm of exposing implementation details, one possible course of action is to use an Inversion of Control (IoC) container like the Castle MicroKernel/Windsor tools.  With these, you write your objects so that they request instances of their dependencies from the IoC container rather than creating new instances themselves (or expecting you to provide them).  This raises the question of where the IoC container lives, though.  When testing, you would fill the container with mocks or stubs of the appropriate objects, so that the class you’re testing gets those instead.  In some cases this may mean that your IoC container needs to live in a central repository where all of the objects can get to it…or you could pass around your IoC instance instead of your dependencies.
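
Stripped way down, the idea looks something like the following.  To be clear, this hand-rolled container is a stand-in for illustration, not the actual Castle API; Windsor layers registration via configuration, lifecycle management, and constructor injection on top of this basic idea:

    using System;
    using System.Collections;

    // A bare-bones service-locator-style container.
    public class ServiceContainer
    {
        private static Hashtable _services = new Hashtable();

        public static void Register(Type serviceType, object implementation)
        {
            _services[serviceType] = implementation;
        }

        public static object Resolve(Type serviceType)
        {
            return _services[serviceType];
        }
    }

    public interface ITaxCalculator
    {
        decimal CalculateTax(decimal subtotal);
    }

    public class OrderProcessor
    {
        public decimal Process(decimal subtotal)
        {
            // Ask the container for the dependency instead of newing it up...
            ITaxCalculator calculator =
                (ITaxCalculator)ServiceContainer.Resolve(typeof(ITaxCalculator));
            return subtotal + calculator.CalculateTax(subtotal);
        }
    }

    // ...so a test can call ServiceContainer.Register(typeof(ITaxCalculator),
    // new FakeTaxCalculator()) before exercising OrderProcessor.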

The other solution, which was the point of the second post from Roy on TOOD, is to use a tool like TypeMock, which has the remarkable ability to mock objects and give you access to internal/protected/private members in your tests.  Pretty serious stuff.  It doesn’t completely solve the problem of dependency injection, though.  There is also the issue of cost if you want all of the really nifty features (though the Community Edition seems to include the ability to mock all methods/properties of an object).

The Ultimate Solution
In my mind, what is needed to really bridge the divide between testability and good OO practices like encapsulation is to make support for TDD a first-class feature of a language or runtime.  What I mean is that the .Net Framework, for example, would have knowledge of test classes and test methods, and would loosen the visibility restrictions while running a test.  This would most likely mean some serious changes to the CLR, but it seems like it should be possible to set up a sandbox environment for testing, where the CLR would evaluate the objects being tested and then allow full access to all fields and methods.  It would have to be seriously limited, and there may even need to be some restrictions around when and where this sort of behavior could happen.  It seems like a stretch, but in my mind it is the only real solution to the problem.  Until then, we are stuck making decisions about where to draw the line between testability and good OO design.
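
The closest approximation available today is reflection, which already lets a test reach non-public members; what’s missing is exactly the sandboxing and safety described above.  A quick sketch of that workaround:

    using System;
    using System.Reflection;

    public class ReflectionPeek
    {
        // Invoke a private instance method by name.  This works today, but it
        // is stringly-typed, slow, and breaks silently when the member is
        // renamed, which is why runtime-level support would be so much nicer.
        public static object InvokePrivate(object target, string methodName, object[] args)
        {
            MethodInfo method = target.GetType().GetMethod(
                methodName,
                BindingFlags.Instance | BindingFlags.NonPublic);

            if (method == null)
                throw new ArgumentException("No such non-public instance method: " + methodName);

            return method.Invoke(target, args);
        }
    }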

I have officially crossed over…I am now using mock objects in my tests and loving it.  After much hemming and hawing, and trying to figure out how to write truly effective tests, I decided to give it a go, so I grabbed a copy of Rhino Mocks and started the grueling task of converting some data access code so that I no longer needed a database to run the tests.  It took a little while to get my mind around the new way of thinking, but I have to say it worked great.  I’m now able to test my data access routines with complete success, and with about a 95% code coverage rate.  This is all on top of the fact that I’m using Enterprise Library (for 1.1) for data access and exception handling.

Now, there is a very interesting article from Martin Fowler called simply “Mocks Aren’t Stubs”.  In it he discusses two approaches for dealing with the sort of problem I had with my data access code.  One is to create object stubs within your tests that you can use to feed known values back to the code under test.  This is not at all a bad approach, and I am using it in some cases in conjunction with my mock objects.  The other approach is to use mock objects for all of the tricky stuff.  He observed an interesting dichotomy between the sorts of TDD folks who follow each approach.  The folks that use stubs tend to be very results-oriented in their testing; that is, they verify the values of various objects after performing an action, using the test framework’s Assert calls.  This, I think, is most in line with the way TDD is supposed to work.  For the mock object folks, though, another thing starts to happen: their tests gain the ability to verify that specific methods are called in a specific order, so the tests start becoming less results-oriented and rely more on the method verification from the mocking framework.  He points out these two styles of testing without really saying one is better than the other (he does indicate that he tends to be in the stub camp, but I think that is a personal choice and comfort-level thing), beyond noting that relying on the results of the test tends to be more in line with the pure TDD way of doing things.
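
To make the distinction concrete, here is roughly what the state-based stub style looks like; the IDatabase, User, and StubDatabase types are all invented for illustration:

    using NUnit.Framework;

    // A hypothetical seam between the domain object and the database.
    public interface IDatabase
    {
        int ExecuteNonQuery(string procName, string parameterValue);
        object ExecuteScalar(string procName, string parameterValue);
    }

    // The class under test.
    public class User
    {
        private IDatabase _database;
        private string _lastName;

        public User(IDatabase database)
        {
            _database = database;
        }

        public string LastName
        {
            get { return _lastName; }
            set { _lastName = value; }
        }

        public void Load(string userName)
        {
            _lastName = (string)_database.ExecuteScalar("GET_USER", userName);
        }
    }

    // A hand-written stub: it just feeds known values back.
    public class StubDatabase : IDatabase
    {
        public int ExecuteNonQuery(string procName, string parameterValue)
        {
            return 1;
        }

        public object ExecuteScalar(string procName, string parameterValue)
        {
            return "Smith"; // canned value for the test
        }
    }

    [TestFixture]
    public class UserLoadTests
    {
        [Test]
        public void LoadPopulatesLastNameFromTheDatabase()
        {
            User user = new User(new StubDatabase());
            user.Load("jsmith");

            // State-based verification: assert on the resulting object
            // state, not on which calls were made.
            Assert.AreEqual("Smith", user.LastName);
        }
    }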

Now, I had read this article before I started using mocks within my project.  I thought about using stubs throughout, but since I was testing data access code, I would have had a lot of methods to implement in my stub objects that weren’t really part of what I was testing, so it seemed like a big waste of my time.  As I started implementing mocks in the tests, I paid attention to how I was testing, to see if I was becoming a “Mockist” in the Fowler sense.  Overall I would say that I am not, currently, a Mockist.  I’m still overwhelmingly validating the results of calls to the methods I’m testing, and just using the mocks to feed appropriate values into the classes under test.  As a matter of fact, I could probably remove the call to _mocks.VerifyAll() at the end of my tests, since I don’t really care.

I can say, though, that the ability to verify what is being called, and with what arguments, has been extremely useful in my data access testing.  I’ve basically got objects making calls to stored procs on Oracle to do my data work.  The DAL is a real simple deal that doesn’t have much in the way of smarts, but it works well enough for a small project.  Because I’m making stored proc calls to the database, I can use the mocks to make sure that when I call the Save() method on my User object (for example), it’s calling ExecuteNonQuery() with the proc name SAVE_USER.  I can also verify that all of the input and output parameters are being set correctly, which is a big consideration when adding a property to the data objects.  In that way I can know that my Save() method is doing what it is supposed to, even though it returns no value.  This checking could probably be done by stubbing the Enterprise Library Database and DBCommand objects, setting values within my stub object, and verifying them with Assert calls, but that seems like an awful lot of work when I have the option to simply set expectations and then do a VerifyAll().
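
Reusing the invented IDatabase and User types from the stub sketch above (with User gaining a Save() method), an interaction-based test in the record/replay style of Rhino Mocks looks roughly like this:

    using NUnit.Framework;
    using Rhino.Mocks;

    [TestFixture]
    public class UserSaveTests
    {
        [Test]
        public void SaveCallsTheSaveUserProc()
        {
            MockRepository mocks = new MockRepository();
            IDatabase database = (IDatabase)mocks.CreateMock(typeof(IDatabase));

            // Record: Save() is expected to call ExecuteNonQuery with this
            // proc name and this parameter value.
            Expect.Call(database.ExecuteNonQuery("SAVE_USER", "Smith")).Return(1);
            mocks.ReplayAll();

            User user = new User(database);
            user.LastName = "Smith";
            user.Save();

            // Interaction-based verification: fails if the expected call
            // (with those exact arguments) never happened.
            mocks.VerifyAll();
        }
    }

    // Where Save() on User would be along these lines:
    //
    //   public void Save()
    //   {
    //       _database.ExecuteNonQuery("SAVE_USER", _lastName);
    //   }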

I think the Fowler article may be making much of a practice that is not actually that common among people using mocks in TDD.  Or maybe there is a slow creep that happens to developers after months and years of using mocks in their tests that turns them into Mockists.  Personally, I find there is a very broad and comfortable middle ground that gives me the flexibility to express my intent quickly and concisely.  And after all, expressing intent is one of the major boons of TDD.


This morning I was greeted by a friendly message from Windows Update saying I had updates ready to install.  Because I’m the curious sort who likes to see how many issues MS has to deal with regularly, I decided to use “Advanced” mode to see what the updates were (by “Advanced” it of course means anyone who is concerned about what is getting installed on their machine…I’ve seen I, Robot; I know that one day the great machine may decide the only way to save us from ourselves is to format our hard drives).

What greeted me was the “Genuine Advantage Notification” update.  Apparently this is a little guy that sits in memory and periodically checks to make sure my version of Windows does not suddenly become pirated.  WTF?  Is this really a problem?  Sure, I can maybe see that if someone decides to upgrade to a pirated version of Vista, instead of paving their machine like normal people do, this might come in handy…that’s assuming, though, that the tool gets picked up by the upgrade process, and that someone who purchases a pirated copy of Vista is the sort of person who would leave a gem such as this one installed on their machine.

Okay, okay, I know there are probably cases of normal users going to Thailand and thinking to themselves, “Well, here is a booth with Windows Vista DVDs, and they only want $30…that’s way cheaper than in the US…must be the currency conversion rate,” and so accidentally getting themselves a pirated copy of Windows.  But I would think that most people who grab pirated copies of their OS do it because they don’t want to pay for the full version, and they are under no false pretenses about the “Genuineness” of the product.

The best part of this utility is the fact that once it figures out that your OS has become pirated, it will “help you obtain a licensed copy”.  That makes me feel really confident about having this thing running all the time.

A little background:  I’ve been working with a SQL 2000 Reporting Services server for about 4 months now, trying to do some integration into an intranet app and some report conversion.  For the last 3 months or so, the web service API and the ability to publish from within Visual Studio have been broken.  Needless to say this put a serious damper on my ability to integrate RS into the client’s intranet.  I did some looking out on the web and the newsgroups, and could never find anything that quite worked.  The really interesting thing was that if IIS was refreshed, I could use the web service API once, and only once.  If I attempted to connect again, I would lose my connection, and not be able to connect again until IIS was bounced.  What was really strange was that it looked like some data was being sent, but eventually it would just give up.  Also strange was that this did not seem to affect the Report Manager app that MS uses as the front end for the report server.

Well, as you may guess, I wouldn’t be writing this if I didn’t have a solution (at least not with “Solved!” in the title), so here it is:  HTTP Keep-Alives.  I disabled HTTP Keep-Alives on the “Default Web Site” and life was groovy again.  I still need to check publishing from Visual Studio, but I can connect to the web service API, which is a good start.

It’s the little things in life that bring such joy…

UPDATE:
So after moving right along for a little while with HTTP Keep-Alives disabled, I attempted to go to the Report Manager page to change the data source of a report.  When I arrived, I was greeted by a 401 error.  WTF?  So for some unknown reason the Keep-Alive is required by Report Manager.  Since you can’t enable or disable Keep-Alives at the virtual directory level, I was SOL.  After trying some stupid things that did not work (like setting the Connection: Keep-Alive header on the ReportManager virtual directory), I finally found a little gem via Google: it seems there is a bug in ASP.Net that can cause it to suddenly drop the connection, even though there is still data to send.  The fix is to grab your HttpWebRequest object, set KeepAlive to false, and set the ProtocolVersion to HTTP 1.0.  After doing this, life seems to be okay again.
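
In code, that means overriding GetWebRequest on the web service proxy, roughly like this (assuming the proxy class generated from the RS WSDL is named ReportingService):

    using System;
    using System.Net;

    // Derives from the generated web service proxy so every call picks up
    // the workaround.
    public class ReportingServiceProxy : ReportingService
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);

            // Work around the ASP.Net connection-drop bug: no keep-alive,
            // and fall back to HTTP 1.0 semantics.
            request.KeepAlive = false;
            request.ProtocolVersion = HttpVersion.Version10;
            return request;
        }
    }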

My wife and I have decided to try to put together a PVR box to make life a little easier.  With Lana getting old enough to pay attention to grown-up TV, and the fact that there are 4 shows we like to watch that all air at the same time, it seems appropriate (not to mention being able to record Sesame Street for the baby).

So here is what I’ve figured out so far:

Linux-Based PVR using MythTV.

Pretty good, eh?  Now for the hard part: hardware.  So far I’m looking at the WinTV cards with hardware encoding/decoding.  They are fairly reasonably priced and fairly well supported.

I’m also looking at the rather sharp SilverStone LC03V home theater component case.

Not much info yet, but I’ll be keeping notes here as I go along and decide what will work best.

I’ve been reading the “Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries” book recently, and I came across a section discussing interface design that has direct bearing on one of my earlier posts regarding programmer intent.  The authors state flat out that you should never use marker interfaces in .Net; instead, you should favor custom attributes and test the type for the attribute.  This was interesting to me, since I had been trying to determine what value, if any, marker interfaces would have in .Net.  In the Java example I cited, one of the benefits was that the JavaDoc information associated with the interface would be attached to the class, so you would have clear intent from the developer wherever the interface was used.  .Net documentation comments don’t carry that same direct association…granted, when the documentation is generated, most of the time there will be a link to the interface definition…but it’s not quite the same.  On the other hand, a custom attribute generally won’t provide even that link, so from a documentation standpoint there is less information available.

I think, though, that it is a bit more important to have the information about developer intent available while reviewing the source code, which makes the custom attribute concept a really good option.  You could even conceivably create a utility that would grab the attribute information from the assembly metadata and generate a report of which classes were marked with which attributes, and therefore which classes were part of which patterns.  You would also have the ability to associate other information with the attribute, such as a description, so you could have an attribute stating that a class is part of a specific pattern (say, Chain of Responsibility) and then give a description of the component.  With that you could see specific instances of the Chain of Responsibility pattern within the code.  Attributes can also be inherited along the class hierarchy, which could make things more confusing, depending on how the attribute is used on the more top-level classes.
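
Here is a sketch of what that might look like; the attribute name, the marked class, and the reporting utility are all invented for illustration:

    using System;

    // A hypothetical attribute standing in for a marker interface.
    [AttributeUsage(AttributeTargets.Class, Inherited = true)]
    public class ChainOfResponsibilityAttribute : Attribute
    {
        private string _description;

        public ChainOfResponsibilityAttribute(string description)
        {
            _description = description;
        }

        public string Description
        {
            get { return _description; }
        }
    }

    [ChainOfResponsibility("Approval step in the expense-report pipeline")]
    public class ExpenseApprovalHandler
    {
    }

    public class PatternReporter
    {
        public static bool IsPartOfChain(Type type)
        {
            // The 'true' argument walks the inheritance hierarchy, which is
            // the double-edged sword mentioned above: subclasses inherit the
            // marking whether or not they still participate in the pattern.
            return type.IsDefined(typeof(ChainOfResponsibilityAttribute), true);
        }
    }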


I recently ran across an intranet reporting app that required IE6 (or, I should say, would not work with IE7 or Firefox 2).  I had originally upgraded to IE7 after numerous JavaScript errors and a desire to check out CardSpace, and thus far had not had too many problems.  This, however, was a potential issue.

Fortunately there is a ready-made solution: Multiple IEs.  This little gem is a single installer that will let you install any version of IE from 3 through 6, each in its own folder, so it does not interfere with the default install.  You can pick it up at http://tredosoft.com/Multiple_IE


I was listening to the ARCast recorded with Scott Hanselman earlier today, and he was talking about the idea that the number of non-software artifacts should approach zero.  If you’ve seen some of his posts, or listened to some Hanselminutes podcasts, you have no doubt come across this idea before.  I like this particular phrasing mostly because it gets to the heart of what I think is one of the most often overlooked aspects of the programming process: namely, the intent of the programmer.

I think this is one of the most important things to take into consideration when looking at someone else’s code (or even your own code, more than about a week after you wrote it).  There are a lot of subtleties in a design that go away after the code is written and starts to gather dust.  If the developer had a particular design pattern in mind when they built a class structure, that information exists only in the mind of the developer, and maybe in some technical spec doc that is lost in source control or SharePoint somewhere.  Someone else coming along may look at that class structure, not see the scaffolding the original developer put there to support the pattern, and most likely simplify the design, removing the pattern in the process.

One proposed solution to this, which I saw in a posting from a Java developer, was to use marker interfaces to communicate this sort of intent.  That is, an interface that has no method declarations and exists only to mark a specific class as being part of a pattern.  One additional advantage the JavaDoc system allowed was the inclusion of the documentation around that interface in the docs generated for the implementing class.  This is not a bad idea, though it is terribly hard to enforce.
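
Translated into C# terms (names invented), a marker interface is nothing more than this:

    // A marker interface: no members, just a statement of intent.
    public interface IChainOfResponsibilityHandler
    {
    }

    public class ExpenseApprovalHandler : IChainOfResponsibilityHandler
    {
        // The interface adds nothing at runtime, but anyone reading this
        // class (or, in Java, the generated docs) can see the intended
        // pattern.
    }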

I think Windows Workflow will be a major contributor in this arena, allowing a very explicit declarative syntax for creating code.  There may even be some potential in building WF activities around design patterns (hmmm…maybe I have a project).  This idea ties into all sorts of other areas of development, though.  When designing web services, the contract is what communicates the developer’s intent, so creating super-methods that take DataSets or XML blobs makes the contract basically useless.  .Net attribute-oriented programming also allows for this sort of thing, though I can’t see it being flexible enough to serve as a declarative language extension (yet).

Once again I think we have no choice but to look at unit tests as the single most effective way to communicate developer intent.  If the tests are named properly, and test coverage is high enough, we should be able to see all of the requirements, how the various components interact, and generally what the developer was thinking.  I’ve even written tests that assert a particular class implements an interface, simply because I thought that was a critical part of the design.
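
As a small example of that last point, such a test can be almost embarrassingly short (types borrowed from the marker interface sketch above):

    using NUnit.Framework;

    [TestFixture]
    public class DesignIntentTests
    {
        [Test]
        public void ExpenseApprovalHandlerParticipatesInTheChain()
        {
            // Documents (and enforces) a design decision: if someone
            // "simplifies" the class and drops the interface, this fails.
            Assert.IsTrue(typeof(IChainOfResponsibilityHandler)
                .IsAssignableFrom(typeof(ExpenseApprovalHandler)));
        }
    }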

What is my point?  Well, I guess it’s really just the beginning of a thought process around how to better capture programmer intent.  What tools should there be?  We all know documentation doesn’t work.  Unless you’re doing Model-Driven Development, UML is usually as out of whack with the software as the documentation (or worse).  And I think everyone agrees that putting this information in a Word document is the best way to ensure it does not end up in the final product.


Here are my results of the Superhero Personality Test…Does it surprise anyone, really?

Your results:
You are Spider-Man

Spider-Man: 80%
Green Lantern: 65%
Iron Man: 55%
Catwoman: 55%
The Flash: 50%
Hulk: 50%
Superman: 45%
Supergirl: 40%
Robin: 35%
Batman: 35%
Wonder Woman: 30%

You are intelligent, witty, a bit geeky, and have great power and responsibility.


Click here to take the “Which Superhero am I?” quiz…

On a similar note…here are my supervillain results.  I think I tend more towards the Joker, personally:

Your results:
You are Dr. Doom

Dr. Doom: 60%
The Joker: 60%
Lex Luthor: 58%
Mr. Freeze: 57%
Magneto: 53%
Apocalypse: 48%
Poison Ivy: 44%
Green Goblin: 44%
Riddler: 40%
Dark Phoenix: 39%
Juggernaut: 36%
Kingpin: 36%
Catwoman: 34%
Venom: 33%
Two-Face: 28%
Mystique: 24%

Blessed with smarts and power but burdened by vanity.


Click here to take the Super Villain Personality Test