I have officially crossed over… I am now using Mock Objects in my tests and loving it.  After much humming and hawing, and trying to figure out how to write truly effective tests, I decided to give it a go, so I grabbed a copy of Rhino Mocks and started the grueling task of converting some data access code so that I no longer needed a database to run the tests.  It took a little while to get my mind around the new way of thinking, but I have to say it worked great.  I’m now able to test my data access routines with complete success, and with about a 95% code coverage rate.  This is all on top of the fact that I’m using Enterprise Library (for 1.1) for data access and exception handling.

Now, there is a very interesting article from Martin Fowler called simply “Mocks Aren’t Stubs”.  In it he discusses two approaches for dealing with the sort of problem I had with my data access code.  One is to create object stubs within your tests that you can use to feed known values back to the code under test.  This is not at all a bad approach, and I am using it in some cases in conjunction with my mock objects.  The other approach is to use mock objects for all of the tricky stuff.  He observed an interesting dichotomy between the TDD folks who follow each approach: the folks that use stubs tend to be very results-oriented in their testing… that is, they verify the values of various objects after performing an action, using the test framework’s Assert calls.  This, I think, is most in line with the way TDD is supposed to work.  For the mock object folks, though, another thing starts to happen.  Their tests gain the ability to verify that specific methods are called in a specific order, and so the tests that come from this approach become less results-oriented and rely more on the method verification from the mocking framework.  He points out these two styles of testing without really saying one is better than the other (he does indicate that he tends to be in the stub camp, but I think that is a personal choice and comfort-level thing), beyond noting that relying on the results of the test tends to be more in line with the pure TDD way of doing things.
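To make the distinction concrete, here is roughly what the stub style looks like in a test.  All of the names here (IUserReader, StubUserReader, UserService, the canned values) are invented for illustration, not taken from my actual project:

```csharp
using NUnit.Framework;

// Hypothetical seam between the service and the database.
public interface IUserReader
{
    string GetUserName(int id);
}

// A hand-rolled stub: it exists only to feed a known value back to the code under test.
public class StubUserReader : IUserReader
{
    public string GetUserName(int id)
    {
        return "John Smith";   // canned value, no database required
    }
}

// Hypothetical class under test: formats a display name from whatever the reader returns.
public class UserService
{
    private IUserReader _reader;

    public UserService(IUserReader reader)
    {
        _reader = reader;
    }

    public string GetDisplayName(int id)
    {
        return _reader.GetUserName(id).ToUpper();
    }
}

[TestFixture]
public class UserServiceTests
{
    [Test]
    public void DisplayNameComesFromTheReader()
    {
        UserService service = new UserService(new StubUserReader());

        // State-based verification: assert on the result, not on which calls were made.
        Assert.AreEqual("JOHN SMITH", service.GetDisplayName(42));
    }
}
```

The assertion is about the state that comes back, not about which calls were made, which is the results-oriented style Fowler associates with the stub camp.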

Now, I had read this article before I started using mocks within my project.  I thought about using stubs throughout, but since I was testing data access code, I would have had a lot of methods to implement in my stub objects that weren’t really part of what I was testing, so it seemed like a big waste of my time.  As I started implementing mocks in the tests, I paid attention to how I was testing, to see if I was becoming a “Mockist” in the Fowler sense.  Overall I would say that I am not, currently, a “Mockist”.  I’m still overwhelmingly validating the results of calls based on the methods I’m testing, and just using the mocks to feed appropriate values into the classes I’m trying to test.  As a matter of fact, I could probably remove the call to _mocks.VerifyAll() at the end of my tests, since I don’t really care.  I can say, though, that the ability to verify what is being called with what arguments has been extremely useful in my data access testing.  I’ve basically got objects which make calls to stored procs on Oracle to do my data work.  The DAL is a real simple deal that doesn’t have much in the way of smarts, but it works well enough for a small project.  Because I’m making stored proc calls to the database, I can now use the mocks to make sure that when I call the Save() method on my User object (for example), it’s calling ExecuteNonQuery() with the proc name SAVE_USER.  I can also verify that all of the input and output parameters are being set correctly, which is a big consideration when adding a property to the data objects.  In that way I can know that my Save() method is doing what it is supposed to, even though it returns no value.  This checking could probably be done by stubbing the Enterprise Library Database and DBCommand objects, and then setting values within my stub object which I could verify using an Assert call, but that seems like an awful lot of work when I have the option to simply set expectations and then do a VerifyAll().
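For comparison, here is a rough sketch of that interaction style, using Rhino Mocks’ record/replay syntax (the exact API varies a bit between versions).  The IUserDataGateway seam, the User class, and the parameter list are all made up for illustration; in the real code the calls go through the Enterprise Library Database object rather than a hand-rolled interface:

```csharp
using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical seam standing in for the Enterprise Library Database wrapper.
public interface IUserDataGateway
{
    int ExecuteNonQuery(string procName, string login, string name);
}

// Hypothetical data object whose Save() should hit the SAVE_USER proc.
public class User
{
    private IUserDataGateway _gateway;
    public string Login;
    public string Name;

    public User(IUserDataGateway gateway)
    {
        _gateway = gateway;
    }

    public void Save()
    {
        _gateway.ExecuteNonQuery("SAVE_USER", Login, Name);
    }
}

[TestFixture]
public class UserTests
{
    [Test]
    public void SaveCallsSaveUserProcWithTheRightArguments()
    {
        MockRepository mocks = new MockRepository();
        IUserDataGateway gateway =
            (IUserDataGateway) mocks.CreateMock(typeof(IUserDataGateway));

        // Interaction-based verification: the expectation is effectively the assertion.
        Expect.Call(gateway.ExecuteNonQuery("SAVE_USER", "jsmith", "John Smith")).Return(1);
        mocks.ReplayAll();

        User user = new User(gateway);
        user.Login = "jsmith";
        user.Name = "John Smith";
        user.Save();

        mocks.VerifyAll();   // fails if SAVE_USER was never executed with those arguments
    }
}
```

If Save() never executes SAVE_USER with those arguments, VerifyAll() fails, even though Save() returns nothing I could Assert against directly.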

I think the Fowler article may be making much of a practice which is not that common among people using mocks in TDD.  Or maybe there is a slow creep that happens to developers after months and years of using mocks in their tests that turns them into Mockists.  Personally I find that there is a very broad and comfortable middle ground which gives me more flexibility to express my intent quickly and concisely.  And after all, expressing intent is one of the major boons of TDD.

 

This morning I was greeted by a friendly message from Windows Update saying I had updates ready to install.  Because I’m the curious sort who likes to see how many issues MS has to deal with regularly, I decided to use “Advanced” mode to see what the updates were (by “Advanced”, of course, it means anyone who is concerned about what is getting installed on their machine… I’ve seen I, Robot; I know that one day the great machine can decide the only way to save us from ourselves is to format our hard drives).

What greeted me was the “Genuine Advantage Notification” update.  Apparently this is a little guy that sits in memory and periodically checks to make sure my version of Windows has not suddenly become pirated.  WTF?  Is this really a problem?  Sure, I can maybe see that if someone decides to upgrade to a pirated version of Vista, instead of paving their machine like normal people do, this might come in handy… that’s assuming, though, that the tool gets picked up by the upgrade process, and that someone who purchases a pirated copy of Vista is the sort of person who would leave a gem such as this one installed on their machine.

Okay, okay, I know there are probably cases of normal users going to Thailand and thinking to themselves, “Well, here is a booth with Windows Vista DVDs, and they only want $30 for it… that’s way cheaper than in the US… must be the currency conversion rate”, and so accidentally getting themselves a pirated copy of Windows.  But I would think that most people who grab pirated copies of their OS do it because they don’t want to pay for the full version, and they are under no illusions about the “Genuineness” of the product.

The best part of this utility is the fact that once it figures out that your OS has become pirated, it will “help you obtain a licensed copy”.  That makes me feel really confident about having this thing running all the time.

A little background:  I’ve been working with a SQL Server 2000 Reporting Services server for about 4 months now, trying to do some integration into an intranet app, and some report conversion.  For the last 3 months or so, the web service API and the ability to publish from within Visual Studio have been gone.  Needless to say, this has put a serious damper on my ability to integrate RS into the client’s intranet.  I did some looking out on the web and the newsgroups, and could never find anything that quite worked.  The really interesting thing was that if IIS was refreshed, I could use the web service API once, and only once.  If I attempted to connect again, I would lose my connection, and not be able to connect again until IIS was bounced.  What was really strange was that it looked like some data was being sent, but eventually it would just give up.  Also strange was that this did not seem to affect the Report Manager app that MS uses as the front-end for the report server.

Well, as you may guess, I wouldn’t be writing this if I didn’t have a solution (at least not with “Solved!” in the title), so here it is:  HTTP Keep-Alives.  I disabled HTTP Keep-Alives on the “Default Web Site” and life was groovy again.  I still need to check publishing from Visual Studio, but I can connect to the web service API, which is a good start.

It’s the little things in life that bring such joy…

UPDATE:
So after moving right along for a little while with HTTP Keep-Alives disabled, I attempted to go to the Report Manager page to change the data source of a report.  When I arrived, I was greeted by a 401 error.  WTF?  So for some unknown reason Keep-Alive is required by Report Manager.  Since you can’t enable or disable Keep-Alive at the virtual directory level, I was SOL.  After trying some stupid things that did not work (like setting a Connection: Keep-Alive header on the ReportManager virtual dir), I finally found this little gem on Google: it seems there is a bug in ASP.Net that will cause it to suddenly drop the connection, even though there is still data.  The fix is to grab your HttpWebRequest object, set KeepAlive to false, and set the ProtocolVersion to HttpVersion.Version10.  After doing this, life seems to be okay again.
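For reference, the usual place to apply that fix with a generated ASMX proxy is an override of GetWebRequest.  A rough sketch; PatchedReportingService is a placeholder, and ReportingService stands in for whatever proxy class wsdl.exe or the VS web reference generated for you:

```csharp
using System;
using System.Net;

// Partial sketch: derive from (or edit) the generated Reporting Services proxy
// so every outgoing call drops Keep-Alive and falls back to HTTP 1.0.
public class PatchedReportingService : ReportingService
{
    protected override WebRequest GetWebRequest(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri);

        // Work around the ASP.Net connection-drop bug described above.
        request.KeepAlive = false;
        request.ProtocolVersion = HttpVersion.Version10;

        return request;
    }
}
```

Every call made through the patched proxy then goes out without Keep-Alive, while Report Manager on the server keeps its Keep-Alives and stays happy.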

My wife and I have decided to try and put together a PVR box to make life a little easier.  With Lana getting old enough to pay attention to grown-up TV, and the fact that there are 4 shows we like to watch that happen at the same time, it seems appropriate (not to mention being able to record Sesame Street for the baby).

So here is what I’ve figured out so far:

Linux-Based PVR using MythTV.

Pretty good, eh?  So now for the hard part: hardware.  So far I’m looking at the WinTV cards with hardware encoding/decoding.  They are fairly reasonably priced, and fairly well supported.

I’m also looking at the rather sharp SilverStone LC03V home theater component case.

Not much info yet, but I’ll be keeping notes here as I go along and decide what will work best.

I’ve been reading the “Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries” book recently, and I came across a section discussing interface design which has direct bearing on one of my earlier posts regarding programmer intent.  The authors basically state flat out that you should never use marker interfaces in .Net.  Instead, you should favor custom attributes, and then test the type for that attribute.  This was interesting to me, since I have been trying to determine what value, if any, marker interfaces have in .Net.  In the Java example I cited, one of the benefits was that the JavaDoc information associated with the interface would be attached to the class, so you would have clear intent from the developer whenever the interface was used.  .Net documentation comments don’t carry that same direct association… granted, when the documentation is generated, most of the time there will be a link to the interface definition… but it’s not quite the same.  On the other hand, a custom attribute generally won’t even provide that link, so from a documentation standpoint there is less information available.

I think, though, that it is a bit more important to have the information about developer intent available while reviewing the source code, which makes the custom attribute concept a really good option.  You could even conceivably create a utility which would grab the attribute information from the assembly metadata and generate a report showing which classes were marked with which attributes, and therefore which classes were part of which patterns.  There would also be the ability to associate other information with the attribute, such as a description, so you could have an attribute which stated that a class was part of a specific pattern (say, Chain of Responsibility) and then gave a description of the component.  With that you could see specific instances of the Chain of Responsibility pattern within the code.  Attributes can also be inherited down the class hierarchy, which could make things more confusing, depending on how the attribute was used on the more top-level classes.
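As a rough sketch of what I have in mind (all of the names here are invented), the attribute approach might look something like this:

```csharp
using System;

// Hypothetical marker attribute recording which pattern a class participates in.
[AttributeUsage(AttributeTargets.Class, Inherited = true, AllowMultiple = true)]
public class PatternParticipantAttribute : Attribute
{
    private string _pattern;
    private string _description;

    public PatternParticipantAttribute(string pattern, string description)
    {
        _pattern = pattern;
        _description = description;
    }

    public string Pattern     { get { return _pattern; } }
    public string Description { get { return _description; } }
}

[PatternParticipant("Chain of Responsibility", "Approves or escalates discount requests")]
public class DiscountApprovalHandler
{
    // ... actual handling logic ...
}

public class PatternReport
{
    // A report-generating utility would walk the assembly metadata like this:
    public static bool IsPatternParticipant(Type type)
    {
        return type.IsDefined(typeof(PatternParticipantAttribute), true);
    }
}
```

The second argument to IsDefined is the inheritance flag, which is exactly where the “attributes follow the inheritance hierarchy” wrinkle shows up.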


I recently ran across an intranet reporting app which required IE6 (or, I should say, which would not work with IE7 or Firefox 2).  I had originally upgraded to IE7 after numerous JavaScript errors, and out of a desire to check out CardSpace, and thus far had not had too many problems.  This, however, was a potential issue.

Fortunately there is a ready-made solution: Multiple IE’s.  This little gem is a single installer which will let you install any version of IE from 3 through 6 in its own folder so it does not interfere with the default install.  You can pick it up at http://tredosoft.com/Multiple_IE


I was listening to the ArCast recorded with Scott Hanselman earlier today, and he was talking about the idea that non-software artifacts should approach zero.  If you’ve seen some of his posts, or listened to some Hanselminutes podcasts, you have no doubt come across this idea before.  I like this particular phrasing mostly because it gets to the heart of what I think is one of the most often overlooked aspects of the programming process: namely, the intent of the programmer.

I think this is one of the most important things to take into consideration when looking at someone else’s code (or even your own code, more than about a week after you wrote it).  There are a lot of subtleties about a design which go away after the code is written and starts to gather dust.  If the developer had a particular design pattern in mind when they built a class structure, that information exists only in the mind of the developer, and maybe in some technical spec doc which is lost in source control or SharePoint somewhere.  Someone else coming along may look at that class structure, not see the scaffolding the original developer put there to support that pattern, and most likely simplify the design, removing the pattern in the process.

One proposed solution to this, which I saw in a posting from a Java developer, was to use marker interfaces to communicate this sort of intent.  That is, an interface which has no method declarations, but exists only to mark a specific class as being part of a pattern.  One additional advantage the JavaDoc system allowed was the inclusion of the documentation around that interface in the docs generated for the implementing class.  This is not a bad idea, though it is terribly hard to enforce.
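In C# terms the equivalent would be an empty interface; a minimal sketch with invented names:

```csharp
// Hypothetical marker interface: no members, it exists only to record design intent.
public interface IChainOfResponsibilityHandler
{
}

// Implementing the marker documents that this class is a link in the chain,
// even though the interface adds no behavior whatsoever.
public class DiscountApprovalHandler : IChainOfResponsibilityHandler
{
    // ... actual handling logic ...
}
```

Nothing stops anyone from deleting that interface during a cleanup, which is exactly the enforcement problem.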

I think Windows Workflow will be a major contributor in this arena, allowing a very explicit declarative syntax for creating code.  There may even be some potential in building WF activities around design patterns (hmmm… maybe I have a project).  This idea ties into all sorts of other areas of development, though.  When designing web services, the contract is what communicates the developer’s intent, so creating super-methods that take DataSets or XML blobs makes the contract basically useless.  .Net attribute-oriented programming also allows for this sort of thing, though I can’t see it being flexible enough to serve as a declarative language extension (yet).

Once again I think we have no choice but to look at unit tests as the single most effective way to communicate developer intent.  If the tests are named properly, and test coverage is high enough, we should be able to see all of the requirements, how the various components interact, and generally what the developer was thinking.  I’ve even written tests which assert that a particular class implements an interface, simply because I thought that was a critical part of the design.
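For what it’s worth, a test like that can be as small as this (the types are hypothetical stand-ins):

```csharp
using NUnit.Framework;

// Hypothetical types standing in for whatever the real design uses.
public interface IRepository { }
public class UserRepository : IRepository { }

[TestFixture]
public class DesignIntentTests
{
    [Test]
    public void UserRepositoryImplementsIRepository()
    {
        // Pins down a design decision: if someone removes the interface,
        // this test fails and the original intent gets a second look.
        Assert.IsTrue(typeof(IRepository).IsAssignableFrom(typeof(UserRepository)));
    }
}
```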

What is my point?  Well, I guess it’s really just the beginning of a thought process around how to better capture programmer intent.  What tools should there be?  We all know documentation doesn’t work.  Unless you’re doing Model Driven Development, UML is usually as out of whack with the software as the documentation (or worse).  And I think everyone agrees that putting this information in a Word document is the best way to ensure it does not end up in the final product.


Here are my results of the Superhero Personality Test…Does it surprise anyone, really?

Your results: You are Spider-Man

Spider-Man 80%, Green Lantern 65%, Iron Man 55%, Catwoman 55%, The Flash 50%, Hulk 50%, Superman 45%, Supergirl 40%, Robin 35%, Batman 35%, Wonder Woman 30%.

You are intelligent, witty, a bit geeky and have great power and responsibility.


Click here to take the “Which Superhero am I?” quiz…

On a similar note… here are my supervillain results… I personally think I tend more towards the Joker:

Your results: You are Dr. Doom

Dr. Doom 60%, The Joker 60%, Lex Luthor 58%, Mr. Freeze 57%, Magneto 53%, Apocalypse 48%, Poison Ivy 44%, Green Goblin 44%, Riddler 40%, Dark Phoenix 39%, Juggernaut 36%, Kingpin 36%, Catwoman 34%, Venom 33%, Two-Face 28%, Mystique 24%.

Blessed with smarts and power but burdened by vanity.


Click here to take the Super Villain Personality Test

Okay, I admit it, this is a rant….But I promised myself I would post more, so you have to take what you can…

So here’s the deal: I’ve been peripherally involved in a project at my current client, to the point that I know generally where things are going, but I haven’t done any code review or anything like that.  The good news is that the developer working on it is fairly sharp, so I had no major worries… The fun comes when this developer (who is a contractor like myself) decides to take an offer for another gig (for way more $$, so who can really blame him).  Left in his wake are a new developer who is trying to figure out the business as well as the code (which is not a fun thing to do), a stack of code, and deadlines which are not moving.  What is not left is any sort of design artifact on the code side.  There were some data changes, which have detailed ERDs, but nothing on the code side.  In looking at the code, it seems reasonably well put together, though there is way more tight coupling than I generally like to see, and a much deeper object hierarchy than I generally like to work with.

Now allow me to wander into dream-land for a bit… This is a situation where TDD would pay off big-time.  For starters, had this been a TDD project, the code that was left behind would likely be leaner than it currently is.  There would also be the wealth of code artifacts that the tests provide.  At a glance I could see which components are supposed to do what, and (assuming the test writer thought this way) why certain components were there.  Then there is the safety net for the massive refactoring that every developer does when they inherit new code (you know: the guy who wrote it originally was an idiot… I never would have done it this way… that doesn’t make any sense…).

This is making me start thinking about mandating a TDD approach, or at least, if not mandating, fostering one.  What is the best way to do that, though?  Everybody knows that developers don’t take kindly to being forced to do anything, particularly when they see it as making their jobs more difficult.  And let’s face it, the sorts of tests that you get from someone who doesn’t “get” TDD are things like “Test1”, “Test2”… “Test132”.  What use is that?  So my challenge is to figure out how to introduce a concept like TDD into an organization which really has no other standards (there hasn’t even been a decision regarding C# vs VB.Net), and where developers have not used TDD before.  The biggest problem with TDD initially is that developers feel stifled.  Everyone is taught to think ahead and try to create something that is resilient to change.  TDD throws that idea out the window, and it is damn hard to break that sort of thinking.  Particularly when you’re trying to baby-step your way through functionality like properties, which have to be put together in a particular way and should have tests associated with them.  Why would any rational developer spend the extra 10 minutes to write CanSetCustomerName and CanGetCustomerName tests when they could implement the properties and move on?  I think the only way to do it is to figure out some way of showing first-hand where the benefits lie.  Yes, I find writing tests for my accessors a bit redundant, but I take a great deal of satisfaction at the end in the information that I have managed to communicate about the code once it is done.  If I have a get test and a set test for a property, then it is pretty obvious I wanted both.  If I only have a get test, but I defined a setter, then I know I can get rid of it.  There is also the knowledge that those small things are what the larger body of the tests is built on, and I think they provide a good way to ease into creating a new class in a TDD style.  While you’re writing those get and set tests you have time to think about the class’s public API, and you can start to alter it a lot more easily.
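For the record, the redundant-feeling accessor tests I’m talking about are as trivial as these (Customer is a hypothetical class here):

```csharp
using NUnit.Framework;

// Hypothetical class under test.
public class Customer
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}

[TestFixture]
public class CustomerTests
{
    [Test]
    public void CanSetCustomerName()
    {
        Customer customer = new Customer();
        customer.Name = "Acme Widgets";
        Assert.AreEqual("Acme Widgets", customer.Name);
    }

    [Test]
    public void CanGetCustomerName()
    {
        Customer customer = new Customer();
        Assert.IsNull(customer.Name);   // the getter works before anything is set
    }
}
```

Ten minutes of typing, but the presence (or absence) of each test says something about what the class is supposed to expose.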

But back to the thoughts around introducing TDD… It is hard to show the added agility unless you are actively changing an existing project.  The code artifacts are apparent, but a bit muddled and confusing.  The confidence that a full test suite provides is also a bit hard to communicate, particularly to a skeptical audience.  This is not an easy problem.  I think on one level it is easy to talk to developers and tell them why this is a good thing (Scott Hanselman did an excellent job on his TDD podcast), but experience shows that when the pressure to deliver starts cranking up, most developers return to their old habits to “deliver the project on time”, regardless of the amount of time that may be lost down the road because ruthless refactoring was not possible.  It is also dangerous to try to introduce TDD on a critical project… if for some reason it fails, or is delayed, there is likely to be some serious push-back from management against TDD adoption.

It’s not an easy problem….but I think I need to figure it out.  I think there would be serious benefits to adopting TDD, especially in my current situation with developers coming and going.  I’ll just need to figure out how to share the enthusiasm, and convince others that there really is no other way to develop software.
