If you recall from some of my earlier posts, I’ve talked about the concept of the “Friend” class in C++ and how it could apply to TDD within .Net.  Well, today, with the help of Roy Osherove, I stumbled upon the InternalsVisibleToAttribute in .Net 2.0.  It lets one assembly name another assembly that should have access to its internal members.  This is genius, and goes a long way towards keeping your code encapsulated while still being testable.  If we could just get them to go one step further and allow access to private and protected members as well, life would be good, and there would be no more of this OOD vs TOOD junk.

The other interesting thing is that it could let you give a separate utility…say the Castle Microkernel…access to internal class constructors, and thus force outside assemblies to create your objects through the Kernel.  This is actually a feature I desperately want in my current project, but sadly I am limited to .Net 1.1, so I can’t quite get there.

Here is a quick look at how this works, starting with a very unrealistic class in an assembly that I want to test:

class TestClass
{
    // Internal constructor: visible inside this assembly, and to any friend
    // assembly named in an InternalsVisibleTo attribute
    internal TestClass()
    {}

    // Private members stay private; InternalsVisibleTo does not expose these
    private bool PrivateMethod()
    {
        return false;
    }

    // Internal method: this is what the test assembly will be able to call
    internal bool SomeMethod()
    {
        return true;
    }

    public string PublicMethod()
    {
        return "You can see me";
    }
}

Now, I add the following to the AssemblyInfo.cs file:

[assembly: InternalsVisibleTo("TestAssembly")]

And here is what Intellisense looks like in my test class: the internal constructor and SomeMethod() now show up right alongside PublicMethod(), while PrivateMethod() stays hidden.
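
Here is a minimal sketch of the kind of test that now compiles against those internals (the NUnit usage and the fixture name are my own assumptions, not something from the original project):

using NUnit.Framework;

[TestFixture]
public class TestClassTests
{
    [Test]
    public void SomeMethodReturnsTrue()
    {
        // Both the internal constructor and the internal method are accessible
        // here because the other assembly declares InternalsVisibleTo("TestAssembly")
        TestClass subject = new TestClass();
        Assert.IsTrue(subject.SomeMethod());
    }
}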

Not bad.  Overall I would say this is definitely a good feature to have in your toolbox.  Internals are not perfect, but they are much more versatile than a lot of folks give them credit for.  Now if only I could get something like this in .Net 1.1…

The project I’m working on now has a huge need for auto-update.  Strangely enough, there aren’t a whole lot of documented solutions for an auto-update application for .Net 1.1.  In the 2.0 world you have ClickOnce, which handles those sorts of things for you (and in a way that isn’t terribly difficult to manage as a developer…as long as you pay attention to what you’re doing), but the only real option you get from MS on this is the AutoUpdater Application Block from the Patterns and Practices guys.  I took a look at this when it was in its 1.0 version a while back and really didn’t care for it much.  The big reason was that it required you to set up an AppStart.exe file, which would look at your config file to determine which directory to start the app from.  When updates arrived, they were put in new directories based on versions.  This seemed like a lot of effort, and a lot of overhead.  The good news is that there is now a 2.0 version of the application block, and it looks like it is much more configurable and has the ability to do in-process updates.

So here is my issue.  I want to change the default downloader (used to retrieve the updated files) from the BITSDownloader that ships with the block to one which will allow me to copy files from a UNC path.  Fortunately, a sample implementation of such a thing exists in the documentation, so there is a starting point.  It is pretty rough around the edges, though, and isn’t testable, so I am working on creating a nicer, testable version of the sample downloader.  Here is the problem: the dependencies are insane!  And it doesn’t look like even mock objects will be able to help.  Initially I found this a little strange, since I know that the Patterns and Practices group was headed up by some folks who were heavy into Agile methodologies.  I also know for a fact that the Enterprise Library components have tests included.  So I pulled up the AutoUpdater Application Block to see how they were testing things…and what do you think I found?  They were not testing anything!  I can only assume that the updater block came from another group, because there were no tests in sight.  So, since I had the source code, I’m reduced to making modifications to the block to support testing.  For the most part this involved marking public methods/properties as virtual so that Rhino.Mocks can mock them.  I also added some parameters to the constructor of the UpdateTask class so that I could supply mock versions of some of the dependencies.
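
To make that concrete, here is roughly the shape of the change.  The IDownloader interface, the constructor signature, and the test are a simplified sketch using placeholder names, not the block’s actual types:

using NUnit.Framework;
using Rhino.Mocks;

// Placeholder for one of the block's dependencies
public interface IDownloader
{
    void Download(string sourcePath, string destinationPath);
}

// Simplified stand-in for the real UpdateTask class
public class UpdateTask
{
    private readonly IDownloader downloader;

    // Extra constructor parameter so a test can supply a mock dependency
    public UpdateTask(IDownloader downloader)
    {
        this.downloader = downloader;
    }

    // Marked virtual so Rhino.Mocks can override it if this class itself needs mocking
    public virtual void Run(string sourcePath, string destinationPath)
    {
        downloader.Download(sourcePath, destinationPath);
    }
}

[TestFixture]
public class UpdateTaskTests
{
    [Test]
    public void RunDelegatesToDownloader()
    {
        MockRepository mocks = new MockRepository();
        IDownloader downloader = (IDownloader) mocks.CreateMock(typeof(IDownloader));

        // Record the call we expect the task to make
        downloader.Download(@"\\server\updates", @"c:\app");
        mocks.ReplayAll();

        new UpdateTask(downloader).Run(@"\\server\updates", @"c:\app");

        mocks.VerifyAll();
    }
}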

I can’t imagine trying to test this without the source code to modify…and as it stands I don’t feel very good about shipping around a non-standard version of the library.  Overall I’m quite disappointed in the P&P folks on this one.  I had high hopes when I saw the rest of the Enterprise Library that I would be able to test my extensions fairly easily.  I guess that goes to show that even in organizations (or specifically groups) where TDD is a best practice, it still runs into hurdles to acceptance.

I’ve been doing a lot of poking around the MythTV and Knoppmyth sites recently, trying to figure out what is going to work best in my current situation.  Since I’ve got cable, and not satellite, it would seem that it is possible to get all of the non-encrypted content from my provider, as long as I have a capture card with QAM support.  These cards seem to be HD cards overall (at least I haven’t found a supported one that is not HD), which makes sense given that the QAM signal is a digital signal.  It sounds like getting the channel mapping put together for the QAM content can be somewhat tedious, depending on how well Myth can auto-discover your channels (which it would seem is not that good), so I know there is some work involved in getting the digital channels.  For analog, any of the supported analog capture cards would work, and it seems that my wife and I tend to watch more of the analog stations than the digital (though in my current line-up all of the “expanded” cable stations (Discovery, Comedy Central, TBS, etc.) have both an analog station and a digital station…pretty strange).  I also know that I want the ability to watch and record at the same time, which means at least two tuners.  Here is the trick, though: since I need different tuners for digital vs analog, how many tuners do I really need?  And how many PCI slots do I want to eat up in the quest for multiple record?

So here is what I’ve decided.  Since I’m mostly concerned with analog, I went with a Hauppauge WinTV PVR-500, which is a dual-tuner analog PCI card that seems to be supported just fine in Myth (it looks like two PVR-150 cards from the sound of it).  That by itself takes care of my analog needs.  As for digital, I haven’t decided for sure yet, but I’m leaning towards the DVICO FusionHDTV RT Lite, which seems to have good support in Myth.  It also has the advantage of being one of the few HD cards with a hardware encoder.

I should take a step back here and explain this.  You have two choices when watching or recording TV on a PC: you can have your machine take care of encoding and decoding the MPEG4 video that is your TV show, or you can let the capture card do it.  If you let the capture card do it, then your machine is not going to be nearly as busy as it would be otherwise.  This presents a couple of advantages: 1. You can get away with less horsepower in the machine, which means parts will be cheaper (spend the cash on storage if you can).  2. Your machine will consume less power (I’m talking wattage here), and is therefore less likely to be a machine in your living room that sounds like a Lear jet taking off when you’re trying to watch all of your recorded episodes of “Eating Bugs with the Stars”, or whatever the latest reality fad is.

The PVR-500 contains a hardware encoder, so if I can get the same thing on the digital side, I’ll be in good shape.  While I’m on the subject of power, I’m also seriously looking at an AMD Turion based system.  This is AMD’s mobile processor, so it is designed with a low-power footprint in mind.  It also seems that there are several motherboards out there which will support it, and some additional components which will help keep it cool.  The only other major sources of noise are the power supply and the hard drive.  Cooler-Master makes some nice quiet power supplies, so I will probably check there first.  As for hard drives, I haven’t really started looking yet.  If I can find noise specs on them, then I may use that as a criterion when deciding which to buy, but my primary concern is how many GB I can get for my money.

The other components have been moved to the end of the decision tree.  I like the Silverstone cases I mentioned in my last post, but I decided that I should get the components together first, since I didn’t really want to end up with a nice-looking case that couldn’t fit all of my goodies.  I also need to decide whether or not to go with a DVD writer in the case…it could be handy, but then I can also grab stuff off the network, so is it really needed (I’ve got a CD writer lying around, which may be my stand-in for a while)?

Overall this is going to be a fun project, and let’s face it, how often do you get a geek project that your wife is behind 100%?

I’ve been trying to catch up on past episodes of .Net Rocks during my hour-plus drive to work every day, and this morning I listened to the interview with Jimmy Nilsson (episode #191) on Domain Driven Design.  I’ll admit now that I have not had time to read his book all the way through, but it is one of the ones that I’m most eager to dig into, since not only is he talking about DDD, he’s also talking about the Martin Fowler design patterns from PoEAA.


One of the things brought up on the show was this post on Jimmy’s blog.  I was shocked at the idea that SOA and DDD could be considered opposite strategies by anyone.  One of the comments made by the DNR hosts was that SOA is a technology in search of a problem to solve, whereas DDD is focused on the problem domain to the exclusion of specific technologies.  This is the sort of brutal mischaracterization of SOA that I think is still quite prevalent, despite the efforts of folks like Ron Jacobs who are trying to spread the word of SOA done right.  So here is my rundown of why, in fact, SOA and DDD go together like peanut butter and chocolate:


SOA is not a technology, it is a design strategy.  It is a way of thinking about your business in terms of what services can be provided.  The idea that if you’re using SOA then you are automatically using Web Services is just plain wrong.  You can create a business application that is Service Oriented without ever firing up the Web Service designer in .Net.  The core concept is to model the services used in your organization.  These services should in fact be recognizable objects from your Domain Model.  Business people should be able to talk about the services in the same language they use to talk about the same concepts within the business.  If that isn’t happening, then you are not designing the right services.
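
As a purely illustrative sketch (the domain and the names are mine, not anything from Jimmy’s post), a service contract in this style might look like the following.  Notice there is nothing about transports or Web Services in it:

// Domain types, details omitted for brevity
public class Order { }
public class Warehouse { }
public class ShippingConfirmation { }

// The contract is expressed entirely in domain language that a business
// person would recognize; whether it is exposed over web services later
// is an implementation detail, not part of the design.
public interface IOrderFulfillmentService
{
    ShippingConfirmation FulfillOrder(Order order);
    bool CanFulfillFromWarehouse(Order order, Warehouse warehouse);
}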


The process of modeling services should be derived directly from the Domain Model created using DDD if those services are going to be useful to the business.  One of the arguments for going with SOA is “Business Agility”.  What does that mean in the real world?  Basically, the ability to let all of the aspects of your business share information in a way that provides a measurable benefit.  The agility comes in when you are able to connect parts of your business in ways that were impossible before.  Once again, this has nothing to do with the technology behind SOA; it has to do with the business needs.  Granted, from a technical standpoint the easiest way to achieve the sort of connections we’re talking about is to use a technology like web services, but that is not the point of creating the SOA, and it certainly shouldn’t be the overwhelming factor in deciding to use SOA.


So, in answer to Jimmy’s question in his post (As I see it, there is lot of useful ideas there for SOA as well, or what am I missing?):  No, you aren’t missing something, the SOA’ers (as you refer to them) are missing something.  You’re right on.

Thinking about my earlier post discussing the OOP vs TOOP problem, I mentioned at the end that the best solution to this problem, in my mind, would be integrated language support for test classes.  Specifically, a way to let the compiler/runtime know that a specific class is a test class, and should therefore be able to access any and every member of the class it tests.


It occurred to me that such blatant intrusion into the privacy of a class is not unknown in the programming world.  C++ has the notion of a “Friend” class: a class that can access all members of another class regardless of their protection level.  To keep things civil, so that just any class can’t declare itself to be a Friend of any class it wants, the class being accessed specifically declares that classes X, Y and Z are friends, and so can have free rein.  Granted, this is considered rather scary, and one of those features that makes C++ an ideal tool for shooting one’s own foot off.


But this concept has some merit within the context of test classes.  In the .Net world we could potentially use attributes on a class to identify what the test class (or classes, I suppose) for that class is.  The compiler and runtime could then use that information to provide unlimited access only to the classes listed in the test class list.  For more protection, perhaps it would also validate that the classes in the list have the appropriate attribute (TestFixture in NUnit, TestClass in Team System) before allowing access.  You would need some additional tooling in the development environment so that you would get Intellisense on all of the private/protected members, but that should be easy for the MS guys after building in the test support, right?
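
Just to show what I am imagining, here is a sketch of the declaration side.  TestFriendAttribute does not exist, and neither does the compiler/runtime support it would need, so treat all of this as hypothetical:

using System;
using NUnit.Framework;

// Hypothetical attribute: names the test fixture(s) allowed full access to this class
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct, AllowMultiple = true)]
public class TestFriendAttribute : Attribute
{
    private readonly Type testClass;

    public TestFriendAttribute(Type testClass)
    {
        this.testClass = testClass;
    }

    public Type TestClass
    {
        get { return testClass; }
    }
}

// The class under test declares its test fixture as a "friend"...
[TestFriend(typeof(AccountTests))]
public class Account
{
    private decimal balance;
}

[TestFixture]
public class AccountTests
{
    // ...and with compiler/runtime support, tests in here could read and set
    // Account.balance directly, while every other class is still locked out.
}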


There is some additional danger in this approach, since there is some potential to modify IL at runtime, but couldn’t there be additional protections around such things?  As someone with no knowledge of the internal workings of the CLR, I can’t say for sure, but it’s worth trying.


So, I now officially propose to Microsoft that this feature be added to the 4.0 version of the framework.


Do you think they heard me?

Here is something I’ve been kicking around in my head for a while, and thought I would put it down in more or less a “permanent” format so maybe I’ll do something about it sometime…

Back when I was first trying to get my head around TDD, one of the things that I found most clarifying was an idea I first saw in Test Driven Development in Microsoft .Net (Microsoft Press).  The idea is that you write tests based on your requirements.  As a developer, you should (hopefully) have been given a list of requirements by someone for the project you are working on.  When you go to build the software, you start looking at the requirements list and pick one (usually a simple one) to start implementing.  Once you have your requirement, you start brainstorming tests that can be implemented to fulfill it.  Once you can no longer think of tests for a requirement, you move on to the next one.  Once you have run out of requirements, you’re done writing the software.

This struck me not only as a good way to approach the problem of how to come up with tests, but also as a potentially powerful tool for tracking progress within a project.  I can imagine something not unlike the NUnit Test Runner that, instead of listing all of the tests, lists all of the requirements.  If there are tests written for a requirement, then you could expand the requirement to see the list of tests.  When you execute the tests, you can then see which requirements are complete (more on this in a minute), and also which ones are functional.  The assumption being that if a test fails, there is a functional problem with that requirement.

To support this, a few modifications to the standard TDD toolset may be needed.  First, a place to put the list of requirements in code would be useful.  This could be an XML document with the requirement descriptions and IDs, or maybe even a plain text document, or a piece of code.  I actually like the idea of the XML document, because you could potentially build an InfoPath form or something that a BA could use to put together the requirement list, and then hand the XML doc over to the development team.  From there, an additional attribute would be added to the NUnit tests, so that a specific test could be associated with a specific requirement.  If you wanted to be able to give a “percentage implemented” for a requirement, then you would probably want to create all of the test methods you can come up with during a brainstorming session and put an Ignore attribute on them.  At that point it should be feasible to see how many of the methods are implemented and how many are still being ignored, and then calculate a percentage.  This number would change as time progressed and the development team came up with additional tests for specific requirements, but it would probably stabilize by the time serious work started on a specific requirement.
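
Something like this is what I have in mind; the RequirementAttribute and the requirement IDs are invented for illustration, and nothing here ships with NUnit:

using System;
using NUnit.Framework;

// Invented attribute that ties a test to a requirement ID from the requirements document
[AttributeUsage(AttributeTargets.Method)]
public class RequirementAttribute : Attribute
{
    private readonly string id;

    public RequirementAttribute(string id)
    {
        this.id = id;
    }

    public string Id
    {
        get { return id; }
    }
}

[TestFixture]
public class WithdrawalTests
{
    [Test, Requirement("REQ-007")]
    public void WithdrawalReducesBalance()
    {
        // An implemented test counts toward REQ-007's "percentage implemented"
    }

    [Test, Ignore("Brainstormed, not implemented yet"), Requirement("REQ-007")]
    public void WithdrawalFailsWhenFundsAreInsufficient()
    {
        // Still ignored, so REQ-007 shows up as only partially implemented
    }
}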

So that’s it…my brilliant idea.  I actually think there wouldn’t be a whole lot of work involved in getting it going, either: a new attribute, an updated unit test runner, and some XML work.  It would be nice to be able to define the requirements in a way that would let you use Intellisense to select them while in VS.Net…maybe code-generate a struct or enum based on the XML doc?
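
If the enum route were taken, the generator might spit out something like this from the requirements XML (the IDs and names are made up), which would give you Intellisense when tagging tests:

// Hypothetical output of a code generator run against the requirements XML document.
// The RequirementAttribute sketched above would need a second constructor that
// accepts this enum, and tests could then be tagged as:
//     [Test, Requirement(RequirementId.REQ_007_WithdrawFunds)]
public enum RequirementId
{
    REQ_001_LogIn,
    REQ_007_WithdrawFunds,
    REQ_012_ExportStatement
}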