I ran into this issue today when trying to find a quick and dirty script to enforce redirection from HTTP to HTTPS for an intranet site we’re enabling for HTTPS (yes, I know, SSL on an intranet???  One word – Audit).  I thought this would be easy…I half expected that there would be an IIS setting to handle this for me; I was wrong on both counts.  So what I ended up doing was taking an old DNN HttpModule, which was set up to allow the user to specify specific tabs in DNN for SSL, and greatly simplifying it to work to my advantage.  I thought I would post it so I wouldn’t have to look for it again.

To give a quick overview of what is going on, this is an HTTP Module which looks at a value in the web.config to determine whether or not to enforce HTTPS.  If the setting is set to true (or yes), then I’m just grabbing the Url property from the Request object, loading it up in a UriBuilder, setting the Scheme to “https”, and setting the port to 443 (this may not be necessary, but it was generating a URL with a port of 80 before, which defeats the purpose, so I decided to play it safe).  It then feeds the generated URI to Response.Redirect(), and you’re off.  There is some additional code in there to disable the feature if you’re on localhost, which is mostly to keep from blowing up your dev box.

Here is the class:

using System;
using System.Configuration;
using System.Web;


namespace HTTPSRedirectHandler
{
    /// <summary>
    /// An HttpModule which redirects traffic to HTTPS based on configuration settings.
    /// </summary>
    public class HttpsRedirector : IHttpModule
    {
        HttpApplication _context;

        public HttpsRedirector()
        {
            //
            // TODO: Add constructor logic here
            //
        }

        #region IHttpModule Members

        /// <summary>
        /// Initializes a module and prepares it to handle
        /// requests.
        /// </summary>
        /// <param name="context">An <see cref="T:System.Web.HttpApplication"/> that provides access to the methods, properties, and events common to all application objects within an ASP.NET application</param>
        public void Init(HttpApplication context)
        {
            _context = context;
            _context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        /// <summary>
        /// Disposes of the resources (other than memory) used by the
        /// module that implements <see langword="IHttpModule."/>
        /// </summary>
        public void Dispose()
        {
            _context.BeginRequest -= new EventHandler(context_BeginRequest);
            _context.Dispose();
        }

        #endregion

        /// <summary>
        /// Handles the BeginRequest event of the current Http Context.
        /// </summary>
        private void context_BeginRequest(object sender, EventArgs e)
        {
            bool useSSL = false;
            string result = null;
            if((result = ConfigurationSettings.AppSettings["RequireSSL"]) != null)
            {
                if(result.ToUpper() == "TRUE" || result.ToUpper() == "YES")
                    useSSL = true;
            }
            if (useSSL)
                EnforceSSL();
        }

        /// <summary>
        /// Enforces a redirection to HTTPS if the current connection is using HTTP (port 80).
        /// </summary>
        private void EnforceSSL()
        {
            if(_context.Request.ServerVariables["SERVER_NAME"].ToLower() != "localhost")
            {
                if(_context.Request.ServerVariables["SERVER_PORT"] == "80")
                {
                    UriBuilder uri = new UriBuilder(_context.Request.Url);
                    uri.Scheme = "https";
                    uri.Port = 443;
                    _context.Response.Redirect(uri.ToString());
                }
            }
        }
    }
}

 

Here are the web.config settings you need to use it:

<httpModules>
    <add name="SSLRedirect" type="HTTPSRedirectHandler.HttpsRedirector, HTTPSRedirectHandler" />
</httpModules>
 
<appSettings>
    <add key="RequireSSL" value="True" />
</appSettings>

So I had this analogy pop into my head a while back and I’ve been sitting on it because, quite frankly, I was almost embarrassed to have thought of it.  I finally decided that I might as well post it since no one is going to read this anyway, so here goes:

Writing software without TDD is like having unprotected sex.  It’s extremely irresponsible in this day and age when everyone is supposed to know better, but it still happens a lot.  Sure, it feels better, and let’s face it, in the heat of the moment you really don’t want to take the time to make sure you’ve taken all of the right precautions…you might lose your edge.  The problem is if we don’t do it we have that lingering fear in the back of our minds, that “What if?” that keeps popping up at the worst moments.  Most of the time it’s okay, and everything is fine, but it only takes one time for things to go wrong to make you regret your decisions.

So come on folks, we all know what we should be doing, so no more making excuses.  After a while you get used to it, and you find that the peace of mind it gives you far outweighs the momentary delights of living on the wild side.

I recently decided to take the plunge and get CodeRush and Refactor! Pro (along with DxCore) loaded instead of Resharper.  Now, don’t get me wrong, there is a lot I like about Resharper, but overall the performance was becoming an issue.  There were often problems with VS freezing for no particular reason, and then coming back as if nothing was wrong…I swear it was like my IDE had narcolepsy or something.

One of the things I noticed immediately about CodeRush was the fact that there was a single installer, and when I went to run it I was able to install it on all versions of Visual Studio, including the Orcas Beta.  This was nice when compared to Resharper’s separate installs for VS 2005 and VS 2003.  It also makes me feel good about improvements in the product being available for all versions of the IDE.  One thing that I noticed about R# was that there was some work being done in the VS 2005 version around performance, but that did not seem to trickle down to the VS 2003 version.  I think the big reason CodeRush avoids that problem is the fact that CodeRush and Refactor! are implemented on top of DxCore, which provides a very clean abstraction from the scariness that is the Visual Studio integration layer.

Here are the things I really like about CodeRush/Refactor:

  • The visualizations are stunning!  No, seriously this is some amazing stuff.  Circles, arrows, animations, eye candy yes, but useful eye candy.
  • The Refactoring Live Preview is crazy brilliant.  Mark Miller mentioned this on a DNR episode, and I agree with his comment that the live preview allows you to discover new useful refactorings that may not be completely obvious from the name.
  • Performance is great.  ‘Nuf said.
  • It works with VB.Net.  Granted I don’t use VB.Net, but some of my co-workers do, and occasionally I have to work on VB.Net projects.
  • Dynamic Templates rock.  The fact that I can create a new mnemonic for my type, and then use predefined prefixes and do vee<space> to create a new Employee Entity for example is pure bliss.
  • Template contexts are way cool.  By default they have NUnit templates defined, and with contexts, t<space> creates both a TestFixture class and a Test method
  • Markers and navigation are dreamy.
  • There are some crazy-cool template functions, like the ability to do a foreach within a template, so things like creating a switch/case statement for all items in an enum can be done easily.  This also powers a conditional to case refactoring that is pretty sweet.

Here are the things that I miss from Resharper:

  • Automatically adding a using statement in VS 2003 was sweet.  VS 2005 can do it with the built-in Intellisense features, but I got very used to it.
  • The VS 2003 test runner was very nice.  I use the Testdriven.Net plugin, which I cannot live without, but I like the graphical runner in the IDE.  The free test runner from JetBrains is for VS 2005 only, so it doesn’t help those of us in VS 2003.  I do like the fact that they released it as a free tool, though.
  • The “Extract Field” refactoring doesn’t exist in Refactor!…This shocked me a lot.
  • The Find Usages task.  This I think is part of the reason why R# was slow, but it did a brilliant job.  I think the rename in R# was more powerful as well.  I think there is a rename in Beta for Refactor! that is supposed to be able to work across an entire project/solution, but I haven’t had a chance to really test it yet.
  • The pre-build error checking is nice.

The good news is that the DxCore extensibility model means that most if not all of these items could be recreated.  The bad news is that there isn’t a lot of documentation around the extensibility model, particularly when it comes to creating new refactorings.  The test runner is one of the most painful points for me right now, so I’ve started exploring the process of creating one using the DxCore APIs.  It opens up the possibility of refining things too, which would be nice.  What I would really like would be the ability to detect and integrate with TestDriven.Net.

The folks over at Eleutian are evidently running both, which they claim is possible with some tweaking, but the performance issues for VS 2003 are the biggest downer on the R# side, so I will probably not go down that path.  I may load up the test runner for VS 2005, unless I get some time to try and build one using DxCore, in which case I’ll share it with the rest of the world.

So if you recall from some of my earlier posts, I’ve talked about the concept of the “Friend” class in C++ and how it could apply to TDD within .Net.  Well, today, with the help of Roy Osherove, I just stumbled upon the InternalsVisibleToAttribute within .Net 2.0.  This allows you to specify, within one assembly, another assembly that should have access to the internal members of your assembly.  This is genius, and goes a long way towards allowing you to keep your code encapsulated, while still being testable.  If we could just get them to go one step further, and allow for access to private and protected members as well, life would be good, and there would be no more of this OOD vs TOOD junk.

The other interesting thing about this is that it could allow you to give a separate utility…say the Castle Microkernel, access to internal class constructors, and thus enforce creation of your objects through the Kernel from outside assemblies.  This is actually a feature I am desperately wanting in my current project,  but sadly, I am limited to .Net 1.1, so I can’t quite get there.

Here is a quick look at how this works, starting with a very unrealistic class in an assembly that I want to test:

class TestClass
{
    internal TestClass()
    {}

    private bool PrivateMethod()
    {
        return false;
    }

    internal bool SomeMethod()
    {
        return true;
    }

    public string PublicMethod()
    {
        return "You can see me";
    }
}

Now, I add the following to the AssemblyInfo.cs file:

using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("TestAssembly")]
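
For illustration, a test in the TestAssembly project can now new up the class and hit its internal members directly (this is just a minimal sketch; the fixture name is mine):

using NUnit.Framework;

[TestFixture]
public class TestClassTests
{
    [Test]
    public void SomeMethod_ReturnsTrue()
    {
        // Both the internal constructor and the internal method are
        // visible here because of the InternalsVisibleTo attribute.
        TestClass instance = new TestClass();
        Assert.IsTrue(instance.SomeMethod());
    }
}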

And here is what Intellisense looks like in my test class

Not bad.  Overall I would say this is definitely a good feature to have in your toolbox.  Internals are not perfect, but they are much more versatile than a lot of folks give them credit for.  Now if only I could get something like this in .Net 1.1.

The project I’m working on now has a huge need for auto-update.  Strangely enough, there aren’t a whole lot of documented solutions for an auto-update application for .Net 1.1.  In the 2.0 world you have ClickOnce, which handles those sorts of things for you (and in a way that isn’t terribly difficult to manage as a developer…as long as you pay attention to what you’re doing), but the only real option you get from MS on this is the AutoUpdater Application Block from the Patterns and Practices guys.  I took a look at this when it was in its 1.0 version a while back, and really didn’t care for it much.  The big reason was that it required you to set up an AppStart.exe file, which would take a look at your config file to determine which directory to start the app from.  When updates arrived, they were put in new directories based on versions.  This seemed like a lot of effort, and a lot of overhead.  The good news is that there is now a 2.0 version of the application block, and it looks like it is much more configurable, and has the ability to do in-proc updates.

So here is my issue.  I want to change the default downloader (used to retrieve the updated files) from the BITSDownloader that ships with the block to one which will allow me to copy files from a UNC path.  Fortunately, a sample implementation of such a thing exists in the documentation, so there is a starting point.  It is pretty rough around the edges, and isn’t testable, so I am working on creating a nicer, testable version of the sample downloader.  Here is the problem, though: the dependencies are insane!  And it doesn’t look like even mock objects will be able to help.  Initially I found this a little strange since I know that the Patterns and Practices group was headed up by some folks who were heavy into the Agile methodologies.  I also know for a fact that the Enterprise Library components have tests included.  So I pulled up the AutoUpdater Application Block to see how they were testing things…and what do you think I found?  They were not testing anything!  I can only assume that the updater block came from another group, because there were no tests in sight.  So, since I had the source code, I’m reduced to making modifications to the block to support testing.  For the most part this involved marking public methods/properties as virtual so that Rhino.Mocks can mock them.  I also added some parameters to the constructor of the UpdateTask class so that I could supply mock versions of some of the dependencies.
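
To give an idea of the kind of change I’m talking about (these names are made up for illustration, they aren’t the actual types from the block), the pattern boils down to pulling the dependency in through the constructor and marking the interesting members virtual so Rhino.Mocks can get at them:

public interface IFileCopier
{
    void Copy(string source, string destination);
}

public class UncDownloader
{
    private IFileCopier _copier;

    // The extra constructor parameter lets a test hand in a mock copier
    // instead of the real thing.
    public UncDownloader(IFileCopier copier)
    {
        _copier = copier;
    }

    // Virtual so that Rhino.Mocks can override it when this class is
    // itself a dependency of something else under test.
    public virtual void Download(string uncPath, string localPath)
    {
        _copier.Copy(uncPath, localPath);
    }
}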

I can’t imagine trying to test this without the source code to modify…as it stands I don’t feel real good about shipping around a non-standard version of the library.  Overall I’m quite disappointed in the P&P folks on this one.  I had high hopes when I saw the rest of the Enterprise Library that I would be able to test my extensions fairly easily.  I guess that goes to show that even in organizations (or specifically groups) where TDD is a best practice, it is still finding hurdles to acceptance.

I’ve been doing a lot of poking around the MythTV and Knoppmyth sites recently trying to figure out what is going to work best in my current situation.  Since I’ve got cable, and not satellite, it would seem that it is possible to get all of the non-encrypted content from my provider, as long as I have a capture card with QAM support.  These cards seem to be HD cards overall (at least I haven’t found one that is supported that is not HD), which makes sense given that the QAM signal is a digital signal.  It sounds like getting the channel mapping put together for the QAM content can be somewhat tedious depending on how well Myth can auto-discover your channels (which it would seem is not that good), so I know there is some work involved in getting the digital channels.  For analog, any of the supported analog capture cards would work, and it seems that my wife and I tend to watch more of the analog stations than the digital (though, in my current line-up, all of the “expanded” cable stations (Discovery, Comedy Central, TBS, etc.) have both an analog station and a digital station…pretty strange).  I also know that I want the ability to watch and record at the same time, which means at least two tuners.  Here is the trick, though: since I need different tuners for digital vs analog, how many tuners do I really need?  And how many PCI slots do I want to eat up in the quest for multiple record?

So here is what I’ve decided.  Since I’m mostly concerned with analog, I went with a Hauppauge WinTV PVR-500, which is a dual-tuner analog PCI card, and which seems to be supported just fine in Myth (it looks like two PVR-150 cards from the sound of it).  That by itself takes care of my analog needs.  As for the digital, I haven’t decided for sure yet, but I’m leaning towards the DVICO FusionHDTV RT Lite, which seems to have good support in Myth.  It also has the advantage of being one of the few HD cards with a hardware encoder.

I should take a step back here and explain this.  You have two choices when watching or recording TV on a PC: you can have your machine take care of encoding and decoding the MPEG video that is your TV show, or you can let the capture card do it.  If you let the capture card do it, then your machine is not going to be nearly as busy as it would otherwise.  This presents a couple of advantages:

  • You can get away with less horse-power on the machine, which means parts will be cheaper (spend the cash on storage if you can).
  • Your machine will consume less power (I’m talking wattage here), and so you are less likely to have a machine in your living room that sounds like a Lear jet taking off when you’re trying to watch all of your recorded episodes of “Eating bugs with the Stars”, or whatever the latest reality fad is.

The PVR-500 contains a hardware encoder, so if I can get the same thing on the digital side, I’ll be in good shape.  While I’m on the subject of power, I’m also seriously looking at an AMD Turion based system.  This is the AMD mobile processor, so it is designed with a low-power footprint in mind.  It also seems that there are several motherboards out there which will support it, and some additional components which will help keep it cool.  The only other major sources of noise are the power supply and the hard drive.  Cooler-Master makes some nice quiet power supplies, so I will probably check there first.  As for hard drives, I haven’t really started looking yet.  If I can find noise specs on them, then I may use that as a criterion when deciding which to buy, but my primary concern is how many GB I can get for my money.

The other components have been moved to the end of the decision-tree.  I like the Silverstone cases I mentioned in my last post, but I decided that I should get the components together first, since I didn’t really want to end up with a nice-looking case where I couldn’t fit all of my goodies.  I also need to decide whether or not to go with a DVD writer in the case…it could be handy, but then I can also grab stuff off the network, so is it really needed (I’ve got a CD writer lying around, which may be my stand-in for a while)?

Overall this is going to be a fun project, and let’s face it, how often do you get a geek project that your wife is behind 100%?

I’ve been trying to catch up on past episodes of .Net Rocks while on my hour+ drive to work every day, and this morning listened to the interview with Jimmy Nilsson (episode # 191) on Domain Driven Design.  I’ll admit now that I have not had time to read his book all the way through, but it is one of the ones that I’m most eager to dig into, since not only is he talking about DDD, but he’s also talking about the Martin Fowler design patterns from PoEAA.


One of the things brought up on the show was this post on Jimmy’s blog.  I was shocked at the idea that SOA and DDD could be considered opposite strategies by anyone.  One of the comments made by the DNR hosts was that SOA was a technology in search of a problem to solve, whereas DDD is focused on the problem domain to the exclusion of specific technologies.  This is the sort of brutal mischaracterization of SOA that I think is still quite prevalent, despite the efforts of folks like Ron Jacobs who are trying to spread the word of SOA done right.  So here is my rundown of why, in fact, SOA and DDD go together like peanut butter and chocolate:


SOA is not a technology, it is a design strategy.  It is a way of thinking about your business in terms of what services can be provided.  The idea that if you’re using SOA then you are automatically using Web Services is just plain wrong.  You can create a business application that is Service Oriented without ever firing up the Web Service designer in .Net.  The core concept is to model the services used in your organization.  These services should in fact be recognizable objects from your Domain Model.  Business people should be able to talk about the services in the same language they use to talk about the same concepts within the business.  If that isn’t happening, then you are not designing the right services.


The process of modeling services should be derived directly from the Domain Model created using DDD if those services are going to be useful to the business.  One of the arguments for going SOA is “Business Agility”.  What does that mean in the real world?  Basically, the ability to allow all of the aspects of your business to share information in a way that provides a measurable benefit.  The agility comes in when you are able to connect parts of your business in ways that were impossible before.  Once again, this has nothing to do with the technology behind SOA, it has to do with the business needs.  Granted, from a technical standpoint, the easiest way to achieve the sort of connections we’re talking about is to use a technology like web services, but that is not the point of creating the SOA, and it certainly shouldn’t be the overwhelming factor in deciding to use SOA.


So, in answer to Jimmy’s question in his post (As I see it, there is lot of useful ideas there for SOA as well, or what am I missing?):  No, you aren’t missing something, the SOA’ers (as you refer to them) are missing something.  You’re right on.

Thinking about my earlier post discussing the OOP vs TOOP problem, I mentioned at the end that the best solution to this problem in my mind would be integrated language support for test classes.  Specifically, a way to let the Compiler/Runtime know that a specific class is a test class, and should therefore be able to access any and every property of a class.


It occurred to me that such blatant intrusion into the privacy of a class is not unknown in the programming world.  C++ has the notion of a “Friend” class.  This is a class that can access all members of another class regardless of their protection level.  To keep things civil, so that just any class can’t declare itself to be a Friend of any class it wants, the class that the Friend class would be accessing declares specifically that classes X, Y and Z are friends, and so can have free rein.  Granted this is considered to be rather scary, and one of those features that makes C++ an ideal tool for shooting one’s own foot off.


But, this concept has some merit within the context of Test classes.  In the .Net world we could potentially use attributes on a class to identify what the test class (or classes I suppose) for a class are.  The compiler and runtime could then use that information to provide unlimited access only to the classes listed in the test class list.  For more protection perhaps it would also validate that the classes in the list also have the appropriate attribute (TestFixture in NUnit, TestClass in TFS) before allowing access.  You would need some additional tooling in the development environment so that you would get Intellisense on all of the private/protected members, but that should be easy for the MS guys after building in the test support, right?
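
Just to make the idea concrete, here is roughly what I picture the syntax looking like.  Keep in mind that nothing like this exists today: the attribute is made up, and the test wouldn’t compile under the current rules precisely because CalculateBonus() is private:

using NUnit.Framework;

// Purely hypothetical -- a proposed attribute, not a real framework type.
[TestVisibleTo(typeof(EmployeeTests))]
public class Employee
{
    private decimal _salary = 50000m;

    private decimal CalculateBonus()
    {
        return _salary * 0.1m;
    }
}

[TestFixture]
public class EmployeeTests
{
    [Test]
    public void CalculateBonus_Returns_Ten_Percent()
    {
        Employee employee = new Employee();
        // Under the proposal the runtime would allow this call because
        // EmployeeTests is in Employee's test class list.
        Assert.AreEqual(5000m, employee.CalculateBonus());
    }
}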


There is some additional danger in this approach since there is some potential to modify IL at runtime, but couldn’t there be additional protections around such things?  As someone with no knowledge of the internal workings of the CLR, I can’t say for sure, but it’s worth trying.


So, I now officially propose to Microsoft that this feature be added to the 4.0 version of the framework.


Do you think they heard me?

Here is something I’ve been kicking around in my head for a while, and thought I would put it down in more or less a “permanent” format so maybe I’ll do something about it sometime…

Back when I was first trying to get my head around TDD, one of the things that I found most clarifying was an idea I first saw in Test Driven Development in Microsoft .Net (Microsoft Press).  The idea is that you write tests based on your requirements.  So as a developer, you should hopefully have been given a list of requirements by someone for the project you are working on.  When you go to build the software you start looking at the requirements list, and pick one (usually a simple one) to start implementing.  Once you have your requirement you start brainstorming tests that can be implemented to fulfill that requirement.  Once you can no longer think of tests for a requirement, you move on to the next one.  And once you have run out of requirements, you’re done writing the software.

This struck me not only as a good way to approach the problem of coming up with tests, but also as a potentially powerful tool for tracking progress within a project.  I can imagine something not unlike the NUnit Test Runner that, instead of listing all of the tests, lists all of the requirements.  If there are tests written for a requirement, then you could potentially expand the requirement to see the list of tests.  When you execute the tests, you can then see which requirements are complete (more on this in a minute), and also which ones are functional.  The assumption being, if a test fails, then there is a functional problem with that requirement.

To support this a few modifications to the standard TDD toolset may be needed.  First, a place to put the list of requirements in code would be useful.  This could be an XML document with the requirement descriptions and IDs, or maybe even a plain text document, or a piece of code.  I actually like the idea of the XML document because potentially you could build an InfoPath form or something that a BA could use to put together the requirement list, and then hand the XML doc over to the development team.  From there an additional attribute would be added to the NUnit tests, so that a specific test could be associated with a specific requirement.  If you wanted to be able to give a “percentage implemented” on a requirement, then you would probably want to do something like create all of the test methods you can come up with during a brainstorming session, and put an Ignore attribute on them.  At that point it should be feasible to see how many of the methods are implemented, and how many are still being Ignored, and then grab a percentage.  This number would change as time progressed and the development team came up with additional tests for specific requirements, but it would probably stabilize by the time serious work started on a specific requirement.
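
As a rough sketch of what the attribute side might look like (the attribute and the requirement ID here are made up for illustration), something like this would be enough for a runner to group tests by requirement:

using System;
using NUnit.Framework;

// A hypothetical attribute tying a test to a requirement ID from the XML doc.
[AttributeUsage(AttributeTargets.Method)]
public class RequirementAttribute : Attribute
{
    private string _id;

    public RequirementAttribute(string id)
    {
        _id = id;
    }

    public string Id
    {
        get { return _id; }
    }
}

[TestFixture]
public class OrderTests
{
    [Test]
    [Requirement("REQ-0004")]
    public void Order_Total_Includes_Sales_Tax()
    {
        // ...test implementation here...
    }
}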

So that’s it…my brilliant idea.  I actually think there wouldn’t be a whole lot of work involved in getting it going either…a new attribute, an updated Unit Test Runner, and some XML work.  It would be nice to be able to define the requirements in a way that would let you use Intellisense to select the requirements while in VS.Net…maybe code-generate a struct or enum based on the XML doc?

It’s been a while now, but Roy Osherove posted some articles about Testable Object-Oriented Programming.  That is, designing your code to be testable by default.  The part of this that is most interesting is that he suggests that sometimes you need to sacrifice encapsulation in favor of being able to test your code.  This is where one of the biggest flaws with TDD (at least in my opinion) begins to show.  I think the idea that making code testable breaks encapsulation is one of the only arguments against TDD that I have heard that I can’t give a good defense against, and it makes me crazy.

Overall I would consider myself a competent OO programmer…I might even venture to say that I’m really good, so this issue is one that bugs me.  I think encapsulation is one of the most powerful tools out there for making good OO code.  So now I’m a big fan of TDD, which does not allow me to use this tool as much as I would like…so what gives?

The Problem
There are many cases (at least that I’ve seen) where it makes sense to restrict the visibility of pieces of your object model.  In this context the situation generally calls for classes/methods to be Internal (Package in Java), or Protected Internal.  I have a feeling, even though I have no evidence to back it up, that this is probably the least-used of all the visibility constraints.  What I’ve found, though, is that using it allows you to create well-factored classes that abide by the Single Responsibility Principle, and still present a minimal interface to other layers/users of your code.  There is also the problem of exposing internal dependencies so that those dependencies can be injected during testing (this is where stubs or mock objects come into play).  In many cases there is no real reason to expose this information, and in fact making it available is probably a bad thing because it will make code that is consuming your object more aware of the implementation than it needs to be…and Steve McConnell did a good job of letting us know this is bad.

These are extremely brief introductions to the problems, and even though these are simple examples, such things can get ugly quickly in a large project.  The main reason being that all of a sudden there are many, many more things available to the developer who is just trying to use your class to get some work done.

Some Solutions
There are some solutions which can reduce the complexity somewhat.  In the case of using Internal objects, there is always the option of including your test classes in the same project as the code you are testing.  Microsoft seems to have done this with Enterprise Library 1.1.  The biggest downside to this, however, is that in a lot of cases you don’t want to ship your tests with the rest of your production code, so you have to figure out how to do some clever things like conditional compilation to avoid compiling those classes into your production assemblies.  Granted, with a build tool like NAnt this becomes easier, but if you’re not familiar with it, there is a rather steep learning curve.
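
For example, something as simple as wrapping the fixtures in a conditional compilation block keeps them out of a release build (the symbol name here is arbitrary, and NAnt or the project settings would define it for test builds):

#if INCLUDE_TESTS
using NUnit.Framework;

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Processes_A_Simple_Order()
    {
        // Only compiled into the assembly when INCLUDE_TESTS is defined.
    }
}
#endif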

In the realm of exposing implementation details, one possible course of action is to use an Inversion of Control container like the Castle Microkernel/Windsor tools.  With these you can write your objects so that they request instances of their dependencies from the IoC container, rather than creating new instances themselves (or expecting you to provide them).  This raises the question of where the IoC container lives, though.  In the case of testing you would want to fill the container with mocks or stubs of the appropriate objects, so that the class you’re testing gets those objects instead.  In some cases this may mean that your IoC container needs to live in a central repository where all of the objects can get to it…or you could pass around your IoC instance, instead of your dependencies.
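
As a bare-bones illustration of the pattern (this is a stand-in I made up, not the actual Castle API), the class asks a central locator for its dependency, and a test can register a stub there first:

using System;
using System.Collections;

// A made-up, minimal service locator standing in for a real IoC container.
public class ServiceLocator
{
    private static Hashtable _services = new Hashtable();

    public static void Register(Type contract, object implementation)
    {
        _services[contract] = implementation;
    }

    public static object Resolve(Type contract)
    {
        return _services[contract];
    }
}

public interface IDataSource
{
    string GetData();
}

public class ReportBuilder
{
    public string Build()
    {
        // The dependency comes from the container rather than being newed up,
        // so a test can call ServiceLocator.Register with a stub IDataSource.
        IDataSource source = (IDataSource)ServiceLocator.Resolve(typeof(IDataSource));
        return source.GetData();
    }
}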

The other solution, which was the point of the 2nd post from Roy on TOOD, is to use a tool like TypeMock, which has the remarkable ability to mock objects and give you access to the internal/protected/private members in your tests.  Pretty serious stuff.  It doesn’t solve the problem of dependency injection completely, though.  There is also the issue of cost if you want all of the really nifty features (though the Community Edition seems to include the ability to mock all methods/properties of an object).

The Ultimate Solution
In my mind what is needed to really bridge the divide between the issues here (which are testability and good OO practices like encapsulation) is to make support for TDD a first-class feature of a language or runtime.  What I mean by that is that the .Net Framework, for example, would have knowledge of test classes and test methods, and loosen the visibility restrictions while running a test.  Most likely this would mean some serious changes to the CLR, but it seems like it should be possible to set up a sandbox environment for testing, where the CLR would evaluate the objects being tested, and then allow full access to all fields and methods.  It would have to be seriously limited, and there may even need to be some restrictions around when and where this sort of behavior can happen, but ultimately it seems like the best solution.  It may be a stretch, but I think it is the only real solution to the problem.  Until that point, we are stuck making decisions about where to draw the line between testability and good OO design.