About two weeks ago I crafted my first Fluent Interface.  Since then I’ve been seeing more and more places where such an approach would be useful.  The part I find odd is that it only recently became a possibility for me.  The big motivating factor, I believe, was reading Martin Fowler’s post on the subject, in which he basically describes it as a super-great idea (okay, I’m paraphrasing, but you get the idea).  The concept is fairly simple: write an API that reads like a sentence.

This isn’t a new concept; in fact, I seem to recall reading quite a bit in the agile and TDD world where authors encourage you to make method/property/variable names verbose and closer to natural language, in order to improve the readability of code and make the items more self-documenting.  The big difference, though, is that fluent interfaces tend to be more granular.  Instead of a single method that reads like a sentence, you build the sentence out of method and property names, with Intellisense there to help you determine what is possible along the way.

The big shift, I think, is in the realization that within this context method names like With, For, and And are perfectly okay, and as a matter of fact make things better in the end.  It’s like a taboo has been lifted, and suddenly a whole new landscape of possibilities has opened up.

Since the original implementation (a small fluent interface I created to describe discrete elements of a document to be parsed), I’ve found myself building another fluent interface to play with .Net 3.5 extension methods (replicating the Ruby 10.Minutes.Ago semantics), and adding a third to the same production project as the first, this one used to grab component services from my IoC container.
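To make the extension-method experiment concrete, here is a minimal sketch of what the 10.Minutes.Ago idea looks like in C# (the names here are illustrative, not a finished API):

using System;

public static class FluentTimeExtensions
{
    // 10.Minutes() turns an integer into a TimeSpan of that many minutes.
    public static TimeSpan Minutes(this int value)
    {
        return TimeSpan.FromMinutes(value);
    }

    // someSpan.Ago() subtracts the span from the current time.
    public static DateTime Ago(this TimeSpan span)
    {
        return DateTime.Now - span;
    }
}

// Usage: DateTime cutoff = 10.Minutes().Ago();

Not quite as clean as Ruby, since C# insists on the parentheses, but it reads well enough.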

I’m not sure if this is a new paradigm, or just a new hammer looking for nails, but it is interesting nonetheless. It has also opened up new challenges around testing and Intellisense documentation, which I’ve not quite figured out yet.

I ran into this issue today when trying to find a quick and dirty way to enforce redirection from HTTP to HTTPS for an intranet site we’re enabling for HTTPS (yes, I know, SSL on an intranet???  One word: Audit).  I thought this would be easy; I half expected there would be an IIS setting to handle it for me.  I was wrong on both counts.  What I ended up doing was taking an old DNN HttpModule, which was set up to let the user mark specific DNN tabs for SSL, and greatly simplifying it to suit my needs.  I thought I would post it so I won’t have to go looking for it again.

To give a quick overview of what is going on: this is an HttpModule which looks at a value in the web.config to determine whether or not to enforce HTTPS.  If the setting is true (or yes), I grab the Url property from the Request object, load it into a UriBuilder, set the Scheme to “https”, and set the Port to 443 (this may not be necessary, but it was generating a URL with a port of 80 before, which defeats the purpose, so I decided to play it safe).  It then feeds the generated URI to Response.Redirect(), and you’re off.  There is some additional code in there to disable the feature if you’re on localhost, which is mostly to keep from blowing up your dev box.

Here is the class:

using System;
using System.Configuration;
using System.Web;


namespace HTTPSRedirectHandler
{
    /// <summary>
    /// An HttpModule which redirects traffic to HTTPS based on configuration settings.
    /// </summary>
    public class HttpsRedirector : IHttpModule
    {
        private HttpApplication _context;

        #region IHttpModule Members

        /// <summary>
        /// Initializes a module and prepares it to handle
        /// requests.
        /// </summary>
        /// <param name="context">An <see cref="T:System.Web.HttpApplication"/> that provides access to the methods, properties, and events common to all application objects within an ASP.NET application</param>
        public void Init(HttpApplication context)
        {
            _context = context;
            _context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        /// <summary>
        /// Disposes of the resources (other than memory) used by the
        /// module that implements <see langword="IHttpModule."/>
        /// </summary>
        public void Dispose()
        {
            // Unhook our handler; the HttpApplication itself is owned by ASP.NET,
            // so it is not ours to dispose.
            if (_context != null)
                _context.BeginRequest -= new EventHandler(context_BeginRequest);
        }

        #endregion

        /// <summary>
        /// Handles the BeginRequest event of the current Http Context.
        /// </summary>
        private void context_BeginRequest(object sender, EventArgs e)
        {
            bool useSSL = false;
            string result = null;
            if((result = ConfigurationSettings.AppSettings["RequireSSL"]) != null)
            {
                if(result.ToUpper() == "TRUE" || result.ToUpper() == "YES")
                    useSSL = true;
            }
            if (useSSL)
                EnforceSSL();
        }

        /// <summary>
        /// Enforces a redirection to HTTPS if the current connection is using HTTP (port 80).
        /// </summary>
        private void EnforceSSL()
        {
            if(_context.Request.ServerVariables["SERVER_NAME"].ToLower() != "localhost")
            {
                if(_context.Request.ServerVariables["SERVER_PORT"] == "80")
                {
                    UriBuilder uri = new UriBuilder(_context.Request.Url);
                    uri.Scheme = "https";
                    uri.Port = 443;
                    _context.Response.Redirect(uri.ToString());
                }
            }
        }
    }
}

 

Here are the web.config settings you need to use it:

<httpModules>
    <add name="SSLRedirect" type="HTTPSRedirectHandler.HttpsRedirector, HTTPSRedirectHandler" />
</httpModules>

<appSettings>
    <add key="RequireSSL" value="True" />
</appSettings>

So I had this analogy pop into my head a while back and I’ve been sitting on it because, quite frankly, I was almost embarrassed to have thought of it.  I finally decided that I might as well post it since no one is going to read this anyway, so here goes:

Writing software without TDD is like having unprotected sex.  It’s extremely irresponsible in this day and age when everyone is supposed to know better, but it still happens a lot.  Sure, it feels better, and let’s face it, in the heat of the moment you really don’t want to take the time to make sure you’ve taken all of the right precautions…you might lose your edge.  The problem is that if we don’t, we have that lingering fear in the back of our minds, that “What if?” that keeps popping up at the worst moments.  Most of the time it’s okay, and everything is fine, but it only takes one time for things to go wrong to make you regret your decisions.

So come on folks, we all know what we should be doing, so no more making excuses.  After a while you get used to it, and you find that the peace of mind it gives you far outweighs the momentary delights of living on the wild side.

I recently decided to take the plunge and get CodeRush and Refactor! Pro (along with DxCore) loaded instead of Resharper.  Now, don’t get me wrong, there is a lot I like about Resharper, but overall the performance was becoming an issue.  There were often problems with VS freezing for no particular reason, and then coming back as if nothing was wrong…I swear it was like my IDE had narcolepsy or something.

One of the things I noticed immediately about CodeRush was that there is a single installer, and when I ran it I was able to install it on all versions of Visual Studio, including the Orcas beta.  This was nice compared to Resharper’s separate installs for VS 2005 and VS 2003.  It also makes me feel good about improvements in the product being available for all versions of the IDE.  One thing I noticed about R# was that some performance work was done in the VS 2005 version, but it did not seem to trickle down to the VS 2003 version.  I think the big reason for the difference is that CodeRush and Refactor! are implemented on top of DxCore, which provides a very clean abstraction from the scariness that is the Visual Studio integration layer.

Here are the things I really like about CodeRush/Refactor:

  • The visualizations are stunning!  No, seriously this is some amazing stuff.  Circles, arrows, animations, eye candy yes, but useful eye candy.
  • The Refactoring Live Preview is crazy brilliant.  Mark Miller mentioned this on a DNR episode, and I agree with his comment that the live preview allows you to discover new useful refactorings that may not be completely obvious from the name.
  • Performance is great.  ‘Nuf said.
  • It works with VB.Net.  Granted I don’t use VB.Net, but some of my co-workers do, and occasionally I have to work on VB.Net projects.
  • Dynamic Templates rock.  The fact that I can create a new mnemonic for my type, and then use predefined prefixes and do vee<space> to create a new Employee Entity for example is pure bliss.
  • Template contexts are way cool.  By default they have NUnit templates defined, and with contexts, t<space> creates both a TestFixture class and a Test method.
  • Markers and navigation are dreamy.
  • There are some crazy-cool template functions, like the ability to do a foreach within a template, so things like creating a switch/case statement for all items in an enum can be done easily.  This also powers a conditional to case refactoring that is pretty sweet.

Here are the things that I miss from Resharper:

  • Automatically adding a using statement in VS 2003 was sweet.  VS 2005 can do it with the built-in Intellisense features, but I got very used to it.
  • The VS 2003 test runner was very nice.  I use the Testdriven.Net plugin, which I cannot live without, but I like the graphical runner in the IDE.  The free test runner from JetBrains is for VS 2005 only, so it doesn’t help those of us in VS 2003.  I do like the fact that they released it as a free tool, though.
  • The “Extract Field” refactoring doesn’t exist in Refactor!…This shocked me a lot.
  • The Find Usages task.  This, I think, is part of the reason why R# was slow, but it did a brilliant job.  I think the rename in R# was more powerful as well.  There is a rename in beta for Refactor! that is supposed to work across an entire project/solution, but I haven’t had a chance to really test it yet.
  • The pre-build error checking is nice.

The good news is that the DxCore extensibility model means that most if not all of these items could be recreated.  The bad news is that there isn’t a lot of documentation around the extensibility model, particularly when it comes to creating new refactorings.  The test runner is one of the most painful points for me right now, so I’ve started exploring the process of creating one using the DxCore APIs.  It opens up the possibility of refining things too, which would be nice.  What I would really like would be the ability to detect and integrate with TestDriven.Net.

The folks over at Eleutian are evidently running both, which they claim is possible with some tweaking, but the performance issues for VS 2003 are the biggest downer on the R# side, so I will probably not go down that path.  I may load up the test runner for VS 2005, unless I get some time to try and build one using DxCore, in which case I’ll share it with the rest of the world.

So if you recall from some of my earlier posts, I’ve talked about the concept of the “Friend” class in C++ and how it could apply to TDD within .Net.  Well, today, with the help of Roy Osherove, I stumbled upon the InternalsVisibleToAttribute in .Net 2.0.  It allows you to specify, within one assembly, another assembly that should have access to the first assembly’s internal members.  This is genius, and goes a long way towards allowing you to keep your code encapsulated while still being testable.  If we could just get them to go one step further, and allow for access to private and protected members as well, life would be good, and there would be no more of this OOD vs TOOD junk.

The other interesting thing about this is that it could allow you to give a separate utility (say, the Castle MicroKernel) access to internal class constructors, and thus enforce creation of your objects through the kernel from outside assemblies.  This is actually a feature I desperately want in my current project, but sadly I am limited to .Net 1.1, so I can’t quite get there.

Here is a quick look at how this works, starting with a very unrealistic class in the assembly I want to test:

// No access modifier here, so the class itself defaults to internal.
class TestClass
{
    internal TestClass()
    {}

    private bool PrivateMethod()
    {
        return false;
    }

    internal bool SomeMethod()
    {
        return true;
    }

    public string PublicMethod()
    {
        return "You can see me";
    }
}

Now, I add the following to the AssemblyInfo.cs file:

[assembly: InternalsVisibleTo("TestAssembly")]

And here is what Intellisense looks like in my test class:
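The upshot is that a test fixture in TestAssembly can call the internal members directly.  A rough sketch, assuming NUnit:

using NUnit.Framework;

[TestFixture]
public class TestClassTests
{
    [Test]
    public void SomeMethod_ReturnsTrue()
    {
        // The internal constructor and SomeMethod() are visible here
        // only because of the InternalsVisibleTo attribute.
        TestClass subject = new TestClass();

        Assert.IsTrue(subject.SomeMethod());
        Assert.AreEqual("You can see me", subject.PublicMethod());
    }
}

PrivateMethod() is still off limits, of course, which is exactly the gripe above about wanting private and protected access too.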

Not bad.  Overall I would say this is definitely a good feature to have in your toolbox.  Internals are not perfect, but they are much more versatile than a lot of folks give them credit for.  Now if only I could get something like this in .Net 1.1…

The project I’m working on now has a huge need for auto-update.  Strangely enough, there aren’t a whole lot of documented solutions for an auto-update application in .Net 1.1.  In the 2.0 world you have ClickOnce, which handles those sorts of things for you (and in a way that isn’t terribly difficult to manage as a developer…as long as you pay attention to what you’re doing), but the only real option you get from MS on this is the AutoUpdater Application Block from the Patterns and Practices guys.  I took a look at this when it was in its 1.0 version a while back, and really didn’t care for it much.  The big reason was that it required you to set up an AppStart.exe file, which would look at your config file to determine which directory to start the app from.  When updates arrived, they were put in new directories based on versions.  This seemed like a lot of effort, and a lot of overhead.  The good news is that there is now a 2.0 version of the application block, and it looks like it is much more configurable and has the ability to do in-proc updates.

So here is my issue.  I want to change the default downloader (used to retrieve the updated files) from the BITSDownloader that ships with the block to one which will allow me to copy files from a UNC path.  Fortunately, a sample implementation of such a thing exists in the documentation, so there is a starting point.  It is pretty rough around the edges, and isn’t testable, so I am working on creating a nicer, testable version of the sample downloader.  Here is the problem, though: the dependencies are insane!  And it doesn’t look like even mock objects will be able to help.  Initially I found this a little strange, since I know that the Patterns and Practices group was headed up by some folks who were heavy into the Agile methodologies.  I also know for a fact that the Enterprise Library components have tests included.  So I pulled up the AutoUpdater Application Block to see how they were testing things…and what do you think I found?  They were not testing anything!  I can only assume that the updater block came from another group, because there were no tests in sight.  So, since I had the source code, I’m reduced to making modifications to the block to support testing.  For the most part this involved marking public methods/properties as virtual so that Rhino.Mocks can mock them.  I also added some parameters to the constructor of the UpdateTask class so that I could supply mock versions of some of the dependencies.
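To give a feel for the shape of those changes, here is an illustrative sketch of the pattern; the type names below are made up for the example and are not the block’s actual API.  The idea is simply virtual members plus constructor injection, so that Rhino.Mocks can substitute the dependency in a test:

// Illustrative only; not the real Updater Application Block types.
public class UncFileCopier
{
    // Virtual so Rhino.Mocks can generate a stand-in subclass.
    public virtual void Copy(string sourcePath, string destinationPath)
    {
        System.IO.File.Copy(sourcePath, destinationPath, true);
    }
}

public class UncDownloader
{
    private readonly UncFileCopier _copier;

    // Taking the dependency through the constructor means a test can
    // hand in a mock instead of touching the file system.
    public UncDownloader(UncFileCopier copier)
    {
        _copier = copier;
    }

    public void Download(string sourceShare, string targetFolder, string fileName)
    {
        string source = System.IO.Path.Combine(sourceShare, fileName);
        string target = System.IO.Path.Combine(targetFolder, fileName);
        _copier.Copy(source, target);
    }
}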

I can’t imagine trying to test this without the source code to modify…and as it stands I don’t feel real good about shipping around a non-standard version of the library.  Overall I’m quite disappointed in the P&P folks on this one.  I had high hopes when I saw the rest of the Enterprise Library that I would be able to test my extensions fairly easily.  I guess that goes to show that even in organizations (or specifically groups) where TDD is a best practice, it still runs into hurdles to acceptance.

I’ve been doing a lot of poking around the MythTV and Knoppmyth sites recently, trying to figure out what is going to work best in my current situation.  Since I’ve got cable, and not satellite, it would seem that I can get all of the non-encrypted content from my provider as long as I have a capture card with QAM support.  These cards seem to be HD cards overall (at least I haven’t found a supported one that is not HD), which makes sense given that the QAM signal is a digital signal.  It sounds like getting the channel mapping put together for the QAM content can be somewhat tedious depending on how well Myth can auto-discover your channels (which it would seem is not that well), so I know there is some work involved in getting the digital channels.  For analog, any of the supported analog capture cards would work, and it seems that my wife and I tend to watch more of the analog stations than the digital (though, in my current line-up, all of the “expanded” cable stations (Discovery, Comedy Central, TBS, etc.) have both an analog station and a digital station…pretty strange).  I also know that I want the ability to watch and record at the same time, which means at least two tuners.  Here is the trick, though: since I need different tuners for digital vs. analog, how many tuners do I really need?  And how many PCI slots do I want to eat up in the quest for multiple record?

So here is what I’ve decided.  Since I’m mostly concerned with analog, I went with a Hauppauge WinTV PVR-500, which is a dual-tuner analog PCI card, and which seems to be supported just fine in myth (looks like two PVR-150 cards from the sound of it).  That by itself takes care of my analog needs.  As for the digital, I haven’t decided for sure yet, but I’m leaning towards the DVICO FusionHDTV RT Lite, which seems to have good support in Myth.  It also has the advantage of being one of the few HD cards with a hardware encoder.

I should take a step back here and explain this.  You have two choices when watching or recording TV on a PC: you can have your machine take care of encoding and decoding the MPEG video that is your TV show, or you can let the capture card do it.  If you let the capture card do it, your machine is not going to be nearly as busy as it would be otherwise.  This presents a couple of advantages: 1. You can get away with less horsepower in the machine, which means parts will be cheaper (spend the cash on storage if you can). 2. Your machine will consume less power (I’m talking wattage here), so you are less likely to have a machine in your living room that sounds like a Lear jet taking off while you’re trying to watch all of your recorded episodes of “Eating Bugs with the Stars”, or whatever the latest reality fad is.

The PVR-500 contains a hardware encoder, so if I can get the same thing on the digital side, I’ll be in good shape.  While I’m on the subject of power, I’m also seriously looking at an AMD Turion based system.  The Turion is AMD’s mobile processor, so it is designed with a low-power footprint in mind.  It also seems that there are several motherboards out there which will support it, and some additional components which will help keep it cool.  The only other major sources of noise are the power supply and the hard drive.  Cooler Master makes some nice quiet power supplies, so I will probably check there first.  As for hard drives, I haven’t really started looking yet.  If I can find noise specs on them, I may use that as a criterion when deciding which to buy, but my primary concern is how many GB I can get for my money.

The other components have been moved to the end of the decision tree.  I like the Silverstone cases I mentioned in my last post, but I decided that I should get the components together first, since I didn’t really want to end up with a nice-looking case that couldn’t fit all of my goodies.  I also need to decide whether or not to go with a DVD writer in the case…it could be handy, but then I can also grab stuff off the network, so is it really needed?  (I’ve got a CD writer lying around, which may be my stand-in for a while.)

Overall this is going to be a fun project, and let’s face it, how often do you get a geek project that your wife is behind 100%?

I’ve been trying to catch up on past episodes of .Net Rocks during my hour-plus drive to work every day, and this morning I listened to the interview with Jimmy Nilsson (episode #191) on Domain Driven Design.  I’ll admit I have not had time to read his book all the way through, but it is one of the ones I’m most eager to dig into, since not only is he talking about DDD, he’s also talking about the Martin Fowler design patterns from PoEAA.


One of the things brought up on the show was this post on Jimmy’s blog.  I was shocked at the idea that anyone could consider SOA and DDD opposite strategies.  One of the comments made by the DNR hosts was that SOA is a technology in search of a problem to solve, whereas DDD is focused on the problem domain to the exclusion of specific technologies.  This is the sort of brutal mischaracterization of SOA that I think is still quite prevalent, despite the efforts of folks like Ron Jacobs who are trying to spread the word of SOA done right.  So here is my rundown of why, in fact, SOA and DDD go together like peanut butter and chocolate:


SOA is not a technology, it is a design strategy.  It is a way of thinking about your business in terms of what services can be provided.  The idea that if you’re using SOA then you are automatically using web services is just plain wrong.  You can create a business application that is service oriented without ever firing up the Web Service designer in .Net.  The core concept is to model the services used in your organization.  Those services should be recognizable objects from your Domain Model, and business people should be able to talk about them in the same language they use to talk about the same concepts within the business.  If that isn’t happening, then you are not designing the right services.
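As a trivial illustration (the names here are invented for the example), a service contract should read in the terms the business uses, whether or not it ever becomes a web service:

// Illustrative only: the contract speaks the language of the domain,
// not the language of the transport.
public class Claim
{
    // ...a domain object straight out of the model...
}

public interface IClaimsProcessingService
{
    void SubmitClaim(Claim claim);
    Claim GetClaim(string claimNumber);
}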


The process of modeling services should be derived directly from the Domain Model created using DDD if those services are going to be useful to the business.  One of the arguments for going SOA is “Business Agility”.  What does that mean in the real world?  Basically the ability to allow all of the aspects of your business to share information in a way that provides a measurable benefit.  The agility comes in when you are able to connect parts of your business in ways that were impossible before.  Once again, this has nothing to do with the technology behind SOA, it has to do with the business needs.  Granted from a technical standpoint, the easiest way to achieve the sort of connections we’re talking about is to use a technology like web services, but that is not the point of creating the SOA, and it certainly shouldn’t be the overwhelming factor in deciding to use SOA.


So, in answer to Jimmy’s question in his post (As I see it, there is lot of useful ideas there for SOA as well, or what am I missing?):  No, you aren’t missing something, the SOA’ers (as you refer to them) are missing something.  You’re right on.

Thinking about my earlier post discussing the OOP vs TOOP problem, I mentioned at the end that the best solution to this problem, in my mind, would be integrated language support for test classes.  Specifically, a way to let the compiler/runtime know that a specific class is a test class, and should therefore be able to access any and every member of the class it tests.


It occurred to me that such blatant intrusion into the privacy of a class is not unknown in the programming world.  C++ has the notion of a “friend” class: a class that can access all members of another class regardless of their protection level.  To keep things civil, so that not just any class can declare itself a friend of whatever class it wants, the class being accessed is the one that declares specifically that classes X, Y and Z are friends and so have free rein.  Granted, this is considered rather scary, and one of those features that makes C++ an ideal tool for shooting one’s own foot off.


But this concept has some merit within the context of test classes.  In the .Net world we could potentially use attributes on a class to identify what its test class (or classes, I suppose) are.  The compiler and runtime could then use that information to provide unlimited access only to the classes in that list.  For more protection, perhaps it would also validate that the classes in the list carry the appropriate attribute (TestFixture in NUnit, TestClass in Team System) before allowing access.  You would need some additional tooling in the development environment so that you would get Intellisense on all of the private/protected members, but that should be easy for the MS guys after building in the test support, right?
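Purely as a sketch of the proposal (none of this exists in the framework today; the attribute below is hypothetical), usage might look something like this:

using System;

// Hypothetical attribute, sketching the proposal only.
[AttributeUsage(AttributeTargets.Class)]
public class TestedByAttribute : Attribute
{
    public TestedByAttribute(Type testFixture) { }
}

[TestedBy(typeof(AccountTests))]
public class Account
{
    private decimal _balance;

    private void ApplyInterest(decimal rate)
    {
        _balance += _balance * rate;
    }
}

public class AccountTests
{
    // Under the proposal, code in this class could call
    // account.ApplyInterest(0.05m) and inspect _balance directly,
    // because AccountTests is named in Account's TestedBy list (and,
    // ideally, the runtime would verify it also carries a
    // TestFixture-style attribute of its own).
}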


There is some additional danger in this approach, since there is some potential to modify IL at runtime, but couldn’t there be additional protections around such things?  As someone with no knowledge of the internal workings of the CLR, I can’t say for sure, but it’s worth trying.


So, I now officially propose to Microsoft that this feature be added to the 4.0 version of the framework.


Do you think they heard me?