I’ve been doing a lot of poking around the MythTV and Knoppmyth sites recently trying to figure out what is going to work best in my current situation. Since I’ve got cable, and not satellite, it would seem that it is possible to get all of the non-encrypted content from my provider, as long as I have a capture card with QAM support. These cards seem to be HD cards overall (at least I haven’t found one that is supported that is not HD), which makes sense given that the QAM signal is a digital signal. It sounds like getting the channel mapping put together for the QAM content can be somewhat tedious depending on how well Myth can auto-discover your channels (which it would seem is not that good), so I know there is some work involved in getting the digital channels. For analog, any of the supported analog capture cards would work, and it seems that my wife and I tend to watch more of the analog stations than the digital ones (though, in my current line-up, all of the “expanded” cable stations (Discovery, Comedy Central, TBS, etc.) have both an analog station and a digital station…pretty strange). I also know that I want the ability to watch and record at the same time, which means at least two tuners. Here is the trick, though: since I need different tuners for digital vs analog, how many tuners do I really need? And how many PCI slots do I want to eat up in the quest for multiple recordings?

So here is what I’ve decided. Since I’m mostly concerned with analog, I went with a Hauppauge WinTV PVR-500, which is a dual-tuner analog PCI card, and which seems to be supported just fine in Myth (it looks like two PVR-150 cards, from the sound of it). That by itself takes care of my analog needs. As for the digital side, I haven’t decided for sure yet, but I’m leaning towards the DVICO FusionHDTV RT Lite, which seems to have good support in Myth. It also has the advantage of being one of the few HD cards with a hardware encoder.

I should take a step back here and explain this. You have two choices when watching or recording TV on a PC: you can have your machine take care of encoding and decoding the MPEG-2 video that is your TV show, or you can let the capture card do it. If you let the capture card do it, then your machine is not going to be nearly as busy as it would be otherwise. This presents a couple of advantages: 1. You can get away with less horsepower on the machine, which means parts will be cheaper (spend the cash on storage if you can). 2. Your machine will consume less power (I’m talking wattage here), so you are less likely to have a machine in your living room that sounds like a Lear jet taking off when you’re trying to watch all of your recorded episodes of “Eating Bugs with the Stars”, or whatever the latest reality fad is.

The PVR-500 contains a hardware encoder, so if I can get the same thing on the digital side, I’ll be in good shape. While I’m on the subject of power, I’m also seriously looking at an AMD Turion based system. This is AMD’s mobile processor, so it is designed with a low-power footprint in mind. It also seems that there are several motherboards out there which will support it, and some additional components which will help keep it cool. The only other major sources of noise are the power supply and the hard drive. Cooler Master makes some nice quiet power supplies, so I will probably check there first. As for hard drives, I haven’t really started looking yet. If I can find noise specs on them, then I may use that as a criterion when deciding which to buy, but my primary concern is how many GB I can get for my money.

The other components have been moved to the end of the decision tree. I like the Silverstone cases I mentioned in my last post, but I decided that I should get the components together first, since I didn’t really want to end up with a nice-looking case where I couldn’t fit all of my goodies. I also need to decide whether or not to go with a DVD writer in the case…it could be handy, but then I can also grab stuff off the network, so is it really needed (I’ve got a CD writer lying around, which may be my stand-in for a while)?

Overall this is going to be a fun project, and let’s face it, how often do you get a geek project that your wife is behind 100%?

It’s been a while now, but Roy Osherove posted some articles about Testable Object-Oriented Programming. That is, designing your code to be testable by default. The part of this that is most interesting is that he suggests that sometimes you need to sacrifice encapsulation in favor of being able to test your code. This is where one of the biggest flaws with TDD (at least in my opinion) begins to show. The idea that making code testable can break encapsulation is one of the only arguments against TDD that I have heard that I can’t give a good defense against, and it makes me crazy.

Overall I would consider myself a competent OO programmer…I might even venture to say that I’m really good, so this issue is one that bugs me. I think encapsulation is one of the most powerful tools out there for making good OO code. So now I’m a big fan of TDD, which does not allow me to use this tool as much as I would like…so what gives?

The Problem
There are many cases (at least that I’ve seen) where it makes sense to restrict the visibility of pieces of your object model. In this context the situation generally calls for classes/methods to be Internal (package-private in Java), or Protected Internal. I have a feeling, even though I have no evidence to back it up, that this is probably the least-used of all the visibility modifiers. What I’ve found, though, is that using it allows you to create well-factored classes that abide by the Single Responsibility Principle, and still present a minimal interface to other layers/users of your code. There is also the problem of exposing internal dependencies so that those dependencies can be injected during testing (this is where stubs or mock objects come into play). In many cases there is no real reason to expose this information, and in fact making it available is probably a bad thing, because it will make code that is consuming your object more aware of the implementation than it needs to be…and Steve McConnell did a good job of letting us know this is bad.
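Just to make the tension concrete, here is a quick C# sketch (the class names are made up for illustration): one obvious way to give a test a seam for stubbing the formatter is to expose the dependency through a public constructor, which tells every consumer more about the implementation than it really needs to know.

```csharp
public interface IReportFormatter
{
    string Format(string rawData);
}

public class ReportBuilder
{
    private readonly IReportFormatter formatter;

    // Testable, but every consumer now knows ReportBuilder
    // is built on top of a formatter.
    public ReportBuilder(IReportFormatter formatter)
    {
        this.formatter = formatter;
    }

    // Encapsulated, but there is no seam for a test to slip
    // a stub formatter in.
    public ReportBuilder() : this(new DefaultFormatter())
    {
    }

    public string Build(string rawData)
    {
        return formatter.Format(rawData);
    }
}

// The implementation detail we would rather keep hidden.
internal class DefaultFormatter : IReportFormatter
{
    public string Format(string rawData)
    {
        return rawData.Trim();
    }
}
```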

These are extremely brief introductions to the problems, and even though these are simple examples, such things can get ugly quickly in a large project. The main reason is that, all of a sudden, there are many, many more things available to the developer who is just trying to use your class to get some work done.

Some Solutions
There are some solutions which can reduce the complexity somewhat. In the case of using Internal objects, there is always the option of including your test classes in the same project as the code you are testing. Microsoft seems to have done this with EnterpriseLibrary 1.1. The biggest downside to this, however, is that in a lot of cases you don’t want to ship your tests with the rest of your production code, so you have to figure out how to do some clever things like conditional compilation to avoid compiling those classes into your production assemblies. Granted, with a build tool like NAnt this becomes easier, but if you’re not familiar with it, there is a rather steep learning curve.
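Here is roughly what that looks like (just a sketch; the INCLUDE_TESTS symbol and the OrderCalculator class are made up, and I’m assuming NUnit for the test fixture). The internal class and its tests live in the same assembly, and the fixture only compiles when the symbol is defined:

```csharp
#if INCLUDE_TESTS
using NUnit.Framework;   // NUnit reference only needed for test builds
#endif

namespace MyCompany.Orders
{
    // Internal: invisible outside the assembly, which is exactly what
    // makes it hard to test from a separate test project.
    internal class OrderCalculator
    {
        public decimal Total(decimal subtotal, decimal taxRate)
        {
            return subtotal + (subtotal * taxRate);
        }
    }

#if INCLUDE_TESTS
    // Compiled only when INCLUDE_TESTS is defined, so the fixture
    // never ships in the production assembly.
    [TestFixture]
    public class OrderCalculatorTests
    {
        [Test]
        public void Total_Includes_Tax()
        {
            // The fixture can see OrderCalculator because it lives
            // in the same assembly.
            OrderCalculator calc = new OrderCalculator();
            Assert.AreEqual(107m, calc.Total(100m, 0.07m));
        }
    }
#endif
}
```

The idea being that the build (NAnt or otherwise) only defines INCLUDE_TESTS for the test target, so the production assembly never sees the fixture.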

In the realm of exposing implementation details, one possible course of action is to use an Inversion of Control container like the Castle Microkernel/Windsor tools. With these you can write your objects so that they request instances of their dependencies from the IoC container, rather than creating new instances themselves (or expecting you to provide them). This raises the question of where the IoC container lives, though. In the case of testing, you would want to fill the container with mocks or stubs of the appropriate objects, so that the class you’re testing gets those objects instead. In some cases this may mean that your IoC container needs to live in a central repository where all of the objects can get to it…or you could pass around your IoC instance instead of your dependencies.
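To show the shape of the “central repository” idea, here is a deliberately tiny, hand-rolled locator (completely hypothetical; a real project would use Castle’s container rather than this). The class under test asks the locator for its dependency, so a test can register a stub before exercising it:

```csharp
using System;
using System.Collections.Generic;

// A toy service locator, just to illustrate the pattern described above.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    public static void Register<T>(T instance)
    {
        services[typeof(T)] = instance;
    }

    public static T Resolve<T>()
    {
        return (T)services[typeof(T)];
    }
}

public interface IAuditLog
{
    void Record(string message);
}

// The class under test never news up its dependency; it asks the
// container, so a test can register a stub IAuditLog first.
public class AccountService
{
    public void Close(int accountId)
    {
        IAuditLog log = ServiceLocator.Resolve<IAuditLog>();
        log.Record("Closed account " + accountId);
    }
}
```

In a test you would call something like ServiceLocator.Register&lt;IAuditLog&gt;(new FakeAuditLog()) in your setup, and AccountService never knows the difference. The obvious trade-off is that the dependency on IAuditLog is now invisible in the public API, which is sort of the point of the original complaint.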

The other solution, which was the point of the 2nd post from Roy on TOOD, is to use a tool like TypeMock, which has the remarkable ability to mock objects and give you access to the internal/protected/private members in your tests.  Pretty serious stuff.  It doesn’t solve the problem of dependency injection completely, though.  There is also the issue of cost if you want all of the really nifty features (though the Community Edition seems to include the ability to mock all methods/properties of an object).

The Ultimate Solution
In my mind, what is needed to really bridge the divide between these issues (testability and good OO practices like encapsulation) is to make support for TDD a first-class feature of a language or runtime. What I mean by that is that the .Net Framework, for example, would have knowledge of test classes and test methods, and loosen the visibility restrictions while running a test. Most likely this would mean some serious changes to the CLR, but it seems like it should be possible to set up a sandbox environment for testing, where the CLR would evaluate the objects being tested and then allow full access to all fields and methods. It would have to be seriously limited, and there may even need to be some restrictions around when and where this sort of behavior can happen. It seems like a stretch, but in my mind it is the only real solution to the problem. Until that point, we are stuck making decisions about where to draw the line between testability and good OO design.

 

This morning I was greeted by a friendly message from Windows Update saying I had updates ready to install. Because I’m the curious sort who likes to see how many issues MS has to deal with regularly, I decided to use “Advanced” mode to see what the updates were (by “Advanced”, of course, it means anyone who is concerned about what is getting installed on their machine…I’ve seen I, Robot; I know that one day the great machine may decide the only way to save us from ourselves is to format our hard drives).

What greeted me was the “Genuine Advantage Notification” update. Apparently this is a little guy that sits in memory and periodically checks to make sure my version of Windows does not suddenly become pirated. WTF? Is this really a problem? Sure, maybe if someone decides to upgrade to a pirated version of Vista, instead of paving their machine like normal people do, this might come in handy…but that assumes the tool gets picked up by the upgrade process, and that someone who purchases a pirated copy of Vista is the sort of person who would leave a gem such as this one installed on their machine.

Okay, okay, I know that there are probably cases of normal users going to Thailand and thinking to themselves “Well, here is a booth with Windows Vista DVDs, and they are only wanting $30 for it…that’s way cheaper than in the US…must be the currency conversion rate”, and so they accidentally get themselves a pirated copy of Windows.  I would think that most people who are grabbing pirated copies of their OS are doing it because they don’t want to pay for the full version, and they are under no false pretenses about the “Genuineness” of the product.

The best part of this utility is the fact that once it figures out that your OS has become pirated, it will “help you obtain a licensed copy”.  That makes me feel really confident about having this thing running all the time.

A little background: I’ve been working with a SQL 2000 Reporting Services server for about 4 months now, trying to do some integration into an intranet app, and some report conversion. For the last 3 months or so, the web service API and the ability to publish from within Visual Studio have been broken. Needless to say this has put a serious damper on my ability to integrate RS into the client’s intranet. I did some looking out on the web and the newsgroups, and could never find anything that quite worked. The really interesting thing was that if IIS was restarted, I could use the web service API once, and only once. If I attempted to connect again, I would lose my connection, and not be able to connect again until IIS was bounced. What was really strange was that it looked like some data was being sent, but eventually it would just give up. Also strange was that this did not seem to affect the Report Manager app that MS uses as the front-end for the report server.

Well, as you may guess, I wouldn’t be writing this if I didn’t have a solution (at least not with “Solved!” in the title), so here it is: HTTP Keep-Alives. I disabled HTTP Keep-Alives on the “Default Web Site” and life was groovy again. I still need to check publishing from Visual Studio, but I can connect to the web service API, which is a good first step.

It’s the little things in life that bring such joy…

UPDATE:
So after moving right along for a little while with HTTP Keep-Alives disabled, I attempted to go to the ReportManager page to change the data source of a report. When I arrived, I was greeted by a 401 error. WTF? So for some unknown reason the Keep-Alive is required by Report Manager. Since you can’t enable or disable Keep-Alives at the virtual directory level, I was SOL. After trying some stupid things that did not work (like setting the Connection: Keep-Alive header on the ReportManager virtual dir), I finally found this little gem on Google: it seems that there is a bug in ASP.Net that will cause it to suddenly drop the connection, even though there is still data. The fix is to grab your HttpWebRequest object, set Keep-Alive to false, and set the ProtocolVersion to HttpVersion10. After doing this, life seems to be okay again.
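For reference, here is roughly what that fix looks like in code (a sketch only; “ReportingService” stands in for whatever wsdl.exe named your generated proxy class, so this file only compiles alongside that generated code). You override GetWebRequest on the proxy so every call goes out without Keep-Alive and as HTTP/1.0:

```csharp
using System;
using System.Net;

// A derived proxy that applies the workaround to every outgoing call.
public class PatchedReportingService : ReportingService
{
    protected override WebRequest GetWebRequest(Uri uri)
    {
        HttpWebRequest request = (HttpWebRequest)base.GetWebRequest(uri);
        request.KeepAlive = false;                        // drop Keep-Alive
        request.ProtocolVersion = HttpVersion.Version10;  // fall back to HTTP/1.0
        return request;
    }
}
```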

My wife and I have decided to try and put together a PVR box to make life a little easier.  With Lana getting old enough to pay attention to grown-up TV, and the fact that there are 4 shows we like to watch that happen at the same time, it seems appropriate (not to mention being able to record Sesame Street for the baby).

So here is what I’ve figured out so far:

Linux-Based PVR using MythTV.

Pretty good eh?  So now for the hard part: Hardware.  So far I’m looking at the WinTV cards with the hardware encoding/decoding.  They are fairly reasonably priced, and fairly well supported.

I’m also looking at the rather sharp SilverStone LC03V home theater component case.

Not much info yet, but I’ll be keeping notes here as I go along and decide what will work best.

I’ve been reading the “Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries” book recently, and I came across a section discussing interface design which had direct bearing on one of my earlier posts regarding programmer intent. They basically state flat out that you should never use marker interfaces in .Net. Instead, you should favor custom attributes, and then test the type for that attribute. This was interesting to me, since I have been trying to determine what value, if any, marker interfaces would have in .Net. In the Java example I cited, one of the benefits was that the JavaDoc information associated with the interface would then be attached to the class, so you would have clear intent from the developer when the interfaces were used. .Net documentation comments don’t carry that same direct association…granted, when the documentation is generated, most of the time there will be a link to the interface definition…but it’s not quite the same. On the other hand, generally a custom attribute will not even provide that link, so from a documentation standpoint there is less information available.

I think, though, that it is a bit more important to have the information about developer intent available while reviewing the source code, which makes the custom attribute concept a really good option. You could even conceivably create a utility which would grab the attribute information from the assembly metadata and generate a report listing which classes were marked with which attributes, and therefore which classes were part of which patterns. There would also be the ability to associate other information with the attribute, such as a description, so you could have an attribute which stated that a class was part of a specific pattern (say, Chain of Responsibility), and then give a description of the component. With that you could see specific instances of the Chain of Responsibility pattern within the code. Attributes can also be inherited down the class hierarchy, which could make things more confusing, depending on how the attribute was used on the more top-level classes.
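As a sketch of what I mean (the attribute and class names here are invented for illustration), something like this would let a little reporting utility pull the pattern information back out of the metadata:

```csharp
using System;

// Stands in for a marker interface: records that a class participates
// in a pattern, plus a free-form description.
[AttributeUsage(AttributeTargets.Class, Inherited = true)]
public class PatternParticipantAttribute : Attribute
{
    private readonly string patternName;
    private readonly string description;

    public PatternParticipantAttribute(string patternName, string description)
    {
        this.patternName = patternName;
        this.description = description;
    }

    public string PatternName
    {
        get { return patternName; }
    }

    public string Description
    {
        get { return description; }
    }
}

[PatternParticipant("Chain of Responsibility",
    "Handles discount approval before passing to the manager handler.")]
public class DiscountApprovalHandler
{
    // ...
}

public class PatternReport
{
    // A reporting utility could walk an assembly's types and pull the
    // attribute data back out of the metadata like this.
    public static void Describe(Type type)
    {
        object[] marks =
            type.GetCustomAttributes(typeof(PatternParticipantAttribute), true);

        foreach (PatternParticipantAttribute mark in marks)
        {
            Console.WriteLine("{0}: {1} - {2}",
                type.Name, mark.PatternName, mark.Description);
        }
    }
}
```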


I recently ran across an intranet reporting app which required IE6 (or, I should say, which would not work with IE7 or Firefox 2). I had originally upgraded to IE7 after numerous JavaScript errors and a desire to check out CardSpace, and thus far had not had too many problems. This was a potential issue, however.

Fortunately there is a ready-made solution: Multiple IEs. This little gem is a single installer which will allow you to install any version of IE from 3 through 6 in its own folder, so it does not interfere with the default install. You can pick it up at http://tredosoft.com/Multiple_IE


Here are my results of the Superhero Personality Test…Does it surprise anyone, really?

Your results:
You are Spider-Man

Spider-Man: 80%
Green Lantern: 65%
Iron Man: 55%
Catwoman: 55%
The Flash: 50%
Hulk: 50%
Superman: 45%
Supergirl: 40%
Robin: 35%
Batman: 35%
Wonder Woman: 30%

You are intelligent, witty, a bit geeky and have great power and responsibility.


Click here to take the “Which Superhero am I?” quiz…

On a similar note, here are my supervillain results. I personally think I tend more towards the Joker:

Your results:
You are Dr. Doom

Dr. Doom: 60%
The Joker: 60%
Lex Luthor: 58%
Mr. Freeze: 57%
Magneto: 53%
Apocalypse: 48%
Poison Ivy: 44%
Green Goblin: 44%
Riddler: 40%
Dark Phoenix: 39%
Juggernaut: 36%
Kingpin: 36%
Catwoman: 34%
Venom: 33%
Two-Face: 28%
Mystique: 24%
Blessed with smarts and power but burdened by vanity.


Click here to take the Super Villain Personality Test

I just learned that Microsoft changed the way unhandled exceptions in threads are handled in .Net 2.0 vs the way they worked in 1.1. In 1.1, an unhandled exception on a worker thread would silently kill just that thread; with 2.0, an unhandled exception on any thread will take down the entire process (after giving you a last chance via the AppDomain.UnhandledException event). Overall, I think this is a good thing, since not only will it force people to handle exceptions within their thread code, but it will also eliminate the problem of applications mysteriously grinding to a halt because a background thread quietly died. It will, however, cause some serious grief for any developers not expecting that sort of behavior. I’m sure I’m not the only one who has seen less-than-stellar multi-threaded programming tactics in applications which should be better behaved (heck, I’ve done my share of shoddy thread handling in the past).

The one exception to this, of course, is the ThreadAbortException, which, understandably, only affects the thread it is raised on. Granted, using Thread.Abort or raising a ThreadAbortException is considered a brute-force approach, so it shouldn’t be happening anyway…
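For anyone who wants the defensive version, here is a minimal sketch (just a toy console app I put together for illustration): handle exceptions inside the thread procedure so nothing unhandled ever reaches the runtime, and hook AppDomain.UnhandledException as a last-chance logger for anything that slips through.

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Last-chance notification: in 2.0 this event fires, and then the
        // process still goes down for an unhandled exception on any thread.
        AppDomain.CurrentDomain.UnhandledException +=
            delegate(object sender, UnhandledExceptionEventArgs e)
            {
                Console.WriteLine("Unhandled: " + e.ExceptionObject);
            };

        Thread worker = new Thread(DoWork);
        worker.Start();
        worker.Join();
    }

    static void DoWork()
    {
        try
        {
            // ...the real work would go here...
            throw new InvalidOperationException("something broke");
        }
        catch (Exception ex)
        {
            // Handle (or at least log) inside the thread so the failure
            // never reaches the runtime as an unhandled exception.
            Console.WriteLine("Worker failed: " + ex.Message);
        }
    }
}
```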