# Monday, 18 December 2006

Late last week, Rhapsody finally released a new desktop client that doesn't crash in the presence of IE7.  Hurray!  I've been gimping along with the web-based client, which is cool, but not nearly as full-featured as the desktop version.  I've been running the new client for several days now without a single crash, so I'm hopeful at this point.  Just in time to listen to all that Christmas music that I'd never shell out to buy outright...

Update: I may have spoken too soon.  It works fine on my desktop at work, but crashes constantly on my laptop at home.  Sigh.

Monday, 18 December 2006 12:41:57 (Pacific Standard Time, UTC-08:00)

Since time immemorial (or since Marconi, anyhow), those wishing to be licensed as Radio Amateurs in the US have had to pass a Morse code test.  Relatively recently, an entry-level license class (Technician) was added that doesn't require passing the code test, but which (therefore) comes with no privileges on the HF bands (used for long-distance communications). 

The code requirement has long been a hotly contested issue among amateurs.  Many have maintained that learning Morse code meant that you were "serious" about amateur radio, and would therefore be a skilled and considerate radio operator.  The problem is, since so few people actually use Morse code on the radio anymore, it has become (in my opinion, and that of many others) an artificial hoop to jump through before you can join the high priesthood of amateur radio.  Most of the General or Extra class licensees I've talked to have never ditted or dahed once since passing the test. 

Last week the FCC finally announced that it's dropping the Morse code requirement.  What this means for me personally is that I can finally hope to upgrade to a General class license.  I've studied all the material, and am pretty sure that I could pass the exam, but given the way the rest of my life works, I've been unable (or unwilling) to devote the time it would take to learn Morse code, so I've never taken the test. 

Of course, if I did pass the test, I'd want to get an HF-capable radio, but that's a whole different problem. :-)

CERT | Radio
Monday, 18 December 2006 12:38:55 (Pacific Standard Time, UTC-08:00)

I spent about a day and a half this week trying to get a CC.NET server working with a complex combination of MSBuild (running the build script), TypeMock (the solution we're using for "mock objects"), and NCover (for code coverage analysis), a combination that proved tricky to get right.

In a typical build script, you'd create a task to run your unit tests that depends on your compile task.  And you might want to run a "clean" task first so you know you're starting from ground zero. 
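In sketch form, that typical arrangement might look something like this (the target names, paths, and MbUnit console location are illustrative, not from our actual build):

<Project DefaultTargets="UnitTest" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <PropertyGroup>
    <BuildDir>$(MSBuildProjectDirectory)\build</BuildDir>
    <MBUnitConsole>C:\Program Files\MbUnit\MbUnit.Cons.exe</MBUnitConsole>
  </PropertyGroup>

  <!-- Start from ground zero by deleting previous build output. -->
  <Target Name="Clean">
    <RemoveDir Directories="$(BuildDir)" Condition="Exists('$(BuildDir)')" />
  </Target>

  <!-- Compiling depends on cleaning... -->
  <Target Name="Compile" DependsOnTargets="Clean">
    <MSBuild Projects="MySolution.sln" Properties="OutDir=$(BuildDir)\" />
  </Target>

  <!-- ...and running the unit tests depends on compiling. -->
  <Target Name="UnitTest" DependsOnTargets="Compile">
    <Exec Command="&quot;$(MBUnitConsole)&quot; $(BuildDir)\MyTests.MBUnit.dll" />
  </Target>

</Project>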

Bringing a profiler into the mix changes the equation a bit.  Or in this case, two profilers that have to play nice together.  A "profiler" here means something that registers itself as a .NET profiler, using the underlying COM API in the CLR to get under the covers of the runtime.  That API was originally envisioned as enabling tools that track things like performance or memory usage, which have to run outside the context of the CLR.  NCover uses this capability to do code coverage analysis, so that we can get reports of which lines of code in our platform are not being touched by our unit tests.

TypeMock is also a profiler, which causes some interesting interactions.  TypeMock uses the profiler API to insert itself between calling code and the ultimate target of a call in order to "mock", or pretend to be, the target.  We use this to reduce the dependencies our unit test code requires.  Rather than installing SQL Server on our build box, we can use TypeMock to "mock" the calls to the database and have them return the results we expect without actually calling SQL. 
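In code, that looks something like the following (a hedged sketch, using TypeMock's classic reflective API as best I recall it; the DataLayer and OrderService types are invented for illustration):

using System;
using MbUnit.Framework;
using TypeMock;

// Invented stand-ins for real data access and business classes.
public class DataLayer
{
    public int GetOrderCount()
    {
        throw new InvalidOperationException("would have hit SQL Server");
    }
}

public class OrderService
{
    public int CountOrders()
    {
        return new DataLayer().GetOrderCount();
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void CountsOrdersWithoutTouchingSql()
    {
        MockManager.Init();

        // Intercept the next DataLayer instance created and fake the
        // database call, so the test never needs a real SQL Server.
        Mock dbMock = MockManager.Mock(typeof(DataLayer));
        dbMock.ExpectAndReturn("GetOrderCount", 42);

        Assert.AreEqual(42, new OrderService().CountOrders());

        MockManager.Verify();
    }
}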

So the problem all this interaction caused for my build was that, in order for everything to work, the profiler(s) have to be active before any of the assemblies you want to test/profile are loaded.  I had my UnitTest target set up like this:

<Target Name="UnitTestCoverage" DependsOnTargets="Compile;RemoveTestResults">

    <CreateItem Include="$(BuildDir)\**\*.MBUnit.dll">
      <Output TaskParameter="Include" ItemName="TargetFiles"/>
    </CreateItem>

    <TypeMockStart Link="NCover" />

    <Exec Command="&quot;$(NCoverConsole)&quot; $(MBUnitConsole) @(TargetFiles->'%(FullPath)', ' ') /rt:xml /rf:$(TestResultsDir) /fc:unit //l $(TestResultsDir)\NCover.log //x $(TestResultsDir)\NCoverage.xml //a $(AssembliesToProfile)" ContinueOnError="true"/>

    <TypeMockStop/>

    ...
</Target>

with the dependencies set so that the compile and "RemoveTestResults" targets would be evaluated first.  Unfortunately, this caused the whole build to hang for upwards of an hour when this target started, after the compile and "remove" targets had run.  I'm theorizing (but haven't confirmed) that this is because the compiler loads the assemblies in question during the build process, and they don't get unloaded by the time we get to starting TypeMock.  That apparently means a whole bunch of overhead to attach the profilers (or something).  What finally ended up working was moving the compile target inside the bounds of the TypeMock activation, using CallTarget instead of the DependsOnTargets attribute:

    <TypeMockStart Link="NCover" />
    <CallTarget Targets="CleanCompile;RemoveTestResults"/>

    <Exec Command="&quot;$(NCoverConsole)&quot; $(MBUnitConsole) @(TargetFiles->'%(FullPath)', ' ') /rt:xml /rf:$(TestResultsDir) /fc:unit //l $(TestResultsDir)\NCover.log //x $(TestResultsDir)\NCoverage.xml //a $(AssembliesToProfile)" ContinueOnError="true"/>

    <TypeMockStop/>

This works just fine, and doesn't cause the one-hour delay, which makes things much easier. :-)
Monday, 18 December 2006 12:27:20 (Pacific Standard Time, UTC-08:00)
# Friday, 15 December 2006

I just finished Deep Survival: Who Lives, Who Dies, and Why by Laurence Gonzales, and would heartily recommend it to anyone who participates in any kind of outdoor or adventure activity, or anyone interested in the psychology of survival.  I was a little skeptical, since the book jacket made it sound like it was mostly case studies of survival situations.  While those certainly play a central role, the book is really more about the latest in brain science and psychology, and how that explains the way people behave when they get lost in the wilderness, or have to face other kinds of survival situations. 

Mr. Gonzales has definitely done his homework.  He's obviously spent a huge amount of time reading accident reports and accounts by survivors, and he picks out trends from both.  He then ties those trends back to the underlying brain science, which goes a long way toward explaining the (seemingly) irrational behavior often observed in people under stress. 

As someone who enjoys wilderness backpacking, as well as someone involved in disaster preparedness, I found this book completely fascinating. 

The book ends with a list of 12 tips for how to make it through a survival situation, which I found quite valuable.

Friday, 15 December 2006 15:50:15 (Pacific Standard Time, UTC-08:00)
# Monday, 11 December 2006

One of the (many) issues I encountered in creating a duplex http binding with IssuedToken security (details here) was that when using a duplex contract, the client has to open a "server" of its own to receive the callbacks from the "server" part of the contract.  Unfortunately, there are a couple of fairly critical settings that you'd want to change for that client-side "server" that aren't exposed via the configuration system, and have to be changed in code.

One that was really causing me problems was that for each duplex channel I opened up (and for one of our web apps, that might be 2-3 per ASP.NET page request) I had to provide a unique ClientBaseAddress, which quickly ran into problems with opening too many ports (a security hassle) and ensuring uniqueness of the URIs (a programming hassle).  Turns out that you can ask WCF to create unique addresses for you, with a method of uniqueness specific to your chosen transport.  However, you make that request by setting the ListenUri and ListenUriMode properties on the ServiceEndpoint (or the BindingContext).  For the client-side "server" part of a duplex contract, that turns out to be one of those settings that isn't exposed via configuration (or much of anything else, either). 
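For the normal, service-side case, where you do have access to the ServiceEndpoint, that request looks something like this (a minimal sketch I put together to illustrate; the contract, service class, and addresses are all hypothetical):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IPingContract
{
    [OperationContract]
    void Ping();
}

public class PingService : IPingContract
{
    public void Ping() { }
}

public static class ListenUriExample
{
    public static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(PingService));
        ServiceEndpoint endpoint = host.AddServiceEndpoint(
            typeof(IPingContract), new BasicHttpBinding(),
            "http://localhost:8000/ping");

        // Pin the listener to a known port, then let WCF append whatever
        // it needs to make the actual listen URI unique.
        endpoint.ListenUri = new Uri("http://localhost:8000/ping");
        endpoint.ListenUriMode = ListenUriMode.Unique;

        host.Open();
        Console.WriteLine("Service is open; press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}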

Luckily, I found an answer in the good works of Nicholas Allen (yet again).  He mentioned that you could set such a property on the BindingContext for the client side of a duplex contract.  Not only that, but Mike Taulty was good enough to post a sample of said solution. 

There's a great summary of the solution here, but to summarize even more briefly, you have to create your own BindingElement, and inject it into the binding stack so that you can grab the BindingContext as it hastens past.  Now I'm setting the ListenUri base address on a specific port, and asking WCF to do the rest by unique-ifying the URI.  Not only do I not have to keep track of them myself, but I can easily control which ports are being used on the client side, which makes both IT and Security wonks really happy.
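In case it's useful, here's the shape of that solution as I understand it (a sketch modeled on the Allen/Taulty approach rather than their code verbatim; the class name and the idea of passing the base address in are mine):

using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;

// A pass-through BindingElement whose only job is to set the listen URI
// properties on the BindingContext as it travels down the stack.
public class ListenUriBindingElement : BindingElement
{
    private readonly Uri listenUriBaseAddress;

    public ListenUriBindingElement(Uri listenUriBaseAddress)
    {
        this.listenUriBaseAddress = listenUriBaseAddress;
    }

    public override BindingElement Clone()
    {
        return new ListenUriBindingElement(this.listenUriBaseAddress);
    }

    public override T GetProperty<T>(BindingContext context)
    {
        return context.GetInnerProperty<T>();
    }

    public override IChannelFactory<TChannel> BuildChannelFactory<TChannel>(BindingContext context)
    {
        // Pin the callback listener to our chosen base address/port, and
        // ask WCF to unique-ify the rest of the URI for each channel.
        context.ListenUriBaseAddress = this.listenUriBaseAddress;
        context.ListenUriMode = ListenUriMode.Unique;
        return base.BuildChannelFactory<TChannel>(context);
    }
}

The element goes into the client binding's Elements collection above the CompositeDuplexBindingElement, so the context has already been modified by the time the duplex machinery sets up its callback listener.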

Monday, 11 December 2006 11:02:53 (Pacific Standard Time, UTC-08:00)
# Tuesday, 05 December 2006

On Scott's recommendation, Vikki and I got Brick from Netflix and watched it a few days back.  Then watched it again. 

If you like film noir, Brick is a no-brainer. 

The basic vision of the film (and it won a Sundance prize for originality of vision) is that of a classic Hammett-esque noir film, but set in a Southern California high school.  No, really.  And it's brilliant.  The actors all obviously got it.  There's nothing farcical or comic about the performances; they all believe in the vision.  Which isn't to say that there aren't funny moments (such as the hero meeting with the most dangerous drug dealer in town while his mom serves them juice and cookies), but they're funny as part of the plot, not because of the aesthetic of the film. 

Solid performances all around, and some great dialog.  The dialog is heavily spiked with both gumshoe and pseudo-modern-teen argot, so turning on the subtitles helps follow the story the first time through. 

There's a lot of depth here, especially for a directorial debut, and I think this is a film I'll go back and watch over and over again.

Tuesday, 05 December 2006 16:13:57 (Pacific Standard Time, UTC-08:00)

This took me WAY longer to figure out than either a) it should have or b) I had hoped it would.  I finally got it working last week, though.  I now have my client calling the server via a Dual Contract using IssuedToken security (requires a SAML token), said token being obtained from an STS written by me, which takes a custom token from the client for authentication. 

On the plus side, I know a whole bunch more about the depths of the ServiceModel now than I did before. :-)

It turns out (as far as I can tell) that the binding I needed for client and server cannot be described in a .config file, and must be created in code.  That code looks like this on the server:

public static Binding CreateServerBinding(Uri baseAddress, Uri issuerAddress)
{
    CustomBinding binding = new CustomBinding();

    // Require a SAML 1.1 token issued by our STS, protected with a
    // 256-bit symmetric key.
    IssuedSecurityTokenParameters issuedTokenParameters =
        new IssuedSecurityTokenParameters();
    issuedTokenParameters.IssuerAddress = new EndpointAddress(issuerAddress);
    issuedTokenParameters.IssuerBinding = CreateStsBinding();
    issuedTokenParameters.KeyType = SecurityKeyType.SymmetricKey;
    issuedTokenParameters.KeySize = 256;
    issuedTokenParameters.TokenType =
        "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1";

    // Wrap the issued-token element in a secure conversation, then stack
    // the duplex, encoding, and transport elements beneath it.
    SecurityBindingElement security =
        SecurityBindingElement.CreateIssuedTokenBindingElement(issuedTokenParameters);
    binding.Elements.Add(SecurityBindingElement.CreateSecureConversationBindingElement(security));
    binding.Elements.Add(new CompositeDuplexBindingElement());
    binding.Elements.Add(new OneWayBindingElement());
    binding.Elements.Add(new TextMessageEncodingBindingElement());
    binding.Elements.Add(new HttpTransportBindingElement());

    return binding;
}

and essentially the same on the client, with the addition of setting the ClientBaseAddress on the CompositeDuplexBindingElement.
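For reference, the client-side version would look something like this (the same method as the server one above, with the clientBaseAddress parameter being my addition):

public static Binding CreateClientBinding(Uri issuerAddress, Uri clientBaseAddress)
{
    CustomBinding binding = new CustomBinding();

    IssuedSecurityTokenParameters issuedTokenParameters =
        new IssuedSecurityTokenParameters();
    issuedTokenParameters.IssuerAddress = new EndpointAddress(issuerAddress);
    issuedTokenParameters.IssuerBinding = CreateStsBinding();
    issuedTokenParameters.KeyType = SecurityKeyType.SymmetricKey;
    issuedTokenParameters.KeySize = 256;
    issuedTokenParameters.TokenType =
        "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1";

    SecurityBindingElement security =
        SecurityBindingElement.CreateIssuedTokenBindingElement(issuedTokenParameters);
    binding.Elements.Add(SecurityBindingElement.CreateSecureConversationBindingElement(security));

    // The one client-side difference: tell WCF where to listen for callbacks.
    CompositeDuplexBindingElement duplexElement = new CompositeDuplexBindingElement();
    duplexElement.ClientBaseAddress = clientBaseAddress;
    binding.Elements.Add(duplexElement);

    binding.Elements.Add(new OneWayBindingElement());
    binding.Elements.Add(new TextMessageEncodingBindingElement());
    binding.Elements.Add(new HttpTransportBindingElement());

    return binding;
}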

This works just fine, getting the token from the STS under the covers and then calling the Dual Contract server interface with the token. 

I ended up with a custom binding to the STS itself, mostly because I needed to pass more credentials than just user name and password.  So the STS binding gets created thusly:

public static Binding CreateStsBinding()
{
    Binding binding = null;

    // Message security: the client's custom token rides along as a
    // signed and encrypted supporting token.
    SymmetricSecurityBindingElement messageSecurity =
        new SymmetricSecurityBindingElement();
    messageSecurity.EndpointSupportingTokenParameters.SignedEncrypted.Add(
        new VoyagerToken.VoyagerTokenParameters());

    // Protect the exchange with the STS's X.509 certificate, referenced
    // by thumbprint; the certificate itself is never sent in the message.
    X509SecurityTokenParameters x509ProtectionParameters =
        new X509SecurityTokenParameters(X509KeyIdentifierClauseType.Thumbprint);
    x509ProtectionParameters.InclusionMode = SecurityTokenInclusionMode.Never;
    messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;

    HttpTransportBindingElement httpBinding = new HttpTransportBindingElement();

    binding = new CustomBinding(messageSecurity, httpBinding);

    return binding;
}

Not as easy as I had hoped it would be, but it's working well now, so it's all good.  If you go the route of the custom token, it turns out there are all kinds of fun things you can do with token caching, etc.  It does require a fair amount of effort, though, since there are 10-12 classes that you have to provide.

Tuesday, 05 December 2006 15:47:33 (Pacific Standard Time, UTC-08:00)