# Thursday, 10 August 2006

I’m having a heck of a time trying to get ADAM and AzMan to work together.  The vision is that I’d like to use ADAM as both the store for AzMan, and the source of principals to use inside AzMan, rather than principals from AD.  Using ADAM as the store is pretty straightforward, but the second bit is turning out to be a lot harder.  In addition, I’m trying to use the ASP.NET Membership object to mediate between ADAM and AzMan, and seeing some weird stuff.  I was able to use Membership.GetUser(“username”) to pull the user from an ADAM store, but only until I installed AzMan using the same ADAM instance as its store.  After that, the call to GetUser started returning null.  Once I get that working, I think I understand how to add the principals to AzMan, but have yet to see it work.
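For the record, the failing call is nothing fancy. This is just a sketch of the symptom; the user name is a placeholder, and it assumes a membership provider (e.g. ActiveDirectoryMembershipProvider) wired up to the ADAM instance in config:

```csharp
using System;
using System.Web.Security;

class MembershipCheck
{
    static void Main()
    {
        // Worked fine against the bare ADAM instance...
        MembershipUser user = Membership.GetUser("someUser");
        if (user == null)
        {
            // ...but once AzMan was installed into the same ADAM
            // instance, the lookup started returning null rather
            // than the user (and rather than throwing).
            Console.WriteLine("GetUser returned null");
        }
    }
}
```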

Hmm.  (Or possibly “arghh!”.)

Work continues. 

Unfortunately, the documentation I’ve been able to turn up is sketchy at best, and it all assumes that you are using ASP.NET (I’m not) and really just want to make Membership work.  Sigh.

To further confuse things, the only way to get the AzMan management tools on XP is to install the 2003 Server management kit, but that doesn’t contain the PIA for AzMan.  That only gets installed on actual 2003 systems, so I’ll have to try and track one down.

Thursday, 10 August 2006 11:05:04 (Pacific Daylight Time, UTC-07:00)
# Monday, 07 August 2006
There’s a great (relatively new) site for hikers around the Portland area called (aptly enough) PortlandHikers.com.  There are forums for trip reports (many of which come with beautiful photos), gear reviews, and other topics related to hiking our part of the Great NW.  You can check out the pictures I posted of our hike to the Indian Heaven wilderness last weekend, which turned out to be a great trip.  Nice weather, good company, and a very pretty lake to camp next to.
Monday, 07 August 2006 23:10:39 (Pacific Daylight Time, UTC-07:00)
# Wednesday, 02 August 2006

I’ve been doing some exploration of the Peer Channel in WCF over the last week or so.  It’s a pretty cool idea.  Basically, the peer channel provides a way to do multi-cast messages with WCF, where all parties involved get a call at (essentially) the same time.  Better still, it’s not just a simple broadcast, but a “mesh” with some pretty complex algorithms for maximizing network resources, etc. 

The hard part is in the bootstrapping.  When you want to join the “mesh”, you have to have at least one other machine to talk to so that you can get started.  Where does that one machine live?  Tricky.  The best-case solution is to use what MS calls PNRP, or the Peer Name Resolution Protocol.  There’s a well-known address at microsoft.com that will be the bootstrapping server to get you going.  Alternatively, you can set up your own bootstrap servers, and change local machine configurations to go there instead.  All this depends on the Peer Networking system in XP SP2 and up, so some things have to be configured at the Win32 level to get everything working.  The drawback (and it’s a big one) to PNRP is that it depends on IPv6.  It took me quite a while to ferret out that bit of information, since it’s not called out in the WCF docs.  I finally found it in the Win32 docs for the Peer Networking system. 

This poses a problem.  IPv6 is super cool and everything, but NOBODY uses it.  I’m sure there are a few hardy souls out there embracing it fully, but it’s just not there in the average corporate environment.  Apparently, our routers don’t route IPv6, so PNRP just doesn’t work. 

The way to solve this little problem with WCF is to write a custom peer resolver.  You implement your own class, derived from PeerResolver, which provides a way to register with a mesh and to get the list of other machines in the mesh you want to talk to.  There’s a sample peer resolver that ships with the WCF samples, which works great.  Unfortunately, it stores all the lists of machines-per-mesh in memory, which suddenly makes it a single point of failure in an enterprise system, which makes me sad…

That said, I’ve been working on a custom resolver that is DB backed instead of memory backed.  This should allow us to run it across a bunch of machines, and have it not be a bottleneck.  I’m guessing that once everyone has joined the mesh, there won’t be all that much traffic, so I don’t think performance should be a big deal. 
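For the curious, the shape of the thing looks roughly like this. This is only a sketch (the signatures are from memory, the SQL is reduced to comments, and all the ADO.NET plumbing and error handling is elided):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ServiceModel;

// DB-backed replacement for the in-memory resolver in the SDK sample.
public class DbPeerResolver : PeerResolver
{
    public override bool CanShareReferrals
    {
        get { return false; }
    }

    public override object Register(string meshId,
        PeerNodeAddress nodeAddress, TimeSpan timeout)
    {
        Guid registrationId = Guid.NewGuid();
        // INSERT INTO MeshRegistrations (Id, MeshId, Address) VALUES (...)
        return registrationId;
    }

    public override ReadOnlyCollection<PeerNodeAddress> Resolve(
        string meshId, int maxAddresses, TimeSpan timeout)
    {
        // SELECT TOP (maxAddresses) Address FROM MeshRegistrations
        //  WHERE MeshId = @meshId
        List<PeerNodeAddress> addresses = new List<PeerNodeAddress>();
        return new ReadOnlyCollection<PeerNodeAddress>(addresses);
    }

    public override void Unregister(object registrationId, TimeSpan timeout)
    {
        // DELETE FROM MeshRegistrations WHERE Id = @registrationId
    }

    public override void Update(object registrationId,
        PeerNodeAddress updatedNodeAddress, TimeSpan timeout)
    {
        // UPDATE MeshRegistrations SET Address = @address WHERE Id = @id
    }
}
```

Since any node can hit any instance of the resolver and they all share the same tables, no single box is special anymore.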

The next step will be WS-Discovery over PeerChannel.  I’ve seen a couple of vague rumors of this being “worked on” but I haven’t seen anything released anywhere.  If someone knows different I’d love to hear about it.

Indigo | Work
Wednesday, 02 August 2006 14:10:43 (Pacific Daylight Time, UTC-07:00)

Yes, it’s happened again.  Yet another technology/trend which appeared in Neal Stephenson’s seminal novel Snow Crash has come to (almost) fruition.  I think he called it “sintergel” or some such.  This new technology joins the burbclave and a host of other trends that Stephenson predicted back in the day. 

Liquid Body Armor By End Of 2007

The company Armor Holdings is developing a liquid-type of body armor to either replace or enhance the current tough fiber and polymer armor that's in use today. The liquid can be smeared on a person, or a person's clothing, and stiffens when hit by an object. [Gizmodo]

Wednesday, 02 August 2006 09:54:15 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 11 July 2006

One of the things that has irked me about using SVN with VisualStudio.NET is trying to set up a new project.  You’ve got some new thing that you just cooked up, and now you need to get it into Subversion so it doesn’t get lost.  Unfortunately, that means you have to “import” it into SVN, being careful not to include any of the unversionable VS.NET junk files, then check it out again, probably some place else, since Tortoise doesn’t seem to dig checking out over the existing location.  Big pain.
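For reference, the manual dance looks something like this from the command line (the repository URL and paths here are placeholders):

```shell
# First, import a cleaned-up copy of the project (no bin/, obj/,
# *.suo, *.user, or other unversionable VS.NET junk).
svn import MyProject http://svnserver/repos/trunk/MyProject -m "Initial import"

# Then check it out again -- typically to a different location, since
# checking out over the top of the original doesn't fly with Tortoise.
svn checkout http://svnserver/repos/trunk/MyProject C:\work\MyProject
```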

Along comes Ankh to the rescue.  I’ve been using it off and on for a while (version .6 built from the source) but now I’m hooked.  It adds the traditional “Add to source control” menu item in VS.NET, and it totally did the right thing.  Imported, checked out to the same location (in place) and skipped all the junk files.  Worked like a charm.  I’m definitely a believer now.

Tuesday, 11 July 2006 10:46:49 (Pacific Daylight Time, UTC-07:00)

I’m a big fan of watching TV shows after they come out on DVD.  You don’t have to deal with the commercials, and you can be assured of not missing anything.  Plus, I don’t have cable, so it’s about the only way I ever see TV.  Anyway, Vikki and I just finished season 1 of Veronica Mars.  What a fantastic show.  I can see why Joss Whedon calls it the best show that no one is watching.  Great dialogue, good acting (mostly), great story arc, and I totally didn’t see the ending coming. 

While each episode explores a subplot about the rigors of high school, etc., the overarching story line is about a murder mystery, and the season ends with the murderer revealed (it’s not who you think).  They pulled off some very interesting plot twists throughout.  I’m breathlessly anticipating season 2 next month.  There are still a number of open questions which I’m hoping they’ll pursue in the second season. 

If you like the Whedonverse (BtVS/Angel/Firefly) you’ll probably like Veronica Mars.  Best dialogue this side of Joss himself.

Tuesday, 11 July 2006 10:42:52 (Pacific Daylight Time, UTC-07:00)
# Wednesday, 05 July 2006

So at TechEd, Scott captured Jeff and me talking in the friendliest of fashions about the relative merits of Team System (which Jeff evangelizes) and the “OSS solution” which I helped craft at Corillian involving Subversion, CruiseControl.NET, NUnit, NAnt, et al. 

Since then, I did a bit for PADNUG about the wonders of source control, and it caused me to refine my positions a bit. 

I think that in buying a system like Team System / TFS (or Rational’s suite, etc.) you are really paying for not just process, but process that can be enforced.  We have developed a number of processes around our “OSS” solution, including integrating Subversion with Rational ClearQuest so that we can relate change sets in SVN with issues tracked in ClearQuest, and a similar integration with our Scrum project management tool, VersionOne.  However, those are policies which are based upon convention, and which we thus can’t enforce.  For example, by convention, we create task branches in SVN named for ClearQuest issues (e.g. cq110011 for a task branch to resolve ClearQuest issue #110011), and we use a similar convention to identify backlog items or tasks in VersionOne.  The rub is that the system depends upon developers doing the right thing.  And sometimes they don’t. 
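Concretely, cutting a task branch under that convention is just an svn copy (the server name and repository layout here are placeholders):

```shell
# Task branch for ClearQuest issue #110011, per the naming convention.
svn copy http://svnserver/repos/trunk \
         http://svnserver/repos/branches/cq110011 \
         -m "cq110011: create task branch"
```

Nothing stops a developer from naming the branch something else, though, which is exactly the enforcement gap.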

With an integrated product suite like TFS, you not only get processes, but you get the means to enforce them.  In a previous job, we used ClearQuest and ClearCase together, and no developer could check in a change set without documenting it in ClearQuest.  Period.  It was not left up to the developer to do the right thing, because the tools made sure that they did.  Annoying?  Sure.  Effective?  Absolutely.  Everyone resented the processes until the first time we really needed the information about a change set, which we already had waiting for us in ClearQuest. 

Is that level of integration necessary?  We’ve decided (at least for now) that it’s not, and that we are willing to rely on devs doing the right thing.  You may decide that you do want that level of assurance that your corporate development processes are being followed.  All it takes is money. 

What that means (to me at least) is that the big win in choosing an integrated tool is the integration part.  Is the source control tool that comes with TFS a good one?  I haven’t used it personally, but I’m sure that it is.  Is it worth the price if all you’re looking for is a source control system?  Not in my opinion.  You can get equally capable SCC packages for far less (out of pocket) cost.  It’s worth spending the money if you are going to take advantage of the integration across the suite, since it allows you to not only set, but enforce policy. 

I’m sure that if you choose to purchase just the SCC part of TFS, or just Rational’s ClearQuest, you’ll end up with a great source control tool.  But you could get an equally great source control tool for a lot less money if that’s the only part you are interested in. 

The other thing to keep in mind is that the integrated suites tend to come with some administration burden.  Again, I can’t speak from experience about TFS, but in my prior experience with Rational, it took a full-time administrator to keep the tools running properly and to make sure they were configured correctly.  When the company faced some layoffs and we lost our Rational administrator, we switched overnight to using CVS instead, because we couldn’t afford to eat the overhead of maintaining the ClearQuest/ClearCase tools, and none of us had been through the training in any case.  I’ve heard reports that TFS is much easier to administer, but make sure you plan for the fact that it’s still non-zero.

So, in summary, if you already have a process that works for you, you probably don’t need to invest in a big integrated tool suite.  If you don’t have a process in place (or at least enough process in place), or you find that you are having a hard time getting developers to comply, then it may well be worth the money and the administrative overhead.

Wednesday, 05 July 2006 15:54:38 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 27 June 2006

I took Steve’s comment to heart, and got rid of the two places I had been “forced” to use the CodeSnippetExpression.  It took a few minutes thought, and a minor refactoring, but I’ll sleep that much better at night. 

Vive le DOM!

Tuesday, 27 June 2006 13:10:37 (Pacific Daylight Time, UTC-07:00)
# Monday, 26 June 2006

A few days back, Jeff commented that he wasn’t convinced about the value of putting an object model on top of what should be simple string generation.  His example was using XmlTextWriter instead of just building an XML snippet using string.Format. 

I got similar feedback internally last week when I walked through some of my recent CodeDOM code.  I was asked why I would write

```csharp
CodeExpression append = new CodeMethodInvokeExpression(
    sb, "Append", new CodePrimitiveExpression(delim1));
```

instead of

```csharp
CodeExpression append = new CodeSnippetExpression("sb.Append(delim1)");
```

It’s a perfectly reasonable question.  I’m using the CodeDOM’s object model, but the reality is that since we’re an all-C# shop, I’m never going to output my CodeDOM-generated code as VB.NET or J#.  So I could just as easily use a StringBuilder and a bunch of string.Format calls to write C# code, then compile it using the CodeDOM’s C# compiler.  It certainly would be simpler. 
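To make the comparison concrete, here’s a minimal sketch of the DOM approach end to end.  The names mirror the snippet above, with a literal "," standing in for delim1:

```csharp
using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class CodeDomDemo
{
    static void Main()
    {
        // Build the expression graph for: sb.Append(",")
        CodeExpression sb = new CodeVariableReferenceExpression("sb");
        CodeExpression append = new CodeMethodInvokeExpression(
            sb, "Append", new CodePrimitiveExpression(","));

        // Render the graph as C# source text.
        CSharpCodeProvider provider = new CSharpCodeProvider();
        StringWriter writer = new StringWriter();
        provider.GenerateCodeFromExpression(
            append, writer, new CodeGeneratorOptions());

        // Emits something like: sb.Append(",")
        Console.WriteLine(writer.ToString());
    }
}
```

Swap CSharpCodeProvider for the VB provider and the same graph comes out as VB.NET, which is precisely the flexibility we’ll never use.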

The same is true (as Jeff points out) for XML.  It’s much easier to write XML using string.Format, or a StringBuilder. 

If nothing else, I personally find that using the object-based interface makes me think harder about the structure I’m creating.  It’s not really much less error-prone than writing the code (or XML) by hand; it just provides a different way to screw up.  What it does do is force you to think at a higher level of abstraction, about the structure of the thing you are creating rather than the implementation.  You may never need to output binary XML instead of text, but using XmlTextWriter brings you face to face with the nuances of the structure of the document you’re creating.  Writing a CDATA section isn’t the same as writing an element node.  And it shouldn’t be.  Using the object interface makes those distinctions more obvious to the coder. 
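A quick illustration of that distinction (the element names and content here are made up):

```csharp
using System;
using System.Xml;

class XmlDemo
{
    static void Main()
    {
        XmlTextWriter writer = new XmlTextWriter(Console.Out);
        writer.WriteStartElement("data");
        writer.WriteCData("some <raw> text");     // a CDATA section...
        writer.WriteElementString("item", "x");   // ...is not an element node
        writer.WriteEndElement();
        writer.Flush();
    }
}
```

With string.Format, both of those are just strings, and nothing stops you from mangling one into the other.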

However, it’s definitely a tradeoff.  You have to put up with a lot more complexity, and more limitations.  There are a bunch of constructs that would be much easier to write in straight C# than to express in the CodeDOM.  It’s easy to write a lock{} construct in C#, but much more complex to create the necessary try/finally and Monitor calls using the CodeDOM. 
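For instance, here’s a sketch of what lock(syncRoot) { … } costs you in CodeDOM terms: Monitor.Enter, a try/finally, and Monitor.Exit, all built by hand (the helper and its names are purely illustrative):

```csharp
using System.CodeDom;

class LockSketch
{
    // Returns the statements equivalent to: lock (target) { ... }
    static CodeStatement[] BuildLock(CodeExpression target)
    {
        CodeTypeReferenceExpression monitor =
            new CodeTypeReferenceExpression("System.Threading.Monitor");

        CodeStatement enter = new CodeExpressionStatement(
            new CodeMethodInvokeExpression(monitor, "Enter", target));

        CodeTryCatchFinallyStatement tryFinally =
            new CodeTryCatchFinallyStatement();
        // The guarded body would be added to tryFinally.TryStatements here.
        tryFinally.FinallyStatements.Add(
            new CodeExpressionStatement(
                new CodeMethodInvokeExpression(monitor, "Exit", target)));

        return new CodeStatement[] { enter, tryFinally };
    }
}
```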

I was, in fact, forced to resort to the CodeSnippetExpression in one place, where nothing but a ternary operator would do.  I still feel guilty.  :-)  Maybe it just comes down to personal preference, but I’d rather deal with the structure than the syntax, even if it means I have to write more complicated code.

Monday, 26 June 2006 14:05:19 (Pacific Daylight Time, UTC-07:00)

About 4 months ago I moved into a brand new (town)house.  It’s been great, particularly since our last house was generating more maintenance opportunities than we could handle.  The new place is 3 stories, and there’s a deck off the back of the second floor over the driveway.  Staining/finishing said deck is left as an exercise for the homeowner, and yesterday I finally got around to it.  At first I didn’t want to tackle it due to the ever-present rain, and lately it’s just been a matter of finding the time.  And I really hate ladders. 

Anyway, I had the time, the materials, and no rain.  Unfortunately, it was around 100° yesterday.  It’s a small deck, but nonetheless 4 hours of huffing paint fumes on my hands and knees left me a bit knackered.  And today I’m finding out how unbendy I’ve become (i.e. crippled today). 

This whole getting older thing really blows. 

Monday, 26 June 2006 13:14:20 (Pacific Daylight Time, UTC-07:00)