# Monday, August 21, 2006

We just got back from a week's vacation in sunny Marin County CA (just across the Golden Gate from San Francisco, for those not up on Californian geography).  We were visiting family and checking out goings-on in "the City", which we haven't done in 4-5 years. 

I was quite surprised to discover that the California Academy of Sciences, which was one of my favorite destinations as a kid, is being rebuilt.  We showed up in Golden Gate Park, (finally) found parking, and were all set to go to the aquarium and visit the stuffed lions when we came around the bend to find a big hole in the ground, surrounded by cranes.  So we went to the recently renovated de Young art museum instead, and hit the temporary location of the Academy (near Moscone Center) the next day. 

We also squeezed in a visit to the new Asian Art Museum, much of which used to be the Brundage Collection at the de Young.  The new building is beautiful, and very well laid out.  It's designed to be viewed as a progression over time and distance, starting with India and South Asia, through SE Asia, and then East Asia (China, Korea, Japan).  The new de Young is also very well laid out.  Don't be put off by the exterior.  It'll grow on you as you get closer, and the inside is fantastic. 

Our tour ended with a day in Sonoma, where we checked out the historical sights, like Vallejo's house, the Sonoma Mission, and Jack London State Park, which has a very nice museum, and where you can see the ruins of London's "Wolf House", which burned down a month before he could move in. 

The weather turned out to be very pleasant, and in fact it was hotter here in Portland when we got home yesterday.  Go figure.  Hotter in Portland than in Redding?  Who'd have thunk it. :-)

Monday, August 21, 2006 1:18:16 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, August 10, 2006

I have progressed a bit.  At least I seem to have an install where ADAM and AzMan will coexist happily in the same ADAM instance, and I can retrieve a user from the ADAM store.  I can also add roles to AzMan programmatically, so that's all good.  However, I still can't add an ADAM principal to AzMan as a member of a role.
This is supposed to work...

```csharp
   15  string roleName = "RetailUser";
   16
   17  MembershipUser user = Membership.GetUser("TestUser@bank.com");
   18  Console.WriteLine(user.ProviderUserKey);
   19
   20  IAzAuthorizationStore2 azStore = new AzAuthorizationStoreClass();
   21  azStore.Initialize(0, "msldap://localhost:50000/CN=Test,CN=AzMan,O=AzManPartition", null);
   22  IAzApplication2 azApp = azStore.OpenApplication2("TestApp", null);
   23
   24  IAzTask task = azApp.CreateTask(roleName, null);
   25  task.IsRoleDefinition = -1;
   26  task.Submit(0, null);
   27  IAzRole role = azApp.CreateRole(roleName, null);
   28  role.AddTask(roleName, null);
   29  role.Submit(0, null);
   30
   31  IAzRole newRole = azApp.OpenRole(roleName, null);
   32
   33
   34  newRole.AddMember(user.ProviderUserKey.ToString(), null);
   35  newRole.Submit(0, null);
```

And should result in TestUser@bank.com being added to the role "RetailUser". 
Sadly, on that last line, I get

```
System.ArgumentException was unhandled
  Message="Value does not fall within the expected range."
  Source="Microsoft.Interop.Security.AzRoles"
  StackTrace:
       at Microsoft.Interop.Security.AzRoles.IAzRole.Submit(Int32 lFlags, Object varReserved)
       at ADAMAz.Program.Main(String[] args) in C:\Documents and Settings\PCauldwell\My Documents\Visual Studio 2005\Projects\ADAMAz\ADAMAz\Program.cs:line 35
       at System.AppDomain.nExecuteAssembly(Assembly assembly, String[] args)
       at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
       at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
       at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
       at System.Threading.ThreadHelper.ThreadStart()
```

All I can figure is that AzMan doesn't like the SID as generated above. 
I'm running this on XP SP2, with the 2003 Management Tools and ADAM SP1 installed.  I fear I may have to run this on Server 2003 R2 to get it to work.


Thursday, August 10, 2006 5:07:15 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I’m having a heck of a time trying to get ADAM and AzMan to work together.  The vision is that I’d like to use ADAM as both the store for AzMan, and the source of principals to use inside AzMan, rather than principals from AD.  Using ADAM as the store is pretty straightforward, but the second bit is turning out to be a lot harder.  In addition, I’m trying to use the ASP.NET Membership object to mediate between ADAM and AzMan, and seeing some weird stuff.  I was able to use Membership.GetUser("username") to pull the user from an ADAM store, but only until I installed AzMan using the same ADAM instance as its store.  After that, the call to GetUser started returning null.  Once I get that working, I think I understand how to add the principals to AzMan, but have yet to see it work.
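For context, pointing Membership at an ADAM instance looks roughly like this.  This is a sketch, not my working config: the provider and connection-string names are invented, the port and partition match my test setup from the snippet below, and with connectionProtection="None" you'd typically also need explicit connectionUsername/connectionPassword attributes, which I've omitted.

```xml
<configuration>
  <connectionStrings>
    <!-- Port 50000 and the partition DN are from the test ADAM instance -->
    <add name="AdamConnection"
         connectionString="LDAP://localhost:50000/O=AzManPartition" />
  </connectionStrings>
  <system.web>
    <membership defaultProvider="AdamMembershipProvider">
      <providers>
        <!-- "AdamMembershipProvider" is an illustrative name -->
        <add name="AdamMembershipProvider"
             type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
             connectionStringName="AdamConnection"
             connectionProtection="None"
             enableSearchMethods="true" />
      </providers>
    </membership>
  </system.web>
</configuration>
```

Note that even though I'm not running ASP.NET, the Membership API reads this same system.web config section from the app.config of a console app.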

Hmm.  (Or possibly “arghh!”.)

Work continues. 

Unfortunately, the documentation I’ve been able to turn up is sketchy at best, and it all assumes that you are using ASP.NET (I’m not) and really just want to make Membership work.  Sigh.

To further confuse things, the only way to get the AzMan management tools on XP is to install the 2003 Server management kit, but that doesn’t contain the PIA for AzMan.  That only gets installed on actual 2003 systems, so I’ll have to try and track one down.

Thursday, August 10, 2006 11:05:04 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, August 07, 2006
There’s a great (relatively new) site for hikers around the Portland area called (aptly enough) PortlandHikers.com.  There are forums for trip reports (many of which come with beautiful photos), gear reviews, and other topics related to hiking our part of the Great NW.  You can check out the pictures I posted of our hike to the Indian Heaven Wilderness last weekend, which turned out to be a great trip.  Nice weather, good company, and a very pretty lake to camp next to.
Monday, August 07, 2006 11:10:39 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, August 02, 2006

I’ve been doing some exploration of the Peer Channel in WCF over the last week or so.  It’s a pretty cool idea.  Basically, the peer channel provides a way to do multi-cast messages with WCF, where all parties involved get a call at (essentially) the same time.  Better still, it’s not just a simple broadcast, but a “mesh” with some pretty complex algorithms for maximizing network resources, etc. 

The hard part is in the bootstrapping.  When you want to join the “mesh”, you have to have at least one other machine to talk to so that you can get started.  Where does that one machine live?  Tricky.  The best case solution is to use what MS calls PNRP, or the Peer Name Resolution Protocol.  There’s a well-known address at microsoft.com that will be the bootstrapping server to get you going.  Alternatively, you can set up your own bootstrap servers, and change local machine configurations to go there instead.  All this depends on the Peer Networking system in XP SP2 and up, so some things have to be configured at the Win32 level to get everything working.  The drawback (and it’s a big one) to PNRP is that it depends on IPv6.  It took me quite a while to ferret out that bit of information, since it’s not called out in the WCF docs.  I finally found it in the Win32 docs for the Peer Networking system. 
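For what it's worth, which resolver the binding uses is a config-level choice.  Something like the following (a hedged sketch of the netPeerTcpBinding config; "MeshBinding" is a name I made up) tells the binding to expect a custom resolver instead of PNRP:

```xml
<system.serviceModel>
  <bindings>
    <netPeerTcpBinding>
      <binding name="MeshBinding">
        <!-- Pnrp is the default; Custom means "I'll supply my own PeerResolver" -->
        <resolver mode="Custom" />
      </binding>
    </netPeerTcpBinding>
  </bindings>
</system.serviceModel>
```

As I understand it, the custom resolver instance itself then gets attached in code, via the binding's Resolver.Custom settings.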

This poses a problem.  IPv6 is super cool and everything, but NOBODY uses it.  I’m sure there are a few hardy souls out there embracing it fully, but it’s just not there in the average corporate environment.  Apparently, our routers don’t route IPv6, so PNRP just doesn’t work. 

The way to solve this little problem with WCF is to write a Custom Peer Resolver.  You implement your own class, derived from PeerResolver, and it provides some way to register with a mesh, and get a list of the other machines in the mesh you want to talk to.  There’s a sample peer resolver that ships with the WCF samples, which works great.  Unfortunately, it stores all the lists of machines-per-mesh in memory, which suddenly makes it a single point of failure in an enterprise system, which makes me sad…
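As a sketch of what's involved, here's my reading of the PeerResolver base class from .NET 3.0's System.ServiceModel.  This mirrors the in-memory sample resolver; the dictionary is standing in for the database table that a DB-backed version would read and write instead.  Treat it as an outline, not a working implementation.

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ServiceModel;

// Sketch only: an in-memory resolver. A DB-backed version would swap the
// dictionary for table reads/writes so any instance can serve any request.
public class CustomPeerResolver : PeerResolver
{
    // meshId -> (registrationId -> node address)
    private readonly Dictionary<string, Dictionary<Guid, PeerNodeAddress>> meshes =
        new Dictionary<string, Dictionary<Guid, PeerNodeAddress>>();

    public override bool CanShareReferrals { get { return false; } }

    public override object Register(string meshId, PeerNodeAddress nodeAddress, TimeSpan timeout)
    {
        Guid registrationId = Guid.NewGuid();
        lock (meshes)
        {
            if (!meshes.ContainsKey(meshId))
                meshes[meshId] = new Dictionary<Guid, PeerNodeAddress>();
            meshes[meshId][registrationId] = nodeAddress;
        }
        return registrationId;
    }

    public override void Update(object registrationId, PeerNodeAddress updatedNodeAddress, TimeSpan timeout)
    {
        lock (meshes)
        {
            // Find whichever mesh holds this registration and replace the address.
            foreach (Dictionary<Guid, PeerNodeAddress> mesh in meshes.Values)
                if (mesh.ContainsKey((Guid)registrationId))
                    mesh[(Guid)registrationId] = updatedNodeAddress;
        }
    }

    public override void Unregister(object registrationId, TimeSpan timeout)
    {
        lock (meshes)
        {
            foreach (Dictionary<Guid, PeerNodeAddress> mesh in meshes.Values)
                mesh.Remove((Guid)registrationId);
        }
    }

    public override ReadOnlyCollection<PeerNodeAddress> Resolve(string meshId, int maxAddresses, TimeSpan timeout)
    {
        List<PeerNodeAddress> result = new List<PeerNodeAddress>();
        lock (meshes)
        {
            Dictionary<Guid, PeerNodeAddress> mesh;
            if (meshes.TryGetValue(meshId, out mesh))
                foreach (PeerNodeAddress address in mesh.Values)
                {
                    if (result.Count >= maxAddresses) break;
                    result.Add(address);
                }
        }
        return new ReadOnlyCollection<PeerNodeAddress>(result);
    }
}
```

The four overrides (Register, Update, Unregister, Resolve) are the whole contract, which is why swapping the storage behind them for a database is a plausible fix for the single-point-of-failure problem.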

That said, I’ve been working on a custom resolver that is DB backed instead of memory backed.  This should allow us to run it across a bunch of machines, and have it not be a bottleneck.  I’m guessing that once everyone has joined the mesh, there won’t be all that much traffic, so I don’t think performance should be a big deal. 

The next step will be WS-Discovery over PeerChannel.  I’ve seen a couple of vague rumors of this being “worked on” but I haven’t seen anything released anywhere.  If someone knows different I’d love to hear about it.

Indigo | Work
Wednesday, August 02, 2006 2:10:43 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 

Yes, it’s happened again.  Yet another technology/trend which appeared in Neal Stephenson’s seminal novel Snow Crash has come to (almost) fruition.  I think he called it “sintergel” or some such.  This new technology joins the burbclave and a host of other trends that Stephenson predicted back in the day. 

Liquid Body Armor By End Of 2007

The company Armor Holdings is developing a liquid-type of body armor to either replace or enhance the current tough fiber and polymer armor that's in use today. The liquid can be smeared on a person, or a person's clothing, and stiffens when hit by an object. [Gizmodo]

Wednesday, August 02, 2006 9:54:15 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, July 11, 2006

One of the things that has irked me about using SVN with VisualStudio.NET is trying to set up a new project.  You’ve got some new thing that you just cooked up, and now you need to get it into Subversion so it doesn’t get lost.  Unfortunately, that means you have to “import” it into SVN, being careful not to include any of the unversionable VS.NET junk files, then check it out again, probably some place else, since Tortoise doesn’t seem to dig checking out over the existing location.  Big pain.
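For the record, the manual dance looks something like this.  It's a sketch using a throwaway file:// repository so it's self-contained; the paths and project are illustrative, and it assumes the svn/svnadmin command-line tools are installed.

```shell
# Scratch locations (illustrative -- normally the repository already exists)
WORK=$(mktemp -d)
REPO="$WORK/repo"
SRC="$WORK/newproj"
WC="$WORK/wc"
svnadmin create "$REPO"

# The shiny new project, minus the VS.NET junk (bin/, obj/, *.suo, *.user)
mkdir "$SRC"
echo "class Program { }" > "$SRC/Program.cs"

# Step 1: import it into the repository...
svn import "$SRC" "file://$REPO/trunk/newproj" -m "Initial import"

# Step 2: ...then check it out again, someplace else, to get a working copy
svn checkout "file://$REPO/trunk/newproj" "$WC"
ls "$WC"
```

Ankh's "Add to source control" collapses both steps into one, in place, and skips the junk files for you.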

Along comes Ankh to the rescue.  I’ve been using it off and on for a while (version .6 built from the source) but now I’m hooked.  It adds the traditional “Add to source control” menu item in VS.NET, and it totally did the right thing.  Imported, checked out to the same location (in place) and skipped all the junk files.  Worked like a charm.  I’m definitely a believer now.

Tuesday, July 11, 2006 10:46:49 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 

I’m a big fan of watching TV shows after they come out on DVD.  You don’t have to deal with the commercials, and you can be assured of not missing anything.  Plus, I don’t have cable, so it’s about the only way I ever see TV.  Anyway, Vikki and I just finished season 1 of Veronica Mars.  What a fantastic show.  I can see why Joss Whedon calls it the best show that no one is watching.  Great dialogue, good acting (mostly), great story arc, and I totally didn’t see the ending coming. 

While each episode explores a subplot about the rigors of high school, etc. the overarching story line is about a murder mystery, and the season ends with the murderer revealed (it’s not who you think).  They pulled off some very interesting plot twists throughout.  I’m breathlessly anticipating season 2 next month.  There are still a number of open questions which I’m hoping they’ll pursue in the second season. 

If you like the Whedonverse (BtVS/Angel/Firefly) you’ll probably like Veronica Mars.  Best dialogue this side of Joss himself.

Tuesday, July 11, 2006 10:42:52 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, July 05, 2006

So at TechEd, Scott captured Jeff and me talking in the friendliest of fashions about the relative merits of Team System (which Jeff evangelizes) and the “OSS solution” which I helped craft at Corillian involving Subversion, CruiseControl.NET, NUnit, NAnt, et al. 

Since then, I did a bit for PADNUG about the wonders of source control, and it caused me to refine my positions a bit. 

I think that in buying a system like Team System / TFS (or Rational’s suite, etc.) you are really paying for not just process, but process that can be enforced.  We have developed a number of processes around our “OSS” solution, including integrating Subversion with Rational ClearQuest so that we can relate change sets in SVN with issues tracked in ClearQuest, and a similar integration with our Scrum project management tool, VersionOne.  However, those are policies which are based upon convention, and which we thus can’t enforce.  For example, by convention, we create task branches in SVN named for ClearQuest issues (e.g. cq110011 for a task branch to resolve ClearQuest issue #110011), and we use a similar convention to identify backlog items or tasks in VersionOne.  The rub is that the system depends upon developers doing the right thing.  And sometimes they don’t. 

With an integrated product suite like TFS, you not only get processes, but you get the means to enforce them.  In a previous job, we used ClearQuest and ClearCase together, and no developer could check in a change set without documenting it in ClearQuest.  Period.  It was not left up to the developer to do the right thing, because the tools made sure that they did.  Annoying?  Sure.  Effective?  Absolutely.  Everyone resented the processes until the first time we really needed the information about a change set, which we already had waiting for us in ClearQuest. 

Is that level of integration necessary?  We’ve decided (at least for now) that it’s not, and that we are willing to rely on devs doing the right thing.  You may decide that you do want that level of assurance that your corporate development processes are being followed.  All it takes is money. 

What that means (to me at least) is that the big win in choosing an integrated tool is the integration part.  Is the source control tool that comes with TFS a good one?  I haven’t used it personally, but I’m sure that it is.  Is it worth the price if all you’re looking for is a source control system?  Not in my opinion.  You can get equally capable SCC packages for far less (out of pocket) cost.  It’s worth spending the money if you are going to take advantage of the integration across the suite, since it allows you to not only set, but enforce policy. 

I’m sure that if you choose to purchase just the SCC part of TFS, or just Rational’s ClearQuest, you’ll end up with a great source control tool.  But you could get an equally great source control tool for a lot less money if that’s the only part you are interested in. 

The other thing to keep in mind is that the integrated suites tend to come with some administration burden.  Again, I can’t speak from experience about TFS, but in my prior experience with Rational, it took a full-time administrator to keep the tools running properly and to make sure they were configured correctly.  When the company faced some layoffs and we lost our Rational administrator, we switched overnight to using CVS instead, because we couldn’t afford to eat the overhead of maintaining the ClearQuest/ClearCase tools, and none of us had been through the training in any case.  I’ve heard reports that TFS is much easier to administer, but make sure you plan for the fact that it’s still non-zero.

So, in summary, if you already have a process that works for you, you probably don’t need to invest in a big integrated tool suite.  If you don’t have a process in place (or at least enough process in place), or you find that you are having a hard time getting developers to comply, then it may well be worth the money and the administrative overhead.

Wednesday, July 05, 2006 3:54:38 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, June 27, 2006

I took Steve’s comment to heart, and got rid of the two places I had been “forced” to use the CodeSnippetExpression.  It took a few minutes thought, and a minor refactoring, but I’ll sleep that much better at night. 

Vive le DOM!

Tuesday, June 27, 2006 1:10:37 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  |