# Thursday, 11 June 2009

A month or so ago I posted on a solution for simulating “default button” semantics in a Silverlight app, meaning that if you are entering text in a text box and you hit the enter key, the “default button” for the “page” should be pressed.  Very natural for form entry, etc. 

An issue came up (discovered by John Papa) with the solution in a Prism app, because my solution depends on being able to find the “default” button in the visual tree using the FindName method.  That means you have to be high enough up the visual tree to find the button, since FindName only works “down” the tree, and in a Prism app it’s not necessarily clear where “high enough” might be.  Plus, the solution requires unique names, and since Prism modules may have nothing to do with one another, they may end up with duplicate names.

Here’s a revision to the solution that doesn’t require unique names, and doesn’t require any static references that might interfere with proper garbage collection…

First, a new object called DefaultButtonHub that keeps track of the relationship between text boxes and buttons.  It also exposes an Attached Property that takes a DefaultButtonHub reference so we can hook up text boxes and buttons to the “hub” in XAML.

public class DefaultButtonHub
{
    ButtonAutomationPeer peer = null;

    private void Attach(DependencyObject source)
    {
        if (source is Button)
        {
            peer = new ButtonAutomationPeer(source as Button);
        }
        else if (source is TextBox)
        {
            TextBox tb = source as TextBox;
            tb.KeyUp += OnKeyUp;
        }
        else if (source is PasswordBox)
        {
            PasswordBox pb = source as PasswordBox;
            pb.KeyUp += OnKeyUp;
        }
    }

    private void OnKeyUp(object sender, KeyEventArgs arg)
    {
        if (arg.Key == Key.Enter)
        {
            if (peer != null)
                ((IInvokeProvider)peer).Invoke();
        }
    }

    public static DefaultButtonHub GetDefaultHub(DependencyObject obj)
    {
        return (DefaultButtonHub)obj.GetValue(DefaultHubProperty);
    }

    public static void SetDefaultHub(DependencyObject obj, DefaultButtonHub value)
    {
        obj.SetValue(DefaultHubProperty, value);
    }

    // Using a DependencyProperty as the backing store for DefaultHub.  This enables animation, styling, binding, etc...
    public static readonly DependencyProperty DefaultHubProperty =
        DependencyProperty.RegisterAttached("DefaultHub", typeof(DefaultButtonHub), typeof(DefaultButtonHub), new PropertyMetadata(OnHubAttach));

    private static void OnHubAttach(DependencyObject source, DependencyPropertyChangedEventArgs prop)
    {
        DefaultButtonHub hub = prop.NewValue as DefaultButtonHub;
        if (hub != null)
            hub.Attach(source);
    }
}

Basically we’re expecting that both the text boxes and the button will register themselves with the “hub”.  If it’s a button that’s being registered, we wrap it in a ButtonAutomationPeer so we can “press” it later.  If it’s a text box, we hook up a KeyUp handler that will “press” the button if it’s there.  The requirement in the XAML is only marginally heavier than in my previous solution…we have to add a resource of type DefaultButtonHub, and point the button and text boxes at it using the {StaticResource} markup extension.

<UserControl x:Class="DefaultButton.Page"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:my="clr-namespace:DefaultButton"
    Width="400" Height="300">
    <UserControl.Resources>
        <my:DefaultButtonHub x:Key="defaultHub"/>
    </UserControl.Resources>
    <Grid x:Name="LayoutRoot" Background="White">
        <TextBox x:Name="theText" Grid.Row="0"
                 my:DefaultButtonHub.DefaultHub="{StaticResource defaultHub}"/>
        <Button x:Name="theButton" Grid.Row="1" Content="Default"
                Click="theButton_Click" my:DefaultButtonHub.DefaultHub="{StaticResource defaultHub}"/>
    </Grid>
</UserControl>

Note that the new DefaultHub attached property is applied to both the text box and the button, each pointing at the single resource.  This way everything gets wired up properly, there isn’t any problem with name resolution (aside from the usual resource name scoping), and everything will get cleaned up if the form needs to be GC’d.

Thursday, 11 June 2009 12:03:22 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 02 June 2009

Most of the demos/samples I’ve looked at so far for RIA Services have started with a LINQ to SQL or ADO.NET Entity model and generated Domain Service classes from those.  I decided to start from something super simple (a POCO, if you will) and work up from there.  I started with a canonical  “Person” class

public partial class Person : INotifyPropertyChanged
{
    #region INotifyPropertyChanged Members

    public event PropertyChangedEventHandler PropertyChanged;

    protected virtual void Changed(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }

    #endregion

    private int _personId;
    public int PersonId
    {
        get { return _personId; }
        set { if (_personId != value) { _personId = value; Changed("PersonId"); } }
    }

    private string _firstName;
    public string FirstName
    {
        get { return _firstName; }
        set { if (_firstName != value) { _firstName = value; Changed("FirstName"); } }
    }

    private string _lastName;
    public string LastName
    {
        get { return _lastName; }
        set { if (_lastName != value) { _lastName = value; Changed("LastName"); } }
    }

    private System.DateTime _birthDate;
    public System.DateTime BirthDate
    {
        get { return _birthDate; }
        set { if (_birthDate != value) { _birthDate = value; Changed("BirthDate"); } }
    }
}

Nothing to see here.  It’s just a simple POCO that supports INotifyPropertyChanged for databinding.  Note that it’s a partial class…  The simplest path would be to add RIA Services attributes directly to these properties for things like data validation, but I wanted to try out all the features, so I split out the metadata into another file

public partial class Person
{
    internal sealed class PersonMetadata
    {
        public int PersonId;

        public string FirstName;

        public string LastName;

        public DateTime BirthDate;
    }
}

This is kind of an interesting trick, and allows me to separate out all the RIA specific metadata from the original class definition.  This makes total sense when you look at the way they handle LINQ to SQL, for example.  In that case you already have Entity classes defined by the LINQ to SQL wizard, so this extra metadata class allows you to associate the right attributes without touching the (generated) LINQ to SQL classes.  Clever.  Notice that the metadata class uses fields, not properties, and just matches the names for the sake of simplicity.

In this case, the additional metadata defines data validation rules that get enforced on both the server and client side.  There are other attributes for enforcing string length and ranges. 
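For illustration, here is what the metadata class might look like with validation attributes applied; the specific attributes and the length of 50 below are my assumptions, not the exact choices from the original project:

```csharp
using System;
using System.ComponentModel.DataAnnotations;

public partial class Person
{
    // Hypothetical sketch: metadata fields carrying DataAnnotations
    // attributes that RIA Services enforces on both tiers.
    internal sealed class PersonMetadata
    {
        [Key]
        public int PersonId;

        [Required]
        [StringLength(50)]
        public string FirstName;

        [Required]
        [StringLength(50)]
        public string LastName;

        [Required]
        public DateTime BirthDate;
    }
}
```

Because the fields just mirror the entity's property names, the attributes end up associated with the matching properties without touching the original class.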

This all works because RIA Services generates “proxy” entity classes on the client side that are Silverlight compilable and also DataContracts (for serializing, which is cool…).  However, what happens if I have a calculated property that’s not just storage?  There’s a solution for that too, and it involves another piece of the partial class

public partial class Person
{
    [Shared]
    public int Age
    {
        get
        {
            return Convert.ToInt32(Math.Floor((DateTime.Now - _birthDate).Days / 365.25));
        }
    }
}


This bit goes in a file called Person.shared.cs, and it will be copied to the client project and compiled there as well as on the server.  The [Shared] attribute marks the bits that need to be thus propagated.  Again, clever.  Of course, any such shared code has to compile in Silverlight.
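The shared Age computation is plain C#, so it is easy to sanity-check outside of Silverlight.  Here is the same arithmetic pulled into a standalone helper (the helper name and the explicit "now" parameter are mine, for illustration):

```csharp
using System;

static class AgeHelper
{
    // Same arithmetic as the shared Age property above, parameterized on
    // "now" so it can be exercised outside the entity.
    public static int Age(DateTime birthDate, DateTime now)
    {
        return Convert.ToInt32(Math.Floor((now - birthDate).Days / 365.25));
    }
}
```

Dividing the day count by 365.25 absorbs leap days, and Math.Floor keeps the age from ticking over early just before a birthday.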

The other piece of code I want to share (using the same method) is a custom validator.  In addition to the [Required] or [RegularExpression] attributes used above, you can register a custom validation routine that can examine the state of the entity as a whole.  The validation routine looks like this

public class PersonValidator
{
    public static bool IsPersonValid(Person p, ValidationContext context, out ValidationResult result)
    {
        bool valid = true;

        result = null;

        if (p.Age > 130)
            valid = false;

        if (!valid)
            result = new ValidationResult("Birthdate is invalid, people can't be that old", new[] { "BirthDate" });

        return valid;
    }
}

That’s in a file called PersonValidator.shared.cs, so that it will be available client and server-side.  It’s associated with the Person entity with an additional attribute

[CustomValidation(typeof(PersonValidator), "IsPersonValid")]
public partial class Person
{
}

With the Person entity all ready, I can expose it to the client by creating a new DomainService class with methods for Get, Insert, Update, Delete, etc.

[EnableClientAccess()]
public class PeopleDomainService : DomainService
{
    public IQueryable<Person> GetPersons()
    {
        return PeopleData.Persons.AsQueryable<Person>();
    }

    public IQueryable<Person> GetPerson(int personId)
    {
        return (from p in PeopleData.Persons
                where p.PersonId == personId
                select p).AsQueryable<Person>();
    }

    public void UpdatePerson(Person person)
    {
        Person oldP = (from p in PeopleData.Persons
                       where p.PersonId == person.PersonId
                       select p).FirstOrDefault();

        if (oldP != null)
        {
            // copy the new values onto the stored instance
            oldP.FirstName = person.FirstName;
            oldP.LastName = person.LastName;
            oldP.BirthDate = person.BirthDate;
        }
    }

    public void InsertPerson(Person person)
    {
        if (person.PersonId == 0)
        {
            int max = PeopleData.Persons.Max(p => p.PersonId);
            person.PersonId = max + 1;
        }
        PeopleData.Persons.Add(person);
    }

    public void DeletePerson(Person person)
    {
        Person oldP = (from p in PeopleData.Persons
                       where p.PersonId == person.PersonId
                       select p).FirstOrDefault();

        if (oldP != null)
            PeopleData.Persons.Remove(oldP);
    }
}
PeopleData.Persons in this case is a List<Person> that’s populated with some sample data.  The [EnableClientAccess] attribute causes the build-time bits to generate a client side proxy for calling the service without the client project needing a service reference.  It really makes the Silverlight and the Web projects feel like parts of the same app rather than disconnected pieces. 

The corresponding class that is generated on the client side is a DomainDataContext, which feels much like a LINQ to SQL DataContext, only it’s lazy-loaded like the Astoria ones.  The GetPersons method on the server results in LoadPersons on the client, etc.  If I hadn’t implemented the Insert/Update/Delete methods on the server side, the DomainDataContext would simply behave like a read-only data source.  This model works really well with the DataForm class.  If I set the ItemsSource of the DataForm to the Persons “table” in the client-side data context, it will properly enable/disable the add/delete buttons depending on the capabilities of the data context.  Neat. 

Coming in future posts… hooking up the PersonDataContext to the Silverlight 3 UI, and in ASP.NET

Tuesday, 02 June 2009 15:53:18 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 21 May 2009

In my book, I talked a bit about programming by contract and how that makes everyone’s lives easier.  Say I have a method that divides one integer by another

public double Divide(int dividend, int divisor)
{
    return dividend / divisor;
}

I’d like to be able to let callers know that they can’t pass a 0 for the divisor, because that will result in a DivideByZeroException, and nobody wants that.  In the past I had a couple of choices on how to express that, mostly involving writing different kinds of code, because C# doesn’t have a native expression of design by contract like Eiffel does.  One way is to use Debug.Assert

public double Divide(int dividend, int divisor)
{
    Debug.Assert(divisor != 0);

    return dividend / divisor;
}

That way any caller that passes a 0 will get an assertion dialog at runtime, which brings it pretty dramatically to everyone’s attention.  The assumption is that using the Debug.Assert will flush out all cases where people are incorrectly calling my method during development, so it’s OK that the assertion will get compiled out of my Release build.  However, that doesn’t make it impossible for a caller to pass a 0 at runtime and cause the exception.  Another option is explicitly checking parameters and throwing a different exception.

public double Divide(int dividend, int divisor)
{
    if (divisor == 0)
        throw new ArgumentException("divisor cannot be zero", "divisor");

    return dividend / divisor;
}

Patrick argues that this has now moved from defining contract to defining behavior, and I can agree with that, although I’d probably argue that it defines both contract and behavior since I’ve extended the functionality of the Debug.Assert to the release build, while also protecting my internal state from bad data.  But that’s really a separate discussion… :)

Now thanks to the Microsoft Code Contracts project, I have a third option.  The Code Contracts project is the evolution of the work done on Spec#, but in a language-neutral way.  The Code Contracts tools are currently available from DevLabs for VS 2008, as well as shipping with VS 2010 Beta 1.  Just at the moment, there are more features in the DevLabs version than what made it into the 2010 beta.  With Code Contracts, I can rewrite my Divide method like this

public double Divide(int dividend, int divisor)
{
    Contract.Requires(divisor != 0);

    return dividend / divisor;
}

I like the syntax, as it calls out quite explicitly that I’m talking about contract, and making an explicit requirement.  The default behavior of this method at runtime is identical to Debug.Assert, it brings up an assertion dialog and brings everything to a screeching halt. However, it’s configurable at build time, so I can have it throw exceptions instead, or do whatever might be appropriate for my environment if the contract is violated.  I can even get the best of both worlds, with a generic version of Requires that specifies an exception

public double Divide(int dividend, int divisor)
{
    Contract.Requires<ArgumentException>(divisor != 0, "divisor");

    return dividend / divisor;
}

I could configure this to bring up the assertion dialog in Debug builds, but throw ArgumentException in Release builds.  Good stuff.

The example above demonstrates a required “precondition”.  With Code Contracts, I can also specify “postconditions”. 

public void Transfer(Account from, Account to, decimal amount)
{
    Contract.Requires(from != null);
    Contract.Requires(to != null);
    Contract.Requires(amount > 0);
    Contract.Ensures(from.Balance >= 0);

    if (from.Balance < 0 || from.Balance < amount)
        throw new InsufficientFundsException();

    from.Balance -= amount;
    to.Balance += amount;
}

This isn’t the greatest example, but basically the Transfer method is promising (with the Contract.Ensures method) that it won’t ever leave the Balance a negative number.  Again, this is arguably behavior rather than contract, but you get the point. 
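Postconditions can also constrain a return value via Contract.Result&lt;T&gt;().  A quick sketch (my example, not from the original post; note that the checks are only enforced when the Code Contracts binary rewriter is enabled):

```csharp
using System;
using System.Diagnostics.Contracts;

static class SafeMath
{
    // Sketch: a postcondition on the return value via Contract.Result<T>().
    // Without the Code Contracts rewriter these calls compile away to no-ops.
    public static double Divide(int dividend, int divisor)
    {
        Contract.Requires(divisor != 0);
        Contract.Ensures(!double.IsInfinity(Contract.Result<double>()));

        return (double)dividend / divisor;
    }
}
```

Here the operands are cast to double before dividing, so a zero divisor would yield Infinity rather than an exception, which is exactly the outcome the Ensures clause rules out.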

A really nifty feature is that I can write an interface definition and associate it with a set of contract calls, so that anyone who implements the interface will automatically “inherit” the contract validation.  The syntax is a bit weird, but you can see why it would need to be like this…

[ContractClass(typeof(ContractForCalculator))]
interface ICalculator
{
    int Add(int op1, int op2);
    double Divide(int dividend, int divisor);
}

[ContractClassFor(typeof(ICalculator))]
class ContractForCalculator : ICalculator
{
    #region ICalculator Members

    public int Add(int op1, int op2)
    {
        return default(int);
    }

    public double Divide(int dividend, int divisor)
    {
        Contract.Requires(divisor != 0);

        return default(double);
    }

    #endregion
}

Now any class that implements ICalculator will have the contract validated for the Divide method.  Cool.  The last thing I want to point out is that the team included a handy sample of how to work with contract validation in your MSTest unit test code.  The Contract class exposes an event called ContractFailed, and I can subscribe to the event to decide what happens on a failure.  For a test assembly, I can do this

[AssemblyInitialize]
public static void AssemblyInitialize(TestContext tc)
{
    Contract.ContractFailed += (sender, e) =>
        Assert.Fail(e.FailureKind.ToString() + " : " + e.Message);
}

which will translate contract failures into test failures with very explicit error messages.  In the case of my Divide method if I run this test

[TestMethod]
public void DivideContractViolation()
{
    Calculator target = new Calculator();
    int dividend = 12;
    int divisor = 0;
    double actual;
    actual = target.Divide(dividend, divisor);
    Assert.Fail("Should have failed");
}

I get a test failure of

Test method CalculatorTests.CalculatorTest.DivideContractViolation threw exception:  System.Diagnostics.Contracts.ContractException: Precondition failed: divisor != 0 divisor --->  Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException: Assert.Fail failed. Precondition : Precondition failed: divisor != 0 divisor.

Cool.  This is definitely something I’ll be looking at more in the future.

Thursday, 21 May 2009 14:50:38 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 20 May 2009

This came up in class yesterday, so I did a little digging.  Everyone may already know this, but it came as news to me. :)

If I’ve got a collection in memory, as well as a LINQ to SQL DataContext

List<FavoriteFood> foods = new List<FavoriteFood>(){
	new FavoriteFood{CustomerID = "ALFKI", Favorite="Chips"},
	new FavoriteFood{CustomerID = "ANATR", Favorite = "Fish"}};

NorthwindDataContext context = new NorthwindDataContext();

and I want to do an INNER JOIN between the foods list and the Customer table in Northwind, it would seem like this should do it

var bad = (from cust in context.Customers
           join f in foods on cust.CustomerID equals f.CustomerID
           select new { cust.ContactName, f.Favorite });

but sadly, no.  At runtime, LINQ to SQL throws a NotSupportedException, complaining that local sequences can only be used with the Contains operator.

If you stop to think about it, it totally makes sense that it wouldn’t work like that, since there’s no way to translate that join into SQL in any rational way to send to SQL Server. 

OK, so the next step would be to first get the customers in memory, then do the join

//this executes the whole query, thus retrieving the entire customer table
var customers = (from c in context.Customers
                 select c).ToList();

//an inner join between the customers from SQL and the in-memory list
var inner = from cust in customers
            join f in foods on cust.CustomerID equals f.CustomerID
            select new { cust.ContactName, f.Favorite };

That works, but I’ve had to pull the entire Customer table into memory just to join two rows.  If I wanted to do a LEFT OUTER JOIN, I’d need that anyway, like so

//here's the left outer join between the customer list from SQL 
//and the in-memory favorites
var leftouter = from cust in customers
                join f in foods on cust.CustomerID equals f.CustomerID into custFoods
                from custFood in custFoods.DefaultIfEmpty()
                select new
                {
                    cust.ContactName,
                    Favorite = custFood == null ? null : custFood.Favorite
                };

but I want an inner join without pulling down all the customers, so I need to fetch only those Customer rows that will join

//this is how you manage the IN clause
var x = from c1 in context.Customers
        where (from cf in foods
               select cf.CustomerID).Contains(c1.CustomerID)
        select c1;

//Note that to join the result in x back to the foods collection you would have to 
//execute the query just like with customers above...
var littleInnerJoin = from filteredCustomer in x.ToList()
                      join f in foods on filteredCustomer.CustomerID equals f.CustomerID
                      select new { filteredCustomer.ContactName, f.Favorite };

It’s two steps, but now I’ve just loaded the rows that will join into memory, and then done the join with LINQ to Objects.  It bears a little thinking about, but doesn’t seem like too much overhead, IMHO.
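The same filter-then-join shape is easy to see with plain in-memory lists; in this sketch (the types and names are illustrative stand-ins), the Contains call is the part that LINQ to SQL would translate into the SQL IN clause:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative stand-ins for the types in the post.
class Customer { public string CustomerID; public string ContactName; }
class FavoriteFood { public string CustomerID; public string Favorite; }

static class JoinDemo
{
    // Filter first (the Contains call is what LINQ to SQL turns into an IN
    // clause), then inner-join the small filtered list in memory.
    public static List<string> InnerJoinFavorites(List<Customer> customers, List<FavoriteFood> foods)
    {
        var filtered = customers
            .Where(c => foods.Select(f => f.CustomerID).Contains(c.CustomerID))
            .ToList();

        return (from c in filtered
                join f in foods on c.CustomerID equals f.CustomerID
                select c.ContactName + ":" + f.Favorite).ToList();
    }
}
```

Only the rows that survive the filter ever get joined, which is the whole point of the two-step approach.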

Work | LINQ
Wednesday, 20 May 2009 13:38:43 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 07 May 2009

Let’s say I’ve gone to the effort of creating a modal dialog or its equivalent in a Silverlight application.  What I would like to complete the picture is a “default” button, so that if I hit the Enter key while in a text box, the dialog will be “submitted”.  There are probably several ways of achieving this end, but I wanted something that was simple, and encapsulated in an attached property so the consumer wouldn’t have to deal with much code.  I wrote the attached property so that I could use it on text boxes like this

<TextBox x:Name="theText" VerticalAlignment="Center" Margin="10,0,10,0" 
	Grid.Row="0" my:DefaultButtonService.DefaultButton="theButton"/>

where “theButton” here is the default button I want to be pressed when the Enter key happens inside my text box.  The downside is that I have to apply this property to every text box on the page, but so be it, that seems like a relatively small price to pay.  So I got as far as finding the named button in the visual tree, but then the question was, how to “press” the button.  The Button class in Silverlight has a protected OnClick that would do the trick, if it wasn’t protected.  I could derive my own control from Button and expose the OnClick method, but ewww.  If I did that then every dialog that wanted this behavior would have to remember to use the derived Button class.  I tried reflecting over the Button and Invoking OnClick anyway, but it turns out you get a security exception.  OK.  Then, thanks to a presentation Stuart gave us yesterday on Accessibility, I remembered the Automation framework in Silverlight.  That turned out to be super easy: just create an instance of ButtonAutomationPeer from the Button, then Invoke it.  Cool.

public static class DefaultButtonService
{
    public static readonly DependencyProperty DefaultButtonProperty =
        DependencyProperty.RegisterAttached("DefaultButton", typeof(string), typeof(DefaultButtonService), new PropertyMetadata(OnDefaultButtonChanged));

    public static string GetDefaultButton(DependencyObject d)
    {
        return (string)d.GetValue(DefaultButtonProperty);
    }

    /// <summary>
    /// Sets the DefaultButton property.
    /// </summary>
    public static void SetDefaultButton(DependencyObject d, string value)
    {
        d.SetValue(DefaultButtonProperty, value);
    }

    private static void OnDefaultButtonChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        TextBox tb = d as TextBox;
        if (tb != null)
            tb.KeyUp += new KeyEventHandler(tb_KeyUp);
    }

    static void tb_KeyUp(object sender, KeyEventArgs e)
    {
        switch (e.Key)
        {
            case Key.Enter:
                string name = (string)((DependencyObject)sender).GetValue(DefaultButtonProperty);
                object root = App.Current.RootVisual;
                object button = ((FrameworkElement)root).FindName(name);
                if (button is Button)
                {
                    ButtonAutomationPeer peer = new ButtonAutomationPeer((Button)button);

                    IInvokeProvider ip = (IInvokeProvider)peer;
                    ip.Invoke();
                }
                break;
        }
    }
}

Thursday, 07 May 2009 13:27:54 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 07 April 2009

We just added a set of new one-day classes starting in May, including “Agile in a Day”, Silverlight 3, and UI design patterns for WPF and Silverlight.  These are lecture-style classes that will give you an in-depth look at each subject.  Like our previous Silverlight event, these are 300-400 level, technical sessions that will give you a leg up on each subject in a day.  Because they are presented lecture style, we can offer them for only $125 each, so feel free to collect the whole set. :-)  Details and registration here.  We may be adding some additional topics soon, so check back often.

Tuesday, 07 April 2009 10:41:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 10 March 2009

I’m building an MVVM app in WPF, and needed to show a modal dialog, but I couldn’t figure out how to make it work and still maintain the separation between ViewModels and Views.  I have been mapping Views to ViewModels using DataTemplates, so that the relationship is declarative in XAML rather than in code someplace.  What I ended up with was a new Window called Dialog that takes a type in its constructor that corresponds to a ViewModelBase-derived type.  (See Josh Smith’s WPF Apps With The Model-View-ViewModel Design Pattern article for background on how the pieces fit together…) 

public partial class Dialog : Window
{
    public Dialog(Type vmType)
    {
        InitializeComponent();

        ViewModelBase vmb = Activator.CreateInstance(vmType) as ViewModelBase;
        item.Content = vmb;

        this.Title = vmb.DisplayName;
    }
}

In the XAML for Dialog, there’s just a Grid that contains a ContentPresenter called “item”.  The constructor sets item’s content to be the ViewModel, and the DataTemplate takes care of associating the View (a UserControl) with the ViewModel.  Note the use of SizeToContent="WidthAndHeight" on the Dialog window, which causes the window to resize to how ever big the UserControl that represents the View might be.

<Window x:Class="SomeNamespace.Dialog"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:vm="clr-namespace:SomeNamespace"
    Title="Dialog" SizeToContent="WidthAndHeight">
    <Window.Resources>
        <DataTemplate DataType="{x:Type vm:NewThingViewModel}">
            <!-- the View (a UserControl) for NewThingViewModel goes here -->
        </DataTemplate>
    </Window.Resources>
    <Grid>
        <ContentPresenter x:Name="item"/>
    </Grid>
</Window>

To create an instance of the new modal dialog, I just create a new instance of Dialog and pass the type of ViewModel it’s supposed to host…

new Dialog(typeof(NewThingViewModel)).ShowDialog();

There are still some details to work out as far as getting results from said dialog, but hey, progress…

Tuesday, 10 March 2009 15:36:01 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 04 March 2009

So yesterday I was reading Josh Smith’s WPF Apps With The Model-View-ViewModel Design Pattern article in last month’s MSDN magazine, and I really liked the sample app that he built.  It totally cleared up some rough edges (in my understanding) of MVVM.  Specifically, I had been conflicted about where to put things like button click handlers.  Do you put a Click event handler in your codebehind, which just defers to the View Model?  Do you create a generic CommandBinding handler to wire up commands to your View Model’s methods?  (I’ve seen both in various examples…)  The former seems like cheating a bit on MVVM (no code behind would be ideal, it seems to me) and the latter overly complicated.  Josh’s solution is to bind the Command property of the UIElement (button, menu item, whatever) to a property of type ICommand on the View Model. 

That totally made sense to me.  No more code behind (like at all, basically) and I don’t have to build additional framework to make it happen, with the exception of coming up with an ICommand implementation.  Josh solved that with a class he calls RelayCommand, which just wraps an ICommand implementation around a delegate (or lambda) of type Action<object>, with an optional Predicate<object> to handle the CanExecute method of ICommand. 
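RelayCommand is small enough to sketch from that description; this is my reconstruction of the idea, not Josh's exact code:

```csharp
using System;
using System.Windows.Input;

// A reconstruction of the RelayCommand idea: wrap an Action<object>,
// plus an optional Predicate<object> for CanExecute, in an ICommand.
public class RelayCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public RelayCommand(Action<object> execute) : this(execute, null) { }

    public RelayCommand(Action<object> execute, Predicate<object> canExecute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        // no predicate means the command is always executable
        return _canExecute == null ? true : _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }
}
```

A View Model can then expose command properties built from simple lambdas, as the SayHello property later in this post does.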

Groovy, now my XAML knows absolutely nothing about the View Model, and its only links to said View Model are through data binding.

Then I got to wondering if something as easy could work in Silverlight…  The answer turned out to be almost, but not quite.  Since Silverlight controls don’t expose a Command property, you have to hook them up yourself, but otherwise it works just the same way.

So if I’ve got a page with a single button on it


<UserControl x:Class="SilverlightCommands.Page"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:my="clr-namespace:SilverlightCommands"
    Width="400" Height="300">
    <Grid x:Name="LayoutRoot" Background="White">
        <Button Content="Say Hello..." VerticalAlignment="Center" 
                my:ButtonService.Command="{Binding Path=SayHello}"/>
    </Grid>
</UserControl>

I can add an attached property whose value is bound to the ICommand property on the View Model.  The attached property grabs the ICommand and hooks up the button’s click handler to the ICommand’s Execute method.

#region Command

/// <summary>
/// Command Attached Dependency Property
/// </summary>
public static readonly DependencyProperty CommandProperty =
    DependencyProperty.RegisterAttached("Command", typeof(ICommand), typeof(ButtonService),
        new PropertyMetadata(OnCommandChanged));

/// <summary>
/// Gets the Command property.
/// </summary>
public static ICommand GetCommand(DependencyObject d)
{
    return (ICommand)d.GetValue(CommandProperty);
}

/// <summary>
/// Sets the Command property.
/// </summary>
public static void SetCommand(DependencyObject d, ICommand value)
{
    d.SetValue(CommandProperty, value);
}

/// <summary>
/// Handles changes to the Command property.
/// </summary>
private static void OnCommandChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    if (d is Button)
    {
        Button b = d as Button;
        ICommand c = e.NewValue as ICommand;
        b.Click += delegate(object sender, RoutedEventArgs arg) { c.Execute(null); };
    }
}

#endregion

In the View Model, then, is the actual handler for the command

public class PageViewModel
{
    public ICommand SayHello
    {
        get
        {
            return new RelayCommand(param => MessageBox.Show("HelloWorld"));
        }
    }
}

Note that I’m using Josh’s RelayCommand helper to wrap the simple lambda.

That feels like not too terribly much infrastructure, although I might have to create separate attached properties to handle different control types (e.g. those that don’t have a Click event).  It would be pretty straightforward to support command parameters in the same way, by creating a CommandParameter attached property and picking up its value in the OnCommandChanged handler…

private static void OnCommandChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    if (d is Button)
    {
        string parameter = d.GetValue(CommandParameterProperty) as string;
        Button b = d as Button;
        ICommand c = e.NewValue as ICommand;
        b.Click += delegate(object sender, RoutedEventArgs arg) { c.Execute(parameter); };
    }
}

The only thing that doesn’t really translate well is the CanExecute method, which in WPF would automatically enable/disable the button based on the result of the ICommand’s CanExecute method.  I tried a couple ways of wiring it up during the OnCommandChanged handler, but couldn’t come up with anything that didn’t have nasty side effects (garbage collection, etc.) or wasn’t just plain ugly.  I’d probably just bind the IsEnabled property of the button to a separate boolean on the View Model and deal with it there. 


Code is here.

Wednesday, 04 March 2009 16:06:59 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [3]  | 
# Monday, 02 March 2009

After reading Justin Angel’s very good summary of unit testing Silverlight using the Silverlight UnitTest framework, RhinoMocks, Unity etc. I decided to give it a go and find out how easy it was for a “real” application.  I converted the course evaluation app I’ve been working on since December to MVVM using Unity, and then set about trying to test it with the testing tools. 

I must say I do rather like the MVVM pattern, so that part went pretty well, as did the use of Unity, although there was some learning to do there.  It’s not quite as obvious as it maybe should be, but it didn’t take too long.  The biggest issue I had with both Unity and the test tools came in relation to the WCF proxy that I’m using to talk back to the server from Silverlight.  I think it would be a bit easier using the asynchronous interface that is generated as part of the proxy (the one that has all the BeginXXX, EndXXX methods on it) but I’m using the interface that consists of completion events and XXXAsync methods.  That object (in my case it’s called “EvalServiceClient”) doesn’t like being created by Unity, presumably somewhere down in the WCF infrastructure, so I had to create it myself and register the instance with Unity.

Current = new UnityContainer();
Current.RegisterInstance(typeof(EvalServiceClient), new EvalServiceClient());

That isn’t too terrible, but it did take a while to figure out.  One of the things that makes that harder is that the errors that come back just say “Unity couldn’t create your thing” and it takes a bit of digging to find out where and why it actually failed. 

The blogosphere suggests (and I agree) that implementing MVVM in Silverlight isn’t quite as straightforward as it might be in WPF, largely due to the lack of commands.  There are a couple of declarative solutions for mapping UI elements to methods in a View Model, but most relied on quite a bit of infrastructure.  I decided it was OK (enough) to put just enough code in my code-behind to wire up event handlers to methods on the View Model.  Icky?  Not too bad.  Commands would obviously be better, but there it is. 

private void submitEval_Click(object sender, RoutedEventArgs e)
{
    // call the corresponding method on the View Model
}

private void lstCourse_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    CourseInfo ci = lstCourse.SelectedItem as CourseInfo;
    // hand the selected course to the View Model
}

I found that the reliance on data binding makes it much easier to separate presentation from business logic.  The View Model can represent any UI-specific logic, like which buttons should be enabled when (which the underlying business/domain model doesn’t care about), and allow a designer to work strictly with XAML.  Because INotifyPropertyChanged really works, you don’t have to worry about pushing data into the interface, just about which properties have to be marked as changed.  For a computed property like “should the submit button be shown” it may take a few extra notification calls to make sure the UI gets updated properly, but that seems reasonable. 

public bool CanSubmit
{
    get
    {
        _canSubmit = (_registrationId.HasValue && _questionCategories != null);
        return _canSubmit;
    }
    set
    {
        if (_canSubmit != value) { _canSubmit = value; NotifyPropertyChanged("CanSubmit"); }
    }
}

public int? RegistrationId
{
    get { return _registrationId; }
    set
    {
        if (_registrationId != value)
        {
            _registrationId = value;
            NotifyPropertyChanged("RegistrationId");
            NotifyPropertyChanged("CanSubmit"); // RegistrationId affects CanSubmit
        }
    }
}

public System.Collections.ObjectModel.ObservableCollection<Evaluation.EvaluationServer.QuestionCategory> QuestionCategories
{
    get { return _questionCategories; }
    set
    {
        if (_questionCategories != value)
        {
            _questionCategories = value;
            NotifyPropertyChanged("QuestionCategories");
            NotifyPropertyChanged("CanSubmit"); // QuestionCategories affects CanSubmit
        }
    }
}

In the example above, the value of “CanSubmit” relies on the state of RegistrationId and QuestionCategories, so the setters for those properties also “invalidate” CanSubmit so the UI will update properly.  In the XAML, the IsEnabled property of the Submit button is bound to the CanSubmit property of the View Model.

The next challenge was getting the test code to work.  Because I didn’t want the test code to call the real web service, I had to mock the calls to the EvalServiceClient.  For whatever reason, I didn’t have any luck with mocking the object itself.  I think this had to do with the asynchronous nature of the calls.  The code registers an event handler for each completion event, then calls XXXAsync to call the web service.  When the call completes, it fires the completion handler.  To make that work with RhinoMocks, you have to record the event hookup, then capture an IEventRaiser interface that will let you raise the desired event.

using (mocks.Record())
{
    client.GetStudentNameCompleted += null;
    raiser = LastCall.IgnoreArguments().GetEventRaiser();
}

That call to GetEventRaiser fails if I mock the EvalServiceClient object itself, so I had to create an interface that I could mock instead.  Luckily, the generated proxy is a partial class, so it’s easy to add a new interface.

public interface IEvalServiceClient
{
    event System.EventHandler<GetStudentNameCompletedEventArgs> GetStudentNameCompleted;

    void GetStudentNameAsync();
}

public partial class EvalServiceClient : IEvalServiceClient
{
}

Now the RhinoMocks code mocks the IEvalServiceClient interface, and the GetEventRaiser call works just fine.  Because the WCF client actually gets created by Unity, we have to register the new mock instance with the UnityContainer.
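Registering the mock in place of the real client might look like this sketch (assuming the container registration was switched from the concrete EvalServiceClient to the IEvalServiceClient interface, and the same static Current container shown earlier):

```csharp
// Register the mocked interface so code that resolves IEvalServiceClient
// from the container gets the mock during the test.
Current.RegisterInstance(typeof(IEvalServiceClient), client);
```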

MockRepository mocks = new MockRepository();
IEvalServiceClient client = mocks.StrictMock<IEvalServiceClient>();

IEventRaiser raiser;

using (mocks.Record())
{
    client.GetStudentNameCompleted += null;
    raiser = LastCall.IgnoreArguments().GetEventRaiser();
}

using (mocks.Playback())
{
    Page page = new Page();
    raiser.Raise(client, new GetStudentNameCompletedEventArgs(new object[]{"Jones, Fred"}, null, false, null));
    WaitFor(page, "Loaded");

    EnqueueCallback(() => Assert.IsTrue(page.lblStudent.Text == "Jones, Fred"));
}

During playback, we can use the IEventRaiser to fire the completion event, then check the UI to make sure the property got set correctly. 

I’m pretty convinced that MVVM is a good idea, but this method of testing seems awfully cumbersome to me, plus pretty invasive.  I had to make quite a few changes to my app to make the testing work, including creating the interface for the EvalServiceClient, and marking any controls I needed to write tests against with x:FieldModifier="public" in my XAML.  It’s good to know how to make this work, but I’m not sure I’d use this method to test everything in my Silverlight app.  Probably only the highest risk areas, or places that would be tedious for a tester to hit.

Monday, 02 March 2009 14:42:25 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 20 February 2009

On Wednesday I posted about coming up with a way of decorating any UIElement with a red border in response to a validation error.  The only drawback (that I could see just then) was that I had to add borders to all the elements that I might want to exhibit this behavior.  Today Shaun suggested that I try it as an attached property instead, which might save that hassle.  Indeed it does.  Because of the way attached properties work in Silverlight, it’s possible to use them to attach behavior as well as data by hooking the value changed event.  Here’s the same (basically) logic as an attached property:

public class ValidationService
{
    public static readonly DependencyProperty ValidationBehaviorProperty =
        DependencyProperty.RegisterAttached("ValidationBehavior", typeof(bool), typeof(ValidationService),
                                    new PropertyMetadata(OnValidationBehaviorChanged));

    public static bool GetValidationBehavior(DependencyObject d)
    {
        return (bool)d.GetValue(ValidationBehaviorProperty);
    }

    public static void SetValidationBehavior(DependencyObject d, bool value)
    {
        d.SetValue(ValidationBehaviorProperty, value);
    }

    private static void OnValidationBehaviorChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        if (d is FrameworkElement)
        {
            FrameworkElement fe = d as FrameworkElement;

            if ((bool)e.OldValue)
            {
                // the behavior was previously attached, so unhook the handler
                fe.BindingValidationError -= fe_BindingValidationError;
            }

            if ((bool)e.NewValue)
            {
                fe.BindingValidationError += new EventHandler<ValidationErrorEventArgs>(fe_BindingValidationError);
            }
        }
    }

    static void fe_BindingValidationError(object sender, ValidationErrorEventArgs e)
    {
        FrameworkElement fe = e.OriginalSource as FrameworkElement;
        DependencyObject parent = fe.Parent;
        Border b = null;

        if (parent is Grid)
        {
            b = new Border() { BorderBrush = new SolidColorBrush(Colors.Red), BorderThickness = new Thickness(0) };

            Grid g = parent as Grid;
            int column = (int)fe.GetValue(Grid.ColumnProperty);
            b.SetValue(Grid.ColumnProperty, column);
            int row = (int)fe.GetValue(Grid.RowProperty);
            b.SetValue(Grid.RowProperty, row);
            b.Margin = fe.Margin;
            fe.Margin = new Thickness(0);
            // swap the element out of the grid and re-parent it inside the border
            g.Children.Remove(fe);
            b.Child = (UIElement)fe;
            g.Children.Add(b);
        }
        else if (parent is Border)
        {
            b = parent as Border;
        }

        if (e.Action == ValidationErrorEventAction.Added)
        {
            if (b != null)
                b.BorderThickness = new Thickness(1);
            ToolTipService.SetToolTip(fe, new TextBlock() { Text = e.Error.Exception.Message });
        }
        if (e.Action == ValidationErrorEventAction.Removed)
        {
            if (b != null)
                b.BorderThickness = new Thickness(0);
            ToolTipService.SetToolTip(fe, null);
        }
    }
}

Note that when registering the attached property, we register a value changed handler (in this case OnValidationBehaviorChanged).  This will get called the first time the property is set, which gives us an opportunity to mess with the element that the property is attached to.  If the value of the property is being set to true, we hook up the BindingValidationError handler.

In the validation error handler, we have to find out if the element is currently surrounded by a Border, and add one if it isn’t.  Note: in this example it will only work if the element is originally the child of a Grid.  This could easily be modified to support the other Panel types. 
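One way to generalize it (a sketch of my own, not from the original post; the helper name is an assumption) is to wrap the element via the common Panel base class, remembering that a Grid child also needs its Grid.Row/Grid.Column attached values copied onto the new Border, as the Grid branch in the handler does:

```csharp
// Sketch: wrap a failed element in a (hidden) red border inside any Panel.
// Works for Grid, StackPanel, etc.; Grid positioning properties must still
// be copied by the caller when the panel is a Grid.
private static Border WrapInBorder(FrameworkElement fe, Panel panel)
{
    Border b = new Border()
    {
        BorderBrush = new SolidColorBrush(Colors.Red),
        BorderThickness = new Thickness(0)
    };
    int index = panel.Children.IndexOf(fe);
    panel.Children.RemoveAt(index);   // detach the element from its panel
    b.Child = fe;                     // re-parent it inside the border
    panel.Children.Insert(index, b);  // put the border where the element was
    return b;
}
```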

Once the parent is a border, then the old logic from the previous example can be reused to set the border and tooltip on or off as appropriate. 

Notice that we don’t mark the BindingValidationError event as Handled so that it will bubble.  That allows us to write additional forms specific error handling higher up in the visual hierarchy.  At the form level we might want to invalidate the submit button if validation failed, or something similar, while letting the logic in the attached property handle the UI effects.
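Because the event bubbles, a form-level handler along these lines is possible (a sketch; the control names and the simple error count are my own assumptions, not code from the app above):

```csharp
// Hypothetical form-level handler: track outstanding validation errors
// and disable the submit button while any remain.
private int errorCount = 0;

private void LayoutRoot_BindingValidationError(object sender, ValidationErrorEventArgs e)
{
    if (e.Action == ValidationErrorEventAction.Added)
        errorCount++;
    else if (e.Action == ValidationErrorEventAction.Removed)
        errorCount--;

    btnSubmit.IsEnabled = (errorCount == 0);
}
```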

In the XAML, this new attached property can be used to attach the desired visual effect to any FrameworkElement, like so:

<TextBox Width="Auto" x:Name="txtNewHours"  Text="{Binding Hours, Mode=TwoWay, Converter={StaticResource hoursConverter}, 
    NotifyOnValidationError=true, ValidatesOnExceptions=true}" 
    Grid.Row="1" Grid.Column="1" Margin="5,5,5,5"/>
Friday, 20 February 2009 15:32:07 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 19 February 2009

I’m going to be teaching two new one-day courses on setting up a Continuous Integration process, one using freely available tools, and one using Microsoft Team Foundation Server.  For more details and to register, see our registration page.

Both classes will be held at the Westside campus of OIT

Course Description:

This one-day course will provide a hands-on look at setting up a Continuous Integration process using freely available tools from beginning to end. Students will learn the theory and practice behind establishing a Continuous Integration process, including setting up a build script, running unit tests, setting up an automated build/test server, and capturing reporting information for the whole process.

Course Technology

This course uses Microsoft Windows XP, the Microsoft .NET 3.5 Framework, NAnt, NUnit, and CruiseControl.NET.

At the end of the course you will be able to:

  • Create a build script using NAnt or MSBuild
  • Create and run unit tests using NUnit
  • Set up and run an automated build using CruiseControl.NET
  • Capture reporting data from the automated CI process

Course Outline
    Continuous Integration in Theory
    • Why CI is important
    • How CI fits into the Software Development process
    Creating a build script
    • What goes in a build script?
    • How does NAnt work?
    • How does MSBuild work?
    • Creating build scripts for NAnt or MSBuild
    • Running an automated build
    Adding unit tests to your build
    • Writing NUnit tests
    • Running tests as part of a build
    • Capturing test results
    Continuous Integration with CruiseControl.NET
    • Installing and configuring CC.NET
    • Adding projects to a build server
    • Reporting with CC.NET
    • Running multiple builds on the same server
        Dependencies between builds


This class is intended for experienced .NET 2.0 software developers.

Thursday, 19 February 2009 12:37:26 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 18 February 2009

I was having a bit of trouble getting validation to work during data binding in SL, but that seems to be resolved now.  I’ve got a little time tracking app, and want to be able to log new time entries…


I want the user to be able to enter hours only in 1/4 hour increments.  The way that gets handled in Silverlight is different from WPF.  To make it work in Silverlight, the data binding source has to throw an exception if the new value is unacceptable.  In this case, the source is a “LogEntry” class with an “Hours” property; the class implements INotifyPropertyChanged to support full two-way data binding.

public class LogEntry : INotifyPropertyChanged
{
    private decimal hours;
    public decimal Hours
    {
        get
        {
            return hours;
        }
        set
        {
            if ((value % .25M) != 0)
                throw new Exception("please enter time in quarter-hour increments");
            if (hours != value)
            {
                hours = value;
                NotifyPropertyChanged("Hours");
            }
        }
    }

    private void NotifyPropertyChanged(string p)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(p));
    }

    #region INotifyPropertyChanged Members

    public event PropertyChangedEventHandler PropertyChanged;

    #endregion
}


In the property setter for the Hours property, if the value isn’t evenly divisible by .25, an Exception gets thrown.  In the XAML, the binding is set up to notify (as a binding error) when an exception gets thrown:


<TextBox Width="Auto" x:Name="txtNewHours"  Text="{Binding Hours, 
    Converter={StaticResource hoursConverter}, 
    NotifyOnValidationError=true, ValidatesOnExceptions=true}"/>

The grid that holds all the controls handles the BindingValidationError event, which is where all the fun stuff happens…

private void newEntryGrid_BindingValidationError(object sender, ValidationErrorEventArgs e)
{
    if (e.Action == ValidationErrorEventAction.Added)
    {
        // not valid
    }
    if (e.Action == ValidationErrorEventAction.Removed)
    {
        // now it's valid
    }
}

OK, so now I know when the value isn’t valid, but what to do about it.  In WPF, the easiest thing to do would be to use a “decorator” to add a border around the UI element that failed validation, but in Silverlight we don’t get decorators.  Instead, I wrapped each item that needed to support validation in a red border with a thickness of 0 in XAML.

<Border BorderBrush="Red" BorderThickness="0">
    <TextBox x:Name="txtNewHours" Text="{Binding ...}"/>
</Border>

That’s a little extra work that I’d rather not do, but there it is.  It would be easy to wrap up this behavior in a custom control, but this was much quicker.

In the validation handler then, all we have to do is get the original source of the binding error and use the VisualTreeHelper to get its parent.  If that parent happens to be a border, set the border’s thickness to 1 to make it visible.  I’m also setting the tooltip for the offending control to be the message of the validation error.  If the ValidationErrorEventAction == Removed, then get rid of the tooltip and set the border’s thickness back to 0.

private void newEntryGrid_BindingValidationError(object sender, ValidationErrorEventArgs e)
{
    if (e.Action == ValidationErrorEventAction.Added)
    {
        DependencyObject ui = e.OriginalSource as DependencyObject;
        DependencyObject parent = VisualTreeHelper.GetParent(ui);
        if (parent is Border)
            ((Border)parent).BorderThickness = new Thickness(1);
        ToolTipService.SetToolTip(ui, new TextBlock() { Text = e.Error.Exception.Message });
    }
    if (e.Action == ValidationErrorEventAction.Removed)
    {
        DependencyObject ui = e.OriginalSource as DependencyObject;
        DependencyObject parent = VisualTreeHelper.GetParent(ui);
        if (parent is Border)
            ((Border)parent).BorderThickness = new Thickness(0);
        ToolTipService.SetToolTip(ui, null);
    }
}

That same code should work for any control that I want to do validation on, as long as I remember to add the borders in XAML. 

Wednesday, 18 February 2009 14:10:45 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 13 February 2009

Check out http://events.sftsrc.com/ for more new classes…

  • LINQ
    • 2 days of LINQ to Objects, LINQ to XML, LINQ to SQL, LINQ to Entities and more
    • 5 days of ASP.NET including some AJAX and a hint of Silverlight
  • WPF
    • 3 days of WPF goodness
  • Silverlight
    • 2 days for those with WPF experience or
    • 3 days for web developers
  • C#  and the .NET Framework
    • 5 days of C# 3.0 and the 3.5 framework, including LINQ, extension methods and the rest of the new language features
Friday, 13 February 2009 10:17:36 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 21 January 2009

I needed to build a survey (a course evaluation in this case, but name-your-survey…) and I wanted to be able to add new questions and question categories to the database without having to touch my (Silverlight) survey app.  I wanted the basic layout to look like this…


It took a little experimentation, and I’m sure there are other ways to make this work, but here’s what worked for me:

The questions and categories live in the database, like so


Categories contain questions, questions have text, and what we store when the survey is complete are the answers to the questions.

In the XAML, first there is an ItemsControl to deal with the categories, so that each category will have its own DataGrid.  The ItemsControl has a DataTemplate that defines what each category name and data grid of questions will look like (some formatting details removed for clarity):

<ItemsControl x:Name="dgPanel">
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <StackPanel Orientation="Vertical">
                <TextBlock Text="{Binding CategoryName}"/>
                <data:DataGrid x:Name="dgOverall" AutoGenerateColumns="False"
                               ItemsSource="{Binding Questions}">
                    <data:DataGrid.Columns>
                        <data:DataGridTextColumn Header="Question"
                            Binding="{Binding Text}" IsReadOnly="True"/>
                        <data:DataGridTemplateColumn Header="Rating">
                            <data:DataGridTemplateColumn.CellTemplate>
                                <DataTemplate>
                                    <t:Rating UserRating="{Binding Path=Answer, Mode=TwoWay}"/>
                                </DataTemplate>
                            </data:DataGridTemplateColumn.CellTemplate>
                            <data:DataGridTemplateColumn.CellEditingTemplate>
                                <DataTemplate>
                                    <t:Rating UserRating="{Binding Path=Answer, Mode=TwoWay}"/>
                                </DataTemplate>
                            </data:DataGridTemplateColumn.CellEditingTemplate>
                        </data:DataGridTemplateColumn>
                    </data:DataGrid.Columns>
                </data:DataGrid>
            </StackPanel>
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>
The questions come from a WCF call, and get bound in the form load

void client_GetQuestionsCompleted(object sender, GetQuestionsCompletedEventArgs e)
{
    dgPanel.ItemsSource = e.Result;
}

Each row that comes back has the category header text and a collection of questions, so that for each item in the ItemsControl, the text is bound to the header, and the questions are bound to the datagrid.  The DataGridTemplateColumn stuff above maps the Answer property of each question to a UserControl called Rating that contains the radio buttons.  Depending on which radio button gets checked, the value of the user control represents an enum value with their answer.  Because the data binding is defined as TwoWay, the user’s answers are updated in the in-memory copy of the questions, and when the user eventually hits the “Submit” button, the collection of questions (and answers) is sent back to the web service. 
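The submit path might look roughly like this (a sketch only; the handler name, the collection type, and the SubmitEvaluationAsync proxy method are my assumptions, since the post doesn’t show the submit code):

```csharp
// Hypothetical submit handler: send the bound (and now answered)
// question categories back through the WCF proxy.
private void btnSubmit_Click(object sender, RoutedEventArgs e)
{
    var categories = (ObservableCollection<QuestionCategory>)dgPanel.ItemsSource;
    client.SubmitEvaluationAsync(categories);
}
```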

Now I can add new questions to the database and have those questions show up in the survey, and previously submitted evaluations will only have answers for the questions that were present at the time the survey was submitted.  There’s still some work to do here, like setting up groups of questions so that different sets of questions could be used for different circumstances, etc., but this is a decent start.

Wednesday, 21 January 2009 10:23:06 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 06 October 2008
In the spirit of constantly embracing change, as of last week I am now the Director of Education at SoftSource.  What that means in the near term is that I'm going to be spending a bit of time in the classroom.  What that means in the middle term is that I'll be working on expanding our course offerings to encompass a wider range of topics. 

Right now we offer several classes for open enrollment at OHSU's Professional Development Center in Hillsboro (formerly OGI).  This Fall we'll be teaching C# and the .NET Framework (3.5), WPF, WCF and Debugging.  For details see our open enrollment page.  We also do custom training designed specifically around a customer's needs, or we can deliver our standard courseware at the site and time of your choice. 

Now here comes the important part...  All of our instructors are also Software Engineers and Architects that spend the time they aren't in the classroom on customer projects.  That means that they have real-world experience developing real software on a day to day basis, with the latest technology in the field. 

What I'd love to hear back from y'all is what kind of topics you'd be interested in.  Agile development?  Continuous Integration?  Using MS Team Foundation Server?  How to do real-world estimation and planning?  Of course we hope to continue offering technical subjects like advanced .NET development, WPF, Silverlight, and whatever else tickles your collective fancy. 

If you or your organization needs off-the-shelf or custom-designed training, or if you have ideas for training you'd like to see us offer, please let me know.

Monday, 06 October 2008 11:19:57 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 28 July 2008
I spent quite a while working with ADAM (Active Directory Application Mode) in Windows 2003 Server / XP on a previous project.  It's a great way to incorporate an LDAP directory into your application without requiring a full-blown AD installation, or domain memberships.  The Windows 2008 Server version has been renamed AD LDS (Lightweight Directory Services), which is a much better name.  LDS incorporates several improvements that make it even more attractive, including some very nice performance increases.   Bart De Smet has a very nice guide to installing LDS on 2008 Server and setting up a useful configuration.  Check it out if you've used ADAM and/or have any interest in LDAP. 

Monday, 28 July 2008 12:54:31 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, 25 July 2008
I'll be speaking on Continuous Integration at the Software Association of Oregon's Developer SIG meeting in September.  We'll look at how to implement CI across different parts of your organization, using techniques and tools for both .NET and Java.

Friday, 25 July 2008 13:37:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 14 May 2008
Looks like I'll be speaking on Continuous Integration in theory and practice at next month's meeting of the Software Process Improvement Network.  Details are here.  If you are interested in CI, come on down.

Wednesday, 14 May 2008 10:18:08 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 25 April 2008
My new book Code Leader: Using People, Tools, and Processes to Build Successful Software is now available and in stock at Amazon and hopefully soon at stores near you.  The initial outline for the book came from a post I did a while back, This I Believe, the developer edition.  Writing up that post got me thinking about a lot of things, and I've continued to expand on them both in the book and in presentations since then. 

There is a lot of content in the book about working with tools like SCC, static analysis, and unit test frameworks to build better software more efficiently.  I think if I were to sum up the focus, it is that we as developers owe it to our project sponsors to work efficiently and effectively, and there are lots of incremental changes that can be made to the way we work, both in terms of tools and processes to make that happen. 

The reality of modern software development (mostly) is that since physical constraints like CPU time and memory/disk space have largely fallen away, any development project (from a technical standpoint) should be optimized for developer productivity.  That includes a wide array of factors, from readability of design, how clear developer intent is to future readers, and how resilient our software is to future change, to reducing barriers to developer productivity like implementing a good CI process.  Working together as a team across disciplinary and physical boundaries is at least as big a challenge as writing good code. 

I agree very much with Scott that the why and how of software development are often just as important (and harder to get right) as the what.  I hope that I have something to contribute to the discussion as we all figure it out together...

Friday, 25 April 2008 11:31:59 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Thursday, 10 April 2008
If you are doing anything even remotely involving Ajax or client side script, and you like Firefox like I like Firefox, go and get a copy of Firebug.  I quite literally don't know what I would do without it.  Firebug provides very comprehensive script debugging, allowing you to set breakpoints in your JavaScript, step through it, etc.  It also shows you all the requests and responses for all your post-backs, and lets you examine HTML, CSS, and script as well as the client side DOM at "run-time".  It's been invaluable in helping me figure out how ASP.NET Ajax is supposed to work and what's wrong with the JavaScript I'm writing (it's been a while). 

Good stuff.

Ajax | ASP.NET | Work
Thursday, 10 April 2008 10:56:03 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

First, there was a few ideas rattling around in my head, then there was a blog post, and now you can buy the book. :-)

The most difficult challenges facing most of us these days are not about making code work.  That's relatively easy, compared to making code that you can work on and maintain with other people.  I've tried to bring together a bunch of issues and practical solutions for writing code with a team, including defining contracts, creating good tracing and error handling, and building a set of automatic tests that you can rely on.  In addition, there's a bunch in here about how to work with other developers using source control, continuous integration, etc. to make everyone's lives easier and more productive.
Thursday, 10 April 2008 10:41:33 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
Thanks to one of the other SoftSource guys here (yay, Ben!) we got the problem resolved.  Turns out it was also a problem under VS 2005, but only sometimes, and nobody noticed.  Huh.  Anyway, there was a bit in the base page that was creating a control dynamically and inserting it into the control hierarchy.  In order to find out where it was supposed to go, it was recursively traversing the control hierarchy looking for the right one, before LoadViewState.  This causes the repeater to be created too early, and when ViewState comes along, it sees the control has already been created and so does nothing. 

This goes to show once again, just don't mess with the control hierarchy.  Nothing good can come of it.  Even better, just don't use repeaters and come up with a better way of doing things.  I'm finally doing some "real" Ajax using client side JavaScript to make asynchronous callbacks.  Sooooo much better than using update panels.  Yes, it requires writing JavaScript (of which I'm mildly scared) but it turns out to be pretty easy, and the pages are not only much more responsive, but much more deterministic in their behavior.  We're now essentially only rendering the full page once, then making all the changes client side.  The programming model is a bit harder, but the results are worth it.

Thursday, 10 April 2008 10:32:49 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 25 March 2008
Yes, once again repeaters and ViewState are the bane of my existence... 

We have several pages that follow essentially the same pattern:  we have repeaters that include text boxes/check boxes, etc.  We need to retrieve the values from those controls in order to get feedback from the user.  To do that, the controls themselves are set to AutoPostBack=true.  (To make things more complicated, these repeaters are sitting inside an ASP.NET Ajax UpdatePanel.) When the post back happens, we grab the updated values from the repeater items, save them, then re-databind the repeaters.  This is less than optimal, I have come to see, but it worked.  That's worked, past tense.  As soon as we moved the project to VS 2008, it completely fails to work.  I'm pretty sure that ViewState is involved, since essentially we were relying on the repeaters being populated from ViewState before being databound.  That seems to no longer be the case.  To make matters worse, when I tried to code up a trivial example to demonstrate the problem, it works fine.  Curses! 

I'm down to two possible explanations, neither of which I've been successful in proving so far.  Either something in the page is touching those repeaters before LoadViewState, thus screwing up the object creation, or the new 3.5 version of Ajax is passing ViewState differently on Async PostBacks.  Either way, we're working around it right now, but it would still be nice to know what happened.

My key learning from this whole experience?  Repeaters are the work of the devil, and writing HTML by hand wasn't such a bad idea. :-)

Tuesday, 25 March 2008 09:43:11 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
I'll be speaking at next month's InnoTech Oregon conference, at the convention center April 16th and 17th.  The session is entitled "The Code is the Easy Part".  I'll be speaking on practical steps you can take to improve your software project, specifically around making it easier and more effective for developers to work together as a team.  Specific topics include starting a Continuous Integration process, effective use of source control, and using tests as a means of expressing developer intent.

It looks like there's a little something for everyone at the conference, including some stuff on development practices and "agile" methodologies.  Should be fun.

Tuesday, 25 March 2008 09:34:33 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 14 March 2008
Once again I'll be teaching at the Oregon Institute of Technology this coming Spring term.  This time around the topic is building an application using C# in Visual Studio 2008.  We'll be covering an entire application lifecycle, including design, coding, test, build and deploy.  We'll go into detail about effective use of source control, how to set up a Continuous Integration process, how to write good unit tests, etc.  Along the way we'll be covering some of the new .NET 3.5 features like LINQ and all that entails.

Spread the word, and come on down if you are interested.

The official course title is .NET Tools with C# (CST407) and it will be Wednesday nights at the Hillsboro campus.

Friday, 14 March 2008 10:02:43 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 22 January 2008
I've been poking around a bit with the new ASP.NET MVC stuff, and so far I am very favorably impressed.  Sure, there are some things to work out, but I think they represent exactly the right level of abstraction for getting serious work done.  One of the things I've noticed lately since I've been doing lots of ASP.NET work is that the "classic" (if that works here) ASP.NET postback model is very very good at making the trivial case trivial.  If all you want is a simple form that pretends to have button click events, it's super easy.  Unfortunately for many of us, however, it is also very very good at making less trivial cases really hard.  At some point (and relatively quickly, I think) we spend a lot of time fighting the ASP.NET model rather than taking advantage of it.  I have spent hours trying to deal with the fact that ASP.NET is trying to make me code like I was writing a windows forms application when what I'm really writing is a web app.  I shouldn't have to worry about whether or not I need to handle some bit of rendering in OnInit or OnPreInit.  I know what HTML I want, but ASP.NET can make it hard to get there.

The new MVC model is the level of abstraction I'd like to work at.  I know I'm writing a web app, so better to embrace that fact and deal with requests and responses rather than trying to pretend that I have button click events.  The MVC model will make it very easy to get the HTML I want without having to fight the model.  I haven't logged enough time to have found the (inevitable) rough edges yet, but so far if I was starting a new web project I'd definitely want to do it using the MVC model.  The fact that it is easy to write unit test code is an added bonus (and well done, there!) but what really strikes me as useful is the model. 

If you want a web app, write a web app!

Tuesday, 22 January 2008 13:38:41 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 26 December 2007
A while back I was having a problem with getting events to fire from a drop down list inside a repeater.  In that case, I turned off ViewState for the Repeater control, and everything worked fine after that.  Well, last week I came upon a situation (much too complicated to go into here) where that just wouldn't work.  I needed the ViewState to be on, and knew there should be a reasonable way to make it work.  To make a (much too) long story short, after talking it through with Leo I discovered the error of my ways.  I was populating the items inside the drop down list during the Repeater's ItemCreated event.  Apparently that's bad.  I took exactly the same code and moved it to the handler for ItemDataBound, and everyone was happy.  Now I can leave ViewState on, correctly populate the items in the list, and actually get the SelectedIndexChanged event properly fired from each and every DropDownList. 
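For anyone hitting the same wall, here's a minimal sketch of the pattern that worked (the repeater, list, and data-access names here are my own illustrations, not from the original code):

```csharp
// Bind the DropDownList during ItemDataBound, not ItemCreated, so the
// items survive with ViewState enabled and SelectedIndexChanged fires.
// "optionsList" and GetOptionsFor(...) are illustrative names.
protected void myRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item ||
        e.Item.ItemType == ListItemType.AlternatingItem)
    {
        DropDownList list = (DropDownList)e.Item.FindControl("optionsList");
        list.DataSource = GetOptionsFor(e.Item.DataItem);
        list.DataBind();
    }
}
```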

It's the little things...

Wednesday, 26 December 2007 12:13:48 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 06 December 2007
I happened to meet a woman the other day who grew up speaking Swiss German, and she said that one of the hardest bits of American slang for her to get the hang of was our use of "whatever" (usually with an !) to mean "that's the stupidest thing I've ever heard, but it's too trivial for me to take the time arguing with you about it".  That translation is mine, not hers.  Just look at how many words are saved with that simple but idiomatic usage.

So when I find that someone has changed the name of one of my variables (in several places) from "nextButton" to "nextBtn" causing me two separate merge conflicts, I'm glad we have that little word.


Thursday, 06 December 2007 14:56:06 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 03 December 2007
My SoftSource compatriot Leo makes some good points about the overuse, or at least overreliance on unit testing.  I absolutely agree that you have to be clear about what the point of unit testing is.  Unit testing is for exercising all the possible paths through any given code, and verifying that it does what the author of that code thought that it should do.  That doesn't mean that it does what it is supposed to do in the context of a larger application.  That's why we have integration or functional tests.  It's up to whoever writes your functional tests (hopefully professional testers with specs in hand) to verify that the code does what the user wants it to do.  Integration tests will tell you if your code does what the developer in the cube next to you thinks it should. 

It's all about context.  It is folly to think that running a bunch of unit tests guarantees that the application will do what it should.  In fact, I'm working on a web project right now, and it is entirely possible for every unit test to pass with raging success and still not have the ASP.NET application start up.  That's a pretty clear case of the application not even running, even though all the tests pass.  The tests are still useful for what they provide, but there's a lot more to even automated testing than just validating that the code does what the author intended.

Monday, 03 December 2007 16:48:27 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
If you are following a continuous integration strategy, breaking the build should be a big deal.  Most of the time, if the build is broken it's because someone forgot to check in a new file, or because they didn't update before committing their stuff.  Those are both pretty easily avoided.  One strategy that helps with missing files is to do your work in a task-specific branch, get everything checked in the way you think it should be, and then check out that branch to a separate location on your local system.  If it builds there too, then you probably aren't missing any files.  If you are, you'll know about it before you break the integration branch. 

As far as not updating before committing, there are a couple reasons why that might happen more than you would like.  If the build is constantly breaking, then people will be less likely to want to update before committing, since if the build is broken they won't be able to validate their changes, and will have to put off committing.  If your build process takes too long (more than 10 minutes including unit tests) then updating and building before committing will be painful, and people won't do it.  Of course, sometimes it's simple laziness, but that just needs to be beaten out of people. :)  The other things are fixable with technology, which is easier than getting people to do the right thing most of the time.

To make the process easier for everyone, don't commit any changes after the build breaks unless those changes are directly related to fixing the build.  If they aren't, then those additional commits will also cause failing builds, but now the waters have been muddied and it is harder to track down which changes are causing failures.  If the build breaks, then work on fixing it rather than committing unrelated changes that make it harder to fix the real problem. 

Monday, 03 December 2007 14:44:06 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, 15 November 2007
I know it's been a bit quiet hereabouts of late.  I'm in the middle of a writing project, as well as a (relatively) new work situation still, and trying to catch up from the fact that my knowledge of ASP.NET basically consisted of "there's code behind" before.  So far my feelings are mixed.  Sometimes I think ASP.NET is pretty darned clever.  Other days I find myself longing for the days of CGI, when at least it was easy to understand what was supposed to happen.  Those days usually have something to do with viewstate. :-)  Anyway...

I've also been learning about the ASP.NET Ajax goodies, and ran into an interesting problem.  If you are using UpdatePanel, you absolutely cannot mess with the control hierarchy after the update panel has been created.  If you do, you'll be presented with a helpful error that basically reads "you've screwed something up!".  It took a while to get to the root of the problem given that input.  Turned out that the update panel was in markup for my page, but somewhere between when it got created and OnInit for the page, the control hierarchy was being changed.  I won't go into detail about how, but suffice it to say stuff in the page got wrapped with some other stuff, thereby changing the hierarchy.  Nobody else seemed to mind, except for the UpdatePanel.  To make a potentially very long story only slightly long, as long as all of the screwing around with the controls is done in or prior to OnPreInit for the page, all is well.  Once that change was made everything was happy, and the update panels work fine. 
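In other words, the shape of the fix looked roughly like this (the hierarchy-rearranging call is a hypothetical stand-in for the wrapping code):

```csharp
// Any rearranging of the control tree has to be finished by OnPreInit;
// once the UpdatePanel exists, changing the hierarchy later in the page
// lifecycle triggers the unhelpful "you've screwed something up" error.
protected override void OnPreInit(EventArgs e)
{
    WrapPageControls();   // hypothetical code that re-parents/wraps controls
    base.OnPreInit(e);
}
```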

The page lifecycle is possibly one of the great mysteries of our time...

Thursday, 15 November 2007 10:24:37 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 
# Wednesday, 17 October 2007
OK, I'm the first to admit that when it comes to writing ASP.NET I'm a complete newbie.  Before I started on the project I'm currently working on I'd probably written less than 10 ASP.NET pages.  So maybe this is obvious to the set of all people not me.  But I think maybe not.

I've got a repeater inside a form.  Each item in the repeater is composed of three DropDownList controls.  Those dropdowns are populated via databinding.  When the user has made all of their selections using the dropdowns, they hit the "go" button and a PostBack happens.  I spent the last full day (like since this time yesterday) trying to figure out why no matter what I tried I couldn't retrieve the value the user selected after the form posted back.  Gone.  Every time I got the post back, all of the dropdowns had a selected index of -1. *@!~#!!

I was pretty sure I had the overall pattern down right, since just the day before I made this work in another page that used textboxes instead of dropdownlists.  See the dent in my forehead from the keyboard?  Sure, initially I had some other problems like repopulating the data in the drop downs every time, etc., but I got past that.

Google proved comparatively fruitless.  Lots of people couldn't figure out how to databind the list of items to the drop down, but nobody was talking about posting back results.  Then the lightning struck, and I found a reference to someone having problems with events not firing from drop down lists if the repeater's ViewStateEnabled == true.  Granted, I'm not hooking up the events, but you never know. 

That was it. 

<asp:Repeater ID="medicationRepeater" runat="server" EnableViewState="false">

Now the PostBack works just like I would expect.  Why this should be is completely beyond my ken.


Wednesday, 17 October 2007 13:25:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
Refactoring is cool.  Really cool.  It also means something very specific.  There's a book and everything.  If you haven't read it, you should. (Hint: it's by Martin Fowler)

Refactoring means reducing code smells by making small, incremental, orderly, and very specific changes to existing code.  Making such changes (one at a time) allows one to improve the structure of existing code without changing its functionality.  Always a good thing.  Refactorings have very specific names like "extract method" or "encapsulate field" or "push member up".  Again, there's a book.  You can look them up.

Where is this going?, you might ask.  I already know this, you say.  Cool.  Then let's move on to what refactoring (Refactoring?) isn't.

Refactoring doesn't mean changing code because you think your way would have been better.  It doesn't mean rewriting things from scratch because you have a different opinion.  It doesn't mean starting over again and again in pursuit of the perfect solution to every coding problem. 

Those other things have names (which I won't mention here for the sake of any children reading this), but "Refactoring" isn't among them.  There's a tie-in here with another term we all love, "Agile".  Refactoring fits into an "agile" process after you've made everything work the way it should (i.e. passes the tests) to make it easier to work with the code on the next story/backlog item/iteration.  The point of agile development (IMHO) is to write as little code as possible to meet your requirements.  It doesn't mean redoing things until you end up with the least possible amount of code, measured in lines.  Again, that has a different name. 

Sometimes code needs to be fixed.  More often than we'd like, in fact.  But if you are (re)writing code in pursuit of the most bestest, don't call it Refactoring.  It confuses the n00bs. 

Wednesday, 17 October 2007 12:22:00 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 20 September 2007
This came up at work this week, so I thought I'd disgorge my views on the issue.  The question at hand is whether or not every method needs to validate its input parameters. 
For example, if I have a method

    public class Example
    {
        public int DoThingy(SomeObject obj)
        {
            return obj.GetImportantValue();
        }
    }

This method depends on the value of obj not being null.  If a caller passes a null, the framework will, on my behalf, throw a NullReferenceException.  Is that OK?  Sometimes it might be, and we'll come back to that.
The alternative is for me to validate my input parameters thusly:

        public int DoThingy(SomeObject obj)
        {
            if (obj == null)
                throw new ArgumentNullException("obj");

            return obj.GetImportantValue();
        }

The discussion we got into was whether this was really necessary or not, given that the Example object only got used in one place, and it would require extra code to validate the parameters.  The assertion was made by more than one person that the NullReferenceException was just fine, and that there was no need to do any validation. 
My $.02 on the matter was (and is) that if you don't do parameter validation, you are not upholding and/or communicating your contract.  Part of the contract implied by DoThingy() is that you have to pass a non-null SomeObject reference.  Therefore, in order to properly communicate your contract, you should throw an ArgumentNullException, which informs the caller exactly how they have violated their half of the contract.  Yes, it's extra code.  No, it may never be necessary.  A whole different subject is whether or not "fewest possible lines of code" is a design goal.  I'm going to avoid going there just now.

That said, there are obviously mitigating circumstances that apply.  If the object in question is really only called in one place, upholding the contract is less of a concern, since you should be able to write tests that cover the right cases to make sure the caller never passes null. Although that brings up a separate issue.  If the method only has one caller, why is it in a separate object at all?  Again, we'll table that one.  In addition, since in this particular case the DoThingy() method only takes one parameter, we don't have to wonder too hard when we get a NullReferenceException where the culprit is. 

The other issue besides contract is debugging.  If you don't check your input, and just let the method fail, then the onus is on the caller to figure out what the problem is.  Should they have to work it out?  If the method took 10 parameters, all reference types, and I let the runtime throw NullReferenceException, how long will it take the consumer to find the problem?  On the other hand, if I validate the parameters and throw ArgumentNullException, the caller is informed which parameter is null, since the onus was on me to find the problem. 

One last reason...  If I validate my input parameters, it should leave me with fewer failure cases to test, since the error path is potentially much shorter than if the runtime handles it.  Maybe not much of a savings, but it's there.


Thursday, 20 September 2007 16:10:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [11]  | 
I was having some issues with using a straight keyboard and regular old mouse, since I've been using only trackballs and trackpads for years now, so I got hooked up with some new gear.  It's only been a few days, but so far I'm pretty impressed.  I got the Microsoft Natural Ergonomic Keyboard 4000, and the Evoluent VerticalMouse 3.  The keyboard is very comfortable (as I type this) and has an optional support that props up the front of the keyboard, which keeps your hands flatter.  I wasn't sure about the lift at first, but now I'm sold.  It's quite comfy, although it did take me a day or two to get used to it.  The buttons are laid out very nicely, with a real inverted T arrow key setup.  My only complaint so far is that the spacebar clacks.  But since I usually work with headphones on, it will only really annoy my neighbors. :-)
The mouse is also quite comfortable, although I think I still need to get my chair adjusted a bit to deal with the desk height and the new mouse.  The VerticalMouse is just that.  It's a nice optical mouse, but oriented vertically instead of horizontally, so that you hold your hand perpendicular to the desk, rather than on top of it.  It seems like a much more natural hand position.  The buttons have a nice feel, as does the scroll wheel.  Because of the layout, the third button has to be pressed with the pinky (at least for me) which seems a bit awkward, but I'm sure I'll get used to it. 
Oh, yeah.  And I got a new car.  My Durango finally gave up the ghost after 9 years and over 230K miles.  RIP.  I got a 2007 Subaru Outback 2.5i, and so far I'm loving it.  Very comfy, handles nice.  About twice the mileage of the Durango.  We'll see how it does pulling a trailer.  The hitch goes on today, with luck.
Fall seems to be the season of change for me this year.  And it seems to have come early.  It's started getting much chillier at night just over the last week or so.  OK, now I'm rambling. 

Home | Work
Thursday, 20 September 2007 10:15:43 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 07 September 2007
We're looking for well qualified and excited individuals for positions in engineering, QA, and project management.  If you like working on a wide variety of projects, come and join our merry band!  Check out the SoftSource site for more details.

Friday, 07 September 2007 10:39:26 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 06 September 2007
The project I'm working on right now uses SourceGear Vault for SCC, and I must admit, I'm not digging it.  It's like using a slightly more performant version of VSS.  Except that you can set it in "CVS mode" which means it doesn't do exclusive locks.  It seems very difficult to deal with adding new files (if you aren't integrating with VS.NET, which we aren't) and updates seem unreasonably slow, although that may just be me.  It is a pretty big project.  Doing a "Show history" on the whole tree takes over a minute to return any results at all.  It does do change sets, at least, so that's a big plus over VSS. 
In short, I can see the market here (for people who are comfortable with VSS, but don't want Access databases for their backend) but I'd rather be using Subversion.  I guess I'm just spoiled.

Thursday, 06 September 2007 15:26:07 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, 31 August 2007

For the last week (at my new gig) I've been using ReSharper 3.0, and I must say I'm pretty darned impressed.  I hadn't looked at ReSharper since the very early versions, and had sort of written it off in favor of CodeRush/Refactor, but ReSharper 3 has some pretty compelling features...

  • ReSharper parses your code outside of the compiler, and therefore knows stuff the compiler couldn't, like when you reference something in another namespace without the proper using directive in place, and ReSharper figures out which namespace you mean, and asks if you'd like to add the reference.  Slick.
  • Inline suggestions.  ReSharper parses your code and offers up both errors (in an easier way than VS.NET, IMHO) and suggestions, such as "you really could make this field readonly".  These errors and suggestions are also displayed in a sidebar next to the scroll bar as red, green or yellow lines, so you can scroll to the problems quickly.  Nice.
  • Integrated NUnit support.  I love the way ReSharper handles NUnit integration.  You get little buttons in the right hand sidebar (where your breakpoints show up) that allow you to run or debug individual tests or the whole fixture.  The results get put in a custom results pane, much neater than the text based report you get from TestDriven.NET.  If you happen to have JetBrains profiler, that's integrated as well. 
  • Clever autocompletion.  I know I probably shouldn't be so impressed with this one, but just this morning I wrote a line that returned an IEnumerable, then on the next line typed foreach, and it completed the whole block, figured out the right types, etc.  Very clean.
  • Nice refactoring support.  I haven't had too much time to play with this yet, but it looks like it supports a goodly number of refactorings.  Cool.

I'm looking forward to playing with it more in the days to come.

Friday, 31 August 2007 09:49:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, 22 August 2007

Things have been a bit quiet here lately, largely due to a lot of change and a lot of heads down work.  The end result is that today marks my final day at Corillian Corp. (now a part of CheckFree).  It's been a great almost 4 years of working to improve the lives of people building (and hopefully using) online banking sites, but it's time for a change.  I'm going back to consulting after a 6 year hiatus in the product world.  Come Monday morning, I start work at SoftSource Consulting where I'm hoping to continue to build cool stuff, one customer at a time. 

In the mean time, I'm off for the woods.  Hopefully I'll have some good pictures to post on Monday.

Wednesday, 22 August 2007 14:44:25 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Tuesday, 14 August 2007

You all know this.  I (in theory) know this.  However, just to remind everyone (and apparently myself)...

When debugging anything related to WCF, there are a set of problems that are IMPOSSIBLE to discover without looking at the service trace logs.  In particular (this is the one that bit me) for very good reasons, nothing security related gets as far as application code.  You pretty much get "something bad happened". 

Anyway, after beating my head against the keyboard for a good solid day, I remembered this little maxim, and checked the logs.  There was the problem, plain as day.  Doh!  Turned out that I'd changed the namespace of a particular class, and failed to propagate that change to my .svc file.  There is absolutely no indication that this was the problem at the app level, but on looking at the logs, it's right there. 
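For the record, turning on the service trace log is just a config change, something along these lines (the listener name and the output path are whatever you choose):

```xml
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel" switchValue="Warning, ActivityTracing">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\wcfTrace.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```

The resulting .svclog file opens in the Service Trace Viewer tool, which is where problems like the namespace mismatch show up plain as day.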

Hopefully I'll remember that one next time.

Indigo | Work
Tuesday, 14 August 2007 14:20:17 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 18 July 2007

I decided a while back that I had too much extraneous junk running on my development machine and it was time to scale back.  First to go were Google Desktop, and Microsoft Desktop Search, both of which had been competing to see who could eat the most cycles and thrash my drives the worst.  Next on the list was TortoiseSVN.  Tortoise's cache seemed to be eating an awful lot of CPU time.  "How ever will I manage to deal with SVN without it?", I found myself wondering.  The reply came back "have you become completely feeble in your old age? Once you lived by PVCS, before there was such a thing as a decadent SCC GUI!". 

So I figured I'd give it a go.  The results have been pretty good so far, although it has taken me a while to get used to the svn.exe command set.  On the up side, it seems much faster than Tortoise, with updates in particular taking much less time.  The thing I find myself missing the most is the commit dialog, where you can uncheck stuff you don't want to commit right now.  Living without that makes for some extra typing, but overall it's still worth it.
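For anyone tempted by the same experiment, the day-to-day Tortoise operations map onto a handful of commands (the file names and message below are made up).  Listing specific files on `svn commit` is the closest substitute for unchecking things in the commit dialog:

```shell
svn status                                      # what's changed in the working copy
svn update                                      # pull down changes from the repository
svn diff Foo.cs                                 # review a change before committing
svn commit -m "fix null check" Foo.cs Bar.cs    # commit only the listed files
svn log -l 5                                    # last five commits ("show history")
```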

The one thing that's a major hassle is browsing the repository.  If I have to figure out where someone shoved something in which repository, I've been falling back on another machine with Tortoise installed.  Lame?  Possibly.  I couldn't find anything approximating a non-Tortoise GUI SVN repository browser.  Maybe there is one, and I just haven't heard. 

So overall, I'm not quite ready to go back to edlin for text files, but the svn command line is treating me pretty well.  We'll see if I ever come to some watershed event where I decide to go back to the GUI.  That day may well come, but it's not on the horizon just yet.

Wednesday, 18 July 2007 10:57:58 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [10]  | 
# Monday, 09 July 2007

Lately we've been experiencing one of the downsides of all the tooling we've put into our continuous integration practice.  It's fragile.  Trying to get all your tools lined up with versions that all get along is tricky, and hard to keep balanced.  Particularly when two of them (NCover and TypeMock) both try to register themselves as profilers, and have to be chained to keep the tests running while gathering coverage information.

I think we've just about got everybody moved up to the right versions, but there's been lots of "it works on my machine" the last week or so. 

The price you pay, I guess.

Monday, 09 July 2007 14:15:03 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 26 June 2007

A while back I posted about how we're implementing denials using AzMan.  This week, a helpful reader pointed out that with a little extra math, I could save a call into AzMan (with all the attendant COM interop, etc.).

So, instead of my previous


object[] grantOps = new object[] { (object)1, (object)2, (object)3, (object)4 };
object[] denyOps = new object[] { (object)100001, (object)100002, (object)100003, (object)100004 };

object[] grantResults = (object[])client.AccessCheck("check for grant", scopes, grantOps, null, null, null, null, null);
object[] denyResults = (object[])client.AccessCheck("check for deny", scopes, denyOps, null, null, null, null, null);

for (int i = 0; i < grantResults.Length; i++)
{
    if (((int)grantResults[i] == 0) && ((int)denyResults[i] != 0))
    {
        // operation i is granted, and not explicitly denied
    }
}


I can do it in one call like so

object[] allOps = new object[8];

grantOps.CopyTo(allOps, 0);

denyOps.CopyTo(allOps, 4);

object[] allResults = (object[])client.AccessCheck("check for grant and deny", scopes, allOps, null, null, null, null, null);

for (int j = 0; j < grantOps.Length; j++)
{
    if (((int)allResults[j] == 0) && ((int)allResults[j + grantOps.Length] != 0))
    {
        // operation j is granted, and not explicitly denied
    }
}

It does mean you have to be a little careful about getting the indexes straight, but that's probably worth the savings you'll get by not making the extra COM call. 

Every little bit helps...

AzMan | Work
Tuesday, 26 June 2007 09:12:39 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 
# Friday, 15 June 2007

I think I've come across a better solution (or at least one that I'm happier with) to this problem.  Turns out that the KnownType attribute can take a method name instead of a type.  The type it's applied to then has to implement a static method with that name that returns an array of Type.  That method can then decide which types to return.  So I could do something like this



[DataContract]
[KnownType("CheckForConfiguredOverrides")]
public class UserBase
{
    [DataMember]
    public string FirstName;

    [DataMember]
    public string LastName;

    public static Type[] CheckForConfiguredOverrides()
    {
        List<Type> result = new List<Type>();

        string thisTypeName = typeof(UserBase).FullName;
        string overrideType = System.Configuration.ConfigurationManager.AppSettings.Get(thisTypeName);
        if (!string.IsNullOrEmpty(overrideType))
        {
            Type type = Type.GetType(overrideType);
            if (type != null) result.Add(type);
        }

        return result.ToArray();
    }
}

Then in the config file (this ultimately will use our distributed configuration service) I add


  <add key="UserBaseType.UserBase" value="UserBaseType.BankUser"/>


to return the right type.  Everybody is happy!  As a side benefit, this allows me to limit "overridability" to only those types I apply the attribute to.  That lets me (as the designer) decide which types are OK to subclass and which aren't (obviously I could use sealed here as well, but that's a separate issue) using this KnownType attribute.  I think this will work out much better than marking up base classes with specific types.

Indigo | Work
Friday, 15 June 2007 12:43:46 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

This may have been understood by the set of all people not me, but I didn't realize that the KnownType attribute doesn't seem to work on a MessageContract.  The KnownType attribute is supposed to tell the data contract serializer that there might be some extra types it needs to know about when serializing your objects.  It's equivalent to the XmlInclude attribute used by the XmlSerializer.  If I have some code that looks like this...


[MessageContract]
public class UserRequest
{
    [MessageBodyMember]
    public string UserName;

    [MessageBodyMember]
    public bool SendExtended = false;
}

[MessageContract]
public class UserResponse
{
    [MessageBodyMember]
    public string Message;

    [MessageBodyMember]
    public UserBase User;
}

[DataContract]
public class UserBase
{
    [DataMember]
    public string FirstName;

    [DataMember]
    public string LastName;
}

[DataContract]
public class BankUser : UserBase
{
    public BankUser() : base() { }

    [DataMember]
    public int TupperwarePoints;
}

In my server side implementation, if the "SendExtended" flag is true in the UserRequest, the object returned in the "User" property of the UserResponse is a BankUser, not a UserBase object.  If the code runs like this, that fails horribly, since the data contract serializer is set to serialize a UserBase, not a BankUser. 

I would have thought that I could mark up the UserResponse, like so



[MessageContract]
[KnownType(typeof(BankUser))]
public class UserResponse
{
    [MessageBodyMember]
    public string Message;

    [MessageBodyMember]
    public UserBase User;
}

Turns out this totally doesn't work.  I get the same serializer error, despite the KnownType attribute.  If I make UserResponse a DataContract instead of a MessageContract, it works fine, and the appropriate BankUser object is received on the client side. 

However, in my actual code, I need the MessageContract.  The alternative is marking up the UserBase class itself, like this:



[DataContract]
[KnownType(typeof(BankUser))]
public class UserBase
{
    [DataMember]
    public string FirstName;

    [DataMember]
    public string LastName;
}

This works just fine, but from a design standpoint it's teh suck.  This means that I have to tell my base class about all its derived types, thus defeating the entire point of inheritance.  That's why I never used the XmlInclude attribute.  I might have to rethink how the messages are composed and see if there's a better way to encapsulate things.

On a slightly unrelated note, it seems that my ServiceHost takes much longer to start up when the Request and Response are defined as MessageContract than when they are straight DataContracts.  I haven't tested this extensively, so it's anecdotal, but interesting nonetheless. 

Indigo | Work
Friday, 15 June 2007 10:37:14 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 

Check out the latest Hanselminutes to hear Scott and me chatting with Carl about the latest language features in Visual Studio 2008 ("Orcas"), static vs. dynamic languages, and whatever else came up.

There are some very cool features coming up in C# 3.0, which while they are there mainly to enable LINQ, provide a number of opportunities for reducing the amount of extra code we write, and enabling a closer link between code written and developer intent.  Pretty exciting stuff.
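As a taste of the kind of ceremony those features remove, here's a tiny sketch (my own example, not from the show) using implicitly typed locals, collection initializers, lambdas, and query syntax together:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // var + collection initializer: no redundant type names.
        var numbers = new List<int> { 5, 1, 4, 2, 3 };

        // A lambda replaces a full delegate declaration.
        Func<int, bool> isEven = n => n % 2 == 0;

        // LINQ query syntax over in-memory objects.
        var evens = from n in numbers
                    where isEven(n)
                    orderby n
                    select n;

        // prints "2, 4"
        Console.WriteLine(string.Join(", ", evens.Select(n => n.ToString()).ToArray()));
    }
}
```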

Radio | Work
Friday, 15 June 2007 09:34:34 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, 08 June 2007

I finished up the day yesterday with Kathy Kam's talk on Acropolis, and watched the irrepressible Ed Pinto and Kenny Wolf describe WCF internals (which involves juggling, apparently).  The Acropolis talk was too short, and could have included a lot more detail.  It felt like brochure-ware.  Ed and Kenny's talk packed more detail than one human being could absorb in 75 minutes about the channel stack, and how behaviors can be used to influence it, as well as other gooey details about the inner machinations of WCF.  Very useful.  I think it'll take me a few hours with the slides to let it all sink in fully. 

Last night's party was a blast as usual.  I now know that the only thing more fun than riding rollercoasters with drunk nerds is riding rollercoasters with drunk nerds in the pouring rain.  :-)  They shut down the rides for quite some time due to lightning, but I did squeeze in a few hair raising trips on the 'coasters, as well as plenty of general carousing.  A good time was had by all. 

Now it's time to head home and sleep for a few days.  There's nothing like sitting through 4-5 technical presentations and walking 3-4 miles a day for a week to wear a body out.

TechEd | Work
Friday, 08 June 2007 07:01:32 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, 07 June 2007

I went to a more detailed (possibly too detailed) presentation on the ADO.NET entity model this morning.  This is pretty exciting stuff, although obviously people will have some reservations.  It's easy to dismiss EDM as just another in a long line of O/R mapping attempts, and while that's a valid criticism, I think there are some interesting and instructive differences.  Because EDM is built on top of support all the way down at the ADO.NET provider level, it's more than just an O/R map.  You can define entities (this is done using XML files for now) that span multiple tables, or otherwise hide complex structures in the underlying data store, as well as defining relationships between those entities that express a higher level relationship than a relational foreign key.  These relationships can be navigated at a much higher level, without having to deal with complex joins, etc.  Because what the Entity Model actually passes down to the data provider is an expression tree, the data provider is free to optimize it into its native dialect as best it can.  So when you are taking advantage of complex relationships in your entity model, you aren't just fetching a bunch of objects and then making them relate to one another.  That relationship is handled at the provider level in ways that can be optimized, so that you aren't bringing back any more data than necessary. 

Plus, because of the deep support at the provider level, you can specify a query using the simplified entity syntax, but still return a forward-only result set for best performance, skipping the object translation step altogether. 
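To make that a little more concrete, here's a rough sketch of what querying against the conceptual model looks like.  The names here (`AdventureWorksEntities`, `SalesOrders`, etc.) are purely illustrative, and the API surface was still moving between CTPs at the time, so treat this as a shape, not gospel:

```csharp
// Hypothetical context and entity names; the early bits' API changed between CTPs.
using (AdventureWorksEntities ctx = new AdventureWorksEntities())
{
    // The SalesOrder entity may span several tables in the store, but the
    // query is written against the conceptual model.  What gets handed to
    // the provider is an expression tree, which it translates into its own
    // SQL dialect -- including the Customer relationship navigation.
    var bigOrders = from o in ctx.SalesOrders
                    where o.Total > 1000m
                    select new { o.Customer.Name, o.Total };

    foreach (var order in bigOrders)
        Console.WriteLine("{0}: {1}", order.Name, order.Total);
}
```

The point is that `o.Customer.Name` never turns into "fetch all orders, then fetch each customer" on the client; the join happens in the store, where it can be optimized.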

Granted, they still have quite a bit of work ahead of them to improve the tooling, and make it as transparent as possible, but I think there's a lot of value to be had here. 

I'm looking forward to taking some time with the bits and trying out some different scenarios, so hopefully I'll have some good examples later.

TechEd | Work
Thursday, 07 June 2007 11:22:39 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I saw some great stuff yesterday afternoon on how to host WPF controls in WinForms and vice versa.  I've been wondering for a while whether it would be possible to host WPF controls in MMC 3.0 for some management apps we need to write, and now I know it's at least possible, and possible with Whidbey, although the designer experience for WPF in WinForms will be better in Orcas.
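For reference, the interop hinges on two classes: ElementHost (a WinForms control that hosts WPF content) and WindowsFormsHost (the reverse).  A minimal sketch of the WPF-in-WinForms direction:

```csharp
using System.Windows.Forms;
using System.Windows.Forms.Integration; // WindowsFormsIntegration.dll

public class HostForm : Form
{
    public HostForm()
    {
        // ElementHost bridges the two frameworks: it's a WinForms control
        // whose Child is any WPF UIElement.
        ElementHost host = new ElementHost();
        host.Dock = DockStyle.Fill;
        host.Child = new System.Windows.Controls.TextBox(); // any WPF element
        Controls.Add(host);
    }
}
```

Since MMC 3.0 snap-ins surface WinForms controls, this is presumably the hook that makes WPF-in-MMC possible.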

Jon Flanders did a very good presentation on real-world tips and tricks for hosting WF in WCF services, without waiting for Silver (the WF/WCF integration in Orcas).  Since our new app does exactly this, I learned some things I'll have to try out, specifically using the ManualWorkflowSchedulerService.  I hadn't considered the implications of ASP.NET and WF contending for the same thread pool.  The manual scheduler might also simplify some things around identity management and impersonation that I've been puzzling over.  Jon is a very compelling speaker, and I got a lot out of his presentation.
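As a reminder to myself, hooking up the manual scheduler looks roughly like this (MyWorkflow is a hypothetical workflow type; error handling omitted):

```csharp
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

// With ManualWorkflowSchedulerService registered, the workflow executes on
// the caller's thread (the one servicing the WCF/ASP.NET request) instead
// of competing with ASP.NET for thread-pool threads.
WorkflowRuntime runtime = new WorkflowRuntime();
ManualWorkflowSchedulerService scheduler = new ManualWorkflowSchedulerService();
runtime.AddService(scheduler);
runtime.StartRuntime();

WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
instance.Start();

// Nothing actually runs until we pump the workflow synchronously:
scheduler.RunWorkflow(instance.InstanceId);
```

That synchronous pump is also what makes the identity/impersonation story simpler: whatever identity the calling thread carries is the one the workflow runs under.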

To wrap up the day yesterday I got a preview of what's coming as part of the Acropolis project.  I was totally blown away by how much they've already accomplished, and how thoughtfully they have proceeded.  Acropolis is basically CAB for WPF, but they really took it in a positive direction.  While CAB is (IMHO) too technology focused, Acropolis is very much designed for building real applications quickly, and providing (again IMHO) exactly the right level of support.  I'm a big Smart Client fan, and I think they're really expanding the horizons for building smart client apps quickly and with a minimum of overhead.  If you are interested in building any kind of client app with WPF, be sure to follow their work.

This morning started off with a very good presentation on the new language features in C# 3.0 by Luke Hoban.  He did some very compelling demos to show off the new features, including the LINQ over Objects syntax.  Then, just to show there was no magic involved, he removed his reference to LINQ, and reimplemented the where and select operators using the new language features.  It really showed off the new features, and put them in a context that was very accessible.  Well done!
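Luke's "no magic" demo is worth sketching, because it's such a good illustration: Where and Select are nothing more than extension methods over IEnumerable&lt;T&gt;, and you can reimplement them yourself with C# 2.0 iterators plus the new extension-method syntax:

```csharp
using System;
using System.Collections.Generic;

// A hand-rolled stand-in for the LINQ to Objects standard query operators.
// With these in scope (and no reference to System.Core's versions), query
// expressions like "from n in data where ... select ..." still compile,
// because the compiler just translates them into Where/Select calls.
public static class MiniLinq
{
    public static IEnumerable<T> Where<T>(
        this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)
            if (predicate(item))
                yield return item; // lazy, streaming evaluation
    }

    public static IEnumerable<TResult> Select<TSource, TResult>(
        this IEnumerable<TSource> source, Func<TSource, TResult> selector)
    {
        foreach (TSource item in source)
            yield return selector(item);
    }
}
```

The query `from n in numbers where n % 2 == 0 select n * 10` is just sugar for `numbers.Where(n => n % 2 == 0).Select(n => n * 10)`, which resolves to the methods above.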

The rest of the day holds more on Acropolis, and more details of the LINQ over Entities project, which is pretty groovy stuff. 

TechEd | Work
Thursday, 07 June 2007 06:36:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 06 June 2007

Chris Anderson's session on using XAML as a modelling language for declaratively expressing programmer intent is easily the best thing I've seen so far.  I've been struggling to grok the beauty that is XAML for a while now, and while I'm not totally sure I've grokked it in its fullness, I'm a good deal closer now.  I think I'll have to play around with it for a while before it's fully cemented.  One thing he mentioned that I'm very interested in pursuing is using LINQ expression trees in non-data oriented ways.  There could be some huge opportunities for building declarative models using this technique.  Much to think on. 
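The expression-tree idea deserves a small example.  The same lambda syntax can produce either executable code or a data structure describing that code, depending only on the declared type, and the data-structure form is what opens up the non-data-oriented uses (rules engines, declarative models):

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Assigned to Expression<T>, the lambda compiles into an object
        // model describing the code, not into IL.
        Expression<Func<int, bool>> rule = x => x > 5;

        // You can walk it like any other object graph...
        BinaryExpression body = (BinaryExpression)rule.Body;
        Console.WriteLine(body.NodeType); // GreaterThan

        // ...or compile it to a delegate and execute it on demand.
        Func<int, bool> compiled = rule.Compile();
        Console.WriteLine(compiled(10)); // True
    }
}
```

A declarative model could store, transform, or translate trees like that one into something else entirely, which is exactly the kind of opportunity Chris was hinting at.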

I think this was actually the first time I'd seen Chris speak, and I was very impressed.  Not only by his style, but by his fabulous wardrobe.  Chris eschewed the MS-blue shirt for a fantastic brown and orange Aloha shirt with a huge (possibly embroidered) green and yellow palm tree across the front.  Excellent choice.

As an only vaguely related side note: how far back does the nerd fondness for Aloha shirts go?  Maybe it's just because we're in Florida, but I've seen some truly stunning examples of the genre.  Being a big fan (possibly even proponent) of the Aloha shirt myself, it's been fun checking out the good ones.  Better than birdwatching. 

To bring this back to something slightly more relevant, I think there are several developments underway, including the Software Factories initiative discussed a few Architecture Journals back, the rise of the DSL Toolkit, etc., that make model-driven development much more feasible, and it will be interesting to see what part XAML has to play there.  Very exciting times.

TechEd | Work
Wednesday, 06 June 2007 08:44:35 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 05 June 2007

More goodness today.  One of my favorites today was a presentation on the new features for building Smart Client applications with "Orcas".  Smart Clients hold a special place in my heart, so I'm glad to see them investing in better support.  Very cool stuff, including the new "SQL Compact Edition" which can run in-proc to your client app, and better still a new "sync provider" that allows you to write your smart client against the local "Compact Edition" database, then do a two way sync with a full SQL database, which provides a super-easy way to deal with offline use of your client app.  Also in Orcas, you'll be able to host WPF in WinForms and vice versa, which makes it easy to both extend and leverage existing applications.

Other highlights from the day include SQL 2008 support for "non-relational" data like files, spatial data, full text documents, etc.  They've really gone out of their way to make it easy to integrate all your data into SQL Server.

One big thing I'm taking away this year is Microsoft's continued commitment to responding to customer input.  In several arenas (SQL, Orcas, ADFS, etc.) over and over I keep hearing "you said it was too hard, or not feature rich enough, so we fixed it".  It's really encouraging to see the extent to which real world customer input is having a direct impact on feature direction. 

Both of my BOF sessions went well.  The lunchtime session on NDepend was well attended, and sparked some interesting discussions.  This evening's session on ADAM and AzMan provoked some interesting responses.  One that surprised me was an assertion that MS must not intend to support AzMan because it doesn't get a managed interface in Longhorn Server.  I'm pretty sure I disagree with that statement, since there has been quite a bit of investment in the Longhorn (sorry, Server 2008) version of AzMan, but I think it's illustrative of MS's lack of advertisement around AzMan.  It's a great feature, but it goes largely under-represented, so nobody takes it seriously.

Anyhow, busy day, once more into the breach tomorrow...

TechEd | Work
Tuesday, 05 June 2007 19:18:14 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 

I've only got a few minutes between sessions, so I'm going to hit the highlights, more details to come later when I can focus a bit. ;)

The keynote was way better than last year's.  The focus this year seems to be much less on developer-related technology, and more on IT, and more specifically on what IT pros can do right now today, rather than any vision for the future.  Bob Muglia set the tone in the keynote by saying that they didn't want to give people any more MSBSVS, or Microsoft BullS**t Vision-Speak.  A very positive direction, and there's plenty to talk about.  I just came away from a session on the next version of the Identity Lifecycle Manager, and I think they've really nailed it.  Identity management in the enterprise is hard, and they've gone a long way towards fixing it.  If I were in IT, I'd be even more excited.

Yesterday I saw talks on the next version of ADFS (Active Directory Federation Services) and again they are headed in the right direction.  ADFS "2" will basically allow you to expose an STS on top of your AD store, with CardSpace support, etc.  Very cool. 

I also saw some stuff on the LINQ "Entity Data Model", which, in summary, is basically Rails-style active record support at the platform level, with some very advanced querying capabilities via LINQ.  This will be very useful in the future for getting rid of DAL code in the middle tier. 

Lastly, I saw an overview of SQL 2008 (Katmai) features.  Again, some very cool stuff, and new features that look like they've been developed in response to user feedback.  Always nice to see.  For all those groovy new mashups, there's a new LOCATION datatype.  Spatial data as a first-class citizen.  There's also some brilliant management-by-policy features that will make DBAs very happy.

Gotta run.  NDepend BOF at lunch, ADAM/AzMan tonight, so if you're reading this in a timely fashion, stop on by!

TechEd | Work
Tuesday, 05 June 2007 07:10:30 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 07 May 2007

We've been estimating a large new project for the last week or so using the "planning poker" technique.  So far I think it's working out very well, although the biggest learning so far has been that you really need a solid task list before you enter this process.  On backlog items where we already had task lists defined, we didn't have any trouble coming to consensus about how long it would take.  This morning we entered into a backlog item without the task list already in place, and it led to an hour-long discussion that delved into design, etc. rather than getting any estimating done.  While I understand that part of the planning poker process is group learning and making sure everyone is on the same page, we would have made a lot more progress with even a tentative task list rather than none at all.

Now we know.

Monday, 07 May 2007 15:35:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Tuesday, 01 May 2007

I was trying to come up with a set of guiding principles as we've been using them on my current project, with an eye toward educating new developers being assigned to the project, so I wrote up a PPT deck for the purpose.  Scott thought it would make a good, general purpose manifesto for any project, so here goes...

These are my opinions based on experience and preference, YMMV.  Much of this is common sense, some of it is arbitrary, but it's all stuff that I think is true.

  • Our "guiding principles"
    • Test Driven Development
      • As much as possible, all of our code should be developed according to the TDD methodology.  If we stick with it, we will always have confidence in the current build (i.e. that it really works).
    • Continuous Integration
      • We want our build to produce something that looks as much as possible like a production deployment.  To that end, our build deploys a real live system and runs "integration" tests against it. 
  • Unit tests
    • Two kinds of tests
      • Unit tests have no dependencies, or only depend on other assemblies owned by us, which can be mocked.  This means our unit tests should run on anyone's box without having to set up things like SQL, ADAM, etc.
      • Integration tests depend on code not owned by us, like SQL, ADAM, IIS, etc. 
    • Automation is equally possible for both sets of tests
      • NUnit/MbUnit for all of the tests
      • MbUnit allows for test "categories" like "unit" and "integration" and can run one at a time
      • Unit tests run on every build, integration tests run at night since they take much longer to run
    • All UI development should follow the MVP pattern for ease of testing
      • This includes Win/WebForms, MMC snapins, etc.
  • Test coverage
    • 90%+ is the goal
      • This is largely achievable using proper mocking
    • NCover runs as part of the build, and reports are generated
  • Buy, not build
    • Take full advantage of the platform, even if it only solves the 80% case
      • every line of code we don't have to write and maintain saves us time and money
    • Don't write a single line of code you don't have to
    • Take full advantage of .NET 3.0, SQL 2005, Windows 2003 Server, plan for- and test on Longhorn
    • Don't invent new solutions to solved problems
      • Even if you know how to write a better hashing algorithm than the ones in the BCL, it doesn't add value to your product, so it's not worth your time
  • Limit compile time dependencies on code you don't own
    • Everything not owned by us should be behind an interface and a factory method
      • Easier to mock
      • Easier to replace
    • ILoggingService could use log4net today, EnterpriseLibrary tomorrow, but clients are insulated from the change at compile time
  • Define your data contracts in C# (think "active record")
    • All persistent storage should be abstracted using logical interfaces
      • IUserProfileService takes a user profile data object, rather than methods that take parameter lists
      • IUserProfileService knows how to store and retrieve User Profile objects in ways important to your application
      • How a User Profile object gets persisted across databases/tables/XML/whatever is only interesting to the DBA, not consumers of the service
      • This means database implementations can change, or be optimized by DBAs without affecting the app
      • The application doesn't care about how database file groups or clustered indexes work, so define the contract, and leave that bit to a DBA
  • Fewer assemblies is better
    • There should be a VERY good reason for creating a new assembly
    • The assembly is the smallest deployable unit, so it's only worth creating a new assembly if it means NOT shipping something else
    • Namespace != assembly name.  Roll up many namespaces into one physical assembly if they all must be deployed together.
  • Only the public interface should be public
    • Only make classes and interfaces public if absolutely necessary
    • Test code should take advantage of InternalsVisibleTo attributes
    • VS 2005 defaults to creating internal, not public classes for a reason
    • If it's public, you have to support it forever
  • Windows authentication (good)
    • Just say no to connection strings
    • Windows authentication should be used to talk to SQL, ADAM, the file system, etc.
    • You can take advantage of impersonation without impersonating end users
      • Rather than impersonating end users all the way back to the DB, which is expensive and hard to manage, pick 3-4 windows accounts like "low privilege", "authenticated user", and "admin" and assign physical database access to tables/views/stored procs for those windows accounts
      • When users are authenticated, impersonate one of those accounts (low priv for anonymous users, authenticated user for same, and admin for customer service reps, for example)
  • Tracing
    • Think long and hard about trace levels
      • Critical is for problems that mean you can't continue (app domain is going away)
      • Error means anything that broke a contract (more on that later)
      • Warning is for problems that don't break the contract
      • Info is for a notable event related to the application
      • Verbose is for "got here" debugging, not application issues
    • Use formatted resource strings everywhere for localization
    • For critical, error, or warning, your audience is not a developer
      • Make error messages actionable (they should tell the recipient what to do or check for to solve the problem)
  • Error handling
    • Method names are verbs
      • the verb is a contract, e.g. Transfer() should cause a transfer to occur
    • If anything breaks the contract, throw an exception
      • if Transfer() can't transfer, for whatever reason, throw
      • Exceptions aren't just for "exceptional" conditions.  If they were really exceptional, we wouldn't be able to plan for them.
    • Never catch Exception
      • If it's not a problem you know about, delegate up the call stack, because you don't know what to do about it anyway
    • Wrap implementation specific exceptions in more meaningful Exceptions
      • e.g. if the config subsystem can't find the XML file it's looking for, it should throw ConfigurationInitializationException, not FileNotFoundException, because the caller doesn't care about the implementation.  If you swap out XML files for the database, the caller won't have to start catching DatabaseConnectionException instead of FileNotFoundException because of the wrapper.
      • Anytime you catch anything, log it
        • give details about the error, and make them actionable if appropriate
      • Rethrow if appropriate, or wrap and use InnerException, so you don't lose the call stack
  • The definition of "done" (or, how do I know when a task is ready for QA?)
    • Any significant design decisions have been discussed and approved by the team
    • For each MyClass, there is a corresponding MyClassFixture in the corresponding test assembly
    • MyClassFixture exercises all of the functionality of MyClass (and nothing else)
    • Code coverage of MyClass is >=90%, excluding only lines you are confident are unreasonable to test
      • yes this is hard, but it's SOOOOO worth the effort
    • No compiler warnings are generated by the new code
      • we compile with warnings as errors.  if it's really not a valid or important warning, use #pragma warning disable/enable
    • Before committing anything to source control, update to get all recent changes, and make sure all unit and integration tests pass
    • FxCop should generate no new warnings for your new code
    • Compiling with warnings as errors will flush out any place you forgot documentation comments, which must be included for any new code
  • There WAS a second shooter on the grassy knoll
    • possibly one on the railway overpass
    • (that's for anyone who bothered to read this far...)
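The "limit compile-time dependencies" principle above is the one that bites people most often, so here's a sketch of what it means in practice, using the ILoggingService example from the list (the types shown are illustrative, not a real library):

```csharp
// Clients compile against only the interface and the factory.  Swapping
// log4net for EnterpriseLibrary (or a mock in unit tests) touches exactly
// one class, and no client recompiles.
public interface ILoggingService
{
    void Error(string message);
}

public static class LoggingServiceFactory
{
    public static ILoggingService Create()
    {
        // Today: a log4net-backed implementation.  Tomorrow: whatever.
        return new Log4NetLoggingService();
    }
}

internal class Log4NetLoggingService : ILoggingService
{
    private static readonly log4net.ILog log =
        log4net.LogManager.GetLogger(typeof(Log4NetLoggingService));

    public void Error(string message)
    {
        log.Error(message);
    }
}
```

Note the implementation class is internal, per the "only the public interface should be public" principle.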
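The exception-wrapping guidance above can also stand a sketch.  The names here (ConfigurationInitializationException, LoadFromXmlFile, Log) are hypothetical stand-ins for whatever your subsystem defines:

```csharp
using System.IO;

// The caller sees a config-subsystem exception, not an implementation
// detail; if XML files are later replaced by a database, callers don't
// have to start catching a different exception type.
public Configuration LoadConfiguration(string name)
{
    try
    {
        return LoadFromXmlFile(name); // hypothetical XML-backed implementation
    }
    catch (FileNotFoundException ex)
    {
        Log(ex); // anytime you catch anything, log it

        // Actionable message, and the original exception (with its call
        // stack) preserved as InnerException.
        throw new ConfigurationInitializationException(
            "Could not initialize configuration '" + name +
            "'.  Verify that the configuration store is deployed and readable.",
            ex);
    }
}
```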
Tuesday, 01 May 2007 13:49:18 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [10]  | 
# Thursday, 26 April 2007

Back in February, I mentioned that I was having problems testing our AzMan code on developer machines running Windows XP instead of Windows 2003 Server.  This was because the 2003 Server version of the AzMan interfaces weren't present on XP.  To deal with the issue, we actually put some code in our unit tests that reported success if running on XP when the error was encountered, so that everyone could have a successful build.  Drag.

Anyway, it turns out I wasn't the only one having this problem.  Helpful reader Jose M. Aragon says

We are trying to get AzMan into our authorization architecture, and were able to do it nicely, as soon as the development was done on Windows Server 2003. But, when we tried to use IAzClientContext2.GetAssignedScopesPage on a developers machine running XP sp2 it fails miserably with a type initialization. Searching on help, I found your entry. so we started to circumvent the problem by programming our own "GetAssignedScopes" method.

Meanwhile, on another XP workstation, the problem was not occurring.  So, it turns out the difference resides in which Server 2003 admin pack package was installed on each XP machine.

adminpak.exe sized 12.537KB on 24 Jan 2007 : Does NOT include IAzClientContext2 on azroles.dll

WindowsServer2003-KB304718-AdministrationToolsPack.exe sized 13.175KB on 13 March 2007 : allows using IAzClientContext2 on XP

It seems the two files were both downloaded from microsoft.com/downloads, but the one for Server 2003 SP1 is the most recent AdminPak.

I uninstalled my original version of the Admin Pack on my XP-running laptop, installed the new one, and now all of our tests succeed without the hack!

Thanks Jose!

Thursday, 26 April 2007 13:02:04 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 17 April 2007

I've been wrestling for a while with how to deal with the fact that AzMan doesn't natively have a concept of denials.  I'd convinced myself that you could structure a hierarchy of grants such that you wouldn't really need an explicit denial, but I got a use case last week that proved that it really is necessary (without doing something REALLY horrible). 
I started with a suggestion from Cam, and modified it a bit into something that I think will work going forward.

The basic idea is that for every operation we want to add grants for, e.g. "GetAccounts", we also add a hidden operation called "!GetAccounts".  To keep them separated, we also offset their AzMan operation IDs by some arbitrary constant, say 100,000.  Then when we do access checks, we actually have to call IAzClientContext.AccessCheck twice, once for the grant operations and once for the deny operations.  The results get evaluated such that we only grant access for an operation if the access check succeeds for the "grant" operation, AND doesn't succeed for the "deny" operation. 
WindowsIdentity identity = WindowsIdentity.GetCurrent();
IAzClientContext client = app.InitializeClientContextFromToken(
     (ulong)identity.Token.ToInt64(), null);
object[] scopes = new object[] { (object)"" };
// "grant" operations use their normal IDs...
object[] grantOps = new object[] { (object)1, (object)2, (object)3, (object)4 };
// ...and each hidden "deny" operation is offset by 100,000
object[] denyOps = new object[] { (object)100001, (object)100002, 
     (object)100003, (object)100004 };
object[] grantResults = (object[])client.AccessCheck("check for grant", 
     scopes, grantOps, null, null, null, null, null);
object[] denyResults = (object[])client.AccessCheck("check for deny", 
     scopes, denyOps, null, null, null, null, null);
for (int i = 0; i < grantResults.Length; i++)
{
    // allowed only if the grant check succeeds (0 == NO_ERROR)
    // AND the corresponding deny check does NOT succeed
    if (((int)grantResults[i] == 0) && ((int)denyResults[i] != 0))
    {
        // operation i is allowed for this user
    }
}

We'll hide all the details in the administration tool, so that all the user will see is radio buttons for Grant and Deny for each operation.

I'm pretty happy with this for a number of reasons

  • it gives us the functionality we need
  • it's easy to understand (no coder should have a hard time figuring out what "!GetAccounts" means)
  • all of the logic is localized into the admin tool and the CheckAccess method, so the interface to the client doesn't change
  • it's not too much additional semantics layered on top of AzMan, and we take full advantage of AzMan's performance this way
Tuesday, 17 April 2007 15:44:12 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 

This is a weird one...

A while back I changed our serializers from a reflection based model, to one that uses run-time code generation via the CodeDOM.  Performance is two orders of magnitude better, so happy happy.  However (there's always a however) a weird problem cropped up yesterday that I'm not quite sure what to do with. 

One of our applications has a case where they need to touch web.config for their ASP.NET applications while under load.  When they do that, they start getting InvalidCastExceptions from one or more of the cached, code-generated serializers.  Strange.  Internally, after we generate a new serializer, we cache a delegate to its Serialize method for later retrieval.  It looks like that cache is getting boogered, so that the type-cast exceptions ensue.  The cause: serializers compiled against an assembly in one loader context are getting called by code running against the same assembly in a different loader context, hence the cast failure. 

I had assumed (probably a bad idea) that when you touch web.config, that the whole AppDomain running that application goes away.  It looks like that's not the case here.  If the AppDomain really went away, then the cache (a static Hashtable) would be cleared, and there wouldn't be a problem. 

To test the behavior, I wrapped the call to the cached serializer to catch InvalidCastExceptions, and in the catch block, clear the serializer cache so that they get regenerated.  That works fine, the serializers get rebuilt, and all is well.  However, I also put in a cap, so that if there was a problem, we wouldn't just keep recycling the cache over and over again.  On the 4th time the cache gets cleared, we give up, and just start throwing the InvalidCastExceptions.
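The workaround looks roughly like the sketch below (names like SerializeDelegate and GenerateAndCacheSerializer are illustrative stand-ins for our internals):

```csharp
using System;
using System.Collections;

// Cached delegates keyed by type; static, so it should die with the AppDomain.
private static Hashtable serializerCache = new Hashtable();
private static int cacheResets = 0;
private const int MaxCacheResets = 4;

public object RunSerializer(Type type, object graph)
{
    SerializeDelegate serializer = (SerializeDelegate)serializerCache[type];
    if (serializer == null)
        serializer = GenerateAndCacheSerializer(type); // CodeDOM compile step

    try
    {
        return serializer(graph);
    }
    catch (InvalidCastException)
    {
        // The cached delegate was compiled in a different loader context.
        // Rebuild everything -- but give up after a few resets rather than
        // recycling the cache forever.
        if (++cacheResets > MaxCacheResets)
            throw;
        serializerCache.Clear();
        return RunSerializer(type, graph); // regenerates on the retry
    }
}
```

Which is exactly why the fourth touch of web.config mattered: cacheResets is static too, so if the AppDomain really recycled, it would have reset to zero.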

The behavior that was reported back by the application team was that said fix worked fine until the 4th time they touch web.config, then it starts throwing exceptions.  Ah ha!  That means the AppDomain couldn't be going away, because the counter is stored in a static, and so should be resetting to 0 every time the application restarts. 

Unfortunately, just because I can characterize the problem doesn't mean I know what to do about it. :-)

Tuesday, 17 April 2007 12:45:10 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 13 April 2007

We've been building around WCF duplex contracts for a while now, as a way of limiting the number of threads active on our server.  The idea being that since a duplex contract is really just a loosely coupled set of one-way operations, there's no thread blocking on the server side waiting for a response to be generated. 
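For anyone who hasn't seen one, a duplex contract is declared as a pair of one-way interfaces tied together by CallbackContract (the account-service names below are made up for illustration):

```csharp
using System.ServiceModel;

// The service contract points at the callback contract; both directions
// are IsOneWay, so neither side holds a thread waiting for a reply.
[ServiceContract(CallbackContract = typeof(IAccountCallback))]
public interface IAccountService
{
    [OperationContract(IsOneWay = true)]
    void RequestBalance(string accountId);
}

// Implemented by the client; WCF opens a channel BACK to the client to
// deliver these calls -- which is the source of the transport-security
// and client-side-port headaches described below.
public interface IAccountCallback
{
    [OperationContract(IsOneWay = true)]
    void BalanceReady(string accountId, decimal balance);
}
```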

It's been working well, but I'm starting to find that there are an awful lot of limitations engendered by using duplex contracts, mostly around the binding options you have.  It was very difficult to set up duplex contracts using issued token security, and in the end it turned out we couldn't set up such a binding through configuration, but were forced to resort to building the bindings in code.  Just recently, I discovered that you can't use transport level security with duplex contracts.  This makes sense, because a duplex contract involves an independent connection from the "server" back to the "client".  If you are using SSL for transport security, you would need certificates set up on the client, etc.  I'm not surprised that it doesn't work, but it's limiting.  Plus we have constant battles with the fact that processes tend to hold open the client side port that gets opened for the return calls back to the client.  Every time we run our integration tests, we have to remember to kill TestDriven.NET's hosting process, or on our next test run we get a failure because the client side port is in use already. 

All this has led me to think that maybe we should go with standard request-response contracts until the server-side threading really shows itself to be a problem.  That's what I get for prematurely optimizing...

Indigo | Work
Friday, 13 April 2007 12:53:50 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 04 April 2007

Scott posted today about "Mindful Coding" or "Coding Mindfully", meaning "paying full attention to what you are doing while coding".  It's an easy thing to say, and a very hard thing to do.  We are so habituated, as a society, to not paying full attention to much of anything that it's hard to really focus on what's happening right now.  I know that between email, questions, lunch, etc. it's difficult for me to focus on the code I'm writing at any given moment.  If nothing else, it's tempting to think about how the code you are writing now will interact with or influence the code you write tomorrow or next week, or the code your neighbor is writing that you have to integrate with.  Before you know it, you aren't really paying full attention to what you are writing.  Personally, this is where I get a lot of value out of refactoring. 

Refactoring allows you to fix the problems caused by wandering attention, if it's done often enough.  Several of the commenters on Scott's post also note that pair programming is a good remedy, and I agree.  When you are collaborating with someone else, you have to focus on the collaboration, which leads to mindfulness about what you are doing right now. 

Go forth and be mindful...

Wednesday, 04 April 2007 14:48:33 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 

Since there's no way to manage an explicit "Deny" in AzMan like there is with ACLs, that has some influence on the way in which we will have to define roles.  Let's say I have a group hierarchy like this...

  • Ray's Bakery
    • Admins
    • Bill Payers
    • Corporate Headquarters
      • Finance
      • Payroll

If we have the ability to deny, we might give rights to all operations to the top level "Ray's Bakery", and then deny rights to "Payroll" for example that we don't want that group to have, but are OK for "Finance".  It's relatively easy to understand, and has the advantage of being not very much work. 

Because of the way AzMan works, however, we have to work from the top of the hierarchy down, going from most restrictive to least restrictive.  In other words, we can only assign rights to the "Ray's Bakery" group that we want ALL users to have, then grant additional rights as we go down the tree.  We might want everyone in Finance and Payroll to be able to see balances, so those rights could be assigned at the "Corporate HQ" level, while rights to transfer money from one corporate account to another might be granted only to Finance, and rights to wire transfers only to Payroll.  This should also be pretty easy to understand, but it does require a bit more thinking about.  I think a good test tool will help make it clear to users who has what rights, so it shouldn't be too hard to follow. 

For the sake of clearing up any implementation details, what I'm really describing are hierarchical groups in ADAM, which are associated with AzMan roles, so when I say "assign rights at the Finance level", I mean that the group SID for the Finance group in ADAM will be associated with an AzMan role called something like RaysBakeryFinance, which is in turn associated with the operations or tasks that comprise wire transfers. 

Wednesday, 04 April 2007 13:51:28 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 
# Wednesday, 07 March 2007

This is mostly an annoyance, but I've noticed that every time I try to attach the VisualStudio 2005 debugger to the w3wp.exe process (to debug a WCF service hosted in IIS, on Windows 2003 Server), the Attach to Process dialog automatically tries to debug Workflow and T-SQL code, not managed code.  This then causes VS.NET to crash instead of attaching the debugger. 

When I remember to set it manually to Managed instead, everything works just fine.  The drag is that there doesn't seem to be an obvious way to turn off the automatic selection.  Every time I start this process, I forget, crash the app, then remember to select manually.  Costs some extra time I'd rather not spend.

I suppose on the other hand, I could just pay more attention.

Wednesday, 07 March 2007 10:58:26 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 26 February 2007

It looks like the issue I've been having with calling AzMan via PowerShell via MMC has nothing to do with PowerShell, and is probably an AzMan issue.  I tried exactly the same scenario, but accessed a different path with my drive provider that ultimately makes a COM call to ADAM, rather than AzMan, and it worked just fine.  That makes me think it's something about the AzMan implementation. 

However, I still need to call my PowerShell drive provider from MMC, and that problem has been resolved, (many) thanks to Bruce Payette

Turns out that there's a way to get PowerShell to run in the caller's thread, rather than creating its own MTA thread.  This means that my PowerShell code will get called from MMC's STA thread directly, and the AzMan interfaces get created just fine.  Woohoo!

Previously, I was creating the Runspace and calling it thusly...

RunspaceConfiguration config = RunspaceConfiguration.Create();
PSSnapInException warning;
//load my drive provider
PSSnapInInfo info = config.AddPSSnapIn(snapInName, out warning); 
Runspace rs = RunspaceFactory.CreateRunspace(config);
rs.Open();
//create a Command object with the right parameters and add it to the pipeline
Collection<PSObject> results;
using (Pipeline pipeline = rs.CreatePipeline())
{
    results = pipeline.Invoke();
}

This wasn't working, I believe because MMC is calling this code on an STA thread, and PowerShell is creating its own MTA thread, which doesn't seem to be acceptable to AzMan (not sure why).

However, according to Bruce's suggestion, I changed it to look like this...

Runspace rs = RunspaceFactory.CreateRunspace();
PSSnapInException warning;
//load my drive provider
rs.RunspaceConfiguration.AddPSSnapIn(snapinName, out warning);
rs.Open();
Runspace.DefaultRunspace = rs;
EngineIntrinsics _exec = rs.SessionStateProxy.GetVariable("ExecutionContext") as EngineIntrinsics;
ScriptBlock sb = _exec.InvokeCommand.
    NewScriptBlock("my script block");
Collection<PSObject> results = sb.Invoke();

This causes the PowerShell call to happen on the caller's (MMC's) thread, and everyone is happy. 

Monday, 26 February 2007 11:09:26 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 
# Thursday, 22 February 2007

Update:  OK this is even weirder.  If I call into the PowerShell Runspace from inside a WinForms app marked STAThread, but first I create my own MTA thread and use it to make the call to the Runspace, everything works.  However, from within MMC, even if I create the MTA thread myself to call the Runspace on, it fails with the same "Interface not registered" error.



Our plan is to build PowerShell Cmdlets and drive providers to expose our management interfaces, then layer an MMC interface on top of them to provide a GUI management tool. 

Yesterday, I ran into the first hitch in that plan...

My PowerShell drive provider makes use of the AzMan COM interfaces to deal with Operations, Roles, and Tasks.  That works great from within PowerShell, and it does all the stuff it needs to.  However, when I started running the PowerShell stuff through a Runspace inside of MMC, the AzMan calls started failing with "Interface not registered".  Hmm.  So, on a hunch, I tried the same PowerShell calls (again using a Runspace) inside a WinForms app.  Same problem.  Even though they work fine in PowerShell (which obviously makes me doubt that the interface is really unregistered). 

On another hunch, I changed the COM threading model attribute on the WinForms app from [STAThread] to [MTAThread].  Everything worked just peachy.  Dang. 

Turns out that MMC always calls snap-ins on an STA thread, and for the life of me I can't find any way to change that behavior.  I still don't know why the problem manifests itself quite this way.

To further qualify the problem, I called the AzMan interfaces directly from within the WinForms app, and it worked in both STA and MTA threads.  So apparently it has something to do with launching the Runspace from an STA thread, and then making the COM calls from PowerShell (which uses an MTA thread). 

I'm hoping that I won't have to abandon the MMC -> PowerShell concept for this particular part of the product, but right now I'm not sure how to resolve this.

Thursday, 22 February 2007 12:57:24 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 19 February 2007

I'm finally taking the time to get to know PowerShell, and I'm pretty much as impressed as I expected to be given the rave reviews from Scott and others. 

In rewriting our core platform, we're committed to exposing management operations through both PowerShell and MMC, so I've been building out some PS Cmdlets, and now a "drive provider" to make it easy for an end user to navigate through our authentication system.  In the end, the user will be able to "CD" into our authentication system, and navigate the available groups (from ADAM), and roles and operations (both from AzMan), like they were directories and files in a file system. 

It's taken me a while to get my head around the PS drive provider model, but now that I (mostly) have, I totally get how flexible a model it is.  You can choose different levels of functionality to support, including adding new items to the tree, "cd"-ing into multiple levels of hierarchy, etc.  Once I grokked the stateless model used by the drive providers, it made sense as an easy way to expose hierarchical data to PS, in a way that's intuitive to users. 
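The path-driven, stateless model is easy to picture outside PowerShell too.  Here's a toy sketch in Python (the store contents and names are entirely made up) of resolving a path against a nested store, the way a drive provider resolves "directories":

```python
# Toy model of navigating a hierarchical store by path, the way a
# PowerShell drive provider exposes groups/roles/operations as folders.
store = {"auth": {"groups": {"Editors": None}, "roles": {"Admin": None}}}

def resolve(path):
    """Walk the nested store one path segment at a time, statelessly."""
    node = store
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

print(sorted(resolve("/auth/groups")))  # ['Editors']
```

Each call starts from the root, which is the same stateless discipline the real provider model enforces.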

Once that stuff is done, we'll layer an MMC interface on top of the PS Cmdlets, so we won't have to reinvent the wheel, functionality-wise. 

Very exciting times...

ADAM | AzMan | Work | PowerShell
Monday, 19 February 2007 11:06:24 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 13 February 2007

One obvious thing you might want to do with an authorization system is to get a list of all the operations a given user is granted access to.  This is particularly useful for building up UI elements such as navigation menus.  Nobody likes navigation links that result in Access Denied errors, so it's best to only include in your navigation hierarchy those pages a user can actually execute successfully. 

Here's an example of how to do this with ADAM and AzMan.  It's a little weird, since you have to switch back and forth between an operation's string name, and its integer ID, hence the lookup dictionaries. 


public string[] AvailableOperations(string userId)
{
    IAuthenticationStore authenticate = AuthenticationStoreFactory.Create();
    UserInfo userInfo = authenticate.LookupUser(userId);
    IAzClientContext2 client = setupClientContext(userId, userInfo);
    try
    {
        Dictionary<string, int> opToId =
            new Dictionary<string, int>(azApp.Operations.Count);
        Dictionary<int, string> idToOp =
            new Dictionary<int, string>(azApp.Operations.Count);
        foreach (IAzOperation op in azApp.Operations)
        {
            opToId.Add(op.Name, op.OperationID);
            idToOp.Add(op.OperationID, op.Name);
            Marshal.ReleaseComObject(op);
        }
        List<string> availableOps = new List<string>();
        object[] scope = new object[] { (object)"" };
        object[] operations = new object[idToOp.Count];
        int i = 0;
        foreach (int opId in idToOp.Keys)
        {
            operations[i] = (object)opId;
            i++;
        }
        object[] results = (object[])client.AccessCheck("User authentication", scope,
            operations, null, null, null, null, null);
        for (int j = 0; j < results.Length; j++)
        {
            if ((int)results[j] == 0)
            {
                availableOps.Add(idToOp[(int)operations[j]]);
            }
        }
        return availableOps.ToArray();
    }
    finally
    {
        Marshal.ReleaseComObject(client);
    }
}

I talked about the LookupUser and setupClientContext methods yesterday.  This method returns a handy list of all the operations that the given user can access.  In practice, you probably want to cache the results, although so far it seems to perform very well.  It might prove particularly valuable to cache the ID->Name and Name->ID lookup tables, as they aren't likely to change very often.
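Since those lookup tables rarely change, even a very simple time-based cache would do.  A minimal sketch in Python (the loader, table contents, and TTL are hypothetical, not part of the AzMan code above):

```python
import time

class LookupCache:
    """Cache a rarely-changing lookup table, reloading at most every `ttl` seconds."""
    def __init__(self, loader, ttl=300):
        self.loader, self.ttl = loader, ttl
        self.value, self.loaded_at = None, 0.0

    def get(self):
        if self.value is None or time.time() - self.loaded_at > self.ttl:
            self.value = self.loader()   # e.g. rebuild Name->ID from azApp.Operations
            self.loaded_at = time.time()
        return self.value

# Hypothetical Name->ID table, refreshed at most once per five minutes.
op_to_id = LookupCache(lambda: {"ReadReport": 61}, ttl=300)
print(op_to_id.get()["ReadReport"])  # 61
```

The same wrapper works for the ID->Name direction; only the loader changes.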

Tuesday, 13 February 2007 09:34:22 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 12 February 2007

I'm working on an authentication and authorization solution using ADAM and AzMan, and starting to write the code to integrate the two.  For those who might be looking to do such a thing, here's an example.  This method takes two pieces of information, a user's ID (which in this case corresponds to a userPrincipalName in ADAM), and the name of the operation to check access to...


public bool CheckAccess(string operation, string userId)
{
    log.Verbose("Starting CheckAccess");
    bool result = false;
    IAuthenticationStore authenticate = AuthenticationStoreFactory.Create();
    UserInfo userInfo = authenticate.LookupUser(userId);
    log.Verbose("LookupUser {0} succeeded", userId);
    IAzClientContext2 client = setupClientContext(userId, userInfo);
    object[] scope = new object[] { (object)"" };
    object[] operations = null;
    try
    {
        IAzOperation op = azApp.OpenOperation(operation, null);
        operations = new object[] { (Int32)op.OperationID };
        Marshal.ReleaseComObject(op);
    }
    catch (COMException ex)
    {
        log.Error("No such operation: {0}", operation);
        if (ex.ErrorCode == AzManConstants.INT_NO_SUCH_OBJECT)
            throw new UnknownOperationException(operation);
        else
            throw;
    }
    object[] results = (object[])client.AccessCheck("User authentication", scope,
        operations, null, null, null, null, null);
    if ((int)results[0] == 0)
    {
        result = true;
        log.Verbose("Access granted to {0} for user {1}", operation, userId);
    }
    else
    {
        log.Verbose("Access denied to {0} for user {1}", operation, userId);
    }
    Marshal.ReleaseComObject(client);
    log.Verbose("Leaving CheckAccess");
    return result;
}

There are a couple things here that need explaining.  The LookupUser method finds the specified user in ADAM by searching for the userPrincipalName using the DirectorySearcher class.  It returns a structure containing the user's SID, all of the SIDs for groups they belong to, and some other stuff specific to our application.  There's a great example of how to retrieve this info in the paper Developing Applications Using Windows Authorization Manager, available on MSDN. 

The other method of interest here is the setupClientContext method, which looks like this...


private IAzClientContext2 setupClientContext(string userId, UserInfo userInfo)
{
    log.Verbose("Setting up client context");
    IAzClientContext2 client = azApp.InitializeClientContext2(userId, null);
    log.Verbose("User sid = {0}", userInfo.SecurityId);
    object[] userSids = null;
    if (userInfo.GroupIds != null)
    {
        userSids = new object[userInfo.GroupIds.Count + 1];
        userSids[0] = userInfo.SecurityId;
        int i = 1;
        foreach (string sid in userInfo.GroupIds)
        {
            log.Verbose("Adding group sid {0}", sid);
            userSids[i] = sid;
            i++;
        }
    }
    else
    {
        log.Verbose("No group IDs returned, using only user SID");
        userSids = new object[1];
        userSids[0] = (object)userInfo.SecurityId;
    }
    client.AddStringSids(userSids);
    log.Verbose("Finished setting up client context");
    return client;
}

This is the part that only works on Windows 2003 Server or Vista. On XP, the IAzClientContext2 interface is not available, and so you can't add the additional group SIDs to make this work.

The end result is that the user's role membership will be used by AzMan to determine whether or not the specified user can access the specified operation, and return a boolean value back to the caller.  Easy to use by the client, and as far as I can tell, pretty performant.

Monday, 12 February 2007 11:21:02 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 31 January 2007

If you are working with AzMan at all, the best resource I've found so far that you'll want to check out is Developing Applications Using Windows Authorization Manager from MSDN.  It includes a very comprehensive set of info on working with AzMan using Windows, ADAM, or custom principals, on using AzMan from ASP.NET, and general info on authorization strategy and policy.

There are lots of code samples included in the document, including code for doing access checks using ADAM principals, and some for writing AzMan "BizRules" in managed code.

Good stuff.

AzMan | Work
Wednesday, 31 January 2007 15:05:23 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 24 January 2007

I'll post some code later on, but I wanted to make some quick points about integrating ADAM and AzMan.  I'm in the midst of building an authentication/authorization system using the two technologies together, and have come across some stumbling blocks.  There's not much documentation, particularly around AzMan, and the COM interfaces for AzMan can be a bit cumbersome.

  • Storing users in ADAM and authorizing them using ADAM requires Windows 2003 Server or Vista.  There's no decent way to make this work on Windows XP.  The necessary AzMan interface, IAzClientContext2, doesn't exist on XP.  It's required for using a collection of user and group SIDs from ADAM to do access checks against AzMan.  I'll post some code later...
    • IAzClientContext2 is also available on Vista, so Vista is also a viable dev platform.
  • There are some confusing interactions between the AzMan UI and the programmatic API.  If you create a Role in the AzMan UI, but don't create a RoleAssignment, the programmatic call to IAzApplication2.OpenRole will fail.  If you create the role assignment, but don't actually assign any users or groups to it, OpenRole succeeds.  Conversely, if you call the programmatic IAzApplication2.CreateRole method and assign operations and users to the role in code, the RoleAssignment shows up in the UI, but not the Role itself. 
  • If you assign an ADAM user to be a member of an AzMan group, it won't show up in the AzMan UI, but if you assign them directly to a Role, the ADAM user's SID will show up (as "unknown SID") under the RoleAssignment.  Either way, the call to AccessCheck works correctly.
  • You must pass the complete list of group SIDs from ADAM, fetched from the user's "tokenGroups" property.  Don't use "memberOf", because it doesn't take into account groups which belong to other groups.
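The last point matters because group membership is transitive.  A quick illustration in Python with a made-up directory: a naive "memberOf" read returns only the direct groups, while the expansion below matches what "tokenGroups" reports:

```python
def transitive_groups(member_of, start):
    """Expand nested group membership: everything `start` belongs to,
    directly or through groups that belong to other groups."""
    seen = set()
    stack = [start]
    while stack:
        node = stack.pop()
        for group in member_of.get(node, []):
            if group not in seen:
                seen.add(group)
                stack.append(group)
    return seen

# Hypothetical directory: alice is in Editors, and Editors is in Staff.
member_of = {"alice": ["Editors"], "Editors": ["Staff"]}
print(sorted(transitive_groups(member_of, "alice")))  # ['Editors', 'Staff']
```

A direct read of alice's memberships would miss Staff, and AzMan role assignments against Staff would silently fail the access check.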

More detail to come...

Wednesday, 24 January 2007 10:08:12 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 
# Monday, 18 December 2006

I spent about a day and a half this week trying to get a CC.NET server working with a complex combination of MSBuild (running the build script), TypeMock (the solution we're using for "mock objects"), and NCover (for code coverage analysis) that proved tricky to get right.

In a typical build script, you'd create a task to run your unit tests that depends on your compile task.  And you might want to run a "clean" task first so you know you're starting from ground zero. 

Bringing a profiler into the mix changes the equation a bit.  Or in this case, two profilers that have to play nice together.  A "profiler" in this case means something that registers itself as a .NET profiler, meaning that it uses the underlying COM API in the CLR to get under the covers of the runtime.  This was originally envisioned as enabling profilers that track things like performance or memory usage that have to run outside the context of the CLR.  NCover uses this capability to do code coverage analysis, so that we can get reports of which lines of code in our platform are not being touched by our unit tests.

TypeMock is also a profiler, which causes some interesting interaction.  TypeMock uses the profiler API to insert itself between calling code and the ultimate target of that call in order to "mock" or pretend to be the target.  We use this to reduce the dependencies that our unit test code requires.  Rather than installing SQL Server on our build box, we can use TypeMock to "mock" the calls to the database, and have them return the results we expect without actually calling SQL. 
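The same idea, minus the profiler tricks, can be shown with Python's unittest.mock: the test swaps the database dependency for a canned object, so nothing real gets called.  (The data-access function and the count are made up for illustration; TypeMock itself intercepts calls at the CLR level rather than taking the dependency as a parameter.)

```python
from unittest import mock

# Hypothetical data-access function that would normally hit SQL Server.
def get_user_count(db):
    return db.execute("SELECT COUNT(*) FROM Users").scalar()

# In a unit test, the database object is replaced by a mock that returns
# a canned result, so no real server is needed on the build box.
db = mock.Mock()
db.execute.return_value.scalar.return_value = 42
print(get_user_count(db))  # 42
```

The test still exercises the real logic in get_user_count; only the expensive external dependency is faked.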

So...the problem all this interaction led to re: my build was that in order to make everything work, the profiler(s) have to be active before any of the assemblies you want to test/profile are loaded.  I had my UnitTest target set up like this:


<Target Name="UnitTestCoverage" DependsOnTargets="Compile;RemoveTestResults">

    <CreateItem Include="$(BuildDir)\**\*.MBUnit.dll">
        <Output TaskParameter="Include" ItemName="TargetFiles"/>
    </CreateItem>

    <TypeMockStart Link="NCover" />

    <Exec Command="&quot;$(NCoverConsole)&quot; $(MBUnitConsole) @(TargetFiles->'%(FullPath)', ' ') /rt:xml /rf:$(TestResultsDir) /fc:unit //l $(TestResultsDir)\NCover.log //x $(TestResultsDir)\NCoverage.xml //a $(AssembliesToProfile)" ContinueOnError="true"/>

    <TypeMockStop/>

</Target>
with the dependencies set so that the compile and "RemoveTestResults" targets would be evaluated first.  Unfortunately, this caused the whole build to hang for upwards of an hour when this target started, but after the compile and "remove" targets had run.  I'm theorizing (but haven't confirmed) that this is because the compiler loads the assemblies in question during the build process, and they don't get unloaded by the time we get to starting TypeMock.  That apparently means a whole bunch of overhead to attach the profilers (or something).  What finally ended up working was moving the compile target inside the bounds of the TypeMock activation, using CallTarget instead of the DependsOnTargets attribute.


<Target Name="UnitTestCoverage">

    <TypeMockStart Link="NCover" />

    <CallTarget Targets="CleanCompile;RemoveTestResults"/>

    <CreateItem Include="$(BuildDir)\**\*.MBUnit.dll">
        <Output TaskParameter="Include" ItemName="TargetFiles"/>
    </CreateItem>

    <Exec Command="&quot;$(NCoverConsole)&quot; $(MBUnitConsole) @(TargetFiles->'%(FullPath)', ' ') /rt:xml /rf:$(TestResultsDir) /fc:unit //l $(TestResultsDir)\NCover.log //x $(TestResultsDir)\NCoverage.xml //a $(AssembliesToProfile)" ContinueOnError="true"/>

    <TypeMockStop/>

</Target>
This works just fine, and doesn't cause the 1 hour delay, which makes things much easier. :-)
Monday, 18 December 2006 12:27:20 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 11 December 2006

One of the (many) issues I encountered in creating a duplex http binding with IssuedToken security (details here) was that when using a duplex contract, the client has to open a "server" of its own to receive the callbacks from the "server" part of the contract.  Unfortunately, there are a couple of fairly critical settings that you'd want to change for that client side "server" that aren't exposed via the configuration system, and have to be changed via code.

One that was really causing me problems was that for each duplex channel I opened up (and for one of our web apps, that might be 2-3 per ASP.NET page request) I had to provide a unique BaseClientAddress for each channel, which quickly ran into problems with opening too many ports (a security hassle) and ensuring uniqueness of the URIs (a programming hassle).  Turns out that you can ask WCF to create unique addresses for you, unique in whatever way is appropriate to your chosen transport.  However, you make that request by setting the ListenUri and ListenUriMode properties on the ServiceEndpoint (or the BindingContext).  For the client-side "server" part of a duplex contract, that turns out to be one of those settings that isn't exposed via configuration (or much of anything else, either). 

Luckily, I found an answer in the good works of Nicholas Allen (yet again).  He mentioned that you could set such a property on the BindingContext for the client side of a duplex contract.  Not only that, but Mike Taulty was good enough to post a sample of said solution. 

There's a great summary of the solution here, but to summarize even more briefly, you have to create your own BindingElement, and inject it into the binding stack so that you can grab the BindingContext as it hastens past.  Now I'm setting the ListenUri base address on a specific port, and asking WCF to do the rest by unique-ifying the URI.  Not only do I not have to keep track of them myself, but I can easily control which ports are being used on the client side, which makes both IT and Security wonks really happy.
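The effect of unique-ifying the URI can be pictured in a few lines of Python: keep the host and port fixed, and let a GUID make the path unique (the base address here is made up; in WCF, ListenUriMode.Unique does this derivation for you):

```python
import uuid
from urllib.parse import urljoin

def unique_listen_uri(base):
    """Derive a unique listen address under a fixed base/port, roughly
    what asking WCF for a unique ListenUri accomplishes."""
    return urljoin(base + "/", uuid.uuid4().hex)

a = unique_listen_uri("http://localhost:8000/callback")
b = unique_listen_uri("http://localhost:8000/callback")
print(a != b)  # True: same port, distinct addresses
```

Because every address lives under one port, the firewall story stays simple while each channel still gets its own endpoint.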

Monday, 11 December 2006 11:02:53 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 05 December 2006

This took me WAY longer to figure out than either a) it should have or b) I had hoped it would.  I finally got it working last week though.  I now have my client calling the server via a Dual Contract using IssuedToken security (requires a SAML token), said token being obtained from an STS written by me, which takes a custom token from the client for authentication. 

On the plus side, I know a whole bunch more about the depths of the ServiceModel now than I did before. :-)

It turns out (as far as I can tell) that the binding I needed for client and server cannot be described in a .config file, and must be created in code.  That code looks like this on the server


public static Binding CreateServerBinding(Uri baseAddress, Uri issuerAddress)
{
    CustomBinding binding = new CustomBinding();

    IssuedSecurityTokenParameters issuedTokenParameters =
        new IssuedSecurityTokenParameters();
    issuedTokenParameters.IssuerAddress = new EndpointAddress(issuerAddress);
    issuedTokenParameters.IssuerBinding = CreateStsBinding();
    issuedTokenParameters.KeyType = SecurityKeyType.SymmetricKey;
    issuedTokenParameters.KeySize = 256;
    issuedTokenParameters.TokenType
        = "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1";

    SecurityBindingElement security
        = SecurityBindingElement.CreateIssuedTokenBindingElement(issuedTokenParameters);

    binding.Elements.Add(security);
    binding.Elements.Add(new CompositeDuplexBindingElement());
    binding.Elements.Add(new OneWayBindingElement());
    binding.Elements.Add(new TextMessageEncodingBindingElement());
    binding.Elements.Add(new HttpTransportBindingElement());

    return binding;
}
and essentially the same on the client, with the addition of setting the clientBaseAddress on the CompositeDuplexBindingElement.

This works just fine, getting the token from the STS under the covers and then calling the Dual Contract server interface with the token. 

I ended up with a custom binding to the STS itself, mostly because I needed to pass more credentials than just user name and password.  So the STS binding gets created thusly:


public static Binding CreateStsBinding()
{
    Binding binding = null;

    SymmetricSecurityBindingElement messageSecurity
        = new SymmetricSecurityBindingElement();

    messageSecurity.EndpointSupportingTokenParameters.
        SignedEncrypted.Add(new VoyagerToken.VoyagerTokenParameters());

    X509SecurityTokenParameters x509ProtectionParameters
        = new X509SecurityTokenParameters(X509KeyIdentifierClauseType.Thumbprint);
    x509ProtectionParameters.InclusionMode = SecurityTokenInclusionMode.Never;

    messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;
    HttpTransportBindingElement httpBinding = new HttpTransportBindingElement();

    binding = new CustomBinding(messageSecurity, httpBinding);

    return binding;
}
Not as easy as I had hoped it would be, but it's working well now, so it's all good.  If you go the route of the custom token, it turns out there are all kinds of fun things you can do with token caching, etc.  It does require a fair amount of effort though, since there are 10-12 classes that you have to provide.

Tuesday, 05 December 2006 15:47:33 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 17 October 2006

After some minor setbacks, I’m finally getting going with ADAM, and amazed at how well it works (once you figure out how it’s supposed to work, which is a non-trivial undertaking).  One thing that makes it harder is that most of the (still scant) documentation around ADAM is geared toward IT Pros, not developers, so there are plenty of places where you track down the documentation for that critical change you need to make to your directory, only to be told “start the command line tool…”.  Frustrating, but workable. 

Once the magic incantations are prized forth, it seems very fast.  Granted we’re not doing anything tricky at this point, but adding users, setting passwords, simple queries, all seem pretty snappy. 

On my previous problems with ADAM and AzMan integration, this is apparently caused by the fact that I had my users in one application partition in ADAM, and my AzMan store in another partition in the same ADAM instance.  The official word is that this is not supported, which probably makes sense.  It means that I’ll probably end up using a separate ADAM instance as the AzMan store, or fall back to the XML file, although that’s harder to distribute.  Only time will tell.

Tuesday, 17 October 2006 10:32:08 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 11 October 2006

I'm here in Redmond at the patterns and practices Summit, and yesterday Don Smith had some very interesting things to say about versioning of web services.  He was polling the audience for opinions and scenarios, and I started to realize that I hadn't thought very hard about the problem.  Typically in cases where I've wanted to version a web service, I've left the existing endpoint in place (supporting the old interface) and added a new endpoint which supports the new interface.  I still think that's a pretty good solution.  What I realized, though, was that there's no way to communicate the information about the new interface programmatically.  It still pretty much requires a phone call. 

Don suggested something like RSS (or actually RSS) to push information about new versions out to clients.  Another suggestion was to have people actively "subscribe" to your service.  That would not only handle the "push" notification, but you'd know how many people were actually using your old vs. new interface, etc. 

I started thinking about something like a WS-VersioningPolicy.  We already have standards like WS-SecurityPolicy, which carry metadata about how you should secure your use of a given service.  Why not use a similar formal, programmatic method for distributing information about versioning policy?  You could convey information like methods that are deprecated, where to find the new versioned endpoint (if that's your strategy), or define up front policies about how you will version your interface without switching to a new endpoint. 

That still doesn't solve the "push" problem.  Clients would have to not only consume the policy file at the time they start using the service, but presumably they'd have to check it from time to time.  That suggests human intervention at some level.  Hmmm. 

This is a hard problem, but one that Don's pretty passionate about solving, so check out his blog for future developments in this space.

Wednesday, 11 October 2006 09:12:47 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 06 October 2006

I'd been having some trouble (read: beating my head against the wall) trying to get passwords set/changed for users in ADAM early this week.  Unfortunately, many of the available samples are geared toward IT rather than development users, so tend to be in script, etc.  I've been working from the excellent book "The .NET Developer's Guide to Directory Services Programming" by Joe Kaplan and Ryan Dunn.  Sadly, I couldn't get their password changing samples to work, although aside from that the information in the book has been invaluable. 

I finally found what I needed in an obscure section of the documentation after many a googling.  This article gave me the key info that made this work.  I haven't tried all the permutations, but I think the key piece I was missing was the right combination of AuthenticationTypes flags:


            // Set authentication flags.
            // For non-secure connection, use LDAP port and
            //   ADS_USE_SIGNING |
            //   ADS_USE_SEALING |
            //   ADS_SECURE_AUTHENTICATION
            // For secure connection, use SSL port and
            //   ADS_SECURE_AUTHENTICATION
            AuthTypes = AuthenticationTypes.Signing |
                AuthenticationTypes.Sealing |
                AuthenticationTypes.Secure;

            // Bind to user object using LDAP port.
            try
            {
                objUser = new DirectoryEntry(
                    strPath, null, null, AuthTypes);
            }
            catch (Exception e)
            {
                Console.WriteLine("Error:   Bind failed.");
                Console.WriteLine("        {0}.", e.Message);
            }

This sample worked just fine, and I was finally able to set or change passwords for ADAM users.  Once that's done, authenticating that user is as easy as

            DirectoryEntry de = new DirectoryEntry("LDAP://localhost:50000/RootDSE",
                userPrincipalName, password,
                AuthenticationTypes.Signing | AuthenticationTypes.Sealing |
                AuthenticationTypes.Secure);
            // Touching the native object forces the bind, and thus the authentication.
            object nativeObject = de.NativeObject;
If the supplied password is correct, it succeeds, else you get an exception. 

Now I can get on with the interesting parts...

Friday, 06 October 2006 10:42:25 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, 28 September 2006

I'm working on a new project these days, and we're going all out with the bleeding edge goodies, including .NET 3.0.  To set the stage, we've got a server that internally calls a WF workflow instance, which is inherently asynchronous, so the interface we expose to clients is a WCF "Duplex Contract", which basically means two sets of one-way messages that are loosely coupled.  Think MSMQ, or other one-way message passing applications. 


    [ServiceContract(CallbackContract = typeof(ITestServiceCallback))]
    public interface ITestService
    {
        [OperationContract(IsOneWay = true)]
        void SignOn(SignOnRequest request);
    }

    public interface ITestServiceCallback
    {
        [OperationContract(IsOneWay = true)]
        void SignOnResponse(SignOnResponse response);
    }
However, we need to call this duplex contract from an ASP.NET application, since that's how we write out clients 80% of the time.  Since HTTP is inherently synchronous, this poses a bit of a challenge.  How do you call a duplex contract, but make it look synchronous to the caller?

The solution I came up with was to use a generated wrapper class that assigns each outgoing message a correlation id (a GUID), then uses that id as a key into a dictionary where it stores a System.Threading.WaitHandle, in this case really an AutoResetEvent.  Then the client waits on the WaitHandle.
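The skeleton of that pattern, sketched in Python with threading.Event standing in for the AutoResetEvent (all names here are hypothetical; the real wrapper is C#, below):

```python
import threading
import uuid

pending = {}                 # correlation id -> [event, response slot]
lock = threading.Lock()

def call_and_wait(send_one_way, request, timeout=30):
    """Send a one-way request, then block until the matching callback arrives."""
    cid = uuid.uuid4()
    event = threading.Event()
    with lock:
        pending[cid] = [event, None]
    try:
        send_one_way(cid, request)
        if not event.wait(timeout):
            raise TimeoutError("no callback for %s" % cid)
        return pending[cid][1]
    finally:
        with lock:
            pending.pop(cid, None)

def on_callback(cid, response):
    """Runs on the receiving thread: store the response and signal the waiter."""
    with lock:
        entry = pending.get(cid)
    if entry:
        entry[1] = response
        entry[0].set()

# Simulate the callback arriving on another thread:
def fake_send(cid, request):
    threading.Thread(target=on_callback, args=(cid, request.upper())).start()

print(call_and_wait(fake_send, "hello"))  # HELLO
```

A timeout on the wait is worth having in any real version; a lost callback otherwise blocks the calling thread forever.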

When the response comes back (most likely on a different thread) the wrapper matches up the correlation id, gets the wait handle, and signals it, passing the response from the server in another dictionary which uses the WaitHandle as a key.  The resulting wrapper code looks like this:


public class Wrapper : ITestServiceCallback
{
    #region private implementation

    private static Dictionary<Guid, WaitHandle> correlationIdToWaitHandle = new Dictionary<Guid,WaitHandle>();
    private static Dictionary<WaitHandle, Response> waitHandleToResponse = new Dictionary<WaitHandle, Response>();
    object correlationLock = new object();

    private static void Cleanup(Guid correlationId, WaitHandle h)
    {
        correlationIdToWaitHandle.Remove(correlationId);
        waitHandleToResponse.Remove(h);
    }

    ITestService client = null;

    #endregion

    public Wrapper()
    {
        client = new TestServiceClient(new InstanceContext(this));
    }

    public SignOnResponse SignOn(SignOnRequest request)
    {
        request.CorrelationId = Guid.NewGuid();
        WaitHandle h = new AutoResetEvent(false);
        try
        {
            lock (correlationLock)
            {
                correlationIdToWaitHandle.Add(request.CorrelationId, h);
            }
            // Send the one-way request, then block until the callback signals us.
            client.SignOn(request);
            h.WaitOne();
            SignOnResponse response = (SignOnResponse)waitHandleToResponse[h];
            return response;
        }
        finally
        {
            Cleanup(request.CorrelationId, h);
        }
    }

    #region ITestServiceCallback Members

    void ITestServiceCallback.SignOnResponse(SignOnResponse response)
    {
        Guid correlationId = response.CorrelationId;
        WaitHandle h = correlationIdToWaitHandle[correlationId];
        waitHandleToResponse.Add(h, response);
        // Wake up the thread blocked in SignOn.
        ((AutoResetEvent)h).Set();
    }

    #endregion
}

This worked just fine in my command line test app, but when I moved to an actual Web app, hosted either in IIS or in Visual Studio 2005, I started getting random but very frequent exceptions popping up in my client app, AFTER the synchronous method had completed.  The exception had a very long and pretty incomprehensible call stack:

System.ServiceModel.Diagnostics.FatalException was unhandled
Message="Object reference not set to an instance of an object."
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.Dispatch(MessageRpc& rpc, Boolean isOperationContextSet)
at System.ServiceModel.Dispatcher.ChannelHandler.DispatchAndReleasePump(RequestContext request, Boolean cleanThread, OperationContext currentOperationContext)
at System.ServiceModel.Dispatcher.ChannelHandler.HandleRequest(RequestContext request, OperationContext currentOperationContext)
at System.ServiceModel.Dispatcher.ChannelHandler.AsyncMessagePump(IAsyncResult result)
at System.ServiceModel.Dispatcher.ChannelHandler.OnAsyncReceiveComplete(IAsyncResult result)
at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously, Exception exception)
at System.ServiceModel.Channels.InputQueue`1.AsyncQueueReader.Set(Item item)
at System.ServiceModel.Channels.InputQueue`1.Dispatch()
at System.ServiceModel.Channels.InputQueueChannel`1.Dispatch()
at System.ServiceModel.Channels.ReliableDuplexSessionChannel.ProcessDuplexMessage(WsrmMessageInfo info)
at System.ServiceModel.Channels.ClientReliableDuplexSessionChannel.ProcessMessage(WsrmMessageInfo info)
at System.ServiceModel.Channels.ReliableDuplexSessionChannel.HandleReceiveComplete(IAsyncResult result)
at System.ServiceModel.Channels.ReliableDuplexSessionChannel.OnReceiveCompletedStatic(IAsyncResult result)
at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously, Exception exception)
at System.ServiceModel.Channels.ReliableChannelBinder`1.InputAsyncResult`1.OnInputComplete(IAsyncResult result)
at System.ServiceModel.Channels.ReliableChannelBinder`1.InputAsyncResult`1.OnInputCompleteStatic(IAsyncResult result)
at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously, Exception exception)
at System.ServiceModel.Channels.InputQueue`1.AsyncQueueReader.Set(Item item)
at System.ServiceModel.Channels.InputQueue`1.EnqueueAndDispatch(Item item, Boolean canDispatchOnThisThread)
at System.ServiceModel.Channels.InputQueue`1.EnqueueAndDispatch(T item, ItemDequeuedCallback dequeuedCallback, Boolean canDispatchOnThisThread)
at System.ServiceModel.Channels.InputQueue`1.EnqueueAndDispatch(T item, ItemDequeuedCallback dequeuedCallback)
at System.ServiceModel.Channels.InputQueue`1.EnqueueAndDispatch(T item)
at System.ServiceModel.Security.SecuritySessionClientSettings`1.ClientSecurityDuplexSessionChannel.CompleteReceive(IAsyncResult result)
at System.ServiceModel.Security.SecuritySessionClientSettings`1.ClientSecurityDuplexSessionChannel.OnReceive(IAsyncResult result)
at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously, Exception exception)
at System.ServiceModel.Security.SecuritySessionClientSettings`1.ClientSecuritySessionChannel.ReceiveAsyncResult.OnReceive(IAsyncResult result)
at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously, Exception exception)
at System.ServiceModel.Channels.ReliableChannelBinder`1.InputAsyncResult`1.OnInputComplete(IAsyncResult result)
at System.ServiceModel.Channels.ReliableChannelBinder`1.InputAsyncResult`1.OnInputCompleteStatic(IAsyncResult result)
at System.ServiceModel.Diagnostics.Utility.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously)
at System.ServiceModel.AsyncResult.Complete(Boolean completedSynchronously, Exception exception)
at System.ServiceModel.Channels.InputQueue`1.AsyncQueueReader.Set(Item item)
at System.ServiceModel.Channels.InputQueue`1.Dispatch()
at System.ServiceModel.Channels.InputQueue`1.OnDispatchCallback(Object state)
at System.ServiceModel.Channels.IOThreadScheduler.WorkItem.Invoke()
at System.ServiceModel.Channels.IOThreadScheduler.ProcessCallbacks()
at System.ServiceModel.Channels.IOThreadScheduler.CompletionCallback(Object state)
at System.ServiceModel.Channels.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)
at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)

Obvious? Not so much.

Interestingly, if I removed the lock around the dictionary insertion in the SignOn method

            lock (correlationLock)
            {
                correlationIdToWaitHandle.Add(request.CorrelationId, h);
            }

everything worked just fine.  Hmmmm. 

The problem, it turned out, was that ASP.NET uses (by default) a little thing called the SynchronizationContext.  As near as I can tell (I haven't researched this thoroughly, to be honest) one of its jobs is to make sure that any callbacks get run on the UI thread, thereby obviating the need to call Control.Invoke like you do in WinForms.  In my case, that additional lock was giving something fits, and it was trying to clean stuff up on a thread that wasn't around any more, hence the NullReferenceException. 

The solution was provided by Steve Maine (thanks Steve!).  Unbeknownst to me (since I didn't know there was a SynchronizationContext) you can turn it off.  Just add a CallbackBehavior attribute to your client side code, with UseSynchronizationContext=false.  So now my wrapper class looks like this


[CallbackBehavior(UseSynchronizationContext = false)]
public class Wrapper : ITestServiceCallback

and everything works fine. 

I'm open to other ways of making this work, so I'd love to hear from people, but right now everything is at least functioning the way I expect.  Several people have wondered why we don't just make the server-side interface synchronous, but since we are calling WF internally, we'd essentially have to pull the same trick, only on the server side.  I'd rather burn the thread on the client than tie up one per client on the server.  Again, I'm open to suggestions.

CodeGen | Indigo | Work
Thursday, 28 September 2006 10:02:46 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 
# Monday, 11 September 2006

I've been a contented user of AnkhSVN for a while now, and love the way it deals with integrating Subversion into VS.NET. 

I've also been using VS 2005, and the formerly WinFX, now .NET 3.0, pre-release versions for some time.  A few months back, I got into a state where all the .NET 3.0 stuff worked fine on my machine, except the WF extensions for Visual Studio.  If I installed the WF extensions, VS.NET 2005 would hang on the splash screen with 100% CPU forever.  Very frustrating.  I'd been putting off solving the problem, instead just uninstalling the WF stuff, but I really need to work on some WF stuff, so I tried again with the recently released RC1 bits of .NET 3.0.  Still no joy.

However, all the WF stuff works fine on my laptop.  What could be the difference? I asked myself.  AnkhSVN, I answered.  So I tried uninstalling it on my desktop machine, and hey presto, VS 2005 starts up just fine with the WF extensions intact. 


I'll look and see if there's a more recent version of AnkhSVN than I had installed, and give that a try.  I'd hate to not be able to use it.

Update:  I installed the latest version of AnkhSvn (1.0 RC3), and it works fine now. 

Monday, 11 September 2006 14:04:42 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, 10 August 2006

I have progressed a bit.  At least I seem to have an install where ADAM and AzMan will coexist happily in the same ADAM instance, and I can retrieve a user from the ADAM store.  I can also add roles to AzMan programmatically, so that's all good.  However, I still can't add an ADAM principal to AzMan as a member of a role.
This is supposed to work...

   15             string roleName = "RetailUser";

   17             MembershipUser user = Membership.GetUser("TestUser@bank.com");
   18             Console.WriteLine(user.ProviderUserKey);

   20             IAzAuthorizationStore2 azStore = new AzAuthorizationStoreClass();
   21             azStore.Initialize(0, "msldap://localhost:50000/CN=Test,CN=AzMan,O=AzManPartition", null);
   22             IAzApplication2 azApp = azStore.OpenApplication2("TestApp", null);

   24             IAzTask task = azApp.CreateTask(roleName, null);
   25             task.IsRoleDefinition = -1;
   26             task.Submit(0, null);
   27             IAzRole role = azApp.CreateRole(roleName, null);
   28             role.AddTask(roleName, null);
   29             role.Submit(0, null);

   31             IAzRole newRole = azApp.OpenRole(roleName, null);

   34             newRole.AddMember(user.ProviderUserKey.ToString(), null);
   35             newRole.Submit(0, null);

And should result in TestUser@bank.com being added to the role "RetailUser". 
Sadly, on that last line, I get

System.ArgumentException was unhandled
  Message="Value does not fall within the expected range."
       at Microsoft.Interop.Security.AzRoles.IAzRole.Submit(Int32 lFlags, Object varReserved)
       at ADAMAz.Program.Main(String[] args) in C:\Documents and Settings\PCauldwell\My Documents\Visual Studio 2005\Projects\ADAMAz\ADAMAz\Program.cs:line 35
       at System.AppDomain.nExecuteAssembly(Assembly assembly, String[] args)
       at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
       at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
       at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
       at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
       at System.Threading.ThreadHelper.ThreadStart()

All I can figure is that AzMan doesn't like the SID as generated above. 
I'm running this on XP SP2, with the 2003 Management tools, and ADAM SP1 installed.  I'm fearing that I may have to run this on 2003 R2 to get it to work.

Thursday, 10 August 2006 17:07:15 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I’m having a heck of a time trying to get ADAM and AzMan to work together.  The vision is that I’d like to use ADAM as both the store for AzMan, and the source of principals to use inside AzMan, rather than principals from AD.  Using ADAM as the store is pretty straightforward, but the second bit is turning out to be a lot harder.  In addition, I’m trying to use the ASP.NET Membership object to mediate between ADAM and AzMan, and seeing some weird stuff.  I was able to use Membership.GetUser(“username”) to pull the user from an ADAM store, but only until I installed AzMan using the same ADAM instance as its store.  After that, the call to GetUser started returning null.  Once I get that working, I think I understand how to add the principals to AzMan, but have yet to see it work.

Hmm.  (Or possibly “arghh!”.)

Work continues. 

Unfortunately, the documentation I’ve been able to turn up is sketchy at best, and it all assumes that you are using ASP.NET (I’m not) and really just want to make Membership work.  Sigh.

To further confuse things, the only way to get the AzMan management tools on XP is to install the 2003 Server management kit, but that doesn’t contain the PIA for AzMan.  That only gets installed on actual 2003 systems, so I’ll have to try and track one down.

Thursday, 10 August 2006 11:05:04 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Wednesday, 02 August 2006

I’ve been doing some exploration of the Peer Channel in WCF over the last week or so.  It’s a pretty cool idea.  Basically, the peer channel provides a way to do multi-cast messages with WCF, where all parties involved get a call at (essentially) the same time.  Better still, it’s not just a simple broadcast, but a “mesh” with some pretty complex algorithms for maximizing network resources, etc. 

The hard part is in the bootstrapping.  When you want to join the “mesh”, you have to have at least one other machine to talk to so that you can get started.  Where does that one machine live?  Tricky.  The best case solution is to use what MS calls PNRP, or the Peer Name Resolution Protocol.  There’s a well known address at microsoft.com that will be the bootstrapping server to get you going.  Alternatively, you can set up your own bootstrap servers, and change local machine configurations to go there instead.  All this depends on the Peer Networking system in XP SP2 and up, so some things have to be configured at the Win32 level to get everything working.  The drawback (and it’s a big one) to PNRP is that it depends on IPv6.  It took me quite a while to ferret out that bit of information, since it’s not called out in the WCF docs.  I finally found it in the Win32 docs for the Peer Networking system. 

This poses a problem.  IPv6 is super cool and everything, but NOBODY uses it.  I’m sure there are a few hearty souls out there embracing it fully, but it’s just not there in the average corporate environment.  Apparently, our routers don’t route IPv6, so PNRP just doesn’t work. 

The way to solve this little problem with WCF is to write a Custom Peer Resolver.  You implement your own class, derived from PeerResolver, and it provides some way to register with a mesh, and get a list of the other machines in the mesh you want to talk to.  There’s a sample peer resolver that ships with the WCF samples, which works great.  Unfortunately, it stores all the lists of machines-per-mesh in memory, which suddenly makes it a single point of failure in an enterprise system, which makes me sad…

That said, I’ve been working on a custom resolver that is DB backed instead of memory backed.  This should allow us to run it across a bunch of machines, and have it not be a bottleneck.  I’m guessing that once everyone has joined the mesh, there won’t be all that much traffic, so I don’t think performance should be a big deal. 
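The shape of such a resolver is just the PeerResolver base class with the storage swapped out.  Here's a rough sketch of what I mean; IMeshStore is a made-up abstraction standing in for the database layer, and none of these names come from the actual resolver:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ServiceModel;

// Hypothetical storage abstraction.  The real implementation would sit on a DB,
// so any resolver instance on any machine sees the same mesh membership.
public interface IMeshStore
{
    Guid Add(string meshId, PeerNodeAddress address);
    void Remove(Guid registrationId);
    void Update(Guid registrationId, PeerNodeAddress address);
    IList<PeerNodeAddress> Get(string meshId, int max);
}

public class DbPeerResolver : PeerResolver
{
    readonly IMeshStore store;

    public DbPeerResolver(IMeshStore store)
    {
        this.store = store;
    }

    // Called when a node joins the mesh; the returned token identifies the registration.
    public override object Register(string meshId, PeerNodeAddress nodeAddress, TimeSpan timeout)
    {
        return store.Add(meshId, nodeAddress);
    }

    public override void Unregister(object registrationId, TimeSpan timeout)
    {
        store.Remove((Guid)registrationId);
    }

    public override void Update(object registrationId, PeerNodeAddress updatedNodeAddress, TimeSpan timeout)
    {
        store.Update((Guid)registrationId, updatedNodeAddress);
    }

    // Called by a node that wants to bootstrap: "who else is in this mesh?"
    public override ReadOnlyCollection<PeerNodeAddress> Resolve(string meshId, int maxAddresses, TimeSpan timeout)
    {
        return new ReadOnlyCollection<PeerNodeAddress>(store.Get(meshId, maxAddresses));
    }
}
```

Since all the state lives behind IMeshStore, you can run as many resolver instances as you like behind a load balancer without any one of them being a single point of failure.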

The next step will be WS-Discovery over PeerChannel.  I’ve seen a couple of vague rumors of this being “worked on” but I haven’t seen anything released anywhere.  If someone knows different I’d love to hear about it.

Indigo | Work
Wednesday, 02 August 2006 14:10:43 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, 11 July 2006

One of the things that has irked me about using SVN with VisualStudio.NET is trying to set up a new project.  You’ve got some new thing that you just cooked up, and now you need to get it into Subversion so it doesn’t get lost.  Unfortunately, that means you have to “import” it into SVN, being careful not to include any of the unversionable VS.NET junk files, then check it out again, probably some place else, since Tortoise doesn’t seem to dig checking out over the existing location.  Big pain.

Along comes Ankh to the rescue.  I’ve been using it off and on for a while (version .6 built from the source) but now I’m hooked.  It adds the traditional “Add to source control” menu item in VS.NET, and it totally did the right thing.  Imported, checked out to the same location (in place) and skipped all the junk files.  Worked like a charm.  I’m definitely a believer now.

Tuesday, 11 July 2006 10:46:49 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 
# Wednesday, 05 July 2006

So at TechEd, Scott captured Jeff and me talking in the friendliest of fashions about the relative merits of Team System (which Jeff evangelizes) and the “OSS solution” which I helped craft at Corillian involving Subversion, CruiseControl.NET, NUnit, NAnt, et al. 

Since then, I did a bit for PADNUG about the wonders of source control, and it caused me to refine my positions a bit. 

I think that in buying a system like Team System / TFS (or Rational’s suite, etc.) you are really paying for not just process, but process that can be enforced.  We have developed a number of processes around our “OSS” solution, including integrating Subversion with Rational ClearQuest so that we can relate change sets in SVN with issues tracked in ClearQuest, and a similar integration with our Scrum project management tool, VersionOne.  However, those are policies based upon convention, and which we thus can’t enforce.  For example, by convention, we create task branches in SVN named for ClearQuest issues (e.g. cq110011 for a task branch to resolve ClearQuest issue #110011), and we use a similar convention to identify backlog items or tasks in VersionOne.  The rub is that the system depends upon developers doing the right thing.  And sometimes they don’t. 

With an integrated product suite like TFS, you not only get processes, but you get the means to enforce them.  In a previous job, we used ClearQuest and ClearCase together, and no developer could check in a change set without documenting it in ClearQuest.  Period.  It was not left up to the developer to do the right thing, because the tools made sure that they did.  Annoying?  Sure.  Effective?  Absolutely.  Everyone resented the processes until the first time we really needed the information about a change set, which we already had waiting for us in ClearQuest. 

Is that level of integration necessary?  We’ve decided (at least for now) that it’s not, and that we are willing to rely on devs doing the right thing.  You may decide that you do want that level of assurance that your corporate development processes are being followed.  All it takes is money. 

What that means (to me at least) is that the big win in choosing an integrated tool is the integration part.  Is the source control tool that comes with TFS a good one?  I haven’t used it personally, but I’m sure that it is.  Is it worth the price if all you’re looking for is a source control system?  Not in my opinion.  You can get equally capable SCC packages for far less (out of pocket) cost.  It’s worth spending the money if you are going to take advantage of the integration across the suite, since it allows you to not only set, but enforce policy. 

I’m sure that if you choose to purchase just the SCC part of TFS, or just Rational’s ClearQuest, you’ll end up with a great source control tool.  But you could get an equally great source control tool for a lot less money if that’s the only part you are interested in. 

The other thing to keep in mind is that the integrated suites tend to come with some administration burden.  Again, I can’t speak from experience about TFS, but in my prior experience with Rational, it took a full-time administrator to keep the tools running properly and to make sure they were configured correctly.  When the company faced some layoffs and we lost our Rational administrator, we switched overnight to using CVS instead, because we couldn’t afford to eat the overhead of maintaining the ClearQuest/ClearCase tools, and none of us had been through the training in any case.  I’ve heard reports that TFS is much easier to administer, but make sure you plan for the fact that it’s still non-zero.

So, in summary, if you already have a process that works for you, you probably don’t need to invest in a big integrated tool suite.  If you don’t have a process in place (or at least enough process in place), or you find that you are having a hard time getting developers to comply, then it may well be worth the money and the administrative overhead.

Wednesday, 05 July 2006 15:54:38 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, 27 June 2006

I took Steve’s comment to heart, and got rid of the two places I had been “forced” to use the CodeSnippetExpression.  It took a few minutes thought, and a minor refactoring, but I’ll sleep that much better at night. 

Vive le DOM!

Tuesday, 27 June 2006 13:10:37 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 26 June 2006

A few days back, Jeff commented that he wasn’t convinced about the value of putting an object model on top of what should be simple string generation.  His example was using XmlTextWriter instead of just building an XML snippet using string.Format. 

I got similar feedback internally last week when I walked through some of my recent CodeDOM code.  I was asked why I would write

CodeExpression append = new CodeMethodInvokeExpression(sb,"Append", new CodePrimitiveExpression(delim1));

instead of

CodeExpression append = new CodeSnippetExpression("sb.Append(delim1)");

It’s a perfectly reasonable question.  I’m using the CodeDOM’s object model, but the reality is that since we’re an all-C# shop, I’m never going to output my CodeDOM-generated code as VB.NET or J#.  So I could just as easily use a StringBuilder and a bunch of string.Format calls to write C# code and compile it using the CodeDOM’s C# compiler.  It certainly would be simpler. 

The same is true (as Jeff points out) for XML.  It’s much easier to write XML using string.Format, or a StringBuilder. 

If nothing else, I personally find that using the object-based interface makes me think harder about the structure I’m creating.  It’s not really much less error-prone than writing the code (or XML) by hand; it just provides a different way to screw up.  What it does do is force you to think at a higher level of abstraction, about the structure of the thing you are creating rather than the implementation.  You may never need to output binary XML instead of text, but using XmlTextWriter brings you face to face with the nuances of the structure of the document you’re creating.  Writing a CDATA section isn’t the same as writing an element node.  And it shouldn’t be.  Using the object interface makes those distinctions more obvious to the coder. 

However, it’s definitely a tradeoff.  You have to put up with a lot more complexity, and more limitations.  There are a bunch of constructs that would be much easier to write in straight C# than to express in the CodeDOM.  It’s easy to write a lock{} construct in C#, but much more complex to create the necessary try/finally and Monitor calls using the CodeDOM. 
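To see what I mean, here’s roughly what building the equivalent of lock(syncRoot) { sb.Append("x"); } looks like through the CodeDOM.  The variable names ("syncRoot", "sb") are made up for illustration:

```csharp
using System.CodeDom;
using System.Threading;

static class LockCodeDomSketch
{
    // Builds the statements a C# lock(syncRoot) { sb.Append("x"); } compiles down to:
    // Monitor.Enter(syncRoot); try { sb.Append("x"); } finally { Monitor.Exit(syncRoot); }
    public static CodeStatement[] Build()
    {
        CodeExpression lockObj = new CodeVariableReferenceExpression("syncRoot");

        // Monitor.Enter(syncRoot);
        CodeStatement enter = new CodeExpressionStatement(
            new CodeMethodInvokeExpression(
                new CodeTypeReferenceExpression(typeof(Monitor)), "Enter", lockObj));

        // try { sb.Append("x"); } finally { Monitor.Exit(syncRoot); }
        CodeTryCatchFinallyStatement tryFinally = new CodeTryCatchFinallyStatement();
        tryFinally.TryStatements.Add(new CodeExpressionStatement(
            new CodeMethodInvokeExpression(
                new CodeVariableReferenceExpression("sb"), "Append",
                new CodePrimitiveExpression("x"))));
        tryFinally.FinallyStatements.Add(new CodeExpressionStatement(
            new CodeMethodInvokeExpression(
                new CodeTypeReferenceExpression(typeof(Monitor)), "Exit", lockObj)));

        return new CodeStatement[] { enter, tryFinally };
    }
}
```

Two keywords in C#; twenty-odd lines of object construction in the DOM.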

I was, in fact, forced to resort to the CodeSnippetExpression in one place, where nothing but a ternary operator would do.  I still feel guilty.  :-)  Maybe it just comes down to personal preference, but I’d rather deal with the structure than the syntax, even if it means I have to write more complicated code.

Monday, 26 June 2006 14:05:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, 22 June 2006

I’m a huge fan of runtime code generation.  I think it solves a lot of common problems in modern applications.  However, the .NET implementation of runtime code gen, a.k.a. the CodeDOM, is viciously difficult to learn and use.  I think that’s holding many people back from implementing good solutions using runtime codegen. 

For example, it takes all this

CodeConditionStatement ifArrayNotNull = new CodeConditionStatement(
    new CodeBinaryOperatorExpression(propertyRef,
        CodeBinaryOperatorType.IdentityInequality,
        new CodePrimitiveExpression(null)));

CodeMethodInvokeExpression convertExpr = new CodeMethodInvokeExpression(
    new CodeTypeReferenceExpression(typeof(Convert)), "ToBase64String",
    new CodeExpression[] { new CodeCastExpression(typeof(byte[]), propertyRef) });

ifArrayNotNull.TrueStatements.Add(new CodeExpressionStatement(convertExpr));

to make this

if (myObject.Property != null)
{
    Convert.ToBase64String((byte[])myObject.Property);
}

Not only is it a lot more code using the CodeDOM, but it’s certainly not the kind of code that you can just pick up and understand. 

There must be an easier way.  Refly helps a bunch, but last time I tried it I found it to be incomplete.  It’s certainly a step in the right direction, and a model that’s much easier to understand.  I wonder if, in the end, it’s too limiting?  There may be a good reason for the complexity of the CodeDOM.  Or there may not be.

Thursday, 22 June 2006 16:32:32 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 

Next Wednesday evening, I’ll be talking about the wonders of source code control at this month’s PADNUG meeting.  If you’re currently using SCC, you might get some tips for making better use of it.  If you aren’t, you’ll find out why you should be, even if you work alone.  I’ll be focusing on the role of SCC in agile development and continuous integration.  I’ll also talk about some of the many version control systems available, and which one(s) may be right for your environment (as well as which ones you really shouldn’t be using:  you know who you are…). 

And there’s pizza!

Thursday, 22 June 2006 15:33:23 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
I hope not.  But since it seems to otherwise be a good recording, I fear it’s so.  Check out Hanselminutes 21, wherein Scott interviews Jeff Atwood and me about the relative merits of Subversion vs. Team System.  Taking the show on the road was a great idea, and all the segments came out very well.  I’m amazed that he and Carl were able to clean up the audio to the point that you hardly hear the other 7–8000 people in the room with us.
Home | TechEd | Work
Thursday, 22 June 2006 15:28:15 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 19 June 2006

TechEd is over, and I’m starting to get back into being home.  I think I’m still on Boston time, so I was up at 5:00 this morning, but other than that it’s coming together.

I think my two biggest takeaways from TechEd, as compared to the last time I went (2003) are that there were way more women at TechEd (good) and the inconsiderate cell phone use was at an all time high (bad). 

Other key learnings:

  • the food was terrible
  • the TLC concept was much cooler than the “cabanas” last time
  • it’s very easy to write good tooling in VS 2005 + GAT + DSL
  • WCF and WF rock

I had a great time meeting new people, and it was very entertaining standing in the shadow of the great man. :-)

TechEd | Work
Monday, 19 June 2006 11:26:46 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 15 June 2006

Slowing down now.  Me, not TechEd.  Very long days starting to take their toll...

That aside, I've seen some very groovy stuff over the last day or two.  WF as the controller in an MVC architecture, using rule-based activities in WF, WCF talking to an Oracle system over standards based web services (with security, reliable messaging, MTOM, et al).  Shy Coen did a good chalk talk yesterday on publish and subscribe patterns using WCF which gave me some good ideas.  I'm looking forward to seeing more about the Service Factory tomorrow morning.  Meeting lots of very smart people. 

I realize my sentences are getting shorter and shorter, and the nouns will probably start dropping out next.  Attendee party tonight.  Nothing I like better than 12,000 drunk nerds all in one place.  With batting cages.  I'll take pictures. :-)

On a completely different note, look to see some major changes to this blog in the next week or so...

Thursday, 15 June 2006 08:37:10 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 13 June 2006
Scott and I are now done with our TechEd session, and all was well.  Whewww!  It was down to the wire, and I think we finished our slides about an hour before we went on stage, but we pulled it out in the end.  I think it went pretty well, but we'll see what the evaluations have to say about it tomorrow. :-)
Now I'm hanging out at the birds of a feather gathering ('cause, ya know, free beer) and watching Stuart strolling around wearing a giant parrot on his head.  Not something you see every day.  Unless you work with Stuart.  I'm looking forward to actually attending some sessions tomorrow, since I won't be worrying about how badly I'm going to suck.
I saw a couple of good talks yesterday, my favorite of which was Don Smith and Jason Hogg talking about web services security.  If you care at all about securing web services, check out their patterns & practices book.  It completely kicks the llama's @ss.  They presented an early version of their Web Service Factory, which is a guidance automation toolkit package for building and securing web services, either in Indigo or in WSE 3.0.  Very groovy stuff. 
I also saw a good one this morning by Pravin Indurkar on building state machines with WF.  They've really thought hard about this, and I really think that it's a viable solution not only for building state machines, but for using said state machines for driving UIs.  Pravin had a very compelling UI demo that was mainly driven by the workflow instance itself.  Excellent.
More tomorrow after I've had some time to absorb some new content.

TechEd | Work
Tuesday, 13 June 2006 17:03:00 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Monday, 12 June 2006
I'm here in Boston (finally).  I haven't done it in a while, so I'd nearly forgotten how much fun it isn't to fly cross-country.  2 hour delay in Denver (just because), followed by the usual hassles of airports, shuttles, etc.
Anyway, after only 12 short hours, I was here.  I hustled over to the convention center to make it in time for the keynote, only to be pretty disappointed and leave early.  The keynote (at least the bit I stayed for) was very buzzword oriented.  I understood all the words coming out of Ozzie's mouth individually, but strung together they didn't make any sense.  So, I opted for dinner and a beer instead.
I ran into Scott in the lobby, and ended up hanging out with an ever growing crowd of clever people in a blur of beer and chicken fingers.  The highlight for me was talking Ruby with the (quite) clever and (highly) articulate John Lam.  There's apparently a lot about my programming environment I don't consider very deeply. :-)
The first sessions are about to start, so I'm off to see if I can learn something. 

TechEd | Work
Monday, 12 June 2006 06:00:02 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 24 April 2006

We do a bunch of serialization from objects into various (legacy) formats, and just this week discovered a performance bottleneck I’d never anticipated (mea culpa).  Apparently Type.GetCustomAttributes(true) and PropertyInfo.GetCustomAttributes(true) are expensive operations.  When I stop to think about it, I guess I’m not surprised, but it was not something I’d ever considered costly performance-wise.  We have to pass true in a few cases, since we may need to know about inherited attributes out on leaf nodes in the inheritance hierarchy.

Anyway, some selective caching fixed a pretty sizeable portion of the problem.  The real way to solve the problem is, not surprisingly, exactly how the XmlSerializer solves it:  runtime code generation.  But we’re not quite there yet. 

Anyhow, if you find yourself calling GetCustomAttributes(true), beware that it may be more expensive than you think…
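For what it’s worth, the selective caching we added amounts to little more than this sketch (the names here are mine, not from our codebase; a synchronized Hashtable keeps it simple and era-appropriate):

```csharp
using System;
using System.Collections;

public static class AttributeCache
{
    // Maps a Type to the object[] of its inherited custom attributes.
    private static readonly Hashtable cache = Hashtable.Synchronized(new Hashtable());

    public static object[] GetCustomAttributes(Type type)
    {
        object[] attrs = (object[])cache[type];
        if (attrs == null)
        {
            // The expensive reflection call runs only once per type.
            attrs = type.GetCustomAttributes(true);
            cache[type] = attrs;
        }
        return attrs;
    }
}
```

The same trick works for PropertyInfo.GetCustomAttributes(true); just key the Hashtable on the PropertyInfo instead of the Type.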

Monday, 24 April 2006 14:20:11 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 20 March 2006
Scott and Mo are putting together a team for a Walk for Diabetes later this Spring.  I walked with Scott and his family 6–7 years ago, and had a great time.  Jump on over and make a tax deductible (in the US) contribution, or come out and walk in May.  I’ve watched the technology that Scott uses to control his condition pretty closely over the years, and know that there’s still a long way to go.  Definitely a worthy cause.  Show your support for Team Hanselman, and know that you’re helping a lot of people in the process.
Home | Work
Monday, 20 March 2006 16:43:00 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 22 February 2006

We’ll be there.  Scott and I will be presenting Dirty SOAP: A Dynamic Endpoint Without ASMX - How and Why? as part of the architecture track.  We’ll be talking about some new work we’ve been doing around extending the reach of our banking application using WSDL, Xml Schema, and the wonders of .NET.

See you there.

Wednesday, 22 February 2006 14:55:13 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Sunday, 05 February 2006
I've been a fan and user of Plaxo for some time, but hadn't been using it as much since switching to Thunderbird for all my email at home.  Then they came out with Thunderbird support, but not for 1.5, which is (of course) what I was using.  Now everyone has caught up, and Plaxo supports Thunderbird 1.5.  Oh, happy day!  I'm once again a regular user, and love it.  It's so nice to have my contacts synchronized across all the different machines I use on a daily basis. 

Home | Work
Sunday, 05 February 2006 09:09:31 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 09 January 2006


It’s taken most of 3 work days, but I finally got a decent installation of the December CTP.  I tried uninstalling the November bits, then installing the December stuff.  No go.  The SDK won’t install.  Sigh.  So I created a new Virtual Server image with Win2K3 sp1, and tried installing from scratch.  First the WinFX runtime, then Visual Studio 2005, then the SDK, just like it says in the docs.  The SDK still won’t install, because it says I haven’t installed the runtime.  After several days’ worth of combinations, I finally ran the Repair on the runtime’s installer, then tried the SDK again.  Success!  So simple it only took 3 days. 

The price one pays to be on the bleeding edge I guess.

Monday, 09 January 2006 16:39:50 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 02 December 2005

I haven’t tried doing any WCF (formerly and still known as Indigo) development in probably 6 months or so, and some things have changed since then.  Also, this is the first time I’ve tried implementing a duplex contract.  I made some mistakes along the way, due in part to the fact that the sample code in the November CTP doesn’t match the docs (no surprise there).  Over all, though, it was way easier than I thought it might be.  Certainly easier than .NET Remoting, and the fact that there’s a built-in notion of a duplex contract solves tons of problems.

Anyway, I was trying to get my client to work, and for the life of me couldn’t figure out the errors I was getting, until it finally dawned on me.  Here’s what I had:

        static void Main(string[] args)
        {
            InstanceContext site = new InstanceContext(new CallbackHandler());

            // Create a proxy with the given client endpoint configuration.
            using (MyDuplexProxy proxy = new MyDuplexProxy(site))
            {
                proxy.SomeOperation(); // placeholder for the actual one-way call
            }

            // Waiting here is too late: the proxy was disposed at the end
            // of the using block, so the callback can never arrive.
            Console.ReadLine();
        }

It’s probably obvious to everyone who isn’t me why this won’t work.  You can’t dispose of the proxy and still expect a callback.  Now that I say it out loud it makes sense, but it didn’t occur to me for a while, since the callback itself isn’t on the proxy class.  So, I changed one line to this:

        static void Main(string[] args)
        {
            InstanceContext site = new InstanceContext(new CallbackHandler());

            // Create a proxy with the given client endpoint configuration.
            using (MyDuplexProxy proxy = new MyDuplexProxy(site))
            {
                proxy.SomeOperation(); // placeholder for the actual one-way call

                // Wait *inside* the using block, so the proxy stays open
                // until the callback has come back.
                Console.ReadLine();
            }
        }

and everything worked out swimmingly.  I continue to be impressed with how well thought out Indigo is.  While many people like to point out how many mistakes MS has made over the years, you certainly can’t fault them for not learning from them. 

Friday, 02 December 2005 14:31:53 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 01 November 2005

I’ve been working on a project that required me to turn some CLR types into a set of XML Schema element definitions so that they can be included in another file.  It stumped me for a while, and I envisioned having to reflect over all my types and build schema myself, which would be a total drag. 

Then I remembered that this is exactly what xsd.exe does.  Thank the heavens for Reflector!  It turns out to be really simple, just undocumented…

            XmlReflectionImporter importer1 = new XmlReflectionImporter();
            XmlSchemas schemas = new XmlSchemas();
            XmlSchemaExporter exporter1 = new XmlSchemaExporter(schemas);

            Type type = typeof(MyTypeToConvert);
            XmlTypeMapping map = importer1.ImportTypeMapping(type);
            exporter1.ExportTypeMapping(map);

It’s that easy!  The XmlSchemaExporter will do all the right things, and you can do this with a bunch of types in a loop, then check your XmlSchemas collection.  It will contain one XmlSchema per namespace, with all the right types, just as if you’d run xsd.exe over your assembly.

Even better, if there’s stuff in your CLR types that isn’t quite right, you can use XmlAttributeOverrides just like you can with the XmlSerializer.  So if you want to exclude a property called “IgnoreMe” from your MyTypeToConvert type…

            // Create the XmlAttributeOverrides and XmlAttributes objects.
            XmlAttributeOverrides xOver = new XmlAttributeOverrides();
            XmlAttributes attrs = new XmlAttributes();

            /* Use XmlIgnore to instruct the XmlSerializer to ignore
               the IgnoreMe prop  */
            attrs.XmlIgnore = true;
            xOver.Add(typeof(MyTypeToConvert), "IgnoreMe", attrs);

            XmlReflectionImporter importer1 = new XmlReflectionImporter(xOver);
            XmlSchemas schemas = new XmlSchemas();
            XmlSchemaExporter exporter1 = new XmlSchemaExporter(schemas);

            Type type = typeof(MyTypeToConvert);
            XmlTypeMapping map = importer1.ImportTypeMapping(type);
            exporter1.ExportTypeMapping(map);


That’ll get rid of the IgnoreMe element in the final schema.  It took a bit of Reflectoring, but this saves me a ton of time.

Work | XML
Tuesday, 01 November 2005 16:32:41 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, 27 September 2005

I’ll be starting the class I’m teaching this term at OIT tomorrow night.  Introduction to Web Services.  There’s still time to get in on it.  CST 407.

Tuesday, 27 September 2005 16:04:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 23 August 2005

I’m not actually dead.  Here I am, posting.  I’ve been doing a lot of design work lately, and have reached a generally introspective period, so I haven’t been posting much. 

That said…

I was trying to come up with a way to make asynchronous requests from a web server to a back end system.  Like really asynchronous.  Fire and forget, no socket open on the network, etc.  Maybe implemented as two one-way web services, whatever.  The glaringly obvious fact is that given the nature of HTTP, the web client has to block, since an HTTP request/response is by its very nature synchronous and blocking.

So, the solution I came up with was to do the blocking in the one place where it matters, which is in the web client.  It depends on sending some kind of correlation ID between client and server, and back.  So sending the message from ASP.NET might look like this:

    public WaitHandle Send(Envelope env)
    {
      WaitHandle h = new AutoResetEvent(false);
      string correlation = Guid.NewGuid().ToString();
      env.CorrelationID = correlation;

      lock(correlationToWaitHandle.SyncRoot)
      {
        correlationToWaitHandle.Add(correlation, h);
      }

      ThreadPool.QueueUserWorkItem(new WaitCallback(DoSending), env);

      return h;
    }

This will do an async send to your back end, keeping track of a WaitHandle by the correlation ID.  That way, you hand the WaitHandle back to the client, who can wait on it as required by HTTP.

In the meantime, the DoSending method has gone and sent the Envelope to your back end.  When it’s done, it sends the response back to the client machine by whatever means (I’m using ASMX web services) and calls the PostResult method with the response…

    public static void PostResult(Envelope env)
    {
      if(correlationToWaitHandle.Contains(env.CorrelationID))
      {
        AutoResetEvent are = null;

        lock(correlationToWaitHandle.SyncRoot)
        {
          are = (AutoResetEvent)correlationToWaitHandle[env.CorrelationID];
          if(are == null)//this shouldn't happen, but would be bad.  Other option is to check Contains again after lock.
            throw new InvalidOperationException("Threading problem!  CorrelationID got removed before lock!");

          correlationToWaitHandle.Remove(env.CorrelationID);
          waitHandleToResult[are] = env;
        }

        are.Set();
      }
      else
        throw new ArgumentException("unknown correlationID","env");
    }

Look up the wait handle associated with the correlation ID, since we need it, and it’s all the client has at this point, then take the response we got back from the server and store it against the wait handle.  Finally, signal the wait handle, so the client will know we’re done.  When that happens, the client will stop waiting, and call the Receive method to get the results.

    public Envelope Receive(WaitHandle wh)
    {
      Envelope result = null;

      lock(waitHandleToResult.SyncRoot)
      {
        if(waitHandleToResult.Contains(wh))
        {
          result = (Envelope)waitHandleToResult[wh];
          waitHandleToResult.Remove(wh);
        }
      }

      if(result == null)
        throw new InvalidOperationException("No response received.");

      return result;
    }



We look up the stored result by the wait handle, clean up our hashtables, and return the result to the client.

For the sake of simplicity, we can go ahead and wrap this in a synchronous method for the client to call.

    public void SendAndWait(ref Envelope env)
    {
      WaitHandle h = Send(env);

      // timeout is however long we're willing to wait for the back end
      // (a field on this class).
      if(h.WaitOne(timeout, false))
      {
        env = Receive(h);
      }
      else
      {
        throw new RequestTimeoutException("Timed out waiting for server");
      }
    }




That way all the wait handle stuff is hidden, and the client just sees their response come back in the [ref] Envelope in a synchronous fashion. 

By decoupling the client from the server this way, we not only get better control over how many threads are running in our server side implementation (although there are other ways to do that) but we also aren’t dependent on the ability to hold open a network socket from client to server during the time it takes to process the request.  Normally that’s not an issue, but it can be a problem in certain circumstances. 

Tuesday, 23 August 2005 14:39:05 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 24 June 2005

We’ve got some XML documents that are getting written out with way too many namespace declarations.  That probably wouldn’t be too much of a problem, except we then use those XML documents as templates to generate other documents, many with repetitive elements.  So we’re ending up with namespace bloat.  Scott and I found an example that was coming across the network at about 1.5Mb.  That’s a lot.  A large part of that turned out to be namespace declarations.  Because of the way XmlTextWriter does namespace scoping, it doesn’t write out a namespace declaration until it first sees it, which means for leaf nodes with a different namespace than their parent node, you end up with a namespace declaration on every element, like this…

<?xml version="1.0" encoding="UTF-8"?>
<ns0:RootNode xmlns:ns0="http://namespace/0">
  <ns1:FirstChild xmlns:ns1="http://namespace/1">
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
    <ns2:SecondChild xmlns:ns2="http://namespace/2">Value</ns2:SecondChild>
  </ns1:FirstChild>
</ns0:RootNode>

With our actual namespace strings, that’s roughly an additional 60 bytes per element that we don’t really need.  What we’d like to see is the namespaces declared once at the top of the file, then referenced elsewhere, like this…

<?xml version="1.0" encoding="UTF-8"?>
<ns0:RootNode xmlns:ns0="http://namespace/0" xmlns:ns1="http://namespace/1" xmlns:ns2="http://namespace/2">
  <ns1:FirstChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
    <ns2:SecondChild>Value</ns2:SecondChild>
  </ns1:FirstChild>
</ns0:RootNode>
When we edited the templates manually to achieve this effect, the 1.5Mb document went to like 660Kb.  Much better.

There doesn’t seem to be any way to get XmlTextWriter to do this, however.  Even if you explicitly write out the extra namespaces on the root element, you still get them everywhere, since the writer sees those as just attributes you chose to write, and not namespace declarations. 

Curses!  I’ve spent all day on this and have no ideas.  Anyone have any input?

Work | XML
Friday, 24 June 2005 14:53:54 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [10]  | 
# Tuesday, 14 June 2005
The new 2.0 version of MaxiVista is out, and as usual it kicks complete a**.  It supports additional monitors, so I’ve got it running across 3 right now.  Plus, the new remote control feature is fantastic for times when you need to get at your other machines but don’t want to have to shuffle keyboards, etc.  If you are doing any debugging in VS.NET, you owe it to yourself to run on at least two monitors.  I’ve found it better than one big one.
Tuesday, 14 June 2005 16:15:01 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 13 June 2005

Sigh.  It’s a constant battle.  I knew full well that XmlSerializer leaks temp assemblies if you don’t use the right constructor.  (The one that takes only a type will cache internally, so it’s not a problem.)  And then I went and wrote some code that called one of the other constructors without caching the resulting XmlSerializer instances. 

The result:  one process I looked at had over 1,500 instances of System.Reflection.Assembly on the heap.  Not so good. 

The fix?  Not as simple as I would have hoped.  The constructor that I’m using takes the Type to serialize, and an XmlRootAttribute instance.  It would be nice to be able to cache the serializers based on that XmlRootAttribute, since that’d be simple and all.  Unfortunately, two instances of an XmlRootAttribute with the same parameters return different values from GetHashCode(), so it’s not that easy.  I ended up using a string key compounded from the type’s full name and the parameters I’m using on XmlRootAttribute.  Not the most beautiful, but it’ll work.  Better than having 1,500 temp assemblies hanging around.
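The cache ends up looking something like this sketch (the class and method names are mine, and the key format is just one way to compound the type name with the XmlRootAttribute parameters we happen to use):

```csharp
using System;
using System.Collections;
using System.Xml.Serialization;

public static class SerializerCache
{
    private static readonly Hashtable cache = Hashtable.Synchronized(new Hashtable());

    public static XmlSerializer Get(Type type, XmlRootAttribute root)
    {
        // Compound key: the type's full name plus the XmlRootAttribute
        // parameters, since XmlRootAttribute itself doesn't hash usefully.
        string key = type.FullName + "|" + root.ElementName + "|" + root.Namespace;

        XmlSerializer ser = (XmlSerializer)cache[key];
        if (ser == null)
        {
            // This constructor generates (and leaks) a temp assembly,
            // so make sure it only ever runs once per key.
            ser = new XmlSerializer(type, root);
            cache[key] = ser;
        }
        return ser;
    }
}
```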

Work | XML
Monday, 13 June 2005 16:09:22 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 23 May 2005

Several times recently I’ve been bitten by “code-freeze”.  Granted, I’ve been bitten because I wasn’t tracking the “freeze” closely enough, my bad.  But there are alternatives to this (in my mind) counterproductive process.  Developers need to be free to check in changes.  They fix bugs, or add new features, and need to get those things checked in.  Their dev machine could collapse catastrophically, they could forget to check them in, or the fixes could become irrelevant if they wait too long.  Plus, the farther you get from the point where you made the changes, the more likely you are to get merge conflicts.  If the “freeze” lasts too long, multiple people may make changes to the same files, so that when the freeze is lifted, merge conflicts ensue, and that’s not fun for anyone.  Not good for continuous integration.  Code freezes pretty much take the continuous right out of it.

The solution, you ask?  Labels.  I think most people don’t realize how many groovy things you can do with good labeling policy.  Part of this stems from the widespread use of Visual SourceSafe.  If you are still using VSS, stop.  VSS and continuous integration just don’t belong in the same sentence.  Labeling doesn’t work right, branches are hard and have lasting consequences, etc.  The common excuse is “it comes for ‘free’ with my development environment”.  Of course, it’s not really “free” but that’s another issue.  Even if it were, there are several (much better) free alternatives.  CVS and Subversion top the list.  Much better choices, since they do labeling and branching right, and they discourage the exclusive check-out, which is pure evil anyway.  (Can you tell I have an opinion or two on this topic?)

What I have done (quite successfully) in the past is to use labels to “freeze” the tree rather than actually preventing people from making check-ins.  It runs something like this…  Every developer is responsible for checking in changes, and for deciding when those changes are stable.  It’s a common mistake to assume that the tip or HEAD of your source tree must always be pristine.  That keeps people from checking things in when they should.  So, decide on a label (like “BUILDABLE”) that represents that stable state.  That way developers can continue to check in changes, even potentially destabilizing ones, on the tip without causing havoc.  When the developer decides that his/her changes are stable, he or she can move the label out to encompass those changes. 

How does this work with continuous integration?  Your build server always builds from the label you’ve chosen to represent stability.  Not from the tip.  That way you should always get the latest stable build, even if that doesn’t represent the latest changes.  When it’s done successfully building, the build server should label whatever it just built with something like “BUILT” and the time/date (in another label).  Remember, each revision can have as many labels as you need, so extras don’t hurt. 

Another big benefit of this process is that if someone is just joining the team, or needs to build the software for the first time, they can pull from the BUILT label, and be assured that they are getting the most recent version that is known to be build-able. 

The end result is that you have two labels that move, leapfrogging each other as new things get marked as stable and then built, and some that don’t move indicating what was successfully built for each date/time/version.  That’s good information to have.  It’s also good for developers to be able to freely check in changes without having to worry about “freezes” or breaking the build.

Give it a shot.  It’s work up front, and requires a little training and discipline, but you’ll be glad you made the effort.

Monday, 23 May 2005 10:58:15 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 18 April 2005

I’m doing a little refactoring this week of some code (not authored by me) that involves a WinForms app that drives tests.  I didn’t set out to do any refactoring, but in trying to fix some bugs and add a few simple features, what I’m finding everywhere is

        private void btnTheButton_Click(object sender, System.EventArgs e)
        {
            //all the program logic is here…
        }

Time and again we see this “pattern” in code.  In order to use that same functionality in the context of a different UI action, I now have to take the time to factor out the program logic into a separate method which can be called from two different locations (as should have been done in the first place). 

Now, in defense of the guilty, WinForms encourages this behavior, which I must admit is my single biggest gripe about WinForms.  Say what you will about MFC, I loved the way MFC’s designers defaulted to the MFC Command/CommandHandler pattern.  Every UI action represented a command, and it was up to the command handler map to decide what method actually handled the command.  Thus, the default pattern made it easy to do things like having a button and a menu item that both fired the same action.  I wish some of that had rubbed off on WinForms instead of the current VB-esque, every-bit-of-code-gets-attached-to-a-UI-element pattern.

However, it’s not just in WinForms that this is a problem.  I found that when I was teaching Web Services using ASP.NET, I ran into the same pattern.  A Web Service should be thought of no differently than a UI.  It’s an external way to make things happen.  So, having all of one’s program logic right in the ASP.NET Web Service code (.asmx) introduces the same problem.  Do yourself (and whoever is going to have to maintain your code in the future) a favor and create separate “code behind” classes for your web service that hold the real logic.  All that should be in your ASP.NET web service code is argument processing and calls into your “code behind”.  A full-blown MVC architecture might be overkill for a web service, but removing your logic from what is essentially your “user-interface” will save many headaches later on down the road.
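As a sketch of what I mean (BankingService, AccountLogic, and GetBalance are hypothetical names, not from any real service of ours), the .asmx method stays a thin shell over a plain class that holds the actual logic:

```csharp
using System;
using System.Web.Services;

public class BankingService : WebService
{
    [WebMethod]
    public decimal GetBalance(string accountId)
    {
        // Only argument processing lives in the web service layer...
        if (accountId == null || accountId.Length == 0)
            throw new ArgumentException("accountId is required", "accountId");

        // ...the real work is delegated to a "code behind" class, so a
        // WinForms app, a test harness, or another service can call the
        // same logic without going through HTTP at all.
        return new AccountLogic().GetBalance(accountId);
    }
}
```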

Monday, 18 April 2005 14:41:58 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, 22 March 2005
Travis presents some hints for understanding the Hanselman
Tuesday, 22 March 2005 15:42:44 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 17 March 2005
Scott and Rory have another funny TechEd video up.  This one’s funny even if you don't know them. :-)
Thursday, 17 March 2005 09:45:38 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 10 March 2005

…and good riddance. 

I was talking with Scott and some others at lunch today about the announcement that support is ending for VB6, and the ensuing petition.

OK, I understand that switching to VB.NET (or C#) will be fraught with pain, uncertainty and chaos. 

Get over it.  Cowboy up, people.  COM is dead (long live COM).

The bus is leaving… 

Thursday, 10 March 2005 16:05:06 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 24 January 2005

 Henrik Gemal has added some code to Thunderbird (as a first step, he says) to provide extra protection from phishing.   Sounds like a pretty good start to me.  You’ll get a warning if the link you clicked on in an email is an IP address instead of a DNS name, or if it’s a different address from the one in the text of the link.  A little goes a long way here, and this will certainly be a big help. 

Good work!

Monday, 24 January 2005 10:09:58 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 22 December 2004

Being a dedicated Firefox user, one of the few things that was still thwarting me was SharePoint.  We use SharePoint internally for a ton of stuff, and it was a drag to have to fall back to that other browser.  SharePoint pages look and work fine in Firefox, but I was having to reauthenticate on every single page, which really hindered my enjoyment of the experience.

I finally figured out how to get Firefox to do NTLM, which means I don’t have to deal with the authentication dialogs, thereby reducing my dependence on IE to one and only one application (Oddpost). 

It’s not at all obvious how to make it work, and it took me a few tries.  You have to go to your Firefox address bar and type about:config.  This will bring up the internal config editor, which allows you to set all kinds of properties that influence Firefox’s behavior.  Look for the key called network.automatic-ntlm-auth.trusted-uris.  Set that key’s value to a comma separated list of servers you want NTLM auth for.  So if your internal SharePoint sites are on servers called Larry and Mo, use “larry,mo”.  You can also add the same value to the key network.negotiate-auth.trusted-uris.  It’s unclear to me if that second one is required, but I set it, and everything works.  Now SharePoint works like a champ, and authenticates automatically.
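If you’d rather not click through about:config on every machine you use, the same two preferences can (as far as I can tell) also go in the user.js file in your Firefox profile directory; “larry,mo” here is just the example server list from above:

```
user_pref("network.automatic-ntlm-auth.trusted-uris", "larry,mo");
user_pref("network.negotiate-auth.trusted-uris", "larry,mo");
```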

Wednesday, 22 December 2004 10:53:30 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [18]  | 
# Tuesday, 30 November 2004

We’ve recently switched to the latest version of CruiseControl.NET (0.7) and my favorite new feature is the ability of ccnet to deal with CVS directly.  Previously we had to include code in our NAnt build file to do a CVS update at the beginning of the build, then do the CVS tag (so we can tag all the files with the build version) at the end of the build if it was successful. 

The new ccnet will do the update and the label for us, but…

It only supports one format for the labels, which is to allow you to specify a prefix, like “1.0.”, and it will increment a number and append it, so you get “ver-1.0.1”, “ver-1.0.2”, etc.  That number resets to 1 every time you restart the ccnet executable.  Hmmm.  What we wanted was to use our previous scheme, which involved the version number we use for our .NET executables.  We used the version task from NAntContrib to create that version number on the formula (x.y.monthday.secondssincemidnight). 

Luckily, ccnet .7 provides an interface for the labeling, so you can write your own scheme.  Ours now looks like this…


    public class OurLabeller : ILabeller
    {
        public OurLabeller()
        {
        }

        private string majorMinor = "2.0";

        [ReflectorProperty("majorminor", Required=false)]
        public string MajorMinor
        {
            get
            {
                return majorMinor;
            }
            set
            {
                majorMinor = value;
            }
        }

        #region ILabeller Members

        public string Generate(IIntegrationResult previousLabel)
        {
            string ver = string.Format("{0}.{1}.{2}",majorMinor,getMonthDay(),calculateSecondsSinceMidnight());
            return ver.Replace(".","_");//it's a label, so no dots...
        }

        #endregion

        #region ITask Members

        public void Run(IIntegrationResult result)
        {
            result.Label = Generate(result);
        }

        #endregion

        private int calculateSecondsSinceMidnight()
        {
            DateTime today = DateTime.Now;
            return (today.Hour * 3600 + today.Minute * 60 + today.Second) / 10;
        }

        public int getMonthDay()
        {
            DateTime time = DateTime.Now;
            string timeString = string.Format("{0}{1}",time.Month,time.Day);
            return Convert.ToInt32(timeString);
        }
    }

So now ccnet will use our labeling scheme, as long as we stick our new class in an assembly called ccnet.*.plugin.dll.  The config file bit looks like

  <labeller type="ourlabeller">
    <majorminor>2.0</majorminor>
  </labeller>

We want the version of the assemblies to match the newly generated label, so we need to read it in our NAnt buildfile.  CCNET stuffs the label in a property that gets passed to NAnt called ccnet.label, so we can read that in our NAnt build…

  <if propertyexists="ccnet.label">
   <script language="C#">
    public static void ScriptMain(Project project) {
     //Shorten the project string (like, to 1.3.4)
     string projectVersion = project.Properties["ccnet.label"];
     project.Properties["project.version"] = projectVersion.Replace("_",".");
    }
   </script>
  </if>

Tuesday, 30 November 2004 16:22:47 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 08 November 2004

Our CTO, Chris, recently turned me on to Ruby.  I've been playing around with it a bit over the last few weeks, and I've got to say I'm pretty impressed.  I really appreciate that it was designed, as they say, according to the “Principle of Least Surprise”, which means that it basically works the way you would think. 

Ruby has a lot in common with Smalltalk, in that “everything is an object” kinda way, but since Ruby's syntax seems more (to me at least) like Python or Boo, it seems more natural than Smalltalk.  Sure, you don't get the wizzy browser, but that's pretty much OK.  When you apply the idea that everything is an object, and you're just sending them messages to ask them (please) to do what you want, you get some amazingly flexible code.  Sure, it's a bit squishy, and for code I was going to put into production I still like compile time type safety, but for scripting or quick tasks, Ruby seems like a very productive way to go.

Possibly more impressive was the fact that the Ruby installer for Windows set up everything exactly the way I would have thought (“least surprise” again), including adding the ruby interpreter to the path (kudos) and setting up the right file extension associations so that everything “just worked”.  Very nice.

The reason Chris actually brought it to my attention was to point me at Rails, which is a very impressive MVC framework for writing web applications in Ruby.  Because Ruby is so squishily late-bound, it can do some really amazing things with database accessors.  Check out the “ActiveRecord” in Rails for some really neat DAL ideas. 

I'm assuming that that same flexibility makes for some pretty groovy Web Services clients, but I haven't had a chance to check any out yet.  Anyone have any experience with SOAP and Ruby?

Monday, 08 November 2004 18:48:14 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 29 October 2004

The last working day before Halloween is upon us (as evidenced by the giant Sponge Bob sitting over in QA), and Drew brings us a poem of distributed systems horrors.  A great read!

Friday, 29 October 2004 09:44:02 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 21 October 2004
Wow, Rory drew my head.  I've arrived. :-)
Work | XML
Thursday, 21 October 2004 09:35:02 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 18 October 2004

Being a non-CS degree holder (I can tell you all about Chinese vs. Japanese Buddhism though) I've always been a bit intimidated by the idea of parser/compiler building.  Luckily, there's Coco/R.  I'm intimidated no longer!  I've been playing around with creating a custom scripting language for some of the code generation we're doing, and this turned out to be a really easy way to parse/compile the scripts.  Coco/R is distributed under the GPL, and source is available.  There are versions for both C# and Java. 

I was really impressed at how easy it was.  Basically you write an EBNF definition of your files to be parsed, and then annotate them with native (C# or Java) code that does the compilation.  Here's an example from the sample that comes with the distribution...

MulOp<out int op>
=                        (. op = -1; .)
  ( '*'                  (. op = times; .)
  | '/'                  (. op = slash; .)
  ).

RelOp<out int op>
=                        (. op = -1; .)
  ( '='                  (. op = equ; .)
  | '<'                  (. op = lss; .)
  | '>'                  (. op = gtr; .)
  ).

The EBNF stuff is on the left, and the native code on the right.  Super easy, and the parsers work great.  Very fast.  They are also very easy to debug, as the generated code is very well laid out.  It corresponds to the EBNF constructions, so debugging the process is very easy.

If you ever find yourself needing to do some parsing, check it out.
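To make the mapping concrete, here's a hand-written sketch of the kind of recursive-descent method Coco/R generates for a production like MulOp above.  This is an approximation, not actual generator output; the MiniParser class and the token constants are made up for illustration.

```csharp
using System;

// Hand-written approximation of the method Coco/R would generate for
// the MulOp production -- class name and token constants are made up.
public class MiniParser
{
    public const int times = 0, slash = 1;
    private readonly string src;
    private int pos;

    public MiniParser(string src) { this.src = src; }

    // MulOp<out int op> = ( '*' | '/' ).
    public int MulOp()
    {
        int op = -1;                                   // (. op = -1; .)
        if (pos < src.Length && src[pos] == '*')       // '*'
        {
            op = times;                                // (. op = times; .)
            pos++;
        }
        else if (pos < src.Length && src[pos] == '/')  // '/'
        {
            op = slash;                                // (. op = slash; .)
            pos++;
        }
        return op;
    }

    public static void Main()
    {
        Console.WriteLine(new MiniParser("*").MulOp());   // 0 (times)
        Console.WriteLine(new MiniParser("/").MulOp());   // 1 (slash)
    }
}
```

The comments show where each semantic action from the grammar fragment lands in the generated method.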

Monday, 18 October 2004 11:32:36 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 08 October 2004

Turns out that if you have multiple versions of the same assembly, say v1.0.0.1 and v1.0.0.2, in your probing path, only one of them will ever get loaded.  Here's the deal...

I've got an app in C:\myapp; in it are myassembly.dll v1.0.0.1 and myapp.exe.

There's a subdirectory, C:\myapp\new, that contains myassembly.dll v1.0.0.2, plus a new assembly, newassembly.dll, which is dependent on v1.0.0.2 of myassembly.dll.

In myapp.exe.config, I've included the "new" subdirectory in the application's private path, meaning that it will look there for assemblies when doing assembly binding.

  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <probing privatePath="new;"/>
  </assemblyBinding>

When the application loads newassembly.dll, I would have thought that it would correctly load the v1.0.0.2 version of myassembly.dll, and that the application would load the v1.0.0.1 version to which it was bound.  Alas, that turns out not to be the case.  Fusion comes back with

LOG: Post-policy reference: myassembly, Version=, Culture=neutral, PublicKeyToken=xxxyyyzzz
LOG: Cache Lookup was unsuccessful.
LOG: Attempting download of new URL file:///C:/myapp/myassembly.DLL.
LOG: Assembly download was successful. Attempting setup of file: C:\myapp\myassembly.DLL
LOG: Entering run-from-source setup phase.
WRN: Comparing the assembly name resulted in the mismatch: Build Number
ERR: The assembly reference did not match the assembly definition found.
ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.

We had assumed that fusion would skip the wrong version and keep probing for the right one.  It looks like it just finds the wrong one and bails.  Of course, if we put both versions in the GAC, it works just like we would expect, which makes sense, that being what the GAC is for and everything. :-)
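For what it's worth, if the GAC isn't an option, Fusion can also be pointed at a specific copy of each version with codeBase hints in myapp.exe.config.  This is just a sketch using the assembly name and (anonymized) public key token from the log above; note that a codeBase outside the appbase only works for strong-named assemblies:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="myassembly" publicKeyToken="xxxyyyzzz" culture="neutral"/>
        <!-- tell Fusion exactly where each version lives -->
        <codeBase version="1.0.0.1" href="file:///C:/myapp/myassembly.dll"/>
        <codeBase version="1.0.0.2" href="file:///C:/myapp/new/myassembly.dll"/>
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```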

Friday, 08 October 2004 14:20:04 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 07 September 2004

By now everyone has heard that WinFS (the new SQL-based, meta-data driven file system) won't be shipping with Longhorn.  It's not really surprising.  Not only is it a fairly challenging technology, but the surrounding behavioral issues are, IMHO, an even bigger deal, and will take a good long while to resolve. 

The reason meta-data based solutions haven't dominated the world has nothing to do with technology.  RDF works just fine.  So do XSD and Web Services.  So does SQL Server, so I'm convinced that the technical hurdles to achieving WinFS are solvable.  The trouble is getting people to use it.  People just don't get it.  Nobody uses RDF, in part because it's way too complicated, but also because most people just don't get meta-data.  It's hard enough to get people to use proper keywords on their HTML pages. 

Similarly, the reason that Web Services have yet to revolutionize the world of B2B eCommerce has nothing to do with technology.  The parts of the technical picture that aren't solved by SOAP/WSDL are quickly being addressed by WS-*.  The real issue is schemas.  The barrier to real B2B isn't security, or trust, or routing/addressing, or even federation.  It's the fact that no two companies in the entire world can agree on what a PO looks like.  The barriers are institutional, not technical.

The same thing applies to WinFS.  Even if the technical side can be made to work reliably (of which I have no doubt given enough time), it's the institutional issues that are hard.  What do you call the tags that get applied to your file system?  If any applications are going to take real advantage of them, they have to be agreed upon in common.  Anyone remember BizTalk.org?  It's not easy to reach consensus on what seems like a simple problem.  What do you call the meta-tags that are applied to your data?  It's great that you can arbitrarily add new tags through the explorer, but if no application besides explorer supports them, is it anything more than a great new way to do sorts? 

I think in the long run it's that problem that has delayed the release of WinFS.  There has to be a plan for handling the institutional issues first, or MS will end up with another great piece of technology that no one knows what to do with.

Tuesday, 07 September 2004 12:55:56 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Despite the lack of technical content here lately, I do actually still work for a living.  I've been working on some implementation stuff lately, and have been struck by something that I've known for a while, but that always comes home in a big way when I have to use someone else's UI.  There's no substitute for use cases!  I'm working with an API right now that was obviously designed by someone who thought it seemed like a good idea at the time, but didn't spend any time thinking about how this API was actually going to be used in the real world.  The end result is that to actually use it, you have to go through way too many steps, each of which takes a different set of parameters that you may or may not already have.  It's very frustrating. 

I'm working with Scott on most of it, and from the very beginning of our involvement he predicted that we'd end up spending 6 hours a day reading docs and 2 hours coding to achieve the desired end result.  It's actually been more like 7/1.  I spend all day reading docs and trying to figure out how the API is supposed to work, then write 20 lines of code to solve the problem. 

My own solution to this problem when I've been on the API writing side has been TDD.  If I can write a test case, it at least forces me to think about how the API will be used enough to avoid some of the major pitfalls.  On a larger scale project, the only solution is full-blown use case analysis.  Write the use cases first, then figure out what the API should look like.  Too often a hard-core technologist designs the API in the way he feels best exemplifies the underlying data structures.  The problem is that the users almost never care about the nature of the underlying data structures.  They need high level methods that answer questions, not CRUD based data exposition.  At the very least, TDD forces us to think through what some of those questions might be.  It's easy to miss some, however, so you really need to do use cases to find out what all the questions are before you write the API.

Tuesday, 07 September 2004 10:34:26 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 01 September 2004

It's been quiet here on the old blog lately.  Partly because I've been pretty busy, and partly because I spent 10 days on vacation in SoCal.  I've got to say it pretty much rocked. 

I hadn't been to Disneyland since I was in high school, and this time I got to take my kids (who are 6 and 9).  It was fabulous.  There are a bunch of new rides, and many of the old rides from when we were kids are gone to make room for them, but all the new ones were pretty great.  Splash Mountain, Indiana Jones, etc. were very cool.  My kids had a great time, and it's a lot of fun to see that kind of thing through them.  We spent three solid days at Disney (2 at Disneyland and 1 at California Adventure) and that was just about the right amount of time.  One of the new crazes that's sprung up since I was a kid is getting the characters to sign autographs.  Both my kids had a good time hounding people in big fuzzy suits to sign their books.  It gave them a goal that kept them focused on moving around even when they were tired.  If your kids are into autograph hunting, it's worth checking out some of the "character meal" opportunities.  There are several restaurants around the parks and hotels where the characters hang out, and they'll come by your table and chat (in pantomime) with the kids, etc.  We went to "Goofy's Kitchen" at the Disneyland Hotel (where we stayed) and "Ariel's Grotto" at California Adventure, and both had pretty good food and lots to entertain the kids, although be prepared for some sticker shock; they aren't cheap.  Some pretty groovy, although less kid-friendly, dining was to be had at the "Blue Bayou", which is actually inside Pirates of the Caribbean.

This was the first time I'd been to California Adventure, and I was really impressed.  I had heard that it was more adult-focused than Disneyland, but I didn't find that to be the case.  Both my kids loved "Soaring over California" and there was lots of kid-related activity in the Bug's Life area.  The rides were pretty cool.  My son dragged me onto California Screamin' twice (the big roller coaster) and I coaxed him into going on the "Tower of Terror" with me, which was pretty dang fun.  My daughter really liked the Redwood Creek Challenge Trail, which is basically a big forest-ranger-themed play structure. 

After our three grueling sun-up to well-past-sundown days at Disney we hit Legoland, the San Diego Zoo, SeaWorld, and last but not least my son's big requested attraction, the La Brea tar pits in LA.  And somehow we still managed to get in most of a day at the beach in San Juan Capistrano. 

All that's behind me now and it's back to work.  Not only is there plenty of work at work, but I've got a new Web Services class coming up, and then SellsCon in October. 

Home | Work
Wednesday, 01 September 2004 09:53:43 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 13 August 2004

Looks like Scott and I will be speaking at Chris Sells' XML DevCon this year.  Last year I spoke on XML on transformer monitors.  This year Scott and I will be talking about the work we've been doing with online banking and XML Schema. 

If it's anything like last year's, the conference should be pretty amazing.  The speakers list includes some pretty serious luminaries.  In fact, it's pretty much a bunch of famous guys... and me.  :-)

Sign up now!

Friday, 13 August 2004 16:50:12 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 04 August 2004

I realize this is a rant, and not strictly technical, but here's one soapbox (among many) I'm willing to step up to. 

For ~95% of business software, the actual code needed to implement it ends up being, if not trivial, then at least not technically challenging.  Business software projects go over budget/schedule etc. because the business rules aren't known (to anyone) or because no one in the organization really knows what they want.  As a further complication, people who are tightly focused on a particular domain tend to make things sound way more complicated than they turn out to be, because they aren't building their models at the right level. 

When it actually comes down to writing the code to implement the loosely defined business system, it's almost always "just code".  Not challenging to write, often tedious and time consuming, but not in any way difficult from a CS standpoint. 

So, the challenge when writing such software is in cutting through all the BS to get at which little bits of "just code" you have to write.  Once you get that part out of the way, the actual coding bit is easy.  So when people complain that projects are going way over budget/schedule/what have you, it's almost always (in my experience) due to the fact that engineers spend 6 hours a day poring over vague and conflicting sets of documents, and 2 hours writing code. 

Along the same lines, anyone who's looked seriously into enterprise level web services understands that the barriers to real business to business communications aren't technical barriers.  They are organizational.  If only two or more businesses could agree on what the schema for a purchase order should look like, we'd see a real renaissance in B2B computing.  The technologies are all there, the hard part is getting the organizations to work together.  That's the problem to solve. 


Wednesday, 04 August 2004 11:42:05 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Friday, 30 July 2004

After an inspirational post by Loren Halvorson, we decided to emulate his team's use of an Ambient Orb to track the status of our continuous integration builds. 

After being backordered for a couple of months, it finally showed up yesterday.  It's happily glowing green on my desk as I write this.  Unfortunately, what we didn't know is that in order to bend the orb to our will and get it to show our build status, we have to shell out $20 a quarter for their "premium" service.  Humph.  Plus, I've been having some "issues" with it.  It's supposed to reflect the current status of the DJIA, which is down right now, but the orb is green, which is supposed to indicate that the Dow is up.  I can't seem to make it be anything but green.  I know it's not a problem with the lights, because when it was "synchronizing" it strobed through just about every color there is.  After it's done, though, I only get green. 

Anyway, all that together has prompted us to order their serial interface kit, so we can control the thing directly without depending on their wireless network.  Maybe not as elegant, but it should be deterministic, which is more important in this case.  Seems like a lot of work for what was supposed to be a funny toy.  :-)

Friday, 30 July 2004 11:36:02 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, 23 July 2004

Our continuous integration efforts came crashing down around our ears yesterday.  Our build started failing on Wednesday evening, after someone checked in a ton of changes all at once (not very continuous, I know).  That turned into a snowball of issues that took Scott and me all day yesterday to unravel.  Even after we'd fixed all the consequences of such large code changes (that was done maybe by noon) things continued not to work.  To make matters more dicey, at some point during the day our CVS server went TU, and had to be rebooted.  Builds continued to fail, mostly with weird timeout problems.  We were also seeing lots of locks being held open, which was slowing down the build, contributing to the timeouts, etc. 

We ended up building manually just to get a build out, which was suboptimal but necessary. 

Thankfully, Scott was able to track down the problem last night.  Turns out that when the CVS box went down, our build server was in the middle of a CVS "tag" operation, a.k.a. labeling.  That left a bunch of crud on the CVS box that meant that subsequent tagging operations failed miserably, thus causing the timeouts, etc.  A few well placed file deletions on the CVS server cleaned things up, and we're (relatively) back to normal now. 

While I think that continuous integration is a fabulous idea, it's times like these that bring home just how many things have to go right at the same time for it to work.  What a tangled web we weave. :-)

Friday, 23 July 2004 10:54:00 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 12 July 2004

[Update]:  Turns out that this problem was caused by one and only one COM class, that happens to be a COM singleton, meaning that no matter how many times you call CoCreateInstance, you always get back the same instance.  It gets wrapped by the first interop assembly that gets loaded (which makes sense) and when another bit of code asks for a reference it gets the same wrapper back.  Which also makes sense.  You can't have two wrapper objects pointing to the same COM instance, or chaos would ensue.  So, the problem is not that you can't load two interop assemblies with different versions; it's that you can't wrap the same instance twice with two differently-versioned wrapper classes. 

We ran afoul of an interesting problem at work today, which upon reflection (no pun intended) makes sense, but I'd never thought about before. 

Due to an historical accident, we've got an application that's using our libraries, and also some primary interop assemblies for our core product.  Unfortunately, they aren't the same version of the primary interop assemblies that we're using for our libraries.  So "primary" is kind of fuzzy in this case.  The problem is that only one of them ever gets loaded.  They don't work in standard .NET side-by-side fashion.  They have different version numbers, etc., so they should.  However, when we started thinking about it, there can really only be one LoadLibrary and CoCreateInstance happening deep in the bowels somewhere, so having two primary interop assemblies pointing to the same COM objects doesn't really make any sense.  Dang.  Anyway, since they don't both get loaded, we get a nasty invalid cast exception where there shouldn't be one if we were dealing with properly side-by-side-able .NET assemblies.

If it's not one thing it's another.


Monday, 12 July 2004 18:07:38 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 09 July 2004

Dare Obasanjo posits that the usefulness of the W3C might be at an end, and I couldn't agree more.  Yes, the W3C was largely behind the standards that "made" the Web, but they've become so bloated and slow that they can't get anything done.

There's no reason why XQuery, XInclude, and any number of other standards that people could be using today aren't finished other than the fact that all the bureaucrats on the committee all want their pet feature in the spec, and the W3C process is all about consensus.  What that ends up meaning is that no one is willing to implement any of these specs seriously until they are full recommendations.  6 years now, and still no XQuery.  It's sufficiently complex that nobody is going to try to implement anything other than toy/test implementations until the spec is a full recommendation.

By contrast, the formerly-GXA, now WS-* specs have been coming along very quickly, and we're seeing real implementations because of it.  The best thing that ever happened to Web Services was the day that IBM and Microsoft agreed to "agree on standards, compete on implementations".  That's all it took.  As soon as you get not one but two 800 lb. gorillas writing specs together, the reality is that the industry will fall in behind them.  As a result, we have real implementations of WS-Security, WS-Addressing, etc.  Those of us in the business world are working on "Internet time"; we can't wait around 6-7 years for a real spec just so every academic in the world gets his favorite thing in the spec.  That's how you get XML Schema, and all the irrelevant junk that's in that spec. 

The specs that have really taken off and gotten wide acceptance have largely been de facto, non-W3C-blessed specs, like SAX, RSS, SOAP, etc.  It's time for us to move on and start getting more work done with real standards based on the real world.

SOAP | Web Services | Work | XML
Friday, 09 July 2004 10:35:44 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 24 June 2004
Jim Newkirk posts a fabulous use of the much overlooked alias feature in C# to make existing NUnit test classes compile with Team System.  That's just cool.
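The trick hinges on C#'s using-alias directive, which maps a name onto any type for the rest of the file.  A minimal sketch of the idea, using a made-up stand-in attribute rather than the real NUnit/Team System attribute names:

```csharp
using System;
// The alias: any use of "TestClass" in this file now means
// Demo.FixtureAttribute. The real trick points aliases like this at
// NUnit or Team System attribute types (e.g. TestFixtureAttribute).
using TestClass = Demo.FixtureAttribute;

namespace Demo
{
    // Stand-in for a test framework's fixture attribute.
    [AttributeUsage(AttributeTargets.Class)]
    public class FixtureAttribute : Attribute { }
}

[TestClass]   // resolves through the alias to Demo.FixtureAttribute
public class MyTests
{
    public static void Main()
    {
        // Confirm the attribute really landed on the class.
        Console.WriteLine(Attribute.IsDefined(typeof(MyTests), typeof(Demo.FixtureAttribute)));
    }
}
```

Swap the right-hand side of the alias and the same test class compiles against a different framework's attributes without touching the test code itself.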
Thursday, 24 June 2004 11:32:55 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I started teaching a class at OIT this week on "Web Services Theory", in which I'm trying to capture not only the reality, but the grand utopian vision that Web Services were meant to fulfill (more on that later).  That got me thinking about the way the industry as a whole has approached file formats over the last 15 years or so. 

There was a great contraction of file formats in the early 90s, which resulted in way more problems than anyone had anticipated I think, followed by a re-expansion in the late 90s when everyone figured out that the whole Internet thing was here to stay and not just a fad among USENET geeks. 

Once upon a time, back when I was in college I worked as a lab monkey in a big room full of Macs as a "support technician".  What that mostly meant was answering questions about how to format Word documents, and trying to recover the odd thesis paper from the 800k floppy that was the only copy of the 200 page paper and had somehow gotten beer spilled all over it.  (This is back when I was pursuing my degree in East Asian Studies and couldn't imagine why people wanted to work with computers all day.)

Back then, Word documents were RTF.  Which meant that Word docs written on Windows 2.0 running on PS/2 model 40s were easily translatable into Word docs running under System 7 on Mac SEs.  Life was good.  And when somebody backed over a floppy in their VW bug and just had to get their thesis back, we could scrape most of the text off the disc even if had lost the odd sector here and there.  Sure, the RTF was trashed and you had to sift out the now-useless formatting goo, but the text was recoverable in large part.  In other sectors of the industry, files were happily being saved in CSV or fixed length text files (EDI?) and it might have been a pain to write yet another CSV parser, but with a little effort people could get data from one place to another. 

Then the industry suddenly decided that it could add lots more value to documents by making them completely inscrutable.  In our microcosm example, Word moved from RTF to OLE Structured Storage.  We support monkeys rued the day!  Sure, it made it really easy to serialize OLE embedded objects, and all kinds of neat value added junk that most people didn't take advantage of anyway.  On the other hand, we now had to treat our floppies as holy relics, because if so much as one byte went awry, forget ever recovering anything out of your document.  Best to just consider it gone.  We all learned to be completely paranoid about backing up important documents on 3-4 disks just to make sure.  (Since the entire collection of all the papers I ever wrote in college fit on a couple of 1.4Mb floppies, not a big deal, but still a hassle.)

Apple and IBM were just as guilty.  They were off inventing "OpenDoc" which was OLE Structured Storage only invented somewhere else.  And OpenDoc failed horribly, but for lots of non-technical reasons.  The point is, the industry in general was moving file formats towards mutually incomprehensible binary formats.  In part to "add value" and in part to assure "lock in".  If you could only move to another word processing platform by losing all your formatting, it might not be worth it. 

When documents were only likely to be consumed within one office or school environment, this was less of an issue, since it was relatively easy to standardize on a single platform, etc.  When the Internet entered the picture, it posed a real problem, since people now wanted to share information over a much broader range, and the fact that you couldn't possibly read a Word for Windows doc on the Mac just wasn't acceptable. 

When XML first started to be everyone's buzzword of choice in the late 90s, there were lots of detractors who said things like "aren't we just going back to delimited text files? what a lame idea!".  In some ways it was like going back to CSV text files.  Documents became human readable (and machine readable) again.  Sure, they got bigger, but compression got better too, and disks and networks became much more capable.  It was hard to shake people loose from proprietary document formats, but it's mostly happened.  Witness WordML.  OLE structured storage out, XML in.  Of course, WordML is functionally RTF, only way more verbose and bloated, but it's easy to parse and humans can understand it (given time). 

So from a world of all text, we contracted down to binary silo-ed formats, then expanded out to text files again (only with meta-data this time).  It's like a Big Bang of data compatibility.  Let's hope it's a long while before we hit another contracting cycle.  Now if we could just agree on schemas...

Work | XML
Thursday, 24 June 2004 11:24:31 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 17 June 2004

Sigh.  Somewhere along the line (probably because I opened my mouth) I became installer boy for our project.  I've learned some pretty interesting things along the way, mostly having to do with how inadequate the VS.NET setup projects can be if you do anything even mildly out of the ordinary. 

We have two potential targets for our installer, a developer's machine and a production machine, and for a production machine we have to install a subset of the developer installation.  The way to accomplish such an option in a VS.NET setup project is to add a custom dialog called Radio Button (2) and then set its properties so that your text shows up in the right places on a pre-built dialog that contains nothing but 2 radio buttons.  OK, not so bad.  Then you name an environment variable that will hold the result of the radio group; let's call it SPAM.  You give the radio buttons their own values, like 1 and 2.  Again, OK, not so bad.

The part that really sucks is how you use the results of the dialog box.  For every file that I want to not be installed in production (every single file, not just folders) I have to set its Condition property to "SPAM=1".  I understand that it was probably really easy to implement, and probably meets the needs of many, but really, how lame is that?  Because the Condition property of the folder doesn't propagate to its files, I'll have to add that condition to every new file that gets added to the project.  And no one but me will be likely to understand how the setup works without asking or doing some research.  Hmph!

On top of that, I've learned how amazingly limited the support for registering COM interop objects in .NET is.  I need to register some COM objects with Version Independent ProgIDs, so clients won't have to constantly upgrade the ProgIDs they use to find our objects.  There's absolutely no support for that in System.Runtime.InteropServices.  None.  You can't even do registration on a type-by-type basis.  You can only register all the types in an assembly at once.  Doesn't leave a lot of room for custom behavior. 

So, I ended up writing my own Version Independent ProgIDs to the registry in an installer class.  Not elegant, but there you have it.
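For the curious, the installer-class approach boils down to writing the ProgID keys by hand with Microsoft.Win32.Registry.  A rough sketch, assuming .NET Framework on Windows; the ProgID, version number, and GUID naming below are made-up placeholders, and writing under HKEY_CLASSES_ROOT needs admin rights:

```csharp
using System;
using Microsoft.Win32;

public static class ProgIdRegistrar
{
    // e.g. VersionedProgId("MyCompany.MyServer", 1) -> "MyCompany.MyServer.1"
    public static string VersionedProgId(string progId, int majorVersion)
    {
        return progId + "." + majorVersion;
    }

    // Writes both the versioned ProgID and the version-independent
    // ProgID, with CurVer pointing at the current versioned one.
    public static void Register(string progId, int majorVersion, Guid clsid)
    {
        string versioned = VersionedProgId(progId, majorVersion);

        // versioned ProgID -> CLSID
        using (RegistryKey k = Registry.ClassesRoot.CreateSubKey(versioned + @"\CLSID"))
            k.SetValue("", clsid.ToString("B"));

        // version-independent ProgID -> CLSID
        using (RegistryKey k = Registry.ClassesRoot.CreateSubKey(progId + @"\CLSID"))
            k.SetValue("", clsid.ToString("B"));

        // CurVer lets clients that use the version-independent ProgID
        // resolve to whatever version is current
        using (RegistryKey k = Registry.ClassesRoot.CreateSubKey(progId + @"\CurVer"))
            k.SetValue("", versioned);
    }
}
```

Calling Register from an installer class's Install override (and deleting the keys in Uninstall) covers what RegistrationServices won't do per-type.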

Thursday, 17 June 2004 13:37:05 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 27 May 2004

Monopoly or no monopoly, this is what put and what keeps Microsoft on top:

I had an incredible experience today at the sails pavilion - I asked about Speech Server...So I'm sitting at the SQL cabana and the Microsoft helper gets on the radio and asks someone if they've got the Speech Server group at their cabana. About fifteen seconds later, every MS radio in earshot lights up with "Any Speech Server expert, any Speech Server expert, we need an answer in the cabana ASAP!" Really really impressive! [Jason Fredrickson ]

Back in the dim time, I started out as a Mac developer before moving to Win32 (I never had to write a far pointer :-) ).  I was pretty up on developing for the Mac.  I went to two WWDCs in the early/mid 90's.  I was a total Apple bigot.  Why aren't I still?  Because Apple had (and probably has) a habit of completely jerking developers around, when they weren't ignoring them completely.  The barrier to entry was high.  I still have the many $100s worth of Apple Developer books that you pretty much had to buy to write for the Mac.  Apple's own development tools were way overpriced, and horribly under-useable.  Worst of all was the System 8 debacle.  I spent quite a bit of time and effort getting ready for "Copeland" which was Apple's first "System 8" replacement for their antiquated System 7.  I even went and learned Dylan, since Apple said they were going to be moving into the future with Dylan on Copeland.  (Dylan, and particularly Apple's Dylan implementation which was written in Lisp, was awesome at the time.  Everything Java brought to the table later and more useable, IMHO.)  Then Apple pulled the rug out, never shipped Copeland, or any of the DocPart stuff they were touting with IBM, killed Dylan, etc.  I think that was when they really started losing market share.  They were alienating developers at the same time that MS was coming out with Win32 and courting developers.  No matter how cool your operating system is, if nobody writes apps for it, it's not going anywhere (witness BeOS).

I'm not saying MS has never led developers astray (Cairo?) but overall they have made a concerted effort to attract developers and make them feel valued, which leads to more high quality apps being available on Windows than anywhere else. 

I've been to numerous MS conferences, and always had a good time, and more importantly I always felt like MS was seriously committed to making my life easier and showing me how to better get my job done.  That's worth a lot.

Thursday, 27 May 2004 10:04:49 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 25 May 2004

If you've ever gotten involved in the Java vs. .NET debate (and who hasn't) check out N. Alex Rupp's blog.  He's a dyed-in-the-wool Java guy who's going to TechEd this week and talking with .NET developers and INETA people about what they like about .NET.  He has some very interesting things to say about the Java developer - .NET developer relationship.  A very fair and unbiased look at the issues and how the communities interact internally and externally.

It's very refreshing to see someone being so open and honest about the pros and cons of both platforms.  (And it's pretty courageous, given the longstanding antagonisms, for him to not only go to TechEd, but to advertise his Java-guy-ness.)

Tuesday, 25 May 2004 15:05:22 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 

On a whim last weekend I spent some time trying to learn Smalltalk.  So many of the seminal thinkers around XP, etc. were originally Smalltalk heads, so I wanted to see what all the fuss was about. 

I downloaded a copy of Squeak for Windows and went through several of the tutorials.  Pretty interesting stuff, but I think I'll stick to C#.  I can see why people are hot for Smalltalk (or used to be anyway).  Because it's so rigidly OO, it forces you into doing the right things with regard to object orientation.  And the tools are pretty cool. Having such an advanced class browser and inspection system is a big advantage. 

However, I think I'll stick to strongly typed languages (of which C# is currently my favorite).  I guess my overall impression of Smalltalk is that for people who were very competent, you could get a lot of work done in a very short amount of time because the system is so flexible.  On the other hand, because the system is so flexible, I would guess that people who were less than amazingly competent (or confident) would have a very hard time getting anything done at all, because you have to understand exactly what you are doing, and many errors will only present themselves at runtime.  It would be fun to work in such a flexible system, but I really appreciate the benefits of compile-time type checking. 

At the same time, I was playing with a copy of MSWLogo (a free Logo implementation for Windows).  What a blast.  I haven't played with Logo since the Apple II days.  Once upon a time you could actually get a physical turtle that held a pen, and connected to a serial port.  You could write Logo programs to get the turtle to scoot around on a really big piece of paper.  I always thought that was a cool idea.  I was also surprised at how much Logo and Smalltalk have in common syntactically. 

I was trying to get my 8-year-old son interested in Logo, but I think he's still a little too young.  I met with a resounding "whatever, Dad".  I guess I didn't get my first Commodore PET until 6th or 7th grade. :-)

Tuesday, 25 May 2004 10:37:49 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 21 May 2004

I was just reading Steven Padfield's article on unit testing ASP.NET code by creating your own HttpContext outside of IIS (which is a very useful technique) and it got me thinking about a technique that I've gotten a fair amount of mileage out of lately, namely creating my own context object that will be available anywhere in a call stack, just like the HttpContext is.

When I started looking into how to implement such a thing, I was thinking in terms of deriving from ContextBoundObject, which seemed like overkill.  So I fired up the ever-handy Reflector and found out how HttpContext handles itself.  Turns out that no ContextBoundObject is needed.  Hidden in the bowels of System.Runtime.Remoting.Messaging is a method called CallContext.SetData(string, object) that will stick a named object value into your call context, which can be retrieved from anyplace on the current call stack.  Pretty handy.  If you wrap that in an object like HttpContext, you can store your own context values, and potentially provide context-sensitive methods such as HttpContext.GetConfig().

What you end up with is an object that looks something like this:

using System;
using System.Runtime.Remoting.Messaging;

namespace MyContext
{
    public class ContextObject
    {
        private const string ContextTag = "MyContextObject";

        private ContextObject()
        {
        }

        /// <summary>
        /// returns a valid context object, creating one if
        /// none exists
        /// </summary>
        public static ContextObject CurrentContext
        {
            get
            {
                object o = CallContext.GetData(ContextTag);
                if (o == null)
                {
                    o = new ContextObject();
                    // store the new context so later calls on this
                    // call stack see the same instance
                    CallContext.SetData(ContextTag, o);
                }

                if (!(o is ContextObject))
                    throw new ApplicationException("Corrupt ContextObject");

                return (ContextObject)o;
            }
        }

        /// <summary>
        /// Clears out the current context. May be useful
        /// in situations where you don't have complete
        /// control over your call stack, i.e. you aren't at the top of
        /// the application call stack and need to maintain
        /// a separate context per call.
        /// </summary>
        public static void TeardownCurrentContext()
        {
            CallContext.FreeNamedDataSlot(ContextTag);
        }

        private string contextValue1;

        /// <summary>a sample value to store to/pull from context</summary>
        public string ContextValue1
        {
            get { return contextValue1; }
            set { contextValue1 = value; }
        }
    }
}

You can use the context object from anywhere in your call stack, like this:

public class Tester
{
    public static void Main(string[] args)
    {
        ContextObject co = ContextObject.CurrentContext;
        co.ContextValue1 = "Hello World";
        OtherMethod();
    }

    public static void OtherMethod()
    {
        // same call stack, so we get the same context instance back
        ContextObject co = ContextObject.CurrentContext;
        Console.WriteLine(co.ContextValue1);
    }
}

The resulting output is, of course, "Hello World", since the context object retains its state across calls.  This is a trivial example, and you wouldn't really do it this way, but you get the idea.
Friday, 21 May 2004 10:52:34 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 20 May 2004
Now that I think about it some more, this is a problem that WinFS could really help to solve.  The biggest reason that people don't use things like RDF is sheer laziness (you'll notice the rich RDF on my site :-) ) but if we can use the Longhorn interface to easily enter and organize metadata about content, it might be a cool way to generate RDF or other semantic information.  Hmmmm...  It would be fun to write a WinFS -> RDF widget.  If it wasn't for that dang day job...
XML | Work
Thursday, 20 May 2004 12:43:55 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Scott mentions some difficulty he had lately in finding some information with Google, which brings to my mind the issue (long debated) of the semantic web.  Scott's problem is exactly the kind of thing that RDF was meant to solve when it first came into being, lo these 6-7 years ago. 

Has anyone taken advantage of it?  Not really.  The odd library and art gallery.  Why?  Two main reasons: 1) pure laziness, since it's extra work to tag everything with metadata, and 2) RDF is nearly impossible to understand.  That's the biggest rub.  RDF, like so many other standards to come out of the IETF/W3C, is almost incomprehensible to anyone who didn't write the standard.  The whole notion of writing RDF tuples in XML is something that most people just don't get.  I don't really understand how it's supposed to work myself.  And, as with WSDL and other examples, the people who came up with RDF assumed that people would use tools to write the tuples, so they wouldn't have to understand the format.  The problem with that (and with WSDL) is that since no one understands the standard, no one has written any usable tools either. 

The closest that anyone has come to using RDF in any real way is RSS, which has turned out to be so successful because it is accessible.  It's not hard to understand how RSS is supposed to work, which is why it's not really RDF.  So attaching some metadata to blog content has turned out to be not so hard, mostly because most people don't go beyond a simple category, although RSS supports quite a bit more. 

The drawback to RDF is that it was created by and for librarians, not web page authors (most of whom aren't librarians).  Since most of us don't have librarians to mark up our content with RDF for us, it just doesn't get done.  Part of the implicit assumption behind RDF and the semantic web is that authoritative information only comes from institutional sources, who have the resources to deal with semantic metadata.  If blogging has taught us anything, it's that that particular assumption just isn't true.  Most of the useful information on the internet comes from "non-authoritative" sources.  When was the last time you got a useful answer to a tech support problem from a corporate web site?  The tidbit you need to solve your tech support problem is nowadays more likely to come from a blog or a USENET post than it is from the company who made the product.  And those people don't give a fig for the "semantic web". 


Work | XML
Thursday, 20 May 2004 12:29:22 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 13 May 2004

This is a bit convoluted, but stick with me...

We've got a fairly involved code generation process that builds C# classes from XSDs, which is all well and good.  The code generation uses some utility classes to help with reading the schema files, and those have unit tests, which is cool.  The code generation happens as part of our regular build, so if I screw up the CodeSmith template such that the code generation fails, I'll know about it at build time.  If I screw up the CodeSmith template such that the code generation succeeds, but the results don't compile, I'll also find out about that at build time. 

However, since the whole build/codegen process takes a while, it's not the kind of thing you want to run all the time after every change (as is the XP way).  So, how do I unit test the generated classes?  I think I have a workable solution, but it took some thinking about. 

I ended up writing a set of unit tests (using NUnit) that run the code generation process on a known schema.  The resulting C# file is then compiled using the CodeDom, then I can test the resulting class itself.  As a happy side benefit, I'll know if the CodeSmith template runs, and if the resulting C# compiles without having to run the whole build process.  Pretty cool.

An interesting side note: one of the things we do with our generated classes is serialize them as XML using the XmlSerializer.  I discovered during this process that if you generate an assembly with the CodeDom, you can't use the XmlSerializer on the resulting classes unless you write the assembly to disk.  I was using an in-memory-only assembly, and the XmlSerializer gave me a very helpful exception stating that the assembly must be written to disk, which was easy enough to do.  I'm assuming that this is because the XmlSerializer itself is going to generate another dynamic assembly, and it needs something to set its reference to.  Interesting.

Thursday, 13 May 2004 15:18:08 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 07 May 2004


The foremost proof of which (even more so than road rage) is the office coffee pot.  The fact that so many feel that they can take the last of the coffee and then blithely walk away is evidence that we've fallen on un-civil times. 

My employer is kind enough to provide three different kinds of coffee, and on more than one occasion I've walked up to find all three carafes completely empty.  I rage, swear, cry to the heavens, and then set about making three pots of coffee, as is only fitting and civic-minded. 

I understand that in some circles it's seen as stigmatizing and degrading to have to "make the coffee" at the office, since someone is just supposed to attend to it, but come on people, the 50's are long gone, and administrative assistants have no more duty to make coffee than do each and every one of us. 

So be civil, thoughtful, respectful of others, and make the @#(@*$% coffee!  The 20 seconds it takes are worth the feeling of pride you can take in not being one of those non-coffee making so-and-sos.


Friday, 07 May 2004 12:45:18 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, 06 May 2004
I've long said that the internet knows everything.  Now the internet has everything too.  All I have to do is mention the idea for a T-Shirt and lo and behold (thanks to Greg) my wish (and my son's) is granted.  Thanks man.
Thursday, 06 May 2004 14:55:04 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Scott makes a comment about people being born coders.  I wholeheartedly agree.  I've known lots of people who have had reasonably successful careers as coders, but who've never been extraordinary.  Plenty of people pull down a paycheck doing competent work as coders, but never achieve greatness.  I agree that it's something inborn.  Maybe it's the ADD and dyslexia.  :-)  Whatever it is, my experience has shown that some people just don't have it, and no matter how hard they work they'll never get over the hump. 

I recently spoke with a psychologist who was musing about the idea that the high level of coffee consumption in the NW is due to the fact that many if not most coders are ADD, and caffeine works to calm them down enough to work, in the same way that Ritalin works on kids.  Not scientific, perhaps, but it makes some sense. 

My son thinks it's pretty cool to be a geek.  He's started referring to himself as “geek, son of geek”.  He's a little miffed that there aren't any T-Shirts that say that.  In his case it's tough to distinguish nature from nurture, since I was out as geek before he was born. 

Maybe someday in the near future they'll map out the geek gene. 

Home | Work
Thursday, 06 May 2004 11:30:12 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 29 April 2004

OK, usually I try to stay out of the whole MS/OSS thing, but I find this interesting enough to comment on.  Just to get this out of the way, I'm all in favor of OSS, and I think both Linux and Gnome are pretty cool.  I'm a dedicated Firefox user personally.  (I realize that sounds a lot like "some of my best friends are gay".)

That said, I find statements [via TSS.NET] like "Gnome and Mozilla need to align to counter [Longhorn]" to be pretty strange.  That sounds an awful lot like the statement of someone who sees Gnome/Mozilla/Linux as a commercial, for-profit enterprise.  Why else is "competition" with MS something to consider?  (One possible alternative is that the OSS community is driven by people who have way too much ego invested in OSS projects.)  I thought the whole "point" of OSS was that it was driven by a different set of concerns than commercial, for-profit software.  Am I missing something? 

Either the Gnome/Mozilla community is being run as if it was commercial, or it's being run by people who are driving the direction of the community for the sake of their own egos.  Neither is very attractive. 

Thursday, 29 April 2004 11:57:55 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 27 April 2004

As previously mentioned, Scott and I are working on some pretty cool stuff involving generating code from XML Schema, and then doing various bits of stuff to those generated objects.  Today what I'm pondering is not the code generation part per se, but the bits of stuff afterwards part. 

For example, if you want to take your generated class and serialize it as a string, there are (at least) three options:

  • Generate the serialization code as part of the code generation itself.  E.g. code gen the object's ToString() method at the same time as the rest of the code.
  • Insert some cues into the object at code gen time, in the form of custom attributes, then reflect over the object at runtime and decide how to serialize it.
  • Insert some cues into the object at code gen time, in the form of custom attributes, then reflect over the object once at runtime and dynamically build the serialization code, which can then be cached (a la the XmlSerializer).

The advantage to number one is that it's fairly easy to understand, the code is pretty easy to generate, and the resulting code is probably the most performant.  The disadvantages are that the code embedded in the code gen process is a bit harder to maintain, since it's "meta-code", and that if you want to support different serialization formats for the same object, you might have to generate quite a lot of code, which embeds a lot of behavior into the objects.  That may not be a bad thing, since the whole point is that you can always regenerate the objects if the serialization behavior needs to change.

The advantage to number two is that it is the most flexible.  The objects don't need to know anything at all about any of the formats they might be serialized into, with the exception of having to be marked up with any appropriate custom attributes.  You can add new serialization methods without having to touch the objects themselves.  The biggest drawback (I think) is the reliance on reflection.  It's hard for some people to understand the reflection model, which might make the code harder to maintain.  Also, there are theoretical performance issues with using reflection all the time (many of which would be solved by #3) although we've been running this code in production and it's performing fine, but that's no guarantee that it always will.  Scott is also a bit concerned that we may run into a situation where our code isn't granted reflection privileges, but in that case we'll have plenty of other problems to deal with too.  As long as our code can be fully trusted I don't think that's a big issue.

The advantage to number three is essentially that of number two, with the addition of extra performance at runtime.  We can dynamically generate the serialization code for each object, and cache the compiled assembly just like the XmlSerializer does.  The biggest disadvantages are that it still leaves open the reflection permission issue, and (first and foremost) it's a lot more work to code it that way.  Like #1, you end up writing meta-code which takes that extra level of abstraction in thinking about.

Practically speaking, so far I find #1 easier to understand but harder to maintain, and #2 easier to maintain but much harder to explain to people. 
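To give a feel for what option two looks like in practice, here's a minimal sketch: a hypothetical [Serialize] custom attribute applied at code-gen time, and a serializer that reflects over it at runtime.  The attribute, the Order class, and the name=value output format are all invented for illustration; they aren't our actual code.

```csharp
using System;
using System.Reflection;
using System.Text;

// Hypothetical marker attribute the code generator would emit onto
// properties that should participate in serialization.
[AttributeUsage(AttributeTargets.Property)]
public class SerializeAttribute : Attribute
{
    public string Name;
    public SerializeAttribute(string name) { Name = name; }
}

// Stand-in for a generated class.
public class Order
{
    [Serialize("id")]   public int Id      { get { return 42; } }
    [Serialize("name")] public string Name { get { return "widget"; } }
}

public class ReflectionSerializer
{
    // Walk the object's properties and emit only those marked [Serialize],
    // so the object itself knows nothing about the output format.
    public static string ToText(object o)
    {
        StringBuilder sb = new StringBuilder();
        foreach (PropertyInfo p in o.GetType().GetProperties())
        {
            object[] attrs = p.GetCustomAttributes(typeof(SerializeAttribute), true);
            if (attrs.Length == 0) continue;
            SerializeAttribute a = (SerializeAttribute)attrs[0];
            sb.AppendFormat("{0}={1};", a.Name, p.GetValue(o, null));
        }
        return sb.ToString();
    }
}
```

Adding a second output format is then just another method that walks the same attributes, which is exactly the flexibility argument for this option.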

If anyone has any opinions or experiences I'd love to hear them.

Tuesday, 27 April 2004 10:39:53 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, 16 April 2004

Now this is just plain annoying...

As I've mentioned before, I'm doing some ongoing work with code generation from XmlSchema files.  Developers mark up XmlSchema documents with some extra attributes in our namespace, and that influences how the code gets generated.  Think of it as xsd.exe, only this works.  :-)

So today a new problem was brought to my attention.  I read in schema files using the .NET XmlSchema classes.  OK, that works well.  For any element in the schema, you can ask its XmlSchemaDatatype what the corresponding .NET value type would be.  E.g. if you ask an element of type "xs:int", you get back System.Int32.  "xs:dateTime" maps to System.DateTime, etc.

When you want to serialize .NET objects using the XmlSerializer, you can add extra custom attributes to influence the behavior of the serializer.  So, if you have a property that returns a DateTime, but the schema type is "date" you need some help, since there's no underlying .NET Date type. 

public class MyClass
{
    private DateTime someDate;

    [XmlElement(DataType="date")]
    public DateTime SomeDate
    {
        get { return someDate; }
        set { someDate = value; }
    }
}

The XmlElementAttribute tells the serializer to enforce the rules corresponding to the schema datatype "date" and not "dateTime".  Groovy so far.

However, the XmlSerializer only supports certain schema types for certain CLR types.  Makes sense, as this

 [XmlElement(DataType="date")]
 public Int32 SomeDate

doesn't really make any intuitive sense. 

So now for the catch.  The CLR types that the schema reading classes (System.Xml.Schema) map to schema types don't in all cases match the CLR types that the XmlSerializer maps to schema types.  The schema reader says that "xs:integer" maps to System.Decimal.  OK, that's consistent with the XmlSchema part 2 spec.  Unfortunately, the XmlSerializer says that "xs:integer" must map to a System.String.  So does "xs:gYear", and several others. 

The end result is that I can't rely on the XmlSchemaDatatype class to tell me what type to make the resulting property in my generated code.  Arrrrrggggghhhhh!!!!!!!

The two solutions are basically

  • tell people not to use integer, gYear, etc. (possibly problematic)
  • have my code embody the differences in type mapping between XmlSchema and XmlSerializer (lame)

I haven't delved, but I can only assume that xsd.exe uses the latter of the two. 
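If I do end up going with the second option, the fix could be as small as an override table consulted before trusting what the schema reader says.  A sketch (the helper class and method are hypothetical; the two overrides shown are the mismatches mentioned above):

```csharp
using System;
using System.Collections;

public class SerializerTypeMap
{
    // Schema types where the CLR type the XmlSerializer expects differs
    // from the one System.Xml.Schema reports (e.g. it reports Decimal
    // for xs:integer, but the serializer wants String).
    private static Hashtable overrides = new Hashtable();

    static SerializerTypeMap()
    {
        overrides["integer"] = typeof(string);
        overrides["gYear"]   = typeof(string);
    }

    // Returns the CLR type to use for a generated property: the override
    // if one exists, otherwise whatever the schema reader reported.
    public static Type Resolve(string schemaTypeName, Type schemaReaderType)
    {
        if (overrides.ContainsKey(schemaTypeName))
            return (Type)overrides[schemaTypeName];
        return schemaReaderType;
    }
}
```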

Work | XML
Friday, 16 April 2004 15:39:42 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 12 April 2004

If you have any interest in builds or code maintenance (like I do) check out vil.  It's a code metrics tool that works directly on .NET assemblies.  The interesting thing (IMHO) about that approach is that when it gives metrics for lines of code, complexity, etc. that's all with regard to IL, and not the C# or VB.NET or whatever that you wrote in the first place. 

That makes sense when you consider all the stuff that gets compiled out, although a metric like lines of code may be misleading, particularly in C#, where keywords can expand out to quite a bit of code.  Most of the other metrics still make sense with regard to IL, and of course even lines of code is useful when used for comparison rather than absolutist purposes. 

Monday, 12 April 2004 15:20:40 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Roy's released the latest version of The Regulator, which continues to build upon what has become my favorite regex tool.  You can now generate .NET code for your regex straight from the tool, and for those of us who have to work with someone else's regular expressions, there's an analyzer that explains the regex in English.  Pretty cool stuff. 

Thanks Roy.

Monday, 12 April 2004 14:53:00 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 07 April 2004

I'm really wanting to fold some kind of code coverage tool into my automated build process.  I'm using CruiseControl.NET to do continuous integration builds, which run NUnit tests, FxCop, build docs, etc, all through the wonders of NAnt.  The problem comes when trying to decide which code coverage tool to run.  For more info, see a presentation I did for a user group meeting a while back.

There seem to be two different projects, both called NCover, that provide code coverage for .NET projects.  One of them is at GotDotNet, and it uses the profiling API to get coverage data from a running assembly.  Pretty cool, but I haven't been able to get it to work reliably with NUnit tests.  I think that NUnit doesn't allow enough idle time for the NCover output to get flushed to disk before the process goes away.  Drag.  However, I see that a new version was released yesterday, so I'll have to try it out and see how that goes.

The other NCover follows a more traditional source code instrumentation approach.  You build your project with it, which injects coverage hooks into your source, then you run your tests and out come coverage reports.  Cool.  And somehow it just feels more reliable to me than depending on the (slightly witchy) profiler interface.  On the other hand, it requires a lot more setup and is much more intrusive.  But it seems to work. 

Does anyone have any experience with one or the other?  Or any opinions?

Builds | Work
Wednesday, 07 April 2004 09:17:44 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [5]  | 
# Monday, 05 April 2004

It's amazing the difference a few good samples can make.  The last time I looked at integrating with VisualStudio.NET's new File / Project dialogs, all the samples were fairly gnarly looking ATL COM objects that, even if they weren't very complicated, looked WAY too intimidating. 

Thanks to a couple of good samples, I now see how simple it is to write your own new file or project wizards.  Yes, there's still some COM involved, but thanks to the wonder of .NET, you don't really have to be too concerned with the details.  Check out the AddinWizardCS sample for a great example of how to put up a wizard dialog, get some user input, then customize your template files based on that input, all inside of VS.NET. 

The biggest trick is figuring out what goes in the .vsz and .vsdir files.  These are the little snippets of text files that you put deep in the bowels of your VS.NET installation directories to tell VS what to add to the dialog, and the ProgId of the thing to run if a user wants to create one of your projects or files.  Just text files, but figuring out the fields of the .vsdir files does take a moment of reflection (not the coding, but the thinking kind).  The help makes it look more complicated than it has to be.  Check out the samples first and see if they make sense to you before hitting the docs. 

I found this much easier to deal with than the "easy" wizard, which involves a bunch more config, and some jscript for good measure.  Just implement IDTWizard yourself, and it's remarkably straightforward.

Monday, 05 April 2004 16:30:37 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

It just gets better every day.  I've pretty much removed dependence on IE in favor of Firefox.  One of the last few things that was bugging me got taken care of last week, and I can now right-click to subscribe to feeds in Firefox.  (Big thanks to Stuart Hamilton!)  And also fairly recently, FreeTextBox introduced support for Firefox.  Happy day!  Although I must admit I've been too lazy (ahem, I mean busy) to get my DasBlog installation working with the new version. 

The only things I ever use IE for are Oddpost (which I can appreciate would be a lot of work to port) and SharePoint.  I'm hoping Firefox will support NTLM soon, so I can start using it for SharePoint too.  SharePoint sites look fine in Firefox, but retyping my credentials on every page gets old.  I had seen some announcements that Firefox .8 would support NTLM, but it looks like that didn't happen.  If I'm missing something, please let me know. 

On a separate note, I'll try to be better about posting anything interesting.  Life's been intruding pretty seriously the last month or so (as life does).

Monday, 05 April 2004 09:52:04 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 01 April 2004
Dan Fernandez points out some new additions to Whidbey to help combat public indecency.
Thursday, 01 April 2004 09:49:49 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 02 March 2004
Rico Mariani has posted yet another great performance improvement story, and a fabulous illustration of the fact that the best way to solve performance problems is to really know your numbers.
Tuesday, 02 March 2004 11:54:49 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 

There's been a lot of ruminating around here lately on the subject of CS degrees.  Who has one, who doesn't, and does it really matter.  I've done a fair amount of thinking on the subject, and thought I'd just go ahead and let it out.

I don't have a CS degree.  I had a very nice time earning my (basically useless) BA in East Asian Studies.  Seemed like a good idea at the time.  But then the bottom dropped out of the Japanese economy, and nobody really cared about doing business with them any more.  There went that consulting job I was hoping for.  The bottom line remains, however, that knowing what I know now I wouldn't have done things any differently. 

At the time I wasn't in touch with my inner geek.  Liked working with computers and all, but who wants to do that for a living?  Some random circumstances and the needs of my bank account landed me in a QA monkey job at Central Point Software (remember PC Tools?), and the rest is history.  I've taken a lot of professional development courses, and a few at local community colleges, but not much "formal" education in computer science.  And yet, I've managed to (I like to think) have a pretty successful career as a software developer / architect (so far). 

I think the reality is that getting a CS degree doesn't make you a good problem solver.  I've known and interviewed plenty of people with CS degrees who I wouldn't hire to write code.  I've known plenty of very talented coders who don't have CS degrees.  So on the whole, my personal experience has led me to believe that for the vast majority of jobs, it really doesn't matter.

I'm not saying there aren't jobs in which it does matter.  I'd be hard pressed to write an operating system, or embedded software, or graphics internals.  But how many people have those jobs?  Not that many, in the greater scope of things. 

I've thought about going back to school in pursuit of an MSCS or some such, but over the last 6-7 years, multiple people have told me that it really wouldn't make much difference at this point.  And I have to hold down a full time job and raise two kids, etc, so taking the time and money to go to grad school hasn't been a big priority. 

And yet, I still run into people who are hung up on the idea that you can't possibly be good at your job unless you have that piece of paper hanging somewhere.

All right, enough of my obviously biased opinion.  But if any of you out there happen to be in high school wondering what to do with yourselves, pursue a liberal arts degree.  If you learn how to think about things, what you choose to think about is just a detail of syntax.

Tuesday, 02 March 2004 11:04:32 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [3]  | 
# Friday, 13 February 2004

As I'm sure you've probably heard elsewhere, SourceGear has released a new version of their Vault SCC tool.  The single user version is now free, lots of cool new features, etc. 

I was just looking through the list of new features, and this one has to be my favorite

 Blame: Displays an annotated view of the file showing which user last modified each line.

That'll come in handy. :-)

Friday, 13 February 2004 15:55:26 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 15 January 2004

I'm currently working on doing some instrumenting for the sake of unit testing (using NUnit) and was doing some thinking about Schematron.  I haven't heard much about it lately, and I don't know if people are going forward with it, but it's a pretty compelling idea. 

For those of you who haven't looked into it, Schematron allows you to define assertions based on XPath expressions that can be used to validate XML documents, over and above what XML Schema provides.  For example, if your business logic specifies that for any given order, a retail customer can only have purchases totalling up to $100, that's something that you can't really specify in XSD, since it involves cross-element rules, but you can with Schematron. 

Anyway, I happen to have XML serialized versions of the objects I'm interested in lying around, so I could create a shim that would work with NUnit to do Schematron validation (using Schematron.NET).  However, I might not always have XML around.  It would be pretty cool if you could do the same kind of declarative validation of objects.  I wonder if ObjectSpaces will facilitate something like that??
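To make that concrete, a Schematron rule for the order-total example might look something like this (the element and attribute names in the context and test expressions are invented for illustration):

```xml
<sch:schema xmlns:sch="http://www.ascc.net/xml/schematron">
  <sch:pattern name="Retail order limits">
    <sch:rule context="order[@customerType='retail']">
      <!-- sum the line-item prices and assert the business rule -->
      <sch:assert test="sum(item/@price) &lt;= 100">
        A retail customer's order may not total more than $100.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>
```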

Work | XML
Thursday, 15 January 2004 15:56:54 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 06 January 2004

There’s a new video of a Longhorn-based real estate application up on MSDN.  I’ve got to say this is one of the coolest things I’ve seen in a long while.  The new Avalon execution model allows you to do some truly amazing things with Smart Client applications.  These same techniques could be applied to stock tracking, or any other kind of notification based application.  Just send out an email with a notification, and an attached Smart Client for more info.  What a great idea. 

Tuesday, 06 January 2004 09:54:18 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 09 December 2003

I'm making a web request to a site that uses certificate authentication, and it's the first time I've done that with .NET, so bear with me.  I'm doing something like the following...

HttpWebRequest req = (HttpWebRequest)WebRequest.Create("https://mysite.com");

X509Certificate x509 = X509Certificate.CreateFromCertFile(@"c:\temp\mycert.cer");
// attach the cert to the request
req.ClientCertificates.Add(x509);
That works fine, but it seems like there should be an easier way.  The certificate already exists in my local certificate store, or I'm pretty sure this wouldn't work at all, so why do I have to export the cert to a .cer file in order to use it?  Shouldn't I be able to get a reference to it directly from the certificate store?  Or would that involve too much unmanaged yuckiness?  It just seems like an extra step, and administrative hassle, but maybe that's just the way it is. 

Any suggestions or feedback welcome.
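For what it's worth, later versions of the framework added System.Security.Cryptography.X509Certificates.X509Store, which allows exactly this.  A sketch, assuming .NET 2.0 or later (the subject name in the usage below is hypothetical):

```csharp
using System.Security.Cryptography.X509Certificates;

public class CertFromStore
{
    // Look up a certificate in the current user's personal store by
    // subject name, rather than exporting it to a .cer file first.
    public static X509Certificate2 Find(string subjectName)
    {
        X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            X509Certificate2Collection found = store.Certificates.Find(
                X509FindType.FindBySubjectName, subjectName, false);
            return found.Count > 0 ? found[0] : null;
        }
        finally
        {
            store.Close();
        }
    }
}
```

Usage would then be `X509Certificate2 cert = CertFromStore.Find("mysite.com");` followed by `req.ClientCertificates.Add(cert);`, with no export step.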


Got it. 

Tuesday, 09 December 2003 20:58:54 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 19 November 2003

The code was all working just fine in test (heh, heh), but in production, suddenly we were getting

exception: System.IO.FileNotFoundException:
File or assembly name ox0vcwlz.dll, or one of its dependencies, was not found.  File name: "ox0vcwlz.dll"
at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Boolean isStringized, Evidence assemblySecurity, Boolean throwOnFileNotFound, Assembly locationHint, StackCrawlMark& stackMark) 

Very sad.  Even more interesting was that on each run, the assembly name was different.  OK, probably the XmlSerializer then.  But why did it work in test, and not under other conditions? 

It all became clear when we logged the full exception...  Fusion says:

=== Pre-bind state information ===
LOG: Where-ref bind. Location = C:\WINNT\TEMP\ox0vcwlz.dll
LOG: Appbase = D:\Program Files\XXX
LOG: Initial PrivatePath = NULL
Calling assembly : (Unknown).
LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
LOG: Attempting download of new URL file:///C:/WINNT/TEMP/ox0vcwlz.dll.

Ah ha!  Turns out the user context we were running under didn't have permissions to the c:\winnt\temp directory.  Ouch.  An easy fix, but not one that was intuitively obvious, at least to me.  I knew that the XmlSerializer created dynamic assemblies, but I didn't know they had to be persisted to disk. 

Something to think about.

Work | XML
Wednesday, 19 November 2003 13:50:00 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [3]  | 
# Friday, 14 November 2003

A few brief tidbits I couldn’t resist mentioning:



Friday, 14 November 2003 09:06:53 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Saturday, 08 November 2003

I’d have to agree with Clemens that one of the coolest parts of .NET is custom attributes.  I’m constantly amazed at how much you can achieve through the judicious use of attributes.  Best of all, since they are much like attributes in XML (or should be), you can carry them over from one to the other.  For example, you can add some extra attributes to an XML Schema document in the namespace of your choice, then (if you want to write your own xsd.exe) you can carry those attributes forward into your .NET classes.  Based on those custom attributes, you can apply aspects to your objects at runtime, and basically make the world a better place.  

When all that work is finished, you can influence the behavior of your data objects at runtime just by tweaking the schema that defines them.  At the same time, since you’re starting with schema, you get lots of fun things for free, like XmlSerializer and other bits of .Netty goodness.  

I’m a bit too excited to go into all the details right now, but suffice it to say the prospects for code generation, attributes and aspects are pretty amazing.  Once we get the work out of the way, the rest is just business logic.  More business, less work. 

Say it as a mantra: “more business, less work  more business, less work…..”.  

Work | XML
Saturday, 08 November 2003 08:37:38 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 

This week, Corillian (my employer) sponsored a programming contest for employees.  What I thought was one of the coolest parts was that non-technical people were encouraged to join teams (of 2-4) to solve the programming problem by contributing their problem solving skills.  

Since the contest is over (although the results haven’t been announced yet) I can let the cat out of the bag.  It was a word search.  Given a dictionary of words, and a rectangular array of characters, find all the words from the dictionary that exist in the puzzle, in any direction.  

It was a total blast.  I was on a team with some folks I know from STEP: Dr. Tom (a real live doctor of computer science), Darin (a hardcore ATL head), and Don (a QA engineer).  Much fun was had by all.  I haven’t had such pure geek fun in a long time.  I woke up the next morning still thinking about potential optimizations to our solution (which ended up being in C#).  I got to work to find email from Dr. Tom with some more suggestions. 

I thought it was a really great idea to get people thinking about hardcore programming problems, involve non-technical people, and just have a lot of fun coding.  I’ll be really interested to see all the solutions after the judging is complete.  

Kudos to Scott and Chris for coming up with the problem and the reference implementation.  

Saturday, 08 November 2003 08:22:55 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 30 October 2003

As of tomorrow, I’ll be starting a new adventure as an employee of Corillian Corporation.  I’m looking forward to new challenges, and learning all about the world of online financial services.  There are some very smart people working there, including Scott Hanselman, and many other people I worked with back in the day at STEP Technology.  

So even if I didn’t get to go to the PDC (I’m still moping), at least I’ll get to do some really cool coding. :-)


Work | Home
Thursday, 30 October 2003 21:09:42 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 29 October 2003

Hooray for declarative programming!  Let’s do more business and less work.  Whether it’s the BizTalk server orchestration designer (the new Jupiter one TOTALLY rocks!) or XSLT, or possibly best of all, Avalon, it just makes sense. 

As Scott mentions, I’ve done some declarative UI with SVG, and I’m a big fan of XForms, if and when we ever see a mainstream implementation.  But I’d have to agree that Avalon is sheer genius.  There’s no reason to hand-write the code for most of the UI we do now, and since it sounds like the same XAML will work for both the web and for Longhorn client apps, better still. 

It doesn’t hurt that, as Rory mentions, Avalon is just beautiful. 

Wednesday, 29 October 2003 12:58:15 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 

Only days ago I mused that it would be nice to have more control over the way the XmlSerializer works.  Sure enough, according to Doug Purdy via Christoph Schittko we’ll get access to IXmlSerializable, and can write our own XML to our hearts’ content.  :-)

Isn’t life grand?


Work | XML
Wednesday, 29 October 2003 09:00:59 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 27 October 2003

While I couldn’t go to PDC this year :'( at least I can get in on the next best thing.  Watching the stuff that’s coming up on PDCBloggers is pretty amazing.  You get the blow by blow in living color (or at least text).  I love watching what’s coming up on Scott’s blog.  He’s posting from the keynote via his Blackberry, so as Jim Alchin says it, it’s coming up on his blog.  Gotta love technology. 

Monday, 27 October 2003 11:37:45 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 

I’ve used TortoiseCVS for a while, and always considered it a nice to have, but not really the best way to deal with a CVS server.  Now that I’m running the latest (1.4.5) I think differently.


In the past I’ve relied on WinCVS to do things like recursively find out all the files I’ve modified, what revision my files are at, etc, and pretty much used Tortoise just for simple commits and adds.  With 1.4.5, Tortoise will recursively find all the modified files, categorize them as adds, deletes and modifies, and allow me to commit any or all of them at the same time.  Very nice, and much easier than in WinCVS. 


Also, the explorer integration has gone that one step farther, and you can add CVS status and CVS revision columns to any explorer view.  Also much easier.  I’ve pretty much abandoned WinCVS in favor of Tortoise, which is nice, since it saves one more application launch every time I want to deal with versioned files. 


Way to go TortoiseCVS!

Monday, 27 October 2003 11:30:06 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 

It doesn’t take huge changes in an application to make our lives better. 


Granted, there are some really great new features in Outlook 2003, and I’m totally diggin’ them, but every time I can reboot my machine without facing the dreaded “Please close all Office applications..” dialog my heart is filled with a simple and pure joy. 


Sometimes it’s the little things that really make a difference.

Monday, 27 October 2003 11:24:19 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 23 October 2003

I’ve been doing a lot of work with the .NET XmlSerializer over the last couple of weeks, and once again I’m wishing I had just one more custom attribute to work with.  What I’d love to be able to do is just tell the XmlSerializer to let me worry about a specific part of the XML, since I know better than it.


Something like this (to serialize a property called MyString):


      public interface ICustomSerializer
      {
            void DoSerialization(object theValue, XmlTextWriter w);

            void DoDeserialization(XmlNode x, object targetValue);
      }

      public class DataClass
      {
            public DataClass()
            {
            }

            [XmlMyWay(typeof(MySerializer))]
            public string MyString
            {
                  get{ return "OK";}
            }
      }

      public class XmlMyWayAttribute : Attribute
      {
            public XmlMyWayAttribute(Type serializer)
            {
            }
      }

      public class MySerializer : ICustomSerializer
      {
            #region ICustomSerializer Members

            public void DoSerialization(object theValue, XmlTextWriter w)
            {
                  // TODO:  Add MySerializer.DoSerialization implementation
            }

            public void DoDeserialization(XmlNode x, object targetValue)
            {
                  // TODO:  Add MySerializer.DoDeserialization implementation
            }

            #endregion
      }

Then you could do whatever funky serialization you needed to do.  Or is this flawed in some fundamental way?  It’d be pretty cool though…



Work | XML
Thursday, 23 October 2003 17:14:36 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 17 October 2003

I’m sure everyone knew about this but me, but I was impressed.  I needed to attach some extra data to an XML Schema document so that it would be available to a code generator I’m writing.  You can put whatever extra attributes you want in an XSD document (which is how MS does it with SQLXML, for example) and no one will be bothered. 

However, I needed some full on elements to express the extra data I needed to carry. Luckily for me, the clever people at the W3C thought of this, and gave us the annotation element.  I knew you could put documentation inside an annotation element, but I’d never noticed the appInfo element before. 

Inside an appInfo, you can put whatever you want to, and attach it to just about any place in your schema file.  Very cool. 


<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:my="myNamespace" elementFormDefault="qualified" attributeFormDefault="unqualified">

     <xs:element name="rootElement">

          <xs:annotation>

               <xs:documentation>some documentation</xs:documentation>

               <xs:appinfo>

                    <my:whatever>some app specific data</my:whatever>

               </xs:appinfo>

          </xs:annotation>

     </xs:element>

</xs:schema>

On a completely different note, I’m amazed to see that WordML actually serializes the little red “you misspelled something again” bar into the XML.  Just in case you want to style it into a mistake somewhere else?

Work | XML
Friday, 17 October 2003 15:30:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 16 October 2003

It’s been a very busy week in which I think I’ve written more code faster than I have in months.  All in all a pretty productive time for me.  Key learnings for this week include:


  • CodeSmith is a pretty groovy tool, although I wish the documentation was better, particularly for writing extensions.  On the other hand, it does most of what I need it to, and free’s a great price :).  There’s so much more you can do besides strongly typed collections.
  • Custom attributes have got to be one of the coolest features of .NET.  The ability to carry around arbitrary, strongly typed data about your classes that you can ask for whenever you need it is such an amazing boon that I don’t think I can say enough about it.  
  • NAntPad is a very promising step in the right direction.  Still a bit rough around the edges, but I anxiously await further versions.  Much easier than maintaining NAnt build files by hand.
  • There aren’t enough hours in the day.


I haven’t been blogging much lately, largely due to the ongoing weirdness of my current employment situation, but I’m trying to be more conscientious about it.  

Update: NAntPad .4 is much improved.  I'm starting to use it more earnestly.


Thursday, 16 October 2003 18:37:46 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 10 October 2003

On Wednesday I started doing some contract work at Corillian Corp.  Not only am I doing some really interesting work, but I'm working with my friend Scott Hanselman.  I shared an office with him for a couple of years at STEP Technology, and I'd forgotten how much fun it is.  Scott is one of the quickest thinking guys I've ever known, not to mention an actual standup comic :).  It turns out that I know a bunch of people at Corillian, including some people I worked with at Intel once upon a time, and the three best project managers I've ever had the pleasure to work with.  Should be a good time.

Friday, 10 October 2003 21:25:29 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 07 October 2003

I finally got around to moving this blog to dasBlog.  Now I'll have a lot more control over how things look, as well as features, etc.  Right now it's using one of the default templates, so there will probably be some cosmetic changes as I get time to mess with templates.  All of the old permalinks should still work, although the RSS has moved to http://www.cauldwell.net/patrick/blog/SyndicationService.asmx/GetRss


Home | Work
Tuesday, 07 October 2003 18:26:45 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 01 October 2003

In one of his posts a while back, Scott mentioned that he was using Mozilla Firebird, so I decided to check it out.  Wow.  It completely rocks.  Page rendering is WAY faster than in IE, and I have yet to see any of the major rendering problems that I recall from previous experiments with Mozilla/Netscrape.  I’ve only seen a couple of minor issues regarding table layout (and the fact that Windows Update won’t load at all).  I set it as my default browser, and haven’t had any reason to resort back to IE (except for Windows Update).  

Possibly the best part is their extension architecture.  There are extensions for all kinds of things, including a Google bar, an Amazon browser, and a great download manager that handles multiple concurrent downloads without popping up extra windows.   

There are only two things which still vex me… 1) I can’t for the life of me figure out any way of replicating IE’s “never reuse browser windows” feature.  I’m so used to it that I keep clobbering stuff I’m not done with by clicking on links.  2) I miss my context menu for “Subscribe in NewsGator” when right clicking on links.  (I think I may be able to come up with a solution, just haven’t had time yet.)

Wednesday, 01 October 2003 14:58:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, 23 September 2003

I’ve been a big fan of XMLSpy for a long time.  It’s one of the best XML tools out there, and I’d have to say the very best schema editor.  Now with their new version 2004, you can integrate XMLSpy directly into VS.NET, and use all the functionality of XMLSpy without having to leave everyone’s favorite development environment.  Very cool stuff. I much prefer the schema editor in XMLSpy to the database-centric one that ships with VS.NET, so it’s nice not to have to launch yet another app to get at it.  

Tuesday, 23 September 2003 13:43:08 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 15 September 2003

If you haven’t already, check out the PDC episode of Red vs. Blue.  Talk about an appropriate use of technology… 

Monday, 15 September 2003 13:18:29 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 27 August 2003

Just finished struggling through yet another Windows Forms issue that had us completely baffled.  Writing multi-threaded WinForms applications brings up some very interesting challenges, and involves learning a bunch of new stuff, some of which is not at all obvious.  See my previous posts on InvokeRequired for a good example. There end up being some really interesting consequences of the boundary between the CLR-only world, and the underlying implementation where windows have to deal with message pumps.  While WinForms insulates us from the real underlying PeekMessage calls, they are still there, and at some point the rubber has to meet the road, so to speak.  The problem I just had to deal with is way too complicated to even summarize here, but the core of the problem was that if I call Thread.Sleep on short intervals in a loop with calls to Application.DoEvents in between, the GUI doesn’t update correctly, but if I call Form.ShowDialog then it does.  WinForms knows something about message pumping that I don’t, which doesn’t really come as a surprise. 
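To make the shape of the problem concrete, the misbehaving loop looked roughly like this (a simplified sketch, not our actual code):

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

public class PumpDemo
{
    // Simplified sketch: pumping messages by hand between short sleeps.
    // A loop like this left our GUI only partially painted, while
    // blocking in Form.ShowDialog (which runs its own full message
    // loop) repainted everything correctly.
    public static void WaitWithManualPump(int totalMs)
    {
        for (int elapsed = 0; elapsed < totalMs; elapsed += 50)
        {
            Application.DoEvents();  // pump whatever is queued right now
            Thread.Sleep(50);        // then block the UI thread again
        }
    }
}
```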

Some days I think that this is way harder than MFC was, but on other days (like today) I realize that the real difference is that WinForms frees us to hurt ourselves in new and different ways that were too hard to achieve before.  I can only think that’s a step in the right direction. :)

Wednesday, 27 August 2003 15:33:59 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 13 August 2003

Fixed.  If we move all our initialization code to run after the controls have been fully created and added to their parent’s Controls collection then everything works just fine, and InvokeRequired returns what it should.  Again, the more I think about this problem the more it makes sense that it would work this way.  However, what I would expect is for the call to InvokeRequired to throw an exception if it can’t really determine the right answer (e.g. isn’t initialized properly?) rather than just returning false.  If it had thrown an exception we would have found the problem right away, rather than having to discover it the hard way.  And since calling InvokeRequired on a control without a parent is apparently an exceptional case, it would be the right thing to do. 

If anyone who reads this is or knows a PM on the WinForms team, you might mention this issue. :) 

Wednesday, 13 August 2003 14:35:03 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 11 August 2003

I spent some time late last work working with my colleague Jeff Berkowitz on what seemed like a pretty sticky problem.  He’s got some more info about the problem here, but the quick description is that we were in a situation where Control.InvokeRequired was apparently returning an incorrect value.  Our code was clearly running on a thread other than the GUI thread, and yet InvokeRequired returned false.  Disconcerting to say the least. 

Jeff spent quite a bit of time tracking down the problem over the weekend, and the conclusion that he came to is that if you want to call Control.InvokeRequired and get a rational and deterministic answer, the control you are calling it on must be embedded in the control containment hierarchy all the way up to a top level Form.  What we were doing involved creating controls but holding them in memory without putting them into the “visual” hierarchy of the application, meaning that they were not contained by a parent Control or Form.  Apparently if that is the case, InvokeRequired doesn’t return the correct results.  (Note that so far this is based on experiential evidence and not “scientific” data.)

The longer I think about this the more I’m not surprised that this is the case, but I’ve never seen any hint of documentation (from MS or otherwise) that indicates that this is true.  The solution (at least for us) is pretty straightforward, and involved delaying some initialization until after all the controls have been fully created and sited in forms.  Not a big deal, but it does prevent us from doing some up-front initialization in a place where our users wouldn’t notice it.  Now it’ll take a progress dialog.  Seems like a reasonable price to pay, but it would, of course, be nicer if it worked the way we expected it to.
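For reference, the standard marshaling pattern we were relying on looks something like this (a sketch; StatusForm and UpdateStatus are stand-ins, not our real types):

```csharp
using System.Windows.Forms;

public class StatusForm : Form
{
    private Label statusLabel = new Label();

    private delegate void StatusHandler(string text);

    public StatusForm()
    {
        // Parented into the containment hierarchy up to a Form, so
        // InvokeRequired can be trusted on this control.
        Controls.Add(statusLabel);
    }

    // Safe to call from any thread -- but only once the control is
    // sited all the way up to a top-level Form; otherwise, as described
    // above, InvokeRequired can return false even on a worker thread.
    public void UpdateStatus(string text)
    {
        if (InvokeRequired)
        {
            // Marshal the call back onto the GUI thread.
            Invoke(new StatusHandler(UpdateStatus), new object[] { text });
        }
        else
        {
            statusLabel.Text = text;
        }
    }
}
```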

Monday, 11 August 2003 16:05:21 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 05 August 2003

So far I’ve done some experiments with a couple of the options I mentioned in my last post.   I tried styling the incoming XML document into something I could read into a dataset, and then used SqlDataAdapter.Update to persist the changes.  This works pretty well, but the biggest issue ends up being foreign key constraints.  I think you’d either have to do some pretty funky stuff in the stylesheet, or clean up the foreign keys once they were in the dataset, although that only works if your dataset doesn’t have constraints to begin with. 

Then I tried OPENXML, and I’ve got to say that so far that’s the way I’m leaning.  It turned out to make things much easier if I style the incoming XML into a simpler format (without all the namespaces) then pass that to OPENXML.  The OPENXML code turned out to be way less hairy than I had thought it might be, and I can handle the transaction in the stored proc rather than using DTC transactions.  All in all, not a bad thing.  It’s almost enough to make me not care if things change in Yukon in ways that would make this not work, or be obsolete.  It’s pretty slick in the near term.  I haven’t tried any performance testing, but it seems to me that the OPENXML solution is faster. 
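The calling side stays pleasantly simple, too; something like this is about all the C# involved (a sketch only: the sproc name, parameter name and connection string are placeholders):

```csharp
using System.Data;
using System.Data.SqlClient;

public class MonitorDataWriter
{
    // Hand the whole (pre-styled) XML document to a stored procedure,
    // which uses sp_xml_preparedocument / OPENXML to shred it and do
    // all the inserts inside a single transaction.
    public void Save(string xml, string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            SqlCommand cmd = new SqlCommand("usp_SaveMonitorData", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@doc", SqlDbType.NText).Value = xml;
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```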

I could try the other option of parsing the XML in C# and then making transacted ADO.NET calls to persist the data, but I don’t really want to go there.  It’s the business-layer XML parsing I’m trying to get rid of, and it’s a lot more code. 

Tuesday, 05 August 2003 19:34:08 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 30 July 2003

When new data comes in from monitors, it needs to be processed and then stored in the DB.  (See Xml on Large Power Transformers and Industrial Batteries for some background.)  I’m currently pondering how to manage that, and what I’m thinking is that I’ll process the incoming data as XML, using XPath and some strongly typed methods for doing inserts.  The issue I’m wrestling with is how to get the XML data into the database.  We’re currently using SQLXML for some things, so I could style it into an updategram.  Initially I was thinking of just chucking the whole incoming XML doc into a stored proc, and then using SQL Server 2K’s OPENXML method to read the document inside the stored procedure and do all the inserts from there.  The advantage is that I can rip up the doc and do a bunch of inserts into normalized tables and keep it all in the context of one transaction in the sproc.  Also, it keeps me from having to write C# code to parse out all the stuff that’s in the doc and do the inserts from there (although that’s also an option).  Usually I’m opposed to putting much code into the DB, since it’s the hardest bit to scale, but in this case it wouldn’t be business logic, just data parsing, which seems like a pretty reasonable thing for a database application to do. 

With that all out of the way, my concern is that OPENXML is fairly complex, and would require some relatively involved TSQL (which I can do, but would rather not, and hate to maintain).  Also, I worry about it all being made irrelevant by Yukon, which is supposed to have some pretty wizzy new XML related data storage stuff.  

Another option would be to style the incoming data into a dataset and use ADO.NET for the dirty work.  Since I’m really only ever doing inserts, not updates, it should be pretty straightforward. 

Sigh.  Too many choices.

Wednesday, 30 July 2003 19:27:59 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, 08 July 2003

This is a really interesting idea. It’s a .NET remoting channel that uses IIOP as its wire format.  This means that you can use it to interoperate with CORBA/J2EE/JavaRMI systems rather than using XML Web Services to do the same.   It looks like it’s not much more work than implementing Web Services in .NET, and the benefits you would get would be better performance (due to the binary serialization format) and the fact that your clients can take advantage of object references rather than essentially stateless methods exposed as Web Services.  The biggest drawback compared to XML Web Services is obviously that it only works for CORBA/J2EE/JavaRMI systems and not for all the other platforms for which there are Web Services implementations and not IIOP implementations.

Tuesday, 08 July 2003 13:44:24 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 27 June 2003

I finally got around to creating my first RSS feed today.  We are using an automated build tool for .NET called Draco.NET to build our (rather complex) application.  The great thing about Draco is that it watches your source-code repository for any changes, and rebuilds if it detects any changes.  When it’s done, you get a very nicely formatted email that tells you if the build succeeded or failed. 

Unfortunately, as your build process grows, so does the email, since it includes the output from the NAnt build.  Also, because of some strangeness in CVS log files, Draco tends to build rather more frequently than it really needs to, particularly if you are building from two different branches.  The end result is, lots of great big email, or “build spam”. 

So, I cooked up a quick ASP.NET application that will look at the directory containing output from Draco and turn it into an RSS feed.  Now all I get is the success or failure of the build in the RSS stream, with a link to another page that provides the full results if I want to see them.  A relatively small accomplishment, I realize, but there you have it.  
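The core of it is just walking the output directory and writing one RSS item per build; a stripped-down sketch (the file layout and results-page URL here are made up):

```csharp
using System;
using System.IO;
using System.Xml;

public class BuildFeed
{
    // Turn a directory of Draco build-result files into a minimal RSS
    // feed: one item per build, with a link to the full results page.
    public static void Write(string buildDir, TextWriter output)
    {
        XmlTextWriter w = new XmlTextWriter(output);
        w.WriteStartElement("rss");
        w.WriteAttributeString("version", "2.0");
        w.WriteStartElement("channel");
        w.WriteElementString("title", "Draco build results");

        foreach (string file in Directory.GetFiles(buildDir, "*.xml"))
        {
            string build = Path.GetFileNameWithoutExtension(file);
            w.WriteStartElement("item");
            w.WriteElementString("title", build);  // e.g. carries pass/fail
            w.WriteElementString("link",
                "http://buildserver/results.aspx?build=" + build);
            w.WriteEndElement();  // item
        }

        w.WriteEndElement();  // channel
        w.WriteEndElement();  // rss
        w.Flush();
    }
}
```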

What the exercise did do is confirm my faith in two things:  1) RSS is pretty darn handy, and has a lot of applications, and 2) .NET is pretty much the most straightforward way to do just about anything.  The ASP.NET application only took around 2 hours, and would have taken MUCH longer in ASP or (heaven forefend) ATL Server.

[Listening to: Lady Diamond - Steeleye Span - Spanning the Years(04:37)]
Friday, 27 June 2003 19:13:53 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 19 June 2003

Chris Goldfarb has some pointers/caveats about upgrading your build process from VS.NET 2002 -> 2003.   I’d add to that additional things to watch out for if you are using a build script that doesn’t use VS.NET.  We’re using NAnt to do our builds, and it uses the underlying .NET SDK compilers without regard to anything in the VS project files.  This leads to an even weirder upgrade, since you have to update your NAnt build file to reflect any changes required to build under 1.1 (and there are likely to be some, it took me most of a day to iron out all the issues) completely outside the context of VS.NET.  

The end result was that we had a full build working under 1.1 long before we had updated all our project files to VS.NET 2003.  This brings up some interesting problems when it comes to dependencies.  We have a fairly complex system with dozens of assemblies, and many of the project files reference assemblies from the build directory.  If your build directory is suddenly full of assemblies compiled against 1.1 and you still have 1.0 projects, chaos ensues.  Altogether it took the team 2-3 days to iron out all the issues and transition fully to 1.1.   As a side benefit, between the upgrade to 1.1 and moving to the latest version of NAnt (0.8.2), our build now takes about half the time it did before using essentially the same build script.  At worst it used to take around 30 minutes; 15 is much nicer.

I guess the bottom line either way (and I think Chris reached the same conclusion) is that upgrading to 1.1 is not something you can do piecemeal, and you really have to tackle it all at once.  Embrace the pain and get it over with.

[Listening to: Man of Constant Sorrow - Dan Tyminski - O Brother, Where Art Thou?(03:10)]
Thursday, 19 June 2003 13:50:20 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 06 June 2003

 Humph.  I’d have to say this presentation was disappointing.  It was on the development of the London traffic congestion charging system, which is not only highly controversial, but arguably the largest deployed .NET application.  I was hoping to get some technical details about how they got it to scale, but instead it was pretty much just marketecture, which I haven’t seen a lot of here this year.  The main focus was around the fact that .NET beat out J2EE for this job, and that it was done quickly and at comparatively low cost.  OK, I get that about .NET.  The one interesting thing in that space was that Mastek, the India-based development shop that did the implementation, actually did two separate test projects during the RFI for the project, one in J2EE, the other in .NET (v1.0, beta 1).  It’s interesting to see the results of one company seriously trying to build the same application on both platforms, rather than the competitive Pet Store type comparison.  Their conclusion was that they could do the .NET implementation for 30% less. 

Unfortunately the presentation was almost totally devoid of technical details.  For a 300 level presentation for developers, I would expect more than two slides on the implementation.  The only interesting technical detail was that they used the same set of business objects for both intra- and extranet sites, but the extranet used a wrapper that hid the privileged methods, and a firewall was used between the presentation and business tiers to limit the public site’s access to only the wrapper class.   

Friday, 06 June 2003 11:48:39 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

While the presentation itself was a bit slow, the content was very interesting.  It was about how Microsoft, Unisys and others built a highly scalable system for use by law enforcement and the public health system to send and receive alerts such as Amber Alerts and public health warnings.  The biggest factor influencing the design was the fact that they control essentially the entire system, from the back end datacenter all the way to the workstation setups.  This allowed them to take advantage of a lot of features not normally available to “Internet” applications.  For starters, they chose a WinForms app on the client side to provide the best performance and richest user experience.  They use .NET remoting (over TCP using the binary formatter) to get the best performance over the network, which also allows them to hook up bidirectional eventing so that new bulletins can be distributed rapidly without polling.  The client app uses MSDE to cache frequently used data like addresses and geographical data.  Each local client can be configured to either communicate with a local hub system, or with the back end datacenter. 

Since they had to accommodate 25,000 installed workstations, and an addressbook projected at 100M users it makes sense to take advantage of some heavyweight code on the client side to get the best scalability.  Overall it was a good example of how to build a really large system, although it depends so heavily on being able to control the whole system that it may not be applicable in very many cases. 

I’m looking forward to comparing and contrasting with DEV372 (coming up this morning), which is a case study of the London traffic billing system.  More on that later.


Friday, 06 June 2003 10:34:03 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 05 June 2003

 Hallelujah brother!   I have seen the light of AOP!  I read last year’s MSDN article on AOP and thought that it was a dang good idea, but the implementation was a little on the questionable side.  It took advantage of undocumented goo in the .NET remoting infrastructure to do interception.  Cool, but not necessarily something I’d go into production with.  And it doesn’t work on ServicedComponents.  Clemens Vasters showed how to do AOP in about 4 different contexts, including ASP.NET WebServices, EnterpriseServices, WinForms and using the remoting interception method.  Best of all, he showed how to use the same metadata in each context.  Very cool stuff, and all the samples will be posted to his blog

I was particularly impressed by two things:  the way he explained why AOP is interesting and useful using real world examples, and the AOP examples in ASP.NET web services, which is a great way to abstract the use of SOAP extensions.  Check out his samples for more details. 


Thursday, 05 June 2003 15:17:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

The good news is that just about everything that I don’t like about implementing WS-Security with WSE v1 has been addressed in v2.  Because it supports WS-Policy, you can now set up declarative rules enforcing security policy without the class developers having to do any coding.  With v1, every web method must remember to check for the existence of a UsernameToken (if that’s what you are using) and each component created on the client side must remember to add the right token(s) to the SoapContext.  While not insurmountable, it’s still a pain.  With version 2 you can set up a policy that is used on both server and client, and the individual client and server components can blissfully go about their business, with the security just taken care of by WSE.  Not only does that make it much easier to implement, and more secure, since you aren’t depending on developers to remember to do the right thing, but since they don’t have to do any additional work, it’s easier to convince them that WS-Security is a good idea (which it is).

The new support for WS-Trust, and specifically WS-SecureConversation, is a big boon to performance.  It allows you to do one heavyweight authentication, using a Username or X.509 token, etc., and then have the server generate a new lightweight token for use during the rest of the conversation.  Much less of a burden on server resources.

There is also new support for non-RPC programming models, so that you can use one-way or multicast messages and more.  And you don’t have to host your web services in IIS, which allows for lots of new deployment possibilities. 

The only drawback to this session was that a great deal of the material overlapped with the content of  WEB401 (on the new security support). 


Thursday, 05 June 2003 11:56:47 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 04 June 2003

Once again a triumphant performance from Scott Hanselman.  He did a great job of explaining what makes WSDL interesting, and why we should, if not necessarily love it, then at least understand it.   In keeping with the message I’ve been hearing the rest of the week, Scott advocated starting with XSD and WSDL and generating clients and servers from there, rather than developing Web Services “code first”. 

One of the coolest things he demoed was SoapScope, from MindReef, which allows you to sniff SOAP at the workgroup level rather than on an individual developer's box.  Nice interface, and very handy for debugging distributed problems.

I also appreciated Scott’s Matrix references.  Ignore the WSDL and live in the Matrix, or learn to see the WSDL and, like Neo, gain super powers. :)

Wednesday, 04 June 2003 18:46:17 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I’ve seen a wide range of TechEd presentations on Web Services now by some of the luminaries (Don Box, Keith Ballinger, Doug Purdy, et al.), and I find it interesting that the story for how to build .NET web services has changed.  Maybe that’s been true for a while and I never noticed, but either way, now I know.  When .NET first appeared on the scene, the story was that all you had to do was derive from WebService, mark your methods with [WebMethod()], and all would be well. 

This week I’ve consistently been hearing that “code-first” development is out for web services, and that we should all learn to love XSD and WSDL.  So rather than coding in C#, adding some attributes, and letting the framework define our WSDL, we should start with XSD and WSDL definitions of our service, and use wsdl.exe and xsd.exe to create the .NET representations.  Very interesting. 
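In tool terms, the contract-first workflow looks roughly like this (xsd.exe and wsdl.exe are the real .NET SDK tools; the file names and namespace are placeholders for illustration):

```shell
# Generate .NET classes from a hand-authored schema
xsd.exe Order.xsd /classes /namespace:MyCompany.Orders

# Generate an abstract server-side skeleton from hand-authored WSDL
# (the /server switch), instead of letting the framework emit the WSDL
wsdl.exe /server /namespace:MyCompany.Orders OrderService.wsdl
```

You then fill in the generated skeleton, and the contract, not the code, remains the authoritative definition of the service.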

Furthermore, we should define our WSDL specifically with versioning and interop in mind.  Some tips and tricks include using the WS-I interoperability guidelines when defining our WSDL, including version attributes in our SOAP schemas from the get-go, and using loosely typed parameters (xsd:string) or loosely typed schemas (xsd:any) to leave “holes” into which we can pour future data structures without breaking existing clients. 
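The “holes” trick looks something like this in schema terms (the type and element names are made up for illustration; the wildcard and the version attribute are the point):

```xml
<!-- Illustrative schema fragment: the xsd:any wildcard is the "hole"
     future versions can pour new structures into without breaking
     existing clients, and the version attribute is there from day one. -->
<xsd:complexType name="OrderHeader">
  <xsd:sequence>
    <xsd:element name="OrderId" type="xsd:string" />
    <xsd:element name="Placed" type="xsd:dateTime" />
    <xsd:any namespace="##other" processContents="lax"
             minOccurs="0" maxOccurs="unbounded" />
  </xsd:sequence>
  <xsd:attribute name="version" type="xsd:string" />
</xsd:complexType>
```

With processContents="lax", validators check the extra content only if they happen to know its schema, so old clients simply skip what they don’t understand.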

Since I’m about to embark on a fairly major interop project myself, I’m glad to have heard the message, and I say hooray.  I think it makes much more sense to stick to industry-standard contracts that we can all come to agreement on and work the code backwards from there, rather than tying ourselves to the framework’s notion of what WSDL should look like.  Ultimately it’s the only way that cross-platform interop is going to work.  The “downside”, if it can be called such, is that we have to really work to understand WSDL and XSD (or at least the WS-I specified compatible subsets thereof) in order to design web services correctly.  However, anyone who was writing any kind of web service without a firm grounding in WSDL and XSD was heading for trouble anyway.  I’m looking forward to Scott Hanselman’s “Learn to Love WSDL” coming up after lunch today.


Wednesday, 04 June 2003 14:55:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Yesterday I saw a couple of presentations here at TechEd on the Enterprise Instrumentation Framework.  Currently I use log4net for all my logging needs, and I've been doing some compare and contrast between the two systems.


The advantage to using either of these frameworks is that they are loosely coupled: there is no design-time mapping between event sources and event sinks.  In log4net, all the developer needs to do is categorize the log message as Debug, Info, Warn, Error, or Fatal.  At runtime, log4net is configured using an XML document.  You can define multiple event sinks, such as text files, the Event Log, ADO data sources, OutputDebugString, etc.  It's easy to create plugins to support new sinks as well.  There's even a remoting sink so that you can log to a different machine.  In the configuration file, you can send events to different sinks based on the logging level (Debug, Info, etc.) or on the namespace of the class the logging is done from. 
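As a sketch, a minimal log4net configuration along those lines might look like this (log4net calls its sinks “appenders”; the file name and logger namespace here are placeholders, and element names follow the log4net 1.2 XML schema):

```xml
<!-- Route everything in one namespace at WARN and above to a file,
     with no recompile needed to change the mapping later. -->
<log4net>
  <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
    <file value="app.log" />
    <layout type="log4net.Layout.PatternLayout">
      <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
    </layout>
  </appender>
  <logger name="MyCompany.DataAccess">
    <level value="WARN" />
    <appender-ref ref="RollingFile" />
  </logger>
</log4net>
```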


In EIF, you can similarly define multiple sources and sinks and map them at run time.  One big difference is that in EIF, events are strongly typed.  You can create an event schema that is used for logging data with distinct types, rather than just as strings.  In text logging that's not so important, but in the Event Log, and especially in WMI (which EIF supports), you can take advantage of the strong typing when you read and/or sort the events.  However, that means you have to define the schema, which is work.  One drawback is that out of the box EIF supports many fewer event sinks, in fact only three: WMI, the Event Log, and Windows Event Tracing (on Win2K and up).  As with log4net, EIF allows you to create your own event sinks.  There's currently no UI for reading Windows Event Tracing logs, but they do provide an API.  Furthermore, the configuration for EIF is rather more complicated.


The built-in support for WMI in EIF is pretty useful, since it abstracts away all the System.Management stuff.  This support makes it easy to work with industry-standard management tools like HP OpenView.  And you even get perf counters for free when you define an EIF event source.  On the other hand, the WMI support makes installation a bit more high-ceremony, since you have to include an installer class in your app to register the WMI schema. 

Possibly the coolest feature in EIF is the fact that they can push correlation data onto the .NET call context, which travels across remoting boundaries.  That means that they can correlate a set of events for a particular operation across process and server boundaries.  They demoed a UI that would then create a tree of operations with nested events arranged in order.  Pretty cool stuff.


So, EIF has strongly typed events, WMI support, free perf counters, and cross-process event correlation.  Log4net is much simpler to implement, requires less coding overhead, and supports a wider range of event sinks out of the box.  It's a tough choice.  The WMI support might swing things toward EIF in the long run, especially if your operations outfit uses WMI management tools.


Wednesday, 04 June 2003 11:25:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 
# Tuesday, 03 June 2003

I went to a get together here at TechEd last night of bloggers and “aggregators” (sounds much better than “lurker”) who are here at the show, organized by Drew Robbins, who runs TechEdBloggers.net.  It was really interesting to put some faces with the names.  There were also some interesting discussions on blogs and blogging.  Many seemed comfortable with the notion I put down a while ago (http://erablog.net/filters/12142.post) of communal validation.  Someone dubbed it a “web of trust”, which I think is pretty appropriate.  The question also came up, “So who plays the role of VeriSign in this web of trust?”  I think the interesting part about blogging, at least in the technical community, is that trust seems to be based on personal relationships rather than on outside “authority”.  Unlike in the UseNet world, where it’s easy to forgo personal relationships in favor of the collective, blogging seems to foster personal relationships.  That’s a big change from the general anonymity of the Web.  I’ll be interested to see where it goes from here. 

Tuesday, 03 June 2003 10:43:40 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Once upon a time I did the happy dance on stage (in San Francisco: “Anatomy of an eCommerce Web Site”) because I was so excited about the new BizTalk orchestration designer in BTS 2000.  What a great idea, to be able to draw business processes in Visio, then use the drawing to create code that knows how to manage a long-running transactional business process.  I had been preaching the gospel of the Commerce Server pipeline as a way of separating business process from code, but BizTalk orchestration was even better.

Little did I know…  Yesterday (I’m here at TechEd in Dallas) I saw a demo of BTS 2004.  Wow.  Microsoft has really made great strides in advancing the orchestration interface.  Instead of Visio, it’s now a Visual Studio .NET plugin, and the interface looks really good.  It includes hints to make sure you get everything set up correctly, and full IntelliSense to speed things along.  I was very impressed with the smoothness of the interface.  Not only that, but now you can expose your orchestration as an XML Web Service, and you can call Web Services from inside your schedule. 

I’ve always thought that BTS has gotten short shrift in the developer community.  I think it’s because it tends to be pigeon-holed as something only useful in big B2B projects.  I can think of lots of ways in which orchestration could be very useful outside the realm of B2B.  I guess part of it has to do with the pricing.  While I can think of lots of ways to use BTS in non-B2B scenarios, they aren’t really compelling enough to convince me to spend that much money.  Ah well.   

Tuesday, 03 June 2003 10:31:47 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 28 May 2003


The constant point-and-click of the mouse can be a real drag. One company has developed products that sense hand movements to give computer commands, creating input devices that it hopes will replace the mouse. By Katie Dean.
[Wired News]



This is a great idea, and I’d be all over it if it wasn’t quite so spendy.  I think in the long run this kind of gestural interface could really win out, since you can encode a lot of information in gestures that would be harder or take longer using other input methods.

Wednesday, 28 May 2003 13:10:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 16 May 2003

There's been a great deal of hullabaloo in the last week or so about blogging “ruining the Internet for everyone” and Google segregating hits from blogs into their own section (good or bad?). I realize that since I'm posting this to a blog, this sounds a bit self-serving, but here's my two cents worth:

While it's true that blogging has lowered the bar in terms of access to web publishing, the simple fact is that ever since there was an Internet, anyone who wanted to and had access to a keyboard could post whatever drivel they wanted to the web. All blogging really adds to the mix is that now you don't even need a rudimentary knowledge of HTML (or of how to save your Word doc as HTML) in order to publish yourself. While that means more volume, it doesn't really change the nature of data on the web.

The real “problem” as far as Google is concerned is that the self-referential nature of blogging upsets their ranking algorithm. This has apparently led people (like Larry Lessig) to conclude that blogging is ruining the nature of content on the web.

I would argue that there's nothing about blogging that changes the need for critical thinking when looking for information on the web. That's always been true, since there's fundamentally no concrete way to verify the validity of anything you read on any site without thinking critically about the nature of the source, how you came across it, and what references it makes to other works. If you apply that kind of filter, then the self-referential or “incestuous” nature of blogs can be used to advantage.

For example, if I'm looking for interesting information about SOAP, XML Web Services, or how either or both relate to .NET, I'd assume that Don Box (for example) is a reliable source, given that I've read his books, articles and speeches in other contexts. If he mentions something in his blog that someone else said about .NET on their blog, I can assume with a high degree of certainty that the reference is relevant and useful. Then, when I follow that link, I'll find more links to other blogs that contain relevant and useful information. The tendency is for all those cross links to start repeating themselves, and to establish a COMMUNITY wherein one can assume a high degree of relevant information. All that's needed to validate the COMMUNITY as a whole is a few references from sources that are externally trusted.

In the long run, I think that kind of “incestuousness” can be used to validate, rather than discount large bodies of interesting, useful, and otherwise unattainable information that we wouldn't have access to otherwise.

That's a fairly lengthy rant for me, but I had to get that off my chest. I hate to see pundits discounting a body of information just because it's posted casually and not in academic journals.

Friday, 16 May 2003 23:22:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
I'll be giving a presentation on "XML on Large Power Transformers and Substation Batteries" at the Applied XML Developers Conference 2003 West, on July 10th and 11th in Beaverton, OR. Register now at www.sellsbrothers.com/conference.
It looks like there are going to be some really interesting sessions, and tickets reportedly go fast, so sign up now.
I'm really interested in hearing "SOAP, it wasn't Simple, we didn't Access Objects and its not really a Protocol", and "A Steady and Pragmatic Approach to Dynamic XML Security". Cool stuff.
Friday, 16 May 2003 22:59:56 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 14 May 2003
Scott has posted a very insightful article on the nature of the CLR as it relates to the underlying Windows platform, and on how that's different from the way the Java VM works.
Keep up the good work Scott!
Wednesday, 14 May 2003 13:14:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 12 May 2003

It seems like every time I come across something in the framework documentation that says something along the lines of "this works just fine unless" my application is firmly in the "unless" category :)

There is a new security setting in 1.1 that has to do with serializing objects over remoting which is much more restrictive than the 1.0 framework was. I recompiled my app against 1.1, fixed all the compile time bugs, tracked down all the dependencies, and then the real fun began: finding the runtime bugs.

It turns out that there is a new setting on the remoting formatters that won't deserialize your objects by default unless they fall into a certain category of "safe" objects. Mine apparently don't, although going over the list on GotDotNet I don't see how my objects fail to fall into the "low" category. Anyway, for whatever reason they don't, so I have to prompt the remoting formatters to use "Full" deserialization. (Low vs. Full???) Then everything works hunky-dory. I wish the list of conditions was a little more exhaustive. The only things listed as definitely requiring "Full" support are sending an ObjRef as a parameter, implementing ISponsor, or being an object inserted into the remoting pipeline by IContributeEnvoySink. I tried this with two completely different parts of my application, neither of which meets those criteria, and still had to use "Full" support. Hmmmm.
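The knob in question is the formatter's typeFilterLevel. On the server side it looks like this in config (the channel type and port here are just placeholders for my setup):

```xml
<!-- .NET 1.1 remoting: opt the server-side binary formatter back into
     "Full" deserialization; the 1.1 default is the restrictive "Low". -->
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <channel ref="tcp" port="8989">
          <serverProviders>
            <formatter ref="binary" typeFilterLevel="Full" />
          </serverProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```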

Live and learn I guess.
Monday, 12 May 2003 20:01:17 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 07 May 2003
A fabulous piece on async calls in .NET from Chris Brumme. The stuff that guy has in his head is truly amazing.
Wednesday, 07 May 2003 13:52:17 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  |