# Tuesday, 01 May 2007

I was trying to come up with a set of guiding principles as we've been using them on my current project, with an eye toward educating new developers being assigned to the project, so I wrote up a PPT deck for the purpose.  Scott thought it would make a good, general purpose manifesto for any project, so here goes...

These are my opinions based on experience and preference, YMMV.  Much of this is common sense, some of it is arbitrary, but it's all stuff that I think is true.

  • Our "guiding principles"
    • Test Driven Development
      • As much as possible, all of our code should be developed according to the TDD methodology.  If we stick with it, we will always have confidence in the current build (i.e. that it really works).
    • Continuous Integration
      • We want our build to produce something that looks as much as possible like a production deployment.  To that end, our build deploys a real live system and runs "integration" tests against it. 
  • Unit tests
    • Two kinds of tests
      • Unit tests have no dependencies, or only depend on other assemblies owned by us, which can be mocked.  This means our unit tests should run on anyone's box without having to set up things like SQL, ADAM, etc.
      • Integration tests depend on code not owned by us, like SQL, ADAM, IIS, etc. 
    • Automation is equally possible for both sets of tests
      • NUnit/MbUnit for all of the tests
      • MbUnit allows for test "categories" like "unit" and "integration" and can run one at a time (there's a small example after this list)
      • Unit tests run on every build, integration tests run at night since they take much longer to run
    • All UI development should follow the MVP pattern for ease of testing
      • This includes Win/WebForms, MMC snapins, etc.
  • Test coverage
    • 90%+ is the goal
      • This is largely achievable using proper mocking
    • NCover runs as part of the build, and reports are generated
  • Buy, not build
    • Take full advantage of the platform, even if it only solves the 80% case
      • every line of code we don't have to write and maintain saves us time and money
    • Don't write a single line of code you don't have to
    • Take full advantage of .NET 3.0, SQL 2005, and Windows 2003 Server; plan for, and test on, Longhorn
    • Don't invent new solutions to solved problems
      • Even if you know how to write a better hashing algorithm than the ones in the BCL, it doesn't add value to your product, so it's not worth your time
  • Limit compile time dependencies on code you don't own
    • Everything not owned by us should be behind an interface and a factory method
      • Easier to mock
      • Easier to replace
    • ILoggingService could use log4net today, EnterpriseLibrary tomorrow, but clients are insulated from the change at compile time (see the sketch after this list)
  • Define your data contracts in C# (think "active record")
    • All persistent storage should be abstracted using logical interfaces
      • IUserProfileService takes a user profile data object, rather than methods that take parameter lists
      • IUserProfileService knows how to store and retrieve User Profile objects in ways important to your application
      • How a User Profile object gets persisted across databases/tables/XML/whatever is only interesting to the DBA, not consumers of the service
      • This means database implementations can change, or be optimized by DBAs without affecting the app
      • The application doesn't care about how database file groups or clustered indexes work, so define the contract, and leave that bit to a DBA (a contract sketch follows this list)
  • Fewer assemblies is better
    • There should be a VERY good reason for creating a new assembly
    • The assembly is the smallest deployable unit, so it's only worth creating a new assembly if it means NOT shipping something else
    • Namespace != assembly name.  Roll up many namespaces into one physical assembly if they all must be deployed together.
  • Only the public interface should be public
    • Only make classes and interfaces public if absolutely necessary
    • Test code should take advantage of the InternalsVisibleTo attribute (example after this list)
    • VS 2005 defaults to creating internal, not public classes for a reason
    • If it's public, you have to support it forever
  • Windows authentication (good)
    • Just say no to connection strings
    • Windows authentication should be used to talk to SQL, ADAM, the file system, etc.
    • You can take advantage of impersonation without impersonating end users
      • Rather than impersonating end users all the way back to the DB, which is expensive and hard to manage, pick 3-4 Windows accounts like "low privilege", "authenticated user", and "admin", and assign physical database access to tables/views/stored procs for those Windows accounts
      • When users are authenticated, impersonate one of those accounts (low privilege for anonymous users, authenticated user for authenticated users, and admin for customer service reps, for example)
  • Tracing
    • Think long and hard about trace levels
      • Critical is for problems that mean you can't continue (app domain is going away)
      • Error means anything that broke a contract (more on that later)
      • Warning is for problems that don't break the contract
      • Info is for a notable event related to the application
      • Verbose is for "got here" debugging, not application issues
    • Use formatted resource strings everywhere for localization
    • For critical, error, or warning, your audience is not a developer
      • Make error messages actionable (they should tell the recipient what to do or check for to solve the problem)
  • Error handling
    • Method names are verbs
      • the verb is a contract, e.g. Transfer() should cause a transfer to occur
    • If anything breaks the contract, throw an exception
      • if Transfer() can't transfer, for whatever reason, throw
      • Exceptions aren't just for "exceptional" conditions.  If they were really exceptional, we wouldn't be able to plan for them.
    • Never catch Exception
      • If it's not a problem you know about, delegate up the call stack, because you don't know what to do about it anyway
    • Wrap implementation specific exceptions in more meaningful Exceptions
      • e.g. if the config subsystem can't find the XML file it's looking for, it should throw ConfigurationInitializationException, not FileNotFoundException, because the caller doesn't care about the implementation.  If you swap out XML files for the database, the caller won't have to start catching DatabaseConnectionException instead of FileNotFoundException because of the wrapper.  (A wrapping example follows this list.)
      • Anytime you catch anything, log it
        • give details about the error, and make them actionable if appropriate
      • Rethrow if appropriate, or wrap and use InnerException, so you don't lose the call stack
  • The definition of "done" (or, how do I know when a task is ready for QA?)
    • Any significant design decisions have been discussed and approved by the team
    • For each MyClass, there is a corresponding MyClassFixture in the corresponding test assembly
    • MyClassFixture exercises all of the functionality of MyClass (and nothing else)
    • Code coverage of MyClass is >=90%, excluding only lines you are confident are unreasonable to test
      • yes this is hard, but it's SOOOOO worth the effort
    • No compiler warnings are generated by the new code
      • we compile with warnings as errors.  If it's really not a valid or important warning, use #pragma warning disable/restore
    • Before committing anything to source control, update to get all recent changes, and make sure all unit and integration tests pass
    • FxCop should generate no new warnings for your new code
    • Compiling with warnings as errors will flush out any place you forgot documentation comments, which must be included for any new code
  • There WAS a second shooter on the grassy knoll
    • possibly one on the railway overpass
    • (that's for anyone who bothered to read this far...)
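
A few concrete sketches of the items above follow.  All of the type, member, and assembly names are invented for illustration; none of this is lifted from the real project.

First, the unit/integration split.  This uses NUnit-style [Category] attributes (the MbUnit equivalent looks much the same); the fixture names and category strings are assumptions:

using NUnit.Framework;

[TestFixture]
[Category("Unit")]            // runs on every build; depends only on our own assemblies
public class UserProfileServiceFixture
{
    [Test]
    public void SavingAProfileRoundTrips()
    {
        // mocks only, so this runs on anyone's box
    }
}

[TestFixture]
[Category("Integration")]     // runs nightly against the real deployment the build creates
public class UserProfileSqlStoreFixture
{
    [Test]
    public void ProfilePersistsToSql()
    {
        // talks to real SQL/ADAM/IIS
    }
}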
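
Next, the interface-plus-factory rule for code we don't own, using the ILoggingService example from above.  The member names and the factory shape are guesses; the point is that only the factory and one internal class know anything about log4net:

using System;

public interface ILoggingService
{
    void Error(string message, Exception exception);
    void Info(string message);
}

public static class LoggingServiceFactory
{
    public static ILoggingService Create()
    {
        // Swap this one line to move to EnterpriseLibrary (or anything else)
        // without recompiling any callers.
        return new Log4NetLoggingService();
    }
}

internal class Log4NetLoggingService : ILoggingService
{
    private static readonly log4net.ILog log = log4net.LogManager.GetLogger("MyApp");

    public void Error(string message, Exception exception) { log.Error(message, exception); }
    public void Info(string message) { log.Info(message); }
}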
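
The data contract idea, again with the names used above.  The UserProfile shape here is invented; what matters is that consumers hand the service a whole object rather than a parameter list, and never see tables, files, or XML:

// The "active record"-style data object the application works with.
public class UserProfile
{
    public string UserName;
    public string DisplayName;
    public string Email;
}

// The logical contract.  How a profile actually gets persisted (tables, XML,
// whatever) is an implementation detail a DBA can change without touching callers.
public interface IUserProfileService
{
    UserProfile GetProfile(string userName);
    void SaveProfile(UserProfile profile);
}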
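
Keeping things internal while still letting the tests in is a one-liner in AssemblyInfo.cs; the assembly name here is a placeholder:

using System.Runtime.CompilerServices;

// Lets the test assembly see internal types without making them public.
// (Append ", PublicKey=..." if the assemblies are strongly named.)
[assembly: InternalsVisibleTo("MyCompany.MyApp.Tests")]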
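
Finally, the exception wrapping rule from the error handling section.  ConfigurationInitializationException is the example name used above; the rest (XmlConfigurationStore, the logger field) is made up:

using System;
using System.IO;
using System.Xml;

public class ConfigurationInitializationException : Exception
{
    public ConfigurationInitializationException(string message, Exception inner)
        : base(message, inner) { }
}

public class XmlConfigurationStore
{
    // the ILoggingService from the earlier sketch
    private readonly ILoggingService logger = LoggingServiceFactory.Create();

    public XmlDocument LoadConfiguration(string path)
    {
        try
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(path);
            return doc;
        }
        catch (FileNotFoundException ex)
        {
            // Log actionable detail, then wrap: callers get a contract-level exception,
            // and the original is preserved as InnerException so the call stack isn't lost.
            logger.Error(string.Format(
                "Could not load configuration from '{0}'.  Verify the file exists and " +
                "the service account can read it.", path), ex);
            throw new ConfigurationInitializationException(
                "The configuration subsystem could not be initialized.", ex);
        }
    }
}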
Tuesday, 01 May 2007 13:49:18 (Pacific Daylight Time, UTC-07:00)
# Thursday, 26 April 2007

Back in February, I mentioned that I was having problems testing our AzMan code on developer machines running Windows XP instead of Windows 2003 Server.  This was because the 2003 Server version of the AzMan interfaces weren't present on XP.  To deal with the issue, we actually put some code in our unit tests that reported success if running on XP when the error was encountered, so that everyone could have a successful build.  Drag.

Anyway, it turns out I wasn't the only one having this problem.  Helpful reader Jose M. Aragon says

We are trying to get AzMan into our authorization architecture, and were able to do it nicely, as long as the development was done on Windows Server 2003.  But when we tried to use IAzClientContext2.GetAssignedScopesPage on a developer's machine running XP SP2, it fails miserably with a type initialization error.  Searching for help, I found your entry, so we started to circumvent the problem by programming our own "GetAssignedScopes" method.

Meanwhile, on another XP workstation, the problem was not occurring.  So it turns out the difference comes down to which Server 2003 Admin Pack package was installed on each XP machine:

adminpak.exe (12.537 KB, 24 Jan 2007): does NOT include IAzClientContext2 in azroles.dll

WindowsServer2003-KB304718-AdministrationToolsPack.exe (13.175 KB, 13 March 2007): allows using IAzClientContext2 on XP

It seems the two files were downloaded from microsoft.com/downloads, but the one for Server 2K3 SP1 is the most recent AdminPak.

I uninstalled my original version of the Admin Pack on my XP-running laptop, installed the new one, and now all of our tests succeed without the hack!

Thanks Jose!

Thursday, 26 April 2007 13:02:04 (Pacific Daylight Time, UTC-07:00)
# Wednesday, 18 April 2007

I'll be moderating a Birds of a Feather session on usage and adoption of ADAM and AzMan for application development on Tuesday night at the BoF evening event.  If you are working on an ADAM/AzMan project, or just considering it, come on down and chat with us about your experiences and/or plans.

ADAM | AzMan | TechEd
Wednesday, 18 April 2007 08:43:30 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 17 April 2007

I've been wrestling for a while with how to deal with the fact that AzMan doesn't natively have a concept of denials.  I'd convinced myself that you could structure a hierarchy of grants such that you wouldn't really need an explicit denial, but I got a use case last week that proved it really is necessary (short of doing something REALLY horrible). 
I started with a suggestion from Cam, and modified it a bit into something that I think will work going forward.
The basic idea is that for every operation we want to add grants for, e.g. "GetAccounts", we also add a hidden operation called "!GetAccounts".  To keep them separated, we also offset their AzMan operation IDs by some arbitrary constant, say 100,000.  Then when we do access checks, we actually have to call IAzClientContext.AccessCheck twice, once for the grant operations and once for the deny operations.  The results get evaluated such that we only grant access for an operation if the access check succeeds for the "grant" operation AND doesn't succeed for the "deny" operation. 
 
// app is an IAzApplication for our AzMan store (azroles COM interop), initialized elsewhere.
// Operations 1-4 are the real operations; 100001-100004 are their hidden "deny" twins.
WindowsIdentity identity = WindowsIdentity.GetCurrent();

IAzClientContext client = app.InitializeClientContextFromToken((ulong)identity.Token.ToInt64(), null);
object[] scopes = new object[] { (object)"" };

object[] grantOps = new object[] { (object)1, (object)2, (object)3, (object)4 };
object[] denyOps = new object[] { (object)100001, (object)100002, 
     (object)100003, (object)100004 };

// One AccessCheck for the grant operations, and a second one for the deny operations.
object[] grantResults = (object[])client.AccessCheck("check for grant", 
     scopes, grantOps, null, null, null, null, null);
object[] denyResults = (object[])client.AccessCheck("check for deny", 
     scopes, denyOps, null, null, null, null, null);

// Grant only if the grant check succeeded (0 == NO_ERROR) and the deny check did not.
for (int i = 0; i < grantResults.Length; i++)
{
    if (((int)grantResults[i] == 0) && ((int)denyResults[i] != 0))
        Console.WriteLine("Grant");
    else
        Console.WriteLine("Deny");
}
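
For completeness, setting up the paired operations in the AzMan store looks roughly like the sketch below.  RegisterOperation and DENY_OFFSET are made-up names and error handling is omitted; CreateOperation, OperationID, and Submit are the regular azroles interop members:

const int DENY_OFFSET = 100000;

// Creates the real operation and its hidden "!" twin, offset by DENY_OFFSET.
static void RegisterOperation(IAzApplication app, string name, int operationId)
{
    IAzOperation grant = app.CreateOperation(name, null);
    grant.OperationID = operationId;
    grant.Submit(0, null);

    IAzOperation deny = app.CreateOperation("!" + name, null);
    deny.OperationID = operationId + DENY_OFFSET;
    deny.Submit(0, null);
}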

We'll hide all the details in the administration tool, so that all the user will see is radio buttons for Grant and Deny for each operation.

I'm pretty happy with this for a number of reasons

  • it gives us the functionality we need
  • it's easy to understand (no coder should have a hard time figuring out what "!GetAccounts" means)
  • all of the logic is localized into the admin tool and the CheckAccess method, so the interface to the client doesn't change
  • it's not too much additional semantics layered on top of AzMan, and we take full advantage of AzMan's performance this way
Tuesday, 17 April 2007 15:44:12 (Pacific Daylight Time, UTC-07:00)

This is a weird one...

A while back I changed our serializers from a reflection-based model to one that uses run-time code generation via the CodeDOM.  Performance is two orders of magnitude better, so happy happy.  However (there's always a however), a weird problem cropped up yesterday that I'm not quite sure what to do with. 

One of our applications has a case where they need to touch web.config for their ASP.NET applications while under load.  When they do that, they start getting InvalidCastExceptions from one or more of the cached, code-generated serializers.  Strange.  Internally, after we generate a new serializer, we cache a delegate to its Serialize method for later retrieval.  It looks like that cache is getting boogered, so that the type cast exceptions ensue.  This is caused by serializers that are compiled against an assembly in one loader context getting called by code that's running against the same assembly in a different loader context, hence the type cast failure. 

I had assumed (probably a bad idea) that when you touch web.config, the whole AppDomain running that application goes away.  It looks like that's not the case here.  If the AppDomain really went away, then the cache (a static Hashtable) would be cleared, and there wouldn't be a problem. 

To test the behavior, I wrapped the call to the cached serializer to catch InvalidCastExceptions, and in the catch block, clear the serializer cache so that they get regenerated.  That works fine, the serializers get rebuilt, and all is well.  However, I also put in a cap, so that if there was a problem, we wouldn't just keep recycling the cache over and over again.  On the 4th time the cache gets cleared, we give up, and just start throwing the InvalidCastExceptions.
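
For the curious, the guard looks roughly like the sketch below.  The names (SerializeDelegate, GetOrBuildSerializer, the cache shape) are all hypothetical; the only point is the catch/clear/retry-with-a-cap idea:

private delegate string SerializeDelegate(object instance);

private static readonly System.Collections.Hashtable serializerCache =
    System.Collections.Hashtable.Synchronized(new System.Collections.Hashtable());
private static int cacheResetCount;
private const int MaxCacheResets = 4;

internal static string Serialize(object instance)
{
    try
    {
        // The cast (or the call) blows up when the cached delegate was generated
        // against this assembly loaded in a different loader context.
        SerializeDelegate serializer =
            (SerializeDelegate)serializerCache[instance.GetType()]
            ?? GetOrBuildSerializer(instance.GetType());
        return serializer(instance);
    }
    catch (InvalidCastException)
    {
        // Clear the cache and regenerate, but give up after a few resets so we
        // don't just keep recycling the cache over and over.
        if (System.Threading.Interlocked.Increment(ref cacheResetCount) >= MaxCacheResets)
            throw;

        serializerCache.Clear();
        return GetOrBuildSerializer(instance.GetType())(instance);
    }
}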

The behavior that was reported back by the application team was that said fix worked fine until the 4th time they touched web.config, at which point it started throwing exceptions.  Ah ha!  That means the AppDomain can't really be going away: the retry counter is stored in a static, so if the AppDomain were being unloaded the counter would reset to 0 on every restart and we'd never hit the cap. 

Unfortunately, just because I can characterize the problem doesn't mean I know what to do about it. :-)

Tuesday, 17 April 2007 12:45:10 (Pacific Daylight Time, UTC-07:00)

Thanks to some comments from Keith, I not only have the online version of Rhapsody working on Windows 2003 Server, but the desktop version as well. Yay! 

I installed 2K3 sp 2, then reinstalled the Rhapsody desktop player.  That still didn't work, until I set the compatibility mode on the shortcut to the desktop player to XP, and now it's shiny.

As good as the online player is, the desktop one is still more feature rich, so I'm excited to have it working again.

Tuesday, 17 April 2007 12:43:53 (Pacific Daylight Time, UTC-07:00)
# Friday, 13 April 2007

We've been building around WCF duplex contracts for a while now, as a way of limiting the number of threads active on our server.  The idea being that since a duplex contract is really just a loosely coupled set of one-way operations, there's no thread blocking on the server side waiting for a response to be generated. 
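
For anyone who hasn't worked with them, a duplex contract is just two one-way contracts wired together with CallbackContract; a minimal (and entirely hypothetical) sketch:

using System.ServiceModel;

// The service side; the callback contract carries the "response" as another one-way message.
[ServiceContract(CallbackContract = typeof(IAccountCallback))]
public interface IAccountService
{
    [OperationContract(IsOneWay = true)]
    void BeginGetAccounts(string customerId);
}

// Implemented by the client; the server calls back on this channel when the work is done.
public interface IAccountCallback
{
    [OperationContract(IsOneWay = true)]
    void GetAccountsCompleted(string[] accountIds);
}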

It's been working well, but I'm starting to find that there are an awful lot of limitations engendered by using duplex contracts, mostly around the binding options you have.  It was very difficult to set up duplex contracts using issued token security, and in the end it turned out we couldn't set up such a binding through configuration, but were forced to resort to building the bindings in code.  Just recently, I discovered that you can't use transport level security with duplex contracts.  This makes sense, because a duplex contract involves an independent connection from the "server" back to the "client".  If you are using SSL for transport security, you would need certificates set up on the client, etc.  I'm not surprised that it doesn't work, but it's limiting.  Plus we have constant battles with the fact that processes tend to hold open the client side port that gets opened for the return calls back to the client.  Every time we run our integration tests, we have to remember to kill TestDriven.NET's hosting process, or on our next test run we get a failure because the client side port is in use already. 

All this has led me to think that maybe we should go with standard request-response contracts until the server-side threading really shows itself to be a problem.  That's what I get for prematurely optimizing...

Indigo | Work
Friday, 13 April 2007 12:53:50 (Pacific Daylight Time, UTC-07:00)

Once again, Rhapsody has failed me.  Just this morning, I was happily listening away.  I closed my browser for a while, and just now came back to start up Rhapsody Online, since the desktop player totally doesn't work on Windows 2003 Server (at least on my box).  Now the online player tells me I'm running an incompatible operating system.  It's been working fine for months, but now it's incompatible.  Curses!  I suppose I'm going to have to start using a different box to play Rhapsody than the one I usually work on. 

I'm a big fan of Rhapsody's service, and have been steadily subscribing for years.  Unfortunately, their software (again and again) proves to be crap.

Friday, 13 April 2007 12:43:15 (Pacific Daylight Time, UTC-07:00)
# Tuesday, 10 April 2007

Orlando, here I come.  Not speaking this year, just basking in the goodness that is .NET 3.0, and learning as much as I can cram in my rapidly aging brain in a week.  There's still time, so register now if you haven't already...

 
Tuesday, 10 April 2007 16:54:33 (Pacific Daylight Time, UTC-07:00)
# Thursday, 05 April 2007

Donovan posted a link to the latest Channel 9 screencast on AzMan, in which Keith Brown talks about some of the new features coming up in the Longhorn Server version of AzMan.  It's a good video, and very encouraging to me, because it looks like the things the AzMan team has added for Longhorn are exactly the things I would have wanted. 

The programmatic interface is much simplified (thank you), and there's a new plugin model for the admin tool which allows you to work with custom principals, such as ADAM principals, from within the AzMan MMC snapin. 

Another interesting feature, called BizRule groups, allows you to define ad hoc groups by script, which can then be associated with roles at runtime.  The example in the video created a group consisting of the current user's direct manager, and then associated that group with an expense approver role.  This is a cool idea, but I'm reluctant to go back to defining application logic in VB script.  The one thing I really wish the team had done for Longhorn is create a managed interface for AzMan, and let us get away from COM (finally).  Since I'm writing the rest of my application in C#, I don't really want to have to maintain and deploy a bunch of COM-based scripting files.  I'm not too concerned about the "COM overhead" involved in calling the AzMan interfaces, but the scripting part is lame.

I'm looking forward to testing some of the new features, and trying out the performance improvements.

Thursday, 05 April 2007 10:32:08 (Pacific Daylight Time, UTC-07:00)