# Thursday, 21 May 2009

In my book, I talked a bit about programming by contract and how that makes everyone’s lives easier.  Say I have a method that divides one integer by another:

public double Divide(int dividend, int divisor)
{
  return dividend / divisor;
}

I’d like to be able to let callers know that they can’t pass a 0 for the divisor, because that will result in a DivideByZeroException, and nobody wants that.  In the past I had a couple of choices on how to express that, mostly involving writing different kinds of code, because C# doesn’t have a native expression of design by contract like Eiffel does.  One way is to use Debug.Assert:

public double Divide(int dividend, int divisor)
{
  Debug.Assert(divisor != 0);

  return dividend / divisor;
}

That way any caller that passes a 0 will get an assertion dialog at runtime, which brings it pretty dramatically to everyone’s attention.  The assumption is that using the Debug.Assert will flush out all cases where people are incorrectly calling my method during development, so it’s OK that the assertion will get compiled out of my Release build.  However, that doesn’t make it impossible for a caller to pass a 0 at runtime and cause the exception.  Another option is explicitly checking parameters and throwing a different exception.

public double Divide(int dividend, int divisor)
{
  if (divisor == 0)
      throw new ArgumentException("divisor cannot be zero", "divisor");

  return dividend / divisor;
}

Patrick argues that this has now moved from defining contract to defining behavior, and I can agree with that, although I’d probably argue that it defines both contract and behavior since I’ve extended the functionality of the Debug.Assert to the release build, while also protecting my internal state from bad data.  But that’s really a separate discussion… :)

Now thanks to the Microsoft Code Contracts project, I have a third option.  The Code Contracts project is the evolution of the work done on Spec#, but in a language-neutral way.  The Code Contracts tools are currently available from DevLabs for VS 2008, as well as shipping with VS 2010 Beta 1.  Just at the moment, there are more features in the DevLabs version than what made it into the 2010 beta.  With Code Contracts, I can rewrite my Divide method like this:

public double Divide(int dividend, int divisor)
{
  Contract.Requires(divisor != 0);

  return dividend / divisor;
}

I like the syntax, as it calls out quite explicitly that I’m talking about contract, and making an explicit requirement.  The default behavior of this method at runtime is identical to Debug.Assert: it brings up an assertion dialog and brings everything to a screeching halt.  However, it’s configurable at build time, so I can have it throw exceptions instead, or do whatever might be appropriate for my environment if the contract is violated.  I can even get the best of both worlds, with a generic version of Requires that specifies an exception:

public double Divide(int dividend, int divisor)
{
  Contract.Requires<ArgumentException>(divisor != 0, "divisor");

  return dividend / divisor;
}

I could configure this to bring up the assertion dialog in Debug builds, but throw ArgumentException in Release builds.  Good stuff.

The example above demonstrates a “precondition”.  With Code Contracts, I can also specify “postconditions”. 

public void Transfer(Account from, Account to, decimal amount)
{
  Contract.Requires(from != null);
  Contract.Requires(to != null);
  Contract.Requires(amount > 0);
  Contract.Ensures(from.Balance >= 0);

  if (from.Balance < 0 || from.Balance < amount)
      throw new InsufficientFundsException();

  from.Balance -= amount;
  to.Balance += amount;
}

This isn’t the greatest example, but basically the Transfer method is promising (with the Contract.Ensures method) that it won’t ever leave the Balance a negative number.  Again, this is arguably behavior rather than contract, but you get the point. 
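Postconditions can also talk about the return value itself, via Contract.Result<T>().  Here’s a minimal sketch (this Account/Withdraw shape is my own invention, not code from the tools); note that without the binary rewriter from the Code Contracts tools, these calls are no-ops at runtime:

```csharp
using System.Diagnostics.Contracts;

public class Account
{
    public decimal Balance { get; private set; }

    public Account(decimal opening) { Balance = opening; }

    // Ensures promises callers that the returned balance is never negative.
    // Contract.Result<decimal>() stands in for the eventual return value.
    public decimal Withdraw(decimal amount)
    {
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= Balance);
        Contract.Ensures(Contract.Result<decimal>() >= 0);

        Balance -= amount;
        return Balance;
    }
}
```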

A really nifty feature is that I can write an interface definition and associate it with a set of contract calls, so that anyone who implements the interface will automatically “inherit” the contract validation.  The syntax is a bit weird, but you can see why it would need to be like this…

[ContractClass(typeof(ContractForCalculator))]
interface ICalculator
{
   int Add(int op1, int op2);
   double Divide(int dividend, int divisor);
}

[ContractClassFor(typeof(ICalculator))]
class ContractForCalculator : ICalculator
{
   #region ICalculator Members

   public int Add(int op1, int op2)
   {
       return default(int);
   }

   public double Divide(int dividend, int divisor)
   {
       Contract.Requires(divisor != 0);

       return default(double);
   }

   #endregion
}

Now any class that implements ICalculator will have the contract validated for the Divide method.  Cool.  The last thing I want to point out is that the team included a handy sample of how to work with contract validation in your MSTest unit test code.  The Contract class exposes an event called ContractFailed, and I can subscribe to the event to decide what happens on a failure.  For a test assembly, I can do this:

[AssemblyInitialize]
public static void AssemblyInitialize(TestContext tc)
{
  Contract.ContractFailed += (sender, e) =>
  {
      e.SetHandled();
      e.SetUnwind();
      Assert.Fail(e.FailureKind.ToString() + " : " + e.Message);
  };
}

which will translate contract failures into test failures with very explicit error messages.  In the case of my Divide method, if I run this test:

[TestMethod()]
public void DivideContractViolation()
{
  Calculator target = new Calculator(); 
  int dividend = 12; 
  int divisor = 0; 
  double actual;
  actual = target.Divide(dividend, divisor);
  Assert.Fail("Should have failed");
}

I get a test failure of

Test method CalculatorTests.CalculatorTest.DivideContractViolation threw exception:  System.Diagnostics.Contracts.ContractException: Precondition failed: divisor != 0 divisor --->  Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException: Assert.Fail failed. Precondition : Precondition failed: divisor != 0 divisor.

Cool.  This is definitely something I’ll be looking at more in the future.

Thursday, 21 May 2009 14:50:38 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 25 April 2008
My new book Code Leader: Using People, Tools, and Processes to Build Successful Software is now available and in stock at Amazon and hopefully soon at stores near you.  The initial outline for the book came from a post I did a while back, This I Believe, the developer edition.  Writing up that post got me thinking about a lot of things, and I've continued to expand on them both in the book and in presentations since then. 

There is a lot of content in the book about working with tools like SCC, static analysis, and unit test frameworks to build better software more efficiently.  I think if I were to sum up the focus, it is that we as developers owe it to our project sponsors to work efficiently and effectively, and there are lots of incremental changes that can be made to the way we work, both in terms of tools and processes to make that happen. 

The reality of modern software development (mostly) is that since physical constraints like CPU time and memory/disk space have largely fallen away, the goal of any development project (from a technical standpoint) should be optimizing for developer productivity.  That covers a wide array of factors: from the readability of a design, to how clear developer intent is to future readers, to how resilient our software is to future change, to reducing barriers to developer productivity like implementing a good CI process.  Working together as a team across disciplinary and physical boundaries is at least as big a challenge as writing good code. 

I agree very much with Scott that the why and how of software development are often just as important as (and harder to get right than) the what.  I hope that I have something to contribute to the discussion as we all figure it out together...

Friday, 25 April 2008 11:31:59 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Thursday, 10 April 2008

First, there were a few ideas rattling around in my head, then there was a blog post, and now you can buy the book. :-)


The most difficult challenges facing most of us these days are not about making code work.  That's relatively easy, compared to making code that you can work on and maintain with other people.  I've tried to bring together a bunch of issues and practical solutions for writing code with a team, including defining contracts, creating good tracing and error handling, and building a set of automatic tests that you can rely on.  In addition, there's a bunch in here about how to work with other developers using source control, continuous integration, etc. to make everyone's lives easier and more productive.
Thursday, 10 April 2008 10:41:33 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, 14 March 2008
Once again I'll be teaching at the Oregon Institute of Technology this coming Spring term.  This time around the topic is building an application using C# in Visual Studio 2008.  We'll be covering an entire application lifecycle, including design, coding, test, build and deploy.  We'll go into detail about effective use of source control, how to set up a Continuous Integration process, how to write good unit tests, etc.  Along the way we'll be covering some of the new .NET 3.5 features like LINQ and all that entails.

Spread the word, and come on down if you are interested.

The official course title is .NET Tools with C# (CST407) and it will be Wednesday nights at the Hillsboro campus.

Friday, 14 March 2008 10:02:43 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 17 October 2007
Refactoring is cool.  Really cool.  It also means something very specific.  There's a book and everything.  If you haven't read it, you should. (Hint: it's by Martin Fowler)

Refactoring means reducing code smells by making small, incremental, orderly, and very specific changes to existing code.  Making such changes (one at a time) allows one to improve the structure of existing code without changing its functionality.  Always a good thing.  Refactorings have very specific names like "extract method" or "encapsulate field" or "push member up".  Again, there's a book.  You can look them up.
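To make one of those names concrete, here's a minimal sketch of "extract method" (the Order class is my own invented example, not one from the book): a total calculation is pulled out of a reporting method into a method of its own, one small change, with the observable behavior unchanged.

```csharp
using System;

public class Order
{
    private readonly decimal[] items;

    public Order(decimal[] items) { this.items = items; }

    // Before the refactoring, this loop sat inline in Summary().
    // "Extract method" moves it here and gives it a descriptive name.
    public decimal CalculateTotal()
    {
        decimal total = 0;
        foreach (decimal item in items)
            total += item;
        return total;
    }

    public string Summary()
    {
        return "Total: " + CalculateTotal();
    }
}
```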

"Where is this going?", you might ask.  "I already know this," you say.  Cool.  Then let's move on to what refactoring (Refactoring?) isn't.

Refactoring doesn't mean changing code because you think your way would have been better.  It doesn't mean rewriting things from scratch because you have a different opinion.  It doesn't mean starting over again and again in pursuit of the perfect solution to every coding problem. 

Those other things have names (which I won't mention here for the sake of any children reading this), but "Refactoring" isn't among them.  There's a tie-in here with another term we all love, "Agile".  Refactoring fits into an "agile" process after you've made everything work the way it should (i.e. passes the tests), to make it easier to work with the code on the next story/backlog item/iteration.  The point of agile development (IMHO) is to write as little code as possible to meet your requirements.  It doesn't mean redoing things until you end up with the least possible amount of code, measured in lines.  Again, that has a different name. 

Sometimes code needs to be fixed.  More often than we'd like, in fact.  But if you are (re)writing code in pursuit of the most bestest, don't call it Refactoring.  It confuses the n00bs. 

Wednesday, 17 October 2007 12:22:00 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 20 September 2007
This came up at work this week, so I thought I'd disgorge my views on the issue.  The question at hand is whether or not every method needs to validate its input parameters. 
For example, if I have a method

    public class Example
    {
        public int DoThingy(SomeObject obj)
        {
            return obj.GetImportantValue();
        }
    }


This method depends on the value of obj not being null.  If a caller passes a null, the framework will, on my behalf, throw a NullReferenceException.  Is that OK?  Sometimes it might be, and we'll come back to that.
The alternative is for me to validate my input parameters thusly:

        public int DoThingy(SomeObject obj)
        {
            if (obj == null)
                throw new ArgumentNullException("obj");

            return obj.GetImportantValue();
        }


The discussion we got into was whether this was really necessary or not, given that the Example object only got used in one place, and it would require extra code to validate the parameters.  The assertion was made by more than one person that the NullReferenceException was just fine, and that there was no need to do any validation. 
My $.02 on the matter was (and is) that if you don't do parameter validation, you are not upholding and/or communicating your contract.  Part of the contract implied by DoThingy() is that you have to pass a non-null SomeObject reference.  Therefore, in order to properly communicate your contract, you should throw an ArgumentNullException, which informs the caller exactly how they have violated their half of the contract.  Yes, it's extra code.  No, it may never be necessary.  A whole different subject is whether or not "fewest possible lines of code" is a design goal.  I'm going to avoid going there just now.

That said, there are obviously mitigating circumstances that apply.  If the object in question is really only called in one place, upholding the contract is less of a concern, since you should be able to write tests that cover the right cases to make sure the caller never passes null.  Although that brings up a separate issue: if the method only has one caller, why is it in a separate object at all?  Again, we'll table that one.  In addition, since in this particular case the DoThingy() method only takes one parameter, we don't have to wonder too hard where the culprit is when we get a NullReferenceException. 

The other issue besides contract is debugging.  If you don't check your input, and just let the method fail, then the onus is on the caller to figure out what the problem is.  Should they have to work it out?  If the method took 10 parameters, all reference types, and I let the runtime throw NullReferenceException, how long will it take the consumer to find the problem?  On the other hand, if I validate the parameters and throw ArgumentNullException, the caller is informed which parameter is null, since the onus was on me to find the problem. 
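To make that concrete, here's a small sketch (the ReportBuilder class and its parameters are invented for illustration) of how guard clauses pinpoint the offending argument, where a bare NullReferenceException would leave the caller guessing:

```csharp
using System;

public class ReportBuilder
{
    // With several reference-type parameters, explicit guards tell the
    // caller exactly which argument violated the contract, via the
    // ParamName carried by ArgumentNullException.
    public static string Build(string title, string author, string body)
    {
        if (title == null) throw new ArgumentNullException("title");
        if (author == null) throw new ArgumentNullException("author");
        if (body == null) throw new ArgumentNullException("body");

        return title + " by " + author + "\n" + body;
    }
}
```

Catching the exception and inspecting ParamName identifies the culprit immediately, so the consumer doesn't have to bisect the argument list in the debugger.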

One last reason...  If I validate my input parameters, it should leave me with fewer failure cases to test, since the error path is potentially much shorter than if the runtime handles it.  Maybe not much of a savings, but it's there.

Thoughts?

Thursday, 20 September 2007 16:10:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [11]  | 
# Friday, 31 August 2007

For the last week (at my new gig) I've been using ReSharper 3.0, and I must say I'm pretty darned impressed.  I hadn't looked at ReSharper since the very early versions, and had sort of written it off in favor of CodeRush/Refactor, but ReSharper 3 has some pretty compelling features...

  • ReSharper parses your code outside of the compiler, and therefore knows stuff the compiler couldn't, like when you reference something in another namespace without the proper using directive in place; ReSharper figures out which namespace you mean and asks if you'd like to add the directive.  Slick.
  • Inline suggestions.  ReSharper parses your code and offers up both errors (in an easier way than VS.NET, IMHO) and suggestions, such as "you really could make this field readonly".  These errors and suggestions are also displayed in a sidebar next to the scroll bar as red, green or yellow lines, so you can scroll to the problems quickly.  Nice.
  • Integrated NUnit support.  I love the way ReSharper handles NUnit integration.  You get little buttons in the right hand sidebar (where your breakpoints show up) that allow you to run or debug individual tests or the whole fixture.  The results get put in a custom results pane, much neater than the text based report you get from TestDriven.NET.  If you happen to have JetBrains profiler, that's integrated as well. 
  • Clever autocompletion.  I know I probably shouldn't be so impressed with this one, but just this morning I wrote a line that returned an IEnumerable, then on the next line typed foreach, and it completed the whole block, figured out the right types, etc.  Very clean.
  • Nice refactoring support.  I haven't had too much time to play with this yet, but it looks like it supports a goodly number of refactorings.  Cool.

I'm looking forward to playing with it more in the days to come.

Friday, 31 August 2007 09:49:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  |