# Friday, 14 March 2008
Once again I'll be teaching at the Oregon Institute of Technology this coming Spring term.  This time around the topic is building an application using C# in Visual Studio 2008.  We'll be covering an entire application lifecycle, including design, coding, testing, building, and deployment.  We'll go into detail about effective use of source control, how to set up a Continuous Integration process, how to write good unit tests, etc.  Along the way we'll be covering some of the new .NET 3.5 features like LINQ and all that entails.

Spread the word, and come on down if you are interested.

The official course title is .NET Tools with C# (CST407) and it will be Wednesday nights at the Hillsboro campus.

Friday, 14 March 2008 10:02:43 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 22 January 2008
I've been poking around a bit with the new ASP.NET MVC stuff, and so far I am very favorably impressed.  Sure, there are some things to work out, but I think it represents exactly the right level of abstraction for getting serious work done.  One of the things I've noticed lately, since I've been doing lots of ASP.NET work, is that the "classic" (if that term applies here) ASP.NET postback model is very, very good at making the trivial case trivial.  If all you want is a simple form that pretends to have button click events, it's super easy.  Unfortunately for many of us, however, it is also very, very good at making less trivial cases really hard.  At some point (and relatively quickly, I think) we spend a lot of time fighting the ASP.NET model rather than taking advantage of it.  I have spent hours trying to deal with the fact that ASP.NET is trying to make me code as if I were writing a Windows Forms application when what I'm really writing is a web app.  I shouldn't have to worry about whether or not I need to handle some bit of rendering in OnInit or OnPreInit.  I know what HTML I want, but ASP.NET can make it hard to get there.

The new MVC model is the level of abstraction I'd like to work at.  I know I'm writing a web app, so better to embrace that fact and deal with requests and responses rather than trying to pretend that I have button click events.  The MVC model will make it very easy to get the HTML I want without having to fight the framework.  I haven't logged enough time to have found the (inevitable) rough edges yet, but so far if I were starting a new web project I'd definitely want to do it using the MVC model.  The fact that it is easy to write unit test code is an added bonus (and well done, there!) but what really strikes me as useful is the model. 
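
To put a face on it, a controller in the preview bits looks roughly like this.  Treat it as a sketch, not gospel: the API is still shifting between CTPs, and the types and names below are mine.

    // A sketch in the shape of the early ASP.NET MVC preview: an HTTP request
    // maps straight to a method, which builds data and renders a view.
    // No page lifecycle, no ViewState, no pretend button click events.
    // Product and ProductRepository are hypothetical stand-ins.
    public class ProductsController : Controller
    {
        public void List()
        {
            List<Product> products = ProductRepository.GetAll(); // hypothetical
            RenderView("List", products);
        }
    }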

If you want a web app, write a web app!

Tuesday, 22 January 2008 13:38:41 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 26 December 2007
A while back I was having a problem with getting events to fire from a drop down list inside a repeater.  In that case, I turned off ViewState for the Repeater control, and everything worked fine after that.  Well, last week I came upon a situation (much too complicated to go into here) where that just wouldn't work.  I needed the ViewState to be on, and knew there should be a reasonable way to make it work.  To make a (much too) long story short, after talking it through with Leo I discovered the error of my ways.  I was populating the items inside the drop down list during the Repeater's ItemCreated event.  Apparently that's bad.  I took exactly the same code and moved it to the handler for ItemDataBound, and everyone was happy.  Now I can leave ViewState on, correctly populate the items in the list, and actually get the SelectedIndexChanged event properly fired from each and every DropDownList. 
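
For the record, the working version looks roughly like this (the control and item names here are hypothetical stand-ins, not my actual code):

    // Populate the DropDownList in ItemDataBound, not ItemCreated, so that
    // ViewState and SelectedIndexChanged behave.  Names are hypothetical.
    protected void myRepeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
    {
        if (e.Item.ItemType == ListItemType.Item ||
            e.Item.ItemType == ListItemType.AlternatingItem)
        {
            DropDownList list = (DropDownList)e.Item.FindControl("doseList");
            list.Items.Add(new ListItem("10 mg", "10"));
            list.Items.Add(new ListItem("20 mg", "20"));
        }
    }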

It's the little things...

Wednesday, 26 December 2007 12:13:48 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 06 December 2007
I happened to meet a woman the other day who grew up speaking Swiss German, and she said that one of the hardest bits of American slang for her to get the hang of was our use of "whatever" (usually with an !) to mean "that's the stupidest thing I've ever heard, but it's too trivial for me to take the time arguing with you about it".  That translation is mine, not hers.  Just look at how many words are saved with that simple but idiomatic usage.

So when I find that someone has changed the name of one of my variables (in several places) from "nextButton" to "nextBtn" causing me two separate merge conflicts, I'm glad we have that little word.

Whatever!

Thursday, 06 December 2007 14:56:06 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
# Monday, 03 December 2007
My SoftSource compatriot Leo makes some good points about the overuse of, or at least over-reliance on, unit testing.  I absolutely agree that you have to be clear about what the point of unit testing is.  Unit testing is for exercising all the possible paths through any given code, and verifying that it does what the author of that code thought it should do.  That doesn't mean that it does what it is supposed to do in the context of a larger application.  That's why we have integration or functional tests.  It's up to whoever writes your functional tests (hopefully professional testers with specs in hand) to verify that the code does what the user wants it to do.  Integration tests will tell you if your code does what the developer in the cube next to you thinks it should. 

It's all about context.  It is folly to think that running a bunch of unit tests guarantees that the application will do what it should.  In fact, I'm working on a web project right now, and it is entirely possible for every unit test to pass with raging success and still not have the ASP.NET application start up.  That's a pretty clear case of the application not even running, even though all the tests pass.  The tests are still useful for what they provide, but there's a lot more to even automated testing than just validating that the code does what the author intended.
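
That's why, in addition to the unit tests, I want at least a smoke test that exercises the running site.  Something along these lines (NUnit-style; the URL is obviously a placeholder):

    // A smoke-test sketch: every unit test can pass with raging success,
    // and this will still fail if the application won't even start.
    using System.Net;
    using NUnit.Framework;

    [TestFixture]
    public class SmokeTests
    {
        [Test]
        public void HomePageResponds()
        {
            HttpWebRequest request =
                (HttpWebRequest)WebRequest.Create("http://localhost/myapp/"); // placeholder URL
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
            }
        }
    }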

Monday, 03 December 2007 16:48:27 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
If you are following a continuous integration strategy, breaking the build should be a big deal.  Most of the time, if the build is broken it's because someone forgot to check in a new file, or because they didn't update before committing their stuff.  Those are both pretty easily avoided.  One strategy that helps with missing files is to do your work in a task-specific branch, get everything checked in the way you think it should be, and then check out that branch to a separate location on your local system.  If it builds there too, then you probably aren't missing any files.  If you are, you'll know about it before you break the integration branch. 

As far as not updating before committing, there are a couple of reasons why that might happen more than you would like.  If the build is constantly breaking, then people will be less likely to want to update before committing, since if the build is broken they won't be able to validate their changes, and will have to put off committing.  If your build process takes too long (more than 10 minutes including unit tests) then updating and building before committing will be painful, and people won't do it.  Of course, sometimes it's simple laziness, but that just needs to be beaten out of people. :)  The other things are fixable with technology, which is easier than getting people to do the right thing most of the time.

To make the process easier for everyone, don't commit any changes after the build breaks unless those changes are directly related to fixing the build.  If they aren't, then those additional commits will also cause failing builds, but now the waters have been muddied and it is harder to track down which changes are causing failures.  If the build breaks, then work on fixing it rather than committing unrelated changes that make it harder to fix the real problem. 

Monday, 03 December 2007 14:44:06 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, 15 November 2007
I know it's been a bit quiet hereabouts of late.  I'm in the middle of a writing project, as well as still settling into a (relatively) new work situation, and trying to catch up from the fact that my knowledge of ASP.NET basically consisted of "there's code behind" before.  So far my feelings are mixed.  Some days I think ASP.NET is pretty darned clever.  Other days I find myself longing for the days of CGI, when at least it was easy to understand what was supposed to happen.  Those days usually have something to do with viewstate. :-)  Anyway...

I've also been learning about the ASP.NET Ajax goodies, and ran into an interesting problem.  If you are using UpdatePanel, you absolutely cannot mess with the control hierarchy after the update panel has been created.  If you do, you'll be presented with a helpful error that basically reads "you've screwed something up!".  It took a while to get to the root of the problem given that input.  It turned out that the update panel was in the markup for my page, but somewhere between when it got created and OnInit for the page, the control hierarchy was being changed.  I won't go into detail about how, but suffice it to say stuff in the page got wrapped with some other stuff, thereby changing the hierarchy.  Nobody else seemed to mind, except for the UpdatePanel.  To make a potentially very long story only slightly long: as long as all of the screwing around with the controls is done in or prior to OnPreInit for the page, all is well.  Once that change was made everything was happy, and the update panels work fine. 
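
In other words, something like this (the page and method names here are hypothetical):

    // A sketch with hypothetical names: any re-parenting of controls has to be
    // finished by the end of OnPreInit, or the UpdatePanel will complain later.
    using System;

    public partial class WizardPage : System.Web.UI.Page
    {
        protected override void OnPreInit(EventArgs e)
        {
            base.OnPreInit(e);
            // Wrap/move controls here, before the control tree is locked in
            // from the UpdatePanel's point of view.  Doing this in OnInit or
            // later is what triggers the "helpful" error.
            WrapContentControls();
        }

        private void WrapContentControls()
        {
            // ...re-parent page controls into a wrapper (details omitted)...
        }
    }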

The page lifecycle is possibly one of the great mysteries of our time...

Thursday, 15 November 2007 10:24:37 (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 
# Wednesday, 17 October 2007
OK, I'm the first to admit that when it comes to writing ASP.NET I'm a complete newbie.  Before I started on the project I'm currently working on I'd probably written less than 10 ASP.NET pages.  So maybe this is obvious to the set of all people not me.  But I think maybe not.

I've got a repeater inside a form.  Each item in the repeater is composed of three DropDownList controls.  Those dropdowns are populated via databinding.  When the user has made all of their selections using the dropdowns, they hit the "go" button and a PostBack happens.  I spent the last full day (like, since this time yesterday) trying to figure out why, no matter what I tried, I couldn't retrieve the value the user selected after the form posted back.  Gone.  Every time I got the post back, all of the dropdowns had a selected index of -1. *@!~#!!

I was pretty sure I had the overall pattern down right, since just the day before I made this work in another page that used textboxes instead of dropdownlists.  See the dent in my forehead from the keyboard?  Sure, initially I had some other problems like repopulating the data in the drop downs every time, etc., but I got past that.

Google proved comparatively fruitless.  Lots of people couldn't figure out how to databind the list of items to the drop down, but nobody was talking about posting back results.  Then lightning struck, and I found a reference to someone having problems with events not firing from drop down lists when the repeater's EnableViewState was true.  Granted, I'm not hooking up the events, but you never know. 

That was it. 

    <asp:Repeater ID="medicationRepeater" runat="server" EnableViewState="false">

Now the PostBack works just like I would expect.  Why this should be is completely beyond my ken.

Anyone?

Wednesday, 17 October 2007 13:25:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
Refactoring is cool.  Really cool.  It also means something very specific.  There's a book and everything.  If you haven't read it, you should. (Hint: it's by Martin Fowler)

Refactoring means reducing code smells by making small, incremental, orderly, and very specific changes to existing code.  Making such changes (one at a time) allows one to improve the structure of existing code without changing its functionality.  Always a good thing.  Refactorings have very specific names like "extract method" or "encapsulate field" or "pull up method".  Again, there's a book.  You can look them up.
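
For example, "extract method" is nothing more dramatic than this (the types and the tax rate are hypothetical; Order and OrderItem are stand-ins):

    // "Extract method", sketched with hypothetical types: exactly the same
    // behavior before and after, but the tax rule now has a name of its own.

    // Before: the tax calculation is buried in the arithmetic.
    public decimal GetTotal(Order order)
    {
        decimal subtotal = 0;
        foreach (OrderItem item in order.Items)
            subtotal += item.Price * item.Quantity;
        return subtotal * 1.0825m;
    }

    // After: one small, specific, behavior-preserving change.
    public decimal GetTotal(Order order)
    {
        decimal subtotal = 0;
        foreach (OrderItem item in order.Items)
            subtotal += item.Price * item.Quantity;
        return ApplyTax(subtotal);
    }

    private decimal ApplyTax(decimal subtotal)
    {
        return subtotal * 1.0825m;
    }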

Where is this going? you might ask.  I already know this, you say.  Cool.  Then let's move on to what refactoring (Refactoring?) isn't.

Refactoring doesn't mean changing code because you think your way would have been better.  It doesn't mean rewriting things from scratch because you have a different opinion.  It doesn't mean starting over again and again in pursuit of the perfect solution to every coding problem. 

Those other things have names (which I won't mention here for the sake of any children reading this), but "Refactoring" isn't among them.  There's a tie-in here with another term we all love, "Agile".  Refactoring fits into an "agile" process after you've made everything work the way it should (i.e., passes the tests) to make it easier to work with the code on the next story/backlog item/iteration.  The point of agile development (IMHO) is to write as little code as possible to meet your requirements.  It doesn't mean redoing things until you end up with the least possible amount of code, measured in lines.  Again, that has a different name. 

Sometimes code needs to be fixed.  More often than we'd like, in fact.  But if you are (re)writing code in pursuit of the most bestest, don't call it Refactoring.  It confuses the n00bs. 

Wednesday, 17 October 2007 12:22:00 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 20 September 2007
This came up at work this week, so I thought I'd disgorge my views on the issue.  The question at hand is whether or not every method needs to validate its input parameters. 
For example, if I have a method

    public class Example
    {
        public int DoThingy(SomeObject obj)
        {
            return obj.GetImportantValue();
        }
    }

This method depends on the value of obj not being null.  If a caller passes a null, the framework will, on my behalf, throw a NullReferenceException.  Is that OK?  Sometimes it might be, and we'll come back to that.
The alternative is for me to validate my input parameters thusly:

        public int DoThingy(SomeObject obj)
        {
            if (obj == null)
                throw new ArgumentNullException("obj");

            return obj.GetImportantValue();
        }

The discussion we got into was whether this was really necessary or not, given that the Example object only got used in one place, and it would require extra code to validate the parameters.  The assertion was made by more than one person that the NullReferenceException was just fine, and that there was no need to do any validation. 
My $.02 on the matter was (and is) that if you don't do parameter validation, you are not upholding and/or communicating your contract.  Part of the contract implied by DoThingy() is that you have to pass a non-null SomeObject reference.  Therefore, in order to properly communicate your contract, you should throw an ArgumentNullException, which informs the caller exactly how they have violated their half of the contract.  Yes, it's extra code.  No, it may never be necessary.  A whole different subject is whether or not "fewest possible lines of code" is a design goal.  I'm going to avoid going there just now.

That said, there are obviously mitigating circumstances that apply.  If the object in question is really only called in one place, upholding the contract is less of a concern, since you should be able to write tests that cover the right cases to make sure the caller never passes null.  Although that brings up a separate issue: if the method only has one caller, why is it in a separate object at all?  Again, we'll table that one.  In addition, since in this particular case the DoThingy() method only takes one parameter, we don't have to wonder too hard, when we get a NullReferenceException, where the culprit is. 

The other issue besides contract is debugging.  If you don't check your input, and just let the method fail, then the onus is on the caller to figure out what the problem is.  Should they have to work it out?  If the method took 10 parameters, all reference types, and I let the runtime throw NullReferenceException, how long will it take the consumer to find the problem?  On the other hand, if I validate the parameters and throw ArgumentNullException, the caller is informed which parameter is null, since the onus was on me to find the problem. 
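
To illustrate (the method and types below are hypothetical), compare what the caller learns in each case:

    // A sketch with hypothetical types: validating up front means the
    // exception names the offending parameter, instead of leaving the
    // caller to hunt through every reference type they passed in.
    public void Process(Customer customer, Order order, AuditLog log)
    {
        if (customer == null)
            throw new ArgumentNullException("customer");
        if (order == null)
            throw new ArgumentNullException("order");
        if (log == null)
            throw new ArgumentNullException("log");

        // ...actual work elided...
    }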

One last reason...  If I validate my input parameters, it should leave me with fewer failure cases to test, since the error path is potentially much shorter than if the runtime handles it.  Maybe not much of a savings, but it's there.

Thoughts?

Thursday, 20 September 2007 16:10:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [11]  |