# Tuesday, 25 May 2004

If you've ever gotten involved in the Java vs. .NET debate (and who hasn't?), check out N. Alex Rupp's blog. He's a dyed-in-the-wool Java guy who's at TechEd this week talking with .NET developers and INETA people about what they like about .NET. He has some very interesting things to say about the relationship between Java developers and .NET developers. It's a very fair and unbiased look at the issues, and at how the communities interact internally and externally.

It's very refreshing to see someone being so open and honest about the pros and cons of both platforms. (And it's pretty courageous, given the longstanding antagonisms, for him not only to go to TechEd, but to advertise his Java-guy-ness.)

Tuesday, 25 May 2004 15:05:22 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 

Scott has some comments about WSE 2.0 (which, just in case you haven't heard yet, has RTMed), and I wanted to comment on a few things...

 Question: The Basic Profile is great, but are the other specs getting too complicated?
My Personal Answer (today): Kinda feels like it!  WS-Security will be more useful when there is more support on the Java side.  As far as WS-Policy goes, it seems that Dynamic Policy is where the money's at, and it's a bummer WSE doesn't support it.
[Scott]

It's the tools that are at issue here rather than the specs, I think. I spent some time writing WS-Security by hand about a year ago, and yes, it's complicated, but I don't think unnecessarily so. The problem is that we aren't supposed to be writing it by hand. We take SSL totally for granted, but writing an SSL implementation from scratch is non-trivial; it's because we don't have to write one ourselves anymore that we can take it for granted. The problem (in the specific case of WS-Security) is that we have taken that kind of security for granted as far as Web Services go, which quietly assumes that Web Services are bound to HTTP. In order to break the dependence on HTTP (which opens up many new application scenarios), we have to replace all the stuff that HTTP gives us "for free": encryption, addressing, authentication, etc. Because to fit with SOAP those things all have to be declarative rather than procedural, I think they feel harder than depending on the same things from procedural code.
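
To make the declarative point concrete, here's a minimal sketch of what that model looks like from C# with WSE 2.0. It's a sketch, not production code: MyServiceWse and SomeOperation are hypothetical stand-ins for a proxy class generated by the WSE tooling, and the credentials are obviously placeholders.

    using Microsoft.Web.Services2.Security;
    using Microsoft.Web.Services2.Security.Tokens;

    public class SecureCaller
    {
        public static void CallService()
        {
            // MyServiceWse is a hypothetical proxy generated by the WSE 2.0
            // tooling (it derives from WebServicesClientProtocol).
            MyServiceWse proxy = new MyServiceWse();

            // Declare a UsernameToken on the outgoing SoapContext; WSE turns
            // this into the appropriate WS-Security SOAP headers on the wire.
            UsernameToken token = new UsernameToken(
                "someUser", "somePassword", PasswordOption.SendHashed);
            proxy.RequestSoapContext.Security.Tokens.Add(token);

            // Declare that the message should be signed with the same token.
            proxy.RequestSoapContext.Security.Elements.Add(
                new MessageSignature(token));

            proxy.SomeOperation();
        }
    }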

If we are to realize the full potential of Web Services and SO, then we have to have all this infrastructure in place, to the point where it becomes ubiquitous. Then we can take the WS-*s for granted just like we do SSL today. Unfortunately, the tools haven't caught up yet. Three or four years ago we were writing an awful lot of SOAP- and WSDL-related code ourselves, and now the toolsets have (mostly) caught up. Given enough time, the tools should be able to encompass the rest of the standards we need to open up all the new application scenarios.

Steve Maine makes a good analogy to the corporate mailroom. There's a lot of complexity, and a lot of complex systems, involved in getting mail around the postal system, none of which we see on a daily basis. But it's all out there nonetheless, and we couldn't get mail around without it. When we can take SO for granted the way we do the postal system, then we'll see the full potential of what SO can do for business, etc. in the real world.

Tuesday, 25 May 2004 11:06:13 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

On a whim last weekend I spent some time trying to learn Smalltalk. So many of the seminal thinkers around XP, etc. were originally Smalltalk heads that I wanted to see what all the fuss was about.

I downloaded a copy of Squeak for Windows and went through several of the tutorials. Pretty interesting stuff. I can see why people are (or used to be, anyway) so hot for Smalltalk. Because it's so rigidly OO, it forces you into doing the right things with regard to object orientation. And the tools are pretty cool; having such an advanced class browser and inspection system is a big advantage.

However, I think I'll stick to statically typed languages (of which C# is currently my favorite). My overall impression of Smalltalk is that very competent people could get a lot of work done in a very short amount of time, because the system is so flexible. On the other hand, because the system is so flexible, I'd guess that people who were less than amazingly competent (or confident) would have a very hard time getting anything done at all: you have to understand exactly what you're doing, and many errors will only present themselves at runtime. It would be fun to work in such a flexible system, but I really appreciate the benefits of compile-time type checking.

At the same time, I was playing with a copy of MSWLogo (a free Logo implementation for Windows). What a blast. I hadn't played with Logo since the Apple II days. Once upon a time you could actually get a physical turtle that held a pen and connected to a serial port, and you could write Logo programs to get the turtle to scoot around on a really big piece of paper. I always thought that was a cool idea. I was also surprised at how much Logo and Smalltalk have in common syntactically.

I was trying to get my 8-year-old son interested in Logo, but I think he's still a little too young. I was met with a resounding "whatever, Dad". I guess I didn't get my first Commodore PET until 6th or 7th grade. :-)

Tuesday, 25 May 2004 10:37:49 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 21 May 2004

I was just reading Steven Padfield's article on unit testing ASP.NET code by creating your own HttpContext outside of IIS (a very useful technique), and it got me thinking about something I've gotten a fair amount of mileage out of lately: creating my own context object that's available anywhere in a call stack, just like HttpContext is.
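
For reference, the outside-of-IIS part looks roughly like this. This is a sketch of the general idea rather than Steven's exact code, and the page name and query string passed to SimpleWorkerRequest are just placeholders:

    using System.IO;
    using System.Web;
    using System.Web.Hosting;

    public class HttpContextFixture
    {
        // Stands up an HttpContext without IIS, so that code which reads
        // HttpContext.Current can run inside a plain unit test.
        public static void SetupFakeContext()
        {
            StringWriter output = new StringWriter();
            SimpleWorkerRequest request =
                new SimpleWorkerRequest("test.aspx", "", output);
            HttpContext.Current = new HttpContext(request);
        }
    }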

When I started looking into how to implement such a thing, I was thinking in terms of deriving from ContextBoundObject, which seemed like overkill. So I fired up the ever-handy Reflector and found out how HttpContext handles itself. It turns out that no ContextBoundObject is needed. Hidden in the bowels of System.Runtime.Remoting.Messaging is a method, CallContext.SetData(string, object), that will stick a named object value into your call context, where it can be retrieved from any place on the current call stack. Pretty handy. If you wrap that in an object like HttpContext, you can store your own context values, and potentially provide context-sensitive methods such as HttpContext.GetConfig().

What you end up with is an object that looks something like this:

    using System;
    using System.Runtime.Remoting.Messaging;

    namespace MyContext
    {
        public class ContextObject
        {
            private const string ContextTag = "MyContextObject";

            private ContextObject()
            {
            }

            /// <summary>
            /// Returns a valid context object, creating one if
            /// none exists.
            /// </summary>
            public static ContextObject CurrentContext
            {
                get
                {
                    object o = CallContext.GetData(ContextTag);
                    if (o == null)
                    {
                        o = new ContextObject();
                        CallContext.SetData(ContextTag, o);
                    }

                    if (!(o is ContextObject))
                    {
                        throw new ApplicationException("Corrupt ContextObject");
                    }

                    return (ContextObject)o;
                }
            }

            /// <summary>
            /// Clears out the current context. May be useful
            /// in situations where you don't have complete
            /// control over your call stack, i.e. you aren't at the top of
            /// the application call stack and need to maintain
            /// a separate context per call.
            /// </summary>
            public static void TeardownCurrentContext()
            {
                CallContext.FreeNamedDataSlot(ContextTag);
            }

            private string contextValue1;

            /// <summary>
            /// A sample value to store to/pull from the context.
            /// </summary>
            public string ContextValue1
            {
                get { return contextValue1; }
                set { contextValue1 = value; }
            }
        }
    }

You can use the context object from anywhere in your call stack, like this:

    using System;
    using MyContext;

    public class Tester
    {
        public static void Main(string[] args)
        {
            // Set a value on the context at the top of the call stack...
            ContextObject co = ContextObject.CurrentContext;
            co.ContextValue1 = "Hello World";
            OtherMethod();
        }

        public static void OtherMethod()
        {
            // ...and read it back further down the stack, with no
            // parameter passing in between.
            ContextObject co = ContextObject.CurrentContext;
            Console.WriteLine(co.ContextValue1);
        }
    }

The resulting output is, of course, "Hello World", since the context object retains its state across calls.  This is a trivial example, and you wouldn't really do it this way, but you get the idea.
Friday, 21 May 2004 10:52:34 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 20 May 2004
Now that I think about it some more, this is a problem that WinFS could really help solve. The biggest reason people don't use things like RDF is sheer laziness (you'll notice the rich RDF on my site :-) ), but if we could use the Longhorn interface to easily enter and organize metadata about content, it might be a cool way to generate RDF or other semantic information. Hmmmm... It would be fun to write a WinFS -> RDF widget. If it weren't for that dang day job...
XML | Work
Thursday, 20 May 2004 12:43:55 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Scott mentions some difficulty he had lately finding some information with Google, which brings to mind the long-debated issue of the semantic web. Scott's problem is exactly the kind of thing that RDF was meant to solve when it first came into being, lo these 6-7 years ago.

Has anyone taken advantage of it? Not really. The odd library and art gallery. Why? Two main reasons: 1) pure laziness, since it's extra work to tag everything with metadata, and 2) RDF is nearly impossible to understand. That's the biggest rub. RDF, like so many other standards to come out of the IETF/W3C, is almost incomprehensible to anyone who didn't write the standard. The whole notion of writing RDF triples in XML is something that most people just don't get; I don't really understand how it's supposed to work myself. And, as with WSDL and other examples, the people who came up with RDF assumed that people would use tools to write the triples, so they wouldn't have to understand the format. The problem with that (and with WSDL) is that since no one understands the standard, no one has written any usable tools either.

The closest that anyone has come to using RDF in any real way is RSS, which has turned out to be so successful because it is accessible. It's not hard to understand how RSS is supposed to work, which is exactly why it's not really RDF. So attaching some metadata to blog content has turned out to be not so hard, mostly because most people don't go beyond a simple category, although RSS supports quite a bit more.

The drawback to RDF is that it was created by and for librarians, not web page authors (most of whom aren't librarians). Since most of us don't have librarians to mark up our content with RDF for us, it just doesn't get done. Part of the implicit assumption behind RDF and the semantic web is that authoritative information only comes from institutional sources, which have the resources to deal with semantic metadata. If blogging has taught us anything, it's that that particular assumption just isn't true. Most of the useful information on the Internet comes from "non-authoritative" sources. When was the last time you got a useful answer to a tech support problem from a corporate web site? The tidbit you need to solve your tech support problem is nowadays more likely to come from a blog or a Usenet post than from the company that made the product. And those people don't give a fig for the "semantic web".

Work | XML
Thursday, 20 May 2004 12:29:22 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 14 May 2004

[Update] I watched it again with my wife last night, and liked it just as much.  Definitely a movie I could watch many times.

OK, my fears were unfounded. Troy was pretty darned good. I was impressed. It's 2:43 long, but it went by in a flash; I was surprised when it was over. It totally didn't seem that long.

The CG was used very judiciously to give scale to the sets, and the special effects in general were not overdone, which I really appreciated. The acting was solid, and Brad Pitt was actually pretty good as Achilles.

Possibly most impressive was the fight choreography. The final battle between Hector and Achilles was very well staged and excellently filmed, one of the best fight scenes I've seen in a really long time. Even the mass battles were well filmed, and done in such a way that they didn't come across as just another Braveheart ripoff. There were definitely some liberties taken with Homer (as Mr. Cranky notes), but I'm pretty sure he'd still recognize the story. The romance with Briseis was perhaps a bit overdone, but I didn't think it detracted much.

I find myself really hoping that the film does well enough to warrant them making the Odyssey next. 

Scott points out an interesting new unit of measurement. :-)

Friday, 14 May 2004 15:18:25 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 13 May 2004

This is a bit convoluted, but stick with me...

We've got a fairly involved code generation process that builds C# classes from XSDs, which is all well and good. The code generation uses some utility classes to help with reading the schema files, and those have unit tests, which is cool. The code generation happens as part of our regular build, so if I screw up the CodeSmith template such that the code generation fails, I'll know about it at build time. If I screw up the template such that the code generation succeeds but the results don't compile, I'll also find out about that at build time.

However, since the whole build/codegen process takes a while, it's not the kind of thing you want to run after every change (as is the XP way). So how do I unit test the generated classes? I think I have a workable solution, but it took some thinking about.

I ended up writing a set of unit tests (using NUnit) that run the code generation process on a known schema. The resulting C# file is then compiled using the CodeDom, and then I can test the resulting class itself. As a happy side benefit, I'll know whether the CodeSmith template runs, and whether the resulting C# compiles, without having to run the whole build process. Pretty cool.
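
Here's a rough sketch of what one of those tests looks like. Generated.cs and MyNamespace.GeneratedClass are hypothetical stand-ins for whatever the CodeSmith template actually emits:

    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;
    using NUnit.Framework;

    [TestFixture]
    public class GeneratedCodeTests
    {
        [Test]
        public void GeneratedCodeCompilesAndLoads()
        {
            // Compile the generated C# file with the CodeDom.
            CSharpCodeProvider provider = new CSharpCodeProvider();
            ICodeCompiler compiler = provider.CreateCompiler();

            CompilerParameters parameters = new CompilerParameters();
            parameters.GenerateInMemory = false;   // see the XmlSerializer note below
            parameters.OutputAssembly = "Generated.dll";
            parameters.ReferencedAssemblies.Add("System.Xml.dll");

            CompilerResults results =
                compiler.CompileAssemblyFromFile(parameters, "Generated.cs");
            Assert.AreEqual(0, results.Errors.Count, "generated code didn't compile");

            // Now we can test the generated class itself.
            Type generatedType =
                results.CompiledAssembly.GetType("MyNamespace.GeneratedClass");
            Assert.IsNotNull(generatedType);
            Assert.IsNotNull(Activator.CreateInstance(generatedType));
        }
    }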

An interesting side note: one of the things we do with our generated classes is serialize them as XML using the XmlSerializer. I discovered during this process that if you generate an assembly with the CodeDom, you can't use the XmlSerializer on the resulting classes unless you write the assembly to disk. I was using an in-memory-only assembly, and the XmlSerializer gave me a very helpful exception stating that the assembly must be written to disk, which was easy enough to do. I'm assuming this is because the XmlSerializer itself generates another dynamic assembly, and it needs something to set its reference to. Interesting.
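
In code, the discovery boils down to something like this (again with the hypothetical Generated.dll and MyNamespace.GeneratedClass from above); constructing the XmlSerializer only works because the assembly was written to disk rather than generated in memory:

    using System;
    using System.IO;
    using System.Reflection;
    using System.Xml.Serialization;

    public class SerializeGenerated
    {
        public static void Main()
        {
            // Load the assembly that the CodeDom wrote to disk.
            Assembly assembly = Assembly.LoadFrom("Generated.dll");
            Type generatedType = assembly.GetType("MyNamespace.GeneratedClass");

            // This line throws for an in-memory-only assembly: the serializer
            // generates its own dynamic assembly, which apparently needs a
            // file on disk to reference.
            XmlSerializer serializer = new XmlSerializer(generatedType);

            StringWriter writer = new StringWriter();
            serializer.Serialize(writer, Activator.CreateInstance(generatedType));
            Console.WriteLine(writer.ToString());
        }
    }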

Thursday, 13 May 2004 15:18:08 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 11 May 2004

I've watched a couple of new movies in the last week or so, and it's been a pretty mixed bag. I don't post about movies all that often, but I'm a pretty serious moviephile, and so's my wife, so we see a lot of films.

  • Kill Bill (Vol. 1): I thought it was pretty good. You can tell Tarantino worships the genre, and it fit well. Vikki thought it was too gory, but I thought it was genre-appropriate, given what he was going for. I think Uma is critically underappreciated, since she tends to take off-beat roles. Lucy Liu was great as the trash-talking gangster boss.
  • Master and Commander: I was totally prepared not to like this film, but I was quite favorably surprised.  The role suited Crowe well, cinematography was very good.  Peter Weir has a great feel for composition.  I thought it had just the right level of realistic violence.  It conveyed the horror of combat without being gratuitous. 
  • Van Helsing: yuck. spit. hack. It sucked. I knew it would be bad, but I didn't think it would be that bad. Jackman was totally underutilized. The special effects were cool, but not enough so to carry the film past the wretched dialog and lack of a coherent storyline. I saw one review on the web that compared it to Battlefield Earth, and I don't think I'd go quite that far, but still, majorly lame. All the supporting characters were overacting without being melodramatic (which at least would have been genre-appropriate), especially Frankenstein's monster. What a ham. Bah! The werewolf effects were novel, but again, not cool enough. Check out Dog Soldiers if you want a good werewolf movie. The guy who plays Dracula could have been good if he'd stuck with Lugosi instead of occasionally lapsing into Oldman.
  • The Last Samurai: OK, there were totally no surprises here. It's exactly what you'd expect: Dances with Wolves, only set in Japan. But it was well executed. Good cinematography, decent enough acting, great art direction. The sets and costumes were very well done, and pretty accurate; I really liked the job they did on the samurai's kimonos. Don't expect anything dazzling, but it's a good solid film. I wish Cruise wouldn't keep taking the same role over and over again, though. He can actually act (witness Magnolia or Eyes Wide Shut).

I'm looking forward to seeing Troy this weekend, although I'm sure it's going to suck. As both a movie and history buff, I can't really stay away. :-)

Tuesday, 11 May 2004 11:06:37 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 10 May 2004

Upgraded smoothly to dasBlog 1.6 this weekend. It went perfectly, no hassle. The site seems a bit faster, and the styles look better under Firefox.

Kudos to Omar and the rest of the team.

Monday, 10 May 2004 09:41:33 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  |