# Friday, 27 June 2003

I finally got around to creating my first RSS feed today.  We are using an automated build tool for .NET called Draco.NET to build our (rather complex) application.  The great thing about Draco is that it watches your source-code repository and rebuilds whenever it detects a change.  When it’s done, you get a very nicely formatted email that tells you whether the build succeeded or failed. 

Unfortunately, as your build process grows, so does the email, since it includes the output from the NAnt build.  Also, because of some strangeness in CVS log files, Draco tends to build rather more often than it really needs to, particularly if you are building from two different branches.  The end result is lots of great big emails, or “build spam”. 

So, I cooked up a quick ASP.NET application that will look at the directory containing output from Draco and turn it into an RSS feed.  Now all I get is the success or failure of the build in the RSS stream, with a link to another page that provides the full results if I want to see them.  A relatively small accomplishment, I realize, but there you have it.  
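
For the curious, the guts of it are just an HTTP handler that walks the Draco results directory and writes RSS 2.0 by hand.  This is a rough sketch rather than the actual code; the paths, file layout, and names (BuildRssHandler, detail.aspx) are all made up for illustration:

```csharp
using System;
using System.IO;
using System.Text;
using System.Web;
using System.Xml;

// Hypothetical handler: the Draco output directory and file naming are assumptions.
public class BuildRssHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/xml";
        XmlTextWriter rss = new XmlTextWriter(context.Response.OutputStream, Encoding.UTF8);

        rss.WriteStartElement("rss");
        rss.WriteAttributeString("version", "2.0");
        rss.WriteStartElement("channel");
        rss.WriteElementString("title", "Draco.NET build results");
        rss.WriteElementString("link", "http://buildserver/builds/");

        // One <item> per build result file, newest first.
        string[] files = Directory.GetFiles(@"C:\Draco\results", "*.xml");
        Array.Sort(files);
        Array.Reverse(files);
        foreach (string file in files)
        {
            rss.WriteStartElement("item");
            rss.WriteElementString("title", Path.GetFileNameWithoutExtension(file));
            rss.WriteElementString("link",
                "http://buildserver/builds/detail.aspx?file=" + Path.GetFileName(file));
            rss.WriteElementString("pubDate", File.GetLastWriteTime(file).ToString("r"));
            rss.WriteEndElement();
        }

        rss.WriteEndElement(); // channel
        rss.WriteEndElement(); // rss
        rss.Flush();
    }

    public bool IsReusable { get { return true; } }
}
```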

What the exercise did do is confirm my faith in two things:  1) RSS is pretty darn handy, and has a lot of applications, and 2) .NET is pretty much the most straightforward way to do just about anything.  The ASP.NET application only took around 2 hours, and would have taken MUCH longer in ASP or (heaven forefend) ATL Server.

[Listening to: Lady Diamond - Steeleye Span - Spanning the Years(04:37)]
Friday, 27 June 2003 19:13:53 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

 The government is testing an airport scanner that reveals, well, pretty much everything. The image that screeners see is basically you, naked, under your clothes. Along with whatever weapons of mass destruction you happen to be concealing.
[Wired News]

Everybody remember to start doing your sit-ups before you travel…

[Listening to: John Barleycorn - Steeleye Span - Spanning the Years(04:49)]
Friday, 27 June 2003 16:49:13 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 19 June 2003

Chris Goldfarb has some pointers/caveats about upgrading your build process from VS.NET 2002 -> 2003.   I’d add some additional things to watch out for if you are using a build script that doesn’t go through VS.NET.  We’re using NAnt to do our builds, and it uses the underlying .NET SDK compilers without regard to anything in the VS project files.  This makes for an even weirder upgrade, since you have to update your NAnt build file to reflect any changes required to build under 1.1 (and there are likely to be some; it took me most of a day to iron out all the issues) completely outside the context of VS.NET.  

The end result was that we had a full build working under 1.1 long before we had updated all our project files to VS.NET 2003.  This brings up some interesting problems when it comes to dependencies.  We have a fairly complex system with dozens of assemblies, and many of the project files reference assemblies from the build directory.  If your build directory is suddenly full of assemblies compiled against 1.1 and you still have 1.0 projects, chaos ensues.  Altogether it took the team 2-3 days to iron out all the issues and transition fully to 1.1.   As a side benefit, between the upgrade to 1.1 and the move to the latest version of NAnt (0.8.2), our build now takes about half the time it did before, using essentially the same build script.  At its worst the old build took around 30 minutes, and 15 is still much nicer.

I guess the bottom line either way (and I think Chris reached the same conclusion) is that upgrading to 1.1 is not something you can do piecemeal, and you really have to tackle it all at once.  Embrace the pain and get it over with.

[Listening to: Man of Constant Sorrow - Dan Tyminski - O Brother, Where Art Thou?(03:10)]
Thursday, 19 June 2003 13:50:20 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 18 June 2003

Scott Hanselman writes:

Schema Versioning: Changing a namespace is not versioning, it is new type creation. [meta-douglasp]

Ok...I can see that point of view...then does versioning (as we hope to know it) simply not exist in the world of Schema?

I would say that versioning does exist in the world of Schema, but you do have to work for it.  It is certainly true that there is no standard way of handling schema versioning.  There are ways to deal with it yourself, but they do require some forethought. 

You can add version attributes to the elements in your schema, and leave them open using xsd:any or xsd:string, etc.  Doug Purdy had some good suggestions in his TechEd presentation (WEB400: Loose Coupling and Serialization Patterns).  The bottom line is that you have to leave yourself an out in the schema and add a version attribute, and you have to do both of those things up front, in the first version.  By their very nature these aren’t things you can start doing in version 2 when you discover that you need to add something.  That’s the biggest hurdle right there.  You have to anticipate version 2 while you’re writing version 1.  Granted, it’s usually a safe assumption that things will change, and it’s not too much extra work to build in the flexibility, but it does require some work and some additional planning.
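
To make that concrete, here’s a rough sketch of what the “leave yourself an out” pattern looks like from the .NET side, using XmlSerializer.  The type and element names are mine, not from Doug’s talk; the important bits are the version attribute and the xsd:any-style catch-all fields:

```csharp
using System.Xml;
using System.Xml.Serialization;

[XmlRoot("Order", Namespace = "urn:example:orders")]
public class Order
{
    [XmlAttribute("version")]
    public string Version = "1.0";

    [XmlElement("CustomerId")]
    public string CustomerId;

    // Anything a later version adds lands here instead of breaking deserialization.
    [XmlAnyElement]
    public XmlElement[] ExtensionData;

    [XmlAnyAttribute]
    public XmlAttribute[] ExtensionAttributes;
}
```

The payoff is that a version 1 client deserializing a version 2 document just piles the unrecognized elements into ExtensionData (and can round-trip them) instead of blowing up.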

Wednesday, 18 June 2003 14:29:49 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 13 June 2003

Scott Hanselman mentioned this site at the TechEd bloggers meeting. I'd heard of it, but hadn't actually checked it out. It's at http://www.pepysdiary.com/

What an interesting way to approach a historical document. You get to see each day in a man's life revealed as if he were writing about it right now, day by day like any other blog, only you're reading the life of a 17th century Englishman instead of a modern internet denizen. The text is heavily annotated with commentary, references to the dramatis personae, and even links to an English mapping site so you can bring up maps of the places described. The Internet has generated some new and interesting ways of examining historical documents, but I think this is the most interesting one I've seen in some time.

For all you aggregators, it's not obvious where to find their RSS links, but you can find them here.

[Listening to: Local God - Romeo + Juliet Soundtrack - Romeo + Juliet (03:56)]
Friday, 13 June 2003 17:13:50 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 12 June 2003

What a great book! Samurai William: the Englishman who opened the East by Giles Milton is all about the Englishman (William Adams) who was the real-life model for the main character in Clavell's Shogun.

Not only is it a very interesting subject (I had no idea that Europeans were so active in Japan so early) but Milton is a very readable author who knows how to combine hard-core historical research with the kind of entertaining anecdotal history that makes it fun to read. I've had a long-standing interest in Japan, having spent a total of about 7 months there since high school, and I've read a lot of early Japanese history, but most of those books tend to overlook the European influence during that period. Milton has compiled a great deal of information about not only Adams's life in Japan, but what was going on with Europeans in the rest of Asia at the time. It ties in with his earlier work "Nathaniel's Nutmeg" (also a great read, about the spice trade) in several places.

I also have a copy of Milton's "Big Chief Elizabeth" about the early English settlers of North America, but haven't had a chance to read it yet.

[Listening to: What Are Ya' At? - Great Big Sea - Great Big Sea (03:12)]
Thursday, 12 June 2003 17:02:05 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I'm in the middle of defining an interface between a set of hardware devices and a central server farm using XSD and WSDL. The semantics of the WSDL interface I understand pretty well, but what I'm wrestling with now is the schema of the data being carted back and forth. I want to use the same endpoint on the server side to receive data from several different kinds of monitors, which understandably have quite different data reporting needs. The schema I'm working with right now (which I didn't write, but consulted on) defines a set of basic structures common to all monitors, and then a separate schema for each type of monitor that extends those base types.

The issue I have with that is how to structure the datatypes in the WSDL. One possibility is to make the datatype in the WSDL the common base type and just expect to receive the derived types. Another is to make the type in the WSDL xsd:any and just figure it out at the application level. Still another is to change the XSD so that the base types leave open placeholders (xsd:any) and the concrete monitor types don't extend the schema, they just add their own extra data into the base type in a different namespace.

Right now I'm leaning towards defining the type in the WSDL as xsd:any and just worrying about it at the application level. The disadvantage is that you can't get the full schema information from the WSDL, but since this is essentially a closed system I'm not sure how much that matters. Hmmmmm.
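
If I do go the xsd:any route, the .NET side would look something like the sketch below; an XmlElement parameter on an ASMX web method shows up as xsd:any in the generated WSDL, and the dispatching happens in code. The monitor names here are invented for illustration:

```csharp
using System;
using System.Web.Services;
using System.Xml;

public class MonitorGateway : WebService
{
    // XmlElement maps to xsd:any in the WSDL, so any monitor type can post here.
    [WebMethod]
    public void SubmitReport(XmlElement report)
    {
        // Figure out the payload at the application level, by namespace.
        switch (report.NamespaceURI)
        {
            case "urn:monitors:flow":
                HandleFlowReport(report);
                break;
            case "urn:monitors:pressure":
                HandlePressureReport(report);
                break;
            default:
                throw new ArgumentException("Unknown monitor type: " + report.NamespaceURI);
        }
    }

    private void HandleFlowReport(XmlElement report) { /* deserialize into the flow-specific type */ }
    private void HandlePressureReport(XmlElement report) { /* ... */ }
}
```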

I saw some good presentations at TechEd involving the benefits of loosely coupled schemas (Doug Purdy's was particularly interesting) so I understand what the options are, but that doesn't necessarily make it easier. I suppose that's what we get paid for.

[Listening to: Someday Soon - Great Big Sea - Great Big Sea (04:18)]
Thursday, 12 June 2003 13:30:52 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 10 June 2003

Having been home a few days now (and spending all day yesterday lying on the floor with my back out) I've had a chance to ponder this year's event. 

Overall, I'd say that I learned a lot, but that the conference as a whole was less exciting than in years past.  I think that's mostly due to the vagaries of the product cycle, with Everett fully “here” and things like Yukon and Jupiter a little too far out.  I spent most of my time on all things Web Services, which was quite interesting.  The biggest thing I noticed was the new mentality of “it's the WSDL, stupid”.  In times past the message has been to write your code and get WSDL for free; now the message seems to be that if you want your web services to be compatible with non-.NET platforms, it's worth writing the WSDL / XSD first, then generating code from there.

The one possible dissenter was Clemens Vasters, who demonstrated some very ingenious ways of starting with the code, but tweaking the WSDL to match what you really want, and not what you get from the framework for free.  Overall I think the best sessions I went to were his AOP and WS internals talks.   

The party was also pretty good this year.  Smashmouth rocked, and the overall atmosphere of the event was pretty fun, even if maybe not quite as fun as TechEd 2000's party at Universal Studios Florida.   I must say that watching people climb an inflatable rock after too many margaritas was worth the price of admission this year.

Tuesday, 10 June 2003 14:59:25 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 06 June 2003

 Humph.  I’d have to say this presentation was disappointing.  It was on the development of the London traffic congestion charging system, which is not only highly controversial, but arguably the largest deployed .NET application.  I was hoping to get some technical details about how they got it to scale, but instead it was pretty much just marketecture, which I haven’t seen a lot of here this year.  The main focus was around the fact that .NET beat out J2EE for this job, and that it was done quickly and at comparatively low cost.  OK, I get that about .NET.  The one interesting thing in that space was that Mastek, the India-based development shop that did the implementation, actually did two separate test projects during the RFI for the project, one in J2EE, the other in .NET (v1.0, beta 1).  It’s interesting to see the results of one company seriously trying to build the same application on both platforms, rather than the competitive Pet Store type comparison.  Their conclusion was that they could do the .NET implementation for 30% less. 

Unfortunately the presentation was almost totally devoid of technical details.  For a 300-level presentation for developers, I would expect more than two slides on the implementation.  The only interesting technical detail was that they used the same set of business objects for both the intranet and extranet sites, but the extranet used a wrapper that hid the privileged methods, and a firewall between the presentation and business tiers limited the public site’s access to only the wrapper class.   
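
The wrapper idea is simple enough to sketch.  Everything below is hypothetical (my names, not theirs), but it shows the shape of it: one set of business objects, with a restricted facade as the only thing the public tier can reach:

```csharp
// Full business object, used directly by the intranet site.
public class AccountManager
{
    public decimal GetBalance(string accountId) { /* ... */ return 0m; }
    public void RecordPayment(string accountId, decimal amount) { /* ... */ }

    // Privileged operation the public site must never see.
    public void WaiveCharge(string accountId) { /* ... */ }
}

// Thin wrapper deployed for the extranet tier, behind the firewall rule that
// only lets the public presentation layer reach this class.
public class PublicAccountFacade
{
    private AccountManager inner = new AccountManager();

    public decimal GetBalance(string accountId)
    {
        return inner.GetBalance(accountId);
    }

    public void RecordPayment(string accountId, decimal amount)
    {
        inner.RecordPayment(accountId, amount);
    }
}
```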

Friday, 06 June 2003 11:48:39 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

While the presentation itself was a bit slow, the content was very interesting.  It was about how Microsoft, Unisys and others built a highly scalable system for use by law enforcement and the public health system to send and receive alerts such as Amber Alerts and public health warnings.  The biggest factor influencing the design was the fact that they control essentially the entire system, from the back end datacenter all the way to the workstation setups.  This allowed them to take advantage of a lot of features not normally available to “Internet” applications.  For starters, they chose a WinForms app on the client side to provide the best performance and richest user experience.  They use .NET remoting (over TCP using the binary formatter) to get the best performance over the network, which also allows them to hook up bidirectional eventing so that new bulletins can be distributed rapidly without polling.  The client app uses MSDE to cache frequently used data like addresses and geographical data.  Each local client can be configured to communicate either with a local hub system or with the back end datacenter. 
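
For reference, the server-side plumbing for that kind of remoting setup is pretty small under .NET 1.1.  This is a generic sketch with invented type names, not their code; the TypeFilterLevel bit is what lets bidirectional eventing (client callbacks) work under 1.1’s tightened remoting defaults:

```csharp
using System.Collections;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
using System.Runtime.Serialization.Formatters;

public class AlertServerHost
{
    public static void Main()
    {
        // Binary formatter over TCP; allow full type filtering so client-side
        // event subscriptions (callbacks) can flow back across the channel.
        BinaryServerFormatterSinkProvider provider = new BinaryServerFormatterSinkProvider();
        provider.TypeFilterLevel = TypeFilterLevel.Full;

        IDictionary props = new Hashtable();
        props["port"] = 8085;

        ChannelServices.RegisterChannel(new TcpChannel(props, null, provider));

        // BulletinService is a hypothetical MarshalByRefObject that raises an
        // event whenever a new bulletin is published.
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(BulletinService), "bulletins", WellKnownObjectMode.Singleton);

        System.Console.ReadLine(); // keep the host alive
    }
}

public class BulletinService : System.MarshalByRefObject
{
    public event System.EventHandler BulletinPublished;

    public void Publish(string text)
    {
        if (BulletinPublished != null)
            BulletinPublished(this, System.EventArgs.Empty);
    }
}
```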

Since they had to accommodate 25,000 installed workstations and an address book projected at 100M users, it makes sense to take advantage of some heavyweight code on the client side to get the best scalability.  Overall it was a good example of how to build a really large system, although it depends so heavily on being able to control the whole system that it may not be applicable in very many cases. 

I’m looking forward to comparing and contrasting with DEV372 (coming up this morning), which is a case study of the London traffic billing system.  More on that later.


Friday, 06 June 2003 10:34:03 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 05 June 2003

Hallelujah, brother!   I have seen the light of AOP!  I read last year’s MSDN article on AOP and thought that it was a dang good idea, but the implementation was a little on the questionable side.  It took advantage of undocumented goo in the .NET remoting infrastructure to do interception.  Cool, but not necessarily something I’d go into production with.  And it doesn’t work on ServicedComponents.  Clemens Vasters showed how to do AOP in about 4 different contexts, including ASP.NET WebServices, EnterpriseServices, WinForms, and the remoting interception method.  Best of all, he showed how to use the same metadata in each context.  Very cool stuff, and all the samples will be posted to his blog.

I was particularly impressed by two things:  the way he explained why AOP is interesting and useful using real world examples, and the AOP examples in ASP.NET web services, which is a great way to abstract the use of SOAP extensions.  Check out his samples for more details. 
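
The web services flavor boils down to a SoapExtension plus a matching SoapExtensionAttribute, which is stock ASMX machinery.  The sketch below is mine (a trivial timing aspect), not one of Clemens’s samples, but it shows where the interception happens:

```csharp
using System;
using System.Web.Services.Protocols;

// Apply the aspect declaratively: put [TraceAspect] above a [WebMethod].
[AttributeUsage(AttributeTargets.Method)]
public class TraceAspectAttribute : SoapExtensionAttribute
{
    private int priority;
    public override Type ExtensionType { get { return typeof(TraceAspectExtension); } }
    public override int Priority { get { return priority; } set { priority = value; } }
}

public class TraceAspectExtension : SoapExtension
{
    private DateTime start;

    public override object GetInitializer(Type serviceType) { return null; }
    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute) { return null; }
    public override void Initialize(object initializer) { }

    // Called at each stage of the SOAP pipeline; this is the interception point.
    public override void ProcessMessage(SoapMessage message)
    {
        switch (message.Stage)
        {
            case SoapMessageStage.BeforeDeserialize:
                start = DateTime.Now;
                break;
            case SoapMessageStage.AfterSerialize:
                System.Diagnostics.Trace.WriteLine(
                    message.MethodInfo.Name + " took " +
                    (DateTime.Now - start).TotalMilliseconds + " ms");
                break;
        }
    }
}
```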


Thursday, 05 June 2003 15:17:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

The good news is that just about everything I don’t like about implementing WS-Security with WSE v1 has been addressed in v2.  Because it supports WS-Policy, you can now set up declarative rules enforcing security policy without the class developers having to do any coding.  With v1, every web method must remember to check for the existence of a UsernameToken (if that’s what you are using), and each component created on the client side must remember to add the right token(s) to the SoapContext.  While not insurmountable, it’s still a pain.  With version 2 you can set up a policy that is used on both server and client, and the individual client and server components can blissfully go about their business, with the security just taken care of by WSE.  That makes it much easier to implement, and more secure, since you aren’t depending on developers to remember to do the right thing; and since they don’t have to do any additional work, it’s easier to convince them that WS-Security is a good idea (which it is).
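
To show the difference, this is roughly the kind of boilerplate v1 pushes into every web method.  I’m writing it from memory of the WSE 1.0 object model, so take the exact class and property names (HttpSoapContext, Security.Tokens, UsernameToken) with a grain of salt:

```csharp
using System.Web.Services;
using Microsoft.Web.Services;            // WSE 1.0 -- names here are from memory
using Microsoft.Web.Services.Security;

public class OrderService : WebService
{
    [WebMethod]
    public void SubmitOrder(string orderXml)
    {
        // With WSE v1, every method has to remember to do this itself.
        SoapContext context = HttpSoapContext.RequestContext;
        bool authenticated = false;
        foreach (SecurityToken token in context.Security.Tokens)
        {
            if (token is UsernameToken)
            {
                authenticated = true;
                break;
            }
        }
        if (!authenticated)
            throw new System.Security.SecurityException("UsernameToken required");

        // ... real work ...
    }
}
```

With v2, that whole block disappears and the requirement lives in a policy file that WSE enforces on both ends.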

The new support for WS-Trust, and specifically for WS-SecureConversation, is a big boon to performance.  It allows you to do one heavyweight authentication, using a UsernameToken or an X.509 certificate, etc., then have the server generate a new lightweight token for use during the rest of the conversation.  Much less of a burden on server resources.

There is also new support for non-RPC programming models, so that you can use one-way or multicast messages and more.  And you don’t have to host your web services in IIS, which allows for lots of new deployment possibilities. 

The only drawback to this session was that a great deal of the material overlapped with the content of  WEB401 (on the new security support). 


Thursday, 05 June 2003 11:56:47 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 04 June 2003

Once again a triumphant performance from Scott Hanselman.  He did a great job of explaining what makes WSDL interesting, and why we should, if not necessarily love it then at least understand it.   In keeping with the message I’ve been hearing the rest of the week, Scott advocated starting with XSD and WSDL, and generating clients and servers from there, rather than developing WebServices “code first”. 

One of the coolest things he demoed was SoapScope, from MindReef, which allows you to sniff SOAP at the workgroup level rather than on each individual developer’s box.  Nice interface, and very handy for debugging distributed problems.

I also appreciated Scott’s Matrix references.  Ignore the WSDL and live in the Matrix, or learn to see the WSDL and, like Neo, gain super powers. :)

Wednesday, 04 June 2003 18:46:17 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I’ve seen a wide range of TechEd presentations on Web Services now by some of the luminaries (Don Box, Keith Ballinger, Doug Purdy, et al.), and I find it interesting that the story for how to build .NET web services has changed.  Maybe that’s been true for a while and I never noticed, but either way, now I know.  When .NET first appeared on the scene, the story was that all you had to do was derive from WebService, mark your methods with [WebMethod()], and all would be well. 

This week I’ve consistently been hearing that “code-first” development is out for web services, and that we should all learn to love XSD and WSDL.  So rather than coding in C#, adding some attributes, and letting the framework define our WSDL, we should start with XSD and WSDL definitions of our service, and use wsdl.exe and xsd.exe to create the .NET representations.  Very interesting. 
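
Concretely, the contract-first workflow is: write the XSD, write the WSDL, then let xsd.exe and wsdl.exe (the /server switch, if memory serves) generate the .NET types and an abstract service class you fill in.  The sketch below is roughly the shape of that generated server side, with invented names and most of the attribute noise trimmed off:

```csharp
using System.Web.Services;
using System.Web.Services.Protocols;

// Roughly what contract-first generation hands you: an abstract class whose
// signatures are dictated by the WSDL, not the other way around.
[WebServiceBinding(Name = "OrderServiceSoap", Namespace = "urn:example:orders")]
public abstract class OrderServiceBase : WebService
{
    [WebMethod]
    [SoapDocumentMethod("urn:example:orders/SubmitOrder")]
    public abstract SubmitOrderResponse SubmitOrder(SubmitOrderRequest request);
}

// You derive from the stub and implement the contract; the WSDL stays the master.
public class OrderService : OrderServiceBase
{
    public override SubmitOrderResponse SubmitOrder(SubmitOrderRequest request)
    {
        SubmitOrderResponse response = new SubmitOrderResponse();
        // ... application logic ...
        return response;
    }
}

// These would normally come out of xsd.exe from the schema.
public class SubmitOrderRequest  { public string OrderId; }
public class SubmitOrderResponse { public string Status; }
```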

Furthermore, we should define our WSDL specifically with versioning and interop in mind.  Some tips and tricks include using the WS-I guidelines when defining our WSDL, including version attributes in our SOAP schemas from the get-go, and using loosely typed parameters (xsd:string) or loosely typed schemas (xsd:any) to leave “holes” into which we can pour future data structures without breaking existing clients. 

Since I’m about to embark on a fairly major interop project myself, I’m glad to have heard the message, and I say hooray.  I think it makes much more sense to stick to industry-standard contracts that we can all come to agreement on and work the code backwards from there, rather than tying ourselves to the framework’s notion of what WSDL should look like.  Ultimately it’s the only way that cross-platform interop is going to work.  The “downside”, if it can be called such, is that we have to really work to understand WSDL and XSD (or at least the WS-I specified compatible subsets thereof) in order to design web services correctly.  However, anyone who was writing any kind of web service without a firm grounding in WSDL and XSD was heading for trouble anyway.  I’m looking forward to Scott Hanselman’s “Learn to Love WSDL” coming up after lunch today.


Wednesday, 04 June 2003 14:55:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Yesterday I saw a couple of presentations here at TechEd on the Enterprise Instrumentation Framework.  Currently I use log4net for all my logging needs, and I've been doing some compare and contrast between the two systems.


The advantage to using either of these frameworks is that they are loosely coupled.  There is no design-time mapping between event sources and event sinks.  In log4net, all the developer needs to do is categorize the log message as Debug, Info, Warn, Error or Fatal.  At runtime, log4net is configured using an XML document.  You can define multiple event sinks, such as text files, the Event Log, ADO data sources, OutputDebugString, etc.  It's easy to create plugins to support new sinks as well.  There's even a remoting sink so that you can log to a different machine.  In the configuration file, you can send events to different sinks based on the logging level (debug, info, etc.) or on the namespace of the class the logging is done from. 
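
For reference, the developer-facing side of log4net is about this small (a rough sketch; note the configurator class was called DOMConfigurator in the builds of that era and has since been renamed XmlConfigurator):

```csharp
using System;
using log4net;
using log4net.Config;

public class OrderProcessor
{
    // One logger per class; its name (the type's full name) is what the
    // config file uses to route events to different appenders/sinks.
    private static readonly ILog log = LogManager.GetLogger(typeof(OrderProcessor));

    public static void Main()
    {
        // Read appenders, levels and filters from the XML config at runtime.
        DOMConfigurator.Configure();

        log.Debug("starting order run");
        try
        {
            // ... do the work ...
            log.Info("order run complete");
        }
        catch (Exception ex)
        {
            log.Error("order run failed", ex);
        }
    }
}
```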


In EIF, you can similarly define multiple sources and sinks, and map them at run time.  One big difference is that in EIF, events are strongly typed.  You can create an event schema that is used for logging data with distinct types, rather than just as strings.  In text logging that's not so important, but in the Event Log, and especially in WMI (which EIF supports), you can take advantage of the strong typing when you read and sort the events.  However, that means that you have to define the schema, which is work.  One drawback is that out of the box, EIF supports many fewer event sinks, in fact only three:  WMI, the Event Log, and Windows Event Tracing (on Win2K and up).  As with log4net, EIF allows you to create your own event sinks.  There's currently no UI for reading Windows Event Tracing logs, but they do provide an API.  Furthermore, the configuration for EIF is rather more complicated.


The built-in support for WMI in EIF is pretty useful, since it abstracts out all the System.Management stuff.  This support makes it easy to plug into industry-standard management tools like HP OpenView.  And you even get perf counters for free when you define an EIF event source.  On the other hand, the WMI support makes installation a bit more high-ceremony, since you have to include an installer class in your app to register the WMI schema. 

Possibly the coolest feature in EIF is the fact that they can push correlation data onto the .NET call context, which travels across remoting boundaries.  That means that they can correlate a set of events for a particular operation across process and server boundaries.  They demoed a UI that would then create a tree of operations with nested events arranged in order.  Pretty cool stuff.
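
The plumbing that makes that possible is the .NET call context: anything that implements ILogicalThreadAffinative flows across remoting calls automatically.  The sketch below shows that raw mechanism with my own names; it's not EIF's actual API, which wraps all of this up for you:

```csharp
using System;
using System.Runtime.Remoting.Messaging;

// A correlation ticket that flows across remoting boundaries because it is
// serializable and marked ILogicalThreadAffinative.
[Serializable]
public class CorrelationTicket : ILogicalThreadAffinative
{
    public readonly Guid ActivityId = Guid.NewGuid();
}

public class Correlation
{
    public static void BeginActivity()
    {
        // Stash the ticket; it travels with every remoting call made from here on.
        CallContext.SetData("ActivityTicket", new CorrelationTicket());
    }

    public static Guid CurrentActivity()
    {
        // On the far side of a remoting call this returns the same id the caller set,
        // so events on both machines can be tagged with one activity id.
        CorrelationTicket ticket = (CorrelationTicket)CallContext.GetData("ActivityTicket");
        return ticket == null ? Guid.Empty : ticket.ActivityId;
    }
}
```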


So, EIF has strongly typed events, WMI support, perf counters for free, and cross-process event correlation.  Log4net is much simpler to implement, requires less coding overhead, and supports a wider range of event sinks out of the box.  It's a tough choice.  The WMI support might swing things toward EIF in the long run, especially if your operations outfit uses WMI-based management tools.


Wednesday, 04 June 2003 11:25:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 
# Tuesday, 03 June 2003

I went to a get-together here at TechEd last night of bloggers and “aggregators” (sounds much better than “lurkers”) who are here at the show, organized by Drew Robbins, who runs TechEdBloggers.net.  It was really interesting to put some faces with the names.  There were also some interesting discussions on blogs and blogging.  Many seemed comfortable with the notion I put down a while ago (http://erablog.net/filters/12142.post) of communal validation.  Someone dubbed it a “web of trust”, which I think is pretty appropriate.  The question also came up: “So who plays the role of VeriSign in this web of trust?”  I think the interesting part about blogging, at least in the technical community, is that that trust seems to be based on personal relationships rather than on an outside “authority”.  Unlike in the UseNet world, where it’s easy to forgo personal relationships in favor of the collective, blogging seems to foster personal relationships.  That’s a big change from the general anonymity of the Web.  I’ll be interested to see where it goes from here. 

Tuesday, 03 June 2003 10:43:40 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Once upon a time I did the happy dance on stage (in San Francisco: “Anatomy of an eCommerce Web Site”) because I was so excited about the new BizTalk orchestration designer in BTS 2000.  What a great idea, to be able to draw business processes in Visio, then use the drawing to create code that knows how to manage a long-running transactional business process.  I had been preaching the gospel of the CommerceServer pipeline as a way of separating business process from code, but BizTalk orchestration was even better.

Little did I know…  Yesterday (I’m here at TechEd in Dallas) I saw a demo of BTS 2004.  Wow.  Microsoft has really made great strides in advancing the orchestration interface.  Instead of Visio, it’s now a Visual Studio .NET plugin, and the interface looks really good.  It includes hints to make sure you get everything set up correctly, and full IntelliSense to speed things along.  I was very impressed with the smoothness of the interface.  Not only that, but now you can expose your orchestration as an XML Web Service, and you can call Web Services from inside your schedule. 

I’ve always thought that BTS has gotten short shrift in the developer community.  I think it’s because it tends to be pigeon-holed as something only useful in big B2B projects.  I can think of lots of ways in which orchestration could be very useful outside the realm of B2B.  I guess part of it has to do with the pricing.  While I can think of lots of ways to use BTS in non-B2B scenarios, they aren’t really compelling enough to convince me to spend that much money.  Ah well.   

Tuesday, 03 June 2003 10:31:47 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  |