# Thursday, June 12, 2003

I'm in the middle of defining an interface between a set of hardware devices and a central server farm using XSD and WSDL. The semantics of the WSDL interface I understand pretty well, but what I'm wrestling with now is the schema of the data being carted back and forth. I want to use the same endpoint on the server side to receive data from several different kinds of monitors, which understandably have quite different data reporting needs. The schema I'm working with right now (which I didn't write, but consulted on) defines a set of basic structures common to all monitors, and then a separate schema for each type of monitor that extends those base types.

The issue I have with that is how to structure the datatypes in the WSDL. If I make the datatype in the WSDL the common type and just expect to get the derived types, that's one possibility. Another is to make the type in the WSDL xsd:any and just figure it out at the application level. Still another is to change the XSD so that the base types leave open placeholders (xsd:any), and the concrete monitor types don't extend the base schema but instead add their own extra data into the base type in a different namespace.

Right now I'm leaning towards defining the type in the WSDL as xsd:any and just worrying about it at the application level. The disadvantage is that you can't get the full schema information from the WSDL, but since this is essentially a closed system I'm not sure how much that matters. Hmmmmm.
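
To make that option concrete, here's a minimal sketch of what "xsd:any at the endpoint, sort it out in the application" might look like in ASMX, where a parameter typed as XmlElement surfaces in the generated WSDL as open content. All the type names and namespace URIs below are hypothetical, not the real monitor schemas:

```csharp
// A sketch only -- class names and namespace URIs are made up for
// illustration; the real monitor schemas aren't shown here.
using System.Web.Services;
using System.Web.Services.Protocols;
using System.Xml;

public class MonitorGateway : WebService
{
    // An XmlElement parameter maps to an open (xsd:any-style) content
    // model in the WSDL that ASMX generates for this method.
    [WebMethod]
    public void SubmitReading(XmlElement reading)
    {
        // Dispatch on the namespace each kind of monitor reports in.
        switch (reading.NamespaceURI)
        {
            case "urn:monitors:temperature":
                // hand off to the temperature monitor's deserializer
                break;
            case "urn:monitors:pressure":
                // hand off to the pressure monitor's deserializer
                break;
            default:
                throw new SoapException("Unrecognized monitor type",
                                        SoapException.ClientFaultCode);
        }
    }
}
```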

I saw some good presentations at TechEd on the benefits of loosely coupled schemas (Doug Purdy's was particularly interesting), so I understand what the options are, but that doesn't necessarily make it easier. I suppose that's what we get paid for.

[Listening to: Someday Soon - Great Big Sea - Great Big Sea (04:18)]
Thursday, June 12, 2003 1:30:52 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, June 10, 2003

Having been home a few days now (and having spent all day yesterday lying on the floor with my back out), I've had a chance to ponder this year's event.

Overall, I'd say that I learned a lot, but the conference itself was less exciting than in years past.  I think that's mostly due to the vagaries of the product cycle, with Everett fully “here” and things like Yukon and Jupiter a little too far out.  I spent most of my time on all things Web Services, which was quite interesting.  The biggest thing I noticed was the new mentality of “it's the WSDL, stupid”.  In times past the message was to write your code and get WSDL for free; now the message seems to be that if you want your web services to be compatible with non-.NET platforms, it's worth writing the WSDL / XSD first, then generating code from there.

The one possible dissenter was Clemens Vasters, who demonstrated some very ingenious ways of starting with the code, but tweaking the WSDL to match what you really want, and not what you get from the framework for free.  Overall I think the best sessions I went to were his AOP and WS internals talks.   

The party was also pretty good this year.  Smash Mouth rocked, and the overall atmosphere of the event was pretty fun, even if maybe not quite as fun as TechEd 2000's party at Universal Studios Florida.   I must say that watching people climb an inflatable rock after too many margaritas was worth the price of admission this year.

Tuesday, June 10, 2003 2:59:25 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, June 06, 2003

Humph.  I’d have to say this presentation was disappointing.  It was on the development of the London traffic congestion charging system, which is not only highly controversial, but arguably the largest deployed .NET application.  I was hoping to get some technical details about how they got it to scale, but instead it was pretty much just marketecture, which I haven’t seen a lot of here this year.  The main focus was on the fact that .NET beat out J2EE for this job, and that it was done quickly and at comparatively low cost.  OK, I get that about .NET.  The one interesting thing in that space was that Mastek, the India-based development shop that did the implementation, actually did two separate test projects during the RFI for the project, one in J2EE, the other in .NET (v1.0, beta 1).  It’s interesting to see the results of one company seriously trying to build the same application on both platforms, rather than the competitive Pet Store-style comparison.  Their conclusion was that they could do the .NET implementation for 30% less.

Unfortunately the presentation was almost totally devoid of technical details.  For a 300-level presentation for developers, I would expect more than two slides on the implementation.  The only interesting technical detail was that they used the same set of business objects for both intra- and extranet sites, but the extranet used a wrapper that hid the privileged methods, and a firewall was used between the presentation and business tiers to limit the public site’s access to only the wrapper class.
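
For what it’s worth, the wrapper idea is easy to picture; here’s a rough sketch with entirely made-up names (nothing from the actual presentation):

```csharp
// All names hypothetical -- just illustrating the "safe subset" wrapper.
public class AccountManager            // shared business object (intranet)
{
    public decimal GetBalance(string accountId) { /* ... */ return 0m; }
    public void WaiveCharge(string accountId)   { /* privileged */ }
}

// The extranet site is only allowed (by the firewall) to reach this type,
// which exposes the unprivileged operations and nothing else.
public class PublicAccountManager
{
    private AccountManager inner = new AccountManager();

    public decimal GetBalance(string accountId)
    {
        return inner.GetBalance(accountId);
    }
    // No WaiveCharge here.
}
```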

Friday, June 06, 2003 11:48:39 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

While the presentation itself was a bit slow, the content was very interesting.  It was about how Microsoft, Unisys and others built a highly scalable system for use by law enforcement and the public health system to send and receive alerts such as Amber Alerts and public health warnings.  The biggest factor influencing the design was the fact that they control essentially the entire system, from the back end datacenter all the way to the workstation setups.  This allowed them to take advantage of a lot of features not normally available to “Internet” applications.  For starters, they chose a WinForms app on the client side to provide the best performance and richest user experience.  They use .NET remoting (over TCP using the binary formatter) to get the best performance over the network, which also allows them to hook up bidirectional eventing so that new bulletins can be distributed rapidly without polling.  The client app uses MSDE to cache frequently used data like addresses and geographical data.  Each local client can be configured to either communicate with a local hub system, or with the back end datacenter.
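
For the curious, the remoting setup they described boils down to something like this minimal sketch (type names, port, and URI are my own inventions, not theirs):

```csharp
// Sketch of a TCP/binary remoting host -- names and port are illustrative.
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

public class AlertService : MarshalByRefObject
{
    public void PublishBulletin(string text)
    {
        // ... raise an event back to subscribed clients ...
    }
}

public class ServerHost
{
    public static void Main()
    {
        // TcpChannel defaults to the binary formatter, which is where the
        // performance win over HTTP/SOAP comes from.
        ChannelServices.RegisterChannel(new TcpChannel(8085));
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(AlertService), "AlertService.rem",
            WellKnownObjectMode.Singleton);
        Console.WriteLine("Listening; press Enter to quit.");
        Console.ReadLine();
    }
}
```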

Since they had to accommodate 25,000 installed workstations and an address book projected at 100M users, it makes sense to take advantage of some heavyweight code on the client side to get the best scalability.  Overall it was a good example of how to build a really large system, although it depends so heavily on being able to control the whole system that it may not be applicable in very many cases.

I’m looking forward to comparing and contrasting with DEV372 (coming up this morning), which is a case study of the London traffic billing system.  More on that later.


Friday, June 06, 2003 10:34:03 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, June 05, 2003

Hallelujah brother!   I have seen the light of AOP!  I read last year’s MSDN article on AOP and thought that it was a dang good idea, but the implementation was a little on the questionable side.  It took advantage of undocumented goo in the .NET remoting infrastructure to do interception.  Cool, but not necessarily something I’d go into production with.  And it doesn’t work on ServicedComponents.  Clemens Vasters showed how to do AOP in about 4 different contexts, including ASP.NET WebServices, EnterpriseServices, WinForms, and the remoting interception method.  Best of all, he showed how to use the same metadata in each context.  Very cool stuff, and all the samples will be posted to his blog.

I was particularly impressed by two things:  the way he explained why AOP is interesting and useful using real world examples, and the AOP examples in ASP.NET web services, which is a great way to abstract the use of SOAP extensions.  Check out his samples for more details. 


Thursday, June 05, 2003 3:17:19 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

The good news is that just about everything I don’t like about implementing WS-Security with WSE v1 has been addressed in v2.  Because it supports WS-Policy, you can now set up declarative rules enforcing security policy without the class developers having to do any coding.  With v1, every web method must remember to check for the existence of a UsernameToken (if that’s what you are using), and each component created on the client side must remember to add the right token(s) to the SoapContext.  While not insurmountable, it’s still a pain.  With version 2 you can set up a policy that is used on both server and client, and the individual client and server components can blissfully go about their business, with the security just taken care of by WSE.  That makes it easier to implement and more secure, since you aren’t depending on developers to remember to do the right thing.  And since they don’t have to do any additional work, it’s easier to convince them that WS-Security is a good idea (which it is).
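
To show what v2 saves you from, here’s the v1-style manual check, reconstructed from memory of the WSE v1 samples; treat the exact namespaces and members as approximate rather than gospel:

```csharp
// WSE v1-style token check, from memory -- namespaces/members approximate.
using System.Web.Services;
using System.Web.Services.Protocols;
using Microsoft.Web.Services;            // WSE v1
using Microsoft.Web.Services.Security;

public class SecureService : WebService
{
    [WebMethod]
    public string DoWork()
    {
        SoapContext ctx = HttpSoapContext.RequestContext;

        // Every single web method has to remember this dance in v1;
        // with v2, a WS-Policy document enforces it declaratively.
        foreach (SecurityToken token in ctx.Security.Tokens)
        {
            if (token is UsernameToken)
                return "hello, " + ((UsernameToken)token).Username;
        }
        throw new SoapException("UsernameToken required",
                                SoapException.ClientFaultCode);
    }
}
```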

The new support for WS-Trust, and specifically WS-SecureConversation, is a big boon to performance.  It allows you to do one heavyweight authentication, using a UsernameToken or an X.509 certificate, etc., then have the server generate a new lightweight token for use during the rest of the conversation.  Much less of a burden on server resources.

There is also new support for non-RPC programming models, so that you can use one-way or multicast messages and more.  And you don’t have to host your web services in IIS, which allows for lots of new deployment possibilities. 

The only drawback to this session was that a great deal of the material overlapped with the content of  WEB401 (on the new security support). 


Thursday, June 05, 2003 11:56:47 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, June 04, 2003

Once again a triumphant performance from Scott Hanselman.  He did a great job of explaining what makes WSDL interesting, and why we should, if not necessarily love it, at least understand it.   In keeping with the message I’ve been hearing the rest of the week, Scott advocated starting with XSD and WSDL and generating clients and servers from there, rather than developing WebServices “code first”.

One of the coolest things he demoed was SoapScope, from Mindreef, which allows you to sniff SOAP at the workgroup level rather than on each individual developer’s box.  Nice interface, and very handy for debugging distributed problems.

I also appreciated Scott’s Matrix references.  Ignore the WSDL and live in the Matrix, or learn to see the WSDL and, like Neo, gain super powers. :)

Wednesday, June 04, 2003 6:46:17 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I’ve seen a wide range of TechEd presentations on Web Services now by some of the luminaries (Don Box, Keith Ballinger, Doug Purdy, et al.), and I find it interesting that the story for how to build .NET web services has changed.  Maybe that’s been true for a while and I never noticed, but either way, now I know.  When .NET first appeared on the scene, the story was that all you had to do was derive from WebService, mark your methods with [WebMethod()], and all would be well.

This week I’ve consistently been hearing that “code-first” development is out for web services, and that we should all learn to love XSD and WSDL.  So rather than coding in C#, adding some attributes, and letting the framework define our WSDL, we should start with XSD and WSDL definitions of our service, and use wsdl.exe and xsd.exe to create the .NET representations.  Very interesting. 
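
For contrast, here’s the familiar code-first shape next to the contract-first workflow (service and file names are invented for the example):

```csharp
// The "code-first" style: write C#, let ASMX emit the WSDL.
// Names here are made up for illustration.
using System.Web.Services;

[WebService(Namespace = "urn:example:orders")]
public class OrderService : WebService
{
    [WebMethod]
    public string GetStatus(int orderId)
    {
        return "shipped";   // the WSDL and XSD fall out of this signature
    }
}

// Contract-first inverts the flow: author Orders.xsd and Orders.wsdl by
// hand, then generate the .NET types from them, roughly:
//
//   xsd.exe  Orders.xsd  /classes    -> serializable message types
//   wsdl.exe Orders.wsdl /server     -> abstract server-side stub to implement
```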

Furthermore, we should define our WSDL specifically with versioning and interop in mind.  Some tips and tricks include using the WS-I guidelines when defining our WSDL, including version attributes in our SOAP schemas from the get-go, and using loosely typed parameters (xsd:string) or loosely typed schemas (xsd:any) to leave “holes” into which we can pour future data structures without breaking existing clients.
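
Here’s a sketch of what those two tips might look like in a serializable .NET type (the type itself is hypothetical): a version attribute from the first release, and an [XmlAnyElement] member acting as the xsd:any “hole”:

```csharp
// Hypothetical message type illustrating the versioning tips above.
using System.Xml;
using System.Xml.Serialization;

[XmlRoot(Namespace = "urn:example:orders")]
public class OrderMessage
{
    [XmlAttribute("version")]
    public string Version = "1.0";      // present from the get-go

    public string CustomerId;

    // Unrecognized elements land here instead of breaking deserialization,
    // so v2 can pour new data structures into the hole without cutting off
    // v1 clients.
    [XmlAnyElement]
    public XmlElement[] Extensions;
}
```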

Since I’m about to embark on a fairly major interop project myself, I’m glad to have heard the message, and I say hooray.  I think it makes much more sense to stick to industry standard contracts that we can all come to agreement on and work the code backwards from there, rather than tying ourselves to the framework’s notion of what WSDL should look like.  Ultimately it’s the only way that cross-platform interop is going to work.  The “downside”, if it can be called such, is that we have to really work to understand WSDL and XSD (or at least the WS-I specified compatible subsets thereof) in order to design web services correctly.  However, anyone who was writing any kind of web service without a firm grounding in WSDL and XSD was heading for trouble anyway.  I’m looking forward to Scott Hanselman’s “Learn to Love WSDL” coming up after lunch today.


Wednesday, June 04, 2003 2:55:19 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Yesterday I saw a couple of presentations here at TechEd on the Enterprise Instrumentation Framework.  Currently I use log4net for all my logging needs, and I've been comparing and contrasting the two systems.


The advantage to using either of these frameworks is that they are loosely coupled.  There is no design-time mapping between event sources and event sinks.  In log4net, all the developer needs to do is categorize the log message as Debug, Info, Warn, Error or Fatal.  At runtime, log4net is configured using an XML document.  You can define multiple event sinks, such as text files, the Event Log, ADO data sources, OutputDebugString, etc.  It's easy to create plugins to support new sinks as well.  There's even a remoting sink so that you can log to a different machine.  In the configuration file, you can send events to different sinks based on the logging level (debug, info, etc.) or on the namespace of the class the logging is done from.
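
A minimal log4net example of that model, for flavor (logger and messages are invented; older log4net builds spell the configurator DOMConfigurator):

```csharp
// Minimal log4net usage -- the developer picks a level, the XML config
// decides where events go. (Older log4net builds use DOMConfigurator.)
using log4net;
using log4net.Config;

public class PumpMonitor
{
    // Keyed to the class, so config can route this namespace's events
    // to particular appenders (log4net's name for sinks).
    private static readonly ILog log =
        LogManager.GetLogger(typeof(PumpMonitor));

    public static void Main()
    {
        XmlConfigurator.Configure();    // load appenders and levels from XML
        log.Debug("starting up");
        log.Error("pump pressure out of range");
    }
}
```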


In EIF, you can similarly define multiple sources and sinks, and map them at run time.  One big difference is that in EIF, events are strongly typed.  You can create an event schema that is used for logging data with distinct types, rather than just as strings.  In text logging that's not so important, but in the Event Log, and especially in WMI (which EIF supports), you can take advantage of the strong typing when you read and/or sort the events.  However, that means that you have to define the schema, which is work.  One drawback is that out of the box, EIF supports many fewer event sinks, in fact only three:  WMI, the Event Log, and Windows Event Tracing (on Win2K and up).  As with log4net, EIF allows you to create your own event sinks.  There's currently no UI for reading Windows Event Tracing logs, but they do provide an API.  Furthermore, the configuration for EIF is rather more complicated.


The built-in support for WMI in EIF is pretty useful, since it abstracts out all the System.Management stuff.  This support makes it easy to support industry standard management tools like HP OpenView.  And you even get perf counters for free when you define an EIF event source.  On the other hand, the WMI support makes installation a bit more high-ceremony, since you have to include an installer class in your app to register the WMI schema.

Possibly the coolest feature in EIF is the fact that they can push correlation data onto the .NET call context, which travels across remoting boundaries.  That means that they can correlate a set of events for a particular operation across process and server boundaries.  They demoed a UI that would then create a tree of operations with nested events arranged in order.  Pretty cool stuff.
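
The mechanism underneath is the logical call context; here's a bare-bones sketch of the idea using plain .NET APIs (my own illustration of the technique, not EIF's actual classes):

```csharp
// Correlation via the logical call context -- my own sketch, not EIF code.
using System;
using System.Runtime.Remoting.Messaging;

[Serializable]
public class CorrelationId : ILogicalThreadAffinative  // flows across remoting
{
    public readonly Guid Value = Guid.NewGuid();
}

public class Tracing
{
    public static void BeginOperation()
    {
        // Anything implementing ILogicalThreadAffinative placed here travels
        // with outgoing remoting calls, so events raised in other processes
        // can be stitched back into one operation tree.
        CallContext.SetData("corrId", new CorrelationId());
    }

    public static Guid CurrentOperation()
    {
        CorrelationId id = (CorrelationId)CallContext.GetData("corrId");
        return (id == null) ? Guid.Empty : id.Value;
    }
}
```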


So: EIF has strongly typed events, WMI support, perf counters for free, and cross-process event correlation.  Log4net is much simpler to implement, requires less coding overhead, and supports a wider range of event sinks out of the box.  It's a tough choice.  The WMI support might swing things toward EIF in the long run, especially if your operations outfit uses WMI management tools.


Wednesday, June 04, 2003 11:25:06 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [3]  | 
# Tuesday, June 03, 2003

I went to a get-together here at TechEd last night of bloggers and “aggregators” (sounds much better than “lurkers”) who are here at the show, organized by Drew Robbins, who runs TechEdBloggers.net.  It was really interesting to put some faces with the names.  There were also some interesting discussions on blogs and blogging.  Many seemed comfortable with the notion I put down a while ago (http://erablog.net/filters/12142.post) of communal validation.  Someone dubbed it a “web of trust”, which I think is pretty appropriate.  The question also came up: “So who plays the role of VeriSign in this web of trust?”  I think the interesting part about blogging, at least in the technical community, is that the trust seems to be based on personal relationships rather than an outside “authority”.  Unlike in the UseNet world, where it’s easy to forgo personal relationships in favor of the collective, blogging seems to foster personal relationships.  That’s a big change from the general anonymity of the Web.  I’ll be interested to see where it goes from here.

Tuesday, June 03, 2003 10:43:40 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  |