# Thursday, June 05, 2003

Hallelujah brother! I have seen the light of AOP! I read last year's MSDN article on AOP and thought it was a dang good idea, but the implementation was a little on the questionable side. It took advantage of undocumented goo in the .NET remoting infrastructure to do interception. Cool, but not necessarily something I'd go into production with. And it doesn't work on ServicedComponents. Clemens Vasters showed how to do AOP in four different contexts: ASP.NET Web Services, Enterprise Services, WinForms, and the remoting interception method. Best of all, he showed how to use the same metadata in each context. Very cool stuff, and all the samples will be posted to his blog.
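For the curious, that "undocumented goo" works because classes deriving from ContextBoundObject get a remoting-style message sink chain even in-process. Here's a minimal sketch of the idea; it's my own illustration of the technique, not one of Clemens's samples:

```csharp
using System;
using System.Runtime.Remoting.Contexts;
using System.Runtime.Remoting.Messaging;

// A context attribute that installs a message sink which sees every call
// into the object. This is the interception hook AOP frameworks exploit.
[AttributeUsage(AttributeTargets.Class)]
public class LogCallsAttribute : ContextAttribute, IContributeObjectSink
{
    public LogCallsAttribute() : base("LogCalls") { }

    public IMessageSink GetObjectSink(MarshalByRefObject obj, IMessageSink nextSink)
    {
        return new LoggingSink(nextSink);
    }
}

public class LoggingSink : IMessageSink
{
    private readonly IMessageSink _next;
    public LoggingSink(IMessageSink next) { _next = next; }
    public IMessageSink NextSink { get { return _next; } }

    public IMessage SyncProcessMessage(IMessage msg)
    {
        IMethodCallMessage call = msg as IMethodCallMessage;
        if (call != null)
            Console.WriteLine("before: " + call.MethodName);
        IMessage ret = _next.SyncProcessMessage(msg);   // invoke the real method
        if (call != null)
            Console.WriteLine("after:  " + call.MethodName);
        return ret;
    }

    public IMessageCtrl AsyncProcessMessage(IMessage msg, IMessageSink replySink)
    {
        return _next.AsyncProcessMessage(msg, replySink);
    }
}

// The target class must derive from ContextBoundObject for this to kick in,
// which is one of the limitations that makes it questionable for production.
[LogCalls]
public class Worker : ContextBoundObject
{
    public void DoWork() { Console.WriteLine("working"); }
}
```

Every call on Worker now flows through LoggingSink, which is exactly the hook you need to bolt aspects like logging or security onto a class without touching its business code.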

I was particularly impressed by two things: the way he explained why AOP is interesting and useful with real-world examples, and the AOP examples in ASP.NET Web Services, which are a great way to abstract away the use of SOAP extensions. Check out his samples for more details.


Thursday, June 05, 2003 3:17:19 PM (Pacific Daylight Time, UTC-07:00)

The good news is that just about everything I don't like about implementing WS-Security with WSE v1 has been addressed in v2. Because it supports WS-Policy, you can now set up declarative rules enforcing security policy without the class developers having to do any coding. With v1, every web method must remember to check for the existence of a UsernameToken (if that's what you are using), and each component created on the client side must remember to add the right token(s) to the SoapContext. While not insurmountable, it's still a pain. With version 2 you can set up a policy that is used on both server and client, and the individual client and server components can blissfully go about their business, with the security just taken care of by WSE. That makes WS-Security much easier to implement, and more secure, since you aren't depending on developers to remember to do the right thing. And since they don't have to do any additional work, it's easier to convince them that WS-Security is a good idea (which it is).
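For the record, the v1-style chore looks roughly like this (a sketch from memory, so treat the WSE class names as approximate):

```csharp
using System.Web.Services;
using System.Web.Services.Protocols;
using Microsoft.Web.Services;           // WSE v1
using Microsoft.Web.Services.Security;

public class QuoteService : WebService
{
    [WebMethod]
    public string GetQuote(string symbol)
    {
        // Every single web method has to remember to do this in v1.
        SoapContext ctx = HttpSoapContext.RequestContext;
        if (ctx == null)
            throw new SoapException("SOAP requests with WSE headers required",
                                    SoapException.ClientFaultCode);

        foreach (SecurityToken token in ctx.Security.Tokens)
        {
            UsernameToken user = token as UsernameToken;
            if (user != null)
                return LookupQuote(symbol, user.Username);
        }
        throw new SoapException("UsernameToken required",
                                SoapException.ClientFaultCode);
    }

    private string LookupQuote(string symbol, string userName)
    {
        return "42";  // placeholder for the real work
    }
}
```

With v2's policy support, that entire boilerplate block disappears from the method body.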

The new support for WS-Trust, and specifically WS-SecureConversation, is a big boon to performance. It allows you to do one heavyweight authentication, using a UsernameToken or an X.509 certificate, etc., then have the server generate a new lightweight token for use during the rest of the conversation. Much less of a burden on server resources.

There is also new support for non-RPC programming models, so that you can use one-way or multicast messages and more.  And you don’t have to host your web services in IIS, which allows for lots of new deployment possibilities. 

The only drawback to this session was that a great deal of the material overlapped with the content of  WEB401 (on the new security support). 


Thursday, June 05, 2003 11:56:47 AM (Pacific Daylight Time, UTC-07:00)
# Wednesday, June 04, 2003

Once again, a triumphant performance from Scott Hanselman. He did a great job of explaining what makes WSDL interesting, and why we should, if not necessarily love it, then at least understand it. In keeping with the message I've been hearing the rest of the week, Scott advocated starting with XSD and WSDL and generating clients and servers from there, rather than developing Web Services "code first".

One of the coolest things he demoed was SoapScope from Mindreef, which lets you sniff SOAP at the workgroup level rather than on each individual developer's box. Nice interface, and very handy for debugging distributed problems.

I also appreciated Scott’s Matrix references.  Ignore the WSDL and live in the Matrix, or learn to see the WSDL and, like Neo, gain super powers. :)

Wednesday, June 04, 2003 6:46:17 PM (Pacific Daylight Time, UTC-07:00)

I've seen a wide range of TechEd presentations on Web Services now by some of the luminaries (Don Box, Keith Ballinger, Doug Purdy, et al.), and I find it interesting that the story for how to build .NET web services has changed. Maybe that's been true for a while and I never noticed, but either way, now I know. When .NET first appeared on the scene, the story was that all you had to do was derive from WebService, mark your methods with [WebMethod()], and all would be well.

This week I’ve consistently been hearing that “code-first” development is out for web services, and that we should all learn to love XSD and WSDL.  So rather than coding in C#, adding some attributes, and letting the framework define our WSDL, we should start with XSD and WSDL definitions of our service, and use wsdl.exe and xsd.exe to create the .NET representations.  Very interesting. 
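Concretely, that means command lines like these (the file and namespace names are just for illustration):

```
rem generate an abstract server-side class from the contract
wsdl.exe /server /language:CS StockQuote.wsdl

rem generate .NET types from the schema
xsd.exe StockQuote.xsd /classes /language:CS /namespace:Contoso.Quotes
```

You then fill in the generated stubs with implementation, instead of letting the framework reverse-engineer a contract out of your C#.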

Furthermore, we should define our WSDL specifically with versioning and interop in mind. Some tips and tricks include using the WS-I interoperability guidelines when defining our WSDL, including version attributes in our SOAP schemas from the get-go, and using loosely typed parameters (xsd:string) or loosely typed schemas (xsd:any) to leave "holes" into which we can pour future data structures without breaking existing clients.
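For example, one of those "holes" might look like this in the schema (a hand-rolled sketch, with made-up element names):

```xml
<xsd:complexType name="Order">
  <xsd:sequence>
    <xsd:element name="Id" type="xsd:string"/>
    <xsd:element name="Quantity" type="xsd:int"/>
    <!-- extensibility hole: future elements from other namespaces can land
         here without breaking clients compiled against this version -->
    <xsd:any namespace="##other" processContents="lax"
             minOccurs="0" maxOccurs="unbounded"/>
  </xsd:sequence>
  <!-- version attribute baked in from the get-go -->
  <xsd:attribute name="version" type="xsd:string" use="optional"/>
</xsd:complexType>
```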

Since I'm about to embark on a fairly major interop project myself, I'm glad to have heard the message, and I say hooray. I think it makes much more sense to stick to industry-standard contracts that we can all come to agreement on and work the code backwards from there, rather than tying ourselves to the framework's notion of what WSDL should look like. Ultimately it's the only way that cross-platform interop is going to work. The "downside," if it can be called such, is that we have to really work to understand WSDL and XSD (or at least the WS-I specified compatible subsets thereof) in order to design web services correctly. However, anyone who was writing any kind of web service without a firm grounding in WSDL and XSD was heading for trouble anyway. I'm looking forward to Scott Hanselman's "Learn to Love WSDL" coming up after lunch today.


Wednesday, June 04, 2003 2:55:19 PM (Pacific Daylight Time, UTC-07:00)

Yesterday I saw a couple of presentations here at TechEd on the Enterprise Instrumentation Framework.  Currently I use log4net for all my logging needs, and I've been doing some compare and contrast between the two systems.


The advantage to using either of these frameworks is that they are loosely coupled: there is no design-time mapping between event sources and event sinks. In log4net, all the developer needs to do is categorize the log message as Debug, Info, Warn, Error, or Fatal. At runtime, log4net is configured using an XML document. You can define multiple event sinks ("appenders"), such as text files, the Event Log, ADO data sources, OutputDebugString, etc. It's easy to create plugins to support new sinks as well. There's even a remoting sink so that you can log to a different machine. In the configuration file, you can send events to different sinks based on the logging level (debug, info, etc.) or on the namespace of the class the logging is done from.
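To make that concrete, here's what the developer-facing side of log4net looks like (typical usage; the class is invented for illustration):

```csharp
using System;
using log4net;
using log4net.Config;

public class OrderProcessor
{
    // The logger is named after the class, so the XML config can route
    // events by namespace/class as well as by level.
    private static readonly ILog log = LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(int orderId)
    {
        log.Debug("Starting order " + orderId);
        try
        {
            // ... real work ...
            log.Info("Order " + orderId + " complete");
        }
        catch (Exception ex)
        {
            log.Error("Order " + orderId + " failed", ex);
            throw;
        }
    }
}

// At startup, appenders (sinks) and level filters come from XML, not code:
//   XmlConfigurator.Configure(new System.IO.FileInfo("logging.config"));
// (the configurator is called DOMConfigurator in the early 1.2 builds)
```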


In EIF, you can similarly define multiple sources and sinks, and map them at run time. One big difference is that in EIF, events are strongly typed. You can create an event schema that is used for logging data with distinct types, rather than just as strings. In text logging that's not so important, but in the Event Log, and especially in WMI (which EIF supports), you can take advantage of the strong typing when you read and/or sort the events. However, that means you have to define the schema, which is work. One drawback is that out of the box, EIF supports many fewer event sinks, in fact only three: WMI, the Event Log, and Windows Event Tracing (on Win2K and up). As with log4net, EIF allows you to create your own event sinks. There's currently no UI for reading Windows Event Tracing logs, but they do provide an API. Furthermore, the configuration for EIF is rather more complicated.
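As I understand it, a strongly typed EIF event is basically a class with typed fields. Here's a rough sketch of the pattern from memory; I haven't verified the base class or raise call against the shipping bits, so treat all the names as approximate:

```csharp
// Hedged sketch of the EIF pattern, not verified against the shipping bits.
using Microsoft.EnterpriseInstrumentation.Schema;

public class OrderProcessedEvent : BaseEvent   // assumption: custom events derive from a base event class
{
    // Typed fields become the event payload, so WMI or Event Log consumers
    // can filter and sort on them instead of parsing a message string.
    public string OrderId;
    public decimal Total;
    public int ItemCount;
}

// Raising one would look something like:
//   OrderProcessedEvent ev = new OrderProcessedEvent();
//   ev.OrderId = "A-100"; ev.Total = 12.50m; ev.ItemCount = 3;
//   ev.Raise();
```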


The built-in support for WMI in EIF is pretty useful, since it abstracts out all the System.Management stuff. This support makes it easy to work with industry-standard management tools like HP OpenView. And you even get perf counters for free when you define an EIF event source. On the other hand, the WMI support makes installation a bit more high-ceremony, since you have to include an installer class in your app to register the WMI schema.
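If memory serves, the installer piece itself is tiny; you just can't forget it (class and namespace names approximate):

```csharp
// Approximate sketch: EIF ships an installer base class that registers the
// WMI schema when you run installutil against the assembly.
using System.ComponentModel;
using Microsoft.EnterpriseInstrumentation;

[RunInstaller(true)]
public class MyAppEifInstaller : ProjectInstaller
{
}

// then, as part of deployment:
//   installutil MyApp.exe
```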

Possibly the coolest feature in EIF is the fact that they can push correlation data onto the .NET call context, which travels across remoting boundaries.  That means that they can correlate a set of events for a particular operation across process and server boundaries.  They demoed a UI that would then create a tree of operations with nested events arranged in order.  Pretty cool stuff.
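Under the covers this presumably rides on the logical call context. EIF's actual classes aside, the raw .NET mechanism looks like this:

```csharp
using System;
using System.Runtime.Remoting.Messaging;

// Anything in the logical call context that implements ILogicalThreadAffinative
// is serialized and flowed across remoting boundaries with each call.
[Serializable]
public class CorrelationId : ILogicalThreadAffinative
{
    public readonly Guid Value = Guid.NewGuid();
}

public class Correlate
{
    public static void Begin()
    {
        // client side, before calling out to the remote object
        CallContext.SetData("corrId", new CorrelationId());
    }

    public static Guid Current()
    {
        // server side, possibly in another process or on another machine:
        // the same id comes out, so events can be stitched into one operation
        CorrelationId id = CallContext.GetData("corrId") as CorrelationId;
        return id == null ? Guid.Empty : id.Value;
    }
}
```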


So, EIF has strongly typed events, WMI support, perf counters for free, and cross-process event correlation. Log4net is much simpler to implement, requires less coding overhead, and supports a wider range of event sinks out of the box. It's a tough choice. The WMI support might swing things toward EIF in the long run, especially if your operations outfit uses WMI-based management tools.


Wednesday, June 04, 2003 11:25:06 AM (Pacific Daylight Time, UTC-07:00)
# Tuesday, June 03, 2003

I went to a get-together here at TechEd last night of bloggers and "aggregators" (sounds much better than "lurkers") who are here at the show, organized by Drew Robbins, who runs TechEdBloggers.net. It was really interesting to put some faces with the names. There were also some interesting discussions on blogs and blogging. Many seemed comfortable with the notion I put down a while ago (http://erablog.net/filters/12142.post) of communal validation. Someone dubbed it a "web of trust," which I think is pretty appropriate. The question also came up, "So who plays the role of VeriSign in this web of trust?" I think the interesting part about blogging, at least in the technical community, is that that trust seems to be based on personal relationships rather than on outside "authority". Unlike in the UseNet world, where it's easy to forgo personal relationships in favor of the collective, blogging seems to foster personal relationships. That's a big change from the general anonymity of the Web. I'll be interested to see where it goes from here.

Tuesday, June 03, 2003 10:43:40 AM (Pacific Daylight Time, UTC-07:00)

Once upon a time I did the happy dance on stage (in San Francisco: "Anatomy of an eCommerce Web Site") because I was so excited about the new BizTalk orchestration designer in BTS 2000. What a great idea: to be able to draw business processes in Visio, then use the drawing to create code that knows how to manage a long-running transactional business process. I had been preaching the gospel of the Commerce Server pipeline as a way of separating business process from code, but BizTalk orchestration was even better.

Little did I know… Yesterday (I'm here at TechEd in Dallas) I saw a demo of BTS 2004. Wow. Microsoft has really made great strides in advancing the orchestration interface. Instead of Visio, it's now a Visual Studio .NET plugin, and the interface looks really good. It includes hints to make sure you get everything set up correctly, and full IntelliSense to speed things along. I was very impressed with the smoothness of the interface. Not only that, but now you can expose your orchestration as an XML Web Service, and you can call Web Services from inside your schedule.

I’ve always thought that BTS has gotten short shrift in the developer community.  I think it’s because it tends to be pigeon-holed as something only useful in big B2B projects.  I can think of lots of ways in which orchestration could be very useful outside the realm of B2B.  I guess part of it has to do with the pricing.  While I can think of lots of ways to use BTS in non-B2B scenarios, they aren’t really compelling enough to convince me to spend that much money.  Ah well.   

Tuesday, June 03, 2003 10:31:47 AM (Pacific Daylight Time, UTC-07:00)
# Wednesday, May 28, 2003


The constant point-and-click of the mouse can be a real drag. One company has developed products that sense hand movements to give computer commands, creating input devices that it hopes will replace the mouse. By Katie Dean.
[Wired News]



This is a great idea, and I’d be all over it if it wasn’t quite so spendy.  I think in the long run this kind of gestural interface could really win out, since you can encode a lot of information in gestures that would be harder or take longer using other input methods.

Wednesday, May 28, 2003 1:10:19 PM (Pacific Daylight Time, UTC-07:00)
# Tuesday, May 27, 2003

I’m back home from New Orleans after watching my wife Vikki win a bronze medal at the US Tae Kwon Do Senior Nationals.  Woohoo!

A good time was had by all.  The competition is pretty amazing.   

Tuesday, May 27, 2003 3:11:37 PM (Pacific Daylight Time, UTC-07:00)
# Friday, May 16, 2003

There's been a great deal of hullabaloo in the last week or so about blogging “ruining the Internet for everyone” and Google segregating hits from blogs into their own section (good or bad?). I realize that since I'm posting this to a blog, this sounds a bit self-serving, but here's my two cents worth:

While it's true that blogging has lowered the bar in terms of access to web publishing, the simple fact is that ever since there was an Internet, anyone who wanted to and had access to a keyboard could post whatever drivel they wanted to the web. All blogging really adds to the mix is that now you don't even have to have a rudimentary knowledge of HTML (or of how to save your Word doc as HTML) in order to publish yourself. While that means more volume, it doesn't really change the nature of data on the web.

The real “problem” as far as Google is concerned is that the self-referential nature of blogging upsets their ranking algorithm. This has apparently led people (like Larry Lessig) to conclude that blogging is ruining the nature of content on the web.

I would argue that there's nothing about blogging that changes the need for critical thinking when looking for information on the web. That's always been true, since there's fundamentally no concrete way to verify the validity of anything you read on any site without thinking critically about the nature of the source, how you came across it, and what references it makes to other works. If you apply that kind of filter, then the self-referential or "incestuous" nature of blogs can be used to advantage.

For example, if I'm looking for interesting information about SOAP, XML Web Services, or how either or both relate to .NET, I'd assume that Don Box (for example) is a reliable source, given that I've read his books, articles, and speeches in other contexts. If he mentions something in his blog that someone else said about .NET on their blog, I can assume with a high degree of certainty that the reference is relevant and useful. Then, when I follow that link, I'll find more links to other blogs that contain relevant and useful information. The tendency is for all those cross-links to start repeating themselves, and to establish a COMMUNITY wherein one can assume a high degree of relevant information. All that's needed to validate the COMMUNITY as a whole is a few references from sources that are externally trusted.

In the long run, I think that kind of "incestuousness" can be used to validate, rather than discount, large bodies of interesting and useful information that we wouldn't otherwise have access to.

That's a fairly lengthy rant for me, but I had to get that off my chest. I hate to see pundits discounting a body of information just because it's posted casually and not in academic journals.

Friday, May 16, 2003 11:22:41 PM (Pacific Daylight Time, UTC-07:00)