# Friday, February 12, 2010

We just caught this one this morning…  It looks like WF 4 reuses activity instances across workflow instances.  So if I have a WorkflowService that’s hosted in IIS, and I call it from two different client threads at the same time, the two workflow instances now running on the server may be using the same activity instances for child activities.  The documentation is not clear on this point, but that’s the behavior we observed. 

The implication is that you have to treat calls to your Activity’s Execute method as stateless, and not maintain any state in your activity between calls to Execute.  (Our specific problem was around EntityFramework containers.  Apparently they don’t like being called on multiple threads. :) )

Makes sense, but it’s not at all clear from the documentation that this would be the case.  You can rely on the thread safety of your InArguments and OutArguments, since they are accessed through the context, but private fields are right out unless whatever you store in them is also thread-safe.
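To make the rule concrete without dragging in the WF types, here’s a plain-C# sketch.  GreetActivity and SimContext are hypothetical stand-ins for a custom Activity and its ActivityContext, and two threads stand in for two workflow instances sharing one activity instance:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical stand-in for WF's per-workflow-instance ActivityContext.
class SimContext
{
    public string Input;
    public string Output;
}

// Stand-in for a custom activity whose single instance may be shared
// across concurrently running workflow instances.
class GreetActivity
{
    // WRONG: a private field like `private string _name;` here would be
    // shared across workflow instances and race under concurrency.

    // RIGHT: Execute is stateless; it reads and writes only through the
    // per-invocation context it is handed.
    public void Execute(SimContext context)
    {
        context.Output = "Hello, " + context.Input;
    }
}

class Program
{
    static void Main()
    {
        GreetActivity shared = new GreetActivity();  // one activity instance
        SimContext c1 = new SimContext { Input = "Alice" };
        SimContext c2 = new SimContext { Input = "Bob" };

        // Two "workflow instances" executing the same activity instance.
        Parallel.Invoke(() => shared.Execute(c1), () => shared.Execute(c2));

        Console.WriteLine(c1.Output);  // Hello, Alice
        Console.WriteLine(c2.Output);  // Hello, Bob
    }
}
```

The commented-out field is exactly the EntityFramework-container mistake: anything stored in a field on the activity is shared with every workflow instance that happens to be executing it.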

Friday, February 12, 2010 11:14:40 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, March 02, 2009

After reading Justin Angel’s very good summary of unit testing Silverlight using the Silverlight UnitTest framework, RhinoMocks, Unity etc. I decided to give it a go and find out how easy it was for a “real” application.  I converted the course evaluation app I’ve been working on since December to MVVM using Unity, and then set about trying to test it with the testing tools. 

I must say I do rather like the MVVM pattern, so that part went pretty well, as did the use of Unity, although there was some learning to do there.  It’s not quite as obvious as it maybe should be, but it didn’t take too long.  The biggest issue I had with both Unity and the test tools came in relation to the WCF proxy that I’m using to talk back to the server from Silverlight.  I think it would be a bit easier using the asynchronous interface that is generated as part of the proxy (the one that has all the BeginXXX, EndXXX methods on it), but I’m using the interface that consists of completion events and XXXAsync methods.  That object (in my case it’s called “EvalServiceClient”) doesn’t like being created by Unity, presumably because of something down in the WCF infrastructure, so I had to create it myself and register the instance with Unity.

Current = new UnityContainer();
Current.RegisterInstance(typeof(EvalServiceClient), new EvalServiceClient());

That isn’t too terrible, but it did take a while to figure out.  One of the things that makes it harder is that the errors that come back just say “Unity couldn’t create your thing,” and it takes a bit of digging to find out where and why it actually failed.

The blogosphere suggests (and I agree) that implementing MVVM in Silverlight isn’t quite as straightforward as it might be in WPF, largely due to the lack of commands.  There are a couple of declarative solutions for mapping UI elements to methods in a View Model, but most rely on quite a bit of infrastructure.  I decided it was OK (enough) to put just enough code in my code-behind to wire up event handlers to methods on the View Model.  Icky?  Not too bad.  Commands would obviously be better, but there it is.

private void submitEval_Click(object sender, RoutedEventArgs e)
{
    // body elided; it just forwards to the corresponding View Model method
}

private void lstCourse_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    CourseInfo ci = lstCourse.SelectedItem as CourseInfo;
    // ...which then hands the selected course to the View Model
}

I found that the reliance on data binding makes it much easier to separate presentation from business logic.  The View Model can represent any UI-specific logic, like which buttons should be enabled when (which the underlying business/domain model doesn’t care about), and allow a designer to work strictly with XAML.  Because INotifyPropertyChanged really works, you don’t have to worry about pushing data into the interface, just about which properties have to be marked as changed.  For a computed property like “should the submit button be shown” it may take a few extra notification calls to make sure the UI gets updated properly, but that seems reasonable.

// NotifyPropertyChanged is the usual INotifyPropertyChanged helper
public bool CanSubmit
{
    get
    {
        _canSubmit = (_registrationId.HasValue && _questionCategories != null);
        return _canSubmit;
    }
    set { if (_canSubmit != value) { _canSubmit = value; NotifyPropertyChanged("CanSubmit"); } }
}

public int? RegistrationId
{
    get { return _registrationId; }
    // the setter also invalidates the computed CanSubmit property
    set { if (_registrationId != value) { _registrationId = value; NotifyPropertyChanged("RegistrationId"); NotifyPropertyChanged("CanSubmit"); } }
}

public System.Collections.ObjectModel.ObservableCollection<Evaluation.EvaluationServer.QuestionCategory> QuestionCategories
{
    get { return _questionCategories; }
    set { if (_questionCategories != value) { _questionCategories = value; NotifyPropertyChanged("QuestionCategories"); NotifyPropertyChanged("CanSubmit"); } }
}

In the example above, the value of “CanSubmit” relies on the state of RegistrationId and QuestionCategories, so the property setters for those properties also “invalidate” CanSubmit so the UI will update properly.  In the XAML, the IsEnabled property of the Submit button is bound to the CanSubmit property of the View Model.
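In XAML, that binding is a one-liner; a hedged sketch (the control name submitEval and the DataContext wiring are assumed):

```xml
<!-- sketch: the button enables itself off the View Model's CanSubmit -->
<Button x:Name="submitEval" Content="Submit"
        IsEnabled="{Binding CanSubmit}"
        Click="submitEval_Click" />
```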

The next challenge was getting the test code to work.  Because I didn’t want the test code to call the real web service, I had to mock the calls to the EvalServiceClient.  For whatever reason, I didn’t have any luck with mocking the object itself.  I think this had to do with the asynchronous nature of the calls.  The code registers an event handler for each completion event, then calls XXXAsync to call the web service.  When the call returns, it fires the completion handler.  To make that work with RhinoMocks, you have to record the event hookup, then capture an IEventRaiser interface that will let you raise the desired event.

using (mocks.Record())
{
    client.GetStudentNameCompleted += null;
    raiser = LastCall.IgnoreArguments().GetEventRaiser();
}

That call to GetEventRaiser fails if I mock the EvalServiceClient object itself, so I had to create an interface that I could mock instead.  Luckily, the generated proxy is a partial class, so it’s easy to add a new interface.

public interface IEvalServiceClient
{
    event System.EventHandler<GetStudentNameCompletedEventArgs> GetStudentNameCompleted;

    void GetStudentNameAsync();
}

public partial class EvalServiceClient : IEvalServiceClient
{
}


Now the RhinoMocks code mocks the IEvalServiceClient interface, and the GetEventRaiser call works just fine.  Because the WCF client actually gets created by Unity, we have to register the new mock instance with the UnityContainer.

MockRepository mocks = new MockRepository();
IEvalServiceClient client = mocks.StrictMock<IEvalServiceClient>();

IEventRaiser raiser;

using (mocks.Record())
{
    client.GetStudentNameCompleted += null;
    raiser = LastCall.IgnoreArguments().GetEventRaiser();
}

using (mocks.Playback())
{
    Page page = new Page();
    raiser.Raise(client, new GetStudentNameCompletedEventArgs(new object[] { "Jones, Fred" }, null, false, null));
    WaitFor(page, "Loaded");

    EnqueueCallback(() => Assert.IsTrue(page.lblStudent.Text == "Jones, Fred"));
}

During playback, we can use the IEventRaiser to fire the completion event, then check the UI to make sure the property got set correctly. 
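One piece the snippets above gloss over: the mock has to be registered with the container in place of the real proxy, mirroring the RegisterInstance call from earlier.  A sketch, assuming the same static Current container shown at the top of the post:

```csharp
// Swap the RhinoMocks mock in where the real EvalServiceClient was
// registered, so the Page resolves IEvalServiceClient from Unity as usual.
Current = new UnityContainer();
Current.RegisterInstance(typeof(IEvalServiceClient), client);
```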

I’m pretty convinced that MVVM is a good idea, but this method of testing seems awfully cumbersome to me, plus pretty invasive.  I had to make quite a few changes to my app to make the testing work, including creating the interface for the EvalServiceClient, and marking any controls I needed to write tests against with x:FieldModifier="public" in my XAML.  It’s good to know how to make this work, but I’m not sure I’d use this method to test everything in my Silverlight app.  Probably only the highest risk areas, or places that would be tedious for a tester to hit.

Monday, March 02, 2009 2:42:25 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, December 05, 2006

This took me WAY longer to figure out than either a) it should have or b) I had hoped it would.  I finally got it working last week, though.  I now have my client calling the server via a Dual Contract using IssuedToken security (requires a SAML token), said token being obtained from an STS written by me, which takes a custom token from the client for authentication. 

On the plus side, I know a whole bunch more about the depths of the ServiceModel now than I did before. :-)

It turns out (as far as I can tell) that the binding I needed for client and server cannot be described in a .config file, and must be created in code.  That code looks like this on the server


public static Binding CreateServerBinding(Uri baseAddress, Uri issuerAddress)
{
    CustomBinding binding = new CustomBinding();

    IssuedSecurityTokenParameters issuedTokenParameters =
        new IssuedSecurityTokenParameters();
    issuedTokenParameters.IssuerAddress = new EndpointAddress(issuerAddress);
    issuedTokenParameters.IssuerBinding = CreateStsBinding();
    issuedTokenParameters.KeyType = SecurityKeyType.SymmetricKey;
    issuedTokenParameters.KeySize = 256;
    issuedTokenParameters.TokenType
        = "http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1";

    SecurityBindingElement security
        = SecurityBindingElement.CreateIssuedTokenBindingElement(issuedTokenParameters);

    binding.Elements.Add(security);
    binding.Elements.Add(new CompositeDuplexBindingElement());
    binding.Elements.Add(new OneWayBindingElement());
    binding.Elements.Add(new TextMessageEncodingBindingElement());
    binding.Elements.Add(new HttpTransportBindingElement());

    return binding;
}


and essentially the same on the client, with the addition of setting the clientBaseAddress on the CompositeDuplexBindingElement.
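Sketched out, that client-side delta is just a couple of lines (clientBaseAddress being whatever address the client listens on for callbacks):

```csharp
// Client side: same binding as the server, except the duplex element
// carries the address the service will call back to.
CompositeDuplexBindingElement duplex = new CompositeDuplexBindingElement();
duplex.ClientBaseAddress = clientBaseAddress;
binding.Elements.Add(duplex);   // instead of a bare new CompositeDuplexBindingElement()
```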

This works just fine, getting the token from the STS under the covers and then calling the Dual Contract server interface with the token. 

I ended up with a custom binding to the STS itself, mostly because I needed to pass more credentials than just user name and password.  So the STS binding gets created thusly:


public static Binding CreateStsBinding()
{
    Binding binding = null;

    SymmetricSecurityBindingElement messageSecurity
        = new SymmetricSecurityBindingElement();

    messageSecurity.EndpointSupportingTokenParameters
        .SignedEncrypted.Add(new VoyagerToken.VoyagerTokenParameters());

    X509SecurityTokenParameters x509ProtectionParameters
        = new X509SecurityTokenParameters(X509KeyIdentifierClauseType.Thumbprint);
    x509ProtectionParameters.InclusionMode = SecurityTokenInclusionMode.Never;

    messageSecurity.ProtectionTokenParameters = x509ProtectionParameters;

    HttpTransportBindingElement httpBinding = new HttpTransportBindingElement();

    binding = new CustomBinding(messageSecurity, httpBinding);

    return binding;
}


Not as easy as I had hoped it would be, but it's working well now, so it's all good.  If you go the route of the custom token, it turns out there are all kinds of fun things you can do with token caching, etc.  It does require a fair amount of effort though, since there are 10-12 classes that you have to provide.

Tuesday, December 05, 2006 3:47:33 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, October 11, 2006

I'm here in Redmond at the patterns and practices Summit, and yesterday Don Smith had some very interesting things to say about versioning of web services.  He was polling the audience for opinions and scenarios, and I started to realize that I hadn't thought very hard about the problem.  Typically in cases where I've wanted to version a web service, I've left the existing endpoint in place (supporting the old interface) and added a new endpoint which supports the new interface.  I still think that's a pretty good solution.  What I realized, though, was that there's no way to communicate the information about the new interface programmatically.  It still pretty much requires a phone call. 

Don suggested something like RSS (or actually RSS) to push information about new versions out to clients.  Another suggestion was to have people actively "subscribe" to your service.  That would not only handle the "push" notification, but you'd know how many people were actually using your old vs. new interface, etc. 

I started thinking about something like a WS-VersioningPolicy.  We already have standards like WS-SecurityPolicy, which carry metadata about how you should secure your use of a given service.  Why not use a similar formal, programmatic method for distributing information about versioning policy?  You could convey information like which methods are deprecated, where to find the new versioned endpoint (if that's your strategy), or define up-front policies about how you will version your interface without switching to a new endpoint. 

That still doesn't solve the "push" problem.  Clients would have to not only consume the policy file at the time they start using the service, but presumably they'd have to check it from time to time.  That suggests human intervention at some level.  Hmmm. 

This is a hard problem, but one that Don's pretty passionate about solving, so check out his blog for future developments in this space.

Wednesday, October 11, 2006 9:12:47 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, June 15, 2006

Slowing down now.  Me, not TechEd.  Very long days starting to take their toll...

That aside, I've seen some very groovy stuff over the last day or two.  WF as the controller in an MVC architecture, using rule-based activities in WF, WCF talking to an Oracle system over standards based web services (with security, reliable messaging, MTOM, et al).  Shy Coen did a good chalk talk yesterday on publish and subscribe patterns using WCF which gave me some good ideas.  I'm looking forward to seeing more about the Service Factory tomorrow morning.  Meeting lots of very smart people. 

I realize my sentences are getting shorter and shorter, and the nouns will probably start dropping out next.  Attendee party tonight.  Nothing I like better than 12,000 drunk nerds all in one place.  With batting cages.  I'll take pictures. :-)

On a completely different note, look to see some major changes to this blog in the next week or so...

Thursday, June 15, 2006 8:37:10 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, February 08, 2006

Doing some WCF training this week, and the current presenter is talking about how to embrace interoperability in the world of ASMX 2.0, WSE 3, and WCF.  One of the principles he urges us to embrace is KISS.  Keep Interoperable Schemas Simple. 

I love it.  I want T-shirts. 

Wednesday, February 08, 2006 9:23:50 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 

I'm doing some WCF (Indigo) training this week, and one of the hands on labs went through an example of a federated trust scenario, with two STS's involved in the process.  I've got to say, I'm really impressed with how easy it was.  Granted, the configuration is pretty hairy, but it's just that, configuration.  You can set up a whole federated trust system using config files.  And it worked.  Not too shabby.  I would never have contemplated attempting something like that in WSE 2, although I think in WSE 3 it's supposed to be a bit easier. 

One thing to note, if you want to do federated trust, is that the WCF team is not shipping an STS.  Presumably for liability reasons, but that's anyone's guess.  They are, however, providing some very complete samples, which could be fairly quickly adapted for use inside one's organization.  There's also a good example STS for WSE 3 up on gotdotnet as of a few weeks ago. 

Overall, my impression is that security in WCF is very well thought out, and WAY easier to bend to your will than ever before.  Check it out.

Wednesday, February 08, 2006 9:15:28 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, January 23, 2006

I’ve been playing with the January CTP of WCF, and I’ve encountered what seems like a pretty major setback.  I’ve got an interface that takes a MessageContract and returns a MessageContract.  All well and good.  But then I want to use the AsyncPattern on the service side, so that my routine will get called on a different thread from the one that’s listening on the network.  So I decorate the interface like so:


[ServiceContract]
public interface IThingy
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginSignon(ThingyRequestMessage<SignOnRequest> request, AsyncCallback cb, object state);

    ThingyResponseMessage<SignOnResponse> EndSignon(IAsyncResult ar);
}

Now I get an exception at runtime, which says that I can’t mix parameters and messages for the method “EndSignon”.  What it means is that if I return a MessageContract instead of a primitive type, my method has to take a MessageContract and not one or more primitive types.  OK, I get that.  But my EndSignon method is getting flagged because it takes an IAsyncResult (as it must according to the AsyncPattern) and returns a ThingyResponseMessage. 

Does this mean I can’t use MessageContracts with the AsyncPattern?  If so, LAME.  If not, then what am I missing?

SOAP | Web Services | XML | Indigo
Monday, January 23, 2006 3:05:23 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 
# Tuesday, January 03, 2006
There’s still time to sign up for the next web services class I’ll be teaching at OIT.  This class (which I haven’t taught before) is going to be on “Enterprise Web Services”.  We’ll cover the things you need to know to build a real enterprise application using Web Services, and how emerging standards make that much easier and more standardized.  The focus will be on applying web services standards to building B2B applications, and participants are expected to already have a solid grounding in XML/SOAP/WSDL, and be able to code in C# or VB.NET.  Class starts Monday, 1/9 at OIT Portland’s Capital Center campus.  CST 407P. 
Tuesday, January 03, 2006 3:58:14 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, December 02, 2005

I haven’t tried doing any WCF (formerly and still known as Indigo) development in probably 6 months or so, and some things have changed since then.  Also, this is the first time I’ve tried implementing a duplex contract.  I made some mistakes along the way, due in part to the fact that the sample code in the November CTP doesn’t match the docs (no surprise there).  Over all, though, it was way easier than I thought it might be.  Certainly easier than .NET Remoting, and the fact that there’s a built-in notion of a duplex contract solves tons of problems.

Anyway, I was trying to get my client to work, and for the life of me couldn’t figure out the errors I was getting, until it finally dawned on me.  Here’s what I had:

static void Main(string[] args)
{
    InstanceContext site = new InstanceContext(new CallbackHandler());

    // Create a proxy with the given client endpoint configuration.
    using (MyDuplexProxy proxy = new MyDuplexProxy(site))
    {
        // ... call the service (the call itself was elided in the original) ...
    }

    // Too late: the proxy was disposed at the end of the using block,
    // so the duplex callback can never be delivered.
    Console.ReadLine();
}

It’s probably obvious to everyone who isn’t me why this won’t work.  You can’t dispose of the proxy and still expect a callback.  Now that I say that it makes sense, but it didn’t occur to me for a while, since the callback itself isn’t on the proxy class. So, I changed one line to this:

static void Main(string[] args)
{
    InstanceContext site = new InstanceContext(new CallbackHandler());

    // Create a proxy with the given client endpoint configuration.
    using (MyDuplexProxy proxy = new MyDuplexProxy(site))
    {
        // ... call the service (the call itself was elided in the original) ...

        // Wait here, inside the using block, so the proxy stays alive
        // until the callback has been delivered.
        Console.ReadLine();
    }
}

and everything worked out swimmingly.  I continue to be impressed with how well thought out Indigo is.  While many people like to point out how many mistakes MS has made over the years, you certainly can’t fault them for not learning from them. 

Friday, December 02, 2005 2:31:53 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, September 27, 2005

I’ll be starting the class I’m teaching this term at OIT tomorrow night.  Introduction to Web Services.  There’s still time to get in on it.  CST 407.

Tuesday, September 27, 2005 4:04:41 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, July 08, 2005
I’ll be teaching Introduction to Web Services (CST 407) at OIT (Portland) this Fall.  Tell your friends!  We’ll be covering the basics of Web Services, including theory, history, best practices, and a firm grounding in underlying technologies like XML and SOAP.  Should be a good time.  If you are interested you should be prepared to write code in C#.
Friday, July 08, 2005 4:21:38 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, March 17, 2005

Here we are in the year 2005.  XML has been pretty ubiquitous for at least 5–6 years now.  Namespaces have been in use for pretty much all of that time.  And yet they remain possibly the least understood part of average, everyday XML processing. 

The bottom line is that pretty much any XML parser worth its salt these days supports the namespaces spec.  Which means that

<MyElement/>

is absolutely not the same thing as

<MyElement xmlns="urn:runforthehills"/>

Furthermore, in line with the XML Namespaces spec, an application which is expecting the latter, namespace qualified element should not and must not process the former, unqualified element.

The XmlSerializer that we all know and love in .NET is particularly sensitive to this issue (as well it should be).  As far as the serializer is concerned, everything should be namespace qualified.  The way this commonly bites people is thus: a customer/partner sends you a schema representing the XML documents they are going to be sending you.  In the schema, the targetNamespace attribute is set with a value of "http://partner.com/schema".  When you actually get down to debugging the application, however, it turns out they are sending you totally unqualified XML.  Nothing will work.  There are a few pretty horrible things you can do with the XmlSerializer to try and convince it not to be such a stickler about things, most involving XmlRootAttribute and XmlAttributeOverrides.  I can share those ways if anyone really wants to see them.  Probably best to keep them under cover.  However, that’s only likely to work if your XML document is flat, meaning that the root element only has one level of child nodes under it.  Otherwise, if you use Xsd.exe to generate your serialization class, each set of sub-elements gets put into its own object, which will also be namespace qualified.  And you’re back to square one. 

The right solution of course is to get your partner to send you XML that’s actually correct, but often that’s just not possible for a variety of reasons with which I’m sure we’re all familiar.  As a last ditch effort, you can pre-process the XML text before passing it to the XmlSerializer, and inject the right namespace strings.  Yucky, it’s true, but it does actually get the job done.  You will of course, be paying some overhead costs of string processing and possibly parsing the XML twice.  But what can you do?
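As a concrete (and admittedly yucky) sketch of that last-ditch preprocessing, assume the partner’s schema expects the namespace http://partner.com/schema and a hypothetical MyElement document:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical document type matching the partner's schema.
[XmlRoot("MyElement", Namespace = "http://partner.com/schema")]
public class MyElement
{
    public string Name;
}

class Program
{
    static void Main()
    {
        // What the partner actually sends: no namespace anywhere.
        string raw = "<MyElement><Name>spam</Name></MyElement>";

        // Deserializing `raw` directly would throw:
        //   The element <MyElement xmlns=''> was not expected.

        // Last-ditch fix: inject the expected default namespace (which the
        // child elements then inherit) before handing the text over.
        string patched = raw.Replace("<MyElement>",
            "<MyElement xmlns=\"http://partner.com/schema\">");

        XmlSerializer serializer = new XmlSerializer(typeof(MyElement));
        MyElement doc = (MyElement)serializer.Deserialize(new StringReader(patched));
        Console.WriteLine(doc.Name);  // spam
    }
}
```

The string.Replace is exactly the overhead mentioned above, and a real document would need a more careful injection than matching the literal root tag, but it does get the job done.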

The other thing to keep in mind is how namespaces play out in XSD schema files.  You can only have one target namespace per schema, so anything you define in that schema file will be in that target namespace.  You can import things from other namespaces, but not from the target namespace.  You can, however, define two different schema files that use the same namespace, then import them both into another schema, as long as there are no name collisions.  If you omit the targetNamespace attribute from your schema, the targetNamespace becomes “”, meaning you are defining the schema for an unqualified XML document. 
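Sketched as a schema fragment (the namespace URI is illustrative), the one-target-namespace rule looks like this:

```xml
<!-- sketch: one target namespace per schema file -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://partner.com/schema"
           xmlns:tns="http://partner.com/schema"
           elementFormDefault="qualified">
  <!-- everything declared here lands in http://partner.com/schema -->
  <xs:element name="MyElement" type="tns:MyElementType"/>
  <xs:complexType name="MyElementType">
    <xs:sequence>
      <xs:element name="Name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```

Drop the targetNamespace attribute and everything above would instead describe an unqualified document.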

Confusing enough?  Read the namespace spec (it’s really short), familiarize yourself with how namespaces work in schema, and if you see errors coming back from the XmlSerializer that look like

The element <spam xmlns=''> was not expected.

check your namespaces!  That means you are trying to deserialize an unqualified document, when a qualified one was expected.

Thursday, March 17, 2005 1:00:03 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, February 23, 2005
Michele Leroux Bustamante posted a really great summary of how to go about creating a web service front end coupled to a multi-tier backend.  Check out the diagram at the end.  It makes it very clear.  I particularly like the use of the facade assembly on the web layer to talk to the business tier.  Very nicely done.
Wednesday, February 23, 2005 10:56:45 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, December 08, 2004

I just listened to Microsoft’s web cast on the new Web Services features in Yukon, and I have to say I’m pretty skeptical.  I think it’s yet another case of Microsoft building support for “open standards” and then making them really only work practically if you’re running Windows on both ends. 

The biggest thing that concerns me is their authentication scheme.  They have disallowed anonymous connections, meaning that your HTTP connection must be authenticated using Basic, Digest, or Integrated Authentication.  That means on non-Windows platforms, you’ll be limited to Basic.  But wait.  They also decided that if you allow Basic auth, you must use SSL.  Furthermore, using Basic auth means that you have to use SQL Auth on your SQL server.  How do you send your SQL Auth credentials, you might ask?  WS-Security Username tokens.  But WS-Security isn’t supported for either encryption or digital sigs. 

Plus, you have to use a separate SOAP header to carry session information so that you can maintain your context inside SQL server.  (OK, that bit’s pretty clever.)

So if I want to use Web Services to talk to SQL server from a non-Windows platform, I have to use Basic authentication over SSL, and provide a WS-Security header and a session header.  These seem to me like pretty heavy requirements.  Much of this was explained away with “WSE supports all this stuff”.  OK, great, but that’s not going to help my clients who are using Ruby under Linux. 

The guy giving the demo didn’t give us a look at the WSDL that SQL Server generates, so I have no idea how complex or otherwise it might be.  Given that, it’s hard to judge how hard it would be to deal with the data coming back from SQL Server.  In the plus column, they’ve provided hooks so that you can write your own WSDL if you don’t like the way they do it.

Besides the above issues, I think their implementation and support for Web Services is pretty dang clever.  They’ve thought about a lot of issues. 

What I don’t get is if my Web Services clients only work best under Windows, why do I need Web Services?

Wednesday, December 08, 2004 2:54:24 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 
# Tuesday, December 07, 2004

The following is the final I gave my Web Services Applied class last night.  I was hoping to come up with a project that exposed the interesting parts of developing a web service, and a client to access that service, without getting bogged down in implementing what the service was supposed to do.  At the same time, I wanted something less trivial than Hello World!

The class had three hours to complete the project, and to my surprise only one person finished before the end of that time limit.  Is this too hard?  Do you think it’s asking too much?  I’m not used to teaching in an academic setting, so I’m still trying to gauge the difficulty of this kind of thing.  It’s a senior seminar, so it shouldn’t be too easy.  I’d be interested to hear people’s opinions.

Final Exam

Monday, December 6th.

Your final involves creating both a Web Service, and a client to exercise that Web Service. You will need to create a web service which matches the following UML.

The service has three methods: Remember, which takes a string, Forget, and Regurgitate. Remember will cause the service to store a string value, and keep track of the time it remembered that string. Forget will clear the stored memory entirely. Regurgitate will return an array of RememberedThing objects, which combine the string remembered with the time it was remembered.

You will also need to construct a WinForms client application that will allow a user to remember a string using the service, forget all the remembered strings, or display a list of regurgitated strings and times. You will need to call the web service asynchronously so that the WinForms application doesn't block.
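For anyone curious what a passing service half might look like, here is a minimal sketch using the names from the assignment.  The ASMX plumbing is stripped out so the logic stands alone; in the real service the class would derive from System.Web.Services.WebService and each public method would carry [WebMethod]:

```csharp
using System;
using System.Collections.Generic;

// The combination of a remembered string and when it was remembered.
public class RememberedThing
{
    public string Value;
    public DateTime When;
}

public class MemoryService
{
    // Static because ASMX creates a fresh service instance per request,
    // so per-instance storage would forget everything between calls.
    private static readonly List<RememberedThing> _things = new List<RememberedThing>();

    public void Remember(string value)        // [WebMethod] in the ASMX version
    {
        _things.Add(new RememberedThing { Value = value, When = DateTime.Now });
    }

    public void Forget()                      // [WebMethod]
    {
        _things.Clear();
    }

    public RememberedThing[] Regurgitate()    // [WebMethod]
    {
        return _things.ToArray();
    }
}
```

The WinForms client half is just the generated proxy plus the BeginXXX/EndXXX calls so the UI thread doesn’t block.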

You will need to turn in:

  • The complete code for client and server.

  • The WSDL and XmlSchema documents that describe the service.

Criteria for success:

The whole final is worth 100 points divided as follows:

  • Web Service interface – 30pts. (including WSDL and XSD)

  • Web Service functionality – 30pts.

  • Client functionality – 30pts.

  • Asynchronously calling web service – 10pts.

Tuesday, December 07, 2004 3:22:18 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [1]  | 

Maybe everybody knew this but me, but it’s been totally bugging me personally.  For my recent Web Services Applied class, when they set up the lab machines someone goofed up the permissions.  For a reason or reasons I don’t understand (I’m guessing some kind of domain policy), there didn’t seem to be any way to get a web service running under IIS to allow anonymous access.  Everything worked fine with Integrated Authentication set, as long as the client knew what it was doing, which meant that WebServicesStudio worked just fine, and you could hit the web service with IE and get the default page back.  But as soon as you tried to hit the web service from C# client code, you’d invariably get back HTTP 401.1 Unauthorized.  I tried changing every set of IIS and file system permissions I could think of, to no avail.  Crap.  Luckily, at long last, Google once again came to the rescue.  I still don’t know why anonymous access doesn’t work, but I do know how to make the client problem go away.


static void Main(string[] args)
{
    localhost.Service1 serv = new ConsoleApplication2.localhost.Service1();
    serv.Credentials = System.Net.CredentialCache.DefaultCredentials;

    // calls on the proxy now succeed using Integrated Authentication
}




Setting the credentials on the proxy allows Integrated Authentication to work the way it’s supposed to, and everything works just fine.  Unfortunately I didn’t figure this out until my class was about half way through their final last night, but at least that’s better than not figuring it out at all. :-)

Tuesday, December 07, 2004 11:18:26 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [2]  | 
# Monday, December 06, 2004
I’ll be teaching again next term at OIT (at CAPITAL Center in Beaverton), this time “Enterprise Web Services”.  We’ll be looking at what it takes to build a real-world enterprise application using web services, including such topics as asynchronous messaging, security, reliable messaging and a host of others. We’ll walk through all the stages of building an enterprise-level WS application, using .NET and WSE 2.0 to do the heavy lifting.  Required is a firm grasp of programming in C#, and a basic understanding of Web Services fundamentals such as XML, SOAP, and WSDL.
Monday, December 06, 2004 1:18:30 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
Check out the ever-well-informed-and-entertaining Stuart Celarier this Thursday at CAPITAL center in Beaverton.  Should be a good talk.  If you ask nicely he might even juggle.  :-)
Monday, December 06, 2004 10:38:27 AM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, November 08, 2004

Our CTO, Chris, recently turned me on to Ruby.  I've been playing around with it a bit over the last few weeks, and I've got to say I'm pretty impressed.  I really appreciate that it was designed, as they say, according to the “Principle of Least Surprise”, which means that it basically works the way you would think. 

Ruby has a lot in common with Smalltalk, in that “everything is an object” kinda way, but since Ruby's syntax seems more (to me at least) like Python or Boo, it seems more natural than Smalltalk.  Sure, you don't get the wizzy browser, but that's pretty much OK.  When you apply the idea that everything is an object, and you're just sending them messages to ask them (please) to do what you want, you get some amazingly flexible code.  Sure, it's a bit squishy, and for code I was going to put into production I still like compile time type safety, but for scripting or quick tasks, Ruby seems like a very productive way to go.

Possibly more impressive was the fact that the Ruby installer for Windows set up everything exactly the way I would have thought (“least surprise” again), including adding the Ruby interpreter to the path (kudos) and setting up the right file extension associations so that everything “just worked”.  Very nice.

The reason Chris actually brought it to my attention was to point me at Rails, which is a very impressive MVC framework for writing web applications in Ruby.  Because Ruby is so squishily late-bound, it can do some really amazing things with database accessors.  Check out the “ActiveRecord” in Rails for some really neat DAL ideas. 

I'm assuming that that same flexibility makes for some pretty groovy Web Services clients, but I haven't had a chance to check any out yet.  Anyone have any experience with SOAP and Ruby?

Monday, November 08, 2004 6:48:14 PM (Pacific Standard Time, UTC-08:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, October 26, 2004

Steve Maine is in the midst of the perennial debate between SOAP and REST, and I feel compelled to add my two cents...

At the XML DevCon last week I noticed that it continues to be fashionable to bash the existing Web Services standards as being too complex and unwieldy (which in several notable cases is true, but it's what we have to work with at this point), but that doesn't change the fact that they solve real problems.  I've always had a sneaking suspicion that people heavily into REST as a concept favor it mostly out of laziness, since it is undeniably a much simpler model than the SOAP/WS-* stack.  On the other hand, it fails to solve a bunch of real problems that SOAP/WS-* does.  WS-Addressing is a good example. 

I spent two years developing an application that involved hardware devices attached to large power transformers and industrial battery systems that needed to communicate back to a central data collection system.  We used SOAP to solve that particular problem, since it was easy to get the data where it needed to go, and we could use WS-Security to provide a high level of data encryption to our customers.  (Utility companies like that.)  However, we had one customer who would only allow us to get data from the monitors on their transformers through a store-and-forward mechanism, whereby the monitors would dump their data to a server inside their firewall, and we could pick up the data via FTP.  This is a great place for WS-Addressing, since all the addressing information stayed inside the SOAP document, and it didn't matter if we stored it out to disk for a bit.  There is no way that REST could have solved this particular problem.  Or, at least, no way without coming up with some truly bizarre architecture that would never be anything but gross.
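To make that store-and-forward scenario concrete, here's a rough sketch of what a WS-Addressing header looks like.  The element names come from the WS-Addressing spec; the endpoint URIs and message ID are made up for illustration.  Because the destination and reply-to endpoints travel inside the envelope itself, the message can sit on disk or hop across FTP without losing its routing information:

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing">
  <soap:Header>
    <!-- The ultimate destination, independent of the transport used -->
    <wsa:To>http://example.com/collection/DataSink</wsa:To>
    <!-- Where replies should go: the monitor behind the firewall -->
    <wsa:ReplyTo>
      <wsa:Address>http://example.com/monitors/Monitor42</wsa:Address>
    </wsa:ReplyTo>
    <wsa:Action>http://example.com/collection/SubmitReadings</wsa:Action>
    <wsa:MessageID>uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
  </soap:Header>
  <soap:Body>
    <!-- the monitor readings payload goes here -->
  </soap:Body>
</soap:Envelope>
```

An HTTP-bound message gets its destination from the HTTP request itself; once the envelope carries its own addressing, any transport (or none at all, for a while) will do.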

REST is great for solving very simple application scenarios, but that doesn't make it a replacement for SOAP.  I agree that many of the WS-* standards are getting a bit out of hand, but I also agree with Don Box's assessment (in his "WS-Why?" talk last week) that given the constraints, WS-Addressing and WS-Security are the simplest solutions that solve the problem.  There's a reason why these are non-trivial specs.  They solve non-trivial problems.

So rather than focusing on REST vs. SOAP, it's more interesting and appropriate to look at the application scenarios and talk about which is the simplest solution that addresses all the requirements.  I don't think they need to be mutually exclusive.

Tuesday, October 26, 2004 10:01:27 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Friday, August 13, 2004

Looks like Scott and I will be speaking at Chris Sells' XML DevCon this year.  Last year I spoke on XML on transformer monitors.  This year Scott and I will be talking about the work we've been doing with online banking and XML Schema. 

If it's anything like last year's, the conference should be pretty amazing.  The speakers list includes some pretty serious luminaries.  In fact, it's pretty much a bunch of famous guys... and me.  :-)

Sign up now!

Friday, August 13, 2004 4:50:12 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, August 04, 2004

I'll be teaching at OIT (in Portland/Beaverton, not K-Falls) again Fall term.  This time it's "Practical Web Services".  If you're interested, sign up through OIT.  The course number is 15048.  Description follows:

Practical Web Services

Web Services sound like a great idea, but how do you actually go about using them?  How do you go about actually writing your own Web Service to expose your data or functionality?

This class will cover all the details involved in using and building your own Web Services using the Microsoft .NET platform.  The first half of the class will cover the building of a client application to consume a Web Service from the Internet.  The second half will focus on building an equivalent Web Service using ASP.NET.

Students will leave this class with a firm understanding of how to use Web Services built by other people, and how to implement their own Web Services using the .NET platform.

Students should either have taken the previous "Web Services Theory" class, or have instructor approval.  All work will be done in C#, so a firm understanding of C# is required.

Wednesday, August 04, 2004 9:47:02 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, July 30, 2004

After an inspirational post by Loren Halvorson, we decided to emulate his team's use of an Ambient Orb to track the status of our continuous integration builds. 

After being backordered for a couple of months, it finally showed up yesterday.  It's happily glowing green on my desk as I write this.  Unfortunately, what we didn't know is that in order to bend the orb to our will and get it to show our build status, we have to shell out $20 a quarter for their "premium" service.  Humph.  Plus, I've been having some "issues" with it.  It's supposed to reflect the current status of the DJIA, which is down right now, but the orb is green, which is supposed to indicate that the Dow is up.  I can't seem to make it be anything but green.  I know it's not a problem with the lights, because when it was "synchronizing" it strobed through just about every color there is.  After it's done, though, I only get green. 

Anyway, all that together has prompted us to order their serial interface kit, so we can control the thing directly without depending on their wireless network.  Maybe not as elegant, but it should be deterministic, which is more important in this case.  Seems like a lot of work for what was supposed to be a funny toy.  :-)

Friday, July 30, 2004 11:36:02 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [1]  | 
# Thursday, July 15, 2004

I finished up my Web Services Theory class at OIT last night.  Just the final to go on Monday (mwah ha ha).

We ended with some material on WS-* and all the various specs.  I tried to spend minimal time on the actual syntax of WS-*, since some of the specs are pretty hairy, and spent more time on the business cases for WS-*.  That seemed to go over pretty well.  I think it's easier to understand the business case for why we need WS-Security than it is to understand the spec itself.  Unfortunately, one of the underlying assumptions about all the GXA/WS-* specs is that eventually they will just fade into the background, and you'll never see the actual XML, since some framework piece (like WSE 2.0) will just "take care of it" for you.  What that means is that the actual XML can be pretty complex.  The unfortunate part is that we don't have all those framework bits yet, so we have to deal with all the complexity ourselves.  Thankfully more tools like WSE 2.0 are becoming available to hide some of that from the average developer.  On the other hand, I'm a great believer in taking the red pill and understanding what really goes on underneath our framework implementations. 

Thursday, July 15, 2004 4:40:53 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, July 09, 2004

Dare Obasanjo posits that the usefulness of the W3C might be at an end, and I couldn't agree more.  Yes, the W3C was largely behind the standards that "made" the Web, but they've become so bloated and slow that they can't get anything done.

There's no reason why XQuery, XInclude, and any number of other standards that people could be using today aren't finished, other than the fact that the bureaucrats on the committee all want their pet feature in the spec, and the W3C process is all about consensus.  What that ends up meaning is that no one is willing to implement any of these specs seriously until they are full recommendations.  Six years now, and still no XQuery.  It's sufficiently complex that nobody is going to try to implement anything other than toy/test implementations until the spec is a full recommendation.

By contrast, the formerly-GXA, now WS-* specs have been coming along very quickly, and we're seeing real implementations because of it.  The best thing that ever happened to Web Services was the day that IBM and Microsoft agreed to "agree on standards, compete on implementations".  That's all it took.  As soon as you get not one but two 800 lb. gorillas writing specs together, the reality is that the industry will fall in behind them.  As a result, we have real implementations of WS-Security, WS-Addressing, etc.  Since we in the business world are still working on "Internet time", we can't wait around six or seven years for a real spec just so every academic in the world gets his favorite feature included.  That's how you get XML Schema, and all the irrelevant junk that's in that spec. 

The specs that have really taken off and gotten wide acceptance have largely been de facto, non-W3C-blessed specs, like SAX, RSS, SOAP, etc.  It's time for us to move on and start getting more work done with real standards based on the real world.

SOAP | Web Services | Work | XML
Friday, July 09, 2004 10:35:44 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, June 29, 2004

I'm into the second week of my Web Services Theory class at OIT (Portland).  It's been a lot of fun so far.  We've gone over XML modeling, DOM, XmlTextReader, and last night some XPath/XQuery.  Not in too much depth, since what I'm really shooting for is a grounding in the idea of Web Services, rather than the technical details, but I think it's important to do some practical exercises to really understand the basics. 

Next we're on to XML Schema, then the joy that is WSDL.  I'm a little worried about WSDL.  It's a hard sell, and it takes a lot of time to explain the problems that WSDL was designed to solve, which, as it turned out, 95% of people didn't understand or care about.  Ah well.  It's what we have for now. 


Tuesday, June 29, 2004 2:16:38 PM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, May 25, 2004

Scott has some comments about WSE 2.0 (which just in case you haven't heard yet has RTMed) and I wanted to comment on a few things...

Question: The Basic Profile is great, but are the other specs getting too complicated?
My Personal Answer (today): Kinda feels like it!  WS-Security will be more useful when there is more support on the Java side.  As far as WS-Policy, it seems that Dynamic Policy is where the money's at, and it's a bummer WSE doesn't support it.

It's the tools that are at issue here rather than the specs, I think.  I spent some time writing WS-Security by hand about a year ago, and yes, it's complicated, but I don't think unnecessarily so.  The problem is that we aren't supposed to be writing it by hand.  We take SSL totally for granted, but writing an SSL implementation from scratch is non-trivial.  We don't have to write them ourselves anymore, so we can take it for granted.  The problem (in the specific case of WS-Security) is that we have taken security for granted as far as Web Services go.  Unfortunately, that assumes Web Services are bound to HTTP.  In order to break the dependence on HTTP (which opens up many new application scenarios) we have to replace all the stuff that HTTP gives us "for free": encryption, addressing, authentication, etc.  Because to fit with SOAP those things all have to be declarative rather than procedural, I think they feel harder than getting the same thing from procedural code. 
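To give a feel for what "writing it by hand" means, here's a simplified sketch of a WS-Security UsernameToken header, per the OASIS Web Services Security spec.  The username, nonce, and digest values are invented for illustration, and a real message would often carry XML signatures and encrypted keys as well:

```xml
<soap:Header>
  <wsse:Security
      xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
      xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
    <wsse:UsernameToken>
      <wsse:Username>someuser</wsse:Username>
      <!-- a digest of the nonce, timestamp, and password,
           rather than the password itself -->
      <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest">
        quR/EWLAV4xLf9Zqyw4pDmfV9OY=
      </wsse:Password>
      <wsse:Nonce>WScqanjCEAC4mQoBE07sAQ==</wsse:Nonce>
      <wsu:Created>2004-05-25T11:06:13Z</wsu:Created>
    </wsse:UsernameToken>
  </wsse:Security>
</soap:Header>
```

And that's just one token type.  Declarative signing and encryption add considerably more XML, which is exactly why we need the tools to catch up and emit all of this for us.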

If we are to realize the full potential of Web Services and SO, then we have to have all this infrastructure in place, to the point where it becomes ubiquitous.  Then we can take the WS-*s for granted just like we do SSL today.  Unfortunately the tools haven't caught up yet.  Three or four years ago we were writing an awful lot of SOAP and WSDL related code ourselves, and now the toolsets have caught up (mostly).  Given enough time the tools should be able to encompass the rest of the standards we need to open up all the new application scenarios. 

Steve Maine makes a good analogy to the corporate mailroom.  There's a lot of complexity, and there are many complex systems, involved in getting mail around the postal system that we don't see on a daily basis.  But they're out there nonetheless, and we couldn't get mail around without them.  When we can take SO for granted like we do the postal system, then we'll see the full potential of what SO can do for business, etc. in the real world.

Tuesday, May 25, 2004 11:06:13 AM (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  |