# Friday, 09 July 2004

Dare Obasanjo posits that the usefulness of the W3C might be at an end, and I couldn't agree more.  Yes, the W3C was largely behind the standards that "made" the Web, but it has become so bloated and slow that it can't get anything done.

There's no reason why XQuery, XInclude, and any number of other standards that people could be using today aren't finished, other than the fact that the bureaucrats on the committee all want their pet feature in the spec, and the W3C process is all about consensus.  What that ends up meaning is that no one is willing to implement any of these specs seriously until they are full recommendations.  Six years now, and still no XQuery.  It's sufficiently complex that nobody is going to attempt anything beyond toy/test implementations until the spec is a full recommendation.

By contrast, the formerly-GXA, now WS-*, specs have been coming along very quickly, and we're seeing real implementations because of it.  The best thing that ever happened to Web Services was the day that IBM and Microsoft agreed to "agree on standards, compete on implementations".  That's all it took.  As soon as you get not one but two 800 lb. gorillas writing specs together, the reality is that the industry will fall in behind them.  As a result, we have real implementations of WS-Security, WS-Addressing, etc.  Those of us in the business world are still working on "Internet time"; we can't wait around 6-7 years for a real spec just so every academic in the world gets his favorite thing into it.  That's how you get XML Schema, and all the irrelevant junk that's in that spec.

The specs that have really taken off and gotten wide acceptance have largely been de facto, non-W3C-blessed specs, like SAX, RSS, SOAP, etc.  It's time for us to move on and start getting more work done with real standards based on the real world.

SOAP | Web Services | Work | XML
Friday, 09 July 2004 10:35:44 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 29 June 2004

I'm into the second week of my Web Services Theory class at OIT (Portland).  It's been a lot of fun so far.  We've gone over XML modeling, DOM, XmlTextReader, and last night some XPath/XQuery.  Not in too much depth, since what I'm really shooting for is a grounding in the idea of Web Services, rather than the technical details, but I think it's important to do some practical exercises to really understand the basics. 

Next we're on to XML Schema, then the joy that is WSDL.  I'm a little worried about WSDL.  It's a hard sell, and it takes a lot of time to explain the problems WSDL was designed to solve, problems it turned out 95% of people didn't understand or care about.  Ah well.  It's what we have for now.

 

Tuesday, 29 June 2004 14:16:38 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
I took my son backpacking this past weekend with some friends of mine and some of their boys.  It was the first time my son had been backpacking (he's 8) and the first time I've been in probably 12-13 years.  It was a great time.  We were up on the southern slopes of Mt. Hood, at Timothy Lake.  The weather was nice, not too hot.  Far enough from parking lots to cut down on the crowds, but not so far that you felt like you had to struggle to get there and back.  There are some images of the spot here.
Tuesday, 29 June 2004 14:10:45 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 24 June 2004
Jim Newkirk posts a fabulous use of the much overlooked alias feature in C# to make existing NUnit test classes compile with Team System.  That's just cool.
Thursday, 24 June 2004 11:32:55 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I started teaching a class at OIT this week on "Web Services Theory", in which I'm trying to capture not only reality, but the grand utopian vision that Web Services were meant to fulfill (more on that later).  That got me thinking about the way the industry as a whole has approached file formats over the last 15 years or so.

There was a great contraction of file formats in the early 90s, which resulted in way more problems than anyone had anticipated, I think, followed by a re-expansion in the late 90s when everyone figured out that the whole Internet thing was here to stay and not just a fad among USENET geeks.

Once upon a time, back when I was in college, I worked as a lab monkey in a big room full of Macs as a "support technician".  What that mostly meant was answering questions about how to format Word documents, and trying to recover the odd thesis from the 800k floppy that was the only copy of the 200-page paper and had somehow gotten beer spilled all over it.  (This is back when I was pursuing my degree in East Asian Studies and couldn't imagine why people wanted to work with computers all day.)

Back then, Word documents were RTF.  Which meant that Word docs written on Windows 2.0 running on PS/2 Model 40s were easily translatable into Word docs running under System 7 on Mac SEs.  Life was good.  And when somebody backed over a floppy in their VW bug and just had to get their thesis back, we could scrape most of the text off the disk even if it had lost the odd sector here and there.  Sure, the RTF was trashed and you had to sift out the now-useless formatting goo, but the text was largely recoverable.  In other sectors of the industry, files were happily being saved as CSV or fixed-length text files (EDI?), and it might have been a pain to write yet another CSV parser, but with a little effort people could get data from one place to another.

Then the industry suddenly decided that it could add lots more value to documents by making them completely inscrutable.  In our microcosm example, Word moved from RTF to OLE Structured Storage.  We support monkeys rued the day!  Sure, it made it really easy to serialize OLE embedded objects, and all kinds of neat value-added junk that most people didn't take advantage of anyway.  On the other hand, we now had to treat our floppies as holy relics, because if so much as one byte went awry, forget ever recovering anything out of your document.  Best to just consider it gone.  We all learned to be completely paranoid about backing up important documents on 3-4 disks just to make sure.  (Since the entire collection of all the papers I ever wrote in college fit on a couple of 1.4MB floppies, not a big deal, but still a hassle.)

Apple and IBM were just as guilty.  They were off inventing "OpenDoc", which was OLE Structured Storage, only invented somewhere else.  And OpenDoc failed horribly, but for lots of non-technical reasons.  The point is, the industry in general was moving file formats towards mutually incomprehensible binary formats, in part to "add value" and in part to assure "lock in".  If you could only move to another word processing platform by losing all your formatting, it might not be worth it.

When documents were only likely to be consumed within one office or school environment, this was less of an issue, since it was relatively easy to standardize on a single platform, etc.  When the Internet entered the picture, it posed a real problem, since people now wanted to share information over a much broader range, and the fact that you couldn't possibly read a Word for Windows doc on the Mac just wasn't acceptable. 

When XML first started to be everyone's buzzword of choice in the late 90s, there were lots of detractors who said things like "aren't we just going back to delimited text files? what a lame idea!".  In some ways it was like going back to CSV text files.  Documents became human readable (and machine readable) again.  Sure, they got bigger, but compression got better too, and disks and networks became much more capable.  It was hard to shake people loose from proprietary document formats, but it's mostly happened.  Witness WordML.  OLE structured storage out, XML in.  Of course, WordML is functionally RTF, only way more verbose and bloated, but it's easy to parse and humans can understand it (given time). 

So from a world of all text, we contracted down to binary silo-ed formats, then expanded out to text files again (only with meta-data this time).  It's like a Big Bang of data compatibility.  Let's hope it's a long while before we hit another contracting cycle.  Now if we could just agree on schemas...

Work | XML
Thursday, 24 June 2004 11:24:31 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 17 June 2004

Sigh.  Somewhere along the line (probably because I opened my mouth) I became installer boy for our project.  I've learned some pretty interesting things along the way, mostly having to do with how inadequate the VS.NET setup projects can be if you do anything even mildly out of the ordinary. 

We have two potential targets for our installer, a developer's machine and a production machine, and for a production machine we have to install a subset of the developer installation.  The way to accomplish such an option in a VS.NET setup project is to add a custom dialog called Radio Button (2) and then set its properties so that your text shows up in the right places on a pre-built dialog that contains nothing but 2 radio buttons.  OK, not so bad.  Then you name an environment variable that will hold the result of the radio group, let's call it SPAM.  You give the radio buttons their own values, like 1 and 2.  Again, OK, not so bad.

The part that really sucks is how you use the results of the dialog box.  For every file that I want to not be installed in production (every single file, not just folders) I have to set its Condition property to "SPAM=1".  I understand that it was probably really easy to implement, and probably meets the needs of many, but really, how lame is that.  Because the Condition property of the folder doesn't propagate to its files, I'll have to add that condition to every new file that gets added to the project.  And, no one but me will be likely to understand how the setup works without asking or doing some research.  Hmph!

On top of that, I've learned how amazingly limited the support for registering COM interop objects in .NET is.  I need to register some COM objects with Version Independent ProgIDs, so clients won't have to constantly upgrade the ProgIDs they use to find our objects.  There's absolutely no support for that in System.Runtime.InteropServices.  None.  You can't even do registration on a type-by-type basis.  You can only register all the types in an assembly at once.  Doesn't leave a lot of room for custom behavior.
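For reference, the all-or-nothing path looks roughly like this.  This is only a sketch, and the assembly file name is a hypothetical example:

    // Sketch: RegistrationServices walks every COM-visible type in the
    // assembly at once.  There's no overload that registers a single type,
    // and nothing here writes a Version Independent ProgID for you.
    using System;
    using System.Reflection;
    using System.Runtime.InteropServices;

    public class RegisterAll
    {
        public static void Main()
        {
            Assembly a = Assembly.LoadFrom("MyComInterop.dll");
            RegistrationServices reg = new RegistrationServices();
            bool registeredAny = reg.RegisterAssembly(a, AssemblyRegistrationFlags.SetCodeBase);
            Console.WriteLine("Registered any types: " + registeredAny);
        }
    }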

So, I ended up writing my own Version Independent ProgIDs to the registry in an installer class.  Not elegant, but there you have it.
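Here's roughly what that installer class looks like.  It's a sketch only; the CLSID and ProgID strings are hypothetical placeholders for whatever your COM-visible types actually use:

    // Sketch of an installer class that writes a Version Independent ProgID
    // by hand.  Substitute your component's real CLSID and ProgIDs below.
    using System.Collections;
    using System.ComponentModel;
    using System.Configuration.Install;
    using Microsoft.Win32;

    [RunInstaller(true)]
    public class ProgIdInstaller : Installer
    {
        private const string Clsid = "{11111111-2222-3333-4444-555555555555}";
        private const string ProgId = "MyCompany.MyComponent.1";
        private const string VersionIndependentProgId = "MyCompany.MyComponent";

        public override void Install(IDictionary stateSaver)
        {
            base.Install(stateSaver);

            // HKCR\MyCompany.MyComponent points at the CLSID, and CurVer points
            // at the current versioned ProgID, so clients can bind to the
            // version-independent name.
            RegistryKey key = Registry.ClassesRoot.CreateSubKey(VersionIndependentProgId);
            key.SetValue("", "My Component");

            RegistryKey clsidSub = key.CreateSubKey("CLSID");
            clsidSub.SetValue("", Clsid);
            clsidSub.Close();

            RegistryKey curVer = key.CreateSubKey("CurVer");
            curVer.SetValue("", ProgId);
            curVer.Close();

            key.Close();

            // Point the CLSID entry back at the version-independent ProgID.
            RegistryKey backRef = Registry.ClassesRoot.CreateSubKey(
                @"CLSID\" + Clsid + @"\VersionIndependentProgID");
            backRef.SetValue("", VersionIndependentProgId);
            backRef.Close();
        }

        public override void Uninstall(IDictionary savedState)
        {
            base.Uninstall(savedState);

            // Clean up the key we created, if it's still there.
            RegistryKey existing = Registry.ClassesRoot.OpenSubKey(VersionIndependentProgId);
            if (existing != null)
            {
                existing.Close();
                Registry.ClassesRoot.DeleteSubKeyTree(VersionIndependentProgId);
            }
        }
    }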

Thursday, 17 June 2004 13:37:05 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 07 June 2004

I'm a volunteer Emergency Responder for the City of Hillsboro, so I signed up for this year's CERT Rodeo, which was this past Saturday.  What a blast!  I had a really great time.  We had a really good turnout, with probably 80+ responders, plus plenty of "victims" and amateur radio operators to play along with.

It was a great chance to refresh our skills and meet people from different teams.  There were teams from all over NW Oregon and SW Washington, and they mixed us all up into different teams for the Rodeo. 

We got to spend the day searching for victims in the Portland Fire department's training tower (6 stories), doing triage, disaster medical care and transport of some really gruesomely made up victims, putting out fires, and extricating dummies from under some really #(@*$ing HEAVY actual concrete rubble. 

We also got some great support from the community, with lots of firefighters volunteering to train and help out, and the Red Cross providing snacks and drinks. 

Check out some pictures and videos of the event here.  The rodeos only happen every other year, and I'm already looking forward to the next one.

<Community service plug>If you're interested in learning how to protect yourself, your family and your neighbors in the event of a disaster or other emergency, check out CERT (Community Emergency Response Teams) programs in your area.  NW Oregon has tons of programs, as do many other parts of the country.  In the City of Hillsboro, there's a free 24-hour training course open to anyone who lives or works in western Washington County.  It's a great training program conducted by local police, fire and EMS professionals, a great way to feel more secure about yourself and your family in the event of an emergency, and a great community service opportunity.  CERT teams also get to help out with things like airshows, marathons, and other events where EMS coverage is desirable, which is a lot of fun.  FEMA also has some online training materials you can check out.</Community service plug>

Home | CERT
Monday, 07 June 2004 10:22:38 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

I took the family to see the new Harry Potter flick yesterday with about 20 friends.  What a blast.  I had no idea you could get into a theater an hour before the show these days :-).

I was pretty impressed.  It's amazing what a real director can do with a story, instead of just parroting the book for two hours.  The new one is much more like a real movie, and focuses much more on story than on special effects (although there are some cool ones, they are subtle) and cutesy elves.  It's a much darker film, and really showcases some great British actors.  I'd have to say that the kid who plays Harry has pretty much reached the limits of his acting ability, but Hermione rocks!

Well worth seeing, and much less of a kiddy film than the first two.

Monday, 07 June 2004 10:06:10 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 04 June 2004

I have been writing a lot of code lately that involves parsing external files, like XSD and WSDL files.  In my unit tests, I need to be able to read in a sample file that I can use to run the unit tests off of.  The problem is my unit tests (using NUnit) get run in several different places: my local dev sandbox, the build server, etc.  That makes it hard to know where to get the test XSD/WSDL files that I need to run the NUnit tests.  Hard coded, absolute paths don't work, because people may have their code in different places on different machines, and the tests should still pass.  Relative paths don't work either, since the test assemblies sometimes run from where VS.NET puts them (xxx/bin/Debug) and sometimes from the build directory, which is a totally different location.

The solution I finally hit upon was to use embedded resources.  Add your external file to your VS.NET project, and make its "Build Action" = "Embedded Resource".  That way, the file will get embedded into your final assembly as a "manifest resource".  

With that done, your test code can write out that embedded resource to a known location every time (like the temp directory) and use that for testing, cleaning up after itself when it's done.

In the following example, the external file is called "Example.wsdl".  It gets written out to the temp directory, then deleted when all the tests are done using the SetUp and TearDown methods.

    using System.IO;
    using System.Reflection;
    using NUnit.Framework;

    [TestFixture]
    public class TestWsdl
    {
        // The embedded Example.wsdl gets written out to a well-known path
        // in the temp directory before each test.
        private string wsdlPath = Path.Combine(Path.GetTempPath(), "Example.wsdl");
        private Wsdl wsdl = null;

        [SetUp()]
        public void Unpack()
        {
            // Pull the embedded resource out of the test assembly and copy
            // it to the temp directory.
            Assembly a = Assembly.GetExecutingAssembly();
            Stream s = a.GetManifestResourceStream("MyNamespace.Test.Example.wsdl");
            StreamReader sr = new StreamReader(s);
            StreamWriter sw = File.CreateText(wsdlPath);
            sw.Write(sr.ReadToEnd());
            sw.Flush();
            sw.Close();
            sr.Close();
        }

        [TearDown()]
        public void CleanUp()
        {
            // Remove the temp copy after each test.
            if (File.Exists(wsdlPath))
            {
                File.Delete(wsdlPath);
            }
        }

        // ... test methods that parse wsdlPath go here ...
    }
The only tricky part can be figuring out the name of your manifest resource to pass to GetManifestResourceStream.  It should be the default namespace for your project plus the filename.  The easiest way to find out what it is is to use Reflector, which lists all the resources in any given assembly.
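If you don't have Reflector handy, you can also dump the names programmatically with Assembly.GetManifestResourceNames().  This is just a throwaway sketch; the DLL path is a hypothetical example, so point it at your own test assembly:

    // List every manifest resource name in a test assembly so you can see
    // the exact string to pass to GetManifestResourceStream.
    using System;
    using System.Reflection;

    public class ListResources
    {
        public static void Main()
        {
            Assembly a = Assembly.LoadFrom(@"bin\Debug\MyNamespace.Test.dll");
            foreach (string name in a.GetManifestResourceNames())
            {
                Console.WriteLine(name);
            }
        }
    }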

Friday, 04 June 2004 10:40:24 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [2]  | 
# Wednesday, 02 June 2004

I'll be teaching CST 407 Web Services Theory at the Oregon Institute of Technology (OIT) this summer.  The class is Monday and Wednesday evenings for 4 weeks, June 21st - July 14th.  Registration is open if you are interested.  I'll be focusing on the theoretical aspects of Web Services and Service Orientation, so if you're interested in getting a good grounding in that part of Web Services, come on down!

[Update]

Here's the course description:

Web Services Theory
There has been a lot of buzz in the media of late over Service Oriented Architecture (SOA) and Web Services.  But what does "Web Services" really mean?  Why are they interesting?  What advantages do they offer to companies?  What do they do for you, the developer?


This class will start from the most basic levels of XML and proceed to the fundamentals of Web Services and SOA.  The focus is on theory, rather than practice, and although there will be practical exercises, the end goal is to understand the fundamentals of how Web Services work, how they can be used, and in what applications they are most useful.  This is the first course in a 3-course sequence.  The second course will focus on how to implement Web Services on a specific platform.

Students will leave this class with a firm understanding of how and why Web Services work, and where Web Services fit into the overall picture of modern software development.
For successful completion of this course, some knowledge of programming is required, preferably in C#/C++/Java or VB.
Wednesday, 02 June 2004 15:13:25 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [4]  |