# Friday, June 04, 2004

I have been writing a lot of code lately that involves parsing external files, like XSD and WSDL files.  In my unit tests, I need to be able to read in a sample file to run the tests against.  The problem is that my unit tests (using NUnit) get run in several different places: my local dev sandbox, the build server, etc.  That makes it hard to know where to find the test XSD/WSDL files that the NUnit tests need.  Hard-coded absolute paths don't work, because people may have their code in different places on different machines, and the tests should still pass.  Relative paths don't work either, since the test assemblies sometimes run from where VS.NET puts them (xxx/bin/Debug) and sometimes from the build directory, which is a totally different location.

The solution I finally hit upon was to use embedded resources.  Add your external file to your VS.NET project, and make its "Build Action" = "Embedded Resource".  That way, the file will get embedded into your final assembly as a "manifest resource".  

With that done, your test code can write out that embedded resource to a known location every time (like the temp directory) and use that for testing, cleaning up after itself when it's done.

In the following example, the external file is called "Example.wsdl".  It gets written out to the temp directory in the SetUp method before each test, and deleted afterward in the TearDown method.

    using System.IO;
    using System.Reflection;
    using NUnit.Framework;

    [TestFixture]
    public class TestWsdl
    {
        private string wsdlPath = Path.Combine(Path.GetTempPath(), "Example.wsdl");
        private Wsdl wsdl = null;

        [SetUp]
        public void Unpack()
        {
            // Copy the embedded resource out to a known location on disk
            Assembly a = Assembly.GetExecutingAssembly();
            using(Stream s = a.GetManifestResourceStream("MyNamespace.Test.Example.wsdl"))
            using(StreamReader sr = new StreamReader(s))
            using(StreamWriter sw = File.CreateText(wsdlPath))
            {
                sw.Write(sr.ReadToEnd());
            }
        }

        [TearDown]
        public void CleanUp()
        {
            if(File.Exists(wsdlPath))
            {
                File.Delete(wsdlPath);
            }
        }

        // ... [Test] methods that parse the file at wsdlPath go here ...
    }

The only tricky part can be figuring out the name of your manifest resource to pass to GetManifestResourceStream.  It is typically the default namespace for your project plus the filename.  The easiest way to find out is to use Reflector, which lists all the resources in any given assembly.
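If you don't have Reflector handy, the assembly itself can also tell you: Assembly.GetManifestResourceNames() returns the name of every embedded resource.  A quick sketch (the class name here is just for illustration):

```csharp
using System;
using System.Reflection;

public class ResourceNameDumper
{
    public static void Main(string[] args)
    {
        // Prints every manifest resource name embedded in this assembly,
        // e.g. "MyNamespace.Test.Example.wsdl"
        Assembly a = Assembly.GetExecutingAssembly();
        foreach(string name in a.GetManifestResourceNames())
        {
            Console.WriteLine(name);
        }
    }
}
```

Run that once from your test assembly and you can copy the exact string to pass to GetManifestResourceStream.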

Friday, June 04, 2004 10:40:24 AM (Pacific Daylight Time, UTC-07:00)
# Wednesday, June 02, 2004

I'll be teaching CST 407 Web Services Theory at the Oregon Institute of Technology (OIT) this summer.  The class is Monday and Wednesday evenings for 4 weeks, June 21st - July 14th.  Registration is open if you are interested.  I'll be focusing on the theoretical aspects of Web Services and Service Orientation, so if you're interested in getting a good grounding in that part of Web Services, come on down!

[Update]

Here's the course description:

Web Services Theory
There has been a lot of buzz in the media of late over Service Oriented Architecture (SOA) and Web Services.  But what does "Web Services" really mean?  Why are they interesting?  What advantages do they offer to companies?  What do they do for you, the developer?


This class will start from the most basic levels of XML and proceed to the fundamentals of Web Services and SOA.  The focus is on theory rather than practice, and although there will be practical exercises, the end goal is to understand the fundamentals of how Web Services work, how they can be used, and in what applications they are most useful.  This is the first course in a 3-course sequence.  The second course will focus on how to implement Web Services on a specific platform.

Students will leave this class with a firm understanding of how and why Web Services work, and where Web Services fit into the overall picture of modern software development.
For successful completion of this course, some knowledge of programming is required, preferably in C#/C++/Java or VB.
Wednesday, June 02, 2004 3:13:25 PM (Pacific Daylight Time, UTC-07:00)
# Thursday, May 27, 2004

Monopoly or no monopoly, this is what put Microsoft on top and what keeps it there:

I had an incredible experience today at the sails pavilion - I asked about Speech Server...So I'm sitting at the SQL cabana and the Microsoft helper gets on the radio and asks someone if they've got the Speech Server group at their cabana. About fifteen seconds later, every MS radio in earshot lights up with "Any Speech Server expert, any Speech Server expert, we need an answer in the cabana ASAP!" Really really impressive! [Jason Fredrickson ]

Back in the dim time, I started out as a Mac developer before moving to Win32 (I never had to write a far pointer :-) ).  I was pretty up on developing for the Mac.  I went to two WWDCs in the early/mid 90's.  I was a total Apple bigot.  Why aren't I still?  Because Apple had (and probably has) a habit of completely jerking developers around, when they weren't ignoring them completely.  The barrier to entry was high.  I still have the many $100s worth of Apple Developer books that you pretty much had to buy to write for the Mac.  Apple's own development tools were way overpriced, and horribly under-useable.  Worst of all was the System 8 debacle.  I spent quite a bit of time and effort getting ready for "Copeland" which was Apple's first "System 8" replacement for their antiquated System 7.  I even went and learned Dylan, since Apple said they were going to be moving into the future with Dylan on Copeland.  (Dylan, and particularly Apple's Dylan implementation which was written in Lisp, was awesome at the time.  Everything Java brought to the table later and more useable, IMHO.)  Then Apple pulled the rug out, never shipped Copeland, or any of the DocPart stuff they were touting with IBM, killed Dylan, etc.  I think that was when they really started losing market share.  They were alienating developers at the same time that MS was coming out with Win32 and courting developers.  No matter how cool your operating system is, if nobody writes apps for it, it's not going anywhere (witness BeOS).

I'm not saying MS has never led developers astray (Cairo?) but overall they have made a concerted effort to attract developers and make them feel valued, which leads to more high quality apps being available on Windows than anywhere else. 

I've been to numerous MS conferences, and always had a good time, and more importantly I always felt like MS was seriously committed to making my life easier and showing me how to better get my job done.  That's worth a lot.

Thursday, May 27, 2004 10:04:49 AM (Pacific Daylight Time, UTC-07:00)
# Tuesday, May 25, 2004

If you've ever gotten involved in the Java vs. .NET debate (and who hasn't) check out N. Alex Rupp's blog.  He's a dyed-in-the-wool Java guy who's going to TechEd this week and talking with .NET developers and INETA people about what they like about .NET.  He has some very interesting things to say about the Java developer - .NET developer relationship.  A very fair and unbiased look at the issues and how the communities interact internally and externally.

It's very refreshing to see someone being so open and honest about the pros and cons of both platforms.  (And it's pretty courageous, given the longstanding antagonisms, for him to not only go to TechEd, but to advertise his Java-guy-ness.)

Tuesday, May 25, 2004 3:05:22 PM (Pacific Daylight Time, UTC-07:00)

Scott has some comments about WSE 2.0 (which, just in case you haven't heard yet, has RTMed) and I wanted to comment on a few things...

 Question: The Basic Profile is great, but are the other specs getting too complicated?
My Personal Answer (today): Kinda feels like it!  WS-Security will be more useful when there is a more support on the Java side.  As far as WS-Policy, it seems that Dynamic Policy is where the money's at and it's a bummer WSE doesn't support it.    
[Scott]

It's the tools that are at issue here, rather than the specs, I think.  I spent some time writing WS-Security by hand about a year ago, and yes, it's complicated, but I don't think unnecessarily so.  The problem is that we aren't supposed to be writing it by hand.  We take SSL totally for granted, but writing an SSL implementation from scratch is non-trivial.  We don't have to write SSL implementations ourselves anymore, so we can take SSL for granted.  The problem (in the specific case of WS-Security) is that we have taken it for granted as far as Web Services go.  Unfortunately, that makes the assumption that Web Services are bound to HTTP.  In order to break the dependence on HTTP (which opens up many new application scenarios) we have to replace all the stuff that HTTP gives us "for free": encryption, addressing, authentication, etc.  Because to fit with SOAP those things all have to be declarative rather than procedural, I think they feel harder than getting the same thing from procedural code. 

If we are to realize the full potential of Web Services and SO, then we have to have all this infrastructure in place, to the point where it becomes ubiquitous.  Then we can take the WS-*s for granted just like we do SSL today.  Unfortunately the tools haven't caught up yet.  Three or four years ago we were writing an awful lot of SOAP and WSDL related code ourselves, and now the toolsets have caught up (mostly).  Given enough time the tools should be able to encompass the rest of the standards we need to open up all the new application scenarios. 

Steve Maine makes a good analogy to the corporate mailroom.  There are a lot of complex systems involved in getting mail around the postal system which we don't see on a daily basis.  But they're out there nonetheless, and we couldn't get mail around without them.  When we can take SO for granted like we do the postal system, then we'll see the full potential of what SO can do for business, etc. in the real world.

Tuesday, May 25, 2004 11:06:13 AM (Pacific Daylight Time, UTC-07:00)

On a whim last weekend I spent some time trying to learn Smalltalk.  So many of the seminal thinkers around XP, etc. were originally Smalltalk heads, so I wanted to see what all the fuss was about. 

I downloaded a copy of Squeak for Windows and went through several of the tutorials.  Pretty interesting stuff, but I think I'll stick to C#.  I can see why people are hot for Smalltalk (or used to be anyway).  Because it's so rigidly OO, it forces you into doing the right things with regard to object orientation.  And the tools are pretty cool. Having such an advanced class browser and inspection system is a big advantage. 

However, I think I'll stick to statically typed languages (of which C# is currently my favorite).  I guess my overall impression of Smalltalk is that for people who were very competent, you could get a lot of work done in a very short amount of time, because the system is so flexible.  On the other hand, because the system is so flexible, I would guess that people who were less than amazingly competent (or confident) would have a very hard time getting anything done at all, because you have to understand exactly what you are doing, and many errors will only present themselves at runtime.  It would be fun to work in such a flexible system, but I really appreciate the benefits of compile-time type checking. 

At the same time, I was playing with a copy of MSWLogo (a free Logo implementation for Windows).  What a blast.  I haven't played with Logo since the Apple II days.  Once upon a time you could actually get a physical turtle that held a pen, and connected to a serial port.  You could write Logo programs to get the turtle to scoot around on a really big piece of paper.  I always thought that was a cool idea.  I was also surprised at how much Logo and Smalltalk have in common syntactically. 

I was trying to get my 8-year-old son interested in Logo, but I think he's still a little too young.  I met with a resounding "whatever, Dad".  I guess I didn't get my first Commodore PET until 6th or 7th grade. :-)

Tuesday, May 25, 2004 10:37:49 AM (Pacific Daylight Time, UTC-07:00)
# Friday, May 21, 2004

I was just reading Steven Padfield's article on unit testing ASP.NET code by creating your own HttpContext outside of IIS (which is a very useful technique) and it got me thinking about a technique that I've gotten a fair amount of mileage out of lately, namely creating my own context object that will be available anywhere in a call stack, just like the HttpContext is.

When I started looking into how to implement such a thing, I was thinking in terms of deriving from ContextBoundObject, which seemed like overkill.  So I fired up the ever-handy Reflector and found out how HttpContext handles it.  Turns out that no ContextBoundObject is needed.  Hidden in the bowels of System.Runtime.Remoting.Messaging is a class called CallContext, whose SetData(string, object) method will stick a named object value into your call context, which can be retrieved from anyplace on the current call stack.  Pretty handy.  If you wrap that in an object like HttpContext, you can store your own context values, and potentially provide context-sensitive methods such as HttpContext.GetConfig().

What you end up with is an object that looks something like this:

    using System;
    using System.Collections;
    using System.Runtime.Remoting.Messaging;

    namespace MyContext
    {
        public class ContextObject
        {
            private const string ContextTag = "MyContextObject";

            private ContextObject()
            {
            }

            /// <summary>
            /// returns a valid context object, creating one if
            /// none exists
            /// </summary>
            public static ContextObject CurrentContext
            {
                get
                {
                    object o = CallContext.GetData(ContextTag);
                    if(o == null)
                    {
                        o = new ContextObject();
                        CallContext.SetData(ContextTag, o);
                    }

                    if(!(o is ContextObject))
                    {
                        throw new ApplicationException("Corrupt ContextObject");
                    }

                    return (ContextObject)o;
                }
            }

            /// <summary>
            /// Clears out the current context. May be useful
            /// in situations where you don't have complete
            /// control over your call stack, i.e. you aren't at the top of
            /// the application call stack and need to maintain
            /// a separate context per call.
            /// </summary>
            public static void TeardownCurrentContext()
            {
                CallContext.FreeNamedDataSlot(ContextTag);
            }

            private string contextValue1;

            /// <summary>
            /// a sample value to store to/pull from context
            /// </summary>
            public string ContextValue1
            {
                get
                {
                    return contextValue1;
                }
                set
                {
                    contextValue1 = value;
                }
            }
        }
    }

You can use the context object from anywhere in your call stack, like this:

    public class Tester
    {
        public static void Main(string[] args)
        {
            ContextObject co = ContextObject.CurrentContext;
            co.ContextValue1 = "Hello World";
            OtherMethod();
        }

        public static void OtherMethod()
        {
            ContextObject co = ContextObject.CurrentContext;
            Console.WriteLine(co.ContextValue1);
        }
    }

The resulting output is, of course, "Hello World", since the context object retains its state across calls.  This is a trivial example, and you wouldn't really do it this way, but you get the idea.
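One caveat worth noting: if the code at the top of the stack gets reused across logical operations (a thread-pool worker, for example), stale state can leak from one operation into the next.  That's what TeardownCurrentContext is for.  A quick sketch, assuming the ContextObject class shown above is in scope:

```csharp
using System;
using MyContext;

public class TeardownDemo
{
    public static void Main(string[] args)
    {
        // First logical operation stores a value in the context
        ContextObject.CurrentContext.ContextValue1 = "operation 1";

        // Clear the named data slot before the next operation begins
        ContextObject.TeardownCurrentContext();

        // The next access creates a fresh context, so the old value is gone
        Console.WriteLine(ContextObject.CurrentContext.ContextValue1 == null);  // prints True
    }
}
```

Without the Teardown call, the second operation would see "operation 1" left over from the first.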
Friday, May 21, 2004 10:52:34 AM (Pacific Daylight Time, UTC-07:00)
# Thursday, May 20, 2004
Now that I think about it some more, this is a problem that WinFS could really help to solve.  The biggest reason that people don't use things like RDF is sheer laziness (you'll notice the rich RDF on my site :-) ) but if we can use the Longhorn interface to easily enter and organize metadata about content, it might be a cool way to generate RDF or other semantic information.  Hmmmm...  It would be fun to write a WinFS -> RDF widget.  If it wasn't for that dang day job...
XML | Work
Thursday, May 20, 2004 12:43:55 PM (Pacific Daylight Time, UTC-07:00)

Scott mentions some difficulty he had lately in finding some information with Google, which brings to mind the long-debated issue of the semantic web.  Scott's problem is exactly the kind of thing that RDF was meant to solve when it first came into being, lo these 6-7 years ago. 

Has anyone taken advantage of it?  Not really.  The odd library and art gallery.  Why?  Two main reasons: 1) pure laziness: it's extra work to tag everything with metadata; and 2) RDF is nearly impossible to understand.  That's the biggest rub.  RDF, like so many other standards to come out of the IETF/W3C, is almost incomprehensible to anyone who didn't write the standard.  The whole notion of writing RDF triples in XML is something that most people just don't get.  I don't really understand how it's supposed to work myself.  And, as with WSDL and other examples, the people who came up with RDF assumed that people would use tools to write the triples, so they wouldn't have to understand the format.  The problem with that (and with WSDL) is that since no one understands the standard, no one has written any usable tools either. 

The closest that anyone has come to using RDF in any real way is RSS, which has turned out to be so successful because it is accessible.  It's not hard to understand how RSS is supposed to work, which is why it's not really RDF.  So attaching some metadata to blog content has turned out to be not so hard, mostly because most people don't go beyond a simple category, although RSS supports quite a bit more. 

The drawback to RDF is that it was created by and for librarians, not web page authors (most of whom aren't librarians).  Since most of us don't have librarians to mark up our content with RDF for us, it just doesn't get done.  Part of the implicit assumption behind RDF and the semantic web is that authoritative information only comes from institutional sources, who have the resources to deal with semantic metadata.  If blogging has taught us anything, it's that that particular assumption just isn't true.  Most of the useful information on the internet comes from "non-authoritative" sources.  When was the last time you got a useful answer to a tech support problem from a corporate web site?  The tidbit you need to solve your tech support problem is nowadays more likely to come from a blog or a USENET post than from the company who made the product.  And those people don't give a fig for the "semantic web". 


Work | XML
Thursday, May 20, 2004 12:29:22 PM (Pacific Daylight Time, UTC-07:00)
# Friday, May 14, 2004

[Update] I watched it again with my wife last night, and liked it just as much.  Definitely a movie I could watch many times.

OK, my fears were unfounded.  Troy was pretty darned good.  I was impressed.  It's 2:43 long, and it went by in a flash.  I was surprised when it was over.  Totally didn't seem that long. 

The CG was very judiciously used to give scale to the sets, and the special effects in general were not overdone, which I really appreciated.  The acting was solid, and Brad Pitt actually was pretty good as Achilles. 

Possibly most impressive was the fight choreography.  The final battle between Hector and Achilles was very well staged, and excellently filmed.  One of the best fight scenes I've seen in a really long time.  Even the mass battles were well filmed, and done in such a way that it didn't come across as just another Braveheart ripoff.  There were definitely some liberties taken with Homer (as Mr. Cranky notes), but I'm pretty sure he'd still recognize the story.  The romance with Briseis was perhaps a bit overdone, but it didn't detract that much, I thought. 

I find myself really hoping that the film does well enough to warrant them making the Odyssey next. 

Scott points out an interesting new unit of measurement. :-)

Friday, May 14, 2004 3:18:25 PM (Pacific Daylight Time, UTC-07:00)