# Wednesday, 28 May 2003


The constant point-and-click of the mouse can be a real drag. One company has developed products that sense hand movements to give computer commands, creating input devices that it hopes will replace the mouse. By Katie Dean.
[Wired News]


This is a great idea, and I’d be all over it if it weren’t quite so spendy. I think in the long run this kind of gestural interface could really win out, since you can encode a lot of information in gestures that would be harder, or slower, to convey with other input methods.

Wednesday, 28 May 2003 13:10:19 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 27 May 2003

I’m back home from New Orleans after watching my wife Vikki win a bronze medal at the US Tae Kwon Do Senior Nationals.  Woohoo!

A good time was had by all.  The competition is pretty amazing.   

Tuesday, 27 May 2003 15:11:37 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Friday, 16 May 2003

There's been a great deal of hullabaloo in the last week or so about blogging “ruining the Internet for everyone” and Google segregating hits from blogs into their own section (good or bad?). I realize that since I'm posting this to a blog, this sounds a bit self-serving, but here's my two cents worth:

While it's true that blogging has lowered the bar for access to web publishing, the simple fact is that ever since there was an Internet, anyone who wanted to and had access to a keyboard could post whatever drivel they wanted to the web. All blogging really adds to the mix is that now you don't even need a rudimentary knowledge of HTML (or of how to save your Word doc as HTML) in order to publish yourself. While that means more volume, it doesn't really change the nature of the data on the web.

The real “problem” as far as Google is concerned is that the self-referential nature of blogging upsets their ranking algorithm. This has apparently led people (like Larry Lessig) to conclude that blogging is ruining the nature of content on the web.

I would argue that there's nothing about blogging that changes the need for critical thinking when looking for information on the web. That's always been true, since there's fundamentally no concrete way to verify the validity of anything you read on any site without thinking critically about the nature of the source, how you came across it, and what references it makes to other works. If you apply that kind of filter, then the self-referential or “incestuous” nature of blogs can be used to advantage.

For example, if I'm looking for interesting information about SOAP, XML Web Services, or how either or both relate to .NET I'd assume that Don Box (for example) is a reliable source, given that I've read his books, articles and speeches in other contexts. If he mentions something in his blog that someone else said about .NET on their blog, I would assume a high degree of certainty that the reference is relevant and useful. Then, when I follow that link, I'll find more links to other blogs that contain relevant and useful information. The tendency is for all those cross links to start repeating themselves, and to establish a COMMUNITY wherein one can assume a high degree of relevant information. All that's needed to validate the COMMUNITY as a whole is a few references from sources that are externally trusted.

In the long run, I think that kind of “incestuousness” can be used to validate, rather than discount, large bodies of interesting and useful information that we wouldn't otherwise have access to.

That's a fairly lengthy rant for me, but I had to get that off my chest. I hate to see pundits discounting a body of information just because it's posted casually and not in academic journals.

Friday, 16 May 2003 23:22:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
I'll be giving a presentation on "XML on Large Power Transformers and Substation Batteries" at the Applied XML Developers Conference 2003 West, on July 10th and 11th in Beaverton, OR. Register now at www.sellsbrothers.com/conference.
It looks like there are going to be some really interesting sessions, and tickets reportedly go fast, so sign up now.
I'm really interested in hearing "SOAP, it wasn't Simple, we didn't Access Objects and its not really a Protocol", and "A Steady and Pragmatic Approach to Dynamic XML Security". Cool stuff.
Friday, 16 May 2003 22:59:56 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 

Sean Campbell and Scott Swigart have some pretty strong words about Access vs. MSDE. This is a subject I hadn't spent much time thinking about until pretty recently, when I had the opportunity to give some presentations at an Access users' group conference (PAUG).

Being pretty much a dyed-in-the-wool SQL Server enthusiast (bigot?), I hadn't realized how many people out there are writing useful, complex Access applications that solve real business problems today. I also hadn't realized that most of them don't want anything to do with MSDE. As Sean and Scott put it, it's just too much work: it's too hard for users to install, not easy to get configured properly, etc. Also, the SQL that MSDE/SQL Server supports differs enough from Access SQL that moving over is, in many cases, not a minor bit of tweaking but a full-fledged porting activity, Upsizing Wizard or no.

So up until fairly recently, I would have been just as dismissive as Don Box was of their Access rant, but now I'm not so sure. I still haven't changed my feelings about the nature of Access (I still would use MSDE myself) but I have a much more profound understanding of the nature of the Access developer community, and how threatening rather than enabling MSDE looks to them.

Friday, 16 May 2003 19:06:06 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 14 May 2003
Scott has posted a very insightful article on the nature of the CLR as it relates to the underlying Windows platform, and how that's different from the way the Java VM works.
Keep up the good work, Scott!
Wednesday, 14 May 2003 13:14:41 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Tuesday, 13 May 2003
It's amazing how little things have changed on the web when you really look at the details. Lots of things that seemed like a good idea to a few people 5-6 years ago now seem like a good idea to a whole bunch of people. Scott Hanselman was musing about whatever happened to PointCast, which was quite the technology back in its day. In response, Don Box supplied a little piece of CDF, the PointCast equivalent used by IE 4.0. Sure looks a lot like RSS to me, without all the namespaces :)
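To see what I mean, here's a minimal CDF channel, reconstructed from memory of the IE 4.0 format (URLs and titles are made up, and I may have the optional attributes wrong, so treat it as illustrative):

```xml
<?XML version="1.0"?>
<!-- A CDF channel: a channel containing items, much like RSS's channel/item -->
<CHANNEL HREF="http://example.com/index.html">
  <TITLE>Example Channel</TITLE>
  <ABSTRACT>Push-era syndication, circa IE 4.0</ABSTRACT>
  <ITEM HREF="http://example.com/story1.html">
    <TITLE>First Story</TITLE>
    <ABSTRACT>A single entry in the channel</ABSTRACT>
  </ITEM>
</CHANNEL>
```

Swap the element names for lowercase `channel`/`item`/`title`/`description` and you're most of the way to an RSS 0.91 feed.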
Tuesday, 13 May 2003 14:05:59 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Monday, 12 May 2003

It seems like every time I come across something in the framework documentation that says something along the lines of "this works just fine unless...", my application is firmly in the "unless" category :)

There is a new security setting in 1.1, related to serializing objects over remoting, that is much more restrictive than the 1.0 framework was. I recompiled my app against 1.1, fixed all the compile-time bugs, tracked down all the dependencies, and then the real fun began: finding the runtime bugs.

It turns out that there is a new setting on the remoting formatters that, by default, won't deserialize your objects unless they fall into a certain category of "safe" objects. Mine apparently don't, although going over the list on GotDotNet I don't see how my objects fail to qualify as "low". Anyway, for whatever reason they don't, so I have to prompt the remoting formatters to use "full" deserialization. (Low vs. Full???) Then everything works hunky-dory. I wish the list of conditions were a little more exhaustive. The only things listed as definitely requiring "full" support are sending an ObjRef as a parameter, implementing ISponsor, or being an object inserted into the remoting pipeline by IContributeEnvoySink. I tried this with two completely different parts of my application, neither of which meets those criteria, and still had to use "full" support. Hmmmm.
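For anyone hitting the same wall: the knob in question is the formatter's typeFilterLevel, which you can flip back to the 1.0 behavior in your remoting config. Something like this (the channel type and port here are just placeholders for whatever your app actually uses):

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <channels>
        <channel ref="tcp" port="8989">
          <serverProviders>
            <!-- "Low" is the new 1.1 default; "Full" restores
                 unrestricted (1.0-style) deserialization -->
            <formatter ref="binary" typeFilterLevel="Full" />
          </serverProviders>
        </channel>
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```

The same attribute works on the soap formatter, and you can set the equivalent property (TypeFilterLevel) on a BinaryServerFormatterSinkProvider if you configure channels in code.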

Live and learn I guess.
Monday, 12 May 2003 20:01:17 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Wednesday, 07 May 2003
Here's an interesting scenario:
I started out with an XSD schema; let's call it myObject.xsd. Using xsd.exe, I generated a set of classes to represent the objects in the schema. Straightforward so far...

This part works fine, and I can read and write the objects using the XmlSerializer just as I would expect. The issues started when I tried to get SOAP and WSDL involved: I wanted to return one of the objects defined in myObject.xsd (and correspondingly in myObject.cs) from a Web Service method.

   [WebMethod()]
   public myObject ReturnMyObject() {}

The WebService class has an attribute specifying its namespace:

   [WebService(Namespace="http://me.com/MyNamespace")]
   public class MyWebService : WebService {}

When I try to create a proxy class for the web service using either wsdl.exe, or VS.NET 2003, I get an error claiming that the type myObject is not defined, the schema can't be validated, and no classes will be generated.

It turns out that the crux of the matter was that when I wrote myObject.xsd, I didn't explicitly define a targetNamespace, so when xsd.exe generated the classes, it added an XmlRootAttribute(Namespace=""). That conflicts with the namespace specified for the service, so when the framework generates the WSDL, it's no wonder the schemas don't all line up.

Once I went back and fixed the targetNamespace in the schema, everything worked just fine. It makes sense that this would be an issue, but it certainly took a while to track down.
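In other words, the schema needs to declare the same namespace the service uses. Reusing the namespace from the example above (the element contents are just a stand-in for whatever myObject really contains):

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://me.com/MyNamespace"
           xmlns="http://me.com/MyNamespace"
           elementFormDefault="qualified">
  <!-- With targetNamespace set, xsd.exe emits
       [XmlRoot(Namespace="http://me.com/MyNamespace")]
       on the generated class instead of Namespace="" -->
  <xs:element name="myObject">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="name" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

With that in place, the types in the generated WSDL and the types in the schema agree, and wsdl.exe is happy again.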

Wednesday, 07 May 2003 19:00:28 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
A fabulous piece on async calls in .NET from Chris Brumme. The stuff that guy has in his head is truly amazing.
Wednesday, 07 May 2003 13:52:17 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
# Thursday, 01 May 2003
There's a new spec out from the OpenGIS Consortium for defining an XML representation of sensor data. It's pretty verbose, but most of it is optional. I would love to see something like this take off as a standard way to get data from sensors. There are a couple of existing standards for sensor data (Modbus, UCA, DNP, etc.), but they are all binary. A fairly complete XML-based standard would make our (or at least my) lives much easier.
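To be clear about what the XML buys you, here's the flavor of the thing. This is NOT the OGC syntax, just a made-up sketch of a self-describing reading, which is exactly what a binary register map from Modbus or DNP can't give you without out-of-band documentation:

```xml
<!-- Hypothetical, not the OGC schema: names, units, and timestamps
     travel with the data instead of living in a device manual -->
<observation sensor="transformer-17" time="2003-05-01T18:30:00Z">
  <measurement name="topOilTemperature" units="degC">67.4</measurement>
  <measurement name="loadCurrent" units="A">212</measurement>
</observation>
```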

Now we'll just have to wait and see if it takes them as long to finish as XLink is taking :).

It's nice to see that they used the XML Schema diagramming in XML Spy for their schemas. It's the best one I've seen, and makes it much easier to follow.

Thursday, 01 May 2003 18:38:56 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  | 
I'm working on designing a SOAP API for getting data from some monitors out in the field back to a central server farm for aggregation, and the more I get into the details the more interesting it gets...
One of the biggest restrictions is that the monitors may well get deployed behind someone's firewall. All of the typical SOAP examples are between what are essentially peers on the internet, e.g. two web servers exchanging business data. In this case, since most people aren't going to punch a hole in their firewall for the monitor, we have to assume that all traffic has to originate from the monitor.
This ends up having a big impact on the semantics of the API, since if the central server wants to do something like send new configuration information to the device, it can't send it directly, which means waiting until the monitor "phones home" and asks for updates. This in turn means that the server has to cache any data going down to the monitor until the monitor checks in, and so on.
Anyway, the design is still ongoing, but having to model a SOAP API where every exchange originates from one side, yet represents a two-way conversation, presents some unique challenges.
There are even more issues when one side of the equation happens to be running on an embedded platform with 16MB of RAM :) but more on that some other time.
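One way to model the "monitor always originates" constraint is to fold the downstream traffic into the response of the upload call: the monitor checks in on a schedule, pushes its readings, and in the same round trip picks up whatever the server has queued for it. A rough ASMX-style sketch of that shape (all of the names here are hypothetical, and Store/DequeuePending stand in for whatever persistence the server really uses):

```csharp
[WebService(Namespace="http://example.com/monitoring")]
public class MonitorGateway : WebService
{
    // The monitor calls this periodically from behind its firewall.
    // Uploaded readings go into the aggregation store, and any commands
    // the server has been holding (e.g. new configuration) come back
    // in the response, since the server can never call the monitor.
    [WebMethod]
    public ServerCommand[] CheckIn(string monitorId, Reading[] readings)
    {
        Store(monitorId, readings);       // aggregate the uploaded data
        return DequeuePending(monitorId); // drain the queued downstream work
    }
}
```

The cost of this design is latency: a command sits in the queue until the next check-in, so the polling interval becomes a tuning knob between responsiveness and traffic.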
Thursday, 01 May 2003 17:06:30 (Pacific Daylight Time, UTC-07:00)  #    Disclaimer  |  Comments [0]  |