# Monday, March 15, 2004

I found myself in the position this week of having to rewrite a bunch of XML parsing code (that I didn't write) that was all built on the DOM.  It's not that I have anything against the DOM model, but here it was overkill: the code was organized into subroutines, each of which took a string and loaded it into yet another XmlDocument instance, and in every case all the DOM was ever used for was a single XPath query via SelectSingleNode.  Pretty much a performance disaster.
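To give a concrete (and made-up) example of the pattern, each helper looked roughly like this, with the query and element names invented for illustration:

```csharp
using System.Xml;

class DomVersion
{
    // Roughly the original pattern: every call parses its input string
    // into a brand-new XmlDocument just to answer one XPath query.
    // (The query and element names are made up for illustration.)
    public static string GetCustomerName(string xml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);
        XmlNode node = doc.SelectSingleNode("/order/customer/name");
        return node == null ? null : node.InnerText;
    }
}
```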

What I found interesting is that when I changed it all to use XPathDocument/XPathNavigator instead, the performance didn't seem much better.  Granted, I didn't do a very scientific investigation: I'm running NUnit tests inside VS.NET using the NUnit-Addin, and the tests completed in about the same time before and after the change.
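For reference, the rewrite amounted to swapping the full DOM for the read-only XPathDocument; a sketch along these lines, using the same made-up query as above:

```csharp
using System.IO;
using System.Xml.XPath;

class XPathVersion
{
    // The same hypothetical helper rewritten against the lighter-weight,
    // read-only XPathDocument.  An XPathNodeIterator over Select() stands
    // in for the DOM's SelectSingleNode.
    public static string GetCustomerName(string xml)
    {
        XPathDocument doc = new XPathDocument(new StringReader(xml));
        XPathNavigator nav = doc.CreateNavigator();
        XPathNodeIterator it = nav.Select("/order/customer/name");
        return it.MoveNext() ? it.Current.Value : null;
    }
}
```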

That's not to say I'm sorry I changed the code, since it's both aesthetically more pleasing (at least to me) and has the potential to perform better on larger documents (with, I'm assuming, a lot less memory overhead).  I was just surprised that it wasn't faster.  I really should profile both cases and see what's actually going on performance-wise.  Maybe I'll get around to it eventually ;-)

In a few places where the XPath query wasn't really necessary, I changed the code to use an XmlTextReader instead, and was gratified that the NUnit tests completed in about a quarter of the time they took with the DOM.  Every little bit counts.
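Again in sketch form (same made-up element names), the reader version just scans forward and bails out at the first match, never building a tree at all:

```csharp
using System.IO;
using System.Xml;

class ReaderVersion
{
    // Forward-only version for the spots where no real XPath was needed:
    // scan to the first matching element and stop, without ever
    // materializing a document in memory.
    public static string GetCustomerName(string xml)
    {
        XmlTextReader reader = new XmlTextReader(new StringReader(xml));
        try
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element
                    && reader.Name == "name")
                {
                    return reader.ReadString();
                }
            }
            return null;
        }
        finally
        {
            reader.Close();
        }
    }
}
```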

XML