BizTalk Implementation & Tuning Best Practices

Earlier today I attended an MSDN webcast on Implementation & Tuning given by two of the Microsoft BizTalk Rangers, Jeff Nordlund and Petr Kratochvil.  The Rangers (I had no idea what they did) are a group of product managers who assist with customer engagements, usually through MCS if I'm reading it right.  They just started a new blog, which I have since subscribed to.

Although the presentation became a bit rushed towards the end due to time constraints, it was very valuable.  They covered versioning, error handling and tuning.  The first two were pretty much review, although I did pick up a few new nuggets.  Tuning, however, had a lot of good info.  Hopefully MSDN will post the recorded version, or at least allow you to download the presentation through the above link.  If not, let me know and I'll post the presentation.

Versioning nuggets: 

  1. Plan for upgrades during design.

  2. They recommended adding the version number to a schema namespace.  I’ve seen this practice before, although I haven’t done it myself.

  3. Consider breaking a long running orchestration into multiple shorter running orchestrations to allow for more insertion points if code needs to be fixed.  For instance, if you have a long running process and discover a step that is broken, you can fix it and redeploy that piece of the process so that existing instances can use the new code.

  4. There is no way to version pipeline components.  I guess I knew this, since they're just DLLs on the file system, but it didn't hit me until I heard him say it.
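
A versioned schema namespace (nugget #2) looks something like the sketch below.  The namespace URI and element names here are just illustrative examples of the practice, not something from the talk:

```xml
<!-- Illustrative only: bake the major.minor version into the target namespace -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns="http://contoso.com/schemas/PurchaseOrder/v1.0"
           targetNamespace="http://contoso.com/schemas/PurchaseOrder/v1.0"
           elementFormDefault="qualified">
  <xs:element name="PurchaseOrder">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="OrderId" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

Since BizTalk resolves messages by message type (target namespace plus root element name), bumping the namespace to a v2.0 variant lets old and new message formats flow through the same group side by side during an upgrade.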

Error Handling nuggets:

  1. Decide when to suspend messages in your orchestration and plan for it.

  2. Build in resubmit points in case of failure.  This is something I haven’t planned for much, but should start.

  3. Make sure you have an error reporting and error fixing strategy in place.  Errors will happen and although they can’t all be anticipated, with a good process in place they can be managed when they do occur.

Tuning nuggets:

  1. When calling stored procedures, make sure the sprocs will not cause locking (i.e., hold locks that block other concurrent calls).

  2. Petr gave some rough numbers without specifying the hardware or the testing process he used, but looking at throughput he saw about a 65% drop in performance when starting another orchestration (the Start Orchestration shape) vs. calling an orchestration from the parent orchestration (the Call Orchestration shape).  That was interesting to me.

  3. Performance is lost mainly through persistence points.  So if you’re observing poor performance, review the deterministic rules he gave in the presentation and make sure you’re not breaking any of them.  Chances are, there are some performance gains in there for you.

  4. Don’t use an XML pipeline on a send port, unless you are demoting properties back into the content of the data.

  5. Plan for testing, and make sure the process is tested end-to-end in a real life scenario.  If the process is built to run over two weeks, test a run over two weeks.

I’ve saved the best for last.  At least it’s the best one for me!  I’ve been looking into some threading issues where an orchestration is calling an external library, which then calls a web service.  That web service call is the cause of the thread issues: with a lot of concurrent processes, the thread pool runs out of threads.  I’m planning on posting on this issue in detail, so I’ll leave it at that for now.  For a solution, I’ve tried messing with the machine.config values to modify the thread pool.  That doesn’t work.  The only thing I found that did work was reducing the LowWatermark and HighWatermark to a ridiculously low number.  I asked Petr about this issue and he pointed me to some documents (which I have yet to find in my two-minute search, but he said they exist) that specify how to access the CLR .NET hosting keys in the registry.  Apparently, there are entries in those keys that show you the thread settings for each domain host.  Sounds very promising!  Like I said, I’ll post more detail on that once I get it all worked out.
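
For context, the machine.config knobs in question are the thread-pool attributes on the processModel element; a sketch, with illustrative values, is below.  Note that processModel governs the ASP.NET worker process, which may be part of why tweaking it had no effect on the BizTalk host:

```xml
<!-- machine.config fragment; attribute values are illustrative, not recommendations -->
<system.web>
  <processModel maxWorkerThreads="100" maxIoThreads="100" />
</system.web>
```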

All in all, a well spent hour of my Friday.  And now…drinks!!
