BizTalk

BizTalk Implementation & Tuning Best Practices

Earlier today I attended an MSDN webcast on Implementation & Tuning given by two of the Microsoft BizTalk Rangers, Jeff Nordlund and Petr Kratochvil.  The Rangers (I had no idea what they did) are a group of product managers that assist with customer engagements, usually through MCS if I’m reading it right.  They just started a new blog, which I have since subscribed to.


Although the presentation became a bit rushed towards the end due to time constraints, it was very valuable.  They covered versioning, error handling, and tuning.  The first two were pretty much review, although I did pick up a few new nuggets.  Tuning, however, had a lot of good info.  Hopefully MSDN will post the recorded version, or at least allow you to download the presentation through the above link.  If not, let me know and I’ll post the presentation.

Version nuggets: 


  1. Plan for upgrades during design.

  2. They recommended adding the version number to a schema namespace.  I’ve seen this practice before, although I haven’t done it myself.

  3. Consider breaking a long running orchestration into multiple shorter running orchestrations to allow for more insertion points if code needs to be fixed.  For instance, if you have a long running process and discover a step that is broken, you can fix it and redeploy that piece of the process so that existing instances can use the new code.

  4. There’s no way to version pipeline components.  I guess I knew this since they’re just DLLs on the file system, but it didn’t hit me until I heard him say it.
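
The namespace trick in item 2 might look something like this (the namespace and schema contents are made-up examples, not from the presentation).  As I understand it, BizTalk identifies a message type by its target namespace plus root element name, so embedding the version in the namespace lets old and new schema versions be deployed side by side:

```xml
<!-- Illustrative schema: the version lives in the target namespace,
     so a v2.0 schema gets a distinct message type and can coexist
     with this one. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://contoso.example/schemas/Order/v1.0"
           xmlns="http://contoso.example/schemas/Order/v1.0"
           elementFormDefault="qualified">
  <xs:element name="Order">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="OrderId" type="xs:string" />
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```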

Error Handling nuggets:



  1. Decide when to suspend messages in your orchestration and plan for it.

  2. Build in resubmit points in case of failure.  This is something I haven’t planned for much, but should start.

  3. Make sure you have an error reporting and error fixing strategy in place.  Errors will happen and although they can’t all be anticipated, with a good process in place they can be managed when they do occur.

Tuning nuggets:



  1. When calling stored procedures, make sure sprocs will not cause locking.

  2. Petr gave some rough numbers without specifying the hardware or testing process he used, but looking at throughput he saw about a 65% drop in performance when starting another orchestration vs. calling an orchestration from the parent orchestration.  That was interesting to me.

  3. Performance is lost mainly through persistence points.  So if you’re observing poor performance, review the deterministic rules he gave in the presentation and make sure you’re not breaking any of them.  Chances are, there are some performance gains in there for you.

  4. Don’t use an XML pipeline on a send port, unless you are demoting properties back into the content of the data.

  5. Plan for testing, and make sure the process is tested end-to-end in a real life scenario.  If the process is built to run over two weeks, test a run over two weeks.

I’ve saved the best for last.  At least it’s the best one for me!  I’ve been looking into some threading issues where an orchestration is calling an external library, which then calls a web service.  That web service call is the cause of the thread issues: with a lot of concurrent processes, the thread pool runs out of threads.  I’m planning on posting on this issue in detail, so I’ll leave it at that for now.  For a solution, I’ve tried messing with the machine.config values to modify the thread pool.  That didn’t work.  The only thing I found that did work was reducing the LowWatermark and HighWatermark to a ridiculously low number.  I asked Petr about this issue and he pointed me to some documents (which I have yet to find in my two-minute search, but he said they exist) that specify how to access the CLR .NET hosting keys in the registry.  Apparently, there are entries in those keys that show you the thread settings for each domain host.  Sounds very promising!  Like I said, I’ll post more detail on that once I get it all worked out.
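
For the record, the machine.config change I tried was along these lines (values are illustrative, not a recommendation).  My guess as to why it didn’t work: these processModel settings size the thread pool for the ASP.NET worker process, not for the BizTalk host process making the web service call:

```xml
<!-- machine.config fragment; thread counts here are illustrative.
     processModel governs the ASP.NET worker process thread pool,
     which likely explains why changing it had no effect on my
     orchestration host. -->
<system.web>
  <processModel maxWorkerThreads="100"
                maxIoThreads="100" />
</system.web>
```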


All in all, a well spent hour of my Friday.  And now…drinks!!

Personal

March Madness…Madness!!

My bracket is more or less already shot to hell, but there are some serious games going on!  I think tonight’s games are the two best I’ve seen played on the same night. 


First, Texas beats W.V. on a last-second three after W.V. goes ahead on a three of their own with something like 5 seconds left. 


Then UCLA beats Gonzaga in what has to be one of the best games this year.  Gonzaga led all game, until UCLA went ahead on a steal and bucket with something like 8 seconds left.  The Zags had a chance with a last-second shot but no such luck.  The result was Morrison crying in the middle of the floor (he actually started crying BEFORE the game was even over).  Granted, he’s a great college player, but, man, is he easy to not like!


I can’t wait for tomorrow night’s game…although I don’t think they’ll live up to tonight!  Go UConn!

.NET, Windows Workflow

ASP.NET/WF Integration

Along the lines of my previous post, and discovered through the same source (ServerSide.net), Jon “stole” an idea I had on how to use WF in an ASP.NET app to manage page flow.  I say “stole” because he had no idea that I’ve had the same thoughts, let alone an idea of who I am!  Anyway, it looks like a great starting point to implement a workflow to manage page flow.  The main key to the solution is that the workflow and ASP.NET app have no idea about each other.  I like that.


When I get some time I’ll see if I can build on his solution to do things like have alternate storage areas (other than Session) and optional integration with a security model, like the Microsoft Application Blocks, to route the workflow based on permissions.

.NET, Windows Workflow

Debugging WF Workflows Hosted in ASP.NET

I came across these tips on Debugging WF-based workflows hosted in ASP.NET from a link on ServerSide.net. Chances are, I would’ve struggled with this for a few hours before figuring it out. Thanks Christian!

.NET

VS2003 & GAC Impact on References

I haven’t played around enough with Visual Studio 2005 to know if this issue is resolved, but I hope it is.  This is becoming more and more of a pain for me, although I think I have finally figured out the root cause (which is why I’m now writing about it).


Imagine you have a console app you are developing in VS2003.  This app has a file reference to a library, say reflib.dll, with a version of 1.0.0.0, a file version of 1.0.5.0, and its Copy Local property set to True.  You develop away earning your substantial hourly rate, build the app, test locally, and everything works great.  Time to move to a test environment, so you package it up, deploy the contents of the local build directory to the test server, and testing begins.


Bam!  It blows up saying it can’t find a reference.  You dig a little deeper and find the missing reference is a dependency of reflib.dll, say reflib2.dll, with a version of 2.0.0.0 and a file version of 2.0.13.0.  You look all over the test server and it’s nowhere to be found.  You say, “Hey, I copied the working version from my local build directory, so it must be there and I missed it during the deployment.”  So you look in your local build directory and it’s not there.  What the hell?  How did the local copy work if reflib2.dll isn’t there?


I purposely did not mention that earlier you were working on another project that required reflib2.dll to be GAC’d.  That older project was developed, tested, and deployed…but to a different server than the test server for the app you’re working on now.


Anyone know the cause yet?


Yep…damn GAC.  When VS2003 builds a project whose references depend on other assemblies that are not directly referenced, it does not copy those dependencies locally if they are in the GAC.  Even if Copy Local is set to True for the reference.


Digging a little deeper, the issue isn’t quite that cut and dried.  Even though the CLR does not care about File Version values, apparently VS2003 does.  Meaning if the version of reflib2.dll in the GAC is 1.0.0.0 with a file version of 1.0.5.0, VS2003 does not copy the file locally.  If you keep that version in the GAC but update the referenced copy to have a version of 1.0.0.0 with a file version of 1.0.6.0, VS2003 WILL copy the file locally.
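
To be clear about which value is which: the two versions come from separate attributes in AssemblyInfo.cs (numbers taken from the example above):

```csharp
using System.Reflection;

// AssemblyVersion is the version the CLR uses for binding and GAC identity.
[assembly: AssemblyVersion("1.0.0.0")]
// AssemblyFileVersion is ignored by the CLR at bind time,
// but apparently not by VS2003's copy-local logic.
[assembly: AssemblyFileVersion("1.0.6.0")]
```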


So, solutions:
1)  Remove the version from the GAC when building an app that needs a local copy.
2)  Add a post-build xcopy event to the project to manually copy the dependency to the build directory.
3)  Use a continuous build tool (CruiseControl is what we’re using right now) to perform the build on another server where the assembly is not in the GAC.
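
For option 2, the post-build event is just an xcopy entered in the project’s Build Events property page, something like this (the lib folder path is illustrative):

```
xcopy "$(SolutionDir)lib\reflib2.dll" "$(TargetDir)" /Y
```

$(SolutionDir) and $(TargetDir) are the standard VS build macros, and /Y suppresses the overwrite prompt so the build doesn’t hang waiting for input.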


So far I’m sticking with option 2.  Option 3 sounds good as long as the tool can manage it.  I really don’t know, but will look into it a bit more later.


But wait, there’s more!  Ok, you’ve got all that figured out and added a post-build event.  Now you decide to create a Deployment package (MSI) that includes this application.  So you create the project and add the project output to the package.  Guess what?  Same issue.  Actually, it’s a bit worse, since the best the deployment package can do is pull the build output.  Files placed in the build directory by a post-build xcopy are not part of the build output, so the MSI does not contain them.  Which means the dependencies do not get deployed with the application.  So now we’re back to manually including the dependent files in the deployment package through a direct file add.  Not very scalable.


(Edit)
But wait, there’s more, Part II!!  If you’re working with a web project, you can’t assign post-build events to the project.  What I’ve thought of so far is creating a dummy project for your web project to reference, and performing the post-build event in that project.  Risky and a hack, yes.  Other than that, manually copy the files.  That’s what I’m sticking with for now…I’ve spent too much time on this as it is.