Dynamic Map and Pipeline Execution in a BizTalk Orchestration: A Case Study

Sometimes you just don’t know what needs to happen until it’s time for it to happen.  Most business process software requires the ability to alter its path while en route to wherever it may be going.  Recently, I worked on a project that had a similar requirement.

Here’s a very high level view of the process/requirements:

  • Data is received in the form of a message in a canonical format.
  • Based on the client identified in the message, a number of output documents need to be generated.  The number and type of outputs must be easy to change as new clients come on board or as existing clients add or change output types.
  • After the output messages are sent, the original canonical message must remain available, in its original state, for further processing.

In most cases, the ESB Toolkit would have been a great fit for this type of work, except for a couple of things: additional processing needed to take place against the original canonical message after the maps and pipelines had done their work, and the number of maps and pipelines to execute against the canonical schema was not known until runtime.

For our solution, we decided to use a single orchestration, both to maintain the state of the original message (the last requirement) and for ease of development.

Once we received the canonical message, the first step was to retrieve a list of output messages required for the given client.  Each output needed two pieces of information: the map required to extract the specific data from the canonical message, and the special pipeline required to create the flat-file or other output (as necessary).  We used the Business Rules Engine (BRE) to determine which maps and pipelines were needed for the outputs.  Creating some fairly simple rules, we generated output that conformed to the following schema definition:

[Figure: service map schema]
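The original schema image hasn't survived in this copy of the post, but as a rough sketch (all element names here are illustrative, not taken from the actual schema), the BRE-generated document was a list of entries pairing a map with a pipeline and a destination:

```xml
<!-- Illustrative only: the real element names come from the service map schema above -->
<ServiceMaps>
  <ServiceMap>
    <MapType>MyCompany.Maps.CanonicalToClientA, MyCompany.Maps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=...</MapType>
    <PipelineType>MyCompany.Pipelines.ClientAFlatFileSend, MyCompany.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=...</PipelineType>
    <Destination>\\fileshare\outbound\clientA</Destination>
  </ServiceMap>
</ServiceMaps>
```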

As a point of interest (and necessity), we did not use the CallRules shape in BizTalk to execute the policy that returned our map list document.  We used .NET code in a separate assembly accessed from within an expression shape.  For info on how to do this, see my previous post on Calling the BRE from .NET Components.

Once we knew which maps and pipelines needed to be executed, it was just a matter of doing the work.  A Loop shape was used to iterate through each required output document.

Once we retrieved the information required to identify the System.Type of the map from our list, we could run the map.  Inside a Message Assignment shape, we executed code along the following lines to create the mapped output.
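The original snippet is missing from this copy of the post, but the standard XLANG/s pattern for a dynamic transform (message, variable, and XPath names here are illustrative) looks like this:

```csharp
// Inside a Message Assignment shape (XLANG/s expression syntax).
// mapType is an orchestration variable of type System.Type; the
// fully qualified type name comes from the BRE-generated map list.
mapType = System.Type.GetType(xpath(msgServiceMap, "string(//MapType)"));

// transform() applies the map identified at runtime to the canonical message.
transform(msgMappedOutput) = mapType(msgCanonical);
```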

We then needed to execute the pipeline.  Once again, we used our BRE-generated document to tell us which pipeline to use.  After capturing the type, we executed the pipeline:
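Again, the original code didn't survive; a sketch of the usual pattern via Microsoft.XLANGs.Pipeline.XLANGPipelineManager (variable and message names are illustrative) follows:

```csharp
// Inside an Atomic Scope (XLANGPipelineManager requires one).
// pipelineType is a System.Type variable; sendPipelineInput is a
// Microsoft.XLANGs.Pipeline.SendPipelineInputMessages variable.
pipelineType = System.Type.GetType(xpath(msgServiceMap, "string(//PipelineType)"));

sendPipelineInput = new Microsoft.XLANGs.Pipeline.SendPipelineInputMessages();
sendPipelineInput.Add(msgMappedOutput);

// Runs the send pipeline (e.g. a flat-file assembler) and writes the
// result into msgPipelineOutput inside a Message Assignment shape.
Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteSendPipeline(
    pipelineType, sendPipelineInput, msgPipelineOutput);
```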

After pipeline processing, we sent the pipeline output where it needed to go through a dynamically bound port, once again using the BRE to get the information required to determine the correct destination.
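Setting the destination on a dynamic send port is a couple of lines in an Expression shape; a sketch (the port name and address are illustrative, with the address pulled from the BRE output in practice):

```csharp
// SendPort_Dynamic is a port configured with Dynamic binding in the
// orchestration designer; the address comes from the BRE-generated document.
SendPort_Dynamic(Microsoft.XLANGs.BaseTypes.Address) =
    "FILE://C:\\Outbound\\ClientA\\%MessageID%.txt";
SendPort_Dynamic(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";
```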

We then took the original message and sent it on its way for further processing.

For more info regarding dynamic map execution or calling pipelines from within an orchestration, see these MSDN articles:
Maps:  http://msdn.microsoft.com/en-us/library/aa950573(BTS.70).aspx
Pipelines: http://msdn.microsoft.com/en-us/library/aa562035(v=bts.70).aspx


About Ed Jones

Ed is a Connected Systems and .NET Specialist for RBA in the Twin Cities. Contact Ed

5 responses to “Dynamic Map and Pipeline Execution in a BizTalk Orchestration: A Case Study”

  1. Mark Brimble says:

    Did you need to cache your maps as per the msdn dynamic map article?

  2. Johann says:

    Hi Ed, did you find that there were any memory implications when you used the dynamic transformation in orchestrations? We are looking at doing this in long running orchestrations but this Microsoft article has me a bit worried – http://msdn.microsoft.com/en-us/library/aa950573(BTS.70).aspx

    Will appreciate hearing of your experience and will be sure to share mine after I do some proof of concepts myself. Cheers.

    • Ed Jones says:

      Hi Johann,

      Sorry for the delayed response. Our throughput requirements weren’t that high, so we managed very well with this pattern. We did see some memory spikes occasionally, but nothing that caused significant worry. Unfortunately, there’s not much advice I can offer for the longer running orchestrations as ours were relatively short in duration. I, too, would be very interested in hearing about how it worked out for you.
