Dynamic Map and Pipeline Execution in a BizTalk Orchestration: A Case Study
Sometimes you just don’t know what needs to happen until it’s time for it to happen. Most business-process software needs the ability to alter its path while en route to wherever it is going. Recently, I worked on a project with exactly that requirement.
Here’s a very high level view of the process/requirements:
- Data is received in the form of a message in a canonical format.
- Based on the client identified in the message, a number of output documents need to be generated. The number and type of outputs must be easy to change as new clients come on board or as existing clients add or change output types.
- After the output messages are sent, the original canonical message must still be available for further processing.
In most cases, the ESB Toolkit would have been a great fit for this type of work, except for a couple of things: additional processing needed to take place against the original canonical message after the maps and pipelines had done their work, and the number of maps and pipelines to be executed against the canonical schema was not known until runtime.
For our solution we decided to use a single orchestration, both to maintain the state of the original message (the last requirement) and for ease of development.
Once we received the canonical message, the first step was to retrieve a list of the output messages required for the given client. Each output needed two pieces of information: the map required to extract the specific data from the canonical message, and the pipeline required to produce the flat-file or other output format, as necessary. We used the BRE to determine which maps and pipelines were needed for the outputs. Creating some fairly simple rules, we generated output that conformed to the following schema definition:
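The original schema isn’t reproduced here, so the instance document below is a purely hypothetical illustration of the shape such a map list might take: a repeating Output element carrying the fully qualified .NET type names of the map and the pipeline (all element, namespace, and type names are invented).

```xml
<!-- Hypothetical illustration only; names are invented, not the project's schema. -->
<MapList>
  <Output>
    <MapType>MyCompany.Maps.CanonicalToClientA_Orders, MyCompany.Maps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123def4567890</MapType>
    <PipelineType>MyCompany.Pipelines.ClientAFlatFileSend, MyCompany.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123def4567890</PipelineType>
  </Output>
  <Output>
    <MapType>MyCompany.Maps.CanonicalToClientA_Invoices, MyCompany.Maps, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123def4567890</MapType>
    <PipelineType>MyCompany.Pipelines.ClientAXmlSend, MyCompany.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abc123def4567890</PipelineType>
  </Output>
</MapList>
```

Carrying assembly-qualified type names in the message means the orchestration can resolve each map and pipeline with System.Type.GetType at runtime, with no compile-time reference to client-specific artifacts.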
As a point of interest (and necessity), we did not use the Call Rules shape in BizTalk to execute the policy that returned our map-list document. We used .NET code in a separate assembly, accessed from within an Expression shape. For info on how to do this, see my previous post on Calling the BRE from .NET Components.
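As a rough sketch of what such a .NET helper might look like (the policy name, document type, and class names here are assumptions, not the actual project code), the BRE can be invoked directly through the Microsoft.RuleEngine assembly:

```csharp
using System.Xml;
using Microsoft.RuleEngine; // referenced from the BizTalk installation

// Hypothetical helper: executes a BRE policy against the canonical message
// and returns the document that the rules populated with the map list.
public static class RulesHelper
{
    public static XmlDocument GetMapList(XmlDocument canonicalDoc)
    {
        // The document type must match the fully qualified schema name
        // used in the rule definitions; this one is invented.
        var fact = new TypedXmlDocument("MyCompany.Schemas.Canonical", canonicalDoc);

        using (var policy = new Policy("GetClientOutputs")) // hypothetical policy name
        {
            policy.Execute(fact);
        }

        return canonicalDoc;
    }
}
```

Because the helper is plain .NET, it can be unit-tested outside the orchestration, which is part of why we preferred it over the Call Rules shape.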
Once it was known which maps and pipelines needed to be executed, it was just a matter of doing the work. A looping structure was used to iterate through each output document required.
Once we had retrieved the information identifying the System.Type of the map from our list, we could run the map. Inside a Message Assignment shape we executed the following code to create the mapped output.
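In XLANG expression syntax, the pattern looks roughly like this (the variable and message names are mine, not the project’s). The transform runs inside a Construct Message / Message Assignment shape, and the send-pipeline call must sit inside an atomic scope:

```csharp
// Message Assignment shape: dynamic transform.
// mapTypeName holds the assembly-qualified type name returned by the BRE;
// mapType is an orchestration variable of type System.Type.
mapType = System.Type.GetType(mapTypeName);
transform (OutputMsg) = mapType (CanonicalMsg);

// Second Message Assignment shape: dynamic send-pipeline execution.
// pipelineInput is a variable of type Microsoft.XLANGs.Pipeline.SendPipelineInputMessages.
pipelineType = System.Type.GetType(pipelineTypeName);
pipelineInput = new Microsoft.XLANGs.Pipeline.SendPipelineInputMessages();
pipelineInput.Add(OutputMsg);
Microsoft.XLANGs.Pipeline.XLANGPipelineManager.ExecuteSendPipeline(
    pipelineType, pipelineInput, PipelineOutputMsg);
```

ExecuteSendPipeline writes its result into the message being constructed (PipelineOutputMsg here), which is what lets a flat-file send pipeline produce its output while the orchestration is still running.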
After pipeline processing, we could send the pipeline output wherever it needed to go through a dynamically bound port, once again using the BRE to determine the correct destination.
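Addressing a dynamic send port from an Expression shape follows the standard BizTalk pattern; the port name and URI below are placeholders, not values from the project:

```csharp
// Expression shape: configure the dynamic send port before the Send shape runs.
// destinationUri was resolved via the BRE; the literal shown is hypothetical.
OutputPort(Microsoft.XLANGs.BaseTypes.Address) = "file://C:\\Drops\\ClientA\\%MessageID%.txt";
OutputPort(Microsoft.XLANGs.BaseTypes.TransportType) = "FILE";
```

Setting the Address property is what binds the port at runtime; the URI prefix (file://, http://, and so on) tells BizTalk which adapter to use.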
We then took the original message and sent it on its way for further processing.
For more info regarding dynamic map execution or calling pipelines from within an orchestration, see the MSDN articles on the Transform shape and on composing pipelines within orchestrations.