Muenchian Grouping in BizTalk while keeping Mapper functionality

Muenchian Grouping is a powerful technique that allows grouping by common values among looping/repeating nodes in an XML document.  BizTalk does not have out-of-the-box support for this, but it can be achieved by adding custom XSLT to a map.  Chris Romp wrote a post about this years ago that serves as an excellent example of the idea in BizTalk: http://blogs.msdn.com/b/chrisromp/archive/2008/07/31/muenchian-grouping-and-sorting-in-biztalk-maps.aspx.  The drawback of his method is that you lose all other Mapper functionality by using completely custom XSLT, and custom XSLT is more difficult to maintain than a BizTalk map.
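Outside of BizTalk, the core of the Muenchian technique is an xsl:key plus a generate-id() comparison that selects one representative node per group.  A minimal sketch (the element and key names here are illustrative, not taken from any particular schema):

```xml
<!-- Index every repeating <Item> by the value of its <Category> child -->
<xsl:key name="groups" match="Item" use="Category" />

<!-- Select only the first <Item> in each group, then pull all of that
     group's members in under a single new parent node -->
<xsl:template match="/Items">
  <xsl:for-each select="Item[generate-id() =
                             generate-id(key('groups', Category)[1])]">
    <Group name="{Category}">
      <xsl:copy-of select="key('groups', Category)" />
    </Group>
  </xsl:for-each>
</xsl:template>
```

The generate-id() comparison is what de-duplicates the groups: it is true only for the node that happens to be first in its key bucket, so each distinct Category value produces exactly one `<Group>`.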

Enter Sandro Pereira’s phenomenal tutorial on Muenchian Grouping in BizTalk maps (https://code.msdn.microsoft.com/windowsdesktop/Muenchian-Grouping-and-790347d2).  His solution is particularly powerful because it allows you to maintain the functionality and simplicity of the BizTalk Mapping engine while extending it to allow for the Muenchian Grouping technique as well.  However, there is still a limitation to this approach: the XSLT functoids are still responsible for any transformation of the child nodes that are grouped.  That poses a problem if your grouping logic requires that a parent (or perhaps even a root) node gets grouped on the criteria and many child nodes must be appended to the proper parent.

I recently faced just this situation while working for a client.  The XML data coming in needed to be extensively transformed, and in particular, duplicate child nodes had to be converted to unique parent nodes, with the original parents being appended to the correct new unique node.  Custom XSLT is clearly required here, but a hybrid approach can be used to still allow a regular BizTalk map to transform the resultant data.

Read More »

Posted in BizTalk

MABS EAI Bridge LoB Lookup (Part 2 of 2)

Last month (sorry about that!), I wrote a post about using MABS to access a LoB system (in the example, SQL Server) behind several layers of firewalls (here).

We looked at the following tasks:

  1. Creating the BizTalk services
  2. Setting up BizTalk Adapter Services in a local (or IaaS) environment to run a stored procedure in SQL Server
  3. Creating a sample table and stored procedure
  4. Creating a Service Bus namespace with ACS
  5. Creating the Relay to the LoB system
  6. Creating an EAI bridge to access the LoB system

This week, we’ll look at these tasks:

  1. Testing and debugging the bridge with a Visual Studio add-on
  2. Writing a custom component to call the LoB adapter in an EAI Bridge stage and parse the response
  3. Having the component send an email notification using an Office 365 server

Read More »

Posted in .NET Framework, Architecture and Development, Azure, BizTalk, Cloud, Enterprise .NET, SaaS, SQL Server, WCF

Nino Crudele’s BizTalk NoS Add-in

Nino Crudele, BizTalk MVP, demonstrated his NoS BizTalk Add-in for Visual Studio at the Integrate 2014 conference in Redmond.  In short, it’s a set of Visual Studio add-ins that delivers on the promise of taking BizTalk development and troubleshooting to the next level.

More information on Nino’s blog here

Also, check out Sandro’s blog here

#Integrate2014

Posted in 0-Uncategorized

BizTalk Build Error: The Namespace already contains a definition for _MODULE_PROXY_

While building BizTalk projects within Visual Studio, it is possible to receive the following error when trying to compile a project with multiple orchestrations:

“The namespace ‘[namespace]’ already contains a definition for ‘_MODULE_PROXY_’”

While the error seems obvious, in my case the namespaces were indeed unique, so the above error did not make much sense.  The underlying issue was in fact that the type names of two of the orchestrations within the project were identical.  It turned out that an orchestration had at one point been duplicated, and only the namespace was changed.  After making the type names unique, the project successfully compiled and deployed.

Posted in BizTalk | Tagged , , , , , | Leave a comment

Using SubVersion Revision Numbers in Build Versions

Having used Team Foundation Server (TFS) for many years, I grew accustomed to having an automated build process and the ability to include the revision number in my binary file versions.  Recently I began work for a client that used SubVersion as their source control system.  This removed many of the niceties I was used to having on other TFS-based projects.  I set out on a search of the internet and found a couple of methods that provide similar functionality, tying a deployed package of code back to a revision in my source control.

Both of these methods require that you have the TortoiseSVN tool installed on your local system.  Neither method actually logs into SVN to get the revision number, so it is important to note that the revision number displayed will reflect the revision of the source code on your local file system.  That is to say, if you don’t “Update” prior to building, you will have the revision number from the last time you updated.
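One common way to wire this up (a sketch, assuming TortoiseSVN is installed and the project keeps a template copy of AssemblyInfo.cs; the file names and paths here are illustrative) is a Visual Studio pre-build event that runs TortoiseSVN’s SubWCRev.exe, which replaces keywords such as $WCREV$ with the local working-copy revision:

```
REM Pre-build event: read the revision from the local .svn metadata
REM and expand $WCREV$ into the real AssemblyInfo.cs.
REM AssemblyInfo.template.cs contains, for example:
REM   [assembly: AssemblyVersion("1.0.0.$WCREV$")]
"C:\Program Files\TortoiseSVN\bin\SubWCRev.exe" "$(ProjectDir)." "$(ProjectDir)Properties\AssemblyInfo.template.cs" "$(ProjectDir)Properties\AssemblyInfo.cs"
```

Because SubWCRev reads the working copy rather than the server, the version stamped into the binary reflects your last update, exactly as noted above.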

Read More »

Posted in Enterprise .NET, General

MABS EAI Bridge LoB Lookup (Part 1 of 2)

Microsoft Azure BizTalk Services (MABS) has a lot to offer for companies looking for a PaaS middleware solution.  EAI bridges provide B2B communication as well as LoB access functionality for EDI, XML, and flat file interchanges.  The new mapping system offers some exciting and powerful new functionality as well, vastly simplifying certain tasks that previously required several functoids, and opening up new possibilities and enhanced performance with lists.

However, it is a new technology, and certain tasks that have been very straightforward in BizTalk Server 2013 require a different way of thinking in MABS.  For example, it is a fairly trivial task to create an orchestration that accesses a LoB adapter (using, for example, the WCF sqlBinding) to do data validation or enrichment, and to publish this orchestration as a web service for client consumption.  If your SQL database is SQL Azure, there is some built-in functionality to do a “Lookup” in the Azure database, but this may not be an option for an infrastructure that makes use of features not currently available in SQL Azure, such as the ability to encrypt at rest.  It may also simply be that an existing LoB SQL database cannot easily be moved for various other reasons.  In this series of posts, I will outline the process for implementing this functionality using the powerful custom code capability available in MABS.

Read More »

Posted in Architecture and Development, Azure, BizTalk, Biztalk Tutorial, Business Intelligence, Cloud, ESB Guidance and SOA, IaaS, PaaS, PowerShell, SQL Server

Service Bus Authentication and Authorization

If you’re doing any MABS development that uses the typical LoB Relay pattern, you should be aware of some changes to the associated security model.

When you create a Service Bus namespace in the Azure portal, only SAS (Shared Access Signature) authentication is enabled by default.  An accompanying ACS namespace will no longer be created and paired with the Service Bus namespace.

ACS is a critical component of the LoB Relay pattern, but the decision was made not to automatically create the Microsoft Azure Active Directory Access Control namespace (also known as Access Control Service, or ACS).  Microsoft’s reasoning is that the vast majority of their customer base uses ACS only for the access key functionality (ACS is a service that provides an easy way of authenticating and authorizing users of your web applications and services) and not for identity federation.  Microsoft reports that SAS both scales better and provides richer functionality than ACS.

Be that as it may, those of us who require ACS for our MABS development will now need to execute an Azure PowerShell command to create the namespace and the associated ACS credentials and artifacts (keys, etc.).

After launching the Azure PowerShell console, first connect your account using the following command:

PS C:\> Add-AzureAccount

Then create the namespace, which also creates and pairs the ACS namespace:

PS C:\> New-AzureSBNamespace -Name 'MyNamespace' -Location 'Central US'

Once the command executes successfully, go back to the Azure portal, find the namespace, and click the “Connectivity” button at the bottom middle of the screen to retrieve the ACS information.

For more information, see

Posted in Architecture and Development, Azure, BizTalk, Cloud, PaaS

Bidirectional Communication Between Directives and Controllers in Angular

In Angular, it’s very easy for a directive to call into a controller. Working in the other direction – that is, calling a directive function from the controller – is not quite as intuitive. In this blog post, I’ll show you an easy way for your controllers to call functions defined in your directives.

First, calling a controller function from a directive is straightforward. You simply define a “callback” function in the controller and pass it to the directive (using the ‘&’ symbol in the isolated scope definition). It’s then trivial for the directive to invoke the function, which calls into the controller. To put things in .NET terms, this is akin to a user control (the directive) raising an event, which the user control’s host (the controller) can handle.

For example, you may want your directive to call your controller when the user clicks a button defined inside the directive’s template:

myPage.html

<div ng-controller="myController as vm">
    <my-directive on-button-click="vm.directiveButtonClicked()"></my-directive>
</div>

myController.js

function myController($scope) {
    var vm = this;
    vm.directiveButtonClicked = function () {
        // Controller reacting to call initiated by directive
        alert('Button was clicked in directive');
    }
}

myDirectiveTemplate.html

<button ng-click="buttonClicked()">Click Me</button>

myDirective.js

function myDirective() {
    return {
        restrict: 'E',
        templateUrl: '/Templates/myDirectiveTemplate.html',
        scope: {
            onButtonClick: '&'
        },
        link: link
    };

    function link(scope, element, attrs, controller) {
        scope.buttonClicked = function () {
            // Button was clicked in the directive
            // Invoke callback function on the controller
            scope.onButtonClick();
        };
    }
}

Unfortunately, there is no clearly established pattern in Angular for communicating in the opposite direction (calling functions of the directive from the controller). Again, in .NET terms, it’s easy for a user control’s host (the controller) to invoke public or internal methods defined by the user control (the directive). But there is no native way to achieve the same thing in Angular, which is certainly curious, because this is not an uncommon requirement.

Several solutions to this problem can be found on the web, but most of them carry caveats and/or add unwanted complexity. Some work by using $watch, but $watch is undesirable and should generally be avoided when possible. Others work, but not with isolated scope, which means you won’t achieve isolation across multiple instances of the directive.

Here is a simple, lightweight technique that will enable your controllers to call functions on your directives, without resorting to $watch, and with full support for isolated scope.

Here’s how it works:

  1. The controller defines a view-model object named “accessor” with no members
  2. The page passes this object to the directive, via an attribute also named “accessor”
  3. The directive receives the accessor, and attaches a function to it
  4. The controller is now able to call the directive function via the accessor

Let’s demonstrate with an example. The directive template has two text boxes for input, but no button. Instead, there is a button on the page that is wired to a handler on the page’s controller. When the user clicks the button, the controller calls the directive. In response, the directive prepares an object with data entered by the user in the text boxes and returns it to the controller.

myPage.html

<div ng-controller="myController as vm">
    <my-directive accessor="vm.accessor"></my-directive>
    <button ng-click="vm.callDirective()">Get Data</button>
</div>

myController.js

function myController($scope) {
    var vm = this;
    vm.accessor = {};
    vm.callDirective = function () {
        if (vm.accessor.getData) {
            var data = vm.accessor.getData();
            alert('Data from directive: ' + JSON.stringify(data));
        }
    };
}

myDirectiveTemplate.html

Name: <input type="text" ng-model="name" /><br />
Credit: <input type="text" ng-model="credit" /><br />

myDirective.js

function myDirective() {
    return {
        restrict: 'E',
        templateUrl: '/Templates/myDirectiveTemplate.html',
        scope: {
            accessor: '='
        },
        link: link
    };

    function link(scope, element, attrs, controller) {
        if (scope.accessor) {
            scope.accessor.getData = function () {
                return {
                    name: scope.name,
                    credit: scope.credit
                };
            };
        }
    }
}

Notice how the controller defines vm.accessor as a new object with no members. The controller’s expectation is that the directive will attach a getData function to this object. And the directive’s expectation is that the controller has defined and passed in the accessor object specifically for this purpose. Defensive coding patterns are employed on behalf of both expectations; that is, we ensure that no runtime error is raised by the browser in case the controller doesn’t define and pass in the expected accessor object, or in case the directive doesn’t attach the expected function to the accessor object.

The accessor pattern described in this blog post simplifies the task of bi-directional communication, making it just as easy to call your directive from your controller as it is to call in the other direction.

Happy coding!
Posted in JavaScript, User Experience Design

CRM BizTalk Integration using Azure Service Bus

You may face the same challenges that I experienced when integrating Microsoft Dynamics CRM with BizTalk.  The objective was to capture events in real time and transmit them to an outside system without losing the order of events.  The CRM web services expose two query methods, “Retrieve” and “RetrieveMultiple”, which provide a way of querying the different entities to achieve what we wanted.  However, I tried a different approach to solve this integration challenge by using Azure Service Bus Queues, which provide a robust and flexible implementation of the publish-subscribe pattern.  The picture depicts the scenario that we implemented.

[Visio diagram of the implemented scenario]

Read More »

Posted in Azure, BizTalk, Dynamics CRM

Creating a Windows 10 Preview VHD That Can Run as a Hyper-V VM and Can Boot Natively

Introduction

When the Windows 10 Technical Preview came out earlier this month, I wanted to kick the tires a bit and see what was new.  However, I needed my laptop to work reliably, so I couldn’t take the risk of installing Windows 10 over my Windows 8.1 installation.

So, I decided to install it to a Virtual Machine (VM) running in Hyper-V. This would allow me to run Windows 10 in a “sandbox” that would not affect my primary operating system. It would also allow me to multitask – doing my normal day-to-day activities on my laptop, while still “playing around” with Windows 10.

The other thing I wanted to be able to do was native-boot into the Virtual Hard Disk (VHD) used by this VM.  VHD native boot is a nice feature that was added with Windows 7 and Server 2008 R2.  When you boot into a VHD, you are running everything on bare metal except for the disk.  There are two advantages to doing this.  The first is performance – you use all CPU cores and all of the memory on the computer, and there is no virtualization layer to go through (except for the disk, as mentioned above).  The second is that you can verify that Windows 10 will work on the hardware of your computer – you can’t do that running in a VM, since everything is virtualized.
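For the native-boot half, the usual approach is to add a second boot entry that points at the VHD using bcdedit from an elevated command prompt.  A sketch (the entry description, GUID placeholder, and VHD path are illustrative; use the GUID that the /copy command prints):

```
REM Clone the current boot entry under a new name
bcdedit /copy {current} /d "Windows 10 Preview (VHD boot)"

REM Point the new entry (substitute the GUID returned above) at the VHD
bcdedit /set {guid} device vhd=[C:]\VHDs\Win10Preview.vhdx
bcdedit /set {guid} osdevice vhd=[C:]\VHDs\Win10Preview.vhdx
```

On the next restart, the boot menu offers both the original OS and the VHD entry, so the same VHD can serve double duty as a Hyper-V disk and a bare-metal boot volume.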

Read More »

Posted in Azure, IaaS