SQL Server’s AlwaysOn technology (available since SQL Server 2012) provides high-availability and disaster-recovery functionality that largely supplants mirroring and log shipping – in fact, mirroring is now deprecated. Exactly what functionality is available, and how robust it is, varies by release (2012+), edition (Standard versus Enterprise), and the physical infrastructure devoted to it. AlwaysOn is fairly easy to set up (though it requires cooperation from both Windows Server and networking admins) and, relative to the required effort, provides exceptional durability for SQL Server databases. AlwaysOn is not a complete out-of-the-box durability solution and has significant gaps – for example, it does not keep SQL Server logins and roles synchronized across servers – but it is an excellent start for the needs it caters to.
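As a rough sketch of what setup involves, an availability group is created with plain T-SQL once the Windows failover cluster exists and the HADR feature is enabled on each instance. All names below (group, database, replica hosts, endpoints) are hypothetical:

```sql
-- Minimal sketch, run on the intended primary replica. Assumes a WSFC
-- cluster already exists, AlwaysOn is enabled on both instances, and a
-- database mirroring endpoint listens on port 5022 on each node.
CREATE AVAILABILITY GROUP [AG_Sample]
FOR DATABASE [SampleDb]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE1.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE2.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);
```

Note that this replicates only the database itself – the login/role synchronization gap mentioned above still has to be handled separately.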
This post assumes the reader has at least a basic familiarity with SQL Server backups, as well…
You’re using the new One Designer cross-version support in SQL Server Integration Services, and everything breaks when you try to downgrade to SQL Server 2012. The little icon that indicates something has gone wrong shows up,
or when you try to interact with any custom components or tasks you get the following error, or something similar:
Now, there are three things worth checking:
Are your UpgradeMapping files set up correctly? They should point to a valid strong-named assembly, and use the same alias, for both versions of SQL Server that you’re attempting to deploy to. If not, fix this issue first and try again.
After migrating your custom objects, navigate to the UserComponentTypeName property (for PipelineComponents) or to the CreationName field of the corresponding DTS:Executable in the package XML.
These should contain either the alias (typically the qualified name of the class, i.e. Sample.SSIS.CustomTask),
or the strong-name associated with…
Tom Babiec wrote a great blog post a few months back on inserting into multiple parent-child tables in a single stored procedure. We use this technique a lot in our data integration work, and it’s proven very robust in many contexts. The SQL procedure outlined in that post is useful not just for BizTalk, but for ADO.NET and any other application trying to load multiple rows into multiple parent-child tables in SQL Server. However, when dealing with larger data sets (and as the tables grow), we’ve noticed some degradation in performance. In some cases, we were seeing the exact same data set take anywhere from 30 seconds to 10+ minutes to load on the same database. We tried a few different options, including forcing a recompile of the stored procedure between each load
, but this did not solve…
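For context, the core of the parent-child technique is a MERGE that captures the mapping between the caller’s row identifiers and the generated identity values, so child rows can be wired up in one set-based pass. A minimal sketch, with hypothetical table and parameter names:

```sql
-- Hypothetical sketch: @ParentRows and @ChildRows are table-valued
-- parameters; dbo.Parent has an identity column ParentId. The ON 1 = 0
-- predicate never matches, so every source row is inserted, and MERGE's
-- OUTPUT clause (unlike INSERT's) may reference source columns - which
-- is what yields the SourceId -> ParentId mapping.
DECLARE @IdMap TABLE (SourceId INT, ParentId INT);

MERGE INTO dbo.Parent AS tgt
USING @ParentRows AS src
    ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (Name) VALUES (src.Name)
OUTPUT src.SourceId, inserted.ParentId INTO @IdMap (SourceId, ParentId);

-- Children join through the captured mapping to pick up real foreign keys.
INSERT INTO dbo.Child (ParentId, Value)
SELECT m.ParentId, c.Value
FROM @ChildRows AS c
JOIN @IdMap     AS m ON m.SourceId = c.ParentSourceId;
```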
This error is typically harmless, but can result when the XML Disassembler encounters an empty Envelope message instance that’s formatted like this:
instead of this:
BizTalk draws a semantic distinction between these two instances, processing the second one fine (publishing no messages) but raising an exception like this for the first:
There was a failure executing the response(receive) pipeline: “…” Source: “XML disassembler” Send Port: “…” URI: “…” Reason: Unexpected event (“eos”) in state “processing_header”.
This can happen particularly when using an XmlProcedure or XmlPolling from SQL – if the resultset is empty, the adapter will publish this message. While this behavior may be desirable (and can frequently be avoided by ensuring you have good Data Available statements on your polling ports, and only call XmlProcedures with valid parameters/at valid times), it can also generate a lot of alarming errors….
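One way to get a good Data Available statement is to make it cheaply answer “is there anything to poll?” so the adapter never executes the polling statement against an empty set. A sketch, assuming a hypothetical queue table:

```sql
-- Hypothetical value for the WCF-SQL binding's PolledDataAvailableStatement
-- property. The adapter runs the polling statement only when this returns
-- a positive count, so an empty resultset (and the resulting empty
-- Envelope instance) is never published in the first place.
SELECT COUNT(*)
FROM dbo.OutboundQueue
WHERE Processed = 0;
```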
Bit of a head-scratcher, this one. I was working on some ADO.NET code that called a stored procedure with many (10k+) table-valued parameter rows being passed in. Occasionally I’d see a bug where ExecuteNonQuery would throw an exception with the following stack trace (I tried ExecuteReader and ExecuteScalar as well, just to be sure):
System.NullReferenceException was unhandled by user code
Message=Object reference not set to an instance of an object.
at System.Data.SqlClient.SqlCommand.OnReturnStatus(Int32 status)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
I knew for sure the command object was not null, and so I started looking at the Reference Source. It seemed the parameter collection was the cause of the issue.
I enabled CLR debugging in Visual Studio and dove in. The most relevant block of that function is here:
In my case, count was…
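For reference, the general shape of the server side of such a call – a user-defined table type passed as a READONLY parameter, with a return status that OnReturnStatus later processes on the client – looks roughly like this (hypothetical names):

```sql
-- Hypothetical sketch of a procedure taking a table-valued parameter.
CREATE TYPE dbo.RowList AS TABLE
(
    Id    INT,
    Value NVARCHAR(100)
);
GO
CREATE PROCEDURE dbo.LoadRows
    @Rows dbo.RowList READONLY   -- TVPs must be declared READONLY
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Target (Id, Value)
    SELECT Id, Value FROM @Rows;
    RETURN 0;  -- the return status handled client-side by OnReturnStatus
END
```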
MSDN provides an example of inserting large data into SQL Server by leveraging the WCF-SQL adapter’s built-in FILESTREAM capabilities. However, it’s also possible to leverage the transaction enlisted by the WCF adapter in a custom pipeline to pull FILESTREAM data out of SQL Server more efficiently than the more common SELECT … FOR XML query, which simply grabs the FILESTREAM content and stuffs it into an XML node in the resulting document.
Imagine, for example, you had a large document of some sort (XML, Flat File, etc.) to store in SQL Server that BizTalk would need to process from a FILESTREAM table defined like so:
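A minimal sketch of such a table (column and table names are hypothetical) – FILESTREAM storage requires a ROWGUIDCOL column with a unique constraint and a database with a FILESTREAM filegroup:

```sql
-- Hypothetical FILESTREAM table sketch. The Content column's bytes live
-- on the NTFS file system rather than in the data pages, which is what
-- enables streaming access from the custom pipeline.
CREATE TABLE dbo.LargeDocument
(
    Id       UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
             DEFAULT NEWSEQUENTIALID(),
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) FILESTREAM NULL
);
```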
The tradeoff here is losing the typed XML data in favor of more efficient storage and access to larger file objects (especially when the data will, on average, be large). This can make a vast difference if you have…
There are a few good resources out there for setting up a clustered Master Secret Server:
Clustering the Master Secret Server (MSDN)
Installation of SSO on a SQL Failover Cluster
However, I faced some issues recently setting all of this up, getting the following errors (in the event log and the configuration log):
Creation of Adapter FILE Configuration Store entries failed. (BizTalk config log)
Could not import a DTC transaction. Please check that MSDTC is configured correctly for remote operation. See the event log (on computer EntSSOClusterResource) (BizTalk config log)
d:\bt\127854\private\source\setup\btscfg\btscfg.cpp(2213): FAILED hr = c0002a25 (BizTalk Config log)
Failed to initialize the needed name objects. Error Specifics: hr = 0x80004005, com\complus\dtc\dtc\msdtcprx\src\dtcinit.cpp:575, CmdLine: “C:\Program Files\Common Files\Enterprise Single Sign-On\ENTSSO.exe”, Pid: 172 (Event log)
Could not import a DTC transaction. Please check that MSDTC is configured correctly for remote operation. See documentation for details. Error Code: 0x80070057, The parameter is…
With the ESB Toolkit, BizTalk provides an excellent framework for handling exceptions that occur throughout the ESB. Many built-in facilities are as simple as checking a box to route failed messages to the exception portal, and within orchestrations you can easily build ESB exception messages in catch blocks and route them to the portal as well.
However, these only apply if a message actually makes it to a pipeline or orchestration. For WCF SQL Polling receive locations, it’s possible that no message will ever make it to the pipeline – for example, if the procedure causes an exception to occur (perhaps by a developer intentionally using THROW or RAISERROR), the adapter will write the exception to the event log without providing a message for any pipeline or orchestration processing. Checking “suspend message on failure” doesn’t offer any…
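One possible mitigation (not necessarily the approach this post settles on – the excerpt is truncated) is to catch errors inside the polling procedure itself and emit them as a pollable XML result, so that a failure still produces a message that pipelines and orchestrations can route. A sketch with hypothetical names:

```sql
-- Hypothetical sketch: wrap the polling logic so THROW/RAISERROR inside
-- it surfaces as a normal polled message instead of only an event log
-- entry written by the adapter.
BEGIN TRY
    EXEC dbo.GetPendingWork;   -- the normal polling logic
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage
    FOR XML PATH('Error'), ROOT('PollingFault');
END CATCH
```

A receive-side map or orchestration can then turn the PollingFault message into an ESB exception message for the portal.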
An integration scenario requires a unique, incrementing numeric identifier to be sent with each message (or each record in a message). These identifiers cannot be reused (or at least not within certain ranges or time periods). A GUID is not suitable because it will not be sequential (not to mention that many legacy systems and data formats may have trouble handling a 128-bit number!).
Integration platforms will have a hard time meeting this on their own – GUIDs work well because they guarantee uniqueness on the fly without needing to worry about history. Messaging platforms typically deal in terms of short executions, and BizTalk is no exception. While persistence of a message might be handled (such as BizTalk does with the MessageBox), persistence of the entire execution process is usually not guaranteed. Deployments,…
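Since the database is the one durable participant here, one natural approach is to let SQL Server own the counter. SQL Server 2012+ sequences fit well (object names below are hypothetical):

```sql
-- Hypothetical sketch: a durable, monotonically increasing counter that
-- survives restarts and deployments, drawn once per outbound message.
CREATE SEQUENCE dbo.MessageId
    AS BIGINT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;   -- avoids skipping cached values after a restart

SELECT NEXT VALUE FOR dbo.MessageId;
```

One caveat: even with NO CACHE, a value handed out inside a rolled-back transaction is still consumed, so sequences guarantee “never reused,” not “gap-free.”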
Aggregation is a common pattern in enterprise integration: System A (or several systems) sends many messages that System B expects as a single message, or as several messages grouped on a particular attribute (or set of attributes).
The most common way to approach this in BizTalk is using a Sequential Convoy orchestration to aggregate the message – Microsoft provides a sample of how to do this in the SDK. This is a powerful pattern, but has a few downsides:
Sequential Convoys can become difficult to manage if they’re expected to run for a long time
Complex subgrouping can multiply the headaches here – for example, if you have to aggregate messages for hundreds of destinations concurrently
The destination message may become very large, to the point where BizTalk cannot optimally process it anymore – particularly if it is a large flat file message.
Modifying the aggregate…