Dell Boomi clients want to launch new technology capabilities that rapidly deliver a competitive edge. Unfortunately, project backlogs and competing priorities often slow the pace of innovation. Overworked and understaffed IT teams compound the problem, and the resulting employee turnover makes it even harder for businesses to retain their best and brightest IT staff.
Ultimately, the cycle of pressing priorities and strained IT resources leads to a skills gap that causes many companies to lag behind.
And integration is central to this issue. These days, organizations need to be extremely agile in how they integrate their applications and data to drive digital transformation. The volume and diversity of integrations necessary to run a digital business are growing exponentially. Social, mobile, analytics, big data, IoT, and AI technologies all require integration into core business systems.
And integration is fundamental for any organization…
When the term “big data” first burst onto the scene about seven years ago, experts predicted that organizations could dramatically improve how they operate by capturing and analyzing vast arrays of rapidly growing information.
Fast forward to 2017. It turns out that “big data” wasn’t just another buzzword. Now an established term in the IT and business lexicon, big data is bigger than ever. By some estimates, data volumes are doubling every three years.
But organizations have yet to fully capitalize on the value of data for more informed decision-making, operational efficiencies, and personalized systems of engagement with customers and partners.
“Most companies are capturing only a fraction of the potential value from data and analytics,” according to a recent McKinsey Global Institute study, “The Age of Analytics: Competing in a Data-Driven World.”
Connecting Data for Competitive Advantage
For organizations that want to survive and thrive…
Back in 2011, Gartner analyst Benoit Lheureux wrote a blog post titled “EDI is Hot, No, Really!”
That was probably stretching it then, and six years later…well, let’s just say “hot” is perhaps overstating it. But there’s no overstating the continued importance of EDI to many businesses.
“Bottom line: EDI remains — and will remain for years to come — a high impact, valuable asset to business,” Lheureux wrote in his report. “… EDI is a well-established approach that is still a vital component of most companies’ overall B2B strategy and easily contributes to B2B, cloud computing, business intelligence, et al.”
Like the Internet, EDI has its roots in the military. The scale and complexity of the 1948 Berlin airlift required the exchange of information about transported goods—over a 300-baud teletype modem, no less. The effort led to standards that eventually became EDI as…
While testing an integration, it is always important to perform a series of negative test cases to ensure the process can fail gracefully through error handling. It is not always possible to set up endpoint environments for quick and easy testing. For this reason, a useful step in the development process is to create test harnesses. These act as implementations of abstract test cases that go beyond simple shape-to-shape testing. Test harnesses are also incredibly useful for regression testing.
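As a rough sketch of what a harness can look like outside the platform, the stub below stands in for an HTTP endpoint and always answers with a configurable error status, so a process’s error handling can be exercised even when the real endpoint isn’t available. This is an illustrative example, not a Boomi feature; the port and status code are arbitrary.

```python
# Minimal stub endpoint for negative testing: it always answers with a
# configurable HTTP status so the integration's error handling can be
# exercised without touching the real endpoint. Values are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

FAIL_STATUS = 500  # change to 400, 503, etc. to simulate different failures

class FailingEndpoint(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)  # consume the request body
        self.send_response(FAIL_STATUS)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"error": "simulated failure"}')

if __name__ == "__main__":
    # Point the process's HTTP connection at http://localhost:8080/
    HTTPServer(("localhost", 8080), FailingEndpoint).serve_forever()
```

Pointing the process’s connection at the stub makes it easy to confirm that a failed call is caught, logged, and routed the way the design intends.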
There are some instances where you may wish to package a set of processes that are ready for installation by managed accounts. Dell Boomi offers a solution to this problem with integration packs, which allow you to offer a pre-packaged solution to your end users.
There are, however, a few things to consider before using integration packs:
You can only have a maximum of 100 processes in a single integration pack.
There are several components that cannot be added to an integration pack. These include:
Process Route Components
Processes that contain Process Route Components
Processes must first be published to a process library before being added to an integration pack. For more information about how to use process libraries in Dell Boomi, see this link.
Integration pack management and multi-install integration packs are optional Dell Boomi AtomSphere features that must first be…
A common operation in Dell Boomi is to copy processes from one account to another; these processes can often serve as templates for new processes. One way to accomplish this is to copy the process to the other account, choosing the target account and folder.
While this method does get the process we need into the correct account, it has a few drawbacks.
It is difficult to determine which version of the process you are using when copying the process in this manner.
Copying a new version of the process is often messy, involving first deleting the old version of the process before copying over the new version.
In many instances, there are better, less painful alternatives to the method described above.
Dell Boomi offers a feature called Process Library Management that fixes these issues…
Through the use of Atom Queues and Listeners, a single process can spawn many iterations of a listening process. Each process spawned from a listener executes asynchronously, independent of any other executions. By default, an Atom Queue listener spawns an instance of the listening process every time a document is written to the queue. While this works fine at low throughput, processing a large number of documents will kick off a correspondingly large number of executions.
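To see why that matters, here is a small illustration (in Python, not Boomi itself) of the difference between dispatching one execution per document and draining the queue in batches; the queue contents and batch size are made up:

```python
# Illustration only: one handler per message vs. draining the queue in
# batches to bound the number of concurrent executions.
import queue

q = queue.Queue()
for i in range(10):
    q.put(f"document-{i}")

# Per-document dispatch would start 10 separate executions here,
# one for each document written to the queue.

def drain(q, batch_size=4):
    """Pull up to batch_size documents for a single execution."""
    batch = []
    while len(batch) < batch_size:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
    return batch

# Batched dispatch: ceil(10 / 4) = 3 executions instead of 10.
while True:
    batch = drain(q)
    if not batch:
        break
    print(f"one execution handles {len(batch)} documents: {batch}")
```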
Many times when working with Microsoft Azure, it is necessary to read and write information to Microsoft Azure File Storage. Unfortunately, Dell Boomi currently only supports a Microsoft Blob Storage Connector; one possible solution to this problem is to use Microsoft’s AzCopy utility to read and write information to Microsoft Azure File Storage.
AzCopy is a command-line utility designed for copying data to and from Microsoft Azure Blob, File, and Table storage; for our purposes, we will use it to write data to Microsoft Azure File Storage.
AzCopy can be downloaded from this link (http://aka.ms/downloadazcopy). To install, simply follow the installation instructions.
One possible use of AzCopy is to upload newly created data files to Microsoft Azure File Storage. By doing so, we can call AzCopy to upload the data files after we have finished creating them all, rather…
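As a sketch of what that upload call could look like when scripted, here is a hedged example that shells out to AzCopy using the classic Windows /Source:/Dest: flag style; the source directory, account name, share name, and key are placeholders you would replace with your own:

```python
# Sketch: shelling out to AzCopy to push newly created files to an Azure
# file share. Paths, account, share, and key below are placeholders.
import subprocess

SOURCE_DIR = r"C:\data\outbound"                              # where the files were written
DEST_URL = "https://myaccount.file.core.windows.net/myshare"  # hypothetical file share
ACCOUNT_KEY = "<storage-account-key>"                         # supply your real key

result = subprocess.run(
    [
        "AzCopy",                     # assumes AzCopy is on the PATH
        f"/Source:{SOURCE_DIR}",
        f"/Dest:{DEST_URL}",
        f"/DestKey:{ACCOUNT_KEY}",
        "/S",                         # recurse into subfolders
        "/Y",                         # suppress confirmation prompts
    ],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    raise RuntimeError(f"AzCopy failed: {result.stderr}")
```

From a Boomi process, a call like this would typically be issued from a Program Command shape on the machine hosting the Atom.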
Many times when a document cache is created for use in a lookup function, only a few values are actually needed. A typical approach to document caching writes an entire document to the cache, including unneeded values. This works fine when caching a small number of documents, but performance degrades as more documents are cached. A much more efficient approach is to cache only the elements of a document that are actually used.
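The idea is easy to see outside the platform. In this illustrative Python sketch (field names invented, not Boomi’s cache itself), the selective cache stores only the element the lookup returns and still answers the same question:

```python
# Illustration of the principle: keep only the fields the lookup needs,
# keyed by the lookup value, instead of caching whole documents.
documents = [
    {"customer_id": "C100", "name": "Acme Corp", "address": "1 Main St",
     "orders": ["O-1", "O-2"], "notes": "long free-text field..."},
    {"customer_id": "C200", "name": "Globex", "address": "9 Elm Ave",
     "orders": [], "notes": "another long free-text field..."},
]

# Whole-document cache: every field is stored whether or not it is used.
full_cache = {doc["customer_id"]: doc for doc in documents}

# Selective cache: only the element the lookup function actually returns.
name_cache = {doc["customer_id"]: doc["name"] for doc in documents}

print(name_cache["C200"])  # "Globex" -- same answer, far less stored data
```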
Once a process is deployed, any debugging or error tracing is typically done through the process logs. Although this method of debugging answers where in the process an error occurred, the developer lacks the answer to a crucial debugging question: “What data caused this error?” In test mode, a developer can simply look at the Shape Source Data of the component where the error occurred. After deployment, that level of granular debugging is not available. So how can we see the documents that were passed through the process?