ESB Cloud Migration or Upgrade: Testing is Key to Success

Are you planning to migrate your ESB integration layer to the cloud? Or are you just planning an upgrade to a new release? Then this might be for you:

When I discuss this topic with customers and ask, “How do you test your integration flows/processes?”, I often get reactions like these:

“Boring, let’s focus on the implementation.”

“Why? It’s unreliable anyway, as the connected systems are often not available.”

“We already have dozens of integration flows, why would I start now?”

“ESB processes are legacy, not worth investing in testing”

To be honest, I had a similar mindset when I began my career. I learned my lesson, though. 

Recently we delivered several upgrade and migration projects. Most had no testing in place at all when we started. Some customers did not want to invest in writing test cases for the existing integration flows. 

“What’s the point of doing that if you’re already upgrading or replacing the solution?”

As you can imagine, at the beginning of these projects we often received a lot of push-back on investing time and resources in testing, especially when it was not an upgrade but a migration. Sooner or later in every project, however, we could see the rewards.

As organizations move to the cloud or simply upgrade to a new release, testing the application integration layer is a critical step in the process. This layer integrates applications and services hosted in different environments, and it is important to ensure that these components communicate with each other as expected. To ensure a successful migration, engineers must take the time to test the integration layer thoroughly.

Moving to infrastructure as code and automated deployments is becoming the de facto standard. In our projects, we applied this to the integration layer together with automated testing. Some level of integration testing and manual testing will always be required, but these tests are usually time-consuming, costly, and to some extent error-prone.

Therefore, it is very important to catch and fix problems early in the process. Integration flows should be treated like programming source code and trigger tests on every change. Sounds logical… nothing new… correct. You might say I am just stating the obvious, and I do not disagree. The reality, though, is that these logical things are often not implemented.

Example:
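A minimal sketch of such a workflow, assuming hypothetical build and test scripts (build-flows.sh, run-tests.sh) and a docker-compose file describing the containerized ESB runtime; the details will differ per ESB product:

```yaml
# Sketch: build the integration flows, spin up a containerized test
# environment, and run the automated tests on every push.
name: esb-ci

on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build the integration flow packages (placeholder script)
      - name: Build integration flows
        run: ./build-flows.sh

      # Create a throwaway containerized environment for this run
      - name: Start test environment
        run: docker compose up -d

      # Deploy the flows and execute the test cases against it
      # (placeholder script)
      - name: Deploy and run tests
        run: ./run-tests.sh

      # Tear the environment down again, even if the tests failed
      - name: Stop test environment
        if: always()
        run: docker compose down
```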

The example above triggers a GitHub Actions workflow on every commit and push. The integration flows are built and then automatically deployed into a containerized environment. This environment is created as part of the CI/CD workflow and is then used to execute the tests.

What is required to get this working for your integration flows, and which factors contribute to success?

  • Test Data Quality
  • Automated Test Cases
  • Mocking of Connected Systems
  • Test Coverage

All these factors are important, and some are easier to achieve than others. We use our own framework to automatically collect the test data from a given environment and to run the tests. The collected data is then already in a format ready to be used for testing and mocking.
Test cases are then written in Java (which can also be automated to a certain extent, for simple assertions) and reuse the collected data.
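As an illustration, such a test case could look roughly like this. TestData and FlowRunner are hypothetical stand-ins for the framework API; the point is that the test only wires collected data to a flow and asserts on the result:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical sketch of a JUnit test case reusing collected data.
// TestData and FlowRunner stand in for the actual framework API.
class OrderFlowTest {

    @Test
    void orderFlowReturnsExpectedResponse() throws Exception {
        // Input and reference output previously collected from a real environment
        TestData data = TestData.load("collected/order-flow/create-order");

        // Execute the integration flow with the collected input message
        FlowRunner flow = new FlowRunner("OrderFlow");
        String actual = flow.run(data.inputMessage());

        // Simple assertion against the collected reference output
        assertEquals(data.expectedOutput(), actual);
    }
}
```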

The expected result of a given integration-flow step can be mocked if the backend systems are unavailable: the framework uses the collected result of the step instead of calling the backend. Alternatively (or in addition), the test cases can be run against a given environment, with expected test results per environment. This means you can use the same test cases, but, for example, mock some systems during the build and mock no steps at all during a test run against a higher environment. The test cases themselves remain untouched.
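Sketched with the same hypothetical classes as above, this could look as follows; the mocking is toggled by configuration, while the test itself stays the same:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical sketch: step mocking toggled per environment.
class OrderFlowMockedTest {

    @Test
    void orderFlowWorksWithOrWithoutBackend() throws Exception {
        TestData data = TestData.load("collected/order-flow/create-order");
        FlowRunner flow = new FlowRunner("OrderFlow");

        // During the CI build (test.env=build) the backend call is replaced
        // with the response collected earlier; against a higher environment
        // (e.g. test.env=qa) nothing is mocked and the real backend is called.
        if ("build".equals(System.getProperty("test.env", "build"))) {
            flow.mockStep("CallCrmBackend", data.collectedResponse("CallCrmBackend"));
        }

        assertEquals(data.expectedOutput(), flow.run(data.inputMessage()));
    }
}
```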

Once test cases are written and run successfully, the question is: “Did we cover everything?”
Test coverage reports help a lot here: they show the coverage percentage overall (per integration flow/ESB process) and per test scenario.

In addition, you can drill into the data and get a list of the steps that are not yet tested, including steps in invoked sub-processes. This makes it easier to identify the additional test cases that need to be written.
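Purely as an illustration (this is an invented layout, not the actual report format), such a drill-down could surface something like:

```text
OrderFlow                      coverage: 82%
  scenario create-order                 74%
  scenario cancel-order                 91%
  untested steps:
    - MapErrorResponse
    - RetryHandler (sub-process) / NotifyOps
```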

If a change results in a successful build (including tests), the package can be released and/or deployed.
Lower environments can be deployed automatically by your CI/CD pipeline.
Ansible, Jenkins or GitHub Actions, GitHub, and AWS are a great combination for such Continuous Delivery environments.

This allows environments to be created reproducibly, without any manual interaction.
For higher environments, we recommend the same approach, but with the deployment of released versions triggered manually. With no human intervention on the servers, this is less error-prone and also shortens turnaround times.
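For illustration, such a manually triggered deployment could be a second GitHub Actions workflow along these lines (the Ansible playbook deploy.yml is a hypothetical placeholder):

```yaml
# Sketch: deploy a released version on manual trigger only.
name: deploy-release

on:
  workflow_dispatch:
    inputs:
      version:
        description: "Released version to deploy"
        required: true
      environment:
        description: "Target environment (e.g. qa, prod)"
        required: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Run the deployment playbook; nobody touches the servers directly
      - name: Deploy released version
        run: >
          ansible-playbook deploy.yml
          -e "version=${{ inputs.version }}"
          -e "target_env=${{ inputs.environment }}"
```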

conapi ESB/Integration Testing Framework:

  • Automatic Collection of Test and Mocking Data
  • Mocking of Integration Flow Steps (i.e., simulating backends)
  • Coverage Reports (percentage, list of untested steps)
  • CI/CD Ready
  • Test Output Assertions