Best Practices and Standards in EAI

Summary

One of the most important technical challenges a company's IT department will ever face in today's industrial world is maintaining homogeneity among the applications within the enterprise. Most companies cannot, and the best they can provide is an interoperability model among the existing applications, one that does not end up costing more than the legacy systems it connects.

Whenever applications built on different development platforms, using different communication protocols, and deployed on different operating systems need to talk to each other, a predictable series of barriers arises. This, in turn, leads to the inevitable urge to build products specifically designed to solve application-to-application (A2A) problems.

With the increased presence of XML as part of the mainstream technologies, a whole new generation of products and tools relating to data exchange has become available. One of the primary purposes of XML is to improve information interchange. These new XML products leverage this capability not only to improve, but also to standardize, the exchange of information between disparate applications.

When it comes to executing a company's integration vision, the real challenge to be aware of is that EAI projects take more than the right technology to succeed. Despite the many credible products on the market, success and failure often depend on the project team's ability to plan, construct, and deploy the solution effectively. Some of the essential practices in EAI are explained as follows:

Building with Deployment in Mind

In the early stages of an EAI project, the focus is often on technology selection. This is not altogether surprising, given that comparing EAI technologies can be very complex, dominating the project team's time and attention. However, it is vital to keep the end goal in mind: deployment. In a sense, EAI is not real to the rest of the company until a solution is deployed. Yet often, key deployment issues are ignored in the design and construction phases of the project. Four ways to actively address deployment in a solution are as follows:

Instrument components for monitoring

The main system components should be instrumented for monitoring. This includes the ability to provide information on the availability of the system as well as other metrics such as CPU utilization, memory utilization, and open resource connections. This information is vital for systems administrators to tune the system for optimal performance. It also gives the development team detailed knowledge of how the system performs in a production environment.
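As an illustration, the following minimal Java sketch shows one way a component might expose availability and resource metrics using the standard JDK management beans. The ComponentMonitor class and its metric names are hypothetical, not part of any particular EAI product.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal health/metrics probe for an integration component. A monitoring
// agent or log scraper could poll snapshot() periodically.
public class ComponentMonitor {
    private final String componentName;
    private final AtomicInteger openConnections = new AtomicInteger();

    public ComponentMonitor(String componentName) {
        this.componentName = componentName;
    }

    public void connectionOpened() { openConnections.incrementAndGet(); }
    public void connectionClosed() { openConnections.decrementAndGet(); }

    // Returns a one-line snapshot of availability and resource metrics.
    public String snapshot() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        long heapUsedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
        double load = os.getSystemLoadAverage(); // -1.0 if the platform cannot report it
        return String.format("%s up=true heapUsedMb=%d systemLoad=%.2f openConnections=%d",
                componentName, heapUsedMb, load, openConnections.get());
    }

    public static void main(String[] args) {
        ComponentMonitor monitor = new ComponentMonitor("order-adapter");
        monitor.connectionOpened();
        System.out.println(monitor.snapshot());
    }
}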

Account for installations and updates

The deployed solution must account not only for the initial installation but also for ongoing updates to system components. In some deployments, updates to integration rules and components such as adapters may prove to be a regular activity. The ability to install and update without interrupting continuous operation is essential once the system is in production.
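One common technique for updating a component without interrupting operation is to resolve the active implementation through an atomically swappable reference, as in the Java sketch below. The Adapter interface and AdapterRegistry class are illustrative names, not part of any specific EAI product.

import java.util.concurrent.atomic.AtomicReference;

// Illustrative adapter contract; real adapters would be far richer.
interface Adapter {
    String process(String message);
}

// Sketch of swapping an adapter implementation at run-time without stopping
// message processing: in-flight calls keep the reference they already read,
// while new calls pick up the updated implementation.
public class AdapterRegistry {
    private final AtomicReference<Adapter> current = new AtomicReference<>();

    public AdapterRegistry(Adapter initial) {
        current.set(initial);
    }

    // Installs an updated adapter; no restart of the registry is required.
    public void update(Adapter replacement) {
        current.set(replacement);
    }

    public String handle(String message) {
        return current.get().process(message);
    }

    public static void main(String[] args) {
        AdapterRegistry registry = new AdapterRegistry(msg -> "v1:" + msg);
        System.out.println(registry.handle("order-42"));
        registry.update(msg -> "v2:" + msg);   // update applied while the system keeps running
        System.out.println(registry.handle("order-43"));
    }
}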

Establish service level agreements

Service level agreements (SLAs) should be established and articulated well before deployment; in fact, they should be articulated as part of the requirements process. An SLA defines the system's run-time requirements with regard to availability, resource consumption, and performance characteristics.
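One way to keep an SLA actionable is to capture its thresholds as data so they can be checked automatically against monitoring output. The following Java sketch is a minimal illustration; the threshold names and values are assumptions, not a standard format.

// Sketch of capturing an SLA as data so it can be checked automatically
// against monitoring output. Threshold names and values are illustrative.
public class ServiceLevelAgreement {
    private final double minAvailabilityPercent;
    private final long maxHeapUsedMb;
    private final long maxResponseTimeMillis;

    public ServiceLevelAgreement(double minAvailabilityPercent,
                                 long maxHeapUsedMb,
                                 long maxResponseTimeMillis) {
        this.minAvailabilityPercent = minAvailabilityPercent;
        this.maxHeapUsedMb = maxHeapUsedMb;
        this.maxResponseTimeMillis = maxResponseTimeMillis;
    }

    // Returns true if the measured values satisfy the agreement.
    public boolean isMet(double availabilityPercent, long heapUsedMb, long responseTimeMillis) {
        return availabilityPercent >= minAvailabilityPercent
                && heapUsedMb <= maxHeapUsedMb
                && responseTimeMillis <= maxResponseTimeMillis;
    }

    public static void main(String[] args) {
        // Example: 99.9% availability, 512 MB heap ceiling, 200 ms response time
        ServiceLevelAgreement sla = new ServiceLevelAgreement(99.9, 512, 200);
        System.out.println("SLA met: " + sla.isMet(99.95, 430, 180));
    }
}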

Build a deployment plan up front

Building a deployment plan before a single line of code is written is a way to ensure that deployment issues surface earlier rather than later. A deployment plan should walk through how each phase of deployment will be enacted. It needs to articulate all the necessary activities required to support deployment and ongoing maintenance of the solution. It should include coverage of installation, documentation, technical training, and the timing of the deployment.

Profile the Performance

Many EAI projects fail because of performance issues discovered at the point of deployment. Performance profiling is aimed at correcting this: profiling system performance should be a practice conducted at key points throughout the development process. Measuring and analysing system performance offers two primary benefits.

First, it is far less costly to correct performance issues and bottlenecks early in the development life cycle. This is particularly true if the performance issues are related to a design flaw. Addressing fundamental design issues during or after the initial deployment is very expensive.

Second, discovering the root of a performance problem after the entire system has been constructed can be extremely challenging from a technical perspective. The system is often limited by the scope of what can be observed, and this in turn is a result of the tracing and auditing facilities built into the system. It also can be influenced by many more variables, such as network traffic. Instead, by testing not only the individual components as they are built but also the baseline architecture, it is possible to establish a more granular view of system performance.

Discovering and resolving performance limitations early is simply easier and leads to better results at the end of the project.
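As a starting point, even a small timing harness applied to individual components during development can surface such problems early. The following Java sketch is illustrative only; a production profiling pass would normally use a dedicated tool, warm up the JVM, and record percentiles rather than simple averages.

import java.util.function.Supplier;

// Minimal timing harness for profiling an individual integration component
// during development. It reports average latency and rough throughput.
public class ComponentProfiler {

    public static <T> void profile(String label, int iterations, Supplier<T> operation) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            operation.get();
        }
        long elapsedNanos = System.nanoTime() - start;
        double avgMillis = elapsedNanos / 1_000_000.0 / iterations;
        double perSecond = iterations / (elapsedNanos / 1_000_000_000.0);
        System.out.printf("%s: avg %.3f ms per call, ~%.0f calls/sec%n", label, avgMillis, perSecond);
    }

    public static void main(String[] args) {
        // Stand-in for a transformation or adapter call being measured
        profile("xml-transform", 10_000, () -> "<order id='42'/>".replace("42", "43"));
    }
}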

Build a Traceable System

One overlooked aspect of EAI is the challenge of run-time debugging. Detecting problems in distributed integration architectures is extremely tricky and requires that the system be instrumented with tracing code. Tracing code allows you to track the progress of data and the execution of code segments as that data is processed.

Often, in an effort to meet the project schedule, the time allotted to building in tracing code is sacrificed in favour of functionality. Whether you are constructing your own EAI system from scratch or leveraging a product to implement an EAI solution, you need to ensure that the system is traceable.

The reality is that no matter how much testing is conducted in the lab environment, some problems inevitably arise only in production scenarios. The introduction of "live" data, coupled with production application usage, often exercises the system in ways that are not possible to duplicate in the test lab. When such problems are discovered at run-time, or sometimes even while production deployment is under way, it becomes difficult to fix them without being able to trace the flow and execution of the EAI solution.

Also, keep in mind that levels of traceability need to be defined, because tracing can hurt overall system performance. It helps to set a low tracing level initially and raise it only when debugging the system.
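The following Java sketch illustrates the idea using the JDK's built-in logging: each trace entry carries a correlation ID that ties together all the steps for one message, and the trace level can be kept low in production and raised while debugging. The class, logger, and step names are illustrative.

import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of instrumenting one integration flow with tracing code. The
// correlation ID links all trace entries produced for a single message.
public class TracedFlow {
    private static final Logger TRACE = Logger.getLogger("eai.flow");

    public static void process(String correlationId, String payload) {
        TRACE.log(Level.FINE, "[{0}] received payload of {1} chars",
                new Object[]{correlationId, payload.length()});

        String transformed = payload.toUpperCase(); // stand-in for a real transformation step
        TRACE.log(Level.FINER, "[{0}] transformation produced {1} chars",
                new Object[]{correlationId, transformed.length()});

        TRACE.log(Level.FINE, "[{0}] delivered to target system", correlationId);
    }

    public static void main(String[] args) {
        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.ALL);          // let the handler pass through fine-grained entries
        TRACE.addHandler(handler);
        TRACE.setUseParentHandlers(false);
        TRACE.setLevel(Level.FINE);           // FINER entries stay suppressed; raise the level when debugging

        process("c0ffee-01", "sample order payload");
    }
}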

Reviewing and Addressing Secondary Scenarios

Secondary scenarios arise when the flow of data is not completed within the expected course of execution. For instance, the following integration flow provides a context for discussing secondary scenarios. The flow can be understood in four steps:

1. A customer searches for a product online, entering product data into the corporate Web site.
2. The data is used to look up more information from a product catalogue database.
3. Rules combining the product catalogue data and customer information are applied.
4. An entry is made in the Siebel CRM application, informing the sales staff of the customer's status.

However, what if the database is busy processing other operations and the request for customer data times out? How does the system handle a failed retrieval or a returned error? This is an obvious but common secondary scenario.

Another scenario may occur if, in the course of executing the rules in Step 3, the server goes down. How does the system deal with an integration flow halted in mid process as a result of failure at one point?

The number of secondary scenarios for a given system is seemingly infinite. Practically speaking, it is not possible to address every one of them. However, regularly reviewing possible secondary scenarios, documenting them, and subsequently addressing them is an essential practice.
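To make this concrete, the following Java sketch shows one way to handle the timed-out lookup described above: retry a bounded number of times, then follow a predefined exception path instead of failing silently. The CatalogueClient interface and the fallback value are hypothetical, not part of any specific product.

import java.util.concurrent.*;

// Sketch of handling one secondary scenario: a catalogue lookup that times out.
// The flow retries once and then follows a predefined exception path.
public class LookupWithTimeout {

    interface CatalogueClient {
        String lookupProduct(String productId) throws Exception;
    }

    public static String lookup(CatalogueClient client, String productId,
                                long timeoutMillis, int maxAttempts) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                Future<String> result = executor.submit(() -> client.lookupProduct(productId));
                try {
                    return result.get(timeoutMillis, TimeUnit.MILLISECONDS);
                } catch (TimeoutException e) {
                    result.cancel(true);                    // give up on this attempt and retry
                } catch (ExecutionException | InterruptedException e) {
                    break;                                  // lookup failed outright
                }
            }
            // Predefined exception path: return a known value so a later step can compensate
            return "CATALOGUE_UNAVAILABLE";
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        CatalogueClient slowClient = productId -> { Thread.sleep(5_000); return "widget"; };
        System.out.println(lookup(slowClient, "P-100", 200, 2));
    }
}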

An EAI solution that addresses secondary scenarios will adhere to certain principles of behaviour:

Message preservation

The system should never lose data in the event of a system failure.

Data integrity

In the event of a system failure, the system should always preserve the integrity of the message.

Predefined exception handling

Failed operations must follow a predefined course of execution.

Graceful exits

Operations that fail because of unavailable resources or errors should always "exit gracefully" and return to a known state. A failure should never leave the data flow in an undetermined state in which the data is not reliable and the corrective course of action is unknown.
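The short Java sketch below ties these principles together: a delivery step that fails preserves the message in a dead-letter store (a predefined exception path) and exits gracefully by returning the flow to a known state. The class names are illustrative, and a real system would use durable storage rather than an in-memory queue.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a delivery step that never loses a message: on failure the message
// is parked in a dead-letter store and the flow returns to a known state.
public class GracefulStep {
    private final Deque<String> deadLetterStore = new ArrayDeque<>();

    interface TargetSystem {
        void deliver(String message) throws Exception;
    }

    // Returns true on success; on failure the message is preserved, never lost.
    public boolean deliverSafely(TargetSystem target, String message) {
        try {
            target.deliver(message);
            return true;                       // known state: delivered
        } catch (Exception e) {
            deadLetterStore.addLast(message);  // message preservation + predefined exception path
            return false;                      // known state: parked for corrective action
        }
    }

    public int parkedMessages() {
        return deadLetterStore.size();
    }

    public static void main(String[] args) {
        GracefulStep step = new GracefulStep();
        TargetSystem unavailable = msg -> { throw new Exception("CRM system unavailable"); };
        System.out.println("delivered: " + step.deliverSafely(unavailable, "customer-status-update"));
        System.out.println("parked for retry: " + step.parkedMessages());
    }
}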

Conclusion

The reality is that many EAI projects are delivered late and over budget. Certainly, a significant factor is vendors promising more than their products can actually deliver. However, it is possible to reduce the exposure to failure and increase the likelihood of success by following a few sound project management practices. EAI best practices are distinct in being specifically oriented to integration projects, and they offer practical advice for the EAI practitioner. The best practices discussed in this paper can serve as a starter list of steps to take.

Author:
Syed Hameed
Wipro Technologies
Bangalore, India
