Most companies understand the weaknesses of their aging legacy systems but still have to deliver products and services, pay employees, and perform other mission-critical operations. In short, there is only so much budget and staff time to go around.
At a typical organization, more than 75 percent of internal IS resources go to computer maintenance rather than to new development. Organizations with mainframe systems handling core business functions often use a "baling wire" approach to patch over underlying architecture problems.
IS departments typically use band-aid solutions such as software upgrades to keep the legacy system operational rather than correcting the problem at its core.
If these problems are to be avoided in future client/server implementations, some tough questions need to be answered.
Today, corporate IS must be adaptable in order to keep up with rapid, time-sensitive changes in business strategies geared toward increasing market share.
Object orientation, placing business logic in the middleware layer, and the increased effectiveness of N-Tier architectures will allow IS to be a malleable asset instead of an intransigent obstacle to business growth.
Applications for Current and Future Needs
The proverbial wisdom about starting with a good foundation when building a structure also applies to the design, development and implementation of client/server computing systems.
Newer, more flexible N-Tier client/server architectures are now gaining attention; N-Tier is usually the most appropriate model for computing environments where flexibility and scalability are important.
Today, many companies offer products that support the N-Tier client/server model. Demand for N-Tier client/server architectures and supporting products is quickly increasing amid growing awareness of the drawbacks of two-tier architectures.
Designing an N-Tier client/server architecture is no less complex than developing a two-tier architecture; however, the N-Tier architecture produces a far more flexible and scalable client/server environment.
In a two-tier architecture, the client and the server are the only layers. The Windows/GUI-based PC client accesses data from the server. In this model, both the presentation layer and application layer are handled by the client.
An N-Tier architecture has a presentation layer and two separate server layers - a business logic or application layer and a data layer.
The client becomes the presentation layer and handles the user interface. The application layer sits between the other two layers, forwarding the client's data requests to the data layer. The client is freed of application-layer tasks, which eliminates the need for powerful client technology.
The data server is also freed from unnecessary overhead such as stored procedures. In addition, costly database connections are optimized by having the middle layer 'funnel' users into a limited set of resources.
This critical middle layer can then be further partitioned to provide as much functionality, scalability and reliability as is required.
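The 'funneling' role of the middle layer can be sketched as a simple connection pool. The following Python sketch is purely illustrative (the original describes no specific implementation, and the class and function names here are invented): many clients share a small, fixed set of "connections" held by the middle tier.

```python
import queue

class ConnectionPool:
    """Middle-tier sketch: funnels many clients through a small,
    fixed set of database connections."""

    def __init__(self, size, connect):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def execute(self, request):
        conn = self._pool.get()          # block until a connection is free
        try:
            return conn(request)         # forward the client's data request
        finally:
            self._pool.put(conn)         # return the connection to the pool

# A stand-in "connection": in a real middle tier this would be a
# database handle; here it simply echoes the request.
def fake_connect():
    return lambda request: f"result for {request!r}"

pool = ConnectionPool(size=2, connect=fake_connect)
print(pool.execute("SELECT 1"))
```

Because clients never hold connections directly, the number of costly database sessions stays bounded no matter how many presentation-layer clients are attached.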
In an N-Tier architecture, the client can be a low-end Intel-based system, Macintosh, X-Terminal, or Network Computer (NC) device using a standard browser.
The presentation layer on each of these platforms can access the same application layer and therefore the data it connects to. This dramatically reduces the size and cost of the client while increasing code reusability and maximizing use of existing resources.
In addition, the capabilities of the emerging Java standards can be utilized to implement presentation interfaces.
Client/Server technology is heading in three major directions that make multi-tier development appropriate:
- Small scale systems will continue to be built using the well established two-tier architecture.
- Geographically distributed systems will be built using Internet-based technology, along the lines of a single-tier architecture or a two-tier architecture with a very thin client.
- Complex or mission-critical systems will be built using three-tier and, more generally, N-Tier architectures.
Why N-Tier is Right for Mission-Critical Systems
In the N-Tier model, a departmental client could initiate some departmental business logic on the departmental application server(s) which, as part of a network transaction, could update the departmental database(s) and then initiate business logic on the enterprise application server(s).
These enterprise application server(s) could then update the enterprise database server(s). All of this takes place under the umbrella of a network transaction.
Any one of the chain of application server(s) could initiate a rollback, which would be cascaded to all of the application server(s) involved. This capability allows a delegated approach to implementing business rules.
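The cascaded rollback described above can be sketched in a few lines of Python. This is a simplified illustration, not a real transaction monitor (the names and the failure simulation are invented): each application server records its updates, and if any step of the network transaction fails, every server already touched is rolled back.

```python
class AppServer:
    """One application server participating in a network transaction."""
    def __init__(self, name):
        self.name = name
        self.updates = []

    def apply(self, update):
        if isinstance(update, Exception):
            raise update                 # simulate a failed update
        self.updates.append(update)

    def rollback(self):
        self.updates.clear()             # undo this server's work

def network_transaction(steps):
    """Run each (server, update) step; on any failure, cascade a
    rollback to every server already touched."""
    touched = []
    try:
        for server, update in steps:
            server.apply(update)
            touched.append(server)
        return True
    except Exception:
        for server in touched:
            server.rollback()
        return False

dept = AppServer("departmental")
ent = AppServer("enterprise")

# The enterprise update fails, so the departmental update rolls back too.
ok = network_transaction([(dept, "debit account"),
                          (ent, RuntimeError("constraint violated"))])
print(ok, dept.updates)
```

A production system would delegate this coordination to a transaction monitor using a protocol such as two-phase commit, but the shape of the guarantee is the same: either every server's work commits, or none of it does.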
This business logic can access data in legacy mainframe environments such as CICS/VSAM and IDMS, and/or in SQL-compliant RDBMS servers such as Oracle, Sybase, InterBase, and DB2, running on a variety of Wintel or UNIX platforms.
In addition, as the business processes are identified and appropriate business logic is implemented on the application server(s), these services could then be globally advertised.
This allows end-users to develop their own presentation interfaces to the business logic while forcing them to abide by the business logic residing on the application server(s).
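A minimal sketch of such "globally advertised" services follows. The registry, service name, and credit-limit rule are all hypothetical, used only to illustrate the point above: an end-user's home-grown presentation interface can discover and call a service by name, but the business rule stays on the server side where every caller must abide by it.

```python
class ServiceRegistry:
    """Hypothetical directory of advertised business services."""
    def __init__(self):
        self._services = {}

    def advertise(self, name, func):
        self._services[name] = func

    def lookup(self, name):
        return self._services[name]

registry = ServiceRegistry()

def approve_order(amount, credit_limit=1000):
    # The business rule lives with the service, so every presentation
    # interface that calls it is forced to abide by it.
    return amount <= credit_limit

registry.advertise("orders.approve", approve_order)

# An end-user's presentation layer only ever sees the advertised name:
service = registry.lookup("orders.approve")
print(service(250))
print(service(5000))
```

No matter what user interface is built on top, an order over the limit is refused, because the check cannot be bypassed from the presentation layer.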
The inherent qualities of enterprise-wide computing in an N-Tier environment point us clearly to the next logical phase in application evolution.
To understand and prepare for the coming shift to distributed computing, we need to examine the business and technological forces driving this fundamental change.
Distributed Computing (Spread it around!)
The concept of "distributed computing" has been familiar to the data management community for quite some time. Until recently, however, very few companies had actually embarked on the migration to this powerful new architecture.
Now that is all about to change. As enterprise information needs have grown in both volume and complexity, companies have come to recognize the need for a solution that reduces both technology and processing costs while giving them the ability to move data quickly and efficiently across highly diverse platforms, machines, and programming languages.
Distributed computing using a tiered architecture is now becoming commonplace. No other approach delivers such a level of integration, flexibility, and openness in meeting your needs for reliable computing.
Those are the promises of truly distributed computing. Making the transition to an enterprise-wide solution takes commitment, a reasonable investment in new technologies, and in many cases the guidance of experienced distributed computing specialists.
By understanding the challenges and rewards of this new architecture, information managers can prepare their companies to take full advantage of this emerging standard.
The increasingly rapid upward migration of companies to enterprise-wide computing has been made possible, in part, by the emergence of a generation of powerful, field-proven Middleware technologies.
See The Middle ~ A Pathway To Migration, Transaction Process (TP), Distributed Objects, or Business Quality Messaging for details.
Developing a Plan
If you plan on building an integrated information system in a distributed environment, you really need to consider the following criteria:
Usability: The architecture should assist users in performing their jobs efficiently and effectively.
Adaptability: The architecture should make it easy and inexpensive to redesign existing functions for new technology, should provide access to existing legacy information, and should allow a flexible transition period from the old technology to the new.
Distributability: The clients and components of the architecture should be able to efficiently execute across multiple hardware platforms of a network.
Interoperability: The applications within the architecture should be able to work together in a consistent manner to perform tasks for the users of an information system.
Standardization: Components of the architecture should be based on software standards that are widely available or defined by an international standards organization.
Extensibility: The architecture should be easy to adapt to meet new and ever-changing requirements.
Internationalizability: The architecture should be able to display information in the languages and formats appropriate for all the countries and cultures in which the applications are used.
Manageability: The system managers should be able to economically configure, monitor, diagnose, maintain, and control the resources of the computing environment.
Portability: The software should be relatively easy to move from one platform to another.
Scalability: The architecture should be able to efficiently handle applications of any size and grow with the needs of the business.
Security: The architecture should protect information and computer resources from unauthorized use. The security component should have the ability to provide network-wide authentication. In addition, this component should provide centralized authorization capabilities.
Reusability: The reuse of existing software components is key to making effective use of valuable software engineering talent and to meeting the aggressive schedules imposed by audience expectations.
Reliability: The components of the architecture should be dependable enough for mission-critical business operations. Quality design of object classes and frameworks, with an eye to maximizing reuse, can also increase the reliability of the resulting software system.
Application Framework Overview
©Micromax Information Services Ltd. 1999