Client/Server is simply an architectural method of providing information to an end user; but that's where the simplicity ends.
Client/Server is a general description of a networked system where a client program initiates contact with a separate server program (usually on a different machine) for a specific function or purpose. The client exists in the position of the requester for the service provided by the server.
As large scale, complex information systems have evolved over the past two decades, the Client/Server model of computing has come to be generally accepted as the preferred architecture for application design and deployment.
C/S computing architecture is currently the heart and soul of enabling technologies like groupware and workflow systems.
The effects of future C/S technologies on our industry are going to be just as profound as the transformation we just went through, when network computing applied a giant chainsaw to monolithic (mainframe-based) applications and separated them into Client and Server (C/S) components.
A Little Background (The Evolution)
The term Client/Server has traditionally been associated with a desktop PC connected over a network to some sort of SQL-database server. In fact, the term Client/Server formally refers to a logical model that provides for a division of tasks into 'client' and 'server' layers or 'tiers'.
One-Tier ~ Monolithic (C/S) Architectures
The Information Technology (IT) industry has been practicing a simple form of Client/Server computing since the inception of the mainframe. That configuration, a mainframe host and a directly connected (unintelligent) terminal, constitutes a one-tier C/S system.
Two-Tier Client/Server Architectures
In a two-tier client/server architecture, the client communicates directly with the database server. The application or business logic either resides on the client or on the database server in the form of stored procedures.
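The two-tier shape can be sketched in a few lines of Python. This is only an illustration, using an in-memory SQLite database as a stand-in for the database server; the function name and schema are invented for the example. The point is that the business rule (the amount threshold) lives in the client, which speaks SQL directly to the database.

```python
import sqlite3

# Stand-in for the database server: an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(120.0,), (45.5,), (300.0,)])

def client_total_large_orders(threshold):
    """'Fat' client: the business rule (the threshold) lives here,
    and the client issues SQL directly to the database server."""
    rows = conn.execute(
        "SELECT amount FROM orders WHERE amount > ?", (threshold,)
    ).fetchall()
    return sum(amount for (amount,) in rows)

print(client_total_large_orders(100.0))  # 420.0
```

Any change to the threshold rule means redeploying the client program itself, which is exactly the maintenance problem described below.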
A two-tier (C/S) model first began to emerge with the applications developed for local area networks in the late eighties and early nineties, and was primarily based upon simple file-sharing techniques implemented by Xbase-style products (dBase, FoxPro, Clipper, Paradox, etc.).
The two-tier model initially involved a non-mainframe host (a network file server) and an intelligent "fat" client where most of the processing occurs. However, this configuration did not scale well to large or even mid-size information systems (more than about 50 connected clients).
Then the Graphical User Interface (GUI) emerged as the dominant environment for the desktop. With it, emerged a new slant on the early two-tier architecture. The general purpose LAN file server was replaced by a specialized database server. This model spawned the emergence of new development tools: PowerBuilder, Visual Basic, and Delphi to name a few.
Much of the processing still occurred on the "fat" clients, but now datasets of information were delivered to the client using Structured Query Language (SQL) techniques to perform requests from a database server, which simply reported the results of queries.
The more complex the application, the fatter the client becomes and the more powerful the client hardware must be to support it. The cost of adequate client technology becomes prohibitive and may defeat the application's affordability.
In addition, the network 'footprint' of fat clients is very large, so the effective bandwidth of the network, and thus the number of users who can effectively use it, is reduced.
An alternative 'thin' Client <-> 'fat' Server configuration, where the user invokes procedures stored at the database server, is also used in the two-tier architecture. The 'fat' Server model is more effective in gaining performance, because its network footprint, although still heavy, is lighter than that of the fat Client approach.
The downside is that stored procedures emphasize proprietary customization and coding, as they rely on a single vendor's procedural functionality. In addition, because stored procedures are buried within the database, every database that contains the procedure must be modified when business logic changes. In a large, distributed database, this can lead to difficult version-management issues.
In both cases, remote database transport protocols such as SQL*Net are used to carry the transaction. In these models, a 'heavy' network process is required to mediate the Client/Server interaction; this inflates the size of transactions on the network and slows query speed.
No matter which technique was used, two-tier (C/S) systems still could not scale beyond approximately 100 users. Overall, these architectures are typically NOT well suited for mission-critical applications.
** See Limitations and Misconceptions of Two-Tier Client/Server Architectures for more details.
Three-Tier Client/Server Architectures
A newer generation of Client/Server implementations takes this segmented model a step further and adds a middle tier to achieve a '3-tier' architecture.
In a three-tier or multi-tier environment, the client implements the presentation logic (thin client). The business logic is implemented on one or more application servers, and the data resides on one or more database servers.
A Multi-tier architecture is thus defined by the following three component layers:
- A front-end component, which is responsible for providing portable presentation logic;
- A middle-tier component, which allows users to share and control business logic by isolating it from the actual application;
- A back-end component, which provides access to dedicated services, such as a database server.
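The three layers can be sketched as follows. This is a minimal illustration, again using in-memory SQLite as a stand-in back end; the account schema, the `can_withdraw` rule, and the function names are all invented for the example. What matters is the division of labor: the client calls a middle-tier function and never sees SQL or the schema.

```python
import sqlite3

# --- Back-end tier: the database server (SQLite as a stand-in) ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
db.execute("INSERT INTO accounts VALUES ('alice', 250.0), ('bob', 40.0)")

# --- Middle tier: business logic, isolated from client and data ---
def can_withdraw(name, amount):
    """Business rule: a withdrawal is allowed only if it leaves a
    non-negative balance. Only this tier translates calls into SQL."""
    row = db.execute("SELECT balance FROM accounts WHERE name = ?",
                     (name,)).fetchone()
    return row is not None and row[0] >= amount

# --- Front-end tier: thin client, presentation logic only ---
def client_view(name, amount):
    ok = can_withdraw(name, amount)   # a call, not a query
    return f"withdrawal {'approved' if ok else 'denied'}"

print(client_view("alice", 100.0))  # withdrawal approved
print(client_view("bob", 100.0))    # withdrawal denied
```

If the withdrawal rule changes, only the middle-tier function is updated; neither the client code nor the database schema is touched.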
Other advantages of Multi-Tier Client/Server architectures include:
- Changes to the user interface or to the application logic are largely independent from one another, allowing the application to evolve easily to meet new requirements.
- Network bottlenecks are minimized because the application layer does not transmit extra data to the client, only what is needed to handle a task.
- When business logic changes are required, only the server has to be updated. In two-tier architectures, each client must be modified when logic changes.
- The client is insulated from database and network operations. The client can access data easily and quickly without having to know where data is or how many servers are on the system.
- Database connections can be 'pooled' and thus shared by several users, which greatly reduces the cost associated with per-user licensing.
- The organization has database independence because the data layer is written using standard SQL, which is platform independent. The enterprise is not tied to vendor-specific stored procedures.
- The application layer can be written in standard third or fourth generation languages, such as Java, C or COBOL, with which the organization's in-house programmers are experienced.
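The connection-pooling advantage above can be illustrated with a short sketch. This is a toy implementation (the `ConnectionPool` class and its sizes are invented for the example, with SQLite standing in for the database): a fixed, small number of database connections is shared by many users, so the licensing and connection cost no longer grows with the user count.

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal pool: a fixed number of database connections shared by
    many callers, decoupling user count from connection count."""
    def __init__(self, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(sqlite3.connect(":memory:",
                                           check_same_thread=False))

    def acquire(self):
        return self._free.get()       # blocks while all are in use

    def release(self, conn):
        self._free.put(conn)

pool = ConnectionPool(size=2)         # 2 connections serve many users
for user in range(5):                 # 5 "users" take turns on the pool
    conn = pool.acquire()
    result = conn.execute("SELECT 1").fetchone()[0]
    pool.release(conn)
print(result)  # 1
```

A real middle tier would add time-outs, health checks, and thread safety around checkout, but the funneling idea is the same.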
A multi-tier architecture augments traditional client/server and two-tier computing by introducing (one or more) middle-tier components.
The client system interacts with the middle-tier via a standard protocol such as HTTP or RPC. The middle-tier interacts with the backend server via standard database interfaces such as SQL, ODBC and JDBC.
This middle-tier contains most of the application logic, translating client calls into database queries and other actions, and translating data from the database into client data in return.
This placement of business logic on the application server provides scalability and isolation of the business logic in order to handle rapidly changing business needs. In addition, this allows a more open choice of database vendors.
The 3-tier architecture can be extended to N-Tiers when the middle-tier provides connections to various types of services, integrating and coupling them to the client, and to each other.
** See servers and services for more.
N-Tier Architectures (thin all over)
As the Client/Server model continued to evolve, more sophisticated multi-tier solutions appeared, where client-side computers began to operate as both clients and servers.
This latest refinement of the Client/Server model came when software developers recognized that the smaller, specialized processes were easier to design, faster to implement and cheaper to maintain.
These same principles were in turn, applied to the server side of the equation, resulting in smaller, specialized server processes.
Thin is In.
Today, the industry appears to be rapidly moving toward an N-Tier architecture. The majority of new IS development is being written as an N-Tier C/S system of some kind.
N-Tier architecture does not preclude the use of the two-tier or three-tier model. Depending on the scale of the application and the requirements for access to data, the two- or three-tiered model can often be used for departmental applications.
It doesn't make sense to force a client's reporting needs to go through the application server when there is no requirement for transactional integrity in ad-hoc reporting. In this situation, the client should be able to access the data directly from the database server.
N-Tier computing is usually considered the most effective approach because it can provide integration of current information technology into this new, more flexible model.
Research estimates that the percentage of Client/Server applications using the N-Tier model will grow almost four-fold over the next two years.
What kind of systems can benefit?
Generally, any Client/Server system can be implemented in an 'N-Tier' architecture, where application logic is partitioned among various servers.
This application partitioning creates an integrated information infrastructure which enables consistent, secure, and global access to critical data.
A significant reduction in network traffic, which leads to faster network communications, greater reliability, and greater overall performance, is also made possible in an 'N-Tier' Client/Server architecture.
Anything you can do, we can do better.
What three-tier and N-Tier client/server brings to the table is the ability to do two things that two-tier client/server can't do:
- funnel database connections and
- partition the application processing load among many servers.
In addition, by centralizing application logic in the middle tier, developers can update business logic without re-deploying the application to thousands of desktops.
N-Tier computing accomplishes a synergistic combination of computing models, by providing centralized common services in a distributed environment.
This multi-level distributed architecture employs a back-end host of some kind (mainframe, UNIX device, database/LAN server), an intelligent client, and one or more intelligent agents in the middle controlling such activities as transaction monitoring, classic On-Line Transaction Processing (OLTP), security, message handling and object store control.
This architecture typically leans heavily upon object oriented methodologies to effect as much flexibility and interchangeability as possible.
TP monitors, application partitioning tools and distributed objects can all spread the processing load among many different machines (hence, n-tier), supporting an almost unlimited number of users and processing loads.
TP monitors allow clients not only to connect to servers but also to manage transactions from client to server and back again. Tuxedo, from Novell, and Top End, from NCR Corp., for example, are different from traditional middleware.
They provide transaction-tracking services, load balancing, recovery services, and the ability to restart servers and queues automatically.
TP monitors provide another location for application processing and have led many client/server systems into the world of three-tier client/server computing. ~ See ~ [TP Monitors] for details.
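The dispatcher role a TP monitor plays can be caricatured in a few lines. This is only a sketch (the `Monitor` and `Server` classes are invented for the example, and real TP monitors do far more, including distributed commit and queue management): a middle-tier process sits between clients and servers, balances transactions round-robin, and retries on another server if one fails.

```python
import itertools

class Server:
    def __init__(self, name):
        self.name = name
        self.handled = 0
    def handle(self, txn):
        self.handled += 1
        return f"{self.name} committed {txn}"

class Monitor:
    """Toy dispatcher in the spirit of a TP monitor: it balances load
    round-robin and retries a transaction on another server if the
    chosen one raises an error."""
    def __init__(self, servers):
        self._ring = itertools.cycle(servers)
        self._n = len(servers)
    def submit(self, txn):
        for _ in range(self._n):          # try each server at most once
            server = next(self._ring)
            try:
                return server.handle(txn)
            except RuntimeError:
                continue                  # failed server: try the next
        raise RuntimeError("no server could commit " + txn)

servers = [Server("s1"), Server("s2")]
mon = Monitor(servers)
log = [mon.submit(f"txn{i}") for i in range(4)]
print(log)
# load is split evenly: s1 and s2 each handle two transactions
```

Clients see a single submission point; how many servers exist behind the monitor, and which one commits a given transaction, is invisible to them.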
Alternatives to TP Monitors
TP monitors are not the only N-Tier game in town. Application-partitioning tools and distributed objects provide similar scalability features in a Multitier model.
See: Application Partitioning / Distributed Objects
The Rules Are Changing
The N-Tier model is also focused on the link between C/S implementation strategies and re-evaluation of business rules, i.e., Business Process Reengineering (BPR).
Because legacy systems today are a hodgepodge of rules and special exceptions, the business community is looking to the object-oriented nature of N-Tier computing to "clean up" these systems and prepare industry for C/S computing into the next century.
As we face the year 2000 problem, Web TV, and network computing, we'll also face the challenge of moving traditional applications to N-Tier client/server and developing new applications that need N-Tier.
©Micromax Information Services Ltd. 1999