Client/Server Computing

(For Senior IT Management)

Final Report

Prepared by Albert W.C. Yau & Thomas Y.C. Lee

Project Supervisor: Frank Kriwaczek


1. Introduction

2. Overview of Client-Server Computing

2.1 Evolution of Client-Server Computing

2.2 Configurations in Client-Server Architecture
    2.2.1 Client

    2.2.2 Server

    2.2.3 Middleware

    2.2.4 Butler Pyramid Model of Client-Server Computing

    2.2.5 The Four Dominant Client/Server Application Models

2.3 Characteristics and Features in Client-Server Computing

2.4 Main Applications

3. Other Issues in Client-Server Computing Development

3.1 Importance of Network

3.2 Open System and Standards

3.3 Software Trends

4. Applying client/server in businesses

4.1 Analysis of your businesses

4.2 Reasons for adopting client/server technology

4.3 Benefits obtained from adopting client/server technology

4.4 A sensible approach towards client/server technology

4.5 Limitations for the client/server technology

4.6 Golden Rules of Client/Server Implementation

4.7 Benefits of having IT in our businesses

5. Conclusions

6. Glossary

7. References

1. Introduction

The 1970s and 1980s were the era of centralized computing, with the IBM mainframe occupying over 70% of the world's computer business. Business transactions, activities and database retrieval, queries and maintenance were all performed by the omnipresent IBM mainframe. We are now in the transition phase towards Client-Server Computing, a totally new concept and technology that is re-engineering the entire business world. Some have called it the wave of the future - the computing paradigm of the 1990s.

You may start to wonder how Client-Server Computing differs from traditional mainframe computing and what the benefits of employing it in business are. The main emphasis of Client-Server Architecture is to allow a large application to be split into smaller tasks and to distribute those tasks between hosts (server machines) and desktops (client machines) on the network. The client machine usually manages the front-end processes such as GUIs (Graphical User Interfaces), dispatches requests to server programs, validates data entered by the user and also manages the local resources that the user interacts with, such as the monitor, keyboard, workstation, CPU and other peripherals. The server, on the other hand, fulfils the client request by performing the service requested. After the server receives requests from clients, it executes database retrieval and updates, manages data integrity and dispatches responses to client requests.

The goals of Client-Server Computing are to allow every networked workstation (client) and host (server) to be accessible, as needed by an application, and to allow all existing software and hardware components from various vendors to work together. When these two conditions are met, the environment can be successful and the benefits of client/server computing, such as cost savings, increased productivity, flexibility, and resource utilization, can be realized.

2. Overview of Client-Server Computing

2.1 Evolution of Client-Server Computing

The evolution of Client-Server Computing has been driven by business needs, as well as by the increasing cost of host (mainframe and midrange) machines and their maintenance, the decreasing cost and increasing power of micro-computers, and the increased reliability of LANs (Local Area Networks).

Over the past twenty years there have been dramatic improvements in hardware and software technologies for micro-computers. Micro-computers have become affordable for small businesses and organisations, and at the same time their performance has become more and more reliable. Mainframe prices, by contrast, have fallen at a much slower rate, and little further development has been achieved with mainframes.

The following are the improvements made by micro-computers:

  • Hardware: The speed of desktop microprocessors has grown exponentially, from 8MHz 386-based computers to 100MHz Pentium-based machines. These mass-produced microprocessors are cheaper and more powerful than those used in mainframe and midrange computers. Meanwhile, the capacity of main memory in micro-computers has been quadrupling every three years; a typical main memory size is 16 Megabytes nowadays. Besides, the amount of backup storage, such as hard disks and CD-ROMs, that micro-computers can support has put an almost unlimited amount of data within reach of end-users.

  • Software: The development and acceptance of GUIs (Graphical User Interfaces) such as Windows 3.1 and OS/2 has made the PC working environment more user-friendly, and users learn new application software more quickly in a graphical environment. Besides GUIs, the use of multithreaded processing and relational databases has also contributed to the popularity of Client-Server Computing.

    2.2 Configurations in Client-Server Computing

    Client-Server Computing is divided into three components: a Client Process requesting a service, a Server Process providing the requested service, and Middleware in between to handle their interaction.

    2.2.1 Client

    A Client Machine usually manages the user-interface portion of the application, validates data entered by the user and dispatches requests to server programs. It is the front-end of the application that the user sees and interacts with. Besides, the Client Process also manages the local resources that the user interacts with, such as the monitor, keyboard, workstation, CPU and other peripherals.

    2.2.2 Server

    The Server Machine, on the other hand, fulfils the client request by performing the service requested. After the server receives requests from clients, it executes database retrieval and updates, manages data integrity and dispatches responses to client requests. The server-based process may run on another machine on the network; the server then provides both file system services and application services. In some cases, another desktop machine provides the application services. The server acts as a software engine that manages shared resources such as databases, printers, communication links, or high-powered processors. The main aim of the Server Process is to perform the back-end tasks that are common to similar applications.

    The simplest forms of server are disk servers and file servers. With a file server, the client passes requests for files or file records over a network to the file server. This form of data service requires large bandwidth and can slow a network with many users. More advanced forms are database servers, transaction servers and application servers.

    2.2.3 Middleware

    Middleware allows applications to communicate transparently with other programs or processes regardless of location. The key element of middleware is the NOS (Network Operating System), which provides services such as routing, distribution, messaging and network management. An NOS relies on communication protocols to provide specific services. Once the physical connection has been established and transport protocols chosen, a client-server protocol is required before the user can access the network services. A client-server protocol dictates the manner in which clients request information and services from a server and also how the server replies to those requests.
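    As an illustration of this request/reply idea, the sketch below models a toy client-server protocol in Python: the client encodes a request naming a service and its arguments, and the server dispatches it to a matching handler and returns an encoded reply. The service names and the JSON message format are our own illustrative choices, not those of any real NOS.

```python
import json

# A toy client-server protocol: the client encodes a request that names a
# service and carries its arguments; the server decodes the request, looks
# up a matching handler and returns an encoded reply.

def encode_request(service, **args):
    # Client side: build the wire message for one request.
    return json.dumps({"service": service, "args": args})

def handle_request(raw, handlers):
    # Server side: decode, dispatch to the named service, encode the reply.
    req = json.loads(raw)
    handler = handlers.get(req["service"])
    if handler is None:
        return json.dumps({"status": "error", "reason": "unknown service"})
    return json.dumps({"status": "ok", "result": handler(**req["args"])})

# Two illustrative services registered on the server.
handlers = {
    "echo": lambda text: text,
    "add": lambda a, b: a + b,
}

reply = json.loads(handle_request(encode_request("add", a=2, b=3), handlers))
print(reply)  # {'status': 'ok', 'result': 5}
```

    In a real system the encoded messages would travel over the transport protocol already chosen, but the shape of the exchange - named request out, status and result back - is the same.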

    2.2.4 Butler Pyramid Model of Client-Server Computing

    On 19th January, at the Butler Group Client/Server Forum in London, Martin Butler, Chairman of the Butler Group, suggested a new framework for implementing a client/server strategy. This is a five-layer model called the Butler Group VAL (Value Added Layers) Model. The basic structure resembles a pyramid, with Infrastructure and Middleware at the bottom, followed by Applications, Repository and Business Model on top.

    The characteristics of each layer are summarised as follows:

  • Layer 1 - Infrastructure Layer

    The infrastructure layer is composed of all those components which are passive and do not perform a business function. Examples belonging to this category are computer operating systems, networks, user interfaces and database management systems. As far as the IT role is concerned, infrastructure is the layer many IT managers understand the most.

  • Layer 2 - Middleware

    Middleware allows applications to communicate transparently with other programs or processes regardless of location. It is the means of mapping applications to the resources they use. Middleware is the key to integrating heterogeneous hardware and system software environments, providing the level of integration which many organizations are seeking. Typical middleware looks after network connections, database connections and the interaction between database and application.

  • Layer 3 - Applications

    Applications are the active components which execute work for the organization, and it is here that many companies invest large amounts of time, effort and money. Applications which are not of key importance are increasingly purchased as ready-made packages, while applications that are vital to increasing company competitiveness in the industry will be developed in-house.

  • Layer 4 - Repository

    The role of the repository is to isolate a business model/specification from the tools and technology used to implement it.

  • Layer 5 - Business Models

    The business model should be independent of all the technologies used to implement it, and be transportable to whichever hardware and software environment is most appropriate. It will increasingly be based on object-oriented methods, and there already exists a generation of tools which support object modelling.

    2.2.5 The Four Dominant Client/Server Application Models

    Having had a deeper look into the terms and architectures of client/server technology, let's consider the dominant application models available. Nowadays there are four client/server application models that are widely used in the market: Structured Query Language (SQL) databases, Transaction Processing (TP) monitors, groupware and distributed objects. Each of them is capable of creating its own complete client/server applications with its own tools. Moreover, each also introduces its own favoured form of middleware (all this will be discussed further later). But first, why are there different models instead of just one, and what are the advantages and disadvantages of having just one particular model?

    The reason why we need different models for different applications is that each of them has its own advantages and disadvantages, and sometimes one model performs better than the others in a particular situation. Furthermore, standardising the whole market on one particular model would not only discourage vendors from developing new (and better) models, but also put off potential small companies from competing with the gigantic ones. Having said that, standardising the market on one particular model does have the advantage of concentrating development on software based on that model, so improvements can be achieved much faster and, as a result, the costs of running, implementing and servicing would be reduced significantly.

    Having recognised the need for different application models, the following sections are dedicated to those models; at the end of this section the four models are compared to see which of them will best suit our future needs in general.

    SQL databases

    SQL (Structured Query Language) has been the standard data description and access language for relational databases for almost a decade, making it the core technology for client/server computing and the dominant force in the client/server landscape today. It began as a declarative language for manipulating data using a few simple commands; however, as SQL applications moved to more demanding client/server environments, it became clear that just managing data wasn't enough. There was also a need to manage the functions that manipulated the data. Stored procedures, sometimes called "TP-lite", met this need.

    A stored procedure is a named collection of SQL statements and procedural logic that is compiled, verified and stored in a server database. Sybase pioneered the concept of stored procedures, and now virtually all SQL vendors support stored procedures along with other SQL extensions. The extensions are used to enforce data integrity, perform system maintenance and implement the server side of an application's logic.
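    To make the idea concrete, the sketch below emulates a stored procedure in Python using SQLite, chosen only so the example is self-contained. SQLite has no true stored procedures, so the named batch is registered in a dictionary standing in for the server-side catalogue that a real DBMS such as Sybase would keep; the table, procedure name and integrity rule are all illustrative.

```python
import sqlite3

# Emulated "stored procedure": a named batch of SQL statements plus
# procedural integrity logic, kept at the server. A dict stands in for the
# server's procedure catalogue; in a real DBMS the batch would be compiled
# and stored inside the database itself.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])

def sp_transfer(conn, src, dst, amount):
    # One named procedure: several statements and an integrity rule in one call.
    (balance,) = conn.execute(
        "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
    if balance < amount:
        raise ValueError("insufficient funds")  # integrity enforced at the server
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                 (amount, src))
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                 (amount, dst))
    conn.commit()

procedures = {"transfer": sp_transfer}   # stand-in for the server catalogue

procedures["transfer"](conn, 1, 2, 30.0)
print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 70.0), (2, 80.0)]
```

    The point of the arrangement is that the client issues one call naming the procedure, instead of shipping every SQL statement across the network.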

    The problem with SQL standards is that there are just too many of them. At least eight efforts are underway to create a standard based on SQL (ANSI alone is responsible for three standards, either published or in progress). And since SQL standards seem to lag vendor implementations by some years, almost everything that's interesting in client/server database technology is non-standard. This includes database administration, data replication, stored procedures, user-defined data types and the formats and protocols on networks. As a result, the lack of one widely accepted standard drives up the cost of databases and related tools and makes maintaining a client/server environment complex and difficult.

    Although SQL suffers from all these shortcomings, so many people still use it because it is easy to create client/server applications in single-vendor/single-server environments. Many GUI tools make SQL applications easy to build and, most of all, it is familiar to the majority of programmers and users.

    Transaction Processing (TP) monitors

    In a simple client/server system, many clients issue requests and a server responds. This may work for 50 or 100 clients; however, as the number of clients increases, the number of requests increases as well and eventually reaches the threshold limit of the system, i.e. the system crashes. Sadly, this is the case for most operating systems. Another technology, the TP monitor, was developed to solve this problem.

    Originally, TP monitor meant teleprocessing monitor - a program that multiplexed many terminals (clients) to a single central server. Over time, TP monitors took on more than just multiplexing and routing functions, and TP monitors came to mean transaction processing. TP monitors manage processes by breaking complex applications into pieces of code called transactions. A transaction can also be viewed as a set of actions that obeys the four so-called ACID properties - Atomic, Consistent, Isolated, and Durable. As for the middleware, TP monitors use some form of transactional RPC or peer-to-peer middleware.
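    The multiplexing aspect can be sketched in a few lines of Python: many client requests share a small, fixed pool of server workers instead of each client holding its own server process, so resource use stays bounded as the client population grows. This shows only the funnelling idea; a real TP monitor adds routing, transaction management and recovery.

```python
from concurrent.futures import ThreadPoolExecutor

# Multiplexing many clients onto a few workers, in the spirit of a TP
# monitor: 200 client requests are served by a pool of just 4 workers.

def serve(request):
    # Stand-in for one unit of work performed on behalf of a client.
    return "done:" + request

requests = ["client-%d" % i for i in range(200)]  # 200 clients...
with ThreadPoolExecutor(max_workers=4) as pool:   # ...share 4 server workers
    replies = list(pool.map(serve, requests))

print(len(replies), replies[0])  # 200 done:client-0
```

    Doubling the number of clients here only lengthens the queue; it does not double the server resources consumed, which is exactly the property that lets a TP monitor scale past the simple one-process-per-client design.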

    TP monitors are probably overkill in single-server/single-vendor departmental applications, which is probably one of the reasons why they have been so slow to take off. Moreover, vendors haven't yet come to grips with the realities of the shrink-wrapped software market, and they haven't been able to explain the advantages TP monitors offer. The modern client/server incarnations of TP monitors haven't dominated the Ethernet era, but they'll definitely play a major role in the intergalactic era. It is not unreasonable to assume that every machine on the network will have a TP monitor to represent it in global transactions. The intergalactic era will make the advantages of TP monitors increasingly self-evident.


    Groupware

    Groupware comprises five foundation technologies geared to support collaborative work: multimedia management, work flow, E-mail, conferencing and scheduling. Groupware isn't just another downsized mainframe technology; it is a new model of client/server computing. It helps users to collect unstructured data (e.g. text, images, faxes) into a set of documents.

    Groupware has the advantage of document database management and makes effective use of E-mail, which is its preferred form of middleware. E-mail is one of the easiest ways for electronic processes to communicate with humans. Asynchronous by nature, it's a good match for the way businesses really work. E-mail is ubiquitous, with over 50 million globally interconnected electronic mailboxes.

    Using work-flow to manage business processes is another revolutionary aspect of groupware. In a work-flow, data passes from one program to another in structured or unstructured client/server environments. Modern work-flow software electronically simulates real-world collaborative activity. Work can be routed in ways that correspond to interoffice communications. A good work-flow package lets you specify acceptance criteria for moving work from one stage to another, so work-flow brings the information to the people who can act on it and the work gets done by the right people.
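    The stage-and-criteria idea can be sketched as follows; the stage names and acceptance criteria below are purely illustrative, not taken from any particular work-flow product.

```python
# Work-flow routing sketch: a piece of work moves through named stages,
# and an acceptance criterion decides whether it may advance to the next.

stages = ["drafted", "reviewed", "approved"]
criteria = {
    "reviewed": lambda doc: len(doc["text"]) > 0,          # must not be empty
    "approved": lambda doc: doc.get("signed_off", False),  # needs sign-off
}

def advance(doc):
    # Move the document one stage forward if the next stage accepts it.
    i = stages.index(doc["stage"])
    if i + 1 >= len(stages):
        return False
    nxt = stages[i + 1]
    if criteria[nxt](doc):
        doc["stage"] = nxt
        return True
    return False

doc = {"stage": "drafted", "text": "Q3 order form", "signed_off": False}
advance(doc)          # drafted -> reviewed (text is non-empty)
advance(doc)          # blocked: no sign-off yet, stays in "reviewed"
print(doc["stage"])   # reviewed
```

    The acceptance criteria play the gatekeeping role described above: work only reaches the next person when it is actually ready for them.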

    Groupware provides many of the components we need for creating intergalactic client/server applications. The technology is also starting to encroach on its competitors' turf.

    Distributed objects

    Distributed-object technology promises the most flexible client/server systems. This is because it encapsulates data and business logic in objects that can roam anywhere on networks, run on different platforms, talk to legacy applications by way of object wrappers, and manage themselves and the resources they control. They are designed to become the currency of the intergalactic client/server era.

    When it comes to standards, distributed-object technology is way ahead of all other client/server approaches. Since 1989, the OMG (Object Management Group) has been busy specifying the architecture for an open software bus on which object components written by different vendors can interoperate across networks and operating systems.

    The secret to OMG's success is that it defined how to specify an interface between a component and the object bus. Specifications are written in IDL (Interface Definition Language), independent of any programming language. IDL becomes the contract that binds client to server components. The beauty of IDL is that it can easily be used to encapsulate existing applications. This way, existing applications need not be rewritten in order to take full advantage of distributed-object technology.
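    As a loose analogy (not OMG IDL itself, which is language-neutral), the Python sketch below declares a contract separately from any implementation and uses an object wrapper so a hypothetical legacy routine can satisfy it without being rewritten. The interface name and the legacy function are invented for illustration.

```python
from abc import ABC, abstractmethod

# Loose analogy for an IDL contract: the interface is declared once,
# independently of any implementation, and an "object wrapper" lets an
# existing legacy routine satisfy it without being rewritten.

class Quoter(ABC):                      # the contract, declared up front
    @abstractmethod
    def quote(self, symbol):
        ...

def legacy_get_price(sym):              # pre-existing code we cannot change
    return {"IBM": 101.5}.get(sym, 0.0)

class LegacyQuoteWrapper(Quoter):       # wrapper binds legacy code to the contract
    def quote(self, symbol):
        return legacy_get_price(symbol)

client = LegacyQuoteWrapper()           # the client sees only the Quoter interface
print(client.quote("IBM"))  # 101.5
```

    The client is written against the contract alone, so the legacy implementation behind the wrapper can later be replaced without touching client code - the same property IDL gives components on the object bus.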

    In addition to defining the object bus, OMG has specified an extensive set of ORB-related services for creating and deleting objects, accessing them by name and defining complex relationships among them. Later, OMG also defined a comprehensive set of services for transactional objects, so that an ordinary object can be created and maintained (e.g. made transactional, lockable and persistent by having it inherit the appropriate services) using simple IDL entries.

    Being reckoned as the currency of the future intergalactic client/server era, distributed objects have to be powerful enough to replace all other client/server models in many respects. For instance, TP monitors currently have the advantage of better handling of transactions, concurrency and scalability; so what can distributed objects do to overtake TP monitors? The OMG anticipated these problems long ago. CORBA 2.0 aims to solve them by defining key object services, including transactions, concurrency and relationships. Microsoft, with help from Digital Equipment, has a rival solution known as COM (Common Object Model) to solve the same problems.

    2.3 Characteristics and Features in Client-Server Computing

    Although there are various configurations, hardware and software platforms, and even network protocols in Client-Server Architecture, they all generally possess certain characteristics and features that distinguish them from the traditional mainframe computing environment.

  • Consists of networked webs of small, powerful machines (both servers and clients)

    Client-Server Computing uses local processing power, the power of the desktop platform. It changes the way an enterprise accesses, distributes and uses data. With this approach, data is no longer under the tight control of Senior Managers and MIS (Management of Information Systems) staff; it is readily available to middle-ranking personnel and staff, who can be actively involved in decision-making and operations on behalf of the company. The company becomes more flexible and responds faster to the changing business environment outside. In addition, if one machine goes down, the company will still function properly.

  • Open Systems

    Another feature of Client-Server Computing is open systems, which means you can configure your systems, both software and hardware, from various vendors as long as they stick to a common standard. In this way a company can tailor its system to its particular situation and needs, picking and choosing the most cost-effective hardware and software components to suit its tasks. For example, you can grab the data and run it through a spreadsheet from your desktop, using the brand of computer and software tools that you're most comfortable with, and get the job done in your own way.

  • Modularity

    Since we are mixing software and hardware of different natures together as a whole, all the software and hardware components are modular in nature. This modularity allows the system to expand and modernise to meet requirements and needs as the company grows. You can add or remove a client or machine, implement new application software and even add hardware features without affecting the operation and functioning of the Client-Server System as a whole. Besides, as new computing platforms emerge, you can evaluate new environments and system components in a modular fashion.

  • Cost Reduction and Better Utilisation of Resources

    Potential cost savings prompt organisations to consider Client-Server Computing. The combined base price of hardware (machines and networks) and software for client/server systems is often a tenth of that of mainframe computing. Furthermore, another feature of Client-Server Computing is its ability to link existing hardware and software applications and utilise them in an efficient way.

  • Complexity

    The environment is typically heterogeneous and multivendor: the hardware platform and operating system of client and server are not usually the same. The biggest challenge in successfully implementing the system is putting together this complex assembly of hardware and software from multiple vendors. We therefore need expertise not just in software, hardware or networks but in all these fields, together with an understanding of their interdependencies and interconnections. Sometimes when the system is down it is extremely difficult to identify the bug or mistake, as there are several culprits that might have caused the problem. Furthermore, we have to spend extra effort and time training IS professionals to maintain this new environment in geographically dispersed locations.

    2.4 Main Applications

    Client-Server applications can also be categorized by the functions they support. The architecture of Client-Server Computing promotes group interaction, whether it is messages, mail, shared data, or shared applications. Users can be "closer" to one another, and users of an application can be anywhere on the network.

    There are three main types of Client-Server Applications:

  • Database Access

    This is the most important kind of application in Client-Server Computing. GUI applications are written to access corporate data. These query-oriented applications provide a single window onto the data of the organisation. In some cases these applications are read-only; in others they are read-write. The benefits of these systems include ease of use and increased worker productivity. Productivity with these systems is measured by how easily workers can access the data they need to do the job: with a networked web of workstations that workers can access at any node, their productivity increases tremendously. This kind of database system also provides transparent and consistent access to data wherever it is located.

  • Transaction-Processing Applications

    Typical OLTP (Online Transaction Processing) applications, also known as mission-critical applications, include order entry, inventory, and point-of-sale systems. This kind of mission-critical system must run continuously; if it is unavailable even for a brief moment, the organization will experience severe repercussions. Examples are stock exchange systems, air traffic control networks and airline reservation systems. The transactions are generated at the client and sent to the server for processing. The server may, in turn, send one or more operations to other servers. For a transaction to be considered complete, all operations must be performed successfully; if any operation of the transaction cannot be completed, the operations that have already taken effect must be reversed, using a process known as commit and rollback.
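    The all-or-nothing rule can be sketched with SQLite in Python. The order-entry scenario, table names and column names below are our own illustration: a failed step rolls back every operation of the transaction, so no partial update survives.

```python
import sqlite3

# All-or-nothing transactions: an order entry touches two tables, and if any
# step fails the whole transaction is rolled back, so no partial update
# survives.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")
conn.execute("INSERT INTO stock VALUES ('widget', 5)")
conn.commit()

def place_order(item, qty):
    try:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (item, qty))
        (available,) = conn.execute(
            "SELECT qty FROM stock WHERE item = ?", (item,)).fetchone()
        if available < qty:
            raise ValueError("insufficient stock")
        conn.execute("UPDATE stock SET qty = qty - ? WHERE item = ?",
                     (qty, item))
        conn.commit()       # every operation succeeded: make them permanent
        return True
    except ValueError:
        conn.rollback()     # undo every operation, including the order row
        return False

place_order("widget", 3)   # succeeds: order recorded, stock 5 -> 2
place_order("widget", 9)   # fails: its order row is rolled back too
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```

    Notice that the failed order leaves no trace: the rollback removes the order row that was inserted before the stock check failed, exactly the behaviour a mission-critical OLTP system requires.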

  • Office Systems

    Many organizations are employing Client-Server Computing to improve interpersonal communications, both internally and externally. Many organizations are using their linked LANs as a network for enterprise-wide mail systems and workgroup applications. In this way, personnel in the organization can improve coordination and actively participate in the strategy formulation and decision-making of the company.

    3. Other Issues in Client-Server Computing Development

    3.1 Importance of Network

    In order to connect client and server machines together, and to make full use of the resources contained in each and every machine, we must devise a network system that is up to the task. Networks must be transparent to the users. The network and the distributed applications running on it must be as reliable as if they were running on a single computer. In addition, the network must provide self-healing capabilities that can reroute network traffic around broken cables and failed components, and be flexible enough to react to business-related changes in its environment.

    This seems so straightforward, but with Client-Server Computing, LANs are connecting to other LANs, servers, and mainframes. Things are not straightforward anymore.

    LANs used to be simple too. But now there are three different structures (LAN topologies: Star, Ring & Bus), at least five competing standards for transmissions, and two standards for the information required to manage the network. LANs have become so complex that they require their own operating system.

    The network continues to be the least understood and most critical component in an organization's information structure. Most organizations committed to Client-Server Computing agree that linking LANs is not the place to save money. We should not try to link incompatible LANs with different platforms, and the software, hardware and operating systems used in the network should be thoroughly tested before implementation.

    3.2 Open System and Standards

    One of the important features of Client-Server Computing is open systems. Open systems conform to a broad set of formal standards and support platforms from a variety of vendors. Open systems demand the adoption of standards throughout the organization. To make open systems successful, the standards must be acceptable to both the user community and the system manufacturers, and be adopted at all levels of the organization.

    In order to make all these components work together as a complex system, we must have some kind of standards to adhere to. We address standards in four areas of Client-Server Computing: platforms (software and hardware), networks, middleware and applications. Standards specifications should be developed by consensus and be publicly available. Standards should also be comprehensive and consistent, specifying the interfaces, services and supporting formats needed to accomplish interoperability and portability.

    Currently, there are several consortia working on developing standards for open systems.

  • OSF (Open Software Foundation), a non-profit consortium of computer vendors, software developers and chip suppliers, develops standards-based software intended to become widely accepted technology.

  • UNIX International, a consortium of computer vendors and software developers, promotes the establishment of UNIX and related standards and the development and licensing of UNIX products.

  • OMG, an international organization of system vendors, software developers and users, advocates the deployment of object management technology in the development of software. By applying a common framework to all object-oriented applications, organizations will be able to operate in heterogeneous environments.

  • CORBA (Common Object Request Broker Architecture), devised by OMG, DEC, NCR, HP and Sun, is a new mechanism that allows objects (applications) to call each other over a network.

  • The SQL Access Group is an industry consortium working on the definition and implementation of specifications for heterogeneous SQL data access using accepted international standards.

    3.3 Software Trends

    In order to stay abreast of new trends in client/server, we must start by looking at what is happening in software trends as a whole. The 1960s and 1970s were the era of centralized computing, with the IBM mainframe occupying over 70% of the world's computer business. Throughout the 1980s, many functions once performed by the omnipresent IBM behemoths were systematically taken over by PCs. This gravitational shift toward personal computers has had significant consequences for the computer industry.

    The PC shift forced corporations to reset their IT organizations in a number of ways. We are currently in the client-server phase of software development and will eventually move towards a truly distributed computing environment. From an IT manager's point of view, distributed computing can be used as a tool for business process reengineering, corporate right-sizing, and customer responsiveness. The new software developments can be characterized as follows:

    - Distributed: The main operating force in the software industry is the drive toward distributed computing, currently in its Client-Server phase.

    - Multiphased transition: Client-Server is merely an intermediate step toward distribution. The ultimate goal is collaborative computing based on peer-to-peer networks.

    - Enabling technologies: Prominent supporting technologies are object-oriented components, document-centric software architectures, data warehouse technology, standards, and the end-user programming trend.

    On the other hand, software development cannot afford to become set in its ways without taking into consideration, and meeting, business needs such as the following:

    - Isolated desktop software solutions are no longer sufficient. The software industry must respond to consumer demand for portable, interoperable, distributed software solutions or become extinct like its mainframe-only predecessors.

    - Open systems for computing: software vendors are under pressure to establish interoperable, distributed, easy-for-end-user-programmers-to-use tools and standards upon which enterprise-wide architectures can be built.

    - Business process reengineering, increased consumer expectations and networked distributed hardware, as well as numerous second-order complications, are driving software complexity through the roof.

    The software industry will, out of necessity, respond to the above pressures by forming forums, consortia and back-room alliances in order to establish market-leading architectural infrastructure standards, interfaces and middleware, for the express purpose of shifting the balance of power in their favour.

    4. Applying client/server in businesses

    4.1 Analysis of your businesses

    Before adopting client/server computing, it is important that the business as a whole is considered fully in all aspects. When analysing a business, there are three views of your company:

    A functional model

      It reflects the organisational responsibilities and the way in which the people who use the system view their work.

    A process model

      It states the processes of your company, such as making goods, taking orders, delivering goods, etc. It is often NOT surprising if the results for the functional structure and the process model do not match! (Maybe this is one of many reasons why your company is not that competitive.)

    An information model

      It states the information which your company needs to function properly.

    4.2 Reasons for adopting client/server technology

    No technology has ever risen as rapidly as client/server. Its rise has been driven by changes in business needs. Nowadays, businesses need responsive, flexible, integrated and comprehensive applications to support the complete range of business processes. However, such applications cannot be produced easily on older systems based on older technologies. The problems with the older technologies are that:

    • applications were built in isolation,

    • applications were implemented as monolithic systems,

    • applications were complex, and

    • the supporting technology was based on a centralised control model.

    As a result, the applications provided are just not robust enough for today's needs. All this has driven client/server technology to grow at its current rate.

    4.3 Benefits obtained from adopting client/server technology

    Some people suggest that client/server has been over-sold without having found its true position. This may have been true in the past, but it is now changing. Client/server is fast becoming the enabling factor for business-process-reengineered organisations because of its flexibility and speedy application development times: it takes around six months to develop a client/server application, compared to around two years for a mainframe version.

    By adopting client/server technology, organisations have changed from steep hierarchies to flattened hierarchies. Also, network management is replacing vertical management. As a result, the organisation runs more efficiently and hence makes more profits!

    As a whole, the development and implementation of client/server applications is more complex, more difficult and more expensive than that of traditional single-process applications. However, they are still badly needed because the business demands the increased benefits.

    4.4 A sensible approach towards client/server technology

    Introducing the technology on the customer side of an organisation has to be handled with caution: client/server is yet another new technology, and new technology takes time to get used to. It is therefore better to implement client/server in a small, but important, part of the business. This way, organisations can test the ground and scale up. It would also be unwise to go straight into the front office without first having done customer relationship reengineering. For instance, Ladbrokes implemented client/server in the back office before rolling it out to the front counter. They started with a small application, which gave them time to minimize the risk, so when they introduced the telephone betting system they knew how to build client/server, had established common objects and screen designs, and had generated their very own standards.

    All this raises the timescales, so people may be tempted to implement client/server in one big step and see the benefits more quickly. But, given the volatile nature of this method, it is very rare to find a big bang implementation these days. Barclays Bank plc was one of the few companies that took the quick route, in October '94, when 10,000 users in 1,100 branches were granted overnight access to a 25-million-entry customer database. However, it may be wise for ordinary businesses not to adopt this approach, as it does not allow for subsequent business changes. It is better to go with short, sharp release cycles: figure out what is essential to the business and then expand it subsequently. And for those who are still using a very old-fashioned IT infrastructure, it is equally important to keep client and product data centralised, rather than distributed, in order to provide easier access for end users. Unfortunately, distributed technology carries greater risk than client/server in their case.

    A new breed of site is formed by running mainframe technology alongside client/server architectures. The mainframe continues to run administrative and general back-office applications, while client/server is used to help maximise business advantage. It may even be more cost effective to keep the two environments separate and not try to connect them: interfacing to the mainframe may slow down business processes because of access bottlenecks and other difficulties. Accordingly, the most commonly used data should be kept on the mainframe, while data used by departments should be kept in the client/server environment. A lot of people go wrong by replacing all their systems; instead, you should bring legacy systems into the new fold by making them serve the client/server environment.

    4.5 Limitations for the client/server technology

    Early adopters discovered that client/server still has its limitations. For instance, when the cost of running IT installations was examined, it was found that, for the same number of end users, a typical client/server environment is four times more expensive to operate than a mainframe one. Over half of the cost of such an operation comes from indirect staff costs! In its '94 annual report, Does Client/Server Computing Mean Higher Cost?, the research company OTR Group suggested that support costs can be lowered by removing disks from PCs, restricting end-user interface functions and giving training on application packages.

    It is good to let end users have a certain amount of freedom; however, the danger of end users having too much freedom could be an issue, especially as client/server gives them greater flexibility.

    Being reckoned a "new" technology, some conservative potential customers may still want to hang on to their old systems until client/server technology has fully developed - by which time yet another new technology will have been invented.

    Another severe problem is that standards still get in the way. For instance, there are just too many standards (but not one universal standard) for SQL databases. This lack of one widely accepted standard drives up the cost of running the databases, as well as making them more complex and difficult to maintain.

    4.6 Golden Rules of Client/Server Implementation

    There have been so many client/server horror stories that organisations may be forgiven for distrusting the technology. But client/server can present significant benefits if implemented CORRECTLY. Benefits such as easier application development, flexibility and better response to customers all add up to attractive advantages. The following are the "Golden Rules of Client/Server Implementation":

    Fix the business first

      A lot of organisations go to client/server to fix their businesses without having thought it through. They think that by implementing the technology they'll solve their business problems. Make sure the whole company is committed.


    Start small and scale up

      It is best to start at a non-critical departmental level and scale up. Address a small number of users first, not 5,000. This way, companies can scale up easily, as they already have a working set, rather than doing everything at the very beginning.

    Define the scope of the application

      Design for the general case up front rather than generalise later. The goals must be clear!

    Project Management

      Ask obvious questions. Make test plans and specifications of the software and purchase against these specifications.

    Data modelling

      Client/server has to be modelled against the whole business, not just the applications.

    Configuration management

      Particularly in a distributed, client/server set-up, you need to be a lot more rigorous about knowing the software versions and platforms which are in operation.

    Operating support

      System management can be tricky in distributed environments. Try to track emerging standards from large hardware and networking vendors.

    Do not underestimate the complexity of client/server development

      Again, think before you leap. Changing to a client/server environment is neither easy nor small-scale; give it plenty of thought before actually carrying it out, otherwise the project is just going to halt in the middle of nowhere.

    4.7 Benefits of having IT in our businesses

    It is not just client/server technology that benefits the business world; the whole IT industry is helping businesses to grow soundly. With help from IT, consistent information can be used across all applications; IT systems can also support all activity performed by users, not just a part of it. In a nutshell, IT and client/server technology (of course) benefit the business very much, but only if they are used correctly. Never jump into them without really knowing what they are and whether they are going to do your business any good.

    5. Conclusions

    This report is aimed at senior IT management who are considering planning and implementing Client-Server architectures for their organization. We predict that it is inevitable that Client-Server Computing will be widely accepted and implemented throughout the business world in the years to come. This technology shift towards Client-Server Computing is mainly driven by the increasingly complex situations and environments in business in recent years, such as global marketing, remote on-line sales distribution, de-centralised corporate strategy, etc. All these demand a quick and swift response, easy access to data and information, and better coordination among people at all levels both within and outside the organization. Client-Server Computing addresses all these problems and headaches and therefore becomes a main priority in the minds of IT management.

    On the other hand, we cannot make too dramatic a change to the existing mainframe systems, hardware and software, otherwise it will cause serious repercussions for the information infrastructure of organizations. It takes time to train IT staff and to test new software, hardware and network systems; we must keep both the existing and the new system running at the same time before the new Client-Server system can take over from the old mainframe computing environment.

    In Client-Server Computing we employ open systems, which allow different hardware and software platforms to work as a whole. The complexity involved must not be underestimated, and expertise in all fields, be it software, hardware, network or middleware, must be adequately acquired.

    This report also gives an insight into the leading client/server application models. It is not difficult to see that the object community may be well on its way to building an object infrastructure that can meet the demands of the intergalactic client/server era. Distributed objects with the proper component packaging and infrastructure may provide the ultimate building blocks for creating client/server solutions, including suites of cooperating business objects.

    And once distributed-object technology takes off, it may subsume all other forms of client/server computing, including TP monitors, SQL databases and groupware. Distributed objects can do it all, and do it BETTER. Objects can help us break large monolithic applications into more manageable components that coexist on the intergalactic bus. They are also the only hope for managing the millions of software entities that will live on intergalactic networks.

    What probably concerns readers most are the client/server business issues. It is clear that client/server technology brings many benefits to the business, as well as enhancing its ability to expand and compete. Sadly, all this can only be achieved at the expense of higher running and installation costs and more complex and difficult system maintenance.

    It is also very important that client/server is approached and adopted wisely; otherwise the business is going to end up in the trash even if you have the best technology in the world. There are mainly two ways to approach client/server computing. One is the step-by-step approach, which works by converting the business into client/server environments bit by bit. This way, the business should normally change to client/server successfully; however, the approach suffers a major drawback: it sometimes takes too long to convert. On the other hand, one can adopt the big bang approach, doing everything in one go. By doing so, the business would be rewarded with all the client/server benefits immediately, without any delay (provided it works). Having said that, there is always a higher probability of making an error without realising it until it is too late. Both approaches achieve the same goal, but the decision of which one to take sometimes depends on the practical situation. Following the right approach with the correct implementation will always yield the best client/server benefits for the business.

    Client/server is not the only way to solve business problems, and it has its own limitations, one being that it costs a great deal. All in all, the promise that client/server is a cure for all ills may still hold true provided the correct procedures are followed. As long as client/server computing is implemented wisely, it can bring competitive advantages.

    6. Glossary


    API - Application Program Interface

      The interface (calling conventions) by which an application program accesses operating system and other services. An API is defined at source code level and provides a level of abstraction between the application and the kernel (or other privileged utilities) to ensure the portability of the code.

      An API can also provide an interface between a high level language and lower level utilities and services which were written without consideration for the calling conventions supported by compiled languages. In this case, the API's main task may be the translation of parameter lists from one format to another and the interpretation of call-by-value and call-by-reference arguments in one or both directions.
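      As a hedged illustration of this idea (a modern Python sketch, not from the report; the function name `file_size` is our own invention), an API is one documented call that hides a lower-level service from the application:

```python
import os
import tempfile

def file_size(path):
    """A minimal, hypothetical API: one documented call hides the
    underlying os.stat() service and the structure it returns."""
    return os.stat(path).st_size

# The application programs against file_size(); if the underlying
# mechanism changes, only this wrapper needs updating.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    name = f.name
size = file_size(name)
os.remove(name)
print(size)
```

      Here the application never sees the stat structure at all, which is what allows the code beneath the API to change without breaking callers.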


    Best-effort delivery

      Characteristic of network technologies that do not provide reliability at link levels. Best-effort delivery systems work well with the Internet because the Internet protocols assume that the underlying network provides unreliable connectionless delivery. The combination of Internet protocols IP (Internet Protocol) and UDP (User Datagram Protocol) provides best-effort delivery service to application programs.
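      The UDP service described above can be sketched in a few lines of modern Python (a convenience not available in the report's era of examples; over the loopback interface the datagram will almost always arrive, but nothing in the protocol guarantees it):

```python
import socket

# Best-effort, connectionless delivery: no connection set-up, no
# acknowledgement, and no guarantee the datagram arrives at all.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram", ("127.0.0.1", port))   # fire and forget

data, _ = receiver.recvfrom(1024)
print(data.decode())
sender.close()
receiver.close()
```

      Any reliability the application needs (retransmission, ordering) must be built on top, which is exactly what TCP does above IP.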
    Business Process Reengineering

      Reengineering is the organisational process required to align people, processes and technology with strategies to achieve business integration. It can also be thought of as taking a business in its current state and forming an organisational and operational blueprint to redirect skills, policies, information (data), cultural values, organisational structures, processing and incentives towards targeted improvements.

    CSMA - Carrier Sense Multiple Access

      A characteristic of network hardware that operates by allowing multiple stations to contend for access to a transmission medium by listening to see if it is idle.

    Client/Server Model

      The model of interaction in a distributed system in which a program at one site sends a request to a program at another site and awaits a response. The requesting program is called a client; the program satisfying the request is called a server. It is usually easier to build client software than server software.
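      As a hedged sketch of this request/response interaction (modern Python with a toy protocol of our own; the "service" here is simply upper-casing the request):

```python
import socket
import threading

# Server: wait for a request, perform the service, dispatch the response.
def serve(listener):
    conn, _ = listener.accept()
    with conn:
        request = conn.recv(1024).decode()      # receive the client's request
        conn.sendall(request.upper().encode())  # perform service, respond

listener = socket.socket()
listener.bind(("127.0.0.1", 0))                 # port 0: let the OS pick a port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# Client: send a request to the server and await the response.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello server")
reply = client.recv(1024).decode()
client.close()
print(reply)
```

      Note the asymmetry the entry describes: the client is a dozen straightforward lines, while a production server would also need concurrency, error handling and recovery.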
    Client Process

      A client process usually manages the user-interface portion of the application, validates data entered by the user, and dispatches requests to server programs. It is the front end of the application that the user sees and interacts with. The client process also manages the local resources that the user interacts with, such as the monitor, keyboard, workstation, CPU and other peripherals.
    CORBA - Common Object Request Broker Architecture

      CORBA 1.1, introduced in 1991, defined the IDL and APIs that enable client/server object interaction within a specific implementation of an ORB. CORBA 2.0 specifies how ORBs from different vendors can interoperate.

    DBMS - database management system

      A database management system (DBMS) is an extremely complex set of software programs that controls the organisation, storage and retrieval of data (fields, records and files) in a database. It also controls the security and integrity of the database. The DBMS accepts requests for data from the application program and instructs the operating system to transfer the appropriate data.

      When a DBMS is used, information systems can be changed much more easily as the organisation's information requirements change. New categories of data can be added to the database without disruption to the existing system.

    DCE - Distributed Computing Environment

      An architecture consisting of standard programming interfaces, conventions and server functionalities (eg. naming, distributed file system, remote procedure call) for distributing applications transparently across networks of heterogeneous computers. DCE is promoted and controlled by the Open Software Foundation (OSF).


    Ethernet

      Ethernet is an example of a well-known network based on CSMA/CD technology. CSMA/CD (Carrier Sense Multiple Access with Collision Detection) uses CSMA access combined with a mechanism that allows the hardware to detect when two stations simultaneously attempt transmission.

      Ethernet is a popular local area network technology invented by Xerox Corporation's Palo Alto Research Centre. An Ethernet itself is a passive coaxial cable; the interconnections contain all the active components. Ethernet is a best-effort delivery system that uses CSMA/CD technology (as mentioned above). Xerox Corporation, Digital Equipment Corporation and Intel Corporation developed and published the standard for 10 Mbps Ethernet. Originally, the coaxial cable specified for Ethernet was a 1/2 inch diameter heavily shielded cable. However, many office environments now use a lighter coaxial cable sometimes called thinnet or cheapnet. It is also possible to run Ethernet over shielded twisted pair cable.

    Electronic Mail

      A popular workgroup application, which acts as a cross between a postal and a telephone service, all done electronically across the network.

    File and Print Servers

      These are the two key functions of a network server. They provide a central source of data and applications, as well as access to printers, to all users on the network. It is also possible to dedicate specific PCs or print devices to either function.

    GUI - Graphical User Interface

      The use of pictures rather than just words to represent the input and output of a program. A program with a GUI runs under some windowing system (eg. The X Window System, Microsoft Windows, Acorn RISC OS). The program displays certain icons, buttons, dialogue boxes etc. in its window on the screen and the user controls it by moving a pointer on the screen (typically controlled by a mouse) and selecting certain objects by pressing buttons on the mouse while the pointer is pointing at them.

    Groupware

      A logical group of computer users, which may or may not be a single department or office, is known as a workgroup. Multiple workgroups form a workgroup computing environment. Applications developed for use within this type of environment have become known as groupware, Novell's GroupWise being an example.


    Hub

      The centre of star (the topology of 10base-T Ethernet LANs) or Token Ring networks. Also, in "intelligent" form, the centre of mixed networks, where, for example, Ethernet, Token Ring and FDDI LANs can be combined.

    IDL - Interface Definition Language

      Used to write specifications for distributed objects. It is independent of any programming language.

    LAN - local area network

      A data communications network which is geographically limited (typically to a 1 km radius) allowing easy interconnection of terminals, microprocessors and computers within adjacent buildings.


    Middleware

      Middleware allows applications to communicate transparently with other programs or processes regardless of location. In the book Essential Client/Server Survival Guide, authors Robert Orfali, Dan Harkey and Jeri Edwards developed a simple model of client/server in which the middleware building block runs on both the client and server sides of an application. The block is further divided into four categories of middleware: transport stacks, network operating systems (NOSes), distributed system management (DSM) and service-specific middleware. NOSes and transport stacks provide the basic communications foundation for all middleware. DSM runs on every node in a client/server network; it requires its own middleware on top of the NOS to carry messages between managing stations and managed stations. The service-specific middleware depends on the application model.
    MIS -Management Information System

      A computer system, usually based on a mainframe or minicomputer, designed to provide management personnel with up-to-date information on an organisation's performance.


    Network

      A series of interconnected computers and devices. A LAN is an example.


    OMG - Object Management Group

      A consortium of object vendors.
    ORB - Object Request Broker

      It's the object interconnection bus. Clients are insulated from the mechanisms used to communicate with, activate, or store server objects. CORBA 1.1, introduced in 1991, defined the IDL and APIs that enable client/server object interaction within a specific implementation of an ORB. CORBA 2.0 specifies how ORBs from different vendors can interoperate.
    ODBC - Open DataBase Connectivity

      A Microsoft standard for accessing different database systems. There are interfaces for Visual Basic, Visual C++, SQL and the ODBC driver pack contains drivers for the Access, Paradox, dBase, Text, Excel and Btrieve databases. ODBC 1.0 was released in September 1992.
    OLTP - On-Line Transaction Processing

      The processing of transactions by computers in real time.
    OODB - object-oriented database

      A system offering DBMS facilities in an object-oriented programming environment. Data is stored as objects and can be interpreted only using the methods specified by its class. The relationship between similar objects is preserved (inheritance) as are references between objects. Queries can be faster because joins are often not needed (as in a relational database). This is because an object can be retrieved directly without a search, by following its object id.
    OSF - Open Software Foundation

      A foundation created by nine computer vendors, (Apollo, DEC, Hewlett-Packard, IBM, Bull, Nixdorf, Philips, Siemens and Hitachi) to promote "Open Computing". It is planned that common operating systems and interfaces, based on developments of Unix and the X Window System will be forthcoming for a wide range of different hardware architectures. OSF announced the release of the industry's first open operating system - OSF/1 on 23 October 1990.
    OSI - Open Systems Interconnection

      The OSI Reference Model of network architecture and a suite of protocols (a protocol stack) to implement it were developed by ISO in 1978 as a framework for international standards in heterogeneous computer network architecture. The architecture is split into seven layers, from lowest to highest:
      1. physical layer,
      2. data link layer,
      3. network layer,
      4. transport layer,
      5. session layer,
      6. presentation layer,
      7. application layer.


    Peer-to-Peer Network

      This is a form of network which allows all PCs on the LAN to act as file or print servers as well as clients, so you can have a mix of dedicated and non-dedicated servers and dedicated clients. Since a non-dedicated server is neither as secure nor as resilient as a dedicated server, peer-to-peer networks are best suited to smaller workgroups.

    RPCs - Remote Procedure Calls

      A protocol which allows a program running on one host to cause code to be executed on another host without the programmer needing to code explicitly for this. RPC is an easy and popular paradigm for implementing the client-server model of distributed computing. An RPC is implemented by sending a request message to a remote system (the server) to execute a designated procedure using the arguments supplied, with a result message returned to the caller (the client). There are many variations and subtleties in the various implementations, resulting in a variety of different (incompatible) RPC protocols.
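      A minimal sketch using Python's standard xmlrpc module (one RPC implementation among the many incompatible ones mentioned above; the `add` procedure is our own example):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server: register an ordinary function as a remotely callable procedure.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda x, y: x + y, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client: the proxy makes the remote call look like a local one; the RPC
# machinery marshals the request and unmarshals the result message.
client = ServerProxy("http://127.0.0.1:%d" % port)
result = client.add(2, 3)
print(result)
server.shutdown()
```

      The client code contains no explicit message handling, which is the whole point of the paradigm: the call syntax is that of a local procedure.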

    Server Process

      A server process fulfills the client's request by performing the service requested. After the server receives a request from a client, it executes database retrievals and updates, manages data integrity, and dispatches responses to client requests.
    SQL- Structured Query Language

      A language which provides a user interface to relational database management systems, developed by IBM in the 1970s for use in System R. SQL is the de facto standard, as well as being an ISO and ANSI standard. It is often embedded in other programming languages.
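      As a hedged example of SQL embedded in a host language (using Python's built-in sqlite3 module, a modern convenience; the table and data are invented for illustration):

```python
import sqlite3

# An in-memory relational database; the SQL strings are the user interface
# to the DBMS, embedded in the host language as the entry describes.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customer (name) VALUES (?)", ("Ladbrokes",))
db.execute("INSERT INTO customer (name) VALUES (?)", ("Barclays",))

# A declarative query: we state WHAT we want, not HOW to retrieve it.
rows = db.execute("SELECT name FROM customer ORDER BY name").fetchall()
names = [r[0] for r in rows]
print(names)
db.close()
```

      The declarative nature of the SELECT statement is what lets different vendors implement the retrieval differently, and also why the dialect differences noted in section 4.5 cause maintenance problems.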


    10base-T

      A form of Ethernet cabling, based on UTP, which has a star topology.
    Token Ring

      The main alternative LAN type to Ethernet, popularised by IBM, on which many of its systems are standardised.

    UTP - Unshielded Twisted Pair

      An inexpensive form of cabling used with Arcnet, Ethernet (10base-T) and Token Ring networks. Telephone cable is a very cheap form of UTP.

    WAN - Wide Area Network

      A means of interconnecting two separate LANs or other computer systems, such as offices in London and Edinburgh. There are many methods available, using lines provided by BT or Mercury, for example.

    Workgroup

      A logical group of computer users, which may or may not be a single department or office. Multiple workgroups form a workgroup computing environment. Applications developed for use within this type of environment have become known as groupware, Novell's GroupWise being an example.

    7. References

    Orfali, Robert, et al. Essential Client/Server Survival Guide. New York: Van Nostrand Reinhold.

    Berson, Alex. Client/Server Architecture. New York: McGraw-Hill, ©1992.

    Smith, Patrick. Client/Server Computing. Carmel, Ind.: SAMS, ©1992.

    Computing Archive, Department of Applied Science, Johns Hopkins University.

    Byte Magazine, Issue 6 1993 & Issue 4 1995.

    INSPEC - CD-ROM titles by the IEE.

    IEEE Computer Society Magazine, April-May, p. 49-55.

    Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications, IEEE.

    Comer, Douglas. Internetworking with TCP/IP: Principles, Protocols, and Architecture (Chapter 17). Prentice Hall, ©1988.

    PC Week, 10 Jan 95, p. 20-30.

    Dewire, Dawna Travis. Client/Server Computing. New York: McGraw-Hill, ©1992.

    Last modified by Albert Yau and Thomas Lee on 12th June, 1995.