MBA management

Concept of Networking topics:

Types of Computer Network


A local area network (LAN) is a computer network covering a small physical area, like a home, an office, or a small group of buildings, such as a school or an airport. Current wired LANs are most likely to be based on Ethernet technology, although newer standards such as ITU-T G.hn also provide a way to create a wired LAN using existing home wiring (coaxial cables, phone lines, and power lines).

For example, a library may have a wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and to connect to the Internet. On a wired LAN, PCs in the library are typically connected by Category 5 (Cat5) cable running the IEEE 802.3 protocol through a system of interconnected devices that eventually connects to the Internet. The cables to the servers are typically Cat5e enhanced cable, which supports IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist using a different IEEE protocol: 802.11b, 802.11g, or possibly 802.11n. The staff computers can reach the color printer, checkout records, the academic network, and the Internet. All user computers can reach the Internet and the card catalog. Each workgroup can reach its local printer; note that the printers are not accessible from outside their workgroup.


A metropolitan area network (MAN) is a network that connects two or more local area networks or campus area networks together but does not extend beyond the boundaries of the immediate town/city. Routers, switches and hubs are connected to create a metropolitan area network.


A wide area network (WAN) is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries). Contrast this with personal area networks (PANs), local area networks (LANs), campus area networks (CANs), and metropolitan area networks (MANs), which are usually limited to a room, a building, a campus, or a specific metropolitan area (e.g., a city), respectively. The largest and best-known example of a WAN is the Internet. A WAN is a data communications network that covers a relatively broad geographic area (e.g., one city to another, or one country to another) and often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.


A global area network (GAN) specification is in development by several groups, and there is no common definition. In general, however, a GAN is a model for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is “handing off” the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless local area networks (WLANs).


Internetworking involves connecting two or more distinct computer networks or network segments via a common routing technology. The result is called an internetwork (often shortened to internet). Two or more networks or network segments are connected using devices that operate at layer 3 (the ‘network’ layer) of the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork.

In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants of internetwork, depending on who administers and who participates in them:

• Intranet
• Extranet
• Internet

Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the intranet or extranet is normally protected from being accessed from the Internet without proper authorization. The Internet is not considered to be a part of the intranet or extranet, although it may serve as a portal for access to portions of an extranet.


An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.


An extranet is a network or internetwork that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other, usually but not necessarily trusted, organizations or entities (e.g., a company’s customers may not be considered ‘trusted’ from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

An extranet is a private network that uses Internet protocols, network connectivity, and possibly the public telecommunication system to securely share part of an organization’s information or operations with suppliers, vendors, partners, customers, or other businesses. An extranet can be viewed as part of a company’s intranet that is extended to users outside the company (normally over the Internet). It has also been described as a “state of mind” in which the Internet is perceived as a way to do business with a preapproved set of other companies, business-to-business (B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) involves known server(s) of one or more companies communicating with previously unknown consumer users.

Briefly, an extranet can be understood as an intranet mapped onto the public Internet or some other transmission system not accessible to the general public, but managed by more than one company’s administrator(s). For example, military networks of different security levels may map onto a common military radio transmission system that never connects to the Internet. Any private network mapped onto a public one is a virtual private network (VPN). In contrast, an intranet is a VPN under the control of a single company’s administrator(s).

It has been argued that “extranet” is just a buzzword for describing what institutions have been doing for decades, that is, interconnecting with each other to create private networks for sharing information. One of the differences that characterizes an extranet, however, is that its interconnections are over a shared network rather than through dedicated physical lines. With respect to Internet Protocol networks, RFC 4364 states: “If all the sites in a VPN are owned by the same enterprise, the VPN is a corporate intranet. If the various sites in a VPN are owned by different enterprises, the VPN is an extranet. A site can be in more than one VPN; e.g., in an intranet and several extranets. We regard both intranets and extranets as VPNs. In general, when we use the term VPN we will not be distinguishing between intranets and extranets.” Even if this argument is valid, the term “extranet” is still applied and can serve as shorthand for the description above.


The Internet is a specific internetwork. It consists of a worldwide interconnection of governmental, academic, public, and private networks based upon the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW). ‘Internet’ is most commonly spelled with a capital ‘I’ as a proper noun, for historical reasons and to distinguish it from other generic internetworks.

Participants in the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

Client Server

Client-server is a computing architecture that separates a client from a server, and is almost always implemented over a computer network. Each client or server connected to the network can also be referred to as a node. The most basic type of client-server architecture employs only two types of nodes: clients and servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share files and resources. Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same. These days, clients are most often web browsers, although that has not always been the case. Servers typically include web servers, database servers, and mail servers. Online gaming is usually client-server too. In the specific case of MMORPGs, the servers are typically operated by the company selling the game; for other games, one of the players will act as the host by setting his game in server mode. The interaction between client and server is often described using sequence diagrams, which are standardized in the Unified Modeling Language.

Characteristics of a client:
• The request sender is known as the client.
• Initiates requests.
• Waits for and receives replies.
• Usually connects to a small number of servers at one time.
• Typically interacts directly with end-users using a graphical user interface.

Characteristics of a server:
• Passive (slave)
• Waits for requests from clients
• Upon receipt of requests, processes them and then serves replies
• Usually accepts connections from a large number of clients
• Typically does not interact directly with end-users

Client-server computing or networking is a model of distributed application design that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Often clients and servers operate over a computer network on separate hardware. A server is a high-performance host that runs one or more server programs and shares its resources with clients. A client does not share any of its resources, but requests a server’s content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.

Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access, and database access are based on the client-server model. For example, a web browser is a client program at the user computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you.
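A minimal sketch of this request/reply pattern, using Python's standard `socket` library. The loopback address, the OS-assigned port, and the trivial uppercase "processing" step are illustrative assumptions, not part of any real banking protocol:

```python
import socket
import threading

HOST = "127.0.0.1"

def handle_client(conn):
    """Server side: receive one request, process it, return a reply."""
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(request.upper().encode())  # trivial "processing"

# Server: create a passive listening socket; port 0 lets the OS pick a free port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind((HOST, 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

def serve_one():
    conn, _addr = server_sock.accept()  # passive: wait for a client to connect
    handle_client(conn)

threading.Thread(target=serve_one, daemon=True).start()

# Client: actively initiate the connection, send a request, await the reply.
with socket.create_connection((HOST, port)) as client:
    client.sendall(b"check balance")
    reply = client.recv(1024).decode()

print(reply)  # CHECK BALANCE
```

Note how the roles listed above appear directly in the code: the server is passive (it blocks in `accept()`), while the client initiates the session and waits for the reply.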

The client-server model has become one of the central ideas of network computing. Many business applications being written today use the client-server model. So do the Internet’s main application protocols, such as HTTP, SMTP, Telnet, and DNS. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the “monolithic” centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing.


The most basic type of client-server architecture employs only two types of hosts: clients and servers; this is sometimes referred to as two-tier. In a two-tier architecture, the client acts as one tier, and the application in combination with the server acts as the other tier.

Specific types of servers include web servers, ftp servers, application servers, database servers, name servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

Another type of network architecture is known as peer-to-peer, because each host or instance of the program can simultaneously act as both a client and a server, and because each has equivalent responsibilities and status. Peer-to-peer architectures are often abbreviated as P2P.
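The dual role of a peer can be sketched with the same standard `socket` library: each peer runs a listening socket (server role) yet also connects out to other peers (client role). Peer names, ports, and the greeting exchange are illustrative assumptions:

```python
import socket
import threading

def start_peer(name):
    """Each peer listens like a server, yet can also query others like a client."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("127.0.0.1", 0))  # port 0: the OS assigns a free port
    sock.listen(1)

    def serve():
        conn, _addr = sock.accept()
        with conn:
            caller = conn.recv(1024).decode()
            conn.sendall(f"{name} greets {caller}".encode())

    threading.Thread(target=serve, daemon=True).start()
    return sock.getsockname()[1]

def query(port, name):
    """Client role: connect to another peer and exchange one message."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(name.encode())
        return c.recv(1024).decode()

# Two peers with equivalent status; each one serves and each one requests.
port_a = start_peer("peer-a")
port_b = start_peer("peer-b")
reply1 = query(port_b, "peer-a")  # peer-a acting as a client of peer-b
reply2 = query(port_a, "peer-b")  # peer-b acting as a client of peer-a
print(reply1)  # peer-b greets peer-a
print(reply2)  # peer-a greets peer-b
```

Unlike the two-tier case, neither node here is permanently "the server"; both accept and both initiate, which is the defining property of P2P.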

Both client-server and P2P architectures are in wide usage today. Details may be found in Comparison of Centralized (Client-Server) and Decentralized (Peer-to-Peer) Networking.

• In most cases, client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage of this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware of and unaffected by that change.

• All the data are stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.

• Since data storage is centralized, updates to that data are far easier to administer than what would be possible under a P2P paradigm. Under a P2P architecture, data updates may need to be distributed and applied to each peer in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.

• Many mature client-server technologies are already available that were designed to ensure security, friendliness of the user interface, and ease of use.

• It functions with multiple clients of different capabilities.

• Traffic congestion on the network has been an issue since the inception of the client-server paradigm. As the number of simultaneous client requests to a given server increases, the server can become overloaded. Contrast that with a P2P network, whose overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.

• The client-server paradigm lacks the robustness of a good P2P network. Under client-server, should a critical server fail, clients’ requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.

Overview of Middleware

Two fundamental trends influence the way we conceive and construct new computer and information systems. The first is that information technology of all forms is becoming highly commoditized, i.e., hardware and software artifacts are getting faster, cheaper, and better at a relatively predictable rate. The second is the growing acceptance of a network-centric paradigm, where distributed applications with a range of quality of service (QoS) needs are constructed by integrating separate components connected by various forms of communication services. The nature of this interconnection can range from

1. The very small and tightly coupled, such as avionics mission computing systems to
2. The very large and loosely coupled, such as global telecommunications systems.

The interplay of these two trends has yielded new architectural concepts and services embodying layers of middleware. These layers are interposed between applications and commonly available hardware and software infrastructure to make it feasible, easier, and more cost effective to develop and evolve systems using reusable software. Middleware stems from recognizing the need for more advanced and capable support, beyond simple connectivity, to construct effective distributed systems. A significant portion of middleware-oriented R&D activities over the past decade has focused on:

1. The identification, evolution, and expansion of our understanding of current middleware services in providing this style of development and

2. The need for defining additional middleware layers and capabilities to meet the challenges associated with constructing future network-centric systems.

These activities are expected to continue forward well into this decade to address the needs of next-generation distributed applications. During the past decade we’ve also benefited from the commoditization of hardware (such as CPUs and storage devices) and networking elements (such as IP routers). More recently, the maturation of programming languages (such as Java and C++), operating environments (such as POSIX and Java Virtual Machines), and enabling fundamental middleware based on previous middleware R&D (such as CORBA, Enterprise JavaBeans, and .NET) are helping to commoditize many software components and architectural layers. The quality of commodity software has generally lagged behind hardware, and more facets of middleware are being conceived as the complexity of application requirements increases, which has yielded variations in maturity and capability across the layers needed to build working systems. Nonetheless, recent improvements in frameworks, patterns, and development processes have encapsulated the knowledge that enables commercial off-the-shelf (COTS) software to be developed, combined, and used in an increasing number of real-world applications, such as e-commerce web sites, consumer electronics, avionics mission computing, hot rolling mills, command and control planning systems, backbone routers, and high-speed network switches.

The trends outlined above are now yielding additional middleware challenges and opportunities for organizations and developers, both in deploying current middleware-based solutions and in inventing and shaping new ones. To complete our overview, we summarize key challenges and emerging opportunities for moving forward, and outline the role that middleware plays in meeting these challenges.


Growing focus on integration rather than on programming. There is an ongoing trend away from programming applications from scratch toward integrating them by configuring and customizing reusable components and frameworks. While it is possible in theory to program applications from scratch, economic and organizational constraints, as well as increasingly complex requirements and competitive pressures, are making it infeasible to do so in practice. Many applications in the future will therefore be configured by integrating reusable commodity hardware and software components implemented by different suppliers, together with the common middleware substrate needed to make it all work harmoniously.

Demand for end-to-end QoS support, not just component QoS. The need for autonomous and time-critical behavior in next-generation applications necessitates more flexible system infrastructure components that can adapt robustly to dynamic end-to-end changes in application requirements and environmental conditions. For example, next-generation applications will require the simultaneous satisfaction of multiple QoS properties, such as predictable latency/jitter/throughput, scalability, dependability, and security. Applications will also need different levels of QoS under different configurations, environmental conditions, and costs, and multiple QoS properties must be coordinated with and/or traded off against each other to achieve the intended application results. Improvements in current middleware QoS and better control over underlying hardware and software components, as well as additional middleware services to coordinate these, will all be needed.

The increased viability of open systems. Shrinking profit margins and increasing shareholder pressure to cut costs are making it harder for companies to invest in long-term research that does not yield short-term payoffs. As a result, many companies can no longer afford the luxury of internal organizations that produce completely custom hardware and software components with proprietary QoS support. To fill this void, standards-based hardware and software are becoming increasingly strategic to many industries. This trend also requires companies to transition away from proprietary architectures to more open systems in order to reap the benefits of externally developed components, while still maintaining an ability to compete with domain-specific solutions that can be differentiated and customized. The refactoring of much domain-independent middleware into open-source releases based on open standards is spurring the adoption of common software substrates in many industries. It is also emphasizing the role of domain knowledge in selecting, organizing, and optimizing appropriate middleware components for requirements in particular application domains.

Increased leverage for disruptive technologies leading to increased global competition. One consequence of the commoditization of larger bundled solutions built around middleware-integrated components is that industries long protected by high barriers to entry, such as telecom and aerospace, are more vulnerable to disruptive technologies and global competition, which drive prices to marginal cost. For example, advances in high-performance COTS hardware are being combined with real-time and fault-tolerant middleware services to simplify the development of predictable and dependable network elements. Systems incorporating these network elements, ranging from PBXs to high-speed backbone routers, and ultimately carrier-class switches and services built around these components, can now use standard hardware and software components that are less expensive than legacy proprietary systems, yet are becoming nearly as dependable.

Potential complexity cap for next-generation systems. Although current middleware solves a number of basic problems with distribution and heterogeneity, many challenging research problems remain. In particular, problems of scale, diversity of operating environments, and the required level of trust in the sustained and correctly functioning operation of next-generation systems have the potential to outstrip what can be built. Without significantly improved capabilities in a number of areas, we may reach a point where the limits of our starting points put a ceiling on the size and levels of complexity of future systems. Without an investment in fundamental R&D to invent, develop, and popularize the new middleware capabilities needed to realistically and cost-effectively construct next-generation network-centric applications, the anticipated move towards large-scale distributed “systems of systems” in many domains may not materialize. Even if it does, it may do so with intolerably high risk because of inadequate COTS middleware support for proven, repeatable, and reliable solutions. The additional complexity forced into the realm of application development will only exacerbate the already high rate of project failures exhibited in complex distributed system domains. The preceding discussion outlines the fundamental drivers that led to the emergence of middleware architectures and components in the previous decade, and that will by necessity lead to more advanced middleware capabilities in this decade. The rest of this section explores these topics in more depth, with detailed evaluations of where we are, and where we need to go, with respect to middleware.


Middleware is systems software that resides between the applications and the underlying operating systems, network protocol stacks, and hardware. Its primary role is to

1. Functionally bridge the gap between application programs and the lower-level hardware and software infrastructure in order to coordinate how parts of applications are connected and how they interoperate and

2. Enable and simplify the integration of components developed by multiple technology suppliers. When implemented properly, middleware can help to:
- Shield software developers from low-level, tedious, and error-prone platform details, such as socket-level network programming.
- Amortize software lifecycle costs by leveraging previous development expertise and capturing implementations of key patterns in reusable frameworks, rather than rebuilding them manually for each use.
- Provide a consistent set of higher-level network-oriented services, such as logging and security, that have proven necessary to operate effectively in a networked environment.

Over the past decade, various technologies have been devised to alleviate many complexities associated with developing software for distributed applications. Their successes have added a new category of systems software to the familiar operating system, programming language, networking, and database offerings of the previous generation. Some of the most successful of these technologies have centered on distributed object computing (DOC) middleware. DOC is an advanced, mature, and field-tested middleware paradigm that supports flexible and adaptive behavior. DOC middleware architectures are composed of relatively autonomous software objects that can be distributed or collocated throughout a wide range of networks and interconnects. Clients invoke operations on target objects to perform interactions and invoke functionality needed to achieve application goals. Through these interactions, a wide variety of middleware-based services are made available off-the-shelf to simplify application development. Aggregations of these simple, middleware-mediated interactions form the basis of large-scale distributed system deployments.
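The DOC style can be sketched with Python's standard-library XML-RPC modules, a simple stand-in for heavier DOC middleware such as CORBA. The middleware hides the socket programming, marshalling, and dispatch; the `AccountService` class, its method, and the sample balances are hypothetical names invented for this illustration:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# A server-side object whose methods are exposed remotely; the middleware
# layer handles networking and request dispatch on its behalf.
class AccountService:
    def balance(self, account_id):
        # hypothetical lookup; a real service would query a database
        return {"acct-1": 250, "acct-2": 75}.get(account_id, 0)

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(AccountService())
port = server.server_address[1]  # port 0 above let the OS pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client invokes an operation on the remote object as if it were local;
# no socket-level code appears anywhere in the application logic.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.balance("acct-1")
print(result)  # 250
```

Compared with the raw socket example earlier, note how the middleware shields the developer from low-level platform details, which is exactly the first role listed above.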
Copyright © 2015

Review Questions
  • 1. What is client-server architecture? Describe its role in detail as a link between service providers and service requesters. Discuss its various advantages and disadvantages.
  • 2. What is middleware? What is its role?
  • 3. What are the various networks available for use and what are their features?
