MBA management

Quality in Software Engineering

One of the primary challenges in Software Engineering is to consistently maintain quality levels that satisfy customers. The meaning of quality varies from person to person; it is a relative term. For instance, a customer may want 100% bug-free software, which is practically impossible, but defect levels can be reduced to be on par with Six Sigma. As Freeman (Oulu 94) states, "Quality should be the driver of the entire development system".

Quality in the eyes of the customer is bug-free software; in the eyes of the developer, software that is efficient and reliable; and from the development manager's perspective, software that is efficient, cost effective, reliable and maintainable, with a satisfied customer. Development of a quality product is the key to customer satisfaction. Quality should be measured from the early stages of product development, i.e. in the early stages of the project life cycle.

We defined metrics earlier as a measure of quality. Metrics can be used to improve software quality and productivity, and are used to measure software resources, processes, tools and products. Software metrics provide overall information about product development, such as cost, time and phase-wise information. In fact, METRICS can be expanded as:

M - Measure
E - Everything
T - That
R - Results
I - In
C - Customer
S - Satisfaction

Metrics - Brief Overview

Metrics should be simple, objective, easily obtainable and robust, and should provide valid information.

Metrics are a building block of quality management and belong primarily to the Check (control) phase of the PDCA cycle. Goodman defines metrics as "the continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products."

What is Measurement?

Measurement is the act of measuring or calculating something. In our day-to-day activities we measure constantly: when we drink a cup of coffee, the cup is the measure for the amount of coffee; when we travel to the office, the distance is a measure. Fenton defines measurement as "a process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules".

When we take measures into Software Engineering, quality is the most important perspective.

Types of Measurements:

Basic Measurements (Project Measures):

• Effort
• Quality
• Schedule

Derived Measurements:

• Product Measures
• Process Measures
• Measurement of support systems


Metrics are the form through which management collects data, validates, analyzes and computes it, and arrives at decisions to improve process and product quality. This ultimately leads to improved customer satisfaction. To get the right information for decision making, the organization should first arrive at its goals, and then, through proper questioning, gather the information/data and process it into management information. This approach is called GQM, i.e. the Goal, Question, Metrics program. GQM (Basili and Rombach 1988) is a framework for developing a metrics program. Reliable metrics provide evidence for continuous improvement in the process cycle, allow cost-benefit analysis, and support decision making. The GQM technique can be used in any phase of software development.

The reason to measure the software process, products and resources is to characterize, evaluate, predict and improve the development process. Measures are like check gates, which help the project go according to plan. The GQM paradigm consists of three steps:

1. Generate a set of goals based on the needs of the organization.

2. Derive a set of questions in quantifiable terms.

3. Establish the metrics.

The first stage in the GQM process is to generate a set of goals based upon the need of the organization to achieve a quality end product.
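The three GQM steps above can be sketched as a simple goal-question-metric tree. The concrete goal, questions and metric names below are hypothetical examples chosen for illustration, not part of the GQM definition:

```python
# A minimal GQM (Goal-Question-Metric) tree as a plain Python dict.
# Goal, questions and metric names are invented for illustration.
gqm_program = {
    "goal": "Improve the reliability of delivered software",
    "questions": [
        {
            "question": "How many defects escape to the customer?",
            "metrics": ["post-release defects per KLOC"],
        },
        {
            "question": "How effective are our reviews?",
            "metrics": ["defects found in review / total defects found"],
        },
    ],
}

def metrics_for_goal(program):
    """Collect every metric that traces back to the goal."""
    return [m for q in program["questions"] for m in q["metrics"]]
```

Walking the tree top-down guarantees that every metric collected answers a question that serves a stated goal, which is the whole point of GQM.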

Metrics Classification

The basic measures of an SDLC are size, effort, schedule, cost and quality. Metrics (derived measures) are generally classified into:

• Product measures.
• Process measures.
• Project measures.
• Measurement of Support Services.
• Customer Satisfaction Measurements.

Product Metrics

Product metrics measure the software as a product from inception to the delivery stage. They also deal with product characteristics, i.e. attributes such as source code, and measure the requirements, the size of the program, the design, and so on. If we define product metrics in the early stages of development, they are of great help in controlling the development process. Product metrics include size metrics, complexity metrics, Halstead's product metrics and quality metrics.

Size Metrics

Size metrics measure the size of the product/software. Measuring function points and the bang metric in the early stages of a project gives more benefit.

Lines of code: There is no universal rule for counting LOC. The most common definition of LOC seems to be "count any line that is not a blank or comment line", regardless of the number of blank or comment lines. In other words, LOC includes executable code and data definitions but excludes comments. LOC counts are obtained for the total product and for the new and changed code of a new release.

Size Oriented Metrics

Size-oriented metrics are a direct measure of the software and the process by which it was developed. These metrics can include effort (time), money spent, KLOC (thousands of lines of code), pages of documentation created, errors, and the number of people on the project.

The output of the work performed is measured in terms of LOC, function points, etc., and changes to the size of the software are measured as the project progresses through its life cycle. For maintenance projects, the following are measured:

• Complexity of bugs/tickets
• Number of bugs/tickets

In size metrics, the input required to achieve the desired output is measured and the following details are captured:

• Effort spent to perform the various stages of the project.
• Management effort of the project.
• Deviation from the projected effort.
• For maintenance projects: average turnaround time (TAT).

From this data some simple size oriented metrics can be generated.

Productivity = KLOC / person-month

Quality = defects / KLOC

Cost = cost / KLOC

Documentation = pages of documentation / KLOC
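These four ratios can be sketched in a few lines of Python; the project figures used below are invented for illustration:

```python
def size_oriented_metrics(kloc, person_months, defects, cost, doc_pages):
    """Compute the simple size-oriented ratios (all normalized per KLOC)."""
    return {
        "productivity_kloc_per_pm": kloc / person_months,
        "quality_defects_per_kloc": defects / kloc,
        "cost_per_kloc": cost / kloc,
        "doc_pages_per_kloc": doc_pages / kloc,
    }

# Hypothetical project: 12 KLOC, 24 person-months, 30 defects,
# $120,000 total cost, 360 pages of documentation.
m = size_oriented_metrics(kloc=12, person_months=24, defects=30,
                          cost=120_000, doc_pages=360)
```

For this hypothetical project the productivity is 0.5 KLOC per person-month and the quality is 2.5 defects per KLOC.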

Size-oriented metrics are not universally accepted. The use of LOC as a key measure is the center of the conflict. Proponents of the LOC measure claim:

• It is an artifact of all software engineering processes and can easily be counted.
• Many existing metrics use LOC as an input.
• A large body of literature and data exists which is predicated on LOC.

Opponents of the LOC measure claim:

• It is language dependent.
• Well-designed short programs are penalized.
• LOC does not work well with non-procedural languages.
• Its use in planning is difficult because the planner must estimate LOC before the design is completed.

Function Oriented Metrics

Function-oriented metrics are indirect measures of software which focus on functionality and utility. The first function-oriented metric was proposed by Albrecht, who suggested a productivity measurement approach called the function point method. Function points (FPs) are derived from countable measures and assessments of software complexity.

Five characteristics are used to calculate function points: the number of user inputs, the number of user outputs, the number of user inquiries (on-line inputs), the number of files, and the number of external interfaces (machine-readable interfaces such as tape or disk).

The weighted values are summed and function points are calculated using:

FP = count- total x (0.65 + 0.01 x SUM (Fi))

where the Fi are complexity adjustment values. Once calculated, FPs may be used in place of LOC as a measure of productivity, quality, cost, documentation, and other attributes.
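The FP formula can be sketched as follows. The weights applied to each of the five counts are the commonly cited "average" complexity weights, and the fourteen Fi adjustment values used in the example are invented for illustration:

```python
# Commonly cited "average" complexity weights for the five FP characteristics.
AVERAGE_WEIGHTS = {
    "user_inputs": 4,
    "user_outputs": 5,
    "user_inquiries": 4,
    "files": 10,
    "external_interfaces": 7,
}

def function_points(counts, fi_values):
    """FP = count-total x (0.65 + 0.01 x SUM(Fi))."""
    count_total = sum(AVERAGE_WEIGHTS[k] * n for k, n in counts.items())
    return count_total * (0.65 + 0.01 * sum(fi_values))

# Hypothetical system: 20 inputs, 15 outputs, 8 inquiries, 4 files,
# 2 external interfaces; each of the 14 adjustment factors rated 3.
fp = function_points(
    {"user_inputs": 20, "user_outputs": 15, "user_inquiries": 8,
     "files": 4, "external_interfaces": 2},
    fi_values=[3] * 14,
)
```

Here the count-total is 241 and the adjustment factor is 1.07, giving roughly 258 function points.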

Function points were originally designed to be applied to business information systems. Extensions, called feature points, have been suggested which may enable this measure to be applied to other software engineering applications. Feature points accommodate applications in which the algorithmic complexity is high, such as real-time, process control, and embedded software applications.

Feature points are calculated as function points are, with one additional software characteristic: algorithms. An algorithm is a bounded computational problem such as inverting a matrix, decoding a bit string, or handling an interrupt.

Complexity Metrics

Using complexity metrics, we can control the software development process and measure the complexity of the program.

Some of the types of complexity metrics are:

Cyclomatic Complexity: Given any computer program, we can draw its control flow graph, G, where each node corresponds to a block of sequential code and each arc corresponds to a branch or decision point in the program. Using graph theory we can find the complexity of the program as V(G) = E - N + 2, where E is the number of edges and N is the number of nodes in the graph. For a structured program, V(G) is simply the number of decision points in the program text plus one. McCabe's cyclomatic complexity metric is related to programming effort, debugging performance and maintenance effort. Myers and Stetter improved upon this measure by providing upper and lower bounds, and a function F(H) can be computed as a measure of the flow complexity H of the program.
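Cyclomatic complexity can be computed directly from an edge list using V(G) = E - N + 2. The small flow graph below, for a function containing a single if/else, is an invented example:

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected control flow graph."""
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph of a function with one if/else decision:
#   entry -> test; test -> then; test -> else; then -> exit; else -> exit
nodes = ["entry", "test", "then", "else", "exit"]
edges = [("entry", "test"), ("test", "then"), ("test", "else"),
         ("then", "exit"), ("else", "exit")]
vg = cyclomatic_complexity(edges, nodes)  # 5 - 5 + 2 = 2
```

One decision point gives V(G) = 2, matching the "decision points plus one" shortcut for structured programs.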

Information Flow: The information flow of a program can be used as a measure of complexity. Henry and Kafura proposed the formula C = procedure length x (fan-in x fan-out)^2, where fan-in is the number of local flows of information entering a procedure and fan-out is the number of local flows leaving it.
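A sketch of the Henry and Kafura measure, using the commonly published form length x (fan-in x fan-out) squared; the procedure data below is invented:

```python
def henry_kafura(length, fan_in, fan_out):
    """Information flow complexity: length x (fan_in x fan_out)^2."""
    return length * (fan_in * fan_out) ** 2

# Hypothetical procedure: 25 lines long, 3 information flows in, 2 out.
c = henry_kafura(length=25, fan_in=3, fan_out=2)  # 25 * (3*2)^2 = 900
```

Note that if either fan-in or fan-out is zero the product collapses to zero, which is exactly the weakness discussed later under design metrics.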

Halstead’s Product Metrics

Halstead proposed a set of metrics applicable to many software products, covering the program vocabulary (n), the length (N), the total effort E, and the development time T for the software product.

He defined the program vocabulary n = n1 + n2, where n1 is the number of unique operators in the program and n2 is the number of unique operands. The program length is N = N1 + N2, where N1 and N2 are the total occurrences of operators and operands respectively. The program volume is V = N log2 n, which is a pure number.
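The Halstead quantities can be sketched by counting operator and operand tokens. The token lists below stand in for a real lexer and describe an invented one-line statement:

```python
import math

def halstead(operators, operands):
    """Vocabulary n, length N and volume V = N * log2(n)."""
    n1, n2 = len(set(operators)), len(set(operands))  # unique counts
    N1, N2 = len(operators), len(operands)            # total occurrences
    n = n1 + n2    # program vocabulary
    N = N1 + N2    # program length
    return n, N, N * math.log2(n)

# Tokens of a tiny hypothetical statement:  z = x + x * y
ops = ["=", "+", "*"]
opnds = ["z", "x", "x", "y"]
n, N, V = halstead(ops, opnds)  # n = 6, N = 7
```

For this statement the vocabulary is 6, the length is 7, and the volume is 7 x log2(6), roughly 18.1.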

Requirement and Analysis Metrics

For all projects, metrics are collected depending on the type of SDLC model chosen. For any project to be successful, the most critical phase is requirement definition and analysis; most projects fail because of improper handling of this phase. It is therefore all the more critical that proper control measures are in place for this phase.

Requirements are the initial stage of any project, and it is important to collect good requirements as early as possible in the project life cycle. Using requirements engineering, we can decide whether the requirements are good or bad. The following statement describes what a good requirement should be:

“A statement of system functionality that can be validated, that must be met or possessed by a system to solve a customer problem or to achieve a customer objective and that is qualified by measurable conditions and bounded by constraints” {AMOO}.

Normally, SRS (Software Requirements Specification) errors fall into two types:

Knowledge errors: These occur when people do not know what the requirements are. They can be minimized by proof of concept or prototyping.

Specification errors: These occur when people do not know how to adequately specify the requirements. They can be eliminated by proper review while writing the SRS.

Some of the attributes of SRS which can be identified and measured include:

a. Unambiguous
b. Correctness
c. Completeness
d. Verifiable
e. Understandable

The metrics and the attributes they measure are:

SRS Attribute   Formula   Purpose   Reference Value
Unambiguous   Q1=Nui/Nr   To obtain the percentage of requirements that have been uniformly understood by all the reviewers.   Close to zero: ambiguous requirements. Close to one: unambiguous requirements.
Correctness   Q2=Nc/Nr   To calculate the percentage of requirements that are valid.   0 = incorrect; 1 = correct
Completeness   Q3=Nu/(Ni*Ns)   To count the number of functions specified correctly.   Nearer to 1 indicates completeness.
Verifiable   Q4=Nr/Nr=xxxx   To measure the verifiability of the requirements.   0 = very poor; 1 = very good
Understandable   Q5=Nur/Nr   To count the number of requirements understood by all the users.   0 = not understood; 1 = understood clearly

We can add more attributes to the requirements metrics to be collected and analyzed, including traceability.

The interpretations of the terms used are:

• Nui - number of requirements for which all reviewers presented identical interpretations.
• Nr - total number of requirements.
• Ni - the stimulus input of the function.
• Ns - the state input of the function.
• Nc - number of correct requirements.
• Nur - number of requirements understood by all users.
• C(Ri) - the cost necessary to verify the presence of requirement Ri.
• T(Ri) - the time necessary to verify the presence of requirement Ri.
• Size - the number of pages.
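The review-based ratios Q1, Q2 and Q5 follow directly from these counts. The numbers in the example below are invented review results for a hypothetical specification:

```python
def srs_quality(nui, nc, nur, nr):
    """Q1, Q2 and Q5 from the SRS attribute table (each in [0, 1])."""
    return {
        "Q1_unambiguous": nui / nr,
        "Q2_correct": nc / nr,
        "Q5_understood": nur / nr,
    }

# Hypothetical review of 40 requirements: 32 interpreted identically
# by all reviewers, 36 judged valid, 30 understood by all users.
q = srs_quality(nui=32, nc=36, nur=30, nr=40)
```

For this invented review, Q1 = 0.8 (fairly unambiguous), Q2 = 0.9 and Q5 = 0.75, pointing at understandability as the weakest attribute.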

Design Metrics

The cost involved in the software development and maintenance necessitates the need to identify whether the design is good or bad. Also many organizations would like to predict whether the design is able to transform the requirements into a blue print for execution.

For a simple software design, the use of design metrics based on design structure should allow us to identify design weaknesses that could lead to problems in the implementation phase of a particular design. It should also help in predicting problems in the maintainability of the software finally developed on the basis of this design. Design structure metrics, however, do not take into account the size of the modules in a software design.

The information flow metric of Henry and Kafura tries to capture the structural complexity of the system and provide a specific quantitative basis for design decisions. However, this metric also does not help in accurately predicting the degree of maintainability and error-proneness of the software. Using the length of a component at the design stage is itself difficult, as the designer will not be able to identify it. Also, in this metric, if either of the factors, i.e. fan-in or fan-out, is zero, then the complexity itself is reduced to zero.

According to the information flow metric, the main factor determining structural complexity is the degree of connectedness of a module to its environment. It is difficult to make predictions about the software at the design stage using the level of information flow between components. Outlier analysis looks like a better way of predicting potential problems in the design. However, even here there might be modules that legitimately require an unusually high degree of interaction, and which are identified as outliers as a result.

Design of software is an iterative process in which a software engineer chooses one design over another. Metrics can thus be used as a basis for making comparisons between different design solutions and for identifying outlier modules, helping the software engineer come up with better and more efficient designs.

Productivity Metrics

Productivity is defined as output over input. The output is the value delivered, and the input is the resources spent to generate the output. Environmental factors also form part of the input; these include complexity, quality constraints, time, team distribution, interrupts, tools, design, etc. The output can be in terms of source code delivered, product delivered or changes applied. Typical productivity metrics include:

Size over Effort: This is the most popular metric because of its ease of calculation.

The size can be in terms of Lines of code, function points or the number of use cases delivered.

Process Metrics

The organization can set many goals, and metrics are associated with those goals accordingly. In process metrics, capability and performance are the two criteria that will be measured. We will discuss some common organizational goals in SDLC processes and the associated metrics.

GOAL: Improvement of the development process

Associated Metrics

1. Average elapsed time between defect identification and correction.
2. Number of person hours (effort) to complete each activity.
3. Elapsed time for each activity.
4. Number of defects detected in each activity.
5. Number of deviations from the defined software process.
6. Number of changes added to the requirements.

The data required for collection of the above is:

#   Metrics   Data Required
1   Average elapsed time between defect identification and correction.   Dates of defect identification and correction for each defect.
2   Number of person hours (effort) to complete each activity.   For each activity: actual number of person hours to complete.
3   Elapsed time for each activity.   For each activity: date activity started; date activity ended.
4   Number of defects detected in each activity.   Number of defects detected in each activity.
5   Number of deviations from the defined software processes.   Exception/deviation reports from the QA department.
6   Number of changes added to the requirements.   Number of requirements added/changed (CRs).

Performance Metrics

The characteristics of a software system that define its performance depend on the type of product and the environment. A web-based product catering to B2C requires robust performance to handle multiple queries at the same time, whereas a client-server product serving intranet applications requires different levels. Some of the performance criteria include:

• CPU utilization
• Memory utilization (peak time and average time)
• Mean time between failures
• Number of I/O transactions per unit of time (actual vs. required)
• Software product complexity
• SLOC produced by the team (total number of LOC produced)
• SLA adherence

In software engineering, performance is measured through performance testing, which is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort.

Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response times. It is critical to the performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true for functional testing, but even more so for performance testing, due to the end-to-end nature of its scope.

Purpose of Performance Metrics

• Demonstrate that the system meets performance criteria.
• Compare two systems to find which performs better.
• Measure what parts of the system or workload cause the system to perform badly.

In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in actual practice. The reason is that production workloads have a random nature, and while test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability, except in the simplest systems.

Loosely-coupled architectural implementations (e.g. SOA) have created additional complexities with performance testing. Enterprise services or assets that share a common infrastructure or platform require coordinated performance testing, with all consumers creating production-like transaction volumes and load on the shared infrastructures or platforms, to truly replicate production-like states. Due to the complexity and the financial and time requirements of this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as "noise") in their performance testing environments (PTE) to understand capacity and resource requirements and to verify/validate quality attributes.

Object-Oriented Metrics

Object-oriented analysis and design (OOAD) of software provides many benefits, such as reusability, decomposition of the problem into easily understood objects, and support for future modifications. But the OOAD software development life cycle is not easier than the typical procedural approach. It is therefore necessary to provide dependable guidelines that one may follow to help ensure good OO programming practices and write reliable code. Object-oriented programming metrics are one aspect to consider: a set of standards against which one can measure the effectiveness of object-oriented analysis techniques in the design of a system.

OO metrics can be applied to analyze source code as an indicator of quality attributes. The source code can be in any OO language.

Metrics for OO Software Development Environments

In his master’s thesis Morris [Morris 1989] made some important observations on OO code and proposed candidate metrics for productivity measurement:

Methods Per Class

Average number of methods per object class = total number of methods / total number of object classes

• A larger number of methods per object class complicates testing due to the increased object size and complexity.

• If the number of methods per object class gets too large, extensibility will suffer.

• A large number of methods per object class may be desirable because subclasses tend to inherit a larger number of methods from super classes and this increases code reuse.
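Morris's methods-per-class average can be approximated mechanically in Python by counting the callables defined on each class. The two small classes below are invented examples:

```python
import inspect

def methods_per_class(classes):
    """Average number of methods per class = total methods / total classes."""
    def own_methods(cls):
        # Count only functions defined directly on the class body.
        return [name for name, member in vars(cls).items()
                if inspect.isfunction(member)]
    total = sum(len(own_methods(c)) for c in classes)
    return total / len(classes)

# Two hypothetical classes for illustration.
class Account:
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
    def balance(self): ...

class AuditLog:
    def record(self, event): ...

avg = methods_per_class([Account, AuditLog])  # (3 + 1) / 2 = 2.0
```

In a real tool the same counting would be done by a parser over source files rather than live classes, but the ratio is the same.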

Inheritance Dependencies

Inheritance tree depth = max (inheritance tree path length)

• Inheritance tree depth is likely to be more favorable than breadth in terms of reusability via inheritance. Deeper inheritance trees would seem to promote greater method sharing than would broad trees.

• A deep inheritance tree may be more difficult to test than a broad one.

• Comprehensibility may be diminished with a large number of inheritance layers.

Degree of Coupling between Objects

Average number of 'uses' dependencies per object = total number of arcs / total number of objects, where the arcs are counted in an object 'uses' network.

A related measure is max (number of 'uses' arcs attached to any single object) in the network.

• A higher degree of coupling between objects complicates application maintenance because object interconnections and interactions are more complex.

• The higher the degree of uncoupling between objects, the more objects will be suitable for reuse within the same application and within other applications.

• Uncoupled objects should be easier to augment than those with a high degree of ‘uses’ dependencies, due to the lower degree of interaction.

• Testability is likely to degrade with a more highly coupled system of objects.

• Object interaction complexity associated with coupling can lead to increased error generation during development.

Degree of Cohesion of Objects

Degree of Cohesion of Objects = Total fan – in for All Objects/Total No. of Objects

• Low cohesion is likely to produce a higher degree of errors in the development process. Low cohesion adds complexity which can translate into a reduction in application reliability.

• Objects which are less dependent on other objects for data are likely to be more reusable.

Object Library Effectiveness

Average number of reuses per library object = total number of object reuses / total number of library objects

• If objects are actually being designed to be reusable beyond a single application, then the effects should appear in object library usage statistics.

Factoring Effectiveness

Factoring Effectiveness = number of unique methods / total number of methods

• Highly factored applications are more reliable for reasons similar to those which argue that such applications are more maintainable. The smaller the number of implementation locations for the average task, the less likely that errors were made during coding.

• The more highly factored an inheritance hierarchy is the greater degree to which method reuse occurs.

• The more highly factored an application is, the smaller the number of implementation locations for average method.

Degree of Reuse of Inheritance Methods

Percent of potential method uses actually reused (PP):

PP = (Total Number of Actual Method Uses / Total Number of Potential Method Uses) x 100

Percent of potential method uses overridden (PM):

PM = (Total Number of Methods Overridden / Total Number of Potential Method Uses) x 100

• Defining methods in such a way that they can be reused via inheritance does not guarantee that those methods are actually reused.

Average Method Complexity

Average method complexity =Sum of the cyclomatic complexity of all Methods / Total number of application methods

• More complex methods are likely to be more difficult to maintain.

• Greater method complexity is likely to adversely affect application reliability.

• Greater method complexity is likely to lead to a lower degree of overall application comprehensibility.

• More complex methods are likely to be more difficult to test.

Application granularity

Application granularity = total number of objects/ total function points

• One of the goals of object-oriented design is finer granularity. The purpose is to achieve a greater level of abstraction than is possible with data/procedure-oriented design.

• An application constructed with more finely granular objects (i.e. a lower number of functions per object) is likely to be more easily maintained because objects should be smaller and less complex.

• More finely granular objects should also be more reusable; each object's behavior should therefore be more easily understood and analyzed.

Chidamber and Kemerer's metrics suite for OO design is the most thoroughly investigated body of work in OO metrics research. They have defined six metrics for OO design.

Weighted Methods per Class (WMC)

It is defined as the sum of the complexities of all methods of a class.

• The number of methods and the complexity of methods involved is a predictor of how much time and effort is required to develop and maintain the class.

• The larger the number of methods in a class the greater the potential impact on children, since children will inherit all the methods defined in the class.

• Classes with large number of methods are likely to be more application specific, limiting the possibility of reuse.

Depth of Inheritance Tree (DIT)

It is defined as the maximum length from the node to the root of the tree.

• The deeper a class is in the hierarchy, the greater the number of methods it is likely to inherit, making it more complex to predict its behavior.

• Deeper trees constitute greater design complexity, since more methods and classes are involved.

• The deeper a particular class is in the hierarchy, the greater the potential reuse of inherited methods.

Number of Children (NOC)

It is defined as the number of immediate subclasses.

• The greater the number of children, the greater the reuse, since inheritance is a form of reuse.

• The greater the number of children, the greater the likelihood of improper abstraction of the parent class. If a class has a large number of children, it may be a case of misuse of sub classing.

• The number of children gives an idea of the potential influence a class has on the design. If a class has a large number of children, it may require more testing of the methods in that class.
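DIT and NOC can be read off a small class hierarchy using Python's introspection; the hierarchy below is an invented example, and DIT is counted here as the number of edges from a class up to the root of the user-defined hierarchy (single inheritance assumed):

```python
# Hypothetical hierarchy:  Shape <- Polygon <- Triangle,  Shape <- Circle
class Shape: ...
class Polygon(Shape): ...
class Triangle(Polygon): ...
class Circle(Shape): ...

def dit(cls, root):
    """Depth of Inheritance Tree: edges from cls up to root."""
    depth = 0
    while cls is not root:
        cls = cls.__bases__[0]  # single inheritance assumed
        depth += 1
    return depth

def noc(cls):
    """Number of Children: immediate subclasses only."""
    return len(cls.__subclasses__())

d = dit(Triangle, Shape)  # Triangle -> Polygon -> Shape: DIT = 2
n = noc(Shape)            # Polygon and Circle: NOC = 2
```

Note that NOC counts only immediate subclasses: Triangle does not add to Shape's NOC, only to Polygon's.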

Coupling between object classes (CBO)

It is defined as the count of the classes to which this class is coupled. Coupling is defined as: two classes are coupled when methods declared in one class use methods or instance variables of the other class. [Chidamber and Kemerer 1994]

• Excessive coupling between object classes is detrimental to modular design and prevents reuse. The more independent a class is, the easier it is to reuse it in another application.

• In order to improve modularity and promote encapsulation, inter- object class couples should be kept to a minimum. The larger the number of couples, the higher the sensitivity to changes in other parts of the design, and therefore maintenance is more difficult.

• A measure of coupling is useful to determine how complex the testing of various parts of a design is likely to be. The higher the inter-object class coupling, the more rigorous the testing needs to be.

Response for a Class (RFC)

It is defined as the number of methods in the set of all methods that can be invoked in response to a message sent to an object of the class.

• If a large number of methods can be invoked in response to a message, the testing and debugging of the class becomes more complicated, since it requires a greater level of understanding on the part of the tester.

• The larger the number of methods that can be invoked from a class, the greater the complexity of the class.

• A worst case value for possible responses will assist in appropriate allocation of testing time.

Lack of Cohesion in Methods (LCOM)

It is defined as the number of different methods within a class that reference a given instance variable.

• Cohesiveness of methods within a class is desirable, since it promotes encapsulation.

• Lack of cohesion implies classes should probably be split into two or more subclasses.

• Any measure of disparateness of methods helps identify flaws in the design of classes.

• Low cohesion increases complexity, thereby increasing the likelihood of errors during the development process.
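One common way to operationalize LCOM is Chidamber and Kemerer's pairwise formulation (slightly different wording than the definition above): count method pairs that share no instance variables, subtract the pairs that share at least one, and floor at zero. The method/attribute sets below are an invented example:

```python
from itertools import combinations

def lcom(method_attrs):
    """CK LCOM: (# method pairs sharing no instance variables)
    minus (# pairs sharing at least one), floored at zero."""
    disjoint = shared = 0
    for a, b in combinations(method_attrs.values(), 2):
        if set(a) & set(b):
            shared += 1
        else:
            disjoint += 1
    return max(disjoint - shared, 0)

# Hypothetical class: which instance variables each method references.
methods = {
    "deposit": {"balance"},
    "withdraw": {"balance"},
    "set_owner": {"owner"},
}
score = lcom(methods)  # 2 disjoint pairs - 1 sharing pair = 1
```

A score above zero hints that the class mixes responsibilities (here, account arithmetic and ownership), supporting the "split the class" guideline above.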


However, these metrics have certain limitations:

1. Measuring the complexity of a class is subject to bias.

2. They cannot give a good size and effort estimation of software.

3. These metrics seem only to bring the design phase into play, and do not provide adequate coverage in terms of planning.

MOOD (Metrics for Object Oriented Design)

The MOOD metrics set refers to basic structural mechanisms of the OO paradigm: encapsulation (MHF and AHF), inheritance (MIF and AIF), polymorphism (PF) and message passing (CF), all expressed as quotients. The set includes the following metrics:

Method Hiding Factor (MHF)

MHF is defined as the ratio of the sum of the invisibilities of all methods defined in all classes to the total number of methods defined in the system under consideration.

The invisibility of a method is the percentage of the total classes from which this method is not visible.
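A rough sketch of MHF in Python, treating the leading-underscore naming convention as "hidden". This is a deliberate simplification: Python has no true access modifiers, and MOOD defines invisibility as a percentage over classes, whereas here each method counts as either fully hidden or fully visible. The toy classes are invented:

```python
import inspect

def mhf(classes):
    """Simplified Method Hiding Factor: hidden methods / all methods.
    A method is treated as 'hidden' iff its name starts with '_'."""
    hidden = total = 0
    for cls in classes:
        for name, member in vars(cls).items():
            if inspect.isfunction(member):
                total += 1
                hidden += name.startswith("_")
    return hidden / total

# Hypothetical classes for illustration.
class Parser:
    def parse(self, text): ...
    def _tokenize(self, text): ...
    def _reduce(self, tokens): ...

class Printer:
    def render(self, tree): ...

ratio = mhf([Parser, Printer])  # 2 hidden of 4 methods -> 0.5
```

A higher MHF indicates more of the implementation is encapsulated behind a small public interface.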

Attribute Hiding Factor (AHF)

AHF is defined as the ratio of the sum of the invisibilities of all attributes defined in all classes to the total number of attributes defined in the system under consideration.

Method Inheritance Factor (MIF)

MIF is defined as the ratio of the sum of inherited methods in all classes of the system under consideration to the total number of available methods (locally defined plus inherited) for all classes.

Attribute Inheritance Factor (AIF)

AIF is defined as the ratio of the sum of inherited attributes in all classes of the system under consideration to the total number of available attributes (locally defined plus inherited) for all classes.

Polymorphism Factor (PF)

PF is defined as the ratio of the actual number of polymorphic situations for a class Ci to the maximum number of possible distinct polymorphic situations for that class.

Coupling Factor (CF)

CF is defined as the ratio of the actual number of couplings not imputable to inheritance to the maximum possible number of couplings in the system.
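CF can be sketched by counting "uses" links between classes, assuming inheritance links have already been excluded; with n classes there are n(n-1) possible directed couplings. The class names below are illustrative:

```python
# Sketch of the Coupling Factor. The 'uses' relation holds directed
# client->supplier pairs, with inheritance links assumed excluded.

def coupling_factor(classes, uses):
    """Actual non-inheritance couplings over the maximum possible,
    which for n classes is n * (n - 1) directed pairs."""
    n = len(classes)
    return len(uses) / (n * (n - 1))

classes = ["Order", "Invoice", "Logger"]
uses = {("Order", "Invoice"), ("Order", "Logger")}
print(coupling_factor(classes, uses))  # 2 / 6
```

A low CF is desirable: the sparser the coupling graph, the easier classes are to change in isolation.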

Use Case Oriented Metrics

Use Case Modeling is one of the ways through which we can capture the requirements from the user perspective and design and develop the software from a user paradigm. The metrics that are available include:

• The number of associations that a use case participates in
• The number of extension points for each use case
• The number of times the use case participates in the diagram
• The number of use cases which this use case includes
• The number of use cases which include this one
• The number of use cases which this use case extends
• The number of use cases which extend this one

Web Engineering

Web-based systems and applications (WebApps) deliver a complex array of content and functionality to a broad population of end-users. Web engineering (WebE) is the process used to create high-quality WebApps. It is not a perfect clone of software engineering, but it borrows many of software engineering's fundamental concepts and principles, emphasizing the same technical and management activities. There are subtle differences in the way these activities are conducted, but the overriding philosophy that dictates a disciplined approach to the development of a computer-based system is identical.

Web Metrics encompasses the following:

Web Usage and Patterns

• User-supplied data
• Transactions
• Usability
• Site performance
• Financial analysis (ROI)

There are metrics which provide information about web applications. Some of them are listed below.


Page Views

This tells you the number of views your website pages are getting; in particular, it allows you to see how your website fares over time. A view counts as a loading of a page. It is still considered a very important metric, but the increasing number of Flash/AJAX-built websites, and the increase in online video, mean fewer page views are counted even though the same amount of content is being looked at. Therefore, it's not as good an indicator of website popularity as it used to be.


Visits

A 'visit' is recorded when someone arrives at your website and starts looking at pages. A visit can consist of many pageviews, or just one. It is not as revealing as unique visitors or pageviews, since it sits somewhere between the two.

Unique Visitors

Unique visitors counts the number of distinct people visiting your website in a particular time period, usually one day. A unique visitor can make many visits, each containing many pageviews. This is still one of the best metrics to use for your website, as it tells you the number of different people visiting your website on a daily basis, which is a great indicator of site popularity (more advanced analysts use daily, weekly and monthly unique visitor metrics too).
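The relationship between pageviews, visits and unique visitors can be made concrete with a small sketch. The hit log below is invented, and real analytics tools infer sessions from cookies and timeouts, which is glossed over here:

```python
# Sketch of pageviews, visits and unique visitors from a toy hit log of
# (visitor_id, session_id) pairs. The data is illustrative.

def traffic_summary(hits):
    pageviews = len(hits)                        # every hit is one pageview
    visits = len({(v, s) for v, s in hits})      # distinct sessions
    unique_visitors = len({v for v, _ in hits})  # distinct people
    return pageviews, visits, unique_visitors

hits = [
    ("alice", 1), ("alice", 1), ("alice", 2),  # two visits, three views
    ("bob", 1),                                # one visit, one view
]
print(traffic_summary(hits))  # (4, 3, 2)
```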


Referrers

This is a great metric you should measure: it tells you all the places that people are finding your website and visiting from. If you don't know where people are coming from, then you don't know how your marketing efforts are doing, or where to spend additional money.

Top Search Engines

This metric is a more detailed version of 'referrers' and tells you which search engines people are visiting your website from. So if you see you are getting plenty of visits from Yahoo, but not many from Google, you should consider doing some search engine optimization (SEO) for Google, or some pay-per-click advertising (Google AdWords) instead to get some visits from it.

Top Keywords

As it sounds, this metric tells you the top keywords that people are typing into search engines before arriving at your website. It's basically an even more valuable, in-depth version of the 'top search engines' metric. Do some research on keywords related to your website, and see how your top keywords compare. If you aren't getting many visits from top keywords that related websites are getting, then it's time to spend more on SEO or pay-per-click to win these top keywords.

Average Time Spent

The Average Time Spent (ATS) metric indicates the amount of time a visitor spends on your website and its pages. It's usually a good indicator of the quality of your website (depending on the type of website). The longer the ATS, usually, the better. However, a long ATS can also indicate a bad website experience in which people can't find what they are looking for. It's best to combine it with the bounce rate and exit pages (see below) to get a more accurate picture of the quality of your website content. Also, the average time spent doesn't take into account the last page viewed (it has no way of knowing when the visitor closed their browser or walked away), so blog home pages suffer from this.

Exit pages

This metric indicates the number of 'exits' from pages on your website. Therefore, it reveals the pages on your website that drive people away. But remember, some exit pages are natural exit pages, like purchase confirmation or newsletter signup confirmation pages. Look for the most-exited pages that form an important part of your website's flow, like product pages or info pages, and improve these.

Entrance Pages

All too often people just analyze and improve the homepage, because they think that's where the majority of their traffic arrives. However, the reality is often that many people arrive deep inside your website through search engines. Looking at this metric reveals which of your pages are most often used as entrance pages. Look to improve these pages and make sure it's easy for visitors to navigate from them; otherwise these entrance pages will become exit pages.

Bounce Rate

This is one of the most under-used, but most revealing, metrics. To put it simply, it indicates the proportion of people who, upon arriving at your website, immediately leave. Therefore, it's a great indicator of the quality of your website. Bounce rate is the percentage of single-page visits out of all entrance visits for individual pages. In particular, it's very revealing to check the bounce rate for your paid search keywords: spend more on the keywords with low bounce rates, and cut out keywords with high bounce rates. A bounce rate below 40% for a page is considered good.
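Bounce rate per entrance page can be computed from visit paths. A minimal sketch, assuming each visit is recorded as an ordered list of pages (the data below is made up):

```python
# Sketch of bounce rate per entrance page: the share of visits entering
# on a page that viewed only that one page. Visits are ordered page lists.

def bounce_rates(visits):
    entered = {}
    bounced = {}
    for pages in visits:
        entry = pages[0]
        entered[entry] = entered.get(entry, 0) + 1
        if len(pages) == 1:                      # single-page visit: a bounce
            bounced[entry] = bounced.get(entry, 0) + 1
    return {p: bounced.get(p, 0) / n for p, n in entered.items()}

visits = [
    ["/home", "/products"],  # entered on /home, kept browsing
    ["/home"],               # entered on /home and left: a bounce
    ["/blog"],               # entered on /blog and left: a bounce
    ["/blog", "/home"],
]
print(bounce_rates(visits))  # {'/home': 0.5, '/blog': 0.5}
```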

Repeat Visits

This is another great metric to use, and a great indicator of the quality of your website. Simply put, the more your visitors return, the better your website is likely to be, so you should try to get your repeat visits as high as possible. The percentage of repeat visits versus first-time visits is another great indicator of site quality.

Feed Subscribers

This is a great metric, but it only applies to blogs; in fact it's usually only found in RSS feed tools. If you have a blog, it's essential you sign up to such a service so you can monitor the number of subscribers to your blog content. The more subscribers you have, the more popular your blog is likely to be.

Top Internal Search Keywords

Don't confuse this with search engine traffic metrics; this is for searches actually performed on your site (like the search box in the top right-hand corner of this website). This is one of the most revealing metrics you can use. Why? The keywords people use to search your website tell you exactly what they want or expect to see on it. So, if you have a website about guitars, but people are searching for keyboards, then you should check whether your website is confusing for visitors, or consider offering content about keyboards.

Conversion Rate

Last, but certainly not least, knowing the conversion rate of your website is one of the most powerful things to know and act on. And not just the conversion rate for the site as a whole: you should be looking at the conversion rate by page or set of pages. Ideally you should set up a funnel for each conversion so you know exactly where people are leaving before they convert; a prime candidate for analyzing conversion rate and funnels is the set of pages within a shopping cart. Also, looking at conversion rates by referrer gives a good indicator of the sources of traffic to your website.
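A funnel of the kind described can be summarized with a few lines of code. The step names and visitor counts below are invented for illustration:

```python
# Sketch of a shopping-cart conversion funnel: visitors counted at each
# step, with per-step and overall conversion rates. Data is illustrative.

def funnel_report(steps):
    """Per-step conversion relative to the previous step, plus overall."""
    report = []
    for (name, n), (_, prev) in zip(steps[1:], steps):
        report.append((name, n / prev))
    overall = steps[-1][1] / steps[0][1]
    return report, overall

steps = [("product page", 1000), ("add to cart", 200),
         ("checkout", 100), ("purchase", 80)]
per_step, overall = funnel_report(steps)
print(per_step)  # [('add to cart', 0.2), ('checkout', 0.5), ('purchase', 0.8)]
print(overall)   # 0.08, i.e. an 8% overall conversion rate
```

The step with the lowest ratio is where visitors leak out of the funnel and is the first page to improve.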
Copyright © 2014         Home | Contact | Projects | Jobs

Review Questions
  • 1. What is measurement?
  • 2. Explain in detail the Goal/Question/Metric approach.
  • 3. Explain the classification of metrics.
  • 4. What is the use of size-oriented metrics?
  • 5. What is the purpose of performance metrics?
  • 6. What are object-oriented metrics? What are the metrics for OO software development environments?
