Metrics Classification
The basic measures of an SDLC are size, effort, schedule, cost and quality. Metrics (derived measures) are generally classified into:
• Product metrics.
• Process metrics.
• Project metrics.
• Support services metrics.
• Customer satisfaction metrics.
Product Metrics
Product metrics measure the software as a product from inception to delivery. They deal with the product's characteristics, i.e. attributes such as the source code, and measure the requirements, the size of the program, the design, and so on. Defining product metrics in the early stages of development is of great help in controlling the development process. Product metrics include size metrics, complexity metrics, Halstead's product metrics and quality metrics.
Size Metrics
Size metrics measure the size of the product/software. Measuring function points and the Bang metric in the early stages of a project yields the greatest benefit.
Lines of code: There is no universally defined rule for counting LOC. The most common definition of LOC seems to be "count any line that is not a blank or a comment line", regardless of the number of blank or comment lines. In other words, LOC includes executable code and data definitions but excludes comments. LOC counts are obtained for the total product and for the new and changed code of a new release.
Size Oriented Metrics
Size oriented metrics are a direct measure of software and the process by which it was developed. These metrics can include effort (time), money spent, KLOC (thousands of lines of code), pages of documentation created, errors, and the number of people on the project.
The output of the work performed is measured in terms of LOC, function points, etc., and changes to the size of the software are measured as the project progresses through its life cycle. For maintenance projects, the following are measured:
• Complexity of bugs/tickets.
• Number of bugs/tickets.
In size metrics, the input required to achieve the desired output is measured, and the following details are captured:
• Effort spent to perform the various stages of the project.
• Management effort of the project.
• Deviation from the projected effort.
• Maintenance effort.
• Average turnaround time (TAT).
From this data some simple size oriented metrics can be generated.
Productivity = KLOC / person-month
Quality = defects / KLOC
Cost = $ / KLOC
Documentation = pages of documentation / KLOC
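As a minimal sketch, the following shows how these ratios fall out of the raw measures; all project figures below are invented for illustration.

```python
def size_oriented_metrics(kloc, person_months, defects, cost, doc_pages):
    """Derive the four size-oriented ratios listed above."""
    return {
        "productivity (KLOC/person-month)": kloc / person_months,
        "quality (defects/KLOC)": defects / kloc,
        "cost ($/KLOC)": cost / kloc,
        "documentation (pages/KLOC)": doc_pages / kloc,
    }

# Hypothetical project: 12.1 KLOC built in 24 person-months.
print(size_oriented_metrics(kloc=12.1, person_months=24,
                            defects=134, cost=168_000, doc_pages=365))
```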
Size-oriented metrics are not universally accepted. The use of LOC as a key measure is at the center of the controversy. Proponents of the LOC measure claim:
• It is an artifact of all software engineering processes and can easily be counted.
• Many existing metrics use LOC as an input.
• A large body of literature and data predicated on LOC already exists.
Opponents of the LOC measure claim:
• It is language dependent.
• Well-designed short programs are penalized.
• LOC does not work well with non-procedural languages.
• Its use in planning is difficult because the planner must estimate LOC before the design is completed.
Function Oriented Metrics
Function oriented metrics are indirect measures of software which focus on functionality and utility. The first function-oriented metric was proposed by Albrecht, who suggested a productivity measurement approach called the function point method. Function points (FPs) are derived from countable measures and assessments of software complexity.
Five characteristics are used to calculate function points: the number of user inputs, the number of user outputs, the number of user inquiries (on-line inputs), the number of files, and the number of external interfaces (machine-readable interfaces such as tape or disk).
The weighted counts are summed to give a count-total, and function points are calculated using:
FP = count-total × (0.65 + 0.01 × Σ Fi)
where the Fi are the complexity adjustment values. Once calculated, FPs may be used in place of LOC as a measure of productivity, quality, cost, documentation, and other attributes.
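The sketch below applies this formula using the commonly cited "average" complexity weights (inputs 4, outputs 5, inquiries 4, files 10, external interfaces 7) and fourteen Fi values each rated 0 to 5; the counts themselves are invented.

```python
# Sketch only: real FP counting rates each item simple/average/complex;
# here every item is assumed to be of average complexity.
AVERAGE_WEIGHTS = {
    "user_inputs": 4,
    "user_outputs": 5,
    "user_inquiries": 4,
    "files": 10,
    "external_interfaces": 7,
}

def function_points(counts, fi):
    """counts: raw counts per characteristic.
    fi: the 14 complexity adjustment values, each rated 0-5."""
    count_total = sum(AVERAGE_WEIGHTS[k] * v for k, v in counts.items())
    return count_total * (0.65 + 0.01 * sum(fi))

counts = {"user_inputs": 32, "user_outputs": 60, "user_inquiries": 24,
          "files": 8, "external_interfaces": 2}
fi = [3] * 14                     # assume "average" influence throughout
print(round(function_points(counts, fi), 2))   # 661.26
```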
Function points were originally designed to be applied to business information systems. Extensions called feature points have been suggested, which may enable this measure to be applied to other software engineering applications. Feature points accommodate applications in which the algorithmic complexity is high, such as real-time, process control, and embedded software applications.
Feature points are calculated in the same way as function points, with one additional software characteristic: algorithms. An algorithm is a bounded computational problem such as inverting a matrix, decoding a bit string, or handling an interrupt.
Complexity Metrics
Using complexity metrics, we can control the software development process and measure the complexity of the program.
Some of the types of complexity metrics are:
Cyclomatic Complexity: Given any computer program, we can draw its control flow graph G, where each node corresponds to a block of sequential code and each arc corresponds to a branch or decision point in the program. Using graph theory, we can find the complexity of the program as V(G) = E − N + 2, where E is the number of edges and N is the number of nodes in the graph. For a structured program, V(G) is simply the number of decision points in the program text plus one. McCabe's cyclomatic complexity metric is related to programming effort, debugging performance and maintenance effort. Myers and Stetter improved upon this measure by proposing upper and lower bounds, and by defining a function that can be computed as a measure of the flow complexity of the program.
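A minimal sketch of both ways of computing V(G), using a made-up control flow graph (a loop containing one if/else: 8 edges, 7 nodes, 2 decision points):

```python
def cyclomatic_from_graph(edges, nodes, components=1):
    """V(G) = E - N + 2P for a flow graph with P connected components."""
    return edges - nodes + 2 * components

def cyclomatic_from_decisions(decision_points):
    """For structured programs, V(G) = number of decision points + 1."""
    return decision_points + 1

print(cyclomatic_from_graph(edges=8, nodes=7))         # 3
print(cyclomatic_from_decisions(decision_points=2))    # 3
```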
Information Flow: The information flow of a program can also be used to measure its complexity. Kafura and Henry proposed the formula C = length × (fan-in × fan-out)², where fan-in is the number of local information flows entering a procedure and fan-out is the number of local information flows exiting it.
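A small sketch of the metric as usually stated in the literature (note the squared term); the procedure data is invented:

```python
def information_flow_complexity(length, fan_in, fan_out):
    """Henry-Kafura: C = length * (fan_in * fan_out) ** 2."""
    return length * (fan_in * fan_out) ** 2

# A 25-line procedure with fan-in 3 and fan-out 2 (hypothetical).
print(information_flow_complexity(length=25, fan_in=3, fan_out=2))  # 900
```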
Halstead’s Product Metrics
Halstead proposed a set of metrics applicable to many software products, covering the program vocabulary (n), the program length (N), the total effort E, and the development time T for the software product.
He defined the program vocabulary as n = n1 + n2, where n1 is the number of unique operators of the program and n2 is the number of unique operands; the program length as N = N1 + N2, where N1 and N2 are the total occurrences of operators and operands respectively; and the program volume as V = N × log2 n, which is a pure number.
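The sketch below computes these measures from operator/operand counts; the difficulty, effort and time formulas used are the standard Halstead definitions (D = (n1/2)(N2/n2), E = D × V, T = E/18 seconds), and the counts are invented.

```python
import math

def halstead(n1, n2, N1, N2):
    """n1, n2: unique operators/operands; N1, N2: total occurrences."""
    n = n1 + n2                  # vocabulary
    N = N1 + N2                  # length
    V = N * math.log2(n)         # volume
    D = (n1 / 2) * (N2 / n2)     # difficulty
    E = D * V                    # effort
    T = E / 18                   # estimated time, in seconds
    return {"vocabulary": n, "length": N, "volume": round(V, 1),
            "effort": round(E, 1), "time_s": round(T, 1)}

print(halstead(n1=12, n2=7, N1=27, N2=15))
```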
Requirement and Analysis Metrics
For all projects, metrics are collected depending on the type of SDLC model chosen. For any project to be successful, the most critical phase is requirement definition and analysis; most projects fail because of improper handling of this phase. It is therefore all the more important that proper control measures are in place to handle this phase.
Requirements are the initial stage of any project, and it is important to collect good requirements as early as possible in the project life cycle. Using requirements engineering, we can decide whether the requirements are good or bad. The following definition describes what a good requirement should be:
“A statement of system functionality that can be validated, that must be met or possessed by a system to solve a customer problem or to achieve a customer objective and that is qualified by measurable conditions and bounded by constraints” {AMOO}.
SRS (Software Requirements Specification) errors normally fall into two types:
Knowledge errors: These occur when people do not know what the requirements are. They can be minimized by proof of concept or prototyping.
Specification errors: These occur when people do not know how to adequately specify the requirements. They can be eliminated by proper reviews while writing the SRS.
Some of the attributes of SRS which can be identified and measured include:
a. Unambiguous
b. Correctness
c. Completeness
d. Verifiable
e. Understandable
The metrics for these attributes are:

| SRS Attribute | Formula | Purpose | Reference Value |
|---|---|---|---|
| Unambiguous | Q1 = Nui / Nr | To obtain the percentage of requirements that have been uniformly understood by all the reviewers. | Close to 0: ambiguous requirements; close to 1: unambiguous requirements. |
| Correctness | Q2 = Nc / Nr | To calculate the percentage of requirements that are valid. | Close to 0: incorrect; close to 1: correct. |
| Completeness | Q3 = Nu / (Ni × Ns) | To count the number of functions specified correctly. | Nearer to 1 indicates completeness. |
| Verifiable | Q4 | To measure the verifiability of the requirements. | 0: very poor; 1: very good. |
| Understandable | Q5 = Nur / Nr | To count the number of requirements understood by all the users. | 0: not understood; 1: understood clearly. |
More attributes, such as traceability, can be added to the requirements metrics to be collected and analyzed.
The terms used above are:
• Nui: number of requirements for which all reviewers presented identical interpretations.
• Nr: total number of requirements.
• Ni: the stimulus input of the function.
• Ns: the state input of the function.
• Nc: the number of correct requirements.
• Nu: the number of functions specified correctly.
• Nur: the number of requirements understood by all users.
• C(Ri): the cost necessary to verify the presence of requirement Ri.
• T(Ri): the time necessary to verify the presence of requirement Ri.
• Size: the number of pages.
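As a small illustration, the ratios Q1, Q2 and Q5 can be computed directly from review data; the counts below are hypothetical.

```python
def srs_quality(n_r, n_ui, n_c, n_ur):
    """Requirement quality ratios from review counts (see table above)."""
    return {
        "Q1_unambiguous": n_ui / n_r,
        "Q2_correct": n_c / n_r,
        "Q5_understood": n_ur / n_r,
    }

# 120 requirements: 96 interpreted identically by all reviewers,
# 108 judged valid, 90 understood by all users.
print(srs_quality(n_r=120, n_ui=96, n_c=108, n_ur=90))
```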
Design Metrics
The cost involved in software development and maintenance makes it necessary to identify whether a design is good or bad. Many organizations would also like to predict whether the design is able to transform the requirements into a blueprint for execution.
For a software design to be simple, the use of design metrics based on design structure should allow us to identify design weaknesses that could lead to problems in the implementation phase of a particular design. It should also help in predicting problems in the maintainability of the software that is finally developed on the basis of this design. However, design structure metrics do not take into account the size of the modules in a software design.
The information flow metric of Henry and Kafura tries to capture the structural complexity of the system and provide a specific quantitative basis for design decisions. However, this metric also does not help in accurately predicting the degree of maintainability and error-proneness of the software. Using the length of a component at the design stage itself is difficult, as the designer will not be able to identify it. Also, in this metric, if either of the factors, i.e. fan-in or fan-out, is zero, the complexity itself collapses to zero.
According to the information flow metric, the main factor determining structural complexity is the degree of connectedness of a module to its environment. It is difficult to make predictions about the software at the design stage using the level of information flow between components. Outlier analysis looks like a better way of predicting potential problems in the design. However, even here there might be modules that legitimately require an unusually high degree of interaction and are identified as outliers as a result.
Software design is an iterative process in which a software engineer must choose one design over another. Metrics can thus be used as a basis for comparing different design solutions and identifying outlier modules, helping the software engineer come up with better and more efficient designs.
Productivity Metrics
Productivity is defined as output over input. The output is the value delivered, and the input is the resources spent to generate the output. Environmental factors also form part of the input; these include complexity, quality constraints, time, team distribution, interrupts, tools, design, etc. The output can be in terms of source code delivered, product delivered or changes applied. Typical productivity metrics include:
Size over Effort: This is the most popular metric because of its ease of calculation. The size can be in terms of lines of code, function points or the number of use cases delivered.
Process Metrics
The organization can set many goals, and metrics are associated with those goals accordingly. In process metrics, capability and performance are the two criteria that are measured. We will discuss some common organizational goals in SDLC processes and the associated metrics.
GOAL: Improvement of Development process:
Associated Metrics
1. Average elapsed time between defect identification and correction.
2. Number of person hours (effort) to complete each activity.
3. Elapsed time for each activity.
4. Number of defects detected in each activity.
5. Number of deviations from the defined software process.
6. Number of changes added to the requirements.
The data required for collection of the above is:
| # | Metrics | Data Required |
|---|---|---|
| 1 | Average elapsed time between defect identification and correction. | Date each defect was identified and date it was corrected. |
| 2 | Number of person hours (effort) to complete each activity. | For each activity: actual number of person hours to complete. |
| 3 | Elapsed time for each activity. | For each activity: date the activity started; date the activity ended. |
| 4 | Number of defects detected in each activity. | Defect counts recorded per activity. |
| 5 | Number of deviations from the defined software processes. | Exception/deviation reports from the QA department. |
| 6 | Number of changes added to the requirements. | Number of requirements added/changed (CRs). |
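For instance, metric 1 can be computed from a defect log of identification and correction dates; a minimal sketch with made-up data:

```python
from datetime import date

# Each entry: (date defect identified, date defect corrected).
defects = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 9)),
    (date(2024, 3, 5), date(2024, 3, 6)),
]

avg_days = sum((fixed - found).days for found, fixed in defects) / len(defects)
print(f"average defect turnaround: {avg_days:.1f} days")  # 3.7 days
```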
Performance Metrics
The performance characteristics of a software system depend on the type of product and its environment. A web-based product catering to B2C requires robust performance to handle many queries at the same time, whereas a client-server product serving intranet applications requires different levels. Some of the performance criteria include:
• CPU utilization
• Memory utilization (peak time and average time)
• Mean time between failures
• Number of I/O transactions per unit of time (actual vs. required)
• Software product complexity
• SLOC produced by the team (Total no of LOC produced)
• SLA Adherence
In software engineering, performance testing is performed to determine, from one perspective, how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort.
Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure which parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response times. It is critical to the performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true of functional testing as well, but even more so of performance testing, due to the end-to-end nature of its scope.
Purpose of Performance Metrics
• Demonstrate that the system meets performance criteria.
• Compare two systems to find which performs better.
• Measure which parts of the system or workload cause the system to perform badly.
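As a minimal illustration of the diagnostic use, the sketch below times a stand-in request handler under a concurrent workload to obtain throughput and a 95th-percentile response time; all names and figures are assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    time.sleep(0.01)              # stand-in for real work, e.g. a query

def timed_call(_):
    t0 = time.perf_counter()
    handle_request()
    return time.perf_counter() - t0

def run_load(n_requests=200, workers=20):
    """Drive n_requests through a thread pool and summarize the results."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_call, range(n_requests)))
    elapsed = time.perf_counter() - start
    return {
        "throughput_rps": n_requests / elapsed,
        "p95_latency_s": latencies[int(0.95 * n_requests) - 1],
    }

print(run_load())
```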
In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in practice: production systems have a random workload, and while test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability, except in the simplest systems.
Loosely-coupled architectural implementations (e.g. SOA) have created additional complexities for performance testing. Enterprise services or assets that share a common infrastructure or platform require coordinated performance testing, with all consumers creating production-like transaction volumes and load on the shared infrastructure or platform, to truly replicate production-like states. Due to the complexity and the financial and time requirements of this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as "noise") in their performance test environments (PTE) to understand capacity and resource requirements and verify/validate quality attributes.
Object-Oriented Metrics
Object-oriented analysis and design of software provides many benefits, such as reusability, decomposition of the problem into easily understood objects, and support for future modifications. But the OOAD software development life cycle is not easier than the typical procedural approach. It is therefore necessary to provide dependable guidelines that one may follow to help ensure good OO programming practices and write reliable code. Object-oriented programming metrics are one aspect to be considered. Metrics are meant to be a set of standards against which one can measure the effectiveness of object-oriented analysis techniques in the design of a system.
OO metrics can be applied to analyze source code as an indicator of quality attributes; the source code can be in any OO language.