MBA management

Web Engineering Practices - Design and Testing topics:


Web engineering applies “sound scientific, engineering and management principles and disciplined and systematic approaches to the successful development, deployment and maintenance of high-quality Web-based systems and applications”.

Refining the Framework

We have already noted that the WebE process model must be adaptable. That is, the definition of the engineering tasks required to refine each framework activity is left to the discretion of the Web engineering team. In some cases, a framework activity is conducted informally; in others, a series of distinct tasks will be defined and conducted by team members. In every case, the team has responsibility for producing a high-quality WebApp increment within the time period allocated.

It is important to emphasize that tasks associated with WebE framework activities may be modified, eliminated, or extended based on the characteristics of the problem, the product, the project, and the people on the Web engineering team.

Warren Keuffel: “There are those of us who believe that best practices for software development are practical and deserve implementation. And then there are those of us who believe that best practices are interesting in an academic sort of way, but are not for the real world, thank you very much.”

Web engineering best practices:

Will every WebApp developer use the WebE process framework and task set defined in this section? Probably not. Web engineering teams are sometimes under enormous time pressure and will try to take shortcuts (even if these are ill-advised and result in more development effort, not less). But a set of fundamental best practices, adapted from software engineering practice, should be applied if industry-quality WebApps are to be built.

1. Take the time to understand business needs and product objectives, even if the details of the WebApp are vague. Many WebApp developers erroneously believe that vague requirements (which are quite common) relieve them of the need to be sure that the system they are about to engineer has a legitimate business purpose. The end result is (too often) good technical work that produces the wrong system, built for the wrong reasons, for the wrong audience. If stakeholders struggle to identify a set of clear objectives for the product (WebApp), do not proceed until they can.

2. Describe how users will interact with the WebApp using a scenario-based approach. Stakeholders must be convinced to develop use-cases that reflect how various actors will interact with the WebApp. These scenarios can then be used (1) for project planning and tracking, (2) to guide analysis and design modeling, and (3) as important input for the design of tests.
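A scenario captured as structured data can serve all three uses above. The sketch below is illustrative only (the `Scenario` record and the checkout example are invented for this note, not taken from the text):

```python
# A minimal sketch: each use-case scenario is captured as data, so the same
# record can feed planning (step counts), modeling, and test design.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    actor: str                  # who interacts with the WebApp
    goal: str                   # what the actor is trying to accomplish
    steps: list = field(default_factory=list)   # ordered interactions
    expected: str = ""          # observable outcome, used when designing tests

checkout = Scenario(
    actor="registered customer",
    goal="purchase items in the cart",
    steps=["log in", "review cart", "enter payment", "confirm order"],
    expected="order confirmation page with an order number",
)

# Planning use: four tracked interactions; test-design use: one expected outcome.
step_count = len(checkout.steps)
```

Because the scenario is plain data, it can be reviewed by nontechnical stakeholders and mechanically turned into test checklists later.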

3. Develop a project plan, even if it is very brief. Base the plan on a predefined process framework that is acceptable to all stakeholders. Because project timelines are very short, schedule granularity should be fine; i.e., in many instances, the project should be scheduled and tracked on a daily basis.

4. Spend some time modeling what it is that you're going to build. Generally, comprehensive analysis and design models are not developed during Web engineering. However, UML class and sequence diagrams, along with other selected UML notation (e.g., state diagrams), may provide invaluable insight.

5. Review the models for consistency and quality. Formal technical reviews should be conducted throughout a Web project. The time spent on reviews pays important dividends because it often eliminates rework and results in a WebApp that exhibits high quality, thereby increasing customer satisfaction.

6. Use tools and technology that enable you to construct the system with as many reusable components as possible. A wide array of WebApp tools is available for virtually every aspect of WebApp construction. Many of these tools enable a Web engineer to build significant portions of the application using reusable components.

7. Don't rely on early users to debug the WebApp - design comprehensive tests and execute them before releasing the system. Users of a WebApp will often give it one chance. If it fails to perform, they move elsewhere, never to return. It is for this reason that "test first, then deploy" should be an overriding philosophy, even if deadlines must be stretched.
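The "test first, then deploy" practice can be illustrated with a small sketch. `validate_signup` is a hypothetical stand-in for any WebApp input handler (it is not from the text); the point is that the checks are written and executed before release, not left to early users:

```python
# Hypothetical server-side validator for a signup form.
def validate_signup(fields):
    """Return a list of validation errors (an empty list means accepted)."""
    errors = []
    if "@" not in fields.get("email", ""):
        errors.append("invalid email")
    if len(fields.get("password", "")) < 8:
        errors.append("password too short")
    return errors

# Pre-release tests: these (and many more like them) run before deployment.
assert validate_signup({"email": "a@b.com", "password": "longenough"}) == []
assert validate_signup({"email": "nope", "password": "x"}) == [
    "invalid email", "password too short"]
```

In a real project these assertions would live in an automated test suite executed on every build, so a failing check blocks deployment rather than surprising a first-time user.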

The impact of Web-based systems and applications is arguably the single most significant event in the history of computing. As WebApps grow in importance, a disciplined WebE approach, adapted from software engineering principles, concepts, processes, and methods, has begun to evolve.

WebApps are different from other categories of computer software. They are network intensive, content driven, and continuously evolving. The immediacy that drives their development, the overriding need for security in their operation, and the demand for aesthetic as well as functional content delivery are additional differentiating factors. Like other types of software, WebApps can be assessed using a variety of quality criteria that include usability, functionality, reliability, efficiency, maintainability, security, availability, scalability, and time-to-market.

WebE can be described in three layers - process, methods, and tools/technology. The WebE process adopts the agile development philosophy that emphasizes a “lean” engineering approach leading to the incremental delivery of the system to be built. The generic process framework - communication, planning, modeling, construction, and deployment - is applicable to WebE. These framework activities are refined into a set of WebE tasks that are adapted to the needs of each project. A set of umbrella activities similar to those applied during software engineering work - SQA, SCM, project management - applies to all WebE projects.

The Web Engineering process

Because WebApps are often content driven with an emphasis on aesthetics, it is likely that parallel development activities will be scheduled within the WebE process and will involve a team of both technical and nontechnical people. The attributes of Web-based systems and applications have a profound influence on the WebE process that is chosen. If immediacy and continuous evolution are primary attributes of a WebApp, a Web engineering team might choose an agile process model that produces WebApp releases in rapid-fire sequence. On the other hand, if a WebApp is to be developed over a longer time period (e.g., a major e-commerce application), an incremental process model might be chosen.

Doug Wallace et al.: “Web development is an adolescent. Like most adolescents, it wants to be accepted as an adult as it tries to pull away from its parents. If it is going to reach its full potential, it must take a few lessons from the more seasoned world of software development.”

The network-intensive nature of applications in this domain suggests a population of users that is diverse (thereby making special demands on requirements elicitation and modeling) and an application architecture that can be highly specialized (thereby making special demands on design). Because WebApps are often content driven with an emphasis on aesthetics, it is likely that parallel development activities will be scheduled within the WebE process and involve a team of both technical and nontechnical people (e.g., copywriters, graphic designers).

Defining the framework

Any one of the agile process models (e.g., Extreme Programming, Adaptive Software Development, Scrum) presented here can be applied successfully as a WebE process.

To be effective, any engineering process must be adaptable. That is, the organization of the project team, the modes of communication among team members, the engineering activities and tasks to be performed, and the information that is collected and created must all be adapted to the people doing the work, the project timeline and constraints, and the problem to be solved. Before we define a process framework for WebE, we must recognize that:

1. Web Apps are often delivered incrementally. That is, framework activities will occur repeatedly as each increment is engineered and delivered.

2. Changes will occur frequently. These changes may occur as a result of the evaluation of a delivered increment or as a consequence of changing business conditions.

3. Timelines are short. This militates against the creation and review of voluminous engineering documentation, but it does not preclude the simple reality that critical analysis, design, and testing must be recorded in some manner.

In addition, the principles defined as part of the “Manifesto for Agile Software Development” should be applied. However, these principles are not the Ten Commandments; it is sometimes reasonable to adopt the spirit of the principles without necessarily abiding by the letter of the manifesto.

The WebE process, mapped onto the generic process framework, is as follows:

Customer communication. Within the WebE process, customer communication is characterized by two major tasks: business analysis and formulation. Business analysis defines the business/organizational context for the WebApp.

Formulation is a requirements-gathering activity involving all stakeholders. The intent is to describe the problem that the WebApp is to solve using the best information available.

Planning. The project plan for the WebApp increment is created. The plan consists of a task definition and a timeline schedule for the time period projected for the development of the WebApp increment.

Modeling. Conventional software engineering analysis and design tasks are adapted to WebApp development, merged, and then melded into the WebE modeling activity. The intent is to develop “rapid” analysis and design models that define requirements and at the same time represent a WebApp that will satisfy them.

Construction. WebE tools and technology are applied to construct the WebApp that has been modeled. Once the WebApp increment has been constructed, a series of rapid tests is conducted to ensure that errors in the design are uncovered.

Deployment. The WebApp is configured for its operational environment and delivered to end-users. Then the evaluation period commences. Evaluation feedback is presented to the WebE team, and the increment is modified as required.

Web Engineering - Basic Questions:

The engineering of any product involves subtleties that are not immediately obvious to those without substantial experience. The characteristics of WebApps force Web engineers to answer a variety of questions that should be addressed during early framework activities.

Strategic questions related to business needs and product objectives are addressed during formulation. Requirements questions related to features and functions must be considered during analysis modeling. Broad-based design questions related to WebApp architecture, interface characteristics, and navigational issues are considered as the design model evolves. Finally, a set of human issues, related to the manner in which a user actually interacts with the WebApp, is addressed on a continual basis.

Susan Weinshenk suggests a set of questions that must be considered as analysis and design progress. A small (adapted) subset is noted here:

• How important is a Web-site home page? Should it contain useful information or simply a listing of links that lead a user to more detail at lower levels?

• What is the most effective page layout (e.g., menu on top, on the right, or on the left), and does it vary depending upon the type of WebApp being developed?

• Which media options have the most impact? Are graphics more effective than text? Is video (or audio) an effective option? When should various media options be chosen?

• How much work can we expect a user to do when he or she is looking for information? How many clicks are people willing to make?

• How important are navigational aids when WebApps are complex?

• How complex can forms input be before it becomes irritating for the user? How can forms input be expedited?

• How important are search capabilities? What percentage of users browse and what percentage use specific searches? How important is it to structure each page in a manner that assumes a link from some outside source?

• Will the WebApp be designed in a manner that makes it accessible to those who have physical or other disabilities?

There are no absolute answers to questions such as these, and yet they must be addressed as WebE proceeds.


During the roaring 1990s, the Internet boom generated more hubris than any other event in the history of computing. WebApp developers at hundreds of young dot-com companies argued that a new paradigm for software development had arisen, that old rules no longer applied, and that time-to-market trumped all other concerns. They laughed at the notion that careful formulation and planning should occur before construction commenced. And who could argue? Money was everywhere; 24-year-olds became multimillionaires (on paper, at least) - maybe things really had changed. And then the bottom fell out.

It became painfully apparent as the twenty-first century began that a “build it and they will come” philosophy just doesn't work and that planning is worth the effort, even when development schedules are tight. Constantine and Lockwood note this situation when they write:

Despite breathless declarations that the Web represents a new paradigm defined by new rules, professional developers are realizing that lessons learned in the pre-Internet days of software development still apply. Web pages are user interfaces, HTML programming is programming, and browser-deployed applications are software systems that can benefit from basic software engineering principles.

Planning for Web Engineering Projects:

Given the immediacy of WebApps, it is reasonable to ask: Do we really need to spend time planning and managing a WebApp effort? Shouldn't we just let a WebApp evolve naturally, with little or no explicit management? More than a few Web developers would opt for little or no management, but that doesn't make them right.

Table 1 presents a comparison adapted from Kaulik and Samuelsen that indicates how “e-projects” (their term for WebApp projects) compare to traditional software projects. Referring to the table, traditional software projects and major e-projects have substantial similarities. Since project management is indicated for traditional projects, it would seem reasonable to argue that it is also indicated for major e-projects. Small e-projects do have special characteristics that make them different from traditional projects. However, even in the case of small e-projects, planning must occur, risks must be considered, a schedule must be established, and controls must be defined so that confusion, frustration, and failure are avoided.

Table 1. Difference between traditional projects and e-projects

    Traditional Projects   Small e-projects   Major e-projects
Requirements gathering   Rigorous   Limited   Rigorous
Technical Specifications   Robust: models, spec   Descriptive overview   Robust: UML Models, spec
Project duration   Measured in months or years   Measured in days, weeks, or months   Measured in months or years
Testing and QA   Focused on achieving quality targets   Focused on risk control   SQA as described
Risk management   Explicit   Inherent   Explicit
Shelf-life of deliverables   18 months or longer   3 to 6 months or shorter   6 to 12 months or shorter
Release process   Rigorous   Expedited   Rigorous
Post-release customer feedback   Requires proactive effort   Automatically obtained from user interaction   Obtained both automatically and via solicited feedback

The Web Engineering Team

A successful Web engineering team melds a wide variety of talents who must work together in a high-pressure project environment. Timelines are short, changes are relentless, and technology keeps shifting. Creating a team that jells is no simple matter.

Scott Tilley and Shihoug Huang: “In today’s net-centric and Web-enabled world, one now needs to know a lot about a lot.”

The Players:

The creation of a successful Web application demands a broad array of skills. Tilley and Huang address this issue when they state: “There are so many different aspects to [Web] application software that there is a (re)emergence of the renaissance person, one who is comfortable operating in several disciplines…”

While the authors are absolutely correct, “renaissance” people are in relatively short supply, and given the demands associated with major WebApp development projects, the diverse skill set required might be better distributed over a Web engineering team.

Web engineering teams can be organized in much the same way as traditional software teams. However, the players and their roles are often quite different. Among the many skills that must be distributed across WebE team members are component-based software engineering, networking, architectural and navigational design, Internet standards/ languages, human interface design, graphic design, content layout, and WebApp testing.

The following roles should be distributed among the members of the WebE team:

Content developers/providers: Because WebApps are inherently content driven, one role on the WebE team must focus on the generation and/or collection of content. Recalling that content spans a broad array of data objects, content developers/providers may come from diverse (non-software) backgrounds.

Web publisher: The diverse content generated by content developers/providers must be organized for inclusion within the WebApp, and someone must act as liaison between the technical staff who engineer the WebApp and the nontechnical content developers/providers. This role is filled by the Web publisher, who must understand both content and WebApp technology.

Web engineer: A Web engineer becomes involved in a wide range of activities during the development of a WebApp, including requirements elicitation; analysis modeling; architectural, navigational, and interface design; WebApp implementation; and testing. The Web engineer should also have a solid understanding of component technologies, client/server architectures, HTML/XML, and database technologies, and a working knowledge of multimedia concepts, hardware/software platforms, networking, security, and Web-site support issues.

Business domain experts: A business domain expert should be able to answer questions related to the business goals, objectives and requirements associated with the WebApp.

Support specialist: This role is assigned to the person (or people) who has responsibility for continuing WebApp support. Because WebApps continuously evolve, the support specialist is responsible for corrections, adaptations, and enhancements to the site, including updates to content, implementation of new procedures and forms, and changes to the navigation pattern.

Administrator: Often called the “Webmaster,” this person has responsibility for the day-to-day operation of the WebApp, including: development and implementation of policies for the operation of the WebApp, establishment of support and feedback procedures, implementation of security measures and access rights, and coordination with support specialists. The administrator may also be involved in the technical activities performed by Web engineers and support specialists.

Building the Team:

Guidelines for building successful software engineering teams are discussed in some detail. But do these guidelines apply in the pressure-packed world of WebApp projects? The answer is yes.

In his bestselling book on a computer industry long past, Tracy Kidder tells the story of a computer company’s heroic attempt to build a computer to meet the challenge of a new product built by a larger competitor. The story is a metaphor for teamwork, leadership, and the grinding stress that all technologists encounter when critical projects don’t go as smoothly as planned.

No summary of Kidder's book can do it justice, but these key points have particular relevance when an organization builds a Web engineering team:

A set of team guidelines should be established: these encompass what is expected of each person, how problems are to be dealt with, and what mechanisms exist for improving the effectiveness of the team as the project proceeds.

Strong leadership is a must: The team leader must lead by example and by contact. She must exhibit a level of enthusiasm that gets other members to “sign up” psychologically to the work that confronts them.

Respect for individual talents is critical: Not everyone is good at everything. The best teams make use of individual strengths. The best team leaders allow individuals the freedom to run with a good idea.

Every member of the team should commit: The main protagonist in Kidder's book calls this “signing up”.

It’s easy to get started, but it’s very hard to sustain momentum: the best teams never let an “insurmountable” problem stop them. Team members develop a “good enough” solution and proceed hoping that the momentum of forward progress may lead to an even better solution in the long term.

Project Management Issues for Web Engineering:

Once formulation has occurred and basic WebApp requirements have been identified, a business must choose from one of two Web engineering options: (1) the WebApp is outsourced - Web engineering is performed by a third-party vendor who has the expertise, talent, and resources that may be lacking within the business; or (2) the WebApp is developed in-house using Web engineers who are employed by the business. A third alternative, doing some Web engineering work in-house and outsourcing other work, is also an option.

Steve McConnell: “As Thomas Hobbes observed in the 17th century, life under mob rule is solitary, poor, nasty, brutish, and short. Life on a poorly run software project is solitary, poor, nasty, brutish, and hardly ever short enough.”

The work to be performed remains the same regardless of whether a WebApp is outsourced, developed in-house, or distributed between an outside vendor and in-house staff. But the communication requirements, the distribution of technical activities, the degree of interaction among stakeholders and developers, and a myriad of other critically important issues do change.

Among the most fundamental principles of software engineering is: understand the problem before you begin to solve it, and be sure that the solution you conceive is one that people really want. That is the basis of formulation, the first major activity in Web engineering. Another fundamental software engineering principle is: plan the work before you begin performing it. That is the philosophy that underlies project planning.

What is it? Getting started is always difficult. On one hand, there is a tendency to procrastinate, to wait until every t is crossed and every i is dotted before work begins. On the other hand, there is a desire to jump right in, to begin building even before you really know what needs to be done. Both approaches are inappropriate, and that's why the first two Web engineering framework activities emphasize formulation and planning. Formulation assesses the underlying need for the WebApp, the overall features and functions that users desire, and the scope of the development effort. Planning addresses the things that must be defined to establish a work flow and a schedule, and to track work as the project proceeds.

Who does it? Web engineers, their managers, and nontechnical stakeholders all participate in formulation and planning.

Why is it important? It’s hard to travel to a place you’ve never visited without directions or a map. You may arrive eventually (or you may not), but the journey is sure to be frustrating and unnecessarily long. Formulation and planning provide a map for a Web engineering team.

What is the work product? Because Web engineering work often adopts an agile philosophy, work products for formulation and planning are usually lean - but they do exist, and they should be recorded in written form. Information gathered during formulation is recorded in a written document that serves as the basis for planning and analysis modeling. The project plan lays out the project schedule and presents any other information that is necessary to communicate to members of the Web engineering team and outsiders.

How do I ensure that I’ve done it right? Develop enough detail to establish a solid roadmap, but not so much that you become bogged down. Formulation and planning information should be reviewed with stakeholders to ensure that inconsistencies and omissions are identified early.


(A) WebApp planning - outsourcing

A substantial percentage of WebApps are outsourced to vendors who (purportedly) specialize in the development of Web-based systems and applications. In such cases, customers ask for a fixed-price quote for WebApp development from two or more vendors, evaluate competing quotes, and then select a vendor to do the work.

i. Initiate the project. If outsourcing is the chosen strategy for WebApp development, an organization must perform a number of tasks before searching for an outsourcing vendor to do the work:

1. Many of the analysis tasks should be performed internally.

2. A rough design for the web app should be developed internally.

3. A rough project schedule, including not only final delivery dates, but also milestone dates should be developed.

4. A list of responsibilities for the internal organization and the outsourcing vendor is created.

5. The degree of oversight and interaction by the contracting organization with the vendor should be identified.

ii. Select candidate outsourcing vendors. In order to select candidate Web developers, the contractor must perform due diligence: (1) interview past clients to determine the Web vendor's professionalism, ability to meet schedule and cost commitments, and ability to communicate effectively; (2) determine the name of the vendor's chief Web engineer(s) for successful past projects (and, later, be certain that this person is contractually obligated to be involved in your project); and (3) carefully examine samples of the vendor's work that are similar in look and feel to the WebApp that is to be contracted.

iii. Assess the validity of price quotes and the reliability of estimates.

iv. Understand the degree of project management you can expect/perform.

v. Assess the development schedule.

vi. Manage scope. Because it is highly likely that scope will change as a WebApp project moves forward, the WebE process model should be adaptable and incremental. This allows the vendor's development team to “freeze” scope for one increment so that an operational WebApp release can be created. The next increment may address changes suggested by a review of the preceding increment, but once the second increment commences, scope is again “frozen” temporarily.
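The scope-freezing discipline described above can be sketched in a few lines of code. Everything here (the `Increment` class and the feature names) is illustrative only: change requests arriving after an increment starts are deferred to the next increment rather than absorbed mid-stream.

```python
# Sketch of per-increment scope freezing: once work on an increment begins,
# new requests are queued for the next increment instead of changing scope.
class Increment:
    def __init__(self, scope):
        self.scope = list(scope)
        self.frozen = False
        self.deferred = []          # candidate work for the next increment

    def start(self):
        self.frozen = True          # scope is now frozen for this release

    def request(self, item):
        if self.frozen:
            self.deferred.append(item)   # revisit when planning the next increment
        else:
            self.scope.append(item)

inc1 = Increment(["catalog browsing", "keyword search"])
inc1.start()
inc1.request("wish lists")          # mid-increment request, so it is deferred
# inc1.deferred now seeds the planning of increment 2
```

The payoff is predictability: the vendor's team always builds against a stable scope, and every deferred item still gets a documented path into a future release.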

(B) WebApp planning - in-house Web engineering

The steps that follow are recommended for small and moderately sized WebE projects:

i. Understand scope, the dimensions of change, and project constraints.

ii. Define an incremental project strategy. The team establishes a project strategy that defines WebApp increments (releases) that provide useful content and functionality for end-users. Engineering effort can then be more effectively focused.

iii. Perform risk analysis.

iv. Develop a quick estimate. The focus of estimation for most Web engineering projects is on macroscopic, rather than microscopic, issues. The WebE team assesses whether planned WebApp increments can be developed with available resources according to the defined schedule constraints.
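A macroscopic estimate of this kind amounts to a back-of-the-envelope capacity check. The sketch below is a minimal illustration; every number in it is invented:

```python
# Quick feasibility check: do the planned increments fit the team's capacity?
increments = {"release 1": 18, "release 2": 25}   # estimated person-days each
team_size = 3
working_days = 15                                 # days until the target date

capacity = team_size * working_days               # person-days available
demand = sum(increments.values())                 # person-days needed
feasible = demand <= capacity                     # plan fits (here: 43 <= 45)
```

If `feasible` comes out false, the team revisits the increment plan (defer content, add people, or move the date) before committing to a schedule, rather than discovering the shortfall mid-project.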

v. Select a task set (process description)

vi. Establish a schedule. A WebE project schedule has relatively fine granularity for tasks to be performed in the short term and much coarser granularity for later time periods.

vii. Define project tracking mechanisms.

viii. Establish a change management approach. Change management is facilitated by the incremental development strategy that has been recommended for WebApps. Because the development time for an increment is short, it is often possible to delay the introduction of a change until the next increment, thereby reducing the delaying effects associated with changes that must be implemented “on the fly”.


Web engineers develop complex systems, and like other technologists who perform this task, they should use metrics to improve the Web engineering process and product. We discussed the strategic and tactical uses for software metrics in a software engineering context. These uses also apply to Web engineering.

To summarize, software metrics provide a basis for improving the software process, increasing the accuracy of project estimates, enhancing project tracking, and improving software quality. Web engineering metrics could, if properly characterized, achieve all of these benefits and also improve usability, WebApp performance, and user satisfaction.

In the context of Web engineering, metrics have three primary goals: (1) to provide an indication of the quality of the WebApp from a technical point of view, (2) to provide a basis for effort estimation, and (3) to provide an indication of the success of the WebApp from a business point of view.

In this section, we summarize a set of common effort and complexity metrics for WebApps. These may be used to develop a historical database for effort estimation. In addition, complexity metrics may ultimately lead to an ability to quantitatively assess one or more of the technical attributes of WebApps.

Metrics for assessing business value:

It's interesting to note that business people have considerably outpaced Web engineers in developing, collecting, and using metrics for WebApps. By understanding the demographics of end-users and their usage patterns, a company or organization can develop immediate input for more meaningful WebApp content, more effective sales and marketing efforts, and better profitability for the business.

The mechanisms required to collect business value data are often implemented by the Web engineering team, but evaluation of the data and the actions that result are performed by other constituencies. For example, assume that the number of page views can be determined for each unique visitor. Based on metrics collected, visitors arriving from search engine X average nine page views while visitors from portal Y average only two page views. These averages can be used by the marketing department to allocate banner advertising budgets (advertising at search engine X provides greater exposure, based on the metrics collected, than advertising at portal Y).
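The page-view metric in this example is straightforward to compute from server logs. The sketch below is illustrative only; the log records are invented, chosen so the averages match the nine-versus-two figures above:

```python
# Average page views per visitor, grouped by referring source.
from collections import defaultdict

# (visitor_id, referrer, pages_viewed), e.g. parsed from server logs
visits = [
    ("v1", "search engine X", 10),
    ("v2", "search engine X", 8),
    ("v3", "portal Y", 2),
    ("v4", "portal Y", 2),
]

totals = defaultdict(lambda: [0, 0])   # referrer -> [total page views, visitors]
for _, referrer, pages in visits:
    totals[referrer][0] += pages
    totals[referrer][1] += 1

avg_views = {r: views / n for r, (views, n) in totals.items()}
# avg_views: {"search engine X": 9.0, "portal Y": 2.0}
```

The Web engineering team typically supplies only this collection and aggregation step; deciding what the nine-versus-two gap means for the advertising budget is the marketing department's job, as the text notes.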

A complete discussion of the collection and use of business value metrics (including the on-going debate about personal privacy) is beyond the scope of this book.

“Worst Practices” for Web App Projects:

Sometimes the best way to learn how to do something correctly is to examine how not to do it. Over the past decade, more than a few WebApps have failed because: (1) a disregard for project and change management principles (however informal) resulted in a Web engineering team that “bounced off the walls”; (2) an ad hoc approach to WebApp development failed to yield a workable system; (3) a cavalier approach to requirements gathering and analysis failed to yield a system that met user needs; (4) an incompetent approach to design failed to yield a WebApp that was usable, functional, extensible (maintainable), and testable; and (5) an unfocused approach to testing failed to yield a system that worked prior to its introduction.

With these realities in mind, it might be worthwhile to consider a set of Web engineering “Worst practices,” adapted from an article by Tom Bragg. If your e-project exhibits any of them, immediate remedial action is necessary.

Worst practice #1: We have a great idea, so let’s begin building the WebApp now. Don’t bother considering whether the WebApp is business-justified, whether users will really want to use it, or whether you understand the business requirements. Time is short; we have to start.

Reality: Take a few hours/days and make a business case for the WebApp. Be sure that the idea is endorsed by those who will fund it and those who will use it.

Worst practice #2: Stuff will change constantly, so there‘s no point in trying to understand WebApp requirements. Never write anything down (wastes time). Rely solely on word of mouth.

Reality: It is true that WebApp requirements evolve as Web engineering activities proceed. It‘s also fast and simple to convey information verbally. However, a cavalier approach to requirements gathering and analysis is a catalyst for even more (unnecessary) change.

Worst practice #3: Developers whose dominant experience has been traditional software development can develop WebApps immediately. No new training is required. After all, software is software, isn’t it?

Reality: WebApps are different. A broad array of methods, technologies, and tools must be expertly applied. Training and experience with them are essential.

Worst practice #4: Be bureaucratic. Insist on heavyweight process models, time sheets, lots of unnecessary “process” meetings, and project leaders who have never managed a WebApp project.

Reality: Encourage an agile process that emphasizes the competence and creativity of an experienced Web engineering team. Then get out of the way and let them do the work. If project-related data must be collected (for legal reasons or for the computation of metrics), data entry/collection should be as simple and unobtrusive as possible.

Worst practice #5: Testing? Why bother? We’ll give it to a few end-users and let them tell us what works and what doesn’t.

Reality: Over time, end-users do perform thorough “tests,” but they are so upset by unreliability and poor performance that they leave (never to return) long before problems are corrected.


What is it? Design for WebApps encompasses technical and non-technical activities. The look and feel of content is developed as part of graphic design; the aesthetic layout of the user interface is created as part of interface design; and the technical structure of the WebApp is modeled as part of architectural and navigational design. In every instance, a design model should be created before construction begins, but a good Web engineer recognizes that the design will evolve as more is learned about stakeholder requirements as the WebApp is built.

Who does it? Web engineers, graphic designers, content developers, and other stakeholders all participate in the creation of a design model for web engineering.

Why is it important? Design allows a Web engineer to create a model that can be assessed for quality and improved before content and code are generated, tests are conducted, and end-users become involved in large numbers. Design is the place where WebApp quality is established.

What are the steps? WebApp design encompasses six major steps that are driven by information obtained during analysis modeling. Content design uses information contained within the analysis model as a basis for establishing the design of content objects and their relationships. Aesthetic design (also called graphic design) establishes the look and feel that the end-user sees. Architectural design identifies the overall hypermedia structure of the WebApp. Interface design establishes the layout and interaction mechanisms that define the user interface. Navigation design defines how the end-user navigates through the hypermedia structure, and component design represents the detailed internal structure of the functional elements of the WebApp.

What is the work product? A design model that encompasses content, aesthetics, architecture, interface, navigation, and component-level design issues is the primary work product of Web engineering design.

How do I ensure that I’ve done it right? Each element of the design model is reviewed by the Web engineering team (and selected stakeholders) in an effort to uncover errors, inconsistencies, or omissions. In addition, alternative solutions are considered, and the degree to which the current design model will lead to an effective implementation is also assessed.

Design Issues for Web Engineering:

When design is applied within the context of Web engineering, both generic and specific issues must be considered. From a generic viewpoint, design results in a model that guides the construction of the WebApp. The design model, regardless of its form, should contain enough information to reflect how stakeholder requirements (defined in an analysis model) are to be translated into content and executable code. But design must also be specific: it must address key attributes of a WebApp in a manner that enables a Web engineer to build and test it effectively.

Design and WebApp Quality

Every person who has surfed the Web or used a corporate intranet has an opinion about what makes a “good” WebApp. Individual viewpoints vary widely. Some users enjoy flashy graphics; others want simple text. Some demand copious information; others desire an abbreviated presentation. Some like sophisticated analytical tools or database access; others like to keep it simple. In fact, the user’s perception of “goodness” (and the resultant acceptance or rejection of the WebApp as a consequence) might be more important than any technical discussion of WebApp quality.

But how is WebApp quality perceived? What attributes must be exhibited to achieve goodness in the eyes of end users and at the same time exhibit the technical characteristics of quality that will enable a Web engineer to correct, adapt, enhance, and support the application over the long term?

Olsina and his colleagues have prepared a “quality requirement tree” that identifies a set of technical attributes (usability, functionality, reliability, efficiency, and maintainability) that lead to high-quality WebApps.

Security: WebApps have become heavily integrated with critical corporate and government databases, and e-commerce applications extract and then store sensitive customer information. For these and many other reasons, WebApp security is paramount in many situations. The key measure of security is the ability of the WebApp and its server environment to rebuff unauthorized access and/or thwart an outright malevolent attack.

WebApp Interface Design

Every user interface, whether it is designed for a WebApp, a traditional software application, a consumer product, or an industrial device, should exhibit the following characteristics: easy to use, easy to learn, easy to navigate, intuitive, consistent, error-free, and functional.

a) Interface Design Principles and Guidelines

Design principles are:

i. Anticipation - A WebApp should be designed so that it anticipates the user’s next move.

ii. Communication - the interface should communicate the status of any activity initiated by the user.

iii. Consistency - the use of navigation controls, menus, icons, and aesthetics (e.g., color, shape, and layout) should be consistent throughout the WebApp.

iv. Controlled autonomy - the interface should facilitate user movement throughout the WebApp, but it should do so in a manner that enforces navigation conventions that have been established for the application.

v. Efficiency - The design of the WebApp and its interface should optimize the user’s work efficiency, not the efficiency of the Web engineer who designs and builds it or the client- server environment that executes it.

vi. Flexibility - The interface should be flexible enough to enable some users to accomplish tasks directly and others to explore the WebApp in a somewhat random fashion.

b) Interface Design Workflow

User interface design begins with the identification of user, task, and environmental requirements. Once user tasks have been identified, user scenarios (use-cases) are created and analyzed to define a set of interface objects and actions. In this context, a metaphor is a representation (drawn from the user’s real-world experience) that can be modeled within the context of the interface. A simple example might be a slider switch that is used to control the auditory volume of an .mpg file.

The following tasks represent the workflow for WebApp interface design:

1. Review information contained in the analysis model and refine as required.

2. Develop a rough sketch of the WebApp interface layout. An interface prototype (including the layout) may have been developed as part of the analysis modeling activity. If the layout already exists, it should be reviewed and refined as required. If the interface layout has not been developed, the Web engineering team should work with stakeholders to develop it at this time. A schematic first-cut layout sketch is shown in the figure.

3. Map user objectives into specific interface actions.

4. Define a set of user tasks that are associated with each action.

5. Storyboard screen images for each interface action. As each action is considered, a sequence of storyboard images (screen images) should be created to depict how the interface responds to user interaction.

6. Refine interface layout and storyboards using input from aesthetic design.

7. Identify user interface objects that are required to implement the interface.

8. Develop a procedural representation of the user’s interaction with the interface. This optional task uses UML sequence diagrams and/or activity diagrams to depict the flow of activities (and decisions) that occur as the user interacts with the WebApp.

9. Develop a behavioral representation of the interface. This optional task makes use of UML state diagrams to represent state transitions and the events that cause them.

10. Describe the interface layout for each state. Using design information developed in tasks 2 and 5, associate a specific layout or screen image with each WebApp state described in task 9.

11. Refine and review the interface design model. Review of the interface should focus on usability.

Content Design

Content design develops a design representation for content objects and represents the mechanisms required to instantiate their relationships to one another. This design activity is conducted by Web engineers.

a) Content Objects

The relationship between content objects defined as part of the WebApp analysis model and design objects representing content is analogous to the relationship between analysis classes and design components. In the context of Web engineering, a content object is more closely aligned with a data object for conventional software. A content object has attributes that include content-specific information and implementation-specific attributes that are specified as part of design.

Consider the analysis class named ProductComponent developed for the SafeHome e-commerce system. The description attribute noted there is represented here as a design class named CompDescription, composed of five content objects: MarketingDescription, Photograph, TechDescription, Schematic, and Video (shown as shaded objects in the figure). Information contained within each content object is noted as attributes. For example, Photograph (a .jpg image) has the attributes horizontal dimension, vertical dimension, and border style.
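The content objects above can be sketched as simple design classes. This is only an illustrative rendering of the structure the text describes, assuming Python dataclasses and hypothetical attribute types; the actual design notation would be a class diagram, not code.

```python
from dataclasses import dataclass

@dataclass
class ContentObject:
    """Base for content objects; carries content-specific information."""
    name: str

@dataclass
class Photograph(ContentObject):
    """The Photograph content object with the attributes named in the text."""
    source_file: str           # e.g. a .jpg image (hypothetical attribute)
    horizontal_dimension: int  # in pixels (assumed unit)
    vertical_dimension: int
    border_style: str

@dataclass
class CompDescription:
    """Design class composed of the five content objects named in the text."""
    marketing_description: ContentObject
    photograph: Photograph
    tech_description: ContentObject
    schematic: ContentObject
    video: ContentObject

photo = Photograph("photo", "sensor-x.jpg", 640, 480, "thin")
```

Implementation-specific attributes (file name, dimensions, border style) sit alongside the content itself, as the design discussion suggests.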

b) Content Design Issues

Once all content objects are modeled, the information that each object is to deliver must be authored and then formatted to best meet the customer’s needs. Content authoring is the job of specialists who design the content object by providing an outline of the information to be delivered and an indication of the types of generic content objects (e.g., descriptive text, graphic images, photographs) that will be used to deliver the information.

Aesthetic Design

Aesthetic design, also called graphic design, is an artistic endeavor that complements the technical aspects of Web engineering.

a) Layout Issues

A number of general layout guidelines are worth considering:

i. Don’t be afraid of white space. It is inadvisable to pack every square inch of a Web page with information. The resulting clutter makes it difficult for the user to identify needed information or features and creates visual chaos that is not pleasing to the eye.

ii. Organize layout elements from top-left to bottom-right.

iii. Group navigation, content, and function geographically within the page. Humans look for patterns in virtually all things. If there are no discernible patterns within a Web page, user frustration is likely to increase.

iv. Don’t extend your real estate with the scroll bar. Although scrolling is often necessary, most studies indicate that users would prefer not to scroll.

v. Consider resolution and browser window size when designing layout.

b) Graphic Design Issues

Graphic design begins with layout and then proceeds to a consideration of global color schemes; typefaces, sizes, and styles; the use of supplementary media (e.g., audio, video, animation); and all other aesthetic elements of an application.

Navigation design

Navigation design begins with a consideration of the user hierarchy and the use-cases developed for each category of user (actor). Each actor may use the WebApp somewhat differently and therefore may have different navigation requirements. In addition, the use-cases developed for each actor will define a set of classes encompassing one or more content objects or WebApp functions. As each user interacts with the WebApp, she encounters a series of navigation semantic units (NSUs): “sets of information and related navigation structures that collaborate in the fulfillment of a subset of related user requirements.”

To illustrate the development of an NSU, consider the use-case Select SafeHome Components.

Navigation Syntax

As design proceeds, the mechanics of navigation are defined. Among many possible options are:

• Individual navigation links - text-based links, icons, buttons, switches, and graphical metaphors.

• Horizontal navigation bar - list major content or functional categories in a bar containing appropriate links. In general, between four and seven categories are listed.

• Vertical navigation bar - lists major content or functional categories in a bar containing appropriate links. In general, between four and seven categories are listed.

• Vertical navigation column - (1) lists major content or functional categories, or (2) lists virtually all major content objects within the WebApp. If the second option is chosen, such navigation columns can “expand” to present content objects as part of a hierarchy.

• Tabs - a metaphor that is nothing more than a variation of the navigation bar or column, representing content or functional categories as tab sheets that are selected when a link is required.

• Site maps - provide an all-inclusive table of contents for navigation to all content objects and functionality contained within the WebApp.
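A site map of the kind described in the last bullet can be generated mechanically from the content hierarchy. The sketch below assumes a hypothetical nested-dict representation of page titles; a real WebApp would render the result as linked HTML rather than indented text.

```python
def build_site_map(node, title="Home", depth=0, lines=None):
    """Flatten a nested content hierarchy into an indented table of contents."""
    if lines is None:
        lines = []
    lines.append("  " * depth + title)       # indent reflects hierarchy depth
    for child_title, child_node in node.items():
        build_site_map(child_node, child_title, depth + 1, lines)
    return lines

# Hypothetical content hierarchy for illustration only.
site = {"Products": {"Sensors": {}, "Cameras": {}}, "Support": {}}
print("\n".join(build_site_map(site)))
```

Generating the site map from the same structure that drives navigation helps keep the two consistent as content objects are added.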


WebApp testing is a collection of related activities with a single goal: to uncover errors in WebApp content, function, usability, navigability, performance, capacity, and security. To accomplish this, a testing strategy that encompasses both reviews and executable testing is applied throughout the Web engineering process.

There is an urgency that always pervades the Web engineering process. As formulation, planning, analysis, design, and construction are conducted, stakeholders (concerned about competition from other WebApps, coerced by customer demands, and worried that they’ll miss a market window) press to get the WebApp on-line. As a consequence, technical activities that often occur late in the Web engineering process, such as WebApp testing, must occur earlier than expected. The Web engineering team must ensure that each WebE work product exhibits high quality. Wallace and his colleagues note this when they state:

Testing shouldn’t wait until the project is finished. Start testing before you write one line of code. Test constantly and effectively, and you will develop a much more durable web site.

Since analysis and design models cannot be tested in the classical sense, the Web engineering team should conduct formal technical reviews as well as executable tests. The intent is to uncover and correct errors before the WebApp is made available to its end-users.

What is it? WebApp testing is a collection of related activities with a single goal: to uncover errors in WebApp content, function, usability, navigability, performance, capacity, and security. To accomplish this, a testing strategy that encompasses both reviews and executable testing is applied throughout the Web engineering process.

Who does it? Web engineers and other project stakeholders (managers, customers, end-users) all participate in WebApp testing.

Why is it important? If end-users encounter errors that shake their faith in the WebApp, they will go elsewhere for the content and function they need, and the WebApp will fail. For this reason, Web engineers must work to eliminate as many errors as possible before the WebApp goes on-line.

What are the steps? The WebApp testing process begins by focusing on user-visible aspects of the WebApp and proceeds to tests that exercise technology and infrastructure. Seven testing steps are performed: content, interface, navigation, component, configuration, performance, and security testing.

What is the work product? In some instances a WebApp test plan is produced. In every instance, a suite of test cases is developed for every testing step and an archive of test results is maintained for future use.

How do I ensure that I’ve done it right? Although you can never be sure that you’ve performed every test that is needed, you can be certain that testing has uncovered errors (and that those errors have been corrected). In addition, if you’ve established a test plan, you can check to ensure that all planned tests have been conducted.


Testing is the process of exercising software with the intent of finding errors.

a) Dimensions of Quality

Reviews and testing examine one or more of the following quality dimensions:

• Content is evaluated at both syntactic and semantic levels.

• Function is tested to uncover errors that indicate lack of conformance to customer requirements.

• Structure is assessed to ensure that it properly delivers WebApp content and function, that it is extensible, and that it can be supported as new content or functionality is added.

• Usability is tested to ensure that each category of user is supported by the interface.

• Navigability is tested to ensure that all navigation syntax and semantics are exercised to uncover any navigation errors (e.g., dead links, improper links, and erroneous links).

• Interoperability is tested to ensure that the WebApp properly interfaces with other applications and/or databases.

• Security is tested by assessing potential vulnerabilities and attempting to exploit each.

b) Testing Strategy

The following steps summarize the approach:

1. The content model for the WebApp is reviewed to uncover errors.

2. The interface model is reviewed to ensure that all use-cases can be accommodated.

3. The design model for the WebApp is reviewed to uncover navigation errors.

4. The user interface is tested to uncover errors in presentation and/or navigation mechanics.

5. Selected functional components are unit tested.

6. Navigation throughout the architecture is tested.

7. The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration.

8. Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within its environment.

9. Performance tests are conducted.

10. The WebApp is tested by a controlled and monitored population of end-users; the results of their interaction with the system are evaluated for content and navigation errors, usability concerns, compatibility concerns, and WebApp reliability and performance.

c) Test Planning

A WebApp test plan identifies (1) a task set to be applied as testing commences, (2) the work products to be produced as each testing task is executed, and (3) the manner in which the results of testing are evaluated, recorded, and reused when regression testing is conducted. In some cases, the test plan is integrated with the project plan.

Errors within a WebApp Environment:

We have already noted that the primary intent of testing in any software context is to uncover errors (and correct them). Errors encountered as a consequence of successful WebApp testing have a number of unique characteristics.

1. Because many types of WebApp tests uncover problems that are first evidenced on the client side (i.e., via an interface implemented on a specific browser or a PDA or a mobile phone), the Web engineer sees a symptom of the error, not the error itself.

2. Because a WebApp is implemented in a number of different configurations and within different environments, it may be difficult or impossible to reproduce an error outside the environment in which the error was originally encountered.

3. Although some errors are the result of incorrect design or improper HTML (or other programming language) coding, many errors can be traced to the WebApp configuration.

4. Because WebApps reside within a client/server architecture, errors can be difficult to trace across three architectural layers: the client, the server, and the network itself.

5. Some errors are due to the static operating environment (i.e., the specific configuration in which testing is conducted), while others are attributable to the dynamic operating environment (i.e., instantaneous resource loading or time-related errors).

These five error attributes suggest that environment plays an important role in the diagnosis of all errors uncovered during the Web engineering process. In some situations (e.g., content testing), the site of the error is obvious, but in many other types of WebApp testing (e.g., navigation testing, performance testing, security testing) the underlying cause of the error may be considerably more difficult to determine.

The Testing Process—An Overview

The testing process for Web engineering begins with tests that exercise content and interface functionality that is immediately visible to end-users. As testing proceeds, aspects of the design architecture and navigation are exercised. The user may or may not be cognizant of these WebApp elements. Finally, the focus shifts to tests that exercise technological capabilities that are not always apparent to end-users: WebApp infrastructure and installation issues.

Component testing exercises content and functional units within the WebApp. As the WebApp architecture is constructed, navigation and component testing are used as integration tests. Configuration testing attempts to uncover errors that are specific to a particular client or server environment.

Content Testing

Errors in WebApp content can be as trivial as minor typographical errors or as significant as incorrect information, improper organization, or violation of intellectual property laws. Content testing attempts to uncover these and many other problems before the user encounters them.

a) Content Testing Objectives

Content testing has three important objectives: (1) to uncover syntactic errors in text-based documents, graphical representations, and other media; (2) to uncover semantic errors in any content object presented as navigation occurs; and (3) to find errors in the organization or structure of content that is presented to the end-user.

To accomplish the first objective, automated spelling and grammar checkers may be used. However, many syntactic errors evade detection by such tools and must be discovered by a human reviewer (tester). As we noted in the preceding section, copy editing is the single best approach for finding syntactic errors.
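The automated checking mentioned above can be as simple as flagging words that fall outside a known vocabulary. This is a deliberately minimal sketch: the tiny word list is hypothetical, and real checkers use full dictionaries plus grammar rules, which is exactly why a human reviewer is still needed.

```python
import re

# Hypothetical known-word list; a real tool would use a full dictionary.
KNOWN_WORDS = {"the", "sensor", "detects", "motion", "within", "range"}

def find_suspect_words(text):
    """Flag words not in the known-word list for human review."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w not in KNOWN_WORDS]

print(find_suspect_words("The sensor detcts motion"))  # ['detcts']
```

Note what this cannot catch: “the sensor detects emotion” passes the check, even though it is wrong, which is the kind of error only semantic review finds.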

Semantic testing focuses on the information presented within each content object. The reviewer (tester) must answer the following questions:

• Is the information factually accurate?
• Is the information concise and to the point?
• Is the layout of the content object easy for the user to understand?
• Can information embedded within a content object be found easily?
• Have proper references been provided for all information derived from other sources?
• Is the information presented consistent internally and consistent with information presented in other content objects?
• Is the content offensive or misleading, or does it open the door to litigation?
• Does the content infringe on existing copyrights or trademarks?
• Does the content contain internal links that supplement existing content? Are the links correct?
• Does the aesthetic style of the content conflict with the aesthetic style of the interface?

Obtaining answers to each of these questions for a large WebApp (containing hundreds of content objects) can be a daunting task. However, failure to uncover semantic errors will shake the user’s faith in the WebApp and can lead to failure of the Web-based application.

Content objects exist within an architecture that has a specific style. During content testing, the structure and organization of the content architecture are tested to ensure that required content is presented to the end-user with the proper order and relationships. For example, the SafeHome WebApp presents a variety of information about sensors that are used as part of security and surveillance products. Content objects provide descriptive information, technical specifications, a photographic representation, and related information. Tests of the SafeHome content architecture strive to uncover errors in the presentation of this information (e.g., a description of Sensor X presented with a photo of Sensor Y).

b) Database Testing

Database testing for WebApps is complicated by the following factors, each of which can introduce errors:

1. The original client-side request for information is rarely presented in a form (e.g., Structured Query Language, SQL) that can be input to a database management system (DBMS).

2. The database may be remote to the server that houses the WebApp.

3. Raw data acquired from the database must be transmitted to the WebApp server and properly formatted for subsequent transmittal to the client.

4. The dynamic content object(s) must be transmitted to the client in a form that can be displayed to the end-user.
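The four points above describe a chain: client request, translation to SQL, raw data retrieval, and formatting for display, and an error can enter at any link. The sketch below walks that chain with an in-memory SQLite database; the table name, schema, and HTML formatting are all assumptions made for illustration.

```python
import sqlite3

# Hypothetical product database standing in for the remote DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (name TEXT, range_m INTEGER)")
conn.execute("INSERT INTO sensors VALUES ('Sensor X', 15)")

def client_request_to_sql(product):
    # (1) Translate the client-side request into SQL the DBMS can accept.
    return ("SELECT name, range_m FROM sensors WHERE name = ?", (product,))

def fetch_and_format(product):
    sql, params = client_request_to_sql(product)
    row = conn.execute(sql, params).fetchone()  # (2)/(3) raw data from the DBMS
    if row is None:
        return "<p>Product not found</p>"
    # (4) Format dynamic content for display on the client.
    return f"<p>{row[0]} (range: {row[1]} m)</p>"

print(fetch_and_format("Sensor X"))  # <p>Sensor X (range: 15 m)</p>
```

Database tests would exercise each stage separately: malformed requests at (1), connectivity failures at (2), and formatting errors at (4).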

User Interface Testing

a) Interface Testing Strategy

The overall strategy for interface testing is to (1) uncover errors related to specific interface mechanisms and (2) uncover errors in the way the interface implements the semantics of navigation, WebApp functionality, or content display.

To accomplish this strategy, a number of objectives must be achieved:

• Interface features are tested to ensure that design rules, aesthetics, and related visual content are available for the user without error. Features include type fonts, the use of color, frames, images, borders, tables, and related elements that are generated as WebApp execution proceeds.

• Individual interface mechanisms are tested in a manner that is analogous to “unit testing.” For example, tests are designed to exercise all forms, client-side scripting, dynamic HTML, CGI scripts, streaming content, and application-specific interface mechanisms (e.g., a shopping cart for an e-commerce application). In many cases, testing can focus exclusively on one of these mechanisms (the “unit”) to the exclusion of other interface features and functions.

• Each interface mechanism is tested within the context of a use-case or NSU for a specific user category. This testing approach is analogous to integration testing in that tests are conducted as interface mechanisms are integrated to allow a use-case or NSU to be executed.

• The complete interface is tested against selected use cases and NSUs to uncover errors in the semantics of the interface. This testing approach is analogous to validation testing because the purpose is to demonstrate conformance to specific use-case or NSU semantics. It is at this stage that a series of usability tests are conducted.

• The interface is tested within a variety of environments (e.g., browsers) to ensure that it will be compatible. In actuality, this series of tests can also be considered to be part of configuration testing.

b) Testing Interface Mechanisms

We present a brief overview of testing considerations for each interface mechanism.

Links. Each navigation link is tested to ensure that the proper content object or function is reached.

Forms. At a macroscopic level, tests are performed to ensure that (1) labels correctly identify fields within the form and that mandatory fields are identified visually for the user; (2) the server receives all information contained within the form and that no data are lost in the transmission between client and server; (3) appropriate defaults are used when the user does not select from a pull-down menu or set of buttons; (4) browser functions (e.g., the “back” arrow) do not corrupt data entered in a form; and (5) scripts that perform error checking on entered data work properly and provide meaningful error messages.
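Two of the form checks above, mandatory fields (1) and defaults (3), lend themselves to simple black-box tests. The field names and form spec below are hypothetical; a real test would drive the actual form through a browser or HTTP client.

```python
# Hypothetical form specification: which fields are mandatory, which default.
FORM_SPEC = {
    "name": {"mandatory": True},
    "email": {"mandatory": True},
    "shipping": {"mandatory": False, "default": "standard"},
}

def validate_form(data):
    """Return error messages for missing mandatory fields; apply defaults."""
    errors = []
    for field, spec in FORM_SPEC.items():
        if not data.get(field):
            if spec["mandatory"]:
                errors.append(f"'{field}' is required")
            elif "default" in spec:
                data[field] = spec["default"]  # check (3): default applied
    return errors

form = {"name": "Ada", "email": ""}
print(validate_form(form))  # ["'email' is required"]
print(form["shipping"])     # standard
```

A test suite would submit forms with each mandatory field blank in turn and confirm that every omission produces a meaningful error message.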

Client-side scripting. Black- box tests are conducted to uncover any errors in processing as the scripts are executed.

Dynamic HTML. Each web page that contains dynamic HTML is executed to ensure that the dynamic display is correct.

Pop-up windows. A series of tests ensures that (1) the pop-up is properly sized and positioned; (2) the pop-up does not cover the original WebApp window; (3) the aesthetic design of the pop-up is consistent with the aesthetic design of the interface; and (4) scroll bars and other control mechanisms appended to the pop-up are properly located and function as required.

CGI scripts. Black-box tests are conducted with an emphasis on data integrity and script processing once validated data have been received.

Streaming content. Tests should demonstrate that streaming data are up-to-date, properly displayed, and can be suspended without error and restarted without difficulty.

Testing Interface Semantics:

Once each interface mechanism has been “unit” tested, the focus of interface testing changes to a consideration of interface semantics. Interface semantics testing “evaluates how well the design takes care of users, offers clear direction, delivers feedback, and maintains consistency of language and approach.”

A thorough review of the interface design model can provide a partial answer to the questions implied by the preceding paragraph. However, each use-case scenario (for each user category) should be tested once the WebApp has been implemented. In essence, a use-case becomes the input for the design of a testing sequence. The intent of the testing sequence is to uncover errors that will preclude a user from achieving the objective associated with the use-case.

As each use-case is tested, the Web engineering team maintains a checklist to ensure that every menu item has been exercised at least one time and that every embedded link within a content object has been used. In addition, the test sequence should include improper menu selections and link usage. The intent is to determine whether the WebApp provides effective error handling and recovery.
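The coverage checklist described above can be kept mechanically. This minimal sketch tracks which menu items or embedded links have been exercised during use-case testing; the item names are hypothetical.

```python
class NavigationChecklist:
    """Track which menu items / embedded links a test session has exercised."""

    def __init__(self, items):
        self.exercised = {item: False for item in items}

    def mark(self, item):
        """Record that a menu item or link was exercised during a test."""
        self.exercised[item] = True

    def unexercised(self):
        """Items still untested: any non-empty result means incomplete coverage."""
        return [i for i, done in self.exercised.items() if not done]

checklist = NavigationChecklist(["Home", "Products", "Support", "Contact"])
checklist.mark("Home")
checklist.mark("Products")
print(checklist.unexercised())  # ['Support', 'Contact']
```

At the end of a use-case test sequence, an empty `unexercised()` list confirms that every menu item and link was used at least once.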

c) Usability Tests

Usability testing is similar to interface semantics testing in the sense that it also evaluates the degree to which users can interact effectively with the WebApp and the degree to which the WebApp guides users’ actions, provides meaningful feedback, and enforces a consistent interaction approach.

The first step in usability testing is to identify a set of usability categories and establish testing objectives for each category. The following test categories and objectives (written in the form of a question) illustrate this approach.

Interactivity - Are navigation mechanisms, content, and functions placed in a manner that allows the user to find them quickly?

Readability - Is text well-written and understandable? Are graphic representations easy to understand?

Aesthetics - Do layout, color, typeface, and related characteristics lead to ease of use? Do users “feel comfortable” with the look and feel of the WebApp?

Display characteristics - Does the WebApp make optimal use of screen size and resolution?

Time sensitivity - Can important features, functions, and content be used or acquired in a timely manner?

Navigation Testing

The job of navigation testing is (1) to ensure that the mechanisms that allow the WebApp user to travel through the WebApp are all functional and (2) to validate that each navigation semantic unit (NSU) can be achieved by the appropriate user category.

a) Testing Navigation Syntax

Navigation mechanisms are tested to ensure that each performs its intended function:

• Navigation links — internal links within the WebApp, external links to other WebApps, and anchors within a specific Web page should be tested to ensure that proper content or functionality is reached when the link is chosen.

• Redirects — these links come into play when a user requests a nonexistent URL or selects a link whose destination has been removed or whose name has changed.

• Bookmarks — although bookmarks are a browser function, the WebApp should be tested to ensure that a meaningful page title can be extracted as the bookmark is created.

• Frames and framesets — each frame contains the content of a specific Web page; a frameset contains multiple frames and enables the display of multiple Web pages at the same time.

Because it is possible to nest frames and framesets within one another, these navigation and display mechanisms should be tested for correct content, proper layout and sizing, download performance, and browser compatibility.

• Site maps — entries should be tested to ensure that the link takes the user to the proper content or functionality.

• Internal search engines — complex WebApps often contain hundreds or even thousands of content objects. Internal search engine testing validates the accuracy and completeness of the search, the error-handling properties of the search engine, and advanced search features.
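A minimal navigation-syntax check of this kind can be automated. The sketch below, a hedged illustration rather than a complete tool, verifies that each link in a list returns a successful response; the URLs are placeholders, and a real test would crawl the WebApp’s own pages, anchors, and redirects.

```python
# Sketch: verify that each navigation link reaches a destination. The link
# list is a placeholder; status codes (e.g., 404) flag broken destinations.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def check_link(url, timeout=5):
    """Return (url, status): the HTTP status code, or an error description."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as e:
        return url, e.code            # e.g., 404 for a removed destination
    except URLError as e:
        return url, str(e.reason)     # DNS failure, refused connection, etc.

if __name__ == "__main__":
    # Placeholder link list; a real test would gather links from each page
    for link in ["https://www.example.com/", "https://www.example.com/missing"]:
        print(check_link(link))
```

Redirect behavior can be checked the same way: a request for a nonexistent URL should land on the WebApp’s redirect page rather than a bare server error.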

b) Testing Navigation Semantics

As each NSU is tested, the Web engineering team must answer the following questions:

• Is the NSU achieved in its entirety without error?

• Is every navigation node (defined for an NSU) reachable within the context of the navigation paths defined for the NSU?

• If the NSU can be achieved using more than one navigation path, has every relevant path been tested?

• If guidance is provided by the user interface to assist in navigation, are directions correct and understandable as navigation proceeds?

Configuration Testing

The job of configuration testing is to test a set of probable client-side and server-side configurations to ensure that the user experience will be the same on all of them and to isolate errors that may be specific to a particular configuration.

a) Server-side Issues

On the server side, configuration test cases are designed to verify that the projected server configuration can support the WebApp without error.

Among the questions that need to be asked and answered during server-side configuration testing are:

• Is the WebApp fully compatible with the server OS?

• Are system files, directories, and related system data created correctly when the WebApp is operational?

• Do system security measures (e.g., firewalls or encryption) allow the WebApp to execute and service users without interference or performance degradation?

• Has the WebApp been tested with the distributed server configuration (if one exists) that has been chosen?

• Is the WebApp properly integrated with database software? Is the WebApp sensitive to different versions of database software?

• Do server-side WebApp scripts execute properly?

• Have system administrator errors been examined for their effect on WebApp operations?

• If proxy servers are used, have differences in their configuration been addressed with on-site testing?

b) Client-side Issues

On the client side, configuration tests focus more heavily on WebApp compatibility with configurations that contain one or more permutations of the following components:

• Hardware — CPU, memory, storage, and printing devices.
• Operating systems — Linux, Macintosh OS, Microsoft Windows, a mobile-based OS.
• Browser software — Internet Explorer, Mozilla/Netscape, Opera, Safari, and others.
• User interface components — ActiveX, Java applets, and others.
• Plug-ins — QuickTime, RealPlayer, and many others.
• Connectivity — cable, DSL, regular modem, T1.
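Because the test space is the cross-product of these component lists, it grows multiplicatively. The sketch below enumerates it for a few of the categories above; the specific entries mirror the lists in the text but are illustrative, not exhaustive.

```python
# Sketch: enumerating client-side configuration permutations. The entries
# mirror the component categories above and are illustrative, not exhaustive.
from itertools import product

operating_systems = ["Linux", "Macintosh OS", "Microsoft Windows"]
browsers = ["Internet Explorer", "Mozilla/Netscape", "Opera", "Safari"]
connectivity = ["cable", "DSL", "regular modem", "T1"]

configs = list(product(operating_systems, browsers, connectivity))
print(len(configs))  # 3 x 4 x 4 = 48 candidate configurations
```

Because the full cross-product quickly becomes unmanageable once hardware and plug-ins are added, teams typically prune it to the configurations their user community is most likely to have, rather than testing every permutation.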

Security Testing

Security tests are designed to probe vulnerabilities of (1) the client-side environment, (2) the network communication that occurs as data are passed from client to server and back again, and (3) the server-side environment.

On the client side, vulnerabilities can often be traced to pre-existing bugs in browsers, e-mail programs, or communication software.

On the server side, vulnerabilities include denial-of-service attacks and malicious scripts that can be passed along to the client side or used to disable server operations.

To protect against these (and many other) vulnerabilities, one or more of the following security elements are implemented:

• Firewalls — filtering mechanism that is a combination of hardware and software which examines each incoming packet of information to ensure that it is coming from a legitimate source, blocking any data that are suspect.

• Authentication — a verification mechanism that validates the identity of all clients and servers, allowing communication to occur only when both sides are verified.

• Encryption — an encoding mechanism that protects sensitive data by modifying it in a way that makes it impossible to read by those with malicious intent. Encryption is strengthened by using digital certificates that allow the client to verify the destination to which the data are transmitted.

• Authorization — a filtering mechanism that allows access to the client environment only by those individuals with appropriate authorization codes (e.g., user ID and password).
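The authorization element can be sketched in a few lines. This is a hypothetical, minimal illustration of the user-ID/password mechanism described above, not a production scheme; the user name, password, and parameter choices are all invented for the example.

```python
# Hypothetical sketch of the authorization element: access is granted only
# to users presenting a valid ID/password pair. Stored passwords are salted
# and hashed; all names and values here are illustrative.
import hashlib
import hmac
import os

def hash_password(password, salt):
    """Derive a salted hash so plaintext passwords are never stored."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Server-side store: user ID -> (salt, password hash)
salt = os.urandom(16)
users = {"vinod": (salt, hash_password("s3cret", salt))}

def authorize(user_id, password):
    """Return True only for a known user ID with the matching password."""
    record = users.get(user_id)
    if record is None:
        return False
    stored_salt, stored_hash = record
    # Constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(stored_hash, hash_password(password, stored_salt))

print(authorize("vinod", "s3cret"))   # True
print(authorize("vinod", "wrong"))    # False
```

Security testing would then attempt to defeat such a filter, for example by probing unknown user IDs, malformed input, and repeated guesses.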

Performance Testing

Performance testing is used to uncover performance problems that can result from lack of server-side resources, inappropriate network bandwidth, inadequate database capabilities, faulty or weak operating system capabilities, poorly designed WebApp functionality, and hardware and software issues that can lead to degraded client-server performance. Testing is conducted (1) to understand how the system responds to loading (i.e., number of users, number of transactions, or overall data volume) and (2) to collect metrics that lead to design modifications to improve performance.

a) Performance Testing Objectives

Performance tests are designed to simulate real-world loading situations. Performance testing will help answer the following questions:

• Does the server response time degrade to a point where it is noticeable and unacceptable?

• At what point (in terms of users, transactions or data loading) does performance become unacceptable?

• What system components are responsible for performance degradation?

• What is the average response time for the user under a variety of loading conditions?

• Does performance degradation have an impact on system security?

To develop answers to these questions, two different performance tests are conducted:

• Load testing - real-world loading is tested at a variety of load levels and in a variety of combinations.

• Stress testing - loading is increased to the breaking point to determine how much capacity the WebApp environment can handle.

b) Load Testing

The intent of load testing is to determine how the WebApp and its server-side environment will respond to various loading conditions. As testing proceeds, permutations of the following variables define a set of test conditions.

N, the number of concurrent users.

T, the number of on-line transactions per user per unit time.

D, the data processed by the server per transaction.

In every case, these variables are defined within the normal operating bounds of the system. As each test condition is run, one or more of the following measures are collected: average user response time, average time to download a standardized unit of data, or average time to process a transaction. The Web engineering team examines these measures to determine whether a precipitous decrease in performance can be traced to a specific combination of N, T, and D.

Load testing can also be used to assess recommended connection speeds for users of the WebApp. Overall throughput, P, is computed in the following manner:

P = N x T x D


As an example, consider a popular sports news site. At a given moment, 20,000 concurrent users submit a request (a transaction, T) once every two minutes on average (T = 0.5 transactions per user per minute). Each transaction requires the WebApp to download a new article that averages 3 Kbytes in length. Therefore, throughput for the system can be calculated as:

P = [20,000 x 0.5 x 3 Kbytes] / 60 = 500 Kbytes/sec = 4 megabits per second

The network connection for the server would therefore have to support this data rate and should be tested to ensure that it does.
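The calculation above can be checked with a short script. The function simply evaluates P = N x T x D and converts the per-minute transaction rate to a per-second data rate; the 8/1000 factor converts Kbytes/sec to megabits/sec.

```python
def throughput_kbytes_per_sec(n_users, tx_per_user_per_min, kbytes_per_tx):
    """P = N x T x D, with the per-minute rate converted to per-second."""
    return n_users * tx_per_user_per_min * kbytes_per_tx / 60

# Sports news site: 20,000 concurrent users, one request every two minutes
# (T = 0.5 transactions per user per minute), 3 Kbytes per article
p = throughput_kbytes_per_sec(20_000, 0.5, 3)
print(p)             # 500.0 Kbytes/sec
print(p * 8 / 1000)  # 4.0 megabits/sec
```

Running the same function across the N, T, and D values planned for load testing gives the data rates the server’s network connection must sustain.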

c) Stress Testing

Stress testing is a continuation of load testing, but in this instance the variables N, T, and D are forced to meet and then exceed operational limits. The intent of these tests is to answer each of the following questions.

• Does the system degrade “gently” or does the server shut down as capacity is exceeded?

• Does server software generate “server not available” messages? More generally, are users aware that they cannot reach the server?

• Does the server queue requests for resources and empty the queue once capacity demands diminish?

• Are transactions lost as capacity is exceeded?

• Is data integrity affected as capacity is exceeded?

• What values of N, T, and D force the server environment to fail? How does failure manifest itself? Are automated notifications sent to technical support staff at the server site?

• If the system does fail, how long will it take to come back on-line?

• Are certain WebApp functions (e.g., compute-intensive functionality, data streaming capabilities) discontinued as capacity reaches the 80 or 90 percent level?

A variation of stress testing is sometimes referred to as spike/bounce testing.

Compatibility Tests:

In some cases, small compatibility issues present no significant problems, but in others, serious errors can be encountered. For example, download speeds may become unacceptable, lack of a required plug- in may make content unavailable, browser differences can change page layout dramatically, font style may be altered and become illegible, or forms may be improperly organized. Compatibility testing strives to uncover these problems before the WebApp goes on-line.

The first step in compatibility testing is to define a set of “commonly encountered” client-side computing configurations and their variants. In essence, a tree structure is created, identifying each computing platform, typical display devices, the operating systems supported on the platform, the browsers available, likely Internet connection speeds, and similar information. Next, the Web engineering team derives a series of compatibility validation tests from existing interface tests, navigation tests, performance tests, and security tests. The intent of these tests is to uncover errors or execution problems that can be traced to configuration differences.

WebApp Testing:

The players: Doug Miller (manager of the SafeHome software engineering group) and Vinod Raman, a member of the product software engineering team.

The conversation:

Doug: What do you think of the SafeHome e-commerce WebApp V0.0?
Vinod: The outsourcing vendor’s done a good job. Sharan (development manager for the vendor) tells me they’re testing as we speak.
Doug: I’d like you and the rest of the team to do a little informal testing on the e-commerce site.
Vinod (grimacing): I thought we were going to hire a third-party testing company to validate the WebApp. We’re still killing ourselves trying to get the product software out the door.
Doug: We’re going to hire a testing vendor for performance and security testing, and our outsourcing vendor is already testing. Just thought another point of view would be helpful, and besides, we’d like to keep costs in line, so….
Vinod (sighs): What are you looking for?
Doug: I want to be sure that the interface and all navigations are solid.
Vinod: I suppose we can start with the use-cases for each of the major interface functions.
Learn about SafeHome, Specify the SafeHome system you need, Purchase a SafeHome system, Get technical support.
Doug: Good. But take thee navigation paths all the way to their conclusion.
Vinod (looking through a notebook of use-cases): Yeah, when you select Specify the SafeHome system you need, that takes you to:
Select SafeHome components, Get SafeHome component recommendations.
We can exercise the semantics of each path.
Doug: While you’re there, check out the content that appears at each navigation node.
Vinod: Of course… and the functional elements as well. Who’s testing usability?
Doug: Oh… the testing vendor will coordinate usability testing. We’ve hired a market research firm to line up 20 typical users for the usability study, but if you guys uncover any usability issues….
Doug: Thanks, Vinod.

Component-Level Testing:

Component-level testing, also called function testing, focuses on a set of tests that attempt to uncover errors in WebApp functions. Each WebApp function is a software module (implemented in any of a variety of programming or scripting languages) and can be tested using black-box (and in some cases white-box) techniques.

Component-level test cases are often driven by forms-level input. Once forms data are defined, the user selects a button or other control mechanism to initiate execution. The following test case design methods are typical.

• Equivalence partitioning — The input domain of the function is divided into input categories or classes from which test cases are derived. Test cases for each class of input are derived and executed while other classes of input are held constant. For example, an e-commerce application may implement a function that computes shipping charges. Among a variety of shipping information provided via a form is the user’s postal code. Test cases are designed in an attempt to uncover different classes of errors (e.g., an incomplete postal code, a correct postal code, a nonexistent postal code, an erroneous postal code format).

• Boundary value analysis — Forms data are tested at their boundaries. For example, the shipping calculation function requests the number of days required for delivery; a minimum of 2 days and a maximum of 14 are noted on the form. However, boundary value tests might input values of 0, 1, 2, 13, 14, and 15 to determine how the function reacts to data at and outside the boundaries of valid input.

• Path testing — If the logical complexity of the function is high, path testing (a white-box test case design method) can be used to ensure that every independent path in the program has been exercised.
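The first two test case design methods above can be sketched concretely for the shipping example. The validator, the five-digit postal format, and the validity function are assumptions made for illustration; only the postal-code error classes and the 2-to-14-day range come from the text.

```python
import re

# Equivalence partitioning: one representative test case per input class of
# the postal-code field (a five-digit format is assumed for illustration).
def classify_postal_code(code):
    """Partition a postal-code input into one equivalence class."""
    if code == "":
        return "missing"
    if not re.fullmatch(r"\d{5}", code):
        return "bad format"      # incomplete or erroneous format
    return "well-formed"         # existence would need a postal directory

assert classify_postal_code("90210") == "well-formed"
assert classify_postal_code("9021") == "bad format"    # incomplete code
assert classify_postal_code("ABCDE") == "bad format"   # erroneous format
assert classify_postal_code("") == "missing"

# Boundary value analysis: exercise the delivery-days field at, just inside,
# and just outside its valid range (minimum 2 days, maximum 14).
def shipping_days_valid(days):
    """Form constraint from the text: 2 <= days <= 14."""
    return 2 <= days <= 14

for days, expected in {0: False, 1: False, 2: True,
                       13: True, 14: True, 15: False}.items():
    assert shipping_days_valid(days) == expected
```

Note how each method needs only a handful of test cases: one representative per equivalence class, and six values clustered around the two boundaries.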

In addition to these test case design methods, a technique called forced error testing is used to derive test cases that purposely drive the WebApp component into an error condition. The purpose is to uncover errors that occur during error handling (e.g., incorrect or nonexistent error messages, WebApp failure as a consequence of the error, erroneous output driven by erroneous input, side effects that are related to component processing).

Each component level test case specifies all input values and the expected output to be provided by the component. The actual output produced as a consequence of the test is recorded for future reference during support and maintenance.

In many situations, the correct execution of a WebApp function is tied to proper interfacing with a database that may be external to the WebApp. Therefore, database testing becomes an integral part of the component testing regime. Hower discusses this when he writes:

Database-driven Web sites can involve a complex interaction among Web browsers, operating systems, plug-in applications, communications, Web services, databases, (scripting language) programs… security enhancements, and firewalls. Such complexity makes it impossible to test every possible dependency and everything that could go wrong with a site. The typical Web site development project will also be on an aggressive schedule, so the best testing approach will employ risk analysis to determine where to focus testing efforts. Risk analysis should include consideration of how closely the test environment will match the real production environment… Other typical considerations in risk analysis include:

• Which functionality in the Web site is most critical to its purpose?
• Which areas of the site require the heaviest database interaction?
• Which aspects of the site’s CGI, applets, ActiveX components, and so on are most complex?
• What types of problems would cause the most complaints or the worst publicity?
• What areas of the site will be the most popular?
• What aspects of the site have the highest security risks?

Each of the risk-related issues discussed by Hower should be considered when designing test cases for WebApp components and related functions.


Formulation is a customer communication activity that defines the problem that a WebApp is to solve. Business needs, project goals and objectives, end-user categories, major functions and features, and the degree of interoperability with other applications are all identified. As more detailed and technical information is required, formulation becomes requirements analysis.

The WebE team is composed of a group of technical and nontechnical members who are organized in a manner that gives them considerable autonomy and flexibility. Project management tasks are abbreviated and considerably less formal than those applied for conventional software engineering projects. Many WebApp projects are outsourced, but there is a growing trend towards in-house WebApp development. Project management for each approach differs in both strategy and tactics.

Web engineering metrics are in their infancy but have the potential to provide an indication of WebApp quality, provide a basis for effort estimation, and provide an indication of the success of the WebApp from a business point of view.

The goal of WebApp testing is to exercise each of the many dimensions of WebApp quality with the intent of finding errors or uncovering issues that may lead to quality failures. Testing focuses on content, function, structure, usability, navigability, performance, compatibility, interoperability, capacity, and security. Testing also incorporates reviews that occur as the WebApp is designed.

WebApp testing exercises each of these dimensions by initially examining “units” of content, functionality, or navigation. Once individual units have been validated, the focus shifts to tests that exercise the WebApp as a whole. To accomplish this, many tests are derived from the users’ perspectives and are driven by information contained in use-cases. A Web engineering test plan is developed that identifies testing steps, work products (e.g., test cases), and mechanisms for the evaluation of test results. The testing process encompasses different types of testing.

Content testing (and reviews) focuses on various categories of content. The intent is to uncover both semantic errors that affect the accuracy of content and the manner in which it is presented to the end-user. Interface testing exercises the interaction mechanisms that enable a user to communicate with the WebApp and validates aesthetic aspects of the interface. The intent is to uncover errors that result from poorly implemented interaction mechanisms, or omissions, inconsistencies or ambiguities in interface semantics.

Navigation testing applies use-cases, derived as part of the analysis activity, in the design of test cases that exercise each usage scenario against the navigation design. Navigation mechanisms are tested to ensure that any errors impeding completion of a use-case are identified and corrected. Component testing exercises content and functional units within the WebApp. Each Web page encapsulates content, navigation links, and processing elements that form a “unit” within the WebApp architecture. These units must be tested.

Configuration testing attempts to uncover errors and/or compatibility problems that are specific to a particular client or server environment. Tests are then conducted to uncover errors associated with each possible configuration. Security testing incorporates a series of tests designed to exploit vulnerabilities in the WebApp and its environment. The intent is to find security holes. Performance testing encompasses a series of tests that are designed to assess WebApp response time and reliability as demands on server-side resource capability increase.
Copyright © 2015         Home | Contact | Projects | Jobs

Review Questions
  • 1. Explain the basic concepts of web-engineering and the web-engineering process? Describe in detail the various fundamental best practices under the process. How will you define the framework of the process?
  • 2. How will you plan for web-engineering projects under the WebApp projects? State the various differences between traditional projects, small e-projects. Define the various roles for the crucial members of the web-engineering team.
  • 3. What is project planning? State in detail the various steps involved in project planning. Illustrate through a diagram the organizational differences between outsourcing and in-house development for web-apps.
  • 4. Enunciate the different steps involved in the outsourcing exercise and the in-house web-app planning under the web-engineering practices. Describe the various metrics for web-e and web-apps, as well as the metrics for assessing value.
  • 5. What are the possible worst practices under the web-app projects? Analyze in detail the process of web designing of the application.
  • 6. What is web-app interface design? Explain the various principles and guidelines for interface design and the concepts of the interface design workflow.
