Thursday, 27 March 2014

Test Plan

Test Plan
“A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.”
The IEEE 829 standard specifies the following test plan outline:

1.    Test Plan Identifier

2.    Introduction

3.    Test Items

4.    Features to be Tested

5.    Features Not to Be Tested

6.    Approach

7.    Item Pass/Fail Criteria

8.    Suspension Criteria and Resumption Requirements

9.    Testing Tasks

10.  Test Deliverables

11.  Environmental Needs

12.  Responsibilities

13.  Staffing and Training Needs

14.  Schedule

15.  Risks and Contingencies

16.  Approvals


1) Test Plan Identifier
§  A unique identifier

2) Introduction

§  Summary of the items and features to be tested
§  Need for and history of each item (optional)
§  References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards
§  References to lower level test plans

3) Test Items

§  Test items and their version
§  Characteristics of their transmittal media
§  References to related documents such as requirements specification, design specification, users guide, operations guide, installation guide
§  References to bug reports related to test items
§  Items which are specifically not going to be tested (optional)

4) Features to be Tested

§  All software features and combinations of features to be tested
§  References to test-design specifications associated with each feature and combination of features

5) Features Not to Be Tested

§  All features and significant combinations of features which will not be tested
§  The reasons these features won’t be tested

6) Approach

§  Overall approach to testing
§  For each major group of features or combinations of features, specify the approach
§  Specify major activities, techniques, and tools which are to be used to test the groups
§  Specify a minimum degree of comprehensiveness required
§  Identify which techniques will be used to judge comprehensiveness
§  Specify any additional completion criteria
§  Specify techniques which are to be used to trace requirements
§  Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadline

7) Item Pass/Fail Criteria

§  Specify the criteria to be used to determine whether each test item has passed or failed testing

8) Suspension Criteria and Resumption Requirements

§  Specify criteria to be used to suspend the testing activity
§  Specify testing activities which must be redone when testing is resumed

9) Test Deliverables

§  Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
§  Identify test input and output data
§  Identify test tools (optional)

10) Testing Tasks

§  Identify tasks necessary to prepare for and perform testing
§  Identify all task interdependencies
§  Identify any special skills required

11) Environmental Needs

§  Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
§  Specify the level of security required
§  Identify special test tools needed
§  Identify any other testing needs
§  Identify the source for all needs which are not currently available

12) Responsibilities

§  Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving
§  Identify groups responsible for providing the test items identified in the Test Items section
§  Identify groups responsible for providing the environmental needs identified in the Environmental Needs section

13) Staffing and Training Needs

§  Specify staffing needs by skill level
§  Identify training options for providing necessary skills

14) Schedule

§  Specify test milestones
§  Specify all item transmittal events
§  Estimate time required to do each testing task
§  Schedule all testing tasks and test milestones
§  For each testing resource, specify its periods of use

15) Risks and Contingencies

§  Identify the high-risk assumptions of the test plan
§  Specify contingency plans for each

16) Approvals

§  Specify the names and titles of all persons who must approve the plan
§  Provide space for signatures and dates

Test case Format in Excel sheet

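As an illustration only, a typical test case format in an Excel sheet might contain columns like the following (the column names and the sample row are assumed for illustration, not taken from any particular project):

Test Case ID | Test Scenario | Test Steps | Test Data | Expected Result | Actual Result | Status
TC_Login_01 | Verify login with valid data | 1. Open login page 2. Enter username and password 3. Click Login | user1 / pass@123 | Home page is displayed | Home page is displayed | Pass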




Equivalence Partitioning


Equivalence Partitioning:-

Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions. Typically, an input condition is either a specific numeric value, a range of values, a set of related values, or a boolean condition. Equivalence classes may be defined according to the following guidelines:
  1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
  2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
  3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
  4. If an input condition is boolean, one valid and one invalid class are defined.
As an example, consider data maintained as part of an automated banking application.
The user can access the bank using a personal computer, provide a six-digit password, and follow with a series of typed commands that trigger various banking functions. During the log-on sequence, the software supplied for the banking application accepts data in the form:
  • area code—blank or three-digit number
  • prefix—three-digit number not beginning with 0 or 1
  • suffix—four-digit number
  • password—six digit alphanumeric string
  • commands—check, deposit, bill pay, and the like
The input conditions associated with each data element for the banking application can be specified as:
  • area code: Input condition, Boolean—the area code may or may not be present.
  • Input condition, range—values defined between 200 and 999, with specific exceptions.
  • prefix: Input condition, range—specified value >200
  • suffix: Input condition, value—four-digit length
  • password: Input condition, Boolean—a password may or may not be present.
  • Input condition, value—six-character string.
  • command: Input condition, set—containing commands noted previously.
Applying the guidelines for the derivation of equivalence classes, test cases for each input domain data item can be developed and executed. Test cases are selected so that the largest number of attributes of an equivalence class are exercised at once.
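As a small illustration, here is a minimal Python sketch of equivalence partitioning for the six-character password condition above; the validate_password function and the sample values are hypothetical, chosen only so that each equivalence class is exercised by one representative value:

def validate_password(password):
    # Hypothetical rule: exactly six alphanumeric characters
    return len(password) == 6 and password.isalnum()

# One representative test value per equivalence class
test_values = {
    "valid: six alphanumeric characters": ("abc123", True),
    "invalid: too short": ("ab12", False),
    "invalid: too long": ("abcd1234", False),
    "invalid: non-alphanumeric characters": ("abc1!@", False),
}

for description, (value, expected) in test_values.items():
    result = validate_password(value)
    print(description, "->", "PASS" if result == expected else "FAIL")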

Boundary Value Analysis (BVA)


Boundary Value Analysis:-

Boundary value analysis is a black-box testing technique used to test an input object's data size, length, or range. There are six values used to test the size, length, or range:
  • Min
  • Max
  • Min+1
  • Max+1
  • Min-1
  • Max-1
These values are used to test the input data size, length, or range.

Example for boundary value analysis:-

Test data preparation for a username object: the customer requirement is that the username size should be a minimum of 4 and a maximum of 16 characters. Now we test the data using boundary value analysis:

min = 4 - should be accepted
max = 16 - should be accepted
min+1 = 5 - should be accepted
max+1 = 17 - should be rejected
min-1 = 3 - should be rejected
max-1 = 15 - should be accepted
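A minimal Python sketch of these boundary checks, assuming a hypothetical accepts_username function that enforces the 4-16 character rule:

MIN_LEN, MAX_LEN = 4, 16

def accepts_username(username):
    # Hypothetical validation: length must be between 4 and 16 characters
    return MIN_LEN <= len(username) <= MAX_LEN

boundary_cases = {
    "min (4)": ("a" * 4, True),
    "max (16)": ("a" * 16, True),
    "min+1 (5)": ("a" * 5, True),
    "max+1 (17)": ("a" * 17, False),
    "min-1 (3)": ("a" * 3, False),
    "max-1 (15)": ("a" * 15, True),
}

for label, (value, expected) in boundary_cases.items():
    outcome = accepts_username(value)
    print(label, "->", "PASS" if outcome == expected else "FAIL")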

Priority, Severity


Priority:-

This field describes the importance of a test case. In general, test engineers divide test cases into the types below:
  • Functional test cases - High priority (P0)
  • Non-functional test cases - Medium priority (P1)
  • Cosmetic test cases (GUI and usability test cases) - Low priority (P2)

Severity:-

This field describes the seriousness of a defect with respect to functionality and testing. Test engineers divide defect severity into the types below:

High severity:-

A defect due to which we are not able to proceed with further testing.

Medium severity:-

A defect with which we can proceed with further testing, but which must be resolved as per the customer requirement.

Low severity:-

A defect with which we can proceed with further testing, and which may or may not be resolved.

Error, Defect and Bug


     Error, Defect and Bug:


A mistake in code is called an Error. Due to errors in coding, test engineers find mismatches in the application, which are called Defects. A defect accepted by the development team to be solved is called a Bug.


Integration Testing


Integration testing:-

After completion of unit testing for each and every module, programmers integrate all the modules into a system or software build based on the HLD document. After completing the integration of modules, programmers apply integration testing to find out whether all the modules are correctly integrated or not, from the base module to the end module, as per the HLD document.
Sometimes it is not possible to complete the coding for all the modules and integrate them within the specified time. In such cases, programmers follow the integration approaches below to integrate the modules and to perform further testing:
  • Top down approach
  • Bottom up approach
  • Hybrid approach
  • System approach

Top down approach:-

In this integration approach, some sub-modules are still under construction. In place of such a sub-module, programmers use a dummy module called a "STUB", only for integration purposes.

Bottom up approach:-

In this integration approach, the main module is still under construction. In place of that module, programmers use a temporary module called a "DRIVER", only for integration purposes.

Hybrid approach:-

In this approach, programmers use both a "STUB" and a "DRIVER", because a main module and some sub-modules are under construction at the same time; this is called the hybrid approach.

System approach:-

When coding is completed for all the modules at the same time, programmers follow this type of integration approach (also known as big-bang integration).
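A minimal Python sketch of a stub and a driver, using hypothetical order and payment modules purely for illustration:

# Top-down: the payment sub-module is still under construction, so a STUB
# stands in for it while the order (main) module is integrated and tested.
def payment_stub(amount):
    # Dummy sub-module: always reports success so integration can proceed
    return {"status": "approved", "amount": amount}

def place_order(amount, pay=payment_stub):
    # Main module under test; calls whichever payment module it is given
    result = pay(amount)
    return result["status"] == "approved"

# Bottom-up: the order (main) module is still under construction, so a DRIVER
# temporarily calls the finished payment sub-module to exercise it.
def real_payment(amount):
    return {"status": "approved" if amount > 0 else "declined", "amount": amount}

def payment_driver():
    # Temporary main module: just invokes the sub-module with sample data
    for amount in (100, 0):
        print("payment of", amount, "->", real_payment(amount)["status"])

print("order placed:", place_order(250))
payment_driver()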

Unit Testing


Unit Testing:-

After completion of coding, developers perform unit testing on the developed software build to check whether there are any errors in the code.
Unit testing uses white-box testing techniques (WBT).

White Box Testing techniques:-

  • Basic path coverage
  • loops coverage
  • Conditions coverage
  • Programming technique coverage

Basic path coverage:-

During this coverage, programmers verify whether each and every statement in the program is correctly participating in execution or not.

Loops coverage:-

During this coverage, programmers verify whether each and every loop statement in the program is terminating correctly or not, as per the requirement.

Conditions coverage:-

During this coverage, programmers verify whether each and every condition is providing the corresponding output or not.

Programming technique coverage:-

During this coverage, programmers verify whether each and every program is implemented with the fewest possible logical statements or not, so that the output or response is obtained faster.
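A minimal Python sketch of condition coverage in a unit test, using the standard unittest module and a hypothetical grade function:

import unittest

def grade(score):
    # Hypothetical function under test, with two boundary conditions
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 40 else "fail"

class GradeConditionCoverage(unittest.TestCase):
    def test_each_condition_true_and_false(self):
        # Each condition in grade() is driven both true and false
        self.assertEqual(grade(40), "pass")
        self.assertEqual(grade(39), "fail")
        with self.assertRaises(ValueError):
            grade(-1)
        with self.assertRaises(ValueError):
            grade(101)

if __name__ == "__main__":
    unittest.main()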


Fish model


Software Quality

Software Quality:

Technical:
  • Meeting Customer Requirements
  • Meeting Customer Expectations (User friendly, Performance, Privacy)

Non-Technical:
  • Cost of Product
  • Time to Market

Software Quality Assurance:
To monitor and measure the strength of the development process, organisations follow SQA concepts.

Software Project:
Software-related problems are solved by software engineers through a software engineering process.

Regression Testing


Regression testing:-

Re-execution of selected, previously passed test cases on a modified software build to find out whether any side effects have arisen in previously tested functionalities with respect to the internal code changes.
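A minimal sketch of one common way to re-execute a selected regression subset, assuming pytest and a hypothetical "regression" marker (the marker would normally be registered in pytest.ini):

import pytest

def login(user, password):
    # Stand-in for the application code under test (hypothetical)
    return bool(user) and bool(password)

@pytest.mark.regression
def test_login_still_works():
    # Previously passed case, re-executed after every internal code change
    assert login("user1", "secret") is True

# After a modification, re-run only the tagged regression cases with:
#   pytest -m regression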
                                   

User Acceptance Testing


User acceptance Testing:-

After successful completion of system testing, project management people invite the customer or user to the development environment for user acceptance testing. During this test, the customer works on the developed software to find out whether our developed software fulfils his business needs or not.
User acceptance testing is mainly categorized into two types:
  • alpha-test
  • beta-test

Alpha-test:-

This is the first level of user acceptance test, done by the customer or user in the development (company) environment with the involvement of the development team, testing team, and project management team.

Beta -test:-

This is the second level of user acceptance test, done by the customer or user in the customer environment with the involvement of the release team.

Wednesday, 26 March 2014

SOFTWARE DEVELOPMENT LIFE CYCLE(SDLC)


Software development life cycle:-

It defines the step-by-step process of software development from start to end, to develop software within the specified time and budget and to deliver quality software to the customer.
In SDLC, the overall software development work is divided into the stages below:
  1. requirement gathering
  2. analysis
  3. design
  4. coding
  5. testing
  6. delivery & maintenance

Requirement gathering:-

In this stage, the business analyst gathers information from customers and prepares the business requirement specification (BRS). In the BRS, the business analyst captures the total project information obtained from customer-side people.

Analysis:-

After completion of the business requirement specification, the system analyst receives the BRS and prepares the software requirement specification (SRS). The SRS document contains the functional requirement specification and the non-functional requirement specification. Project development and testing are carried out using the SRS.

Design:-

After completion of the SRS, the design architect or system architect prepares the design documents. There are two types of design: high-level design and low-level design.

Coding:-

After completion of the design, developers or programmers concentrate on developing the project with respect to the software requirement specification, implementing the software build with the help of coding.

Testing:-

After the software build is developed, test engineers receive the developed software build from the development team. Test engineers then concentrate on testing the overall software build with respect to the software requirement specification, in order to release quality software to the customer.

Delivery & maintenance:-

After completion of testing, the deployment team or release team delivers the quality software to the customer. A change control board team and a maintenance team maintain the software build; after the software build goes to the customer-side people, if any modification is needed, the change control board team will do those modifications.



Software Test Life Cycle


The different stages in the Software Test Life Cycle are: Requirement Analysis, Test Planning, Test Case Development, Test Environment Setup, Test Execution, and Test Cycle Closure.

Each of these stages has definite Entry and Exit criteria, Activities & Deliverables associated with it.
In an ideal world you will not enter the next stage until the exit criteria for the previous stage are met, but practically this is not always possible. So for this tutorial, we will focus on the activities and deliverables for the different stages in STLC. Let's look into them in detail.

Requirement Analysis

During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements could be either functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also analysed in this stage.

Activities

  • Identify types of tests to be performed. 
  • Gather details about testing priorities and focus.
  • Prepare Requirement Traceability Matrix (RTM).
  • Identify test environment details where testing is supposed to be carried out. 
  • Automation feasibility analysis (if required).

Deliverables 

  • RTM
  • Automation feasibility report. (if applicable)
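A minimal sketch of a Requirement Traceability Matrix kept as a simple Python mapping; the requirement and test case IDs are hypothetical:

rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g. login requirement
    "REQ-002": ["TC-003"],             # e.g. password reset requirement
    "REQ-003": [],                     # not yet covered by any test case
}

# Any requirement with no mapped test case is a coverage gap
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without test cases:", uncovered)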

Test Planning

This phase is also called the Test Strategy phase. Typically, in this stage, a senior QA manager will determine effort and cost estimates for the project and will prepare and finalize the Test Plan.

Activities

  • Preparation of test plan/strategy document for various types of testing
  • Test tool selection 
  • Test effort estimation 
  • Resource planning and determining roles and responsibilities.
  • Training requirement

Deliverables

  • Test plan/strategy document
  • Effort estimation document

Test Case Development

This phase involves the creation, verification, and rework of test cases & test scripts. Test data is identified/created, reviewed, and then reworked as well.

Activities

  • Create test cases, automation scripts (if applicable)
  • Review and baseline test cases and scripts 
  • Create test data (If Test Environment is available)

Deliverables 

  • Test cases/scripts 
  • Test data

Test Environment Setup

The test environment decides the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development stage. The test team may not be involved in this activity if the customer/development team provides the test environment, in which case the test team is required to do a readiness check (smoke testing) of the given environment.

Activities 

  • Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. 
  • Setup test Environment and test data 
  • Perform smoke test on the build

Deliverables 

  • Environment ready with test data set up 
  • Smoke Test Results.

Test Execution

During this phase, the test team will carry out the testing based on the test plans and the test cases prepared. Bugs will be reported back to the development team for correction, and retesting will be performed.

Activities 

  • Execute tests as per plan
  • Document test results, and log defects for failed cases 
  • Map defects to test cases in RTM 
  • Retest the defect fixes 
  • Track the defects to closure

Deliverables 

  • Completed RTM with execution status 
  • Test cases updated with results 
  • Defect reports

Test Cycle Closure

The testing team will meet, discuss, and analyze testing artifacts to identify strategies that have to be implemented in future, taking lessons from the current test cycle. The idea is to remove the process bottlenecks for future test cycles and to share best practices for similar projects in future.

Activities

  • Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software, Critical Business Objectives, Quality
  • Prepare test metrics based on the above parameters. 
  • Document the learning out of the project 
  • Prepare Test closure report 
  • Qualitative and quantitative reporting of quality of the work product to the customer. 
  • Test result analysis to find out the defect distribution by type and severity.

Deliverables 

  • Test Closure report 
  • Test metrics
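As a small illustration of the test result analysis mentioned in the activities above, a minimal Python sketch of defect distribution by type and severity, using hypothetical defect records:

from collections import Counter

defects = [
    {"type": "functional", "severity": "high"},
    {"type": "functional", "severity": "medium"},
    {"type": "GUI", "severity": "low"},
    {"type": "performance", "severity": "medium"},
]

by_type = Counter(d["type"] for d in defects)
by_severity = Counter(d["severity"] for d in defects)
print("Defects by type:", dict(by_type))
print("Defects by severity:", dict(by_severity))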

Defect Life Cycle



The different states of a bug can be summarized as follows:
1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed

Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the lead of the tester approves that the bug is genuine and he changes the state as “OPEN”.
3. Assign: Once the lead changes the state as “OPEN”, he assigns the bug to corresponding developer or developer team. The state of the bug now is changed to “ASSIGN”.
4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for next round of testing. Before he releases the software with bug fixed, he changes the state of bug to “TEST”. It specifies that the bug has been fixed and is released to testing team.
5. Deferred: A bug changed to the deferred state is expected to be fixed in future releases. There are many reasons for changing a bug to this state: the priority of the bug may be low, there may be a lack of time for the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.
7. Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the bug, then one bug status is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
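A minimal Python sketch of these bug states and a few of the typical transitions between them (the transition table is illustrative, not a complete workflow):

from enum import Enum

class BugState(Enum):
    NEW = 1
    OPEN = 2
    ASSIGN = 3
    TEST = 4
    VERIFIED = 5
    DEFERRED = 6
    REOPENED = 7
    DUPLICATE = 8
    REJECTED = 9
    CLOSED = 10

# A few of the allowed transitions described above (illustrative only)
ALLOWED = {
    BugState.NEW: {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE, BugState.DEFERRED},
    BugState.OPEN: {BugState.ASSIGN},
    BugState.ASSIGN: {BugState.TEST},
    BugState.TEST: {BugState.VERIFIED, BugState.REOPENED},
    BugState.REOPENED: {BugState.ASSIGN},
    BugState.VERIFIED: {BugState.CLOSED},
}

def move(current, target):
    # Reject any transition the life cycle does not allow
    if target not in ALLOWED.get(current, set()):
        raise ValueError("cannot move from " + current.name + " to " + target.name)
    return target

state = BugState.NEW
for nxt in (BugState.OPEN, BugState.ASSIGN, BugState.TEST, BugState.VERIFIED, BugState.CLOSED):
    state = move(state, nxt)
print("final state:", state.name)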

SYSTEM TESTING


System testing:-

After successful completion of integration testing, test engineers receive the software build from the development team through a common server and install that build into the test environment. On the installed software build, test engineers apply system testing to validate whether each and every requirement in the developed software is correctly working or not, as per the SRS document.
System testing is mainly categorized into the sub-tests below:

  • Functional testing
  • Non-functional testing

Functional testing:-

In functional testing, test engineers validate the functional requirements in the developed software as per the functional requirement specification in the SRS document.
During functional testing, test engineers concentrate on the functional coverages below, for both window-based and web-based applications:
  1. Behavioral coverage
  2. Input domain coverage 
  3. Output coverage 
  4. Calculation coverage
  5. Error handling coverage
  6. Database coverage
  7. Links coverage 
  8. URLS coverage

Behavioral coverage:-

During this coverage, test engineers validate whether each and every object in the developed software is behaving correctly or not, as per the given requirement of the customer, in terms of enabled, disabled, focused, and so on.

Input domain coverage:-

During this test, test engineers validate whether each and every object in the developed software is taking the customer-specified type and size of input data or not.

Output coverage:-

During this test, test engineers validate whether each and every functionality in the developed software is providing the customer-specified output or not, with respect to the corresponding input data.

Calculation coverage:-

During this coverage, test engineers validate whether the calculation operations in the developed software are working correctly or not, as per the requirement of the customer.

Error handling coverage:-

During this coverage, test engineers validate whether the input objects in the developed software are rejecting wrong operations of users or not, with corresponding error messages.

Database coverage:-

During this coverage, test engineers validate the completeness and correctness of database operations.
To perform database testing, test engineers receive the "database design document" from the database designers; this document contains the entire details of the corresponding database.

Links coverage:-

During this coverage, test engineers validate whether each and every web page in the developed web application is providing the customer-specified number of links or not, in terms of text links and image links.

URLs coverage (Uniform Resource Locator):-

During this coverage, test engineers validate whether each and every link in the developed web pages opens the next page with the specified URL or not.
To perform links testing and URL testing, test engineers receive the sitemap document from the web designers; this document contains each and every page's link information and the URLs of those links.
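A minimal Python sketch of input domain and error handling coverage for a hypothetical age field that must accept whole numbers from 18 to 60:

def set_age(value):
    # Input domain coverage: only digits of the right type and range are accepted
    if not str(value).isdigit():
        return "error: age must be a number"            # error handling coverage
    age = int(value)
    if not 18 <= age <= 60:
        return "error: age must be between 18 and 60"   # error handling coverage
    return "accepted"

for value in ("25", "abc", "17", "61"):
    print(value, "->", set_age(value))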

Non-functional testing:-

After successful completion of functional testing, test engineers apply non-functional testing to validate the non-functional requirements in the developed software, as per the non-functional requirement specification in the SRS document.
During non-functional testing, test engineers apply the types of non-functional tests below:
  1. Recovery or reliability testing
  2. Compatibility or portability
  3. Hardware compatibility
  4. Inter system testing
  5. GUI or user interface testing
  6. Usability testing
  7. performance testing
  8. Security testing
  9. Parallel testing
  10. Installation testing
  11. Un installation testing
  12. Globalization testing
  13. Nationalization testing
  14. Benchmark testing

Recovery or reliability testing:-

During this test, test engineers validate whether the developed software recovers from an abnormal situation to a normal situation or not, when the application goes into any failure situation.

Compatibility testing or portability testing:-

During this test, test engineers validate whether the developed software runs or works on the customer-specified operating systems and browsers or not.

Hardware compatibility or input devices testing:-

During this test, test engineers validate whether the developed software is compatible with hardware devices from different companies or not.

Inter system testing:-

During this test, test engineers validate whether the developed software's functionalities coexist with other specified software's functionalities or not, in order to perform operations and share data.

GUI or User interface testing:-

During this test, test engineers validate whether our software build provides attractive, good look-and-feel screens or not.

Usability testing:-

During this test, test engineers validate whether the developed software is user-friendly or not.

Performance testing:-

During this test, test engineers validate whether the functionalities in the developed software provide a response within the specified time or not, as per the performance specification in the non-functional requirement specification.
It is mainly divided into two types:
  • Load test 
  • Stress test

Load test:-

During this test, test engineers validate whether the developed software supports the customer-specified load or not, and whether each and every functionality of the software responds within the specified time or not.

Stress test:-

During this test, test engineers validate whether our developed software build supports more than the customer-expected load or not, to find out the peak load or the maximum load the software can handle.
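A minimal Python sketch of a simple load check, where a hypothetical operation is called by many simulated users at once and each response time is compared against an assumed limit:

import time
from concurrent.futures import ThreadPoolExecutor

RESPONSE_LIMIT = 0.5   # seconds; assumed requirement from the NFR spec

def business_operation():
    # Stand-in for the real functionality under load (hypothetical)
    time.sleep(0.1)
    return "ok"

def timed_call(_):
    start = time.time()
    business_operation()
    return time.time() - start

# Simulate 50 users calling the operation at the same time
with ThreadPoolExecutor(max_workers=50) as pool:
    durations = list(pool.map(timed_call, range(50)))

slow = [d for d in durations if d > RESPONSE_LIMIT]
print(len(slow), "of", len(durations), "responses exceeded", RESPONSE_LIMIT, "seconds")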

Security testing:-

During this test, test engineers validate whether our developed software build provides the customer-specified security facilities or not, to avoid unauthorized access to secure data by data hackers.
During security testing, test engineers concentrate on the coverages below:
  • Authorization
  • Access control
  • Encryption/Decryption

Authorization:-

During this coverage, test engineers validate whether the developed software gives permission only to authorized or valid users to access that software.

Access control:-

During this coverage, test engineers validate whether our developed software provides access for authorized users to use specific services or not.

Encryption/Decryption:-

During this coverage, test engineers validate whether the developed software converts plain text into encrypted form, and encrypted form back into plain text, while sending data from one server to another server, to secure the data from data hackers.

Comparative or competitive testing:-

During this test, test engineers compare one software product's features with similar existing products in the market, to find out the strengths and weaknesses of the product.

Installation testing:-

During this test, test engineers validate whether the developed software installs successfully or not, as per the developer-given installation guidelines.

Un installation testing:-

During this test, test engineers validate whether the developed software uninstalls successfully or not, as per the given uninstallation guidelines.

Globalization(or)Internationalization:-

During this test, test engineers validate whether the developed software supports different countries' languages, currencies, and date and time settings or not. This test applies when the software is developed for global users.

Nationalization(or)Localization:-

During this test, test engineers validate whether the developed software supports a particular country's language, currency, and date and time settings or not.

Benchmark testing:-

During this test, test engineers validate whether the software is developed according to company standards or not.

Waterfall Model

The sequential phases in Waterfall model are:
  • Requirement Gathering and analysis: All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification doc.
  • System Design: The requirement specifications from first phase are studied in this phase and system design is prepared. System Design helps in specifying hardware and system requirements and also helps in defining overall system architecture.
  • Implementation: With inputs from system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality which is referred to as Unit Testing.
  • Integration and Testing: All the units developed in the implementation phase are integrated into a system after testing of each unit. Post integration the entire system is tested for any faults and failures.
  • Deployment of system: Once the functional and non functional testing is done, the product is deployed in the customer environment or released into the market.
  • Maintenance: There are some issues which come up in the client environment. To fix those issues patches are released. Also to enhance the product some better versions are released. Maintenance is done to deliver these changes in the customer environment.

ADVANTAGE

The advantage of waterfall development is that it allows for departmentalization and control. A schedule can be set with deadlines for each stage of development and a product can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing, installation, troubleshooting, and ends up at operation and maintenance. Each phase of development proceeds in strict order.

DISADVANTAGE

The disadvantage of waterfall development is that it does not allow for much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well-documented or thought upon in the concept stage.

Tuesday, 25 March 2014

V-MODEL


V stands for verification and validation

Verification:-

It is the process of checking whether we are developing the software in the right way or not.

Validation:-

It is the process of checking whether the developed software is as per the given requirements of the customer or not.

The V-model is suitable for the development of large projects which have clear requirements.

BRS(Business requirement specification):-

This document contains the business requirements of the customer in non-technical form. It is prepared by the business analyst.

SRS(Software requirement specification):-

This document contains the software requirement specification based on the business requirements of the customer, in terms of functional and non-functional requirement specifications in technical form. It is prepared by the system analyst.

HLD(High level design):-

This document contains the overall structure of the software, from the base module to the end module.

LLD(Low level design):-

This document contains the internal process of each and every module.

Coding: 

This is at the bottom of the V-Shape model. Module design is converted into code by developers.

Advantages of V-model:

  • Simple and easy to use.
  • Testing activities like planning, test designing happens well before coding. This saves a lot of time. Hence higher chance of success over the waterfall model.
  • Proactive defect tracking – that is defects are found at early stage.
  • Avoids the downward flow of the defects.
  • Works well for small projects where requirements are easily understood.

Disadvantages of V-model:

  • Very rigid and least flexible.
  • Software is developed during the implementation phase, so no early prototypes of the software are produced.
  • If any changes happen midway, then the test documents along with the requirement documents have to be updated.

When to use the V-model:

  • The V-shaped model should be used for small to medium sized projects where requirements are clearly defined and fixed.
  • The V-Shaped model should be chosen when ample technical resources are available with needed technical expertise.

 
