Top QA Software Tester Interview Questions

By YuwebDesign

Software Development Models

Models provide general guidelines – not an accurate step-by-step process that has to be followed to the letter.

If needed, adapt the models to product and project characteristics.

SDLC V-model (extension of the waterfall model)
is also known as the Verification and Validation model.

A sequential development lifecycle model describing a one-for-one relationship between the major phases of software development, from business requirements specification to delivery, and the corresponding test levels, from acceptance testing down to component testing.

V-model Sequential Software Development Model

A characteristic of a well-managed test level is that it has a corresponding objective.

A common type of V-model uses four test levels,
corresponding to the four development levels:

  1. Component (unit) testing
  2. Integration testing
  3. System testing
  4. Acceptance testing

Variants of the V-model exist with fewer or different levels of development and testing, e.g. component integration testing after component testing, and system integration testing after system testing.

Software work products (business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels.
References for generic work products include Capability Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207).

When the design documents are finished, the tester should have enough information to begin writing functional and non-functional test cases.

Iterative-incremental development
Process of establishing requirements, designing, building and testing a system in a series of short development cycles.

Examples are:

  1. prototyping,
  2. Rapid Application Development (RAD),
  3. Rational Unified Process (RUP),
  4. and agile development models.

Iterative-incremental Development Models
Main Concept

  1. The project is broken into small modules that can each deliver a finished result at the end of a cycle
    (e.g., for banking software: the UI, functionality related to the transfer of money, etc.).
  2. A working model/prototype of the software is produced during the first module.
  3. Each subsequent release of the modules adds functionality to the previous release.
  4. The process continues until the complete system is achieved.
  5. More than one iteration can be in progress at the same time (two teams working on two modules simultaneously, then the modules are merged).
  6. A system may be tested at several test levels during each iteration.
  7. An increment, added to others developed previously, forms a growing partial system, which should also be tested.
  8. Regression testing is increasingly important on all iterations after the first one.
  9. Verification and validation can be carried out on each increment.
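The growing partial system can be re-verified each cycle with an automated regression suite. A minimal sketch using Python's unittest, built around a hypothetical transfer_money function from the banking example above (the function itself is illustrative, not from the text):

```python
import unittest

# Hypothetical iteration-1 functionality from the banking example above.
def transfer_money(balance, amount):
    """Transfer `amount` out of an account; return the new balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class TransferRegressionTests(unittest.TestCase):
    """Re-run in every iteration to catch regressions in earlier increments."""

    def test_normal_transfer(self):
        self.assertEqual(transfer_money(100, 30), 70)

    def test_insufficient_funds_rejected(self):
        with self.assertRaises(ValueError):
            transfer_money(10, 50)

    def test_non_positive_amount_rejected(self):
        with self.assertRaises(ValueError):
            transfer_money(100, 0)

if __name__ == "__main__":
    unittest.main(exit=False)  # run the suite without terminating the process
```

When a later increment touches the transfer module, this suite is simply re-run to confirm earlier behavior still holds.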

Extreme Programming (XP)
is a software development approach
for small teams
on risk-prone projects
with unstable requirements.

It was created by Kent Beck
who described the approach in his book
‘Extreme Programming Explained’

Testing (‘extreme testing’)
is a core aspect of Extreme Programming.

Programmers are expected
to write unit and functional test code first –
before the application is developed.
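The test-first practice can be sketched in Python: the unit test is written (and fails) before the function it exercises exists; only then is just enough code written to make it pass. The validate_pin function here is a hypothetical example, not from the text:

```python
import unittest

# Step 1 (written first): the test defines the expected behavior
# of a function that does not exist yet.
class ValidatePinTest(unittest.TestCase):
    def test_accepts_four_digit_pin(self):
        self.assertTrue(validate_pin("1234"))

    def test_rejects_short_or_non_numeric_pin(self):
        self.assertFalse(validate_pin("12"))
        self.assertFalse(validate_pin("12ab"))

# Step 2 (written second): just enough code to make the tests pass.
def validate_pin(pin):
    return len(pin) == 4 and pin.isdigit()

if __name__ == "__main__":
    unittest.main(exit=False)  # run the suite without terminating the process
```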

Test code is under source control
along with the rest of the code.

Customers are expected
to be an integral part of the project team
and to help develop scenarios
for acceptance/black box testing.

Acceptance tests are preferably automated,
and are modified and rerun
for each of the frequent development iterations.

QA and test personnel are also required
to be an integral part of the project team.

Detailed requirements documentation is not used,
and frequent re-scheduling, re-estimating, and re-prioritizing is expected.

Testing within a Life Cycle Model

Test Objective
A reason or purpose for designing and executing a test.

Objectives of Testing in SDLC:

  1. Finding defects
  2. Gaining confidence about the level of quality
  3. Providing information for decision-making
  4. Preventing defects

Main objective of different viewpoints in testing

  1. development testing (e.g., component, integration and system testing) – to cause as many failures as possible so that defects in the software (functionality) are identified and can be fixed
  2. acceptance testing – to confirm that the system works as expected, to gain confidence that it has met the requirements (no intention of fixing defects)
  3. maintenance testing – to confirm that no new defects have been introduced during development of the changes.
  4. operational testing – to assess system characteristics such as reliability or availability.

Source: ISTQB CTFL Syllabus

  1. For every development activity there is a corresponding testing activity
  2. Each test level has test objectives specific to that level
  3. The analysis and design of tests for a given test level should begin during the corresponding development activity
  4. Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle
  5. Test levels can be combined or reorganized depending on the nature of the project or the system architecture.

E.g., for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform

  • integration testing at the system level (e.g., integration to the infrastructure and other systems, or system deployment)
  • and acceptance testing (functional and/or non-functional, and user and/or operational testing).

  1. When a bug is found, it needs to be communicated and assigned to developers that can fix it.
  2. After the problem is resolved, fixes should be re-tested.
  3. Additionally, determinations should be made
    regarding requirements, software, hardware, safety impact, etc.,
    for regression testing
    to check the fixes didn’t create other problems elsewhere.
  4. If a problem-tracking system is in place,
    it should encapsulate these determinations.

    A variety of commercial, problem-tracking/management software tools are available.

    These tools, with the detailed input of software test engineers,
    will give the team complete information
    so developers can understand the bug,
    get an idea of its severity,
    reproduce it and fix it.

Defect Life Cycle

  1. New
    new defect is found
  2. The bug can be assigned automatically depending on the application or module,
    or assigned for analysis to a development/testing/project manager
  3. Assigned
    If it is a defect, it is assigned to the development team/developer to work on,
    by the project lead or the manager of the testing team
  4. Open/Active
    developer analyzes the defect and works on fixing it, if required.

    If not defect:

    1. Duplicate
    2. Rejected
    3. Deferred
      (not in scope, not high priority and it can get fixed in the next releases)
    4. Not A Bug
    5. Not reproducible
  5. Fixed
  6. Pending Retest
    developer assigns the defect to the tester for retesting the defect
  7. Retest
    tester starts retesting of the defect
    to verify if the defect is fixed as per the requirements
  8. Verified (if fixed)
    or Reopened (if not fixed, cycle starts over)
  9. Closed


The following are items to consider in the bug tracking process:

  1. Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
  2. Bug identifier (number, ID, etc.)
  3. Current bug status (e.g., ‘Released for Retest’, ‘New’, etc.)
  4. The application name or identifier and version
  5. The function, module, feature, object, screen, etc. where the bug occurred
  6. Environment specifics, system, platform, relevant hardware specifics
  7. Test case name/number/identifier
  8. One-line bug description
  9. Full bug description
  10. Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn’t have easy access to the test case/test script/test tool
  11. Names and/or descriptions of file/data/messages/etc. used in test
  12. File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
  13. Severity estimate (a 5-level range such as 1-5 or ‘critical’-to-‘low’ is common)
  14. Was the bug reproducible?
  15. Tester name
  16. Test date
  17. Bug reporting date
  18. Name of developer/group/organization the problem is assigned to
  19. Description of problem cause
  20. Description of fix
  21. Code section/file/module/class/method that was fixed
  22. Date of fix
  23. Application version that contains the fix
  24. Tester responsible for retest
  25. Retest date
  26. Retest results
  27. Regression testing requirements
  28. Tester responsible for regression tests
  29. Regression testing results
  30. A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
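A subset of these fields can be captured in a simple record type; the selection and types below are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BugReport:
    """Illustrative subset of the bug-tracking fields listed above."""
    bug_id: str
    summary: str                      # one-line bug description
    status: str = "New"               # current bug status
    severity: int = 3                 # 1 (critical) .. 5 (low)
    application: str = ""             # application name and version
    environment: str = ""             # system, platform, hardware specifics
    steps_to_reproduce: list = field(default_factory=list)
    reproducible: bool = True
    tester: str = ""
    reported_on: date = field(default_factory=date.today)

# Hypothetical report for the banking example used earlier.
bug = BugReport(
    bug_id="BUG-101",
    summary="Transfer screen crashes on amounts over 1,000,000",
    severity=1,
    steps_to_reproduce=["Open transfer screen", "Enter 1000001", "Press Send"],
)
print(bug.status, bug.severity)  # New 1
```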

SDLC (Software Development Life Cycle)
SDLC is a systematic approach to developing software.

STLC (Software Test Life Cycle)
The process of testing software in a well-planned and systematic way is known as the software testing life cycle (STLC).

Test Organization and Test Management

  1. Independence of Testing
  2. Constructive communication

Source: ISTQB CTFL Syllabus

Ways to improve communication and relationships

  1. Collaboration
    Start with collaboration rather than battles –
    remind everyone of the common goal of better
    quality systems

    Common goal – Quality of Product

    Defects found and fixed during testing will
    save time and money later, and reduce risks.

  2. Communication of defects
    Communicate findings on the product in a neutral, fact-focused
    way without criticizing/blaming the person who created it,
    for example, write objective and factual incident reports
    and review findings.

    Identifying failures during testing may be perceived as
    criticism against the product/author.

    Testers need good interpersonal skills to communicate
    factual information in a constructive way.

  3. Compassion
    Try to understand how the other person feels
    and why they react as they do
  4. Confirmation
    Confirm that the other person has understood
    what you have said and vice versa

Source: ISTQB CTFL Syllabus

Independence of Testing
Separation of responsibilities,
which encourages objective testing
through an independent view by trained and professional testing resources.

  1. The effectiveness of finding defects by testing and reviews can be improved by using independent testers.
  2. Independence is not a replacement for familiarity: developers can still find defects in their own code.
  3. Independent testing may be carried out at any level of testing.
  4. For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers.
  5. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness.
  6. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.
Levels of Independence (low => high)
  1. No independent testers; developers test their own code
    Tests designed by the person(s) who wrote the software.
  2. Tests designed by another person(s) (e.g., development team)
    Another developer (same organization) testing software
  3. Independent testers within the development team
  4. Independent Tester(s) (same organization) outside the development team
    Tests designed by a person(s) from a different organizational group
    within the same organization
    reporting to project management or executive management

    • e.g., an independent test team or group
    • independent test specialists for specific test types
      • usability testers,
      • performance test specialists
      • security testers
      • or certification testers (who certify a software product against standards and regulations)
  5. Independent testers outsourced or external to the organization (different organization)
    Tests designed by a person(s) from a different organization or company
    (i.e., outsourcing or certification by an external body)
Benefits of independent testing within an organization
  1. Avoiding the author bias
    makes tester more effective at finding defects and failures.
  2. Testers see things differently as compared to developers
    and find different defects
  3. enables the testers to detect errors arising from assumptions
    An independent tester can verify assumptions
    people made during specification and implementation of the system
Drawbacks of independent testing within an organization
  1. Isolation from the development team
    (if treated as totally independent)
  2. Developers may lose a sense of responsibility for quality
  3. Independent testers may be seen as a bottleneck or blamed for delays in release

Source: ISTQB CTFL Syllabus

A group of people whose primary responsibility is software testing

The mindset while testing differs from that while developing:
destructive (trying to break the software) vs. constructive (trying to build it).

Source: ISTQB CTFL Syllabus

Testing tasks may be done by

  1. people in a specific testing role
      The activities and tasks
      performed by people in these roles depend on

    • the project and product context,
    • the people in the roles,
    • and the organization.
  2. or may be done by someone in another role, such as
    • a project manager
    • quality manager,
    • Developer,
    • business and domain expert,
    • infrastructure
    • or IT operations.

People who work on test analysis, test design, specific test types or test automation may be specialists in these roles.

Depending on the test level
and the risks related to the product and the project,
different people may take over the role of tester,
keeping some degree of independence.


  1. testers at the component and integration level would be developers,
  2. testers at the acceptance test level would be business experts and users,
  3. and testers for operational acceptance testing would be operators.

Source: ISTQB CTFL Syllabus

Depending on the organization,
the following roles are more or less standard
on most testing projects:

  1. Testers,
  2. Test Engineers,
  3. Test/QA Team Lead,
  4. Test/QA Manager,
  5. System Administrator,
  6. Database Administrator,
  7. Technical Analyst,
  8. Test Build Manager
  9. Test Configuration Manager

Test Build Managers,
System Administrators,
Database Administrators

  1. deliver current software versions to the test environment,
  2. install the application’s software and apply software patches,
    to both the application and the operating system,
  3. set-up, maintain and back up test environment hardware.

Test Configuration Manager
maintains test environments, scripts, software and test data.

Technical Analyst
performs test assessments and validates system/functional test requirements.

Depending on the project,
one person may wear more than one hat,
e.g., Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Test Leader
  1. Sometimes called a
    1. test manager
    2. test coordinator
    3. Test Team Lead
    4. QA Team Lead
  2. The role of the test leader may be performed by
    1. a project manager,
    2. a development manager,
    3. a quality assurance manager
    4. or the manager of a test group.
  3. In larger projects two positions may exist:
    1. test leader
    2. and test manager
  4. Typically the test leader
    1. coordinates

      1. plans,
      2. monitors and
      3. controls

      the testing activities and tasks

    2. communicates testing status to management
    3. and manages the test team.
  5. Test managers should not re-allocate resources merely to meet original plans
Tasks of the Test Leader
  1. Coordinate the test strategy and plan
    with project managers and others
  2. Write or review a test strategy for the project,
    and test policy for the organization
  3. Contribute the testing perspective to other project activities,
    such as integration planning
  4. Plan the tests
    considering the context
    and understanding the test objectives and risks
    1. selecting test approaches,
    2. estimating the time, effort and cost of testing,
    3. acquiring resources,
    4. defining test levels, cycles, and planning incident management
  5. Test Leader should
    1. Initiate
      • the specification,
      • preparation,
      • implementation
      • and execution

      of tests,

    2. monitor the test results
    3. and check the exit criteria
  6. Adapt planning
    based on test results and progress
    (sometimes documented in status reports)
    and take any action necessary to compensate for problems
  7. Set up adequate configuration management
    of testware for traceability
  8. Introduce suitable metrics
    for measuring test progress
    and evaluating the quality of the testing and the product
  9. Decide what should be automated, to what degree, and how
  10. Select tools to support testing
    and organize any training in tool use for testers
  11. Decide about the implementation of the test environment
  12. Write test summary reports based on the information gathered during testing

Source: ISTQB CTFL Syllabus

Test Engineers
are engineers who specialize in testing.

Test engineers

  1. create test cases, procedures, scripts and generate data
  2. execute test procedures and scripts
  3. analyze standards of measurements
  4. evaluate results of system/integration/regression testing
Tasks of the Tester

Typical tester tasks may include:

  1. Review and contribute to test plans
  2. Analyze, review and assess user requirements, specifications and models for testability
  3. Create test specifications
  4. Set up the test environment (often coordinating with system administration and network management)
  5. Prepare and acquire test data

  6. Implement tests on all test levels,
    execute and log the tests,
    evaluate the results
    and document the deviations from expected results
  7. Use test administration or management tools
    and test monitoring tools as required
  8. Automate tests (may be supported by a developer or a test automation expert)
  9. Measure performance of components and systems (if applicable)
  10. Review tests developed by others

Benefits of Software Testing

  1. Speed up the work of the development staff;
  2. Reduce your organization’s risk of legal liability;
  3. Give you the evidence that your software is correct and operates properly;
  4. Improve problem tracking and reporting;
  5. Maximize the value of your software;
  6. Maximize the value of the devices that use it;
  7. Assure the successful launch of your product by discovering bugs and design flaws, before users get discouraged, before shareholders lose their cool and before employees get bogged down;
  8. Help the work of your development staff, so the development team can devote its time to build up your product;
  9. Promote continual improvement;
  10. Provide documentation required by FDA, FAA, other regulatory agencies and your customers;
  11. Save money by discovering defects ‘early’ in the design process, before failures occur in production, or in the field;
  12. Save the reputation of your company by discovering bugs and design flaws before they can damage it.

Source: ISTQB CTFL Syllabus, pavantestingtools.com

Tester vs. Test Leader
  1. The test manager plans, organizes, and controls the testing activities,
  2. while the tester specifies and executes tests.

Source: ISTQB CTFL Syllabus

Typical responsibilities of a QA Tester:

  1. Analyzes business requirements and design documents for completeness and testability.
  2. Develops test plans, test scenarios, test cases, test data and test scripts for different types of testing activities.
  3. Crafts, develops, and executes manual tests.
  4. Develops, oversees and maintains test scripts and procedures.
  5. Performs execution of test cases, test scripts, captures test results, test metrics and reports them.
  6. Identifies and tracks system defects, performs root cause analysis of the defects, and works with the development team to resolve them.
  7. Partners with test leads and architects to develop and establish Quality Assurance standards and best practices.
  8. Performs and coordinates system verification and validation test activities utilizing various tools such as HP Quality Center/Application lifecycle Management.
  9. Performs Quality Assurance testing activities to ensure the applications and products and/or releases are in compliance with the Quality Assurance standards of the organization.
  10. Sometimes serves as the first support contact for several web properties resolving tier 1 publisher and user issues.

The general testing process is

  1. the creation of a test strategy
    (which sometimes includes the creation of test cases),
  2. creation of a test plan/design
    (which usually includes test cases and test procedures)
  3. and the execution of tests.

It depends on

  1. the size of the organization
  2. and the risks involved.

Generally speaking, QA processes should be balanced with productivity,
in order to keep any bureaucracy from getting out of hand.

  1. For large organizations with high-risk projects,
    a serious management buy-in is required
    and a formalized QA process is necessary.
  2. For medium-sized organizations with lower-risk projects,
    management and organizational buy-in
    and a slower, step-by-step process are required.
  3. For smaller groups or projects,
    an ad-hoc process is more appropriate.

A lot depends on

  1. team leads and managers,
  2. feedback to developers,
  3. and good communication among
    1. customers,
    2. managers,
    3. developers,
    4. test engineers
    5. and testers.

Regardless of the size of the company,
the greatest value for effort
is in managing requirement processes,
where the goal is requirements that are

  1. clear,
  2. complete
  3. and testable.

By implementing QA processes slowly over time,
using consensus to reach agreement on processes,
and adjusting and experimenting
as an organization grows and matures,
productivity will be improved instead of stifled.

Problem prevention will lessen the need for problem detection,
panics and burn-out will decrease,
and there will be improved focus and less wasted effort.

At the same time, attempts should be made
to keep processes simple and efficient,
minimize paperwork,
promote computer-based processes
and automated tracking and reporting,
minimize time required in meetings,
and promote training as part of the QA process.

However, no one – especially talented technical types –
likes rules or bureaucracy,
and in the short run things may slow down a bit.

A typical scenario would be
that more days of planning and development will be needed,
but less time will be required for late-night bug-fixing
and calming of irate customers.

This is a common problem in the software industry,
especially in new technology areas.

There is no easy solution in this situation, other than:

  1. Hire good people
  2. Management should ‘ruthlessly prioritize’ quality issues and maintain focus on the customer
  3. Everyone in the organization should be clear on what ‘quality’ means to the customer

Security clearance
is a process of determining
your trustworthiness and reliability
before granting you access
to national security information.

The levels of classified access are

  1. confidential,
  2. secret,
  3. top secret,
  4. and sensitive compartmented information,

of which top secret is the highest.

The best bet in this situation
is for the testers to go through the process
of reporting whatever bugs
or blocking-type problems initially show up,
with the focus being on critical bugs.

Since this type of problem can severely affect schedules,
and indicates deeper problems
in the software development process

    such as

  1. insufficient unit testing
  2. or insufficient integration testing,
  3. poor design,
  4. improper build or release procedures, etc.

managers should be notified,
and provided with some documentation
as evidence of the problem.

Since it’s rarely possible to test

  1. every possible aspect of an application,
  2. every possible combination of events,
  3. every dependency,
  4. or everything that could go wrong,

risk analysis is appropriate
to most software development projects.

Use risk analysis to determine where testing should be focused.

This requires judgment skills, common sense and experience.


Considerations should include answers to the following questions:

  1. Which functionality is most important to the project’s intended purpose?
  2. Which functionality is most visible to the user?
  3. Which functionality has the largest safety impact?
  4. Which functionality has the largest financial impact on users?
  5. Which aspects of the application are most important to the customer?
  6. Which aspects of the application can be tested early in the development cycle?
  7. Which parts of the code are most complex and thus most subject to errors?
  8. Which parts of the application were developed in rush or panic mode?
  9. Which aspects of similar/related previous projects caused problems?
  10. Which aspects of similar/related previous projects had large maintenance expenses?
  11. Which parts of the requirements and design are unclear or poorly thought out?
  12. What do the developers think are the highest-risk aspects of the application?
  13. What kinds of problems would cause the worst publicity?
  14. What kinds of problems would cause the most customer service complaints?
  15. What kinds of tests could easily cover multiple functionalities?
  16. Which tests will have the best high-risk-coverage to time-required ratio?
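Answers to such questions are often combined into a simple impact-times-likelihood score to rank where testing effort should go first. The areas and scores below are hypothetical, and the 1-5 scales are one common convention, not a standard:

```python
# Rank candidate test areas by risk = impact x likelihood (both on 1-5 scales).
# The example areas and scores are invented for illustration.
areas = [
    ("money transfer",  {"impact": 5, "likelihood": 4}),
    ("report export",   {"impact": 2, "likelihood": 3}),
    ("login",           {"impact": 4, "likelihood": 2}),
    ("UI color themes", {"impact": 1, "likelihood": 2}),
]

# Highest-risk areas first: these get tested earliest and deepest.
ranked = sorted(areas,
                key=lambda a: a[1]["impact"] * a[1]["likelihood"],
                reverse=True)

for name, s in ranked:
    print(f"{name:15s} risk={s['impact'] * s['likelihood']}")
```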

Consider the impact of project errors,
not the size of the project.

However, if extensive testing is still not justified,
risk analysis is again needed
and the considerations listed under
“What if there isn’t enough time for thorough testing?” do apply.

The test engineer then should do “ad hoc” testing,
or write up a limited test plan
based on the risk analysis.

Can the excel sheet template be used for defect reporting?
If so what are the common fields that are to be included?
Who assigns the priority and severity of the defect?


To report bugs in Excel, use columns such as:

  1. S.No
  2. Module
  3. Screen/Section
  4. Issue detail
  5. Severity
  6. Priority
  7. Issue status

and also set filters on the column attributes.
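A defect log with these columns can also be generated programmatically as CSV, which Excel opens directly; the sample rows are invented:

```python
import csv
import io

# Columns matching the Excel defect-report template above.
FIELDS = ["S.No", "Module", "Screen/Section", "Issue Detail",
          "Severity", "Priority", "Issue Status"]

# Hypothetical defects for illustration.
rows = [
    [1, "Payments", "Transfer", "Crash on large amounts", "Major", "High", "Active"],
    [2, "Reports", "Export", "Date format wrong in PDF", "Low", "Low", "Resolved"],
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(FIELDS)
writer.writerows(rows)
report = buf.getvalue()  # write this string to defects.csv and open in Excel
print(report)
```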

Defect Management System

However, most companies use a shared process
(e.g., SharePoint or a dedicated defect-management tool) for reporting bugs.

When a project comes in for testing,
module-wise details of the project
are entered into the defect management system
in use.

It contains following fields:
1. Date
2. Issue brief
3. Issue description (used for developer to regenerate the issue)
4. Issue status (active, resolved, on hold, suspend and not able to regenerate)
5. Assign to (Names of members allocated to project)
6. Priority (High, medium and low)
7. Severity (Major, medium and low)

Work with management/project’s stakeholders early on
to understand how requirements might change,
so that alternate test plans and strategies
can be worked out in advance.

It is helpful if the application’s initial design
allows for some adaptability, so that later changes
do not require redoing the application from scratch.

Additionally, try to…

  1. Ensure the code is well commented and well documented;
    this makes changes easier for the developers.
  2. Use rapid prototyping whenever possible;
    this will help customers feel sure of their requirements and minimize changes.
  3. The project’s initial schedule
    should allow for some extra time
    commensurate with probable changes.
  4. Try to move new requirements
    to a ‘Phase 2’ version of an application
    and use the original requirements
    for the ‘Phase 1’ version.
  5. Negotiate to allow only easily implemented
    new requirements into the project.
  6. Ensure customers and management
    understand scheduling impacts,
    inherent risks and costs
    of significant requirements changes.

    Then let management or the customers decide
    if the changes are warranted;
    after all, that’s their job.

  7. Balance the effort
    put into setting up automated testing
    with the expected effort
    required to redo them
    to deal with changes.
  8. Design some flexibility into automated test scripts;
    Focus initial automated testing
    on application aspects
    that are most likely to remain unchanged;
  9. Devote appropriate effort
    to risk analysis of changes,
    in order to minimize regression-testing needs;
  10. Design some flexibility into test cases;
    this is not easily done;
    the best bet is to minimize the detail
    in the test cases,
    or set up only higher-level generic-type test plans;
  11. Focus less on detailed test plans and test cases
    and more on ad-hoc testing
    with an understanding of the added risk this entails.

It may take serious effort to determine
if an application has significant unexpected or hidden functionality,
which would indicate deeper problems
in the software development process.

If the functionality isn’t necessary to the purpose of the application,
it should be removed,
as it may have unknown impacts or dependencies
that were not taken into account by the designer or the customer.

If not removed, design information will be needed
to determine added testing needs
or regression testing needs.

Management should be made aware
of any significant added risks
as a result of the unexpected functionality.

If the functionality only affects low-risk areas,
such as minor improvements in the user interface,
it may not be a significant risk.

Client/server testing

Client/server applications can be quite complex
due to the multiple dependencies among clients,
data communications, hardware, and servers.

Thus testing requirements can be extensive.

When time is limited (as it usually is)
the focus should be on integration and system testing.

Additionally, load/stress/performance testing
may be useful in determining
client/server application limitations and capabilities.

There are commercial tools to assist with such testing.

Web sites are essentially client/server applications – with web servers and ‘browser’ clients.

Consideration should be given to the interactions between

  1. html pages,
  2. TCP/IP communications,
  3. Internet connections,
  4. firewalls,
  5. applications that run in web pages
    1. such as applets,
    2. javascript,
    3. plug-in applications
  6. and applications that run on the server side
    1. such as cgi scripts,
    2. database interfaces,
    3. logging applications,
    4. dynamic page generators,
    5. asp, etc.

Additionally, there are

  1. a wide variety of servers and browsers,
  2. various versions of each,
  3. small but sometimes significant differences between them,
  4. variations in connection speeds,
  5. rapidly changing technologies,
  6. and multiple standards and protocols.

The end result is that testing for web sites can become a major ongoing effort.

Test Levels

Test Level
A specific instantiation of a test process.

Synonyms: test stage

  1. Unit/Component Testing (Done by Developers)
  2. Integration Testing (Done by Testers)
  3. System Testing (Done by Testers)
  4. Acceptance Testing (Done by End Users)

Test Level Pyramid

For each of the test levels, the following can be identified:

  1. the generic objectives,
  2. the work product(s) being referenced for deriving test cases (i.e., the test basis),
  3. the test object (i.e., what is being tested),
  4. typical defects and failures to be found,
  5. test harness requirements and tool support,
  6. and specific approaches and responsibilities.

Levels of Testing and Corresponding Activities

Alpha Testing
Simulated or actual operational testing conducted in the developer’s test environment, by roles outside the development organization

Beta Testing
Simulated or actual operational testing conducted at an external site,
by roles outside the development organization.

Synonyms: field testing

Component
A minimal software item for which a separate specification is available.

Component Testing
The testing of individual hardware or software components.
Searches for defects in, and verifies the functioning of, test objects that are separately testable.
Synonyms: module testing, unit testing, program testing

Test basis

Test cases for component testing are usually derived from

  1. Component requirements/specifications
  2. Program specifications
  3. Detailed design specifications
  4. Data models
  5. Code
Typical test objects
  1. software components/modules (e.g., Database modules)
  2. Programs
  3. Data conversion / migration programs
  4. Objects
  5. Classes, etc.
Unit Testing Process

It may be done in isolation from the rest of the system,
depending on the context of the development life cycle and the system.

To provide a complete environment for a module,

  • stubs
  • drivers
  • simulators

may be used.
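The stub/driver idea can be sketched as follows (all names here are illustrative, not from the syllabus): the component under test gets a stub in place of its real dependency, and a driver invokes it.

```python
# Sketch: component testing in isolation using a stub and a driver.

def calculate_invoice(order_total, tax_service):
    """Component under test: adds tax obtained from a dependency."""
    return order_total + tax_service.tax_for(order_total)

class TaxServiceStub:
    """Stub: stands in for the real (lower-level) tax service."""
    def tax_for(self, amount):
        return round(amount * 0.10, 2)  # fixed, predictable behavior

def driver():
    """Driver: calls the component under test and checks the result."""
    result = calculate_invoice(100.00, TaxServiceStub())
    assert result == 110.00
    return result

driver()
```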

Component testing may include testing of

  1. functionality
  2. specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing,
  3. as well as structural testing (e.g., decision coverage).

Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool.

In practice, component testing usually involves the programmer who wrote the code.

Defects are typically fixed as soon as they are found, without formally managing these defects.

Test-Driven Development (Test-First Approach)

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development.

This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests correcting any issues and iterating until they pass.
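A minimal sketch of the cycle, using Python's unittest (the `slugify` helper is a hypothetical example): the tests are written first, and the implementation below is just enough to make them pass.

```python
# Test-first sketch: these tests conceptually existed before the code.
import unittest

def slugify(title):
    # Minimal implementation written after the tests, just enough to pass.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_surrounding_whitespace_is_dropped(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

# Run the component tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
assert unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```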

Integration
The process of combining components or systems into larger assemblies.

Integration Testing
Testing performed to expose defects
in the interfaces and in the interactions
between integrated components or systems.
Testing of combined parts of an application
to determine if they function together correctly.

  1. Usually performed after unit and functional testing.
  2. This type of testing is especially relevant to client/server and distributed systems.
  3. Test basis includes software and system design
  4. test objects include interfaces
  5. tests concentrate on the interactions between different parts of a system
Test basis
  1. (High level) Software and system design
  2. Technical specification
  3. Architecture
  4. Workflows
  5. Use cases
Typical test objects of Integration Testing
  1. Operating system
  2. Subsystems
  3. Database implementation
  4. Infrastructure
  5. Interfaces between systems and components
  6. Interactions with/between different parts of a system
  7. System configuration and configuration data
  8. File system and hardware
Integration Testing Process

There may be more than one level of integration testing
and it may be carried out on test objects of various size:

  1. Component integration testing tests the interaction between components and is done after component testing

    It tests only communication between components, not component functionality (which was covered during component testing).

  2. System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing.

    In this case the developing organization may control
    only one side of the interface.
    This might be considered as a risk.

    Business processes implemented as workflows
    may involve a series of systems.
    Cross-platform issues may be significant.

    If you need to add system integration testing as a test level for a particular project, it should directly follow system testing.
    System integration testing is done when systems of software need to work together. System testing should be completed on each of the systems prior to integrating them and conducting system integration testing.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.

At each stage of the integration,
testers concentrate solely on the integration itself.
E.g., if module A+module B, test the communication between the modules,
not the functionality of each individual module
as that was done during component testing.
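This focus can be sketched with two tiny hypothetical components: the integration test exercises only the hand-off between them, not the internals of either.

```python
# Component integration sketch: check the interface between A and B only
# (each component's own behavior was already covered by component tests).

def parse(csv_line):            # component A
    return [field.strip() for field in csv_line.split(",")]

def format_row(fields):         # component B
    return " | ".join(fields)

def test_parser_feeds_formatter():
    # The integration concern: A's output shape is what B expects.
    fields = parse(" a , b , c ")
    assert format_row(fields) == "a | b | c"

test_parser_feeds_formatter()
```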

Ideally, testers should understand architecture and influence integration planning.

If integration tests are planned
before components or systems are built,
those components can be built
in the order required for most efficient testing.

Systematic integration strategies may be based on

  1. system architecture
    (such as top-down or bottom-up),
  2. functional tasks,
  3. transaction processing sequences,
  4. or some other aspect of the system or components.

Testing Types:

Integration testing may include functional testing, testing of specific non-functional characteristics (e.g., performance), and structural testing.
Approaches to integration testing
  1. Big Bang Integration Testing
  2. Incremental Integration Testing
    1. Top-down Incremental Approach
    2. Bottom-up Incremental Approach
    3. Sandwich/Hybrid Integration Testing Approach

Source: ISTQB CTFL Syllabus

Big Bang Integration Testing

  • all components at once
  • time consuming
  • difficult to identify the cause of failure

Big Bang Integration Testing Approach

Source: ISTQB CTFL Syllabus

Incremental Integration Testing
To ease fault isolation and detect defects early,
integration should normally be incremental (vs Big Bang)

Incremental Integration Testing Approach

  • Integrating components one by one,
    test after each step
  • Time consuming since stubs and drivers have to be developed

Incremental Integration Testing Approach Types

  1. Top-down Incremental Approach
    (uses stubs)
    component at the top of the component hierarchy is tested first,
    with lower level components being simulated by stubs.

    Tested components are then used
    to test lower level components.

    The process is repeated until the lowest level components have been tested.

  2. Bottom-up Incremental Approach
    (uses drivers)
    the lowest level components are tested first,
    then higher level components.
    The process is repeated until the component at the top of the hierarchy is tested.
  3. Sandwich/Hybrid Integration Testing Approach
    integration starts from the middle layer and moves simultaneously up and down

    Sandwich/Hybrid Integration Testing Approach

Source: ISTQB CTFL Syllabus
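The top-down idea can be sketched with Python's unittest.mock (component names are illustrative): the top-level component is exercised first, while its not-yet-integrated lower-level dependency is simulated by a stub.

```python
# Top-down integration sketch: the lower-level component does not exist
# yet, so a mock-based stub stands in for it.
from unittest import mock

def report(repo):
    # Top-level component under test; depends on a lower-level repository.
    rows = repo.fetch_rows()
    return f"{len(rows)} row(s)"

# Stub simulating the lower-level component's interface.
repo_stub = mock.Mock()
repo_stub.fetch_rows.return_value = [{"id": 1}, {"id": 2}]

assert report(repo_stub) == "2 row(s)"
```

In a bottom-up approach the roles flip: the real `repo` is built and tested first, and a small driver plays the part of `report` until the top layer exists.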

Thread Testing

A variation of top-down testing
where the progressive integration of components
follows the implementation of subsets of the requirements,
as opposed to the integration of components
by successively lower levels.

System Testing
Testing an entire integrated system
(rather than its individual components)
to verify that it meets specified requirements.

Test Basis

Test cases for system testing are usually derived from

  1. System and software requirement specification
  2. Functional specifications
  3. Use cases
  4. Risk analysis reports

Unit vs. System Testing

  1. Test cases for component testing are usually derived from
    • component specifications,
    • design specifications,
    • or data models,
  2. whereas test cases for system testing are usually derived from
    • requirement specifications,
    • functional specifications,
    • or use cases.
Typical test objects
  1. System, user and operation manuals
  2. System configuration and configuration data

System testing is most often the final test on behalf of development
to verify that the system to be delivered meets the specification.
The product is then delivered to the client.

Purpose is to find as many defects as possible.

The testing scope shall be clearly addressed in
the Master and/or Level Test Plan for that test level.

The test environment should correspond to the
final target or production environment as much as possible
in order to minimize the risk
of environment-specific failures not being found in testing.

System Testing may include tests based on:

  1. risks and/or requirement specifications,
  2. business processes,
  3. use cases,
  4. or other high level text description or models of system behavior,
  5. interactions with the operating system, and system resources.
System Testing Types

System Testing should investigate

  • Functional and Non-functional requirements of the system

    System Testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested.
    E.g., a decision table may be created for combinations of effects described in business rules.

    Structure-based techniques (white box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation.

  • Data quality characteristics

Testers also need to deal with incomplete or undocumented requirements.

An independent test team often carries out system testing.
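For illustration, a decision table for a hypothetical discount business rule can be executed directly as a data-driven check (rule and numbers are invented for the example):

```python
# Decision table sketch: each row combines conditions with the expected
# action, and the system-level check walks every combination.

def discount(is_member, order_total):
    # Hypothetical business rule under test.
    if is_member and order_total >= 100:
        return 0.15
    if is_member:
        return 0.05
    if order_total >= 100:
        return 0.10
    return 0.0

# Rows: (member?, order total) -> expected discount
decision_table = [
    (True,  150, 0.15),
    (True,   50, 0.05),
    (False, 150, 0.10),
    (False,  50, 0.0),
]

for member, total, expected in decision_table:
    assert discount(member, total) == expected
```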

Acceptance Testing
Black-box testing conducted to enable a user/client/customer/project manager to:

  1. determine whether to accept a software product
  2. verify the system functionality and usability prior to the system being released to production
  3. validate that the software meets a set of agreed acceptance criteria
Acceptance Testing Process
    1. Done in the environment similar to production.
    2. It is a type of black box testing
    3. Customer verifies
      if the Acceptance Testing has passed
      (they approve or disapprove).
    4. The acceptance test is the responsibility
      of the client/customer/users of a system or project manager (not testers);
      other stakeholders may be involved as well.
    5. However, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.
    6. Answers the following questions:
      1. Can the system be released?
      2. What, if any, are the outstanding (business) risks?
      3. Has development met their obligations?
    7. Most often focused on validation (vs. verification) testing
      (e.g., is the system fit for use)
    8. The goal of Acceptance Testing is to establish confidence in
      1. the system,
      2. parts of the system,
      3. or specific non-functional characteristics of the system.
    9. Finding defects is not the main focus in Acceptance Testing.

      The goal of acceptance testing is to build confidence that the software meets the needs of the stakeholders.

      Finding defects during this level of testing is not a goal and should not happen if the previous levels of testing have been completed successfully.

    10. Acceptance Testing may assess the system for deployment and use,
      although it is not necessarily the final level of testing.
      E.g., a large-scale system integration test may come after the acceptance test for a system.
    11. Acceptance Testing may come in various times in the life cycle, e.g.
      1. Commercial Off-The-Shelf (COTS) software may be acceptance tested when it is installed or integrated,
      2. Acceptance Testing of the usability of the component may be done during component testing,
      3. Acceptance Testing of a new functional enhancement may come before system testing
    12. Testers should be involved in reviewing a test specification for UAT as soon as the test specification for UAT has been drafted.
Typical forms of Acceptance Testing

  1. User Acceptance Testing (UAT)
  2. Operational(/Production) Acceptance Testing
  3. Contract and Regulation (/Compliance) Acceptance Testing
  4. Alpha and Beta Testing

Test basis

  1. User requirements
  2. System requirements
  3. Business use cases
  4. Business processes
  5. Risk analysis reports

Typical test objects

  1. Business processes on fully integrated system
  2. Operational and maintenance processes
  3. User procedures
  4. Forms
  5. Reports
  6. Configuration data

Source: ISTQB CTFL Syllabus

User Acceptance Testing (UAT)
Acceptance testing conducted in a real or simulated operational environment by intended users, focusing on their needs, requirements and business processes.

  1. Typically verifies the fitness for use of the system by business users
  2. A formal product evaluation performed by a customer as a condition of purchase.
  3. This is carried out to determine whether the software satisfies its acceptance criteria and should be accepted by the customer.
  4. UAT is one of the final stages of a software project and will often be performed before a new system is accepted by the customer.

Source: ISTQB CTFL Syllabus

Operational(/Production) Acceptance Testing
System meets the requirements for operation
Acceptance of system by system administrators, including:

  1. Testing of backup/restore,
  2. Disaster recovery,
  3. User management,
  4. Maintenance tasks,
  5. Data load and migration tasks,
  6. Periodic checks of security vulnerabilities

Source: ISTQB CTFL Syllabus

Contract and Regulation (/Compliance/Conformance) Acceptance Testing
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

  1. Contract Acceptance Testing is performed against a contract’s acceptance criteria for producing custom-developed software.
    Acceptance criteria should be defined when the parties agree to the contract.
  2. Regulation Acceptance Testing is performed against any regulation that must be adhered to, such as government, legal, or safety regulations.

Source: ISTQB CTFL Syllabus

Alpha Testing vs Beta Testing

Alpha Testing

  1. Synonyms: factory testing, in-house testing
  2. Purpose: testing of an application when development is nearing completion,
    to assess the readiness of the system
    to be exposed to external stakeholders (such as customers).
    Minor design changes can still be made
    as a result of alpha testing.
  3. When and where: tested before being moved to a customer’s site.
  4. Performed by: a group at the developing organization’s site
    that is independent of the design team but still within the company,
    e.g. in-house software test engineers or software QA engineers.
    Usually performed not by clients or the developing team,
    but by internal testers.

Beta Testing

  1. Synonyms: field testing, site testing
  2. Purpose: testing of a pre-release of a software product, conducted by customers
    when development and testing are essentially completed
    and final bugs and problems need to be found before the final release.
    Developers of market, or COTS, software
    want feedback from potential or existing customers in their market
    before the software product is put up for sale commercially.
  3. When and where: tested after being moved to a customer’s site.
  4. Performed by: real users/end-users, customers or potential customers
    at their own locations,
    not programmers, software engineers, or test engineers.

Alpha and Beta Testing

Source: ISTQB CTFL Syllabus

Testing Activities

Test activities are related to software development activities.
Different development life cycle models need different approaches to testing.

The fundamental test process consists of the following main activities:

  1. Test planning and control (planning tests)
  2. Choosing test conditions
  3. Test analysis and design (designing test cases)
  4. Test implementation and execution (most visible part)
  5. Evaluating exit criteria and reporting (evaluating results)
  6. Test closure activities after test phase is completed (logging the defects, taking appropriate actions for the defects)
  7. Also reviewing documents (including source code) and conducting static analysis
  8. Automating test cases

Test activities exist before and after test execution.

Although logically sequential,
the activities in the process may overlap
or take place concurrently.

Source: ISTQB CTFL Syllabus

Test Planning
Activity of establishing or updating a test plan

Test plan is simply a guide on how testing will be done; it specifies

  1. the objectives/scope of testing
  2. and the (technical) specification of test activities
    (using requirement documents + discussion with the client)

in order to meet the objectives and mission.

  1. Objectives: what to test (functionality, performance)
  2. Implementation of test strategy: what (not) to test
  3. Prioritization: what to test first.
  4. Test Planning should be a continuous activity throughout the project
    and is performed in all life cycle processes and activities.
  5. As the project and test planning progress,
    more information becomes available
    and more detail can be included in the plan.
  6. Feedback from test activities
    is used to recognize changing risks
    so that planning can be adjusted.
  7. Planning may be documented in:
    1. a master test plan and
    2. in separate test plans for test levels
      • such as system testing
      • and acceptance testing.

    The outline of a test-planning document is covered
    by the ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

Planning is influenced by
  1. the test policy of the organization,
  2. What is in and out of the scope of testing,
  3. objectives,
  4. risks,
  5. constraints (e.g., budget limitations.),
  6. criticality,
  7. testability
  8. the availability of resources
Test Planning Activities

Test planning activities for an entire system or part of a system may include:

  1. Determining the scope and risks and identifying the objectives of testing
  2. Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
  3. Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
  4. Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
  5. Scheduling test analysis and design activities
  6. Scheduling test implementation, execution and evaluation
  7. Assigning resources for the different activities defined
  8. Defining the amount, level of detail, structure and templates for the test documentation
  9. Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
  10. Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

Source: ISTQB CTFL Syllabus, pavantestingtools.com

Test Estimation
estimation of test effort

Once the test effort is estimated,
resources can be identified
and a schedule can be drawn up.

Test Estimation Approaches

Two approaches for the estimation of test effort are:

  1. The metrics-based approach:
    estimating the testing effort based on
    • metrics of former or similar projects
    • or based on typical values
  2. The expert-based approach:
    estimating the tasks based on estimates made by
    • the owner of the tasks
    • or by experts
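A metrics-based estimate can be as simple as dividing planned work by a historical productivity figure; all numbers below are illustrative assumptions, not real project data.

```python
# Metrics-based sketch: effort derived from data on former, similar projects.
historical_cases_per_day = 12   # test cases executed per person-day (assumed)
planned_test_cases = 300        # scope of the new project (assumed)

effort_person_days = planned_test_cases / historical_cases_per_day
assert effort_person_days == 25.0
```

An expert-based estimate would instead sum per-task figures supplied by the task owners or other experts.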
Factors influencing Test Estimation

The testing effort (and test estimation) may depend on a number of factors, including:

  1. Characteristics of the product:
    1. the quality of the specification and other information used for test models (i.e., the test basis),
    2. the size of the product,
    3. the complexity of the problem domain,
    4. the requirements for reliability and security,
    5. and the requirements for documentation

    E.g., requirements for reliability and security in the product will affect test effort
    because it will drive how much testing is needed
    to achieve the required level of confidence in the product.

  2. Characteristics of the development process:
    1. the stability of the organization,
    2. tools used,
    3. test process,
    4. skills of the people involved,
    5. and time pressure

    E.g., if the project is using highly skilled and experienced developers,
    the test estimate should factor that in,
    because higher quality software would be expected from these developers.

  3. The outcome of testing:
    1. the number of defects
    2. and the amount of rework required

Source: ISTQB CTFL Syllabus

Test Schedule
is a schedule that identifies all tasks
required for a successful testing effort:
all test activities
and their resource requirements.

Test Control
Ongoing activity of comparing actual progress against the plan, and
reporting the status, including deviations from the plan.
(comparing the planned test progress to the actual test progress)

A test management task that deals with
developing and applying a set of corrective actions
to get a test project on track
when monitoring shows a deviation from what was planned.

Maintaining/ Tracking progress of Project

Test Control and Maintaining/ Tracking progress of Project

  1. Test control should take place during all activities of the fundamental test process.
    Testing activities should be monitored throughout the project
  2. Control occurs throughout the project to ensure that it is staying on track based on the plan and to take any corrective steps that may be necessary.
  3. The monitoring information is used to determine if control actions are needed.
  4. Test control involves actions necessary to meet the mission and objectives of the project.
Test Control and Test Planning

Test controlling affects test planning.
Test planning takes into account the feedback from monitoring and control activities.

A project never goes exactly as planned; there is always divergence, e.g.:

  1. Software will be delivered late for testing
  2. Performance scripts were executed during off hours,
    but are now shifted to weekends
    because the app is now being used during off hours as well
  3. Test environment is not available on time
  4. Many defects were found, so the number of testing iterations will increase
Test Control Actions

Test control describes any guiding or corrective actions
taken as a result of information
and metrics gathered and reported.

Actions may cover any test activity and may affect any other software life cycle activity or task.

Examples of test control actions include:

  1. Making decisions based on information from test monitoring
  2. Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
  3. Changing the test schedule due to availability or unavailability of a test environment
  4. Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build

Source: ISTQB CTFL Syllabus

When does testing of a system’s configuration data happen?

Testing a system’s configuration data
shall be considered during test planning.

Summarize how configuration management supports testing
Configuration Management
  1. Configuration management (CM)
    covers the tools and processes
    used to control, coordinate and track

    1. source code,
    2. requirements,
    3. development and test documentation,
    4. test scripts,
    5. third-party software,
    6. hardware,
    7. data
    8. problems,
    9. change requests,
    10. designs,
    11. tools,
    12. compilers,
    13. libraries,
    14. patches,

    changes made to them and who makes the changes.

  2. A small team may not recognize the power of configuration management,
    but in large teams configuration management is necessary to manage the work.
  3. Reported defects should be version controlled.
    Configuration management helps to avoid testing the wrong software
    and reporting irreproducible defects against the wrong versions of code.
  4. Purpose of Configuration Management

    The purpose of configuration management
    is to establish and maintain the integrity of the products
    (components, data and documentation)
    of the software or system
    through the project and product life cycle.

Configuration Management for Testing
For testing, configuration management
may involve ensuring the following:

  1. All items of testware are
    1. identified,
    2. version controlled,
    3. tracked for changes,
    4. related to each other
      and to development items (test objects)

    so that traceability can be maintained throughout the test process.

  2. All identified documents and software items
    are referenced unambiguously in test documentation
  3. For the tester, configuration management
    helps to uniquely identify (and to reproduce)
    1. the tested item,
    2. test documents,
    3. the tests
    4. and the test harness(es)

    E.g., status accounting of configuration items,
    identification of test versions,
    record of changes to documentation over time,
    controlled library access

  4. During test planning,
    the configuration management procedures and infrastructure (tools)
    should be chosen, documented and implemented.

Source: ISTQB CTFL Syllabus, pavantestingtools.com

Test Analysis
The activity that identifies test conditions by analyzing the test basis.

Test Design
The activity of deriving and specifying test cases from test conditions.

Test Analysis and Design
Activity during which general testing objectives are transformed
into tangible test conditions and test cases.

Test analysis and design major tasks:

  1. Reviewing the test basis (anything received for the testing),
    evaluating the test basis for testability:

    1. Requirements
    2. software integrity level (risk level)
      degree to which software complies
      with stakeholder-selected software characteristics
      defined to reflect the importance of the software to its stakeholders:

      • software complexity,
      • risk assessment,
      • safety level,
      • security level,
      • desired performance,
      • reliability,
      • cost
    3. risk analysis reports,
    4. architecture
    5. design,
    6. interface specifications
  2. Evaluating testability of the test basis and test objects
  3. Identifying and prioritizing test conditions (what to test first) based on analysis of test items, the specification, behavior and structure of the software
  4. Designing and prioritizing high level test cases
  5. Identifying necessary test data to support the test conditions and test cases (all the information we need to run the tests)
  6. Designing the test environment setup (e.g., simulation of the real banking software to prevent unnecessary bank transfers) and identifying any required infrastructure and tools
  7. Creating bi-directional traceability between test basis and test cases

Source: ISTQB CTFL Syllabus
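Bi-directional traceability can be sketched as a mapping between requirement IDs and test case IDs that is invertible in both directions (the IDs are hypothetical):

```python
# Traceability sketch: test basis -> test cases, then inverted.
req_to_tests = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-2", "TC-3"],
}

# Invert the mapping: which requirements does each test case cover?
test_to_reqs = {}
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs.setdefault(tc, []).append(req)

assert test_to_reqs["TC-2"] == ["REQ-1", "REQ-2"]
```

With both directions available, a gap in either (a requirement with no tests, or a test with no requirement) is easy to detect.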

The design and prioritization of the high level test cases
happens during test analysis and design.

Prioritization of the test procedures
happens during implementation and execution.

Source: ISTQB CTFL Syllabus

Test cases are designed during test specification.

Test Specification
The complete documentation of the test design, test cases and test procedures for a specific test item.

Source: ISTQB CTFL Syllabus

Test execution should be scheduled during the test planning stage of the project.

It may be refined during test analysis and design and even during test implementation.

Once execution has started, priorities may be adjusted and the schedule may be lengthened or shortened but the general scheduling should occur during the planning stage.

Source: ISTQB CTFL Syllabus

Test Implementation
The activity that prepares the testware needed for test execution based on test analysis and design.

Test Execution
The process of running a test on the component or system under test, producing actual result(s).

Test Implementation and Execution
activity where test procedures or scripts are specified
by combining the test cases in a particular order
and including any other information needed for test execution,
the environment is set up
and the tests are run.

Major Tasks of Implementation
  1. Finalizing, implementing and prioritizing test cases (including the identification of test data)
  2. Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
  3. Creating test suites from the test procedures for efficient test execution
  4. Verifying that the test environment has been set up correctly
  5. Verifying and updating bi-directional traceability between the test basis and test cases (traceability matrix)
Major Tasks of Execution
  1. Executing test procedures either manually or by using test execution tools, according to the planned sequence
  2. Logging the outcome of test execution (using test management tools) and recording the identities and versions of the software under test, test tools and testware
  3. Comparing actual results with expected results (pass-fail)
  4. Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
  5. Repeating test activities as a result of action taken for each discrepancy,
    • confirmation testing (=retesting = re-execution of a test that previously failed in order to confirm a fix),
    • execution of a corrected test
    • execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)

Source: ISTQB CTFL Syllabus
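The execute-compare-log loop above can be sketched as follows (procedure names and checks are illustrative, not from the source):

```python
# Execution sketch: run procedures in planned order, compare actual to
# expected results, and log a pass/fail outcome per procedure.

def execute(procedures):
    log = []
    for name, run, expected in procedures:
        actual = run()
        log.append((name, "PASS" if actual == expected else "FAIL"))
    return log

procedures = [
    ("TC-1 add",      lambda: 2 + 2,          4),
    ("TC-2 upper",    lambda: "qa".upper(),   "QA"),
    ("TC-3 truncate", lambda: "testing"[:4],  "test"),
]

assert execute(procedures) == [
    ("TC-1 add", "PASS"),
    ("TC-2 upper", "PASS"),
    ("TC-3 truncate", "PASS"),
]
```

Any FAIL entry would then be reported as an incident and analyzed to establish its cause.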

Execution of tests is completed
by following the test documents
in a methodical manner.

As each test procedure is performed,
an entry is recorded in a test execution log
to note the execution of the procedure
and whether or not the test procedure
uncovered any defects.

Checkpoint meetings are held
throughout the execution phase.
Checkpoint meetings are held daily, if required,
to address and discuss testing issues,
status and activities.

  1. The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged and discussed with the software team lead, hardware test lead, programmers, software engineers and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing.
  2. Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer’s risk assessment and recorded in their selected tracking tool.
  3. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and verified fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead.
  4. After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager’s formal acceptance.
  5. The test team reviews test document problems identified during testing and updates the documents where appropriate.

Inputs for this process:

  1. Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
  2. Test tools, including automated test tools, if applicable.
  3. Developed scripts.
  4. Changes to the design, i.e. Change Request Documents.
  5. Test data.
  6. Availability of the test team and project team.
  7. General and Detailed Design Documents, i.e. Requirements Document, Software Design Document.
  8. Software that has been migrated to the test environment (i.e. unit-tested code) via the Configuration/Build Manager.
  9. Test Readiness Document.
  10. Document Updates.

Outputs for this process:

  1. Log and summary of the test results. Usually this is part of the Test Report, which needs to be approved and signed off along with the revised testing deliverables.
  2. Changes to the code, also known as test fixes.
  3. Test document problems uncovered as a result of testing. Examples are Requirements document and Design Document problems.
  4. Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
  5. Formal record of test incidents, usually part of problem tracking.
  6. Base-lined package, also known as tested source and object code, ready for migration to the next level.

Test Progress Monitoring

The purpose of test monitoring
is to provide feedback and visibility
about test activities.

Information to be monitored
may be collected manually or automatically
and may be used to measure exit criteria, such as coverage.

Metrics may also be used
to assess progress
against the planned schedule and budget.

Documents, spreadsheets and tools
can be used for monitoring progress, tracking:

  1. test cases executed,
  2. their state,
  3. defects open,
  4. dates defects were opened,
  5. when to stop testing, i.e. exit criteria met, etc.
Test Metrics

Common test metrics include:

  1. Percentage of work done in test case preparation
    (or percentage of planned test cases prepared)
    monitored during test preparation
  2. Percentage of work done in test environment preparation
    monitored during test preparation
  3. Test case execution
    1. e.g., number of test cases run/not run,
    2. and test cases passed/failed
  4. Defect information
    1. e.g., defect density,
    2. defects found and fixed,
    3. failure rate,
    4. and re-test results
  5. Test coverage of
    1. requirements,
    2. risks
    3. or code
  6. Subjective confidence of testers in the product
  7. Dates of test milestones
  8. Testing costs,
    including the cost compared to the benefit of

    1. finding the next defect
    2. or to run the next test

Test execution metrics are gathered during Test Execution activity. These metrics are used in reporting.
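As a sketch, the basic execution metrics listed above can be derived from test-case records like this; the record fields and status values are assumptions for illustration, not from any real reporting tool.

```python
# Hypothetical helper deriving common execution metrics from test-case
# records; field names and status values are assumptions for illustration.
def execution_metrics(test_cases):
    total = len(test_cases)
    run = [t for t in test_cases if t["status"] != "not run"]
    passed = [t for t in run if t["status"] == "passed"]
    return {
        "run": len(run),
        "not_run": total - len(run),
        "passed": len(passed),
        "failed": len(run) - len(passed),
        "pass_rate_pct": round(100 * len(passed) / len(run), 1) if run else 0.0,
    }


cases = [
    {"id": "TC-1", "status": "passed"},
    {"id": "TC-2", "status": "failed"},
    {"id": "TC-3", "status": "passed"},
    {"id": "TC-4", "status": "not run"},
]
metrics = execution_metrics(cases)
# → {'run': 3, 'not_run': 1, 'passed': 2, 'failed': 1, 'pass_rate_pct': 66.7}
```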

Source: ISTQB CTFL Syllabus

  1. Test Analysis: Identifying Test Conditions
  2. Test Design: Specifying Test Cases
  3. Test Implementation: Specifying test procedures or scripts

Evaluating Exit Criteria

Evaluating Exit Criteria
Activity where test execution is assessed
against the defined objectives.

This should be done for each test level.

Major Tasks:

  1. Checking test logs against the exit criteria specified in test planning
  2. Assessing if more tests are needed or if the exit criteria specified should be changed
  3. Writing a test summary report for stakeholders
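Checking test logs against the exit criteria (task 1 above) can be mechanized; the specific criteria and field names below are invented for illustration.

```python
# Sketch of checking test results against exit criteria from the test plan;
# the criteria and field names are invented for illustration.
def exit_criteria_met(status, criteria):
    checks = {
        "coverage": status["requirement_coverage_pct"] >= criteria["min_coverage_pct"],
        "pass_rate": status["pass_rate_pct"] >= criteria["min_pass_rate_pct"],
        "open_criticals": status["open_critical_defects"] <= criteria["max_open_criticals"],
    }
    # All criteria must hold; otherwise more testing is needed, or the
    # criteria must be renegotiated with stakeholders (task 2 above).
    return all(checks.values()), checks


criteria = {"min_coverage_pct": 90, "min_pass_rate_pct": 95, "max_open_criticals": 0}
status = {"requirement_coverage_pct": 92, "pass_rate_pct": 97, "open_critical_defects": 1}
met, detail = exit_criteria_met(status, criteria)  # met is False: a critical defect is open
```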
Test Reporting
  1. Monitoring refers to gathering data,
    while Reporting means sharing it.
  2. Charts and graphs
    help stakeholders understand test results at a glance.
  3. Reporting is also an opportunity
    to discuss whether the test objectives have been met.

Test reporting is concerned with
summarizing information about the testing endeavor, including:

  1. What happened during a period of testing,
    e.g., dates when exit criteria were met
  2. Analyzed information and metrics
    to support recommendations and decisions about future actions, e.g.:

    1. an assessment of defects remaining,
    2. the economic benefit of continued testing,
    3. outstanding risks,
    4. and the level of confidence in the tested software

The outline of a test summary report is given in ‘Standard for Software Test Documentation’ (IEEE Std 829-1998).

Test Metrics for Test Reporting and Test Control

Metrics should be collected during and at the end of a test level in order to assess:

  1. The adequacy of the test objectives for that test level
  2. The adequacy of the test approaches taken
  3. The effectiveness of the testing with respect to the objectives

Source: ISTQB CTFL Syllabus

How Much Testing Is Enough?

This can be difficult to determine.
Many modern software applications are so complex
and run in such an interdependent environment,
that complete testing can never be done.

When deciding how much testing is enough, the following should be considered:

  1. the level of risk (technical, safety, and business risks)
    1. Test cases completed with certain percentage passed;
    2. Coverage of code, functionality, or requirements reaches a specified point;
    3. Bug rate falls below a certain level;
  2. project constraints (time and budget)
    1. Deadlines, e.g. release deadlines, testing deadlines;
      Beta or alpha testing period ends.
    2. Test budget has been depleted;
  3. the need to provide stakeholders with sufficient information to make informed decisions about the release of the software or system, the next development step, or handover to customers

Source: ISTQB CTFL Syllabus, pavantestingtools.com

Test Closure Activities

  1. collect data from completed test activities
    to consolidate experience, testware, facts and numbers.
  2. occur at project milestones
    such as when a software system is released,
    a test project is completed (or cancelled),
    a milestone has been achieved,
    or a maintenance release has been completed.

Major Tasks:

  1. Checking which planned deliverables have been delivered
  2. Closing incident reports or raising change records for any that remain open (Closure of all defects)
  3. Documenting the acceptance of the system
  4. Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
  5. Handing over the testware to the maintenance organization
  6. Analyzing lessons learned to determine changes needed for future releases and projects
  7. Using the information gathered to improve test maturity

Source: ISTQB CTFL Syllabus

Test Design Techniques, Testing Types and Approaches

Test Approach
is the implementation of the test strategy for a specific project.

  1. The test approach is defined and refined in
    1. the test plans
    2. and test designs.

      The test design specification may refine the test approach as stated in the test plan to fit the test objectives of the test design specification.

  2. It typically includes the decisions made based on
    1. the (test) project’s goal
    2. and risk assessment.
  3. It is the starting point for
    1. planning the test process
    2. selecting the test design techniques and test types to be applied,
    3. defining the entry and exit criteria
  4. The selected approach depends on the context
    and may consider

    1. risks,
    2. hazards and safety,
    3. available resources and skills,
    4. the technology,
    5. the nature of the system
      (e.g., custom built vs. COTS),
    6. test objectives,
    7. and regulations.

Source: ISTQB CTFL Syllabus

Approach Types

Typical approaches include:

  1. Analytical approaches: testing is directed to the areas of greatest risk, e.g., risk-based testing.
  2. Model-based approaches: using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles), e.g., stochastic testing.
  3. Methodical approaches: such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based.
  4. Process- or standard-compliant approaches: such as those specified by industry-specific standards or the various agile methodologies.
  5. Dynamic and heuristic approaches: testing is more reactive to events than pre-planned, and execution and evaluation are concurrent tasks, e.g., exploratory testing.
  6. Consultative approaches: test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team.
  7. Regression-averse approaches: such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites.

    Different approaches may be combined,
    for example, a risk-based dynamic approach.

    Source: ISTQB CTFL Syllabus

Test Types and Test Objectives

Test Type
A group of test activities
aimed at verifying the software system (or a part of a system)
based on a specific reason or target for testing.

  1. A test type is focused on a particular test objective
  2. Some techniques fall clearly into a single category;
    others have elements of more than one category.
  3. Recognize that
    • functional,
    • non-functional
    • and white-box tests

    occur at any test level.

  4. A model of the software may be developed and/or used in functional, non-functional and structural testing.

Test types compared:

  1. Functional testing: the test objective is a function to be performed by the software; models used include, e.g., a process flow model, a state transition model or a plain language specification.
  2. Non-functional testing: the test objective is a non-functional quality characteristic, such as reliability or usability; models used include, e.g., a performance model, a usability model or security threat modeling.
  3. Structural testing: the test objective is the structure or architecture of the software system; models used include, e.g., a control flow model or a menu structure model.
  4. Testing related to changes: the test objective is change-related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing).

Implementing a software testing methodology
is a three-step process:

  1. Creating a test strategy;
  2. Creating a test plan/design;
  3. Executing tests.

The methodology can be used and molded
to the organization’s needs.

Test Design Technique Purpose

The purpose of a test design technique is to identify

  1. test conditions,
  2. test cases,
  3. and test data.
Choosing Test Techniques

Classify test design techniques
according to their fitness to a given context,
for the test basis, respective models
and software characteristics

  1. Different type of testing focuses on different types of defects
  2. Some techniques are more applicable to
    certain situations and test levels;
    others are applicable to all test levels.
  3. When creating test cases,
    testers generally use a combination of test techniques including

    1. process,
    2. rule
    3. and data-driven techniques

    to ensure adequate coverage of the object under test

The choice of which test techniques to use depends on a number of factors, including

  1. the type of system,
  2. regulatory standards,
  3. customer or contractual requirements,
  4. level and type of risk,
  5. test objective
    • e.g., performance testing for gaming app,
    • functional for banking app
  6. documentation available,
  7. knowledge/experience of the testers,
  8. previous experience with types of defects found.
  9. time and budget,
  10. development life cycle
    Software Development Models used:
    • V-model,
    • Agile,
    • waterfall
  11. use case models

Testing of Function (Functional Testing)

Functional Testing

Testing the features and operational behavior of a product
to ensure they correspond to its specifications.

  1. Black Box Testing
    ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
  2. The functions are what the system (subsystem/component) does.
  3. Functions may be described
    1. in work products
      • requirement specification
      • use cases
      • functional specification
    2. or they may be undocumented.
  4. Functional tests are based on functions and features
    (described in documents or understood by testers)
    and their interoperability with specific systems,
    and may be performed at all test levels
    (e.g., tests for components may be based on a component specification).
  5. Functional testing considers the external behavior of the software (black-box testing).
  6. Testing focuses on customer requirements.
  7. Functional testing is a quality assurance process.
Steps performed during Functional test
  1. Identify what is the functionality
    (how the product should perform)
  2. Create test input data
    Data needed to perform the tests
  3. Determine the expected data
    Identify what is the expected result
  4. Test execution
  5. Compare actual vs expected result
  6. Raise defect if step 5 fails
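The six steps can be sketched black-box style against a small invented function under test; apply_discount and its "spec" are hypothetical, used only to show the flow.

```python
# Steps 1-3: identify the functionality (invented spec: 10% discount on
# orders of $100 or more), create input data, determine expected results.
def apply_discount(total):  # hypothetical system under test
    return total * 0.9 if total >= 100 else total


functional_cases = [
    {"input": 50, "expected": 50.0},   # below threshold: no discount
    {"input": 100, "expected": 90.0},  # at threshold: 10% off
    {"input": 200, "expected": 180.0},
]

# Steps 4-6: execute, compare actual vs. expected, raise a defect on mismatch.
defects = []
for case in functional_cases:
    actual = apply_discount(case["input"])
    if actual != case["expected"]:
        defects.append({"case": case, "actual": actual})
```

Note the tester never looks inside apply_discount; only inputs and outputs matter.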

Source: ISTQB CTFL Syllabus

Specification/requirement-based vs. Business-process/scenario-based

Testing functionality can be done from two perspectives:

  1. Specification/requirement-based techniques
    may be used to derive test-conditions and test-cases
    from the functionality of the software or system.
  2. Business-process/scenario-based:
    use cases developed from business flow
    (there are some features that can not be documented).

Source: ISTQB CTFL Syllabus

Functional Testing Types
  1. security testing
  2. interoperability testing

Source: ISTQB CTFL Syllabus

Non-functional Testing
is the testing of “how well” the system/product works.

Non-functional Testing

  1. tests customer expectations
  2. may be performed at all test levels
    (unit, integration, system and acceptance)
  3. considers the external behavior of the software and in most cases uses black-box testing to accomplish that
  4. tests measure characteristics of systems and software
    that can be quantified on a varying scale,
    e.g. response times for performance testing
  5. These tests can be referenced to a quality model
    such as the one defined in “Software engineering – Software Product Quality”
    (ISO 9126).
Focus of Non-functional Testing

Non-functional Testing focuses on the below characteristics:

  1. Reliability (dependability, consistency, stability)
  2. Efficiency (resource utilization)
  3. Usability (understandability, learnability, attractiveness)
  4. Maintainability (analyzability, changeability)
  5. Portability (adaptability, installability, coexistence, replaceability)

Non-functional Test Types
  1. Performance testing
  2. Load testing
  3. Volume
  4. Stress testing
  5. Usability testing
  6. Maintainability testing
  7. Reliability testing
  8. Portability testing
  9. Data Integrity
  10. Scalability
  11. Resilience
  12. Recovery/Recoverability
  13. Compatibility

  • It is a classic distinction to denote test techniques as black-box or white-box.
  • Black-box and white-box testing
    may also be combined with experience-based techniques
    to leverage the experience of developers, testers and users
    to determine what should be tested.
Black Box Testing vs. White Box Testing

Black Box Testing

  1. Synonyms: specification-based testing; behavioral or behavior-based techniques.
  2. Based on an analysis of the specification of a piece of software, without reference to its internal workings.
  3. It is a way to derive and select test conditions, test cases, or test data based on an analysis of the appropriate test basis, e.g., formal requirements documents, specifications, use cases, user stories.
  4. The internal structure/design/implementation of the item being tested is not known to the tester.
  5. Includes both functional and non-functional testing.
  6. May be performed at all test levels.
  7. Generally done by testers:
    1. The tester concentrates on what the software does, not how it does it.
    2. No information regarding the internal structure of the component or system is used.
    3. The tester considers the system under test a black box: sending a given input should result in an expected output.
  8. Common characteristics:
    1. Models, either formal or informal, are used for the specification of the problem to be solved, of the software or its components;
    2. Test cases can be derived systematically from these models.

White Box Testing

  1. Synonyms: testing of software structure/architecture; structural or structure-based techniques; clear box testing; transparent testing.
  2. Based on an analysis of the structure of the component or system: the internal structure/design/implementation of the item being tested is known to the tester.
  3. A form of dynamic testing.
  4. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
  5. Especially helpful and mainly applicable at the lower levels of testing, such as unit/component and (component) integration testing, although structural coverage can be defined at any level:
    1. Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths.
    2. Integration level: the percentage of modules, components or classes exercised by a test case suite, expressed as module, component or class coverage; the structure may be a call tree (a diagram in which modules call other modules).
    3. System level: the structure may be a menu structure, business process or web page structure.
    4. Acceptance testing level: e.g., business models or menu structures.
  6. Programming knowledge is required; generally done by software developers:
    1. If test cases are derived from looking at the code, a white-box test design technique is being used.
    2. Testers test the code of the product/software (i.e., code coverage, identifying unreachable code) and need knowledge of the programming language.
    3. Structural testing may also be based on the architecture/structure of the system, such as a calling hierarchy.
  7. Tool support is useful for the structural testing of code, e.g., to measure the code coverage of elements such as statements or decisions.
  8. Common characteristics:
    1. Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information);
    2. The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage.

Source: ISTQB CTFL Syllabus, geeksforgeeks.org

Gray Box Testing
A combination of Black Box and White Box testing methodologies
that is testing a piece of software against its specification
but using some knowledge of its internal workings.

White Box Testing: Coverage

is the extent that a structure has been exercised by a test suite,
expressed as a percentage of the items being covered.

  1. If coverage is not 100%,
    more tests may be designed
    to test items that were missed
    to increase the coverage.
  2. A control flow diagram may be used
    to visualize the alternatives for each decision.
  3. Most common code coverage types
    1. Statement
    2. Branch/Decision
    3. Path
White Box Code Coverage Techniques

White Box Test Coverage Techniques:  Statement, Decision, and Path Coverage
* Control Flow Diagram

Code coverage includes 3 coverages:

  1. Statement Coverage
    1. In component testing, statement coverage
      is the assessment of the percentage
      of executable statements
      that have been exercised by a test case suite.
    2. The statement testing technique
      derives test cases to execute specific statements,
      normally to increase statement coverage.
    3. Statement coverage is determined by
      the number of executable statements tested
      divided by the total number of executable statements

      (Number of statements exercised / Total number of statements) * 100

    4. Aimed at exercising all programming statements with minimal tests
    5. Covers only true conditions/nodes
    6. Verify what code is expected to do
    7. It doesn’t verify every condition, we need path and branch coverage
    8. Test case is executed so that every node is traversed at least once

      Cover all nodes to achieve 100% statement coverage.

    9. Find the shortest path(s) so that all the nodes are covered at least once
  2. Branch/Decision Coverage
    1. Branch vs. Decision coverage
      1. branch coverage is closely related to decision coverage and at 100% coverage they give exactly the same results.
      2. Decision coverage measures the coverage of conditional branches;
      3. branch coverage measures the coverage of both conditional and unconditional branches.
    2. Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points.
    3. Decision coverage, related to branch testing,
      is the assessment of the percentage of decision outcomes
      (e.g., the True and False options of an IF statement)
      that have been exercised by a test case suite.
    4. The decision testing technique derives test cases to execute specific decision outcomes.
    5. Branches originate from decision points in the code and show the transfer of control to different locations in the code.
    6. Covers all (branches) of scenarios (decisions) of True/False,
      i.e. “If”, “while” etc. conditions
    7. Decision coverage is determined by
      the number of all decision outcomes covered
      divided by the total number of decision outcomes in the module

      (Number of decision outcomes executed / Total number of decision outcomes) * 100

      E.g., When the code contains only a single ‘if’ statement and no loops or CASE statements, any single test case we run will result in 50% decision coverage.
      This is because any single test case would cause the outcome of the “if” statement to be either true or false, by definition we achieved 50% decision coverage.

    8. It validates all the branches of the code (vs. nodes):
      all branches are tested at least once

    9. Decision coverage looks at the number of decision outcomes, not just decision statements.
    10. finds the minimum number of paths
      which will ensure that all the edges are traversed

      Cover all edges to achieve 100% decision coverage.

    11. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
  3. Path Coverage
    1. This technique corresponds to testing all possible paths
      which means that each statement and brunch are covered.
    2. Find all paths from source to destination
    3. 100% Path coverage will imply 100% statement coverage and 100% Branch/Decision coverage.
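A toy example of how the three coverage types differ in practice; the function and test values are invented for illustration.

```python
# grade() has a handful of executable statements and one decision
# with two outcomes (score >= 50 being True or False).
def grade(score):
    label = "fail"
    if score >= 50:
        label = "pass"
    return label


# A single test with score=70 executes every statement (100% statement
# coverage) but exercises only the True outcome of the decision,
# giving just 50% decision coverage.
assert grade(70) == "pass"

# A second test for the False outcome brings decision coverage to 100%;
# with no loops or further branching, path coverage is then 100% as well.
assert grade(49) == "fail"
```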
Other Structure-based Techniques
Condition Coverage

Condition vs. Branch coverage

  1. There are stronger levels of structural coverage beyond decision coverage:
    • condition coverage and
    • multiple condition coverage.
  2. Condition coverage is also known as Predicate Coverage
  3. Condition coverage means
    all conditions covered for paths:
    each of the boolean expressions must be evaluated to true and false at least once.
  4. Achieving this form of coverage in full is often considered impractical or impossible.
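A small invented example of what condition coverage asks for on a compound decision:

```python
# Decision with two atomic conditions; condition coverage requires each
# atomic condition to be evaluated to both True and False at least once.
def can_rent(age, has_license):
    return age >= 21 and has_license


# (25, True) makes both conditions True; (18, False) makes both False.
# (Note: Python short-circuits, so has_license is not actually evaluated
# in the second call; condition coverage is defined on the logic, not
# on evaluation order.)
results = [can_rent(25, True), can_rent(18, False)]  # [True, False]
```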

Loop Testing
A white box testing technique that exercises program loops.

Test Coverage vs. Code Coverage

Test Coverage

  1. Measures how much testing has been done (with respect to some model): how many test cases have been executed and how much of the feature being tested is actually covered by tests.
  2. Testing type: black box testing; minimal interaction with the code; performed by the QA team.
  3. Test coverage aims to measure the testing effort in a qualitative manner: the idea is to see if all requirements and functionalities are logically covered via testing, so it is not as straightforward to measure as code coverage. Methodologies like TDD (Test Driven Development) are also helpful to analyse test coverage and bring in the discipline of adding tests from the get-go; to measure the impact of such tests, you need to manually list out requirements and then analyse test cases to understand which of those are covered.
  4. Takes a functional approach to see if all the features are covered: it determines whether the test cases cover the entire functional requirements.
  5. Common mechanisms used for test coverage measurement: unit testing, functional testing, performance testing, integration or system testing, and acceptance testing.
  6. Tools: JUnit, TestNG, PyUnit.

Code Coverage

  1. Measures how much code is executed during testing: a metric used to measure the quantity of the programming code covered during execution.
  2. Testing type: white box testing; access to the code is required; performed by developers.
  3. Formula: (Lines of code executed / total number of lines of code) * 100.
  4. Subtypes: statement, branch and decision coverage.
  5. Tools: Cobertura in Java, Coverage.py in Python.

To check if all work process flows have been covered.

E.g., For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.

Source: ISTQB CTFL Syllabus 2.3.5

At all test levels.

This question requires the candidate to recall that either or both of functional and structural test types can be used at any level.

Section 2.3 of ISTQB CTFL syllabus confirms that both test types can be used at all test levels.

High Order Tests
Black-box tests conducted once the software has been integrated.

4 Types of Specification Based (Black Box) Testing
  1. Equivalence Partitioning
  2. Boundary Value Analysis
  3. Decision Tables
  4. State Transition Testing

Equivalence Class
A portion of a component’s input or output domains for which the component’s behaviour is assumed to be the same from the component’s specification.

Equivalence Partitioning

Black Box Testing: Equivalence Partitioning (EP)

  1. In equivalence partitioning,
    inputs to the software or system are divided into groups
    that are expected to exhibit similar behavior,
    so they are likely to be processed in the same way.
  2. Test cases are designed to cover each partition at least once
  3. Equivalence partitions (or classes) can be found for both valid data,
    i.e., values that should be accepted
    and invalid data, i.e., values that should be rejected.

    Tests can be designed to cover all valid and invalid partitions.

  4. Equivalence partitioning is applicable at all levels of testing.
  5. Equivalence partitioning can be used to achieve input and output coverage goals.
  6. Partitions can be identified for
    1. outputs,
    2. internal values,
    3. time-related values (e.g., before or after an event)
    4. for interface parameters (e.g., integrated components being tested during integration testing).
  7. Partitions can be applied to
    1. human input,
    2. input via interfaces to a system,
    3. or interface parameters in integration testing.
Equivalence Partitioning Examples
EP for Age

Equivalence Partitioning

We need to test only one condition from each partition,
e.g. users under 18, users over 21, married users, unmarried users, all users.

Valid input: 18 – 56
Invalid input: less than or equal to 17 (<=17), greater than or equal to 57 (>=57)
Valid class: 18 – 56 = pick any one input test data from 18 – 56
Invalid class 1: <=17 = pick any one input test data less than or equal to 17
Invalid class 2: >=57 = pick any one input test data greater than or equal to 57

We have one valid and two invalid conditions here.
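In code, the three age classes above reduce to one representative test each; validate_age is a hypothetical implementation of the rule, written only to illustrate the technique.

```python
# Hypothetical implementation of the "valid age is 18-56" rule.
def validate_age(age):
    return 18 <= age <= 56


# One representative value per equivalence class is sufficient:
assert validate_age(30) is True    # valid class: 18-56
assert validate_age(10) is False   # invalid class 1: <=17
assert validate_age(60) is False   # invalid class 2: >=57
```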

EP for Phone Number Input

Equivalence Partitioning Mobile Number Example

Valid input: 10 digits
Invalid input: 9 digits, 11 digits
Valid class: enter a 10-digit mobile number = 9876543210
Invalid class: enter a mobile number with fewer than 10 digits = 987654321
Invalid class: enter a mobile number with more than 10 digits = 98765432109

EP for Interest Rates

Invalid: below $0 (e.g., -$0.01)
Valid (2%): $0 – $100
Valid (5%): $100.01 – $500
Valid (10%): $500.01 and up

Inputs to check: -$10, $50, $123, $700

EP for Shipping Cost

Weight 1–10 lbs → $5; 11–25 lbs → $7.50; 26–50 lbs → $12; 51 lbs and up → $17

Partitions needed: one for each of the 4 classes + one for a zero or negative weight

EP for assigning grades for scores on exam papers

Score 1–49 → F; 50–59 → D-; 60–69 → D; 70–79 → C; 80–89 → B; 90–100 → A

You need a test for the invalid too-low partition (0 or less),
one for each valid partition,
and one for the invalid too-high partition (over 100).

EP for Display Sizes

The application shall allow playing a video on the following display sizes:

  1. 640×480
  2. 1280×720
  3. 1600×1200
  4. 1920×1080

Verify that the application can play a video on each of the display sizes in the requirement (4 tests).
This is a case where the requirement gives an enumeration of discrete values. Each enumeration value is an Equivalence Class by itself, therefore each will be tested when using Equivalence Partitioning test technique.

Sources: ISTQB CTFL Syllabus, softwaretestingmaterial.com

Boundary Testing

focuses on the boundary or limit conditions
of the software being tested.
(Some of these tests are stress tests).

Boundary Value Analysis

Black Box Testing: Boundary Value Analysis (BVA)

  1. Behavior at the edge (“corner cases”)
    of each equivalence partition
    is more likely to be incorrect
    than behavior within the partition,
    so boundaries are an area
    where testing is likely to yield defects.
  2. The maximum and minimum values of a partition are its boundary values.
  3. A boundary value for a valid partition
    is a valid boundary value;
  4. the boundary of an invalid partition
    is an invalid boundary value.
  5. Tests can be designed to cover both
    valid and invalid boundary values.
  6. When designing test cases,
    a test for each boundary value is chosen.
  7. Boundary value analysis can be applied at all test levels.
  8. It is relatively easy to apply
    and its defect-finding capability is high.
  9. Detailed specifications are helpful
    in determining the interesting boundaries.
  10. This technique is often considered
    as an extension of equivalence partitioning
    or other black-box test design techniques.
  11. It can be used on equivalence classes for user input on screen,
    as well as on, e.g.,

    • time ranges
      (e.g., time out, transactional speed requirements)
    • or table ranges (e.g., table size is 256*256).

Black Box Testing: Boundary Value Analysis (BVA)

BVA Examples
BVA for Age

Black Box Testing: Boundary Value Analysis (BVA)

BVA for Name Input

Black Box Testing: Boundary Value Analysis (BVA)

BVA for Interest Rates
Invalid: below $0 (e.g., -$0.01)
Valid (2%): $0 – $100
Valid (5%): $100.01 – $500
Valid (10%): $500.01 and up

Boundary values between partitions will be:
-$0.01, $0.00, $100, $100.01, $500, $500.01

BVA for Shipping Cost
Weight 1–10 lbs → $5; 11–25 lbs → $7.50; 26–50 lbs → $12; 51 lbs and up → $17

Tests to achieve 100% BVA: 2 per valid range + one for a negative weight + one exceeding 100 lbs

BVA for assigning grades to scores on exam papers

Score    1–49   50–59   60–69   70–79   80–89   90–100
Grade    F      D-      D       C       B       A

Test cases needed to achieve minimum boundary coverage: 0, 1, 49, 50, 59, 60, 69, 70, 79, 80, 89, 90, 100, 101
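A sketch of the grading function with all fourteen boundary values asserted:

```python
def grade(score):
    """Assign exam grades per the partitions above; None for invalid scores."""
    if score < 1 or score > 100:
        return None
    if score <= 49:
        return "F"
    if score <= 59:
        return "D-"
    if score <= 69:
        return "D"
    if score <= 79:
        return "C"
    if score <= 89:
        return "B"
    return "A"

# The fourteen boundary values listed above:
expected = {0: None, 1: "F", 49: "F", 50: "D-", 59: "D-", 60: "D",
            69: "D", 70: "C", 79: "C", 80: "B", 89: "B", 90: "A",
            100: "A", 101: None}
for score, g in expected.items():
    assert grade(score) == g
```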

BVA for Speed

A speed control and reporting system has the following characteristics:
If you drive 50 km/h or less, nothing will happen.
If you drive faster than 50 km/h, but 55 km/h or less, you will be warned.
If you drive faster than 55 km/h but not more than 60 km/h, you will be fined.
If you drive faster than 60 km/h, your driving license will be suspended.

The following partitions can be identified:

  1. 50 and below: Two-point boundaries 50, 51
  2. 51 – 55: Two-point boundaries 50, 51, 55, 56
  3. 56 – 60: Two-point boundaries 55, 56, 60, 61
  4. 61 and above: Two-point boundaries 60, 61
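The two-point boundary values above map directly onto test assertions. A minimal sketch of the speed control system:

```python
def speed_action(kmh):
    """Action taken by the speed control system described above."""
    if kmh <= 50:
        return "nothing"
    if kmh <= 55:
        return "warning"
    if kmh <= 60:
        return "fine"
    return "license suspended"

# Two-point boundary values for each partition:
assert speed_action(50) == "nothing"
assert speed_action(51) == "warning"
assert speed_action(55) == "warning"
assert speed_action(56) == "fine"
assert speed_action(60) == "fine"
assert speed_action(61) == "license suspended"
```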

Sources: ISTQB CTFL Syllabus, guru99.com, softwaretestingmaterial.com

Decision Table Testing
  1. Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design.
  2. They may be used to record complex business rules that a system is to implement.
  3. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified.
  4. The decision table contains
    1. the triggering conditions
    2. often combinations of true and false (Boolean) for all input conditions,
    3. and the resulting actions for each combination of conditions.
    4. Each column of the table corresponds to a business rule
      that defines a unique combination of conditions
      and results in the execution of the actions
      associated with that rule.
  5. The coverage standard
    commonly used with decision table testing
    is to have at least one test per column in the table,
    which typically involves covering all combinations of triggering conditions.
  6. The strength of decision table testing
    is that it creates combinations of conditions
    that otherwise might not have been exercised during testing.
  7. It may be applied to all situations
    when the action of the software
    depends on several logical decisions.
Decision Table Design

When designing a decision table, we need to:

  1. Identify the input conditions
    (the number of conditions should be kept minimal):

    Conditions            Rule 1   Rule 2   Rule 3   Rule 4
    Age greater than 18

  2. Identify the True/False combinations:

    Conditions            Rule 1   Rule 2   Rule 3   Rule 4
    Age greater than 18     T        T        F        F
    Male                    T        F        T        F

  3. Enter the outcome for each combination:

    Conditions            Rule 1   Rule 2   Rule 3   Rule 4
    Age greater than 18     T        T        F        F
    Male                    T        F        T        F
    Drink beer              T        T
    Go to club              T                 T

  4. Consider the error messages
    (for the combination where no action applies):

    Conditions            Rule 1   Rule 2   Rule 3   Rule 4
    Age greater than 18     T        T        F        F
    Male                    T        F        T        F
    Drink beer              T        T
    Go to club              T                 T
    Error                                              T

  5. The table can easily adapt to changing requirements.
  6. If more than one action results from any of the combinations,
    the table can be represented again with a single outcomes row:

    Conditions            Rule 1        Rule 2       Rule 3       Rule 4
    Age greater than 18     T             T            F            F
    Male                    T             F            T            F
    Outcomes             Drink beer,   Drink beer   Go to club   Error
                         go to club
  7. Write a test case for each rule.
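The rules above can be sketched in code, with one test case per column. The business rules here are purely hypothetical, taken from the worked example:

```python
def actions(age_over_18, male):
    """Return the actions for a combination of conditions
    (hypothetical business rules used only for illustration)."""
    result = []
    if age_over_18:
        result.append("drink beer")
    if male:
        result.append("go to club")
    if not result:
        result.append("error")
    return result

# One test case per rule (column) of the decision table:
assert actions(True, True) == ["drink beer", "go to club"]   # Rule 1
assert actions(True, False) == ["drink beer"]                # Rule 2
assert actions(False, True) == ["go to club"]                # Rule 3
assert actions(False, False) == ["error"]                    # Rule 4
```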
Decision Table Examples
Facebook Login DT

Black Box Testing: Decision Table Testing (DT)

Car Insurance DT

Ticket Purchasing DT

State Transition Testing
  1. A system may exhibit a different response
    depending on current conditions or previous history (its state).
  2. In this case, that aspect of the system can be shown with a state transition diagram.
  3. It allows the tester to view the software in terms of
    1. its states,
    2. transitions between states,
    3. the inputs or events that trigger state changes (transitions)
    4. and the actions which may result from those transitions.
  4. The states of the system or object under test
    are separate, identifiable and finite in number.
  5. Transitions from one state to another are determined by rules.

    E.g., whether you can withdraw $500 from an ATM
    depends on how much money is in your account.

  6. A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid.
  7. Tests can be designed to cover
    1. a typical sequence of states,
    2. to cover every state,
    3. to exercise every transition,
    4. to exercise specific sequences of transitions
    5. or to test invalid transitions.
  8. State transition testing is much used
    within the embedded software industry
    and technical automation in general.
  9. However, the technique is also suitable
    for modeling a business object having specific states
    or testing screen-dialogue flows
    (e.g., for Internet applications or business scenarios).
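As a sketch, consider a card PIN login flow: the state names, the three-attempt limit, and the PIN itself are assumptions for illustration, not a prescribed design.

```python
class PinLogin:
    """Minimal state machine for a PIN login flow.
    States: 'waiting', 'access granted', 'locked' (3 attempts allowed)."""

    def __init__(self, correct_pin="1234", max_attempts=3):
        self.correct_pin = correct_pin
        self.max_attempts = max_attempts
        self.failed = 0
        self.state = "waiting"

    def enter_pin(self, pin):
        if self.state != "waiting":
            return self.state            # invalid transition: no effect
        if pin == self.correct_pin:
            self.state = "access granted"
        else:
            self.failed += 1
            if self.failed >= self.max_attempts:
                self.state = "locked"
        return self.state

# Test a specific sequence of transitions: three wrong PINs in a row
machine = PinLogin()
assert machine.enter_pin("0000") == "waiting"
assert machine.enter_pin("9999") == "waiting"
assert machine.enter_pin("1111") == "locked"
# Once locked, even the correct PIN is an invalid transition:
assert machine.enter_pin("1234") == "locked"
```

Tests like these can cover every state, every transition, specific sequences, and invalid transitions, as listed above.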
State Transition Testing Examples
State Transition Diagram of Marital Status

Black Box Testing: State Transition Testing Marital Status

State Transition Diagram of Cart Checkout Flow

State Transition Diagram of Defect Lifecycle

State Transition Diagram of Pin Login Process

Confirmation Testing (Re-testing)

Confirmation Testing
Dynamic testing conducted after fixing defects with the objective to confirm that failures caused by those defects do not occur anymore.
Synonyms: re-testing

After a defect is detected and fixed,
the software should be re-tested
to confirm that the original defect has been successfully removed.
This is called confirmation testing.

Re-execute the same test, in the same way, that originally found the defect.

Regression Testing

Regression Testing
Repeated testing of a previously tested component or system
after modification
to ensure that defects have not been introduced
or to discover any defects introduced or uncovered
in unchanged areas of the software,
as a result of a change(s).

  1. Testing to ensure that while fixing one defect,
    no new defects are introduced into the system.
  2. These defects may be either in the software being tested,
    or in another related or unrelated software component.
  3. When a minor modification has been applied to an existing system or program, perform a regression test to uncover defects that may be a result of the modification.
  4. A purpose of performing regression testing when system maintenance activities have occurred is to ensure the overall system has not regressed.
  5. Regression testing may be performed at all test levels,
    and includes functional, non-functional and structural testing.
  6. Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
  7. A good set of regression tests should require little or no maintenance and should be reusable across multiple releases.
  8. Regression test suites are run many times and generally evolve slowly, so Regression Testing is a strong candidate for Automation
  9. It is common for organizations to have what is usually called a regression test suite or regression test pack.
  10. The regression suite should be continuously updated (pesticide paradox).
    If the suite is large, use a subset to test.
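Because regression suites are re-run many times with the same inputs and expected outputs, they are strong automation candidates. A minimal sketch, using a hypothetical discount function as the software under test:

```python
# Hypothetical function under regression: a discount calculator that has
# already shipped. The suite below is re-run unchanged after every modification.
def discount(total):
    if total >= 100:
        return round(total * 0.10, 2)   # 10% off orders of $100 or more
    return 0.0

# Repeatable regression suite: same inputs, same expected outputs, every run.
REGRESSION_SUITE = [
    (99.99, 0.0),      # just below the discount threshold
    (100.00, 10.00),   # at the threshold (boundary kept from an old defect)
    (250.00, 25.00),   # a typical discounted order
]

for total, expected in REGRESSION_SUITE:
    assert discount(total) == expected, f"regression at input {total}"
```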

Scope of Regression Testing

  1. It is performed when the software is changed,
    either as a result of fixes or of new or changed functionality
  2. It is also performed when the software's environment changes

Extent of Regression Testing
The extent of regression testing is based on the risk
of not finding defects in software that was working previously.

Source: ISTQB CTFL Syllabus

Maintainability Testing

How easy it is to maintain a product

  1. Changeability,
  2. stability,
  3. how easy it is to add new features

Maintainability testing is conducted to determine
if the software can be maintained by the developers for

  1. updates,
  2. changes
  3. and modifications.

If you are testing to ensure that
the software will be easy to analyze and change,
you are conducting maintainability testing, a non-functional testing type.

Analyzability is one of the sub-characteristics of maintainability
(see ISO 9126 for the software quality characteristics).

Maintenance Testing

Maintenance testing
is done on an existing operational system,
and is triggered by

  1. modifications,
  2. migration,
  3. or retirement

of the software or system.

  1. Once deployed, a software system is often in service for years or decades.
  2. During this time
    the system, its configuration data, or its environment
    are often corrected, changed or extended.
  3. Maintenance testing is done on products that have been released to production and have since been modified, migrated, or had some component retired/removed.
  4. Decision to test the maintainability of the software should NOT be a trigger for maintenance testing.
  5. The planning of releases in advance is crucial for successful maintenance testing.
  6. On receipt of the new or changed specifications, corresponding test cases are specified or adapted.

Modification Maintenance Testing

A distinction has to be made between planned releases and hot fixes.

Modifications include two types:

  1. Planned enhancement changes (e.g., release-based):
    1. Perfective modifications
      Adapting software to the user's wishes,
      i.e. by supplying new functions or enhancing performance.
    2. Adaptive modifications
      Adapting software to environmental changes
      such as new hardware, new systems software or new legislation,
      e.g., planned operating system or database upgrades,
      or a planned upgrade of Commercial-Off-The-Shelf software.
    3. Corrective planned modifications
      Deferrable correction of defects,
      i.e. a defect of low priority is found and planned for fixing in a month.
  2. Ad-hoc corrective modifications (emergency changes),
    e.g., patches to correct newly exposed or discovered vulnerabilities of the operating system:

    • The network goes down
    • Some functionality stops working
Migration Maintenance Testing

Maintenance testing for migration
(e.g., from one platform to another)
should include operational tests of the new environment
as well as of the changed software.

Migration testing (conversion testing)
is also needed when data from another application will be migrated into the system being maintained.

Retirement Maintenance Testing

Maintenance testing for the retirement of a system
may include the testing of data migration
or archiving if long data-retention periods are required.

Indicators/triggers for maintenance testing:

  1. modification,
  2. migration
  3. retirement

In addition to testing what has been changed,
maintenance testing includes regression testing
to parts of the system that have not been changed.

Retirement Maintenance Testing vs. Regression Testing
When a system is targeted for decommissioning,
Maintenance testing (data migration) may be required.

Regression testing is more appropriate for current systems,
not the system being retired.

In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed.

The scope of maintenance testing is related to

  1. the risk of the change,
  2. the size of the existing system
  3. and to the size of the change.

Depending on the changes, maintenance testing may be done

  1. at any or all test levels
  2. and for any or all test types.

In maintenance testing, the tester should consider two parts:

  1. Any changes made to the software should be tested thoroughly.
  2. Making sure the changes do not affect the existing functionality of the software, so regression testing is also done; impact analysis is performed to scope it.

Impact Analysis
Determining how the existing system may be affected by changes.

Impact Analysis is used to

  1. help decide how much regression testing to do.
  2. The impact analysis may be used to determine the regression test suite.

Maintenance testing can be difficult if

  1. specifications are out of date or missing,
  2. or testers with domain knowledge are not available.

On completion of the testing, the testware and test basis are preserved.

Testware

  1. is produced by both verification and validation testing methods;
  2. includes test cases, the test plan, and test reports.

Test basis
is defined as the source of information or the document
that is needed to write test cases
and also for test analysis.

Source: ISTQB CTFL Syllabus

Regression Testing vs. Sanity Testing vs. Smoke Testing

Definition
  • Regression Testing: Repeated testing of a previously tested component or system after modification, to ensure that defects have not been introduced, or to discover any defects introduced or uncovered in unchanged areas of the software, as a result of a change(s).
  • Sanity Testing: A brief test of the major functional elements of a piece of software to determine if it is basically operational. A subset of Regression Testing; never done on regular releases.
  • Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

Performed by
  1. the development team
  2. the QA lead
  3. a group of QA engineers (sometimes, for large-scale projects)
  4. in SCRUM teams, individual QAs perform this testing for the stories that they own
    (SCRUM teams have a flat structure with no leads or managers, and each tester has their own responsibilities towards their stories)

What is verified
  • Regression Testing: in-depth verification of every aspect of the system: functionality, bug fixes, UI, performance, browser/OS testing, etc.
  • Sanity Testing: business rules; each functionality is working as expected.
  • Smoke Testing: the first test to be done on a build released by the development team(s), to verify the basic functionalities of that particular build. Used in Integration, System, and Acceptance Level Testing before releasing the build to the QA team. Based on the results of this testing, further testing is done or the build is rejected until the reported issues are fixed. Only once the tests marked as smoke tests in the test suite pass is the build accepted by the QA for in-depth testing and/or regression.

Planning
  • Regression Testing: well elaborated and planned.
  • Sanity Testing: done at random, not planned, and only when there is a time crunch (e.g., when we have to sign off in a day or two but the build for testing is still not released).

Testware
  • Regression Testing: an appropriately designed suite of test cases is created for this testing.
  • Sanity Testing: usually a rough set of test cases is created, selected in a way that touches all the important bits and pieces; there may even be no time to create test cases, and testing is done randomly with none.
  • Smoke Testing: not exhaustive testing, but a group of tests executed to verify the basic functionalities of that particular build.

Breadth and depth
  • Regression Testing: wide and deep.
  • Smoke Testing: wide and shallow.

How long
  • Regression Testing: at times scheduled for weeks or even month(s).
  • Sanity Testing: mostly spans 2–3 days max.
  • Smoke Testing: in Agile, the build needs to be tested and released within a stretch of hours.

  1. Smoke Testing is directly related to Build Acceptance Testing (BAT).
  2. In BAT, we do the same testing
    to verify if the build has not failed
    and that the system is working fine.
  3. Sometimes, when a build is created,
    some issues get introduced
    and when it is delivered,
    the build doesn’t work for the QA.
  4. BAT is a part of a smoke check,
    because if the system is failing,
    QA cannot accept the build for testing.
  5. Not just the functionalities,
    the system itself has to work
    before the QAs proceed with in-depth testing.

Source: softwaretesinghelp.com

It is important to verify that the software

  1. performs its basic functions as intended,
  2. is able to gracefully handle an abnormal situation.

The entire testing effort
can be basically generalized
into two categories:

  1. positive testing paths
  2. and negative testing paths.
Positive Testing

Positive Testing
Testing aimed at showing software works.
This is also known as “test to pass”.

Positive testing is the first form of testing
that a tester would perform on an application:

running test scenarios
that an end user would run,
with only correct and valid data.

If a test scenario doesn't need data,
the test is run exactly in the manner
in which it's supposed to run,
to ensure that the application meets the specifications.

Happy Path Testing and Smoke Tests

Smoke Tests
basic, non-extensive software testing practice,
where you put the code developed so far
through fundamental, ‘happy path’ use cases
to see if the system breaks.

Happy Path
default scenario
featuring no exceptional or error conditions.

Happy Path Testing
(Subset of Positive Testing)
is a well-defined test case
using known input,
which executes without exception
and produces an expected output.


Synonyms:

  1. happy day scenario
  2. sunny day scenario
  3. golden path

E.g., the happy path for a function
validating credit card numbers
would be where
none of the validation rules raise an error,
thus letting execution continue successfully to the end,
generating a positive response.
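A minimal sketch of such a validator, using the Luhn checksum (one standard card-number check); the test numbers are well-known dummy values, not real cards:

```python
def luhn_valid(number):
    """Validate a credit-card number string with the Luhn algorithm."""
    if not number.isdigit():
        return False
    total = 0
    for i, digit in enumerate(reversed(number)):
        d = int(digit)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # same as summing the two digits
        total += d
    return total % 10 == 0

# Happy path: a well-formed test number passes every validation rule.
assert luhn_valid("4111111111111111") is True
# For contrast, an error path: one changed digit fails the checksum.
assert luhn_valid("4111111111111112") is False
```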

In use case analysis,
there is only one happy path,
but there may be any number
of additional alternate path scenarios
which are all valid optional outcomes.

If valid alternatives exist,
the happy path is then identified
as the default
or most likely positive alternative.

In contrast to the happy path,
process steps for

  1. alternate paths
  2. and exception paths

may also be documented.

Alternate Path Testing

Sometimes there may be more than one way
of performing a particular function or task
with an intent
to give the end user more flexibility
or for general product consistency.

This is called alternate path testing
which is also a kind of positive testing.

In alternate path testing,
the test is again performed to meet its requirements,
but using a different route than the obvious path.

The test scenario would even consume
the same kind of data
to achieve the same result.

Negative Testing/ Unhappy Path

Negative testing
is done to ensure the stability of the application
applying as much creativity as possible
while testing the application against invalid data.

(Although sometimes negative testing is referred to as
testing aimed at showing software does not work.
Also known as “test to fail”.)


Synonyms:

  1. unhappy paths
  2. sad paths
  3. rainy day
  4. exception paths
  5. error path testing
  6. failure testing


The goals are:

  1. to check that errors are shown to the user
    where they are supposed to appear,
  2. and that bad values are handled gracefully.

Example of negative testing for a pen:

(The basic motive of the pen is to be able to write on paper)

  1. Change the medium it is supposed to write on from paper to cloth or a brick, and check whether it still writes.
  2. Dip the pen in liquid and verify whether it writes afterwards.
  3. Replace the refill of the pen with an empty one and check that it stops writing.
Happy Path vs. Sad Path vs. Bad Path

Sometimes test cases are categorized into:

  1. Happy Path
    (positive results)
    E.g., Entering proper user name and password in the login page.
  2. Sad Path
    (scenarios which do not take us further and get stuck)
    E.g., Entering wrong password and username.
  3. Bad Path
    (scenarios that don't fetch any results and leave us lost)
    E.g., entering junk characters in the username.
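The three categories can be sketched with a toy login validator; the username rules and credentials here are hypothetical, chosen only to illustrate the paths:

```python
import re

def login(username, password):
    """Toy login validator (hypothetical rules, for illustration only)."""
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        return "bad path"        # junk characters: input rejected outright
    if (username, password) != ("alice", "s3cret"):
        return "sad path"        # well-formed input but wrong credentials
    return "happy path"

assert login("alice", "s3cret") == "happy path"   # proper credentials
assert login("alice", "wrong") == "sad path"      # wrong password
assert login("@@!!##", "x") == "bad path"         # junk characters
```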

Sources: Wikipedia, h2kinfosys.com, softwaretestinghelp.com

Robustness Testing = Fuzz Testing

Robustness is the degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions.

The concept of robustness testing in computer science implies verifying whether a computer application is not generating any sort of unacceptable error condition.

Eventually the term Fuzzing (which security people use for mostly non-intelligent and random robustness testing) extended to also cover model-based robustness testing.

Fuzz testing is a technique that deals with robustness test for any software application, which is a way to inject random, unexpected input values to the system to verify whether the application crashes or if there’s any memory leak.
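A minimal non-intelligent fuzzing sketch: the function under test and the input alphabet are assumptions for illustration, and the only property checked is the robustness property "it never crashes".

```python
import random

def parse_age(text):
    """Hypothetical function under fuzz: should never raise, whatever the input."""
    try:
        age = int(text)
    except ValueError:
        return None
    return age if 0 <= age <= 130 else None

# Throw random strings at the parser; any uncaught exception fails the run.
random.seed(42)  # seeded so the run is reproducible
alphabet = "0123456789abc!@# \t\n-"
for _ in range(10_000):
    fuzz_input = "".join(random.choice(alphabet)
                         for _ in range(random.randint(0, 12)))
    result = parse_age(fuzz_input)            # must not raise
    assert result is None or 0 <= result <= 130
```

Real fuzzers (model-based or coverage-guided) generate inputs far more cleverly, but the pass criterion is the same: no crashes, hangs, or leaks.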

Source: ISTQB CTFL Syllabus, Wikipedia

Soak Testing
Running a system at high load
for a prolonged period of time.

E.g., running several times more transactions
in an entire day (or night)
than would be expected in a busy day,
to identify any performance problems
that appear after a large number of transactions have been executed.

Endurance Testing
Checks for memory leaks or other problems
that may occur with prolonged execution

Scalability Testing
Performance testing
focused on ensuring the application under test
gracefully handles increases in work load.

Performance Testing vs. Volume/Flood Testing vs. Load Testing vs. Stress Testing

Performance Testing
  Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing.
  A non-functional testing technique: testing the speed of a computer, network or device to determine how fast the system works, both on its own and compared to other products, under a variety of simulated usage conditions.
  How: checks the performance of the components of a system by passing different parameters in different load scenarios.
  Goals:
    1. Finding, analyzing and fixing performance issues.
    2. Validating whether the hardware is adequate to handle the expected load.
    3. Helping to set the benchmark and standards for the application.
    4. Doing capacity planning for future demand on the application.
  Checks: resource usage, availability and reliability of the product; verifies loads, volumes and response times, as defined by requirements.
  Load limit: both below and above the threshold of a break.
  The aim of performance testing is to get an indication of how an application behaves under regular parameters.

Volume/Flood Testing
  A subtype of performance testing.
  Testing the performance or behavior of the system or application under a huge amount of data (data volume): the data volume in the database is increased.
  Volume testing focuses on the database.
  Checks: response time of the system; data-loss issues.
  Examples:
    • Test a word processor by making changes in a large volume of data.
    • Test a printer by transferring heavy data.
    • Check a mail server with thousands of concurrent users.

Load Testing
  A subtype of performance testing.
  Testing of a system's performance under real-life heavy load conditions to determine how the application behaves when multiple users access it simultaneously.
  How:
    1. Applied when a development project nears its completion.
    2. Done by constantly increasing the load on the application under test until it reaches the threshold limit.
    3. The load is simulated by means of creating virtual users carrying out a selected set of transactions, spread across various test machines commonly known as load generators.
  Goals:
    1. Find bugs that are not exposed by other testing methods: memory leaks, buffer overflows, etc.
    2. Assure that the application is able to achieve the performance point recognized during performance testing.
    3. Help to recognize the upper limit of the system: determine at what point the system response time will degrade or fail.
    4. Check how the system handles a heavy load.
    5. Determine the operating capacity of an application.
    6. Assure that the current infrastructure is sufficient to run the application.
    7. Check the number of concurrent users an application can support, and the scalability to allow more users to access it.
    8. Help to set the SLA (Service Level Agreement) of the application.
  Checks: peak performance, server quantity and response time.
  Load limit: up to the threshold of a break.
  Generating increased load on a web application is the main aim of load testing; it checks how the application behaves during normal and high loads (e.g., peak hours when traffic is very high).
  Typical activities:
    1. Measuring response time.
    2. Measuring transaction rates.
    3. Simulating concurrent users and their ramp-up pattern.
    4. Generating many transactions, with a given frequency and relative percentage of transactions.
    5. Checking HTTP connections.

Stress Testing
  A subtype of performance testing that determines the stability and robustness of the system.
  Testing conducted to evaluate a system or component beyond the limits of its specified requirements, to determine the load under which it fails and how.
  How:
    1. Generating a load on the system to be tested and checking the performance at each level.
    2. Using an auto-generated simulation model that checks all the hypothetical scenarios.
  Goals:
    1. Check how the system behaves under extreme loads.
    2. Ensure that under a sudden high load for a considerable duration the servers don't crash.
    3. Evaluate the application's behavior beyond normal or peak load conditions.
    4. Measure average response times.
    5. Produce graphs or charts of responses over time.
    6. Test the system in failure situations and how it recovers from failure.
    7. Ensure that the system has saved its data before crashing.
    8. Ensure that unexpected failures do not harm system security.
  Checks: stability, response time, etc.
  Load limit: above the threshold of a break.
  Examples (testing beyond software capabilities):
    • A web site's video needs a plug-in to play, which is not installed in the user's browser.
    • Testing a game with an overload of memory.
    • Casually shutting down and restarting ports of a large network.

Ramp Testing
Continuously raising an input signal
until the system breaks down.

Reliability Testing
How reliable/dependable is the product?
It should give correct results,
e.g., a calculator should always give the correct answer.

Usability Testing
Is the user comfortable with the software?
How easily learnable/intuitive/understandable,
and how comfortable/enjoyable/attractive, is the product to use?

Accessibility Testing
ensures that the application being tested
is usable by people with disabilities like

  1. Hearing/Auditory:
    1. deafness
    2. hearing impairments
    3. Inability to hear clearly
  2. Visual:
    1. (color) blindness,
    2. poor vision,
    3. visual strobe,
    4. flashing effect problems
  3. Physical:
    1. Inability to use the mouse or keyboard with one hand.
    2. Poor motor skills like hand movements and muscle slowness
  4. Cognitive:
    1. Learning Difficulties
    2. Poor Memory
    3. Inability to understand complex scenarios
  5. Literacy:
    • Reading Problems
  6. old age
  7. and other disadvantaged groups.

It is a subset of Usability Testing.

Government agencies all over the world have passed legislation which requires IT products to be accessible to disabled people.

Legal acts by various governments

  1. United States: Americans with Disabilities Act – 1990
  2. United Kingdom: Disability Discrimination Act – 1995
  3. Australia: Disability Discrimination Act – 1992
  4. Ireland: Disability Act of 2005

Accessibility Testing is important to ensure legal compliance.

Accessibility issues in software can be resolved
if Accessibility Testing is made
a part of normal software testing life cycle.

People with disabilities
use Assistive Technology
which helps them
in operating a software product:

  1. Speech Recognition Software
    converts the speech to text, which serves as input to the computer.
  2. Screen Reader Software
    read out loud text displayed on the screen
  3. Screen Magnification Software
    used to magnify the screen content
    and make reading easy for vision-impaired users.
  4. Special Keyboard
    ease typing for users with motor control difficulties

Accessibility Testing can be performed in 2 ways:

  1. Manual
  2. Automated

Accessibility Testing Checklist:

  1. application provides keyboard equivalents
    for all mouse operations and windows
  2. instructions are provided as a part of user documentation or manual.
    Is it easy to understand and operate the application using the documentation?
  3. Tabs are ordered logically to ensure smooth navigation
  4. shortcut keys are provided for menus
  5. application supports all operating systems
  6. response time of each screen or page is clearly mentioned
    so that End Users know how long to wait
  7. all labels are written correctly in the application
    • color of the application is flexible for all users (red and blue are common colors for color blindness)
    • color-coding is never used as the only means of conveying information or indicating an action
    • highlighting is viewable with inverted colors
    • Testing of color in the application by changing the contrast ratio
  8. images or icons are used appropriately,
    so it’s easily understood by the end users
    • application has audio alerts
    • user is able to adjust audio or video controls
    • audio and video related content can be properly heard by the disabled people
    • Test all multimedia pages with no speakers
  9. user can override default fonts
    for printing and text displays
  10. user can adjust or disable flashing, rotating or moving displays
  11. Is training provided for users with disabilities that will enable them to become familiar with the software or application?

Source: guru99.com

Security/Penetration Testing
investigates the functions (e.g., a firewall)
relating to the detection of threats,
such as viruses, from malicious outsiders.

Tests how well the system is protected
against unauthorized internal or external access,
or willful damage.

Confirms that the program
can restrict access to authorized personnel
and that the authorized personnel can access the functions
available to their security level.

This type of testing usually requires sophisticated testing techniques.

Type of functional testing.

Portability Testing vs. Interoperability Testing vs. Compatibility Testing vs. Conversion Testing

Portability Testing
  The process of determining the degree of ease or difficulty with which a software component or application can be effectively and efficiently transferred from one hardware, software or other operational or usage environment to another.
  A non-functional testing type.
  Moving from one environment to another, e.g., transfer across platforms/versions.

Interoperability Testing
  Evaluates the capability of the software product to interact with one or more specified components or systems.
  A type of functional testing.

Compatibility Testing
  Testing how well software performs in a particular hardware, software, operating system, or network environment.
  E.g., testing whether software is compatible with other browsers, operating systems, or hardware, or other elements with which it should operate.

Conversion Testing
  Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Binary Portability Testing
Testing an executable application
for portability across system platforms and environments,
usually for conformance to an ABI specification.

Sources: ISTQB CTFL Syllabus, Wikipedia, pavantestingtools.com

Installation testing
is testing full, partial, upgrade,
or install/uninstall processes.

The installation test for a release
is conducted with the objective
of demonstrating production readiness.

This test includes
the inventory of configuration items,
performed by the application’s System Administration,
the evaluation of data readiness,
and dynamic tests focused on basic system functionality.

When necessary, a sanity test is performed,
following installation testing.

Sources: pavantestingtools.com

Storage Testing
Testing that verifies the program under test
stores data files in the correct directories
and that it reserves sufficient space
to prevent unexpected termination
resulting from lack of space.

This is external storage as opposed to internal storage.

Concurrency Testing
Multi-user testing geared towards determining
the effects of accessing
the same application code, module or database records.

Identifies and measures the level of

  1. locking,
  2. deadlocking,
  3. the use of single-threaded code,
  4. and locking semaphores.

Source: pavantestingtools.com
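The locking and lost-update effects listed above can be illustrated with a minimal Python sketch (the Account class and the thread counts are hypothetical):

```python
import threading

class Account:
    """Toy shared resource accessed concurrently by several threads."""
    def __init__(self):
        self.balance = 0
        self.lock = threading.Lock()

    def deposit_unsafe(self, amount):
        # Read-modify-write with no lock: two threads may both read the
        # same balance, so one update is lost -- the race a concurrency
        # test tries to provoke.
        current = self.balance
        self.balance = current + amount

    def deposit_safe(self, amount):
        with self.lock:  # serializes access, preventing the lost update
            self.balance += amount

def run(deposit, n_threads=8, n_ops=10_000):
    """Hammer one account from several threads; return the final balance."""
    account = Account()
    threads = [
        threading.Thread(target=lambda: [deposit(account, 1) for _ in range(n_ops)])
        for _ in range(n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return account.balance

# With the lock, the balance is always exactly n_threads * n_ops;
# without it, lost updates may leave the balance short.
print(run(Account.deposit_safe))  # -> 80000
```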

Recovery Testing is not a part of performance testing.

Recovery Testing
activity of testing
how well an application is able to recover
from expected or unexpected events
without loss of data or functionality.

Events can include

  1. shortage of disk space
  2. unexpected loss of communication,
  3. power out conditions
  4. crashes,
  5. hardware failures
  6. and other similar catastrophic problems.

Recovery testing
is the forced failure of the software
in a variety of ways
to verify that recovery is properly performed.

Sources: ISTQB CTFL Syllabus, Wikipedia
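One recovery mechanism such a test would exercise is an atomic write: a sketch assuming a simple temp-file-plus-rename scheme (the function and file names are illustrative):

```python
import os
import tempfile

def save_atomically(path, data, fail_before_commit=False):
    """Write `data` to a temp file, then atomically rename it over `path`.

    If the process dies before the rename (simulated here with
    fail_before_commit), the previous contents of `path` survive --
    exactly the no-data-loss property recovery testing checks.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        if fail_before_commit:
            raise RuntimeError("simulated power loss")
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)

# Recovery test: force the failure, then verify nothing was lost.
with tempfile.TemporaryDirectory() as workdir:
    target = os.path.join(workdir, "settings.cfg")
    save_atomically(target, "version=1")
    try:
        save_atomically(target, "version=2", fail_before_commit=True)
    except RuntimeError:
        pass
    with open(target) as f:
        print(f.read())  # -> version=1  (old data intact after the crash)
```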

Dependency Testing
Examines an application’s requirements for

  1. pre-existing software,
  2. initial states,
  3. and configuration

in order to maintain proper functionality.

Both dynamic testing and static testing can be used as a means of achieving similar objectives:
static analysis and dynamic testing share the same objective – identifying defects.

Static Testing
  1. Can test and find defects without executing code.
  2. A useful and cost-effective way of testing.
  3. Done during the verification process.
  4. Can be implemented in all phases of development.
  5. Can be performed well before dynamic test execution.

Unlike dynamic testing, which requires the execution of software,
static testing techniques rely on

  1. the manual examination (reviews)
  2. and automated analysis (static analysis)
  3. of the code or other project documentation
    without the execution of the code.

Static testing includes

  1. reviewing of the documents (including source code)
  2. and static analysis.
Dynamic Testing

In dynamic testing the software code is executed to demonstrate the result of running tests.

It is done during the validation process.

For example:

  1. unit testing,
  2. integration testing,
  3. system testing, etc.
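As a minimal sketch of dynamic testing (the discount function is a hypothetical unit under test), the code is actually executed and its observed behaviour is checked:

```python
# Unit under test (a hypothetical example function).
def discount(price, percent):
    """Apply a percentage discount; reject percentages outside 0-100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# Dynamic testing: run the code and compare observed results to expectations.
assert discount(200.0, 25) == 150.0
assert discount(99.99, 100) == 0.0
try:
    discount(10, 150)  # invalid input must be rejected
except ValueError:
    print("invalid percent correctly rejected")
```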

Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle

  1. Static testing is not substitute for dynamic testing.
  2. They are complementary; the different techniques can find different types of defects effectively and efficiently.
  3. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.
  4. Typical defects that are easier to find in reviews than in dynamic testing include:
    deviations from standards,
    requirement defects (e.g., missing requirements),
    design defects,
    insufficient maintainability (e.g., non-maintainable code),
    and incorrect interface specifications.
  5. Reviews can find omissions, for example in requirements, which are unlikely to be found in dynamic testing.

Source: ISTQB CTFL Syllabus


Review
A type of static testing
during which a work product (including code) or process
is evaluated by one or more individuals
to detect issues and to provide improvements.

  1. An evaluation of a product or project status
    to ascertain discrepancies from planned results
    and to recommend improvements.
  2. Defects detected during reviews early in the life cycle
    (e.g., defects found in requirements)
    are often much cheaper to remove
    than those detected by running tests on the executing code.
  3. Reviews are a powerful technique of Static Testing,
    which can be done to evaluate documents
    and analyze code even before execution.
  4. A review could be done entirely as a manual activity,
    but there is also tool support.
  5. The main manual activity is to examine a work product
    and make comments about it.
Review Work Products

Any software work product can be reviewed, including

  1. requirements specifications,
  2. design specifications,
  3. code,
  4. test plans,
  5. test specifications,
  6. test cases,
  7. test scripts,
  8. user guides
  9. or web pages.
Examples of Reviews

Examples of reviews include

  1. management review,
  2. informal review,
  3. technical review,
  4. walkthrough,
  5. inspection, etc.
Review Objectives

The way a review is carried out depends on the agreed objectives of the review, e.g.,

  1. find defects,
  2. gain understanding,
  3. educate testers and new team members,
  4. or discussion and decision by consensus.
Benefits of reviews

Describe the importance and value of static techniques for the assessment of software work products

  1. Early defect detection and correction – a cheap way to detect and remove defects.
  2. Development productivity improvements.
  3. Reduced development timescales.
  4. Reduced testing cost and time.
  5. Lifetime cost reductions.
  6. Fewer defects and improved communication.
  7. Early validation of user requirements.

Defects found early are often much cheaper to remove than defects detected later in the lifecycle.

Preventing defects in design or coding by uncovering omissions, inaccuracies, inconsistencies, ambiguities, and redundancies in requirements.

Success Factors for Reviews
  1. A key factor in the success of a work product review is in defining the objectives: The purpose and structure of the meeting should be communicated.
  2. The right people for the review objectives are involved
  3. Testers are valued reviewers who contribute to the review and also learn about the product which enables them to prepare tests earlier
  4. Defects found are welcomed and expressed objectively
  5. People issues and psychological aspects are dealt with
    (e.g., making it a positive experience for the author)
  6. The review is conducted in an atmosphere of trust;
    the outcome will not be used for the evaluation of the participants
  7. Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
  8. Training is given in review techniques, especially the more formal techniques such as inspection
  9. Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)

  10. There is an emphasis on learning and process improvement
  11. Checklists or roles are used if appropriate to increase effectiveness of defect identification.

    Looking at software products or related work products
    from different perspectives and using checklists
    can make reviews more effective and efficient.

    For example, a checklist based on various perspectives
    such as user, maintainer, tester or operations,
    or a checklist of typical requirements problems
    may help to uncover previously undetected issues.

Source: ISTQB CTFL Syllabus

The different types of reviews vary

  1. from informal,
    characterized by no written instructions for reviewers,
  2. to systematic,
    characterized by team participation,
    documented results of the review,
    and documented procedures for conducting the review
Review Formality Level

The formality of a review process is related to factors such as

  1. the maturity of the development process,
  2. any legal or regulatory requirements
  3. or the need for an audit trail.
Review Types
    A single software product or related work product
    may be the subject of more than one review.

    If more than one type of review is used, the order may vary.
    E.g., an informal review may be carried out before a technical review,
    or an inspection may be carried out on a requirements specification before a walkthrough with customers.

    Main characteristics, options and purposes of common review types:

    Informal Review
    1. No formal process.
    2. Normally has no leader; it is more of an informal discussion.
    3. May take the form of pair programming, or of a technical lead reviewing designs and code.
    4. Main purpose: inexpensiveness.
    5. Results may be documented.
    6. Varies in usefulness depending on the reviewers.

    Walkthrough
    1. May vary in practice from quite informal to very formal.
    2. Meeting led by the author, who walks the participants through the document being reviewed.
    3. Usually no moderator is present; the author plays the moderator’s role.
    4. May take the form of scenarios, dry runs, or peer group participation.
    5. Main purposes: learning, gaining understanding, finding defects.
    6. Open-ended sessions.
    7. Optional pre-meeting preparation of reviewers (common).
    8. Optional preparation of a review report, including a list of findings.
    9. The goal is a common/mutual understanding of the document by all reviewers.
    10. May be used for knowledge transfer to other users (e.g. a new employee, a transfer).
    11. The author explains the document step by step; finding defects is not the priority.
    12. Done for high-level documents, such as requirements documents.
    13. Reviewers are selected from different levels/departments of the organization for higher quality.
    14. Optional scribe (who is not the author).

    Technical Review
    1. A formal review, but less formal than an inspection; may vary in practice from quite informal to very formal.
    2. Ideally led by a trained moderator (not the author).
    3. May be performed as a peer review without management participation.
    4. Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems, and checking conformance to specifications, plans, regulations, and standards.
    5. Focuses on the technical content of the document.
    6. Reviewers are generally architects, designers, etc.
    7. Documented, defined defect-detection process that includes peers and technical experts, with optional management participation.
    8. Pre-meeting preparation by reviewers.
    9. Optional use of checklists.
    10. Preparation of a review report, including the list of findings, the verdict on whether the software product meets its requirements and, where appropriate, recommendations related to findings.

    Inspection
    1. The most formal review type; all review stages are followed.
    2. Led by a trained moderator (not the author).
    3. Main purpose: finding defects.
    4. Usually conducted as a peer examination.
    5. Defined roles.
    6. Proper logging is done by a scribe or reporter; includes metrics gathering.
    7. Formal process based on rules and checklists.
    8. Specified entry and exit criteria for acceptance of the software product.
    9. Appropriate even when there are no written documents.
    10. Pre-meeting preparation.
    11. Inspection report including a list of findings.
    12. Formal follow-up process (with optional process-improvement components).
    13. Optional reader.
    Peer Review

    Walkthroughs, technical reviews and inspections
    can be performed within a peer group,
    i.e., colleagues at the same organizational level.
    This type of review is called a “peer review”.

    Code Inspection
    A formal testing technique
    where the programmer reviews source code with a group
    who ask questions analyzing the program logic,
    analyzing the code with respect to a checklist
    of historically common programming errors,
    and analyzing its compliance with coding standards.

    Code Walkthrough
    A formal testing technique
    where source code is traced by a group
    with a small set of test cases,
    while the state of program variables is manually monitored,
    to analyze the programmer’s logic and assumptions.

    Source: ISTQB CTFL Syllabus, www.pavantestingtools.com

Activities of a Formal Review
  1. Planning
    1. Author sends review Request to moderator
    2. Moderator is the person who manages the review, i.e. time, place, invitation etc.
    3. Defining the review criteria
    4. Review team is decided (including moderator and author)
      • Selecting the personnel
      • Allocating roles
    5. Moderator defines the entry and exit criteria for more formal review types (e.g., inspections)
    6. Checking entry criteria (for more formal review types)
      • Entry criteria ensure the document does not have too many major defects (e.g., no more than 3 per page); a short sample is checked first to ensure the document is fit for review, so that time and effort are not wasted
      • Entry criteria ensure the document has been cleaned up by running any automated checks that apply
      • Entry criteria check that line numbers are present in the document and that it does not contain too many obvious mistakes
    7. Selecting which parts of documents to review
      • Document pages are finalized for the review,
      • too many pages should not be part of a single review, maximum 10-20 pages
    8. Review time should be included in Project time (responsibility of Manager)

    The Review team focuses on:

    1. Does the design comply with the requirements
    2. Standards, clarity, naming conventions, templates
    3. Focuses on the related documents at the same level
    4. Focusing on testability and maintainability
  2. Kick-off (initiate review)
    1. It is an optional step during review process. Generally takes place during formal reviews/ inspection.
    2. Distributing documents
    3. Explaining the objectives, process and documents to the participants
    4. Moderator reviews documents with the reviewers
    5. Kick-off helps find more defects during Review meeting
      • Role assignments,
      • checking rate,
      • the pages to be checked,
      • process changes,

      and possible other questions are also discussed during this meeting.

  3. Individual Preparation (individual review)
    1. Preparing for the review meeting by reviewing the document(s) provided by moderator
    2. Noting potential defects, questions and comments
    3. Spelling mistakes are recorded on the document under review but not mentioned during the meeting.
    4. A critical success factor is Checking Rate = the number of pages checked per hour
      Usually the checking rate is 5-10 pages per hour, but it may be much lower for a formal inspection, e.g. one page per hour.
  4. Examination/evaluation/recording of results
    (Review Meeting/ Issue communication and analysis)

    1. Logging Phase
      1. The issues, e.g. defects, that have been identified during the preparation are mentioned page by page, reviewer by reviewer per page and are logged either by the author or by a scribe.
      2. Defects are logged by Severity:
        • Critical: the scope and impact of the defect is large
        • Major: error in the implementation
        • Minor: non-compliance with standards and templates, spelling mistakes
    2. Discussion Phase
      1. Discussing or logging, with documented results or minutes (for more formal review types)
      2. Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
      3. A detailed discussion on whether or not an issue is a defect is not very meaningful, as it is much more efficient to simply log it and proceed to the next one
      4. Moderator tries to keep a good Logging Rate = number of defects logged per minute
      5. Moderator makes sure reviews won’t get personal
    3. Decision Phase
      • Noting defects, making recommendations regarding handling the defects, making decisions about the defects
      • After review decision has to be made whether to review again after rework in case many defects were found
  5. Rework (fixing defects)
    1. Fixing defects found (typically done by the author)
    2. Recording updated status of defects (in formal reviews)
    3. Not every defect found leads to rework.
      It is the author’s responsibility to judge if a defect has to be fixed.
  6. Follow-up (report)
    1. The Moderator is responsible to ensure that satisfactory actions have been taken on all (logged) defects (checked and/or fixed)
    2. It is not necessary for the moderator to check all the corrections in detail
    3. Moderator checks with other reviewers also to get feedback after fixing of defect
    4. Gathering metrics
    5. Checking on exit criteria (for more formal review types)
Formal Review Roles and Responsibilities

A typical formal review will include the 5 roles below:

  1. Manager
    1. decides on the execution of reviews,
    2. allocates time in project schedules
    3. determines if the review objectives have been met.
    4. Sometimes manager also plays the role of reviewer if this would be helpful.
  2. Moderator (Review Leader)
    1. the person who leads the review of the document or set of documents, including planning the review
    2. Coordinates with the author of the document and checks entry criteria for the review
    3. Manages the review, i.e. scheduling the meeting, distributing documents, etc.
    4. runs the meeting (leads the discussion during review),
    5. performs following-up after the meeting (follow up on the rework, in order to control the Quality of the Product).
    6. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
  3. Author
    1. the writer or person with chief responsibility for the document(s) to be reviewed.
    2. Goal is to gain maximum from the review and improve quality of the document.
  4. Reviewers (Checkers/Inspectors)
    1. individuals with a specific technical or business background (sometimes several reviewers with different background are needed)
    2. after the necessary preparation, identify and describe findings (e.g., defects) in the product under review.
    3. They find defects, mostly prior to the meeting
    4. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
  5. Scribe (or Recorder)
    1. documents all the issues, problems and open points that were identified during the meeting.
    2. During review meeting, scribe logs every defect and feedbacks.
    3. Usually Author plays role of Scribe, but it is advantage to have another person as a Scribe so that Author can concentrate on the review.

Source: ISTQB CTFL Syllabus

Static Analysis
Analysis of a program carried out without executing the program.

Static Analyzer
A tool that carries out static analysis.

Static Analysis
  1. The objective of static analysis
    is to find defects in software source code and software models.
  2. The primary purpose of conducting static analysis is to detect defects early.

    Both manual reviews and automated analysis can take place early in the lifecycle.

  3. Static analysis is performed without executing the software being examined by the tool;
    dynamic testing does execute the software code.
  4. Static analysis can locate defects that are hard to find in dynamic testing.
  5. Defects can be identified in documentation that might not be caught by dynamic testing.
  6. Manual examination of documentation technique is a form of static analysis
  7. As with reviews, static analysis finds defects rather than failures.
  8. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.
Typical defects discovered by static analysis tools

Static analysis tools are typically used by developers
(checking against predefined rules or programming standards)
before and during component and integration testing
or when checking-in code to configuration management tools,
and by designers during software modeling.

Static analysis tools may produce a large number of warning messages,
which need to be well-managed to allow the most effective use of the tool.

Compilers may offer some support for static analysis,
including the calculation of metrics.

  • Referencing a variable with an undefined value
  • Inconsistent interfaces between modules and components
  • Variables that are not used or are improperly declared
  • Unreachable (dead) code – should be detected by a static analysis tool; it can be quite hard to find any other way (code reviews may also catch it)
  • Missing and erroneous logic (potentially infinite loops)
  • Overly complicated constructs
  • Programming standards violations
  • Security vulnerabilities
  • Syntax violations of code and software models
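Several of these defect classes can be found mechanically, without running the code. A toy checker for unused variables, sketched with Python's `ast` module (illustrative only, not a real static analysis tool):

```python
import ast

def unused_variables(source):
    """Report names that are assigned but never read -- found purely by
    analyzing the parse tree, without executing the program."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            else:
                used.add(node.id)
    return sorted(assigned - used)

code = """
def total(items):
    count = len(items)   # assigned but never read
    result = sum(items)
    return result
"""
print(unused_variables(code))  # -> ['count']
```

Production tools (lint-style analyzers) apply many such rules at once and, as noted above, can emit large numbers of warnings that need to be managed.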
    Describe, using examples, the typical benefits of static analysis

    The value of static analysis is:

    1. Early detection of defects prior to test execution
    2. Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
    3. Identification of defects not easily found by dynamic testing
    4. Detecting dependencies and inconsistencies in software models such as links
    5. Improved maintainability of code and design

      Static analysis would be the most effective testing technique for determining and improving the maintainability of the code (assuming developers fix what is found).

    6. Prevention of defects, if lessons are learned in development

    Coding Standards:

    1. Class names should start with a capital letter
    2. Method names should start with a lowercase letter
    3. Coding standards are entered into the static analyzer tool as the rules to check against

    Code Metrics

    1. Examples: comment frequency, overly complex code, growing code size
    2. Roughly 20% of the code often contains 80% of the defects
    3. Cyclomatic complexity indicates the number of independent paths, and therefore how much testing is required
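Cyclomatic complexity can be approximated by counting decision points; a rough sketch using Python's `ast` module (the counting rules are a simplification of McCabe's metric):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe cyclomatic complexity: 1 + decision points."""
    score = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                             ast.ExceptHandler)):
            score += 1
        elif isinstance(node, ast.BoolOp):
            # `a and b and c` adds two decisions, not one
            score += len(node.values) - 1
    return score

sample = """
def grade(score):
    if score >= 90:
        return 'A'
    elif score >= 75:
        return 'B'
    return 'C'
"""
print(cyclomatic_complexity(sample))  # -> 3: two branch points plus one
```

A complexity of 3 suggests at least three test cases are needed to cover every independent path through `grade`.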

    Code structure

    1. Control flow structure: dead code
    2. Data flow structure: variables never used
    3. Data structure: information helps to identify data structures stack, queue allocation, deletion etc. for designing test cases

    Source: ISTQB CTFL Syllabus

    Common characteristics of experience-based test design techniques
    1. Experience-based testing is where tests are derived from
      1. tester’s skill, knowledge, experience and intuition
      2. experience with similar applications and technologies,
        knowledge about likely defects and their distribution
      3. Knowledge of testers, developers, users and other stakeholders
        about the software, its usage and its environment
    2. Typically used in conjunction with other, more formal techniques,
      often used to fill in the gaps left by the more formal testing techniques

      When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches.

    3. However, this technique may yield widely varying degrees of effectiveness, depending on the testers’ experience.

    Source: ISTQB CTFL Syllabus

    Experience-Based Testing Types
    1. Error Guessing
    2. Exploratory Testing
    3. Checklist-based Testing

    Error Guessing
    1. A commonly used experience-based technique is error guessing.
    2. Error Guessing
      A test technique in which tests are derived
      on the basis of the tester’s knowledge of past failures,
      or general knowledge of failure modes.
    3. Error guessing tests “That could never happen” conditions
    4. The success of error guessing is very much dependent on the skill of the tester.
    5. Generally testers anticipate defects based on
      1. experience
      2. available defect and failure data,
      3. from common knowledge about why software fails.
    6. A structured approach to the error guessing technique is
      to enumerate a list of possible defects (defect and failure lists) and
      to design tests that attack these defects.
      This systematic approach is called fault attack.

    Source: ISTQB CTFL Syllabus
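A fault attack can be sketched as a defect list driving one test per guessed defect (the parse_age function and the attack list are illustrative assumptions):

```python
# Hypothetical unit under test: parse a user-supplied age.
def parse_age(text):
    age = int(text.strip())
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Fault attack: enumerate the defects we *guess* are likely,
# then aim one test at each guess.
attacks = [
    ("empty string",       "",     ValueError),
    ("whitespace only",    "   ",  ValueError),
    ("non-numeric input",  "abc",  ValueError),
    ("negative age",       "-1",   ValueError),
    ("absurdly large age", "9999", ValueError),
]

for name, payload, expected in attacks:
    try:
        parse_age(payload)
    except expected:
        print(f"OK: {name} rejected as expected")
    else:
        print(f"DEFECT FOUND: {name} was accepted")
```

The quality of the attack list is exactly where the tester's experience comes in: a tester who has seen encoding or locale failures before would add those guesses too.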

    Exploratory Testing
    1. Exploratory testing
      Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes.
    2. Testers are involved in minimum planning and maximum test execution
    3. A key aspect of exploratory testing is learning:
      learning by the tester about the software
    4. Exploratory testing is about exploring, finding out about the software: what it does/ doesn’t do, what does and doesn’t work.
    5. It is an approach that is most useful where
      there are few (no) or inadequate specifications and severe time pressure,
      or in order to augment or complement other, more formal testing.
      It can serve as a check on the test process, to help ensure that the most serious defects are found.
    6. Testers can report many issues due to incomplete requirements or missing requirement document.

    Source: ISTQB CTFL Syllabus

    1. Error Guessing
      A test technique in which tests are derived based on the tester’s knowledge of past failures, or general knowledge of failure modes.
    2. Checklist-based Testing
      An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.
    3. Exploratory Testing
      An approach to testing where the tester dynamically designs and executes tests based on their knowledge, exploration of the test item and the results of previous tests.
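The checklist-based approach can be sketched as a list of (item, check) pairs applied to a product record (the product fields and the checks themselves are illustrative assumptions):

```python
# A product under test, reduced to a plain record for illustration.
product = {"title": "Login page", "max_response_ms": 180, "has_help_link": True}

# The checklist: each entry pairs a human-readable item with a check.
checklist = [
    ("every page has a title",     lambda p: bool(p.get("title"))),
    ("response time under 200 ms", lambda p: p["max_response_ms"] < 200),
    ("help link is present",       lambda p: p["has_help_link"]),
]

results = {item: check(product) for item, check in checklist}
for item, passed in results.items():
    print(f'[{"PASS" if passed else "FAIL"}] {item}')
```

In practice the checklist is maintained from experience: each newly discovered failure mode becomes a new item to be checked on future products.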

    Ad-Hoc Testing
    When software testing is performed without proper planning and documentation, it is said to be ad-hoc testing. Such tests are executed only once, unless defects are uncovered.

    Ad-hoc testing can be performed when there is limited time to do exhaustive testing, and it is usually performed after the formal test execution.

    Ad-hoc methods are the least formal type of testing, as this is not a structured approach. Hence, defects found using this method are hard to replicate, since no test cases are aligned with those scenarios.

    Testing is carried out using the tester’s knowledge of the application; the tester tests randomly, without following the specifications/requirements.

    Hence the success of ad-hoc testing depends upon the capability of the tester who carries out the test. The tester has to find defects without any proper planning and documentation, relying solely on intuition.

    Can include negative testing as well.

    Forms of Adhoc Testing

    Buddy Testing: Two buddies, one from development team and one from test team mutually work on identifying defects in the same module. Buddy testing helps the testers develop better test cases while development team can also make design changes early. This kind of testing happens usually after completing the unit testing.

    Pair Testing: Two testers are assigned the same modules and they share ideas and work on the same systems to find defects. One tester executes the tests while another tester records the notes on their findings.

    Monkey Testing: Testing is performed randomly without any test cases in order to break the system.

    Source: tutorialspoint.com

    Gorilla Testing
    Heavily testing a single particular module or functionality.


    Quality
    The degree to which a component, system or process
    meets specified requirements
    and/or user/customer needs and expectations.


    Risk
    • A factor (chance of an event, hazard, threat or situation)
      that could result in future negative consequences or a potential problem.
    • Risk is the possibility of an undesirable outcome, i.e. a loss.
    1. This loss can be anything:
      1. money,
      2. time,
      3. effort
      4. compromise in quality
    2. Two types of risk: product risk and project risk.
    Risk Level

    Considering the consequence of the risk is important.

    The level of risk will be determined
    by the likelihood of an adverse event happening
    and the impact (the harm resulting from that event).
    E.g. Risk for catching a cold for a young vs an old person.
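The likelihood-times-impact calculation can be sketched directly (the 1-5 scales, the areas, and the figures are illustrative assumptions):

```python
# Risk level = likelihood x impact, used to rank areas for testing.
risks = [
    {"area": "payment processing", "likelihood": 3, "impact": 5},
    {"area": "report layout",      "likelihood": 4, "impact": 1},
    {"area": "data migration",     "likelihood": 2, "impact": 4},
]

for r in risks:
    r["level"] = r["likelihood"] * r["impact"]

# Highest risk first: test these areas earliest and most thoroughly.
ranked = sorted(risks, key=lambda r: r["level"], reverse=True)
for r in ranked:
    print(f'{r["area"]}: level {r["level"]}')
# -> payment processing: level 15
#    data migration: level 8
#    report layout: level 4
```

Note how a frequent but harmless problem (report layout) ranks below a rarer but severe one (payment processing) once impact is factored in.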

    Project Risk

    Project risks
    are the risks that surround the project’s capability
    to deliver its objectives, such as:

    1. Organizational factors:
      1. Skill, training and staff shortages
      2. Personnel issues
      3. Resources:
        • Not enough resources,
        • resources on boarding too late (process takes around 15 days)
      4. Political issues, such as:

        • Problems with testers
          communicating their needs and test results
        • Failure by the team
          to follow up on information found in testing and reviews
          (e.g., not improving development and testing practices)
        • Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)

      E.g., A schedule that requires work during Christmas shutdown

    2. Technical issues:
      1. Requirement problems are a project risk:
        • Problems in defining the right requirements
        • The extent to which requirements cannot be met
          given existing constraints
      2. Late/not ready on time
        • test environment
        • development and testing data conversion
        • migration planning and tools
      3. Low quality of the
        • design,
        • code,
        • configuration data,
        • test data and tests
    3. Supplier issues:
      • Failure of a third party
      • Contractual issues

    Examples of project risks:

    • Low quality of requirements, design, code and tests.
    • Problems in defining the right requirements, potential failure areas in the software or system.
    • Natural disasters
    • Political problems, and delays in especially complex areas in the product.
    Product Risks

    Product Risks
    Potential failure areas (adverse future events or hazards)
    in the software or system
    that are a risk to the quality of the product.

    These include:

    1. Failure-prone software delivered
    2. The potential that the software/hardware
      could cause harm to an individual or company
    3. Poor software characteristics
      • e.g., functionality
      • security,
      • reliability,
      • usability
      • performance
    4. Poor data integrity and quality
      • e.g., data migration issues,
      • data conversion problems,
      • data transport problems,
      • violation of data standards
    5. Software that does not perform its intended functions

    Example of product risks:

    • A defect that is causing a performance issue
    • There are several usability issues in the software
    • A duplicate requirement
    • An issue with a data conversion procedure
    • A data conversion is failing because of an unexpected data format
    • The software fails to detect the selection of an invalid workflow path by a user with restricted rights (product risk such as this could cause a project to fail if the problem is not detected).
    • Error-prone areas, potential harm to the user, poor product characteristics.
    Risk Analysis and Risk Management
    1. Risks are used to
      1. decide where to start testing
      2. and where to test more;
      3. to reduce the risk of an adverse effect occurring,
      4. or to reduce the impact of an adverse effect.
    2. Product risks are a special type of risk to the success of a project.
    3. When analyzing, managing and mitigating risks,
      the test manager is following
      well-established project management principles.
    4. During the planning stage of a project, the team and the project stakeholders
      • develop a list of product risks and project risks
      • determine the extent of testing required for the product risks and the mitigation and contingency actions required for the project risks
    Risk-based Approach to Testing
    1. Testing as a risk-control activity
      provides feedback about the residual risk
      by measuring the effectiveness of critical defect removal
      and of contingency plans.
    2. A risk-based approach to testing
      provides proactive opportunities
      to reduce the levels of product risk,
      starting in the initial stages of a project.
    3. It involves the identification of product risks
      and their use in guiding
      • test planning and control,
      • specification, preparation and execution of tests.
    4. In a risk-based approach the risks identified may be used to:
      • Determine appropriate testing techniques to use on the system
      • Determine the extent of testing to be carried out
      • Prioritize testing in an attempt to find the critical defects as early as possible
      • Determine whether any non-testing activities
        could be employed to reduce risk
        (e.g., providing training to inexperienced designers)
    5. Risk-based testing
      draws on the collective knowledge and insight of the project stakeholders
      to determine the risks and the levels of testing required to address those risks.
    6. To ensure that the chance of a product failure is minimized,
      risk management activities provide a disciplined approach to:

      • Assess (and reassess on a regular basis) what can go wrong (risks)
      • Determine what risks are important to deal with
      • Implement actions to deal with those risks
    7. In addition, testing may support
      • the identification of new risks,
      • may help to determine what risks should be reduced,
      • and may lower uncertainty about risks.

    Source: ISTQB CTFL Syllabus
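    The risk-based prioritization described above can be sketched in a few lines. This is a minimal illustration, not a standard algorithm: the areas and the 1–5 likelihood/impact scores are invented example values.

```python
# Hypothetical illustration of risk-based test prioritization:
# rank test areas by risk exposure = likelihood of failure x impact of failure.
def risk_score(likelihood, impact):
    """Simple risk exposure metric (both inputs on an example 1-5 scale)."""
    return likelihood * impact

# (likelihood, impact) scores are invented for illustration.
areas = {
    "payment processing": (4, 5),   # likely to fail, severe impact
    "data conversion":    (5, 3),
    "user profile page":  (2, 2),
    "help screens":       (1, 1),
}

# Test the riskiest areas first and allocate more testing effort to them.
prioritized = sorted(areas, key=lambda a: risk_score(*areas[a]), reverse=True)
```

    The output order then drives where to start testing and where to test more, as item 1 of "Risk Analysis and Risk Management" describes.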

    Breadth Testing
    A test suite that exercises
    the full functionality of a product
    but does not test features in detail.

    Depth Testing
    A test that exercises
    a feature of a product in full detail.

    Source: pavantestingtools.com

    The context-driven school of software testing
    is a flavor of Agile Testing
    that advocates continuous and creative evaluation
    of testing opportunities
    in light of the potential information revealed
    and the value of that information
    to the organization right now.

    Source: pavantestingtools.com

    End-to-End (E2E) Testing
    Testing a complete application environment
    in a situation that mimics real-world use.

    End-to-end testing is a technique used to test whether the flow of an application right from start to finish is behaving as expected. The purpose of performing end-to-end testing is to identify system dependencies and to ensure that the data integrity is maintained between various system components and systems.

    The entire application is tested for critical functionalities such as communicating with the other systems, interfaces, database, network, and other applications.

    Source: tutorialspoint.com

    Exhaustive Testing
    Testing which covers all combinations of input values and preconditions for an element of the software under test.

    Exhaustive testing is not feasible according to the testing principles, as all possible combinations cannot be covered.
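    A quick back-of-the-envelope calculation shows why. The field counts and checking speed below are invented example numbers, chosen only to illustrate the combinatorial explosion.

```python
# Why exhaustive testing is infeasible: input combinations multiply.
# Example numbers are illustrative, not from any real system.
fields = 10                # a form with 10 independent input fields
values_per_field = 100     # each field accepts 100 distinct values

combinations = values_per_field ** fields   # 100^10 = 10^20 combinations

# Even at a million automated checks per second, running them all
# would take on the order of millions of years.
seconds = combinations / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
```

    This is why prioritization and test design techniques exist: to select a small, high-value subset of the input space.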

    Localization Testing

    This term refers to testing software that has been adapted for a specific locality or locale.

    Comparison testing
    is testing that compares
    software weaknesses and strengths
    to those of competitors’ products.

    Testing Documentation

    1. Documentation plays a critical role in QA.
    2. QA practices should be documented, so that they are repeatable.
      1. Specifications,
      2. designs,
      3. business rules,
      4. inspection reports,
      5. configurations,
      6. code changes,
      7. test plans,
      8. test cases,
      9. bug reports,
      10. user manuals

      should all be documented.

    3. Ideally, there should be a system
      for easily finding and obtaining documents
      and for determining which document
      contains a particular piece of information.
    4. Use documentation change management, if possible.

    Test Policy
    A high-level document describing the principles, approach and major objectives of the organization regarding testing.
    Synonyms: organizational test policy

    Source: ISTQB CTFL Syllabus

    The test strategy
    is a formal description
    of how a software product will be tested.

    A test strategy is developed
    for all levels of testing, as required.

    The test team analyzes the requirements,
    writes the test strategy
    and reviews the plan
    with the project team.

    The test plan may include

    1. test cases,
    2. conditions,
    3. the test environment,
    4. a list of related tasks,
    5. pass/fail criteria
    6. and risk assessment.

    Inputs for this process:

    1. A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
    2. A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.
    3. Testing methodology. This is based on known standards.
    4. Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.
    5. Requirements that the system cannot provide, e.g. system limitations.

    Outputs for this process:

    1. An approved and signed off test strategy document, test plan, including test cases.
    2. Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

    Test Suite
    less commonly known as a ‘validation suite’,
    is a collection of test cases
    used to validate that an application
    has a specified set of behaviors.

    The scope of a Test Suite varies from organization to organization.

    There may be several Test Suites for a particular product.

    In most cases however a Test Suite is a high level concept,
    grouping together hundreds or thousands of tests
    related by what they are intended to test.

    Source: Wikipedia, pavantestingtools.com
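    In Python's standard `unittest` framework, this grouping is literal: a `TestSuite` object collects test cases related by what they are intended to test. The shopping-cart test names below are invented for illustration.

```python
import unittest

# Minimal sketch of a test suite: related test cases grouped together.
# The cart behavior under test is a plain Python list, for illustration only.
class CartTests(unittest.TestCase):
    def test_add_item(self):
        cart = []
        cart.append("book")
        self.assertIn("book", cart)

    def test_remove_missing_item_fails(self):
        cart = []
        with self.assertRaises(ValueError):   # removing an absent item raises
            cart.remove("book")

# A suite groups test cases used to validate a specified set of behaviors.
suite = unittest.TestSuite()
suite.addTest(CartTests("test_add_item"))
suite.addTest(CartTests("test_remove_missing_item_fails"))

result = unittest.TextTestRunner(verbosity=0).run(suite)
```

    A real product may have several such suites, each potentially holding hundreds or thousands of cases.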

    Test Charter
    A statement of test objectives, and possibly test ideas about how to test. Test charters are used in exploratory testing.
    Source: ISTQB CTFL Syllabus

    Test Basis
    The body of knowledge used as the basis for test analysis and design.

    All documents from which the requirements of a component or system can be inferred.

    The documentation on which the test cases are based.

    Testing Levels Test Basis Documentation

    If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

    Test basis is defined as the source of information or the document that is needed to write test cases and also for test analysis.

    Test basis should be well defined and adequately structured so that one can easily identify test conditions from which test cases can be derived.

    When test cases are designed early in the lifecycle, verifying the test basis via test design, which common test objective is being achieved?
    Preventing defects. You are likely to find defects during this analysis, which may also lead to gaining or losing confidence in the product and to supplying information to the decision makers.

    Source: ISTQB CTFL Syllabus

    Test Condition

    Test Condition
    An aspect of the test basis that is relevant in order to achieve specific test objectives.
    Synonyms: test requirement, test situation, test inventory, objectives

    An item or event of a component or system
    that could be verified by one or more test cases, e.g.,

    1. a function,
    2. transaction,
    3. feature,
    4. quality attribute,
    5. or structural element.
    Test Analysis: Identifying Test Conditions

    During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions.

    1. A test condition is simply something that we could test.
      E.g., for a requirements document, its contents would be the test conditions.
    2. Tests are written from information called the test basis (the basis for tests – all the requirements from which we can write test cases).
    3. The test basis could be anything, e.g. verbal communication, emails, documents, video, etc.
    4. Exhaustive testing (testing everything) is not possible.
    5. Prioritization is important.
    6. An intelligent process to guide our selection of what to test is provided by test techniques.
    7. Traceability – linking test conditions back to their sources in the test basis.
      Traceability (via a traceability matrix) helps during changes in requirements, when tests fail, and in verifying that all requirements are covered.
    Functional vs. Non-functional Requirement

    A provision that contains criteria to be fulfilled.
    A condition or capability needed by a user
    to solve a problem or achieve an objective
    that must be met or possessed by a system or system component
    to satisfy a contract, standard, specification, or other formally imposed document.

    Functional Requirement
    A requirement that specifies a function that a component or system must be able to perform.

    Non-functional Requirement
    A requirement that describes how the component or system will do what it is intended to do.

    Source: ISTQB CTFL Syllabus

    They are different, but they talk about the same thing.

    Requirements are the users’ description of what the finished product, in their eyes, should do.

    Requirements are descriptions of the future.
    This is a broad term that can include high level ideas for a product, service, experience, building, facility, technology or infrastructure.

    Requirement Specifications
    Requirements may begin as high level ideas
    that are refined over time
    to become requirements specifications
    that are detailed enough
    to be created by a subject matter expert
    without much need for interpretation.

    Specifications are detailed descriptions of the present or future
    that can be interpreted without much ambiguity.

    A specification is the technical description of the solution in general,
    covering the requirements and much more.

    The term specifications
    extends beyond requirements specifications
    to detailed documentation of

    1. non-functional requirements,
    2. designs,
    3. standards,
    4. products,
    5. services,
    6. processes,
    7. methods
    8. interfaces,
    9. practices,
    10. infrastructure,
    11. equipment,
    12. technologies
    13. costs
    14. problems
    15. and documentation templates.

    Source: softwareengineering.stackexchange.com

    Functional Specification
    A document that describes in detail
    the characteristics of the product
    with regard to its intended features.

    Sources: pavantestingtools.com

    Software Requirements Specification

    A deliverable that describes

    1. all data,
    2. functional and behavioral requirements,
    3. all constraints,
    4. and all validation requirements

    for software.

    Traceability Matrix
    A document showing the relationship
    between Test Requirements and Test Cases.

    Test Plan
    is a record of the test planning process.

    Documentation describing

    1. the test objectives to be achieved
    2. and the means for achieving them
      1. scope,
      2. approach,
      3. resources
      4. and schedule

    It identifies amongst others test items

    1. the features to be tested,
    2. the testing tasks,
    3. who will do each task,
    4. degree of tester independence,
    5. the test environment,
    6. the test design techniques
    7. entry and exit criteria to be used, and the rationale for their choice,
    8. and any risks requiring contingency planning.

    The process of preparing a test plan
    is a useful way to think through the efforts
    needed to validate the acceptability of a software product.

    The completed document will help people outside the test group
    understand the why and how of product validation.

    It should be thorough enough to be useful,
    but not so thorough that no one outside the test group
    will be able to read it.

    Test scenarios and/or cases
    are prepared by reviewing
    functional requirements of the release
    and preparing logical groups of functions
    that can be further broken into test procedures.

    Test procedures
    define test conditions,
    data to be used for testing and expected results,
    including database updates,
    file outputs, report results.

    Generally speaking

    1. Test cases and scenarios
      are designed to represent both
      typical and unusual situations
      that may occur in the application.
    2. Test engineers define unit test requirements and unit test cases.
      Test engineers also execute unit test cases.
    3. Test team, with assistance of developers and clients,
      develops test cases and scenarios
      for integration and system testing.
    4. Test scenarios are executed
      through the use of test procedures or scripts.
    5. Test procedures or scripts
      may cover multiple test scenarios.
    6. Test procedures or scripts define a series of steps
      necessary to perform one or more test scenarios.
    7. Test procedures or scripts
      include the specific data that will be used
      for testing the process or transaction.
    8. Test scripts are mapped back to the requirements
      and traceability matrices are used
      to ensure each test is within scope.
    9. Test data is captured and base lined, prior to testing.
      This data serves as the foundation
      for unit and system testing
      and used to exercise system functionality
      in a controlled environment.
    10. Some output data is also base-lined for future comparison.
      Base-lined data is used
      to support future application maintenance
      via regression testing.
    11. A pretest meeting is held
      to assess the readiness of the application
      and the environment
      and data to be tested.

      A test readiness document is created
      to indicate the status of the entrance criteria of the release.
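    The base-lined output data in items 9 and 10 above supports regression testing by comparison: a later run's output is diffed against the captured baseline. A minimal sketch, with invented field names and values:

```python
# Sketch of baseline ("golden") output comparison for regression testing.
# The baseline dict stands in for output data captured and base-lined
# before testing; field names and values are invented.
baseline = {"total_orders": 42, "revenue": "1999.90", "status": "OK"}

def compare_to_baseline(actual, expected=baseline):
    """Return fields whose current output differs from the baseline,
    as {field: (baseline_value, actual_value)}."""
    return {k: (expected.get(k), actual.get(k))
            for k in expected
            if expected.get(k) != actual.get(k)}

# A later regression run produces new output; any diff signals a change
# that must be investigated (intended change or regression defect).
new_output = {"total_orders": 42, "revenue": "2001.10", "status": "OK"}
diffs = compare_to_baseline(new_output)
```
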

    Inputs for this process:

    1. Approved Test Strategy Document.
    2. Test tools, or automated test tools, if applicable.
    3. Previously developed scripts, if applicable.
    4. Test documentation problems uncovered as a result of testing.
    5. A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. software design document, source code, and software complexity data.

    Outputs for this process:

    1. Approved documents of test scenarios, test cases, test conditions, and test data.
    2. Reports of software design issues, given to software developers for correction.

    1. Test procedure contains end to end test cases
      E.g., to test the sending of an email from Gmail.com,
      the order of combining test cases to form a test procedure would be:

      1. The test to check the login
      2. The test to compose an email
      3. The test to attach one/more attachments
      4. Formatting the email in the required way by using various options
      5. Adding contacts or email addresses to the To, BCC, CC fields
      6. Sending an email and making sure it is showing in the “Sent Mail” section

      All the test cases above are grouped to achieve a certain target at the end of them.

    2. Test procedures have a few test cases combined at any point in time.
      Test suites can contain 100s or even 1000s of test cases.
    3. Test suite contains all new features and regression test cases.
      E.g., Application’s current version is 2.0. The previous version 1.0 might have had 1000 test cases to test it entirely. For version 2 there are 500 test cases to just test the new functionality that is added in the new version. So, the current test suite would be 1000+500 test cases that include both regression and the new functionality.
    4. In some tools, test procedures are coded in a dedicated language such as TPL (Test Procedure Language).
    5. Test suite contains manual test cases or automation scripts.

    Source: softwaretestinghelp.com
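    The "send an email" procedure above is essentially an ordered chain of test cases, each building on the state left by the previous one. A minimal sketch, with stub step functions standing in for the real test cases:

```python
# Sketch of a test procedure as an ordered chain of test cases.
# Each function is a stub standing in for a real test case; the shared
# "state" dict models what earlier steps set up for later ones.
def check_login(state):   state["logged_in"] = True
def compose_email(state): state["draft"] = "Hello"
def attach_file(state):   state["attachments"] = ["report.pdf"]
def send_email(state):    state["sent"] = state.pop("draft")  # draft becomes sent mail

# Order matters: sending depends on composing, which depends on logging in.
procedure = [check_login, compose_email, attach_file, send_email]

state = {}
for step in procedure:
    step(state)
```
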

    Primary difference between the test plan, the test design specification, and the test procedure specification
    1. The test plan describes one or more levels of testing,
    2. the test design specification identifies the associated high-level test cases
    3. and a test procedure specification describes the actions for executing a test

    A software project test plan
    is a document that describes
    the objectives, scope, approach,
    and focus of a software testing effort.

    The process of preparing a test plan
    is a useful way to think
    through the efforts needed
    to validate the acceptability
    of a software product.

    The completed document will help
    people outside the test group
    understand the ‘why’ and ‘how’
    of product validation.

    It should be thorough enough to be useful
    but not so thorough
    that no one outside the test group
    will read it.

    Test Plan Document

    The following are some of the items that might be included in a test plan, depending on the particular project:

    1. Title
    2. Identification of software including version/release numbers.
    3. Revision history of document including authors, dates, approvals.
    4. Table of Contents.
    5. Purpose of document, intended audience
    6. Objective of testing effort
    7. Software product overview
    8. Relevant related document list, such as requirements, design documents, other test plans, etc.
    9. Relevant standards or legal requirements
    10. Traceability requirements
    11. Relevant naming conventions and identifier conventions
    12. Overall software project organization and personnel/contact-info/responsibilities
    13. Test organization and personnel/contact-info/responsibilities
    14. Assumptions and dependencies
    15. Project risk analysis
    16. Testing priorities and focus
    17. Scope and limitations of testing
    18. Test outline – a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
    19. Outline of data input equivalence classes, boundary value analysis, error classes
    20. Test environment – hardware, operating systems, other required software, data configurations, interfaces to other systems
    21. Test environment validity analysis – differences between the test and production systems and their impact on test validity.
    22. Test environment setup and configuration issues
    23. Software migration processes
    24. Software CM processes
    25. Test data setup requirements
    26. Database setup requirements
    27. Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
    28. Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
    29. Test automation – justification and overview
    30. Test tools to be used, including versions, patches, etc.
    31. Test script/test code maintenance processes and version control
    32. Problem tracking and resolution – tools and processes
    33. Project test metrics to be used
    34. Reporting requirements and testing deliverables
    35. Software entrance and exit criteria
    36. Initial sanity testing period and criteria
    37. Test suspension and restart criteria
    38. Personnel allocation
    39. Personnel pre-training needs
    40. Test site/location
    41. Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues.
    42. Relevant proprietary, classified, security, and licensing issues.
    43. Open issues
    44. Appendix – glossary, acronyms, etc.
    1. Test plan identifier
    2. Test deliverables
    3. Introduction
    4. Testing tasks
    5. Test items
    6. Environment needs
    7. Features to be tested
    8. Features not to be tested
    9. Responsibilities
    10. Staffing and training needs
    11. Approach
    12. Schedule
    13. Item pass/fail criteria
    14. Risks and contingencies
    15. Suspension and resumption criteria
    16. Approvals

    Source: ISTQB and IEEE Software Test Documentation

    1. Test design specification identifier
    2. Features to be tested
    3. Approach
    4. Refinements
    5. Test identification
    6. Feature
    7. Pass/fail criteria

    Source: ISTQB and IEEE Software Test Documentation

    Use Case

    Use Case Document
    is a business document which provides a story
    of how a system, and its actors,
    will be utilized to achieve a specific goal.

    The specification of tests
    that are conducted from the end-user perspective.

    Use cases tend to focus on operating software
    as an end-user would conduct their day-to-day activities.

    An effective Use Case
    should provide a detailed step-by-step description
    of how the system will be used by its actors
    to achieve the planned outcome.

    1. Use cases contain three items:
      1. Actor
        can be a human or other external system
      2. Action/Process Flow
      3. Outcome
    2. Use cases can be executed manually and automatically.
    3. A use case describes interactions
      between actors (users or systems),
      which produce a result of value
      to a system user or the customer.
    4. A Use case is a description
      of a particular use of the system
      by the end user of a system.
    5. Use case testing is a technique
      that helps us identify test cases
      that exercise the whole system
      on a transaction by transaction basis
      from start to finish.
    6. Use cases use the business language rather than technical terms
      Use cases are defined in terms of the actor, not the system,
      describing what the actor does and what the actor sees
      rather than what inputs the system expects
      and what the system outputs
    7. Use cases may be described
      1. at the abstract level
        (business use case, technology-free, business process level)
      2. at the system level
        (system use case on the system functionality level).
    8. Each use case has preconditions
      which need to be met for the use case to work successfully.
    9. Each use case terminates with postconditions
      which are the observable results and final state of the system
      after the use case has been completed.
    10. A use case has multiple “paths” that can be taken by any user at any one time. A use case scenario is a single path through the use case.

      A use case usually has

      1. a mainstream (i.e., most likely) scenario
        making sure that the mainstream business processes are tested
      2. and alternative scenarios.
    11. Use cases describe the “process flows”
      through a system based on its actual likely use,
      so the test cases derived from use cases
      are most useful in uncovering defects
      in the process flows during real-world use of the system
      (system-level tests).
    12. Use cases are very useful for designing acceptance tests with customer/user participation.
    13. They also help uncover integration defects
      caused by the interaction and interference of different components,
      which individual component testing would not see.
    14. Tests can be derived from use cases.
    15. Designing test cases from use cases
      may be combined with other specification-based test techniques.
    16. Use cases are usually written by business analysts (BAs).

    Source: ISTQB CTFL Syllabus, Wikipedia
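    Item 10 above, one scenario per path, translates directly into test derivation: one test case for the mainstream scenario plus one per alternative. The "withdraw cash" use case and its scenarios below are invented examples.

```python
# Sketch: deriving one test case per path through a use case
# (mainstream scenario + alternative scenarios). Content is invented.
use_case = {
    "name": "Withdraw cash",
    "mainstream": ["insert card", "enter PIN", "choose amount", "take cash"],
    "alternatives": {
        "wrong PIN": ["insert card", "enter wrong PIN", "see error"],
        "insufficient funds": ["insert card", "enter PIN",
                               "choose amount", "see error"],
    },
}

def derive_test_cases(uc):
    """One (scenario_name, steps) test case per path through the use case."""
    cases = [("main", uc["mainstream"])]
    cases += list(uc["alternatives"].items())
    return cases

tests = derive_test_cases(use_case)
```

    Running these end to end exercises the system transaction by transaction, which is why use-case-derived tests are good at finding process-flow and integration defects.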

    Test Case

    Test Case
    Test Case is a commonly used term for a specific test.
    This is usually the smallest unit of testing.

    Per ISTQB it is a set of

    1. input values,
    2. execution preconditions,
    3. actions or events (where applicable)
    4. expected results/outputs
    5. and execution postconditions,

    developed to determine if a feature of an application is working correctly.

    A test case tests a test condition by following a test procedure:
    It is based on a particular objective or test condition,
    to exercise a particular test procedure
    to verify compliance with a specific requirement.

    A test case
    is a document that describes
    an input, action, or event
    and an expected response,
    to determine
    if a feature of an application
    is working correctly.

    A test case
    should contain particulars
    such as test case identifier,
    test case name, objective,
    test conditions/setup,
    input data requirements,
    steps, and expected results.
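    The particulars listed above can be sketched as a small record type. This is one possible shape, not a standard schema; the field names follow the list above and the sample values are invented.

```python
from dataclasses import dataclass, field

# Sketch of the particulars a test case should carry.
# Field names mirror the list above; sample content is invented.
@dataclass
class CaseSpec:
    identifier: str
    name: str
    objective: str
    preconditions: list = field(default_factory=list)
    input_data: dict = field(default_factory=dict)
    steps: list = field(default_factory=list)
    expected_result: str = ""

tc = CaseSpec(
    identifier="TC-101",
    name="Login with valid credentials",
    objective="Verify a registered user can log in",
    preconditions=["user account exists"],
    input_data={"email": "user@example.com", "password": "correct-pass"},
    steps=["open login page", "enter credentials", "submit"],
    expected_result="user lands on the dashboard",
)
```
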

    Test Design: Specifying Test Cases

    During test design
    the test cases and test data
    are created and specified.

    1. How detailed the test case documentation needs to be depends on tester experience.
    2. The source of information about the correct behavior of the system is called the test oracle.
    3. The expected results should be identified when the test case is written, prior to execution: as part of the specification of a test case
      (as the steps of the test case/test procedure).
    4. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one.
    5. Expected results include

      1. outputs,
      2. changes to data and states,
      3. and any other consequences of the test.
    6. Test cases should be meaningful and negative testing is important
      (e.g. login with correct email but wrong password vs both correct)
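    The positive/negative pairing in the login example can be sketched directly. The `authenticate` stub and its one valid account are invented stand-ins for the system under test:

```python
# Sketch of pairing a positive test with its negative counterpart.
# authenticate() is a stub standing in for the system under test;
# the valid account is invented for illustration.
def authenticate(email, password):
    return email == "user@example.com" and password == "s3cret"

# Positive test: correct email AND correct password -> access granted.
positive_ok = authenticate("user@example.com", "s3cret") is True

# Negative test: correct email but wrong password -> access denied.
# Without this case, a system that accepts any password would still pass.
negative_ok = authenticate("user@example.com", "wrong") is False
```
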

    The ‘Standard for Software Test Documentation’ (IEEE STD 829-1998) describes
    the content of test design specifications (containing test conditions)
    and test case specifications.

    Test Case Specification Document
    Documentation of a set of one or more test cases.

    Describes detailed summary of

    1. what scenarios will be tested,
    2. how they will be tested,
    3. how often they will be tested for a given feature.
    4. the purpose of a specific test,
    5. the required inputs and expected results,
    6. provides step-by-step procedures for executing the test,
    7. and outlines the pass/fail criteria for determining acceptance.
    1. Test case specification identifier
    2. Output specifications
    3. Test items
    4. Environmental needs
    5. Input specifications
    6. Special procedural requirements
    7. Intercase dependencies

    Source: ISTQB and IEEE Software Test Documentation

    1. Establishing traceability
      from test conditions
      back to the specifications and requirements
      enables both effective impact analysis when requirements change,
      and determining requirements coverage for a set of tests.
    2. During test analysis
      the detailed test approach is implemented
      to select the test design techniques to use
      based on, among other considerations,
      the identified risks.
    3. The process of developing test cases
      can help find problems in the requirements
      or design of an application
      since it requires you to completely think
      through the operation of the application.

      For this reason,
      it is useful to prepare test cases
      early in the development cycle,
      if possible.

    Source: ISTQB CTFL Syllabus

    Test Procedure
    A document providing detailed instructions
    for the execution of one or more test cases.

    A sequence of test cases in execution order,
    and any associated actions that may be required
    to set up the initial preconditions
    and any wrap up activities post execution.

    1. The test procedure specifies the sequence of actions for the execution of a test.
    2. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).
    3. The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed.
    4. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
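    One way to sketch item 4, a schedule that respects technical and logical dependencies, is a topological sort over the procedures. The procedure names and dependencies below are invented; `graphlib` is in the Python standard library (3.9+).

```python
from graphlib import TopologicalSorter

# Sketch of a test execution schedule honoring dependencies.
# Each key must run AFTER the procedures in its value set.
# Procedure names and dependencies are invented for illustration.
depends_on = {
    "create_account": set(),
    "login":          {"create_account"},
    "place_order":    {"login"},
    "cancel_order":   {"place_order"},
}

# static_order() yields procedures with all prerequisites first.
schedule = list(TopologicalSorter(depends_on).static_order())
```

    Regression tests and prioritization would then reorder or filter within the constraints this ordering imposes.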
    Test Implementation: Specifying test procedures or scripts

    During test implementation the test cases are

    1. developed,
    2. implemented,
    3. prioritized
    4. and organized

    in the test procedure specification (IEEE STD 829-1998).

    Test Implementation: Specifying test procedures or scripts

    1. Grouping of test cases (e.g. tests for log-in)
    2. Test procedure/script are steps to be taken in running a set of tests
    3. Test script written in programming language is called automation script
    1. Test procedure specification identifier
    2. Purpose
    3. Special requirements
    4. Procedure steps

    Source: ISTQB and IEEE Software Test Documentation

    Test Summary Report
    A document summarizing testing activities and results.

    It also provides an evaluation of the corresponding test items against exit criteria.


    Synonyms: test report, test execution report
    1. Test summary report ID
    2. Summary
      Summarize what was tested and what happened.
      Point to all relevant documents.

      • their version/revision level
      • testing environment
      • reference to the following documents if they exist:
        • test plan,
        • test design specification,
        • test procedure specification
        • test item transmittal reports,
        • test logs
        • test incident reports
    3. Variances
      between what was planned for testing
      and what was actually tested

      • If any test items differed from their specifications, describe that.
      • If the testing process didn’t go as planned, describe that.
      • Say why things were different.
    4. Summary of results
      Which problems have been dealt with? What problems remain?

      • summarize the results
      • identify all resolved incidents
        and summarize their resolutions
      • identify all unresolved incidents
    5. Comprehensive assessment
      How thorough was testing, in the light of how thorough the test plan said it should be? What wasn’t tested well enough? Why not?
    6. Evaluation
      How good are the test items?
      Provide an overall evaluation of each test item
      including its limitations
      based on test results and item pass/fail criteria.
      What’s the risk that they might fail?
    7. Summary of activities
      In outline, what were the main things that happened? What did they cost (people, resource use, time, money)?
    8. Approvals
      Who has to approve this report? Get their signatures.

    Source: ISTQB and IEEE Software Test Documentation

    Test log
    A chronological record of relevant details about the execution of tests.


    Synonyms: test record, test run log
    1. Test log identifier
    2. Description
      1. items being tested,
      2. environment information in which the testing is conducted
    3. Activity and event entries
      1. execution description,
      2. procedure results,
      3. environmental information,
      4. anomalous events,
      5. incident report identifiers

    Source: ISTQB and IEEE Software Test Documentation

    Incident Management
    1. Since one of the objectives of testing is to find defects,
      the discrepancies between actual and expected outcomes
      need to be logged as incidents.
    2. Incident is raised whenever actual results vary from expected results.
    3. An incident must be investigated and may turn out to be a defect.
    4. Depending on a company, such incidents are referred to as
      1. bugs,
      2. defects,
      3. problems
      4. or issues.
    5. Appropriate actions to dispose incidents and defects should be defined.
    6. Project status is calculated from incident status.
    7. Incidents and defects should be tracked
      from discovery and classification
      to correction and confirmation of the solution.
    8. In order to manage all incidents to completion,
      an organization should establish
      an incident management process and rules for classification.
    9. Incidents may be raised during
      1. development,
      2. review,
      3. testing
      4. or use of a software product.
    10. Incidents may be raised for issues in
      1. code or the working system
      2. or in any type of documentation including
        • requirements,
        • development documents,
        • test documents,
        • and user information such as “Help” or installation guides
    Incident Report

    When a bug/defect is found,
    the tester is required to report it, with detailed steps to reproduce,
    in a document called an Incident Report
    (Bug Report/ Issue Report/ Problem Report, etc.)

    Test Incident Report
    A document generated during or after the software testing process,
    in which the various incidents and defects
    are reported and logged by team members,
    to maintain transparency within the team
    and to drive the steps needed to resolve these issues.

    Incident reports have the following objectives:

    1. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
    2. Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
    3. Provide ideas for test process improvement

    The format defined by the IEEE std 829-1998 for test incident report is as follows:

    1. Test incident report identifier
    2. Summary
    3. Incident description
      1. inputs
      2. expected results
      3. actual results
      4. anomalies
      5. date and time
      6. procedure step
      7. environment
      8. attempts to repeat
      9. testers and observers
    4. Impact
    Test Incident Report Explanation
    1. Test Incident Report Identifier:
      a unique, company-generated number that helps the team to

      1. uniquely identify and differentiate it from other reports.
      2. identify the level of the report and the level of the software it is related to
      3. may identify the level/phase of testing where the incident originally occurred
    2. Summary:
      all the necessary information and important details.

      1. How, when, and where the incident occurred
      2. The way it was discovered by the team
        (test procedures, techniques, methodologies, etc used for incident discovery).
      3. Test logs showcasing the execution of various test cases and procedures.
      4. Test case specification is included to portray the recreation of the incident.
      5. Any other supporting materials, trace logs, memory dumps/maps etc.
    3. Incident Description:
      extensive and detailed information about the incident
      providing any other supporting evidence,
      which can help the developers
      to understand the defects and incidents effortlessly.

      Few of the things included in this section are:

      1. Inputs
      2. Expected results
      3. Actual results
      4. Anomalies
      5. Date & time
      6. Testing procedures and their steps
        detailed steps of the issue so that developers can reproduce and resolve, including logs, database dumps or screenshots
      7. Attempts to repeat
      8. Testers
        Author/Assigner/Reported By
      9. Observers
      10. Assignee
        the person to whom the defect is assigned for investigation and fixing

      11. Environment
        (configuration item – Product version, build)
      12. References
        including the identity of the test case specification
        that revealed the problem
      13. Levels of testing where incident(s) were discovered.
        Software or system life cycle process in which the incident was observed
    4. Impact:
      Scope or degree of impact on stakeholder(s) interests

      1. Impact of the incident(s) on:
        1. test plans
        2. test design specifications
        3. test procedure specifications
        4. test case specifications
      2. Details about the actual and potential damage caused by the incident.
      3. These are mainly based on
        1. the severity of the incident,
        2. the priority to fix them
        3. or both.
      4. Status of the incident
        1. new,
        2. open,
        3. assigned,
        4. deferred,
        5. duplicate,
        6. waiting to be fixed,
        7. fixed,
        8. awaiting re-test,
        9. awaiting verification/confirmation,
        10. failed,
        11. closed
      5. Change history
        the sequence of actions taken by project team members
        with respect to the incident
        to isolate, repair, and confirm it as fixed
        (Fixed by, date closed)

      The mandatory fields when a tester logs any new bug are

      1. Build version,
      2. Submit On,
      3. Product,
      4. Module,
      5. Severity,
      6. Synopsis
      7. Description to Reproduce

      Optional fields if using manual Bug submission template:

      1. Customer name,
      2. Browser,
      3. Operating system,
      4. File Attachments or screenshots.

      The following fields remain either specified or blank:

      1. Status,
      2. Priority
      3. and ‘Assigned to’

      If you have authority, you can specify these fields. Otherwise, the Test Manager will do that.
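      The mandatory-field check above can be sketched as a small record type with validation. The field names are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of a bug-report record with the mandatory
# fields listed above; names are illustrative, not a standard schema.
@dataclass
class BugReport:
    build_version: str
    submitted_on: str
    product: str
    module: str
    severity: str
    synopsis: str
    steps_to_reproduce: str
    # Optional fields from a manual submission template:
    customer_name: str = ""
    browser: str = ""
    operating_system: str = ""

MANDATORY = ("build_version", "submitted_on", "product", "module",
             "severity", "synopsis", "steps_to_reproduce")

def missing_mandatory_fields(report):
    """Return the names of mandatory fields left empty."""
    data = asdict(report)
    return [name for name in MANDATORY if not data[name].strip()]

report = BugReport("2.4.1", "2024-05-01", "WebShop", "Checkout",
                   "Major", "Total not updated", "", browser="Firefox")
```

      A defect-tracking tool would typically reject this submission until `steps_to_reproduce` is filled in.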

      Source: ISTQB and IEEE Software Test Documentation, softwaretestinghelp.com

    Testing Foundations and Testing Terms

    Principle 1 – Testing shows presence of defects

    1. can show that defects are present,
    2. cannot prove that there are no defects.
    3. reduces the probability of undiscovered defects
    4. finding no defects is not a proof of correctness.

    Principle 2 – Exhaustive testing is impossible

    Exhaustive Testing
    A test approach in which the test suite comprises all combinations of input values and preconditions.

    1. Testing everything/all scenarios (all combinations of inputs and preconditions) is not feasible (except for trivial cases).
    2. Instead, risk analysis and priorities should be used to focus testing efforts (smarter testing).
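    A quick back-of-the-envelope calculation shows why exhaustive testing is infeasible even for a tiny form. The input sizes below are made-up assumptions for illustration:

```python
# Why exhaustive testing is impossible in practice: even a small
# form explodes combinatorially. Field sizes are illustrative.
field_value_counts = [
    26 ** 8,   # an 8-letter username (lowercase letters only)
    10 ** 4,   # a 4-digit PIN
    2,         # a checkbox
    50,        # a dropdown with 50 options
]

total_combinations = 1
for count in field_value_counts:
    total_combinations *= count

# Even at 1000 tests per second, this would take millions of years.
seconds_needed = total_combinations / 1000
years_needed = seconds_needed / (60 * 60 * 24 * 365)
```

    This is why risk analysis and prioritization, not brute force, drive test selection.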

    Principle 3 – Early testing

    1. testing activities shall be started as early as possible in SDLC
    2. Saves time
    3. Saves money
    4. Customer satisfaction

    Which testing activity should occur early in SDLC: documentation review.

    Principle 4 – Defect clustering
    A small number of modules usually contains most of the defects/ is responsible for most of the operational failures.

    Principle 5 – Pesticide paradox

    1. if the same tests are repeated over and over again, the same set of test cases will no longer find any new defects
    2. test cases need to be regularly reviewed and revised,
    3. new and different tests need to be written to find potentially more defects

    Principle 6 – Testing is context dependent
    Testing method/strategy is different in different contexts (for different software). (safety-critical software is tested differently from an e-commerce site)

    Principle 7 – Absence-of-errors fallacy
    Finding and fixing defects does not help if the system built is unusable and does not fulfill the users’ needs and expectations.

    Test only stable software.

    Source: ISTQB CTFL Syllabus

    Causes of Defects

    1. Human (developer) error (mistake)/environmental conditions
    2. => defect (fault, bug, flaw, issue, incident, problem) in program code/ document
    3. => system failure (fail to do what it should do/do something it shouldn’t)

    Reasons for defects

    1. humans are fallible
    2. time pressure
    3. complex code
    4. complexity of infrastructure
    5. changing technologies
    6. many system interactions
    7. environmental conditions (radiation, magnetism, electronic fields, pollution) => faults in firmware/execution of software by changing hardware conditions (hardware degrades with time)
      E.g., playing the same video game on one computer/hardware vs a different one

    Source: ISTQB CTFL Syllabus

    Memory leaks
    frequently occurring defects
    caused by incomplete deallocation of memory that is no longer needed.

    Buffer overflow
    means data sent as input to the server
    overflows the boundaries of the input area,
    causing the server to misbehave.

    Not all defects are functional; some affect non-functional quality characteristics of the software.

    Software testing is sometimes required for legal reasons because contracts may specify testing requirements that must be fulfilled.

    Source: ISTQB CTFL Syllabus

    Software Systems are an integral part of life, from business applications (banking) to consumer products (cars).

    Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

    Critical applications (where life depends on software accuracy) need to work perfectly:

    1. Airplane software (landing)
    2. Artificial Intelligence (healthcare life support, operating robots)

    Source: ISTQB CTFL Syllabus

    Quality Assurance
    planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

    Quality Control
    The operational techniques and the activities
    used to fulfill and verify requirements of quality.

    Quality
    The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

    Risk
    A factor that could result in future negative consequences.

    Testing helps to

    1. measure the quality of software in terms of defects found,
      for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability)
    2. give confidence in the quality of the software if few/no defects
    3. reduces the level of risk to the quality of the system
    4. increases quality of the software system when defects are fixed

    Source: ISTQB CTFL Syllabus

    Quality Audit
    A systematic and independent examination to determine
    whether quality activities and related results
    comply with planned arrangements
    and whether these arrangements
    are implemented effectively
    and are suitable to achieve objectives.

    Quality Circle
    A group of individuals with related interests
    that meets at regular intervals
    to consider problems or other matters
    related to the quality of outputs of a process
    and to the correction of problems
    or to the improvement of quality.

    Total Quality Management
    A company commitment
    to develop a process
    that achieves high quality product
    and customer satisfaction.

    Quality Management
    That aspect of the overall management function
    that determines and implements the quality policy.

    Quality Policy
    The overall intentions and direction of an organization
    as regards quality as formally expressed by top management.

    Quality System

    The organizational

    1. structure,
    2. responsibilities,
    3. procedures,
    4. processes
    5. and resources

    for implementing quality management.

    Improving the process based on root cause analysis can help prevent defects from recurring.

    Lessons learned from other projects, when used to improve processes, improve system quality for future projects.

    Understanding the root causes (root cause analysis) of the defects in other projects helps to better identify and correct the root cause of defects

    1. => processes improved
    2. => prevents defects from recurring
    3. => improve the quality of future systems.

    This is an aspect of quality assurance.

    Testing should be integrated as one of the quality assurance activities
    (alongside development standards, training and defect analysis).

    Source: ISTQB CTFL Syllabus

    Testers should start reviewing project documents as soon as a draft is available.

    Finding defects early

    1. keeps the Cost of Project as predicted
    2. delivers Project on Time

    Rigorous testing before release is very important because of the following reasons:

    1. reduce the risk of problems occurring during operation
      (quality product requires lower maintenance cost)
    2. contribute to the quality of the software system
      (point out the defects and errors that were made
      helps to gain customers’ confidence in the product,
      Verifies customer expectations/satisfaction)
    3. may be required to meet contractual or legal requirements, or industry-specific standards

    Source: ISTQB CTFL Syllabus

    Error vs. Incident vs. Defect vs. Failure

    Error
    A human action that produces an incorrect result.
    Synonyms: mistake

    Incident
    An event occurring that requires investigation.
    Not all incidents result in defects.
    Synonyms: deviation, (software) test incident

    Defect
    Imperfection/deficiency/flaw/error in a work product (component or system) where it

    • does not meet its requirements or specifications.
    • restricts the normal flow of an application
      by mismatching the expected behavior of an application with the actual one.

    Synonyms: bug, fault

    Failure
    • An event in which a component or system does not perform its required function within specified limits.
    • Deviation of the component or system from its expected delivery, service or result.

    Not all defects result in failures.
    It is the failure that is seen during execution, not the defect itself.
    The failure is a symptom of the defect.

    If developers find a mismatch
    between the actual and expected behavior of an application
    in the development phase,
    they call it an Error.
    If testers find a mismatch
    between the actual and expected behavior of an application
    in the testing phase,
    they call it a Defect.
    If customers or end users find a mismatch
    between the actual and expected behavior of an application
    in the production phase,
    they call it a Failure.

    Source: ISTQB CTFL Syllabus

    Defect Density
    The number of defects per unit size of a work product.
    Synonyms: fault density

    Purpose of tracking defect density
    1. Defect density is used
      to determine which areas of the software
      have the highest number (density) of defects.
    2. This information may be used to
      1. re-evaluate risk priorities
      2. re-allocate testing resources
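    Defect density is a simple ratio, usually normalized per thousand lines of code (KLOC). A minimal sketch with made-up module data, showing how it highlights defect clustering:

```python
# Minimal sketch: defect density per KLOC for several modules,
# used to spot defect clustering. All numbers are illustrative.
modules = {
    # module: (defects found, size in lines of code)
    "auth":     (4, 2000),
    "checkout": (30, 5000),
    "reports":  (2, 8000),
}

def defect_density_per_kloc(defects, loc):
    """Defects per 1000 lines of code."""
    return defects / (loc / 1000)

densities = {name: defect_density_per_kloc(d, loc)
             for name, (d, loc) in modules.items()}

# The module with the highest density is a candidate for
# re-evaluated risk priority and extra testing resources.
riskiest = max(densities, key=densities.get)
```

    Here the checkout module, despite being mid-sized, has by far the highest density and would attract the extra testing effort.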

    Source: ISTQB CTFL Syllabus

    Testing
    The process of exercising software
    1. to verify that it satisfies specified requirements
    2. and to detect errors.

    The process of analyzing a software item
    to detect the differences between existing and required conditions (that is, bugs),
    and to evaluate the features of the software item (Ref. IEEE Std 829).

    The process of operating a system or component
    under specified conditions,
    observing or recording the results,
    and making an evaluation of some aspect
    of the system or component.

    The process of finding, analyzing and removing the causes of software failures.

    The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

    Software testing is a process of executing a program or application with the intent of finding the software bugs.

    It can also be stated as the process of validating and verifying
    that a software program or application or product:

    1. Meets the business and technical requirements
      that guided its design and development
    2. Works as expected

    Debugging vs testing

    1. Dynamic testing shows failures that are caused by defects.
    2. Debugging – the development activity that finds, analyzes and removes the cause of the failure.
    3. Debugging (locating and fixing a defect) is a development activity, not a testing activity.

    Source: ISTQB CTFL Syllabus


    Testability
    The degree to which a system or component facilitates

    1. the establishment of test criteria
    2. and the performance of tests

    to determine whether those criteria have been met.

    Validation
    Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

    Validation is the process (at the end of the development process) of checking whether the specification captures the customer’s needs:
    the software should do what the user really requires.

    1. methods used: testing – smoke, regression, functional,
      systems and UAT
    2. done by executing the software: tests are run

    Verification
    Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
    The software should conform to its specification.

    Verification is the process of checking that the software meets the specification.

    1. methods used: inspections, reviews, walkthroughs etc.
    2. done without executing software: no tests are run

    Validation vs. Verification
    Verification and validation (and early test design) can be carried out during the development of the software work products.
    Validation vs. Verification

    Validation vs. Verification
    Source: ISTQB CTFL Syllabus

    Severity and Priority need to be defined
    in the standards documents
    to ensure consistent use and interpretation, e.g.:

    1. Severity – The potential impact of the defect on the system
      (determined by the technical/functional impact of the defect)

      It indicates the level of threat that a bug poses to the system.

      Realizing the Severity of a bug
      is critical from a risk assessment
      and management point of view.

      • (Mission) Critical
        The scope and impact of the defect is large
        Application will not function or system fails
      • Major
        Severe problems but possible to work around
        E.g., error in the implementation
      • Minor
        Does not impact the functionality or usability of the process
        but is not according to requirements/design specifications

        E.g., non-compliance with standards and templates, spelling mistakes

    2. Priority – Urgency/ order in which the incidents are to be addressed
      (Is defect likely to happen soon?)

      Priority is how quickly
      a bug should be fixed
      and eradicated from the application.

      • Immediate
        Must be fixed as soon as possible
      • Delayed
        System is usable, but the incident must be fixed prior to the next level of test or to shipment
      • Deferred
        Defect can be left in if necessary due to time/cost
    Bug Severity vs Priority
    1. Bug Severity is the degree of impact that a defect has on the system;
      whereas, Bug Priority is the order in which defects should be addressed.
    2. Severity is related to standards and functionality of the system;
      whereas, Priority is related to scheduling.
    3. Depending upon the impact of the bug,
      Bug Severity examines whether the impact is serious or not.
      On the other hand, Bug Priority examines whether the bug should be resolved soon or can be delayed.
    4. Bug Severity is operated by functionality.
      On the other hand, bug priority is operated by business value.
    5. In the case of bug severity, the level of severity is less likely to change.
      However, bug priority may differ.
    6. Bug severity is assessed from a technical perspective of the web-application workflow.
      On the other hand, bug priority is assessed from a user-experience perspective on web-application usage.

    An important point in differentiating between severity and priority
    is to understand who decides each of them,
    i.e., who plays the major role for each of these two terms:

    Bug Severity is determined by Quality Analyst, Test engineer;
    Bug Priority is determined by the Product Manager or Client.
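    The severity/priority distinction can be made concrete with a small triage sketch. The enums and bug data below are illustrative assumptions; note the classic case of a high-priority/low-severity bug (a misspelled logo) outranking a high-severity/low-priority one (a crash in a rarely used path):

```python
from enum import IntEnum

# Illustrative sketch: severity is set by the tester/QA,
# priority by the product manager; fixing order is driven
# by priority first, with severity breaking ties.
class Severity(IntEnum):
    MINOR = 1
    MAJOR = 2
    CRITICAL = 3

class Priority(IntEnum):
    DEFERRED = 1
    DELAYED = 2
    IMMEDIATE = 3

bugs = [
    ("logo misspelled on homepage", Severity.MINOR, Priority.IMMEDIATE),
    ("crash in rarely used legacy export", Severity.CRITICAL, Priority.DEFERRED),
    ("rounding error in invoice totals", Severity.MAJOR, Priority.DELAYED),
]

# Highest priority first; severity breaks ties.
fix_order = sorted(bugs, key=lambda b: (b[2], b[1]), reverse=True)
```

    Sorting by priority (business value) rather than severity (technical impact) is exactly the distinction the comparison above draws.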

    Test-Driven Development (TDD)
    Testing methodology associated with Agile Programming,
    Test-first design is one of the mandatory practices of Extreme Programming (XP).

    It requires that programmers
    do not write any production code
    until they have first written a unit test.

    Test cases are developed,
    and often automated,
    before the software is developed
    to run those test cases.

    Every chunk of code is covered by unit tests,
    which must all pass all the time,
    in an effort to eliminate unit-level
    and regression bugs during development.

    Practitioners of TDD write a lot of tests,
    i.e. often roughly as many lines of test code
    as of production code.

    Synonyms: Test-First Approach
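    A minimal TDD sketch: the unit test below is written first (and initially fails), and only then is the production function implemented to make it pass. The function and its behavior are my own illustrative example, not a prescribed exercise:

```python
import unittest

# Production code, written only after the tests below existed
# and initially failed (the test-first discipline of TDD/XP).
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests written first; they drive the implementation.
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

    In a real TDD cycle these tests run automatically on every change, which is what keeps unit-level and regression bugs out during development.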

    Data Driven Testing

    Testing in which the action of a test case
    is parameterized by externally defined data values,
    maintained as a file or spreadsheet.

    A common technique in Automated Testing.

    1. A data-driven testing approach
      separates out the test inputs (the data),
      usually into a spreadsheet,
      and uses a more generic test script
      that can read the input data and
      execute the same test script with different data.
    2. Testers who are not familiar with the scripting language
      can then create the test data for these predefined scripts.
    3. There are other techniques employed in data-driven techniques,
      where instead of hard-coded data combinations placed in a spreadsheet,
      data is generated using algorithms
      based on configurable parameters at run time
      and supplied to the application.

      For example, a tool may use an algorithm,
      which generates a random user ID,
      and for repeatability in pattern,
      a seed is employed for controlling randomness.
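    The data-driven approach above can be sketched in a few lines: one generic script, inputs kept in external tabular data. Here an in-memory CSV stands in for the spreadsheet, and a stub `login` function stands in for the system under test (both are assumptions for illustration):

```python
import csv
import io

# The "spreadsheet": test inputs and expected results,
# separated from the generic test script.
test_data = io.StringIO(
    "username,password,expected\n"
    "alice,correct-horse,success\n"
    "alice,wrong,failure\n"
    ",anything,failure\n"
)

def login(username, password):
    """Stand-in for the system under test (assumed behaviour)."""
    if username == "alice" and password == "correct-horse":
        return "success"
    return "failure"

# One generic script executed once per data row.
results = []
for row in csv.DictReader(test_data):
    actual = login(row["username"], row["password"])
    results.append(actual == row["expected"])
```

    Adding a new test case means adding a row of data, not writing new script code, which is what lets non-scripting testers contribute test data.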

    Source: ISTQB CTFL Syllabus

    1. In a keyword-driven testing approach,
      the spreadsheet contains

      1. keywords
        describing the actions to be taken (also called action words),
      2. and test data.
    2. Testers
      (even if they are not familiar with the scripting language)
      can then define tests using the keywords,
      which can be tailored to the application being tested.
    3. Technical expertise in the scripting language
      is needed for all approaches
      (either by testers or by specialists in test automation).
    4. Regardless of the scripting technique used,
      the expected results for each test
      need to be stored for later comparison.
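    A keyword-driven sketch, assuming a tiny shopping flow: each row of the "spreadsheet" pairs an action word with test data, and a small interpreter maps keywords to implementation functions (all names below are illustrative):

```python
# Keyword-driven sketch: the table pairs action words with test
# data; a small interpreter dispatches keywords to functions
# written by automation specialists.
state = {"logged_in": False, "cart": []}

def do_login(user):
    state["logged_in"] = (user == "alice")

def do_add_to_cart(item):
    state["cart"].append(item)

def do_verify_cart_size(expected):
    assert len(state["cart"]) == int(expected), "cart size mismatch"

KEYWORDS = {
    "login": do_login,
    "add_to_cart": do_add_to_cart,
    "verify_cart_size": do_verify_cart_size,
}

# The "spreadsheet": keyword + test data per row, written by
# testers who need not know the scripting language.
table = [
    ("login", "alice"),
    ("add_to_cart", "book"),
    ("add_to_cart", "pen"),
    ("verify_cart_size", "2"),
]

for keyword, data in table:
    KEYWORDS[keyword](data)
```

    Testers define new tests by composing keyword rows; only the keyword implementations require scripting expertise.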

    Source: ISTQB CTFL Syllabus

    Test Environment
    An execution environment configured for testing.

    An environment containing

    1. hardware,
    2. OS,
    3. network topology,
    4. configuration of the product under test,
    5. instrumentation,
    6. simulators,
    7. any other software tools/support elements
      with which the software
      interacts when under test
      including stubs and test drivers

    needed to conduct a test.

    Synonyms: test bed, test rig

    The Test Plan for a project should have enumerated the test bed(s) to be used.

    Test Oracle
    A source to determine expected results
    to compare with the actual result
    of the software under test.

    An oracle may be

    1. the existing system (for a benchmark),
    2. other software,
    3. a user manual,
    4. or an individual’s specialized knowledge,

    but should not be the code.

    The oracle assumption is that the tester can routinely identify the correct outcome of a test.

    Test Data
    Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases.

    Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.

    Source: ISTQB CTFL Syllabus

    Entry Criteria
    Entry criteria define

    1. when to start testing such as at the beginning of a test level
    2. or when a set of tests is ready for execution.

    Typically entry criteria may cover the following:

    1. Test environment availability and readiness
    2. Test tool readiness in the test environment
    3. Testable code availability
    4. Test data availability

    Source: ISTQB CTFL Syllabus

    Exit Criteria
    The set of conditions for officially completing a defined task.


    Synonyms:
    1. completion criteria,
    2. test completion criteria,
    3. definition of done

    Exit criteria define
    when to stop testing such as

    1. at the end of a test level
    2. or when a set of tests has achieved a specific goal.

    The set of generic and specific conditions,
    agreed upon with the stakeholders
    for permitting a process to be officially completed.

    The purpose of exit criteria
    is to prevent a task from being considered completed
    when there are still outstanding parts of the task
    which have not been finished.

    Exit criteria are used to report against
    and to plan when to stop testing.

    Typically exit criteria may cover the following:

    1. Thoroughness measures,
      • such as coverage of code,
      • functionality
      • or risk
    2. Estimates of defect density or reliability measures
    3. Test Cost
    4. Residual risks,
      • e.g., defects not fixed (unresolved defects)
      • or lack of test coverage in certain areas
    5. Schedules
      such as those based on time to market
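    Evaluating exit criteria can be sketched as a checklist over collected test metrics. The metric names and thresholds below are illustrative assumptions, not standard values:

```python
# Sketch: evaluating exit criteria against collected test metrics.
# Criteria names and thresholds are illustrative assumptions.
exit_criteria = {
    "statement_coverage":        lambda m: m["statement_coverage"] >= 0.80,
    "no_open_critical_defects":  lambda m: m["open_critical_defects"] == 0,
    "all_planned_tests_run":     lambda m: m["tests_run"] >= m["tests_planned"],
}

# Metrics gathered at the end of a test level (made-up numbers).
metrics = {
    "statement_coverage": 0.86,
    "open_critical_defects": 1,
    "tests_run": 412,
    "tests_planned": 412,
}

unmet = [name for name, check in exit_criteria.items() if not check(metrics)]
can_stop_testing = not unmet
```

    Here the one open critical defect blocks completion, which is exactly the purpose of exit criteria: the task is not considered done while outstanding parts remain.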

    Source: ISTQB CTFL Syllabus

    Testware
    Work products produced during the test process for use in

    1. planning,
    2. designing,
    3. executing,
    4. evaluating
    5. and reporting on testing.

    Artifacts produced during the test process required to

    1. plan,
    2. design,
    3. and execute tests,

    such as

    1. documentation,
    2. scripts,
    3. inputs,
    4. expected results,
    5. set-up and clear-up procedures,
    6. files,
    7. databases,
    8. environment,
    9. and any additional software or utilities used in testing.

    Source: ISTQB CTFL Syllabus

    Commercial off-the-shelf (COTS)
    A software product that is developed for the general market,
    i.e. for a large number of customers,
    and that is delivered to many customers in identical format.

    Synonyms: off-the-shelf software

    Stub
    A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it.
    It replaces a called component.

    E.g., suppose you have three different modules: Login, Home, and User.
    Login is ready for test, but the two minor modules Home and User,
    which are called by the Login module, are not ready yet for testing.
    In this case, we write a piece of dummy code that simulates the called methods of Home and User.

    Driver

    A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
    Synonyms: test driver

    E.g., the User and Home modules are ready to test, but the Login module is not.
    We need to check functionality that requires Login (not developed yet).
    Since Home and User depend on values from the Login module, we write a dummy piece of code that simulates the Login module.

    Stub vs. Driver

    1. Stubs simulate a called component used in top-down approach.
    2. Drivers simulate a caller unit used in bottom up approach.
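    The Login/Home/User example above can be sketched in code. The module names and canned values are hypothetical, chosen only to show a stub standing in for a callee and a driver standing in for a caller:

```python
# Sketch of the Login/Home/User example: a stub replaces a called
# component, a driver replaces a caller. Names are illustrative.

def home_stub(user_id):
    """Stub: dummy replacement for the not-yet-ready Home module
    (the callee, used in top-down integration)."""
    return {"user_id": user_id, "widgets": []}   # canned response

def login(username, home_module):
    """Module under test: calls the Home module after a login."""
    if username != "alice":
        return None
    return home_module(user_id=42)

def login_driver():
    """Driver: dummy caller exercising Login before its real
    callers exist (bottom-up integration)."""
    return login("alice", home_module=home_stub)

page = login_driver()
```

    In real projects the stub would usually come from a test-double library or hand-written fixtures, but the control relationships are the same.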


    Simulator
    A device, computer program, or system that behaves or
    operates like a given system
    when provided with a set of controlled inputs.

    A simulator mimics the activity of something that it is simulating.

    Emulator
    A device, computer program, or system
    that accepts the same inputs
    and produces the same outputs
    as a given system.

    The internal state of the emulation mechanism does not have to accurately reflect the internal state of the target which it is emulating.

    Simulation, on the other hand, involves modeling the underlying state of the target.

    The end result of a good simulation is that the simulation model will emulate the target which it is simulating.

    MAME is an arcade game emulator.
    There’s no need to model the arcade machine or a terminal in detail to get the desired emulated behavior.

    Flight Simulator is a simulator;
    It models as much as possible every detail of the target to represent what the target does in reality.

    The goal of an emulation is to be able to substitute for the object it is emulating. That’s an important point.
    A simulation’s focus is more on the modeling of the internal state of the target.

    If a flight-simulator could transport you from A to B then it would be a flight-emulator.

    An emulator can replace the original for real use.
    A Virtual PC emulates a PC.

    A simulator is a model for study and analysis.

    An emulator will always have to operate close to real-time. For a simulator that is not always the case. A geological simulation could do 1000 years/second or more.

    In testing, a simulation may run far slower than real time.

    Source: stackoverflow.com

    Cyclomatic Complexity
    software metric used to indicate the complexity of a program.

    Synonyms: Also known as McCabe complexity

    It is a quantitative measure
    of the number of linearly independent paths
    through a program’s source code.

    A measure of the logical complexity of an algorithm.

    1. Used in white-box testing.
    2. Can be calculated with respect to
      1. functions,
      2. modules,
      3. methods
      4. or classes

      within a program.

    3. Measures independent paths through program source code.
    4. Independent path
      is a path that has at least one edge
      which has not been traversed before
      in any other paths.
    5. Based on a control flow representation of the program.
    6. Control flow depicts a program as a graph
      which consists of Nodes and Edges.
    7. Nodes represent processing tasks
      while edges represent control flow between the nodes.
    8. The code complexity of the program can be defined using the formula V(G) = E – N + 2, where E is the number of edges and N is the number of nodes in the control flow graph.

    Nodes and Edges in the Graph of the Control Flow Diagram
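    The formula V(G) = E – N + 2 can be checked on a small, hand-built control-flow graph. The graph below is my own illustration, modeling one if/else followed by one while loop (so we expect complexity 3: one per decision, plus one):

```python
# Sketch: V(G) = E - N + 2 computed from a small control-flow
# graph modeling "if (a) {...} else {...}" followed by a while loop.
edges = [
    ("entry", "if"),
    ("if", "then"), ("if", "else"),      # the two branches
    ("then", "join"), ("else", "join"),
    ("join", "while"),
    ("while", "body"), ("body", "while"),  # loop back-edge
    ("while", "exit"),
]
# Nodes are everything mentioned by any edge.
nodes = {n for edge in edges for n in edge}

def cyclomatic_complexity(edge_list, node_set):
    """McCabe's V(G) = E - N + 2 for a single connected component."""
    return len(edge_list) - len(node_set) + 2

v_g = cyclomatic_complexity(edges, nodes)
```

    With 9 edges and 8 nodes, V(G) = 9 − 8 + 2 = 3, matching the rule of thumb "number of decision points plus one".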

    Source: Wikipedia

    Application Binary Interface (ABI)
    A specification defining requirements
    for portability of applications
    in binary forms
    across different system platforms and environments.

    Test Script
    Commonly used to refer to
    the instructions for a particular test
    that will be carried out by an automated test tool.

    Automated Software Quality (ASQ)
    The use of software tools,
    such as automated testing tools,
    to improve software quality.

    Backus-Naur Form (BNF)
    A metalanguage
    used to formally describe
    the syntax of a language.

    Basis Path Testing
    A white box test case design technique
    that uses the algorithmic flow of the program
    to design tests.

    Basis Set
    The set of tests
    derived using
    basis path testing.

    Baseline
    The point
    at which some deliverable
    produced during the software engineering process
    is put under formal change control.

    The Capability Maturity Model for Software (CMM or SW-CMM)
    is a model for

    1. judging the maturity
      of the software processes of an organization
    2. and for identifying the key practices
      required to increase the maturity of these processes.

    Source: pavantestingtools.com

    Computer Aided Software Testing (CAST)

    Source: pavantestingtools.com

    Cause-Effect Graph
    A graphical representation
    of inputs and their associated output effects
    which can be used to design test cases.

    Source: pavantestingtools.com

    Code Complete
    Phase of development
    where functionality is implemented in entirety;
    bug fixes are all that are left.

    All functions found in the Functional Specifications have been implemented.

    Source: pavantestingtools.com

    Data Dictionary
    A database that contains
    definitions of all data items
    defined during analysis.

    Source: pavantestingtools.com

    A data-flow diagram (DFD)
    A modeling notation that represents a functional decomposition of a system.

    A DFD is a way of representing the flow of data through a process or a system (usually an information system). The DFD also provides information about the outputs and inputs of each entity and of the process itself. A data-flow diagram has no control flow: there are no decision rules and no loops.

    A control-flow diagram (CFD)
    is a diagram to describe the control flow
    of a business process, process or review.

    They are one of the classic business process modeling methodologies.

    A control-flow diagram can consist of a subdivision to show sequential steps, with if-then-else conditions, repetition, and/or case conditions. Suitably annotated geometrical figures are used to represent operations, data, or equipment, and arrows are used to indicate the sequential flow from one to another.

    Source: Wikipedia, pavantestingtools.com

    Data flow
    is concerned with where data is routed
    through a program/system
    and what transformations are applied
    during that journey.

    Control flow
    is concerned with the possible order of operations,
    i.e., with the “precedence constraints” between operations.

    Functional Decomposition
    A technique used during planning, analysis and design;
    creates a functional hierarchy for the software.

    A standard of measurement.

    Software metrics are the statistics
    describing the structure
    or content of a program.

    A metric should be a real objective measurement of something,
    e.g., number of bugs per lines of code.
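
    As a minimal sketch, the defect density metric mentioned above can be computed directly (the module names and counts are made up for illustration):

```python
# Defect density: defects found per 1000 lines of code (KLOC).
# The modules and their counts below are invented for illustration.

def defect_density(defects, lines_of_code):
    return defects / (lines_of_code / 1000)

modules = {"login": (4, 2000), "checkout": (9, 3000)}
for name, (defects, loc) in modules.items():
    print("%s: %.1f defects/KLOC" % (name, defect_density(defects, loc)))
```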

    Race Condition
    A cause of concurrency problems.

    Multiple accesses to a shared resource,
    at least one of which is a write,
    with no mechanism used by either
    to moderate simultaneous access.
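
    A minimal sketch of the definition above, using threads and a shared counter; the Lock is the moderating mechanism, and without it the interleaved read-modify-write updates could be lost:

```python
import threading

# Many threads perform a read-modify-write on a shared counter.
# The Lock serializes the critical section; removing it would allow
# simultaneous access, and updates could be lost (a race condition).

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:              # moderates simultaneous access
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4 threads x 10_000 increments each
```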

    Release Candidate
    A pre-release version,
    which contains the desired functionality of the final version,
    but which needs to be tested for bugs
    (which ideally should be removed before the final version is released).

    Tool Support for Testing

    Tool Support Purpose

    Tool support for testing
    can have one or more of the following purposes
    depending on the context:

    1. Improve the efficiency of test activities
      1. by automating repetitive tasks
      2. or supporting manual test activities
        • like test planning,
        • test design,
        • test reporting and monitoring
    2. Automate activities
      that require significant resources
      when done manually
      (e.g., static testing)
    3. Automate activities
      that cannot be executed manually
      (e.g., large scale performance testing of client-server applications)
    4. Increase reliability of testing
      (e.g., by automating large data comparisons or simulating behavior)

    Test Frameworks

    The term “test frameworks”
    is also frequently used in the industry,
    in at least three meanings:

    1. Reusable and extensible testing libraries
      that can be used to build testing tools
      (called test harnesses as well)
    2. A type of design of test automation (e.g., data-driven, keyword-driven)
    3. Overall process of execution of testing
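
    A minimal sketch of the second meaning, a data-driven design: the test logic is written once and driven by rows of (input, expected) data. The function under test here is a made-up stand-in for any test object:

```python
# Data-driven test design sketch: one test procedure, many data rows.

def normalize_username(raw):        # system under test (illustrative)
    return raw.strip().lower()

test_data = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

results = []
for raw, expected in test_data:
    actual = normalize_username(raw)
    results.append((raw, actual == expected))

print(results)
```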

    Test Tool Classification
    1. There are a number of tools
      that support different aspects of testing.
    2. Tools can be classified based on several criteria such as
      1. purpose,
      2. commercial / free / open-source / shareware,
      3. technology used
      4. testing activities that they support
        (classification below).
    3. Some tools clearly support one activity;
      others may support more than one activity,
      but are classified under the activity
      with which they are most closely associated.
    4. Tools from a single provider,
      especially those that have been designed to work together,
      may be bundled into one package.
    5. Some types of test tools can be intrusive,
      which means that they can cause the probe effect
      (affect the actual outcome of the test).
    Test Tools Types

    Tools are classified below according to the testing activities that they support.

    Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with “(D)” in the list below.
    Types of Test Tools ISTQB

    Test Management Tools:
    Requirements Management Tools, Incident Management Tools, Configuration Management Tools

    Static Testing Tools:
    Review Tools, Static Analysis Tools (D), Modeling Tools (D)

    Test Specification Tools:
    Test Design Tools, Test Data Preparation/Generation Tools

    Dynamic Testing Tools:
    Test Execution Tools, Test Harness (D), Unit Test Framework Tools (D), Test Comparators, Coverage Measurement Tools (D), Security Testing Tools

    Performance and Monitoring Tools:
    Dynamic Analysis Tools (D), Performance Testing Tools, Load Testing Tools, Stress Testing Tools, Monitoring Tools

    Specific Testing Needs:
    Data Quality Assessment Tools, Usability Testing Tools

    Test Management Tools

    Management tools

    1. apply to all test activities
      over the entire software life cycle
    2. need to interface with other tools or spreadsheets
      in order to produce useful information
      in a format that fits the needs of the organization
    3. help in managing the testing process
      by providing interfaces for:

      1. managing test data,
      2. executing tests,
      3. tracking defects/incidents,
      4. managing requirements,
      5. managing test results,
      6. reporting on the test objects,
      7. quantitative analysis,
      8. traceability of tests:
        tracing the test objects
        to requirement specifications
    4. might have an independent version control capability
      or an interface to an external one.
    Tools Description Example
    Requirements Management Tools
    1. store requirement statements,
    2. store the attributes for the requirements
      (including priority),
    3. provide unique identifiers
      and support tracing the requirements
      to individual tests.
    4. These tools may also help with
      identifying inconsistent or missing requirements.
      E.g. update/change in requirements,
      checking ambiguous words e.g. “might”, “to be decided”
    Incident Management Tools
    Also known as
    1. defect-tracking
    2. defect management
    3. bug-tracking
    4. bug-management tools
    1. store information about the attributes of incidents (including attachments)
    2. manage (including prioritize, assign actions to people) incident reports, i.e., defects, failures,
    3. change requests or perceived problems and anomalies,
    4. help in managing the life cycle of incidents (status: open, assigned, closed),
    5. optionally with support for statistical analysis.
    Configuration Management Tools Although not strictly test tools, these are

    • necessary for storage and version management
      of testware and related software
    • especially when configuring more than one hardware/software environment
    • in terms of operating system versions, compilers, browsers, etc.

    Configuration management tools:

    1. Help to keep large teams in sync
    2. Store information about versions and builds of the software and testware
    3. Used for build and release management

    Static Testing Tools

    Static testing = no execution of code

    Static testing tools
    provide a cost effective way
    of finding more defects
    at an earlier stage
    in the development process.

    1. Storing and sorting review comments
    2. Communicating comments to relevant people
    3. Monitoring the review status
    4. Repository for rules, procedures and checklists during reviews
    Test Tool Description Example
    Review Tools
    1. assist with review processes, checklists, review guidelines
    2. used to store and communicate review comments
    3. and report on defects and effort
    4. They can be of further help
      by providing aid for online reviews
      for large or geographically dispersed teams.
    Static Analysis Tools (D) These tools help developers (mostly) and testers

    1. to find defects
      prior to dynamic testing
      by providing support
      for enforcing coding standards
      (e.g. naming conventions, secure coding),
      analysis of structures and dependencies.

      E.g., If you are looking for a tool
      that will verify if the code complies with coding standards,
      you are seeking a static analysis tool.

    2. They can also help in planning or risk analysis
      by providing metrics for the code
      (e.g., cyclomatic complexity – determines if testing is enough
      or more testing should be done).
    3. Compilers do offer static analysis features
    4. Static analysis tools
      applied to source code
      can enforce coding standards,
      but if applied to existing code
      may generate a large quantity of messages.
    5. Warning messages do not stop the code
      from being translated into an executable program,
      but ideally should be addressed
      so that maintenance of the code is easier in the future
    6. A gradual implementation of the analysis tool
      with initial filters to exclude some messages
      is an effective approach.
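
    As a toy illustration of static analysis (not a real tool), the code metric mentioned above — cyclomatic complexity — can be approximated by counting decision points in the source without ever executing it:

```python
import ast

# Toy static analysis: parse the source, count decision points per
# function, and report a rough cyclomatic complexity (decisions + 1).
# Real static analysis tools do far more (standards, dependencies, ...).

DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
             ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    report = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISIONS)
                            for n in ast.walk(node))
            report[node.name] = decisions + 1
    return report

code = """
def f(x):
    if x > 0:
        for i in range(x):
            x -= 1
    return x
"""
print(cyclomatic_complexity(code))  # one 'if' + one 'for' -> complexity 3
```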
    Modeling Tools (D) These tools

    1. Mostly used by developers
      to validate software models
      (e.g., physical data model (PDM) for a relational database),
      by enumerating inconsistencies
      and finding defects
    2. can often aid
      in generating test cases
      based on the model.
    3. Prioritize areas of the model for testing.

    Test Tool Description Example
    Test Design Tools These tools are used to generate

    • test inputs
    • or executable tests
    • and/or test oracles

    from:

    • requirements,
    • graphical user interfaces,
    • design models (state, data or object)
    • or code.

    E.g. a screen scraper for a GUI (check that all links are present and all buttons are clickable).

    They help generate expected results if an oracle is available to the tool.
    Test Data Preparation/ Generation Tools Test data preparation tools manipulate
    • databases,
    • files
    • or data transmissions

    to set up test data
    to be used during the execution of tests
    and to ensure security through data anonymization.

    1. E.g. extract selected data from files or databases
    2. Construct large number of similar records
      (e.g. generate email ID for female users)
    3. Generate new records with some guidelines
      (sign up data for 100 users, e.g. last and first name etc)
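
    A minimal sketch of points 2 and 3 above — constructing a large number of similar, anonymized sign-up records. The names, domain, and field layout are invented; a fixed seed keeps the generated data reproducible between runs:

```python
import random

# Toy test data generator: build n similar, anonymized user records.

FIRST = ["alice", "bob", "carol", "dave"]
LAST = ["smith", "jones", "lee", "patel"]

def generate_users(n, seed=42):
    rng = random.Random(seed)       # fixed seed -> reproducible data
    users = []
    for i in range(n):
        first, last = rng.choice(FIRST), rng.choice(LAST)
        users.append({
            "id": i,
            "name": "%s %s" % (first.title(), last.title()),
            # anonymous test domain; the id keeps every address unique
            "email": "%s.%s.%d@example.test" % (first, last, i),
        })
    return users

users = generate_users(100)
print(len(users), users[0]["email"])
```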

    Test Tool Description Example
    Test Execution Tools
    Also known as
    1. “capture/playback”,
    2. “capture/replay”
    3. or “record/playback” tools
    1. A test tool that records test input
      as it is sent to the software under test.
      The input cases stored can then be used
      to reproduce the test at a later time.
    2. Most commonly applied to GUI.
    3. Primary purpose of the test execution tool
      is to execute test objects
      using automated test scripts.
    4. A captured script is
      a linear representation
      with specific data and actions
      as part of each script.
    5. These tools enable tests
      to be executed automatically, or semi-automatically,
      using stored inputs and expected outcomes,
      through the use of a scripting language and
      usually provide a test log for each test run (results).
    6. They can also be used to capture/record tests,
      while tests are executed manually
      and usually support scripting languages
      or GUI-based configuration
      for parameterization of data
      and other customization in the tests.
    7. Best used for Regression testing
    8. This type of tool often requires significant effort in order to achieve significant benefits.
    9. Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of automated test scripts.
    10. Test automation scripts
      captured using a capture/replay tool
      tend to be unstable and easily broken
      if changes to the system or unexpected events occur.
    Test Harness/ Test Driver
    Unit Test Framework Tools (D)
    1. Mostly used by developers
    2. A unit test harness or framework
      is a program or test tool used to execute tests.

      It facilitates the testing
      of components or parts of a system
      by simulating the environment
      in which that test object will run,
      through the provision of mock objects
      (stubs or drivers).

    3. Recording the pass/fail results of each test (framework tools)
    4. Support debugging (framework tools)
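
    A minimal sketch of the stub/driver idea using Python's unittest.mock: the component under test depends on a payment gateway, and a mock object simulates that environment so the component can be tested in isolation (the checkout logic and gateway API are invented):

```python
from unittest.mock import Mock

def checkout(cart_total, gateway):
    """Component under test: charges the gateway for a non-empty cart."""
    if cart_total <= 0:
        return "empty cart"
    if gateway.charge(cart_total):   # real dependency, replaced by a stub
        return "paid"
    return "declined"

gateway_stub = Mock()                 # simulates the environment
gateway_stub.charge.return_value = True

assert checkout(25.0, gateway_stub) == "paid"
gateway_stub.charge.assert_called_once_with(25.0)
print("stubbed checkout test passed")
```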
    Test Comparators Test comparators

    1. determine differences between
      • files,
      • databases
      • or test results.
    2. help to automate aspects of the comparison.
    3. save a lot of time when comparing large files
    4. Test execution tools typically include dynamic comparators
      (i.e. when execution is going on),

      but post-execution comparison
      may be done by a separate comparison tool.

    5. A test comparator may use a test oracle,
      especially if it is automated.
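
    A minimal post-execution comparator sketch using the standard library's difflib; the expected/actual outputs are invented:

```python
import difflib

# Post-execution test comparator: line-by-line comparison of expected
# vs. actual output, reporting only the differences.

def compare(expected_lines, actual_lines):
    diff = list(difflib.unified_diff(expected_lines, actual_lines,
                                     "expected", "actual", lineterm=""))
    return diff   # an empty list means the outputs match

expected = ["order=123", "status=paid", "total=25.00"]
actual   = ["order=123", "status=declined", "total=25.00"]

for line in compare(expected, actual):
    print(line)
```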
    Coverage Measurement Tools (D)
    1. Coverage Measurement Tools,
      through intrusive or non-intrusive means,
      measure the percentage
      of specific types of code structures
      that have been exercised

      (e.g., statements, branches or decisions,
      and module or function calls)

      by a set of tests.

    2. Mostly used by developers
    3. Identifying coverage items (instrumenting the code)
    4. Calculating the percentage of coverage items
    5. Generating stubs and drivers
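
    As a toy illustration (real coverage tools are far more complete), line coverage can be measured intrusively with a trace hook; note that the instrumentation itself slows execution, i.e. the probe effect mentioned earlier:

```python
import sys

# Intrusive coverage measurement sketch: a trace function records which
# lines of the function under test actually execute.

def measure_lines(func, *args):
    code = func.__code__
    hit = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            # store line numbers relative to the 'def' line
            hit.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hit

def demo(x):
    if x > 0:
        x += 1      # exercised only for positive x
    else:
        x -= 1      # exercised only for non-positive x
    return x

print(sorted(measure_lines(demo, 5)))
```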
    Security Testing Tools
    1. Evaluate the security characteristics of software.
    2. This includes evaluating the ability of the software to protect
      1. data confidentiality,
      2. integrity,
      3. authentication,
      4. authorization,
      5. availability,
      6. and non-repudiation.
    3. Security tools are mostly focused on a particular technology, platform, and purpose.
      1. Identifying viruses
      2. Probing for open ports or other externally visible points of attack
      3. Identifying weaknesses in password files and passwords

    Test Tool Description Example
    Dynamic Analysis Tools (D) Dynamic analysis tools
    1. find defects that are evident
      only when software is executing,
      such as time dependencies or memory leaks.
    2. typically used in
      1. component
      2. and component integration testing,
      3. and when testing middleware.
    1. Detecting memory leaks
      (e.g. memory used for the game app is not freed after quitting)
    2. Identifying pointer errors, such as null pointer dereferences
      (e.g. using a variable that was never assigned)
    3. Dead links present in the code
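
    A minimal sketch of hunting a memory leak dynamically with the standard library's tracemalloc: two snapshots taken while the code runs are compared to see where allocations grew (the leaky cache and request handler are invented):

```python
import tracemalloc

leaky_cache = []   # illustrative "leak": grows and is never freed

def handle_request(i):
    leaky_cache.append(("payload-%d" % i) * 100)   # retained forever

tracemalloc.start()
before = tracemalloc.take_snapshot()
for i in range(1000):
    handle_request(i)
after = tracemalloc.take_snapshot()
tracemalloc.stop()

# Net growth in allocated memory between the two snapshots:
growth = sum(stat.size_diff for stat in after.compare_to(before, "lineno"))
print("net allocation growth: %d bytes" % growth)
```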
    Performance Testing Tools monitor and report on how a system behaves
    under a variety of simulated usage conditions
    in terms of number of

    1. concurrent users,
    2. their ramp-up pattern,
    3. frequency and relative percentage of transactions.
    Load Testing Tool Load testing
    is done by steadily increasing the load
    on the application under test
    until it reaches a threshold limit.

    The simulation of load is achieved
    by means of creating virtual users
    carrying out a selected set of transactions,
    spread across various test machines
    commonly known as load generators.
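
    A minimal load-generator sketch of the idea above: virtual users (threads here) each carry out a set of transactions against the system under test and record response times. handle_transaction is a made-up stand-in for a real request to the application:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_transaction():
    time.sleep(0.01)            # simulated server-side work
    return "ok"

def virtual_user(transactions):
    timings = []
    for _ in range(transactions):
        start = time.perf_counter()
        handle_transaction()
        timings.append(time.perf_counter() - start)
    return timings

# 10 concurrent virtual users, 5 transactions each
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(virtual_user, 5) for _ in range(10)]
    results = [f.result() for f in futures]

all_times = [t for user in results for t in user]
print("transactions:", len(all_times),
      "avg response: %.4fs" % (sum(all_times) / len(all_times)))
```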

    Stress Testing Tools Stress testing

    1. Is done to evaluate the application’s behavior
      beyond normal or peak load conditions
    2. Generating a load on the system to be tested
    3. Measuring average response times
    4. Producing graphs or charts of responses over time
    Monitoring Tools Monitoring tools
    continuously analyze, verify and report
    on usage of specific system resources,
    and give warnings of possible service problems.

    1. Continuously keep track
      of the status of the system in use

      1. servers,
      2. networks,
      3. databases,
      4. security,
      5. performance,
      6. website and internet usage,
      7. and applications
    2. Identifying problems
      and sending a message to the network administrator
    3. Monitoring the number of users on a network
    4. Monitoring network (data) traffic

    Data Quality Assessment

    1. Data is at the center of some projects
      such as data conversion/migration projects and applications
      like data warehouses
      and its attributes can vary
      in terms of criticality and volume.
    2. In such contexts, tools need to be employed
      for data quality assessment
      to review and verify
      the data conversion and migration rules
      to ensure that the processed data is
      1. correct,
      2. complete
      3. and complies with a predefined context-specific standard

    Usability Testing
    Other testing tools exist for usability testing.

    Probe effect
    unintended alteration in system behavior caused by measuring that system.

    If a test tool is causing a probe effect, what does it mean?

    The outcome of the test may be influenced by the tool.

    The probe effect means that the tool has been intrusive
    and may influence the results of the test
    and the way the software works.

    E.g., the actual timing may be different due to the extra instructions that are executed by the tool,
    or you may get a different measure of code coverage.

    In code profiling and performance measurements,
    the delays introduced by insertion or removal of code instrumentation
    may result in a non-functioning application, or unpredictable behavior.

    E.g. performance may be slightly worse when performance testing tools are being used.

    Source: Wikipedia, ISTQB CTFL Syllabus

    Simply purchasing or leasing a tool does not guarantee success with that tool.

    Each type of tool may require additional effort to achieve real and lasting benefits.

    There are potential benefits and opportunities with the use of tools in testing, but there are also risks.

    Benefits
    1. Repetitive work is reduced
      (e.g., running regression tests,
      re-entering the same test data,
      and checking against coding standards)
    2. Greater consistency and repeatability
      (e.g., tests executed by a tool
      in the same order with the same frequency,
      and tests derived from requirements)
    3. Objective assessment
      (e.g., static measures, coverage)
    4. Ease of access to information about tests or testing
      1. e.g., statistics and graphs about test progress,
        incident rates and performance
      2. Visual information is easier to understand
      3. Large data sets cannot be handled easily in documents or spreadsheets
    Risks

    1. Unrealistic expectations for the tool
      (including functionality and ease of use)
    2. Underestimating the time, cost and effort
      for the initial introduction of a tool
      (including training and external expertise
      E.g. skills needed to create good tests and use the tool well)
    3. Underestimating the time and effort
      needed to achieve significant and continuing benefits from the tool
      (including the need for changes in the testing process
      and continuous improvement of the way the tool is used)
    4. Underestimating the effort
      required to maintain the test assets generated by the tool
    5. Over-reliance on the tool
      (replacement for test design
      or use of automated testing
      where manual testing would be better)
    6. Neglecting version control
      of test assets within the tool
    7. Neglecting relationships and interoperability issues
      between critical tools, such as

      1. requirements management tools,
      2. version control tools,
      3. incident management tools,
      4. defect tracking tools
      5. and tools from multiple vendors
    8. Risk of tool vendor
      going out of business,
      retiring the tool,
      or selling the tool to a different vendor
    9. Poor response from vendor
      for support, upgrades, and defect fixes
    10. Risk of suspension of open-source / free tool project
    11. Unforeseen, such as the inability to support a new platform

    Source: ISTQB CTFL Syllabus

    The main considerations in selecting a tool
    for an organization include:

    1. Organization

      Assessment of organizational maturity, strengths and weaknesses

    2. Application

      Whether the tool is compatible with your application,
      the tool should be able to interact with your application.

    3. Test environment
    4. Requirements

      Evaluation against clear requirements and objective criteria

    5. Tool Evaluation

      Features, Scope, ease of use, etc.

      1. Benefits
        Identification of opportunities
        for an improved test process
        supported by tools.

        We need to concentrate
        on the features of the tool
        and how this could be beneficial for our project.

        The additional new features
        and the enhancements of the features
        will also help.

      2. Limitation of the tool
    6. A proof-of-concept (pilot project)

      The tool is introduced on a small scale in the organization,
      using it during the evaluation phase
      to establish whether it performs effectively
      with the software under test
      and within the current infrastructure,
      or to identify changes needed to that infrastructure
      to use the tool effectively.

    7. Vendor
      Evaluation of the vendor
      (including training, support and commercial aspects)
      or service support suppliers in case of non-commercial tools
    8. Training
      Evaluation of training needs
      considering the current test team’s test automation skills

      Identification of internal requirements
      for coaching and mentoring in the use of the tool

    9. Cost

      Estimation of a cost-benefit ratio based on a concrete business case

    Source: ISTQB CTFL Syllabus, pavantestingtools.com

    Introducing the selected tool into an organization
    starts with a pilot project,
    which has the following objectives:

    1. Learn more detail about the tool
    2. Evaluate how the tool fits
      with existing processes and practices,
      and determine what would need to change
    3. Decide on standard ways
      of using, managing, storing and maintaining
      the tool and the test assets
      (e.g., deciding on naming conventions for files and tests,
      creating libraries and defining the modularity of test suites)
    4. Assess whether the benefits will be achieved at reasonable cost

    Source: ISTQB CTFL Syllabus

    Success factors for the deployment of the tool within an organization include:

    1. Rolling out the tool to the rest of the organization incrementally
    2. Adapting and improving processes to fit with the use of the tool
    3. Providing training and coaching/mentoring for new users
    4. Defining usage guidelines
    5. Implementing a way to gather usage information from the actual use
    6. Monitoring tool use and benefits
    7. Providing support for the test team for a given tool
    8. Gathering lessons learned from all teams

    Source: ISTQB CTFL Syllabus

    Test Automation

    Some functionality cannot be tested with an automated tool,
    so we may have to test it manually.

    Therefore manual testing can never be fully replaced.

    Scripts can be written for negative testing as well,
    but it is a tedious task.
    In a real environment, negative testing is usually done manually.

    Typical steps in a test automation effort:

    1. Prepare the automation test plan
    2. Identify the scenario
    3. Record the scenario
    4. Enhance the scripts by inserting checkpoints and conditional loops
    5. Incorporate error handling
    6. Debug the script
    7. Fix the issues
    8. Rerun the script and report the results

    The common problems are:

    1. Maintenance of the old script when there is a feature change or enhancement
    2. The change in technology of the application will affect the old scripts

    5 types of scripting techniques:

    1. Linear
    2. Structured
    3. Shared
    4. Data-driven
    5. Keyword-driven
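
    A minimal sketch of the keyword-driven technique: each test step is a (keyword, arguments) row in a table, and a small runner maps keywords to actions. The toy login "application" and keyword names are invented:

```python
# Keyword-driven scripting sketch: a keyword table drives a tiny runner.

app = {"user": None, "logged_in": False}   # toy application state

def enter_username(name):
    app["user"] = name

def click_login():
    app["logged_in"] = app["user"] == "admin"

def verify_logged_in(expected):
    assert app["logged_in"] == (expected == "yes"), "step failed"

KEYWORDS = {
    "EnterUsername": enter_username,
    "ClickLogin": click_login,
    "VerifyLoggedIn": verify_logged_in,
}

# The keyword table a non-programmer could maintain:
script = [
    ("EnterUsername", "admin"),
    ("ClickLogin",),
    ("VerifyLoggedIn", "yes"),
]

for keyword, *args in script:
    KEYWORDS[keyword](*args)
print("keyword script passed")
```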

    1. What are the expected loads on the server
      (e.g., number of hits per unit time?),
      and what kind of performance is required under such loads
      (such as web server response time, database query response times).

      What kinds of tools will be needed for performance testing
      (such as web load testing tools,
      other tools already in house that can be adapted,
      web robot downloading tools, etc.)?

    2. Who is the target audience?
      What kind of browsers will they be using?
      What kind of connection speeds will they be using?
      Are they intra-organization
      (thus with likely high connection speeds and similar browsers)
      or Internet-wide
      (thus with a wide variety of connection speeds and browser types)?
    3. What kind of performance is expected on the client side
      (e.g., how fast should pages appear,
      how fast should animations, applets, etc. load and run)?
    4. Will down time for server
      and content maintenance/upgrades be allowed?
      How much?
    5. How reliable are the site’s Internet connections required to be?
      And how does that affect backup system
      or redundant connection requirements and testing?
    6. What processes will be required
      to manage updates to the web site’s content,
      and what are the requirements
      for maintaining, tracking, and controlling
      page content, graphics, links, etc.?
    7. Which HTML specification will be adhered to?
      How strictly?
      What variations will be allowed for targeted browsers?
    8. Will there be any standards or requirements
      for page appearance and/or graphics
      throughout a site or parts of a site?
    9. How will internal and external links
      be validated and updated?
      How often?
    10. Can testing be done on the production system,
      or will a separate test system be required?
    11. How are browser caching,
      variations in browser option settings,
      dial-up connection variabilities,
      and real-world internet ‘traffic congestion’ problems
      to be accounted for in testing?
    12. How extensive or customized
      are the server logging and reporting requirements;
      are they considered an integral part of the system
      and do they require testing?
    13. How are cgi programs, applets, javascripts, ActiveX components, etc.
      to be maintained, tracked, controlled, and tested?
    14. Pages should be 3-5 screens max
      unless content is tightly focused on a single topic.
      If larger, provide internal links within the page.
    15. The page layouts and design elements
      should be consistent throughout a site,
      so that it’s clear to the user
      that they’re still within a site.
    16. Pages should be as browser-independent as possible,
      or pages should be provided or generated
      based on the browser-type.
    17. All pages should have links external to the page;
      there should be no dead-end pages.
    18. The page owner, revision date,
      and a link to a contact person or organization
      should be included on each page.