Top Down Integration Testing

An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

In this approach, testing proceeds from the main module down to the sub-modules. If a sub-module is not yet developed, a temporary program called a stub is used to simulate it.
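The idea can be sketched in code. In this hypothetical example (the module names and the fixed tax value are illustrative, not from the original text), the top-level invoicing module is real, while the undeveloped tax sub-module is replaced by a stub that returns a canned value:

```python
def calculate_tax_stub(amount):
    """Stub: stands in for the not-yet-developed tax sub-module,
    returning a fixed, canned value just sufficient to exercise the caller."""
    return 10.0

def compute_invoice_total(amount, tax_fn):
    """Top-level module under test; delegates tax calculation downward."""
    return amount + tax_fn(amount)

# Test the top-level module against the stub.
total = compute_invoice_total(100.0, calculate_tax_stub)
assert total == 110.0
print("top-down test with stub passed, total =", total)
```

Once the real tax module exists, it can be passed in place of the stub and the same test re-run.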

Advantages:

* Advantageous if major flaws occur toward the top of the program.
* Once the I/O functions are added, representation of test cases is easier.
* An early skeletal program allows demonstrations and boosts morale.


Disadvantages:

* Stub modules must be produced.
* Stub modules are often more complicated than they first appear to be.
* Before the I/O functions are added, representation of test cases in stubs can be difficult.
* Test conditions may be impossible, or very difficult, to create.
* Observation of test output is more difficult.
* Allows one to think that design and testing can be overlapped.
* Induces one to defer completion of the testing of certain modules.





Bottom Up Integration Testing

An approach to integration testing where the lowest level components are tested first then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

In this approach, testing proceeds from the sub-modules up to the main module. If the main module is not yet developed, a temporary program called a driver is used to simulate it.
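This mirrors the stub idea from top-down testing. In the following hypothetical sketch (module names and the 10% tax rule are illustrative assumptions), the low-level tax module is real and a temporary driver stands in for the missing main module, feeding it test inputs:

```python
def calculate_tax(amount):
    """Real low-level module under test: a flat 10% tax (illustrative rule)."""
    return amount / 10

def driver():
    """Driver: simulates the not-yet-developed main module by calling
    the sub-module with a set of test inputs."""
    return [calculate_tax(a) for a in (0.0, 100.0, 250.0)]

results = driver()
assert results == [0.0, 10.0, 25.0]
print("bottom-up test with driver passed:", results)
```

The driver is discarded once the real main module is available to call the sub-module.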


Advantages:

* Advantageous if major flaws occur toward the bottom of the program.
* Test conditions are easier to create.
* Observation of test results is easier.


Disadvantages:

* Driver modules must be produced.
* The program as an entity does not exist until the last module is added.





Branch testing

Branch Testing / Decision Coverage:

This test coverage criterion requires enough test cases that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. That is, every branch (decision) is taken each way, true and false. It helps in validating all the branches in the code, making sure that no branch leads to abnormal behaviour of the application.

Formula:

Branch Coverage = (Number of decision outcomes tested / Total number of decision outcomes) x 100%


A branch is the outcome of a decision, so branch coverage simply measures which decision outcomes have been tested. This sounds great because it takes a more in-depth view of the source code than simple statement coverage, but branch coverage can also leave you wanting more.
Determining the number of branches in a method is easy. Boolean decisions obviously have two outcomes, true and false, whereas switches have one outcome for each case—and don't forget the default case! The total number of decision outcomes in a method is therefore equal to the number of branches that need to be covered plus the entry branch in the method (after all, even methods with straight line code have one branch).
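The coverage formula above can be applied mechanically. This small sketch (the decision names are hypothetical, chosen to echo the exercise below) counts tested outcomes against total outcomes:

```python
# Each decision outcome is a (decision, outcome) pair.
total_outcomes = {("count<=6", True), ("count<=6", False),
                  ("length>=100", True), ("length>=100", False)}

# Suppose our test cases so far have exercised only three of the four.
tested_outcomes = {("count<=6", True), ("count<=6", False),
                   ("length>=100", True)}

# Apply the formula: tested / total x 100%.
coverage = len(tested_outcomes) / len(total_outcomes) * 100
print(f"Branch coverage: {coverage:.0f}%")  # 3 of 4 outcomes -> 75%
```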

Branch Testing Example



/**  Branch Testing Exercise
 *   Create test cases using the branch testing method for this program
 */
        declare Length as integer
        declare Count as integer
        READ Length;
        READ Count;
        WHILE (Count <= 6) LOOP
            IF (Length >= 100) THEN
                Length = Length - 2;
            ELSE
                Length = Count * Length;
            END IF
            Count = Count + 1;
        END LOOP;
        PRINT Length;






Decision        Possible Outcomes   Test Cases
                                    1    2    3    4
Count <= 6      T                   X
                F                             X
Length >= 100   T                   X
                F                        X


Test Cases

Case #   Count   Length   Expected Outcome   Actual Outcome
1        5       101      594
2        5       99       493
3        7       99       99
4
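As a check, the pseudocode can be translated into runnable Python (an illustrative translation, not part of the original exercise) to confirm the expected outcomes in the table:

```python
def compute_length(count, length):
    """Direct translation of the branch testing exercise pseudocode."""
    while count <= 6:
        if length >= 100:
            length = length - 2
        else:
            length = count * length
        count = count + 1
    return length

# Test cases from the table above.
assert compute_length(5, 101) == 594  # 101 -> 99 (T branch), then 6*99 (F branch)
assert compute_length(5, 99) == 493   # 5*99 = 495 (F branch), then 495-2 (T branch)
assert compute_length(7, 99) == 99    # loop condition false immediately
print("all branch test cases pass")
```

Note that cases 1 and 2 between them exercise both outcomes of Length >= 100, and case 3 exercises the false outcome of Count <= 6 on entry.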

Accessibility Testing

What is Accessibility Testing?
Accessibility testing is the practice of making sure that your product is accessibility compliant; there can be many reasons why a product needs to be.
Accessibility testing is a type of systems testing designed to determine whether individuals with disabilities will be able to use the system in question, which could be software, hardware, or some other type of system. Disabilities encompass a wide range of conditions, including learning disabilities as well as difficulties with sight, hearing and movement.



Why accessibility Testing?

Typical accessibility problems can be classified into the following four groups, each with different access difficulties and issues:

  • Visual impairments: such as blindness, low or restricted vision, or color blindness. Users with visual impairments use assistive technology software that reads content aloud. Users with weak vision can also enlarge text through browser settings or the operating system's magnifier.

  • Motor skills: such as the inability to use a keyboard or mouse, or to make fine movements.

  • Hearing impairments: such as reduced or total loss of hearing.

  • Cognitive abilities: such as reading difficulties, dyslexia or memory loss.

Perform Accessibility Testing

The development team can make sure that their product is partially accessibility compliant through code inspection and unit testing. The test team needs to certify that the product is accessibility compliant during the functional testing phase. In most cases, an accessibility checklist is used to certify accessibility compliance. This checklist can contain information on what should be tested, how it should be tested, and the status of the product for different access-related problems.

For accessibility testing to succeed, the test team should plan a separate cycle for accessibility testing. Management should make sure that the test team has information on what to test and that all the tools needed to test accessibility are available.

Typical test cases for accessibility might look similar to the following examples:
  • Make sure that all functions are available via keyboard only (do not use the mouse).
  • Make sure that information is visible when the display is set to a High Contrast mode.
  • Make sure that screen reading tools can read all the text available and that every picture/image has corresponding alternate text associated with it.
  • Make sure that product-defined keyboard actions do not affect accessibility keyboard shortcuts.
  • And many more.
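Some of these checks can be partially automated. As a hedged sketch of the alternate-text check above (the sample HTML is invented for illustration), the Python standard library's html.parser can flag img tags with no alt text:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag lacking a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Company logo">'
             '<img src="chart.png"></p>')
print("images missing alt text:", checker.missing)
```

A check like this complements, but does not replace, manual testing with a real screen reader.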

Web Accessibility Testing Tools

There are many tools in the market to assist you in your accessibility testing. No single tool can certify that your product is accessibility compliant; you will always need more than one tool to check the accessibility compliance of your product. Broadly, tools related to accessibility can be divided into two categories:

Inspectors or web checkers

This category of tool allows a developer or tester to know exactly what information is being provided to an assistive technology. For example, a tool like Inspect Object can be used to see exactly what information is given to the assistive technology.

Assistive Technologies (AT)

This category of tools is what a person with a disability will use. To make sure that a product is accessibility compliant, tools like screen readers and screen magnifiers are used. Testing with an assistive technology has to be performed manually to understand how the AT will interact with the product and documentation.

Acceptance testing

Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.
  • After system testing has been completed and all or most defects corrected, the system is delivered to the user or customer for acceptance testing.
  • Acceptance testing is basically done by the user or customer, although other stakeholders may be involved as well.
  • The goal of acceptance testing is to establish confidence in the system.
  • Acceptance testing is most often focused on validation-type testing.
  • Acceptance testing may occur at more than just a single level, for example:
    1. A Commercial Off-The-Shelf (COTS) software product may be acceptance tested when it is installed or integrated.
    2. Acceptance testing of the usability of a component may be done during component testing.
    3. Acceptance testing of a new functional enhancement may come before system testing.



Types of Acceptance Testing:
  1. User Acceptance Test: The user acceptance test focuses mainly on functionality, thereby validating the fitness-for-use of the system by the business user. The user acceptance test is performed by the users and application managers.
  2. Operational Acceptance Test: The operational acceptance test, also known as the production acceptance test, validates whether the system meets the requirements for operation. In most organizations, the operational acceptance test is performed by system administrators before the system is released. The operational acceptance test may include testing of backup/restore, disaster recovery, maintenance tasks and periodic checks of security vulnerabilities.
  3. Contract Acceptance Testing: It is performed against the contract's acceptance criteria for producing custom-developed software. Acceptance should be formally defined when the contract is agreed.
  4. Compliance Acceptance Testing: Also known as regulation acceptance testing, it is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.

Acceptance Criteria:

Acceptance criteria are defined on the basis of the following attributes:

  • Functional Correctness and Completeness
  • Data Integrity
  • Data Conversion
  • Usability
  • Performance
  • Timeliness
  • Confidentiality and Availability
  • Installability and Upgradability
  • Scalability
  • Documentation 
Acceptance Test Plan:

The acceptance test activities are carried out in phases. First, the basic tests are executed; if the test results are satisfactory, the more complex scenarios are then executed.
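The phased approach can be sketched as a tiny test runner. The check names and the lambdas standing in for real acceptance checks are purely illustrative placeholders:

```python
def run_phase(tests):
    """Run a list of (name, check) pairs; return True only if all pass."""
    for name, check in tests:
        ok = check()
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
        if not ok:
            return False
    return True

# Hypothetical acceptance checks; real ones would drive the actual system.
basic = [("login works", lambda: True),
         ("invoice can be created", lambda: True)]
complex_scenarios = [("month-end batch completes", lambda: True)]

# Complex scenarios run only when the basic phase is satisfactory.
passed_basic = run_phase(basic)
if passed_basic:
    run_phase(complex_scenarios)
else:
    print("basic phase failed; complex scenarios skipped")
```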

The Acceptance test plan has the following attributes:

  • Introduction
  • Acceptance Test Category
  • Operating Environment
  • Test case ID
  • Test Title
  • Test Objective
  • Test Procedure
  • Test Schedule
  • Resources 

Acceptance Test Report:


The Acceptance test Report has the following attributes:

  • Report Identifier
  • Summary of Results
  • Variations
  • Recommendations
  • Summary of To-Do List
  • Approval Decision

Adhoc Testing

Adhoc testing is an informal testing type with an aim to break the system. It is usually an unplanned activity and does not follow any test design techniques to create test cases; in fact, it does not create test cases at all! This testing is primarily performed when the testers' knowledge of the system under test is very high. Testers randomly test the application without any test cases or any business requirement document.

Adhoc testing does not follow any structured way of testing; it is done randomly on any part of the application. The main aim of this testing is to find defects by random checking. Adhoc testing can be supported by the testing technique called Error Guessing. Error guessing can be done by people having enough experience with the system to "guess" the most likely sources of errors.

This testing requires no documentation, planning, or process to be followed. Since this testing aims at finding defects through a random approach, without any documentation, defects will not be mapped to test cases. Hence, it is sometimes very difficult to reproduce defects, as there are no test steps or requirements mapped to them.



Adhoc testing is not structured

 

When to execute Adhoc Testing?
Adhoc testing can be performed when there is limited time to do elaborate testing; usually it is performed after the formal test execution, if time permits. Adhoc testing will be effective only if the tester is knowledgeable about the System Under Test.

Types of adhoc testing 
There are different types of Adhoc testing and they are listed as below:

  1. Buddy Testing: Two buddies, one from the development team and one from the test team, mutually work on identifying defects in the same module. Buddy testing helps the testers develop better test cases, while the development team can also make design changes early. This kind of testing usually happens after unit testing is complete.
  2. Pair Testing: Two testers are assigned the same modules and they share ideas and work on the same systems to find defects. One tester executes the tests while another tester records the notes on their findings.
  3. Monkey Testing: Testing is performed randomly without any test cases in order to break the system.  
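Monkey testing can be sketched in a few lines. In this hypothetical example (parse_age and its validation rule are invented for illustration), random junk input is thrown at a function, and only unexpected exception types count as defects:

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: parse an age field from a form."""
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

random.seed(42)  # fixed seed so the random run is reproducible
crashes = 0
for _ in range(1000):
    chars = string.ascii_letters + string.digits + string.punctuation + " "
    junk = "".join(random.choice(chars) for _ in range(random.randint(0, 8)))
    try:
        parse_age(junk)
    except ValueError:
        pass  # rejecting bad input cleanly is acceptable behaviour
    except Exception:
        crashes += 1  # anything else is a defect worth recording

print("unexpected crashes:", crashes)
```

Fixing the random seed is worth noting: without it, a monkey-testing failure can be very hard to reproduce, which is the general reproducibility problem of adhoc testing mentioned above.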


Best practices of Adhoc testing
The following best practices can ensure effective Adhoc Testing.

Good business knowledge
Testers should have good knowledge of the business and a clear understanding of the requirements. Detailed knowledge of the end-to-end business process will help find defects easily. Experienced testers find more defects as they are better at error guessing.

Test Key Modules
Key business modules should be identified and targeted for adhoc testing. Business-critical modules should be tested first to gain confidence in the quality of the system.

Record Defects


All defects need to be recorded or written in a notepad. Defects must be assigned to developers for fixing. For each valid defect, corresponding test cases must be written and added to the planned test cases.

These defect findings should be treated as lessons learned and should be reflected when we plan test cases for the next system.


Conclusion:


The advantage of Adhoc testing is that it checks for the completeness of testing and finds more defects than planned testing. The defect-catching test cases are added as additional test cases to the planned test cases.

Adhoc testing saves a lot of time as it doesn't require elaborate test planning, documentation and test case design.