Software Testing in Agile



Agile means being able to change direction quickly. Agile software development is a family of software development methodologies built around iterative, incremental delivery.


Agile software development methods:
1. Agile Modeling
2. Agile Unified Process (AUP)
3. Dynamic Systems Development Method (DSDM)
4. Essential Unified Process (EssUP)
5. Extreme Programming (XP)
6. Feature Driven Development (FDD)
7. Open Unified Process (OpenUP)
8. Scrum
9. Velocity tracking


Agile method: Scrum (development)
Scrum is an iterative, incremental process of software development, and one of the most commonly used agile methods.


Work is divided into units of time called iterations, each typically lasting from two to four weeks. Each iteration passes through a full software development cycle: planning, requirements analysis, design, writing unit tests, then coding until the unit tests pass, and finally demonstrating a working product to stakeholders.


The Product Backlog is the master list of all functionality desired in the product. A product or project backlog is a prioritized list of requirements, with a rough size and complexity estimate for each. Hence, the backlog has three components: the requirements themselves, their priority, and a rough size and complexity estimate.


Sprint Backlog: The sprint backlog is the list of tasks that the Scrum team commits to completing in the current sprint. Items on the sprint backlog are drawn from the Product Backlog by the team, based on the priorities set by the Product Owner.
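The three backlog components, and how a sprint backlog is drawn from the prioritized product backlog, can be sketched as a small data structure. This is only an illustrative sketch with hypothetical item names, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    """One requirement with the three backlog components:
    the requirement itself, its priority, and a rough size estimate."""
    requirement: str
    priority: int        # lower number = higher priority
    size_estimate: str   # rough size/complexity, e.g. a T-shirt size

product_backlog = [
    BacklogItem("User login", priority=1, size_estimate="M"),
    BacklogItem("Password reset", priority=2, size_estimate="S"),
    BacklogItem("Export reports", priority=3, size_estimate="L"),
]

# The sprint backlog is drawn from the top of the prioritized product backlog.
sprint_backlog = sorted(product_backlog, key=lambda item: item.priority)[:2]
```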


Daily SCRUM meeting rules: 
1. Hold the daily scrum in the same place at the same time every work day. 
2. All team members are required to attend. 
3. The scrum master begins the meeting with the person to his or her left and proceeds counterclockwise around the room until everyone has reported. 
4. Each team member should answer three questions only: 
a) What have you done since the last daily scrum regarding this project? 
b) What will you do between now and the next daily scrum meeting regarding this project?
c) What obstacles, if any, are slowing your progress on this project?


QC Process followed in small companies

1) For a small project, we opt for ad-hoc testing. Ad-hoc testing is done without test cases, so the tester should have enough knowledge about the project; it is usually done by an experienced QA engineer.

2) If the tester does not know anything about the project, we perform exploratory testing: we explore all the features, and then create and execute test cases.

3) In general, small companies follow this testing process:

a) Requirement stage: Analyze the requirement document, discuss it with the team internally, and ask the client for clarification if required.

b) Test plan: Create a test plan that defines the types of testing to be done, who will test, the duration of testing, the availability of test resources, features to be tested, features not to be tested, etc.

c) Test cases: Test cases (sequences of steps to verify the correct behavior of a piece of functionality) are prepared by reviewing the functional requirements in use cases or by exploring the application.

d) Execution of test cases: Each test case is executed by more than one QC engineer, and on PCs with different configurations, to ensure that the functionality works correctly.

e) Instant bug reporting: If we find a number of bugs, we initially report them in a spreadsheet and send it to the developers so that they can make a proper plan and start work on the fixes.

f) Bug reporting in a bug tracking tool: We then post each bug, with full details, into the bug tracking tool.

g) Explain each bug to the developer, with the exact scenario in which it occurs.

h) Retest each issue once the developer has fixed it.

i) Mark the final status in the bug tracking tool.



Having Other People Test Your Software


Consider a classic spot-the-differences puzzle: in one minute, try to find as many differences between two nearly identical scenes as you can.

After you finish looking, have several friends do the same. What you'll find is that everyone has very different results. The number of differences found, the order that they were found, even which ones were found will be different.

Combine all the lists, throw out the duplicates, and you'll have a far more complete list.

Software testing works exactly the same way. You're likely under a tight schedule and you find as many bugs as possible in the time you have, but someone else can come in, test the same code, and find additional bugs.

It's easy to fall into the trap of wanting to be solely responsible for testing your own piece of the software, but don't do it. There's too much to gain by having others help you out.

Watching how someone else approaches a problem is a great way to learn new testing techniques.
 

Difference Between System Testing and End-to-End Testing

End-to-End Testing:- Similar to system testing, but involves testing the complete application environment, such as interacting with a database, using network communications, and interacting with other hardware, applications, or systems where appropriate.

System Testing:- System testing is the testing of the system as a whole. It is what the user sees and feels about the product you provide.

Bug Life Cycles

Various life cycles that a bug passes through during a software testing process have been described in this article. Take a look.
The time span between when a bug is first found (status: 'New') and when it is successfully closed (status: 'Closed'), rejected, postponed, or deferred is called the bug/error life cycle.

Right from the first time a bug is detected until it is fixed and closed, it is assigned various statuses: New, Open, Assigned, Fixed, Pending Retest, Retest, Reopen, Pending Reject, Rejected, Postponed, Deferred, and Closed.
  




There are seven different life cycles that a bug can pass through:

Cycle I

  • A tester finds a bug and reports it to the Test Lead.
  • The test lead verifies if the bug is valid or not.
  • The test lead finds that the bug is not valid, and the bug is marked as 'Rejected'.
Cycle II
  • A tester finds a bug and reports it to the Test Lead.
  • The test lead verifies if the bug is valid or not.
  • The bug is verified and reported to the development team with status as 'New'.
  • The development leader and team verify if it is a valid bug. The bug is invalid and is marked with a status of 'Pending Reject' before passing it back to the testing team.
  • After getting a satisfactory reply from the development side, the test leader marks the bug as 'Rejected'.
Cycle III
  • A tester finds a bug and reports it to the Test Lead.
  • The test lead verifies if the bug is valid or not.
  • The bug is verified and reported to the development team with status as 'New'.
  • The development leader and team verify if it is a valid bug. The bug is valid and the development leader assigns a developer to it, marking the status as 'Assigned'.
  • The developer solves the problem and marks the bug as 'Fixed' and passes it back to the Development leader.
  • The development leader changes the status of the bug to 'Pending Retest' and passes it on to the testing team for retest.
  • The test leader changes the status of the bug to 'Retest' and passes it to a tester for retest.
  • The tester retests the bug and if it is working fine, the tester closes the bug and marks it as 'Closed'.
Cycle IV
  • A tester finds a bug and reports it to the Test Lead.
  • The test lead verifies if the bug is valid or not.
  • The bug is verified and reported to the development team with status as 'New'.
  • The development leader and team verify if it is a valid bug. If the bug is valid, the development leader assigns a developer for it, marking the status as 'Assigned'.
  • The developer solves the problem and marks the bug as 'Fixed' and passes it back to the Development leader.
  • The development leader changes the status of the bug to 'Pending Retest' and passes it on to the testing team for retest.
  • The test leader changes the status of the bug to 'Retest' and passes it to a tester for retest.
  • The tester retests the bug and the same problem persists, so the tester after confirmation from test leader reopens the bug and marks it with a 'Reopen' status. And then, the bug is passed back to the development team for fixing.
Cycle V
  • A tester finds a bug and reports it to the Test Lead.
  • The test lead verifies if the bug is valid or not.
  • The bug is verified and reported to the development team with status as 'New'.
  • The developer tries to verify if the bug is valid but fails to replicate the same scenario as it was at the time of testing, and asks for help from the testing team.
  • The tester also fails to reproduce the scenario in which the bug was found, and finally the developer rejects the bug, marking it as 'Rejected'.
Cycle VI

After confirmation that the required data or functionality is unavailable, the fix and retest of the bug are postponed indefinitely, and it is marked as 'Postponed'.

Cycle VII

If the bug is of low importance and its fix can wait, it is given the status 'Deferred'.

This was about the various life cycles that a bug goes through in software testing. And in the ways mentioned above, any bug that is found ends up with a status of Closed, Rejected, Deferred or Postponed.
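The statuses and the Cycle III "happy path" described above can be sketched as a small enumeration. This is an illustrative sketch, not the schema of any particular bug tracking tool:

```python
from enum import Enum

class BugStatus(Enum):
    """The statuses a bug can carry during its life cycle."""
    NEW = "New"
    OPEN = "Open"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    PENDING_RETEST = "Pending Retest"
    RETEST = "Retest"
    REOPEN = "Reopen"
    PENDING_REJECT = "Pending Reject"
    REJECTED = "Rejected"
    POSTPONED = "Postponed"
    DEFERRED = "Deferred"
    CLOSED = "Closed"

# The straightforward path of Cycle III: the bug is valid, gets fixed,
# passes retest, and is closed.
cycle_iii = [BugStatus.NEW, BugStatus.ASSIGNED, BugStatus.FIXED,
             BugStatus.PENDING_RETEST, BugStatus.RETEST, BugStatus.CLOSED]
```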

Types of Software Testing

Software testing is a process of executing software in a controlled manner. When the end product is given to the client, it should work correctly according to the specifications and requirements stated by the client. A defect in software is a variance between actual and expected results. There are different types of testing procedures which, when conducted, help to eliminate defects from the program.
Testing is a process of gathering information by making observations and comparing them to expectations. - Dale Emery

In our day-to-day life, when we go out shopping for any product, such as vegetables, clothes, or pens, we check it before purchasing, for our satisfaction and to get maximum benefit. For example, when we intend to buy a pen, we test it before actually purchasing it: does it write, does it work in extreme climatic conditions, and so on. So, be it software, hardware, or any other product, testing is mandatory.

Software testing is the process of verifying and validating that a program performs correctly, and of analyzing or operating software for the purpose of finding bugs. It helps to identify defects, flaws, or errors in the application code that need to be fixed. Testing does not only mean finding and fixing bugs in the code; it also checks whether the program behaves according to the given specifications and testing strategies. There are various strategies, such as white box testing, black box testing, and gray box testing.

Need for Software Testing Strategies

The type of software testing used depends on the type of defect being targeted. For example:
  • Functional testing is done to detect functional defects in a system.
  • Performance testing is performed to detect defects when the system does not perform according to the specifications.
  • Usability testing is done to detect usability defects in the system.
  • Security testing is done to detect bugs/defects in the security of the system.
The list goes on as we move on towards different layers of testing.

Software Testing Methods

To determine the true functionality of the application being tested, test cases are designed. Test cases provide the guidelines for going through the process of software testing, which includes two basic types: Manual Scripted Testing and Automated Testing.
  • Manual Scripted Testing: This is considered one of the oldest types, in which test cases are designed and reviewed by the team before being executed.
  • Automated Testing: This applies automation to various parts of the testing process, such as test case management, test case execution, defect management, and bug/defect reporting. The bug life cycle helps the tester decide how to log a bug, and guides the developer in setting the bug's priority according to its severity. Logging a bug records the details of the defect that is to be fixed. This can be done with the help of bug tracking tools such as Bugzilla and defect tracking management tools such as Test Director.
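As a sketch of what automated test execution looks like in practice, here is a minimal example using Python's unittest module. The function under test, its name, and its behavior are hypothetical:

```python
import unittest

def apply_discount(price, percent):
    """Function under test (hypothetical): reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """Scripted test cases that a tool can execute and report on automatically."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Execute the suite programmatically, as an automation framework would,
# and collect the results for reporting.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A continuous-integration server can run such a suite on every build and file a defect report whenever a test fails.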
Software Testing Types

The software testing life cycle is the process that explains the flow of the tests to be carried out on each product. The V-Model, i.e., the Verification and Validation model, is a widely used model for organizing a software project: it places the software development life cycle on one side and the software testing life cycle on the other. A checklist for the software tester sets a baseline that guides his or her day-to-day activities.

Black Box Testing: It explains the process of giving the input to the system and checking the output, without considering how the system generates the output. It is also known as Behavioral Testing.

Functional Testing: The software is tested for the functional requirements. This checks whether the application is behaving according to the specification.

Performance Testing: This testing checks whether the system performs properly according to the user's requirements. It includes load and stress testing, in which load is applied to the system internally or externally.
  1. Load Testing: In this type of performance testing, the load on the system is raised progressively in order to check the system's performance as higher loads are applied.
  2. Stress Testing: In this type of performance testing, the system is tested beyond its normal expectations or operational capacity.
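A minimal sketch of how load can be applied and measured in practice, assuming a hypothetical handle_request function stands in for the real system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(order_id):
    """Stand-in for the system under test (hypothetical); simulates ~1 ms of work."""
    time.sleep(0.001)
    return order_id * 2

def run_load(n_requests, n_workers):
    """Fire n_requests at the system from n_workers concurrent clients
    and return the measured throughput in requests per second."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(handle_request, range(n_requests)))
    elapsed = time.perf_counter() - start
    assert len(results) == n_requests   # every request was answered
    return n_requests / elapsed

# Load testing uses levels up to the expected peak; stress testing
# pushes well beyond it and watches how the system degrades.
for n in (50, 200, 800):
    throughput = run_load(n, n_workers=20)
    print(f"{n:4d} requests -> {throughput:8.0f} req/s")
```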
Usability Testing: This is also known as 'Testing for User Friendliness'. It checks the ease of use of an application.

Regression Testing: Regression testing is one of the most important types of testing; it checks whether a small change in any component of the application affects the unchanged components. This is done by re-executing previously run test cases against the new build.

Smoke Testing: It checks whether a build is stable enough for further testing, and is also called 'Build Verification Testing' or 'Link Testing'. That is, it checks whether the application is ready for further testing and work, without dealing with the finer details.

Sanity Testing: Sanity testing is a quick check that a particular area of the system still behaves as expected after a change. It is also called narrow regression testing.

Parallel Testing: Parallel testing is done by comparing results from two different systems like old vs new or manual vs automated.

Recovery Testing: Recovery testing is necessary to check how fast the system is able to recover from any hardware failure, catastrophic problem, or other type of system crash.

Installation Testing: This type of software testing identifies the ways in which the installation procedure can lead to incorrect results.

Compatibility Testing: Compatibility testing determines if an application under supported configurations performs as expected, with various combinations of hardware and software packages.

Configuration Testing: This testing is done to test for compatibility issues. It determines minimal and optimal configuration of hardware and software, and determines the effect of adding or modifying resources such as memory, disk drives, and CPU.

Compliance Testing: This checks whether the system was developed in accordance with standards, procedures, and guidelines.

Error-Handling Testing: This determines the ability of the system to properly process erroneous transactions.

Manual-Support Testing: This type of software testing covers the interface between people and the application system, i.e., the manual procedures by which users interact with the system.

Inter-Systems Testing: This method tests the interfaces between two or more application systems.

Exploratory Testing: Exploratory testing is similar to ad-hoc testing, and is performed to explore the software features.

Volume Testing: This testing is done when a huge amount of data is processed through the application.

Scenario Testing: Scenario testing provides a more realistic and meaningful combination of functions, rather than artificial combinations that are obtained through domain or combinatorial test design.

User Interface Testing: This type of testing is performed to check how user-friendly the application is. The user should be able to use the application without any assistance from system personnel.

System Testing: This testing is conducted on a complete, integrated system to evaluate the system's compliance with the specified requirements. It checks whether the system meets its functional and non-functional requirements, and it may also test beyond the bounds defined in the software/hardware requirement specifications.

User Acceptance Testing: Acceptance testing is performed to verify that the product is acceptable to the customer and if it's fulfilling the specified requirements of that customer. This testing includes Alpha and Beta testing.
  1. Alpha Testing: Alpha testing is performed at the developer's site by the customer in a closed environment. This is done after the system testing.
  2. Beta Testing: This is done at the customer's site by the customer in the open environment. The presence of the developer, while performing these tests, is not mandatory. This is considered to be the last step in the software development life cycle as the product is almost ready.
White Box Testing: It is the process of giving input to the system and checking how the system processes that input to generate the output. The tester must have knowledge of the source code.

Unit Testing: Unit testing is done at the developer's site to check whether a particular piece or unit of code is working correctly. It tests each unit of the program in isolation.

Static and Dynamic Analysis: In static analysis, the code is examined without being executed in order to find possible defects, whereas in dynamic analysis the code is executed and its output is analyzed.

Statement Coverage: It assures that the code is executed in such a way that every statement of the application is executed at least once.

Decision Coverage: This ensures that every decision in the code (for example, each if condition) evaluates to both true and false at least once.

Condition Coverage: In this type of software testing, each individual condition within a decision is made both true and false at least once.

Path Coverage: Each path through the code is executed at least once to achieve full path coverage, which is one of the strongest forms of white box testing.
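The difference between these coverage criteria can be illustrated with a toy function (the function and its names are hypothetical):

```python
def classify(age, is_member):
    """Toy function: one decision made up of two conditions."""
    if age >= 65 or is_member:
        return "discount"
    return "full price"

# Statement coverage: every statement runs at least once (two cases suffice).
assert classify(70, False) == "discount"      # executes the first return
assert classify(30, False) == "full price"    # executes the second return

# Decision coverage: the decision as a whole evaluates to both true and
# false at least once -- the two cases above already achieve this.

# Condition coverage: each individual condition is true and false at least once.
assert classify(70, False) == "discount"      # age >= 65 true,  is_member false
assert classify(30, True)  == "discount"      # age >= 65 false, is_member true
assert classify(30, False) == "full price"    # both conditions false
```

Note that condition coverage required an extra test case (30, True) that the weaker criteria did not: stronger criteria demand more tests.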

Integration Testing: Integration testing is performed when various modules are integrated with each other to form a sub-system or a system. It mostly focuses on the design and construction of the software architecture. It is further classified into bottom-up and top-down integration testing.
  1. Bottom-Up Integration Testing: Here the lowest-level components are tested first, and the testing of higher-level components is done using 'drivers'. The process is repeated until all the higher-level components have been tested.
  2. Top-Down Integration Testing: The opposite of the bottom-up approach: the top-level modules are tested first, and lower-level modules are simulated step by step using 'stubs' until the lowest-level modules are reached.
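A minimal sketch of a stub in top-down integration testing, using Python's unittest.mock; the module and pricing-service names are hypothetical:

```python
from unittest import mock

def get_display_price(product_id, pricing_service):
    """Higher-level module under test: depends on a lower-level pricing service."""
    base = pricing_service.lookup(product_id)
    return round(base * 1.2, 2)   # add 20% tax

# In top-down integration, the real pricing service may not exist yet,
# so a stub stands in for it and returns canned data.
stub_service = mock.Mock()
stub_service.lookup.return_value = 10.0

assert get_display_price("sku-1", stub_service) == 12.0
stub_service.lookup.assert_called_once_with("sku-1")
```

A driver plays the opposite role in bottom-up integration: a throwaway caller that exercises a lower-level component before its real callers exist.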
Security Testing: Testing that confirms how well a system protects itself against unauthorized internal or external access and against willful damage to code or data. Security testing ensures that the program is accessed by authorized personnel only.

Mutation Testing: In mutation testing, small changes (mutations) are deliberately introduced into the code, and the tests are re-run to check whether they detect the change; mutants that survive indicate gaps in the test suite.

These methods show you the output and help you check whether the software satisfies the customers' requirements. Software testing is indeed a vast subject, and one can make a successful career in this field.

Integration Testing Vs. System Testing

In software testing, integration testing is often pitched against system testing, because the two are frequently interchanged or considered synonyms for the same type of testing. However, at the very outset I would like to make clear that they are not synonyms; they are indeed different types of testing.
Software testing is a process consisting of dynamic and static activities, which concerns itself with the evaluation of software products, both to determine that the software meets the requirements of the end user and to demonstrate that it is fit for purpose and free of defects. There are different types of testing tools which help in this process, and certain types of testing belong to particular phases of the software development life cycle. Just as there are different models in software development, there are different models in software testing as well. Each of these models uses different test levels, which have their own objectives. Four test levels are common to all the software testing models: component testing, integration testing, system testing, and acceptance testing. All other types of software testing can be classified under one of them. Before we focus our attention on integration vs. system testing, we will read briefly about each.

Integration Testing

The process of combining and testing multiple components together is known as integration testing. It is a systematic approach to building the complete software structure specified in the design from unit-tested modules. It ensures that the software units operate properly when combined. The aim of integration testing is to discover errors in the interfaces between components; interface errors and faults in the communication between different modules are unearthed by this testing. There are two main types of integration testing, namely incremental and non-incremental. Incremental testing can be performed using any one of three approaches: top-down, bottom-up, or functional incremental. If there are a number of small systems to be integrated to form a large system, then the system integration testing process is the same as component integration testing, as each small system is considered a component. In system integration testing, the interaction of the different systems with each other is tested, whereas in component integration testing the interaction between the different components is tested.

System Testing

The testing carried out to analyze the behavior of the whole system against the requirement specification is known as system testing. The test cases are designed based on risks and/or requirement specifications. In some cases, business processes, system behavior, and system resources may also be taken into consideration. Tests are also run to see the interaction of the software with the underlying operating system. System testing is considered the final test carried out on the software by the development team. Both functional and non-functional requirements of the system are tested, and the tests are carried out in a controlled test environment. Several types of testing form a part of system testing, such as usability testing, load testing, stress testing, compatibility testing, and volume testing.

Integration Testing Vs. System Testing

People often believe that integration and system testing are the same. However, from the above explanation we can see that they are different types of testing. The next question that naturally arises is: what is the difference between integration testing and system testing? Integration testing checks whether the different sub-functionalities or modules have been integrated properly to form a bigger functionality, whereas in system testing the system is tested as a whole, and the individual functionalities that make it up are not considered separately; it is the working of the system as a whole that matters. In other words, the focus of attention in integration testing is on the modules, whereas in system testing it is on system functionality. Accordingly, integration tests are carried out before the system moves to the system testing level.

When integration testing is carried out, interface specifications are taken into consideration, while in system testing the requirement specifications are what matter. In system testing, the tester does not have access to the code used to build the system; while carrying out integration testing, on the other hand, the tester is able to see the internal code. Integration testing may also require some scaffolding in the form of drivers and/or stubs; the same is not necessary for system testing.

To sum it up in short, both of them are an important part of the software testing process. Integration testing concerns itself with the modules of the software and system testing takes care of testing the software as a single entity. 

Testing Metrics

A METRIC is a measure used to quantify software, software development resources, and/or the software development process. Metrics enable estimation of future work.
A Metric can quantify any of the following factors:
- Schedule,
- Work Effort,
- Product Size,
- Project Status, and
- Quality Performance


Project Management Metrics
Schedule Variance (SV)
This metric gives the variation of the actual schedule vs. the planned schedule, expressed as a percentage.
SV = [(Actual number of days - Planned number of days) / Planned number of days] * 100

Effort Variance (EV)
This metric gives the variation of the actual effort vs. the estimated effort, expressed as a percentage.
EV = [(Actual Person Hours - Estimated Person Hours) / Estimated Person Hours] * 100

Cost Variance (CV)
This metric gives the variation of the actual cost vs. the estimated cost, expressed as a percentage.
CV = [(Actual Cost - Estimated Cost) / Estimated Cost] * 100

Size Variance (SzV)
This metric gives the variation of the actual size vs. the estimated size, expressed as a percentage.
SzV = [(Actual Size - Estimated Size) / Estimated Size] * 100
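All four project management metrics above share the same shape: (actual - planned) / planned * 100. They can therefore be computed with one helper; the figures below are purely hypothetical:

```python
def variance_pct(actual, planned):
    """Shared shape of SV, EV, CV and SzV: (actual - planned) / planned * 100."""
    return (actual - planned) / planned * 100

# Illustrative figures (hypothetical project):
sv  = variance_pct(actual=22,   planned=20)    # schedule in days: 2 days over plan
ev  = variance_pct(actual=180,  planned=200)   # effort in person-hours: under estimate
cv  = variance_pct(actual=5500, planned=5000)  # cost: over budget
szv = variance_pct(actual=48,   planned=50)    # size in FP: slightly smaller
```

A positive value means the actual figure exceeded the plan; a negative value means it came in under.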


Requirement Metrics
Requirements Stability Index (RSI)
This metric gives the stability of the requirements over a period of time, after the requirements have been mutually agreed upon and baselined. It is expressed as a percentage.
RSI = [(Number of baselined requirements - Number of requirement changes after baselining) / (Number of baselined requirements)] * 100

Requirements Traceability Metrics (RTM)
This metric provides analysis of the requirements covered by test cases. The RTM helps a user decide whether all requirements are covered by written test cases. This analysis is done by determining:
- Number of Requirements
- Number of Test Cases with matching Requirements
- Number of Requirements with no matching Test Cases


Testing and Review Metrics
Defect Density
This metric relates the number of defects to the size of the work product, expressed as a percentage.
Defect Density = [Total no. of Defects / Size (FP or KLOC)] * 100
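The defect density formula translates directly into code; the figures below are hypothetical:

```python
def defect_density(total_defects, size):
    """Defects relative to work-product size (FP or KLOC), per the formula above."""
    return total_defects / size * 100

# e.g. 12 defects found in a 4.8 KLOC module:
density = defect_density(12, 4.8)
```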

Defect Removal Efficiency (DRE)
This metric indicates the effectiveness of defect identification and removal, stage by stage, for a given project. It is expressed as a percentage.
- Requirements Phase:
DRE = [(Requirement defects corrected during Requirements phase) / Requirement defects injected during Requirements phase)] * 100
- Design Phase:
DRE = [(Design defects corrected during Design phase) / (Defects identified during Requirements phase + Defects injected during Design phase)] * 100
- Code Phase:
DRE = [(Code defects corrected during Coding phase) / (Defects identified during Requirements phase + Defects identified during Design phase + Defects injected during coding phase)] * 100
- Overall:
DRE = [(Total defects corrected at all phases before delivery) / (Total defects detected at all phases before and after delivery)] * 100
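The overall DRE formula can be computed directly; the figures below are hypothetical:

```python
def dre(defects_removed, total_defects):
    """Defect Removal Efficiency as a percentage, per the overall formula above."""
    return defects_removed / total_defects * 100

# Hypothetical project: 90 defects corrected before delivery,
# 10 more reported by users after delivery.
overall_dre = dre(defects_removed=90, total_defects=90 + 10)
```

The per-phase variants use the same shape, with the denominator restricted to the defects that were detectable in that phase.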

Overall Test Effectiveness (OTE)
This metric indicates the effectiveness of the testing process in identifying defects for a given project during the testing stage. It is expressed as a percentage.
OTE = [(Number of defects found during Testing) / (Total number of defects found during Testing + Number of defects found post-delivery)] * 100

Overall Review Effectiveness (ORE)
This metric indicates the effectiveness of the review process in identifying defects for a given project. It is expressed as a percentage.
ORE = [(Number of defects found by reviews) / (Total number of defects found by reviews + Number of defects found during Testing + Number of defects found post-delivery)] * 100

Difference between QA and QC

Software testing is a process of verifying and validating that a software application or program meets the business and technical requirements that guided its design and development, and works as expected. Software testing also identifies important defects, flaws, or errors in the application code that must be fixed.

Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by the programmer to find design defects.

Quality Assurance (QA) is the process that focuses on each role and its responsibilities in the development process, checking whether they are being carried out according to the guidelines set by the quality standards. It concentrates on the process behind the product and is performed throughout the life cycle. It is a defect-prevention method.

Quality Control (QC) is the process that usually includes inspection and audit operations to segregate the bad from the good. It concentrates on specific products and is done after the product is built. It is a defect-detection and correction method.

Don't confuse testing with QA, as the two are different.
QA is a continuous process in which we monitor and improve how work is done. It is oriented to PREVENTION.
Testing is done under controlled conditions in order to find defects. It is oriented to DETECTION.

Our main target with this site is to help people who are new to the software testing field and want to know more about software testing, testing tools, and testing certifications. We will try to focus on each and every aspect of software testing, and to provide the best solutions for your problems with it.

Our baseline is 'Your Own Software Testing Warehouse'; this means we will try to provide you with software testing tutorials or useful material on demand. Send us what you want to know about software testing and we will help you out. This is our sincere effort towards hosting all software testing needs under one roof.