Wednesday, 28 October 2015

The Core Activities of Performance Testing

There are seven core activities that occur in a successful performance testing process. Before implementing them, it is important to understand each of these seven concepts.
The core activities are as follows:

1. Identification of Test Environment


In this activity, we identify the physical test and production environments for the software application, as well as the tools and resources available to the test team. The environment, tools and resources here refer to the configurations and settings of the hardware, the software and the network. A thorough understanding of the test environment enables better test planning and design, and this identification should be reviewed periodically during the testing process. A small configuration-snapshot sketch follows the checklist below.


The key factors to consider for test environment identification are as follows:


- Hardware and machine configurations
- Network architecture and user location
- Domain Name System (DNS) configuration
- Software installed
- Software licenses
- Storage capacity and data volume
- Levels of logging
- Load balancing
- Load generation and monitoring tools
- Volume and type of network traffic
- Scheduled processes, updates and backups
- Interaction with external systems
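
As a small illustration of recording part of this checklist, the sketch below (a hypothetical `EnvironmentSnapshot` class, standard Java only) prints a few basic facts about a test or load-generation machine so that they can be attached to test results. It is a starting point only, not a full environment audit.

```java
import java.net.InetAddress;

public class EnvironmentSnapshot {
    public static void main(String[] args) throws Exception {
        // Capture basic facts about the machine the tests (or the load generator) run on,
        // so results can later be tied back to a known configuration.
        System.out.println("Host:        " + InetAddress.getLocalHost().getHostName());
        System.out.println("OS:          " + System.getProperty("os.name") + " " + System.getProperty("os.version"));
        System.out.println("JVM:         " + System.getProperty("java.version"));
        System.out.println("CPU cores:   " + Runtime.getRuntime().availableProcessors());
        System.out.println("Max heap MB: " + Runtime.getRuntime().maxMemory() / (1024 * 1024));
    }
}
```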


2. Identification of Performance Acceptance Criteria


This step involves identifying or estimating the desired performance characteristics of the software application. It starts by noting the characteristics that stakeholders regard as good performance. The main characteristics of stable performance are response time, resource utilization and throughput.
The key factors to consider for identification of performance acceptance criteria are as follows (a small sketch for encoding such criteria follows the list):


- Business requirements and obligations
- User expectations
- Industry standards and regulatory compliance criteria
- Service level agreements (SLAs)
- Resource utilization limits
- Workload models
- Anticipated load conditions
- Stress conditions
- Performance indicators
- Previous releases
- Competing applications
- Optimization objectives
- Safety and scalability
- Schedule, budget, resources and staffing
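
Acceptance criteria can be written down as explicit, checkable thresholds rather than prose. The sketch below is a minimal example in plain Java; the class name and all threshold values are illustrative assumptions, not recommendations.

```java
// A minimal sketch of encoding performance acceptance criteria as explicit thresholds.
public class AcceptanceCriteria {
    static final double MAX_AVG_RESPONSE_MS = 2000;  // response time goal (assumed)
    static final double MIN_THROUGHPUT_RPS  = 50;    // throughput goal (assumed)
    static final double MAX_CPU_UTILIZATION = 0.75;  // resource utilization limit (assumed)
    static final double MAX_ERROR_RATE      = 0.01;  // at most 1% failed requests (assumed)

    static boolean passes(double avgResponseMs, double throughputRps,
                          double cpuUtilization, double errorRate) {
        return avgResponseMs <= MAX_AVG_RESPONSE_MS
                && throughputRps >= MIN_THROUGHPUT_RPS
                && cpuUtilization <= MAX_CPU_UTILIZATION
                && errorRate <= MAX_ERROR_RATE;
    }

    public static void main(String[] args) {
        // Example values as they might come out of two different test runs.
        System.out.println(passes(1450.0, 62.0, 0.68, 0.004));  // true
        System.out.println(passes(2600.0, 62.0, 0.68, 0.004));  // false: response time too high
    }
}
```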

3. Plan and Design Tests


When planning and designing tests to quantify performance characteristics, the tests should simulate real-world usage. This produces relevant and useful results that help the organization make informed business decisions. Where full real-world simulation is not the test objective, the most valuable usage scenarios should be explicitly identified. A small workload-model sketch follows the list below.


The key factors to consider in planning and designing tests are as follows:


- Obligated usage scenarios
- Usage scenarios implied by the testing goals
- Most common usage scenarios
- Performance-critical usage scenarios
- Technical usage scenarios
- Stakeholder usage scenarios
- High-visibility usage scenarios
- Business-critical usage scenarios
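
To make the scenario selection concrete, a workload model assigns each chosen scenario a share of the simulated users. The sketch below is a minimal example; the scenario names, percentages and user count are illustrative assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WorkloadModel {
    public static void main(String[] args) {
        int totalVirtualUsers = 200;

        // Each usage scenario receives a share of the simulated load.
        Map<String, Double> scenarioMix = new LinkedHashMap<>();
        scenarioMix.put("browse catalogue", 0.50);  // most common scenario
        scenarioMix.put("search",           0.30);
        scenarioMix.put("checkout",         0.15);  // business-critical scenario
        scenarioMix.put("admin report",     0.05);  // performance-critical back-office task

        scenarioMix.forEach((scenario, share) -> {
            long users = Math.round(totalVirtualUsers * share);
            System.out.printf("%-20s %3d virtual users%n", scenario, users);
        });
    }
}
```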

4. Configuration of Test Environment


Numerous issues arise from network, hardware, server operating system and software incompatibilities. Configuration of the test environment therefore needs to start early, so that configuration issues are resolved before testing begins. Additionally, periodic reconfiguration, updates and enhancements should be carried out throughout the project lifecycle.

The key factors to consider in configuring the test environment are as follows:

- Determine the maximum load that can be generated before a load-generation bottleneck is reached.
- Verify that the system clocks of all machines from which resource data is collected are synchronized (a small clock-skew sketch follows this list).
- Validate load-testing accuracy against the different hardware components.
- Validate load-testing accuracy against the server clusters.
- Validate the distribution of load by monitoring resource utilization across servers.
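
As a rough illustration of the clock-synchronization check mentioned above, the sketch below compares each server's HTTP Date response header with the local clock using Java's built-in HTTP client. The host names are placeholders, and since the Date header only has one-second resolution this is a sanity check, not a replacement for NTP.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class ClockSkewCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host names; replace with the servers in the test environment.
        String[] hosts = {"http://app-server-1.example.com/", "http://app-server-2.example.com/"};
        HttpClient client = HttpClient.newHttpClient();

        for (String host : hosts) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(host)).GET().build();
            HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());

            String dateHeader = response.headers().firstValue("Date").orElse(null);
            if (dateHeader == null) {
                System.out.println(host + ": no Date header returned");
                continue;
            }
            // HTTP Date headers use the RFC 1123 format, e.g. "Sun, 06 Nov 1994 08:49:37 GMT".
            Instant serverTime = ZonedDateTime.parse(dateHeader, DateTimeFormatter.RFC_1123_DATE_TIME).toInstant();
            long skewMs = Math.abs(Instant.now().toEpochMilli() - serverTime.toEpochMilli());
            System.out.println(host + ": approximate clock skew " + skewMs + " ms");
        }
    }
}
```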

5. Implementation of Test Design

The biggest challenge in performance testing is to execute a realistic test with simulated data in such a way that the application under test cannot differentiate between real and simulated data.


The key factors to consider for implementation of test design are as follows:


- Ensure the correct implementation of the test data feeds (a small data-feed sketch follows this list).
- Ensure the correct implementation of transaction validations.
- Ensure the correct handling of hidden fields and special data.
- Validate the key performance indicators.
- Ensure that variables for request parameters are populated properly.
- Consider wrapping requests in test scripts to measure the response time of each request.
- Prefer adapting the script to match the designed test over changing the test to match the script.
- Evaluate the generated results against the expected results to validate the script development.
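
A minimal sketch of a data feed, assuming a hypothetical users.csv file and login endpoint: each CSV row supplies the request parameters for one simulated login, and the status code is checked as a very basic transaction validation. A real script would also verify the response body and handle hidden fields and tokens.

```java
import java.io.IOException;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class DataDrivenLogin {
    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        // Each line of users.csv is "username,password" and feeds one simulated login.
        List<String> rows = Files.readAllLines(Path.of("users.csv"));

        for (String row : rows) {
            String[] fields = row.split(",");
            String form = "username=" + URLEncoder.encode(fields[0], StandardCharsets.UTF_8)
                        + "&password=" + URLEncoder.encode(fields[1], StandardCharsets.UTF_8);

            HttpRequest request = HttpRequest.newBuilder(URI.create("http://app.example.com/login"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            // Basic transaction validation: a status code alone is rarely enough,
            // but it catches outright failures in the data feed.
            System.out.println(fields[0] + " -> HTTP " + response.statusCode());
        }
    }
}
```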

6. Execute Tests


The process of executing tests depends on the tools, resources and environment. It is essentially a combination of the following tasks:


- Coordinating the execution of tests.
- Validating the tests, configurations and data environments.
- Executing tests.
- Validating and monitoring the scripts and data during execution.
- Reviewing the results on test completion.
- Archiving the tests, test data, test results and related information for later use.
- Logging activity times for later identification.

The key factors to consider while executing tests are as follows:

- Validate that test executions complete and produce complete data.
- Validate the use of correct data values for realistic simulation of the business scenario.
- Limit test execution cycles and review them after each cycle.
- Execute the same test multiple times to determine the factors accounting for differences between runs.
- Observe any unusual behavior during test execution.
- Warn the team before executing tests.
- Do not run extra processes on the load-generating machine while generating load.
- Simulate ramp-up and cool-down periods (a small ramp-up sketch follows this list).
- Test execution can be stopped once a point of diminishing returns is reached.
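
The sketch below illustrates the ramp-up and cool-down idea with a plain Java scheduler: virtual users are started in small batches rather than all at once, and the run is allowed to wind down before the process exits. The user counts, ramp interval and placeholder "scenario" are illustrative assumptions.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RampUpDriver {
    public static void main(String[] args) throws InterruptedException {
        int totalUsers = 50;
        int usersPerStep = 5;                      // users added at each ramp step (assumed)
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(totalUsers);
        AtomicInteger started = new AtomicInteger();

        for (int step = 0; step * usersPerStep < totalUsers; step++) {
            scheduler.schedule(() -> {
                for (int i = 0; i < usersPerStep; i++) {
                    int userId = started.incrementAndGet();
                    // Placeholder for the real scripted scenario executed by this virtual user.
                    System.out.println("virtual user " + userId + " started");
                }
            }, step * 10L, TimeUnit.SECONDS);      // add a batch of users every 10 seconds
        }

        // Cool-down: stop submitting new work and let in-flight activity finish.
        scheduler.shutdown();
        scheduler.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```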

7. Analyze, Report and Retest


The aim of executing tests goes beyond producing results: conclusions need to be drawn from those results, along with consolidated data to support the conclusions. This process requires analysis, comparison and reporting.

The key factors to consider are as follows (a small analysis sketch follows the list):


- Analyze the data both individually and collectively.
- Analyze and compare results to determine whether the application under test is trending towards improvement or degradation.
- If any fixes are made, validate the fix by repeating the test.
- Share the results of the test and make the raw data available to the team.
- Modify tests if the desired objective is not met.
- Exercise caution when reducing test data so that valuable data is not lost.
- Report early and often.
- Report visually and intuitively.
- Consolidate data correctly and summarize it effectively.
- Intermediate reports should include priorities, issues and limitations for the next execution cycles.
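
A minimal analysis sketch: summarize response-time samples with an average and a high percentile, and compare two runs to see whether the trend is improving or degrading. The sample numbers are purely illustrative.

```java
import java.util.Arrays;

public class ResultAnalysis {
    // Nearest-rank percentile of a set of response-time samples.
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(0, Math.min(index, sorted.length - 1))];
    }

    static double average(double[] samples) {
        return Arrays.stream(samples).average().orElse(0);
    }

    public static void main(String[] args) {
        double[] previousRunMs = {180, 220, 250, 300, 900, 210, 260};   // baseline run
        double[] currentRunMs  = {170, 200, 240, 280, 650, 190, 230};   // run after a fix

        System.out.printf("avg: %.0f ms -> %.0f ms%n", average(previousRunMs), average(currentRunMs));
        System.out.printf("95th percentile: %.0f ms -> %.0f ms%n",
                percentile(previousRunMs, 95), percentile(currentRunMs, 95));
    }
}
```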

Conclusion

The above activities occur at different stages of the testing process. It is very important to understand the objective and significance of each activity in order to design them to best fit the project context.

Wednesday, 23 September 2015

The different types of Performance Testing



Performance Testing of an application determines its responsiveness, throughput, reliability and scalability under a given workload. This process evaluates the software against its performance criteria. It compares the software performance on multiple system configurations and quantifies the levels of throughput.

Performance testing usually identifies the bottlenecks in a software application and helps verify its compliance with its initial requirements. The results generated by performance testing also help assess the hardware configurations that will be required for deployment.




Types of Testing:

The types of performance testing fall into the following categories.

- Performance Testing: This process validates the responsiveness, speed, scalability and stability of the software. It measures the response times, throughput and level of resources utilized by the software. This category is the superset of all performance-related testing.

- Load Testing: This validates the performance characteristics of the software application when it is subjected to loads ranging from normal to the peak conditions expected in production. It aims at identifying the breaking point of the application within peak load conditions. Endurance testing can be considered a subset of load testing; it determines the performance of the system under load over an extended period of time. The endurance level of a system can be characterized by Mean Time Between Failures (MTBF) and Mean Time To Failure (MTTF).

- Stress Testing: This process is an extension of load testing. It verifies the performance characteristics of the system when it is subjected to conditions beyond the anticipated production workloads. It also tests the system under other stressful conditions such as limited memory, insufficient disk space and server failure. These tests help estimate the conditions under which the application is liable to fail and reveal bugs that occur only under extreme load. This helps identify the indicators that need to be monitored to avoid failures. Spike testing can be considered a variation of stress testing that verifies the performance of a system under repeated short bursts of extreme load.

- Capacity Testing: This determines the upper limit of users and transactions that can be supported by the software. It is conducted in conjunction with capacity planning and can be used to anticipate the additional resources required to support an increased user base or increased data volumes. It also helps determine whether the system should be scaled up or scaled down in the future.

Automated Performance Testing Tool:

JMeter is an automated, open source testing tool written in Java, used to perform load and stress testing of software applications. It can be used to test the performance of both static and dynamic resources of an application, and can simulate heavy loads on a server or group of servers to test overall performance under different load types. It presents test results in graphical formats such as charts and tables that are easy to understand. It can be launched by running the JMeter shell script on Linux or the .bat file on Windows.
 
Conclusion: 

Performance Testing is an indispensable activity for assessing business risks. Besides identifying business risks, it reveals information about the usability, functionality and security of the system. It is both a qualitative and a quantitative evaluation of the software under test. It generally occurs at the end of the test plan; however, it should also be incorporated into the earlier stages of the development life cycle, when important logical structures are being decided.

Wednesday, 2 September 2015

Understanding Performance Testing in Agile Environment

Performance Testing is performed to evaluate the compliance level of an application or software with its initial requirements. It is an integral part of the agile process, and its main goal is to test performance and functionality in each sprint. In a traditional development cycle, performance testing usually takes place at the end of the development life cycle; in the Agile methodology, however, it finds its place early.


Managing Performance Testing in Agile Environment
The key element of the Agile methodology is its flexibility. To ensure efficiency and thoroughness in this environment, the way performance testing is done may require changes. This approach requires an understanding of the project as a whole in order to evaluate the contribution of performance testing to the project, which helps the team estimate the success criteria for the effort.
Once these criteria are established, a strategy is formed to achieve them by outlining the testing activities that will add value at various points of the development cycle. These points may be project deliveries, sprints, iterations or weekly builds. As the strategy continues to evolve, a performance test and load generation environment is set up.
With a strategy and a test setup in place, tests are planned for the weekly builds. As each build is delivered, results are reported and recorded, and the plan is revised and reprioritized, with tests added or removed as needed, improving both the application and the overall strategy as the project progresses.
Types of Performance Testing performed:
Load Testing is done to verify application behavior under normal to peak load conditions. It enables the team to quantify the response time, throughput, resource utilization and breaking point of the application. Endurance Testing determines the performance of the product when subjected to workload volumes over an extended period of time.
Stress test: This determines the behavior of the application beyond normal to peak load conditions. It reveals errors and faults that arise only under high load and helps identify the weak points of the application under extreme load. A Spike Test is a subset of stress testing that validates the behavior of an application under an extreme workload over a short period of time.
Capacity test: This determines the number of users or transactions the system can support.
Testing with JMeter:
JMeter is an open source testing tool written in Java with a graphical interface. It is used to measure performance and load behavior across a variety of services. It is platform independent and capable of running load and performance tests against a variety of servers and protocols, such as HTTP, HTTPS, SOAP, databases via JDBC, LDAP, JMS and POP3 mail. It supports various kinds of testing, such as load, distributed and functional testing, and can generate heavy load against the application under test. It is capable of performing automated testing using Selenium. It presents test results in easily understandable formats such as charts, tables and log files, and can be launched by running the JMeter shell script on Linux or the .bat file on Windows.
Additional Considerations for Agile Performance Testing:
- All significant information must be communicated to the team.
- For a major portion of the development life cycle, testing is about compiling adequate information to improve performance as the design and development progress.
- Testing should not be forced onto a build simply because it is planned in the strategy. If the current build is not appropriate for testing, wait for a later build on which the test can be performed.
- Performance testing can drive significant changes to the application architecture, code, hardware and environment. Hence, test reports need to be read and understood carefully in order to make the appropriate changes.
Conclusion:
Performance testing in Agile is based on the principle of continuously asking what can be done at a given point in time to add the most value to the project. This approach allows the test process to be managed in a very flexible way, and helps the team revise the project vision and context and reprioritize tasks based on the value they add.

Thursday, 9 July 2015

The Performance Testing Best Practices



As a tester, your job is to deliver the best possible feedback on an application's responsiveness as early in the development cycle as possible. Obviously, in real business scenarios, deadlines are always tight and testing starts later than it should. There are a few simple principles that can help make your performance testing projects more effective.


- A reasonably accurate result now is worth more than a very accurate result later. It is well understood that modifications become costlier later in the project; as a result, the sooner performance issues are found, the easier they are to fix.
- Test broadly instead of deeply. It is better to test a broad range of scenarios in a simple manner than to test a few cases very intensely. At the beginning of a project, a fairly accurate simulation of the real world is good enough. Time spent making each test case unerringly mimic its predicted scenario, or testing many slight variations of the same real-world scenario, is better spent testing a wide range of scenarios.
- Testing should be performed in a controlled environment. Testing without good configuration management and dedicated servers will yield test results that cannot be reproduced. In such a case, you cannot measure improvements correctly when the system's next version is ready for testing.

How to make a load testing program successful

Based on input from partners and customers, here is a list of best practices that will help load testers make a quick start towards a quality load testing program.

1. Recording a script
 
First, find out the most frequent workflows in the application being tested. For an existing application, you can identify these by checking analytics and server logs to discover the most common scenarios (a small log-analysis sketch follows this subsection).

When recording, confirm that you test the application in a production-like setting, including factors such as SSO, SSL, firewalls and load balancing. This is crucial for simulating exact end-to-end behaviour. It is also recommended to record scripts from all of the common user environments, including different mobile devices and browsers.
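
A small sketch of mining a server access log for the most requested paths, to help decide which workflows are worth scripting. It assumes a common log format in which the request line appears as a quoted "METHOD /path HTTP/1.1" section; the file name access.log is a placeholder.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.Map;
import java.util.stream.Collectors;

public class TopRequestedPaths {
    public static void main(String[] args) throws IOException {
        // Count how often each request path appears in the access log.
        Map<String, Long> counts = Files.lines(Path.of("access.log"))
                .map(line -> line.split("\""))                 // isolate the quoted request line
                .filter(parts -> parts.length > 1)
                .map(parts -> parts[1].split(" "))             // e.g. "GET /checkout HTTP/1.1"
                .filter(request -> request.length > 1)
                .map(request -> request[1])                    // keep only the path
                .collect(Collectors.groupingBy(path -> path, Collectors.counting()));

        // Print the ten most common paths, most frequent first.
        counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
                .limit(10)
                .forEach(entry -> System.out.println(entry.getValue() + "  " + entry.getKey()));
    }
}
```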

2. Improving a script with validation and correlation
 
Find and replace all dynamic values in the recorded script, for instance session IDs and security tokens. If these values are not regenerated at the time of test execution, most systems will not function properly. This procedure is called correlation (a small sketch follows this subsection).

Ensure that you stress the application's back end instead of only the cache servers, by replacing recorded static values with parameters that are resolved automatically at run time. Validation logic also needs to be added to the script to confirm that the results obtained under load are consistent.
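
A minimal correlation sketch using Java's built-in HTTP client: instead of replaying a token captured during recording, the script extracts the value issued in this run and feeds it into the next request, then validates the response body. The URLs, the hidden-field name csrf_token and the form fields are illustrative assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelationExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: fetch the page that issues the dynamic value.
        HttpResponse<String> loginPage = client.send(
                HttpRequest.newBuilder(URI.create("http://app.example.com/login")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Step 2: extract the dynamic value (assumed hidden field) from the response body.
        Matcher m = Pattern.compile("name=\"csrf_token\" value=\"([^\"]+)\"").matcher(loginPage.body());
        if (!m.find()) {
            throw new IllegalStateException("csrf_token not found - correlation would fail under load");
        }
        String token = m.group(1);

        // Step 3: reuse the freshly issued value in the follow-up request.
        HttpResponse<String> result = client.send(
                HttpRequest.newBuilder(URI.create("http://app.example.com/login"))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString(
                                "csrf_token=" + token + "&username=demo&password=demo"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // Step 4: validate the response rather than trusting the status code alone.
        System.out.println("HTTP " + result.statusCode() + ", welcome page: " + result.body().contains("Welcome"));
    }
}
```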

3. Running the load test

Create realistic runtime scenarios that take into consideration the number of users, the test scripts, network limitations, browser types and schedule. An SLA or set of criteria should be defined for a successful performance test. Avoid the temptation to extrapolate results: one server with 100 users will not behave like two servers with 200 users. Test against the equivalent of your complete production stack.

4. Analyzing results
 
It is imperative to examine all the application metrics, including the number of connections, transaction times, response times and throughput. There are several reasons why an application breaks under load, and the actual cause is not always easy to predict.
For effective performance testing, server-side statistics need to be analyzed side by side with client-side statistics. Also, remember to look for any discrepancies that may indicate a problem. Validate results against recognized industry guidelines to quickly identify possible bottlenecks.