Thursday, February 16, 2012

Performance Test Plan Template

The following can be used as a performance test plan template:

Summary
The summary contains information such as the following:
  • Which project is to be tested
  • Architectural diagram
  • Test strategy
  • Timeline
  • Versions
  • Deliverables (e.g., performance test report, application capacity analysis, SLA analysis, etc.)


Use Cases
  • Which business transactions will be tested
  • The transaction mix, e.g., X% transaction A, Y% transaction B, etc. (see the sketch after this list)
  • Data parameterization for each transaction
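
As a rough sketch of how such a weighted transaction mix might be driven, the snippet below picks the next transaction to issue according to configured percentages; the transaction names and weights are placeholders, not part of the template:

    import random

    # Illustrative transaction mix; replace the names and weights with the
    # actual use cases and their percentages.
    TRANSACTION_MIX = {
        "transaction_a": 0.70,
        "transaction_b": 0.20,
        "transaction_c": 0.10,
    }

    def next_transaction():
        """Pick the next transaction to issue according to the weighted mix."""
        names = list(TRANSACTION_MIX)
        weights = list(TRANSACTION_MIX.values())
        return random.choices(names, weights=weights)[0]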


Performance Requirements
  • Response time SLAs, if any.  E.g., 95% of transactions must complete within 100ms and 100% within 5 seconds.
  • Transaction rate targets, if any.  E.g., a peak rate of 1,000 requests per second must be supported.
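
For illustration, the sketch below checks a set of collected response-time samples against such an SLA; the thresholds simply reuse the example numbers above and are not fixed requirements:

    def check_response_time_sla(samples_ms, p95_limit_ms=100, max_limit_ms=5000):
        """Return True if 95% of samples complete within p95_limit_ms and
        100% within max_limit_ms (the example thresholds from this section)."""
        ordered = sorted(samples_ms)
        p95 = ordered[int(0.95 * (len(ordered) - 1))]
        return p95 <= p95_limit_ms and ordered[-1] <= max_limit_ms

    # Example with made-up numbers:
    # check_response_time_sla([42, 65, 80, 95, 4100])  # -> True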


Test Scripts
  • Details on the test scripts covering the use cases, including request payloads (XML, JSON, etc.)
  • Details on data to be used for parameterization
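
As a minimal sketch of data parameterization, the snippet below cycles through rows of a CSV data file so that each iteration uses different values; the file name and column names are hypothetical:

    import csv
    import itertools

    def parameter_feed(path):
        """Cycle through rows of a CSV data file so each iteration gets a
        fresh set of parameter values (file and columns are hypothetical)."""
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        return itertools.cycle(rows)

    # feed = parameter_feed("users.csv")   # hypothetical data file
    # params = next(feed)                  # e.g. {"username": ..., "account_id": ...}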


Environment
  • Which servers to use
  • Configuration to use
  • Any spoofing, stubbing, etc. to be used.
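
Where a downstream dependency is stubbed out, something as simple as the following can stand in for it; the port, path, and payload are placeholders, and any stubbing tool could be used instead:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubDownstreamService(BaseHTTPRequestHandler):
        """Minimal stand-in for a downstream service so the application can
        be load tested without it; the response and port are placeholders."""
        def do_GET(self):
            body = b'{"status": "ok"}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # HTTPServer(("", 8080), StubDownstreamService).serve_forever()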


Test Tool
  • Whether LoadRunner, JMeter, The Grinder, an internally developed tool, etc.
  • Any issues or details relevant to the tool.


Monitoring
  • Metrics to be collected
    • Response times
      • Entry point response times
      • Downstream call response times
    • Transaction rates
    • Server resource usage
      • CPU
      • Memory
      • Network
      • Disk
  • Tools to be used to collect the metrics.
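
As one possible sketch of collecting the server resource metrics above, the snippet below samples CPU, memory, network, and disk periodically; it assumes the psutil library is available, and any monitoring agent could be substituted:

    import time
    import psutil  # assumed available; any monitoring agent could be substituted

    def sample_server_resources(interval_s=5):
        """Periodically print CPU, memory, network, and disk usage; a real
        run would write these to a time-series store instead of stdout."""
        while True:
            cpu = psutil.cpu_percent()
            mem = psutil.virtual_memory().percent
            net = psutil.net_io_counters()
            disk = psutil.disk_io_counters()
            print(f"cpu={cpu}% mem={mem}% "
                  f"net_sent={net.bytes_sent} net_recv={net.bytes_recv} "
                  f"disk_read={disk.read_bytes} disk_write={disk.write_bytes}")
            time.sleep(interval_s)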


Scalability Testing
This section contains details on scalability testing to be done.

Load will be driven up in increments to the point at which peak capacity is reached (a sketch of such an incremental ramp appears at the end of this section).  A bottleneck analysis will then be done to attempt to determine the factor limiting capacity at that point, whether a server resource limitation, a downstream call to a database or service, internal thread blocking, etc.

This section should also state whether any horizontal scalability testing will be done.

Scalability testing will allow the following to be determined:
  • Bottlenecks
  • Capacity
  • Response Time SLAs
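
A sketch of the incremental ramp-up described above might look like the following; run_at_rate stands in for whatever the chosen load tool provides to hold a given request rate, and the step sizes and durations are illustrative only:

    def run_step_load(run_at_rate, start_rps=50, step_rps=50,
                      step_duration_s=600, max_rps=2000):
        """Drive load up in increments; after each step, response times and
        server resources would be reviewed to see whether peak capacity has
        been reached."""
        rps = start_rps
        while rps <= max_rps:
            print(f"Holding {rps} requests/sec for {step_duration_s}s")
            run_at_rate(rps, step_duration_s)
            rps += step_rps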


Stability Testing

This section contains details on stability tests to be run.

To verify stability, a high load should be run against the application for an extended period of time, at a minimum 24 hours and ideally for days or weeks.  A suitable high load can be determined from the results of the scalability test: just below peak capacity, just below the point at which response time begins to degrade.

During the run, server resource usage should be captured and monitored, and error logs monitored.  Trends should be monitored closely:
  • Does response time degrade over time?  That indicates a resource leak. 
  • Does CPU usage increase over time?  That indicates a design or implementation error.  
  • Does memory usage grow steadily, indicating a memory leak?
  • Do errors begin to occur at some point or occur in some pattern?  
  • Does the application eventually crash?
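
One way to check such trends quantitatively, as a rough sketch, is to fit a simple least-squares line to a metric series captured at fixed intervals and look at its slope; real analysis would also review the charts:

    def detect_upward_trend(samples, slope_threshold=0.0):
        """Fit a least-squares line to metric samples taken at fixed
        intervals (response time, memory, CPU) and report whether the
        series trends upward over the run."""
        n = len(samples)
        if n < 2:
            return False
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
                sum((x - mean_x) ** 2 for x in xs)
        return slope > slope_threshold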

Stability testing should also include verification of fault tolerance, which could include the following:   
  • Bringing a downstream system down under load (stopping a downstream database or downstream webservice)
  • Slowing down a downstream service under load.
  • Applying a sudden heavy burst of traffic under load.
  • Triggering error scenarios under load.
  • Dropping network connections under load (using a tool such as TCPView).
  • Bouncing the application under load.
  • Failing over to another server under load.
  • Impairing the network (reducing bandwidth, dropping packets, etc.)
The behavior of the application is observed in each test:
  • Does the application recover automatically?  
  • Does it crash?  
  • Does it cause a cascading effect, affecting other systems? 
  • Does it enter into a degraded state and never recover?  
  • Can the event be monitored and alerted on with available tools?  
  • Are appropriate events logged?
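
To help answer the recovery questions above, a simple watcher can poll a health-check endpoint while a fault is injected and record when the application goes down and when (or whether) it comes back; the URL and intervals below are placeholders:

    import time
    import urllib.request

    def watch_recovery(health_url, check_interval_s=5, timeout_s=2):
        """Poll a health-check endpoint while a fault is injected and log
        transitions between UP and DOWN so recovery time can be measured."""
        last_state = None
        while True:
            try:
                urllib.request.urlopen(health_url, timeout=timeout_s)
                state = "UP"
            except Exception:
                state = "DOWN"
            if state != last_state:
                print(f"{time.strftime('%H:%M:%S')} application is {state}")
                last_state = state
            time.sleep(check_interval_s)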


Performance Baseline/Maintenance/Regression Testing

If relevant to the project, the performance test can be used at a fixed load as a standard benchmark test to be run against each build or version of the application.  This will establish trends over time of each metric and will help verify that performance does not degrade over time or regress in a particular version.
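
As a rough sketch, the comparison against a stored baseline can be automated; the metric names and the 10% tolerance below are placeholders, and the sketch assumes lower values are better (response time, CPU), so throughput-style metrics would need the check inverted:

    def check_for_regression(baseline, current, tolerance=0.10):
        """Flag any metric that has degraded by more than the tolerance
        versus the stored baseline (assumes lower values are better)."""
        regressions = {}
        for name, base_value in baseline.items():
            value = current.get(name)
            if value is not None and value > base_value * (1 + tolerance):
                regressions[name] = (base_value, value)
        return regressions

    # Example with made-up numbers:
    # baseline = {"p95_response_ms": 90, "cpu_percent": 55}
    # current  = {"p95_response_ms": 120, "cpu_percent": 56}
    # check_for_regression(baseline, current)  # -> {"p95_response_ms": (90, 120)}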

