Tuesday, January 31, 2012

Performance Test Team Blueprint

The following is a blueprint of how to set up an effective performance test team.

TEAM OBJECTIVE

  • Verify all software is:
    • scalable
    • stable
    • fault tolerant
  • Resolve bottlenecks whenever encountered
  • Automate wherever possible
  • Use consistent processes, test cases, and techniques
  • Provide good documentation


TEST METHODOLOGY
Test software performance at all levels of granularity:

  • Design
    • Review the architecture and design; verify that the design will scale in theory
  • Module
    • Unit performance tests for critical modules, both white box and functional as needed (see the sketch after this list)
    • Use mocking or stubbing to isolate module from external dependencies
    • Run performance tests in continuous integration environment
    • Test scalability, stability
  • Service/Product
    • Use stubbing or spoofing to isolate service
    • Cover key functional use cases
    • Test scalability, capacity, stability, fault tolerance
  • End to end
    • Black box performance tests
    • Use consistent, stable benchmark on all builds/versions
    • Trend metrics
    • Look for performance regressions
    • Test scalability, capacity, stability, fault tolerance
  • Infrastructure
    • Ensure infrastructure is scalable
    • Ensure infrastructure is stable
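
For example, a module-level performance test can be as simple as timing a critical function over many iterations and failing the build if throughput drops below a threshold. The following is a minimal C sketch; the module function, iteration count, and threshold here are hypothetical, not part of any particular product:

#include <stdio.h>
#include <time.h>

#define ITERATIONS      1000000
#define MIN_OPS_PER_SEC 50000.0

/* Hypothetical module function under test, stubbed here for illustration. */
static int parse_message(const char *msg)
{
    int sum = 0;
    while (*msg)
        sum += *msg++;
    return sum;
}

int main(void)
{
    int i;
    double elapsed, ops_per_sec;
    clock_t start = clock();

    for (i = 0; i < ITERATIONS; i++)
        parse_message("test message");

    elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
    if (elapsed <= 0.0)
        elapsed = 1.0 / CLOCKS_PER_SEC;   /* guard against timer resolution */

    ops_per_sec = ITERATIONS / elapsed;
    printf("throughput: %.0f ops/sec\n", ops_per_sec);

    if (ops_per_sec < MIN_OPS_PER_SEC)
    {
        printf("FAIL: throughput below threshold of %.0f ops/sec\n", MIN_OPS_PER_SEC);
        return 1;   /* non-zero exit fails the CI build */
    }
    return 0;
}

A test like this, run in the continuous integration environment, turns a module-level performance regression into a broken build rather than a late-cycle surprise.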



TEST METRICS

  • Ensure application has adequate monitoring tooling/hooks (for example, metrics on all entry points and downstream calls)
  • Collect uniform and meaningful metrics during performance testing
  • Ensure performance test metrics are comparable to production metrics



TOOLS

  • UI/end to end load generator
  • Service/web service load generator
  • Unit test load generator
  • Code/CPU profiler
  • Memory profiler

PERSONNEL

  • Hire software engineers in test who specialize in performance whenever possible.  Ensure hires can automate repetitive tasks and the performance testing process itself.

Friday, January 27, 2012

How to Test Software Resiliency to Network Problems Using a Network Emulator


Software applications can be destabilized by many factors that are difficult to cover in a test or lab environment.  These include timing-related issues, unanticipated bursts, cascading effects, unexpected administrative or batch processes, and integration complexities.  Some of the most nefarious factors that can destabilize applications are network problems.  Network infrastructure is often a complex black box and can perform inconsistently for various reasons.  Network problems can cause serious application stability problems, including cascading failures, unrecoverable states, and outages. 

Network problems can also be difficult to emulate in a test or lab environment.  One technique for handling this is to use a network emulator in the test environment.  Take two servers between which you want to test various network problems or impairments.  These could be two services in an SOA environment, a web server and an application server, an application server and a database server, etc.  Plug one of the servers into a network emulator port.  Plug the other server into another network emulator port that is tied to the first emulator port, as illustrated below:

With the network emulator in place between the two servers, run the application under load, introduce various network impairments, and observe how the application behaves.  The following are examples of network impairments that could be introduced (a software-based alternative for introducing them is sketched after the list):
  • Network latency.  Introduce latency at various levels, such as 1 ms, 10 ms, 100 ms, 1000 ms, 10000 ms.  Resume normal functioning after varying lengths of time, such as 10 sec, 1 min, 10 min.
  • Bandwidth throttling.  Introduce throttling at various levels, such as 100 Mbps, 10 Mbps, 1 Mbps, 100 Kbps.  Resume normal functioning after varying lengths of time.
  • Network down.  Introduce 100% packet loss for varying lengths of time.
  • Dropped packets.  Emulate dropped packets at various rates for varying lengths of time.
  • Packet accumulation/bursts.  Emulate packet accumulation and bursts for varying lengths of time.
  • Other network impairments as applicable.
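
If a dedicated network emulator appliance is not available, many of these impairments can be approximated in software.  For example, the Linux tc/netem facility can introduce latency and packet loss, assuming a Linux host routing the traffic between the two servers (the interface name eth0 and the values are illustrative):

tc qdisc add dev eth0 root netem delay 100ms      (introduce 100 ms of latency)
tc qdisc change dev eth0 root netem loss 100%     (change to 100% packet loss)
tc qdisc del dev eth0 root netem                  (restore normal functioning)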


For each network problem scenario, the application behavior should be carefully studied.  Answers to questions such as the following should be determined:
  • Does the application behave as expected under the network impairments?
  • Is the application behavior appropriate?  Is timeout, retry, and reconnect functionality working as expected?  (A sketch of this kind of logic follows this list.)
  • When the network recovers to a normal state, does the application recover, or is the application in an unrecoverable state?
  • Is any manual intervention required to bring the application to a normal state?
  • Do any applications or servers require restarting?
  • Are appropriate messages logged?
  • Does excessive message spamming occur?
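
As an illustration of the kind of timeout and retry logic these scenarios exercise, the following is a minimal C sketch of a client that connects with a socket timeout and a bounded number of retries.  This is a sketch for a POSIX environment, not part of any particular application; the timeout, retry limit, and addressing are illustrative:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define MAX_RETRIES     5
#define RETRY_DELAY_SEC 2

/* Returns a connected socket, or -1 after MAX_RETRIES failed attempts. */
int connect_with_retry(const char *addr, int port)
{
    int attempt;
    for (attempt = 1; attempt <= MAX_RETRIES; attempt++)
    {
        struct sockaddr_in server;
        struct timeval timeout = { 5, 0 };   /* 5 second send/receive timeout */
        int sock = socket(AF_INET, SOCK_STREAM, 0);

        if (sock < 0)
            return -1;

        memset(&server, 0, sizeof(server));
        server.sin_family      = AF_INET;
        server.sin_port        = htons(port);
        server.sin_addr.s_addr = inet_addr(addr);

        setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof(timeout));
        setsockopt(sock, SOL_SOCKET, SO_SNDTIMEO, &timeout, sizeof(timeout));

        if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == 0)
            return sock;                     /* connected; caller closes it */

        close(sock);
        fprintf(stderr, "connect attempt %d failed, retrying\n", attempt);
        sleep(RETRY_DELAY_SEC);
    }
    return -1;                               /* caller must handle failure */
}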

Testing network problems in the lab provides an extra measure of assurance and can be well worth the time, expense, and effort.  If network problems still destabilize the application after this type of network problem testing, the test suite should be enhanced to cover the type of scenario that was missed.


Tuesday, January 24, 2012

Using LoadRunner Pacing to Hit a Specific Transaction Rate

It is often necessary to apply load at a specific rate of transactions per second (TPS).  For example, it may be necessary to test every build or patch of an application by running a fixed benchmark of 20 queries per second against the application, measuring the response time and server utilization at that fixed load.  By then examining the trend over time of response time and server resource utilization under uniform load, performance regressions can be easily identified.

One way to set a specific transaction rate using LoadRunner is the pacing feature.
Pacing specifies how often an iteration starts.  For example, if an iteration runs in 300 - 500 milliseconds, setting pacing to 1 second for that script will cause a user to run the iteration once every second, as illustrated below:
Each second the iteration starts and then ends after 400 ms or so.  At the next second interval, the next iteration starts.  An exact transaction rate of 1 iteration per second is reached in this way.  By increasing the user count in this case to 10, a transaction rate of 10 TPS can be achieved.  

An iteration can contain multiple transactions.  If the above example were an iteration that included three transactions (query1, query2, and query3), a single user would run three transactions per second and 10 users would give 30 TPS.

The following formula determines what the pacing in seconds should be set to, given three input parameters:
  • transactionsPerIteration  (the number of transactions included in an iteration)
  • users (the number of users to be run)
  • tps (the target transaction rate per second)

pacing = (transactionsPerIteration * users) / tps

For example, suppose transactionsPerIteration = 3, users = 50, and tps = 10.  The formula gives a pacing of (3 * 50) / 10 = 15 seconds.
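
This calculation can also be expressed as a small C function (a sketch; the function is illustrative and not part of LoadRunner):

double pacing_seconds(double transactionsPerIteration, double users, double tps)
{
    /* Each pacing interval, every user completes one iteration, producing
       users * transactionsPerIteration transactions.  Dividing by the
       target transaction rate yields the required interval in seconds. */
    return (transactionsPerIteration * users) / tps;
}

/* pacing_seconds(3, 50, 10) returns 15.0 */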

One thing that can prevent the target transaction rate from being reached is the iteration slowing down under multiple users, such that an iteration runs longer than the pacing time.  In this case a warning is generated and the transaction rate is not reached.  This can sometimes be avoided by increasing the user count somewhat.

However, the problem could also be due to limitations in the application's vertical scalability.  The application may have concurrency problems that prevent concurrent users from executing the transaction without blocking one another.  In the worst case, the application's concurrency problems may make it impossible to reach the target transaction rate regardless of the number of users.  In any case, it is worthwhile to investigate the bottlenecks that are limiting scalability and to resolve those issues.

Monday, January 9, 2012

How to Test REST Web Service Using LoadRunner

A simple method of load testing a REST service using LoadRunner is as follows:

  • use the http protocol
  • use web_custom_request
  • specify in web_custom_request the appropriate REST method: 
    • Method=GET
    • Method=POST
    • Method=PUT
    • Method=DELETE
The following script gives an example of this method in the case of a REST PUT request using a JSON request message (an XML request message would be similar).  One of the fields in the message is parameterized using a random number parameter.  The parameter is given a format of "0000000000000000000000%09lu" to achieve a 32-character-long number string.

Script

Action()
{
    char *request_json_base;
    char *request_json;


    // save web service url to param {URL}
    char *URL = "http://SERVER:8080/Path";
    lr_save_string(URL, "URL_Param");


    // save json request to param {REQUEST_JSON_PARAM}, parameterize "SomeID" as random number
    request_json_base=
     "{" 
     "    \"Field1\"       : \"ValueOfField1\"," 
     "    \"SomeID\"           : \"{SomeID}\","   
     "    \"Field2\"       : \"ValueOfField2\"," 
     "    \"Field3\"       : \"ValueOfField3\"," 
     "}";

    request_json = lr_eval_string(request_json_base);
    lr_save_string(request_json, "REQUEST_JSON_PARAM");
  
    // set http headers
    web_add_header("Content-Type", "application/json; charset=utf-8");


    // validate response
    web_reg_find("Text=success", LAST);


    // send JSON request
    lr_start_transaction("rest_put");


    web_custom_request("post_to_http_jms_provider",
    "URL={URL_Param}",
    "Method=PUT",
    "TargetFrame=",
    "Resource=0",
    "Referer=",
    "Mode=HTTP",
    "Body={REQUEST_JSON_PARAM}",
    LAST); 


    lr_end_transaction("rest_put", LR_AUTO);
}


Console Output

Action.c(49): Notify: Transaction "rest_put" started.
Action.c(51): Notify: Parameter Substitution: parameter "URL_Param" =  "http://SERVER:8080/Path"
Action.c(51): Notify: Parameter Substitution: parameter "REQUEST_JSON_PARAM" =  "{    "Field1"       : "ValueOfField1",    "SomeID"          : "00000000000000000000001244464508",    "Field2"       : "ValueOfField2",    "Field3"       : "ValueOfField3"}"
Action.c(51): t=258ms: 147-byte response headers for "http://SERVER:8080/Path" (RelFrameId=1, Internal ID=1)
Action.c(51):     HTTP/1.1 200 OK\r\n
Action.c(51):     Content-Type: application/octet-stream\r\n
Action.c(51):     Date: Mon, 09 Jan 2012 18:43:07 GMT\r\n
Action.c(51):     Content-Length: 7\r\n
Action.c(51):     \r\n
Action.c(51): t=271ms: 7-byte response body for "http://SERVER:8080/Path" (RelFrameId=1, Internal ID=1)
Action.c(51):     success
Action.c(51): Registered web_reg_find successful for "Text=success" (count=1)   [MsgId: MMSG-26364]
Action.c(51): web_custom_request("post_to_http_jms_provider") was successful, 7 body bytes, 147 header bytes   [MsgId: MMSG-26386]
Action.c(61): Notify: Transaction "rest_put" ended with "Pass" status (Duration: 0.1255 Wasted Time: 0.0000).


Friday, January 6, 2012

How To Show Response Times in LoadRunner VuGen with Logging Disabled

Occasionally you want to see transaction response times in LoadRunner VuGen when logging is disabled, perhaps because log output is too verbose to easily pick out response times.  In that case, one convenient technique is to output response times using the lr_get_transaction_duration() function.  (Note that lr_get_transaction_duration() must be called before the transaction ends.)

Script Example
Action()
{
  char *transactionName = "test_transaction";
  lr_start_transaction(transactionName);
  SimulateTransactionLogic();
  lr_output_message("ResponseTime,%f", lr_get_transaction_duration(transactionName));
  lr_end_transaction(transactionName, LR_AUTO);
}

SimulateTransactionLogic()
{
  lr_think_time(0.1);
}


Console Output With Logging Disabled
Action.c(9): ResponseTime,0.103419
Vuser Terminated.


Console Output With Logging Enabled
Running Vuser...
Starting iteration 1.
Starting action Action.
Action.c(5): Notify: Transaction "test_transaction" started.
Action.c(17): lr_think_time: 0.10 seconds.
Action.c(9): ResponseTime,0.106920
Action.c(10): Notify: Transaction "test_transaction" ended with "Pass" status (Duration: 0.1083 Think Time: 0.1005).
Ending action Action.
Ending iteration 1.
Ending Vuser...
Starting action vuser_end.
Ending action vuser_end.
Vuser Terminated.

Wednesday, January 4, 2012

networkspeedtest - a free network bandwidth test application

Download

Network performance problems can cause a variety of difficult-to-diagnose application performance problems.  It can be necessary to test the bandwidth of the network to verify whether it is as expected.
networkspeedtest is a free, simple network test application that copies files from one test server to another, showing the network bandwidth used in bytes per second.  This can be compared to a previous baseline to reveal a network problem.  For example, if 50 MB/sec of bandwidth was previously seen between two test servers, and now there is 1 MB/sec between the same two servers, there is likely a network problem.

Installation
Unzip the networkspeedtest directory to c:\temp


Usage
copyto <SERVERNAME> repeat

(Copies the current directory to SERVERNAME\c$\temp\networkspeedtest, showing network speed in bytes per second.)
(The repeat parameter causes the operation to repeat indefinitely.)


Example
(Shows a network bandwidth of about 48 MB/sec from the current server to SERVER1)

C:\Temp\networkspeedtest> copyto SERVER1 repeat

-- Speed of copying files of varying sizes to SERVER1 in bytes per second
    Speed :            48339531 bps.
    Speed :            58255333 bps.
    Speed :            41308327 bps.
    Speed :            48339531 bps.
    Speed :            48339531 bps.
    Speed :            48339531 bps.
    Speed :            48339531 bps.
    Speed :            48339531 bps.
    Speed :            48339531 bps.
    Speed :            48339531 bps.
    Speed :            41308327 bps.
    Speed :            58255333 bps.
    Speed :            48339531 bps.
    Speed :            41308327 bps.

The following shows the Windows Task Manager Networking tab during this test:

Tuesday, January 3, 2012

Handling Client Side Certificates in LoadRunner for Web Services Testing

Web services requiring client side certificates can be handled in LoadRunner as follows:
  • Create a .pem client certificate file.  
    • Client certificates in other formats can be converted to .pem format using a utility such as openssl (see the example conversion after this list).
  • Copy the .pem file to the LoadRunner script directory
  • Set the .pem file in the LoadRunner script using the web_set_certificate_ex method:

        web_set_certificate_ex( 
                "CertFilePath=clientcertificate.pem", 
                "CertFormat=PEM", 
                "KeyFilePath=clientcertificate.pem", 
                "KeyFormat=PEM", 
                "Password=testpassword", 
                LAST);
  • Post XML to the secure URL
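
A client certificate in PKCS#12 (.pfx) format, for example, can typically be converted to .pem format with a command such as the following (the file names here are illustrative):

openssl pkcs12 -in clientcertificate.pfx -out clientcertificate.pem -nodes      (writes the certificate and unencrypted private key to the .pem file)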
The following script provides an example of the client certificate file usage:

#include "as_web.h"
#include "lrw_custom_body.h"

char soapURL[]        = "URL=https://SERVER:8080/Path/v1";

// Verification text
char expectedResponse[] = "<ResponseMessage>Success!</ResponseMessage>";

char requestXMLBody[] = 
"Body="
    "<soap:Envelope xmlns:soap=\"http://www.w3.org/2003/05/soap-envelope\" xmlns:urn=\"urn:test:messages:v1\">\n"
"<soap:Header>\n"
"..."
"</soap:Header>\n"
"<soap:Body>\n"
"..."
"</soap:Body>\n"
    "</soap:Envelope>\n";
                  

void PostXML( char* xmlBody, char* soapURL );

Action()
{
    PostXML( requestXMLBody, soapURL );
    return 0;
}


void PostXML( char* xmlBody, char* soapURL )
{
   /******** Post Transaction ********/
   
   web_add_header( "Content-Type", "application/soap+xml" );

   web_set_certificate_ex( 
        "CertFilePath=clientcertificate.pem", 
        "CertFormat=PEM", 
        "KeyFilePath=clientcertificate.pem", 
        "KeyFormat=PEM", 
        "Password=testpassword", 
        LAST); 

   // save the response
   web_reg_save_param( "transactionResponse",
                       "LB=",
                       "RB=",
                       "Search=Body",
                       "NOTFOUND=Warning",
                        LAST );
   
   lr_start_transaction("post");

   web_custom_request( "postXML",
                       soapURL,
                       "Method=POST",
                       xmlBody,
                       LAST );

   // check for errors in response
   if( NULL != strstr( lr_eval_string( "{transactionResponse}" ), "ServiceException" ) )
   {
       lr_end_transaction( "post", LR_FAIL );
       lr_error_message( "ERROR (exception found in response): %s", lr_eval_string( "{transactionResponse}" ) );
   }
   else
   {
       lr_end_transaction( "post", LR_PASS );
       lr_log_message( lr_eval_string( "{transactionResponse}" ) );
   }
}

Monday, January 2, 2012

How To Unit Test Your LoadRunner Java Scripts with the JUnit Test Framework

This post provides a method for testing LoadRunner Java protocol scripts with the JUnit Java testing framework.  JUnit tests are typically integrated into your continuous integration environment so that your LoadRunner scripts are tested on every check-in.  This ensures that code check-ins do not break your scripts and that your scripts remain functional at any time for load testing.

Suppose that the following is your target LoadRunner java protocol script:


import lrapi.lr;


public class Actions
{
    public int init() throws Throwable 
    {
        return 0;
    }//end of init




    public int action() throws Throwable 
    {
        try
        {
            lr.start_transaction("test");
            lr.log_message("This is the test transaction");
            lr.end_transaction("test", lr.PASS);
        }
        catch (Exception e)
        {
            lr.end_transaction("test", lr.FAIL);
            lr.error_message("ERROR: Received exception " + e.toString());
        }
        return 0;
    }//end of action




    public int end() throws Throwable 
    {
        return 0;
    }//end of end
}

This Java LoadRunner script runs with the following output:


Running Vuser...
Starting iteration 1.
Starting action Actions.
Notify: Transaction "test" started.
This is the test transaction
Notify: Transaction "test" ended with "Pass" status (Duration: 0.0073).
Ending action Actions.
Ending iteration 1.
Ending Vuser...


To test your script with JUnit, two main problems have to be solved:

  • API calls into lrapi.lr have to be stubbed out
  • A JUnit test case has to be defined that calls the LoadRunner script methods

These issues can be solved as follows:
  • Create a class LoadRunnerJunitTests
  • Create a JUnit test method that calls into the LoadRunner classes
  • Remove the lrapi.lr import
  • Create a stub class "lr" implementing the lrapi.lr API methods and member variables used
  • Remove the public visibility of the Actions class
  • Add unit test failure calls in places where the script fails

The resulting class tests your script using the JUnit framework as follows:

package test;

import org.junit.Assert;
import org.junit.Test;

/*
 *  Example of how to test a LoadRunner Java protocol script using JUnit
 */
//TESTABILITY CHANGE: add the following JUnit test class to run the script methods
public class LoadRunnerJunitTests
{
    @Test
    public void testLoadrunner() throws Throwable
    {
        Actions a = new Actions();
        a.init();
        a.action();
        a.end();
    }
}

// Target LoadRunner script to be tested.
//TESTABILITY CHANGE: removed public visibility of the Actions class
class Actions
{
    public int init() throws Throwable 
    {
        return 0;
    }//end of init


    public int action() throws Throwable 
    {
        try
        {
            lr.start_transaction("test");
            lr.log_message("This is the test transaction");
            lr.end_transaction("test", lr.PASS);
        }
        catch (Exception e)
        {
            lr.end_transaction("test", lr.FAIL);
            lr.error_message("ERROR: Received exception : " + e.toString());

            //TESTABILITY CHANGE: fail the unit test if the script fails
            Assert.fail("Loadrunner script failure");
        }
        return 0;
    }//end of action


    public int end() throws Throwable 
    {
        return 0;
    }//end of end
}

//TESTABILITY CHANGE: add the following lr class implementing the needed LoadRunner API methods and member variables
class lr
{
    public static String PASS = "PASS";
    public static String FAIL = "FAIL";
    public static String AUTO = "AUTO";

    public static void start_transaction(String str)
    {
        System.out.println("starting transaction " + str);
    }

    public static void end_transaction(String str, String status)
    {
        System.out.println("ending transaction " + str + " with status " + status);
    }

    public static void log_message(String str)
    {
        System.out.println("log message " + str);
    }

    public static void error_message(String str)
    {
        System.out.println("error message " + str);
    }

    public static void set_transaction_status(String str)
    {
        System.out.println("set_transaction_status " + str);
    }

    public static void abort()
    {
        System.out.println("aborting");
        System.exit(1);
    }
}

The JUnit test of the LoadRunner script runs with the following output:

starting transaction test
log message This is the test transaction
ending transaction test with status PASS