Your test plan should be based on a clearly defined testing objective. This section presents an overview of common testing objectives:
- Measuring End-User Response Time
- Defining Optimal Hardware Configuration
- Checking Reliability
- Checking Hardware or Software Upgrades
- Evaluating New Products
- Identifying Bottlenecks
- Measuring System Capacity
Measuring End-User Response Time
Check how long it takes for the user to perform a business process and receive a response from the server. For example, suppose that you want to verify that while your system operates under normal load conditions, the end users receive responses to all requests within 20 seconds. A graph of load vs. response time for a banking application is a typical way to present this measurement.
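The core measurement can be sketched in a few lines. This is an illustrative example, not LoadRunner code: `run_business_process` is a hypothetical stand-in for driving one real business process, with a sleep simulating server work.

```python
import time

def run_business_process(delay_s: float) -> float:
    """Hypothetical stand-in for one end-user business process.

    In a real test the body would drive the application (for example,
    an HTTP transaction); here a sleep simulates server work.
    """
    start = time.perf_counter()
    time.sleep(delay_s)  # placeholder for the real request/response
    return time.perf_counter() - start

# Verify the response-time objective: every request answered within 20 s.
SLO_SECONDS = 20.0
response_time = run_business_process(delay_s=0.01)
print(f"response time: {response_time:.3f} s, "
      f"{'PASS' if response_time <= SLO_SECONDS else 'FAIL'}")
```

In a real scenario you would collect many such samples under load and compare the distribution, not a single value, against the objective.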
Defining Optimal Hardware Configuration
Check how various system configurations (memory, CPU speed, cache, adaptors, modems) affect performance. Once you understand the system architecture and have tested the application response time, you can measure the application response for different system configurations to determine which settings provide the desired performance levels. For example, you could set up three different server configurations and run the same tests on each configuration to measure performance variations:
- Configuration 1: 200MHz, 64MB RAM
- Configuration 2: 200MHz, 128MB RAM
- Configuration 3: 266MHz, 128MB RAM
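Comparing the three configurations then reduces to running the same scenario against each and summarizing the timings. The sample values below are hypothetical, purely to illustrate the comparison:

```python
from statistics import mean

# Hypothetical response-time samples (seconds) collected by running the
# same test scenario against each server configuration.
samples = {
    "200MHz / 64MB RAM":  [4.1, 4.3, 4.2],
    "200MHz / 128MB RAM": [3.2, 3.1, 3.3],
    "266MHz / 128MB RAM": [2.6, 2.5, 2.7],
}

averages = {config: mean(times) for config, times in samples.items()}
best = min(averages, key=averages.get)
for config, avg in averages.items():
    print(f"{config}: {avg:.2f} s")
print(f"best configuration: {best}")
```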
Checking Reliability
Determine the level of system stability under heavy or continuous workloads. You can use LoadRunner to create stress on the system: force the system to handle extended activity in a compressed time period to simulate the kind of activity a system would normally experience over a period of weeks or months.
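The "compressed time period" idea can be sketched as a tight loop that replays many transactions and tracks the failure rate. Everything here is hypothetical: `transaction` simulates a real operation, and the 1% failure rate is an assumed value for illustration.

```python
import random

def transaction(rng: random.Random) -> bool:
    """Hypothetical transaction; returns True on success.

    A small simulated failure probability stands in for real server
    errors that would surface under sustained stress.
    """
    return rng.random() > 0.01  # ~1% simulated failure rate

rng = random.Random(42)   # fixed seed so the run is repeatable
ITERATIONS = 10_000       # weeks of traffic compressed into one run
failures = sum(1 for _ in range(ITERATIONS) if not transaction(rng))
failure_rate = failures / ITERATIONS
print(f"{failures} failures in {ITERATIONS} iterations "
      f"({failure_rate:.2%})")
```

A rising failure rate as the loop runs longer would indicate a reliability problem such as a resource leak.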
Checking Hardware or Software Upgrades
Perform regression testing to compare a new release of hardware or software to an older release. You can check how an upgrade affects response time (benchmark) and reliability. Application regression testing does not check new features of an upgrade; rather it checks that the new release is as efficient and reliable as the older release.
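The benchmark half of a regression check can be sketched as a comparison of timings from the old and new releases. The sample values and the 10% tolerance below are assumptions chosen for illustration:

```python
from statistics import mean

# Hypothetical benchmark samples (seconds) from the same scenario run
# against the old and new releases.
old_release = [2.0, 2.1, 1.9, 2.0]
new_release = [2.1, 2.0, 2.2, 2.1]

TOLERANCE = 1.10  # allow up to a 10% slowdown before flagging a regression
old_avg, new_avg = mean(old_release), mean(new_release)
regressed = new_avg > old_avg * TOLERANCE
print(f"old: {old_avg:.2f} s, new: {new_avg:.2f} s, "
      f"{'REGRESSION' if regressed else 'OK'}")
```

The same pattern applies to reliability metrics: compare failure counts between releases rather than timings.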
Evaluating New Products
You can run tests to evaluate individual products and subsystems during the planning and design stage of a product’s life cycle. For example, you can choose the hardware for the server machine or the database package based on evaluation tests.
Identifying Bottlenecks
You can run tests that identify bottlenecks on the system and determine which element is causing performance degradation, for example, file locking, resource contention, and network overload. Use LoadRunner in conjunction with network and machine monitoring tools to create load and measure performance at different points in the system.
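Once per-component timings are available from the monitors, finding the dominant contributor is a simple comparison. The stage names and timings below are hypothetical, with database lock contention simulated as the culprit:

```python
# Hypothetical per-component timings (seconds) for one end-to-end request,
# as monitoring tools might report them.
stage_times = {
    "network":       0.12,
    "web server":    0.08,
    "app server":    0.25,
    "database lock": 1.90,  # simulated contention
}

total = sum(stage_times.values())
bottleneck = max(stage_times, key=stage_times.get)
for stage, t in stage_times.items():
    print(f"{stage}: {t:.2f} s ({t / total:.0%} of total)")
print(f"likely bottleneck: {bottleneck}")
```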
Measuring System Capacity
Measure system capacity, and determine how much excess capacity the system can handle without performance degradation. To check capacity, you can compare performance versus load on the existing system, and determine where significant response-time degradation begins to occur. This is often called the “knee” of the response time curve.
Once you determine the current capacity, you can decide if resources need to be increased to support additional users.