What is Performance Testing? : A Complete Guide with Best Practices

Last updated on 26th Dec 2021, Blog, General

About author

Pavni Krish (Senior QA Engineer )

Pavni Krish is a Senior QA Engineer in manual testing with 8+ years of experience, with skills in TestLodge, Zephyr, TestLink, Trello, Jira, Basecamp, Sauce Labs and Browser Shots.


Performance testing is a testing measure that evaluates the speed, responsiveness and stability of a computer, network, software program or device under a workload. Performance testing can involve quantitative tests done in a lab, or in some scenarios, occur in the production environment.

    • Introduction to Performance Testing
    • Why do Performance Testing?
    • Types of Performance Testing
    • Common Performance Problems
    • Performance Testing Process
    • Performance Testing Metrics: Parameters Monitored
    • Example Performance Test Cases
    • Performance Test Tools
    • Which Applications should we Performance Test?
    • Conclusion


      Introduction to Performance Testing:

      Performance testing is a software testing process used to test the speed, response time, stability, reliability, scalability and resource utilisation of a software application under specific workloads. The main objective of performance testing is to identify and eliminate performance bottlenecks in a software application. It is a subset of performance engineering and is also known as “perf testing”.

      The focus of performance testing is checking a software program:

      Speed – determines whether the application responds quickly.

      Scalability – determines the maximum user load that the software application can handle.

      Stability – determines whether the application is stable under varying loads.
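As a rough illustration of the first of these attributes, speed can be quantified by timing an operation repeatedly. The sketch below is plain Python; the CPU-bound lambda is a stand-in for a real application request, not part of any actual tool:

```python
import time

def measure_response_time(func, runs=5):
    """Call func several times and report min/avg latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append((time.perf_counter() - start) * 1000)
    return {"min_ms": min(samples), "avg_ms": sum(samples) / len(samples)}

# Time a CPU-bound operation standing in for an application request.
stats = measure_response_time(lambda: sum(i * i for i in range(100_000)))
print(f"avg response: {stats['avg_ms']:.2f} ms")
```

A real performance test would apply the same idea against the deployed system, under concurrent load rather than sequential calls.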

      Why do Performance Testing?

    • The features and functionality supported by a software system are not the only concern. The performance of a software application also matters, such as its response time, reliability, resource utilisation and scalability. The goal of performance testing is not to find bugs but to eliminate performance bottlenecks.

    • Performance testing is done to provide stakeholders with information about their application with regard to speed, stability and scalability. More importantly, performance testing reveals what needs to be improved before a product goes on the market. Without performance testing, the software is likely to suffer from issues such as: slow running while multiple users use it simultaneously, inconsistencies across different operating systems, and poor usability.

    • Performance testing will determine whether the software meets the speed, scalability and stability requirements under the expected workload. Applications sent to market with poor performance metrics are likely to gain a bad reputation and fail to meet expected sales goals.

    • At the same time, mission-critical applications such as space launch programs or life-saving medical devices must be performance tested to ensure that they last for long periods without deviations.

    • According to Dun & Bradstreet, 59% of Fortune 500 companies experience an estimated 1.6 hours of downtime each week. With the average Fortune 500 company with at least 10,000 employees paying $56 per hour, the labour portion of the downtime cost for such an organisation would be $896,000 weekly, which would translate to more than $46 million per year.

    • Google.com’s downtime of just 5 minutes on 19 August 2013 is estimated to have cost the search giant $545,000. It is estimated that a recent Amazon Web Services outage caused companies to lose up to $1,100 per second in sales.
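The weekly and yearly figures quoted above follow directly from the stated assumptions, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the downtime cost quoted above.
employees = 10_000
hourly_cost = 56        # dollars of labour per employee-hour
downtime_hours = 1.6    # estimated hours of downtime per week
weekly = employees * hourly_cost * downtime_hours
yearly = weekly * 52
print(f"weekly: ${weekly:,.0f}, yearly: ${yearly:,.0f}")  # $896,000 and ~$46.6M
```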

      Types of Performance Testing:

      Load Testing – Tests the ability of the application to perform under expected user load. Its purpose is to identify performance bottlenecks before the software application goes live.

      Stress Testing – This involves testing an application under extreme workloads to see how it handles high traffic or data processing. Its purpose is to identify the breaking point of an application.

      Endurance Testing – This is done to ensure that the software can handle the expected load for a long period of time.

      Spike Testing – Tests the response of software to sudden large spikes in load generated by users.

      Volume Testing – Under volume testing, a large volume of data is fed into a database and the behaviour of the overall software system is monitored. Its purpose is to check the performance of software applications under different database volumes.

      Scalability Testing – The purpose of scalability testing is to determine the effectiveness of a software application in “scaling up” to support increased user load. It helps in planning for capacity addition in your software system.
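The difference between load and spike testing can be sketched in plain Python: the same transaction is run at an expected concurrency, then at a sudden multiple of it. `fake_request` is a hypothetical stand-in; a real test would drive the system under test over the network:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    # Stand-in for one user transaction; a real test would hit the live system.
    t0 = time.perf_counter()
    sum(i * i for i in range(50_000))
    return time.perf_counter() - t0

def run_load(users):
    """Run `users` concurrent transactions and return the worst latency seen."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(lambda _: fake_request(), range(users)))
    return max(latencies)

# Load test at the expected level, then a spike at ten times that level.
expected = run_load(users=5)
spike = run_load(users=50)
print(f"worst latency at expected load: {expected:.4f}s, under spike: {spike:.4f}s")
```

Dedicated tools such as JMeter or LoadRunner do the same thing at far larger scale, with proper ramp-up schedules and reporting.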

      Common Performance Problems:

      Most performance issues revolve around speed, response time, load times, and poor scalability. Speed is often one of the most important attributes of an application. A slow-running application will potentially lose users. Performance testing is done to ensure that an app runs fast enough to retain a user’s attention and interest. Take a look at the following list of common performance issues and note how speed is a common factor in many of them:

      Long Load Time – The load time is generally the initial time it takes for an application to start. This should generally be kept to a minimum. While some applications cannot possibly load in under a minute, load times should be kept within a few seconds wherever possible.

      Poor Response Time – Response time is the time from when a user inputs data into the application until the application outputs a response to that input. Normally, this should happen very quickly; if a user has to wait too long, they lose interest.

      Poor Scalability – A software product suffers from poor scalability when it cannot handle the expected number of users or when it does not accommodate a wide range of users. Load testing should be done to ensure that the application can handle the estimated number of users.

      Bottleneck – Bottlenecks are obstructions in a system that degrade the performance of the overall system. A bottleneck occurs when either coding errors or hardware problems cause a decrease in throughput under certain loads, and it is often caused by a faulty section of code. The key to fixing a bottleneck is to find the section of code that’s causing the slowdown and address it there. Usually bottlenecks are fixed by repairing poorly running processes or by adding additional hardware.

      There are some common performance bottlenecks:

    • CPU usage
    • Memory usage
    • Network usage
    • Operating system limitations
    • Disk usage
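Memory usage, one of the bottlenecks listed above, can be profiled from within Python using the standard-library `tracemalloc` module. The allocation below is a hypothetical stand-in for a suspect code path:

```python
import tracemalloc

# Track peak memory allocated by a suspect code path, a common bottleneck check.
tracemalloc.start()
data = [list(range(1000)) for _ in range(100)]  # stand-in workload
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current: {current / 1024:.1f} KiB, peak: {peak / 1024:.1f} KiB")
```

CPU, disk and network usage are usually watched from outside the process instead, with OS tools or an APM agent.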

      Performance Testing Process:

      The methodology adopted for performance testing may vary widely but the purpose of performance tests remains the same. It can help to demonstrate that your software system meets certain predefined performance criteria. Or it can help to compare the performance of two software systems. It can also help you identify the parts of your software system that degrade its performance.

      Below is a general procedure for how to perform performance testing.

      Identify your test environment – Learn about your physical test environment, production environment, and what test tools are available. Understand the details of the hardware, software and network configuration used during testing before starting the testing process. This will help testers to create more efficient tests. It will also help in identifying potential challenges that testers may face during performance testing processes.

      Identify performance acceptance criteria – This includes targets and constraints for throughput, response time, and resource allocation. It is also essential to identify project success criteria outside these goals and constraints. Testers should be empowered to set performance benchmarks and targets, because often project specifications will not include a wide variety of performance benchmarks; sometimes there might be none. When possible, finding a similar application to compare against is a good way to determine performance goals.

      Planning and Design Performance Testing – Determine how usage may differ between end users and identify key scenarios for testing all possible use cases. It is essential to simulate different types of end users, plan performance test data, and outline what metrics will be collected.

      Configuring the test environment – Prepare the test environment before execution. Also, arrange for equipment and other resources.


      Implement test design – Create performance tests according to your test design.

      Run tests – Execute and monitor tests.

      Analyse, Tune and Retest – Consolidate, analyse and share test results, then fine-tune and test again to see whether performance improves or degrades. Since improvements generally grow smaller with each retest, stop when the remaining bottleneck is the CPU itself; at that point the option left is to increase CPU power.
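The acceptance criteria identified in step two can be encoded as simple thresholds and checked against measured results during the analyse-and-retest step. All numbers below are hypothetical placeholders for real business requirements:

```python
# Hypothetical acceptance criteria; real targets come from business requirements.
criteria = {"avg_response_ms": 2000, "error_rate": 0.01, "throughput_rps": 100}

def evaluate(measured, criteria):
    """Return the list of criteria the measured run failed to meet."""
    failures = []
    if measured["avg_response_ms"] > criteria["avg_response_ms"]:
        failures.append("avg_response_ms")
    if measured["error_rate"] > criteria["error_rate"]:
        failures.append("error_rate")
    if measured["throughput_rps"] < criteria["throughput_rps"]:
        failures.append("throughput_rps")
    return failures

measured = {"avg_response_ms": 1850, "error_rate": 0.02, "throughput_rps": 120}
print(evaluate(measured, criteria))  # this run fails only on error rate
```

Encoding the criteria this way makes the pass/fail decision repeatable across retests instead of a judgment call.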

      Performance Testing Metrics: Parameters Monitored:

      The basic parameters to be monitored during performance testing include:

      Processor Usage – The amount of time a processor spends executing non-idle threads.

      Memory Usage – The amount of physical memory available to processes on the computer.

      Disk Time – The amount of time the disk is busy executing a read or write request.

      Bandwidth – Shows the bits per second used by the network interface.

      Private Bytes – The number of bytes allocated by a process that cannot be shared among other processes. These are used to measure memory leaks and usage.

      Committed Memory – The amount of virtual memory used.

      Page Faults/Second – The overall rate at which faulted pages are processed by the processor. A page fault occurs when a process needs code or data from outside its working set.

      CPU Interrupts Per Second – The average number of hardware interrupts a processor receives and processes each second.

      Disk Queue Length – The average number of read and write requests queued for the selected disk during a sampling interval.

      Network Output Queue Length – The length of the output packet queue, in packets. A queue length of more than two indicates delays and a bottleneck that needs to be addressed.

      Network Bytes Total Per Second – The rate at which bytes are sent and received over the interface, including framing characters.

      Response Time – The time from when a user enters a request until the first character of the response is received.

      Throughput – The rate at which a computer or network receives requests per second.

      Amount of Connection Pooling – The number of user requests that are received from the pooled connections. The more requests that are served by connections in the pool, the better the performance.

      Hit Ratio – This is related to the number of SQL statements that are handled by cached data rather than expensive I/O operations. This is a good place to start when it comes to solving bottleneck issues.

      Hits Per Second – The number of hits on the web server during each second of a load test.

      Rollback Segment – The amount of data that can be rolled back at any one time.

      Database Locks – The locking of tables and databases needs to be monitored and carefully tuned.

      Top Wait – Monitored to determine how fast data is retrieved from memory so wait times can be cut.

      Thread Counts – The health of an application can be measured by the number of threads that are running and currently active.

      Garbage Collection – It is concerned with returning the unused memory to the system. Garbage collection should be monitored for efficiency.
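Several of these metrics, notably throughput, average response time and percentile response time, can be derived from the raw latency samples collected during a run. A minimal sketch, with hypothetical sample values:

```python
import statistics

def summarize(latencies_ms, window_seconds):
    """Compute common performance-test metrics from raw latency samples."""
    latencies = sorted(latencies_ms)
    p95_index = max(0, int(len(latencies) * 0.95) - 1)
    return {
        "throughput_rps": len(latencies) / window_seconds,
        "avg_ms": statistics.mean(latencies),
        "p95_ms": latencies[p95_index],
    }

samples = [120, 95, 130, 400, 110, 105, 98, 115, 102, 99]  # hypothetical samples
print(summarize(samples, window_seconds=2))
```

Note how the single 400 ms outlier drags the average well above the median, which is why percentiles are usually reported alongside averages.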

      Example Performance Test Cases:

    • Verify that the response time does not exceed 4 seconds when 1000 users access the website simultaneously.
    • Verify that the response time of the application under load is within acceptable limits when network connectivity is slow.
    • Check the maximum number of users the application can handle before it crashes.
    • Check database execution time when 500 records are read/written simultaneously.
    • Check CPU and memory usage of the application and database servers under peak load conditions.
    • Verify the response time of the application under low, normal, medium and heavy load conditions.
    • During actual performance test execution, vague terms like acceptable limit, heavy load, etc. are replaced with concrete numbers. Performance engineers set these numbers according to the business requirements and technical landscape of the application.
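The first test case above can be expressed as a runnable pass/fail check. `handle_request` here is a hypothetical stand-in for the real website, and the 4-second limit becomes an assertion:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    # Stand-in for the website serving one request; a real test drives the live system.
    start = time.perf_counter()
    sum(range(10_000))
    return time.perf_counter() - start

def response_times(concurrent_users):
    """Issue requests from `concurrent_users` simultaneous users, return latencies."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        return list(pool.map(lambda _: handle_request(), range(concurrent_users)))

# Test case: response time must not exceed 4 seconds for any concurrent user.
times = response_times(concurrent_users=100)
assert max(times) <= 4.0, "response time exceeded the acceptable limit"
print(f"{len(times)} users served, worst case {max(times):.4f}s")
```

In practice the user count, ramp-up, and limit would come from the concrete numbers the performance engineers set.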

      Performance Test Tools:

      There are various performance testing tools available in the market. The tool you choose will depend on several factors such as the types of protocols supported, licence costs, hardware requirements, platform support, etc. Below is a list of popularly used testing tools.

      LoadNinja – A cloud-based load testing tool that empowers teams to record and immediately play back comprehensive load tests, without complex dynamic correlation, and to run these load tests at scale in real browsers. Teams are able to increase test coverage and cut load testing time by more than 60%.

      HP LoadRunner – The most popular performance testing tool on the market today. The tool is capable of simulating hundreds of thousands of users, placing applications under real-life loads to determine their behaviour under expected load. LoadRunner has a virtual user generator that simulates the actions of live human users.

      JMeter – One of the major tools used for load testing of web and application servers.

      Which Applications should we Performance Test?

    • Performance testing is typically done for client-server based systems. An application that does not follow a client-server architecture generally does not require performance testing.

    • For example, Microsoft Calculator is neither client-server based nor used by multiple simultaneous users; hence it is not a candidate for performance testing.

    • Realistic tests that provide sufficient analysis depth are important elements of “good” performance tests. It’s not only about simulating a large number of transactions, but estimating real user scenarios that provide insight into how your product will perform live.

    • Performance tests generate large amounts of data. The best performance tests are those that allow quick and accurate analysis of all performance problems, identifying their causes.

    • With the emergence of Agile development methodologies and DevOps practices, performance testing must remain reliable while respecting the accelerating pace of the development, testing, and production cycles.



      Conclusion:

      In software engineering, performance testing is essential before marketing any software product. It ensures customer satisfaction and protects an investor’s investment against product failure. The cost of performance testing is usually outweighed by the gains in customer satisfaction, loyalty and retention.

      Performance testing is the practice of evaluating how a system performs in terms of responsiveness and stability under a particular workload. Performance tests are typically executed to check speed, robustness, reliability and application size. This process includes “performance” indicators such as:

    • Server request processing time
    • Allowable concurrent user volume
    • Processor and memory consumption
    • Number and types of errors encountered with the app

      Performance testing gathers all the tests that verify the speed, robustness, reliability and correct sizing of an application. It examines several indicators such as browser, page and network response times, server query processing time, the number of allowable concurrent users, CPU and memory consumption, and the number and type of errors that may be encountered while using the application.
