25+ Performance Testing Interview Questions & Answers [ Step-In ]

Last updated on 03rd Jul 2020, Blog, Interview Questions

About author

Yogesh (Sr Project Manager)

He has deep expertise in his industry domain, with 7+ years of experience, and has been a technical blog writer for the past 4 years, sharing informative knowledge with job seekers.


In software quality assurance, performance testing is in general a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. The performance tests you run will help ensure your software meets the expected levels of service and provide a positive user experience. They will highlight improvements you should make to your applications relative to speed, stability, and scalability before they go into production.

1. What is Performance Testing?

Ans:

Performance Testing is a type of software testing which ensures that the application is performing well under the workload. The goal of performance testing is not to find bugs but to eliminate performance bottlenecks. It measures the quality attributes of the system.  

The attributes of Performance Testing include:

  • Speed – It determines whether the application responds quickly.
  • Scalability – It determines the maximum user load the software application can handle.
  • Stability – It determines if the application is stable under varying loads.

2. What are the different types of Performance Testing?

Ans:

The different types of performance testing are:

  • Load testing – It checks the application’s ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.
  • Stress testing – This involves testing an application under extreme workloads to see how it handles high traffic or data processing. The objective is to identify the breaking point of an application.
  • Endurance testing – It is done to make sure the software can handle the expected load over a long period of time.
  • Spike testing – This tests the software’s reaction to sudden large spikes in the load generated by users.
  • Volume testing – Under volume testing, a large volume of data is populated in a database and the overall software system’s behavior is monitored.
  • Scalability testing – The objective of scalability testing is to determine the software application’s effectiveness in scaling up to support an increase in user load.

3. What are the common performance problems faced by users?

Ans:

Some of the common performance problems faced by users are:

  • Longer loading time
  • Poor response time
  • Poor Scalability
  • Bottlenecking caused by coding errors or hardware issues

4. Name some of the common Performance Testing Tools.

Ans:

The market offers a large number of tools for test management, performance testing, GUI testing, functional testing, etc. I would suggest opting for a tool that is available on demand, easy to learn given your skills, and generic and effective for the required type of testing. Some of the common Performance Testing tools are:

  • LoadView
  • Apache JMeter
  • LoadUI Pro
  • WebLoad
  • NeoLoad

5. List out some common Performance bottlenecks.

Ans:

Some common performance bottlenecks include:

  • CPU Utilization
  • Memory Utilization
  • Networking Utilization
  • OS limitation
  • Disk Usage

6. What are the Parameters considered for Performance Testing?

Ans:

The Parameters for Performance Testing are:

  • Memory usage
  • Processor usage
  • Bandwidth
  • Memory pages
  • Network output queue length
  • Response time
  • CPU interruption per second
  • Committed memory
  • Thread counts
  • Top waits

7. What are the factors for selecting Performance Testing Tools?

Ans:

 The factors that you must keep in mind while selecting Performance Testing Tools include:

  • Customer preference tool
  • Availability of license within customer machine
  • Availability of test environment
  • Additional protocol support
  • License cost
  • Efficiency of tool
  • User options for testing
  • Vendor support

8. What is the difference between Performance Testing & Functional Testing?

Ans:

Performance Testing vs. Functional Testing:

  • Purpose – Performance testing is done to validate the behavior of the system under various load conditions; functional testing is done to verify the accuracy of the software against expected output for definite inputs.
  • Execution – Performance testing gives the best results when automated; functional testing can be done manually or automated.
  • Users – In performance testing, several users perform the desired operations; in functional testing, one user performs all the operations.
  • Involvement – Performance testing requires involvement from the customer, tester, developer, DBA, and network management team; functional testing requires involvement from the customer, tester, and developer.
  • Environment – Performance testing requires a close-to-production test environment and several hardware facilities to generate the load; functional testing does not need a production-sized test environment, and hardware requirements are minimal.

9. What is the throughput in Performance Testing?

Ans:

Throughput refers to the amount of data transferred between client and server in response to requests in a given period of time. It is measured in terms of requests per second, calls per day, reports per year, hits per second, etc. The performance of an application depends on the throughput value: the higher the throughput, the better the performance of the application.
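As a rough illustration (the numbers below are hypothetical, not from the original answer), throughput in requests per second is simply the number of completed requests divided by the test duration:

```c
#include <stdio.h>

/* Hedged sketch: computing throughput as requests per second
   from assumed test numbers. */
int main(void) {
    double total_requests = 6000.0;  /* requests completed during the test */
    double duration_sec   = 120.0;   /* test duration in seconds           */

    double throughput = total_requests / duration_sec;  /* 50 requests/sec */
    printf("Throughput: %.1f requests/second\n", throughput);
    return 0;
}
```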

10. What are the benefits of LoadRunner as a testing tool?

Ans:

Some of the benefits of LoadRunner are:

  • Versatility
  • Test Results
  • Easy Integrations
  • Robust reports
  • Enterprise Package 

11. What is Endurance Testing & Spike Testing?

Ans:

  • Endurance Testing – It is a type of performance testing where testing is conducted to evaluate the behavior of the system when a significant workload is applied continuously.
  • Spike Testing – It is a type of performance testing that is performed to analyze the behavior of the system when the load is increased substantially. 

12. What are the common mistakes done in Performance Testing?

Ans:

The common mistakes done in Performance Testing are:

  • Direct jump to multi-user tests
  • Test results not validated
  • Unknown workload details
  • Too small run duration
  • Lacking long-duration sustainability test
  • Confusion on a definition of concurrent users
  • Data not populated sufficiently
  • A significant difference between test and production environment
  • Network bandwidth not simulated
  • Underestimating performance testing schedules
  • Incorrect extrapolation of pilots
  • Inappropriate base-lining of configurations

13. What are the different phases for automated Performance Testing?

Ans:

 Phases for automated performance testing include: 

  • Design or Planning
  • Build
  • Execution
  • Analyzing & Tuning

14. What is the difference between Benchmark Testing & Baseline Testing?

Ans:

  • Benchmark Testing – It is the method of comparing your system’s performance against an industry standard set by another organization.
  • Baseline Testing – It is the procedure of running a set of tests to capture performance information. When a future change is made to the application, this information is used as a reference.

15. What is concurrent user load in Performance Testing?

Ans:

Concurrent user load in performance testing occurs when many users hit the same functionality or operation at the same time. Concurrent user load testing sends simultaneous artificial traffic to a web application in order to stress the infrastructure and record system response times during periods of sustained heavy load.

16. What is a Protocol? Name some Protocols.

Ans:

A protocol is a set of rules that governs the communication of information between two or more systems.

Some of the Protocols are :

  • HTTP
  • HTTPS
  • FTP
  • Web Services
  • Citrix

 17. What is Performance Tuning?

Ans:

Performance tuning is the improvement of system performance. Typically, in computer systems, the motivation for such activity is a performance problem, which can be either real or anticipated; most systems will respond to increased load with some degree of decreasing performance.

18. What are the types of Performance Tuning?

Ans:

 There are two types of Performance Tuning:

  • Hardware Tuning – Enhancing, adding, or replacing the hardware components of the system under test, and making changes at the framework level to augment the system’s performance, is called hardware tuning.
  • Software Tuning – Identifying the software level bottlenecks by profiling the code, database, etc. Fine-tuning or modifying the software to fix the bottlenecks is called software tuning.

19. List the need for opting for Performance Testing.

Ans:

Performance testing is generally required to validate the following:

  • The response time of the application for the intended number of users.
  • The utmost load-resisting capacity of an application.
  • The capability of the app under test to handle a particular number of transactions.
  • The constancy of an application under usual and unexpected user load.
  • Making sure that users have an appropriate response time on production.

20. What is the reason behind the discontinuation of manual load testing?

Ans:

There were certain drawbacks of manual load testing that led to the adoption of automated load testing. Some of the reasons are:

  • Complicated procedure to measure the performance of the application precisely.
  • Complex synchronization procedures between two or more users.
  • Difficult to assess and recognize the outcomes & bottlenecks.
  • The increased overall infrastructure cost.


    21. What is Profiling in Performance Testing?

    Ans:

    Profiling is the process of pinpointing performance bottlenecks at a fine-grained level. It mainly involves developers or performance testers. You can profile any application layer that is being tested. If you want to profile an application, you may need to use performance-profiling tools for the application servers.

    22. What are the entry and exit criteria for Performance Testing?

    Ans:

    Performance testing can start as early as the design phase. After the tests are executed, results are collected and analyzed in order to make performance improvements. Performance tuning is carried out throughout the development life cycle, based on factors such as scalability and reliability under load, application release time, and the performance and stress tolerance criteria.

    23. What are the activities involved in Performance Testing?

    Ans:

    The activities involved in Performance Testing are:

    • Requirement gathering
    • Tool selection
    • Performance test plan
    • Performance test development
    • Performance test modeling
    • Test Execution
    • Analysis
    • Report

    24. What is Stress Testing & Soak Testing?

    Ans:

    • Stress Testing – It is a software testing activity that determines the robustness of software by testing beyond the limits of normal operation. The performance results are analyzed to know how far the resources can sustain the upper limit while still delivering acceptable performance.
    • Soak Testing – Soak testing is a type of performance test that verifies a system’s stability and performance characteristics over an extended period of time. System resources are monitored, along with how their performance is affected as the load increases.

    25. Differentiate between Performance Testing & Performance Engineering

    Ans:

    Performance testing is the process of identifying the issues that hamper the performance of an application, whereas performance engineering is improving the performance of the application by acting on the measurements obtained from performance testing and making the necessary changes in terms of architecture, resources, implementation, etc.

    26. How would you identify the performance bottleneck situations? 

    Ans:

    Performance bottlenecks are recognized by monitoring the app under load and stress conditions. To find bottleneck situations in performance testing, testers usually use LoadRunner because it supports many different types of monitors, such as the run-time monitor, network delay monitor, web resource monitor, database server monitor, firewall monitor, ERP server resources monitor, and Java performance monitor. These monitors in turn help the tester to establish the condition that causes an increase in the response time of the application.

    27. How to perform Spike Testing in JMeter?

    Ans:

    In JMeter, spike testing can be achieved by using the Synchronizing Timer. Threads are held by the synchronizing timer until a specific number of threads have been blocked, and are then released at once, creating a large instantaneous load.

    28. What are the different components of LoadRunner?

    Ans:

    The major components of LoadRunner include:

    • VUGen– Records Vuser scripts that emulate the actions of real users.
    • Controller – Administrative center for creating, maintaining, and executing load test scenarios.
    • Load Generator – An agent through which we can generate load.
    • Analysis – Provides graphs and reports that summarize the system performance 

    29. What is the correlation?

    Ans:


    Correlation is used to handle the dynamic values in a script. The dynamic values change for each user action, whether the action is replayed by the same user or by a different user. In both cases, correlation takes care of these values and prevents the script from failing during execution.

    30. Explain the difference between automatic correlation and manual correlation?

    Ans:

    • Manual Correlation involves identifying the dynamic value, finding its first occurrence, identifying the unique boundaries for capturing it, and writing the correlation function web_reg_save_param before the request whose response contains the first occurrence of the dynamic value (a minimal sketch is shown below).
    • Automated correlation works on predefined correlation rules. The script is played back and, on failure, scanned for autocorrelation. VuGen identifies the places where the correlation rules apply and correlates the values once you approve them.
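Below is a minimal VuGen (C) sketch of manual correlation. The URL, boundaries, and parameter name are assumptions for illustration, not part of the original answer:

```c
Action()
{
    /* Register capture of a hypothetical dynamic value before the request
       whose response contains it (assumed boundaries and names). */
    web_reg_save_param("SessionID",
                       "LB=sessionId=",   /* left boundary  */
                       "RB=&",            /* right boundary */
                       "Ord=1",
                       LAST);

    /* Request whose response returns the dynamic value */
    web_url("login", "URL=http://example.com/login", LAST);

    /* Later requests reuse the captured value instead of the recorded literal */
    web_url("home",
            "URL=http://example.com/home?sessionId={SessionID}",
            LAST);

    return 0;
}
```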

     31. What is NeoLoad?

    Ans:

    NeoLoad is a load testing tool. It measures the performance of web or mobile applications and also provides programmatic insights to developers to help them optimize performance before the application goes into production. It is available in both French and English.

    32. What is the Modular approach of scripting?

    Ans:

    In the Modular approach, a function is created for each request such as login, logout, save, delete, etc. This approach gives more freedom to reuse the request and saves time. With this approach, it is recommended to work with web custom requests. 

    33. How are the steps validated in a Script?

    Ans:

    Each step in the script is validated against the content on the returned page. A content check verifies whether specific content is present on the web page or not. There are two types of content check used in LoadRunner:

    • Text Check– This checks for a text/string on the web page
    • Image Check– This checks for an image on a web page.

    34. How to identify performance testing use cases for any application?

    Ans:

    There are certain measures that you need to consider while monitoring the performance tests. Users have to be working on the core functionality of the application, trying to perform operations on databases like CRUD, and the number of users trying to concurrently access the application should be more. With all these criteria, even manual test cases can help you in identifying the performance measurements.

    35. What is Correlate graph and overlay graph?

    Ans:

    • Correlate graph – Two graphs are plotted against each other: the Y-axis of the active graph becomes the X-axis of the merged graph, and the Y-axis of the graph that was merged becomes the merged graph’s Y-axis.
    • Overlay graph – Two graphs that share the same X-axis are plotted together. The left Y-axis in the merged graph shows the values of the current graph, and the right Y-axis shows the values of the graph that was merged.

    36. What Is Concurrent User Hits In Load Testing?

    Ans:

    When multiple users, without any time difference, hit the same event of the application under load test, it is called a concurrent user hit. A concurrency point is added so that multiple virtual users can work on a single event of the application. By adding concurrency points, the virtual users that arrive early wait for the other virtual users running the scripts; only when all the users reach the concurrency point do they start hitting the requests.

    37. What Is The Life-cycle Of Testing?

    Ans:

    • Planning the Test
    • Developing the Test
    • Execution of the Test
    • Analysis of Results

    38. What Drawbacks Do Manual Load Tests Have?

    Ans:

    The manual load testing drawbacks are:

    • It is very expensive to do manual load testing, as real users charge by the hour.
    • With manual load testing, load testing for longer durations (for example, 7 days) is not possible, as real users can work a maximum of about eight hours daily.
    • You will not get accurate result correlation, as there are delays between the actions of the users.
    • Result collection is hard, as the results overlap with each other.
    • Overall, it is simply hard to do.

    39. What Is The Difference Between Performance Testing And Performance Engineering?

    Ans:

    Performance testing:

    In Performance testing, the testing cycle includes requirement gathering, scripting, execution, result sharing, and report generation. 

    Performance engineering:

    Performance engineering is a step ahead of performance testing: after execution, the results are analyzed with the aim of finding the performance bottlenecks, and a solution is provided to resolve the identified issues.

    40. What Are The Automated Performance Testing Phases?

    Ans:

    The phases involved in automated performance testing are:

    • Planning/Design: This is the primary phase, where the team gathers the requirements for performance testing. Requirements can be business, technical, system, and team requirements.
    • Build: This phase consists of automating the requirements collected during the design phase.
    • Execution: It is done in multiple phases and consists of various types of testing, such as baseline and benchmark testing.
    • Analyzing and tuning: During performance testing we capture all the details related to the system, such as response time and system resources, to identify the major bottlenecks. After the bottlenecks are identified, we tune the system to improve overall performance.

    41. What Is Distributed Load Testing?

    Ans:

    Distributed load testing: in this, we test the application with a number of users accessing it at the same time. In distributed load testing, test cases are executed to determine the application’s behavior, which is monitored, recorded, and analyzed while multiple users concurrently use the system. Distributed load testing is the process of using multiple machines to simulate the load of a large number of users; the reason for doing it is to overcome the limitation of a single machine in generating a large number of threads.

    42. List Down Any Challenge You Faced In Your Performance Career And How Did You Overcome It?

    Ans:

    Yes, I faced many challenges, such as defining the scope of the application and its breakpoints, which I overcame by studying the historical data of the application and deciding the values based on it, and setting up the performance environment, including proxy bypassing and connecting to the server under test.

    43. What Is Ip Spoofing And Why Is It Used?

    Ans:

    IP spoofing is used to spoof the system so that each host machine can use many different IPs, creating a hypothetical environment in which the system believes the requests are coming from different locations.

    44. How Do You Identify Which Protocol To Use For Any Application?

    Ans:

    • Previously, performance testers had to depend heavily on the development team to know which protocol the application was using to interact with the server. Sometimes it was also speculative.
    • However, LoadRunner provides great help in the form of the Protocol Advisor from version 9.5 onwards. The Protocol Advisor detects the protocols that the application uses and suggests the possible protocols in which the script can be created to simulate the real user.

    45. How Do You Do The Analysis Of The System For Identifying Issues?

    Ans:

    We can study the various graphs generated by the tool, such as the response time, throughput, and running Vusers graphs, and we can also check the server logs to identify issues in the system.

    46. How Can You Calculate Pacing For Your Application?

    Ans:

    We can calculate pacing from the formula:

    • No. of users = (Response Time in seconds + Pacing in seconds) * TPS
    • TPS is transactions per second.

    Rearranging gives Pacing = (No. of users / TPS) − Response Time, as the sketch below illustrates.
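A quick worked example of that rearrangement, using hypothetical numbers (100 users, a 10 TPS target, and a 2-second average response time):

```c
#include <stdio.h>

/* Hedged sketch: solving  users = (response_time + pacing) * tps  for pacing,
   with assumed numbers. */
int main(void) {
    double users         = 100.0;  /* concurrent virtual users       */
    double tps           = 10.0;   /* target transactions per second */
    double response_time = 2.0;    /* average response time, seconds */

    double pacing = users / tps - response_time;  /* = 8 seconds here */
    printf("Required pacing: %.1f seconds\n", pacing);
    return 0;
}
```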

    47. How Do You Find Out The Performance Bottlenecks?

    Ans:

      Performance Bottlenecks can be identified by using different counters such as response time, throughput, hits/sec, network delay graph. We can analyze them and tell where the suspected performance bottleneck is.

    48. What Is Think Time?

    Ans:

    Think time is the real-time wait between two consecutive transactions. For example, a real user waits to evaluate the data received before performing the next step; that waiting time can be stated as think time.
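In a VuGen (C) script, for example, think time is usually captured automatically during recording as lr_think_time calls; a hand-written equivalent might look like this (the 5-second value is just an assumption):

```c
lr_think_time(5);   /* pause the virtual user for 5 seconds between steps */
```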

    49. Can You Describe A Scenario Where Throughput Increases Along With Response Time, i.e., When They Are Directly Proportional?

    Ans:

    Yes, it is possible when you have a lot of CSS (Cascading Style Sheets) in your application that takes a long time to render. In such a situation we can expect both throughput and response time to increase together.

    50. What Is The Reason Behind Performing Automated Load Testing?

    Ans:

    • It is difficult to measure the performance of the application accurately.
    • It is difficult to synchronize between the users.
    • A large number of real users would need to be involved in performance testing.
    • It is difficult to analyze and identify the results and bottlenecks.
    • It increases the infrastructure cost.

    51. What Is The Difference Between Simultaneous User And Concurrent User?

    Ans:

    A simultaneous user waits for other users to complete before starting its own activity, whereas concurrent users can be, for example, two users who log into the system and perform different activities at the same time.

    52. What Activities Are Performed During Performance Testing Of Any Application?

    Ans:

    The following activities are performed during performance testing of an application:

    • Create user scenarios
    • User Distribution
    • Scripting
    • The dry run of the application
    • Running load test and analyzing the result

    53. Explain The Sub-genres Of Performance Testing?

    Ans:

    Following are the sub-genres of Performance Testing:

    • Load Testing: It is conducted to examine the performance of the application under a specific expected load. The load can be increased by increasing the number of users performing a specific task on the application in a specific time period.
    • Stress Testing: It is conducted to evaluate system performance by increasing the number of users beyond the limits of its specified requirements. It is performed to understand at which level the application crashes.
    • Volume Testing: It tests an application in order to determine how much data it can handle efficiently and effectively.
    • Spike Testing: It examines what happens to the application when the number of users suddenly increases or decreases sharply.
    • Soak Testing: It is performed to understand the application’s behavior, stability, and response time when load is applied for a long period of time.

    54. What are some important performance testing tools?

    Ans:

    • HP Loader
    • HTTP Load
    • Proxy Sniffer
    • Rational Performance Tester
    • JMeter
    • Borland Silk Performer

    55. Why does JMeter become a natural choice of the tester when it comes to performance testing?

    Ans:

    JMeter tool has benefits like

    • It can be used for testing both static resources like HTML and JavaScript, as well as dynamic resources like Servlets, Ajax, JSP, etc.
    • JMeter can determine the maximum number of concurrent users that your website can handle
    • It provides a variety of graphical analyses of performance reports

    56. What are all things involved in the Performance Testing Process?

    Ans:

    Performance Testing life cycle includes the following steps/phases

    • Right testing environment: Figure out the physical test environment before carrying out performance testing, including hardware, software, and network configuration
    • Identify the performance acceptance criteria: It contains constraints and goals for throughput, response times and resource allocation
    • Plan and design Performance tests: Define how usage is likely to vary among end-users and find key scenarios to test for all possible use cases
    • Test environment configuration: Before the execution, prepare the testing environment and arrange tools, other resources, etc.
    • Test design implementation: According to your test design, create a performance test
    • Run the tests: Execute and monitor the tests
    • Analyze, tune, and retest: Analyze, consolidate, and share test results. After that, fine-tune and test again to see if there is any enhancement in performance. Stop the test, if the CPU is causing bottlenecks.

    57. What are the important factors you must consider before selecting performance tools?

    Ans:

    • Customer preference tool
    • Availability of license within customer machine
    • Availability of test environment
    • Additional protocol support
    • License cost
    • Efficiency of tool
    • User options for testing
    • Vendor support

    58. What is the difference between JMeter and SOAPUI?

    Ans:

    • JMeter is used for load and performance testing of many technologies and protocols, such as HTTP, JDBC, JMS, and Web Services (SOAP); SoapUI is specific to web services and has a more user-friendly IDE.
    • JMeter supports distributed load testing; SoapUI does not support distributed load testing.

    59.  Explain the steps required in JMeter to create a performance test plan

    Ans:

    To create a performance test plan in JMeter

    • Add thread group
    • Add JMeter elements
    • Add Graph result
    • Run test & get the result

    60. How can you execute spike testing in JMeter?

    Ans:

    In JMeter, spike testing can be done by using the Synchronizing Timer. Threads are held by the synchronizing timer until a specific number of threads have been blocked, and are then released at once, creating a large instantaneous load.

    61. Name and elucidate the types of performance tuning.

    Ans:

    In order to improve the performance of the system, primarily there are two types of tuning performed-

    • Hardware tuning: Enhancing, adding, or replacing the hardware components of the system under test, and making changes at the framework level to augment the system’s performance, is called hardware tuning.
    • Software tuning: Identifying the software level bottlenecks by profiling the code, database, etc. Fine-tuning or modifying the software to fix the bottlenecks is called software tuning.

    62. What is the difference between front-end and back-end performance testing? Which one is more important?

    Ans:

    • Both front-end and back-end performance tests measure how fast an application responds, but they measure different components of that overall user response time.
    • Front-end performance is concerned with how quickly text, images, and other page elements are displayed in a user’s browser. Back-end performance is concerned with how quickly those elements are processed by the site’s servers and sent to the user’s machine upon request. Front-end performance is the part of the iceberg above the waterline, and back-end performance is everything underneath that you can’t see.
    • Both are important, because both can determine whether a user continues to use your application. Front-end performance tends to be easier to test and can provide some quick wins, due to the large number of optimization tweaks that can be made without writing code. Back-end performance tends to be more difficult to test because it often uncovers problems with the underlying infrastructure and hardware that are of a more technical nature.

    63. Why does performance testing matter?

    Ans:

    Performance testing matters because application performance has a significant impact on user experience. A site that is unreachable or slow to load due to an inability to cope with unexpected load will cause users to browse to competitors’ sites and will tarnish the brand’s reputation.

    64. How do you know when a load test has passed?

    Ans:

    Ideally, you would have discussed your non-functional requirements with key stakeholders before load testing begins. This means that you set your own pass criteria before you even run the tests. You would ideally have a list of specific transactions (selected based on criticality or complexity according to the business) whose response times need to fall under a threshold you have predetermined. “Fast” is not specific enough; a number is better. Depending on what kind of tests you are running (soak, stress, volume, etc.), you may have other non-functional requirements about duration, resource utilization on the server side, or specific outcomes to the scenarios you would like to test.

    65. What would you advise clients who say they can’t afford to performance test because they don’t have the resources to maintain several load generators on-site?

    Ans:

    This is the main reason that performance testing has for so long been considered a luxury that only big companies can afford. Luckily, technology moves on, and in 2018 we’re at a point where everyone can load test. The big innovation here has been the cloud and the ability to spin up thousands of virtual machines with a few mouse clicks. Services like Amazon AWS, Microsoft Azure, and Google Cloud make it so that every budding entrepreneur can “borrow” the computing hardware necessary to do cloud load testing with thousands of users and then give it back after the test, without the hassle and cost of maintaining it. I would advise clients to look for a cloud load testing solution that utilizes virtual machines in the cloud to run their tests affordably.

    66. You run a load test against a server with 4GB RAM, and the results show an average response time of 30 seconds for a particular request. The production server has been allocated 8GB RAM. What would you expect the average response time of the same request to be in production, given the same load?

    Ans:

    Trick question! While you may be tempted to answer that the response time would be halved to 15 seconds, the reality is rarely that convenient. Response times are a factor of much more than memory. Things like CPU utilization, network throughput, latency, load-balancing configuration, and the application logic itself are always going to influence load tests. You can’t assume a linear improvement in response time just because you’ve upgraded one part of the hardware. This is why it’s important to load test against an environment that is as production-like as possible.

    67. What is a percentile and why would you look at percentile response times when you already have average response times?

    Ans:

    A percentile is a statistical measure that describes the value that a certain percentage of the sample either meets or falls under. For example, a 90th percentile response time of 5 seconds means that 90% of the responses took 5 seconds or less to be returned. It is an important measure because it softens the impact that outliers have on more inclusive measures such as averages. A transaction with an average response time of 2.5 seconds may seem perfectly acceptable to the business, but if the 90th percentile response time is 20 seconds, that is a good reason to investigate further.

    68. What are some trends in performance testing that you think will continue in 2019 and beyond?

    Ans:

    • Cloud is an easy answer, as the cloud brings some compelling benefits in terms of reduced cost and ease of use. However, I already touched on that in a previous question, so I’ll talk about another trend: open source.
    • There’s a reason that open source is still around: it works. The main advantage of open-source tools is not that they are free, although that is a big part of the appeal. The real advantage is that open-source tools are community-based and community-led, which means features get built for them faster than commercial tools can sometimes keep up with, and they are built by users of the tool themselves. Open-source tools like JMeter, Gatling, and Selenium have revolutionized the industry with their impressive feature sets, built by a growing community that has developed plugins for everything you can think of. More and more, even big companies with big budgets choose to go open source simply because of the wealth of knowledge about these tools that is already available for free.

    69. How do the volume and endurance tests differ from each other?

    Ans:

    Endurance tests analyze application behavior over a long period of time; during endurance tests, the most critical thing to monitor is memory consumption. Volume tests aim to analyze the application’s behavior when there is a huge volume of data.

    70. How would you decide which tool to use, setting aside budget issues? Assume you have plenty of money.

    Ans:

    • Required skills are the most important issue. Some tools require JavaScript, others Scala or Python. We need to consider the test team’s skills.
    • Protocol support: Some tools support only a limited range of protocols to simulate. We need to understand what kind of protocols we need to test. Combining protocols in a test is also crucial, as a scenario might start with a TCP/IP request and continue with HTTPS; combining and maintaining them must be easy.
    • Reporting: Some tools generate poor reports, and you have to deal with all those numbers yourself to come up with a conclusion. We need detailed reports that show how many users an application can handle and which pages or modules load slowly. The most important report is the response time graph.
    • Installation: Some tools install in a minute, but some commercial tools require many components to be installed before you can start using them. The OS versions they support are also important; for example, JMeter supports Windows, Linux, and other environments, but HP LoadRunner requires Windows for its core modules.
    • Cloud integration: Creating huge loads requires so many resources so cloud integration is a must. You can run JMeter and Gatling on SaaS platforms like Loadium and Blazemeter.

    71. Which requirements are crucial for performance testing?

    Ans:

    • Functional
    • Non-functional
    • Usability
    • Accessibility

    Non-functional requirements define how the application should behave. That’s why, before starting any performance testing project, we need to define our KPIs in order to validate our test execution results.

    72. What’s the difference between a record and playtests and API testing?

    Ans:

    In API testing, we only make requests to an endpoint. In record-and-play performance testing, we make requests not only to the endpoint but also to HTML, JS, and CSS files, or to a CDN server to retrieve static images. Record-and-play therefore increases test coverage.

    73. What are Ramp-Up and Ramp-Down?

    Ans:

    • The rate at which we increase the load on the system is called the ramp-up period. During that time, virtual users or threads are started.
    • The rate at which virtual users/threads terminate their execution is the ramp-down period.

    74. Why would you need a CSS extractor in a performance test?

    Ans:

    In web application testing, we often need to extract data from a page, such as the price of a product or a username. To do this, we can use a CSS extractor.

    75. What kind of data extraction strategies can you use besides CSS?

    Ans:

      You can use regular expressions to extract data. JsonPath is a good way to extract data from a JSON file. You can also use XPath for SOAP web services.

    76. What are the entry and exit criteria in performance testing?

    Ans:

    We can start performance testing of an application as early as the design phase. After the performance tests are executed, results are collected and analyzed to improve performance. The performance tuning process is performed throughout the application development life cycle, based on factors such as the release time of the application and user requirements for application stability, reliability, and scalability under load, stress, and the performance tolerance criteria. In some projects, the exit criteria are defined based on the client’s performance requirements for each section of the application; when the product reaches the expected level, that can be considered the exit criterion for performance testing.

    77. Explain the basic requirements of the Performance test plan.

    Ans:

    Any software performance test plan should contain, at a minimum, the contents mentioned below:

    • Performance Test Strategy and scope definitions.
    • Test process and methodologies.
    • Test tool details.
    • Test case details including scripting and script maintenance mechanisms.
    • Resource allocations and responsibilities for Testers.
    • Risk management definitions.
    • Test Start /Stop criteria along with Pass/Fail criteria definitions.
    • Test environment setup requirements.
    • Virtual Users, Load, Volume Load Definitions for Different Performance Test Phases.
    • Results Analysis and Reporting format definitions.

    78. What is the testing lifecycle?

    Ans:

    There is no standard testing life cycle, but it generally consists of the following phases:

    • Test Planning (Test Strategy, Test Plan, Test Bed Creation)
    • Test Development (Test Procedures, Test Scenarios, Test Cases)
    • Test Execution
    • Result Analysis (compare Expected to Actual results)
    • Defect Tracking
    • Reporting

    79. How is the Automated Correlation configured?

    Ans:

      Any setting related to Automated Correlation can be done by General Options->Correlation. Correlation rules are set from Recording options->Correlations.

    80. What is Load Testing?

    Ans:

    Load Testing is to determine if an application can work well with the heavy usage resulting from a large number of users using it simultaneously. The load is increased to simulate the peak load that the servers are going to take during maximum usage periods.

    81. What is the Rendezvous point?

    Ans:

    A rendezvous point helps in emulating heavy user load (requests) on the server by instructing Vusers to act simultaneously. When a Vuser reaches the rendezvous point, it waits for all other Vusers with the same rendezvous point. Once the designated number of Vusers reach it, the Vusers are released together. The function lr_rendezvous is used to create the rendezvous point (a minimal sketch follows the list below). It can be inserted by:

    • Rendezvous button on the floating Recording toolbar while recording.
    • After recording, a Rendezvous point can be inserted through Insert > Rendezvous.
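A minimal VuGen (C) sketch of the idea; the rendezvous name and URL are hypothetical:

```c
/* All Vusers block at the rendezvous point and are released together,
   producing a concurrent burst of requests against the server. */
lr_rendezvous("submit_order");
web_url("order", "URL=http://example.com/order", LAST);
```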

    82. What are the different sections of the script? In what sequence do these sections run?

    Ans:

    A LoadRunner script has three sections: vuser_init, Action, and vuser_end.

    • vuser_init contains the requests/actions to log in to the application/server.
    • Action contains the actual code to test the functionality of the application. It can be played many times in iterations.
    • vuser_end contains the requests/actions to log out of the application/server.

    The sequence in which these sections get executed is: vuser_init runs at the very beginning, vuser_end at the very end, and Action is executed in between the two. A skeleton is sketched below.
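A bare-bones outline of the three sections as they appear in a VuGen (C) script; the requests are placeholders, not from the original answer:

```c
vuser_init()                /* runs once per Vuser: login / setup             */
{
    web_url("login", "URL=http://example.com/login", LAST);
    return 0;
}

Action()                    /* runs once per iteration: the measured flow     */
{
    web_url("search", "URL=http://example.com/search?q=test", LAST);
    return 0;
}

vuser_end()                 /* runs once per Vuser: logout / cleanup          */
{
    web_url("logout", "URL=http://example.com/logout", LAST);
    return 0;
}
```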

    83. How do you identify what to correlate and what to parameterize?

    Ans:

    Any value in the script that changes on each iteration or with the different users while replaying needs correlation. Any user input while recording should be parameterized.

    84. What is Parameterization & why is Parameterization necessary in the script?

    Ans:

    Replacing hard-coded values within the script with a parameter is called parameterization. It helps a single virtual user (Vuser) use different data on each run. This simulates real-life usage of an application, as it prevents the server from simply serving cached results (see the sketch below).
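As a hedged VuGen (C) sketch, the recorded hard-coded credentials are replaced with {UserName} and {Password} parameters that draw values from a data file; the URL and form field names here are assumptions:

```c
web_submit_data("login",
                "Action=http://example.com/login",
                "Method=POST",
                ITEMDATA,
                "Name=username", "Value={UserName}", ENDITEM,  /* parameterized */
                "Name=password", "Value={Password}", ENDITEM,  /* parameterized */
                LAST);
```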

    85. While scripting, you created correlation rules for automatic correlation. If you want to share the correlation rules with your team members working on the same application, so that they can use the same rules on their workstations, how will you do that?

    Ans:

    Correlation rules can be exported through the .cor file and the same file can be imported through VuGen.

    86. What are the different types of Vuser logs that can be used while scripting and execution? What is the difference between these logs? When you disable logging?

    Ans:

    • There are two types of Vuser logs available: the Standard log and the Extended log. Logs are key for debugging the script. Once a script is up and running, logging is enabled for errors only.
    • The Standard log records the functions and messages sent to the server during script execution, whereas the Extended log contains additional warnings and other messages. Logging is used during debugging and disabled during execution; in that case, logging can be enabled for errors only.

    87. What are the different types of goals in Goal-Oriented Scenario?

    Ans:

      LoadRunner has five different types of goals in Goal-Oriented Scenario. These are:

    • The number of concurrent Vusers
    • The number of hits per second
    • The number of transactions per second
    • The number of pages per minute
    • The transaction response time

    88. How is each step validated in the script?

    Ans:

    Each step in the script is validated against the content on the returned page. A content check verifies whether specific content is present on the web page or not. There are two types of content check that can be used in LoadRunner:

    • Text Check: This checks for a text/string on the web page.
    • Image Check: This checks for an image on a web page

    89. How is the VuGen script modified after recording?

    Ans:

    Once the script is recorded, it can be modified with the following steps:

    • Transaction
    • Parameterization
    • Correlation
    • Variable declarations
    • Rendezvous Point
    • Validations/Checkpoint

    90. What is the advantage of running the Vuser as the thread?

    Ans:

    Running Vusers as threads helps generate more virtual users from any machine, due to the small memory footprint of a Vuser running as a thread.

    91. What is wasted time in the VuGen Replay log?

    Ans:

    Wasted time is time that no real browser user would ever spend; it is time spent purely on activities that support the test analysis, such as logging, record keeping, and custom analysis.

    92. How do you enable text and image checks in VuGen?

    Ans:

    • This can be done by using the functions web_find (for text check) and web_image_check (for image check), and by enabling image and text checks in the run-time settings (see the example below).
    • Run Time Settings –> Preferences –> enable the image and text check checkbox.
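Hedged examples of the two checks in a VuGen (C) script, placed after the request whose response is being verified; the step names, text, and image path are assumptions:

```c
web_find("Check welcome text", "What=Welcome", LAST);           /* text check  */
web_image_check("Check logo", "Src=/images/logo.png", LAST);    /* image check */
```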

    93. What is the difference between web_reg_find and web_find?

    Ans:

    The web_reg_find function is processed before the request is sent and is placed before the request in the VuGen script, whereas the web_find function is processed after the response to the request arrives and is placed after the request in the script, as illustrated below.
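A hedged sketch of the placement difference; the URL and text are assumptions:

```c
/* web_reg_find is registered BEFORE the request it verifies */
web_reg_find("Text=Welcome", LAST);
web_url("home", "URL=http://example.com/home", LAST);

/* web_find executes AFTER the response has been received */
web_find("Check welcome text", "What=Welcome", LAST);
```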

    94. What are the challenges that you will face to script the step “Select All” and then “Delete” for any mail account?

    Ans:

    In this case, the POST data for “Select All” and “Delete” will change every time, depending on the number of mails available. The recorded requests for the two should therefore be replaced with custom requests, and string building is required to construct the POST body.

    95. What is the difference between pacing and think time?

    Ans:

    Pacing is the wait time between action iterations, whereas think time is the wait time between transactions.

    96. What is the number of graphs you can monitor using the Controller at a time? What is the maximum number of them?

    Ans:

    One, two, four, or eight graphs can be viewed at a time. The maximum number of graphs that can be monitored at a time is 8.

    97. You have an application that shows the exam results of students. Next to the name of each student it is mentioned whether they passed or failed the exam, with the label “Pass” or “Fail”. How will you identify the number of passed and failed students in the VuGen script?

    Ans:

    For this, a text check is used on the web page for the texts “Pass” and “Fail”. Through the function web_reg_find, we can capture the number of matches found on the web page with the help of “SaveCount”, which stores the number of matches found. For example: web_reg_find(“Text=Pass”, “SaveCount=Pass_Student”, LAST); web_reg_find(“Text=Fail”, “SaveCount=Fail_Student”, LAST); — a slightly fuller sketch follows below.
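A slightly fuller hedged sketch of the same idea; the URL is an assumption, and the parameter names follow the answer above:

```c
/* Register both checks before the results-page request */
web_reg_find("Text=Pass", "SaveCount=Pass_Student", LAST);
web_reg_find("Text=Fail", "SaveCount=Fail_Student", LAST);

web_url("results", "URL=http://example.com/results", LAST);

/* SaveCount parameters hold the number of matches as strings */
lr_output_message("Passed: %s, Failed: %s",
                  lr_eval_string("{Pass_Student}"),
                  lr_eval_string("{Fail_Student}"));
```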

    98.  During the load test, what is the optimum setting for Logs?

    Ans:

    For a load test, the log level is set to minimal. This can be achieved by setting the log level to the Standard log and selecting the radio button “Send a message only when an error occurs”.

    99. How will you handle the situation in scripting where, for your mailbox, you have to select one mail at random to read?

    Ans:

    • For this, we will record the script for reading the first mail, and try to find out what is posted in the request to read the first mail, such as mail IDs or row numbers.
    • From the POST where the list of emails is returned, we will capture all the email IDs/row numbers with a correlation function, keeping the ordinal as All (i.e. ORD=All). We then replace the requested email ID in the read POST with one email ID selected at random from the list of captured email IDs.

    100. How do you identify which values need to be correlated in the script? Give an example.

    Ans:

    This can be done in two ways:

    • Record two scripts with similar steps and compare them using the WDiff utility (see the Correlation tutorial).
    • Replay the recorded script and scan for correlation. This gives a list of values that can be correlated.

    The session ID is a good example of this: when two scripts are recorded and compared using the WDiff utility, the session IDs in the two scripts will differ, and WDiff highlights these values.
