
1.

What are some of the best tips for conducting performance testing?

Answer:

Following are some of the best tips for conducting performance testing:

  • The test environment should mirror the production environment as closely as possible. If there are any deviations, the test results might not be accurate and problems might surface when the application goes live.
  • It is preferred to have a separate environment dedicated to performance testing.
  • The tools selected for testing should automate the test plan in the best possible way.
  • Performance tests should be run several times to obtain a consistent, accurate measure of performance.
  • The performance test environment should not be modified during the testing process.
Conclusion

Performance testing provides in-depth insight into the non-functional requirements of the application, such as the scalability, speed, availability and reliability of the software under test. This helps in identifying and resolving performance shortcomings and gaps before the application goes live.

2.

When should we conduct performance testing for any software?

Answer:

Performance testing is done to measure the performance of any action in the application. We can run performance tests to check the performance of websites and apps. If we follow the waterfall methodology, we can test every time a new version of the software is released. If we use an agile methodology, we need to test continuously.

3.

What are the common mistakes committed during performance testing?

Answer:

Following are some of the mistakes commonly committed during performance testing:

  • Unknown or unclear non-functional requirements provided by the business
  • Unclear workload details
  • Directly jumping to multi-user tests
  • Running test cases for too short a duration
  • Confusion over the approximate number of concurrent users
  • Differences between the test and production environments
  • Network bandwidth not simulated properly
  • No clear baselining of the system configuration
4.

What are the metrics monitored in performance testing?

Answer:

Following are the metrics monitored in performance testing:

  • Processor utilization: Time spent by the processor executing non-idle threads.
  • Memory usage: Amount of physical memory available to the server for processing.
  • Disk time: Time taken by the disk to execute read/write requests.
  • Bandwidth: Bits per second (bps) used by the network interfaces.
  • CPU interrupts per second: Average number of hardware interrupts the processor receives and processes each second.
  • Response time: Time taken to get the first character of the response from the server to the client.
  • Throughput: Rate at which the server or network receives requests, measured per second.
  • Amount of connection pooling: Number of user requests met by pooled connections; the higher the number, the better the performance.
  • Maximum active sessions: Maximum number of sessions active at once.
  • Hits per second: Number of hits the server receives every second.
  • Thread counts: Number of threads actively running.
  • Private bytes: Number of bytes allocated by a process that cannot be shared with other processes; used for measuring memory leaks and memory usage.
  • Committed memory: Amount of virtual memory used.
  • Memory pages/second: Number of pages read from or written to disk to resolve page faults.
  • Page faults/second: Overall rate at which page faults are processed by the processor.
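
As an illustration, the sketch below (standard-library Python, with a hypothetical endpoint and request count) measures two of the metrics above for a single URL: response time taken as time to first byte, and throughput. Real performance tools report these alongside the server-side counters listed above.

```python
# Minimal sketch: measure response time (time to first byte) and throughput
# for a single endpoint. The URL and request count are hypothetical.
import time
import urllib.request

URL = "http://localhost:8080/health"   # hypothetical endpoint under test
N_REQUESTS = 50

response_times = []
start = time.perf_counter()
for _ in range(N_REQUESTS):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read(1)                   # first byte received ~ response time as defined above
    response_times.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"average response time: {sum(response_times) / len(response_times):.3f} s")
print(f"throughput: {N_REQUESTS / elapsed:.1f} requests/s")
```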
5.

Can the end-users of the application conduct performance testing?

Answer:

No, end-users cannot conduct performance testing. While using the software, end-users may discover bottlenecks, but that cannot be equated to the actual performance testing performed by professional testers. If end-users want to participate in testing, they can be accommodated in the User Acceptance Testing phase.

6.

What do you mean by concurrent user hits in load testing?

Answer:

Concurrent user hit scenarios arise when more than one user hits or requests the same event during the load testing process. This scenario is tested to ensure that multiple users can access the same event in the application at the same time.
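
A minimal sketch of this idea is shown below, assuming a hypothetical endpoint and user count: several threads request the same event at once, which is what load testing tools automate at much larger scale.

```python
# Minimal sketch: simulate concurrent user hits on the same request.
# The endpoint URL and the number of users are hypothetical.
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/checkout"   # the single event every user hits
CONCURRENT_USERS = 20

def hit(user_id: int) -> int:
    # Each simulated user requests the same event and returns the HTTP status.
    with urllib.request.urlopen(URL) as resp:
        return resp.status

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    statuses = list(pool.map(hit, range(CONCURRENT_USERS)))

print(statuses)   # expect all 200s if the event handles concurrent access
```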

7.

What are the best ways for carrying out spike testing?

Answer:

Spike testing can be carried out by bombarding the application with network traffic, random connections, data and varied operations, firing requests at every single functionality of the application. In this way, the application is pushed to its limits and can be monitored to identify whether it holds up under pressure. The monitored data can then be documented and analyzed.

8.

How is endurance testing different from spike testing?

Answer:
  • Endurance testing deals with how long the application can endure and perform well under load. Sometimes when an application is used for a long time, it becomes slow or unresponsive, which is why it is important to conduct this testing. Endurance testing analyses changes in the application by simulating lengthy application usage. For example, endurance testing is conducted on a banking application to check whether it performs normally under continuous load or large transactions over a long period.
  • Spike testing deals with pushing the application to its limits by subjecting the software to the highest operation level in order to identify its strengths and weaknesses. Spike testing is necessary for instances when e-commerce or shopping sites launch flash sales or holiday discount deals, where a large number of users suddenly access the application. If the application crashes under a sudden spike every time, it results in a bad user experience and users lose faith in the application.
9.

How is load testing different from stress testing?

Answer:
  • Load testing is a type of testing that analyzes the software's performance when it is saddled with higher-than-normal workloads. The load can be of any kind: data, users accessing the system, or the application itself on the server's operating system. When the load is increased, some applications tend to degrade and slow down, while others continue to run normally. Load testing determines whether the application runs at optimal levels irrespective of the load placed on it.
  • Stress testing, on the other hand, takes a broader approach to the software's performance. It considers the amount of data processed, the time taken to process it, network connectivity levels and other applications running in the background. When stress levels are high, software tends to crash or stop working altogether. This testing imitates a stressful environment to check whether the software continues to operate correctly.
10.

What are the pre-requisites to enter and exit a performance test execution phase?

Answer:

The necessary entry criteria for the execution phase are:

  • Completed automated test scripts
  • The test environment should be ready
  • Finalized Non-functional requirements (NFR)
  • The latest functionally tested code should be deployed
  • The test input data should be ready.

The necessary exit criteria would be:

  • Test cases should cover and meet all the NFRs
  • No more performance bottlenecks are present
  • All defects are finalized
  • The behaviour of the application should be consistent and acceptable under heavy and spiked loads.
  • Final reports are submitted and shared.
11.

Can we perform spike testing in JMeter? If yes how?

Answer:

Spike testing is conducted to determine how an application behaves when the number of users accessing the system decreases or increases abruptly. When the user count varies suddenly (producing a spike), the system's behaviour can change in unexpected ways. This can be tested in JMeter using the Synchronizing Timer: threads are blocked until the stipulated number of threads has been reached, and then all of them are released at once to simulate a large, simultaneous load.

The following steps can be performed (a conceptual sketch of the block-and-release idea is shown after the list):

  • Create a performance test plan
  • Create a thread group within it
  • Add all the JMeter elements specific to the business requirements
  • Add listeners to view the results
  • Run the tests
  • Get the results
  • Monitor the behaviour.
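
The sketch below is not JMeter itself (in practice the Synchronizing Timer is configured in the JMeter test plan); it is a minimal Python illustration of the same block-and-release idea, with a hypothetical endpoint and thread count.

```python
# Conceptual sketch of what a Synchronizing Timer does: block threads until
# the target count is reached, then release them all at once to create a spike.
# URL and SPIKE_SIZE are hypothetical values used only for illustration.
import threading
import urllib.request

URL = "http://localhost:8080/sale"   # hypothetical endpoint under test
SPIKE_SIZE = 50                      # number of simultaneous virtual users

barrier = threading.Barrier(SPIKE_SIZE)   # plays the role of the Synchronizing Timer

def user():
    barrier.wait()                             # every thread blocks here...
    with urllib.request.urlopen(URL) as resp:  # ...then all fire at the same moment
        resp.read()

threads = [threading.Thread(target=user) for _ in range(SPIKE_SIZE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("spike of", SPIKE_SIZE, "simultaneous requests sent")
```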
12.

How can we identify situations that belong to performance bottlenecks?

Answer:

We can identify performance bottlenecks by monitoring the application under the stipulated stress and load conditions and locating the components that do not perform well. With LoadRunner, for example, we can make use of its different monitors, such as database server monitors, network delay monitors, firewall monitors, etc.

13.

On what kind of values can we perform correlation and parameterization in the LoadRunner tool?

Answer:

Correlation is performed on dynamic values such as session IDs, session states, date values, etc., that are returned by the server in response to a request. Parameterization is performed on static data, such as passwords and usernames, that is usually entered by the user.
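
As an illustration of the difference (this is not LoadRunner syntax; the URL, the regular expression and the users.csv file are hypothetical), the Python sketch below parameterizes static credentials from a data file and correlates a dynamic session ID captured from the server's response.

```python
# Illustrative sketch: parameterization feeds static, user-supplied values from
# a data file, while correlation captures a dynamic value from the server reply.
# users.csv, the URL and the session_id pattern are hypothetical.
import csv
import re
import urllib.parse
import urllib.request

# Parameterization: static inputs (usernames/passwords) come from a data file.
with open("users.csv", newline="") as f:
    users = list(csv.DictReader(f))        # expected columns: username, password

for user in users:
    query = urllib.parse.urlencode({"username": user["username"],
                                    "password": user["password"]})
    with urllib.request.urlopen("http://localhost:8080/login?" + query) as resp:
        body = resp.read().decode()

    # Correlation: the session ID is dynamic (different in every response),
    # so it must be captured from the reply and reused in later requests.
    match = re.search(r"session_id=([A-Za-z0-9]+)", body)
    session_id = match.group(1) if match else None
    print(user["username"], "->", session_id)
```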

14.

Why is it preferred to perform load testing in an automated format?

Answer:

Performing load testing manually has the following disadvantages:

  • The accuracy of the application's performance measurements cannot be ensured easily.
  • Synchronization among the various users becomes challenging to coordinate and maintain.
  • Real-time testing would require real users to test the application.
  • Additionally, manual testing increases the cost of the manual effort required.

Due to all the above-mentioned reasons, it is preferred to perform load testing in automated form.

15.

What are the differences between benchmark testing and baseline testing?

Answer:

Benchmark testing is a testing process conducted to compare the performance of the system framework against industry standards laid down by other organizations. Baseline testing is a type of testing in which the tester runs a set of tests to capture performance information; whenever a change is made in the future, the results of the baseline tests serve as a reference point for the next set of tests.