
1.

What do you mean by a test case? What are some good practices for writing test cases?

Answer»

A test case is a set of actions, test data, preconditions, and postconditions developed for a specific test scenario in order to verify that a particular feature or requirement of a software application works properly. It contains specified variables or conditions that a test engineer uses to compare expected and actual outcomes, and thereby assess whether the software product meets the customer's needs.

Following are some good practices for writing test cases:

  • Keep test cases simple and transparent: Make your test cases as simple as possible. They must be clear and straightforward, because the person who executes them may not be their author. Use simple, declarative language such as "go to the home page", "input data", and "click here". This makes the test steps easier to understand and speeds up testing.
  • Write test cases with the end-user in mind: The ultimate goal of any software project is to create test cases that fulfil customer requirements and are easy to use and run. A tester must write test cases from the standpoint of the end-user.
  • Avoid repeating test cases: Test cases should not be repeated. If a test case is needed for the execution of another test case, refer to it by its test case ID in the precondition column.
  • Ensure complete coverage: Write test cases that cover all of the software requirements in the specification document. Use the Traceability Matrix to verify that no functions or conditions are left untested.
  • Don't make assumptions: When creating a test case, don't make assumptions about the functionality and features of your software application. Stick to what the specification documents say.
  • Make test cases self-cleaning: A test case must restore the test environment to its previous state and should not render it unusable. This is particularly important for configuration testing.
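Several of these practices (declarative steps, explicit expected-vs-actual comparison, self-cleaning teardown) can be shown in a minimal sketch using Python's built-in `unittest`; the config-file scenario and all names here are hypothetical, not taken from any real project.

```python
import os
import tempfile
import unittest


class TestConfigRoundtrip(unittest.TestCase):
    """TC-001: a setting written to the config file is read back unchanged."""

    def setUp(self):
        # Precondition: create a throwaway config file to test against.
        fd, self.path = tempfile.mkstemp(suffix=".cfg")
        os.close(fd)

    def tearDown(self):
        # Self-cleaning: restore the test environment to its previous state.
        os.remove(self.path)

    def test_roundtrip(self):
        expected = "debug=true"
        with open(self.path, "w") as f:   # step 1: input data
            f.write(expected)
        with open(self.path) as f:        # step 2: read the setting back
            actual = f.read()
        # Compare expected and actual outcomes explicitly.
        self.assertEqual(actual, expected)
```

Note how the test id, precondition, steps, and expected-vs-actual comparison all appear in one place, and the teardown leaves no file behind.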
2.

What are the various artifacts to which you refer when writing test cases?

Answer»

Following are the various artifacts that we can refer to while writing test cases:

  • Functional requirements specification.
  • Requirements understanding document.
  • Wireframes.
  • Use Cases.
  • User Stories.
  • Acceptance criteria.
  • User Acceptance Test cases.
3.

How would you ensure that your testing is thorough and comprehensive?

Answer»

The Requirement Traceability Matrix and the Test Coverage Matrix help us determine whether our test cases provide adequate coverage. The Requirement Traceability Matrix helps us determine whether the test conditions are sufficient to fulfil all of the requirements, while the Test Coverage Matrix helps us determine whether the test cases are sufficient to satisfy all of the test conditions in the Requirement Traceability Matrix.

The below image shows a sample Requirement Traceability Matrix:

The below image shows a sample Test Coverage Matrix:
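In place of the images, a Requirement Traceability Matrix can be sketched as a simple mapping from requirement IDs to the test cases that cover them; all IDs below are hypothetical. The coverage check at the end is exactly what the matrix is for.

```python
# Hypothetical Requirement Traceability Matrix:
# requirement ID -> test cases covering it.
rtm = {
    "REQ-01": ["TC-001", "TC-002"],  # covered by two test cases
    "REQ-02": ["TC-003"],
    "REQ-03": [],                    # no test case yet: a coverage gap
}

# The check the matrix enables: flag requirements left untested.
uncovered = [req for req, cases in rtm.items() if not cases]
print("uncovered requirements:", uncovered)  # prints "uncovered requirements: ['REQ-03']"
```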

4.

What do you understand about data-driven testing?

Answer»

Data-Driven Testing is a software testing technique in which test data is stored in a table or spreadsheet format. With data-driven testing, testers can write a single test script that runs tests for all of the test data in the table and expect the results to be reported in the same table. It is also known as parameterized testing or table-driven testing.

Because testers usually have several data sets for a single test, data-driven testing is critical. Creating a different test for each data set can be time-consuming. Data-driven testing keeps the data separate from the test scripts, so the same test script can be run for multiple combinations of input test data, yielding test results more efficiently.

The above image depicts the process of data-driven testing: test data is read from a data file, fed to the application under test, and the actual output is compared with the expected output.
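That loop can be sketched in a few lines of Python; the discount function and its data rows are hypothetical, standing in for the application under test and the external data file.

```python
# Table-driven sketch: one test script, many data rows.

def apply_discount(price, percent):
    """Hypothetical code under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Each tuple plays the role of one row in the test-data table:
# (input price, input discount %, expected output).
test_table = [
    (100.0, 10, 90.0),
    (250.0, 0, 250.0),
    (80.0, 25, 60.0),
]

# A single script iterates over every row and compares actual vs. expected.
for price, percent, expected in test_table:
    actual = apply_discount(price, percent)
    status = "PASS" if actual == expected else "FAIL"
    print(f"price={price} discount={percent}% expected={expected} actual={actual} -> {status}")
```

In practice a framework feature such as pytest's `@pytest.mark.parametrize` plays the role of this loop, and the rows would typically live in a CSV file or spreadsheet rather than in the script itself.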

5.

Explain what is a testware in the context of quality assurance.

Answer»

Testware is a collection of software artifacts created for the specific purpose of software testing, particularly software test automation.

For example, automation testware is created to run on automation frameworks. All utilities and application software that work together to test a software package, but do not necessarily contribute to operational purposes, are referred to as testware. As a result, testware is a working environment for application software, or portions thereof, rather than a static configuration. It contains the artifacts produced during the testing process that are needed to plan, design, and execute tests, such as documentation, scripts, inputs, expected outcomes, set-up and clean-up procedures, files, databases, environments, and any other software or tools used during testing. Testware is produced by both verification and validation testing methodologies. Like software, testware includes code and binaries as well as test cases, test plans, and test reports. Testware should be preserved and faithfully maintained under the control of a configuration management system.

6.

Differentiate between gorilla testing and monkey testing.

Answer»

The following table lists the differences between gorilla testing and monkey testing:

Gorilla Testing | Monkey Testing
Gorilla testing is a method of software testing in which a module is repeatedly tested with some random inputs, ensuring that the module's operations are checked and that there are no problems in the module. | Monkey testing is a method of software testing that evaluates the behaviour of the system and validates whether or not it crashes, based on some random inputs and no test cases.
The primary goal of gorilla testing is to determine whether or not a module is functioning properly. | The primary goal of monkey testing is to determine whether or not the system crashes.
Gorilla testing is a type of manual testing that is done repeatedly. | Monkey testing is a sort of random testing that does not involve test cases.
Only a few selected modules of the system are subjected to this testing. | This testing is carried out over the entire system.
The gorilla testing method is primarily employed in unit testing. | The monkey testing approach is primarily employed in system testing.
Torture Testing, Fault Tolerance Testing, and Frustrating Testing are all terms used to describe gorilla testing. | Random testing, Fuzz testing, and Stochastic testing are all terms used to describe monkey testing.
7.

What do you mean by gorilla testing in the context of quality assurance?

Answer»

Gorilla Testing:

Gorilla testing is a method of software testing in which a module is repeatedly tested with some random inputs, ensuring that the module's operations are checked and that there are no problems in the module. Because of this pattern, it is also known as Torture Testing, Fault Tolerance Testing, or Frustrating Testing. It is a manual test that is performed over and over again. In gorilla testing, testers and developers work together to evaluate a module's functionality repeatedly.

8.

What do you mean by monkey testing in the context of quality assurance?

Answer»

Monkey testing is a software testing technique in which the tester feeds random inputs into the software application without using predefined test cases and observes the software program's behaviour to see whether it crashes. The goal of monkey testing is to use experimental means to uncover faults and problems in software applications. Monkey testing is a sort of black-box testing that involves supplying random inputs to a system in order to check its behaviour, such as whether or not it crashes. Monkey testing does not necessitate the creation of test cases. It can also be automated, in the sense that we can develop programs or scripts that produce random inputs to test the system's behaviour. This technique comes in handy when undertaking stress or load testing.

Monkeys are divided into two categories:

  • Smart Monkeys: Smart monkeys have a basic understanding of the application. They know which of an application's pages will redirect to which page, and whether the inputs they are giving are valid or not. If they discover an error, they are wise enough to report it as a bug. They are also aware of the menus and buttons.
  • Dumb Monkeys: Dumb monkeys are completely unaware of the application. They have no idea where an application's pages will redirect. They give random inputs and do not know the application's start and end points. Despite having no knowledge of the application, and only limited awareness of its functionality and user interface, they still discover bugs such as environmental or hardware failures.
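An automated "dumb monkey" can be sketched in a few lines of Python: feed random garbage to the code under test and count anything other than a clean rejection as a crash. The `parse_age` validator here is hypothetical, standing in for the real application.

```python
import random
import string


def parse_age(text):
    """Hypothetical code under test: validate an age field from user input."""
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value


random.seed(42)  # a reproducible monkey, so failures can be replayed
crashes = 0
for _ in range(1000):
    # Dumb monkey: random printable garbage, no knowledge of valid inputs.
    garbage = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        parse_age(garbage)
    except ValueError:
        pass          # clean rejection of bad input, not a crash
    except Exception:
        crashes += 1  # any other exception is a defect worth reporting

print("unexpected crashes:", crashes)  # prints "unexpected crashes: 0"
```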
9.

Differentiate between Quality Assurance (QA) and Quality Control (QC).

Answer»

Quality Assurance (QA): Quality assurance is the set of process-oriented activities that ensure the methods, techniques, and procedures used to build the software are appropriate, so that defects are prevented rather than merely detected.

Quality Control (QC): A systematic set of techniques used to assure the quality of software products or services is known as quality control in software testing. By testing and reviewing the software product's functional and non-functional requirements, the quality control process ensures that it fulfils the actual needs.

The following table lists the differences between Quality Assurance (QA) and Quality Control (QC):

Quality Assurance | Quality Control
It is a method that focuses on assuring that the specified quality will be met. | It is a method that focuses on achieving the desired level of quality.
The goal of quality assurance is to prevent defects. | The goal of quality control is to find and correct defects.
It is a way of managing quality (verification). | It is a method of verifying quality (validation).
It does not entail running the software. | It always entails running the software.
It is a proactive measure (a preventive technique). | It is a reactive measure (a corrective technique).
It is the method for producing deliverables. | It is the technique for verifying that deliverables are correct.
QA is involved in the entire software development process. | QC is involved in the entire software testing process.
Quality assurance establishes standards and processes in order to meet client expectations. | While working on the product, quality control ensures that those standards are followed.
It is carried out before quality control. | It is carried out only once the QA activity has been completed.
It is a low-level activity that can detect errors and inaccuracies that quality control cannot. | It is a high-level activity that can detect errors that quality assurance cannot.
Quality assurance guarantees that things are done correctly, which is why it is classified as a verification activity. | Quality control ensures that what we do conforms to the requirements, which is why it is classified as a validation activity.
Statistical Process Control (SPC) is the statistical technique used in quality assurance. | Statistical Quality Control (SQC) is the statistical technique used in quality control.
10.

What do you understand about defect leakage ratio in the context of quality assurance?

Answer»

Software testers use defect leakage as a metric to determine the effectiveness of Quality Assurance (QA) testing. It is the ratio of the number of defects attributed to a stage but captured only in subsequent stages, to the sum of the defects captured in that stage itself and those that leaked to subsequent stages. Defect leakage thus measures the percentage of defects that slip from one testing stage to the next and reflects the effectiveness of the testers' work; the testing team's value is confirmed when defect leakage is small or non-existent.
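With hypothetical counts, the ratio works out as follows: if system testing caught 45 defects and 5 more defects attributable to that stage surfaced only in later stages, the stage leaked 10% of its defects.

```python
# Hypothetical defect counts for one testing stage.
caught_in_stage = 45   # defects captured during the stage itself
leaked_later = 5       # defects from that stage captured only in later stages

# Defect leakage = leaked / (caught in stage + leaked).
leakage = leaked_later / (caught_in_stage + leaked_later)
print(f"defect leakage: {leakage:.0%}")  # prints "defect leakage: 10%"
```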

11.

What do you understand about Traceability Matrix (TM) in the context of quality assurance?

Answer»

A Traceability Matrix is a document that connects any two baseline documents that require a many-to-many link, to ensure that the relationship is complete. It is used to keep track of requirements and make sure they are being met in the current project.

The following are the three major components of a traceability matrix:

  • Forward Traceability: This matrix is used to determine whether the project is progressing in the right direction and for the correct product. It ensures that each requirement is applied to the product and that each requirement is adequately tested. It maps requirements to test cases.
  • Backward or reverse Traceability: It is used to verify that the present product remains on track. The goal of this form of traceability is to ensure that we are not expanding the project's scope by adding code, design elements, tests, or other work not stated in the requirements. It maps test cases to requirements.
  • Bidirectional Traceability: This traceability matrix ensures that all criteria are covered by test cases in both directions (forward and backward). It examines the impact of a change in requirements caused by a work product defect, and vice versa.
12.

What do you understand about bug leakage and bug release?

Answer»
  • Bug Leakage: A bug leakage occurs when a bug that should have been caught in earlier builds/versions of the application is discovered by an end-user. It refers to a fault that exists during testing but is not discovered by the tester and is subsequently discovered by the end-user.
  • Bug Release: When a particular version of the software is shipped with a collection of known bugs/defects, it is referred to as a bug release. Such bugs are usually of low severity and/or priority. This is done when the company can tolerate the presence of a bug in the released software rather than bear the time/cost of fixing it in that version. In most cases, these bugs are disclosed in the Release Notes.
13.

What do you mean by build and release in the context of quality assurance? Differentiate between them.

Answer»
  • Build: A build is a software or application that is ready to be tested. Developers create the software and then hand it over to testers for testing. It is a broad term that refers to any application that is to be examined. Developers may create a complete program or add a new feature to an existing one; the program, together with its new functionality, is called a build and is put through its tests by the testers.
  • Release: The release is the finalized application after development and testing. The testing team verifies the program and, after testing it, releases it to the customer. A single release may be based on many builds. It is therefore the program that is delivered to the customer once the development and testing phases are complete.

The following table lists the differences between build and release:

Build | Release
Build refers to a version of the software or application which the development team hands over to the testing team. | Release refers to the software which the testing team hands over to the end customers.
The build version of the software requires testing to be done on it; generally, sanity testing is performed on a build version. | The release version of the software no longer requires testing to be done on it.
Build versions of the software are made more frequently than release versions. | Release versions of the software are made less frequently than build versions.
14.

Differentiate between Test Plan and Test Strategy.

Answer»
  • Test Plan: A test plan is a document that describes the test approach, objectives, schedule, estimations, and deliverables, as well as the resources required for testing. It helps us determine the effort needed to validate the quality of the application under test. The test plan serves as a blueprint for conducting software testing activities as a well-defined process that the test manager closely monitors and controls. It includes the test plan ID, features to be tested, test techniques, testing tasks, features' pass or fail criteria, test deliverables, responsibilities, schedule, and so on.
  • Test Strategy: In software testing, a test strategy is a collection of guiding principles that specifies the test design and governs how the software testing process is carried out. The goal of the test strategy is to provide a systematic approach to software testing so that quality, traceability, reliability, and better planning can be assured.

The following table lists the differences between Test Plan and Test Strategy:

Test Plan | Test Strategy
A software project test plan is a document that specifies the scope, purpose, approach, and emphasis of a software testing process. | A test strategy is a set of rules that describes how to design tests and outlines how they should be carried out.

Test plan elements include the following:

  • Test plan id
  • Features to be tested
  • Test procedures
  • Testing tasks
  • Features pass or fail criteria
  • Test deliverables
  • Responsibilities
  • Schedule, among others.

Following are the components of a test strategy:

  • Objectives and scope
  • Documentation formats
  • Test methodologies
  • Team reporting structure
  • Client communication strategy


 

The specifications of the testing process are described in the test plan. | The general approaches are described in the test strategy.
A testing manager or lead executes the test plan; it specifies how to test, when to test, who will test, and what to test. | The project manager implements the test strategy; it specifies which approach to use and which module to test.
Changes to the test plan are possible once it has been created. | The test strategy cannot be altered once it has been created.
Test planning is done to identify risks by determining potential issues and dependencies. | The test strategy is a long-term approach; information that is not project-specific can be used to create it.
A test plan can exist individually. | In smaller projects, the test strategy is frequently found as a section of the test plan.
It is established at the project level. | It is set at the organisational level and can be utilised across different projects.
15.

Differentiate between Quality Assurance and Testing.

Answer»

Software Testing: Software testing is a method of investigating a system to see whether it works according to the product requirements and customer expectations, and to identify potential flaws. A variety of methods are used to test the product, find bugs, and verify that they have been fixed. Testing lets customers check whether the generated product meets their expectations in terms of design, compatibility, and functionality, among other things.

Validating the product against specifications and client requirements, as well as detecting and reporting flaws, are all part of the testing process. To detect software flaws, it uses a variety of testing methodologies such as functional, non-functional, and acceptance testing. Furthermore, before the product is released to the client, the purpose of software testing is to ensure that any found faults are entirely corrected with no side effects.

The following table lists the differences between Quality Assurance and Testing:

Quality Assurance | Testing
It is a subset of the Software Development Lifecycle (SDLC). | It is a subset of Quality Control (QC).
It is process-oriented. | It is product-oriented.
It is preventive in nature (it tries to prevent defects from entering the product). | It is corrective in nature (it tries to correct defects present in the product).
Quality assurance is done to prevent defects in the product. | Testing is done to find and fix defects in the product.
Quality assurance ensures that the processes and procedures followed by the team are in place. | Testing confirms that the product conforms to the required specifications.
In quality assurance, the focus is on the processes that operate on the product. | In testing, the focus is on the end product itself.
It is a proactive process (a proactive strategy focuses on preventing problems from arising). | It is a reactive process (a reactive approach focuses on reacting to events after they have occurred).
Quality assurance needs to be implemented by the whole team. | In testing, only the testing team is involved.
16.

What is the lifecycle of a Quality Assurance Process?

Answer»

Every process in Quality Assurance follows the PDCA (Plan, Do, Check, Act) cycle, also known as the Deming cycle. The following are the phases of this cycle:

  • Plan: The organisation should plan and establish process-related objectives, as well as determine the methods necessary to deliver a high-quality end product. Here, Quality Assurance ensures that this planning takes the quality of the product into consideration.
  • Do: This phase involves developing and testing processes, as well as "doing" changes to processes. Here, Quality Assurance ensures that the processes followed during development maintain the quality of the product.
  • Check: In this phase, processes are monitored, modified, and checked to see whether they achieve the intended goals. Here, Quality Assurance ensures that the processes are checked thoroughly so that no defects are missed.
  • Act: In this phase, a Quality Assurance tester should take the steps required to improve the processes.

These stages are done to guarantee that the organization's procedures are assessed and improved on a regular basis.