**1. What do you mean by a test case? What are some good practices for writing test cases?**

Answer: A test case is a collection of actions performed to verify that a specific feature or operation of a software program works properly. It is a set of test procedures, test data, preconditions, and postconditions created for a specific test scenario in order to verify a requirement. A test case contains specified variables or conditions that a test engineer uses to compare expected and actual outcomes, and thereby assess whether the software product meets the customer's needs. Some good practices for writing test cases are:

- Keep each test case simple, atomic, and focused on a single scenario.
- Write test cases from the end user's perspective.
- Give each test case a unique, self-explanatory identifier and title.
- State preconditions, test steps, test data, and expected results explicitly.
- Make test cases repeatable and independent of one another.
- Have test cases peer reviewed and keep them updated as requirements change.
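As a minimal sketch of these practices, the hypothetical example below expresses each scenario for a toy `is_valid_password` function as one atomic, self-explanatory test case using Python's standard `unittest` module (the function and its rules are assumptions for illustration, not part of any real product):

```python
import unittest

def is_valid_password(password):
    """Toy function under test: at least 8 characters and one digit."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

class TestPasswordValidation(unittest.TestCase):
    """One atomic test case per scenario, named after what it verifies."""

    def test_accepts_password_with_min_length_and_digit(self):
        # Expected result is stated explicitly, as in a written test case.
        self.assertTrue(is_valid_password("secret12"))

    def test_rejects_password_shorter_than_minimum(self):
        self.assertFalse(is_valid_password("abc1"))

    def test_rejects_password_without_digit(self):
        self.assertFalse(is_valid_password("abcdefgh"))
```

Such a file would typically be executed with `python -m unittest`, which discovers and runs each test method independently.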
**2. What are the various artifacts to which you refer when writing test cases?**

Answer: The following artifacts are commonly referred to while writing test cases:

- Software Requirement Specification (SRS) and Functional Requirement Specification (FRS) documents.
- Use cases and user stories, along with their acceptance criteria.
- Wireframes and UI design documents.
- The test plan and test strategy documents.
**3. How would you ensure that your testing is thorough and comprehensive?**

Answer: The Requirement Traceability Matrix and the Test Coverage Matrix help us determine whether our test cases provide adequate coverage. The Requirement Traceability Matrix helps us determine whether the test conditions are sufficient to fulfil all of the requirements, while the Test Coverage Matrix helps us determine whether the test cases are sufficient to satisfy all of the test conditions in the Requirement Traceability Matrix.
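A coverage check of this kind can be sketched in a few lines. The snippet below uses hypothetical requirement and test-case IDs to flag requirements that no test case covers, which is exactly the gap a Requirement Traceability Matrix is meant to expose:

```python
# A minimal Requirement Traceability Matrix check (hypothetical IDs):
# map each requirement to the test cases that cover it, then flag gaps.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test case yet -> a coverage gap
}

# Requirements with no covering test case.
uncovered = [req for req, tests in rtm.items() if not tests]

# Fraction of requirements that have at least one test case.
coverage = (len(rtm) - len(uncovered)) / len(rtm)

print(uncovered)          # ['REQ-003']
print(f"{coverage:.0%}")  # 67%
```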
**4. What do you understand about data driven testing?**

Answer: Data-driven testing is a software testing technique that stores test data in a table or spreadsheet format. With data-driven testing, testers can write a single test script that runs the test for every row of test data in the table and reports the result for each row. It is also known as parameterized testing or table-driven testing. Data-driven testing is important because testers usually have several data sets for a single test, and creating a separate test for each data set is time-consuming. Data-driven testing keeps the data separate from the test scripts, so the same script can be run for multiple combinations of input test data, producing test results more efficiently. In this process, test data is taken from a data file, fed into the application under test, and the produced output is compared with the expected output.
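The pattern can be sketched as follows, assuming a toy `add` function stands in for the application under test; note how the data table is kept separate from the single test script that drives it:

```python
# Hypothetical function under test.
def add(a, b):
    return a + b

# Test data kept separate from the test script, one row per case:
# (input_a, input_b, expected_result)
test_table = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]

def run_data_driven_tests(func, table):
    """Run one test script against every row of the data table."""
    results = []
    for a, b, expected in table:
        actual = func(a, b)
        # Compare produced output with expected output, row by row.
        results.append((a, b, expected, actual, actual == expected))
    return results

results = run_data_driven_tests(add, test_table)
print(all(passed for *_, passed in results))  # True when every row passes
```

In practice, frameworks such as pytest provide this mechanism directly via `@pytest.mark.parametrize`.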
**5. Explain what is a testware in the context of quality assurance.**

Answer: Testware is the collection of software artifacts created for the specific purpose of software testing, particularly test automation. For example, automation testware is created to run on automation frameworks. All utilities and application software that together serve to test a software package, but do not necessarily contribute to its operational purposes, are referred to as testware. Testware is therefore a working environment for application software, or portions thereof, rather than a static configuration. It contains the artifacts produced during the testing process that are needed to plan, design, and execute tests, such as documentation, scripts, inputs, expected outcomes, set-up and clean-up procedures, files, databases, the environment, and any other software or tools used during testing. Testware is produced by both verification and validation testing methods. Like software, testware consists of code and binaries as well as test cases, test plans, and test reports. Testware should be preserved and faithfully maintained under the control of a configuration management system.
**6. Differentiate between gorilla testing and monkey testing.**

Answer: The following table lists the differences between gorilla testing and monkey testing:

| Gorilla Testing | Monkey Testing |
| --- | --- |
| Performed repeatedly on one particular module to check its robustness. | Performed on the entire application with completely random inputs. |
| Inputs are random but directed at a single module's functionality. | Inputs are random across the whole system, with no predefined test cases. |
| Typically done manually, by testers and developers together. | Can be done manually or automated with random input generators. |
| Goal is to verify that the module is fault tolerant under heavy, repeated use. | Goal is to check whether the system crashes under unpredictable input. |
**7. What do you mean by gorilla testing in the context of quality assurance?**

Answer: Gorilla testing is a method of software testing in which one module is tested repeatedly with random inputs, ensuring that the module's operations are exercised thoroughly and that there are no problems in the module. Because of this pattern of heavy, repeated testing, gorilla testing is also known as torture testing, fault tolerance testing, or frustrating testing. It is a manual test that is performed over and over again, with testers and developers working together to regularly evaluate a module's functionality.
**8. What do you mean by monkey testing in the context of quality assurance?**

Answer: Monkey testing is a software testing technique in which the tester feeds random inputs into the software application, without using predefined test cases, and observes the application's behaviour to see whether it crashes. The goal of monkey testing is to uncover faults and problems in software applications through experimentation. It is a form of black-box testing that involves supplying random inputs to a system in order to check its behaviour, such as whether or not it crashes. Monkey testing does not require the creation of test cases, and it can be automated, in the sense that we can develop programs or scripts that produce random inputs to exercise the system's behaviour. This technique comes in handy when undertaking stress or load testing. Monkeys are divided into two categories:

- Dumb monkeys: have no knowledge of the application and feed in completely random inputs, with no awareness of the application's state or of which inputs are valid.
- Smart monkeys: have some idea of the application's state and of which inputs are valid, so their random inputs are more targeted and more likely to reach interesting behaviour.
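An automated (dumb) monkey test can be sketched as below. The `parse_age` function under test is a hypothetical stand-in; the harness simply hammers it with random strings and records anything other than a clean rejection as a potential defect:

```python
import random
import string

def parse_age(text):
    """Toy function under test: parse and range-check an age field."""
    value = int(text)
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

def monkey_test(func, iterations=1000, seed=42):
    """Feed random inputs to func with no predefined test cases."""
    random.seed(seed)  # seeded so any failure can be reproduced
    crashes = []
    for _ in range(iterations):
        random_input = "".join(
            random.choices(string.printable, k=random.randint(0, 10))
        )
        try:
            func(random_input)
        except ValueError:
            pass  # a clean rejection of bad input is not a crash
        except Exception as exc:  # anything else counts as a defect
            crashes.append((random_input, exc))
    return crashes

crashes = monkey_test(parse_age)
print(len(crashes))  # number of inputs that caused an unexpected crash
```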
**9. Differentiate between Quality Assurance (QA) and Quality Control (QC).**

Answer: Quality Control (QC) is a systematic set of techniques used to assure the quality of software products or services. By testing and reviewing the software product's functional and non-functional requirements, the quality control process ensures that the product fulfils the actual needs. The following table lists the differences between Quality Assurance (QA) and Quality Control (QC):

| Quality Assurance (QA) | Quality Control (QC) |
| --- | --- |
| Process-oriented: focuses on preventing defects by improving the processes used to build the product. | Product-oriented: focuses on identifying defects in the product itself. |
| A proactive activity carried out throughout the software development life cycle. | A reactive activity carried out after the product, or part of it, has been built. |
| Involves the entire team, including management. | Performed mainly by the testing team. |
| Example activities: process definition, audits, training, tool selection. | Example activities: testing, reviews, inspections. |
**10. What do you understand about defect leakage ratio in the context of quality assurance?**

Answer: Software testers use defect leakage as a metric to determine the effectiveness of Quality Assurance (QA) testing. Defect leakage for a stage is the ratio of the number of defects that escaped that stage (i.e., were only captured in subsequent stages) to the total number of defects attributable to that stage (those captured in the stage itself plus those that escaped to subsequent stages). Defect leakage therefore measures the percentage of faults that leak from one testing stage to the next, and demonstrates the effectiveness of the software testers' work. The testing team's value is confirmed when defect leakage is small or non-existent.
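The arithmetic behind the metric can be made concrete with a short helper (the stage names and counts in the example are illustrative assumptions):

```python
def defect_leakage_ratio(found_in_stage, leaked_to_later_stages):
    """Fraction of a stage's defects that escaped to later stages.

    found_in_stage: defects caught during the stage itself
                    (e.g. during system testing).
    leaked_to_later_stages: defects attributable to that stage but
                    caught only later (e.g. during UAT or in production).
    """
    total = found_in_stage + leaked_to_later_stages
    if total == 0:
        return 0.0  # no defects at all -> no leakage
    return leaked_to_later_stages / total

# Example: 90 defects caught in system testing, 10 slipped through to UAT.
ratio = defect_leakage_ratio(90, 10)
print(f"{ratio:.0%}")  # 10%
```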
**11. What do you understand about Traceability Matrix (TM) in the context of quality assurance?**

Answer: A Traceability Matrix is a document that connects any two baseline documents that require a many-to-many relationship, in order to ensure that the relationship is complete. It is used to keep track of requirements and to make sure they are being met in the current project. The three major types of traceability matrix are:

- Forward traceability: maps requirements to test cases, ensuring every requirement is covered by at least one test.
- Backward (reverse) traceability: maps test cases back to requirements, ensuring no test exists for functionality that was never required.
- Bidirectional traceability: combines both directions in a single matrix.
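The relationship between the forward and backward directions can be sketched with plain dictionaries (the requirement and test-case IDs are hypothetical); the backward matrix is simply the forward one inverted:

```python
# Forward traceability: requirement -> test cases that cover it.
forward = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-02"],
}

# Derive backward traceability (test case -> requirements) by inversion.
backward = {}
for req, tests in forward.items():
    for tc in tests:
        backward.setdefault(tc, []).append(req)

print(backward)  # {'TC-01': ['REQ-001'], 'TC-02': ['REQ-001', 'REQ-002']}
```

Holding both directions at once is what the bidirectional matrix provides.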
**12. What do you understand about bug leakage and bug release?**

Answer:

- Bug leakage: a bug is said to have leaked when it is missed by the testing team during the testing cycle and is discovered by end users or customers after the software has been released. It indicates a gap in the testing effort.
- Bug release: a bug release occurs when a version of the software is shipped with some known bugs, usually of low priority or severity. These known issues are documented in the release notes so that users and stakeholders are aware of them.
**13. What do you mean by build and release in the context of quality assurance? Differentiate between them.**

Answer: A build is a version of the software that the development team hands over to the testing team for verification, while a release is the software that is formally delivered to the customer after testing is complete. The following table lists the differences between build and release:

| Build | Release |
| --- | --- |
| Handed over by the development team to the testing team. | Handed over by the organization to the customer or end users. |
| Occurs frequently, sometimes daily. | Occurs comparatively rarely. |
| May be rejected by the testing team if it fails smoke testing. | Based on one or more builds that have passed testing. |
**14. Differentiate between Test Plan and Test Strategy.**

Answer: The following table lists the differences between Test Plan and Test Strategy:

| Test Plan | Test Strategy |
| --- | --- |
| A project-level document describing the scope, objectives, schedule, resources, and deliverables of testing for a specific project. | An organization-level document describing the overall approach to testing. |
| Derived from the Software Requirement Specification (SRS). | Derived from the Business Requirement Specification (BRS). |
| Usually prepared by the test lead or test manager. | Usually defined at the organization or project-management level. |
| Changes as the project evolves. | Comparatively static; changes rarely. |
**15. Differentiate between Quality Assurance and Testing.**

Answer: Software testing is a method of investigating a system to see whether it works according to product requirements and customer expectations, and to identify potential flaws. A variety of methods are used to exercise the product, find bugs, and check whether they have been fixed. Testing allows customers to verify that the delivered product satisfies their expectations in terms of design, compatibility, functionality, and so on. Validating the product against specifications and client requirements, as well as detecting and reporting flaws, are all part of the testing process. It uses a variety of testing methodologies, such as functional, non-functional, and acceptance testing, to detect software flaws. Furthermore, the purpose of software testing is to ensure that any discovered faults are fully corrected, with no side effects, before the product is released to the client. The following table lists the differences between Quality Assurance and Testing:

| Quality Assurance | Testing |
| --- | --- |
| Process-oriented: aims to prevent defects by improving development and testing processes. | Product-oriented: aims to find defects in the software itself. |
| Spans the entire software development life cycle. | One activity within the development life cycle. |
| Does not necessarily involve executing the program. | Involves executing the software and comparing its actual behaviour with the requirements. |
**16. What is the lifecycle of a Quality Assurance Process?**

Answer: Every process in Quality Assurance follows the PDCA (Plan, Do, Check, Act) cycle, also known as the Deming cycle. The phases of this cycle are:

- Plan: establish the objectives of the process and determine the processes required to deliver a high-quality end product.
- Do: develop and execute the process, and make changes to it where needed.
- Check: monitor and measure the results of the process, and verify that they meet the expected outcomes.
- Act: take corrective action on the differences found and implement improvements to the process.

These stages are carried out to guarantee that the organization's procedures are assessed and improved on a regular basis.