
1.

What do you understand about Multiprocessing and Multithreading in the context of Operating Systems? Differentiate between them.

Answer»
  • Multiprocessing: A multiprocessing system has two or more processors. CPUs are added to the system to help boost its computational speed. Each CPU has its own registers and main memory. However, because each CPU is independent, one CPU may be idle while another is overburdened with specific tasks. In that situation, processes and resources are shared dynamically among the processors.
  • Multithreading: Multithreading is a system in which several threads of a process are created to increase the system's computational speed. In multithreading, many threads of a process are executed simultaneously, and thread creation is done economically. (A minimal thread-creation sketch follows the comparison below.)

The following points list the differences between Multiprocessing and Multithreading:

  • CPUs are added to increase computing power in multiprocessing. Multithreading, on the other hand, divides a single process into several threads to increase computational capability.
  • Multiprocessing is the execution of multiple processes at the same time. In multithreading, many threads of a single process are executed at the same time.
  • Multiprocessing is divided into two types: symmetric and asymmetric. Multithreading is not classified into such categories.
  • Process creation is a time-consuming task in multiprocessing. In multithreading, thread creation is economical.
  • Each process under multiprocessing has its own address space. In multithreading, all threads of a process share a common address space.
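To make the multithreading idea concrete, here is a minimal, hedged Java sketch (the class name is hypothetical) that starts several threads inside one process; all of them share the process's address space:

public class MultithreadingDemo {
    public static void main(String[] args) {
        // Create and start three threads within the same process.
        for (int i = 1; i <= 3; i++) {
            final int id = i;
            Thread t = new Thread(() ->
                    System.out.println("Thread " + id + " running in process "
                            + ProcessHandle.current().pid()));
            t.start();
        }
    }
}

Each thread runs concurrently with the others, whereas creating a whole new process (as in multiprocessing) would be considerably more expensive.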
2.

What do you understand about deadlocks in the context of Operating Systems? What are the four necessary conditions for deadlock to happen?

Answer»

A deadlock occurs when a group of processes is stalled because each process is holding a resource and waiting for a resource held by another process. Consider the situation when two cars are approaching each other on a narrow bridge that has room for only one of them: once they are in front of each other, neither of the two cars can move. In operating systems, a similar situation happens when two or more processes hold some resources while waiting for resources held by other processes.

The following are the four necessary conditions for deadlock to take place (a minimal two-lock sketch that exhibits them follows this list):

  • Mutual Exclusion: One or more resources are non-sharable (only one process can use a resource at a time).
  • Hold and Wait: A process is holding at least one resource and waiting for further resources.
  • No Preemption: A resource can be taken away from a process only when the process releases it voluntarily.
  • Circular Wait: A set of processes is waiting for each other in a circular fashion.
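As an illustration (a hedged sketch, not part of the original answer), the following Java snippet has two threads acquire two locks in opposite orders; together they can satisfy all four conditions above and deadlock:

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 holds lockA and then waits for lockB (hold and wait).
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                pause();                      // give the other thread time to grab lockB
                synchronized (lockB) {
                    System.out.println("t1 acquired both locks");
                }
            }
        });
        // Thread 2 holds lockB and then waits for lockA -> circular wait.
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) {
                    System.out.println("t2 acquired both locks");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}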
3.

Given a database table with two attributes, employee name and salary, write a query to find and print the employee having the Nth highest salary, using Structured Query Language (SQL).

Answer»

Example: Consider the following table:

ename    sal
P        100
Q        200
R        300
S        500
T        400
U        100

Here, if n = 2, then the output should be 

ename    sal
T        400

Approach 1: We use the concept of DENSE_RANK() to compute the Nth highest salary. The function DENSE_RANK() returns the rank of a row in an ordered collection of rows as a number. The ranks are in ascending order, starting with 1. The function takes any numeric data type as an argument and returns a number. As an analytic function, DENSE_RANK() computes the rank of each row returned by the query relative to the other rows, based on the values of the value_exprs in the order_by_clause.

SQL Query:

select *
from (select ename, sal,
             dense_rank() over (order by sal desc) rnk
      from Employee)
where rnk = &n;

Approach 2: In this approach, we first find the N highest distinct salaries. The Nth highest salary is then the lowest salary among the salaries returned by that subquery.

SQL Query:

SELECT *
FROM Employee
WHERE sal = (SELECT MIN(sal)
             FROM Employee
             WHERE sal IN (SELECT DISTINCT TOP N sal
                           FROM Employee
                           ORDER BY sal DESC));

Approach 3: In this approach, we create two aliases of the Employee table and compare each salary with the other salaries in the table, checking whether exactly N-1 distinct salaries are greater than the salary under consideration.

SQL Query:

SELECT ename, sal
FROM Employee e1
WHERE N - 1 = (SELECT COUNT(DISTINCT sal)
               FROM Employee e2
               WHERE e2.sal > e1.sal);
4.

Given an array of integers as input, find the sum of the contiguous subarray in the array such that its sum is the maximum of all the possible subarrays.

Answer»

Example:

Input: Arr = {-2, -3, 4, -1, -2, 1, 5, -3}

Output: 7

Explanation: The largest contiguous subarray sum is formed by the subarray [4, -1, -2, 1, 5]

Approach 1: The most basic approach is to traverse every possible subarray of the given array and find the sum of each subarray. We then compare the obtained sum with our answer and update the answer if this sum is greater. The time complexity of this approach is O(n^3) and the space complexity is O(1). We run a nested loop to consider all the subarrays and, inside the nested loop, one more loop to find the sum of the elements of the subarray in consideration.

We can reduce the time complexity to O(n^2) by maintaining a prefix-sum array for the given array, which eliminates the innermost loop. However, doing so increases the space cost to O(n).
These approaches are fairly straightforward, so we only give a brief sketch of the prefix-sum variant below before looking at the most optimal solution.
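A brief sketch of the O(n^2) prefix-sum variant (written in Java here purely for illustration; the class and method names are hypothetical):

public class MaxSubarrayPrefixSum {
    // O(n^2) time, O(n) extra space: try every (start, end) pair using a prefix-sum array.
    static int maxSubArraySum(int[] arr) {
        int n = arr.length;
        int[] prefix = new int[n + 1];                  // prefix[i] holds the sum of arr[0..i-1]
        for (int i = 0; i < n; i++) {
            prefix[i + 1] = prefix[i] + arr[i];
        }
        int best = arr[0];
        for (int start = 0; start < n; start++) {
            for (int end = start; end < n; end++) {     // sum of arr[start..end] in O(1)
                best = Math.max(best, prefix[end + 1] - prefix[start]);
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] arr = {-2, -3, 4, -1, -2, 1, 5, -3};
        System.out.println(maxSubArraySum(arr));        // prints 7
    }
}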

Approach 2: In this approach, we follow Kadane's algorithm. The basic idea behind Kadane's algorithm is to look for all positive contiguous segments of the array (max_ending_here is used for this) and, among them, to keep track of the maximum-sum contiguous segment (max_so_far is used for this). Each time the current segment sum exceeds max_so_far, we update max_so_far.

Solution Code:

#include <iostream>
#include <algorithm>
using namespace std;

// Function to find the maximum contiguous subarray sum
int findMaxSubArraySum(int arr[], int size)
{
    int max_so_far = arr[0];
    int curr_max = arr[0];
    for (int i = 1; i < size; i++) {
        curr_max = max(arr[i], curr_max + arr[i]);
        max_so_far = max(max_so_far, curr_max);
    }
    return max_so_far;
}

int main()
{
    int arr[] = {-2, -3, 4, -1, -2, 1, 5, -3};
    int n = sizeof(arr) / sizeof(arr[0]);
    int max_sum = findMaxSubArraySum(arr, n);
    cout << "The maximum contiguous sum for the given array is : " << max_sum;
    return 0;
}

Output:

7

Time Complexity: O(n)

Space Complexity: O(1)

Explanation: In the above code, the function findMaxSubArraySum() takes the input of an array and the size of the array as its parameters and returns the maximum sum for a contiguous subarray of the given array. We keep two variables to find our answer. curr_max holds the current maximum subarray sum whose elements are the array elements just before the current index. max_so_far holds the maximum subarray sum seen so far.

5.

What are the advantages of indexing in DBMS?

Answer»

The following are the advantages of indexing in DBMS:

  • It reduces the total number of I/O operations required to retrieve data, since rows can be located through the index structure instead of scanning the entire table.
  • Users may search and retrieve data more quickly.
  • Indexing can save tablespace: in an index-organized arrangement there is no need to store a separate ROWID to link back to a row in the table, so table space is saved.

The following are the disadvantages of indexing in the context of DBMS:

  • Partitioning an index-organized table is not permitted.
  • Indexing in SQL reduces the speed of INSERT, DELETE, and UPDATE queries.
6.

What do you understand about indexing in the context of Database Management Systems (DBMS)? What are the different types of indexing?

Answer»

Indexing is a data structure technique for retrieving records quickly from a database file. An index is a small table with only two columns. The first column holds a copy of the table's primary or candidate key. The second column holds a series of pointers carrying the address of the disk block where that particular key value is stored. An index takes a search key as input and returns a collection of matching records.
In the real world, we can compare indexing with the index page of a book. To access the content of a book, we go through its index to know which content lies on which page. This makes our retrieval faster. In a similar manner, Database Management Systems maintain indexes so as to make retrieval from the database faster.

The following are the different types of indexing in DBMS:

1. Primary Indexing: A primary index is a two-field, ordered file of fixed length. The first field is the same as the primary key, while the second field points to the data block of concern. There is always a one-to-one relationship between the elements in the index table in a primary index. Primary indexing in a database management system is further divided into two types:

  • Dense Index: For each search key value in the database, a record is created in a dense index. This allows you to search more quickly, but it requires more storage space for index information.
  • Sparse Index: It's an index record that only appears for a subset of the file's values. Sparse indexes assist you in resolving DBMS dense indexing challenges. This indexing approach keeps the same data block address in a series of index columns, and when data is needed, the block address is retrieved. It takes up less space and has a smaller maintenance overhead for insertions and deletions, but it is slower to locate records than a dense index.

2. Secondary Indexing: In a database management system, a secondary index can be constructed on a field that has a unique value for each record and is a candidate key. A non-clustering index is another name for it.

3. Clustering Indexing: The records themselves, not references, are stored in a clustered index. Non-primary key columns are sometimes used to build indexes, and they may not be unique for each record. In this case, you can aggregate two or more columns to generate unique values and create a clustered index. This also aids in the faster identification of the record.

7.

Given an array of integers, you need to find all the triplets whose sum is equal to 0.

Answer»

Example:

Input: arr = {1, -2, 1, 0, 5}

Output: [1, -2, 1]

Explanation: In the given array, only the triplet [1, -2, 1] sums up to 0.

Input: arr = {0, -1, 2, -3, 1}

Output: [0, -1, 1], [2, -3, 1]

  • Approach 1: The basic approach involves running three loops and checking, one by one, whether the sum of three elements is zero. If the sum of the three elements is 0, print them. Since this is a very straightforward approach, we are not providing its implementation.
  • Approach 2: In this approach, we use the concept of hashing. We run two nested loops, the outer loop from 0 to n-2 and the inner loop from i+1 to n-1. We check whether the hashmap contains the sum of the ith and jth elements multiplied by -1. If the hashmap contains that element, we have found a triplet and we print it.

Solution Code:

#include <iostream>
#include <unordered_set>
using namespace std;

// Function to print all the triplets in a given array whose sum equals 0.
void printTriplets(int arr[], int n)
{
    bool flag = false;
    for (int i = 0; i < n - 1; i++) {
        // Elements seen so far for the current value of i
        unordered_set<int> hash_map;
        for (int j = i + 1; j < n; j++) {
            int third_element = -(arr[i] + arr[j]);
            // If the required third element has already been seen, we have a triplet
            if (hash_map.find(third_element) != hash_map.end()) {
                cout << third_element << " " << arr[i] << " " << arr[j] << "\n";
                flag = true;
            }
            else
                hash_map.insert(arr[j]);
        }
    }
    if (flag == false)
        cout << "There is no such triplet in the given array." << endl;
}

Time Complexity: O(n^2)

Space Complexity: O(n)

Here, n is the size of the array

Explanation: In the above code, the function printTriplets() takes an array and its size as input and prints all the triplets whose sum equals 0. We keep track of the elements visited with the help of a hashmap.

Approach 3: In this approach, we aim to optimize the space complexity of the above approach. We first sort the input array in ascending order. For each index i, we create two variables low = i + 1 and high = n - 1.

  • If the total of array[i], array[low], and array[high] is equal to zero then print this triplet and increment low and decrement high.
  • If the sum is less than zero, increase the value of low; as the array is sorted, the sum will increase as the value of low is increased, so array[low+1] > array [low].
  • If the sum is larger than zero, reduce the value of high; as the array is sorted, the sum will decrease as the value of high is decreased, so array[high-1] < array [high].

Solution Code:

#include <iostream>
#include <algorithm>
using namespace std;

// Function to print all the triplets in a given array whose sum equals 0.
void printTriplets(int arr[], int n)
{
    bool flag = false;
    // Sorting the given input array
    sort(arr, arr + n);
    for (int i = 0; i < n - 1; i++) {
        // Initialize the left and right pointers
        int low = i + 1;
        int high = n - 1;
        int cur = arr[i];
        while (low < high) {
            if (cur + arr[low] + arr[high] == 0) {
                // We print the elements if their sum is zero
                cout << cur << " " << arr[low] << " " << arr[high] << "\n";
                flag = true;
                low++;
                high--;
            }
            // If the sum of the three elements is less than zero, we move the left pointer up
            else if (cur + arr[low] + arr[high] < 0)
                low++;
            // If the sum is greater than zero, we move the right pointer down
            else
                high--;
        }
    }
    if (flag == false)
        cout << "There is no such triplet in the given array." << endl;
}

Time Complexity: O(n^2)

Space Complexity: O(1)

Explanation: In the above code, the function printTriplets() takes an array and the size of the array as input parameters. We first sort the array and then use the two pointers low and high to examine, for every index i, the candidate triplets that may sum up to 0.

8.

What are the advantages of Database Management Systems over File Systems?

Answer»

The following are the advantages of Database Management Systems (DBMS) over File Systems:

  • Data redundancy and inconsistency - Redundancy refers to the concept of data repetition, which means that any data item may have multiple copies. The file system cannot regulate the redundancy of data as each user defines and maintains the needed files for a given application to execute. It's possible that two users are sharing the same files and data for different programs. As a result, changes performed by one user do not appear in files utilized by other users, resulting in data inconsistency. DBMS, on the other hand, manages redundancy by keeping a single data repository that is defined once and accessed by multiple users. As there is no or less redundancy, data remains consistent.
  • Data sharing - Sharing data is difficult in a file system. Due to the centralized structure of a DBMS, data can be shared easily.
  • Data storage technique - In a file system, data is stored in files, which are sorted into folders, which are in turn structured into a hierarchy of directories and subdirectories. In a DBMS, data is stored in a structured and organized manner. Depending on the type of DBMS, it may be stored in the form of tables (if the DBMS is a relational DBMS) or in the form of nodes (if the DBMS is a graph DBMS), and so on. This leads to faster retrieval of data based on search queries.
  • Concurrent access to data - When more than one user accesses the same data at the same time, this is referred to as data concurrency. Anomalies occur when one user's edits are overwritten by changes made by another user. There is no method in the file system to prevent such anomalies. A DBMS provides a locking system to prevent anomalies from occurring.
  • Data searching - Each file system search activity necessitates the creation of a separate application program. A DBMS, on the other hand, has built-in searching capabilities. To access data from the database, the user merely needs to submit a short query.
  • Data integrity - Before putting data into a database, some constraints may need to be applied to the data. There is no process in the file system to check these constraints automatically. DBMS, on the other hand, ensures data integrity by enforcing user-defined restrictions on data.
  • System crashes - Systems may crash for a variety of reasons. This is a problem for file systems because there is no way to recover data that is lost when the system crashes. A DBMS has a recovery manager that recovers the data, giving it another advantage over file systems.
  • Data security - A file system can safeguard data using a password, but how long can the password be protected? That is something no one can promise. This is not the case with a DBMS, which contains particular capabilities that help protect data.
9.

What do you understand about programs and processes in the context of Operating Systems? Differentiate between them.

Answer»
  • Program: When we run a newly compiled program, the operating system creates a process to run the application. The program's execution begins with a GUI mouse click, command-line entry of the program's name, and so on. Because it remains in secondary memory, such as the contents of a file on disk, a program is a passive entity. A single program may be associated with multiple processes.
  • Process: The term process refers to computer program code that has been loaded into memory and can be executed by the central processing unit (CPU). A process is an instance of a computer program that is running, or an entity that may be allocated to and executed on a processor. When a program is loaded into memory, it becomes a process and hence an active entity. (A small sketch that launches a program as a process follows the comparison below.)

The following points list the differences between a program and a process:

  • A program is a collection of instructions that must be followed in order to execute a certain task. A process is an instance of an executing program.
  • A program is a passive entity because it is stored in secondary memory. A process is an active entity: it is created and loaded into main memory during execution.
  • A program exists in only one place and remains there until it is deleted. A process has a finite lifespan because it is terminated after its task is completed.
  • A program is also referred to as a static entity. A process is also referred to as a dynamic entity.
  • A program does not require any resources; it only needs memory space to store its instructions. During its lifespan, a process requires many resources, including CPU, memory addresses, and I/O.
  • A program has no control block. Each process has its own control block, called the Process Control Block.
  • A program has two logical components: code and data. A process, in addition to program data, requires extra information for its administration and execution.
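To make the distinction concrete, here is a small, hedged Java sketch (the command used is only an example) that takes a program stored on disk, the passive entity, and starts it as a new process, the active entity:

import java.io.IOException;

public class ProgramVsProcessDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // "java -version" names a program on disk; starting it creates a new process.
        Process child = new ProcessBuilder("java", "-version")
                .inheritIO()   // let the child process write to our console
                .start();
        int exitCode = child.waitFor();
        System.out.println("Child process finished with exit code " + exitCode);
    }
}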
10.

What do you understand by Paging and Segmentation? Differentiate between them.

Answer»
  • Paging: Paging is a memory management approach in which the process address space is divided into uniformly sized pieces known as pages (the size is a power of 2, between 512 bytes and 8192 bytes). The number of pages determines the size of the process. Similarly, main memory is partitioned into small fixed-size blocks of (physical) memory called frames, with the size of a frame being the same as that of a page, in order to maximize main memory utilization and avoid external fragmentation. (A small address-translation sketch follows the comparison below.)
  • Segmentation: Segmentation is a memory management strategy in which each job is broken into several smaller segments, one for each module, each of which contains parts that perform related functions. Each segment corresponds to a different logical address space of the program. When a process is ready to run, its segments are loaded into non-contiguous memory, though each individual segment is placed into a contiguous block of available memory. Segmentation is similar to paging, except that segments are variable in length, whereas pages are fixed in size.

The following points list the differences between Paging and Segmentation:

  • In paging, a process address space is divided into fixed-size pieces called pages. In segmentation, a process address space is divided into pieces of varying sizes called segments.
  • The operating system divides the memory into pages. The compiler is in charge of determining the segment size, the virtual address, and the actual address.
  • The size of a page is governed by the amount of memory available. The user determines the size of each segment.
  • In terms of memory access, paging is the faster strategy. Segmentation is slower than paging.
  • Paging might result in internal fragmentation because certain pages may be only partially filled. Segmentation might result in external fragmentation because certain memory blocks may not be used at all.
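A small worked sketch of the paging arithmetic in Java (the page size, logical address, and frame mapping below are assumptions chosen purely for illustration): a logical address is split into a page number and an offset, and the page table maps the page to a frame to form the physical address.

public class PagingDemo {
    public static void main(String[] args) {
        final int PAGE_SIZE = 4096;        // assume 4 KB pages (a power of two)
        int logicalAddress = 10_000;       // example logical address

        int pageNumber = logicalAddress / PAGE_SIZE;   // which page the address falls in
        int offset = logicalAddress % PAGE_SIZE;       // position inside that page

        int frameNumber = 5;               // hypothetical page-table entry: page 2 -> frame 5
        int physicalAddress = frameNumber * PAGE_SIZE + offset;

        System.out.println("page = " + pageNumber + ", offset = " + offset
                + ", physical address = " + physicalAddress);
    }
}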
11.

Given a linked list, check whether the linked list has a loop in it or not.

Answer»

Example:

Input:

1 -> 2 -> 3 -> 4 -> 5

Output:

No loop present.

Input: 

1 -> 2 -> 3 -> 4
     ^         |
     |         v
     6 <------ 5

(the next pointer of node 6 points back to node 2, forming a loop)

Output:

Loop present.

  • Approach 1: Traverse the list node by node, adding node addresses to a hash table as you go. If NULL is reached at any time, return false; if the current node already appears in the hash table, a previously recorded node has been reached again, so return true.
  • Solution Code:
// Node structure
struct Node {
    int data;
    struct Node* next;
};

bool findLoop(struct Node* head)
{
    // Hash set for flagging whether a node has been visited
    unordered_set<Node*> hash_map;
    while (head != NULL) {
        // If this node is already present in the hash set, there is a cycle, so we return true
        if (hash_map.find(head) != hash_map.end())
            return true;
        // If we are seeing the node for the first time, we insert it into the hash set
        hash_map.insert(head);
        head = head->next;
    }
    // We return false in case there is no loop in the given linked list
    return false;
}

Time Complexity: O(n)
Space Complexity: O(n)

Here, n is the number of nodes in the given linked list.

Explanation: In the above code, the function findLoop detects if a loop is present in a linked list or not. We maintain a hashmap of nodes and as we traverse through the linked list, we flag a node as visited. If we encounter a visited node again, it implies that a loop is present in the linked list and so we return true.

  • Approach 2: In this approach, we maintain two pointers: slow_pointer and fast_pointer. We initialize both of them with the head of the linked list. We iterate through the linked list until either fast_pointer or fast_pointer->next becomes NULL. In each iteration, we move slow_pointer one step ahead and fast_pointer two steps ahead. If at any point the slow_pointer and the fast_pointer become equal, we return true, implying that a cycle is indeed present in the linked list. This algorithm is also known as Floyd's cycle-finding algorithm.
  • Solution Code:
// Node structure
struct Node {
    int data;
    struct Node* next;
};

// Returns true if there is a loop in the linked list, else returns false.
bool findLoop(Node* head)
{
    Node *slow_pointer = head, *fast_pointer = head;
    while (fast_pointer && fast_pointer->next) {
        slow_pointer = slow_pointer->next;
        fast_pointer = fast_pointer->next->next;
        if (slow_pointer == fast_pointer) {
            return true;
        }
    }
    return false;
}

Time Complexity: O(n)
Space Complexity: O(1)

Explanation: In the above code, the function findLoop takes in the head of the linked list as an argument and returns true if a loop is present in the linked list else false. We maintain two pointers: slow_pointer and fast_pointer as discussed in the solution approach and keep iterating the linked list until either fast_pointer or fast_pointer->next becomes NULL. At any iteration, if the slow_pointer becomes equal to the fast_pointer, we return true. 

12.

What do you understand about normalization in the context of Database Management Systems (DBMS)? What is the need for normalization?

Answer»

The process of organizing data in a database is known as normalization. This includes generating tables and defining relationships between them according to rules designed to protect the data while also making the database more flexible by removing redundancy and inconsistent dependencies. Normalization aims at avoiding undesirable characteristics such as insertion, update, and deletion anomalies. A normal form is a set of criteria against which each relation is checked in order to eliminate multivalued, join, functional, and trivial dependencies. As a result, updating, deleting, or inserting data does not introduce anomalies into the database tables, which helps improve the integrity and performance of relational databases.

The following points illustrate the need for normalization in DBMS:

  • It is used to clean up the relational table by removing duplicate data and database oddities.
  • By reassessing the data types used in the table, normalization helps to decrease redundancy and complexity.
  • It is a good idea to break down a huge database table into smaller tables and use relationships to connect them.
  • It prevents duplicate data from being entered into the database and removes repeating groups.
  • It lowers the likelihood of anomalies in a database.
13.

What do you understand about Design Patterns in the context of Java? What are the different types of design patterns in Java?

Answer»

Design patterns are reusable solutions for common software development challenges. Repetitive code, redundant functions, and logic are examples of these issues. These aid in the development of software by reducing the amount of effort and time required by the developers. Design patterns are widely used in object-oriented software products to incorporate best practices and promote reusability in the development of reliable code.

Design patterns are divided into three categories. They are as follows:

  • Creational Patterns: By hiding the creation logic, these patterns provide you with more options when it comes to generating objects. The objects created are independent of the implementing system. The Factory, Builder, Prototype, Singleton, and Abstract Factory design patterns are some examples of creational patterns. (A minimal Singleton sketch is shown after this list.)
  • Structural Patterns: These patterns aid in the definition of class and object structures, as well as the construction of classes, interfaces, and objects. Adaptor design, Facade design, Decorator design, proxy design, and other structural patterns are examples.
  • Behavioural Patterns: These patterns help specify how objects should communicate and interact with one another. The Command, Iterator, Observer, and Strategy patterns are examples of behavioural patterns.
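As a quick illustration of a creational pattern, here is a minimal, hedged sketch of a thread-safe Singleton in Java (the class name ConfigurationManager is hypothetical and chosen only for illustration):

public class ConfigurationManager {
    // The single, eagerly created instance shared by the whole application.
    private static final ConfigurationManager INSTANCE = new ConfigurationManager();

    // A private constructor prevents other classes from creating new instances.
    private ConfigurationManager() { }

    // Global access point to the single instance.
    public static ConfigurationManager getInstance() {
        return INSTANCE;
    }
}

Every caller obtains the same object via ConfigurationManager.getInstance(), so exactly one instance exists for the lifetime of the application.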
14.

Explain the following in the context of Object-Oriented Programming language Java: Coupling, Cohesion, Association, Aggregation, Composition.

Answer»

1. Coupling: Coupling is a term used in object-oriented design to describe the degree of direct knowledge that one element has of another. To put it another way, how often do changes in class A cause changes in class B? Coupling can be divided into two categories:

  • Tight Coupling: Tight coupling indicates that the two classes frequently change at the same time. In other words, if A knows more about how B was implemented than it should, then A and B are tightly connected.
    For example, if you wish to modify the design of your skin, you'll have to change the design of your body as well, because the two are strongly related. RMI (Remote Method Invocation) is the best example of tight coupling.

Let us understand it with the help of the following code:

class Volume {
    public static void main(String args[]) {
        Box b = new Box(1, 2, 3);
        System.out.println(b.volume);
    }
}

class Box {
    public int volume;

    Box(int length, int width, int height) {
        this.volume = length * width * height;
    }
}

Output:

6
  • Explanation: In the above code, the class Volume is tightly coupled with the Box class. For the volume to change, the attributes of the Box class need to be changed and hence there is a tight coupling between these two classes.

2. Loose Coupling: Class A and class B are said to be loosely coupled if the only knowledge that class A has about class B is what class B has revealed through its interface. The Spring framework employs the dependency injection technique, with the help of the POJO (Plain Old Java Object)/POJI (Plain Old Java Interface) model, to overcome the challenges of tight coupling between objects; loose coupling can be achieved through dependency injection.

For instance, if you change your clothes, you are not obligated to change your body; if you are able to do so, you have loose coupling. Tight coupling occurs when you are unable to do so. Interfaces and JMS are examples of loose coupling. (A small interface-based sketch is shown below.)
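A minimal, hedged sketch of loose coupling through an interface (the type names PaymentMethod, CardPayment, WalletPayment, and Checkout are hypothetical): Checkout only depends on the PaymentMethod abstraction, so either implementation can be swapped in without changing Checkout.

// The only thing Checkout knows about payment is this interface.
interface PaymentMethod {
    void pay(double amount);
}

class CardPayment implements PaymentMethod {
    public void pay(double amount) { System.out.println("Paid " + amount + " by card"); }
}

class WalletPayment implements PaymentMethod {
    public void pay(double amount) { System.out.println("Paid " + amount + " from wallet"); }
}

class Checkout {
    private final PaymentMethod paymentMethod;   // depends only on the abstraction

    Checkout(PaymentMethod paymentMethod) {      // the dependency is injected from outside
        this.paymentMethod = paymentMethod;
    }

    void placeOrder(double amount) {
        paymentMethod.pay(amount);
    }
}

public class LooseCouplingDemo {
    public static void main(String[] args) {
        new Checkout(new CardPayment()).placeOrder(499.0);
        new Checkout(new WalletPayment()).placeOrder(99.0);   // swapped without touching Checkout
    }
}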

3. Association: The term "association" refers to a relationship that exists between two distinct classes that are established through their Objects. One-to-one, one-to-many, many-to-one, and many-to-many associations are all possible. An Object communicates with another object in Object-Oriented programming to leverage the capabilities and services provided by that object. The following are the two types of association present:

4. Composition: Composition is the strongest sort of relationship. If an object owns another object and the owned object cannot exist without the owner object, the association is said to be a composition. Take the situation of a human with a heart. The heart is contained within the Human object, and the heart cannot exist without the Human.

Let us understand it better with the help of the following code:

// Car class
public class Car {
    // An engine is an integral part of a car
    private final Engine engine;

    public Car() {
        engine = new Engine();
    }
}

// Engine class
class Engine {
    // code
}

Explanation: In the above code, we can clearly see that the two classes share a Composition relationship. The existence of an Engine object is entirely dependent on the existence of a Car object. If no Car object exists, there is no Engine object created and so, the two classes Car and Engine can be said to be in a ‘Composition’ relationship.

5. Aggregation: Aggregation can be referred to as weak association. If both Objects may exist independently, the association is considered to be aggregation. It is a unique type of association in which:

  • It's a one-way relationship with a unidirectional association. For example, a department can have students but not the other way around, making it unidirectional.
  • In Aggregation, both entities can exist on their own, which implies that terminating one will not affect the other.

Let us consider the following code to understand it better:

import java.util.ArrayList;
import java.util.List;

// Team class
class Team {
    // players can be 0 or more
    public List<Player> players;

    public Team() {
        players = new ArrayList<>();
    }
}

// Player class
class Player {
    int id;
    String name;

    Player(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

// Tournament class
public class Tournament {
    public static void main(String args[]) {
        Team t1 = new Team();
        Player p1 = new Player(1, "A");
        Player p2 = new Player(2, "B");
        t1.players.add(p1);
        t1.players.add(p2);

        Team t2 = new Team();
        Player p3 = new Player(3, "C");
        Player p4 = new Player(4, "D");
        t2.players.add(p3);
        t2.players.add(p4);
    }
}

Explanation: In the above code, we can see that the two classes Team and Player form an Aggregation relationship. A player is part of a team, so a Player object has a “HAS-A” relationship with a Team object. However, Team and Player objects can exist independently. Hence, they form an “Aggregation” relationship.

15.

What are the 4 major pillars of Object-Oriented Programming? Explain them.

Answer»

The following are the 4 major pillars of Object-Oriented Programming:

  • Abstraction - Abstraction is a technique for displaying only the information that is required while hiding the rest. Abstraction is the process of picking data from a huge set of data in order to display only the information required, hence minimizing programming complexity and effort. Let us consider an example to understand it better. A car driver knows how to drive a car: he knows when to apply the brakes and when to accelerate. However, he might not be aware of how the entire car works, and he does not need to know the inner workings of these functionalities in order to drive. This can be considered a real-life example of abstraction, since the driver is unaware of the complex details and is only concerned with how to operate the functionality.
  • Encapsulation - Encapsulation can be described as the grouping of information into a single unit. It is the glue that holds code and the data it manipulates together. Encapsulation can also be thought of as a protective shield that prevents data from being accessed by code outside of the shield.
    Encapsulation means that a class's variables or data are concealed from other classes and can only be accessed through the member functions of the class in which they are declared. Data hiding is similar to encapsulation in that the data in a class is concealed from other classes.
  • Inheritance - Inheritance is a crucial component of OOP (Object-Oriented Programming). It is a Java mechanism that allows one class to inherit the characteristics (fields and methods) of another.
    • Superclass: A superclass is a class whose characteristics are inherited (or a base class or a parent class).
    • Subclass: A subclass is a class that inherits from another class (or a derived class, extended class, or child class). In addition to the superclass fields and methods, the subclass can add its own fields and methods.
      Inheritance supports the concept of "reusability," which means that if we want to create a new class but there is already one that contains some of the code we need, we can derive our new class from the old one. We're reusing the old class's fields and functions in this way.
  • Polymorphism - Polymorphism is an OOP concept that describes a variable, object, or function's ability to take on various forms. In English, the verb run, for example, has distinct meanings depending on whether it is used with a laptop, a foot race, or a corporation. We can deduce the meaning of run from the other words in the sentence. Polymorphism can be viewed in the same way. (A short Java example combining these pillars follows this list.)
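To tie the pillars together, here is a brief, hedged Java sketch (the Animal, Dog, and Cat classes are illustrative): the private field with a getter shows encapsulation, the abstract speak() method shows abstraction, Dog and Cat extending Animal shows inheritance, and the overridden speak() calls resolved at runtime show polymorphism.

// Encapsulation: the name field is hidden and exposed only through a getter.
abstract class Animal {
    private final String name;

    Animal(String name) { this.name = name; }

    String getName() { return name; }

    // Abstraction: callers only know that every Animal can speak.
    abstract String speak();
}

// Inheritance: Dog and Cat reuse the fields and methods of Animal.
class Dog extends Animal {
    Dog(String name) { super(name); }
    @Override String speak() { return "Woof"; }
}

class Cat extends Animal {
    Cat(String name) { super(name); }
    @Override String speak() { return "Meow"; }
}

public class PillarsDemo {
    public static void main(String[] args) {
        // Polymorphism: the same reference type behaves differently at runtime.
        Animal[] animals = { new Dog("Rex"), new Cat("Misty") };
        for (Animal a : animals) {
            System.out.println(a.getName() + " says " + a.speak());
        }
    }
}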