| 51. |
What is the difference between Push and Pull Subscription? |
|
Answer» Push and Pull are types of replication configuration; they decide how data is delivered to the subscriber.
Push is best suited when the subscriber is connected all the time and needs access to the latest data as soon as it is available.
Pull is best suited when subscribers are not online all the time. The subscriber can tolerate data lag and can wait for a delayed data refresh. |
|
| 52. |
What are the different types of SQL Server replication? |
Answer» SQL Server supports three main types of replication:
1. Snapshot replication – takes a point-in-time snapshot of the published objects and data and applies it to the subscriber.
2. Transactional replication – continuously delivers committed transactions from the publisher to subscribers, typically with low latency.
3. Merge replication – allows both publisher and subscribers to change data; changes are merged and conflicts are resolved by the Merge Agent.
|
|
| 53. |
List down best practices of tempDB configuration? |
Answer» Commonly recommended tempDB configuration practices include:
1. Create multiple equally sized data files (one per logical CPU core, up to 8; add more only if allocation contention persists).
2. Pre-size the files and use identical autogrowth settings to avoid uneven file usage.
3. Place tempDB on the fastest available storage, separated from user database data and log files.
4. On versions before SQL Server 2016, consider trace flags 1117 and 1118 to reduce allocation contention (this behaviour is built in from SQL Server 2016 onward).
|
|
| 54. |
Can rebuilding the clustered index rebuild the nonclustered indexes on the table? |
|
Answer» Each table can have only one clustered index and multiple nonclustered indexes. Every nonclustered index uses the clustered index key as its row locator, so it directly depends on the clustered index. Because of this dependency, a common question is whether rebuilding the clustered index will also rebuild the nonclustered indexes on the table. The answer is NO. A clustered index rebuild will not rebuild or reorganize the nonclustered indexes. You need to plan a nonclustered index rebuild or reorganization separately. |
|
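A minimal sketch of rebuilding the two index types separately (table and index names are hypothetical):

```sql
-- Rebuilding the clustered index does NOT touch the nonclustered indexes
ALTER INDEX PK_Orders ON dbo.Orders REBUILD;

-- Nonclustered indexes must be rebuilt explicitly...
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- ...or rebuild every index on the table in one statement
ALTER INDEX ALL ON dbo.Orders REBUILD;
```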
| 55. |
What are the primary differences between an index reorganization and an index rebuild? |
|
Answer» Index Reorganize: a lightweight, always-online operation that defragments only the leaf level of the index by physically reordering pages; it is single-threaded, can be interrupted without losing the progress already made, and does not update statistics.
Index Rebuild: drops and re-creates the index, removing fragmentation at all levels; it updates the index statistics with a full scan and can run in parallel, but it is offline unless the ONLINE = ON option is used, and if interrupted it must be rolled back and restarted.
|
|
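A common maintenance pattern (a sketch; the thresholds and object names are illustrative, not prescriptive) is to choose reorganize or rebuild based on fragmentation reported by sys.dm_db_index_physical_stats:

```sql
-- Check fragmentation for one table's indexes
SELECT i.name, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                    NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Rule of thumb: 5-30% fragmentation -> REORGANIZE, above 30% -> REBUILD
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;
```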
| 56. |
What are SQL Injection and its problems? |
|
Answer» SQL injection is a web hacking technique, used by unauthorized people or processes, that can destroy your database. It is a web security vulnerability that allows attackers to interfere with the queries an application makes to its database. Attackers use it to retrieve data belonging to other users and data they are not authorized to see. The major risk is that SQL injection can cause system crashes, data theft, data corruption, and more. Preventing it is not the task of one person or team; the complete support and architecture teams work together. Developers/DBAs are responsible for database security and proper SQL code, application developers for application code and database access methods, and the infrastructure team for network, firewall & OS security. Proper SQL instance, OS & firewall security combined with a well-written application help reduce the risk of SQL injection. |
|
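The classic application-side defence is parameterized SQL instead of string concatenation. A sketch (table and variable names are hypothetical):

```sql
DECLARE @UserInput nvarchar(100) = N'Alice';  -- imagine this came from the UI

-- Vulnerable: user input concatenated straight into the statement text
-- EXEC ('SELECT * FROM dbo.Users WHERE Name = ''' + @UserInput + '''');

-- Safer: the input travels as a typed parameter, never as SQL text
EXEC sys.sp_executesql
     N'SELECT * FROM dbo.Users WHERE Name = @Name',
     N'@Name nvarchar(100)',
     @Name = @UserInput;
```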
| 57. |
What is the Guest user account? Explain in brief. |
|
Answer» The Guest user account is created by default with the SQL Server installation. The Guest user is not mapped to any login but can be used by any login when explicit permissions have not been granted to access an object. You can drop the Guest user from all databases except master and tempDB. When any user logs in to SQL Server, there are two parts: server and database. First, authentication is verified at the server level and the user is able to log in to SQL Server. Second, the mapping of the login to a database user is verified. If the login is not mapped to any database user but is able to log in to SQL Server, SQL Server automatically maps that login to Guest and grants it database access. One of Microsoft's security recommendations is to drop or disable the Guest user in every database except the master and tempDB databases. By keeping the Guest user available, you are at risk of unauthorized access to data. |
|
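Disabling Guest access might look like the sketch below (the database name is hypothetical; Guest cannot actually be dropped from a database, but its CONNECT permission can be revoked):

```sql
USE [SomeUserDatabase];
GO
-- Guest cannot be dropped, so revoke its permission to connect instead
REVOKE CONNECT FROM GUEST;

-- Verify what, if anything, guest can still do in this database
SELECT p.name, dp.permission_name, dp.state_desc
FROM sys.database_principals AS p
LEFT JOIN sys.database_permissions AS dp
  ON dp.grantee_principal_id = p.principal_id
WHERE p.name = 'guest';
```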
| 58. |
What are SQL Server Agent Fixed Database Roles with the definition? |
|
Answer» SQL Server Agent has lots of features, and sometimes you need to grant rights on the Agent to manage and view jobs. Giving everyone the sysadmin permission is not appropriate. SQL Server has 3 fixed database roles (in msdb) for permissions on SQL Agent & jobs:
1. SQLAgentUserRole – the least privileged role; members can create, manage, and run only the jobs they own.
2. SQLAgentReaderRole – includes SQLAgentUserRole permissions and additionally lets members view (but not manage) all jobs and their history.
3. SQLAgentOperatorRole – the most privileged of the three; members can view all jobs and also start and stop jobs owned by others.
|
|
| 59. |
What types of functionality does SQL Server Agent provide? |
|
Answer» SQL Server Agent provides multiple functionalities, such as:
1. Scheduling and running jobs (multi-step T-SQL, SSIS, PowerShell, or CmdExec tasks).
2. Defining alerts that respond to SQL Server events or performance conditions.
3. Notifying operators via e-mail when jobs fail or alerts fire.
4. Running maintenance plans (backups, index maintenance, integrity checks).
5. Supporting replication agents and log shipping schedules.
|
|
| 60. |
Consider a scenario where the server performs a full backup on Sunday at 2 AM, a differential backup Monday to Saturday at 2 AM, and a transaction log backup every 30 minutes. A system crash is reported around Wednesday 7 AM. Please share the recovery path with the backups required to recover the system. |
|
Answer» You need to restore the last Sunday full backup taken at 2 AM, then the Wednesday differential backup of 2 AM, followed by all transaction log backups taken after it up to the crash around 7 AM (plus a tail-log backup, if one can still be taken). This assumes all backups are valid; if any backup file is bad, the recovery path will change. |
|
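The restore sequence for that scenario might be sketched as follows (database name, file names, and paths are illustrative):

```sql
-- 1. Sunday 2 AM full backup
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backup\SalesDB_Full_Sun.bak' WITH NORECOVERY;

-- 2. Wednesday 2 AM differential backup
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backup\SalesDB_Diff_Wed.bak' WITH NORECOVERY;

-- 3. Every 30-minute log backup from Wednesday 2:30 AM up to the crash
RESTORE LOG SalesDB FROM DISK = N'D:\Backup\SalesDB_Log_Wed_0230.trn' WITH NORECOVERY;
-- ... repeat for each subsequent log backup ...
RESTORE LOG SalesDB FROM DISK = N'D:\Backup\SalesDB_Log_Wed_0700.trn' WITH NORECOVERY;

-- 4. Bring the database online
RESTORE DATABASE SalesDB WITH RECOVERY;
```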
| 61. |
What is pseudo-simple Recovery Model? |
|
Answer» The pseudo-simple recovery model is not a real recovery model but a situation that arises from circumstances: the database is configured in the FULL recovery model but behaves like the SIMPLE recovery model. The database keeps behaving this way until a full database backup is taken after switching the recovery model to FULL. For example, you have a database in the SIMPLE recovery model and you switch it to the FULL recovery model. You need to perform a full backup after switching; otherwise the database will keep behaving like SIMPLE and keep truncating the transaction log on checkpoint. |
|
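A sketch of breaking out of the pseudo-simple state (database name and paths are hypothetical):

```sql
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Until this full backup completes, the log is still truncated as in SIMPLE
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_Full.bak';

-- From this point the log chain is established and log backups are possible
BACKUP LOG SalesDB TO DISK = N'D:\Backup\SalesDB_Log.trn';
```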
| 62. |
What’s the impact if we enable SQL Server Trace Flag 3042 for compressed backups? |
|
Answer» With compressed backups, SQL Server uses a pre-allocation algorithm to determine how much space the compressed backup will require and allocates that space at the start of the backup process. When the backup completes, the backup process shrinks or expands the backup file based on the actual requirement. Pre-allocation saves SQL Server the overhead of constantly growing the backup file. Sometimes, with a very large database where the difference between uncompressed and compressed backup sizes is high, the pre-allocated and actual backup space differ significantly. Trace flag 3042 promotes better space management, with a little overhead for SQL Server to grow the backup file on a regular basis. This is useful with limited disk space. Enabling trace flag 3042 may cause a small spike in CPU & memory, but not enough to cause performance issues. |
|
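Enabling the flag instance-wide (until restart) might look like:

```sql
-- Enable trace flag 3042 globally (-1): compressed backups grow the
-- file as needed instead of pre-allocating the estimated final size
DBCC TRACEON (3042, -1);

-- Check which trace flags are currently active
DBCC TRACESTATUS (-1);

-- To make it permanent, add -T3042 as a SQL Server startup parameter
```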
| 63. |
How many types of SQL Server database backups and Recovery Models are there? |
|
Answer» SQL Server supports the following types of database backups: full, differential, transaction log, file/filegroup, copy-only, and partial.
There are 3 recovery models: SIMPLE, FULL, and BULK-LOGGED. All backup types are supported across them, except that transaction log backups are not possible in the SIMPLE recovery model. |
|
|
| 64. |
Name any 5 components that can be installed with the SQL Server 2016 installation? |
Answer» Components that can be installed with SQL Server 2016 include:
1. Database Engine Services
2. Analysis Services (SSAS)
3. Reporting Services (SSRS)
4. Integration Services (SSIS)
5. Master Data Services (MDS)
|
|
| 65. |
Explain any 5 differences between CheckPoint & LazyWriter? |
|
Answer» Checkpoint: writes dirty pages (pages modified in the buffer pool) to disk so that the amount of log SQL Server must replay during crash recovery stays small; it does not remove pages from memory; it runs automatically at intervals governed by the recovery interval / target recovery time settings; it can also be issued manually with the CHECKPOINT command.
Lazy Writer: frees buffer pool memory under memory pressure by flushing dirty pages to disk and removing infrequently used pages from the cache (based on a least-recently-used algorithm); it runs on demand rather than on a schedule; it cannot be invoked manually; its activity can be tracked via the Lazy Writes/sec performance counter.
|
|
| 66. |
Name all Available isolation levels in SQL Server? |
|
Answer» SQL Server supports the following isolation levels: READ UNCOMMITTED, READ COMMITTED (the default), REPEATABLE READ, SERIALIZABLE, and SNAPSHOT (plus the READ COMMITTED SNAPSHOT database option).
|
|
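Setting the isolation level for a session is straightforward (the table name is hypothetical):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    SELECT Balance FROM dbo.Accounts WHERE AccountID = 42;
    -- re-reading the same row inside this transaction returns the same values
COMMIT;
```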
| 67. |
What are the dynamic management views and share any 5 that you are using on a regular basis? |
|
Answer» DMVs (Dynamic Management Views) are a set of system views introduced with SQL Server 2005. DMVs are a tool for the DBA to get internal information about the system and its workings. Five commonly used DMVs are:
1. sys.dm_exec_requests – currently executing requests.
2. sys.dm_exec_sessions – active sessions and their settings.
3. sys.dm_exec_query_stats – aggregate performance statistics for cached query plans.
4. sys.dm_os_wait_stats – cumulative wait statistics for the instance.
5. sys.dm_db_index_physical_stats – index size and fragmentation information. |
|
| 68. |
Explain the difference between Sequence and Identity? |
|
Answer» An IDENTITY column in a table auto-generates an increasing value with each new row insert. The user cannot insert a value into the identity column explicitly (unless IDENTITY_INSERT is enabled). The sequence is a feature introduced with SQL Server 2012, similar to Oracle's sequence objects. A sequence object generates a series of unique numeric values as per the specification given when it is created; unlike IDENTITY, it is a standalone object not tied to any table and can be shared across tables. The next value for a sequence object is generated using the NEXT VALUE FOR clause.
|
|
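A side-by-side sketch of the two approaches (object names are illustrative):

```sql
-- IDENTITY: tied to one table, value generated at insert time
CREATE TABLE dbo.OrdersA
(
    OrderID int IDENTITY(1, 1) PRIMARY KEY,
    Item    varchar(50)
);
INSERT INTO dbo.OrdersA (Item) VALUES ('Widget');  -- OrderID assigned automatically

-- SEQUENCE: a standalone object, usable by many tables
CREATE SEQUENCE dbo.OrderSeq AS int START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.OrdersB
(
    OrderID int DEFAULT (NEXT VALUE FOR dbo.OrderSeq) PRIMARY KEY,
    Item    varchar(50)
);
SELECT NEXT VALUE FOR dbo.OrderSeq;  -- values can also be fetched directly
```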
| 69. |
Explain the page, a fundamental unit of storage in SQL Server, and the different types of pages. |
|
Answer» A page is the fundamental unit of storage that stores data. A page is 8 KB in size, and there are 8 types of pages in SQL Server: Data, Index, Text/Image (LOB data), GAM and SGAM (allocation maps), PFS (page free space), IAM (index allocation map), Bulk Changed Map, and Differential Changed Map.
|
|
| 70. |
How to monitor latency in replication? |
|
Answer» There are three methods: Replication Monitor, tracer tokens, and T-SQL queries against the distribution database.
Available from Replication Monitor or via T-SQL statements, tracer tokens are special timestamp transactions written to the Publisher's transaction log and picked up by the Log Reader. They are then read by the Distribution Agent and written to the Subscriber. Timestamps for each step are recorded in tracking tables in the distribution database and can be displayed in Replication Monitor or via T-SQL statements. When the Log Reader picks up a token, it records the time in the MStracer_tokens table in the distribution database. The Distribution Agent then picks up the token and records the Subscriber(s) write time in the MStracer_history table, also in the distribution database. Below is the T-SQL code to use tracer tokens to troubleshoot latency issues.
-- A SQL Agent job to insert a new tracer token in the publication database
USE [AdventureWorks]
GO
EXEC sys.sp_posttracertoken @publication = <PublicationName>
GO
-- Token tracking tables
USE Distribution
GO
-- publisher_commit
SELECT TOP 20 * FROM MStracer_tokens ORDER BY tracer_id DESC
-- subscriber_commit
SELECT TOP 20 * FROM MStracer_history ORDER BY parent_tracer_id DESC |
|
| 71. |
What are the best RAID levels to use with SQL Server? |
|
Answer» Before choosing the RAID (Redundant Array of Independent Disks) level, we should look at how SQL Server uses its files. As a basic rule of thumb, data files need random access, log files need sequential access, and tempDB must be on the fastest drive and separated from the data and log files. We have to consider the below factors while choosing the RAID level: cost, read and write performance, and the level of fault tolerance (availability) required.
As admins, we have to weigh all of these parameters in choosing the proper RAID level. In practice, the choice usually comes down to RAID-5 (cheaper, good read performance, slower writes) and RAID-10 (more expensive, best read/write performance and fault tolerance). |
|
| 72. |
Can we configure log shipping in the replicated database? |
|
Answer» Replication does not continue after a log shipping failover. If a failover occurs, replication agents do not connect to the secondary, so transactions are not replicated to Subscribers. If a fallback to the primary occurs, replication resumes. All transactions that log shipping copies from the secondary back to the primary are replicated to Subscribers. For transactional replication, the behaviour of log shipping depends on the sync with backup option. This option can be set on the publication database and distribution database; in log shipping for the Publisher, only the setting on the publication database is relevant. Setting this option on the publication database ensures that transactions are not delivered to the distribution database until they are backed up at the publication database. The last publication database backup can then be restored at the secondary server without any possibility of the distribution database having transactions that the restored publication database does not have. This option guarantees that if the Publisher fails over to a secondary server, consistency is maintained between the Publisher, Distributor, and Subscribers. Latency and throughput are affected because transactions cannot be delivered to the distribution database until they have been backed up at the Publisher. |
|
| 73. |
What are the agents in replication? |
Answer» Replication uses the following agents:
1. Snapshot Agent – prepares the initial snapshot of schema and data and stores it in the snapshot folder.
2. Log Reader Agent – reads the transaction log of the publication database and copies transactions marked for replication to the distribution database (transactional replication).
3. Distribution Agent – applies the initial snapshot and subsequent transactions from the distribution database to Subscribers (snapshot and transactional replication).
4. Merge Agent – applies the initial snapshot and then merges incremental changes between Publisher and Subscribers (merge replication).
5. Queue Reader Agent – applies queued updates from Subscribers back to the Publisher (transactional replication with queued updating subscriptions).
|
|
| 74. |
What are the main events and columns helpful in troubleshooting performance issues using a profiler? |
|
Answer» Events: the most useful event groups and events include:
Event Group: Performance – Showplan XML, Showplan Statistics Profile.
Event Group: T-SQL – SQL:BatchStarting, SQL:BatchCompleted, SQL:StmtCompleted.
Event Group: Stored Procedures – RPC:Completed, SP:Starting, SP:Completed, SP:StmtCompleted, SP:Recompile.
Event Group: Locks – Lock:Deadlock, Lock:Deadlock Chain, Lock:Timeout, Lock:Escalation.
Event Group: Sessions – ExistingConnection.
Event Group: Security Audit – Audit Login, Audit Logout, Audit Login Failed.
Columns: below are the most common columns that help us understand the trace file when troubleshooting problems: TextData, Duration, CPU, Reads, Writes, SPID, StartTime, EndTime, DatabaseName, LoginName, ApplicationName, HostName.
Not all of these columns are available for every event; depending on the events selected, we have to choose the appropriate columns. Filters: commonly applied filters include Duration, DatabaseName, LoginName, and ApplicationName, which narrow the trace to the workload under investigation.
|
|
| 75. |
How to apply service pack on Active / Active cluster Nodes? |
Answer» A typical rolling approach (details vary by SQL Server version; in SQL Server 2005 the installer patches passive nodes remotely from the active node) is:
1. Move the SQL Server resource groups off the first node so it becomes passive for both instances.
2. Apply the service pack on that node and reboot if required.
3. Fail the resource groups over to the patched node.
4. Apply the service pack on the second node and reboot if required.
5. Rebalance the resource groups to their preferred nodes and validate the build level on both instances.
|
|
| 76. |
You find SP is not applied to all the nodes across the cluster. How to apply SP only on the required nodes? |
|
Answer» If you find that the product level is not consistent across all the nodes, you will need to fool the 2005 patch installer into only patching the nodes that need updating. To do so, you will have to perform the following steps:
Why do you need to do this? When the patch installer determines that not all nodes in the cluster are at the same patch level, the passive node operation will fail and will prevent you from moving forward with any further patching. |
|
| 77. |
List down the difference between WHERE and HAVING clause. |
|
Answer» The WHERE clause is one of the most widely used clauses in SQL and filters records in the result set. The HAVING clause does the same thing, but on the grouped result set. The WHERE clause is evaluated before the HAVING clause. A SQL statement is executed logically from FROM/WHERE first, then GROUP BY, then HAVING, and finally SELECT and ORDER BY. It implies that records are filtered first by the WHERE clause, and once the result set is grouped, the HAVING clause comes into the picture. |
|
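A small example of the difference (table and column names are hypothetical):

```sql
-- WHERE filters rows before grouping; HAVING filters groups after
SELECT CustomerID, SUM(Amount) AS Total
FROM dbo.Orders
WHERE OrderDate >= '2023-01-01'     -- row-level filter
GROUP BY CustomerID
HAVING SUM(Amount) > 1000;          -- group-level filter on the aggregate
```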
| 78. |
List down the difference between DELETE and TRUNCATE command? |
|
Answer» Let me list down some basic differences, then we will try to understand with an example as well:
1. DELETE is a DML command; TRUNCATE is DDL.
2. DELETE removes rows one at a time and logs each deleted row; TRUNCATE deallocates whole data pages and logs only the page deallocations, so it is faster.
3. DELETE can use a WHERE clause to remove specific rows; TRUNCATE always removes all rows.
4. DELETE does not reset an IDENTITY counter; TRUNCATE reseeds it.
5. DELETE fires delete triggers; TRUNCATE does not.
6. Both can be rolled back when executed inside a transaction.
Please consider the example below where, even after removing records, rolling back the transaction reverted the changes: |
|
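A sketch of the rollback behaviour, using a temp table so it is self-contained (TRUNCATE inside a transaction is fully reversible, a point that is often assumed to be impossible):

```sql
CREATE TABLE #Demo (ID int IDENTITY(1, 1), Val varchar(10));
INSERT INTO #Demo (Val) VALUES ('a'), ('b'), ('c');

BEGIN TRANSACTION;
    TRUNCATE TABLE #Demo;
    SELECT COUNT(*) FROM #Demo;   -- 0 rows inside the transaction
ROLLBACK;

SELECT COUNT(*) FROM #Demo;       -- 3 rows again: the TRUNCATE was rolled back
```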
| 79. |
What is UDF in SQL and how many different types exist for UDF? |
|
Answer» UDF stands for user-defined function, which is designed to accept parameters and process them before returning a result. The result can be either a scalar value or a complete result set. UDFs are widely used in scripts, in stored procedures, and within other UDFs as well. There are several benefits of using UDFs:
1. They allow modular programming: write the logic once and reuse it anywhere.
2. They can be used inside SELECT, WHERE, and JOIN clauses, where stored procedures cannot.
3. They reduce network traffic by evaluating logic on the server.
There are two types of UDF based on the output value:
1. Scalar functions – return a single value.
2. Table-valued functions (inline or multi-statement) – return a result set that can be queried like a table.
|
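One scalar and one inline table-valued function as a sketch (object names are illustrative):

```sql
-- Scalar UDF: returns a single value
CREATE FUNCTION dbo.fn_FullName (@First varchar(50), @Last varchar(50))
RETURNS varchar(101)
AS
BEGIN
    RETURN @First + ' ' + @Last;
END;
GO

-- Inline table-valued UDF: returns a result set
CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerID int)
RETURNS TABLE
AS
RETURN (SELECT OrderID, Amount FROM dbo.Orders WHERE CustomerID = @CustomerID);
GO

SELECT dbo.fn_FullName('Ada', 'Lovelace');   -- scalar usage
SELECT * FROM dbo.fn_OrdersByCustomer(42);   -- table usage
```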
|
| 80. |
What is the primary key and foreign key? |
|
Answer» The primary key is a single column or combination of columns that uniquely distinguishes individual rows in the table. It must be unique and cannot have NULL values, which makes it different from a unique key. A table can have only a single primary key, made up of a single column or a combination of columns (a composite key). A foreign key in a table is actually a reference to the primary key of another table. Please find below a list of differences between the two keys, which look similar in nature:
1. A primary key uniquely identifies rows in its own table; a foreign key enforces referential integrity between two tables.
2. A table can have only one primary key but can have multiple foreign keys.
3. A primary key does not allow NULLs; a foreign key column can allow NULLs.
4. By default, creating a primary key creates a clustered index; creating a foreign key creates no index automatically.
|
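A minimal sketch of both constraints (table names are hypothetical):

```sql
CREATE TABLE dbo.Customers
(
    CustomerID int NOT NULL,
    Name       varchar(100),
    CONSTRAINT PK_Customers PRIMARY KEY (CustomerID)
);

CREATE TABLE dbo.Orders
(
    OrderID    int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL,
    CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
);
```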
|
| 81. |
What are the different normalization forms? |
|
Answer» Normalization is a database design technique that ensures a reduction in redundancy and undesirable dependencies. It is a popular way of breaking large tables down into smaller ones with the help of relationships. Edgar F. Codd was the first to coin the term normalization and came up with three forms of normalization (1NF, 2NF, 3NF). Raymond F. Boyce later added another form, which has been named after both inventors (Boyce-Codd Normal Form). The normalization forms available so far are: 1NF, 2NF, 3NF, BCNF, 4NF, 5NF, and 6NF.
But in most practical scenarios, a database design is well supported up to 3NF only. Now, let us understand the first four forms of normalization:
1NF – every column holds atomic (indivisible) values and each record is unique.
2NF – 1NF, plus every non-key column depends on the whole of the primary key (no partial dependency on part of a composite key); a 1NF table is brought to 2NF by dividing it into smaller tables.
3NF – 2NF, plus no non-key column depends on another non-key column (no transitive dependency).
BCNF – a stricter 3NF: for every functional dependency X → Y, X must be a super key.
|
|
| 82. |
What is ACID property in a database? |
|
Answer» It is very critical for any database to maintain data integrity and keep data consistent; when choosing a database, this is one of the important considerations, and data architects evaluate databases on the ACID properties. ACID is an acronym that stands for Atomicity, Consistency, Isolation, Durability. Let me explain the four pillars of the ACID property:
Atomicity – a transaction is all-or-nothing: either every statement in it succeeds, or none of its changes are applied.
Consistency – a transaction moves the database from one valid state to another, preserving all constraints and rules.
Isolation – concurrent transactions do not interfere with each other; intermediate states are not visible outside the transaction.
Durability – once a transaction is committed, its changes survive crashes and power failures.
|
|
| 83. |
What is the difference between Function and Stored procedure? |
|
Answer» Both are sets of SQL statements encapsulated inside a block to complete some task, but they are very different in many ways.
Let me list down some major differences between the two:
1. A function must return a value; a stored procedure may return zero, one, or many values (or none).
2. A function can only read data (no INSERT/UPDATE/DELETE side effects); a stored procedure can modify database state.
3. A function can be called inside SELECT, WHERE, and HAVING clauses; a stored procedure cannot, and is invoked with EXEC.
4. A stored procedure can call a function, but a function cannot call a stored procedure.
5. Stored procedures support transactions, TRY...CATCH error handling, and output parameters; functions do not.
|
|
| 84. |
Explain dirty read in SQL server? |
|
Answer» One of the most common issues that occurs when running parallel transactions is the dirty read problem. A dirty read happens when a transaction is permitted to read data that is being modified by another concurrently running transaction that has not yet committed. If the transaction that modifies the data eventually commits, the dirty read problem never materialises; but if the transaction that made the change is rolled back after the other transaction has read the data, the latter transaction holds dirty data that doesn't actually exist. Let us consider a scenario where a user tries to buy a product. Transaction A does the purchase task for the user. The first step in the transaction is to update ItemsInStock: before the transaction there are 12 items in stock, and the transaction updates this to 11. Transaction A then communicates with a third-party payment gateway. If at this point in time another transaction, Transaction B, reads ItemsInStock for laptops, it will read 11. However, if the user who triggered Transaction A turns out to have insufficient funds in his account, Transaction A is rolled back and the ItemsInStock column reverts to 12. Transaction B, though, still has 11 items, because it read uncommitted data from the ItemsInStock column. This is dirty data, and the problem is called the dirty read problem. |
|
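The scenario can be sketched as two sessions; Session 2 sees the uncommitted value because it runs at READ UNCOMMITTED (the table name is hypothetical):

```sql
-- Session 1 (Transaction A)
BEGIN TRANSACTION;
UPDATE dbo.Products SET ItemsInStock = 11 WHERE ProductID = 1;  -- was 12
-- ... waiting on the payment gateway, not yet committed ...

-- Session 2 (Transaction B)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT ItemsInStock FROM dbo.Products WHERE ProductID = 1;  -- dirty read: 11

-- Session 1: payment fails
ROLLBACK;  -- stock reverts to 12, but Session 2 already acted on 11
```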
| 85. |
What are the Advantages of Using CTE? |
|
Answer» A CTE (common table expression) is a named, temporary result set defined within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. It holds the data only for the duration of the query and is similar to a derived table. In some scenarios we can use a CTE in place of a view. The basic syntax is:
WITH expression_name (column_name [, ...]) AS (CTE_query_definition) SELECT ... FROM expression_name;
If the query definition supplies distinct names for all columns, then the column list is optional.
Let me list down some benefits of using CTEs:
1. They improve readability by breaking complex queries into named building blocks.
2. They can be referenced multiple times within the same statement.
3. They enable recursion, e.g. for traversing hierarchies.
4. They avoid the need to create a view or temp table for one-off use.
There are two types of CTE: recursive and non-recursive.
|
|
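A recursive CTE walking an employee hierarchy might be sketched as follows (table and column names are hypothetical):

```sql
WITH OrgChart (EmployeeID, Name, ManagerID, Lvl) AS
(
    -- Anchor: top-level employees (no manager)
    SELECT EmployeeID, Name, ManagerID, 0
    FROM dbo.Employee
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive step: employees reporting to the previous level
    SELECT e.EmployeeID, e.Name, e.ManagerID, oc.Lvl + 1
    FROM dbo.Employee AS e
    JOIN OrgChart AS oc ON e.ManagerID = oc.EmployeeID
)
SELECT * FROM OrgChart ORDER BY Lvl;
```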
| 86. |
What is self join, explain with example? |
|
Answer» A self-join is one in which a table is joined to itself to get the desired output. We can understand self-joins with the scenario below. Assume we have an Employee table with EmployeeID, Name, and ManagerID columns, where ManagerID holds the EmployeeID of the employee's manager.
If we need to extract employee and manager information from that single table, the option we have is a self-join: join the Employee table to itself on ManagerID = EmployeeID. Alternatively, a left outer self-join produces the same pairing while also keeping employees who have no manager. |
|
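The self-join described above might look like this sketch:

```sql
-- Inner self-join: employees paired with their managers
SELECT e.Name AS Employee, m.Name AS Manager
FROM dbo.Employee AS e
JOIN dbo.Employee AS m ON e.ManagerID = m.EmployeeID;

-- Left outer variant: also keeps employees with no manager (NULL Manager)
SELECT e.Name AS Employee, m.Name AS Manager
FROM dbo.Employee AS e
LEFT JOIN dbo.Employee AS m ON e.ManagerID = m.EmployeeID;
```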
| 87. |
When to use SQL temp tables vs table variables? |
|
Answer» There are several scenarios in which we use temp tables. If we have no DDL or DML access to a table, we can use our read access to move the data into a SQL Server temp table and make modifications there. Or, if we don't have permission to create a table in the current database, we can create a SQL Server temp table that we can control. Finally, we may be in a situation where we need the data to be available only in the present session. Temp table names are prefixed with a hash sign, for instance #Table_name, and SQL temp tables are created in the tempdb database. A local SQL Server temp table is only available to the present session; it can't be accessed or utilized by procedures or queries outside of that session. A frequently seen situation for SQL Server temp tables is when we use a loop in a query: we need a place to store data for the loop to read, and a temp table gives a quick and efficient way to do so. Another reason to use SQL Server temp tables is when we have to execute some expensive processing. Suppose we build a join and repeatedly pull records from that result set; each time, the join has to be processed again and again. Why not simply process that result set once and throw the records into a SQL temp table? Then the remainder of the SQL query can refer to the temp table name. Not only does this save expensive processing, it may even make our code look a little cleaner. The SQL temp table is dropped once the session ends. Most of the time we will see an explicit DROP TABLE #tbl command, but it is not mandatory. SQL Server resolves name conflicts by adding a suffix internally when different sessions create temp tables with the same name.
Global SQL temp tables (prefixed ##) are helpful when we need to reference a table in all sessions. Anybody can add, modify, or delete records from the table, and anybody can DROP it. Like local SQL Server temp tables, they are dropped once the creating session ends and there are no more references to the table; we can always use the DROP command to tidy up manually. Table variables are more like any other variables. It is a common belief that table variables exist only in memory, but that is simply not correct: they live in the tempdb database just like local SQL Server temp tables. Also like local SQL temp tables, table variables are accessible only within the session in which they are declared, and only within the current batch. If performance is the criterion, table variables are helpful with small amounts of data, whereas a SQL Server temp table is helpful when sifting through a lot of data. So most of the time we will in all probability see the use of a SQL Server temp table rather than a table variable. |
|
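A quick sketch contrasting the two (object names are illustrative):

```sql
-- Local temp table: lives in tempdb, visible for the whole session
CREATE TABLE #BigFilter (ID int PRIMARY KEY, Val varchar(50));
INSERT INTO #BigFilter
SELECT ID, Val FROM dbo.SourceTable WHERE Val LIKE 'A%';
SELECT COUNT(*) FROM #BigFilter;
DROP TABLE #BigFilter;   -- optional; dropped automatically at session end

-- Table variable: also stored in tempdb, but scoped to the current batch
DECLARE @SmallSet TABLE (ID int PRIMARY KEY, Val varchar(50));
INSERT INTO @SmallSet VALUES (1, 'x'), (2, 'y');
SELECT COUNT(*) FROM @SmallSet;
```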
| 88. |
How do you implement one-to-one, one-to-many, and many-to-many relationships while designing tables? |
|
Answer» SQL Server supports all three relationships, and they are well supported by SQL Server design. Let me explain each relationship one by one.
One to One – This type of relationship is supported by a single table, and in some cases we can have two tables as well. As the name suggests, we can have only a single record from each entity, primary and secondary. A person can have only one passport, not more than one. With two tables, Person and Passport, the one-to-one relationship is enforced by a foreign key with a unique constraint on it: PersonID, the primary key of one table, works as the unique foreign key in the other.
One to Many – This type of relationship is always supported by two tables. Each record in the primary table can map to one or more records in the secondary table, while each record in the secondary table points back to exactly one record in the primary table. For example, with Book and Author tables where a book can have more than one author, the Author table can contain several rows for the same book, but each Author row refers to a single book. This relationship is supported by a primary key-foreign key relationship: here BookID works as a foreign key in the Author table.
Many to Many – This type of relationship is realized by more than two tables, where one table works as a join table between the first two. With Students, Enrollments, and Classes as three tables, the Enrollment table works as a bridge between Students and Classes: one student can enroll in multiple classes, and one class can be mapped to multiple students. |
|
| 89. |
What are Magic tables in the SQL server? |
|
Answer» Magic tables are an integral part of triggers; they facilitate triggers and make the job easier. Two magic tables exist: one is called INSERTED and the other DELETED. They are built by SQL Server itself to hold row data while a trigger is being processed, keeping the inserted, deleted, and updated records during DML operations on a table. If a trigger fires for an INSERT operation, the newly inserted rows are available in the INSERTED magic table. If a trigger fires for a DELETE operation, the removed rows are available in the DELETED magic table. For an UPDATE operation, the old version of each row appears in DELETED and the new version in INSERTED.
For example, inside a trigger defined for INSERT we can query the INSERTED table to perform any manipulation on the new rows, and inside an UPDATE/DELETE trigger the DELETED table holds the old records that were updated or deleted. |
|
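A small audit trigger using both magic tables might be sketched as follows (the Products and ProductAudit tables are hypothetical):

```sql
CREATE TRIGGER trg_Products_Audit
ON dbo.Products
AFTER UPDATE
AS
BEGIN
    -- DELETED holds the old row versions, INSERTED the new ones
    INSERT INTO dbo.ProductAudit (ProductID, OldPrice, NewPrice, ChangedAt)
    SELECT d.ProductID, d.Price, i.Price, SYSDATETIME()
    FROM deleted AS d
    JOIN inserted AS i ON i.ProductID = d.ProductID
    WHERE i.Price <> d.Price;
END;
```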
| 90. |
What is the difference between clustered and non clustered index? |
|
Answer» Indexing is required to speed up search results in the database. Comparing an index to the real world, the page numbers of a book and the keyword index at the back of the book work much like indexes: we can quickly go to the right page if we know the page number, and if we have an idea of the keyword we are looking for, visiting the keyword section makes the job easier because keywords are linked with page numbers. There are two types of indexes in a SQL database: one is called clustered and the other non-clustered. The page numbering of the book is analogous to the clustered index, while the keyword section of the book represents non-clustered indexes. Both exist in the database as B-tree structures. Let's go into the details of each index.
Clustered index: we can have only one clustered index per table. When we create a primary key, by default it ends up creating a clustered index on that key; we can also specify the index type explicitly in case we would like a non-clustered index instead. A table without a clustered index is called a heap. As we said earlier, indexes are B-tree structures; the leaf nodes of a clustered index are the data pages of the table themselves, and the data is physically sorted based on the clustered index key. This is the reason it is the quicker of the two for retrieval.
Non-clustered index: earlier it was possible to have only 249 non-clustered indexes on a table, but from SQL Server 2008 we can have 999. One is created by default when we create a UNIQUE key constraint (and we can decide the index type while creating the constraint). It is not mandatory to have any non-clustered index. The leaf nodes of a non-clustered index contain the information needed to fetch the actual row: if the table has no clustered index, the leaf nodes point to the physical row location (the RID), and if it has one, the leaf nodes store the clustering key. Because the table data is not re-sorted, non-clustered indexes are cheaper to maintain during DML operations like adding, deleting, and updating records. |
|
| 91. |
What is normalization and denormalization in SQL? |
|
Answer» Normalization and denormalization are strategies used in databases. Normalization is a procedure for limiting insertion, removal, and update anomalies by disposing of redundant data. Denormalization, on the other hand, is the reverse procedure, where redundancy is deliberately added to the data to improve the performance of a particular application while managing data integrity. Normalization prevents wasted disk space by limiting or eliminating redundancy. |
|
| 92. |
How many system databases we have in the SQL server? |
Answer» SQL Server ships with five system databases:
1. master – holds instance-wide metadata, logins, and configuration.
2. model – the template used when creating any new database.
3. msdb – used by SQL Server Agent for jobs, alerts, and backup/restore history.
4. tempdb – holds temporary objects and work tables; recreated at every restart.
5. resource (hidden, read-only) – contains the system objects shipped with SQL Server.
|
|
| 93. |
List all joins supported in SQL? |
Answer» SQL supports the following joins:
1. INNER JOIN – rows matching in both tables.
2. LEFT (OUTER) JOIN – all rows from the left table plus matches from the right.
3. RIGHT (OUTER) JOIN – all rows from the right table plus matches from the left.
4. FULL (OUTER) JOIN – all rows from both tables, matched where possible.
5. CROSS JOIN – the Cartesian product of both tables.
6. SELF JOIN – a table joined to itself (using any of the join types above).
|
|
| 94. |
What is the difference between UNION and UNION ALL? |
|
Answer» UNION blends the contents of two structurally compatible tables (or queries) into a single combined result set. The distinction between UNION and UNION ALL is that UNION will discard duplicate records, whereas UNION ALL will include duplicate records. Note that the performance of UNION ALL will commonly be superior to UNION, since UNION requires the server to do the extra work of removing any duplicates. So in situations where it is certain there won't be any duplicates, or where having duplicates isn't an issue, the use of UNION ALL would be recommended for performance reasons. |
|
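A small self-contained sketch of the behavioural difference:

```sql
-- Two result sets that share the value 2
SELECT n FROM (VALUES (1), (2)) AS a(n)
UNION
SELECT n FROM (VALUES (2), (3)) AS b(n);
-- UNION: duplicates removed -> 1, 2, 3

SELECT n FROM (VALUES (1), (2)) AS a(n)
UNION ALL
SELECT n FROM (VALUES (2), (3)) AS b(n);
-- UNION ALL: duplicates kept -> 1, 2, 2, 3
```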