This section presents a curated set of SQL Server interview questions with detailed answers to sharpen your knowledge and support interview preparation. Work through the topics below to get started.

1.

Explain the different types of JOINs available in SQL Server.

Answer»
  • Inner Join - This is the simplest and most widely used JOIN, and the default if we do not specify a JOIN type between tables. It returns all records from both tables where the matching condition is satisfied.
  • LEFT JOIN - We call this LEFT OUTER JOIN as well. When we want all records from one table and only the matching records from the other, we go for this type of JOIN: all records from the LEFT table are returned, and where no matching row exists in the RIGHT table, the right table's columns are returned as NULL.
  • RIGHT JOIN - We call this RIGHT OUTER JOIN as well. This is just the reverse of LEFT JOIN: the result set has all records from the right table but only matching records from the left one. Even when the ON clause finds no match, rows from the right table are still returned, with the corresponding columns from the left table as NULL.
  • FULL JOIN - It is also called FULL OUTER JOIN and has the characteristics of both LEFT and RIGHT outer joins. The result set has a row whenever there is a match in either the LEFT or the RIGHT table. It gives the same result as applying UNION to the LEFT and RIGHT OUTER JOIN result sets.
  • CROSS JOIN - This is the Cartesian product of two tables, where each row from the first table is joined with each and every row of the second table. A SELECT statement over two tables separated by a comma and without any WHERE condition gives the same result as a CROSS JOIN.
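A minimal sketch of each JOIN type, assuming two hypothetical tables Employee(EmpID, Name, DeptID) and Department(DeptID, DeptName):

SELECT e.Name, d.DeptName FROM Employee e INNER JOIN Department d ON e.DeptID = d.DeptID;
SELECT e.Name, d.DeptName FROM Employee e LEFT JOIN Department d ON e.DeptID = d.DeptID;
SELECT e.Name, d.DeptName FROM Employee e RIGHT JOIN Department d ON e.DeptID = d.DeptID;
SELECT e.Name, d.DeptName FROM Employee e FULL OUTER JOIN Department d ON e.DeptID = d.DeptID;
SELECT e.Name, d.DeptName FROM Employee e CROSS JOIN Department d;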
2.

What is the difference between UNION and UNION ALL?

Answer»

UNION blends the contents of two structurally compatible tables into a single combined result set. The distinction between UNION and UNION ALL is that UNION discards duplicate records, while UNION ALL includes duplicate records. Note that the performance of UNION ALL will commonly be superior to UNION, since UNION requires the server to do the extra work of removing any duplicates. So in situations where it is certain there will be no duplicates, or where having duplicates is not a problem, UNION ALL is recommended for performance reasons. Let's have a look at the examples below explaining the usage of both: the first uses UNION, and the second UNION ALL.
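The original examples did not survive extraction; here is a minimal sketch, assuming two hypothetical tables Customers and Suppliers that both have a City column:

-- UNION removes duplicate rows from the combined result
SELECT City FROM Customers
UNION
SELECT City FROM Suppliers;

-- UNION ALL keeps duplicates and skips the de-duplication work
SELECT City FROM Customers
UNION ALL
SELECT City FROM Suppliers;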

3.

What is .TUF file? What is the significance of the same? Any implications if the file is deleted?

Answer»

.TUF file is the Transaction Undo File, which is created when performing log shipping to a server in Standby mode.

When the database is in Standby mode, database recovery is done as each log backup is restored, and this mode also creates a file with the .TUF extension on the destination server, which is the transaction undo file.

This file contains information on all the modifications that were still in flight at the time the backup was taken.

The file plays an important role in Standby mode, and the reason is very obvious: while restoring the log backup, all uncommitted transactions are recorded to the undo file, and only committed transactions are written to disk, which enables users to read the database. So when we restore the next transaction log backup, SQL Server fetches all the uncommitted transactions from the undo file and checks them against the new transaction log backup to see whether they have committed or not.

If a transaction is found to be committed, it is written to disk; otherwise it is kept in the undo file until it gets committed or rolled back.

If the .tuf file gets deleted, there is no way to repair log shipping except reconfiguring it from scratch.
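A minimal restore sketch showing where the undo file comes from (RESTORE LOG ... WITH STANDBY is real syntax; the database name and paths are hypothetical):

RESTORE LOG MyDB
FROM DISK = N'\\backupshare\logs\MyDB_0900.trn'
WITH STANDBY = N'D:\SQLData\MyDB.tuf';  -- uncommitted work is parked in the .tuf file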

4.

Full backup size is 300 GB; usually my diff backup size varies between 300 MB and 5 GB. Unfortunately, one day the diff backup size increased to 250 GB. What might be the reason, any idea?

Answer»

Are you the kind of DBA who rebuilds all indexes nightly? An index rebuild touches nearly every page, so your differential backups can easily be nearly as large as your full backup. That means you're taking up nearly twice the space just to store the backups, and even worse, you're talking about twice the time to restore the database.

To avoid these issues with diff backups, ideally schedule the index maintenance to happen right before the full backup.
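A sketch of the recommended ordering (the database, table, and paths are hypothetical):

ALTER INDEX ALL ON dbo.BigTable REBUILD;  -- heavy page churn happens first
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak' WITH INIT;
-- later differentials now only contain extents changed since this full backup
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_diff.bak' WITH DIFFERENTIAL;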

5.

See I have an environment: Sunday night full backup, every night diff backup, and every 45 min a transactional backup. The disaster happened at 2:30 PM on Saturday, and you suddenly found that the last Sunday full backup is corrupted. What's your recovery plan?

Answer»

When you find that the last full backup is corrupted or otherwise unrestorable, all differentials taken after that point become useless. You then need to go back a further week to the previous full backup (taken 13 days ago) and restore that, plus the differential from 8 days ago, and the subsequent 8 days of transaction logs (assuming none of those are corrupted!).

If you're taking daily full backups, a corrupted full backup only introduces an additional 24 hours of logs to restore.

Alternatively, a log-shipped copy of the database could save your bacon (you have a warm standby, and you know the log backups are definitely good).

6.

I wanted to know what are the maximum worker threads setting and active worker thread count on SQL Server. Can you tell me how to capture this info? What's the default value for the max thread count?

Answer»

We can check the current settings and thread allocation using the queries below.

  • Thread setting

select max_workers_count from sys.dm_os_sys_info

  • Active threads

select count(*) from sys.dm_os_threads

  • The default value is 0, which lets SQL Server configure the number of worker threads automatically at startup (on older versions the default was 255).

Increasing the number of worker threads may actually decrease performance, because too many threads cause context switching, which can consume so much of the resources that the OS starts to degrade in overall performance.
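The setting can also be inspected and changed through sp_configure (a standard procedure; 0 means "size automatically"):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads';      -- shows configured and run values
-- EXEC sp_configure 'max worker threads', 0; RECONFIGURE;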

7.

How distributed transactions work in SQL Server?

Answer»

Distributed transactions are transactions that work across databases or instances in the given session. The snapshot isolation level does not support distributed transactions.

We can explicitly start a distributed transaction using “BEGIN DISTRIBUTED TRANSACTION <TranName>”

For example, if BEGIN DISTRIBUTED TRANSACTION is issued on ServerA, the session calls a stored procedure on ServerB and another stored procedure on ServerC. The stored procedure on ServerC executes a distributed query against ServerD, and then all four computers are involved in the distributed transaction. The instance of the Database Engine on ServerA is the originating controlling instance for the transaction.

When a distributed query is executed in a local transaction, the transaction is automatically promoted to a distributed transaction if the target OLE DB data source supports ITransactionLocal. If the target OLE DB data source does not support ITransactionLocal, only read-only operations are allowed in the distributed query.

In order to work with these transactions, make sure the below settings are in place.

  1. MSDTC must be running on all involved instances
  2. Choose the option “No authentication required” from MSDTC properties
  3. Turn on the relevant options in the linked server properties, like “RPC”, “RPC Out”, “Data Access”, etc.
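A minimal sketch, assuming a hypothetical linked server RemoteSrv and Accounts tables on both sides:

BEGIN DISTRIBUTED TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE RemoteSrv.Finance.dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 7;
COMMIT TRANSACTION;  -- MSDTC coordinates the commit across both servers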
8.

Can we add an article to the existing publication without generating a snapshot with all articles?

Answer»

Yes! We can do that. Follow the below steps to publish a new article to the existing publication.

There are two publication properties that we need to change to “false”: 1. immediate_sync and 2. allow_anonymous.

Both fields are set to ON by default. If immediate_sync is enabled, every time you add a new article it will cause the entire snapshot to be applied, and not just the one for the particular article alone.

Steps (see the sketch after this list):

  1. Change the values to “false” for the publication properties “immediate_sync” and “allow_anonymous” using sp_changepublication
  2. Add a new article to the publication using sp_addarticle. While executing this procedure, along with the required parameters, also specify the parameter “@force_invalidate_snapshot = 1”.
  3. Add the subscriptions to the publication for the single table/article using sp_addsubscription. While executing this proc, specify the parameter “@reserved = Internal”. Then generate a new snapshot, which only includes the newly added article.
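A hedged sketch of the sequence (the system procedures are real; the publication, article, and server names are hypothetical):

EXEC sp_changepublication @publication = N'MyPub', @property = N'allow_anonymous', @value = 'false';
EXEC sp_changepublication @publication = N'MyPub', @property = N'immediate_sync', @value = 'false';
EXEC sp_addarticle @publication = N'MyPub', @article = N'NewTable',
     @source_object = N'NewTable', @force_invalidate_snapshot = 1;
EXEC sp_addsubscription @publication = N'MyPub', @subscriber = N'SubServer',
     @destination_db = N'SubDB', @reserved = N'Internal';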
9.

Consider a situation where the publisher database log file has been increasing and there are just a few MB available on disk. As an experienced professional, how do you react to this situation? Remember, no disk space is available, and we also can't create a new log file on another drive.

Answer»

Essentially we have to identify the bottleneck that is filling the log file.

As a quick resolution, check all possible solutions as below:

  • Resolve any errors in the log reader agent/distribution agent
  • Fix any connectivity issues, whether between publisher and distributor or between distributor and subscriber
  • Fix any issues with I/O at any level
  • Check if there is a huge number of transactions pending from the publisher
  • Check if there is a large number of VLFs (use DBCC LOGINFO), which slows down the log reader agent's work
  • Check that all database statistics are up-to-date at the distributor; "Auto Update Stats" is usually switched off there by default
  • To find and resolve these issues we can use Replication Monitor, DBCC commands, SQL Profiler, and system tables/SPs/functions

If in case we can’t resolve just by providing a simple solution we have to shrink the transaction log file. Below are two methods.

To shrink the transaction log file:

  1. Backup the log — so transactions in VLFs are marked as inactive
  2. Shrink the log file using DBCC SHRINKFILE — inactive VLFs are removed
  3. If you find no difference in size, repeat steps 1 and 2
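A minimal sketch of that loop (the database and logical file names are hypothetical; the commands are real):

BACKUP LOG PubDB TO DISK = N'D:\Backup\PubDB_log.trn';  -- marks inactive VLFs as reusable
DBCC SHRINKFILE (PubDB_log, 1024);                      -- target size in MB
DBCC LOGINFO;                                           -- inspect VLFs; repeat if size is unchanged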

To truncate the transaction log file:

If we can't resolve the problem against the increasing logfile, the final solution is to disable the replication, truncate the log, and reinitialize the subscribers.

  1. Disable replication jobs
  2. Execute the sp_repldone procedure. It marks all pending transactions at the publisher as "replicate done".
  3. Backup the transaction log with the TRUNCATE_ONLY option (deprecated in newer versions).
  4. Shrink the log file using DBCC SHRINKFILE
  5. Flush the article cache using sp_replflush.
  6. Go to the distribution database and truncate the table MSrepl_commands
  7. Connect to Replication Monitor and reinitialize all subscriptions by generating a new snapshot.
  8. Enable all replication-related jobs.
10.

When should a developer use the NOLOCK hint? What problems can happen when using this hint and how can these problems be addressed?

Answer»

The performance of a SQL query can be improved by NOLOCK, as it does not lock rows and so does not block reading/processing data from the table. NOLOCK can be used when dealing with large tables having millions of records, and with tables where the data rarely changes. If we have too many locks in SQL, row-level locking can be extended to page and even table level, which might stall query processing until the operation that acquired the lock completes. With NOLOCK we read both committed and uncommitted data, which may result in dirty reads if the system has not been designed to handle such a scenario.

Generally, to avoid acting on dirty reads we use a TIMESTAMP (rowversion) column, which can be referenced while executing DELETE/UPDATE commands to know whether the data has changed. This helps us prevent data changes based on a dirty read. Comparing each column's value is a very expensive operation, which is why the industry standard suggests a TIMESTAMP column.
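A minimal sketch (the Orders table is hypothetical):

SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);  -- fast, but may count uncommitted (dirty) rows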

11.

What is an optimistic and pessimistic lock in SQL?

Answer»

An optimistic lock is a widely used strategy in n-tier applications: while reading data from the database we note the version of the data, and we check that version again before writing the data back. The versioning is done on a date, a timestamp, or a checksum/hash. This approach is used to prevent the scenario where we take a decision based on data that has meanwhile been changed by another session.

The optimistic approach ensures that when such a conflict is identified through the versioning, we start over from fresh data. This strategy is popular in systems dealing with a high volume of data, and in n-tier applications where we are not always connected to the database through a single connection: there is a pool of connections, and we connect with whichever connection is free and available at that time. We cannot hold a lock across calls in such cases. This strategy is applied in most banking operations.

Pessimistic locking is just the opposite of optimistic locking, as it takes an exclusive lock on resources until we finish our operation. It keeps data integrity high, but performance will always be slower in this case. We need to stay connected to the database with an active connection (which is the case in a two-tier application).
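A minimal optimistic-concurrency sketch using a rowversion column (the table and column names are hypothetical):

DECLARE @ver binary(8);
SELECT @ver = RowVer FROM dbo.Accounts WHERE AccountID = 1;
-- ... application works with the data, then writes back only if unchanged ...
UPDATE dbo.Accounts
SET Balance = Balance - 100
WHERE AccountID = 1 AND RowVer = @ver;  -- matches 0 rows if another session changed it
IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by another session; re-read and retry.', 16, 1);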

12.

What is Lock escalation in SQL?

Answer»

This is the strategy used by SQL Server to avoid locking a large number of database resources, because each lock occupies memory. Let's try to understand this with a use case where we have to apply locks on 30,000 records, each 500 bytes in size, for a DELETE operation. Considering the memory requirement for this use case, we would need one shared lock on the database, one intent lock on the table, around 1,875 exclusive locks on pages, and around 30,000 exclusive locks on rows. At roughly 96 bytes per lock, the total memory requirement would be about 3 MB for a single delete operation.

To avoid such overhead, SQL Server leverages its lock escalation strategy. It removes the need for a large amount of lock memory by escalating those many fine-grained locks to a single coarser lock. So in cases where we would need thousands of locks on many resources, lock escalation gives us a single lock that fulfills our objective while taking care of the memory space issue. The exclusive lock on the table ensures that we do not need page-level locks to guarantee data integrity: instead of holding so many row and page locks, SQL Server escalates to an exclusive lock on the table.
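Escalation behavior can be tuned per table (real syntax since SQL Server 2008; the table name is hypothetical):

ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);  -- TABLE (default), AUTO, or DISABLE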

13.

How many modes of locks are supported in SQL Server?

Answer»

The locks which get applied on various resources of SQL Server can be classified into the below modes:

  • Exclusive (X): This lock type, when forced, will guarantee that a page or row will be available only to the transaction that forced the lock, as long as the transaction is running. The X lock is forced by a transaction when it needs to manipulate the page or row data through DML operations like inserting, modifying, and deleting. This lock can be placed on a page or row only if there is no other shared or exclusive lock currently on the resource. This ensures that only one exclusive lock can be applied, and no other lock can be applied afterward until the previous lock is removed.
  • Shared (S): This lock type, when forced, keeps a page or row available just for reading, which means no other transaction is allowed to manipulate the record while the lock remains active. Nonetheless, a shared lock can be held by multiple transactions concurrently over the same page or row, and in that manner multiple transactions can share the capacity for data reading. No DDL changes on the object are permitted while shared locks are held.
  • Update (U): An update lock is like an exclusive lock but is intended to be more flexible. An update lock can be placed on a record that already has a shared lock. When the transaction that holds the update lock is ready to change the data, the update lock (U) is converted to an exclusive lock (X). While an update lock can be placed on a record that has a shared lock, a shared lock cannot be placed on a record that already has an update lock.
  • Intent (I): The idea behind such a lock is to guarantee that data modification executes properly by stopping another transaction from acquiring a lock on the next object up in the hierarchy. Generally, when a transaction wants a lock on a row, it acquires an intent lock on the table, which is higher in the hierarchy than the intended object. By obtaining the intent lock, the transaction does not allow other transactions to acquire an exclusive lock on that table.
  • Schema (Sch): This lock is applied on a table or index when we want to manipulate any changes in that resource. We can have only one schema lock at a given point in time. This lock is applied when we perform operations that depend on the schema of an object.
  • Bulk update (BU): This lock is required when bulk operations need to be performed. When a bulk update lock is acquired, other transactions cannot access the table during the bulk load execution. However, a bulk update lock will not prevent another bulk update from being executed in parallel.

The locking hierarchy of database objects runs from the database down through table and page to row, and a lock is always obtained from the top of the hierarchy to the bottom.

As for compatibility between the different lock modes: shared locks are compatible with other shared and update locks, an update lock is compatible with shared locks only, and an exclusive lock is incompatible with every other mode.
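Currently held locks and their modes can be inspected through a DMV (sys.dm_tran_locks is a real view):

SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY resource_type, request_mode, request_status;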

14.

Explain Locks in SQL and how many Locks are available?

Answer»

Locks allow seamless functioning of SQL Server even with concurrent user sessions. As we all know, multiple clients need to access databases simultaneously, so locks come to the rescue to keep information from being corrupted or invalidated when numerous clients attempt data manipulation (DML) operations such as read, write, and update on the database. A lock is a mechanism to guarantee data integrity and consistency while enabling concurrent access to data; it is used to implement concurrency control when various clients access the database to manipulate its data at the same time.

Locks can be applied to various database components. Please find below the areas where a lock can be applied:

  • RID: (Row ID) This helps us in locking a single row inside a table.
  • Table: It locks the whole table, including data and indexes as well.
  • Key: It locks keys in tables — the primary key, candidate key, secondary key, and so forth.
  • Page: The page represents an 8-kilobyte (KB) data page or index page. A lock can be placed at page level as well; it means that if a specific page is locked, another client can't update the data on it.
  • Extent: An extent is a contiguous group of eight data pages, which can include index pages as well.
  • Database: The entire database can be locked for clients who have read permission on the database.
15.

What is the use case of using CURSOR?

Answer»

SQL Server cursors are a perfect fit when we need to work on one record at a time, as opposed to taking all of the data from a table as a single bulk operation. However, they should be used with care, as they can affect performance, particularly as the volume of data increases. From a beginner's perspective, I really do feel that cursors should be avoided wherever possible, because if they are badly written, or deal with too much data, they really will affect system performance. There will be times when it is impossible to avoid cursors, and I doubt many systems exist without them. If you do find you have to use them, try to reduce the number of records to process by using a temp table first, and then build the cursor from this: the lower the number of records to process, the quicker the cursor will finish. Please find below the widely used cursor syntax:
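A minimal sketch of the standard pattern (the Employee table is hypothetical):

DECLARE @Name nvarchar(100);
DECLARE emp_cursor CURSOR FAST_FORWARD FOR
    SELECT Name FROM dbo.Employee;
OPEN emp_cursor;
FETCH NEXT FROM emp_cursor INTO @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name;                           -- per-row processing goes here
    FETCH NEXT FROM emp_cursor INTO @Name;
END
CLOSE emp_cursor;
DEALLOCATE emp_cursor;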

The process flow of a cursor is: DECLARE the cursor over a SELECT, OPEN it, FETCH rows one at a time in a loop while @@FETCH_STATUS = 0, then CLOSE and DEALLOCATE it.

16.

What are the points to be considered for creating a fast performing stored procedure?

Answer»

The performance of any application largely depends upon the backend process that extracts data. If it is faster, then the application will also be more responsive. The design of a stored procedure is very critical, and we need to be careful to remove any bottlenecks. We can consider the following points while designing a stored procedure:

  1. Variables: We should minimize the use of variables, as they are stored in the cache.
  2. Dynamic queries: We should minimize the use of dynamic queries, as a dynamic query gets recompiled every time a parameter changes.
  3. The stored procedure should always be called with its fully qualified name, database_name.schema_name.sp_name; this avoids the search for an execution plan in the procedure cache.
  4. We should always use SET NOCOUNT ON, which suppresses the "rows affected" messages to improve the performance of the query. This is very critical in case the SP is called frequently, as not using the above syntax can load the network.
  5. We should not use the sp_ prefix in a stored procedure name, as it is used for system procedures; it causes an extra trip to the master database to see if any system procedure matches the user-defined procedure.
  6. We can use sp_executesql or the KEEPFIXED PLAN hint to get away without recompilation of stored procedures, even if we have parameterized dynamic queries.
  7. We should be careful in choosing the WHERE condition, as it drives index seeks. We might end up doing full table scans if the conditions are not chosen carefully.
  8. We should avoid using the IN operator, as it checks for NULL values as well. We should use EXISTS instead, as it does not consider NULL values. Of the two queries in the sketch after this list, the second gives faster results than the first.
  9. If not necessary, we should avoid using DISTINCT and ORDER BY clauses, as they put an additional load on the database engine.
  10. We should avoid CURSORS. Instead of cursors, we should use temp tables/table variables inside a loop to achieve the desired result set.
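A minimal sketch of point 8 (Customer and Orders are hypothetical tables):

-- 1) IN with a subquery
SELECT c.Name FROM dbo.Customer c
WHERE c.CustomerID IN (SELECT o.CustomerID FROM dbo.Orders o);

-- 2) EXISTS, which usually performs better here
SELECT c.Name FROM dbo.Customer c
WHERE EXISTS (SELECT 1 FROM dbo.Orders o WHERE o.CustomerID = c.CustomerID);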
17.

What is an extended stored procedure and what is the use of it?

Answer»

An extended stored procedure is a program written in C/C++, similar to a stored procedure in that it accepts parameters, but its result set is returned through SQL Server's Open Data Services API. It runs inside the SQL Server process memory only and is stored in the master database. It does not run in the current database context, and to trigger an extended stored procedure we need to pass the fully qualified name, like master.dbo.xp_*. Best practice is wrapping the extended stored proc inside a system procedure, so you do not need to call the extended one with the fully qualified name.

For example, the cleaning of parameters to be passed to the extended stored procedure xp_varbintohexstr is taken care of by sp_hexstring, and sp_hexstring can be executed from any database context without being referenced by a qualifier. The above is an example of wrapping an extended procedure inside a system procedure. Once you call the extended stored procedure, the request is transferred in the tabular data stream (TDS) or SOAP format from the calling application to SQL Server.
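For illustration, calling a built-in extended procedure with its fully qualified name (xp_fixeddrives is a real, if undocumented, extended proc):

EXEC master.dbo.xp_fixeddrives;  -- lists fixed drives with free space in MB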

18.

What can a developer do during the logical and physical design of a database to help guarantee that their database and SQL Server-based application will perform well?

Answer»

There are lots of things that need to be considered before going into designing a logical and physical data model. We must first consider what type of data we are going to store, how the data will be accessed, the systems playing the role of upstream and downstream, and finally plan around the volume of data expected to arrive on a daily basis. If we are going to upgrade an existing platform, then understanding the existing pain areas also helps developers design the database to meet future needs while remediating those bottlenecks. Understanding your data is important, but unless we have clarity on the different components of the data, we will not be able to make a better plan.

Also, we need to revisit our design at several stages of a project, as we work in an evolving world where requirements change overnight. So agility and flexibility of database design are very important; we should be able to meet future requirements as well with a flexible approach. We should regularly revisit data relationships, volume, and indexes to ensure we stick to the flexible requirement, and we should frequently profile our database server using tools like SQL Server Profiler to identify problem areas on a regular basis.

19.

Is it possible to delete rows from views?

Answer»

The answer to this question depends on the view type. There are two types of views: a simple view, created from a single table and having no derived columns based on built-in grouping functions like average, sum, etc., and a complex view. We can delete rows through a view if it is a simple one, but not through a complex one. Let me try to make you understand by going into the view definition and syntax.

In SQL, a view is a virtual table based on the result set of a SQL query. A view contains rows much the same as a real physical table, and the fields in a view are fields from at least one real table in the database. You can include SQL functions, WHERE, and JOIN clauses in a view and present the information as though it were originating from one single table.

To summarize: we can delete/update data through a view only when it is a simple one (derived from a single table) and does not have any derived columns based on AVG, COUNT, SUM, MIN, MAX, etc. We cannot delete rows from a complex view (one derived from multiple tables).
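A minimal sketch of a simple, updatable view (the table and view names are hypothetical):

CREATE VIEW dbo.ActiveEmployees AS
    SELECT EmpID, Name, IsActive FROM dbo.Employee WHERE IsActive = 1;
GO
DELETE FROM dbo.ActiveEmployees WHERE EmpID = 42;  -- allowed: single-table view, no derived columns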

20.

How can you hide an SQL Server instance so that it does not show up when clients browse for servers?

Answer»

When you launch SQL Server Management Studio (SSMS), you see the option to connect to a DB instance. If desired, you can browse instances running on your network. Simply click on the drop-down; at the bottom there is a ‘Browse for more…’ option:

This enables you to look for local or network instances. I have four instances running on my workstation, and you would see them displayed in this list:

Suppose one of these instances is top secret and we don't want clients to see the instance name. That is possible through SQL Server Configuration Manager (SSCM). Open up SSCM and navigate to your SQL Server instances:

Select the instance of SQL Server, right-click, and select Properties. After selecting properties, set Hide Instance to "Yes" and click OK or Apply. After the change is made, you have to restart the instance of SQL Server to stop exposing the name of the instance.

21.

What is an execution plan in SQL server?

Answer»

An execution plan, basically, is the result of the query optimizer's attempt to calculate the most efficient way to implement the request represented by the SQL query you submitted. It is, therefore, the DBA's primary means of investigating a poorly performing query. Instead of guessing why a given query is performing thousands of scans and putting your I/O through the roof, you can use the execution plan to identify the exact piece of SQL code that is causing the problem. For instance, the query might be scanning an entire table's worth of data when, with the proper index, it could simply pick out just the rows you need. All this and more is shown in the execution plan.

Even though SQL Server more often than not creates a decent plan, sometimes it's not smart enough to validate its plans and fix the poor ones. You can get an estimated execution plan and an actual graphical execution plan in SQL Server. Produce these plans by using the keyboard shortcuts Ctrl+L and Ctrl+M respectively, or by using the icons placed to the right of the execute icon on the standard toolbar of SQL Server Management Studio (SSMS).

There are two sorts of execution plans:

  • Estimated execution plan: Estimated plans give an estimation of the work that SQL Server is expected to perform to get the data.
  • Actual execution plan: Actual execution plans are produced after the Transact-SQL queries or batches are executed. For this reason, an actual execution plan contains run-time data, such as the genuine resource usage metrics and any run-time warnings. You ought to always check the actual execution plan while investigating.
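Both plan types can also be requested with T-SQL SET options (real commands; the query is hypothetical):

SET SHOWPLAN_XML ON;    -- estimated plan: the batch is compiled but not executed
GO
SELECT * FROM dbo.Orders WHERE OrderID = 42;
GO
SET SHOWPLAN_XML OFF;
GO
SET STATISTICS XML ON;  -- actual plan: returned alongside the real results
GO
SELECT * FROM dbo.Orders WHERE OrderID = 42;
GO
SET STATISTICS XML OFF;
GO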
22.

What is the use of RPC in the cluster?

Answer»

The Remote Procedure Call (RPC) is a system service for interprocess communication (IPC) between different processes. The processes can be on the same computer, on the LAN, or in a remote location, and RPC can be accessed over a WAN connection or over a VPN connection.

RPC works on a dynamic range of ports and connects using any available port from that range. There is a long list of services that depend on RPC, like Telnet, DHCP, DNS, COM+, DTC, WINS, etc.

You may face multiple errors when RPC is not working, like:

  1. Cluster name not found
  2. Node 1 not able to communicate with Node 2
  3. File share not working
  4. MSDTC issues
  5. Etc.
23.

What are the benefits of Readable Secondary Replicas?

Answer»

AlwaysOn provides you with the feature of a readable replica. This is a long-awaited feature whereby you can utilize your passive node.

This is unlike a cluster, where your passive node consumes all its resources but you cannot utilize them until the primary node goes down.

Benefits:

  • Offloads your read-only workload from the primary to the secondary.
  • Utilizes the resources of the secondary node.
  • Temporary statistics on the readable secondary help in performance tuning on the secondary.
  • Offloading T-log backups to the secondary reduces the backup load.
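A hedged configuration sketch (the availability group AG1 and replica SQLNODE2 are hypothetical; the syntax is real):

ALTER AVAILABILITY GROUP AG1
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
-- read-intent clients then add ApplicationIntent=ReadOnly to their connection strings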
24.

How to bring the mirror DB online if the Principal is down?

Answer»

We need to perform a forced failover to bring the mirror online when the principal is down. We can perform this using the below command:

ALTER DATABASE <DB Name> SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS

If you try to fail over as in the normal situation when the principal is online [ALTER DATABASE <DB Name> SET PARTNER FAILOVER], then you will receive the below error:

ALTER DATABASE <DB Name> SET PARTNER FAILOVER

Msg 1404, Level 16, State 10, Line 1 — The command failed because the database mirror is busy. Reissue the command later.

25.

Explain DB Mirroring operating modes?

Answer»

We have 3 operating modes in DB mirroring.

  • High Availability: The High-Availability operating mode is a combination of principal + mirror + witness with synchronous transfer. In synchronous mode, the principal server sends the log buffer to the mirror server and then waits for a response from the mirror server. The witness server monitors the principal and mirror servers; in case the principal is not available, the witness and mirror decide on an automatic failover to bring the mirror online.
  • High Protection: The High-Protection operating mode is a combination of principal + mirror with synchronous transfer. The principal server sends the log buffer to the mirror server and then waits for a response from the mirror server, but automatic failover is not possible.
  • High Performance: The High-Performance operating mode runs asynchronously, with transaction safety set to OFF. The principal server sends the log buffer but does not wait for an acknowledgment from the mirror server. The mirror server can lag behind the principal and cause data loss.
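The safety level maps to real ALTER DATABASE options (the database name is hypothetical):

ALTER DATABASE MyDB SET PARTNER SAFETY FULL;  -- synchronous: High Availability / High Protection
ALTER DATABASE MyDB SET PARTNER SAFETY OFF;   -- asynchronous: High Performance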
26.

What is a TUF file and what does it contain?

Answer»

TUF means Transaction Undo File, used when log shipping is configured in STANDBY mode on the secondary.

The TUF contains details of transactions not applied on the secondary in the last log restore. Basically, these transactions were not yet committed on the primary database when the transaction log backup was in progress. On the next T-log restore on the secondary, log shipping refers to the TUF file for the state of active transactions.

If the TUF is missing, you can't recover your log shipping.

27.

What are the common reasons which cause Log Shipping issues?

Answer»

Log shipping may fail due to multiple issues, some of which can cause data loss. Here are some possible reasons:

  • Changes in the shared folder or share access permissions – The copy job is responsible for copying log backups from the primary to the secondary server. If the shared folder permissions change on the primary server, the copy job will not be able to access the share of the primary server to copy log backups.
  • Human error, like someone deleting a T-log backup file or truncating the T-log on the primary server – This is the most common issue. If someone deletes a log backup by mistake, the log chain will break and the restore may fail. Also, if someone truncates the T-log file on the primary, the log chain will break and new log backups will not be generated.
  • Drive free space is low on the secondary – The standard approach is to have a similar drive structure and space on the secondary as on the primary. If the secondary has less drive space, it may fill up and impact the copy or restore process.
  • Low I/O, memory, or network resources – The copy and restore jobs need resources, and if you have a long list of databases, you need high resources. A secondary server with low network, I/O, or memory capacity can cause server slowness, crashes, or delays in the restore.
  • TUF file is missing – The TUF is the transaction undo file, which contains active transaction details. If the TUF file is deleted and you do not have a backup of it, you have to reset log shipping.
  • MSDB database is full – MSDB keeps track of the log shipping and restore history. If MSDB gets full, the copy and restore jobs will start failing.
28.

How to add a new article in existing replication?

Answer»

In an existing replication setup, sometimes we need to add new articles. When you perform such operations in an existing setup, it will mark your snapshot invalid, which results in snapshot re-initialization for all articles.

This sounds normal, but consider the situation where the number of articles is very high or the size of the remaining articles is huge; reinitialization can bring down your system.

SQL Server provides the option of generating a snapshot of only the newly added article. This can be achieved with T-SQL like:

  • Set the allow_anonymous property of the publication to FALSE
  • Disable immediate_sync on the publication
  • Invalidate the snapshot by setting the property @force_invalidate_snapshot = 1 using sp_addarticle
  • Refresh subscriptions using sp_refreshsubscriptions
  • Start the Snapshot Agent using Replication Monitor
  • You will notice that bulk-insert statements are created only for the specific article instead of all articles
  • Start the Log Reader Agent
  • Re-enable immediate_sync on the publication
  • Change the allow_anonymous property of the publication back to TRUE
29.

What are different replication agents and what's their purpose?

Answer»
  • Snapshot Agent - The Snapshot Agent is used with all types of replication for the initial copy of an article from publisher to subscriber. It generates schema and bulk copy files of published tables and other objects; later the same are applied to the subscriber. The Snapshot Agent runs at the distributor.
  • Log Reader Agent - The Log Reader Agent is used with transactional replication. It's responsible for moving transactions marked for replication from the transaction log on the publisher to the distribution database.
  • Distribution Agent - The Distribution Agent is responsible for applying, at the subscriber, the snapshot and transaction logs moved to the distributor by the Snapshot or Log Reader Agent. The Distribution Agent runs on the distributor for push subscriptions or at the subscriber for pull subscriptions.
  • Merge Agent - The Merge Agent is used by merge replication to move the initial snapshot, followed by the movement and reconciliation of incremental data changes. Each merge subscription has its own Merge Agent, which works with both the publisher and the subscriber and updates both. The Merge Agent runs on the distributor for push subscriptions or at the subscriber for pull subscriptions.
  • Queue Reader Agent - The Queue Reader Agent works with transactional replication with the queued updating option. It runs at the distributor and moves changes made at the subscriber back to the publisher. Only one instance of the Queue Reader Agent exists to service all publishers and publications for a given distribution database.
30.

What is force parametrization?

Answer»

By default, SQL Server works with simple parameterization, where SQL Server internally adds parameters to an SQL statement executed without parameters so that it can try to reuse a cached execution plan. But simple parameterization has a lot of limitations and does not work in the following cases:

  • JOIN
  • IN
  • BULK INSERT
  • UNION
  • INTO
  • DISTINCT
  • TOP
  • GROUP BY
  • HAVING
  • COMPUTE
  • Sub Queries

To force SQL Server to auto-parameterize every SQL execution, you need to enable forced parameterization at the database level.

The forced parameterization setting needs proper testing, as it may also have an adverse impact on performance.
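The switch itself is a single real command (the database name is hypothetical):

ALTER DATABASE MyDB SET PARAMETERIZATION FORCED;  -- revert with SET PARAMETERIZATION SIMPLE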

31.

If a column is having >50% NULL values, then which index should we choose?

Answer»

If a column in the table has more than 50% NULL values, then index selection needs to be very careful.

An index is based on a B-tree structure, and if the column has more than 50% NULL values, all that data will reside on one side of the tree, resulting in zero benefit from the index during query execution.

SQL Server has a special index called a filtered index, which can be used here. You can create an index on the column covering only its non-NULL values; NULL data will not be included in the index.
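A minimal filtered index sketch (the table and column names are hypothetical):

CREATE NONCLUSTERED INDEX IX_Employee_Phone
ON dbo.Employee (Phone)
WHERE Phone IS NOT NULL;  -- NULL rows never enter the index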

32.

Why do we execute Update Statistics?

Answer»

UPDATE STATISTICS performs a recalculation of query optimization statistics for a table or indexed view. Although, with the Auto Stats Update option, query optimization statistics are automatically recomputed, in some cases a query may benefit from updating those statistics more frequently. UPDATE STATISTICS uses tempdb for its processing.

Please note that updating statistics causes queries to be recompiled. This may lead to performance issues on initial execution.

You can perform update statistics by 2 methods:

  • UPDATE STATISTICS – This needs ALTER rights on the table and has more granular options, performing an update only for one table or a specific statistic. It can't be used for the complete database in one go.
  • sp_updatestats – This needs sysadmin rights on the SQL instance. It can perform an update of all statistics for all tables in the database.
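Both methods in T-SQL (the table and statistic names are hypothetical; the commands are real):

UPDATE STATISTICS dbo.Orders;            -- all statistics on one table
UPDATE STATISTICS dbo.Orders IX_Orders;  -- one specific index/statistic
EXEC sp_updatestats;                     -- all tables in the current database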
33.

How is PowerShell useful for a DBA? Give any 5 things that you have done using PowerShell.

Answer»

PowerShell is Windows scripting, and it is powerful enough to manage things from the core in a very efficient manner. PowerShell helps in deep automation and quick action on DBA activities; it is the new face of DBA automation.

We can perform multiple actions from a DBA perspective using PowerShell, like:

  1. Check the status of the SQL Server service or SQL Server Agent service
  2. Start/stop a SQL Server service
  3. Find the SQL Server version/edition, including the service pack level
  4. Find the SQL Server operating system information, such as the OS version, processor count, physical memory, etc.
  5. Perform backups
  6. Script out a SQL Server Agent job, based on a specific category
  7. Kill all sessions connected to a SQL Server database
34.

How can SQL Injection be stopped / prevented?

Answer»

An SQL injection is a web hacking technique, performed by unauthorized personnel or processes, that might destroy your database.

The major challenge is that SQL injection can cause system crashes, data theft, data corruption, etc.

Proper SQL instance, OS & firewall security, together with a well-written application, can help to reduce the risk of SQL injection.

Development\DBA

  • Validate or filter the SQL commands that are being passed by the front end
  • Validate data types and parameters
  • Use stored procedures with parameters in place of dynamic SQL (see the sketch after these lists)
  • Remove old installables from application & database servers
  • Remove old backups, application files & user profiles
  • Restrict commands from executing with a semicolon, EXEC, CAST, SET, two dashes, apostrophes, special characters, etc.
  • Restrict the option of CMD execution or 3rd-party execution
  • Limited or least possible rights to DB users

Infra\Server

  • Latest Patches
  • Restricted Access
  • Updated Antivirus

Network Administration

  • Allow traffic from required addresses or domains
  • Firewall settings to be reviewed on a regular basis to prevent SQL injection attacks
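A minimal parameterization sketch with sp_executesql (a real procedure; the table and parameter names are hypothetical):

DECLARE @UserName sysname = N'alice';
EXEC sp_executesql
    N'SELECT * FROM dbo.Users WHERE UserName = @name',
    N'@name sysname',
    @name = @UserName;  -- input is bound as data, never concatenated into the SQL text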
35.

How can SQL Server instances be hidden?

Answer»

To secure your SQL Server instance, it's advisable to hide it. A SQL Server instance can be marked as hidden from SQL Server Configuration Manager.

SQL Server Configuration Manager > select the instance of SQL Server > right-click and select Properties > after selecting properties, set Hide Instance to "Yes" and click OK or Apply.

You need to restart the instance of SQL Server for the change to take effect.

36.

What are the SQL Server Agent Proxy and its sub-systems?

Answer»

As the name implies, a SQL Server Agent proxy is an account that grants a user the privilege to execute a particular process or action when the user does not have the rights for it. The SQL Server Agent proxies include multiple sub-systems:

  • ActiveX Script – Access to run ActiveX Scripts
  • Operating System (CmdExec) – Access to run Command line scripts
  • Replication Distributor – Replication Agent Rights
  • Replication Merge – Replication Agent Rights
  • Replication Queue Reader – Replication Agent Rights
  • Replication Snapshot – Replication Agent Rights
  • Replication Transaction-Log Reader – Replication Agent Rights
  • Analysis Services Command - SSAS execution rights
  • Analysis Services Query - SSAS execution rights
  • SSIS Package Execution – SSIS package execution rights
  • Unassigned Proxies – If the required option is not available, you can select this as an "Other" option.

All these sub-systems are available under Proxies in SSMS, and you can create them as per your requirement.
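A hedged sketch of creating a proxy for the CmdExec subsystem (the procedures are real; the proxy and credential names are hypothetical, and the credential must already exist):

USE msdb;
EXEC dbo.sp_add_proxy
    @proxy_name = N'CmdExecProxy',
    @credential_name = N'WinCredential';
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'CmdExecProxy',
    @subsystem_name = N'CmdExec';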

37.

Explain multi-server administration and its uses.

Answer»

SQL Server provides the feature of managing jobs on target servers from a master/central server, called multi-server administration. Job and step information is stored on the master server. When the jobs complete on the target servers, a notification is sent to the master server, so the master server always has up-to-date information. This is an enterprise-level solution for running a consistent set of jobs on numerous SQL Servers.

38.

While trying to take a differential backup of the MASTER database, I am getting the error “You can only perform a full backup of the master database. Use BACKUP DATABASE to back up the entire master database.” Differential backup is supported by all recovery models, so why is it failing for the MASTER database? Can you explain what the reason can be?

Answer»

To restore a differential backup of any database, the DB needs to be in RESTORING mode, which means the DB will not be accessible.

The MASTER database is the startup database for any SQL Server instance; the SQL instance will be in an offline state if the MASTER database is not accessible.

If we combine both statements, we can see that a differential backup of the MASTER database would be unusable, as we could never restore it. That's why SQL Server will not allow you to take a differential backup of MASTER.
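A quick illustration (the backup paths are hypothetical):

BACKUP DATABASE master TO DISK = N'D:\Backup\master_full.bak';  -- allowed
-- BACKUP DATABASE master TO DISK = N'D:\Backup\master_diff.bak' WITH DIFFERENTIAL;
-- the second statement fails with the error quoted in the question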

39.

What is DCM & BCM Page?

Answer»
  • DCM (Differential Changed Map): The DCM is used by SQL Server for differential backups. DCM pages keep track of all extents that have changed since the last full database backup.

During a differential backup, the database engine reads the DCM pages and backs up only those extents that have changed since the last full backup. A bit value of 1 means the extent has been modified, and 0 means it has not. After each full backup, all these bits are reset to 0.

  • BCM (Bulk Changed Map): The BCM page is used in the bulk-logged recovery model to track extents changed by bulk-logged or minimally logged operations.

During a log backup, the database engine reads the BCM pages and includes all the extents that have been modified by bulk-logged operations. A value of 1 means a modified extent and 0 means not modified. After each log backup, all these bits are reset to 0.

40.

If a transaction starts before the backup start time and the transaction is committed before the completion of the full backup, please explain which data would be included in the full backup.

Answer»

A full backup is a complete copy of the database with all its data. At the time of any crash or data recovery, it's the starting point for selecting which full backup to use when planning the recovery path.

To make things clear and doubt-free: SQL Server includes all data and transactions in the full backup up to the end time of the backup. If your backup took 2 hours, that backup will also contain the data changes and transactions that happened during those 2 hours.

A full backup includes all data and transactions as of the completion time of the backup. It covers the complete transaction activity, including all changes made after the backup's starting checkpoint, so those changes can be applied during the database recovery process.

41.

We have one 3-node cluster with 1 SQL 2005 and 1 SQL 2008 R2 instance. How many minimum installations are needed to patch all SQL instances in the cluster?

Answer»

We need to run the SQL Server patch installation a minimum of 4 times to patch both SQL instances on all 3 cluster nodes.

  • SQL 2005 – 1 installation. SQL 2005 supports remote installation: the SQL 2005 patch installation will install the patch on all cluster nodes in one go, but only for cluster objects like the DB engine or agent.
  • SQL 2008 R2 – 3 installations. SQL 2008 R2 does not support remote installation, so we need to patch all 3 nodes separately.

Additionally, we may need to run the SQL 2005 setup on the other 2 nodes to patch non-cluster objects like SSMS, but that's an additional part.

42.

Is it possible to import data using T-SQL commands without using SQL Server Integration Services? If yes, please share the commands.

Answer»

Sometimes we have situations where we cannot perform such activities using SSIS, due to multiple types of databases, different domains, older versions, etc.

We have several commands available to import data using T-SQL.

  • BCP – BCP is the bulk copy program, used to import a large number of rows into SQL Server or export them to text/CSV files.
  • BULK INSERT – Bulk insert loads data files (text/CSV) into a database table in a user-specified format.
  • OPENROWSET – OPENROWSET is used to access remote data from an OLE DB data source. This is an alternative to a linked server for one-time or ad-hoc connections.
  • OPENDATASOURCE – OPENDATASOURCE is used to access remote data on an ad-hoc basis with a 4-part object name, without using a linked server name.
  • OPENQUERY – OPENQUERY is used to execute a specified query on a specified linked server. OPENQUERY can be referenced in the FROM clause, or with an INSERT, UPDATE, or DELETE statement on the target table.
  • Linked Servers – Linked servers are configured to access data outside SQL Server, from another SQL instance or other DB types (like Oracle, DB2, etc.). They provide the ability to issue queries and transactions on heterogeneous data sources.
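Two quick sketches (the file path, table, and linked server names are hypothetical):

BULK INSERT dbo.SalesStaging
FROM 'D:\import\sales.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

INSERT INTO dbo.SalesStaging
SELECT * FROM OPENQUERY(MyLinkedServer, 'SELECT * FROM sales_db.dbo.sales');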
43.

What is Row Versioning?

Answer»

To understand row versioning, you should first understand RCSI (Read Committed Snapshot Isolation). RCSI does not hold locks on the table during the transaction, so the table can be modified in other sessions; it eliminates the use of shared locks on read operations.

RCSI maintains versions of old data in tempdb, which is called row versioning. Row versioning increases overall system performance by reducing the resources used to manage locks.

When any transaction updates a row in the table, a new row version is generated. With each subsequent update, if the DB already has a previous version of the row, that previous version is stored in the version store, and the new row version contains a pointer to the old version of the row in the version store.

SQL Server keeps running a cleanup task to remove old versions that are no longer in use. While a transaction is open, all versions of rows that have been modified by that transaction must be kept in the version store. Long-running open transactions and many old row versions can cause a huge tempdb issue.
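Turning it on is one real command (the database name is hypothetical); newer versions also expose the version store through a DMV:

ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON;
-- SELECT * FROM sys.dm_tran_version_store_space_usage;  -- SQL Server 2017 and later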

44.

Explain the Write-Ahead Transaction Log (WAL) protocol?

Answer»

The write-ahead transaction log controls and defines the recording of data modifications to disk. To maintain the ACID (Atomicity, Consistency, Isolation, and Durability) properties of a DBMS, SQL Server uses a write-ahead log (WAL), which guarantees that no data modifications are written to disk before the associated log record is written to disk.

The write-ahead log works as below:

  • SQL Server maintains a buffer cache to hold data pages on retrieval.
  • When a page is modified in the buffer cache, it is marked as dirty. The page is not immediately written back to disk.
  • A data page can be modified multiple times before being written to disk, but each modification gets its own transaction log record.
  • The log records must be written to disk before the associated dirty page is removed from the buffer cache and written to disk.
45.

Explain One to One (1:1), One to Many (1:N) and Many to Many (N:N) mapping with an example?

Answer»
  1. One to One (1:1) – For each instance in the first entity, there is one and only one in the second entity, and vice versa. Like Employee Name and Employee ID: one employee can have only one Employee ID, and one Employee ID can be allocated to one person only.
  2. One to Many (1:N) – For each instance in the first entity, there can be one or more in the second entity, but for each instance in the second entity, there is one and only one instance in the first entity. Like Manager & Employee: one manager can have many employees, but one employee can have only one manager.
  3. Many to Many (N:N) – For each instance in the first entity, there can be one or more instances in the second entity, and vice versa. Like Employee & Project: one employee can work on multiple projects, and one project can have multiple employees working on it (see the sketch below).
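A minimal N:N sketch with a junction table (all names are hypothetical):

CREATE TABLE dbo.Employee (EmpID int PRIMARY KEY, Name nvarchar(100));
CREATE TABLE dbo.Project  (ProjID int PRIMARY KEY, Title nvarchar(100));
CREATE TABLE dbo.EmployeeProject (        -- the junction table resolves the N:N mapping
    EmpID  int REFERENCES dbo.Employee(EmpID),
    ProjID int REFERENCES dbo.Project(ProjID),
    PRIMARY KEY (EmpID, ProjID)
);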
46.

Explain Physical Database Architecture in brief?

Answer»

The physical database architecture is a description of the way the database and its files are organized in SQL Server.

  • Pages and extents – A page is 8 KB in size, and a set of 8 pages is called an extent. This is the fundamental unit in which data is stored.
  • Physical database files and filegroups – Database files visible at the file system level that store data and logs.
  • Table and index architecture – Database objects inside the database that store and access data.
  • Database – A database is a set of data and log files that reside on the file system and are managed by the operating system.
  • SQL instance – A SQL instance is a logical entity that controls databases; one SQL instance can have multiple databases. Security and accessibility are also part of the SQL instance.
47.

When I configure database mirroring I'm receiving the below error: “One or more of the server network addresses lacks a fully qualified domain name (FQDN). Specify the FQDN for each server, and click start mirroring again.” How do I resolve it?

Answer»

The FQDN (fully qualified domain name) is the computer name of each server together with the domain name. It can be found by running the following from the command prompt:

  • IPCONFIG /ALL
  • Concatenate the “Host Name” and “Primary DNS Suffix”
  • Host Name . . . . . . . . . . . . : XYZ
  • Primary DNS Suffix . . . . . . . : corp.mycompany.com
  • The FQDN of your computer name will be XYZ.corp.mycompany.com.
48.

Explain Principal, Mirror And Witness Servers in DB Mirroring?

Answer»
  • Principal Server: The main server, holding the primary copy of the database and serving client applications and requests.
  • Mirror Server: The secondary server, which holds a mirrored copy of the principal database and acts as a hot or warm standby server, depending on the synchronous or asynchronous configuration.
  • Witness Server: The witness server is an optional server; it controls automatic failover to the mirror if the principal becomes unavailable.
49.

Which are the basic system tables to track the information about the Log Shipping?

Answer»
Log shipping is a disaster recovery solution from Microsoft. Log shipping comes with multiple internal tables to refer to its details and monitor its current status.

  • log_shipping_monitor_alert – This system table saves the alert configuration used to monitor and trigger a notification on violations.
  • log_shipping_monitor_error_detail – This system table shows errors that occurred during log shipping.
  • log_shipping_monitor_history_detail – This system table saves the history of log shipping and its status. It can be referred to in the future for any issues and for security reports.
  • log_shipping_monitor_primary – This table saves one record per primary database, with backup and monitoring thresholds.
  • log_shipping_monitor_secondary – This table saves one record per secondary database, with the primary server, primary database, restore and monitoring thresholds.
  • log_shipping_primary_databases – This table saves a list of all databases serving as primary databases enabled for log shipping.
  • log_shipping_secondary_databases – This table saves a list of all databases serving as secondary databases in log shipping.

Also, you can use the Transaction Log Shipping Report at the server level to view the current status.
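These tables live in msdb and can be queried directly, for example:

SELECT secondary_database, last_copied_file, last_restored_file, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;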

50.

How log shipping works?

Answer»

Log shipping has been a Microsoft SQL Server DR solution from the very beginning. In log shipping, backup and restore of a database from one server to another standby server happens automatically.

Log shipping works with 3 mandatory jobs and 1 optional job.

  • Backup Job – The backup job is responsible for taking transaction log backups on the primary server. It runs on the primary server.
  • Copy Job – The copy job runs on the secondary server and is responsible for copying T-log backups from the primary server to the secondary server.
  • Restore Job – The restore job runs on the secondary server to restore T-log backups in sequential order.

These 3 jobs are created separately for each database configured in log shipping.

  • Alert Job – This is an optional job to monitor log shipping thresholds and generate notifications. This job is instance-specific.