InterviewSolution
This section includes interview solutions, each offering curated questions and detailed answers to sharpen your knowledge and support interview preparation. Choose a topic below to get started.
1. What is a .TUF file? What is its significance? Are there any implications if the file is deleted?

Answer»
2. My full backup size is 300 GB, and my diff backup size usually varies between 300 MB and 5 GB. Unfortunately, one day the diff backup size increased to 250 GB. What might be the reason? Any idea?
Answer» UNION blends the contents of two structurally compatible tables into a single combined table. The distinction between UNION and UNION ALL is that UNION will discard duplicate records, while UNION ALL will include duplicate records. Note that the performance of UNION ALL will commonly be superior to UNION, since UNION requires the server to do the extra work of removing any duplicates. So, in situations where there is a guarantee that there won't be any duplicates, or where having duplicates isn't an issue, use of UNION ALL is suggested for performance reasons. Let's have a look at the examples below explaining the usage of both: in the first we have used UNION, and in the second UNION ALL.
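A minimal sketch; the Employees_US and Employees_UK tables (assumed to have matching structures) are hypothetical names for illustration:

-- UNION: duplicates removed, which costs the server extra sort/distinct work
SELECT FirstName, LastName FROM Employees_US
UNION
SELECT FirstName, LastName FROM Employees_UK;

-- UNION ALL: duplicates kept, usually faster
SELECT FirstName, LastName FROM Employees_US
UNION ALL
SELECT FirstName, LastName FROM Employees_UK;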
3. Say I have an environment with a full backup on Sunday night, a diff backup every other night, and a transaction log backup every 45 minutes. A disaster happened at 2:30 PM on Saturday, and you suddenly found that the last Sunday full backup is corrupted. What's your recovery plan?
Answer» The .TUF file is the Transaction Undo File, which is created when log shipping is configured to a server in Standby mode. When the database is in Standby mode, database recovery is performed as each log is restored, and this mode also creates a file with the .tuf extension on the destination server: the transaction undo file. This file contains information on all the modifications that were still in flight at the time the backup was taken. The file plays an important role in Standby mode, and the reason is very obvious: while restoring the log backup, all uncommitted transactions are recorded to the undo file, with only committed transactions written to disk, which enables users to read the database. So when we restore the next transaction log backup, SQL Server fetches all the uncommitted transactions from the undo file and checks against the new transaction log backup whether they have committed or not. If a transaction is found to be committed, it is written to disk; otherwise it stays in the undo file until it gets committed or rolled back. If the .tuf file gets deleted, there is no way to repair log shipping except by reconfiguring it from scratch.
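A hedged sketch of the restore that produces the undo file; the database name, backup path, and .tuf location below are hypothetical:

RESTORE LOG MyDB
    FROM DISK = N'\\backupshare\MyDB_tlog.trn'
    WITH STANDBY = N'D:\LogShip\MyDB.tuf';   -- uncommitted work is parked in the undo file, DB stays readable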
4. I want to know the maximum worker threads setting and the active worker thread count on SQL Server. Can you tell me how to capture this info? What's the default value for the max thread count?
Answer» Are you the kind of DBA who rebuilds all indexes nightly? Then your differential backups can easily be nearly as large as your full backup. That means you're taking up nearly twice the space just to store the backups and, even worse, you're looking at twice the time to restore the database. To avoid these issues with diff backups, ideally schedule the index maintenance to happen right before the full backup.
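As a hedged diagnostic sketch: on SQL Server 2016 SP2 / 2017 and later, sys.dm_db_file_space_usage exposes how many pages have changed since the last full backup, which approximates the size of the next differential (on older versions this column is not available):

SELECT file_id,
       modified_extent_page_count,     -- pages changed since the last full backup
       allocated_extent_page_count
FROM sys.dm_db_file_space_usage;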
5. How do distributed transactions work in SQL Server?
Answer» When you find that the last full backup is corrupted or otherwise unrestorable, all differentials taken after that point become useless. You then need to go back a further week to the previous full backup (taken 13 days ago) and restore that, plus the differential from 8 days ago, and the subsequent 8 days of transaction logs (assuming none of those ended up corrupted!). If you're taking daily full backups, a corrupted full backup only introduces an additional 24 hours of logs to restore. Alternatively, a log-shipped copy of the database could save your bacon (you have a warm standby, and you know the log backups are definitely good).
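A minimal sketch of the restore sequence, with hypothetical database and file names:

RESTORE DATABASE MyDB FROM DISK = N'F:\Backup\MyDB_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDB FROM DISK = N'F:\Backup\MyDB_diff.bak' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'F:\Backup\MyDB_log1.trn' WITH NORECOVERY;
-- ...repeat for each subsequent log backup in sequence, then bring the DB online:
RESTORE DATABASE MyDB WITH RECOVERY;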
6. Can we add an article to an existing publication without generating a snapshot of all articles?
Answer» We can check the current setting and thread allocation using the queries below.
SELECT max_workers_count FROM sys.dm_os_sys_info;
SELECT COUNT(*) FROM sys.dm_os_threads;
The default configured value for 'max worker threads' is 0, which tells SQL Server to size the thread pool automatically based on the number of CPUs. Increasing the number of worker threads may actually decrease performance, because too many threads cause context switching, which can consume so much of the resources that the OS starts to degrade in overall performance.
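To see (or change) the configured maximum itself, a small sketch using sp_configure ('max worker threads' is an advanced option, hence the first call):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads';   -- config_value of 0 = let SQL Server auto-configure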
7. Consider a situation where the publisher database log file has been growing and there are just a few MB available on disk. As an experienced professional, how do you react to this situation? Remember, no disk space is available, and we can't create a new log file on another drive.
Answer» Distributed transactions are transactions that work across databases or instances in a given session. The snapshot isolation level does not support distributed transactions. We can explicitly start a distributed transaction using "BEGIN DISTRIBUTED TRANSACTION <TranName>". For example, if BEGIN DISTRIBUTED TRANSACTION is issued on ServerA, the session calls a stored procedure on ServerB and another stored procedure on ServerC, and the stored procedure on ServerC executes a distributed query against ServerD, then all four computers are involved in the distributed transaction. The instance of the Database Engine on ServerA is the originating controlling instance for the transaction. When a distributed query is executed in a local transaction, the transaction is automatically promoted to a distributed transaction if the target OLE DB data source supports ITransactionLocal. If the target OLE DB data source does not support ITransactionLocal, only read-only operations are allowed in the distributed query. For these transactions to work, make sure the MS DTC (Distributed Transaction Coordinator) service is running and properly configured on all participating servers.
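A hedged sketch, assuming a linked server named ServerB and hypothetical Bank.dbo.Accounts tables; MSDTC must be running on both machines:

BEGIN DISTRIBUTED TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;                  -- local server
    UPDATE ServerB.Bank.dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 1;     -- remote, via linked server
COMMIT TRANSACTION;   -- both updates commit or roll back together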
8. When should a developer use the NOLOCK hint? What problems can happen when using this hint, and how can these problems be addressed?
Answer» Yes! We can do that. Follow the steps below to publish a new article to an existing publication. There are two publication properties that we need to change to "false": 1. immediate_sync and 2. allow_anonymous. Both fields are often set to on by default. If immediate_sync is enabled, every time you add a new article it will cause the entire snapshot to be applied, not just the one for the particular article. Steps (a sketch follows):
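A minimal T-SQL sketch of these steps; the publication name MyPub and table NewTable are hypothetical, and your subscription setup may need extra parameters:

-- 1. Turn off immediate_sync and allow_anonymous for the publication
EXEC sp_changepublication @publication = N'MyPub', @property = N'allow_anonymous', @value = 'false';
EXEC sp_changepublication @publication = N'MyPub', @property = N'immediate_sync', @value = 'false';

-- 2. Add the new article
EXEC sp_addarticle @publication = N'MyPub', @article = N'NewTable',
     @source_owner = N'dbo', @source_object = N'NewTable';

-- 3. Refresh existing push subscriptions, then snapshot only the new article
EXEC sp_refreshsubscriptions @publication = N'MyPub';
EXEC sp_startpublication_snapshot @publication = N'MyPub';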
9. What are optimistic and pessimistic locks in SQL?
Answer» Essentially, we have to identify the bottleneck that is filling the log file. As a quick resolution, check the usual suspects: long-running open transactions, a Log Reader agent that has stopped or fallen behind, and missing transaction log backups. If we can't resolve it just with a simple fix, we have to reclaim space in the transaction log file, either by backing up the log and then shrinking the file, or, as a last resort, by truncating the log. If we are still unable to stop the growing log file, the final solution is to disable replication, truncate the log, and reinitialize the subscribers.
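A hedged sketch of the diagnostic and shrink commands; PubDB and its logical log file name PubDB_log are hypothetical:

DBCC SQLPERF(LOGSPACE);                                 -- how full is each database's log?
SELECT name, log_reuse_wait_desc FROM sys.databases;    -- what is preventing log truncation?
BACKUP LOG PubDB TO DISK = N'\\share\PubDB_log.trn';    -- frees space inside the log file
DBCC SHRINKFILE (PubDB_log, 1024);                      -- then shrink the physical file (target size in MB)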
10. What is lock escalation in SQL Server?
Answer» The performance of a SQL query can improve with NOLOCK, as it does not lock rows and so does not block reading/processing data from the table. NOLOCK can be used when dealing with large tables having millions of records, and with tables where data rarely changes. If we have too many locks in SQL, row-level locking can be escalated to the page and even table level, which might stall query processing until the operation that acquired the lock completes. With NOLOCK we can read both committed and uncommitted data. This may result in dirty reads if the system has not been designed to handle such a scenario. Generally, to avoid acting on a dirty read, we use a TIMESTAMP (rowversion) column, which can be referenced while executing DELETE/UPDATE commands to know whether the data has changed or not. This helps us prevent data changes based on the dirty-read problem. Comparing each column's value is a very expensive operation, which is why the industry standard suggests a TIMESTAMP column.
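A minimal sketch with a hypothetical dbo.Orders table:

SELECT OrderId, Status
FROM dbo.Orders WITH (NOLOCK)   -- reads uncommitted data; results may include dirty reads
WHERE CustomerId = 42;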
11. How many modes of locks are supported in SQL Server?
Answer» An optimistic lock is a widely used strategy in n-tier applications where, while reading data from the database, we check the version of the data before writing it back. The versioning is done on a date, a timestamp, or a checksum/hash. This approach is used to prevent the dirty-read scenario, where we take a decision based on data that is not yet committed in the database. The optimistic approach ensures that if a conflicting change is identified through versioning, we start over from fresh data. This strategy is popular where a system deals with a high volume of data, and in n-tier applications where we are not always connected to the database with a single connection; instead there is a pool of connections, from which we connect with any connection that is free and available at the time. We cannot hold a lock across calls in such cases. This strategy is applied in most banking operations. A pessimistic lock is just the opposite: it takes an exclusive lock on resources until we finish our operation. It keeps data integrity high, but performance will always be slower. We need to connect to the database with an active connection (which is the case in a two-tier application).
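A hedged sketch of optimistic concurrency in T-SQL; dbo.Accounts and its rowversion column RowVer are hypothetical:

DECLARE @AccountId INT = 1, @NewBalance MONEY = 500.00;
DECLARE @RowVerReadEarlier BINARY(8);

-- Read step: remember the row version along with the data
SELECT @RowVerReadEarlier = RowVer FROM dbo.Accounts WHERE AccountId = @AccountId;

-- Write step: succeeds only if nobody changed the row in between
UPDATE dbo.Accounts
SET    Balance = @NewBalance
WHERE  AccountId = @AccountId
  AND  RowVer = @RowVerReadEarlier;

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by another session; reload and retry.';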
12. Explain locks in SQL and how many locks are available?
Answer» This is the strategy used by SQL Server to avoid locking a large number of database resources, since holding locks on resources uses a lot of memory. Let's try to understand this with a use case where we have to apply locks on 30,000 records, each 500 bytes in size, for a DELETE operation. Considering the memory requirement for this use case, we would need one shared lock on the database, one intent lock on the table, around 1,875 exclusive locks on pages, and around 30,000 exclusive locks on rows. At roughly 96 bytes per lock, the total memory requirement would be about 3 MB for a single delete operation. To avoid this, SQL Server leverages the lock escalation strategy, which removes the need for a large memory footprint by escalating many fine-grained locks to a single coarser lock. So in cases where we would otherwise need thousands of locks on many resources, lock escalation ensures a single lock that fulfils our objective while taking care of the memory space issue. An exclusive lock on the table ensures that we do not need page-level locks to guarantee data integrity: instead of holding so many row and page locks, SQL Server escalates to an exclusive lock on the table.
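Escalation behaviour can be controlled per table (SQL Server 2008+); a hedged sketch with a hypothetical dbo.BigTable:

-- TABLE is the default; AUTO escalates to the partition level on partitioned tables
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = AUTO);

-- DISABLE prevents escalation in most cases (memory pressure can still force it)
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = DISABLE);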
13. What is the use case for using a CURSOR?
Answer» The locks that get applied on various resources of SQL Server can be classified into the following modes: Shared (S), Update (U), Exclusive (X), Intent (IS, IU, IX), Schema (Sch-S, Sch-M), Bulk Update (BU), and Key-range locks. The locking hierarchy of database objects runs from top to bottom, Database → Table → Page → Row, and a lock is always obtained from the top of the hierarchy down. Compatibility between the different modes is defined by SQL Server's lock compatibility matrix; for example, shared locks are compatible with other shared locks but not with exclusive locks.
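You can observe the modes and resources being locked in a live session; a small sketch using the locks DMV:

SELECT resource_type,    -- DATABASE, OBJECT, PAGE, KEY, RID, ...
       request_mode,     -- S, U, X, IS, IU, IX, Sch-S, Sch-M, ...
       request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;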
14. What points should be considered for creating a fast-performing stored procedure?
Answer» Locks allow seamless functioning of SQL Server even with concurrent user sessions. As we all know, different clients need to access databases simultaneously, so locks come to the rescue to keep information from being corrupted or invalidated when numerous clients attempt data manipulation (DML) operations such as read, write, and update on the database. "A lock is characterized as a mechanism to guarantee data integrity and consistency while enabling concurrent access to data. It is utilized to implement concurrency control when various clients access the database to manipulate its data at the same time." Locks can be applied to various database resources; among the places where a lock can be applied are: a row (RID), an index key or key range (KEY), a page (PAGE), an extent (EXTENT), a heap or B-tree/table (HoBT/TABLE), a file (FILE), and the database itself (DATABASE).
15. What is an extended stored procedure and what is the use of it?
Answer» SQL Server cursors are ideal when we need to work on one record at a time, as opposed to taking all of the information from a table as a single bulk set. Be that as it may, they ought to be used with care, as they can affect performance, particularly as the volume of data increases. From a beginner's perspective, I truly do feel that cursors ought to be avoided wherever possible, because if they are badly written, or deal with too much data, they really will affect system performance. There will be times when it is impossible to avoid cursors, and I doubt many systems exist without them. If you do find you have to use them, try to reduce the number of records to process by using a temp table first, and then build the cursor from that. The lower the number of records to process, the quicker the cursor will finish. Please find below the widely used syntax for cursors:
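A minimal, runnable sketch of the standard cursor pattern (iterating table names from the system catalog purely for illustration):

DECLARE @Name sysname;
DECLARE tbl_cursor CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.tables;            -- keep the row set as small as possible
OPEN tbl_cursor;
FETCH NEXT FROM tbl_cursor INTO @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @Name;                            -- per-row work goes here
    FETCH NEXT FROM tbl_cursor INTO @Name;
END
CLOSE tbl_cursor;
DEALLOCATE tbl_cursor;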
16. What can a developer do during the logical and physical design of a database to help guarantee that their database and SQL Server-based application will perform well?
Answer» The performance of any application largely depends upon the backend process that extracts the data; if it is faster, the application will also be responsive. The design of a stored procedure is critical, and we need to be careful to remove any bottlenecks. We can consider the following points while designing a stored procedure: use SET NOCOUNT ON to suppress unneeded row-count messages, schema-qualify object names, avoid the sp_ prefix for user procedures, avoid cursors where a set-based statement will do, select only the columns you need, keep transactions as short as possible, and write parameterized code that allows plan reuse. A minimal template follows.
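A hedged sketch applying several of these points; all object names are hypothetical:

CREATE PROCEDURE dbo.usp_GetOrdersByCustomer    -- schema-qualified, no sp_ prefix
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;                             -- suppress row-count chatter

    SELECT OrderId, OrderDate, Status           -- only the needed columns, no SELECT *
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;             -- parameterized, plan-reuse friendly
END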
17. What is a hot and cold backup in SQL Server?
Answer» Extended stored procedures are programs written in C/C++ that are similar to stored procedures in that they accept parameters, but the result set is returned through SQL Server's Open Data Services API. They run inside the SQL Server process memory and are stored in the master database. They do not run in the current database context, and to trigger an extended stored procedure we need to pass the fully qualified name, like master.dbo.xp_*. The best practice is wrapping the extended stored procedure inside a system procedure so you do not need to call the extended one with a fully qualified name. For example, the cleaning of parameters to be passed to the extended stored procedure xp_varbintohexstr is taken care of by sp_hexstring, and sp_hexstring can then be executed from any database context without being referenced by a qualifier; this is an example of wrapping an extended procedure inside a system procedure. Once you call an extended stored procedure, the request transfers in tabular data stream (TDS) or SOAP format from the calling application to SQL Server.
18. Is it possible to delete rows from views?
Answer» There are lots of things that need to be considered before designing a logical and physical data model. We must first consider what type of data we are going to store, how the data will be accessed, which systems play the role of upstream and downstream, and finally plan by understanding the volume of data that is going to arrive on a daily basis. If we are going to upgrade an existing platform, then understanding the existing pain areas also helps developers design the database to meet future needs while remediating those bottlenecks. Understanding your data is important, but unless we have clarity about the different components of the data, we will not be able to make a better plan. Also, we need to revisit our design at several stages of a project, as we work in an evolving world where requirements change overnight; so agility and flexibility of database design are very important. We should be able to meet any future requirements with a flexible approach, and we should regularly revisit data relationships, volumes, and indexes to make sure the design stays flexible. Finally, we should frequently profile the database server using tools like SQL Server Profiler to identify problem areas on a regular basis.
19. What is an execution plan in SQL Server?
Answer» The answer to this question depends on the view type. There are two types of views: a simple view, created from a single table and without any derived columns based on built-in aggregate functions like AVG, SUM, etc., and a complex view, derived from multiple tables. We can delete rows from a view if it is a simple one, but not from a complex one. Let me explain through the view definition. In SQL, a view is a virtual table based on the result set of a SQL query. A view contains rows just like a real physical table, and the fields in a view are fields from one or more real tables in the database. You can include SQL functions and WHERE and JOIN clauses in a view, and present the information as though it were coming from one single table. So, to repeat the rule: we can delete/update data through a view only when it is a simple view without derived columns based on AVG, COUNT, SUM, MIN, MAX, etc.; we cannot delete rows from a complex view.
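A minimal sketch of deleting through an updatable (simple) view; the table and view names are hypothetical:

CREATE VIEW dbo.vw_ActiveEmployees AS
    SELECT EmployeeId, FirstName, LastName
    FROM dbo.Employees
    WHERE IsActive = 1;     -- single base table, no aggregates: updatable
GO
-- Removes the matching row from the underlying dbo.Employees table
DELETE FROM dbo.vw_ActiveEmployees WHERE EmployeeId = 10;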
20. What is the use of RPC in a cluster?
Answer» When you launch SQL Server Management Studio (SSMS), you see the option to connect to a DB instance. Whenever desired, you can browse instances running on your network: simply click on the drop-down, and at the bottom there is a 'Browse for more…' option. This enables you to look for local or network instances; I have four instances running on my workstation, and they are all displayed in this list. Suppose one of these instances is top secret and we don't want clients to see the instance name. That is possible through SQL Server Configuration Manager (SSCM). Open SSCM and navigate to your SQL Server instances, select the instance of SQL Server, right-click, and select Properties. After choosing Properties, set Hide Instance to "Yes" and click OK or Apply. After the change is made, you have to restart the instance of SQL Server for it to stop exposing the instance name.
21. What are the benefits of readable secondary replicas?
Answer» An execution plan, basically, is the result of the query optimizer's attempt to calculate the most efficient way to implement the request represented by the SQL query you submitted. It is, therefore, the DBA's essential means of investigating a poorly performing query. Instead of guessing at why a given query is performing a large number of scans, putting your I/O through the roof, you can use the execution plan to identify the precise bit of SQL code that is causing the issue. For instance, it might be scanning an entire table's worth of data when, with the proper index, it could simply pull out just the rows you need. All this and more is shown in the execution plan. Even though SQL Server usually creates a good plan, sometimes it is not smart enough to validate its plans and fix the poor ones. You can get an estimated execution plan and an actual graphical execution plan in SQL Server: produce these plans by using the shortcuts Ctrl+M (include actual plan) or Ctrl+L (display estimated plan), or by using the icons next to the Execute icon on the standard toolbar of SQL Server Management Studio (SSMS). There are two sorts of execution plans: the estimated plan, which the optimizer produces without running the query, and the actual plan, captured while the query executes.
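The same two plan types can be captured in T-SQL; a hedged sketch with a hypothetical dbo.Orders query (SHOWPLAN_XML must be the only statement in its batch, hence the GO separators):

SET SHOWPLAN_XML ON;     -- returns the estimated plan without executing the query
GO
SELECT * FROM dbo.Orders WHERE CustomerId = 42;
GO
SET SHOWPLAN_XML OFF;
GO
SET STATISTICS XML ON;   -- executes the query and returns the actual plan
GO
SELECT * FROM dbo.Orders WHERE CustomerId = 42;
GO
SET STATISTICS XML OFF;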
22. How do you bring the mirror DB online if the principal is down?
Answer» Remote Procedure Call (RPC) is a system service for interprocess communication (IPC) between different processes. The two processes can be on the same computer, on the LAN, or in a remote location accessed over a WAN or VPN connection. RPC works on a dynamic range of ports and connects using any available port from that range. There is a long list of services that depend on RPC, such as Telnet, DHCP, DNS, COM+, DTC, and WINS. You may face multiple errors when RPC is not working, the classic one being "The RPC server is unavailable".
23. Explain DB mirroring operating modes.
Answer» AlwaysOn provides you with the feature of a readable replica. This is a long-awaited feature through which you can utilize your passive node, unlike a cluster, where the passive node consumes resources but you cannot use it until the primary node goes down. Benefits: read-only workloads such as reporting can be offloaded to the secondary, backups can be taken on the secondary, the secondary's resources no longer sit idle, and the load on the primary replica is reduced.
24. What is a TUF file and what does it contain?
Answer» We need to perform a forced failover to bring the mirror online when the principal is down. We can perform this using the command below:
ALTER DATABASE <DB Name> SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS
If you try to fail over as in the normal situation, when the principal is online [ALTER DATABASE <DB Name> SET PARTNER FAILOVER], you will receive the error below:
ALTER DATABASE <DB Name> SET PARTNER FAILOVER
Msg 1404, Level 16, State 10, Line 1
The command failed because the database mirror is busy. Reissue the command later.
25. What are the common reasons which cause log shipping issues?
Answer» We have 3 operating modes in DB mirroring: High Performance (asynchronous), where transactions commit on the principal without waiting for the mirror; High Safety without automatic failover (synchronous), where transactions commit on both partners but failover is manual; and High Safety with automatic failover (synchronous), which requires a witness server to arbitrate automatic failover.
26. How do you add a new article to existing replication?
Answer» TUF stands for Transaction Undo File, used when log shipping is configured in standby mode on the secondary. The TUF contains details of transactions not applied on the secondary in the last log restore; basically, these transactions were not yet committed on the primary database when the transaction log backup was in progress. On the next T-log restore on the secondary, log shipping refers to the TUF file for the state of active transactions. If the TUF is missing, you can't recover your log shipping.
27. What are the different replication agents, and what is their purpose?
Answer» Log shipping may fail due to multiple issues, some of which can cause data loss. Here are some possible reasons: the backup, copy, or restore jobs have been disabled or are failing; an out-of-band transaction log backup has broken the log chain; the network share or disk used for the backups is full or unreachable; the SQL Server Agent service account lacks permissions on the share; the TUF file has been deleted on the secondary; or the secondary database has been recovered (brought online) manually.
28. What is forced parameterization?
Answer» In an existing replication setup, sometimes we need to add new articles. When you perform such operations in the default setup, it can invalidate your snapshot, which results in a snapshot re-initialization for all articles. This sounds normal, but consider a situation where the number of articles is very high or the size of the remaining articles is huge; reinitialization can bring down your system. SQL Server provides the option of generating a snapshot of only the newly added article. This can be achieved with T-SQL like the following:
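A hedged sketch; the publication MyPub and table NewTable are hypothetical names (see also the fuller step list under question 8):

EXEC sp_changepublication @publication = N'MyPub', @property = N'immediate_sync', @value = 'false';
EXEC sp_addarticle @publication = N'MyPub', @article = N'NewTable',
     @source_owner = N'dbo', @source_object = N'NewTable';
EXEC sp_startpublication_snapshot @publication = N'MyPub';   -- snapshot covers only the new article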
29. If a column has more than 50% NULL values, which index should we choose?
Answer»
30. Why do we execute UPDATE STATISTICS?
Answer» By default, SQL Server works with simple parameterization, where it internally adds parameters to a SQL statement executed without parameters so that it can try to reuse a cached execution plan. But simple parameterization has lots of limitations and does not apply in many cases, for example to statements containing joins, IN lists, UNION, TOP, DISTINCT, subqueries, or GROUP BY. To force SQL Server to auto-parameterize every SQL execution, you need to enable forced parameterization at the database level. The forced parameterization setting needs proper testing, as it may also have an adverse impact on performance.
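A minimal sketch, with a hypothetical database name:

ALTER DATABASE SalesDB SET PARAMETERIZATION FORCED;
-- revert to the default behavior:
ALTER DATABASE SalesDB SET PARAMETERIZATION SIMPLE;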
31. How is PowerShell useful for a DBA? Give any 5 things that you have done using PowerShell.
Answer» If a column in a table has more than 50% NULL values, index selection must be done very carefully. Index storage is based on a B-tree structure, and if the column is mostly NULL, a conventional index spends most of its space and maintenance cost on rows that queries rarely ask for, yielding little benefit at query execution. SQL Server has a special index called a filtered index that can be used here: you can create an index on the column covering only the non-NULL values, so NULL data is not included in the index at all.
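A minimal sketch of such a filtered index; the table and column names are hypothetical:

CREATE NONCLUSTERED INDEX IX_Orders_ShipDate
ON dbo.Orders (ShipDate)
WHERE ShipDate IS NOT NULL;   -- NULL rows are excluded from the index entirely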
32. How can SQL injection be stopped/prevented?
Answer» UPDATE STATISTICS performs a recalculation of query optimization statistics for a table or indexed view. Although, with the Auto Update Statistics option, query optimization statistics are automatically recomputed, in some cases a query may benefit from updating those statistics more frequently. UPDATE STATISTICS uses tempdb for its processing. Please note that updating statistics causes queries to be recompiled, which may lead to performance issues on their initial executions. You can update statistics by 2 methods: the UPDATE STATISTICS statement against a specific table, index, or statistic, or the sp_updatestats system procedure, which covers all user tables in the current database.
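A hedged sketch of both methods; the table and statistic names are hypothetical:

UPDATE STATISTICS dbo.Orders;                                   -- all statistics on one table
UPDATE STATISTICS dbo.Orders IX_Orders_ShipDate WITH FULLSCAN;  -- one statistic, sampled over every row
EXEC sp_updatestats;                                            -- every user table in the current database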
33. How can SQL Server instances be hidden?
Answer» PowerShell is Windows scripting, powerful enough to manage things from the core in a very efficient manner. PowerShell helps with deep automation and quick action on DBA activities; it is the new face of DBA automation. We can perform multiple actions from a DBA perspective using PowerShell, for example: checking and controlling SQL services across many servers, automating backups and restores, monitoring disk space and sending alerts, deploying the same script or job to dozens of instances, and collecting inventory and configuration details from an entire estate.
34. What are the SQL Server Agent proxy and its sub-systems?
Answer» A SQL injection is a web hacking technique used by unauthorized personnel or processes that might destroy your database. The major challenge is that SQL injection can cause a system crash, stolen data, data corruption, etc. Proper SQL instance, OS, and firewall security, along with a well-written application, can help reduce the risk of SQL injection. Responsibilities can be split across teams:
Development\DBA: use parameterized queries or stored procedures instead of concatenated dynamic SQL, validate and sanitize all input, and grant application logins only the minimum required privileges.
Infra\Server: keep the OS and SQL Server patched, disable unused features and services, and audit for suspicious activity.
Network Administration: place the database behind a firewall, restrict which hosts can reach the SQL port, and consider a web application firewall in front of the application.
35. Explain multi-server administration and its uses.
Answer» To secure your SQL Server instance, it's advisable to hide it. A SQL Server instance can be marked as hidden from SQL Server Configuration Manager: open SQL Server Configuration Manager > select the instance of SQL Server > right-click and select Properties > set Hide Instance to "Yes" and click OK or Apply. You then need to restart the instance of SQL Server.
36. “You can only perform a full backup of the master database. Use BACKUP DATABASE to back up the entire master database.”
Answer» As the name implies, a SQL Server Agent proxy is an account that grants a user the privilege to execute a particular process or action when the user does not have rights. SQL Server Agent proxies cover multiple sub-systems, including: Operating System (CmdExec), PowerShell, ActiveX Script, the Replication agents (Snapshot, Log Reader, Distribution, Merge, Queue Reader), SQL Server Analysis Services Command and Query, and SSIS package execution.
All these sub-systems are available under Proxies in SSMS, and you can create proxies per your requirement.
37. While trying to take a differential backup of the master database, I am getting the below error. Differential backup is supported by all recovery models, so why is it failing for the master database? Can you explain the reason?
Answer» SQL Server provides the feature of managing jobs on target servers from a master/central server, called multi-server administration. Job and step information is stored on the master server. When the jobs complete on the target servers, notification is sent to the master server, so the master server always has up-to-date information. This is an enterprise-level solution for when a consistent set of jobs needs to run on numerous SQL Servers.
38. What are the DCM & BCM pages?
Answer» To restore a differential backup of any database, the DB needs to be in restoring mode, which means the DB will not be accessible. The master database is the startup database for any SQL Server instance; the SQL instance will be offline if the master database is not accessible. If we combine both statements, we can see that a differential backup of the master database would be useless, as we could never restore it. That's why SQL Server will not allow you to take a differential backup of the master DB.
39. If a transaction starts before the backup start time and commits before the completion of the full backup, which data will be included in the full backup?
Answer» DCM (Differential Changed Map) and BCM (Bulk Changed Map) are bitmap pages that SQL Server uses to track modified extents.
During a SQL Server differential backup, the database engine reads the DCM pages and backs up only those extents that have changed since the last full backup. A value of 1 means the extent has been modified, and 0 means it has not. After each full backup, all these bits are reset to 0.
During a log backup, the database engine reads the BCM pages and includes all the extents that have been modified by bulk-logged operations. A value of 1 means a modified extent and 0 means not modified. After each log backup, all these bits are reset to 0.
40. We have one 3-node cluster with one SQL 2005 and one SQL 2008 R2 instance. How many installations, at minimum, are needed to patch all SQL instances in the cluster?
Answer» A full backup is a complete copy of the database with all its data. At the time of any crash or data recovery, it is the starting point: you select which full backup to use when planning the recovery path. To make things clear and doubt-free: SQL Server includes all data and transactions in the full backup up to the end time of the backup. If your backup took 2 hours, that backup will also contain the data changes and transactions that happened during those 2 hours. A full backup includes all data and transactions as of the completion time of the backup: it covers all the changes made after the backup's starting checkpoint, so they can be applied during the database recovery process.
41. Is it possible to import data using T-SQL commands, without using SQL Server Integration Services? If yes, please share the commands.
Answer» We need to run the SQL Server patch installation a minimum of 4 times to patch both SQL instances on all 3 cluster nodes: the SQL 2005 patch installer is cluster-aware and patches all nodes in one run from the active node (1 installation), while SQL 2008 R2 patching is a rolling, node-by-node process, so it must be run once on each of the 3 nodes (3 installations).
Additionally, we may need to run the SQL 2005 setup on the other 2 nodes to patch non-clustered objects like SSMS, but that is an additional part.
42. What is row versioning?
Answer» Sometimes we have situations where we cannot perform such activities using SSIS, due to multiple database platforms, different domains, older versions, etc. We have several commands available to import data using T-SQL, such as BULK INSERT, OPENROWSET(BULK ...), OPENDATASOURCE, OPENQUERY, and linked servers.
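A hedged sketch of the two most common file-import commands; the table, file paths, and format file are hypothetical:

-- BULK INSERT: load a flat file straight into an existing table
BULK INSERT dbo.StageOrders
FROM 'C:\feeds\orders.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- OPENROWSET(BULK ...): query a file ad hoc (requires a format file in this form)
SELECT *
FROM OPENROWSET(BULK 'C:\feeds\orders.csv',
                FORMATFILE = 'C:\feeds\orders.fmt') AS src;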
43. Explain the Write-Ahead Transaction Log (WAL) protocol.
Answer» To understand row versioning, you should first understand RCSI (Read Committed Snapshot Isolation). RCSI does not hold locks on the table during the transaction, so the table can be modified in other sessions, eliminating the use of shared locks on read operations. RCSI maintains versions of old data in tempdb; this is called row versioning. Row versioning increases overall system performance by reducing the resources used to manage locks. When any transaction updates a row in the table, a new row version is generated. With each subsequent update, if the database already holds a previous version of this row, the previous version is stored in the version store and the new row version contains a pointer to the old version of the row in the version store. SQL Server keeps running a cleanup task to remove old versions that are no longer in use. While a transaction is open, all versions of rows that it has modified must be kept in the version store; long-running open transactions and many old row versions can therefore cause huge tempdb issues.
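A minimal sketch of enabling RCSI (and hence row versioning) on a hypothetical database; the switch needs no other active connections to the database:

ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Verify the setting:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'SalesDB';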
44. Explain One-to-One (1:1), One-to-Many (1:N), and Many-to-Many (N:N) mapping with an example.
Answer» The write-ahead transaction log controls and defines how data modifications are recorded to disk. To maintain the ACID (Atomicity, Consistency, Isolation, and Durability) properties of a DBMS, SQL Server uses a write-ahead log (WAL), which guarantees that no data modification is written to disk before the associated log record is written to disk. The write-ahead log works as follows: when a modification occurs, the data page is changed in the buffer cache and a log record describing the change is built in the log cache; when the transaction commits, the log records are flushed to the log file on disk first; only after that can the modified (dirty) data pages be written to the data files by the checkpoint or lazy writer processes.
45. Explain the physical database architecture in brief.
Answer»
46. “One or more of the server network addresses lacks a fully qualified domain name (FQDN). Specify the FQDN for each server, and click start mirroring again.”
Answer» The physical database architecture describes how the database and its files are organized in SQL Server: each database consists of a primary data file (.mdf), optional secondary data files (.ndf), and at least one transaction log file (.ldf). Data files are organized into 8 KB pages, and eight contiguous pages form a 64 KB extent, the basic unit of space allocation.
47. When I configure database mirroring, I'm receiving the below error.
Answer» The FQDN (fully qualified domain name) is the computer name of each server together with its domain name. This can be found by running the following from the command prompt: ipconfig /all — the FQDN is the Host Name combined with the Primary DNS Suffix shown in the output.
48. Explain the principal, mirror, and witness servers in DB mirroring.
Answer»
49. Which are the basic system tables to track information about log shipping?
Answer» Log shipping status is tracked in msdb system tables such as log_shipping_primary_databases, log_shipping_primary_secondaries, log_shipping_secondary_databases, log_shipping_monitor_primary, log_shipping_monitor_secondary, log_shipping_monitor_history_detail, and log_shipping_monitor_error_detail.
Also, you can use the Transaction Log Shipping Status report at the server level to view the current status.
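A small sketch of querying recent log shipping activity from one of these tables:

SELECT TOP (20) *
FROM msdb.dbo.log_shipping_monitor_history_detail
ORDER BY log_time DESC;   -- most recent backup/copy/restore session messages first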
50. How does log shipping work?
Answer» Log shipping is a long-standing Microsoft SQL Server DR solution. In log shipping, backup and restore of a database from one server to another standby server happen automatically. Log shipping works with 3 mandatory jobs and 1 optional job: a Backup job on the primary server that takes the transaction log backups, a Copy job on the secondary server that copies the backup files across, a Restore job on the secondary server that restores them, and an optional Alert job on the monitor server that raises alerts when backups or restores fall behind.
The first three jobs are created separately for each database configured in log shipping.