Log shipping is one of the oldest and most reliable disaster recovery (DR) solutions from Microsoft SQL Server. First introduced in SQL Server 2000, it remains popular today and is extremely easy to configure. It is also the least expensive option, often called the "poor man's DR solution." In a nutshell, log shipping moves data by capturing changes on a Primary server and applying them to one or more Secondary servers. From a use-case perspective, it can be used to offload predominantly read-oriented work, such as reporting and data warehousing, from the OLTP database onto secondary servers. Log shipping involves one or more standby servers that are not used for regular operations; in the event of a disaster, one of these secondary servers is made the Primary and used for business continuity. Log shipping operates at the database level, not the instance level. It does not fit every problem, and it is up to the architects to decide whether to use it or look at other DR strategies. Given the length of this topic, I'll explain log shipping in two parts.
SQL Server log shipping allows you to automatically send transaction log backups from the primary database to one or more secondary databases hosted on separate secondary servers. Three distinct steps (SQL Server Agent jobs) are involved in log shipping:
- Back up the transaction log on the Primary database server. These files can be stored on a backup share or on a SAN mount point. A SQL Server Agent job is created under the category "Log Shipping" and can be scheduled at any frequency that suits your scenario. The job invokes sqllogship.exe with the -backup and -server arguments. Refer to https://docs.microsoft.com/en-us/sql/tools/sqllogship-application for more details on the sqllogship utility.
- Copy the transaction log backup files from the backup share/SAN mount point to the destination Secondary SQL Server instance.
- Restore the transaction log backup files on the Secondary SQL Server instance.
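The three jobs above boil down to a small number of T-SQL operations. Below is a minimal sketch of what each job effectively does; the database name, share, and file paths are hypothetical, and in a real configuration the sqllogship.exe jobs perform these steps for you:

```sql
-- 1. Backup job (runs on the Primary): back up the transaction log
--    to the shared backup folder.
BACKUP LOG SalesDB
    TO DISK = N'\\BackupShare\LogShipping\SalesDB_20240101_1200.trn';

-- 2. Copy job (runs on each Secondary): the agent job copies the .trn file
--    from \\BackupShare\LogShipping to a local folder on the secondary server.

-- 3. Restore job (runs on each Secondary): apply the log backup, leaving the
--    database readable between restores (STANDBY mode with an undo file).
RESTORE LOG SalesDB
    FROM DISK = N'D:\LogShippingCopy\SalesDB_20240101_1200.trn'
    WITH STANDBY = N'D:\LogShippingCopy\SalesDB_undo.tuf';
```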
A fourth type of SQL Server Agent job is the alert job, which by default runs every 2 minutes (this can be changed) around the clock, monitors the Primary and Secondary servers, and raises alerts when operations do not complete successfully. This job is created on the Primary server as well as on every Secondary server. It must be configured if an operator needs to be notified, if alerts should be written to the SQL Server error log, or if they should be sent to a separate Monitor instance.
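If you want to check log shipping health yourself in addition to the alert job, the monitor tables in msdb can be queried directly. A simple sketch, run on the Monitor (or a Secondary) server:

```sql
-- Last copy/restore activity recorded for each secondary database.
-- Large gaps between these timestamps indicate a stalled job.
SELECT  secondary_server,
        secondary_database,
        last_copied_file,
        last_copied_date,
        last_restored_file,
        last_restored_date
FROM    msdb.dbo.log_shipping_monitor_secondary;
```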
This log shipping architecture is an active-warm standby architecture. A warm standby is built by taking a full backup of the Primary and applying the incremental transaction log backups from the Primary to every secondary server in sequence. The secondary servers are in Standby/Read-only mode, which means read operations are allowed but not updates. In the event of a disaster, one of the secondary servers is brought online manually. It is important to apply all outstanding transaction log backups, including a backup of the active transaction log (the tail of the log) from the Primary, before bringing the database online, which is done using a simple T-SQL statement. There will be a little downtime while the secondary server is brought online.
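The failover sequence just described can be sketched in T-SQL as follows (the database and file names are hypothetical). The tail-log backup is taken on the Primary if it is still reachable, and the final restore WITH RECOVERY is what brings the secondary database online:

```sql
-- On the Primary (if still accessible): back up the tail of the log
-- and leave the database in a restoring state.
BACKUP LOG SalesDB
    TO DISK = N'\\BackupShare\LogShipping\SalesDB_tail.trn'
    WITH NORECOVERY;

-- On the chosen Secondary: apply any remaining log backups,
-- finishing with the tail-log backup.
RESTORE LOG SalesDB
    FROM DISK = N'D:\LogShippingCopy\SalesDB_tail.trn'
    WITH NORECOVERY;

-- Bring the database online. This is the point at which the brief
-- downtime ends and the secondary becomes the new primary.
RESTORE DATABASE SalesDB WITH RECOVERY;
```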
A sample log shipping configuration is pictorially depicted below (borrowed from MSDN). In this particular setup there are three Secondary servers and one Monitor server. The backup is taken and stored on a backup share, from which each secondary server copies and restores the transaction log backups.
When (and When Not) to Use Log Shipping
Although log shipping is a great DR strategy and one of my favorites, it doesn't fit every scenario, and you should be aware of the following before choosing it as your DR solution.
- Log shipping is implemented at a database level
- You cannot exclude individual objects such as tables or stored procedures from being shipped; the entire database is shipped.
- Log shipping failover is manual, though you could write scripts to automate it.
- There is always downtime involved when a failover instance (secondary instance) is brought online. This is a critical factor DR architects need to consider.
- Because the backup job typically runs only every few minutes, the latest transaction log backup may not have run yet when a failover is triggered, or files still in transit may not have reached the secondary servers, so some data loss is possible. You can only minimize this window by running the backup job more often.
- In Windows clustering, failover to a passive node is seamless to the application layer; this is not the case with log shipping. The application layer must be redirected to point to the secondary server that has been brought online, or alternatively a load balancer must re-route traffic to a different set of application servers that are pre-configured to talk to the secondary database server.
- Unlike clustering, where all nodes point to a single set of data file mount points, log shipping keeps distinct copies of the data on each server. So if one data mount goes bad, the secondary copies are still available.
- Clustering isn't a very efficient solution across geographically dispersed sites, but log shipping is well suited to them.
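As noted above, the main lever for shrinking the potential data-loss window is the backup job's schedule. Since the log shipping jobs are ordinary SQL Server Agent jobs, their schedules can be tightened with msdb's sp_update_schedule; the schedule name below is hypothetical and should match whatever name was generated when log shipping was configured:

```sql
-- Run the transaction log backup job every 5 minutes
-- instead of the default 15.
EXEC msdb.dbo.sp_update_schedule
    @name = N'LSBackupSchedule_PRIMARYSERVER1',
    @freq_subday_type = 4,       -- 4 = interval expressed in minutes
    @freq_subday_interval = 5;   -- every 5 minutes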
In the next part, I will walk through the exact steps to set up a simple log shipping configuration.