SQL Server transactions in PowerShell

I am new to PowerShell scripting and SQL Server. I am trying to write a test case to verify the integrity of a SQL Server database (running inside a VM) with respect to backups.
The goal is to check that when a backup is taken in the middle of a transaction, the database is still consistent after the restore.
My test case takes periodic backups of the SQL Server VM while another PowerShell script performs database transactions in parallel (transferring money from one account to another).
I frequently find that the database is inconsistent: the sum of money across all accounts is less than the initial deposit.
I am wondering if the SQL Server transaction logic is buggy. Can anybody see what is wrong with the PowerShell and SQL Server code below? Does it get the transaction semantics right?
Function TransferMoney {
    $conn = $args[0]
    $from = $args[1]
    $to = $args[2]
    $amount = $args[3]

    $conn.BeginTransaction()

    # Keep this transaction intentionally dumb, so that it takes longer to
    # execute and Uvm has more chance of getting replicated in the middle of the
    # transaction.

    # Read the current balances.
    $reader = $conn.ExecuteReader("SELECT balance FROM $tableName WHERE id=$from")
    $reader.Read()
    $from_balance = $reader.GetValue(0)
    $reader.Close()

    $reader = $conn.ExecuteReader("SELECT balance FROM $tableName WHERE id=$to")
    $reader.Read()
    $to_balance = $reader.GetValue(0)
    $reader.Close()

    $from_balance = $from_balance - $amount
    $to_balance = $to_balance + $amount

    $conn.ExecuteNonQuery("UPDATE $tableName SET balance=$from_balance WHERE id=$from")
    $conn.ExecuteNonQuery("UPDATE $tableName SET balance=$to_balance WHERE id=$to")
    $conn.CommitTransaction()

    Write-Output "$amount dollars are transferred from account $from to $to. Current balances are $from_balance and $to_balance dollars respectively."
}
Function WorkUnit {
    $from = Get-Random -minimum 0 -maximum $numAccounts
    $to = ($from + 1) % $numAccounts

    $conn = CreateConnection
    $conn.ExecuteNonQuery("SET XACT_ABORT ON")
    $conn.ExecuteNonQuery("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")

    # Transfer money from one account to another. Transactions may fail if
    # multiple jobs pick conflicting account numbers, in which case ignore that
    # transfer. Since we use transactions, such failures wouldn't cause any
    # loss of money, so test should still succeed.
    TransferMoney $conn $from $to $from

    # Number of dollars transferred from an account is kept unique (the account
    # number itself) so that, in the event of data inconsistency, we can deduce
    # which transfer operation has caused the data inconsistency and it can be
    # helpful in debugging.
}
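For context when reading the code above: if $conn is a thin wrapper around System.Data.SqlClient.SqlConnection, note that in raw ADO.NET the SqlTransaction returned by BeginTransaction() has to be assigned to each SqlCommand's Transaction property, otherwise the commands run outside the transaction. A minimal sketch of that plain ADO.NET pattern in PowerShell (the connection string and table name are placeholders, not taken from the script above):
# Minimal ADO.NET transaction sketch (Windows PowerShell; placeholder connection
# string and table, not the wrapper object used in the question).
Add-Type -AssemblyName System.Data
$conn = New-Object System.Data.SqlClient.SqlConnection 'Data Source=.;Initial Catalog=TestDb;Integrated Security=SSPI'
$conn.Open()
$tx = $conn.BeginTransaction([System.Data.IsolationLevel]::Serializable)
try {
    $cmd = $conn.CreateCommand()
    $cmd.Transaction = $tx          # every command must be enlisted explicitly
    $cmd.CommandText = 'UPDATE accounts SET balance = balance - 10 WHERE id = 1'
    $cmd.ExecuteNonQuery() | Out-Null
    $cmd.CommandText = 'UPDATE accounts SET balance = balance + 10 WHERE id = 2'
    $cmd.ExecuteNonQuery() | Out-Null
    $tx.Commit()
}
catch {
    $tx.Rollback()
    throw
}
finally {
    $conn.Close()
}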


How to execute a large number of SQL queries asynchronously and in threads

Problem: I have a huge number of SQL queries (around 10k-20k) and I want to run them asynchronously in 50 (or more) threads.
I wrote a PowerShell script for this job, but it is very slow (it took about 20 hours to execute them all). The desired result is 3-4 hours max.
Question: How can I optimize this PowerShell script? Should I reconsider and use another technology like Python or C#?
I think it's a PowerShell issue, because when I check with whoisactive the queries execute quickly. Creating, exiting and unloading jobs takes a lot of time, because a separate PowerShell instance is created for each thread.
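If you want to confirm the per-job overhead locally, here is a quick illustrative measurement of spinning up, running and tearing down one trivial job:
Measure-Command {
    $job = Start-Job -ScriptBlock { 1 }   # spawns a separate PowerShell process
    $job | Wait-Job | Receive-Job | Out-Null
    $job | Remove-Job
}
That startup cost is typically a large fraction of a second per job, which adds up quickly across 10k-20k queries.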
My code:
$NumberOfParallerThreads = 50;

$Arr_AllQueries = @('Exec [mystoredproc] @param1=1, @param2=2',
                    'Exec [mystoredproc] @param1=11, @param2=22',
                    'Exec [mystoredproc] @param1=111, @param2=222')

# Creating the batches
$counter = [pscustomobject] @{ Value = 0 };
$Batches_AllQueries = $Arr_AllQueries | Group-Object -Property {
    [math]::Floor($counter.Value++ / $NumberOfParallerThreads)
};

foreach ($item in $Batches_AllQueries) {
    $tmpBatch = $item.Group;
    $tmpBatch | % {
        $ScriptBlock = {
            # accept the loop variable across the job-context barrier
            param($query)
            # Execute a command
            Try
            {
                Write-Host "[processing '$query']"
                $objConnection = New-Object System.Data.SqlClient.SqlConnection;
                $objConnection.ConnectionString = 'Data Source=...';
                $ObjCmd = New-Object System.Data.SqlClient.SqlCommand;
                $ObjCmd.CommandText = $query;
                $ObjCmd.Connection = $objConnection;
                $ObjCmd.CommandTimeout = 0;
                $objAdapter = New-Object System.Data.SqlClient.SqlDataAdapter;
                $objAdapter.SelectCommand = $ObjCmd;
                $objDataTable = New-Object System.Data.DataTable;
                $objAdapter.Fill($objDataTable) | Out-Null;
                $objConnection.Close();
                $objConnection = $null;
            }
            Catch
            {
                $ErrorMessage = $_.Exception.Message
                $FailedItem = $_.Exception.ItemName
                Write-Host "[Error processing: $($query)]" -BackgroundColor Red;
                Write-Host $ErrorMessage
            }
        }
        # pass the loop variable across the job-context barrier
        Start-Job $ScriptBlock -ArgumentList $_ | Out-Null
    }
    # Wait for all to complete
    While (Get-Job -State "Running") { Start-Sleep 2 }
    # Display output from all jobs
    Get-Job | Receive-Job | Out-Null
    # Cleanup
    Remove-Job *
}
UPDATE:
Resources: The DB server is on a remote machine with:
24GB RAM,
8 cores,
500GB storage,
SQL Server 2016
We want to use the maximum CPU power.
Framework limitation: The only limitation is not to use SQL Server itself to execute the queries. The requests should come from an outside source such as PowerShell, C#, Python, etc.
RunspacePool is the way to go here, try this:
$AllQueries = @( ... )
$MaxThreads = 5

# Each thread keeps its own connection but shares the query queue
$ScriptBlock = {
    Param($WorkQueue)
    $objConnection = New-Object System.Data.SqlClient.SqlConnection
    $objConnection.ConnectionString = 'Data Source=...'

    $objCmd = New-Object System.Data.SqlClient.SqlCommand
    $objCmd.Connection = $objConnection
    $objCmd.CommandTimeout = 0

    $query = ""
    while ($WorkQueue.TryDequeue([ref]$query)) {
        $objCmd.CommandText = $query
        $objAdapter = New-Object System.Data.SqlClient.SqlDataAdapter $objCmd
        $objDataTable = New-Object System.Data.DataTable
        $objAdapter.Fill($objDataTable) | Out-Null
    }
    $objConnection.Close()
}
# create a pool
$pool = [RunspaceFactory]::CreateRunspacePool(1, $MaxThreads)
$pool.ApartmentState = 'STA'
$pool.Open()

# convert the query array into a concurrent queue
$workQueue = New-Object System.Collections.Concurrent.ConcurrentQueue[object]
$AllQueries | % { $workQueue.Enqueue($_) }

$threads = @()

# Create each powershell thread and add them to the pool
1..$MaxThreads | % {
    $ps = [powershell]::Create()
    $ps.RunspacePool = $pool
    $ps.AddScript($ScriptBlock) | Out-Null
    $ps.AddParameter('WorkQueue', $workQueue) | Out-Null
    $threads += [pscustomobject]@{
        Ps = $ps
        Handle = $null
    }
}

# Start all the threads
$threads | % { $_.Handle = $_.Ps.BeginInvoke() }

# Wait for all the threads to complete - errors will still set the IsCompleted flag
while ($threads | ? { !$_.Handle.IsCompleted }) {
    Start-Sleep -Seconds 1
}

# Get any results and display any errors
$threads | % {
    $_.Ps.EndInvoke($_.Handle) | Write-Output
    if ($_.Ps.HadErrors) {
        $_.Ps.Streams.Error.ReadAll() | Write-Error
    }
}
Unlike PowerShell jobs, RunspacePools can share resources. So there is one concurrent queue holding all the queries, and each thread keeps its own connection to the database.
As others have said though - unless you're stress testing your database, you're probably better off reorganising the queries into bulk inserts.
You need to reorganize your script so that you keep a database connection open in each worker thread, using it for all queries performed by that thread. Right now you are opening a new database connection for each query, which adds a large amount of overhead. Eliminating that overhead should speed things up to or beyond your target.
Try using SqlCmd.
You can launch multiple processes using Process.Start() and use sqlcmd to run the queries in parallel processes.
Of course, if you're obligated to do it in threads, this answer will no longer be the solution.
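A rough sketch of that approach (untested; the server name, database name and throttle value are placeholders, and it reuses the $Arr_AllQueries list from the question):
# Fan the queries out to sqlcmd processes, throttled to a fixed concurrency.
$maxProcs = 8
$procs    = @()
foreach ($query in $Arr_AllQueries) {
    # wait until a slot frees up
    while (($procs | Where-Object { -not $_.HasExited }).Count -ge $maxProcs) {
        Start-Sleep -Milliseconds 200
    }
    $procs += Start-Process -FilePath 'sqlcmd' `
        -ArgumentList '-S', 'MyServer', '-d', 'MyDb', '-E', '-b', '-Q', $query `
        -NoNewWindow -PassThru
}
$procs | Wait-Process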
Group your queries based on the table and the operations on that table.
Using this you can identify how many async SQL queries you could run against your different tables.
Check the size of each table you are going to run against: if a table contains millions of rows and you are joining it with another table, that will increase the run time, and if it is a CUD operation it might lock your table as well.
Also, choose the number of threads based on your CPU cores and not on assumptions. A CPU core runs one thread at a time, so roughly cores * 2 threads tends to be efficient (see the sketch below).
So first study your data set, then apply the two items above so that you can easily identify which queries can run in parallel efficiently.
Hope this gives some ideas. You could also use a Python script for this, so that you can easily trigger more than one process and monitor their activities.
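As a small illustration of the cores-times-two rule of thumb mentioned above (the multiplier is a heuristic for IO-bound SQL work, not a hard rule):
# Derive a worker count from the local logical core count.
$logicalCores = [Environment]::ProcessorCount
$workerCount  = $logicalCores * 2
Write-Host "Planning $workerCount workers on $logicalCores logical cores"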
Sadly I don't have the time right this instant to answer this fully, but this should help:
First, you aren't going to use the entire CPU for inserting that many records, almost promised. But!
Since it appears you are using SQL string commands:
Split the inserts into groups of say ~100 - ~1000 and manually build bulk inserts:
Something like this as a POC:
$query = "INSERT INTO [dbo].[Attributes] ([Name],[PetName]) VALUES "
for ($alot = 0; $alot -le 10; $alot++){
for ($i = 65; $i -le 85; $i++) {
$query += "('" + [char]$i + "', '" + [char]$i + "')";
if ($i -ne 85 -or $alot -ne 10) {$query += ",";}
}
}
Once a batch is built, pass it to SQL for the insert, effectively using your existing code.
The bulk insert would look something like:
INSERT INTO [dbo].[Attributes] ([Name],[PetName]) VALUES ('A', 'A'),('B', 'B'),('C', 'C'),('D', 'D'),('E', 'E'),('F', 'F'),('G', 'G'),('H', 'H'),('I', 'I'),('J', 'J'),('K', 'K'),('L', 'L'),('M', 'M'),('N', 'N'),('O', 'O'),('P', 'P'),('Q', 'Q'),('R', 'R'),('S', 'S')
This alone should speed up your inserts by a ton!
Don't use 50 threads, as previously mentioned, unless you have 25+ logical cores. You will spend most of your SQL insert time waiting on the network and hard drives, NOT the CPU. By having that many threads enqueued you will have most of your CPU time reserved for waiting on the slower parts of the stack.
These two things alone I'd imagine can get your inserts down to a matter of minutes (I did 80k+ once using basically this approach in about 90 seconds).
The last part could be refactoring so that each core gets its own SQL connection, and then you leave it open until you are ready to dispose of all threads.
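To make the batching idea above concrete, here is a rough sketch that chunks rows into multi-row INSERT statements; SQL Server's VALUES constructor accepts at most 1000 rows per INSERT, so the batch size is capped accordingly (the row data here is made up):
# Build one INSERT statement per chunk of up to 1000 rows.
$rows      = 1..5000 | ForEach-Object { ,@("Name$_", "Pet$_") }   # placeholder rows
$batchSize = 1000
$batches   = @()

for ($start = 0; $start -lt $rows.Count; $start += $batchSize) {
    $end    = [math]::Min($start + $batchSize, $rows.Count) - 1
    $values = $rows[$start..$end] | ForEach-Object { "('$($_[0])', '$($_[1])')" }
    $batches += "INSERT INTO [dbo].[Attributes] ([Name],[PetName]) VALUES " + ($values -join ",")
}
# Each element of $batches can now be sent with the existing SqlCommand code.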
I don't know much about PowerShell, but I do execute SQL in C# all the time at work.
C#'s new async/await keywords make it extremely easy to do what you are talking about.
C# will also make a thread pool for you with the optimal amount of threads for your machine.
async Task<DataTable> ExecuteQueryAsync(string query)
{
    return await Task.Run(() => ExecuteQuerySync(query));
}

async Task ExecuteAllQueriesAsync(IEnumerable<string> queries)
{
    IList<Task<DataTable>> queryTasks = new List<Task<DataTable>>();
    foreach (var query in queries)
    {
        queryTasks.Add(ExecuteQueryAsync(query));
    }
    foreach (var task in queryTasks)
    {
        await task;
    }
}
The code above adds all the queries to the thread pool's work queue, then waits on them all before completing. The result is that the maximum level of parallelism will be reached for your SQL.
Hope this helps!

AWS RDS PostgreSQL error "remaining connection slots are reserved for non-replication superuser connections"

In the dashboard I see there are currently 22 open connections to the DB instance, blocking new connections with the error:
remaining connection slots are reserved for non-replication superuser connections.
I'm accessing the DB from a web service API running on an EC2 instance and always follow the best practice of:
Connection connection = DriverManager.getConnection(URL, USER_NAME, PASSWORD);
Class.forName(DB_CLASS);
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery(SQL_Query_String);
...
resultSet.close();
statement.close();
connection.close();
Can I do something else in the code?
Should I do something else in the DB management?
Is there a way to periodically close connections?
Amazon sets the number of connections based on each instance model's allotment of memory and connections:
MODEL max_connections innodb_buffer_pool_size
--------- --------------- -----------------------
t1.micro 34 326107136 ( 311M)
m1-small 125 1179648000 ( 1125M, 1.097G)
m1-large 623 5882511360 ( 5610M, 5.479G)
m1-xlarge 1263 11922309120 (11370M, 11.103G)
m2-xlarge 1441 13605273600 (12975M, 12.671G)
m2-2xlarge 2900 27367833600 (26100M, 25.488G)
m2-4xlarge 5816 54892953600 (52350M, 51.123G)
But if you want, you can change the max connection limit to a custom value:
From the RDS Console > Parameter Groups > Edit Parameters,
you can change the value of the max_connections parameter to a custom value.
For closing connections periodically, you can set up a cron job, something like this:
select pg_terminate_backend(procpid)
from pg_stat_activity
where usename = 'yourusername'
and current_query = '<IDLE>'
and query_start < current_timestamp - interval '5 minutes';
I'm using Amazon RDS, Scala, PostgreSQL & Slick. First of all - the number of available connections in RDS depends on the amount of available RAM - i.e. the size of the RDS instance. It's best not to change the default connection number.
You can check the max connection number by executing the following SQL statement on your RDS DB instance:
show max_connections;
Check your SPRING configuration to see how many threads you're spawning:
database {
dataSourceClass = org.postgresql.ds.PGSimpleDataSource
properties = {
url = "jdbc:postgresql://test.cb1111.us-east-2.rds.amazonaws.com:6666/dbtest"
user = "youruser"
password = "yourpass"
}
numThreads = 90
}
All of the connections ARE made upon SPRING BOOT initialization, so beware not to cross the RDS limit. That includes other services that connect to the DB. In this case the number of connections will be 90+.
The current limit for db.t2.small is 198 (4GB of RAM)
You can change idle_in_transaction_session_timeout in the parameter group to remove idle connections.
idle_in_transaction_session_timeout (integer)
Terminate any session with an open transaction that has been idle for
longer than the specified duration in milliseconds. This allows any
locks held by that session to be released and the connection slot to
be reused; it also allows tuples visible only to this transaction to
be vacuumed. See Section 24.1 for more details about this.
The default value of 0 disables this feature.
The current value in AWS RDS is 86400000 which when converted to hours (86400000/1000/60/60) is 24 hours.
You can change the max connections in the Parameter Group for your RDS instance. Try to increase it.
Or you can try to upgrade your instance, as max_connections is set to {DBInstanceClassMemory/31457280}.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html

How can I get the Last Processed timestamp for an SSAS tabular cube?

In SSMS I have connected to a SSAS tabular cube. When I view the properties screen I see the Last Processed timestamp of 11/24/2015 2:59:20 PM.
If I use SELECT LAST_DATA_UPDATE FROM $system.MDSchema_Cubes I see a timestamp of 11/25/2015 12:13:28 PM (if I adjust for the timezone).
If I open up the partitions screen for one of the tables in my cube I see that the most Last Processed timestamp is 11/25/2015 12:13:28 PM which matches the value from the DMV.
I want the Last Processed timestamp for my BISM, the one from the Database Properties screen, not the one from a partition that happened to be processed later.
Is there a way to get this programmatically?
You can use the Analysis Services Stored Procedure assembly, which you can download from here.
Once you get the assembly file that corresponds to your Analysis Server version, connect to your instance via SSMS.
Look for your Database (Database Cube)
Go to the Assemblies folder
Right click and New Assembly...
Browse and select the assembly.
Set the permissions as described in the assembly documentation
Once you have imported the assembly, use this MDX query to get the last processed timestamp.
--
with member [Measures].[LastProcessed] as ASSP.GetCubeLastProcessedDate()
select [Measures].[LastProcessed] on 0
from [Armetales DWH]
Let me know if this can help you.
After looking at the code in the Analysis Services Stored Procedure assembly I was able to put together a PowerShell script that gets the date I was looking for. Here is the code:
# we want to always stop the script if any error occurs
$ErrorActionPreference = "Stop"
$error.Clear()

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

$databases = @('BISM1', 'BISM2')
$servers = @('Server1\BISM', 'Server2\BISM')

function Get-BISMLastProcessed
{
    param(
        [string] $connStr
    )
    Begin {
        $server = New-Object Microsoft.AnalysisServices.Server
        $server.Connect($connStr)
    }
    Process {
        Try {
            $database = $server.Databases.GetByName($_)
            Write-Host " Database [$($database.Name)] was last processed $($database.LastProcessed)"
        }
        Catch [System.Exception] {
            Write-Host $Error[0].Exception
        }
        Finally {
            if ($database -ne $null) {
                $database.Dispose()
            }
        }
    }
    End {
        $server.Dispose()
    }
}

foreach ($server in $servers) {
    $connectStr = "Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=BISM1;Data Source=$server"
    Write-Host "Server [$server]"
    $databases | Get-BISMLastProcessed $connectStr
    Write-Host "----------------"
}
The results are:
Server [Server1\BISM]
Database [BISM1] was last processed 11/30/2015 12:25:48
Database [BISM2] was last processed 12/01/2015 15:53:56
----------------
Server [Server2\BISM]
Database [BISM1] was last processed 11/30/2015 12:19:32
Database [BISM2] was last processed 11/02/2015 23:46:34
----------------

Use Multiple DBs With One Redis Lua Script?

Is it possible to have one Redis Lua script hit more than one database? I currently have information of one type in DB 0 and information of another type in DB 1. My normal workflow is doing updates on DB 1 based on an API call along with meta information from DB 0. I'd love to do everything in one Lua script, but can't figure out how to hit multiple dbs. I'm doing this in Python using redis-py:
lua_script(keys=some_keys,
args=some_args,
client=some_client)
Since the client implies a specific db, I'm stuck. Ideas?
It is usually a wrong idea to put related data in different Redis databases. There is almost no benefit compared to defining namespaces by key naming conventions (no extra granularity regarding security, persistence, expiration management, etc ...). And a major drawback is the clients have to manually handle the selection of the correct database, which is error prone for clients targeting multiple databases at the same time.
Now, if you still want to use multiple databases, there is a way to make it work with redis-py and Lua scripting.
redis-py does not define a wrapper for the SELECT command (normally used to switch the current database), because of the underlying thread-safe connection pool implementation. But nothing prevents you from calling SELECT from a Lua script.
Consider the following example:
$ redis-cli
SELECT 0
SET mykey db0
SELECT 1
SET mykey db1
The following script displays the value of mykey in the 2 databases from the same client connection.
import redis
pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
r = redis.Redis(connection_pool=pool)
lua1 = """
redis.call("select", ARGV[1])
return redis.call("get",KEYS[1])
"""
script1 = r.register_script(lua1)
lua2 = """
redis.call("select", ARGV[1])
local ret = redis.call("get",KEYS[1])
redis.call("select", ARGV[2])
return ret
"""
script2 = r.register_script(lua2)
print r.get("mykey")
print script2( keys=["mykey"], args = [1,0] )
print r.get("mykey"), "ok"
print
print r.get("mykey")
print script1( keys=["mykey"], args = [1] )
print r.get("mykey"), "misleading !!!"
Script lua1 is naive: it just selects a given database before returning the value. Its usage is misleading, because after its execution, the current database associated to the connection has changed. Don't do this.
Script lua2 is much better. It takes the target database and the current database as parameters. It makes sure that the current database is reactivated before the end of the script, so that next command applied on the connection still run in the correct database.
Unfortunately, there is no command to retrieve the current database from within the Lua script, so the client has to provide it systematically. Please note the Lua script must reset the current database at the end whatever happens (even in case of a previous error), which makes complex scripts cumbersome and awkward.

"Uninitialized subscription" error in replication monitor

I'm using SQL Server 2012 and trying to implement transactional replication. I'm using the system stored procedures to create the publications and subscriptions. I was successful in creating these things, but when I check the replication monitor, it shows "Uninitialized subscription".
When I check the synchronization status on the subscription, I find this log:
Date 6/20/2012 7:36:33 PM
Log Job History (HYDHTC0131320D-PublisherDB-PublicationOne-HYDHTC0131320D\MSS-ReplicationSubscri-7C1D7509-C8A6-4073-A901-0433A2B6D2D3)
Step ID 1
Server HYDHTC0131320D\MSSQLSERVER2
Job Name HYDHTC0131320D-PublisherDB-PublicationOne-HYDHTC0131320D\MSS-ReplicationSubscri-7C1D7509-C8A6-4073-A901-0433A2B6D2D3
Step Name Run agent.
Duration 00:07:41
Sql Severity 0
Sql Message ID 0
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0
Message
2012-06-20 14:14:13.986 Copyright (c) 2008 Microsoft Corporation
2012-06-20 14:14:13.986 Microsoft SQL Server Replication Agent: distrib
2012-06-20 14:14:13.986
2012-06-20 14:14:13.986 The timestamps prepended to the output lines are expressed in terms of UTC time.
2012-06-20 14:14:13.986 User-specified agent parameter values:
-Publisher HYDHTC0131320D
-PublisherDB PublisherDB
-Publication PublicationOne
-Distributor HYDHTC0131320D
-SubscriptionType 2
-Subscriber HYDHTC0131320D\MSSQLSERVER2
-SubscriberSecurityMode 1
-SubscriberDB ReplicationSubscriberDB
-Continuous
-XJOBID 0xDFE51AEC7F9E3F42A450CE8874B662CD
-XJOBNAME HYDHTC0131320D-PublisherDB-PublicationOne-HYDHTC0131320D\MSS-ReplicationSubscri-7C1D7509-C8A6-4073-A901-0433A2B6D2D3
-XSTEPID 1
-XSUBSYSTEM Distribution
-XSERVER HYDHTC0131320D\MSSQLSERVER2
-XCMDLINE 0
-XCancelEventHandle 000005F8
-XParentProcessHandle 00000560
2012-06-20 14:14:13.986 Startup Delay: 619 (msecs)
2012-06-20 14:14:14.606 Connecting to Subscriber 'HYDHTC0131320D\MSSQLSERVER2'
2012-06-20 14:14:14.656 Connecting to Distributor 'HYDHTC0131320D'
2012-06-20 14:14:14.671 Parameter values obtained from agent profile:
-bcpbatchsize 2147473647
-commitbatchsize 100
-commitbatchthreshold 1000
-historyverboselevel 1
-keepalivemessageinterval 300
-logintimeout 15
-maxbcpthreads 1
-maxdeliveredtransactions 0
-pollinginterval 5000
-querytimeout 1800
-skiperrors
-transactionsperhistory 100
2012-06-20 14:14:14.683 Agent message code 21040. Publication '' does not exist.
How do I solve this issue?
I received the same error. My fix was to explicitly define @job_login and @job_password, which I had as null to start with.
EXEC sp_addpullsubscription_agent
    @publisher = @publisher,
    @publisher_db = @publicationDB,
    @publication = @publication,
    @distributor = @publisher,
    @job_login = $(Login),
    @job_password = $(Password);
It seems there is an error in your replication setup scripts.
I suspect the error is in the call to sp_addpushsubscription_agent (if it is a push subscription) or sp_addpullsubscription_agent (if it is a pull subscription). Specifically, the @publication parameter is wrong, as the Distribution Agent is stating that the specified publication ('') does not exist.
Please review your script and try again.
I faced the same problem and fixed it by doing the following:
- The Subscriber job owner is the same as the Publication user
- The Subscriber user has been added to the subscriber users list and added to the sysadmin server role
Just a few notes, as I managed to get a pull subscriber up and running with a backup initialisation:
Make sure you have the agent accounts of the publisher, distributor and subscriber as sysadmin logins across all instances; for example, the subscriber's account must exist on the distributor and publisher.
Create linked servers for all components on each instance, for example the distributor and publisher on the subscriber.
Then take a full backup of the source database and restore it without recovery on the subscription instance.
Create the subscription on the publisher database (sp_addsubscription) and make sure you have @sync_type = N'replication support only'.
Then take a diff backup of the source database and restore it with recovery on the subscription instance.
Then create the pull subscription on the subscription instance (sp_addpullsubscription); note that if the source is a mirrored instance, use the original instance value here (@publisher). If you use the principal of the mirrored pair, the metadata will be created correctly, and the distributor contains the logic to connect the log reader to either of the mirror instances.
Now you will have a problem with primary keys etc. failing, as the distributor started capturing data before the differential backup ... no problem. Stop the subscriber job and then add a new profile to the distribution agent, setting the -SkipErrors parameter to "2601:2627". This will skip all primary key violation transactions and continue processing. Remember to then select this agent profile and hit "OK".
Restart the subscription job and monitor the subscription as it starts to catch up on the transactions.
When caught up, stop the job, change the agent profile back to the default and restart the job.
Hope this helps anyone still struggling with pull subscribers based on backup initialisation ... note that at no point did I use any backup configurations within the replication configuration. Also note that the pull subscription must not initialise (@immediate_sync = 0).
Here are the scripts:
-----------------BEGIN: Script to be run at Publisher 'DB001\OLTP'-----------------
use [DB1]
go
exec sp_addsubscription @publication = N'DB1', @subscriber = N'DB002\OLTP', @destination_db = N'DB1', @sync_type = N'replication support only', @subscription_type = N'pull', @update_mode = N'read only'
GO
-----------------END: Script to be run at Publisher 'DB001\OLTP'-----------------
-----------------BEGIN: Script to be run at Subscriber 'DB002\OLTP'-----------------
use [DB1]
exec sp_addpullsubscription @publisher = N'DB001\OLTP', @publication = N'DB1', @publisher_db = N'DB1', @independent_agent = N'True', @subscription_type = N'pull', @description = N'', @update_mode = N'read only', @immediate_sync = 0
exec sp_addpullsubscription_agent @publisher = N'DB001\OLTP', @publisher_db = N'DB1', @publication = N'DB1', @distributor = N'DB003\DIST', @distributor_security_mode = 1, @distributor_login = N'', @distributor_password = null, @enabled_for_syncmgr = N'False', @frequency_type = 64, @frequency_interval = 0, @frequency_relative_interval = 0, @frequency_recurrence_factor = 0, @frequency_subday = 0, @frequency_subday_interval = 0, @active_start_time_of_day = 0, @active_end_time_of_day = 235959, @active_start_date = 20170327, @active_end_date = 99991231, @alt_snapshot_folder = N'\\DB001\Replication', @working_directory = N'', @use_ftp = N'False', @job_login = null, @job_password = null, @publication_type = 0
GO
-----------------END: Script to be run at Subscriber 'DB002\OLTP'-----------------