Sybase SAP ASE Replication Server: routing declaration

I'm trying to declare a routing in SAP Replication Server.
I have:
A server (let's call it S1) with ASE and RS server (let's call it RS1).
A server (let's call it S2) with ASE and RS server (let's call it RS2).
A server (let's call it S3) with ASE server.
I have a replication in RS1 from a database in S1 to databases in S1 and S2.
Now I'm trying to add a replication to a database in S3 via RS2: a route from RS1 to RS2 and a subscription to the DB in S3.
I declared the route and an agent between the two RSSDs.
When I try to create the subscription (in RS2) to the database in S3, I get an error saying that it doesn't know the replication definition.
Anyone familiar with routing declaration?
Thanks.

The critical thing with routes is that both Replication Servers need to be in the same replication system, meaning they must share the SAME primary Replication Server, known as the ID server. The ID server holds information about all the Replication Servers in the replication setup, or domain as it's known. You can create many Replication Servers in a domain, but for them to be able to link together via routes they must all use the same ID server.
NOTE: You can't set them up separately and then link them later. When you set up RS2 you have to specify RS1 as the ID server, and enter all the required information for RS1 as you run through the various rs_init menus to create RS2.
If that has already been done correctly, then:
First, set up a route between RS1 and RS2 via the 'create route' command. If you want data to flow in both directions at some point, it makes sense to set up routes both ways between RS1 and RS2, as by definition a route is one-directional. This will mean you can set replication up between any of the three ASE instances; a sketch follows the note below.
NOTE: You need to check that the route is actually fully up and active (via admin who). If it isn't, you need to start looking through the Replication Server error logs for why it is failing, e.g. a missing entry in the interfaces file, a login issue, etc.
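A minimal sketch of those commands, run from isql against each Replication Server (the RSI user names and passwords are placeholders and must match what rs_init created). On RS1:

create route to RS2
set username RS2_rsi
set password RS2_rsi_pwd
go

On RS2, for the route back:

create route to RS1
set username RS1_rsi
set password RS1_rsi_pwd
go

Then on either server, check that the route's RSI threads are up:

admin who
go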
Once routes are set up, you can create a replication definition against the source database and a subscription at the target database even when these are attached to different Replication Servers. This can be a table-level or a database-level replication definition (MSA), depending on what your aim is.
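For example, with MSA the pair of commands looks roughly like this (a hedged sketch; the server, database, and repdef/subscription names are placeholders). The replication definition is created at the primary Replication Server (RS1), and the subscription at the replicate one (RS2):

create database replication definition db_repdef
with primary at S1_ds.pdb
go

create subscription db_sub
for database replication definition db_repdef
with primary at S1_ds.pdb
with replicate at S3_ds.rdb
without materialization
go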

Update: I resolved it.
Essentially, what I did was clean up the settings and delete duplicates. Then I set up the connections again, and then the subscription:
Drop the connections.
Drop the route.
Purge the route, to clean up any old references created by the failed create route.
Suspend the connection.
Stop the Rep Agent, run rs_zeroltm to tell the Rep Agent to start at the end of the log, and restart the Rep Agent.
Resume the connection.
Re-create the route between the RSSDs.
Verify the replication definitions were copied to the target RSSD.
Create a subscription.
Resume the replicate connection on the second RS.
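For reference, the key commands in that sequence look roughly like this (a hedged sketch; the server and database names are placeholders). In RS2:

suspend connection to S3_ds.rdb
go

In the primary ASE:

sp_stop_rep_agent pdb
go

In the RSSD:

rs_zeroltm S1_ds, pdb
go

Back in the primary ASE, and then RS2:

sp_start_rep_agent pdb
go

resume connection to S3_ds.rdb
go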

Related

Schedule a SQL query to move data from one table to another with Azure SQL DB

I have a simple query that takes old data from a table and inserts the data into another table for archiving.
DELETE FROM Events
OUTPUT DELETED.*
INTO ArchiveEvents
WHERE Events.event_time < DATEADD(DAY, -90, GETDATE())
I want this query to run daily.
As I currently understand it, there is no SQL Server Agent when using Azure SQL Database, so SQL Server Agent does not seem like the solution here.
What is the easiest/best solution to this using Azure SQL-db?
There are multiple ways to run automated scripts on Azure SQL Database, as below:
Using Automation Account Runbooks.
Using Elastic Database Jobs in Azure.
Using Azure Data Factory.
As you are running just one script, I would suggest you take a look at Automation Account Runbooks. As an example, below is a PowerShell runbook to execute the statement.
$database = @{
'ServerInstance' = 'servername.database.windows.net'
'Database' = 'databasename'
'Username' = 'uname'
'Password' = 'password'
'Query' = 'DELETE FROM Events OUTPUT DELETED.* INTO ArchiveEvents WHERE Events.event_time < DATEADD(DAY, -90, GETDATE())'
}
Invoke-Sqlcmd @database
Then, it can be scheduled as needed by linking the runbook to an Automation schedule.
You asked in part for a comparison of Elastic Jobs to Runbooks.
Elastic Jobs will also run a pre-determined SQL script against a target set of servers/databases.
Elastic Jobs were built internally for Azure SQL by Azure SQL engineers, so the technology is supported at the same level as Azure SQL.
Elastic Jobs can be defined and managed entirely through PowerShell scripts. However, they also support setup/configuration through T-SQL.
Elastic Jobs are handy if you want to target many databases: you set up the job one time, set the targets, and it will run everywhere at once. If you have many databases on a given server that would be good targets, you only need to specify the target server, and all of the databases on the server are automatically targeted.
If you are adding/removing databases from a given server and want the job to adjust dynamically to this change, Elastic Jobs is designed to do this seamlessly. You just have to configure the job against the server, and every time it runs it will target all (non-excluded) databases on the server.
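Since T-SQL setup is mentioned above, here is a hedged sketch of what it looks like in the Elastic Jobs "job database" (the procedure names are the documented ones, but the group, job, and server names are placeholders, and depending on the service version you may also need a refresh credential or a managed identity):

-- run in the Elastic Jobs "job database"
EXEC jobs.sp_add_target_group @target_group_name = N'EventsServers';

-- a SqlServer-type target automatically includes every database on the server
EXEC jobs.sp_add_target_group_member
    @target_group_name = N'EventsServers',
    @target_type = N'SqlServer',
    @server_name = N'servername.database.windows.net';

EXEC jobs.sp_add_job @job_name = N'ArchiveEvents',
    @description = N'Nightly archive of old events';

EXEC jobs.sp_add_jobstep @job_name = N'ArchiveEvents',
    @command = N'DELETE FROM Events OUTPUT DELETED.* INTO ArchiveEvents WHERE Events.event_time < DATEADD(DAY, -90, GETDATE());',
    @target_group_name = N'EventsServers';

-- enable it and run once a day
EXEC jobs.sp_update_job @job_name = N'ArchiveEvents',
    @enabled = 1,
    @schedule_interval_type = N'Days',
    @schedule_interval_count = 1;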
For reference, I am a Microsoft Employee who works in this space.
I have written a walkthrough and fuller explanation of Elastic Jobs in a blog series. Here is a link to the entry point of the series: https://techcommunity.microsoft.com/t5/azure-sql/elastic-jobs-in-azure-sql-database-what-and-why/ba-p/1177902
You can also use Azure Data Factory: create a pipeline to execute the SQL query and a trigger to run it every day. Azure Data Factory is used to move and transform data from Azure SQL or other storage.

Copy Azure Database Across Subscriptions

I am trying to copy my existing Azure SQL (singleton) database from the PRODUCTION subscription into a NON-PRODUCTION database in a different subscription. I need to repeat this process every night so that our production support environment (non-prod) has the latest copy from production from the night before. Since overwriting a database is not possible in Azure SQL, I would like to copy "DBProd" from the PROD server as "DBProd2" onto my non-prod server (in a different subscription), then delete the existing "DBProd" from the destination server and rename "DBProd2" to "DBProd".
I have searched through this site to find answers and the closest I found was this link below...
Cross Subscription Copying of Databases on Windows Azure SQL Database
In that link, user paulH submitted the answer below...
"This works for me across subscriptions without having matching SQL Server accounts. I am a member of the server active directory admin group on both source and target servers, and connecting using AD authentication with MFA. – paulH Mar 25 at 11:22"
However, I could not figure out the details of how it was achieved. My preference is to use a PowerShell script to get this done. If any of you have done this before, I would appreciate a snippet of sample code, or any pointers to achieve this.
My other option is to go the BACPAC route (export and import), but I would only want to resort to that if copying of DB across subscriptions is not possible.
Thanks in advance!
Helios
Went through the link...
Cross Subscription Copying of Databases on Windows Azure SQL Database
The Move-AzureRmResource cmdlet may be all you need. Below is how it works.
Let's say you create a new Azure SQL server in a different resource group.
New-AzureSqlDatabaseServer -Location "East US" -AdministratorLogin "AdminLogin" -AdministratorLoginPassword "AdminPassword"
Copy the source database to the newly created Azure SQL Server.
Start-AzureSqlDatabaseCopy -ServerName "SourceServer" -DatabaseName "Orders" -PartnerServer "NewlyCreatedServer" -PartnerDatabase "OrdersCopy"
Move the resource group of the newly created Azure SQL server to another subscription.
Move-AzureRmResource -DestinationResourceGroupName <resource group> [-DestinationSubscriptionId <subscription id>] -ResourceId <resource id> [-Force] [-ApiVersion <api version>] [-Pre] [-DefaultProfile <profile>] [-WhatIf] [-Confirm]
For more information, please read this DBA Stack Exchange thread.

Azure SQL, Copy most of a database into an existing one (not new one) same server

I know I can clone DB into a new one with
CREATE DATABASE Database1_copy AS COPY OF Database1;
(https://learn.microsoft.com/en-us/azure/sql-database/sql-database-copy-transact-sql)
and this works flawlessly, except that in Azure, DB properties are managed by the Azure portal, so I am trying to find a way to copy most of the schema/resources/data into an EXISTING DB.
Something like this would be great:
CLONE DATABASE Database_test AS COPY OF Database_production
[Even my first approach was to "clone" the entire DB; in fact, a few tables on the destination DB should be kept, so a better approach would be CLONE EVERYTHING EXCEPT ('table1','table2'). I actually plan to achieve this by scripting out the few tables needed on the destination DB and overwriting them after the import, but the best solution would be the other one.]
You can do this in several ways:
Through the Azure Portal:
Open your database in the Azure Portal (https://portal.azure.com).
In the overview blade of your database, select the "copy" option.
Fill in the parameters, including which server you would like the copy on.
Using a SQL Server client connected to the server:
Open your SQL Server blade in Azure.
Select the "Firewall" option.
Click on "Add client IP".
Connect to your database with your connection string and your favorite client, e.g. SSMS.
Execute your SQL query to clone the database on the same server.
-- Copy a SQL database to the same server
-- Execute on the master database.
-- Start copying.
CREATE DATABASE Database1_copy AS COPY OF Database1;
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-copy-transact-sql
The above SQL statement works perfectly fine as expected in Azure SQL Database.
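While a copy is in flight, you can monitor its progress from the master database of the destination server; a minimal sketch, assuming only the standard sys.dm_database_copies view:

-- run in master on the destination server
SELECT database_id, start_date, percent_complete, error_desc
FROM sys.dm_database_copies;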
Important Notes:
Log on to the master database (system databases) using the server-level principal login or the login that created the database you want to copy.
Logins that are not the server-level principal must be members of the dbmanager role in order to copy databases.
Use an updated version of SQL Server Management Studio.
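If the login you are using is not the server-level principal, granting it the dbmanager role looks roughly like this (a hedged sketch; copy_login is a placeholder name):

-- run in master on the server where the copy will be created
CREATE LOGIN copy_login WITH PASSWORD = '<strong password>';
CREATE USER copy_login FOR LOGIN copy_login;
ALTER ROLE dbmanager ADD MEMBER copy_login;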

SSIS Migrating data to Azure from multiple sources

The scenario is this: We have an application that is deployed to a number of locations. Each application is using a local-instance of SQL Server (2016) with exactly the same DB schema.
The reason for local-instance DBs is that the servers on which the application is deployed will not have internet access - most of the time.
We are now considering keeping the same solution but adding an SSIS package that can be executed at a later time, when the server is connected to the internet.
For now let's assume that once the package is executed - no further DB changes will be made to the local instance.
All tables (except for many-to-many intermediary) have an INT IDENTITY primary key.
What I need is for the table PKs to be auto-generated on the Azure DB, which I'm currently doing by setting the mapping property for the PK to ignore; however, I would also need all FKs pointing to that PK to get the newly generated ID instead of pointing to the original ID.
Since data would be coming from multiple deployments, I want to keep all data as new entries - without updating / deleting existent records.
Could someone kindly explain or link me to some resource that handles this situation?
[ For future references I'm considering using UNIQUEIDENTIFIER instead of INT, but this is what we have atm... ]
Edit: Added example
So for instance, one of the tables would be Events. Each DB deployment will have at least one Event, starting off from Id 1. I'd like that, when consolidating the data into the Azure DB, the original Id is ignored and an auto-generated Id is assigned by the Azure DB. That part is OK. But then I'd need all FKs pointing to EventId to point to the new Id, so instead of e.g. 1 they'd get the new Id according to the Azure DB (e.g. 3).
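A common T-SQL pattern for this kind of Id remapping is a hedged sketch like the one below (not from the thread: staging.Events, staging.EventDetails, and the payload/EventId columns are placeholders; only Events.event_time comes from the question). Loading via MERGE rather than INSERT matters because only MERGE's OUTPUT clause can reference source columns, letting you capture the old and new Ids side by side:

CREATE TABLE #EventIdMap (OldId INT PRIMARY KEY, NewId INT);

-- A MERGE whose ON predicate never matches inserts every staged row and
-- records each source Id next to the IDENTITY value the target assigned.
MERGE dbo.Events AS tgt
USING staging.Events AS src
ON 1 = 0
WHEN NOT MATCHED THEN
INSERT (event_time, payload)
VALUES (src.event_time, src.payload)
OUTPUT src.Id, inserted.Id INTO #EventIdMap (OldId, NewId);

-- Rewrite the staged FKs to the new Ids before loading the child rows.
UPDATE c
SET c.EventId = m.NewId
FROM staging.EventDetails AS c
JOIN #EventIdMap AS m ON m.OldId = c.EventId;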

Connecting to an Oracle 11g database from WebSphere Message Broker 6

I am trying a simple insert command from WebSphere Message Broker 6, from a Compute node.
The data source name provided in the odbc.ini file on the Message Broker is specified in the node properties of the Compute node, and I have written the following ESQL code.
DECLARE TABLENAME CHARACTER 'MYTABLE';
DECLARE MYVALUE CHARACTER 'TESTVALUE';
-- braces substitute the variable's value as the table name; MYCOLUMN is a placeholder column
INSERT INTO Database.{TABLENAME} (MYCOLUMN) VALUES (MYVALUE);
The connection URL is provided in tnsnames.ora. The URL is a cluster URL, which points to 3 database instances.
When I run the query, I get an exception in the trace saying that the table or view does not exist.
But when I connect to the DB using any of the three direct URLs, I am able to see the table.
Note: the database is Oracle 11g.
Can anyone explain what is happening?
The problem was that my application was using the same DSN used by my broker, and when the broker was created, the username and password provided pointed to a different schema, which does not have the tables for my application.
The solution was to create a new DSN and use mqsisetdbparams to point it at the correct schema.