I have three servers which are assigned different IDs
Server1 = ID1
Server2 = ID2
Server3 = ID3
I want to get data for Server1 from Server2 and Server3, so I have to send the same ID (ID1) to the other two servers to fetch the data. I want to achieve this using a Role Link (because in the future there could be more servers). The issue is that I can't give the same Destination Party value to multiple Party Identifiers. How can I tackle this?
I have solved this problem. In Role Links you indeed can't define two different parties with the same identifiers. I handled it by using multiple receive locations with the Schedule Adapter, each generating the same SourceServerID but a different DestinationServerID. It is working like a charm now.
My application uses Apache Ignite persistent storage. For some weeks I ran the application storing the persistent data in let's say "c:\db1". Later I ran the same application with persistent data in c:\db2. The data was only stored on this one server node.
Is there a way to merge the data from db1 folder to db2 folder?
No, you can't, at least not easily.
The best way would be to start two nodes in separate clusters, one using c:\db1 and one using c:\db2, and stream the data from one to the other:
Start the two clusters
Start a helper application that will load the data
In the application, start two client nodes with different configurations - one connected to the first cluster, one connected to the second
Transfer the data roughly like this (code is not tested!)
IgniteCache<Object, Object> cache1 = client1.cache("mycache");
IgniteCache<Object, Object> cache2 = client2.cache("mycache");
// Scan every entry in the first cluster's cache and copy it into the second
try (QueryCursor<Cache.Entry<Object, Object>> cursor = cache1.query(new ScanQuery<>())) {
    for (Cache.Entry<Object, Object> e : cursor) {
        cache2.put(e.getKey(), e.getValue()); // put() belongs on the cache, not the client node
    }
}
I want to move records from one server to another server based on certain criteria.
Note
I don't want to move all the records; I will apply a filter to select the ones I want.
I have to move the records on a daily basis.
The target server is not on the local network.
If I create a stored procedure using a linked server, it is possible to move the records, but I don't think that is a good way. Is there any other way to solve this?
UPDATE
What about the BCP utility?
I don't know much about it. Does it perform well when exporting and importing bulk data?
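For reference, a typical bcp round-trip looks like the following (the server, database, and table names are placeholders; -T uses Windows authentication and -n keeps SQL Server's native format, which is generally the fastest choice for server-to-server copies):

```sh
bcp SourceDb.dbo.SomeTable out sometable.dat -S Server1 -T -n
bcp TargetDb.dbo.SomeTable in  sometable.dat -S Server2 -T -n
```

bcp is well suited to bulk volumes, but note that a plain `out` exports the whole table; to export only filtered rows you would use `queryout` with a SELECT instead.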
Do the following:
1. Create a linked server
2. Write the query
Let Server1 have IP 172.16.9.13 and Server2 have IP 172.16.9.14.
You want to move data from Server1 to Server2, so first add Server2 as a linked server on Server1.
Then write a query like this (a linked-server table needs the four-part name server.database.schema.table):
INSERT INTO [172.16.9.14].SomeDb.dbo.SomeTable
SELECT * FROM SomeDb.dbo.SomeTable WHERE isactive = 1
How to create a linked server: http://sqlserverplanet.com/dba/how-to-add-a-linked-server
You can add a linked server and create a procedure that moves records according to your filter criteria, then schedule a SQL Agent job to run it daily.
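As a sketch of that approach (the linked-server name, databases, and filter are placeholders carried over from the answer above — not tested):

```sql
-- Procedure on the source server; a daily SQL Agent job would EXEC it
CREATE PROCEDURE dbo.MoveFilteredRecords
AS
BEGIN
    SET NOCOUNT ON;
    -- [172.16.9.14] is the linked server added on the source side
    INSERT INTO [172.16.9.14].TargetDb.dbo.SomeTable
    SELECT *
    FROM SourceDb.dbo.SomeTable
    WHERE isactive = 1;  -- your filter criteria go here
END
```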
Second option:
Create a web service that fetches the data from the source server and inserts it into the target server, and run it daily using a timer or Hangfire.
The scenario is this: We have an application that is deployed to a number of locations. Each application is using a local-instance of SQL Server (2016) with exactly the same DB schema.
The reason for local-instance DBs is that the servers on which the application is deployed will not have internet access - most of the time.
We were now considering keeping the same solution but adding an SSIS package that can be executed at a later time - when the server is connected to the internet.
For now let's assume that once the package is executed - no further DB changes will be made to the local instance.
All tables (except for many-to-many intermediary) have an INT IDENTITY primary key.
What I need is for the table PKs to be auto-generated on the Azure DB, which I'm currently doing by setting the mapping property for the PK. However, I also need all FKs pointing to that PK to pick up the newly generated ID instead of pointing to the original ID.
Since data would be coming from multiple deployments, I want to keep all data as new entries - without updating / deleting existent records.
Could someone kindly explain or link me to some resource that handles this situation?
[ For future references I'm considering using UNIQUEIDENTIFIER instead of INT, but this is what we have atm... ]
Edit: Added example
So for instance, one of the tables would be Events. Each DB deployment will have at least one Event, with Ids starting from 1. When consolidating the data into the Azure DB, I'd like their original Id to be ignored and an auto-generated Id assigned by the Azure DB. That part is OK. But then I need all FKs pointing to EventId to point to the new Id, so instead of e.g. 1 they'd get the new Id according to the Azure DB (e.g. 3).
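The remapping the example describes can be sketched independently of SSIS. Below is a minimal, hypothetical Python sketch (the Events/EventId names come from the example; the in-memory lists stand in for the Azure tables, and len()+1 stands in for IDENTITY): insert the parent rows first, record the old-Id → new-Id mapping, then rewrite each child row's FK through that map.

```python
def consolidate(central_events, central_children, local_events, local_children):
    """Append one deployment's rows to the central tables, letting the
    central side assign new Event ids, and rewrite each child's EventId FK."""
    id_map = {}  # old local EventId -> newly assigned central EventId
    for event in local_events:
        new_id = len(central_events) + 1        # stand-in for INT IDENTITY
        id_map[event["Id"]] = new_id
        central_events.append({**event, "Id": new_id})
    for child in local_children:
        # Rewrite the FK through the map so it points at the new parent row
        central_children.append({**child, "EventId": id_map[child["EventId"]]})
    return id_map

events = [{"Id": 1, "Name": "existing central event"}]
registrations = []
mapping = consolidate(events, registrations,
                      [{"Id": 1, "Name": "local event"}],
                      [{"EventId": 1, "Attendee": "someone"}])
print(mapping)            # → {1: 2}
print(registrations[0])   # → {'EventId': 2, 'Attendee': 'someone'}
```

In SSIS terms this means loading parent tables before child tables and capturing the generated keys (e.g. via OUTPUT or a lookup on a business key) so the FK columns can be remapped before the child load.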
Server1: Prod, hosting DB1
Server2: Dev hosting DB2
Is there a way to query databases living on two different servers with the same SELECT query? I need to bring all the new rows from Prod to Dev, using a query like the one below. I will be using SQL Server DTS (the import/export data utility) to do this.
Insert into Dev.db1.table1
Select *
from Prod.db1.table1
where table1.PK not in (Select table1.PK from Dev.db1.table1)
Creating a linked server is the only approach that I am aware of for this. If you are simply trying to add all new rows from prod to dev, why not just create a backup of that one particular table, pull it into the dev environment, and then write the query against the same server and database?
Granted, this is a one-time use and a pain for recurring instances, but if it is a one-time thing then I would recommend doing that. Otherwise, make a linked server between the two.
To back up a single table, use the SQL Server Import and Export Wizard: select the prod database as your data source, select only the prod table as your source table, and make a new table in the dev environment as your destination table.
This should get you what you are looking for.
You say you're using DTS; the modern equivalent would be SSIS.
Typically you'd use a data flow task in an SSIS package to pull all the information from the live system into a staging table on the target, then load it from there. This is a pretty standard operation when data warehousing.
There are plenty of different approaches to save you copying all the data across (e.g. use a timestamp, use rowversion, use Change Data Capture, make use of the fact your primary key only ever gets bigger, etc. etc.) Or you could just do what you want with a lookup flow directly in SSIS...
The best approach will depend on many things: how much data you've got, what data transfer speed you have between the servers, your key types, etc.
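For example, the "primary key only ever gets bigger" approach from the list above can be expressed as a single query (object names taken from the question; this assumes the key is an ever-increasing IDENTITY and rows are never inserted out of order):

```sql
-- Copy only rows whose key is beyond the highest key Dev already has
INSERT INTO Dev.db1.dbo.table1
SELECT p.*
FROM Prod.db1.dbo.table1 AS p
WHERE p.PK > (SELECT ISNULL(MAX(PK), 0) FROM Dev.db1.dbo.table1);
```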
When your servers are all in one Active Directory domain and you use Windows Authentication, all you need is an account with the proper rights on all the databases!
You can then simply reference all tables like server.database.schema.table
For example:
insert into server1.db1.dbo.tblData1 (...)
select ... from server2.db2.dbo.tblData2;
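Applied to the asker's query, a set-difference insert might look like the following (names as in the question, with the dbo schema assumed; NOT EXISTS is used instead of NOT IN, which silently matches nothing when the subquery can return NULLs):

```sql
INSERT INTO Dev.db1.dbo.table1
SELECT p.*
FROM Prod.db1.dbo.table1 AS p
WHERE NOT EXISTS (
    SELECT 1
    FROM Dev.db1.dbo.table1 AS d
    WHERE d.PK = p.PK
);
```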
I'm developing an SSIS package which needs to pull data from ServerA based upon data in a DB table on ServerB. I'm DBadmin on ServerB, but very limited access to ServerA.
The query I need to execute, ideally using an OleDB source component, is like this:
SELECT
Blah
FROM ServerA.Database1.dbo.TableA
WHERE Something IN (SELECT foo FROM ServerB.Database2.dbo.TableB)
Is it possible to do this, or do I need to take a different approach?
EDIT: I need to run this query every ten minutes 24x7, and I don't want to pull the data from ServerA as there are millions of rows in the table, which is part of a business critical app which cannot be overloaded.
Pull from serverA into a third data source, pull from serverB into the same source, then use that source to apply your where clause.
OR, pull from serverA to serverB and apply your where clause on serverB.
In response to comment,
OR, pull from serverB to serverA and apply your where clause on serverA. That's really where you want the join done, not in your SSIS package.
Also, see if you can limit most of the rows from serverA based on some criteria independent of B, or limit the amount of data from serverB that needs to exist on A, as a rough cut before handing it to the SSIS package.
I'm also wondering if they could link serverB to serverA for you...
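The "pull from serverB to serverA" suggestion above could be sketched like this (the staging table name is assumed): an SSIS data flow first copies the small filter table from ServerB into a staging table on ServerA, and the OLE DB source then runs a purely local query, so the millions of rows in TableA never leave ServerA unfiltered:

```sql
-- On ServerA, before each ten-minute run: refresh the staged copy of ServerB's filter table
TRUNCATE TABLE Database1.dbo.Staging_TableB;
-- (an SSIS data flow task loads ServerB.Database2.dbo.TableB into it here)

-- OLE DB source query: the IN now resolves entirely on ServerA
SELECT Blah
FROM Database1.dbo.TableA
WHERE Something IN (SELECT foo FROM Database1.dbo.Staging_TableB);
```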