I came across a good SSIS and SQL problem. How do I create an SSIS package that executes a SQL query against one database (the query results are "INSERT INTO" statements), grabs those results, and then runs the generated INSERT statements against a table in another database on a different server? (The first query runs in one database and the second query runs in a different database.)
First of all, SQL queries execute on the database engine, not in Management Studio. Management Studio is a visual interface for configuring, managing, and administering databases.
To me it doesn't sound like there's any problem here at all. Create one connection manager for each database. Then create two Execute SQL Tasks, put your INSERT statements in them, and use the connection managers you've created.
Run the first query in an Execute SQL task and store the results in a string variable.
Then run a second Execute SQL task, using the variable as your SQL command.
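To illustrate, here is a minimal sketch of the kind of generator query the first Execute SQL Task could run (with its Result Set set to Single row) so that the whole batch of INSERT statements lands in one string variable; dbo.SourceTable, dbo.TargetTable, Id and Name are hypothetical names used only to show the pattern:
-- Hypothetical generator query: build the INSERT statements as one string.
-- Store the returned value in an SSIS string variable, then use that variable
-- as the SQL source of the second Execute SQL Task.
DECLARE @stmts NVARCHAR(MAX) = N'';

SELECT @stmts = @stmts
    + N'INSERT INTO dbo.TargetTable (Id, Name) VALUES ('
    + CAST(Id AS NVARCHAR(20)) + N', N'''
    + REPLACE(Name, N'''', N'''''') + N''');' + NCHAR(13) + NCHAR(10)
FROM dbo.SourceTable;

SELECT @stmts AS InsertStatements;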
Create Connection Managers for each of the databases you need, your source and both (or all) destinations.
Create a Data Flow Task.
In your OLEDB Source, execute your SELECT statement.
Pump the results into a MultiCast Transformation. This allows you to send the exact same result set to multiple destinations.
Create a Destination for each table you want to write to, and connect them to the MultiCast.
Bob's your uncle.
Hello, I have to take the result of a SELECT from a database at one IP address and insert it into an identical database at a completely different IP address. Below is the query; how do I make it move the data from one server to the other?
SQL code:
/* Insert into the database with the same name, into the same table, address: 172.16.50.98 */
Insert into
/* select from database address: 172.16.50.96 */
SELECT IdUtente,Longitudine,Latitudine,Stato,DataCreazione
FROM Quote.dbo.Marcatura
where DataCreazione>'2019-01-08 18:37:28.773'
A linked server / OPENQUERY is the way to achieve this. Have a look at this:
including parameters in OPENQUERY
If the data being imported isn't large and this won't be a recurring task, a linked server is probably the better option. Creating one through the SSMS GUI is easier if you haven't done this before, but an example of creating one with the SP_ADDLINKEDSERVER stored procedure in T-SQL is below. If your account doesn't have access to the other server, the SP_ADDLINKEDSRVLOGIN stored procedure will need to be used to configure the linked server with an account that has the appropriate permissions on the source server, as well as on the database and any referenced objects.
While the linked server (four-part name) syntax is simpler and easier to read, I'd strongly recommend doing the insert with OPENQUERY instead when only one linked server is involved. OPENQUERY executes the SQL on the source server, applying any filters there and returning only the necessary rows, whereas the four-part name syntax returns all the rows before performing the filtering. You can read more about the differences between the two here. You indicated the database name is the same on both servers, and this assumes the same for the table and schema names as well; make sure to update these accordingly if they differ.
If a large volume of data will be imported or if this will be a regular process, creating an SSIS package and setting it to run as a SQL Agent job will be the better approach. If you choose to go this route there are a number of things to consider, but the links below will help you get started. SQL Server Data Tools (SSDT) is where the packages can be developed. While not necessary, executing the packages from the SSIS catalog, SSISDB, is much more beneficial than just using the file system. Either an OLE DB or SQL Server Destination can be used since the table being loaded is on SQL Server; however, a SQL Server Destination can only be used for a local database.
Linked Server:
--Create linked server
--SQL product name and SQLNCLI11 provider for SQL Server
EXEC [MASTER].DBO.SP_ADDLINKEDSERVER @server = N'MyLinkedServer', @srvproduct = N'SQL',
@provider = N'SQLNCLI11', @datasrc = N'ServerIPAddress'
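--Optional sketch: if your account lacks access on the source server, map a login for the
--linked server with SP_ADDLINKEDSRVLOGIN. The remote user name and password below are
--placeholders; substitute an account with SELECT permission on the source table.
EXEC [MASTER].DBO.SP_ADDLINKEDSRVLOGIN @rmtsrvname = N'MyLinkedServer', @useself = N'FALSE',
@locallogin = NULL, @rmtuser = N'RemoteUserName', @rmtpassword = N'RemotePassword'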
--OPENQUERY insert
INSERT INTO Quote.dbo.Marcatura (IdUtente, Longitudine, Latitudine, Stato, DataCreazione)
SELECT
IdUtente,
Longitudine,
Latitudine,
Stato,
DataCreazione
FROM OPENQUERY(MyLinkedServer, '
SELECT
IdUtente,
Longitudine,
Latitudine,
Stato,
DataCreazione
FROM Quote.dbo.Marcatura')
SSIS:
SSIS
SSDT
SSISDB
Execute SQL Task
Data Flow Task
OLE DB Source
OLE DB Destination
SQL Server Destination
SQL Server Agent SSIS Packages
SSIS solution
I think this can be achieved with a very simple SSIS package:
Create two OLEDB connection managers, one for each server
Add a Data Flow Task
Inside the Data Flow Task add an OLEDB Source and an OLEDB Destination
In the OLEDB Source (the 172.16.50.96 connection manager, the server you are selecting from) choose SQL command as the data access mode and use the following command:
SELECT IdUtente,Longitudine,Latitudine,Stato,DataCreazione
FROM Quote.dbo.Marcatura
where DataCreazione >'2019-01-08 18:37:28.773'
Map the source columns to the OLEDB Destination (the 172.16.50.98 connection manager, the server you are inserting into)
Helpful links
Extract Data by Using the OLE DB Source
SSIS OLEDB Source to OLE DB Destination example
I have 11 databases, each containing a table with user details, i.e. all employee details. Each table has a column "Status" (1 for active and 0 for inactive). I have a regular task of updating the "Status" column to 0 or 1 for particular employees, and for that I have to go into each database, find the user table, and update it. I have to repeat the same task for every database, and it consumes a lot of time.
If I could get a short query or procedure that I only have to run once and that performs all the updates, it would be a great help.
I see a couple of possible options.
You could build an SSIS package to connect to each database and do the necessary updates provided the criteria of which employees to update and what to update them to could be found within the database or some external source such as a text file.
Alternatively, you could use SQLCMD mode in SQL Server Management Studio and then, within your SQL script, use the :CONNECT command to switch to each server and database, something like this...
:CONNECT Server1
USE Database1
--put your update SQL script
:CONNECT Server2
USE Database2
--put your update SQL script
...
These links provide some further information on using SQLCMD mode...
Connecting to multiple servers in a Query Window using SQLCMD
SQL Server SQLCMD Basics
Noel
As you mentioned, you have 11 databases.
Problem: first of all, this is a poor approach to database design.
What really happens: when you use multiple databases and need to check every database, the server has to connect to each database again and again, which takes far more time than simply switching between tables, because of the connection handling.
Solution: in your case, the only real option is to connect to the different databases in a loop and run the query once for every database, as sketched below.
Suggestion: you should keep all the data in the same database; you can use an extra column in the tables to tie the rows to their different entities.
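A minimal sketch of that loop using dynamic SQL from a single connection is below, assuming the 11 databases all live on the same instance (if they are on different servers, the :CONNECT approach above or linked servers would be needed). The database names, the dbo.UserDetails table, the Status column, and the EmployeeId filter are placeholders taken from the description, so adjust them to your actual schema:
-- Loop over the databases that contain the user table and run the same UPDATE in each.
-- Database list, table name, column names and the employee filter are placeholders.
DECLARE @db SYSNAME, @sql NVARCHAR(MAX);

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE name IN (N'HRDb01', N'HRDb02', N'HRDb03');  -- list your 11 databases here

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'UPDATE ' + QUOTENAME(@db) + N'.dbo.UserDetails
                 SET Status = 0
                 WHERE EmployeeId IN (101, 102);';  -- employees to deactivate (placeholder)
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM db_cursor INTO @db;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;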
SQL Server 2016 introduced parallel inserts into existing tables. As long as the target table doesn't have certain features, SQL Server can insert the data in parallel streams.
Using the syntax of
INSERT [tableName] WITH (TABLOCK)
SELECT .....
The data will be inserted in parallel. I have seen great improvements using this: what would normally take about 10 minutes to insert 120 million rows takes only about 30 seconds with this feature.
How can I use this new setting in SSIS? I am using Visual Studio 2015 Enterprise and SQL Server 2016.
I know I can use an Execute SQL Task and put something like this in it, but what I'm wondering is how to use this in the Data Flow. Is there a specific connection manager or setting in the destination adapter?
In SQL Server 2016 you need to meet two conditions to get parallelism for insert operations. The first is that the database compatibility level must be set to 130, so before you run your SSIS package check your database's compatibility level:
SELECT name, compatibility_level FROM sys.databases
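If the level is lower than 130, it can be raised with ALTER DATABASE; the database name below is a placeholder:
--Raise the compatibility level to 130 (SQL Server 2016); [YourDatabase] is a placeholder
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 130;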
The second condition is using the TABLOCK hint. In an SSIS package you can apply the TABLOCK hint in the OLE DB Destination (the Table lock option of the fast load data access mode).
No, you cannot utilize parallel inserts from a Data Flow Task.
According to Microsoft's description of parallel inserts in SQL Server 2016, they can be used only when executing an INSERT ... SELECT ... statement, with some limitations. A Data Flow prepares the data in the memory of the SSIS server, and an OLE DB or ODBC destination then loads it with INSERT or INSERT BULK statements, which are not subject to parallel operation.
I have a single SQL query that I need to run against ~25 different databases, each residing on a separate SQL Server on the network. The query will run from a single central SQL Server Management Studio session, and the 24 other SQL Server instances are linked. I have the query I need, and I tested that it works; however the goal is to create a script that queries each of the 25 separate SQL instances.
Instead of writing the query out 25 separate times within the script, I'm wondering if there's a way to utilize the single block of code to query each of the linked instances using an array, variables, DO/WHILE, a function or any other method.
Here's the query:
SET NOCOUNT ON
PRINT 'local server';
SELECT isc.ini_schema_name[Device], count(*) [Count]
FROM pharos.dbo.edi_pharos_stations eps, pharos.dbo.ini_schemas isc
WHERE eps.ini_schema_id = isc.ini_schema_id
GROUP BY isc.ini_schema_id, isc.ini_schema_name
For the purpose of this example, if I were to utilize the less-graceful approach of writing out the block of code 24 more times, this would be the next query in the script (to query SQL server hostnamed pharos90-2008).
PRINT 'Pharos90-2008';
SELECT isc.ini_schema_name[Device], count(*) [Count]
FROM [pharos90-2008].pharos.dbo.edi_pharos_stations eps, [pharos90-2008].pharos.dbo.ini_schemas isc
WHERE eps.ini_schema_id = isc.ini_schema_id
GROUP BY isc.ini_schema_id, isc.ini_schema_name
As you can see, the query / code is exactly the same except for the fact that it is referencing a separate linked SQL Server (query being run from a central SQL Server Management Studio).
The ultimate goal is to output the queried data for each SQL instance to a single .txt file; format being, print the name of each particular SQL server followed by the corresponding queried data.
Any advice as to how one would accomplish such a task?
Thanks in advance.
Well, one way would be to create a cursor to iterate all of your linked servers. (You can find linked servers like this...)
SELECT * FROM sys.servers WHERE is_linked = 1
Then, you could use the undocumented sp_MSForEachDB stored procedure to run a dynamic version of your query (changing the server on each iteration) on each database in the cursor's current server. If you search for sp_MSForEachDB you can find plenty of information. But here's one link to save time.
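A minimal sketch of that cursor, reusing the query from the question, might look like the following. It assumes every linked server exposes the pharos database with the same schema, and it simply prints each server name before running its query, which is roughly the per-server output the .txt file needs:
-- Iterate the linked servers and run the same query against each one via dynamic SQL.
-- Assumes each linked server has the pharos database with the same tables.
SET NOCOUNT ON;

DECLARE @srv SYSNAME, @sql NVARCHAR(MAX);

DECLARE srv_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.servers WHERE is_linked = 1;

OPEN srv_cursor;
FETCH NEXT FROM srv_cursor INTO @srv;

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @srv;
    SET @sql = N'SELECT isc.ini_schema_name [Device], COUNT(*) [Count]
                 FROM ' + QUOTENAME(@srv) + N'.pharos.dbo.edi_pharos_stations eps,
                      ' + QUOTENAME(@srv) + N'.pharos.dbo.ini_schemas isc
                 WHERE eps.ini_schema_id = isc.ini_schema_id
                 GROUP BY isc.ini_schema_id, isc.ini_schema_name;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM srv_cursor INTO @srv;
END

CLOSE srv_cursor;
DEALLOCATE srv_cursor;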
I have a database connection to database DB1. The only thing I can do is execute T-SQL statements, including calling stored procedures. I want to export a specific table (or even specific rows of a specific table) to my local database. As you can read above, the databases are on different servers, meaning no direct connection is possible. Therefore the question: is it possible to write a query that returns another query to execute on my local server and get the data? Also note that the table contains BLOBs. Thanks.
If you have SQL Server Management Studio, you can use the data import function on your local database to get the data. It works as long as you have Read/Select access on the tables you are trying to copy.
If you have Visual Studio you can use the database tools in there to move data between two servers as long as you can connect to both from your workstation.
Needs Ultimate or Premium though:
http://msdn.microsoft.com/en-us/library/dd193261.aspx
RedGate has some useful tools too:
http://www.red-gate.com/products/sql-development/sql-compare/features
Maybe you should ask at https://dba.stackexchange.com/ instead.
If you can log in to the remote database (where you can only issue T-SQL), you can create a linked server on your local server that points to the remote one and use it directly in queries, like:
select * from [LinkedServerName].[DatabaseName].[SchemaName].[TableName]
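Putting it together, a minimal sketch of setting up such a linked server and pulling the remote table into a local one is below. The server address, credentials, table and column names are placeholders; test with your BLOB columns, since very large values can be slow or problematic across a linked server:
--Sketch: create a linked server to the remote host and copy rows locally.
--Server address, login, table and filter column are placeholders.
EXEC sp_addlinkedserver @server = N'RemoteSrv', @srvproduct = N'',
     @provider = N'SQLNCLI11', @datasrc = N'remote.server.address';

EXEC sp_addlinkedsrvlogin @rmtsrvname = N'RemoteSrv', @useself = N'FALSE',
     @locallogin = NULL, @rmtuser = N'RemoteUser', @rmtpassword = N'RemotePassword';

--Copy the rows (including BLOB columns) into a new local table.
SELECT *
INTO dbo.MyLocalCopy
FROM [RemoteSrv].[DB1].[dbo].[SomeTable]
WHERE SomeFilterColumn = 1;  --optional row filter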