Update/Rewrite only a single row in SQL Server

I want to have only one row in my table. I am populating a single table on Server2 from Server1 using SSIS, copying the execution end time that I get from a table on Server1 into a table on Server2. Here is the query I use to populate it:
SELECT TOP 1 EXEC_END_TIME
FROM cr_stat_execution cse
WHERE cse.EXEC_NAME = 'ETL'
ORDER BY exec_end_time DESC
The problem:
I only want to update Server2's table with the most recent record, overwriting the previous day's data. I don't want to keep history in my table. How can I modify my query so it populates only the most recent data from Server1 to Server2, without accumulating rows of history?

Your package will have two connection managers. I'll assume OLE DB here, but ADO.NET or ODBC will also work; assume further that they point to Server1 and Server2 and are named as such.
The pattern you are going to have is two Execute SQL Tasks in serial. The first Execute SQL Task will ask Server1 what the value of EXEC_END_TIME is and store that to an SSIS variable.
Create a variable named LastExec, set its data type to DateTime, and initialize it to something like 1900-01-01 at midnight.
In the Execute SQL Task, change the result type from None to Single Row, and then on the Result Set tab map the 0th element to the variable.
See also How to set a variable using a scalar-valued tSQL function in SSIS SQL Task
The second Execute SQL Task will run an UPDATE statement as Panagiotis describes, and the "magic" is using the SSIS variable in the query.
UPDATE ThatTable SET ThatField=?
The ? is the placeholder for OLE DB and ODBC connection manager queries. The difference is that OLE DB parameter ordinals are 0-based while ODBC ordinals are 1-based. ADO.NET uses named parameters, so the query from the original comment would work as-is.
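For illustration, assuming a hypothetical target table dbo.EtlStatus with a LastExecEndTime column (these names are placeholders, not from the question), the same update looks like this under each parameter style:

```sql
-- OLE DB / ODBC connection managers: positional placeholder,
-- mapped on the Parameter Mapping tab (ordinal 0 for OLE DB, 1 for ODBC)
UPDATE dbo.EtlStatus SET LastExecEndTime = ?;

-- ADO.NET connection manager: named parameter mapped to the SSIS variable
UPDATE dbo.EtlStatus SET LastExecEndTime = @LastExec;
```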
In the Parameter Mapping tab, you will need to associate the SSIS variable with ordinal position 0 of your query. Sample screen Logging information when executing remotely SSIS from ASP.NET
Get the first Execute SQL Task working first. Once you can query the database and assign the result to a variable, getting the next one (set as a successor) in line should be a snap.
Data flow approach
You could continue with your data flow approach. Instead of an OLE DB Destination, you'd use an OLE DB Command. Use the same query, and this time map the source column to the zeroth element.
That approach is overkill, which is why I did not advocate for it.

Related

SSIS Automation

I came across a good SSIS and SQL problem. How do I create an SSIS package that executes a SQL query, grabs the results of that query (the results are INSERT INTO statements), and then runs those generated INSERT statements against another SQL database, updating a table on another server? (The first query runs in one database and the second query runs in a different database.)
First of all, SQL queries execute on the database, not in Management Studio. Management Studio is a visual interface for configuring, managing and administering databases.
To me it doesn't sound like there's any problem here at all. Create one connection manager for each DB. Then create two Execute SQL Tasks, put your INSERT statements in them, and use the connection managers you've created.
Run the first query in an Execute SQL task and store the results in a string variable.
Then run a second Execute SQL task, using the variable as your SQL command.
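As a sketch of the first task's query, assuming a hypothetical source table and columns, you could concatenate the generated INSERT statements into one string (STRING_AGG requires SQL Server 2017+; on older versions, FOR XML PATH or a cursor can do the same):

```sql
-- Hypothetical: build all INSERT statements as one string result,
-- which the Execute SQL Task stores in a String variable (Single row result set)
SELECT STRING_AGG(
         'INSERT INTO dbo.TargetTable (Id, Name) VALUES ('
         + CAST(Id AS varchar(10)) + ', '''
         + REPLACE(Name, '''', '''''') + ''');',
         CHAR(10)) AS InsertBatch
FROM dbo.SourceTable;
```

The second Execute SQL Task then uses that variable as its SQL command, via SQLSourceType = Variable.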
Create Connection Managers for each of the databases you need, your source and both (or all) destinations.
Create a Data Flow Task.
In your OLEDB Source, execute your SELECT statement.
Pump the results into a MultiCast Transformation. This allows you to send the exact same result set to multiple destinations.
Create a Destination for each table you want to write to, and connect them to the MultiCast.
Bob's your uncle.

Updating multiple tables data from different databases having same column name

I have 11 databases, each containing a table with user details, i.e. all employee details. That table has a column "Status" (1 for active and 0 for inactive). I have a regular task of updating the "Status" column to 0 or 1 for given employees, and to do it I have to go through every database, open the User table, and make the update. I have to repeat the same task for every database, and it consumes a lot of time.
If there is a short query or procedure that I could run once to do all the updates in one go, it would be a great help.
I see a couple of possible options.
You could build an SSIS package to connect to each database and do the necessary updates, provided the criteria for which employees to update (and what to update them to) can be found within the database or some external source such as a text file.
Alternatively, you could use SQLCMD mode in SQL Server Management Studio and then, within your SQL script, use the :CONNECT command to switch to each server and database, something like this...
:CONNECT Server1
USE Database1
--put your update SQL script
:CONNECT Server2
USE Database2
--put your update SQL script
...
These links provide some further information on using SQLCMD mode...
Connecting to multiple servers in a Query Window using SQLCMD
SQL Server SQLCMD Basics
Noel
As you mentioned, you have 11 databases.
Problem: First, this is a poor approach to database design.
What really happens: When you use multiple databases and need to check every one, the server has to connect to each database again and again, which takes much more time than switching between tables, because of connection handling.
Solution: In your case, the only option is to connect to the different databases in a loop and run the query against each DB.
Suggestion: You should keep all the data in the same database; you can use an extra column in the tables to map rows to the different entities.
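If the 11 databases live on one server and share the same table structure, a dynamic SQL loop is one way to sketch this. The database name filter, table name, and columns below are assumptions you would adapt:

```sql
-- Sketch: build one UPDATE per database and run them all in one batch.
-- Assumes each database has dbo.UserDetails (EmployeeID, Status)
-- and that #EmployeesToUpdate holds the IDs to change.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'UPDATE ' + QUOTENAME(name)
             + N'.dbo.UserDetails SET Status = 0
                WHERE EmployeeID IN (SELECT EmployeeID FROM #EmployeesToUpdate);'
FROM sys.databases
WHERE name LIKE N'EmployeeDB%';  -- adjust to match your 11 databases

EXEC sys.sp_executesql @sql;
```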

Why does my SSIS package take so long to execute?

I am fairly new to creating SSIS packages. I have the following SQL Server 2008 table called BanqueDetailHistoryRef, containing 10,922,583 rows.
I want to extract the rows that were inserted on a specific date (or dates) and insert them into a table on another server. I am trying to achieve this with an SSIS package whose data flow looks like this:
OLEDB Source (the table with the 10Million+ records) --> Lookup --> OLEDB Destination
On the Lookup I have set:
Now, the query (specified on the Lookup transformation):
SELECT * FROM BanqueDetailHistoryRef WHERE ValueDate = '2014-01-06';
takes around 1 second to run in SQL Server Management Studio, but the described SSIS package takes a really long time to run (about an hour).
What is causing this? Is this the right way to achieve my desired result?
You didn't show how your OLE DB Source component is set up, but judging by the table names I'd guess you are loading all 10 million+ rows in the OLE DB Source and then using the Lookup to filter down to only the ones you need. That is needlessly slow.
You can remove the Lookup completely and filter the rows in OLEDB source using the same query you had in the Lookup.
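For example, set the OLE DB Source's data access mode to "SQL command" with a parameterized query, and map the ? to an SSIS date variable on the Parameters page (the dbo schema is assumed):

```sql
-- OLE DB Source "SQL command" text; the ? placeholder is mapped
-- to an SSIS datetime variable holding the date you want
SELECT *
FROM dbo.BanqueDetailHistoryRef
WHERE ValueDate = ?;
```

This way only the matching rows ever enter the data flow, so the package should run in roughly the time the query takes in Management Studio.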

Running SQL against the source in Informatica and exporting results to a target table

Using Informatica designer, is there a way to run a complex SELECT statement as-is against a source database, and workflow it into a target table?
For example, SQL Server Integration Services makes it really easy to create source/target connections, paste your source SQL, and map the results to the target table. When the package is run, SQL runs against the source, and results are dumped into the target.
Yes, it is possible.
You need to create a source definition with ports that reflect the columns of your SELECT statement, and override the generated query with yours by putting it into the SQL Query field of the Source Qualifier transformation.
Then link the ports to the target, generate the session and workflow, configure the connections, and you're done.
Yes, it is possible. Informatica generates a query of its own for the columns you propagate from the Source Qualifier, and you can override this query at two levels:
1. Mapping level: in the Source Qualifier you can override it and validate the query.
2. Session level: in the session you can use the SQL Query attribute for your source to override the default query, and validate it as well. At the session level you can also pass this query as a parameter, giving you the flexibility to change the source query whenever you desire.
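As a sketch, the override in the SQL Query field is just ordinary SQL whose column list must line up with the ports connected out of the Source Qualifier (the table and column names below are made up for illustration):

```sql
-- Hypothetical Source Qualifier override: the column count, order,
-- and types must match the Source Qualifier's connected ports
SELECT c.CUSTOMER_ID,
       c.CUSTOMER_NAME,
       SUM(o.ORDER_AMOUNT) AS TOTAL_AMOUNT
FROM   CUSTOMERS c
JOIN   ORDERS o ON o.CUSTOMER_ID = c.CUSTOMER_ID
GROUP BY c.CUSTOMER_ID, c.CUSTOMER_NAME;
```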

How to query SQL table for each row returned by SSIS Excel Source component

If I have an SSIS Excel Source reading data from an Excel workbook, is there any way I can query a SQL table for each row returned?
Example:
The Excel Source returns EmployeeID, EmployeeName and Department. For each row returned by the Excel Source, is it possible to query a SQL Server table for the EmployeeCategory where the EmployeeID in the Excel row matches the EmployeeID in the EmployeeCategory SQL table?
Such that I end up with a result set in the format
EmployeeID(Excel), EmployeeName(Excel), Department(Excel) , EmployeeCategory(SQL Table)
Certainly.
This is exactly what the SSIS Lookup Transformation does.
Have a look here: https://www.youtube.com/watch?v=WMfuXYsWZqM
I should add that, for performance reasons, you really don't want to perform a new query for each record in your data flow - that would be extremely slow. What the SSIS Lookup Transformation does instead is create an in-memory cache of your SQL Server table and then look up the EmployeeCategory for each record in memory, which is blazingly fast.
If you are sure you want to actually perform a query per record, the Lookup Transformation has a property that lets it run with no cache. If you want even more flexibility (possibly at an even greater performance cost), you can use the OLE DB Command Transformation, which lets you specify the query to execute for each record in your data flow.
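Conceptually, the cached Lookup produces the same result set as this join, with the Excel rows standing in for a table (the EmployeeCategory table's column names are assumed from the question):

```sql
-- Conceptual equivalent of the Lookup: Excel rows joined to the SQL table
SELECT e.EmployeeID,
       e.EmployeeName,
       e.Department,
       c.EmployeeCategory
FROM   ExcelRows e                 -- stands in for the Excel Source output
JOIN   dbo.EmployeeCategory c
       ON c.EmployeeID = e.EmployeeID;
```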