I want to execute a select statement in SSIS, but this select statement takes a parameter from another component, and the columns of this select statement must be usable as inputs to other components.
For example:
select id from myTable where name = (column from a previous component).
And the "id" content of the above select statement should be a column that future components can use.
If i add an "OLE DB Command" component, it allows me to refer to other components as inputs, but I cannot generate an output from it. It seems OLE DB Command component is only used for update/insert statements?
Any ideas on how to do it?
Actually, this is a case for Lookup. It seems you want to do a lookup by name and return id. Pretty simple. Here's how I created an example of this:
Drag a Data Flow Task onto the design surface. Double-click it to switch to it.
Create a Connection Manager for my database
Drag onto the design surface:
an OLE DB Source
A Lookup Transform
An OLE DB Destination
Connect the Source to the Lookup to the Destination. It's the "Lookup Match Output" we want going to the destination. See figure 1.
Configure the source. My source table just had id and name columns.
Configure the lookup
General tab: Use an OLE DB Connection
Connection tab: specify the same connection, but use the Lookup table. My lookup table was just id and name, but name was made unique, so it makes better sense as a lookup column.
On the columns tab, configure name to map to name, with "id" as an output. Configure the lookup operation to be "add new column", and name that column "lookupId". See figure 2.
Ignore the other two tabs
Configure the output to take all three columns. See figure 3.
That's all. For each row from the source, the name column will be used to match the name column of the lookup table. Each match will contribute its id column as the new lookupId column. All three columns will proceed to the destination.
Figure 1:
Figure 2:
Figure 3:
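For reference, the Lookup in this data flow does roughly what the following T-SQL join would do (a sketch; dbo.SourceTable and dbo.LookupTable are assumed names standing in for the tables above):

SELECT src.id, src.name, lkp.id AS lookupId
FROM dbo.SourceTable AS src
INNER JOIN dbo.LookupTable AS lkp
    ON lkp.name = src.name;   -- match by name, bring back id as lookupId

Only matching rows travel down the Lookup Match Output, which is why an inner join is the right analogy here.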
Related
I'm trying to create an SSIS package which will copy data from Table-A on Server-A into Table-B on Server-B. To avoid duplicates, I want to update the records which already exist in Table-B if there are any changes to their data. Please let me know what would be the best approach for this.
Thank You
You should use the SSIS Sort Transformation to remove duplicate records.
Drag a Sort Transformation onto the data flow and connect your Flat File Source to it. Double-click the Sort Transformation and choose the columns to sort by. Also check the checkbox "Remove rows with duplicate sort values", then click OK.
The SSIS Sort Transformation is useful when you need to sort data into a certain order, and its duplicate-removal option covers this case.
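For comparison, the "Remove rows with duplicate sort values" option behaves roughly like keeping one row per sort-key combination, as in this T-SQL sketch (the table and column names are assumptions):

WITH Ranked AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY KeyColumn1, KeyColumn2
                              ORDER BY KeyColumn1) AS rn
    FROM dbo.SourceRows
)
SELECT *
FROM Ranked
WHERE rn = 1;   -- keep one row per duplicate group, as the Sort transform does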
Create a regular data flow with two components - an OLE DB Source and an OLE DB Destination (I assume you are using MS SQL Server; in general, use whatever components your company uses to connect to the DB).
Since there are two DBs, create two connection managers, each pointing to its own DB. Point the OLE DB Source to the first connection manager, configured to point to the source of the data, and the OLE DB Destination to the second connection manager, configured to point to the destination DB.
Now point the OLE DB Source to the source table in the source DB and leave all the fields intact. Connect the source and destination components with the green arrow originally coming out of the source component. Now point the OLE DB Destination to the destination table in the target DB. Double-click the destination, go to Mappings and make sure they are correct (SSIS tries to map automatically using strict name matching); otherwise (in case the names are different) connect the source and destination fields manually. That's it - you just don't provide mappings for the fields which cannot be accommodated by the destination table.
Alternatively, you can leave out the columns you don't need at the source component - double-click it, go to Columns and uncheck the columns you don't need.
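If you prefer, the same column trimming can be done by switching the OLE DB Source's data access mode to a SQL command and listing only the columns you need (a sketch; the table and column names are assumptions):

SELECT KeyColumn, Column1, Column2
FROM dbo.TableA;   -- only these columns enter the data flow buffer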
I'm working on an SSIS project that pulls data from Excel and loads it into an Oracle database every month. I plan to pull data from the Excel file and load it into an Oracle stage table. I will be using a merge statement because the data that gets loaded each month is a rolling 12-month list and the data can change, so I need to be able to INSERT when records don't match or UPDATE when they do. My control flow looks like this: Truncate Stage Table (to clear out the table from the last package run) ---> Data Flow from Excel to Stage Table ---> Merge to Target Table in Oracle.
My problem is that the data in the source Excel file doesn't have any unique columns to select a primary key or a composite key, as it is a possibility (although very unlikely) that a new record could have the exact same information. I am unable to utilize the "generated always as identity" because my SSIS package needs to truncate at the beginning of each job to clear out the Stage Table. This would generate the same ID numbers in the new load and create problems in the Target Table.
Any suggestions as to how I can get around this problem?
Welcome to SO and ETL. Instead of using a staging table, in SSIS use two sources: Excel file and existing production table. Sort both inputs and then perform a merge join on the unique identifier. From there, use a derived column transformation to add a new column called 'Action' which will mark a row as either an INSERT/UPDATE/DELETE based on whether the join key is NULL. So:
NULL from file means DELETE (not in file, in database)
NULL from database means INSERT (in file, not in database)
Not NULL for both means UPDATE (in file, in database)
From there, use a Conditional Split to send rows to either an OLE DB Destination (INSERT) or an OLE DB Command (UPDATE or DELETE). You can now remove the staging environment and the MERGE command from your process. This has the added benefit of removing the ETL load from the SQL Server, assuming SSIS is running on a separate server.
Note: The sort transformation has the option to remove duplicates.
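Conceptually, the Merge Join plus Derived Column step classifies rows the way this T-SQL sketch does (the BusinessKey column and the table names are assumptions):

SELECT
    COALESCE(f.BusinessKey, t.BusinessKey) AS BusinessKey,
    CASE
        WHEN f.BusinessKey IS NULL THEN 'DELETE'   -- not in file, in database
        WHEN t.BusinessKey IS NULL THEN 'INSERT'   -- in file, not in database
        ELSE 'UPDATE'                              -- in file, in database
    END AS [Action]
FROM FileRows AS f
FULL OUTER JOIN ProductionTable AS t
    ON t.BusinessKey = f.BusinessKey;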
SQL Server 2012: using an SSIS package, how do we validate the source records for duplicates before inserting?
Our source file is a .csv. We are seeing duplicate records loaded into the staging table.
At present, we are following a manual process of loading data.
How do we validate the source file data against the destination table before loading, and load only the valid records? Duplicate records can be loaded not only because the source file itself contains duplicates, but also because the same file may be reloaded into the staging table.
We do not truncate the staging table; we keep the existing records as they are.
Second question: how do we pick up the name of the source file and pass it into the load? Possibly by having a derived column "FileName" which gets loaded along with the raw data into the staging table.
The typical load pattern I use in this case is:
Prepare a staging table that matches the source file
In SSIS, run an Execute SQL Task with TRUNCATE TABLE StagingTable; (which clears it out)
Then, run a data flow task that loads the entire data file into the staging table
Lastly, merge the staging table into the final table.
I prefer to do this last step in a SQL Task also:
INSERT INTO FinalTable
    (PrimaryKey, Column1, Column2, Column3)
SELECT
    PrimaryKey, Column1, Column2, Column3
FROM StagingTable SRC
WHERE NOT EXISTS (
    SELECT * FROM FinalTable TGT WHERE TGT.PrimaryKey = SRC.PrimaryKey
);
If you prefer a graphical UI, and you don't mind the extra network traffic and slower processing time, you can do the same type of merge operation using Lookups. You can even use the SCD component, but I strongly discourage its use.
Whether you do it in T-SQL or the UI, you need a key that can be used to uniquely identify the records (referred to as PrimaryKey in my example). If you don't have such a key, there is no way to 'deduplicate'.
Note that in this example you have a 'real' staging table whose only purpose is to get the data file into the database. Then you have a final table that contains the final, consistent result.
Also note that this pattern only adds new rows - it will not update existing rows if they change in the data file.
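If you do need updates as well, the same staging pattern can feed a MERGE instead of the INSERT above (a sketch using the same placeholder table and column names):

MERGE FinalTable AS TGT
USING StagingTable AS SRC
    ON TGT.PrimaryKey = SRC.PrimaryKey
WHEN MATCHED THEN
    UPDATE SET TGT.Column1 = SRC.Column1,
               TGT.Column2 = SRC.Column2,
               TGT.Column3 = SRC.Column3
WHEN NOT MATCHED THEN
    INSERT (PrimaryKey, Column1, Column2, Column3)
    VALUES (SRC.PrimaryKey, SRC.Column1, SRC.Column2, SRC.Column3);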
Given your exact scenario (of loading the same file again), I would first check whether that file's data has already been loaded into the staging table. If you do that, you don't have to worry about checking for duplicates at the record level.
How are you setting the connection to the file? For most of the data loads I have dealt with, I designed a Foreach Loop Container where the file name/path gets populated into a user variable. As you said, you could then use a Derived Column transform to add a new column which gets its value from that variable. If you don't have the file name in a user variable, you could use an Expression Task in the control flow to populate it.
To cover your exact requirement, I would use the above step to populate the file name in the table. You could even normalize it out to a separate table instead of storing the long file name for every data record. Once you have all the file names in the database, you could just have an Execute SQL Task at the beginning to check whether that file name is already in the database.
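As a sketch, that initial Execute SQL Task could run something like the following, with the result mapped to an SSIS variable that a precedence constraint then checks (dbo.LoadedFiles is an assumed logging table):

SELECT COUNT(*) AS AlreadyLoaded
FROM dbo.LoadedFiles
WHERE FileName = ?;   -- ? is the OLE DB parameter mapped to the variable holding the current file name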
Two years back I faced the same problem when importing TSV files.
I tried many other solutions, but the best I could come up with was a C# script for this validation.
What I did as a solution:
Create one C# DataTable object in memory with primary key constraints, like:
DataColumn[] keyColumn = new DataColumn[30];               // one slot per key column
keyColumn[intJ] = dtFilterdPK.Columns["Column name"];      // assign each key column inside a loop over intJ
dtFilterdPK.PrimaryKey = keyColumn;                        // the primary key constraint is what rejects duplicate rows
Then try to add rows from your CSV to this DataTable one by one.
Whenever a row duplicates the primary key, an exception will be raised.
Handle it in a try...catch block and log the duplication error as per your logging requirements.
Skip those error records so they are not imported into the DataTable object.
At last, import the validated rows into your table as a bulk import,
Like:
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(myConnection))
{
bulkCopy.DestinationTableName = "Your DB Table Name"; //Assign table name
bulkCopy.WriteToServer(dtToBeImport); //Write into Actual table.
}
Hope this will help you.
I'm using SSIS to import data from one DB to another existing DB. Some columns in the destination tables do not exist in the source tables. It seems the Import & Export Wizard only allows me to select unmapped columns from the source and match them with these new columns in the destination. I'd like to be able to just provide one piece of data to import into all rows of these new columns.
I would like to use the GUI if possible because I'm not skilled at writing scripts. Thanks!
In SSIS, you can add a "derived column" component that will add columns to the buffer rows with the value you want (either a string or an expression).
I don't believe this is possible in the GUI. However, it would be a simple script after the data is loaded with SSIS:
UPDATE table SET newcolumn = 'new value';
If you need to filter the rows, just add
WHERE column = 'value' ...
You could change your source to a select query and list out the columns along with the static value you want to map.
SELECT SOURCECOLUMN_1,SOURCECOLUMN_2,....,SOURCECOLUMN_N,'VALUE' AS DESTINATIONCOLUMN FROM Source_Table
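One note on that approach: if the destination column is not a string type, cast the literal so the column's metadata matches, for example (a sketch using the same placeholder names):

SELECT SOURCECOLUMN_1, SOURCECOLUMN_2, CAST(0 AS int) AS DESTINATIONCOLUMN
FROM Source_Table;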
My original thought was that you could use the query right in the Import & Export Wizard. You can obviously do a lot more if you go in and edit the package, but it sounded like you didn't have much experience with that. Here is how you would do this in the wizard.
After you have selected your source and destination databases, you can choose "Specify Table Copy or Query". Select the "Write a query to specify the data to transfer" option.
On the next screen, enter the query, listing out all of the columns and adding in your static columns.
On the next screen, you will need to select the destination table, or it will default to creating a new table named Query. You should be able to choose it from the drop-down. As long as you aliased your extra columns with the same names as the destination columns, they should map correctly. You can go in and edit the mappings here if needed.
You can then save off the SSIS package, and it will source from the query.
Alternatively, if you already have the SSIS package created without the extra columns, you can go into the Data Flow and change the data access mode in the OLE DB Source to be a SQL command instead of a table or view, and add your query there.
You can then go into the properties of the OLE DB Destination in the data flow and map the new column. You could also add a derived column, as @DominicGoulet suggested, by adding a Derived Column transform, putting your static information there, and then mapping it. If you want to see that solution too, let me know.
I am currently using a poorly designed database and a slow pipeline, so I decided to copy a small portion of the database (15 tables) and only bring over some of the data from those tables - for example, I want to bring over only the rows that have a certain id.
But this is not a one-time move; I need everything that gets added to the old database to also be added to the new one on an hourly basis. My research has led me to SSIS, which may have a way of accomplishing this, but I have found no clear examples of how it is done, if in fact it is possible. Thanks in advance.
Yes, it is possible. You can schedule your SSIS package through SQL Agent to run on an hourly basis.
For a table, you can drag a Data Flow Task onto the control flow. Inside the DFT, you need to place an OLE DB Source component, a Lookup, a Data Conversion (if the types are different in the source and target tables) and an OLE DB Destination.
OLE DB Source component: create a variable of type String and, in its expression, write your SQL query to fetch the data based on the ID (see the sketch after these steps). Now use this variable in the source component.
Lookup: select the target table as the lookup reference and join the primary key columns of the source and target tables. It acts similar to an inner join query. After joining on the primary keys of both tables, select the columns which you need from the source.
OLE DB Destination: simply select your target table and map the columns from the Lookup No Match Output. If you need to update existing rows with values from the source, use the Lookup Match Output, connect it to an OLE DB Command and write the update query.
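As a sketch of those two pieces of SQL (all table and column names here are assumptions), the query held in the string variable for the OLE DB Source might look like the first statement, and the parameterized statement for the OLE DB Command on the Match Output like the second:

SELECT KeyColumn, Column1, Column2
FROM dbo.OldTable
WHERE SomeId = 42;   -- or build the filter value into the variable's expression

UPDATE dbo.NewTable
SET Column1 = ?, Column2 = ?   -- ? parameters map to the incoming data flow columns
WHERE KeyColumn = ?;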
Please go through the link and the SO post below:
Scheduling of SSIS package