Dynamic SSIS package - SQL

I need to create an SSIS package for importing files from an FTP server into a table on a data lake. The problem is that the files can have different columns. For example, File1 can have columns A,B,C,D,E, the next file only A,B,C, the next A,B,C,D,E,F, and so on. What is the best way to approach this problem?
I'm talking about different columns in the source files but the same destination table.
Thanks

Look into Biml, which dynamically creates packages based on metadata.

1. Add an Object variable.
2. Add a data flow.
3. Use a script component (configured as a Source) to get the column names; a sketch follows this list. You might want to add a Conditional Split or Derived Column after it to monkey with the output.
4. Load the records into a Recordset destination (use the Object variable created in step 1).
5. Add a Foreach Loop and iterate through the ADO object.
6. Add a variable to store each iteration's value.
7. Create a variable that builds the SQL to pull your data set, e.g. "SELECT * FROM [" + @[User::TableName] + "]" (where User::TableName is the variable created in step 6).
8. Set your source to use that variable.
9. Make sure everything has DelayValidation set, as this is all dynamic.
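For step 3, a minimal sketch of such a script component, written inside the SSIS-generated ScriptMain, assuming a read-only User::FilePath variable and a single output column named ColumnName (both names are illustrative, and the file is assumed to be comma-delimited with a header row):

public override void CreateNewOutputRows()
{
    // Read only the header line of the current file
    using (var reader = new System.IO.StreamReader(Variables.FilePath))
    {
        string header = reader.ReadLine();
        foreach (string col in header.Split(','))
        {
            // Emit one row per source column name
            Output0Buffer.AddRow();
            Output0Buffer.ColumnName = col.Trim();
        }
    }
}

The Recordset destination in step 4 then captures these rows into the Object variable that the Foreach Loop enumerates.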

Using SSIS Package, How to validate the source records for duplicate before inserting?

We are on SQL Server 2012. Our source file is a .csv, and we are seeing duplicate records loaded into the staging table.
At present we follow a manual process for loading the data.
How do we validate the source file data against the destination table before loading, and load only the valid records? Duplicates are possible not only because the source file itself contains duplicate records, but also because the same file may be reloaded into the staging table.
We do not truncate the staging table; we keep the existing records as-is.
Second question: how do we pick up the name of the source file and pass it into the load? Possibly via a derived column named "FileName" that gets loaded along with the raw data into the staging table.
The typical load pattern I use in this case is:
1. Prepare a staging table that matches the source file.
2. In SSIS, run an Execute SQL Task with TRUNCATE TABLE StagingTable; (which clears it out).
3. Run a Data Flow Task that loads the entire data file into the staging table.
4. Merge the staging table into the final table.
I prefer to do this last step in an Execute SQL Task as well:
INSERT INTO FinalTable (PrimaryKey, Column1, Column2, Column3)
SELECT PrimaryKey, Column1, Column2, Column3
FROM StagingTable SRC
WHERE NOT EXISTS (
    SELECT * FROM FinalTable TGT WHERE TGT.PrimaryKey = SRC.PrimaryKey
);
If you prefer a graphical UI, and you don't mind the extra network traffic and slower processing time, you can do the same type of merge operation using Lookup transformations. You can even use the SCD component, but I strongly discourage its use.
Whether you do it in T-SQL or the UI, you need a key that can be used to uniquely identify the records (referred to as PrimaryKey in my example). If you don't have such a key, there is no way to deduplicate.
Note that in this example you have a 'real' staging table whose only purpose is to get the data file into the database. Then you have a final table that contains the final, consistent result.
Also note that this pattern only adds new rows; it will not update existing rows if they change in the data file.
Given your exact scenario (of loading the same file again), I would first check whether the file has already been loaded into the staging table. If you do that, you don't have to worry about checking for duplicates at the record level.
How are you setting the connection to the file? In most of the data loads I have dealt with, I designed a Foreach Loop Container where the file name/path is populated into a user variable. As you said, you could then use a Derived Column transform to add a new column that gets its value from that variable. If you don't have the file name in a user variable, you could use an Expression Task in the control flow to populate it.
To cover your exact requirement, I would use the step above to populate the file name in the table. You could even normalize it into a separate table instead of storing the long file name on every data record. Once you have all the file names in the database, you could just run a check at the beginning of the package to see if that file name is already there.
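A minimal sketch of that check done in a Script Task (an Execute SQL Task with a single-row result set works just as well), assuming a hypothetical dbo.LoadedFiles log table, User::FileName and User::AlreadyLoaded variables, and an ADO.NET connection manager named StagingDb; all of these names are illustrative:

public void Main()
{
    string fileName = Dts.Variables["User::FileName"].Value.ToString();

    // StagingDb is a hypothetical ADO.NET connection manager
    var conn = (System.Data.SqlClient.SqlConnection)
        Dts.Connections["StagingDb"].AcquireConnection(Dts.Transaction);

    var cmd = new System.Data.SqlClient.SqlCommand(
        "SELECT COUNT(*) FROM dbo.LoadedFiles WHERE FileName = @f", conn);
    cmd.Parameters.AddWithValue("@f", fileName);

    // Flag the package so downstream tasks can skip an already-loaded file
    Dts.Variables["User::AlreadyLoaded"].Value = (int)cmd.ExecuteScalar() > 0;

    Dts.TaskResult = (int)ScriptResults.Success;
}

A precedence constraint with the expression !@[User::AlreadyLoaded] can then guard the data flow.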
Two years back I faced the same problem importing TSV files.
I tried many other solutions, but the best I could design was a C# script to do this validation.
What I did as a solution:
Create a C# DataTable object in memory with a primary key constraint, like:
DataColumn[] keyColumns = new DataColumn[1];
keyColumns[0] = dtFiltered.Columns["ColumnName"]; // your key column(s)
dtFiltered.PrimaryKey = keyColumns; // without this the constraint is never enforced
Then try to add the rows from your CSV to this DataTable one by one.
Whenever a row duplicates an existing primary key value, DataTable.Rows.Add throws a ConstraintException.
Handle that error in a try...catch block and log the duplicate per your logging requirements.
Those error records are skipped, so they never land in the DataTable object.
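A minimal sketch of that loop, where csvRows stands in for whatever row collection your CSV parser produced and LogDuplicate is a hypothetical logging helper:

foreach (DataRow srcRow in csvRows)
{
    try
    {
        // Throws System.Data.ConstraintException on a duplicate primary key
        dtFiltered.Rows.Add(srcRow.ItemArray);
    }
    catch (System.Data.ConstraintException ex)
    {
        LogDuplicate(srcRow, ex); // record the duplicate, then skip the row
    }
}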
At last, bulk-import the filtered DataTable into your table with SqlBulkCopy, like:
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(myConnection))
{
    bulkCopy.DestinationTableName = "dbo.YourTable"; // assign the destination table name
    bulkCopy.WriteToServer(dtToBeImport);            // write into the actual table
}
Hope this will help you.

Inserting multiple text files

I have 4 different text files, each with a different name and different columns, placed in one folder. I want these 4 files to be inserted or updated into 4 different existing tables. How can I read these 4 files dynamically in SSIS and insert each one into its respective table?
Well, you need to use a Data Flow Task to move data from a Flat File Source to a table destination (an OLE DB Destination, perhaps). Are the columns in your files delimited in any way, for example with (;), (|) or something like that? If they are, you can create a Flat File Connection Manager and set it to split the columns. If not, you might need to use the fixed-width option to separate your columns. To use the OLE DB Destination, you will need to create an OLE DB connection manager pointing to the table in your database. I could help you more if I had more information about the files you want to read the data from.
EDIT
Well, you said at the start you were working with 4 files and 4 tables, so you can create 4 Flat File Sources with 4 OLE DB Destinations as well (one of each per flat file). If I understood you correctly, these 4 files may or may not exist yet. So if you know the names the files will get, change the package property DelayValidation to True, and then create a connection with a sample text file. You do this so the file path gets saved. The tables, in my opinion, DO need to exist. Now, when you said:
i want to load all the text files into each different existing table whenever there is files inside the folder.
The only way I know to do something like that is to schedule the execution of your package at a certain time with a SQL Server Agent job. Please let me know if this is what you were looking for.

SSIS Script Component - only to change variables

I have a series of tasks that are very similar:
SELECT a,b FROM c
Look up a value in another table and change the value in column b.
Save the new value back to c and, if there is no match, send the result on to an error table.
That part is pretty straightforward and illustrated here:
Source ==> Lookup =match=> SQL Update command
=No match=> SQL Save Error command
(Hope you understand what I mean - but it works!)
I now have to repeat this a number of times, with my source SQL changing each time. So what I want to do is insert a Script Component in front of the Source and set my User::Sql variable like:
Variables.Sql = "SELECT d, e FROM f"
All of the above is contained in a data flow. Once I have created one, I can copy it, change only the Sql variable in the script, and it should all work.
My problem is: when I insert the Script Component, it asks me whether it is a Source, Destination, or Transformation script. And by only setting the variable, it does not produce any rows for output and cannot connect to my Source.
Anyone know how to make that work?
(I have simplified the above. I actually want to update multiple variables and use them in my Source, Lookup, and error update as well; therefore it is not simpler just to change the SQL script in the initial Source! But being able to do the above, I will be able to achieve what I want :-))
You should set the variable containing the SQL query in the control flow, before you execute the data flow.
Then you need to use that variable as an expression in your data flow. You can parameterize the query used in the Lookup or any other parts of your data flow.
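A minimal sketch of a Script Task that does this in the control flow, assuming User::Sql is listed in the task's ReadWriteVariables:

public void Main()
{
    // Set the query before the data flow runs; the OLE DB Source then
    // reads it via its "SQL command from variable" data access mode
    Dts.Variables["User::Sql"].Value = "SELECT d, e FROM f";
    Dts.TaskResult = (int)ScriptResults.Success;
}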
If your data flows really all have the same structure, you could even generate a list of queries and call your Data Flow Task in a loop, avoiding the duplication of identical tasks.

Store filename in variable and create tables with the filename in SSIS

I have a few Excel source files in one folder. I want to pull data from these Excel files and load it into SQL tables.
My problem is: I want to capture the file names one by one, create a SQL table with exactly the same name as each file,
and then load each Excel file into its corresponding table.
Please help me create a package for this.
Jayvee has presented the high-level view, which is good enough! Let me add a bit of detail.
I am assuming that you have a dynamic Excel file connection.
Declare a variable named FileName and assign it the first file name available in the folder.
Place a Foreach Loop Container and double-click it. Specify the Folder and Files properties.
In the same Foreach Loop Editor, go to Variable Mappings. Select the variable from the drop-down list (the same variable we defined in the first step). Set its Index to 0. Click OK.
The remaining work is the same as Jayvee explained. If you run into "Result Set Property Not Set Correctly", I think setting the ResultSet property to SingleRow will do the job.
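A minimal sketch of a Script Task, placed inside the loop, that derives the table name from the current file and hands it to the rest of the package; User::FileName is the loop variable above, and User::TableName is an extra string variable assumed here for illustration:

public void Main()
{
    // User::FileName is populated by the Foreach Loop on each iteration
    string path = Dts.Variables["User::FileName"].Value.ToString();

    // Strip the folder and extension so "C:\In\Sales.xlsx" becomes "Sales"
    string table = System.IO.Path.GetFileNameWithoutExtension(path);

    // An Execute SQL Task can then build its CREATE TABLE / INSERT
    // statements from User::TableName via an expression
    Dts.Variables["User::TableName"].Value = table;
    Dts.TaskResult = (int)ScriptResults.Success;
}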

Import SQL to SQL DB: How can I populate columns that exist in the destination but not the source?

I'm using SSIS to import data from one DB into another existing DB. Some columns in the destination tables do not exist in the source tables. It seems the Import & Export Wizard only allows me to map unmapped source columns to these new destination columns. I'd like to be able to just provide one piece of data to import into all rows of these new columns.
I would like to use the GUI if possible because I'm not skilled at writing scripts. Thanks!
In SSIS, you can add a Derived Column component that will add columns to the buffer rows with the value you want (either a literal value or an expression).
I don't believe this is possible in the GUI. However, it is a simple script to run after the data is loaded with SSIS:
UPDATE TableName SET NewColumn = 'new value';
If you need to filter the rows, just add:
WHERE SomeColumn = 'some value';
You could change your source to a SELECT query and list out the columns along with the static value you want to map.
SELECT SourceColumn_1, SourceColumn_2, ..., SourceColumn_N, 'VALUE' AS DestinationColumn FROM Source_Table
My original thought was that you could use the query right in the Import & Export Wizard. You can obviously do a lot more if you go in and edit the package, but it sounded like you didn't have much experience with that. Here is how you would do this in the wizard.
After you have selected your source and destination databases, you can Specify Table Copy or Query. Select the 'Write a query to specify the data to transfer' option.
On the next screen, enter the query listing all of the source columns and add in your static columns.
On the next screen, you will need to select the destination table, or it will default to creating a new table named Query. You should be able to choose it from the drop-down. As long as you aliased your extra columns with the same names as the destination columns, they should map correctly. You can go in and edit the mappings here if needed.
You can then save off the SSIS package, and it will source from the query.
Alternatively, if you already have the SSIS package created without the extra columns, you can go into the Data Flow and change the Data access mode in the OLE DB Source to SQL command instead of Table or view. Add your query here.
You can then go into the properties of the OLE DB Destination in the data flow and map the new column. You could also add a Derived Column, as #DominicGoulet suggested, by adding a Derived Column task and putting your static information there, and then mapping it. If you want to see that solution too, let me know.