Flat File Destination does not save error rows - SSIS - error-handling

Inside a Foreach Loop I've got the following configuration in my SSIS package:
As you can see, from my source I get some rows with problems. With the data viewer I can see them. In theory the flat file destination should record these values in the file indicated by the connector, but instead the destination file contains only the header and not the values.
On the Staging DB destination, the errors are configured with the "Redirect Rows" option.
So, what could be missing in the configuration?

The flat file destination is probably set to truncate, not append, so the lack of data at the end of the loop is probably an indication that the last file succeeded with no issues.
If you notice, your data viewer is paused, which means those rows have not yet been flushed to the destination file, so both at the end of execution and at the precise moment the screenshot was taken, I'd expect the file to be empty.
The flat file connection has an "Overwrite data in the file" option to change this behavior, but you may need to manually truncate the file before the first loop, otherwise you get yesterday's and today's errors in the one file.
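To see why append mode plus a one-time truncate before the first iteration fixes this, here is a minimal sketch of the same behavior outside SSIS; the file name, header, and rows are made up for the example:

```python
ERROR_FILE = "error_rows.csv"        # hypothetical path; yours comes from the connection manager
HEADER = "id,name,error_code\n"      # hypothetical columns

def run_loop(batches_of_error_rows):
    # Equivalent of manually truncating the file once, before the first
    # iteration, so yesterday's errors don't pile up with today's.
    with open(ERROR_FILE, "w") as f:
        f.write(HEADER)

    for error_rows in batches_of_error_rows:
        # "Overwrite data in the file" unchecked == append mode: every
        # Foreach Loop iteration adds its redirected rows to the same file.
        # In overwrite/truncate mode ("w" instead of "a") only the last
        # iteration would survive -- a header-only file if the last source
        # file had no bad rows.
        with open(ERROR_FILE, "a") as f:
            f.writelines(error_rows)

# Two iterations with errors, last one clean -- with truncate semantics the
# file would end up containing just the header.
run_loop([["1,abc,TRUNCATION\n"], ["2,def,TYPE MISMATCH\n"], []])
```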

Related

SSIS generating csv file without headers

I’m making an SSIS package to create a CSV file using OLE DB Source and Flat File Destination.
When I get the file, it doesn't contain the headers, even though they are clearly defined in the destination.
I've tried all the options related to this:
header rows to skip -1,
column names in the first data row,
column delimiter,
data rows to skip
and even resetting the columns.
Please check the "Column names in the first data row" option in the Flat File Connection Manager.
In case this doesn't work, the setting is not being carried over to your config files. In such cases there are two workarounds:
Edit the .dtsConfig to set "firstcolumnhasnames" to 1; this adds the column names without needing to delete the connection from the package (a sketch of this edit follows below).
Delete the destination connection and recreate it from scratch.
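For the first workaround, a minimal sketch of patching the config file might look like this. The file name and the exact way the property is stored in the XML are assumptions, so adjust the pattern to whatever your own config actually contains:

```python
import re
from pathlib import Path

CONFIG_FILE = Path("MyPackage.dtsConfig")   # hypothetical file name

text = CONFIG_FILE.read_text(encoding="utf-8")

# Assumption: the property value appears right after the property name,
# e.g. ...firstcolumnhasnames...>0< -- tweak the pattern to match your
# config before relying on it.
patched, count = re.subn(
    r"(firstcolumnhasnames[^>]*>)\s*0\s*(<)",
    r"\g<1>1\g<2>",
    text,
    flags=re.IGNORECASE,
)

if count:
    CONFIG_FILE.write_text(patched, encoding="utf-8")
    print(f"updated {count} occurrence(s)")
else:
    print("property not found; check the config layout")
```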

Iterating through folder - handling files that don't fit schema

I have a directory containing multiple xlsx files, and what I want to do is insert the data from those files into a database.
So far I have solved this by using tFileList -> tFileInputExcel -> tPostgresOutput
My problem begins when one of these files doesn't match the defined schema and returns an error, resulting in an interruption of the workflow.
What I need to figure out is whether it's possible to skip that file (moving it to another folder, for instance) and continue iterating over the rest of the files.
If I check the option "Die on error" the process ends and doesn't process the rest of the files.
I would approach this by making your initial input schema on the tFileInputExcel be all strings.
After reading the file I would then validate the schema using a tSchemaComplianceCheck set to "Use another schema for compliance check".
You should be able to then connect a reject link from the tSchemaComplianceCheck to a tFileCopy configured to move the file to a new directory (if you want it to move it then just tick "Remove source file").
Here's a quick example:
With the following set as the other schema for the compliance check (notice how it now checks that id and age are Integers):
And then to move the file:
Your main flow from the tSchemaComplianceCheck can carry on using just strings if you are inserting into a database. You might want to use a tConvertType to change things back to the correct data types afterwards if you are doing any processing that requires proper data types, or if you are using your tPostgresOutput component to create the table as well.
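Purely for illustration, here is roughly the same logic outside Talend in Python: read each workbook as plain strings, apply the stricter check that id and age parse as integers, and move any file that fails to a rejected folder. The folder names, the column order, and the use of openpyxl are all assumptions:

```python
import shutil
from pathlib import Path
from openpyxl import load_workbook   # assumes openpyxl is installed

SOURCE_DIR = Path("incoming")        # hypothetical folders
REJECT_DIR = Path("rejected")

def row_is_compliant(row):
    # Mirror of the "other schema": id and age must parse as integers;
    # everything was read as a plain string first.
    if len(row) < 3:
        return False
    id_value, _name, age = (str(v) if v is not None else "" for v in row[:3])
    try:
        int(id_value)
        int(age)
        return True
    except ValueError:
        return False

REJECT_DIR.mkdir(exist_ok=True)
for xlsx in SOURCE_DIR.glob("*.xlsx"):
    sheet = load_workbook(xlsx, read_only=True).active
    rows = list(sheet.iter_rows(min_row=2, values_only=True))
    if all(row_is_compliant(r) for r in rows):
        print(f"{xlsx.name}: compliant, would be inserted into Postgres")
    else:
        # Equivalent of the reject link feeding a tFileCopy with
        # "Remove source file" ticked.
        shutil.move(str(xlsx), str(REJECT_DIR / xlsx.name))
        print(f"{xlsx.name}: schema mismatch, moved to {REJECT_DIR}")
```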

BIDS package errors on truncate while exporting to flat file

I have a BIDS package. The final "Data Flow Task" exports a SQL table to Flat File. I receive a truncation error on this process. What would cause a truncation error while exporting to flat file? The error was occurring within the "OLE DB" element under the Data Flow tab for the "Data Flow Task".
I have set the column to ignore truncation errors and the export works fine.
I understand truncation errors. I understand why they would happen when you are importing data into a table. I do not understand why this would happen when outputting to a flat file.
This might be occurring for one of several reasons. Please check the steps listed below:
1) Check that the source data types match the destination data types. If they are different, it might throw a truncation error (a short illustration follows after this list).
2) Check whether anything is blocking: you can do this by creating a Data Viewer before the destination and watching the data come through.
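As a plain illustration of point 1 (not SSIS itself): the flat file connection manager defines its own column widths, and a source value longer than the width it was given trips the truncation error even though you are writing out rather than importing. The column names and widths below are made up:

```python
# Hypothetical widths as defined on the flat file connection manager.
flat_file_widths = {"customer_name": 20, "city": 15}

source_rows = [
    {"customer_name": "Acme Corporation International Holdings", "city": "Lisbon"},
    {"customer_name": "Initech", "city": "Porto"},
]

for row in source_rows:
    for column, width in flat_file_widths.items():
        value = row[column]
        if len(value) > width:
            # This is the condition SSIS reports; setting the column to
            # ignore truncation just cuts the value to the defined width
            # instead of failing the data flow.
            print(f"{column}: '{value}' ({len(value)} chars) exceeds width {width}")
```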

Kettle - Read multiple files from a folder

I'm trying to read multiple XML files from a folder, compile all the data they contain (all of them have the same XML structure), and then save that data in a CSV file.
I already have a 'read-files' Transformation with the steps Get File Names and Copy Rows to Result to get all the XML files (it's working: I print a file with all the file names).
Then, I enter in a 'for-each-file' Job which has a Transformation with the Get Rows from Result Step, and then another Job to process those files.
I think I'm losing information between the 'read-files' Transformation and the Transformation in the 'for-each-file' Job which gets all the rows (I print another file with all the file names, but it is empty).
Can you tell me if I'm thinking about this the right way? Do I have to set some variables, or is there some option that is disabled? Thanks.
Here is an example of "How to process a Kettle transformation once per filename"
http://www.timbert.net/doku.php?id=techie:kettle:jobs:processtransonceperfile
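The page above shows the Kettle way. Purely as a sketch of the end result you are after (combining same-structured XML files into one CSV), the equivalent logic in Python would be something like the following, with the folder, the repeating element name, and the field names all assumed:

```python
import csv
import xml.etree.ElementTree as ET
from pathlib import Path

SOURCE_DIR = Path("xml_files")       # hypothetical folder
OUTPUT_CSV = Path("combined.csv")
FIELDS = ["id", "name", "value"]     # hypothetical fields shared by every file

with OUTPUT_CSV.open("w", newline="", encoding="utf-8") as out:
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    # One pass per file name -- the same list the Get Rows from Result step
    # should be handing to the per-file transformation.
    for xml_file in sorted(SOURCE_DIR.glob("*.xml")):
        root = ET.parse(xml_file).getroot()
        for record in root.iter("record"):          # assumed repeating element
            writer.writerow({f: record.findtext(f, default="") for f in FIELDS})
```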

How to remove a header row from a csv file using SSIS and SQL Server 2005

I have an SSIS package written in Visual Studio 2008 that runs a few stored procedures and places the results in files. These files are then sent off to a vendor. All of the files were created with header rows. One of these files should not be sent with a header row.
I'm not sure how to accomplish this.
I was writing my question while I was testing out an answer. I decided, since I hadn't seen the question on StackOverflow, I would just ask it anyway and then answer it with what I found.
Here are the steps I took:
I removed the header row from the base file.
In the Flat File Connection Editor, I unchecked the Column names in the first data row box.
I previewed the file and it looked good.
In the Flat File destination editor, I remapped the columns.
I re-ran the package and the csv file was created without a header row.
Yes, the trick is not to check "Column names in the first data row" in the Flat File Connection Manager Editor.
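Outside SSIS, the same idea is simply that the header is an optional first line; unchecking "Column names in the first data row" is the equivalent of never writing it. A tiny sketch with made-up columns:

```python
import csv

header = ["id", "vendor", "sent_on"]                    # made-up columns
rows = [("1", "Acme", "2024-01-01"), ("2", "Globex", "2024-01-02")]

with open("with_header.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)        # the file every other vendor gets
    writer.writerows(rows)

with open("no_header.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)  # header simply never written
```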