SSIS Flat File Import errors - sql

I have an SSIS job that imports flat file data into my database and also performs a data conversion. Please find a view of the schema:
The issue is that I keep getting errors on the "Violations" field; see below:
[Flat File Source [37]] Error: Data conversion failed. The data
conversion for column "Violations" returned status value 4 and status
text "Text was truncated or one or more characters had no match in the
target code page.".
[Flat File Source [37]] Error: The "Flat File Source.Outputs[Flat File
Source Output].Columns[Violations]" failed because truncation
occurred, and the truncation row disposition on "Flat File
Source.Outputs[Flat File Source Output].Columns[Violations]" specifies
failure on truncation. A truncation error occurred on the specified
object of the specified component.
[Flat File Source [37]] Error: An error occurred while processing file
"C:\Users\XXXX\XXXX\XXXX\XXXX\XXXX\XXXX\XXXX\Food_Inspections.csv"
on data row 25.
In line 25 of the CSV file, this field is over 4000 characters long.
In the data conversion, I currently have the Data Type set to string [DT_STR] with a length of 8000 and code page 65001.
Row delimiter: {LF}
Column delimiter: Semicolon {;}
I have already looked at other suggested solutions, e.g. increasing OutputColumnWidth to 5000, but it did not help. Please advise how to solve this.
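For what it's worth, one quick way to check whether 8000 characters is actually enough is to scan the source file yourself. A minimal sketch, assuming the file is semicolon-delimited with a header row containing "Violations"; the path is a placeholder:

import csv

# Scan the semicolon-delimited export and report the longest "Violations"
# value, so the SSIS column width can be sized with certainty.
path = "Food_Inspections.csv"  # placeholder; point this at the real file

max_len = 0
with open(path, encoding="utf-8", newline="") as f:
    reader = csv.DictReader(f, delimiter=";")
    for row in reader:
        value = row.get("Violations") or ""
        max_len = max(max_len, len(value))

print("Longest Violations value:", max_len, "characters")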

Related

Internal Error: Attempt to output 65872 into a 16-bit field. It will be truncate

I am converting a PDF file to HTML DOM using pdftohtmlex and getting this error:
Internal Error: Attempt to output 65872 into a 16-bit field. It will be truncate and the file may not be useful.

"Error while reading data" error received when uploading CSV file into BigQuery via console UI

I need to upload a CSV file to BigQuery via the UI. After I select the file from my local drive, I specify that BigQuery should automatically detect the schema and run the job. It fails with the following message:
"Error while reading data, error message: CSV table encountered too
many errors, giving up. Rows: 2; errors: 1. Please look into the
errors[] collection for more details."
I have tried removing the comma in the last column and tried changing options in the advanced section, but it always results in the same error.
The error log is not helping me understand where the problem is; this is an example of an error log entry:
2019-04-03 23:03:50.261 CLST Bigquery jobcompleted
bquxjob_6b9eae1_169e6166db0 frank#xxxxxxxxx.nn INVALID_ARGUMENT
and:
"Error while reading data, error message: CSV table encountered too
many errors, giving up. Rows: 2; errors: 1. Please look into the
errors[] collection for more details."
and:
"Error while reading data, error message: Error detected while parsing
row starting at position: 46. Error: Data between close double quote
(") and field separator."
The strange thing is that the sample CSV data contains NO double quotes at all!?
2019-01-02 00:00:00,326,1,,292,0,,294,0,,-28,0,,262,0,,109,0,,372,0,,453,0,,536,0,,136,0,,2609,0,,1450,0,,352,0,,-123,0,,17852,0,,8528,0
2019-01-02 00:02:29,289,1,,402,0,,165,0,,-218,0,,150,0,,90,0,,263,0,,327,0,,275,0,,67,0,,4863,0,,2808,0,,124,0,,454,0,,21880,0,,6410,0
2019-01-02 00:07:29,622,1,,135,0,,228,0,,-147,0,,130,0,,51,0,,381,0,,428,0,,276,0,,67,0,,2672,0,,1623,0,,346,0,,-140,0,,23962,0,,10759,0
2019-01-02 00:12:29,206,1,,118,0,,431,0,,106,0,,133,0,,50,0,,380,0,,426,0,,272,0,,63,0,,1224,0,,740,0,,371,0,,-127,0,,27758,0,,12187,0
2019-01-02 00:17:29,174,1,,119,0,,363,0,,59,0,,157,0,,67,0,,381,0,,426,0,,344,0,,161,0,,923,0,,595,0,,372,0,,-128,0,,22249,0,,9278,0
2019-01-02 00:22:29,175,1,,119,0,,301,0,,7,0,,124,0,,46,0,,382,0,,425,0,,431,0,,339,0,,1622,0,,1344,0,,379,0,,-126,0,,23888,0,,8963,0
I shared an example of a few lines of CSV data. I expect BigQuery to be able to detect the schema and load the data into a new table.
Using the new BigQuery WebUI and your input data, I did the following:
Selected a dataset
Clicked on create a table
Filled in the create table form as follows:
The table was created and I was able to SELECT 6 rows as expected:
SELECT * FROM projectId.datasetId.SO LIMIT 1000
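For reference, roughly the same load can be scripted instead of using the form. A minimal sketch with the google-cloud-bigquery Python client, assuming application default credentials; the table ID mirrors the query above and the local file name is a placeholder:

from google.cloud import bigquery

client = bigquery.Client()  # assumes application default credentials

table_id = "projectId.datasetId.SO"  # same table as the query above

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,       # let BigQuery infer the schema, as in the UI
    skip_leading_rows=0,   # the sample rows have no header line
)

with open("data.csv", "rb") as f:  # placeholder local file holding the rows above
    load_job = client.load_table_from_file(f, table_id, job_config=job_config)

load_job.result()  # waits for completion and raises if parsing fails
print(client.get_table(table_id).num_rows, "rows loaded")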

Importing txt file into Postgres (delimiter problem?)

I have a 700 MB txt file that I want to import into my Postgres database.
However, I keep getting the error below. "Jurs" is the last column header.
Note that the error happened on line 10014, not line 1, so I believe I have created a proper schema for the new table. But I noticed that on line 10014 there is a "\" next to BRUNSVILLE. Is that an issue? I can't figure out what the problem is.
db1=# \COPY harris_2018 FROM 'C:\Users\testu\Downloads\harris_2018\original\
Real_acct_owner\real_acct.txt' with DELIMITER E'\t';
ERROR: missing data for column "jurs"
CONTEXT: COPY harris_2018, line 10014: "0081300000008 2018 STATE OF TEXAS PO BOX 1386
HOUSTON TX 77251-1386 N 0 ..."
Below is a txt file reader.
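The original reader is not shown, so here is an illustrative sketch (not the poster's code): it counts the tab-separated fields on each line and flags rows whose count differs from the header, which is what COPY's "missing data for column" points at. Note that COPY's text format treats a backslash as an escape character, so a stray "\" right before a tab can swallow the delimiter.

# Illustrative diagnostic, assuming the first line of the file is the header row.
path = r"C:\Users\testu\Downloads\harris_2018\original\Real_acct_owner\real_acct.txt"

with open(path, encoding="utf-8", errors="replace") as f:
    header = f.readline().rstrip("\n").split("\t")
    expected = len(header)
    for lineno, line in enumerate(f, start=2):
        fields = line.rstrip("\n").split("\t")
        if len(fields) != expected:
            print(f"line {lineno}: {len(fields)} fields, expected {expected}")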

unable to load csv file from GCS into bigquery

I am unable to load a 500 MB CSV file from Google Cloud Storage into BigQuery; I get this error:
Errors:
Too many errors encountered. (error code: invalid)
Job ID xxxx-xxxx-xxxx:bquijob_59e9ec3a_155fe16096e
Start Time Jul 18, 2016, 6:28:27 PM
End Time Jul 18, 2016, 6:28:28 PM
Destination Table xxxx-xxxx-xxxx:DEV.VIS24_2014_TO_2017
Write Preference Write if empty
Source Format CSV
Delimiter ,
Skip Leading Rows 1
Source URI gs://xxxx-xxxx-xxxx-dev/VIS24 2014 to 2017.csv.gz
I have gzipped the 500 MB CSV file to .csv.gz to upload it to GCS. Please help me solve this issue.
The internal details for your job show that there was an error reading row #1 of your CSV file. You'll need to investigate further, but it could be that you have a header row that doesn't conform to the schema of the rest of the file, so we're trying to parse a string in the header as an integer or boolean or something like that. You can set the skipLeadingRows property to skip such a row.
Other than that, I'd check that the first row of your data matches the schema you're attempting to import with.
Also, the error message you received is unfortunately very unhelpful, so I've filed a bug internally to make the error you received in this case more helpful.
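As a sketch of the same load driven from code rather than the console, assuming the google-cloud-bigquery Python client and default credentials; the URI and destination come from the job details above, and autodetect stands in for whatever explicit schema the real table needs:

from google.cloud import bigquery

client = bigquery.Client()

table_id = "xxxx-xxxx-xxxx.DEV.VIS24_2014_TO_2017"        # destination from the job details
uri = "gs://xxxx-xxxx-xxxx-dev/VIS24 2014 to 2017.csv.gz"  # gzipped CSV source URI

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skipLeadingRows: ignore the header row
    write_disposition=bigquery.WriteDisposition.WRITE_EMPTY,  # "Write if empty"
    autodetect=True,      # or pass an explicit schema instead
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # raises with the detailed per-row errors if the load fails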

Error in reading date time format

I have tried to run a query to copy datetime values from a text file into the database.
It gives the following error:
ERROR: date/time field value out of range: "14-09-2013 00:08:57"
HINT: Perhaps you need a different "datestyle" setting.
CONTEXT: COPY finaltest, line 186, column takendate: "14-09-2013 00:08:57"
But when I tried reading "14-09-2013 15:08:57", it gave no error.
What is the reason that it doesn't read times starting with "00"?
Edit: Using this code to perform the operation:
COPY finaltest(weight,takendate,lineip) from 'H:\\result.txt' with delimiter ','
Data from file looks like this:
29440.86,05-09-2013 00:08:33,005
29500.87,05-09-2013 01:08:33,005
29545.88,05-09-2013 02:08:33,005
29605.89,05-09-2013 03:08:33,005
29665.87,05-09-2013 04:08:33,005
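Following the HINT in the error, these dates are day-first, while the default MDY DateStyle takes "14" as a month, which is out of range. A minimal sketch, assuming psycopg2 and a placeholder connection string, that switches the session to DMY before running the COPY:

import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder connection details

with conn, conn.cursor() as cur:
    # Read day-month-year values such as "14-09-2013 00:08:57" correctly.
    cur.execute("SET datestyle TO 'ISO, DMY'")
    with open(r"H:\result.txt") as f:
        cur.copy_expert(
            "COPY finaltest(weight, takendate, lineip) FROM STDIN WITH (FORMAT csv)",
            f,
        )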