Specify multiple delimiters for Redshift copy command - amazon-s3

Is there a way to specify multiple delimiters for the Redshift COPY command while loading data?
I have a data file in the following format:
1 | ab | cd | ef
2 | gh | ij | kl
I am using a command like this:
COPY MY_TBL
FROM 's3://s3-file-path'
iam_role 'arn:aws:iam::ddfjhgkjdfk'
manifest
IGNOREHEADER 1
gzip delimiter '|';
Fields are separated by | (with surrounding spaces) and records are separated by newlines. How do I copy this data into Redshift? My query above gives me a "delimiter not found" error.

No, delimiters are single characters.
From Data Format Parameters:
Specifies the single ASCII character that is used to separate fields in the input file, such as a pipe character ( | ), a comma ( , ), or a tab ( \t ).
You could import it with a pipe delimiter, then run an UPDATE statement with BTRIM() to strip off the spaces.
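For example, a minimal sketch of that approach, reusing the COPY from the question (the column names col_a .. col_d are hypothetical and assumed to be VARCHAR):
-- Load with the single-character pipe delimiter
COPY MY_TBL
FROM 's3://s3-file-path'
iam_role 'arn:aws:iam::ddfjhgkjdfk'
manifest
IGNOREHEADER 1
gzip delimiter '|';

-- Then strip the padding spaces left around each value
UPDATE MY_TBL
SET col_a = BTRIM(col_a),
    col_b = BTRIM(col_b),
    col_c = BTRIM(col_c),
    col_d = BTRIM(col_d);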

Your error above suggests that something in your data is causing the COPY command to fail. This could be a number of things, from file encoding, to some funky data in there. I've struggled with the "delimiter not found" error recently, which turned out to be the ESCAPE parameter combined with trailing backslashes in my data which prevented my delimiter (\t) from being picked up.
Fortunately, there are a few steps you can take to help you narrow down the issue:
stl_load_errors - This system table contains details on any error logged by Redshift during the COPY operation. This should be able to identify the row number in your data file that is causing the problem.
NOLOAD - will allow you to run your copy command without actually loading any data to Redshift. This performs the COPY ANALYZE operation and will highlight any errors in the stl_load_errors table.
FILLRECORD - This allows Redshift to "fill" any columns that it sees as missing in the input data. This is essentially there to deal with ragged-right data files, but it can be useful in helping to diagnose issues that lead to the "delimiter not found" error. It will let you load your data into Redshift and then query the database to see where your columns start falling out of place.
From the sample you've posted, your setup looks good, but obviously this isn't the entire picture. The options above should help you narrow down the offending row(s) to help resolve the issue.
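For instance, a hedged sketch of those diagnostics, reusing the placeholders from the question above:
-- Most recent load errors, including the file, line number and reason
SELECT query, filename, line_number, colname, err_reason
FROM stl_load_errors
ORDER BY starttime DESC
LIMIT 10;

-- Dry run: NOLOAD validates the files without loading any rows
COPY MY_TBL
FROM 's3://s3-file-path'
iam_role 'arn:aws:iam::ddfjhgkjdfk'
manifest
IGNOREHEADER 1
gzip delimiter '|'
NOLOAD;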

Related

What does this error mean: Required column value for column index: 8 is missing in row starting at position: 0

I'm attempting to upload a CSV file (which is an output from a BCP command) to BigQuery using the gcloud CLI BQ Load command. I have already uploaded a custom schema file. (was having major issues with Autodetect).
One resource suggested this could be a datatype mismatch. However, the table from the SQL DB lists the column as a decimal, so in my schema file I have listed it as FLOAT since decimal is not a supported data type.
I couldn't find any documentation for what the error means and what I can do to resolve it.
What does this error mean? It means, in this context, that a value is REQUIRED for a given column index and one was not found. (By the way, columns are usually 0-indexed, meaning a fault at column index 8 most likely refers to column number 9.)
This can be caused by a myriad of different issues, of which I experienced two.
Incorrectly categorizing NULL columns as NOT NULL. After exporting the schema, in JSON, from SSMS, I needed to clean it up for BQ, and in doing so I assigned IS_NULLABLE:NO to MODE:NULLABLE and IS_NULLABLE:YES to MODE:REQUIRED. These values should've been reversed. This caused the error because there were NULL columns where BQ expected a REQUIRED value.
Using the wrong delimiter. The file I was outputting was not only comma-delimited but also tab-delimited. I was only able to validate this by using the Get Data tool in Excel and importing the data that way, after which I saw the error for tabs inside the cells.
After outputting with a pipe ( | ) delimiter, I was finally able to successfully load the file into BigQuery without any errors.

Use multi-character delimiter in Amazon Redshift COPY command

I am trying to load a data file which has a multi-character delimiter ('|~|') into an Amazon Redshift DB using the COPY command. The Redshift COPY command does not allow multi-character delimiters.
My data looks like this -
John|~|23|~|Los Angeles|~|USA
Jade|~|27|~|New York|~|USA
When I try to use multi-characters in the COPY command I get "COPY delimiter must be a single character;" error.
My COPY command looks like this -
copy test_data from 's3://abcd/testFile'
credentials 'aws_access_key_id=<redacted>;aws_secret_access_key=<redacted>'
delimiter '|~|'
null as '\0'
acceptinvchars
ignoreheader as 1
MAXERROR 1;
I cannot replace or edit the source files since they are very large(>100GB), so I need a solution within the AWS Redshift paradigm.
If you can't edit the source files, and you can't use a multi-character delimiter, then use | as the delimiter and add additional (fake) columns that will be loaded with ~.
You can then either ignore these columns, or use CREATE TABLE AS to copy the data to a new table but without those columns.
Or, use CREATE VIEW to make a version of that table without the fake columns.
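For example, a hedged sketch of that approach (table and column names other than test_data are hypothetical; splitting the sample rows on | yields four real fields with a '~' between each pair):
-- Staging table with three extra columns to absorb the '~' characters
CREATE TABLE test_data_staging (
    name    VARCHAR(100),
    junk1   CHAR(1),
    age     VARCHAR(10),
    junk2   CHAR(1),
    city    VARCHAR(100),
    junk3   CHAR(1),
    country VARCHAR(100)
);

copy test_data_staging from 's3://abcd/testFile'
credentials 'aws_access_key_id=<redacted>;aws_secret_access_key=<redacted>'
delimiter '|'
null as '\0'
acceptinvchars
ignoreheader as 1
MAXERROR 1;

-- Then drop the fake columns with a view (or CREATE TABLE ... AS)
CREATE VIEW test_data AS
SELECT name, age, city, country
FROM test_data_staging;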

"UNLOAD" data tables from AWS Redshift and make them readable as CSV

I am currently trying to move several data tables in my current AWS instance's Redshift database to a new database in a different AWS instance (for background, my company has acquired a new one and we need to consolidate to one instance of AWS).
I am using the UNLOAD command below on a table, and I plan on making that table a CSV, then uploading that file to the destination AWS account's S3 and using the COPY command to finish moving the table.
unload ('select * from table1')
to 's3://destination_folder'
CREDENTIALS 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
ADDQUOTES
DELIMITER AS ','
PARALLEL OFF;
My issue is that when I change the file type to .csv and open the file, I get inconsistencies with the data. There are areas where many rows are skipped, and on some rows, after the expected columns end, I get additional columns with the value "f" for unknown reasons. Any help on how I could achieve this transfer would be greatly appreciated.
EDIT 1: It looks like fields with quotes are having the quotes removed. Additionally, fields containing commas are being split at those commas. I've identified some fields with both quotes and commas, and they are throwing everything off. Would the ADDQUOTES clause I have apply to the entire field regardless of whether there are quotes and commas within the field?
By default the unloaded file will have a .txt extension and quoted fields. Try opening it with Excel and then saving it as a csv file.
Refer to https://help.xero.com/Q_ConvertTXT
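Note that data unloaded with ADDQUOTES has to be reloaded with REMOVEQUOTES on the COPY side, otherwise embedded commas will still split fields and shift your columns. If your cluster supports it, a hedged alternative sketch is to unload and reload in CSV format, which quotes and escapes embedded commas and quotation marks for you:
-- UNLOAD as CSV instead of ADDQUOTES/DELIMITER (reuses the placeholders from the question)
unload ('select * from table1')
to 's3://destination_folder'
CREDENTIALS 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
FORMAT AS CSV
PARALLEL OFF;

-- On the destination cluster, load with the matching CSV option
copy table1
from 's3://destination_folder'
CREDENTIALS 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
CSV;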

Pentaho | Issue with CSV file to Table output

I am working in Pentaho spoon. I have a requirement to load CSV file data into one table.
I have used , as the delimiter in the CSV file. I can see the correct data in the preview of the CSV file input step. But when I try to insert the data via the Table output step, I get a data truncation error.
This is because I have the below kind of values in one of my columns.
"2,ABC Squere".
As you can see, I have "," in my column value, so it is truncating and throwing an error. How do I solve this problem?
I want to upload data into the table with these kinds of values.
Here is one way of doing it
test.csv
--------
colA,colB,colC
ABC,"2,ABC Squere",test
The key in the CSV file input step settings is to use " as the enclosure and , as the delimiter.
You can also change the delimiter, say to a pipe, while keeping the data as quoted text like "1,Name"; this will then be treated as a single column.

Configuring delimiter for Hive MR Jobs

Is there any way to configure the delimiter for Hive MR Jobs??
The default delimiter used internally by Hive is the "hive delimiter" (\001). My use case is to configure the delimiter so that I can use any delimiter as per the requirement. In Hadoop there is a property, "mapred.textoutputformat.separator", which sets the key-value separator to the value specified for that property. Is there any such way to configure the delimiter in Hive? I searched a lot but didn't find any useful links. Please help me.
As of hive-0.11.0, you can write
INSERT OVERWRITE LOCAL DIRECTORY '...'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
SELECT ...
See HIVE-3682 for the complete syntax.
You can try that, keeping in mind that the clause goes after the INSERT OVERWRITE DIRECTORY line and before the SELECT:
ROW FORMAT DELIMITED
FIELDS TERMINATED BY 'YourChar' (example: FIELDS TERMINATED BY '\t')
SELECT (rest of your query)
You can also use this :-
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES ('field.delim'='-','serialization.format'='-')
This will separate columns using the '-' delimiter, but it is specific to LazySimpleSerDe.
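For context, a hedged sketch of how that clause might be used (the directory and table name are hypothetical; per HIVE-3682, INSERT OVERWRITE DIRECTORY accepts a ROW FORMAT clause from Hive 0.11 onwards):
-- Write the query result with '-' as the field delimiter via LazySimpleSerDe
INSERT OVERWRITE DIRECTORY '/tmp/serde_output'
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES ('field.delim'='-','serialization.format'='-')
STORED AS TEXTFILE
SELECT * FROM my_table;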
I guess you are using the INSERT OVERWRITE DIRECTORY option to write to an HDFS file.
If you create a Hive table on top of the HDFS file with no delimiter specified, it will take '\001' as the delimiter, so you can read the file through a Hive table without any issues.
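A minimal sketch of that (the path, table, and column names are hypothetical); with no ROW FORMAT clause, Hive falls back to the default '\001' field delimiter:
-- External table over the directory written by INSERT OVERWRITE DIRECTORY
CREATE EXTERNAL TABLE my_readback (
    col1 STRING,
    col2 STRING
)
LOCATION '/tmp/hive_mr_output';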
If your source table didn't specify the delimiter in its CREATE TABLE statement, then you won't be able to change that; your output will always contain the default. And yes, the delimiter is controlled by the CREATE TABLE statement for the source table, so that isn't configurable either.
I have had a similar issue and ended up modifying the \001 delimiter as a second step after the Hive MR job finished.