How does unloading an empty table from Redshift to S3 behave? - amazon-s3

If an empty table is unloaded from Redshift to S3 using the UNLOAD command, does it create an empty file on S3, or does it do nothing?
Earlier (a few days back) I unloaded using the UNLOAD command and it placed a 0-byte file on S3. Today, however, nothing is placed on S3, even though Redshift shows the message "UNLOAD completed, 0 record(s) unloaded successfully".
Even adding HEADER (to unload with column headers) to the UNLOAD options does not produce any file on S3.
UNLOAD ($$ SELECT * FROM <table_name> $$)
TO 's3://<bucket_name>/abc/test1'
iam_role '<iam_role>'
ADDQUOTES HEADER ALLOWOVERWRITE
DELIMITER AS ','
ESCAPE
PARALLEL OFF

As per AWS support, they have gone back to the old UNLOAD behavior of creating empty files when there is no data to unload, starting with version 1.0.10880. So Redshift clusters on versions >= 1.0.10880 have the fix, and it is available in all regions.
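If you want to confirm which version your cluster is on, you can query it; a minimal sketch using Python with psycopg2 and placeholder connection details (nothing below comes from the original question):
# Hedged sketch: host, dbname, user and password are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.xxxxxx.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="awsuser", password="...",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    # The version string ends with something like "Redshift 1.0.10880".
    print(cur.fetchone()[0])
conn.close()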

It looks like the UNLOAD behavior changed as of yesterday: unloading empty tables no longer generates files.

Related

UNLOAD to a new file when running in Redshift

I am trying to UNLOAD a file to an S3 bucket. However, I don't want to overwrite it; I want to create a new file every time I run the command. How can I achieve this?
unload ('select * from table1')
to 's3://bucket/file1/file2/file3/table1.csv'
iam_role 'arn:aws:iam::0934857378:role/RedshiftAccessRole,arn:aws:iam::435874575846546:role/RedshiftAccessRole'
DELIMITER ','
PARALLEL OFF
HEADER
Just change the destination path specified in the "TO" section.
If you wish to do this programmatically, you could do it in whatever script/command sends the UNLOAD command.
You might be able to do it via a Stored Procedure by keeping a table with the last file number and writing code to retrieve and increment it.
Or, you could write an AWS Lambda function that is triggered upon creation of the file. The Lambda function could then copy the object to a different path/filename and delete the original object.
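As a minimal sketch of the first (scripted) option, assuming psycopg2 and placeholder connection details, you could append a timestamp to the destination prefix so each run writes new objects:
# Hedged sketch: builds a fresh destination prefix per run before sending UNLOAD.
import datetime
import psycopg2

suffix = datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")
unload_sql = f"""
    UNLOAD ('select * from table1')
    TO 's3://bucket/file1/file2/file3/table1_{suffix}_'
    IAM_ROLE 'arn:aws:iam::0934857378:role/RedshiftAccessRole'
    DELIMITER ','
    PARALLEL OFF
    HEADER;
"""

conn = psycopg2.connect(host="...", port=5439, dbname="dev",
                        user="awsuser", password="...")  # placeholders
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(unload_sql)
conn.close()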

Redshift Unload command with CSV extension

I'm using the following Unload command -
unload ('select * from '')
to 's3://summary.csv'
CREDENTIALS 'aws_access_key_id='';aws_secret_access_key='''
parallel off allowoverwrite CSV HEADER;
The file created in S3 is summary.csv000
If I remove the file extension from the command, like below:
unload ('select * from '')
to 's3://summary'
CREDENTIALS 'aws_access_key_id='';aws_secret_access_key='''
parallel off allowoverwrite CSV HEADER;
The file created in S3 is summary000.
Is there a way to get summary.csv, so I don't have to change the file extension before importing it into Excel?
Thanks.
Actually, a lot of folks have asked a similar question; right now it's not possible to add an extension to the files (Parquet files can have one, though).
The reason is that Redshift by default exports in parallel, which is a good thing: each slice exports its own data. From the docs:
PARALLEL
By default, UNLOAD writes data in parallel to multiple files, according to the number of slices in the cluster. The default option is ON or TRUE. If PARALLEL is OFF or FALSE, UNLOAD writes to one or more data files serially, sorted absolutely according to the ORDER BY clause, if one is used. The maximum size for a data file is 6.2 GB. So, for example, if you unload 13.4 GB of data, UNLOAD creates the following three files.
So it has to create new files after 6.2 GB; that's why a numeric suffix is added.
How do we solve this?
There is no native option in Redshift, but we can work around it with Lambda:
Create a new S3 bucket and a folder inside it specifically for this process (e.g. s3://unloadbucket/redshift-files/).
Your unload files should go to this folder.
A Lambda function is triggered on the S3 put-object event.
Then the Lambda function (a sketch follows the steps below):
Downloads the file (if it is large, use EFS)
Renames it with a .csv extension
Uploads it to the same bucket (or a different one) under a different path (e.g. s3://unloadbucket/csvfiles/)
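A minimal sketch of such a Lambda handler, assuming the bucket layout above and using a server-side copy instead of a download/re-upload (which is enough for a plain rename):
# Hedged sketch: prefixes match the example layout above; adjust to your bucket.
import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])    # e.g. redshift-files/summary000
        if not key.startswith("redshift-files/"):
            continue                                         # ignore objects outside the unload folder
        new_key = "csvfiles/" + key.split("/")[-1] + ".csv"  # e.g. csvfiles/summary000.csv
        s3.copy_object(Bucket=bucket, Key=new_key,
                       CopySource={"Bucket": bucket, "Key": key})
        s3.delete_object(Bucket=bucket, Key=key)             # optional: remove the original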
Or, even simpler, use a shell/PowerShell script to do the following:
Download the file
Rename it with a .csv extension
As per the AWS documentation for the UNLOAD command, it's possible to save the data as CSV.
In your case, this is what your code would look like:
unload ('select * from '')
to 's3://summary/'
CREDENTIALS 'aws_access_key_id='';aws_secret_access_key='''
CSV            -- the CSV option is the addition
parallel off
allowoverwrite
HEADER;

NiFi data insertion into s3 subdirectory

I have a flow where I am extracting data from the database, converting the Avro to CSV format, and pushing the CSV into an S3 bucket which has subfolders in it. My S3 structure is like the following:
As you can see in the above screenshot, my files are going into a blank folder (highlighted in red) instead of going inside a subfolder called 'Thermal'. Please see my PutS3Object settings:
The final s3 path I want my files to go into is: export-csv-vehicle-telemetry/vin11/Thermal
What settings should I change in my processor so the file goes directly inside the 'Thermal' folder?
Use Bucket name as: export-csv-vehicle-telemetry/vin15/Thermal instead of export-csv-vehicle-telemetry/vin15/Thermal/
The extra slash at the end is not required while specifying bucket names.
BTW, your image shows a vin11 directory instead of vin15. Check whether that is correct.

How to ignore errors but not skip rows in redshift copy command

I have a nested JSON as my source file in S3 and I am trying to copy this file into Redshift.
My issues with this are as follows:
I use MAXERROR - I need to skip certain errors because the source file is missing certain fields in some cases and has them in others
I use a JSONPaths file - to pick the fields that I need to copy to Redshift
All the columns in the table are varchar
Obviously, since I am using MAXERROR, the COPY command executes successfully, but the table has 0 records. Here is my COPY command:
COPY public.table(col1,col2,col3,col4,col5,col6)
from 's3://bucket/filename'
credentials 'redshift'
format as JSON 'jsonpathfile.json'
timeformat 'YYYY-MM-DDTHH:MI:SS'
EMPTYASNULL ACCEPTANYDATE ACCEPTINVCHARS TRUNCATECOLUMNS maxerror 100 ;
If I check stl_load_errors, it keeps saying:
Invalid JSONPath format: Member is not an object.
Does this mean the copy command is not able to find even one object that fits the jsonpath file?
That is definitely not true; I inferred the schema of the input file to design the JSONPaths file.
Here is an example from COPY Examples - Amazon Redshift:
copy category
from 's3://mybucket/category_object_paths.json'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
json 's3://mybucket/category_jsonpath.json';
The path to the jsonpath file is specified fully, whereas your example just refers to the filename.
Try specifying the full path starting with s3:// and see whether that helps.

Exporting large file from BigQuery to Google cloud using wildcard

I have an 8 GB table in BigQuery that I'm trying to export to Google Cloud Storage (GCS). If I specify the URI as it is, I get an error:
Errors:
Table gs://***.large_file.json too large to be exported to a single file. Specify a uri including a * to shard export. See 'Exporting data into one or more files' in https://cloud.google.com/bigquery/docs/exporting-data. (error code: invalid)
Okay... I'm specifying * in the file name, but it exports it in 2 files: one of 7.13 GB and one of ~150 MB.
UPD: I thought I would get about 8 files of 1 GB each? Am I wrong, or what am I doing wrong?
P.S. I tried this in the WebUI as well as using the Java library.
For exports of a certain size or larger, BigQuery will export to multiple GCS files; that's why it asks for the "*" glob.
Once you have multiple files in GCS, you can join them into one with the compose operation:
gsutil compose gs://bucket/obj1 [gs://bucket/obj2 ...] gs://bucket/composite
https://cloud.google.com/storage/docs/gsutil/commands/compose
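The same compose operation is also available from the Python client library, if you prefer doing it in code; a minimal sketch with hypothetical bucket and object names:
# Hedged sketch using google-cloud-storage; bucket and object names are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")

# Compose the exported shards into a single object (up to 32 sources per call).
shards = [bucket.blob("large_file-000000000000.json"),
          bucket.blob("large_file-000000000001.json")]
bucket.blob("large_file.json").compose(shards)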
To export it to GCS you have to go to the table and click EXPORT > Export to GCS.
This opens the following screen
In Select GCS location you define the bucket, the folder and the file.
For instance, say you have a bucket named daria_bucket (use only lowercase letters, numbers, hyphens (-), and underscores (_); dots (.) may be used to form a valid domain name) and want to save the file(s) in the root of the bucket with the name test. Then you write (in Select GCS location):
daria_bucket/test.csv
Because the file is too big, you're getting an error. To fix it, you'll have to break it down into more files using a wildcard. So you'll need to add *, just like this:
daria_bucket/test*.csv
This is going to store, inside of the bucket daria_bucket, all the data extracted from the table in more than one file named test000000000000, test000000000001, test000000000002, ... testX.
In my case (more than a year after you asked the question), using a random table of 1.25 GB, I got 16 files of 80.3 MB each.
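For reference, the same sharded export can also be done programmatically; a minimal sketch with the BigQuery Python client, assuming hypothetical project, dataset, table, and bucket names:
# Hedged sketch: project/dataset/table and bucket names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
job = client.extract_table(
    "my-project.my_dataset.large_file",
    "gs://my-bucket/large_file-*.json",   # the * lets BigQuery shard the export
    job_config=bigquery.ExtractJobConfig(
        destination_format=bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
    ),
)
job.result()  # wait for the extract job to finish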