Export Vertica query result to csv file - sql

I'm working with Vertica and I'm trying to export the result of a SELECT query to a CSV file. I tried this SQL query:
SELECT * FROM table_name INTO OUTFILE '/tmp/fileName.csv' FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
I got an error:
[Vertica][VJDBC](4856) ERROR: Syntax error at or near "INTO"
Is there a way to export a query result to a CSV file? I would prefer not to use vsql, but if there is no other way, I will. I tried the following:
vsql -c "select * from table_name;" > /tmp/export_data.txt

Here is how you do it:
vsql -U dbadmin -F ',' -A -P footer=off -o dumpfile.txt -c "select ... from ... where ...;"
Reference: Exporting Data Using vsql
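If the column-name header should be dropped as well (footer=off only removes the "(N rows)" line), vsql, like psql, also accepts -t / --tuples-only, which suppresses both the header and the footer. A minimal sketch reusing the placeholders from the command above:
vsql -U dbadmin -A -t -F ',' -o dumpfile.txt -c "select ... from ... where ...;"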

According to https://my.vertica.com/docs/7.1.x/HTML/Content/Authoring/ConnectingToHPVertica/vsql/ExportingDataUsingVsql.htm
=> SELECT * FROM my_table;
 a |   b   | c
---+-------+---
 a | one   | 1
 b | two   | 2
 c | three | 3
 d | four  | 4
 e | five  | 5
(5 rows)
=> \a
Output format is unaligned.
=> \t
Showing only tuples.
=> \pset fieldsep ','
Field separator is ",".
=> \o dumpfile.txt
=> select * from my_table;
=> \o
=> \! cat dumpfile.txt
a,one,1
b,two,2
c,three,3
d,four,4
e,five,5

The following writes the query result to a CSV file, comma-separated and with no footer:
vsql -h $HOST -U $USER -d $DATABASE -w $PASSWORD -f $SQL_PATH/SQL_FILE -A -o $FILE -F ',' -P footer=off -q
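One caveat, assuming the -A unaligned output behaves as it does in psql: values are written as-is, with no quoting or escaping, so a comma inside a field will produce a malformed CSV. A workaround sketch (bash syntax for the literal tab) is to pick a separator that cannot occur in the data, for example a tab:
vsql -h $HOST -U $USER -d $DATABASE -w $PASSWORD -f $SQL_PATH/SQL_FILE -A -o $FILE -F $'\t' -P footer=off -q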

Related

Hivevars not going through with beeline in EMR

I am running a Hive SQL script in EMR. I am calling the script from beeline (in a shell script) with some hivevars, but they are not getting through to the script. Am I missing something obvious?
This is my shell script:
ST_DATE=$(TZ=America/Los_Angeles date +%Y-%m-%d -d "today");
beeline -u jdbc:hive2:// --hivevar ST_DATE="$ST_DATE" --hivevar lookback=45 -f ./ETL/predict_etl_hive.sql || exit 1
And this is the Hive SQL file:
-- DEBUG display hivevars
select '${hivevar:lookback}';
select '${hivevar:ST_DATE}';
------------------- STEP 1: ----------------------------
DROP TABLE IF EXISTS string1.us_past_con_${hivevar:lookback}d;
CREATE TABLE IF NOT EXISTS string1.us_past_con_${hivevar:lookback}d
However, it does not pick up the values. When I use SELECT to check them, the variable references come back unsubstituted:
0: jdbc:hive2://>
0: jdbc:hive2://> -- DEBUG display hivevars
0: jdbc:hive2://> select '${hivevar:lookback}';
+----------------------+
| _c0 |
+----------------------+
| ${hivevar:lookback} |
+----------------------+
0: jdbc:hive2://> select '${hivevar:ST_DATE}';
+---------------------+
| _c0 |
+---------------------+
| ${hivevar:ST_DATE} |
+---------------------+
And then it eventually fails at the DROP line:
FAILED: ParseException line 1:46 missing EOF at '$' near 'us_past_con_'
22/07/06 21:53:11 [ef15917c-31f8-48ae-8ca4-770871367e05 main]: ERROR ql.Driver: FAILED: ParseException line 1:46 missing EOF at '$' near 'us_past_con_'
org.apache.hadoop.hive.ql.parse.ParseException: line 1:46 missing EOF at '$' near 'us_past_con_'
The vars do go through if I use hive -f instead of beeline.
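(Not an answer, just a debugging sketch to rule out the shell side: print exactly what the shell is about to hand to beeline, so the problem can be narrowed down to beeline's variable substitution rather than to an empty or mangled shell variable.)
ST_DATE=$(TZ=America/Los_Angeles date +%Y-%m-%d -d "today")
# sanity check: show the values that will be passed via --hivevar
echo "ST_DATE=[${ST_DATE}] lookback=[45]"
beeline -u jdbc:hive2:// \
  --hivevar ST_DATE="$ST_DATE" \
  --hivevar lookback=45 \
  -f ./ETL/predict_etl_hive.sql || exit 1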

Why is my script not printing output on one line?

I am using the following echo in a script, and after I execute it the output looks like this:
echo -e "UPDATE table1 SET table1_f1='$Fname' ,table1_f2='$Lname' where table1_f3='$id';\ncommit;" >> $OutputFile
output:
UPDATE table1 SET table1_f1='Fname' ,table1_f2='Lname' where table1_f3='id
';
The '; is appearing on a new line. Why is that happening?
The variable $id in your shell script actually contains that newline (\n or \r\n) at the end; so there isn't really anything wrong in the part of the script you've shown here.
This effect is pretty common when the variable is created from the output of external commands or by reading external files, as you are doing here.
For simple values, one way to strip the carriage return and newline off the end of the value before using it in your echo is:
id=$( echo "${id}" | tr -d '\r\n' );
or for scripts that already rely on a particular bash IFS value:
OLDIFS="${IFS}";
IFS=$'\n\t ';
id=$( echo "${id}" | tr -d '\r\n' );
IFS="${OLDIFS}";

Postgres copy to TSV file with header

I have a function like so -
CREATE OR REPLACE FUNCTION ind (bucket text) RETURNS TABLE (
    middle character varying (100),
    last character varying (100)
) AS $body$
BEGIN
    RETURN QUERY
    SELECT
        fname AS first,
        lname AS last
    FROM all_records;
END;
$body$ LANGUAGE plpgsql;
How do I output the results of select ind('Mob') to a TSV file?
I want the output to look like this -
first last
MARY KATHERINE
You can use the COPY command, for example:
COPY (select * from ind('Mob')) TO '/tmp/ind.tsv' CSV HEADER DELIMITER E'\t';
The file '/tmp/ind.tsv' will contain your data.
Postgres doesn't allow COPY with a header for TSV (plain text format) for some reason.
If you're using a Linux-based system you can do it with a script like this:
#create file with tab delimited column list (use \t between each column name)
echo -e "user_id\temail" > user_output.tsv
#now you can append the results of your query to that file by copying to STDOUT
psql -h your_host_name -d your_database_name -c "\copy (SELECT user_id, email FROM my_user_table) to STDOUT;" >> user_output.tsv
Alternatively, if your script is long and you don't want to pass it in with the -c option, you can use the same approach with a .sql file; use --quiet to avoid notices being written into your output file:
psql --quiet -h your_host_name -d your_database_name -f your_sql_file.sql >> user_output.tsv
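Combining the two approaches above, it should also be possible (untested sketch) to let \copy write the header itself by using CSV format with a tab delimiter, so the separate echo for the column list is not needed:
psql -h your_host_name -d your_database_name \
  -c "\copy (SELECT user_id, email FROM my_user_table) TO 'user_output.tsv' CSV HEADER DELIMITER E'\t';"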

How to save PostgreSQL output to text file

I've struggled with other Stack Overflow responses. I would like to save the output of a query to a local text file - it doesn't really matter where the text file is located as long as it is on my local machine.
Code I am using:
\COPY (
select month,count(*) as distinct_Count_month
from
(
select UNIQUE_MEM_ID,to_char(transaction_date, 'YYYY-MM') as month
FROM yi_fourmpanel.card_panel WHERE COBRAND_ID = '10006164'
group by UNIQUE_MEM_ID,to_char(transaction_date, 'YYYY-MM')
) a
group by month) TO 'mycsv.csv' WITH CSV HEADER;
Error with this code is:
An error occurred when executing the SQL command:
\COPY (
ERROR: syntax error at or near "\"
Position: 1
\COPY (
^
Execution time: 0.08s
(Statement 1 of 2 finished)
An error occurred when executing the SQL command:
select month,count(*) as distinct_Count_month
from
(
select UNIQUE_MEM_ID,to_char(transaction_date, 'YYYY-MM') as month
FROM yi_fourmpanel.card_panel...
ERROR: syntax error at or near ")"
Position: 260
group by month) TO 'mycsv.csv' WITH CSV HEADER
^
Execution time: 0.08s
(Statement 2 of 2 finished)
2 statements failed.
Script execution finished
Total script execution time: 0.16s
1. For a server-side COPY, remove the backslash and run the following in psql:
COPY (
WITH data(val1, val2) AS ( VALUES
('v1', 'v2')
)
SELECT *
FROM data
) TO 'yourServerPath/output.csv' CSV HEADER;
cat yourServerPath/output.csv :
val1,val2
v1,v2
2. For a client-side COPY:
psql -h host -U user -d database -c "\copy \
( \
WITH data(val1, val2) AS ( VALUES \
(1, 2) \
) \
SELECT * FROM data) TO 'yourClientPath/output.csv' CSV HEADER;"
cat yourClientPath/output.csv:
val1,val2
1,2
UPDATED
For the example you provided: on your client machine, run the following script in a terminal, with an absolute path to your mycsv.csv:
psql -h host -U username -d db -c "\COPY ( \
select month,count(*) as distinct_Count_month \
from \
( \
select UNIQUE_MEM_ID,to_char(transaction_date, 'YYYY-MM') as month \
FROM yi_fourmpanel.card_panel WHERE COBRAND_ID = '10006164' \
group by UNIQUE_MEM_ID,to_char(transaction_date, 'YYYY-MM') \
) a \
group by month) TO 'path/mycsv.csv' WITH CSV HEADER;"
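A variant of the same sketch: \copy can also write to STDOUT, so a shell redirect decides where mycsv.csv ends up and psql never has to resolve the path itself:
psql -h host -U username -d db -c "\COPY ( \
select month,count(*) as distinct_Count_month \
from \
( \
select UNIQUE_MEM_ID,to_char(transaction_date, 'YYYY-MM') as month \
FROM yi_fourmpanel.card_panel WHERE COBRAND_ID = '10006164' \
group by UNIQUE_MEM_ID,to_char(transaction_date, 'YYYY-MM') \
) a \
group by month) TO STDOUT WITH CSV HEADER;" > mycsv.csv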

Escape password in mysqldump command

I am using this:
mysqldump -u userabc -pabc123 dbname |
gzip > /var/backups/archives/mysql/dbname_$(date +\%d-\%m-\%Y_\%T).sql.gz
This works, but if the password contains a ^, for example, it fails. How can I escape this character and still have mysqldump work with the -p flag?
mysqldump -u userabc -pabc^123 dbname |
gzip > /var/backups/archives/mysql/dbname_$(date +\%d-\%m-\%Y_\%T).sql.gz
Quote the password:
mysqldump -u fred7 -p'asdf^555^666'
Single quotes are needed if the password contains any of the following characters: * ? [ < > & ; ! | $ ( ), and perhaps ^ too.
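Applied to the original command, the single quotes keep the shell from touching any character inside the password (placeholder password shown):
mysqldump -u userabc -p'abc^123' dbname |
gzip > /var/backups/archives/mysql/dbname_$(date +\%d-\%m-\%Y_\%T).sql.gz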