CrateDB: How to drop all tables prior to a restore?

CrateDB needs tables to be dropped before a restore. Options that are not available to me:
- plain SQL
- multiple statements copied into the console / crash CLI
Is there an easy way to do this?

The way I resolved this was via a bash script using the Crash CLI, which pulls the table names and drops them one by one.
You will need to set $HOST and $TABLE_CATALOG first:
crash --hosts "$HOST" -c "SELECT CONCAT('\"', TABLE_CATALOG, '\".\"', TABLE_NAME, '\"') FROM INFORMATION_SCHEMA.tables WHERE table_catalog = '$TABLE_CATALOG'" --format=csv |
tail -n +2 | head -n -1 | sed 's/"/\\"/g' |
xargs -I {} crash --hosts "$HOST" -c 'DROP TABLE {}'
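For example, with hypothetical values (the catalog name is typically crate on recent CrateDB versions; check information_schema.tables on your own cluster):
export HOST=localhost:4200
export TABLE_CATALOG=crate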

Related

SQL Server bcp export: binary in CSV, how do I replace it with ''?

I have a table with a column that is empty. When exporting it with bcp, the empty value is exported as the character "?":
bcp table out table.csv -S local -U sa -d BD -c -t '|'
Is there some solution to this problem from bcp itself?
Had the same problem and solved it with:
-k
Hope it works for you too ;)
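For illustration, that means appending -k to the export command from the question:
bcp table out table.csv -S local -U sa -d BD -c -t '|' -k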
Well, of course it won't work... those fields are defined as NOT NULL.
You should change that.
If you still want the data not to be NULL, you should substitute a default value for NULL elements when selecting from the table:
select case
    when user_name is null then 'default value'
    else user_name
end
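A hedged sketch of combining that with bcp via queryout (database, table, and column names here are illustrative):
bcp "SELECT CASE WHEN user_name IS NULL THEN 'default value' ELSE user_name END FROM BD.dbo.myTable" queryout table.csv -S local -U sa -c -t '|'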
use:
-c -C65001 -t -r"\n" -T -S;
or:
-k

Retrieving process id using sshcmd on unix

I want to retrieve the process id when my code successfully starts the job, but it is returning null.
I am starting the job using sshcmd, creating a log of the sshcmd output, and then trying to retrieve the process id into new_process_id using sshcmd. If I get new_process_id I will show it; otherwise I will show the output collected in the log file. But I am getting null in new_process_id.
remote_command="nohup J2EEServer/config/AMSS/scripts/${batch_job} & "
sshcmd -q -u ${login_user} -s ${QA_HOST} "$remote_command" > /tmp/nohup_${batch_job} 2>&1
remote_command=$(ps -ef | grep ${login_user} | grep $batch_job | grep -v grep | awk '{print $2}');
new_process_id=`sshcmd -q -u ${login_user} -s ${QA_HOST} "$remote_command"`
runstatus=`grep Synchronized. /tmp/nohup_${batch_job}`
if [[ $runstatus != "" ]]
then
new_process_id=`cat /tmp/nohup_${batch_job}`
fi
echo $new_process_id
The second assignment to remote_command stores the output of that ps pipeline run on your local machine: $( ... ) executes the command locally instead of keeping it as a string to send over ssh.
Another hint: if you are making a second, unrelated variable, give it another name. It will avoid unnecessary confusion.
What you are attempting to do next with runstatus, overwriting an already existing but not yet used variable, is unclear to me.
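A minimal sketch of the fix, keeping the pipeline as a string so it runs on the remote host (sshcmd is the wrapper from the question; pid_command is a new name to avoid the confusion above):
# Note the escaped \$2 so that awk sees $2 on the remote side.
pid_command="ps -ef | grep ${login_user} | grep ${batch_job} | grep -v grep | awk '{print \$2}'"
new_process_id=$(sshcmd -q -u ${login_user} -s ${QA_HOST} "$pid_command")
echo $new_process_id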

PostgreSQL - dump each table into a different file

I need to extract SQL files from multiple tables of a PostgreSQL database. This is what I've come up with so far:
pg_dump -t 'thr_*' -s dbName -U userName > /home/anik/psqlTest/db_dump.sql
However, as you see, all the tables that start with the prefix thr are being exported to a single unified file (db_dump.sql). I have almost 90 tables in total to extract SQL from, so the data must be stored in separate files.
How can I do it? Thanks in advance.
If you are happy to hard-code the list of tables, but just want each to be in a different file, you could use a shell script loop to run the pg_dump command multiple times, substituting in the table name each time round the loop:
for table in table1 table2 table3 etc;
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;
EDIT: This approach can be extended to get the list of tables dynamically by running a query through psql and feeding the results into the loop instead of a hard-coded list:
for table in $(psql -U userName -d dbName -t -c "Select table_name From information_schema.tables Where table_type='BASE TABLE' and table_name like 'thr_%'");
do pg_dump -t $table -U userName dbName > /home/anik/psqlTest/db_dump_dir/$table.sql;
done;
Here psql -t -c "SQL" runs SQL and outputs the results with no header or footer; since there is only one column selected, there will be a table name on each line of the output captured by $(command), and your shell will loop through them one at a time.
Since version 9.1 of PostgreSQL (Sept. 2011), one can use the directory format output when doing backups,
and two versions/two years later (PostgreSQL 9.3), the --jobs/-j option makes it even more efficient by backing up every single object in parallel.
But what I don't understand in your original question is that you use the -s option, which dumps only the object definitions (schema), not the data.
If you want the data, you should not use -s but rather -a (data-only), or no option at all for schema + data.
So, to back up all objects (tables, ...) that begin with 'thr_' for the database dbName into the directory dbName_objects/ with 10 concurrent jobs/processes (this increases load on the server):
pg_dump -Fd -f dbName_objects -j 10 -t 'thr_*' -U userName dbName
(You can also use -a/-s if you want only the data or only the schema of the objects.)
As a result, the directory will be populated with a toc.dat (table of contents of all the objects) and one compressed file per object (.dat.gz).
Each file is named after its object number, and you can retrieve the list with the following pg_restore command:
pg_restore --list -Fd dbName_objects/ | grep 'TABLE DATA'
In order to have each file uncompressed (in raw SQL):
pg_dump --data-only --compress=0 --format=directory --file=dbName_objects --jobs=10 --table='thr_*' --username=userName --dbname=dbName
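For reference, a hedged sketch of extracting a single table's SQL back out of that directory dump (thr_example is a hypothetical table name):
pg_restore -Fd dbName_objects/ -t thr_example -f thr_example.sql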
(not enough reputation to comment the right post)
I used your script with some corrections and some modifications for my own use; it may be useful for others:
#!/bin/bash
# Config:
DB=rezopilotdatabase
U=postgres
# tablename searchpattern, if you want all tables enter "":
P=""
# directory to dump files without trailing slash:
DIR=~/psql_db_dump_dir
mkdir -p $DIR
TABLES="$(psql -d $DB -U $U -t -c "SELECT table_name FROM
information_schema.tables WHERE table_type='BASE TABLE' AND table_name
LIKE '%$P%' ORDER BY table_name")"
for table in $TABLES; do
echo backup $table ...
pg_dump $DB -U $U -w -t $table > $DIR/$table.sql;
done;
echo done
(I think you forgot to add $DB in the pg_dump command, and I added -w; for an automated script it is better not to have a password prompt. For that, I created a ~/.pgpass file with my password in it,
and I also gave the user in the command so it knows which password to fetch from .pgpass.)
Hope this helps someone someday.
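For reference, ~/.pgpass takes one line per connection in the form hostname:port:database:username:password, and the file must be private (chmod 600 ~/.pgpass). A hypothetical entry:
localhost:5432:rezopilotdatabase:postgres:mySecretPassword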
This bash script will do a backup with one file per table:
#!/bin/bash
# Config:
DB=dbName
U=userName
# tablename searchpattern, if you want all tables enter "":
P=""
# directory to dump files without trailing slash:
DIR=~/psql_db_dump_dir
mkdir -p $DIR
AUTH="-d $DB -U $U"
TABLES="$(psql $AUTH -t -c "SELECT table_name FROM information_schema.tables WHERE table_type='BASE TABLE' AND table_name LIKE '%$P%' ORDER BY table_name")"
for table in $TABLES; do
echo backup $table ...
pg_dump $AUTH -t $table > $DIR/$table.sql;
done;
echo done

How to hide result set decoration in Psql output

How do you hide the column names and row count in the output from psql?
I'm running a SQL query via psql with:
psql --user=myuser -d mydb --output=result.txt -c "SELECT * FROM mytable;"
and I'm expecting output like:
1,abc
2,def
3,xyz
but instead I get:
id,text
-------
1,abc
2,def
3,xyz
(3 rows)
Of course, it's not impossible to filter the top two rows and bottom row out after the fact, but is there a way to do it with only psql? Reading over its manpage, I see options for controlling the field delimiter, but nothing for hiding extraneous output.
You can use the -t or --tuples-only option:
psql --user=myuser -d mydb --output=result.txt -t -c "SELECT * FROM mytable;"
Edited (more than a year later) to add:
You also might want to check out the COPY command. I no longer have any PostgreSQL instances handy to test with, but I think you can write something along these lines:
psql --user=myuser -d mydb -c "COPY mytable TO 'result.txt' DELIMITER ','"
(except that result.txt will need to be an absolute path). The COPY command also supports a more-intelligent CSV format; see its documentation.
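A related, hedged note: since COPY ... TO writes the file on the server (and typically requires elevated privileges), the client-side psql meta-command \copy writes locally and accepts a relative path:
psql --user=myuser -d mydb -c "\copy mytable TO 'result.txt' WITH (FORMAT csv)"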
You can also redirect output from within psql and use the same option. Use \o to set the output file, and \t to output tuples only (or \pset to turn off just the rowcount "footer").
\o /home/flynn/queryout.txt
\t on
SELECT * FROM a_table;
\t off
\o
Alternatively,
\o /home/flynn/queryout.txt
\pset footer off
. . .
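Combining the options above into a single non-interactive call (using the names from the question) produces exactly the bare 1,abc style output that was expected: -t drops the header and row count, -A switches to unaligned output, and -F sets the field separator:
psql --user=myuser -d mydb -t -A -F ',' --output=result.txt -c "SELECT * FROM mytable;"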
Usually when you want to parse the psql-generated output you will want to set -A (unaligned output) and -F (field separator); for a single-column result like the one below, -t alone is enough:
# generate "t.col1, t.col2, t.col3 ..." from the columns of $table_name
while read -r c; do
test -z "$c" || echo -n ", $table_name.$c"
done < <(cat << EOF | PGPASSWORD=${postgres_db_useradmin_pw:-} \
psql -A -q -t -X -w \
-U ${postgres_db_useradmin:-} --port $postgres_db_port --host $postgres_db_host \
-d $postgres_db_name -v table_name=${table_name:-}
SELECT column_name
FROM information_schema.columns
WHERE 1=1
AND table_schema = 'public'
AND table_name = :'table_name';
EOF
)
echo -e "\n\n"
You can find an example of the full bash call here:

Does SQL*Plus natively allow queries to run from the shell itself?

For example, is there an equivalent of these in SQL*Plus?
sqlplus 'SELECT * FROM emp' | less
sqlplus 'SELECT * FROM emp' | grep Primx
One way has been suggested by paxdiablo here. Is that the only way?
You can do it with here documents:
sqlplus -S user/password << EOF | grep Primx
select * from emp;
EOF
-S is for silent mode; it is followed by the username and password combination.
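Another hedged variant, piping the statement in on stdin instead of a here document (sqlplus exits when stdin reaches end-of-file):
echo "SELECT * FROM emp;" | sqlplus -S user/password | grep Primx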