psql shortcut for frequently used queries? (like Unix "alias")

Is it possible to somehow create aliases (like the Unix alias command) in psql?
I mean not SQL functions, but local aliases to ease manual querying.

I don't know about any direct possibility. There is only a workaround for psql based on psql variables, but it has a lot of limits - using parameters with these queries is difficult.
postgres=# \set whoami 'SELECT CURRENT_USER;'
postgres=# :whoami
current_user
--------------
pavel
(1 row)
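To make such aliases available in every session, the \set lines can go into ~/.psqlrc, which psql reads at startup. A minimal sketch (the second query name is just an example):
\set whoami 'SELECT CURRENT_USER;'
\set uptime 'SELECT now() - pg_postmaster_start_time();'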

Pavel's answer is almost correct, except that you can use parameters in another way.
After
\set s 'select * from '
\set l ' limit 10;'
the following command
:s agent :l
is equivalent to
select * from agent limit 10;
According to http://www.postgresql.org/docs/9.0/static/app-psql.html
If an unquoted argument begins with a colon (:), it is taken as a psql
variable and the value of the variable is used as the argument
instead. If the variable name is surrounded by single quotes (e.g.
:'var'), it will be escaped as an SQL literal and the result will be
used as the argument. If the variable name is surrounded by double
quotes, it will be escaped as an SQL identifier and the result will be
used as the argument.
You can also use backquotes to run a shell command:
Arguments that are enclosed in backquotes (`) are taken as a command
line that is passed to the shell. The output of the command (with any
trailing newline removed) is taken as the argument value. The above
escape sequences also apply in backquotes.
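For example, assuming a file my_query.sql exists in the current directory, its contents can be pulled into a variable and run:
postgres=# \set q `cat my_query.sql`
postgres=# :q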

How about using UDFs? You can create a UDF that returns a table (a set of rows), and then query it like this: select * from udf();
It is not as clean, but it is better than nothing, and it is portable. UDFs can take parameters too.
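For instance, a minimal sketch, assuming a hypothetical table users with a boolean column active:
CREATE FUNCTION active_users() RETURNS SETOF users AS $$
    SELECT * FROM users WHERE active;
$$ LANGUAGE sql;

SELECT * FROM active_users();
SELECT * FROM active_users() LIMIT 5;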

Why not use a view? Maybe views will help in your case.
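A minimal sketch, with hypothetical table and column names:
CREATE VIEW recent_orders AS
    SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;

SELECT * FROM recent_orders;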

This might help if you need to run frequent queries from the command line (not from the psql CLI).
Add this to .bash_profile / .bashrc:
POSTGRES_BIN=~/Postgres/bin
B_RED='\033[1;31m'
RESET='\033[0m'
psqlcommand="$POSTGRES_BIN/psql -U vignesh usersdb -q -c"
function psqlselectrows()
{
    [ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need table name${RESET}" ||
    $psqlcommand "SELECT * from $1"
}
The above function selects rows from the table passed as its first argument.
Note:
Change the database name, as required.
The default schema is public. To use another default schema, add the following line to your ~/.psqlrc file:
SET SEARCH_PATH TO <schema_name>;
If the database is password protected, refer to the psql documentation and make use of a secure method (such as a ~/.pgpass file) rather than embedding the password.
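For example, a ~/.pgpass file (which must have 0600 permissions) contains lines of the form hostname:port:database:username:password; with the database and user from the script above and a placeholder password:
localhost:5432:usersdb:vignesh:yourpassword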
I have made some commands for my own use; they might help.
psqlselectrows - To select rows from a table
psqlgettablecount - To get row count of a table
psqltruncatetable - To truncate a table, on prompt
psqlgettablesize - To get the size of a table
psqlgetvacuumdetails - To get vacuum details of a table
psqlsettings - To get default and modified settings configured for Postgres.
(All the above commands take the table name as their first argument.)
#Colors
B_RED='\033[1;31m'
B_GREEN='\033[1;32m'
B_YELLOW='\033[1;33m'
RESET='\033[0m'

#Postgres Command With Params
psqlcommand="$POSTGRES_BIN/psql -U vignesh usersdb -q -c"

function psqlgettablesize()
{
    [ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need table name${RESET}" ||
    $psqlcommand "select pg_size_pretty(pg_total_relation_size('$1')) as total_table_size, pg_size_pretty(pg_relation_size('$1')) as table_size, pg_size_pretty(pg_indexes_size('$1')) as index_size;";
}

function psqlgettablecount()
{
    [ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need table name${RESET}" ||
    $psqlcommand "select count(*) from $1;"
}

function psqlgetvacuumdetails()
{
    [ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need table name${RESET}" ||
    $psqlcommand "SELECT relname, n_live_tup, n_dead_tup, last_analyze::timestamp, analyze_count, last_autoanalyze::timestamp, autoanalyze_count, last_vacuum::timestamp, vacuum_count, last_autovacuum::timestamp, autovacuum_count FROM pg_stat_user_tables where relname='$1' and schemaname = current_schema();"
}

function psqltruncatetable()
{
    [ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need table name${RESET}" ||
    {
        read -p "$(echo -e ${B_YELLOW}"Are you sure to truncate table '$1' (y/n)? "${RESET})" choice
        case "$choice" in
            y|Y ) $psqlcommand "TRUNCATE $1;";;
            n|N ) echo -e "${B_GREEN}Table '$1' not truncated${RESET}";;
            * ) echo -e "${B_RED}Invalid option${RESET}";;
        esac
    }
}

function psqlsettings()
{
    query="select * from pg_settings"
    if [ "$1" != "" ]; then
        query="$query where category like '%$1%'"
    fi
    query="$query ;"
    $psqlcommand "$query"
    if [ -z "$1" ]; then
        echo -e "${B_YELLOW}Passing Category as first argument will filter the related settings.${RESET}"
    fi
}

function psqlselectrows()
{
    [ -z "$1" ] && echo -e "${B_RED}Argument 1 missing: Need table name${RESET}" ||
    $psqlcommand "SELECT * from $1"
}
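After sourcing the file (or opening a new shell), the functions can be called like ordinary commands, e.g. with a hypothetical table named users:
$ psqlselectrows users
$ psqlgettablecount users
$ psqlgettablesize users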

Related

How to pass unix variable in where condition of query?

I have a file whose filename I am storing in a shell variable and I wish to pass that variable in the WHERE condition of my SQL select query. How can I achieve this?
my code
cd /path/to/folder
var=$(ls tail)
id_var=$(echo "$var" | cut -f 1 -d '.')
...
...
sqlplus -s user/pwd@db < mysql.sql > output.txt
cat mysql.sql
select * from Records where "GlobalId"='$id_var'
From this answer:
cd /path/to/folder
var=$(ls tail)
id_var=$(echo "$var" | cut -f 1 -d '.')
sqlplus -s user/pwd@db @mysql.sql "${id_var}" > output.txt
Then in mysql.sql use &1 to substitute the first start argument:
select * from Records where "GlobalId"='&1'
Note: &1 is a substitution variable (and not a bind variable) so you will need to make sure that the value passed in does not perform any SQL injection attacks.
You can export the variable
export id_var
and then use the envsubst command:
envsubst < mysql.sql
This will substitute your variable in the file and write the result to standard output.
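A sketch of how that could be wired together, reusing the connection string from the question (since envsubst writes the substituted SQL to stdout, it can be piped straight into sqlplus):
export id_var
envsubst < mysql.sql | sqlplus -s user/pwd@db > output.txt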

How to Compare Two SQL Files in Shell Script

I have two SQL files, A.sql and B.sql. My requirement is to compare A.sql and B.sql and check whether each query in A.sql is present in B.sql; if a query is not in B.sql, it needs to be moved from A.sql to newfile.sql.
Below is the Example
A.sql
Select * from emp;
Select * from dept;
Select * from student;
Select * from subject;
B.sql
Select * from emp;
Select * from dept;
Expected output
Select * from student;
Select * from subject;
Output I am getting
Select * from dept;
Select * from student;
Select * from subject;
Below is my script
while read -rd ';' i_sql
do
flag=0
while read -rd ';' e_sql
do
if [ "$i_sql" != "$e_sql" ];
then
flag=0
else
flag=1
break
fi
done < B.sql
if [ !$flag ]
then
echo "$i_sql">>newfile.sql
fi
done < A.sql
Reading the SQL query up to the semicolon from A.sql and storing it in i_sql:
while read -rd ';' i_sql
Reading the SQL query up to the semicolon from B.sql and storing it in e_sql:
while read -rd ';' e_sql
Below I am comparing i_sql and e_sql. If they are equal, I go to the else part and break so that the query is not compared with the remaining statements. If they are not equal, I set flag=0. Later, outside the inner while loop, I move the query that is not present in B.sql to newfile.sql.
if [ "$i_sql" != "$e_sql" ];
then
flag=0
else
flag=1
break
fi
done < B.sql
Below I am moving the SQL query that is present in A.sql but not in B.sql to newfile.sql.
if [ !$flag ]
then
echo "$i_sql">>newfile.sql
fi
done < A.sql
Can anyone please help with the above issue and let me know what is wrong?
Note: one of my SQL queries does not occupy a single line; it spans 4-5 lines. I have used single-line queries here just as an example.
Since one SQL query spans several lines, I read each query in the while loop up to the semicolon, store it in a variable, and then use the variable for comparison.
Thanks in advance!!!
I assume that in your input files one query occupies exactly one line. You did not say this explicitly, but your example suggests it. In this case, you can interpret B.sql as a list of literal patterns and ask grep which of these patterns do not occur in A.sql:
grep -F -f B.sql -v A.sql
-F says the patterns are literal strings, -f tells grep where to look for the patterns, and -v says to report the lines where none of the patterns match.
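With the example files above, redirecting the result gives the expected newfile.sql (adding -x would additionally restrict the patterns to matching whole lines):
$ grep -F -v -f B.sql A.sql > newfile.sql
$ cat newfile.sql
Select * from student;
Select * from subject;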
Your logic seems correct, but you need to take care of details such as case differences and whitespace differences between the words (say the same query has a space before the semicolon in one file and no space in the other).
The reason why 'Select * from dept;' appears in the result is probably such a whitespace difference.
As suggested in the comments, it is better to use a command-line diff tool instead of writing the logic yourself. You can explore diff / vimdiff / git diff ...
This can be achieved through awk:
awk 'FNR==NR { map[$0]=1 } FNR!=NR && map[$0]!=1 { print $0>> "newfile.sql";close("newfile.sql") } FNR!=NR && map[$0]==1 { print }' B.sql A.sql > A.sql.tmp
Process B.sql first (FNR==NR) and create an array indexed by its entries. When we process A.sql (FNR!=NR) and there is no entry for the line in the map array, we print the line to newfile.sql. Otherwise we print it to the screen.
You can then commit the output on screen back to the A.sql file:
awk 'FNR==NR { map[$0]=1 } FNR!=NR && map[$0]!=1 { print $0>> "newfile.sql";close("newfile.sql") } FNR!=NR && map[$0]==1 { print }' B.sql A.sql > A.sql.tmp && mv -f A.sql.tmp A.sql
The problem is on this line :
if [ !$flag ]
which always yields true because !1 and !0 are non-empty strings.
What you need is :
if [ $flag = 0 ]
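For reference, the end of the outer loop then becomes (a sketch; quoting the variable, e.g. [ "$flag" -eq 0 ], is also a good habit):
if [ $flag = 0 ]
then
echo "$i_sql">>newfile.sql
fi
done < A.sql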

Query errors out if run from shell script

I can run this query fine
CREATE TABLE db.table1 STORED AS PARQUET as
SELECT * FROM db.table WHERE UPPER(executing) = 'TRUE';
However, when I run it from a bash shell script as below, I get this error:
#!/bin/bash
bash -c 'impala-shell -k -q "CREATE TABLE db.table1 STORED AS PARQUET as
SELECT * FROM db.table WHERE UPPER(executing) = 'TRUE';"'
ERROR: AnalysisException: operands of type STRING and BOOLEAN are not
comparable: upper(executing) = TRUE
I have tried using double quotes, no quotes, and lower case, with no luck.
Single quotes cannot be included in a single-quoted string in shell. The single quotes around TRUE aren't included in the SQL command passed to impala-shell; the first closes the initial ', and the second starts a new quoted string, so your script is equivalent to
bash -c "impala-shell -k -q \"CREATE TABLE db.table1 STORED AS PARQUET as
SELECT * from db.table WHERE UPPER(executing) = TRUE;\""
One solution is to use double quotes as I have above, which allow you to include the single quotes that SQL requires.
bash -c "impala-shell -k -q \"CREATE TABLE db.table1 STORED AS PARQUET as
SELECT * from db.table WHERE UPPER(executing) = 'TRUE';\""
Alternatively, use $'...' to quote the argument to -c, in which case you can include properly escaped single quotes in the string.
bash -c $'impala-shell -k -q "CREATE TABLE db.table1 STORED AS PARQUET as
SELECT * from db.table WHERE UPPER(executing) = \'TRUE\';"'
However it's not clear why you are using bash -c at all instead of just running impala-shell directly as:
impala-shell -k -q "CREATE ... WHERE UPPER(executing) = 'TRUE';"
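Another common idiom, if you prefer to keep the outer single quotes, is to close and reopen the quoting around each embedded single quote ('\''):
impala-shell -k -q 'CREATE TABLE db.table1 STORED AS PARQUET as
SELECT * FROM db.table WHERE UPPER(executing) = '\''TRUE'\'';'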

How to pass parameter into SQL file from UNIX script?

I'm looking to pass a parameter into a SQL file from my UNIX script, but unfortunately I'm having problems with it.
Please see UNIX script below:
#!/bin/ksh
############
# Functions
_usage() {
SCRIPT_NAME=XXX
-eq 1 -o "$1" = "" -o "$1" = help -o "$1" = Help -o "$1" = HELP ]; then
echo "Usage: $SCRIPT_NAME [ cCode ]"
echo " - For example : $SCRIPT_NAME GH\n"
exit 1
fi
}
_initialise() {
cCode=$1
echo $cCode
}
# Set Variables
_usage $#
_initialise $1
# Main Processing
sql $DBNAME < test.sql $cCode > $PVNUM_LOGFILE
RETCODE=$?
# Check for errors within log file
if [[ $RETCODE != 0 ]] || grep 'E_' $PVNUM_LOGFILE
then
echo "Error - 50 - running test.sql. Please see $PVNUM_LOGFILE"
exit 50
fi
Please see SQL script (test.sql):
SELECT DISTINCT v1.*
FROM data_latest v1
JOIN temp_table t
ON v1.number = t.id
WHERE v1.code = '&1'
The error I am receiving when running my UNIX script is:
INGRES TERMINAL MONITOR Copyright 2008 Ingres Corporation
E_US0022 Either the flag format or one of the flags is incorrect,
or the parameters are not in proper order.
Anyone have any idea what I'm doing wrong?
Thanks!
NOTE: While I don't work with the sql command, I do routinely pass UNIX parameters into SQL template/script files when using the isql command line tool, so fwiw ...
The first thing you'll want to do is replace the &1 string with the value in the cCode variable; one typical method is to use sed to do a global search and replace of &1 with ${cCode}, eg:
$ cCode=XYZ
$ sed "s/\&1/${cCode}/g" test.sql
SELECT DISTINCT v1.*
FROM data_latest v1
JOIN temp_table t
ON v1.number = t.id
WHERE v1.code = 'XYZ' <=== &1 replaced with XYZ
NOTE: You'll need to wrap the sed code in double quotes so that the value of the cCode variable can be referenced.
Now, to get this passed into sql there are a couple of options: capture the sed output to a new file and submit that file to sql, or (and I'm guessing this is doable with sql) pipe the sed output into sql, eg:
sed "s/\&1/${cCode}/g" test.sql | sql $DBNAME > $PVNUM_LOGFILE
You may need '\p\g' around your SQL in the text file?
I personally tend to code the SQL into the script itself, as in:
#!/bin/ksh
var=01.01.2018
db=database_name
OUTLOG=/path/log.txt
sql $db <<_END_ > $OUTLOG
set autocommit on;
\p\g
set lockmode session where readlock = nolock;
\p\g
SELECT *
FROM table
WHERE date > '${var}' ;
\p\g
_END_
exit 0
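Applied to the query from the question, a sketch of the same here-document approach (shell variables such as ${cCode} are expanded inside an unquoted here-document, so the parameter is substituted before sql ever sees the text):
#!/bin/ksh
cCode=$1
sql $DBNAME <<_END_ > $PVNUM_LOGFILE
SELECT DISTINCT v1.*
FROM data_latest v1
JOIN temp_table t
ON v1.number = t.id
WHERE v1.code = '${cCode}' ;
\p\g
_END_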

Expect Escaping with Awk

I need to process the output of a single record psql query through awk before assigning it to a value in my expect script.
The relevant code:
spawn $env(SHELL)
send "psql -U safeuser -h db test -c \"SELECT foo((SELECT id FROM table where ((table.col1 = \'$user\' AND table.col2 IS NULL) OR table.col2 = \'$user\') AND is_active LIMIT 1));\" | /bin/awk {{NR=3}} {{ print $1 }}; \r"
expect "assword for user safeuser:"
send "$safeuserpw\r"
expect -re '*'
set userpass $expect_out(0, string)
When I run the script, I get:
spawn /bin/bash
can't read "1": no such variable
"send "psql -U safeuser -h db test -c \"SELECT foo((SELECT id FROM table where ((table.col1 = \'$user\' AND table.col2..."
Is there something glaring that I'm missing here? I was under the impression that the double curly-brackets protected the awk code block.
The awk script will show all lines because you're using '=' instead of '==' in the conditional expression. Try the following:
spawn $env(SHELL)
send "psql -U safeuser -h db test -c \"SELECT foo((SELECT id FROM table where ((table.col1 = \'$user\' AND table.col2 IS NULL) OR table.col2 = \'$user\') AND is_active LIMIT 1));\" | /bin/awk \'NR==3 { print $1 }\'; \r"
expect "assword for user safeuser:"
send "$safeuserpw\r"
expect -re '*'
set userpass $expect_out(0, string)
Your send line is being evaluated by Tcl because it is in double quotes (""). If you want it passed through as intended, you should change your awk portion to escape the $:
...| /bin/awk \'NR==3 { print \$1 }\'; \r"