How to read lines from a .txt file into this bash script?

I have this bash script which connects to a PostgreSQL database and performs a query. I would like to be able to read lines from a .txt file into the query as parameters. What is the best way to do that? Your assistance is greatly appreciated! My example code is below, but it is not working.
#!/bin/sh
query="SELECT ci.NAME_VALUE NAME_VALUE FROM certificate_identity ci WHERE ci.NAME_TYPE = 'dNSName' AND reverse(lower(ci.NAME_VALUE)) LIKE reverse(lower('%.$1'));"
(echo "$1"; echo "$query" | \
psql -t -h crt.sh -p 5432 -U guest certwatch | \
sed -e 's:^ *::g' -e 's:^*\.::g' -e '/^$/d' | \
sed -e 's:*.::g';) | sort -u

Assuming the file has one SQL query per line:
while read -r line; do echo "${line}" | your-psql-command-here; done < file_with_query.sql
That means: read file_with_query.sql line by line, and run the command on each line.
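Applied to the question's setup, the loop can be sketched like this. The function name and the stdin-based usage are illustrative, not from the question; the query and psql connection details are taken from the question verbatim.

```shell
#!/bin/sh
# Read one domain per line from stdin and emit the question's query for
# each. Pipe the output into psql exactly as in the question.
queries_from() {
    while IFS= read -r domain; do
        [ -n "$domain" ] || continue   # skip blank lines
        # %% is a literal % in printf's format string
        printf "SELECT ci.NAME_VALUE FROM certificate_identity ci WHERE ci.NAME_TYPE = 'dNSName' AND reverse(lower(ci.NAME_VALUE)) LIKE reverse(lower('%%.%s'));\n" "$domain"
    done
}

# usage (domains.txt is a hypothetical input file, one domain per line):
#   queries_from < domains.txt | psql -t -h crt.sh -p 5432 -U guest certwatch | sort -u
```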

Two of the same commands give different result (output as utf8)

I have a .bat file that contains two commands:
SQLCMD -S . -d "databaseName" -E -i "path_to_query1.sql" -y0 -s "|" -f o:65001 > outputPath1.json
SQLCMD -S . -d "databaseName" -E -i "path_to_query2.sql" -y0 -s "|" -f o:65001 > outputPath2.json
The argument -f o:65001 is supposed to make the output UTF-8, but only the second command's output file is actually UTF-8.
Why is this? Why does the argument "-f o:65001" seem to work only for the second command?
I checked by swapping the order of the two commands, and again only the second one produces UTF-8 output.
Thanks for any tips on this.
EDIT
The solution for my specific problem was to put "chcp 65001" before the SQLCMD lines. You then also don't need the -f o:65001 argument.
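With that edit applied, the .bat would look like this (paths and database name are from the question; chcp 65001 switches the console code page to UTF-8 before either SQLCMD runs, so both redirected files come out UTF-8):

```bat
chcp 65001
SQLCMD -S . -d "databaseName" -E -i "path_to_query1.sql" -y0 -s "|" > outputPath1.json
SQLCMD -S . -d "databaseName" -E -i "path_to_query2.sql" -y0 -s "|" > outputPath2.json
```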

Running pssh as a cron job

I have the script below.
OUTPUT_FOLDER=/home/user/output
LOGFILE=/root/log/test.log
HOST_FILE=/home/user/host_file.txt
mkdir -p $OUTPUT_FOLDER
rm -f $OUTPUT_FOLDER/*
pssh -h $HOST_FILE -o $OUTPUT_FOLDER "cat $LOGFILE | tail -n 100 | grep foo"
When I run this script on its own, it works fine and $OUTPUT_FOLDER contains the output from the servers in $HOST_FILE. However, when I run the script as a cron job, $OUTPUT_FOLDER is created, but it's always empty. It's as if the pssh command was never executed.
Why is this? How do I resolve this?
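The question doesn't show the crontab entry, but a common cause of exactly this symptom is cron's minimal environment: PATH is typically just /usr/bin:/bin, so a pssh installed in /usr/local/bin is silently not found. A defensive sketch (the paths here are typical defaults, not taken from the question):

```shell
#!/bin/sh
# Cron jobs inherit a minimal environment, so set PATH explicitly to
# match what your interactive shell sees (adjust to your system).
PATH=/usr/local/bin:/usr/bin:/bin
export PATH

# Report whether pssh is resolvable, so the cron log explains an
# empty output folder instead of failing silently.
check_pssh() {
    if command -v pssh >/dev/null 2>&1; then
        echo "pssh found at $(command -v pssh)"
    else
        echo "pssh not found in PATH=$PATH"
    fi
}
check_pssh
```

Alternatively, call pssh by its absolute path in the script, and redirect the cron job's output to a log file so failures like this become visible.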

Slick SourceCodeGenerator From SQL File

Is there a way to use the Slick SourceCodeGenerator to generate source code from a file of SQL CREATE statements? I know there is a way to connect to a DB and read in the schema, but I want to cut out that step and just give it the file. Please advise.
Slick reads metadata via JDBC. If you can find a JDBC driver that does that from a SQL file, you may be in luck. Otherwise, why not use an H2 in-memory database? It has compatibility modes for various SQL dialects, though they are limited. Another option would be to first run something like https://github.com/bgranvea/mysql2h2-converter to produce an H2-compatible schema file.
We used the following script to load a SQL schema from a MySQL database, convert it to H2-compatible format, and then use it in-memory for tests. You should be able to adapt it.
#!/bin/sh
echo ''
export IP=192.168.1.123
export user=foobar
export password=secret
export database=foobar
ping -c 1 $IP &&\
echo "" &&\
echo "Server is reachable"
# dump mysql schema for debuggability (ignore in git)
# convert the mysql to h2db using the converter.
## disable the foreign-key check at the beginning and enable it at the end, to prevent foreign-key errors during import
echo "SET FOREIGN_KEY_CHECKS=0;" > foobar-mysql.sql
## Dump the DB structure and remove the AUTO_INCREMENT values so the id columns start back at 1
mysqldump --compact -u $user -h $IP -d $database -p$password\
|sed 's/CONSTRAINT `_*/CONSTRAINT `/g' \
|sed 's/KEY `_*/KEY `/g' \
|sed 's/ AUTO_INCREMENT=[0-9]*//' \
>> foobar-mysql.sql
echo "SET FOREIGN_KEY_CHECKS=1;" >> foobar-mysql.sql &&\
java -jar mysql2h2-converter.jar foobar-mysql.sql \
|perl -0777 -pe 's/([^`]),/\1,\n /g' \
|perl -0777 -pe 's/\)\);/)\n);/g' \
|perl -0777 -pe 's/(CREATE TABLE [^\(]*\()/\1\n /g' \
|sed 's/UNSIGNED/unsigned/g' \
|sed 's/float/real/' \
|sed "s/\(int([0-9]*).*\) DEFAULT '\(.*\)'/\1 DEFAULT \2/" \
|sed "s/tinyint(1)/boolean/" \
> foobar-h2.sql
perl -ne 'print "$ARGV\n" if /.\z/' -- foobar-h2.sql

Write beeline query results to a text file

I need to write the results of executing a Hive query to a file. How do I do it? Currently the results are printed to the console.
beeline -u db_url -n user_name -p password -f query.sql
I tried:
beeline -u db_url -n user_name -p password -f query.sql 2> output.txt
but output.txt only contains the connection open/close messages; the query results are still printed to the console.
I assume beeline -u db_url -n user_name -p password -f query.sql > output.txt must be OK. Without the 2.
The "2" in your command redirects stderr (the error log), not stdout,
so "...query.sql 2> output.txt" puts the error log into your text file, while "...query.sql > output.txt" puts the actual query output there.
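The distinction is easy to see with a toy command that writes one line to each stream (the function and its messages are illustrative, not beeline's real output):

```shell
#!/bin/sh
# A command that writes to both streams:
demo() {
    echo "query results"        # stdout, file descriptor 1
    echo "connection log" >&2   # stderr, file descriptor 2
}

demo 2> /dev/null   # prints: query results   (stderr discarded)
demo 1> /dev/null   # prints: connection log  (stdout discarded)
```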
In addition to dds's answer, you can add the silent option to get rid of the extra output, like the connection open/close status, in the output file.
beeline -u db_url -n user_name -p password --silent=true -f query.sql > output.txt
I think the "2" in "query.sql 2>" was unintended; plain ">" redirects stdout. Here's the fixed command line:
beeline -u db_url -n user_name -p password -f query.sql > output.txt

How to pass a bash variable into an sql file

I have a bash script that runs a SELECT in Postgres. I would like to be able to pass a variable from the command line into the SQL file.
sh myscript.sh 1234
#!/bin/bash
dbase_connect="psql -h server -U username dbase"
file="/tmp/$fname.csv"
sql="/home/user/sql_files/query.sql"
sudo bash -c "$dbase_connect -c -q -A -F , -f $sql -o $file"
The query can be as simple as:
select name from table where id = $1;
But I don't know how to get the $1 into the SQL file. The actual query is much larger, and I prefer to keep it out of the bash script itself; it is easier to maintain as a separate .sql file.
You can use sed to insert the parameter:
#!/bin/bash
dbase_connect="psql -h server -U username dbase"
file="/tmp/$fname.csv"
sql="/home/user/sql_files/query.sql"
tmp="/home/user/sql_files/query.sql.tmp"
s="s/\$1/$1/g"
sed "$s" "$sql" > "$tmp"
sudo bash -c "$dbase_connect -q -A -F , -f $tmp -o $file"
rm -f $tmp
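A self-contained version of the sed substitution (the file paths, table name, and the 1234 default are illustrative, not from the question):

```shell
#!/bin/sh
# Create a query template containing the $1 placeholder, then substitute
# the script's first argument into a temp copy.
printf 'select name from table where id = $1;\n' > /tmp/query.sql
id="${1:-1234}"                                 # demo default when no argument given
sed "s/\$1/$id/g" /tmp/query.sql > /tmp/query.sql.tmp
cat /tmp/query.sql.tmp   # → select name from table where id = 1234;
```

Note that psql can also substitute variables itself: pass -v id=1234 on the command line and write where id = :id (or :'id' for a quoted literal) in the .sql file, which avoids the temporary file entirely.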