Slick SourceCodeGenerator From SQL File

Is there a way to use the Slick SourceCodeGenerator to generate source code from a file of SQL CREATE statements? I know there is a way to connect to a DB and read in the schema, but I want to cut out that step and just give it the file. Please advise.

Slick reads metadata via JDBC. If you find a JDBC driver that can do that from a SQL file, you may be in luck. Otherwise, why not use an H2 in-memory database? It has compatibility modes for various SQL dialects, though they are limited. Another option would be to use something like https://github.com/bgranvea/mysql2h2-converter first to produce an H2-compatible schema file.
We used the following script to load an SQL schema from a MySQL database, convert it to an H2-compatible format, and then use it in-memory for tests. You should be able to adapt it.
#!/bin/sh
echo ''
export IP=192.168.1.123
export user=foobar
export password=secret
export database=foobar
ping -c 1 $IP &&\
echo "" &&\
echo "Server is reachable"
# dump mysql schema for debuggability (ignore in git)
# convert the mysql to h2db using the converter.
## Disable foreign key checks at the beginning and re-enable them at the end; prevents foreign key errors
echo "SET FOREIGN_KEY_CHECKS=0;" > foobar-mysql.sql
## Dump the DB structure and strip the AUTO_INCREMENT values so the id columns start back at 1
mysqldump --compact -u $user -h $IP -d $database -p$password\
|sed 's/CONSTRAINT `_*/CONSTRAINT `/g' \
|sed 's/KEY `_*/KEY `/g' \
|sed 's/ AUTO_INCREMENT=[0-9]*//' \
>> foobar-mysql.sql
echo "SET FOREIGN_KEY_CHECKS=1;" >> foobar-mysql.sql &&\
java -jar mysql2h2-converter.jar foobar-mysql.sql \
|perl -0777 -pe 's/([^`]),/\1,\n /g' \
|perl -0777 -pe 's/\)\);/)\n);/g' \
|perl -0777 -pe 's/(CREATE TABLE [^\(]*\()/\1\n /g' \
|sed 's/UNSIGNED/unsigned/g' \
|sed 's/float/real/' \
|sed "s/\(int([0-9]*).*\) DEFAULT '\(.*\)'/\1 DEFAULT \2/" \
|sed "s/tinyint(1)/boolean/" \
> foobar-h2.sql
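# Sanity check: print the file name if the generated file does not end with a newline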
perl -ne 'print "$ARGV\n" if /.\z/' -- foobar-h2.sql
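To close the loop on the original question: once you have an H2-compatible schema file, you can load it into an H2 database and point the Slick code generator at that via JDBC. A minimal sketch, assuming Slick 3.x naming and that the H2 and Slick codegen jars are on the classpath (jar names, the "..." for remaining dependencies, and the com.example.db package are placeholders):
# Load the converted schema into a file-based H2 database in MySQL compatibility mode
java -cp h2.jar org.h2.tools.RunScript \
    -url "jdbc:h2:./foobar;MODE=MySQL" -user sa \
    -script foobar-h2.sql
# Run Slick's standalone code generator against it; per the Slick docs the
# arguments are: profile, JDBC driver, URL, output folder, package
java -cp "slick-codegen.jar:h2.jar:..." \
    slick.codegen.SourceCodeGenerator \
    slick.jdbc.H2Profile org.h2.Driver \
    "jdbc:h2:./foobar;MODE=MySQL" ./src/main/scala com.example.db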

Related

GNU Parallel -q option causing BCP "unknown option" errors (different string quotes on local vs remote hosts)

Seeing very strange behavior when using GNU parallel to distribute export jobs using bcp from mssql-tools. It appears that when using the -q option for parallel, strings are interpreted differently on the local host than on remote hosts.
Running as a plain loop over the files on the local host, the bcp processes throw no errors.
However, when distributing the file exports with parallel, the bcp processes executing on the local host throw
/opt/mssql-tools/bin/bcp: unknown option
errors, while those executing on remote hosts (via a --sshloginfile param) finish successfully. The basic code being run looks like...
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
TO_SERVER_IP="-S 172.18.54.22"
DB="$dest_db" #TODO: enforce being more careful with this value
TABLE="$tablename" # MUST exist beforehand, case matters
USER=$(tail -n+1 $source_home/mssql-creds.txt | head -1)
PASSWORD=$(tail -n+2 $source_home/mssql-creds.txt | head -1)
DATAFILES="/some/path/to/files/"
TARGET_GLOB="*.tsv"
RECOMMEDED_IMPORT_MODE='-c' # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER="\\\t" # (currently not used) DO NOT use format like "'\t'", nested quotes seem to cause hard-to-catch error, want "\t" literal
....
bcpexport() {
filename=$1
TO_SERVER_ODBCDSN=$2
DB=$3
TABLE=$4 # MUST exist beforehand, case matters
USER=$5
PASSWORD=$6
RECOMMEDED_IMPORT_MODE=$7 # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER=$8 # not currently used
WORKDIR=$9
LOGDIR=${10}
....
/opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" \
$TO_SERVER_ODBCDSN \
-U $USER -P $PASSWORD \
-d $DB \
$RECOMMEDED_IMPORT_MODE
-t "\t" \
-e ${localfile}.bcperror.log
}
export -f bcpexport
parallelization_pernode=5
parallel -q -j $parallelization_pernode \
--sshloginfile $source_home/parallel-nodes.txt \
--env bcpexport \
bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER $workingdir $logdir \
::: $DATAFILES/$TARGET_GLOB #from hdfs nfs gateway
Looking at the bash interpretation of the processes (by running ps -aux | grep bcp on the hosts that parallel is given in the --sshloginfile), for the remote hosts we see...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
for the local host, the bash interpretation is...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
that is, they look the same.
My current thought is that the "\t" in the bcp command is being interpreted in a problematic way. Debugging parallel without vs. with the -q option, we see...
$ parallel -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 4: Running on HW04.ucera.local: t
Number 1: Running on HW04.ucera.local: t
Number 2: Running on HW03.ucera.local: t
Number 5: Running on HW03.ucera.local: t
Number 3: Running on HW02.ucera.local: t
$ parallel -q -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 1: Running on `hostname`:
Number 4: Running on `hostname`:
Number 3: Running on `hostname`: \t
Number 2: Running on `hostname`: \t
Number 5: Running on `hostname`: \t
The bcp command needs the literal "\t", not "t" (and I suspect several other similar string corruptions; I do believe \t is the default for bcp anyway, but this is just an example and I want to keep \t for code clarity). I am not sure how to get this for both local and remote nodes, or even why this behavior differs between remote and local.
Basically, I need the strings to be exactly the same for both local and remote hosts, even if the strings have spaces or escape characters in them (note: I think this used not to be the case; I have older scripts on other machines that don't have this problem).
Not sure if this counts more as a parallel problem or a bcp problem (currently thinking something is going wrong with the -q option in parallel, but not sure). Anyone have any debugging suggestions or fixes? Ideas of what could be happening?
Firstly, the reason hostname is not expanded is due to -q: it quotes the backtick so that it does not expand.
Secondly, I think what you see is the different behaviour of the built-in echo versus /bin/echo. The built-in echo depends on the shell. Here I compare echo \\\\t in different shells:
$ parallel --onall --tag -S sh#lo,bash#lo,csh#lo,tcsh#lo,ksh#lo,zsh#lo echo \\\\t ::: a
bash#lo \t a
tcsh#lo a
sh#lo a
ksh#lo \t a
zsh#lo a
csh#lo \t a
That does not, however, get you closer to a solution. If I were you, I would use env_parallel to copy the environment variables. And if the login shell on the remote systems is not the same as your shell, set PARALLEL_SHELL to force using that shell.
So:
#!/bin/bash
env_parallel --session
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
:
:
PARALLEL_SHELL=bash env_parallel -q -j $parallelization_pernode ...
(there is no need to use either --env or 'export -f' when using 'env_parallel --session')
# Cleanup (not needed if this is the last line in the script)
env_parallel --end-session
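As a quick sanity check, you could re-run the question's echo test with the shell forced (a hypothetical check, not from the original answer; PARALLEL_SHELL is a documented GNU parallel variable):
PARALLEL_SHELL=bash parallel -q -j 5 --sshloginfile ./parallel-nodes.txt \
    echo "Number {}: \t" ::: 1 2 3 4 5
With every job running under bash on both local and remote hosts, each line should print the literal \t rather than a bare t.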

How to read lines from .txt file into this bash script?

I have this bash script which connects to a PostgreSQL DB and performs a query. I would like to be able to read lines from a .txt file into the query as parameters. What is the best way to do that? Your assistance is greatly appreciated! I have my example code below; however, it is not working.
#!/bin/sh
query="SELECT ci.NAME_VALUE NAME_VALUE FROM certificate_identity ci WHERE ci.NAME_TYPE = 'dNSName' AND reverse(lower(ci.NAME_VALUE)) LIKE reverse(lower('%.$1'));"
(echo $1; echo $query | \
psql -t -h crt.sh -p 5432 -U guest certwatch | \
sed -e 's:^ *::g' -e 's:^*\.::g' -e '/^$/d' | \
sed -e 's:*.::g';) | sort -u
Assuming the file has only one SQL query per line:
while read -r line; do echo "${line}" | "your code to run psql here"; done < file_with_query.sql
That means: while reading the content of file_with_query.sql line by line, do something with each line.
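For instance, a minimal runnable sketch against the asker's crt.sh database, using psql's -c option to run each line as its own query:
#!/bin/sh
# Run every line of file_with_query.sql as a separate query
while read -r line; do
    psql -t -h crt.sh -p 5432 -U guest certwatch -c "${line}"
done < file_with_query.sql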

How to execute postgres' sql queries from batch file?

I need to execute SQL from a batch file.
I am executing the following to connect to Postgres and select data from a table:
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
select * from test;
I am able to connect to the database; however, I'm getting the error
'select' is not recognized as an internal or external command,
operable program or batch file.
Has anyone faced such an issue?
This is one of the queries I am trying; something similar works in a shell script (please ignore any syntax errors in the query):
copy testdata (col1,col2,col3) from '%filepath%/%csv_file%' with csv;
You could pipe it into psql:
(
echo select * from test;
) | C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
When closing parentheses are part of the SQL query, they have to be escaped with three carets:
(
echo insert into testconfig(testid,scenarioid,testname ^^^) values( 1,1,'asdf'^^^);
) | psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME%
Use the -f parameter to pass the SQL file name:
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME% -f 'sql_batch_file.sql'
http://www.postgresql.org/docs/current/static/app-psql.html
-f filename
--file=filename
Use the file filename as the source of commands instead of reading commands interactively. After the file is processed, psql terminates. This is in many ways equivalent to the meta-command \i.
If filename is - (hyphen), then standard input is read until an EOF indication or \q meta-command. Note however that Readline is not used in this case (much as if -n had been specified).
If running on Linux, this is what worked for me (update the values below with your user, DB name, etc.):
psql "host=YOUR_HOST port=YOUR_PORT dbname=YOUR_DB_NAME user=YOUR_USER_NAME password=YOUR_PASSWORD" -f "fully_qualified_path_to_your_script.sql"
You cannot put the query on a separate line; the batch interpreter will assume it's another command instead of a query for psql. I believe you will need to quote it as well.
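For example, quoting the query and keeping it on the same line with psql's -c option (a sketch reusing the question's variables) avoids the batch interpreter seeing it as a separate command:
C:/pgsql/bin/psql -h %DB_HOST% -p 5432 -U %DB_USER% -d %DB_NAME% -c "select * from test;"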
I agree with Spidey:
1] if you are passing a file with the SQL, use the -f or --file parameter
When you want to execute several commands, the best way is to use the -f parameter and, after it, just type the path to your file without any " or ' marks (relative paths also work):
psql -h %host% -p 5432 -U %user% -d %dbname% -f ..\..\folder\Data.txt
It also works in .NET Core; I needed it to add basic data to my database after migrations.
Kindly refer to the documentation:
1] if you are passing a file with the SQL, use the -f or --file parameter
2] if you are passing an individual command, use the -c or --command parameter
If you are trying it from a shell script:
psql postgresql://$username:$password@$host/$database < /app/sql_script/script.sql

MySQL dump structure of all tables and data of some

I'm trying to dump the structure of all the tables in our database, and then only the data of the ones I specifically want, but I seem to be doing something wrong, as I'm not getting empty tables created for the ones I exclude from the data dump.
I have a text file which specifies which tables I want to dump the data for (called showtables.txt):
SHOW TABLES FROM mydb
WHERE Tables_in_mydb NOT LIKE '%_history'
AND Tables_in_mydb NOT LIKE '%_log';
I am then doing this command to dump the structure of all tables, and then the data of the tables returned by that query in the text file:
mysqldump -u root -pmypassword mydb --no-data > mydump.sql; mysql -u root -pmypassword < showtables.txt -N | xargs mysqldump mydb -u root -pmypassword > mydump.sql -v
I am getting the dump of all the tables included in the results of the showtables query, but I am not getting the structures of the rest of the tables.
If I run just the structure part as a single command, that works fine and I get the structures dumped for all tables. But combining it with the data dump seems to not work.
Can you point me to where I'm going wrong with this?
Thanks.
I think you've got the order of your command-line arguments wrong (the redirection to a file should be at the end), and you need an extra parameter for xargs so we can tell it where to put the table name in the mysqldump command.
Additionally, you need to append (>>) the dump data; otherwise you'd be overwriting the mydump.sql file for each table:
mysqldump -u root -pmypassword mydb --no-data > mydump.sql
mysql -u root -pmypassword -N < showtables.txt | xargs -I {} mysqldump -v -u root -pmypassword mydb {} >> mydump.sql
Sources: http://www.cyberciti.biz/faq/linux-unix-bsd-xargs-construct-argument-lists-utility/
Working off of Jon's answer: the -I in xargs will run a separate mysqldump command for each table. It is easier to just use the xargs default, which appends the piped-in table names as arguments to a single command; mysqldump accepts the list of tables to dump as its last arguments.
My solution also shows connecting through a bastion host. gzipping before streaming over the SSH connection is vastly faster than sending the uncompressed SQL over the wire.
FILE=~/production.sql.gz
HOST=ext-db-read-0.cdzvblmx0n9h.us-west-1.rds.amazonaws.com
USER=username
PASS="s3cret"
DB=myapp_prod
EXCLUDE="'activities', 'changelogs'"
ssh bastion.mycompany.com <<EOF > $FILE
mysqldump -h $HOST -u $USER -p$PASS $DB --no-data | gzip
mysql -h $HOST -u $USER -p$PASS -N -e "SHOW TABLES WHERE Tables_in_$DB NOT IN ($EXCLUDE)" $DB | xargs mysqldump -v -h $HOST -u $USER -p$PASS $DB | gzip
EOF
If you don't want to save the .gz, just pipe it through gzip -d:
ssh bastion.mycompany.com <<EOF | gzip -d > $FILE
etc
or directly to your local db:
ssh bastion.mycompany.com <<EOF | gzip -d | mysql -uroot myapp_development

Unix df -k output in CSV format

I'm trying to create a shell script to get the server stats of 100 servers and load the details into a table. Initially I'm creating a parameter file which has the list of all the servers; then I'm connecting to these servers through ssh and running df -k. SSH keys are already set up.
The issue I'm facing is that I'm not able to associate the server name with the result; I want the server name added as a column to the df -k output.
Also, the output cannot be loaded into a table as-is, since it is not properly delimited by a tab or space. I have tried sed and various other options, but no luck.
#!/bin/ksh
PARMFILE=/opt/sdw/scripts/db_scripts/server_stats.txt
value=$(<server_list1.txt)
echo "$value"
sourceservers=`grep =/opt/sdw/scripts/db_scripts/server_stats.txt |cut -d= -f2`
#Input array passed as parameter file to the script
set -A array_value $value
vLen=${#array_value[@]}
echo $vLen
for(( j=0; j<$vLen; j++))
do
#echo "${array_value[$j]}"
#ssh -q "${array_value[$j]}"; df -k
ssh -q "${array_value[$j]}" 'df -h' >> df.out
ssh -q "${array_value[$j]}" df -h | column -t >> df1.out
ssh -q "${array_value[$j]}" df -k | tr -s " " | sed 's/ /, /g' | sed '1 s/, / /g' | column -t >> df3.out
[[ ! $? = 0 ]] && echo Failure, errno $?, cannot connect to host "${array_value[$j]}" >> sshfailed.list
done
Output
Filesystem Size Used Avail Use%
Mounted on
/dev/mapper/vg00-lvol3 1.5G 434M 923M 32%
Desired output
Filesystem, Size, Used, Avail, Use%, Mounted on, Servername
/dev/mapper/vg00-lvol3, 1.5G, 434M, 923M, 32%, /, br724
put "${array_value[$j]}" in a variable like $server for better readability.
then in sed, do a substitution like sed "s/Mounted on/Mounted on $server/g"
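Putting both suggestions together, a minimal sketch of the loop (assuming df supports the POSIX -P flag, which keeps each filesystem on a single line so awk can emit clean CSV; file names follow the question, and mount points containing spaces would need extra handling):
#!/bin/ksh
# Emit a CSV header once, then one row per filesystem per server,
# with the server name appended as the last column
echo "Filesystem, Size, Used, Avail, Use%, Mounted on, Servername" > df3.out
while read -r server; do
    ssh -q "$server" df -kP | awk -v host="$server" -v OFS=', ' \
        'NR > 1 {print $1, $2, $3, $4, $5, $6, host}'
done < server_list1.txt >> df3.out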