Script to copy one column's data to another column - SQL

I am writing a script to copy one column's data to another column.
I tried the following logic, but it didn't work out.
Output: number of parameters is 0.
My logic:
• I get the keys from the admin table and copy the data to an updateupdateStatement file.
• Using an awk command, I copy the specific column's data to a temp file.
• Then I prepare an UPDATE statement and execute it.
#!/bin/ksh
#
# Script to Populate cross_refs based on what is in cross_references
#
#
echo "number of parameters is $#"
if [ $# != 1 ]; then
    USAGE="USAGE: $0 cassPassword"
    echo ${USAGE}
    exit 1
fi
cassPassword=$1
#Add column to admin table
#echo "alter table to add column..."
#echo "ALTER TABLE admin.product ADD cross_refs Map<String,String>;" > updateTable.cql
#cqlsh -u dgadmin -p ${cassPassword} -f updateTable.cql
echo "get keys from cassandra"
echo "copy admin.product (cross_references) to 'updateupdateProductStatement.cql';" > copyInputs.cql
cqlsh -u dgadmin -p ${cassPassword} -f copyInputs.cql
#Convert file that Cassandra created from DOS to Unix
echo "DOS to Unix conversion..."
tr -d '\015' <updateupdateProductStatement.cql >updateupdateProductStatement2.cql
cat updateupdateProductStatement2.cql >tempFile
sed -i "s/^/update admin.product set cross_refs = '/" tempFile
#execute the updated .cql file to run all the update statements
echo "executing updateupdateProductStatement.cql..."
cqlsh -u dgadmin -p ${cassPassword} -f tempFile

I'm not absolutely certain I understand the intent of your script, but I can pick out one line that looks suspect...
cat updateFlatFileInputStatements2.cql |awk -F'\t' '{ 19 1 2}' >tempFile
I think you want to print columns 19, 1 and 2 to your output...
awk -F'\t' 'BEGIN { OFS=" " }{ print $19, $1, $2 }' updateFlatFileInputStatements2.cql > tempFile
Better is to do all the manipulation of tempFile in awk:
awk -F'\t' '{ printf "update admin.product set my_refs = \047%s\047 where id = \047%s\047 and effective_date = \047%s\047;\n", $19, $1, $2 }' updateFlatFileInputStatements2.cql > tempFile
Then again, I don't see in your file where tempFile is used... or where updateFlatFileInputStatements2.cql is generated. It looks like this piece of code is doing nothing.
updateupdateStatement.cql ... I don't know where that comes from. It is stripped to form updateupdateStatement2.cql, which is then manipulated to become tempFile, but you don't use tempFile -- instead you send updateupdateStatement2.cql to cqlsh. The bug may be that you intended to send tempFile to your final cqlsh instead.
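Putting those two points together, here is a sketch of how the tail of such a script could look once all the manipulation happens in awk and tempFile is what actually reaches cqlsh (the column positions $1/$2 and the exact UPDATE shape are assumptions, since the export format isn't shown):
tr -d '\015' < updateupdateProductStatement.cql |
  awk -F'\t' '{ printf "update admin.product set cross_refs = \047%s\047 where id = \047%s\047;\n", $2, $1 }' > tempFile
cqlsh -u dgadmin -p ${cassPassword} -f tempFile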

Related

BASH script to create SQL statement ignores last column

I am trying to create a bash script that will generate an SQL CREATE TABLE statement from a CSV file.
#!/bin/bash
# Check if the user provided a CSV file
if [ $# -eq 0 ]
then
    echo "No CSV file provided."
    exit 1
fi
# Check if the CSV file exists
if [ ! -f $1 ]
then
    echo "CSV file does not exist."
    exit 1
fi
# Get the table name from the CSV file name
table_name=$(basename $1 .csv)
# Extract the header row from the CSV file
header=$(head -n 1 $1)
# Split the header row into column names
IFS=',' read -r -a columns <<< "$header"
# Generate the PostgreSQL `CREATE TABLE` statement
echo "CREATE TABLE $table_name ("
for column in "${columns[@]}"
do
    echo " $column TEXT,"
done
echo ");"
If I have a CSV file with three columns (aa,bb,cc), the generated statement does not have the last column for some reason.
Any idea what could be wrong?
If I do:
for a in "${array[@]}"
do
    echo "$a"
done
I am getting:
aaa
bbb
ccc
But when I add something into the string:
for a in "${array[@]}"
do
    echo "$a SOMETHING"
done
I get:
aaa SOMETHING
bbb SOMETHING
SOMETHING
Thanks.
Your CSV file has a '\r' (carriage return) at the end of each line.
Try the next block to reproduce the problem:
printf -v header "%s,%s,%s\r\n" "aaa" "bbb" "ccc"
IFS=',' read -r -a columns <<< "$header"
echo "Show array"
for a in "${columns[@]}"; do echo "$a"; done
echo "Now with something extra"
for a in "${columns[@]}"; do echo "$a SOMETHING"; done
You should remove the '\r', which can be done with:
IFS=',' read -r -a columns < <(tr -d '\r' <<< "${header}")
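The same fix drops straight into the script from the question, right where the header row is read (a sketch using the script's own variable names):
# strip carriage returns before splitting the header row
header=$(head -n 1 "$1" | tr -d '\r')
IFS=',' read -r -a columns <<< "$header"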

Optimize Informix update

I have a bash script that updates a table based on a file. As written, it opens and closes a database connection for every line in the file; I would like to understand how to open once, perform all the updates, and then close. That is fine for a few updates, but if it ever requires more than a few hundred it could be really taxing on the system.
#!/bin/bash
file=/export/home/dncs/tmp/file.csv
dateFormat=$(date +"%m-%d-%y-%T")
LOGFILE=/export/home/dncs/tmp/Log_${dateFormat}.log
echo "${dateFormat} : Starting work" >> $LOGFILE 2>&1
while IFS="," read mac loc; do
    if [[ "$mac" =~ ^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$ ]]; then
        dbaccess thedb <<EndOfUpdate >> $LOGFILE 2>&1
UPDATE profile
SET local_code= '$loc'
WHERE mac_address = '$mac';
EndOfUpdate
    else
        echo "Error: $mac not valid format" >> $LOGFILE 2>&1
    fi
    IIH -i $mac >> $LOGFILE 2>&1
done <"$file"
Source file:
12:BF:20:04:BB:30,POR-4
12:BF:21:1C:02:B1,POR-10
12:BF:20:04:72:FD,POR-4
12:BF:20:01:5B:4F,POR-10
12:BF:20:C2:71:42,POR-7
This is more or less what I'd do:
#!/bin/bash
fmt_date() { date +"%Y-%m-%d.%T"; }
file=/export/home/dncs/tmp/file.csv
dateFormat=$(fmt_date)
LOGFILE="/export/home/dncs/tmp/Log_${dateFormat}.log"
exec >> $LOGFILE 2>&1
echo "${dateFormat} : Starting work"
valid_mac='/^\(\([0-9a-fA-F]\{2\}:\)\{5\}[0-9a-fA-F]\{2\}\),\([^,]*\)$/'
update_stmt="UPDATE profile SET local_code = '\3' WHERE mac_address = '\1';"
sed -n -e "$valid_mac s//$update_stmt/p" "$file" |
dbaccess thedb -
sed -n -e "$valid_mac d; s/.*/Error: invalid format: &/p" "$file"
sed -n -e "$valid_mac s//IIH -i \1/p" "$file" | sh
echo "$(fmt_date) : Finished work"
I changed the date format to a variant of ISO 8601; it is easier to parse. You can stick with your Y2K-non-compliant US-ish format if you prefer. The exec line arranges for standard output and standard error from here onwards to go to the log file. The sed commands all use the same structure, and all use the same pattern match stored in a variable; this makes consistency easier. The first sed script converts the data into UPDATE statements (which are fed to dbaccess). The second identifies invalid MAC addresses; it deletes valid lines and maps the invalid ones into error messages. The third ignores invalid MAC addresses but generates an IIH command for each valid one. The script also records an end time, which will allow you to assess how long the processing takes. Again, repetition is avoided by creating and using the fmt_date function.
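The empty regex in the s//.../ commands is the idiom that lets each script reuse the address pattern without spelling it twice. A tiny standalone illustration (with a made-up pattern, not the MAC one):
echo 'abc,xyz' | sed -n '/^\(abc\),\(.*\)$/ s//\2 came from \1/p'
# prints: xyz came from abc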
Be cautious about testing this. I had a file named data containing:
87:36:E6:5E:AC:41,loc-OYNK
B2:4D:65:70:32:26,loc-DQLO
ZD:D9:BA:34:FD:97,loc-PLBI
04:EB:71:0D:29:D0,loc-LMEE
DA:67:53:4B:EC:C4,loc-SFUU
I replaced the dbaccess with cat, and the sh with cat, and relocated the log file to the current directory, leading to:
#!/bin/bash
fmt_date() { date +"%Y-%m-%d.%T"; }
#file=/export/home/dncs/tmp/file.csv
file=data
dateFormat=$(fmt_date)
#LOGFILE="/export/home/dncs/tmp/Log_${dateFormat}.log"
LOGFILE="Log-${dateFormat}.log"
exec >> $LOGFILE 2>&1
echo "${dateFormat} : Starting work"
valid_mac='/^\(\([0-9a-fA-F]\{2\}:\)\{5\}[0-9a-fA-F]\{2\}\),\([^,]*\)$/'
update_stmt="UPDATE profile SET local_code = '\3' WHERE mac_address = '\1';"
sed -n -e "$valid_mac s//$update_stmt/p" "$file" |
cat
#dbaccess thedb -
sed -n -e "$valid_mac d; s/.*/Error: invalid format: &/p" "$file"
#sed -n -e "$valid_mac s//IIH -i \1/p" "$file" | sh
sed -n -e "$valid_mac s//IIH -i \1/p" "$file" | cat
echo "$(fmt_date) : Finished work"
After I ran it, the log file contained:
2017-04-27.14:58:20 : Starting work
UPDATE profile SET local_code = 'loc-OYNK' WHERE mac_address = '87:36:E6:5E:AC:41';
UPDATE profile SET local_code = 'loc-DQLO' WHERE mac_address = 'B2:4D:65:70:32:26';
UPDATE profile SET local_code = 'loc-LMEE' WHERE mac_address = '04:EB:71:0D:29:D0';
UPDATE profile SET local_code = 'loc-SFUU' WHERE mac_address = 'DA:67:53:4B:EC:C4';
Error: invalid format: ZD:D9:BA:34:FD:97,loc-PLBI
IIH -i 87:36:E6:5E:AC:41
IIH -i B2:4D:65:70:32:26
IIH -i 04:EB:71:0D:29:D0
IIH -i DA:67:53:4B:EC:C4
2017-04-27.14:58:20 : Finished work
The UPDATE statements would have gone to DB-Access. The bogus MAC address was identified. The correct IIH commands would have been run.
Note that piping the output into sh requires confidence that the data you generate (the IIH commands) will be clean.
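If piping into sh feels too risky, one alternative (a sketch, assuming IIH accepts one MAC address per invocation, as in the original loop) is to extract only the validated MAC addresses and drive IIH through xargs:
sed -n -e "$valid_mac s//\1/p" "$file" | xargs -n 1 IIH -i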

awk print overwrites strings

I have a problem using awk in the terminal.
I need to move many files as a group from the current directory to another one, and I have the list of the necessary files in a text file, like:
filename.txt
file1
file2
file3
...
I usually type:
paste filename.txt | awk '{print "mv "$1" ../dir/"}' | sh
and it executes:
mv file1 ../dir/
mv file2 ../dir/
mv file3 ../dir/
It usually works, but now the command has changed its behaviour: awk overwrites the last string, ../dir/, on top of the beginning of each line, restarting the print from the initial position and giving:
../dire1 ../dir/
../dire2 ../dir/
../dire3 ../dir/
and of course it cannot be executed.
What's happened?
How do I solve it?
Your input file contains carriage returns (\r aka control-M). Run dos2unix on it before running a UNIX tool on it.
I don't know what you're using paste for, though, and you shouldn't be using awk for this at all anyway; it's a job for a simple shell command. Remove the echo once you've tested this:
$ < file xargs -n 1 -I {} echo mv "{}" "../dir"
mv file1 ../dir
mv file2 ../dir
mv file3 ../dir
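If dos2unix isn't installed, tr can strip the carriage returns in the same pipeline (a sketch):
$ tr -d '\r' < file | xargs -n 1 -I {} echo mv "{}" "../dir"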

How to replace a string in a file in KSH

My ksh script should replace a string in a txt file in the same directory.
sed -i 's/"$original"/"$reversed"/' inputtext.txt
is what I'm currently using, but it doesn't work. There is no error message or anything like that; it just doesn't work.
Here is my whole code:
#!/bin/ksh
original=$1
reversed=""
counter=0
echo $original | awk -v ORS="" '{ gsub(/./,"&\n") ; print }' | \
while read char
do
    letters[$counter]+="$char"
    ((counter=counter+1))
done
length=${#original}
((length=length-1))
echo $original | awk -v ORS="" '{ gsub(/./,"&\n") ; print }' | \
while read char
do
    reversed+=${letters[$length]}
    ((length=length-1))
done
echo $reversed
sed -i 's/"$original"/"$reversed"/' inputtext.txt
exit 0
I want every word in the file inputtext.txt (same directory as the script) that equals $original to be changed to $reversed.
What am I doing wrong?
Single quotes prevent variable expansion. Try this instead:
sed -i "s/$original/$reversed/" inputtext.txt
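As a side note, the whole character-by-character reversal loop can also be replaced if rev(1) is available on your system (an assumption; it is common but not guaranteed everywhere ksh runs):
# reverse the string with rev, then substitute as above
reversed=$(printf '%s\n' "$original" | rev)
sed -i "s/$original/$reversed/" inputtext.txt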

How does awk distinguish between arguments and input files?

This is my shell script; it reads a query string from the user on stdin.
#!/bin/sh
printf "Enter your query\n"
read query
cmd=`echo $query | cut -f 1 -d " "`
input_file=`echo $query | cut -f 2 -d " "`
printf $input_file
if [ $input_file = "" ]
then
    printf "No input file specified\n"
    exit
fi
if [ $cmd = "summary" ]
then
    awk -f summary.awk $input_file $query
fi
Let's say the user enters
summary a.txt /o foo.txt
Now the cmd variable will take the value summary and input_file will take a.txt.
Isn't that right?
I want summary.awk to work on $input_file, based on what is present in $query.
My understanding is as follows:
The 1st command-line argument passed is treated as the input file.
e.g.: awk 'code' file arg1 arg2 arg3
Only file is treated as the input file.
If the input is piped, awk doesn't see any of the arguments as input files.
e.g.: cat file | awk 'code' arg1 arg2 arg3
arg1 is NOT treated as an input file.
Am I right?
The problem is that I get:
awk: cannot open summary (No such file or directory)
Why is it trying to open summary?
It is the next word after $input_file on the expanded command line.
How do I fix this issue?
If there's no -f option, the first non-option argument is treated as the script, and the remaining arguments are input files. If there's a -f option, the script comes from that file, and the rest are input files.
If and only if there are no input file arguments, it reads the input from stdin. This means if you pipe to awk and also give it filename arguments, it will ignore the pipe.
This is the same as most Unix file-processing commands.
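You can see both rules with a quick experiment (assuming a file named data exists in the current directory):
awk '{ print FILENAME, $0 }' data        # reads data
echo ignored | awk '{ print $0 }' data   # still reads data; the pipe is ignored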
Try this:
awk -f summary.awk -v query="$query" "$input_file"
This will set the awk variable query to the contents of the shell variable $query.
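Inside summary.awk, the value is then an ordinary awk variable, so the script can split it back into words itself (a hypothetical fragment, since summary.awk isn't shown):
BEGIN { n = split(query, words, " ") }   # words[1] == "summary", words[2] == "a.txt", ...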